Abstract |
---|
It is often asserted that deep networks learn "features", traditionally expressed by the activations of intermediate nodes. We explore an alternative concept by defining features as partial derivatives of model output with respect to model parameters, extending a simple yet powerful idea from generalized linear models. The resulting features are not equivalent to node activations, and we show that they can induce a holographic representation of the complete model: the network's output on given data can be exactly replicated by a simple linear model over such features extracted from any ordered cut. We demonstrate useful advantages for this feature representation over standard representations based on node activations. |
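The abstract's central claim, that the network output can be exactly replicated by a linear model over gradient features from any cut, can be illustrated with a small sketch (my own, not the paper's code; names are illustrative). For a bias-free ReLU network, each layer's weights enter the output with degree-1 homogeneity, so by Euler's theorem the output equals the inner product of any layer's weights with the gradient features taken at that layer:

```python
import numpy as np

# Tiny bias-free ReLU network: y = w2 . relu(W1 @ x)
rng = np.random.default_rng(0)
x = rng.normal(size=4)
W1 = rng.normal(size=(5, 4))   # first-layer weights (no bias)
w2 = rng.normal(size=5)        # output-layer weights

z = W1 @ x
h = np.maximum(z, 0.0)         # ReLU activations
y = w2 @ h                     # scalar network output

# Gradient features at the cut after the output layer: dy/dw2 = h.
feat2 = h
print(np.allclose(w2 @ feat2, y))          # True: linear model over features

# Gradient features at the cut after the first layer:
# dy/dW1[i, j] = w2[i] * 1[z_i > 0] * x[j].
feat1 = np.outer(w2 * (z > 0), x)
print(np.allclose(np.sum(W1 * feat1), y))  # True: same output from the other cut
```

Note the features at the output cut coincide with node activations, but at the inner cut they do not, matching the abstract's remark that gradient features are not equivalent to activations.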
Year | Venue | Field
---|---|---|
2017 | CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE (UAI2017) | Holography, Computer graphics (images), Computer science, Artificial intelligence, Machine learning
DocType | Citations | PageRank
---|---|---|
Conference | 0 | 0.34
References | Authors
---|---|
10 | 3
Name | Order | Citations | PageRank |
---|---|---|---|
Martin Zinkevich | 1 | 1893 | 160.99 |
Alex Davies | 2 | 0 | 0.34 |
Dale Schuurmans | 3 | 2760 | 317.49 |