Title: Compressing Deep Neural Networks Using A Rank-Constrained Topology
Abstract: We present a general approach to reducing the size of feed-forward deep neural networks (DNNs). We propose a rank-constrained topology, which factors the weights in the input layer of the DNN in terms of a low-rank representation: unlike previous work, our technique is applied at the level of the filters learned at individual hidden-layer nodes, and exploits the natural two-dimensional time-frequency structure of the input. These techniques are applied to a small-footprint DNN-based keyword spotting task, where we find that we can reduce model size by 75% relative to the baseline without any loss in performance. Furthermore, we find that the proposed approach is more effective at improving model performance than other popular dimensionality reduction techniques, when evaluated with a comparable number of parameters.
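As a rough illustration of the low-rank idea the abstract describes, the NumPy sketch below constrains a single hidden node's input-layer filter, viewed as a 2-D time-frequency patch, to rank 1 via a truncated SVD. Note the paper proposes training the rank-constrained topology directly; factoring an already-given filter with an SVD, as done here, is a swapped-in stand-in for exposition. The function name and the 30-frame x 40-channel dimensions are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rank_constrained_filter(w, t_dim, f_dim, rank=1):
    """Approximate one hidden node's input-layer filter with a low-rank
    factorization over its 2-D time-frequency structure.

    w: flattened weight vector of length t_dim * f_dim.
    Returns factors (a, b) with a @ b approximating the t_dim x f_dim
    filter, plus the reconstructed filter itself.
    """
    W = w.reshape(t_dim, f_dim)                      # recover 2-D structure
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    a = U[:, :rank] * s[:rank]                       # t_dim x rank
    b = Vt[:rank, :]                                 # rank x f_dim
    return a, b, a @ b                               # low-rank reconstruction

# Hypothetical example: one node's filter over 30 frames x 40 mel channels.
rng = np.random.default_rng(0)
w = rng.standard_normal(30 * 40)
a, b, W_hat = rank_constrained_filter(w, 30, 40, rank=1)
print(a.shape, b.shape, W_hat.shape)  # (30, 1) (1, 40) (30, 40)
```

With rank 1, the per-node parameter count drops from t_dim * f_dim (here 1200) to t_dim + f_dim (here 70), which is the kind of saving behind the 75% model-size reduction reported in the abstract.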
Year: 2015
Venue: 16th Annual Conference of the International Speech Communication Association (INTERSPEECH 2015), Vols. 1-5
Keywords: deep neural networks, low-rank approximation, keyword spotting, embedded speech recognition
Field: Topology, Dimensionality reduction, Pattern recognition, Computer science, Keyword spotting, Speech recognition, Artificial intelligence, Deep neural networks, Feed forward
DocType: Conference
Citations: 9
PageRank: 0.56
References: 12
Authors: 4
Name                  Order  Citations  PageRank
Preetum Nakkiran      1      64         6.05
Raziel Álvarez        2      30         3.84
Rohit Prabhavalkar    3      163        22.56
Carolina Parada       4      242        13.11