Title
Supervised deep learning with auxiliary networks
Abstract
Deep learning has demonstrated its potential for learning latent feature representations. Recent years have witnessed increasing enthusiasm for regularizing deep neural networks by incorporating various kinds of side information, such as user-provided labels or pairwise constraints. However, the effectiveness and parameter sensitivity of such algorithms have been major obstacles to putting them into practice. The major contribution of our work is a novel supervised deep learning algorithm distinguished by two unique traits. First, it regularizes the network construction using similarity or dissimilarity constraints between data pairs, rather than sample-specific annotations. This kind of side information is more flexible and greatly mitigates the workload of annotators. Second, unlike prior work, the proposed algorithm decouples the supervision information from the intrinsic data structure. We design two heterogeneous networks, one encoding the supervision and the other the unsupervised data structure. Specifically, we term the supervision-oriented network the "auxiliary network", since it is principally used to facilitate the parameter learning of the other network and is removed when handling out-of-sample data. The two networks complement each other and are bridged by enforcing correlation between their parameters. We name the proposed algorithm SUpervision-Guided AutoencodeR (SUGAR). Compared with prior work on unsupervised deep networks and supervised learning, SUGAR better balances numerical tractability and the flexible utilization of supervision information. Classification results on MNIST digits and eight benchmark datasets demonstrate that SUGAR effectively improves performance through the auxiliary networks, on both shallow and deep architectures. In particular, when multiple SUGARs are stacked, performance is significantly boosted. On the selected benchmarks, our method achieves up to an 11.35% relative accuracy improvement over state-of-the-art models.
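To make the described architecture concrete, below is a minimal PyTorch sketch of the idea in the abstract, not the authors' implementation: a single-layer autoencoder captures the data structure, an auxiliary encoder is trained with a contrastive loss on (dis)similar pairs, and a penalty couples the two parameter sets. The module names, the hinge-style pairwise loss, and all hyperparameters (lam, mu, margin) are our assumptions.

```python
import torch
import torch.nn as nn

class SUGAR(nn.Module):
    """Hypothetical sketch: main autoencoder branch + auxiliary branch."""
    def __init__(self, d_in, d_hid):
        super().__init__()
        self.encoder = nn.Linear(d_in, d_hid)  # kept at test time
        self.decoder = nn.Linear(d_hid, d_in)  # kept at test time
        self.aux = nn.Linear(d_in, d_hid)      # supervision branch, dropped at test time

    def forward(self, x):
        h = torch.sigmoid(self.encoder(x))
        return self.decoder(h), h

def sugar_loss(model, x1, x2, same, lam=0.1, mu=0.01, margin=1.0):
    """Reconstruction loss + pairwise contrastive loss on the auxiliary
    branch + a correlation penalty bridging the two encoders.
    `same` is a 0/1 float tensor: 1 for similar pairs, 0 for dissimilar."""
    r1, _ = model(x1)
    r2, _ = model(x2)
    rec = ((r1 - x1) ** 2).mean() + ((r2 - x2) ** 2).mean()
    z1 = torch.sigmoid(model.aux(x1))
    z2 = torch.sigmoid(model.aux(x2))
    d = (z1 - z2).pow(2).sum(dim=1)
    # Pull similar pairs together; push dissimilar pairs beyond the margin.
    pair = (same * d + (1 - same) * torch.clamp(margin - d, min=0)).mean()
    # Bridge the networks by encouraging their encoder parameters to agree.
    tie = (model.encoder.weight - model.aux.weight).pow(2).sum()
    return rec + lam * pair + mu * tie

# Usage with random stand-in data (MNIST-sized inputs):
model = SUGAR(784, 200)
x1, x2 = torch.rand(32, 784), torch.rand(32, 784)
same = (torch.rand(32) > 0.5).float()
loss = sugar_loss(model, x1, x2, same)
loss.backward()
```

After training, only the encoder/decoder path would be kept for out-of-sample data, matching the abstract; stacking multiple SUGARs would correspond to training one such pair per layer.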
Year
2014
DOI
10.1145/2623330.2623618
Venue
KDD
Keywords
autoencoder, deep neural networks, learning, supervision
Field
Data mining, Data structure, Autoencoder, MNIST database, Computer science, Deep belief network, Supervised learning, Unsupervised learning, Artificial intelligence, Deep learning, Heterogeneous network, Machine learning
DocType
Conference
Citations
8
PageRank
0.43
References
26
Authors
4
Name            Order   Citations   PageRank
Junbo Zhang     1       896         37.75
Guangjian Tian  2       14          4.56
Yadong Mu       3       826         48.32
Wei Fan         4       4205        253.58