Abstract |
---|
Deep neural networks comprise several hidden layers of units, which can be pre-trained one at a time via an unsupervised greedy approach. A whole network can then be trained (fine-tuned) in a supervised fashion. One possible pre-training strategy is to regard each hidden layer in the network as the input layer of an auto-encoder. Since auto-encoders aim to reconstruct their own input, their training must be based on some cost function capable of measuring reconstruction performance. Similarly, the supervised fine-tuning of a deep network needs to be based on some cost function that reflects prediction performance. In this work we compare different combinations of cost functions in terms of their impact on layer-wise reconstruction performance and on supervised classification performance of deep networks. We employed two classic functions, namely the cross-entropy (CE) cost and the sum of squared errors (SSE), as well as the exponential (EXP) cost, inspired by the error entropy concept. Our results were based on a number of artificial and real-world data sets. |
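The abstract compares three reconstruction/prediction costs: cross-entropy (CE), sum of squared errors (SSE), and the exponential (EXP) cost. A minimal NumPy sketch of plausible forms of these costs is given below; the EXP formulation `tau * exp(SSE / tau)` and its `tau` parameter follow the exponential-cost line of work by some of the same authors and are assumptions, not definitions taken from this paper's text.

```python
import numpy as np

def sse(t, y):
    # Sum of squared errors between targets t and outputs y.
    return 0.5 * np.sum((t - y) ** 2)

def cross_entropy(t, y, eps=1e-12):
    # Cross-entropy cost for outputs in (0, 1), e.g. sigmoid units;
    # eps-clipping avoids log(0).
    y = np.clip(y, eps, 1.0 - eps)
    return -np.sum(t * np.log(y) + (1.0 - t) * np.log(1.0 - y))

def exp_cost(t, y, tau=1.0):
    # Assumed exponential (EXP) cost: tau * exp(SSE_total / tau).
    # For large tau it behaves like SSE plus a constant.
    return tau * np.exp(np.sum((t - y) ** 2) / tau)
```

Any of the three can serve either as the layer-wise auto-encoder reconstruction cost or as the supervised fine-tuning cost, which is what produces the combinations the paper evaluates.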
Year | DOI | Venue |
---|---|---|
2013 | 10.1109/MICAI.2013.20 | MICAI (Special Sessions) |
Keywords | Field | DocType |
---|---|---|
supervised classification performance, deep network, hidden layer, train stacked auto-encoders, different cost functions, supervised fashion, deep neural network, cost function, prediction performance, supervised fine-tuning, reconstruction performance, layer-wise reconstruction performance, neural nets, learning (artificial intelligence), entropy, greedy algorithms | Data set, Exponential function, Square (algebra), Pattern recognition, Computer science, Auto encoders, Greedy algorithm, Artificial intelligence, Artificial neural network, Machine learning, Deep neural networks | Conference |
ISBN | Citations | PageRank |
---|---|---|
978-1-4799-2604-6 | 11 | 0.78 |
References | Authors |
---|---|
5 | 6 |
Name | Order | Citations | PageRank |
---|---|---|---|
Telmo Amaral | 1 | 40 | 5.90 |
Luís M. Silva | 2 | 88 | 9.02 |
Luís A. Alexandre | 3 | 703 | 47.66 |
Chetak Kandaswamy | 4 | 46 | 4.51 |
Jorge M. Santos | 5 | 123 | 11.75 |
Joaquim Marques de Sá | 6 | 72 | 9.04 |