Abstract |
---|
Though deep neural networks (DNNs) achieve remarkable performance in many artificial intelligence tasks, the lack of training instances remains a notorious challenge. As the network goes deeper, generalization accuracy decays rapidly when massive amounts of training data are unavailable. In this paper, we propose novel deep neural network structures that can be inherited from all existing DNNs with almost the same level of complexity, and we develop simple training algorithms. We show that our paradigm successfully resolves the lack-of-data issue. Tests on the CIFAR10 and CIFAR100 image recognition datasets show that the new paradigm leads to a 20% to 30% relative error rate reduction compared to the base DNNs. The intuition behind our algorithms for deep residual networks stems from the theory of partial differential equation (PDE) control problems. Code will be made available. |
Year | Venue | Field
---|---|---
2018 | arXiv: Learning | Residual, Activation function, Data dependent, Intuition, Artificial intelligence, Deep learning, Artificial neural network, Partial differential equation, Machine learning, Approximation error, Mathematics

DocType | Volume | Citations
---|---|---
Journal | abs/1802.00168 | 1

PageRank | References | Authors
---|---|---
0.35 | 15 | 6
Name | Order | Citations | PageRank
---|---|---|---
Bao Wang | 1 | 11 | 2.55 |
Xiyang Luo | 2 | 17 | 5.09 |
Zhen Li | 3 | 33 | 12.70 |
Wei Zhu | 4 | 48 | 12.13 |
Zuoqiang Shi | 5 | 121 | 18.35 |
Stanley Osher | 6 | 7973 | 514.62 |