Abstract |
---|
While Truncated Back-Propagation through Time (BPTT) is the most popular approach to training Recurrent Neural Networks (RNNs), it suffers from being inherently sequential (making parallelization difficult) and from truncating gradient flow between distant time-steps. We investigate whether Target Propagation (TPROP) style approaches can address these shortcomings. Unfortunately, extensive experiments suggest that TPROP generally underperforms BPTT, and we end with an analysis of this phenomenon and suggestions for future work. |
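The abstract's central complaint about truncated BPTT is that gradient flow is cut off at the truncation boundary, so parameters receive no credit for influence exerted through distant time-steps. The sketch below (not from the paper; all names are illustrative) makes this concrete with a scalar linear RNN `h_t = w*h_{t-1} + x_t`, where the backward pass simply stops after `k` steps, discarding the contribution of earlier hidden states.

```python
def rnn_forward(w, xs):
    # Scalar linear RNN: h_t = w * h_{t-1} + x_t, with h_0 = 0.
    hs = [0.0]
    for x in xs:
        hs.append(w * hs[-1] + x)
    return hs

def bptt_grad(w, xs, target, k=None):
    """Gradient of L = 0.5 * (h_T - target)^2 w.r.t. w, back-propagating
    through at most k time-steps (k=None means full, untruncated BPTT)."""
    hs = rnn_forward(w, xs)
    T = len(xs)
    if k is None:
        k = T
    delta = hs[-1] - target      # dL/dh_T
    grad = 0.0
    for t in range(T, T - k, -1):
        grad += delta * hs[t - 1]  # local term: dh_t/dw = h_{t-1}
        delta *= w                 # carry error back: dL/dh_{t-1} = w * dL/dh_t
    # With k < T the loop exits early, silently dropping the gradient
    # contribution of h_0 .. h_{T-k}: exactly the truncation the paper targets.
    return grad
```

Comparing `bptt_grad(w, xs, y)` against `bptt_grad(w, xs, y, k=1)` on any sequence longer than one step shows the truncated gradient diverging from the true one, which is the bias TPROP-style training attempts to remove by fitting local targets instead of back-propagating across segment boundaries.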
Year | Venue | Field
---|---|---
2017 | arXiv: Computation and Language | Computer science, Recurrent neural network, Artificial intelligence, Language model, Machine learning

DocType | Volume | Citations
---|---|---
Journal | abs/1702.04770 | 1

PageRank | References | Authors
---|---|---
0.39 | 9 | 7

Name | Order | Citations | PageRank |
---|---|---|---|
Sam Wiseman | 1 | 101 | 9.02 |
Sumit Chopra | 2 | 2835 | 181.37 |
Marc'Aurelio Ranzato | 3 | 5242 | 470.94 |
Arthur Szlam | 4 | 1056 | 68.60 |
Ruoyu Sun | 5 | 296 | 16.15 |
Soumith Chintala | 6 | 2056 | 102.09 |
Nicolas Vasilache | 7 | 41 | 4.24 |