Title
Strategies for determining effective step size of the backpropagation algorithm for on-line learning
Abstract
In this paper, we investigate strategies for determining the step size of the backpropagation (BP) algorithm for on-line learning. For off-line learning, it is known that the step size can be adapted during learning. For on-line learning, since the same data may never appear again, the strategies proposed for off-line learning cannot be applied directly. If the neural network is not updated with a proper step size, its performance may not improve steadily. Here, we investigate four strategies for setting the step size: (1) constant, (2) random, (3) linearly decreasing, and (4) inversely proportional to time. Experimental results show that the third and fourth strategies are more effective. In addition, compared with the third strategy, the fourth is more stable and usually improves performance steadily.
Year
2015
DOI
10.1109/SOCPAR.2015.7492800
Venue
2015 7th International Conference of Soft Computing and Pattern Recognition (SoCPaR)
Keywords
Decision Boundary Making, Multilayer Perceptron, Backpropagation Algorithm, On-Line Learning
Field
Computer science, Artificial intelligence, Backpropagation, Artificial neural network, Machine learning
DocType
Conference
ISSN
2381-7542
Citations
0
PageRank
0.34
References
2
Authors
4
Name, Order, Citations, PageRank
Yuya Kaneda, 1, 5, 2.70
Qiangfu Zhao, 2, 214, 62.36
Yong Liu, 3, 2526, 265.08
Yan Pei, 4, 125, 22.89