Title
Optimal Stopping and Effective Machine Complexity in Learning
Abstract
We study the problem of when to stop learning a class of feedforward networks -- networks with linear output neurons and fixed input weights -- when they are trained with a gradient descent algorithm on a finite number of examples. Under general regularity conditions, it is shown that there are in general three distinct phases in the generalization performance during the learning process, and in particular, the network has better generalization performance when learning is stopped at a certain time ...
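To make the setting concrete, the following is a minimal sketch, not the authors' construction: a network whose input weights are fixed at random and whose linear output weights are trained by full-batch gradient descent on a finite sample, while held-out error is tracked so learning can be stopped at the epoch where that error is lowest. The data sizes, target function, learning rate, and epoch count are illustrative assumptions.

```python
# Minimal early-stopping sketch for the setting described in the abstract:
# fixed random input weights, trainable linear output weights, gradient descent
# on a finite training sample, validation error tracked to pick a stopping time.
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # Illustrative smooth target function (an assumption, not from the paper).
    return np.sin(3 * x).sum(axis=1, keepdims=True)

n_train, n_val, d, n_hidden = 40, 200, 2, 100
X_train = rng.uniform(-1, 1, (n_train, d))
X_val = rng.uniform(-1, 1, (n_val, d))
y_train = target(X_train) + 0.3 * rng.standard_normal((n_train, 1))  # noisy finite sample
y_val = target(X_val)  # noise-free targets as a proxy for generalization error

# Fixed (untrained) input weights; only the linear output weights w are learned.
W_in = rng.standard_normal((d, n_hidden))
b_in = rng.uniform(-1, 1, n_hidden)
H_train = np.tanh(X_train @ W_in + b_in)
H_val = np.tanh(X_val @ W_in + b_in)

w = np.zeros((n_hidden, 1))          # linear output weights, started at zero
lr, n_epochs = 1e-3, 5000
best_val, best_epoch, best_w = np.inf, 0, w.copy()

for epoch in range(n_epochs):
    # Full-batch gradient descent step on the training squared error.
    residual = H_train @ w - y_train
    w -= lr * (H_train.T @ residual) / n_train

    # Track validation error and remember the weights where it is smallest
    # (the early-stopping point).
    val_mse = np.mean((H_val @ w - y_val) ** 2)
    if val_mse < best_val:
        best_val, best_epoch, best_w = val_mse, epoch, w.copy()

final_val = np.mean((H_val @ w - y_val) ** 2)
print(f"best validation MSE {best_val:.4f} at epoch {best_epoch}, "
      f"vs {final_val:.4f} after {n_epochs} epochs")
```

Running the sketch typically shows validation error falling, reaching a minimum, and then rising as the output weights begin to fit the noise in the finite sample, which is the behavior that motivates stopping learning at a certain time rather than training to convergence.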
Year
1993
DOI
10.1109/ISIT.1995.531518
Venue
PROCEEDINGS 1995 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY
Keywords
gradient descent, optimal stopping
Field
Online machine learning, Early stopping, Mathematical optimization, Multi-task learning, Stability (learning theory), Probably approximately correct learning, Active learning (machine learning), Computer science, Wake-sleep algorithm, Artificial intelligence, Computational learning theory, Machine learning
DocType
Conference
Citations
32
PageRank
3.63
References
3
Authors
3
Name                  Order  Citations  PageRank
Wang, Changfeng       1      32         3.63
Santosh S. Venkatesh  2      381        71.80
J. Stephen Judd       3      77         15.20