Title: When unlearning helps
Abstract
Overregularization in child language learning, for example of verb tense forms, involves abandoning correct behaviours for incorrect ones and later reverting to the correct behaviours. Many other child-development phenomena follow this U-shaped pattern of learning, unlearning and relearning. A decisive learner does not do this; more generally, it never abandons a hypothesis H for an inequivalent one and later conjectures a hypothesis equivalent to H, where equivalence means semantic or behavioural equivalence. The first main result of the present paper entails that decisiveness is a genuine restriction on Gold's model of explanatory (or in-the-limit) learning of grammars for languages from positive data; this result also solves an open problem posed in 1986 by Osherson, Stob and Weinstein. Second-time decisive learners semantically conjecture each of their hypotheses for any language at most twice; such learners, by contrast, are shown not to restrict Gold's model of learning. Non-U-shaped learning liberalizes the requirement of decisiveness: the restriction is imposed only on correct hypotheses rather than on all hypotheses output. The situation regarding learning power for non-U-shaped learning is a little more complex than that for decisiveness. Gold's original model for learning grammars from positive data, called EX-learning, requires, for success, syntactic convergence to a correct grammar. A slight variant, called BC-learning, requires only semantic convergence to a sequence of correct grammars that need not be syntactically identical to one another. The second main result says that non-U-shaped learning does not restrict EX-learning. However, by an argument of Fulk, Jain and Osherson, non-U-shaped learning does restrict BC-learning. The final section discusses the possible meaning of these results for cognitive science and indicates some avenues worthy of future investigation.
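For readers who want the abstract's criteria pinned down, the LaTeX fragment below sketches the standard Gold-style definitions as they are usually stated in the inductive-inference literature; the notation is an assumption of this sketch, not taken from the paper itself (W_p for the language generated by grammar p, T for a text, i.e. an enumeration of a language L, and M(T[n]) for the learner's conjecture on the first n items of T).

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% Sketch of the standard Gold-style definitions referred to in the abstract.
% Assumed notation: $W_p$ is the language generated by grammar (program) $p$,
% $T$ is a text for $L$ (an enumeration of exactly the elements of $L$), and
% $M(T[n])$ is the learner's conjecture after the first $n$ items of $T$.

\textbf{EX-learning (syntactic convergence).} $M$ \emph{EX-learns} $L$ iff,
on every text $T$ for $L$,
\[
  (\exists p)(\exists n_0)(\forall n \ge n_0)\;
  \bigl[\, M(T[n]) = p \ \wedge\ W_p = L \,\bigr].
\]

\textbf{BC-learning (semantic convergence).} $M$ \emph{BC-learns} $L$ iff,
on every text $T$ for $L$,
\[
  (\exists n_0)(\forall n \ge n_0)\; W_{M(T[n])} = L .
\]

\textbf{Decisiveness.} $M$ is \emph{decisive} iff there are no finite
sequences $\sigma \subseteq \tau \subseteq \rho$ with
\[
  W_{M(\sigma)} = W_{M(\rho)} \neq W_{M(\tau)},
\]
that is, $M$ never returns to a semantically abandoned conjecture.

\textbf{Non-U-shaped learning.} The same condition, but required only on
texts for languages $L$ that $M$ learns and only when
$W_{M(\sigma)} = W_{M(\rho)} = L$: the learner never abandons and later
re-conjectures a \emph{correct} hypothesis.

\end{document}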
Year: 2008
DOI: 10.1016/j.ic.2007.10.005
Venue: Inf. Comput.
Keywords: cognitive science, original model, inductive inference of grammars for languages from positive data, correct hypothesis, 68T05, positive data, hypotheses output, main result, correct behaviour, computational learning theory, U-shaped form, correct grammar, 03D25, 03D80, non-U-shaped learning, child language learning, language learning, child development, inductive inference
Field: Rule-based machine translation, Discrete mathematics, Verb, Computer science, Cognitive psychology, Grammar, Equivalence (measure theory), Language acquisition, Artificial intelligence, Computational learning theory, Syntax, Semantics
DocType: Journal
Volume: 206
Issue: 5
Journal: Information and Computation
Citations: 19
PageRank: 0.71
References: 17
Authors: 5

Name             | Order | Citations | PageRank
Ganesh Baliga    | 1     | 78        | 5.72
John Case        | 2     | 169       | 13.65
Wolfgang Merkle  | 3     | 167       | 18.46
Frank Stephan    | 4     | 49        | 3.79
Rolf Wiehagen    | 5     | 835       | 105.73