Title
Neural learning of approximate simple regular languages
Abstract
Discrete-time recurrent neural networks (DTRNN) have been used to infer DFA from sets of examples and counterexamples; however, discrete algorithmic methods are much better at this task and clearly outperform DTRNN in space and time complexity. We show, however, how DTRNN may be used to learn not the exact language that explains the whole learning set but an approximate and much simpler language that explains a great majority of the examples by using simple rules. This is accomplished by ...
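The abstract describes training a DTRNN as a string acceptor from examples and counterexamples. As a rough illustration only (not the paper's actual architecture, error function, or data), the sketch below trains an assumed Elman-style DTRNN by backpropagation through time to accept or reject strings of a simple regular language; the target language ("no 'bb' substring"), network size, and hyperparameters are illustrative assumptions:

    # Minimal sketch: Elman-style DTRNN acceptor trained by BPTT (assumptions noted above).
    import numpy as np

    rng = np.random.default_rng(0)
    ALPHABET = "ab"
    N_HID = 6            # assumed hidden-state size
    LR = 0.2             # assumed learning rate
    EPOCHS = 300

    W_in = rng.normal(0.0, 0.5, (N_HID, len(ALPHABET)))
    W_rec = rng.normal(0.0, 0.5, (N_HID, N_HID))
    w_out = rng.normal(0.0, 0.5, N_HID)
    b_h = np.zeros(N_HID)
    b_o = 0.0

    def in_language(s):
        """Example target language (an assumption): strings with no 'bb' substring."""
        return "bb" not in s

    def one_hot(c):
        v = np.zeros(len(ALPHABET))
        v[ALPHABET.index(c)] = 1.0
        return v

    def forward(s):
        """Run the DTRNN over s; return all hidden states and the acceptance probability."""
        hs = [np.zeros(N_HID)]
        for c in s:
            hs.append(np.tanh(W_in @ one_hot(c) + W_rec @ hs[-1] + b_h))
        p = 1.0 / (1.0 + np.exp(-(w_out @ hs[-1] + b_o)))
        return hs, p

    # Training set: random strings labelled by membership in the target language.
    data = ["".join(rng.choice(list(ALPHABET), size=rng.integers(1, 8))) for _ in range(200)]

    for _ in range(EPOCHS):
        for s in data:
            y = float(in_language(s))
            hs, p = forward(s)
            # Backpropagation through time, cross-entropy loss at the final step.
            d_out = p - y
            gW_in = np.zeros_like(W_in)
            gW_rec = np.zeros_like(W_rec)
            gb_h = np.zeros_like(b_h)
            d_h = d_out * w_out
            for t in range(len(s), 0, -1):
                d_pre = (1.0 - hs[t] ** 2) * d_h          # tanh derivative
                gW_in += np.outer(d_pre, one_hot(s[t - 1]))
                gW_rec += np.outer(d_pre, hs[t - 1])
                gb_h += d_pre
                d_h = W_rec.T @ d_pre
            w_out -= LR * d_out * hs[-1]
            b_o -= LR * d_out
            W_in -= LR * gW_in
            W_rec -= LR * gW_rec
            b_h -= LR * gb_h

    correct = sum((forward(s)[1] > 0.5) == in_language(s) for s in data)
    print(f"training-set accuracy: {correct}/{len(data)}")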
Year
1999
Venue
ESANN
Keywords
time complexity, recurrent neural network, discrete time, regular language
Field
Error function, Neural learning, Computer science, Spacetime, Recurrent neural network, Theoretical computer science, Time delay neural network, Simpli, Artificial intelligence, Regular language, Counterexample, Machine learning
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
4
Name                      Order   Citations   PageRank
Mikel L. Forcada          1       508         58.98
Antonio M. Corbí-Bellot   2       48          4.25
Marco Gori                3       0           0.34
Marco Maggini             4       766         72.78