Title
Which Input Abstraction is Better for a Robot Syntax Acquisition Model? Phonemes, Words or Grammatical Constructions?
Abstract
There has been considerable progress in recent years in speech recognition systems [13]. The word recognition error rate has dropped with the arrival of deep learning methods. However, if one uses a cloud-based speech API and integrates it into a robotic architecture [33], one still encounters a considerable number of wrongly recognized sentences. Thus speech recognition cannot be considered solved, especially when an utterance is considered in isolation from its context. Particular solutions, which can be adapted to different Human-Robot Interaction applications and contexts, have to be found. In this perspective, the way children learn language and how our brains process utterances may help us improve how robots process language. Drawing inspiration from language acquisition theories and from how the brain processes sentences, we previously developed a neuro-inspired model of sentence processing. In this study, we investigate how this model can process different levels of abstraction as input: sequences of phonemes, sequences of words, or grammatical constructions. We find that, even though the model had previously been tested only on grammatical constructions, it performs better with word and phoneme inputs.
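The three input abstractions compared in the abstract can be illustrated with a short sketch. The closed-class word list, the numbered slot markers, and the ARPAbet-style phoneme transcription below are illustrative assumptions for a toy sentence, not details taken from the paper; in construction-based representations, open-class (content) words are replaced by slot markers while closed-class (function) words are kept.

```python
# Illustrative sketch of three input abstraction levels for a toy sentence.
# The closed-class list and the phoneme transcription are assumptions.

CLOSED_CLASS = {"the", "a", "to", "by", "was", "and", "it", "that"}

def to_construction(sentence):
    """Replace open-class (content) words with numbered slot markers
    (SW1, SW2, ...), keeping closed-class (function) words, to obtain
    a grammatical-construction-level representation."""
    out, slot = [], 0
    for w in sentence.lower().split():
        if w in CLOSED_CLASS:
            out.append(w)
        else:
            slot += 1
            out.append(f"SW{slot}")
    return " ".join(out)

sentence = "the boy took the ball"

words = sentence.split()                  # word-level input
construction = to_construction(sentence)  # construction-level input
# Phoneme-level input: a hand-made ARPAbet-style transcription (assumed).
phonemes = ["DH", "AH", "B", "OY", "T", "UH", "K", "DH", "AH", "B", "AO", "L"]

print(construction)  # the SW1 SW2 the SW3
```

A model trained on constructions sees only the abstract frame, so all sentences sharing a frame collapse onto one training example, whereas word- and phoneme-level inputs expose the model to the raw lexical material.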
Year
2018
Venue
Joint IEEE International Conference on Development and Learning and Epigenetic Robotics ICDL-EpiRob
Field
Abstraction, Sentence processing, Computer science, Word recognition, Word error rate, Utterance, Language acquisition, Natural language processing, Artificial intelligence, Deep learning, Syntax
DocType
Conference
ISSN
2161-9484
Citations
0
PageRank
0.34
References
0
Authors
1
Name, Order, Citations, PageRank
Xavier Hinaut, 1367.93