Abstract |
---|
Adult-to-child interactions are often characterized by prosodically exaggerated speech accompanied by visually captivating co-speech gestures. In a series of adult studies, we have shown that these gestures are linked in a sophisticated manner to the prosodic structure of adults' utterances. In the current study, we use the Preferential Looking Paradigm to demonstrate that two-year-olds can use the alignment of these gestures with speech to deduce the meaning of words. Index Terms: speech perception, audiovisual alignment, word learning |
Year | Venue | Keywords |
---|---|---|
2008 | AVSP | indexing terms, speech perception |
Field | DocType | Citations |
---|---|---|
Speech corpus, Speech analytics, Gesture, Computer science, Motor theory of speech perception, Speech recognition, Natural language processing, Artificial intelligence, Speech perception, Speech error, Speech production, Speech shadowing | Conference | 0 |
PageRank | References | Authors |
---|---|---|
0.34 | 0 | 2 |
Name | Order | Citations | PageRank |
---|---|---|---|
Alexandra Jesse | 1 | 5 | 4.44 |
Elizabeth K. Johnson | 2 | 1 | 1.72 |