Abstract
---
In this paper, we describe a project that explores a novel experimental setup towards building a spoken, multi-modally rich, and human-like multiparty tutoring robot. A human-robot interaction setup is designed, and a human-human dialogue corpus is collected. The corpus targets the development of a dialogue system platform to study verbal and nonverbal tutoring strategies in multiparty spoken interactions with robots capable of spoken dialogue. The dialogue task centers on two participants engaged in a dialogue aiming to solve a card-ordering game. Alongside the participants sits a tutor (robot) that helps them perform the task and organizes and balances their interaction. Multimodal signals, captured and automatically synchronized by several audio-visual capture technologies such as a microphone array, Kinects, and video cameras, were coupled with manual annotations. These are used to build a situated model of the interaction based on the participants' personalities, their state of attention, their conversational engagement and verbal dominance, and how these correlate with the verbal and visual feedback, turn-management, and conversation regulatory actions generated by the tutor. Driven by the analysis of the corpus, we also present the detailed design methodology for an affective and multimodally rich dialogue system that allows the robot to incrementally measure the attention state and dominance of each participant, enabling the robot head Furhat to maintain a well-coordinated, balanced, and engaging conversation that attempts to maximize agreement and each participant's contribution to solving the task. This project takes the first steps toward exploring the potential of multimodal dialogue systems for building interactive robots that can serve in educational, team-building, and collaborative task-solving applications.
Year | DOI | Venue
---|---|---
2014 | 10.1145/2559636.2563681 | HRI

Keywords | Field | DocType
---|---|---
dialogue system platform, multimodal dialogue system, human-human dialogue corpus, participants personality, multimodally rich dialogue system, human-like multiparty tutoring robot, robot head furhat, multiparty multimodal, human-robot collaborative, collaborative task, interactive robot, dialogue task, human robot interaction, human computer interaction | Situated, Educational technology, TUTOR, Conversation, Greeklish, Computer science, Human–computer interaction, Robot, Language technology, Human–robot interaction | Conference

ISSN | ISBN | Citations
---|---|---
2167-2121 | 978-1-4503-2658-2 | 1

PageRank | References | Authors
---|---|---
0.37 | 4 | 13
Name | Order | Citations | PageRank
---|---|---|---
Samer Al Moubayed | 1 | 234 | 23.13 |
Jonas Beskow | 2 | 668 | 96.64 |
Bajibabu Bollepalli | 3 | 22 | 7.17 |
Joakim Gustafson | 4 | 392 | 58.37 |
Ahmed Hussen Abdelaziz | 5 | 37 | 5.61 |
Martin Johansson | 6 | 5 | 1.23 |
Maria Koutsombogera | 7 | 35 | 7.43 |
José David Lopes | 8 | 13 | 3.04 |
Jekaterina Novikova | 9 | 64 | 12.97 |
Catharine Oertel | 10 | 77 | 9.46 |
Gabriel Skantze | 11 | 485 | 46.16 |
Kalin Stefanov | 12 | 15 | 3.62 |
Gül Varol | 13 | 1 | 0.37 |