Abstract
Multimodal signals allow us to gain insights into the internal cognitive processes of a person: for example, speech and gesture analysis yields cues about hesitations, knowledgeability, or alertness; eye tracking yields information about a person's focus of attention, task, or cognitive state; and EEG yields information about a person's cognitive load or information appraisal. Capturing cognitive processes is an important research tool for understanding human behavior, as well as a crucial part of the user model in an adaptive interactive system such as a robot or a tutoring system. As cognitive processes are often multifaceted, a comprehensive model requires the combination of multiple complementary signals. In this workshop at the ACM International Conference on Multimodal Interaction (ICMI) in Boulder, Colorado, USA, we discussed the state of the art in monitoring and modeling cognitive processes from multimodal signals.
Year | DOI | Venue |
---|---|---|
2018 | 10.1145/3242969.3265861 | ICMI |
Field | DocType | ISBN
---|---|---
Multimodality, Computer science, Eye tracking, Human–computer interaction, User modeling, Robot, Cognition, Cognitive load, Electroencephalography, Alertness | Conference | 978-1-4503-5692-3
Citations | PageRank | References
---|---|---
0 | 0.34 | 0
Authors |
---|
6 |
Name | Order | Citations | PageRank |
---|---|---|---|
Felix Putze | 1 | 205 | 29.73 |
Jutta Hild | 2 | 22 | 6.19 |
Akane Sano | 3 | 84 | 8.96 |
Enkelejda Kasneci | 4 | 202 | 33.86 |
Erin T. Solovey | 5 | 730 | 48.09 |
T. Schultz | 6 | 2423 | 252.72 |