Title
Modeling gaze behavior for virtual demonstrators
Abstract
Achieving autonomous virtual humans with coherent and natural motions is key to effectiveness in many educational, training, and therapeutic applications. Among the several aspects to be considered, gaze behavior is an important non-verbal communication channel that plays a vital role in the effectiveness of the resulting animations. This paper focuses on analyzing gaze behavior in demonstrative tasks involving arbitrary locations for target objects and listeners. Our analysis is based on full-body motions captured from human participants performing real demonstrative tasks in varied situations. We address temporal information and coordination with targets and observers at varied positions.
Year
2011
DOI
10.1007/978-3-642-23974-8_17
Venue
IVA
Keywords
varied position, human participant, autonomous virtual human, demonstrative task, target object, important non-verbal communication channel, varied situation, natural motion, virtual demonstrator, real demonstrative task, arbitrary location, virtual reality
Field
Virtual reality, Gaze, Computer science, Demonstrative, Communication channel, Multimedia, Motion synthesis
DocType
Conference
Citations
3
PageRank
0.44
References
11
Authors
4
Name                Order  Citations  PageRank
Yazhou Huang        1      81         8.03
Justin L. Matthews  2      6          3.61
Teenie Matlock      3      117        24.50
Marcelo Kallmann    4      639        59.35