Title
Automatic analysis of 3D gaze coordinates on scene objects using data from eye-tracking and motion-capture systems
Abstract
We implemented a system, called the VICON-EyeTracking Visualizer, that combines mobile eye-tracking data with motion-capture data to calculate and visualize the 3D gaze vector within the motion-capture co-ordinate system. To ensure that both devices were temporally synchronized, we used software we had previously developed. Reflective markers placed on objects in the scene make their positions known; spatially synchronizing the eye tracker and the motion-capture system then allows us to automatically compute how often and where fixations occur, thus overcoming the time-consuming and error-prone disadvantages of the traditional manual annotation process. We evaluated our approach by comparing its outcome for a simple looking task and a more complex grasping task against the average results produced by the manual annotation process. Preliminary data reveal that the program differed from the average manual annotation results by only approximately 3 percent in the looking task with regard to the number of fixations and cumulative fixation duration on each point in the scene. In the case of the more complex grasping task, the results depend on object size: for larger objects there was good agreement (differences of less than 16 percent, or 950 ms), but this degraded for smaller objects, where there are more saccades towards object boundaries. The advantages of our approach are easy user calibration, unrestricted body movements (due to the mobile eye-tracking system), and compatibility with any wearable eye tracker and marker-based motion-tracking system. Extending existing approaches, our system is also able to monitor fixations on moving objects. The automatic analysis of gaze and movement data in complex 3D scenes can be applied to a variety of research domains, e.g., human-computer interaction, virtual reality, or grasping and gesture research.
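The core computation the abstract describes, deciding when the 3D gaze ray dwells on a marker-tracked object long enough to count as a fixation, can be sketched as a ray-to-point distance test combined with a minimum dwell time. The function names, the 5 cm distance threshold, and the 100 ms minimum duration below are illustrative assumptions for the sketch, not values taken from the paper:

```python
import numpy as np

def point_to_ray_distance(origin, direction, point):
    """Perpendicular distance from a 3D point to a gaze ray."""
    d = direction / np.linalg.norm(direction)
    v = point - origin
    t = max(np.dot(v, d), 0.0)          # closest point must lie in front of the eye
    closest = origin + t * d
    return np.linalg.norm(point - closest)

def fixations_on_object(gaze_origins, gaze_dirs, obj_positions, timestamps,
                        dist_thresh=0.05, min_duration=0.1):
    """Return (start, end) time intervals in which the gaze ray stays within
    dist_thresh metres of the (possibly moving) object position for at least
    min_duration seconds. All inputs are per-sample sequences in the
    motion-capture coordinate frame."""
    hits = [point_to_ray_distance(o, d, p) < dist_thresh
            for o, d, p in zip(gaze_origins, gaze_dirs, obj_positions)]
    intervals, start = [], None
    for i, h in enumerate(hits):
        if h and start is None:
            start = i                     # dwell begins
        elif not h and start is not None:
            if timestamps[i - 1] - timestamps[start] >= min_duration:
                intervals.append((timestamps[start], timestamps[i - 1]))
            start = None
    if start is not None and timestamps[-1] - timestamps[start] >= min_duration:
        intervals.append((timestamps[start], timestamps[-1]))
    return intervals
```

In practice the gaze origin and direction would come from the calibrated eye tracker, transformed into the motion-capture frame via the head-mounted markers, and the per-frame object positions directly from the reflective markers; feeding one object at a time through such a test yields per-object fixation counts and cumulative durations.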
Year
2012
DOI
10.1145/2168556.2168561
Venue
ETRA
Keywords
movement data, motion-capture system, scene object, motion capture system, average manual annotation result, automatic analysis, motion capture data, eye tracker, co-ordinate system, motion tracking system, manual annotation process, mobile eye-tracking system, preliminary data, human computer interaction, cumulant, motion capture, eye tracking, motion tracking, virtual reality
Field
Computer vision, Motion capture, Virtual reality, Fixation (psychology), Gaze, Simulation, Gesture, Computer science, Software, Eye tracking, Artificial intelligence, Match moving
DocType
Conference
Citations
11
PageRank
0.68
References
5
Authors
6
Name | Order | Citations | PageRank
KAI ESSIG | 1 | 33 | 4.49
Daniel Dornbusch | 2 | 19 | 2.51
Daniel Prinzhorn | 3 | 11 | 0.68
Helge Ritter | 4 | 20204 | 15.97
Jonathan Maycock | 5 | 60 | 5.01
Thomas Schack | 6 | 33 | 7.51