Abstract
---|
Learning through embodiment is a promising concept, potentially capable of removing many layers of abstraction that hinder the learning process. Walk the Graph, our HoloLens 2-based AR application, provides an inquiry-based learning setting for understanding graphs through the full-body movement of the user. In this paper, as part of our ongoing work to build an AI framework to quantify and predict the learning gain of the user, we examine the predictive potential of gaze data collected during app usage. To classify users into groups with different learning gains, we construct a map of areas of interest (AOIs) from the gaze data itself. Subsequently, using a sliding-window approach, we extract engineered features from the collected in-app as well as gaze data. Our experimental results show that a Support Vector Machine with selected features achieved the highest F1 score (0.658; baseline: 0.251) compared to other approaches, including a K-Nearest Neighbors and a Random Forest classifier, although in each case the lion's share of the predictive power is provided by the gaze-based features. |
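The pipeline described in the abstract (sliding-window feature extraction over gaze traces, followed by a comparison of SVM, K-Nearest Neighbors, and Random Forest classifiers on F1 score) can be sketched as below. This is a minimal illustration with synthetic data, not the authors' implementation: the window parameters, the engineered features (mean, standard deviation, range), and the toy learning-gain labels are all assumptions.

```python
# Hypothetical sketch of the sliding-window + classifier-comparison pipeline.
# Synthetic data only; features and labels are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def sliding_windows(signal, size, step):
    """Yield overlapping windows over a 1-D gaze signal."""
    for start in range(0, len(signal) - size + 1, step):
        yield signal[start:start + size]

def extract_features(window):
    """Engineered per-window features (mean, std, range) -- illustrative only."""
    return [window.mean(), window.std(), window.max() - window.min()]

# Synthetic stand-in for per-user gaze traces and learning-gain labels.
X, y = [], []
for user in range(40):
    gain = user % 2                                # 0 = low gain, 1 = high gain
    trace = rng.normal(loc=gain * 0.5, scale=1.0, size=300)
    for w in sliding_windows(trace, size=50, step=25):
        X.append(extract_features(w))
        y.append(gain)
X, y = np.array(X), np.array(y)

# Compare the three classifier families named in the paper on macro F1.
for name, clf in [("SVM", SVC()),
                  ("KNN", KNeighborsClassifier()),
                  ("RF", RandomForestClassifier(random_state=0))]:
    f1 = cross_val_score(clf, X, y, cv=5, scoring="f1_macro").mean()
    print(f"{name}: F1 = {f1:.3f}")
```

In the actual study, each window would carry gaze-based features (e.g., dwell times on the constructed AOIs) alongside in-app features, and the reported F1 of 0.658 refers to the SVM with feature selection on the real dataset.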
Year | DOI | Venue |
---|---|---|
2021 | 10.1145/3460418.3479358 | Ubiquitous Computing |
Keywords | DocType | Citations
---|---|---
Inquiry-based Learning, Augmented Reality, Eye Tracking | Conference | 0

PageRank | References | Authors
---|---|---
0.34 | 0 | 6
Name | Order | Citations | PageRank |
---|---|---|---|
David Dzsotjan | 1 | 0 | 0.34 |
Kim Ludwig-Petsch | 2 | 0 | 0.34 |
Sergey Mukhametov | 3 | 0 | 0.34 |
Shoya Ishimaru | 4 | 148 | 21.31 |
Stefan Küchemann | 5 | 0 | 0.34 |
Jochen Kuhn | 6 | 22 | 8.28 |