Abstract |
---|
Several eye-gaze input systems have recently been developed, some of which treat eye-blink actions as additional input information. A main purpose of eye-gaze input systems is to serve as a communication aid for severely disabled users. An input system that employs eye blinks as command inputs must identify voluntary (conscious) blinks. We previously developed an eye-gaze input system for creating Japanese text. This system used an indicator selection method for command input and could identify two types of voluntary blinks, which served as the indicator selection and error correction functions, respectively. In the evaluation experiment of the previous system, errors were occasionally observed when estimating which indicator the user was gazing at. In this study, we propose a new input system whose selection method is based on a novel indicator estimation algorithm. We conducted an experiment to evaluate the performance of Japanese text creation using the new system, and report that it improves text input speed. In addition, we present a comparison with various related eye-gaze input systems. |

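The abstract's central requirement is separating voluntary (command) blinks from spontaneous ones. One common way to do this, sketched below purely for illustration, is to threshold the eyelid-closure duration: spontaneous blinks are brief, while deliberately held blinks last longer, and a second, longer hold can mark a distinct command. The function name and threshold values are hypothetical assumptions, not taken from the paper.

```python
def classify_blink(duration_ms, short_thresh=150.0, long_thresh=400.0):
    """Classify a blink by eyelid-closure duration in milliseconds.

    Thresholds are illustrative assumptions, not values from the paper:
    closures shorter than `short_thresh` are treated as spontaneous,
    longer closures as one of two voluntary command blinks.
    """
    if duration_ms < short_thresh:
        return "spontaneous"        # ignored by the interface
    elif duration_ms < long_thresh:
        return "voluntary_select"   # e.g. indicator selection command
    else:
        return "voluntary_correct"  # e.g. error correction command

print(classify_blink(100))  # spontaneous
print(classify_blink(250))  # voluntary_select
print(classify_blink(500))  # voluntary_correct
```

In practice such thresholds would be calibrated per user, since blink durations vary between individuals.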
Year | DOI | Venue |
---|---|---|
2019 | 10.1007/s10015-018-0517-z | Artificial Life and Robotics |

Keywords | DocType | Volume
---|---|---
Eye blink input, Eye-gaze input, Image analysis, Input interface, Voluntary blink | Journal | 24

Issue | ISSN | Citations
---|---|---
3 | 1614-7456 | 0

PageRank | References | Authors
---|---|---
0.34 | 5 | 4

Name | Order | Citations | PageRank |
---|---|---|---|
Hironobu Sato | 1 | 2 | 1.13 |
Kiyohiko Abe | 2 | 14 | 4.66 |
Shogo Matsuno | 3 | 4 | 1.21 |
Minoru Ohyama | 4 | 33 | 9.48 |