Abstract |
---|
As sensing technology has evolved, spatial user interfaces have become increasingly popular platforms for interacting with video games and virtual environments. In particular, recent advances in consumer-level motion tracking devices such as the Microsoft Kinect have sparked a dramatic increase in user interfaces controlled directly by the user's hands and body. However, existing skeleton tracking middleware created for these sensors, such as that developed by Microsoft and OpenNI, tends to focus on coarse full-body motions and suffers from several well-documented limitations when attempting to track the positions of the user's hands and segment them from the background. In this paper, we present an approach for more robustly handling these failure cases by combining the original skeleton tracking positions with the color and depth information returned from the sensor. |
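The abstract describes the fusion approach only at a high level. As an illustration of one common ingredient of such approaches, the sketch below segments hand pixels by thresholding the depth image in a window around a skeleton-reported hand joint. All names, parameters, and thresholds here are hypothetical assumptions, not the authors' actual method:

```python
import numpy as np

def segment_hand(depth, hand_px, hand_depth, depth_tolerance=150, window=40):
    """Hypothetical sketch: segment hand pixels near a tracked joint.

    depth          -- HxW depth image in millimeters (0 = invalid reading)
    hand_px        -- (row, col) of the skeleton-reported hand joint
    hand_depth     -- depth of the joint in millimeters
    depth_tolerance-- max deviation (mm) from the joint depth to count as hand
    window         -- half-size of the search window around the joint
    """
    h, w = depth.shape
    r, c = hand_px
    # Clamp the search window to the image bounds.
    r0, r1 = max(0, r - window), min(h, r + window)
    c0, c1 = max(0, c - window), min(w, c + window)
    mask = np.zeros_like(depth, dtype=bool)
    roi = depth[r0:r1, c0:c1].astype(np.int32)
    # Keep valid pixels whose depth is close to the joint's depth.
    mask[r0:r1, c0:c1] = (roi > 0) & (np.abs(roi - hand_depth) < depth_tolerance)
    return mask
```

In practice, a robust system would combine this depth cue with color (e.g., skin-tone) information to recover when the skeleton-reported joint position is unreliable, which is the kind of failure case the paper targets.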
Year | DOI | Venue |
---|---|---|
2013 | 10.1145/2491367.2491401 | SUI |
Keywords | Field | DocType
---|---|---|
consumer-level motion, microsoft kinect, dramatic increase, enhanced real-time hand segmentation, skeleton data, popular platform, fusing depth, coarse full-body motion, depth information, spatial user interface, skeleton tracking middleware, original skeleton, failure case | Middleware, Computer vision, Computer graphics (images), Segmentation, Computer science, Artificial intelligence, User interface, Match moving | Conference
Citations | PageRank | References
---|---|---|
1 | 0.37 | 0
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Yu-Jen Huang | 1 | 154 | 14.91 |
Mark Bolas | 2 | 880 | 89.87 |
Evan A. Suma | 3 | 780 | 67.37 |