Abstract |
---|
Depth sensing cameras that can acquire RGB and depth information are widely used. They can expand and enhance various camera-based applications and are inexpensive yet powerful tools for human-computer interaction. RGB and depth sensing cameras have quite different key parameters, such as exposure time. We focus on the difference in their motion robustness: RGB cameras have relatively long exposure times, while those of ToF (time-of-flight) depth sensing cameras are relatively short. An experiment on visual tag reading, one typical application, shows that depth sensing cameras can robustly decode moving tags. The proposed technique will enable robust tag reading, indoor localization, and color image stabilization while walking, jogging, or even glancing momentarily, without requiring any additional special devices. |
Year | DOI | Venue
---|---|---
2015 | 10.1145/2815585.2817807 | UIST (Adjunct Volume)

Field | DocType | Citations
---|---|---
Computer vision, Computer graphics (images), Computer science, Robustness (computer science), RGB color model, Artificial intelligence, Color image | Conference | 0

PageRank | References | Authors
---|---|---
0.34 | 1 | 3
Name | Order | Citations | PageRank
---|---|---|---
Wataru Yamada | 1 | 29 | 15.22
Hiroyuki Manabe | 2 | 93 | 20.31
Hiroshi Inamura | 3 | 253 | 25.67