| Abstract |
| --- |
| We discuss and predict the evolution of Simultaneous Localisation and Mapping (SLAM) into a general geometric and semantic 'Spatial AI' perception capability for intelligent embodied devices. A big gap remains between the visual perception performance that devices such as augmented reality eyewear or consumer robots will require and what is possible within the constraints imposed by real products. Co-design of algorithms, processors and sensors will be needed. We explore the computational structure of current and future Spatial AI algorithms and consider this within the landscape of ongoing hardware developments. |
| Year | Venue | Field |
| --- | --- | --- |
| 2018 | arXiv: Artificial Intelligence | Eyewear, Computer science, Augmented reality, Embodied cognition, Human–computer interaction, Artificial intelligence, Robot, Simultaneous localisation and mapping, Perception, Visual perception, Machine learning |

| DocType | Volume | Citations |
| --- | --- | --- |
| Journal | abs/1803.11288 | 4 |

| PageRank | References | Authors |
| --- | --- | --- |
| 0.40 | 28 | 1 |
| Name | Order | Citations | PageRank |
| --- | --- | --- | --- |
| Andrew J. Davison | 1 | 6707 | 350.85 |