Title
Efficient Indoor Positioning with Visual Experiences via Lifelong Learning
Abstract
Positioning with visual sensors in indoor environments has many advantages: it requires neither infrastructure nor accurate maps, and is more robust and accurate than other modalities such as WiFi. However, one of the biggest hurdles preventing its practical adoption on mobile devices is the time-consuming visual processing pipeline. To overcome this problem, this paper proposes a novel lifelong learning approach that enables efficient, real-time visual positioning. We exploit the fact that when following a previous visual experience multiple times, one can gradually discover clues on how to traverse it with much less effort, e.g., which parts of the scene are more informative, and what kind of visual elements to expect. Such second-order information is recorded as parameters, which provide key insights into the context and allow our system to dynamically optimise itself to stay localised at minimum cost. We implement the proposed approach on an array of mobile and wearable devices, and evaluate its performance in two indoor settings. Experimental results show that our approach can reduce visual processing time by up to two orders of magnitude while achieving sub-metre positioning accuracy.
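To make the abstract's idea concrete, below is a minimal illustrative sketch, not the authors' implementation. It assumes the "second-order information" takes the form of per-place scores over image regions, accumulated across traversals, so that later runs can extract features only from the regions that historically localised best. The class ExperienceMemory, the region grid, and the matched_counts input are all hypothetical names introduced here for illustration.

```python
# Illustrative sketch only -- NOT the method from the paper. It assumes a
# hypothetical pipeline where each place along a learned route keeps a score
# of how often each image region produced features that survived matching,
# and later traversals spend their processing budget on the best regions.

from collections import defaultdict


class ExperienceMemory:
    """Per-place statistics accumulated over repeated traversals (hypothetical)."""

    def __init__(self, num_regions=16, decay=0.9):
        self.num_regions = num_regions  # image divided into a grid of regions
        self.decay = decay              # older traversals count less
        self.scores = defaultdict(lambda: [0.0] * num_regions)

    def update(self, place_id, matched_counts):
        """Blend in how many feature matches each region produced at this place."""
        scores = self.scores[place_id]
        for region, count in enumerate(matched_counts):
            scores[region] = self.decay * scores[region] + (1.0 - self.decay) * count

    def informative_regions(self, place_id, budget=4):
        """Return the `budget` regions that have historically localised best here."""
        scores = self.scores[place_id]
        return sorted(range(self.num_regions), key=lambda r: -scores[r])[:budget]


if __name__ == "__main__":
    memory = ExperienceMemory(num_regions=4)
    # Three traversals past place 7: region 2 consistently matches well.
    for counts in ([5, 1, 40, 3], [7, 0, 35, 2], [4, 2, 38, 1]):
        memory.update(place_id=7, matched_counts=counts)
    print(memory.informative_regions(place_id=7, budget=2))  # e.g. [2, 0]
```

Restricting feature extraction to a few high-scoring regions is one plausible way such parameters could cut per-frame processing cost, consistent with the speed-ups the abstract reports.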
Year
2019
DOI
10.1109/TMC.2018.2852645
Venue
IEEE Transactions on Mobile Computing
Keywords
Visualization, Pipelines, Real-time systems, Feature extraction, Smart glasses, Navigation, Vocabulary
Field
Modalities, Visual processing, Visualization, Computer science, Feature extraction, Human–computer interaction, Mobile device, Lifelong learning, Wearable technology, Traverse, Distributed computing
DocType
Journal
Volume
18
Issue
4
ISSN
1536-1233
Citations
1
PageRank
0.36
References
0
Authors
7
Name | Order | Citations | PageRank
Hongkai Wen | 1 | 313 | 25.88
Ronald Clark | 2 | 131 | 9.10
Sen Wang | 3 | 279 | 21.15
Xiaoxuan Lu | 4 | 136 | 8.39
Bowen Du | 5 | 223 | 41.46
Wen Hu | 6 | 1373 | 104.78
Niki Trigoni | 7 | 1160 | 85.23