Abstract
---
Local features are widely used for content-based image retrieval and augmented reality applications. Typically, feature descriptors are calculated from the gradients of a canonical patch around a repeatable key point in the image. In previous work, we showed that one can alternatively transmit the compressed canonical patch and perform descriptor computation at the receiving end with comparable performance. In this paper, we propose a temporally coherent key point detector that allows efficient interframe coding of canonical patches. In inter-patch compression, one strives to transmit each patch with as few bits as possible by simply modifying a previously transmitted patch. This enables server-based mobile augmented reality, where a continuous stream of salient information, sufficient for image-based retrieval, can be sent over a wireless link at the smallest possible bit-rate. Experimental results show that our technique achieves similar image matching performance at 1/10 of the bit-rate compared to detecting key points independently frame-by-frame.
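The inter-patch compression idea described above can be illustrated with a minimal predictive-coding sketch: the current frame's patch is predicted by the co-located patch already transmitted for the previous frame, and only a quantized residual is coded. This is a hypothetical toy, not the authors' codec; the function names and the uniform quantization step are assumptions for illustration.

```python
# Illustrative sketch (not the paper's actual codec): inter-patch predictive
# coding. Temporally coherent key points make consecutive patches nearly
# identical, so the quantized residual is mostly zeros and costs few bits.

def encode_patch(patch, predictor, step=8):
    """Quantize the prediction residual with a uniform quantizer (hypothetical)."""
    return [round((p - q) / step) for p, q in zip(patch, predictor)]

def decode_patch(residual, predictor, step=8):
    """Reconstruct the patch from the predictor plus the dequantized residual."""
    return [q + r * step for r, q in zip(residual, predictor)]

# Toy 1-D "patches": two consecutive frames differ only slightly.
prev = [120, 118, 119, 121, 122, 120, 119, 118]  # previously transmitted patch
curr = [121, 118, 120, 121, 123, 120, 118, 118]  # current frame's patch

residual = encode_patch(curr, prev)  # mostly zeros -> cheap to entropy-code
recon = decode_patch(residual, prev)

print(residual)
print(max(abs(a - b) for a, b in zip(curr, recon)))  # error bounded by step/2
```

Detecting the key point independently per frame would instead require coding each patch from scratch; the temporal coherence is what makes the residual sparse.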
Year | DOI | Venue
---|---|---
2012 | 10.1109/ISM.2012.18 | ISM
Keywords | Field | DocType
---|---|---
frame-by-frame, canonical patch, image coding, local features, image matching, temporally coherent key point detector, similar image, feature descriptors, image-based retrieval, wireless link, transmitted patch, image matching performance, comparable performance, content-based image retrieval, feature extraction, augmented reality, server-based mobile augmented reality, descriptor computation, image retrieval, salient information continuous stream, interframe coding, temporally coherent key point, repeatable key point, inter-patch compression, canonical patches, mobile augmented reality, key point, mobile computing, augmented reality application, content-based retrieval, augmented reality applications | Mobile computing, Computer vision, Automatic image annotation, Wireless, Pattern recognition, Feature detection (computer vision), Computer science, Image retrieval, Augmented reality, Feature extraction, Artificial intelligence, Visual Word | Conference
ISBN | Citations | PageRank
---|---|---
978-1-4673-4370-1 | 10 | 0.59

References | Authors
---|---
6 | 5
Name | Order | Citations | PageRank
---|---|---|---
Mina Makar | 1 | 93 | 7.27 |
Sam S. Tsai | 2 | 724 | 36.51 |
Vijay Chandrasekhar | 3 | 949 | 45.35 |
David Chen | 4 | 40 | 2.09 |
Bernd Girod | 5 | 8988 | 1062.96 |