Abstract |
---|
In this paper, we present a representation method for motion capture data that exploits the nearly repeated characteristics and spatio-temporal coherence of human motion. We extract similar motion clips of variable length or speed across the database. Since the coding cost between these matched clips is small, we propose a repeated motion analysis to extract the reference and repeated clip pairs with maximum compression gain. To further exploit motion coherence, we approximate the subspace-projected clip motions or residuals by interpolation functions with range-aware adaptive quantization. Our experiments demonstrate that the proposed feature-aware method is highly computationally efficient, and that it provides substantial compression gains with comparable reconstruction and perceptual errors. |
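The abstract's final stage (projecting clip motions onto a subspace, then quantizing the coefficients with range awareness) can be sketched roughly as follows. This is a minimal illustration using PCA and per-coefficient uniform quantization; it is not the authors' implementation, and every function name and parameter here is an assumption for demonstration only.

```python
import numpy as np

def pca_basis(frames, k):
    """Return the mean frame and the top-k principal directions
    of a (num_frames, num_dofs) motion clip matrix."""
    mean = frames.mean(axis=0)
    # SVD of the centered data; rows of vt are principal directions.
    _, _, vt = np.linalg.svd(frames - mean, full_matrices=False)
    return mean, vt[:k]

def quantize(coeffs, bits=12):
    """Range-aware uniform quantization: each coefficient column is
    scaled by its own min/max range before rounding to integer levels."""
    lo, hi = coeffs.min(axis=0), coeffs.max(axis=0)
    scale = np.where(hi > lo, (2 ** bits - 1) / (hi - lo), 1.0)
    q = np.round((coeffs - lo) * scale).astype(np.int32)
    return q, lo, scale

def dequantize(q, lo, scale):
    return q / scale + lo

# Toy clip: 100 frames of a 30-DOF "skeleton" with low-rank structure,
# standing in for a subspace-compressible motion segment.
rng = np.random.default_rng(0)
frames = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 30))

mean, basis = pca_basis(frames, k=5)
coeffs = (frames - mean) @ basis.T        # subspace projection
q, lo, scale = quantize(coeffs)           # range-aware quantization
recon = dequantize(q, lo, scale) @ basis + mean

err = np.abs(recon - frames).max()        # worst-case reconstruction error
```

Because the toy data is exactly rank 5, the only reconstruction error left is the quantization step, which stays small thanks to the per-column range scaling.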
Year | DOI | Venue |
---|---|---|
2011 | 10.1109/TVCG.2010.87 | IEEE Trans. Vis. Comput. Graph. |
Keywords | Field | DocType
---|---|---|
similar motion clip, motion coherence, proposed feature-aware method, motion capture data, adaptive motion data representation, repeated clip pair, maximum compression gain, representation method, subspace-projected clip motion, human motion, repeated motion analysis, data structures, encoding, principal component analysis, motion estimation, data representation, trajectory, pixel | Motion capture, Computer vision, Quarter-pixel motion, Computer science, Interpolation, Coherence (physics), Theoretical computer science, Artificial intelligence, Motion analysis, Motion estimation, Quantization (signal processing), Trajectory | Journal
Volume | Issue | ISSN
---|---|---|
17 | 4 | 1941-0506
Citations | PageRank | References
---|---|---|
10 | 0.52 | 16
Authors |
---|
4 |
Name | Order | Citations | PageRank
---|---|---|---|
I-Chen Lin | 1 | 63 | 9.92 |
Jen-Yu Peng | 2 | 14 | 1.26 |
Chao-Chih Lin | 3 | 10 | 0.52 |
Ming-Han Tsai | 4 | 36 | 4.84 |