Abstract |
---|
The rapid development of autonomous driving, abnormal behavior detection, and behavior recognition creates an increasing demand for applications based on multi-person pose estimation, especially on mobile platforms. However, to achieve high accuracy, state-of-the-art methods tend to have large model sizes and complex post-processing algorithms, which incur intensive computation and long end-to-end latency. To solve this problem, we propose an architecture optimization and weight pruning framework to accelerate inference of multi-person pose estimation on mobile devices. With our optimization framework, we achieve up to 2.51x faster model inference with higher accuracy compared to a representative lightweight multi-person pose estimator. |
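The record does not include the paper's implementation. As a minimal sketch of the general idea behind weight pruning mentioned in the abstract, the snippet below zeroes out the smallest-magnitude fraction of a weight tensor; this is generic magnitude-based pruning, an assumption for illustration, and the authors' actual pruning scheme (e.g., pattern- or block-based) may differ.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with smallest magnitude.

    Illustrative sketch only; not the paper's specific pruning method.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune 50% of a small weight matrix
w = np.array([[0.1, -0.8],
              [0.05, 1.2]])
pruned = magnitude_prune(w, 0.5)
# The two smallest-magnitude entries (0.1 and 0.05) become zero
```

Pruned weights reduce both model size and, with sparse-aware kernels, on-device inference latency, which is the motivation stated in the abstract.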
Year | DOI | Venue |
---|---|---|
2021 | 10.24963/ijcai.2021/715 | IJCAI |
DocType | Citations | PageRank |
---|---|---|
Conference | 0 | 0.34 |
References | Authors |
---|---|
0 | 8 |
Name | Order | Citations | PageRank |
---|---|---|---|
Xuan Shen | 1 | 1 | 1.70 |
Geng Yuan | 2 | 0 | 2.70 |
Wei Niu | 3 | 24 | 11.21 |
Xiaolong Ma | 4 | 22 | 5.90 |
Jiexiong Guan | 5 | 2 | 2.38 |
Zhengang Li | 6 | 15 | 7.27 |
Bin Ren | 7 | 82 | 18.03 |
Yanzhi Wang | 8 | 1082 | 136.11 |