Title
Interacting Attention Graph for Single Image Two-Hand Reconstruction
Abstract
Graph convolutional networks (GCNs) have achieved great success in the single-hand reconstruction task, while interacting two-hand reconstruction by GCN remains unexplored. In this paper, we present Interacting Attention Graph Hand (IntagHand), the first graph-convolution-based network that reconstructs two interacting hands from a single RGB image. To solve the occlusion and interaction challenges of two-hand reconstruction, we introduce two novel attention-based modules in each upsampling step of the original GCN. The first module is the pyramid image feature attention (PIFA) module, which utilizes multiresolution features to implicitly obtain vertex-to-image alignment. The second module is the cross hand attention (CHA) module, which encodes the coherence of interacting hands by building dense cross-attention between the vertices of the two hands. As a result, our model outperforms all existing two-hand reconstruction methods by a large margin on the InterHand2.6M benchmark. Moreover, ablation studies verify the effectiveness of both the PIFA and CHA modules for improving reconstruction accuracy. Results on in-the-wild images and live video streams further demonstrate the generalization ability of our network. Our code is available at https://github.com/Dw1010/IntagHand.
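The dense cross-attention idea behind the CHA module can be illustrated with a minimal sketch: every vertex feature of one hand attends to all vertex features of the other hand, and the attended context is added back as a residual update. This is a simplified single-head version in NumPy with no learned projections; the vertex count (98) and feature dimension (16) are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_hand_attention(query_verts, context_verts):
    """Dense cross-attention between two hands' vertex features.

    query_verts:   (Vq, D) vertex features of one hand
    context_verts: (Vc, D) vertex features of the other hand
    Returns query_verts residually updated by attended context.
    (Single head, no learned projections -- a sketch, not the exact CHA layer.)
    """
    d = query_verts.shape[-1]
    scores = query_verts @ context_verts.T / np.sqrt(d)  # (Vq, Vc)
    weights = softmax(scores, axis=-1)                   # rows sum to 1
    return query_verts + weights @ context_verts         # residual update

rng = np.random.default_rng(0)
left = rng.standard_normal((98, 16))    # hypothetical coarse left-hand vertices
right = rng.standard_normal((98, 16))   # hypothetical coarse right-hand vertices
left_new = cross_hand_attention(left, right)
right_new = cross_hand_attention(right, left)
print(left_new.shape, right_new.shape)  # (98, 16) (98, 16)
```

In the full network this update would use learned query/key/value projections and run inside every GCN upsampling block, but the core operation is the same: each hand's geometry is refined conditioned on the other hand's.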
Year
2022
DOI
10.1109/CVPR52688.2022.00278
Venue
IEEE Conference on Computer Vision and Pattern Recognition
Keywords
3D from single images, Face and gestures, Pose estimation and tracking
DocType
Conference
Volume
2022
Issue
1
Citations
0
PageRank
0.34
References
0
Authors
7
Name            Order  Citations  PageRank
Mengcheng Li    1      0          0.68
An Liang        2      8          4.00
Hongwen Zhang   3      0          0.34
Lianpeng Wu     4      0          0.34
Feng Chen       5      6          4.07
Tao Yu          6      8          5.87
Yebin Liu       7      688        49.05