Title
Detailed 2D-3D Joint Representation for Human-Object Interaction
Abstract
Human-Object Interaction (HOI) detection lies at the core of action understanding. Besides 2D information such as human/object appearance and locations, 3D pose is also usually utilized in HOI learning because of its view independence. However, coarse 3D body joints carry only sparse body information and are insufficient for understanding complex interactions; detailed 3D body shape is needed to go further. Meanwhile, the 3D representation of the interacted object has not been fully studied in HOI learning. In light of these issues, we propose a detailed 2D-3D joint representation learning method. First, we utilize a single-view human body capture method to obtain detailed 3D body, face, and hand shapes. Next, we estimate the 3D object location and size with reference to the 2D human-object spatial configuration and object category priors. Finally, a joint learning framework and cross-modal consistency tasks are proposed to learn the joint HOI representation. To better evaluate models' capacity to resolve 2D ambiguity, we propose a new benchmark named Ambiguous-HOI, consisting of challenging ambiguous images. Extensive experiments on a large-scale HOI benchmark and Ambiguous-HOI demonstrate the effectiveness of our method. Code and data are available at https://github.com/DirtyHarryLYL/DJ-RN.
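The abstract's second step, inferring a 3D object location and size from a 2D box and a category size prior, can be illustrated with a minimal pinhole-camera sketch. The function name, arguments, and the numbers in the example below are hypothetical, and the paper's actual procedure (which also leverages the human-object spatial configuration) may differ; this only shows the underlying similar-triangles idea.

    import numpy as np

    def estimate_object_3d(bbox, prior_size, focal, principal_point):
        """Rough pinhole back-projection of a 2D object box to a 3D
        location, using a category-level real-size prior (meters).
        Illustrative sketch only, not the paper's exact method."""
        x1, y1, x2, y2 = bbox
        pixel_size = max(x2 - x1, y2 - y1)       # apparent size in pixels
        # Similar triangles: an object of prior_size meters spanning
        # pixel_size pixels lies at depth z = focal * prior_size / pixel_size.
        z = focal * prior_size / pixel_size
        cx, cy = principal_point
        u, v = 0.5 * (x1 + x2), 0.5 * (y1 + y2)  # box center in pixels
        # Back-project the center ray to depth z.
        x = (u - cx) * z / focal
        y = (v - cy) * z / focal
        return np.array([x, y, z]), prior_size

    # Example (made-up values): a 100-pixel-wide bicycle with a ~1.8 m
    # size prior, focal length 1000 px, principal point (320, 240).
    center, size = estimate_object_3d((270, 190, 370, 250), 1.8, 1000.0, (320, 240))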
Year
2020
DOI
10.1109/CVPR42600.2020.01018
Venue
CVPR
DocType
Conference
Citations
1
PageRank
0.35
References
32
Authors
7
Name          Order   Citations   PageRank
Yonglu Li     1       22          7.05
Xinpeng Liu   2       5           1.75
Han Lu        3       1           0.35
Shiyi Wang    4       1           0.69
Junqi Liu     5       1           0.35
Jiefeng Li    6       21          3.65
Cewu Lu       7       993         62.08