**Abstract**

Egocentric videos offer fine-grained information for high-fidelity modeling of human behaviors. Hands and the objects they interact with are a crucial aspect of understanding a viewer's behaviors and intentions. We provide a labeled dataset consisting of 11,243 egocentric images with per-pixel segmentation labels of hands and objects being interacted with during a diverse array of daily activities. Our dataset is the first to label detailed hand-object contact boundaries. We introduce a context-aware compositional data augmentation technique to adapt to out-of-distribution egocentric YouTube videos. We show that our robust hand-object segmentation model and dataset can serve as a foundational tool to boost or enable several downstream vision applications, including hand state classification, video activity recognition, 3D mesh reconstruction of hand-object interactions, and video inpainting of hand-object foregrounds in egocentric videos. Dataset and code are available at: https://github.com/owenzlz/EgoHOS.
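The abstract's "compositional data augmentation" amounts to compositing labeled hand-object foregrounds onto new background frames. Below is a minimal sketch of that core paste operation, assuming paired image/mask files on disk; the file names, the Gaussian edge feathering, and the `composite_hand_object` helper are illustrative assumptions, and the paper's context-aware background selection step is omitted.

```python
# Sketch: paste a masked hand-object foreground onto a new background frame.
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

def composite_hand_object(fg_img, fg_mask, bg_img, feather=5):
    """Blend the hand-object foreground into a background frame.

    fg_img:  HxWx3 uint8 source frame containing the hand-object region
    fg_mask: HxW   uint8 binary mask (1 = hand or interacted object)
    bg_img:  HxWx3 uint8 target background frame of the same size
    """
    # Soften the mask edge so the paste boundary is less conspicuous.
    alpha = gaussian_filter(fg_mask.astype(np.float32), sigma=feather)
    alpha = np.clip(alpha, 0.0, 1.0)[..., None]  # HxWx1 blending weights
    out = alpha * fg_img.astype(np.float32) + (1.0 - alpha) * bg_img.astype(np.float32)
    return out.astype(np.uint8)

# Hypothetical usage with paired image/mask files:
fg = np.array(Image.open("frame_src.jpg").convert("RGB"))
mask = (np.array(Image.open("frame_src_mask.png")) > 0).astype(np.uint8)
bg = np.array(Image.open("frame_bg.jpg").convert("RGB").resize(fg.shape[1::-1]))
aug = composite_hand_object(fg, mask, bg)
Image.fromarray(aug).save("frame_aug.jpg")
```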
Year | DOI | Venue
---|---|---
2022 | 10.1007/978-3-031-19818-2_8 | European Conference on Computer Vision

Keywords | DocType | Citations
---|---|---
Datasets, Egocentric hand-object segmentation, Egocentric activity recognition, Hand-object mesh reconstruction | Conference | 0

PageRank | References | Authors
---|---|---
0.34 | 0 | 4

Name | Order | Citations | PageRank
---|---|---|---
Zhang Lingzhi | 1 | 0 | 2.37 |
Shenghao Zhou | 2 | 0 | 1.35 |
Simon Stent | 3 | 0 | 2.03 |
Jianbo Shi | 4 | 10207 | 1031.66 |