Title
IntegralAction: Pose-driven Feature Integration for Robust Human Action Recognition in Videos
Abstract
Most current action recognition methods rely heavily on appearance information, taking an RGB sequence of entire image regions as input. While effective at exploiting contextual information around humans, e.g., human appearance and scene category, they are easily fooled by out-of-context action videos where the context does not match the target action. In contrast, pose-based methods, which take only a sequence of human skeletons as input, suffer from inaccurate pose estimation or the inherent ambiguity of human pose. Integrating these two approaches has turned out to be non-trivial; training a model with both appearance and pose ends up with a strong bias towards appearance and does not generalize well to unseen videos. To address this problem, we propose to learn pose-driven feature integration that dynamically combines the appearance and pose streams by observing pose features on the fly. The main idea is to let the pose stream decide how much and which appearance information is used in integration, based on whether the given pose information is reliable. We show that the proposed IntegralAction achieves highly robust performance across in-context and out-of-context action video datasets. The code is available here.
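The abstract's main idea, a pose stream that gates how much appearance information enters the fused representation, can be illustrated with a minimal sketch. This is an assumption-laden toy using a sigmoid gate over per-channel appearance features; the function and weight names are illustrative and do not reproduce the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def integrate(app_feat, pose_feat, w_gate):
    """Hypothetical pose-driven integration: the pose stream predicts a
    per-channel gate in (0, 1) that scales the appearance features, so
    unreliable pose can suppress or admit appearance context."""
    gate = sigmoid(pose_feat @ w_gate)        # (batch, channels), each in (0, 1)
    fused = gate * app_feat + pose_feat       # gated appearance added to pose
    return fused, gate

# Toy inputs: batch of 2 clips, 8-channel appearance and pose embeddings
app = rng.standard_normal((2, 8))
pose = rng.standard_normal((2, 8))
w = rng.standard_normal((8, 8))   # illustrative gate weights

fused, gate = integrate(app, pose, w)
print(fused.shape, gate.shape)
```

Because the gate is a function of the pose features alone, the pose stream decides, per channel and per sample, how much appearance information flows into the fused feature, which matches the dynamic-integration behavior the abstract describes.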
Year: 2021
DOI: 10.1109/CVPRW53098.2021.00372
Venue: 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2021)
DocType: Conference
ISSN: 2160-7508
Citations: 0
PageRank: 0.34
References: 13
Authors: 4
Name             Order  Citations  PageRank
Gyeongsik Moon   1      17         2.97
Heeseung Kwon    2      4          1.76
Kyoung Mu Lee    3      3228       153.84
Minsu Cho        4      677        35.74