Title
ViGAT: Bottom-Up Event Recognition and Explanation in Video Using Factorized Graph Attention Network
Abstract
In this paper, a pure-attention bottom-up approach, called ViGAT, is proposed for event recognition and explanation in video. It utilizes an object detector together with a Vision Transformer (ViT) backbone network to derive object and frame features, and a head network to process these features. The ViGAT head consists of graph attention network (GAT) blocks factorized along the spatial and temporal dimensions in order to effectively capture both local and long-term dependencies between objects or frames. Moreover, using the weighted in-degrees (WiDs) derived from the adjacency matrices at the various GAT blocks, we show that the proposed architecture can identify the most salient objects and frames that explain the decision of the network. A comprehensive evaluation study is performed, demonstrating that the proposed approach provides state-of-the-art results on three large, publicly available video datasets (FCVID, MiniKinetics, ActivityNet). Source code is made publicly available at: https://github.com/bmezaris/ViGAT
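The explanation mechanism the abstract describes, ranking frames or objects by their weighted in-degree in a learned attention adjacency matrix, can be sketched as follows. This is a minimal illustrative example, not the paper's implementation; the toy adjacency matrix and function names are assumptions.

```python
import numpy as np

def weighted_in_degrees(adjacency: np.ndarray) -> np.ndarray:
    """Weighted in-degree of node j: sum of incoming edge weights, WiD_j = sum_i A[i, j]."""
    return adjacency.sum(axis=0)

def rank_salient(adjacency: np.ndarray) -> np.ndarray:
    """Indices of nodes (frames or objects), sorted by descending weighted in-degree."""
    return np.argsort(-weighted_in_degrees(adjacency))

# Toy attention adjacency over 4 frames (row i attends to column j).
A = np.array([
    [0.1, 0.6, 0.2, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.1, 0.7, 0.1, 0.1],
    [0.3, 0.4, 0.2, 0.1],
])

# Frame 1 receives the most attention mass, so it ranks first.
print(rank_salient(A))
```

In the ViGAT head this ranking is applied at the temporal GAT blocks to select the most salient frames, and at the spatial blocks to select the most salient objects, yielding the network's explanation.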
Year
2022
DOI
10.1109/ACCESS.2022.3213652
Venue
IEEE ACCESS
Keywords
Feature extraction, Spatiotemporal phenomena, Transformers, Event recognition, Proposals, Detectors, Data mining, Video recording, Object recognition, Video event recognition, eXplainable AI (XAI), graph attention network, factorized attention, bottom-up
DocType
Journal
Volume
10
ISSN
2169-3536
Citations
0
PageRank
0.34
References
0
Authors
3
Name                   Order  Citations  PageRank
Nikolaos Gkalelis      1      0          0.34
Dimitrios Daskalakis   2      0          0.34
V. Mezaris             3      293        16.26