Title
Bidirectional Attention-Recognition Model for Fine-grained Object Classification
Abstract
Fine-grained object classification (FGOC) is a challenging research topic in multimedia computing with machine learning, which faces two pivotal problems: focusing <italic xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">attention</italic> on the discriminative part regions, and then performing <italic xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">recognition</italic> with the part-based features. Existing approaches generally adopt a unidirectional two-step structure, which first locates the discriminative parts and then recognizes the part-based features. However, they neglect the fact that part localization and feature recognition can reinforce each other in a bidirectional process. In this paper, we propose a novel bidirectional attention-recognition model (BARM) to realize this bidirectional reinforcement for FGOC. The proposed BARM consists of one attention agent that proposes discriminative part regions and one recognition agent that extracts and recognizes features. Meanwhile, a feedback flow is established so that the recognition agent directly optimizes the attention agent. Therefore, in BARM the <italic xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">attention</italic> agent and the <italic xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">recognition</italic> agent reinforce each other bidirectionally, and the overall framework can be trained end-to-end without either object or part annotations. Moreover, a novel Multiple Random Erasing data augmentation is proposed, which exhibits impressive pertinence and superiority for FGOC. Evaluated on several extensive FGOC benchmarks, BARM outperforms the present state-of-the-art methods in classification accuracy. Furthermore, BARM exhibits clear interpretability and remains consistent with human perception in visualization experiments.
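The abstract names a Multiple Random Erasing augmentation but gives no implementation details here. Below is a minimal sketch, assuming the method generalizes standard Random Erasing (Zhong et al.) by erasing several random patches per image; the function name, parameters, and default values are all illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def multiple_random_erasing(img, num_patches=3, area_frac=(0.02, 0.1), rng=None):
    """Erase several random rectangular patches from an HxWxC float image.

    Sketch of a 'Multiple Random Erasing'-style augmentation: plain Random
    Erasing repeated num_patches times. Parameter names and defaults are
    illustrative assumptions only.
    """
    if rng is None:
        rng = np.random.default_rng()
    out = img.copy()
    h, w = out.shape[:2]
    for _ in range(num_patches):
        # Sample a patch area as a fraction of the image area, and an aspect ratio.
        area = rng.uniform(*area_frac) * h * w
        aspect = rng.uniform(0.3, 3.3)
        ph = min(h, max(1, int(round(np.sqrt(area * aspect)))))
        pw = min(w, max(1, int(round(np.sqrt(area / aspect)))))
        # Pick a top-left corner so the patch fits inside the image.
        top = rng.integers(0, h - ph + 1)
        left = rng.integers(0, w - pw + 1)
        # Fill the patch with random pixel values in [0, 1).
        out[top:top + ph, left:left + pw] = rng.uniform(
            0.0, 1.0, size=(ph, pw) + out.shape[2:])
    return out
```

In this sketch each patch is filled with random noise rather than a constant, one common variant of Random Erasing; the input image is left untouched and an augmented copy is returned.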
Year: 2020
DOI: 10.1109/TMM.2019.2954747
Venue: IEEE Transactions on Multimedia
Keywords: Feature extraction, Proposals, Annotations, Visualization, Task analysis, Training, Computational modeling
DocType: Journal
Volume: 22
Issue: 7
ISSN: 1520-9210
Citations: 5
PageRank: 0.43
References: 0
Authors: 6
1. Chuanbin Liu
2. Hongtao Xie
3. Zheng-Jun Zha
4. Lingyun Yu
5. Zhineng Chen
6. Yongdong Zhang