Title
Implicit and Explicit Concept Relations in Deep Neural Networks for Multi-Label Video/Image Annotation
Abstract
In this paper, we propose a deep convolutional neural network (DCNN) architecture that addresses the problem of video/image concept annotation by exploiting concept relations at two different levels. At the first level, we build on ideas from multi-task learning and propose an approach to learn concept-specific representations that are sparse linear combinations of representations of latent concepts. By enforcing the sharing of the latent concept representations, we exploit the implicit relations between the target concepts. At the second level, we build on ideas from structured output learning and propose the introduction, at training time, of a new cost term that explicitly models the correlations between the concepts. By doing so, we explicitly model the structure in the output space (i.e., the concept labels). Both of the above are implemented using standard convolutional layers and are incorporated in a single DCNN architecture that can then be trained end-to-end with standard back-propagation. Experiments on four large-scale video and image data sets show that the proposed DCNN improves concept annotation accuracy and outperforms the related state-of-the-art methods.
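The two levels described in the abstract can be illustrated with a toy NumPy sketch. This is not the paper's implementation: the sparsification rule, the toy scoring, and the squared-error correlation surrogate below are all illustrative assumptions; in the actual architecture these operations are realized with standard convolutional layers and trained end-to-end.

```python
import numpy as np

rng = np.random.default_rng(0)
n_latent, n_concepts, feat_dim = 8, 5, 16

# Shared latent-concept representations (in the paper these come from
# convolutional layers; here just random feature vectors for illustration).
latent = rng.standard_normal((n_latent, feat_dim))

# Level 1 (implicit relations): each target concept's representation is a
# sparse linear combination of the shared latent representations.
mix = rng.standard_normal((n_concepts, n_latent))
mix[np.abs(mix) < 1.0] = 0.0            # crude sparsification, illustrative only
concept_repr = mix @ latent             # concept-specific representations

# Level 2 (explicit relations): an extra training cost that pushes the
# predicted concept-correlation structure towards the label co-occurrences.
logits = concept_repr @ rng.standard_normal(feat_dim)   # toy concept scores
probs = 1.0 / (1.0 + np.exp(-logits))

labels = rng.integers(0, 2, n_concepts).astype(float)   # multi-label ground truth
target_corr = np.outer(labels, labels)  # label co-occurrence matrix
pred_corr = np.outer(probs, probs)      # predicted co-occurrence
corr_cost = np.mean((pred_corr - target_corr) ** 2)     # hypothetical surrogate
```

Because both the sparse mixing and the correlation cost are differentiable, they can sit inside a single network and be optimized jointly with back-propagation, which is the key point the abstract makes.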
Year
2019
DOI
10.1109/tcsvt.2018.2848458
Venue
IEEE Transactions on Circuits and Systems for Video Technology
Keywords
Task analysis, Correlation, Standards, Training, Electronic mail, Neural networks, Semantics
Field
Linear combination, Data set, Automatic image annotation, Annotation, Task analysis, Pattern recognition, Convolutional neural network, Computer science, Artificial intelligence, Artificial neural network, Semantics
DocType
Journal
Volume
29
Issue
6
ISSN
1051-8215
Citations
3
PageRank
0.42
References
0
Authors
3
Name | Order | Citations | PageRank
Fotini Markatopoulou | 1 | 35 | 5.93
Vasileios Mezaris | 2 | 803 | 81.40
Ioannis Patras | 3 | 1960 | 123.15