Title: Video Annotation by Active Learning and Semi-Supervised Ensembling
Abstract: Supervised and semi-supervised learning are frequently applied to annotate videos by mapping low-level features to semantic concepts. Due to the large semantic gap, the main constraint of these methods is that the information contained in a limited-size labeled dataset can hardly represent the distributions of the semantic concepts. In this paper, we propose a novel semi-automatic video annotation framework, active learning with semi-supervised ensembling, which addresses the shortcomings of current video annotation solutions. First, the initial training set is constructed based on a distribution analysis of the entire video dataset; then an active learning scheme is incorporated into a semi-supervised ensembling framework, which selects the samples that maximize the margin of the ensemble classifier using both labeled and unlabeled data. Experimental results show that the proposed method outperforms general semi-supervised learning algorithms and typical active learning algorithms in terms of annotation accuracy and stability.
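The margin-driven sample selection described in the abstract can be sketched generically. The snippet below is a minimal illustration of margin-based active learning with a voting ensemble, not the paper's exact criterion: it queries the unlabeled samples on which the ensemble's vote margin is smallest, since labeling the most contested samples tends to enlarge the margin of the retrained ensemble. The names (`ensemble_margin`, `select_queries`) and the toy threshold classifiers are hypothetical.

```python
def ensemble_margin(votes):
    """Vote margin of an ensemble on one sample: the fraction of votes for
    the top class minus the fraction for the runner-up class."""
    counts = {}
    for v in votes:
        counts[v] = counts.get(v, 0) + 1
    ranked = sorted(counts.values(), reverse=True)
    second = ranked[1] if len(ranked) > 1 else 0
    return (ranked[0] - second) / len(votes)

def select_queries(classifiers, unlabeled, k):
    """Pick the indices of the k unlabeled samples with the smallest
    ensemble margin (strongest disagreement) to send for labeling."""
    scored = [(ensemble_margin([clf(x) for clf in classifiers]), i)
              for i, x in enumerate(unlabeled)]
    scored.sort()
    return [i for _, i in scored[:k]]

# Toy usage: three threshold classifiers disagree most near their thresholds.
# (t=t binds each threshold at definition time, avoiding late-binding bugs.)
clfs = [lambda x, t=t: int(x > t) for t in (0.3, 0.5, 0.7)]
queries = select_queries(clfs, [0.1, 0.4, 0.6, 0.9], k=2)  # → [1, 2]
```

In practice the selected samples would be labeled by an annotator, added to the training set, and the ensemble retrained, repeating until the labeling budget is exhausted.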
Year: 2006
DOI: 10.1109/ICME.2006.262673
Venue: ICME
Keywords: semi-supervised ensembling, semantic networks, low-level feature mapping, video annotation, learning (artificial intelligence), pattern classification, video dataset, ensemble classifier, maximization, active learning, distribution analysis, video databases, labeling, engines, automation, skeleton, semi-supervised learning, assembly, stability, semantic gap, testing
Field: Data mining, Semi-supervised learning, Computer science, Automation, Artificial intelligence, Classifier (linguistics), Annotation, Active learning, Pattern recognition, Semantic gap, Semantic network, Maximization, Machine learning
DocType: Conference
ISBN: 1-4244-0367-7
Citations: 5
PageRank: 0.43
References: 7
Authors: 5
Name | Order | Citations | PageRank
Yan Song | 1 | 734 | 51.98
Guo-Jun Qi | 2 | 2778 | 119.78
Xian-Sheng Hua | 3 | 6566 | 328.17
Li-Rong Dai | 4 | 1070 | 117.92
Ren-Hua Wang | 5 | 344 | 41.36