Title |
---|
Bimodal fusion of low-level visual features and high-level semantic features for near-duplicate video clip detection |
Abstract |
---|
The detection of near-duplicate video clips (NDVCs) is an area of current research interest and intense development. Most NDVC detection methods represent video clips with a unique set of low-level visual features, typically describing color or texture information. However, low-level visual features are sensitive to transformations of the video content. Given the observation that transformations tend to preserve the semantic information conveyed by the video content, we propose a novel approach for identifying NDVCs, making use of both low-level visual features (that is, MPEG-7 visual features) and high-level semantic features (that is, 32 semantic concepts detected using trained classifiers). Experimental results obtained for the publicly available MUSCLE-VCD-2007 and TRECVID 2008 video sets show that bimodal fusion of visual and semantic features facilitates robust NDVC detection. In particular, the proposed method is able to identify NDVCs with a low missed detection rate (3% on average) and a low false alarm rate (2% on average). In addition, the combined use of visual and semantic features outperforms the separate use of either of them in terms of NDVC detection effectiveness. Further, we demonstrate that the effectiveness of the proposed method is on par with or better than that of three state-of-the-art NDVC detection methods, making use of either temporal ordinal measurement, features computed using the Scale-Invariant Feature Transform (SIFT), or bag-of-visual-words (BoVW). We also show that the influence of the effectiveness of semantic concept detection on the effectiveness of NDVC detection is limited, as long as the mean average precision (MAP) of the semantic concept detectors used is higher than 0.3. Finally, we illustrate that the computational complexity of our NDVC detection method is competitive with that of the three aforementioned NDVC detection methods. |
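The bimodal fusion the abstract describes can be illustrated generically as a weighted late fusion of two similarity scores, one computed over low-level visual features and one over semantic concept scores. The sketch below is a minimal illustration of that general idea; the cosine metric, the weight `alpha`, and the threshold are hypothetical choices for illustration, not values taken from the paper.

```python
def cosine_sim(a, b):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def fused_score(visual_a, visual_b, semantic_a, semantic_b, alpha=0.5):
    # Weighted late fusion of a visual and a semantic similarity score.
    # `alpha` balances the two modalities (hypothetical value, not from the paper).
    return (alpha * cosine_sim(visual_a, visual_b)
            + (1.0 - alpha) * cosine_sim(semantic_a, semantic_b))

def is_near_duplicate(score, threshold=0.8):
    # Clips whose fused similarity exceeds the threshold are flagged as NDVCs.
    return score >= threshold
```

In this scheme, a transformation that distorts the visual features (e.g., re-encoding) lowers only the visual term, while the semantic term, being more robust to such transformations, keeps the fused score high for true near-duplicates.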
Year | DOI | Venue |
---|---|---|
2011 | 10.1016/j.image.2011.04.001 | Sig. Proc.: Image Comm. |
Keywords | Field | DocType |
---|---|---|
high-level semantic feature,bimodal fusion,semantic feature,ndvc detection,semantic features,ndvc detection method,ndvc detection effectiveness,video copy detection,low-level visual feature,video signatures,video content,facilitates robust ndvc detection,near-duplicate video clip detection,detection rate,near-duplicates,aforementioned ndvc detection method,semantic concept detection,bag of visual words,false alarm rate,scale invariant feature transform,computational complexity,mean average precision | Computer vision,Scale-invariant feature transform,Pattern recognition,Computer science,TRECVID,Fusion,Semantic information,Video copy detection,Artificial intelligence,Constant false alarm rate,Feature transform,Computational complexity theory | Journal |
Volume | Issue | Journal |
---|---|---|
26 | 10 | Signal Processing: Image Communication |
Citations | PageRank | References |
---|---|---|
1 | 0.40 | 37 |
Authors |
---|
4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Hyunseok Min | 1 | 55 | 7.87 |
Jae Young Choi | 2 | 459 | 40.10 |
Wesley De Neve | 3 | 525 | 54.41 |
Yong Man Ro | 4 | 1192 | 125.87 |