Title
Feature fusion and redundancy pruning for rush video summarization
Abstract
This paper presents a video summarization technique for rushes that employs high-level feature fusion to identify segments for inclusion. It aims to capture distinct video events using a variety of features: k-means based weighting, speech, camera motion, significant differences in HSV color space, and a dynamic time warping (DTW) based feature that suppresses repeated scenes. The feature functions drive a weighted k-means clustering that identifies visually distinct, important segments to constitute the final summary. The optimal weights for the individual features are obtained with a gradient descent algorithm that maximizes the recall of ground-truth events from representative training videos. Analysis reveals a lengthy computation time but high-quality results (60% average recall over 42 test videos), based on manually judged inclusion of distinct shots. The summaries were judged relatively easy to view and contained an average amount of redundancy.
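The abstract outlines a pipeline built on weighted k-means clustering over fused per-frame feature scores, DTW to suppress repeated takes, and gradient descent to tune the feature weights. Below is a minimal, illustrative Python sketch of two of those building blocks (a weighted k-means and a DTW cost). All names, the toy feature matrix, and the weight vector are hypothetical placeholders, not the authors' implementation, which learns its weights by gradient descent on training videos.

```python
import numpy as np

def dtw_distance(a, b):
    # Classic dynamic time warping cost between two 1-D score sequences;
    # a low cost between two segments suggests a repeated take to prune.
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def weighted_kmeans(features, weights, k, iters=50, seed=0):
    # Lloyd's k-means on per-frame features scaled dimension-wise by `weights`
    # (standing in for the values the paper tunes via gradient descent).
    rng = np.random.default_rng(seed)
    x = features * weights
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        dists = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for c in range(k):
            members = x[labels == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    return labels, centers

if __name__ == "__main__":
    # Toy stand-in: 300 frames, 5 hypothetical feature scores per frame
    # (e.g. speech, camera motion, HSV difference, ...).
    rng = np.random.default_rng(1)
    frames = rng.random((300, 5))
    w = np.array([1.0, 0.6, 0.8, 0.4, 0.9])  # assumed learned feature weights
    labels, centers = weighted_kmeans(frames, w, k=8)
    # One representative frame per cluster -> candidate summary segments.
    reps = [int(np.argmin(((frames * w - c) ** 2).sum(axis=1))) for c in centers]
    print("representative frames:", sorted(reps))
    # Low DTW cost between two feature-score subsequences flags a likely retake.
    print("dtw cost:", round(dtw_distance(frames[:20, 0], frames[40:60, 0]), 3))
```

Scaling each feature dimension by a learned weight before clustering is one simple way to realize a "weighted" k-means; the paper's actual feature functions and weight-learning procedure are described here only at the level of the abstract.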
Year
2007
DOI
10.1145/1290031.1290047
Venue
TVS
Keywords
redundancy pruning,high-level feature fusion,dynamic time warping,average recall,rush video summarization,feature function,representative training video,lengthy computation time,distinct video event,individual feature,manually-judged inclusion of distinct shot,average amount,k means,gradient descent,ground truth
Field
HSL and HSV,Automatic summarization,Computer vision,Gradient descent,Weighting,Dynamic time warping,Pattern recognition,Computer science,Redundancy (engineering),Ground truth,Artificial intelligence,Cluster analysis
DocType
Conference
Citations
15
PageRank
0.78
References
8
Authors
7
Name | Order | Citations | PageRank
Jim Kleban | 1 | 198 | 9.81
Anindya Sarkar | 2 | 233 | 15.01
Emily Moxley | 3 | 156 | 8.95
Stephen Mangiat | 4 | 24 | 1.72
Swapna Joshi | 5 | 28 | 3.18
Thomas Kuo | 6 | 27 | 2.41
B. S. Manjunath | 7 | 7561 | 783.37