Abstract
---
This paper proposes a video summarization method based on novel spatio-temporal features that combine motion magnitude, object class prediction, and saturation. Motion magnitude measures how much motion occurs in a video, object class prediction provides information about the objects appearing in it, and saturation measures its colorfulness. Convolutional neural networks (CNNs) are used for object class prediction. The normalized features are summed per shot, the shots are ranked in descending order of this sum, and the summary is formed from the highest-ranking shots. The ranking can also be conditioned on the object class, so that the high-ranking shots for different object classes form alternative summaries of the input video. The method is evaluated on the SumMe dataset, and the results show that it outperforms the worst human summary as well as most state-of-the-art video summarization methods.
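The ranking step described in the abstract (normalize each feature across shots, sum per shot, rank descending, keep the top shots) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature values and the choice of min-max normalization and top-k cutoff are assumptions for the example.

```python
import numpy as np

# Hypothetical per-shot feature scores, one row per shot:
# columns = [motion magnitude, object-class confidence, saturation].
# Values are illustrative only.
features = np.array([
    [3.2, 0.90, 0.4],
    [1.1, 0.20, 0.8],
    [5.0, 0.70, 0.6],
    [0.5, 0.95, 0.3],
])

# Min-max normalize each feature column across shots so they are comparable.
mins = features.min(axis=0)
maxs = features.max(axis=0)
norm = (features - mins) / (maxs - mins)

# Score each shot by the sum of its normalized features; rank descending.
scores = norm.sum(axis=1)
ranking = np.argsort(scores)[::-1]

# The summary keeps the top-k shots in temporal order (k = 2 here, arbitrary).
summary_shots = sorted(int(i) for i in ranking[:2])
print(summary_shots)  # indices of the selected shots
```

Conditioning on an object class, as the abstract mentions, would amount to restricting or reweighting the object-class column before summing.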
Year | Venue | Keywords
---|---|---
2017 | 2017 24th IEEE International Conference on Image Processing (ICIP) | Video Summarization, Video Analysis, Motion Magnitude, Saturation, Convolutional Neural Networks
Field | DocType | ISSN
---|---|---
Automatic summarization, Magnitude (mathematics), Computer vision, Colorfulness, Normalization (statistics), Pattern recognition, Ranking, Visualization, Computer science, Feature extraction, Artificial intelligence, Artificial neural network | Conference | 1522-4880
Citations | PageRank | References
---|---|---
0 | 0.34 | 0
Authors
---
2
Name | Order | Citations | PageRank
---|---|---|---
Hyunwoo Nam | 1 | 1 | 0.77 |
Chang D. Yoo | 2 | 375 | 45.88 |