Title
Video Summarization by Deep Visual and Categorical Diversity
Abstract
The authors propose a video-summarisation method based on visual and categorical diversity, using pre-trained deep models. Their method extracts visual features from a pre-trained deep convolutional network (DCN) and categorical features from a pre-trained word-embedding matrix. From this visual and categorical information they obtain a video diversity estimate, which serves as an importance score for selecting the segments of the input video that best describe it. The method also supports queries during the search process, personalising the resulting summaries to particular intended purposes. Performance is evaluated with different pre-trained DCN models to select the architecture with the best throughput, and the method is then compared with other state-of-the-art video-summarisation proposals in a data-driven evaluation on the public SumMe dataset, which contains videos annotated with per-fragment importance. The results show that the method outperforms the other proposals on most of the examples. As an additional advantage, the method admits a simple and direct implementation that requires no training stage.
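The abstract describes scoring segments by their diversity with respect to the rest of the video and keeping the highest-scoring ones. A minimal sketch of that idea follows; it is not the authors' exact formulation. The feature vectors, the cosine distance, and the mean-pairwise-distance score are illustrative assumptions standing in for the paper's DCN and word-embedding features.

```python
# Hedged sketch of diversity-based segment selection (illustrative only,
# not the authors' method). Each segment is represented by a feature
# vector; its importance is its mean distance to all other segments.
import math

def cosine_distance(a, b):
    # 1 - cosine similarity between two feature vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def diversity_scores(features):
    # Importance score: mean pairwise distance to the other segments
    n = len(features)
    return [sum(cosine_distance(features[i], features[j])
                for j in range(n) if j != i) / (n - 1)
            for i in range(n)]

def summarise(features, k):
    # Keep the k most diverse segments, returned in temporal order
    scores = diversity_scores(features)
    ranked = sorted(range(len(features)),
                    key=lambda i: scores[i], reverse=True)
    return sorted(ranked[:k])

# Toy per-segment feature vectors (standing in for pooled DCN activations)
segments = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
print(summarise(segments, 2))  # picks one segment from each visual cluster
```

In the paper the per-segment features combine visual (DCN) and categorical (word-embedding) information, and a query can reweight the categorical part before scoring; the sketch above shows only the diversity-scoring skeleton.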
Year
2019
DOI
10.1049/iet-cvi.2018.5436
Venue
IET Computer Vision
Keywords
video signal processing, feature extraction, learning (artificial intelligence), neural nets, video retrieval, query processing
Field
Architecture, Pattern recognition, Categorical models, Categorical variable, Artificial intelligence, Throughput, Mathematics, Machine learning
DocType
Journal
Volume
13
Issue
6
ISSN
1751-9632
Citations
0
PageRank
0.34
References
0
Authors
4
Order  Name                 Citations  PageRank
1      Pedro Atencio Ortiz  2          1.06
2      German Sanchez       0          0.34
3      John William Branch  91         1.23
4      Claudio Delrieux     481        4.57