Title: Comparing keyframe summaries of egocentric videos: Closest-to-centroid baseline
Abstract: Evaluation of keyframe video summaries is a notoriously difficult problem. So far, there is no consensus on guidelines, protocols, benchmarks and baseline models. This study contributes in three ways: (1) We propose a new baseline model for creating a keyframe summary, called Closest-to-Centroid (CC), and show that it compares favourably with the two most popular baselines: uniform sampling and choosing the mid-event frame. (2) We also propose a method for matching the visual appearance of keyframes, suitable for comparing summaries of egocentric videos and lifelogging photostreams. (3) We examine 24 image feature spaces (different descriptors), including colour, texture, shape, motion, and a feature space extracted by a pre-trained convolutional neural network (CNN). Our results on the four egocentric videos in the UTE database favour low-level shape and colour feature spaces for use with CC.
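The Closest-to-Centroid idea as described in the abstract can be sketched as follows: for each video segment, represent every frame by a feature vector, compute the mean (centroid) of those vectors, and select the frame whose vector lies nearest to the centroid. This is a minimal illustrative sketch, not the paper's reference implementation; the function name and the use of Euclidean distance are assumptions.

```python
import numpy as np

def closest_to_centroid(features):
    """Pick the index of the representative keyframe for one segment.

    features: array of shape (n_frames, n_dims), one feature vector
    per frame (e.g. a colour histogram or CNN embedding).
    Returns the index of the frame closest to the segment centroid.
    """
    features = np.asarray(features, dtype=float)
    centroid = features.mean(axis=0)                     # segment centroid
    dists = np.linalg.norm(features - centroid, axis=1)  # Euclidean distances
    return int(np.argmin(dists))                         # nearest frame wins
```

Applied per event/segment, this yields one keyframe per segment; the resulting summary can then be compared against uniform sampling or the mid-event frame baselines mentioned above.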
Year: 2017
DOI: 10.1109/IPTA.2017.8310123
Venue: 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA)
Keywords: Video summarisation, Keyframe selection, Egocentric video, Image feature descriptors, Closest-to-Centroid baseline model, Keyframe evaluation protocol
Field: Computer vision, Histogram, Feature vector, Pattern recognition, Convolutional neural network, Visualization, Computer science, Feature extraction, Sampling (statistics), Artificial intelligence, Centroid, Visual appearance
DocType: Conference
ISSN: 2154-512X
ISBN: 978-1-5386-1843-1
Citations: 0
PageRank: 0.34
References: 30
Authors: 3
Name                  Order  Citations  PageRank
Ludmila I. Kuncheva   1      4          2.13
Paria Yousefi         2      3          1.75
Jurandy Almeida       3      431        35.15