Title
Learning personalized video highlights from detailed MPEG-7 metadata
Abstract
We present a new framework for generating personalized video digests from detailed event metadata. In the new approach, high-level semantic features (e.g., the number of offensive events) are extracted from an existing metadata signal using time windows (e.g., features computed over 16-second intervals). Personalized video digests are generated by a supervised learning algorithm that takes as input examples of important and unimportant events. Window-based features extracted from the metadata are used to train the system and build a classifier that, given the metadata for a new video, classifies segments as important or unimportant for a specific user, thereby generating a personalized video digest. Our experimental results on soccer video suggest that high-level semantic information extracted from existing metadata can be used effectively (80% precision and 85% recall under cross-validation) to generate personalized video digests.
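As an illustration of the window-based classification idea described above, the following is a minimal sketch, not the authors' implementation: it assumes a fixed 16-second window, a hypothetical set of soccer event types, and a generic scikit-learn decision tree as the supervised learner, since none of those details are fixed by the record here.

# Illustrative sketch of window-based digest generation from event metadata.
# Event types, window length, and the choice of classifier are assumptions.
from collections import Counter
from sklearn.tree import DecisionTreeClassifier

WINDOW_SEC = 16  # time-window length used to aggregate metadata features

def window_features(events, video_length, window_sec=WINDOW_SEC,
                    event_types=("offensive", "defensive", "foul", "goal")):
    """Count each event type inside consecutive fixed-length windows.

    `events` is a list of (timestamp_sec, event_type) pairs taken from the
    MPEG-7-style metadata; the result has one feature row per window.
    """
    n_windows = int(video_length // window_sec) + 1
    counts = [Counter() for _ in range(n_windows)]
    for t, etype in events:
        counts[int(t // window_sec)][etype] += 1
    return [[c[e] for e in event_types] for c in counts]

# Hypothetical training data: per-window features plus per-user labels
# (1 = important to this user, 0 = unimportant).
train_events = [(5, "offensive"), (12, "foul"), (20, "goal"), (40, "defensive")]
X_train = window_features(train_events, video_length=60)
y_train = [1, 1, 0, 0]  # one label per 16-second window

clf = DecisionTreeClassifier().fit(X_train, y_train)

# For a new video, classify each window and keep the important ones.
new_events = [(3, "offensive"), (18, "offensive"), (35, "goal")]
X_new = window_features(new_events, video_length=48)
digest_windows = [i for i, label in enumerate(clf.predict(X_new)) if label == 1]
print(digest_windows)  # indices of windows to include in the personalized digest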
Year
2002
DOI
10.1109/ICIP.2002.1037977
Venue
Proceedings of the 2002 International Conference on Image Processing (ICIP 2002)
Keywords
feature extraction, image classification, learning (artificial intelligence), meta data, video signal processing, MPEG-7 metadata, window-based features, detailed event metadata, high-level semantic features, important events, metadata signal, offensive events, personalized video digests, semantic information, soccer video, supervised learning algorithm, time windows, unimportant events
Field
Metadata repository, Metadata, Computer vision, Information retrieval, Computer science, Supervised learning, Feature extraction, Bandwidth (signal processing), Artificial intelligence, Contextual image classification, Classifier (linguistics), Cross-validation
DocType
Conference
Volume
1
ISSN
1522-4880
Citations
34
PageRank
4.64
References
4
Authors
4
Name                 Order  Citations  PageRank
Alejandro Jaimes     1      1461       104.52
Tomio Echigo         2      348        25.41
Masayoshi Teraguchi  3      43         5.85
Fumiko Satoh         4      34         4.64