Title
Multi-modal Language Models for Lecture Video Retrieval
Abstract
We propose Multi-modal Language Models (MLMs), which adapt latent variable techniques for document analysis to exploring co-occurrence relationships in multi-modal data. In this paper, we focus on the application of MLMs to indexing text from slides and speech in lecture videos, and subsequently employ a multi-modal probabilistic ranking function for lecture video retrieval. The MLM achieves highly competitive results against well-established retrieval methods such as the Vector Space Model and Probabilistic Latent Semantic Analysis. When noise is present in the data, retrieval performance with MLMs is shown to improve with the quality of the spoken text extracted from the video.
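To make the abstract's high-level description concrete, below is a minimal sketch (not the authors' code) of the general idea: a PLSA-style latent variable model in which slide text and spoken text share per-video topic mixtures, followed by a simple query-likelihood scorer. The function names, EM schedule, and smoothing constants are illustrative assumptions, not the MLM formulation from the paper.

```python
import numpy as np


def fit_multimodal_plsa(slide_counts, speech_counts, n_topics=16, n_iter=50, seed=0):
    """EM over two term-count matrices that share document-topic mixtures.

    slide_counts : (n_docs, V_slide) counts of slide terms (e.g. from OCR)
    speech_counts: (n_docs, V_speech) counts of spoken terms (e.g. from ASR)
    Returns p(z|d), p(w_slide|z), p(w_speech|z). Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    n_docs = slide_counts.shape[0]
    p_z_d = rng.random((n_docs, n_topics))
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    p_w_z = [rng.random((n_topics, m.shape[1])) for m in (slide_counts, speech_counts)]
    for i in range(2):
        p_w_z[i] /= p_w_z[i].sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        acc_z_d = np.zeros_like(p_z_d)
        for i, counts in enumerate((slide_counts, speech_counts)):
            # E-step: responsibilities p(z|d,w) ∝ p(z|d) p(w|z), shape (D, K, V)
            resp = p_z_d[:, :, None] * p_w_z[i][None, :, :]
            resp /= resp.sum(axis=1, keepdims=True) + 1e-12
            weighted = resp * counts[:, None, :]          # n(d,w) * p(z|d,w)
            # M-step pieces: per-modality topic-word distributions,
            # plus topic-mixture mass shared across both modalities
            new_w = weighted.sum(axis=0)
            p_w_z[i] = new_w / (new_w.sum(axis=1, keepdims=True) + 1e-12)
            acc_z_d += weighted.sum(axis=2)
        p_z_d = acc_z_d / (acc_z_d.sum(axis=1, keepdims=True) + 1e-12)
    return p_z_d, p_w_z[0], p_w_z[1]


def query_log_likelihood(query_term_ids, p_z_d, p_w_z):
    """Score every video for a query expressed in one modality's vocabulary,
    using topic-smoothed term probabilities p(w|d) = sum_z p(w|z) p(z|d)."""
    p_w_d = p_z_d @ p_w_z                                  # (n_docs, V)
    return np.log(p_w_d[:, query_term_ids] + 1e-12).sum(axis=1)
```

In a retrieval setting along the lines sketched here, a query would be scored against both the slide and speech vocabularies and the two log-likelihoods combined (for example, by a weighted sum); the specific weighting and ranking function used in the paper is not reproduced here.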
Year
2014
DOI
10.1145/2647868.2654964
Venue
ACM Multimedia 2014
Keywords
content analysis and indexing, latent variable modeling, multi-modal probabilistic ranking, multi-modal retrieval
Field
Divergence-from-randomness model, Information retrieval, Computer science, Latent variable model, Search engine indexing, Latent variable, Probabilistic latent semantic analysis, Natural language processing, Artificial intelligence, Vector space model, Probabilistic logic, Language model
DocType
Conference
Citations
6
PageRank
0.46
References
12
Authors
4
Name             Order  Citations  PageRank
Huizhong Chen    1      253        11.32
Matthew Cooper   2      798        76.01
Dhiraj Joshi     3      2719       122.87
Bernd Girod      4      8988       1062.96