Title
Meeting state recognition from visual and aural labels
Abstract
In this paper we present a meeting state recognizer based on a combination of multi-modal sensor data in a smart room. Our approach trains a statistical model on semantic cues generated by perceptual components, each of which produces cues by processing the output of one or more sensors. The recognizer is designed to work with an arbitrary combination of multi-modal input sensors. We define a set of states representing both meeting and non-meeting situations, together with a set of features on which the classification is based. This allows us to model situations such as a presentation or a break, which are important pieces of information for many applications. Since appropriate multi-modal corpora are currently scarce, we have hand-annotated a set of meeting recordings to verify our statistical classification. We have also compared several statistical classification methods and validated them on this hand-annotated corpus of real meeting data.
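The approach outlined in the abstract, training statistical classifiers on semantic cues produced by perceptual components and comparing several classification methods against hand-annotated data, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the cue features, state labels, synthetic data, and the choice of scikit-learn classifiers are all assumptions made for the example.

```python
# Minimal sketch (not the paper's code): classify meeting states such as
# "presentation", "discussion", and "break" from per-window feature vectors
# derived from perceptual components. Features, labels, and classifiers
# here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Hypothetical meeting/non-meeting states the classifier distinguishes.
STATES = ["presentation", "discussion", "break"]

# Placeholder data standing in for a hand-annotated corpus: each row is a
# time window of multi-modal cue features (e.g. number of people detected,
# speech activity, someone standing at the whiteboard).
rng = np.random.default_rng(0)
X = rng.random((300, 4))                    # synthetic cue features
y = rng.integers(0, len(STATES), 300)       # synthetic state labels (indices into STATES)

# Compare several statistical classification methods via cross-validation,
# mirroring the paper's comparison of classifiers on annotated recordings.
for clf in (GaussianNB(), RandomForestClassifier(n_estimators=100)):
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{type(clf).__name__}: mean accuracy {scores.mean():.3f}")
```

In practice the synthetic `X` and `y` would be replaced by feature vectors extracted from the smart-room perceptual components and by the hand-annotated state labels, respectively.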
Year
2007
DOI
10.1007/978-3-540-78155-4_3
Venue
MLMI
Keywords
meeting state recognition, multi-modal sensor data, statistical classification, perceptual component, meeting recording, multi-modal input sensor, real meeting data, statistical classification method, meeting state recognizer, aural label, best classification, appropriate multi-modal corpus, statistical model
Field
State recognition, Pattern recognition, Computer science, Speech recognition, Artificial intelligence, Statistical model, Statistical classification, Perception, Multiple sensors, Machine learning, Smart rooms
DocType
Conference
Volume
4892
ISSN
0302-9743
ISBN
3-540-78154-4
Citations
4
PageRank
0.71
References
8
Authors
4
Name | Order | Citations | PageRank
Jan Cuřín | 1 | 31 | 5.51
Pascal Fleury | 2 | 30 | 3.77
Jan Kleindienst | 3 | 220 | 23.74
Robert Kessl | 4 | 5 | 1.07