Title
Constrained speaker diarization of TV series based on visual patterns
Abstract
Speaker diarization, usually referred to as the “who spoke when” task, turns out to be particularly challenging when applied to fictional films, where many characters talk in varied acoustic conditions (background music, sound effects, etc.). Despite this acoustic variability, such movies exhibit specific visual patterns in dialogue scenes. In this paper, we introduce a two-step method for speaker diarization in TV series: speaker diarization is first performed locally in the scenes detected as dialogues; the hypothesized local speakers are then merged in a second agglomerative clustering process, with the constraint that speakers locally hypothesized to be distinct must not be assigned to the same cluster. The performance of our approach is compared to that of standard speaker diarization tools applied to the same data.
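The second step described in the abstract — agglomerative clustering of the locally hypothesized speakers under cannot-link constraints — can be illustrated with a minimal sketch. This is not the paper's implementation: the Euclidean distance between placeholder speaker embeddings and the fixed stopping threshold are assumptions made purely for illustration, standing in for whatever acoustic models and similarity measure the authors actually use.

```python
import numpy as np

def constrained_agglomerative_clustering(embeddings, cannot_link, threshold):
    """Bottom-up clustering of local speaker models.

    embeddings  : dict {local_speaker_id: vector} -- placeholder speaker representations
    cannot_link : set of frozensets {a, b} -- speakers found distinct within the same scene
    threshold   : stop merging once the closest admissible pair exceeds this distance
    """
    clusters = {sid: {sid} for sid in embeddings}            # one cluster per local speaker
    centroids = {sid: np.asarray(vec, dtype=float) for sid, vec in embeddings.items()}

    def violates(c1, c2):
        # A merge is forbidden if any pair across the two clusters is a cannot-link pair.
        return any(frozenset((a, b)) in cannot_link
                   for a in clusters[c1] for b in clusters[c2])

    while len(clusters) > 1:
        # Find the closest pair of clusters whose merge respects all constraints.
        best, best_d = None, np.inf
        ids = list(clusters)
        for i, c1 in enumerate(ids):
            for c2 in ids[i + 1:]:
                if violates(c1, c2):
                    continue
                d = np.linalg.norm(centroids[c1] - centroids[c2])
                if d < best_d:
                    best, best_d = (c1, c2), d
        if best is None or best_d > threshold:
            break                                             # no admissible merge left
        c1, c2 = best
        clusters[c1] |= clusters.pop(c2)                      # merge c2 into c1
        centroids[c1] = np.mean([embeddings[s] for s in clusters[c1]], axis=0)
        del centroids[c2]
    return list(clusters.values())

# Toy usage: S1a and S1b were hypothesized as distinct speakers inside the same
# dialogue scene, so they can never end up in the same cluster.
local_speakers = {
    "S1a": np.array([0.0, 0.0]),
    "S1b": np.array([0.1, 0.1]),
    "S2a": np.array([0.05, 0.02]),
}
forbidden = {frozenset(("S1a", "S1b"))}
print(constrained_agglomerative_clustering(local_speakers, forbidden, threshold=0.5))
# -> S2a is merged with one of the two, while S1a and S1b stay separated.
```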
Year
DOI
Venue
2014
10.1109/SLT.2014.7078606
Spoken Language Technology Workshop
Keywords
Field
DocType
pattern clustering,speaker recognition,video signal processing,TV series,acoustic conditions,acoustic variability,agglomerative clustering process,constrained speaker diarization,fictional films,hypothesized local speakers,scene detection,visual patterns,who spoke when task,Speaker diarization,agglomerative clustering,video structuring
Hierarchical clustering,Pattern recognition,Computer science,Speech recognition,Speaker diarisation,Artificial intelligence,Visual patterns
Conference
Volume
ISSN
Citations 
abs/1812.07209
2014 IEEE Spoken Language Technology Workshop (SLT), Dec 2014, South Lake Tahoe, United States. IEEE, pp. 390-395, 2014, ⟨10.1109/SLT.2014.7078606⟩
1
PageRank 
References 
Authors
0.35
0
2
Name             Order  Citations  PageRank
Xavier Bost      1      8          6.42
Georges Linares  2      871        9.73