Title
Speaker recognition from whispered speech: A tutorial survey and an application of time-varying linear prediction.
Abstract
Among available biometric technologies, automatic speaker recognition is one of the most convenient and accessible, owing to the abundance of mobile devices equipped with a microphone; it allows users to be authenticated across multiple environments and devices, and it also finds use in forensics and surveillance. Because acoustic mismatch across the environments and devices of the same speaker increases the number of identification errors, much of the research focuses on compensating for such technology-induced variation, especially using machine learning at the statistical back-end. Another much less studied, but at least as detrimental, source of acoustic variation arises from mismatched speaking styles induced by the speaker, which leads to a substantial drop in recognition accuracy. This is a major problem especially in forensics, where perpetrators may purposefully disguise their identity by varying their speaking style. We focus on one of the most commonly used ways of disguising one's speaker identity, namely whispering. We approach the problem of normal-whisper acoustic mismatch compensation from the viewpoint of robust feature extraction. Since whispered speech is intelligible yet a low-intensity signal, and therefore prone to extrinsic distortions, we take advantage of robust, long-term speech analysis methods that exploit the slow articulatory movements of speech production. Specifically, we address the problem with a novel method, frequency-domain linear prediction with time-varying linear prediction (FDLP-TVLP), an extension of the 2-dimensional autoregressive (2DAR) model that allows the vocal tract filter parameters to be time-varying, rather than piecewise constant as in classic short-term speech analysis.
Our speaker recognition experiments on the whisper subset of the CHAINS corpus indicate that when tested in normal-whisper mismatched conditions, the proposed FDLP-TVLP features improve speaker recognition performance by 7–10% over standard MFCC features in relative terms. We further observe that the proposed FDLP-TVLP features perform better than the FDLP and 2DAR methods for whispered speech.
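The core idea behind time-varying linear prediction, as described in the abstract, is to let each predictor coefficient evolve over the analysis interval instead of holding it fixed per frame. A common way to do this is to expand each coefficient in a low-order basis (e.g., polynomials in normalized time), which keeps the model linear in its unknowns and solvable by ordinary least squares. The sketch below illustrates this basis-expansion idea only; it is not the authors' FDLP-TVLP implementation, and the function name, polynomial basis, and default orders are illustrative assumptions.

```python
import numpy as np

def tvlp(x, p=8, K=2):
    """Least-squares sketch of time-varying linear prediction (TVLP).

    Each LP coefficient is expanded in a polynomial basis of normalized
    time t = n/N:  a_i(n) = sum_k B[i, k] * t**k,  so the predictor
    x_hat(n) = -sum_i a_i(n) * x(n - i) stays linear in the p*(K+1)
    unknowns B, which ordinary least squares can solve directly.
    (Basis choice and orders are illustrative, not the paper's setup.)
    """
    N = len(x)
    n = np.arange(p, N)          # usable sample indices
    t = n / N                    # normalized time in [0, 1]
    # Design matrix: one column per (lag i, basis order k) pair.
    cols = [x[n - i] * t**k for i in range(1, p + 1) for k in range(K + 1)]
    A = np.stack(cols, axis=1)
    # Solve A @ b ≈ -x(n) in the least-squares sense.
    b, *_ = np.linalg.lstsq(A, -x[n], rcond=None)
    return b.reshape(p, K + 1)   # B[i-1, k]: coefficient of lag i, order k
```

For a stationary signal the fit concentrates in the order-0 (constant) basis terms, recovering ordinary LP; for whispered or otherwise nonstationary speech, the higher-order terms let the vocal tract filter estimate track slow articulatory movement within the analysis window.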
Year
2018
DOI
10.1016/j.specom.2018.02.009
Venue
Speech Communication
Keywords
Speaker recognition, Speaking style mismatch, Disguise, Whisper, 2-Dimensional autoregression (2D-AR), Time-varying linear prediction (TVLP)
Field
Mel-frequency cepstrum, Pattern recognition, Computer science, Whispering, Feature extraction, Linear prediction, Speech recognition, Speaker recognition, Artificial intelligence, Biometrics, Speech production, Vocal tract
DocType
Journal
Volume
99
ISSN
0167-6393
Citations
3
PageRank
0.39
References
33
Authors
5
Name                 Order  Citations  PageRank
Ville Vestman            1         29      6.42
Dhananjaya N. Gowda      2         28      2.99
Md. Sahidullah           3        326     24.99
Paavo Alku               4        728     98.07
Tomi Kinnunen            5       1323     86.67