Title: Achieving Human Parity in Conversational Speech Recognition
Abstract: Conversational speech recognition has served as a flagship speech recognition task since the release of the Switchboard corpus in the 1990s. In this paper, we measure the human error rate on the widely used NIST 2000 test set, and find that our latest automated system has reached human parity. The error rate of professional transcribers is 5.9% for the Switchboard portion of the data, in which newly acquainted pairs of people discuss an assigned topic, and 11.3% for the CallHome portion where friends and family members have open-ended conversations. In both cases, our automated system establishes a new state of the art, and edges past the human benchmark, achieving error rates of 5.8% and 11.0%, respectively. The key to our system's performance is the use of various convolutional and LSTM acoustic model architectures, combined with a novel spatial smoothing method and lattice-free MMI acoustic training, multiple recurrent neural network language modeling approaches, and a systematic use of system combination.
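The 5.9%/5.8% and 11.3%/11.0% figures above are word error rates (WER): the word-level Levenshtein distance (substitutions + insertions + deletions) between a hypothesis transcript and the reference, divided by the number of reference words. As background only (this is not code from the paper), a minimal sketch of the standard WER computation:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between r[:i] and h[:j]
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i          # i deletions
    for j in range(len(h) + 1):
        d[0][j] = j          # j insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub,               # substitution or match
                          d[i - 1][j] + 1,   # deletion
                          d[i][j - 1] + 1)   # insertion
    return d[len(r)][len(h)] / len(r)

# One substitution against a three-word reference -> WER of 1/3
print(wer("the cat sat", "the bat sat"))
```

In NIST evaluations the scoring additionally applies text normalization (e.g., handling of contractions and hesitations) before alignment, so reported numbers depend on the official scoring pipeline, not just this distance.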
Year: 2016
Venue: arXiv: Computation and Language
Field: Computer science, Recurrent neural network, Artificial intelligence, Natural language processing, Language model, Word error rate, Speech recognition, NIST, Smoothing, Parity (mathematics), Machine learning, Acoustic model, Test set
DocType: Journal
Volume: abs/1610.05256
Citations: 64
PageRank: 2.51
References: 24
Authors: 8
Name              Order  Citations  PageRank
W. Xiong          1      64         2.85
Jasha Droppo      2      861        68.35
Xuedong Huang     3      1390       283.19
Frank Seide       4      1489       101.15
Mike Seltzer      5      104        4.08
Andreas Stolcke   6      6690       712.46
Dong Yu           7      6264       475.73
Geoffrey Zweig    8      3406       320.25