Title
Deep Audio-Visual Speech Recognition
Abstract
The goal of this work is to recognise phrases and sentences being spoken by a talking face, with or without the audio. Unlike previous works that have focussed on recognising a limited number of words or phrases, we tackle lip reading as an open-world problem – unconstrained natural language sentences, and in-the-wild videos. Our key contributions are: (1) we compare two models for lip reading, one using a CTC loss, and the other using a sequence-to-sequence loss. Both models are built on top of the transformer self-attention architecture; (2) we investigate to what extent lip reading is complementary to audio speech recognition, especially when the audio signal is noisy; (3) we introduce and publicly release a new dataset for audio-visual speech recognition, LRS2-BBC, consisting of thousands of natural sentences from British television. The models that we train surpass the performance of all previous work on a lip reading benchmark dataset by a significant margin.
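For illustration, the sketch below contrasts the two training objectives the abstract compares: a CTC loss over frame-wise encoder posteriors, versus a sequence-to-sequence cross-entropy loss with an autoregressive decoder, both on top of a transformer self-attention encoder. This is a minimal PyTorch sketch, not the authors' released implementation; all layer sizes, the vocabulary size, and the tensor shapes are arbitrary placeholders.

```python
import torch
import torch.nn as nn

# Illustrative sketch only (not the paper's code). A shared transformer
# encoder over (e.g. visual lip) features, trained with either of the
# two objectives compared in the abstract. All sizes are placeholders.
MODEL_DIM, HEADS, VOCAB = 512, 8, 40   # VOCAB: characters, CTC blank at index 0
B, T_IN, T_OUT = 2, 100, 20            # batch, input frames, target length

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(MODEL_DIM, HEADS, batch_first=True),
    num_layers=6)
feats = torch.randn(B, T_IN, MODEL_DIM)        # input feature sequence
memory = encoder(feats)                        # (B, T_IN, MODEL_DIM)
targets = torch.randint(1, VOCAB, (B, T_OUT))  # character indices (no blank)

# (1) CTC objective: per-frame character posteriors; the alignment
# between frames and characters is marginalised out by the CTC loss.
ctc_head = nn.Linear(MODEL_DIM, VOCAB)
log_probs = ctc_head(memory).log_softmax(-1).transpose(0, 1)  # (T_IN, B, VOCAB)
loss_ctc = nn.CTCLoss(blank=0)(
    log_probs, targets,
    torch.full((B,), T_IN, dtype=torch.long),
    torch.full((B,), T_OUT, dtype=torch.long))

# (2) Sequence-to-sequence objective: an autoregressive decoder attends
# to the encoder output and is trained with per-character cross-entropy
# under teacher forcing (SOS-shifting of the decoder input omitted here
# for brevity).
embed = nn.Embedding(VOCAB, MODEL_DIM)
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(MODEL_DIM, HEADS, batch_first=True),
    num_layers=6)
causal = torch.triu(torch.full((T_OUT, T_OUT), float("-inf")), diagonal=1)
dec = decoder(embed(targets), memory, tgt_mask=causal)  # (B, T_OUT, MODEL_DIM)
logits = nn.Linear(MODEL_DIM, VOCAB)(dec)
loss_s2s = nn.CrossEntropyLoss()(logits.reshape(-1, VOCAB), targets.reshape(-1))
```

In practice the CTC branch permits streaming, monotonic decoding, while the sequence-to-sequence branch can attend over the whole input and typically yields lower error rates offline; the paper evaluates both on the same encoder backbone.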
Year
2018
DOI
10.1109/TPAMI.2018.2889052
Venue
IEEE Transactions on Pattern Analysis and Machine Intelligence
Keywords
Humans, Speech Perception, Algorithms, Lipreading, Speech
Field
Computer vision, Audio signal, Architecture, Computer science, Speech recognition, Natural language, Audio-visual speech recognition, Artificial intelligence
DocType
Journal
Volume
44
Issue
12
ISSN
0162-8828
Citations
22
PageRank
0.68
References
34
Authors
5
Name | Order | Citations | PageRank
Triantafyllos Afouras | 1 | 121 | 9.19
Joon Son Chung | 2 | 208 | 20.20
Andrew Senior | 3 | 4687 | 260.55
Oriol Vinyals | 4 | 9419 | 418.45
Andrew Zisserman | 5 | 45998 | 3200.71