Title
Exploring End-to-End Attention-Based Neural Networks for Native Language Identification
Abstract
Automatic identification of speakers’ native language (L1) based on their speech in a second language (L2) is a challenging research problem that can aid several spoken language technologies, such as automatic speech recognition (ASR), speaker recognition, and voice biometrics in interactive voice applications. End-to-end learning, in which the features and the classification model are learned jointly in a single system, is an emerging approach in speech recognition, speaker verification, and spoken language understanding. In this paper, we present our study on attention-based end-to-end modeling for native language identification on a database of 11 different L1s. Using this methodology, we determine the native language of the speaker directly from the raw acoustic features. Experimental results show that our best end-to-end model achieves promising performance by capturing speech commonalities across L1s through an attention mechanism. In addition, fusion of the proposed systems with the baseline system leads to significant performance improvements.
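The record above does not include architectural details, but the general shape of an attention-based end-to-end L1 classifier can be conveyed with a small sketch. The following is a minimal, hypothetical PyTorch illustration, not the authors' exact model: a recurrent encoder over frame-level acoustic features, an additive attention layer that pools the frames into a single utterance-level vector, and a linear classifier over the 11 L1 classes. The feature dimensionality (40 filterbank-like features), hidden size, and layer choices are assumptions made purely for illustration.

# Minimal sketch of an attention-pooled end-to-end L1 classifier.
# All dimensions and layer choices are illustrative assumptions, not the
# published architecture.
import torch
import torch.nn as nn

class AttentiveL1Classifier(nn.Module):
    def __init__(self, feat_dim=40, hidden_dim=256, num_l1_classes=11):
        super().__init__()
        # Bidirectional GRU encoder over frame-level acoustic features
        self.encoder = nn.GRU(feat_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # Additive attention: one scalar score per encoder time step
        self.attn_score = nn.Linear(2 * hidden_dim, 1)
        # Utterance-level classifier over the attention-pooled representation
        self.classifier = nn.Linear(2 * hidden_dim, num_l1_classes)

    def forward(self, frames):
        # frames: (batch, time, feat_dim) acoustic feature matrix
        encoded, _ = self.encoder(frames)                         # (batch, time, 2*hidden)
        weights = torch.softmax(self.attn_score(encoded), dim=1)  # (batch, time, 1)
        pooled = (weights * encoded).sum(dim=1)                   # (batch, 2*hidden)
        return self.classifier(pooled)                            # (batch, num_l1_classes) logits

# Example usage with a random batch of 8 utterances, 300 frames each
if __name__ == "__main__":
    model = AttentiveL1Classifier()
    logits = model(torch.randn(8, 300, 40))
    print(logits.shape)  # torch.Size([8, 11])

In a model of this kind, the learned attention weights indicate which frames contribute most to the L1 decision, which is one plausible mechanism for capturing the L1-specific speech patterns referred to in the abstract.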
Year
2018
DOI
10.1109/SLT.2018.8639689
Venue
SLT
Keywords
Speech recognition, Acoustics, Task analysis, Computational modeling, Training, Testing, Neural networks
Field
Task analysis, Computer science, End-to-end principle, Speech recognition, Speaker recognition, Baseline system, Native-language identification, Artificial neural network, First language, Spoken language
DocType
Conference
ISSN
2639-5479
ISBN
978-1-5386-4334-1
Citations
0
PageRank
0.34
References
0
Authors
3
Name, Order, Citations, PageRank
Rutuja Ubale, 1, 2, 3.17
Qian Yao, 2, 527, 51.55
Keelan Evanini, 3, 1, 4.42