Title
Deep quaternion neural networks for spoken language understanding
Abstract
Deep Neural Networks (DNN) have received great interest from researchers due to their capability to construct robust abstract representations of heterogeneous documents in a latent subspace. Nonetheless, standard real-valued deep neural networks require an appropriate adaptation, such as the convolution process, to capture latent relations between input features. Moreover, real-valued deep neural networks reveal little in the way of internal document dependencies, since they consider the words or topics contained in a document as isolated basic elements. Quaternion-valued multi-layer perceptrons (QMLP) and autoencoders (QAE) have been introduced to capture such latent dependencies, as well as to represent multidimensional data. Nonetheless, a three-layered neural network does not benefit from the high abstraction capability of DNNs. This paper first proposes to extend the hyper-complex algebra to deep neural networks (QDNN) and then introduces pre-trained deep quaternion neural networks (QDNN-AE) with dedicated quaternion encoder-decoders (QAE). The experiments conducted on a theme identification task of spoken dialogues from the DECODA data set show, inter alia, that the QDNN-AE reaches a promising gain of 2.2% compared to the standard real-valued DNN-AE.
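The quaternion layers underlying the QDNN and QDNN-AE replace the real-valued dot product with the Hamilton product between quaternion inputs and quaternion weights. Below is a minimal NumPy sketch of one such quaternion-valued dense layer; the function name, the [r | i | j | k] component layout, the split tanh activation, and the toy shapes are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the authors' code) of a quaternion-valued dense layer:
# every weight is a quaternion, and the pre-activation is the Hamilton
# product between quaternion inputs and quaternion weights.
import numpy as np

def quaternion_dense(x, W_r, W_i, W_j, W_k):
    # x: (batch, 4 * n_in), quaternion components concatenated as [r | i | j | k].
    # W_*: (n_in, n_out) real-valued matrices, one per quaternion component.
    r, i, j, k = np.split(x, 4, axis=1)
    out_r = r @ W_r - i @ W_i - j @ W_j - k @ W_k   # real part
    out_i = r @ W_i + i @ W_r + j @ W_k - k @ W_j   # i part
    out_j = r @ W_j - i @ W_k + j @ W_r + k @ W_i   # j part
    out_k = r @ W_k + i @ W_j - j @ W_i + k @ W_r   # k part
    return np.concatenate([out_r, out_i, out_j, out_k], axis=1)

# Hypothetical usage: 2 quaternion inputs mapped to 3 quaternion outputs.
rng = np.random.default_rng(0)
x = rng.standard_normal((5, 8))                        # batch of 5 documents
W = [0.1 * rng.standard_normal((2, 3)) for _ in range(4)]
h = np.tanh(quaternion_dense(x, *W))                   # split activation
print(h.shape)                                         # (5, 12), i.e. 3 quaternions

Stacking such layers yields a QDNN; in the QDNN-AE described in the abstract, the weights are additionally pre-trained with a dedicated quaternion autoencoder (QAE) before supervised theme identification.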
Year
2017
DOI
10.1109/ASRU.2017.8268978
Venue
2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)
Keywords
Quaternions, deep neural networks, spoken language understanding, autoencoders, machine learning
Field
Abstraction, Subspace topology, Convolution, Computer science, Quaternion, Artificial intelligence, Artificial neural network, Deep neural networks, Spoken language
DocType
Conference
ISBN
978-1-5090-4789-5
Citations
0
PageRank
0.34
References
0
Authors
3
Name               Order  Citations  PageRank
Titouan Parcollet  1      16         9.23
Mohamed Morchid    2      84         22.79
Georges Linarès    3      136        29.55