Abstract |
---|
We describe a unified multi-turn, multi-task spoken language understanding (SLU) solution capable of handling multiple context-sensitive classification (intent determination) and sequence labeling (slot filling) tasks simultaneously. The proposed architecture is based on recurrent convolutional neural networks (RCNNs) with shared feature layers and globally normalized sequence modeling components. Temporal dependencies within and across different tasks are encoded succinctly as recurrent connections. Dialog system responses beyond the SLU component are also exploited as effective external features. Extensive experiments on a number of datasets show that the proposed joint learning framework achieves state-of-the-art results for both classification and tagging, and that contextual modeling based on recurrent and external features significantly improves the context sensitivity of SLU models. |
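The abstract's core idea (shared feature layers feeding two task-specific heads, one for utterance-level intent classification and one for per-token slot filling, with temporal dependencies carried through recurrent connections) can be illustrated with a minimal numpy sketch. This is a hypothetical simplification for intuition only, not the paper's RCNN: all parameter names, sizes, and the plain-RNN encoder are assumptions.

```python
import numpy as np

# Hypothetical minimal sketch of joint multi-task SLU: a SHARED recurrent
# encoder feeds two heads -- an intent classifier (utterance level) and a
# slot tagger (token level). Sizes are arbitrary illustrative choices.
rng = np.random.default_rng(0)
VOCAB, EMB, HID, N_INTENTS, N_SLOTS = 20, 8, 16, 3, 5

# Shared parameters (feature layers common to both tasks)
E = rng.standard_normal((VOCAB, EMB)) * 0.1          # token embeddings
W_h = rng.standard_normal((EMB + HID, HID)) * 0.1    # recurrent transition
# Task-specific heads on top of the shared features
W_intent = rng.standard_normal((HID, N_INTENTS)) * 0.1
W_slot = rng.standard_normal((HID, N_SLOTS)) * 0.1

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def forward(tokens):
    """Run the shared recurrent encoder, then both task heads."""
    h = np.zeros(HID)
    states = []
    for t in tokens:
        # recurrent connection: each state depends on the previous one
        h = np.tanh(np.concatenate([E[t], h]) @ W_h)
        states.append(h)
    states = np.stack(states)
    slot_probs = softmax(states @ W_slot)          # one slot label per token
    intent_probs = softmax(states[-1] @ W_intent)  # intent from final state
    return intent_probs, slot_probs

intent, slots = forward([3, 7, 1, 9])  # 4-token toy utterance
```

Because the encoder is shared, gradients from both losses (in a real training loop) would update the same feature layers, which is what makes the setup "joint" multi-task learning; the paper additionally uses convolutional features, global normalization, and cross-turn context, all omitted here.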
Year | Venue | Keywords |
---|---|---|
2015 | 16th Annual Conference of the International Speech Communication Association (INTERSPEECH 2015), Vols 1-5 | convolutional neural networks, recurrent neural networks, spoken language understanding |
Field | DocType | Citations |
---|---|---|
Architecture, Normalization (statistics), Sequence labeling, Convolutional neural network, Computer science, Speech recognition, Natural language processing, Dialog system, Sequence modeling, Artificial intelligence, Language understanding, Spoken language | Conference | 4 |
PageRank | References | Authors |
---|---|---|
0.51 | 10 | 3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Chunxi Liu | 1 | 91 | 8.44 |
Puyang Xu | 2 | 105 | 11.52 |
Ruhi Sarikaya | 3 | 698 | 64.49 |