Abstract |
---|
The problem of sequence labelling in language understanding would benefit from approaches inspired by semantic priming phenomena. We propose that an attention-based RNN architecture can be used to simulate semantic priming for sequence labelling. Specifically, we employ pretrained word embeddings to characterize the semantic relationship between utterances and labels. We validate the approach using varying sizes of the ATIS and MEDIA datasets, and show F1 score improvements of 1.4-1.9%. The developed framework can enable more explainable and generalizable spoken language understanding systems. |
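The abstract describes using pretrained word embeddings to relate utterance tokens to labels via attention. The snippet below is an illustrative sketch of that idea, not the authors' implementation: it uses randomly initialized toy vectors in place of real pretrained embeddings (e.g. GloVe/word2vec), and dot-product attention to score each token of a toy ATIS-style utterance against a small set of hypothetical slot labels.

```python
# Hedged sketch: embedding-based token-label attention for sequence labelling.
# All vocab items, labels, and vectors are toy assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Toy "pretrained" embeddings (a real system would load pretrained vectors).
vocab = ["flights", "boston", "denver", "morning"]
labels = ["O", "city_name", "period_of_day"]
dim = 8
word_emb = {w: rng.standard_normal(dim) for w in vocab}
label_emb = {l: rng.standard_normal(dim) for l in labels}

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def label_attention(utterance):
    """For each token, a softmax-normalized similarity to each label embedding."""
    L = np.stack([label_emb[l] for l in labels])   # (num_labels, dim)
    scores = {}
    for tok in utterance:
        sims = L @ word_emb[tok]                   # dot-product similarities
        scores[tok] = dict(zip(labels, softmax(sims)))
    return scores

att = label_attention(["flights", "boston", "morning"])
for tok, dist in att.items():
    print(tok, {l: round(p, 2) for l, p in dist.items()})
```

In the paper's full architecture these similarity scores would feed an attention-based RNN tagger rather than being used directly; the sketch only shows the embedding-similarity component the abstract highlights.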
Year | Venue | Field
---|---|---
2018 | NAMED ENTITIES | Computer science, Priming (psychology), Natural language processing, Artificial intelligence
DocType | Citations | PageRank
---|---|---
Conference | 0 | 0.34

References | Authors
---|---
0 | 5
Name | Order | Citations | PageRank |
---|---|---|---|
Jiewen Wu | 1 | 0 | 0.34 |
Rafael E. Banchs | 2 | 566 | 63.64 |
Luis Fernando D'Haro | 3 | 181 | 25.97 |
Pavitra Krishnaswamy | 4 | 3 | 1.44 |
Nancy F. Chen | 5 | 120 | 28.98 |