Abstract |
---|
Natural language generation (NLG) is a critical component in spoken dialogue systems. Classic NLG can be divided into two phases: (1) sentence planning, which decides the overall sentence structure, and (2) surface realization, which determines specific word forms and flattens the sentence structure into a string. Many simple NLG models are based on recurrent neural networks (RNNs) and the sequence-to-sequence (seq2seq) framework, which consists of an encoder-decoder structure; these NLG models generate sentences from scratch by jointly optimizing sentence planning and surface realization under a simple cross-entropy training criterion. However, the simple encoder-decoder architecture often struggles to generate long and complex sentences, because the decoder must learn all grammar and diction knowledge on its own. This paper introduces a hierarchical decoding NLG model based on linguistic patterns at different levels and shows that the proposed method outperforms the traditional approach with a smaller model size. Furthermore, the hierarchical decoding design is flexible and easily extensible to various NLG systems. |
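The sketch below illustrates the general idea the abstract describes: a seq2seq encoder feeding a stack of decoders, one per linguistic level, trained with plain cross entropy. It is a minimal sketch, not the authors' implementation; the class name `HierarchicalDecoderNLG`, the GRU cells, the number of levels, the equal target lengths, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HierarchicalDecoderNLG(nn.Module):
    """Sketch of seq2seq NLG with one decoder per linguistic level (assumed design)."""

    def __init__(self, vocab_size, hidden_size=128, num_levels=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.encoder = nn.GRU(hidden_size, hidden_size, batch_first=True)
        # One GRU decoder per level; level k conditions on the hidden
        # states produced by level k-1 (e.g., nouns -> verbs -> modifiers).
        self.decoders = nn.ModuleList(
            [nn.GRU(hidden_size, hidden_size, batch_first=True)
             for _ in range(num_levels)]
        )
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, src_ids, tgt_ids_per_level):
        # src_ids: (batch, src_len) tokens encoding the semantic input.
        # tgt_ids_per_level: one (batch, tgt_len) teacher-forcing target per
        # level; equal lengths are assumed here to keep the sketch short.
        _, h = self.encoder(self.embed(src_ids))
        logits_per_level, prev = [], 0  # no previous level before level 0
        for decoder, tgt in zip(self.decoders, tgt_ids_per_level):
            dec_out, _ = decoder(self.embed(tgt) + prev, h)
            logits_per_level.append(self.out(dec_out))
            prev = dec_out  # pass this level's states to the next level
        return logits_per_level

# Training uses the simple cross-entropy criterion the abstract mentions,
# here summed over all decoding levels (dummy data for illustration).
model = HierarchicalDecoderNLG(vocab_size=1000)
src = torch.randint(0, 1000, (2, 6))
tgts = [torch.randint(0, 1000, (2, 8)) for _ in range(4)]
loss = sum(
    nn.functional.cross_entropy(l.reshape(-1, 1000), t.reshape(-1))
    for l, t in zip(model(src, tgts), tgts)
)
loss.backward()
```

Stacking the decoders this way means each level only has to model one slice of the linguistic patterns while refining the previous level's output, which is consistent with the abstract's claim that the approach can match or beat a flat decoder with a smaller model.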
Year | Venue | DocType |
---|---|---|
2018 | NAACL-HLT | Journal |
Volume | Citations | PageRank
---|---|---|
abs/1808.02747 | 2 | 0.37
References | Authors
---|---|
5 | 4
Name | Order | Citations | PageRank |
---|---|---|---|
Shang-Yu Su | 1 | 9 | 4.88 |
Kai-Ling Lo | 2 | 2 | 0.37 |
Yi-Ting Yeh | 3 | 2 | 0.37
Yun-Nung Chen | 4 | 324 | 35.41 |