Abstract
---
Currently, the most dominant neural code generation models are often equipped with a tree-structured LSTM decoder, which outputs a sequence of actions to construct an Abstract Syntax Tree (AST) via pre-order traversal. However, such a decoder has two obvious drawbacks. First, except for the parent action, other distant yet important history actions rarely contribute to the current decision. Second, it neglects future actions, which may be crucial for predicting the current action. To address these issues, in this paper we propose a novel AST-structure-enhanced decoder for code generation, which significantly extends the decoder in the above two aspects. First, we introduce an AST-information-enhanced attention mechanism to fully exploit history actions, whose impacts are further distinguished according to their syntactic distances, action types, and relative positions. Second, we jointly model the predictions of the current action and its important future action via multi-task learning, where the learned hidden state of the latter is further leveraged to improve the former. Experimental results on commonly used datasets demonstrate the effectiveness of our proposed decoder.
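To make the first contribution concrete, the sketch below shows one plausible way to bias attention over history actions with additive terms indexed by syntactic distance, action type, and relative position. The function name, the use of simple learned bias tables (`w_dist`, `w_type`, `w_pos`), and the dot-product scoring are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def ast_enhanced_attention(query, keys, syn_dist, act_type, rel_pos,
                           w_dist, w_type, w_pos):
    """Attend over history-action hidden states, adding learned scalar
    biases for each action's syntactic distance, action type, and
    relative position (illustrative sketch of the idea only)."""
    # Base content scores: dot product of current state with history states
    scores = keys @ query                                  # (n_history,)
    # Structural biases looked up per history action and added to the scores
    scores = scores + w_dist[syn_dist] + w_type[act_type] + w_pos[rel_pos]
    # Normalize over history actions (numerically stable softmax)
    e = np.exp(scores - scores.max())
    attn = e / e.sum()
    # Context vector: attention-weighted sum of history hidden states
    return attn @ keys, attn
```

With zero bias tables this reduces to plain dot-product attention; nonzero tables let the model, for example, up-weight actions that are syntactically close to the current node even if they are far away in the action sequence.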
Field | Value
---|---
Year | 2022
DOI | 10.1109/TASLP.2021.3138717
Venue | IEEE/ACM Transactions on Audio, Speech, and Language Processing
Keywords | Code generation, abstract syntax tree, attention mechanism, future action prediction
DocType | Journal
Volume | 30
Issue | 1
ISSN | 2329-9290
Citations | 0
PageRank | 0.34
References | 7
Authors | 6
Name | Order | Citations | PageRank |
---|---|---|---|
Hui Jiang | 1 | 0 | 0.34 |
Linfeng Song | 2 | 87 | 16.75 |
Ge Yubin | 3 | 3 | 1.84 |
Fandong Meng | 4 | 31 | 19.11 |
Junfeng Yao | 5 | 0 | 0.34 |
Jinsong Su | 6 | 260 | 41.51 |