Abstract |
---|
This paper investigates the construction of a strong baseline for constituency parsing based on general-purpose sequence-to-sequence models. We incorporate several techniques developed mainly for natural language generation tasks, e.g., machine translation and summarization, and demonstrate that the sequence-to-sequence model matches the performance of current top parsers without requiring explicit task-specific knowledge or architectures for constituency parsing. |
Year | Venue | Field
---|---|---
2018 | Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), Vol. 2 | Computer science, Artificial intelligence, Natural language processing, Parsing, Empirical research

DocType | Volume | Citations
---|---|---
Conference | P18-2 | 1

PageRank | References | Authors
---|---|---
0.35 | 0 | 5
Name | Order | Citations | PageRank
---|---|---|---
Jun Suzuki | 1 | 55 | 10.39 |
Sho Takase | 2 | 28 | 10.23 |
Hidetaka Kamigaito | 3 | 5 | 3.17 |
Makoto Morishita | 4 | 1 | 0.69 |
Masaaki Nagata | 5 | 19 | 5.41 |