Title
---
ThisIsCompetition at SemEval-2019 Task 9: BERT is unstable for out-of-domain samples.
Abstract
---
This paper describes our system, Joint Encoders for Stable Suggestion Inference (JESSI), for SemEval 2019 Task 9: Suggestion Mining from Online Reviews and Forums. JESSI combines two sentence encoders: (a) one using multiple pre-trained word embeddings learned from log-bilinear regression (GloVe) and translation (CoVe) models, and (b) one built on top of word encodings from a pre-trained deep bidirectional transformer (BERT). We include a domain adversarial training module when training for out-of-domain samples. Our experiments show that while BERT performs exceptionally well on in-domain samples, several runs of the model reveal that it is unstable on out-of-domain samples. This instability is substantially mitigated by (1) combining BERT with a non-BERT encoder, and (2) using an RNN-based classifier on top of BERT. Our final models obtained second place with a 77.78% F-score on Subtask A (i.e., in-domain) and achieved a 79.59% F-score on Subtask B (i.e., out-of-domain), even without using any additional external data.
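As a rough illustration of the joint-encoder idea described above (not the authors' code; all shapes, names, and the mean-pooling/projection stand-ins are hypothetical), the two sentence encodings can be concatenated and fed to a single classifier, so the non-BERT branch can stabilize predictions when the BERT branch is unreliable:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_glove_cove(word_vecs):
    # Hypothetical non-BERT branch: mean-pool pre-trained word vectors
    # (a stand-in for the GloVe/CoVe sentence encoder).
    return word_vecs.mean(axis=0)

def encode_bert_like(word_vecs, W):
    # Hypothetical BERT branch: a learned projection of pooled
    # contextual word encodings (a stand-in for BERT + RNN head).
    return np.tanh(word_vecs.mean(axis=0) @ W)

# Toy "sentence": 5 words with 8-dim word vectors (illustrative sizes).
words = rng.normal(size=(5, 8))
W = rng.normal(size=(8, 8))

# Joint encoding: concatenate both branches, then apply a linear
# suggestion classifier with a sigmoid to get P(sentence is a suggestion).
joint = np.concatenate([encode_glove_cove(words), encode_bert_like(words, W)])
logit = joint @ rng.normal(size=joint.shape[0])
prob = 1.0 / (1.0 + np.exp(-logit))
print(joint.shape, 0.0 < prob < 1.0)
```

The concatenation is the key design choice: a wrong signal from one branch can be outvoted by the other at the classifier, which is the stabilizing effect the abstract reports.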
Year | Venue | Field
---|---|---
2019 | North American Chapter of the Association for Computational Linguistics | SemEval, Computer science, Inference, Artificial intelligence, Encoder, Natural language processing, Classifier (linguistics), Sentence

DocType | Volume | Citations
---|---|---
Journal | abs/1904.03339 | 1

PageRank | References | Authors
---|---|---
0.35 | 0 | 7
Name | Order | Citations | PageRank |
---|---|---|---|
Cheon-Eum Park | 1 | 1 | 3.05 |
Juae Kim | 2 | 1 | 0.68 |
Hyeon-gu Lee | 3 | 4 | 1.46 |
Reinald Kim Amplayo | 4 | 22 | 8.44 |
Harksoo Kim | 5 | 170 | 26.76 |
Jungyun Seo | 6 | 5 | 3.44 |
Changki Lee | 7 | 279 | 26.18 |