X-PuDu at SemEval-2022 Task 7: A Replaced Token Detection Task Pre-trained Model with Pattern-aware Ensembling for Identifying Plausible Clarifications. | 0 | 0.34 | 2022 |
X-PuDu at SemEval-2022 Task 6: Multilingual Learning for English and Arabic Sarcasm Detection. | 0 | 0.34 | 2022 |
Correcting Chinese Spelling Errors with Phonetic Pre-training. | 0 | 0.34 | 2021 |
abcbpc at SemEval-2021 Task 7: ERNIE-based Multi-task Model for Detecting and Rating Humor and Offense. | 0 | 0.34 | 2021 |
ERNIE-M: Enhanced Multilingual Representation by Aligning Cross-lingual Semantics with Monolingual Corpora. | 0 | 0.34 | 2021 |
kk2018 at SemEval-2020 Task 9: Adversarial Training for Code-Mixing Sentiment Classification. | 0 | 0.34 | 2020 |
ERNIE at SemEval-2020 Task 10: Learning Word Emphasis Selection by Pre-trained Language Model. | 0 | 0.34 | 2020 |
Galileo at SemEval-2020 Task 12: Multi-lingual Learning for Offensive Language Identification using Pre-trained Language Models. | 0 | 0.34 | 2020 |
ERNIE 2.0: A Continual Pre-Training Framework for Language Understanding. | 1 | 0.34 | 2020 |
OleNet at SemEval-2019 Task 9: BERT based Multi-Perspective Models for Suggestion Mining. | 0 | 0.34 | 2019 |
ERNIE: Enhanced Representation through Knowledge Integration. | 3 | 0.37 | 2019 |