Abstract
---
We propose a unified multilingual model for humor detection that can be trained under a transfer-learning framework. 1) The model is built on pre-trained multilingual BERT and can therefore make predictions on Chinese, Russian, and Spanish corpora. 2) We move beyond single-sentence classification and propose sequence-pair prediction, which takes the inter-sentence relationship into account. 3) We propose the Sentence Discrepancy Prediction (SDP) loss, which measures the semantic discrepancy of the sequence pair, a discrepancy that often appears between the setup and punchline of a joke. Our method achieves state-of-the-art results on two of three humor detection corpora in three languages (Russian, Spanish, and Chinese) and second place on the third, and improves F1-score by 4%-6%, demonstrating its effectiveness on humor detection tasks.
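The abstract does not give the exact form of the SDP loss. As an illustration only, here is a minimal NumPy sketch of one plausible reading: a margin-based loss on the cosine similarity between pooled setup and punchline embeddings, pushing humorous pairs apart and non-humorous pairs together. The function name `sdp_loss`, the margin value, and the formulation itself are assumptions for this sketch, not the authors' published definition.

```python
import numpy as np

def cosine_sim(u, v):
    # Cosine similarity between two pooled sentence embeddings.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def sdp_loss(setup_emb, punchline_emb, label, margin=0.5):
    """Hypothetical Sentence Discrepancy Prediction loss (sketch only).

    Encourages humorous pairs (label=1) to have dissimilar
    setup/punchline embeddings, and non-humorous pairs (label=0)
    to have similar ones. This is one plausible reading of the
    abstract, not the authors' formulation.
    """
    sim = cosine_sim(setup_emb, punchline_emb)
    if label == 1:
        # Humor: penalize similarity above the margin.
        return max(0.0, sim - margin)
    # Non-humor: penalize dissimilarity below the margin.
    return max(0.0, margin - sim)
```

In a full model, `setup_emb` and `punchline_emb` would come from multilingual BERT (e.g. mean-pooled token states for each sentence of the pair), and this term would be added to the standard classification loss.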
Year | Venue | DocType
---|---|---
2020 | EAMT | Conference |
Citations | PageRank | References |
0 | 0.34 | 0 |
Authors (5)
---
Name | Order | Citations | PageRank |
---|---|---|---
Minghan Wang | 1 | 0 | 5.07 |
Hao Yang | 2 | 0 | 7.44 |
Ying Qin | 3 | 1 | 2.05 |
Shiliang Sun | 4 | 1 | 2.05 |
Yao Deng | 5 | 0 | 1.69 |