Abstract |
---|
The great majority of the world's languages are considered under-resourced for the successful application of deep learning methods. In this work, we propose a meta-learning approach to document classification in a limited-resource setting and demonstrate its effectiveness in two different scenarios: few-shot, cross-lingual adaptation to previously unseen languages; and multilingual joint training when limited target-language data is available during training. We conduct a systematic comparison of several meta-learning methods, investigate multiple settings in terms of data availability, and show that meta-learning thrives in settings with a heterogeneous task distribution. We propose a simple yet effective adjustment to existing meta-learning methods which allows for better and more stable learning, and set a new state of the art on several languages while performing on par on others, using only a small amount of labeled data. |
Year | Venue | DocType
---|---|---
2021 | EACL | Conference

Citations | PageRank | References
---|---|---
0 | 0.34 | 0
Authors |
---|
4 |
Name | Order | Citations | PageRank |
---|---|---|---
Niels van der Heijden | 1 | 0 | 0.68 |
Helen Yannakoudakis | 2 | 154 | 13.22 |
Pushkar Mishra | 3 | 1 | 3.39 |
Ekaterina Shutova | 4 | 0 | 4.06 |