| Title |
|---|
| A Simple and Effective Method To Eliminate the Self Language Bias in Multilingual Representations |
| Abstract |
|---|
| Isolating language-agnostic semantic information from language identity information is an emerging research direction for multilingual representation models. We explore this problem from a novel angle of geometric algebra and semantic space. We propose a simple but highly effective method, "Language Information Removal (LIR)", which factors out language identity information from semantics-related components in multilingual representations pre-trained on multi-monolingual data. A post-training and model-agnostic method, LIR uses only simple linear operations, e.g. matrix factorization and orthogonal projection. LIR reveals that for weak-alignment multilingual systems, the principal components of semantic spaces primarily encode language identity information. We first evaluate LIR on a cross-lingual question-answer retrieval task (LAReQA), which requires strong alignment of the multilingual embedding space. Experiments show that LIR is highly effective on this task, yielding almost 100% relative improvement in MAP for weak-alignment models. We then evaluate LIR on the Amazon Reviews and XEVAL datasets, observing that removing language information improves cross-lingual transfer performance. |
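As a rough illustration of the linear operations the abstract mentions, the sketch below shows one way to factor language-identity directions out of per-language embeddings via SVD and orthogonal projection. It is a minimal sketch, not the authors' exact implementation: the function name, the use of numpy, and the choice of centering plus top-k principal directions are all assumptions for illustration.

```python
import numpy as np

def remove_language_components(embeddings: np.ndarray, num_components: int = 1) -> np.ndarray:
    """Hypothetical LIR-style sketch: project out a language's top principal directions.

    embeddings: (n, d) array of sentence embeddings, all from one language.
    Assumes the top principal components of the (centered) embeddings mostly
    encode language identity, as the abstract reports for weak-alignment models.
    """
    # Center the embeddings so principal directions are computed about the mean.
    centered = embeddings - embeddings.mean(axis=0, keepdims=True)
    # Matrix factorization step: rows of Vt are orthonormal principal directions.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    directions = Vt[:num_components]  # (k, d) candidate language-identity subspace
    # Orthogonal projection step: subtract each embedding's component along the subspace.
    return embeddings - embeddings @ directions.T @ directions
```

Because the removed directions are orthonormal, the projection leaves all other dimensions of the embeddings untouched, so semantic content outside the removed subspace is preserved.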
| Year | Venue | DocType |
|---|---|---|
| 2021 | EMNLP | Conference |
| Volume | Citations | PageRank |
|---|---|---|
| 2021.emnlp-main | 0 | 0.34 |
| References | Authors |
|---|---|
| 0 | 4 |
| Name | Order | Citations | PageRank |
|---|---|---|---|
| Han-Shuo Ye | 1 | 16 | 2.88 |
| Yinfei Yang | 2 | 99 | 16.53 |
| Daniel Cer | 3 | 784 | 36.35 |
| Eric Darve | 4 | 440 | 44.79 |