Title
Leveraging Similar Users for Personalized Language Modeling with Limited Data
Abstract
Personalized language models are designed and trained to capture language patterns specific to individual users, which makes them more accurate at predicting what a user will write. However, when a new user joins a platform and little of their text is available, it is difficult to build an effective personalized language model. We propose a solution to this problem that leverages models trained on users who are similar to the new user. In this paper, we explore strategies for measuring the similarity between new and existing users, as well as methods for using the data of existing users who are a good match. We further examine the trade-off between the amount of data available for a new user and how well that user's language can be modeled.
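The approach described in the abstract can be sketched in code. The snippet below is a minimal illustration of the general idea rather than the paper's actual method: it builds simple bag-of-words profiles for hypothetical users, ranks existing users by cosine similarity to a new user's limited text, and pools the most similar users' text to supplement the new user's training data. All user names, example texts, and the choice of similarity measure are assumptions made for illustration.

# Minimal sketch (not the paper's exact method): rank existing users by
# bag-of-words cosine similarity to a new user's limited text, then pool
# the nearest users' text as extra training data for a personalized LM.
# All user names and texts below are hypothetical.
from collections import Counter
import math

def profile(texts):
    """Build a unigram count profile from a user's texts."""
    counts = Counter()
    for text in texts:
        counts.update(text.lower().split())
    return counts

def cosine(p, q):
    """Cosine similarity between two count profiles."""
    dot = sum(p[w] * q[w] for w in set(p) & set(q))
    norm = math.sqrt(sum(v * v for v in p.values())) * math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

def similar_users(new_user_texts, existing_users, k=2):
    """Return the k existing users most similar to the new user."""
    new_profile = profile(new_user_texts)
    scored = [(cosine(new_profile, profile(texts)), name)
              for name, texts in existing_users.items()]
    return [name for _, name in sorted(scored, reverse=True)[:k]]

# Hypothetical data: existing users with plenty of text, a new user with little.
existing = {
    "user_a": ["i love hiking in the mountains", "planning another mountain trail run"],
    "user_b": ["new gpu benchmarks look great", "building a custom mechanical keyboard"],
    "user_c": ["sharing photos from our camping trip", "the mountain weather was perfect"],
}
new_user = ["planning my next mountain hike"]

neighbors = similar_users(new_user, existing, k=2)   # -> ['user_a', 'user_c']
# Pool the neighbors' text with the new user's own text; this combined set
# could then be used to fine-tune or otherwise adapt a personalized language model.
training_data = list(new_user)
for name in neighbors:
    training_data.extend(existing[name])
print(neighbors)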
Year
2022
DOI
10.18653/v1/2022.acl-long.122
Venue
PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS)
DocType
Conference
Volume
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Citations
0
PageRank
0.34
References
0
Authors
5
Name                    Order  Citations  PageRank
Charles Welch           1      0          0.34
Chenxi Gu               2      0          0.34
Jonathan K. Kummerfeld  3      93         16.19
Verónica Pérez-Rosas    4      40         5.02
Rada Mihalcea           5      6460       445.54