Title
Adapting Language Models When Training on Privacy-Transformed Data.
Abstract
In recent years, voice-controlled personal assistants have revolutionized the interaction with smart devices and mobile applications. The collected utterances are then used by system providers to retrain and improve the language models (LMs). Since spoken messages may reveal personal information, private data must be removed from the input utterances before training. However, this may harm LM training because privacy-transformed data is unlikely to match the test distribution. This paper aims to fill this gap by focusing on the adaptation of an LM initially trained on privacy-transformed utterances. Our data sanitization process relies on named-entity recognition. We propose an LM adaptation strategy over the private data that minimizes the resulting losses. Class-based modeling is an effective approach to overcome data sparsity in n-gram model training, while neural LMs can handle longer contexts, which can yield better predictions. Our methodology combines the predictive power of class-based models with the generalization capability of neural models. With privacy transformation, we observe a relative 11% word error rate (WER) increase compared to an LM trained on the clean data. Despite the privacy preservation, we can still achieve comparable accuracy: empirical evaluations attain a relative WER improvement of 8% over the initial model.
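As an illustration of the sanitization step mentioned in the abstract, the following is a minimal sketch of NER-based privacy transformation. It assumes spaCy with its en_core_web_sm model and replaces each detected entity mention with its class tag; this is one plausible realization of the described approach, not the authors' actual pipeline.

# Minimal illustrative sketch (assumption): replace named entities detected
# by spaCy with their class labels, producing "privacy-transformed" text
# that can feed class-based LM training. Not the authors' code.
import spacy

nlp = spacy.load("en_core_web_sm")  # small English NER model, assumed installed

def privacy_transform(utterance: str) -> str:
    """Replace each detected entity span with its entity class tag."""
    doc = nlp(utterance)
    pieces, last = [], 0
    for ent in doc.ents:
        pieces.append(utterance[last:ent.start_char])  # keep non-entity text
        pieces.append(f"<{ent.label_}>")               # e.g. <PERSON>, <GPE>, <DATE>
        last = ent.end_char
    pieces.append(utterance[last:])
    return "".join(pieces)

if __name__ == "__main__":
    # Names, places, and dates are typically replaced by class tags;
    # the exact tags depend on the NER model used.
    print(privacy_transform("Call Alice Smith in Paris next Tuesday."))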
Year
2022
Venue
International Conference on Language Resources and Evaluation (LREC)
Keywords
Language model, Context (language use), Word error rate, Personally identifiable information, Adaptation (computer science), Machine learning, Class (computer programming), Process (engineering), Computer science, Generalization, Artificial intelligence
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
4
Name                       Order  Citations  PageRank
Mehmet Ali Tugtekin Turan  1      0          0.34
Dietrich Klakow            2      1          5.76
Emmanuel Vincent           3      2963       186.26
Denis Jouvet               4      0          2.03