Title
Continual Learning for Fake Audio Detection
Abstract
Fake audio attacks have become a major threat to speaker verification systems. Although current detection approaches achieve promising results in dataset-specific scenarios, they struggle on unseen spoofing data. Fine-tuning and retraining from scratch have been applied to incorporate new data; however, fine-tuning degrades performance on previous data, and retraining consumes considerable time and computational resources. Moreover, in some situations previous data are unavailable due to privacy constraints. To address these problems, this paper proposes Detecting Fake Without Forgetting, a continual-learning-based method that enables the model to learn new spoofing attacks incrementally. A knowledge distillation loss is added to the loss function to preserve the memory of the original model. Assuming that the distribution of genuine speech is consistent across scenarios, an extra embedding similarity loss is used as a further constraint to align positive samples. Experiments are conducted on the ASVspoof2019 dataset. The results show that the proposed method outperforms fine-tuning, with a relative reduction in average equal error rate of up to 81.62%.
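The abstract describes two constraints added to the training objective: a knowledge distillation term that keeps the new model's outputs close to the original model's, and an embedding similarity term that aligns genuine-speech embeddings between the two models. The sketch below shows one plausible NumPy formulation of such a combined loss; the temperature `T` and the weights `alpha` and `beta` are illustrative assumptions, not values from the paper, and the paper's exact loss may differ.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between the old (teacher) and new (student) model's
    softened output distributions; preserves the original model's memory."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1)))

def embedding_similarity_loss(student_emb, teacher_emb):
    """1 - cosine similarity between the two models' embeddings of the
    same genuine utterances (positive sample alignment)."""
    s = np.asarray(student_emb, dtype=float)
    t = np.asarray(teacher_emb, dtype=float)
    cos = np.sum(s * t, axis=-1) / (
        np.linalg.norm(s, axis=-1) * np.linalg.norm(t, axis=-1))
    return float(np.mean(1.0 - cos))

def total_loss(ce_loss, student_logits, teacher_logits,
               student_emb, teacher_emb, alpha=0.5, beta=0.1):
    """Hypothetical combined objective: cross-entropy on the new data
    plus the two regularization terms (weights are illustrative)."""
    return (ce_loss
            + alpha * distillation_loss(student_logits, teacher_logits)
            + beta * embedding_similarity_loss(student_emb, teacher_emb))
```

Both regularizers vanish when the new model agrees with the old one: identical logits give zero distillation loss, and identical embeddings give zero similarity loss, so the terms only penalize drift away from the original model.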
Year: 2021
DOI: 10.21437/Interspeech.2021-794
Venue: Interspeech
DocType: Conference
Citations: 1
PageRank: 0.36
References: 0
Authors: 6
Name            Order  Citations  PageRank
Haoxin Ma       1      1          1.71
Jiangyan Yi     2      19         17.99
Jianhua Tao     3      848        138.00
Ye Bai          4      7          5.52
Zhengkun Tian   5      3          5.79
Chenglong Wang  6      1          1.03