Title
STABLEMOE: Stable Routing Strategy for Mixture of Experts
Abstract
The Mixture-of-Experts (MoE) technique can scale up the model size of Transformers with an affordable computational overhead. We point out that existing learning-to-route MoE methods suffer from the routing fluctuation issue, i.e., the target expert of the same input may change along with training, but only one expert will be activated for the input during inference. The routing fluctuation tends to harm sample efficiency because the same input updates different experts but only one is finally used. In this paper, we propose STABLEMOE with two training stages to address the routing fluctuation problem. In the first training stage, we learn a balanced and cohesive routing strategy and distill it into a lightweight router decoupled from the backbone model. In the second training stage, we utilize the distilled router to determine the token-to-expert assignment and freeze it for a stable routing strategy. We validate our method on language modeling and multilingual machine translation. The results show that STABLEMOE outperforms existing MoE methods in terms of both convergence speed and performance. The code is available at https://github.com/Hunter-DDM/stablemoe.
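The abstract describes a two-stage training scheme: stage 1 learns a balanced routing strategy and distills it into a lightweight router decoupled from the backbone; stage 2 freezes that router so each token is always assigned to the same expert. The following is a minimal PyTorch sketch of that idea, not the authors' implementation (see the linked repository for that); the class and function names (LearnedRouter, DistilledRouter, stage1_distillation_loss, stage2_route) and the use of a token-embedding router are illustrative assumptions.

    # Minimal sketch of the two-stage routing idea; hypothetical names, not the official code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LearnedRouter(nn.Module):
        """Stage-1 router: trained jointly with the backbone (learning-to-route)."""
        def __init__(self, d_model: int, num_experts: int):
            super().__init__()
            self.proj = nn.Linear(d_model, num_experts)

        def forward(self, hidden: torch.Tensor) -> torch.Tensor:
            # hidden: (tokens, d_model) -> routing logits: (tokens, num_experts)
            return self.proj(hidden)

    class DistilledRouter(nn.Module):
        """Lightweight router decoupled from the backbone: routes by token id only."""
        def __init__(self, vocab_size: int, num_experts: int):
            super().__init__()
            self.expert_logits = nn.Embedding(vocab_size, num_experts)

        def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
            return self.expert_logits(token_ids)

    def stage1_distillation_loss(learned_logits, distilled_logits):
        # Distill the stage-1 routing decisions into the lightweight router,
        # e.g. by matching its distribution to the learned router's (one possible choice).
        target = F.softmax(learned_logits.detach(), dim=-1)
        return F.cross_entropy(distilled_logits, target)

    def stage2_route(distilled_router: DistilledRouter, token_ids: torch.Tensor):
        # Stage 2: the distilled router is frozen, so the same token is always
        # sent to the same expert, removing the routing fluctuation at training time.
        for p in distilled_router.parameters():
            p.requires_grad_(False)
        with torch.no_grad():
            return distilled_router(token_ids).argmax(dim=-1)  # expert index per token

Because the frozen router depends only on token identity rather than on evolving hidden states, the token-to-expert assignment can no longer fluctuate as the backbone is updated, which is the stability property the abstract argues improves sample efficiency.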
Year
2022
DOI
10.18653/v1/2022.acl-long.489
Venue
PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS)
DocType
Conference
Volume
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Citations
0
PageRank
0.34
References
0
Authors
7
Name            Order   Citations   PageRank
Damai Dai       1       1           2.72
Li Dong         2       582         31.86
Shuming Ma      3       83          15.92
Bo Zheng        4       0           0.34
Zhifang Sui     5       172         39.06
Baobao Chang    6       445         46.85
Furu Wei        7       1956        107.57