Title
Large-Scale Training System for 100-Million Classification at Alibaba
Abstract
In recent decades, extreme classification has become an important topic in deep learning. It has achieved great success in many areas, especially in computer vision and natural language processing (NLP). However, training a deep model with millions of classes is very challenging due to the memory and computation explosion in the final output layer. In this paper, we propose a large-scale training system to address these challenges. First, we build a hybrid parallel training framework to make the training process feasible. Second, we propose a novel softmax variation named KNN softmax, which reduces both GPU memory consumption and computation cost and improves training throughput. Then, to eliminate the communication overhead, we propose a new overlapping pipeline and a gradient sparsification method. Furthermore, we design a fast continuous convergence strategy that reduces the total number of training iterations by adaptively adjusting the learning rate and updating the model parameters. With all the proposed methods combined, we increase the throughput of our training system by 3.9× and reduce the number of training iterations by almost 60%. The experimental results show that, using an in-house cluster of 256 GPUs, we can train a classifier with 100 million classes on the Alibaba Retail Product Dataset in about five days, while achieving accuracy comparable to the naive softmax training process.
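The abstract does not spell out the KNN softmax algorithm, but the core idea of such selected-softmax methods is to evaluate the loss over only a small set of candidate classes (the nearest class centers plus the ground-truth class) instead of all classes. The sketch below illustrates that idea in NumPy under our own assumptions; the function name, the brute-force nearest-neighbor search, and the parameter `k` are illustrative, not the paper's actual implementation.

```python
import numpy as np

def knn_softmax_loss(feature, label, class_weights, k=32):
    """Illustrative selected-softmax: compute the cross-entropy loss over
    only the k classes whose weight vectors are nearest to the feature
    (always including the true class), rather than over all classes.
    This is a sketch of the general idea, not the paper's KNN softmax."""
    # Distance from the feature to every class weight vector
    # (a real system would use an approximate nearest-neighbor index).
    dists = np.linalg.norm(class_weights - feature, axis=1)
    # Indices of the k nearest classes.
    nearest = np.argsort(dists)[:k]
    # Ensure the ground-truth class participates in the loss.
    if label not in nearest:
        nearest = np.append(nearest[:-1], label)
    # Softmax restricted to the selected classes, with the usual
    # max-subtraction for numerical stability.
    logits = class_weights[nearest] @ feature
    logits -= logits.max()
    probs = np.exp(logits) / np.exp(logits).sum()
    target = np.where(nearest == label)[0][0]
    return -np.log(probs[target])

# Toy usage: 1,000 classes stand in for the paper's 100 million.
rng = np.random.default_rng(0)
W = rng.normal(size=(1000, 64))   # one weight vector per class
x = rng.normal(size=64)           # a sample's feature vector
loss = knn_softmax_loss(x, label=3, class_weights=W, k=32)
```

The memory and compute savings come from never materializing logits for the full class set; with 100 million classes, even a single full logit vector per sample is prohibitive on a GPU.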
Year
2020
DOI
10.1145/3394486.3403342
Venue
KDD '20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, CA, USA, July 2020
DocType
Conference
ISBN
978-1-4503-7998-4
Citations
1
PageRank
0.36
References
9
Authors
8
Order  Name           Citations  PageRank
1      Liuyihan Song  4          2.15
2      Pan Pan        10         4.29
3      Kang Zhao      20         5.11
4      Hao Yang       9          2.48
5      Yiming Chen    5          1.48
6      Yingya Zhang   21         3.81
7      Yinghui Xu     172        20.23
8      Rong Jin       6206       334.26