Title
Explore Adaptive Dropout Deep Computing And Reinforcement Learning To Large-Scale Tasks Processing For Big Data
Abstract
Large-scale task processing has become a research hotspot in big data analysis and processing, and many large-scale task processing methods based on deep neural networks have been proposed. Since these models are trained on historical data, many task requests in real applications cannot be recognized and processed from previous knowledge and experience. Moreover, over-fitting is prone to occur when deep learning is used to learn complex structures. Motivated by these observations, this paper proposes an improved large-scale task processing approach, Tard (large-scale Tasks processing based on Adaptive dropout deep computing and Reinforcement learning for big Data). We design a virtual network mapping method based on an adaptive-dropout deep computing model to achieve large-scale task allocation. Meanwhile, since task requests in the training set do not always carry corresponding labels, we employ policy gradients and back-propagation for model training, so that the virtual node mapping scheme continuously evolves toward higher revenue. Eventually, each task request is allocated to an appropriate processing node for efficient execution. Experimental results show that Tard not only effectively avoids model over-fitting, but also improves the capability of task request recognition and processing in real applications while satisfying large-scale task requests.
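The policy-gradient training described in the abstract can be illustrated with a minimal REINFORCE sketch on a toy node-selection problem. This is not the paper's model: the four candidate nodes, their `node_revenue` values, and the `train` routine below are hypothetical stand-ins for the virtual node mapping decision and its revenue signal.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def train(node_revenue, steps=2000, lr=0.1, seed=0):
    """REINFORCE on a toy node-selection task: learn a softmax policy
    that prefers the processing node yielding the highest revenue."""
    rng = np.random.default_rng(seed)
    n = len(node_revenue)
    theta = np.zeros(n)          # policy preferences, one per candidate node
    baseline = 0.0               # running reward baseline (variance reduction)
    for _ in range(steps):
        p = softmax(theta)
        a = rng.choice(n, p=p)               # sample a mapping decision
        r = node_revenue[a]                  # revenue acts as the reward
        baseline += 0.05 * (r - baseline)
        grad = -p                            # d log pi(a|theta) / d theta ...
        grad[a] += 1.0                       # ... equals 1{j=a} - p_j
        theta += lr * (r - baseline) * grad  # policy-gradient ascent step
    return softmax(theta)

# Hypothetical revenues for four candidate processing nodes.
probs = train(np.array([1.0, 0.5, 2.0, 0.2]))
```

In Tard itself the reward would come from the revenue of an accepted virtual network mapping, and the policy would be a deep (adaptive-dropout) network rather than a bare preference vector; the gradient update, however, has the same shape.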
Year: 2019
DOI: 10.1109/ICCChina.2019.8855933
Venue: 2019 IEEE/CIC INTERNATIONAL CONFERENCE ON COMMUNICATIONS IN CHINA (ICCC)
Field: Revenue, Training set, Computer science, Virtual network mapping, Real-time computing, Artificial intelligence, Deep learning, Backpropagation, Big data, Deep neural networks, Machine learning, Reinforcement learning
DocType: Conference
ISSN: 2377-8644
Citations: 0
PageRank: 0.34
References: 0
Authors: 5

Name         Order  Citations  PageRank
Jia Zhao     1      0          0.34
Ming Hu      2      0          0.34
Yan Ding     3      40         12.03
Gaochao Xu   4      183        24.11
Chunyi Wu    5      5          2.43