Title
Small cell switch policy: A reinforcement learning approach
Abstract
Small cells are a flexible solution to the continuously increasing demand for wireless traffic. In this paper, we focus on the on-off switching of small cell base stations (SBSs) in heterogeneous networks. In our scenario, users transmit data through either an SBS, when it is active, or the macro cell base station (MBS). A start-up energy cost is incurred whenever an SBS switches on. The whole network acts as a queueing system, so network latency is also taken into account. Network traffic is modeled by a Markov modulated Poisson process (MMPP) whose parameters are unknown to the network control center. To maximize the system reward, we introduce a reinforcement learning approach to obtain the optimal on-off switching policy, formulating the learning procedure as a Markov decision process (MDP). An estimation method is proposed to measure the network load, and a single-agent Q-learning algorithm is then developed; the convergence of this algorithm is proved. Simulation results are given to evaluate the performance of the proposed algorithm.
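The abstract describes tabular single-agent Q-learning over an MDP with a start-up energy cost. As a minimal illustrative sketch only (not the paper's actual state space, action set, or reward design; all names and parameter values here are hypothetical), a generic Q-learning update on a toy on/off MDP might look like:

```python
import random

# Toy MDP (hypothetical, for illustration only): the state is whether the
# SBS is on (1) or off (0); the action is the next on/off decision. The
# reward trades a service gain against an energy cost, with an extra
# start-up cost charged when the SBS switches from off to on, mirroring
# the start-up energy cost mentioned in the abstract.
SERVICE_GAIN, ENERGY_COST, STARTUP_COST = 1.0, 0.4, 0.3

def step(state, action):
    reward = (SERVICE_GAIN - ENERGY_COST) * action
    if state == 0 and action == 1:   # switching on incurs the start-up cost
        reward -= STARTUP_COST
    return action, reward            # next state = the chosen on/off mode

def q_learning(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
    state = 0
    for _ in range(episodes):
        # epsilon-greedy action selection
        if rng.random() < eps:
            action = rng.choice((0, 1))
        else:
            action = max((0, 1), key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        # standard Q-learning temporal-difference update
        best_next = max(Q[(nxt, a)] for a in (0, 1))
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt
    return Q
```

With these toy numbers the service gain exceeds the energy cost, so the learned greedy policy keeps the SBS on; in the paper's setting the trade-off instead depends on the MMPP-driven load estimate and the queueing delay.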
Year: 2014
DOI: 10.1109/WCSP.2014.6992126
Venue: WCSP
Keywords: data communication, queueing system, cellular radio, macrocell base station, single-agent Q-learning algorithm, learning (artificial intelligence), queueing theory, wireless traffic demand, algorithm convergence, radio networks, SBS switch, convergence, telecommunication computing, MBS, heterogeneous network latency, MMPP, small cell optimal on-off switch policy, reinforcement learning approach, Markov decision process, start-up energy cost, Markov modulated Poisson process, telecommunication traffic, small cell base station, data transmission, MDP, network control center, Markov processes
DocType: Conference
Citations: 2
PageRank: 0.42
References: 0
Authors: 5
Name           Order  Citations  PageRank
Luyang Wang    1      4          2.51
Xinxin Feng    2      33         7.08
Xiaoying Gan   3      344        48.16
Jing Liu       4      92         9.96
Hui Yu         5      200        18.98