Abstract |
---|
We consider a two-tier urban Heterogeneous Network in which small cells powered by renewable energy are deployed to extend capacity and to offload macro base stations. We use reinforcement learning to design an algorithm that autonomously learns energy-inflow and traffic-demand patterns. The algorithm is based on a decentralized multi-agent Q-learning technique that, by interacting with the environment, obtains policies aimed at improving system performance in terms of drop rate, throughput, and energy efficiency. Simulation results show that our solution effectively adapts to changing environmental conditions and meets most of our performance objectives. At the end of the paper we identify areas for improvement. |
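The abstract's core mechanism is tabular Q-learning run independently by each small-cell agent. A minimal sketch of that update rule is below; the state space (battery level, traffic load), the sleep/active action set, the reward function, and all parameters are illustrative assumptions, not taken from the paper:

```python
import random

random.seed(0)

# Illustrative state/action spaces for one small-cell agent:
# state = (battery level, traffic load), action = radio mode.
STATES = [(b, t) for b in ("low", "high") for t in ("low", "high")]
ACTIONS = ["sleep", "active"]

# Q-table initialised to zero.
Q = {s: {a: 0.0 for a in ACTIONS} for s in STATES}

def toy_reward(state, action):
    """Hypothetical reward: serving high traffic pays off,
    while staying active on a low battery is penalised."""
    battery, traffic = state
    if action == "active":
        return (1.0 if traffic == "high" else 0.2) - (0.5 if battery == "low" else 0.0)
    return 0.0  # sleeping neither serves traffic nor drains the battery

def step(state, action):
    """Toy environment: next energy/traffic conditions arrive at random."""
    return random.choice(STATES)

def q_learn(episodes=2000, alpha=0.1, gamma=0.9, epsilon=0.1):
    state = random.choice(STATES)
    for _ in range(episodes):
        # epsilon-greedy action selection
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(Q[state], key=Q[state].get)
        reward = toy_reward(state, action)
        nxt = step(state, action)
        # tabular Q-learning update:
        # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt].values()) - Q[state][action])
        state = nxt

q_learn()
# Greedy policy read out from the learned Q-table: with this reward shape,
# the agent stays active under high traffic and sleeps on a low battery.
policy = {s: max(Q[s], key=Q[s].get) for s in STATES}
```

In the multi-agent setting described by the abstract, each small cell would run such a loop on its own local state, with no central coordinator; the decentralization lies in each agent maintaining its own Q-table.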
Year | DOI | Venue |
---|---|---|
2015 | 10.1109/ICCW.2015.7247475 | 2015 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATION WORKSHOP (ICCW) |
Keywords | Field | DocType |
Mobile Networks, HetNet, Sustainability, Renewable Energy, Energy Efficiency, Q-Learning | Energy conservation, Demand patterns, Algorithm design, Computer science, Efficient energy use, Q-learning, Real-time computing, Throughput, Heterogeneous network, Reinforcement learning | Conference
ISSN | Citations | PageRank |
2164-7038 | 7 | 0.43 |
References | Authors |
---|---|
7 | 4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Marco Miozzo | 1 | 337 | 31.39 |
Lorenza Giupponi | 2 | 567 | 53.70 |
Michele Rossi | 3 | 228 | 26.33 |
Paolo Dini | 4 | 226 | 30.82 |