Paper Title

Cellular Network Capacity and Coverage Enhancement with MDT Data and Deep Reinforcement Learning

Paper Authors

Marco Skocaj, Lorenzo Mario Amorosa, Giorgio Ghinamo, Giuliano Muratore, Davide Micheli, Flavio Zabini, Roberto Verdone

Paper Abstract

Recent years witnessed a remarkable increase in the availability of data and computing resources in communication networks. This contributed to the rise of data-driven over model-driven algorithms for network automation. This paper investigates a Minimization of Drive Tests (MDT)-driven Deep Reinforcement Learning (DRL) algorithm to optimize coverage and capacity by tuning antenna tilts on a cluster of cells from TIM's cellular network. We jointly utilize MDT data, electromagnetic simulations, and network Key Performance Indicators (KPIs) to define a simulated network environment for the training of a Deep Q-Network (DQN) agent. Some tweaks have been introduced to the classical DQN formulation to improve the agent's sample efficiency, stability, and performance. In particular, a custom exploration policy is designed to introduce soft constraints at training time. Results show that the proposed algorithm outperforms baseline approaches such as DQN and best-first search in terms of long-term reward and sample efficiency. Our results indicate that MDT-driven approaches constitute a valuable tool for autonomous coverage and capacity optimization of mobile radio networks.
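As a rough illustration of the exploration mechanism the abstract describes, the sketch below shows a DQN-style agent whose epsilon-greedy policy is restricted to tilt changes that keep every cell inside an allowed range, i.e., soft constraints applied at training time. This is not the authors' implementation: the cluster size (`N_CELLS`), tilt step set, tilt bounds, reward, and network architecture are all illustrative assumptions, and the paper's environment is built from MDT data, electromagnetic simulations, and KPIs rather than reproduced here.

```python
# Minimal sketch (not the authors' code) of constrained exploration
# for a DQN agent tuning antenna tilts. All constants are assumptions.
import random
import torch
import torch.nn as nn

N_CELLS = 5                      # assumed cluster size
TILT_STEPS = [-1.0, 0.0, +1.0]   # assumed per-action tilt changes (degrees)
N_ACTIONS = N_CELLS * len(TILT_STEPS)
TILT_MIN, TILT_MAX = 0.0, 12.0   # assumed electrical tilt range (degrees)

class QNet(nn.Module):
    """Maps the vector of current tilts to one Q-value per action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_CELLS, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

def valid_actions(tilts):
    """Soft constraint: only allow tilt changes that keep every cell
    inside [TILT_MIN, TILT_MAX]."""
    feasible = []
    for a in range(N_ACTIONS):
        cell, step = divmod(a, len(TILT_STEPS))
        if TILT_MIN <= tilts[cell] + TILT_STEPS[step] <= TILT_MAX:
            feasible.append(a)
    return feasible

def constrained_epsilon_greedy(qnet, tilts, eps):
    """Epsilon-greedy exploration restricted to the feasible action set."""
    feasible = valid_actions(tilts)
    if random.random() < eps:
        return random.choice(feasible)
    with torch.no_grad():
        q = qnet(torch.tensor(tilts).float())
    # Mask infeasible actions out of the greedy argmax.
    mask = torch.full((N_ACTIONS,), float("-inf"))
    mask[feasible] = 0.0
    return int(torch.argmax(q + mask))
```

Under these assumptions, infeasible tilt configurations are never sampled during exploration and are masked out of the greedy argmax, which keeps training away from settings an operator would not deploy.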
