Deep Reinforcement Learning for Resource Allocation in 5G Networks

Thursday 23 January 2025


The quest for efficient resource allocation in 5G networks has led researchers to explore novel approaches, including deep reinforcement learning (DRL). A recent study demonstrates the effectiveness of DRL in optimizing radio resource management (RRM) for coexistence between Narrowband Internet of Things (NB-IoT), LTE-M, and 5G New Radio (NR).


In traditional RRM systems, base stations allocate resources based on fixed rules or heuristics. However, these methods often fail to account for the dynamic nature of wireless networks, leading to suboptimal performance. DRL, on the other hand, enables agents to learn from experience and adapt to changing conditions.
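To make the contrast concrete, here is a sketch of one classic fixed-rule scheduler, the proportional-fair heuristic. It is a standard textbook baseline, not the specific rule set used in the study, and all rates below are invented example values.

```python
# Illustrative proportional-fair scheduler: a classic fixed-rule baseline
# of the kind DRL-based schedulers aim to improve on.

def proportional_fair_pick(inst_rates, avg_rates):
    """Pick the user maximising instantaneous rate / historical average rate."""
    scores = [r / max(a, 1e-9) for r, a in zip(inst_rates, avg_rates)]
    return scores.index(max(scores))

def update_averages(avg_rates, served, inst_rates, window=100):
    """Exponential moving average of each user's served throughput."""
    beta = 1.0 / window
    return [
        (1 - beta) * a + beta * (inst_rates[i] if i == served else 0.0)
        for i, a in enumerate(avg_rates)
    ]

inst = [2.0, 5.0, 1.0]   # achievable rates this slot (example values, Mbps)
avg = [1.0, 4.0, 0.6]    # long-run average throughput per user
user = proportional_fair_pick(inst, avg)   # user 0: best rate relative to history
avg = update_averages(avg, user, inst)
```

The rule is static: it always trades peak rate against fairness in the same fixed ratio, regardless of traffic mix or channel dynamics, which is exactly the rigidity a learning agent can avoid.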


The study presents a unified framework for DRL-based RRM, encompassing power allocation, interference management, and user scheduling. The authors propose three DRL algorithms: deep Q-network (DQN), proximal policy optimization (PPO), and deep deterministic policy gradient (DDPG). Each algorithm is designed to optimize a specific performance metric, such as sum rate, fairness, or delay.
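The core idea behind the DQN approach can be conveyed with a tabular Q-learning loop on a toy power-allocation task. In a real DQN the table is replaced by a deep network, and every environment constant below (power levels, channel gains, reward shaping) is invented for illustration rather than taken from the paper.

```python
import random
import numpy as np

# Toy power-allocation environment: one link picks a transmit power each step.
# Reward = achievable rate minus a power cost. All constants are illustrative.
POWER_LEVELS = [0.1, 0.5, 1.0]      # candidate transmit powers (W)
CHANNEL_GAINS = [0.2, 1.0, 3.0]     # discretised channel states
NOISE = 0.1

def step(state, action):
    p, g = POWER_LEVELS[action], CHANNEL_GAINS[state]
    rate = np.log2(1.0 + p * g / NOISE)     # Shannon-style rate
    reward = rate - 1.5 * p                  # rate minus a power penalty
    next_state = random.randrange(len(CHANNEL_GAINS))  # i.i.d. fading states
    return next_state, reward

def train(steps=20000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Epsilon-greedy Q-learning; a DQN would approximate q with a network."""
    random.seed(seed)
    q = np.zeros((len(CHANNEL_GAINS), len(POWER_LEVELS)))
    s = 0
    for _ in range(steps):
        a = random.randrange(len(POWER_LEVELS)) if random.random() < eps \
            else int(np.argmax(q[s]))
        s2, r = step(s, a)
        q[s, a] += alpha * (r + gamma * np.max(q[s2]) - q[s, a])
        s = s2
    return q

q = train()
# The learned policy should spend full power when the channel is strong,
# since the rate gain then outweighs the power penalty.
```

PPO and DDPG differ in that they learn a policy directly (stochastic for PPO, deterministic and continuous-valued for DDPG) rather than reading actions off a learned value table.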


The experiments demonstrate that the DRL-based RRM algorithms outperform traditional methods in terms of overall throughput, fairness, and delay. The DQN algorithm, which learns to approximate an action-value (Q) function, shows promising results for power allocation and interference management. PPO, which uses policy gradients to optimize user scheduling, achieves better fairness than DQN. DDPG, which combines actor-critic methods with deep learning, excels in delay-sensitive applications.


The study highlights the potential of DRL for RRM in 5G networks, particularly in scenarios where coexistence between NB-IoT, LTE-M, and 5G NR devices is necessary. By leveraging DRL, network operators can optimize resource allocation to ensure efficient use of spectrum and improve overall network performance.


The research also underscores the importance of considering small-scale fading effects, which can significantly impact RRM decisions in 5G networks. The authors demonstrate that incorporating small-scale fading into the DRL framework can lead to further improvements in performance.
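One common way to model small-scale fading is a Rayleigh-distributed channel coefficient folded into the SINR that the agent observes. The sketch below shows that idea in a few lines; the parameter values are illustrative, and this is a generic Rayleigh model rather than the paper's exact channel model.

```python
import numpy as np

def sinr_with_fading(tx_power, path_gain, interference, noise, rng):
    """SINR including a Rayleigh small-scale fading coefficient.

    When h is zero-mean complex Gaussian, its envelope |h| is Rayleigh
    and the fading power |h|^2 is exponential with unit mean.
    """
    h = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    fading_power = np.abs(h) ** 2
    return tx_power * path_gain * fading_power / (interference + noise)

rng = np.random.default_rng(42)
# Illustrative link budget: 1 W tx power, large-scale path gain 1e-3,
# interference 2e-7 W, thermal noise 1e-7 W.
samples = [sinr_with_fading(1.0, 1e-3, 2e-7, 1e-7, rng) for _ in range(10000)]
mean_sinr = np.mean(samples)   # should be near 1e-3 / 3e-7, about 3333
```

Because the fading power fluctuates around its unit mean from slot to slot, an agent whose state includes this term sees the channel variability directly, instead of only the slowly varying large-scale path gain.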


As 5G networks continue to evolve, the need for efficient and adaptive RRM strategies will only grow. The study’s findings suggest that DRL has the potential to play a crucial role in optimizing resource allocation for coexistence between different wireless technologies. With its ability to learn from experience and adapt to changing conditions, DRL could be a game-changer for 5G networks.


Cite this article: “Deep Reinforcement Learning for Resource Allocation in 5G Networks”, The Science Archive, 2025.


Deep Reinforcement Learning, Radio Resource Management, 5G Networks, Narrowband Internet of Things, LTE-M, 5G New Radio, Power Allocation, Interference Management, User Scheduling, Small-Scale Fading Effects


Reference: Shahida Jabeen, “A Deep Reinforcement Learning based Scheduler for IoT Devices in Co-existence with 5G-NR” (2025).

