Title
A Survey on Deep Reinforcement Learning Approaches for Power System Control and Optimization
Authors
장호천(Haotian Zhang) ; 왕천(Chen Wang) ; 이민주(Minju Lee) ; 이명훈(Myoung Hoon Lee) ; 문준(Jun Moon)
DOI
https://doi.org/10.5370/KIEE.2025.74.6.1041
Keywords
Deep reinforcement learning; energy dispatch; topology control; emergency load shedding
Abstract
With the increasing complexity of modern power systems caused by the large-scale integration of renewable energy sources, minimizing operational costs while maintaining stable grid operation has become a core challenge in power system scheduling and optimization. Energy dispatch, topology control, and emergency load shedding are key measures for improving power system stability and flexibility. However, traditional control policies rely on predefined rules or mathematical optimization models, which are prone to computational bottlenecks and response lags in high-dimensional dynamic environments, making it difficult to meet the demands of smart grids. In recent years, deep reinforcement learning (DRL) has gradually become a cutting-edge technique for power system scheduling and control owing to its strong adaptive learning and decision-optimization capabilities. Existing research shows that DRL can improve the flexibility and disturbance resilience of the power grid by learning optimal policies through autonomous interaction with the environment, surpassing traditional optimization methods in real-time decision-making within high-dimensional state spaces. In this paper, we systematically review the applications of DRL in energy dispatch, topology control, and emergency load shedding, focusing on their optimization policies, technological breakthroughs, and applicability, and analyze the current challenges and future research directions.
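To make the agent-environment interaction mentioned in the abstract concrete, the following is a minimal, purely illustrative sketch (not taken from any of the surveyed works) of a reinforcement learning loop on a toy single-unit dispatch problem. The discretized load levels, the cost coefficient, and the use of tabular Q-learning are all simplifying assumptions; an actual DRL method would replace the Q-table with a neural network function approximator.

import numpy as np

# Toy dispatch environment (hypothetical, for illustration only):
# state  = discretized net load level, action = dispatch level of one unit.
# Reward penalizes both generation cost and load-generation mismatch.
N_LEVELS = 5
rng = np.random.default_rng(0)

def step(load, action):
    cost = 0.2 * action                    # fuel cost grows with dispatch level
    mismatch = abs(load - action)          # imbalance between load and generation
    reward = -(cost + mismatch)
    next_load = int(np.clip(load + rng.integers(-1, 2), 0, N_LEVELS - 1))
    return next_load, reward

# Tabular Q-values stand in for the neural approximator used in DRL.
Q = np.zeros((N_LEVELS, N_LEVELS))
alpha, gamma, eps = 0.1, 0.95, 0.1

load = int(rng.integers(N_LEVELS))
for t in range(20000):
    # Epsilon-greedy action selection: explore occasionally, otherwise exploit.
    a = int(rng.integers(N_LEVELS)) if rng.random() < eps else int(np.argmax(Q[load]))
    next_load, r = step(load, a)
    # Temporal-difference update toward the Bellman target.
    Q[load, a] += alpha * (r + gamma * np.max(Q[next_load]) - Q[load, a])
    load = next_load

print("Greedy dispatch per load level:", np.argmax(Q, axis=1))

Under these assumptions the learned greedy policy simply tracks the load level, which is the intuitive optimum when the mismatch penalty outweighs the generation cost; the surveyed DRL methods address far richer state spaces and constraints than this sketch.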