Self-Dispatch of Wind-Storage Integrated System: A Deep Reinforcement Learning Approach

Xiangyu Wei, Yue Xiang, Junlong Li, Xin Zhang

Research output: Contribution to journal › Article › peer-review



The uncertainty of wind power and electricity prices restricts the profitability of a wind-storage integrated system (WSS) participating in the real-time market (RTM). This paper presents a self-dispatch model for the WSS based on deep reinforcement learning (DRL). The model learns an integrated bidding and charging policy for the WSS from historical data. It combines a maximum entropy formulation with the distributed prioritized experience replay framework known as Ape-X. Ape-X decouples acting from learning during training through a central shared replay memory, which improves the efficiency and performance of the DRL procedure, while the maximum entropy formulation drives the agent to explore multiple near-optimal behaviors, so the learned policy is more stable under the uncertainty of wind power and electricity prices. Compared with traditional methods, the model brings more benefit to wind farms while ensuring robustness.
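The decoupling described above can be illustrated with a minimal sketch: several actors write transitions into a central shared replay memory with proportional prioritization, while a learner samples high-priority batches and feeds updated priorities back. This is a hypothetical toy, not the authors' implementation; the class and function names, the stand-in transitions, and the faked TD errors are all assumptions for illustration.

```python
import random

class PrioritizedReplay:
    """Central shared replay memory with proportional prioritization."""
    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha       # controls how strongly priority skews sampling
        self.data = []           # stored transitions
        self.priorities = []     # one priority per transition

    def add(self, transition, priority=1.0):
        if len(self.data) >= self.capacity:   # drop oldest when full
            self.data.pop(0)
            self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append(priority ** self.alpha)

    def sample(self, batch_size):
        # Sample indices with probability proportional to stored priority.
        idx = random.choices(range(len(self.data)),
                             weights=self.priorities, k=batch_size)
        return idx, [self.data[i] for i in idx]

    def update_priorities(self, idx, new_priorities):
        for i, p in zip(idx, new_priorities):
            self.priorities[i] = p ** self.alpha

def actor(actor_id, replay, n_steps):
    """Each decoupled actor explores independently and writes to shared memory."""
    for t in range(n_steps):
        transition = (actor_id, t)            # stand-in for (s, a, r, s')
        replay.add(transition, priority=random.uniform(0.1, 2.0))

def learner(replay, batch_size):
    """The learner samples a prioritized batch and refreshes its priorities."""
    idx, batch = replay.sample(batch_size)
    # A real learner would compute TD errors here; we fake them.
    td_errors = [random.uniform(0.1, 1.0) for _ in batch]
    replay.update_priorities(idx, td_errors)
    return batch

random.seed(0)
memory = PrioritizedReplay(capacity=1000)
for a in range(4):                            # four decoupled actors
    actor(a, memory, n_steps=50)
batch = learner(memory, batch_size=32)
print(len(memory.data), len(batch))           # 200 32
```

Because sampling is priority-weighted, transitions with larger (faked) TD errors are replayed more often, which is the mechanism Ape-X uses to keep the single learner focused on informative experience from many actors.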

Original language: English
Pages (from-to): 1861-1864
Journal: IEEE Transactions on Sustainable Energy
Issue number: 3
Early online date: 7 Mar 2022
Publication status: Published - 31 Jul 2022

Bibliographical note

Funding Information:
This work was supported by the National Natural Science Foundation of China under Grants U2166211 and 52177103. Paper no. PESL-00158-2021.

Publisher Copyright:
© 2010-2012 IEEE.


Keywords

  • Wind farm
  • deep reinforcement learning
  • distributed prioritized experience replay
  • electricity market
  • energy storage system
  • maximum entropy

ASJC Scopus subject areas

  • Renewable Energy, Sustainability and the Environment


