Self-Dispatch of Wind-Storage Integrated System: A Deep Reinforcement Learning Approach

Xiangyu Wei, Yue Xiang, Junlong Li, Xin Zhang

Research output: Contribution to journal › Article › peer-review


Abstract

The uncertainty of wind power and electricity prices restricts the profitability of a wind-storage integrated system (WSS) participating in the real-time market (RTM). This paper presents a self-dispatch model for the WSS based on deep reinforcement learning (DRL). The designed model learns the integrated bidding and charging policy of the WSS from historical data. In addition, the model adopts the maximum entropy framework together with the distributed prioritized experience replay architecture known as Ape-X. Ape-X decouples acting from learning during training through a central shared replay memory, which improves the efficiency and performance of the DRL procedure. Meanwhile, the maximum entropy framework encourages the agent to explore multiple near-optimal actions, so the learned policy is more stable under the uncertainty of wind power and electricity prices. Compared with traditional methods, the proposed model yields higher profits for wind farms while maintaining robustness.
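To make the abstract's two ingredients concrete, the sketch below illustrates (i) a proportional prioritized replay buffer of the kind a central Ape-X learner samples from while actors push transitions, and (ii) the entropy-regularized objective of maximum-entropy RL. This is not the authors' code: the class name, the WSS state/action layout (wind forecast, price, state of charge; bid quantity and charge/discharge power), and all numerical values are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): proportional prioritized
# replay as used in Ape-X, plus a schematic maximum-entropy objective.
import numpy as np

class PrioritizedReplay:
    """Shared replay memory: actors push transitions with priorities,
    the learner samples them with probability proportional to priority^alpha."""
    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.priorities, self.pos = [], [], 0

    def add(self, transition, priority):
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.priorities.append(priority)
        else:
            self.data[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        p = np.asarray(self.priorities) ** self.alpha
        p /= p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=p)
        # Importance-sampling weights correct the bias of non-uniform sampling.
        w = (len(self.data) * p[idx]) ** (-beta)
        w /= w.max()
        return idx, [self.data[i] for i in idx], w

    def update_priorities(self, idx, td_errors, eps=1e-6):
        for i, e in zip(idx, td_errors):
            self.priorities[i] = abs(e) + eps

# Hypothetical WSS transition: state = (wind forecast, price, state of charge),
# action = (bid quantity, storage charge/discharge power). Random placeholders
# stand in for the actors' interaction with the real-time market environment.
buffer = PrioritizedReplay(capacity=10_000)
rng = np.random.default_rng(0)
for _ in range(1_000):                      # several actors would do this in parallel
    state = rng.random(3)
    action = rng.uniform(-1.0, 1.0, size=2)
    reward = float(rng.normal())            # e.g. market revenue minus imbalance penalty
    next_state = rng.random(3)
    buffer.add((state, action, reward, next_state), priority=1.0)

idx, batch, weights = buffer.sample(batch_size=32)
td_errors = rng.normal(size=len(idx))       # would come from the learner's critic
buffer.update_priorities(idx, td_errors)

# Maximum-entropy objective (schematic):
#   J(pi) = E[ sum_t  r_t + temperature * H(pi(.|s_t)) ]
# i.e. the learner's policy loss adds an entropy bonus, encouraging diverse
# bidding/charging actions that stay robust to wind and price uncertainty.
```

The separation of concerns shown here mirrors the decoupling described in the abstract: actor processes only interact with the market environment and write to the buffer, while a single learner samples prioritized batches and updates the policy.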

Original language: English
Pages (from-to): 1861 - 1864
Journal: IEEE Transactions on Sustainable Energy
Volume: 13
Issue number: 3
Early online date: 7 Mar 2022
DOIs
Publication status: Published - 31 Jul 2022

Keywords

  • Wind farm
  • deep reinforcement learning
  • distributed prioritized experience replay
  • electricity market
  • energy storage system
  • maximum entropy

ASJC Scopus subject areas

  • Renewable Energy, Sustainability and the Environment
