Abstract
The uncertainty of wind power and electricity prices restricts the profitability of a wind-storage integrated system (WSS) participating in the real-time market (RTM). This paper presents a self-dispatch model for the WSS based on deep reinforcement learning (DRL). The model learns an integrated bidding and charging policy for the WSS from historical data. In addition, the maximum entropy and distributed prioritized experience replay framework, known as Ape-X, is employed. Ape-X decouples acting and learning during training through a central shared replay memory, improving the efficiency and performance of the DRL procedure. The maximum entropy framework encourages the agent to explore diverse near-optimal actions, so the learned policy remains stable under the uncertainty of wind power and electricity prices. Compared with traditional methods, the proposed model yields higher profits for wind farms while maintaining robustness.
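The abstract combines two standard ingredients: an entropy-regularized objective of the form J(π) = E[Σ_t (r_t + α H(π(·|s_t)))], and Ape-X-style distributed prioritized replay in which many actors push transitions into a shared memory that one learner samples by TD-error priority. The sketch below is not the authors' code; it only illustrates a proportional prioritized replay buffer of this kind, with all class and parameter names assumed for illustration.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized replay (Ape-X style shared memory).

    Actors call add(); the learner calls sample() and update_priorities().
    """

    def __init__(self, capacity, alpha=0.6, beta=0.4):
        self.capacity = capacity
        self.alpha = alpha        # how strongly priorities skew sampling
        self.beta = beta          # importance-sampling correction strength
        self.storage = []
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0

    def add(self, transition, priority=1.0):
        # Actors push (state, action, reward, next_state, done) tuples.
        if len(self.storage) < self.capacity:
            self.storage.append(transition)
        else:
            self.storage[self.pos] = transition
        self.priorities[self.pos] = priority ** self.alpha
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Learner samples proportionally to priority and gets IS weights.
        p = self.priorities[:len(self.storage)]
        probs = p / p.sum()
        idx = np.random.choice(len(self.storage), batch_size, p=probs)
        weights = (len(self.storage) * probs[idx]) ** (-self.beta)
        weights /= weights.max()
        batch = [self.storage[i] for i in idx]
        return idx, batch, weights

    def update_priorities(self, idx, td_errors, eps=1e-6):
        # Learner writes back new |TD error| priorities after each update.
        self.priorities[idx] = (np.abs(td_errors) + eps) ** self.alpha
```

In an Ape-X arrangement, each actor interacts with its own copy of the environment (here, the simulated RTM and storage dynamics) and only writes transitions into this shared buffer, while the learner repeatedly samples, updates the policy under the entropy-regularized objective, and refreshes priorities; the hyperparameters alpha and beta above are assumed values, not those reported in the paper.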
| Original language | English |
| --- | --- |
| Pages (from-to) | 1861-1864 |
| Journal | IEEE Transactions on Sustainable Energy |
| Volume | 13 |
| Issue number | 3 |
| Early online date | 7 Mar 2022 |
| DOIs | |
| Publication status | Published - 31 Jul 2022 |
Keywords
- Wind farm
- deep reinforcement learning
- distributed prioritized experience replay
- electricity market
- energy storage system
- maximum entropy
ASJC Scopus subject areas
- Renewable Energy, Sustainability and the Environment