Optimizing Hybrid Microgrids in Real-time: A Comparative Analysis of Two Reinforcement Learning Training Methods
DOI: https://doi.org/10.24949/njes.v16i2.757

Abstract
Reinforcement learning has been employed in recent research to optimize energy storage system scheduling in microgrids with the aim of reducing overall system cost. However, applying reinforcement learning in real-time scenarios introduces uncertainties and delays, because extensive training is required to develop the optimal policy for the storage system. This work addresses these challenges and explores potential solutions for real-time dispatch control of the battery in a grid-tied microgrid. The study considers different approaches for training the agent, distinguishing between online and offline scheduling of the energy storage system, and analyzes the limitations of each approach and their implications for real-time performance. By developing a comprehensive microgrid model and comparing the two training approaches, this research contributes novel insights into efficient real-time scheduling of energy storage systems in grid-tied microgrids. The proposed approach presents a promising path towards addressing uncertainties and achieving optimal operation. In terms of average cost per year, the two approaches differ by 4% when perfect foresight of the real data is assumed; without such foresight, the real-time approach is more cost-effective.
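The kind of policy training the abstract describes can be illustrated with a deliberately small sketch: a tabular Q-learning agent that learns when to charge, idle, or discharge a battery against a fluctuating grid price. All names, state discretizations, prices, and hyperparameters below are illustrative assumptions, not the environment or algorithm actually used in the paper.

```python
import random

# Toy environment (assumed, not from the paper): discretized battery
# state-of-charge bins and a two-level stochastic electricity price.
SOC_LEVELS = 5          # SOC bins 0..4
PRICES = [0.1, 0.4]     # low / high grid price, assumed values
ACTIONS = [-1, 0, 1]    # discharge one bin, idle, charge one bin

def step(soc, price_idx, action):
    """Apply a dispatch action; return next SOC, next price index, step cost."""
    new_soc = min(max(soc + action, 0), SOC_LEVELS - 1)
    energy = new_soc - soc                 # +1 buys from grid, -1 offsets purchases
    cost = energy * PRICES[price_idx]
    next_price = random.randint(0, 1)      # i.i.d. price process for simplicity
    return new_soc, next_price, cost

def train(episodes=2000, alpha=0.1, gamma=0.95, eps=0.1):
    """Offline training loop: learn a tabular Q-function before deployment."""
    Q = {(s, p): [0.0] * len(ACTIONS)
         for s in range(SOC_LEVELS) for p in range(2)}
    for _ in range(episodes):
        soc, price = random.randint(0, SOC_LEVELS - 1), random.randint(0, 1)
        for _ in range(24):  # one day of hourly dispatch decisions
            if random.random() < eps:      # epsilon-greedy exploration
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: Q[(soc, price)][i])
            nsoc, nprice, cost = step(soc, price, ACTIONS[a])
            reward = -cost                 # minimizing cost = maximizing -cost
            target = reward + gamma * max(Q[(nsoc, nprice)])
            Q[(soc, price)][a] += alpha * (target - Q[(soc, price)][a])
            soc, price = nsoc, nprice
    return Q

random.seed(0)
Q = train()
```

In the real-time (online) variant discussed in the paper, updates of this kind would instead be interleaved with actual dispatch decisions as new price and load data arrive, which is what introduces the training delay and uncertainty the abstract refers to.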
License
Copyright (c) 2023 NUST Journal of Engineering Sciences
This work is licensed under a Creative Commons Attribution 4.0 International License.