Conference article

Reinforcement Learning for Thermostatically Controlled Loads Control using Modelica and Python

Oleh Lukianykhin
Ukrainian Catholic University

Tetiana Bogodorova
Rensselaer Polytechnic Institute

DOI: https://doi.org/10.3384/ecp202017431

Published in: Proceedings of Asian Modelica Conference 2020, Tokyo, Japan, October 08-09, 2020

Linköping Electronic Conference Proceedings 174:4, p. 31-40


Published: 2020-11-02

ISBN: 978-91-7929-775-6

ISSN: 1650-3686 (print), 1650-3740 (online)

Abstract

The aim of the project is to investigate and assess opportunities for applying reinforcement learning (RL) to power system control. As a proof of concept (PoC), voltage control of thermostatically controlled loads (TCLs) for power consumption regulation was developed using a Modelica-based pipeline. The Q-learning RL algorithm has been validated for both deterministic and stochastic initialization of the TCLs. The latter is closer to real grid behaviour, where the stochastic nature of load switching makes control development more challenging. In addition, the paper shows the influence of Q-learning parameters, including the discretization of the state-action space, on controller performance.
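To illustrate the kind of controller the abstract describes, the sketch below shows tabular Q-learning with a discretized state space applied to a Gym-style environment. It is a minimal sketch, not the authors' exact pipeline: the environment is assumed to wrap the Modelica TCL model behind the standard Gym reset/step interface, and names such as discretize, n_bins, and the bound arrays low/high are illustrative assumptions.

```python
# Minimal sketch (assumption: env follows the classic Gym API,
# i.e. env.reset() -> obs and env.step(a) -> (obs, reward, done, info),
# and wraps the Modelica TCL model, e.g. via a ModelicaGym-like wrapper).
import numpy as np


def discretize(obs, low, high, n_bins):
    """Map a continuous observation vector to a tuple of bin indices."""
    ratios = (np.asarray(obs, dtype=float) - low) / (high - low)
    idx = np.clip((ratios * n_bins).astype(int), 0, n_bins - 1)
    return tuple(idx)


def q_learning(env, low, high, n_bins=10, n_actions=2,
               episodes=100, alpha=0.5, gamma=0.99, eps=0.1):
    """Train a tabular Q-learning controller on a discretized state space."""
    low, high = np.asarray(low, dtype=float), np.asarray(high, dtype=float)
    q = np.zeros((n_bins,) * len(low) + (n_actions,))
    for _ in range(episodes):
        state = discretize(env.reset(), low, high, n_bins)
        done = False
        while not done:
            # epsilon-greedy action selection over the discretized state
            if np.random.rand() < eps:
                action = np.random.randint(n_actions)
            else:
                action = int(np.argmax(q[state]))
            obs, reward, done, _ = env.step(action)
            next_state = discretize(obs, low, high, n_bins)
            # standard Q-learning temporal-difference update
            q[state + (action,)] += alpha * (
                reward + gamma * np.max(q[next_state]) - q[state + (action,)]
            )
            state = next_state
    return q
```

The choice of n_bins corresponds to the state-action space discretization whose influence on controller performance the paper examines.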

Keywords

Modelica, Python, Reinforcement Learning, Q-learning, Thermostatically Controlled Loads, Power System, Demand Response, Dymola, OpenAI Gym, JModelica.org, OpenModelica
