Distributed Reinforcement Learning with ADMM-RL

P. Graf, J. Annoni, C. Bay
2019 American Control Conference (ACC)

ADMM-RL is a new algorithm for distributed Reinforcement Learning (RL) that combines the Alternating Direction Method of Multipliers (ADMM) with reinforcement learning, allowing learned controllers to be integrated as subsystems in generally convergent distributed control applications.

Abstract

This paper presents a new algorithm for distributed Reinforcement Learning (RL). RL is an artificial intelligence (AI) control strategy in which controls for highly nonlinear systems over multi-step time horizons are learned from experience, rather than computed on the fly by optimization. Here we introduce ADMM-RL, a combination of the Alternating Direction Method of Multipliers (ADMM) and reinforcement learning that allows learned controllers to be integrated as subsystems in generally convergent distributed control applications. ADMM has become the workhorse algorithm for distributed control, combining the advantage of dual decomposition (enabling decoupled, parallel, distributed solution) with that of the method of multipliers (convexification/stability). Our ADMM-RL algorithm replaces one or more of the subproblems in ADMM with several steps of RL. When the nested iterations converge, we are left with a pretrained subsolver that can potentially increase the efficiency of the deployed distributed controller by orders of magnitude. We illustrate ADMM-RL in both distributed wind farm yaw control and distributed grid-aware demand aggregation for water heaters.
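The structure described above can be sketched on a toy consensus problem. Everything here is illustrative, not from the paper: the objective, step sizes, and iteration counts are invented, and a few inexact inner gradient steps stand in for the paper's RL updates, mimicking a subsolver that improves iteratively inside the ADMM loop.

```python
# Toy consensus problem:  minimize f(x) + g(z)  subject to x = z,
# with f(x) = (x - 3)^2 and g(z) = (z - 1)^2, whose optimum is x = z = 2.
# Standard ADMM alternates an x-update, a z-update, and a dual update.
# In the spirit of ADMM-RL, the z-update (which the paper would replace
# with several steps of reinforcement learning) is stood in for here by
# a few inner gradient steps -- an inexact, improving subsolver.

def admm_rl_sketch(rho=1.0, outer_iters=50, inner_steps=5, lr=0.1):
    x = z = u = 0.0  # primal variables x, z and scaled dual variable u
    for _ in range(outer_iters):
        # x-update: exact minimizer of f(x) + (rho/2)(x - z + u)^2
        x = (2 * 3 + rho * (z - u)) / (2 + rho)
        # z-update: several inexact inner steps on
        # g(z) + (rho/2)(x - z + u)^2  (placeholder for RL steps)
        for _ in range(inner_steps):
            grad = 2 * (z - 1) - rho * (x - z + u)
            z -= lr * grad
        # scaled dual update enforcing the consensus constraint x = z
        u += x - z
    return x, z

x, z = admm_rl_sketch()
print(x, z)  # both approach the consensus optimum 2.0
```

The point of the sketch is only the nesting: the outer ADMM iterations remain unchanged, while one subproblem is solved approximately by an inner learner that is refined across outer iterations.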