Optimal synchronization control of multiagent systems with input saturation via off-policy reinforcement learning

Jiahu Qin, Man Li, Yang Shi, Qichao Ma, Wei Xing Zheng

Research output: Contribution to journal › Article › peer-review

117 Citations (Scopus)

Abstract

In this paper, we investigate the optimal synchronization problem for a group of generic linear systems with input saturation. To seek the optimal controllers, Hamilton-Jacobi-Bellman (HJB) equations involving nonquadratic input energy terms are established in coupled forms. The solutions to these coupled HJB equations are further proven to be optimal, and the induced controllers constitute an interactive Nash equilibrium. Owing to the difficulty of analytically solving HJB equations, especially in coupled forms, and the possible lack of model information of the systems, we apply a data-based off-policy reinforcement learning algorithm to learn the optimal control policies. A byproduct of this off-policy algorithm is that it is shown to be insensitive to the probing noise exerted on the system to maintain the persistence of excitation condition. To implement this off-policy algorithm, we employ actor and critic neural networks to approximate the controllers and the cost functions. Furthermore, the estimated control policies obtained by the presented implementation are proven to converge to the optimal ones under certain conditions. Finally, an illustrative example is provided to verify the effectiveness of the proposed algorithm.
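The nonquadratic input energy term mentioned in the abstract is how input saturation is typically encoded in this line of work: the usual quadratic cost uᵀRu is replaced by an integral of an inverse saturation function, commonly W(u) = 2∫₀ᵘ λ tanh⁻¹(v/λ)ᵀ R dv for a saturation bound λ. The sketch below (an illustration of that standard construction, not code from the paper; the scalar symbols `lam` and `r` are generic placeholders) checks the scalar closed form of this integral against direct numerical integration:

```python
import numpy as np

def W_closed(u, lam=1.0, r=1.0):
    # Scalar closed form of the nonquadratic input energy:
    # W(u) = 2*r*lam*[ u*arctanh(u/lam) + (lam/2)*ln(1 - u^2/lam^2) ]
    # (valid for |u| < lam; W grows steeply as u approaches the bound)
    return 2.0 * r * lam * (u * np.arctanh(u / lam)
                            + 0.5 * lam * np.log(1.0 - (u / lam) ** 2))

def W_numeric(u, lam=1.0, r=1.0, n=100001):
    # Direct trapezoidal evaluation of W(u) = 2 * int_0^u r*lam*arctanh(v/lam) dv
    v = np.linspace(0.0, u, n)
    y = np.arctanh(v / lam)
    h = v[1] - v[0]
    integral = h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])
    return 2.0 * r * lam * integral

print(W_closed(0.5), W_numeric(0.5))  # the two values agree
```

Because the integrand tanh⁻¹(v/λ) blows up as v → λ, this energy term penalizes control effort near the saturation bound far more heavily than a quadratic cost would, which is what lets the resulting HJB solutions respect the input constraint.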
Original language: English
Pages (from-to): 85-96
Number of pages: 12
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 30
Issue number: 1
Publication status: Published - 2019

Keywords

  • algorithms
  • machine learning
  • multiagent systems
  • neural networks (computer science)
  • synchronization

