Abstract
In this article, a fixed-time convergent reinforcement learning (RL) algorithm is proposed to achieve secure formation control of a second-order multiagent system (MAS) under false data injection (FDI) attacks. To mitigate the FDI attack on the control signal, a zero-sum graphical game is introduced to analyze the attack–defense process, in which the secure formation controller seeks to minimize a common performance index function, whereas the attacker seeks to maximize it. Obtaining the optimal secure formation control policy at the Nash equilibrium requires solving the coupled Hamilton–Jacobi–Isaacs (HJI) equation associated with the game. To guarantee fixed-time convergence, a critic-only online RL algorithm with an experience replay technique is designed, and the corresponding convergence and stability proofs are provided. A simulation example demonstrates the effectiveness of the devised scheme.
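To make the attack–defense structure concrete, the sketch below collapses the graphical game to a single agent's formation-error channel and trains a critic-only value approximator on the resulting HJI residual using a small experience-replay buffer. It is a minimal illustration, not the paper's method: the quadratic basis `phi`, the gains `alpha`, `gamma`, `Q`, and `R`, and the plain normalized-gradient critic update (standing in for the paper's fixed-time update law) are all assumptions made for this example.

```python
import numpy as np

# Second-order formation-error dynamics: e = [position error, velocity error].
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

Q = np.eye(2)          # state weight in the common performance index
R = np.array([[1.0]])  # control weight
gamma = 2.0            # attenuation level on the FDI attack input (illustrative)
alpha = 5.0            # critic learning rate (illustrative)
dt = 1e-3

def phi(e):
    """Quadratic critic basis: V(e) ~= W^T phi(e)."""
    e1, e2 = e
    return np.array([e1 * e1, e1 * e2, e2 * e2])

def dphi(e):
    """Jacobian of the basis, shape (3, 2)."""
    e1, e2 = e
    return np.array([[2 * e1, 0.0],
                     [e2, e1],
                     [0.0, 2 * e2]])

def policies(W, e):
    """Minimizing controller and maximizing FDI attack induced by the critic."""
    grad_V = dphi(e).T @ W
    u = -0.5 * np.linalg.solve(R, B.T @ grad_V)     # defender: minimizes the index
    d = (1.0 / (2.0 * gamma**2)) * (B.T @ grad_V)   # attacker: maximizes the index
    return u, d

W = np.ones(3)                 # critic weights
e = np.array([1.0, -0.5])      # initial formation error
replay = []                    # experience-replay buffer

for k in range(20000):
    u, d = policies(W, e)
    e_dot = A @ e + B @ (u + d)          # FDI enters through the control channel
    replay.append((e.copy(), u.copy(), d.copy(), e_dot.copy()))
    if len(replay) > 50:
        replay.pop(0)

    # Normalized gradient step on the squared HJI residual, averaged over
    # the current sample and the replayed history.
    dW = np.zeros_like(W)
    for es, us, ds, es_dot in replay:
        sigma = dphi(es) @ es_dot        # regressor: time derivative of the basis
        delta = (W @ sigma + es @ Q @ es
                 + float(us @ R @ us) - gamma**2 * float(ds @ ds))
        dW -= alpha * delta * sigma / (1.0 + sigma @ sigma) ** 2
    W += dt * dW / len(replay)

    e = e + dt * e_dot                   # Euler step of the error system

print("learned critic weights:", W)
```

Replaying stored (e, u, d, ė) samples lets the critic keep learning from past data rather than relying on a persistently exciting trajectory, which is the usual motivation for pairing experience replay with a critic-only online RL scheme.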
| Original language | English |
|---|---|
| Pages (from-to) | 1203-1214 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Control of Network Systems |
| Volume | 12 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - 2025 |
Bibliographical note
Publisher Copyright: © 2014 IEEE.
Keywords
- false data injection (FDI)
- fixed-time reinforcement learning
- multiagent system (MAS)
- secure formation control
- zero-sum graphical game