Abstract |
---|
This article presents a learning-based controller for the cooperative formation control problem of multiagent systems (MASs) with collision avoidance. First, the consensus problem of first-order MASs is typically solved via linear matrix inequalities (LMIs) without considering energy loss. To overcome these difficulties, an adaptive dynamic programming (ADP) technique is applied to solve the consensus problem and the analogous formation control problem for second-order MASs by establishing a performance index function. In addition, we introduce the generalized policy iteration (GPI) algorithm, an ADP technique that avoids low convergence speed and high computational complexity. Combined with previous works, the proposed structure can be extended to high-order cases based on the local neighborhood formation error and the algorithm. Afterward, convergence, optimality, and stability analyses are given. Neural networks (NNs) are implemented to approximate the iterative control policies and value functions. Moreover, many collisions may occur in the formation control problem. Inspired by the artificial potential field (APF) technique, the concept of a repulsive force field is introduced within the proposed learning-based structure to avoid collisions simply and efficiently. Finally, a simulation demonstrates the effectiveness of the proposed method. |
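The abstract's repulsive force field follows the standard APF idea: an agent is pushed away from a neighbor only inside an influence radius, with a force that grows as the separation shrinks. The sketch below is a minimal, generic illustration of that concept, not the paper's exact design; the potential form, the influence radius `d0`, and the gain `eta` are illustrative assumptions.

```python
import numpy as np

def repulsive_force(p_i, p_j, d0=1.0, eta=0.5):
    """Repulsive force on agent i from agent j under a common APF
    potential U(d) = 0.5*eta*(1/d - 1/d0)^2 for 0 < d <= d0, else 0.
    d0 (influence radius) and eta (gain) are illustrative parameters."""
    diff = np.asarray(p_i, dtype=float) - np.asarray(p_j, dtype=float)
    d = np.linalg.norm(diff)
    if d >= d0 or d == 0.0:
        return np.zeros_like(diff)  # outside the influence region: no force
    # F = -grad U = eta*(1/d - 1/d0)*(1/d^2) * (diff/d), pointing away from j
    mag = eta * (1.0 / d - 1.0 / d0) / d**2
    return mag * diff / d
```

Summing such terms over nearby agents and adding the result to the learned formation control input is the usual way an APF term augments a controller of this kind.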
Year | DOI | Venue |
---|---|---|
2022 | 10.1109/TSMC.2022.3153030 | IEEE Transactions on Systems, Man, and Cybernetics: Systems |
Keywords | DocType | Volume
---|---|---|
Adaptive dynamic programming (ADP), artificial potential field (APF), generalized policy iteration (GPI), multiagent system (MAS), neural networks (NNs) | Journal | 52
Issue | ISSN | Citations
---|---|---|
12 | 2168-2216 | 0
PageRank | References | Authors
---|---|---|
0.34 | 30 | 2
Name | Order | Citations | PageRank |
---|---|---|---|
C. Mu | 1 | 131 | 10.88 |
Jiangwen Peng | 2 | 0 | 0.68 |