Title
Distributed Optimal Tracking Control of Discrete-Time Multiagent Systems via Event-Triggered Reinforcement Learning
Abstract
In this paper, the event-triggered optimal tracking control of discrete-time multi-agent systems is addressed using reinforcement learning. In contrast to traditional reinforcement learning-based methods for the optimal coordination and control of multi-agent systems, which rely on a time-triggered control mechanism, an event-triggered mechanism is proposed that updates the controller only when the designed events are triggered, thereby reducing the computational burden and transmission load. The stability of the closed-loop multi-agent system with the event-triggered controller is analyzed. To implement the proposed scheme, an actor-critic neural network learning structure is designed to approximate the performance indices and to learn the event-triggered optimal control online. During the training process, an event-triggered weight tuning law is designed, in which the weight parameters of the actor neural networks are adjusted only at triggering instants, rather than at the fixed updating periods of traditional methods. Furthermore, a convergence analysis of the actor-critic neural networks is provided via the Lyapunov method. Finally, two simulation examples demonstrate the effectiveness and performance of the proposed event-triggered reinforcement learning controller.
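The sketch below is a minimal illustration, not the paper's algorithm, of the event-triggered actor-critic structure the abstract describes: the control signal and the actor weights are updated only at triggering instants, while the critic learns a value estimate from the running tracking cost at every step. The scalar plant, feature maps, triggering threshold, grid-based greedy control, and learning rates are all assumptions introduced solely for illustration.

```python
# Illustrative sketch (assumed, single-agent scalar case) of an event-triggered
# actor-critic loop for discrete-time tracking. Not the authors' algorithm.
import numpy as np

# Hypothetical scalar plant x_{k+1} = a*x_k + b*u_k tracking a sinusoidal reference.
a, b = 0.9, 0.5

def ref(k):
    return np.sin(0.05 * k)

def phi(e):            # critic features (value of the tracking error)
    return np.array([e ** 2, e ** 4])

def psi(e):            # actor features (control as a function of the error)
    return np.array([e, e ** 3])

Wc = np.zeros(2)       # critic weights
Wa = np.zeros(2)       # actor weights
alpha_c, alpha_a = 0.5, 0.2     # learning rates (assumed)
gamma, Q, R = 0.95, 1.0, 0.1    # discount factor and stage-cost weights (assumed)
sigma = 0.05                    # event-triggering threshold (assumed, state-independent)

def greedy_control(e, r_k, r_next):
    """Control minimizing the one-step cost plus the critic estimate,
    found by a coarse grid search to keep the sketch simple."""
    us = np.linspace(-3.0, 3.0, 121)
    e_next = a * (e + r_k) + b * us - r_next      # predicted next tracking error
    j = R * us ** 2 + gamma * (Wc @ np.array([e_next ** 2, e_next ** 4]))
    return us[np.argmin(j)]

x = 1.0
e_hat = x - ref(0)                               # last transmitted tracking error
u = float(np.clip(Wa @ psi(e_hat), -3.0, 3.0))   # zero-order hold between events
events = 0

for k in range(500):
    e = x - ref(k)
    # Event condition: trigger when the gap between the current error and the
    # last transmitted error exceeds the threshold.
    if abs(e - e_hat) > sigma:
        events += 1
        e_hat = e
        # Actor weights are adjusted only at triggering instants, pulled toward
        # the critic-greedy control at the transmitted error (normalized step).
        u_target = greedy_control(e_hat, ref(k), ref(k + 1))
        feat = psi(e_hat)
        Wa += alpha_a * (u_target - Wa @ feat) * feat / (1.0 + feat @ feat)
        u = float(np.clip(Wa @ feat, -3.0, 3.0))  # controller updated at the event

    cost = Q * e ** 2 + R * u ** 2               # stage tracking cost
    x = a * x + b * u                            # plant step under the held control
    e_next = x - ref(k + 1)

    # Critic: normalized temporal-difference update at every step.
    td = cost + gamma * (Wc @ phi(e_next)) - Wc @ phi(e)
    Wc += alpha_c * td * phi(e) / (1.0 + phi(e) @ phi(e))

print(f"triggered {events}/500 steps, final |tracking error| = {abs(x - ref(500)):.4f}")
```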
Year
2022
DOI
10.1109/TCSI.2022.3177407
Venue
IEEE Transactions on Circuits and Systems I: Regular Papers
Keywords
Optimal tracking control, event-triggered mechanism, multi-agent systems, reinforcement learning, actor-critic neural networks
DocType
Journal
Volume
69
Issue
9
ISSN
1549-8328
Citations
0
PageRank
0.34
References
37
Authors
5
Name            Order   Citations   PageRank
Zhinan Peng     1       0           0.68
Rui Luo         2       0           0.34
Jiangping Hu    3       1           2.41
Kaibo Shi       4       0           0.68
Bijoy K Ghosh   5       246         38.84