Title
Event-triggered reinforcement learning approach for unknown nonlinear continuous-time system
Abstract
This paper presents an adaptive event-triggered method based on adaptive dynamic programming (ADP) for nonlinear continuous-time systems. Compared with traditional methods that use a fixed sampling period, the event-triggered method samples the state only when an event is triggered, thereby reducing the computational cost. We provide a theoretical stability analysis of the event-triggered method and integrate it with the ADP approach. The system dynamics are assumed unknown. The corresponding ADP algorithm is given, and neural network techniques are applied to implement the method. Simulation results verify the theoretical analysis and demonstrate the efficiency of the proposed event-triggered technique using the ADP approach.
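The abstract's central idea, sampling the state only when a triggering condition fires rather than at every fixed period, can be illustrated with a minimal sketch. The dynamics, feedback law, and triggering threshold below are illustrative assumptions for a toy scalar system, not the paper's actual formulation or its ADP-based controller:

```python
def simulate(event_triggered, threshold=0.05, dt=0.01, steps=1000):
    """Simulate a toy scalar system dx/dt = -x^3 + u, where the
    control u = -x_sample is held constant (zero-order hold)
    between sampling instants.  Returns the final state and the
    number of control updates performed."""
    x = 1.0           # initial state
    x_sample = x      # state at the last sampling instant
    u = -x_sample     # held control value
    updates = 0
    for _ in range(steps):
        if event_triggered:
            # Illustrative event condition: resample only when the
            # gap between the current state and the last sampled
            # state exceeds a threshold.
            if abs(x - x_sample) > threshold:
                x_sample = x
                u = -x_sample
                updates += 1
        else:
            # Fixed-period scheme: update the control at every step.
            x_sample = x
            u = -x_sample
            updates += 1
        x += dt * (-x**3 + u)  # Euler step of the dynamics
    return x, updates
```

Running both schemes shows the trade-off the abstract describes: the event-triggered loop performs far fewer control updates while still driving the state into a small neighborhood of the origin.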
Year
2014
DOI
10.1109/IJCNN.2014.6889787
Venue
IJCNN
Keywords
adaptive dynamic programming, neurocontrollers, neural network technique, learning (artificial intelligence), nonlinear dynamical systems, continuous time systems, computational cost reduction, adaptive event triggered method, ADP approach, unknown nonlinear continuous time system, dynamic programming, system dynamics, adaptive systems, stability, event triggered reinforcement learning approach, approximation algorithms, neural networks, stability analysis
Field
Dynamic programming, Nonlinear system, Computer science, Control theory, Sampling (signal processing), Continuous time system, Event triggered, System dynamics, Artificial intelligence, Artificial neural network, Machine learning, Reinforcement learning
DocType
Conference
ISSN
2161-4393
Citations
9
PageRank
0.52
References
24
Authors
5
Name            Order  Citations  PageRank
Xiangnan Zhong  1      346        16.35
Zhen Ni         2      525        33.47
Haibo He        3      3653       213.96
Xin Xu          4      1365       100.22
Dongbin Zhao    5      1025       82.21