Title
Optimized Backstepping Control Using Reinforcement Learning of Observer-Critic-Actor Architecture Based on Fuzzy System for a Class of Nonlinear Strict-Feedback Systems
Abstract
In this article, a fuzzy logic system (FLS)-based adaptive optimized backstepping control is developed by employing a reinforcement learning (RL) strategy for a class of nonlinear strict-feedback systems with unmeasured states. To make the virtual and actual controls the optimized solutions of the corresponding subsystems, an RL observer-critic-actor architecture based on FLS approximations is constructed in every backstepping step, where the observer aims to estimate the unmeasurable states, and the critic and actor aim to evaluate control performance and perform control behavior, respectively. In the proposed optimized control, on the one hand, the state observer method avoids the need for design constants that make its characteristic polynomial Hurwitz, which is universally demanded in the existing observer methods; on the other hand, the RL algorithm is significantly simpler, because the critic and actor training laws are derived from the negative gradient of a simple positive function, which is produced from the partial derivative of the Hamilton–Jacobi–Bellman (HJB) equation, instead of the square of the approximated HJB equation. Therefore, this optimized scheme can be more easily applied and widely extended. Finally, from the two aspects of theory and simulation, it is demonstrated that the desired objective can be fulfilled.
Year
2022
DOI
10.1109/TFUZZ.2022.3148865
Venue
IEEE Transactions on Fuzzy Systems
Keywords
Nonlinear strict feedback system, optimized backstepping (OB), reinforcement learning (RL), state observer, unmeasured state
DocType
Journal
Volume
30
Issue
10
ISSN
1063-6706
Citations
0
PageRank
0.34
References
25
Authors
3
Name, Order, Citations, PageRank
Guoxing Wen, 1, 0, 1.01
Bin Li, 2, 782, 79.80
Ben Niu, 3, 235, 44.62