Title
Reinforcement learning solution for HJB equation arising in constrained optimal control problem
Abstract
The constrained optimal control problem depends on the solution of the complicated Hamilton–Jacobi–Bellman equation (HJBE). In this paper, a data-based off-policy reinforcement learning (RL) method is proposed that learns the solution of the HJBE and the optimal control policy from real system data. An important feature of off-policy RL is that policy evaluation can be carried out with data generated by behavior policies other than the target policy, which solves the insufficient-exploration problem. The convergence of the off-policy RL method is proved by demonstrating its equivalence to the successive approximation approach. Its implementation is based on an actor-critic neural network structure, where function approximation is conducted with linearly independent basis functions. The convergence of the implementation procedure with function approximation is then also proved. Finally, its effectiveness is verified through computer simulations.
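To make the abstract's procedure concrete, the following Python sketch implements one plausible reading of it: policy evaluation by least squares over samples generated by an exploratory behavior policy, followed by constrained policy improvement through a tanh saturation, with the critic expanded in linearly independent basis functions. The scalar system, basis, constraint bound lam, and all other parameters are hypothetical stand-ins, and the explicit use of the input gain g(x) is a simplification that the paper's fully data-based actor-critic implementation avoids; this is a minimal sketch, not the authors' algorithm.

```python
import numpy as np

# Illustrative sketch of the idea in the abstract (not the authors' code):
# off-policy policy iteration for an input-constrained scalar system
#     x_dot = f(x) + g(x) * u,   |u| <= lam,
# with the critic V(x) ~ w^T phi(x) built on linearly independent basis
# functions. All dynamics, bases, and parameters here are assumptions.
# For brevity the input gain g(x) appears explicitly; the paper's
# actor-critic scheme instead works purely from measured data.

lam, dt, T = 1.0, 0.01, 6.0            # input bound, sample step, horizon
f = lambda x: x - x**3                 # drift term (hypothetical)
g = lambda x: 1.0                      # input gain (hypothetical)
q = lambda x: x**2                     # state cost

def U(u):
    """Nonquadratic input penalty 2*lam * integral_0^u arctanh(v/lam) dv,
    the standard cost term encoding the input constraint |u| <= lam."""
    u = np.clip(u, -0.999 * lam, 0.999 * lam)
    return 2 * lam * (u * np.arctanh(u / lam)
                      + 0.5 * lam * np.log(1.0 - (u / lam) ** 2))

phi  = lambda x: np.array([x**2, x**4])    # linearly independent basis
dphi = lambda x: np.array([2*x, 4*x**3])   # gradient of the basis

def behavior(t):
    """Exploratory behavior policy (deliberately not the target policy)."""
    return lam * np.sin(7 * t) * np.cos(3 * t)

def collect(x0=0.8):
    """Simulate under the behavior policy once; record (x, u, x') samples."""
    data, x = [], x0
    for k in range(int(T / dt)):
        u = behavior(k * dt)
        xn = x + dt * (f(x) + g(x) * u)
        data.append((x, u, xn))
        x = xn
    return data

def evaluate(data, pi):
    """Off-policy evaluation of the target policy pi: least squares on
    w^T[phi(x') - phi(x) - dt*dphi(x)*g(x)*(u - pi(x))] = -dt*(q + U(pi))."""
    A = np.array([phi(xn) - phi(x) - dt * dphi(x) * g(x) * (u - pi(x))
                  for x, u, xn in data])
    b = np.array([-dt * (q(x) + U(pi(x))) for x, u, xn in data])
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

def improve(w):
    """Constrained policy improvement: u = -lam*tanh(g^T dV/dx / (2*lam))."""
    return lambda x: -lam * np.tanh(g(x) * (w @ dphi(x)) / (2 * lam))

data = collect()                            # one data set, reused throughout
pi = lambda x: -lam * np.tanh(2 * x)        # initial admissible policy
for _ in range(8):
    w = evaluate(data, pi)                  # policy evaluation (critic)
    pi = improve(w)                         # policy improvement (actor)
print("critic weights:", w)
```

Note that every iteration reuses the single data set collected under the behavior policy; no new trajectories under the target policy are required, which is the off-policy property the abstract emphasizes.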
Year
2015
DOI
10.1016/j.neunet.2015.08.007
Venue
Neural Networks
Keywords
Constrained optimal control, Data-based, Off-policy reinforcement learning, Hamilton–Jacobi–Bellman equation, The method of weighted residuals
Field
Hamilton–Jacobi–Bellman equation, Mathematical optimization, Optimal control, Function approximation, Q-learning, Exploration problem, Artificial intelligence, Basis function, Artificial neural network, Machine learning, Mathematics, Reinforcement learning
DocType
Journal
Volume
71
Issue
C
ISSN
0893-6080
Citations
40
PageRank
1.13
References
30
Authors
4
Name            Order  Citations  PageRank
Biao Luo        1      554        23.80
Huai-Ning Wu    2      2104       98.52
Tingwen Huang   3      5684       310.24
Derong Liu      4      181        6.71