Title
Policy Approximation in Policy Iteration Approximate Dynamic Programming for Discrete-Time Nonlinear Systems
Abstract
Policy iteration approximate dynamic programming (DP) is an important algorithm for solving optimal decision and control problems. In this paper, we focus on the problem associated with policy approximation in policy iteration approximate DP for discrete-time nonlinear systems using infinite-horizon undiscounted value functions. Taking policy approximation error into account, we demonstrate asympt...
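As a rough illustration of the evaluate-improve loop that policy iteration refers to, here is a minimal tabular sketch on a hypothetical toy problem. Note the paper itself treats continuous-state discrete-time nonlinear systems with approximated policies and undiscounted infinite-horizon value functions; this deterministic three-state example only shows the basic alternation of policy evaluation and policy improvement, with an absorbing zero-cost state so the undiscounted cost stays finite.

```python
# Toy deterministic MDP (hypothetical): states 0 and 1 are transient,
# state 2 is absorbing with zero cost, so undiscounted costs converge.
next_state = {0: [1, 2], 1: [2, 0], 2: [2, 2]}   # next_state[s][a]
cost = {0: [1.0, 5.0], 1: [1.0, 1.0], 2: [0.0, 0.0]}  # cost[s][a]
states, actions = [0, 1, 2], [0, 1]

policy = {s: 0 for s in states}  # initial admissible policy
for _ in range(10):  # outer policy-iteration loop
    # Policy evaluation: solve V(s) = cost(s, pi(s)) + V(next) by
    # fixed-point iteration under the current policy.
    V = {s: 0.0 for s in states}
    for _ in range(100):
        V = {s: 0.0 if s == 2
             else cost[s][policy[s]] + V[next_state[s][policy[s]]]
             for s in states}
    # Policy improvement: act greedily with respect to V.
    new_policy = {s: min(actions,
                         key=lambda a: cost[s][a] + V[next_state[s][a]])
                  for s in states}
    if new_policy == policy:  # converged to the optimal policy
        break
    policy = new_policy

print(policy, V)
```

In the paper's setting the improvement step cannot be done exactly over a continuous state space, which is where the policy approximation error analyzed in the abstract enters.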
Year
2018
DOI
10.1109/TNNLS.2017.2702566
Venue
IEEE Transactions on Neural Networks and Learning Systems
Keywords
Approximation algorithms, Approximation error, Convergence, Optimal control, Nonlinear systems, Discrete-time systems
Field
Approximation algorithm, Dynamic programming, Mathematical optimization, Optimal control, Computer science, Bellman equation, Volterra series, Exponential stability, Approximation error, Bounded function
DocType
Journal
Volume
29
Issue
7
ISSN
2162-237X
Citations
3
PageRank
0.39
References
26
Authors
4
Name | Order | Citations | PageRank
Wentao Guo | 1 | 54 | 4.60
Jennie Si | 2 | 746 | 70.23
Feng Liu | 3 | 269 | 28.35
Shengwei Mei | 4 | 196 | 34.27