Title
A Convex Optimization Approach to Dynamic Programming in Continuous State and Action Spaces
Abstract
In this paper, a convex optimization-based method is proposed for numerically solving dynamic programs in continuous state and action spaces. The key idea is to approximate the output of the Bellman operator at a particular state by the optimal value of a convex program. The approximate Bellman operator has a computational advantage because it involves a convex optimization problem in the case of control-affine systems and convex costs. Using this feature, we propose a simple dynamic programming algorithm to evaluate the approximate value function at pre-specified grid points by solving convex optimization problems in each iteration. We show that the proposed method approximates the optimal value function with a uniform convergence property in the case of convex optimal value functions. We also propose an interpolation-free design method for a control policy, whose performance converges uniformly to the optimum as the grid resolution becomes finer. When a nonlinear control-affine system is considered, the convex optimization approach provides an approximate policy with a provable suboptimality bound. For general cases, the proposed convex formulation of dynamic programming operators can be modified into a nonconvex bilevel program, in which the inner problem is a linear program, without losing the uniform convergence properties.
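To make the core idea concrete, the following is a minimal illustrative sketch (not the paper's exact formulation) of grid-based approximate value iteration in which each Bellman update is posed as a convex program. The 1-D control-affine dynamics, quadratic stage cost, action bound, grid resolution, and the use of cvxpy with affine lower-bound cuts on the value function are all assumptions made for illustration.

```python
# Illustrative sketch only: grid-based approximate value iteration where each
# Bellman update at a grid point is a small convex program (assumed setup).
import numpy as np
import cvxpy as cp

a, b, gamma = 0.9, 0.5, 0.95          # assumed scalar control-affine dynamics x+ = a*x + b*u
grid = np.linspace(-2.0, 2.0, 21)     # pre-specified grid points in the state space
V = np.zeros_like(grid)               # current value estimates at the grid points

def affine_cuts(xs, vs):
    # Slopes and intercepts of supporting lines through each grid point; for
    # convex data these under-approximate the piecewise-linear interpolant.
    slopes = np.gradient(vs, xs)
    return slopes, vs - slopes * xs

for _ in range(100):                  # approximate value iteration
    slopes, intercepts = affine_cuts(grid, V)
    V_new = np.empty_like(V)
    for i, x in enumerate(grid):
        u = cp.Variable()             # action (decision variable)
        t = cp.Variable()             # epigraph variable standing in for V(x+)
        x_next = a * x + b * u        # affine in u, so the cut constraints stay convex
        cons = [t >= slopes[k] * x_next + intercepts[k] for k in range(len(grid))]
        cons.append(cp.abs(u) <= 1.0) # assumed box constraint on the action
        stage = x ** 2 + cp.square(u) # convex stage cost
        prob = cp.Problem(cp.Minimize(stage + gamma * t), cons)
        prob.solve()
        V_new[i] = prob.value
    if np.max(np.abs(V_new - V)) < 1e-6:
        break
    V = V_new

print("approximate V at x = 0:", V[len(grid) // 2])
```

Each inner problem is convex because the next state is affine in the action and the value estimate enters only through affine lower-bound cuts, mirroring (in simplified form) why the approximate Bellman operator in the paper reduces to a convex program for control-affine systems with convex costs.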
Year
2018
DOI
10.1007/s10957-020-01747-1
Venue
Journal of Optimization Theory and Applications
Keywords
Dynamic programming, Convex optimization, Optimal control, Stochastic control
Field
Discretization, Dynamic programming, Mathematical optimization, Convexity, Uniform convergence, Bellman equation, Linear programming, Convex optimization, State space, Mathematics
DocType
Journal
Volume
187
Issue
1
ISSN
0022-3239
Citations
0
PageRank
0.34
References
0
Authors
1
Name
Insoon Yang
Order
1
Citations
35
PageRank
9.17