Title
Optimal Stabilization Using Lyapunov Measures
Abstract
The focus of this paper is on the computation of optimal feedback stabilizing control for discrete-time control systems. We use the Lyapunov measure, dual to the Lyapunov function, for the design of an optimal stabilizing feedback controller. The linear Perron-Frobenius operator is used to pose the optimal stabilization problem as an infinite-dimensional linear program. A finite-dimensional approximation of the linear program is obtained using set-oriented numerical methods. Simulation results for the optimal stabilization of a periodic orbit in the one-dimensional logistic map are presented.

Stability analysis and stabilization of nonlinear systems are two of the most important and extensively studied problems in control theory. The Lyapunov function and Lyapunov function based methods have played an important role in providing solutions to these problems. In particular, the Lyapunov function is used for stability analysis, and the control Lyapunov function (CLF) is used for the design of stabilizing feedback controllers. Another problem extensively studied in the controls literature is the optimal control problem (OCP). The optimal control for the OCP can be obtained from the solution of the Hamilton-Jacobi-Bellman (HJB) equation. Under the additional assumptions of detectability and stabilizability of the nonlinear system, the optimal cost function, if positive, can also be used as a control Lyapunov function. This establishes the connection between stability (Lyapunov function) and optimality (HJB equation). The HJB equation is a nonlinear partial differential equation and hence difficult to solve analytically; one has to resort to approximate numerical schemes for its solution. We review some of the literature on the approximation of the HJB equation and the OCP that is particularly relevant to this paper. In (1), an adaptive space discretization scheme is used to obtain the solution of the deterministic and stochastic discrete-time HJB (dynamic programming) equation.
The optimal cost function is obtained as a fixed-point solution of a linear dynamic programming operator. In (2), (3), a cell-mapping approach is used to construct approximate numerical solutions for deterministic and stochastic optimal control problems. In (4), (5), set-oriented numerical methods are used to underestimate the optimal one-step cost for transitions between cells of a state-space discretization in the context of optimal control and optimal stabilization. This makes it possible to represent the minimal-cost control problem as one of finding the minimum-cost path to reach the invariant set on a graph whose edge costs are derived from the underestimation procedure. Dijkstra's algorithm is then used to construct an approximate solution to the HJB equation. In (6), (7), (8), solutions to deterministic optimal control problems are proposed by casting them as infinite-dimensional linear programs. An approximate solution to the infinite-dimensional linear program is then obtained using a finite-dimensional approximation of the linear programming problem or a sequence of LMI relaxations. In this paper, we propose the use of the Lyapunov measure for the optimal stabilization of nonlinear systems. The Lyapunov measure is introduced in (9) to study a weaker, set-wise notion of almost everywhere stability and is shown to be dual to the Lyapunov function. The existence of a Lyapunov measure guarantees stability from almost every initial condition in the phase space, with respect to the Lebesgue measure. The control Lyapunov measure is introduced in (10) to provide a Lyapunov measure based framework for the stabilization of nonlinear systems. In this paper, we extend this framework to the optimal stabilization of nonlinear systems. One of the main highlights and contributions of this paper is that the finite-dimensional deterministic optimal stabilizing control is obtained as the solution of a finite linear program. This paper is organized as follows.
In Section II, we provide a brief overview of some of the key results from (9), (11), (10) for stability analysis and stabilization of nonlinear systems using the Lyapunov measure. The framework for optimal stabilization using the Lyapunov measure and transfer operators is posed as an infinite-dimensional linear program in Section III. A computational approach based on set-oriented numerical methods is proposed for the finite-dimensional approximation of the linear program in Section IV. Simulation results for the optimal stabilization of a periodic orbit in the one-dimensional logistic map are presented in Section V, followed by conclusions and discussion in Section VI.
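To make the set-oriented approximation concrete, the following sketch (not the authors' code; the cell count, sample density, and the logistic-map parameter λ = 4 are assumptions) builds an Ulam-type finite, row-stochastic approximation of the Perron-Frobenius operator for the logistic map x ↦ λx(1 − x) by partitioning [0, 1] into cells and counting where sample points from each cell are mapped.

```python
import numpy as np

def logistic(x, lam=4.0):
    # One-dimensional logistic map on [0, 1]; lam = 4.0 is an assumed parameter.
    return lam * x * (1.0 - x)

def ulam_matrix(f, n_cells=50, samples_per_cell=100):
    # Ulam-type (set-oriented) approximation of the Perron-Frobenius operator:
    # partition [0, 1] into n_cells equal cells and estimate the transition
    # probability P[i, j] as the fraction of cell i mapped into cell j by f.
    edges = np.linspace(0.0, 1.0, n_cells + 1)
    P = np.zeros((n_cells, n_cells))
    for i in range(n_cells):
        pts = np.linspace(edges[i], edges[i + 1], samples_per_cell, endpoint=False)
        images = np.clip(f(pts), 0.0, 1.0 - 1e-12)   # keep images inside [0, 1)
        cols = (images * n_cells).astype(int)        # cell index of each image
        for j in cols:
            P[i, j] += 1.0
    return P / samples_per_cell  # each row sums to 1 (row-stochastic)

P = ulam_matrix(logistic)
assert np.allclose(P.sum(axis=1), 1.0)
```

Each row of P is a probability vector, so P acts as a finite Markov-chain approximation of the transfer operator; the infinite-dimensional linear program over measures is then replaced by a finite linear program over vectors supported on the cells.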
Year
2014
DOI
10.1109/TAC.2013.2289707
Venue
IEEE Transactions on Automatic Control
Keywords
Lyapunov methods,approximation theory,discrete time systems,feedback,linear programming,optimal control,set theory,stability,Lyapunov function-based stabilization methods,Lyapunov measures,discrete time dynamical systems,feedback control stabilization,finite dimensional approximation,infinite dimensional linear program,linear Perron-Frobenius transfer operator,optimal feedback stabilization,period two orbit stabilization,set-oriented numerical methods,set-theoretic notion,Almost everywhere stability,numerical methods,optimal stabilization
Field
Lyapunov function,Lyapunov equation,Optimal control,Control theory,Computer science,Logistic map,Lyapunov optimization,Lyapunov redesign,Linear programming,Numerical analysis
DocType
Journal
Volume
59
Issue
5
ISSN
0018-9286
ISBN
978-1-4244-2079-7
Citations
10
PageRank
0.77
References
16
Authors
2
Name | Order | Citations | PageRank
Arvind U. Raghunathan | 1 | 163 | 20.63
Umesh Vaidya | 2 | 131 | 27.95