Title
Differential graphical games: Policy iteration solutions and coupled Riccati formulation
Abstract
This paper introduces a novel Integral Reinforcement Learning solution to a class of differential games known as differential graphical games. The agents' error dynamics are coupled dynamical systems driven by the control input of each agent and the control inputs of its neighbors. A new class of control policies is developed to solve the differential graphical games, together with a novel performance index used to measure system performance. The graphical game Integral Reinforcement Learning Bellman equations are shown to be equivalent to certain coupled graphical game Hamilton-Jacobi-Bellman equations developed herein. An online Policy Iteration algorithm is proposed to solve the differential graphical game in real time. Convergence of the policy iteration algorithm is shown under mild assumptions on the interconnectivity properties of the graph. Finally, a novel coupled Riccati formulation is developed to solve the differential graphical games.
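The policy-evaluation / policy-improvement loop the abstract refers to can be illustrated, in the simplest single-agent continuous-time linear-quadratic setting, by Kleinman's classical algorithm. This is a hedged sketch only: it is not the paper's multi-agent graphical-game method, and the double-integrator system, costs, and initial gain below are illustrative assumptions, not values from the paper.

```python
# Generic policy iteration for a single-agent continuous-time LQR problem
# (Kleinman's algorithm). Illustrative only; the paper generalizes this loop
# to coupled equations over a communication graph.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Double-integrator example system (an assumed example, not from the paper).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)          # state cost
R = np.array([[1.0]])  # control cost

K = np.array([[1.0, 2.0]])  # initial stabilizing gain (closed-loop poles at -1, -1)

for _ in range(20):
    # Policy evaluation: solve the Lyapunov equation
    #   (A - B K)^T P + P (A - B K) = -(Q + K^T R K)
    Acl = A - B @ K
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    # Policy improvement: K <- R^{-1} B^T P
    K_new = np.linalg.solve(R, B.T @ P)
    if np.allclose(K_new, K, atol=1e-10):
        break
    K = K_new

# The iterates converge to the solution of the algebraic Riccati equation.
P_are = solve_continuous_are(A, B, Q, R)
print(np.allclose(P, P_are, atol=1e-8))  # True
```

In the graphical game setting each agent solves an analogous coupled equation that also involves its neighbors' current policies, which is what makes the resulting Riccati formulation coupled rather than a set of independent single-agent equations.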
Year
2014
DOI
10.1109/ECC.2014.6862473
Venue
ECC
Keywords
riccati equations,computer games,graph theory,iterative methods,learning (artificial intelligence),hamilton-jacobi-bellman equation,agents error dynamics,coupled riccati formulation,coupled dynamical system,differential graphical games,innovative performance index,integral reinforcement learning solution,online policy iteration algorithm,policy iteration solution,synchronization,hamilton jacobi bellman equation,optimal control,vectors,games,nash equilibrium,learning artificial intelligence
Field
Convergence (routing),Graph,Mathematical optimization,Performance index,Bellman equation,Dynamical systems theory,Mathematics,Reinforcement learning
DocType
Conference
ISBN
978-3-9524269-1-3
Citations
4
PageRank
0.48
References
10
Authors
3
Name              Order  Citations  PageRank
Abouheaf, M.I.    1      4          0.82
Frank L. Lewis    2      5782       402.68
Magdi S. Mahmoud  3      790        98.50