Title
Data-Driven Integral Reinforcement Learning for Continuous-Time Non-Zero-Sum Games
Abstract
This paper develops an integral value iteration (VI) method to efficiently find, online, the Nash equilibrium solution of two-player non-zero-sum (NZS) differential games for linear systems with partially unknown dynamics. To guarantee closed-loop stability at the Nash equilibrium, an explicit upper bound on the discount factor is given. To show the efficacy of the presented online model-free solution, the integral VI method is compared with the model-based offline policy iteration method. Moreover, a detailed theoretical analysis of the integral VI algorithm is provided in three aspects: the positive definiteness of the updated cost functions, the stability of the closed-loop system, and the conditions that guarantee monotone convergence. Finally, simulation results demonstrate the efficacy of the presented algorithms.
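For intuition about the problem the abstract describes, below is a minimal model-based sketch of the coupled-Riccati value-iteration recursion that such a method targets for a two-player NZS linear-quadratic game. All system and cost matrices (A, B1, B2, Q1, Q2, R11, R12, R21, R22), the discount factor gamma, and the step size eps are illustrative assumptions, not values from the paper; the paper's actual integral VI is data-driven and replaces the model-based residuals below with integral reinforcement signals computed from measured trajectories, so the drift matrix A need not be known.

```python
# Hedged sketch: model-based value iteration on the coupled Riccati equations
# of a two-player non-zero-sum LQ differential game. All numeric matrices are
# illustrative assumptions, not taken from the paper.
import numpy as np

# Two-player LTI dynamics: dx/dt = A x + B1 u1 + B2 u2
A  = np.array([[0.0, 1.0],
               [-1.0, -2.0]])
B1 = np.array([[0.0], [1.0]])
B2 = np.array([[0.0], [0.5]])

# Quadratic cost weights for each player (R12, R21 weight the rival's input)
Q1, Q2   = np.eye(2), 2.0 * np.eye(2)
R11, R22 = np.eye(1), np.eye(1)
R12, R21 = 0.5 * np.eye(1), 0.5 * np.eye(1)

gamma = 0.1   # discount factor (the paper derives an upper bound preserving stability)
eps   = 0.01  # VI step size: forward-Euler step on the coupled Riccati ODEs

P1, P2 = np.zeros((2, 2)), np.zeros((2, 2))  # zero initialization
for k in range(20000):
    # Feedback gains u_i = -K_i x induced by the current value matrices
    K1 = np.linalg.solve(R11, B1.T @ P1)
    K2 = np.linalg.solve(R22, B2.T @ P2)
    Ac = A - B1 @ K1 - B2 @ K2  # closed-loop matrix under both policies

    # Coupled Riccati residuals; both vanish at the Nash equilibrium
    res1 = Ac.T @ P1 + P1 @ Ac + Q1 + K1.T @ R11 @ K1 + K2.T @ R12 @ K2 - gamma * P1
    res2 = Ac.T @ P2 + P2 @ Ac + Q2 + K2.T @ R22 @ K2 + K1.T @ R21 @ K1 - gamma * P2

    P1, P2 = P1 + eps * res1, P2 + eps * res2  # VI update
    if max(np.linalg.norm(res1), np.linalg.norm(res2)) < 1e-8:
        break

print("P1 =\n", P1, "\nP2 =\n", P2)
print("Nash gains: K1 =", K1, " K2 =", K2)
```

The update P_i <- P_i + eps * res_i is a forward-Euler integration of the coupled differential Riccati equations, which under suitable conditions flows to the Nash solution; the -gamma * P_i term also shows why the discount factor must be bounded, since too large a gamma can yield value matrices whose induced gains no longer stabilize the closed loop.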
Year
2019
DOI
10.1109/ACCESS.2019.2923845
Venue
IEEE Access
Keywords
Coupled Riccati equations, integral reinforcement learning, non-zero-sum games, optimal control
Field
Applied mathematics, Linear system, Computer science, Iterative method, Integral equation, Game theory, Zero-sum game, Positive definiteness, Nash equilibrium, Reinforcement learning, Distributed computing
DocType
Journal
Volume
7
ISSN
2169-3536
Citations
0
PageRank
0.34
References
0
Authors
6
Name                 Order  Citations  PageRank
Yongliang Yang       1      32         10.70
Liming Wang          2      0          0.68
Hamidreza Modares    3      785        36.68
Dawei Ding           4      100        15.43
wende                5      12         3.68
Wunsch II Donald C.  6      1354       91.73