Adaptive dynamic programming for online solution of a zero-sum differential game
Authors:Draguna VRABIE and Frank LEWIS
Affiliation:1. United Technologies Research Center, East Hartford, CT 06108, U.S.A.
2. Automation and Robotics Research Institute, University of Texas at Arlington, Fort Worth, TX 76118, U.S.A.
Abstract:This paper presents an approximate/adaptive dynamic programming (ADP) algorithm that uses the idea of integral reinforcement learning (IRL) to determine online the Nash equilibrium solution of the two-player zero-sum differential game with linear dynamics and an infinite-horizon quadratic cost. The algorithm is built around an iterative method developed in the control engineering community for solving the continuous-time game algebraic Riccati equation (CT-GARE) that underlies the game problem. We show how ADP techniques enhance the capabilities of the offline method, allowing an online solution without requiring complete knowledge of the system dynamics. The feasibility of the ADP scheme is demonstrated in simulation on a power system control application, where the adaptation goal is the control policy that optimally counteracts the worst-case load disturbance.
Keywords:Approximate/adaptive dynamic programming; Game algebraic Riccati equation; Zero-sum differential game; Nash equilibrium
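
The CT-GARE mentioned in the abstract couples the control and disturbance inputs through a sign-indefinite quadratic term. As a rough illustration of the kind of offline, model-based iterative solver the ADP/IRL method builds upon (not the paper's online, partially model-free algorithm), the following Python sketch applies a Newton-type iteration that reduces the GARE to a sequence of Lyapunov equations. The matrices A, B, D, Q, R, the attenuation level gamma, and the function name solve_gare are illustrative assumptions, not taken from the paper's power system example.

```python
# Newton-type iteration for the continuous-time game algebraic Riccati equation (GARE):
#     A'P + P A + Q - P B R^{-1} B' P + (1/gamma^2) P D D' P = 0
# Offline, model-based sketch of the iterative scheme underlying the ADP/IRL method;
# NOT the paper's online algorithm. All numerical data are illustrative placeholders.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def solve_gare(A, B, D, Q, R, gamma, P0=None, tol=1e-10, max_iter=100):
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    # Sign-indefinite "game" term combining the two players' input channels.
    S = (1.0 / gamma**2) * (D @ D.T) - B @ Rinv @ B.T
    P = np.zeros((n, n)) if P0 is None else P0.copy()
    for _ in range(max_iter):
        Ai = A + S @ P                # closed-loop matrix under both current policies
        rhs = -(Q - P @ S @ P)        # right-hand side of the Newton/Lyapunov step
        # Solve  Ai' P_next + P_next Ai + (Q - P S P) = 0
        P_next = solve_continuous_lyapunov(Ai.T, rhs)
        if np.linalg.norm(P_next - P, ord='fro') < tol:
            P = P_next
            break
        P = P_next
    residual = A.T @ P + P @ A + Q + P @ S @ P
    return P, np.linalg.norm(residual, ord='fro')

if __name__ == "__main__":
    # Hypothetical 2nd-order example; A is Hurwitz, so P0 = 0 is an admissible start.
    A = np.array([[-1.0, 1.0], [0.0, -2.0]])
    B = np.array([[0.0], [1.0]])      # control input channel
    D = np.array([[1.0], [0.0]])      # disturbance input channel
    Q = np.eye(2)
    R = np.array([[1.0]])
    gamma = 5.0                       # disturbance attenuation level
    P, res = solve_gare(A, B, D, Q, R, gamma)
    print("P =\n", P)
    print("GARE residual:", res)
    print("control gain K =", np.linalg.solve(R, B.T @ P))
    print("worst-case disturbance gain L =", (1.0 / gamma**2) * D.T @ P)
```

In this sketch each iteration fixes the closed-loop matrix under the current policies of both players and solves a Lyapunov equation for the next cost matrix; the paper's contribution is to perform an analogous policy evaluation online from measured trajectory data, without requiring the drift matrix A.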