Online adaptive algorithm for optimal control with integral reinforcement learning
Authors:Kyriakos G Vamvoudakis  Draguna Vrabie  Frank L Lewis
Affiliation:1. Center for Control, Dynamical-Systems and Computation, University of California, Santa Barbara, CA, USA;2. United Technologies Research Center, Connecticut, USA;3. UTA Research Institute, University of Texas at Arlington, Texas, USA
Abstract:In this paper, we introduce an online algorithm that uses integral reinforcement learning to find the continuous-time optimal control solution for nonlinear systems with infinite-horizon costs and partial knowledge of the system dynamics. The algorithm is a data-based approach to solving the Hamilton–Jacobi–Bellman equation and does not require explicit knowledge of the system's drift dynamics. A novel adaptive control algorithm is given that is based on policy iteration and implemented using an actor/critic structure with two adaptive approximator networks, both of which are tuned simultaneously. A persistence of excitation condition is required to guarantee convergence of the critic to the actual optimal value function. Novel adaptive tuning laws are given for both the critic and actor networks, with extra terms in the actor tuning law required to guarantee closed-loop stability. Approximate convergence to the optimal controller is proven, and stability of the system is also guaranteed. Simulation examples support the theoretical results. Copyright © 2013 John Wiley & Sons, Ltd.
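The abstract's scheme can be illustrated on a toy problem. The sketch below is an assumption-laden simplification, not the paper's exact tuning laws: it applies integral reinforcement learning with simultaneous (synchronous) actor/critic adaptation to the scalar plant dx/dt = a·x + b·u with cost r(x, u) = x² + u². The quadratic basis phi(x) = x², the learning rates, and the state-reset schedule (a crude stand-in for a persistence of excitation condition) are all choices made for this example.

```python
import math

# Hedged sketch of integral RL with synchronous actor/critic tuning on a
# scalar linear plant. All tuning laws here are illustrative assumptions.

a, b = -1.0, 1.0              # plant; `a` is used only to simulate, never in the updates
T, dt = 0.1, 1e-3             # reinforcement interval and Euler integration step
alpha_c, alpha_a = 1.0, 0.1   # critic / actor step sizes (assumed)

w_c = 0.0                     # critic weight: V(x) ~ w_c * x^2
w_a = 0.0                     # actor weight:  u(x) = -0.5 * b * dV/dx ~ -w_a * x  (R = 1)
resets = [1.0, -0.7]          # cycling initial states keep the regressor exciting

for k in range(2000):
    x0 = resets[k % len(resets)]
    x = x0
    integral_r = 0.0
    for _ in range(int(T / dt)):       # accumulate the reinforcement  ∫ r dτ  on [t, t+T]
        u = -w_a * x
        integral_r += (x * x + u * u) * dt
        x += (a * x + b * u) * dt      # plant step; the learner never reads `a`
    # integral Bellman residual: e = ∫ r dτ + V(x(t+T)) - V(x(t))
    dphi = x * x - x0 * x0             # phi(x(t+T)) - phi(x(t))
    e = integral_r + w_c * dphi
    # normalized gradient descent on e^2 for the critic; a simple
    # critic-tracking law stands in for the paper's actor tuning law
    w_c -= alpha_c * e * dphi / (1.0 + dphi * dphi) ** 2
    w_a += alpha_a * (w_c - w_a)

# For this linear-quadratic problem the Riccati equation gives
# V(x) = (sqrt(2) - 1) * x^2, so both weights should approach ~0.414.
print(w_c, w_a)
```

Both weights approach the Riccati value √2 − 1 ≈ 0.414 even though the update laws never use the drift coefficient a, which is the "partial knowledge of the system dynamics" point the abstract makes about integral reinforcement learning.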
Keywords:synchronous integral reinforcement learning  Hamilton–Jacobi–Bellman equation  persistence of excitation  approximate dynamic programming