Adaptive dynamic programming for discrete-time linear quadratic regulation based on multirate generalised policy iteration
Authors: Tae Yoon Chun, Jae Young Lee, Yoon Ho Choi
Affiliation:1. School of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea;2. Department of Computing Science, University of Alberta, Edmonton, Canada;3. Department of Electronic Engineering, Kyonggi University, Suwon, Korea
Abstract: In this paper, we propose two multirate generalised policy iteration (GPI) algorithms for discrete-time linear quadratic regulation problems. The proposed algorithms extend the existing GPI algorithm, which consists of an approximate policy evaluation step and a policy improvement step. The two proposed schemes, heuristic dynamic programming (HDP) and dual HDP (DHP) based on multirate GPI, use multi-step estimation (the M-step Bellman equation) at the approximate policy evaluation step to estimate the value function and its gradient, called the costate, respectively. We then show that, for the same update horizon, the two methods are equivalent in the iteration domain. Furthermore, monotonically increasing and decreasing convergence, the so-called value iteration (VI)-mode and policy iteration (PI)-mode convergence, is proved to hold for the proposed multirate GPIs, and general convergence properties in terms of eigenvalues are also studied. Data-driven online implementation methods for the proposed HDP and DHP are presented, and finally, numerical simulation results are given to verify the effectiveness of the proposed methods.
Keywords: Multirate generalised policy iteration; heuristic dynamic programming; dual heuristic dynamic programming; adaptive dynamic programming; mixed-mode convergence; linear quadratic regulation
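
The multirate policy evaluation step described in the abstract can be illustrated with a short sketch. The Python snippet below is a minimal, illustrative implementation of the HDP (value-function) form of multirate GPI for discrete-time LQR, not the authors' implementation: the system matrices A and B, the weights Q and R, and the update horizon M are assumptions chosen purely for illustration and are not taken from the paper's numerical example. The DHP variant would iterate on the costate (the gradient of the value function) rather than on the value-function kernel P.

import numpy as np

# Illustrative multirate GPI (HDP form) for discrete-time LQR:
#   x_{k+1} = A x_k + B u_k,  stage cost x^T Q x + u^T R u,  policy u = -K x.
# All numerical values below are assumed for the sketch.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

M = 3                      # multirate update horizon (M-step Bellman equation)
K = np.zeros((1, 2))       # initial policy gain, u = -K x
P = np.zeros((2, 2))       # value-function kernel, V(x) = x^T P x

for _ in range(50):
    # Approximate policy evaluation: apply the Bellman operator for the
    # fixed policy K a total of M times before improving the policy.
    Ac = A - B @ K
    Qk = Q + K.T @ R @ K
    for _ in range(M):
        P = Qk + Ac.T @ P @ Ac
    # Policy improvement: greedy gain with respect to the current estimate P.
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

print("Approximate LQR gain K:\n", K)
print("Approximate value kernel P:\n", P)

As is standard for GPI, setting M = 1 in this sketch reduces the update to value-iteration-style sweeps, while letting the inner evaluation loop run to convergence recovers policy iteration; intermediate values of M give the multirate behaviour studied in the paper.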