Optimization over state feedback policies for robust control with constraints
Authors: Paul J. Goulart, Eric C. Kerrigan
Affiliations: a) Department of Engineering, University of Cambridge, Trumpington Street, Cambridge CB2 1PZ, UK; b) Department of Aeronautics and Department of Electrical and Electronic Engineering, Imperial College London, Exhibition Road, London SW7 2AZ, UK
Abstract: This paper is concerned with the optimal control of linear discrete-time systems subject to unknown but bounded state disturbances and mixed polytopic constraints on the state and input. It is shown that the class of admissible affine state feedback control policies with knowledge of prior states is equivalent to the class of admissible feedback policies that are affine functions of the past disturbance sequence. This implies that a broad class of constrained finite horizon robust and optimal control problems, where the optimization is over affine state feedback policies, can be solved in a computationally efficient fashion using convex optimization methods. This equivalence result is used to design a robust receding horizon control (RHC) state feedback policy such that the closed-loop system is input-to-state stable (ISS) and the constraints are satisfied for all time and all allowable disturbance sequences. The cost to be minimized in the associated finite horizon optimal control problem is quadratic in the disturbance-free state and input sequences. The value of the receding horizon control law can be calculated at each sample instant using a single, tractable and convex quadratic program (QP) if the disturbance set is polytopic, or a tractable second-order cone program (SOCP) if the disturbance set is given by a 2-norm bound.
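As a rough illustration (not taken from the paper), the Python/cvxpy sketch below sets up the affine disturbance feedback parameterization u_k = v_k + Σ_{j<k} M_{kj} w_j for an assumed double-integrator system with a box disturbance set, minimizes a quadratic cost on the disturbance-free state and input sequences, and enforces the state and input constraints robustly by enumerating the vertices of the disturbance set. The paper instead dualizes the polytopic constraints to obtain a single tractable QP; vertex enumeration is used here only for brevity and scales exponentially with the horizon. All system data (A, B, horizon, bounds) are illustrative assumptions.

```python
# Hedged sketch: affine disturbance feedback policy solved as a convex QP.
# System matrices, horizon, and bounds are illustrative assumptions.
import itertools
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # assumed double-integrator dynamics
B = np.array([[0.5], [1.0]])
N = 4                                     # assumed horizon length
x0 = np.array([3.0, 0.0])                 # assumed initial state
w_max = 0.1                               # assumed box disturbance on the first state
n, m = 2, 1

v = cp.Variable((N, m))                   # nominal (disturbance-free) inputs
# Strictly lower block-triangular disturbance feedback gains M[k][j], j < k
M = [[cp.Variable((m, n)) for _ in range(k)] for k in range(N)]

def rollout(w_seq):
    """Closed-loop state/input expressions for a fixed disturbance sequence."""
    x, xs, us = x0, [], []
    for k in range(N):
        u = v[k] + sum(M[k][j] @ w_seq[j] for j in range(k))
        xs.append(x); us.append(u)
        x = A @ x + B @ u + w_seq[k]
    xs.append(x)
    return xs, us

# Quadratic cost on the disturbance-free trajectory, as in the paper's objective.
xs0, us0 = rollout([np.zeros(n)] * N)
cost = sum(cp.sum_squares(x) for x in xs0) + sum(cp.sum_squares(u) for u in us0)

# Robust constraints enforced at the vertices of the assumed box disturbance set;
# the paper dualizes the polytopic set instead, avoiding this enumeration.
constraints = []
vertices = [np.array([s * w_max, 0.0]) for s in (-1.0, 1.0)]
for combo in itertools.product(vertices, repeat=N):
    xs, us = rollout(list(combo))
    for x in xs[1:]:
        constraints += [x <= 5.0, x >= -5.0]          # assumed state bounds
    for u in us:
        constraints += [u <= 1.0, u >= -1.0]          # assumed input bounds

prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print("optimal nominal cost:", prob.value)
print("first input of the receding horizon law:", v.value[0])
```

In the receding horizon scheme described in the abstract, this problem is re-solved at each sample instant from the measured state and the first nominal input v_0 of the new solution is applied.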
Keywords: Robust control; Constraint satisfaction; Robust optimization; Predictive control; Optimal control
Indexed by ScienceDirect and other databases.