Similar Documents
20 similar documents found (search time: 31 ms)
1.
This paper studies the linear quadratic Gaussian (LQG) problem for a class of discrete-time nonhomogeneous Markov jump linear systems, in which the mode transition probability matrix varies randomly over time, its variation being governed by a higher-level Markov chain. For the LQG problem of such systems, a linear optimal filter is first derived to obtain the optimal state estimate; next, the separation theorem is shown to hold, and the optimal controller is designed by dynamic programming; finally, numerical simulations verify the effectiveness of the designed controller.
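The dynamic-programming step in designs of this kind reduces to a backward coupled Riccati recursion, one equation per mode. The sketch below is illustrative Python for the simpler case of a *known, time-invariant* transition matrix (the paper treats a randomly varying one); the function name `mjls_lqr` and all matrix names are ours, not the paper's.

```python
import numpy as np

def mjls_lqr(A, B, Q, R, Pi, T, QT=None):
    """Backward coupled Riccati recursion for a discrete-time Markov jump
    linear system x_{k+1} = A_i x_k + B_i u_k (mode i = theta_k).

    A, B, Q, R are lists with one matrix per mode; Pi[i, j] is the (here
    time-invariant) probability of jumping from mode i to mode j.
    Returns per-mode feedback gains with u_k = -gains[k][i] @ x_k.
    """
    N = len(A)
    P = [(QT[i] if QT is not None else Q[i]).copy() for i in range(N)]
    gains = []
    for _ in range(T):
        # E_i(P) = sum_j Pi[i, j] P_j: expected cost-to-go over the next mode
        E = [sum(Pi[i, j] * P[j] for j in range(N)) for i in range(N)]
        K, Pn = [], []
        for i in range(N):
            S = R[i] + B[i].T @ E[i] @ B[i]
            Ki = np.linalg.solve(S, B[i].T @ E[i] @ A[i])
            K.append(Ki)
            Pn.append(Q[i] + A[i].T @ E[i] @ (A[i] - B[i] @ Ki))
        gains.append(K)
        P = Pn
    gains.reverse()  # gains[k][i]: gain at time k when the mode is i
    return gains, P
```

By the separation property stated in the abstract, the same gains can be applied to the filtered state estimate in the output-feedback case.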

2.
祝超群  郭戈 《控制与决策》2014,29(5):802-808

For networked control systems driven by stochastic events, the design of optimal controllers over finite and infinite horizons is studied. First, based on the actuators' medium-access mechanism, the networked control system is modeled as a Markov jump system with multiple states. Then, using dynamic programming and Markov jump linear system theory, an optimal control sequence satisfying a quadratic performance index is designed; by computing the stabilizing solution of the coupled Riccati equations, a method for obtaining the optimal control law is given, which renders the networked control system mean-square exponentially stable. Finally, simulation results demonstrate the effectiveness of the proposed method.


3.
Discrete-time coupled algebraic Riccati equations that arise in quadratic optimal control and H∞-control of Markovian jump linear systems are considered. First, the equations that arise from the quadratic optimal control problem are studied. The matrix cost is only assumed to be hermitian. Conditions for the existence of the maximal hermitian solution are derived in terms of the concept of mean square stabilizability and a convex set not being empty. A connection with convex optimization is established, leading to a numerical algorithm. A necessary and sufficient condition for the existence of a stabilizing solution (in the mean square sense) is derived. Sufficient conditions in terms of the usual observability and detectability tests for linear systems are also obtained. Finally, the coupled algebraic Riccati equations that arise from the H∞-control of discrete-time Markovian jump linear systems are analyzed. An algorithm for deriving a stabilizing solution, if it exists, is obtained. These results generalize and unify several previous ones presented in the literature of discrete-time coupled Riccati equations of Markovian jump linear systems. Date received: November 14, 1996. Date revised: January 12, 1999.
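One simple numerical route to the maximal solution of such coupled algebraic Riccati equations is plain fixed-point (value) iteration; the convex-optimization algorithm in the paper is more sophisticated, so the sketch below is only a minimal stand-in, with all names ours and mean-square stabilizability assumed.

```python
import numpy as np

def solve_care_mjls(A, B, Q, R, Pi, tol=1e-10, max_iter=10000):
    """Fixed-point iteration for the coupled algebraic Riccati equations of a
    discrete-time Markov jump linear system. Under mean-square stabilizability
    (and detectability-type conditions) the iterates approach the maximal
    solution; this is a sketch, not the paper's algorithm."""
    N = len(A)
    P = [Q[i].copy() for i in range(N)]
    for _ in range(max_iter):
        E = [sum(Pi[i, j] * P[j] for j in range(N)) for i in range(N)]
        Pn = []
        for i in range(N):
            G = B[i].T @ E[i]              # G = B_i' E_i(P)
            S = R[i] + G @ B[i]            # S = R_i + B_i' E_i(P) B_i
            Pn.append(Q[i] + A[i].T @ E[i] @ A[i]
                      - A[i].T @ G.T @ np.linalg.solve(S, G @ A[i]))
        if max(np.max(np.abs(Pn[i] - P[i])) for i in range(N)) < tol:
            return Pn
        P = Pn
    raise RuntimeError("coupled Riccati iteration did not converge")
```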

4.
The controller design problem is studied for a class of strict-feedback Markov jump nonlinear systems under an infinite-horizon risk-sensitive criterion. First, the solvability of the problem is reduced to the solvability of a class of HJB equations; then, based on this equation, a mode-independent controller is constructed which guarantees that the closed-loop system is bounded in probability and that the risk-sensitive index does not exceed an arbitrarily given positive constant; in particular, when the noise term vanishes at the origin, the risk-sensitive index can be made zero. Finally, a simulation example verifies the correctness of the theoretical results.

5.
The stochastic Nash differential game problem for linear Markov switching systems is studied. First, using existing results on stochastic optimal control of linear Markov switching systems, it is shown that the existence of finite-horizon and infinite-horizon Nash equilibrium solutions is equivalent to the solvability of the corresponding differential (algebraic) Riccati equations, and explicit expressions for the optimal solutions are given. The differential game results are then applied to the mixed H2/H∞ control problem of linear Markov switching systems. Finally, a numerical example verifies the feasibility of the proposed method.

6.
In this paper we consider the stochastic optimal control problem of discrete-time Markov jump with multiplicative noise linear systems. The performance criterion is assumed to be formed by a linear combination of a quadratic part and a linear part in the state and control variables. The weighting matrices of the state and control for the quadratic part are allowed to be indefinite. We present a necessary and sufficient condition under which the problem is well posed and a state feedback solution can be derived from a set of coupled generalized Riccati difference equations interconnected with a set of coupled linear recursive equations. For the case in which the quadratic-term matrices are non-negative, this necessary and sufficient condition can be written in a more explicit way. The results are applied to a problem of portfolio optimization.
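For the purely quadratic part of such problems, one backward step of the generalized coupled Riccati difference equations can be sketched as follows for a single scalar multiplicative noise source. This is an illustration under our own naming, not the paper's equations (which also carry the linear cost terms and allow indefinite weights, requiring the quadratic term `M` below to remain positive definite for well-posedness).

```python
import numpy as np

def gre_step(A, Abar, B, Bbar, Q, R, Pi, P):
    """One backward step of generalized coupled Riccati difference equations
    for x_{k+1} = (A_i + eps_k*Abar_i) x_k + (B_i + eps_k*Bbar_i) u_k,
    with eps_k zero-mean white noise of unit variance (sketch)."""
    N = len(A)
    E = [sum(Pi[i, j] * P[j] for j in range(N)) for i in range(N)]
    Pn, K = [], []
    for i in range(N):
        # M must be positive definite for the step to be well posed
        M = R[i] + B[i].T @ E[i] @ B[i] + Bbar[i].T @ E[i] @ Bbar[i]
        L = B[i].T @ E[i] @ A[i] + Bbar[i].T @ E[i] @ Abar[i]
        Ki = np.linalg.solve(M, L)          # u = -Ki x in mode i
        K.append(Ki)
        Pn.append(Q[i] + A[i].T @ E[i] @ A[i]
                  + Abar[i].T @ E[i] @ Abar[i] - L.T @ Ki)
    return Pn, K
```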

7.
In this note, we consider the finite-horizon quadratic optimal control problem of discrete-time Markovian jump linear systems driven by a wide sense white noise sequence. We assume that the output variable and the jump parameters are available to the controller. It is desired to design a dynamic Markovian jump controller such that the closed-loop system minimizes the quadratic functional cost of the system over a finite horizon period of time. As in the case with no jumps, we show that an optimal controller can be obtained from two coupled Riccati difference equations, one associated to the optimal control problem when the state variable is available, and the other one associated to the optimal filtering problem. This is a principle of separation for the finite horizon quadratic optimal control problem for discrete-time Markovian jump linear systems. When there is only one mode of operation our results coincide with the traditional separation principle for the linear quadratic Gaussian control of discrete-time linear systems.

8.
This paper deals with the robust H2-control of discrete-time Markovian jump linear systems. It is assumed that both the state and jump variables are available to the controller. Uncertainties satisfying some norm bounded conditions are considered on the parameters of the system. An upper bound for the H2-control problem is derived in terms of a linear matrix inequality (LMI) optimization problem. For the case in which there are no uncertainties, we show that the convex formulation is equivalent to the existence of the mean square stabilizing solution for the set of coupled algebraic Riccati equations arising on the quadratic optimal control problem of discrete-time Markovian jump linear systems. Therefore, for the case with no uncertainties, the convex formulation considered in this paper imposes no extra conditions than those in the usual dynamic programming approach. Finally some numerical examples are presented to illustrate the technique.

9.
Jump linear quadratic regulator with controlled jump rates
Deals with the class of continuous-time linear systems with Markovian jumps. We assume that jump rates are controlled. Our purpose is to study the jump linear quadratic (JLQ) regulator of the class of systems. The structure of the optimal controller is established. For a one-dimensional (1-D) system, an algorithm for solving the corresponding set of coupled Riccati equations of this optimal control problem is provided. Two numerical examples are given to show the usefulness of our results.

10.
We consider a linear-quadratic problem of minimax optimal control for stochastic uncertain control systems with output measurement. The uncertainty in the system satisfies a stochastic integral quadratic constraint. To convert the constrained optimization problem into an unconstrained one, a special S-procedure is applied. The resulting unconstrained game-type optimization problem is then converted into a risk-sensitive stochastic control problem with an exponential-of-integral cost functional. This is achieved via a certain duality relation between stochastic dynamic games and risk-sensitive stochastic control. The solution of the risk-sensitive stochastic control problem in terms of a pair of differential matrix Riccati equations is then used to establish a minimax optimal control law for the original uncertain system with uncertainty subject to the stochastic integral quadratic constraint. Date received: May 13, 1997. Date revised: March 18, 1998.

11.
It is well known that stochastic control systems can be viewed as Markov decision processes (MDPs) with continuous state spaces. In this paper, we propose to apply the policy iteration approach in MDPs to the optimal control problem of stochastic systems. We first provide an optimality equation based on performance potentials and develop a policy iteration procedure. Then we apply policy iteration to the jump linear quadratic problem and obtain the coupled Riccati equations for their optimal solutions. The approach is applicable to linear as well as nonlinear systems and can be implemented on-line on real world systems without identifying all the system structure and parameters.
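For the jump linear quadratic case, policy iteration alternates between evaluating the current gains through coupled Lyapunov equations and a one-step greedy improvement. A minimal model-based sketch (our own naming; the paper's on-line, potential-based scheme avoids explicit knowledge of all parameters, which this sketch assumes):

```python
import numpy as np

def policy_iteration_jlq(A, B, Q, R, Pi, K0, sweeps=20, eval_iters=500):
    """Policy iteration for the jump linear quadratic problem (sketch).
    Evaluation: solve the coupled Lyapunov equations for the current gains by
    fixed-point iteration. Improvement: one-step greedy update of the gains.
    K0 must be mean-square stabilizing for the evaluation step to converge."""
    N = len(A)
    K = [k.copy() for k in K0]
    for _ in range(sweeps):
        # evaluation: P_i = Q_i + K_i'R_iK_i + (A_i-B_iK_i)' E_i(P) (A_i-B_iK_i)
        P = [np.zeros_like(Q[i]) for i in range(N)]
        for _ in range(eval_iters):
            E = [sum(Pi[i, j] * P[j] for j in range(N)) for i in range(N)]
            P = [Q[i] + K[i].T @ R[i] @ K[i]
                 + (A[i] - B[i] @ K[i]).T @ E[i] @ (A[i] - B[i] @ K[i])
                 for i in range(N)]
        # improvement: greedy gains against the evaluated cost-to-go
        E = [sum(Pi[i, j] * P[j] for j in range(N)) for i in range(N)]
        K = [np.linalg.solve(R[i] + B[i].T @ E[i] @ B[i], B[i].T @ E[i] @ A[i])
             for i in range(N)]
    return K, P
```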

12.
This paper is concerned with the optimal time‐weighted H2 model reduction problem for discrete Markovian jump linear systems (MJLSs). The purpose is to find a mean square stable MJLS of lower order such that the time‐weighted H2 norm of the corresponding error system is minimized for a given mean square stable discrete MJLS. The notion of the time‐weighted H2 norm of a discrete MJLS is defined for the first time, and a computational formula for this norm is given, which requires the solution of two sets of recursive discrete Markovian jump Lyapunov‐type linear matrix equations. Based on the time‐weighted H2 norm formula, we propose a gradient flow method to solve the optimal time‐weighted H2 model reduction problem. A necessary condition for minimality is derived, which generalizes the standard result for systems when Markov jumps and the time‐weighting term do not appear. Finally, numerical examples are used to illustrate the effectiveness of the proposed approach. Copyright © 2015 John Wiley & Sons, Ltd.

13.
A modified optimal algorithm for multirate output feedback controllers of linear stochastic periodic systems is developed. By combining the discrete-time linear quadratic regulation (LQR) control problem and the discrete-time stochastic linear quadratic regulation (SLQR) control problem to obtain an extended linear quadratic regulation (ELQR) control problem, one derives a general optimal algorithm to balance the advantages of the optimal transient response of the LQR control problem and the optimal steady-state regulation of the SLQR control problem. In general, the solution of this algorithm is obtained by solving a set of coupled matrix equations. Special cases for which the coupled matrix equations can be reduced to a discrete-time algebraic Riccati equation are discussed. A reducible case is the optimal algorithm derived by H.M. Al-Rahmani and G.F. Franklin (1990), where the system has complete state information and the discrete-time quadratic performance index is transformed from a continuous-time one.

14.
Finite-dimensional optimal risk-sensitive filters and smoothers are obtained for discrete-time nonlinear systems by adjusting the standard exponential of a quadratic risk-sensitive cost index to one involving the plant nonlinearity. It is seen that these filters and smoothers are the same as those for a fictitious linear plant with the exponential of squared estimation error as the corresponding risk-sensitive cost index. Such finite-dimensional filters do not exist for nonlinear systems in the case of minimum variance filtering and control.

15.
Solves a finite-horizon partially observed risk-sensitive stochastic optimal control problem for discrete-time nonlinear systems and obtains small noise and small risk limits. The small noise limit is interpreted as a deterministic partially observed dynamic game, and new insights into the optimal solution of such game problems are obtained. Both the risk-sensitive stochastic control problem and the deterministic dynamic game problem are solved using information states, dynamic programming, and associated separated policies. A certainty equivalence principle is also discussed. The authors' results have implications for the nonlinear robust stabilization problem. The small risk limit is a standard partially observed risk-neutral stochastic optimal control problem.

16.
An optimal control problem is considered for a nonlinear stochastic system with an interrupted observation mechanism that is characterized in terms of a jump Markov process taking on the values 0 or 1. The state of the system is described by a diffusion process, but the observation has components modulated by the jump process. The admissible control laws are smooth functions of the observation. Using the calculus of variations, necessary conditions on optimal controls are derived. These conditions amount to solving a set of four coupled nonlinear partial differential equations. A numerical procedure for solving these equations is suggested and an example dealt with numerically.

17.
The Hamilton–Jacobi–Bellman (HJB) equation can be solved to obtain optimal closed-loop control policies for general nonlinear systems. As it is seldom possible to solve the HJB equation exactly for nonlinear systems, either analytically or numerically, methods to build approximate solutions through simulation based learning have been studied in various names like neurodynamic programming (NDP) and approximate dynamic programming (ADP). The aspect of learning connects these methods to reinforcement learning (RL), which also tries to learn optimal decision policies through trial-and-error based learning. This study develops a model-based RL method, which iteratively learns the solution to the HJB and its associated equations. We focus particularly on the control-affine system with a quadratic objective function and the finite horizon optimal control (FHOC) problem with time-varying reference trajectories. The HJB solutions for such systems involve time-varying value, costate, and policy functions subject to boundary conditions. To represent the time-varying HJB solution in high-dimensional state space in a general and efficient way, deep neural networks (DNNs) are employed. It is shown that the use of DNNs, compared to shallow neural networks (SNNs), can significantly improve the performance of a learned policy in the presence of uncertain initial state and state noise. Examples involving a batch chemical reactor and a one-dimensional diffusion-convection-reaction system are used to demonstrate this and other key aspects of the method.
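The core idea of learning a time-varying value function backward from the terminal condition can be illustrated on a scalar finite-horizon LQ problem, where a least-squares fit over sampled states plays the role of the paper's DNN function approximator. This toy sketch (our own function and names, least squares instead of deep networks, a control grid instead of the analytic minimization) recovers the exact Riccati value function because the true value is quadratic.

```python
import numpy as np

def fhoc_fitted_vi(a, b, q, r, T, xs, degree=2):
    """Fitted value iteration for a scalar finite-horizon LQ problem: at each
    backward step the cost-to-go is fitted by least squares on sampled states
    xs (quadratic features; a toy stand-in for a DNN-based HJB scheme)."""
    feats = lambda x: np.vander(x, degree + 1)  # columns [x^2, x, 1]
    w = np.zeros(degree + 1)                    # terminal value V_T = 0
    for _ in range(T):
        targets = []
        for x in xs:
            # Bellman backup: minimize stage cost + fitted next-step value
            us = np.linspace(-5.0, 5.0, 201)    # crude control grid
            vals = q * x * x + r * us**2 + feats(a * x + b * us) @ w
            targets.append(vals.min())
        w, *_ = np.linalg.lstsq(feats(xs), np.array(targets), rcond=None)
    return w  # w[0] approximates the Riccati solution P_0
```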

18.
In this paper we consider the H2-control problem of discrete-time Markovian jump linear systems. We assume that only an output and the jump parameters are available to the controller. It is desired to design a dynamic Markovian jump controller such that the closed-loop system is mean square stable and minimizes the H2-norm of the system. As in the case with no jumps, we show that an optimal controller can be obtained from two sets of coupled algebraic Riccati equations, one associated with the optimal control problem when the state variable is available, and the other associated with the optimal filtering problem. This is the principle of separation for discrete-time Markovian jump linear systems. When there is only one mode of operation our results coincide with the traditional separation principle for the H2-control of discrete-time linear systems. Date received: June 1, 2001. Date revised: October 13, 2003.

19.
In this paper we optimally solve a stochastic perfectly observed dynamic game for discrete-time linear systems with Markov jump parameters (LSMJPs). The results here encompass both the cooperative and non-cooperative case. Included also, is a verification theorem. Besides being interesting in its own right, the motivation here lies, inter alia, in the results of recent vintage, which show that, for the classical linear case, the risk-sensitive optimal control problem approach is intimately bound up with the H∞ approach.

20.
In this paper, we present an iterative technique based on Monte Carlo simulations for deriving the optimal control of the infinite horizon linear regulator problem of discrete-time Markovian jump linear systems for the case in which the transition probability matrix of the Markov chain is not known. We trace a parallel with the theory of TD(λ) algorithms for Markovian decision processes to develop a TD(λ) like algorithm for the optimal control associated to the maximal solution of a set of coupled algebraic Riccati equations (CARE). It is assumed that either there is a sample of past observations of the Markov chain that can be used for the iterative algorithm, or it can be generated through a computer program. Our proofs rely on the spectral radius of the closed loop operators associated to the mean square stability of the system being less than 1.
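A basic simulation-based ingredient of such schemes is extracting transition statistics from an observed sample path of the Markov chain; the empirical estimator below is a sketch of that step only (our own naming; the paper's TD(λ)-like algorithm operates on the Riccati iterates themselves rather than forming an explicit estimate first).

```python
import numpy as np

def estimate_transition_matrix(chain, N):
    """Empirical transition matrix from an observed sample path of a Markov
    chain with modes 0..N-1 (sketch of the simulation-based ingredient)."""
    counts = np.zeros((N, N))
    for i, j in zip(chain[:-1], chain[1:]):
        counts[i, j] += 1
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0  # avoid 0/0; unvisited modes keep an all-zero row
    return counts / rows
```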

