Similar Literature
 Found 20 similar documents (search time: 46 ms)
1.
Using the sliding-sector method, the variable structure control design problem is studied for uncertain stochastic Markov jump systems. A definition of the sliding sector for stochastic Markov jump systems is first given; then, based on linear matrix inequality (LMI) techniques, a design method for the sliding sector and the variable structure control law is proposed. It is proved that the control law guarantees quadratic stability of the uncertain stochastic Markov jump system and effectively suppresses chattering. Finally, a numerical simulation example verifies the effectiveness of the control scheme.
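As a rough illustration of the plant class studied in this and several later entries, a discrete-time Markov jump linear system can be simulated in a few lines. The two mode matrices and the transition probabilities below are made-up examples, not taken from the paper; both modes happen to be Schur stable, so the state decays under switching:

```python
import numpy as np

# Hypothetical two-mode Markov jump linear system x_{k+1} = A[r_k] x_k,
# where the mode r_k evolves as a Markov chain with transition matrix Pi.
A = [np.array([[0.9, 0.1], [0.0, 0.8]]),
     np.array([[0.5, -0.2], [0.1, 0.7]])]
Pi = np.array([[0.95, 0.05],   # P(r_{k+1}=j | r_k=i)
               [0.10, 0.90]])

rng = np.random.default_rng(0)

def simulate(x0, mode0, steps):
    """Simulate the jump system; returns state and mode trajectories."""
    x, r = np.array(x0, dtype=float), mode0
    xs, rs = [x.copy()], [r]
    for _ in range(steps):
        x = A[r] @ x                      # continuous state update in mode r
        r = rng.choice(2, p=Pi[r])        # Markovian mode jump
        xs.append(x.copy())
        rs.append(r)
    return np.array(xs), np.array(rs)

xs, rs = simulate([1.0, -1.0], 0, 50)
```

A variable structure or sliding-sector controller would replace the open-loop update with a mode-dependent feedback law; the simulation skeleton stays the same.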

2.
A simple dual control problem with an analytical solution   Cited by: 1 (self-citations: 0, others: 1)
A stochastic control problem for which the optimal dual control law can be calculated analytically is given. The system is a four-state Markov chain with transition probabilities that depend on the control variable. The performances of the optimal dual control law and of some suboptimal control laws are calculated and compared.
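The setting here, a small Markov chain whose transition probabilities depend on the control, can be sketched as follows. The transition matrices, stage costs, and policies are illustrative assumptions; the long-run average cost of a fixed stationary policy is computed from the stationary distribution of the closed-loop chain, which is the baseline against which a dual control law would be compared:

```python
import numpy as np

# Hypothetical 4-state controlled Markov chain: the transition matrix
# depends on a binary control u, loosely in the spirit of the setup above.
P = {0: np.array([[0.7, 0.3, 0.0, 0.0],
                  [0.2, 0.5, 0.3, 0.0],
                  [0.0, 0.2, 0.5, 0.3],
                  [0.0, 0.0, 0.3, 0.7]]),
     1: np.array([[0.4, 0.6, 0.0, 0.0],
                  [0.1, 0.4, 0.5, 0.0],
                  [0.0, 0.1, 0.4, 0.5],
                  [0.0, 0.0, 0.6, 0.4]])}
cost = np.array([0.0, 1.0, 2.0, 3.0])   # per-stage state cost (made up)

def average_cost(policy):
    """Long-run average cost of a stationary policy (one action per state)."""
    Pp = np.array([P[policy[s]][s] for s in range(4)])   # closed-loop chain
    w, v = np.linalg.eig(Pp.T)
    pi = np.real(v[:, np.argmax(np.real(w))])            # Perron eigenvector
    pi = pi / pi.sum()                                   # stationary dist.
    return float(pi @ cost)

c0 = average_cost([0, 0, 0, 0])
c1 = average_cost([1, 1, 1, 1])
```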

3.
Consideration is given to a class of systems described by a finite set of controlled control-affine diffusion Itô processes, with jump transitions between them defined by the evolution of a uniform Markov chain (Markovian switching). Each state of this chain corresponds to a certain system mode. A stochastic version of Willems' notion of dissipativity is introduced, and properties of diffusion processes with Markovian switching are studied. The relationship between passivity and stabilizability under output-feedback control is established. The obtained results are applied to the problem of robust simultaneous stabilization for a set of nonlinear systems with uncertain parameters. As a particular case, the problem of robust simultaneous stabilization is solved for a set of linear systems, with the final results obtained in terms of linear matrix inequalities.

4.
Stochastic Stability of Networked Control Systems   Cited by: 9 (self-citations: 2, others: 7)
马卫国  邵诚 《自动化学报》2007,33(8):878-882
The stochastic stability problem is studied for networked control systems with random network-induced delays and packet dropouts. A two-state Markov chain is used to describe the random packet-loss process during network transmission. Using Markov jump linear system theory, the networked control system is modeled as a Markov jump linear system with two operation modes, and a sufficient condition in the form of linear matrix inequalities is given for stochastic stability under state feedback control. Finally, a simulation example verifies the effectiveness of the method.
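The two-state Markov packet-loss model used here is easy to simulate. The transition probabilities below are illustrative assumptions; the empirical delivery rate should approach the stationary delivery probability 0.7/(0.7 + 0.1) = 0.875:

```python
import numpy as np

# Two-state Markov chain modeling packet loss over the network:
# state 1 = packet delivered, state 0 = packet dropped.
T = np.array([[0.3, 0.7],   # P(next | current = dropped)
              [0.1, 0.9]])  # P(next | current = delivered)

rng = np.random.default_rng(42)

def dropout_sequence(n, s0=1):
    """Generate n steps of the Markov packet-loss process."""
    s, seq = s0, []
    for _ in range(n):
        s = int(rng.choice(2, p=T[s]))
        seq.append(s)
    return np.array(seq)

seq = dropout_sequence(10000)
delivery_rate = seq.mean()   # should be close to 0.875
```

In the paper's setup, each value of this chain selects one of the two operation modes of the Markov jump linear system model.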

5.
Constrained Predictive Control of Markov Jump Systems Based on Terminal Invariant Sets   Cited by: 5 (self-citations: 2, others: 3)
刘飞  蔡胤 《自动化学报》2008,34(4):496-499
For discrete-time Markov jump systems, the finite-horizon predictive control problem with input and output constraints is studied. For each mode trajectory within the given prediction horizon, a control input sequence is designed to drive the system state into the corresponding terminal invariant set; beyond the prediction horizon, a virtual state feedback controller is sought to guarantee stochastic stability of the system. On this basis, design methods for predictive controllers with input and output constraints are given in terms of linear matrix inequalities (LMIs).

6.
Jump Markov linear systems are linear systems whose parameters evolve with time according to a finite-state Markov chain. Given a set of observations, our aim is to estimate the states of the finite-state Markov chain and the continuous (in space) states of the linear system. The computational cost of computing conditional mean or maximum a posteriori (MAP) state estimates of the Markov chain or the state of the jump Markov linear system grows exponentially in the number of observations. We present three globally convergent algorithms based on stochastic sampling methods for state estimation of jump Markov linear systems. The cost per iteration is linear in the data length. The first proposed algorithm is a data augmentation (DA) scheme that yields conditional mean state estimates. The second proposed scheme is a stochastic annealing (SA) version of DA that computes the joint MAP sequence estimate of the finite and continuous states. Finally, a Metropolis-Hastings DA scheme based on SA is designed to yield the MAP estimate of the finite-state Markov chain. Convergence results for the three above-mentioned stochastic algorithms are obtained. Computer simulations are carried out to evaluate the performance of the proposed algorithms. The problem of estimating a sparse signal based on a set of noisy data from a neutron sensor and the problem of narrow-band interference suppression in spread-spectrum code-division multiple-access (CDMA) systems are considered.

7.
This paper addresses the stabilization problem for single-input Markov jump linear systems via mode-dependent quantized state feedback. Given a measure of quantization coarseness, a mode-dependent logarithmic quantizer and a mode-dependent linear state feedback law can achieve optimal coarseness for mean square quadratic stabilization of a Markov jump linear system, similar to existing results for linear time-invariant systems. The sector bound approach is shown to be non-conservative in investigating the corresponding quantized state feedback problem, and then a method of optimal quantizer/controller design in terms of linear matrix inequalities is presented. Moreover, when the mode process is not observed by the controller and quantizer, a mode estimation algorithm obtained by maximizing a certain probability criterion is given. Finally, an application to networked control systems further demonstrates the usefulness of the results.
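A logarithmic quantizer of the kind this sector-bound analysis relies on can be sketched directly; the density values per mode below are illustrative assumptions. Levels are ±u0·ρᵏ, and the quantization error obeys the sector bound |q(v) − v| ≤ δ|v| with δ = (1 − ρ)/(1 + ρ), which is exactly the bounded multiplicative uncertainty the sector-bound approach exploits:

```python
import numpy as np

# Sketch of a mode-dependent logarithmic quantizer (parameter values are
# illustrative). Quantization levels are +/- u0 * rho**k; the interval
# (u/(1+delta), u/(1-delta)] maps to level u, giving the sector bound
# |q(v) - v| <= delta * |v| with delta = (1 - rho) / (1 + rho).
def log_quantize(v, rho, u0=1.0):
    if v == 0.0:
        return 0.0
    delta = (1 - rho) / (1 + rho)
    k = np.ceil(np.log(abs(v) * (1 + delta) / u0) / np.log(rho))
    return float(np.sign(v) * u0 * rho**k)

rho = {0: 0.5, 1: 0.8}   # hypothetical per-mode densities (coarser in mode 0)
q = log_quantize(0.37, rho[0])
```

Mean square quadratic stabilization then reduces to finding, per mode, the coarsest ρᵢ (smallest density) for which the robust stabilization conditions under sector-bounded uncertainty δᵢ remain feasible.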

8.
For a class of stochastic hidden Markov jump systems with mismatched disturbances, this paper studies the extended state observer (ESO)-based finite-time asynchronous control problem. First, a set of extended variables is introduced to transform the hidden Markov jump system into a new stochastic augmented system, compensating the effect of the mismatched disturbances on the controlled output. Based on the Lyapunov–Krasovskii functional method, sufficient conditions are given under which the ESO-based closed-loop stochastic hidden Markov augmented jump system is positive and finite-time bounded. Linear matrix inequalities that directly yield the observer and controller gains are then derived. Finally, simulation results verify the effectiveness and feasibility of the designed asynchronous state feedback controller and observer.

9.
This paper studies the stability and stabilization of a class of non-homogeneous Markov jump positive linear systems. In such systems, the mode evolves according to a non-homogeneous Markov process whose transition rate/probability matrix varies randomly over time, with the variation governed by a higher-level Markov process; a two-level Markov jump positive system model is proposed to capture this structure. On this basis, criteria for average stability of such continuous- and discrete-time non-homogeneous Markov jump positive linear systems are given using switched linear copositive Lyapunov functions. Then, linear programming is used to design state feedback controllers depending on both the mode and the transition rate/probability matrix, achieving average stability of the closed-loop system. Finally, a power allocation system is used as a simulation example to verify the effectiveness of the proposed control strategy.
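The linear copositive Lyapunov condition underlying this line of work is simple to check numerically. For a positive system x′ = Ax with A Metzler, stability is equivalent to the existence of a vector v > 0 with Aᵀv < 0 (componentwise); the paper finds such v by linear programming, while the sketch below, for an illustrative Hurwitz Metzler A, uses the fact that v = −(Aᵀ)⁻¹·1 is such a vector and verifies it:

```python
import numpy as np

# Linear copositive Lyapunov function check for a positive system sketch:
# x' = A x with A Metzler is asymptotically stable iff there exists
# v > 0 with A^T v < 0 componentwise (V(x) = v^T x is the Lyapunov function).
A = np.array([[-2.0, 1.0],
              [0.5, -3.0]])      # Metzler (off-diagonals >= 0), illustrative

# For a Hurwitz Metzler matrix, v = -(A^T)^{-1} * ones is strictly positive
# and satisfies A^T v = -ones < 0 by construction.
v = -np.linalg.solve(A.T, np.ones(2))
stable = bool(np.all(v > 0) and np.all(A.T @ v < 0))
```

In the mode-dependent setting, one such vector vᵢ is sought per mode, with coupling terms from the transition rate/probability matrix added to the inequalities; that coupled feasibility problem is what the linear program solves.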

10.
We consider a stochastic control problem with linear dynamics with jumps, convex cost criterion, and convex state constraint, in which the control enters the drift, the diffusion, and the jump coefficients. We allow these coefficients to be random, and do not impose any Lp-bounds on the control.

We obtain a stochastic maximum principle for this model that provides both necessary and sufficient conditions of optimality. This is the first version of the stochastic maximum principle that covers the consumption–investment problem in which there are jumps in the price system.


11.
祝超群  郭戈 《控制与决策》2014,29(5):802-808

For networked control systems driven by stochastic events, the design of optimal controllers over finite and infinite horizons is studied. First, according to the actuator's medium access mechanism, the networked control system is modeled as a Markov jump system with multiple states. Then, based on dynamic programming and Markov jump linear system theory, an optimal control sequence satisfying a quadratic performance index is designed; by solving for the stabilizing solution of coupled Riccati equations, a method for computing the optimal control law is given that renders the networked control system mean-square exponentially stable. Finally, simulation experiments demonstrate the effectiveness of the proposed method.
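The coupled Riccati equations that appear here (and in several other entries) can be solved by a simple fixed-point (value) iteration for the jump-LQR case. All system data below are made up for illustration: two modes, mode-coupled term Eᵢ(P) = Σⱼ Πᵢⱼ Pⱼ, and mode-dependent gains u = −K[r]x:

```python
import numpy as np

# Illustrative fixed-point iteration for the coupled Riccati equations of a
# two-mode discrete-time Markov jump LQR problem (system data is made up).
A = [np.array([[1.05, 0.3], [0.0, 0.9]]),    # mode 0 (unstable)
     np.array([[0.8, -0.2], [0.1, 1.0]])]    # mode 1
B = [np.array([[0.0], [1.0]]), np.array([[0.0], [1.0]])]
Pi = np.array([[0.9, 0.1], [0.2, 0.8]])      # mode transition probabilities
Q, R = np.eye(2), np.array([[1.0]])

def coupled_expectation(P, i):
    """Mode-coupled term E_i(P) = sum_j Pi[i, j] * P[j]."""
    return sum(Pi[i, j] * P[j] for j in range(2))

P = [np.zeros((2, 2)), np.zeros((2, 2))]
for _ in range(1000):                        # value iteration on the CAREs
    Pn = []
    for i in range(2):
        Ei = coupled_expectation(P, i)
        S = R + B[i].T @ Ei @ B[i]
        G = np.linalg.solve(S, B[i].T @ Ei @ A[i])
        Pn.append(Q + A[i].T @ Ei @ (A[i] - B[i] @ G))
    P = Pn

# Mode-dependent optimal gains: u_k = -K[r_k] x_k
K = [np.linalg.solve(R + B[i].T @ coupled_expectation(P, i) @ B[i],
                     B[i].T @ coupled_expectation(P, i) @ A[i])
     for i in range(2)]
```

This is only a sketch of the unconstrained, full-state-information case; the event-driven access mechanism in the paper shapes the transition matrix Π and the mode set.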


12.
The stochastic Nash differential game problem for linear Markov switching systems is studied. First, using results on stochastic optimal control of linear Markov switching systems, it is shown that the existence of finite-horizon and infinite-horizon Nash equilibrium solutions is equivalent to the existence of solutions of the corresponding differential (algebraic) Riccati equations, and explicit expressions of the optimal solutions are given. The differential game results are then applied to the mixed H2/H∞ control problem for linear Markov switching systems. Finally, a numerical example verifies the feasibility of the proposed method.

13.
Consideration is given to the class of systems described by a finite set of controllable control-affine diffusion Itô processes, with stepwise transitions defined by the evolution of a uniform Markov chain (Markov switching). For these systems, the notion of exponential dissipativity is introduced, and its theory is developed and used to estimate the admissible variations of the output feedback law under which the system retains robust stability. For a set of linear systems with uncertain parameters, a two-step procedure is proposed for determining the output feedback control based on comparison with the stochastic model, providing simultaneous robust stabilization. At the first step, the robust stabilizing control is established by means of an iterative algorithm. Then the admissible variations of the feedback law preserving robust stability are estimated by solving a system of linear matrix inequalities. An example is presented.

14.
This paper considers a class of mean-field stochastic linear–quadratic optimal control problems with Markov jump parameters. The new feature of these problems is that means of state and control are incorporated into the systems and the cost functional. Based on the modes of Markov chain, the corresponding decomposition technique of augmented state and control is introduced. It is shown that, under some appropriate conditions, there exists a unique optimal control, which can be explicitly given via solutions of two generalized difference Riccati equations. A numerical example sheds light on the theoretical results established.

15.
It is well known that stochastic control systems can be viewed as Markov decision processes (MDPs) with continuous state spaces. In this paper, we propose to apply the policy iteration approach of MDPs to the optimal control problem of stochastic systems. We first provide an optimality equation based on performance potentials and develop a policy iteration procedure. Then we apply policy iteration to the jump linear quadratic problem and obtain the coupled Riccati equations for its optimal solutions. The approach is applicable to linear as well as nonlinear systems and can be implemented on-line on real-world systems without identifying all of the system structure and parameters.
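The policy iteration procedure the paper lifts to continuous state spaces is easiest to see on a small finite MDP. All data below (transitions, costs, discount factor) are made-up examples; the loop alternates policy evaluation (a linear solve) with greedy policy improvement until the policy is a fixed point:

```python
import numpy as np

# Minimal policy iteration on a 2-state, 2-action discounted-cost MDP.
P = {0: np.array([[0.9, 0.1], [0.4, 0.6]]),   # action 0 transitions
     1: np.array([[0.5, 0.5], [0.1, 0.9]])}   # action 1 transitions
c = np.array([[1.0, 2.0], [4.0, 0.5]])        # c[state, action] stage costs
gamma = 0.95

policy = np.zeros(2, dtype=int)
for _ in range(20):
    # Policy evaluation: solve (I - gamma * P_pi) V = c_pi
    Ppi = np.array([P[policy[s]][s] for s in range(2)])
    cpi = np.array([c[s, policy[s]] for s in range(2)])
    V = np.linalg.solve(np.eye(2) - gamma * Ppi, cpi)
    # Policy improvement: greedy with respect to the current value function
    Qsa = np.array([[c[s, a] + gamma * P[a][s] @ V for a in (0, 1)]
                    for s in range(2)])
    new_policy = np.argmin(Qsa, axis=1)
    if np.array_equal(new_policy, policy):
        break                                  # policy is a fixed point
    policy = new_policy
```

In the jump-LQ case, the evaluation step becomes a set of coupled Lyapunov equations and the improvement step a gain update, which together recover the coupled Riccati equations mentioned in the abstract.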

16.
An optimal control problem is considered for a nonlinear stochastic system with an interrupted observation mechanism that is characterized in terms of a jump Markov process taking on the values 0 or 1. The state of the system is described by a diffusion process, but the observation has components modulated by the jump process. The admissible control laws are smooth functions of the observation. Using the calculus of variations, necessary conditions on optimal controls are derived. These conditions amount to solving a set of four coupled nonlinear partial differential equations. A numerical procedure for solving these equations is suggested, and an example is dealt with numerically.

17.
This paper discusses the state estimation and optimal control problem of a class of partially-observable stochastic hybrid systems (POSHS). The POSHS has interacting continuous and discrete dynamics with uncertainties. The continuous dynamics are given by a Markov-jump linear system, and the discrete dynamics are defined by a Markov chain whose transition probabilities depend on the continuous state via guard conditions. The only information available to the controller is noisy measurements of the continuous state. To solve the optimal control problem, a separable control scheme is applied: the controller estimates the continuous and discrete states of the POSHS using noisy measurements and computes the optimal control input from the state estimates. Since computing both optimal state estimates and optimal control inputs is intractable, this paper proposes computationally efficient algorithms to solve this problem numerically. The proposed hybrid estimation algorithm is able to handle state-dependent Markov transitions and computes Gaussian-mixture distributions as the state estimates. With the computed state estimates, a reinforcement learning algorithm defined on a function space is proposed. This approach is based on Monte Carlo sampling and integration on a function space containing all the probability distributions of the hybrid state estimates. Finally, the proposed algorithm is tested via numerical simulations.

18.
We discuss state estimation for a class of linear discrete-time stochastic jump systems in which a Markov process governs the operation mode and the state variables and disturbances are subject to inequality constraints. A moving-horizon estimation approach addresses the constrained state estimation problem, and a Bayesian network technique handles the stochastic jumps. The moving-horizon state estimator designed in this paper produces constrained state estimates with a lower error covariance than its unconstrained counterpart. The new estimation method is applied to the design of constrained state estimators in two practical applications.

19.
We analyze the tracking performance of the least mean square (LMS) algorithm for adaptively estimating a time-varying parameter that evolves according to a finite-state Markov chain. We assume the Markov chain jumps infrequently between the finite states, at the same rate of change as the LMS algorithm. We derive mean square estimation error bounds for the tracking error of the LMS algorithm using perturbed Lyapunov function methods. Then, combining results on two-time-scale Markov chains with weak convergence methods for stochastic approximation, we derive the limit dynamics satisfied by continuous-time interpolation of the estimates. Unlike most previous analyses of stochastic approximation algorithms, the limit we obtain is a system of ordinary differential equations with regime switching controlled by a continuous-time Markov chain. Next, to analyze the rate of convergence, we take a continuous-time interpolation of a scaled sequence of the error sequence and derive its diffusion limit. Somewhat remarkably, for correlated regression vectors we obtain a jump Markov diffusion. Finally, two novel examples of the analysis are given for state estimation of hidden Markov models (HMMs) and adaptive interference suppression in wireless code division multiple access (CDMA) networks.
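The scenario analyzed here is straightforward to reproduce in simulation: a scalar LMS filter tracking a parameter that jumps between two values according to a slow Markov chain. All numbers below (jump probability, step size, noise level) are illustrative assumptions, not the paper's:

```python
import numpy as np

# Scalar LMS tracking a Markov-jumping parameter (illustrative sketch).
rng = np.random.default_rng(1)
theta_states = np.array([-1.0, 1.0])   # values the true parameter jumps between
p_jump, mu, n = 0.001, 0.05, 20000     # jump prob., LMS step size, samples

mode, theta_hat = 0, 0.0
err2 = []
for _ in range(n):
    if rng.random() < p_jump:          # infrequent Markov jump of the truth
        mode = 1 - mode
    x = rng.normal()                   # scalar regressor
    y = theta_states[mode] * x + 0.1 * rng.normal()   # noisy observation
    e = y - theta_hat * x              # a priori estimation error
    theta_hat += mu * x * e            # LMS update
    err2.append((theta_hat - theta_states[mode]) ** 2)

mse_tail = float(np.mean(err2[-5000:]))   # steady-state tracking error
```

The tracking error stays small except for brief transients after each jump, which is the behavior the paper's mean square error bounds and diffusion limits quantify.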

20.
This paper studies the optimal control problem for a class of discrete-time nonlinear stochastic systems containing both Markov jump processes and multiplicative noise, and states and proves the corresponding maximum principle. First, using the smoothing property of conditional expectation and introducing backward stochastic difference equations with adapted solutions, a representation of linear functionals constrained by linear difference equations is given, and its uniqueness is proved via the Riesz theorem. Second, for the nonlinear stochastic control system with Markov jumps, the needle variation method is used to perform a first-order variation of the state equation, yielding the linear difference equation satisfied by the variation. Then, after introducing the Hamiltonian, the maximum principle for the discrete-time nonlinear stochastic optimal control problem with Markov jumps is stated and proved via a pair of adjoint equations characterized by backward stochastic difference equations; a sufficient condition for this optimal control problem and the corresponding Hamilton–Jacobi–Bellman equation are also given. Finally, a practical example illustrates the applicability and feasibility of the proposed theory.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号