Similar Documents
20 similar documents found.
1.
In this paper, the stabilization of stochastic coupled systems (SCSs) with time delay via feedback control based on discrete-time state observations is investigated. We use discrete-time state feedback control to stabilize SCSs with time delay. Moreover, by employing the Lyapunov method and graph theory, an upper bound on the duration between two consecutive state observations is obtained, and criteria are established that guarantee stabilization, in the sense of ‐stability and mean-square asymptotic stability, of SCSs with time delay under feedback control based on discrete-time state observations. In addition, to verify the theoretical results, the criteria are applied to stochastic coupled oscillators with time delay. Finally, a numerical example is given to illustrate the applicability and effectiveness of our analytical results.

2.
In this paper, the model predictive control problem is investigated for a class of discrete-time systems with random delay and randomly occurring nonlinearity. The randomly occurring nonlinearity, which describes a class of nonlinear disturbances that appear in a random way, is modeled by a Bernoulli-distributed white sequence with a known conditional probability. Moreover, the random delay is governed by a discrete-time finite-state Markov chain. The approach of delay fractioning is applied to the controller synthesis. It is shown that the proposed model predictive controller guarantees the stochastic stability of the closed-loop system. Finally, a numerical simulation is given to illustrate the effectiveness of the proposed method.
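As a rough illustration of the "randomly occurring nonlinearity" model in this entry, the sketch below simulates x_{k+1} = A x_k + α_k f(x_k), where α_k is a Bernoulli white sequence. The matrix A, the nonlinearity f, and the probability p used in the example are placeholder values, not taken from the paper (which additionally treats random delay and the predictive controller itself).

```python
import numpy as np

def simulate_ron(A, f, p, x0, T, rng):
    """Simulate x_{k+1} = A x_k + alpha_k * f(x_k), where alpha_k is a
    Bernoulli(p) white sequence: the nonlinearity f enters the dynamics
    only in a random fraction p of the time steps."""
    x = [np.asarray(x0, dtype=float)]
    for _ in range(T):
        alpha = rng.random() < p          # Bernoulli(p) switch
        x.append(A @ x[-1] + (f(x[-1]) if alpha else 0.0))
    return np.array(x)                    # shape (T + 1, n)
```

With a Schur-stable A and a bounded f, trajectories remain bounded regardless of the realized α sequence, which is the intuition behind the stochastic stability analysis.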

3.
Stochastic stability of networked control systems
马卫国  邵诚 《自动化学报》2007,33(8):878-882
This paper studies the stochastic stability of networked control systems with random network-induced delays and packet dropouts. A two-state Markov chain describes the random packet-dropout process during network transmission. Using the theory of Markov jump linear systems, the networked control system is modeled as a Markov jump linear system with two operating modes, and a sufficient condition for stochastic stability under state feedback control is given in the form of linear matrix inequalities. Finally, a simulation example verifies the effectiveness of the method.
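For a discrete-time Markov jump linear system of the kind this entry builds, mean-square stability can be checked numerically via the classical spectral-radius test on the second-moment operator. This is not the paper's LMI condition, just the standard equivalent test; the mode matrices and transition matrix in the example are illustrative.

```python
import numpy as np

def ms_stable(A_modes, P):
    """Mean-square stability test for x_{k+1} = A_{r_k} x_k, where r_k is a
    Markov chain with row-stochastic transition matrix P. The second moments
    Q_i(k) = E[x_k x_k^T 1{r_k = i}] obey a linear recursion; the system is
    MS-stable iff the recursion's spectral radius is < 1."""
    n, N = A_modes[0].shape[0], len(A_modes)
    Lam = np.zeros((N * n * n, N * n * n))
    for i in range(N):
        for j in range(N):
            # vec(Q_i(k+1)) = sum_j P[j, i] (A_j kron A_j) vec(Q_j(k))
            Lam[i*n*n:(i+1)*n*n, j*n*n:(j+1)*n*n] = \
                P[j, i] * np.kron(A_modes[j], A_modes[j])
    return np.max(np.abs(np.linalg.eigvals(Lam))) < 1.0
```

For a two-mode system (e.g. "packet received" vs. "packet lost"), this reduces to checking the eigenvalues of a small block matrix.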

4.
In this paper, the problems of stochastic stability and stabilization for a class of uncertain time-delay systems with Markovian jump parameters are investigated. The jumping parameters are modelled as a continuous-time, discrete-state Markov process. The parametric uncertainties are assumed to be real, time-varying, and norm-bounded, and appear in the state, input, and delayed-state matrices. The time-delay factor is constant and unknown, with a known bound. Complete results for both delay-independent and delay-dependent stochastic stability criteria for the nominal and uncertain time-delay jumping systems are developed. The control objective is to design a state feedback controller such that stochastic stability and a prescribed ?-performance are guaranteed. We establish that the control problem for time-delay Markovian jump systems, with and without uncertain parameters, can essentially be solved in terms of the solutions of a finite set of coupled algebraic Riccati inequalities or linear matrix inequalities. An extension of the developed results to the case of uncertain jumping rates is also provided. Copyright © 2003 John Wiley & Sons, Ltd.

5.
Jump Markov linear systems are linear systems whose parameters evolve with time according to a finite-state Markov chain. Given a set of observations, our aim is to estimate the states of the finite-state Markov chain and the continuous (in space) states of the linear system. The computational cost of computing conditional mean or maximum a posteriori (MAP) state estimates of the Markov chain or of the state of the jump Markov linear system grows exponentially in the number of observations. We present three globally convergent algorithms based on stochastic sampling methods for state estimation of jump Markov linear systems. The cost per iteration is linear in the data length. The first proposed algorithm is a data augmentation (DA) scheme that yields conditional mean state estimates. The second proposed scheme is a stochastic annealing (SA) version of DA that computes the joint MAP sequence estimate of the finite and continuous states. Finally, a Metropolis-Hastings DA scheme based on SA is designed to yield the MAP estimate of the finite-state Markov chain. Convergence results for the three stochastic algorithms are obtained. Computer simulations are carried out to evaluate the performance of the proposed algorithms. The problem of estimating a sparse signal from a set of noisy data from a neutron sensor and the problem of narrow-band interference suppression in spread-spectrum code-division multiple-access (CDMA) systems are considered.
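The estimators in this entry operate on data generated by exactly this kind of model. A minimal sketch of the generative side (scalar case, hypothetical parameter values, Gaussian noises with standard deviations sv and sw) looks like this:

```python
import numpy as np

def simulate_jmls(A, C, P, sv, sw, T, rng):
    """Simulate a jump Markov linear system: z_k follows a finite-state
    Markov chain with transition matrix P, and
        x_{k+1} = A[z_k] x_k + v_k,    y_k = C[z_k] x_k + w_k,
    with Gaussian noises of std sv and sw."""
    n = A[0].shape[0]
    z = np.zeros(T, dtype=int)
    x = np.zeros((T, n))
    x[0] = rng.standard_normal(n)         # random initial state
    y = np.zeros(T)
    for k in range(T):
        y[k] = C[z[k]] @ x[k] + sw * rng.standard_normal()
        if k + 1 < T:
            z[k + 1] = rng.choice(len(A), p=P[z[k]])
            x[k + 1] = A[z[k]] @ x[k] + sv * rng.standard_normal(n)
    return z, x, y
```

The DA/SA samplers described above would take only y as input and draw samples of (z, x) from their posterior.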

6.
In this note we study the problem of state estimation for a class of sampled-measurement stochastic hybrid systems, where the continuous state x satisfies a linear stochastic differential equation, and noisy measurements y are taken at assigned discrete-time instants. The parameters of both the state and measurement equations depend on the discrete state q of a continuous-time finite Markov chain. Even in the fault detection setting we consider (at most one transition for q is admissible), the switch may occur between two observations, so the optimal estimates cannot be expressed in parametric form and time integrations are unavoidable; hence the known estimation techniques cannot be applied. We derive and implement an algorithm for estimating the states x and q and the discrete-state switching time that is convenient for both recursive update and the eventual numerical quadrature. Numerical simulations are presented.

7.
《Performance Evaluation》1999,35(3-4):109-129
Transient analysis of non-Markovian stochastic Petri nets is a theoretically interesting and practically important problem. In this paper, we first present a method to compute bounds and an approximation on the average state sojourn times for a subclass of deterministic and stochastic Petri nets (DSPNs) where there is a single persistent deterministic transition that can become enabled only in a special state. Then, we extend this class by allowing the transition to become enabled in any state, as long as the time between successive enablings of the deterministic transition is independent of this state, and develop a new approximate transient analysis approach. In addition to renewal theory, we only make use of discrete and continuous Markov chain concepts. As an application, we use the model of a finite-capacity queue with a server subject to breakdowns, and assess the quality of our approximations.
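This entry tackles the hard non-Markovian case; for the purely Markovian building block it relies on, transient state probabilities of a continuous-time Markov chain can be computed by standard uniformization. The sketch below is that textbook method, not the paper's DSPN approximation; the generator in the example is illustrative.

```python
import numpy as np
from math import exp

def transient_probs(Q, p0, t, eps=1e-10):
    """Transient distribution of a CTMC with generator Q at time t via
    uniformization: p(t) = sum_k e^{-Lt} (Lt)^k / k! * p0 U^k, with
    uniformization rate L >= max_i |Q_ii| and DTMC kernel U = I + Q/L.
    (For very large L*t the initial weight underflows and a scaled
    recursion would be needed.)"""
    L = max(-Q.diagonal())
    U = np.eye(Q.shape[0]) + Q / L
    term = np.asarray(p0, dtype=float)    # running p0 @ U^k
    w = exp(-L * t)                       # Poisson(Lt) weight at k = 0
    p, acc, k = w * term, w, 0
    while acc < 1.0 - eps:                # stop once Poisson mass is covered
        k += 1
        w *= L * t / k
        term = term @ U
        p += w * term
        acc += w
    return p
```

For a breakdown/repair model such as the paper's application, Q would encode queue transitions plus server failure and repair rates.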

8.
This paper discusses the state estimation and optimal control problem for a class of partially-observable stochastic hybrid systems (POSHS). A POSHS has interacting continuous and discrete dynamics with uncertainties. The continuous dynamics are given by a Markov-jump linear system, and the discrete dynamics are defined by a Markov chain whose transition probabilities depend on the continuous state via guard conditions. The only information available to the controller is noisy measurements of the continuous state. To solve the optimal control problem, a separable control scheme is applied: the controller estimates the continuous and discrete states of the POSHS using noisy measurements and computes the optimal control input from the state estimates. Since computing both optimal state estimates and optimal control inputs is intractable, this paper proposes computationally efficient algorithms to solve this problem numerically. The proposed hybrid estimation algorithm is able to handle state-dependent Markov transitions and computes Gaussian-mixture distributions as the state estimates. With the computed state estimates, a reinforcement learning algorithm defined on a function space is proposed. This approach is based on Monte Carlo sampling and integration on a function space containing all the probability distributions of the hybrid state estimates. Finally, the proposed algorithm is tested via numerical simulations.

9.
李顺祥  田彦涛 《控制工程》2004,11(4):325-328
Noting that both the discrete-state dynamics of hybrid systems and the states of a Markov chain are discrete, a class of hybrid systems whose discrete-state dynamics form a Markov chain is proposed. Compared with traditional hybrid systems, this class captures the randomness of the discrete dynamics and can describe behavior that changes under random factors such as external environmental constraints and internal unexpected events. Based on the stability definitions for dynamical systems and on stochastic process theory, a definition of stochastic stability for Markov switched linear systems is given, the stochastic stability of such systems is analyzed, and necessary and sufficient conditions for stochastic stability are provided.

10.
For the general dynamic output feedback control problem in networked control systems, a discrete-time Markov jump system model is established using delay quantization and plant-vector augmentation, and a design algorithm for stabilizing controllers is given, together with simulation results on an inverted pendulum. Because the delay-quantization method is used, the resulting model and design method also apply to the dynamic output feedback control problem with random delays having a time-invariant distribution.

11.
The stochastic model considered is a linear jump diffusion process X for which the coefficients and the jump processes depend on a Markov chain Z with finite state space. First, we study the optimal filtering and control problem for these systems with non-Gaussian initial conditions, given noisy observations of the state X and perfect measurements of Z. We derive a new sufficient condition which ensures the existence and uniqueness of the solution of the nonlinear stochastic differential equations satisfied by the output of the filter. We study a quadratic control problem and show that the separation principle holds. Next, we investigate an adaptive control problem for a state process X defined by a linear diffusion whose coefficients depend on a Markov chain, the processes X and Z being observed in independent white noises. Suboptimal estimates for the processes X and Z and an approximate control law are investigated for a large class of probability distributions of the initial state. Asymptotic properties of these filters and of this control law are obtained, and upper bounds for the corresponding errors are given.

12.
We consider the problem of transmitting packets over a randomly varying point-to-point channel with the objective of minimizing the expected power consumption subject to a constraint on the average packet delay. By casting it as a constrained Markov decision process in discrete time with time-averaged costs, we prove structural results about the dependence of the optimal policy on buffer occupancy, the number of packet arrivals in the previous slot, and the channel fading state, for both i.i.d. and Markov arrivals and channel fading. The techniques we use to establish these results (convexity, stochastic dominance, decreasing differences) are standard for the purpose. Our main contribution, however, is the passage to the average-cost case, a notoriously difficult problem for which rather limited results are available. The proof techniques used here are likely to have utility in other stochastic control problems well beyond the immediate application considered here.

13.
In this paper, we introduce a Hidden Markov Model (HMM) for studying an optimal investment problem of an insurer when model uncertainty is present. More specifically, the financial price and insurance risk processes are modulated by a continuous-time, finite-state, hidden Markov chain. The states of the chain represent different modes of the model. The HMM approach is viewed as a 'dynamic' version of the Bayesian approach to model uncertainty. The optimal investment problem is formulated as a stochastic optimal control problem with partial observations. The innovations approach in filtering theory is then used to transform the problem into one with complete observations. New robust filters of the chain and estimates of key parameters are derived. We discuss the optimal investment problem using the Hamilton–Jacobi–Bellman (HJB) dynamic programming approach and derive a closed-form solution in the case of an exponential utility and zero interest rate. Copyright © 2011 John Wiley & Sons, Ltd.

14.
In this paper, the state diagrams and steady-state balance equations for two kinds of open queuing network models are presented. The first model comprises a network of single queues with single servers, while the second model comprises multiple servers for single queues. State diagrams are drawn for (2, 3) queuing networks with (i) single servers and (ii) multiple servers. Steady-state balance equations are derived from the state diagrams. The paper provides a method to solve open queuing networks by analyzing the stochastic process involved in the transition of states in a continuous-time Markov chain which represents the state diagram of a queuing system.
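Where the balance equations admit a product-form solution, the classical open Jackson network result gives the steady-state quantities directly, without writing out the state diagram. The sketch below covers only the single-server case (the entry's first model); the arrival rates, routing matrix, and service rates in the example are placeholders.

```python
import numpy as np

def jackson(gamma, R, mu):
    """Open Jackson network: gamma[i] = external arrival rate at node i,
    R[i][j] = routing probability from node i to node j, mu[i] = service
    rate at node i (single server). Solves the traffic equations
    lambda = gamma + R^T lambda, then applies the product-form M/M/1
    formulas node by node."""
    gamma = np.asarray(gamma, dtype=float)
    mu = np.asarray(mu, dtype=float)
    R = np.asarray(R, dtype=float)
    lam = np.linalg.solve(np.eye(len(mu)) - R.T, gamma)
    rho = lam / mu                        # per-node utilization
    assert np.all(rho < 1.0), "every node must be stable"
    L = rho / (1.0 - rho)                 # mean number in each M/M/1 node
    return lam, rho, L
```

For a two-node tandem (all external arrivals at node 1, which feeds node 2), the traffic equations give equal throughput at both nodes.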

15.
In this paper, the controller synthesis problem for fault tolerant control systems (FTCS) with stochastic stability and H2 performance is studied. System faults of random nature are modelled by a Markov chain. Because the real system fault modes are not directly accessible in the context of FTCS, the controller is reconfigured based on the output of a fault detection and identification (FDI) process, which is modelled by another Markov chain. Then state feedback and output feedback control are developed to achieve the mean square stability (MSS) and the H2 performance for both continuous-time and discrete-time systems with model uncertainties. Copyright © 2006 John Wiley & Sons, Ltd.

16.
For the finite-time control problem of networked control systems with two-sided packet dropouts and two-sided delays, packet dropouts are converted into delays by introducing a time offset, yielding a multi-delay system, and the transmission delays are converted into system state delays. A formula for the probability of consecutive packet dropouts under a known dropout rate is derived from the law of total probability, and a Markov chain characterizes the variation of the network delays. An improved finite-time delay-dependent stability criterion is given in the form of linear matrix inequalities, and numerical simulations verify the effectiveness of the proposed method.

17.
The problem of optimal routing of messages into two parallel queues is considered in the framework of discrete-time Markov decision processes with countable state space and unbounded costs. We assume that the controller has delayed state information, the delay being equal to one time slot. Both discounted and average optimal policies are shown to be monotone and of threshold type.
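A simplified version of this routing problem can be solved by plain value iteration. The sketch below ignores the one-slot information delay that the entry analyzes, uses full state information, a discounted criterion, and a truncated buffer; all parameters (arrival probability, service probabilities, discount, truncation level) are assumptions for illustration. For symmetric queues the computed policy recovers the join-the-shorter-queue rule, consistent with the monotone threshold structure.

```python
import numpy as np

def expected_next(V, m1, m2, mu):
    """Expected value after one service slot: each nonempty queue
    completes a job with probability mu, independently."""
    ev = 0.0
    for d1 in (0, 1):
        for d2 in (0, 1):
            p = (mu if d1 else 1.0 - mu) * (mu if d2 else 1.0 - mu)
            ev += p * V[max(m1 - d1, 0), max(m2 - d2, 0)]
    return ev

def solve(N=15, lam=0.5, mu=0.45, beta=0.95, iters=300):
    """Value iteration: each slot incurs holding cost n1 + n2; with prob
    lam a message arrives and must be routed to queue 1 or queue 2;
    buffers are truncated at N (a modeling assumption)."""
    V = np.zeros((N + 1, N + 1))
    for _ in range(iters):
        Vn = np.empty_like(V)
        for i in range(N + 1):
            for j in range(N + 1):
                route = min(expected_next(V, min(i + 1, N), j, mu),
                            expected_next(V, i, min(j + 1, N), mu))
                stay = expected_next(V, i, j, mu)
                Vn[i, j] = (i + j) + beta * (lam * route + (1.0 - lam) * stay)
        V = Vn
    # policy on arrival: 0 = send to queue 1, 1 = send to queue 2
    policy = np.array([[int(expected_next(V, i, min(j + 1, N), mu)
                            < expected_next(V, min(i + 1, N), j, mu))
                        for j in range(N + 1)]
                       for i in range(N + 1)])
    return V, policy
```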

18.
This paper studies the control problem for discrete time-delay linear systems with Markovian jump parameters. The system under consideration is subject to both time-varying norm-bounded parameter uncertainty and an unknown time delay in the state, and to Markovian jump parameters in all system matrices. We address the problem of robust state feedback control in which both robust stochastic stability and a prescribed H∞ performance are required to be achieved irrespective of the uncertainty and the time delay. It is shown that the above problem can be solved if a set of coupled linear matrix inequalities has a solution.

19.
The problem of state estimation and system structure detection for discrete stochastic dynamical systems with parameters which may switch among a finite set of values is considered. The switchings are modelled by a Markov chain with known transition probabilities. A brief survey and a unified treatment of the existing suboptimal algorithms are provided. The optimal algorithms require exponentially increasing memory and computations with time. Simulation results comparing the various suboptimal algorithms are presented.

20.
Closed-loop networked control systems based on Markov delay characteristics
To address the random transmission delays inherent in control networks, a novel control scheme is proposed and a mathematical model of networked control systems with multi-step random transmission delays is established. Based on this model and Markov chain theory, a stochastic optimal control law satisfying a given performance index is obtained, and a method for computing the corresponding Markov chain transition matrix is given. Finally, experimental results verify the correctness and effectiveness of the proposed theory.
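This entry includes a method for obtaining the Markov chain's transition matrix. One generic way to do this (not necessarily the paper's method) is the maximum-likelihood estimate from an observed delay-state sequence, which is simply the matrix of normalized transition counts:

```python
import numpy as np

def estimate_P(seq, n_states):
    """Maximum-likelihood estimate of a Markov chain transition matrix
    from one observed state sequence: normalized transition counts."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(seq[:-1], seq[1:]):
        C[a, b] += 1.0
    rows = C.sum(axis=1, keepdims=True)
    rows[rows == 0.0] = 1.0               # leave unvisited rows all-zero
    return C / rows
```

In a networked control setting, seq would be the sequence of measured (quantized) transmission delays.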
