A total of 20 similar documents were retrieved; search took 703 ms
1.
2.
For discrete-time Markov jump systems whose state is not fully measurable, a class of output-feedback robust model predictive control problems is studied. The system considered is quasi-linear parameter-varying: the time-varying parameter is assumed known at the current instant and unknown at future instants. Accounting for polytopic uncertainty and bounded noise, the min-max robust predictive control problem with an infinite-horizon performance index is converted, via linear matrix inequality (LMI) techniques and a change of variables, into a convex optimization problem with LMI constraints, yielding an output-feedback control law. The notion of quadratic boundedness is introduced to guarantee stochastic stability of the closed-loop system while input and output constraints are satisfied. A numerical example verifies the effectiveness of the method.
3.
An optimal control problem is considered for a nonlinear stochastic system with an interrupted observation mechanism that is characterized in terms of a jump Markov process taking on the values 0 or 1. The state of the system is described by a diffusion process, but the observation has components modulated by the jump process. The admissible control laws are smooth functions of the observation. Using the calculus of variations, necessary conditions on optimal controls are derived. These conditions amount to solving a set of four coupled nonlinear partial differential equations. A numerical procedure for solving these equations is suggested, and an example is worked out numerically.
4.
This paper considers the issue of solving the inverse problem for a Markov model, using an actuarial scheme as a corresponding example.
5.
For the optimal production control of an unreliable stochastic manufacturing system with a diffusion term, a numerical method is used to solve the mode-coupled nonlinear partial differential HJB equations satisfied by the optimal control. A Markov chain is first constructed to approximate the evolution of the system state; based on the principle of local consistency, the continuous-time stochastic control problem is converted into a discrete-time Markov decision process. Numerical iteration and policy iteration algorithms are then used to carry out the numerical solution of the optimal control. Simulation results at the end of the paper verify the correctness and effectiveness of the method.
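The Markov-chain approximation described in this abstract reduces a continuous-time control problem to a finite Markov decision process. A minimal sketch of the policy-iteration step on such a discretized model follows; the two-mode production chain, costs, and probabilities below are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical discretized production model (all numbers are illustrative):
# two modes (0 = machine up, 1 = down), two actions (0 = idle, 1 = produce).
# P[a] is the mode transition matrix under action a; c[s, a] is the one-step
# cost; gamma is the discount factor.
P = {
    0: np.array([[0.95, 0.05],
                 [0.40, 0.60]]),
    1: np.array([[0.90, 0.10],
                 [0.40, 0.60]]),
}
c = np.array([[1.0, 0.2],    # costs in mode 0
              [2.0, 2.5]])   # costs in mode 1
gamma = 0.9

def policy_iteration(P, c, gamma, n_states=2, n_actions=2):
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) V = c_pi exactly.
        P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
        c_pi = np.array([c[s, policy[s]] for s in range(n_states)])
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, c_pi)
        # Policy improvement: greedy one-step lookahead.
        Q = np.array([[c[s, a] + gamma * P[a][s] @ V for a in range(n_actions)]
                      for s in range(n_states)])
        new_policy = Q.argmin(axis=1)
        if np.array_equal(new_policy, policy):
            return policy, V
        policy = new_policy

policy, V = policy_iteration(P, c, gamma)
```

Policy evaluation here solves the linear system exactly; on larger discretizations it would itself be done iteratively, which is where the paper's numerical-iteration step comes in.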
6.
This paper presents a numerical method to calculate the value function for a general discounted impulse control problem for piecewise deterministic Markov processes. Our approach is based on a quantization technique for the underlying Markov chain defined by the post jump location and inter-arrival time. Convergence results are obtained and more importantly we are able to give a convergence rate of the algorithm. The paper is illustrated by a numerical example.
7.
8.
The optimal control problem is considered for a system given by a Markov chain with integral constraints. It is shown that the solution to the optimal control problem on the set of all predictable controls satisfies the Markov property. This optimal Markov control can be obtained as a solution of the corresponding dual problem (when the regularity condition holds) or, otherwise, by means of the proposed regularization method. The problems arising from system nonregularity, along with a way to cope with them, are illustrated by an example of an optimal control problem for a single-channel queueing system.
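The duality view mentioned in this abstract can be illustrated with the occupation-measure linear program for a discounted constrained Markov chain, whose dual yields the Markov control. The two-state queue, costs, and budget below are hypothetical stand-ins; the paper's actual queueing example is not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

# Toy constrained queue (all numbers invented): states 0 = empty, 1 = busy;
# actions 0 = slow service, 1 = fast service.
gamma = 0.9
P = np.zeros((2, 2, 2))           # P[s, a, s'] transition probabilities
P[0, :, :] = [0.5, 0.5]           # arrivals are action-independent when empty
P[1, 0] = [0.2, 0.8]              # slow service: queue often stays busy
P[1, 1] = [0.7, 0.3]              # fast service: queue usually clears
c = np.array([[0.0, 0.0], [1.0, 1.0]])   # holding cost while busy
d = np.array([[0.0, 1.0], [0.0, 1.0]])   # "fast service" effort (constrained)
alpha = np.array([1.0, 0.0])      # initial distribution: start empty
D = 1.0                           # discounted effort budget

# Variables y[s, a] (flattened): minimise c.y subject to the flow-balance
# equalities  sum_a y(s', a) - gamma * sum_{s,a} P(s'|s,a) y(s,a) = alpha(s')
# and the integral constraint d.y <= D.
A_eq = np.zeros((2, 4))
for s2 in range(2):
    for s in range(2):
        for a in range(2):
            A_eq[s2, 2 * s + a] = (s == s2) - gamma * P[s, a, s2]
res = linprog(c.ravel(), A_ub=[d.ravel()], b_ub=[D],
              A_eq=A_eq, b_eq=alpha, bounds=(0, None))
y = res.x.reshape(2, 2)           # optimal occupation measure
```

The dual variables of the flow-balance equalities play the role of the value function; a randomized Markov policy can be read off from the normalized rows of `y`.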
9.
For networked control systems driven by stochastic events, the design of optimal controllers over finite and infinite horizons is studied. First, according to the actuator medium-access mechanism, the networked control system is modeled as a Markov jump system with multiple states. Then, based on dynamic programming and Markov jump linear system theory, an optimal control sequence satisfying a quadratic performance index is designed; by solving for the stabilizing solution of the coupled Riccati equations, a method for computing the optimal control law is given that renders the networked control system mean-square exponentially stable. Finally, simulation experiments demonstrate the effectiveness of the proposed method.
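The stabilizing solution of coupled Riccati equations mentioned in this abstract is commonly computed by fixed-point iteration. A rough sketch for a two-mode Markov jump linear system follows; all matrices are invented for illustration and assume mean-square stabilizability.

```python
import numpy as np

# Two-mode Markov jump linear system (illustrative data only).
A = [np.array([[1.1, 0.3], [0.0, 0.9]]),
     np.array([[0.8, 0.1], [0.2, 1.0]])]
B = [np.array([[0.0], [1.0]]),
     np.array([[1.0], [0.5]])]
T = np.array([[0.9, 0.1], [0.3, 0.7]])   # mode transition probabilities
Q, R = np.eye(2), np.array([[1.0]])

P = [np.eye(2), np.eye(2)]
for _ in range(5000):
    # The operator E_i(P) = sum_j T[i, j] P_j couples the modes.
    E = [sum(T[i, j] * P[j] for j in range(2)) for i in range(2)]
    P_new = []
    for i in range(2):
        G = R + B[i].T @ E[i] @ B[i]
        K = np.linalg.solve(G, B[i].T @ E[i] @ A[i])   # mode-i feedback gain
        P_new.append(Q + A[i].T @ E[i] @ (A[i] - B[i] @ K))
    if max(np.abs(P_new[i] - P[i]).max() for i in range(2)) < 1e-12:
        P = P_new
        break
    P = P_new
```

At convergence, each `P[i]` solves its coupled Riccati equation and the mode-dependent gains `K` give the optimal jump-linear state feedback.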
10.
Li Li, Valery A. Ugrinovskii, Robert Orsi 《Automatica》2007,43(11):1932-1944
This paper addresses the problem of decentralized robust stabilization and control for a class of uncertain Markov jump parameter systems. Control is via output feedback and knowledge of the discrete Markov state. It is shown that the existence of a solution to a collection of mode-dependent coupled algebraic Riccati equations and inequalities, which depend on certain additional parameters, is both necessary and sufficient for the existence of a robust decentralized switching controller. A guaranteed upper bound on robust performance is also given. To obtain a controller which satisfies this bound, an optimization problem involving rank constrained linear matrix inequalities is introduced, and a numerical approach for solving this problem is presented. To demonstrate the efficacy of the proposed approach, an example stabilization problem for a power system comprising three generators and one on-load tap changing transformer is considered.
11.
This article deals with the mean square consensus problem for second-order discrete-time multi-agent systems. Both cases of systems with and without time delays in Markov switching topologies are considered. With the introduced control protocols, necessary and sufficient conditions for mean square consensus of second-order multi-agent systems are derived. Under the given control protocols in Markov switching topologies, the second-order multi-agent systems can reach mean square consensus if and only if each union of the graphs corresponding to all the nodes in closed sets has a spanning tree. Finally, a simulation example is provided to illustrate the effectiveness of our theoretical results.
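The spanning-tree condition in this abstract is easy to check computationally: a directed graph contains a spanning tree iff some root node can reach every other node. A small sketch with made-up switching topologies:

```python
# Reachability test for the spanning-tree condition.  The two switching
# topologies below are invented for illustration.

def has_spanning_tree(edges, n):
    """True if the directed graph on nodes 0..n-1 contains a spanning tree,
    i.e. some root reaches every node along directed paths."""
    adj = {i: [] for i in range(n)}
    for u, v in edges:
        adj[u].append(v)
    def reaches_all(root):
        seen, stack = {root}, [root]
        while stack:                      # depth-first reachability search
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return len(seen) == n
    return any(reaches_all(r) for r in range(n))

g1 = [(0, 1)]                        # topology 1: no spanning tree on its own
g2 = [(1, 2)]                        # topology 2: no spanning tree on its own
union = sorted(set(g1) | set(g2))    # union 0 -> 1 -> 2 has one, rooted at 0
```

This mirrors the paper's point that no individual topology needs a spanning tree; only the union over a closed set of Markov states does.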
12.
The guaranteed cost control problem for discrete-time singular Markov jump systems with parameter uncertainties is discussed. The weighting matrix in the quadratic cost function is indefinite. For the cases of full and partial knowledge of the transition probabilities, state feedback controllers are designed via the linear matrix inequality method which guarantee that the closed-loop discrete-time singular Markov jump systems are regular, causal and robustly stochastically stable, and that the cost value has a zero lower bound and a finite upper bound. A numerical example illustrates the effectiveness of the method.
13.
Amir Hossein Abolmasoumi, Hamid Reza Momeni 《International Journal of Control, Automation and Systems》2011,9(4):768-776
This paper investigates the problem of robust observer-based stabilization for a delayed Markovian jump system. The sources of randomness in the system mode and the delay mode are assumed to be different. To this end, two different Markov processes are considered for modeling the randomness of the system matrices and the state delay. A two mode-dependent Lyapunov-Krasovskii functional is used to design a robust observer-based feedback control rule for the stochastic stability of the closed-loop system. The rule should also satisfy the condition of disturbance reduction at a prescribed level in the presence of parametric uncertainties. The procedure is implemented by solving linear matrix inequalities (LMIs). The results are tested in a simulation example and the effectiveness of the proposed design method is verified.
14.
M. Scott 《Automatica》1986,22(6):711-715
A unified approach to solving three common optimal control problems is presented for linear systems under general constraints. The problems are: (1) the time optimal control problem; (2) the fuel optimal control problem in fixed time; (3) the time optimal control problem with a fuel constraint. A special purpose linear programming algorithm is used. State variable constraints are efficiently handled by a cutting plane algorithm. An example of a sixth order system with two inputs and two state variable constraints illustrates the method as implemented on a personal computer.
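The fixed-time fuel-optimal problem (2) can be posed directly as a linear program by splitting each input into positive and negative parts. A toy sketch for a discrete-time double integrator (not the paper's sixth-order example), using a generic LP solver rather than the paper's special-purpose algorithm:

```python
import numpy as np
from scipy.optimize import linprog

# Minimum-fuel transfer for a discrete-time double integrator (toy data):
# minimise sum |u_k| subject to x_N = x_f, by writing u_k = p_k - q_k
# with p_k, q_k >= 0 and solving a single LP.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
N, umax = 4, 1.0
x_f = np.array([1.0, 0.0])          # reach position 1 at rest, from the origin

# x_N = sum_k A^(N-1-k) B u_k  (with x_0 = 0); M collects the input columns.
M = np.hstack([np.linalg.matrix_power(A, N - 1 - k) @ B for k in range(N)])

# LP in z = [p; q]:  minimise 1'p + 1'q  subject to  M (p - q) = x_f.
cost = np.ones(2 * N)
res = linprog(cost, A_eq=np.hstack([M, -M]), b_eq=x_f,
              bounds=[(0.0, umax)] * (2 * N))
u = res.x[:N] - res.x[N:]           # recover the signed control sequence
fuel = np.abs(u).sum()
```

At an LP optimum, `p_k` and `q_k` are never both positive, so the objective equals the total fuel; input bounds and state constraints enter as additional LP rows, much as in the paper's formulation.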
15.
In this paper, the application of the dynamic programming approach to constrained stochastic control problems with expected value constraints is demonstrated. Specifically, two such problems are analyzed using this approach. The problems analyzed are the problem of minimizing a discounted cost infinite horizon expectation objective subject to an identically structured constraint, and the problem of minimizing a discounted cost infinite horizon minimax objective subject to a discounted expectation constraint. Using the dynamic programming approach, optimality equations, which are the chief contribution of this paper, are obtained for these problems. In particular, the dynamic programming operators for problems with expectation constraints differ significantly from those of standard dynamic programming and problems with worst-case constraints. For the discounted cost infinite horizon cases, existence and uniqueness of solutions to the dynamic programming equations are explicitly shown by using the Banach fixed point theorem to show that the corresponding dynamic programming operators are contractions. The theory developed is illustrated by numerically solving the constrained stochastic control dynamic programming equations derived for simple example problems. The example problems are based on a two-state Markov model that represents an error prone system that is to be maintained.
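The contraction argument in this abstract can be reproduced numerically on a small model. Below is a hypothetical two-state "error-prone machine" (unconstrained, for simplicity; the paper's expectation constraints are omitted): the discounted Bellman operator is a gamma-contraction, so value iteration converges geometrically to its unique fixed point.

```python
# Hypothetical maintenance model: state 0 = working, 1 = failed;
# action 0 = run, action 1 = repair.
gamma = 0.95
P = {  # P[a][s] = distribution over next states under action a in state s
    0: [[0.9, 0.1], [0.0, 1.0]],   # running: may fail; a failed machine stays failed
    1: [[1.0, 0.0], [0.8, 0.2]],   # repairing: a failed machine is usually fixed
}
cost = {0: [0.0, 5.0], 1: [2.0, 3.0]}   # repairing beats running while failed

def bellman(V):
    """One application of the discounted Bellman (minimum) operator."""
    return [min(cost[a][s] + gamma * sum(p * V[t] for t, p in enumerate(P[a][s]))
                for a in (0, 1))
            for s in (0, 1)]

# Value iteration: successive approximation of the unique fixed point.
V = [0.0, 0.0]
for _ in range(1000):
    V_new = bellman(V)
    if max(abs(x - y) for x, y in zip(V_new, V)) < 1e-10:
        break
    V = V_new
```

By the Banach fixed point theorem the iterates contract at rate gamma per step, which is the same mechanism the paper uses for its constrained operators.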
16.
17.
This paper treats the problem of optimal control of finite-state Markov processes observed in noise. Two types of noisy observations are considered: additive white Gaussian noise and jump-type observations. Sufficient conditions for the optimality of a control law are obtained similar to the stochastic Hamilton-Jacobi equation for perfectly observed Markov processes. An illustrative example concludes the paper.
18.
19.
In this paper, the stabilization problem is considered for the class of wireless networked control systems (WNCS). An indicator is introduced into the WNCS model, and the packet-drop sequences in the indicator are represented as states of a Markov chain. A new discrete Markov switching system model, integrating the 802.11 protocol and a new scheduling approach for wireless networks with control systems, is constructed. The variable controller can be obtained easily by solving a linear matrix inequality (LMI) using the Matlab toolbox. Both known and unknown dropout probabilities are considered. Finally, a simulation is given to show the feasibility of the proposed method.