Similar Documents
19 similar documents found.
1.
Linear-Quadratic Stochastic Optimal Control Problems with Random Jump Disturbances
Wu Zhen, Wang Xiangrong. Acta Automatica Sinica, 2003, 29(6): 821-826
An existence and uniqueness result is given for the solutions of a class of forward-backward stochastic differential equations driven jointly by a Brownian motion and a Poisson process. This result is applied to the linear-quadratic stochastic optimal control problem with random jump disturbances, an explicit form of the optimal control is obtained, and the optimal control is shown to be unique. A class of generalized Riccati equation systems is then introduced and studied; its solvability is discussed, and the solution of these equations yields the optimal linear feedback for the linear-quadratic stochastic optimal control problem with random jump disturbances.

2.
Forward-Backward Stochastic Differential Equations and a Class of Linear-Quadratic Stochastic Optimal Control Problems
The existence and uniqueness of solutions to a class of forward-backward stochastic differential equations and the corresponding class of linear-quadratic stochastic optimal control problems are discussed. Using the monotonicity method, an existence and uniqueness theorem is proved for a special class of forward-backward stochastic differential equations. This result is then used to study a stochastic control problem with a generalized optimality criterion for a linear stochastic control system coupled with a backward stochastic differential equation. An explicit expression of the unique optimal control, represented by the solution of the forward-backward stochastic differential equation, is obtained, together with the exact linear feedback and its corresponding Riccati equation.
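As orientation for the generalized Riccati equations and linear feedback referenced in entries 1, 2, and 7, here is a minimal sketch of the classical jump-free LQ structure that those papers extend. Assuming dynamics $dx_t = (Ax_t + Bu_t)\,dt + \sigma\,dW_t$ and cost $\mathbb{E}\big[\int_0^T (x_t^\top Q x_t + u_t^\top R u_t)\,dt + x_T^\top G x_T\big]$, the value is determined by the Riccati differential equation
\[
\dot P(t) + A^\top P(t) + P(t)A - P(t)B R^{-1} B^\top P(t) + Q = 0, \qquad P(T) = G,
\]
and the optimal control is the linear state feedback
\[
u^*(t) = -R^{-1} B^\top P(t)\, x(t).
\]
The cited papers replace this single equation with systems of generalized (coupled) Riccati equations to handle Poisson jumps and forward-backward couplings.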

3.
Han Chunyan, Zhang Huanshui. Acta Automatica Sinica, 2009, 35(11): 1446-1451
The linear minimum-variance state estimation problem is studied for discrete-time systems with Markov jump delays in the observations. First, by introducing an indicator function of the jump delay, the observation equation with jump delays is transformed into a constant-delay system with multiplicative noise. Then, using state augmentation, the constant-delay system is converted into a delay-free Markov jump system. Finally, based on the resulting delay-free system and standard geometric arguments in Hilbert space, a linear optimal state estimator is designed, an expression for the filter is derived in terms of a Riccati equation, and the asymptotic convergence of the filter is proved.
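The state-augmentation step described above can be illustrated numerically. The sketch below covers only the simplified case of a single known constant measurement delay d handled by a standard Kalman filter; the paper's setting (random Markov jump delays, multiplicative noise, convergence analysis) is not reproduced here, and all matrices are illustrative placeholders.

```python
import numpy as np

# Sketch: Kalman filtering with a known constant measurement delay d,
# handled by state augmentation. (The cited paper treats random Markov jump
# delays with multiplicative noise; only the augmentation idea is shown.)

def augment(A, C, Q, d):
    """Stack [x_k, x_{k-1}, ..., x_{k-d}] so that the delayed measurement
    y_k = C x_{k-d} + v_k becomes delay-free in the augmented state."""
    n = A.shape[0]
    Aa = np.zeros((n * (d + 1), n * (d + 1)))
    Aa[:n, :n] = A                     # current state propagates through A
    Aa[n:, :-n] = np.eye(n * d)        # shift register stores past states
    Ca = np.zeros((C.shape[0], n * (d + 1)))
    Ca[:, -n:] = C                     # measurement reads the oldest block
    Qa = np.zeros_like(Aa)
    Qa[:n, :n] = Q                     # process noise enters the current state only
    return Aa, Ca, Qa

def kalman_step(x, P, y, Aa, Ca, Qa, R):
    x, P = Aa @ x, Aa @ P @ Aa.T + Qa              # time update
    S = Ca @ P @ Ca.T + R
    K = P @ Ca.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ (y - Ca @ x)                       # measurement update
    P = (np.eye(len(x)) - K @ Ca) @ P
    return x, P

# toy example: scalar system, measurement delayed by 2 samples
A, C, Q, R, d = np.array([[0.9]]), np.array([[1.0]]), np.array([[0.1]]), np.array([[0.2]]), 2
Aa, Ca, Qa = augment(A, C, Q, d)
nz = A.shape[0] * (d + 1)
x, P = np.zeros(nz), np.eye(nz)
x, P = kalman_step(x, P, np.array([0.5]), Aa, Ca, Qa, R)
```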

4.
Maximum Principle for Optimal Control of Nonlinear Singular Systems: The Finite-Dimensional Case
Using the Ekeland variational principle and Fattorini's lemma, this paper treats the optimal control problem for nonlinear singular (descriptor) systems and proves that solutions of the problem satisfy a maximum principle.

5.
Design and Implementation of an Algorithm for Solving the Optimal Control Problem of Polymer Flooding
To maximize the profit of polymer flooding, determining the optimal polymer injection strategy via optimal control is an effective approach. The numerical solution of this optimal control problem involves reservoir simulation, adjoint equations, and nonlinear programming. An object-oriented algorithm design and its implementation details are presented. The polymer flooding model is discretized with a fully implicit difference scheme, the resulting system of nonlinear equations is solved by the Newton-Raphson method, and the adjoint equations are constructed while the forward model is being solved. A three-dimensional polymer injection problem is solved as an example, demonstrating the practicality and effectiveness of the implemented algorithm.
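To illustrate the Newton-Raphson step applied to the nonlinear system produced by a fully implicit discretization, here is a minimal sketch with a finite-difference Jacobian; the residual function is a toy placeholder rather than the polymer flooding model, and a real simulator would assemble the Jacobian analytically and build the adjoint system alongside it.

```python
import numpy as np

def newton_raphson(residual, x0, tol=1e-10, max_iter=50, eps=1e-7):
    """Solve residual(x) = 0 by Newton-Raphson with a finite-difference
    Jacobian (a reservoir simulator would assemble the Jacobian analytically
    from the fully implicit discretization)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        J = np.zeros((len(r), len(x)))        # finite-difference Jacobian
        for j in range(len(x)):
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (residual(x + dx) - r) / eps
        x = x - np.linalg.solve(J, r)          # Newton update
    return x

# toy residual: two coupled nonlinear equations with solution (1, 2)
f = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
print(newton_raphson(f, [1.0, 1.0]))
```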

6.
For the optimal production control problem of unreliable stochastic manufacturing systems with a diffusion term, a numerical method is used to solve the mode-coupled nonlinear partial differential HJB equations satisfied by the optimal control. A Markov chain is first constructed to approximate the state evolution of the production system; based on the local consistency principle, the continuous-time stochastic control problem is converted into a discrete-time Markov decision process. Value iteration and policy iteration algorithms are then used to compute the optimal control numerically. Simulation results verify the correctness and effectiveness of the method.
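A minimal sketch of the value-iteration step on a finite Markov decision process of the kind produced by such a Markov-chain approximation; the transition probabilities and stage costs below are random placeholders rather than quantities derived from the production-system HJB equation, and the policy-iteration variant is omitted.

```python
import numpy as np

# Value iteration on a finite MDP (placeholder for the Markov-chain
# approximation of the continuous-time production-control problem).
rng = np.random.default_rng(0)
n_states, n_actions, beta = 20, 3, 0.95

P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)          # row-stochastic per action
cost = rng.random((n_actions, n_states))   # stage cost c(a, x)

V = np.zeros(n_states)
for _ in range(1000):
    # Bellman backup: V(x) = min_a [ c(a,x) + beta * sum_y P(y|x,a) V(y) ]
    Q = cost + beta * (P @ V)
    V_new = Q.min(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new
policy = Q.argmin(axis=0)                  # greedy policy from the last backup
```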

7.
A class of fully coupled linear-quadratic stochastic control problems with random jumps is studied. An explicit solution of the optimal control is obtained, and the optimal control is shown to be unique. A class of generalized Riccati equations is introduced and their solvability is discussed. Using the solutions of these generalized Riccati equations, a linear state-feedback regulator for the above optimal control problem with random jumps is obtained.

8.
Based on Bellman's stochastic nonlinear dynamic programming, an optimal control method is proposed for discrete-time stochastic systems with conditionally Markov jump structure. Properties of stochastic variable-structure systems are used to simplify the optimal control algorithm, and the posterior probability density function is approximated by a conditional Gaussian function. For a class of linear discrete-time stochastic systems with conditionally Markov jump structure, an approximate optimal control algorithm is given.

9.
Wang Wei, Zhang Huanshui. Acta Automatica Sinica, 2010, 36(4): 597-602
The finite-horizon H-infinity control problem is studied for discrete-time systems with time-varying delays in both the control input and the exogenous input. By defining a binary variable taking values 0 or 1, the single-channel discrete-time system with time-varying delay is first converted into a multi-channel system with constant delays. The problem is then solved by exploiting the duality between H-infinity control of this system and smoothing estimation for the corresponding backward stochastic system. The solution is given in terms of Riccati difference equations of the same dimension as the original system, and necessary and sufficient conditions are given for the existence of causal and strictly causal H-infinity controllers.

10.
The stochastic Nash differential game problem for linear Markov switching systems is studied. Using existing results on stochastic optimal control of linear Markov switching systems, it is first shown that the existence of Nash equilibrium solutions over finite and infinite horizons is equivalent to the solvability of the corresponding differential (algebraic) Riccati equations, and explicit forms of the optimal solutions are given. The differential game results are then applied to analyze the mixed H2/H∞ control problem for linear Markov switching systems. Finally, a numerical example verifies the feasibility of the proposed method.

11.
In this paper, we study a linear-quadratic optimal control problem for mean-field stochastic differential equations driven by a Poisson random martingale measure and a one-dimensional Brownian motion. Firstly, the existence and uniqueness of the optimal control is obtained by the classic convex variation principle. Secondly, by the duality method, the optimality system, also called the stochastic Hamilton system, which turns out to be a linear fully coupled mean-field forward-backward stochastic differential equation with jumps, is derived to characterize the optimal control. Thirdly, applying a decoupling technique, we establish the connection between two Riccati equations and the stochastic Hamilton system and then prove that the optimal control has a state-feedback representation.

12.
In this paper we introduce and solve the partially observed optimal stopping non-linear risk-sensitive stochastic control problem for discrete-time non-linear systems. The presented results are closely related to previous results for the finite-horizon partially observed risk-sensitive stochastic control problem. An information state approach is used and a new (three-way) separation principle is established that leads to a forward dynamic programming equation and a backward dynamic programming inequality equation (both infinite dimensional). A verification theorem is given that establishes the optimal control and optimal stopping time. The risk-neutral optimal stopping stochastic control problem is also discussed.

13.
This paper studies the linear quadratic Gaussian (LQG) problem for a class of discrete-time non-homogeneous Markov jump linear systems, in which the mode transition probability matrix varies randomly with time and this variation is described by a higher-level Markov chain. For the LQG problem of this system, a linear optimal filter is first given to obtain the optimal state estimate. Next, the separation theorem is shown to hold, and the optimal controller is designed using dynamic programming. Finally, numerical simulation results verify the effectiveness of the designed controller.
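For reference, a minimal LQG sketch based on the separation principle for the classical time-invariant case without Markov jumps or a mode-modulating chain (the paper's setting is more general); the system matrices and noise covariances are illustrative placeholders.

```python
import numpy as np

def lqr_gains(A, B, Q, R, G, T):
    """Backward Riccati difference equation; returns feedback gains K_k."""
    P, gains = G, []
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]                      # gains ordered k = 0 .. T-1

def lqg_simulate(A, B, C, Q, R, G, W, V, T, rng):
    gains = lqr_gains(A, B, Q, R, G, T)
    n = A.shape[0]
    x, xhat, P = rng.standard_normal(n), np.zeros(n), np.eye(n)
    for k in range(T):
        u = -gains[k] @ xhat                # certainty-equivalent control
        # true system
        x = A @ x + B @ u + rng.multivariate_normal(np.zeros(n), W)
        y = C @ x + rng.multivariate_normal(np.zeros(C.shape[0]), V)
        # Kalman filter (prediction + update)
        xhat = A @ xhat + B @ u
        P = A @ P @ A.T + W
        S = C @ P @ C.T + V
        L = P @ C.T @ np.linalg.inv(S)
        xhat = xhat + L @ (y - C @ xhat)
        P = (np.eye(n) - L @ C) @ P
    return x, xhat

A = np.array([[1.0, 0.1], [0.0, 0.9]]); B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]]); Qc = np.eye(2); Rc = np.eye(1)
x, xhat = lqg_simulate(A, B, C, Qc, Rc, np.eye(2), 0.01 * np.eye(2),
                       0.1 * np.eye(1), 50, np.random.default_rng(0))
```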

14.
The decentralized linear-quadratic-Gaussian (LQG) control problem for networked control systems (NCSs) with asymmetric information is investigated, where controller 1 shares its historical information with controller 2, but not vice versa. The asymmetry of the information structure leads to coupling between controller 2 and estimator 1, and hence the classical separation principle fails. Under the assumption of a linear control strategy, the coupling between controller 2 and estimator 1 (CCE) is removed, but the estimation gain remains coupled with the control gain. It is noted that the control gain satisfies a backward Riccati equation while the estimation gain obeys a forward equation, which is computationally challenging. Applying the stochastic maximum principle, the solvability of the decentralized LQG control problem is reduced to that of the corresponding forward and backward stochastic difference equations (FBSDEs). Further, necessary and sufficient conditions for the solvability of the optimal control problem are presented in terms of two Riccati equations, one of which is nonsymmetric. Moreover, a novel iterative forward method is proposed to calculate the coupled backward control gain and forward estimation gain.

15.
Optimal control problems for discrete-time linear systems subject to Markovian jumps in the parameters are considered for the case in which the Markov chain takes values in a countably infinite set. Two situations are considered: the noiseless case and the case in which an additive noise is appended to the model. The solution for these problems relies, in part, on the study of a countably infinite set of coupled algebraic Riccati equations (ICARE). Conditions for existence and uniqueness of a positive semidefinite solution to the ICARE are obtained via the extended concepts of stochastic stabilizability (SS) and stochastic detectability (SD), which turn out to be equivalent to the spectral radius of certain infinite-dimensional linear operators in a Banach space being less than one. For the long-run average cost, SS and SD guarantee existence and uniqueness of a stationary measure and consequently existence of an optimal stationary control policy. Furthermore, an extension of a Lyapunov equation result is derived for the countably infinite Markov state-space case.
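A minimal fixed-point sketch for a system of coupled discrete-time algebraic Riccati equations with finitely many modes, i.e. a finite truncation of the countably infinite case (ICARE) treated in the paper; convergence presumes stochastic stabilizability and detectability conditions of the kind discussed there, and the two-mode data are illustrative.

```python
import numpy as np

def coupled_care(A, B, Q, R, Pr, iters=500, tol=1e-10):
    """Fixed-point iteration for the coupled algebraic Riccati equations of a
    discrete-time Markov jump linear system with finitely many modes:
        P_i = Q_i + A_i' E_i(P) A_i
              - A_i' E_i(P) B_i (R_i + B_i' E_i(P) B_i)^{-1} B_i' E_i(P) A_i,
    with E_i(P) = sum_j Pr[i, j] P_j.  Convergence presumes stochastic
    stabilizability / detectability conditions as in the cited paper."""
    N, n = len(A), A[0].shape[0]
    P = [np.eye(n) for _ in range(N)]
    for _ in range(iters):
        E = [sum(Pr[i, j] * P[j] for j in range(N)) for i in range(N)]
        P_new = []
        for i in range(N):
            G = R[i] + B[i].T @ E[i] @ B[i]
            K = np.linalg.solve(G, B[i].T @ E[i] @ A[i])   # mode-i feedback gain
            P_new.append(Q[i] + A[i].T @ E[i] @ (A[i] - B[i] @ K))
        if max(np.linalg.norm(Pn - Po) for Pn, Po in zip(P_new, P)) < tol:
            return P_new
        P = P_new
    return P

# toy two-mode example with illustrative data
A = [np.array([[0.9, 0.2], [0.0, 0.8]]), np.array([[1.1, 0.3], [0.1, 0.7]])]
B = [np.array([[0.0], [1.0]])] * 2
Q = [np.eye(2)] * 2
R = [np.eye(1)] * 2
Pr = np.array([[0.9, 0.1], [0.3, 0.7]])   # mode transition probabilities
P = coupled_care(A, B, Q, R, Pr)
```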

16.
The aim of the present paper is to provide an optimal solution to the H2 state-feedback and output-feedback control problems for stochastic linear systems subject both to Markov jumps and to multiplicative white noise. It is proved that in the state-feedback case the optimal solution is a static gain which is also optimal in the class of all higher-order controllers. In the output-feedback case the optimal H2 controller has the same order as the given stochastic system. The realization of the optimal controllers depends on the stabilizing solutions of appropriate systems of coupled Riccati-type equations. An effective, convergent iterative algorithm to compute these stabilizing solutions is also presented. The paper gives an illustrative numerical example allowing the results obtained by the proposed design approach to be compared with those presented in the recent control literature.

17.
We consider in this paper the stochastic optimal control problem of discrete-time linear systems subject to Markov jumps and multiplicative noises (MJLS-mn for short). Our objective is to present an optimal policy for the problem of maximising the system's total expected output over a finite-time horizon while restricting the weighted sum of its variance to a pre-specified upper-bound value. We obtain explicit conditions for the existence of an optimal control law for this problem as well as an algorithm for obtaining it, extending previous results in the literature. The paper is concluded by applying our results to a portfolio selection problem subject to regime switching.

18.
A discrete-time linear optimal control problem with given initial and terminal times, state control constraints, and variable end points is set forth. Corresponding to this primal control problem, or maximization problem, is a dual linear control problem, or minimization problem. A dual maximum principle is proved with the aid of the duality theory of linear programming, where the dual of the Hamiltonian of the primal control problem is the Hamiltonian of the dual control problem. A discrete-time analog of the Hamilton-Jacobi equation is derived; and economic applications are discussed.

19.
This paper investigates near-optimal control for a class of linear stochastic control systems governed by forward-backward stochastic differential equations, where both the drift and diffusion terms are allowed to depend on the control and the control domain is not assumed to be convex. In previous work (Theorem 3.1) of the second and third authors, a near-optimal control problem with control-dependent diffusion was addressed, and the current paper can be viewed as a direct response to it. The necessary condition for near-optimality is established within the framework of the optimality variational principle developed by Yong, together with the convergence technique introduced by Wu for treating optimal control of FBSDEs with unbounded control domains. Some new estimates are given to handle the near-optimality, and an illustrative example is discussed.
