Similar Literature
12 similar documents found.
1.
This paper is concerned with a partial-information mixed optimal stochastic continuous–singular control problem for a mean-field stochastic differential equation driven by Teugels martingales and an independent Brownian motion, where the Teugels martingales are a family of pairwise strongly orthonormal martingales associated with a Lévy process. The control variable has two components: the first absolutely continuous, the second singular. Partial-information necessary and sufficient conditions of optimality for the continuous–singular control of these mean-field models are established. As an illustration, the paper studies a partial-information linear-quadratic control problem of mean-field type involving continuous–singular control.
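As an illustrative sketch (the notation here is assumed for exposition and is not taken from the abstract), the controlled state dynamics in this kind of setting typically take the form

```latex
dx(t) = b\big(t, x(t), \mathbb{E}[x(t)], u(t)\big)\,dt
      + \sigma\big(t, x(t), \mathbb{E}[x(t)], u(t)\big)\,dB(t)
      + \sum_{i\ge 1} g_i\big(t, x(t^-), \mathbb{E}[x(t)], u(t)\big)\,dH^{(i)}(t)
      + G(t)\,d\xi(t),
```

where B is a Brownian motion, the H^(i) are the Teugels martingales of the underlying Lévy process, u is the absolutely continuous control component, and ξ is the nondecreasing (singular) control component.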

2.
ABSTRACT

In this paper, we introduce a new class of backward doubly stochastic differential equations (BDSDE for short), called mean-field backward doubly stochastic differential equations (MFBDSDE for short), driven by Itô-Lévy processes, and study partial-information optimal control problems for backward doubly stochastic systems of mean-field type driven by Itô-Lévy processes, in which the coefficients depend not only on the solution processes but also on their expected values. First, using a contraction mapping argument, we prove existence and uniqueness of solutions to this kind of MFBDSDE. Then, by convex variation and a duality technique, we establish necessary and sufficient stochastic maximum principles for the stochastic system. Finally, we illustrate the theoretical results with an application to a stochastic linear-quadratic optimal control problem for a mean-field backward doubly stochastic system driven by Itô-Lévy processes.
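A mean-field backward doubly stochastic equation of the kind described can be sketched as follows (symbols are illustrative assumptions, not the paper's exact formulation):

```latex
Y_t = \xi + \int_t^T f\big(s, Y_s, Z_s, \mathbb{E}[Y_s], \mathbb{E}[Z_s]\big)\,ds
     + \int_t^T g\big(s, Y_s, Z_s\big)\,d\overleftarrow{B}_s
     - \int_t^T Z_s\,dW_s
     - \int_t^T \!\!\int_{\mathbb{R}_0} U_s(e)\,\tilde{N}(ds,de),
```

where the integral against the backward arrow denotes a backward Itô integral with respect to a second, independent Brownian motion (the "doubly stochastic" feature) and Ñ is the compensated jump measure of the Itô-Lévy process; well-posedness is obtained, as the abstract indicates, by a contraction mapping on suitable L² spaces.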

3.
This paper studies the optimal control problem for a class of discrete-time nonlinear stochastic systems containing both Markov jump processes and multiplicative noise, and states and proves the corresponding maximum principle. First, using the smoothing property of conditional expectation and by introducing a backward stochastic difference equation with an adapted solution, a representation of linear functionals constrained by a linear difference equation is given, and its uniqueness is proved by the Riesz representation theorem. Second, for the nonlinear stochastic control system with Markov jumps, the spike variation technique is applied to take the first-order variation of the state equation, yielding the linear difference equation satisfied by the variation. Then, after introducing the Hamiltonian, the maximum principle for the discrete-time nonlinear stochastic optimal control problem with Markov jumps is stated and proved via a pair of adjoint equations characterized by backward stochastic difference equations; a sufficient condition for optimality and the corresponding Hamilton-Jacobi-Bellman equation are also given. Finally, a practical example demonstrates the applicability and feasibility of the proposed theory.
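In discrete time, the Hamiltonian and adjoint pair referred to above typically look as follows (a sketch under assumed notation, not the paper's exact equations):

```latex
H_k(x, u, p_{k+1}, q_{k+1}, \theta_k)
  = \big\langle p_{k+1},\, b_k(x, u, \theta_k) \big\rangle
  + \big\langle q_{k+1},\, \sigma_k(x, u, \theta_k) \big\rangle
  + \ell_k(x, u, \theta_k),
\qquad
p_k = \mathbb{E}\big[\, \partial_x H_k(x_k, u_k, p_{k+1}, q_{k+1}, \theta_k) \,\big|\, \mathcal{F}_k \big],
```

where θ_k is the state of the Markov chain, b_k and σ_k are the drift and (multiplicative-noise) diffusion coefficients, ℓ_k the running cost, and the backward recursion for p_k is the backward stochastic difference equation; the maximum principle then requires the optimal u_k to extremize H_k conditionally on the current information.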

4.
In this paper, we consider an optimal control problem for a stochastic system described by stochastic differential equations with delay. We obtain the maximum principle for this problem by means of the duality method and anticipated backward stochastic differential equations. The results apply to a production and consumption choice problem, for which the explicit optimal consumption rate is obtained.
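For delayed dynamics, the adjoint process in results of this type is typically an anticipated backward SDE; schematically (illustrative notation, assumed here rather than quoted from the paper):

```latex
-dp(t) = \Big( \partial_x H(t)
       + \mathbb{E}\big[\, \partial_{x_\delta} H(t+\delta) \,\big|\, \mathcal{F}_t \big] \Big)\,dt
       - q(t)\,dW(t),
\qquad p(T) = \partial_x \Phi\big(x(T)\big),
```

where x_δ(t) = x(t−δ) is the delayed state and H the Hamiltonian; the conditional expectation of the future coefficient is what makes the adjoint equation "anticipated", and duality between it and the delayed state variation yields the maximum principle.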

5.
We consider a stochastic control problem with linear dynamics with jumps, a convex cost criterion, and a convex state constraint, in which the control enters the drift, the diffusion, and the jump coefficients. We allow these coefficients to be random and do not impose any L^p-bounds on the control.

We obtain a stochastic maximum principle for this model that provides both necessary and sufficient conditions of optimality. This is the first version of the stochastic maximum principle that covers the consumption–investment problem in which there are jumps in the price system.


6.
7.
Comparison principles for general impulsive stochastic functional differential systems are established. Employing these comparison principles and the theory of differential inequalities, stability and instability, in terms of two measures, of impulsive stochastic functional differential systems are investigated. Several stability and instability criteria are obtained, and two examples are given to illustrate the results.

8.
9.
The variable structure control problem for a class of Itô-type stochastic systems with time-varying delay described by partial differential equations is studied. First, a sliding manifold is constructed for the system and a variable structure control law is designed; then the sliding mode of the system is shown to be sub-reachable, and a sufficient condition under which the sliding-mode motion of the system is mean-square stable is given by means of the Halanay inequality.

10.
The nonlinear stochastic optimal control problem of quasi-integrable Hamiltonian systems with uncertain parameters is investigated. The uncertain parameters are described by a random vector with a given probability density function. First, the partially averaged Itô stochastic differential equations are derived by using the stochastic averaging method for quasi-integrable Hamiltonian systems. Then, the dynamic programming equation is established based on the stochastic dynamic programming principle. Minimizing the dynamic programming equation with respect to the control forces yields optimal control forces that are functions of the uncertain parameters. The final optimal control forces are then determined as the probability-weighted average of the obtained control forces, with the probability density of the uncertain parameters as the weighting function. Mean control effectiveness and mean control efficiency are used to evaluate the proposed control strategy, and its robustness is measured by the ratios of the variation coefficients of mean control effectiveness and mean control efficiency to the variation coefficients of the uncertain parameters. Finally, two examples illustrate the proposed control strategy and its effectiveness and robustness. Copyright © 2014 John Wiley & Sons, Ltd.
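Schematically, for a fixed value λ of the uncertain parameter vector, the dynamic programming step described above amounts to the following (an illustrative sketch in assumed notation, not the paper's exact equations):

```latex
\frac{\partial V}{\partial t}
 + \min_{u}\Big\{ f(H,u)
 + \big[\, m(H;\lambda) + \langle \partial_{p} \mathcal{H},\, u \rangle \,\big]\,
   \frac{\partial V}{\partial H}
 + \tfrac{1}{2}\,\sigma^{2}(H;\lambda)\,\frac{\partial^{2} V}{\partial H^{2}} \Big\} = 0,
```

where H is the (averaged) Hamiltonian-energy variable, m and σ² the averaged drift and diffusion coefficients, and f the running cost; the final control is then the probability-weighted average of the parameter-dependent minimizers u*(·; λ) against the density of λ, as described in the abstract.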

11.
The principal contribution of this paper is to show how far the controllability condition can be weakened while still allowing stabilization. A result on stabilization with respect to a target set for affine control systems in the absence of the full rank condition is proved.

12.
This article is a survey of the early development of selected areas in nonlinear continuous-time stochastic control. Key developments in optimal control and the dynamic programming principle, existence of optimal controls under complete and partial observations, nonlinear filtering, stochastic stability, the stochastic maximum principle, and ergodic control are discussed. Issues concerning wide-bandwidth noise in stability, modeling, filtering, and ergodic control are dealt with. The focus is on the earlier work; many important topics are omitted for lack of space.

