Similar Documents
20 similar documents found.
1.
2.
We consider a stochastic control problem with linear dynamics with jumps, convex cost criterion, and convex state constraint, in which the control enters the drift, the diffusion, and the jump coefficients. We allow these coefficients to be random, and do not impose any L^p-bounds on the control.

We obtain a stochastic maximum principle for this model that provides both necessary and sufficient conditions of optimality. This is the first version of the stochastic maximum principle that covers the consumption–investment problem in which there are jumps in the price system.


3.
In this paper we prove necessary conditions for optimality of a stochastic control problem for a class of stochastic partial differential equations that are controlled through the boundary. This kind of problem can be interpreted as a stochastic control problem for an evolution system in a Hilbert space. The regularity of the solution of the adjoint equation, which is a backward stochastic equation in infinite dimensions, plays a crucial role in the formulation of the maximum principle.

4.
In this paper, we introduce a new class of backward doubly stochastic differential equations (BDSDEs for short), called mean-field backward doubly stochastic differential equations (MFBDSDEs for short), driven by Itô-Lévy processes, and study partial-information optimal control problems for backward doubly stochastic systems of mean-field type driven by Itô-Lévy processes, in which the coefficients depend not only on the solution processes but also on their expected values. First, using the contraction mapping method, we prove the existence and uniqueness of solutions to this kind of MFBDSDE. Then, by the convex variation method and a duality technique, we establish a necessary and sufficient stochastic maximum principle for the stochastic system. Finally, we illustrate the theoretical results with an application to a stochastic linear-quadratic optimal control problem for a mean-field backward doubly stochastic system driven by Itô-Lévy processes.

5.
We consider the second-order Taylor expansion for a backward doubly stochastic control system. The results are obtained without any convexity restriction on the control domain. Moreover, the control variable is allowed to enter both the drift coefficient and the diffusion coefficient.

6.
In this paper, we consider an optimal control problem for a stochastic system described by stochastic differential equations with delay. We obtain the maximum principle for the optimal control of this problem by virtue of the duality method and anticipated backward stochastic differential equations. Our results can be applied to a production and consumption choice problem, for which the explicit optimal consumption rate is obtained.

7.
In this paper, under the framework of Fréchet derivatives, we study a stochastic optimal control problem driven by a stochastic differential equation with a general cost functional. By constructing a series of first-order and second-order adjoint equations, we establish the stochastic maximum principle and obtain the related Hamiltonian systems.

8.
In this paper, we consider risk-sensitive optimal control and differential games for stochastic differential delay equations driven by Brownian motion. The problems are related to robust stochastic optimization with delay because of the inherent feature of the risk-sensitive objective functional. For both problems, by using the logarithmic transformation of the associated risk-neutral problem, necessary and sufficient conditions for the risk-sensitive maximum principle are obtained. We show that these conditions are characterized in terms of a variational inequality and coupled anticipated backward stochastic differential equations (ABSDEs). The coupled ABSDEs consist of the first-order adjoint equation and an additional scalar ABSDE, where the latter is induced by the nonsmooth nonlinear transformation of the adjoint process of the associated risk-neutral problem. For applications, we consider risk-sensitive linear-quadratic control and game problems with delay, and the optimal consumption and production game, for which we obtain explicit optimal solutions.
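As a pointer to what the logarithmic transformation refers to, a schematic risk-sensitive objective and its transform can be written as follows; the notation (θ the risk-sensitivity parameter, l the running cost, h the terminal cost) is an illustrative assumption and need not match the paper's exact setup:

\[ J_\theta(u) = \mathbb{E}\Big[\exp\Big(\theta\Big(\int_0^T l(t, x_t, u_t)\,dt + h(x_T)\Big)\Big)\Big], \qquad \tilde{J}_\theta(u) = \frac{1}{\theta}\,\log J_\theta(u), \]

so that for small θ the transformed functional behaves like the risk-neutral cost plus a variance penalty, which is the sense in which such problems relate to robust stochastic optimization.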

9.
This paper studies the optimal control problem for a class of discrete-time nonlinear stochastic systems involving both Markov jump processes and multiplicative noise, and states and proves the corresponding maximum principle. First, using the smoothing property of conditional expectation and introducing a backward stochastic difference equation with an adapted solution, a representation of linear functionals constrained by a linear difference equation is given, and its uniqueness is proved via the Riesz representation theorem. Second, for the nonlinear stochastic control system with Markov jumps, the spike variation method is applied: a first-order variation of the state equation is carried out, and the linear difference equation satisfied by this variation is obtained. Then, after introducing the Hamiltonian function, the maximum principle for the discrete-time nonlinear stochastic optimal control problem with Markov jumps is stated and proved through a pair of adjoint equations described by backward stochastic difference equations; a sufficient condition for this optimal control problem and the corresponding Hamilton-Jacobi-Bellman equation are also given. Finally, a practical example illustrates the applicability and feasibility of the proposed theory.
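As a rough illustration of the Hamiltonian-based maximum condition described above, a schematic discrete-time form might read as follows; this is only a sketch with assumed notation (f the drift, σ the multiplicative-noise coefficient, l the running cost, (p, q) the adjoint pair from the backward stochastic difference equations), and it suppresses the conditional expectations and the Markov regime variable that the paper's exact formulation involves:

\[ H_k(x, u, p, q) = \langle p, f_k(x, u)\rangle + \langle q, \sigma_k(x, u)\rangle + l_k(x, u), \qquad H_k(x_k^*, u_k^*, p_{k+1}, q_{k+1}) = \max_{u \in U} H_k(x_k^*, u, p_{k+1}, q_{k+1}). \]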

10.
11.
In this paper, we are interested in an optimal control problem in which the system is given by a fully coupled forward-backward stochastic differential equation with a risk-sensitive performance functional. As a preliminary step, we use the associated risk-neutral problem, an extension of the initial control system in which the admissible controls are convex and an optimal solution exists. Then, we study the necessary as well as sufficient optimality conditions for the risk-sensitive performance. At the end of this work, we illustrate our main result with an example dealing with an optimal portfolio choice problem in a financial market, specifically a model of the controlled cash flow of a firm or project in which, for instance, one can set up the pricing and management of an insurance contract.

12.
This paper is concerned with the forward–backward stochastic optimal control problem with Poisson jumps. A necessary condition of optimality in the form of a global maximum principle, as well as a sufficient condition of optimality, is presented under the assumption that the diffusion and jump coefficients do not contain the control variable and the control domain need not be convex. The case where there are some state constraints is also discussed. A financial example illustrates the application of our result.

13.
14.
This paper proposes a stochastic minimax optimal control strategy for uncertain quasi-Hamiltonian systems, based on the stochastic averaging method, the stochastic maximum principle, and the theory of stochastic differential games. First, applying the stochastic averaging method for quasi-Hamiltonian systems, the system state is transformed from the fast variables of velocity and displacement into the slow variable of energy, yielding partially averaged Itô stochastic differential equations. Second, given a control performance index, the stochastic optimal control of the uncertain quasi-Hamiltonian system is converted, by stochastic differential game theory, into a minimax control problem. Next, according to the stochastic maximum principle, forward-backward stochastic differential equations for the system and the adjoint process are established, and the stochastic optimal control is expressed as a minimax condition on the Hamiltonian control function, from which the worst-case disturbance parameters and the minimax optimal control are obtained. Then, substituting the worst-case disturbance parameters and the optimal control into the partially averaged Itô stochastic differential equations and completing the averaging, the Fokker-Planck-Kolmogorov (FPK) equation associated with the fully averaged Itô stochastic differential equations is solved, which yields the response of the controlled system and allows the control performance to be evaluated. Finally, the above stochastic optimal control strategy for uncertain quasi-Hamiltonian systems is applied to a two-degree-of-freedom nonlinear system, and numerical results illustrate the effectiveness of the stochastic minimax control strategy.
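The minimax condition mentioned above can be sketched as a saddle-point requirement on a Hamiltonian-type function; the notation here (u the control, δ the uncertain disturbance parameter, 𝓗 the Hamiltonian control function evaluated along the averaged system and the adjoint process) is assumed for illustration rather than taken from the paper:

\[ \mathcal{H}(u^*, \delta) \le \mathcal{H}(u^*, \delta^*) \le \mathcal{H}(u, \delta^*) \quad \text{for all admissible } u, \delta, \]

i.e. the worst-case disturbance δ* maximizes 𝓗 while the optimal control u* minimizes it.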

15.
A new approach to the study of indefinite stochastic linear quadratic (LQ) optimal control problems, which we call the “equivalent cost functional method”, was introduced by Yu (2013) in the Hamiltonian system setup. Another important issue in this research direction is the possible state-feedback representation of the optimal control and the solvability of the associated indefinite stochastic Riccati equations. In response, this paper continues to develop the equivalent cost functional method by extending it to the Riccati equation setup. Our analysis features the introduction of some equivalent cost functionals that enable us to build a bridge between the indefinite and positive-definite stochastic LQ problems. With this bridge, a solvability relation between the indefinite and positive-definite Riccati equations is further characterized. Remarkably, the solvability of the former is considerably more complicated than that of the latter, so our relation provides an alternative and useful viewpoint. Consequently, the corresponding indefinite linear quadratic problem is discussed, for which the unique optimal control is derived in terms of state feedback via the solution of the Riccati equation. In addition, an example is studied using our theoretical results.
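As a small numerical illustration of the state-feedback representation via a Riccati equation, the sketch below treats only the classical deterministic, positive-definite LQ special case; the indefinite stochastic Riccati equations studied in the paper are substantially harder, and the matrices A, B, Q, R here are made-up examples, not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative (assumed) data: dynamics dx = (A x + B u) dt,
# cost = integral of (x'Qx + u'Ru) dt over [0, infinity)
A = np.array([[0.0, 1.0],
              [-1.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)              # state weight, positive semidefinite
R = np.array([[1.0]])      # control weight, positive definite

# Solve the algebraic Riccati equation A'P + PA - P B R^{-1} B' P + Q = 0
P = solve_continuous_are(A, B, Q, R)

# State-feedback representation of the optimal control: u*(x) = -K x with K = R^{-1} B' P
K = np.linalg.solve(R, B.T @ P)

print("Riccati solution P:\n", P)
print("Feedback gain K:", K)
```

In this positive-definite case the gain K gives the unique optimal control in feedback form; the equivalent cost functional method described above is what allows an analogous feedback representation to be pushed to the indefinite stochastic setting.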

16.
In this paper, we deal with a new kind of partially observed nonzero-sum differential game governed by stochastic differential delay equations. One of the special features is that the controlled system and the utility functionals involve delays in both the state variable and the control variables, under a different observation equation for each player. We obtain a maximum principle and a verification theorem for the game problem by virtue of Girsanov's theorem and the convex variational method. In addition, based on the theoretical results and Malliavin derivative techniques, we solve a production and consumption choice game problem.

17.
The present study deals with a new approach to optimal control problems in which the state equation is a mean-field stochastic differential equation, the set of strict (classical) controls need not be convex, and the diffusion coefficient depends on the control variable. Our consideration is based on only one adjoint process, and the necessary conditions, as well as a sufficient condition, for optimality in the form of a relaxed maximum principle are obtained, with an application to a linear-quadratic stochastic control problem of mean-field type.

18.
In this study, we propose a varying-terminal-time structure for the optimal control problem under state constraints, in which the terminal time varies with the control through the constraint condition. For this new optimal control problem, we investigate a novel stochastic maximum principle, which differs from that of the traditional optimal control problem under state constraints. The optimal pair of the optimal control model can be verified via this new stochastic maximum principle.

19.
Dear editor, The main objective of this study is to investigate one type of stochastic optimal control problem for a delayed system using the maximum principle ...

20.
This paper studies detectability and observability of discrete-time stochastic linear systems. Based on the standard notions of detectability and observability for time-varying linear systems, corresponding definitions for discrete-time stochastic systems are proposed, which unify some recently reported detectability and exact observability concepts for stochastic linear systems. The notion of observability leads to a stochastic version of the well-known rank criterion for observability of deterministic linear systems. Using these two concepts, the discrete-time stochastic Lyapunov and Riccati equations are studied. The results not only extend some of the existing results on these two types of equations but also indicate that the notions of detectability and observability studied in this paper play roles analogous to those of the usual concepts of detectability and observability in deterministic linear systems. It is expected that the results presented may play important roles in many design problems for stochastic linear systems.
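To make concrete the deterministic rank criterion that the stochastic notion generalizes, here is a minimal sketch for a deterministic discrete-time pair (A, C); the matrices are illustrative assumptions, and the stochastic test in the paper generalizes rather than reduces to this exact computation.

```python
import numpy as np

def observability_matrix(A, C):
    """Stack C, CA, ..., CA^(n-1); (A, C) is observable iff this matrix has rank n."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# Illustrative (assumed) system: x_{k+1} = A x_k,  y_k = C x_k
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
C = np.array([[1.0, 0.0]])

O = observability_matrix(A, C)
print("rank(O) =", np.linalg.matrix_rank(O), " n =", A.shape[0])  # observable iff rank == n
```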
