20 similar references retrieved.
1.
Li Chen, Automatica, 2010, 46(6): 1074-1080
In this paper, we consider an optimal control problem for a stochastic system described by stochastic differential equations with delay. We obtain a maximum principle for this problem via the duality method and anticipated backward stochastic differential equations. Our results can be applied to a production and consumption choice problem, for which the explicit optimal consumption rate is obtained.
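For orientation, a schematic version of the adjoint system that underlies this type of result, written in generic notation rather than the paper's: the state follows a delayed SDE, and the adjoint pair (p, q) solves an anticipated BSDE whose driver involves a conditional expectation of future values.

\[
dx(t) = b\big(t, x(t), x(t-\delta), u(t)\big)\,dt + \sigma\big(t, x(t), x(t-\delta), u(t)\big)\,dW(t), \qquad t \in [0,T],
\]
\[
-dp(t) = \Big\{ \partial_x H(t) + \mathbb{E}\big[\,\partial_{x_\delta} H(t+\delta)\,\big|\,\mathcal{F}_t\,\big]\,\mathbf{1}_{[0,T-\delta]}(t) \Big\}\,dt - q(t)\,dW(t), \qquad p(T) = \partial_x h\big(x(T)\big),
\]

where H denotes the Hamiltonian, x_delta the delayed state argument, and optimality is characterized by maximizing H along the optimal trajectory and adjoint processes.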
2.
In this paper, we are interested in an optimal control problem where the system is given by a fully coupled forward-backward stochastic differential equation with a risk-sensitive performance functional. As a preliminary step, we use the risk-neutral problem, an extension of the initial control system in which the admissible controls are convex and an optimal solution exists. We then study the necessary as well as sufficient optimality conditions for the risk-sensitive performance. At the end of this work, we illustrate our main result with an example dealing with an optimal portfolio choice problem in a financial market, specifically a model of controlled cash flow of a firm or project in which, for instance, one can set up the pricing and management of an insurance contract.
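As a point of reference, a risk-sensitive performance functional of the kind referred to here typically takes an exponential-of-integral form (generic notation with risk-sensitivity parameter theta; this is a sketch, not the paper's exact setup):

\[
J^{\theta}(u) = \mathbb{E}\Big[ \exp\Big( \theta \Big( \int_0^T f\big(t, x(t), y(t), u(t)\big)\,dt + \Phi\big(x(T)\big) + \varphi\big(y(0)\big) \Big) \Big) \Big],
\]

and the auxiliary risk-neutral problem is obtained through the logarithmic transform \( \theta^{-1}\log J^{\theta}(u) \), which reduces to a standard expected cost as \( \theta \to 0 \).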
3.
Abel Cadenillas, Systems & Control Letters, 2002, 47(5): 433-444
We consider a stochastic control problem with linear dynamics with jumps, a convex cost criterion, and a convex state constraint, in which the control enters the drift, the diffusion, and the jump coefficients. We allow these coefficients to be random and do not impose any L^p-bounds on the control.
We obtain a stochastic maximum principle for this model that provides both necessary and sufficient conditions of optimality. This is the first version of the stochastic maximum principle that covers the consumption-investment problem in which there are jumps in the price system.
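A generic sketch of the Hamiltonian and adjoint equation for jump-diffusion control of this type (our notation; the paper's precise setting, with random coefficients and a state constraint, is more general):

\[
H\big(t,x,u,p,q,r(\cdot)\big) = p\, b(t,x,u) + q\, \sigma(t,x,u) + \int_{\mathbb{R}_0} r(z)\, \gamma(t,x,u,z)\,\nu(dz) - f(t,x,u),
\]
\[
-dp(t) = \partial_x H(t)\,dt - q(t)\,dW(t) - \int_{\mathbb{R}_0} r(t,z)\,\tilde{N}(dt,dz),
\]

with the condition that the optimal control maximizes H along the optimal state and adjoint processes; under convexity assumptions this condition is both necessary and sufficient.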
4.
Herein, we study the near-optimality of linear forward-backward stochastic control systems. On the theoretical side, sufficient and necessary conditions for near-optimality are established in the form of a Pontryagin-type stochastic maximum principle. As an illustration and practical application, an ε-optimal control example is worked out and solved using these theoretical results.
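Near-optimality results of this type are usually stated as an ε-perturbed form of the maximum condition; schematically (generic notation, not quoted from the paper), if u^ε is ε-optimal, i.e. J(u^ε) ≤ inf_u J(u) + ε, then

\[
\mathbb{E}\int_0^T \Big( \sup_{v \in U} H\big(t, x^{\varepsilon}(t), v, p^{\varepsilon}(t), q^{\varepsilon}(t)\big) - H\big(t, x^{\varepsilon}(t), u^{\varepsilon}(t), p^{\varepsilon}(t), q^{\varepsilon}(t)\big) \Big)\,dt \le C\,\varepsilon^{\lambda}
\]

for some constant C and some exponent λ > 0 depending on the assumptions.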
5.
Giuseppina Guatteri, Systems & Control Letters, 2011, 60(3): 198-204
In this paper we prove necessary conditions for optimality of a stochastic control problem for a class of stochastic partial differential equations that are controlled through the boundary. This kind of problem can be interpreted as a stochastic control problem for an evolution system in a Hilbert space. The regularity of the solution of the adjoint equation, which is a backward stochastic equation in infinite dimensions, plays a crucial role in the formulation of the maximum principle.
6.
This paper first presents necessary and sufficient conditions for the solvability of discrete-time, mean-field, stochastic linear-quadratic optimal control problems. Second, the optimal control within a class of linear feedback controls is investigated using a matrix dynamical optimization method. Third, by introducing several sequences of bounded linear operators, the problem is formulated as an operator stochastic linear-quadratic optimal control problem. By the kernel-range decomposition representation of the expectation operator and its pseudo-inverse, the optimal control is derived using solutions to two algebraic Riccati difference equations. Finally, by completing the square, the two Riccati equations and the optimal control are also obtained.
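To fix ideas, the classical (non-mean-field) discrete-time LQ problem already shows the shape of the objects involved; in the mean-field case one obtains two coupled Riccati difference equations of this type, one governing the state deviation x_k - E[x_k] and one governing the mean E[x_k] (generic notation, not the paper's):

\[
P_k = Q + A^{\top} P_{k+1} A - A^{\top} P_{k+1} B \big( R + B^{\top} P_{k+1} B \big)^{-1} B^{\top} P_{k+1} A, \qquad P_N = G,
\]
\[
u_k^{*} = -\big( R + B^{\top} P_{k+1} B \big)^{-1} B^{\top} P_{k+1} A\, x_k .
\]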
7.
Weifeng Wang, International Journal of Control, 2013, 86(5): 942-952
We consider the second-order Taylor expansion for a backward doubly stochastic control system. The results are obtained without any convexity restriction on the control domain. Moreover, the control variable is allowed to enter both the drift coefficient and the diffusion coefficient.
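For orientation only: in the classical forward setting (Peng's general maximum principle), the second-order expansion leads to a variational inequality involving a second-order adjoint process P; the following schematic statement (our notation) is merely indicative of the kind of structure the paper develops for backward doubly stochastic systems:

\[
H\big(t, \bar{x}(t), v, p(t), q(t)\big) - H\big(t, \bar{x}(t), \bar{u}(t), p(t), q(t)\big)
+ \tfrac{1}{2}\,\big(\sigma(t,\bar{x}(t),v) - \sigma(t,\bar{x}(t),\bar{u}(t))\big)^{\top} P(t)\,\big(\sigma(t,\bar{x}(t),v) - \sigma(t,\bar{x}(t),\bar{u}(t))\big) \le 0
\]

for all admissible values v, almost everywhere and almost surely.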
8.
The present study deals with a new approach to optimal control problems in which the state equation is a mean-field stochastic differential equation, the set of strict (classical) controls need not be convex, and the diffusion coefficient depends on the control. Our approach is based on a single adjoint process; necessary conditions as well as a sufficient condition for optimality are obtained in the form of a relaxed maximum principle, with an application to a linear-quadratic stochastic control problem of mean-field type.
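The linear-quadratic mean-field application mentioned at the end is typically of the following form (illustrative notation only, not the paper's exact data):

\[
dx(t) = \big( A x(t) + \bar{A}\,\mathbb{E}[x(t)] + B u(t) + \bar{B}\,\mathbb{E}[u(t)] \big)\,dt + \big( C x(t) + \bar{C}\,\mathbb{E}[x(t)] + D u(t) \big)\,dW(t),
\]
\[
J(u) = \tfrac{1}{2}\,\mathbb{E}\!\int_0^T \!\big( x^{\top} Q\, x + \mathbb{E}[x]^{\top} \bar{Q}\,\mathbb{E}[x] + u^{\top} R\, u \big)\,dt + \tfrac{1}{2}\,\mathbb{E}\big[ x(T)^{\top} G\, x(T) \big].
\]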
9.
The robustness of non-linear stochastic optimal control for quasi-Hamiltonian systems with uncertain parameters is studied. Based on the independence of the uncertain parameters and the stochastic excitations, the non-linear stochastic optimal control for the nominal quasi-Hamiltonian system with average-value parameters is first obtained by using the stochastic averaging method and the stochastic dynamic programming principle. Then, the means and standard deviations of the root-mean-square responses, control effectiveness and control efficiency for the uncertain quasi-Hamiltonian system are calculated by using the stochastic averaging method and probabilistic analysis. By introducing the sensitivity of the variation coefficients of the controlled root-mean-square responses, control effectiveness and control efficiency to those of the uncertain parameters, the robustness of the non-linear stochastic optimal control is evaluated. Two examples are given to illustrate the proposed control procedure and its robustness.
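One common way to quantify a robustness measure of the kind described here (an illustrative definition, not quoted from the paper): for a controlled root-mean-square response R with mean μ_R and standard deviation σ_R over the uncertain-parameter ensemble, the variation coefficient is ν_R = σ_R/μ_R, and its sensitivity to an uncertain parameter a with variation coefficient ν_a can be taken as the ratio of relative changes

\[
S = \frac{\Delta \nu_R / \nu_R}{\Delta \nu_a / \nu_a},
\]

so that a small |S| indicates a control law whose performance statistics are insensitive to the parameter uncertainty.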
10.
11.
Jianjun Zhou, International Journal of Control, 2013, 86(9): 1771-1784
In this article, we consider an optimal control problem in which the controlled state dynamics are governed by a stochastic evolution equation in a Hilbert space and the cost functional has quadratic growth. The existence and uniqueness of the optimal control are obtained by means of an associated backward stochastic differential equation with quadratic growth and an unbounded terminal value. As an application, an optimal control of stochastic partial differential equations with dynamical boundary conditions is also given to illustrate our results.
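The associated BSDE referred to here is of quadratic type; schematically (generic notation), the pair (Y, Z) solves

\[
Y(t) = \xi + \int_t^T f\big(s, Y(s), Z(s)\big)\,ds - \int_t^T Z(s)\,dW(s), \qquad |f(s,y,z)| \le C\big(1 + |y| + |z|^{2}\big),
\]

with a terminal value ξ that need not be bounded; the value of the control problem is then read off from Y at the initial time.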
12.
Z.J. Palmor, Automatica, 1982, 18(1): 107-116
Structural, stability and sensitivity properties of optimal stochastic control systems for dead-time, stable minimum-phase as well as non-minimum-phase processes are presented. The processes are described by rational transfer functions plus dead-times and the disturbances by rational spectral densities. It is shown that although frequency-domain design techniques guarantee asymptotically stable systems for given process and disturbance models, many of the designs may be practically unstable. Necessary and sufficient conditions that must be imposed on the design to ensure practically stable optimal systems are derived. The uncertainties in the parameters and in the structure of the process model are measured by means of an ignorance function. Sufficient conditions in terms of the ignorance function, which guarantee a stable design and by means of which the bounds of the uncertainties for a given design may be estimated, are stated. Conditions under which the optimal designs possess attractive relative stability properties, namely gain and phase margins of at least 2 and 60°, respectively, are also stated. It is further shown that any optimal controller, for the type of processes discussed in this paper, may be separated into a primary controller and a dead-time compensator, where the latter is completely independent of the cost and the disturbance properties. Such a decomposition gives excellent insight into the role of the cost and the disturbance in the design. When low-order process and disturbance models are used, the conventional PI and PID control laws coupled with the dead-time compensator emerge.
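The decomposition referred to at the end is of Smith-predictor type: for a process model P(s) = P_0(s) e^{-sτ} with rational part P_0 and dead time τ, the controller can be written schematically as a primary controller C_0(s), designed for the delay-free part, wrapped in a dead-time compensator (our notation, as a sketch):

\[
C(s) = \frac{C_0(s)}{1 + C_0(s)\,P_0(s)\big(1 - e^{-s\tau}\big)},
\]

so that the compensator block P_0(s)(1 - e^{-sτ}) depends only on the process model, not on the cost or disturbance description, which is the separation property stated above.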
13.
Verification Theorem of Stochastic Optimal Control with Mixed Delay and Applications to Finance

This paper focuses on a general model of a controlled stochastic differential equation with mixed delay in the state variable. Based on the Itô formula, stochastic analysis, convex analysis, and inequality techniques, we obtain a semi-coupled forward-backward stochastic differential equation with mixed delay and mixed initial-terminal conditions and prove that such a forward-backward system admits a unique adapted solution. A verification theorem for an optimal control of a system with mixed delay is established. The obtained results generalize and improve some recent results, and they are more easily verified and applied in practice. As an application, we conclude by explicitly finding the optimal consumption rate from the wealth process of an agent, given by a stochastic differential equation with mixed delay, which fits into our general model.
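As an indication of what "mixed delay" means in such consumption problems, a wealth equation combining a pointwise (discrete) delay and a distributed (moving-average) delay might look like the following (purely illustrative coefficients and notation, not the paper's model):

\[
dX(t) = \Big( \mu X(t) + \alpha X(t-\delta) + \beta \int_{t-\delta}^{t} X(s)\,ds - c(t) \Big)\,dt + \sigma X(t)\,dW(t),
\]

with the consumption rate c(·) chosen to maximize an expected discounted utility \( \mathbb{E}\int_0^T e^{-\rho t} U(c(t))\,dt \) plus a terminal term.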
14.
We prove the existence of optimal relaxed controls as well as strict optimal controls for systems governed by nonlinear forward-backward stochastic differential equations (FBSDEs). Our approach is based on weak convergence techniques for the associated FBSDEs in the Jakubowski S-topology and a suitable Skorokhod representation theorem.
15.
International Journal of Computer Mathematics, 2012, 89(14): 3311-3327
In this article, singular optimal control for a stochastic linear singular system with a quadratic performance criterion is obtained using ant colony programming (ACP). To obtain the optimal control, the solution of the matrix Riccati differential equation is computed by solving the associated differential-algebraic equation with a novel, nontraditional ACP approach. The solution obtained by this method is equal or very close to the exact solution of the problem, and the accuracy of the solution computed by the ACP approach is qualitatively better. The solution of this novel method is compared with that of the traditional Runge-Kutta method. An illustrative numerical example is presented for the proposed method.
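For context, the traditional Runge-Kutta baseline against which the ACP solution is compared amounts to integrating a matrix Riccati differential equation numerically. Below is a minimal RK4 sketch of that baseline in Python, with purely illustrative matrices; the ACP algorithm itself and the singular-system specifics are not reproduced here.

# Minimal RK4 baseline for the matrix Riccati differential equation
#   dP/dt = -(A^T P + P A - P B R^{-1} B^T P + Q),  P(T) = F,
# integrated backward from t = T to t = 0. Matrices are illustrative only.
import numpy as np

def riccati_rhs(P, A, B, Q, Rinv):
    # Right-hand side of the Riccati ODE
    return -(A.T @ P + P @ A - P @ B @ Rinv @ B.T @ P + Q)

def solve_riccati_rk4(A, B, Q, R, F, T, steps=1000):
    Rinv = np.linalg.inv(R)
    h = T / steps
    P = F.copy()                      # terminal condition P(T) = F
    for _ in range(steps):            # step backward with step size -h
        k1 = riccati_rhs(P, A, B, Q, Rinv)
        k2 = riccati_rhs(P - 0.5 * h * k1, A, B, Q, Rinv)
        k3 = riccati_rhs(P - 0.5 * h * k2, A, B, Q, Rinv)
        k4 = riccati_rhs(P - h * k3, A, B, Q, Rinv)
        P = P - (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return P                          # approximation of P(0)

# Illustrative (hypothetical) data, not taken from the paper
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
F = np.zeros((2, 2))
print(solve_riccati_rk4(A, B, Q, R, F, T=5.0))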
16.
A. A. Kostoglotov, A. I. Kostoglotov, S. V. Lazarenko, Automatic Control and Computer Sciences, 2007, 41(5): 274-281
The problem of optimization of dynamic systems is considered. It is shown that, unlike the well-known methods of optimal control, the use of the maximum principle makes it possible to synthesize an efficient control law that substantially reduces computational complexity, as demonstrated by the results of numerical simulation.
17.
Z.J. Palmor, Automatica, 1982, 18(4): 491-492
The practical stability of optimal stochastic control systems for processes with dead-times is considered. Previously obtained necessary conditions for the practical stability of such systems are generalized using Pontryagin's theorem on the roots of two-variable polynomials. The conditions are expressed in terms of the relations between the orders of the process, the process model and the disturbance model.
18.
A continuous-time dynamic model of discrete scheduling problems for a large class of manufacturing systems is considered in the present paper. Realistic manufacturing based on multi-level bills of materials, flexible machines, controllable buffers and deterministic demand profiles is modeled in the canonical form of an optimal control problem. Buffer carrying costs are minimized by controlling the production rates of all machines that can be set up instantly. The maximum principle for the model is studied and properties of the optimal production regimes are revealed. The solution method developed rests on an iterative approach generalizing the projected gradient method, but takes advantage of the analytical properties of the optimal solution to significantly reduce the computational effort. Computational experiments demonstrate the effectiveness of the approach in comparison with the pure iterative method.
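The projected gradient iteration that the method generalizes has the familiar form (schematic, our notation): starting from an admissible production-rate profile u^0, iterate

\[
u^{k+1} = \Pi_{U}\big( u^{k} - \alpha_k \nabla J(u^{k}) \big),
\]

where Π_U denotes projection onto the set of admissible controls (capacity and setup constraints) and α_k is a step size; exploiting the analytical properties of the optimal regimes to guide these steps is where the reported computational savings come from.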
19.
20.
We formulate a class of singular stochastic control problems with recursive utility, in which the cost function is determined by a backward stochastic differential equation. Some characteristics of the value function of the control problem are obtained by the method of approximation via penalization, and the optimal control process is constructed.
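Recursive utility here means that the cost is defined through a BSDE rather than a plain expectation; schematically (generic notation), for a control pair (u, ξ) with ξ the singular, nondecreasing component acting on the state dynamics,

\[
Y(t) = \Phi\big(x(T)\big) + \int_t^T f\big(s, x(s), Y(s), Z(s)\big)\,ds - \int_t^T Z(s)\,dW(s), \qquad J(u,\xi) = Y(0),
\]

and the approximation via penalization typically replaces the singular component by absolutely continuous controls whose cost is penalized with a weight growing in the approximation index.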