Similar Documents
20 similar documents found.
1.
It is well documented (e.g. Zhou (1998) [8]) that near-optimal controls, as an alternative to the "exact" optimal controls, are of great importance for both theoretical analysis and practical applications, owing to their nice structure, broad availability, feasibility, and flexibility. However, to the best of our knowledge, near-optimality for stochastic recursive problems is a completely unexplored area, and this paper aims to fill that gap. As the theoretical result, a necessary condition and a sufficient condition for near-optimality of stochastic recursive problems are derived using Ekeland's variational principle. Moreover, we work out an ε-optimal control example to illustrate the application of the theoretical result. Our work extends that of [8], but in a rather different backward stochastic differential equation (BSDE) context.
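For orientation, the recursive cost functional in such problems is usually defined through a backward stochastic differential equation; the sketch below uses generic notation (state x, generator f, terminal cost Φ) and is not necessarily the exact formulation of the paper.

\[
Y_t = \Phi(x_T) + \int_t^T f(s, x_s, Y_s, Z_s, u_s)\,\mathrm{d}s - \int_t^T Z_s\,\mathrm{d}W_s,
\qquad J(u) := Y_0,
\]

and a control \(u^{\varepsilon}\) is called \(\varepsilon\)-optimal (near-optimal) when \(J(u^{\varepsilon}) \le \inf_{u} J(u) + \varepsilon\).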

2.
A new approach to indefinite stochastic linear quadratic (LQ) optimal control problems, called the "equivalent cost functional method", was introduced by Yu (2013) in the Hamiltonian-system setup. Another important issue along this research direction is the state feedback representation of the optimal control and the solvability of the associated indefinite stochastic Riccati equations. In response, this paper continues to develop the equivalent cost functional method by extending it to the Riccati equation setup. Our analysis is featured by the introduction of equivalent cost functionals that build a bridge between indefinite and positive-definite stochastic LQ problems. With this bridge, a solvability relation between the indefinite and positive-definite Riccati equations is further characterized. The solvability of the former is considerably more complicated than that of the latter, so this relation provides an alternative and useful viewpoint. Consequently, the corresponding indefinite LQ problem is discussed, and the unique optimal control is derived in state feedback form via the solution of the Riccati equation. In addition, an example is studied using our theoretical results.
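As background, a typical finite-horizon stochastic LQ problem with multiplicative noise and its Riccati equation have the following standard form; this generic sketch is for orientation only, and in the indefinite setting the weights Q, R, G need not be positive (semi)definite.

\[
\mathrm{d}x_t = (A x_t + B u_t)\,\mathrm{d}t + (C x_t + D u_t)\,\mathrm{d}W_t,
\qquad
J(u) = \mathbb{E}\Bigl[\int_0^T \bigl(x_t^{\top} Q x_t + u_t^{\top} R u_t\bigr)\,\mathrm{d}t + x_T^{\top} G x_T\Bigr],
\]
\[
\dot P + A^{\top}P + PA + C^{\top}PC + Q
- (PB + C^{\top}PD)\,(R + D^{\top}PD)^{-1}(B^{\top}P + D^{\top}PC) = 0,
\qquad P(T) = G,
\]

with candidate feedback \(u_t^{*} = -(R + D^{\top}P D)^{-1}(B^{\top}P + D^{\top}P C)\,x_t\) whenever \(R + D^{\top}P D\) is invertible.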

3.
International Journal of Computer Mathematics, 2012, 89(14): 3311-3327
In this article, singular optimal control for a stochastic linear singular system with a quadratic performance index is obtained using ant colony programming (ACP). To obtain the optimal control, the solution of the matrix Riccati differential equation is computed by solving the associated differential-algebraic equation with a novel, nontraditional ACP approach. The solution obtained by this method is equal or very close to the exact solution of the problem, and its accuracy is qualitatively better than that of the traditional Runge-Kutta method, with which it is compared. An illustrative numerical example is presented for the proposed method.
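The Runge-Kutta baseline mentioned in the abstract can be sketched as follows for a standard (non-singular) matrix Riccati differential equation \(-\dot P = A^{\top}P + PA - P B R^{-1} B^{\top} P + Q\), \(P(T)=Q_T\); this is an illustrative sketch with hypothetical data, not the paper's ACP method nor its singular-system formulation.

import numpy as np

def riccati_rhs(P, A, B, Q, R):
    # Riccati ODE written in reversed time s = T - t, where dP/ds = A'P + PA - P B R^{-1} B' P + Q.
    return A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T @ P) + Q

def riccati_rk4(A, B, Q, R, QT, T, steps=2000):
    # Classical 4th-order Runge-Kutta, integrating backward from P(T) = QT to an approximation of P(0).
    h = T / steps
    P = QT.astype(float)
    for _ in range(steps):
        k1 = riccati_rhs(P, A, B, Q, R)
        k2 = riccati_rhs(P + 0.5 * h * k1, A, B, Q, R)
        k3 = riccati_rhs(P + 0.5 * h * k2, A, B, Q, R)
        k4 = riccati_rhs(P + h * k3, A, B, Q, R)
        P = P + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return P

# Hypothetical example data for illustration only.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
P0 = riccati_rk4(A, B, Q, R, QT=np.zeros((2, 2)), T=5.0)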

4.
In this paper, we prove the existence and uniqueness of the solution for a class of multi-valued stochastic differential equations driven by G-Brownian motion (MSDEG) by means of the Yosida approximation method. Moreover, we establish an optimality principle for the associated stochastic control problem and prove that its value function is the unique viscosity solution of a class of nonlinear partial differential variational inequalities.

5.
We consider a stochastic control problem with linear dynamics with jumps, a convex cost criterion, and a convex state constraint, in which the control enters the drift, the diffusion, and the jump coefficients. We allow these coefficients to be random and do not impose any L^p-bounds on the control.

We obtain a stochastic maximum principle for this model that provides both necessary and sufficient conditions of optimality. This is the first version of the stochastic maximum principle that covers the consumption–investment problem in which there are jumps in the price system.
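For reference, controlled linear dynamics with jumps of the kind described above are commonly written as follows; the notation is generic and may differ in detail from the paper's model.

\[
\mathrm{d}x_t = \bigl(A_t x_t + B_t u_t + b_t\bigr)\,\mathrm{d}t
+ \bigl(C_t x_t + D_t u_t + \sigma_t\bigr)\,\mathrm{d}W_t
+ \int_{E}\bigl(E_t(e)\,x_{t-} + F_t(e)\,u_t + \gamma_t(e)\bigr)\,\tilde N(\mathrm{d}t,\mathrm{d}e),
\]

where \(W\) is a Brownian motion, \(\tilde N\) is a compensated Poisson random measure, and the (possibly random) coefficients multiply the control in the drift, the diffusion, and the jump term, as stated in the abstract.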


6.
We consider a problem of optimal control for a stochastic two-scale system that depends on a small positive parameter ε, where the stochastic differential equations describing the evolution of the fast variables degenerate to algebraic equations in the limit ε = 0. The model is nonlinear, but the fast components enter linearly. Our main result shows that, as ε tends to zero, the optimal value of the cost functional, which also includes the fast variables in the terminal pay-off, converges to the optimal value of a suitable reduced stochastic control problem. As a consequence, a nearly optimal control for the limit problem can be modified to become nearly optimal also for the prelimit problems when ε is sufficiently small. The work by Yuri M. Kabanov was performed during a stay in Padova supported by GNAFA/CNR.

7.
In this paper, we consider an optimal control problem for a stochastic system described by stochastic differential equations with delay. We obtain the maximum principle for the optimal control of this problem by virtue of the duality method and anticipated backward stochastic differential equations. Our results can be applied to a production and consumption choice problem, for which the explicit optimal consumption rate is obtained.
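Schematically, such results pair a state equation with delay with an adjoint equation of anticipated BSDE type; the generic forms below (constant delay δ, initial path φ, terminal data ξ and η) are given for orientation only and need not match the paper's exact assumptions.

\[
\mathrm{d}x_t = b\bigl(t, x_t, x_{t-\delta}, u_t\bigr)\,\mathrm{d}t
+ \sigma\bigl(t, x_t, x_{t-\delta}, u_t\bigr)\,\mathrm{d}W_t,
\qquad x_t = \varphi(t),\ t\in[-\delta,0],
\]
\[
-\mathrm{d}Y_t = f\bigl(t, Y_t, Z_t, \mathbb{E}[Y_{t+\delta}\mid\mathcal F_t], \mathbb{E}[Z_{t+\delta}\mid\mathcal F_t]\bigr)\,\mathrm{d}t - Z_t\,\mathrm{d}W_t,
\qquad Y_t=\xi_t,\ Z_t=\eta_t,\ t\in[T, T+\delta].
\]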

8.
Herein, we study the near-optimality of linear forward-backward stochastic control systems. As the theoretical results, sufficient and necessary conditions for near-optimality are established in the form of a Pontryagin-type stochastic maximum principle. As an illustration and practical application, an ε-optimal control example is worked out and solved using the theoretical results.

9.
This paper discusses the discrete-time stochastic linear quadratic (LQ) problem over an infinite horizon with state- and control-dependent noise, where the weighting matrices in the cost function are allowed to be indefinite. The problem gives rise to a generalized algebraic Riccati equation (GARE) that involves equality and inequality constraints. The well-posedness of the indefinite LQ problem is shown to be equivalent to the feasibility of a linear matrix inequality (LMI). Moreover, the existence of a stabilizing solution to the GARE is equivalent to the attainability of the LQ problem. All optimal controls are obtained in terms of the solution to the GARE. Finally, we give an LMI-based approach to solving the GARE via semidefinite programming.
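For a common system form \(x_{k+1} = A x_k + B u_k + (C x_k + D u_k)\,w_k\) with i.i.d. zero-mean, unit-variance noise \(w_k\) and cost \(\mathbb{E}\sum_k (x_k^{\top}Q x_k + u_k^{\top}R u_k)\), the GARE reads as below; in the indefinite setting the inverse is replaced by a pseudo-inverse together with the equality/inequality constraints mentioned above. This sketch is for orientation only, not a restatement of the paper's exact equation.

\[
P = A^{\top}PA + C^{\top}PC + Q
- \bigl(A^{\top}PB + C^{\top}PD\bigr)\bigl(R + B^{\top}PB + D^{\top}PD\bigr)^{-1}\bigl(B^{\top}PA + D^{\top}PC\bigr),
\]

with the associated feedback \(u_k = -\bigl(R + B^{\top}PB + D^{\top}PD\bigr)^{-1}\bigl(B^{\top}PA + D^{\top}PC\bigr)x_k\).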

10.
The control of stochastic uncertain systems with multiplicative noise has a wide range of applications. This paper surveys the stability analysis, mean-square stabilization, optimal control, and optimal estimation problems for linear discrete-time stochastic systems with multiplicative noise, together with the related results. It then studies the mean-square stabilization and optimal control problems for linear multivariable discrete-time systems with state- and control-dependent multiplicative noise, analyzes the connection between these two problems, and discusses a design algorithm for the optimal state feedback controller.

11.
This paper presents a stochastic adaptive control algorithm which is shown to possess the following properties when applied to a possibly unstable, inverse-stable, linear stochastic system with unknown parameters, whenever that system satisfies a certain positive real condition on its (moving average) noise dynamics. 1) The adaptive control part of the algorithm stabilizes and asymptotically optimizes the behavior of the system, in the sense that the (limit of the) sample mean-square variation of the output around a given demand level equals that of a minimum variance control strategy implemented with known parameters. This optimal behavior is subject to an offset μ², where μ² is the variance of a dither signal added to the control action in order to produce a "continually disturbed control." For μ² > 0, it is shown that the input-output process satisfies a persistent excitation property and hence, subject to a simple identifiability condition, the next property holds. 2) The observed input and output of the controlled system may be taken as inputs to an approximate maximum likelihood (AML) algorithm which generates strongly consistent estimates of the system's parameters. Results are presented for both the scalar and multivariable cases.
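Property 1) can be summarized by the following limit, where \(y_t\) is the output, \(y^{*}\) the demand level, \(\sigma^{2}\) the mean-square variation achieved by minimum variance control with known parameters, and \(\mu^{2}\) the dither variance; this is a paraphrase of the abstract, not a quotation of the paper's theorem.

\[
\lim_{N\to\infty}\frac{1}{N}\sum_{t=1}^{N}\bigl(y_t - y^{*}\bigr)^{2} \;=\; \sigma^{2} + \mu^{2}\qquad\text{a.s.}
\]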

12.
This paper considers the boundary control problem for linear stochastic reaction-diffusion systems with Neumann boundary conditions. First, when the full-domain system states are accessible, a boundary control is designed, and a sufficient condition is established to ensure the mean-square exponential stability of the resulting closed-loop system. Next, when the full-domain system states are not available, an observer-based control is proposed such that the underlying closed-loop system is stable. Furthermore, an observer-based controller is designed for systems with an H∞ performance requirement. Simulation examples are given to demonstrate the effectiveness and potential of the new design techniques.
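A one-dimensional instance of the class of systems described above can be written as follows; the specific coefficients, the scalar noise, and the boundary on which the control acts are assumptions made purely for illustration.

\[
\mathrm{d}z(x,t) = \Bigl(\theta\,\frac{\partial^{2} z}{\partial x^{2}}(x,t) + a\,z(x,t)\Bigr)\,\mathrm{d}t + c\,z(x,t)\,\mathrm{d}w(t),
\qquad x\in(0,L),
\]
\[
\frac{\partial z}{\partial x}(0,t) = 0,
\qquad
\frac{\partial z}{\partial x}(L,t) = u(t)\quad\text{(Neumann boundary control)},
\]

and mean-square exponential stability of the closed loop means \(\mathbb{E}\|z(\cdot,t)\|_{L^{2}}^{2} \le M e^{-\lambda t}\,\mathbb{E}\|z(\cdot,0)\|_{L^{2}}^{2}\) for some constants \(M,\lambda>0\).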

13.
In this paper we prove necessary conditions for optimality of a stochastic control problem for a class of stochastic partial differential equations controlled through the boundary. This kind of problem can be interpreted as a stochastic control problem for an evolution system in a Hilbert space. The regularity of the solution of the adjoint equation, which is a backward stochastic equation in infinite dimensions, plays a crucial role in the formulation of the maximum principle.

14.
15.
The sampled-data (SD) based linear quadratic (LQ) control problem for stochastic linear continuous-time (LCT) systems is discussed. Two types of systems are involved: one is time-invariant and the other is time-varying. In addition to the stability analysis of the closed-loop systems, the index difference between SD-based LQ control and conventional LQ control is investigated. It is shown that when the sampling period ΔT is small, so is the index difference. In addition, upper bounds on the differences are presented, which are O(ΔT²) and O(ΔT), respectively.

16.
We consider the optimal guidance of an ensemble of independent, structurally identical, finite-dimensional stochastic linear systems with variation in their system parameters, between initial and target states of interest, by applying a common control function without the use of feedback. Our exploration of such ensemble control systems is motivated by practical control design problems in which variation in system parameters and stochastic effects must be compensated for when state feedback is unavailable, such as in pulse design for nuclear magnetic resonance spectroscopy and imaging. In this paper, we extend the notion of ensemble control to stochastic linear systems with additive noise and jumps, which we model using white Gaussian noise and Poisson counters, respectively, and investigate the optimal steering problem. In our main result, we prove that the minimum-norm solution to a Fredholm integral equation of the first kind provides the optimal control that simultaneously minimizes the mean square error (MSE) and the error in the mean of the terminal state. The optimal controls are generated numerically for several example ensemble control problems, and Monte Carlo simulations are used to illustrate their performance. This work has immediate applications to the control of dynamical systems with parameter dispersion or uncertainty that are subject to additive noise, which are of interest in quantum control, neuroscience, and sensorless robotic manipulation.
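In the noise-free, single-system special case, the steering condition is exactly a Fredholm integral equation of the first kind, and its minimum-norm solution is the classical Gramian-based control; the textbook sketch below is that special case, not the ensemble/stochastic result of the paper.

\[
\int_{0}^{T}\Phi(T,s)\,B(s)\,u(s)\,\mathrm{d}s = \xi,
\qquad
u^{*}(s) = B^{\top}(s)\,\Phi^{\top}(T,s)\,W^{-1}\xi,
\qquad
W = \int_{0}^{T}\Phi(T,s)B(s)B^{\top}(s)\Phi^{\top}(T,s)\,\mathrm{d}s,
\]

where \(\Phi\) is the state transition matrix, \(\xi = x_{\mathrm{f}} - \Phi(T,0)x_{0}\) is the required net displacement, and \(u^{*}\) is the control of minimum \(L^{2}\)-norm among all controls achieving the transfer.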

17.
In this paper, we investigate optimal control problems for delayed doubly stochastic control systems. We first discuss the existence and uniqueness of the solution to the delayed doubly stochastic differential equation via the martingale representation theorem and the contraction mapping principle. As a necessary condition for optimality, we deduce a stochastic maximum principle under suitable assumptions. At the same time, a sufficient condition of optimality is obtained by using the duality method. At the end of the paper, we apply our stochastic maximum principle to a class of linear quadratic optimal control problems and obtain the explicit expression of the optimal control.

18.
In this paper, the preview control problem for a class of linear continuous-time stochastic systems with multiplicative noise is studied based on the augmented error system method. First, a deterministic assistant system is introduced and the original system is translated into it; an integrator is then employed to ensure that the output of the closed-loop system tracks the reference signal accurately. Second, the augmented error system, which includes the integrator vector, the control vector, and the reference signal, is constructed from the translated system. As a result, the tracking problem is transformed into an optimal control problem for the augmented error system, and the optimal control input is obtained by dynamic programming; this input serves as the preview controller for the original system. For a linear stochastic system with multiplicative noise, the difficulty that an augmented error system cannot be constructed by the derivation method is overcome in this paper, and the existence and uniqueness of the solution to the Riccati equation corresponding to the stochastic augmented error system are discussed. Numerical simulations show that the preview controller designed in this paper is very effective.

19.
This paper aims at characterizing the most destabilizing switching law for discrete-time switched systems governed by a set of bounded linear operators. The switched system is embedded in a special class of discrete-time bilinear control systems. This allows us to apply the variational approach to the bilinear control system associated with a Mayer-type optimal control problem, and a second-order necessary optimality condition is derived. The optimal equivalence between the bilinear system and the switched system is analyzed, which shows that any optimal control law can be equivalently expressed as a switching law. This specific switching law is the most destabilizing one for the switched system and thus can be used to determine stability under arbitrary switching. Based on the second-order moment of the state, the proposed approach is applied to analyze the uniform mean-square stability of discrete-time switched linear stochastic systems. Numerical simulations are presented to verify the usefulness of the theoretical results.
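The embedding described above typically takes the following form; the notation is generic and assumed here only for illustration.

\[
x_{k+1} = A_{\sigma(k)}\,x_k,\ \ \sigma(k)\in\{1,\dots,N\}
\qquad\leadsto\qquad
x_{k+1} = \Bigl(\sum_{i=1}^{N} u_i(k)\,A_i\Bigr)x_k,\ \ u(k)\in\Delta_N,
\]

where \(\Delta_N\) is the unit simplex; vertex-valued controls \(u(k)\) recover exactly the switching laws, so a Mayer-type optimal control problem for the bilinear system captures the worst-case switching behavior. For the mean-square analysis, the abstract indicates that the same construction is applied to the second-order moment \(X_k = \mathbb{E}[x_k x_k^{\top}]\) of the state.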

20.
严志国, 张国山. Control and Decision (控制与决策), 2011, 26(8): 1224-1228
This paper discusses the finite-time H∞ control problem for a class of linear stochastic systems subject to time-varying, energy-bounded external disturbances. First, the finite-time H∞ control problem for linear stochastic systems is defined. Then, by constructing a Lyapunov-Krasovskii functional and combining it with linear matrix inequalities, a sufficient condition for the existence of a finite-time H∞ controller for the stochastic system is given. Furthermore, the problem is reduced to an optimization problem with linear matrix inequality constraints, and a corresponding solution algorithm is provided. Finally, a numerical example illustrates the effectiveness of the proposed design method.
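For orientation, the standard definition behind finite-time H∞ control combines finite-time boundedness with an H∞ disturbance-attenuation requirement; the sketch below is the usual deterministic formulation with generic constants, and the stochastic version studied in the paper replaces the quadratic forms with their expectations.

\[
x^{\top}(0)\,R\,x(0) \le c_1 \;\Longrightarrow\; x^{\top}(t)\,R\,x(t) < c_2,\quad \forall\, t \in [0, T],
\]
\[
\int_0^T z^{\top}(t)\,z(t)\,\mathrm{d}t < \gamma^{2}\int_0^T w^{\top}(t)\,w(t)\,\mathrm{d}t
\quad\text{for all admissible disturbances } w,
\]

where \(R \succ 0\), \(0 < c_1 < c_2\), \(z\) is the controlled output, and \(\gamma > 0\) is the prescribed attenuation level.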
