Similar Literature
Found 20 similar documents (search time: 31 ms)
1.
Optimal control is an important field of study both in theory and in applications, and stochastic optimal control is a significant branch of it. Based on the concept of an uncertain process, an uncertain optimal control problem is dealt with. Applying Bellman's principle of optimality, the principle of optimality for uncertain optimal control is obtained, and then a fundamental result, the equation of optimality in uncertain optimal control, is given. Finally, as an application, the equation of optimality is used to solve a portfolio selection model.

2.
Uncertainty theory is a branch of mathematics that provides a new tool for dealing with human uncertainty. Based on uncertainty theory, this paper proposes an optimistic value model of discrete-time linear quadratic (LQ) optimal control in which the state and control weighting matrices in the cost function are indefinite and the system dynamics are disturbed by uncertain noises. With the aid of Bellman's principle of optimality in dynamic programming, we first present a recurrence equation. Then, a necessary condition for the state feedback control of the indefinite LQ problem is derived by using the recurrence equation. Moreover, a sufficient condition for well-posedness of the indefinite LQ optimal control problem is given. Finally, a numerical example is presented using the obtained results.
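In the classical deterministic special case with positive-definite weights, the recurrence obtained from Bellman's principle reduces to the standard backward Riccati recursion. The following sketch illustrates that baseline only (the paper's indefinite, uncertain-noise setting requires its own recurrence); the system matrices are purely illustrative.

```python
import numpy as np

def lq_backward_recursion(A, B, Q, R, QN, N):
    """Backward Riccati recursion from Bellman's principle for
    x_{k+1} = A x_k + B u_k with cost sum_k (x'Qx + u'Ru) + x_N' QN x_N.
    Returns the initial cost matrix P_0 and the gains K_k (u_k = -K_k x_k)."""
    P = QN
    gains = []
    for _ in range(N):
        # minimizing the Bellman recurrence over u gives the feedback gain
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return P, gains[::-1]  # gains[k] applies at stage k

# illustrative double-integrator data (not from the paper)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2); R = np.array([[1.0]]); QN = np.eye(2)
P0, gains = lq_backward_recursion(A, B, Q, R, QN, 50)
```

By construction each iterate P stays symmetric, and for a long enough horizon the first gain approaches the stationary LQ feedback.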

3.
A vector-valued impulsive control problem is considered whose dynamics, defined by a differential inclusion, are such that the vector fields associated with the singular term do not satisfy the so-called Frobenius condition. A concept of robust solution based on a new reparametrization procedure is adopted in order to derive necessary conditions of optimality. These conditions are obtained by taking a limit of those for an appropriate sequence of auxiliary “standard” optimal control problems approximating the original one. An example to illustrate the nature of the new optimality conditions is provided.

4.
ABSTRACT

In this paper, we investigate optimal control problems for delayed doubly stochastic control systems. We first discuss the existence and uniqueness of the delayed doubly stochastic differential equation via the martingale representation theorem and the contraction mapping principle. As a necessary condition for optimal control, we deduce a stochastic maximum principle under certain assumptions. A sufficient condition of optimality is also obtained by the duality method. Finally, we apply our stochastic maximum principle to a class of linear quadratic optimal control problems and obtain the explicit expression of the optimal control.

5.
This note presents the optimal linear-quadratic (LQ) regulator for a linear system with multiple time delays in the control input. Optimality of the solution is proved in two steps. First, a necessary optimality condition is derived from the maximum principle. Then, the sufficiency of this condition is established by verifying that it satisfies the Hamilton-Jacobi-Bellman equation. Using an illustrative example, the performance of the obtained optimal regulator is compared against the performance of the optimal LQ regulator for linear systems without delays and some other feasible feedback regulators that are linear in the state variables. Finally, the note establishes a duality between the solutions of the optimal filtering problem for linear systems with multiple time delays in the observations and the optimal LQ control problem for linear systems with multiple time delays in the control input.
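The delay-free LQ regulator used as a benchmark in such comparisons can be sketched via the continuous-time algebraic Riccati equation, for which `scipy.linalg.solve_continuous_are` returns the stabilizing solution. The double-integrator data below are illustrative, not taken from the note.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Delay-free baseline: minimize the integral of (x'Qx + u'Ru)
# subject to x' = A x + B u (illustrative double integrator).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)   # stabilizing ARE solution
K = np.linalg.solve(R, B.T @ P)        # optimal gain, u = -K x
```

For this particular system the optimal gain works out in closed form to K = [1, sqrt(3)], which makes the sketch easy to sanity-check.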

6.
A finite horizon linear quadratic (LQ) optimal control problem is studied for a class of discrete-time linear fractional systems (LFSs) affected by multiplicative, independent random perturbations. Based on the dynamic programming technique, two methods are proposed for solving this problem. The first one seems to be new and uses a linear, expanded-state model of the LFS. The LQ optimal control problem reduces to a similar one for stochastic linear systems and the solution is obtained by solving Riccati equations. The second method appeals to the principle of optimality and provides an algorithm for the computation of the optimal control and cost by using directly the fractional system. As expected, in both cases, the optimal control is a linear function in the state and can be computed by a computer program. A numerical example and comparative simulations of the optimal trajectory prove the effectiveness of the two methods. Some other simulations are obtained for different values of the fractional order.

7.
In this contribution, we obtain a nonlinear controller for a class of nonlinear time-delay systems by using the inverse optimality approach. We avoid solving a Hamilton-Jacobi-Bellman-type equation and determining Bellman's functional by extending the inverse optimality approach for delay-free nonlinear systems to time-delay nonlinear systems. This is achieved by combining the control Lyapunov function framework with Lyapunov-Krasovskii functionals of complete type. Explicit formulas for an optimal control are obtained. The efficiency of the proposed method is illustrated via experimental results on a dehydration process whose model includes a delayed-state linear part and a delayed nonlinear part. To give evidence of the good performance of the proposed control law, an experimental comparison against an industrial Proportional-Integral-Derivative controller and an optimal linear controller is carried out. Additionally, experimental robustness tests are presented.

8.
In this paper, we consider an optimal control problem with retarded control and study a larger class of controls that are singular in the classical sense. Kelley-type and equality-type optimality conditions are obtained. To prove our main results, we use Legendre polynomials as control variations.

9.
Optimal Control of a Class of Hybrid Systems
The optimal control problem for a class of hybrid systems with state-dependent impulses is studied. Unlike traditional variational methods, the jump instants are converted into new parameters to be optimized, which yields necessary optimality conditions for the hybrid system and turns the optimal control problem into a boundary value problem that can be solved by numerical or analytical methods. Furthermore, using the theory of generalized differentials, the necessary optimality conditions are extended to a Fréchet-differential form. The results show that, on the continuous portions of the hybrid dynamics, the necessary conditions satisfied by the optimal solution coincide with those for conventional continuous systems; at the impulse points, the Hamiltonian satisfies a continuity condition while the costate satisfies certain jump conditions. Finally, two examples demonstrate the effectiveness of the method.

10.
For control systems whose dynamics obey a nonlinear regular integral Volterra equation with additional equality constraints, necessary optimality conditions are established on the basis of the abstract Yakubovich-Matveev theory of optimal control and, in particular, the abstract maximum principle. Two kinds of nonlinear controllable singular integral equations with unbounded multipliers under the integral are considered: one with a power kernel of Cauchy type and one with a logarithmic kernel. Attention is paid mostly to nonlinear controlled dynamic systems obeying a first-order integro-differential Volterra equation. As before, the study relies on the abstract theory of optimal control. The necessary optimality conditions are established by deriving the corresponding adjoint equation, transversality conditions, and maximum principle.

11.
In optimal control problems, the Hamiltonian function is given by the weighted sum of the integrand of the cost function and the dynamic equation. The coefficient multiplying the integrand of the cost function is either zero or one; if this coefficient is zero, the optimal control problem is called abnormal, and otherwise normal. This paper provides a characterization of abnormal optimal control problems for multi-body mechanical systems subject to external forces and moments and to holonomic and nonholonomic constraints. The study accounts not only for first-order necessary conditions, such as Pontryagin's principle, but also for higher-order conditions, which allow the analysis of singular optimal controls.

12.
An Optimal Control Model for the Search of a Randomly Moving Target
An optimal control model is proposed for the search of a randomly moving target undergoing Brownian motion in R^n. Analytic methods are used to study the optimal search problem, and the original problem is converted into an equivalent problem for a deterministic distributed-parameter system described by a second-order partial differential equation (the HJB equation). The HJB equation for the optimal search problem is derived, and it is proved that the solution of this equation is the sought optimal search strategy. On this basis, an algorithm for computing the optimal search strategy and a worked example are given.

13.
We consider the optimal control of feedback linearizable dynamical systems subject to mixed state and control constraints. In general, a linearizing feedback control does not minimize the cost function. Such problems arise frequently in astronautical applications where stringent performance requirements demand optimality over feedback linearizing controls. In this paper, we consider a pseudospectral (PS) method to compute optimal controls. We prove that a sequence of solutions to the PS-discretized constrained problem converges to the optimal solution of the continuous-time optimal control problem under mild and numerically verifiable conditions. The spectral coefficients of the state trajectories provide a practical method to verify the convergence of the computed solution. The proposed ideas are illustrated by several numerical examples.

14.
Singular Quadratic-Cost Optimal Control of a Class of Singular Time-Delay Systems
Using elementary algebraic equivalence transformations, the singular quadratic-cost optimal control problem for a class of singular delay systems is converted into a nonsingular quadratic-cost optimal control problem for a normal state-delay system, and the equivalence of the two problems is discussed. Under some standard conditions, the solution of the problem is given, and the optimal control is synthesized as optimal state feedback.

15.
To partially implement the idea of considering nonlinear optimal control problems immediately on the set of Pontryagin extremals (or on quasiextremals if the optimal solution does not exist), we introduce auxiliary functions of canonical variables, which we call bipositional, and the corresponding modified Lagrangian for the problem. The Lagrangian is subject to minimization on the trajectories of the canonical system from the Maximum Principle. This general approach is further specialized for nonconvex problems that are linear in state, leading to a nonstandard dual optimal control problem on the trajectories of the adjoint system. Applying the feedback minimum principle to both original and dual problems, we have obtained a pair of necessary optimality conditions that significantly strengthen the Maximum Principle and admit a constructive realization in the form of an iterative problem solving procedure. The general approach, optimality features, and the iterative solution procedure are illustrated by a series of examples.

16.
In this paper, we deal with the optimal control problem governed by the multidimensional modified Swift–Hohenberg equation. After showing the relationship between the control problem and its approximation, we derive the optimality conditions for an optimal control of our original problem by using one of the approximate problems.

17.
This paper is concerned with a partial-information mixed optimal stochastic continuous–singular control problem for a mean-field stochastic differential equation driven by Teugels martingales and an independent Brownian motion, where the Teugels martingales are a family of pairwise strongly orthonormal martingales associated with Lévy processes. The control variable has two components, the first being absolutely continuous and the second singular. Partial-information necessary and sufficient conditions of optimal continuous–singular control for these mean-field models are investigated. As an illustration, this paper studies a partial-information linear quadratic control problem of mean-field type involving continuous–singular control.

18.
In optimal control problems, any extremal arc which trivially satisfies the Maximum Principle, that is, along which a first-order control variation produces no change in cost, is called singular. Higher-order conditions are then needed to check the optimality of such arcs. Using the Volterra series associated with the variation of the cost functional gives a new context for analyzing singular optimal control problems. A basic optimality criterion for a fixed-terminal-time Mayer problem is obtained which allows one to derive the necessary conditions for optimality in terms of Lie brackets of vector fields associated with the dynamics of the problem.

19.
The problem of optimal full-order observers for continuous-time linear systems with colored process and measurement noises is considered. In such cases, optimal estimation of the state involves augmenting the system, and thus a higher-order observer is required. The structure of a full-order observer is assumed and necessary conditions for the optimal observer are derived. The conditions are given for the general case where the intensity of the white-noise component of the measurement noise may be singular. The solution consists of a modified Riccati equation and a Lyapunov equation coupled by two projection matrices in the singular case and one projection matrix in the nonsingular case.

20.
The optimal control of deterministic discrete time-invariant automaton-type systems is considered. Changes in the system’s state are governed by a recurrence equation. The switching times and their order are not specified in advance. They are found by optimizing a functional that takes into account the cost of each switching. This problem is a generalization of the classical optimal control problem for discrete time-invariant systems. It is proved that, in the time-invariant case, switchings of the optimal trajectory (possibly multiple instantaneous switchings) can occur only at the initial and/or terminal points in time. This fact is used in the derivation of equations for finding the value (Hamilton–Jacobi–Bellman) function and its generators. The necessary and sufficient optimality conditions are proved. It is shown that the generators of the value function in linear–quadratic problems are quadratic, and the value function itself is piecewise quadratic. Algorithms for the synthesis of the optimal closed-loop control are developed. The application of the optimality conditions is demonstrated by examples.
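The flavor of optimizing over switching sequences with a per-switch cost can be conveyed by a small dynamic program. The following is a hypothetical toy version only: a scalar two-mode system on a coarse state grid, not the automaton-type setting or the piecewise-quadratic value-function construction of the paper; all numbers are made up.

```python
import numpy as np

def switched_dp(a_modes, q, c_switch, x_grid, N):
    """Backward DP over (mode, grid state) for x_{k+1} = a_m * x_k with
    stage cost q*x**2 plus a fixed cost c_switch for every mode change.
    Successor states are snapped to the nearest grid point (a coarse sketch)."""
    n_modes = len(a_modes)
    V = np.zeros((n_modes, len(x_grid)))      # zero terminal cost
    for _ in range(N):
        V_new = np.empty_like(V)
        for m in range(n_modes):
            for i, x in enumerate(x_grid):
                candidates = []
                for m2, a in enumerate(a_modes):
                    j = int(np.abs(x_grid - a * x).argmin())
                    candidates.append(q * x * x + c_switch * (m2 != m) + V[m2, j])
                V_new[m, i] = min(candidates)
        V = V_new
    return V

x_grid = np.linspace(-1.0, 1.0, 21)
V = switched_dp(a_modes=[0.5, 1.5], q=1.0, c_switch=0.2, x_grid=x_grid, N=10)
```

Because switching is never free, the DP stays in the contracting mode once the state is small, which is the discrete analogue of switchings concentrating at the ends of the horizon.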
