Similar Documents
20 similar documents found.
1.
To partially implement the idea of considering nonlinear optimal control problems directly on the set of Pontryagin extremals (or on quasiextremals if an optimal solution does not exist), we introduce auxiliary functions of the canonical variables, which we call bipositional, and a corresponding modified Lagrangian for the problem. The Lagrangian is minimized over the trajectories of the canonical system from the Maximum Principle. This general approach is then specialized to nonconvex problems that are linear in the state, leading to a nonstandard dual optimal control problem posed on the trajectories of the adjoint system. Applying the feedback minimum principle to both the original and the dual problem, we obtain a pair of necessary optimality conditions that significantly strengthen the Maximum Principle and admit a constructive realization in the form of an iterative problem-solving procedure. The general approach, the optimality features, and the iterative solution procedure are illustrated by a series of examples.
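For reference, the canonical system from the Maximum Principle mentioned above has the standard form (with H the Pontryagin function, x the state, and ψ the adjoint variable):

$$\dot x = \frac{\partial H}{\partial \psi}(x,\psi,u), \qquad \dot\psi = -\frac{\partial H}{\partial x}(x,\psi,u), \qquad u(t) \in \arg\max_{v \in U} H(x(t),\psi(t),v).$$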

2.
An optimal control problem is considered for a system described by two-dimensional nonlinear difference equations of Volterra type, a discrete analogue of the two-dimensional Volterra integral equation. The first and second variations of the quality functional are calculated under the assumption that the control domain is open. They are used to obtain an analogue of the Euler equation and to derive second-order necessary optimality conditions that can be checked constructively.
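To fix ideas, here is a minimal sketch of a controlled discrete Volterra-type system, one-dimensional for brevity (the abstract above treats the two-dimensional case); the kernel K, the control u, and the recursion form are hypothetical illustration choices, not taken from the paper.

```python
# Hypothetical 1-D discrete Volterra-type controlled system:
#   x[t] = x0 + sum_{s=0}^{t-1} K(t, s) * (x[s] + u[s])
# Each new state depends on the entire history, which is what
# distinguishes Volterra-type recursions from ordinary difference equations.

def simulate_volterra(x0, u, K):
    """Roll the discrete Volterra recursion forward over len(u) steps."""
    x = [x0]
    for t in range(1, len(u) + 1):
        x.append(x0 + sum(K(t, s) * (x[s] + u[s]) for s in range(t)))
    return x

# Constant kernel, zero control: each state accumulates all past states.
xs = simulate_volterra(1.0, [0.0, 0.0, 0.0], lambda t, s: 0.5)
# xs == [1.0, 1.5, 2.25, 3.375]
```

Variations of a cost functional over such a recursion would be computed by perturbing u and propagating the perturbation through the whole history, which is why the resulting Euler-equation analogue involves sums over past times.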

3.
Linear-quadratic Bolza problems of optimal control with variable end points are considered. Under the strengthened Legendre condition, necessary and sufficient optimality conditions are established, and it is shown that the linear-quadratic Bolza problem of optimal control can be reduced to a quadratic minimization problem in a finite-dimensional space. Simple situations in which solutions of a nonlinear problem can be recovered from solutions of the accessory linear-quadratic problem are indicated. Conjectures regarding sufficient conditions for optimality in nonlinear Bolza problems are included.
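The reduction to a finite-dimensional quadratic minimization can be illustrated on a toy two-step discrete LQ problem (the paper treats continuous-time Bolza problems; the dynamics and cost below are hypothetical):

```python
# Toy discrete LQ problem:
#   x1 = x0 + u0,  x2 = x1 + u1,
#   J(u0, u1) = u0^2 + u1^2 + x1^2 + x2^2  ->  min.
# After substituting the dynamics, J is a quadratic function of the
# finite-dimensional control vector (u0, u1).  Setting its gradient to
# zero gives the 2x2 linear system
#   3*u0 + 1*u1 = -2*x0
#   1*u0 + 2*u1 = -1*x0

def solve_lq(x0):
    """Minimize the toy quadratic cost via Cramer's rule on the system above."""
    a, b, c, d = 3.0, 1.0, 1.0, 2.0
    r1, r2 = -2.0 * x0, -1.0 * x0
    det = a * d - b * c            # = 5, positive definite Hessian
    u0 = (r1 * d - b * r2) / det
    u1 = (a * r2 - r1 * c) / det
    return u0, u1

u0, u1 = solve_lq(5.0)
# u0 == -3.0, u1 == -1.0; trajectory x1 = 2, x2 = 1, cost J = 15
```

The same pattern scales to longer horizons: eliminating the states turns the LQ problem into an unconstrained quadratic program in the controls alone.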

5.
In optimal control problems, any extremal arc that satisfies the Maximum Principle trivially, in the sense that a first-order control variation produces no change in cost, is called singular. Higher-order conditions are then needed to check the optimality of such arcs. Using the Volterra series associated with the variation of the cost functional provides a new context for analyzing singular optimal control problems. A basic optimality criterion for a fixed-terminal-time Mayer problem is obtained, which allows one to derive necessary conditions for optimality in terms of Lie brackets of vector fields associated with the dynamics of the problem.
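For reference, the Lie bracket of two vector fields f and g that appears in such higher-order conditions is the standard commutator

$$[f,g](x) = \frac{\partial g}{\partial x}(x)\,f(x) - \frac{\partial f}{\partial x}(x)\,g(x),$$

which measures the failure of the flows of f and g to commute and hence captures second-order effects of control variations.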

6.
For nonlinear affine control systems with unbounded controls, necessary high-order optimality conditions are derived. These conditions are stated in the form of a high-order maximum principle and are expressed in terms of Lie brackets and Newton diagrams.

8.
A class of optimal control problems is considered for systems governed by linear dynamic equations on time scales with a quadratic cost functional. Using the Lebesgue Δ-integral theory and the Sobolev-type space H1 on time scales, weak solutions of the linear dynamic equations are introduced for both the initial value problem and the backward problem, and the necessary conditions of optimality are then derived. Some typical examples are given for demonstration.

9.
Satisfaction of the generalized Legendre-Clebsch condition is known to be a necessary condition of optimality in singular control problems. Recently, a new additional necessary condition of optimality was discovered. In this note, we demonstrate by means of an example that together these two necessary conditions are, in general, insufficient for optimality in singular control problems.
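For reference, along a singular arc of order q with scalar control u, the generalized Legendre-Clebsch (Kelley) condition takes the usual form

$$(-1)^q \,\frac{\partial}{\partial u}\left[\frac{d^{2q}}{dt^{2q}}\,\frac{\partial H}{\partial u}\right] \ge 0,$$

where H is the Pontryagin function; for q = 0 this reduces to the ordinary Legendre-Clebsch condition.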

10.
A class of discrete control problems described only by locally Lipschitz functions is studied from the point of view of necessary optimality conditions. Moreover, it is assumed that state and control constraints are given implicitly as general sets, approximated by the respective ‘generalized’ tangent or normal cone. To investigate these types of non-differentiable optimization problems, some basic facts of so-called nonsmooth analysis have to be applied. The crucial role is played by the ‘generalized’ gradient of a locally Lipschitz function. Using these concepts, one is able to formulate necessary optimality conditions in a fairly general setting. However, to obtain necessary conditions in a more familiar form, an alternative definition of a partial generalized gradient is explored. Some special cases of discrete control problems are studied separately. In addition, the maximum principle formulation of the necessary conditions is investigated, and a question of possible extensions is briefly discussed.

11.
For problems of optimal control of discrete systems, necessary global optimality conditions are established without convexity assumptions. These conditions are based on feedback controls constructed from weakly monotone solutions of the Hamilton-Jacobi inequality.
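Schematically, in discrete time a weakly monotone (nonincreasing) function φ for the system x(t+1) = f(t, x(t), u(t)) satisfies, along admissible trajectories,

$$\min_{u \in U}\,\bigl[\varphi\bigl(t+1,\, f(t,x,u)\bigr) - \varphi(t,x)\bigr] \le 0,$$

that is, from every state some admissible control keeps φ from increasing; the notation here is illustrative rather than taken from the paper.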

12.
A vector-valued impulsive control problem is considered whose dynamics, defined by a differential inclusion, are such that the vector fields associated with the singular term do not satisfy the so-called Frobenius condition. A concept of robust solution based on a new reparametrization procedure is adopted in order to derive necessary conditions of optimality. These conditions are obtained by taking a limit of those for an appropriate sequence of auxiliary “standard” optimal control problems approximating the original one. An example to illustrate the nature of the new optimality conditions is provided.

13.
A discrete system that models the operation of a dynamic automaton with memory is considered. Unlike ordinary models of discrete systems, in which the states are changed (switched) at prescribed instants of time, automaton-type systems may change their states at arbitrary instants. Furthermore, the choice of the instants at which the automaton “fires” is a control resource and is itself subject to optimization. Sufficient optimality conditions for such systems under a limited or unlimited number of switchings are proved. Equations for the synthesis of the optimal closed-loop control are derived. Application of the optimality conditions is illustrated by examples.
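The idea of treating the switching instant itself as the optimization variable can be sketched on a hypothetical toy problem (the dynamics, cost, and grid search below are illustration choices, not from the paper):

```python
# Toy problem: a scalar integrator that switches its rate once.
#   dx/dt = +1 before the switch at time tau, -1 after it,
#   x(0) = 0, horizon T = 1; choose tau to minimize |x(T)|.
# Integrating exactly gives x(T) = tau - (T - tau) = 2*tau - 1.

def terminal_cost(tau, T=1.0):
    """Terminal miss |x(T)| as a function of the switching instant."""
    return abs(tau - (T - tau))

# The switching instant is the only decision variable, so a simple
# grid search over candidate instants suffices here.
taus = [i / 100 for i in range(101)]
best_tau = min(taus, key=terminal_cost)
# best_tau == 0.5: switching exactly halfway returns the state to the origin
```

In the paper's setting the switching instants enter genuine sufficient optimality conditions and closed-loop synthesis equations; this sketch only shows why the instants themselves form a control resource.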

14.
A simple, direct method is presented for deriving necessary conditions of optimality covering most of the situations encountered in practice. The method is based on the observation that, if generalized functions are admitted, a broad class of optimal control problems can be transcribed into a canonical form called the Canonical Variational Problem (CVP). The necessary optimality condition for CVP is phrased in the form of a Lagrange multiplier rule. This condition contains many of the results relating to minimum principles in the optimal control literature.

16.
A centered difference method for the boundary value problems arising as necessary conditions for hereditary control problems is given. Convergence and convergence rates are established, and numerical examples are presented and compared with other results in the literature.
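A minimal centered-difference sketch on a standard model problem (the paper's boundary value problems come from hereditary control; the equation below, y'' = -π² sin(πx) with y(0) = y(1) = 0 and exact solution sin(πx), is used here purely to illustrate the scheme and its second-order accuracy):

```python
import math

def solve_bvp(n):
    """Solve y'' = f on n interior points with the 3-point centered stencil,
    homogeneous Dirichlet data, via the Thomas (tridiagonal) algorithm."""
    h = 1.0 / (n + 1)
    f = [-math.pi ** 2 * math.sin(math.pi * (i + 1) * h) for i in range(n)]
    # Tridiagonal system with rows (1, -2, 1)/h^2: forward sweep.
    cp = [0.0] * n                     # modified superdiagonal
    dp = [0.0] * n                     # modified right-hand side
    cp[0] = 1.0 / -2.0
    dp[0] = h * h * f[0] / -2.0
    for i in range(1, n):
        m = -2.0 - cp[i - 1]
        cp[i] = 1.0 / m
        dp[i] = (h * h * f[i] - dp[i - 1]) / m
    # Back substitution.
    y = [0.0] * n
    y[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        y[i] = dp[i] - cp[i] * y[i + 1]
    return y

y = solve_bvp(49)          # h = 0.02; the midpoint x = 0.5 is index 24
# y[24] approximates sin(pi/2) = 1 to O(h^2)
```

Halving h should reduce the midpoint error by roughly a factor of four, which is the second-order convergence rate the abstract refers to.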

17.
This work presents a new formulation for obtaining the optimal control of systems with polynomial nonlinearities by functional analytic methods. The polynomial system is reduced to a quadratic form by introducing a set of pseudostate variables, and the structure of the system is characterized by a set of matrices. A new set of optimizing equations is obtained, and the necessary and sufficient conditions are met by the positive definiteness requirement on the system matrix.
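The pseudostate reduction can be sketched on a hypothetical scalar example: the cubic system dx/dt = -x³ becomes quadratic in the augmented state (x, z) after introducing the pseudostate z = x², since then dx/dt = -x·z and dz/dt = 2x·(dx/dt) = -2z². The system, step size, and comparison below are illustration choices, not from the paper.

```python
def euler_cubic(x0, dt, steps):
    """Forward-Euler integration of the original cubic system dx/dt = -x^3."""
    x = x0
    for _ in range(steps):
        x += dt * (-x ** 3)
    return x

def euler_quadratic(x0, dt, steps):
    """Forward-Euler integration of the quadratized system
    dx/dt = -x*z, dz/dt = -2*z^2 with pseudostate z = x^2."""
    x, z = x0, x0 ** 2                 # pseudostate initialized consistently
    for _ in range(steps):
        x, z = x + dt * (-x * z), z + dt * (-2.0 * z ** 2)
    return x

a = euler_cubic(1.0, 0.001, 1000)
b = euler_quadratic(1.0, 0.001, 1000)
# both approximate the exact value x(1) = 1/sqrt(3) ~ 0.577
```

The payoff of the quadratization is structural: the right-hand side is now bilinear in the augmented state, so the system can be described by a fixed set of matrices, which is what the optimizing equations in the abstract exploit.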

18.
A canonical theory of necessary and sufficient conditions for global optimality is developed, based on sets of nonsmooth solutions of differential Hamilton-Jacobi inequalities for two classes of Lyapunov-type functions, weakly and strongly monotone. These functions make it possible to estimate the objective functional of the optimal control problem from above and below and to determine internal and external approximations of the reachability set of the controlled dynamic system.

19.
Sufficient optimality conditions for the control of discrete deterministic systems with incomplete state-vector feedback are formulated and proved, and relations for determining the desired control are obtained. Application of these relations is demonstrated by examples.
