Similar documents
A total of 20 similar documents were retrieved (search time: 31 ms).
1.
We study distributed optimal control problems governed by time-fractional parabolic equations with time-dependent coefficients on metric graphs, where the fractional derivative is considered in the Caputo sense. Using the Galerkin method and compactness results for the spatial part, and approximating the kernel of the time-fractional Caputo derivative by a sequence of more regular kernel functions, we first prove the well-posedness of the system. We then turn to the existence and uniqueness of solutions of the distributed optimal control problem. By means of the Lagrange multiplier method, we develop an adjoint calculus for the right Caputo derivative and derive the corresponding first-order optimality system. Moreover, we propose a finite difference scheme to find the approximate solution of the state equation and the resulting optimality system on metric graphs. Finally, examples on two different graphs illustrate the performance of the proposed difference scheme.
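
As an illustration of the kind of time stepping involved, the following is a minimal sketch, not the authors' scheme (whose vertex coupling conditions and time-dependent coefficients are more involved): the standard L1 discretization of the Caputo derivative of order alpha in (0,1), combined with an implicit centered difference in space, for D_t^alpha u = u_xx on a single edge [0, L] with homogeneous Dirichlet conditions at the two vertices. Grid sizes, initial condition and parameter values are hypothetical.

import numpy as np
from math import gamma

def l1_weights(alpha, n):
    # L1 quadrature weights b_j = (j+1)^(1-alpha) - j^(1-alpha), j = 0..n-1
    j = np.arange(n)
    return (j + 1.0) ** (1.0 - alpha) - j ** (1.0 - alpha)

def time_fractional_heat(alpha=0.7, L=1.0, T=1.0, nx=50, nt=100):
    dx, dt = L / nx, T / nt
    x = np.linspace(0.0, L, nx + 1)
    u = np.zeros((nt + 1, nx + 1))
    u[0] = np.sin(np.pi * x)                          # hypothetical initial state
    c = gamma(2.0 - alpha) * dt ** alpha              # scaling of the L1 formula
    A = (np.diag(-2.0 * np.ones(nx - 1))
         + np.diag(np.ones(nx - 2), 1)
         + np.diag(np.ones(nx - 2), -1)) / dx ** 2    # discrete Laplacian, interior nodes
    M = np.eye(nx - 1) - c * A                        # implicit step matrix
    for n in range(1, nt + 1):
        b = l1_weights(alpha, n)
        # history term: b_{n-1} u^0 + sum_{m=1}^{n-1} (b_{n-m-1} - b_{n-m}) u^m
        rhs = b[n - 1] * u[0, 1:-1]
        for m in range(1, n):
            rhs = rhs + (b[n - m - 1] - b[n - m]) * u[m, 1:-1]
        u[n, 1:-1] = np.linalg.solve(M, rhs)
    return x, u

x, u = time_fractional_heat()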

2.
In this paper we study fully discrete mixed finite element methods for a quadratic convex optimal control problem governed by semilinear parabolic equations. The space discretization of the state variable uses standard mixed finite elements, whereas the time discretization is based on difference methods. The state and the co-state are approximated by the lowest-order Raviart–Thomas mixed finite element spaces, and the control is approximated by piecewise constant elements. By applying error estimate techniques for mixed finite element methods, we derive a priori error estimates for both the coupled state and the control approximation. Finally, we present a numerical example which confirms our theoretical results.

3.
In this paper, perceptron neural networks are applied to approximate the solution of fractional optimal control problems. The necessary (and in most cases also sufficient) optimality conditions are stated in the form of a fractional two-point boundary value problem, which is then converted to a Volterra integral equation. Exploiting the ability of perceptron neural networks to approximate nonlinear functions, we first propose approximating functions for the control, state and co-state that satisfy the initial or boundary conditions. The approximating functions contain a neural network with unknown weights. Using an optimization approach, the weights are adjusted so that the approximating functions satisfy the optimality conditions of the fractional optimal control problem. Numerical results illustrate the advantages of the method.
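
A toy illustration of this trial-function idea, under simplifying assumptions (a one-hidden-layer perceptron, an integer-order test equation x' = -x in place of the fractional optimality system, hypothetical network size and collocation grid): the approximating function x_hat(t) = x0 + t*N(t; w) satisfies the initial condition by construction, and the weights are adjusted by minimizing the residual of the differential condition at collocation points.

import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 1.0, 25)        # collocation points
x0, H = 1.0, 8                       # initial condition, number of hidden units

def net(w, t):
    W, b, v = w[:H], w[H:2*H], w[2*H:]       # unpack perceptron weights
    z = np.tanh(np.outer(t, W) + b)          # hidden layer
    return z @ v

def dnet_dt(w, t):
    W, b, v = w[:H], w[H:2*H], w[2*H:]
    z = np.tanh(np.outer(t, W) + b)
    return (1.0 - z ** 2) @ (W * v)          # d/dt of the network output

def residual(w):
    x_hat = x0 + t * net(w, t)               # satisfies x_hat(0) = x0 by construction
    dx_hat = net(w, t) + t * dnet_dt(w, t)
    return np.sum((dx_hat + x_hat) ** 2)     # residual of x' = -x at collocation points

w_opt = minimize(residual, np.random.default_rng(0).normal(size=3 * H)).x
print(np.max(np.abs(x0 + t * net(w_opt, t) - np.exp(-t))))   # compare to exact solution e^{-t}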

4.
An optimal control problem is studied for distributed systems governed by nonlinear parabolic PDEs with state constraints. The state equation is monotone in the state variable and nonlinear in the control variable. The constraints and the cost functional are not necessarily convex. Relaxed controls are used to prove the existence of an optimal control. Moreover, a minimum principle of relaxed optimality is established.

5.
In this note we consider an optimal scanning control problem for parabolic systems in a one-dimensional space. It is shown that for the optimal regulator problem involving the minimization of a given final-time quadratic cost functional there exist optimal locations for the scanning control. Conditions for optimality are given. Also, it is shown that for the heat equation the "best" location of the control cannot be a fixed point in the spatial domain of the system.

6.
This paper presents a method for finding the optimal control of linear systems with delays in state and input and a quadratic cost functional, using an orthonormal basis for square-integrable functions. The state and control variables and their delays are expanded in an orthonormal basis with unknown coefficients. Using operational matrices of integration, delay and product, the relation between the coefficients of the variables is obtained. The necessary condition of optimality is then derived as a linear system of algebraic equations in the unknown coefficients of the state and control variables. As an application of this method, the linear Legendre multiwavelets as an orthonormal basis for L2[0, 1] are used. Two time-delayed optimal control problems are approximated to establish the usefulness of the method.
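
The operational-matrix idea can be sketched numerically as follows (assumptions: orthonormal shifted Legendre polynomials on [0, 1] rather than the paper's linear Legendre multiwavelets, and a projection computed by least squares on a fine grid): the matrix P satisfies int_0^t Phi(s) ds ≈ P Phi(t), which is what turns integrals of the expanded state, control and delay terms into algebraic relations among the unknown coefficients.

import numpy as np
from numpy.polynomial import legendre

M = 6                                     # number of basis functions
t = np.linspace(0.0, 1.0, 2001)           # fine grid for the projection

def phi(k):
    # k-th shifted Legendre polynomial on [0, 1], normalized to unit L2 norm
    c = np.zeros(k + 1)
    c[k] = 1.0
    return np.sqrt(2.0 * k + 1.0) * legendre.legval(2.0 * t - 1.0, c)

def antiderivative(row):
    # trapezoidal cumulative integral starting from 0
    inc = 0.5 * (row[1:] + row[:-1]) * np.diff(t)
    return np.concatenate(([0.0], np.cumsum(inc)))

Phi = np.array([phi(k) for k in range(M)])                   # M x len(t) samples of the basis
integrals = np.array([antiderivative(row) for row in Phi])   # sampled antiderivatives
P = integrals @ np.linalg.pinv(Phi)                          # least-squares operational matrix of integration
print(np.max(np.abs(P[0] @ Phi - t)))                        # sanity check: the integral of phi_0 = 1 is t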

7.
By applying differential form theory, we consider the singular control problem for non-linear systems with control variables appearing linearly in both the system dynamics and the performance index. First, we derive necessary conditions of singular optimality for a single-input system, including the relation to the Euler-Poisson equation and to the generalized Legendre-Clebsch condition. Defining the degree of singularity, we develop necessary conditions satisfied by the singular trajectory embedded in a reduced space. For a time-invariant system, we clarify the relation between the dynamic and the related static optimality. Second, we derive necessary conditions for singular optimality for a multi-input system where the dimension of the control vector is equal to that of the state space. We show that the Shima-Sawaragi condition for the optimality of boundary controls and the generalized Legendre-Clebsch condition are obtained from these conditions. The results are also applied to the analysis of a time-invariant system.
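
For reference, the generalized Legendre-Clebsch (Kelley) condition mentioned above reads, in its standard single-input form (the paper's formulation via differential forms may be stated differently): along a singular arc of order q, where 2q is the number of time differentiations of \partial H/\partial u needed for the control u to reappear explicitly,

(-1)^q \, \frac{\partial}{\partial u}\!\left[ \frac{d^{2q}}{dt^{2q}} \frac{\partial H}{\partial u} \right] \ge 0,

with H the Hamiltonian of the problem.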

8.
We consider a stochastic control problem with linear dynamics with jumps, convex cost criterion, and convex state constraint, in which the control enters the drift, the diffusion, and the jump coefficients. We allow these coefficients to be random, and do not impose any L^p-bounds on the control.

We obtain a stochastic maximum principle for this model that provides both necessary and sufficient conditions of optimality. This is the first version of the stochastic maximum principle that covers the consumption–investment problem in which there are jumps in the price system.


9.
This paper surveys some recent results on the controllability of high-dimensional quasilinear parabolic equations, quasilinear complex Ginzburg-Landau equations, and high-dimensional coupled quasilinear parabolic systems with only one control variable. The approach uses fixed-point techniques together with some new, refined Carleman estimates for linear parabolic equations or systems whose principal parts have C1 coefficients. The key point of this method is to treat the controllability problem within the framework of classical solutions and, when the given data have sufficient regularity, to select the control functions for the linear parabolic equations or systems in Hölder spaces. Using similar methods, the existence of insensitizing controls for quasilinear parabolic equations is also established; the crucial step is to reformulate the insensitizing problem as a nonstandard controllability problem, under a single control, for a coupled system consisting of a quasilinear parabolic equation and a linear parabolic equation.

10.
11.
This paper presents a continuous-time solution to the problem of designing a relatively optimal control, namely a dynamic control which is optimal with respect to a given initial condition and is stabilizing for any other initial state. This technique drastically reduces the complexity of the controller and applies successfully to systems in which (constrained) optimality is required for some "nominal operation" only. The technique is combined with a pole assignment procedure. It is shown that once the closed-loop poles have been fixed and an optimal trajectory originating from the nominal initial state compatible with these poles has been computed, a stabilizing compensator which drives the system along this trajectory can be derived in closed form. There is no restriction on the optimality criterion or the constraints. The optimization is carried out over a finite-dimensional parameterization of the trajectories. The technique was previously presented for state feedback; here we propose a technique based on the Youla–Kučera parameterization which works for output feedback. The main result is that we provide conditions for solvability in terms of a set of linear algebraic equations.

12.
We study the well-posedness of an optimal control problem for the coefficients of a quasilinear parabolic equation. We obtain necessary optimality conditions for this problem.

13.
Approaching the problem of optimal adaptive control as "optimal control made adaptive," namely, as a certainty equivalence combination of linear quadratic optimal control and standard parameter estimation, fails on two counts: numerical (as it requires a solution to a Riccati equation at each time step) and conceptual (as the combination actually does not possess any optimality property). In this note, we present a particular form of optimality achievable in Lyapunov-based adaptive control. State and control are subject to positive definite penalties, whereas the parameter estimation error is penalized through an exponential of its square, which means that no attempt is made to enforce parameter convergence, but the estimation transients are penalized simultaneously with the state and control transients. The form of optimality we reveal here is different from our work in [Z. H. Li and M. Krstic, "Optimal design of adaptive tracking controllers for nonlinear systems," Automatica, vol. 33, pp. 1459-1473, 1997], where only the terminal value of the parameter error was penalized. We present our optimality concept on a partial differential equation (PDE) example: boundary control of a particular parabolic PDE with an unknown reaction coefficient. Two technical ideas are central to the developments in the note: a nonquadratic Lyapunov function and a normalization in the Lyapunov-based update law. The optimal adaptive control problem is fundamentally nonlinear, and we explore this aspect through several examples that highlight the interplay between the nonquadratic cost and value functions.

14.
In this paper, we develop a unified framework to address the problem of optimal nonlinear analysis and feedback control for partial stability and partial-state stabilization. Partial asymptotic stability of the closed-loop nonlinear system is guaranteed by means of a Lyapunov function that is positive definite and decrescent with respect to part of the system state, which can clearly be seen to be the solution to the steady-state form of the Hamilton–Jacobi–Bellman equation, hence guaranteeing both partial stability and optimality. The overall framework provides the foundation for extending optimal linear-quadratic controller synthesis to nonlinear nonquadratic optimal partial-state stabilization. Connections to optimal linear and nonlinear regulation for linear and nonlinear time-varying systems with quadratic and nonlinear nonquadratic cost functionals are also provided. Finally, we also develop optimal feedback controllers for affine nonlinear systems using an inverse optimality framework tailored to the partial-state stabilization problem and use this result to address polynomial and multilinear forms in the performance criterion.
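
For orientation, the steady-state Hamilton-Jacobi-Bellman equation referred to above has, in its generic full-state form for dynamics \dot{x} = F(x,u) and running cost L(x,u),

0 = \min_{u} \left[ L(x,u) + \frac{\partial V}{\partial x}(x)\, F(x,u) \right], \qquad V(0) = 0,

with the optimal feedback attaining the minimum; the framework of the paper adapts this so that V is required to be positive definite and decrescent only with respect to the relevant part of the state.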

15.
A simulation study for the optimal control of a system described by a set of two coupled parabolic equations is presented. The control is a function of time and appears as a coefficient in the state equation. The optimality conditions lead to a two-point boundary value problem which is solved by an iterative steepest descent method. Numerical results for different criteria are presented.
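
A minimal sketch of such a steepest-descent loop, written for a scalar ODE surrogate (dynamics x' = -x + u with additive control, rather than the paper's coupled parabolic system with the control as a coefficient; step size and tolerances are hypothetical): the state equation is integrated forward, the adjoint equation backward, and the control is updated along the negative gradient of the Hamiltonian.

import numpy as np

T, N = 2.0, 400
dt = T / N
x0, step = 1.0, 0.5
u = np.zeros(N + 1)                       # initial control guess
# cost: J = 0.5 * int_0^T (x^2 + u^2) dt

for it in range(200):
    # forward sweep: state equation x' = -x + u (explicit Euler)
    x = np.empty(N + 1); x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] + dt * (-x[k] + u[k])
    # backward sweep: adjoint equation p' = p - x, p(T) = 0
    p = np.empty(N + 1); p[N] = 0.0
    for k in range(N, 0, -1):
        p[k - 1] = p[k] - dt * (p[k] - x[k])
    grad = u + p                          # gradient of the Hamiltonian w.r.t. u
    u -= step * grad                      # steepest-descent update
    if np.max(np.abs(grad)) < 1e-6:
        break

J = 0.5 * dt * np.sum(x ** 2 + u ** 2)
print(it, J)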

16.
In this article, we consider a bioeconomic model for minimax control problems governed by periodic competing parabolic Lotka-Volterra equations. First, the existence and uniqueness of solutions to the state equations, as well as their stability, are proved under mild assumptions. We then formulate the minimax control problem. The optimality criteria are to minimize the damage and trapping costs of the first species (a nuisance) and to maximize the difference between the economic revenue and the cost of the second species. The existence of an optimal solution is obtained, and necessary conditions for saddle-point optimality are derived.

17.
Optimization with time-dependent partial differential equations (PDEs) as constraints appears in many science and engineering applications. The associated first-order necessary optimality system consists of one forward and one backward time-dependent PDE coupled through optimality conditions. An optimization process using the one-shot method determines the optimal control, state and adjoint state at once, at the cost of solving a large-scale, fully discrete optimality system. Hence, such a one-shot method can easily become computationally prohibitive when the time span is long or the time step is small. To overcome this difficulty, we propose several time domain decomposition algorithms for improving the computational efficiency of the one-shot method. In these algorithms, the optimality system is split into many small subsystems over much smaller time intervals, which are coupled by appropriate continuity matching conditions. Both one-level and two-level multiplicative and additive Schwarz algorithms are developed for iteratively solving the decomposed subsystems in parallel. In particular, the convergence of the one-level, non-overlapping algorithms is proved. The effectiveness of our proposed algorithms is demonstrated by both 1D and 2D numerical experiments, where the developed two-level algorithms show convergence rates that are scalable with respect to the number of subdomains.
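
For concreteness, a standard example of the forward-backward optimality system described above (a linear parabolic state equation with a tracking-type objective; the problems targeted by these algorithms may be more general): minimizing J(y,u) = \frac{1}{2}\|y - y_d\|_{L^2(Q)}^2 + \frac{\alpha}{2}\|u\|_{L^2(Q)}^2 subject to the heat equation yields

y_t - \Delta y = u, \quad y(\cdot,0) = y_0,
-p_t - \Delta p = y - y_d, \quad p(\cdot,T) = 0,
\alpha u + p = 0,

so the state equation runs forward in time, the adjoint equation runs backward, and the two are coupled through the gradient condition; a one-shot discretization solves all three simultaneously, which is the large coupled system the time domain decomposition is designed to split.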

18.
The purpose of this paper is to extend the Hamilton-Jacobi criteria for optimality to stochastic control systems with a discontinuous system state process. The controlled system process is modeled as a well-measurable process. The admissible class of control processes is not required to be smooth and consists of predictable processes with respect to the available information σ-field. The total cost corresponding to an admissible control law is a measure generated by an integrable increasing process (the random clock of the system), which may also depend on the control law. The optimality criterion is established in terms of inequalities involving the dual predictable projection of the performance rate and an increasing predictable process which generates a potential of class D. For a sufficiently general type of system state process such predictable processes have been calculated, and an explicit version of the optimality criterion is presented.

19.
We present a framework to solve a finite-time optimal control problem for parabolic partial differential equations (PDEs) with diffusivity-interior actuators, which is motivated by the control of the current density profile in tokamak plasmas. The proposed approach is based on reduced order modeling (ROM) and successive optimal control computation. First we either simulate the parabolic PDE system or carry out experiments to generate data ensembles, from which we then extract the most energetic modes to obtain a reduced order model based on the proper orthogonal decomposition (POD) method and Galerkin projection. The obtained reduced order model corresponds to a bilinear control system. Based on quasi-linearization of the optimality conditions derived from Pontryagin's maximum principle, stated as a two-point boundary value problem, we propose an iterative scheme for suboptimal closed-loop control. We take advantage of linear synthesis methods in each iteration step to construct a sequence of controllers. The convergence of the controller sequence is proved in appropriate functional spaces. Compared with previous iterative schemes for optimal control of bilinear systems, the proposed scheme avoids repeated numerical computation of the Riccati equation and therefore significantly reduces the number of ODEs that must be solved at each iteration step. A numerical simulation study shows the effectiveness of this approach.
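
The ROM step can be sketched as follows (assumptions: a generic linear semi-discrete system x' = Ax + Bu stands in for the simulated or experimental data source, and all sizes and matrices are hypothetical; the paper's reduced model is bilinear): snapshots are collected, the most energetic modes are extracted by a singular value decomposition, and Galerkin projection yields the reduced operators.

import numpy as np

rng = np.random.default_rng(0)
n, m, r, nt, dt = 200, 2, 5, 500, 0.01
A = -np.eye(n) + 0.05 * rng.standard_normal((n, n))    # hypothetical stable full-order dynamics
B = rng.standard_normal((n, m))

# 1) generate a snapshot ensemble by simulating the full-order model
x = rng.standard_normal(n)
snapshots = [x.copy()]
for k in range(nt):
    u = np.array([np.sin(0.1 * k), np.cos(0.07 * k)])  # excitation input
    x = x + dt * (A @ x + B @ u)
    snapshots.append(x.copy())
X = np.array(snapshots).T                              # n x (nt+1) snapshot matrix

# 2) POD: the r most energetic modes are the leading left singular vectors
U, s, _ = np.linalg.svd(X, full_matrices=False)
Phi = U[:, :r]
print("captured energy:", np.sum(s[:r] ** 2) / np.sum(s ** 2))

# 3) Galerkin projection gives the reduced-order model z' = Ar z + Br u, with x ≈ Phi z
Ar, Br = Phi.T @ A @ Phi, Phi.T @ B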

20.
It is well known that stochastic control systems can be viewed as Markov decision processes (MDPs) with continuous state spaces. In this paper, we propose to apply the policy iteration approach of MDPs to the optimal control problem of stochastic systems. We first provide an optimality equation based on performance potentials and develop a policy iteration procedure. Then we apply policy iteration to the jump linear quadratic problem and obtain the coupled Riccati equations for its optimal solution. The approach is applicable to linear as well as nonlinear systems and can be implemented on-line on real-world systems without identifying all of the system structure and parameters.
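
A minimal sketch of the evaluate/improve structure of policy iteration, specialized to a single-mode discrete-time LQ problem (the paper's jump LQ case leads to coupled Riccati equations across modes; the matrices below are hypothetical): policy evaluation solves a Lyapunov equation for the cost of the current gain, and policy improvement computes the greedy gain, converging to the Riccati solution.

import numpy as np
from scipy.linalg import solve_discrete_lyapunov, solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[1.0]])

K = np.array([[1.0, 1.0]])                 # any stabilizing initial policy u = -K x
for _ in range(30):
    Acl = A - B @ K
    # policy evaluation: cost-to-go P_K of the fixed policy (discrete Lyapunov equation)
    P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
    # policy improvement: greedy gain with respect to P_K
    K_new = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    if np.linalg.norm(K_new - K) < 1e-10:
        break
    K = K_new

print(np.allclose(P, solve_discrete_are(A, B, Q, R), atol=1e-8))  # matches the Riccati (DARE) solution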
