Similar Documents
20 similar documents found.
1.
2.
A sufficient condition to solve an optimal control problem is to solve the Hamilton–Jacobi–Bellman (HJB) equation. However, finding a value function that satisfies the HJB equation for a nonlinear system is challenging. For an optimal control problem in which a cost function is provided a priori, previous efforts have utilized feedback linearization methods, which assume exact model knowledge, or have developed neural network (NN) approximations of the HJB value function. The result in this paper uses the implicit learning capabilities of the RISE control structure to learn the dynamics asymptotically. Specifically, a Lyapunov stability analysis shows that the RISE feedback term asymptotically identifies the unknown dynamics, yielding semi-global asymptotic tracking. In addition, the system is shown to converge to a state-space system whose quadratic performance index is optimized by an additional control element. An extension illustrates how an NN can be combined with the previous results. Experimental results demonstrate the proposed controllers.

3.
The singular optimal control problem for asymptotic stabilisation has been extensively studied in the literature. In this paper, the optimal singular control problem is extended to address a weaker version of closed-loop stability, namely semistability, which is of paramount importance for consensus control of network dynamical systems. Three approaches to the nonlinear semistable singular control problem are presented. First, a singular perturbation method is used to construct a state-feedback singular controller that guarantees closed-loop semistability for nonlinear systems; in this approach, we show that for a non-negative cost-to-go function the minimum cost of a nonlinear semistabilising singular controller is lower than the minimum cost of a singular controller that guarantees asymptotic stability of the closed-loop system. In the second approach, we solve the nonlinear semistable singular control problem by using the cost-to-go function to cancel the singularities in the corresponding Hamilton–Jacobi–Bellman equation; for this case, we show that the minimum value of the singular performance measure is zero. Finally, we provide a framework based on the concepts of state-feedback linearisation and feedback equivalence to solve the singular control problem for semistabilisation of nonlinear dynamical systems; for this approach, the minimum value of the singular performance measure is also zero. Three numerical examples demonstrate the efficacy of the proposed singular semistabilisation frameworks.

4.
A deterministic optimal control problem is solved for a control-affine non-linear system with a non-quadratic cost function. We algebraically solve the Hamilton–Jacobi equation for the gradient of the value function, which eliminates the need to explicitly solve a Hamilton–Jacobi partial differential equation. We interpret the value function in terms of a control Lyapunov function, and then provide the stabilizing controller and the stability margins. Furthermore, we derive an optimal controller for a control-affine non-linear system using the state-dependent Riccati equation (SDRE) method; this method gives an optimal controller similar to that of the algebraic method. We also find the optimal controller when the cost function is of exponential-of-integral form, known as risk-sensitive (RS) control. Finally, we show that the SDRE and RS methods give equivalent optimal controllers for non-linear deterministic systems. Examples demonstrate the proposed methods.
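The SDRE procedure described in this abstract can be sketched in a few lines: factor the dynamics into state-dependent coefficient (SDC) form ẋ = A(x)x + B(x)u and solve a fresh algebraic Riccati equation at the current state. The pendulum-like plant and the weights below are illustrative assumptions, not the system from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def sdre_gain(x, Q, R):
    """State-dependent LQR gain for x1' = x2, x2' = -sin(x1) + u (illustrative plant)."""
    # SDC factorization: -sin(x1) = (-sin(x1)/x1) * x1, with limit -1 at x1 = 0.
    a21 = -np.sinc(x[0] / np.pi)           # np.sinc(z) = sin(pi z)/(pi z), so this is sin(x1)/x1
    A = np.array([[0.0, 1.0], [a21, 0.0]])
    B = np.array([[0.0], [1.0]])
    P = solve_continuous_are(A, B, Q, R)   # pointwise algebraic Riccati solution
    return np.linalg.solve(R, B.T @ P)

Q, R = np.eye(2), np.array([[1.0]])
x = np.array([0.5, 0.0])
u = -sdre_gain(x, Q, R) @ x                # feedback evaluated at the current state
```

The gain is recomputed at each sampling instant, so the controller tracks the state-dependent linearization; near the origin it reduces to the ordinary LQR gain.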

5.
In our earlier work, we showed that one way to solve a robust control problem for an uncertain system is to translate it into an optimal control problem. If the system is linear, the optimal control problem becomes a linear quadratic regulator (LQR) problem, which can be solved via an algebraic Riccati equation. In this article, we extend the optimal control approach to robust tracking of linear systems. The control objective is not simply to drive the state to zero but to track a non-zero reference signal, which we assume to be a polynomial function of time. We first investigate the tracking problem under the condition that all state variables are available for feedback, and show that the robust tracking problem can be solved via an algebraic Riccati equation. Because state feedback is not always available in practice, we also investigate output feedback, and show that if the observer poles are placed sufficiently far to the left of the imaginary axis, the robust tracking problem can be solved. As in the state-feedback case, the observer and feedback gains are obtained by solving two algebraic Riccati equations.
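As a minimal sketch of the state-feedback case, the snippet below regulates the tracking error for a step reference (the degree-zero polynomial case) with a gain from an algebraic Riccati equation; the double-integrator plant, weights, and reference are assumptions for illustration, not taken from the article.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator; track a constant reference r in the first state.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # optimal state-feedback gain

r = 1.0
def control(x):
    # regulate the tracking error e = x - [r, 0]
    e = x - np.array([r, 0.0])
    return float(-K @ e)

# forward-Euler simulation for 20 s
x = np.zeros(2)
dt = 1e-3
for _ in range(20000):
    u = control(x)
    x = x + dt * (A @ x + B.flatten() * u)
```

After the transient, the first state settles at the reference; higher-degree polynomial references would additionally need the internal-model/feedforward machinery the article develops.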

6.
On reachability and minimum cost optimal control
Questions of reachability for continuous and hybrid systems can be formulated as optimal control or game theory problems, whose solution can be characterized using variants of the Hamilton-Jacobi-Bellman or Isaacs partial differential equations. The formal link between the solution to the partial differential equation and the reachability problem is usually established in the framework of viscosity solutions. This paper establishes such a link between reachability, viability and invariance problems and viscosity solutions of a special form of the Hamilton-Jacobi equation. This equation is developed to address optimal control problems where the cost function is the minimum of a function of the state over a specified horizon. The main advantage of the proposed approach is that the properties of the value function (uniform continuity) and the form of the partial differential equation (standard Hamilton-Jacobi form, continuity of the Hamiltonian and simple boundary conditions) make the numerical solution of the problem much simpler than other approaches proposed in the literature. This fact is demonstrated by applying our approach to a reachability problem that arises in flight control and using numerical tools to compute the solution.

7.
8.
Because distributed parameter systems are usually described by partial differential equations, solving their optimal boundary control problems analytically is very difficult. Orthogonal-function approximation has already produced good results in the control of distributed parameter systems. Here, Haar wavelets are used as the orthogonal basis functions: with the wavelet operational and transformation matrices, the distributed parameter system is converted into a lumped parameter system, whose approximate solution is then computed. A simulation example verifies that the proposed algorithm is very effective. The method offers a new solution path for control algorithms of distributed parameter systems.

9.
Automatica, 2014, 50(11): 2822-2834
We study the quadratic control of a class of stochastic hybrid systems with linear continuous dynamics for which the lengths of time that the system stays in each mode are independent random variables with given probability distribution functions. We derive a condition for finding the optimal feedback policy that minimizes a discounted infinite horizon cost. We show that the optimal cost is the solution to a set of differential equations with unknown boundary conditions. Furthermore, we provide a recursive algorithm for computing the optimal cost and the optimal feedback policy. The applicability of our result is illustrated through a numerical example, motivated by stochastic gene regulation in biology.

10.
Optimizing aircraft collision avoidance and trajectory optimization are key problems in an air transportation system. This paper focuses on solving these problems with a stochastic optimal control approach. The major contribution is a stochastic optimal control algorithm that dynamically adjusts and optimizes the aircraft trajectory, accounting for random wind dynamics and convective weather areas of changing size. Although the system is modeled by a stochastic differential equation, the optimal feedback control can be computed as the solution of a partial differential equation, namely an elliptic Hamilton–Jacobi–Bellman equation. We solve this equation numerically using a Markov chain approximation approach, and a comparison of three iterative methods and two optimization search methods is presented. Simulations show that the proposed method performs better at reducing conflict probability in the system and is feasible for real applications.
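The Markov chain approximation reduces the HJB equation to dynamic programming on a discrete chain. The toy below, a one-dimensional chain with unit step cost and an absorbing goal (an assumption standing in for the paper's aircraft model), shows the basic value-iteration sweep that such solvers repeat.

```python
import numpy as np

# States 0..N on a line; actions move one step left or right at unit cost;
# state N is an absorbing goal with zero cost-to-go.
N = 10
V = np.zeros(N + 1)
for _ in range(200):
    V_new = V.copy()
    for s in range(N):                       # state N is the goal, never updated
        left = V[max(s - 1, 0)]
        right = V[s + 1]
        V_new[s] = 1.0 + min(left, right)    # Bellman backup
    if np.max(np.abs(V_new - V)) < 1e-9:     # stop once the fixed point is reached
        V = V_new
        break
    V = V_new
# V[s] converges to the minimum number of steps to the goal, N - s.
```

The iterative methods compared in the paper (e.g. Gauss-Seidel-type sweeps) differ only in how these backups are ordered and accelerated.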

11.
In recent decades, linear quadratic optimal control has seen great advances in both theory and practice. For a linear quadratic optimal control problem, it is well known that the optimal feedback control is characterized by the solution of a Riccati differential equation, which cannot be solved exactly in many cases; moreover, the optimal feedback control is sometimes a complicated function of time. In this paper, we introduce a parametric optimal control problem for an uncertain linear quadratic model and propose an approximation method that simplifies the expression of the optimal control. A theorem ensures the solvability of the optimal parameter. In addition, analytical expressions for the optimal control and optimal value are derived using the proposed approximation method. Finally, an inventory-promotion problem illustrates the efficiency of the results and the practicality of the model.
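When the Riccati differential equation has no closed form, it can still be integrated numerically. The sketch below steps the matrix ODE backward from the terminal condition for an illustrative double integrator (plant, weights, and horizon are assumptions, not the paper's uncertain model); for a long horizon the gain at t = 0 approaches the algebraic-Riccati gain, which for this plant is analytically [1, √3].

```python
import numpy as np

# Riccati differential equation for finite-horizon LQR:
#   -dP/dt = A'P + PA - P B R^{-1} B' P + Q,   P(T) = 0.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
Rinv = np.linalg.inv(R)

T, dt = 10.0, 1e-3
P = np.zeros((2, 2))                       # terminal condition P(T) = 0
for _ in range(int(T / dt)):               # Euler steps in backward time tau = T - t
    dP = A.T @ P + P @ A - P @ B @ Rinv @ B.T @ P + Q
    P = P + dt * dP
K0 = Rinv @ B.T @ P                        # feedback gain at t = 0
```

This is the exact computation the abstract says is often avoided in favor of an approximation; for a short horizon the time-varying gain K(t) must be stored over the whole interval.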

12.
By defining a new form of the type-3 generating function for linear Hamiltonian systems, a direct connection is established between the generating-function method and the Riccati-transformation method for solving the linear quadratic terminal control problem, and the optimal control laws derived by the two methods are proved to be equivalent. The advantage of the generating-function method is its flexibility in handling boundary constraints. Since the Riccati-transformation method solves the problem backward in time, this paper defines a type-3 generating function suited to backward canonical transformations and uses it to solve the Hamiltonian two-point boundary value problem, yielding an optimal control law expressed in terms of generating functions. By verifying that each part of the optimal control law obtained with the generating-function method corresponds to the result of the Riccati-transformation method, the equivalence of the two control laws is proved.

13.
This paper revisits the quadratic optimal control problem of decentralised control systems via static output feedback. A gradient flow approach is introduced as a tool to compute the optimal output feedback gain. Several properties are established concerning the convergence of the gain matrix along the trajectory of an ordinary differential equation obtained from the gradient of the objective cost; in particular, the objective cost decreases along this trajectory, and if the equilibrium points are isolated, convergence is guaranteed. A simulation example illustrates the effectiveness of this approach.
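The paper's setting is decentralised static output feedback; as a simplified sketch, the same gradient-flow idea can be run on full state feedback for a double integrator, where the stationary point is the LQR gain. The plant, weights, initial gain, and Euler step size below are all illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

K = np.array([[1.0, 1.0]])                 # stabilizing initial gain
for _ in range(5000):                      # Euler discretization of the gradient flow
    Acl = A - B @ K
    # cost of the current gain: Acl' P + P Acl + Q + K' R K = 0
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    # weighting term: Acl L + L Acl' + I = 0
    L = solve_continuous_lyapunov(Acl, -np.eye(2))
    grad = 2.0 * (R @ K - B.T @ P) @ L     # gradient of tr(P) w.r.t. K
    K = K - 0.02 * grad                    # cost decreases along this flow

# reference: the LQR optimum from the algebraic Riccati equation
K_opt = np.linalg.solve(R, B.T @ solve_continuous_are(A, B, Q, R))
```

Each flow step keeps the gain inside the stabilizing set for a small enough step size, mirroring the cost-decrease property the abstract describes.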

14.
In this work, a policy iteration algorithm (PIA) that requires no mathematical model of the plant is applied to control arterial oxygen saturation. The technique is based on nonlinear optimal control and solves the Hamilton–Jacobi–Bellman equation. The controller is synthesized in a state-feedback configuration over the unidentified, complex pathophysiology of the pulmonary system in order to control gas exchange in ventilated patients, since under some circumstances (such as emergencies) a proper, individualized model for designing and tuning controllers may not be available in time. The simulation results demonstrate optimal control of oxygenation with the proposed PIA, which iteratively evaluates the Hamiltonian cost functions and synthesizes control actions until the convergence criteria are met. Furthermore, as a practical example, we examine the performance of this control strategy on an interconnected three-tank system, a real nonlinear system.
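For the linear quadratic special case, the policy iteration applied here to the HJB equation reduces to Kleinman's algorithm: policy evaluation is a Lyapunov equation and policy improvement updates the gain. The double-integrator plant below is an illustrative assumption, not the respiratory model from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
Rinv = np.linalg.inv(R)

K = np.array([[1.0, 1.0]])                  # any stabilizing initial policy
for _ in range(20):
    Acl = A - B @ K
    # policy evaluation: Acl' P + P Acl + Q + K' R K = 0
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    # policy improvement
    K = Rinv @ B.T @ P
# K converges (quadratically) to the optimal LQR gain, here [1, sqrt(3)].
```

The iteration converges monotonically from any stabilizing initial policy; the model-free PIA of the paper replaces the Lyapunov solve with data-driven evaluation of the same cost.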

15.
An optimal (practical) stabilization problem is formulated in an inverse approach and solved for nonlinear evolution systems in Hilbert spaces. The optimal control design ensures global well-posedness and global practical $\mathcal{K}_{\infty}$-exponential stability of the closed-loop system, and minimizes a cost functional that appropriately penalizes both state and control, in the sense that it is positive definite (and radially unbounded) in the state and control, without having to solve a Hamilton-Jacobi-Bellman equation (HJBE). The Lyapunov functional used in the control design explicitly solves a family of HJBEs. The results are applied to design inverse optimal boundary stabilization control laws for extensible and shearable slender beams governed by fully nonlinear partial differential equations.

16.
We study a pointwise control problem for the Boussinesq system in two dimensions. The control is a source term in the heat equation, and the cost functional takes into account the distance between the solution to the Boussinesq system and a given profile. We have recently studied similar problems for the linearized Boussinesq system (Nguyen and Raymond, J. Optim. Theory Appl. 141 (2009) 147-165). Studying these problems in the nonlinear case is more difficult. There is some literature on the pointwise control of linear and nonlinear partial differential equations; however, to the authors’ knowledge, nothing has been done for the Boussinesq system. In this study, we prove the existence of optimal solutions and derive optimality conditions.

17.
In this paper, the optimal control of a class of general affine nonlinear discrete-time (DT) systems is undertaken by solving the Hamilton–Jacobi–Bellman (HJB) equation online and forward in time. The proposed approach, normally referred to as adaptive or approximate dynamic programming (ADP), uses online approximators (OLAs) to solve the infinite horizon optimal regulation and tracking control problems for affine nonlinear DT systems in the presence of unknown internal dynamics. Both the regulation and tracking control…

18.
We present a framework to solve a finite-time optimal control problem for parabolic partial differential equations (PDEs) with diffusivity-interior actuators, which is motivated by the control of the current density profile in tokamak plasmas. The proposed approach is based on reduced order modeling (ROM) and successive optimal control computation. First we either simulate the parabolic PDE system or carry out experiments to generate data ensembles, from which we then extract the most energetic modes to obtain a reduced order model based on the proper orthogonal decomposition (POD) method and Galerkin projection. The obtained reduced order model corresponds to a bilinear control system. Based on quasi-linearization of the optimality conditions derived from Pontryagin’s maximum principle, and stated as a two-point boundary value problem, we propose an iterative scheme for suboptimal closed-loop control. We take advantage of linear synthesis methods in each iteration step to construct a sequence of controllers. The convergence of the controller sequence is proved in appropriate functional spaces. When compared with previous iterative schemes for optimal control of bilinear systems, the proposed scheme avoids repeated numerical computation of the Riccati equation and therefore reduces significantly the number of ODEs that must be solved at each iteration step. A numerical simulation study shows the effectiveness of this approach.

19.
Neural-network-based differential game controller design
周锐 (Zhou Rui), 《控制与决策》 (Control and Decision), 2003, 18(1): 123-125
Using the adjoint-BP technique, the two-point boundary value problem of a differential game is converted into training problems for two neural networks. After training, the two networks serve online as the optimal controllers of the two players, avoiding direct solution of the complicated two-point boundary value problem. Simulation results on a pursuit-evasion differential game show that the method is robust to initial conditions and noise.

20.
The Pontryagin Maximum Principle is one of the most important results in optimal control, providing necessary conditions for optimality in the form of a mixed initial/terminal boundary condition on a pair of differential equations for the system state and its conjugate costate. Unfortunately, this mixed boundary value problem is usually difficult to solve, since the Pontryagin Maximum Principle gives no information on the initial value of the costate. In this paper, we explore an optimal control problem with linear and convex structure and derive the associated dual optimization problem using convex duality, which is often much easier to solve than the original optimal control problem. We show that the solution to the dual optimization problem supplements the necessary conditions of the Pontryagin Maximum Principle, and we elaborate the procedure for constructing the optimal control and its corresponding state trajectory from the solution to the dual problem.
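The mixed boundary value structure can be seen concretely on a scalar LQ example (a shooting illustration of the difficulty, not the paper's dual construction): minimize ∫₀¹ (x² + u²)/2 dt for ẋ = u, x(0) = 1, x(1) free. The PMP gives u = -λ with ẋ = -λ, λ̇ = -x, λ(1) = 0, and the unknown initial costate λ(0), analytically tanh(1), is found here by bisection. All problem data are assumptions for the sketch.

```python
import numpy as np

def terminal_costate(lam0, steps=4000):
    """Forward-Euler integration of the Hamiltonian system from (x, lam) = (1, lam0)."""
    x, lam = 1.0, lam0
    dt = 1.0 / steps
    for _ in range(steps):
        x, lam = x + dt * (-lam), lam + dt * (-x)
    return lam                     # value of lambda at t = 1

# Single shooting: bisect on the unknown initial costate so that lambda(1) = 0.
lo, hi = 0.0, 2.0                  # bracket; terminal_costate is increasing in lam0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if terminal_costate(mid) < 0.0:
        lo = mid
    else:
        hi = mid
lam0 = 0.5 * (lo + hi)             # approximately tanh(1)
```

The paper's point is precisely that such shooting searches can be avoided: the dual convex problem delivers the missing costate information directly.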
