Similar Documents (20 results found)
1.
Proposes a recurrent learning algorithm for designing the controllers of continuous dynamical systems in optimal control problems. The controllers are in the form of unfolded recurrent neural nets embedded with physical laws from classical control techniques. The learning algorithm is characterized by a double forward-recurrent-loops structure for solving both temporal-recurrent and structure-recurrent problems. The first problem results from the nature of general optimal control problems, where the objective functions are often related to (evaluated at) some specific time steps or system states only, causing missing learning signals at some steps or states. The second problem is due to the high-order discretization of continuous systems by the Runge-Kutta method, which we perform to increase accuracy. This discretization transforms the system into several identical interconnected subnetworks, like a recurrent neural net expanded along the time axis. Two recurrent learning algorithms with different convergence properties are derived: first- and second-order learning algorithms. Their computations are local and performed efficiently as net signal propagation. We also propose two new nonlinear control structures for the 2D guidance problem and the optimal PI control problem. Under the training of the recurrent learning algorithms, these controllers can be easily tuned to be suboptimal for given objective functions. Extensive computer simulations show the controllers' optimization and generalization abilities.
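The Runge-Kutta discretization mentioned in this abstract, which unrolls a continuous system into identical interconnected sub-blocks per time step, can be illustrated with a minimal sketch. This is plain Python on a hypothetical scalar dynamics, not the paper's controller:

```python
import math

def rk4_step(f, x, t, h):
    """One classical 4th-order Runge-Kutta step for dx/dt = f(t, x).
    Each step is built from four evaluations of f, which is what turns
    the discretized system into several identical, interconnected
    sub-blocks per time step when the dynamics are a neural net."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Hypothetical test dynamics: dx/dt = -x, exact solution x(t) = exp(-t).
f = lambda t, x: -x
x, t, h = 1.0, 0.0, 0.1
for _ in range(10):              # integrate to t = 1
    x = rk4_step(f, x, t, h)
    t += h
error = abs(x - math.exp(-1.0))  # RK4 global error is O(h^4)
print(x, error)
```

With h = 0.1 the error at t = 1 is far below 1e-5, which is the accuracy gain the abstract's high-order discretization trades against the extra per-step structure.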

2.
We construct a continuous feedback for a saturated system ẋ(t) = Ax(t) + Bσ(u(t)). The feedback renders the system asymptotically stable on the whole set of states that can be driven to 0 with an open-loop control. The trajectories of the resulting closed-loop system are optimal for an auxiliary optimal control problem with a convex cost and linear dynamics. The value function for the auxiliary problem, which we show to be differentiable, serves as a Lyapunov function for the saturated system. Relating the saturated system, which is nonlinear, to an optimal control problem with linear dynamics is possible thanks to the monotone structure of the saturation.

3.
This paper treats the problem of the combined design of structure/control systems for achieving optimal maneuverability. A maneuverability index which directly reflects the time required to perform a given maneuver or set of maneuvers is introduced. By designing the flexible appendages of a spacecraft, its maneuverability is optimized under the constraints of structural properties, and of the postmaneuver spillover being within a specified bound. The spillover reduction is achieved by making use of an appropriate control design model. The distributed-parameter design problem is approached using assumed shape functions and finite element analysis with dynamic reduction. Characteristics of the problem and problem-solving procedures have been investigated. Adaptive approximate design methods have been developed to overcome computational difficulties. It is shown that the global optimal design may be obtained by tuning the natural frequencies of the spacecraft to satisfy specific constraints. We quantify the difference between a lower bound on the objective function of the original problem and the estimate obtained from the modified problem, and use it as the index for the adaptive refinement procedure. Numerical examples show that the optimal design can provide substantial improvement.

4.
In this paper we show that the concept of an implemented semigroup provides a natural mathematical framework for analysis of the infinite-dimensional differential Lyapunov equation. Lyapunov equations of this form arise in various system-theoretic and control problems with a finite time horizon, infinite-dimensional state space and unbounded operators in the mathematical model of the system. The implemented semigroup approach allows us to derive a necessary and sufficient condition for the differential Lyapunov equation with an unbounded forcing term to admit a bounded solution in a suitable space. Whilst our focus is on the differential Lyapunov equation, we show that the same framework is also appropriate for the algebraic version of this equation. As an application we show that the approach can be used to solve a simple decoupling problem arising in optimal control. The problem of infinite time admissibility of the control operator and an infinite-dimensional version of the Lyapunov theorem serve as additional illustrations.

5.
In this paper, we develop a unified framework to address the problem of optimal nonlinear analysis and feedback control for nonlinear stochastic dynamical systems. Specifically, we provide a simplified and tutorial framework for stochastic optimal control and focus on connections between stochastic Lyapunov theory and stochastic Hamilton–Jacobi–Bellman theory. In particular, we show that asymptotic stability in probability of the closed‐loop nonlinear system is guaranteed by means of a Lyapunov function that can clearly be seen to be the solution to the steady‐state form of the stochastic Hamilton–Jacobi–Bellman equation and, hence, guaranteeing both stochastic stability and optimality. In addition, we develop optimal feedback controllers for affine nonlinear systems using an inverse optimality framework tailored to the stochastic stabilization problem. These results are then used to provide extensions of the nonlinear feedback controllers obtained in the literature that minimize general polynomial and multilinear performance criteria. Copyright © 2017 John Wiley & Sons, Ltd.

6.
We propose a novel way for sampled-data implementation (with the zero-order-hold assumption) of continuous-time controllers for general nonlinear systems. We assume that a continuous-time controller has been designed so that the continuous-time closed loop satisfies all performance requirements. Then, we use this control law indirectly to compute numerically a sampled-data controller. Our approach exploits a model predictive control (MPC) strategy that minimizes the mismatch between the solutions of the sampled-data model and the continuous-time closed-loop model. We propose a control law and present conditions under which stability and sub-optimality of the closed loop can be proved. We only consider the case of unconstrained MPC. We show that the recent results in [G. Grimm, M.J. Messina, A.R. Teel, S. Tuna, Model predictive control: for want of a local control Lyapunov function, all is not lost, IEEE Trans. Automat. Control 2004, to appear] can be directly used for analysis of stability of our closed-loop system.

7.
In this note, the problem of estimating the asymptotic stability region (ASR) of uncertain systems with bounded sliding mode controllers is considered. To simplify the problem, the authors use a state transformation and choose a suitable Lyapunov function for the transformed control system. Using the Lyapunov function, the authors estimate the ASR and show the exponential stability of the closed-loop control system in the region. Also, by an example the authors show that for a certain class of uncertain dynamical systems with bounded sliding mode controllers their method gives an improved estimate of the ASR.

8.
The design of a state feedback law for an affine nonlinear system to render a compact neighborhood (as small as possible) of the equilibrium of interest globally attractive is discussed. Following Z. Artstein's theorem (1983), the problem can be solved by designing a so-called control Lyapunov function. For systems which are in a cascade form, a Lyapunov function meeting Artstein's conditions is designed, assuming the knowledge of a control law stabilizing the equilibrium of the head nonlinear subsystem. In particular, for planar systems, this gives necessary and sufficient conditions for a compact neighborhood of the equilibrium to be stabilized.

9.
In this paper, we consider a two-player stochastic differential game problem over an infinite time horizon where the players invoke controller and stopper strategies on a nonlinear stochastic differential game problem driven by Brownian motion. The optimal strategies for the two players are given explicitly by exploiting connections between stochastic Lyapunov stability theory and stochastic Hamilton–Jacobi–Isaacs theory. In particular, we show that asymptotic stability in probability of the differential game problem is guaranteed by means of a Lyapunov function which can clearly be seen to be the solution to the steady-state form of the stochastic Hamilton–Jacobi–Isaacs equation, and hence, guaranteeing both stochastic stability and optimality of the closed-loop control and stopper policies. In addition, we develop optimal feedback controller and stopper policies for affine nonlinear systems using an inverse optimality framework tailored to the stochastic differential game problem. These results are then used to provide extensions of the linear feedback controller and stopper policies obtained in the literature to nonlinear feedback controllers and stoppers that minimise and maximise general polynomial and multilinear performance criteria.

10.
In this paper we discuss the construction of “universal” controllers for a class of robust stabilization problems. We give a general theorem on the construction of these controllers, which requires that a certain nonlinear inequality is solvable pointwise or, equivalently, that a robust control Lyapunov function exists. The constructive procedure produces almost-smooth controllers. Robust control Lyapunov functions extend the concept of control Lyapunov functions to uncertain systems. If such a robust control Lyapunov function also satisfies a small control property, the resulting stabilizing controller is also continuous at the origin of the state space. Applications of our results range from optimal to robust control.

11.
In our earlier work, we showed that one way to solve a robust control problem for an uncertain system is to translate it into an optimal control problem. If the system is linear, the optimal control problem becomes a linear quadratic regulator (LQR) problem, which can be solved via an algebraic Riccati equation. In this article, we extend the optimal control approach to robust tracking of linear systems. We assume that the control objective is not simply to drive the state to zero but rather to track a non-zero reference signal, which is a polynomial function of time. We first investigate the tracking problem under the condition that all state variables are available for feedback and show that the robust tracking problem can be solved by solving an algebraic Riccati equation. Because state feedback is not always available in practice, we also investigate output feedback. We show that if the poles of the observer are placed sufficiently far to the left of the imaginary axis, the robust tracking problem can be solved. As in the state-feedback case, the observer and feedback gains can be obtained by solving two algebraic Riccati equations.
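The LQR-via-Riccati step this abstract relies on has a closed form in the scalar case: for ẋ = ax + bu with cost ∫(qx² + ru²)dt, the algebraic Riccati equation 2ap − p²b²/r + q = 0 gives the gain k = bp/r. A minimal sketch in plain Python, with illustrative numbers only (the paper's tracking and output-feedback results need the full matrix machinery):

```python
import math

def scalar_lqr(a, b, q, r):
    """Solve the scalar algebraic Riccati equation
        2*a*p - (b**2 / r) * p**2 + q = 0
    for its stabilizing root p, and return p and the LQR gain k = b*p/r.
    The closed loop dx/dt = (a - b*k) x is then stable, since
    a - b*k = -sqrt(a**2 + b**2 * q / r) < 0."""
    p = r * (a + math.sqrt(a * a + b * b * q / r)) / (b * b)
    k = b * p / r
    return p, k

# Illustrative unstable scalar plant: a = 1, b = 1, weights q = r = 1.
p, k = scalar_lqr(1.0, 1.0, 1.0, 1.0)
residual = 2 * 1.0 * p - p * p + 1.0   # Riccati residual, should be ~0
closed_loop = 1.0 - k                  # a - b*k, should be negative
print(p, k, residual, closed_loop)
```

Here p = 1 + √2 and the closed-loop pole is −√2, confirming that the stabilizing root of the Riccati equation yields a stable loop.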

12.
We propose a general method for constructing control Lyapunov functions (CLFs) for linear multivariable systems. We first show that a quadratic CLF for a linear system can be obtained by solving a class of Lyapunov equations. We then prove that, for linear systems, this method yields all quadratic CLFs. Finally, we prove that if a linear system admits a CLF, it must admit a quadratic CLF. This completely solves the CLF construction problem for linear systems.
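The first step described in this abstract, obtaining a quadratic function V(x) = xᵀPx by solving a Lyapunov equation AᵀP + PA = −Q, can be sketched as follows. This is an illustration with NumPy on a Hurwitz matrix of my own choosing (for a stable A the solution P is positive definite and V is a Lyapunov, hence trivially control Lyapunov, function; the paper's construction covers the general stabilizable case):

```python
import numpy as np

def solve_lyapunov(A, Q):
    """Solve A^T P + P A = -Q by vectorizing the equation.
    With row-major flattening, A^T P maps to (A^T kron I) vec(P)
    and P A maps to (I kron A^T) vec(P)."""
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(A.T, I) + np.kron(I, A.T)
    p = np.linalg.solve(M, -Q.flatten())
    return p.reshape(n, n)

# Illustrative Hurwitz matrix (eigenvalues -1 and -2), not from the paper.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
Q = np.eye(2)
P = solve_lyapunov(A, Q)
residual = A.T @ P + P @ A + Q      # should be ~0
eigs = np.linalg.eigvalsh(P)        # P must be positive definite
print(P, np.abs(residual).max(), eigs.min())
```

For this A and Q the exact solution is P = [[1.25, 0.25], [0.25, 0.25]], which is symmetric positive definite, so V(x) = xᵀPx decreases along all trajectories of ẋ = Ax.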

13.
We consider the problem of designing optimal distributed controllers whose impulse response has limited propagation speed. We introduce a state-space framework in which all spatially invariant systems with this property can be characterized. After establishing the closure of such systems under linear fractional transformations, we formulate the H2 optimal control problem using the model-matching framework. We demonstrate that, even though the optimal control problem is non-convex with respect to some state-space design parameters, a variety of numerical optimization algorithms can be employed to relax the original problem, thereby rendering suboptimal controllers. In particular, for the case in which every subsystem has scalar input disturbance, scalar measurement, and scalar actuation signal, we investigate the application of the Steiglitz–McBride, Gauss–Newton, and Newton iterative schemes to the optimal distributed controller design problem. We apply this framework to examples previously considered in the literature to demonstrate that, by designing structured controllers with infinite impulse response, superior performance can be achieved compared to finite impulse response structured controllers of the same temporal degree.

14.
This paper presents a method for enlarging the domain of attraction of nonlinear model predictive control (MPC). The usual way of guaranteeing stability of nonlinear MPC is to add a terminal constraint and a terminal cost to the optimization problem such that the terminal region is a positively invariant set for the system and the terminal cost is an associated Lyapunov function. The domain of attraction of the controller depends on the size of the terminal region and the control horizon. Increasing the control horizon enlarges the domain of attraction but at the expense of a greater computational burden, while increasing the terminal region produces an enlargement without extra cost.

In this paper, the MPC formulation with terminal cost and constraint is modified, replacing the terminal constraint by a contractive terminal constraint. This constraint is given by a sequence of sets computed off-line, based on the positively invariant set. Each set of this sequence need not be an invariant set and can be computed by a procedure which provides an inner approximation to the one-step set. This property allows us to use one-step approximations with a trade-off between accuracy and computational burden for the computation of the sequence. This strategy guarantees closed-loop stability while enlarging the domain of attraction and retaining the local optimality of the controller. Moreover, the idea translates directly to robust MPC.

15.
A class of Lyapunov functions is proposed for discrete-time linear systems interconnected with a cone bounded nonlinearity. Using these functions, we propose sufficient conditions for the global stability analysis, in terms of linear matrix inequalities (LMI), only taking the bounded sector condition into account. Unlike frameworks based on the Lur’e-type function, the additional assumptions about the derivative or discrete variation of the nonlinearity are not necessary. Hence, a wider range of cone bounded nonlinearities can be covered. We also show that there is a link between global stability LMI conditions based on this new Lyapunov function and a transfer function of an auxiliary system being strictly positive real. In addition, the novel function is considered in the local stability analysis problem of discrete-time Lur’e systems subject to a saturating feedback. A convex optimization problem based on sufficient LMI conditions is formulated to maximize an estimate of the basin of attraction. Another specificity of this new Lyapunov function is the fact that the estimate is composed of disconnected sets. Numerical examples reveal the effectiveness of this new Lyapunov function in providing a less conservative estimate with respect to the quadratic function.

16.
In this paper the output feedback stabilizability problem is explored in terms of control Lyapunov functions. Sufficient conditions for stabilization are provided for a certain class of systems by means of output feedback stabilizers that can be obtained from an optimization problem. Our main results extend those developed in [31] and generalize a theorem due to Sontag [23].

17.
To address the difficulty of tracking time-varying trajectories with continuous-time nonlinear systems, this paper first introduces new state variables through a system transformation, converting the optimal tracking problem for the nonlinear system into an optimal control problem for a general time-invariant nonlinear system, and obtains the approximate optimal value function and optimal control policy via an approximate dynamic programming (ADP) algorithm. To implement the algorithm effectively, a critic network and an actor network are used to estimate the value function and the corresponding control policy, and both are updated online. To eliminate the approximation errors introduced by the neural networks, a robust term is added to the controller design; Lyapunov stability theory is used to prove that the proposed control policy guarantees asymptotic convergence of the tracking error to zero, and that, within a small error bound, the policy is close to the optimal control policy. Finally, two time-varying trajectory tracking examples demonstrate the feasibility and effectiveness of the method.

18.
The problem of estimating regions of asymptotic stability (RAS) of uncertain dynamical systems with bounded controllers and a sliding mode requirement is discussed. The notion of combining different Lyapunov functions is utilized: different Lyapunov functions are found, an estimate of the RAS is obtained for each, and the different regions are combined to yield an improved RAS. The combined region is not, in general, convex and cannot be found analytically using just a single Lyapunov function. A class of uncertain, linear, time-invariant, multivariable control systems is considered, and a transformation for decoupling the system into two subsystems is used, which simplifies the RAS estimation problem. Stability domains are estimated for a linear, variable-structure control system with sliding mode performance in the case of a discontinuous bounded controller.

19.
20.
A sufficient condition to solve an optimal control problem is to solve the Hamilton–Jacobi–Bellman (HJB) equation. However, finding a value function that satisfies the HJB equation for a nonlinear system is challenging. For an optimal control problem when a cost function is provided a priori, previous efforts have utilized feedback linearization methods which assume exact model knowledge, or have developed neural network (NN) approximations of the HJB value function. The result in this paper uses the implicit learning capabilities of the RISE control structure to learn the dynamics asymptotically. Specifically, a Lyapunov stability analysis is performed to show that the RISE feedback term asymptotically identifies the unknown dynamics, yielding semi-global asymptotic tracking. In addition, it is shown that the system converges to a state space system that has a quadratic performance index which has been optimized by an additional control element. An extension is included to illustrate how an NN can be combined with the previous results. Experimental results are given to demonstrate the proposed controllers.
