Similar Documents
1.
This paper introduces an observer-based adaptive optimal control method for unknown singularly perturbed nonlinear systems with input constraints. First, a multi-time scales dynamic neural network (MTSDNN) observer with a novel updating law derived from a properly designed Lyapunov function is proposed to estimate the system states. Then, an adaptive learning rule driven by the critic NN weight error is presented for the critic NN, which is used to approximate the optimal cost function. Finally, the optimal control action is calculated by solving online the Hamilton-Jacobi-Bellman (HJB) equation associated with the MTSDNN observer and critic NN. The stability of the overall closed-loop system, consisting of the MTSDNN observer, the critic NN and the optimal control action, is proved. The proposed observer-based optimal control approach has the essential advantage that the system dynamics are not needed for implementation; only measured input/output data are required. Moreover, the proposed optimal control design takes the input constraints into consideration and thus can overcome the restriction of actuator saturation. Simulation results are presented to confirm the validity of the investigated approach.

2.
In this article, the event-triggered optimal tracking control problem for multiplayer unknown nonlinear systems is investigated by using adaptive critic designs. By constructing a neural network (NN)-based observer with input–output data, the system dynamics of the multiplayer unknown nonlinear systems are obtained. Subsequently, the optimal tracking control problem is converted to an optimal regulation problem by establishing a tracking error system. Then, the optimal tracking control policy for each player is derived by solving the coupled event-triggered Hamilton-Jacobi (HJ) equations via a critic NN. Meanwhile, a novel weight updating rule is designed by adopting the concurrent learning method to relax the persistence of excitation (PE) condition. Moreover, an event-triggering condition is designed by using Lyapunov's direct method to guarantee the uniform ultimate boundedness (UUB) of the closed-loop multiplayer systems. Finally, the effectiveness of the developed method is verified on two different multiplayer nonlinear systems.
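The event-triggering idea common to several of the abstracts above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any paper's actual scheme: a scalar plant with an assumed Lyapunov-style trigger that refreshes the zero-order-hold control only when the gap between the true state and the last sampled state exceeds a state-dependent threshold. All parameter values (`a`, `b`, `k`, `sigma`) are made up for the sketch.

```python
def simulate_event_triggered(a=-2.0, b=1.0, k=1.5, sigma=0.1, dt=1e-3, T=5.0):
    """Scalar plant x' = a*x + b*u under a zero-order-hold control u = -k*x_hat.
    The controller is refreshed only when the gap |x - x_hat| exceeds the
    state-dependent threshold sigma*|x| (a common Lyapunov-based trigger)."""
    x, x_hat, events = 1.0, 1.0, 0
    for _ in range(int(T / dt)):
        if abs(x - x_hat) > sigma * abs(x):  # triggering condition violated
            x_hat = x                        # sample the state, update the control
            events += 1
        u = -k * x_hat
        x += dt * (a * x + b * u)            # Euler step of the plant
    return x, events

x_final, n_events = simulate_event_triggered()
```

The point of the mechanism is visible in the counts: the state still converges, but the controller is updated far fewer times than the 5000 simulation steps.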

3.
This paper develops a simplified optimized tracking control scheme using a reinforcement learning (RL) strategy for a class of nonlinear systems. Since a nonlinear control gain function is considered in the system modeling, it is challenging to extend existing RL-based optimal methods to tracking control: their algorithms are very complex, and they must also satisfy some strict conditions. Unlike these existing RL-based optimal methods, which derive the actor and critic training laws from the square of the Bellman residual error, a complex function consisting of multiple nonlinear terms, the proposed optimized scheme derives the two RL training laws from the negative gradient of a simple positive function, so that the algorithm is significantly simplified. Moreover, the actor and critic in RL are constructed by employing neural networks (NNs) to approximate the solution of the Hamilton–Jacobi–Bellman (HJB) equation. Finally, the feasibility of the proposed method is demonstrated by both Lyapunov stability theory and a simulation example.
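The structural idea of "training law as negative gradient of a simple positive function" can be illustrated with a toy adaptation law. This is only a generic sketch under assumed names: `w_star` stands in for the ideal weights (which the paper's laws track implicitly, not explicitly), and `P(w) = 0.5*||w - w_star||^2` is the simple positive function.

```python
import numpy as np

def adapt(w0, w_star, gamma=2.0, dt=0.01, steps=1000):
    """Euler-discretized adaptation law w_dot = -gamma * grad P(w) for the
    positive function P(w) = 0.5*||w - w_star||^2; the update direction is
    the negative gradient, so P decreases monotonically along the flow."""
    w = np.asarray(w0, dtype=float)
    w_star = np.asarray(w_star, dtype=float)
    for _ in range(steps):
        grad = w - w_star           # gradient of P at the current weights
        w = w - dt * gamma * grad   # negative-gradient update
    return w

w = adapt([0.0, 0.0], [1.0, -2.0])
```

Because the update descends a single scalar function rather than a squared Bellman residual with many nonlinear terms, its convergence analysis reduces to standard gradient-flow arguments.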

4.
In this article, an optimal bipartite consensus control (OBCC) scheme is proposed for heterogeneous multiagent systems (MASs) with input delay via a reinforcement learning (RL) algorithm. A directed signed graph is established to construct MASs with competitive and cooperative relationships, and a model reduction method is developed to tackle the input delay problem. Then, based on the Hamilton–Jacobi–Bellman (HJB) equation, a policy iteration method is utilized to design the bipartite consensus controller, which consists of a value function and an optimal controller. Further, a distributed event-triggered function is proposed to increase control efficiency, which only requires information from the agent itself and its neighboring agents. Based on an input-to-state stability (ISS) function and a Lyapunov function, sufficient conditions for the stability of the MASs are derived. Apart from that, an RL algorithm is employed to solve the event-triggered OBCC problem in MASs, where critic neural networks (NNs) and actor NNs estimate the value function and control policy, respectively. Finally, simulation results are given to validate the feasibility and efficiency of the proposed algorithm.

5.
In this paper, an adaptive output feedback event-triggered optimal control algorithm is proposed for partially unknown constrained-input continuous-time nonlinear systems. First, a neural network observer is constructed to estimate the unmeasurable states. Next, an event-triggered condition is established; only when this condition is violated is the event triggered and the state sampled. Then, an event-triggered synchronous integral reinforcement learning (ET-SIRL) control algorithm with a critic-actor neural network (NN) architecture is proposed to solve the event-triggered Hamilton–Jacobi–Bellman equation under the established event-triggered condition. The critic and actor NNs are used to approximate the cost function and the optimal event-triggered control law, respectively. Meanwhile, the event-triggered closed-loop system state and all the NN weight estimation errors are proved to be uniformly ultimately bounded by Lyapunov stability theory, and there is no Zeno behavior. Finally, two numerical examples are presented to show the effectiveness of the proposed ET-SIRL control algorithm.

6.
In this study, a finite-time online optimal controller was designed for a nonlinear wheeled mobile robotic system (WMRS) with inequality constraints, based on reinforcement learning (RL) neural networks. In addition, an extended cost function, obtained by introducing a penalty function to the original long-time cost function, was proposed to deal with the optimal control problem of the system with inequality constraints. A novel Hamilton-Jacobi-Bellman (HJB) equation containing the constraint conditions was defined to determine the optimal control input. Furthermore, two neural networks (NNs), a critic and an actor NN, were established to approximate the extended cost function and the optimal control input, respectively. The adaptation laws of the critic and actor NN were obtained with the gradient descent method. The semi-global practical finite-time stability (SGPFS) was proved using Lyapunov's stability theory. The tracking error converges to a small region near zero within the constraints in a finite period. Finally, the effectiveness of the proposed optimal controller was verified by a simulation based on a practical wheeled mobile robot model.

7.
This paper proposes an online adaptive approximate solution for the infinite-horizon optimal tracking control problem of continuous-time nonlinear systems with unknown dynamics. The requirement of the complete knowledge of system dynamics is avoided by employing an adaptive identifier in conjunction with a novel adaptive law, such that the estimated identifier weights converge to a small neighborhood of their ideal values. An adaptive steady-state controller is developed to maintain the desired tracking performance at the steady-state, and an adaptive optimal controller is designed to stabilize the tracking error dynamics in an optimal manner. For this purpose, a critic neural network (NN) is utilized to approximate the optimal value function of the Hamilton-Jacobi-Bellman (HJB) equation, which is used in the construction of the optimal controller. The learning of two NNs, i.e., the identifier NN and the critic NN, is continuous and simultaneous by means of a novel adaptive law design methodology based on the parameter estimation error. Stability of the whole system consisting of the identifier NN, the critic NN and the optimal tracking control is guaranteed using Lyapunov theory; convergence to a near-optimal control law is proved. Simulation results exemplify the effectiveness of the proposed method.

8.
A sufficient condition to solve an optimal control problem is to solve the Hamilton–Jacobi–Bellman (HJB) equation. However, finding a value function that satisfies the HJB equation for a nonlinear system is challenging. For an optimal control problem when a cost function is provided a priori, previous efforts have utilized feedback linearization methods which assume exact model knowledge, or have developed neural network (NN) approximations of the HJB value function. The result in this paper uses the implicit learning capabilities of the RISE control structure to learn the dynamics asymptotically. Specifically, a Lyapunov stability analysis is performed to show that the RISE feedback term asymptotically identifies the unknown dynamics, yielding semi-global asymptotic tracking. In addition, it is shown that the system converges to a state space system that has a quadratic performance index which has been optimized by an additional control element. An extension is included to illustrate how a NN can be combined with the previous results. Experimental results are given to demonstrate the proposed controllers.

9.
A new adaptive dynamic programming (ADP) method is applied to realize online optimal control of nonlinear continuous-time systems. First, the Hamilton-Jacobi-Bellman (HJB) equation is used to solve for the optimal control of the system, and a neural-network BP algorithm is applied to estimate the performance index in the HJB equation, thereby obtaining the optimal control of the nonlinear continuous-time system. Meanwhile, a new adaptive algorithm based on the parameter estimation error is introduced to solve the dynamic optimization problem online, and the convergence of the parameters is analyzed in detail by the Lyapunov method. Finally, simulation results verify the feasibility of the proposed method.

10.
王敏  黄龙旺  杨辰光 《自动化学报》2022,48(5):1234-1245
For a class of discrete-time nonlinear multi-input multi-output (MIMO) systems with actuator faults, this paper proposes an event-triggered adaptive critic fault-tolerant control scheme. The scheme comprises a critic network and an action network. In the critic network, to alleviate the jumps in the action network that the existing non-smooth binary utility function may cause, a smooth utility function is constructed using Gaussian functions, and the critic network is used to approximate the optimal performance index function. In the action network, future state information is converted into a function of the current system state via variable substitution, and an optimal tracking controller is designed in combination with an event-triggered mechanism. The controller introduces a dynamic compensation term, which not only suppresses the influence of actuator faults on system performance but also improves the control performance. Stability analysis shows that all signals are ultimately uniformly bounded and the tracking error converges to a small bounded neighborhood of the origin. Simulation results on a numerical system and a practical system verify the effectiveness of the scheme.

11.
《Automatica》2014,50(12):3281-3290
This paper addresses the model-free nonlinear optimal control problem based on data by introducing the reinforcement learning (RL) technique. It is known that the nonlinear optimal control problem relies on the solution of the Hamilton–Jacobi–Bellman (HJB) equation, a nonlinear partial differential equation that is generally impossible to solve analytically. Even worse, most practical systems are too complicated for an accurate mathematical model to be established. To overcome these difficulties, we propose a data-based approximate policy iteration (API) method that uses real system data rather than a system model. Firstly, a model-free policy iteration algorithm is derived and its convergence is proved. The implementation of the algorithm is based on the actor–critic structure, where actor and critic neural networks (NNs) are employed to approximate the control policy and cost function, respectively. To update the weights of the actor and critic NNs, a least-squares approach is developed based on the method of weighted residuals. The data-based API is an off-policy RL method, where the "exploration" is improved by arbitrarily sampling data on the state and input domain. Finally, we test the data-based API control design method on a simple nonlinear system, and further apply it to a rotational/translational actuator system. The simulation results demonstrate the effectiveness of the proposed method.
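The evaluate-then-improve loop at the heart of policy iteration is easiest to see in a scalar linear-quadratic special case. The sketch below is a model-based stand-in, not the paper's data-based method (which replaces the evaluation step with least squares on measured trajectories); the plant and cost parameters are assumed for illustration.

```python
import math

def policy_iteration(a=1.0, b=1.0, q=1.0, r=1.0, k=2.0, iters=30):
    """Policy iteration for the scalar plant x' = a*x + b*u with cost
    integral of (q*x^2 + r*u^2).  Evaluation solves the scalar Lyapunov
    equation 2*(a - b*k)*p + q + r*k**2 = 0 for the value weight p;
    improvement sets the gain k = b*p/r.  k must start stabilizing."""
    for _ in range(iters):
        a_cl = a - b * k                      # closed-loop drift (< 0)
        p = -(q + r * k * k) / (2.0 * a_cl)   # policy evaluation
        k = b * p / r                         # policy improvement
    return p, k

p_star, k_star = policy_iteration()
# For a=b=q=r=1 the Riccati equation 2*p - p**2 + 1 = 0 gives p = 1 + sqrt(2),
# which the iteration should reach from the stabilizing initial gain k=2.
```

Each pass solves an easy linear (Lyapunov) problem instead of the nonlinear Riccati/HJB equation directly, which is exactly the trade policy iteration makes in the general nonlinear setting.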

12.
In this paper, an observer design is proposed for nonlinear systems. A Hamilton–Jacobi–Bellman (HJB) equation based formulation is developed. The HJB equation is formulated using a suitable non-quadratic term in the performance functional to tackle magnitude constraints on the observer gain. Utilizing Lyapunov's direct method, the observer is proved to be optimal with respect to a meaningful cost. In the present algorithm, a neural network (NN) is used to approximate the value function and thereby find an approximate solution of the HJB equation by the least-squares method. With the time-varying HJB solution, we propose a dynamic optimal observer for the nonlinear system. The proposed algorithm has been applied to nonlinear systems over both finite and infinite time horizons. The necessary theoretical and simulation results are presented to validate the proposed algorithm.

13.
In this paper, the output feedback based finite-horizon near optimal regulation of nonlinear affine discrete-time systems with unknown system dynamics is considered by using neural networks (NNs) to approximate the Hamilton-Jacobi-Bellman (HJB) equation solution. First, an NN-based Luenberger observer is proposed to reconstruct both the system states and the control coefficient matrix. Next, reinforcement learning methodology with an actor-critic structure is utilized to approximate the time-varying solution, referred to as the value function, of the HJB equation by using a NN. To properly satisfy the terminal constraint, a new error term is defined and incorporated in the NN update law so that the terminal constraint error is also minimized over time. The NN with constant weights and a time-dependent activation function is employed to approximate the time-varying value function, which is subsequently utilized to generate the finite-horizon control policy, near optimal due to NN reconstruction errors. The proposed scheme functions in a forward-in-time manner without an offline training phase. Lyapunov analysis is used to investigate the stability of the overall closed-loop system. Simulation results are given to show the effectiveness and feasibility of the proposed method.

14.
The Hamilton-Jacobi-Bellman (HJB) equation corresponding to constrained control is formulated using a suitable nonquadratic functional. It is shown that the constrained optimal control law has the largest region of asymptotic stability (RAS). The value function of this HJB equation is obtained by solving a sequence of cost functions satisfying a sequence of Lyapunov equations (LE). A neural network is used to approximate the cost function associated with each LE using the method of least-squares on a well-defined region of attraction of an initial stabilizing controller. As the order of the neural network is increased, the least-squares solution of the HJB equation converges uniformly to the exact solution of the inherently nonlinear HJB equation associated with the saturating control inputs. The result is a nearly optimal constrained state feedback controller that has been tuned a priori off-line.
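The least-squares fitting step used in this line of work can be sketched in isolation. The snippet below is only an illustration under assumptions: a two-term polynomial basis stands in for the neural network, and a hypothetical target `v_true` stands in for the cost values produced by one Lyapunov-equation iterate.

```python
import numpy as np

def fit_value_weights(n_samples=101):
    """Least-squares fit of value-function weights on the fixed basis
    phi(x) = [x^2, x^4] over the region [-1, 1].  The target values here
    are synthetic (V(x) = 2*x^2 + 0.5*x^4), so the fit recovers the
    weights exactly; in practice the target comes from the LE iterate."""
    x = np.linspace(-1.0, 1.0, n_samples)
    Phi = np.column_stack([x**2, x**4])   # basis evaluated at sample points
    v_true = 2.0 * x**2 + 0.5 * x**4      # assumed target cost values
    w, *_ = np.linalg.lstsq(Phi, v_true, rcond=None)
    return w

w_fit = fit_value_weights()
```

Enlarging the basis (the "order of the neural network") enlarges the space the fit searches over, which is what drives the uniform convergence claim in the abstract.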

15.
Considering the overshoot and chatter caused by unknown interference, this article studies the adaptive robust optimal control of continuous-time (CT) multi-input systems with an approximate dynamic programming (ADP) based Q-function scheme. An adaptive integral reinforcement learning (IRL) scheme is proposed to study the optimal solutions of Q-functions. First, multi-input value functions are presented, and the Nash equilibrium is analyzed. A complex Hamilton–Jacobi–Isaacs (HJI) equation is constructed from the multi-input system and the zero-sum-game-based value function. Solving the HJI equation for a nonlinear system is a challenging task, so a transformation of the HJI equation is constructed as a Q-function. A neural network (NN) is applied to learn the solution of the transformed Q-function based on the adaptive IRL scheme. Moreover, an error term is added to the Q-function to address insufficient initial excitation and relax the persistent excitation (PE) condition. Simultaneously, an IRL signal of the critic networks is introduced to study the intractable saddle-point solution, such that the system drift and NN derivatives in the HJI equation are relaxed. The convergence of the weight parameters is proved, and the closed-loop stability of the multi-input system with the proposed IRL Q-function scheme is analyzed. Finally, a two-engine driven F-16 aircraft plant and a nonlinear system are presented to verify the effectiveness of the proposed adaptive IRL Q-function scheme.

16.
In this paper, we present an empirical study of iterative least squares minimization of the Hamilton-Jacobi-Bellman (HJB) residual with a neural network (NN) approximation of the value function. Although the nonlinearities in the optimal control problem and NN approximator preclude theoretical guarantees and raise concerns of numerical instabilities, we present two simple methods for promoting convergence, the effectiveness of which is presented in a series of experiments. The first method involves the gradual increase of the horizon time scale, with a corresponding gradual increase in value function complexity. The second method involves the assumption of stochastic dynamics which introduces a regularizing second derivative term to the HJB equation. A gradual reduction of this term provides further stabilization of the convergence. We demonstrate the solution of several problems, including the 4-D inverted-pendulum system with bounded control. Our approach requires no initial stabilizing policy or any restrictive assumptions on the plant or cost function, only knowledge of the plant dynamics. In the Appendix, we provide the equations for first- and second-order differential backpropagation.
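The first convergence aid, growing the horizon so that value-function complexity grows with it, has a simple discrete-time analogue. The sketch below is not the paper's method: it uses a scalar linear-quadratic plant (parameters assumed) where the finite-horizon value weight is computed by a backward Riccati recursion, so the effect of lengthening the horizon can be seen directly.

```python
def horizon_value(a=0.9, b=1.0, q=1.0, r=1.0, N=200):
    """Time-zero value weight p for the horizon-N LQR problem with scalar
    dynamics x+ = a*x + b*u and stage cost q*x^2 + r*u^2, computed by the
    backward Riccati recursion from terminal weight 0.  As N grows, p
    increases toward the infinite-horizon solution, so a short-horizon
    solution is a natural warm start for a longer one."""
    p = 0.0
    for _ in range(N):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return p

p_short = horizon_value(N=3)     # cheap, low-complexity value
p_inf = horizon_value(N=200)     # effectively the infinite-horizon value
```

For these parameters the infinite-horizon weight satisfies the scalar algebraic Riccati equation, which reduces to p^2 - 0.81*p - 1 = 0, giving a quick correctness check.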

17.
In this paper, we propose an identifier–critic-based approximate dynamic programming (ADP) structure to online solve the H∞ control problem of nonlinear continuous-time systems without knowing the precise system dynamics, where the actor neural network (NN) that has been widely used in the standard ADP learning structure is avoided. We first use an identifier NN to approximate the completely unknown nonlinear system dynamics and disturbances. Then, another critic NN is proposed to approximate the solution of the induced optimal equation. The H∞ control pair is obtained by using the proposed identifier–critic ADP structure. A recently developed adaptation algorithm is used to directly estimate the unknown NN weights online and simultaneously, where the convergence to the optimal solution can be rigorously guaranteed, and the stability of the closed-loop system is analysed. Thus, this new ADP scheme can improve the computational efficiency of H∞ control implementation. Finally, simulation results confirm the effectiveness of the proposed methods.

18.
In this paper, an observer-based optimal control scheme is developed for unknown nonlinear systems using adaptive dynamic programming (ADP) algorithm. First, a neural-network (NN) observer is designed to estimate system states. Then, based on the observed states, a neuro-controller is constructed via ADP method to obtain the optimal control. In this design, two NN structures are used: a three-layer NN is used to construct the observer which can be applied to systems with higher degrees of nonlinearity and without a priori knowledge of system dynamics, and a critic NN is employed to approximate the value function. The optimal control law is computed using the critic NN and the observer NN. Uniform ultimate boundedness of the closed-loop system is guaranteed. The actor, critic, and observer structures are all implemented in real-time, continuously and simultaneously. Finally, simulation results are presented to demonstrate the effectiveness of the proposed control scheme.

19.
When solving discrete-time nonlinear zero-sum game problems, to effectively reduce network communication and the number of controller executions while guaranteeing good control performance, this article proposes an event-driven optimal control scheme. First, an event-driven condition with a novel event-driven threshold is designed, and the expression for the optimal control pair is obtained from Bellman's optimality principle. To solve the optimal value function in this expression, a single-network value iteration algorithm is proposed: a single neural network is used to construct the critic network, and a new critic-network weight update rule is designed. By iterating among the critic network, the control policy, and the disturbance policy, the optimal value function and the optimal control pair of the zero-sum game are finally obtained. Then, the stability of the closed-loop system is proved using Lyapunov stability theory. Finally, the event-driven optimal control scheme is applied to two simulation examples to verify the effectiveness of the proposed method.

20.
Cai  Yuliang  Zhang  Huaguang  Zhang  Kun  Liu  Chong 《Neural computing & applications》2020,32(13):8763-8781

In this paper, a novel online iterative scheme, based on fuzzy adaptive dynamic programming, is proposed for distributed optimal leader-following consensus of heterogeneous nonlinear multi-agent systems under a directed communication graph. This scheme combines game theory and adaptive dynamic programming with the generalized fuzzy hyperbolic model (GFHM). Firstly, based on a precompensation technique, an appropriate model transformation is proposed to convert the error system into an augmented error system, and an exquisite performance index function is defined for this system. Secondly, on the basis of the Hamilton–Jacobi–Bellman (HJB) equation, the optimal consensus control is designed and a novel policy iteration (PI) algorithm is put forward to learn the solutions of the HJB equation online. Here, the proposed PI algorithm is implemented using GFHMs. Compared with the dual-network model comprising a critic network and an action network, the proposed scheme only requires the critic network. Thirdly, the augmented consensus error of each agent and the weight estimation error of each GFHM are proved to be uniformly ultimately bounded, and the stability of our method is verified. Finally, some numerical examples and application examples are conducted to demonstrate the effectiveness of the theoretical results.

