Similar Literature
20 similar documents found
1.
In this paper, an adaptive output feedback event-triggered optimal control algorithm is proposed for partially unknown constrained-input continuous-time nonlinear systems. First, a neural network observer is constructed to estimate the unmeasurable states. Next, an event-triggered condition is established; only when this condition is violated is the event triggered and the state sampled. Then, an event-triggered synchronous integral reinforcement learning (ET-SIRL) control algorithm with a critic-actor neural network (NN) architecture is proposed to solve the event-triggered Hamilton–Jacobi–Bellman equation under the established condition. The critic and actor NNs approximate the cost function and the optimal event-triggered control law, respectively. Meanwhile, the closed-loop system state and all NN weight estimation errors are proved to be uniformly ultimately bounded via Lyapunov stability theory, and no Zeno behavior occurs. Finally, two numerical examples show the effectiveness of the proposed ET-SIRL control algorithm.
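As a rough illustration of the triggering mechanism described above, the sketch below re-samples the state only when the gap between the observer estimate and the last sampled state exceeds a threshold; the threshold form and the gains alpha and eps are placeholders, not the paper's condition.

```python
import numpy as np

def event_triggered(x_hat, x_sampled, alpha=0.5, eps=1e-3):
    """Fire an event when the gap violates the assumed threshold."""
    gap = np.linalg.norm(x_hat - x_sampled)
    return gap > alpha * np.linalg.norm(x_hat) + eps

x_sampled = np.zeros(2)
for k in range(100):
    x_hat = np.array([np.sin(0.1 * k), np.cos(0.1 * k)])  # stand-in observer output
    if event_triggered(x_hat, x_sampled):
        x_sampled = x_hat.copy()  # sample the state; the actor NN would recompute u here
```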

2.
In this article, an optimal bipartite consensus control (OBCC) scheme is proposed for heterogeneous multiagent systems (MASs) with input delay via a reinforcement learning (RL) algorithm. A directed signed graph is established to model MASs with competitive and cooperative relationships, and a model reduction method is developed to tackle the input delay problem. Then, based on the Hamilton–Jacobi–Bellman (HJB) equation, a policy iteration method is utilized to design the bipartite consensus controller, which consists of a value function and an optimal controller. Further, a distributed event-triggered function is proposed to increase control efficiency; it only requires information from the agent itself and its neighbors. Based on an input-to-state stability (ISS) function and a Lyapunov function, sufficient conditions for the stability of the MASs are derived. In addition, an RL algorithm is employed to solve the event-triggered OBCC problem, where critic and actor neural networks (NNs) estimate the value function and the control policy, respectively. Finally, simulation results validate the feasibility and efficiency of the proposed algorithm.
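As a minimal sketch of the signed-graph setting (not the paper's OBCC controller), the local bipartite consensus error below treats a negative edge weight as a competitive link; the adjacency matrix and states are illustrative:

```python
import numpy as np

A = np.array([[ 0.0, 1.0, -1.0],
              [ 1.0, 0.0,  0.0],
              [-1.0, 0.0,  0.0]])   # assumed signed adjacency matrix
x = np.array([1.0, 0.8, -0.9])      # scalar agent states

def bipartite_error(i):
    """Local error; sign(a_ij) flips the neighbor state on antagonistic edges."""
    return sum(abs(A[i, j]) * (x[i] - np.sign(A[i, j]) * x[j])
               for j in range(len(x)) if A[i, j] != 0)

errors = [bipartite_error(i) for i in range(len(x))]
```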

3.
In this paper, we consider the problem of leader synchronization in systems with interacting agents in large networks while simultaneously satisfying energy-related, user-defined distributed optimization criteria. Modeling in large networks is very difficult, so we derive a model-free formulation based on a separate distributed Q-learning function for every agent. Every Q-function is a parametrization of each agent's control, of the neighborhood controls, and of the neighborhood tracking error. None of the agents has any information about where the leader is connected or from where it spreads the desired information. The proposed algorithm uses an integral reinforcement learning approach with a separate distributed actor/critic network for each agent: a critic approximator to approximate each value function and an actor approximator to approximate each optimal control law. The tuning laws for the actor and critic approximators are designed using gradient descent. We provide rigorous stability and convergence proofs showing that the closed-loop system has an asymptotically stable equilibrium point and that the control policies form a graphical Nash equilibrium. We demonstrate the effectiveness of the proposed method on a network of 10 agents. Copyright © 2016 John Wiley & Sons, Ltd.

4.
In this paper, we discuss an online algorithm based on policy iteration for learning the continuous-time (CT) infinite-horizon optimal control solution for nonlinear systems with known dynamics. That is, the algorithm learns online, in real time, the solution to the optimal control design HJ equation. The method finds suitable approximations of both the optimal cost and the optimal control policy in real time, while also guaranteeing closed-loop stability. We present an online adaptive algorithm implemented as an actor/critic structure that involves simultaneous continuous-time adaptation of both actor and critic neural networks; we call this 'synchronous' policy iteration. A persistence of excitation condition is shown to guarantee convergence of the critic to the actual optimal value function. Novel tuning algorithms are given for both critic and actor networks, with extra nonstandard terms in the actor tuning law required to guarantee closed-loop dynamical stability. Convergence to the optimal controller is proven, and stability of the system is guaranteed. Simulation examples show the effectiveness of the new algorithm.
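A schematic, Euler-discretized impression of such simultaneous tuning is sketched below; the regressor, the rates, and the simplified actor law (which omits the paper's nonstandard stabilizing terms) are all assumptions:

```python
import numpy as np

def critic_step(Wc, sigma, bellman_err, a1=1.0, dt=1e-3):
    """Normalized gradient step on the squared Bellman residual."""
    m = 1.0 + sigma @ sigma
    return Wc - dt * a1 * (sigma / m**2) * bellman_err

def actor_step(Wa, Wc, a2=1.0, dt=1e-3):
    """Drive actor weights toward critic weights (stabilizing terms omitted)."""
    return Wa - dt * a2 * (Wa - Wc)

Wc, Wa = np.zeros(3), np.zeros(3)
sigma = np.array([0.2, -0.1, 0.05])  # critic regressor at the current state
bellman_err = 0.4                    # r(x,u) + dV/dx . f(x,u) residual (stand-in)
Wc = critic_step(Wc, sigma, bellman_err)
Wa = actor_step(Wa, Wc)
```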

5.
This paper develops a simplified optimized tracking control scheme using a reinforcement learning (RL) strategy for a class of nonlinear systems. Since a nonlinear control gain function is considered in the system model, extending existing RL-based optimal methods to tracking control is challenging: their algorithms are very complex, and they require some strict conditions to be met. Unlike these existing RL-based optimal methods, which derive the actor and critic training laws from the square of the Bellman residual error, a complex function consisting of multiple nonlinear terms, the proposed optimized scheme derives the two RL training laws from the negative gradient of a simple positive function, so that the algorithm is significantly simplified. Moreover, the actor and critic are constructed by employing neural networks (NNs) to approximate the solution of the Hamilton–Jacobi–Bellman (HJB) equation. Finally, the feasibility of the proposed method is demonstrated by both Lyapunov stability theory and a simulation example.
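The contrast can be made concrete with a toy sketch: instead of descending the squared Bellman residual, the update is taken as the negative gradient of a simple positive function, here a quadratic stand-in rather than the paper's construction:

```python
import numpy as np

def train_step(W, target, lr=0.05):
    """Negative-gradient law for P(W) = 0.5 * ||W - target||^2."""
    return W - lr * (W - target)

W, target = np.random.randn(4), np.zeros(4)
for _ in range(200):
    W = train_step(W, target)   # W converges to the minimizer of P
```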

6.
A new adaptive dynamic programming (ADP) method is applied to achieve online optimal control of continuous-time nonlinear systems. First, the Hamilton–Jacobi–Bellman (HJB) equation is used to obtain the optimal control of the system, and a neural network trained with the backpropagation (BP) algorithm estimates the performance index in the HJB equation, yielding the optimal control of the continuous-time nonlinear system. Meanwhile, a new adaptive algorithm based on parameter errors is introduced to solve the dynamic optimization problem online, and the convergence of the parameters is analyzed in detail via the Lyapunov method. Finally, simulation results verify the feasibility of the proposed method.
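A rough stand-in for the estimation step is sketched below: a small NN approximates the performance index V(x) and is trained by backpropagation on a Bellman-style residual; the network shape, data, and learning rate are illustrative only:

```python
import numpy as np

def v_net(x, W1, W2):
    h = np.tanh(W1 @ x)
    return W2 @ h, h

rng = np.random.default_rng(0)
W1, W2 = 0.1 * rng.normal(size=(8, 2)), 0.1 * rng.normal(size=8)
x, x_next, cost, lr = np.array([1.0, -0.5]), np.array([0.9, -0.4]), 0.3, 0.01

v, h = v_net(x, W1, W2)
v_next, _ = v_net(x_next, W1, W2)
td_err = v - (cost + v_next)                       # Bellman-style residual
W2 -= lr * td_err * h                              # output-layer BP step
W1 -= lr * td_err * np.outer(W2 * (1 - h**2), x)   # hidden-layer BP step
```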

7.
Data-based adaptive critic design for zero-sum game optimal control of discrete-time 2-D systems
An iterative adaptive critic design (ACD) algorithm is proposed to solve the two-player zero-sum game problem for a class of discrete-time 2-D systems in the Roesser model. The main idea is to use adaptive critic techniques to iteratively obtain the optimal control pair that makes the performance index function reach the saddle point of the zero-sum game. The proposed ACD can be implemented using input/output data without a model of the system. To implement the iterative ACD algorithm, neural networks are used to approximate the performance index function and to compute the optimal control law, respectively. Finally, the optimal control strategy is applied to an air-drying process to demonstrate its effectiveness.
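The saddle-point idea can be illustrated on a scalar linear-quadratic game (a toy stand-in for the 2-D Roesser setting): value iteration alternates between solving the stationarity conditions for the control pair and updating the quadratic value coefficient. All numbers are illustrative, and convergence assumes the attenuation level gamma is large enough.

```python
import numpy as np

# x+ = a*x + b*u + c*w, stage cost q*x^2 + r*u^2 - gamma^2*w^2, V(x) = p*x^2
a, b, c, q, r, gamma = 0.9, 1.0, 0.5, 1.0, 1.0, 2.0
p = 0.0
for _ in range(200):
    # stationarity in (u, w) at x = 1 (the game is homogeneous in x)
    M = np.array([[r + p * b * b, p * b * c],
                  [p * b * c,     p * c * c - gamma**2]])
    u, w = np.linalg.solve(M, [-p * a * b, -p * a * c])
    xn = a + b * u + c * w
    p = q + r * u**2 - gamma**2 * w**2 + p * xn**2   # Bellman update
```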

8.
In this paper, reinforcement learning state- and output-feedback-based adaptive critic controller designs are proposed using online approximators (OLAs) for general multi-input multi-output affine unknown nonlinear discrete-time systems in the presence of bounded disturbances. The proposed controller design has two entities: an action network designed to produce the optimal signal and a critic network that evaluates the performance of the action network. The critic estimates the cost-to-go function, which is tuned online using recursive equations derived from heuristic dynamic programming. Here, neural networks (NNs) are used for both the action and critic networks, although any OLA, such as radial basis functions, splines, or fuzzy logic, can be utilized. For the output-feedback counterpart, an additional NN is designated as an observer to estimate the unavailable system states, so the separation principle is not required. The NN weight tuning laws for the controller schemes are derived while ensuring uniform ultimate boundedness of the closed-loop system using Lyapunov theory. Finally, the effectiveness of the two controllers is tested in simulation on a pendulum balancing system and a two-link robotic arm system.

9.
This paper establishes a neural-network-based adaptive critic control method for multiplayer nonzero-sum games of continuous-time nonlinear systems with asymmetric input constraints. First, a novel nonquadratic function is proposed to handle the asymmetric constraints, and the optimal control laws and the coupled Hamilton–Jacobi equations are derived. Notably, unlike in previous work, the optimal control policy is nonzero when the system state is zero. Then, a single critic network is constructed to approximate each player's optimal cost function, from which the associated approximate optimal control policies are obtained. Meanwhile, a new weight-updating rule is developed for critic learning. Furthermore, the stability of the critic network weight approximation errors and of the closed-loop system states is proved via Lyapunov theory. Finally, simulation results verify the effectiveness of the proposed method.
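One common way to build such a nonquadratic term for an input confined to an asymmetric interval [u_min, u_max] is to shift the interval and integrate an inverse-tanh integrand; the construction below is an assumed illustration, not necessarily the paper's function. Its minimizer sits at the interval midpoint, which is why the optimal control need not vanish at zero state.

```python
import numpy as np
from scipy.integrate import quad

u_min, u_max, R = -1.0, 3.0, 1.0
mid, half = 0.5 * (u_max + u_min), 0.5 * (u_max - u_min)

def energy(u):
    """2*R * integral from mid to u of half*atanh((s - mid)/half) ds."""
    val, _ = quad(lambda s: half * np.arctanh((s - mid) / half), mid, u)
    return 2.0 * R * val

print(energy(0.0), energy(2.0))   # both positive; minimum is at u = mid
```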

10.
This paper proposes an adaptive critic tracking control design for a class of nonlinear systems using fuzzy basis function networks (FBFNs). The key component of the adaptive critic controller is the FBFN, which implements an associative learning network (ALN) to approximate unknown nonlinear system functions and an adaptive critic network (ACN) to generate the internal reinforcement learning signal that tunes the ALN. Another important component, the reinforcement learning signal generator, requires the solution of a linear matrix inequality (LMI), which must also be satisfied to ensure stability. Furthermore, the robust control technique easily rejects the effects of FBFN approximation errors and external disturbances. Unlike traditional adaptive critic controllers that learn from trial-and-error interactions, the proposed online tuning algorithm for the ALN and ACN is derived from Lyapunov theory, significantly shortening the learning time. Simulation results on a cart-pole system demonstrate the effectiveness of the proposed FBFN-based adaptive critic controller.
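For reference, a minimal FBFN evaluation (normalized Gaussian memberships weighted by adjustable consequents) looks like the sketch below; centers, width, and weights are illustrative:

```python
import numpy as np

centers, width = np.array([-1.0, 0.0, 1.0]), 0.5
theta = np.array([0.2, -0.4, 0.7])     # adjustable weights (tuned online)

def fbfn(x):
    mu = np.exp(-((x - centers) / width) ** 2)   # Gaussian memberships
    phi = mu / mu.sum()                          # fuzzy basis functions
    return theta @ phi

print(fbfn(0.3))
```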

11.
In this study, a finite-time online optimal controller was designed for a nonlinear wheeled mobile robotic system (WMRS) with inequality constraints, based on reinforcement learning (RL) neural networks. An extended cost function, obtained by introducing a penalty function into the original long-term cost function, was proposed to deal with the optimal control problem of the constrained system. A novel Hamilton-Jacobi-Bellman (HJB) equation containing the constraint conditions was defined to determine the optimal control input. Furthermore, two neural networks (NNs), a critic and an actor NN, were established to approximate the extended cost function and the optimal control input, respectively. The adaptation laws of the critic and actor NNs were obtained with the gradient descent method. Semi-global practical finite-time stability (SGPFS) was proved using Lyapunov stability theory; the tracking error converges to a small region near zero within the constraints in finite time. Finally, the effectiveness of the proposed optimal controller was verified by a simulation based on a practical wheeled mobile robot model.
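A minimal sketch of the extended-cost idea, with an assumed exterior penalty that is zero inside the feasible set and grows quadratically outside it:

```python
import numpy as np

def running_cost(x, u, rho=50.0):
    g = np.abs(x[0]) - 1.0               # example inequality constraint |x1| <= 1
    penalty = rho * max(g, 0.0) ** 2     # active only when the constraint is violated
    return x @ x + u**2 + penalty

print(running_cost(np.array([1.2, 0.0]), 0.1))
```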

12.
Adaptive critic (AC) based controllers are typically discrete and/or yield a uniformly ultimately bounded stability result because of the presence of disturbances and unknown approximation errors. A continuous-time AC controller is developed that yields asymptotic tracking for a class of uncertain nonlinear systems with bounded disturbances. The proposed AC-based controller consists of two neural networks (NNs): an action NN, also called the actor, which approximates the plant dynamics and generates appropriate control actions, and a critic NN, which evaluates the performance of the actor based on some performance index. The reinforcement signal from the critic is used to develop a composite weight tuning law for the action NN based on Lyapunov stability analysis. A recently developed robust feedback technique, the robust integral of the sign of the error (RISE), is used in conjunction with the feedforward action neural network to yield a semiglobal asymptotic result. Experimental results illustrate the performance of the developed controller.
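A discrete-time sketch of a RISE feedback term is given below, with illustrative gains; in the paper this feedback is combined with the feedforward action NN:

```python
import numpy as np

class Rise:
    """u(t) = (ks+1)*(e(t) - e(0)) + integral of (ks+1)*alpha*e + beta*sgn(e)."""
    def __init__(self, ks=5.0, alpha=2.0, beta=1.0, dt=1e-3):
        self.ks, self.alpha, self.beta, self.dt = ks, alpha, beta, dt
        self.integral, self.e0 = 0.0, None

    def __call__(self, e):
        if self.e0 is None:
            self.e0 = e
        self.integral += ((self.ks + 1) * self.alpha * e
                          + self.beta * np.sign(e)) * self.dt
        return (self.ks + 1) * (e - self.e0) + self.integral

rise = Rise()
u_fb = rise(0.2)   # feedback part of the control for tracking error e = 0.2
```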

13.
In this paper, a suite of adaptive neural network (NN) controllers is designed to deliver a desired tracking performance for the control of an unknown, second-order, nonlinear discrete-time system expressed in nonstrict feedback form. In the first approach, two feedforward NNs are employed in the controller, with the tracking error as the feedback variable, whereas the adaptive critic NN architecture uses three feedforward NNs. In the adaptive critic architecture, two action NNs produce virtual and actual control inputs, respectively, whereas the third, critic, NN approximates a certain strategic utility function, and its output is employed for tuning the action NN weights in order to attain near-optimal control actions. Both NN control methods present a well-defined controller design, and the noncausal problem in discrete-time backstepping design is avoided via NN approximation. A comparison between the controller methodologies is highlighted. The stability analysis of the closed-loop control schemes is demonstrated. The NN controller schemes do not require an offline learning phase, and the NN weights can be initialized at zero or randomly. Results show that the performance of the proposed controller schemes is highly satisfactory while meeting closed-loop stability.

14.
A new adaptive critic autopilot design for bank-to-turn missiles is presented. The architecture of the adaptive critic learning scheme contains a fuzzy-basis-function-network-based associative search element (ASE), which is employed to approximate the nonlinear and complex functions of bank-to-turn missiles, and an adaptive critic element (ACE) that generates the reinforcement signal to tune the ASE. In the design of the adaptive critic autopilot, the control law receives signals from a fixed-gain controller, the ASE, and an adaptive robust element, which can eliminate approximation errors and disturbances. Traditional adaptive critic reinforcement learning addresses the problem faced by an agent that must learn behavior through trial-and-error interactions with a dynamic environment; the proposed tuning algorithm, however, can significantly shorten the learning time by online tuning of all parameters of the fuzzy basis functions and the weights of the ASE and ACE. Moreover, the weight updating law, derived from Lyapunov stability theory, guarantees both tracking performance and stability. Computer simulation results confirm the effectiveness of the proposed adaptive critic autopilot.

15.
To solve discrete-time nonlinear zero-sum games while effectively reducing network communication and the number of controller executions without sacrificing control performance, this paper proposes an event-triggered optimal control scheme. First, an event-triggered condition with a novel triggering threshold is designed, and the expression for the optimal control pair is obtained from Bellman's optimality principle. To solve for the optimal value function in this expression, a single-network value iteration algorithm is proposed: one neural network is used to construct the critic network, and a new weight-updating rule for the critic network is designed. By iterating among the critic network, the control policy, and the disturbance policy, the optimal value function and the optimal control pair of the zero-sum game are obtained. Then, the stability of the closed-loop system is proved via Lyapunov stability theory. Finally, the proposed event-triggered optimal control scheme is applied to two simulation examples to verify its effectiveness.

16.
《Automatica》2014,50(12):3281-3290
This paper addresses the model-free nonlinear optimal control problem based on data by introducing the reinforcement learning (RL) technique. It is known that the nonlinear optimal control problem relies on the solution of the Hamilton–Jacobi–Bellman (HJB) equation, a nonlinear partial differential equation that is generally impossible to solve analytically. Even worse, most practical systems are too complicated for an accurate mathematical model to be established. To overcome these difficulties, we propose a data-based approximate policy iteration (API) method that uses real system data rather than a system model. First, a model-free policy iteration algorithm is derived and its convergence is proved. The implementation of the algorithm is based on the actor–critic structure, where actor and critic neural networks (NNs) approximate the control policy and cost function, respectively. To update the weights of the actor and critic NNs, a least-squares approach is developed based on the method of weighted residuals. The data-based API is an off-policy RL method, where "exploration" is improved by arbitrarily sampling data over the state and input domain. Finally, we test the data-based API control design method on a simple nonlinear system and further apply it to a rotational/translational actuator system. The simulation results demonstrate the effectiveness of the proposed method.
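Because the critic is linear in its weights, the least-squares step has a compact form: each sampled transition contributes one linear residual equation V(x) - V(x') ≈ cost, and the weights are fit in batch. The features, dynamics, and data below are toy stand-ins:

```python
import numpy as np

phi = lambda x: np.array([x[0]**2, x[0] * x[1], x[1]**2])   # critic features
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))          # arbitrarily sampled states ("exploration")
Xn = 0.9 * X                          # next states under assumed dynamics
cost = np.sum(X**2, axis=1)           # sampled stage costs

A = np.array([phi(x) - phi(xn) for x, xn in zip(X, Xn)])
W, *_ = np.linalg.lstsq(A, cost, rcond=None)   # weighted-residual LS fit of V
```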

17.
This paper considers the optimal consensus control problem for unknown nonlinear multiagent systems (MASs) subject to control constraints by utilizing an event-triggered adaptive dynamic programming (ETADP) technique. To deal with the control constraints, we introduce nonquadratic energy consumption functions into the performance indices and formulate the Hamilton-Jacobi-Bellman (HJB) equations. Then, based on Bellman's optimality principle, constrained optimal consensus control policies are designed from the HJB equations. In order to implement the ETADP algorithm, critic networks and action networks are developed to approximate the value functions and consensus control policies, respectively, based on measurable system data. Under the event-triggered control framework, the weights of the critic and action networks are updated only at the triggering instants, which are decided by the designed adaptive triggering conditions. The Lyapunov method is used to prove that the local neighbor consensus errors and the weight estimation errors of the critic and action networks are ultimately bounded. Finally, a numerical example shows the effectiveness of the proposed ETADP method.

18.
In this paper, we present an online adaptive control algorithm based on policy iteration reinforcement learning techniques to solve the continuous-time (CT) multiplayer non-zero-sum (NZS) game with infinite horizon for linear and nonlinear systems. NZS games allow players to have a cooperative team component and an individual selfish component of strategy. The adaptive algorithm learns online the solution of coupled Riccati equations and coupled Hamilton–Jacobi equations for linear and nonlinear systems, respectively. This adaptive control method finds, in real time, approximations of the optimal value and the NZS Nash equilibrium, while also guaranteeing closed-loop stability. The optimal-adaptive algorithm is implemented as a separate actor/critic parametric network approximator structure for every player and involves simultaneous continuous-time adaptation of the actor/critic networks. A persistence of excitation condition is shown to guarantee convergence of every critic to the actual optimal value function for that player. A detailed mathematical analysis is given for two-player NZS games. Novel tuning algorithms are given for the actor/critic networks. Convergence to the Nash equilibrium is proven, and stability of the system is guaranteed. This provides optimal adaptive control solutions for both non-zero-sum games and their special case, zero-sum games. Simulation examples show the effectiveness of the new algorithm.

19.
The optimal control issue of discrete-time nonlinear unknown systems with time-delay control input is the focus of this work. In order to reduce communication costs, a reinforcement-learning-based event-triggered controller is proposed. With the proposed control method, the closed-loop system's asymptotic stability is demonstrated, and a maximum upper bound for the infinite-horizon performance index can be calculated beforehand. The event-triggered condition requires the next-step state information; to forecast the next state and achieve optimal control, three neural networks (NNs) are introduced to approximate the system state, the value function, and the optimal control. Additionally, an M NN is utilized to cope with the time-delay term of the control input. Moreover, taking the estimation errors of the NNs into account, the uniform ultimate boundedness of the state and the NN weight estimation errors is guaranteed. Finally, the validity of the proposed approach is illustrated by simulations.
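As a toy impression of the state-prediction piece, the sketch below trains a small NN to map the current state and delayed input to the next state by gradient descent on the one-step prediction error; sizes and data are placeholders, not the paper's networks:

```python
import numpy as np

rng = np.random.default_rng(2)
W = 0.1 * rng.normal(size=(2, 3))                 # predictor weights

x, u_delayed = np.array([0.5, -0.2]), 0.1
x_next_true = np.array([0.45, -0.15])             # stand-in training target
z = np.array([x[0], x[1], u_delayed])             # state + delayed input
for _ in range(100):
    pred = np.tanh(W @ z)                         # predicted next state
    err = pred - x_next_true
    W -= 0.1 * np.outer(err * (1 - pred**2), z)   # backprop step
```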

20.
In this paper, performance-oriented control laws are synthesized for a class of single-input-single-output (SISO) nth-order nonlinear systems in a normal form by integrating neural network (NN) techniques and the adaptive robust control (ARC) design philosophy. All unknown but repeatable nonlinear functions in the system are approximated by the outputs of NNs to achieve better model compensation for improved performance. While all NN weights are tuned online, discontinuous projections with fictitious bounds are used in the tuning law to achieve controlled learning. Robust control terms are then constructed to attenuate model uncertainties for a guaranteed output tracking transient performance and a guaranteed final tracking accuracy. Furthermore, if the unknown nonlinear functions are in the functional ranges of the NNs and the ideal NN weights fall within the fictitious bounds, asymptotic output tracking is achieved, retaining the perfect learning capability of NNs. The precision motion control of a linear motor drive system is used as a case study to illustrate the proposed NNARC strategy.
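A minimal sketch of a discontinuous projection (a common form, assumed here rather than taken from the paper): the adaptation term is zeroed whenever it would push a weight estimate outside its fictitious bound, keeping the estimates in a prescribed box:

```python
import numpy as np

def proj(theta, update, lower=-1.0, upper=1.0):
    """Zero any component of the update that points out of the bound box."""
    out = update.copy()
    out[(theta >= upper) & (update > 0)] = 0.0
    out[(theta <= lower) & (update < 0)] = 0.0
    return out

theta = np.array([0.99, -1.0])
dtheta = np.array([0.5, -0.5])         # raw tuning-law update
theta += 0.01 * proj(theta, dtheta)    # projected update keeps theta bounded
```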

