Similar Literature
20 similar documents found
1.
The two‐player zero‐sum (ZS) game problem provides the solution to the bounded L2‐gain problem and so is important for robust control. However, its solution depends on solving a design Hamilton–Jacobi–Isaacs (HJI) equation, which is generally intractable for nonlinear systems. In this paper, we present an online adaptive learning algorithm based on policy iteration to solve the continuous‐time two‐player ZS game with infinite horizon cost for nonlinear systems with known dynamics. That is, the algorithm learns online in real time an approximate local solution to the game HJI equation. This method finds, in real time, suitable approximations of the optimal value and the saddle point feedback control policy and disturbance policy, while also guaranteeing closed‐loop stability. The adaptive algorithm is implemented as an actor/critic/disturbance structure that involves simultaneous continuous‐time adaptation of critic, actor, and disturbance neural networks. We call this online gaming algorithm ‘synchronous’ ZS game policy iteration. A persistence of excitation condition is shown to guarantee convergence of the critic to the actual optimal value function. Novel tuning algorithms are given for critic, actor, and disturbance networks. The convergence to the optimal saddle point solution is proven, and stability of the system is also guaranteed. Simulation examples show the effectiveness of the new algorithm in solving the HJI equation online for a linear system and a complex nonlinear system. Copyright © 2011 John Wiley & Sons, Ltd.
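For orientation, the game HJI equation referred to above has the following standard form for input-affine dynamics $\dot x = f(x) + g(x)u + k(x)d$ with attenuation level $\gamma$ (a textbook formulation; the notation is assumed here rather than taken from the paper):

$$
0 = \nabla V^{T}\bigl(f + g u^{*} + k d^{*}\bigr) + Q(x) + u^{*T} R\, u^{*} - \gamma^{2} d^{*T} d^{*},
\qquad
u^{*} = -\tfrac{1}{2} R^{-1} g^{T} \nabla V,
\qquad
d^{*} = \tfrac{1}{2\gamma^{2}} k^{T} \nabla V .
$$

The critic network approximates $V$, while the actor and disturbance networks approximate the saddle-point policies $u^{*}$ and $d^{*}$.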

2.
In this paper we discuss an online algorithm based on policy iteration for learning the continuous-time (CT) optimal control solution with infinite horizon cost for nonlinear systems with known dynamics. That is, the algorithm learns online in real-time the solution to the optimal control design HJ equation. This method finds in real-time suitable approximations of both the optimal cost and the optimal control policy, while also guaranteeing closed-loop stability. We present an online adaptive algorithm implemented as an actor/critic structure which involves simultaneous continuous-time adaptation of both actor and critic neural networks. We call this ‘synchronous’ policy iteration. A persistence of excitation condition is shown to guarantee convergence of the critic to the actual optimal value function. Novel tuning algorithms are given for both critic and actor networks, with extra nonstandard terms in the actor tuning law being required to guarantee closed-loop dynamical stability. The convergence to the optimal controller is proven, and the stability of the system is also guaranteed. Simulation examples show the effectiveness of the new algorithm.
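As a rough illustration of the synchronous actor/critic adaptation described above, the following minimal sketch Euler-integrates a normalized-gradient critic law and a simplified actor law for a scalar system. All dynamics, bases, gains, and the probing signal are illustrative assumptions, and the paper's extra nonstandard actor terms are omitted:

```python
import numpy as np

# Assumed scalar plant dx/dt = f(x) + g(x)u with known dynamics, cost
# integrand Q*x^2 + R*u^2, and a two-term polynomial basis for V(x).
f = lambda x: -x + 0.25 * x**3
g = lambda x: 1.0
Q, R = 1.0, 1.0
phi  = lambda x: np.array([x**2, x**4])      # critic/actor basis
dphi = lambda x: np.array([2*x, 4*x**3])     # basis gradient

Wc = np.ones(2)          # critic weights: V(x) ~ Wc . phi(x)
Wa = np.ones(2)          # actor weights used to form the control
alpha_c, alpha_a, dt = 10.0, 1.0, 1e-3

x = 1.0
for k in range(100_000):
    t = k * dt
    u = -0.5 / R * g(x) * (Wa @ dphi(x))     # actor policy
    u += 0.1 * np.sin(5.0 * t)               # probing signal (persistence of excitation)
    xdot = f(x) + g(x) * u
    sigma = dphi(x) * xdot                   # regressor: d(phi(x))/dt along the trajectory
    e = sigma @ Wc + Q * x**2 + R * u**2     # continuous-time Bellman residual
    Wc += -dt * alpha_c * e * sigma / (1.0 + sigma @ sigma) ** 2
    Wa += -dt * alpha_a * (Wa - Wc)          # simplified actor law: actor chases critic
    x += dt * xdot
```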

3.
This paper proposes a novel optimal adaptive event-triggered control algorithm for nonlinear continuous-time systems. The goal is to reduce the number of controller updates by sampling the state only when an event is triggered, while maintaining stability and optimality. The online algorithm is implemented based on an actor/critic neural network structure. A critic neural network is used to approximate the cost and an actor neural network is used to approximate the optimal event-triggered controller. Since the proposed algorithm involves dynamics that exhibit both continuous evolutions, described by ordinary differential equations, and instantaneous jumps or impulses, we adopt an impulsive system approach. A Lyapunov stability proof ensures that the closed-loop system is asymptotically stable. Finally, we illustrate the effectiveness of the proposed solution compared to a time-triggered controller.
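The flow/jump pattern of such an impulsive, event-triggered loop can be sketched as follows; the policy `mu` and the triggering threshold are placeholders, not the paper's learned controller:

```python
import numpy as np

def mu(x):                       # placeholder stabilizing policy (not the learned actor)
    return -2.0 * x

dt, T = 1e-3, 10.0
x, x_hat = np.array([1.0]), np.array([1.0])    # state and last sampled state
u, events = mu(x_hat), 0

for k in range(int(T / dt)):
    gap = np.linalg.norm(x - x_hat)
    if gap >= 0.1 * np.linalg.norm(x) + 1e-4:  # illustrative triggering condition
        x_hat = x.copy()                       # "jump": sample the state at the event
        u = mu(x_hat)                          # controller updated only at events
        events += 1
    x = x + dt * (-x + u)                      # "flow": plant integrates with held u
print(f"{events} controller updates over {T:.0f} s "
      f"vs {int(T/dt)} for time-triggered control")
```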

4.
In this paper, we consider the problem of leader synchronization in systems with interacting agents in large networks while simultaneously satisfying energy‐related user‐defined distributed optimization criteria. Since modeling in large networks is very difficult, we derive a model‐free formulation based on a separate distributed Q‐learning function for every agent. Every Q‐function is a parametrization of each agent's control, of the neighborhood controls, and of the neighborhood tracking error. Notably, no agent has any information about where the leader is connected or from where it spreads the desired information. The proposed algorithm uses an integral reinforcement learning approach with a separate distributed actor/critic network for each agent: a critic approximator to approximate each value function and an actor approximator to approximate each optimal control law. The tuning laws for the actor and critic approximators are designed using gradient descent. We provide rigorous stability and convergence proofs to show that the closed‐loop system has an asymptotically stable equilibrium point and that the control policies form a graphical Nash equilibrium. We demonstrate the effectiveness of the proposed method on a network consisting of 10 agents. Copyright © 2016 John Wiley & Sons, Ltd.
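In the standard graphical-game notation that such distributed Q-functions build on, each agent's neighborhood tracking error is (symbols assumed, with $a_{ij}$ the adjacency weights and $g_i$ the pinning gain, nonzero only for the agents connected to the leader $x_0$):

$$
e_i \;=\; \sum_{j \in N_i} a_{ij}\,\bigl(x_j - x_i\bigr) \;+\; g_i\,\bigl(x_0 - x_i\bigr),
$$

which is computable from local, neighbor-only information, consistent with the abstract's remark that no agent knows where the leader is pinned.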

5.
This article presents a novel actor‐critic‐barrier structure for multiplayer safety‐critical systems. Non‐zero‐sum (NZS) games with full‐state constraints are first transformed into unconstrained NZS games using a barrier function. The barrier function is capable of dealing with both symmetric and asymmetric constraints on the state. It is shown that the Nash equilibrium of the unconstrained NZS game is guaranteed to stabilize the original multiplayer system. The barrier function is combined with an actor‐critic structure to learn the Nash equilibrium solution in an online fashion. It is shown that integrating the barrier function with the actor‐critic structure guarantees that the constraints will not be violated during learning. Boundedness and stability of the closed‐loop signals are analyzed. The efficacy of the presented approach is finally demonstrated with a simulation example.
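As a concrete illustration of such a state transformation, one common barrier-function choice for a symmetric constraint $x \in (-A, A)$ (an assumed example; the paper's barrier also covers asymmetric bounds) is

$$
s = b(x) = \operatorname{artanh}\!\left(\frac{x}{A}\right),
\qquad
x = b^{-1}(s) = A \tanh(s),
$$

so the transformed state $s$ ranges over all of $\mathbb{R}$ while $x$ remains strictly inside the constraint set; playing the unconstrained game in the $s$ coordinates therefore keeps the original state feasible throughout learning.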

6.
霍煜, 王鼎, 乔俊飞. 《控制与决策》, 2023, 38(11): 3066-3074
For a class of continuous-time nonlinear systems with uncertainties, a robust tracking control method based on single-network critic learning is proposed. First, an augmented system composed of the tracking error and the reference trajectory is constructed, which converts the robust tracking control problem into a stabilization design problem. By adopting a cost function with a discount factor and a special utility term, the robust stabilization problem is then transformed into an optimal control problem. Next, a critic neural network is constructed to estimate the optimal cost function, from which the optimal tracking control algorithm is obtained. To relax the algorithm's requirement for an initial admissible control, an additional term is introduced into the weight update law of the critic neural network. The Lyapunov method is used to prove the stability and robust tracking performance of the closed-loop system. Finally, simulation results verify the effectiveness and applicability of the method.

7.
In this paper, we introduce an online algorithm that uses integral reinforcement knowledge for learning the continuous‐time optimal control solution for nonlinear systems with infinite horizon costs and partial knowledge of the system dynamics. This algorithm is a data‐based approach to the solution of the Hamilton–Jacobi–Bellman equation, and it does not require explicit knowledge of the system's drift dynamics. A novel adaptive control algorithm is given that is based on policy iteration and implemented using an actor/critic structure having two adaptive approximator structures. Both actor and critic approximation networks are adapted simultaneously. A persistence of excitation condition is required to guarantee convergence of the critic to the actual optimal value function. Novel adaptive control tuning algorithms are given for both critic and actor networks, with extra terms in the actor tuning law being required to guarantee closed‐loop dynamical stability. The approximate convergence to the optimal controller is proven, and stability of the system is also guaranteed. Simulation examples support the theoretical result. Copyright © 2013 John Wiley & Sons, Ltd.
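Concretely, the data-based relation behind integral reinforcement learning is the integral form of the Bellman equation: for any reinforcement interval $T > 0$ (notation assumed, in the usual form),

$$
V\bigl(x(t)\bigr) \;=\; \int_{t}^{t+T} \Bigl( Q\bigl(x(\tau)\bigr) + u^{T}(\tau)\, R\, u(\tau) \Bigr)\, d\tau \;+\; V\bigl(x(t+T)\bigr),
$$

which involves only measured state and input data over $[t, t+T]$; the drift term $f(x)$ never appears, which is why only partial knowledge of the dynamics is required.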

8.
We propose a novel event‐triggered optimal tracking control algorithm for nonlinear systems with an infinite horizon discounted cost. The problem is formulated by appropriately augmenting the system and the reference dynamics and then using ideas from reinforcement learning to provide a solution. Namely, a critic network is used to estimate the optimal cost while an actor network is used to approximate the optimal event‐triggered controller. Because the actor network updates only when an event occurs, we use a zero‐order hold along with appropriate tuning laws to account for this behavior. Because we have dynamics that evolve in continuous and discrete time, we write the closed‐loop system as an impulsive model and prove asymptotic stability of the equilibrium point and exclusion of Zeno behavior. Simulation results of a helicopter, a one‐link rigid robot under a gravitation field, and a controlled Van der Pol oscillator are presented to show the efficacy of the proposed approach. Copyright © 2016 John Wiley & Sons, Ltd.
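The augmentation step referred to above typically stacks the tracking error with the reference state $x_d$ (generated by $\dot x_d = h(x_d)$), so that tracking becomes regulation of the augmented state under a discounted cost (a standard construction; the symbols are assumptions, not taken from the paper):

$$
X = \begin{bmatrix} x - x_d \\ x_d \end{bmatrix},
\qquad
J\bigl(X(0)\bigr) = \int_{0}^{\infty} e^{-\alpha \tau}\, \bigl( X^{T} Q_a X + u^{T} R\, u \bigr)\, d\tau, \quad \alpha > 0 ,
$$

where the discount factor $e^{-\alpha \tau}$ keeps the cost finite even though the reference (and hence the steady-state control) need not vanish.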

9.
In this paper we present an online adaptive control algorithm based on policy iteration reinforcement learning techniques to solve the continuous-time (CT) multi-player non-zero-sum (NZS) game with infinite horizon for linear and nonlinear systems. NZS games allow players to have a cooperative team component and an individual selfish component of strategy. The adaptive algorithm learns online the solution of coupled Riccati equations and coupled Hamilton–Jacobi equations for linear and nonlinear systems, respectively. This adaptive control method finds, in real time, approximations of the optimal value and the NZS Nash equilibrium, while also guaranteeing closed-loop stability. The optimal-adaptive algorithm is implemented as a separate actor/critic parametric network approximator structure for every player, and involves simultaneous continuous-time adaptation of the actor/critic networks. A persistence of excitation condition is shown to guarantee convergence of every critic to the actual optimal value function for that player. A detailed mathematical analysis is done for 2-player NZS games. Novel tuning algorithms are given for the actor/critic networks. The convergence to the Nash equilibrium is proven and stability of the system is also guaranteed. This provides optimal adaptive control solutions for both non-zero-sum games and their special case, zero-sum games. Simulation examples show the effectiveness of the new algorithm.
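For the nonlinear case, the coupled Hamilton–Jacobi equations mentioned above take the standard $N$-player form (notation assumed):

$$
0 = \nabla V_i^{T}\Bigl(f + \sum_{j=1}^{N} g_j u_j^{*}\Bigr) + Q_i(x) + \sum_{j=1}^{N} u_j^{*T} R_{ij}\, u_j^{*},
\qquad
u_i^{*} = -\tfrac{1}{2} R_{ii}^{-1} g_i^{T} \nabla V_i, \quad i = 1, \dots, N,
$$

coupled because each $V_i$ depends on every other player's policy $u_j^{*}$; for linear dynamics and quadratic costs these reduce to the coupled Riccati equations.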

10.
An online adaptive optimal control scheme is proposed for continuous-time nonlinear systems with completely unknown dynamics, which is achieved by developing a novel identifier-critic-based approximate dynamic programming algorithm with a dual neural network (NN) approximation structure. First, an adaptive NN identifier is designed to obviate the requirement of complete knowledge of the system dynamics, and a critic NN is employed to approximate the optimal value function. Then, the optimal control law is computed based on the information from the identifier NN and the critic NN, so that the actor NN is not needed. In particular, a novel adaptive law design method with the parameter estimation error is proposed to online update the weights of both the identifier NN and the critic NN simultaneously, which converge to small neighbourhoods around their ideal values. The closed-loop system stability and the convergence to a small vicinity around the optimal solution are proved by means of Lyapunov theory. The proposed adaptation algorithm is also improved to achieve finite-time convergence of the NN weights. Finally, simulation results are provided to exemplify the efficacy of the proposed methods.
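A sketch of why no actor network is needed: once the identifier supplies an estimate $\hat g$ of the input gain and the critic supplies an estimate of $\nabla V$, the control follows in closed form. The shapes, basis, and names below are illustrative assumptions, not the paper's exact update laws:

```python
import numpy as np

def control_from_identifier_and_critic(x, g_hat, Wc, dphi, R_inv):
    """u = -1/2 R^{-1} g_hat(x)^T grad V_hat(x), with V_hat = Wc . phi(x).

    No actor network: the policy is computed directly from the identifier's
    input-gain estimate and the critic's value-gradient estimate.
    """
    grad_V = dphi(x).T @ Wc                     # critic gradient estimate, shape (n,)
    return -0.5 * R_inv @ g_hat(x).T @ grad_V

# Illustrative 2-state, 1-input example with basis [x1^2, x2^2, x1*x2].
dphi  = lambda x: np.array([[2*x[0], 0.0],
                            [0.0, 2*x[1]],
                            [x[1], x[0]]])      # rows = basis gradients
g_hat = lambda x: np.array([[0.0], [1.0]])      # identifier-NN estimate of g(x)
Wc    = np.array([0.8, 1.2, 0.1])               # critic weights (assumed converged)
u = control_from_identifier_and_critic(np.array([0.5, -0.3]),
                                       g_hat, Wc, dphi, np.array([[1.0]]))
```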

11.
This paper introduces an observer-based adaptive optimal control method for unknown singularly perturbed nonlinear systems with input constraints. First, a multi-time-scales dynamic neural network (MTSDNN) observer with a novel updating law derived from a properly designed Lyapunov function is proposed to estimate the system states. Then, an adaptive learning rule driven by the critic NN weight error is presented for the critic NN, which is used to approximate the optimal cost function. Finally, the optimal control action is calculated by online solving the Hamilton-Jacobi-Bellman (HJB) equation associated with the MTSDNN observer and critic NN. The stability of the overall closed-loop system consisting of the MTSDNN observer, the critic NN and the optimal control action is proved. The proposed observer-based optimal control approach has the essential advantage that the system dynamics are not needed for implementation, and only the measured input/output data are needed. Moreover, the proposed optimal control design takes the input constraints into consideration and thus can overcome the restriction of actuator saturation. Simulation results are presented to confirm the validity of the investigated approach.
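For reference, input constraints of this kind are commonly handled with a nonquadratic integrand in the cost; for a saturation level $\lambda$, a standard choice and the resulting bounded policy are (an assumed textbook formulation, not necessarily the paper's exact one):

$$
U(u) = 2 \int_{0}^{u} \lambda \tanh^{-1}\!\bigl(v/\lambda\bigr)^{T} R\, dv,
\qquad
u^{*} = -\lambda \tanh\!\Bigl(\tfrac{1}{2\lambda} R^{-1} g^{T}(x)\, \nabla V\Bigr),
$$

so the optimal control satisfies $|u^{*}| \le \lambda$ by construction, which is what overcomes the restriction of actuator saturation.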

12.
Cai Yuliang, Zhang Huaguang, Zhang Kun, Liu Chong. Neural Computing & Applications, 2020, 32(13): 8763-8781

In this paper, a novel online iterative scheme, based on fuzzy adaptive dynamic programming, is proposed for distributed optimal leader-following consensus of heterogeneous nonlinear multi-agent systems under a directed communication graph. This scheme combines game theory and adaptive dynamic programming with the generalized fuzzy hyperbolic model (GFHM). Firstly, based on a precompensation technique, an appropriate model transformation is proposed to convert the error system into an augmented error system, and a suitable performance index function is defined for this system. Secondly, on the basis of the Hamilton–Jacobi–Bellman (HJB) equation, the optimal consensus control is designed and a novel policy iteration (PI) algorithm is put forward to learn the solutions of the HJB equation online. Here, the proposed PI algorithm is implemented by means of GFHMs. Compared with a dual-network model comprising a critic network and an action network, the proposed scheme requires only the critic network. Thirdly, the augmented consensus error of each agent and the weight estimation error of each GFHM are proved to be uniformly ultimately bounded, and the stability of the method is verified. Finally, numerical and application examples are presented to demonstrate the effectiveness of the theoretical results.


13.
This paper addresses the problem of online model identification for multivariable processes with nonlinear and time‐varying dynamic characteristics. For this purpose, two online multivariable identification approaches with self‐organizing neural network model structures are presented. The two adaptive radial basis function (RBF) neural networks are called the growing and pruning radial basis function (GAP‐RBF) network and the minimal resource allocation network (MRAN). The resulting identification algorithms start without a predefined model structure, and the dynamic model is generated autonomously using the sequential input‐output data pairs in real‐time applications. The extended Kalman filter (EKF) learning algorithm has been extended for both adaptive RBF‐based neural network approaches to estimate the free parameters of the identified multivariable model. The unscented Kalman filter (UKF) has been proposed as an alternative learning algorithm to enhance the accuracy and robustness of nonlinear multivariable processes in both the GAP‐RBF and MRAN based approaches. In addition, this paper comparatively studies the general applicability of particle filter (PF)‐based approaches in non‐Gaussian noisy environments. For this purpose, the unscented particle filter (UPF) is employed as an alternative to the EKF and UKF for online parameter estimation of self‐generating RBF neural networks. The performance of the proposed online identification approaches is evaluated on a highly nonlinear, time‐varying, multivariable non‐isothermal continuous stirred tank reactor (CSTR) benchmark problem. Simulation results demonstrate the good performance of all identification approaches, especially the GAP‐RBF approach incorporated with the UKF and UPF learning algorithms. Copyright © 2010 John Wiley and Sons Asia Pte Ltd and Chinese Automatic Control Society
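The EKF learning step for an RBF model treats the network parameters as the filter state under a random-walk model. The sketch below shows this for a small fixed-structure RBF with scalar input/output (the growing/pruning logic is omitted, and all names, sizes, and noise levels are illustrative assumptions):

```python
import numpy as np

K, sig = 5, 0.5                                  # number of units, fixed width
theta = np.concatenate([np.zeros(K),             # output weights w_i
                        np.linspace(-2, 2, K)])  # centres c_i
P = np.eye(2 * K)                                # parameter covariance
Qn, Rn = 1e-6 * np.eye(2 * K), 0.05              # process / measurement noise

def rbf(x, th):
    w, c = th[:K], th[K:]
    phi = np.exp(-(x - c) ** 2 / (2 * sig ** 2))
    return w @ phi, phi

rng = np.random.default_rng(0)
for _ in range(2000):                            # sequential input/output pairs
    x = rng.uniform(-2, 2)
    y = np.sin(2 * x) + rng.normal(0, 0.05)      # unknown process to identify
    y_hat, phi = rbf(x, theta)
    w, c = theta[:K], theta[K:]
    H = np.concatenate([phi,                             # dy/dw
                        w * phi * (x - c) / sig ** 2])   # dy/dc (Jacobian row)
    P = P + Qn                                   # predict: random-walk parameters
    S = H @ P @ H + Rn                           # innovation variance (scalar output)
    Kg = P @ H / S                               # Kalman gain
    theta = theta + Kg * (y - y_hat)             # EKF parameter update
    P = P - np.outer(Kg, H) @ P                  # covariance update
```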

14.
In this paper, a solution to the approximate tracking problem of sampled‐data systems with uncertain, time‐varying sampling intervals and delays is presented. Such time‐varying sampling intervals and delays can typically occur in the field of networked control systems. The uncertain, time‐varying sampling and network delays cause inexact feedforward, which induces a perturbation on the tracking error dynamics, for which a model is presented in this paper. Sufficient conditions for the input‐to‐state stability (ISS) of the tracking error dynamics with respect to this perturbation are given. To this end, two analysis approaches are developed: a discrete‐time approach and an approach in terms of delay impulsive differential equations. These ISS results provide bounds on the steady‐state tracking error as a function of the plant properties, the control design and the network properties. Moreover, it is shown that feedforward preview can significantly improve the tracking performance, and an online extremum seeking (nonlinear programming) algorithm is proposed to estimate the optimal preview time online. The results are illustrated on a mechanical motion control example showing the effectiveness of the proposed strategy and providing insight into the differences and commonalities between the two analysis approaches. Copyright © 2009 John Wiley & Sons, Ltd.

15.
An event-based iterative adaptive critic algorithm is designed to solve the zero-sum-game optimal tracking control problem for a class of non-affine systems. The steady control corresponding to the reference trajectory is obtained by a numerical method, which converts the zero-sum-game optimal tracking control problem of the unknown nonlinear system into an optimal regulation problem for the error system. To ensure that the closed-loop system achieves good control performance while effectively improving resource utilization, a suitable event-triggering condition is introduced to obtain a tracking policy pair that is updated only at triggering instants. Then, based on the designed triggering condition, the Lyapunov method is used to prove the asymptotic stability of the error system. Next, four neural networks are constructed to facilitate the implementation of the proposed algorithm. To improve the accuracy of the steady control corresponding to the desired trajectory, a model network is used to directly approximate the unknown system function rather than the error dynamics. A critic network, an action network, and a disturbance network are constructed to approximate the iterative cost function and the iterative tracking policy pair. Finally, two simulation examples verify the feasibility and effectiveness of the control method.

16.
For a class of nonlinear zero-sum differential games, this paper proposes an event-triggered adaptive dynamic programming (ET-ADP) algorithm to solve for the saddle point online. First, a new adaptive event-triggering condition is proposed. Then, a neural network (the critic network) whose input is the sampled data is used to approximate the optimal value function, and a novel weight update law is designed so that the value function, the control policy, and the disturbance policy are updated synchronously only at the triggering instants. Furthermore, Lyapunov stability theory is used to prove that the proposed algorithm obtains the saddle point of the nonlinear zero-sum differential game online without inducing Zeno behavior. Since the proposed ET-ADP algorithm updates the value function, control policy, and disturbance policy only when the triggering condition is satisfied, it can effectively reduce the computational burden and the network load. Finally, two simulation examples verify the effectiveness of the proposed ET-ADP algorithm.

17.
Vector‐valued controller cost functions that are solely data‐dependent and reflect multiple objectives of a control system are examined within the framework of unfalsified adaptive control. The notion of Pareto optimality of vector‐valued cost functions and the conditions under which they are cost‐detectable are discussed. A sampled data/discrete‐time Level‐Set controller switching algorithm is investigated which allows for the relaxation of the assumption that the controller cost function be monotonically nondecreasing in time. This opens up the possibility of the use of fading memory cost functions which are nonmonotone. When an active controller is falsified at the current threshold cost level, the Level‐Set switching algorithm replaces it by an effectively unique solution of the weighted Tchebycheff method, thus ensuring the selection of an unfalsified Pareto optimal controller. Theoretical results for convergence and stability of the adaptive system are given. Simulation results validate the use of cost‐detectable multi‐objective cost functions. An example of a cost‐detectable cost function which uses fading memory norm of the fictitious tracking error as a performance measure is shown. This allows for computation of performance of nonactive controllers with respect to a reference model.
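A minimal sketch of the weighted Tchebycheff selection used by such a Level-Set switching step: among currently unfalsified controllers, pick the one minimizing the weighted Chebyshev distance of its vector cost to the utopia point. The controllers, costs, and weights below are illustrative assumptions:

```python
import numpy as np

costs = {                         # data-driven vector costs J(k) per candidate controller
    "K1": np.array([0.9, 0.30]),  # e.g. (fictitious tracking error, control effort)
    "K2": np.array([0.5, 0.60]),
    "K3": np.array([0.7, 0.35]),
}
w = np.array([0.5, 0.5])                                  # objective weights
z_star = np.min(np.stack(list(costs.values())), axis=0)   # utopia (ideal) point

def tcheby(J):
    return np.max(w * np.abs(J - z_star))  # weighted Tchebycheff scalarization

best = min(costs, key=lambda k: tcheby(costs[k]))  # Pareto-optimal selection
print(best, {k: round(tcheby(J), 3) for k, J in costs.items()})
```

The weighted Tchebycheff method is used here because, unlike a plain weighted sum, it can reach every Pareto-optimal point of a nonconvex trade-off surface.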

18.
To solve discrete-time nonlinear zero-sum game problems while effectively reducing network communication and the number of controller executions without sacrificing control performance, this paper proposes an optimal control scheme based on an event-driven mechanism. First, an event-driven condition employing a novel event-driven threshold is designed, and the expression for the optimal control pair is obtained from Bellman's principle of optimality. To solve for the optimal value function appearing in this expression, a single-network value iteration algorithm is proposed: a single neural network is used to construct the critic network, and a new rule for updating the critic network weights is designed. By iterating among the critic network, the control policy, and the disturbance policy, the optimal value function and the optimal control pair of the zero-sum game are finally obtained. Then, Lyapunov stability theory is used to prove the stability of the closed-loop system. Finally, the event-driven optimal control scheme is applied to two simulation examples, verifying the effectiveness of the proposed method.
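A toy sketch of the value-iteration backbone of such a scheme, with a tabular value function standing in for the critic network. The grid, dynamics, attenuation level, and the discount factor (added here for guaranteed convergence) are all illustrative assumptions, not the paper's design:

```python
import numpy as np

# Toy zero-sum value iteration:
#   V_{k+1}(x) = min_u max_d [ x^2 + u^2 - gamma^2 d^2 + beta * V_k(x') ]
xs = np.linspace(-2.0, 2.0, 41)         # state grid
us = ds = np.linspace(-1.0, 1.0, 11)    # control and disturbance action sets
gamma, beta = 2.0, 0.95                 # attenuation level, discount factor

def step(x, u, d):                      # assumed discrete-time dynamics
    return np.clip(0.9 * x + 0.2 * (u + d), xs[0], xs[-1])

V = np.zeros_like(xs)
for _ in range(200):                    # value iteration until (near) convergence
    V_new = np.empty_like(V)
    for i, x in enumerate(xs):
        q = np.array([[x**2 + u**2 - gamma**2 * d**2
                       + beta * np.interp(step(x, u, d), xs, V)
                       for d in ds] for u in us])
        V_new[i] = q.max(axis=1).min()  # saddle-point backup: min over u, max over d
    V = V_new
```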

19.
This paper proposes an intermittent model‐free learning algorithm for linear time‐invariant systems, where the control policy and transmission decisions are co‐designed simultaneously while also being subjected to worst‐case disturbances. The control policy is designed by introducing an internal dynamical system to further reduce the transmission rate and provide bandwidth flexibility in cyber‐physical systems. Moreover, a Q‐learning algorithm with two actors and a single critic structure is developed to learn the optimal parameters of a Q‐function. It is shown by using an impulsive system approach that the closed‐loop system has an asymptotically stable equilibrium and that no Zeno behavior occurs. Furthermore, a qualitative performance analysis of the model‐free dynamic intermittent framework is given, showing the degree of suboptimality with respect to the optimal continuously updated controller. Finally, a numerical simulation of an unknown system is carried out to highlight the efficacy of the proposed framework.

20.
This paper considers the optimal consensus control problem for unknown nonlinear multiagent systems (MASs) subject to control constraints by utilizing an event‐triggered adaptive dynamic programming (ETADP) technique. To deal with the control constraints, we introduce nonquadratic energy consumption functions into the performance indices and formulate the Hamilton‐Jacobi‐Bellman (HJB) equations. Then, based on Bellman's optimality principle, constrained optimal consensus control policies are designed from the HJB equations. In order to implement the ETADP algorithm, critic networks and action networks are developed to approximate the value functions and consensus control policies, respectively, based on the measurable system data. Under the event‐triggered control framework, the weights of the critic networks and action networks are updated only at the triggering instants, which are decided by the designed adaptive triggering conditions. The Lyapunov method is used to prove that the local neighbor consensus errors and the weight estimation errors of the critic networks and action networks are ultimately bounded. Finally, a numerical example is provided to show the effectiveness of the proposed ETADP method.
