Similar Literature
20 similar documents retrieved.
1.
Induction machines (IM) constitute a theoretically interesting and practically important class of nonlinear systems. They are frequently used as wind generators for their power/cost ratio. They are described by a fifth-order nonlinear differential equation with two inputs and only three state variables available for measurement. The control task is further complicated by the fact that IM are subject to unknown (load) disturbances and the parameters can be highly uncertain. One is then faced with the challenging problem of controlling a highly nonlinear system, with unknown time-varying parameters, where the regulated output, besides being unmeasurable, is perturbed by an unknown additive signal. Passivity-based control (PBC) is a well-established structure-preserving design methodology which has been shown to be very powerful for designing robust controllers for physical systems described by Euler-Lagrange equations of motion. PBCs provide a natural procedure to "shape" the potential energy, yielding controllers with a clear physical interpretation in terms of the interconnection of the system with its environment, and are robust vis-à-vis unmodeled dissipative effects. One recent approach of PBC is Interconnection and Damping Assignment Passivity-Based Control (IDA-PBC), a very useful technique for controlling nonlinear systems by assigning a desired (Port-Controlled Hamiltonian) structure to the closed loop. The aim of this paper is to give a survey of different PBC schemes for IM. The originality of this work is that the author proves that the well-known field-oriented control of IM is a particular case of IDA-PBC with disturbance.
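For context, the generic IDA-PBC matching condition can be sketched as follows (stated in its standard general form, not with the specific fifth-order induction-machine model considered in the paper):

```latex
% Open-loop dynamics and desired port-controlled Hamiltonian closed loop:
\dot{x} = f(x) + g(x)u, \qquad
\dot{x} = \bigl(J_d(x) - R_d(x)\bigr)\nabla H_d(x),
\quad J_d = -J_d^{\top}, \quad R_d = R_d^{\top} \succeq 0 .
% The control u = \beta(x) is obtained from the matching equation
f(x) + g(x)\beta(x) = \bigl(J_d(x) - R_d(x)\bigr)\nabla H_d(x),
% with the desired energy H_d shaped to have its minimum at the desired equilibrium.
```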

2.
《Applied Soft Computing》2007,7(3):818-827
This paper proposes a reinforcement learning (RL)-based game-theoretic formulation for designing robust controllers for nonlinear systems affected by bounded external disturbances and parametric uncertainties. Based on the theory of Markov games, we consider a differential game in which a 'disturbing' agent tries to generate the worst possible disturbance while a 'control' agent tries to generate the best possible control input. The problem is formulated as finding a min–max solution of a value function. We propose an online procedure for learning the optimal value function and for calculating a robust control policy. The proposed game-theoretic paradigm has been tested on the control task of a highly nonlinear two-link robot system. We compare the performance of the proposed Markov game controller with a standard RL-based robust controller and an H∞ theory-based robust game controller. For the robot control task, the proposed controller achieved superior robustness to changes in payload mass and external disturbances over the other control schemes. Results also validate the effectiveness of neural networks in extending the Markov game framework to problems with continuous state–action spaces.
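To make the min–max formulation concrete, the sketch below shows a tabular minimax Q-learning update for a zero-sum Markov game. The tabular setting, pure-strategy minimax, and all sizes are illustrative assumptions; the paper works in continuous state–action spaces with neural-network function approximation.

```python
import numpy as np

# Tabular sketch of minimax Q-learning for a zero-sum Markov game:
# the control agent minimises the cost while the disturbance agent maximises it.
n_states, n_controls, n_disturbances = 50, 5, 3
alpha, gamma = 0.1, 0.95                      # learning rate, discount factor
Q = np.zeros((n_states, n_controls, n_disturbances))

def minimax_value(q_s):
    # V(s) = min over control actions of the max over disturbance actions
    return np.min(np.max(q_s, axis=1))

def q_update(s, a, d, cost, s_next):
    target = cost + gamma * minimax_value(Q[s_next])
    Q[s, a, d] += alpha * (target - Q[s, a, d])

def robust_control(s):
    # control action that minimises the worst-case (over disturbances) value
    return int(np.argmin(np.max(Q[s], axis=1)))

# one illustrative transition: state 3, control 1, disturbance 0, cost 0.7, next state 4
q_update(3, 1, 0, 0.7, 4)
print(robust_control(3))
```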

3.
This paper investigates the problem of robust controller design for output-constrained and state-constrained uncertain switched nonlinear systems. By using the idea of p-times differentiable unbounded functions and the backstepping technique, a constructive method is proposed to design effective controllers such that the output of a class of uncertain switched nonlinear systems in lower triangular form can asymptotically track a constant reference signal without violating the output tracking error constraint. Furthermore, the proposed method is applied to the state-constrained robust stabilization problem for a class of general uncertain switched nonlinear systems. Finally, a simulation example is provided to demonstrate the effectiveness of the developed results.

4.
A method is presented for synthesizing output estimators and disturbance feedforward controllers for continuous-time, uncertain, gridded, linear parameter-varying (LPV) systems. Integral quadratic constraints are used to describe the uncertainty. Since gridded LPV systems do not have a valid frequency-domain interpretation, the time-domain, dissipation inequality approach is followed. There are two main contributions. The first is that a notion of duality is developed for the worst-case gain analysis of uncertain, gridded LPV systems. This includes notions of dual LPV systems and dual integral quadratic constraints. Furthermore, several technical results are developed to demonstrate that the sufficient conditions for bounding the worst-case gain of the primal and dual uncertain LPV systems are equivalent. The second contribution is that convex conditions are derived for the synthesis of robust output estimators for uncertain LPV systems. The estimator synthesis conditions, together with the duality results, enable the convex synthesis of robust disturbance feedforward controllers. The effectiveness of the proposed method is demonstrated using a numerical example.

5.
Second-order sliding mode (SOSM) control is used to exactly keep a constraint σ of relative degree two or to avoid the chattering phenomenon. Yet, traditional SOSM controllers are designed under the assumption that the uncertainties or their derivatives are bounded by positive constants. In this paper, a global SOSM controller is designed for a general class of single-input–single-output nonlinear systems with uncertainties bounded by positive functions. Moreover, a variable-gain robust exact differentiator is developed so that SOSM controllers with finite-time convergence can be implemented even when the derivative of the constraint σ is unavailable. Simulation results are given to show the effectiveness of the proposed method.
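For reference, a minimal constant-gain super-twisting sketch is given below for the relative-degree-one case σ̇ = u + d; the scalar example, gains, and time step are illustrative assumptions, whereas the paper's contribution is precisely the variable gains built from the positive bounding functions and the accompanying robust exact differentiator.

```python
import numpy as np

# Constant-gain super-twisting controller driving sigma to zero despite a
# bounded, Lipschitz disturbance d(t); gains chosen generously for illustration.
def make_super_twisting(k1=1.5, k2=1.1, dt=1e-3):
    v = 0.0                                    # internal integral state
    def control(sigma):
        nonlocal v
        u = -k1 * np.sqrt(abs(sigma)) * np.sign(sigma) + v
        v += -k2 * np.sign(sigma) * dt         # Euler step of v_dot = -k2*sign(sigma)
        return u
    return control

ctrl, dt = make_super_twisting(), 1e-3
sigma = 1.0
for k in range(20000):                         # simulate sigma_dot = u + d(t)
    d = 0.3 * np.sin(k * dt)                   # matched disturbance, |d_dot| <= 0.3
    sigma += (ctrl(sigma) + d) * dt
print("final |sigma| =", abs(sigma))
```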

6.
This paper proposes a novel approach to the problem of L2 disturbance attenuation with global stability for nonlinear uncertain systems, placing great emphasis on the seamless integration of linear and nonlinear controllers. The paper develops a new concept of state-dependent scaling adapted to dynamic uncertainties and nonlinear-gain bounded uncertainties that do not necessarily have finite linear gain, which is a key advance over previous scaling techniques. The proposed formulation of designing global nonlinear controllers is not only a natural extension of linear robust control, but it also renders the nonlinear controller identical to the linear controller at the equilibrium. The paper particularly focuses on scaled H∞ control, which is widely accepted as a powerful methodology in linear robust control, and extends it nonlinearly. If the nonlinear system belongs to a generalized class of triangular systems allowing for unmodelled dynamics, the effect of the disturbance can be attenuated to an arbitrarily small level with global asymptotic stability by partial-state feedback control. A procedure for designing such controllers is described in the form of a recursive selection of state-dependent scaling factors.

7.
We consider the goal of ensuring robust stability when a given manipulator feedback control law is modified online, for example, to safely improve performance by a learning module. To this end, the factorization approach is applied to both the plant and controller models to characterize robustly stabilizing controllers for rigid-body manipulators under approximate inverse dynamics control. Outer-loop controllers to stabilize the nonlinear uncertain loop that results from approximate inverse dynamics are often derived by lumping the uncertainty in a single term and subsequently analyzing the error system. Here, by contrast, the well-known norm bounds of these uncertain dynamics are first recast into a generalized plant configuration that preserves the characteristic uncertainty structure. Then, the overall loop uncertainty is expressed with respect to the nominal outer-loop feedback controller by means of an uncertain dual-Youla operator. Therefore, using the dual-Youla parameterization, we provide a novel way to rigorously quantify permissible perturbations of robot manipulator feedforward/feedback controllers. The method proposed in this paper does not constitute another robust control law for rigid-body manipulators, but rather a characterization of a set of robustly stabilizing controllers. The resulting double-Youla parameterization for the control of robot manipulators is amenable to numerous advanced design methods. The result is thoroughly discussed using a planar elbow manipulator and exemplified with a six-degree-of-freedom robot scenario with varying payload.
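For readers less familiar with the parameterizations involved, their standard linear right-coprime forms are sketched below; the paper adapts these ideas to the uncertain nonlinear manipulator loop, so this is only the structural idea, not the paper's exact operators.

```latex
% Right-coprime factorizations of the plant and the nominal controller:
P = N M^{-1}, \qquad K_0 = U V^{-1}.
% Youla parameterization: all controllers stabilizing P,
K(Q) = (U + M Q)(V + N Q)^{-1}, \qquad Q \ \text{stable};
% dual-Youla parameterization: all plants stabilized by K_0,
P(S) = (N + V S)(M + U S)^{-1}, \qquad S \ \text{stable}.
```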

8.
Model-based learning control of nonlinear systems is studied. Two types of learning algorithms, described by differential equations and/or difference equations to learn unknown time functions, are designed and compared using Lyapunov's direct method. The time functions to be learned are classified into several classes according to their properties, such as continuity, periodicity, and value at the origin of the state space. Conditions are found for iterative learning controls to achieve asymptotic stability and asymptotic learning convergence. For a comparative study, the learning capability of a control is defined and, using this criterion, other model-based controls with learning capability, such as adaptive controls and robust controls, are investigated. Through the study, iterative learning control is shown to be the one best suited for learning unknown time functions of known period. Finally, it is shown for the first time that an iterative learning control is directly applicable to systems described by nonlinear partial differential equations.
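As an illustration of trial-to-trial learning of an unknown time function of known period, the sketch below runs a D-type (Arimoto-style) iterative learning control update on a scalar plant; the plant, gains, and trial count are assumptions made for illustration and are not taken from the paper.

```python
import numpy as np

# D-type ILC: refine a feedforward input over repeated trials so that it
# cancels an unknown periodic disturbance of known period T.
T, dt = 2.0, 0.01
t = np.arange(0.0, T, dt)
d = 0.5 * np.sin(2 * np.pi * t / T)       # unknown disturbance (known period)
a, lr = 1.0, 0.8                          # stable plant pole, learning gain

u = np.zeros_like(t)                      # learned feedforward input
for trial in range(30):
    x = 0.0                               # identical initial condition each trial
    e = np.zeros_like(t)
    for k in range(len(t)):               # plant: x_dot = -a*x + u + d, reference = 0
        e[k] = -x
        x += (-a * x + u[k] + d[k]) * dt
    u = u + lr * np.gradient(e, dt)       # ILC update between trials

print("max tracking error after learning:", np.max(np.abs(e)))
```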

9.
A desired compensation adaptive law-based neural network (DCAL-NN) controller is proposed for the robust position control of rigid-link robots. The NN is used to approximate a highly nonlinear function. The controller can guarantee the global asymptotic stability of tracking errors and the boundedness of the NN weights. In addition, the NN weights are tuned online, with no offline learning phase required. Compared with standard adaptive robot controllers, the proposed scheme does not require linearity in the parameters, or a lengthy and tedious preliminary analysis to determine a regression matrix. The controller can be regarded as a universal, reusable controller because the same controller can be applied to any type of rigid robot without modification. A comparative simulation study with different robust and adaptive controllers is included.

10.
Reinforcement learning (RL) has been applied to constructing controllers for nonlinear systems in recent years. Since RL methods do not require an exact dynamics model of the controlled object, they have a higher flexibility and potential for adaptation to uncertain or nonstationary environments than methods based on traditional control theory. If the target system has a continuous state space whose dynamic characteristics are nonlinear, however, RL methods often suffer from unstable learning processes. For this reason, it is difficult to apply RL methods to control tasks in the real world. In order to overcome this disadvantage, we propose an RL scheme that combines multiple controllers, each of which is constructed based on traditional control theory. We then apply it to a swing-up and stabilization task of an acrobot with limited torque, which is a typical but difficult task in the field of nonlinear control theory. Our simulation results show that the method achieves stable learning and fairly good control. This work was presented, in part, at the 9th International Symposium on Artificial Life and Robotics, Oita, Japan, January 28–30, 2004.

11.
For systems with uncertainties, many PID parameter tuning methods have been proposed from the viewpoint of robust stability theory. However, the control performance of robust PID controllers tends to be conservative. In this paper, a new two-degree-of-freedom (2DOF) controller, which can improve the tracking properties, is proposed for nonlinear systems. In the proposed method, the prefilter is designed as a PD compensator whose parameters are tuned using the idea of a memory-based modeling (MBM) method. Since the MBM method is a type of local modeling method for nonlinear systems, the PD parameters can be tuned adequately online, in accordance with the system's nonlinear properties. Finally, the effectiveness of the newly proposed control scheme is numerically evaluated on a simulation example.
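The memory-based modeling idea can be sketched as a lazy, just-in-time local model: past operating data are stored, and for each query a small weighted linear model is fitted from the nearest neighbours. The database contents and sizes below are illustrative, and the mapping from the local model to the PD parameters follows the paper's procedure and is not reproduced here.

```python
import numpy as np

# Memory-based (just-in-time) local modelling: weighted local least squares
# around a query point, using the k nearest stored samples.
rng = np.random.default_rng(1)
database_X = rng.uniform(-1.0, 1.0, (200, 2))            # stored information vectors
database_y = np.sin(database_X[:, 0]) + 0.5 * database_X[:, 1]

def local_model(query, k=10):
    """Return coefficients of a local linear model fitted around `query`."""
    dist = np.linalg.norm(database_X - query, axis=1)
    idx = np.argsort(dist)[:k]                           # k nearest neighbours
    w = 1.0 / (dist[idx] + 1e-6)                         # distance-based weights
    Xk = np.hstack([database_X[idx], np.ones((k, 1))])   # [x, 1] regressors
    theta, *_ = np.linalg.lstsq(Xk * w[:, None], database_y[idx] * w, rcond=None)
    return theta                                         # local slope(s) and offset

print(local_model(np.array([0.2, -0.3])))
```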

12.
The robust tracking and model-following problem of linear uncertain time-delay systems is investigated in this paper. By using the solution of an algebraic Riccati equation, this paper presents a direct approach to the design of robust tracking controllers. The system is controlled to track dynamic inputs generated from a reference model. In the case of matched uncertainties, the proposed controller ensures uniform ultimate boundedness of the tracking errors and, furthermore, the bounds can be made arbitrarily small. In the case of mismatched uncertainties, a sufficient condition is presented under which the controller guarantees uniform ultimate boundedness of the tracking errors. Compared with existing results, the main feature of the proposed approach is that it does not require any precompensator, even for a non-Hurwitz nominal system, and it is a direct method. It also employs linear controllers rather than nonlinear ones. Therefore, the design method is simple to use and the resulting controller is easy to implement. Numerical examples show that this scheme can accommodate larger uncertainties and is likely to produce less conservative results.
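A minimal sketch of the Riccati-based ingredient of such a design is shown below: solve an algebraic Riccati equation for a nominal (certain, delay-free) system and feed back the model-following error with the resulting gain. The matrices, weights, and reference-model interface are illustrative; the paper adds the terms that handle the time delay and the matched/mismatched uncertainty bounds.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Nominal system and weights (illustrative values only)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)        # A'P + PA - PB R^{-1} B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)             # state-feedback gain

def tracking_control(x, x_m, u_m=0.0):
    """Feed back the model-following error e = x - x_m around the reference model."""
    e = x - x_m
    return -K @ e + u_m                     # u_m: nominal model-matching input

print("Riccati-based gain K =", K)
```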

13.
This paper presents a novel approach to designing switching linear parameter-varying (SLPV) controllers with improved local performance, together with an algorithm for optimizing switching surfaces to further improve the performance of the SLPV controllers. The design approach minimizes the weighted average of the local L2-gain bounds (representing the local performance) as the cost function, while the maximum of the local L2-gain bounds (representing the worst-case performance over all subsets) is bounded by a tuning parameter. The tuning parameter is useful for trading off local performance against worst-case performance. An algorithm based on particle swarm optimization is introduced to optimize the switching surfaces of an SLPV controller. The efficacy of the proposed SLPV controller design approach and switching surface optimization algorithm is demonstrated on both a numerical example and a physical example of air-fuel ratio control of an automotive engine.
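The switching-surface optimization can be sketched as a standard particle swarm loop over surface parameters, with a cost that combines the weighted average of the local L2-gain bounds and a penalty when the worst-case bound exceeds the tuning parameter. The function `local_gain_bounds` below is a placeholder for the LMI-based synthesis the paper performs for each candidate surface; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_gain_bounds(theta):
    # Placeholder for the per-subset L2-gain bounds returned by controller
    # synthesis for switching-surface parameters theta (illustrative surrogate).
    return np.array([1.0 + (theta[0] - 0.3) ** 2, 1.2 + (theta[1] - 0.7) ** 2])

def cost(theta, weights=(0.6, 0.4), gain_cap=2.0, penalty=50.0):
    g = local_gain_bounds(theta)
    return float(np.dot(weights, g) + penalty * max(0.0, g.max() - gain_cap))

# Plain particle swarm optimization over the switching-surface parameters
n_particles, n_dim, iters = 20, 2, 50
w, c1, c2 = 0.7, 1.5, 1.5                               # inertia, cognitive, social
pos = rng.uniform(0.0, 1.0, (n_particles, n_dim))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    costs = np.array([cost(p) for p in pos])
    better = costs < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], costs[better]
    gbest = pbest[pbest_cost.argmin()].copy()

print("optimized switching-surface parameters:", gbest)
```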

14.
A new adaptive learning rule
A method is presented for nonlinear function identification and its application to learning control. The control objective is to identify and compensate for a nonlinear disturbance function. The nonlinear disturbance function is represented as an integral of a predefined kernel function multiplied by an unknown influence function. Sufficient conditions for the existence of such a representation are provided. Similarly, the nonlinear function estimate is generated by an integral of the predefined kernel multiplied by an influence function estimate. Using the time history of the plant, the learning rule indirectly estimates the unknown function by updating the influence function estimate. It is shown that the function estimate converges to the actual disturbance asymptotically. Consequently, the controller achieves disturbance cancellation asymptotically. The method is extended to repetitive control applications and applied to the control of robot manipulators. Simulation and real-time implementation results using the Berkeley/NSK robot arm show that the proposed learning algorithm is more robust and converges at a faster rate than conventional repetitive controllers.
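The representation and learning rule described above have the following generic structure (the notation is illustrative and not necessarily the paper's exact symbols): the disturbance and its estimate share a known kernel, and learning updates only the influence-function estimate from the tracking error.

```latex
% Disturbance and its estimate, built from a predefined kernel k(x,\xi):
d(x) = \int_{\Omega} k(x,\xi)\,\theta(\xi)\,d\xi, \qquad
\hat d(x,t) = \int_{\Omega} k(x,\xi)\,\hat\theta(\xi,t)\,d\xi,
% gradient-type update of the influence-function estimate driven by the error e(t):
\frac{\partial \hat\theta(\xi,t)}{\partial t} = \gamma\, k\bigl(x(t),\xi\bigr)\, e(t).
```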

15.
This paper is concerned with event-triggered H∞ control for a class of nonlinear networked control systems. An event-triggered transmission scheme is introduced to select the 'necessary' sampled data packets to be transmitted so that precious communication resources can be saved significantly. Under the event-triggered transmission scheme, the closed-loop system is modeled as a system with an interval time-varying delay. Two novel integral inequalities are established to provide a tight estimation of the derivative of the Lyapunov–Krasovskii functional. As a result, a novel sufficient condition on the existence of desired event-triggered H∞ controllers is derived in terms of solutions to a set of linear matrix inequalities. No parameters need to be tuned when the controllers are designed. The proposed method is then applied to the robust stabilization of a class of nonlinear networked control systems, and some linear matrix inequality-based conditions are formulated to design both event-triggered and time-triggered H∞ controllers. Finally, two numerical examples are given to demonstrate the effectiveness of the proposed method.
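A minimal sketch of the kind of relative-threshold event-triggering rule used by such schemes is given below: a newly sampled state is transmitted only if it deviates sufficiently from the last transmitted one. The weighting matrix and threshold are illustrative placeholders for the values the paper's LMI conditions would provide.

```python
import numpy as np

Omega = np.eye(2)          # triggering weight matrix (illustrative)
sigma = 0.1                # relative triggering threshold (illustrative)

def make_event_trigger():
    last_sent = None
    def should_transmit(x_sample):
        nonlocal last_sent
        if last_sent is None:                      # always send the first sample
            last_sent = x_sample.copy()
            return True
        err = x_sample - last_sent
        if err @ Omega @ err > sigma * (x_sample @ Omega @ x_sample):
            last_sent = x_sample.copy()            # 'necessary' packet: transmit
            return True
        return False                               # skip transmission, save bandwidth
    return should_transmit

trigger = make_event_trigger()
for x in np.array([[1.0, 0.0], [0.98, 0.02], [0.7, 0.2]]):
    print(x, "->", "send" if trigger(x) else "skip")
```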

16.
This paper considers robust stability and robust performance analysis for discrete-time linear systems subject to nonlinear uncertainty. The uncertainty set is described by memoryless, time-invariant, sector-bounded, and slope-restricted nonlinearities. We first give an overview of the absolute stability criterion based on the Lur'e–Postnikov Lyapunov function, along with a frequency-domain condition. Subsequently, we derive sufficient conditions to compute upper bounds on the worst-case H2 and worst-case H∞ performance. For both robust stability testing and robust performance computation, we show that these sufficient conditions can be readily and efficiently checked by performing convex optimization over linear matrix inequalities.

17.
In this paper, performance-oriented control laws are synthesized for a class of single-input–single-output (SISO) n-th order nonlinear systems in normal form by integrating neural network (NN) techniques and the adaptive robust control (ARC) design philosophy. All unknown but repeatable nonlinear functions in the system are approximated by the outputs of NNs to achieve better model compensation for improved performance. While all NN weights are tuned online, discontinuous projections with fictitious bounds are used in the tuning law to achieve controlled learning. Robust control terms are then constructed to attenuate model uncertainties for a guaranteed output tracking transient performance and a guaranteed final tracking accuracy. Furthermore, if the unknown nonlinear functions are in the functional ranges of the NNs and the ideal NN weights fall within the fictitious bounds, asymptotic output tracking is achieved, retaining the perfect learning capability of NNs. The precision motion control of a linear motor drive system is used as a case study to illustrate the proposed NNARC strategy.
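The discontinuous projection used for controlled learning can be sketched as follows: any component of the weight update that would push a weight outside its fictitious bound is simply zeroed. The bounds, learning rate, and regressor below are illustrative assumptions rather than the paper's design values.

```python
import numpy as np

w_min, w_max = -5.0, 5.0            # fictitious bounds on each NN weight

def proj(w, w_dot):
    """Zero any component of w_dot that would push w outside [w_min, w_max]."""
    w_dot = w_dot.copy()
    w_dot[(w >= w_max) & (w_dot > 0)] = 0.0
    w_dot[(w <= w_min) & (w_dot < 0)] = 0.0
    return w_dot

def weight_update(w, phi, e, gain=10.0, dt=1e-3):
    """One Euler step of a projected gradient-type adaptation law."""
    w_dot = gain * phi * e          # unprojected update: regressor times tracking error
    return w + proj(w, w_dot) * dt

w = np.zeros(4)
w = weight_update(w, phi=np.array([0.2, -0.1, 0.5, 0.3]), e=1.5)
print(w)
```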

18.
In this paper, a stochastic optimal strategy is derived in a forward-in-time manner for quadratic zero-sum games of unknown linear discrete-time systems in input-output form with communication imperfections such as network-induced delays and packet losses, otherwise referred to as networked control system (NCS) zero-sum games, which relate to the H∞ optimal control problem. First, the linear discrete-time zero-sum state-space representation is transformed into a linear NCS in state-space form after incorporating random delays and packet losses, and then into the input-output form. Subsequently, the stochastic optimal approach, referred to as adaptive dynamic programming (ADP), is introduced, which estimates the cost or value function to solve the infinite-horizon optimal regulation of unknown linear NCS quadratic zero-sum games in the presence of communication imperfections. The optimal control and worst-case disturbance inputs are derived based on the estimated value function in the absence of state measurements. An update law for tuning the unknown parameters of the value function estimator is derived, and Lyapunov theory is used to show that all signals are asymptotically stable (AS) and that the estimated control and disturbance signals converge to the optimal control and worst-case disturbances, respectively. Simulation results are included to verify the theoretical claims.
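The action-dependent value function estimated by such ADP schemes typically has the quadratic structure sketched below (illustrative notation; z_k stands for the measured input-output information vector), from which the saddle-point control and worst-case disturbance policies follow by setting the partial derivatives to zero.

```latex
Q(z_k,u_k,w_k) =
\begin{bmatrix} z_k \\ u_k \\ w_k \end{bmatrix}^{\!\top}
\begin{bmatrix} H_{zz} & H_{zu} & H_{zw} \\ H_{uz} & H_{uu} & H_{uw} \\ H_{wz} & H_{wu} & H_{ww} \end{bmatrix}
\begin{bmatrix} z_k \\ u_k \\ w_k \end{bmatrix},
% e.g. the control policy obtained from \partial Q/\partial u_k = 0, \partial Q/\partial w_k = 0:
u_k^{*} = -\bigl(H_{uu}-H_{uw}H_{ww}^{-1}H_{wu}\bigr)^{-1}
           \bigl(H_{uz}-H_{uw}H_{ww}^{-1}H_{wz}\bigr)\,z_k .
```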

19.
This paper proposes an intermittent model-free learning algorithm for linear time-invariant systems, in which the control policy and transmission decisions are co-designed simultaneously while the system is subjected to worst-case disturbances. The control policy is designed by introducing an internal dynamical system to further reduce the transmission rate and provide bandwidth flexibility in cyber-physical systems. Moreover, a Q-learning algorithm with two actors and a single critic structure is developed to learn the optimal parameters of a Q-function. It is shown, using an impulsive system approach, that the closed-loop system has an asymptotically stable equilibrium and that no Zeno behavior occurs. Furthermore, a qualitative performance analysis of the model-free dynamic intermittent framework is given, showing the degree of suboptimality with respect to the optimal continuously updated controller. Finally, a numerical simulation of an unknown system is carried out to highlight the efficacy of the proposed framework.

20.
鲜斌  林嘉裕 《控制与决策》2020,35(11):2646-2652
To address the difficulty of obtaining an accurate dynamic model of a small unmanned helicopter and the susceptibility of its attitude control to unknown external wind disturbances, a nonlinear control algorithm combining reinforcement learning (RL) with super twisting is designed. Using online flight data from the helicopter, an actor-critic (AC) network is trained to approximate the unmodeled uncertain part of the system. To suppress unknown external wind disturbances, improve system robustness, and compensate for the approximation error of the AC network, a robust control algorithm based on super twisting is designed. Furthermore, Lyapunov stability analysis is used to prove that the attitude error of the unmanned helicopter converges to zero in finite time. Finally, the proposed algorithm is experimentally validated; the results show that it achieves good control performance and is robust to system uncertainties and external disturbances.
