20 similar documents found; search took 171 ms
1.
2.
For a class of nonlinear systems with unknown input backlash, parameter uncertainty, unmodeled dynamics, and disturbances, an adaptive robust controller is designed. The backlash nonlinearity is equivalently represented as a globally linearized model with bounded modeling error, on which basis an adaptive robust controller comprising three parts, adaptive model compensation, stabilizing feedback, and robust feedback, is designed, and explicit bounds on the transient and steady-state tracking errors are given. Theoretical analysis proves that all closed-loop signals are bounded and that the tracking error can be kept within any desired accuracy; simulation studies verify the effectiveness of the proposed method.
3.
4.
5.
For the tracking control of a class of uncertain nonlinear systems, a composite adaptive control method based on the characteristic model is proposed. Its novelty lies in constructing, from the system's error characteristic model, a composite adaptive law for the characteristic parameters that combines the tracking error and the model-estimation error; this law is used in both controller design and analysis, and achieves convergence of the tracking error and the model-estimation error simultaneously. To simplify the design and analysis of the adaptive law, the slowly time-varying characteristic parameters are treated as the sum of an unknown nominal constant term and a time-varying error term, and the estimate of the constant term is used as the adaptive control parameter. Further, to suppress the effect of the time-varying error term on closed-loop stability and on the convergence of the model-estimation error, a nonlinear term with a saturation function is introduced into the controller and the composite adaptive law. Theoretical analysis proves that the closed-loop system is stable and that both the tracking error and the model-estimation error converge to a neighborhood of the origin. Simulation results show that, compared with existing characteristic-model-based adaptive control methods that adapt only on the model-estimation error, the proposed composite method achieves better control performance.
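The composite adaptation idea above, driving the parameter update with both the tracking error and a model-estimation error, can be sketched on a toy first-order plant. Everything below (the scalar plant, the regressor, the gains gamma, k, lam, and the series-parallel predictor) is an illustrative assumption, not the paper's characteristic-model design; the saturation nonlinearity is omitted for brevity.

```python
import math

theta = 2.0                      # unknown plant parameter (assumed for the demo)
gamma, k, lam = 5.0, 5.0, 5.0    # adaptation, feedback, and predictor gains
dt, T = 1e-3, 20.0
x, x_hat, theta_hat = 0.0, 0.0, 0.0

for i in range(int(T / dt)):
    t = i * dt
    xd, xd_dot = math.sin(t), math.cos(t)       # reference trajectory
    phi = x                                     # regressor
    e = x - xd                                  # tracking error
    eps = x - x_hat                             # model-estimation error
    u = -k * e + xd_dot - theta_hat * phi       # certainty-equivalence control
    # composite adaptive law: driven by BOTH error signals
    theta_hat += dt * gamma * phi * (e + eps)
    x += dt * (theta * phi + u)                      # plant
    x_hat += dt * (theta_hat * phi + u + lam * eps)  # series-parallel predictor
```

With the sinusoidal reference providing persistent excitation, both the tracking error and the parameter estimate converge, which is the behavior the composite law is designed to deliver.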
6.
7.
8.
9.
To address the difficulty of precisely controlling hydraulic motor servo systems caused by their strong nonlinearity, an adaptive robust integral sliding mode controller based on linearization and sliding mode control is proposed. Without altering the system model, some of its nonlinear terms are linearized to reduce the strong nonlinearity, and an adaptive algorithm compensates the linearization error. Meanwhile, to address insufficient tracking accuracy, an integral sliding mode control algorithm is introduced for robust con…
10.
To realize automatic stopping of high-speed trains, a nonlinear braking model is established from the train's longitudinal dynamics and the principles of the braking system. Given the strong coupling, strong nonlinearity, and uncertainty of this large-scale model, the nonlinear model is represented as a T-S (Takagi-Sugeno) model according to the train's operating speed, and an automatic-stopping sliding mode controller is designed based on an adaptive fuzzy strategy. The control algorithm uses adaptive fuzzy systems to approximate the upper bounds of the uncertain and interconnection terms in the model, eliminating the influence of inter-car forces and running resistance so that the train tracks the ideal stopping curve. Closed-loop stability and convergence of the tracking error are proved via the Lyapunov method. Simulation results verify the effectiveness of the proposed sliding mode controller.
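The sliding mode part of such a design can be illustrated on a toy double-integrator "train" with a bounded disturbance standing in for inter-car forces and running resistance. The plant, the gains, and the boundary-layer saturation below are illustrative assumptions, not the paper's T-S fuzzy model:

```python
import math

lam, k = 2.0, 3.0                 # surface slope and switching gain (k > |d|)
dt, T = 1e-3, 10.0
x, xdot = 1.0, 0.0                # start off the reference curve

# boundary-layer saturation: a smooth stand-in for sign(s) to limit chattering
def sat(s, w=0.05):
    return max(-1.0, min(1.0, s / w))

for i in range(int(T / dt)):
    t = i * dt
    xd, xd_dot, xd_ddot = math.sin(t), math.cos(t), -math.sin(t)  # stopping curve
    e, edot = x - xd, xdot - xd_dot
    s = edot + lam * e                     # sliding surface
    d = 0.5 * math.sin(5 * t)              # bounded disturbance, |d| <= 0.5 < k
    u = xd_ddot - lam * edot - k * sat(s)  # equivalent control + switching term
    xdot += dt * (u + d)                   # double-integrator plant
    x += dt * xdot
```

Because the switching gain dominates the disturbance bound, the state reaches the boundary layer of the surface and the tracking error settles to a small residual set.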
11.
In this article, we propose an adaptive backstepping control scheme using fuzzy neural networks (FNNs), ABCFNN, for a class of nonlinear non-affine systems in non-triangular form. The nonlinear non-affine system contains uncertainties, external disturbances, or parameter variations. Two kinds of FNN systems are used to estimate the unknown system functions. Based on the FNN estimates, the adaptive backstepping control (ABCFNN) signal is generated by the backstepping design procedure so that the system output follows the desired trajectory. To ensure robustness and performance, a proportional-integral surface function and a robust controller are designed to improve the control performance. Based on Lyapunov stability theory, the stability of the closed-loop system is guaranteed and the adaptive laws of the FNN parameters are obtained. The approach is also valid for nonlinear affine systems with uncertainty or disturbance, whose uncertainty and disturbance terms are estimated by FNNs and handled by the ABCFNN scheme. Finally, the effectiveness of the proposed ABCFNN is demonstrated through simulations of a nonlinear non-affine system and a continuously stirred tank reactor plant.
12.
Yazdan Bavafa-Toosi 《International journal of systems science》2017,48(3):649-658
Although flexible neural networks (FNNs) have been used more successfully than classical neural networks (CNNs) in many industrial applications, little is rigorously known about their properties; in fact, they are not even well known to the systems and control community. In the first part of this paper, existing structures of and results on FNNs are surveyed. In the second part, FNNs are examined in a theoretical framework. As a result, theoretical evidence is given for the superiority of FNNs over CNNs, and further properties of the former are developed. More precisely, several fundamental properties of feedforward and recurrent FNNs are established, including universal approximation capability, minimality, controllability, observability, and identifiability. In the broad sense, the results of this paper help put the general use of FNNs in systems and control theory and applications on firm theoretical foundations, making theoretical analysis and synthesis of FNN-based systems possible. The paper concludes with a collection of topics for future work.
13.
Shifei Ding Hongjie Jia Jinrong Chen Fengxiang Jin 《Artificial Intelligence Review》2014,41(3):373-384
Fuzzy neural networks (FNNs) and rough neural networks (RNNs) have both been hot research topics in artificial intelligence in recent years: the former imitates how the human brain deals with problems, while the latter draws on rough set theory to handle uncertain questions. The aim of both is to process the massive volume of uncertain information that is widespread in practice. This article summarizes recent research on FNNs and RNNs (together called granular neural networks). First, the fuzzy neuron and the rough neuron are introduced. Next, FNNs are analysed in two categories: normal FNNs and fuzzy logic neural networks. RNNs are then analysed in four aspects: neural networks that use rough sets to preprocess information, neural networks based on rough logic, neural networks based on rough neurons, and rough-granular neural networks. A flow chart of how RNNs process questions is given, along with an application of classical neural networks based on rough sets. FNNs and RNNs are then compared and ways to integrate them are described. Finally, some advice is given on the future development of FNNs and RNNs.
14.
Optimizing fuzzy neural networks for tuning PID controllers using an orthogonal simulated annealing algorithm OSA (cited 2 times: 0 self-citations, 2 by others)
In this paper, we formulate the optimization problem of establishing a fuzzy neural network model (FNNM) for efficiently tuning proportional-integral-derivative (PID) controllers of various test plants with under-damped responses, using a large number P of training plants, such that the mean tracking error J of the obtained P control systems is minimized. The FNNM consists of four fuzzy neural networks (FNNs), each modeling one of the controller parameters (K, Ti, Td, and b) of the PID controllers. An existing indirect, two-stage approach used a dominant pole assignment method with P=198 to find the corresponding PID controllers, and then used an adaptive neuro-fuzzy inference system (ANFIS) to independently train the four FNNs on 176 of the 198 PID controllers, discarding the 22 controllers whose parameters vary widely. The innovation of the proposed approach is to directly and simultaneously optimize the four FNNs using a novel orthogonal simulated annealing algorithm (OSA). The high performance of the OSA-based approach arises from OSA's ability to effectively optimize the many parameters of the FNNM so as to minimize J. It is shown that the OSA-based FNNM with P=176 improves on the ANFIS-based FNNM by refining its solution, decreasing the error J by 13.08% and the tracking error of the 22 test plants by 88.07% on average. Furthermore, OSA-based FNNMs using P=198 and 396 from an extensive tuning domain perform similarly well, in terms of J, to the one using P=176.
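As a rough illustration of the annealing idea (plain simulated annealing, not the orthogonal variant the paper proposes), the loop below minimizes a toy quadratic surrogate for the tracking-error objective J; the PID-like gain vector and all schedule constants are assumptions for the demo:

```python
import math
import random

def simulated_annealing(loss, x0, iters=5000, t0=1.0, seed=0):
    """Minimal SA: perturb the candidate, accept worse moves with
    probability exp(-delta/T), cool the temperature geometrically."""
    rng = random.Random(seed)
    x, fx = list(x0), loss(x0)
    best, fbest = list(x), fx
    T = t0
    for _ in range(iters):
        cand = [xi + rng.gauss(0, 0.1) for xi in x]   # local perturbation
        fc = loss(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(T, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx             # keep best-so-far
        T *= 0.999                                    # geometric cooling
    return best, fbest

# Toy stand-in for J: recover hypothetical PID-like gains (Kp, Ti, Td, b)
# from a quadratic surrogate, NOT the paper's plant-tracking objective.
target = [1.2, 0.8, 0.1, 0.5]
J = lambda p: sum((pi - ti) ** 2 for pi, ti in zip(p, target))
best, fbest = simulated_annealing(J, [0.0, 0.0, 0.0, 0.0])
```

Tracking the best-so-far separately from the current state is what lets SA accept uphill moves early (to escape local minima) without losing the best solution found.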
15.
Parameter Optimization of Interval Type-2 Fuzzy Neural Networks Based on PSO and BBBC Methods
Interval type-2 fuzzy neural networks (IT2FNNs) can be seen as the hybridization of interval type-2 fuzzy systems (IT2FSs) and neural networks (NNs), and thus naturally inherit the merits of both. Although IT2FNNs have advantages over their type-1 counterparts in processing uncertain, incomplete, or imprecise information, a large number of parameters must be tuned, which complicates their design. In this paper, big bang-big crunch (BBBC) optimization and particle swarm optimization (PSO) are applied to parameter optimization for Takagi-Sugeno-Kang (TSK) type IT2FNNs. Employing the BBBC or PSO strategy eliminates the need for backpropagation computation, converting the problem to simple feed-forward IT2FNN learning. Adopting BBBC or PSO not only simplifies the design of the IT2FNNs but also increases identification accuracy compared with existing methods. The proposed optimization-based strategies are tested with three types of interval type-2 fuzzy membership functions (IT2FMFs) and deployed on three typical identification models. Simulation results confirm the effectiveness of the proposed parameter optimization methods for the IT2FNNs.
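For readers unfamiliar with PSO, a minimal derivative-free swarm loop looks like the following; the toy quadratic loss stands in for the IT2FNN identification error, and the swarm constants (inertia, cognitive, and social weights) are illustrative assumptions:

```python
import numpy as np

def pso_minimize(loss, dim, n_particles=30, iters=200, seed=0):
    """Minimal PSO: each particle remembers its personal best; the swarm
    shares a global best; velocities blend inertia and both attractions."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))        # positions
    v = np.zeros_like(x)                              # velocities
    pbest = x.copy()
    pbest_val = np.array([loss(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()              # global best
    w, c1, c2 = 0.7, 1.5, 1.5                         # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([loss(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Toy stand-in for the identification error: fit hypothetical
# membership-function parameters toward a known target vector.
theta_true = np.array([0.5, -0.3, 0.8])
def loss(theta):
    return float(np.sum((theta - theta_true) ** 2))

best, best_val = pso_minimize(loss, dim=3)
```

The whole update uses only loss evaluations, which is exactly why such swarm methods can replace backpropagation when gradients of the IT2FNN output are awkward to derive.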
16.
This paper presents a function approximation to a general class of polynomials using one-hidden-layer feedforward neural networks (FNNs). Approximations of both algebraic and trigonometric polynomial functions are discussed in detail. For algebraic polynomial functions, a one-hidden-layer FNN with a chosen number of hidden-layer nodes and corresponding weights is established by a constructive method to approximate the polynomials to a remarkably high degree of accuracy. For trigonometric functions, an upper bound on the approximation error is derived for the constructive FNNs. In addition, algorithmic examples are included to confirm the accuracy of the constructive FNN method. The results show that it efficiently improves the approximation of both algebraic and trigonometric polynomials. The work is thus of both theoretical and practical significance for constructing one-hidden-layer FNNs that approximate this class of polynomials, and it potentially paves the way for extending neural networks to approximate a general class of complicated functions in both theory and practice.
17.
The essential order of approximation for neural networks (cited 15 times: 0 self-citations, 15 by others)
XU Zongben & CAO Feilong, Institute for Information System Sciences, Xi'an Jiaotong University, Xi'an, China 《Science in China Series F (English Edition)》2004,47(1):97-112
There have been various studies on the approximation ability of feedforward neural networks (FNNs). Most existing studies, however, are concerned only with density or with upper bound estimates on how well a multivariate function can be approximated by an FNN, so the essential approximation ability of an FNN cannot be revealed. In this paper, by establishing both upper and lower bound estimates on the approximation order, the essential approximation ability (namely, the essential approximation order) of a class of FNNs is clarified in terms of the modulus of smoothness of the functions to be approximated. The FNNs involved can not only approximate any continuous or integrable function defined on a compact set arbitrarily well, but also provide an explicit lower bound on the number of hidden units required. By making use of multivariate approximation tools, it is shown that when the functions to be approximated are Lipschitzian of order up to 2, the approximation speed of the FNNs is uniquely deter…
18.
Koutroumbas K 《Neural computation》2003,15(10):2457-2481
In this letter, the capabilities of feedforward neural networks (FNNs) for the realization and approximation of functions of the form g: R^l → A, which partition the R^l space into polyhedral sets, each assigned to one of the c classes of A, are investigated. More specifically, a constructive proof is given that FNNs consisting of nodes with sigmoid output functions are capable of approximating any such function g with arbitrary accuracy. The capabilities of FNNs consisting of nodes with the hard limiter as output function are also reviewed. In both cases, the two-class as well as the multiclass case is considered.
19.
There have been many studies on the simultaneous approximation capability of feedforward neural networks (FNNs). Most of these, however, are concerned only with the density or feasibility of performing simultaneous approximation. This paper considers the simultaneous approximation of algebraic polynomials and, employing Taylor expansion and an algebraic constructive approach, constructs a class of FNNs that realize the simultaneous approximation of any smooth multivariate function and all of its derivatives. We also present an upper bound on the approximation accuracy of the FNNs, expressed in terms of the modulus of continuity of the functions to be approximated.
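The constructive flavour of such results can be mimicked numerically: fix random hidden weights of a one-hidden-layer tanh network and solve only the output layer by least squares to approximate a polynomial. This random-feature shortcut is an assumption for illustration, not the paper's Taylor-expansion construction:

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden, n_train = 200, 400
x = np.linspace(-1, 1, n_train)
f = x ** 2                                    # polynomial to approximate

# Random-feature construction: hidden weights/biases are fixed at random,
# so only the linear output layer remains to be solved.
W = rng.normal(0, 3, n_hidden)                # hidden weights (assumed scale)
b = rng.uniform(-3, 3, n_hidden)              # hidden biases
H = np.tanh(np.outer(x, W) + b)               # hidden-layer outputs, (400, 200)
c, *_ = np.linalg.lstsq(H, f, rcond=None)     # output weights by least squares

err = float(np.max(np.abs(H @ c - f)))        # worst-case approximation error
```

Even this crude construction drives the uniform error on a smooth polynomial down by several orders of magnitude, which is consistent with the density results the abstract describes (though it says nothing about derivatives, which the paper also handles).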
20.
In this paper, the use of neural networks for implementing fast algorithms for spectral transforms is discussed. It is shown that the fast algorithms are particular cases of fast neural networks (FNNs). Methods for parametrically tuning FNNs to a given system of basis functions are suggested. Neural network implementations of the fast Walsh and wavelet transforms and the fast Fourier, Vilenkin-Christiansen, and Haar transforms are constructed. The discussion is illustrated by examples.