Similar Documents
20 similar documents found.
1.
Fuzzy neural network (FNN) architectures, in which fuzzy logic and artificial neural networks are integrated, have been proposed by many researchers. In addition to developing the architecture for FNN models, the evolution of learning algorithms for the connection weights is also very important. Researchers have proposed gradient descent methods such as the back-propagation algorithm and evolutionary methods such as genetic algorithms (GA) for training FNN connection weights. In this paper, we integrate a new meta-heuristic algorithm, the electromagnetism-like mechanism (EM), into the FNN training process. The EM algorithm utilizes an attraction–repulsion mechanism to move the sample points towards the optimum and, due to the characteristics of its repulsion mechanism, does not easily settle into local optima. We use EM to develop an EM-based FNN (the EM-initialized FNN) model with fuzzy connection weights. Further, the EM-initialized FNN model is used to train fuzzy if–then rules for learning expert knowledge. Comparisons with conventional FNN models and GA-initialized FNN models proposed by other researchers indicate that our EM-initialized FNN model performs better than the other FNN models. In addition, using a fuzzy ranking method to eliminate redundant fuzzy connection weights in our FNN architecture further improves performance over the other FNN models.
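The attraction–repulsion move at the heart of EM can be sketched as follows. This is a minimal, generic version of the Birbil–Fang mechanism, not the paper's FNN-specific implementation; the charge formula, bound clamping, and elitism step are common textbook choices, not details taken from this abstract:

```python
import math
import random

def em_step(points, f, lb, ub):
    """One iteration of the electromagnetism-like mechanism (EM): each
    point gets a charge from its objective value, then moves along the
    net attraction-repulsion force exerted by the other points."""
    n_dim = len(points[0])
    values = [f(p) for p in points]
    best = min(values)
    denom = sum(v - best for v in values) or 1.0
    # Better points (lower f) receive larger charges.
    charges = [math.exp(-n_dim * (v - best) / denom) for v in values]
    new_points = []
    for i, p in enumerate(points):
        force = [0.0] * n_dim
        for j, q in enumerate(points):
            if i == j:
                continue
            diff = [qj - pj for pj, qj in zip(p, q)]
            dist2 = sum(d * d for d in diff) or 1e-12
            scale = charges[i] * charges[j] / dist2
            # Attraction toward better points, repulsion from worse ones.
            sign = 1.0 if values[j] < values[i] else -1.0
            force = [fk + sign * scale * dk for fk, dk in zip(force, diff)]
        norm = math.sqrt(sum(fk * fk for fk in force)) or 1.0
        step = random.random()
        new_points.append([min(max(pk + step * fk / norm, lb), ub)
                           for pk, fk in zip(p, force)])
    # Elitism: the current best point is never moved.
    new_points[values.index(best)] = list(points[values.index(best)])
    return new_points
```

Because the current best point is kept unchanged, the best objective value in the population never worsens from one iteration to the next.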

2.
An adaptive supervised learning scheme is proposed in this paper for training Fuzzy Neural Networks (FNN) to identify discrete-time nonlinear dynamical systems. The FNN constructs are neural-network-based connectionist models consisting of several layers that are used to implement the functions of a fuzzy logic system. The fuzzy rule base considered here consists of Takagi-Sugeno IF-THEN rules, where the rule outputs are realized as linear polynomials of the input components. The FNN connectionist model is functionally partitioned into three separate parts: the premise part, which provides the truth values of the rule preconditions; the consequent part, which provides the rule outputs; and the defuzzification part, which computes the final output of the FNN construct. The proposed learning scheme is a two-stage training algorithm that performs structure and parameter learning simultaneously. First, the structure learning task determines the proper fuzzy input partitions and the respective precondition matching, and is carried out by means of the rule base adaptation mechanism, a self-organizing procedure which progressively generates the proper fuzzy rule base during training, according to the operating conditions. Once the structure learning stage is complete, parameter learning is applied using the back-propagation algorithm to adjust the premise/consequent parameters of the FNN so that the desired input/output representation is captured to an acceptable degree of accuracy. The structure/parameter training algorithm exhibits good learning and generalization capabilities, as demonstrated via a series of simulation studies. Comparisons with conventional multilayer neural networks indicate the effectiveness of the proposed scheme.
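The forward pass that the premise, consequent, and defuzzification parts implement is essentially a first-order Takagi-Sugeno system. A minimal sketch, assuming product inference with Gaussian premise sets (the abstract does not specify the membership-function shape):

```python
import math

def tsk_output(x, rules):
    """Forward pass of a first-order Takagi-Sugeno fuzzy system.
    Each rule is (centers, sigmas, coeffs): Gaussian premise sets and a
    linear consequent a0 + a1*x1 + ...; the defuzzified output is the
    firing-strength-weighted average of the rule outputs."""
    num = den = 0.0
    for centers, sigmas, coeffs in rules:
        # Premise part: product of Gaussian membership grades.
        w = 1.0
        for xi, c, s in zip(x, centers, sigmas):
            w *= math.exp(-((xi - c) / s) ** 2)
        # Consequent part: linear polynomial of the input components.
        y = coeffs[0] + sum(a * xi for a, xi in zip(coeffs[1:], x))
        num += w * y
        den += w
    # Defuzzification part: normalized weighted sum.
    return num / den if den else 0.0
```

With a single rule the normalization makes the output equal that rule's linear consequent; with several rules the output interpolates smoothly between them.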

3.
A fast learning algorithm for training multilayer feedforward neural networks (FNN) using a fading-memory extended Kalman filter (FMEKF) is presented first, along with a technique using a self-adjusting time-varying forgetting factor. A U-D factorization-based FMEKF is then proposed to further improve the learning rate and accuracy of the FNN. In comparison with back-propagation (BP) and existing EKF-based learning algorithms, the proposed U-D factorization-based FMEKF algorithm provides much more accurate learning results using fewer hidden nodes, with an improved convergence rate and better numerical stability (robustness). In addition, it is less sensitive to start-up parameters (e.g., initial weights and covariance matrix) and to randomness in the observed data. It also has good generalization ability and needs less training time to achieve a specified learning accuracy. Simulation results in modeling and identification of nonlinear dynamic systems are given to show the effectiveness and efficiency of the proposed algorithm.
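In the scalar, single-weight case the fading-memory idea reduces to inflating the error covariance by the forgetting factor before each Kalman update. A toy sketch for one weight of y ≈ w·x, for illustration only; the paper's algorithm is a full matrix EKF with U-D factorization and a self-adjusting forgetting factor, both omitted here:

```python
def fmekf_scalar(w, P, x, y, lam=0.98):
    """Scalar fading-memory Kalman update for one weight of y ~ w*x.
    A forgetting factor lam < 1 inflates the covariance each step, so
    old data are discounted and the filter keeps adapting."""
    P = P / lam                    # fading memory: discount old information
    K = P * x / (1.0 + x * P * x)  # Kalman gain
    w = w + K * (y - w * x)        # innovation-driven weight update
    P = (1.0 - K * x) * P          # covariance update
    return w, P
```

With lam < 1 the gain never decays to zero, so the estimate keeps tracking a weight that changes mid-stream, which is the practical point of fading memory.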

4.
A fuzzy neural network (FNN) controller with adaptive learning rates is proposed to control a nonlinear mechanism system in this study. First, the network structure and the on-line learning algorithm of the FNN are described. To guarantee convergence of the tracking error, analytical methods based on a discrete-type Lyapunov function are proposed to determine the adaptive learning rates of the FNN. Next, a slider-crank mechanism driven by a permanent-magnet (PM) synchronous motor is studied as an example to demonstrate the effectiveness of the proposed control technique; the FNN controller is implemented to control the slider position of the motor-slider-crank nonlinear mechanism. The robust control performance and learning ability of the proposed FNN controller with adaptive learning rates are demonstrated by simulation and experimental results.

5.
A fuzzy neural network and its application to pattern recognition
Defines four types of fuzzy neurons and proposes the structure of a four-layer feedforward fuzzy neural network (FNN) and its associated learning algorithm. The proposed four-layer FNN performs well when used to recognize shifted and distorted training patterns. When an input pattern is provided, the network first fuzzifies this pattern and then computes the similarities of this pattern to all of the learned patterns. The network then reaches a conclusion by selecting the learned pattern with the highest similarity and gives a nonfuzzy output. The 26 English letters and the 10 Arabic numerals, each represented by 16×16 pixels, were used as original training patterns. In the simulation experiments, the original 36 exemplar patterns were shifted in eight directions by 1 pixel (6.25% to 8.84%) and 2 pixels (12.5% to 17.68%). After the FNN has been trained by the 36 exemplar patterns, the FNN can recall all of the learned patterns with a 100% recognition rate. It can also recognize patterns shifted by 1 pixel in eight directions with a 100% recognition rate and patterns shifted by 2 pixels in eight directions with an average recognition rate of 92.01%. After the FNN has been trained by the 36 exemplar patterns and 72 shifted patterns, it can recognize patterns shifted by 1 pixel with a 100% recognition rate and patterns shifted by 2 pixels with an average recognition rate of 98.61%. The authors have also tested the FNN with 10 kinds of distorted patterns for each of the 36 exemplars; the FNN recognizes all of the distorted patterns with a 100% recognition rate. The proposed FNN can also be adapted for applications in some other pattern recognition problems.

6.
Small perturbations of the training pattern pairs may have adverse effects on the performance of fuzzy neural networks. This paper therefore introduces a general notion of the robustness of fuzzy neural networks against perturbations of training pattern pairs, and analyzes it concretely for the typical fuzzy bidirectional associative memory (FBAM). Theoretical study shows that the FBAM has good robustness of this kind when trained with the fuzzy Hebbian learning algorithm, but poor robustness when trained with another recently proposed learning algorithm. For the latter algorithm, the authors provide a method for controlling the perturbations of the training pattern pairs so that the FBAM retains good robustness. Finally, experiments with the FBAM on image association confirm some of the theoretical results. This work is of value for analyzing the performance of FBAM systems, selecting learning algorithms, and guiding the acquisition of pattern pairs.
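The fuzzy Hebbian learning referred to above stores pattern pairs by max–min correlation, and recall is a max–min composition. A minimal sketch (the crisp 0/1 example patterns in the test are chosen for illustration; FBAM patterns may take any value in [0, 1]):

```python
def fuzzy_hebb(pairs):
    """Fuzzy Hebbian encoding for a fuzzy bidirectional associative
    memory (FBAM): W[i][j] is the max-min correlation of the stored
    pattern pairs (all pattern values lie in [0, 1])."""
    n, m = len(pairs[0][0]), len(pairs[0][1])
    return [[max(min(x[i], y[j]) for x, y in pairs) for j in range(m)]
            for i in range(n)]

def fbam_recall(W, x):
    """Forward recall: max-min composition of the input with W."""
    return [max(min(x[i], W[i][j]) for i in range(len(x)))
            for j in range(len(W[0]))]
```

Bidirectional recall would apply the same composition with the transpose of W in the reverse direction.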

7.
The stability analysis of the learning rate for a two-layer neural network (NN) is discussed first by minimizing the total squared error between the actual and desired outputs for a set of training vectors. The stable and optimal learning rate, in the sense of maximum error reduction, for each iteration of the training (back-propagation) process can therefore be found for this two-layer NN. It is also proven in this paper that the dynamic stable learning rate for this two-layer NN must be greater than zero, so maximum error reduction is guaranteed by choosing the optimal learning rate for the next training iteration. A dynamic fuzzy neural network (FNN) that consists of a fuzzy linguistic process as the premise part and the two-layer NN as the consequent part is then illustrated as an immediate application of our approach. Each part of this dynamic FNN has its own learning rate for training purposes. A genetic algorithm is designed to allow more efficient tuning of the two learning rates of the FNN; its objective is to reduce the search time by searching for only one learning rate, that of the premise part, since the dynamic optimal learning rate of the two-layer NN can be found directly using our approach. Several examples are fully illustrated, and excellent results are obtained for the model-car backing-up problem and the identification of nonlinear first-order and second-order systems.
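For a layer whose error is quadratic in the weights, the "optimal learning rate in the sense of maximum error reduction" is simply the exact line-search step along the negative gradient. A sketch for J(w) = ||Xw − y||², an assumed stand-in for the paper's two-layer NN error surface:

```python
def optimal_lr_step(X, y, w):
    """One steepest-descent step on J(w) = ||Xw - y||^2 with the
    dynamically optimal learning rate (exact line search): the step
    size giving maximum error reduction for this iteration is
    eta = (g.g) / (2 ||X g||^2)."""
    r = [sum(xi * wi for xi, wi in zip(row, w)) - yi
         for row, yi in zip(X, y)]                       # residual Xw - y
    g = [2 * sum(X[k][j] * r[k] for k in range(len(X)))
         for j in range(len(w))]                         # gradient 2 X^T r
    Xg = [sum(xi * gi for xi, gi in zip(row, g)) for row in X]
    gg = sum(gi * gi for gi in g)
    denom = 2 * sum(v * v for v in Xg)
    eta = gg / denom if denom else 0.0   # positive unless already optimal
    return [wi - eta * gi for wi, gi in zip(w, g)], eta
```

The derived eta is strictly positive away from the optimum, matching the abstract's claim that the dynamic stable learning rate must be greater than zero.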

8.
A fast parameter learning algorithm for fuzzy neural networks
A new fast parameter learning algorithm for fuzzy neural networks is proposed: with some special treatment, all parameters can be adjusted by recursive least squares (RLS). Previous learning algorithms adjusted the centers and widths of the fuzzy membership functions by gradient descent, which tends to become trapped in local minima and converges slowly; the proposed algorithm overcomes these drawbacks. Simulations verify the effectiveness of the algorithm.
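One RLS update for a model that is linear in its parameters can be sketched as below. This is the generic textbook recursion, not the paper's specific treatment that makes the membership-function centers and widths amenable to RLS:

```python
def rls_update(theta, P, x, y, lam=1.0):
    """One recursive-least-squares (RLS) step for y ~ x.theta with
    forgetting factor lam (lam = 1 means no forgetting)."""
    n = len(theta)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(x[i] * Px[i] for i in range(n))
    k = [v / denom for v in Px]                     # gain vector
    err = y - sum(x[i] * theta[i] for i in range(n))
    theta = [theta[i] + k[i] * err for i in range(n)]
    xP = [sum(x[i] * P[i][j] for i in range(n)) for j in range(n)]
    P = [[(P[i][j] - k[i] * xP[j]) / lam for j in range(n)]
         for i in range(n)]                         # covariance update
    return theta, P
```

Initializing P to a large multiple of the identity makes the recursion converge to the batch least-squares solution after enough informative samples.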

9.
Dynamical optimal training for interval type-2 fuzzy neural network (T2FNN)
A type-2 fuzzy logic system (FLS) cascaded with a neural network, the type-2 fuzzy neural network (T2FNN), is presented in this paper to handle uncertainty with dynamical optimal learning. A T2FNN consists of a type-2 fuzzy linguistic process as the antecedent part and a two-layer interval neural network as the consequent part. A general T2FNN is computationally intensive due to the complexity of the type-2 to type-1 reduction; the interval T2FNN is therefore adopted in this paper to simplify the computational process. The dynamical optimal training algorithm for the two-layer consequent part of the interval T2FNN is first developed. The stable and optimal left and right learning rates for the interval neural network, in the sense of maximum error reduction, can be derived for each iteration of the training (back-propagation) process. It can also be shown that the two learning rates cannot both be negative. Further, variation of the initial MF parameters, i.e., the spread level of the uncertain means or the deviations of the interval Gaussian MFs, may affect the performance of the back-propagation training process. To achieve better overall performance, a genetic algorithm (GA) is designed to search for the optimal spread rate of the uncertain means and the optimal learning rate of the antecedent part. Several examples are fully illustrated. Excellent results are obtained for truck backing-up control and the identification of a nonlinear system, yielding better performance than type-1 FNNs.

10.
Feedforward neural networks (FNNs) have been proposed to solve complex problems in pattern recognition, classification, and function approximation. Despite the general success of learning methods for FNNs, such as the backpropagation (BP) algorithm, second-order optimization algorithms, and layer-wise learning algorithms, several drawbacks remain to be overcome, in particular convergence to local minima and long learning times. We propose an efficient learning method for an FNN that combines the BP strategy with layer-by-layer optimization. More precisely, we construct the layer-wise optimization method using the Taylor series expansion of the nonlinear operators describing an FNN, and propose to update the weights of each layer by a BP-based Kaczmarz iterative procedure. The experimental results show that the new learning algorithm is stable, reduces learning time, and improves generalization in comparison with other well-known methods.

11.
Predicting oilfield production with a fuzzy neural network
To address oilfield production forecasting, which is affected by multiple variables, time variation, and uncertainty, fuzzy logic inference is combined with an artificial neural network to build a fuzzy neural network (FNN) system with both fuzzy inference and learning capability. Based on existing oilfield development history data, the system builds a corresponding rule set and uses neural-network training methods (such as gradient-descent learning) to adjust its parameters during training and adaptively add rules, so that the system output best approximates the target samples. Fitting and testing on the actual development history data of an oilfield show that the FNN predicts future oil production fairly accurately, with higher precision and faster training than a conventional BP neural network. The FNN-based approach to oilfield production forecasting therefore has good practical value.

12.
Feedforward neural networks are the most commonly used function approximation technique in neural networks. By the universal approximation theorem, a single-hidden-layer feedforward neural network (FNN) is sufficient to approximate the desired outputs arbitrarily closely. Some researchers use genetic algorithms (GAs) to explore the globally optimal solution of the FNN structure; however, training an FNN with a GA is rather time-consuming. In this paper, we propose a new optimization algorithm for a single-hidden-layer FNN, based on a convex combination algorithm for massaging information in the hidden layer. This technique exploits a continuum idea that combines the classic mutation and crossover strategies of GAs. The proposed method thus avoids the extensive preprocessing that a GA requires to break the data down into a sequence of binary codes before learning or mutation can be applied. We also set up a new error function to measure the performance of the FNN and to obtain the optimal choice of connection weights, so that the nonlinear optimization problem can be solved directly. Several computational experiments illustrate that the proposed algorithm has good exploration and exploitation capabilities in searching for the optimal weights of single-hidden-layer FNNs.
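The continuum idea, replacing binary crossover with a convex combination of two parent weight vectors, can be sketched as a one-dimensional search over the mixing coefficient. This is a simplification for illustration; the paper's algorithm and its error function are more elaborate:

```python
def convex_combine(w1, w2, loss, steps=100):
    """Continuum crossover: scan the convex combinations
    w(a) = a*w1 + (1-a)*w2 and keep the mix with the lowest loss."""
    best_w, best_loss = None, float("inf")
    for i in range(steps + 1):
        a = i / steps
        w = [a * u + (1 - a) * v for u, v in zip(w1, w2)]
        val = loss(w)
        if val < best_loss:
            best_w, best_loss = w, val
    return best_w, best_loss
```

Since a = 0 and a = 1 are included in the scan, the resulting child is never worse than the better of the two parents.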

13.
An EM training algorithm for speaker identification
A probabilistic mapping network (PMN) classifier for speaker identification is proposed, whose parameters are trained with the EM (expectation-maximization) algorithm. The PMN is a four-layer feedforward network that forms a Bayes classifier, implementing multi-class Bayes discrimination: it maps the input model parameters of a speaker's speech data to a speaker decision at the output. Its network nodes correspond to the variables of the Bayesian posterior probability formula. The PMN uses Gaussian kernel functions as density functions, its parameters are trained by the EM algorithm, and learning is supervised between classes and unsupervised within classes. Experimental results show that this classification network and its learning algorithm are effective in speaker identification applications.
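The EM training used for the PMN's Gaussian kernels can be illustrated on the simplest case, a two-component one-dimensional Gaussian mixture. This is a didactic stand-in, not the paper's four-layer multi-class network:

```python
import math

def em_gmm_1d(data, mu, iters=50):
    """EM for a two-component 1-D Gaussian mixture: the E-step computes
    posterior responsibilities, the M-step re-estimates means,
    variances, and mixing weights from them."""
    pi = [0.5, 0.5]
    var = [1.0, 1.0]
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in data:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 for k in range(2)]
            s = p[0] + p[1]
            resp.append([pk / s for pk in p])
        # M-step: weighted re-estimation of the parameters.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
            pi[k] = nk / len(data)
    return mu, var, pi
```

Each EM iteration is guaranteed not to decrease the data likelihood, which is what makes it attractive for density-based classifiers like the PMN.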

14.
A new and improved method for feedforward neural network (FNN) development for application to data classification problems, such as the prediction of levels of low-back disorder (LBD) risk associated with industrial jobs, is presented. Background on FNN development for data classification is provided, along with discussions of previous research and of neighborhood (local) solution search methods for hard combinatorial problems. An analytical study is presented which compared the prediction accuracy of an FNN based on the error back-propagation (EBP) algorithm with that of an FNN developed using local solution search (simulated annealing) for classifying industrial jobs as posing low or high risk for LBDs. The comparison demonstrated superior performance of the FNN generated using the new method, whose architecture included fewer input (predictor) variables and hidden neurons than the FNN developed with the EBP algorithm. Independent-variable selection methods and the phenomenon of 'overfitting' in FNN (and statistical model) generation for data classification are discussed. The results support the use of the new approach to FNN development for applications to musculoskeletal disorders and risk forecasting in other domains.

15.
This paper proposes a self-evolving interval type-2 fuzzy neural network (SEIT2FNN) with online structure and parameter learning. The antecedent parts in each fuzzy rule of the SEIT2FNN are interval type-2 fuzzy sets and the fuzzy rules are of the Takagi–Sugeno–Kang (TSK) type. The initial rule base in the SEIT2FNN is empty, and the online clustering method is proposed to generate fuzzy rules that flexibly partition the input space. To avoid generating highly overlapping fuzzy sets in each input variable, an efficient fuzzy set reduction method is also proposed. This method independently determines whether a corresponding fuzzy set should be generated in each input variable when a new fuzzy rule is generated. For parameter learning, the consequent part parameters are tuned by the rule-ordered Kalman filter algorithm for high-accuracy learning performance. Detailed learning equations on applying the rule-ordered Kalman filter algorithm to the SEIT2FNN consequent part learning, with rules being generated online, are derived. The antecedent part parameters are learned by gradient descent algorithms. The SEIT2FNN is applied to simulations on nonlinear plant modeling, adaptive noise cancellation, and chaotic signal prediction. Comparisons with other type-1 and type-2 fuzzy systems in these examples verify the performance of the SEIT2FNN.
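The online clustering criterion for structure learning is typically: create a new rule only when no existing rule fires strongly enough on the incoming sample. A minimal type-1, Gaussian-firing sketch (the threshold and width are illustrative choices, and the SEIT2FNN's interval type-2 sets and fuzzy-set overlap check are omitted):

```python
import math

def maybe_add_rule(rules, x, threshold=0.5, sigma=1.0):
    """Online structure learning: a new rule (cluster center) is created
    only when no existing rule fires strongly on the incoming sample."""
    if rules:
        strengths = []
        for center in rules:
            d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
            strengths.append(math.exp(-d2 / (2 * sigma ** 2)))
        if max(strengths) >= threshold:
            return False           # sample is covered by an existing rule
    rules.append(list(x))          # new rule centered at the sample
    return True
```

A larger threshold (or smaller sigma) makes the partition finer, trading rule-base compactness for resolution.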

16.
A fuzzy neural network model for recognizing the composition patterns of coal gangue is presented, together with a practical ecological operator. An ecological genetic algorithm built on this operator is used for offline learning of the fuzzy neural network. Simulation and experimental results show that the new algorithm gives the network good convergence, and that the fuzzy rules extracted from the trained quantized network improve the recognition rate of gangue in coal.

17.
Support-vector-based fuzzy neural network for pattern classification
Fuzzy neural networks (FNNs) for pattern classification usually use backpropagation or C-cluster-type learning algorithms to learn the parameters of the fuzzy rules and membership functions from the training data. However, such learning algorithms usually cannot minimize the empirical risk (training error) and the expected risk (testing error) simultaneously, and thus cannot reach good classification performance in the testing phase. To tackle this drawback, a support-vector-based fuzzy neural network (SVFNN) is proposed for pattern classification in this paper. The SVFNN combines the superior classification power of the support vector machine (SVM) in high-dimensional data spaces with the efficient human-like reasoning of the FNN in handling uncertain information. A learning algorithm consisting of three phases is developed to construct the SVFNN and train its parameters. In the first phase, the fuzzy rules and membership functions are automatically determined by a clustering principle. In the second phase, the parameters of the FNN are calculated by the SVM with the proposed adaptive fuzzy kernel function. In the third phase, the relevant fuzzy rules are selected by the proposed fuzzy-rule reduction method. To investigate its effectiveness, the proposed SVFNN is applied to the Iris, Vehicle, Dna, Satimage, and Ijcnn1 datasets from the UCI Repository, the Statlog collection, and the IJCNN challenge 2001, respectively. Experimental results show that the proposed SVFNN achieves good classification performance with a drastically reduced number of fuzzy kernel functions.

18.
Three new learning algorithms for Takagi-Sugeno-Kang fuzzy systems, based on training error and a genetic algorithm, are proposed. The first two algorithms consist of two phases. In the first phase, the initial structure of the neuro-fuzzy network is created by estimating the optimum points of the training data in input-output space using the KNN method (first algorithm) or the Mean-Shift method (second algorithm), and new neurons are then added by an error-based algorithm. In the second phase, redundant neurons are recognized and removed using a genetic algorithm. The third algorithm builds the network in a single phase using a modified version of the error-based algorithm used in the first two methods. The first method is shown to be invariant to the parameter K of the KNN algorithm, and in two simulated examples it outperforms other neuro-fuzzy approaches in both performance and network compactness.

19.
A Fourier neural network based on the DFP quasi-Newton method
林琳, 黄南天, 高兴泉. 《计算机工程》, 2012, 38(10): 144-147
To address the local minima, slow learning, and poor generalization that result from training Fourier neural networks with the steepest-descent method, a new learning algorithm based on the DFP-corrected quasi-Newton method is proposed. The algorithm has low computational complexity and ensures good generalization ability and global optimality of the network. It is verified on two numerical examples and compared with a BP neural network and two other Fourier neural networks. The results show that its computational complexity is about 5% of that of the steepest-descent method and 80% of that of the least-squares learning algorithm, with good generalization ability.

20.
Constructive design is one of the directions in which ANN design is developing, and comprehensive, high-quality ANN learning should include automatic optimization of the neuron activation function types. Within a constructive-design framework, this paper discusses how to achieve such comprehensive learning, including the activation function types, for typical feedforward networks. First, the principle and algorithmic framework of a constructive design method for typical feedforward networks are proposed, decomposing the design of the whole network into the design of individual neurons. Then a GA-based design method for single neurons that optimizes the activation function type is proposed. Extensive function-fitting simulations show that, compared with several common ANN design methods that do not optimize the activation function types, the proposed method is more effective and achieves better generalization with a smaller network structure.
