Similar documents
20 similar documents found
1.
Ning, Meng Joo, Xianyao. Neurocomputing, 2009, 72(16-18): 3818
In this paper, we present a fast and accurate online self-organizing scheme for parsimonious fuzzy neural networks (FAOS-PFNN), where a novel structure learning algorithm incorporating a pruning strategy into new growth criteria is developed. Because pruning is folded into the growth criteria, the proposed procedure needs no separate pruning phase, which not only speeds up the online learning process but also yields a more parsimonious fuzzy neural network with comparable performance and accuracy. The FAOS-PFNN starts with no hidden neurons and parsimoniously generates new hidden units according to the proposed growth criteria as learning proceeds. In the parameter learning phase, all the free parameters of hidden units, regardless of whether they are newly created or originally existing, are updated by the extended Kalman filter (EKF) method. The effectiveness and superiority of the FAOS-PFNN paradigm are demonstrated by comparison with other popular approaches, including the resource allocation network (RAN), RAN via the extended Kalman filter (RANEKF), minimal resource allocation network (MRAN), adaptive-network-based fuzzy inference system (ANFIS), orthogonal least squares (OLS), RBF-AFS, dynamic fuzzy neural networks (DFNN), generalized DFNN (GDFNN), generalized GAP-RBF (GGAP-RBF), online sequential extreme learning machine (OS-ELM) and the self-organizing fuzzy neural network (SOFNN), on various benchmark problems in function approximation, nonlinear dynamic system identification, chaotic time-series prediction and real-world regression. Simulation results demonstrate that the proposed FAOS-PFNN algorithm achieves faster learning and a more compact network structure with comparably high approximation and generalization accuracy.
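To make the EKF parameter-update step mentioned in this abstract concrete, here is a minimal sketch of a scalar-observation extended Kalman filter update applied to a toy linear-in-parameters model. The function name, demo data, and noise constant are illustrative assumptions, not the FAOS-PFNN implementation; in the paper the Jacobian would come from the fuzzy network's hidden units.

```python
import numpy as np

def ekf_step(theta, P, Hk, ek, r=1.0):
    """One EKF update for parameter vector theta, given the Jacobian Hk of
    the network output w.r.t. theta and the scalar output error ek."""
    S = float(Hk @ P @ Hk) + r      # innovation variance
    K = P @ Hk / S                  # Kalman gain
    theta = theta + K * ek          # parameter correction
    P = P - np.outer(K, Hk @ P)     # covariance update
    return theta, P

# Demo on a noiseless linear model y = true_theta @ x, where the Jacobian
# is simply x; the recursive estimate converges to the true parameters.
rng = np.random.default_rng(0)
true_theta = np.array([2.0, -1.0])
theta, P = np.zeros(2), np.eye(2) * 10.0
for _ in range(200):
    x = rng.normal(size=2)
    theta, P = ekf_step(theta, P, x, float(true_theta @ x - theta @ x))
```

A single EKF step updates all free parameters at once from one training sample, which is what makes it attractive for online structure-learning schemes like the one described.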

2.
In the conventional backpropagation (BP) learning algorithm used to train the connecting weights of an artificial neural network (ANN), a sigmoidal activation function with a fixed slope is used. This limitation leads to slower training because only the weights of the different layers are adjusted by the conventional BP algorithm. To accelerate convergence during the training phase of the ANN, in addition to the weight updates, the slope of the sigmoid function associated with each artificial neuron can also be adjusted by a newly developed learning rule. To this end, new BP learning rules for adjusting the slope of the activation function associated with each neuron are derived in this paper. The combined rules, for both the connecting weights and the slopes of the sigmoid functions, are then applied to the ANN structure to achieve faster training. In addition, two benchmark problems, classification and nonlinear system identification, are solved using the trained ANN. The results of simulation-based experiments demonstrate that, in general, the proposed BP learning rules for slope and weight adjustment provide superior convergence during training, as well as improved root mean square error and mean absolute deviation on classification and nonlinear system identification problems.
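The slope-update rule follows from the same chain rule as the weight updates: for h = σ(λz), ∂h/∂λ = z·h(1−h). Below is a minimal numpy sketch of BP that adapts both the weights and a per-neuron slope; the toy data, network size, and learning rate are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z, lam):
    return 1.0 / (1.0 + np.exp(-lam * z))    # per-neuron slope lam

# Toy regression data (illustrative, not the paper's benchmarks)
X = rng.uniform(-1, 1, (200, 2))
y = (np.sin(X[:, 0]) + 0.5 * X[:, 1]).reshape(-1, 1)

W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lam = np.ones(8)                              # trainable sigmoid slopes
eta, losses = 0.1, []

for _ in range(500):
    z = X @ W1 + b1
    h = sigmoid(z, lam)
    err = h @ W2 + b2 - y
    losses.append(float(np.mean(err ** 2)))
    dh = err @ W2.T
    common = dh * h * (1 - h)                 # shared factor of both gradients
    # Standard BP weight updates plus the extra slope update:
    W2 -= eta * h.T @ err / len(X); b2 -= eta * err.mean(0)
    W1 -= eta * X.T @ (common * lam) / len(X)
    b1 -= eta * (common * lam).mean(0)
    lam -= eta * (common * z).mean(0)         # slope learning rule
```

Note that the slope gradient reuses the same backpropagated error term as the input-weight gradient, so the extra update costs almost nothing per iteration.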

3.
To address the slow convergence, low convergence accuracy, and poor generalization in pattern recognition of traditional neural networks, a quantum wavelet neural network model combining quantum neural networks with wavelet theory is proposed. In this model, the hidden-layer quantum neurons use a linear superposition of wavelet basis functions as the activation function, termed a multilayer wavelet activation function, so that the hidden neurons can represent more states and magnitudes while improving the convergence accuracy and speed of the network. A learning algorithm for the network is given, and the effectiveness of the model and its learning algorithm is verified by an application to waveform recognition in breakout prediction for continuous casting.

4.
This paper presents a modified neural network structure with a tunable activation function and provides a new learning algorithm for its training. Simulation results on the XOR problem, the Feigenbaum function, and the Henon map show that the new algorithm outperforms the BP (back propagation) algorithm in both convergence time and convergence accuracy. Further modifications of the network structure, combined with the faster learning algorithm, yield a simpler structure with even faster convergence and better convergence accuracy.

5.
To address the low recognition rate and poor generalization of traditional neural networks, an improved self-organizing fuzzy neural network (SOFNN) learning algorithm is proposed. Neurons are modified, deleted, and added based on the stored outputs of each neuron in the ellipsoidal basis function (EBF) layer and their sum, yielding the network's effective neurons and reducing training time. Parameters are estimated with recursive least squares (RLSE) and refined with gradient descent to guarantee network convergence. Compared with other fuzzy neural networks, its advantages in accuracy, structural complexity, and noise robustness are effectively verified on real-world datasets.

6.
The prediction accuracy and generalization ability of neural/neurofuzzy models for chaotic time series prediction depend heavily on the employed network model as well as the learning algorithm. In this study, several neural and neurofuzzy models with different learning algorithms are examined for the prediction of several benchmark chaotic systems and time series. The prediction performance of locally linear neurofuzzy models with the recently developed Locally Linear Model Tree (LoLiMoT) learning algorithm is compared with that of the Radial Basis Function (RBF) neural network with the Orthogonal Least Squares (OLS) learning algorithm, the MultiLayer Perceptron neural network with the error back-propagation learning algorithm, and the Adaptive Network based Fuzzy Inference System. In particular, cross-validation techniques based on the evaluation of error indices on multiple validation sets are utilized to optimize the number of neurons and to prevent overfitting in the incremental learning algorithms. To make a fair comparison between neural and neurofuzzy models, they are compared at their best structure based on prediction accuracy, generalization, and computational complexity. The experiments are designed primarily to analyze the generalization capability and accuracy of the learning techniques when dealing with a limited number of training samples from deterministic chaotic time series, but the effect of noise on the performance of the techniques is also considered. Various chaotic systems and time series, including the Lorenz system, the Mackey-Glass chaotic equation, the Henon map, the AE geomagnetic activity index, and sunspot numbers, are examined as case studies. The obtained results indicate the superior performance of incremental learning algorithms and their respective networks, such as OLS for the RBF network and LoLiMoT for the locally linear neurofuzzy model.

7.
李鹏华, 柴毅, 熊庆宇. Acta Automatica Sinica, 2013, 39(9): 1511-1522
To improve the learning speed and generalization performance of Elman neural networks, a novel Elman network model with a quantum gate structure and an extended gradient back-propagation learning algorithm are proposed; the new model is composed of quantum-bit (qubit) neurons and classical neurons. The new architecture uses a quantum mapping layer to keep the local feedback from the context units pattern-consistent with the hidden-layer inputs, and exploits the complementary relationship between the qubit neuron outputs and the corrections of the associated quantum gate parameters to strengthen the network's update dynamics. The new learning algorithm adopts a search-then-converge strategy to adapt the learning rate and speed up training; by extending the context-unit weights into the hidden-layer weight matrix, extra temporal information is captured while the two are updated synchronously, improving the match between the context-unit outputs and the hidden-layer inputs. Numerical experiments on peak detection show that, under quantum back-propagation learning, the quantum-gate Elman network learns quickly and generalizes well.

8.
In this paper, a modified learning algorithm for the multilayer neural network with multi-valued neurons (MLMVN) is presented. The MLMVN, a member of the complex-valued neural networks family, has already demonstrated a number of important advantages over other techniques. The modified learning algorithm is based on the introduction of an acceleration step, performed by means of the complex QR decomposition, and on a new approach to calculating the output neurons' errors: they are calculated as the differences between the corresponding desired outputs and the actual values of the weighted sums. These modifications significantly improve the learning speed of the existing derivative-free backpropagation learning algorithm for the MLMVN. The modified algorithm requires two orders of magnitude fewer training epochs and less time to converge than the existing algorithm. Good performance is confirmed not only by the much quicker convergence of the learning algorithm, but also by comparable or even higher classification/prediction accuracy, obtained by testing on benchmarks (the Mackey–Glass and Box–Jenkins time series) and on satellite spectral data examined in a comparison test.

9.
R., S., N., P. Neurocomputing, 2009, 72(16-18): 3771
In a fully complex-valued feed-forward network, the convergence of the Complex-valued Back Propagation (CBP) learning algorithm depends on the choice of activation function, learning sample distribution, minimization criterion, initial weights, and learning rate. The minimization criteria used in existing versions of the CBP learning algorithm do not approximate the phase of the complex-valued output well in function approximation problems. The phase of a complex-valued output is critical in telecommunications, and in reconstruction and source localization problems in medical imaging applications. In this paper, the issues related to the convergence of complex-valued neural networks are clearly enumerated using a systematic sensitivity study on existing complex-valued neural networks. In addition, we also compare the performance of different types of split complex-valued neural networks. From the observations in the sensitivity analysis, we propose a new CBP learning algorithm with a logarithmic performance index for a complex-valued neural network with an exponential activation function. The proposed CBP learning algorithm directly minimizes both the magnitude and phase errors and also provides better convergence characteristics. Performance of the proposed scheme is evaluated using two synthetic complex-valued function approximation problems, the complex XOR problem, and a non-minimum phase equalization problem. A comparative analysis of the convergence of the existing fully complex and split complex networks is also presented.

10.
陈华伟, 年晓玲, 靳蕃. Journal of Computer Applications, 2006, 26(5): 1106-1108
A new learning algorithm for feedforward neural networks is proposed in which the weights of different layers can be adjusted in both the forward and backward phases. In the forward phase, the weights connecting the hidden and output layers are determined by the minimum-norm least-squares solution; in the backward phase, the weights connecting the input and hidden layers are adjusted by error gradient descent. The algorithm learns and converges quickly and, to a certain extent, preserves the generalization ability of the trained network. Experimental results provide a preliminary validation of its performance.
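A plausible numpy sketch of such a two-phase scheme: in each iteration the hidden-to-output weights are obtained as the minimum-norm least-squares solution via the pseudoinverse, while only the input-to-hidden weights are updated by gradient descent. The data, layer sizes, and learning rate are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (150, 1))
y = np.sin(np.pi * X)                    # toy regression target

W1 = rng.normal(0, 1, (1, 10)); b1 = rng.normal(0, 1, 10)
eta, mses = 0.05, []
for _ in range(100):
    H = np.tanh(X @ W1 + b1)
    W2 = np.linalg.pinv(H) @ y           # forward: min-norm LS output weights
    err = H @ W2 - y
    mses.append(float(np.mean(err ** 2)))
    dH = err @ W2.T * (1 - H ** 2)       # backward: gradient for hidden layer
    W1 -= eta * X.T @ dH / len(X)
    b1 -= eta * dH.mean(0)
```

Solving the output layer exactly at every step means the gradient phase only has to shape the hidden features, which is one reason such hybrid schemes converge quickly.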

11.
In this work we present a constructive algorithm capable of producing arbitrarily connected feedforward neural network architectures for classification problems. Both the architecture and the synaptic weights of the neural network are defined by the learning procedure. The main purpose is to obtain a parsimonious neural network, in the form of a hybrid and dedicated linear/nonlinear classification model, which can achieve high levels of generalization performance. Though neither a global optimization algorithm nor a population-based metaheuristic, the constructive approach has mechanisms to avoid premature convergence, by mixing growing and pruning processes and by implementing a relaxation strategy for the learning error. The synaptic weights of the networks produced by the constructive mechanism are adjusted by a quasi-Newton method, and the decision to grow or prune the current network is based on a mutual information criterion. A set of benchmark experiments, including artificial and real datasets, indicates that the new proposal performs favorably compared with alternative approaches in the literature, such as the traditional MLP, mixtures of heterogeneous experts, cascade correlation networks and an evolutionary programming system, in terms of both classification accuracy and parsimony of the obtained classifier.

12.
Neural network training based on an object-oriented adaptive particle swarm optimization algorithm
To overcome the slow convergence and poor generalization of traditional neural network training algorithms, a new object-oriented adaptive particle swarm optimization algorithm (OAPSO) is proposed for neural network training. The algorithm improves training speed and generalization by redesigning the PSO encoding scheme and adopting an adaptive search strategy, and is tested on the Iris and Ionosphere classification datasets. Experimental results show that neural networks trained with OAPSO clearly outperform those trained with the BP and standard PSO algorithms in classification accuracy, greatly improving generalization and optimization quality with fast global convergence.
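The OAPSO encoding and adaptive strategy are specific to the paper, but the underlying idea of training network weights with a particle swarm can be sketched with standard global-best PSO on the XOR task. All constants (swarm size, inertia, acceleration coefficients) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 0], float)          # XOR targets

def loss(flat):
    """MSE of a tiny 2-3-1 network whose weights are packed in `flat`."""
    W1 = flat[:6].reshape(2, 3); b1 = flat[6:9]
    W2 = flat[9:12]; b2 = flat[12]
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return float(np.mean((out - y) ** 2))

n, dim = 30, 13                             # particles, weight-vector length
pos = rng.uniform(-1, 1, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_f = np.array([loss(p) for p in pos])
g = pbest[pbest_f.argmin()].copy(); g_f = float(pbest_f.min())

for _ in range(300):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # Velocity: inertia + pull toward personal best + pull toward global best
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = pos + vel
    f = np.array([loss(p) for p in pos])
    better = f < pbest_f
    pbest[better] = pos[better]; pbest_f[better] = f[better]
    if pbest_f.min() < g_f:
        g_f = float(pbest_f.min()); g = pbest[pbest_f.argmin()].copy()
```

Because PSO needs only loss evaluations, not gradients, it sidesteps the local-minimum issues of BP that motivate this line of work, at the cost of many forward passes per update.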

13.
杨梅娟, 陈亚军. Journal of Computer Applications, 2006, 26(11): 2765-2768
To overcome the slow convergence of the BP algorithm and its tendency to fall into local minima, and to improve BP prediction accuracy, a variable conjugate gradient method (VCG) is proposed. The network is modified in both its activation function and its learning rule, and the convergence of the method is analyzed and briefly proved. An application to forecasting the total output of China's major agricultural products confirms the effectiveness of the algorithm.

14.
We propose a new type of recurrent neural-network architecture, in which each output unit is connected to itself and is also fully connected to the other output units and all hidden units. The proposed recurrent neural network differs from Jordan's and Elman's recurrent neural networks in both function and architecture, because it was originally extended from a plain multilayer feedforward neural network to improve discrimination and generalization power. We also prove the convergence properties of the learning algorithm for the proposed recurrent neural network, and analyze its performance through recognition experiments on the totally unconstrained handwritten numeric database of Concordia University, Montreal, Canada. Experimental results confirm that the proposed recurrent neural network improves discrimination and generalization power in the recognition of visual patterns.

15.
This paper presents a novel neural network model with a hybrid quantized architecture to improve the performance of conventional Elman networks. The quantum gate technique is introduced to resolve the pattern mismatch between the input stream and the one-time-delay state feedback. A quantized back-propagation training algorithm with an adaptive dead-zone scheme is developed to provide an optimal or suboptimal tradeoff between convergence speed and generalization performance. Furthermore, the effectiveness of the new real-time learning algorithm is demonstrated by proving the convergence of the quantum gate parameters based on the Lyapunov method. Numerical experiments are carried out to demonstrate the accuracy of the theoretical results.

16.
A study of the learning performance of convolutional neural networks with different pooling models
Objective: Deep learning algorithms based on convolutional neural networks are attracting wide attention in image processing. To further improve the accuracy of CNN feature extraction, speed up parameter convergence, and optimize learning performance, a dynamic adaptive improved pooling algorithm is proposed by comparing the effect of different pooling models on learning performance. Method: A convolutional neural network model is built and trained with different pooling models, and the learning results are examined at different numbers of iterations. Given the low accuracy and slow convergence of existing algorithms, a new dynamic adaptive pooling model is constructed from this comparison, and its effect on recognition accuracy and convergence speed at different iteration counts is studied. Results: Comparative experiments show that the CNN using the dynamic adaptive pooling algorithm achieves the best learning performance: on a handwritten digit dataset, convergence speed improves by up to 18.55% and the image misclassification rate drops by up to 20%. Conclusion: The dynamic adaptive pooling algorithm not only makes CNN feature extraction more precise but also substantially improves convergence speed and model accuracy, thereby optimizing network learning performance. The model can be further extended to other CNN-based deep learning algorithms.
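The paper's exact pooling formula is not given in the abstract, but one simple form of a window-adaptive pooling rule, mixing max and mean pooling according to window contrast, can be sketched as follows. The mixing rule here is a hypothetical illustration, not the paper's model.

```python
import numpy as np

def pool2x2(x, mode):
    """Pool a 2D feature map with 2x2 windows, stride 2.

    mode: 'max', 'mean', or 'adaptive' -- a convex mix of max and mean,
    weighted by window contrast (a hypothetical adaptive rule)."""
    h, w = x.shape
    out = np.empty((h // 2, w // 2))
    for i in range(0, h - 1, 2):
        for j in range(0, w - 1, 2):
            win = x[i:i + 2, j:j + 2]
            if mode == 'max':
                out[i // 2, j // 2] = win.max()
            elif mode == 'mean':
                out[i // 2, j // 2] = win.mean()
            else:
                # Weight toward max pooling when the window is high-contrast,
                # toward mean pooling when it is nearly flat.
                a = (win.max() - win.mean()) / (win.max() - win.min() + 1e-8)
                out[i // 2, j // 2] = a * win.max() + (1 - a) * win.mean()
    return out
```

An adaptive output always lies between the mean-pooled and max-pooled values for the same window, so the rule degrades gracefully to either fixed model.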

17.
The slow convergence of the back-propagation neural network (BPNN) has become a challenge in data-mining and knowledge discovery applications due to the drawbacks of the gradient descent (GD) optimization method, which is widely adopted in BPNN learning. To solve this problem, standard optimization techniques such as the conjugate-gradient and Newton methods have been proposed to improve the convergence rate of the BP learning algorithm. This paper presents a heuristic method that adds an adaptive smoothing momentum term to the original BP learning algorithm to speed up convergence. In this improved BP learning algorithm, an adaptive smoothing technique automatically adjusts the momentum in the weight-updating formula according to the "3σ limits" theory. Using the adaptive smoothing momentum terms, the improved algorithm makes network training and convergence faster, and generalization performance stronger, than the standard BP learning algorithm. To verify the effectiveness of the proposed BP learning algorithm, three typical foreign exchange rates, the British pound (GBP), Euro (EUR), and Japanese yen (JPY), are chosen as forecasting targets for illustration. Experimental results from homogeneous algorithm comparisons reveal that the proposed BP learning algorithm outperforms the other comparable BP algorithms in performance and convergence rate. Furthermore, empirical results from heterogeneous model comparisons also show the effectiveness of the proposed BP learning algorithm.
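One way such a 3σ-based momentum adaptation could look in practice: track recent loss changes and damp the momentum coefficient whenever the latest change falls outside the 3σ control band of its recent history. The control rule, toy data, and all constants below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (100, 1)); y = X ** 2   # toy regression data

W = rng.normal(0, 0.5, (1, 6)); b = np.zeros(6)
V = rng.normal(0, 0.5, (6, 1)); c = np.zeros(1)
vel = [np.zeros_like(p) for p in (W, b, V, c)]
eta, mu = 0.02, 0.9
losses, deltas = [], []

for _ in range(300):
    H = np.tanh(X @ W + b)
    err = H @ V + c - y
    losses.append(float(np.mean(err ** 2)))
    if len(losses) > 1:
        deltas.append(losses[-2] - losses[-1])
    # Adaptive momentum: damp mu when the latest loss change leaves the
    # 3-sigma band of the last 10 changes (illustrative control rule).
    if len(deltas) >= 10:
        m, s = np.mean(deltas[-10:]), np.std(deltas[-10:])
        mu = 0.5 if abs(deltas[-1] - m) > 3 * s else 0.9
    dpre = err @ V.T * (1 - H ** 2)
    grads = [X.T @ dpre / len(X), dpre.mean(0),
             H.T @ err / len(X), err.mean(0)]
    for k, (p, gr) in enumerate(zip((W, b, V, c), grads)):
        vel[k] = mu * vel[k] - eta * gr        # momentum-smoothed update
        p += vel[k]
```

Treating the sequence of loss changes as a control chart is the statistical idea behind "3σ limits": updates that behave abnormally trigger a more conservative momentum.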

18.
Neural networks have become an effective class of methods for nonlinear system identification, but the commonly used multilayer perceptron suffers from poor stability and slow convergence. The Fourier neural network, built on the multilayer perceptron and Fourier series, offers good generalization and pattern recognition ability, but its learning algorithm is mainly steepest descent, which is prone to local minima and slow learning. A Fourier neural network using a double broken-line step method is proposed, which avoids the local-minimum problem and achieves second-order convergence. Numerical examples verify the performance of the new algorithm, which is then applied to nonlinear system identification and compared with several classical neural network algorithms.

19.
A new multi-output neuron model is studied. First, the general form of the model is given and applied to multilayer feedforward neural networks; second, its learning algorithm, a recursive least-squares algorithm, is presented. Finally, several simulation experiments show that multilayer feedforward networks with multi-output neurons have a simple structure, strong generalization ability, and fast, accurate convergence, performing far better than multilayer feedforward networks with tunable activation functions.

20.
Real-time learning capability of neural networks
In some practical applications of neural networks, fast response to external events within an extremely short time is highly demanded and expected. However, the extensively used gradient-descent-based learning algorithms obviously cannot satisfy the real-time learning needs in many applications, especially for large-scale applications and/or when higher generalization performance is required. Based on Huang's constructive network model, this paper proposes a simple learning algorithm capable of real-time learning which can automatically select appropriate values of neural quantizers and analytically determine the parameters (weights and bias) of the network at one time only. The performance of the proposed algorithm has been systematically investigated on a large batch of benchmark real-world regression and classification problems. The experimental results demonstrate that our algorithm can not only produce good generalization performance but also have real-time learning and prediction capability. Thus, it may provide an alternative approach for the practical applications of neural networks where real-time learning and prediction implementation is required.
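The abstract's key claim, determining the network parameters analytically "at one time only", can be illustrated with a random-hidden-layer least-squares fit. This is an ELM-style sketch in the spirit of the abstract; the data and layer sizes are illustrative assumptions, and the paper's neural quantizers are not modeled.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, (300, 1))
y = np.sin(2 * X) + 0.3 * X            # toy regression target

# Random hidden layer, fixed after initialization (no iterative training)
W = rng.normal(0, 2, (1, 40)); b = rng.uniform(-1, 1, 40)
H = np.tanh(X @ W + b)

# Output weights determined analytically in a single least-squares step
beta = np.linalg.pinv(H) @ y
mse = float(np.mean((H @ beta - y) ** 2))
```

Because the only "training" is one pseudoinverse solve, the total learning time is dominated by a single matrix factorization, which is what gives such schemes their real-time character.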
