Similar Literature
20 similar references retrieved (search time: 328 ms)
1.
Improving the Learning Speed and Generalization Ability of BP Neural Networks   Cited by: 1 (self-citations: 0, external citations: 1)
To address the slow learning speed and poor generalization ability of BP networks, this paper proposes several improved learning algorithms: 1) the conventional LMS error estimator is replaced with a Cauchy error estimator; 2) a shape factor is introduced into the standard sigmoid function; 3) the learning rate is adjusted adaptively; 4) regularization is applied to improve generalization. The advantages of improvements 3) and 4) are verified using the relevant functions of the MATLAB Neural Network Toolbox.
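A minimal Python sketch (not from the paper) of two of the listed ideas, a sigmoid with a tunable shape factor and a simple rule for adapting the learning rate; the parameter names and adaptation constants are assumptions for illustration:

    import numpy as np

    def sigmoid(x, beta=1.0):
        # Shape factor beta steepens (beta > 1) or flattens (beta < 1) the curve.
        return 1.0 / (1.0 + np.exp(-beta * x))

    def adaptive_lr(lr, prev_error, curr_error, up=1.05, down=0.7):
        # Grow the learning rate while the error keeps falling; shrink it otherwise.
        return lr * up if curr_error < prev_error else lr * down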

2.
A New Neural Network Structure and Its Learning Algorithm   Cited by: 3 (self-citations: 0, external citations: 3)
武妍 《计算机工程》2004,30(9):61-62,F003
A new neural network structure based on fuzzy set theory and its learning algorithm are proposed. The network is constructed by modifying the standard BP network (denoted FIBP), and its effectiveness is verified through several simulation examples. The experimental results show that when the FIBP network is applied to dynamic, highly nonlinear function approximation, it learns faster than the BP network and achieves higher accuracy and better generalization.

3.
A BP Network Algorithm with Chaotic Activation Functions   Cited by: 1 (self-citations: 0, external citations: 1)
This paper discusses the pseudo-saturation phenomenon that occurs during BP network learning and presents an improved algorithm that effectively resolves it. Simulation results show that the method not only speeds up network learning but also has a certain ability to keep the weights from falling into local minima, thereby improving convergence accuracy; the algorithm also improves the generalization ability of the network.

4.
The ordinary three-layer RBF network is already a fairly good neural network. To further improve its performance, a self-recursive RBF network that uses the PSO algorithm is built on top of the ordinary three-layer structure. The learning algorithm adjusts the parameters with a gradient-based method combined with PSO. Compared with a three-layer RBF network trained by a momentum-gradient algorithm, the proposed PSO-based self-recursive RBF network achieves better generalization, robustness, and accuracy with fewer neurons. Finally, the effectiveness of the algorithm is verified through simulation experiments.
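For reference, a minimal Python sketch of the standard PSO velocity/position update mentioned above (the inertia weight and acceleration constants are illustrative defaults, not values from the paper):

    import numpy as np

    def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
        # Standard PSO update: inertia term plus pulls toward the personal
        # best (pbest) and the global best (gbest) positions.
        r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        return x + v, v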

5.
To improve the generalization ability of the error back-propagation algorithm and address the poor generalization of BP networks, an improved algorithm is proposed that draws on the advantages of chaotic optimization. The activation functions of a small number of neurons in the network are replaced with activation functions that have chaotic characteristics. These neurons have no saturation region, which speeds up learning and overcomes the pseudo-saturation phenomenon; in addition, their outputs carry a degree of randomness that acts like noise and can improve the generalization ability of the network to some extent. Analysis of simulation results on character recognition shows that the network has good fault tolerance and that its generalization ability is improved.
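One common way to build an activation with chaotic behaviour and no saturation region is to couple a sigmoid with a logistic-map state; the Python sketch below illustrates that general idea and is not the specific function used in the paper:

    import numpy as np

    def chaotic_sigmoid(x, z, mu=4.0, eps=0.05):
        # Logistic-map state z in (0, 1); mu = 4 gives fully chaotic iterates.
        z_next = mu * z * (1.0 - z)
        # Sigmoid output perturbed by the chaotic term, so the unit never
        # settles into a flat saturation region.
        return 1.0 / (1.0 + np.exp(-x)) + eps * (z_next - 0.5), z_next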

6.
A Survey of the Learning and Generalization Performance of CMAC   Cited by: 1 (self-citations: 0, external citations: 1)
The Cerebellar Model Articulation Controller (CMAC) is a locally learning feedforward network with a simple structure, fast convergence, and easy implementation. Viewed neuron by neuron, the relationship among the neurons is linear, but viewed as a whole, the network realizes a nonlinear mapping; moreover, the model generalizes from the input stage onward. Since the learning and generalization abilities of the network have long been research hot spots, this paper gives a comprehensive discussion of the generalization ability and learning ability of CMAC networks and of several ways to improve them. Finally, a training strategy for improving the generalization ability of CMAC is presented; it not only avoids the learning-interference problem and speeds up learning, but also increases the effective number of training samples by raising the number of training cycles. MATLAB simulations show that this training strategy improves the generalization ability of the CMAC network; the method is simple, effective, and feasible.

7.
A Fourier Neural Network Based on the DFP-Corrected Quasi-Newton Method   Cited by: 1 (self-citations: 0, external citations: 1)
林琳  黄南天  高兴泉 《计算机工程》2012,38(10):144-147
To address the local minima, slow learning, and poor generalization caused by training Fourier neural networks with steepest descent, a new learning algorithm based on the DFP-corrected quasi-Newton method is proposed. The algorithm has low computational complexity and ensures good generalization and global optimality of the network. It is tested on two numerical examples and compared with a BP neural network and two other Fourier neural networks. The results show that its computational cost is about 5% of that of steepest descent and 80% of that of the least-squares learning algorithm, with good generalization ability.
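For reference, a minimal Python sketch of the standard DFP inverse-Hessian update (generic quasi-Newton bookkeeping, not code from the paper):

    import numpy as np

    def dfp_update(H, s, y):
        # DFP update of the inverse-Hessian approximation H,
        # with s = x_new - x_old and y = grad_new - grad_old.
        s = s.reshape(-1, 1)
        y = y.reshape(-1, 1)
        Hy = H @ y
        return H + (s @ s.T) / float(s.T @ y) - (Hy @ Hy.T) / float(y.T @ Hy)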

8.
A Bi-Covering Incremental Learning Algorithm for Constructive Neural Networks   Cited by: 12 (self-citations: 1, external citations: 12)
陶品  张钹  叶榛 《软件学报》2003,14(2):194-201
This paper studies the bi-covering incremental learning algorithm (BiCA) for cover-based constructive neural networks (CBCNN). Following the basic idea of CBCNN, the algorithm constructs multiple positive and negative covering clusters so that, after the network is first built, its parameters and structure can still be continuously revised and optimized, with nodes added to or removed from the network for incremental learning. Analysis shows that the BiCA learning algorithm not only retains the advantages and characteristics of CBCNN but also realizes incremental learning and improves the generalization ability of CBCNN. Simulation results show that the incremental learning algorithm learns quickly when the network's initial classification ability is poor, and that it is insensitive to the order in which samples are presented.

9.
A Clustering-Based Learning Algorithm for the Cascaded RBF-LBF Neural Network   Cited by: 1 (self-citations: 0, external citations: 1)
唐勇智  葛洪伟 《计算机应用》2007,27(12):2916-2918
To improve generalization, a cascaded RBF-LBF neural network composed of a single-layer RBF network and an LBF network is studied, and a learning algorithm for it based on pattern clustering is proposed. The algorithm clusters the inputs of the single-layer RBF network and of the LBF network separately to determine the initial network structure, and then adjusts the classes of misclassified samples so that kernel functions partially overlap or merge. Simulation experiments on the two-spirals problem show that the algorithm indeed has very good generalization ability and needs only a short training time.
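A minimal Python sketch of the Gaussian RBF layer such a cascade builds on; the centers and width would come from the clustering step, and the names here are illustrative:

    import numpy as np

    def rbf_layer(X, centers, sigma):
        # Gaussian RBF activations: one column per center.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * sigma ** 2))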

10.
Neural Network Training Based on Fusing Output-Error and Partial-Derivative-Error Information   Cited by: 2 (self-citations: 0, external citations: 2)
The paper first proposes a measure of the generalization ability of feedforward neural networks and analyzes the main ways of improving it. It then proposes a training strategy that fuses the network's output error with the error of the partial derivatives of the outputs with respect to the inputs, and gives an effective fusion method and the corresponding training algorithm. Application results show that the proposed algorithm significantly improves the generalization ability of the network.
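A minimal Python sketch of the kind of fused objective such a strategy implies; the weighting factor and the finite-difference derivative estimate are assumptions, not the paper's formulation:

    import numpy as np

    def fused_loss(f, X, y_true, dy_true, lam=0.1, h=1e-4):
        # Output-error term plus a penalty on the mismatch of d f / d x,
        # estimated here by central finite differences along each input dimension.
        y_pred = f(X)
        out_err = np.mean((y_pred - y_true) ** 2)
        deriv_err = 0.0
        for j in range(X.shape[1]):
            e = np.zeros(X.shape[1])
            e[j] = h
            dfj = (f(X + e) - f(X - e)) / (2.0 * h)
            deriv_err += np.mean((dfj - dy_true[:, j]) ** 2)
        return out_err + lam * deriv_err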

11.
The BP-SOM architecture and learning rule   Cited by: 3 (self-citations: 0, external citations: 3)
For some problems, the back-propagation learning rule often used for training multilayer feedforward networks appears to have serious limitations. In this paper we describe BP-SOM, an alternative training procedure. In BP-SOM the traditional back-propagation learning rule is combined with unsupervised learning in self-organizing maps. While the multilayer feedforward network is trained, the hidden-unit activations of the feedforward network are used as training material for the accompanying self-organizing maps. After a few training cycles, the maps develop, to a certain extent, self-organization. The information in the maps is used in updating the connection weights of the feedforward network. The effect is that during BP-SOM learning, hidden-unit activations of patterns associated with the same class become more similar to each other. Results on two hard-to-learn classification tasks show that the BP-SOM architecture and learning rule offer a strong alternative for training multilayer feedforward networks with back-propagation.

12.
In this paper, a new multi-output neural model with tunable activation function (TAF) and its general form are presented. It combines both traditional neural model and TAF neural model. Recursive least squares algorithm is used to train a multilayer feedforward neural network with the new multi-output neural model with tunable activation function (MO-TAF). Simulation results show that the MO-TAF-enabled multi-layer feedforward neural network has better capability and performance than the traditional multilayer feedforward neural network and the feedforward neural network with tunable activation functions. In fact, it significantly simplifies the neural network architecture, improves its accuracy and speeds up the convergence rate.

13.
In this paper, we use the approach of adaptive critic design (ACD) for control, specifically, the action-dependent heuristic dynamic programming (ADHDP) method. A least squares support vector machine (SVM) regressor has been used for generating the control actions, while an SVM-based tree-type neural network (NN) is used as the critic. After a failure occurs, the critic and action are retrained in tandem using the failure data. Failure data is binary classification data, where the number of failure states is very small compared to the number of no-failure states. The difficulty of conventional multilayer feedforward NNs in learning this type of classification data has been overcome by using the SVM-based tree-type NN, which, owing to its ability to add neurons to learn misclassified data, can learn any binary classification data without an a priori choice of the number of neurons or the structure of the network. The capability of the trained controller to handle unforeseen situations is demonstrated.

14.
Because of its structural limitations, the traditional two-layer binary bidirectional associative memory (BAM) network suffers from limited storage capacity and insufficient ability to distinguish patterns with small differences or to store non-orthogonal patterns. Extending the structure to three layers is an effective remedy, but learning in a three-layer binary BAM network is difficult, while a three-layer continuous BAM network is inconvenient for handling binary problems. To solve these problems, a binary bidirectional associative memory network with a three-layer structure is proposed; its novelty lies in using the MRII algorithm for binary multilayer feedforward networks to train the three-layer binary BAM network. Experimental results show that the MRII-based three-layer binary BAM network greatly increases storage capacity and pattern-discrimination ability while retaining the particular advantages of binary networks, and thus has high theoretical and practical value.

15.
A neural-network classifier for detecting vascular structures in angiograms was developed. The classifier consisted of a multilayer feedforward network window in which the center pixel was classified using gray-scale information within the window. The network was trained by using the backpropagation algorithm with the momentum term. Based on this image segmentation problem, the effect of changing network configuration on the classification performance was also characterized. Factors including topology, rate parameters, training sample set, and initial weights were systematically analyzed. The training set consisted of 75 selected points from a 256x256 digitized cineangiogram. While different network topologies showed no significant effect on performance, both the learning process and the classification performance were sensitive to the rate parameters. In a comparative study, the network demonstrated its superiority in classification performance. It was also shown that the trained neural-network classifier was equivalent to a generalized matched filter with a nonlinear decision tree.
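For reference, a minimal Python sketch of the back-propagation weight update with a momentum term referred to above (the learning-rate and momentum values are illustrative, not those used in the study):

    import numpy as np

    def momentum_step(w, grad, velocity, lr=0.1, momentum=0.9):
        # Classic back-propagation update with a momentum term:
        # the previous update is carried forward, damped by `momentum`.
        velocity = momentum * velocity - lr * grad
        return w + velocity, velocity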

16.
Classification of radar clutter using neural networks   Cited by: 4 (self-citations: 0, external citations: 4)
A classifier that incorporates both preprocessing and postprocessing procedures as well as a multilayer feedforward network (based on the back-propagation algorithm) in its design to distinguish between several major classes of radar returns including weather, birds, and aircraft is described. The classifier achieves an average classification accuracy of 89% on generalization for data collected during a single scan of the radar antenna. The procedures of feature selection for neural network training, the classifier design considerations, the learning algorithm development, the implementation, and the experimental results of the neural clutter classifier, which is simulated on a Warp systolic computer, are discussed. A comparative evaluation of the multilayer neural network with a traditional Bayes classifier is presented.

17.
A Matlab Implementation of Image Compression Based on a BP Neural Network   Cited by: 3 (self-citations: 0, external citations: 3)
The BP network is currently the most commonly used artificial neural network model; it uses the pattern-transformation ability of a multilayer feedforward network to encode data and thus directly provides data compression. After introducing the principle and algorithm of BP-network image compression, digital image compression is implemented through Matlab simulation experiments, and the influence of various parameters on the quality of the reconstructed image is analyzed.
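A minimal Python sketch of the block-coding setup that BP-network image compression typically starts from (the block size is an assumption; the paper itself works in Matlab):

    import numpy as np

    def image_to_blocks(img, k=4):
        # Split a grayscale image into non-overlapping k x k blocks,
        # each flattened into a row vector; the narrow hidden layer of
        # the BP network then codes each row with fewer units.
        h, w = img.shape
        blocks = (img[:h - h % k, :w - w % k]
                  .reshape(h // k, k, w // k, k)
                  .swapaxes(1, 2)
                  .reshape(-1, k * k))
        return blocks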

18.
A homomorphic feedforward network (HFFN) for nonlinear adaptive filtering is introduced. This is achieved by a two-layer feedforward architecture with an exponential hidden layer and logarithmic preprocessing step. This way, the overall input-output relationship can be seen as a generalized Volterra model, or as a bank of homomorphic filters. Gradient-based learning for this architecture is introduced, together with some practical issues related to the choice of optimal learning parameters and weight initialization. The performance and convergence speed are verified by analysis and extensive simulations. For rigor, the simulations are conducted on artificial and real-life data, and the performances are compared against those obtained by a sigmoidal feedforward network (FFN) with identical topology. The proposed HFFN proved to be a viable alternative to FFNs, especially in the critical case of online learning on small- and medium-scale data sets.
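A minimal Python sketch of the structure described above, a logarithmic preprocessing step followed by an exponential hidden layer and a linear output; the weight shapes and the small epsilon guard are assumptions of the sketch:

    import numpy as np

    def hffn_forward(x, W1, b1, W2, b2, eps=1e-8):
        # Logarithmic preprocessing followed by an exponential hidden layer,
        # so each hidden unit computes a product of powers of the inputs.
        z = np.log(np.abs(x) + eps)   # log preprocessing step
        h = np.exp(W1 @ z + b1)       # exponential hidden layer
        return W2 @ h + b2            # linear output layer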

19.
Probability density functions are estimated by an exponential family of densities based on multilayer feedforward networks. The role of the multilayer feedforward networks, in the proposed estimator, is to approximate the logarithm of the probability density functions. The method of maximum likelihood is used, as the main contribution, to derive an unsupervised backpropagation learning law to estimate the probability density functions. Computer simulation results demonstrating the use of the derived learning law are presented.
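A minimal one-dimensional Python illustration of the estimator's form, with the feedforward network's role played by a generic callable that returns the logarithm of an unnormalized density; the grid-based normalization is an assumption of the sketch:

    import numpy as np

    def density_from_log_net(log_net, grid):
        # p(x) = exp(log_net(x)) / Z, with Z approximated on a 1-D grid
        # by the trapezoidal rule; log_net stands in for the network.
        unnorm = np.exp(log_net(grid))
        Z = np.trapz(unnorm, grid)
        return unnorm / Z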

20.
In practice, the back-propagation algorithm often runs very slowly, and the question naturally arises as to whether there are intrinsic computational difficulties in training neural networks or whether better training algorithms might exist. Two important issues are investigated in this framework. One is establishing a flexible structure for constructing very simple neural networks for multi-input/output systems. The other is how to obtain a learning algorithm that achieves good performance in the training phase. In this paper, a feedforward neural network with flexible bipolar sigmoid functions (FBSFs) is investigated for learning the inverse model of a system. The shape of an FBSF can be changed by adjusting the values of its parameter according to the desired trajectory or the teaching signal. The proposed neural network is trained to learn the inverse dynamic model using back-propagation learning algorithms in which not only the connection weights but also the sigmoid function parameters (SFPs) are adjustable. Feedback-error learning is used as the learning method for the feedforward controller; in this case, the output of a feedback controller is fed to the neural network model. The suggested method is applied to a two-link robotic manipulator control system, configured as a direct controller for the system, to demonstrate the capability of the scheme. The advantages of the proposed structure over other traditional neural network structures are also discussed.
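A minimal Python sketch of a bipolar sigmoid with an adjustable shape parameter, in the spirit of the FBSF idea; the exact parameterization and the name `a` are assumptions, and the point is only that the parameter can be trained alongside the weights:

    import numpy as np

    def bipolar_sigmoid(x, a=1.0):
        # Bipolar sigmoid in (-1, 1); the shape parameter `a` controls the
        # slope and, when made trainable, plays the role of an SFP.
        return (1.0 - np.exp(-a * x)) / (1.0 + np.exp(-a * x))

    def d_bipolar_sigmoid_da(x, a=1.0):
        # Gradient with respect to the shape parameter, needed if `a` is
        # adjusted by back-propagation together with the connection weights.
        s = np.exp(-a * x)
        return 2.0 * x * s / (1.0 + s) ** 2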
