
A Fast BP Learning Algorithm Based on Error Amplification
Cite this article: YANG Bo, WANG Ya-Dong, SU Xiao-Hong. A fast BP learning algorithm based on error amplification [J]. Journal of Computer Research and Development, 2004, 41(5): 774-779.
Authors: YANG Bo  WANG Ya-Dong  SU Xiao-Hong
Affiliation: School of Computer Science, Harbin Institute of Technology, Harbin 150001, China
Funding: National Natural Science Foundation of China (69975005, 60273083)
Abstract: BP learning algorithms based on gradient descent tend to slow down markedly when training enters the saturation regions of the sigmoid function. To eliminate the influence of these saturation regions on the later stage of training, a new fast BP learning algorithm based on error amplification is proposed. By adaptively amplifying the error term in the weight-update function, the algorithm keeps weight updates from stalling in saturation regions, so that BP learning converges quickly to the desired accuracy. Simulation experiments on the 3-parity problem and the Soybean classification problem show that, compared with commonly used methods such as Delta-bar-Delta, the momentum method, and Prime Offset, the proposed method converges to the target accuracy faster without increasing algorithmic complexity or extra CPU time.

Keywords: back propagation  multi-layer artificial neural network  error amplification  saturation region  parity problem  Soybean data set

An Algorithm for Fast Convergence of Back Propagation by Enlarging Error
YANG Bo, WANG Ya-Dong, and SU Xiao-Hong. An Algorithm for Fast Convergence of Back Propagation by Enlarging Error [J]. Journal of Computer Research and Development, 2004, 41(5): 774-779.
Authors: YANG Bo  WANG Ya-Dong  SU Xiao-Hong
Abstract: A back propagation neural network based on enlarging error is proposed for improving the learning speed of multi-layer artificial neural networks with sigmoid activation functions. It deals with the flat spots that play a significant role in the slow convergence of back propagation (BP). The advantages of the proposed algorithm are that it is easy to implement and converges with minimal mean square error. It updates the weights of the neural network effectively by enlarging the error term of each output unit, and keeps a high learning rate so as to meet the convergence criteria quickly. Experiments on well-established benchmarks, such as the 3-parity and Soybean data sets, show that the algorithm learns more efficiently than existing algorithms such as the Delta-bar-Delta algorithm, the momentum algorithm, and the Prime Offset algorithm, and that it is less computationally intensive and requires less memory than the Levenberg-Marquardt (LM) method.
Keywords: back propagation  multi-layer artificial neural network  enlarging error  flat spots  parity problem  soybean data set
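The abstract describes amplifying the error term of each output unit so that weight updates do not stall where the sigmoid derivative is near zero. A minimal sketch of that idea on the 3-parity benchmark, assuming a simple amplification gain — the paper's exact amplification schedule is not reproduced here, so `train_3parity` and its `gain` formula are illustrative only:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_3parity(epochs=3000, lr=0.1, hidden=8, seed=1):
    """Batch BP on 3-bit parity with an (assumed) error-amplification term."""
    rng = np.random.default_rng(seed)
    # All 8 binary inputs; target is 1 when the number of ones is odd.
    X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)],
                 dtype=float)
    T = (X.sum(axis=1) % 2).reshape(-1, 1)
    W1 = rng.normal(0, 1, (3, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)
    mse_history = []
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)            # hidden activations
        O = sigmoid(H @ W2 + b2)            # network output
        err = T - O
        mse_history.append(float((err ** 2).mean()))
        # Illustrative error amplification: inflate the output-layer error
        # where the sigmoid derivative O*(1-O) is small (a flat spot) but the
        # error is still large, so the weight update does not vanish there.
        gain = 1.0 + np.abs(err) / (O * (1.0 - O) + 0.25)
        d_o = gain * err * O * (1.0 - O)    # amplified output delta
        d_h = (d_o @ W2.T) * H * (1.0 - H)  # standard hidden delta
        W2 += lr * H.T @ d_o; b2 += lr * d_o.sum(axis=0)
        W1 += lr * X.T @ d_h; b1 += lr * d_h.sum(axis=0)
    return mse_history[0], mse_history[-1]
```

With plain BP, `gain` would be identically 1; the extra factor only matters for saturated-but-wrong units, which is exactly where standard gradient descent slows down.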
This article is indexed by CNKI, VIP, Wanfang Data, and other databases.