
Analysis and Improvement on Convergence of BP Algorithm
Cite this article: JIANG Lei, LI Xin. Analysis and Improvement on Convergence of BP Algorithm [J]. Computer Era, 2010(12): 29-30.
Authors: JIANG Lei, LI Xin
Affiliation: Chongqing Vocational Institute of Engineering, Chongqing 400037, China
Abstract: In training a standard BP neural network, the error function serves as the basis for weight adjustment and the weights are updated with a fixed learning rate, which often makes the network learn too slowly or even fail to converge. To address this, from the perspective of the stability and speed of network convergence, this paper analyzes the error function and the weight-update function, discusses in detail the role of the learning rate in the algorithm, and proposes a method that dynamically adjusts the learning rate according to the change in error. The method is simple and practical; it effectively prevents divergence during network training and improves the convergence speed and stability of the network.

Keywords: BP algorithm  error change rate  learning rate  convergence speed

Analysis and Improvement on Convergence of BP Algorithm
JIANG Lei, LI Xin. Analysis and Improvement on Convergence of BP Algorithm [J]. Computer Era, 2010(12): 29-30.
Authors:JIANG Lei  LI Xin
Affiliation: Chongqing Vocational Institute of Engineering, Chongqing 400037, China
Abstract: In the training of a standard BP neural network, the weights are adjusted according to the error function and calculated with a fixed learning rate, which often makes the network learn too slowly or even fail to converge. To address this, starting from the stability and speed of network convergence, we analyze the error function and the weight-update function, discuss the specific effect of the learning rate in the algorithm, and present a method to adjust the learning rate dynamically according to the change in error. The method is simple and practical; it effectively prevents divergence during network training and improves the convergence speed and stability of the network.
Keywords:BP algorithm  error changing rate  learning rate  convergence speed
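The abstract describes adjusting the learning rate dynamically according to the change in error, but the paper's exact update rule is not reproduced on this page. A minimal sketch of one common error-driven scheme (grow the rate when the error falls; shrink it and reject the step when the error rises) might look like the following, where the function names and the factors 1.05/0.7 are illustrative assumptions, not the authors' published values:

```python
def train_adaptive_lr(w, grad_fn, loss_fn, lr=0.1, up=1.05, down=0.7, epochs=100):
    """Gradient descent whose learning rate tracks the change in error.

    After each tentative step: if the error decreased, accept the step and
    grow the rate; if the error increased, reject the step and shrink the rate.
    """
    prev_loss = loss_fn(w)
    for _ in range(epochs):
        w_new = w - lr * grad_fn(w)      # tentative weight update
        loss = loss_fn(w_new)
        if loss < prev_loss:             # error fell: accept step, speed up
            w, prev_loss = w_new, loss
            lr *= up
        else:                            # error rose: reject step, damp rate
            lr *= down
    return w, lr

# Toy 1-D example: minimize E(w) = w^2, whose gradient is 2w.
w_final, lr_final = train_adaptive_lr(5.0, lambda w: 2 * w, lambda w: w * w)
```

A fixed learning rate large enough to be fast can overshoot and diverge, while a safely small one is slow; letting the rate respond to whether the error rose or fell keeps training stable while still accelerating on smooth stretches, which is the trade-off the abstract targets.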
This article is indexed in the CQVIP and Wanfang Data databases, among others.
