Negative effects of sufficiently small initial weights on back-propagation neural networks
Authors: Yan LIU, Jie YANG, Long LI, Wei WU
Affiliation: 1. School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China; 2. School of Information Science and Engineering, Dalian Polytechnic University, Dalian 116034, China; 3. Department of Mathematics and Computational Science, Hengyang Normal University, Hengyang 421002, China
Abstract: In the training of feedforward neural networks, it is usually suggested that the initial weights be small in magnitude in order to prevent premature saturation. The aim of this paper is to point out the other side of the story: in some cases, the gradient of the error function is zero not only for infinitely large weights but also for zero weights. Slow convergence at the beginning of the training procedure is often the result of initial weights that are too small. We therefore suggest that, in these cases, the initial weights should be neither too large nor too small. For instance, a typical choice of the initial weights might be the range (-0.4, -0.1) ∪ (0.1, 0.4), rather than (-0.1, 0.1) as suggested by the usual strategy. Our theory that medium-sized initial weights should be used also extends to several commonly used transfer functions and error functions. Numerical experiments are carried out to support our theoretical findings.
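The stationary point at the origin described in the abstract is easy to reproduce. Below is a minimal sketch, assuming a one-hidden-layer network with tanh hidden units, a linear output, and squared error; the network size, input data, and helper names (forward, gradients, medium_init) are illustrative assumptions, not the paper's exact experimental setup. Because tanh(0) = 0, the gradients of both layers carry factors that vanish with the weights, so the gradient norm collapses for near-zero initial weights, while medium-sized weights drawn from (-0.4, -0.1) ∪ (0.1, 0.4) keep it bounded away from zero.

import numpy as np

rng = np.random.default_rng(0)

def forward(W, V, x):
    # One-hidden-layer network: linear output over tanh hidden units.
    h = np.tanh(W @ x)
    return V @ h, h

def gradients(W, V, x, t):
    # Gradients of the squared error E = 0.5 * (y - t)**2.
    y, h = forward(W, V, x)
    e = y - t
    grad_V = e * h                              # dE/dV_i = e * h_i
    grad_W = np.outer(e * V * (1.0 - h**2), x)  # dE/dW_ij = e * V_i * tanh'(W x)_i * x_j
    return grad_W, grad_V

def medium_init(shape, lo=0.1, hi=0.4):
    # Sample |w| uniformly from (lo, hi) with a random sign,
    # i.e. from (-0.4, -0.1) U (0.1, 0.4) as the abstract suggests.
    mag = rng.uniform(lo, hi, size=shape)
    sign = rng.choice([-1.0, 1.0], size=shape)
    return sign * mag

x, t = rng.standard_normal(3), 1.0

# Near-zero weights: both gradients vanish as the weights shrink,
# because the origin is a stationary point of the error function.
W0 = rng.uniform(-1e-3, 1e-3, size=(4, 3))
V0 = rng.uniform(-1e-3, 1e-3, size=4)
gW0, gV0 = gradients(W0, V0, x, t)

# Medium-sized weights: the gradient stays bounded away from zero.
W1 = medium_init((4, 3))
V1 = medium_init(4)
gW1, gV1 = gradients(W1, V1, x, t)

print("near-zero init: max|grad| =", max(np.abs(gW0).max(), np.abs(gV0).max()))
print("medium init:    max|grad| =", max(np.abs(gW1).max(), np.abs(gV1).max()))

Running this prints a maximal gradient component on the order of 1e-3 (and 1e-6 for the hidden layer) under the near-zero initialization, versus roughly 0.1 to 0.4 under the medium-sized one, which illustrates why gradient descent started from very small weights crawls at the beginning of training.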
Keywords: Neural networks; Back-propagation; Gradient learning method; Convergence