Convergence analysis of BP neural networks via sparse response regularization
Affiliation:1. College of Science, China University of Petroleum, Qingdao 266580, China;2. School of Engineering, Sun Yat-sen University, Guangzhou 510275, China;3. School of Mathematics and Statistics, Lanzhou University, Lanzhou 730000, China;4. Department of Electrical and Computer Engineering, University of Louisville, Louisville, KY 40292, USA;5. Information Technology Institute, University of Social Sciences, Łódź 90-113, Poland
Abstract:The backpropagation (BP) algorithm is the standard strategy for training feedforward neural networks (FNNs), and gradient descent is the numerical optimization method most commonly used to implement it. However, this technique frequently leads to poor generalization and slow convergence. Inspired by the sparse-response character of the human neuronal system, several sparse-response BP algorithms have been developed that effectively improve generalization performance. The essential idea is to impose the responses of the hidden layer as a specific L1 penalty term on the standard error function of the FNN. In this paper, we focus on the two remaining challenges: the first is to resolve the non-differentiability of the L1 penalty term by introducing smooth approximation functions; the second is to provide a rigorous convergence analysis for this sparse-response BP algorithm. In addition, an illustrative numerical simulation is presented to support the theoretical results.
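To make the idea in the abstract concrete, the sketch below trains a one-hidden-layer FNN by gradient-descent BP with a smoothed L1 penalty on the hidden-layer responses. This is a minimal illustration, not the authors' code: the smoothing function h(t) = sqrt(t^2 + eps), the sigmoid activations, the penalty weight lam, and the function name train_sparse_bp are all illustrative assumptions; the paper's actual smooth approximation and network setup may differ.

```python
# Minimal sketch (assumed, not the authors' implementation) of sparse-response BP:
# squared error plus a smoothed L1 penalty on the hidden responses,
# trained by plain gradient descent.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_sparse_bp(X, y, n_hidden=10, lam=1e-3, eps=1e-4, lr=0.1, epochs=500, seed=0):
    """X: (n_samples, n_features); y: (n_samples,) targets in [0, 1]."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(d, n_hidden))   # input-to-hidden weights
    v = rng.normal(scale=0.1, size=n_hidden)        # hidden-to-output weights
    for _ in range(epochs):
        H = sigmoid(X @ W)                  # hidden responses, shape (n, n_hidden)
        out = sigmoid(H @ v)                # network output, shape (n,)
        err = out - y
        # Smoothed L1 penalty on hidden responses: sum sqrt(H^2 + eps),
        # a differentiable surrogate for |H|; its gradient is H / sqrt(H^2 + eps).
        dpen_dH = H / np.sqrt(H**2 + eps)
        # Backpropagate the squared error plus the penalty term.
        d_out = err * out * (1.0 - out)                    # (n,)
        grad_v = H.T @ d_out / n
        dE_dH = np.outer(d_out, v) + lam * dpen_dH         # (n, n_hidden)
        d_hid = dE_dH * H * (1.0 - H)
        grad_W = X.T @ d_hid / n
        v -= lr * grad_v
        W -= lr * grad_W
    return W, v
```

Because sqrt(t^2 + eps) approaches |t| as eps shrinks, the penalty still drives hidden responses toward sparsity while keeping the error function differentiable everywhere, which is what permits a standard gradient-descent convergence analysis.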
Keywords:Backpropagation  Neural networks  Gradient descent  Non-differential  Convergence
This article is indexed in ScienceDirect and other databases.