Convergence analysis of convex incremental neural networks
Authors: Lei Chen (dcschen@nus.edu.sg), Hung Keng Pung
Affiliation: (1) Network Systems and Service Lab., Department of Computer Science, National University of Singapore, Kent Ridge, Singapore
Abstract: Recently, a convex incremental algorithm (CI-ELM) was proposed in Huang and Chen (Neurocomputing 70:3056–3062, 2007); it randomly chooses hidden neurons and then analytically determines the output weights connecting the hidden layer and the output layer. Although the hidden neurons are generated randomly, the network constructed by CI-ELM still satisfies the universal approximation property. This random-approximation theory breaks through the limitation of most conventional theories by eliminating the need to tune hidden neurons. However, because of this randomness, some neurons contribute little to decreasing the residual error, which eventually increases the complexity and computational cost of the network; consequently, CI-ELM cannot precisely state its convergence rate. Based on Lee's results (Lee et al., IEEE Trans Inf Theory 42(6):2118–2132, 1996), we first show the convergence rate of a maximum CI-ELM and then systematically analyze the convergence rate of an enhanced CI-ELM. Unlike CI-ELM, these two algorithms choose their hidden neurons by the maximum or optimality principle while keeping the same network structure as CI-ELM. The proofs further demonstrate that our algorithms achieve smaller residual errors than CI-ELM. Because the proposed networks discard these "useless" neurons, they improve the efficiency of neural networks. Experimental results on benchmark regression problems support these conclusions.
This work was funded by the Singapore MOE AcRF Tier 1 grant, WBS No. R 252-000-221-112.
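The convex incremental update the abstract describes is simple enough to sketch. Below is a minimal, illustrative Python/NumPy sketch (not the authors' code), assuming sigmoid additive hidden nodes and a toy 1-D regression target: each step draws random hidden neurons, computes the step size beta_n that minimizes the new residual of the convex update f_n = (1 − beta_n) f_{n−1} + beta_n h_n, and rescales all earlier output weights by (1 − beta_n). Setting n_candidates > 1 imitates the paper's maximum/optimality principle by keeping only the candidate neuron that most reduces the residual. All names (ci_elm_fit, n_candidates, the sinc target) are illustrative assumptions, and beta_n here is the unconstrained least-squares optimum.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ci_elm_fit(X, y, n_hidden=100, n_candidates=1):
    """Convex incremental fit: one hidden neuron is added per step.

    n_candidates == 1 mimics plain CI-ELM (purely random neurons);
    n_candidates > 1 imitates the maximum principle by keeping only
    the candidate that most reduces the residual error.
    """
    n, d = X.shape
    f = np.zeros(n)          # current network output on the training set
    params = []              # (a, b, beta) per accepted hidden neuron
    for _ in range(n_hidden):
        e = y - f            # residual of the current network
        best = None
        for _ in range(n_candidates):
            a = rng.uniform(-1, 1, d)      # random input weights
            b = rng.uniform(-1, 1)         # random bias
            h = sigmoid(X @ a + b)         # candidate neuron's outputs
            g = h - f
            denom = g @ g
            if denom < 1e-12:
                continue
            # least-squares beta for f_new = (1 - beta) f + beta h,
            # i.e. e_new = e - beta (h - f)
            beta = (e @ g) / denom
            err = np.sum((e - beta * g) ** 2)
            if best is None or err < best[0]:
                best = (err, a, b, beta, h)
        if best is None:
            continue
        _, a, b, beta, h = best
        # convex update rescales every earlier output weight by (1 - beta)
        params = [(pa, pb, pbeta * (1 - beta)) for (pa, pb, pbeta) in params]
        params.append((a, b, beta))
        f = (1 - beta) * f + beta * h
    return params

def ci_elm_predict(params, X):
    out = np.zeros(len(X))
    for a, b, beta in params:
        out += beta * sigmoid(X @ a + b)
    return out

# toy usage: approximate a sinc target; raising n_candidates should
# lower the residual for the same number of hidden neurons
X = rng.uniform(-3, 3, (200, 1))
y = np.sinc(X[:, 0])
for k in (1, 10):
    model = ci_elm_fit(X, y, n_hidden=100, n_candidates=k)
    rmse = np.sqrt(np.mean((ci_elm_predict(model, X) - y) ** 2))
    print(f"n_candidates={k}: train RMSE = {rmse:.4f}")
```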
Keywords: Feedforward neural networks, Universal approximation, Convergence rate, Generalization performance
This article is indexed in SpringerLink and other databases.