Neural network constructive algorithms: Trading generalization for learning efficiency?
Authors:F. J. Śmieja
Affiliation:(1) German National Research Centre for Computer Science (GMD), Schloß Birlinghoven, 5205 St. Augustin 1, Germany
Abstract:There are currently several types of constructive (or growth) algorithms available for training a feed-forward neural network. This paper describes and explains the main ones, using a fundamental approach to the problem-solving mechanisms of the multi-layer perceptron. The claimed convergence properties of the algorithms are verified using just two mapping theorems, which in turn enables all of the algorithms to be unified under a single basic mechanism. The algorithms are compared and contrasted, and the deficiencies of some are highlighted. The fundamental reasons for the actual success of these algorithms are extracted and used to suggest where they might most fruitfully be applied. The suspicion that they are not a panacea for all current neural network difficulties, and that one must somewhere along the line pay for the learning efficiency they promise, is developed into an argument that their generalization abilities will lie, on average, below those of back-propagation.
Funding:German Ministry of Research and Technology, grant number 01 IN 111 A/4.
Keywords:
This article is indexed by SpringerLink and other databases.
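
The abstract refers to constructive (growth) training schemes only in general terms. As a purely illustrative aid, the sketch below shows the generic growth idea for a single-hidden-layer perceptron: train by gradient descent, measure the residual training error, and append a freshly initialised hidden unit whenever the error stays above a threshold. This is a minimal sketch under those assumptions; it does not correspond to any particular published constructive algorithm or to the specific algorithms analysed in the paper.

```python
# Illustrative sketch of a generic constructive ("growth") training loop,
# assuming a single-hidden-layer perceptron trained by plain gradient descent.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(X, W1, b1, W2, b2):
    H = sigmoid(X @ W1 + b1)      # hidden activations, shape (n_samples, n_hidden)
    y = sigmoid(H @ W2 + b2)      # scalar output per sample
    return H, y

def train_epochs(X, t, W1, b1, W2, b2, epochs=3000, lr=1.0):
    """Plain batch gradient descent on the squared error."""
    for _ in range(epochs):
        H, y = forward(X, W1, b1, W2, b2)
        delta2 = (y - t) * y * (1 - y)                  # output-layer error signal
        dW2 = H.T @ delta2
        db2 = delta2.sum()
        deltaH = np.outer(delta2, W2) * H * (1 - H)     # hidden-layer error signal
        dW1 = X.T @ deltaH
        db1 = deltaH.sum(axis=0)
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1
    return W1, b1, W2, b2

# Toy mapping (XOR) used only to exercise the growth loop.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 1, 1, 0], dtype=float)

# Start with one hidden unit and grow until the training error is small enough.
n_hidden = 1
W1 = rng.normal(scale=0.5, size=(2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=n_hidden);      b2 = 0.0

for _ in range(6):
    W1, b1, W2, b2 = train_epochs(X, t, W1, b1, W2, b2)
    _, y = forward(X, W1, b1, W2, b2)
    mse = np.mean((y - t) ** 2)
    print(f"hidden units = {n_hidden}, training MSE = {mse:.4f}")
    if mse < 0.01:
        break
    # Growth step: append a freshly initialised hidden unit and continue training.
    W1 = np.hstack([W1, rng.normal(scale=0.5, size=(2, 1))])
    b1 = np.append(b1, 0.0)
    W2 = np.append(W2, rng.normal(scale=0.5))
    n_hidden += 1
```

The sketch grows capacity only when training demands it, which illustrates the learning-efficiency appeal discussed in the abstract; how such grown networks generalize relative to a fixed back-propagation network is exactly the question the paper examines.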