Parallel, self-organizing, hierarchical neural networks with forward-backward training
Authors: S.-W. Deng, O. K. Ersoy
Affiliation: (1) School of Electrical Engineering, Purdue University, West Lafayette, Indiana 47907, USA
Abstract: A forward-backward training algorithm for parallel, self-organizing, hierarchical neural networks (PSHNNs) is described. Using linear algebra, it is shown that the forward-backward training of an n-stage PSHNN until convergence is equivalent to the pseudo-inverse solution for a single total network designed in the least-squares sense, with the total input vector consisting of the actual input vector and its additional nonlinear transformations. These results remain valid when a single long input vector is partitioned into vectors of smaller length. The advantages achieved include small modules for easy and fast learning, parallel implementation of the small modules during testing, a faster convergence rate, better numerical error reduction, and suitability for learning the input nonlinear transformations by other neural networks. The backpropagation (BP) algorithm is proposed for learning the input nonlinearities. Better performance, in terms of a deeper minimum of the error function and a faster convergence rate, is achieved when a single BP network is replaced by a PSHNN of equal complexity in which each stage is a BP network of smaller complexity than the single BP network.
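
To make the stated equivalence concrete, here is a minimal numerical sketch, not taken from the paper: two linear stages are trained alternately, each in the least-squares sense on the residual left by the other, and the result is compared against the pseudo-inverse solution of one total least-squares design over the augmented input. The two-stage setup, the elementwise-square transformation, the sweep count, and all names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy problem: 200 samples, a 5-dimensional actual input X, targets y.
X = rng.normal(size=(200, 5))
y = rng.normal(size=(200, 1))

# One illustrative nonlinear transformation of the input (elementwise
# square); the paper leaves the choice of transformation open.
Phi = X ** 2

# Forward-backward style training of a 2-stage network: each sweep
# re-fits one stage on the residual left by the other stage.
W1 = np.zeros((5, 1))
W2 = np.zeros((5, 1))
for _ in range(100):
    W1, *_ = np.linalg.lstsq(X, y - Phi @ W2, rcond=None)    # forward
    W2, *_ = np.linalg.lstsq(Phi, y - X @ W1, rcond=None)    # backward

# The single "total" network: pseudo-inverse solution over the total
# input vector [X, Phi].
W_total = np.linalg.pinv(np.hstack([X, Phi])) @ y

# At convergence the staged weights match the joint least-squares
# solution, which is the equivalence the abstract describes.
print(np.allclose(np.vstack([W1, W2]), W_total))

The fixed square transform above merely stands in for the input nonlinearities, which, per the abstract, can instead be learned, for example by small backpropagation networks at each stage.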
This article is indexed in SpringerLink and other databases.