Strategies for improving neural net generalisation
Authors: Derek Partridge, Niall Griffith
Affiliation: Department of Computer Science, University of Exeter, Exeter EX4 4PT, UK
Abstract: We address the problem of training multilayer perceptrons to instantiate a target function. In particular, we explore the accuracy of the trained network on a test set of previously unseen patterns, that is, the generalisation ability of the trained network. We systematically evaluate alternative strategies designed to improve generalisation performance. The basic idea is to generate a diverse set of networks, each of which is designed to be an implementation of the target function. We then have a set of trained, alternative versions: a version set. The goal is to achieve 'useful diversity' within this set, and thus generate the potential for improved generalisation performance of the set as a whole when compared to the performance of any individual version. We define this notion of 'useful diversity', we define a metric for it, we explore a number of ways of generating it, and we present the results of an empirical study of a number of strategies for exploiting it to achieve maximum generalisation performance. The strategies encompass statistical measures as well as a 'selector net' approach which proves to be particularly promising. The selector net is a form of 'metanet' that operates in conjunction with a version set.
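The combination strategy the abstract describes, training several alternative versions of the same target function and letting the version set answer by majority vote, can be sketched briefly. The Python/NumPy sketch below is illustrative only: the toy circle-classification task, the network sizes, the training schedule, and the closing coincident-failure ratio are assumptions made here for demonstration. The paper defines its own 'useful diversity' metric, and its selector-net strategy is not reproduced.

```python
# Illustrative sketch of a version set combined by majority vote.
# The task, architecture, and diversity proxy below are assumptions,
# not the paper's own definitions.
import numpy as np

rng = np.random.default_rng(0)

# Toy target function: is a 2-D point inside the unit circle?
X = rng.uniform(-1.5, 1.5, (600, 2))
y = (np.linalg.norm(X, axis=1) < 1.0).astype(float)
X_train, y_train = X[:400], y[:400]   # training patterns
X_test, y_test = X[400:], y[400:]     # previously unseen test patterns

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def train_version(seed, hidden=8, epochs=4000, lr=1.0):
    """One 'version': a 2-hidden-1 sigmoid MLP trained by plain
    backpropagation; versions differ only in random initialisation."""
    r = np.random.default_rng(seed)
    W1 = r.normal(0.0, 1.0, (2, hidden)); b1 = np.zeros(hidden)
    W2 = r.normal(0.0, 1.0, (hidden, 1)); b2 = np.zeros(1)
    n = len(y_train)
    for _ in range(epochs):
        h = sigmoid(X_train @ W1 + b1)
        out = sigmoid(h @ W2 + b2).ravel()
        d_out = (out - y_train) * out * (1.0 - out) / n    # output delta (mean gradient)
        d_h = np.outer(d_out, W2.ravel()) * h * (1.0 - h)  # hidden delta
        W2 -= lr * (h.T @ d_out)[:, None]; b2 -= lr * d_out.sum()
        W1 -= lr * (X_train.T @ d_h);      b1 -= lr * d_h.sum(axis=0)
    return W1, b1, W2, b2

def predict(params, X):
    W1, b1, W2, b2 = params
    return (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel() > 0.5).astype(int)

# Build the version set and collect each version's test-set predictions.
versions = [train_version(seed) for seed in range(9)]
preds = np.array([predict(p, X_test) for p in versions])  # (versions, patterns)

# Majority vote: the version set answers as a single classifier.
vote = (preds.mean(axis=0) > 0.5).astype(int)
print("per-version accuracies:", np.round((preds == y_test).mean(axis=1), 3))
print("majority-vote accuracy:", (vote == y_test).mean())

# Crude stand-in for 'useful diversity' (NOT the paper's metric): how
# rarely do all versions fail together, relative to single-version failure?
fails = preds != y_test
p_one = fails.mean()              # P(random version fails on random pattern)
p_all = fails.all(axis=0).mean()  # P(all versions fail on the same pattern)
print("coincident-failure ratio 1 - p_all/p_one:",
      round(1.0 - p_all / p_one, 3) if p_one > 0 else "n/a")
```

Majority voting can only beat the best individual version when the versions' failures do not coincide, which is exactly the property that a 'useful diversity' measure is meant to capture.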
Keywords: Multilayer perceptrons, Backpropagation, Generalisation, Generalisation diversity, Majority vote, Selector-net, Metanet, Version set
This article is indexed by SpringerLink and other databases.