Using the BSP cost model to optimise parallel neural network training
Authors: R. O. Rogers, D. B. Skillicorn
Affiliation: Department of Computing and Information Science, Queen's University, Kingston, Canada K7L 3N6
Abstract: We derive cost formulae for three different parallelisation techniques for training both supervised and unsupervised networks. These formulae are parameterised by properties of the target computer architecture, so it is possible to decide both which technique is best for a given parallel computer and which parallel computer best suits a given technique. One technique, exemplar parallelism, proves far superior on almost all parallel computer architectures. The formulae also take into account the use of optimal batch learning as the overall training approach. Cost predictions are made for several of today's popular parallel computers.
Keywords: Data mining; Neural networks; Parallelism; Bulk synchronous parallelism (BSP); Cost analysis; Supervised learning; Unsupervised learning; Batch learning; Deterministic learning; Stochastic learning
This article is indexed in ScienceDirect and other databases.
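
The central tool here is the BSP cost model, which charges each superstep w + h·g + l: local computation w, an h-relation of communication at per-word gap g, and barrier synchronisation latency l. As a minimal sketch of how such a formula is applied to exemplar parallelism (the paper's actual formulae are not reproduced; the constant c and all sample values of g, l, and problem sizes below are hypothetical), the following Python fragment shows why this technique scales well: the communication term depends only on the number of weights, not on the number of training examples.

# Illustrative BSP costing for exemplar-parallel batch training.
# Parameter names and values are hypothetical stand-ins, not the
# paper's published formulae or measured machine constants.

def bsp_superstep_cost(w, h, g, l):
    """BSP cost of one superstep: local work w, h-relation size h,
    per-word communication gap g, barrier latency l."""
    return w + h * g + l

def exemplar_parallel_epoch(n_examples, n_weights, p, g, l, c=2.0):
    """Cost of one batch epoch with examples split across p processors.

    Each processor evaluates its n_examples/p exemplars locally
    (roughly c flops per weight per example), then all processors
    combine a weight update of size n_weights before the barrier.
    """
    local_work = c * (n_examples / p) * n_weights
    return bsp_superstep_cost(local_work, n_weights, g, l)

if __name__ == "__main__":
    # Hypothetical machine parameters; in BSP practice g and l are
    # measured per target architecture.
    for p in (1, 4, 16, 64):
        cost = exemplar_parallel_epoch(
            n_examples=10_000, n_weights=5_000, p=p, g=4.0, l=50_000.0)
        print(f"p={p:3d}  epoch cost ~ {cost:,.0f} flop-time units")

Running the sketch, the epoch cost falls roughly as 1/p until the fixed communication term n_weights·g + l dominates; locating that architecture-dependent crossover is exactly the kind of decision the parameterised cost formulae support.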