A parallel implementation of the backward error propagation neural network training algorithm: experiments in event identification.
Authors: D F Sittig, J A Orr
Affiliation: Center for Medical Informatics, Yale University School of Medicine, New Haven, Connecticut 06510.
Abstract: An artificial neural network (ANN)-based event detection and alarm generation system has been developed to aid clinicians in the identification of critical events commonly occurring in the anesthesia breathing circuit. To detect breathing circuit problems, the system monitored CO2 gas concentration, gas flow, and airway pressure. Various parameters were extracted from each of these input waveforms and fed into an artificial neural network. To develop truly robust ANNs, investigators are required to train their networks on large training data sets, requiring enormous computing power. We implemented a parallel version of the backward error propagation neural network training algorithm in the widely portable parallel programming language C-Linda. A maximum speedup of 4.06 was obtained with six processors. This speedup represents a reduction in total run time from 6.4 to 1.5 h. By reducing the total run time of the computation through parallelism, we were able to optimize many of the neural network's initial parameters. We conclude that use of the master-worker model of parallel computation is an excellent method for speeding up the backward error propagation neural network training algorithm.
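
The abstract describes a master-worker decomposition of batch backpropagation: workers compute error gradients over shards of the training set, and the master collects the partial gradients and applies the weight update. The following is a minimal illustrative sketch of that pattern in plain C with POSIX threads, not the authors' C-Linda implementation; the constants, the synthetic data, and the reduction of the network to a single sigmoid unit are hypothetical simplifications for brevity.

/*
 * Sketch of master-worker data-parallel gradient computation.
 * NOT the authors' C-Linda code; all names, sizes, and data are
 * hypothetical.  Each worker accumulates the batch gradient over
 * its shard; the master sums the partials and updates the weights.
 */
#include <math.h>
#include <pthread.h>
#include <stdio.h>

#define N_WORKERS  4      /* hypothetical number of worker threads  */
#define N_SAMPLES  800    /* hypothetical training-set size         */
#define N_INPUTS   3      /* e.g. CO2, gas flow, airway pressure    */
#define N_EPOCHS   500
#define LEARN_RATE 0.1

static double x[N_SAMPLES][N_INPUTS];   /* input patterns            */
static double t[N_SAMPLES];             /* target outputs (0 or 1)   */
static double w[N_INPUTS + 1];          /* weights plus bias         */

typedef struct {
    int    first, last;                 /* shard of the training set */
    double grad[N_INPUTS + 1];          /* partial gradient          */
} worker_task;

static double sigmoid(double a) { return 1.0 / (1.0 + exp(-a)); }

/* Worker: accumulate the error gradient over one shard of the data. */
static void *worker(void *arg)
{
    worker_task *task = arg;
    for (int j = 0; j <= N_INPUTS; ++j)
        task->grad[j] = 0.0;

    for (int i = task->first; i < task->last; ++i) {
        double a = w[N_INPUTS];                      /* bias term */
        for (int j = 0; j < N_INPUTS; ++j)
            a += w[j] * x[i][j];
        double y     = sigmoid(a);
        double delta = (y - t[i]) * y * (1.0 - y);   /* backprop error */
        for (int j = 0; j < N_INPUTS; ++j)
            task->grad[j] += delta * x[i][j];
        task->grad[N_INPUTS] += delta;
    }
    return NULL;
}

int main(void)
{
    /* Hypothetical synthetic data: class depends on the first input. */
    for (int i = 0; i < N_SAMPLES; ++i) {
        for (int j = 0; j < N_INPUTS; ++j)
            x[i][j] = (double)((i * 31 + j * 17) % 100) / 100.0;
        t[i] = x[i][0] > 0.5 ? 1.0 : 0.0;
    }

    for (int epoch = 0; epoch < N_EPOCHS; ++epoch) {
        pthread_t   tid[N_WORKERS];
        worker_task task[N_WORKERS];
        int shard = N_SAMPLES / N_WORKERS;

        /* Master: hand each worker one shard of the training set. */
        for (int k = 0; k < N_WORKERS; ++k) {
            task[k].first = k * shard;
            task[k].last  = (k == N_WORKERS - 1) ? N_SAMPLES : (k + 1) * shard;
            pthread_create(&tid[k], NULL, worker, &task[k]);
        }

        /* Master: collect partial gradients and apply the batch update. */
        for (int k = 0; k < N_WORKERS; ++k) {
            pthread_join(tid[k], NULL);
            for (int j = 0; j <= N_INPUTS; ++j)
                w[j] -= LEARN_RATE * task[k].grad[j];
        }
    }

    printf("trained weights:");
    for (int j = 0; j <= N_INPUTS; ++j)
        printf(" %.3f", w[j]);
    printf("\n");
    return 0;
}

In a C-Linda version of the same pattern, the workers would presumably exchange weights, training shards, and partial gradients through the shared tuple space (Linda's out/in/rd operations) rather than through shared memory and thread joins, but the division of labor between master and workers is the same.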
This article is indexed in ScienceDirect and other databases.