CUDA Technology Based Recognition Algorithm of Convolutional Neural Networks

Citation: ZHANG Jia-kang, CHEN Qing-kui. CUDA Technology Based Recognition Algorithm of Convolutional Neural Networks[J]. Computer Engineering, 2010, 36(15): 179-181.
Authors: ZHANG Jia-kang, CHEN Qing-kui
Affiliation: School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
Funding: National Natural Science Foundation of China; Shanghai Municipal Education Commission Development Fund; Shanghai Municipal Education Commission Scientific Research and Innovation Key Project; Shanghai Leading Academic Discipline Project

Abstract: To address the question of whether the Graphics Processing Unit (GPU), a stream processor with high floating-point performance, is suitable for neural networks, this paper proposes a parallel recognition algorithm for Convolutional Neural Networks (CNNs). The algorithm adopts Compute Unified Device Architecture (CUDA) technology, defines parallel data structures on it, and describes the mechanism for mapping computing tasks onto CUDA. Experimental results show that the parallel recognition algorithm implemented on a GPU with the GTX200 hardware architecture achieves a peak average floating-point throughput nearly 60 times that of the serial algorithm on a CPU, indicating that stream-processor-based GPUs are better suited than CPUs to neural network applications.

Keywords: stream processor; Single-Instruction Multiple-Thread (SIMT); GTX200 hardware architecture; Compute Unified Device Architecture (CUDA) technology; Convolutional Neural Networks (CNNs)
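The abstract describes mapping CNN computing tasks onto CUDA threads. As an illustration of why convolution parallelizes so well (this is a minimal sketch, not the paper's code), each output element of a convolution feature map depends only on a small input window, so every element can be computed independently; on a GPU, each call to the hypothetical `conv_at` below would correspond to one CUDA thread.

```python
def conv_at(image, kernel, y, x):
    """Compute one output element of a 'valid' 2D convolution.

    This plays the role of a single CUDA thread: it reads only the
    input window it needs and writes one independent result.
    """
    kh, kw = len(kernel), len(kernel[0])
    return sum(image[y + i][x + j] * kernel[i][j]
               for i in range(kh) for j in range(kw))

def conv2d(image, kernel):
    """Serial driver: on a GPU these (oh * ow) calls would each run
    as an independent thread in a 2D grid."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    return [[conv_at(image, kernel, y, x) for x in range(ow)]
            for y in range(oh)]

# Example: a 3x3 image and a 2x2 kernel give a 2x2 feature map.
image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
kernel = [[1, 0],
          [0, 1]]
print(conv2d(image, kernel))  # [[6, 8], [12, 14]]
```

Because no output element depends on another, the mapping to CUDA is direct: one thread per feature-map element, grouped into blocks, which is the kind of task-to-thread mapping the paper's algorithm defines.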
This article is indexed in the VIP and Wanfang Data databases.
