Massively Scalable Prototype Learning for Heterogeneous Parallel Computing Architectures
Citation: SU Tonghua, LI Songze, DENG Shengchun, YU Yang, BAI Wei. Massively scalable prototype learning for heterogeneous parallel computing architecture[J]. Journal of Harbin Institute of Technology, 2016, 48(11): 53-60.
Authors: SU Tonghua; LI Songze; DENG Shengchun; YU Yang; BAI Wei
Affiliations: School of Software, Harbin Institute of Technology, Harbin 150001, China (SU, LI, DENG); Dalian Branch, China Construction Eighth Engineering Division Corp., Ltd., Dalian 116021, Liaoning, China (YU); Zhejiang Branch, Nokia Solutions and Networks (Beijing) Co., Ltd., Hangzhou 310053, China (BAI)
Funding: National Natural Science Foundation of China (61203260); Key Program of the Natural Science Foundation of Heilongjiang Province (ZD2015017); Scientific Research and Innovation Fund of Harbin Institute of Technology (HIT.NSRIF.2015083)
Abstract: To remove the compute-intensive bottleneck that current prototype learning algorithms face in large-scale, large-category machine learning and pattern recognition, a scalable prototype learning framework is proposed on a heterogeneous parallel architecture of GPUs and CPUs. First, by decomposing and reorganizing the algorithm's computational tasks, the dense workload is offloaded to the GPU, leaving the CPU only a small amount of flow control. Second, depending on the task type, the framework adaptively chooses between a tiling strategy and a parallel-reduction strategy. The framework is validated on a large-scale handwritten Chinese character sample database: in mini-batch learning mode on a consumer-level GTX 680 card, a speedup of up to 194x is achieved, rising to 638x on a GTX 980; even in the harder-to-accelerate stochastic gradient descent mode, the algorithm still obtains at least a 30x speedup. The framework is highly scalable while preserving recognition accuracy, effectively resolving the computational bottleneck of existing prototype learning.

Keywords: prototype learning; learning vector quantization; handwritten Chinese character recognition; parallel reduction; heterogeneous parallel computing
Received: 2015-05-11

Massively scalable prototype learning for heterogeneous parallel computing architecture
SU Tonghua, LI Songze, DENG Shengchun, YU Yang, BAI Wei. Massively scalable prototype learning for heterogeneous parallel computing architecture[J]. Journal of Harbin Institute of Technology, 2016, 48(11): 53-60.
Authors: SU Tonghua; LI Songze; DENG Shengchun; YU Yang; BAI Wei
Affiliation: School of Software, Harbin Institute of Technology, Harbin 150001, China (SU, LI, DENG); Dalian Branch, China Construction Eighth Engineering Division Corp., Ltd., Dalian 116021, Liaoning, China (YU); Nokia Solutions and Networks, Hangzhou 310053, China (BAI)
Abstract: Current prototype learning algorithms impose an intensive computational burden in large-category machine learning and pattern recognition. To solve this bottleneck, a principled, scalable prototype learning method is proposed based on a heterogeneous parallel computing architecture of GPUs and CPUs. The method transfers the intense workload to the GPU rather than the CPU by splitting and rearranging the computing tasks, so that only a small amount of flow control needs to be managed by the CPU. Meanwhile, the method adaptively chooses between tiling and reduction strategies depending on the workload. Evaluations on a large Chinese character database show that a speedup of up to 194X can be achieved in the mini-batch case on a consumer-level GTX 680 card; with a newer GTX 980 card, the speedup scales up to 638X. Even in the more difficult SGD case, a more than 30-fold speedup is observed. The proposed framework possesses high scalability while preserving recognition precision, and effectively solves the bottleneck problems in prototype learning.
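The workload the abstract describes is the core of learning vector quantization: computing distances from a batch of samples to a large set of class prototypes, finding each sample's winner, and updating prototypes. As a rough illustration of why this maps well to tiled GPU kernels (a CPU sketch under my own naming, not the authors' implementation), the distance stage can be written as a blocked matrix computation:

```python
import numpy as np

def pairwise_sq_dists(x, protos, block=256):
    """Squared Euclidean distances from samples x (n, d) to prototypes (m, d),
    computed tile-by-tile over prototypes. Each tile turns into one dense
    matrix product -- the data reuse that makes GPU tiling profitable."""
    n, m = x.shape[0], protos.shape[0]
    out = np.empty((n, m))
    x_sq = (x * x).sum(axis=1, keepdims=True)              # (n, 1), computed once
    for j in range(0, m, block):
        p = protos[j:j + block]                            # one prototype tile
        # ||x - p||^2 = ||x||^2 - 2 x.p + ||p||^2, expressed as a matmul
        out[:, j:j + block] = x_sq - 2.0 * (x @ p.T) + (p * p).sum(axis=1)
    return out

def lvq1_minibatch_step(protos, proto_labels, x, y, lr=0.1):
    """One LVQ1-style mini-batch step: move each sample's winning prototype
    toward the sample if the class matches, away from it otherwise."""
    d = pairwise_sq_dists(x, protos)
    winners = d.argmin(axis=1)                             # nearest prototype per sample
    for i, w in enumerate(winners):
        sign = 1.0 if proto_labels[w] == y[i] else -1.0
        protos[w] += sign * lr * (x[i] - protos[w])
    return protos, winners
```

On a GPU, the distance tiles and the per-sample argmin are exactly the stages that the tiling and parallel-reduction strategies parallelize; the LVQ1 update rule here is a generic stand-in for the paper's prototype learning rule.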
Keywords:prototype learning  learning vector quantization  Chinese character recognition  parallel reduction  heterogeneous parallel computing
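The "parallel reduction" keyword refers to the tree-structured combining pattern a GPU thread block uses so that, for example, the minimum over m prototype distances takes O(log m) parallel steps instead of m sequential ones. A minimal sequential sketch of the access pattern (illustrative only; the function name is mine, not the paper's):

```python
def tree_argmin(values):
    """Tree reduction for argmin: repeatedly fold the upper half of the array
    onto the lower half, keeping the smaller element of each pair. On a GPU
    every pair in one round is compared by a different thread in parallel."""
    vals = list(values)
    idx = list(range(len(vals)))
    n = len(vals)
    while n > 1:
        half = (n + 1) // 2
        for i in range(n - half):       # independent pairs -> one parallel round
            j = i + half
            if vals[j] < vals[i]:
                vals[i], idx[i] = vals[j], idx[j]
        n = half
    return idx[0], vals[0]
```

Note that with ties the surviving index depends on the folding order, which is why reduction-based argmin on a GPU need not return the first minimum, only a valid one.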
This article is indexed by CNKI and other databases.