
CUDA-Based Fast GMM Model Training Method and Its Application
Cite this article: Wu Kui, Song Yan, Dai LiRong. CUDA-Based Fast GMM Model Training Method and Its Application [J]. Journal of Data Acquisition & Processing, 2012, 27(1): 85-90.
Authors: Wu Kui  Song Yan  Dai LiRong
Affiliation: Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei 230027, China
Abstract: Because it can closely approximate any distribution, the Gaussian mixture model (GMM) is widely used in the field of pattern recognition. GMM parameters are usually estimated with the iterative expectation-maximization (EM) algorithm, which requires very long training times when the amount of training data and the number of mixture components are large. The compute unified device architecture (CUDA) technology introduced by NVIDIA enables fast, large-scale parallel computation by running many threads concurrently on a graphics processing unit (GPU). This paper proposes a CUDA-based fast GMM training method suited to very large data sets, including a fast implementation of the K-means algorithm for model initialization and a fast implementation of the EM algorithm for parameter estimation. The method is also applied to training language GMMs. Experimental results show that, compared with a single core of an Intel Dual-Core Pentium IV 3.0 GHz CPU, language GMM training on an NVIDIA GTS250 GPU is about 26 times faster.

Keywords: Gaussian mixture model  language identification  graphics processing unit  compute unified device architecture
Received: 2010-11-13
Revised: 2011-06-12

CUDA-Based Fast GMM Model Training Method and Its Application
Wu Kui, Song Yan and Dai LiRong. CUDA-Based Fast GMM Model Training Method and Its Application [J]. Journal of Data Acquisition & Processing, 2012, 27(1): 85-90.
Authors: Wu Kui  Song Yan  Dai LiRong
Affiliation: Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei 230027, China
Abstract: Due to its ability to approximate any distribution, the Gaussian mixture model (GMM) is widely applied in the field of pattern recognition. Usually, the iterative expectation-maximization (EM) algorithm is applied to GMM parameter estimation, and the computational cost of model training becomes extremely high when large amounts of training data and a large number of mixture components are involved. The compute unified device architecture (CUDA) technology provided by NVIDIA Corporation can perform fast parallel computation by running thousands of threads simultaneously on a graphics processing unit (GPU). A fast GMM training implementation using CUDA is presented, which is especially applicable to large amounts of training data. The fast training implementation contains two parts: the K-means algorithm for model initialization and the EM algorithm for parameter estimation. Furthermore, the fast training method is applied to language GMM training. Experimental results show that language model training on an NVIDIA GTS250 GPU is about 26 times faster than the traditional implementation on a single core of an Intel Dual-Core Pentium IV 3.0 GHz CPU.
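The training pipeline the abstract describes, K-means initialization followed by EM re-estimation, can be illustrated with a minimal single-threaded sketch. This is not the paper's CUDA implementation (which parallelizes the per-sample computations across GPU threads); it is a hypothetical 1-D reference version showing the two stages and why they parallelize well: in both the K-means assignment step and the EM E-step, each training sample is processed independently.

```python
import math
import random

def kmeans_init(data, k, iters=10):
    """Crude 1-D K-means giving initial GMM weights/means/variances (sketch)."""
    random.seed(0)
    centers = random.sample(data, k)
    for _ in range(iters):
        # Assignment step: each sample independently picks its nearest center
        clusters = [[] for _ in range(k)]
        for x in data:
            j = min(range(k), key=lambda c: (x - centers[c]) ** 2)
            clusters[j].append(x)
        # Update step: recompute each center from its assigned samples
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    weights = [max(len(c), 1) / len(data) for c in clusters]
    variances = [max(sum((x - centers[i]) ** 2 for x in c) / len(c), 1e-3)
                 if c else 1.0 for i, c in enumerate(clusters)]
    return weights, centers, variances

def em_gmm(data, k, iters=20):
    """EM for a 1-D GMM initialized by K-means.

    Each E-step posterior depends only on one sample, which is the
    per-sample independence that GPU parallelization exploits.
    """
    w, mu, var = kmeans_init(data, k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        resp = []
        for x in data:
            p = [w[j] / math.sqrt(2 * math.pi * var[j])
                 * math.exp(-(x - mu[j]) ** 2 / (2 * var[j]))
                 for j in range(k)]
            s = sum(p)
            resp.append([pj / s for pj in p])
        # M-step: re-estimate weights, means, variances from soft counts
        for j in range(k):
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(data)
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var[j] = max(sum(r[j] * (x - mu[j]) ** 2
                             for r, x in zip(resp, data)) / nj, 1e-3)
    return w, mu, var
```

In a CUDA version, the inner per-sample loops (nearest-center search and posterior computation) become one GPU thread per sample, followed by parallel reductions to form the soft-count sums; the function and variable names here are illustrative, not taken from the paper.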
Keywords: Gaussian mixture model (GMM)  language identification  graphics processing unit (GPU)  compute unified device architecture (CUDA)