Implementation and Performance Evaluation of Recommender Algorithms Based on Multi-/Many-core Platforms
Cite this article: CHEN Jing, FANG Jian-bin, TANG Tao, YANG Can-qun. Implementation and Performance Evaluation of Recommender Algorithms Based on Multi-/Many-core Platforms[J]. Computer Science, 2017, 44(10): 71-74.
Authors: CHEN Jing  FANG Jian-bin  TANG Tao  YANG Can-qun
Affiliation: College of Computer, National University of Defense Technology, Changsha 410073, China
Funding: Supported by the National Natural Science Foundation of China (61170049, 61402488, 61602501) and the National 863 Program (2015AA01A301)
Abstract: We designed and implemented two classic recommender-system algorithms, alternating least squares (ALS) and cyclic coordinate descent (CCD), in the OpenCL standard. We deployed them on CPU, GPU, and MIC multi-/many-core platforms and investigated the factors affecting performance there: the latent feature dimension and the number of threads. We also compared the OpenCL implementations of the two algorithms with CUDA and OpenMP implementations, reaching several conclusions. Under the same conditions, CCD achieves higher accuracy and converges faster and more stably than ALS, but takes more time overall. The OpenCL implementations of ALS and CCD perform no worse than the CUDA implementations (speedups of 1.03x for CCD and 1.2x for ALS) or the OpenMP implementations (speedups of roughly 1.6-1.7x for both), and both algorithms perform better on the CPU than on the GPU and the MIC.

Keywords: Recommender system  OpenCL  ALS  CCD
Received: 2016-12-21
Revised: 2017-01-11

Implementation and Performance Evaluation of Recommender Algorithms Based on Multi-/Many-core Platforms
CHEN Jing, FANG Jian-bin, TANG Tao and YANG Can-qun. Implementation and Performance Evaluation of Recommender Algorithms Based on Multi-/Many-core Platforms[J]. Computer Science, 2017, 44(10): 71-74.
Authors:CHEN Jing  FANG Jian-bin  TANG Tao and YANG Can-qun
Affiliation: College of Computer, National University of Defense Technology, Changsha 410073, China
Abstract: In this paper, we designed and implemented two typical recommender algorithms, alternating least squares (ALS) and cyclic coordinate descent (CCD), in OpenCL. We then evaluated them on Intel CPUs, NVIDIA GPUs, and the Intel MIC, and investigated the factors affecting performance: the latent feature dimension and the number of threads. Meanwhile, we compared the OpenCL implementations with their CUDA and OpenMP counterparts. Our experimental results show that, under the same conditions, CCD converges faster and performs more stably, but is more time-consuming than ALS. We also observed that the OpenCL implementations perform better than CUDA and OpenMP on the same platform: training on the GPU is slightly faster than with the CUDA implementation (1.03x for CCD and 1.2x for ALS), and training on the CPU is about 1.6-1.7x faster than with the 16-thread OpenMP implementation. When running the OpenCL implementations on different platforms, the CPU performs better than both the GPU and the MIC.
Keywords:Recommender system  OpenCL  ALS  CCD
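The two algorithms the abstract compares can be sketched in plain NumPy. This is a hypothetical minimal illustration of the dense ALS and CCD update rules only, not the authors' OpenCL kernels: the function names, the regularization parameter `reg`, the random initialization, and the use of a small dense ratings matrix (the paper targets sparse data) are all assumptions for illustration.

```python
import numpy as np

# Hypothetical dense sketches of the two factorization algorithms, assuming
# R (m x n) is approximated by P @ Q.T with k latent features per row.

def als(R, k=8, reg=0.1, iters=20, seed=0):
    """Alternating least squares: alternately solve regularized normal equations."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    P = 0.1 * rng.standard_normal((m, k))
    Q = 0.1 * rng.standard_normal((n, k))
    I = reg * np.eye(k)
    for _ in range(iters):
        # Fix Q, solve the k x k system for all user rows at once.
        P = np.linalg.solve(Q.T @ Q + I, Q.T @ R.T).T
        # Fix P, solve for all item rows.
        Q = np.linalg.solve(P.T @ P + I, P.T @ R).T
    return P, Q

def ccd(R, k=8, reg=0.1, iters=20, seed=0):
    """Cyclic coordinate descent: update one latent feature (rank-1 term) at a time."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    P = 0.1 * rng.standard_normal((m, k))
    Q = 0.1 * rng.standard_normal((n, k))
    E = R - P @ Q.T  # residual, maintained incrementally
    for _ in range(iters):
        for f in range(k):
            # Return feature f's rank-1 contribution to the residual ...
            E += np.outer(P[:, f], Q[:, f])
            # ... apply the closed-form one-dimensional updates ...
            P[:, f] = E @ Q[:, f] / (Q[:, f] @ Q[:, f] + reg)
            Q[:, f] = E.T @ P[:, f] / (P[:, f] @ P[:, f] + reg)
            # ... and subtract the updated contribution back out.
            E -= np.outer(P[:, f], Q[:, f])
    return P, Q
```

The structural contrast matters for parallelization: ALS solves independent k x k linear systems per row (coarse-grained, well suited to the OpenCL/CUDA work-group model), while CCD performs cheap scalar coordinate updates over an incrementally maintained residual.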