Efficient and High-quality Recommendations via Momentum-incorporated Parallel Stochastic Gradient Descent-Based Learning
Xin Luo, Wen Qin, Ani Dong, Khaled Sedraoui and MengChu Zhou, "Efficient and High-quality Recommendations via Momentum-incorporated Parallel Stochastic Gradient Descent-Based Learning," IEEE/CAA J. Autom. Sinica, vol. 8, no. 2, pp. 402-411, Feb. 2021. doi: 10.1109/JAS.2020.1003396
Authors: Xin Luo, Wen Qin, Ani Dong, Khaled Sedraoui, MengChu Zhou
Abstract: A recommender system (RS) relying on latent factor analysis usually adopts stochastic gradient descent (SGD) as its learning algorithm. However, owing to its serial mechanism, an SGD algorithm suffers from low efficiency and scalability when handling large-scale industrial problems. To address this issue, this study proposes a momentum-incorporated parallel stochastic gradient descent (MPSGD) algorithm, whose main idea is two-fold: a) implementing parallelization via a novel data-splitting strategy, and b) accelerating the convergence rate by integrating momentum effects into the training process. With it, an MPSGD-based latent factor (MLF) model is achieved, which is capable of performing efficient and high-quality recommendations. Experimental results on four high-dimensional and sparse matrices generated by industrial RSs indicate that, owing to the MPSGD algorithm, the MLF model outperforms existing state-of-the-art models in both computational efficiency and scalability.
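The momentum-incorporated SGD update described in the abstract can be sketched for a latent factor model as follows. This is a minimal illustrative sketch, not the authors' implementation: all names and hyperparameter values (`lr`, `momentum`, `reg`, `epochs`, factor dimension `k`) are assumptions, and the paper's parallel data-splitting strategy is omitted here, showing only the serial momentum-accelerated training loop.

```python
import numpy as np

def mlf_train(ratings, k=4, lr=0.01, momentum=0.9, reg=0.05, epochs=50, seed=0):
    """Momentum-incorporated SGD for a latent factor model (illustrative sketch).

    ratings: list of (user, item, value) triples from a sparse rating matrix.
    Hyperparameter names and values are illustrative, not taken from the paper.
    """
    rng = np.random.default_rng(seed)
    n_users = max(u for u, _, _ in ratings) + 1
    n_items = max(i for _, i, _ in ratings) + 1
    P = rng.standard_normal((n_users, k)) * 0.1   # user latent factors
    Q = rng.standard_normal((n_items, k)) * 0.1   # item latent factors
    vP = np.zeros_like(P)                          # momentum (velocity) terms
    vQ = np.zeros_like(Q)
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]                  # prediction error on one entry
            gP = -err * Q[i] + reg * P[u]          # regularized gradient w.r.t. P[u]
            gQ = -err * P[u] + reg * Q[i]          # regularized gradient w.r.t. Q[i]
            vP[u] = momentum * vP[u] - lr * gP     # momentum-accelerated step
            vQ[i] = momentum * vQ[i] - lr * gQ
            P[u] += vP[u]
            Q[i] += vQ[i]
    return P, Q

# Toy usage on a tiny sparse matrix
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 1.0), (2, 2, 2.0)]
P, Q = mlf_train(ratings)
rmse = np.sqrt(np.mean([(r - P[u] @ Q[i]) ** 2 for u, i, r in ratings]))
```

The velocity terms `vP`/`vQ` accumulate past gradients, which is what accelerates convergence relative to plain SGD; in the paper's parallel setting, disjoint data blocks would additionally let multiple such updates run concurrently without conflicting on the same rows of `P` and `Q`.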
Keywords: Big data, industrial application, industrial data, latent factor analysis, machine learning, parallel algorithm, recommender system (RS), stochastic gradient descent (SGD)
This article is indexed in databases including VIP (维普).