TransGPerf: Exploiting Transfer Learning for Modeling Distributed Graph Computation Performance

Authors: Songjie Niu, Shimin Chen

Affiliation: State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
Abstract: It is challenging to model the performance of distributed graph computation. Explicit formulation cannot easily capture the diversified factors and complex interactions in the system. Statistical learning methods require a large number of training samples to generate an accurate prediction model, but it is time-consuming to run the required graph computation tests to obtain these samples. In this paper, we propose TransGPerf, a transfer-learning-based solution that exploits prior knowledge from a source scenario and needs only a manageable amount of training data to model the performance of a target graph computation scenario. Experimental results show that our proposed method generates accurate models for a wide range of graph computation tasks on PowerGraph and GraphX, outperforming transfer learning methods proposed for other applications in the literature.
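The transfer-learning idea in the abstract can be sketched as follows: pre-train a model on plentiful source-scenario samples, freeze the learned representation, and re-fit only the output layer on a handful of target-scenario samples. This is a minimal illustrative sketch, not the paper's actual architecture; the synthetic workload features, the random-feature hidden layer, and the linear performance functions are all assumptions made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(n):
    # Hypothetical workload features, e.g. [num_vertices, num_edges, num_workers],
    # scaled to [0, 1]. Purely synthetic stand-ins for real measurements.
    return rng.uniform(0.0, 1.0, size=(n, 3))

def hidden(X, W):
    # Shared representation layer; frozen after pre-training on the source scenario.
    return np.tanh(X @ W)

# Source scenario: plenty of training samples (cheap to collect there).
X_src = features(500)
y_src = 2.0 * X_src[:, 0] + 3.0 * X_src[:, 1] - 1.0 * X_src[:, 2]

# Pre-train: random hidden weights plus a least-squares output layer.
W_hidden = rng.normal(size=(3, 16))
H_src = hidden(X_src, W_hidden)
w_out_src, *_ = np.linalg.lstsq(H_src, y_src, rcond=None)

# Target scenario: related but shifted performance behavior, few samples.
X_tgt = features(20)
y_tgt = 2.5 * X_tgt[:, 0] + 2.5 * X_tgt[:, 1] - 0.5 * X_tgt[:, 2]

# Fine-tune: keep the frozen representation, re-fit only the output layer.
H_tgt = hidden(X_tgt, W_hidden)
w_out_tgt, *_ = np.linalg.lstsq(H_tgt, y_tgt, rcond=None)

# Evaluate both models on held-out target samples.
X_test = features(100)
y_test = 2.5 * X_test[:, 0] + 2.5 * X_test[:, 1] - 0.5 * X_test[:, 2]
H_test = hidden(X_test, W_hidden)

err_src_model = np.mean(np.abs(H_test @ w_out_src - y_test))
err_finetuned = np.mean(np.abs(H_test @ w_out_tgt - y_test))
print(f"source-only model MAE: {err_src_model:.3f}")
print(f"fine-tuned model MAE:  {err_finetuned:.3f}")
```

Even with only 20 target samples, the fine-tuned output layer adapts the frozen source representation to the target scenario, which is the essence of the sample-efficiency argument the abstract makes.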
| |
Keywords: performance modeling; distributed graph computation; deep learning; transfer learning
This article has been indexed by Wanfang Data and other databases. The original abstract and a free full-text PDF are available from the Journal of Computer Science and Technology (《计算机科学技术学报》).