
Self-Adaptive Cache Management Strategy for the Parallel Computing Framework Spark
Cite this article: BIAN Chen, YU Jiong, YING Chang-tian, XIU Wei-rong. Self-Adaptive Strategy for Cache Management in Spark [J]. Acta Electronica Sinica, 2017, 45(2): 278-284. DOI: 10.3969/j.issn.0372-2112.2017.02.003
Authors: BIAN Chen  YU Jiong  YING Chang-tian  XIU Wei-rong
Affiliations: 1. School of Information Science and Engineering, Xinjiang University, Urumqi, Xinjiang 830046, China; 2. School of Information Engineering, Urumqi Vocational University, Urumqi, Xinjiang 830002, China
Abstract: The parallel computing framework Spark lacks an effective cache selection mechanism and cannot automatically identify and cache highly reused data; its cache replacement policy is LRU, whose metric is too coarse, which hurts task execution efficiency. This paper proposes a Self-Adaptive Cache Management strategy (SACM) for Spark, comprising an automatic cache selection algorithm (Selection), a parallel cache cleanup algorithm (Parallel Cache Cleanup, PCC), and a weight-based cache replacement algorithm (Lowest Weight Replacement, LWR). The selection algorithm analyzes the DAG (Directed Acyclic Graph) structure of a job, identifies reused RDDs, and caches them automatically. PCC asynchronously cleans up valueless RDDs to improve cluster memory utilization. LWR chooses eviction targets by weight, avoiding the task delay caused by recomputing complex RDDs and preserving computational efficiency under resource bottlenecks. Experiments show that the strategy improves Spark task execution efficiency and makes effective use of memory resources.
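The automatic selection step described in the abstract identifies reused RDDs by analyzing the job's DAG. A minimal sketch of that reuse-counting idea, assuming the DAG is given as parent-to-child lineage edges (the node names and the reuse threshold here are illustrative assumptions, not the paper's implementation):

```python
# Hedged sketch of the cache auto-selection (Selection) idea: walk the
# DAG, count how many downstream operations consume each RDD, and mark
# any RDD consumed more than once as a caching candidate.
from collections import defaultdict

def select_cache_candidates(dag_edges):
    """dag_edges: list of (parent_rdd, child_rdd) lineage pairs."""
    reuse_count = defaultdict(int)
    for parent, _child in dag_edges:
        reuse_count[parent] += 1
    # An RDD feeding more than one downstream operation is reused,
    # so recomputation can be avoided by caching it.
    return {rdd for rdd, n in reuse_count.items() if n > 1}

# "parsed" feeds two branches of the DAG, so it is the only candidate.
edges = [("raw", "parsed"), ("parsed", "joinA"), ("parsed", "joinB"),
         ("joinA", "result"), ("joinB", "result")]
print(select_cache_candidates(edges))  # {'parsed'}
```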

Keywords: parallel computing  cache management strategy  Spark  resilient distributed datasets
Received: 2015-09-02

Self-Adaptive Strategy for Cache Management in Spark
BIAN Chen, YU Jiong, YING Chang-tian, XIU Wei-rong. Self-Adaptive Strategy for Cache Management in Spark [J]. Acta Electronica Sinica, 2017, 45(2): 278-284. DOI: 10.3969/j.issn.0372-2112.2017.02.003
Authors:BIAN Chen  YU Jiong  YING Chang-tian  XIU Wei-rong
Affiliation:1. School of Information Science and Engineering, Xinjiang University, Urumqi, Xinjiang 830046, China;2. School of Information and Engineering, Urumqi Vocational University, Urumqi, Xinjiang 830002, China
Abstract: As a parallel computation framework, Spark lacks a good strategy for selecting valuable RDDs to cache in limited memory. When memory is full, Spark discards the least recently used RDD while ignoring other factors such as computation cost. This paper proposes a self-adaptive cache management strategy (SACM), which comprises an automatic selection algorithm (Selection), a parallel cache cleanup algorithm (PCC), and a lowest weight replacement algorithm (LWR). The selection algorithm seeks out valuable RDDs and caches their partitions to speed up data-intensive computations. PCC cleans up valueless RDDs asynchronously to improve memory utilization. LWR takes comprehensive account of an RDD's usage frequency, computation cost, and size. Experimental results show that Spark with the selection algorithm computes faster than traditional Spark, the parallel cleanup algorithm improves memory utilization, and LWR performs better under limited memory.
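The LWR policy described above scores each cached RDD by combining its usage frequency, recomputation cost, and size, then evicts the lowest-scoring one first. A minimal sketch of that idea; the weight formula below is an illustrative assumption, not the paper's exact definition:

```python
# Hedged sketch of lowest-weight replacement (LWR): frequently used,
# expensive-to-recompute, small partitions get high weight and are kept;
# the lowest-weight entry is the eviction victim.

def lwr_weight(freq, compute_cost, size):
    # Illustrative weight: reuse frequency times recomputation cost,
    # penalized by the memory the partition occupies.
    return freq * compute_cost / size

def pick_victim(cached):
    """cached: dict of name -> (freq, compute_cost_ms, size_mb)."""
    return min(cached, key=lambda name: lwr_weight(*cached[name]))

cached = {
    "rdd_filter": (1, 50.0, 200.0),   # rarely reused, cheap to rebuild
    "rdd_join":   (4, 900.0, 120.0),  # hot and expensive to rebuild
}
print(pick_victim(cached))  # rdd_filter
```

Unlike plain LRU, this scoring keeps an RDD that is costly to recompute even if it was not the most recently accessed, which is exactly the delay the abstract says LWR avoids.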
Keywords: parallel computing  cache management strategy  Spark  resilient distributed datasets
This article is indexed by Wanfang Data and other databases.