A Properly Greedy Integrated Cache-Prefetch Algorithm for Parallel File Systems
Cite this article: LU Kai, JIN Shi-Yao, LU Xi-Cheng. A properly greedy integrated cache-prefetch algorithm for parallel file systems [J]. Chinese Journal of Computers, 1999, 22(11): 1172-1177.
Authors: LU Kai  JIN Shi-Yao  LU Xi-Cheng
Affiliation: Department of Computer Science, National University of Defense Technology, Changsha 410073, China
Abstract: Caching and prefetching are two effective techniques for reducing access latency in traditional file systems. Under the I/O access patterns of parallel scientific computing applications, however, simple caching and prefetching can no longer deliver high cache hit rates. Based on an analysis of this I/O pattern, this paper proposes a properly greedy integrated cache-prefetch algorithm (PGI). The algorithm fully exploits the characteristics of the parallel file system environment and adopts a properly greedy dynamic sliding-window technique, which effectively eliminates prefetch thrashing and reduces system overhead; at the same time it employs an integrated cache-prefetch replacement policy.

Keywords: parallel file system  prefetch  operating system
Revised: December 15, 1998

PROPERLY GREEDY CACHE PREFETCH INTEGRATED ALGORITHM IN THE PARALLEL FILE SYSTEM
LU Kai,JIN Shi-Yao,LU Xi-Cheng.PROPERLY GREEDY CACHE PREFETCH INTEGRATED ALGORITHM IN THE PARALLEL FILE SYSTEM[J].Chinese Journal of Computers,1999,22(11):1172-1177.
Authors:LU Kai  JIN Shi-Yao  LU Xi-Cheng
Abstract: Caching and prefetching are two efficient ways to reduce the latency of I/O requests. The latency of physical I/O operations can be avoided by caching frequently referenced data in primary memory rather than leaving it in secondary storage, and latency tolerance can be achieved by overlapping long I/O accesses with independent computation through prefetching. Simple cache and prefetch algorithms, however, are of little use for parallel scientific applications, and prefetching must be carefully balanced against caching. This paper proposes a new prefetch algorithm aimed at parallel scientific applications: the Properly Greedy Cache-Prefetch Integrated algorithm (PGI). PGI is based on the regularity of the I/O access patterns of parallel scientific applications. Existing algorithms such as LRU-SP and TIP are too greedy when issuing prefetch operations; PGI overcomes this shortcoming by using a properly greedy dynamic sliding window that adjusts its prefetch size according to the application's consumption speed and the I/O service delay. When replacing cache blocks, PGI applies an integrated cache-prefetch loss-estimation algorithm to achieve the least loss and the shortest I/O service time. In a parallel file system environment the workload of each I/O node differs; PGI takes this fully into account and preferentially replaces cache blocks that reside on lightly loaded I/O nodes, so that the workload across I/O nodes is balanced and the total service time is reduced. Experimental results show that PGI achieves a higher hit ratio, shorter I/O operation delay, and a better-balanced workload.
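The abstract describes a dynamic sliding window whose prefetch depth adapts to the application's consumption speed and the I/O service delay. A minimal sketch of that idea, assuming a doubling/halving update rule; the class name, parameters, and the exact update rule are illustrative assumptions, not the authors' actual PGI algorithm:

```python
class SlidingWindowPrefetcher:
    """Illustrative adaptive prefetch window (not the exact PGI rule)."""

    def __init__(self, min_size=1, max_size=32):
        self.min_size = min_size
        self.max_size = max_size
        self.window = min_size  # blocks prefetched ahead of the consumer

    def adjust(self, consume_rate, io_delay):
        # Blocks the application will consume during one I/O round trip;
        # if that outpaces the current window, prefetch more aggressively.
        demand = consume_rate * io_delay
        if demand > self.window:
            self.window = min(self.max_size, self.window * 2)
        elif demand < self.window // 2:
            # Window far larger than needed: back off to avoid thrashing,
            # i.e. evicting useful cached blocks to hold unread prefetches.
            self.window = max(self.min_size, self.window // 2)
        return self.window
```

Being "properly greedy" here means the window grows only while demand justifies it and shrinks once prefetched blocks start going unused, which is the balance against caching the abstract emphasizes.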
Keywords: cache  parallel file system  prefetch  operating system
This article is indexed in CNKI, VIP (Weipu), Wanfang Data, and other databases.