Related Articles
A total of 20 related articles were found.
1.
Dehne, Dittrich, Hutchinson. 《Algorithmica》, 2008, 36(2): 97-122
Abstract. External memory (EM) algorithms are designed for large-scale computational problems in which the size of the internal memory of the computer is only a small fraction of the problem size. Typical EM algorithms are specially crafted for the EM situation. In the past, several attempts have been made to relate the large body of work on parallel algorithms to EM, but with limited success. The combination of EM computing, on multiple disks, with multiprocessor parallelism has been posed as a challenge by the ACM Working Group on Storage I/O for Large-Scale Computing. In this paper we provide a simulation technique which produces efficient parallel EM algorithms from efficient BSP-like parallel algorithms. The techniques obtained can accommodate one or multiple processors on the EM target machine, each with one or more disks, and they also adapt to the disk blocking factor of the target machine. When applied to existing BSP-like algorithms, our simulation technique produces improved parallel EM algorithms for a large number of problems.

2.
This paper analyzes the parallel I/O task model of RAID-10 systems, uses fuzzy functions to propose a satisfaction metric for I/O service, and, based on this metric, proposes an I/O task scheduling algorithm suited to RAID-10 systems. The algorithm improves the real-time performance of RAID-10 systems and their load-balancing capability.
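A minimal sketch of the general idea behind satisfaction-driven scheduling, assuming a hypothetical linear membership function and a pick-lowest-satisfaction policy (neither detail is taken from the paper):

```python
def satisfaction(waited_ms, deadline_ms):
    """Hypothetical fuzzy membership: 1.0 while well within the deadline,
    falling linearly to 0.0 as the request approaches its deadline."""
    if waited_ms <= 0.5 * deadline_ms:
        return 1.0
    if waited_ms >= deadline_ms:
        return 0.0
    return (deadline_ms - waited_ms) / (0.5 * deadline_ms)

def pick_next_request(requests, now_ms):
    """Serve the pending request whose satisfaction is lowest, i.e. the one
    closest to violating its real-time constraint; ties broken by arrival."""
    scored = [(satisfaction(now_ms - r["arrival_ms"], r["deadline_ms"]),
               r["arrival_ms"], i) for i, r in enumerate(requests)]
    _, _, idx = min(scored)
    return requests[idx]
```

A real RAID-10 scheduler would also weigh which mirror or stripe each request touches; the sketch ignores that to keep the satisfaction metric visible.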

3.
To address the continually widening performance gap between CPU and I/O in computer systems, this paper introduces the concepts of caching and prefetching and analyzes several key issues that prefetching techniques must handle. Following the development of prefetching, it introduces and compares several typical classes of prefetching algorithms, and, in view of current hardware and software advances, discusses the difficulties facing prefetching techniques and their future trends.

4.
A Brief Analysis of Cache Prefetching Techniques (cited 2 times: 0 self-citations, 2 by others)
To address the continually widening performance gap between CPU and I/O in computer systems, this paper introduces the concepts of caching and prefetching and analyzes several key issues that prefetching techniques must handle. Following the development of prefetching, it introduces and compares several typical classes of prefetching algorithms, and, in view of current hardware and software advances, discusses the difficulties facing prefetching techniques and their future trends.

5.
Buffer management for a D-disk parallel I/O system is considered in the context of randomized placement of data on the disks. A simple prefetching and caching algorithm, PHASE-LRU, using bounded lookahead is described and analyzed. It is shown that PHASE-LRU performs an expected number of I/Os that is within a factor Θ(log D / log log D) of the number performed by an optimal off-line algorithm. In contrast, any deterministic buffer management algorithm with the same amount of lookahead must do at least a certain factor times the number of I/Os of the optimal.
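The abstract does not spell out PHASE-LRU's internals, so the following toy sketch only illustrates the flavor of bounded-lookahead buffer management on D disks; the phase rule (at most one missing block fetched per disk per parallel I/O) and plain LRU eviction are assumptions, not the paper's exact algorithm:

```python
from collections import OrderedDict

def phase_lru_like(requests, disk_of, cache_size, lookahead):
    """Toy bounded-lookahead buffer manager, not the paper's exact PHASE-LRU:
    each phase issues one parallel I/O that fetches at most one missing block
    per disk from the lookahead window, and eviction is plain LRU."""
    cache = OrderedDict()              # block -> None; least recently used at the front
    parallel_ios = 0
    i = 0
    while i < len(requests):
        window = requests[i:i + lookahead]
        batch = {}                     # disk -> earliest missing block on that disk
        for b in window:
            d = disk_of[b]
            if b not in cache and d not in batch and len(batch) < cache_size:
                batch[d] = b
        if batch:
            parallel_ios += 1          # blocks in the batch are read concurrently
            for b in batch.values():
                if len(cache) >= cache_size:
                    cache.popitem(last=False)      # evict the least recently used block
                cache[b] = None
        if requests[i] in cache:       # consume the next reference once it is resident
            cache.move_to_end(requests[i])
            i += 1
    return parallel_ios
```

Here disk_of stands in for the randomized placement: a mapping from each block to the disk it was assigned to.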

6.
In parallel image-processing applications, a large fraction of parallel algorithms are iteratively synchronized data-parallel algorithms. The subtasks of such distributed applications require some form of synchronization, so a cooperative scheduling algorithm is needed to ensure that the subtasks start at essentially the same time and proceed at the same pace. This paper proposes a data-partitioning and cooperative task-scheduling algorithm whose objective is the shortest execution time of the parallel program. Compared with similar algorithms, its main distinguishing feature is that it takes communication overhead into account, which makes it more realistic and more effective.
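As a rough illustration of partitioning that accounts for communication, here is a proportional-split heuristic; the cost model and the split rule are assumptions for illustration, not the algorithm proposed in the paper:

```python
def partition_rows(total_rows, compute_rate, comm_cost_per_row):
    """Hypothetical heuristic: give each node a share of image rows inversely
    proportional to its per-row cost (compute time plus communication time),
    so that per-iteration finish times are roughly balanced."""
    per_row_cost = [1.0 / rate + comm
                    for rate, comm in zip(compute_rate, comm_cost_per_row)]
    weights = [1.0 / cost for cost in per_row_cost]
    total_weight = sum(weights)
    shares = [int(total_rows * w / total_weight) for w in weights]
    shares[0] += total_rows - sum(shares)            # absorb the rounding remainder
    est_makespan = max(s * c for s, c in zip(shares, per_row_cost))
    return shares, est_makespan
```

With one node twice as fast as the other and equal per-row communication cost, the faster node receives roughly 65% of the rows under this split.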

7.
This paper proposes a reactive scheduling algorithm for parallel storage: a rule-based, automatic storage-optimization algorithm built on models of storage access. The algorithm models storage accesses with the E_IS_PPM and Last_N_Successor algorithms, classifies the access patterns, and establishes a rule set for storage optimization. Finally, the scheduling algorithm is implemented on top of MPI.

8.
Parallel scheduling in distributed systems has long been a very active research topic. This paper introduces system performance metrics and scheduling techniques, and concludes with a summary of related research work.

9.
We give a polynomial approximation scheme for the problem of scheduling on uniformly related parallel machines for a large class of objective functions that depend only on the machine completion times, including minimizing the ℓp norm of the vector of completion times. This generalizes and simplifies many previous results in this area.

10.
An Adaptive Scheduling Strategy for Parallel I/O Access in Cluster Computing Systems (cited 1 time: 0 self-citations, 1 by others)
The performance of modern computer systems has shifted from being CPU-bound to being I/O-bound, and parallel I/O is an effective way to alleviate this problem. Through a study of I/O scheduling strategies in cluster computing systems, this paper proposes a new adaptive equal-partition I/O scheduling strategy and compares it with existing strategies. Experimental results demonstrate the advantages of the proposed strategy.

11.
叶孝斌, 杨树强. 《计算机工程》, 2000, 26(3): 57-58, 76
Parallel I/O is one of the effective ways to improve the performance of shared-nothing parallel database systems; it provides high-bandwidth I/O through parallel disk service and parallelized network transfer. This paper designs and implements parallel I/O for a shared-nothing parallel database system and discusses several key issues and implementation techniques in the design of parallel I/O.

12.
For the problem of scheduling parallel machines under resource constraints, this paper partitions the time horizon by selecting chains of operations and builds a mathematical model. This approach shifts the focus from machines and resources to changes in the operations, which greatly reduces the computational complexity of this class of problems.

13.
Research on Parallel I/O Techniques (cited 7 times: 0 self-citations, 7 by others)
Starting from an analysis of ways to improve I/O performance, this paper analyzes and discusses the issues involved in parallel file systems, which exploit storage-system parallelism for data access on distributed-memory high-performance computers, and concludes by introducing several well-known parallel file systems.

14.
Analysis of Parallel I/O Scheduling Algorithms for RAID (cited 6 times: 1 self-citation, 6 by others)
As more and more applications become I/O-bound, storage systems are playing an increasingly important role, and RAID disk arrays are the most common storage devices providing high-performance I/O. This paper analyzes the I/O execution time and disk utilization of RAID parallel I/O scheduling algorithms, providing a basis for the sensible configuration of high-performance arrays.
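A back-of-the-envelope model of the quantities analyzed (service time and disk utilization of a striped request); the parameter values and the simple seek + rotation + transfer cost are assumptions for illustration, not figures from the paper:

```python
import math

def striped_request_time(request_kb, stripe_unit_kb, n_disks,
                         seek_ms=5.0, rotation_ms=3.0, transfer_ms_per_kb=0.02):
    """Illustrative model: a striped request is served by several disks in
    parallel, so its service time follows the most heavily loaded disk, while
    utilization compares useful transfer time against the total time the
    involved disks stay busy."""
    units = math.ceil(request_kb / stripe_unit_kb)     # stripe units touched
    units_per_disk = math.ceil(units / n_disks)        # load on the busiest disk
    service_ms = seek_ms + rotation_ms + units_per_disk * stripe_unit_kb * transfer_ms_per_kb
    busy_disks = min(units, n_disks)
    useful_ms = units * stripe_unit_kb * transfer_ms_per_kb
    utilization = useful_ms / (busy_disks * service_ms)
    return service_ms, utilization
```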

15.
毛友发, 杨明福. 《计算机工程》, 2004, 30(18): 33-34, 121
This paper studies prefetching optimization algorithms for parallel storage. Based on the principal access patterns of parallel storage, it argues that intra-file block accesses and inter-file accesses should both be modeled, proposing the E_IS_PPM algorithm for intra-file block accesses and the Last_N_Successor algorithm for inter-file accesses. The two are then combined into an integrated file-prefetching algorithm that adaptively decides the prefetch depth according to how much computation can be overlapped with storage access and the availability of prefetched pages.
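A rough sketch of a Last-N-Successor style predictor for inter-file accesses; the internal details (the value of N, the most-frequent-successor rule) are assumptions and not necessarily those of the paper's Last_N_Successor algorithm:

```python
from collections import Counter, defaultdict, deque

class LastNSuccessor:
    """For each file, remember the last N files opened immediately after it and
    predict the most frequent one as the next prefetch candidate."""
    def __init__(self, n=3):
        self.n = n
        self.successors = defaultdict(lambda: deque(maxlen=self.n))
        self.prev = None

    def record(self, file_id):
        """Observe that file_id was accessed now."""
        if self.prev is not None:
            self.successors[self.prev].append(file_id)
        self.prev = file_id

    def predict(self, file_id):
        """Return the most likely next file after file_id, or None if unknown."""
        history = self.successors.get(file_id)
        if not history:
            return None
        return Counter(history).most_common(1)[0][0]
```

After the open sequence a, b, a, c, a, b, predict('a') returns 'b', the most common immediate successor of a observed so far.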

16.
Jansen, Porkolab. 《Algorithmica》, 2008, 32(3): 507-520
Abstract. A malleable parallel task is one whose execution time is a function of the number of (identical) processors allotted to it. We study the problem of scheduling a set of n independent malleable tasks on a fixed number of parallel processors, and propose an approximation scheme that for any fixed ε > 0, computes in O(n) time a non-preemptive schedule of length at most (1+ε) times the optimum.
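For orientation only, a naive greedy heuristic for the same setting; it fixes one processor allotment per task and then list-schedules, and is in no way the (1+ε)-approximation scheme described in the abstract:

```python
def greedy_malleable_schedule(exec_time, m):
    """Naive heuristic that only illustrates the malleable-scheduling setting.
    exec_time[j][p-1] = running time of task j when run on p processors, p = 1..m."""
    # Allot each task the processor count that minimizes its total work p * t_j(p).
    allot = []
    for times in exec_time:
        p_best = min(range(1, m + 1), key=lambda p: p * times[p - 1])
        allot.append((p_best, times[p_best - 1]))
    # List-schedule the resulting rigid (processors, time) jobs, longest first.
    free_at = [0.0] * m                               # time each processor becomes free
    makespan = 0.0
    for p, t in sorted(allot, key=lambda job: -job[1]):
        chosen = sorted(range(m), key=lambda i: free_at[i])[:p]   # p earliest-free processors
        start = max(free_at[i] for i in chosen)
        for i in chosen:
            free_at[i] = start + t
        makespan = max(makespan, start + t)
    return allot, makespan
```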

17.
This paper analyzes the feasibility of parallel conversion between two common GIS spatial data formats (GRID and TIN) and presents two task-scheduling algorithms for parallel conversion on a cluster architecture. The deterministic scheduling algorithm is simpler to implement, but the tasks must execute in a strict order; the non-deterministic scheduling algorithm can decide the execution order according to the actual execution time of each task. Experiments show that when the task assignment is unbalanced, the non-deterministic scheduling algorithm is more efficient.
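A minimal sketch of the "non-deterministic" (dynamic) strategy, assuming a shared work queue that workers drain as they finish; the task granularity and queue discipline are illustrative assumptions, not the paper's exact scheme:

```python
import queue
import threading

def dynamic_schedule(tasks, n_workers):
    """Workers pull conversion tasks from a shared queue as they finish, so the
    execution order adapts to actual task run times instead of following a
    fixed precomputed order."""
    work = queue.Queue()
    for t in tasks:
        work.put(t)
    completed = []                       # (worker_id, task result) pairs

    def worker(worker_id):
        while True:
            try:
                task = work.get_nowait()
            except queue.Empty:
                return
            result = task()              # e.g. convert one GRID or TIN tile
            completed.append((worker_id, result))

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return completed
```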

18.
Our main result is an optimal online algorithm for preemptive scheduling on uniformly related machines with the objective to minimize makespan. The algorithm is deterministic, yet it is optimal even among all randomized algorithms. In addition, it is optimal for any fixed combination of speeds of the machines, and thus our results subsume all the previous work on various special cases. Together with a new lower bound, it follows that the overall competitive ratio of this optimal algorithm is between 2.054 and e ≈ 2.718. We also give a complete analysis of the competitive ratio for three machines. T. Ebenlendr and J. Sgall partially supported by Institutional Research Plan No. AV0Z10190503, by Inst. for Theor. Comp. Sci., Prague (project 1M0545 of MŠMT ČR), grant 201/05/0124 of GA ČR, and grant IAA1019401 of GA AV ČR. W. Jawor supported by NSF grants CCF-0208856 and OISE-0340752.

19.
Multiple memory models have been proposed to capture the effects of memory hierarchy culminating in the I-O model of Aggarwal and Vitter (Commun. ACM 31(9):1116–1127, [1988]). More than a decade of architectural advancements have led to new features that are not captured in the I-O model—most notably the prefetching capability. We propose a relatively simple Prefetch model that incorporates data prefetching in the traditional I-O models and show how to design optimal algorithms that can attain close to peak memory bandwidth. Unlike (the inverse of) memory latency, the memory bandwidth is much closer to the processing speed, thereby, intelligent use of prefetching can considerably mitigate the I-O bottleneck. For some fundamental problems, our algorithms attain running times approaching that of the idealized random access machines under reasonable assumptions. Our work also explains more precisely the significantly superior performance of the I-O efficient algorithms in systems that support prefetching compared to ones that do not.
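A small sketch of why prefetching narrows the gap to peak bandwidth: issue the next block read before computing on the current one, so transfer overlaps computation. The helper names (read_block, compute) are placeholders, and this is not the paper's formal Prefetch model:

```python
import concurrent.futures

def process_with_prefetch(block_ids, read_block, compute):
    """Overlap the read of block i+1 with the computation on block i.
    Assumes block_ids is non-empty."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        pending = pool.submit(read_block, block_ids[0])   # first read in flight
        results = []
        for nxt in block_ids[1:]:
            data = pending.result()                  # wait for the outstanding read
            pending = pool.submit(read_block, nxt)   # issue the next read early
            results.append(compute(data))            # compute while it transfers
        results.append(compute(pending.result()))
    return results
```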

20.
A malleable parallel task is one whose execution time is a function of the number of (identical) processors allotted to it. We study the problem of scheduling a set of n independent malleable tasks on an arbitrary number m of parallel processors and propose an asymptotic fully polynomial time approximation scheme. For any fixed ε > 0, the algorithm computes a non-preemptive schedule of length at most (1+ε) times the optimum (plus an additive term) and has running time polynomial in n, m, and 1/ε.
