Similar Documents
18 similar documents found.
1.
This work extends the TTIG model and proposes a new scheduling algorithm. In simulation experiments the algorithm was compared, under various test conditions, with MATE and with other DAG-based scheduling algorithms for homogeneous environments. The results show that it clearly outperforms MATE; against the DAG-based scheduling algorithms its performance is comparable overall, but it holds a significant advantage in time complexity.

2.
The high energy consumption of modern processors increases heat dissipation and lowers system reliability, and has become a major concern in computing. However, most effective energy-reduction techniques target single-processor systems and rarely consider multiprocessor systems. The scheduling algorithms proposed here target multiprocessor computing environments: they schedule the task with the shortest execution time first and combine this with other effective techniques (shared slack reclamation), so that real-time tasks finish within their deadlines while the overall energy consumption of the system is reduced. For independent task sets and dependent task sets, two algorithms for homogeneous environments are proposed, STFBA1 (Shortest-Task-First-Based Algorithm) and STFBA2, together with two algorithms for multiple task sets, HSA1 (Hybrid Scheduling Algorithm) and HSA2. In the single-task-set setting, the algorithms achieve better performance (schedule length and energy consumption) than the best known algorithms. In the multiple-task-set setting, the algorithms based on the hybrid scheduling strategy clearly improve scheduling performance.
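A minimal sketch of the shortest-task-first idea described above, in Python: independent tasks are dispatched in ascending order of execution time to whichever processor becomes free first, and the slack remaining before each deadline is reported as the budget a slack-reclamation step could exploit. The task fields and the greedy assignment rule are illustrative assumptions, not the exact STFBA1/STFBA2 formulation.

```python
import heapq
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    exec_time: float   # worst-case execution time
    deadline: float

def stf_schedule(tasks, num_procs):
    """Dispatch tasks shortest-execution-time-first to the processor that
    becomes free earliest; return (task, proc, start, finish, meets_deadline)."""
    procs = [(0.0, p) for p in range(num_procs)]        # (ready_time, proc_id)
    heapq.heapify(procs)
    schedule = []
    for t in sorted(tasks, key=lambda t: t.exec_time):   # shortest task first
        ready, p = heapq.heappop(procs)
        finish = ready + t.exec_time
        slack = t.deadline - finish   # idle time a slack-reclamation step could reuse
        schedule.append((t.name, p, ready, finish, slack >= 0))
        heapq.heappush(procs, (finish, p))
    return schedule

if __name__ == "__main__":
    demo = [Task("t1", 4, 20), Task("t2", 2, 10), Task("t3", 6, 25)]
    for row in stf_schedule(demo, num_procs=2):
        print(row)
```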

3.
An energy-efficient scheduling algorithm for multiple DAG workflows in cloud computing environments
刘丹琦  于炯  英昌甜 《计算机应用》2013,33(9):2410-2415
To address the weak energy savings, narrow applicability, and lack of joint performance optimization in existing energy-efficient scheduling algorithms for multiple directed-acyclic-graph (DAG) workflows, a new energy-efficient scheduling method for multiple DAG workflows, MREO, is proposed. Building on an analysis of the characteristics of compute-intensive and communication-intensive tasks, MREO consolidates independent tasks to reduce the number of processors used, and applies backtracking and branch-and-bound to dynamically choose the consolidation paths, effectively lowering the complexity of the consolidation algorithm. Experimental results show that MREO effectively reduces the computation and communication energy overhead of the system while preserving the performance of the multi-DAG workflows, yielding good energy savings.

4.
Static DAG-based task scheduling has been studied and applied in depth. Most existing algorithms assume that processors can receive data in parallel and ignore the contention and latency of communication links in real applications, which makes the algorithms less efficient in practice. This work focuses on edge scheduling for dependent tasks in homogeneous computing environments and, drawing on effective strategies from classical task scheduling, proposes an Optimized-Insertion Scheduling Algorithm (OISA). Based on the characteristics of the actual problem, OISA uses an improved routing algorithm to select lightly loaded data links and, supported by formal proofs, optimizes the start time of each communication on its link so as to reduce the schedule length. Experiments show that OISA clearly outperforms existing related algorithms.

5.
乔伟光  曾国荪 《计算机工程》2006,32(17):126-128
Parallel task scheduling is one of the key factors affecting cluster computing efficiency; scheduling a DAG (Directed Acyclic Graph) task graph on a cluster is NP-complete, so heuristic algorithms must be sought. In prior work, the graph decomposition-and-reconstruction approach, with task duplication allowed, obtained an initial feasible schedule by recursively decomposing the DAG and reconstructing subgraphs. Building on that work, this paper proposes a task duplication strategy driven by the increment in schedule length; the strategy adjusts the same-cluster predecessors of constrained nodes, resolving the timing constraints between task clusters and shortening the schedule length, and it merges suitably chosen task clusters to enlarge cluster granularity and raise processor utilization. The resulting DAG scheduling algorithm, characterized by cluster expansion and merging and realized through per-cluster duplication, improves and extends the decomposition-and-reconstruction method. Case analysis shows that the algorithm's complexity equals that of TDS (Task Duplication Scheduling) while its performance is better.

6.
任丰玲  于炯  杨兴耀 《计算机工程》2012,38(23):287-290
For the problem of scheduling multiple directed-acyclic-graph (DAG) workflows in a cloud computing environment, an algorithm based on minimizing data transfer time and task completion time (LTCT) is proposed to handle scheduling among multiple DAG workflows of equal priority. For the case where the DAGs have different priorities, a hybrid multi-priority multi-DAG scheduling algorithm is given. Experimental results show that, while preserving fairness across the DAGs, LTCT avoids the extra data transfer overhead incurred by the E-Fairness algorithm, which helps shorten the overall workflow makespan and improve resource utilization.

7.
The MapReduce programming model is widely used in big-data processing platforms, and an effective task scheduling algorithm is critical to its efficiency. The Map and Reduce phases of a MapReduce workflow are decomposed into several precedence-constrained jobs, and each job into multiple tasks. Based on the cluster's available resources and the heterogeneity of the tasks, a two-level directed-acyclic-graph (DAG) model over jobs and tasks is then built, and a heterogeneous scheduling algorithm based on two-level priority ordering, 2-MRHS, is proposed. In its first phase the algorithm performs priority ordering: priority values are computed for jobs and for tasks separately and combined into a task dispatch queue. In its second phase it performs task assignment: each task's data-block subtasks are mapped to the most suitable compute node based on earliest finish time. Experiments with large numbers of randomly generated DAGs show that the algorithm achieves a shorter makespan and is more stable than related algorithms.
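A minimal sketch of the two-level prioritisation step described above: jobs are ranked by an upward-rank-style longest remaining path, the tasks inside each job are ordered by estimated cost, and the concatenation forms the dispatch queue handed to the earliest-finish-time assignment phase. The priority formulas and data structures are illustrative assumptions rather than the paper's exact definitions.

```python
def upward_ranks(nodes, succ, cost):
    """Upward rank of each node: its cost plus the largest rank among its
    successors, i.e. the length of the longest remaining path in the DAG."""
    memo = {}
    def rank(n):
        if n not in memo:
            memo[n] = cost[n] + max((rank(s) for s in succ.get(n, [])), default=0.0)
        return memo[n]
    for n in nodes:
        rank(n)
    return memo

def build_dispatch_queue(jobs, job_succ, job_cost, tasks_of_job, task_cost):
    """Level 1: rank jobs by upward rank. Level 2: order each job's tasks by
    descending estimated cost. The concatenation is the task dispatch queue."""
    job_rank = upward_ranks(jobs, job_succ, job_cost)
    queue = []
    for j in sorted(jobs, key=lambda j: -job_rank[j]):
        queue += sorted(tasks_of_job[j], key=lambda t: -task_cost[t])
    return queue
```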

8.
To capture the randomness in task computation and communication volumes arising from parallel program structure, an extended stochastic DAG model is proposed. On this model, the static critical path (SCP) algorithm commonly used in DAG scheduling is analysed in detail, and a corresponding scheduling method for the extended stochastic DAG, the SSCP (Stochastic Static Critical Path) algorithm, is proposed. A method for computing the EST (Earliest Start Time) of nodes in the extended stochastic DAG is also given, and simulation experiments are run with SCP as the baseline. The results show that, relative to SCP, SSCP reduces the execution time of parallel tasks and predicts the average execution time of a schedule more accurately.
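A minimal sketch of one way to compute the earliest start time (EST) of each node in a stochastic DAG, using the expected values of the random computation and communication costs; this is a simple reading of the extended model above, not the paper's exact estimator.

```python
def expected_est(nodes, preds, exec_mean, comm_mean):
    """EST(v) = max over predecessors u of EST(u) + E[exec(u)] + E[comm(u, v)];
    entry nodes (no predecessors) start at time 0."""
    est = {}
    def est_of(v):
        if v not in est:
            est[v] = max((est_of(u) + exec_mean[u] + comm_mean[(u, v)]
                          for u in preds.get(v, [])), default=0.0)
        return est[v]
    for v in nodes:
        est_of(v)
    return est
```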

9.
Building on an in-depth study of task scheduling algorithms in grid environments, a QoS-based genetic algorithm for cooperative task scheduling is proposed, and a QoS parameter model is constructed by introducing the DAG as a formal description of cooperative tasks. The model defines three QoS parameters, namely task completion time, price, and reliability, and feeds them into the genetic algorithm, thereby optimizing quality of service for cooperative task scheduling in the grid while preserving the data dependencies among cooperative tasks. Comparative experiments with the DAG-MIN and DAG-GSA algorithms show that the algorithm greatly improves the quality of service of the schedule while maintaining good scheduling performance.

10.
To optimize both the monetary cost and the execution efficiency of cloud workflow scheduling, a workflow scheduling algorithm based on directed-acyclic-graph (DAG) partitioning, PBWS, is proposed. Targeting the joint optimization of scheduling efficiency and cost, the algorithm divides the scheduling process into three phases: workflow DAG partitioning, partition adjustment, and resource allocation. The partitioning phase derives an initial task partition graph while preserving the execution-order dependencies between tasks; the adjustment phase redistributes tasks across partitions with the goal of reducing the makespan; the resource allocation phase selects the most cost-effective task-to-resource mapping so that the total idle time of the resources is minimized. Simulation experiments on five scientific workflow DAG models show that PBWS greatly reduces workflow execution cost at the expense of only a small increase in makespan, achieving joint optimization of scheduling efficiency and scheduling cost, and that its overall performance surpasses that of algorithms of the same type.

11.
兰舟  孙世新 《计算机学报》2007,30(3):454-462
Multiprocessor scheduling is a key problem affecting system performance, and duplication-based scheduling algorithms are among the more effective approaches to it. This paper analyses several typical duplication-based algorithms and proposes a multiprocessor task assignment algorithm based on dynamic critical tasks (DCT). The DCT algorithm focuses on overcoming the shortcomings of greedy algorithms: it dynamically recomputes task timing parameters during scheduling, accurately identifies each processor's critical task, optimizes the schedule around those critical tasks, and improves the schedule step by step until it obtains the best result. Analysis and experiments show that DCT outperforms other existing algorithms of the same kind.

12.
王小乐  黄宏斌  邓苏 《自动化学报》2012,38(11):1870-1879
For static task scheduling in heterogeneous parallel computing environments, with the goal of minimizing the makespan of a directed acyclic graph (DAG), this work changes how the upward rank of a task is computed in the HEFT (Heterogeneous Earliest Finish Time) algorithm to obtain a more reasonable task ordering, and proposes IHEFT (Improved Heterogeneous Earliest Finish Time), a list scheduling algorithm that prioritises the earliest finish time. When computing a task's upward rank, the algorithm computes the rank separately for each resource the task could be assigned to and takes the minimum, which is more accurate than HEFT's use of the task's average processing time over all resources. Once the task order is fixed, tasks are assigned to the best resource under an earliest-finish-time-first policy, with start and finish times respecting the communication-time constraints on the DAG's directed edges. Tests on example data from the literature and on randomly generated DAGs with prescribed structural properties compare IHEFT with HEFT, CPOP (Critical Path On a Processor), and LDCP (Longest Dynamic Critical Path); the results show that IHEFT is more effective and has low time complexity.
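A minimal sketch contrasting the two upward-rank definitions discussed above: HEFT ranks a task using its average execution time over all resources, while the IHEFT idea is to favour the smallest per-resource cost. The sketch simplifies this to taking the minimum of each task's per-resource execution times; the graph and cost structures are illustrative assumptions.

```python
def upward_rank(succ, w, comm, use_min=True):
    """Return the upward rank of every task. w[v] is the list of execution
    times of task v on each resource; comm[(u, v)] is the communication cost
    of edge (u, v). use_min=True follows the IHEFT idea (minimum over
    resources); use_min=False reproduces HEFT's average over resources."""
    memo = {}
    def rank(v):
        if v not in memo:
            own = min(w[v]) if use_min else sum(w[v]) / len(w[v])
            memo[v] = own + max((comm[(v, s)] + rank(s) for s in succ.get(v, [])),
                                default=0.0)
        return memo[v]
    for v in w:
        rank(v)
    return memo
```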

13.
A heterogeneous task scheduling algorithm called Predict and Arrange Task Scheduling (PATS) was proposed to achieve a low time complexity together with a minimal schedule length. It introduces two major steps: level-based task scheduling by earliest finish time, and idle slot reduction. In the first step, tasks are scheduled according to their predicted earliest finish time, drawn from the candidate task list and their dependencies; scheduling proceeds one level at a time, starting from the top level and moving downward. In the second step, the idle time slots in each processing unit are minimized. Two sets of experiments were designed to evaluate the merits of the proposed algorithm: the first used the synthesized task graphs employed by other methods, and the second used task graphs derived from real-world applications such as the Montage workflow and molecular dynamics code. The experimental results showed that PATS yielded a better average schedule length ratio, running time, and efficiency than the compared algorithms.

14.
A DAG-partition-based scheduling algorithm for dependent tasks in grids
Large applications in grid computing are often decomposed into many dependent tasks, and for such applications the dependencies between tasks cannot be ignored. Traditional algorithms can only treat such an application as a meta-task, which prevents further division of task granularity and greatly reduces scheduling performance. This paper proposes a dependent-task scheduling algorithm based on DAG partitioning. It schedules tasks on the critical path first and uses task duplication to exploit fragments of idle time on resources, ensuring that dependencies are satisfied in time. Simulation results show that, for large-scale dependent tasks in a grid environment, the algorithm effectively improves job execution speed and resource utilization.

15.
Tasks in hard real-time systems are required to meet preset deadlines, even in the presence of transient faults, and hence the analysis of worst-case finish time (WCFT) must consider the extra time incurred by re-executing tasks that were faulty. Existing solutions can only estimate WCFT and usually result in significant under- or over-estimation. In this work, we conclude that a sufficient and necessary condition of a task set experiencing its WCFT is that its critical task incurs all expected transient faults. A method is presented to identify the critical task and WCFT in O(|V| + |E|), where |V| and |E| are the number of tasks and dependencies between tasks, respectively. This method finds its application in testing the feasibility of directed acyclic graph (DAG) based task sets scheduled in a wide variety of fault-prone multi-processor systems, where the processors could be either homogeneous or heterogeneous, DVS-capable or DVS-incapable, etc. The common practices, which require the same time complexity as the proposed critical-task method, could either underestimate the worst case by up to 25%, or overestimate it by 13%. Based on the proposed critical-task method, a simulated-annealing scheduling algorithm is developed to find the energy-efficient fault-tolerant schedule for a given DAG task set. Experimental results show that the proposed critical-task method wins over a common practice by up to 40% in terms of energy saving.
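A minimal sketch of the critical-task idea stated above, under the simplifying assumptions that a faulty task is fully re-executed and that each execution attempt of task v costs cost[v]: the critical task is then the one that maximises the longest path through it plus k re-execution times, and both it and the corresponding WCFT estimate fall out of two linear passes over the DAG. The recovery model and node attributes are assumptions for illustration, not the paper's exact formulation.

```python
def critical_task(nodes, preds, succs, cost, k):
    """Return (critical task, estimated WCFT): the task whose k re-executions
    would stretch the schedule the most, i.e. the one maximising the longest
    path through it plus k * its execution time."""
    top, bottom = {}, {}
    def top_of(v):      # longest path ending at v, v included
        if v not in top:
            top[v] = cost[v] + max((top_of(u) for u in preds.get(v, [])), default=0.0)
        return top[v]
    def bottom_of(v):   # longest path starting at v, v included
        if v not in bottom:
            bottom[v] = cost[v] + max((bottom_of(s) for s in succs.get(v, [])), default=0.0)
        return bottom[v]
    best, wcft = None, 0.0
    for v in nodes:
        through = top_of(v) + bottom_of(v) - cost[v]   # longest path through v
        candidate = through + k * cost[v]              # all k faults hit v
        if candidate > wcft:
            best, wcft = v, candidate
    return best, wcft
```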

16.

In the past decade, heterogeneous multicore architectures with support for Single Instruction Multiple Thread (SIMT) style computing have become the standard platform of choice for scheduling HPC applications. Here, applications are typically modelled as a set of data-parallel tasks with dependencies represented in the form of a directed acyclic graph (DAG). The relevant execution time information for each constituent task in the DAG is known beforehand and is leveraged by scheduling algorithms (List or Cluster based) to ascertain near-optimal schedules at runtime. However, given an online setting, where applications are submitted by multiple users and the types of applications are not restrictive, the chances of knowing execution time information for every program are highly unlikely. In this context, we propose a class of intelligent algorithms for heterogeneous CPU-GPU platforms that leverage static analysis-assisted machine learning techniques for deciding how device assignments should be made at runtime, thus bypassing the requirement for expensive offline profiling passes. We formalize relevant task-level ranking metrics and discuss how existing scheduling techniques can be adapted for our proposed class of algorithms. We also devise an online cluster scheduling algorithm that supports dynamic task arrival by determining in any given scheduling epoch, mapping decisions for a subset of tasks in a DAG. We perform a detailed comparative analysis between our proposed cluster and list scheduling heuristics via extensive simulation experiments using a variety of heterogeneous multicore platform configurations and observe performance speedups in the range of 1.1–1.5× for cluster scheduling over that of list scheduling.


17.
The rapid development of artificial intelligence places ever higher demands on high-performance computing, where task scheduling in heterogeneous computing environments remains a key problem. This paper proposes a scheduling algorithm based on priority-queue division (PQDSA). The algorithm sets the number of priority queues from the number of entry nodes of the DAG (directed acyclic graph) task set, divides the task queues according to the tasks' communication and computation costs, and then assigns critical-node tasks to suitable queues so as to produce a better task dispatch queue, improving parallelism among tasks and reducing the completion time of the task set. Tasks are then scheduled onto processors using an insertion policy, which makes the scheduling execute more efficiently. PQDSA reduces the time lost between tasks and improves processor scheduling efficiency. Performance comparisons against two classical algorithms show that the proposed PQDSA algorithm is clearly better in both task completion time and scheduling efficiency.
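A minimal sketch of the insertion policy mentioned above: instead of appending a task after the last job on a processor, the scheduler places it in the earliest idle slot that is long enough and no earlier than the task's ready time. The slot representation and function name are illustrative assumptions.

```python
def earliest_insertion_start(busy_slots, ready, length):
    """busy_slots: list of (start, finish) intervals on one processor, sorted
    by start and non-overlapping. Return the earliest start time >= ready at
    which an idle gap of at least `length` exists (possibly after the last
    busy interval)."""
    t = ready
    for start, finish in busy_slots:
        if t + length <= start:   # the task fits in the gap before this interval
            return t
        t = max(t, finish)        # otherwise, earliest possible start is after it
    return t
```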

18.
This paper proposes a scheduling algorithm to solve the problem of task scheduling in a cloud computing system with time-varying communication conditions. The algorithm converts the scheduling problem with communication changes into a directed acyclic graph (DAG) scheduling problem containing fuzzy communication task nodes, that is, a scheduling problem for a communication-change DAG (CC-DAG). The CC-DAG contains both computation task nodes and communication task nodes. First, this paper proposes a weighted time-series network bandwidth model to solve the indefinite processing time (cost) problem for a fuzzy communication task node; this model can accurately predict the processing time of such a node. Second, to address the scheduling order of the computation task nodes, a dynamic pre-scheduling search strategy (DPSS) is proposed. During scheduling, this strategy computes the essential path for the pre-scheduling of each computation task node from the actual computation costs (times) of the computation task nodes and the predicted processing costs (times) of the fuzzy communication task nodes. The computation task node with the longest essential path is scheduled first, because its completion time directly influences the completion time of the task graph. Finally, the proposed algorithm is demonstrated via simulation experiments. The experimental results show that the proposed DPSS improves the total execution time by between 11.5% and 21.2%, and that it provides better-quality scheduling solutions for scientific application tasks in the cloud computing environment than the HEFT, PEFT, and CEFT algorithms.
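A minimal sketch of a weighted time-series bandwidth predictor in the spirit of the model described above: recent bandwidth samples are weighted more heavily, and the predicted transfer time of a fuzzy communication task node is its data size divided by the predicted bandwidth. The exponential weighting and function names are illustrative assumptions, not the paper's model.

```python
def predict_bandwidth(samples, decay=0.7):
    """Exponentially weighted average of bandwidth samples (newest sample last,
    and weighted most heavily)."""
    weight, weighted_sum, weight_sum = 1.0, 0.0, 0.0
    for b in reversed(samples):
        weighted_sum += weight * b
        weight_sum += weight
        weight *= decay
    return weighted_sum / weight_sum

def predicted_comm_time(data_size, bandwidth_history):
    """Predicted processing time of a fuzzy communication task node."""
    return data_size / predict_bandwidth(bandwidth_history)
```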
