Similar Documents
 18 similar documents found.
1.
An Optimal Algorithm for Scheduling Fork-Join Task Graphs
The fork-join task graph is a fundamental structure in parallel processing. Although many algorithms produce optimal schedules when tasks satisfy certain conditions, they often neglect to save processors or to shorten the overall completion time of the task set, which lowers their speedup and efficiency. This paper therefore proposes a balanced scheduling algorithm based on task duplication, with time complexity O(vq + v log v), where v and q denote the number of tasks in the task set and the number of processors used, respectively. By analyzing the load and idle time slots of the processors already in use, the algorithm assigns tasks to those processors wherever possible to balance the load and thus raise their utilization. Experimental results show that the algorithm outperforms other algorithms in speedup and overall efficiency, making it a good choice for scheduling high-performance applications.
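To make the placement rule concrete, the following Python sketch (not the paper's algorithm) places the parallel branches of a fork-join graph onto processors that are already in use, opening a new processor only when that gives a strictly earlier finish. The uniform communication cost and the example numbers are illustrative assumptions.

```python
# Minimal sketch (not the paper's algorithm) of reusing already allocated
# processors for the branches of a fork-join graph: each branch goes to the
# used processor on which it finishes earliest, and a new processor is opened
# only when that gives a strictly earlier finish.

def schedule_fork_join(fork_cost, branch_costs, join_cost, comm):
    avail = [fork_cost]                  # processor 0 runs the fork task
    placement = []                       # (processor, finish_time) per branch

    for cost in sorted(branch_costs, reverse=True):      # longest branch first
        # Earliest finish achievable on each processor already in use.
        candidates = []
        for p, free in enumerate(avail):
            ready = fork_cost + (0.0 if p == 0 else comm)
            candidates.append((max(ready, free) + cost, p))
        best_finish, best_p = min(candidates)

        new_finish = fork_cost + comm + cost              # on a fresh processor
        if new_finish < best_finish:
            avail.append(new_finish)
            placement.append((len(avail) - 1, new_finish))
        else:
            avail[best_p] = best_finish
            placement.append((best_p, best_finish))

    # The join runs on processor 0; branches elsewhere pay communication cost.
    join_start = max(f + (0.0 if p == 0 else comm) for p, f in placement)
    return join_start + join_cost, len(avail)

if __name__ == "__main__":
    makespan, used = schedule_fork_join(2.0, [4.0, 3.0, 3.0, 1.0], 2.0, 1.0)
    print(f"makespan = {makespan}, processors used = {used}")
```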

2.
Energy-efficient parallel task scheduling has become a hot research topic in cluster systems. Existing duplication-based energy-aware scheduling algorithms mainly use a threshold to balance system performance and energy consumption, but a randomly set threshold cannot adjust itself to performance requirements, environmental parameters, and other characteristics, which limits these algorithms. This paper proposes ATES (Adaptive Threshold-based Energy-efficient Scheduling), a two-phase energy-efficient scheduling algorithm for homogeneous cluster systems. First, a task-duplication strategy based on an adaptive threshold is designed: it automatically computes the optimal threshold and uses it to obtain a near-optimal task grouping. The grouped tasks are then scheduled onto DVS-capable processors, and the idle time between tasks is exploited to lower the processor voltage. By organically combining task duplication with voltage scaling and adjusting the threshold automatically during scheduling, the algorithm effectively improves energy efficiency. To validate ATES, simulation experiments on typical applications compare it with common task scheduling algorithms; the results show that ATES achieves a better balance between performance and energy consumption.

3.
An Overhead-Aware Optimal Energy-Efficient Real-Time Scheduling Algorithm for Multiprocessors
Energy consumption in embedded multiprocessor systems is becoming increasingly important, and reducing energy while meeting real-time constraints is a key problem in energy-efficient real-time scheduling for multiprocessor systems. Most existing studies lower the processor frequency to the critical speed to reduce dynamic energy and shut processors down to reduce static energy. Although this approach saves energy, it does not guarantee minimum energy consumption. Moreover, existing optimal energy-efficient real-time schedulers ignore the time and energy overhead of processor state switching, so they are no longer optimal on real platforms where this overhead is not negligible. For multiprocessor systems with independent dynamic voltage/frequency scaling and dynamic power management, this paper proposes an optimal energy-efficient real-time scheduling algorithm based on the frame task model that accounts for processor switching overhead. The algorithm uses the critical speed to assess the system load and determine the number of active processors that yields the lowest energy, and then uses the state-switching overhead to determine the optimal schedule. It allows real-time tasks to migrate freely among processors, has low computational complexity, and is easy to implement. Mathematical analysis proves its optimality.
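The trade-off between critical speed and switching overhead can be illustrated with a small sketch: for one frame of work, enumerate candidate numbers of active processors and charge each active processor a switch-on overhead. The cubic dynamic-power model, the static-power term, and all constants are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch: enumerate how many processors to keep active for one frame of total
# work W (normalized cycles) and frame length T, charging each active
# processor a switch-on overhead. Power model and constants are illustrative.

def frame_energy(W, T, m, c=1.0, p_static=0.1, e_switch=0.05, f_crit=0.6):
    f = max(W / (m * T), f_crit)          # never run below the critical speed
    if f > 1.0:
        return float("inf")               # m processors cannot finish the frame
    busy = W / (m * f)                    # time each active processor computes
    dynamic = m * c * f ** 3 * busy
    static = m * p_static * busy          # processors sleep when not computing
    return dynamic + static + m * e_switch

def best_processor_count(W, T, max_m=8):
    energies = {m: frame_energy(W, T, m) for m in range(1, max_m + 1)}
    return min(energies, key=energies.get), energies

if __name__ == "__main__":
    m_opt, energies = best_processor_count(W=3.0, T=2.0)
    for m, e in energies.items():
        print(f"m = {m}: E = {e:.3f}")
    print("active processors with lowest energy:", m_opt)
```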

4.
In multi-core systems, task scheduling is one of the key factors determining system performance. To optimize task scheduling, a new scheduling algorithm is proposed on the basis of several typical algorithms (such as PPA and the algorithm proposed by Xu Cheng). On the one hand, the algorithm determines a sensible order in which to duplicate predecessor tasks and performs duplication in two phases, so that more predecessors can be duplicated to shorten the schedule length and reduce idle time on the processors; on the other hand, it removes redundant clusters that do not affect the schedule length and then merges the remaining clusters, reducing both the number of processors and the schedule length. Experiments show that the improved algorithm outperforms the typical algorithms in scheduling performance.

5.
Power Optimization for Preemption Threshold Scheduling
Applying DVS (Dynamic Voltage Scaling) lengthens task execution times and thus rapidly increases the processor's static power consumption (caused by leakage current in CMOS circuits). Procrastination scheduling is an effective method proposed in recent years to reduce static power: by postponing the normal execution of tasks, it keeps the processor asleep or shut down as long as possible, avoiding excessive static power leakage. For periodic task sets scheduled with a preemption threshold policy on variable-voltage processors, this paper combines energy-efficient scheduling with procrastination scheduling into a two-phase algorithm: an offline algorithm first computes the optimal processor speed for each task, and an online simulated-scheduling algorithm then computes each task's procrastination interval, dynamically deciding when to switch the processor on and off. Case studies and simulation experiments show that the method further reduces the power consumption of preemption threshold scheduling.
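The online procrastination step can be pictured with a small sketch: given the jobs already released (running at their offline-computed speeds), the processor may stay asleep until the latest instant from which EDF can still meet every deadline, minus the wake-up latency. This conservative rule ignores future arrivals and is only a simplified illustration of the idea, not the paper's algorithm.

```python
# Sketch of the procrastination idea: with released jobs in EDF order, the
# processor may sleep until the latest time from which all deadlines can still
# be met, minus the wake-up latency. Future arrivals are ignored here.

def latest_wakeup(jobs, wake_latency):
    """jobs: list of (remaining_exec_time, absolute_deadline)."""
    jobs = sorted(jobs, key=lambda j: j[1])          # EDF order
    latest = float("inf")
    demand = 0.0
    for exec_time, deadline in jobs:
        demand += exec_time
        latest = min(latest, deadline - demand)      # slack before this deadline
    return latest - wake_latency

if __name__ == "__main__":
    # Two jobs already scaled to their (offline-computed) optimal speeds.
    queued = [(1.0, 5.0), (2.0, 9.0)]
    print("sleep until t =", latest_wakeup(queued, wake_latency=0.2))
```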

6.
The energy consumed by servers while executing tasks is a major component of the dynamic energy consumption of a cloud computing system. To reduce the total energy consumed by task execution, an earliest-finish-time task scheduling method based on energy optimization is proposed, together with a dynamic power model for servers, a server execution-energy model based on that dynamic power, and an energy-optimization model for the cloud system. According to each task's deadline and its execution energy on different servers, the scheduling policy selects among scheduling algorithms to obtain the minimum total execution energy. Experimental results show that the proposed method meets task deadlines well while reducing the total energy consumed by task execution in the cloud computing system.
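A minimal sketch of the selection step, under an assumed constant busy-power model: for each task, pick the server that meets the deadline with the least execution energy, falling back to the earliest finish time when no server can meet it. The `Server` fields and all numbers are hypothetical, not the paper's models.

```python
# Sketch: pick, for one task, the server that meets the deadline with the
# lowest execution energy; fall back to earliest finish if none meets it.

from dataclasses import dataclass

@dataclass
class Server:
    name: str
    speed: float        # work units per second
    p_busy: float       # power draw while executing (watts)
    ready: float = 0.0  # time at which the server becomes free

def pick_server(servers, task_len, deadline):
    best = None
    for s in servers:
        exec_time = task_len / s.speed
        finish = s.ready + exec_time
        energy = s.p_busy * exec_time
        # Deadline-missers sort last; among the rest, least energy wins;
        # among deadline-missers, earliest finish wins.
        key = (finish > deadline, energy if finish <= deadline else finish)
        if best is None or key < best[0]:
            best = (key, s, finish, energy)
    _, s, finish, energy = best
    s.ready = finish                      # book the slot on the chosen server
    return s.name, finish, energy

if __name__ == "__main__":
    cluster = [Server("fast", 2.0, 150.0), Server("slow", 1.0, 50.0)]
    print(pick_server(cluster, task_len=4.0, deadline=5.0))
```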

7.
As energy management becomes an important topic in reliable and green computing, energy-aware scheduling has attracted attention for its low cost and feasibility. Energy-aware scheduling of dependent tasks in grid environments is highly challenging: it must balance the precedence constraints of applications, massive data transfers, system heterogeneity, and conflicting performance metrics. The proposed ESGDT (energy-efficient scheduling of grid dependent tasks) algorithm aims to reduce application execution energy while still optimizing execution time, and effectively addresses these issues. It reduces communication time and communication energy through task duplication and a progressive proportional factor while respecting complex data dependencies; following the trend toward chip miniaturization and multi-core technology, it uses dynamic power management to reduce the static energy of task execution. The task-duplication condition, the progressive proportional factor, and the fine-tuning rules all weigh the two conflicting objectives of time and energy, and adaptive, dynamic mapping is used to suit heterogeneous computing environments. Simulation experiments show that, compared with the HEFT, EETDS, and HEADUS algorithms, ESGDT further reduces application execution energy without degrading scheduling time performance.

8.
Dynamic scheduling algorithms for real-time multiprocessor systems have long been an important research topic in real-time systems. Based on the characteristics of heterogeneous real-time multiprocessors, a new heterogeneous real-time dynamic scheduling algorithm, P_IEFT, is proposed. The algorithm uses a new processor allocation strategy: assign each task to the processor that can finish it earliest. This strategy shortens the schedule length and increases the chance that subsequent tasks are accepted, thereby improving the success ratio of scheduling. Simulation results show that its success ratio is higher than those of the myopic algorithm and the saving algorithm.

9.
兰舟  孙世新 《计算机学报》2007,30(3):454-462
Multiprocessor scheduling is a key problem affecting system performance, and duplication-based scheduling algorithms are an effective way to solve it. This paper analyzes several typical duplication-based algorithms and proposes a multiprocessor task allocation algorithm based on dynamic critical tasks (DCT). The DCT algorithm focuses on overcoming the shortcomings of greedy algorithms: during scheduling it dynamically computes task timing parameters, accurately identifies each processor's critical task, optimizes the schedule around those critical tasks, and gradually improves the result until an optimal schedule is obtained. Analysis and experiments show that DCT outperforms other existing algorithms of the same kind.

10.
A Processor Pre-Allocation Algorithm Based on Task Duplication
Duplication-based scheduling algorithms perform better than algorithms without task duplication. After analyzing several typical duplication-based algorithms (such as TDS and OSA) and their assumptions, this paper proposes PPA, a processor pre-allocation algorithm whose primary objective is to minimize the schedule length and whose secondary objective is to reduce the number of processors. The algorithm places no restrictions on task computation times or inter-task communication times (i.e., it does not consider task granularity). Comparison with related work shows that PPA is better than or comparable to other algorithms in both schedule length and the number of processors used, while having the same time complexity as TDS and OSA. This is of significance for embedded real-time distributed systems.

11.
We address scheduling independent and precedence constrained parallel tasks on multiple homogeneous processors in a data center with dynamically variable voltage and speed as combinatorial optimization problems. We consider the problem of minimizing schedule length with energy consumption constraint and the problem of minimizing energy consumption with schedule length constraint on multiple processors. Our approach is to use level-by-level scheduling algorithms to deal with precedence constraints. We use a simple system partitioning and processor allocation scheme, which always schedules as many parallel tasks as possible for simultaneous execution. We use two heuristic algorithms for scheduling independent parallel tasks in the same level, i.e., SIMPLE and GREEDY. We adopt a two-level energy/time/power allocation scheme, namely, optimal energy/time allocation among levels of tasks and equal power supply to tasks in the same level. Our approach results in significant performance improvement compared with previous algorithms in scheduling independent and precedence constrained parallel tasks.
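A rough sketch of the two-level allocation idea under simplifying assumptions (per-processor power equal to speed cubed, each level's work split evenly over its processors): the Lagrangian optimality condition makes each level's common speed proportional to p^(-1/3), and the schedule-length constraint fixes the constant. This illustrates the modeling style only, not the authors' exact scheme.

```python
# Sketch: choose a common speed per precedence level so that total energy
# sum(W_i * s_i^2) is minimized subject to sum((W_i/p_i)/s_i) <= deadline,
# assuming power = speed**3 per processor and an even split of level work.

def level_speeds(levels, deadline):
    """levels: list of (total_work_W, processors_p). Returns speed per level."""
    # Lagrangian condition gives s_i proportional to p_i^(-1/3);
    # the constant k is fixed by making the schedule length hit the deadline.
    k = sum(W * p ** (-2.0 / 3.0) for W, p in levels) / deadline
    return [k * p ** (-1.0 / 3.0) for _, p in levels]

if __name__ == "__main__":
    lv = [(8.0, 4), (3.0, 1), (6.0, 2)]          # (work, usable processors)
    speeds = level_speeds(lv, deadline=10.0)
    total_time = sum((W / p) / s for (W, p), s in zip(lv, speeds))
    total_energy = sum(W * s ** 2 for (W, _), s in zip(lv, speeds))
    print("speeds:", [round(s, 3) for s in speeds])
    print("time:", round(total_time, 3), "energy:", round(total_energy, 3))
```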

12.
Task graph pre-scheduling, using Nash equilibrium in game theory
Prescheduling algorithms aim to restructure task graphs for optimal scheduling. Task graph scheduling is an NP-complete problem. This article offers a prescheduling algorithm for tasks to be executed on networks of homogeneous processors. The proposed algorithm merges tasks to minimize their earliest start time while reducing the overall completion time. To this end, considering each task as a player attempting to reduce its earliest start time as much as possible, we have applied the idea of Nash equilibrium in game theory to determine the most appropriate merging. Also, considering each level of a task graph as a player seeking distinct parallel processors to execute each of its independent tasks in parallel with the others, the idea of Nash equilibrium can be applied to determine the appropriate number of processors so that the overall idle time of the processors is minimized and the throughput is maximized. Communication delay is explicitly considered in the comparisons. Our experiments with a number of known benchmark task graphs and also two well-known problems of linear algebra, LU decomposition and Gauss–Jordan elimination, demonstrate the distinguished scheduling results provided by applying our algorithm. In our study, we consider ten scheduling algorithms: min–min, chaining, A*, genetic algorithms, simulated annealing, tabu search, HLFET, ISH, DSH with task duplication, and our proposed algorithm (PSGT).

13.
Optimized task scheduling is one of the most important challenges in achieving high performance in multiprocessor environments such as parallel and distributed systems. Most existing task-scheduling algorithms are based on the so-called list scheduling technique. The basic idea behind list scheduling is to prepare a sequence of nodes in the form of a list for scheduling by assigning them some priority measurement, and then repeatedly removing the node with the highest priority from the list and allocating it to the processor providing the earliest start time (EST). It follows that the makespans obtained are dominated by two major factors: (1) which order of tasks should be selected (sequence subproblem); and (2) how the selected order should be assigned to the processors (assignment subproblem). A number of good approaches to the task sequence dilemma have been proposed in the literature, while the task assignment problem has not been studied much. The results of this study show that assigning tasks to processors using the traditional EST method is not optimal; in addition, a novel approach based on the ant colony optimization algorithm is introduced, which can find far better solutions.
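For reference, here is a compact sketch of the list-scheduling baseline the abstract refers to: order tasks by upward rank, then assign each to the processor giving the earliest start time (EST). The graph, costs, and uniform communication delay are illustrative; the paper's contribution is to replace this EST assignment step with an ant-colony-based one.

```python
# Sketch of classic list scheduling: priority by upward rank, assignment by
# earliest start time (EST). Example graph and costs are illustrative only.

def upward_rank(task, succ, cost, comm, memo):
    if task in memo:
        return memo[task]
    r = cost[task] + max((comm + upward_rank(s, succ, cost, comm, memo)
                          for s in succ.get(task, [])), default=0.0)
    memo[task] = r
    return r

def list_schedule(succ, cost, comm, n_proc):
    pred = {t: [u for u, vs in succ.items() if t in vs] for t in cost}
    memo = {}
    order = sorted(cost, key=lambda t: -upward_rank(t, succ, cost, comm, memo))
    proc_free = [0.0] * n_proc
    finish, place = {}, {}
    for t in order:
        best = None
        for p in range(n_proc):
            # Data from predecessors on other processors arrives after `comm`.
            ready = max((finish[u] + (0.0 if place[u] == p else comm)
                         for u in pred[t]), default=0.0)
            start = max(ready, proc_free[p])
            if best is None or start < best[0]:
                best = (start, p)
        start, p = best
        finish[t] = start + cost[t]
        place[t] = p
        proc_free[p] = finish[t]
    return max(finish.values()), place

if __name__ == "__main__":
    succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
    cost = {"A": 2.0, "B": 3.0, "C": 2.0, "D": 1.0}
    print(list_schedule(succ, cost, comm=1.0, n_proc=2))
```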

14.
Because of environmental and monetary concerns, it is increasingly important to reduce the energy consumption in all areas, including parallel and high performance computing. In this article, we propose an approach to reduce the energy consumption needed for the execution of a set of tasks computed in parallel in a fork-join fashion. The approach consists of an analytical model for the energy consumption of a parallel computation in fork-join form on dynamic voltage frequency scaling processors, a theoretical specification of an energy-optimal frequency-scaled state, and the energy minimization by computing optimal scaling factors. For larger numbers of tasks, the approach is extended by scheduling algorithms, which exploit the analytical result and aim at a reduction of the energy. Energy measurements of a complex numerical method and the SPEC CPU2006 benchmarks as well as simulations for a large number of randomly generated tasks illustrate and validate the energy modeling, the minimization, and the scheduling results.
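The core intuition can be sketched as follows: within one fork-join phase on DVFS processors, every branch shorter than the longest one can be slowed so that it finishes exactly at the join, cutting dynamic energy without lengthening the makespan. The cubic power model and the continuous frequency range are simplifying assumptions, not the article's full analytical model.

```python
# Sketch: stretch each branch of a fork-join phase so it finishes exactly when
# the longest branch does; with dynamic power ~ f^3, energy per task is w*f^2.

def scaled_energy(work, f_max=1.0):
    phase_time = max(work) / f_max               # longest task sets the join time
    freqs = [w / phase_time for w in work]       # each task stretched to the join
    energy = sum(w * f ** 2 for w, f in zip(work, freqs))   # scaled energy
    baseline = sum(w * f_max ** 2 for w in work)             # all tasks at f_max
    return freqs, energy, baseline

if __name__ == "__main__":
    freqs, e_scaled, e_base = scaled_energy([4.0, 2.0, 1.0])
    print("frequencies:", freqs)
    print(f"energy {e_scaled:.2f} vs unscaled {e_base:.2f}")
```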

15.
The communication latency incurred when dependent tasks are scheduled in parallel on a multi-core processor adversely affects the schedule length and processor utilization. To improve how multi-core systems handle dependent tasks, a parallelism-aware scheduling algorithm is proposed based on the scheduling characteristics of dependent tasks on multi-core processors. The algorithm computes the longest path from each task to the exit node and assigns scheduling priority in descending order of this value; when allocating processor cores it considers both the degree of dependence and each task's earliest possible start time, using a best-match evaluation function. Experimental results show that, compared with the busHEFT and DTSV algorithms, the proposed algorithm achieves shorter scheduling latency, less communication, and higher processor utilization.

16.
The high energy consumption of workflow execution not only increases the cost of cloud resource providers but also reduces the reliability of the cloud system. To reduce workflow execution energy while meeting deadlines, a workflow energy-efficient scheduling algorithm, CWEES, is proposed. The algorithm divides energy-efficient scheduling into three phases: initial task mapping, processor resource merging, and task slacking. Initial task mapping obtains an initial scheduling order by ranking tasks bottom-up; processor resource merging reclaims slack time to merge relatively inefficient processors and reduce the number of resources used; task slacking reselects, for each task, an optimal target resource with a suitable voltage/frequency level, reducing the total workflow execution energy without violating task ordering or deadline constraints. The algorithm's performance is evaluated through simulation with randomly generated workflow task models. The results show that CWEES not only achieves higher resource utilization but also reduces workflow execution energy under deadline constraints, balancing execution efficiency and energy consumption.

17.
The rapid development of artificial intelligence places higher demands on high-performance computing, and task scheduling in heterogeneous computing environments has always been a key problem in high-performance computing. This paper proposes PQDSA, a scheduling algorithm based on priority queue partitioning. The algorithm determines the number of priority queues from the number of entry nodes of the DAG (directed acyclic graph) task set, partitions the task queues according to communication and computation costs, and assigns critical-node tasks to suitable queues, producing better scheduling queues that increase inter-task parallelism and reduce the makespan of the task set. Tasks are then scheduled onto processors with an insertion-based policy so that scheduling executes more efficiently. PQDSA reduces inter-task time overhead and improves processor scheduling efficiency. Comparison with two classical algorithms shows that PQDSA is clearly superior in both makespan and scheduling efficiency.

18.
The growth of energy consumption has been explosive in current data centers, supercomputers, and public cloud systems. This explosion has led to greater advocacy of green computing, and many efforts and works focus on task scheduling in order to reduce energy dissipation. In order to obtain more energy reduction as well as maintain the quality of service by meeting deadlines, this paper proposes a DVFS-enabled Energy-efficient Workflow Task Scheduling algorithm: DEWTS. Through merging the relatively inefficient processors by reclaiming the slack time, DEWTS can leverage the useful slack time recurrently after servers are merged. DEWTS first calculates the initial scheduling order of all tasks, and obtains the whole makespan and deadline based on the Heterogeneous-Earliest-Finish-Time (HEFT) algorithm. By re-sorting the processors according to their running task count and energy utilization, the underutilized processors can be merged by closing the last node and redistributing its assigned tasks. Finally, in the task-slacking phase, tasks can be placed in the idle slots under a lower voltage and frequency using the DVFS technique, without violating the dependency constraints or increasing the slacked makespan. Experiments on a large set of randomly generated DAG workflows show that DEWTS can reduce total power consumption by up to 46.5% for various parallel applications while balancing scheduling performance.
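The task-slacking phase can be sketched in a few lines: once processors have been merged, a task followed by an idle slot is stretched into that slot by lowering its DVFS frequency, without delaying successors. The cubic dynamic-power model and the example numbers are assumptions for illustration only.

```python
# Sketch of the task-slacking step: stretch one task into the idle slot that
# follows it by lowering its DVFS frequency. Power model is assumed P = f^3.

def slack_into_dvfs(exec_time, slot_end, start, f_max=1.0, f_min=0.4):
    """Return (frequency, energy_before, energy_after) for one task."""
    available = slot_end - start                    # room before the next task
    f = max(f_min, f_max * exec_time / available)   # stretch to fill the slot
    e_before = exec_time * f_max ** 3               # original energy at f_max
    e_after = (exec_time * f_max / f) * f ** 3      # longer time, lower power
    return f, e_before, e_after

if __name__ == "__main__":
    # Task runs 2.0 s at f_max starting at t = 3.0; its successor cannot start
    # before t = 8.0, so 3.0 s of slack can be reclaimed.
    f, e0, e1 = slack_into_dvfs(exec_time=2.0, slot_end=8.0, start=3.0)
    print(f"scaled frequency = {f:.2f}, energy {e0:.2f} -> {e1:.2f}")
```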

