Similar Documents
 20 similar documents found.
1.
An Adaptive Scheduling Algorithm with Dynamic Grouping of Core Resources for Many-Core Processor Systems
To address the problem of optimizing the use of core resources in many-core processor systems, this paper proposes CASM (core-partitioned adaptive scheduling for many-core systems), an adaptive scheduling algorithm that supports dynamic grouping of core resources. By splitting and merging task clusters, the algorithm dynamically builds elastically partitionable logical core groups, giving tasks isolated, optimized access to core resources. To balance core utilization against scheduling efficiency, CASM exploits the different characteristics of inter-cluster and intra-cluster scheduling: it uses an equalized scheduling algorithm with good fairness between task clusters and an adaptive scheduling algorithm with high resource utilization within each cluster. Online competitive analysis shows that CASM is 2-competitive with respect to task execution time, so its performance scales well. Experiments show that, compared with WS (work-stealing), AGDEQ (adaptive greedy dynamic equi-partitioning), and EQUI∘EQUI, CASM reduces the running time of a task set by roughly 46%, 32%, and 15%, respectively, and under the same energy consumption it substantially improves system throughput.

2.
Multiprocessor scheduling in a shared multiprogramming environment can be structured in two levels, where a kernel-level job scheduler allots processors to jobs and a user-level thread scheduler maps the ready threads of a job onto the allotted processors. We present two provably-efficient two-level scheduling schemes called G-RAD and S-RAD respectively. Both schemes use the same job scheduler RAD for the processor allotments that ensures fair allocation under all levels of workload. In G-RAD, RAD is combined with a greedy thread scheduler suitable for centralized scheduling; in S-RAD, RAD is combined with a work-stealing thread scheduler more suitable for distributed settings. Both G-RAD and S-RAD are non-clairvoyant. Moreover, they provide effective control over the scheduling overhead and ensure efficient utilization of processors. We also analyze the competitiveness of both G-RAD and S-RAD with respect to an optimal clairvoyant scheduler. In terms of makespan, both schemes can achieve O(1)-competitiveness for any set of jobs with arbitrary release time. In terms of mean response time, both schemes are O(1)-competitive for arbitrary batched jobs. To the best of our knowledge, G-RAD and S-RAD are the first non-clairvoyant scheduling algorithms that guarantee provable efficiency, fairness and minimal overhead.
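As a rough illustration of the user-level work-stealing layer that S-RAD relies on (not of the RAD allotment policy itself), the following Python sketch shows the usual deque discipline under a simplified, single-threaded simulation: the owning worker pushes and pops at the bottom of its own deque, while an idle worker steals from the top of another worker's deque. All names and the step loop are illustrative assumptions.

    import collections
    import random

    class Worker:
        """Toy worker with a double-ended work queue (deque)."""
        def __init__(self, wid):
            self.wid = wid
            self.deque = collections.deque()

        def push(self, task):
            self.deque.append(task)          # owner pushes at the bottom

        def pop(self):
            return self.deque.pop() if self.deque else None       # owner pops at the bottom (LIFO)

        def steal(self):
            return self.deque.popleft() if self.deque else None   # thief steals from the top (FIFO)

    def step(workers):
        """One simulated scheduling step: each worker runs a local task or steals one."""
        for w in workers:
            task = w.pop()
            if task is None:
                victim = random.choice([v for v in workers if v is not w])
                task = victim.steal()
            if task is not None:
                print(f"worker {w.wid} runs {task}")

    workers = [Worker(i) for i in range(3)]
    for t in range(6):
        workers[t % 3].push(f"task-{t}")
    step(workers)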

3.
Scheduling is a key problem in data operating system research: it establishes the links between computing resources, computing tasks, and data. In sea-cloud collaboration network environments, common scheduling concerns include fairness and data locality. As the environments in which data operating systems are deployed evolve, the interactive nature of tasks in the workload poses new challenges for scheduling. While retaining the traditional scheduling concerns, this paper takes into account the similarities and differences between interactive and batch job modes and proposes an optimized two-level scheduling model, which is validated on a real cluster with a workload whose distribution matches actual production environments. The experimental results show that, at the cost of a slight reduction in system throughput, the model improves the overall response time of interactive jobs while also maintaining user-level fairness.

4.
Fairness is an important aspect in queuing systems. Several fairness measures have been proposed in queuing systems in general and parallel job scheduling in particular. Generally, a scheduler is considered unfair if some jobs are discriminated against whereas others are favored. Some of the metrics used to measure fairness for parallel job schedulers can imply unfairness where there is no discrimination (and vice versa). This makes them inappropriate. In this paper, we show how the existing approach misrepresents fairness in practice. We then propose a new approach for measuring fairness for parallel job schedulers. Our approach is based on two principles: (i) as jobs have different resource requirements and find different queue/system states, they need not have the same performance for the scheduler to be fair; and (ii) to compare two schedulers for fairness, we compare how the schedulers favor or discriminate against individual jobs. We use performance and discrimination trends to validate our approach. We observe that our approach can deduce discrimination more accurately. This is true even in cases where the most discriminated jobs are not the worst performing jobs. Copyright © 2008 John Wiley & Sons, Ltd.

5.
This paper addresses the problem of minimizing the scheduling length (makespan) of a batch of jobs with different arrival times. A job is described by a directed acyclic graph (DAG) of parallel tasks. The paper proposes a dynamic scheduling method that adapts the schedule when new jobs are submitted and that may change the processors assigned to a job during its execution. The scheduling method is divided into a scheduling strategy and a scheduling algorithm. We also propose an adaptation of the Heterogeneous Earliest-Finish-Time (HEFT) algorithm, called here P-HEFT, to handle parallel tasks in heterogeneous clusters with good efficiency without compromising the makespan. The results of a comparison of this algorithm with another DAG scheduler, using a simulation of several machine configurations and job types, show that P-HEFT gives a shorter makespan for a single DAG but scores worse for multiple DAGs. Finally, the results of the dynamic scheduling of a batch of jobs using the proposed scheduling method showed significant improvements for more heavily loaded machines when compared to the alternative resource reservation approach.
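For orientation, the sketch below shows a simplified version of the baseline HEFT idea that P-HEFT extends: tasks are ranked by upward rank and each is assigned to the processor that minimizes its earliest finish time. The cost and communication matrices are invented, insertion-based slot filling is omitted, and nothing here reflects the paper's handling of parallel tasks.

    # Simplified HEFT: dag maps task -> list of successors, cost[task][proc] is the
    # execution time on each processor, comm[(u, v)] the average communication cost.
    def heft(dag, cost, comm, n_procs):
        tasks = list(dag)

        # Upward rank: mean cost plus the most expensive path to an exit task.
        rank = {}
        def upward(t):
            if t in rank:
                return rank[t]
            mean = sum(cost[t]) / n_procs
            rank[t] = mean + max((comm.get((t, s), 0) + upward(s) for s in dag[t]),
                                 default=0)
            return rank[t]
        for t in tasks:
            upward(t)

        preds = {t: [u for u in dag if t in dag[u]] for t in tasks}
        proc_free = [0.0] * n_procs          # when each processor next becomes idle
        finish, where, schedule = {}, {}, []

        for t in sorted(tasks, key=lambda x: -rank[x]):   # decreasing upward rank
            best = None
            for p in range(n_procs):
                # Earliest start: processor idle time and predecessor data arrival.
                ready = max((finish[u] + (comm.get((u, t), 0) if where[u] != p else 0)
                             for u in preds[t]), default=0.0)
                start = max(proc_free[p], ready)
                eft = start + cost[t][p]
                if best is None or eft < best[0]:
                    best = (eft, p, start)
            eft, p, start = best
            finish[t], where[t] = eft, p
            proc_free[p] = eft
            schedule.append((t, p, start, eft))
        return schedule

    dag = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
    cost = {"a": [2, 3], "b": [3, 2], "c": [4, 3], "d": [2, 2]}
    comm = {("a", "b"): 1, ("a", "c"): 1, ("b", "d"): 2, ("c", "d"): 1}
    print(heft(dag, cost, comm, n_procs=2))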

6.
The utilization of parallel computers depends on how jobs are packed together: if the jobs are not packed tightly, resources are lost due to fragmentation. The problem is that the goal of high utilization may conflict with goals of fairness or even progress for all jobs. The common solution is to use backfilling, which combines a reservation for the first job in the interest of progress with packing of later jobs to fill in holes and increase utilization. However, backfilling considers the queued jobs one at a time, and thus might miss better packing opportunities. We propose the use of dynamic programming to find the best packing possible given the current composition of the queue, thus maximizing the utilization on every scheduling step. Simulations of this algorithm, called lookahead optimizing scheduler (LOS), using trace files from several IBM SP parallel systems, show that LOS indeed improves utilization, and thereby reduces the mean response time and mean slowdown of all jobs. Moreover, it is actually possible to limit the lookahead depth to about 50 jobs and still achieve essentially the same results. Finally, we experimented with selecting among alternative sets of jobs that achieve the same utilization. Surprising results indicate that choosing the set at the head of the queue does not necessarily guarantee best performance. Instead, repeatedly selecting the set with the maximal overall expected slowdown boosts performance when compared to all other alternatives checked.
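LOS's lookahead can be pictured as a knapsack-style dynamic program over the queue: given the processors currently free and the sizes of the queued jobs, pick the subset that uses as many free processors as possible. The sketch below shows only that packing step under simplified assumptions; the reservation for the head job and LOS's slowdown-based tie breaking are not modeled.

    def best_packing(free_procs, job_sizes):
        """0/1 knapsack over queued jobs: maximize processors used out of free_procs.

        Returns (procs_used, indices_of_selected_jobs)."""
        # best[c] = (procs_used, selected_indices) achievable within capacity c
        best = [(0, [])] * (free_procs + 1)
        for i, size in enumerate(job_sizes):
            if size > free_procs:
                continue
            for c in range(free_procs, size - 1, -1):     # iterate capacity downwards
                used, chosen = best[c - size]
                if used + size > best[c][0]:
                    best[c] = (used + size, chosen + [i])
        return best[free_procs]

    # 7 free processors; queued jobs request 4, 3, 3, 2 processors each.
    print(best_packing(7, [4, 3, 3, 2]))    # -> (7, [0, 1]): jobs 0 and 1 fill all 7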

7.
Grid scheduling algorithms are usually implemented in a simulation environment using tools that hide the complexity of the Grid and assumptions that are not always realistic. In our work, we describe the steps followed, the difficulties encountered and the solutions provided to develop and evaluate a scheduling policy, initially implemented in a simulation environment, in the gLite Grid middleware. Our focus is on a scheduling algorithm that allocates in a fair way the available resources among the requested users or jobs. During the actual implementation of this algorithm in gLite, we observed that the validity of the information used by the scheduler for its decisions affects greatly its performance. To improve the accuracy of this information, we developed an internal feedback mechanism that operates along with the scheduling algorithm. Also, a Grid computation resource cannot be shared concurrently between different users or jobs, making it difficult to provide actual fairness. For this reason we investigated the use of virtualization technology in the gLite middleware. We did a proof‐of‐concept implementation and performed an experimental evaluation of our scheduling algorithm in a small gLite testbed that proves the validity and applicability of our solutions. Copyright © 2012 John Wiley & Sons, Ltd.

8.
朱洁  李雯睿  赵红  李滢 《计算机应用》2015,35(12):3383-3386
To address the low execution efficiency of jobs with a high resource share under existing hierarchical-queue job scheduling algorithms, a maximum resource-matching set algorithm is proposed. The algorithm analyzes job characteristics and introduces completion ratio, waiting time, priority, and rescheduling count as urgency factors, giving precedence to jobs with a high resource share or a long waiting time so as to improve fairness. It uses a dual-queue structure to select high-urgency jobs first within the total amount of available resources and, when comparing sets of jobs with different resource shares, chooses the set containing the most jobs, so as to balance the schedule. In a case comparison with the Max-min fairness algorithm, the proposed algorithm reduces the average waiting time of a job set and improves resource utilization. Experimental comparisons show that it shortens the execution time of single-type job sets with different resource shares by 18.73%, and of the high-resource-share jobs among them by 27.26%; on mixed job sets the corresponding reductions are 22.36% and 30.28%. The proposed algorithm effectively reduces the waiting of jobs with a high resource share and improves overall job execution efficiency.
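The abstract names the urgency factors (completion ratio, waiting time, priority, rescheduling count) but not how they are combined, so the sketch below uses a purely hypothetical weighted sum and a greedy fill of the available resources merely to make the selection step concrete; the weights, units, and Job fields are all assumptions.

    from dataclasses import dataclass

    @dataclass
    class Job:
        name: str
        demand: int          # resources requested
        completion: float    # fraction of the job already done, 0..1
        wait: float          # waiting time so far (hypothetical units)
        priority: int
        reschedules: int

    def urgency(j, w=(0.3, 0.3, 0.2, 0.2)):
        """Hypothetical weighted sum of the four urgency factors named in the abstract."""
        return (w[0] * j.completion + w[1] * min(j.wait / 100.0, 1.0)
                + w[2] * j.priority / 10.0 + w[3] * min(j.reschedules / 5.0, 1.0))

    def pick(jobs, available):
        """Greedily take the most urgent jobs that still fit in the available resources."""
        chosen, used = [], 0
        for j in sorted(jobs, key=urgency, reverse=True):
            if used + j.demand <= available:
                chosen.append(j.name)
                used += j.demand
        return chosen

    jobs = [Job("big", 8, 0.6, 90, 5, 2), Job("small", 2, 0.1, 10, 3, 0),
            Job("mid", 4, 0.3, 40, 7, 1)]
    print(pick(jobs, available=10))    # e.g. ['big', 'small'] under these weights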

9.
We present two improved results for scheduling batched parallel jobs on multiprocessors with mean response time as the performance metric. These results are obtained by using a generalized analysis framework in which the response time of the jobs is expressed in terms of two contributing factors that directly impact a scheduler's competitive ratio. Specifically, we show that the scheduler IGDEQ is 3-competitive against the optimal while AGDEQ is 5.24-competitive. These results improve the known competitive ratios of 4 and 10, obtained by Deng et al. and by He et al., respectively. For the common case where no fractional allotments are allowed, we show that slightly larger competitive ratios can be obtained by augmenting the schedulers with the round-robin strategy.

10.
A new approach for dynamic job scheduling in mesh-connected multiprocessor systems, which supports a multiuser environment, is proposed in this paper. Our approach combines a submesh reservation policy with a priority-based scheduling policy to obtain high performance in terms of high throughput, high utilization, and low turn-around times for jobs. This high performance is achieved at the expense of scheduling jobs in a strictly fair, FCFS fashion; in fact, the algorithm is parameterized to allow trade-offs between performance and (short-term) POPS fairness. The proposed scheduler can be used with any submesh allocation policy. A fast and efficient implementation of the proposed scheduler has also been presented. The performance of the proposed scheme has been compared with the FCFS policy, the only existing scheduling strategy for meshes, to demonstrate the effectiveness of the proposed approach. Simulation results indicate that our scheduling strategy outperforms the FCFS policy significantly. Specifically, our strategy significantly reduces the average waiting delay of jobs over the FCFS policy. The fast implementation of the proposed scheduler results in low allocation and deallocation time overhead, as well as low space overhead.

11.
We consider two single machine bicriteria scheduling problems in which jobs belong to either of two different disjoint sets, each set having its own performance measure. The problem has been referred to as interfering job sets in the scheduling literature and has also been called multi-agent scheduling, where each agent's objective function is to be minimized. In the first problem (P1) we look at minimizing total completion time and number of tardy jobs for the two sets of jobs and present a forward SPT-EDD heuristic that attempts to generate the set of non-dominated solutions. The complexity of this specific problem is NP-hard; however, some pseudo-polynomial algorithms have been suggested by earlier researchers and they have been used to compare the results from the proposed heuristic. In the second problem (P2) we look at minimizing total weighted completion time and maximum lateness. This is an established NP-hard problem for which we propose a forward WSPT-EDD heuristic that attempts to generate the set of supported points and compare our solution quality with MIP formulations. For both of these problems, we assume that all jobs are available at time zero and the jobs are not allowed to be preempted.
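To make the notion of a set of non-dominated solutions concrete, the generic sketch below filters candidate schedules down to the Pareto front for two minimized objectives (for P1, total completion time of the first job set versus number of tardy jobs in the second). It is not the SPT-EDD heuristic itself, only the dominance test against which its output would be judged; the sample points are invented.

    def pareto_front(points):
        """Keep the points not dominated by any other (both objectives are minimized)."""
        front = []
        for p in points:
            dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
            if not dominated:
                front.append(p)
        return front

    # (total completion time of set 1, number of tardy jobs in set 2) per candidate schedule
    candidates = [(120, 3), (130, 2), (125, 3), (150, 1), (160, 1)]
    print(pareto_front(candidates))   # -> [(120, 3), (130, 2), (150, 1)]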

12.
This paper presents several search heuristics and their performance in batch scheduling of parallel, unrelated machines. Identical or similar jobs are typically processed in batches in order to decrease setup times and/or processing times. The problem accounts for allotting batched work parts into unrelated parallel machines, where each batch consists of a fixed number of jobs. Some batches may contain different jobs but all jobs within each batch should have an identical processing time and a common due date. Processing time of each job of a batch is determined according to the machine group as well as the batch group to which the job belongs. Major or minor setup times are required between two subsequent batches depending on batch sequence but are independent of machines. The objective of our study is to minimize the total weighted tardiness for the unrelated parallel machine scheduling. Four search heuristics are proposed to address the problem, namely (1) the earliest weighted due date, (2) the shortest weighted processing time, (3) the two-level batch scheduling heuristic, and (4) the simulated annealing method. These proposed local search heuristics are tested through computational experiments with data from dicing operations of a compound semiconductor manufacturing facility.
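As a small, self-contained illustration of the objective and of the first two dispatching rules, the sketch below evaluates the total weighted tardiness of a sequence on a single machine and orders jobs by earliest weighted due date and by shortest weighted processing time. The batch structure, setup times, and unrelated machines of the actual problem are deliberately left out, and the job data are invented.

    def total_weighted_tardiness(sequence):
        """sequence: list of (processing_time, due_date, weight) processed back to back."""
        t, twt = 0.0, 0.0
        for p, d, w in sequence:
            t += p                            # completion time of this job
            twt += w * max(0.0, t - d)        # weighted tardiness contribution
        return twt

    jobs = [(5, 6, 1), (2, 10, 2), (4, 7, 2)]   # (processing time, due date, weight)

    ewdd = sorted(jobs, key=lambda j: j[1] / j[2])   # earliest weighted due date: d/w ascending
    swpt = sorted(jobs, key=lambda j: j[0] / j[2])   # shortest weighted processing time: p/w ascending

    print("EWDD order:", ewdd, "TWT =", total_weighted_tardiness(ewdd))
    print("SWPT order:", swpt, "TWT =", total_weighted_tardiness(swpt))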

13.
Production parallel systems are space‐shared, and resource allocation on such systems is usually performed using a batch queue scheduler. Jobs submitted to the batch queue experience a variable delay before the requested resources are granted. Predicting this delay can assist users in planning experiment time‐frames and choosing sites with shorter turnaround times, and can also help meta‐schedulers make scheduling decisions. In this paper, we present an integrated adaptive framework, Qespera, for prediction of queue waiting times on parallel systems. We propose a novel algorithm based on spatial clustering for predictions using the history of job submissions and executions. The framework uses an adaptive set of strategies for choosing either distributions or summaries of features to represent the system state and to compare with history jobs, varying the weights associated with the features for each job prediction, and selecting a particular algorithm dynamically for performing the prediction depending on the characteristics of the target and history jobs. Our experiments with real workload traces from different production systems demonstrate up to 22% reduction in average absolute error and up to 56% reduction in percentage prediction error over existing techniques. We also report prediction errors of less than 1 h for a majority of the jobs. Copyright © 2015 John Wiley & Sons, Ltd.
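Qespera's adaptive strategy selection is richer than can be shown here, but the basic idea of predicting from similar history jobs can be sketched as a nearest-neighbour lookup over a few assumed job features (requested processors, requested walltime, queue length at submission). The feature set, distance measure, and data below are simplifying assumptions, not the paper's algorithm.

    import math

    def predict_wait(target, history, k=3):
        """Average the observed waits of the k history jobs most similar to the target.

        target: (procs, walltime, queue_len); history entries are (features, observed_wait)."""
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

        nearest = sorted(history, key=lambda h: dist(h[0], target))[:k]
        return sum(wait for _, wait in nearest) / len(nearest)

    # ((requested procs, requested walltime in h, jobs ahead in queue), observed wait in h)
    history = [((64, 2, 10), 1.5), ((128, 4, 25), 6.0), ((64, 1, 5), 0.5),
               ((256, 8, 30), 12.0), ((32, 2, 8), 1.0)]
    print(predict_wait((64, 2, 12), history))   # predicted queue wait in hours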

14.
A Soft Real-Time Virtual Machine Scheduling Method Based on Resource Pre-Allocation
Virtual machine technology, one of the key technologies of cloud computing, has attracted wide attention in recent years, but the virtualization management layer introduces a semantic gap that degrades the performance of real-time and concurrent applications running in virtual machines. This paper analyzes the Credit scheduling algorithm of the Xen hypervisor and, to address its weaknesses in concurrent and soft real-time scheduling, proposes an improved scheduling algorithm and implements a prototype scheduler. The new algorithm pre-allocates a Credit share to soft real-time virtual machines and uses a dynamic time-slice mechanism to schedule soft real-time tasks periodically in a non-work-conserving manner, so that the scheduling period satisfies the tasks' execution-period requirements. By distinguishing concurrent from non-concurrent soft real-time virtual machines and applying different scheduling policies to each, it ensures that real-time tasks run smoothly while maintaining resource utilization. Test results show that the algorithm performs well when scheduling both concurrent and non-concurrent soft real-time tasks and satisfies the scheduling requirements of soft real-time applications.

15.
Using machine learning to tackle hard problems in storage systems is currently one of the hot research topics in the storage field. Reinforcement learning is a special kind of machine learning that takes environmental feedback as input and adapts to its environment: by observing changes in the environment's state, it evaluates how control decisions affect system performance and selects the optimal control policy, so intelligent RAID control based on reinforcement learning is of significant research value. Targeting the characteristics of high-performance computing applications, this paper introduces reinforcement learning into the RAID controller and proposes RL-scheduler, an intelligent I/O scheduling algorithm that uses a Q-learning policy to realize an autonomous scheduling strategy oriented to parallel applications. RL-scheduler jointly considers scheduling fairness, disk seek time, and the I/O access efficiency of MPI applications, and a multi-Q-table cross-organization method is proposed to improve the efficiency of Q-table updates. Experimental results show that RL-scheduler shortens the average I/O service time of parallel applications and raises the I/O throughput of large-scale parallel computing systems.
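The abstract does not give RL-scheduler's state or reward definitions, so the sketch below shows only the generic tabular Q-learning loop it builds on, with a made-up state (a coarse queue-depth bucket) and a made-up action set (which request class to serve next); every name here is an assumption rather than the paper's design.

    import random
    from collections import defaultdict

    ACTIONS = ["serve_mpi", "serve_local", "serve_background"]   # hypothetical action set
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

    Q = defaultdict(float)          # Q[(state, action)] -> estimated value

    def choose(state):
        """Epsilon-greedy action selection over the Q-table."""
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    def update(state, action, reward, next_state):
        """Standard Q-learning update: Q <- Q + alpha * (r + gamma * max_a' Q(s', a') - Q)."""
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

    # Toy episode: state is a coarse queue-depth bucket, reward is negative service time.
    state = "queue_short"
    for _ in range(100):
        action = choose(state)
        reward = -random.uniform(1, 5)                 # stand-in for observed I/O latency
        next_state = random.choice(["queue_short", "queue_long"])
        update(state, action, reward, next_state)
        state = next_state
    print({k: round(v, 2) for k, v in Q.items()})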

16.
In Grids, scheduling decisions are often made on the basis of jobs being either data or computation intensive: in data intensive situations jobs may be pushed to the data and in computation intensive situations data may be pulled to the jobs. This kind of scheduling, in which there is no consideration of network characteristics, can lead to performance degradation in a Grid environment and may result in large processing queues and job execution delays due to site overloads. In this paper we describe a Data Intensive and Network Aware (DIANA) meta-scheduling approach, which takes into account data, processing power and network characteristics when making scheduling decisions across multiple sites. Through a practical implementation on a Grid testbed, we demonstrate that queue and execution times of data-intensive jobs can be significantly improved when we introduce our proposed DIANA scheduler. The basic scheduling decisions are dictated by a weighting factor for each potential target location, which is a calculated function of network characteristics, processing cycles and data location and size. The job scheduler provides a global ranking of the computing resources and then selects an optimal one on the basis of this overall access and execution cost. The DIANA approach considers the Grid as a combination of active network elements and takes network characteristics as a first class criterion in the scheduling decision matrix along with computations and data. The scheduler can then make informed decisions by taking into account the changing state of the network, locality and size of the data and the pool of available processing cycles.
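The exact form of DIANA's weighting factor cannot be reconstructed from the abstract, so the sketch below only illustrates the shape of such a decision: score each candidate site by an assumed weighted combination of estimated data-transfer time, queue load, and compute demand, and pick the cheapest. The field names and weights are hypothetical.

    def site_cost(site, job, weights=(1.0, 1.0, 1.0)):
        """Hypothetical cost: transfer time + queueing estimate + compute estimate (lower is better)."""
        transfer = job["input_gb"] / site["bandwidth_gbps"]        # data movement cost
        queueing = site["queued_jobs"] / site["cpus"]              # crude load estimate
        compute = job["cpu_hours"] / site["cpus"]                  # crude execution estimate
        w1, w2, w3 = weights
        return w1 * transfer + w2 * queueing + w3 * compute

    def pick_site(sites, job):
        return min(sites, key=lambda s: site_cost(s, job))["name"]

    sites = [
        {"name": "site-a", "bandwidth_gbps": 1.0, "queued_jobs": 40, "cpus": 200},
        {"name": "site-b", "bandwidth_gbps": 0.1, "queued_jobs": 5,  "cpus": 500},
        {"name": "site-c", "bandwidth_gbps": 0.5, "queued_jobs": 80, "cpus": 100},
    ]
    job = {"input_gb": 50, "cpu_hours": 400}
    print(pick_site(sites, job))   # the site with the lowest combined cost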

17.
As large-scale MapReduce clusters are widely used for big data processing, and especially when multiple jobs must share the same Hadoop cluster, a key problem is how to minimize the cluster's working time and improve the service efficiency of MapReduce jobs. Multiple MapReduce jobs can be modeled as a single scheduling task, and observation shows that their total makespan is closely related to the order in which they are executed. The goal of this work is to design an analytical model of the job scheduling system that minimizes the total makespan of a batch of MapReduce jobs. A better scheduling strategy and its implementation are proposed that make the scheduling system satisfy the conditions of the classical Johnson's algorithm, so that Johnson's algorithm can be used to obtain an optimal makespan in linear time. For the problem of balancing two or more resource pools, a linear-time solution is also proposed that outperforms known approximation schemes. The theoretical model can be applied to improving system responsiveness, saving energy, and load balancing, as demonstrated by corresponding application examples.
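For reference, the classical two-stage Johnson's rule that the scheduling model reduces to can be sketched as follows, treating each job's aggregate map time and reduce time as the two stages; how the paper derives those per-stage times from a real cluster, and the multi-pool extension, are not shown.

    def johnson_order(jobs):
        """Johnson's rule for a two-stage flow shop.

        jobs: list of (name, stage1_time, stage2_time), e.g. (job, map_time, reduce_time).
        Jobs with stage1 < stage2 go first in increasing stage1 order;
        the rest go last in decreasing stage2 order."""
        first = sorted((j for j in jobs if j[1] < j[2]), key=lambda j: j[1])
        last = sorted((j for j in jobs if j[1] >= j[2]), key=lambda j: -j[2])
        return first + last

    def makespan(order):
        """Total time when stage 2 of a job can only start after its own stage 1
        finishes and after stage 2 of the previous job finishes."""
        t1 = t2 = 0.0
        for _, a, b in order:
            t1 += a
            t2 = max(t2, t1) + b
        return t2

    jobs = [("j1", 3, 6), ("j2", 5, 2), ("j3", 1, 2), ("j4", 7, 7)]
    order = johnson_order(jobs)
    print([j[0] for j in order], makespan(order))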

18.
朱洁  赵红  李雯睿 《计算机应用》2014,34(11):3227-3230
Single-queue job scheduling on a Hadoop cluster makes short jobs wait and leads to low resource utilization; multi-queue scheduling can balance fairness and execution efficiency, but it introduces manually configured parameters, contention for resources between queues, and algorithmic complexity. To address these problems, a three-queue job scheduling algorithm is proposed that distinguishes job types, dynamically adjusts job priorities, configures a shared resource pool, and supports job preemption, with the goals of balancing job demands, simplifying the scheduling flow for ordinary jobs, and improving parallel execution capability. It was compared with the first-in-first-out (FIFO) algorithm in three scenarios: a high proportion of short jobs, a balanced mix of all job types, and mainly ordinary jobs with occasional long and short jobs; in all three the running time of the three-queue algorithm was lower than that of FIFO. The results show that when short jobs cluster together the efficiency gain of the three-queue algorithm is not significant, but when different kinds of jobs coexist with a balanced distribution the improvement is clear, which matches the design intent of favoring short jobs, simplifying the flow for ordinary jobs, and still accommodating long jobs, and it improves overall job execution efficiency.

19.
Assembling and simultaneously using different types of distributed computing infrastructures (DCI) like Grids and Clouds is an increasingly common situation. Because infrastructures are characterized by different attributes such as price, performance, trust, and greenness, the task scheduling problem becomes more complex and challenging. In this paper we present the design of a fault-tolerant and trust-aware scheduler, which allows Bag-of-Tasks applications to be executed on elastic and hybrid DCI, following user-defined scheduling strategies. Our approach, named the Promethee scheduler, combines a pull-based scheduler with the multi-criteria Promethee decision-making algorithm. Because multi-criteria scheduling leads to a multiplication of the possible scheduling strategies, we propose SOFT, a methodology that finds the optimal scheduling strategies given a set of application requirements. The validation of this method is performed with a simulator that fully implements the Promethee scheduler and recreates a hybrid DCI environment including an Internet Desktop Grid, a Cloud, and a Best Effort Grid based on real failure traces. A set of experiments shows that the Promethee scheduler is able to maximize user satisfaction expressed according to three distinct criteria: price, expected completion time, and trust, while maximizing the useful employment of the infrastructure from the resource owner's point of view. Finally, we present an optimization which bounds the computation time of the Promethee algorithm, making the integration of the scheduler into a wide range of resource management software realistic.
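As an illustration of the multi-criteria step, here is a minimal Promethee II-style net-flow ranking using the simplest ('usual') preference function; the criteria, weights, and values are invented, and the pull-based scheduler, fault tolerance, and SOFT methodology of the paper are not represented.

    def promethee_rank(alternatives, weights, minimize):
        """Rank alternatives by Promethee II net flow using the 'usual' preference function.

        alternatives: {name: [value per criterion]}; minimize: [True/False per criterion]."""
        names = list(alternatives)
        n = len(names)

        def pref(a, b):
            # Aggregated preference of a over b across all criteria.
            total = 0.0
            for j, w in enumerate(weights):
                diff = alternatives[a][j] - alternatives[b][j]
                if minimize[j]:
                    diff = -diff
                total += w * (1.0 if diff > 0 else 0.0)   # 'usual' preference function
            return total

        net = {}
        for a in names:
            plus = sum(pref(a, b) for b in names if b != a) / (n - 1)
            minus = sum(pref(b, a) for b in names if b != a) / (n - 1)
            net[a] = plus - minus                         # net outranking flow
        return sorted(net.items(), key=lambda kv: -kv[1])

    # Criteria: price (minimize), expected completion time (minimize), trust (maximize).
    alternatives = {"desktop_grid": [0.0, 9.0, 0.5],
                    "cloud":        [5.0, 2.0, 0.9],
                    "best_effort":  [1.0, 6.0, 0.6]}
    print(promethee_rank(alternatives, weights=[0.4, 0.4, 0.2], minimize=[True, True, False]))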

20.
Multiple performance requirements need to be guaranteed in some real-time applications, such as multimedia data processing and real-time signal processing, in addition to timing constraints. Unfortunately, most conventional scheduling algorithms take only one or two of these dimensions into account. Motivated by this fact, this paper investigates the problem of providing multiple performance guarantees, including timeliness, QoS, throughput, QoS fairness and load balancing, for a set of independent tasks by dynamic ...
