Similar Documents
1.
Minimizing Mean Completion Time in a Batch Processing System (cited 8 times: 0 self-citations, 8 by others)
We consider scheduling batch processing jobs so as to minimize the mean completion time. A batch processing machine can handle up to $B$ jobs simultaneously. Each job is characterized by an arrival time and a processing time. Jobs processed in a batch have the same completion time, namely their common starting time plus the processing time of the longest job in the batch. For batch processing, non-preemptive scheduling is usually required, and we discuss this case. The batch processing problem reduces to the ordinary uniprocessor scheduling problem if $B=1$. We focus on the other extreme case, $B=+\infty$. Even for this seemingly simple extreme case, we show that the weighted version of the problem is NP-hard. In addition, we give a polynomial time algorithm for the special case in which there are only a constant number of distinct job processing times. Finally, we give a polynomial time approximation scheme for the general case.
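To make the batch model concrete, the following sketch evaluates a given batching under the $B=+\infty$ model described above: a batch starts once the machine is free and all of its jobs have arrived, every job in a batch completes when the batch's longest job does, and the objective is the mean of these completion times. The function name and the input format are illustrative assumptions, not taken from the paper.

```python
def mean_completion_time(batches):
    """Evaluate one batching of jobs on an unbounded batch machine.

    batches -- list of batches, processed in the given order; each batch is a
               list of (arrival_time, processing_time) pairs.
    Returns the mean completion time over all jobs.
    """
    machine_free = 0.0
    total, n_jobs = 0.0, 0
    for batch in batches:
        start = max(machine_free, max(a for a, _ in batch))  # wait for every arrival in the batch
        finish = start + max(p for _, p in batch)            # the longest job defines the batch length
        machine_free = finish
        total += finish * len(batch)                         # every job in the batch ends at `finish`
        n_jobs += len(batch)
    return total / n_jobs

# Two batches: {(0,2), (1,3)} then {(4,1)} -> completion times 4, 4, 5, mean 13/3
print(mean_completion_time([[(0, 2), (1, 3)], [(4, 1)]]))
```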

2.
Implications of classical scheduling results for real-time systems (cited 2 times: 0 self-citations, 2 by others)
Knowledge of complexity, fundamental limits, and performance bounds (well known for many scheduling problems) helps real-time designers choose a good design and algorithm and avoid poor ones. The scheduling problem has so many dimensions that it has no accepted taxonomy. We divide scheduling theory between uniprocessor and multiprocessor results. In the uniprocessor section, we begin with independent tasks and then consider shared resources and overload. In the multiprocessor section, we divide the work between static and dynamic algorithms.

3.
In this paper we investigate dynamic speed scaling, a technique to reduce energy consumption in variable-speed microprocessors. While prior research has focused mostly on single-processor environments, here we investigate multiprocessor settings. We study the basic problem of scheduling a set of jobs, each specified by a release date, a deadline, and a processing volume, on variable-speed processors so as to minimize the total energy consumption. We first settle the complexity of the problem when unit-size jobs have to be scheduled. More specifically, we devise a polynomial time algorithm for jobs with agreeable deadlines and prove NP-hardness results if jobs have arbitrary deadlines. For the latter setting we also develop a polynomial time algorithm achieving a constant factor approximation guarantee. Additionally, we study problem settings where jobs have arbitrary processing requirements and, again, develop constant factor approximation algorithms. We finally transform our offline algorithms into constant competitive online strategies.
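The abstract does not state the power model; a convention commonly used in the dynamic speed scaling literature (assumed here only for illustration) is that a processor running at speed $s$ consumes power $s^{\alpha}$ for some constant $\alpha>1$, so a job of processing volume $w$ executed at constant speed $s$ finishes after $w/s$ time units and consumes energy

$$E(w,s) \;=\; \frac{w}{s}\cdot s^{\alpha} \;=\; w\,s^{\alpha-1},$$

which is why lowering speeds saves energy and the scheduler must trade energy against meeting every release date and deadline.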

4.
Algorithms from scientific computing often exhibit a two-level parallelism based on potential method parallelism and potential system parallelism. We consider the parallel implementation of such algorithms on distributed memory machines. The two-level potential parallelism of an algorithm is expressed in a specification consisting of an upper-level hierarchy of multiprocessor tasks, each of which has an internal structure of uniprocessor tasks. Achieving an optimal parallel execution time requires an optimal scheduling of the multiprocessor tasks and an appropriate treatment of the uniprocessor tasks. For an important subclass of structured method parallelism, we present a scheduling methodology that takes data redistributions between multiprocessor tasks into account. As costs we use realistic parallel runtimes. The scheduling methodology is designed for integration into a parallel compiler tool. We illustrate the multitask scheduling with several examples from numerical analysis.

5.
The high power consumption of modern processors has become a major concern because it leads to decreased mission duration (for battery-operated systems), increased heat dissipation, and decreased reliability. While many techniques have been proposed to reduce power consumption for uniprocessor systems, there has been considerably less work on multiprocessor systems. In this paper, based on the concept of slack sharing among processors, we propose two novel power-aware scheduling algorithms for task sets with and without precedence constraints executing on multiprocessor systems. These scheduling techniques reclaim the time unused by a task to reduce the execution speed of future tasks and, thus, reduce the total energy consumption of the system. We also study the effect of discrete voltage/speed levels on the energy savings for multiprocessor systems and propose a new scheme of slack reservation to incorporate voltage/speed adjustment overhead in the scheduling algorithms. Simulation and trace-based results indicate that our algorithms achieve substantial energy savings on systems with variable voltage processors. Moreover, processors with a few discrete voltage/speed levels obtain nearly the same energy savings as processors with continuous voltage/speed, and the effect of voltage/speed adjustment overhead on the energy savings is relatively small.
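The core mechanism described above is slack reclamation: when a task finishes before its worst-case time, the unused time is handed to the next task, which can then run at a lower speed and voltage while still fitting its worst case into the enlarged slot. The snippet below shows that per-task speed computation in isolation, with a normalized maximum speed of 1.0 and a hypothetical minimum speed; it is an illustration of the idea, not the authors' slack-sharing algorithms.

```python
def reclaimed_speed(wcet, slack, s_max=1.0, s_min=0.3):
    """Speed for the next task after reclaiming `slack` time units of unused
    budget: stretch the task's worst-case execution time (at s_max) over its
    own slot plus the reclaimed slack. Illustrative only.
    """
    speed = s_max * wcet / (wcet + slack)
    return max(s_min, speed)            # respect the lowest supported speed level

# A task budgeted 8 time units inherits 4 units of slack from its predecessor:
# it can safely run at roughly 2/3 of the maximum speed and still meet its slot.
print(reclaimed_speed(8, 4))            # -> 0.666...
```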

6.
In this paper, we tackle the well-known problem of scheduling a collection of parallel jobs on a set of processors, either in a cluster or in a multiprocessor computer. For the makespan objective, that is, the completion time of the last job, this problem has been shown to be NP-hard, and several heuristics have already been proposed to minimize the execution time. We consider both rigid and moldable jobs. Our main contribution is the introduction of a new approach to the scheduling problem, based on recent discoveries in the field of compressed sensing. In the proposed approach, all possible positions and shapes of the jobs are encoded into a matrix, and the scheduling is performed by selecting the best columns under natural constraints. Thus, the solution to the new scheduling formulation is naturally sparse, and we may use appropriate relaxations to achieve the optimization task in the quickest possible way. Among many possible relaxation strategies, we choose to minimize the p-quasi-norm for p ∈ (0,1). Minimization of the p-quasi-norm is implemented via a successive linear programming approximation heuristic. We propose several new algorithms based on this approach and assess their efficiency through simulations. The experiments show that the scheme outperforms the classic Largest Task First list-based algorithm for scheduling small to medium instances but needs improvements to compete on larger numbers of jobs.
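For reference, the exact matrix encoding is not given in the abstract, but the p-quasi-norm of the column-selection vector (call it $x$, an assumed name) that is being minimized is

$$\|x\|_p \;=\; \Bigl(\sum_i |x_i|^p\Bigr)^{1/p}, \qquad 0<p<1,$$

which is nonconvex for $p<1$ and promotes sparser solutions than the $\ell_1$ norm; this nonconvexity is why iterative schemes such as the successive linear programming heuristic mentioned above are typically used to approximate its minimizer.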

7.
The authors study the problem of scheduling a set of tasks with known execution times and arbitrary precedence constraints on computing systems. The objective function used to measure the performance of a schedule is the sum of completion times of all tasks, called the total completion time. Finding the minimum total completion time of tasks with precedence constraints on a uniprocessor system is known to be NP-complete, let alone on a multiprocessor system (Garey and Johnson 1979). Based on the well-known A* algorithm from the field of artificial intelligence (Nilsson 1980), two algorithms are developed to efficiently solve the scheduling problem on uniprocessor and multiprocessor systems. Evaluation functions are proposed to accelerate the search for an optimal schedule, and a table called the backwards range-limited table is used to assist the computation of the evaluation function. Experimental results show that the proposed approaches find the optimal schedule with a greatly reduced search tree size, especially when bounding rules are applied.
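The abstract describes an A*-style search whose evaluation function adds the cost of the partial schedule to a lower bound on the cost of completing it. The sketch below shows one minimal uniprocessor variant of that idea: the accumulated sum of completion times serves as $g$, and an SPT relaxation that ignores the precedence constraints of the unscheduled jobs serves as an admissible $h$. The paper's own evaluation functions and its backwards range-limited table are not reproduced here.

```python
import heapq
from itertools import count

def astar_total_completion(p, preds):
    """A*-style search for the minimum total completion time of jobs with
    precedence constraints on a single machine (illustrative sketch only).

    p     -- dict: job -> processing time
    preds -- dict: job -> set of predecessor jobs
    """
    jobs = frozenset(p)

    def h(done, t):
        # Admissible lower bound: schedule the remaining jobs by SPT,
        # ignoring their precedence constraints (a relaxation of the problem).
        lb, clock = 0, t
        for q in sorted(p[j] for j in jobs - done):
            clock += q
            lb += clock
        return lb

    tie = count()                                 # tie-breaker so states are never compared
    start = (frozenset(), 0)                      # (scheduled set, current time)
    heap = [(h(*start), next(tie), start, 0)]     # entries are (f, tie, state, g)
    best_g = {start: 0}
    while heap:
        f, _, (done, t), g = heapq.heappop(heap)
        if done == jobs:
            return g                              # first goal popped is optimal (h is consistent)
        if g > best_g.get((done, t), float("inf")):
            continue                              # stale heap entry
        for j in jobs - done:
            if preds.get(j, set()) <= done:       # all predecessors already scheduled
                nt = t + p[j]
                ng = g + nt                       # add the completion time of j
                state = (done | {j}, nt)
                if ng < best_g.get(state, float("inf")):
                    best_g[state] = ng
                    heapq.heappush(heap, (ng + h(done | {j}, nt), next(tie), state, ng))
    return None

# Three jobs, job 1 must precede job 3:
p = {1: 3, 2: 1, 3: 2}
print(astar_total_completion(p, {3: {1}}))        # -> 11 (sequence 2, 1, 3)
```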

8.
The Data Locality of Work Stealing (cited 1 time: 0 self-citations, 1 by others)
This paper studies the data locality of the work-stealing scheduling algorithm on hardware-controlled shared-memory machines, where movement of data to and from the cache is solely controlled by the hardware. We present lower and upper bounds on the number of cache misses when using work stealing, and introduce a locality-guided work-stealing algorithm and its experimental validation. As a lower bound, we show that a work-stealing application that exhibits good data locality on a uniprocessor may exhibit poor data locality on a multiprocessor. In particular, we show a family of multithreaded computations $G_n$ whose members perform $\Theta(n)$ operations (work) and incur a constant number of cache misses on a uniprocessor, while even on two processors the total number of cache misses soars to $\Omega(n)$. On the other hand, we show a tight upper bound on the number of cache misses that nested-parallel computations, a large, important class of computations, incur due to multiprocessing. In particular, for nested-parallel computations, we show that on $P$ processors a multiprocessor execution incurs an expected $O(C \lceil m/s \rceil P T_\infty)$ more misses than the uniprocessor execution. Here $m$ is the execution time of an instruction incurring a cache miss, $s$ is the steal time, $C$ is the size of the cache, and $T_\infty$ is the number of nodes on the longest chain of dependencies. Based on this we give strong execution time bounds for nested-parallel computations using work stealing. For the second part of our results, we present a locality-guided work-stealing algorithm that improves the data locality of multithreaded computations by allowing a thread to have an affinity for a processor. Our initial experiments on iterative data-parallel applications show that the algorithm matches the performance of static partitioning under traditional workloads but improves performance by up to 50% over static partitioning under multiprogrammed workloads. Furthermore, locality-guided work stealing improves the performance of work stealing by up to 80%.

9.
Shachnai and Tamir, Algorithmica, 2002, 32(4): 651-678
Abstract. Modern computer systems distribute computation among several machines to speed up the execution of programs. Yet, setup and communication costs, as well as parallelism constraints, bound the number of machines that can share the execution of a given application, and the number of machines by which it can be processed simultaneously. We study the resulting scheduling problem, stated as follows. Given a set of n jobs and m uniform machines, assign the jobs to the machines subject to parallelism and machine allotment constraints, such that the overall completion time of the schedule (or makespan) is minimized. Indeed, the multiprocessor scheduling problem (where each job can be processed by a single machine) is a special case of our problem; thus, our problem is strongly NP-hard. We present a (1+α)-approximation algorithm for this problem, where α ∈ (0,1] depends on the minimal number of machine allotments and the minimal parallelism allowed for any job. Also, we show that when the maximal number of machines that can share the execution of a job is some fixed constant, our problem has a polynomial time approximation scheme; for other special cases we give optimal polynomial time algorithms. Finally, through the relation of our problem to the classic preemptive scheduling problem on multiple machines, we shed some fresh light on what is known in scheduling folklore as the power of preemption.

10.
We study a non-preemptive multi-machine flow shop min-sum scheduling problem in which n jobs must be processed at s processing centers, where processing center i consists of l_i uniform (constant-speed) machines. Each job has s operations, each of which must be processed on any one machine of the corresponding processing center; a job cannot be processed before it arrives, and all jobs follow the same route through the processing centers. The objective is to determine a feasible schedule of the jobs on the machines of each processing center so that the total weighted completion time of all jobs at the last processing center is minimized. When the job processing requirements and the job weights are independent, identically distributed, bounded random variables, we use a relaxation to a special flow shop schedule to prove that, as the number of jobs tends to infinity, a heuristic based on the shortest weighted average processing requirement among available jobs is asymptotically optimal.
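For context (not part of the abstract): the deterministic counterpart of such weighted shortest processing time rules is Smith's WSPT order, which sequences jobs by nonincreasing ratio of weight to processing time,

$$\frac{w_{[1]}}{p_{[1]}} \;\ge\; \frac{w_{[2]}}{p_{[2]}} \;\ge\; \cdots \;\ge\; \frac{w_{[n]}}{p_{[n]}},$$

and is optimal for the single-machine total weighted completion time problem; the heuristic analyzed here is a stochastic, multi-stage variant of this idea.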

11.
We consider the bounded single-machine parallel-batch scheduling problem with release dates and rejection. A job is either rejected, in which case a certain penalty has to be paid, or accepted and then processed on the machine. The objective is to minimize the sum of the makespan of the accepted jobs and the total penalty of the rejected jobs. When the jobs have identical release dates, we present a polynomial-time algorithm. When the jobs have a constant number of release dates, we give a pseudo-polynomial-time algorithm. For the general problem, we provide a 2-approximation algorithm and a polynomial-time approximation scheme.

12.
Motivated by applications in grid computing and project management, we study multiprocessor scheduling in scenarios where there is uncertainty in the successful execution of jobs when assigned to processors. We consider the problem of multiprocessor scheduling under uncertainty, in which we are given n unit-time jobs and m machines, a directed acyclic graph C giving the dependencies among the jobs, and, for every job j and machine i, the probability $p_{ij}$ of the successful completion of job j when scheduled on machine i in any given step. The goal is to find a schedule that minimizes the expected makespan, that is, the expected time at which all of the jobs are completed.

13.
We study the basic problem of preemptive scheduling of a stream of jobs on a single processor. Consider an on-line stream of jobs, and let the ith job arrive at time r(i) and have processing time p(i). If C(i) is the completion time of job i, then the flow time of i is C(i) − r(i) and the stretch of i is the ratio of its flow time to its processing time, that is, (C(i) − r(i))/p(i). Flow time measures the time that a job is in the system regardless of the service it requests; the stretch measure relies on the intuition that a job that requires a long service time must be prepared to wait longer than jobs that require small service times. We present improved algorithmic results for the average stretch metric in preemptive uniprocessor scheduling. Our first result is an off-line polynomial-time approximation scheme (PTAS) for average stretch scheduling. This improves upon the 2-approximation achieved by the on-line algorithm SRPT, which always schedules a job with the shortest remaining processing time. In recent work, Chekuri and Khanna (Proc. 34th Ann. Symp. Theory Comput., 297–305, 2002) have presented approximation algorithms for weighted flow time, which is a more general metric than average stretch; their result also yields a PTAS for average stretch. Our second set of results considers the impact of incomplete knowledge of job sizes on the performance of on-line scheduling algorithms. We show that a constant-factor competitive ratio for average stretch is achievable even if the processing times (or remaining processing times) of jobs are known only to within a constant factor of accuracy.
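As a concrete illustration of the stretch metric and of the SRPT rule mentioned above, the following sketch simulates preemptive SRPT on a single processor and reports the average stretch; the function name and the (release time, processing time) input format are assumptions made for the example, not taken from the paper.

```python
import heapq

def srpt_average_stretch(jobs):
    """Simulate SRPT (shortest remaining processing time) on one processor
    and return the average stretch. `jobs` is a list of
    (release_time, processing_time) pairs. Illustrative sketch only.
    """
    events = sorted((r, p, i) for i, (r, p) in enumerate(jobs))
    n, t, k = len(jobs), 0.0, 0
    active = []                       # heap of (remaining time, job id)
    completion = {}
    while k < n or active:
        if not active:                # machine idle: jump to the next release
            t = max(t, events[k][0])
        while k < n and events[k][0] <= t:
            r, p, i = events[k]
            heapq.heappush(active, (p, i))
            k += 1
        rem, i = heapq.heappop(active)
        # run job i until it finishes or the next job is released (preemption point)
        horizon = events[k][0] if k < n else float("inf")
        run = min(rem, horizon - t)
        t += run
        if run < rem:
            heapq.heappush(active, (rem - run, i))
        else:
            completion[i] = t
    return sum((completion[i] - r) / p for i, (r, p) in enumerate(jobs)) / n

# Three jobs released over time: stretches 1.75, 1, 1 -> average 1.25
print(srpt_average_stretch([(0, 4), (1, 1), (2, 2)]))
```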

14.
We consider a scheduling problem where a set of jobs has already been scheduled to minimize some cost objective on a single machine when the machine becomes unavailable for a period of time. The decision-maker needs to reschedule the jobs without excessively disrupting the original schedule. The disruption is measured as the maximum time deviation, for any given job, between the original and new schedules. We examine a general model where the maximum time disruption appears both as a constraint and as part of the cost objective. For a scheduling cost modeled as the makespan or maximum lateness, we provide a pseudopolynomial time optimal algorithm, a constant factor approximation algorithm, and a fully polynomial time approximation scheme. The approximation algorithm has an asymptotically achievable worst-case performance ratio of 2 and has average performance close to optimal. Managerial insights are given on how scheduling costs are affected by machine disruption and the approximation algorithm.

15.
We consider the problem of scheduling a set of nonsimultaneously available jobs on one machine. Each job has a ready time only at or after which the job can be processed. All the jobs have a common due date, which needs to be determined. The problem is to determine a due date and a schedule so as to minimize a total penalty depending on the earliness, tardiness and due date. We show that this problem is strongly NP-hard and give an efficient algorithm that finds an optimal due date and schedule when either the job sequence is predetermined or all jobs have the same processing time. We also propose three approximation algorithms for the general and special cases together with their experimental analysis.

Scope and purpose

We consider the single machine due date assignment problem for scheduling jobs which are ready for processing at different times. The problem under consideration arises in production planning and scheduling concerning the setting of appropriate due dates for a number of customer orders arriving over time. Most of the earlier publications on this subject assumed that the jobs are ready for processing simultaneously. This assumption is too restrictive for real-life production systems where jobs arrive at different times. We show that the problem with unequal ready times is NP-hard and develop fast heuristic algorithms for it, and exact algorithms for two special cases.
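The penalty structure is not spelled out in the abstract; a common form for such due date assignment problems (used here only as an illustration) charges unit costs $\alpha$, $\beta$ and $\gamma$ per unit of earliness, tardiness and due date, respectively:

$$\min_{d,\,\sigma}\ \sum_{j=1}^{n}\bigl(\alpha\,E_j + \beta\,T_j + \gamma\,d\bigr), \qquad E_j=\max(0,\,d-C_j),\quad T_j=\max(0,\,C_j-d),$$

where $C_j$ is the completion time of job $j$ under schedule $\sigma$ and $d$ is the common due date to be determined.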

16.
Because scheduling problems in real production are very large, analyzing the absolute performance ratio of their approximation algorithms is difficult and sometimes even infeasible, so it is necessary to study the asymptotic performance ratio of approximation algorithms. For the multi-machine flow shop weighted completion time scheduling problem, we use a single-machine relaxation and probabilistic analysis to prove that a heuristic based on the weighted shortest processing time requirement is asymptotically optimal.

17.
The high energy consumption of current processors, which increases heat dissipation and reduces system reliability, has become a major concern in computing. However, most existing techniques for effectively reducing energy consumption target uniprocessor systems and rarely consider multiprocessor systems. The scheduling algorithms proposed here target multiprocessor computing environments: they schedule the task with the shortest execution time first and combine this with other effective techniques (shared slack reclamation), so that real-time tasks finish within their deadlines while the energy consumption of the whole system is effectively reduced. For independent task sets and task sets with dependence constraints, two algorithms for homogeneous computing environments are proposed, STFBA1 (Shortest-Task-First-Based Algorithm) and STFBA2, together with two algorithms for multiple task sets, HSA1 (Hybrid Scheduling Algorithm) and HSA2. For a single task set, the algorithms achieve better performance (schedule length and energy consumption) than the best previously known effective algorithms. For multiple task sets, the algorithms based on the hybrid scheduling strategy clearly improve scheduling performance.

18.
Although the earliest-deadline-first (EDF) policy is known to be optimal for preemptive real-time task scheduling in uniprocessor systems, the schedulability analysis problem has recently been shown to be $\mathit{co}\mathcal{NP}$-hard. Therefore, approximation algorithms, and in particular approximations based on resource augmentation, have attracted a lot of attention for both uniprocessor and multiprocessor systems. Resource augmentation based approximations assume a certain speedup of the processor(s). Using the notion of an approximate demand bound function (dbf), in this paper we show that for uniprocessor systems the resource augmentation factor is at most $\frac{2e-1}{e} \approx 1.6322$, where $e$ is Euler's number. We approximate the dbf using a linear approximation when the analysis interval length of interest is larger than the relative deadline of the task. For identical multiprocessor systems with $M$ processors and constrained-deadline task sets, we show that the deadline-monotonic partitioning proposed by Baruah and Fisher, combined with the approximate dbf, leads to an approximation factor of $\frac{3e-1}{e}-\frac{1}{M} \approx 2.6322-\frac{1}{M}$ with respect to resource augmentation. We also show that the corresponding factor is $3-\frac{1}{M}$ for arbitrary-deadline task sets. The best known results so far were $3-\frac{1}{M}$ for constrained-deadline tasks and $4-\frac{2}{M}$ for arbitrary-deadline ones. Our tighter analysis exploits the structure of the approximate dbf directly and uses the processor utilization violations (which were ignored in all previous analyses) for analyzing resource augmentation factors. We also provide concrete input instances to show that the lower bound on the resource augmentation factor for uniprocessor systems using the above approximate dbf is 1.5, and that the corresponding bound is 2.5 for identical multiprocessor systems with an arbitrary order of fitting and a large number of processors. Further, we provide a polynomial-time approximation scheme (PTAS) to derive near-optimal solutions under the assumption that the ratio of the maximum relative deadline to the minimum relative deadline of the tasks is a constant, which is a more relaxed assumption than those required for deriving such a PTAS in the past.
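For context (standard definitions from the sporadic task model, not restated in the abstract): for a task $\tau_i$ with worst-case execution time $C_i$, relative deadline $D_i$ and minimum inter-arrival time $T_i$, the demand bound function and the linear approximation alluded to above are commonly written as

$$\mathrm{dbf}(\tau_i, t) \;=\; \max\!\left(0,\ \left\lfloor \frac{t - D_i}{T_i} \right\rfloor + 1\right) C_i, \qquad
\mathrm{dbf}^{*}(\tau_i, t) \;=\; \begin{cases} 0, & t < D_i,\\[4pt] C_i + \dfrac{C_i}{T_i}\,(t - D_i), & t \ge D_i, \end{cases}$$

so the approximation replaces the staircase by a line of slope $C_i/T_i$ once $t$ exceeds the relative deadline, matching the abstract's description of a linear approximation for analysis interval lengths larger than the relative deadline.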

19.
A composite algorithm for multi-machine scheduling with non-simultaneous machine start times (cited 1 time: 0 self-citations, 1 by others)
For the problem of scheduling h independent tasks on m machines that are not all idle at time zero so as to minimize the maximum machine completion time (makespan), the MLPT algorithm is improved to reduce its running time and the MULTIFIT algorithm is improved to reduce its number of iterations. A composite algorithm, CMM, is then proposed that uses the result of the improved MLPT as the initial upper bound for the improved MULTIFIT. The time complexity and schedule quality of MLPT, MULTIFIT and CMM are analyzed and compared theoretically. Experimental results show that the improved MULTIFIT needs fewer iterations on average than MULTIFIT, that CMM needs far fewer iterations on average than even the improved MULTIFIT, and that its schedules are no worse than the better of MULTIFIT and MLPT.
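To ground the setting, the sketch below shows plain longest-processing-time-first (LPT) list scheduling adapted to machines that become available at different times: the longest remaining task is assigned to the machine that frees up earliest. It only illustrates the basic idea behind MLPT; the paper's improved MLPT, improved MULTIFIT and the composite CMM algorithm are not reproduced.

```python
import heapq

def lpt_with_ready_times(tasks, machine_ready):
    """LPT list scheduling when the machines are not all idle at time zero:
    the longest remaining task goes to the machine that becomes available
    earliest. Returns the resulting makespan. Illustrative only.
    """
    loads = [(ready, i) for i, ready in enumerate(machine_ready)]
    heapq.heapify(loads)                     # machine that frees up earliest on top
    makespan = 0.0
    for p in sorted(tasks, reverse=True):    # LPT order
        avail, i = heapq.heappop(loads)
        finish = avail + p
        makespan = max(makespan, finish)
        heapq.heappush(loads, (finish, i))
    return makespan

# Three machines available at times 0, 2 and 5; five tasks.
print(lpt_with_ready_times([7, 5, 4, 3, 2], [0, 2, 5]))   # -> 10
```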
