Similar Documents
20 similar documents retrieved (search time: 46 ms).
1.
Models for two processor sharing policies called task scheduling processor sharing and job scheduling processor sharing are developed and analyzed. The first policy schedules each task independently and allows parallel execution of an individual program, whereas the second policy schedules each job as a unit, thereby not allowing parallel execution of an individual program. It is found that task scheduling performs better than job scheduling for most system parameter values. The performance of task scheduling processor sharing is compared to a first come first serve policy. First come first serve performs better than processor sharing over a wide range of system parameters. Processor sharing performs best when the task service time variability is high. The performance of processor sharing and first come first serve is also studied with two classes of jobs, and for the case in which a specific number of processors is statically assigned to each class.
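The contrast drawn here between first come first serve and processor sharing under varying service-time variability can be illustrated, in a much simpler setting than the paper's multiprocessor model, with standard single-server M/G/1 formulas: the Pollaczek-Khinchine expression for FCFS and the variability-insensitive mean response time of processor sharing. The Python sketch below is only that simplified illustration; the arrival rate, mean service time, and variability values are arbitrary assumptions.

```python
# Single-server illustration only (not the paper's multiprocessor model):
# mean response time under FCFS (Pollaczek-Khinchine) vs. processor sharing,
# which in an M/G/1 queue is insensitive to service-time variability.

def mean_response_fcfs(lam, mean_s, scv):
    """M/G/1 FCFS: E[T] = E[S] + lam*E[S^2]/(2*(1-rho)), with E[S^2] = (scv+1)*E[S]^2."""
    rho = lam * mean_s
    assert rho < 1, "queue must be stable"
    es2 = (scv + 1.0) * mean_s ** 2      # second moment from the squared coefficient of variation
    return mean_s + lam * es2 / (2.0 * (1.0 - rho))

def mean_response_ps(lam, mean_s):
    """M/G/1 processor sharing: E[T] = E[S]/(1 - rho), independent of variability."""
    rho = lam * mean_s
    assert rho < 1, "queue must be stable"
    return mean_s / (1.0 - rho)

lam, mean_s = 0.7, 1.0                   # assumed arrival rate and mean service time (rho = 0.7)
for scv in (0.5, 1.0, 4.0, 16.0):        # low to high service-time variability
    print(f"SCV={scv:5.1f}  FCFS={mean_response_fcfs(lam, mean_s, scv):6.2f}"
          f"  PS={mean_response_ps(lam, mean_s):6.2f}")
```

With these formulas FCFS wins at low variability and PS wins at high variability, which is the qualitative pattern the abstract reports.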

2.
In a shared-memory multiprocessor system, it may be more efficient to schedule a task on one processor than on another if relevant data already reside in a particular processor's cache. The effects of this type of processor affinity are examined. It is observed that tasks continuously alternate between executing at a processor and releasing this processor due to I/O, synchronization, quantum expiration, or preemption. Queuing network models of different abstract scheduling policies are formulated, spanning the range from ignoring affinity to fixing tasks on processors. These models are solved via mean value analysis, where possible, and by simulation otherwise. An analytic cache model is developed and used in these scheduling models to include the effects of an initial burst of cache misses experienced by tasks when they return to a processor for execution. A mean-value technique is also developed and used in the scheduling models to include the effects of increased bus traffic due to these bursts of cache misses. Only a small amount of affinity information needs to be maintained for each task. The importance of having a policy that adapts its behavior to changes in system load is demonstrated.
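The abstract's remark that only a small amount of affinity information is needed per task suggests a dispatcher that remembers just the processor a task last ran on. The sketch below is a hypothetical reading of that idea, not the authors' queueing-network model; the `reload_penalty` and `wait_estimate` parameters are invented stand-ins for the cache-reload burst and the expected wait for the preferred processor.

```python
# Hypothetical affinity-aware dispatcher: each task remembers only the processor it
# last ran on. A ready task prefers that processor when it is idle; otherwise it
# migrates only if the estimated cache-reload burst costs less than waiting.

from collections import deque

def dispatch(ready, idle_procs, last_proc, reload_penalty, wait_estimate):
    """Return (task, processor) pairs chosen in this scheduling round."""
    assignments = []
    for task in list(ready):
        pref = last_proc.get(task)               # processor likely holding the task's cache footprint
        if pref in idle_procs:                   # affinity hit: no burst of cold misses
            chosen = pref
        elif idle_procs and reload_penalty < wait_estimate:
            chosen = idle_procs[0]               # migrate and pay the cache-reload burst
        else:
            continue                             # keep waiting for the preferred processor
        idle_procs.remove(chosen)
        last_proc[task] = chosen                 # the only affinity state kept per task
        assignments.append((task, chosen))
        ready.remove(task)
    return assignments

ready = deque(["t1", "t2", "t3"])
print(dispatch(ready, ["p0", "p2"], {"t1": "p2", "t2": "p1"},
               reload_penalty=0.3, wait_estimate=0.8))   # [('t1', 'p2'), ('t2', 'p0')]
```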

3.
Distributed control systems contain tasks with a variety of timing requirements, including hard real-time, soft real-time, and non-real-time tasks. Hard real-time tasks must complete before their deadlines, otherwise catastrophic consequences can follow, so a distributed control system must be given a certain degree of fault tolerance. This paper first presents a uniprocessor algorithm for scheduling tasks with mixed timing requirements, the dual-priority-queue scheduling algorithm, and analyzes its schedulability conditions. For distributed control systems, and allowing the primary and backup versions of a task to have different execution times, a new fault-tolerant scheduling algorithm is then proposed by combining version replication with the uniprocessor scheduling algorithm. Its schedulability is analyzed, a test for deciding whether a task set is schedulable is given, and a method for setting the deadlines of primary-version tasks is provided. On this basis, a heuristic static task-allocation algorithm is used to balance the load across processors. While guaranteeing fault-tolerant schedulability, the algorithm improves the utilization of the processors in the system; simulation results show that it is effective.

4.
To address a shortcoming of the first-come-first-served scheduling policy in TinyOS, namely that important tasks cannot be serviced promptly, a scheduling policy based on multiple priority task queues is proposed. The policy expands the original single task queue into three priority queues and introduces a preemption mechanism: a task in the highest-priority queue may preempt a task running from another queue only when the preemption rule is satisfied, and preemption can only occur between different queues. This both reduces context switches and guarantees that important tasks run first. Experimental results show that the policy improves the responsiveness of TinyOS to important tasks without degrading the performance of the original system.
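A rough sketch of the mechanism described above, one FCFS queue split into three priority queues with preemption allowed only by the highest-priority queue and only across queues, is given below in Python. It is a hypothetical illustration; the actual policy would live inside the TinyOS scheduler, and the task names used here are invented.

```python
# Sketch of the three-queue policy: a HIGH task may preempt a running task only if
# that task was taken from a *different* (lower-priority) queue; tasks never preempt
# within their own queue, which limits context switches while letting important
# tasks run first.

from collections import deque

HIGH, MID, LOW = 0, 1, 2

class ThreeQueueScheduler:
    def __init__(self):
        self.queues = [deque(), deque(), deque()]
        self.running = None                      # (priority, task) currently executing, or None

    def post(self, task, prio):
        self.queues[prio].append(task)
        # Preemption rule: only a HIGH arrival preempts, and only a task from another queue.
        if prio == HIGH and self.running and self.running[0] != HIGH:
            old_prio, old_task = self.running
            self.queues[old_prio].appendleft(old_task)   # preempted task returns to its queue head
            self.running = None

    def next_task(self):
        if self.running is None:
            for prio in (HIGH, MID, LOW):
                if self.queues[prio]:
                    self.running = (prio, self.queues[prio].popleft())
                    break
        return self.running

sched = ThreeQueueScheduler()
sched.post("sample_sensor", LOW)
print(sched.next_task())              # (2, 'sample_sensor') starts running
sched.post("report_alarm", HIGH)      # preempts the LOW task
print(sched.next_task())              # (0, 'report_alarm') runs next
```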

5.
A dynamic batch optimal scheduling algorithm for real-time multiprocessor systems
A new, efficient dynamic scheduling algorithm for real-time multiprocessor systems, the dynamic batch optimal scheduling algorithm, is proposed. Instead of scheduling one task at a time as earlier algorithms do, each time the current partial schedule is extended the algorithm selects, according to certain rules, a batch of tasks from the set awaiting scheduling, constructs an objective function for running each task of the batch on each processor, and transforms the problem into an unbalanced assignment problem. It then assigns a processor to every task in the batch, or a task to every processor, in a single step, so that the assignment has the best "fitness" and the feasibility of the tasks not yet scheduled is increased. This approach greatly improves the scheduling success ratio. To study the algorithm's effectiveness, extensive simulations were carried out, the influence of varying task parameters on the scheduling success ratio was analyzed, and the success ratio was compared with that of the savings (节约) algorithm. The simulation results show that under constraints where the savings algorithm's success ratio is below 10%, the proposed algorithm's success ratio exceeds 90%, a very clear advantage.
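The central step, extending the partial schedule by solving one unbalanced assignment problem per batch, can be sketched with a rectangular cost matrix and the Hungarian method; `scipy.optimize.linear_sum_assignment` accepts such rectangular matrices and therefore matches only min(tasks, processors) pairs per round, the "one processor per task or one task per processor" behaviour described above. The cost used below (earliest finish time) is an assumed stand-in for the paper's objective function.

```python
# One batch extension: build the cost of running batch task i on processor j and
# solve the (possibly unbalanced) assignment problem in a single shot. The cost here
# is just the finish time; the paper's objective combines more factors.

import numpy as np
from scipy.optimize import linear_sum_assignment

def schedule_batch(batch_exec_times, proc_ready_times):
    """batch_exec_times[i][j]: execution time of batch task i on processor j."""
    exec_times = np.asarray(batch_exec_times, dtype=float)
    ready = np.asarray(proc_ready_times, dtype=float)
    cost = exec_times + ready                    # finish time if task i were placed on processor j
    rows, cols = linear_sum_assignment(cost)     # handles rectangular (unbalanced) matrices
    for i, j in zip(rows, cols):
        proc_ready_times[j] = cost[i, j]         # advance that processor's ready time
    return list(zip(rows.tolist(), cols.tolist()))

batch = [[4.0, 6.0],    # execution time of task 0 on p0, p1
         [3.0, 2.0],    # task 1
         [5.0, 5.0]]    # task 2 (more tasks than processors: the unbalanced case)
ready = [1.0, 0.0]      # times at which p0 and p1 become free
print(schedule_batch(batch, ready), ready)   # two tasks placed now, task 2 waits for the next batch
```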

6.
A multi-queue adaptive epidemic routing algorithm for DTNs
Epidemic routing is one of the simpler basic routing algorithms for DTNs. To address the volatility of DTN network environments and the shortcomings of epidemic routing, a multi-queue adaptive epidemic routing algorithm is proposed. It manages buffer space with multiple queues, orders the messages within each queue by a utility function, and applies different forwarding mechanisms to different queues and network conditions, thereby reducing the network load ratio, raising the delivery ratio, and lowering the delivery delay, while also providing simple QoS support. Simulations show that the algorithm outperforms the Spray and Wait and MaxProp routing algorithms.
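The buffer-management side of the algorithm, several queues with messages ordered by a utility value, can be hinted at with the small sketch below; the number of queues, the meaning of the classes, and the drop rule are illustrative assumptions rather than the paper's exact design.

```python
# Hypothetical multi-queue buffer for an epidemic router: messages sit in per-class
# queues kept sorted by utility; when the buffer overflows, the lowest-utility message
# of the lowest-priority non-empty class is dropped; forwarding visits higher classes
# and higher utilities first.

import bisect

class MultiQueueBuffer:
    def __init__(self, capacity, num_classes=3):
        self.capacity = capacity
        self.queues = [[] for _ in range(num_classes)]       # class 0 = highest priority

    def size(self):
        return sum(len(q) for q in self.queues)

    def insert(self, msg_id, cls, utility):
        bisect.insort(self.queues[cls], (utility, msg_id))
        while self.size() > self.capacity:                   # drop from the lowest-priority class...
            victim_queue = next(q for q in reversed(self.queues) if q)
            victim_queue.pop(0)                              # ...its lowest-utility message

    def forward_order(self):
        order = []
        for q in self.queues:                                # higher classes first
            order.extend(msg for _, msg in sorted(q, reverse=True))
        return order

buf = MultiQueueBuffer(capacity=3)
buf.insert("m1", 0, utility=0.9)     # e.g. QoS traffic in the top class
buf.insert("m2", 2, utility=0.4)
buf.insert("m3", 2, utility=0.7)
buf.insert("m4", 1, utility=0.5)     # buffer full: the class-2 message with utility 0.4 is dropped
print(buf.forward_order())           # ['m1', 'm4', 'm3']
```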

7.
One of the major issues that needs to be addressed in distributed memory multiprocessor (DMM) systems is the program task partitioning and scheduling problem, i.e. mapping of an application program's precedence related task threads among the processing elements of a DMM system. The optimal task partitioning and scheduling problem, with the goal of minimizing the program execution time and interprocessor communication overhead, is known to be an NP-complete problem. The paper addresses the design, development and performance evaluation of a novel static task partitioning and scheduling method called linear clustering with task duplication (LCTD). LCTD employs the linear (sequential) execution of tasks and task duplication heuristics in achieving minimized computation and interprocessor communication delays in DMMs. The superiority of the proposed LCTD algorithm is demonstrated through simulation studies and comparison against several of the existing static scheduling schemes, such as heavy node first (HNF) and linear clustering. We show that the proposed method can obtain an average of 33% improvement in program execution time and 21% improvement in processor utilization compared to linear clustering and HNF methods.

8.
徐洪智, 李仁发. 《计算机工程》, 2008, 34(23): 29-30, 4.
In-tree task graphs can represent many problems arising from divide-and-conquer algorithms such as merging and summation. This paper proposes a layered scheduling algorithm for such task graphs that uses a queue to hold the tasks being scheduled. When scheduling tasks within the same layer, a task whose predecessor set is non-empty is preferentially scheduled onto the processor of one of its predecessors; only tasks with no predecessors are considered for allocation to a new processor. Experiments show that, compared with earlier algorithms, this algorithm uses fewer processors while achieving a comparable schedule length.
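As a hedged illustration of the layered rule just described (a task with predecessors goes to the processor of one of its predecessors; only predecessor-free tasks may claim a new processor), here is a small sketch for a summation in-tree; the tree representation, level ordering, and tie-breaking are assumptions.

```python
# Process the in-tree level by level, leaves first. A task with predecessors reuses
# the processor of its first predecessor; only a predecessor-free task (a leaf) is
# given a fresh processor.

def layered_assign(preds, levels):
    """preds[v]: predecessor tasks feeding v; levels: layers of the tree, leaves first."""
    proc_of = {}
    next_proc = 0
    for layer in levels:
        for task in layer:
            parents = preds.get(task, [])
            if parents:                           # reuse a predecessor's processor
                proc_of[task] = proc_of[parents[0]]
            else:                                 # a leaf: allocate a new processor
                proc_of[task] = next_proc
                next_proc += 1
    return proc_of, next_proc

# Summation tree: s1 = a + b, s2 = c + d, root = s1 + s2
preds = {"s1": ["a", "b"], "s2": ["c", "d"], "root": ["s1", "s2"]}
levels = [["a", "b", "c", "d"], ["s1", "s2"], ["root"]]
assignment, used = layered_assign(preds, levels)
print(assignment, "processors used:", used)
```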

9.
Most parallel jobs cannot be fully parallelized. In a homogeneous parallel machine, one in which all processors are identical, the serial fraction of the computation has to be executed at the speed of any of the identical processors, limiting the speedup that can be obtained due to parallelism. In a heterogeneous architecture, the sequential bottleneck can be greatly reduced by running the sequential part of the job, or even the critical tasks, on a faster processor. This paper uses Markov chain based models to analyze the performance of static and dynamic processor assignment policies for heterogeneous architectures. Parallel jobs are assumed to be described by acyclic directed task graphs. A new static processor assignment policy, called Largest Task First Minimum Finish Time (LTFMFT), is introduced. The analysis shows that this policy is very sensitive to the degree of heterogeneity of the architecture, and that it outperforms all other policies analyzed. Three dynamic assignment disciplines are compared and it is shown that, in heterogeneous environments, the disciplines that perform better are those that consider the structure of the task graph, and not only the service demands of the individual tasks. The performance of heterogeneous architectures is compared with cost-equivalent homogeneous ones taking into account different scheduling policies. Finally, static and dynamic processor assignment disciplines are compared in terms of performance.
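The static policy named in the abstract, Largest Task First Minimum Finish Time (LTFMFT), suggests a simple greedy shape: consider tasks in decreasing size and put each one on the processor that would finish it earliest. The sketch below follows that reading for independent tasks on processors of different speeds; the paper itself schedules precedence-constrained task graphs, so treat this as a simplified caricature.

```python
# Simplified LTFMFT-style greedy for independent tasks on heterogeneous processors:
# take the largest remaining task and give it to the processor that finishes it first.
# (The precedence constraints handled by the paper are dropped here.)

def ltf_mft(work, speeds):
    """work: task sizes; speeds: relative processor speeds (bigger = faster)."""
    finish = [0.0] * len(speeds)                 # current finish time of each processor
    placement = {}
    for tid in sorted(range(len(work)), key=lambda t: -work[t]):     # largest task first
        best = min(range(len(speeds)),
                   key=lambda p: finish[p] + work[tid] / speeds[p])  # minimum finish time
        finish[best] += work[tid] / speeds[best]
        placement[tid] = best
    return placement, max(finish)

placement, makespan = ltf_mft(work=[8, 5, 4, 3, 1], speeds=[2.0, 1.0])  # one fast, one slow CPU
print(placement, "makespan:", makespan)          # the fast processor absorbs the sequential bulk
```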

10.
A duality between scheduling and routing problems associated with a set of parallel queues is established. This allows one to determine the optimal policy for either system, once it is determined for its dual system. Moreover, the evaluation of different design alternatives (e.g., allocation of buffers) can be accommodated in the same duality framework. A crucial assumption is that both systems should be Markovian. Furthermore, when there is no buffer at the controller, the scheduling policy is assumed to be preemptive. On the other hand, when there exists buffer space dedicated to the controller, both the routing and scheduling policies are assumed to be nonidling. Various applications are presented. It is shown, for instance, that the smallest residual capacity scheduling policy is optimal, as it is the dual of the well-known shortest queue routing policy.
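The concrete dual pair mentioned at the end can be written down directly: the router sends an arrival to the shortest queue, while its dual scheduler serves the queue with the smallest residual (remaining) buffer capacity. The sketch below only states the two selection rules; the queue and buffer representations are assumptions, and the Markovian optimality argument is of course not reproduced.

```python
# The two dual selection rules, stated as small functions.

def shortest_queue_route(queue_lengths):
    """Index of the queue a new arrival should join (routing side)."""
    return min(range(len(queue_lengths)), key=lambda i: queue_lengths[i])

def smallest_residual_capacity_schedule(queue_lengths, buffer_sizes):
    """Index of the non-empty queue the server should work on next (scheduling side)."""
    candidates = [i for i, q in enumerate(queue_lengths) if q > 0]
    return min(candidates, key=lambda i: buffer_sizes[i] - queue_lengths[i])

lengths, buffers = [3, 1, 4], [5, 6, 5]
print("route to:", shortest_queue_route(lengths))                        # queue 1 (shortest)
print("serve:", smallest_residual_capacity_schedule(lengths, buffers))   # queue 2 (residual 1)
```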

11.
A formal description of a complex system plays a very important role both in designing new systems and in improving and evaluating existing ones. For fault-tolerant real-time hybrid task scheduling on a processor system, modeling and performance analysis with deterministic and stochastic Petri nets (DSPNs) are proposed. First, tasks are divided into four classes according to their priority, periodicity, fault-tolerance, and real-time requirements. Then DSPNs are used to model the task scheduling and execution process, preemptive scheduling among tasks of different priorities, and processor failure and recovery, which together form a DSPN model of the fault-tolerant real-time task-scheduling process of the processor system. Finally, simulation results show that under the same load the processor utilizations are essentially the same, and that the fault-tolerant real-time scheduling algorithm effectively reduces the task miss ratio. The DSPN model of fault-tolerant real-time task scheduling lays a foundation for Petri-net modeling and analysis of complex task-scheduling systems and offers theoretical guidance for practical engineering applications.

12.
For Non-Uniform Memory Access (NUMA) multiprocessors, memory access overhead is crucial to system performance. Processor scheduling and page placement schemes, dominant factors of memory access overhead, are closely related. In particular, if the processor scheduling scheme is dynamic space-sharing, it should be considered together with the page placement scheme for efficient process execution. Most research in this area, however, has focused exclusively on either the processor scheduling scheme or the page placement scheme alone without considering the interaction between the two. This paper proposes several policies for cluster-based NUMA multiprocessors that are combinations of a processor scheduling scheme and a page placement scheme and investigates the interaction between them. The simulation results show that policies that cooperate to employ the home-cluster concept achieve the best performance. The paper also compares the best of the proposed policies with other existing dynamic processor scheduling policies. Based on our study reported here, the best policy is found to perform better than other existing policies.

13.
Multi-armed bandits with switching penalties
The multi-armed bandit problem with switching penalties (switching cost and switching delays) is investigated. It is shown that under an optimal policy, decisions about the processor allocation need to be made only at stopping times that achieve an appropriate index, the well-known “Gittins index” or a “switching index” that is defined for switching cost and switching delays. An algorithm for the computation of the “switching index” is presented. Furthermore, sufficient conditions for optimality of allocation strategies, based on limited look-ahead techniques, are established. These conditions together with the above-mentioned feature of optimal scheduling policies simplify the search for an optimal allocation policy. For a special class of multi-armed bandits (scheduling of parallel queues with switching penalties and no arrivals), it is shown that the aforementioned property of optimal policies is sufficient to determine an optimal allocation strategy. In general, the determination of optimal allocation policies remains a difficult and challenging task.

14.
A dynamic batch optimal scheduling algorithm for real-time heterogeneous systems
A dynamic batch optimal scheduling algorithm for real-time heterogeneous systems is proposed. Each time the current partial schedule is extended, the algorithm selects, according to certain rules, a batch of tasks from the set awaiting scheduling, constructs an objective function that combines various factors for running each task of the batch on each processor, and transforms the problem into an unbalanced assignment problem. It then assigns a processor to every task in the batch, or a task to every processor, in a single step, so that the assignment has the best "fitness", which increases the feasibility of the tasks not yet scheduled. This approach effectively improves the scheduling success ratio. To evaluate the algorithm's performance, extensive simulations were carried out, the influence of varying task parameters on the scheduling success ratio was analyzed, and the success ratio was compared with that of the earlier algorithm. The simulation results show that the new algorithm outperforms the old one.

15.
The rapid development of artificial intelligence places ever higher demands on high-performance computing, and task scheduling in heterogeneous computing environments has long been a key problem in that field. This paper proposes a priority-queue-division-based scheduling algorithm (PQDSA). The algorithm determines the number of priority queues from the number of entry nodes of the DAG (directed acyclic graph) task set, partitions tasks into queues according to their communication and computation costs, and then assigns critical-node tasks to suitable queues so as to produce well-formed scheduling queues, thereby increasing parallelism among tasks and reducing the makespan of the task set. In addition, tasks are placed onto processors with an insertion-based policy, making the schedule execute more efficiently. PQDSA reduces the time overhead between tasks and improves processor scheduling efficiency. Experimental comparisons with two classical algorithms show that PQDSA is clearly superior to both in makespan and scheduling efficiency.
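PQDSA's final placement step, the insertion-based policy, amounts to looking for the earliest idle gap on a processor's timeline that is long enough for the task instead of always appending at the end. A minimal sketch of that gap search follows, with an assumed representation of a processor's schedule as sorted busy intervals.

```python
# Insertion-based placement: given one processor's time-ordered busy intervals,
# return the earliest start time >= ready at which a task of the given length fits,
# either inside an idle gap or after the last interval.

def earliest_insertion(schedule, ready, length):
    """schedule: sorted list of (start, end) busy intervals on one processor."""
    t = ready
    for start, end in schedule:
        if start - t >= length:       # the idle gap [t, start) is long enough
            return t
        t = max(t, end)               # otherwise skip past this busy interval
    return t                          # append after the last interval

busy = [(0, 3), (5, 6), (9, 12)]
print(earliest_insertion(busy, ready=1, length=2))   # 3: fits in the gap [3, 5)
print(earliest_insertion(busy, ready=1, length=3))   # 6: first sufficiently long idle period
```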

16.
On the granularity and clustering of directed acyclic task graphs
The authors consider the impact of granularity on scheduling task graphs. Scheduling consists of two parts: the processor assignment of tasks, also called clustering, and the ordering of tasks for execution in each processor. The authors introduce two types of clusterings: nonlinear and linear clusterings. A clustering is nonlinear if two parallel tasks are mapped in the same cluster; otherwise it is linear. Linear clustering fully exploits the natural parallelism of a given directed acyclic task graph (DAG), while nonlinear clustering sequentializes independent tasks to reduce parallelism. The authors also introduce a new quantification of the granularity of a DAG and define a coarse grain DAG as one whose granularity is greater than one. It is proved that every nonlinear clustering of a coarse grain DAG can be transformed into a linear clustering whose parallel time is less than or equal to that of the nonlinear one. This result is used to prove the optimality of some important linear clusterings used in parallel numerical computing.
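The granularity measure referred to above compares task computation weights against communication costs, with a DAG called coarse grain when the measure exceeds one. The sketch below computes one plausible variant of such a ratio (smallest computation weight among a task's neighbours over the largest incident communication cost, minimized over tasks); the paper's exact definition may differ, so the formula here should be read as an assumption.

```python
# One simplified granularity measure for a weighted DAG; the graph is treated as
# coarse grain when the value exceeds 1.

def granularity(comp, edges):
    """comp: {task: computation weight}; edges: {(u, v): communication cost}."""
    g = float("inf")
    for task in comp:
        incident = [(u, v, c) for (u, v), c in edges.items() if task in (u, v)]
        if not incident:
            continue
        max_comm = max(c for _, _, c in incident)
        min_comp = min(min(comp[u], comp[v]) for u, v, _ in incident)
        g = min(g, min_comp / max_comm)
    return g

comp = {"a": 4, "b": 6, "c": 5}
edges = {("a", "c"): 2, ("b", "c"): 3}
g = granularity(comp, edges)
print(g, "coarse grain" if g > 1 else "fine grain")   # 1.33..., coarse grain
```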

17.
Often hard real-time systems require results that are produced on time despite the occurrence of processor failures. This paper considers a distributed system where tasks are periodic and each task occurs in multiple copies which are periodically synchronized in order to handle failures. The problem of preemptively scheduling a set of such tasks is discussed where every occurrence of a task has to be completely executed before the next occurrence of the same task. First, a static scheduling algorithm is proposed which uses periodic checkpoints to tolerate processor failures. Then, the performance of the algorithm is substantially improved by employing a mixed strategy which constructs a schedule where high frequency tasks are duplicated and low frequency tasks are periodically checkpointed. The performance of the proposed solution is evaluated in terms of the minimum achievable processor utilization due to the useful computation of the tasks. Moreover, analytical and simulation studies are used to reveal interesting trade-offs associated with the scheduling algorithm. In particular, if high frequency tasks are less than 70 percent of the total number of tasks, then the mixed strategy yields a higher processor utilization than the task duplication scheme.

18.
To address the underutilization of time resources in real-time task scheduling for phased-array radar, an adaptive task-scheduling method based on an improved time pointer is proposed. Taking the time pointer as the reference, the method selects, from all task requests on the scheduling timeline that satisfy the execution conditions at the time pointer, the one with the highest priority as the task to execute at the current moment. This effectively reduces wasted idle time and lets the phased-array radar schedule and execute more tasks within its limited time resources. Simulation comparisons with the conventional time-pointer-based scheduling method show that the proposed method improves the scheduling success ratio and time utilization, effectively enhancing the overall scheduling performance of the phased-array radar.
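The selection rule described above, at the current time pointer choose the highest-priority request whose execution window permits it to run now, can be sketched as follows; the request fields (earliest and latest start, dwell length, numeric priority) are assumed based on common phased-array scheduling formulations, not taken from the paper.

```python
# Time-pointer step: among pending radar task requests whose window covers the
# current pointer, pick the highest-priority one and advance the pointer by its dwell.

def pick_at_pointer(pointer, requests):
    """requests: dicts with earliest, latest, dwell, priority (smaller = more urgent)."""
    eligible = [r for r in requests if r["earliest"] <= pointer <= r["latest"]]
    if not eligible:
        return None
    return min(eligible, key=lambda r: r["priority"])

requests = [
    {"name": "track_update", "earliest": 0.0, "latest": 2.0, "dwell": 0.5, "priority": 1},
    {"name": "surveillance", "earliest": 0.0, "latest": 9.0, "dwell": 1.0, "priority": 3},
    {"name": "confirm",      "earliest": 3.0, "latest": 4.0, "dwell": 0.3, "priority": 2},
]
pointer = 1.0
chosen = pick_at_pointer(pointer, requests)
print(chosen["name"])              # track_update: eligible at the pointer and highest priority
pointer += chosen["dwell"]         # the pointer moves past the executed dwell
```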

19.
Cloud task scheduling is a hot topic in cloud-computing research, and the quality of the scheduling method directly affects the overall performance of the cloud platform. A task-scheduling method based on a template-based genetic algorithm (TBGA) is proposed. First, a template size, the amount of work each processor should receive, is computed from conditions such as the processor's computing speed and bandwidth; then, according to the template sizes, the tasks in the task set are partitioned into multiple subsets; finally, a genetic algorithm assigns the tasks of each subset to the corresponding processor. Experiments show that this method yields schedules with short total completion times. In simulation comparisons, TBGA reduces the completion time of the task set by about 20% relative to the Min-Min algorithm and by about 30% relative to a genetic algorithm (GA), showing that it is an effective task-scheduling algorithm.
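The first two TBGA steps, deriving each processor's template size from its computing speed and bandwidth and then cutting the task set into matching subsets, can be sketched as below. The capacity score (speed times bandwidth) and the greedy split are assumptions, and the genetic-algorithm assignment inside each subset is omitted.

```python
# Sketch of TBGA's template computation and task partitioning; the GA step that
# assigns tasks within each subset is not shown.

def template_sizes(total_work, speeds, bandwidths):
    scores = [s * b for s, b in zip(speeds, bandwidths)]   # assumed capacity score per processor
    total = sum(scores)
    return [total_work * sc / total for sc in scores]

def partition_tasks(task_lengths, templates):
    """Greedily fill each processor's template, largest tasks first."""
    subsets = [[] for _ in templates]
    remaining = list(templates)
    for tid in sorted(range(len(task_lengths)), key=lambda t: -task_lengths[t]):
        j = max(range(len(remaining)), key=lambda p: remaining[p])   # template with most room left
        subsets[j].append(tid)
        remaining[j] -= task_lengths[tid]
    return subsets

tasks = [7, 5, 4, 3, 2, 1]
speeds, bandwidths = [2.0, 1.0], [1.0, 1.0]
templates = template_sizes(sum(tasks), speeds, bandwidths)
print(templates)                          # roughly 2/3 vs 1/3 of the total work
print(partition_tasks(tasks, templates))  # task indices destined for each processor
```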

20.
Adaptive location policies for global scheduling
Two important components of a global scheduling algorithm are its transfer policy and its location policy. While the transfer policy determines whether a task should be transferred, the location policy determines where it should be transferred. Based on their location policies, global scheduling algorithms can be broadly classified as receiver-initiated, sender-initiated, or symmetrically-initiated. The relative performance of these classes of algorithms has been shown to depend on the system workload. We present two adaptive location policies for global scheduling in distributed systems. These location policies are general, and can be used in conjunction with many existing transfer policies. By adapting to the system workload, the proposed policies capture the advantages of both sender-initiated and receiver-initiated policies. In addition, by adaptively directing their search activities toward the nodes that are most likely to be suitable counterparts in task transfers, the proposed policies provide short transfer latency and low overhead, and more important, high probability of finding a suitable counterpart if one exists. These properties allow these policies to deliver good performance over a very wide range of system operating conditions. The proposed policies are compared with nonadaptive policies, and are shown to considerably improve performance and to avoid causing system instability.
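The adaptive idea, let a node's own load decide whether it behaves as a sender-initiated or a receiver-initiated searcher and direct probes at nodes recently seen to be suitable counterparts, can be caricatured in a few lines; the thresholds, candidate lists, and probe mechanics below are assumptions, not the paper's protocol.

```python
# Caricature of an adaptive location policy: lightly loaded nodes look for senders,
# heavily loaded nodes look for receivers, and probes go to nodes recently observed
# to be likely counterparts before falling back to random probing.

import random

def choose_probe(my_load, low_mark, high_mark, likely_senders, likely_receivers, all_nodes):
    if my_load < low_mark and likely_senders:
        return "receiver-initiated", random.choice(likely_senders)    # ask a busy node for work
    if my_load > high_mark and likely_receivers:
        return "sender-initiated", random.choice(likely_receivers)    # offer work to an idle node
    if my_load < low_mark or my_load > high_mark:
        return "random-probe", random.choice(all_nodes)               # no promising candidate known
    return "no-transfer", None                                        # moderate load: do nothing

random.seed(0)
print(choose_probe(my_load=9, low_mark=2, high_mark=6,
                   likely_senders=["n3"], likely_receivers=["n1", "n7"],
                   all_nodes=["n1", "n3", "n7"]))   # ('sender-initiated', ...)
```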
