Similar Documents
1.
In prior work on soft real-time (SRT) multiprocessor scheduling, tardiness bounds have been derived for a variety of scheduling algorithms, most notably, the global earliest-deadline-first (G-EDF) algorithm. In this paper, we devise G-EDF-like (GEL) schedulers, which have identical implementations to G-EDF and therefore the same overheads, but that provide better tardiness bounds. We discuss how to analyze these schedulers and propose methods to determine scheduler parameters to meet several different tardiness bound criteria. We employ linear programs to adjust such parameters to optimize arbitrary tardiness criteria, and to analyze lateness bounds (lateness is related to tardiness). We also propose a particular scheduling algorithm, namely the global fair lateness (G-FL) algorithm, to minimize maximum absolute lateness bounds. Unlike the other schedulers described in this paper, G-FL only requires linear programming for analysis. We argue that our proposed schedulers, such as G-FL, should replace G-EDF for SRT applications.
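As a rough illustration of the GEL idea, the sketch below dispatches jobs by per-task priority points instead of raw deadlines; the relative priority point shown for G-FL, D_i − ((m−1)/m)·C_i, follows the formula reported in the fair-lateness literature, while the task data and helper names are illustrative only.

```python
# Minimal sketch of a GEL (G-EDF-like) dispatcher: the only change versus
# G-EDF is the per-task relative priority point used instead of the deadline.
# Task parameters and the g_fl_priority_point() helper are illustrative.

def g_fl_priority_point(relative_deadline, wcet, m):
    """G-FL's reported choice: Y_i = D_i - ((m - 1) / m) * C_i."""
    return relative_deadline - (m - 1) / m * wcet

def pick_next_jobs(ready_jobs, m):
    """Run the m pending jobs with the earliest absolute priority points."""
    return sorted(ready_jobs, key=lambda j: j["release"] + j["pp"])[:m]

if __name__ == "__main__":
    m = 2  # number of processors
    jobs = [
        {"name": "T1", "release": 0.0, "pp": g_fl_priority_point(10.0, 4.0, m)},
        {"name": "T2", "release": 0.0, "pp": g_fl_priority_point(12.0, 8.0, m)},
        {"name": "T3", "release": 1.0, "pp": g_fl_priority_point(9.0, 2.0, m)},
    ]
    print([j["name"] for j in pick_next_jobs(jobs, m)])
```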

2.
A Time-Sharing EDF Algorithm and Its Application in Multimedia Operating Systems
A new CPU scheduling algorithm, time-sharing EDF (Earliest Deadline First), is proposed. The algorithm guarantees that hard real-time tasks never miss their deadlines and is easy to implement in a time-sharing system. On the basis of time-sharing EDF, a new hierarchical CPU scheduling algorithm, HRFSFQ, is then proposed; when used in a multimedia operating system, it guarantees the QoS of every class of tasks. Extensive experiments demonstrate the effectiveness and correctness of the above algorithms.
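The abstract does not give the details of the time-sharing variant, but the underlying EDF rule it builds on is standard: always run the ready job with the earliest absolute deadline. A minimal sketch with illustrative job data:

```python
import heapq

# Plain EDF core that a time-sharing variant would build on: the ready job
# with the earliest absolute deadline runs first. Job data is illustrative.
def edf_order(ready_jobs):
    heap = [(job["deadline"], job["name"]) for job in ready_jobs]
    heapq.heapify(heap)
    while heap:
        deadline, name = heapq.heappop(heap)
        yield name, deadline

jobs = [
    {"name": "video_frame", "deadline": 33.0},
    {"name": "audio_chunk", "deadline": 20.0},
    {"name": "batch_job", "deadline": 500.0},
]
print(list(edf_order(jobs)))  # audio_chunk, video_frame, batch_job
```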

3.
Task scheduling is a fundamental issue in achieving high efficiency in cloud computing. However, efficient scheduling algorithm design and implementation is a big challenge (as the general scheduling problem is NP-complete). Most existing task-scheduling methods for cloud computing only consider task resource requirements for CPU and memory, without considering bandwidth requirements. In order to obtain better performance, in this paper, we propose a bandwidth-aware algorithm for divisible task scheduling in cloud-computing environments. A nonlinear programming model for the divisible task-scheduling problem under the bounded multi-port model is presented. By solving this model, the optimized allocation scheme that determines the proper number of tasks assigned to each virtual resource node is obtained. On the basis of the optimized allocation scheme, a heuristic algorithm for divisible load scheduling, called the bandwidth-aware task-scheduling (BATS) algorithm, is proposed. The performance of the algorithm is evaluated using the CloudSim toolkit. Experimental results show that, compared with the fair-based task-scheduling algorithm, the bandwidth-only task-scheduling algorithm, and the computation-only task-scheduling algorithm, the proposed algorithm (BATS) has better performance. Copyright © 2012 John Wiley & Sons, Ltd.
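The paper's nonlinear program is not reproduced in the abstract; the sketch below only shows the much-simplified divisible-load intuition behind bandwidth-aware splitting: if nodes start together and should finish together, each node's load fraction is inversely proportional to its per-unit transfer-plus-compute cost. Node rates are made-up numbers, and the bounded multi-port details are ignored.

```python
# Hedged illustration of bandwidth-aware divisible-load splitting: choose
# load fractions so that nodes which all start together also finish together.
# Per-unit cost of node i is (1/bandwidth_i + 1/speed_i); fractions are
# inversely proportional to that cost. Rates below are made up.
def split_load(nodes):
    weights = {n: 1.0 / (1.0 / bw + 1.0 / cpu) for n, (bw, cpu) in nodes.items()}
    total = sum(weights.values())
    return {n: w / total for n, w in weights.items()}

nodes = {"vm1": (100.0, 8.0), "vm2": (50.0, 8.0), "vm3": (100.0, 2.0)}
print({n: round(f, 3) for n, f in split_load(nodes).items()})
```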

4.
Max-Min Fair Scheduling in Input-Queued Switches
Fairness in traffic management can improve the isolation between traffic streams, offer a more predictable performance, eliminate transient bottlenecks, mitigate the effect of certain kinds of denial-of-service attacks, and serve as a critical component of a quality-of-service strategy to achieve certain guaranteed services such as delay bounds and minimum bandwidths. In this paper, we choose a popular notion of fairness called max-min fairness and provide a rigorous definition in the context of input-queued switches. We show that being fair at the output ports alone or at the input ports alone or even at both input and output ports does not actually achieve an overall max-min fair allocation of bandwidth in a switch. Instead, we propose a new algorithm that can be readily implemented in a distributed fashion at the input and output ports to determine the exact max-min fair rate allocations for the flows through the switch. In addition to proving the correctness of the algorithm, we propose a practical scheduling strategy based on our algorithm. We present simulation results, using both real traffic traces and synthetic traffic, to evaluate the fairness of a variety of popular scheduling algorithms for input-queued switches. The results show that our scheduling strategy achieves better fairness than other known algorithms for input-queued switches and, in addition, achieves throughput performance very close to that of the best schedulers.
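For reference, the textbook max-min fair allocation on a single shared resource can be computed by progressive filling: repeatedly give every unsatisfied flow an equal share of the remaining capacity, capping flows at their demands. The switch-wide algorithm in the paper couples such computations across input and output ports; the sketch below shows only the single-resource case with made-up demands.

```python
# Textbook progressive-filling (water-filling) computation of the max-min
# fair allocation of one resource among flows with fixed demands.
def max_min_fair(capacity, demands):
    alloc = {f: 0.0 for f in demands}
    active = {f for f, d in demands.items() if d > 0}
    remaining = capacity
    while active and remaining > 1e-9:
        n = len(active)
        # Equal increment for every active flow, limited by the smallest
        # unmet demand so no flow is pushed past what it asked for.
        increment = min(remaining / n,
                        min(demands[f] - alloc[f] for f in active))
        for f in list(active):
            alloc[f] += increment
            if alloc[f] >= demands[f] - 1e-9:
                active.discard(f)
        remaining -= increment * n
    return alloc

demands = {"A": 0.1, "B": 0.6, "C": 0.8}
print({f: round(v, 3) for f, v in max_min_fair(1.0, demands).items()})
# A keeps its 0.1 demand; B and C split the rest: {'A': 0.1, 'B': 0.45, 'C': 0.45}
```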

5.
Energy efficient scheduling of parallel tasks on multiprocessor computers
In this paper, scheduling parallel tasks on multiprocessor computers with dynamically variable voltage and speed is addressed as a combinatorial optimization problem. Two problems are defined, namely, minimizing schedule length with an energy consumption constraint and minimizing energy consumption with a schedule length constraint. The first problem has applications in general multiprocessor and multicore processor computing systems where energy consumption is an important concern and in mobile computers where energy conservation is a main concern. The second problem has applications in real-time multiprocessing systems and environments where timing constraints are a major requirement. Our scheduling problems are defined such that the energy-delay product is optimized by fixing one factor and minimizing the other. It is noticed that power-aware scheduling of parallel tasks has rarely been discussed before. Our investigation in this paper makes an initial attempt at energy-efficient scheduling of parallel tasks on multiprocessor computers with dynamic voltage and speed. Our scheduling problems contain three nontrivial subproblems, namely, system partitioning, task scheduling, and power supplying. Each subproblem should be solved efficiently so that heuristic algorithms with overall good performance can be developed. The above decomposition of our optimization problems into three subproblems makes the design and analysis of heuristic algorithms tractable. A unique feature of our work is to compare the performance of our algorithms with optimal solutions analytically and to validate our results experimentally, not to compare the performance of heuristic algorithms among themselves only experimentally. The harmonic system partitioning and processor allocation scheme is used, which divides a multiprocessor computer into clusters of equal sizes and schedules tasks of similar sizes together to increase processor utilization. A three-level energy/time/power allocation scheme is adopted for a given schedule, such that the schedule length is minimized by consuming a given amount of energy or the energy consumed is minimized without missing a given deadline. The performance of our heuristic algorithms is analyzed, and accurate performance bounds are derived. Simulation data which validate our analytical results are also presented. It is found that our analytical results provide very accurate estimates of the expected normalized schedule length and the expected normalized energy consumption, and that our heuristic algorithms are able to produce solutions very close to optimum.
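A common power model behind this kind of analysis (a hedged simplification, not the paper's exact formulation) is dynamic power p = s^α with α ≈ 3: running w units of work at speed s takes w/s time and consumes w·s^(α−1) energy, so fixing either the energy budget or the deadline determines the best constant speed for the other quantity. A small sketch with illustrative numbers:

```python
# Hedged sketch of the speed/energy trade-off under the common p = s**ALPHA
# power model: time = work / speed, energy = work * speed**(ALPHA - 1).
ALPHA = 3.0  # assumed exponent of the dynamic power function

def time_under_energy_budget(work, energy):
    """Schedule length at the best constant speed for a fixed energy budget."""
    speed = (energy / work) ** (1.0 / (ALPHA - 1.0))
    return work / speed, speed

def energy_under_deadline(work, deadline):
    """Energy needed at the best constant speed for a fixed schedule length."""
    speed = work / deadline
    return work * speed ** (ALPHA - 1.0), speed

print(time_under_energy_budget(work=10.0, energy=40.0))
print(energy_under_deadline(work=10.0, deadline=5.0))
```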

6.
Utilization Bounds for EDF Scheduling on Real-Time Multiprocessor Systems
The utilization bound for earliest deadline first (EDF) scheduling is extended from uniprocessors to homogeneous multiprocessor systems with partitioning strategies. First results are provided for a basic task model, which includes periodic and independent tasks with deadlines equal to periods. Since the multiprocessor utilization bounds depend on the allocation algorithm, different allocation algorithms have been considered, ranging from simple heuristics to optimal allocation algorithms. As multiprocessor utilization bounds for EDF scheduling depend strongly on task sizes, all these bounds have been obtained as a function of a parameter which takes task sizes into account. Theoretically, the utilization bounds for multiprocessor EDF scheduling can be considered a partial solution to the bin-packing problem, which is known to be NP-complete. The basic task model is extended to include resource sharing, release jitter, deadlines less than periods, aperiodic tasks, non-preemptive sections, context switches, and mode changes.
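A minimal sketch of the partitioned-EDF setting such bounds are derived for: each task is placed by a bin-packing heuristic (first-fit here, one of the heuristics the paper considers), and a processor accepts a task as long as its total utilization stays at or below the uniprocessor EDF bound of 1. Task utilizations are made up.

```python
# First-fit partitioning of implicit-deadline periodic tasks under EDF:
# a processor can accept a task while its utilization sum stays <= 1.
def first_fit_edf(utilizations, num_procs):
    procs = [0.0] * num_procs
    assignment = {}
    for task, u in utilizations.items():
        for p in range(num_procs):
            if procs[p] + u <= 1.0 + 1e-12:
                procs[p] += u
                assignment[task] = p
                break
        else:
            return None  # allocation failed; no schedulability guarantee
    return assignment

tasks = {"t1": 0.6, "t2": 0.5, "t3": 0.4, "t4": 0.3}
print(first_fit_edf(tasks, num_procs=2))
```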

7.
Decay usage scheduling is a priority- and usage-based approach to CPU allocation in which preference is given to processes that have consumed little CPU in the recent past. The author develops an analytic model for decay usage schedulers running compute-bound workloads, such as those found in many engineering and scientific environments; the model is validated against measurements of a Unix system. This model is used in two ways. First, ways to parameterize decay usage schedulers are studied to achieve a wide range of service rates. Doing so requires a fine granularity of control and a large range of control. The results show that, for a fixed representation of process priorities, a larger range of control makes the granularity of control coarser, and a finer granularity of control decreases the range of control. A second use of the analytic model is to construct a low-overhead algorithm for achieving service-rate objectives. Existing approaches require adding a feedback loop to the scheduler. This overhead is avoided by exploiting the feedback already present in decay usage schedulers. Using both empirical and analytical techniques, it is shown that the algorithm is effective and that it provides fairness when the system is over- or under-loaded.
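The abstract does not spell out the scheduler's update rules; the sketch below is the classic decay-usage mechanism found in BSD-style Unix schedulers that the model describes: a process's priority worsens with accumulated CPU usage, and usage is decayed periodically so preference returns to processes that have been idle. The constants and process data are illustrative.

```python
# Hedged sketch of decay-usage scheduling: priority = base + weight * usage;
# usage is decayed every period, so processes that consumed little CPU
# recently float back to a better (numerically lower) priority.
DECAY = 0.5         # fraction of usage kept each decay period (illustrative)
USAGE_WEIGHT = 0.5  # how strongly usage worsens priority (illustrative)

def tick(procs, running, quantum=1.0):
    procs[running]["usage"] += quantum        # charge CPU time to the runner

def decay_and_reprioritize(procs):
    for p in procs.values():
        p["usage"] *= DECAY
        p["prio"] = p["base"] + USAGE_WEIGHT * p["usage"]

def pick(procs):
    return min(procs, key=lambda name: procs[name]["prio"])

procs = {n: {"base": 10.0, "usage": 0.0, "prio": 10.0} for n in ("a", "b")}
for _ in range(4):
    tick(procs, pick(procs))
    decay_and_reprioritize(procs)
print({n: round(p["prio"], 2) for n, p in procs.items()})
```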

8.
An Efficient Real-Time Scheduling Algorithm for Fault-Tolerant Multiprocessors
For independent, preemptible hard real-time tasks on multiprocessors that achieve fault tolerance through primary/backup task copies, an efficient scheduling algorithm, TPFTRM (task-partition-based fault-tolerant rate-monotonic), is proposed. The algorithm extends the uniprocessor real-time RM algorithm to fault-tolerant multiprocessors and never uses actively executed backup copies during scheduling; it uses only passively executed backups and backups that overlap with their primaries, thereby exploiting backup overlapping and deallocation to the greatest extent and improving scheduling performance. In addition, TPFTRM partitions the task set into two disjoint subsets according to task load for allocation, and partitions the processor set into three disjoint subsets according to the task copies each processor schedules, which makes the algorithm easy to understand and implement and reduces the running time required for scheduling. Simulation experiments scheduled task sets with various periods and loads. The results show that, compared with known algorithms of the same kind, TPFTRM not only markedly reduces the number of processors required to schedule task sets with the same parameters, but also reduces the scheduling running time, confirming the algorithm's efficiency.
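As background for the rate-monotonic side of such a scheme, the sketch below shows the classic Liu-and-Layland utilization test a partitioning heuristic can use when deciding whether another primary or backup copy still fits on a processor; the test itself is standard (and sufficient, not necessary), while its exact use inside TPFTRM is not detailed in the abstract.

```python
# Classic RM utilization-bound test: n tasks are schedulable on one processor
# under rate-monotonic priorities if total utilization <= n * (2**(1/n) - 1).
def rm_bound(n):
    return n * (2.0 ** (1.0 / n) - 1.0)

def fits_on_processor(current_utils, new_util):
    utils = current_utils + [new_util]
    return sum(utils) <= rm_bound(len(utils))

print(fits_on_processor([0.3, 0.2], 0.2))  # True:  0.7 <= 3*(2^(1/3)-1) ~ 0.78
print(fits_on_processor([0.5, 0.3], 0.2))  # False: 1.0 >  0.78
```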

9.
In this paper, we propose, design, implement, and evaluate a CPU scheduler and a memory management scheme for interactive soft real-time applications. Our CPU scheduler provides a new CPU reservation algorithm that is based on the well-known Constant Bandwidth Server (CBS) algorithm but is more flexible in allocating CPU time to multiple concurrently executing real-time applications. Our CPU scheduler also employs a new multicore scheduling algorithm, an extension of Earliest Deadline First with Window-constrained Migration (EDF-WM), to improve the absolute CPU bandwidth available in reservation-based systems. Furthermore, we propose a memory reservation mechanism incorporating a new paging algorithm, called Private-Shared-Anonymous Paging (PSAP). This PSAP algorithm allows interactive real-time applications to remain responsive under memory pressure without wasting or compromising the memory resources available to contending best-effort applications. Our evaluation demonstrates that our CPU scheduler enables the simultaneous playback of multiple movies at more stable frame rates than existing real-time CPU schedulers, while also improving the ratio of hard-deadline guarantees for randomly generated task sets. Furthermore, we show that our memory management scheme can protect the simultaneous playback of multiple movies from the interference introduced by memory pressure, whereas these movies can become unresponsive under existing memory management schemes.
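The reservation algorithm extends CBS; the sketch below shows only the textbook CBS budget rule it starts from: a server consumes budget as its application runs, and when the budget is exhausted the deadline is postponed by one period and the budget refilled, which preserves the reserved bandwidth budget/period. The flexible multi-reservation and EDF-WM extensions described in the paper are not modeled here, and the parameters are illustrative.

```python
# Textbook Constant Bandwidth Server (CBS) budget accounting: exhausting the
# budget postpones the server deadline by one period and refills the budget.
class CBS:
    def __init__(self, budget, period, now=0.0):
        self.max_budget = budget
        self.period = period
        self.budget = budget
        self.deadline = now + period

    def run(self, exec_time):
        """Charge exec_time of served work against the reservation."""
        self.budget -= exec_time
        while self.budget <= 0.0:
            self.budget += self.max_budget   # replenish
            self.deadline += self.period     # and postpone the deadline

server = CBS(budget=2.0, period=10.0)
server.run(3.5)
print(server.budget, server.deadline)  # 0.5 20.0
```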

10.
We design a task mapper, TPCM, for assigning tasks to virtual machines, and an application-aware virtual machine scheduler, TPCS, oriented toward parallel computing, to achieve high performance in virtual computing systems. To solve the problem of mapping tasks to virtual machines, a virtual machine mapping algorithm (VMMA) in TPCM is presented to achieve load balance in a cluster. Based on such mapping results, TPCS is constructed from three components: a middleware supporting application-driven scheduling, a device driver in the guest OS kernel, and a virtual machine scheduling algorithm. These components are implemented in the user space, the guest OS, and the CPU virtualization subsystem of the Xen hypervisor, respectively. In TPCS, the progress statuses of tasks are transmitted from the user space to the underlying kernel, enabling the virtual machine scheduling policy to schedule based on task progress. This policy aims to trade the completion time of tasks for resource utilization. Experimental results show that TPCM can mine the parallelism among tasks to implement the mapping from tasks to virtual machines based on the relations among subtasks. The TPCS scheduler can complete the tasks in a shorter time than Credit and other schedulers, because it uses task progress to ensure that the tasks in virtual machines complete simultaneously, thereby reducing the time spent pending and in synchronization, communication, and switching. Therefore, parallel tasks can collaborate with each other to achieve higher resource utilization and lower overheads. We conclude that the TPCS scheduler overcomes the shortcomings of existing algorithms in perceiving the progress of tasks, making it better than schedulers currently used in parallel computing.

11.
To accommodate the continual changes of task sets in real systems and the non-negligible overhead of state switching, and targeting the periodic task model common in multicore and multiprocessor systems, an overhead-aware, energy-saving real-time scheduling algorithm based on dynamic slack reclamation, DSROM, is proposed. It performs energy-aware scheduling and dynamic slack reclamation at the start of each TL plane and whenever a task completes early, achieving a reasonable trade-off between real-time constraints and energy savings without violating the schedulability of the periodic task set. Simulation results show that DSROM not only preserves the optimal schedulability of the periodic task set but, once the total load of the task set exceeds a certain value, also saves more energy overall than existing methods, by up to nearly 20%.
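A hedged sketch of the slack-reclamation idea the algorithm is built around (not DSROM's exact TL-plane bookkeeping): when a job finishes ahead of its budgeted worst-case time, the unused budget is handed to the next job, which can then run at a lower frequency while still finishing inside its original allotment. All numbers are illustrative.

```python
# Hedged sketch of dynamic slack reclamation for DVFS: a job that finishes
# early donates its unused budget to the next job, which scales its
# frequency down accordingly. Budgets and times are illustrative.
def reclaimed_speed(next_wcet, next_budget, slack, nominal_speed=1.0):
    """Lower speed the next job may use given the inherited slack."""
    return nominal_speed * next_wcet / (next_budget + slack)

budget, actual = 5.0, 3.2          # first job finishes 1.8 time units early
slack = budget - actual
print(round(reclaimed_speed(next_wcet=4.0, next_budget=4.0, slack=slack), 3))
# ~0.69: the next job can run at about 69% speed and still fit in its window
```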

12.
Traditional resource scheduling methods schedule mainly by extracting and classifying resource features, ignoring the effect that the correlations between resource features have on the scheduling results, which leads to inaccurate scheduling. This paper presents a model for multimedia cloud resource scheduling based on multi-device constraints. In this model the objective function is no longer constrained only by CPU computing capacity and minimized completion time; instead, minimizing the time consumed by the CPU, memory, and other peripheral operations is taken as the scheduling objective. The constraints are then solved jointly to obtain the optimal mapping between virtual and physical machines. Moreover, a regressive dimensionality-reduction algorithm is designed for this scheduling model to handle the high-dimensional problems introduced by the multi-device constraints. Simulation results show that the improved algorithm performs better than the traditional algorithm, is efficient, and exhibits good convergence.

13.
The provision of Quality of Service (QoS) in interconnection networks is required by new multimedia and time-sensitive applications, which are very important for recent utility computing data centers (UCDCs) using high-performance networks. These interconnection networks are switch-based and face high demands in terms of bandwidth, delay, and delivery over short distances. A key component for networks with QoS support is the egress link scheduling algorithm. Apart from providing good performance in terms of, for example, low end-to-end delay (also called latency) and fair bandwidth allocation, an ideal scheduling algorithm implemented in a high-performance network with QoS support should satisfy another important property: low computational and implementation complexity. In this paper, we propose specific implementations (taking into account the characteristics of current high-performance networks) of several fair-queuing scheduling algorithms and compare their complexity in terms of silicon area and computation delay. In order to carry out this comparison, we have devised our own hardware comparison methodology. Following this methodology, we have performed our own hardware implementation of the different schedulers. We have modeled the schedulers using the Handel-C language and employed the DK design suite tool from Celoxica in order to obtain hardware estimates of silicon area and arbitration time.

14.
Over the past decade, the problem of fair bandwidth allocation among contending traffic flows on a link has been extensively researched. However, as these flows traverse a computer network, they share different kinds of resources (e.g., links, buffers, router CPU). The ultimate goal should hence be overall fairness in the allocation of multiple resources rather than a specific resource. Moreover, conventional resource scheduling algorithms depend strongly upon the assumption of prior knowledge of network parameters and cannot handle variations or lack of information about these parameters. In this paper, we present a novel scheduler called the composite bandwidth and CPU scheduler (CBCS), which jointly allocates the fair share of the link bandwidth as well as processing resource to all competing flows. CBCS also uses a simple and adaptive online prediction scheme for reliably estimating the processing times of the incoming data packets. Analytically, we prove that CBCS is efficient, with a per-packet work complexity of O(1). Finally, we present simulation results and experimental outcomes from a real-world implementation of CBCS on an Intel IXP 2400 network processor. Our results highlight the improved performance achieved by CBCS and demonstrate the ease with which it can be implemented on off-the-shelf hardware.
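The abstract names the ingredients but not the formulas; the sketch below is a hedged illustration of two of them: a per-packet service demand that combines link time with predicted CPU time, and a simple online estimator (an exponential moving average here, as one plausible "adaptive prediction" choice, not necessarily the paper's) for the CPU cost of incoming packets. Rates and packet sizes are made up.

```python
# Hedged sketch of jointly accounting link and CPU demand per packet, with an
# exponentially weighted moving average standing in for an online CPU-time
# predictor. Rates and packet sizes are illustrative.
LINK_BPS = 1e9             # link rate, bits per second (illustrative)
CPU_CYCLES_PER_SEC = 1.4e9 # processor rate (illustrative)

class CpuPredictor:
    def __init__(self, alpha=0.25, initial=1000.0):
        self.alpha = alpha
        self.estimate = initial   # estimated cycles per packet

    def update(self, measured_cycles):
        self.estimate += self.alpha * (measured_cycles - self.estimate)

def combined_demand(bits, predictor):
    """Seconds of 'service' a packet needs across link plus CPU."""
    return bits / LINK_BPS + predictor.estimate / CPU_CYCLES_PER_SEC

pred = CpuPredictor()
pred.update(2000.0)
print(combined_demand(bits=12000, predictor=pred))
```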

15.
We present an approach to designing cellular automata-based multiprocessor scheduling algorithms in which knowledge about the scheduling process is extracted. We consider the simplest case, in which the multiprocessor system is limited to two processors. To design cellular automata corresponding to a given program graph, we propose a generic definition of program graph neighborhood, transparent to the various kinds, sizes, and shapes of program graphs. The cellular automata-based scheduler works in two modes: learning mode and operation mode. Discovered rules are typically suitable for sequential cellular automata working as a scheduler, while the most interesting and promising feature of cellular automata is their massive parallelism. To overcome the difficulties in evolving parallel cellular automata rules, we propose using a coevolutionary genetic algorithm. Rules discovered this way enable us to design effective parallel schedulers. We present a number of experimental results for both sequential and parallel scheduling algorithms discovered in the context of a cellular automata-based scheduling system.

16.
This paper studies runtime partitioning, scheduling and load balancing techniques for improving performance of online WWW-based information systems such as digital libraries. The main performance bottlenecks of such a system are caused by the server computing capability and Internet bandwidth. Our observations and solutions are based on our experience with the Alexandria Digital Library (ADL) testbed at UCSB, which provides online browsing and processing of documents, digitized maps, and other geo-spatially mapped data via the WWW. A proper partitioning and scheduling of computation and communication in processing a user request on a multiprocessor server and transferring some computation to client-site machines can reduce network traffic and substantially improve system response time. We propose a partitioning and scheduling mechanism that adapts to resource changes and optimizes resource utilization and demonstrate the application of this mechanism for online information browsing. We also provide a performance analysis and experimental results to study the impact of resource availability and the effectiveness of our scheduling techniques.

17.
We propose three novel mathematical optimization formulations that solve the same two-type heterogeneous multiprocessor scheduling problem for a real-time taskset with hard constraints. Our formulations are based on a global scheduling scheme and a fluid model. The first formulation is a mixed-integer nonlinear program, since the scheduling problem is intuitively considered as an assignment problem. However, by changing the scheduling problem to first determine a task workload partition and then to find the execution order of all tasks, the computation time can be significantly reduced. Specifically, the workload partitioning problem can be formulated as a continuous nonlinear program for a system with continuous operating frequency, and as a continuous linear program for a practical system with a discrete speed level set. The latter problem can therefore be solved by an interior point method to any accuracy in polynomial time. The task ordering problem can be solved by an algorithm with a complexity that is linear in the total number of tasks. The work is evaluated against existing global energy/feasibility optimal workload allocation formulations. The results illustrate that our algorithms are both feasibility optimal and energy optimal for both implicit and constrained deadline tasksets. Specifically, our algorithm can achieve up to 40% energy saving for some simulated tasksets with constrained deadlines. The benefit of our formulation compared with existing work is that our algorithms can solve a more general class of scheduling problems due to incorporating a scheduling dynamic model in the formulations and allowing for a time-varying speed profile.  相似文献   
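The exact programs are not reproduced in the abstract; as a hedged illustration of the workload-partitioning step for a two-type platform, one can write a continuous program over split fractions, where x_{i,t} is the share of task i placed on type t, u_{i,t} its utilization if run entirely on type t, m_t the number of cores of type t, and e_t(·) an energy-cost term. The symbols and constraints below are illustrative, not the paper's formulation.

```latex
% Illustrative fluid workload-partitioning program for two processor types.
\begin{aligned}
\min_{x \ge 0}\quad & \sum_i \bigl(e_1(x_{i,1}u_{i,1}) + e_2(x_{i,2}u_{i,2})\bigr) \\
\text{s.t.}\quad & x_{i,1} + x_{i,2} = 1 \quad \forall i
   && \text{(all of task $i$'s work is placed)} \\
& x_{i,1}u_{i,1} + x_{i,2}u_{i,2} \le 1 \quad \forall i
   && \text{(a task never needs more than one processor)} \\
& \textstyle\sum_i x_{i,t}\,u_{i,t} \le m_t \quad t \in \{1,2\}
   && \text{(per-type core capacity)}
\end{aligned}
```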

18.
When simulating large virtual environments (VEs), contention for limited resources such as the CPU, rendering pipeline, or network bandwidth frequently degrades the system's performance. Whenever such a competition occurs and not all elements that require the resource can be serviced, an approximation must be made to avoid compromising interactive performance. We propose an enhancement to the round-robin approach called Priority Round Robin scheduling (PRR). This algorithm enforces priorities, while retaining the output sensitivity and starvation-free performance of round-robin scheduling. Priorities are set by a user-defined error metric (such as visual error), which the algorithm attempts to minimize. This permits not only scheduling the entities competing for a resource such as updates competing for the network, but also filling the gap between filtering and scheduling techniques. We evaluate our algorithm in a client-server system.

19.
The increasing amount of real-time traffic carried over the Internet requires end-to-end quality of service (QoS) support. To this end, the QoS schedulers implemented in routers assign the available bandwidth resources to packet flows according to their respective allocated rates. Packet Fair Queuing (PFQ) schedulers can provide fair service and low end-to-end delay bounds to traffic flows. However, they have higher implementation complexity than other algorithms, because they must track the system state and search among all flows queued at the outgoing interface for the packet to serve next. QoS scheduling is a data-plane functionality, which requires hardware implementation for high-speed router interfaces. Previous work on hardware implementations of PFQ schedulers is specific to certain algorithms and does not provide results on real hardware platforms. In this paper, we present a general hardware design framework for PFQ schedulers and apply this framework to the WF2Q+ PFQ algorithm to demonstrate its properties. We carry out the entire implementation of the WF2Q+ algorithm on an FPGA and evaluate its performance with real traffic flows. In addition, we implement WFQ as a second PFQ algorithm to demonstrate the generality of the framework.
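For context, the core of WF2Q+ selection in its textbook form (independent of the hardware framework) is: track per-packet virtual start and finish times, call a head-of-line packet eligible once its start time does not exceed the system virtual time, and transmit the eligible packet with the smallest finish time. The sketch below shows that selection step only, with illustrative flow state and without the virtual-time update.

```python
# Textbook WF2Q+ head-of-line selection: a flow's head packet is eligible
# once its virtual start time S is <= the system virtual time V; among the
# eligible packets, the one with the smallest virtual finish time F is sent.
def wf2q_plus_select(flows, virtual_time):
    eligible = [f for f in flows if f["S"] <= virtual_time]
    if not eligible:
        return None
    return min(eligible, key=lambda f: f["F"])["name"]

flows = [
    {"name": "voice", "S": 0.0, "F": 0.20},
    {"name": "video", "S": 0.1, "F": 0.15},
    {"name": "bulk",  "S": 0.5, "F": 0.60},  # not yet eligible at V = 0.12
]
print(wf2q_plus_select(flows, virtual_time=0.12))  # video
```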

20.
Motivated by the practical requirement that large-scale access aggregation routers must treat different aggregated traffic flows differently, this paper presents a scheduling strategy (DS) that supports the DiffServ model on the CICQ switching fabric; the algorithm schedules traffic flows in a "node-behavior" fashion. Compared with previous algorithms, DS adopts a distributed control strategy, has lower time complexity, and is easier to implement in practice. Simulation results show that DS not only provides bandwidth guarantees for EF and AF traffic but also achieves good delay performance.
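The abstract does not detail DS's internals; the sketch below is only a generic illustration of the service split such a DiffServ scheduler must provide at each output: EF traffic is served with strict priority (low delay), while AF classes share the remaining service in proportion to configured weights (bandwidth guarantees). Queue names, contents, and weights are made up.

```python
# Hedged illustration of DiffServ-style service at one output port: EF gets
# strict priority, AF classes share what is left via weighted round-robin.
# Queue contents and weights are made up; this is not the DS algorithm itself.
from collections import deque

ef = deque(["ef1", "ef2"])
af = {"af1": deque(["a", "b", "c"]), "af2": deque(["x"])}
weights = {"af1": 2, "af2": 1}
wrr_round = []

def next_packet():
    global wrr_round
    if ef:
        return ef.popleft()                    # EF: strict priority
    while True:
        if not wrr_round:
            if not any(af[c] for c in af):
                return None                    # nothing left to send
            wrr_round = [c for c, w in weights.items() for _ in range(w)]
        cls = wrr_round.pop(0)
        if af[cls]:
            return af[cls].popleft()

print([next_packet() for _ in range(6)])  # ['ef1', 'ef2', 'a', 'b', 'x', 'c']
```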

