Similar Documents
A total of 20 similar documents were retrieved.
1.
2.
IQ switches store packets at input ports to avoid the memory speedup required by OQ switches. However, packet schedulers are needed to determine an I/O (input/output) interconnection pattern that avoids conflicts among packets at output ports. Today, centralized, single-chip scheduler implementations are largely dominant. In the near future, multi-chip scheduler implementations will be needed to reduce hardware scheduler complexity in very large, high-speed switches. However, a multi-chip implementation introduces a non-negligible delay between the input and output selectors used to determine the I/O interconnection pattern at each time slot. This delay, mainly due to inter-chip latency, requires modifications to traditional scheduling algorithms, which normally rely on the hypothesis that information exchange among selectors can be performed with negligible delay. We propose a novel multicast scheduler, named IMRR, an extension of a previously proposed multicast scheduling algorithm named mRRM, that makes it suitable for a multi-chip implementation, and examine its performance by simulation.

3.
MapReduce has become a mainstream model for processing massive data volumes, and task scheduling, as one of its key components, has attracted wide attention in industry. Existing delay scheduling algorithms suffer from two limitations: they rest on the assumption that all tasks are short, so performance degrades severely when nodes process tasks of different lengths, and their static waiting-time threshold cannot adapt to the job requirements of different users. To address these problems, a delay scheduling algorithm based on task classification is proposed. The algorithm assigns different waiting-time thresholds to tasks of different lengths so as to match the response requirements of different jobs, and adjusts these thresholds according to the task model built from an analysis of the dynamic parameters. Simulations show that the algorithm outperforms existing delay scheduling algorithms in response time and load balancing.
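A minimal Python sketch of per-class delay scheduling as described above; the class names, threshold values, and data-structure layout are illustrative assumptions, not the paper's implementation.

import time

# Illustrative waiting-time thresholds (seconds) per task-length class.
# The actual values and class boundaries are assumptions for this sketch.
WAIT_THRESHOLDS = {"short": 1.0, "medium": 3.0, "long": 6.0}

class Task:
    def __init__(self, task_id, length_class, preferred_nodes):
        self.task_id = task_id
        self.length_class = length_class             # "short" | "medium" | "long"
        self.preferred_nodes = set(preferred_nodes)  # nodes holding its input data
        self.first_skipped_at = None                 # when we first passed it over

def pick_task(pending_tasks, free_node, now=None):
    """Return a task to launch on free_node, honoring per-class delay thresholds."""
    now = now if now is not None else time.time()
    for task in pending_tasks:
        if free_node in task.preferred_nodes:
            return task                              # data-local: launch immediately
        if task.first_skipped_at is None:
            task.first_skipped_at = now              # start this task's waiting clock
        elif now - task.first_skipped_at >= WAIT_THRESHOLDS[task.length_class]:
            return task                              # waited long enough: accept non-local
    return None                                      # keep everything waiting for locality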

4.
Many wormhole interconnection networks for parallel systems, and more recently system area networks, implement virtual channels to provide a number of services including improved link utilization and lower latencies. The forwarding of flits from the virtual channels onto the physical channel is typically accomplished using flit-based round-robin (FBRR) scheduling. This paper presents a novel scheduling strategy, anchored opportunity queueing (AOQ), which preserves the throughput and fairness characteristics of FBRR while significantly reducing the average delay experienced by packets. The AOQ scheduler achieves lower average latencies by trying, as far as possible, to complete the transmission of one packet before beginning the transmission of flits from another. The AOQ scheduler achieves provable fairness in the number of opportunities it offers to each of the virtual channels for transmission of flits over the physical channel. We prove this by showing that the relative fairness bound, a popular measure of fairness, is a small finite constant in the case of the AOQ scheduler. Finally, we present simulation results comparing the delay characteristics of AOQ with other schedulers for virtual channels. The AOQ scheduler is simple to implement in hardware, and also offers a practical solution in other contexts, such as scheduling ATM cells in Internet backbone switches.
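As a toy illustration of the difference between flit-based round-robin and an "anchored" policy that tries to finish one packet before moving on, here is a Python sketch; the queue representation and the tie-breaking are assumptions, not the paper's AOQ definition.

from collections import deque

# Each virtual channel is a deque of flits; a flit is ("H", pkt) for head,
# ("B", pkt) for body, ("T", pkt) for tail.  This layout is illustrative.

def fbrr_next(channels, last):
    """Flit-based round-robin: take one flit from the next non-empty channel."""
    n = len(channels)
    for i in range(1, n + 1):
        c = (last + i) % n
        if channels[c]:
            return c, channels[c].popleft()
    return None, None

def anchored_next(channels, anchor, last):
    """Prefer the anchored channel until its current packet's tail flit leaves."""
    if anchor is not None and channels[anchor]:
        kind, pkt = channels[anchor].popleft()
        new_anchor = None if kind == "T" else anchor   # release anchor at tail flit
        return anchor, (kind, pkt), new_anchor
    c, flit = fbrr_next(channels, last)
    if flit is None:
        return None, None, None
    new_anchor = None if flit[0] == "T" else c         # anchor onto this packet
    return c, flit, new_anchor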

5.
An Improved Delay Scheduling Algorithm for Homogeneous Hadoop Clusters
In the Hadoop framework, compute resources and data resources may reside at different physical locations, which gives rise to the data-locality problem. Delay scheduling was introduced to address this problem: it selects the node holding a task's input data as that task's compute node and defers the task until such a node becomes available. However, several tasks of the same job may then end up concentrated on a single compute node, so the job fails to reach the desired degree of parallelism. Building on the original delay scheduling algorithm, a delay-capacity scheduling algorithm is proposed that allows some tasks to choose non-local nodes as their compute nodes, improving job response time and increasing job parallelism. Comparative experiments show that the improved algorithm clearly outperforms the original delay scheduling algorithm in execution efficiency and parallelism.

6.
In this paper, we present a QoS-adaptive admission control and resource scheduling framework for continuous media (CM) servers. The framework consists of two parts. One is a reserve-based admission control mechanism in which new streams arriving during periods of congestion are offered lower QoS instead of being blocked. The other is a scheduler for continuous media with dynamic resource allocation, which achieves higher utilization than non-dynamic schedulers by effectively sharing available resources among contending streams and by reclamation, a scheduler-initiated negotiation that reallocates resources among streams to improve overall QoS. This soft-QoS framework recognizes that CM applications can generally tolerate certain variations in QoS parameters; that is, it exploits findings about human tolerance to degradation in the quality of multimedia streams. Using our policy, we can support a larger number of simultaneously running clients while ensuring a good response ratio and better resource utilization under heavy traffic. Our admission control and scheduling strategy provides three principal advantages over conventional mechanisms. First, it guarantees better total system utilization. Second, it provides better disk utilization and a larger admission ratio for input CM streams, which is a major advantage. Third, it still delivers acceptable play-out quality compared to the conventional greedy admission control algorithm.
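A minimal sketch of reserve-based admission in Python: when the server is congested, a new stream is admitted at a degraded QoS level instead of being rejected. The capacity units, degradation levels, and reserve fraction are illustrative assumptions, not the paper's parameters.

# Illustrative reserve-based admission control.
TOTAL_BW = 1000              # assumed server capacity, in arbitrary units
RESERVE = 0.1 * TOTAL_BW     # capacity kept aside for degraded admissions
QOS_LEVELS = [100, 70, 50]   # assumed per-stream bandwidth per QoS level (best first)

admitted = []                # list of (stream_id, granted_bw)

def used_bw():
    return sum(bw for _, bw in admitted)

def admit(stream_id):
    """Try QoS levels from best to worst; only the lowest level may dip into the reserve."""
    for i, need in enumerate(QOS_LEVELS):
        limit = TOTAL_BW if i == len(QOS_LEVELS) - 1 else TOTAL_BW - RESERVE
        if used_bw() + need <= limit:
            admitted.append((stream_id, need))
            return need      # granted bandwidth (possibly degraded)
    return 0                 # block only if even the lowest level does not fit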

7.
《Computer Networks》2007,51(11):3220-3231
The proportional delay differentiation model provides controllable and predictable delay differentiation, that is, the packet-delay ratio between two classes of service is consistent over any measured timescale. Previous studies have focused on improving the accuracy of the achieved delay proportion between classes, and have not considered reducing the packet queueing delay, since the proposed scheduling algorithms are independent of the packet service time, so the mean queueing delay is invariant, as specified by the conservation law. This paper proposes the maximum WTP (MWTP) and variance WTP (VWTP) schedulers, modified from the waiting-time priority (WTP) algorithm, which is an excellent scheduler for proportional delay differentiation. Both of the proposed schedulers account for the packet transmission time. Simulation results indicate that when the link utilization is moderate, the two schedulers not only yield more accurate delay proportions than the WTP scheduler, regardless of whether the timescale is long or short, but also reduce the mean queueing delay. The effects of load distribution, packet size, and coefficient of variation (CoV) of packet sizes on the performance of all schedulers are also investigated. Our proposed schedulers always outperform WTP.
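For reference, a small Python sketch of a WTP-style decision, assuming the usual formulation in which the scheduler serves the backlogged class whose head-of-line packet has the largest waiting time normalized by its class parameter; the delta values are assumptions, and the MWTP/VWTP refinements that account for transmission time are not reproduced here.

import time

DELTA = {1: 1.0, 2: 2.0, 3: 4.0}   # assumed delay differentiation parameters

def wtp_pick(queues, now=None):
    """queues: {class_id: [(arrival_time, size_bytes), ...]} in FIFO order."""
    now = now if now is not None else time.time()
    best_cls, best_prio = None, -1.0
    for cls, q in queues.items():
        if not q:
            continue
        arrival, _size = q[0]
        prio = (now - arrival) / DELTA[cls]   # normalized head-of-line waiting time
        if prio > best_prio:
            best_cls, best_prio = cls, prio
    return best_cls                           # class to serve next; None if all queues are empty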

8.
We study the problem of executing parallel programs, in particular Cilk programs, on a collection of processors of different speeds. We consider a model in which each processor maintains an estimate of its own speed, where communication between processors has a cost, and where all scheduling must be online. This problem has been considered previously in the fields of asynchronous parallel computing and scheduling theory, and our model is a bridge between the assumptions in these fields. We provide a new, more accurate analysis of an old scheduling algorithm called the maximum utilization scheduler. Based on this analysis, we generalize this scheduling policy and define the high utilization scheduler. We next focus on the Cilk platform and introduce a new algorithm for scheduling Cilk multithreaded parallel programs on heterogeneous processors. This scheduler is inspired by the high utilization scheduler and is modified to fit the Cilk context. A crucial aspect of our algorithm is that it keeps the original spirit of the Cilk scheduler; in fact, when our new algorithm runs on homogeneous processors, it exactly mimics the dynamics of the original Cilk scheduler.

9.
In prior work on soft real-time (SRT) multiprocessor scheduling, tardiness bounds have been derived for a variety of scheduling algorithms, most notably the global earliest-deadline-first (G-EDF) algorithm. In this paper, we devise G-EDF-like (GEL) schedulers, which have identical implementations to G-EDF and therefore the same overheads, but provide better tardiness bounds. We discuss how to analyze these schedulers and propose methods to determine scheduler parameters to meet several different tardiness bound criteria. We employ linear programs to adjust such parameters to optimize arbitrary tardiness criteria, and to analyze lateness bounds (lateness is related to tardiness). We also propose a particular scheduling algorithm, namely the global fair lateness (G-FL) algorithm, to minimize maximum absolute lateness bounds. Unlike the other schedulers described in this paper, G-FL only requires linear programming for analysis. We argue that our proposed schedulers, such as G-FL, should replace G-EDF for SRT applications.
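A small Python sketch of the GEL idea: dispatching is identical to G-EDF except that each task is ordered by its own relative priority point rather than its deadline. The specific priority-point values (which G-FL or the linear programs above would choose) are left as inputs here and are not reproduced from the paper.

import heapq

def build_ready_heap(jobs, priority_point):
    """jobs: iterable of (task_id, release_time); priority_point: {task_id: Y_i}.
    With Y_i equal to the relative deadline this ordering is exactly G-EDF."""
    heap = []
    for task_id, release in jobs:
        heapq.heappush(heap, (release + priority_point[task_id], task_id, release))
    return heap

def dispatch(heap, m):
    """Pick the m highest-priority ready jobs (smallest priority points) for m processors."""
    return [heapq.heappop(heap) for _ in range(min(m, len(heap)))]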

10.
Apache Storm, a big-data stream-processing platform, schedules tasks in a round-robin fashion by default, ignoring both the differences in computational cost among the tasks of a topology and the different communication patterns between tasks, which leaves considerable room for optimization in load balancing and communication overhead. To address this, a weight-based task scheduling algorithm for Storm (TSAW-Storm) is proposed. The algorithm first derives vertex weights from each task's CPU usage and edge weights from the volume of the data streams between tasks; it then incrementally builds the task set hosted by each worker node using the idea of maximizing edge-weight gain, converting inter-node data streams with large edge weights into intra-node streams as far as possible while keeping the cluster load balanced, thereby reducing network transmission overhead. On a WordCount benchmark with 8 worker nodes, TSAW-Storm reduces system latency and inter-node data-stream volume by 30.0% and 32.9% respectively compared with Storm's default scheduler, and the standard deviation of the worker nodes' CPU load is only 25.8% of that of the default scheduler. In a further comparison with an online scheduling algorithm, TSAW-Storm reduces system latency, inter-node data-stream volume, and CPU-load standard deviation by 7.76%, 11.8%, and 5.93% respectively, with markedly lower scheduling overhead, effectively improving the efficiency of the Storm system.
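A greedy Python sketch of the edge-weight-gain idea described above: repeatedly place the unassigned task that shares the most traffic with tasks already on some node, subject to a per-node CPU budget. The data structures, the capacity rule, and the tie-breaking are assumptions, not the paper's exact algorithm.

def assign_tasks(cpu_weight, edge_weight, nodes, capacity):
    """cpu_weight: {task: load}; edge_weight: {(t1, t2): traffic}; nodes: list of node ids."""
    placement = {}                        # task -> node
    load = {n: 0.0 for n in nodes}

    def gain(task, node):
        # Traffic this task exchanges with tasks already placed on `node`.
        return sum(w for (a, b), w in edge_weight.items()
                   if (a == task and placement.get(b) == node)
                   or (b == task and placement.get(a) == node))

    unassigned = set(cpu_weight)
    while unassigned:
        best = None
        for task in unassigned:
            for node in nodes:
                if load[node] + cpu_weight[task] > capacity:
                    continue              # keep the node within its CPU budget
                g = gain(task, node)
                if best is None or g > best[0] or (g == best[0] and load[node] < load[best[2]]):
                    best = (g, task, node)
        if best is None:
            break                         # nothing fits; remaining tasks stay unplaced
        _, task, node = best
        placement[task] = node
        load[node] += cpu_weight[task]
        unassigned.remove(task)
    return placement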

11.
Most commercial network switches are designed to achieve the good average throughput and delay needed for Internet traffic, whereas hard real-time applications demand a bounded delay. Our real-time switch combines clearance-time-optimal switching with clock-based scheduling on a crossbar switching fabric. We use real-time virtual-machine tasks to serve both periodic and aperiodic traffic, which simplifies analysis and provides isolation from other system operations. We can then show that any feasible traffic will be switched within two clock periods. This delay bound is enabled by introducing one-shot traffic, which can be constructed at the cost of a fixed delay of one clock period. We carry out simulations to compare our switch with the popular iSLIP crossbar switch scheduler. Our switch has a larger schedulability region, a lower, bounded end-to-end switching delay, and a shorter clearance time, which is the time required to serve every packet in the system.

12.
Providing QoS with the Deficit Table Scheduler
A key component of networks with Quality of Service (QoS) support is the egress link scheduling algorithm. An ideal scheduling algorithm implemented in a high-performance network with QoS support should satisfy two main properties: good end-to-end delay and implementation simplicity. Table-based schedulers try to offer a simple implementation and good latency bounds, and some of the latest proposals for network technologies, such as Advanced Switching and InfiniBand, include one of these schedulers in their specifications. However, these table-based schedulers do not work properly with variable packet sizes, which are the usual case in current network technologies. We have proposed a new table-based scheduler, which we call the Deficit Table (DTable) scheduler, that works properly with variable packet sizes. Moreover, we have proposed a methodology to configure this table-based scheduler in such a way that the bandwidth and latency assignments are decoupled. In this paper, we thoroughly review the provision of QoS with the DTable scheduler and our configuration methodology, and evaluate the performance of our proposals in a multimedia scenario. Simulation results show that our proposals provide latency performance similar to that of more complex scheduling algorithms. Moreover, we show the advantages of our decoupling configuration methodology over the usual ways of configuring this kind of table-based scheduler.
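A Python sketch of a table-based scheduler that tolerates variable packet sizes by carrying a per-flow deficit, in the spirit of the DTable description above; the table layout, quantum value, and flow structures are assumptions, not the DTable specification.

from collections import deque

class Flow:
    def __init__(self):
        self.packets = deque()   # each packet is represented by its size in bytes
        self.deficit = 0

def serve_table(table, flows, quantum):
    """table: list of flow ids (one per table entry); quantum: bytes credited per visit."""
    sent = []
    for flow_id in table:
        f = flows[flow_id]
        f.deficit += quantum                    # each table entry grants this flow credit
        while f.packets and f.packets[0] <= f.deficit:
            size = f.packets.popleft()
            f.deficit -= size                   # charge the actual, variable packet size
            sent.append((flow_id, size))
        if not f.packets:
            f.deficit = 0                       # idle flows do not hoard credit
    return sent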

13.
Cluster scheduling, where processors are grouped into clusters and the tasks allocated to one cluster are scheduled by a global scheduler, has recently attracted attention in multiprocessor real-time systems research. In this paper, assuming that an optimal global scheduler is adopted within each cluster, we investigate the worst-case utilization bounds for cluster scheduling with different task allocation/partitioning heuristics. First, we develop a lower limit on the utilization bounds for cluster scheduling with any reasonable task allocation scheme. Then, the lower limit is shown to be the exact utilization bound for cluster scheduling with the worst-fit task allocation scheme. For other task allocation heuristics (such as first-fit, best-fit, first-fit decreasing, best-fit decreasing and worst-fit decreasing), higher utilization bounds are derived for systems with both homogeneous clusters (where each cluster has the same number of processors) and heterogeneous clusters (where clusters have different numbers of processors). In addition, focusing on an efficient optimal global scheduler, namely the boundary-fair (Bfair) algorithm, we propose a period-aware task allocation heuristic with the goal of reducing the scheduling overhead (e.g., the number of scheduling points, context switches and task migrations). Simulation results indicate that the percentage of task sets that can be scheduled is significantly improved under cluster scheduling, even for small clusters, compared to partitioned scheduling. Moreover, compared to a simple generic task allocation scheme (e.g., first-fit), the proposed period-aware task allocation heuristic markedly reduces the scheduling overhead of cluster scheduling with the Bfair scheduler.
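A Python sketch of worst-fit task-to-cluster allocation, the heuristic whose exact utilization bound is characterized above: each task goes to the cluster with the most spare capacity. Taking a cluster's capacity to be its processor count is a simplifying assumption for illustration.

def worst_fit_allocate(task_utils, cluster_sizes):
    """task_utils: list of task utilizations; cluster_sizes: processors per cluster."""
    remaining = [float(s) for s in cluster_sizes]   # spare capacity per cluster
    allocation = [[] for _ in cluster_sizes]
    for i, u in enumerate(task_utils):
        c = max(range(len(remaining)), key=lambda k: remaining[k])   # worst fit
        if u > remaining[c]:
            raise ValueError(f"task {i} (u={u}) does not fit in any cluster")
        allocation[c].append(i)
        remaining[c] -= u
    return allocation

A period-aware variant, as proposed in the paper, would additionally steer tasks with harmonic or similar periods to the same cluster; that ordering criterion is not reproduced here.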

14.
In a large Hadoop cluster, a good task scheduling policy plays an important role in improving data locality, reducing network transmission overhead, shortening job execution time, and raising the cluster's job throughput. Targeting the low data locality of Reduce tasks in the Hadoop architecture, this paper proposes a Reduce-task scheduling optimization algorithm based on delay scheduling, which shortens job execution time and raises job throughput by improving the data locality of Reduce tasks; in Hadoop's Early Shuffle phase, the algorithm applies a multi-level delay scheduling policy to improve Reduce-task data locality. The algorithm is implemented by rewriting the native fair scheduler, and comparative experiments against the native fair scheduler show that it clearly reduces job execution time and increases the cluster's job throughput.
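A Python sketch of multi-level delay scheduling as described above: a task first waits for a node-local slot, then accepts a rack-local slot after a first threshold, and runs off-rack only after a second threshold. The threshold values and the locality inputs are illustrative assumptions.

LEVEL_WAIT = [2.0, 5.0]   # assumed seconds to wait before relaxing to the next locality level

def allowed_level(waited):
    """0 = node-local only, 1 = rack-local allowed, 2 = any node allowed."""
    if waited < LEVEL_WAIT[0]:
        return 0
    if waited < LEVEL_WAIT[1]:
        return 1
    return 2

def can_launch(task_hosts, task_racks, node, rack, waited):
    """task_hosts/task_racks: locations of the task's input data."""
    level = allowed_level(waited)
    if node in task_hosts:
        return True                      # node-local is always acceptable
    if level >= 1 and rack in task_racks:
        return True                      # rack-local once the first threshold passes
    return level >= 2                    # off-rack only after the second threshold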

15.
Latency-rate (LR) schedulers have shown their ability to provide fair and weighted sharing of bandwidth with an upper bound on the delivery latency of packets, while earliest deadline first (EDF) schedulers have shown their ability to provide LR-decoupled service, whereby the delivery latency of packets is not bounded by the reserved rate. However, EDF schedulers require traffic shapers to ensure flow protection. We propose quantum-based earliest deadline first scheduling (QEDF), a quantum-based scheduler that provides flow protection, throughput guarantees and delay-bound guarantees for flows that require LR-coupled and LR-decoupled types of reservations. It classifies flows into time-critical (TC), jitter-sensitive (JS), and rate-based (RB) classes and uses a quality-of-service forwarding rule to determine the next packet to be serviced by the scheduler. It provides nonpreemptive priority service to TC queues, which allows LR-decoupled reservation for flows that have low rates but cannot tolerate delay. Packets from JS queues can be delayed by other packets if forwarding the latter will not result in the former missing its deadline. As a quantum-based scheduler, the QEDF scheduler provides throughput guarantees for RB queues. We present both analytical and simulation results for QEDF, evaluating its deployment as a single-class as well as a multiservice scheduler.
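A Python sketch of a forwarding rule in the spirit of the QEDF description: TC traffic gets nonpreemptive priority, and an RB packet may go ahead of a JS packet only if the JS head still has enough slack to meet its deadline. The queue layouts, the link rate, and the slack test are illustrative assumptions, not the paper's exact rule.

import time

LINK_RATE = 1e9 / 8   # assumed link rate in bytes per second

def pick_next(tc, js, rb_head, now=None):
    """tc/js: lists of (deadline, size_bytes); rb_head: size of the next RB packet or None."""
    now = now if now is not None else time.time()
    if tc:
        return ("TC", min(range(len(tc)), key=lambda i: tc[i][0]))   # nonpreemptive TC priority
    if js:
        i = min(range(len(js)), key=lambda k: js[k][0])              # earliest JS deadline
        deadline, size = js[i]
        if rb_head is not None and now + (rb_head + size) / LINK_RATE <= deadline:
            return ("RB", None)          # RB goes first only if the JS head still meets its deadline
        return ("JS", i)
    return ("RB", None) if rb_head is not None else None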

16.
A QoS-aware bandwidth scheduling algorithm for IEEE 802.16 PMP mode is presented. The algorithm is realized jointly by a BS scheduler and an SS scheduler, allocates bandwidth in units of time slots, and has been implemented in NS-2. A simulation environment is set up to compare the end-to-end delay of various traffic flows, and the results demonstrate the effectiveness of the algorithm.

17.
《Performance Evaluation》2006,63(9-10):956-987
Aggregate scheduling has been proposed as a solution for achieving scalability in large networks. However, in order to enable the provisioning of real-time services, such as video delivery or voice conversations, in aggregate scheduling networks, end-to-end delay bounds for single flows are required. In this paper, we derive per-flow end-to-end delay bounds in aggregate scheduling networks in which per-egress (or sink-tree) aggregation is in place, and flow traffic is aggregated according to a FIFO policy. The derivation process is based on Network Calculus, which is suitably extended for this purpose. We show that the bound is tight by deriving the scenario in which it is attained. A tight delay bound can be employed for a variety of purposes: for example, devising optimal aggregation criteria and rate provisioning policies based on pre-specified flow delay bounds.
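For orientation, the elementary single-node Network Calculus result (a standard textbook bound, not the tight per-flow sink-tree bound derived in the paper) states that a flow constrained by a leaky-bucket arrival curve and served by a rate-latency node has its delay bounded by the horizontal deviation between the two curves:

\alpha(t) = \sigma + \rho\,t, \qquad \beta(t) = R\,(t - T)^{+}, \qquad \rho \le R \;\Longrightarrow\; D \le T + \frac{\sigma}{R}.

The per-flow bounds in the paper refine this kind of result to chains of FIFO aggregation nodes.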

18.
Hybrid optical-wireless networks provide the inexpensive broadband bandwidth vital for modern applications, as well as the mobility and scalability required of an access network. However, in order to provide satisfactory Quality of Service (QoS) on such a non-homogeneous network, innovative designs are required. This paper proposes a novel scheduling mechanism that significantly improves the delay guarantee, while maintaining high throughput, by predicting the traffic arriving at optical network units (ONUs). The proposed scheduler exploits the information available in hybrid optical-wireless networks to enhance the ONU scheduler, which results in accurate prediction of incoming traffic and hence intelligent, traffic-aware scheduling and dynamic bandwidth assignment (DBA). Based on the proposed architecture, two DBA algorithms are proposed and their performance is evaluated by extensive simulations; the maximum throughput of such a network is also analyzed. The results show that, with the proposed algorithms, the delay bound of delay-sensitive traffic classes can be decreased by a factor of two without any adverse effect on throughput.
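As a simple illustration of prediction-driven grant sizing (not the paper's predictor or DBA algorithms), an ONU's next bandwidth grant could be sized from an exponentially weighted moving average of its recent arrivals; the smoothing factor, headroom multiplier, and grant cap below are assumptions.

ALPHA = 0.3          # EWMA smoothing factor
HEADROOM = 1.2       # over-provisioning factor for predicted traffic
MAX_GRANT = 15000    # cap per grant, in bytes

def next_grant(reported_backlog, arrivals_last_cycle, prev_estimate):
    """Predict arrivals during the next cycle and size the grant accordingly."""
    estimate = ALPHA * arrivals_last_cycle + (1 - ALPHA) * prev_estimate
    grant = min(MAX_GRANT, reported_backlog + HEADROOM * estimate)
    return grant, estimate   # the estimate is carried into the next cycle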

19.
A new approach for dynamic job scheduling in mesh-connected multiprocessor systems, which supports a multiuser environment, is proposed in this paper. Our approach combines a submesh reservation policy with a priority-based scheduling policy to obtain high performance in terms of high throughput, high utilization, and low turnaround times for jobs. This high performance is achieved at the expense of scheduling jobs in a strictly fair, FCFS fashion; in fact, the algorithm is parameterized to allow trade-offs between performance and (short-term) FCFS fairness. The proposed scheduler can be used with any submesh allocation policy. A fast and efficient implementation of the proposed scheduler is also presented. The performance of the proposed scheme has been compared with the FCFS policy, the only existing scheduling strategy for meshes, to demonstrate the effectiveness of the proposed approach. Simulation results indicate that our scheduling strategy outperforms the FCFS policy significantly; in particular, it significantly reduces the average waiting delay of jobs. The fast implementation of the proposed scheduler results in low allocation and deallocation time overhead, as well as low space overhead.

20.
Research on the Application of Queue Scheduling Algorithms in Networks
As an important means of guaranteeing QoS, queue scheduling algorithms have attracted wide attention from networking researchers in recent years. This paper first introduces the queue scheduling problem and several commonly used queue scheduling algorithms, and then proposes a non-GPS queueing model and scheduling algorithm, WDQ (Weighted Delay Queuing). It explains how this algorithm can effectively absorb traffic bursts and control the delay of packets of different weights, which is of positive significance for research on improving network quality of service (QoS) and on network operation.
