Similar Literature (20 results)
1.
Many applications need to solve the deadline-guaranteed packet scheduling problem. However, it is a very difficult problem when three or more deadlines are present in a set of packets to be scheduled. The traditional approach to this problem is to use EDF (Earliest Deadline First) or similar methods. Recently, a non-EDF-based algorithm was proposed that consistently produces higher throughput than EDF-based algorithms by repeatedly finding an optimal schedule for two classes. However, this new method requires that the two classes be non-overloaded, which greatly restricts its applications. Since the overloaded situation is unavoidable from one iteration to the next when dealing with multiple classes, it is compelling to answer the open question: can we find an optimal schedule for two overloaded classes efficiently? This paper first proves that this problem is NP-complete. It then proposes an optimal preprocessing algorithm that is guaranteed to drop a minimum number of packets from the two classes such that the remaining set is non-overloaded. This result directly improves the new method.
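For context, the EDF baseline that the abstract compares against can be sketched in a few lines. The code below is a minimal, illustrative EDF scheduler for unit-length packets; it is not the paper's non-EDF algorithm, and the `(release_time, deadline)` input format is an assumption made for the example.

```python
import heapq

def edf_schedule(packets):
    """Minimal EDF baseline: packets = [(release_time, deadline), ...], each packet
    takes one unit time slot. Returns indices of packets transmitted on time;
    packets that can no longer meet their deadline are dropped."""
    order = sorted(enumerate(packets), key=lambda p: p[1][0])   # by release time
    ready, sent, t, i = [], [], 0, 0
    while i < len(order) or ready:
        if not ready and order[i][1][0] > t:
            t = order[i][1][0]                        # idle until the next release
        while i < len(order) and order[i][1][0] <= t:
            idx, (_, deadline) = order[i]
            heapq.heappush(ready, (deadline, idx))    # earliest deadline on top
            i += 1
        while ready and ready[0][0] < t + 1:          # cannot finish by its deadline
            heapq.heappop(ready)                      # -> drop it
        if ready:
            _, idx = heapq.heappop(ready)
            sent.append(idx)
            t += 1                                    # one packet per unit slot
    return sent

print(edf_schedule([(0, 2), (0, 1), (1, 3)]))         # -> [1, 0, 2]
```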

2.
Aiming at minimizing the communication overhead of iterative scheduling algorithms for input-queued packet switches, an efficient single-iteration, single-bit-request scheduling algorithm called Highest Rank First with Request Compression 1 (HRF/RC1) is proposed. In HRF/RC1, scheduling priority is given to the preferred input–output pair first, where each input has a distinct preferred output in each time slot. If an input does not have backlogged packets for its preferred output, each of its non-empty VOQs sends a single-bit request to the corresponding output. This single bit distinguishes one longest VOQ from the other non-empty VOQs within an input port. If an output receives a request from its preferred input, it grants this input. Otherwise, it gives higher priority to the longest VOQ than to other non-empty VOQs. Similarly, an input accepts a grant following the same priority sequence. In case of a tie, the winner is selected randomly. Compared with other single-iteration algorithms with comparable communication overhead, we show by simulations that HRF/RC1 always gives the best delay-throughput performance.
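The request-grant-accept round that single-iteration schedulers of this kind perform can be illustrated schematically. The sketch below is a generic round that favours a preferred input-output pair and otherwise the longest VOQ; it is not the exact HRF/RC1 rules, and the data structures (`voq_len`, `preferred_out`) are assumptions made for illustration.

```python
import random

def request_grant_accept(voq_len, preferred_out):
    """One request-grant-accept round for an N x N VOQ crossbar (schematic only).
    voq_len[i][j]: backlog from input i to output j; preferred_out[i]: input i's
    preferred output in this slot. Returns a partial matching {input: output}."""
    n = len(voq_len)
    # Request phase: every non-empty VOQ requests its output.
    requests = {j: [i for i in range(n) if voq_len[i][j] > 0] for j in range(n)}
    # Grant phase: an output favours its preferred input; otherwise the longest VOQ.
    granted = {}                                  # input -> list of granting outputs
    for j, reqs in requests.items():
        if not reqs:
            continue
        preferred = [i for i in reqs if preferred_out[i] == j]
        pool = preferred if preferred else reqs
        best = max(voq_len[i][j] for i in pool)
        winner = random.choice([i for i in pool if voq_len[i][j] == best])
        granted.setdefault(winner, []).append(j)
    # Accept phase: an input keeps its preferred output if granted, else the longest VOQ.
    match = {}
    for i, outs in granted.items():
        if preferred_out[i] in outs:
            match[i] = preferred_out[i]
        else:
            best = max(voq_len[i][j] for j in outs)
            match[i] = random.choice([j for j in outs if voq_len[i][j] == best])
    return match
```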

3.
Packet switching uses packet buffers to store the packets scheduled to the output ports, and the read/write speed of these buffers often determines the performance of a terabit (T-bit) router. To address the relatively low read/write rate of current DRAM, this paper proposes a high-speed, large-capacity buffering mechanism based on DDR memory that supports QoS. An engineering solution supporting the scheduling of 12 channels of 2.5 Gbps IP packets is implemented, and it guarantees an output-port scheduling rate of up to 10 Gbps.

4.
Research on QoS Support in MPLS VPN (cited 6 times: 0 self-citations, 6 by others)
This paper first introduces two VPN QoS models, the pipe model and the hose model, then describes how MPLS can be combined with DiffServ and related techniques to provide QoS in large-scale networks, and finally discusses how to implement the hose model in MPLS VPN so as to offer users differentiated levels of QoS.

5.
6.
QoS Support in the Linux Networking System (cited once: 0 self-citations, 1 by others)
Linux provides a rich set of functions for traffic control, which forms the basis of its QoS support. Traffic control consists mainly of three parts: queueing disciplines, classification, and filtering. This paper discusses these three parts in detail and describes an implementation of Differentiated Services.
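A toy model may help readers who have not seen this three-part split before. The Python sketch below only mirrors the structure described above (filters classify, classes hold queues, the discipline decides dequeue order); it is not Linux kernel code, and the class names and DSCP rule are assumptions for the example.

```python
from collections import deque

class TrafficControlToy:
    """Toy mirror of the Linux traffic-control split: filters classify packets into
    classes, each class keeps a FIFO queue, and the queueing discipline decides the
    dequeue order (strict priority here)."""
    def __init__(self, filters, classes):
        self.filters = filters                          # list of (match_fn, class_id)
        self.queues = {cid: deque() for cid in classes} # class_id -> FIFO queue
        self.priority = list(classes)                   # dequeue order

    def enqueue(self, pkt):
        for match, cid in self.filters:                 # first matching filter wins
            if match(pkt):
                self.queues[cid].append(pkt)
                return
        self.queues[self.priority[-1]].append(pkt)      # unmatched -> lowest class

    def dequeue(self):
        for cid in self.priority:                       # serve higher classes first
            if self.queues[cid]:
                return self.queues[cid].popleft()
        return None

# Example: DSCP EF (46) traffic is classified as "voice"; everything else is "bulk".
tc = TrafficControlToy([(lambda p: p.get("dscp") == 46, "voice")], ["voice", "bulk"])
tc.enqueue({"dscp": 46, "payload": b"rtp"})
tc.enqueue({"dscp": 0, "payload": b"ftp"})
print(tc.dequeue()["payload"])                          # b'rtp' leaves first
```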

7.
熊智, 晏蒲柳, 郭成城. 《计算机工程》 (Computer Engineering), 2006, 32(17): 35-37, 4
To enable a Web cluster server to support QoS, several QoS mechanisms are implemented on the dispatcher, including service differentiation, performance isolation, dynamic server partitioning, admission control, and content adaptation. For high-priority requests, the system guarantees that the quality of service satisfies a pre-negotiated service-level agreement; for low-priority requests, the system provides best-effort service. In particular, when the servers are heavily loaded, the dispatcher does not simply drop requests but uses the content-adaptation mechanism to prevent server overload. Practical tests show that the system meets all of its design requirements.
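The dispatcher's decision logic described above can be caricatured as follows. All fields in the sketch (`priority`, `load`, the 0.8 threshold) are hypothetical; it only illustrates the idea of degrading content under overload instead of dropping requests, not the authors' implementation.

```python
def dispatch(request, servers, load_threshold=0.8):
    """Illustrative dispatcher decision (all fields hypothetical): pick the least
    loaded server, always serve high-priority requests in full, and when the
    cluster is overloaded return a degraded (content-adapted) variant for
    low-priority requests instead of dropping them."""
    overloaded = sum(s.load for s in servers) / len(servers) > load_threshold
    target = min(servers, key=lambda s: s.load)          # least-loaded back-end
    if request.priority == "high":
        return target, "full"                            # SLA-backed: full content
    return target, "degraded" if overloaded else "full"  # adapt rather than drop
```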

8.
As Grid computing has emerged as a technology for providing computational resources to industry and scientific projects, new requirements have arisen. Resource management has become an important research area in the Grid computing environment. Provisioning the appropriate resource for a given application is a tedious task, so it is important to check and verify the provisioning of a resource before the application executes. In this paper, a resource provisioning framework is presented that offers a resource provisioning policy covering both provisioned resource allocation and resource scheduling. The framework has been formally specified and verified. Formal specification and verification of the framework helps predict possible errors before the scheduling process itself, and thus results in efficient resource provisioning and scheduling of Grid resources.

9.
王荣, 陈越. 《计算机应用》 (Journal of Computer Applications), 2005, 25(7): 1488-1490, 1493
Traditional crossbar-based input-queued switch fabrics fall well short of providing good QoS. Compared with traditional fabrics, the CICQ (combined input and crosspoint buffered queuing) architecture not only achieves throughput close to that of output queueing under various input traffic patterns, but also provides good QoS support. Based on the CICQ architecture, a scheme is proposed to implement flow-based distributed DRR fair packet scheduling under input queueing, and its effectiveness is verified through simulation.
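DRR itself is a standard algorithm; for readers unfamiliar with it, the sketch below shows one textbook DRR round over per-flow queues. The paper's contribution, a flow-based distributed variant running at CICQ crosspoints, is not reproduced here.

```python
from collections import deque

def drr_round(flows, quantum):
    """One Deficit Round Robin round (textbook form). flows: list of dicts with a
    'queue' deque of packet sizes and a 'deficit' counter. Returns packets sent."""
    sent = []
    for flow in flows:
        if not flow["queue"]:
            flow["deficit"] = 0                  # idle flows do not accumulate credit
            continue
        flow["deficit"] += quantum
        while flow["queue"] and flow["queue"][0] <= flow["deficit"]:
            size = flow["queue"].popleft()
            flow["deficit"] -= size
            sent.append((flow["id"], size))
    return sent

flows = [{"id": 0, "queue": deque([300, 300]), "deficit": 0},
         {"id": 1, "queue": deque([900]), "deficit": 0}]
print(drr_round(flows, quantum=500))             # flow 0 sends one 300-byte packet; flow 1 waits
```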

10.
APEX is an adaptive disk scheduling framework with Quality-of-Service (QoS) support designed for environments with highly varying disk bandwidth usage. APEX is based on a three-layer scheduling architecture: (1) the upper layer realizes different service classes using a set of queues; (2) the mid-layer distributes available disk bandwidth among these queues; and (3) the lower layer is handled by the disk itself, which does the final ordering of disk requests. We demonstrate the use of APEX in an example scenario, a Learning-on-Demand (LoD) application supported by a multimedia system, where students can search for and play back multimedia-based learning material. In this paper, we present the scheduling concepts of APEX, which are based on an extended token bucket algorithm. The disk requests scheduled for service are assembled into batches in order to exploit the intelligence of modern disks. Combined with a specialized work-conservation scheme, this enables APEX to apply bandwidth where it is needed, without loss of efficiency. We demonstrate, through simulations, that APEX provides both higher throughput and lower response times than other mixed-media disk schedulers while still avoiding deadline violations for real-time requests. We also show its robustness with respect to misaligned bandwidth allocation. The work was conducted while Ketil Lund was an employee at UniK – University Graduate Center, Kjeller, Norway.
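Since APEX builds on an extended token bucket, a sketch of the plain token bucket it extends may be useful. The rates and costs below are made-up numbers, and the extension described in the paper is not shown.

```python
class TokenBucket:
    """Textbook token bucket (the abstract mentions an *extended* variant; this is
    only the plain baseline). rate is in tokens/second, capacity in tokens."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, 0.0

    def allow(self, cost, now):
        # Refill proportionally to elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost       # admit the request and charge its cost
            return True
        return False                  # not enough tokens: defer or queue the request

bucket = TokenBucket(rate=100, capacity=200)   # e.g. 100 disk-request tokens per second
print(bucket.allow(cost=50, now=0.0), bucket.allow(cost=200, now=0.1))   # True False
```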

11.
Computer Networks, 2007, 51(9): 2368-2378
Orthogonal frequency division multiple access (OFDMA) transmission is the technology of choice for fourth-generation (4G) wireless networks (802.16, 3GPP and 3GPP2). In both the uplink and downlink directions, resources can be allocated in three dimensions: frequency, time, and power. The uplink resource allocation problem is more complicated due to the limited transmission power of the subscriber stations (SSs). In this paper we consider the uplink resource allocation problem for generic OFDMA wireless networks and derive the optimal scheduler taking into account constraints, such as quality of service (QoS), that were not considered in previous works.
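To see why the per-station power budget complicates the uplink, consider a deliberately simplified greedy allocation. The sketch below is not the optimal scheduler derived in the paper; the equal per-subchannel power and fixed budget are assumptions made only to illustrate the constraint.

```python
def uplink_allocate(gains, power_budget, power_per_subchannel):
    """Toy uplink allocation: each subchannel goes to the subscriber station with
    the best channel gain that still has transmit power left. It only illustrates
    how the per-SS power budget couples the subchannel decisions."""
    n_users, n_sub = len(gains), len(gains[0])
    power_left = [power_budget] * n_users
    assignment = [None] * n_sub
    for s in range(n_sub):
        # candidates: stations that can still afford to power this subchannel
        candidates = [u for u in range(n_users) if power_left[u] >= power_per_subchannel]
        if not candidates:
            break
        best = max(candidates, key=lambda u: gains[u][s])
        assignment[s] = best
        power_left[best] -= power_per_subchannel
    return assignment

print(uplink_allocate([[0.9, 0.2], [0.5, 0.8]], power_budget=1.0, power_per_subchannel=0.6))
# -> [0, 1]: station 0 wins subchannel 0, then runs out of power, so station 1 gets subchannel 1
```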

12.
QoS Support Provided by the IPv6 Flow Label (cited 3 times: 0 self-citations, 3 by others)
As the next-generation Internet protocol, IPv6 not only clearly surpasses IPv4 in address space, but also reserves a 20-bit flow label field in the packet header to offer real-time flows a service distinct from best-effort flows. Research on the flow label is still at an experimental stage, and no formal specification has yet been established. After introducing the definitions and properties of flows and flow labels, this paper examines the flow label's ability to support quality of service from several perspectives, including packet classification, accelerated packet forwarding, resource reservation, and QoS-oriented format definitions, analyzes the advantages and disadvantages of each approach, and suggests possible directions for future development.
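The 20-bit flow label sits in the low bits of the first 32-bit word of the IPv6 header, after the 4-bit version and 8-bit traffic class. The snippet below packs and extracts it, which is the basis of the flow-classification use discussed above; the field layout follows RFC 8200, while the function names are only illustrative.

```python
def ipv6_first_word(version, traffic_class, flow_label):
    """Pack the first 32-bit word of an IPv6 header: 4-bit version, 8-bit traffic
    class, 20-bit flow label (RFC 8200 layout)."""
    return (version << 28) | (traffic_class << 20) | (flow_label & 0xFFFFF)

def parse_flow_label(first_word):
    return first_word & 0xFFFFF       # the low 20 bits carry the flow label

word = ipv6_first_word(6, 0, 0x12345)
assert parse_flow_label(word) == 0x12345
# A router can hash (source address, destination address, flow label) to classify a
# flow without parsing extension headers -- one of the classification uses above.
```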

13.
Assuring end-to-end QoS in enterprise distributed real-time and embedded (DRE) systems is hard due to the heterogeneity and transient behavior of communication networks, the lack of integrated mechanisms that schedule communication and computing resources holistically, and the scalability limits of IP multicast in wide-area networks (WANs). This paper makes three contributions to research on overcoming these problems in the context of enterprise DRE systems that use the OMG Data Distribution Service (DDS) quality-of-service (QoS)-enabled publish/subscribe (pub/sub) middleware over WANs. First, it codifies the limitations of conventional DDS implementations deployed over WANs. Second, it describes a middleware component called Proxy DDS that bridges multiple, isolated DDS domains deployed over WANs. Third, it describes the NetQSIP framework that combines multi-layer, standards-based technologies including the OMG-DDS, Session Initiation Protocol (SIP), and IP DiffServ to support end-to-end QoS in a WAN and shield pub/sub applications from tedious and error-prone details of network QoS mechanisms. The results of experiments using Proxy DDS and NetQSIP show how combining DDS with SIP in DiffServ networks significantly improves dynamic resource reservation in WANs and provides effective end-to-end QoS management.

14.
As a new-generation Internet protocol, IPv6 provides good support for quality of service in IP networks; in particular, the use of the 20-bit flow label in its header is a substantial improvement over IPv4's best-effort service. This paper introduces the definition and usage rules of the IPv6 flow label, analyzes how the flow label supports QoS in the two main QoS architectures, and uses measured data to show that the flow label supports QoS better than traditional protocols.

15.
Utilizing optical technologies for the design of packet switches and routers offers several advantages in terms of scalability, high bandwidth, power consumption, and cost. However, the configuration delays of optical crossbars are much longer than those of their electronic counterparts, which makes conventional slot-by-slot scheduling methods no longer feasible. Therefore, some trade-off must be found between empty time slots and configuration overhead. This paper classifies such scheduling problems into preemptive and non-preemptive scenarios, each of which has its own advantages and disadvantages. Although non-preemptive scheduling is inherently not good at achieving the above-mentioned trade-off, it is shown that the proposed maximum weight matching (MWM) based greedy algorithm is guaranteed to achieve an approximation ratio of 2 for arbitrary configuration delay, with a relatively low time complexity of O(N²). For preemptive scheduling, a novel 2-approximation heuristic is presented. Each time it selects a switch configuration, the heuristic guarantees that the covering cost of the remaining traffic matrix stays within a factor of 2 of optimal. Simulation results demonstrate that the 2-approximation heuristic (1) performs close to the optimal scheduling and (2) outperforms ADJUST and DOUBLE in terms of traffic transmission delay and time complexity.
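The flavour of the MWM-based greedy step can be shown with a plain greedy matching over a demand matrix. The sketch below is the classic 1/2-approximation greedy matching, given only as background; it is not the paper's configuration-delay-aware algorithm.

```python
def greedy_matching(weights):
    """Greedy weighted matching on an N x N demand matrix: repeatedly take the
    heaviest unmatched (input, output) pair. A classic 1/2-approximation to
    maximum weight matching, shown as the kind of greedy step a crossbar
    scheduler can build on."""
    n = len(weights)
    edges = sorted(((weights[i][j], i, j) for i in range(n) for j in range(n)
                    if weights[i][j] > 0), reverse=True)
    used_in, used_out, match = set(), set(), []
    for w, i, j in edges:
        if i not in used_in and j not in used_out:
            match.append((i, j, w))
            used_in.add(i)
            used_out.add(j)
    return match                      # one crossbar configuration (a partial permutation)

print(greedy_matching([[5, 1, 0], [4, 0, 2], [0, 3, 0]]))   # [(0, 0, 5), (2, 1, 3), (1, 2, 2)]
```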

16.
The buffered crossbar switch architecture has recently gained considerable research attention. In such a switch, besides normal input and output queues, a small buffer is associated with each crosspoint. Due to the introduction of crossbar buffers, output and input dependency is eliminated, and the scheduling process is greatly simplified. We analyze the performance of switch policies by means of competitive analysis, where a uniform guarantee is provided for all traffic patterns. We assume that each packet has an intrinsic value designating its priority, and the goal of the switch policy is to maximize the weighted throughput of the switch. We consider FIFO queueing buffering policies, which are deployed by the majority of today's Internet routers. In packet-mode scheduling, a packet is divided into a number of unit-length cells and the scheduling policy is constrained to schedule all the cells contiguously, which removes reassembly overhead and improves Quality-of-Service. For the case of variable-length packets with uniform value density (Best Effort model), where the packet value is proportional to its size, we present a packet-mode greedy switch policy that is 7-competitive. For the case of unit-size packets with variable values (Differentiated Services model), we propose a β-preemptive (β is a preemption factor) greedy switch policy that achieves a competitive ratio of 6 + 4β + β² + 3/(β − 1). In particular, its competitive ratio is at most 19.95 for a preemption factor of β = 1.67. As far as we know, this is the first constant-competitive FIFO policy for this architecture in the case of variable-value packets. In addition, we evaluate the performance of the β-preemptive greedy switch policy by simulations and show that it outperforms other natural switch policies. The presented policies are simple and thus can be efficiently implemented at high speeds. Moreover, our results hold for any value of the internal switch fabric speedup.
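One plausible reading of a β-preemptive greedy admission rule for a bounded FIFO buffer is sketched below; the paper's exact crosspoint policy may well differ, so treat this only as an illustration of value-based preemption controlled by a preemption factor β.

```python
def beta_preemptive_admit(buffer, capacity, value, beta):
    """Illustrative beta-preemptive greedy admission for a bounded FIFO buffer:
    accept when there is room, otherwise preempt the cheapest buffered packet
    only if the newcomer is worth at least beta times as much."""
    if len(buffer) < capacity:
        buffer.append(value)                      # room left: always accept
    else:
        k = min(range(len(buffer)), key=lambda idx: buffer[idx])
        if value >= beta * buffer[k]:
            del buffer[k]                         # preempt the lowest-value packet
            buffer.append(value)                  # the newcomer joins the FIFO tail
        # otherwise the arriving packet is dropped
    return buffer

print(beta_preemptive_admit([1.0, 2.0], capacity=2, value=2.5, beta=1.67))   # [2.0, 2.5]
```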

17.
To meet the QoS (Quality of Service) requirements of mixed traffic in medium- and high-speed sensor networks, a scheduling algorithm called AM-LWDF is proposed that guarantees QoS for mixed traffic by jointly considering physical-layer and data-link-layer parameters across layers. The algorithm takes both delay priority and throughput priority into account, and builds an optimization model that maximizes system throughput subject to the QoS constraints of real-time traffic, so that real-time traffic meets its low-delay requirements while non-real-time traffic achieves high throughput. Simulation results show that the algorithm can flexibly strike a satisfactory trade-off between delay and throughput while guaranteeing fairness among users carrying different traffic types.
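AM-LWDF is presented as a modification of the LWDF family; as background, the sketch below computes the classic M-LWDF priority metric. The adaptive, cross-layer modification itself is not reproduced, and all user parameters in the example are hypothetical.

```python
import math

def mlwdf_pick(users):
    """Classic M-LWDF priority metric (the base rule that AM-LWDF presumably adapts).
    For user i:  a_i = -log(delta_i) / T_i,  score_i = a_i * hol_delay_i * rate_i / avg_rate_i,
    where delta_i is the target delay-violation probability and T_i the delay bound."""
    def score(u):
        a = -math.log(u["delta"]) / u["delay_bound"]
        return a * u["hol_delay"] * u["rate"] / u["avg_rate"]
    return max(users, key=score)["id"]

users = [  # hypothetical real-time and non-real-time users
    {"id": "rt",  "delta": 0.05, "delay_bound": 0.02, "hol_delay": 0.015, "rate": 2e6, "avg_rate": 1e6},
    {"id": "nrt", "delta": 0.50, "delay_bound": 1.00, "hol_delay": 0.100, "rate": 5e6, "avg_rate": 4e6},
]
print(mlwdf_pick(users))   # 'rt': the real-time user wins as its head-of-line delay nears its bound
```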

18.
A Maximal Matching Algorithm for QoS in Grid Resource Scheduling (cited once: 0 self-citations, 1 by others)
To address the low resource-scheduling rate caused by complex QoS parameter handling and exact matching in Grid resource selection, QoS parameters are classified by their nature and a QoS parameter distance is defined to judge the similarity of QoS parameters. On this basis, a relaxed parameter-processing model is proposed and a maximal matching scheduling algorithm is given. Experiments show that the algorithm improves system throughput, the task satisfaction rate, the resource scheduling rate, and overall resource utilization.

19.
The IEEE 802.16 standard provides both real-time and non-real-time services at every node. Because priority-based services have different quality-of-service (QoS) requirements, traditional scheduling algorithms need to be improved to make them more adaptable. To improve end-to-end delay performance, a hybrid scheduling algorithm combining EDD and WFQ is proposed. Simulation results show that, at each node, the proposed hybrid algorithm yields lower delay for real-time traffic than using EDD alone, allows a single BS to accommodate more subordinate SSs within the permitted end-to-end delay bound, and that the GPSS-mode scheduling mechanism from BS to SS produces lower delay than the GPC mode.
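The EDD-plus-WFQ combination can be sketched at the level of a single dequeue decision. The code below is an illustrative reading (real-time packets by earliest due date, non-real-time packets by WFQ finish tags), not the paper's exact hybrid rule, and the packet fields are assumptions.

```python
def wfq_finish_tag(virtual_time, last_finish, length, weight):
    """WFQ-style finish tag: start at max(current virtual time, the flow's previous
    finish tag), then add length/weight."""
    return max(virtual_time, last_finish) + length / weight

def hybrid_pick(rt_pkts, nrt_pkts):
    """Illustrative hybrid dequeue: pending real-time packets are served first in
    Earliest-Due-Date order; otherwise the non-real-time packet with the smallest
    WFQ finish tag goes out."""
    if rt_pkts:
        return min(rt_pkts, key=lambda p: p["deadline"])     # EDD among real-time packets
    if nrt_pkts:
        return min(nrt_pkts, key=lambda p: p["finish_tag"])  # smallest WFQ finish tag
    return None
```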

20.
Cross-layer optimization policy for QoS scheduling in computational grid (cited once: 0 self-citations, 1 by others)
This paper presents a cross-layer quality of service (QoS) optimization policy for the computational grid. Efficient QoS management is critical for a computational grid to cope with the heterogeneity and dynamics of resources and users' requirements. There are different QoS metrics at different layers of the computational grid. To improve the QoS perceived by end users, QoS support can be addressed at different layers, including the application layer, collective layer, fabric layer, and so forth. The paper tackles cross-layer grid QoS optimization as an optimization decomposition in which each layer corresponds to a decomposed subproblem. The proposed policy produces an optimal set of grid resources, service compositions, and user payments at the fabric, collective, and application layers, respectively, to maximize global grid QoS. The cross-layer optimization problem decomposes into three subproblems: the grid resource allocation problem, the service composition problem, and the user satisfaction degree maximization problem, all of which interact through the optimization variables for grid resource capacities and service demand. In order to coordinate the subproblems, a cross-layer QoS feedback mechanism is established to ensure interaction between the layers. Simulations are conducted to validate the efficiency of the proposed policy.
