Similar Documents
20 similar documents found.
1.
Matching output queueing with a combined input/output-queued switch (total citations: 19; self-citations: 0; cited by others: 19)
The Internet is facing two problems simultaneously: there is a need for a faster switching/routing infrastructure and a need to introduce guaranteed qualities-of-service (QoS). Each problem can be solved independently: switches and routers can be made faster by using input-queued crossbars instead of shared memory systems; QoS can be provided using weighted-fair queueing (WFQ)-based packet scheduling. Until now, however, the two solutions have been mutually exclusive: all of the work on WFQ-based scheduling algorithms has required that switches/routers use output queueing or centralized shared memory. This paper demonstrates that a combined input/output-queueing (CIOQ) switch running twice as fast as an input-queued switch can provide precise emulation of a broad class of packet-scheduling algorithms, including WFQ and strict priorities. More precisely, we show that for an N×N switch, a “speedup” of 2-1/N is necessary, and a speedup of two is sufficient, for this exact emulation. Perhaps most interestingly, this result holds for all traffic arrival patterns. On its own, the result is primarily a theoretical observation; it shows that it is possible to emulate purely OQ switches with CIOQ switches running at approximately twice the line rate. To make the result more practical, we introduce several scheduling algorithms that, with a speedup of two, can emulate an OQ switch. We focus our attention on the simplest of these algorithms, critical cells first (CCF), and consider its running time and implementation complexity. We conclude that additional techniques are required to make the scheduling algorithms implementable at high speed, and we propose two specific strategies.
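As a purely illustrative companion to the speedup result quoted above (not code from the paper), the sketch below simply evaluates the necessary speedup 2 - 1/N for a few switch sizes and shows how quickly it approaches the sufficient value of 2:

```python
# Illustrative only: the necessary speedup 2 - 1/N for an N x N CIOQ switch,
# as stated in the abstract; a speedup of 2 is sufficient for every N.
def necessary_speedup(n_ports: int) -> float:
    return 2.0 - 1.0 / n_ports

for n in (2, 4, 16, 64, 256):
    print(f"N = {n:3d}: necessary speedup = {necessary_speedup(n):.4f}")
```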

2.
The buffered crossbar switch is a promising switching architecture that plays a crucial role in providing quality of service (QoS) in computer networks. Sufficient amounts of resources—bandwidth and buffer space—must be allocated in buffered crossbar switches for QoS provision. Resource allocation based on deterministic QoS objectives might be too conservative in practical network operations. To improve resource utilization in buffered crossbar switches, we study the problem of resource allocation for statistical QoS provision in this paper. First, we develop a model and techniques for analyzing the probabilistic delay performance of buffered crossbar switches, which is described by the delay upper bound with a prescribed violation probability. Then, we determine the required amounts of bandwidth and buffer space to achieve the probabilistic delay objectives for different traffic classes in buffered crossbar switches. In our analysis, we apply the effective arrival envelope to specify traffic load in a statistical manner and characterize switch service capacity by using the service curve technique. Instead of focusing on one specific type of scheduler, the model and techniques developed in this paper are flexible and can be used for analyzing buffered crossbar switches with a wide variety of scheduling algorithms. Copyright © 2007 John Wiley & Sons, Ltd.

3.
We consider a problem motivated by the desire to provide flexible, rate-based, quality-of-service guarantees for packets sent over input-queued switches and switch networks. Our focus is solving a type of online traffic scheduling problem, whose input at each time step is a set of desired traffic rates through the switch network. These traffic rates in general cannot be exactly achieved, since they assume that arbitrarily small fractions of packets can be transmitted at each time step. The goal of the traffic scheduling problem is to closely approximate the given sequence of traffic rates by a sequence of transmissions in which only whole packets are sent. We prove worst-case bounds on the additional buffer use, which we call backlog, that results from using such an approximation. We first consider the N×N, input-queued, crossbar switch. Our main result is an online packet-scheduling algorithm using no speedup that guarantees a backlog of at most (N+1)²/4 packets at each input port and each output port. Upper bounds on worst-case backlog have been proved for the case of constant fluid schedules, such as the N²-2N+2 bound of Chang, Chen, and Huang (INFOCOM, 2000). Our main result for the crossbar switch is the first, to our knowledge, to bound backlog in terms of switch size N for arbitrary, time-varying fluid schedules, without using speedup. Our main result for Banyan networks is an exact characterization of the speedup required to maintain bounded backlog, in terms of polytopes derived from the network topology.
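As a small illustration (our own, not part of the paper), the sketch below evaluates the (N+1)²/4 backlog bound quoted above against the earlier N²-2N+2 bound of Chang, Chen, and Huang for a few port counts:

```python
# Compare the per-port backlog bounds quoted in the abstract:
#   (N+1)^2 / 4   -- this paper's bound (arbitrary time-varying fluid schedules)
#   N^2 - 2N + 2  -- Chang, Chen, and Huang (constant fluid schedules)
def bound_this_paper(n: int) -> float:
    return (n + 1) ** 2 / 4

def bound_chang(n: int) -> int:
    return n * n - 2 * n + 2

for n in (4, 8, 16, 32):
    print(f"N={n:2d}: (N+1)^2/4 = {bound_this_paper(n):7.1f}   N^2-2N+2 = {bound_chang(n):5d}")
```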

4.
We study the relationship between the degree of blocking and the amount of resource speedup necessary for blocking switches to possess the capabilities of nonblocking switches. We construct an analogy between switch configurations and the codewords of certain error control codes, for which we use space covering ideas to derive relations between speedup and the number of switch configurations. To derive the necessary speedup for nonblocking operation, we use two sphere packing bounds: the Hamming bound and the Gilbert-Varshamov bound. To construct nonblocking switches with a given speedup, we use maximum distance separable codes. We consider both multicast and point-to-point scenarios.

5.
We develop a general framework for a novel switch architecture, the cross-path switch, to provide per-session statistical quality-of-service (QoS) guarantees. Characterizing the service each session receives by service curves, we derive a set of statistical bounds on the delay, backlog, and departure processes at the switch in a per-session manner, using exponentially bounded burstiness processes as source session traffic models. These bounds show that the service guarantees offered by the cross-path switch depend on how tokens are assigned in the central stage of the switch. To provide better performance guarantees, we determine the criteria for designing a token assignment algorithm for the cross-path switch. We also quantify the service guaranteed by the cross-path switch with the central stage implemented in the optical domain, which is important for the provision of QoS guarantees to each session in semi-optical networks.

6.
This paper gives a class of flow control algorithms for the adaptive allocation of bandwidth to virtual connections (VCs) in high-speed, wide-area ATM networks. The feedback rate to the source from the network is parsimonious, with each feedback bit indicating whether the buffer at a distant switch is above or below a threshold. The service discipline at the switch is first-come-first-served. The important goal of adaptability is to make all of the network bandwidth available to the active VCs, even though the number of such VCs varies over a given range. Each VC has two parameters, one giving its minimum guaranteed bandwidth and the other giving its weight for determining its share of the uncommitted bandwidth. Judicious selection of these parameters defines distinctive services, such as best effort and best effort with minimum bandwidth. We derive design rules for selecting the parameters of the algorithms such that the appropriate guarantees and fairness properties are exhibited in the dynamical behavior. The systematic use of “damping” in right proportion with “gain” is shown to be a powerful device for stabilizing behavior and achieving fairness. Our analyses are based on a simple analytic fluid model composed of a system of first-order delay-differential equations, which reflect the propagation delay across the network. Extensive simulations examine the following: (1) fairness, especially to start-up VCs; (2) oscillations; (3) transient behavior, such as the rate of equalization from different initial conditions; (4) disparate bandwidth allocations; (5) multiple paths with diverse propagation delays; (6) adaptability and robustness with respect to parameters; and (7) interoperability of different algorithms.
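A toy discrete-time sketch of the one-bit feedback idea described above; this is our own illustration with hypothetical parameters, not the paper's delay-differential fluid model. Each VC raises its rate in proportion to its weight while the buffer is below the threshold and applies damping toward its guaranteed minimum when the buffer is above it.

```python
# Toy simulation (not the paper's model): K VCs share link capacity C.
# Each step, a single feedback bit tells every source whether the switch
# buffer is above a threshold; sources increase their rate in proportion
# to their weight when the bit is 0 and apply damping toward their minimum
# guaranteed rate when the bit is 1.
C = 100.0            # link capacity
threshold = 50.0     # buffer threshold driving the one-bit feedback
gain, damping = 2.0, 0.05
weights = [1.0, 2.0, 3.0]       # shares of the uncommitted bandwidth
min_rates = [5.0, 5.0, 10.0]    # minimum guaranteed bandwidths
rates = list(min_rates)
buffer = 0.0

for t in range(2000):
    buffer = max(0.0, buffer + sum(rates) - C)   # fluid buffer at the switch
    congested = buffer > threshold               # the one-bit feedback
    for i, w in enumerate(weights):
        if congested:
            rates[i] -= damping * (rates[i] - min_rates[i])
        else:
            rates[i] += gain * w / sum(weights)
        rates[i] = max(rates[i], min_rates[i])

print([round(r, 1) for r in rates], round(buffer, 1))
```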

7.
We propose an efficient parallel switching architecture that requires no speedup and guarantees bounded delay. Our architecture consists of k input-output-queued switches with first-in-first-out queues, operating at the line speed in parallel under the control of a single scheduler, with k being independent of the number N of inputs and outputs. Arriving traffic is demultiplexed (spread) over the k identical switches, switched to the correct output, and multiplexed (combined) before departing from the parallel switch. We show that by using an appropriate demultiplexing strategy at the inputs and by applying the same matching at each of the k parallel switches during each cell slot, our scheme guarantees a way for cells of a flow to be read in order from the output queues of the switches, thus eliminating the need for cell resequencing. Further, by allowing the scheduler to examine the state of only the first of the k parallel switches, our scheme also considerably reduces the amount of state information required by the scheduler. The switching algorithms that we develop are based on existing practical switching algorithms for input-queued switches, and have an additional communication complexity that is optimal up to a constant factor.

8.
On the provision of quality-of-service guarantees for input queued switches (total citations: 7; self-citations: 0; cited by others: 7)
While the Internet has quietly served as a research and education vehicle for more than two decades, the last few years have witnessed its tremendous growth and its great potential for providing a wide variety of services. As a result, input-queued switching architectures, because of their distinct advantage in building scalable switches, are currently receiving a lot of attention from both academia and industry as an attractive alternative for developing future-generation ATM/IP switches/routers. However, designing scheduling algorithms with QoS guarantees for input-queued switches has always been known to be a very challenging problem. We give an overview of the efforts in designing scheduling algorithms capable of providing QoS guarantees for input-queued switches. These algorithms are classified under three categories: those based on slot time assignment, those based on maximal matching, and those based on stable matching. We also present some open problems on this topic as future research directions in this area.

9.
We present three algorithms that provide performance guarantees for scheduling switches, such as optical switches, with configuration overhead. Each algorithm emulates an unconstrained (zero overhead) switch by accumulating a batch of configuration requests and generating a corresponding schedule for a constrained switch. Speedup is required both to cover the configuration overhead of the switch and to compensate for empty slots left by the scheduling algorithm. Scheduling algorithms are characterized by the number of configurations N_s they require to cover a batch of requests and the speedup S_min required to compensate for empty slots. Initially, all switch reconfiguration is assumed to occur simultaneously. We show that a well-known exact matching algorithm, EXACT, leaves no empty slots (i.e., S_min = 1), but requires N_s ≈ N² configurations for an N-port switch, leading to high configuration overhead or large batches and, hence, high delay. We present two new algorithms that substantially reduce the number of configurations required. MIN covers a batch of requests in the minimum possible number of configurations, N_s = N, but at the expense of many empty slots, S_min ≈ 4 log₂ N. DOUBLE strikes a balance, requiring twice as many configurations, N_s = 2N, while reducing the number of empty slots so that S_min = 2. Loosening the restriction on reconfiguration times, the scheduling problem is cast as an open shop. The best known practical scheduling algorithm for open shops, list scheduling (LIST), gives the same emulation requirements as DOUBLE. Therefore, we conclude that our architecture gains no advantage from allowing arbitrary switch reconfiguration. Finally, we show that DOUBLE and LIST offer the lowest required speedup to emulate an unconstrained switch across a wide range of port counts and delays.
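To make the flavor of MIN concrete, here is a rough sketch in our own words (not the authors' code) of covering a batch with exactly N configurations using the N generalized diagonals of the demand matrix; holding each diagonal for its largest entry is what leaves the empty slots that MIN pays for with speedup:

```python
# Sketch of covering a batch with N configurations using the N generalized
# diagonals of the demand matrix D (D[i][j] = cells queued from input i to
# output j).  Configuration k connects input i to output (i + k) mod N and
# is held for the largest entry on that diagonal; smaller entries on the
# same diagonal leave empty slots, which is why extra speedup is needed.
def diagonal_schedule(D):
    n = len(D)
    schedule = []
    for k in range(n):
        duration = max(D[i][(i + k) % n] for i in range(n))
        if duration > 0:
            config = {i: (i + k) % n for i in range(n)}  # input -> output
            schedule.append((config, duration))
    return schedule

D = [[3, 0, 1],
     [0, 2, 4],
     [1, 1, 0]]
for config, dur in diagonal_schedule(D):
    print(config, "held for", dur, "slots")
```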

10.
Providing quality-of-service (QoS) guarantees over wireless packet networks poses a host of technical challenges that are not present in wireline networks. One of the key issues is how to account for the characteristics of the time-varying wireless channel and for the impact of link-layer error control in the provisioning of packet-level QoS. We accommodate both aspects in analyzing the packet-loss performance over a wireless link. We consider the cases of single and multiplexed traffic streams. The link capacity fluctuates according to a fluid version of the Gilbert-Elliott channel model. Traffic sources are modeled as on-off fluid processes. For the single-stream case, we derive the exact packet-loss rate (PLR) due to buffer overflow at the sender side of the wireless link. We also obtain a closed-form approximation for the corresponding wireless effective bandwidth. In the case of multiplexed streams, we obtain a good approximation for the PLR using the Chernoff-dominant eigenvalue (CDE) approach. Our analysis is then used to study the optimal forward error correction code rate that guarantees a given PLR while minimizing the allocated bandwidth. Numerical results and simulations are used to verify the adequacy of our analysis and to study the impact of error control on the allocation of bandwidth for guaranteed packet-loss performance.

11.
The continuous growth in the demand for diversified quality-of-service (QoS) guarantees in broadband networks introduces new challenges in the design of packet switches that scale to large switching capacities. Packet scheduling is the most critical function involved in the provision of individual bandwidth and delay guarantees to the switched flows. Most of the scheduling techniques proposed so far assume the presence in the switch of a single contention point, residing in front of the outgoing links. Such an assumption is not consistent with the highly distributed nature of many popular architectures for scalable switches, which typically have multiple contention points, located in both ingress and egress port cards, as well as in the switching fabric. We define a distributed multilayered scheduler (DMS) to provide differentiated QoS guarantees to individual end-to-end flows in packet switches with multiple contention points. Our scheduling architecture is simple to implement, since it keeps per-flow scheduling confined within the port cards, and is suitable to support guaranteed and best-effort traffic in a wide range of QoS frameworks in both IP and ATM networks.

12.
Current satellite systems operate according to circuit-switching transfer modes. To improve flexibility and efficiency, several kinds of packet switching systems have been proposed. However, it appears that full packet switches are still too complex and expensive to be implemented on board satellites in the near future. For the time being, dynamic bandwidth allocation capabilities (DBAC) provide a compromise solution when satellite systems are based on classical circuit switches, since the DBAC payload allows the capacity of each connection to be changed dynamically, without teardown and setup. We consider a DBAC satellite system and define algorithms to allocate the bandwidth so as to provide deterministic and statistical QoS guarantees. Standard dual leaky buckets (DLBs) regulate the traffic sources. We define bandwidth-handling policies, design connection admission control rules, and evaluate the system performance analytically. Results show a significant increase in the bandwidth utilization of our system when compared to a plain circuit-switching solution.
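A minimal sketch of the kind of dual leaky bucket (DLB) regulation mentioned above; the parameter names and values are illustrative, not taken from the paper. One bucket enforces a sustained rate with a burst allowance, the other a peak rate with a small depth, and a packet conforms only if both agree.

```python
# Illustrative dual leaky bucket (token-bucket pair); parameters are hypothetical.
class TokenBucket:
    def __init__(self, rate, depth):
        self.rate, self.depth = rate, depth
        self.tokens, self.last = depth, 0.0

    def refill(self, t):
        self.tokens = min(self.depth, self.tokens + (t - self.last) * self.rate)
        self.last = t

def dlb_conforms(buckets, t, size):
    for b in buckets:
        b.refill(t)
    if all(size <= b.tokens for b in buckets):     # conformant only if both agree
        for b in buckets:
            b.tokens -= size
        return True
    return False

sustained = TokenBucket(rate=100.0, depth=500.0)   # sustained rate, burst depth
peak = TokenBucket(rate=400.0, depth=100.0)        # peak rate, small depth

for t, size in [(0.0, 100), (0.1, 100), (0.2, 400), (3.0, 100)]:
    ok = dlb_conforms([sustained, peak], t, size)
    print(f"t={t:4.1f}s size={size}B -> {'conformant' if ok else 'non-conformant'}")
```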

13.
A maximal matching algorithm switches packets through a crossbar with a speedup of two without blocking them. That is, traffic will pass through a crossbar controlled by a maximal matching algorithm as long as its outputs are not overloaded. Consequently, bandwidth reservations with delay guarantees are simple to provide. We propose a protocol for distributed bandwidth reservations, in which users check communication availability among themselves. It will also be shown that maximal matching algorithms cannot utilize the full crossbar capacity for some particular traffic patterns.
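For reference, a maximal matching (as opposed to a maximum one) can be computed by a simple greedy pass over the pending requests; the sketch below is purely illustrative and is not the reservation protocol proposed in the paper.

```python
# Greedy maximal matching over a request matrix R (R[i][j] = 1 if input i has
# a cell for output j).  The result cannot be extended by any further edge,
# which is all "maximal" requires; it need not have maximum size.
def greedy_maximal_matching(R):
    n = len(R)
    matched_out, matching = set(), {}
    for i in range(n):
        for j in range(n):
            if R[i][j] and j not in matched_out:
                matching[i] = j
                matched_out.add(j)
                break
    return matching

R = [[1, 1, 0],
     [1, 0, 0],
     [0, 1, 1]]
print(greedy_maximal_matching(R))   # {0: 0, 2: 1}: maximal, but not maximum
```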

14.
Randomized scheduling algorithms for high-aggregate bandwidth switches (total citations: 1; self-citations: 0; cited by others: 1)
The aggregate bandwidth of a switch is its port count multiplied by its operating line rate. We consider switches with high aggregate bandwidths; for example, a 30-port switch operating at 40 Gb/s or a 1000-port switch operating at 1 Gb/s. Designing high-performance schedulers for such switches with input queues is a challenging problem for the following reasons: (1) high performance requires finding good matchings; (2) good matchings take time to find; and (3) in high-aggregate bandwidth switches there is either too little time (due to high line rates) or too much work to do (due to a high port count). We exploit the following features of the switching problem to devise simple-to-implement, high-performance schedulers for high-aggregate bandwidth switches: (1) the state of the switch (carried in the lengths of its queues) changes slowly with time, implying that heavy matchings will likely stay heavy over a period of time; and (2) observing arriving packets conveys useful information about the state of the switch. These features are exploited using hardware parallelism and randomization to yield three scheduling algorithms: APSARA, LAURA, and SERENA. These algorithms are shown to achieve 100% throughput, and simulations show that their delay performance is quite close to that of maximum weight matching, even when the traffic is correlated. We also consider the stability of these algorithms under generic admissible traffic using the fluid-model technique. The main contribution of this paper is a suite of simple-to-implement, high-performance scheduling algorithms for input-queued switches. We exploit a novel operation, called MERGE, which combines the edges of two matchings to produce a heavier matching, and study the properties of this operation via simulations and theory. The stability proof of the randomized algorithms we present involves a derandomization procedure and uses methods which may have wider applicability.
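A rough sketch of a MERGE-style combination, under our own simplifying assumptions (both inputs are full permutation matchings over the same ports, and edge weights come from a queue-length matrix); it is illustrative, not the authors' implementation. The union of the two matchings decomposes into alternating cycles, and within each cycle we keep whichever matching contributes more weight, so the result is never lighter than either input.

```python
# Sketch of a MERGE-style combination of two permutation matchings m1, m2
# (m[i] = output matched to input i) using a weight matrix W (e.g. queue
# lengths).  The union of m1 and m2 decomposes into alternating cycles over
# the inputs; within each cycle we keep the edges of whichever matching has
# the larger total weight there.
def merge(m1, m2, W):
    n = len(m1)
    inv2 = {out: i for i, out in enumerate(m2)}   # output -> input under m2
    seen, result = [False] * n, [None] * n
    for start in range(n):
        if seen[start]:
            continue
        cycle, i = [], start
        while not seen[i]:
            seen[i] = True
            cycle.append(i)
            i = inv2[m1[i]]     # follow the m1 edge, then come back along m2
        w1 = sum(W[i][m1[i]] for i in cycle)
        w2 = sum(W[i][m2[i]] for i in cycle)
        src = m1 if w1 >= w2 else m2
        for i in cycle:
            result[i] = src[i]
    return result

W = [[5, 0, 1],
     [2, 7, 0],
     [0, 3, 9]]
print(merge([0, 1, 2], [1, 2, 0], W))   # -> [0, 1, 2] on this example
```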

15.
Input-buffered switches have been widely considered for implementing feasible packet switches. However, their matching process may not be time-efficient for switches with high-speed ports. Buffered crossbars (BXs) are an alternative that relaxes timing for packet switches with high-speed ports and provides high-performance switching. BX switches were originally considered expensive, as the memory required in the crosspoints (XPs) is proportional to the square of the number of ports (O(N²)). This limitation is now less stringent with advances in chip-fabrication techniques and when small crosspoint buffer sizes are considered. In this paper, we study a combined input-crosspoint buffered packet switch, named CIXB, with virtual output queues (VOQs) at the inputs and arbitration based on round-robin selection. We show that the CIXB switch achieves 100% throughput under uniform traffic, and high performance under nonuniform traffic, using a one-cell XP buffer size and no speedup.
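A compact sketch (our own, with hypothetical data structures) of one cell slot of the arbitration pattern described above: each input scans its VOQs round-robin for a cell whose one-cell crosspoint buffer is free, and each output scans its column of crosspoint buffers round-robin for a cell to transmit.

```python
# One cell slot of a round-robin buffered-crossbar (CIXB-like) sketch with
# one-cell crosspoint buffers.  voq[i][j] = cells queued at input i for
# output j; xp[i][j] = True if the crosspoint buffer (i, j) holds a cell.
def cell_slot(voq, xp, in_ptr, out_ptr):
    n = len(voq)
    # Input arbitration: each input forwards at most one cell to a free XP buffer.
    for i in range(n):
        for k in range(n):
            j = (in_ptr[i] + k) % n
            if voq[i][j] > 0 and not xp[i][j]:
                voq[i][j] -= 1
                xp[i][j] = True
                in_ptr[i] = (j + 1) % n      # advance the round-robin pointer
                break
    # Output arbitration: each output drains at most one crosspoint buffer.
    for j in range(n):
        for k in range(n):
            i = (out_ptr[j] + k) % n
            if xp[i][j]:
                xp[i][j] = False             # cell leaves on output link j
                out_ptr[j] = (i + 1) % n
                break

n = 3
voq = [[2, 0, 1], [0, 3, 0], [1, 0, 0]]
xp = [[False] * n for _ in range(n)]
in_ptr, out_ptr = [0] * n, [0] * n
cell_slot(voq, xp, in_ptr, out_ptr)
print(voq, xp)
```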

16.
Providing quality-of-service guarantees in both cell- and packet-based networks requires the use of a scheduling algorithm in the switches and network interfaces. These algorithms need to be implemented in hardware in a high-speed switch. The authors present a number of approaches to implementing scheduling algorithms in hardware. They begin by presenting a general methodology for the design of timestamp-based fair queuing algorithms that provide the same bounds on end-to-end delay and fairness as those of weighted fair queuing, yet have efficient hardware implementations. Based on this general methodology, the authors describe two specific algorithms, frame-based fair queuing and starting potential-based fair queuing, and discuss illustrative implementations in hardware. These algorithms may be used in both cell switches and packet switches with variable-size packets. A methodology for combining a traffic shaper with this class of fair queuing schedulers is also presented for use in network interface devices, such as an ATM segmentation and reassembly device.
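As an illustration of the timestamp idea behind this family of schedulers, here is a minimal self-clocked fair-queueing sketch of our own (not the frame-based or starting-potential algorithms themselves): each arriving packet of flow f receives a finish tag F = max(last tag of f, current virtual time) + length / weight, and the scheduler always transmits the packet with the smallest tag.

```python
import heapq

# Generic self-clocked fair-queueing sketch (illustrative only).  Each flow f
# has a weight w_f; an arriving packet gets finish tag
#   F = max(last_finish[f], V) + length / w_f,
# where V is the tag of the packet most recently selected for service, and the
# scheduler always sends the packet with the smallest tag.
class FairQueue:
    def __init__(self, weights):
        self.w = weights
        self.last_finish = {f: 0.0 for f in weights}
        self.heap = []          # (finish_tag, flow, length)
        self.v = 0.0            # virtual time

    def enqueue(self, flow, length):
        tag = max(self.last_finish[flow], self.v) + length / self.w[flow]
        self.last_finish[flow] = tag
        heapq.heappush(self.heap, (tag, flow, length))

    def dequeue(self):
        tag, flow, length = heapq.heappop(self.heap)
        self.v = tag
        return flow, length

fq = FairQueue({"A": 2.0, "B": 1.0})
for flow, size in [("A", 100), ("A", 100), ("B", 100), ("B", 100)]:
    fq.enqueue(flow, size)
while fq.heap:
    print(fq.dequeue())   # flow A (weight 2) gets served ahead of flow B
```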

17.
Multimedia applications, such as video-conferencing and video-on-demand, often require quality-of-service (QoS) guarantees from the network, typically in the form of minimum bandwidth, maximum delay, jitter, and packet loss constraints, among others. The problem of multicast routing subject to various forms of QoS constraints has been studied extensively. However, most previous efforts have focused on special situations where a single constraint or a pair of constraints is considered. In general, routing under multiple constraints, even in the unicast case, is an NP-complete problem. We present in this paper two practical and efficient algorithms, called multi-constrained QoS-dependent multicast routing (M_QDMR) and multicast routing with multi-constrained optimal path selection (M_MCOP), for QoS-based multicast routing under multiple constraints with cost optimization. We prove in the paper that our algorithms are correct. Furthermore, through extensive simulations, we illustrate the effectiveness and efficiency of our proposals and demonstrate their significant performance improvement in creating multicast trees with lower cost and higher success probability. Copyright © 2003 John Wiley & Sons, Ltd.

18.
Input-queued cell switches employing the oldest-cell-first (OCF) policy have been shown to yield low mean delay characteristics. Moreover, it has been proven that OCF is stable for admissible traffic conditions when executed with a scheduling speedup of 2. However, as link speeds increase, the computational complexity of these algorithms limits their applicability in high port-density switches and routers. To address the scalability issue, we describe a Frame-Based Maximal Weight Matching (FMWM) algorithm, employing OCF as queue weights, in which a new scheduling decision is issued once every several cell times. Between scheduling decisions, the configuration of the crossbar switch remains unchanged. We further extend the analysis to address the case of multiple classes of service, and prove that the algorithm is stable with an internal buffer transfer speedup of 2, thereby significantly relaxing the timing constraints imposed on the scheduling process.

19.
Wireless ad-hoc networks consist of mobile nodes interconnected by multi-hop wireless paths. Unlike conventional wireless networks, ad-hoc networks have no fixed network infrastructure or administrative support. Because of the dynamic nature of the network topology and the limited bandwidth of wireless channels, Quality-of-Service (QoS) provisioning is an inherently complex and difficult issue. In this paper, we propose a fully distributed and adaptive algorithm to provide statistical QoS guarantees with respect to accessibility of services in an ad-hoc network. In this algorithm, we focus on the optimization of a new QoS parameter of interest, service efficiency, while keeping protocol overheads to the minimum. To achieve this goal, we theoretically derive the lower and upper bounds of service efficiency based on a novel model for group mobility, followed by extensive simulation results to verify the effectiveness of our algorithm.

20.
Cao, Guohong. Wireless Networks, 2003, 9(2): 131-142.
Next-generation high-speed cellular networks are expected to support multimedia applications, which require QoS provisions. Since frequency spectrum is the most expensive resource in wireless networks, it is a challenge to support QoS using limited frequency spectrum. In the literature, two orthogonal approaches are used to address the bandwidth utilization issue and the QoS provision issue; that is, channel allocation schemes have been proposed to improve bandwidth efficiency, whereas handoff management schemes, based on bandwidth reservation, have been proposed to guarantee a low connection dropping rate. However, little effort has been made to address both issues together. In this paper, we integrate distributed channel allocation and adaptive handoff management to provide QoS guarantees and efficiently utilize the bandwidth. First, we present a complete distributed channel allocation algorithm and propose techniques to reduce its message complexity and intra-handoff overhead. Second, we integrate the proposed distributed channel allocation algorithm with an adaptive handoff management scheme to provide QoS guarantees and efficiently utilize the bandwidth. Detailed simulation experiments are carried out to evaluate the proposed methodology. Compared to previous schemes, our scheme can significantly reduce the message complexity and intra-handoff overhead. Moreover, the proposed scheme can improve bandwidth utilization while providing QoS guarantees.
