Similar documents (20 results found)
1.
A switch model for ATM networks is analyzed. Its interconnection network is internally nonblocking and is provided with dedicated input and output queues, one per switch inlet and one per switch outlet. The switch operates with an internal speed-up: more than one packet per slot can be transferred from the head-of-line positions of the input queues to each output queue by the interconnection network. Two different operation modes are considered for the interaction between input and output queues: backpressure mode and queue loss mode. The analytical model developed for the evaluation of the switch performance under random traffic assumes an infinite size for the switch, arbitrary values for input and output queue size, as well as for the speed-up factor. Switch throughput, packet delay and loss performance are evaluated and the analytical model accuracy is assessed using computer simulation results.
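A slot-level Monte-Carlo sketch can make the two operation modes concrete. This is an illustration only (the paper develops an analytical model for an infinite switch); the parameter values, the random contention-resolution order and all names below are assumptions, not the paper's setup.

```python
import random
from collections import deque

def simulate(n=64, slots=20000, load=0.9, speedup=2, out_buf=8,
             backpressure=True, seed=1):
    """Illustrative slot-level simulation of an input/output-queued switch
    with internal speed-up; not the paper's analytical model."""
    random.seed(seed)
    in_q = [deque() for _ in range(n)]   # one FIFO per switch inlet
    out_q = [0] * n                      # occupancy of each output queue
    delivered = lost = 0
    for _ in range(slots):
        # Bernoulli arrivals with uniformly addressed outputs (random traffic)
        for i in range(n):
            if random.random() < load:
                in_q[i].append(random.randrange(n))
        # speed-up: each output accepts up to `speedup` HOL cells per slot
        accepted = [0] * n
        order = list(range(n))
        random.shuffle(order)            # random resolution of HOL contention
        for i in order:
            if in_q[i]:
                d = in_q[i][0]
                if accepted[d] < speedup:
                    if out_q[d] < out_buf:
                        out_q[d] += 1
                        accepted[d] += 1
                        in_q[i].popleft()
                    elif not backpressure:   # queue loss mode: overflow cell dropped
                        lost += 1
                        in_q[i].popleft()
                    # backpressure mode: the cell simply stays at the HOL
        # each output line transmits one cell per slot
        for d in range(n):
            if out_q[d]:
                out_q[d] -= 1
                delivered += 1
    return delivered / (n * slots), lost / (n * slots)

print(simulate(backpressure=True), simulate(backpressure=False))
```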

2.
This work studies the performance of a nonblocking space-division packet switch in a correlated input traffic environment. In constructing the input traffic model, the author considers that each input is a time division multiaccess (TDM) link connecting to multiple sources. Every source on a link supports one call at a time. Each call experiences the alternation of ON and OFF periods, and generates packets periodically while in an ON period. The stochastic property of each call does not have to be identical. Packets from each individual call are destined to the same output. The output address of each call is assumed to be uniformly assigned at random. The author derives both upper and lower bounds on the maximum throughput at system saturation. His study indicates that, if the source access rate is substantially lower than the link transmission rate, the effect of input traffic correlation on the output contentions can generally be ignored. Also, the analysis of each input queue becomes separable from the rest of the switch. The same study is carried out with nonuniform call address assignment.
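A minimal generator for this kind of correlated input can be sketched as follows. It is not the paper's model (the sojourn-time distributions, emission period, number of sources and number of outputs here are arbitrary assumptions), but it shows the structure: per-call ON/OFF alternation, periodic packet emission while ON, and a fixed output address per call.

```python
import random

def on_off_link(num_sources=8, slots=1000, p_on=0.2, p_off=0.1,
                period=4, num_outputs=16, seed=0):
    """Sketch of one TDM input link carrying several ON/OFF calls.
    Each call emits a cell every `period` link slots while ON; all of its
    cells go to one fixed, uniformly chosen output."""
    random.seed(seed)
    state = [False] * num_sources                      # all calls start OFF
    dest  = [random.randrange(num_outputs) for _ in range(num_sources)]
    phase = [random.randrange(period) for _ in range(num_sources)]
    cells = []                                         # (slot, source, destination)
    for t in range(slots):
        for s in range(num_sources):
            if state[s]:
                if (t - phase[s]) % period == 0:       # periodic emission while ON
                    cells.append((t, s, dest[s]))
                if random.random() < p_off:            # geometric ON sojourn
                    state[s] = False
            elif random.random() < p_on:               # geometric OFF sojourn
                state[s] = True
    return cells

print(len(on_off_link()), "cells generated")
```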

3.
This paper is concerned with ATM traffic characterization within the network. Most of the work performed so far has studied the effects of traffic on the access multiplexer and the first switch of an ATM network, with various source models assumed to generate the ATM traffic. Thus, while the performance of a single switch node has been examined exhaustively, the statistical behavior of the traffic as it is modified while crossing the network has not yet been analyzed thoroughly. This paper, through an analysis of a network of cascaded queues, indicates that limit distributions exist in the statistical behavior of the traffic streams and in the queue performance, although a formal proof is believed to be very hard to obtain. The first modelling step consists of deriving the exact interdeparture time distribution for the cells of a reference connection arriving at the output queue of a switch node with a general interarrival time distribution and multiplexed with a background traffic stream. The analysis is iterated through a long sequence of cascaded output queues, until the interdeparture time distribution converges. Simulations show that the analytical results are accurate at each stage of the network under the hypothesis of independent queues, and are also good approximations in the case of correlated queues. This study shows that the queue performance at the limit point is always better than the M/D/1 case. The distributions found in this way depend only on the connection bandwidth and on the background traffic behavior. The initial characteristics of a connection (burst length distributions and burst interarrival time distributions) influence only the convergence speed, not the limit distribution.

4.
Two simple models of queueing on an N × N space-division packet switch are examined. The switch operates synchronously with fixed-length packets; during each time slot, packets may arrive on any inputs addressed to any outputs. Because packet arrivals to the switch are unscheduled, more than one packet may arrive for the same output during the same time slot, making queueing unavoidable. Mean queue lengths are always greater for queueing on inputs than for queueing on outputs, and the output queues saturate only as the utilization approaches unity. Input queues, on the other hand, saturate at a utilization that depends on N, but is approximately (2 − √2) = 0.586 when N is large. If output trunk utilization is the primary consideration, it is possible to slightly increase utilization of the output trunks, up to (1 − 1/e) = 0.632 as N → ∞, by dropping interfering packets at the end of each time slot, rather than storing them in the input queues. This improvement is possible, however, only when the utilization of the input trunks exceeds a second critical threshold, approximately ln(1 + √2) = 0.881 for large N.
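The three constants quoted above come from the head-of-line (HOL) blocking analysis. A few lines of Python reproduce them and check the 2 − √2 saturation figure with a crude simulation of saturated FIFO input queues; the simulation setup is an illustration, not the paper's derivation.

```python
import math, random

print("input-queue saturation  2 - sqrt(2)  =", 2 - math.sqrt(2))        # ~0.586
print("drop-policy trunk limit 1 - 1/e      =", 1 - math.exp(-1))        # ~0.632
print("second threshold        ln(1+sqrt 2) =", math.log(1 + math.sqrt(2)))  # ~0.881

def hol_saturation(n=128, slots=5000, seed=0):
    """Saturated FIFO input queues: every HOL slot always holds a cell with a
    fresh uniform destination; measure cells delivered per output per slot."""
    random.seed(seed)
    hol = [random.randrange(n) for _ in range(n)]
    served = 0
    for _ in range(slots):
        winners = {}
        for i, d in enumerate(hol):
            winners.setdefault(d, i)        # one contender per output wins
        served += len(winners)
        for d, i in winners.items():
            hol[i] = random.randrange(n)    # winner's queue offers a new HOL cell
    return served / (n * slots)

print("simulated saturation throughput:", hol_saturation())
```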

5.
Achieving 100% throughput in an input-queued switch
It is well known that head-of-line blocking limits the throughput of an input-queued switch with first-in-first-out (FIFO) queues. Under certain conditions, the throughput can be shown to be limited to approximately 58.6%. It is also known that if non-FIFO queueing policies are used, the throughput can be increased. However, it had not previously been shown that, with a suitable queueing policy and scheduling algorithm, it is possible to achieve 100% throughput for all independent arrival processes. In this paper we prove this to be the case using a simple linear programming argument and a quadratic Lyapunov function. In particular, we assume that each input maintains a separate FIFO queue for each output and that the switch is scheduled using a maximum weight bipartite matching algorithm. We introduce two maximum weight matching algorithms: longest queue first (LQF) and oldest cell first (OCF). Both algorithms achieve 100% throughput for all independent arrival processes. LQF favors queues with larger occupancy, ensuring that larger queues will eventually be served. However, we find that LQF can lead to the permanent starvation of short queues. OCF overcomes this limitation by favoring cells with large waiting times.
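One LQF scheduling decision is a maximum-weight bipartite matching over the virtual output queue (VOQ) occupancies. Below is a minimal sketch that uses scipy's assignment solver (rather than whatever implementation the paper assumes) and an invented 4×4 occupancy matrix.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def lqf_schedule(queue_len):
    """One LQF decision: a maximum-weight bipartite matching where the weight
    of edge (input i, output j) is the VOQ occupancy Q[i][j] (sketch)."""
    q = np.asarray(queue_len, dtype=float)
    rows, cols = linear_sum_assignment(q, maximize=True)
    # keep only matched pairs whose VOQ is non-empty
    return [(i, j) for i, j in zip(rows, cols) if q[i, j] > 0]

# Q[i][j] = number of cells at input i destined for output j (made-up example)
Q = [[3, 0, 1, 0],
     [0, 5, 0, 2],
     [4, 0, 0, 0],
     [0, 1, 2, 6]]
print(lqf_schedule(Q))   # one cell forwarded per matched (input, output) pair
```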

6.
The iSLIP scheduling algorithm for input-queued switches
An increasing number of high performance internetworking protocol routers, LAN and asynchronous transfer mode (ATM) switches use a switched backplane based on a crossbar switch. Most often, these systems use input queues to hold packets waiting to traverse the switching fabric. It is well known that if simple first-in first-out (FIFO) input queues are used to hold packets then, even under benign conditions, head-of-line (HOL) blocking limits the achievable bandwidth to approximately 58.6% of the maximum. HOL blocking can be overcome by the use of virtual output queueing, which is described in this paper. A scheduling algorithm is used to configure the crossbar switch, deciding the order in which packets will be served. Previous results have shown that with a suitable scheduling algorithm, 100% throughput can be achieved. In this paper, we present a scheduling algorithm called iSLIP. An iterative round-robin algorithm, iSLIP can achieve 100% throughput for uniform traffic, yet is simple to implement in hardware. Iterative and noniterative versions of the algorithm are presented, along with modified versions for prioritized traffic. Simulation results are presented to indicate the performance of iSLIP under benign and bursty traffic conditions. Prototype and commercial implementations of iSLIP exist in systems with aggregate bandwidths ranging from 50 to 500 Gb/s. When the traffic is nonuniform, iSLIP quickly adapts to a fair scheduling policy that is guaranteed never to starve an input queue. Finally, we describe the implementation complexity of iSLIP. Based on a two-dimensional (2-D) array of priority encoders, single-chip schedulers have been built supporting up to 32 ports and making approximately 100 million scheduling decisions per second.
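A single request-grant-accept iteration of iSLIP can be sketched as below. It follows the round-robin pointer rules the abstract alludes to, but it is a software illustration only; the example request matrix and the restriction to one iteration are assumptions.

```python
def islip_iteration(requests, grant_ptr, accept_ptr):
    """One request-grant-accept iteration of iSLIP for an N x N switch (sketch).
    requests[i][j] is True if input i has a cell for output j (non-empty VOQ);
    grant_ptr[j] and accept_ptr[i] are the round-robin pointers."""
    n = len(requests)
    # Grant phase: each output grants the requesting input nearest its pointer.
    grants = {}                                    # output -> granted input
    for j in range(n):
        for k in range(n):
            i = (grant_ptr[j] + k) % n
            if requests[i][j]:
                grants[j] = i
                break
    # Accept phase: each input accepts the granting output nearest its pointer.
    granted_outputs = {}                           # input -> outputs granting it
    for j, i in grants.items():
        granted_outputs.setdefault(i, []).append(j)
    accepted = []                                  # matched (input, output) pairs
    for i, outs in granted_outputs.items():
        for k in range(n):
            j = (accept_ptr[i] + k) % n
            if j in outs:
                accepted.append((i, j))
                # pointers advance one past the match only when it is accepted
                accept_ptr[i] = (j + 1) % n
                grant_ptr[j] = (i + 1) % n
                break
    return accepted

req = [[True, False, True, False],
       [True, True, False, False],
       [False, False, False, True],
       [False, True, False, False]]
gp, ap = [0] * 4, [0] * 4
print(islip_iteration(req, gp, ap))
```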

7.
Many high-speed routers today use input-queuing (IQ) architectures with a crossbar switching fabric based on optical technology. Packets in the input queues are divided into cells of unit length, and the goal is to find a schedule of minimum makespan that forwards all packets to the output ports. The problem is complicated because, in optical switches, the so-called configuration delay, that is, the time required to reconfigure the switching fabric, is non-negligible with respect to the cell transmission time. We aim to design a scheduler whose complexity does not depend on the number of packets in the input queues. Thus, we focus on the nonpreemptive bipartite scheduling (NPBS) problem, where each input queue is connected to each output port in at most one configuration. We demonstrate that the NPBS problem is NP-hard for any value of the configuration delay, and that approximation within a ratio smaller than 7/6 is NP-hard as well. For the offline version of the NPBS problem, we show that a simple greedy algorithm achieves an approximation factor of 2 for arbitrary configuration delay. Then, we consider the online version of the NPBS problem, where the switch gathers the incoming traffic periodically and then schedules the accumulated batches. We propose a scheduling algorithm that guarantees strict delay for any admissible traffic, provided that the switch has a moderate speed-up of two. Finally, we extend our results to the nonbipartite scheduling problem.
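A greedy decomposition in the spirit of the offline result can be sketched as follows: remaining demands are scanned in decreasing order, conflict-free pairs are packed into one configuration, each pair is served entirely within that configuration, and the makespan accumulates the configuration delay plus the largest demand in each configuration. This is an illustrative reading, not necessarily the exact greedy analyzed in the paper; the demand matrix and delay value are made up.

```python
def greedy_npbs(demand, delay):
    """Greedy nonpreemptive schedule sketch: each (input, output) pair appears
    in exactly one configuration, satisfying the NPBS constraint."""
    remaining = {(i, j): d for i, row in enumerate(demand)
                 for j, d in enumerate(row) if d > 0}
    makespan, schedule = 0, []
    while remaining:
        used_in, used_out, config = set(), set(), {}
        # largest remaining demands first, skipping conflicting rows/columns
        for (i, j), d in sorted(remaining.items(), key=lambda kv: -kv[1]):
            if i not in used_in and j not in used_out:
                config[(i, j)] = d
                used_in.add(i)
                used_out.add(j)
        for key in config:
            del remaining[key]
        schedule.append(config)
        makespan += delay + max(config.values())   # hold until the largest demand drains
    return makespan, schedule

D = [[4, 0, 2],
     [1, 3, 0],
     [0, 2, 5]]
print(greedy_npbs(D, delay=2))
```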

8.
Consider a single-node queueing system which can be modeled by a finite quasi-birth-death (QBD) process. We present a computational technique for spectral analyses (i.e., second-order statistics) of output, queue, and loss. The emphasis is placed on the evaluation of the output power spectrum and the input-output coherence function with respect to various input power spectral properties and system parameters. The coherence function is defined to measure the linear relationship between the input and output processes. Through the evaluation of the coherence function, we identify a so-called nonlinear break frequency, ω_b, below which low-frequency traffic passes through a queueing system essentially intact. Such low-frequency input-output linearity plays an important role in characterizing the output process, which may form a partial input to other “downstream” queues of the network. In particular, the unchanged “upstream” low-frequency traffic characteristics are expected to have a significant impact on the “downstream” queues as well. Our numerical analysis examines the sensitivity of ω_b to traffic characteristics and system parameters. The study further indicates that the link capacity requirement of traffic at a given buffer system is essentially characterized by its maximum input rate filtered at ω_b.

9.
Son J.W., Lee H.T., Oh Y.Y., Lee J.Y., Lee S.B. Electronics Letters, 1997, 33(14):1192-1193
A switch architecture is proposed for alleviating HOL blocking by employing even/odd dual FIFO queues at each input and even/odd dual switching planes, each dedicated to one of the queues. Under random traffic, it gives 76.4% throughput without output expansion and 100% with output expansion r=2, with the same number of crosspoints as the ordinary output expansion scheme.

10.
The authors study the performance of a nonblocking space-division packet switch, given that the traffic intensities at the switch are not only nonuniform but also change as a function of time. A finite-state Markov chain is used as an underlying process to govern the time variation of traffic for the entire switch. The packet arrivals at each input form an independent Bernoulli process modulated by the underlying Markov chain. The output address of each packet is independently and randomly assigned with probability distributions which are also modulated by the Markov chain. Provided that the traffic on each output is not dominated by individual inputs, the service time of each output queue for sufficiently large switches can be characterized by an independent Markov-modulated phase-type process. A matrix-geometric solution for the resultant quasi-birth-death type queueing process is presented. The maximum throughput is obtained at system saturation. The performance of the switch is numerically examined under various traffic conditions. A contention priority scheme to improve the switch performance is proposed.
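The arrival side of this model can be sketched with a two-state Markov-modulated Bernoulli process. The state-transition and arrival probabilities below are arbitrary illustrative values, and the full model additionally modulates the output-address distribution, which this sketch omits.

```python
import random

def mmbp_arrivals(slots=10000, p=(0.9, 0.3), stay=(0.95, 0.9), seed=0):
    """Markov-modulated Bernoulli arrivals for one input (sketch): a two-state
    underlying chain ('heavy'/'light') modulates the per-slot arrival probability."""
    random.seed(seed)
    state, cells = 0, []
    for t in range(slots):
        if random.random() < p[state]:          # arrival probability in this state
            cells.append(t)
        if random.random() > stay[state]:       # leave the current state
            state = 1 - state
    return cells

print(len(mmbp_arrivals()), "cells in 10000 slots")
```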

11.
Pierre Le Gall. Annales des Télécommunications, 1994, 49(3-4):111-126
To model traffic correctly in packet-switched networks, for single links with FCFS discipline at each node, we introduce the influence of the phenomenon of short packets agglutinating behind long packets. It then becomes necessary to abandon the usual notions of local traffic sources and local G/G/1 queues in favor of the more realistic notion of traffic sources located at the network input, with local queues appearing as the consequence of a tandem queue whose parameters we define. The influence of the upstream transfer delay and of the upstream network topology then becomes apparent. We compare theoretical results with those derived from traffic simulations.

12.
Presents a new scheduler, the two-dimensional round-robin (2DRR) scheduler, that provides high throughput and fair access in a packet switch that uses multiple input queues. We consider an architecture in which each input port maintains a separate queue for each output. In an N×N switch, our scheduler determines which of the N² input queues are served during each time slot. We demonstrate the fairness properties of the 2DRR scheduler and compare its performance with that of the input and output queueing configurations, showing that our scheme achieves the same saturation throughput as output queueing. The 2DRR scheduler can be implemented using simple logic components, thereby allowing a very high-speed implementation.
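The core idea of serving the N² queues through rotating patterns can be illustrated with generalized diagonals of the request matrix. The sketch below uses a single rotating diagonal per slot, a simplification of the full 2DRR pattern-sequence construction, with an invented 4×4 request matrix.

```python
def twodrr_slot(requests, slot):
    """One time slot of a two-dimensional round-robin sketch: every
    (input, output) pair on one generalized diagonal of the N x N request
    matrix with a pending request is served; rotating the diagonal from slot
    to slot gives every queue periodic access (simplified 2DRR illustration)."""
    n = len(requests)
    k = slot % n                       # which diagonal is active in this slot
    served = []
    for i in range(n):
        j = (i + k) % n                # entries of one generalized diagonal
        if requests[i][j]:
            served.append((i, j))      # no two entries share a row or a column
    return served

req = [[1, 0, 1, 0],
       [0, 1, 0, 0],
       [1, 0, 0, 1],
       [0, 0, 1, 0]]
for t in range(4):
    print("slot", t, "->", twodrr_slot(req, t))
```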

13.
The performance analysis of an input access scheme in a high-speed packet switch for broadband ISDN is presented. In this switch, each input port maintains a separate queue for each of the outputs, giving n² input queues in an (n×n) switch. Using synchronous operation, at most one packet per input and per output will be transferred in any slot. We derive lower and upper bounds for the throughput which show close-to-optimal performance. The bounds are very tight and approach unity for switch sizes on the order of a hundred under any traffic load, which is a significant result by itself. Then the mean packet delay is derived and its variance is bounded. A neural network implementation of this input access scheme is given. The energy function of the network, its optimized parameters and the connection matrix are determined. Simulation results of the neural network fall between the theoretical throughput bounds.

14.
This paper develops an improved analysis of ATM switching architectures adopting a replicated banyan interconnection network provided with dedicated input and output queues, one per switch inlet and outlet. Two different plane selection policies are studied, random choice and alternate sharing, and two different operation modes are considered for the interaction between input and output queues, backpressure and output queue loss. These different internal operations are ranked in terms of traffic performance, and the problem of optimally allocating a given buffer budget between input and output queues is addressed. The analysis, which assumes that the network is loaded by uniform traffic, always provides conservative results, whereas known models are less accurate and give optimistic traffic results. Packet delay and loss probability performance is evaluated for the ATM switch, and its accuracy is assessed using computer simulation, also in comparison with results given by previous models.

15.
The multiple input-queued (MIQ) asynchronous transfer mode (ATM) switch has drawn much interest as a promising candidate for a high-speed, high-performance packet switch. The most conspicuous feature of the switch is that each input port is equipped with m (1 ⩽ m ⩽ N) distinct queues, each dedicated to a group of output ports. Since the MIQ switch has multiple queues, an input can serve up to m cells in a time slot, leading to enhanced performance. We derive the average queue length, mean cell delay, and cell loss probability for the MIQ switch in terms of the number of queues per input port (m) and the input load. The results include the special case of the single input-queued (SIQ) switch (m=1) analyzed by Hui et al. (1987).

16.
We consider a problem motivated by the desire to provide flexible, rate-based, quality of service guarantees for packets sent over input-queued switches and switch networks. Our focus is solving a type of online traffic scheduling problem, whose input at each time step is a set of desired traffic rates through the switch network. These traffic rates in general cannot be exactly achieved since they assume arbitrarily small fractions of packets can be transmitted at each time step. The goal of the traffic scheduling problem is to closely approximate the given sequence of traffic rates by a sequence of transmissions in which only whole packets are sent. We prove worst-case bounds on the additional buffer use, which we call backlog, that results from using such an approximation. We first consider the N×N input-queued crossbar switch. Our main result is an online packet-scheduling algorithm using no speedup that guarantees backlog at most (N+1)²/4 packets at each input port and each output port. Upper bounds on worst-case backlog have been proved for the case of constant fluid schedules, such as the N²−2N+2 bound of Chang, Chen, and Huang (INFOCOM, 2000). Our main result for the crossbar switch is the first, to our knowledge, to bound backlog in terms of switch size N for arbitrary, time-varying fluid schedules, without using speedup. Our main result for Banyan networks is an exact characterization of the speedup required to maintain bounded backlog, in terms of polytopes derived from the network topology.
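The two backlog bounds quoted above are easy to compare numerically; the snippet below simply evaluates them for a few switch sizes (plain arithmetic, no assumptions beyond the formulas in the abstract).

```python
# Per-port backlog bounds, in packets, for a few switch sizes N.
def bound_time_varying(n):    # (N + 1)^2 / 4, arbitrary time-varying fluid schedules
    return (n + 1) ** 2 / 4

def bound_ccw(n):             # N^2 - 2N + 2, constant fluid schedules (Chang et al.)
    return n ** 2 - 2 * n + 2

for n in (4, 8, 16, 32):
    print(f"N={n:3d}  (N+1)^2/4 = {bound_time_varying(n):7.1f}   N^2-2N+2 = {bound_ccw(n):7d}")
```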

17.
A new packet switch architecture using two sets of time-division multiplexed buses is proposed. The horizontal buses collect packets from the input links, while the vertical buses distribute the packets to the output links. The two sets of buses are connected by a set of switching elements which coordinate the connections between the horizontal buses and the vertical buses so that each vertical bus is connected to only one horizontal bus at a time. The switch has the advantages of: (1) adding input and output links without increasing the bus and I/O adaptor speed; (2) being internally unbuffered; (3) having a very simple control circuit; and (4) having 100% throughput under uniform traffic. A combined analytical-simulation method is used to obtain the packet delay and packet loss probability. Numerical results show that for satisfactory performance, the buses need to run about 30% faster than the input line rate. With this speedup, even at a utilization factor of 0.9, each input adaptor requires only 31 buffers for a packet loss rate of 10⁻⁶. The output queue behaves essentially as an M/D/1 queue.
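Since the abstract models the output queue as M/D/1, the standard Pollaczek-Khinchine expressions give its mean behavior directly. A small sketch, with the deterministic service time normalized to one slot and purely illustrative load values:

```python
def md1_stats(rho):
    """Mean occupancy and mean waiting delay of an M/D/1 queue
    (Pollaczek-Khinchine), service time taken as one slot."""
    assert 0 <= rho < 1
    lq = rho ** 2 / (2 * (1 - rho))    # mean number of cells waiting
    l  = rho + lq                      # mean number of cells in the system
    wq = rho / (2 * (1 - rho))         # mean waiting time, in service times
    return l, wq

for rho in (0.5, 0.8, 0.9):
    l, wq = md1_stats(rho)
    print(f"rho={rho}: mean occupancy {l:.2f} cells, mean wait {wq:.2f} slots")
```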

18.
The output queues of an M×N packet switch are studied using a Markov-modulated flow model. The switching element is a central server which sequentially routes packets from the inputs to the outputs. The focus is on systems in which the server speed is such that the bulk of the queuing takes place in the output queues. The conventional point process approach neglects the impact of switching and transmission time. An attempt is made to account for these finite system speeds by using a Markov-modulated continuous flow to approximate the arrival process to an output queue. This model captures the dependency between arrivals at different outputs and reflects the fact that packet arrivals and departures are not instantaneous. The output queue content distribution is obtained, for both infinite and finite buffer systems, from the spectral expansion of the solution of a system of differential equations. Numerical examples and comparisons with the results of an M/M/1 approximation are presented.

19.
ATM (asynchronous transfer mode) is a new technique for transmitting voice, data and video. The performance of ATM networks will depend on switch structure. A performance analysis of an ATM switch based on a three-stage Clos network is presented. Two types of switches are studied: a switch with input queues in the switching elements and a switch with output queues. This study is at the cell level and aims to dimension the switch. First, the traffic is assumed to be uniform: cells arrive on each input according to a geometric arrival process and are uniformly directed over all the network outputs. An analytic model is proposed for both input and output queues in the switching elements. A study of the saturation throughput is proposed for input-buffered switching elements. This work demonstrates the influence of buffer dimensioning on the different stages of the switch. Dissymmetric switching elements are shown to be better than symmetric ones. A model is then designed for nonuniform traffic patterns and output buffers. Two types of nonuniform traffic are presented: single source to single destination (SSSD) traffic and multiple hot-spot (MHS) traffic. Discrete event simulations are used to validate the different models.

20.
We introduce a new approach to ATM switching. We propose an ATM switch architecture which uses only a single shift-register-type buffering element to store and queue cells and, within the same (physical) queue, switches the cells by organizing them into logical queues destined for different output lines. The buffer is also a sequencer which allows flexible ordering of the cells in each logical queue, so that any appropriate scheduling algorithm can be realized. This switch is proposed for use as the building block of large-scale multistage ATM switches because of its low hardware complexity and its flexibility in providing (per-VC) scheduling among the cells. The switch can also be used as a scheduler/controller for RAM-based switches. The single-queue switch implements output queueing and performs full buffer sharing. The hardware complexity is low. The number of input and output lines can vary independently without affecting the switch core. The size of the buffering space can be increased simply by cascading the buffering elements.
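The notion of logical per-output queues living inside one shared physical buffer can be illustrated with a toy software model. This is only a functional sketch of the idea: the class name, capacity, and FIFO-per-output service discipline are assumptions, and the actual design is a hardware shift-register sequencer, not a software loop.

```python
from collections import deque

class SingleQueueSwitch:
    """Toy model of one shared buffering element holding logical per-output
    queues: all cells live in one physical FIFO, and each output is served
    in the arrival order of its own cells."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = deque()                  # (output, payload), one shared buffer

    def enqueue(self, output, payload):
        if len(self.buffer) >= self.capacity:  # full buffer sharing: one global limit
            return False                       # cell lost
        self.buffer.append((output, payload))
        return True

    def dequeue(self, output):
        for k, (out, payload) in enumerate(self.buffer):
            if out == output:                  # head of that output's logical queue
                del self.buffer[k]
                return payload
        return None                            # logical queue for this output is empty

sw = SingleQueueSwitch(capacity=8)
for cell in [(0, "a"), (1, "b"), (0, "c")]:
    sw.enqueue(*cell)
print(sw.dequeue(0), sw.dequeue(1), sw.dequeue(0))   # a b c
```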
