Similar Documents
20 similar documents found (search time: 15 ms)
1.
This paper describes an efficient contention resolution algorithm, and its distributed implementation, for large-capacity input-queuing cross-connect switches, which will establish virtual paths in future broadband ATM networks. The algorithm dynamically allocates sending time to cells held in input queues when no contention is indicated at the designated output ports. Expressions for the mean delay and the cell loss probability under random traffic are derived through an approximate analysis. Input cells are served on a first-come, first-served basis, as in conventional contention resolution algorithms, whose throughput saturates at 58 per cent because of head-of-line blocking in the input queues. The proposed algorithm achieves a maximum throughput of 76 per cent.
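The 58 per cent figure quoted above is the classical head-of-line blocking limit for FIFO input queues, 2 − √2 ≈ 0.586 for large N under uniform random traffic. The short Monte Carlo check below illustrates that limit under simplifying assumptions (saturated inputs, independent uniform destinations, random contention resolution); it is a sketch for intuition, not the paper's analysis, and all names and parameters are illustrative.

```python
import random

def hol_saturation_throughput(n=32, slots=20000, seed=1):
    """Estimate the saturated throughput of FIFO input queues with HOL blocking.

    Assumptions (not from the paper): every input always has a cell waiting,
    each new HOL cell picks a uniformly random output, and contention for an
    output is resolved by choosing one contender at random per slot.
    """
    rng = random.Random(seed)
    hol = [rng.randrange(n) for _ in range(n)]   # destination of each input's HOL cell
    served = 0
    for _ in range(slots):
        contenders = {}                          # output -> inputs whose HOL cell wants it
        for inp, out in enumerate(hol):
            contenders.setdefault(out, []).append(inp)
        for out, inputs in contenders.items():
            winner = rng.choice(inputs)          # random contention resolution
            hol[winner] = rng.randrange(n)       # winner reveals a fresh HOL cell
            served += 1
    return served / (n * slots)

print(round(hol_saturation_throughput(), 3))     # close to 2 - sqrt(2) ≈ 0.586
```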

2.
When two or more packets destined for the same output of an ATM switch arrive at different inputs, buffers at the inputs or outputs are used to queue all but one of these packets so that external conflict is prevented. Although input-buffering ATM switches are more economical and simpler than output-buffering ATM switches, significant loss of throughput can occur in input-buffering switches due to head-of-line (HOL) blocking when first-in first-out (FIFO) queueing is employed. In order to avoid external conflicts and alleviate HOL blocking in non-blocking ATM switches, several window-based contention resolution algorithms have been proposed in the literature. In this paper, we propose a window-based contention resolution algorithm for a blocking ATM switch based on a reverse baseline network with content-addressable FIFO (CAFIFO) input buffers. The proposed algorithm prevents not only external conflicts but also internal conflicts, in addition to alleviating HOL blocking. It was obtained by adapting the ring reservation algorithm used in non-blocking ATM switches to a reverse baseline network. The fact that a non-blocking network is replaced by a log₂N-stage reverse baseline network yields a significant economy in implementation. We have conducted extensive simulations to evaluate the performance of the reverse baseline network using the proposed window-based contention resolution algorithm. Simulation results show that the throughput of the reverse baseline network can be as good as that of non-blocking switches if the window depth of the input buffers is made sufficiently large. Copyright © 2000 John Wiley & Sons, Ltd.
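The windowing idea is easy to prototype in isolation. The sketch below is a minimal, hypothetical model of window-based scheduling for a nonblocking N×N switch: in each time slot every input may scan up to the first w cells in its queue and transmit the first one whose destination has not yet been claimed. It deliberately ignores the reverse baseline network's internal conflicts and the CAFIFO hardware; the function name, the fixed input scan order, and the parameters are illustrative assumptions only.

```python
import random
from collections import deque

def window_schedule(queues, w):
    """One time slot of window-based scheduling (sketch, not the paper's algorithm).

    queues: list of deques of destination output indices, one deque per input.
    w:      window depth; each input may look at up to its first w cells.
    Returns a list of (input, output) pairs transmitted this slot.
    """
    claimed = set()                       # outputs already reserved in this slot
    grants = []
    for inp, q in enumerate(queues):      # fixed scan order, for simplicity only
        for pos in range(min(w, len(q))):
            dest = q[pos]
            if dest not in claimed:       # first unclaimed destination wins
                claimed.add(dest)
                grants.append((inp, dest))
                del q[pos]                # remove the scheduled cell from the queue
                break
    return grants

# Toy usage: 4x4 switch, window depth 2, random destinations.
random.seed(0)
queues = [deque(random.randrange(4) for _ in range(5)) for _ in range(4)]
print(window_schedule(queues, w=2))
```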

3.
Asynchronous transfer mode (ATM) has been designated as the switching environment for future broadband integrated services digital network (B-ISDN) networks and services. Although input-buffered space switches are more economical and simpler to implement than output-buffered space switches, they suffer from external blocking because of destination-port contention. We review contention resolution methods used to avoid external blocking and choose a solution based on ring reservation, resulting in an elegant and efficient mechanism requiring only nearest-neighbor communications. In addition to external blocking, space switches suffer from head-of-line (HOL) blocking, and our technique alleviates HOL blocking without arbitration time overhead. This method makes use of a novel content-addressable first-in/first-out buffer (CAFIFO) to achieve single-cycle windowing, and the CAFIFO design and operation are described in detail. High multicast throughput is achieved by employing call splitting. Multiple latency priorities can also be supported. Simulation results, for both unicast and multicast switching and both random and bursty traffic, highlight the versatility and excellent performance of the CAFIFO-based switch.
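Ring reservation, as summarized above, only requires each input to see a reservation word from its neighbor. The sketch below models one arbitration pass in software: a set of reserved outputs is carried around the inputs in ring order, and each input claims the output requested by its head-of-line cell if that output is still free. It is a hedged illustration of the general technique, not this paper's hardware design, and the windowing and CAFIFO refinements are omitted.

```python
def ring_reservation(hol_requests):
    """One reservation pass around the ring (illustrative sketch).

    hol_requests: list where entry i is the output requested by input i's
                  head-of-line cell, or None if input i is idle.
    Returns a dict mapping each winning input to its reserved output.
    """
    reserved = set()    # the reservation state circulated around the ring
    winners = {}
    for inp, out in enumerate(hol_requests):   # token visits inputs in ring order
        if out is not None and out not in reserved:
            reserved.add(out)
            winners[inp] = out
    return winners

# Example: inputs 0 and 2 both want output 3; only the first on the ring wins.
print(ring_reservation([3, 1, 3, None]))   # {0: 3, 1: 1}
```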

4.
Access control and performance for multicast packet switching in a broadband network environment are studied. In terms of scheduling the transmission of the copies of a packet onto the output ports, two basic service disciplines are defined: one-shot scheduling (all copies transmitted in the same time slot) and call splitting (transmission over several time slots). As subcategories of call splitting, strict-sense (SS) splitting specifies that each packet can send at most one copy to its destinations per time slot, whereas wide-sense (WS) splitting does not carry this restriction. A scheme called revision scheduling, which mitigates the head-of-line (HOL) blocking effect by sequentially combining the one-shot scheduling and call splitting disciplines, is proposed. Output contention resolution implementations, in the form of combinational logic circuits designed to resolve the output contentions arising in each of the call scheduling disciplines, are introduced. A neural-network-based contention resolution algorithm is proposed to demonstrate the improvement achievable with optimal scheduling.

5.
The asynchronous transfer mode (ATM) is the transport mode of choice for broadband integrated services digital networks (B-ISDNs). We propose a window-based contention resolution algorithm to achieve higher throughput for nonblocking switches in ATM environments. In a nonblocking switch with input queues, significant loss of throughput can occur due to head-of-line (HOL) blocking when first-in first-out (FIFO) queueing is employed. To resolve this problem, we employ bypass queueing and present a cell scheduling algorithm that maximizes the switch throughput. We also employ a queue-length-based priority scheme to reduce cell delay variations and cell loss probabilities. With this priority scheme, the variance of the cell delay is also significantly reduced under nonuniform traffic, resulting in lower cell loss rates (CLRs) at a given buffer size. As the cell scheduling controller, we propose a neural network (NN) model which exploits a high degree of parallelism. Owing to the higher switch throughput achieved with our cell scheduling, the cell loss probabilities and the buffer sizes necessary to guarantee a given CLR become smaller than those of other approaches based on sequential input window scheduling or output queueing.

6.
A multicast switching architecture called a duplex multicast switch is proposed, and several switch control algorithms are developed. A duplex multicast switch, with two point-to-point routing networks, has the potential to provide significantly better performance than a simplex multicast switch by reducing the output loadings of the routing networks. To fully realize this potential, two call distribution algorithms, cluster distribution and spread distribution, are developed. Cluster distribution is further divided into partial-search and exhaustive-search cluster distribution based on the search policy, and the performance of the three algorithms is analyzed by the arrival modulation technique. The results show that spread distribution eliminates most slot contention blocking and achieves near-optimal performance.

7.
Son J.W., Lee H.T., Oh Y.Y., Lee J.Y., Lee S.B., Electronics Letters, 1997, 33(14): 1192–1193
A switch architecture is proposed for alleviating HOL blocking by employing even/odd dual FIFO queues at each input and even/odd dual switching planes, each dedicated to one of the even/odd queues. Under random traffic, it gives 76.4% throughput without output expansion and 100% with output expansion r = 2, using the same number of crosspoints as the ordinary output expansion scheme.

8.
Multicast switching is emerging as a new switching technology that can provide efficient transport in a broadband network for video and other multipoint communication services. The authors develop and analyze call scheduling algorithms for a multicast switch. In particular, they examine two general classes of scheduling algorithms: call packing algorithms and call splitting algorithms. The performance improvement offered by the call packing algorithms examined is shown to be negligible. In contrast, the call splitting algorithms can provide significantly lower blocking by reducing the level of output port contention. However, excessive call splitting can degrade performance because of the additional load introduced at the input ports. The authors present a simple call splitting algorithm, called greedy splitting, which achieves near-optimal performance.
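Greedy splitting, as described, simply ships in each slot every copy whose output is free and carries the blocked copies over to later slots. Below is a minimal, hedged sketch of that discipline for a multicast input-queued switch; it is not the authors' exact algorithm, and input ordering, fairness, and admission control are ignored.

```python
def greedy_split(hol_fanouts, num_outputs):
    """One slot of greedy call splitting (illustrative sketch).

    hol_fanouts: list of sets; entry i is the set of outputs still owed copies
                 by input i's head-of-line multicast cell (empty set = idle).
    Mutates hol_fanouts in place and returns the (input, output) copies sent.
    """
    free = set(range(num_outputs))
    sent = []
    for inp, fanout in enumerate(hol_fanouts):
        served = fanout & free            # send every copy whose output is idle
        for out in sorted(served):
            sent.append((inp, out))
        fanout -= served                  # leftover copies wait for later slots
        free -= served
    return sent

# Example: two HOL multicast cells contending for output 1.
fanouts = [{0, 1}, {1, 2}]
print(greedy_split(fanouts, num_outputs=4))   # [(0, 0), (0, 1), (1, 2)]
print(fanouts)                                # [set(), {1}] -- residue left at input 1
```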

9.
A viable ATM switch architecture exploiting both input and output queueing on a space-division switch is proposed. This architecture features both input and output ports that are divided into several groups, and an efficient contention resolution algorithm is developed. The performance study indicates that a group size of eight is sufficient to achieve 90% efficiency.

10.
A new input-buffered ATM switching fabric based on CNN cell scheduling (total citations: 1; self-citations: 0; citations by others: 1)
陈金山, 韦岗, 《通信学报》 (Journal on Communications), 2000, 21(4): 71–74
An input-buffered ATM switching fabric (ASF) scheme based on cellular-neural-network (CNN) cell scheduling is proposed. The scheme eliminates the performance degradation of input-buffered ASFs caused by head-of-line blocking. Computer simulations show that the scheme is highly effective, with performance close to that of an output-buffered ASF.

11.
Presents a new scheduler, the two-dimensional round-robin (2DRR) scheduler, that provides high throughput and fair access in a packet switch that uses multiple input queues. We consider an architecture in which each input port maintains a separate queue for each output. In an N×N switch, our scheduler determines which of the N² input queues are served during each time slot. We demonstrate the fairness properties of the 2DRR scheduler and compare its performance with that of the input- and output-queueing configurations, showing that our scheme achieves the same saturation throughput as output queueing. The 2DRR scheduler can be implemented using simple logic components, thereby allowing a very high-speed implementation.
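A rough software model of the basic 2DRR idea is given below: requests live in an N×N matrix (one virtual output queue per input/output pair), and in every slot the N generalized diagonals are swept in an order that rotates from slot to slot, so each diagonal gets first pick equally often. This is a hedged sketch of the general mechanism under assumed diagonal patterns, not the authors' exact pattern sequence; the function name and driver example are made up.

```python
def tdrr_slot(request, slot, n):
    """One time slot of a basic two-dimensional round-robin sweep (sketch).

    request: n x n matrix of booleans; request[i][j] is True when the VOQ
             at input i holding cells for output j is nonempty.
    slot:    current slot number, used to rotate which diagonal goes first.
    Returns the list of (input, output) pairs matched this slot.
    """
    matched_in, matched_out, matches = set(), set(), []
    for k in range(n):
        d = (slot + k) % n                      # diagonal index, rotated per slot
        for i in range(n):
            j = (i + d) % n                     # generalized diagonal: each input
            if (request[i][j]                   # and output appears exactly once
                    and i not in matched_in and j not in matched_out):
                matched_in.add(i)
                matched_out.add(j)
                matches.append((i, j))
    return matches

# Example: 4x4 switch where every input requests output 0 plus one other output.
req = [[j == 0 or j == (i % 4) for j in range(4)] for i in range(4)]
print(tdrr_slot(req, slot=0, n=4))
```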

12.
The iSLIP scheduling algorithm for input-queued switches (total citations: 1; self-citations: 0; citations by others: 1)
An increasing number of high-performance internetworking protocol routers, LAN switches, and asynchronous transfer mode (ATM) switches use a switched backplane based on a crossbar switch. Most often, these systems use input queues to hold packets waiting to traverse the switching fabric. It is well known that if simple first-in first-out (FIFO) input queues are used to hold packets then, even under benign conditions, head-of-line (HOL) blocking limits the achievable bandwidth to approximately 58.6% of the maximum. HOL blocking can be overcome by the use of virtual output queueing, which is described in this paper. A scheduling algorithm is used to configure the crossbar switch, deciding the order in which packets will be served. Previous results have shown that with a suitable scheduling algorithm, 100% throughput can be achieved. In this paper, we present a scheduling algorithm called iSLIP. An iterative round-robin algorithm, iSLIP can achieve 100% throughput for uniform traffic, yet is simple to implement in hardware. Iterative and noniterative versions of the algorithm are presented, along with modified versions for prioritized traffic. Simulation results are presented to indicate the performance of iSLIP under benign and bursty traffic conditions. Prototype and commercial implementations of iSLIP exist in systems with aggregate bandwidths ranging from 50 to 500 Gb/s. When the traffic is nonuniform, iSLIP quickly adapts to a fair scheduling policy that is guaranteed never to starve an input queue. Finally, we describe the implementation complexity of iSLIP. Based on a two-dimensional (2-D) array of priority encoders, single-chip schedulers have been built supporting up to 32 ports and making approximately 100 million scheduling decisions per second.
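For readers who want to experiment with the round-robin grant/accept mechanics, here is a compact software model of iSLIP-style matching over virtual output queues. It follows the published description (grant and accept pointers advance only when a grant is accepted in the first iteration), but variable names, the data layout, and the driver example are illustrative assumptions, not taken from any real implementation.

```python
def islip(voq, grant_ptr, accept_ptr, iterations=1):
    """iSLIP-style matching for one cell slot (simplified sketch).

    voq:        n x n matrix; voq[i][j] > 0 means input i has cells for output j.
    grant_ptr:  per-output round-robin pointers (advanced on accepted 1st-iteration grants).
    accept_ptr: per-input round-robin pointers (advanced likewise).
    Returns a dict mapping each matched input to its output.
    """
    n = len(voq)
    match = {}                                   # input -> output
    for it in range(iterations):
        # Grant phase: each unmatched output offers to the next requesting unmatched input.
        grants = {}                              # input -> list of granting outputs
        for j in range(n):
            if j in match.values():
                continue
            for step in range(n):
                i = (grant_ptr[j] + step) % n
                if i not in match and voq[i][j] > 0:
                    grants.setdefault(i, []).append(j)
                    break
        # Accept phase: each input takes the next granting output after its pointer.
        for i, offers in grants.items():
            for step in range(n):
                j = (accept_ptr[i] + step) % n
                if j in offers:
                    match[i] = j
                    if it == 0:                  # pointers move only on 1st-iteration accepts
                        grant_ptr[j] = (i + 1) % n
                        accept_ptr[i] = (j + 1) % n
                    break
    return match

# Example: 3x3 switch, inputs 0 and 1 contending for output 0.
voq = [[2, 0, 1], [1, 0, 0], [0, 3, 0]]
print(islip(voq, grant_ptr=[0, 0, 0], accept_ptr=[0, 0, 0], iterations=2))
```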

13.
The authors consider the output contention problem with a view towards increasing the throughput of asynchronous transfer mode (ATM) switching systems. A cell scheduling algorithm for increasing the throughput is proposed. The maximum throughput is increased up to 0.957. The efficiency (output trunk utilization/input trunk utilization) is almost equal to 100% and is independent of the switch size and traffic load. A switching system implemented with this cell scheduling algorithm is also proposed. Such a switching network usually consists of a sorting network followed by a routing network; here, it is sufficient for a sorting network alone to establish conflict-free input-output paths simultaneously, and it is not necessary to append a routing network. In addition, a parallel mesh-connected architecture for a component of the switching system is proposed to speed up the cell scheduling of the system. Consequently, this approach can offer an effective alternative for ATM switching systems.

14.
This paper analyzes the performance, under backpressure control, of an internally non-blocking input/output-queued ATM switch with point-to-multipoint service capability. Cell arrivals at each input port have the same intensity; each head-of-line cell is replicated, according to a common probability distribution, into several head-of-line cells destined for different output ports; each replicated head-of-line cell is equally likely, with probability 1/N, to be destined for any given output port; and both the input and output buffer capacities are finite. To guarantee that no cells are lost inside the switch, a backpressure mechanism is introduced. Numerical solutions are obtained using the matrix-geometric analysis method, and computer simulation results confirm that the theoretical analysis is correct.

15.
We propose an efficient parallel switching architecture that requires no speedup and guarantees bounded delay. Our architecture consists of k input-output-queued switches with first-in-first-out queues, operating at the line speed in parallel under the control of a single scheduler, with k being independent of the number N of inputs and outputs. Arriving traffic is demultiplexed (spread) over the k identical switches, switched to the correct output, and multiplexed (combined) before departing from the parallel switch. We show that by using an appropriate demultiplexing strategy at the inputs and by applying the same matching at each of the k parallel switches during each cell slot, our scheme guarantees a way for cells of a flow to be read in order from the output queues of the switches, thus eliminating the need for cell resequencing. Further, by allowing the scheduler to examine the state of only the first of the k parallel switches, our scheme also reduces considerably the amount of state information required by the scheduler. The switching algorithms that we develop are based on existing practical switching algorithms for input-queued switches, and have an additional communication complexity that is optimal up to a constant factor.

16.
Multicast scheduling for input-queued switches (total citations: 10; self-citations: 0; citations by others: 0)
We design a scheduler for an M×N input-queued multicast switch. It is assumed that: 1) each input maintains a single queue for arriving multicast cells and 2) only the cell at the head of line (HOL) can be observed and scheduled at one time. The scheduler needs to be: 1) work-conserving (no output port may be idle as long as there is an input cell destined to it) and 2) fair (which means that no input cell may be held at HOL for more than a fixed number of cell times). The aim is to find a work-conserving, fair policy that delivers maximum throughput and minimizes input queue latency, and yet is simple to implement. When a scheduling policy decides which cells to schedule, contention may require that it leave a residue of cells to be scheduled in the next cell time. The selection of where to place the residue uniquely defines the scheduling policy. Subject to a fairness constraint, we argue that a policy which always concentrates the residue on as few inputs as possible generally outperforms all other policies. We find that there is a tradeoff among concentration of residue (for high throughput), strictness of fairness (to prevent starvation), and implementational simplicity (for the design of high-speed switches). By mapping the general multicast switching problem onto a variation of the popular block-packing game Tetris, we are able to analyze various scheduling policies which possess these attributes in different proportions. We present a novel scheduling policy, called TATRA, which performs extremely well and is strict in fairness. We also present a simple weight-based algorithm, called WBA.
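The weight-based idea mentioned at the end lends itself to a very small model: each input advertises a weight for its HOL multicast cell that rewards age (for fairness) and penalizes fanout (to concentrate residue on few inputs), and every output grants to the heaviest requesting input. The sketch below is a hedged approximation with made-up coefficients and tie-breaking; the paper's exact WBA weighting rules may differ.

```python
def wba_slot(hol, num_outputs, age_coef=1, fanout_coef=1):
    """One slot of a WBA-style multicast grant (illustrative sketch).

    hol: list of dicts, one per input, each like {"age": 3, "fanout": {0, 2}};
         an input with an empty fanout set is idle.
    Returns the list of (input, output) copies transmitted this slot.
    """
    # Weight favors old cells and penalizes large fanouts (residue concentration).
    weight = [age_coef * h["age"] - fanout_coef * len(h["fanout"]) for h in hol]
    sent = []
    for out in range(num_outputs):
        contenders = [i for i, h in enumerate(hol) if out in h["fanout"]]
        if contenders:
            winner = max(contenders, key=lambda i: (weight[i], -i))  # ties -> lower input
            sent.append((winner, out))
            hol[winner]["fanout"].discard(out)    # copy delivered; residue stays queued
    return sent

# Example: the older cell at input 1 wins the contested output 0.
hol = [{"age": 0, "fanout": {0, 1}}, {"age": 5, "fanout": {0, 2}}]
print(wba_slot(hol, num_outputs=3))   # [(1, 0), (0, 1), (1, 2)]
```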

17.
We evaluate the performance of an N × N ATM discrete-time multicast switch model with input queueing operating under two input access disciplines. First, we present the analysis for the case of a purely random access discipline, and subsequently we concentrate on a cyclic priority access based on a circulating token ring. In both cases, we focus on two head-of-line (HOL) packet service disciplines. Under the first (one-shot transmission discipline), all the copies generated by each HOL packet seek simultaneous transmission during the same time slot. Under the second service discipline (call splitting), all HOL copies that can be transmitted in the same time slot are released, while blocked copies compete for transmission in subsequent slots. The performance measures introduced in our analysis are the average packet delay in the input buffers as well as the maximum throughput of the switch. A significant part of the analysis is based on matrix-geometric techniques. Finally, numerical results are presented and compared with computer simulations.

18.
We have proposed a new architecture for building a scalable multicast ATM switch, from a few tens to a few thousands of input/output ports. The switch, called the Abacus switch, employs input and output buffering schemes. Cell replication, cell routing, and output contention resolution are all performed in a distributed way so that the switch can be scaled up to a large size. The Abacus switch adopts a novel algorithm to resolve the contention of both multicast and unicast cells destined for the same output port (or output module). The switch can also handle multiple priority traffic by routing cells according to their priority levels. This paper describes a key ASIC chip for building the Abacus switch. The chip, called the ATM routing and concentration (ARC) chip, contains a two-dimensional array (3×32) of switch elements arranged in a crossbar structure. It provides the flexibility of configuring the chip into different group sizes to accommodate different ATM switch sizes. The ARC chip has been designed and fabricated using 0.8-μm CMOS technology and tested to operate correctly at 240 MHz. Although the ARC chip was designed to handle a line rate of OC-3 (155 Mb/s), the Abacus switch can accommodate much higher line rates of OC-12 (622 Mb/s) or OC-48 (2.5 Gb/s) by using a bit-sliced technique or by distributing cells in a cyclic order to different inputs of the ARC chip. When the latter scheme is used, the cell sequence is retained at the output of the Abacus switch.

19.
The Tera ATM LAN project at Carnegie Mellon University addresses the interconnection of hundreds of workstations in the Electrical and Computer Engineering Department via an ATM-based network. The Tera network architecture consists of switched Ethernet clusters that are interconnected using an ATM network. This paper presents the Tera network architecture, including an Ethernet/ATM network interface, the Tera ATM switch, and its performance analysis. The Tera switch architecture for asynchronous transfer mode (ATM) local area networks (LANs) incorporates a scalable nonblocking switching element with a hybrid queueing discipline. The hybrid queueing strategy includes a global first-in first-out (FIFO) queue that is shared by all switch inputs and dedicated output queues with a small speedup. Owing to the hybrid queueing, switch performance is comparable to that of output-queueing switches. The shared input queue design is scalable since it is based on a Banyan network and N FIFO memories. The Tera switch incorporates an optimal-throughput multicast stage that is also based on a Banyan network. Switch performance is evaluated using queueing analysis and simulation under various traffic patterns.

20.
Describes a new architecture for a multicast ATM switch scalable from a few tens to a few thousands of input ports. The switch, called the Abacus switch, has a nonblocking switch fabric followed by small switch modules at the output ports, and it has buffers at the input and output ports. Cell replication, cell routing, output contention resolution, and cell addressing are all performed in a distributed way so that it can be scaled up to thousands of input and output ports. A novel algorithm has been proposed to resolve output port contention while achieving input buffer sharing, fairness among the input ports, and call splitting for multicasting. A channel-grouping mechanism is also adopted in the switch to reduce the hardware complexity and improve the switch's throughput, while cell sequence integrity is preserved. The switch can also handle multiple priority traffic by routing cells according to their priority levels. A performance study of the Abacus switch in terms of throughput, average cell delay, and cell loss rate is presented. A key ASIC chip for building the Abacus switch, called the ARC (ATM routing and concentration) chip, contains a two-dimensional array (32×32) of switch elements that are arranged in a crossbar structure. It provides the flexibility of configuring the chip into different group sizes to accommodate different ATM switch sizes. The ARC chip has been designed and fabricated using 0.8-μm CMOS technology and tested to operate correctly at 240 MHz.
