Similar Documents
20 similar documents were found for this query.
1.
The Tera ATM LAN project at Carnegie Mellon University addresses the interconnection of hundreds of workstations in the Electrical and Computer Engineering Department via an ATM-based network. The Tera network architecture consists of switched Ethernet clusters that are interconnected using an ATM network. This paper presents the Tera network architecture, including an Ethernet/ATM network interface, the Tera ATM switch, and its performance analysis. The Tera switch architecture for asynchronous transfer mode (ATM) local area networks (LANs) incorporates a scalable nonblocking switching element with a hybrid queueing discipline. The hybrid queueing strategy combines a global first-in first-out (FIFO) queue that is shared by all switch inputs with dedicated output queues operating at a small speedup. Thanks to the hybrid queueing, switch performance is comparable to that of output queueing switches. The shared input queue design is scalable since it is based on a Banyan network and N FIFO memories. The Tera switch also incorporates an optimal-throughput multicast stage, likewise based on a Banyan network. Switch performance is evaluated using queueing analysis and simulation under various traffic patterns.

2.
When two or more packets destined to the same output of an ATM switch arrive at different inputs, buffers at the inputs or outputs are used to queue all but one of these packets so that external conflict is prevented. Although input buffering ATM switches are more economical and simpler than output buffering ATM switches, significant loss of throughput can occur in input buffering ATM switches due to head-of-line (HOL) blocking when first-in first-out (FIFO) queueing is employed. In order both to prevent external conflicts and to alleviate HOL blocking in non-blocking ATM switches, several window-based contention resolution algorithms have been proposed in the literature. In this paper, we propose a window-based contention resolution algorithm for a blocking ATM switch based on a reverse baseline network with content addressable FIFO (CAFIFO) input buffers. The proposed algorithm prevents not only external conflicts but also internal conflicts, in addition to alleviating HOL blocking. The algorithm was obtained by adapting the ring reservation algorithm used in non-blocking ATM switches to a reverse baseline network. Replacing a non-blocking network with a log2 N-stage reverse baseline network yields a significant economy in implementation. We have conducted extensive simulations to evaluate the performance of the reverse baseline network using the proposed window-based contention resolution algorithm. Simulation results show that the throughput of the reverse baseline network can be as good as that of non-blocking switches if the window depth of the input buffers is made sufficiently large. Copyright © 2000 John Wiley & Sons, Ltd.
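As a rough illustration of how window-based selection from FIFO input buffers relieves HOL blocking, the sketch below examines up to window_depth cells per input and admits a cell only if its output has not already been reserved for the slot. It models external conflicts only; the internal-conflict checks that the proposed algorithm performs inside the reverse baseline network, and the CAFIFO buffer organization, are not represented, and all names and data structures are illustrative.

    from collections import deque

    def window_schedule(input_queues, window_depth):
        """Pick at most one cell per input and at most one per output for a slot.

        input_queues : list of deques holding destination port numbers
                       (the left end of each deque is the head-of-line cell).
        window_depth : how many cells from the head of each queue may be examined.
        Returns {input_index: position_in_queue} for the selected cells.
        """
        reserved_outputs = set()
        selection = {}
        for depth in range(window_depth):            # scan HOL cells first, then deeper
            for i, q in enumerate(input_queues):
                if i in selection or depth >= len(q):
                    continue
                dest = q[depth]
                if dest not in reserved_outputs:     # this output is still free
                    reserved_outputs.add(dest)
                    selection[i] = depth
        return selection

    # All four HOL cells want output 0; with a window of 2, cells behind the
    # blocked HOL cells can still be scheduled in the same slot.
    queues = [deque([0, 2]), deque([0, 3]), deque([0]), deque([1])]
    print(window_schedule(queues, window_depth=2))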

3.
Grouping output channels in a shared-buffer ATM switch has been shown to provide great savings in buffer space and better throughput under uniform traffic. However, uniform traffic does not represent a realistic view of traffic patterns in real systems. In this paper, we extend the queueing analysis of shared-buffer channel-grouped (SBCG) ATM switches to imbalanced traffic, since it better represents real-life situations. The study focuses on the impact of the grouping factor and other key switch design parameters on the performance of such switches, compared with the unichannel allocation scheme, in terms of cell loss probability, throughput, mean cell delay and buffer occupancy. Numerical results from both the analytical model and simulation are presented, and the accuracy of the analysis is assessed. Copyright © 2005 John Wiley & Sons, Ltd.

4.
Asynchronous transfer mode (ATM) is the transport technique for the broadband ISDN recommended by CCITT (I.121). Many switches have been proposed to accommodate ATM, which requires fast packet switching capability [1-8]. The proposed switches for the broadband ISDN can be classified as being of the input queueing or output queueing type. Those of the input queueing type have a throughput that is approximately 58 per cent of that of the output queueing type. However, output queueing networks require larger amounts of hardware than input queueing networks. In this paper, we propose a new multistage switch with internal buffering that approaches a maximum throughput of 100 per cent as the buffering is increased. The switch is capable of broadcasting and self-routeing. It consists of two switching planes built from packet processors, 2 × 2 switching elements, distributors and buffers located between stages and in the output ports. The internal data rate of the proposed switch is the same as that of the arriving information stream; in this sense, the switch does not require speed-up. The switch has log2 N stages that forward packets in a store-and-forward fashion, thus incurring a latency of log2 N time periods. Performance analysis shows that the additional delay is small.
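The "approximately 58 per cent" figure quoted here is the classical head-of-line blocking limit of FIFO input queueing under uniform random traffic; for reference (a standard result from the input- versus output-queueing literature, not something derived in this abstract):

    % Saturation throughput of a large N x N switch with one FIFO queue per
    % input, under uniform i.i.d. traffic (the head-of-line blocking limit):
    \[
      \rho_{\mathrm{sat}} \;=\; 2 - \sqrt{2} \;\approx\; 0.586,
      \qquad\text{versus}\quad \rho_{\mathrm{sat}} = 1 \text{ for ideal output queueing.}
    \]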

5.
In practical ATM switch design, a proper dimensioning of buffer sizes and a cost-effective selection of the speed-up factor should be considered to guarantee a specified cell loss requirement for a given traffic load. Although a larger speed-up factor provides better throughput for the switch, increasing the speed-up factor involves greater complexity and cost. Hence, it may not be cost-effective to increase the speed-up factor merely to reach 100% throughput. Moreover, with a given buffer budget, an increase in the speed-up factor beyond a certain value only adds to the cell loss. The paper addresses the design trade-offs between finite input/output buffer sizes and the speed-up factor in a nonblocking ATM switch. Another important issue is the adverse effect on cell loss performance caused by nonuniform traffic (different traffic intensities and unevenly distributed routing). The paper analyzes the cell loss performance of ATM switches with nonuniform traffic, and examines the effect of each nonuniform traffic parameter. The authors also provide an algorithm for effective buffer sharing that alleviates the performance degradation caused by traffic nonuniformity.

6.
Ultrafast photonic ATM switch with optical output buffers
An ultrafast photonic asynchronous transfer mode (ULPHA) switch based on a time-division broadcast-and-select network with optical output buffers is presented. The ULPHA switch has an ultra-high throughput and excellent traffic characteristics, since it utilizes ultrashort optical pulses for cell signals and avoids cell contention by means of novel optical output buffers. Feasibility studies show that an 80×80 ULPHA switch with 1-Gb/s input/output is possible with present technology, and that more than 1 Tb/s is possible by building a three-stage network of such switches. As an experimental demonstration, 4-bit 40-Gb/s optical cells were generated and certain cells were selected at an output on a self-routing basis. With its high throughput and excellent traffic characteristics, the ULPHA switch is a strong candidate for a future large-capacity optical switching node.

7.
Specific queueing models are derived in order to size the buffers of ATM switching elements in the cases of ATM- or STM-multiplexed traffic. Buffering is performed either at the outputs or in a central memory for ATM-multiplexed traffic; for STM-multiplexed traffic, buffers can also be provided at the inputs. The buffer size is chosen so as to ensure a loss probability in the switch smaller than 10^-10. It is shown that the buffer size per output in the case of central queueing is smaller than the buffer size in the case of output queueing for both ATM- and STM-multiplexed traffic. Moreover, for STM-multiplexed traffic, buffer sizes are identical for input and output queueing. Lastly, it is pointed out that buffers used for STM-multiplexed traffic should be 4 to 20 times larger than the corresponding buffers for ATM-multiplexed traffic.
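To give a feel for this kind of dimensioning, the sketch below solves the standard discrete-time model of a dedicated output buffer (Binomial arrivals from N uniform Bernoulli inputs, one departure per slot, buffer of B cells) and prints the cell loss probability for increasing B, from which the smallest B meeting a 10^-10 target can be read off. It is a generic uniform-traffic calculation, not the paper's ATM/STM-specific models or its central-queueing analysis, and all parameter values are illustrative.

    from math import comb
    import numpy as np

    def loss_probability(N, load, B):
        """Cell loss ratio of one output buffer of size B in an N x N switch."""
        # per-slot arrivals at one output: Binomial(N, load/N)
        a = np.array([comb(N, k) * (load / N) ** k * (1 - load / N) ** (N - k)
                      for k in range(N + 1)])
        # occupancy Markov chain: one departure per slot, then arrivals (overflow lost)
        P = np.zeros((B + 1, B + 1))
        for occ in range(B + 1):
            after_service = max(occ - 1, 0)
            for k, pk in enumerate(a):
                P[occ, min(after_service + k, B)] += pk
        # stationary distribution: solve pi P = pi together with sum(pi) = 1
        A = np.vstack([P.T - np.eye(B + 1), np.ones(B + 1)])
        rhs = np.zeros(B + 2)
        rhs[-1] = 1.0
        pi, *_ = np.linalg.lstsq(A, rhs, rcond=None)
        # expected overflow per slot divided by expected offered cells per slot
        lost = sum(pi[occ] * pk * (k - (B - max(occ - 1, 0)))
                   for occ in range(B + 1)
                   for k, pk in enumerate(a)
                   if k > B - max(occ - 1, 0))
        return lost / load

    for B in (10, 20, 30, 40, 50, 60):
        print(f"B = {B:2d}   cell loss ratio ~ {loss_probability(16, 0.8, B):.1e}")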

8.
A general expansion architecture is proposed that can be used to build large-scale switches from any type of asynchronous transfer mode (ATM) switch. The proposed universal multistage interconnection network (UniMIN) switch is composed of a buffered distribution network (DN) and a column of output switch modules (OSMs), which can be any type of ATM switch. ATM cells are routed to their destinations using a two-level routing strategy. The DN provides each incoming cell with a self-routing path to the destined OSM, that is, the switch module containing the destination output port. Further routing to the destined output port is performed by the destination OSM. Use of the channel grouping technique yields excellent delay/throughput performance in the DN, and the virtual FIFO concept is used to implement the output buffers of the distribution module without internal speedup. We also propose a “fair virtual FIFO” to provide fairness between input links while preserving cell sequence. The distribution network is composed of a single kind of distribution module, which has the same size as the OSM regardless of the overall switch size N; this gives the UniMIN switch good modular scalability. Performance analysis for uniform traffic and hot-spot traffic shows that negligible delay and cell loss ratio in the DN can be achieved with a small buffer size, and that the DN yields robust performance even under hot-spot traffic. In addition, a fairness property of the proposed fair virtual FIFO is demonstrated by a simulation study.

9.
The asynchronous transfer mode (ATM) is the transport mode of choice for broadband integrated services digital networks (B-ISDNs). We propose a window-based contention resolution algorithm to achieve higher throughput for nonblocking switches in ATM environments. In a nonblocking switch with input queues, significant loss of throughput can occur due to head-of-line (HOL) blocking when first-in first-out (FIFO) queueing is employed. To resolve this problem, we employ bypass queueing and present a cell scheduling algorithm that maximizes the switch throughput. We also employ a queue-length-based priority scheme to reduce cell delay variations and cell loss probabilities. With this priority scheme, the variance of the cell delay is also significantly reduced under nonuniform traffic, resulting in lower cell loss rates (CLRs) at a given buffer size. As the cell scheduling controller, we propose a neural network (NN) model that exploits a high degree of parallelism. Owing to the higher switch throughput achieved with our cell scheduling, the cell loss probabilities and the buffer sizes necessary to guarantee a given CLR become smaller than those of other approaches based on sequential input window scheduling or output queueing.

10.
Shared buffering and channel grouping are powerful techniques with great benefits in terms of both performance and implementation. Shared-buffer switches are known to have better performance and better utilization than input- or output-queued switches. With channel grouping, a cell is routed to a group of channels instead of a specific output channel. In this way, congestion due to output contention can be minimized and the switch performance can therefore be greatly improved. Although each technique is well known by itself in the traditional study of queueing systems, their combined use in ATM networks has not previously been explored in much depth. In this paper, we develop an analytical model for a shared-buffer ATM switch with grouped output channels. The model is then used to study the switch performance in terms of cell loss probability, cell delay and throughput. In particular, we study the impact of the channel grouping factor on the buffer requirements. Our results show that grouping the output channels in a shared-buffer ATM switch leads to considerable savings in buffer space. Copyright © 2001 John Wiley & Sons, Ltd.
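To make the channel-grouping idea concrete, the Monte-Carlo sketch below simulates a shared buffer of B cells feeding R output groups of G channels each, with Bernoulli arrivals choosing a destination group uniformly and each group transmitting up to G cells per slot. It is a simulation stand-in for the paper's analytical model, restricted to uniform traffic; the parameter values and structure are illustrative assumptions.

    import random
    from collections import deque

    def simulate(N=16, R=4, G=4, B=48, p=0.95, slots=100_000, seed=1):
        """Shared buffer of B cells in front of R output groups of G channels."""
        random.seed(seed)
        groups = [deque() for _ in range(R)]   # per-group FIFOs inside the shared buffer
        stored = lost = arrived = sent = 0
        for _ in range(slots):
            # arrivals: each of the N inputs offers a cell with probability p,
            # destined uniformly to one of the R output groups
            for _ in range(N):
                if random.random() < p:
                    arrived += 1
                    if stored < B:
                        groups[random.randrange(R)].append(1)
                        stored += 1
                    else:
                        lost += 1              # shared buffer full: cell dropped
            # departures: every group transmits up to G head-of-line cells
            for q in groups:
                served = min(G, len(q))
                for _ in range(served):
                    q.popleft()
                stored -= served
                sent += served
        return lost / arrived, sent / (slots * R * G)

    loss, utilisation = simulate()
    print(f"cell loss ratio ~ {loss:.2e}, channel utilisation ~ {utilisation:.3f}")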

11.
General models for a class of nonblocking architectures of asynchronous transfer mode (ATM) switches are described. Hardware aspects are discussed to show the implementation feasibility of the proposed switch architectures with current technology. Performance issues are studied to point out the traffic bottlenecks of the different structures. It is shown that the queueing strategy is the main concept that enables the classification of nonblocking ATM switches. Three main packet queueing strategies can be adopted in the switching fabric: input queueing, shared queueing, and output queueing. Switch architectures adopting only one of these strategies are described. The ways in which two strategies can be jointly adopted in a switching fabric, resulting in the mixed strategies of input-output queueing, input-shared queueing, and shared-output queueing, are also discussed.

12.
In this letter, we analyze the performance of multiple input-queued asynchronous transfer mode (ATM) switches that use parallel iterative matching (PIM) for scheduling the transmission of head-of-line cells in the input queues. A queueing model of the switch is developed under bursty traffic modeled as independent, identically distributed two-state Markov-modulated Bernoulli processes. The underlying Markov chain of the queueing model is a quasi-birth-death (QBD) chain, which is solved using an iterative computing method. Performance metrics of interest for the ATM switch, such as the throughput, the mean cell delay, and the cell loss probability, can be derived from the model. Numerical results from both the analytical model and simulation are presented, and the accuracy of the analysis is briefly discussed.
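For readers unfamiliar with the scheduler being analysed, here is a minimal sketch of PIM matching: every unmatched input implicitly requests all outputs for which it holds cells, every unmatched output grants one request at random, and every input accepts one grant at random, repeated for a few iterations. The data layout (an occupancy matrix) and parameter values are illustrative; the letter's queueing model itself is not reproduced here.

    import random

    def pim_match(occupancy, iterations=3, seed=None):
        """Parallel iterative matching: occupancy[i][j] > 0 means input i
        holds at least one cell destined to output j."""
        rng = random.Random(seed)
        n = len(occupancy)
        match = {}                                   # accepted pairs: input -> output
        matched_outputs = set()
        for _ in range(iterations):
            # grant phase: each unmatched output picks one requesting input at random
            grants = {}                              # input -> outputs that granted it
            for j in range(n):
                if j in matched_outputs:
                    continue
                requesters = [i for i in range(n)
                              if i not in match and occupancy[i][j] > 0]
                if requesters:
                    grants.setdefault(rng.choice(requesters), []).append(j)
            # accept phase: each input accepts one of its grants at random
            for i, outs in grants.items():
                j = rng.choice(outs)
                match[i] = j
                matched_outputs.add(j)
        return match

    occupancy = [[2, 0, 1, 0],
                 [1, 1, 0, 0],
                 [0, 0, 3, 1],
                 [0, 2, 0, 0]]
    print(pim_match(occupancy, seed=42))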

13.
Performance studies linking ATM switch capabilities to the physical limitations imposed by integrated circuit technology have been scarce. This paper explores trends in circuit capabilities and makes projections toward the 0.25-μm technologies that will be available to all switch designers in the year 2000. The limits imposed by circuit technology are applied to shared buffer ATM switches. We determine requirements and physical limits for buffer capacity, buffer throughput, chip I/O throughput, and power dissipation. As a result, we are able to project chip counts, aggregate switch throughputs, and switch dimensions. In addition, the performance capabilities of single-chip shared buffer switches are estimated. A single-chip shared buffer switch implemented in 0.25-μm technology will be capable of an aggregate throughput of 1.3 Tb/s, will achieve almost arbitrarily low cell loss rates for bursty traffic, and may be integrated together with translation tables supporting hundreds of connections per port.

14.
A high-performance self-routing switch is proposed for ATM (asynchronous transfer mode) switching systems. Switching performance is enhanced by a rerouting algorithm applied to a particular multistage interconnection network. This interconnection network offers many access points to the output and resolves output contention by layering buffers at each switching stage. The author analyzes switching performance and shows that this switch can easily be engineered to have high throughput and low cell loss probability by increasing the number of switching stages. The author also shows that the number of switching stages required for a given cell loss probability grows gradually with increasing switch size. Analysis shows that the proposed switch is robust even with respect to nonuniform traffic.

15.
This paper describes a new architecture for a multicast ATM switch scalable from a few tens to a few thousands of input ports. The switch, called the Abacus switch, has a nonblocking switch fabric followed by small switch modules at the output ports, with buffers at the input and output ports. Cell replication, cell routing, output contention resolution, and cell addressing are all performed in a distributed way so that the switch can be scaled up to thousands of input and output ports. A novel algorithm is proposed to resolve output port contention while achieving input buffer sharing, fairness among the input ports, and call splitting for multicasting. A channel-grouping mechanism is also adopted in the switch to reduce the hardware complexity and improve the switch's throughput, while cell sequence integrity is preserved. The switch can also handle multiple priority traffic by routing cells according to their priority levels. A performance study of the Abacus switch in terms of throughput, average cell delay, and cell loss rate is presented. A key ASIC chip for building the Abacus switch, called the ARC (ATM routing and concentration) chip, contains a two-dimensional array (32×32) of switch elements arranged in a crossbar structure. It provides the flexibility of configuring the chip into different group sizes to accommodate different ATM switch sizes. The ARC chip has been designed and fabricated in 0.8 μm CMOS technology and tested to operate correctly at 240 MHz.

16.
The design of a copy network is presented for use in an ATM (asynchronous transfer mode) switch supporting BISDN (broadband integrated services digital network) traffic. Inherent traffic characteristics of BISDN services require ATM switches to handle bursty traffic with multicast connections. In typical ATM switch designs, a copy network is used to replicate multicast cells before they are forwarded to a point-to-point routeing network. In such designs, a single multicast cell enters the switch and is replicated once for each multicast connection. Each copy is forwarded to the routeing network with a unique destination address and is routed to the appropriate output port. Non-blocking copy networks permit multiple cells to be multicast at once, up to the number of outputs of the copy network. Another critical feature of ATM switch design is the location of buffers for the temporary storage of transmitted cells. Buffering is required when multiple cells require a common switch resource for transmission: typically, one cell is granted the resource and is transmitted while the remaining cells are buffered. Current switch designs associate discrete buffers with individual switch resources. Discrete buffering is not efficient for bursty traffic, as traffic bursts can overflow individual switch buffers and result in dropped cells while other buffers are under-used. A new non-blocking copy network with a shared-memory input buffer is presented in this paper; blocked cells from any switch input are stored in a single shared input buffer. The copy network consists of three banyan networks and shared-memory queues. The design is scalable to large numbers of inputs owing to its low hardware complexity, O(N log2 N), and its distributed operation and control. It is shown in a simulation study that a switch incorporating the shared-memory copy network has higher throughput and lower buffer requirements to maintain a low packet loss probability when compared with a switch using a discrete-buffer copy network.

17.
ATM (asynchronous transfer mode) is a new technique for transmitting voice, data and video. The performance of ATM networks will depend on the switch structure. A performance analysis of an ATM switch based on a three-stage Clos network is presented. Two types of switches are studied in this paper: a switch with input queues in the switching elements and a switch with output queues. This study is at the cell level and aims to dimension the switch. First, the traffic is assumed to be uniform: cells arrive on each input according to a geometric arrival process and are directed uniformly over all the network outputs. An analytic model is proposed for both input and output queues in the switching elements. A study of the saturation throughput is presented for input-buffered switching elements. This work demonstrates the influence of buffer dimensioning on the different stages of the switch. Dissymmetric switching elements are shown to be better than symmetric ones. A model is then designed for nonuniform traffic patterns and output buffers. Two types of non-uniform traffic are considered: single source to single destination (SSSD) and multi-hot-spot (MHS) traffic. Discrete event simulations are used to validate the different models.

18.
Both high-speed packet switches and statistical multiplexers are critical elements in the ATM (asynchronous transfer mode) network. Many switch architectures have been proposed and some of them have been built, but relatively few statistical multiplexer architectures have been investigated to date. Multiplexers have been regarded as a special kind of switch that can be implemented with similar approaches. The main function of a statistical multiplexer, however, is to concentrate traffic from a number of input ports onto a comparatively smaller number of output ports; ‘switching’ in the sense that a cell must be delivered to a specific output port is often not required. This implies that the channel grouping design principle, in which more than one path is available for each virtual circuit connection, can be applied in the multiplexer. We show that this technique reduces the required buffer memory and increases the system performance significantly. The performance of three general approaches for implementing an ATM statistical multiplexer is studied through simulations under various bursty traffic assumptions. Based on the best-performing approach (sharing output channels and buffers), we propose two architectural designs that implement a scalable statistical multiplexer, modularly decomposed into many smaller multiplexers by means of a novel grouping network.

19.
In this paper, we propose a new technique for reducing cell loss in multi-banyan-based ATM switching fabrics. We propose a switch architecture that uses incremental path reservation based on previously established connections. Path reservation is carried out sequentially within each banyan, but multiple banyan planes can be reserved concurrently. We use a conflict resolution approach in which the banyans make concurrent reservation offers of conflict-free paths to head-of-line cells waiting in the input buffers. A reservation offer from a given banyan is allocated to the cell whose source-to-destination path uses the largest number of partially allocated switching elements shared with previously reserved paths. Paths are thus incrementally clustered within each banyan. This approach leaves the largest number of free switching elements for subsequent reservations, which reduces the potential for future conflicts and improves throughput. We present a pipelined switch architecture based on this path-clustering concept, which we call the path-clustering banyan switching fabric (PCBSF). An efficient hardware implementation of PCBSF is presented together with its theoretical basis. The performance and robustness of PCBSF are evaluated under simulated uniform traffic and ATM traffic. We also compare the cell loss rate of PCBSF with that of other pipelined banyan switches by varying the switch size, input buffer size, and traffic pattern. Copyright © 2001 John Wiley & Sons, Ltd.
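A small sketch of the quantity the clustering rule maximizes: compute the switching elements (SEs) on a self-routing path and count how many of them are already used by reserved paths. An Omega (shuffle-exchange) network is used below as a stand-in for the banyan planes; the exact topology, stage labelling, and all names are assumptions made for illustration.

    def omega_path(src, dst, n):
        """SEs visited when routing src -> dst through an N = 2**n Omega network,
        returned as a set of (stage, SE index) pairs."""
        N = 1 << n
        line = src
        ses = set()
        for stage in range(n):
            # perfect shuffle: rotate the n-bit line label left by one
            line = ((line << 1) | (line >> (n - 1))) & (N - 1)
            ses.add((stage, line >> 1))              # the 2x2 SE touched at this stage
            dbit = (dst >> (n - 1 - stage)) & 1      # destination-tag routing bit
            line = (line & ~1) | dbit                # exchange selects upper/lower output
        return ses

    def shared_se_count(candidate_path, reserved_ses):
        """The figure of merit used when allocating a reservation offer."""
        return len(candidate_path & reserved_ses)

    n = 3                                            # an 8 x 8 banyan-class plane
    reserved = omega_path(0, 5, n) | omega_path(2, 1, n)
    for s, d in [(1, 4), (6, 7)]:
        print((s, d), "shares", shared_se_count(omega_path(s, d, n), reserved), "SEs")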

20.
A routing architecture applying the concept of multichannel transmission groups (MCTGs) to ATM systems is proposed. A queueing analysis of an internally nonblocking ATM switch employing this MCTG concept with partially shared output buffers is presented. The analysis is based on a discrete-time ΣDi^[A]/D/c/B queueing model, in which both bulk input traffic with bulk-size distribution (A) and deterministic traffic (D1 + ... + DN) are considered. The impact of switch speedup on the performance is also taken into account. It is shown that the MCTG architecture yields better performance in terms of delay and cell loss probability than its single-channel counterpart. It is also found that the switch speedup required to closely approximate the optimal performance (obtained by having the switch fabric run N times as fast as the input and output channels, where N is the size of the switch) is rather small compared to N. This makes the practical realization of the proposed switch architecture feasible.
