Similar Documents
20 similar documents found.
1.
In this paper, we study the performance of a prioritized on-board baseband switch in conjunction with a multibeam satellite handling integrated services. The services considered for the analysis include voice, video, file transfer and interactive data. The prioritized switch uses both input and output buffering, switch speed-up as well as a two-phase head-of-line resolution algorithm, in order to reduce the buffer loss while maintaining acceptable user delays. The minimum required buffer capacity and switch speed-up for each service in a prioritized environment are found under uniform traffic conditions. It is shown that under uniform traffic conditions, only minimal buffering and switch speed-up are needed even for the lowest priority users. The performance dependence on the switch size is also substantially reduced with head-of-line resolution and buffering even in a prioritized environment.
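As an illustration of how input/output buffering, a speed-up factor and priority-based head-of-line resolution interact, the following Python sketch simulates one time slot of such a switch. It is a simplified model of our own (the function name, the cell format and the phase-per-speed-up scheme are assumptions, not the paper's algorithm).

```python
from collections import deque

def transfer_slot(inputs, outputs, speedup, out_capacity):
    """One time slot: up to `speedup` transfer phases.  In each phase every input
    may forward its head-of-line (HOL) cell and every output accepts at most one
    cell; contending HOL cells are resolved in priority order (smaller = higher).
    `inputs`/`outputs` are lists of deques; a cell is a (priority, destination) tuple."""
    lost = 0
    for _ in range(speedup):
        matched_outputs = set()
        order = sorted((q[0][0], i) for i, q in enumerate(inputs) if q)
        for _prio, i in order:
            prio, dest = inputs[i][0]
            if dest in matched_outputs:
                continue                      # HOL blocking within this phase
            matched_outputs.add(dest)
            inputs[i].popleft()
            if len(outputs[dest]) < out_capacity:
                outputs[dest].append((prio, i))
            else:
                lost += 1                     # finite output buffer overflows
    return lost
```

Driving many such slots with per-class arrival processes would let one explore how speed-up and buffer sizes trade off against loss for each priority class.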

2.
This paper proposes a new high-performance multicast ATM switch architecture. The switch, called the split-switching network (SSN), is based on banyan networks. The SSN achieves multicasting in a way that is non-typical for banyan-based switches: copying and routeing of multicast cells are carried out simultaneously and within the same fabric. Thus, cells are copied only when needed as they traverse the switch towards the appropriate output ports. The SSN consists of successive splitting stages, and buffering is provided in front of each stage. The SSN is non-blocking with complexity of order Nlog₂²N for a switch of size N, and is characterized by distributed and parallel control. The throughput-delay performance of the SSN is shown to be similar to that of a non-blocking output-buffering switch under different mixtures of unicast/multicast traffic. In particular, the SSN achieves a maximum throughput of 100 per cent and the cell delay and delay variation remain small for loads just below the maximum throughput.
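The copy-only-when-needed behaviour can be pictured with a short sketch: a multicast cell carries its set of destination ports, and at a splitting stage it is duplicated only if that set spans both sides of the stage. This is an illustrative model with our own function names; it ignores the SSN's buffering and switching-element design.

```python
def split_stage(cell, bit):
    """cell = (payload, dests), where dests is a set of output-port numbers.
    The cell goes 'up' if destination bit `bit` is 0 and 'down' if it is 1;
    it is duplicated only when its destination set spans both sides."""
    payload, dests = cell
    up = {d for d in dests if not (d >> bit) & 1}
    down = {d for d in dests if (d >> bit) & 1}
    return [(side, (payload, part))
            for side, part in (('up', up), ('down', down)) if part]

def route_multicast(cell, log2n):
    """Walk a cell through log2(N) splitting stages, most significant bit first."""
    cells = [cell]
    for stage in range(log2n - 1, -1, -1):
        nxt = []
        for cur in cells:
            nxt.extend(copy for _side, copy in split_stage(cur, stage))
        cells = nxt
    return cells   # one copy per destination output port

# Example: one cell destined to outputs {1, 5, 6} of an 8-port switch
print(route_multicast(("payload", {1, 5, 6}), 3))
```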

3.
In this paper we present a novel fast packet switch architecture based on Banyan interconnection networks, called parallel-tree Banyan switch fabric (PTBSF). It consists of parallel Banyans (multiple outlets) arranged in a tree topology. The packets enter at the topmost Banyan. Internal conflicts are eliminated by using a conflict-free 3 × 4 switching element which distributes conflicting cells over different Banyans. Thus, cell loss may occur only at the lowest Banyan. Increasing the number of Banyans leads to a noticeable decrease in cell loss rate. The switch can be engineered to provide arbitrarily high throughput and low cell loss rate without the use of input buffering or cell pre-processing. The performance of the switch is evaluated analytically under uniform traffic load and by simulation, under a variety of asynchronous transfer mode (ATM) traffic loads. Compared to other proposed architectures, the switch exhibited stable and excellent performance with respect to cell loss and switching delay for all studied conditions as required by ATM traffic sources. The advantages of PTBSF are modularity, regularity, self-routing, low processing overhead, high throughput and robustness, under a variety of ATM traffic conditions. © 1998 John Wiley & Sons, Ltd.

4.
In a wireless ATM system, a network must provide seamless services to mobile users. To support this, a mobility function should be added to existing ATM networks. Through a handoff operation, a mobile user can receive a service from the network without disconnecting the communication. A handoff results in connection path rerouting during an active connection. To avoid cell loss during a handoff, cell buffering and rerouting are required in the network. A handoff switch is a connection breakdown point on an original connection path in the network from which a new connection sub-path is established. It performs cell buffering and rerouting during a handoff. Cell buffering and rerouting can introduce a cell out-of-sequence problem. In this paper we propose a handoff switch architecture with a shared memory. The architecture performs cell buffering and rerouting efficiently by managing logical queues of virtual connections in the shared memory and sorting head-of-line cells for transmission, thus achieving in-sequence cell delivery during a handoff. We also present simulation results to understand the impacts of handoffs on switch performance.
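To make the in-sequence delivery idea concrete, the sketch below keeps a logical FIFO queue per virtual connection inside one shared structure and always transmits the oldest head-of-line cell, so cells that are buffered and rerouted during a handoff still leave in arrival order. The class and its policy are our own simplification, not the proposed architecture itself.

```python
import itertools
from collections import deque

class HandoffBufferPort:
    """Logical per-virtual-connection queues in one shared memory (sketch).
    Each queue is FIFO, so the cells of a connection stay in sequence while
    they are buffered during a handoff; among connections, the oldest
    head-of-line cell is transmitted first."""
    _stamp = itertools.count()          # global arrival order

    def __init__(self):
        self.queues = {}                # vc_id -> deque of (stamp, cell)

    def enqueue(self, vc_id, cell):
        self.queues.setdefault(vc_id, deque()).append((next(self._stamp), cell))

    def dequeue(self):
        heads = [(q[0][0], vc) for vc, q in self.queues.items() if q]
        if not heads:
            return None
        _, vc = min(heads)              # oldest HOL cell wins
        return vc, self.queues[vc].popleft()[1]
```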

5.
The Data Vortex switch architecture has been proposed as a scalable low-latency interconnection fabric for optical packet switches. This self-routed hierarchical architecture employs synchronous timing and distributed traffic-control signaling to eliminate optical buffering and to reduce the required routing logic, greatly facilitating a photonic implementation. In previous work, we have shown the efficient scalability of the architecture under uniform and random traffic conditions while maintaining high throughput and low-latency performance. This paper reports on the performance of the Data Vortex architecture under nonuniform and bursty traffic conditions. The results show that the switch architecture performs well under modest nonuniform traffic, but an excessive degree of nonuniformity will severely limit the scalability. As long as a modest degree of asymmetry between the number of input and output ports is provided, the Data Vortex switch is shown to handle very bursty traffic with little performance degradation.

6.
A Recursive Knockout Switching Network with Crossbar-Input Interconnection and Input Buffering
This paper proposes a recursive Knockout switching network with a crossbar-input interconnection and input buffering (CIBRKS). The crossbar-input interconnection reduces the number of internal small switching elements and keeps the cell delivery order unaffected by the number of group output ports. Placing a buffer at each input reduces the number of stages in the switching network while keeping the loss-rate performance unchanged, which in turn reduces the cell transmission delay through the group network. In addition, by moving the cell address-filtering function out of each small switching element and into each input port, the functionality required of the small switching elements is reduced further. A comparison shows that, as a large-scale ATM switching network architecture, CIBRKS offers a better performance/complexity trade-off than the conventional RKS structure.
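The knockout principle behind RKS-type fabrics can be sketched in a few lines: of the cells arriving for one output group in a slot, at most L are accepted and the rest are knocked out; in the proposed CIBRKS the knocked-out cells would wait in the input buffers instead of being lost. The function below is only an illustration of that principle (a random draw stands in for the concentrator's tournament).

```python
import random

def knockout_concentrate(arriving_cells, L):
    """Accept at most L of the cells arriving for one output group in a slot;
    return (winners, losers).  With input buffering, the losers would be
    retried from their input queues rather than dropped."""
    if len(arriving_cells) <= L:
        return list(arriving_cells), []
    winners_idx = set(random.sample(range(len(arriving_cells)), L))
    winners = [c for i, c in enumerate(arriving_cells) if i in winners_idx]
    losers = [c for i, c in enumerate(arriving_cells) if i not in winners_idx]
    return winners, losers
```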

7.
The performance of a growable architecture for broadband asynchronous transfer mode (ATM) switching consisting of a memoryless self-routing interconnect fabric and modest-size packet switch modules is examined. The cell loss probability is the focus because the architecture attains the best possible delay-throughput performance if the packet switch modules use output queuing. There are two sources of cell loss in the switch. First, cells are dropped if too many simultaneous arrivals are destined to a group of output ports. Second, because a simple, distributed path-assignment controller is used for speed and efficiency, cells are dropped when the controller cannot schedule a path through the switch. The authors compute an upper bound on arrivals, possibly including isochronous circuit connections, and show that both sources of cell loss can be made negligibly small.
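A standard way to estimate the first loss source (too many simultaneous arrivals for a group of outputs) is a binomial overflow calculation in the spirit of the generalized knockout principle. The sketch below uses assumed parameter names and is not necessarily the exact expression used by the authors.

```python
from math import comb

def group_loss_probability(N, n, m, p):
    """N inputs each carry a cell with probability p, uniformly destined over N
    outputs, so a group of n outputs attracts each cell with probability
    q = p*n/N.  At most m cells per slot can enter the group; the excess is
    dropped.  Returns (expected dropped cells) / (expected offered cells)."""
    q = p * n / N
    expected_lost = sum((k - m) * comb(N, k) * q**k * (1 - q)**(N - k)
                        for k in range(m + 1, N + 1))
    return expected_lost / (N * q)

# Example (assumed numbers): 256 inputs, output groups of 16, 24 routes per group
print(group_loss_probability(256, 16, 24, 1.0))
```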

8.
This paper analyses the performance of the ATM switch fabric with Combined Input/Output Buffering (C-IOB) under two different service principles for the cells at the head-of-line (HOL) positions of input buffers: First Come First Service (FCFS)/Random Service (RS) for the set of HOL cells addressed to a given output port with different/same "age" (the waiting time at the HOL position), and Pure Random Service (PRS) for all HOL cells addressed to a given output port regardless of their "ages", while the Queue Loss (QL) transfer scheme is adopted for interaction between input and output buffers in the ATM switch fabric. The results obtained show that the C-IOB ATM switch fabric with the PRS service policy and the QL transfer scheme is better than other buffering ATM switch fabrics.
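The service principles compared above can be written down directly; the snippet below (our own simplified form) picks which head-of-line cell a given output port serves under PRS and under FCFS with random tie-breaking among equal-age cells (FCFS/RS).

```python
import random

def serve_output_prs(hol_cells, output):
    """Pure Random Service: among ALL HOL cells addressed to `output`, pick one
    at random regardless of how long each has waited.
    hol_cells: list of dicts {'input': i, 'dest': d, 'age': slots_waited}."""
    contenders = [c for c in hol_cells if c['dest'] == output]
    return random.choice(contenders) if contenders else None

def serve_output_fcfs_rs(hol_cells, output):
    """FCFS/RS: the oldest HOL cell wins; a random pick only breaks ties
    between cells of the same age."""
    contenders = [c for c in hol_cells if c['dest'] == output]
    if not contenders:
        return None
    oldest = max(c['age'] for c in contenders)
    return random.choice([c for c in contenders if c['age'] == oldest])
```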

9.
PetaStar: a petabit photonic packet switch
This paper presents a new petabit photonic packet switch architecture, called PetaStar. Using a new multidimensional photonic multiplexing scheme that includes space, time, wavelength, and subcarrier domains, PetaStar is based on a three-stage Clos-network photonic switch fabric to provide scalable large-dimension switch interconnections with nanosecond reconfiguration speed. Packet buffering is implemented electronically at the input and output port controllers, allowing the central photonic switch fabric to transport high-speed optical signals without electrical-to-optical conversion. Optical time-division multiplexing technology further scales port speed beyond electronic speed up to 160 Gb/s to minimize the fiber connections. To solve output port contention and internal blocking in the three-stage Clos-network switch, we present a new matching scheme, called c-MAC, a concurrent matching algorithm for Clos-network switches. It is highly distributed such that the input-output matching and routing-path finding are concurrently performed by scheduling modules. One feasible architecture for the c-MAC scheme, where a crosspoint switch is used to provide the interconnections between the arbitration modules, is also proposed. With the c-MAC scheme, and an internal speedup of 1.5, PetaStar with a switch size of 6400 × 6400 and total capacity of 1.024 petabit/s can be achieved at a throughput close to 100% under various traffic conditions.

10.
The authors propose a new space-division fast packet switch architecture based on banyan interconnection networks, called the tandem banyan switching fabric (TBSF). It consists of placing banyan networks in tandem, offering multiple paths from each input to each output, thus overcoming in a very simple way the effect of conflicts among packets (to which banyan networks are prone) and achieving output buffering. From a hardware implementation perspective, this architecture is simple in that it consists of several instances of only two VLSI chips, one implementing the banyan network and the other implementing the output buffer function. The basic structure and operation of the tandem banyan switching fabric are described, and its performance is discussed. The authors propose a modification to the basic structure which decreases the hardware complexity of the switch while maintaining its performance. An implementation of the banyan network using a high-performance BiCMOS sea-of-gates on 0.8-μm technology is reported.
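Stripped of the internal banyan details, the tandem idea is simply "retry misrouted packets in the next fabric". The sketch below models each banyan pass only at the output-port level (one winner per port, the rest marked misrouted), which is enough to show how adding banyans trades hardware for a lower loss rate; it is not a faithful model of internal link conflicts.

```python
import random
from collections import defaultdict

def banyan_pass(packets):
    """One (heavily simplified) banyan pass: at most one packet per output port
    gets through; the rest are marked misrouted and offered to the next banyan."""
    by_dest = defaultdict(list)
    for p in packets:
        by_dest[p['dest']].append(p)
    delivered, misrouted = [], []
    for group in by_dest.values():
        random.shuffle(group)
        delivered.append(group[0])
        misrouted.extend(group[1:])
    return delivered, misrouted

def tandem_banyan_fabric(packets, k_banyans):
    """Push packets through k banyans in tandem; whatever is still misrouted
    after the last banyan is lost."""
    output_buffers = defaultdict(list)
    for _ in range(k_banyans):
        delivered, packets = banyan_pass(packets)
        for p in delivered:
            output_buffers[p['dest']].append(p)
        if not packets:
            break
    return output_buffers, packets        # leftover packets = cell loss

# Example: 8 packets with random destinations over 8 ports, 3 banyans in tandem
pkts = [{'id': i, 'dest': random.randrange(8)} for i in range(8)]
buffers, lost = tandem_banyan_fabric(pkts, 3)
print(len(lost), "packets lost")
```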

11.
This paper presents the integration of the Prelude switch architecture into a monochip ATM switch, COM16M, capable of handling 16 multiplexes carrying ATM cells at 622 Mb/s. It is a fully autonomous switch, i.e., the chip includes clock adaptation, routing, and cell buffering as well as header translation and control capabilities. The switch is integrated into one single chip containing 6,000,000 transistors implemented in a 0.5-μm CMOS process.

12.
Proposes and analyzes a recursive modular architecture for implementing a large-scale multicast output buffered ATM switch (MOBAS). A multicast knockout principle, an extension of the generalized knockout principle, is applied in constructing the MOBAS in order to reduce the hardware complexity (e.g., the number of switch elements and interconnection wires) by almost one order of magnitude. In the proposed switch architecture, four major functions of designing a multicast switch: cell replication, cell routing, cell contention resolution, and cell addressing, are all performed distributively so that a large switch size is achievable. The architecture of the MOBAS has a regular and uniform structure and, thus, has the advantages of: (1) easy expansion due to the modular structure, (2) high integration density for VLSI implementation, (3) relaxed synchronization for data and clock signals, and (4) building the center switch fabric (i.e., the multicast grouping network) with a single type of chip. A two-stage structure of the multicast output buffered ATM switch (MOBAS) is described. The performance of the switch fabric in cell loss probability is analyzed, and the numerical results are shown. The authors show that a switch designed to meet the performance requirement for unicast calls will also satisfy multicast calls' performance. A 16×16 ATM crosspoint switch chip based on the proposed architecture has been implemented using CMOS 2-μm technology and tested to operate correctly.

13.
We introduce a new approach to ATM switching. We propose an ATM switch architecture which uses only a single shift-register-type buffering element to store and queue cells, and within the same (physical) queue, switches the cells by organizing them in logical queues destined for different output lines. The buffer is also a sequencer which allows flexible ordering of the cells in each logical queue to achieve any appropriate scheduling algorithm. This switch is proposed for use as the building block of large-scale multistage ATM switches because of low hardware complexity and flexibility in providing (per-VC) scheduling among the cells. The switch can also be used as scheduler/controller for RAM-based switches. The single-queue switch implements output queueing and performs full buffer sharing. The hardware complexity is low. The number of input and output lines can vary independently without affecting the switch core. The size of the buffering space can be increased simply by cascading the buffering elements.
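The single-queue idea can be sketched as one physical buffer whose entries are tagged with an output line and a scheduling rank: the logical queue of an output is simply the set of entries carrying its tag, and the insertion position realizes the scheduling discipline. The class, its names and the rank-based policy below are assumptions for illustration, not the proposed hardware.

```python
class SingleQueueSwitch:
    """One physical buffer (a list standing in for the shift-register chain)
    holds all cells; per-output logical queues are the cells with a matching
    output tag, and where a new cell is inserted determines its scheduling."""
    def __init__(self):
        self.buffer = []                       # physical queue, head at index 0

    def insert(self, cell, output, rank):
        """Insert behind every stored cell whose rank is <= rank (FIFO among
        equal ranks); rank could be arrival time, deadline, per-VC weight, etc."""
        pos = len(self.buffer)
        for i, (_c, _o, r) in enumerate(self.buffer):
            if r > rank:
                pos = i
                break
        self.buffer.insert(pos, (cell, output, rank))

    def dequeue(self, output):
        """Serve the head of the logical queue of one output line."""
        for i, (cell, out, _r) in enumerate(self.buffer):
            if out == output:
                return self.buffer.pop(i)[0]
        return None
```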

14.
An ATM switch fabric which is capable of being reconfigured based on the statistics of a previous time period is introduced. Taking into consideration the strong correlation between ports in a campus or LAN ATM switch, the proposed architecture exhibits improved performance. We demonstrate the performance improvement by applying data collected from a campus production ATM switch to our proposed architecture.

15.
WDM packet routing for high-capacity data networks
We present experimental and numerical studies of a novel packet-switch architecture, the data vortex, designed for large-scale photonic interconnections. The self-routing multihop packet switch efficiently scales to large port counts (>10 k) while maintaining low latencies, a narrow latency distribution, and high throughput. To facilitate optical implementation, the data-vortex architecture employs a novel hierarchical topology, traffic control, and synchronous timing that act to reduce the necessary routing logic operations and buffering. As a result of this architecture, all routing decisions for the data packets are based on a single logic operation at each node. The routing is further simplified by the employment of wavelength division multiplexing (WDM)-encoded header bits, which enable packet-header processing by simple wavelength filtering. The packet payload remains in the optical domain as it propagates through the data-vortex switch fabric, exploiting the transparency and high bandwidths achievable in fiber optic transmission. In this paper, we discuss numerical simulations of the data-vortex performance and report results from an experimental investigation of multihop WDM packet routing in a recirculating test bed.
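The "single logic operation per node" amounts to comparing one destination bit (carried on its own wavelength in the WDM header) with the node's position bit, as in the following sketch; the formulation is our own, not the test-bed implementation.

```python
def vortex_node_decision(dest_bits, node_position_bit, level):
    """Route by one bit only: on a match the packet descends one hierarchy
    level, otherwise it keeps circulating within the current level."""
    return "descend" if dest_bits[level] == node_position_bit else "circulate"

# Example: packet for output 0b101 at a level-1 node whose position bit is 0
print(vortex_node_decision([1, 0, 1], 0, 1))   # -> "descend"
```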

16.
In this paper, we pursue a performance analysis under hotspot traffic conditions of a four-level prioritized non-blocking baseband switch for use on board a switching multibeam satellite. Both finite input and output buffering as well as speed-up are employed to reduce the loss which is critical in a satellite application. In addition, in order to improve the performance of the two lowest priority users a head-of-line resolution (HLR) technique is implemented. It is shown that with HLR and the proper adjustment of the switch speed-up and the input and output buffers the loss can be substantially reduced. It is also shown that the dependence on the switch size which is characteristic of the unbuffered discard case is substantially reduced, even in a prioritized environment, allowing larger switches to be implemented.

17.
The telecommunications networks of the future are likely to be packet-switched networks consisting of wide bandwidth optical fiber transmission media, and large, highly parallel, self-routing switches. Recent considerations of switch architectures have focused on internally nonblocking networks with packet buffering at the switch outputs. These have optimal throughput and delay performance. The author considers a switch architecture consisting of parallel planes of low-speed internally blocking switch networks, in conjunction with input and output buffering. This architecture is desirable from the viewpoint of modularity and hardware cost, especially for large switches. Although this architecture is suboptimal, the throughput shortfall may be overcome by adding extra switch planes. A form of input queuing called bypass queuing can improve the throughput of the switch and thereby reduce the number of switch planes required. An input port controller is described which distributes packets to all switch planes according to the bypass policy, while preserving packet order for virtual circuits. Some simulation results for switch throughput are presented.
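Bypass queueing can be made concrete with a short selection routine: scan the input queue from the head, send the first packet whose path is free, but never let a packet overtake an earlier packet of its own virtual circuit. The function below is an assumed, simplified form of such a policy.

```python
def select_with_bypass(input_queue, free_outputs):
    """input_queue: list of dicts {'vc': ..., 'dest': ...}, head at index 0;
    free_outputs: set of output ports reachable through an idle switch plane.
    Returns the packet to transmit this slot (removed from the queue) or None."""
    blocked_vcs = set()
    for i, pkt in enumerate(input_queue):
        if pkt['vc'] in blocked_vcs:
            continue                      # would overtake its own VC: not allowed
        if pkt['dest'] in free_outputs:
            return input_queue.pop(i)     # bypasses the blocked packets ahead
        blocked_vcs.add(pkt['vc'])
    return None
```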

18.
Output-queued switch emulation by fabrics with limited memory
The output-queued (OQ) switch is often considered an ideal packet switching architecture for providing quality-of-service guarantees. Unfortunately, the high-speed memory requirements of the OQ switch prevent its use for large-scale devices. A previous result indicates that a crossbar switch fabric combined with lower speed input and output memory and two times speedup can exactly emulate an OQ switch; however, the complexity of the proposed centralized scheduling algorithms prevents scalability. This paper examines switch fabrics with limited memory and their ability to exactly emulate an OQ switch. The switch architecture of interest contains input queueing, fabric queueing, flow-control between the limited fabric buffers and the inputs, and output queueing. We present sufficient conditions that enable this combined input/fabric/output-queued switch with two times speedup to emulate a broad class of scheduling algorithms operating an OQ switch. Novel scheduling algorithms are then presented for the scalable buffered crossbar fabric. It is demonstrated that the addition of a small amount of memory at the crosspoints allows for distributed scheduling and significantly reduces scheduling complexity when compared with the memoryless crossbar fabric. We argue that a buffered crossbar system performing OQ switch emulation is feasible for OQ switch schedulers such as first-in-first-out, strict priority and earliest deadline first, and provides an attractive alternative to both crossbar switch fabrics and to the OQ switch architecture.
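The distributed scheduling enabled by crosspoint memory can be seen in a minimal model: each input fills empty crosspoints from its virtual output queues, each output drains occupied crosspoints in its column, and a full crosspoint simply back-pressures the corresponding VOQ. The class below is a sketch with assumed names and trivial first-match arbitration, not the paper's emulation algorithms.

```python
from collections import deque

class BufferedCrossbar:
    """Crosspoint-buffered crossbar sketch: one-cell buffer per crosspoint,
    virtual output queues (VOQs) at the inputs, fully distributed scheduling."""
    def __init__(self, n):
        self.n = n
        self.voq = [[deque() for _ in range(n)] for _ in range(n)]   # voq[i][j]
        self.xpoint = [[None] * n for _ in range(n)]                 # xpoint[i][j]

    def arrive(self, i, j, cell):
        self.voq[i][j].append(cell)

    def input_schedule(self):
        # each input independently moves one cell into an empty crosspoint;
        # a full crosspoint back-pressures its VOQ (flow control)
        for i in range(self.n):
            for j in range(self.n):
                if self.voq[i][j] and self.xpoint[i][j] is None:
                    self.xpoint[i][j] = self.voq[i][j].popleft()
                    break

    def output_schedule(self):
        # each output independently drains one occupied crosspoint in its column
        sent = []
        for j in range(self.n):
            for i in range(self.n):
                if self.xpoint[i][j] is not None:
                    sent.append((j, self.xpoint[i][j]))
                    self.xpoint[i][j] = None
                    break
        return sent
```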

19.
An asynchronous transfer mode (ATM) switch architecture that uses the broadcasting transmission medium for transmission of cells from input ports to output ports is introduced. Cell transmission and its control are separated completely, and cell transmission control, i.e. header operation, is executed before cell transmission (control ahead). With this operation, cell transmission and its control can be executed in a pipeline style, allowing high-speed cell exchange and making transmission control easier. One of the essential problems for ATM switches which use the broadcasting transmission medium is high-speed operation of the transmission medium. The switch fabric performance is analyzed according to its switching speed. Numerical results show that the ATM switch proposed shows good cell loss performance even when its switching speed is restricted, provided that switch utilization is below 1. Extensions to the switch that lead to robustness against bursty traffic are shown.

20.
An asynchronous transfer mode (ATM) switch chip set, which employs a shared multibuffer architecture, and its control method are described. This switch architecture features multiple-buffer memories located between two crosspoint switches. By controlling the input-side crosspoint switch so as to equalize the number of stored ATM cells in each buffer memory, these buffer memories can be treated as a single large shared buffer memory. Thus, buffers are used efficiently and the cell loss ratio is reduced to a minimum. Furthermore, no multiplexing or demultiplexing is required to store and restore the ATM cells by virtue of parallel access to the buffer memories via the crosspoint switches. Access time for the buffer memory is thus greatly reduced. This feature enables high-speed switch operation. A three-VLSI chip set using 0.8-μm BiCMOS process technology has been developed. Four aligner LSIs, nine bit-sliced buffer-switch LSIs, and one control LSI are combined to create a 622-Mb/s 8×8 ATM switching system that operates at 78 MHz. In the switch fabric, 155-Mb/s ATM cells can also be switched on the 622-Mb/s port using time-division multiplexing.
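The equalization rule that lets separate memories behave as one shared buffer is easy to sketch: always write an arriving cell into the least-occupied memory. The class below is an illustrative model with our own names and a simplified read policy, not the chip set's control logic.

```python
class SharedMultibuffer:
    """Several physical buffer memories between two crosspoint switches; the
    input-side crosspoint writes each arriving cell into the least-occupied
    memory, so the memories fill evenly and act as one large shared buffer."""
    def __init__(self, n_memories, memory_size):
        self.memories = [[] for _ in range(n_memories)]
        self.memory_size = memory_size

    def write(self, cell):
        target = min(self.memories, key=len)     # equalize occupancy
        if len(target) >= self.memory_size:
            return False                         # every memory full: cell lost
        target.append(cell)
        return True

    def read_one(self):
        # one simple read policy for the sketch: drain the fullest memory
        source = max(self.memories, key=len)
        return source.pop(0) if source else None
```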
