Similar Literature
 19 similar documents were retrieved (search time: 78 ms).
1.
钱炜宏  李乐民 《电子学报》1998,26(11):46-50,54
This paper analyzes the cell loss and cell delay performance of an internally nonblocking backpressure-type input/output-queued ATM switch under non-uniform input load. A Geom/PH/1/K queueing model is used to analyze the input queueing system, and a two-dimensional Markov process is used to analyze the arbitration system. The results provide a useful reference for the design of backpressure-type input/output-queued ATM switches.

2.
Using the matrix-geometric analysis method, this paper analyzes the cell loss, cell delay and throughput of an internally nonblocking backpressure-type input/output-queued ATM switch under uniform Bernoulli traffic. The results provide a useful reference for the practical design of backpressure-type input/output-queued packet switches.

3.
Performance of input/output-queued ATM switches under bursty traffic (cited 1 time: 0 self-citations, 1 by others)
This paper analyzes in detail the cell loss and maximum throughput of an internally nonblocking backpressure-type input/output-queued ATM switch under bursty traffic. The cell arrival process at each input port is an ON-OFF burst stream: in the ON state a cell is sent with probability p, and the ON and OFF durations are Pareto-distributed random variables. Cells belonging to the same burst are destined for the same output port, while cells of different bursts are destined for the output ports with equal probability. The input and output buffers are of finite length, and the switch speedup factor S is arbitrary. The paper also compares switch performance when the burst length follows a deterministic (periodic) or geometric distribution. The results provide a useful reference for the practical design of input/output-queued backpressure-type ATM switches.
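As an illustration of the traffic model described in this abstract, the sketch below generates one such ON-OFF source in Python. The function and parameter names (p_send, alpha_on, alpha_off) are ours, and the Pareto shape values are arbitrary placeholders rather than figures from the paper.

```python
import random

def pareto_length(alpha, x_min=1.0):
    """Draw an integer period length from a Pareto(alpha, x_min) distribution."""
    u = 1.0 - random.random()                        # uniform in (0, 1]
    return max(1, int(x_min / (u ** (1.0 / alpha))))

def on_off_source(n_slots, n_outputs, p_send=0.8, alpha_on=1.9, alpha_off=1.9):
    """Yield (slot, destination) pairs for one bursty input port.

    All cells of the same ON burst go to the same output port; a new output
    is drawn uniformly at the start of every burst, as in the model above.
    """
    slot = 0
    while slot < n_slots:
        dest = random.randrange(n_outputs)           # one output per burst
        for _ in range(pareto_length(alpha_on)):     # ON period
            if slot >= n_slots:
                return
            if random.random() < p_send:             # send a cell with probability p
                yield slot, dest
            slot += 1
        slot += pareto_length(alpha_off)             # silent OFF period

# Example: observe a single source feeding a 16x16 switch for 10000 slots
arrivals = list(on_off_source(10_000, 16))
```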

4.
钱炜宏  李乐民 《通信学报》1998,19(12):27-33
This paper analyzes the cell loss performance, under backpressure control, of an internally nonblocking input/output-queued ATM switch whose input buffers support loss priorities. At each input port, high- and low-priority cells arrive with the same intensity and are destined for each output port with equal probability 1/N, and both the input and output buffer capacities are finite. A backpressure mechanism is introduced to ensure that no cells are lost inside the switch. The results show that a switch using the loss-priority policy satisfies the loss requirements of different quality-of-service (QoS) classes better than a pure input/output-queued switch, while requiring less buffer capacity.

5.
This paper analyzes the performance, under backpressure control, of an internally nonblocking input/output-queued ATM switch with point-to-multipoint (multicast) capability. Cells arrive at each input port with the same intensity; each head-of-line cell is replicated, according to a common probability distribution, into multiple head-of-line cells destined for different output ports, and each replicated head-of-line cell is destined for each output port with equal probability 1/N. Both the input and output buffer capacities are finite. A backpressure mechanism is introduced to ensure that no cells are lost inside the switch. Numerical solutions are obtained using the matrix-geometric analysis method, and computer simulation results confirm the theoretical analysis.

6.
Performance analysis of input/output-queued ATM switches with multiple priorities (cited 4 times: 0 self-citations, 4 by others)
钱炜宏  李乐民 《通信学报》1997,18(12):32-38
This paper analyzes the relationship between the average delay and throughput of an N×N internally nonblocking input/output-queued ATM switch and the speedup factor S under multi-priority input load. The switch operates synchronously in time slots; the arrival process at each input port is a Bernoulli process; each priority class has its own queue at every input port, and all queues are served first-come-first-served (FCFS). The input buffer capacity is very large and is assumed infinite in the analysis; the output buffer capacity is B (B ≥ S), and a backpressure mechanism guarantees that the output buffers do not overflow. The paper studies how the performance of the multi-priority switch varies with the speedup factor S (1 ≤ S ≤ N). Computer simulation results confirm the theoretical analysis.

7.
路欣  彭来献 《现代电子技术》2004,27(3):99-101,105
In today's high-speed routers and switches, fixed-length switching based on fixed-size cells is commonly adopted to improve transmission efficiency. This paper models and simulates weighted fair queueing (WFQ) based on fixed-length cell queueing and analyzes each component of the model in detail. Exploiting the fixed cell length, the model is implemented in a fixed-time-driven fashion. Finally, the fairness of FIFO (first-in-first-out) and WFQ is compared. Simulation results show that cell-based WFQ is suitable for high-speed routers and switches.
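To make the cell-based fair queueing idea concrete, here is a minimal sketch of a fixed-length-cell scheduler. With equal cell sizes the finish tag reduces to F = max(F_prev, V) + 1/w. Virtual time is advanced to the tag of the cell in service, which is the self-clocked (SCFQ-style) simplification rather than the exact WFQ virtual clock studied in the paper; class and method names are illustrative only.

```python
import heapq
from collections import defaultdict

class CellWFQ:
    """Fair queueing for fixed-length cells (illustrative sketch)."""

    def __init__(self, weights):
        self.weights = weights            # flow id -> weight
        self.finish = defaultdict(float)  # last finish tag per flow
        self.vtime = 0.0                  # virtual time
        self.heap = []                    # (finish_tag, seq, flow, cell)
        self.seq = 0

    def enqueue(self, flow, cell):
        # fixed cell length, so the service term is simply 1 / weight
        f = max(self.finish[flow], self.vtime) + 1.0 / self.weights[flow]
        self.finish[flow] = f
        heapq.heappush(self.heap, (f, self.seq, flow, cell))
        self.seq += 1

    def dequeue(self):
        """Serve the cell with the smallest virtual finish tag."""
        if not self.heap:
            return None
        f, _, flow, cell = heapq.heappop(self.heap)
        self.vtime = f                    # self-clocked virtual-time update
        return flow, cell

# Example: two flows with weights 3 and 1 sharing one output link
sched = CellWFQ({"a": 3.0, "b": 1.0})
for k in range(4):
    sched.enqueue("a", f"a{k}")
    sched.enqueue("b", f"b{k}")
order = [sched.dequeue()[0] for _ in range(8)]
# 'a' is served about three times as often as 'b' while both are backlogged
```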

8.
A new high-performance ATM shared-memory switching element (cited 1 time: 0 self-citations, 1 by others)
Three basic queueing strategies can be used in an ATM switching element: input queueing, output queueing and shared-memory queueing. In terms of average cell delay, throughput and cell loss probability, shared-memory queueing is the best strategy. However, because a conventional shared-memory switching element must buffer every cell that passes through it, its throughput and cell loss performance are still unsatisfactory, especially for large-scale switching elements. This paper proposes a new high-performance shared-memory switching element, called the blocked-cell shared-memory (BCSM) switching element. As its name implies, the BCSM element buffers only the cells blocked at the input ports of the switching element, rather than every cell passing through it. Analytical results under uniform traffic show that the BCSM switching element outperforms conventional shared-memory switching elements.

9.
For input-buffered and output-buffered ATM multicast switching systems, two cell scheduling algorithms are currently dominant: the window scheduling algorithm and the output-buffering algorithm. These are used in input-buffered and output-buffered systems respectively, and their drawback is that they place high demands on processor speed and memory access speed. The algorithm presented here places only modest demands on the processing speed and memory access speed of the switching fabric and greatly improves the delay-throughput performance of the switch.

10.
A priority flow-control strategy based on an input buffering mechanism is proposed for ATM networks, and computer simulation is used to compare the performance of ATM multimedia networks under this priority flow control and under FIFO input flow control. The results show that the priority flow-control strategy improves the non-real-time cell loss rate and the real-time cell queueing delay of ATM multimedia networks, and is of considerable practical value.

11.
The Tera ATM LAN project at Carnegie Mellon University addresses the interconnection of hundreds of workstations in the Electrical and Computer Engineering Department via an ATM-based network. The Tera network architecture consists of switched Ethernet clusters that are interconnected using an ATM network. This paper presents the Tera network architecture, including an Ethernet/ATM network interface, the Tera ATM switch, and its performance analysis. The Tera switch architecture for asynchronous transfer mode (ATM) local area networks (LANs) incorporates a scalable nonblocking switching element with a hybrid queueing discipline. The hybrid queueing strategy includes a global first-in first-out (FIFO) queue that is shared by all switch inputs and dedicated output queues with small speedup. Due to hybrid queueing, switch performance is comparable to output queueing switches. The shared input queue design is scalable since it is based on a Banyan network and N FIFO memories. The Tera switch incorporates an optimal-throughput multicast stage that is also based on a Banyan network. Switch performance is evaluated using queueing analysis and simulation under various traffic patterns.

12.
Modeling alternatives for a fast packet switching system are analyzed. A nonblocking switch fabric that runs at the same speed as the input/output links is considered. The performance of the considered approaches is derived by theoretical analysis and computer simulations. A performance comparison between input queueing approaches with different selection policies is presented. Novel input and output queueing techniques are also proposed. In particular, it is shown that, depending on the implementation, the input queueing approach studied in this paper achieves the same performance as the optimum (output) queueing alternative, without resorting to a faster packet switch fabric.

13.
Output-queued switch emulation by fabrics with limited memory (cited 9 times: 0 self-citations, 9 by others)
The output-queued (OQ) switch is often considered an ideal packet switching architecture for providing quality-of-service guarantees. Unfortunately, the high-speed memory requirements of the OQ switch prevent its use for large-scale devices. A previous result indicates that a crossbar switch fabric combined with lower speed input and output memory and two times speedup can exactly emulate an OQ switch; however, the complexity of the proposed centralized scheduling algorithms prevents scalability. This paper examines switch fabrics with limited memory and their ability to exactly emulate an OQ switch. The switch architecture of interest contains input queueing, fabric queueing, flow-control between the limited fabric buffers and the inputs, and output queueing. We present sufficient conditions that enable this combined input/fabric/output-queued switch with two times speedup to emulate a broad class of scheduling algorithms operating an OQ switch. Novel scheduling algorithms are then presented for the scalable buffered crossbar fabric. It is demonstrated that the addition of a small amount of memory at the crosspoints allows for distributed scheduling and significantly reduces scheduling complexity when compared with the memoryless crossbar fabric. We argue that a buffered crossbar system performing OQ switch emulation is feasible for OQ switch schedulers such as first-in-first-out, strict priority and earliest deadline first, and provides an attractive alternative to both crossbar switch fabrics and to the OQ switch architecture.
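The distributed-scheduling idea behind the buffered crossbar, where each input and each output arbitrates only over its own row or column of crosspoint buffers, can be illustrated with the toy model below. The one-cell crosspoint buffers and random arbiters are simplifying assumptions of ours and are not the schedulers proposed in the paper.

```python
import random
from collections import deque

class BufferedCrossbar:
    """Toy buffered crossbar: one-cell crosspoint buffers, distributed arbiters."""

    def __init__(self, n):
        self.n = n
        self.voq = [[deque() for _ in range(n)] for _ in range(n)]  # VOQs per input
        self.xp = [[None] * n for _ in range(n)]                    # crosspoint buffers

    def arrive(self, i, j, cell):
        self.voq[i][j].append(cell)          # cell at input i destined for output j

    def slot(self):
        # input schedulers: each input forwards one HOL cell to a free crosspoint
        for i in range(self.n):
            eligible = [j for j in range(self.n)
                        if self.voq[i][j] and self.xp[i][j] is None]
            if eligible:
                j = random.choice(eligible)
                self.xp[i][j] = self.voq[i][j].popleft()
        # output schedulers: each output drains one occupied crosspoint on its column
        delivered = []
        for j in range(self.n):
            occupied = [i for i in range(self.n) if self.xp[i][j] is not None]
            if occupied:
                i = random.choice(occupied)
                delivered.append(self.xp[i][j])
                self.xp[i][j] = None
        return delivered
```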

14.
Two simple models of queueing on an N × N space-division packet switch are examined. The switch operates synchronously with fixed-length packets; during each time slot, packets may arrive on any inputs addressed to any outputs. Because packet arrivals to the switch are unscheduled, more than one packet may arrive for the same output during the same time slot, making queueing unavoidable. Mean queue lengths are always greater for queueing on inputs than for queueing on outputs, and the output queues saturate only as the utilization approaches unity. Input queues, on the other hand, saturate at a utilization that depends on N, but is approximately (2 − √2) ≈ 0.586 when N is large. If output trunk utilization is the primary consideration, it is possible to slightly increase utilization of the output trunks, up to (1 − e⁻¹) ≈ 0.632 as N → ∞, by dropping interfering packets at the end of each time slot, rather than storing them in the input queues. This improvement is possible, however, only when the utilization of the input trunks exceeds a second critical threshold, approximately ln(1 + √2) ≈ 0.881 for large N.
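The 0.586 figure can be recovered from the familiar heavy-traffic argument sketched below, which approximates the head-of-line virtual queues by M/D/1 queues for large N; the symbol ρ₀ is our notation, and the paper's own derivation may differ in detail.

```latex
% Saturation throughput of FIFO input queueing as N -> infinity (sketch).
% Under saturation, the HOL cells destined to one output form a virtual
% queue that behaves like M/D/1 with arrival rate \rho_0; with N inputs
% and N outputs, the mean number of HOL cells per virtual queue must be 1:
\[
  \rho_0 + \frac{\rho_0^{2}}{2(1-\rho_0)} = 1
  \quad\Longrightarrow\quad
  \rho_0^{2} - 4\rho_0 + 2 = 0
  \quad\Longrightarrow\quad
  \rho_0 = 2 - \sqrt{2} \approx 0.586 .
\]
```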

15.
A new ATM switch architecture is presented. Our proposed Multinet switch is a self-routing multistage switch with partially shared internal buffers capable of achieving 100% throughput under uniform traffic. Although it provides incoming ATM cells with multiple paths, the cell sequence is maintained throughout the switch fabric, thus eliminating the out-of-order cell sequence problem. Cells contending for the same output addresses are buffered internally according to a partially shared queueing discipline. In a partially shared queueing scheme, buffers are partially shared to accommodate bursty traffic and to limit the performance degradation that may occur in a completely shared system, where a small number of calls may hog the entire buffer space unfairly. Although the hardware complexity in terms of number of crosspoints is similar to that of input queueing switches, the Multinet switch has throughput and delay performance similar to output queueing switches.

16.
The output queueing structure offers the best switching performance in fast packet switching. This paper studies the queueing performance of an output-queued structure switching traffic with an arbitrary number of priority classes. Characteristic parameters such as the average queue length of each priority class are derived for both independent and correlated arrivals, and it is found that when the number of ports N of an N×N interconnection network is sufficiently large, the queueing performance under the two arrival models converges. A numerical iterative method is proposed to obtain the steady-state solution of the high- and low-priority packet queue lengths represented by a two-dimensional Markov process. Computer simulation confirms the analysis.
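The kind of numerical iteration described here, solving for the stationary distribution of a finite Markov chain, can be sketched as a simple power iteration. The function below is a generic illustration with our own names and tolerances, not the paper's specific two-dimensional formulation.

```python
import numpy as np

def steady_state(P, tol=1e-12, max_iter=100_000):
    """Stationary distribution of a finite Markov chain by power iteration.

    P is a row-stochastic transition matrix; the chain is assumed
    irreducible and aperiodic so that the iteration converges.
    """
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).sum() < tol:
            return nxt
        pi = nxt
    return pi

# Example: a two-state chain
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
print(steady_state(P))   # approximately [0.833, 0.167]
```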

17.
In this paper, we study the performance of an input and output queueing switch with a window scheme and a speed constraint. The performance of a non-blocking ATM switch can usually be improved by increasing the switching speed. The performance of a switch can also be improved with a window scheme, by relaxing the first-in-first-out (FIFO) queueing discipline in the input queue. Thus, one can expect that a combined scheme of windowing and a speed constraint can further improve the performance of the packet switch. Here, we analyze the maximum throughput of the input and output queueing switch with a speed constraint combined with windowing, and show that it is possible to obtain high throughput with a small increase in speed-up and window size. For the analysis, we model the HOL queueing system as a virtual queueing system. By analyzing the dynamics of HOL packets in this virtual queueing model, we obtain the service probability of the HOL server as a function of output contention capabilities. Using this result, we apply the flow conservation relation to this model and obtain the maximum throughput. The analytical results are verified by simulation.
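A minimal sketch of the windowing idea, letting each input offer any of its first w queued cells instead of only the head-of-line cell, is given below. The random input scan order is a placeholder for the contention-resolution rule analysed in the paper, and all names are illustrative.

```python
import random
from collections import deque

def window_select(queues, window):
    """One slot of input arbitration with a window of size `window`.

    `queues[i]` is a deque of destination ports for input i.  Each input may
    offer any of its first `window` cells; each output accepts at most one
    cell per slot.  Returns a dict mapping granted output -> input.
    """
    granted = {}                                    # output port -> input port
    for i in random.sample(range(len(queues)), len(queues)):
        for pos, dest in enumerate(list(queues[i])[:window]):
            if dest not in granted:                 # output still free this slot
                granted[dest] = i
                cells = list(queues[i])
                cells.pop(pos)                      # remove the selected cell
                queues[i] = deque(cells)
                break
    return granted

# Example: 4 inputs, window of 2; queue entries are destination port numbers.
# With window = 1 only one input could send (all HOL cells want output 0);
# windowing lets blocked inputs offer their second cell instead.
voq = [deque([0, 1]), deque([0, 2]), deque([0, 3]), deque([0, 0])]
print(window_select(voq, window=2))
```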

18.
We have previously proposed an efficient switch architecture called the multiple input/output-queued (MIOQ) switch and showed that the MIOQ switch can match the performance of an output-queued switch statistically. In this paper, we prove theoretically that the MIOQ switch can match output queueing exactly, not statistically, with no speedup of any component. More specifically, we show that the MIOQ switch with two parallel switches (which we call a parallel MIOQ (PMIOQ) switch in this paper) can provide exact emulation of an output-queued switch with a broad class of service scheduling algorithms, including FIFO, weighted fair queueing (WFQ) and strict priority queueing, regardless of incoming traffic pattern and switch size. To do that, we first propose the stable strategic alliance (SSA) algorithm that can produce a stable many-to-many assignment, and prove its finite, stable and deterministic properties. Next, we apply the SSA algorithm to the scheduling of a PMIOQ switch with two parallel switches, and show that the stability condition of the SSA algorithm guarantees that the PMIOQ switch emulates an output-queued switch exactly. To avoid possible conflicts in a parallel switch, each input-output pair matched by the SSA algorithm must be mapped to one of two crossbar switches. For this mapping, we also propose a simple algorithm that requires at most 2N steps for all matched input-output pairs. In addition, to relieve the implementation burden of N input buffers being accessed simultaneously, we propose a buffering scheme called redundant buffering, which requires two memory devices instead of N physically separate memories. In conclusion, we demonstrate that the MIOQ switch requires two crossbar switches in parallel and two physical memories at each input and output to emulate an output-queued switch with no speedup of any component.

19.
Asynchronous transfer mode (ATM) is the transport technique for the broadband ISDN recommended by CCITT (I.121). Many switches have been proposed to accommodate ATM, which requires fast packet switching capability.1-8 The proposed switches for the broadband ISDN can be classified as being of input queueing or output queueing type. Those of the input queueing type have a throughput performance which is approximately 58 per cent of that of the output queueing type. However, output queueing networks require larger amounts of hardware than input queueing networks. In this paper, we propose a new multistage switch with internal buffering that approaches a maximum throughput of 100 per cent as the buffering is increased. The switch is capable of broadcasting and self-routeing. It consists of two switching planes made up of packet processors, 2 × 2 switching elements, distributors and buffers located between stages and in the output ports. The internal data rate of the proposed switch is the same as that of the arriving information stream; in this sense, the switch does not require speed-up. The switch has log2 N stages that forward packets in a store-and-forward fashion, thus incurring a latency of log2 N time periods. Performance analysis shows that the additional delay is small.
