Similar Literature
18 similar documents found.
1.
钱炜宏  李乐民 《通信学报》1998,19(12):27-33
This paper analyses the cell-loss performance, under backpressure control, of an internally non-blocking input/output-queued ATM switch whose input buffers support loss priorities. High- and low-priority cells arrive at each input port with equal intensity, each cell is destined to any given output port with equal probability 1/N, and both the input and output buffer capacities are finite. A backpressure mechanism is introduced to guarantee that no cells are lost inside the switch. The analysis shows that a switch using a loss-priority policy meets the cell-loss requirements of different service classes (QoS, Quality of Service) better than a pure input/output-queued switch, while requiring less buffer capacity.
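The loss-priority idea can be illustrated with a small sketch. The push-out discipline below is one possible realization, assumed here for illustration; it is not necessarily the exact policy analysed in the paper. When the finite buffer is full, a high-priority arrival may displace a buffered low-priority cell, so high-priority traffic sees a lower loss rate.

```python
def offer(buf, cell, capacity):
    """Try to enqueue cell = (priority, payload) into buf (a list used
    as a FIFO queue); priority 0 = high, 1 = low.  Returns True if the
    cell is accepted.  When the buffer is full, a high-priority arrival
    pushes out the most recently buffered low-priority cell, if any
    (a push-out loss-priority policy)."""
    if len(buf) < capacity:
        buf.append(cell)
        return True
    if cell[0] == 0:  # high priority: look for a low-priority victim
        for idx in range(len(buf) - 1, -1, -1):
            if buf[idx][0] == 1:
                del buf[idx]        # the low-priority cell is lost
                buf.append(cell)
                return True
    return False  # buffer full, no victim: the arriving cell is lost
```

Under such a policy only low-priority cells are lost as long as any are buffered, which is how a loss-priority switch can meet different per-class loss targets with less total buffer than a priority-blind design.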

2.
Performance of input/output-queued ATM switches under bursty traffic
This paper gives a detailed analysis of the cell loss, maximum throughput, and related performance of an internally non-blocking input/output-queued backpressure ATM switch under bursty traffic. The arrival process at each input port is an ON-OFF burst stream: in the ON state a cell is emitted with probability p, and the ON and OFF durations are Pareto-distributed random variables. Cells belonging to the same burst are destined to the same output port, while different bursts are destined to the output ports with equal probability. The input and output buffers are finite, and the switch speedup factor S is arbitrary. The paper also compares switch performance under periodic and geometrically distributed burst lengths; the conclusions are of practical value for the design of input/output-queued backpressure ATM switches.
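The bursty arrival model described above can be sketched as follows. This is a minimal illustration using an inverse-transform Pareto sampler; the shape parameter `alpha` and the default values are assumptions, not the paper's exact parameterisation.

```python
import random

def pareto_length(alpha=1.5, xmin=1.0):
    """Pareto-distributed duration, rounded to at least one slot."""
    u = 1.0 - random.random()           # u in (0, 1], avoids division by 0
    return max(1, int(xmin * u ** (-1.0 / alpha)))

def onoff_source(num_slots, p=0.8, num_outputs=8, alpha=1.5):
    """Per-slot output of one input port: a destination port index when a
    cell is emitted, or None.  During an ON period a cell is emitted with
    probability p; all cells of one ON burst share the same destination;
    ON and OFF lengths are Pareto-distributed."""
    slots = []
    while len(slots) < num_slots:
        dest = random.randrange(num_outputs)   # one destination per burst
        for _ in range(pareto_length(alpha)):  # ON period
            if len(slots) == num_slots:
                break
            slots.append(dest if random.random() < p else None)
        for _ in range(pareto_length(alpha)):  # OFF period
            if len(slots) == num_slots:
                break
            slots.append(None)
    return slots
```

Feeding such heavy-tailed (Pareto) bursts into a switch model is what distinguishes this traffic scenario from the memoryless Bernoulli input of the other papers in this list.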

3.
Performance analysis of input/output-queued ATM switches with multiple priorities
钱炜宏  李乐民 《通信学报》1997,18(12):32-38
This paper analyses the relationship between the mean delay, the throughput, and the speedup factor S of an N×N internally non-blocking input/output-queued ATM switch under multi-priority load. The switch operates synchronously in time slots; the arrival process at each input port is a Bernoulli process, and each priority class maintains its own queue at every input port, served in first-come-first-served (FCFS) order. The input buffer capacity is large and is assumed infinite in the analysis; the output buffer capacity is B (B ≥ S), with a backpressure mechanism preventing output-buffer overflow. The paper studies the performance of the multi-priority switch as a function of the speedup factor S (1 ≤ S ≤ N). Computer simulation confirms the theoretical analysis.

4.
钱炜宏  李乐民 《电子学报》1998,26(11):46-50,54
This paper analyses the cell-loss and cell-delay performance of an internally non-blocking backpressure input/output-queued ATM switch under non-uniform load. A Geom/PH/1/K queueing model is used to analyse the input queueing system, and a two-dimensional Markov process to analyse the arbitration system. The conclusions are useful for the design of backpressure input/output-queued ATM switches.

5.
Using the matrix-geometric method, this paper analyses the cell loss, cell delay, and throughput of an internally non-blocking input/output-queued backpressure ATM switch under uniform Bernoulli input. The conclusions are of practical value for the design of backpressure input/output-queued packet switches.

6.
Using the matrix-geometric method, this paper analyses the cell loss, cell delay, and throughput of an internally non-blocking input/output-queued backpressure ATM switch under uniform Bernoulli input. The conclusions are of practical value for the design of backpressure input/output-queued packet switches.

7.
This paper analyses the performance, under backpressure control, of an internally non-blocking input/output-queued ATM switch with point-to-multipoint (multicast) service capability.

8.
A new high-performance ATM shared-memory switching element
Three basic queueing strategies are available in an ATM switching element: input queueing, output queueing, and shared-memory queueing. In terms of mean cell delay, throughput, and cell-loss probability, shared-memory queueing is the best strategy. However, because a conventional shared-memory switching element must buffer every cell passing through it, its throughput and cell-loss performance remain unsatisfactory, especially for large-scale switching elements. This paper proposes a new high-performance shared-memory switching element, called the Blocked-Cell Shared-Memory (BCSM) switching element. As the name suggests, BCSM buffers only the blocked cells at the input ports of the switching element rather than all cells passing through it. Analytical results under uniform traffic show that the BCSM switching element outperforms the conventional shared-memory switching element.

9.
For input-buffered and output-buffered ATM multicast switching systems there are currently two main cell-scheduling algorithms: the window scheduling algorithm and the output-buffering algorithm, used in input-buffered and output-buffered systems respectively. Their drawback is that they place high demands on processor speed and memory-access speed. The algorithm presented here places only modest demands on the switch fabric's processing speed and memory-access speed, and can greatly improve the switch's delay-throughput performance.

10.
This paper proposes an intelligent input-port controller for a multi-service parallel ATM switch. Its main function is to find, for newly arriving unicast and multicast cells, a path through the parallel multistage interconnection network (MIN) on which a connection can be successfully established. The aim is to reduce the cell blocking probability in the parallel MIN and to increase the switch throughput.

11.
This paper analyses the performance of an ATM switch fabric with Combined Input/Output Buffering (C-IOB) under two service principles for the cells at the head-of-line (HOL) positions of the input buffers: First-Come-First-Served/Random Service (FCFS/RS), in which the HOL cells addressed to a given output port are served in order of "age" (waiting time at the HOL position), with ties among cells of equal age broken at random; and Pure Random Service (PRS), in which all HOL cells addressed to a given output port are served at random regardless of age. In both cases the Queue Loss (QL) transfer scheme governs the interaction between the input and output buffers. The results show that the C-IOB ATM switch fabric with the PRS policy and the QL transfer scheme outperforms other buffered ATM switch fabrics.

12.
We propose an efficient multicast cell-scheduling algorithm, called the multiple-slot cell-scheduling algorithm, for multicast ATM switching systems with input queues. Cells in an input-queueing system are usually served under the first-in-first-out (FIFO) discipline, which may suffer a serious head-of-line (HOL) blocking problem. Our algorithm differs from previous algorithms in that we consider output contention resolution over multiple time slots instead of the current time slot only. Like a window-based scheduling algorithm, our algorithm allows cells behind an HOL cell to be transmitted before the HOL cell of the same input port, so HOL blocking can be alleviated. We show that, in delay-throughput performance, our algorithm outperforms most algorithms that consider output contention resolution only for the current time slot. We also present a simple and efficient architecture for realizing our algorithm, which can dramatically reduce the time complexity. We believe the proposed architecture is well suited to multicast asynchronous transfer mode (ATM) switching systems with input queues.

13.
The multiple input-queued (MIQ) switch is a switch that maintains multiple (m) queues at each input port, each dedicated to a group of output ports. Since each input port can switch up to m cells in a time slot, one from each queue, it hardly suffers from the head-of-line (HOL) blocking that is known to be the decisive factor limiting the throughput of the single input-queued (SIQ) switch. As a result, the MIQ switch offers better performance characteristics as the number of queues m per input increases. However, serving multiple cells from one input requires internal speedup or expansion of the switch fabric, diluting the merit of high-speed operation in the conventional SIQ scheme. A restricted rule is contrived to circumvent this side effect by regulating the number of cells switched from an input port. We analyze the performance of the MIQ switch employing the restricted rule, deriving closed formulas for the throughput bound, the mean cell delay and average queue length, and the cell-loss bound as functions of m, by generalizing the analysis of the SIQ switch by J.Y. Hui and E. Arthurs (see IEEE J. Select. Areas Commun., vol. SAC-5, p.1262-73, 1987).
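The HOL-blocking limit that motivates the MIQ design is easy to reproduce by simulation. The sketch below is an illustrative saturation model, not the paper's analysis: every input of a FIFO single-input-queued switch always holds a HOL cell with a uniform random destination, each contended output serves one contender per slot, and winners draw fresh destinations. For large N the measured throughput approaches the classic 2 − √2 ≈ 0.586.

```python
import random

def siq_saturation_throughput(n=16, slots=4000):
    """Classic saturation model of a FIFO single-input-queued switch.
    Each slot, every contended output port serves one of its contending
    HOL cells at random; the winning input draws a fresh uniform
    destination for its next HOL cell, losers keep theirs."""
    hol = [random.randrange(n) for _ in range(n)]   # HOL destination per input
    served = 0
    for _ in range(slots):
        contenders = {}
        for i, dest in enumerate(hol):
            contenders.setdefault(dest, []).append(i)
        for dest, inputs in contenders.items():
            winner = random.choice(inputs)
            hol[winner] = random.randrange(n)       # next cell becomes HOL
            served += 1
    return served / (n * slots)
```

Splitting each input into m destination-grouped queues, as the MIQ switch does, lets an input offer several candidate cells per slot and lifts this ceiling.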

14.
In this paper, we study the performance of an input- and output-queueing switch with a window scheme and a speed constraint. The performance of a non-blocking ATM switch can usually be improved by increasing the switching speed. The performance can also be improved by a window scheme that relaxes the first-in-first-out (FIFO) queueing discipline in the input queue. Thus, one can expect that combining windowing with a speed constraint can further improve the performance of the packet switch. Here, we analyze the maximum throughput of the input- and output-queueing switch with a speed constraint combined with windowing, and show that high throughput can be obtained with a small increase in speedup and window size. For the analysis, we model the HOL queueing system as a virtual queueing system. By analyzing the dynamics of HOL packets in this virtual queueing model, we obtain the service probability of the HOL server as a function of output contention capabilities. Using this result, we apply the flow conservation relation to the model and obtain the maximum throughput. The analytical results are verified by simulation.

15.
This letter proposes a high-speed input- and output-buffering asynchronous transfer mode (ATM) switch, named the tandem-crosspoint (TDXP) switch. The TDXP switch consists of multiple crossbar switch planes connected in tandem at every crosspoint. The TDXP switch eliminates head-of-line (HOL) blocking without increasing the internal line speed. In addition, since the TDXP switch employs a simple cell-reading algorithm at the input buffer to retain the cell sequence, it does not require cell sequences to be rebuilt at the output buffers using time stamps, as a parallel switch does. It is shown that the TDXP switch can eliminate HOL blocking effectively and achieve high throughput.

16.
Multicast scheduling for input-queued switches
We design a scheduler for an M×N input-queued multicast switch. It is assumed that: 1) each input maintains a single queue for arriving multicast cells and 2) only the cell at the head of line (HOL) can be observed and scheduled at one time. The scheduler needs to be: 1) work-conserving (no output port may be idle as long as there is an input cell destined to it) and 2) fair (no input cell may be held at HOL for more than a fixed number of cell times). The aim is to find a work-conserving, fair policy that delivers maximum throughput and minimizes input queue latency, yet is simple to implement. When a scheduling policy decides which cells to schedule, contention may require that it leave a residue of cells to be scheduled in the next cell time. The selection of where to place the residue uniquely defines the scheduling policy. Subject to a fairness constraint, we argue that a policy which always concentrates the residue on as few inputs as possible generally outperforms all other policies. We find that there is a tradeoff among concentration of residue (for high throughput), strictness of fairness (to prevent starvation), and implementational simplicity (for the design of high-speed switches). By mapping the general multicast switching problem onto a variation of the popular block-packing game Tetris, we are able to analyze various scheduling policies which possess these attributes in different proportions. We present a novel scheduling policy, called TATRA, which performs extremely well and is strict in fairness. We also present a simple weight-based algorithm, called WBA.
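The weight-based idea can be sketched minimally as follows. The weight form below (age minus fanout) and the tie-breaking rule are assumptions for illustration; the paper's actual WBA weights may differ. Each input advertises one weight for its HOL multicast cell, and every output independently grants to its highest-weight requester, which tends to drain old cells first and concentrate the residue on few inputs.

```python
def wba_grants(hol_cells, age_w=1, fanout_w=1):
    """hol_cells[i] is (age, set_of_requested_outputs) for input i, or
    None if input i is idle.  Each input computes one weight for its HOL
    cell: older cells weigh more, high-fanout cells weigh less.  Every
    output then grants independently to its highest-weight requester
    (ties broken toward the lowest input index).
    Returns {output_port: granted_input}."""
    weights = {}
    requested = set()
    for i, cell in enumerate(hol_cells):
        if cell is None:
            continue
        age, fanout = cell
        weights[i] = age_w * age - fanout_w * len(fanout)
        requested |= fanout
    grants = {}
    for out in requested:
        contenders = [i for i, cell in enumerate(hol_cells)
                      if cell is not None and out in cell[1]]
        grants[out] = max(contenders, key=lambda i: (weights[i], -i))
    return grants
```

A cell's residue (the requested outputs it did not win) stays at HOL with a larger age in the next cell time, so starvation is bounded by the age term.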

17.
Son J.W., Lee H.T., Oh Y.Y., Lee J.Y., Lee S.B. 《Electronics letters》1997,33(14):1192-1193
A switch architecture is proposed for alleviating HOL blocking by employing even/odd dual FIFO queues at each input and even/odd dual switching planes, each dedicated to one of the queues. Under random traffic it achieves 76.4% throughput without output expansion and 100% with output expansion r=2, using the same number of crosspoints as the ordinary output-expansion scheme.

18.
Motivated by the observation that switch throughput is mainly limited by the size of the maximum matching (pairing) rather than by the head-of-line (HOL) effect, a pairing algorithm that tries to maximize the number of pairings is proposed for switches with K buffers in each input port. As the related formulas and simulation data show, the algorithm performs well and can boost switch throughput from the traditional 0.632 to 0.981 when K=4, even as the switch size → ∞.
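The throughput figures quoted above can be reproduced approximately with a small experiment. This is an i.i.d. per-slot sketch, not the paper's pairing algorithm: each input exposes the destinations of its first K cells (drawn uniformly at random each slot), a maximum bipartite matching is computed with Kuhn's augmenting-path algorithm, and the mean matching size per port estimates throughput, near 1 − 1/e ≈ 0.632 for K=1 and close to 0.98 for K=4.

```python
import random

def max_matching(requests, n_out):
    """Maximum bipartite matching via Kuhn's augmenting-path algorithm.
    requests[i] is the set of output ports input i can send to."""
    match_out = [-1] * n_out            # output -> matched input

    def augment(i, seen):
        for out in requests[i]:
            if out not in seen:
                seen.add(out)
                if match_out[out] == -1 or augment(match_out[out], seen):
                    match_out[out] = i
                    return True
        return False

    return sum(augment(i, set()) for i in range(len(requests)))

def mean_throughput(n=16, k=4, slots=200):
    """Each slot, every input offers the destinations of its first k
    cells (i.i.d. uniform here); throughput is estimated as the mean
    maximum matching size per port."""
    total = 0
    for _ in range(slots):
        reqs = [{random.randrange(n) for _ in range(k)} for _ in range(n)]
        total += max_matching(reqs, n)
    return total / (n * slots)
```

With k=1 the matching is just the number of distinct requested outputs, which is where the traditional 0.632 ceiling comes from; enlarging the pairing window to k=4 lets the matcher route around collisions.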

