Similar Documents
18 similar documents retrieved.
1.
王斌  丁炜 《现代有线传输》2003,(3):45-47,54
Built on virtual output queueing (VOQ), input-queued (IQ) switches can provide high-speed switching at low cost, but under conventional scheduling algorithms an IQ switch cannot guarantee QoS. Building on the work of Birkhoff and von Neumann, this paper applies stochastic process theory and network calculus to propose a bandwidth-reserving scheduling algorithm, and analyzes the corresponding delay upper bound and the memory required by the VOQs.
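As background to the Birkhoff-von Neumann approach mentioned above, the sketch below shows how a doubly stochastic rate-reservation matrix can be decomposed into permutation schedules for a VOQ crossbar. It is a minimal illustration of the standard decomposition, not the scheduling algorithm of the cited paper, and all names are assumptions.

    # Birkhoff-von Neumann decomposition: express a doubly stochastic rate matrix R as a
    # convex combination of permutation matrices; permutation P_k is then served for a
    # fraction phi_k of the time slots, guaranteeing VOQ(i, j) at least rate R[i][j].
    def bvn_decompose(R, eps=1e-9):
        N = len(R)
        R = [row[:] for row in R]                 # work on a copy
        schedule = []                             # list of (phi_k, permutation) pairs

        def find_matching():
            # Augmenting-path bipartite matching over entries with R[i][j] > eps.
            match_out = [-1] * N                  # output port -> matched input port
            def augment(i, seen):
                for j in range(N):
                    if R[i][j] > eps and not seen[j]:
                        seen[j] = True
                        if match_out[j] < 0 or augment(match_out[j], seen):
                            match_out[j] = i
                            return True
                return False
            for i in range(N):
                if not augment(i, [False] * N):
                    return None                   # no perfect matching remains
            return [match_out.index(i) for i in range(N)]   # input i -> output perm[i]

        while True:
            perm = find_matching()
            if perm is None:
                break
            phi = min(R[i][perm[i]] for i in range(N))
            schedule.append((phi, perm))
            for i in range(N):
                R[i][perm[i]] -= phi
        return schedule

    # Example: a 3x3 doubly stochastic reservation matrix
    R = [[0.5, 0.3, 0.2],
         [0.2, 0.5, 0.3],
         [0.3, 0.2, 0.5]]
    for phi, perm in bvn_decompose(R):
        print(phi, perm)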

2.
王晶  乔庐峰  陈庆华  郑振  李欢欢 《通信技术》2015,48(10):1196-1224
To address the limited hardware memory resources available in satellite-borne IP switches, a queue manager with low memory overhead is proposed for shared-memory switch fabrics. By introducing an indexing scheme, all unicast queues share a single pointer storage area. Using a bitmap mapping, a multicast pointer is converted into multiple unicast pointers, so that multicast traffic is written to the corresponding logical queue paths in the same way as unicast traffic, thereby saving memory. The queue manager controls the writing and reading of pointer indices through the head and tail of a linked-list data structure. The design was synthesized and implemented on a Xilinx xc6vlx130t FPGA; compared with a queue manager based on pointer duplication, the proposed scheme reduces memory usage by more than 22% in an 8-port switch.
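To make the linked-list organization concrete, here is a small software model, offered as a hedged illustration rather than the paper's FPGA design: all logical queues draw cell indices from one shared pointer pool, head/tail registers delimit each linked list, and a multicast destination bitmap is expanded into per-port unicast enqueues. Port counts, pool size and method names are assumptions.

    from collections import deque

    class QueueManager:
        def __init__(self, num_ports=8, pool_size=1024):
            self.free = deque(range(pool_size))    # shared pool of buffer indices
            self.next = [None] * pool_size         # per-index "next pointer" of the linked list
            self.head = [None] * num_ports         # head of each logical (per-port) queue
            self.tail = [None] * num_ports         # tail of each logical (per-port) queue
            self.num_ports = num_ports

        def enqueue(self, port):
            if not self.free:
                return None                        # shared buffer exhausted: drop
            idx = self.free.popleft()
            self.next[idx] = None
            if self.tail[port] is None:            # queue empty: idx is both head and tail
                self.head[port] = idx
            else:
                self.next[self.tail[port]] = idx   # link behind the current tail
            self.tail[port] = idx
            return idx

        def dequeue(self, port):
            idx = self.head[port]
            if idx is None:
                return None                        # logical queue empty
            self.head[port] = self.next[idx]
            if self.head[port] is None:
                self.tail[port] = None
            self.free.append(idx)                  # return the index to the shared pool
            return idx

        def enqueue_multicast(self, port_bitmap):
            # Expand the destination bitmap into one unicast enqueue per set bit.
            return [self.enqueue(p) for p in range(self.num_ports) if (port_bitmap >> p) & 1]

    qm = QueueManager()
    qm.enqueue_multicast(0b00000101)               # enqueue to the logical queues of ports 0 and 2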

3.
Switches in the Internet face the twin challenges of switching at high speed and providing QoS guarantees: the former requires the switch buffers to operate at line rate, while the latter requires the switch to exactly emulate an output-queued switch. Existing schemes in which a crosspoint-buffered switch emulates an output-queued switch require an internal speedup of 2, which places heavy demands on the hardware. Using dual-port techniques, this paper proposes a novel crosspoint-buffered switch architecture. Theoretical analysis shows that this variable-length packet switch can emulate an output-queued switch without internal speedup, and that the crosspoint buffer requirement is lower-bounded, indicating that the architecture is well suited to high-speed switching.

4.
A new ATM switch architecture based on a bypass-queue Banyan fabric is proposed. It is suitable for both unicast and multicast traffic and has relatively low complexity. The paper discusses in detail a new algorithm for computing the switch throughput and analyzes the performance of the switch on the basis of this algorithm.

5.
Most current high-speed switches adopt virtual output queueing (VOQ) and segment variable-length packets into fixed-length cells before switching. Previous work on queue organization and management focuses on N×N fabrics in which every port corresponds to one physical port, yet in practice ports often need to be multiplexed and demultiplexed. Targeting the design characteristics of the edge-router subsystem of the 863 "Practical Integrated Access System" project, we propose a queue organization and demultiplexing algorithm based on output subports. This paper describes the algorithm and gives its performance analysis and implementation.
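A hedged sketch of per-subport queue organization follows: VOQs are keyed by (physical output port, subport) and a simple demultiplexer serves the subports of one physical port in round-robin order. The subport counts, the round-robin policy and all names are illustrative assumptions, not the 863 subsystem design.

    from collections import deque

    class SubportVOQs:
        def __init__(self, subports_per_port):
            # voq[(port, subport)] holds cells destined for that output subport
            self.voq = {(p, s): deque()
                        for p, n in subports_per_port.items()
                        for s in range(n)}
            self.rr = {p: 0 for p in subports_per_port}      # per-port round-robin pointer
            self.subports_per_port = subports_per_port

        def enqueue(self, cell, port, subport):
            self.voq[(port, subport)].append(cell)

        def demux_one(self, port):
            # Serve the subports of one physical output port in round-robin order.
            n = self.subports_per_port[port]
            for k in range(n):
                s = (self.rr[port] + k) % n
                q = self.voq[(port, s)]
                if q:
                    self.rr[port] = (s + 1) % n
                    return s, q.popleft()
            return None                                      # all subports of this port are empty

    # Example: physical output port 0 is demultiplexed into 4 subports
    voqs = SubportVOQs({0: 4})
    voqs.enqueue("cell-A", port=0, subport=2)
    print(voqs.demux_one(0))                                 # -> (2, 'cell-A')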

6.
ATM switches in radar network communications
This paper introduces the application of ATM switching to radar network communications. Based on an analysis of the communication requirements of a radar network organized around a central processing station, an ATM switching structure resembling a client-server model is proposed. ATM is the foundation of B-ISDN and ATM switching is the core of ATM technology; it is useful not only for civil communication services but also for the communication involved when a radar network tracks dense multiple targets in an electronic-jamming environment. The paper analyzes the data exchange performed by ATM switches within the radar network and uses computer simulation to evaluate the utilization and output-queue length distribution of an output-buffered ATM switch under three specific communication environments.

7.
Under the FCFS (first-come-first-served) discipline, the throughput of an ATM (asynchronous transfer mode) switch is 0.59. The paper proposes three schemes for raising ATM switch throughput: Scheme A (input expansion), Scheme B (window selection) and Scheme C (cell discard-and-fill). For Scheme C, the authors assume that all cells belong to a single bursty, correlated stream and are assigned to the same output port, and that each traffic source follows an IBP (interrupted Bernoulli process) model. The results for Scheme C show that the correlation of destinations does not affect throughput; when all input traffic is balanced, ...
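For reference, the 0.59 figure quoted above is the classical head-of-line (HOL) blocking limit of a pure input-queued switch under uniform traffic, usually stated as 2 − √2 ≈ 0.586. The short derivation below is standard background rather than material from the cited paper: in a saturated N × N switch with N large, the HOL cells destined to one output behave like an M/D/1 queue with load ρ, and since each input holds exactly one HOL cell, the mean number of HOL cells per output must equal 1.

    % Saturation throughput of an input-queued switch with FIFO (HOL) queues:
    \[
      \rho + \frac{\rho^{2}}{2(1-\rho)} = 1
      \;\Longrightarrow\;
      \rho^{2} - 4\rho + 2 = 0
      \;\Longrightarrow\;
      \rho = 2 - \sqrt{2} \approx 0.586 .
    \]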

8.
Performance of input/output-queued ATM switches under bursty traffic
This paper gives a detailed analysis of the cell loss and maximum throughput of an internally non-blocking, input/output-queued, backpressure ATM switch under bursty traffic. Cell arrivals at each input port follow an ON-OFF burst process: in the ON state a cell is transmitted with probability p, and the ON and OFF durations are Pareto-distributed random variables. Cells belonging to the same burst go to the same output port, while cells of different bursts are routed to the output ports with equal probability; the input/output buffers are finite and the switch speedup factor S is arbitrary. The paper also compares switch performance when the burst length is periodic or geometrically distributed. The conclusions provide useful guidance for the practical design of an input/output-queued backpressure ATM switch.
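A hedged sketch of the traffic model described above: an ON-OFF source whose ON and OFF durations are Pareto-distributed, emitting a cell with probability p in each ON slot, with all cells of one burst addressed to a single output port. The numeric parameters and function names are assumptions for illustration only.

    import random

    def pareto_length(alpha, x_min=1.0):
        # Integer Pareto sample via inverse-CDF sampling; heavy-tailed for 1 < alpha < 2
        u = 1.0 - random.random()                          # u in (0, 1]
        return max(1, int(x_min / (u ** (1.0 / alpha))))

    def on_off_pareto_source(num_slots, num_outputs, p=0.9, alpha=1.5):
        slots, t = [], 0
        while t < num_slots:
            dest = random.randrange(num_outputs)           # one destination per burst
            for _ in range(pareto_length(alpha)):          # ON period
                if t >= num_slots:
                    break
                slots.append(dest if random.random() < p else None)
                t += 1
            for _ in range(pareto_length(alpha)):          # OFF period: no cells
                if t >= num_slots:
                    break
                slots.append(None)
                t += 1
        return slots                                       # per-slot destination, or None

    print(on_off_pareto_source(num_slots=20, num_outputs=8))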

9.
王斌  丁炜 《现代传输》2003,5(3):45-47
Built on virtual output queueing (VOQ), input-queued (IQ) switches can provide high-speed switching at low cost, but under conventional scheduling algorithms an IQ switch cannot guarantee QoS. Building on the work of Birkhoff and von Neumann, this paper applies stochastic process theory and network calculus to propose a bandwidth-reserving scheduling algorithm, and analyzes the corresponding delay upper bound and the memory required by the VOQs.

10.
Using the matrix-geometric method, this paper analyzes the cell loss, cell delay and throughput of an internally non-blocking, input/output-queued, backpressure ATM switch under uniform Bernoulli arrivals. The conclusions provide useful guidance for the practical design of a backpressure input/output-queued packet switch.
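As general background on the matrix-geometric method named above (standard theory, not reproduced from the cited paper): for a quasi-birth-death chain whose one-step level transitions are governed by the blocks A_0 (up one level), A_1 (same level) and A_2 (down one level), the stationary level vectors have a geometric form, from which cell loss, delay and throughput follow.

    % Matrix-geometric (QBD) solution form:
    \[
      \pi_{k+1} = \pi_{k} R , \qquad k \ge 1 ,
      \qquad\text{where } R \text{ is the minimal nonnegative solution of }
      R = A_{0} + R\,A_{1} + R^{2} A_{2} .
    \]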

11.
We introduce a new approach to ATM switching. We propose an ATM switch architecture which uses only a single shift-register-type buffering element to store and queue cells, and within the same (physical) queue, switches the cells by organizing them in logical queues destined for different output lines. The buffer is also a sequencer which allows flexible ordering of the cells in each logical queue to achieve any appropriate scheduling algorithm. This switch is proposed for use as the building block of large-scale multistage ATM switches because of its low hardware complexity and its flexibility in providing (per-VC) scheduling among the cells. The switch can also be used as a scheduler/controller for RAM-based switches. The single-queue switch implements output queueing and performs full buffer sharing. The hardware complexity is low. The number of input and output lines can vary independently without affecting the switch core. The size of the buffering space can be increased simply by cascading the buffering elements.

12.
1 Introduction  The Asynchronous Transfer Mode (ATM) is considered a promising technique to transfer and switch various kinds of media, such as tele...

13.
Output-queued switch emulation by fabrics with limited memory
The output-queued (OQ) switch is often considered an ideal packet switching architecture for providing quality-of-service guarantees. Unfortunately, the high-speed memory requirements of the OQ switch prevent its use for large-scale devices. A previous result indicates that a crossbar switch fabric combined with lower speed input and output memory and two times speedup can exactly emulate an OQ switch; however, the complexity of the proposed centralized scheduling algorithms prevents scalability. This paper examines switch fabrics with limited memory and their ability to exactly emulate an OQ switch. The switch architecture of interest contains input queueing, fabric queueing, flow-control between the limited fabric buffers and the inputs, and output queueing. We present sufficient conditions that enable this combined input/fabric/output-queued switch with two times speedup to emulate a broad class of scheduling algorithms operating an OQ switch. Novel scheduling algorithms are then presented for the scalable buffered crossbar fabric. It is demonstrated that the addition of a small amount of memory at the crosspoints allows for distributed scheduling and significantly reduces scheduling complexity when compared with the memoryless crossbar fabric. We argue that a buffered crossbar system performing OQ switch emulation is feasible for OQ switch schedulers such as first-in-first-out, strict priority and earliest deadline first, and provides an attractive alternative to both crossbar switch fabrics and to the OQ switch architecture.
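To make the benefit of crosspoint memory concrete, the sketch below models the distributed two-phase scheduling that a buffered crossbar allows: each input independently forwards a cell from one of its VOQs into a non-full crosspoint buffer, and each output independently drains one of its crosspoint buffers. The specific policies used here (longest-queue-first at inputs, oldest-cell-first at outputs) and all names are illustrative assumptions, not the algorithms of the cited paper.

    from collections import deque

    N = 4                                                        # 4x4 buffered crossbar
    voq  = [[deque() for _ in range(N)] for _ in range(N)]       # voq[i][j]: input i -> output j
    xbuf = [[deque() for _ in range(N)] for _ in range(N)]       # crosspoint buffers
    XCAP = 1                                                     # one cell per crosspoint

    def input_phase(now):
        # Each input picks one eligible VOQ (non-empty, crosspoint not full) on its own.
        for i in range(N):
            eligible = [j for j in range(N) if voq[i][j] and len(xbuf[i][j]) < XCAP]
            if eligible:
                j = max(eligible, key=lambda j: len(voq[i][j]))  # longest VOQ first
                xbuf[i][j].append((now, voq[i][j].popleft()))    # tag the cell with its slot

    def output_phase():
        # Each output independently serves the oldest cell among its crosspoint buffers.
        delivered = []
        for j in range(N):
            filled = [i for i in range(N) if xbuf[i][j]]
            if filled:
                i = min(filled, key=lambda i: xbuf[i][j][0][0])
                delivered.append((i, j, xbuf[i][j].popleft()[1]))
        return delivered

    voq[0][2].append("cell-A")
    voq[1][2].append("cell-B")
    input_phase(now=0)
    print(output_phase())                                        # one cell reaches output 2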

14.
Shared buffer switches consist of a memory pool completely shared among the output ports of a switch. Shared buffer switches achieve low packet-loss performance because buffer space is allocated in a flexible manner. However, this type of buffered switch suffers from high packet loss when the input traffic is imbalanced and bursty: heavily loaded output ports dominate the usage of the shared memory and lightly loaded ports cannot gain access to these buffers. To regulate the lengths of very active queues and avoid performance degradation, a threshold-based dynamic buffer management policy, the decay function threshold, is proposed in this paper. The decay function threshold is a per-queue scheme that uses a tailored threshold for each output port queue. Under this scheme, the threshold of an output port decays as the queue size of that port increases and/or the empty buffer space decreases. Results have shown that the decay function threshold policy is as good as the well-known dynamic thresholds scheme, and more robust when multicast traffic is used. The main advantage of this policy is that, besides best-effort traffic, it supports quality-of-service (QoS) traffic through an integrated buffer management and scheduling framework.
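As a hedged illustration of threshold-based shared-buffer management: the classic dynamic threshold rule (the baseline the paper is compared against) admits a cell to queue i only if q[i] < alpha * free_space; a decay-style variant additionally shrinks the allowance as q[i] itself grows. The exponential decay used below is purely an illustrative assumption, not the paper's decay function, and all parameter values are arbitrary.

    import math

    class SharedBuffer:
        def __init__(self, total_cells, num_ports, alpha=1.0, beta=64.0):
            self.total = total_cells              # size of the shared memory pool (cells)
            self.q = [0] * num_ports              # current length of each output-port queue
            self.alpha, self.beta = alpha, beta

        def free(self):
            return self.total - sum(self.q)       # unoccupied cells in the shared pool

        def admit(self, port, use_decay=True):
            threshold = self.alpha * self.free()                   # classic dynamic threshold
            if use_decay:
                threshold *= math.exp(-self.q[port] / self.beta)   # shrink as this queue grows
            if self.free() > 0 and self.q[port] < threshold:
                self.q[port] += 1
                return True
            return False                                           # cell dropped

        def depart(self, port):
            if self.q[port] > 0:
                self.q[port] -= 1

    buf = SharedBuffer(total_cells=256, num_ports=8)
    print(buf.admit(0), buf.admit(0))              # True True while the pool is lightly loaded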

15.
钱炜宏  李乐民 《电子学报》1998,26(11):46-50,54
This paper analyzes the cell loss and cell delay of an internally non-blocking, backpressure input/output-queued ATM switch under non-uniform load. A Geom/PH/1/K queueing model is used to analyze the input queueing system, and the arbitration system is analyzed with a two-dimensional Markov process. The conclusions are useful for the design of a backpressure input/output-queued ATM switch.

16.
To accommodate the multiple data-rate structures found in local and metropolitan area networks, this paper proposes a shared-memory switch fabric. Building on the conventional shared-memory fabric, it is extended to support variable port rates and the switching of variable-length packets. The proposed fabric is also self-synchronizing, i.e. no global synchronization is required among the input and output ports, and queue management for variable-length packets is also considered.

17.
The Tera ATM LAN project at Carnegie Mellon University addresses the interconnection of hundreds of workstations in the Electrical and Computer Engineering Department via an ATM-based network. The Tera network architecture consists of switched Ethernet clusters that are interconnected using an ATM network. This paper presents the Tera network architecture, including an Ethernet/ATM network interface, the Tera ATM switch, and its performance analysis. The Tera switch architecture for asynchronous transfer mode (ATM) local area networks (LANs) incorporates a scalable nonblocking switching element with a hybrid queueing discipline. The hybrid queueing strategy includes a global first-in first-out (FIFO) queue that is shared by all switch inputs and dedicated output queues with a small speedup. Owing to the hybrid queueing, switch performance is comparable to that of output-queued switches. The shared input queue design is scalable since it is based on a Banyan network and N FIFO memories. The Tera switch incorporates an optimal-throughput multicast stage that is also based on a Banyan network. Switch performance is evaluated using queueing analysis and simulation under various traffic patterns.

18.
We describe the development and analysis of an asynchronous transfer mode (ATM) switch architecture based on input-output buffers, a sort-Banyan network and a feedback acknowledgement (ACK) signal sent back to the input unit. This is an input-buffered and output-buffered switch, but with a feedback approach that uses an acknowledgement feedback filter to recycle cells that lose contention at the routing network. In contrast to another design1 which uses a merge network, a path allocation network and a concentration network at the output of the sort network to generate the acknowledgement signal, in this new proposal the filter network is simplified to only N filter nodes (2 × 2 switching elements) and multiplexers placed at the feedforward side of the sort network. This switch provides non-blocking operation, low cell loss and high throughput. It is designed with internal speed-up to enhance its throughput, to reduce head-of-line (HOL) blocking, and to reduce the end-to-end delay.

