Similar Literature
Found 20 similar documents (search time: 796 ms)
1.
FDL buffering and scheduling for fixed-length optical bursts (Total citations: 1; self: 1; others: 0)
张劲松  曹明翠  罗风光  罗志祥 《激光技术》2005,29(2):153-155,161
To improve the blocking performance of optical burst switching, the queuing and scheduling of fixed-length optical bursts in fiber delay line (FDL) buffers are analyzed, and a longest-queue-priority (LQP) scheduling scheme based on a centrally shared FDL buffer structure is proposed. The scheme lets the switch ports fully share the FDL buffer units and keeps scheduling time short; simulations show good blocking performance.
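The core of a longest-queue-priority policy is simply serving whichever buffer queue is longest. A minimal sketch, with illustrative queue names and contents not taken from the paper:

```python
# Hedged sketch of a longest-queue-priority (LQP) pick over shared FDL
# buffer queues. Queue names and lengths are illustrative only.
def lqp_pick(queues):
    """Return the key of the longest queue (ties broken by iteration order)."""
    return max(queues, key=lambda k: len(queues[k]))

fdl_queues = {"fdl0": [1, 2, 3], "fdl1": [4], "fdl2": [5, 6]}
print(lqp_pick(fdl_queues))  # fdl0 holds 3 queued bursts, the longest
```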

2.
An optical burst switching architecture based on same-wavelength fiber delay line sets and its performance analysis (Total citations: 2; self: 0; others: 2)
黄安鹏  谢麟振 《通信学报》2003,24(12):21-31
The optical burst switching mechanism itself avoids optical buffering, but in practice, resolving contention between bursts and providing priority service still rely on fiber delay lines. We therefore designed a core-node architecture for optical burst switching networks that employs fiber delay lines. To avoid the input-queue buffering offset caused by fiber delay line dispersion, the architecture uses same-wavelength fiber delay line sets. A space-division switching matrix is adopted, eliminating the need for wavelength conversion. To operate this fabric effectively, we propose an input-queuing and adaptive optical-buffer scheduling algorithm; the algorithm applies not only to optical burst switching but also to optical packet switching, where fiber delay lines are widely used. The scheduling algorithm provides priority service and avoids head-of-line blocking. A rigorous analytical model of the algorithm is established and simulations are carried out. The simulation results show that, compared with conventional delay-line contention-resolution schemes, this scheduling algorithm improves switching performance by one to two orders of magnitude, making it a strong approach for resolving contention in optical burst switching with fiber delay lines.

3.
A novel switching fabric with QoS guarantees (Total citations: 3; self: 1; others: 2)
伊鹏  汪斌强  郭云飞  李挥 《电子学报》2007,35(7):1257-1263
Using a buffered crossbar as the core switching element, this paper constructs a space-division-multiplexing-expanded combined input/crosspoint/output queued (SDM-CICOQ) switching fabric. It is proved theoretically that with an expansion factor of 2, the SDM-CICOQ fabric achieves 100% throughput and can exactly emulate an output-queued (OQ) fabric, thereby providing quality-of-service (QoS) guarantees. A hierarchical priority scheduling (HPS) scheme is also given as an engineering reference for the fabric's scheduling mechanism; simulation results show that the SDM-CICOQ fabric with HPS achieves good performance.

4.
戴乐 《光子技术》2005,(1):42-46
In an optical burst switching network, the maximum delay time B of the FDL buffer at an optical switching node strongly affects system performance: too small a B raises the burst loss rate, while too large a B increases the queuing delay of traffic. This paper presents a method for designing the maximum delay time of the FDL buffer at an optical switching node, and numerical results show that optical buffering lowers the loss rate.
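The loss-versus-delay trade-off in B can be seen in a toy Monte-Carlo model. This is a sketch under simplifying assumptions (Poisson arrivals, exponential burst lengths, a single output channel), not the paper's exact analysis:

```python
import random

def burst_loss(rate, mean_len, B, n=100_000, seed=1):
    """Estimate burst loss probability on one output channel where an FDL
    stage can delay a burst by at most B. Poisson arrivals and exponential
    burst lengths are modeling assumptions, not taken from the paper."""
    rng = random.Random(seed)
    t, busy_until, lost = 0.0, 0.0, 0
    for _ in range(n):
        t += rng.expovariate(rate)        # next burst arrival time
        delay = max(0.0, busy_until - t)  # wait needed until channel frees
        if delay > B:
            lost += 1                     # FDL too short: burst is dropped
        else:
            busy_until = t + delay + rng.expovariate(1.0 / mean_len)
    return lost / n

# Loss should fall as the maximum FDL delay B grows (at the cost of delay).
print(burst_loss(0.8, 1.0, B=0.0) > burst_loss(0.8, 1.0, B=2.0))
```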

5.
Research on scheduling algorithms for high-speed cell switching (Total citations: 11; self: 2; others: 9)
Input-buffered switching fabrics run their buffers and fabric at the port rate and are easy to implement, but they suffer from head-of-line (HOL) blocking, which limits throughput to about 58%. Virtual output queuing (VOQ) together with a suitable cell scheduling algorithm eliminates HOL blocking and raises throughput to 100%. This paper studies, compares, and evaluates several scheduling algorithms (PIM, iSLIP, and LPF) through simulation.
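The request-grant-accept round at the heart of iSLIP-style VOQ scheduling can be sketched briefly. This is a simplified single-iteration illustration, not a faithful reproduction of any paper's implementation (full iSLIP updates pointers only for matches accepted in the first iteration):

```python
# Minimal one-iteration iSLIP-style sketch for a VOQ input-buffered switch.
# voq[i][j] = number of cells queued at input i for output j.
def islip_iteration(voq, grant_ptr, accept_ptr):
    n = len(voq)
    # Request: each input requests every output with a nonempty VOQ.
    requests = [[voq[i][j] > 0 for j in range(n)] for i in range(n)]
    # Grant: each output grants the requesting input nearest its pointer.
    grants = {}
    for j in range(n):
        for k in range(n):
            i = (grant_ptr[j] + k) % n
            if requests[i][j]:
                grants.setdefault(i, []).append(j)
                break
    # Accept: each input accepts the granting output nearest its pointer,
    # then both pointers advance one past the matched partner.
    match = {}
    for i, outs in grants.items():
        j = min(outs, key=lambda o: (o - accept_ptr[i]) % n)
        match[i] = j
        grant_ptr[j] = (i + 1) % n
        accept_ptr[i] = (j + 1) % n
    return match

voq = [[1, 0], [1, 1]]
print(islip_iteration(voq, [0, 0], [0, 0]))  # {0: 0, 1: 1}
```

With both inputs requesting output 0, the pointer-based grant picks input 0, and input 1 falls back to output 1, yielding a full match in one iteration.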

6.
To address the growth in memory size and area that combined input and crosspoint buffered queuing (CICQ) fabrics incur with many ports and multiple priorities, a CICQ fabric based on shared buffering is proposed. It uses virtual output queuing (VOQ), supports multi-priority flow control, and manages queues by allocating virtual output queues dynamically, thereby reducing complexity and buffer capacity requirements. The modules were designed in VHDL and simulated in Modelsim; the results show that the structure correctly buffers the switched data.

7.
In optical burst switching (OBS) networks, burst contention is a major factor limiting network performance, so resolving it effectively is a central problem. After analyzing the strengths and weaknesses of solutions in the current literature, a system implementation of a burst contention resolution scheme is proposed. It combines fiber delay line (FDL) time-domain buffering with tunable wavelength converter (TWC) wavelength conversion and space-domain techniques, forming a two-stage switching structure based on forward feed-through buffering and feedback loop buffering. Performance analysis and computer simulation of the scheme's contention resolution show that, at moderate traffic intensity (below 0.6), it effectively improves burst loss rate and burst delay while reducing the number of optical devices the system requires.

8.
Buffer structure design for optical burst switching core nodes (Total citations: 2; self: 2; others: 0)
A sound and efficient design for the buffer structure at optical burst switching (OBS) core nodes is proposed. On the basis of burst-flow and burst-contention models for the core node, basic conditions that a buffer structure design should satisfy are derived, and an integrated scheme combining the buffer structure with the switching fabric is then proposed, together with a multi-perspective classification of buffer structures obtained by generalization and extension. Analysis and simulation show that the integrated scheme satisfies the proposed design conditions; when cost permits, fiber-shared buffering should be used, and for the same sharing range the output-buffered type outperforms the input-buffered type, especially in the fiber-shared case.

9.
Optical burst switching (OBS) networks are the development trend of next-generation IP over WDM all-optical networks. This paper studies and analyzes the performance of an OBS core router equipped with optical buffering, deriving performance measures such as burst data packet blocking probability, output port utilization, and queuing delay. The results complement theoretical research on OBS networks.

10.
Implementing multicast in the switching fabric of a router or switch is an important way to speed up multicast applications. Traditional crossbar multicast scheduling schemes suffer from one of two drawbacks: low performance, or implementation complexity too high for high-speed switching. This paper proposes a new crossbar-based two-stage multicast switching fabric (TSMS): the first stage converts multicast to unicast, and the second stage is a combined input and output queued (CIOQ) switch. A largest-fanout-queue (FCN) first, uniform middle-buffer allocation scheduling algorithm (LFCNF-UMBA) is designed for this fabric. Theoretical analysis and simulation both show that in this fabric, 100% throughput cannot be achieved with a speedup below 2-2/(N+1); with the LFCNF-UMBA scheduling algorithm, a speedup of 2 guarantees 100% throughput for any admissible multicast traffic.

11.
The Data Vortex switch architecture has been proposed as a scalable low-latency interconnection fabric for optical packet switches. This self-routed hierarchical architecture employs synchronous timing and distributed traffic-control signaling to eliminate optical buffering and to reduce the required routing logic, greatly facilitating a photonic implementation. In previous work, we have shown the efficient scalability of the architecture under uniform and random traffic conditions while maintaining high throughput and low-latency performance. This paper reports on the performance of the Data Vortex architecture under nonuniform and bursty traffic conditions. The results show that the switch architecture performs well under modest nonuniform traffic, but an excessive degree of nonuniformity will severely limit the scalability. As long as a modest degree of asymmetry between the number of input and output ports is provided, the Data Vortex switch is shown to handle very bursty traffic with little performance degradation.

12.
This paper presents the performance evaluation of a new cell‐based multicast switch for broadband communications. Using distributed control and a modular design, the balanced gamma (BG) switch features high performance for unicast, multicast and combined traffic under both random and bursty conditions. Although it has buffers on input and output ports, the multicast BG switch follows predominantly an output‐buffered architecture. The performance is evaluated under uniform and non‐uniform traffic conditions in terms of cell loss ratio and cell delay. An analytical model is presented to analyse the performance of the multicast BG switch under multicast random traffic and used to verify simulation results. The delay performance under multicast bursty traffic is compared with those from an ideal pure output‐buffered multicast switch to demonstrate how close its performance is to that of the ideal but impractical switch. Performance comparisons with other published switches are also studied through simulation for non‐uniform and bursty traffic. It is shown that the multicast BG switch achieves a performance close to that of the ideal switch while keeping hardware complexity reasonable. Copyright © 2006 John Wiley & Sons, Ltd.

13.
Considers an N×N nonblocking, space division, input queuing ATM cell switch, and a class of Markovian models for cell arrivals on each of its inputs. The traffic at each input comprises geometrically distributed bursts of cells, each burst destined for a particular output. The inputs differ in the burstiness of the offered traffic, with burstiness being characterized in terms of the average burst length. We analyze burst delays where some inputs receive traffic with low burstiness and others receive traffic with higher burstiness. Three policies for head-of-the-line contention resolution are studied: two static priority policies [shorter-expected-burst-length-first (SEBF), longer-expected-burst-length-first (LEBF)] and random selection (RS). Direct queuing analysis is used to obtain approximations for asymptotic high and low priority mean burst delays with the priority policies. Simulation is used for obtaining mean burst delays for finite N and for the random selection policy. As the traffic burstiness increases, the asymptotic analysis can serve as a good approximation only for large switch sizes. Qualitative performance comparisons based on the asymptotic analysis are, however, found to continue to hold for finite switch sizes. It is found that the SEBF policy yields the best delay performance over a wide range of loads, while RS lies in between. SEBF drastically reduces the delay of the less bursty traffic while only slightly increasing the delay of the more bursty traffic. LEBF causes severe degradation in the delay of less bursty traffic, while only marginally improving the delays of the more bursty traffic. RS can be an adequate compromise if there is no prior knowledge of input traffic burstiness.
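The three head-of-line contention policies compared in this entry reduce to a choice rule over the contending inputs. A minimal sketch, where the `(input_id, expected_burst_len)` tagging is an illustrative assumption:

```python
import random

def resolve(contenders, policy, rng=random.Random(0)):
    """Pick a winner among HOL contenders for one output.
    contenders: list of (input_id, expected_burst_length) pairs."""
    if policy == "SEBF":   # shorter-expected-burst-length-first
        return min(contenders, key=lambda c: c[1])[0]
    if policy == "LEBF":   # longer-expected-burst-length-first
        return max(contenders, key=lambda c: c[1])[0]
    return rng.choice(contenders)[0]  # RS: random selection

hol = [(0, 2.0), (1, 16.0), (2, 4.0)]
print(resolve(hol, "SEBF"))  # input 0 wins: smallest expected burst length
```

Under SEBF the less bursty input wins the contention, which matches the entry's finding that SEBF sharply cuts its delay at little cost to the burstier traffic.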

14.
This paper presents and evaluates a quasi-optimal scheduling algorithm for input buffered cell-based switches, named reservation with preemption and acknowledgment (RPA). RPA is based on reservation rounds where the switch input ports indicate their most urgent data transfer needs, possibly overwriting less urgent requests by other input ports, and an acknowledgment round to allow input ports to determine what data they can actually transfer toward the desired switch output port. RPA must be executed during every cell time to determine which cells can be transferred during the following cell time. RPA is shown to be as simple as the simplest proposals of input queuing scheduling, efficient in the sense that no admissible traffic pattern was found under which RPA shows throughput limitations, and flexible, allowing the support of packet-mode operations and different traffic classes with either strict priority discipline or bandwidth guarantee requirements. The effectiveness of RPA is assessed with detailed simulations in uniform as well as unbalanced traffic conditions, and its performance is compared with output queuing switches and the optimal maximum weighted matching (MWM) algorithm for input-buffered switches. A bound on the performance difference between the heuristic weight matching adopted in RPA and MWM is analytically computed.

15.
A packet-switched system architecture based on the combination of a single-chip output-buffered switch element and input queues that sort arriving packets on a per-output-port basis is proposed. Scheduling is performed in a distributed two-stage approach: independent arbiters at each of the inputs resolve input contention, whereas the output-buffered switch element resolves output contention. As a result of this distribution of functionality, the complexity of the input arbiters is only linearly proportional to the number of output ports N, thus offering better scalability than purely input-buffered approaches that require complex centralized schedulers. Since the input queues are used as the main buffering mechanism, only a relatively small amount of memory (on the order of N^2 packet locations) is required in the shared-memory switch, allowing high-throughput implementations. We present simulation results to demonstrate the high performance and robustness under bursty traffic achieved with the proposed system architecture. A practical implementation in the form of the PRIZMA family of switch chips is outlined, with emphasis on its versatility in scaling in terms of both port speed and number of ports, and its support for quality-of-service mechanisms.

16.
The authors present a high-performance self-routing packet switch architecture, called Sunshine, that can support a wide range of services having diverse performance objectives and traffic characteristics. Sunshine is based on Batcher-banyan networks and achieves high performance by utilizing both internal and output queuing techniques within a single architecture. This queuing strategy results in an extremely robust and efficient architecture suitable for a wide range of services. An enhanced architecture allowing the bandwidth from an arbitrary set of transmission links to be aggregated into trunk groups to create high bandwidth pipes is also presented. Trunk groups appear as a single logical port on the switch and can be used to increase the efficiency of the switch in an extremely bursty environment or to increase the access bandwidth for selected high-bandwidth terminations. Simulation results are presented.

17.
An asynchronous transfer mode (ATM) switch architecture that uses the broadcasting transmission medium for transmission of cells from input ports to output ports is introduced. Cell transmission and its control are separated completely, and cell transmission control, i.e. header operation, is executed before cell transmission (control ahead). With this operation, cell transmission and its control can be executed in a pipeline style, allowing high-speed cell exchange and making transmission control easier. One of the essential problems for ATM switches which use the broadcasting transmission medium is high-speed operation of the transmission medium. The switch fabric performance is analyzed according to its switching speed. Numerical results show that the proposed ATM switch exhibits good cell loss performance even when its switching speed is restricted, provided that switch utilization is below 1. Extensions to the switch that lead to robustness against bursty traffic are shown.

18.
This paper proposes a new high-performance switching element with a new shared-memory queuing policy, called blocked-cell shared-memory (BCSM) queuing. As the name implies, instead of buffering all cells that pass through them, BCSM switching elements buffer only the blocked cells at their input ports. Theoretical analysis under the uniform traffic model proves that BCSM switching elements outperform shared-memory switching elements.

19.
The periodic cell stream is a very important member among the input traffic sources in ATM networks. In this paper, a finite-buffered ATM multiplexer with traffic sources composed of a periodic cell stream, multiple i.i.d. Bernoulli cell streams and bursty two-state Markov Modulated Bernoulli Process (MMBP) cell streams is exactly analyzed. The probability mass function of queuing delay, and the autocorrelation and power spectrum of delay jitter, are derived for the periodic cell stream. The analysis exposes the behavior of delay jitter for a periodic cell stream passing through an ATM multiplexer in a bursty traffic environment. The simulation results indicate that the analytical results are accurate.
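A two-state MMBP source of the kind used in this entry is easy to generate slot by slot. The parameter values below are illustrative, not taken from the paper:

```python
import random

def mmbp_cells(p01, p10, a0, a1, n, seed=0):
    """Generate n slots from a two-state Markov Modulated Bernoulli Process:
    a cell arrives with probability a0 in state 0 and a1 in state 1, and the
    state flips with probability p01 (0 -> 1) or p10 (1 -> 0) each slot."""
    rng = random.Random(seed)
    state, out = 0, []
    for _ in range(n):
        out.append(1 if rng.random() < (a0, a1)[state] else 0)
        flip = p01 if state == 0 else p10
        if rng.random() < flip:
            state = 1 - state
    return out

cells = mmbp_cells(0.05, 0.2, 0.1, 0.9, 10_000)
# State 1 is the bursty state; the long-run cell rate lies between a0 and a1.
print(0.1 < sum(cells) / len(cells) < 0.9)  # True
```

Feeding such a stream into a multiplexer queue alongside a periodic stream is the setup whose delay-jitter spectrum the entry analyzes.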

20.
The paper studies input-queued packet switches loaded with both unicast and multicast traffic. The packet switch architecture is assumed to comprise a switching fabric with multicast (and broadcast) capabilities, operating in a synchronous slotted fashion. Fixed-size data units, called cells, are transferred from each switch input to any set of outputs in one time slot, according to the decisions of the switch scheduler, that identifies at each time slot a set of nonconflicting cells, i.e., cells neither coming from the same input, nor directed to the same output. First, multicast traffic admissibility conditions are discussed, and a simple counterexample is presented, showing intrinsic performance losses of input-queued with respect to output-queued switch architectures. Second, the optimal scheduling discipline to transfer multicast packets from inputs to outputs is defined. This discipline is rather complex, requires a queuing architecture that probably is not implementable, and does not guarantee in-sequence delivery of data. However, from the definition of the optimal multicast scheduling discipline, the formal characterization of the sustainable multicast traffic region naturally follows. Then, several theorems showing intrinsic performance losses of input-queued with respect to output-queued switch architectures are proved. In particular, we prove that, when using per multicast flow FIFO queueing architectures, the internal speedup that guarantees 100% throughput under admissible traffic grows with the number of switch ports.
