Similar Documents
20 similar documents found.
1.
A Priority-Based Deflection Routing Algorithm with Tunable Parameters for OBS Networks    Cited by 1 (0 self-citations, 1 other)
管爱红  王波云  张元  傅洪亮 《电子学报》2011,39(7):1668-1672
To guarantee the quality of service of different-priority traffic in OBS networks and to address the weaknesses of existing deflection algorithms in deflection control, a priority-based deflection routing algorithm with tunable parameters is proposed. The algorithm controls the deflection of contending burst packets through a tunable "deflection probability" parameter and searches for the best deflection route in terms of a tunable packet loss rate and deflection path length. When contention occurs, the lower-priority burst is segmented and the segmented burst is deflected onto an idle link; among the idle links, it selects...
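As a rough illustration of the idea in item 1, the sketch below models the contention decision: a tunable deflection probability decides whether the contending burst is deflected at all, the lower-priority burst is the one segmented, and candidate idle links are ranked by a combined loss-rate/path-length cost. All names (`Burst`, `Link`) and the cost weights are hypothetical, not taken from the paper.

```python
import random
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    idle: bool
    loss_rate: float      # estimated burst loss probability on this deflection path
    path_length: int      # hop count of the deflection path

@dataclass
class Burst:
    priority: int         # higher value = higher priority

def resolve_contention(a: Burst, b: Burst, links, deflect_prob=0.7, w_loss=0.8, w_len=0.2):
    """Deflect the lower-priority burst with probability `deflect_prob`,
    choosing the idle link that minimises a weighted loss-rate/length cost."""
    loser = a if a.priority <= b.priority else b
    if random.random() > deflect_prob:
        return ("drop", loser, None)                 # deflection not attempted this time
    idle = [l for l in links if l.idle]
    if not idle:
        return ("drop", loser, None)                 # nothing to deflect onto
    max_len = max(1, max(l.path_length for l in idle))   # normalise lengths so terms compare
    best = min(idle, key=lambda l: w_loss * l.loss_rate + w_len * l.path_length / max_len)
    return ("segment_and_deflect", loser, best)      # segment the loser, deflect the segment

if __name__ == "__main__":
    links = [Link("L1", True, 0.05, 3), Link("L2", True, 0.02, 5), Link("L3", False, 0.01, 2)]
    print(resolve_contention(Burst(priority=0), Burst(priority=1), links))
```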

2.
A Distributed Backoff Deflection Routing Algorithm with Traffic Load Balancing for OBS Networks    Cited by 1 (1 self-citation, 0 others)
Burst contention is one of the main causes of packet loss in optical burst switching (OBS) networks, and deflection routing, an effective contention-resolution method, has attracted much attention because it places low demands on the performance and number of optical buffers. However, existing deflection routing algorithms ignore the impact of deflected traffic on the traffic already carried by the deflection path, the success rate of the deflected traffic itself, and the insufficient offset time caused by deflection, which leaves the overall packet loss rate of the OBS network high. This paper therefore proposes a distributed backoff deflection routing algorithm with load balancing (DBDF-LB). The basic idea is as follows: starting from network-wide traffic balance, a path with the lowest loss rate and fewest hops is selected for the deflected traffic in a distributed manner according to network state information; a backoff mechanism then uses network resources to buffer the burst, compensating for the extra offset time introduced by deflection. Compared with the typical shortest-path deflection algorithm (SPDF), DBDF-LB reduces the packet loss rate by about 23%-50%, while the average number of hops experienced by successfully transmitted bursts increases by less than one hop.
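A minimal sketch of the path-selection step described for DBDF-LB: among candidate deflection paths known from the distributed state information, pick the one with the lowest estimated loss rate, breaking ties by hop count, and report the extra offset time the backoff step must then compensate. The data model and the per-hop offset figure are assumptions for illustration, not the paper's.

```python
from dataclasses import dataclass

@dataclass
class CandidatePath:
    hops: int
    est_loss: float   # loss estimate from periodically exchanged network state

def pick_deflection_path(paths, extra_offset_per_hop=10e-6):
    """Return (path, extra_offset_time) or (None, 0.0) if no candidate exists.
    The extra offset time is what the backoff/buffering step must compensate."""
    if not paths:
        return None, 0.0
    best = min(paths, key=lambda p: (p.est_loss, p.hops))   # min loss, then min hops
    return best, best.hops * extra_offset_per_hop

print(pick_deflection_path([CandidatePath(4, 0.03), CandidatePath(3, 0.03), CandidatePath(2, 0.08)]))
```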

3.
A priority-based deflection routing mechanism is proposed, which guarantees the QoS of high-priority bursts by segmenting and deflecting low-priority bursts. Simulation results show that the loss rate of high-priority bursts is lower than that of low-priority bursts, and that the burst loss rate decreases as the number of data channels increases. The mechanism not only effectively reduces the burst loss rate but also provides good QoS guarantees for high-priority bursts.
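To make the segmentation step concrete, the toy function below (an assumed model, not the paper's) cuts the overlapping tail off the lower-priority burst and deflects only that segment, so the high-priority burst keeps the contended channel.

```python
def segment_and_deflect(high, low, overlap):
    """high/low are (start, end) occupancy intervals on the contended channel;
    overlap is the length of the contended interval.
    Returns what stays on the channel and which segment is deflected."""
    keep_low = (low[0], low[1] - overlap)      # head of the low-priority burst stays
    deflected = (low[1] - overlap, low[1])     # overlapping tail goes to a deflection port
    return {"kept_high": high, "kept_low": keep_low, "deflected_segment": deflected}

# Example: a 6-unit low-priority burst whose last 2 units collide with the high-priority burst.
print(segment_and_deflect(high=(4, 10), low=(0, 6), overlap=2))
```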

4.
FC-WFQ: A Packet-Forwarding Scheduling Algorithm for iSCSI Virtual Switches    Cited by 1 (0 self-citations, 1 other)
Unlike a traditional IP switch, an iSCSI virtual switch must not only forward IP packets but also perform TCP, iSCSI and SCSI protocol processing. A shortest-packet-first scheduling algorithm can guarantee preferential transmission of iSCSI control and command PDUs (Protocol Data Units), but it places no limit on the forwarding bandwidth of such packets and lacks a quantitative analysis of iSCSI traffic characteristics. This paper builds a queueing-theoretic model of the process by which an iSCSI virtual switch forwards iSCSI PDUs, then proposes a packet scheduling algorithm FC-WFQ (Flow Control WFQ) for the iSCSI virtual switch, and finally tests the switching system in simulation scenarios built with the network simulator ns-2. As the command arrival rate and the read/write ratio of commands vary, FC-WFQ adjusts the forwarding bandwidth weight of each flow in real time. Experimental results show that the algorithm significantly reduces the average response time of read/write tasks and markedly improves the throughput of the iSCSI virtual switch.
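Item 4's idea, a weighted fair queueing scheduler whose per-flow weights are re-tuned as the command arrival rate and read/write mix change, can be sketched as below. Only the virtual-finish-time bookkeeping is standard WFQ; the flow names and the weight-update rule in `retune` are assumptions for illustration, not the FC-WFQ formulas.

```python
import heapq

class FCWFQ:
    """Simplified weighted fair queueing with adjustable per-flow weights."""
    def __init__(self, weights):
        self.weights = dict(weights)          # flow -> weight
        self.finish = {f: 0.0 for f in weights}
        self.vtime = 0.0
        self.heap = []                        # (finish_tag, seq, flow, size)
        self.seq = 0

    def enqueue(self, flow, size):
        start = max(self.vtime, self.finish[flow])
        self.finish[flow] = start + size / self.weights[flow]
        heapq.heappush(self.heap, (self.finish[flow], self.seq, flow, size))
        self.seq += 1

    def dequeue(self):
        tag, _, flow, size = heapq.heappop(self.heap)
        self.vtime = tag                      # advance virtual time (simplified)
        return flow, size

    def retune(self, read_ratio):
        """Hypothetical flow-control step: shift weight between read and write data
        flows while keeping control PDUs at a fixed share."""
        self.weights["read_data"] = 0.6 * read_ratio + 0.2
        self.weights["write_data"] = 0.6 * (1 - read_ratio) + 0.2

q = FCWFQ({"control": 0.2, "read_data": 0.4, "write_data": 0.4})
q.enqueue("control", 48); q.enqueue("read_data", 8192); q.enqueue("write_data", 8192)
q.retune(read_ratio=0.8)                      # read-heavy workload observed: favour reads
print([q.dequeue() for _ in range(3)])
```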

5.
Burst contention is a key problem that OBS (optical burst switching) networks must solve, and deflection routing has been widely studied as an effective contention-resolution scheme. This paper proposes an early deflection routing algorithm based on congestion avoidance, which uses periodically fed-back network congestion information to deflect a portion of bursts in advance with a certain probability. Compared with the traditional shortest-path deflection routing algorithm, the proposed algorithm avoids congestion and balances the network load. Simulation results show that it improves burst loss rate, throughput and average link utilization.
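A minimal sketch of the early-deflection rule in item 5, assuming congestion on the primary path is reported periodically as a utilisation value in [0, 1]; the mapping from congestion level to deflection probability is an illustrative assumption, not the paper's.

```python
import random

def early_deflect(primary_congestion, threshold=0.6, max_prob=0.5):
    """Before contention ever happens, deflect a fraction of bursts away from a
    congested primary path. The probability grows linearly once the periodically
    fed-back congestion level exceeds the threshold."""
    if primary_congestion <= threshold:
        return False
    prob = max_prob * (primary_congestion - threshold) / (1.0 - threshold)
    return random.random() < prob

random.seed(1)
sent_on_alternate = sum(early_deflect(0.85) for _ in range(10_000))
print(f"{sent_on_alternate / 10_000:.2%} of bursts deflected early at 85% congestion")
```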

6.
Optical burst switching is a new switching technology well suited to current technical conditions: it is more flexible and offers higher bandwidth utilization than circuit switching, and it is easier to implement than optical packet switching, so it is expected to become the core technology of next-generation optical IP backbone networks. In OBS networks, deflection routing is an important means of resolving burst contention and improving network performance. This paper first analyzes sender-controlled deflection routing and, on that basis, proposes an improved deflection routing algorithm based on threshold detection. The algorithm uses a hop-count threshold to curb ineffective deflections and conditionally drops deflected bursts to protect the loss rate of normal bursts, thereby reducing the impact of deflection routing on the network load, improving the overall packet loss performance, and improving network performance.
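The hop-count control described in item 6 can be sketched as a simple check at each deflection decision: a burst that has already been deflected the threshold number of times is dropped instead of being deflected again, and deflection is also refused when the deflection route is already heavily loaded with normal traffic. The field names and thresholds are assumptions.

```python
def handle_contention(burst, hop_threshold=2, normal_load=0.0, load_threshold=0.8):
    """burst: dict with 'deflections' = how many times it has already been deflected.
    Returns the action taken for the contending (losing) burst."""
    if burst["deflections"] >= hop_threshold:
        return "drop"                      # ineffective deflection: give up
    if normal_load > load_threshold:
        return "drop"                      # protect normal traffic on the deflection route
    burst["deflections"] += 1
    return "deflect"

print(handle_contention({"deflections": 0}, normal_load=0.5))   # -> deflect
print(handle_contention({"deflections": 2}, normal_load=0.5))   # -> drop
```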

7.
白雪峰 《通信世界》2002,(16):35-35,41
Today's communication networks are built from various overlapping single-service networks. Each telecommunication service is provided through a designated user interface, and a given interface can only deliver its designated service. Services are implemented by the service switches of the corresponding service network, whose switching and call-handling mechanisms are tightly coupled to the services they provide, and the interfaces between service switches are in turn defined by service-specific transport and signaling control functions. The handling of services, switching and transport routing therefore depends on each switch's communication, signaling, management, billing and service-provisioning functions, and these functions are processed and managed separately within each service network.

8.
In optical burst switching (OBS) networks, deflection routing is an important means of resolving burst contention and improving network performance, but analysis shows that it may raise the loss rate of normal (non-deflected) bursts on the deflection route. This paper proposes a conditional deflection routing algorithm based on contention control: according to a defined deflection-condition detection function, contending bursts are conditionally deflected or dropped so as to guarantee the QoS of normal bursts on the deflection route. Simulations show that the algorithm effectively limits the impact of deflected bursts on the normal traffic carried by the deflection route.
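The abstract of item 8 does not give the deflection-condition detection function; the sketch below assumes one plausible form, comparing a crude estimate of the extra loss the deflected load would impose on the deflection route against a bound, and deflecting only when the bound is respected.

```python
def deflection_condition(normal_load, deflected_load, added_load, loss_bound=0.02):
    """Hypothetical detection function: estimate how much loss the extra deflected
    load would cause on the deflection route (any load beyond unit capacity is
    treated as lost) and allow deflection only if it stays under loss_bound."""
    projected = normal_load + deflected_load + added_load
    estimated_extra_loss = max(0.0, projected - 1.0)
    return estimated_extra_loss <= loss_bound

def resolve(burst_load, normal_load, deflected_load):
    return "deflect" if deflection_condition(normal_load, deflected_load, burst_load) else "drop"

print(resolve(0.10, normal_load=0.70, deflected_load=0.15))  # -> deflect (no projected overload)
print(resolve(0.30, normal_load=0.70, deflected_load=0.15))  # -> drop (0.15 projected loss > bound)
```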

9.
陈赓  夏玮玮  沈连丰 《通信学报》2014,35(12):78-88
For heterogeneous wireless network convergence environments, an adaptive bandwidth allocation algorithm based on a multi-threshold reservation mechanism is proposed to provide QoS guarantees for multiple services. The algorithm uses multi-homing transmission: bandwidth allocation thresholds are preset for each service in each network and, based on the bandwidth allocation matrices of the services and users in each network, the adaptive bandwidth allocation problem is formulated as a dynamic optimization problem driven by the required transmission-rate level of service k and by changes in network state, and is solved iteratively. While obtaining the optimized bandwidth allocation matrices of services and users in each network, the algorithm maximizes real-time network throughput under the constraints of the bandwidth reservation thresholds and the network capacity, so as to improve the bandwidth utilization of the whole heterogeneous network. Numerical simulation results show that the proposed algorithm supports transmission-rate levels that satisfy QoS requirements, reduces the blocking probability of new users accessing the heterogeneous network, improves the average user admission rate, and increases network throughput by up to 40%.

10.
This paper combines channel grouping, the PPJET (Preemptive Prioritized Just Enough Time) protocol and fiber delay lines to propose a new QoS scheduling scheme, the D-PPJET (Developed PPJET) protocol. Under this protocol, high-priority packets can be dynamically scheduled onto any available channel, while low-priority packets can only be scheduled on a designated subset of channels, and an FDL (fiber delay line) buffering mechanism gives unsuccessfully scheduled bursts a second scheduling opportunity. This greatly increases the scheduling success probability of high-priority traffic, providing it with guarantees, while also lowering the loss rate of low-priority bursts. Results show that the scheme substantially improves the overall burst loss rate, channel throughput and channel utilization.
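The scheduling rule of item 10, where high-priority bursts may use any data channel, low-priority bursts only a restricted group, and a burst that fails is retried once after an FDL delay, can be sketched like this. The channel counts, FDL delay and data layout are illustrative assumptions.

```python
def schedule(burst_priority, arrival, duration, channel_free_at, low_prio_channels, fdl_delay=5.0):
    """channel_free_at: list of times at which each data channel becomes free.
    Returns (channel_index, start_time) or None if the burst is lost."""
    allowed = range(len(channel_free_at)) if burst_priority == "high" else low_prio_channels
    for start in (arrival, arrival + fdl_delay):          # second attempt after the FDL
        for ch in allowed:
            if channel_free_at[ch] <= start:
                channel_free_at[ch] = start + duration    # reserve the channel
                return ch, start
    return None                                           # lost after both attempts

free_at = [0.0, 0.0, 12.0, 12.0]            # 4 data channels; channels 2-3 busy for a while
print(schedule("low", 10.0, 3.0, free_at, low_prio_channels=[2, 3]))   # succeeds via FDL retry
print(schedule("high", 10.0, 3.0, free_at, low_prio_channels=[2, 3]))  # may use any channel
```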

11.
徐宁  余少华 《中国通信》2013,10(2):134-142
The fast growth of the Internet has created the need for high-speed switches. Recently, the Crosspoint-Queued switch has attracted attention because of its scalability and high performance. However, the Crosspoint-Queued switch does not perform well under non-uniform traffic. To overcome this limitation, the Load-Balanced Crosspoint-Queued switch architecture has been proposed. In this architecture, a load-balance stage is placed ahead of the Crosspoint-Queued stage. The load-balance stage transforms the incoming non-uniform traffic into nearly uniform traffic at the input ports of the second stage. To avoid out-of-order cells, this stage employs flow-based queues in each crosspoint buffer. Analysis and simulation results reveal that under non-uniform traffic, this new switch architecture achieves a delay performance similar to that of the Output-Queued switch without the need for internal acceleration. In addition, its throughput is much better than that of the pure Crosspoint-Queued switch. Finally, it can achieve the same packet loss rate as the Crosspoint-Queued switch while using a buffer size that is only 65% of that used by the Crosspoint-Queued switch.
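A toy sketch of the load-balance stage described above (and in the Chinese version, item 12): cells destined for an output are spread round-robin over that output's column of crosspoint buffers, and each buffer keeps per-flow FIFO queues so a flow's cells are not mixed. The class layout and scheduler are assumptions for illustration, not the paper's implementation.

```python
from collections import defaultdict, deque

class LoadBalancedCQ:
    def __init__(self, n):
        self.n = n
        # xpoint[i][o]: per-flow FIFO queues of cells routed through middle input i to output o
        self.xpoint = [[defaultdict(deque) for _ in range(n)] for _ in range(n)]
        self.rr = [0] * n                  # round-robin pointer per output (load-balance stage)

    def arrive(self, flow_id, output, cell):
        i = self.rr[output]                # spread the output's traffic evenly over its column
        self.rr[output] = (i + 1) % self.n
        self.xpoint[i][output][flow_id].append(cell)

    def serve(self, output, turn):
        """Output scheduler: visit the column's crosspoint buffers in round-robin order."""
        for k in range(self.n):
            i = (turn + k) % self.n
            for q in self.xpoint[i][output].values():
                if q:
                    return q.popleft()
        return None

sw = LoadBalancedCQ(4)
for c in range(6):
    sw.arrive(flow_id="A", output=2, cell=f"A{c}")
print([sw.serve(output=2, turn=t) for t in range(6)])   # cells of flow A come out in order
```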

12.
徐宁  余少华  汪学舜 《电子学报》2012,40(12):2360-2366
To address the problems that the combined input-crosspoint queued (CICQ) switch fabric is limited by flow-control communication delay and needs an internal speedup of two to emulate an output-queued (OQ) switch, while a pure crosspoint-queued (CQ) fabric suffers from insufficient throughput under non-uniform traffic, this paper proposes a new load-balanced crosspoint-buffered switch fabric. A fixed-pattern, slot-by-slot round-robin matching performs load balancing: non-uniform traffic arriving at an input port is turned into nearly uniform traffic and spread evenly over the crosspoint buffers belonging to the same output port, so that output-queued scheduling can be emulated with small crosspoint buffers, simplifying scheduling and raising throughput. Theoretical analysis proves the stability of the new fabric and its ability to emulate an output-queued switch, and simulations show that the fabric achieves performance comparable to an output-queued switch without internal speedup while effectively solving the throughput deficiency of crosspoint-buffered queues under non-uniform traffic.

13.
A general model is presented to study the performance of a family of space-domain packet switches implementing both input and output queuing and varying degrees of speedup. Based on this model, the impact of the speedup factor on switch performance is analyzed. In particular, the maximum switch throughput and the average system delay for any given degree of speedup are obtained. The results demonstrate that the switch can achieve 99% throughput with a modest speedup factor of four. Packet blocking probability for systems with finite buffers can also be derived from this model, and the impact of buffer allocation on blocking probability is investigated. Given a fixed buffer budget, this analysis obtains an optimal placement of buffers among input and output ports to minimize the blocking probability. The model is also extended to cover a nonhomogeneous system, where traffic intensity at each input varies and the destination distribution is not uniform. Using this model, the effect of traffic imbalance on the maximum switch throughput is studied. It is seen that input imbalance has a more adverse effect on throughput than output imbalance.
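Item 13 studies how an internal speedup factor lifts the throughput of an input/output-queued switch. The toy discrete-time simulation below (uniform Bernoulli arrivals, FIFO input queues, up to `speedup` fabric transfers per input and per output each slot, output lines draining one packet per slot) is only meant to reproduce the qualitative trend, not the paper's analytical model; all parameter values are assumptions.

```python
import random
from collections import deque

def simulate(n=16, speedup=4, load=1.0, slots=20_000, seed=0):
    rng = random.Random(seed)
    input_q = [deque() for _ in range(n)]   # FIFO input queues (HOL blocking applies)
    output_q = [0] * n                      # packets waiting at each output line
    delivered = 0
    for _ in range(slots):
        for i in range(n):                                   # Bernoulli arrivals, uniform dests
            if rng.random() < load:
                input_q[i].append(rng.randrange(n))
        for _ in range(speedup):                             # fabric runs `speedup` phases/slot
            taken = set()
            for i in rng.sample(range(n), n):                # random order ~ random tie-break
                if input_q[i] and input_q[i][0] not in taken:
                    taken.add(input_q[i][0])
                    output_q[input_q[i].popleft()] += 1
        for o in range(n):                                   # each output line sends 1 packet/slot
            if output_q[o]:
                output_q[o] -= 1
                delivered += 1
    return delivered / (slots * n)

print(f"throughput with speedup 4: {simulate():.3f}")   # close to 1.0 under saturated load
```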

14.
The performance analysis of an input access scheme in a high-speed packet switch for broadband ISDN is presented. In this switch, each input port maintains a separate queue for each of the outputs, giving n² input queues in an (n×n) switch. Using synchronous operation, at most one packet per input and per output is transferred in any slot. We derive lower and upper bounds for the throughput which show close-to-optimal performance. The bounds are very tight and approach unity for switch sizes on the order of a hundred under any traffic load, which is a significant result by itself. The mean packet delay is then derived and its variance is bounded. A neural network implementation of this input access scheme is given. The energy function of the network, its optimized parameters and the connection matrix are determined. Simulation results of the neural network fall between the theoretical throughput bounds.

15.
In this paper we present a novel fast packet switch architecture based on Banyan interconnection networks, called parallel-tree Banyan switch fabric (PTBSF). It consists of parallel Banyans (multiple outlets) arranged in a tree topology. The packets enter at the topmost Banyan. Internal conflicts are eliminated by using a conflict-free 3 × 4 switching element which distributes conflicting cells over different Banyans. Thus, cell loss may occur only at the lowest Banyan. Increasing the number of Banyans leads to a noticeable decrease in cell loss rate. The switch can be engineered to provide arbitrarily high throughput and low cell loss rate without the use of input buffering or cell pre-processing. The performance of the switch is evaluated analytically under uniform traffic load and by simulation, under a variety of asynchronous transfer mode (ATM) traffic loads. Compared to other proposed architectures, the switch exhibited stable and excellent performance with respect to cell loss and switching delay for all studied conditions as required by ATM traffic sources. The advantages of PTBSF are modularity, regularity, self-routing, low processing overhead, high throughput and robustness, under a variety of ATM traffic conditions. © 1998 John Wiley & Sons, Ltd.

16.
This paper proposes a new three-input nodal structure within the data vortex packet-switched interconnection network. With additional optical switches, the modified architecture allows two input packets, in addition to a buffered packet, to be processed simultaneously within a routing node. A much higher degree of parallel processing is allowed in comparison to the previously proposed enhanced buffer node with two-input processing or the original network node with single-input processing. Unlike the previous contention-prevention mechanism, the new network operates by blocking a packet within the node if no exit path is available. This eliminates the traffic-control signaling and the strict timing alignment associated with the routing paths, which simplifies the overall network implementation. This study shows that both data throughput and latency performance are improved significantly in the new network. The study compares the three-input node with the two-input node as well as the original single-input data vortex node. Because of the additional switch count and nodal cost, networks that support the same I/O ports and have the same cost are compared for fairness. The limitation introduced by the blocking rate is also addressed. The study has shown that under reasonable traffic and network conditions, the blocking rate can be kept very low without introducing complex controls and management for dropped packets. As previous architectures require operation under the saturation point, the proposed architecture should also operate at a reasonable level of network redundancy to avoid excessive packet drops. This study provides guidance and criteria on the proposed three-input network design and operation for feasible applications. The proposed network provides an attractive alternative to the previous architectures for higher throughput and lower latency performance.

17.
Input–output queued switches have been widely considered as the most feasible solution for large capacity packet switches and IP routers. In this paper, we propose a ping‐pong arbitration scheme (PPA) for output contention resolution in input–output queued switches. The challenge is to develop a high speed and cost‐effective arbitration scheme in order to maximize the switch throughput and delay performance for supporting multimedia services with various quality‐of‐service (QoS) requirements. The basic idea is to divide the inputs into groups and apply arbitration recursively. Our recursive arbiter is hierarchically structured, consisting of multiple small‐size arbiters at each layer. The arbitration time of an n‐input switch is proportional to log₄⌈n/2⌉ when we group every two inputs or every two input groups at each layer. We present a 256×256 terabit crossbar multicast packet switch using the PPA. The design shows that our scheme can reduce the arbitration time of the 256×256 switch to 11 gate delays, demonstrating that arbitration is no longer the bottleneck limiting the switch capacity. The priority handling in arbitration is also addressed. Copyright © 2001 John Wiley & Sons, Ltd.
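A behavioural sketch of the hierarchical ping-pong idea in item 17: inputs are paired, pairs are paired again, and each tiny 2-way arbiter keeps a one-bit flag pointing at the side it served less recently. This software model (which, as a simplification, flips a flag every time a sub-arbiter is evaluated, granted or not) is an assumed illustration of the grouping principle, not the paper's gate-level PPA design.

```python
class PingPongArbiter:
    """Hierarchical 2-way arbiters with per-node ping-pong flags (illustrative model)."""
    def __init__(self, n):
        self.n = n
        self.flags = {}                       # (level, index) -> preferred side (0 or 1)

    def _arbitrate(self, reqs, level, index):
        if len(reqs) == 1:                    # leaf: a single input line
            return 0 if reqs[0] else None
        half = len(reqs) // 2
        left = self._arbitrate(reqs[:half], level + 1, 2 * index)
        right = self._arbitrate(reqs[half:], level + 1, 2 * index + 1)
        if left is None and right is None:
            return None
        pref = self.flags.get((level, index), 0)
        pick_right = (right is not None) and (left is None or pref == 1)
        self.flags[(level, index)] = 0 if pick_right else 1   # point at the side not served
        return half + right if pick_right else left

    def grant(self, requests):
        """requests: one boolean per input; returns the granted input index or None."""
        assert len(requests) == self.n
        return self._arbitrate(list(requests), 0, 0)

arb = PingPongArbiter(8)
reqs = [False, True, False, True, False, False, True, False]
print([arb.grant(reqs) for _ in range(4)])    # grants alternate between the requesting halves
```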

18.
Shared Memory (SM) switches are widely used for their high throughput, low delay and efficient use of memory. This paper compares the performance of two prominent switching schemes for SM packet switches: Cell-Based Switching (CBS) and Packet-Based Switching (PBS). Theoretical analysis is carried out to draw qualitative conclusions on the memory requirement, throughput and packet delay of the two schemes. Furthermore, simulations are carried out to obtain quantitative results of the performance comparison under various system loads, traffic patterns, and memory sizes. Simulation results show that PBS has the advantage of shorter delay, while CBS has a lower memory requirement and outperforms in throughput when the memory size is limited. The comparison can be used for trade-offs between performance and complexity in switch design.

19.
Shuffleout is a blocking multistage asynchronous transfer mode (ATM) switch using shortest-path routing with deflection, in which output queues are connected to all the stages. This paper describes a model for the performance evaluation of the shuffleout switch under arbitrary nonuniform traffic patterns. The analytical model that has been developed computes the load distribution on each interstage link by properly taking into account the switch inlet on which the packet has been received and the switch outlet the packet is addressing. Such a model allows the computation not only of the average load per stage but also of its distribution over the different links belonging to the interstage pattern for each switch input/output pair. Different classes of nonuniform traffic patterns have been identified, and for each of them the traffic performance of the switch is evaluated, thus emphasizing the evaluation of the network unfairness.

20.
We evaluate the performance of an N × N ATM discrete time multicast switch model with input queueing operating under two input access disciplines. First we present the analysis for the case of a purely random access discipline and subsequently we concentrate on a cyclic priority access based on a circulating token ring. In both cases, we focus on two HOL (head-of-line) packet service disciplines. Under the first (one-shot transmission discipline), all the copies generated by each HOL packet seek simultaneous transmission during the same time slot. Under the second service discipline (call-splitting), all HOL copies that can be transmitted in the same time slot are released while blocked copies compete for transmission in subsequent slots. In our analysis the performance measures introduced are the average packet delay in the input buffers as well as the maximum throughput of the switch. A significant part of the analysis is based on matrix geometric techniques. Finally, numerical results are presented and compared with computer simulations.
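The two HOL service disciplines compared in item 20 can be illustrated with a tiny slot-level function: under one-shot transmission, the HOL packet's copies are sent only if every requested output is free in the current slot, whereas under call-splitting, whatever copies can go are released and the rest wait for later slots. This is a schematic model for illustration, not the paper's matrix-geometric analysis.

```python
def serve_hol(pending_copies, free_outputs, discipline):
    """pending_copies: set of output ports still owed a copy of the HOL multicast packet.
    Returns (sent_now, still_pending)."""
    if discipline == "one-shot":
        if pending_copies <= free_outputs:          # all copies must go in the same slot
            return set(pending_copies), set()
        return set(), set(pending_copies)
    elif discipline == "call-splitting":            # release whatever can be transmitted now
        sent = pending_copies & free_outputs
        return sent, pending_copies - sent
    raise ValueError(discipline)

copies = {0, 2, 3}
print(serve_hol(copies, free_outputs={0, 1, 2}, discipline="one-shot"))       # (set(), {0, 2, 3})
print(serve_hol(copies, free_outputs={0, 1, 2}, discipline="call-splitting")) # ({0, 2}, {3})
```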
