Similar Literature
20 similar records found (search time: 109 ms)
1.
A Survey of Queue Scheduling Algorithms in Packet-Switched Networks and Future Prospects   Cited by: 34 (self-citations: 3, others: 31)
This paper discusses queue scheduling algorithms in packet-switched networks. Existing scheduling algorithms are classified and compared, and their performance metrics and technical characteristics are analyzed. Finally, in light of our own related work, future development trends are discussed and several open research topics are identified.
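Deficit round robin (DRR) is one representative of the queue scheduling algorithms such surveys classify. A minimal sketch of the idea — the flows, packet sizes, and quantum below are illustrative, not taken from the paper:

```python
from collections import deque

def drr_schedule(queues, quantum=500, rounds=4):
    """Deficit round robin: each round, every non-empty queue earns
    `quantum` bytes of credit and sends head packets while credit lasts."""
    deficits = [0] * len(queues)
    sent = []
    for _ in range(rounds):
        for i, q in enumerate(queues):
            if not q:
                deficits[i] = 0  # idle queues do not hoard credit
                continue
            deficits[i] += quantum
            while q and q[0] <= deficits[i]:
                pkt = q.popleft()
                deficits[i] -= pkt
                sent.append((i, pkt))
    return sent

# Three flows, packet sizes in bytes.
flows = [deque([200, 700]), deque([1000]), deque([300, 300])]
order = drr_schedule(flows)
```

Each flow receives roughly `quantum` bytes of service per round regardless of its packet-size mix, which is what gives DRR its fairness bound at O(1) work per packet.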

2.
A Genetic Simulated Annealing Algorithm for Packet Scheduling   Cited by: 3 (self-citations: 2, others: 1)
Packet scheduling has become one of the key technologies in high-speed IP routers. Based on the architecture commonly adopted in current high-speed routing and switching — the input-queued switch with virtual output queues (VOQ) — this paper proposes a genetic simulated annealing algorithm and applies it to the packet scheduling problem. Simulation results comparing the genetic simulated annealing algorithm with the conventional genetic algorithm show that the former has good robustness and convergence.
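The abstract does not give the algorithm's internals; a generic simulated-annealing sketch for the VOQ matching problem (the swap-based neighborhood move, geometric cooling schedule, and queue-length weights are illustrative assumptions, not the paper's hybrid genetic scheme):

```python
import math
import random

def sa_match(weights, t0=1.0, cooling=0.95, steps=200, seed=1):
    """Anneal a permutation (input i -> output perm[i]) to maximize the
    total matched VOQ weight; a neighbor swaps two output assignments."""
    rng = random.Random(seed)
    n = len(weights)
    perm = list(range(n))
    cost = sum(weights[i][perm[i]] for i in range(n))
    t = t0
    for _ in range(steps):
        a, b = rng.sample(range(n), 2)
        delta = (weights[a][perm[b]] + weights[b][perm[a]]
                 - weights[a][perm[a]] - weights[b][perm[b]])
        # Accept improvements always; worse moves with Boltzmann probability.
        if delta > 0 or rng.random() < math.exp(delta / t):
            perm[a], perm[b] = perm[b], perm[a]
            cost += delta
        t *= cooling
    return perm, cost

# queue_len[i][j]: cells waiting in VOQ from input i to output j.
queue_len = [[5, 1, 0], [0, 4, 2], [3, 0, 6]]
match, total = sa_match(queue_len)
```

The annealing acceptance rule is what distinguishes this from a pure greedy matcher: early in the run it can escape local maxima that a hill climber would get stuck in.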

3.
张怡  周诠 《现代电子技术》2007,30(2):145-148,151
For input-buffered packet switching systems, the scheduling algorithm is one of the key technologies, and its performance directly affects that of the switching element. This paper studies several typical maximal matching scheduling algorithms: PIM, iSLIP, FIRM, and an output-serial scheduling algorithm. A scheduling-algorithm model was built in OPNET and, taking an 8×8 crossbar fabric as an example, these maximal matching algorithms were simulated on that model. Based on the simulation results, the algorithms are analyzed and compared in terms of average scheduling delay and other performance metrics as well as implementation complexity; the strengths and weaknesses of the existing algorithms are identified, and directions for further improvement are proposed. The results provide guidance for the research and design of satellite ATM/IP switching systems.
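As a reference point for the maximal matching algorithms compared in this item, here is a single-iteration sketch of iSLIP's request-grant-accept handshake. Real iSLIP runs several iterations per cell time and updates pointers only for matches made in the first iteration; this simplified version updates them on every acceptance:

```python
def islip_iteration(requests, grant_ptr, accept_ptr):
    """One request-grant-accept iteration of iSLIP on an N x N crossbar.
    requests[i][j] is True if input i has a cell queued for output j."""
    n = len(requests)
    # Grant phase: each output grants the requesting input nearest
    # (in round-robin order) to its grant pointer.
    grants = {}
    for j in range(n):
        for k in range(n):
            i = (grant_ptr[j] + k) % n
            if requests[i][j]:
                grants.setdefault(i, []).append(j)
                break
    # Accept phase: each input accepts the granting output nearest to
    # its accept pointer; both pointers advance past an accepted match.
    matches = []
    for i, outs in grants.items():
        for k in range(n):
            j = (accept_ptr[i] + k) % n
            if j in outs:
                matches.append((i, j))
                accept_ptr[i] = (j + 1) % n
                grant_ptr[j] = (i + 1) % n
                break
    return matches

req = [[True, True, False], [True, False, False], [False, False, True]]
gp, ap = [0, 0, 0], [0, 0, 0]
m = islip_iteration(req, gp, ap)
```

Advancing a pointer only past an accepted match is what desynchronizes the round-robin pointers over time and drives iSLIP toward 100% throughput under uniform traffic.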

4.
刘伟  杜娟  杨帅 《现代电子技术》2010,33(14):105-108
The Clos network is widely used as the switching fabric in multi-port routers and switches; its advantage is that it is a fully symmetric structure. This paper compares fixed-length and variable-length packet switching under distributed scheduling algorithms for multistage Clos networks, presents an MSM-type three-stage Clos switching fabric based on variable-length packet switching together with a corresponding ACBS scheduling algorithm, and thereby eliminates the unfairness in packet load distribution. Analysis shows that the proposed scheduling algorithm outperforms traditional algorithms, and simulation experiments verify its effectiveness.

5.
胡庆 《电信交换》2003,(4):1-7,49
This article classifies and compares the main scheduling algorithms for the crossbar fabric widely adopted in high-speed routers, and discusses the basic problems of scheduling algorithm design. It focuses on the two current mainstream families, maximal matching algorithms and weighted matching algorithms, and concludes with a review of the latest progress in research on high-performance scheduling algorithms.

6.
A Bandwidth-Guaranteed Scheduling Algorithm for Multi-Plane Multi-Stage Packet Switching Fabrics   Cited by: 2 (self-citations: 0, others: 2)
The multi-plane multi-stage packet switching fabric (MPMS), with its excellent scalability, is becoming the switching core of next-generation switching and routing equipment, but scheduling algorithms for MPMS fabrics tend to be complex. This paper proposes a bandwidth-guaranteed scheduling algorithm for MPMS fabrics, BG-CRRD, which incorporates the reserved-bandwidth information of packet flows into the arbitration decision. Simulation experiments show that BG-CRRD achieves 100% throughput under uniform Bernoulli traffic, up to 92% throughput under worst-case non-uniform traffic, and allocates output link bandwidth according to the reserved bandwidth under overload.

7.
Combining network coding theory with switch scheduling algorithms focuses on providing multicast service in combined input-output queued (CIOQ) switches. The article proves that applying linear network coding to the packets of a flow allows the switch to carry traffic patterns that cannot be carried when coding is not permitted; that is, network coding gives a CIOQ switch a larger rate region for multicast service, and a graph-theoretic characterization of this is given. Using concepts such as the stable-set polytope of an enhanced conflict graph, the article shows that the problem of computing an offline schedule reduces to a graph coloring problem, and it also proposes an online multicast scheduling algorithm called maximum weight stable set.

8.
Optical Packet Switching Node Technology   Cited by: 1 (self-citations: 0, others: 1)
This article first introduces the classification of optical packet-switched networks and the basic structure of optical packet switching nodes, then discusses in detail the key problems in designing and implementing all-optical packet switching nodes: the design of the switching fabric, the implementation of optical buffering, and solutions to the packet congestion problem.

9.
This paper studies a hybrid optical-electronic buffer structure that uses fiber delay lines as the main buffer and electronic memory as an auxiliary buffer, and uses an improved FF-VF algorithm to schedule contending packets, with the aim of reducing the packet loss rate of variable-length optical packet switching. Both analysis and simulation show that the hybrid buffer and the improved FF-VF algorithm improve the packet-loss performance of variable-length optical packet switching under high load while reducing the number of fiber delay lines required.

10.
Next-Generation Optical Internet Technology Based on Optical Burst Switching   Cited by: 2 (self-citations: 1, others: 1)
This paper presents the architecture of next-generation optical Internet technology based on optical burst switching, then discusses its core technical problems: the optical burst data format, burst assembly, burst switching nodes and network structure, resource reservation protocols, and data channel scheduling algorithms. Finally, future research directions and the main research topics are briefly outlined.

11.
This paper proposes two almost all-optical packet switch architectures, called the "packing switch" and the "scheduling switch" architecture, which, when combined with appropriate wait-for-reservation or tell-and-go connection and flow control protocols, provide lossless communication for traffic that satisfies certain smoothness properties. Both switch architectures preserve the order of packets that use a given input-output pair, and are consistent with virtual circuit switching. The scheduling switch requires 2k log T + k² two-state elementary switches (or 2k log T + 2k log k elementary switches, if a different version is used), where k is the number of inputs and T is a parameter that measures the allowed burstiness of the traffic. The packing switch requires very little processing of the packet header, and uses k² log T + k log k two-state switches. We also examine the suitability of the proposed architectures for the design of circuit-switched networks. We find that the scheduling switch combines low hardware cost with little processing requirements at the nodes, and is an attractive architecture for both packet-switched and circuit-switched high-speed networks.

12.
The continuous growth in the demand for diversified quality-of-service (QoS) guarantees in broadband networks introduces new challenges in the design of packet switches that scale to large switching capacities. Packet scheduling is the most critical function involved in the provision of individual bandwidth and delay guarantees to the switched flows. Most of the scheduling techniques proposed so far assume the presence in the switch of a single contention point, residing in front of the outgoing links. Such an assumption is not consistent with the highly distributed nature of many popular architectures for scalable switches, which typically have multiple contention points, located in both ingress and egress port cards, as well as in the switching fabric. We define a distributed multilayered scheduler (DMS) to provide differentiated QoS guarantees to individual end-to-end flows in packet switches with multiple contention points. Our scheduling architecture is simple to implement, since it keeps per-flow scheduling confined within the port cards, and is suitable to support guaranteed and best-effort traffic in a wide range of QoS frameworks in both IP and ATM networks.

13.
Scheduling algorithms for optical packet fabrics   Cited by: 1 (self-citations: 0, others: 1)
Utilizing optical technologies to build packet fabrics for high-capacity switches and routers has several advantages in terms of scalability, power consumption, and cost. However, several technology-related problems have to be overcome to be able to use such an approach. The reconfiguration times of optical crossbars are longer than those of electronic fabrics, and end-to-end clock recovery in such systems adds to the reconfiguration overheads. Both these problems can limit the efficiency of optical packet fabrics. In addition, existing work on input-buffered switches mostly assumes fixed-size packets (referred to as envelopes in this paper). When fixed-size switching is used for Internet protocol networks where packets are of variable size, the incoming packets need to be fragmented to fit the fixed-size envelopes. This fragmentation can lead to a possibly large loss of bandwidth and even instability. This paper addresses all of the above issues by presenting packetization and scheduling techniques that allow optical packet fabrics to be used within switches and routers. The proposed scheme aggregates multiple packets in a single envelope and, when used in combination with proper scheduling algorithms, it can provide system stability as well as bandwidth and delay guarantees. As a result of the aggregation method, the reconfiguration frequency required from the optics is reduced, facilitating the use of optical technologies in implementing packet switch fabrics.
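A simplified sketch of the aggregation idea: pack several variable-size packets into one fixed-size envelope, splitting a packet across envelope boundaries when it does not fit. The envelope size and the greedy fragmentation policy are illustrative assumptions, not the paper's exact scheme:

```python
def pack_envelopes(packets, env_size=1500):
    """Greedily aggregate (packet_id, size) pairs into fixed-size
    envelopes; a packet that overflows the current envelope is
    fragmented and continues in the next one."""
    envelopes, current, free = [], [], env_size
    for pid, size in packets:
        while size > 0:
            chunk = min(size, free)
            current.append((pid, chunk))
            size -= chunk
            free -= chunk
            if free == 0:            # envelope full: ship it
                envelopes.append(current)
                current, free = [], env_size
    if current:                      # flush a partially filled envelope
        envelopes.append(current)
    return envelopes

envs = pack_envelopes([("a", 900), ("b", 400), ("c", 800)])
```

Compared with one-packet-per-envelope fragmentation, aggregation wastes far less envelope capacity, so the optical crossbar needs to reconfigure less often per byte carried.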

14.
We develop a method of high-speed buffer management for output-buffered photonic packet switches. The use of optical fiber delay lines is a promising solution to constructing optical buffers. The buffer manager determines packet delays in the fiber delay line buffer before the packets arrive at the buffer. We propose a buffer management method based on a parallel and pipeline processing architecture consisting of (log₂N + 1) pipeline stages, where N is the number of ports of the packet switch. This is an expansion of a simple sequential scheduling used to determine the delays of arriving packets. Since the time complexity of each processor in the pipeline stages is O(1), the throughput of this buffer management is N times larger than that of the sequential scheduling method. This method can be used for buffer management of asynchronously arriving variable-length packets. We show the feasibility of a buffer manager supporting 128 × 40 Gb/s photonic packet switches, which provide at least eight times as much throughput as the latest electronic IP routers. The proposed method for asynchronous packets overestimates the buffer occupancy to enable parallel processing. We show through simulation experiments that the degradation in the performance of the method resulting from this overestimation is quite acceptable.
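The simple sequential scheduling that the pipelined architecture expands can be sketched as follows: each arriving packet is assigned the shortest fiber-delay-line delay that makes its output port free on departure. The function name, delay granularity, and drop policy are assumptions for illustration:

```python
def assign_fdl_delays(arrivals, granularity=1, max_delay=16):
    """Sequential FDL buffer management: give each arriving packet the
    smallest delay (a multiple of the FDL granularity) such that the
    output port is free when the packet exits the delay line."""
    busy_until = {}  # output port -> time at which it becomes free
    out = []
    for t, port, length in arrivals:          # sorted by arrival time
        free_at = busy_until.get(port, 0)
        # Smallest multiple of `granularity` covering the conflict.
        delay = max(0, -(-(free_at - t) // granularity)) * granularity
        if delay > max_delay:
            out.append((t, port, None))       # no FDL long enough: drop
            continue
        busy_until[port] = t + delay + length
        out.append((t, port, delay))
    return out

# (arrival_time, output_port, packet_length) triples.
sched = assign_fdl_delays([(0, 0, 3), (1, 0, 2), (2, 1, 1)])
```

The dependence of each decision on `busy_until` is exactly what the paper's pipelined method has to break up (via occupancy overestimation) to process N arrivals in parallel.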

15.
The overhead associated with reconfiguring a switch fabric in optical packet switches is an important issue in relation to the packet transmission time and can adversely affect switch performance. The reconfiguration overhead increases the mean waiting time of packets and reduces throughput. The scheduling of packets must take into account the reconfiguration frequency. This work proposes an analytical model for input-buffered optical packet switches with the reconfiguration overhead and analytically finds the optimal reconfiguration frequency that minimizes the mean waiting time of packets. The analytical model is suitable for several round-robin (RR) scheduling schemes in which only non-empty virtual output queues (VOQs) are served or all VOQs are served and is used to examine the effects of the RR scheduling schemes and various network parameters on the mean waiting time of packets. Quantitative examples demonstrate that properly balancing the reconfiguration frequency can effectively reduce the mean waiting time of packets.

16.
Since packet switches with input queueing require low-speed buffers and simple crossbar fabrics, they potentially provide high switching capacities. In these switches, the port that is the source of a multicast session might easily get congested as this session grows in popularity. We propose a protocol for scheduling packets in switches with input buffers under varying popularity of different content on the Internet. Copies of a multicast packet are forwarded through the switch so that multicasting is evenly distributed over the switch ports. The performance trade-off between the capacity that can be reserved and the guaranteed packet delay is discussed.

17.
The goal of this paper is to design optimal scheduling and memory management so as to minimize packet loss in input-queued switches with finite input buffers. The contribution is to obtain closed-form optimal strategies that minimize packet loss in 2×2 switches with equal arrival rates for all streams. For arbitrary arrival rates, the contribution is to identify certain characteristics of the optimal strategy, and use these characteristics to design a near-optimal heuristic. A lower bound for the cost associated with packet loss for N×N switches is obtained. This lower bound is used to design a heuristic which attains near-minimum packet loss in N×N switches with arbitrary N. These policies reduce packet loss by about 25% as compared to the optimal strategy for the infinite-buffer case. The framework and the policies proposed here apply to buffer-constrained wireless networks as well.

18.
Packet-mode scheduling in input-queued cell-based switches   Cited by: 1 (self-citations: 0, others: 1)
We consider input-queued switch architectures dealing at their interfaces with variable-size packets, but internally operating on fixed-size cells. Packets are segmented into cells at input ports, transferred through the switching fabric, and reassembled at output ports. Cell transfers are controlled by a scheduling algorithm, which operates in packet-mode: all cells belonging to the same packet are transferred from inputs to outputs without interruption. We prove that input-queued switches using packet-mode scheduling can achieve 100% throughput, and we show by simulation that, depending on the packet size distribution, packet-mode scheduling may provide advantages over cell-mode scheduling.
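A minimal sketch of the segmentation and packet-mode transfer discipline described here; the cell size and naming are illustrative:

```python
def segment(packet_id, size, cell=64):
    """Segment a variable-size packet into fixed-size cells (the last
    cell is padded). Returns (packet_id, seq, is_last) tuples."""
    n = -(-size // cell)  # ceiling division: number of cells needed
    return [(packet_id, s, s == n - 1) for s in range(n)]

def packet_mode_transfer(voq):
    """Packet-mode scheduling: once a head-of-line packet is selected,
    all of its cells cross the fabric in consecutive time slots, with
    no interleaving of cells from other packets."""
    slots = []
    while voq:
        pid, size = voq.pop(0)
        slots.extend(segment(pid, size))
    return slots

# A VOQ holding two variable-size packets (id, bytes).
slots = packet_mode_transfer([("p1", 100), ("p2", 64)])
```

Keeping a packet's cells contiguous lets the output port reassemble with a single pending packet per input, instead of buffering partial packets from every input as cell-mode scheduling requires.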

19.
Optical packet switching (OPS) has been proposed as a strong candidate for future metro networks. This paper assesses the viability of an OPS-based ring architecture as proposed within the research project DAVID (Data And Voice Integration on DWDM), funded by the European Commission through the Information Society Technologies (IST) framework. Its feasibility is discussed from a physical-layer point of view, and its limitations in size are explored. Through dimensioning studies, we show that the proposed OPS architecture is competitive with respect to alternative metropolitan area network (MAN) approaches, including synchronous digital hierarchy, resilient packet rings (RPR), and star-based Ethernet. Finally, the proposed OPS architectures are discussed from a logical performance point of view, and a high-quality scheduling algorithm to control the packet-switching operations in the rings is explained.

20.
This paper considers a general parallel buffered packet switch (PBPS) architecture which is based on multiple packet switches operating independently and in parallel. A load-balancing mechanism is used at each input to distribute the traffic to the parallel switches. The buffer structure of each of the parallel packet switches is based on either a dedicated, a shared, or a buffered-crosspoint output-queued architecture. Since packets in such PBPS multipath switches may get out of order as they travel independently in parallel through these switches, a resequencing mechanism is necessary at the output side. This paper addresses the issue of evaluating the minimum resequence-queue size required for a deadlock-free lossless operation. An analytical method is presented for the exact evaluation of the worst-case resequencing delay and the worst-case resequence-queue size. The results obtained reveal their relation, and demonstrate the impact of the various system parameters on resequencing.
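A minimal sketch of the kind of output-side resequencer such multipath switches need. The per-flow sequence-number scheme is an illustrative assumption; the paper's contribution is the worst-case sizing analysis, not this mechanism:

```python
import heapq

class Resequencer:
    """Output-side resequencing buffer: releases packets in sequence
    order, holding out-of-order arrivals in a min-heap until the gap
    before them is filled."""

    def __init__(self):
        self.next_seq = 0
        self.heap = []  # (seq, packet) pairs awaiting release

    def arrive(self, seq, packet):
        """Buffer an arrival; return all packets now releasable in order."""
        heapq.heappush(self.heap, (seq, packet))
        released = []
        while self.heap and self.heap[0][0] == self.next_seq:
            released.append(heapq.heappop(self.heap)[1])
            self.next_seq += 1
        return released

r = Resequencer()
first = r.arrive(1, "b")   # out of order: held in the buffer
second = r.arrive(0, "a")  # fills the gap: both packets drain
```

The worst-case occupancy of `heap` is precisely the resequence-queue size the paper's analytical method bounds: it grows with the maximum delay skew between the parallel switch planes.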


Copyright©北京勤云科技发展有限公司  京ICP备09084417号