Similar Documents
20 similar documents were found.
1.
CICQ (Combined Input Crosspoint Queued) is a switching fabric that adds a small amount of buffering at the crosspoints of a crossbar; it requires no internal speedup and supports distributed, parallel scheduling. To adapt to the varied traffic found in real networks and to improve performance under non-uniform traffic, this paper proposes RR-LQD (Round Robin with Longest Queue Detecting), an efficient scheduling algorithm for CICQ fabrics based on longest-queue prediction. RR-LQD has O(1) complexity and good scalability; by predicting the locally longest queue and serving it as far as possible, it keeps queue lengths balanced during scheduling and adapts to all kinds of non-uniform traffic environments. Simulation results show that RR-LQD achieves 100% throughput under both uniform and non-uniform traffic and has excellent delay performance. An RR-LQD arbiter has been implemented on an FPGA and meets the design requirements of high-speed, large-capacity switching fabrics.
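
A minimal sketch, assuming Python and illustrative names, of how a per-output CICQ arbiter in the spirit of RR-LQD might combine a round-robin pointer with a longest-queue preference; the paper's O(1) hardware detection of the longest queue is abstracted here as a simple scan.

```python
class RRLQDArbiter:
    """Per-output arbiter: round robin, biased toward the longest crosspoint queue."""

    def __init__(self, num_inputs):
        self.num_inputs = num_inputs
        self.pointer = 0  # round-robin pointer

    def select(self, queue_lengths):
        """Pick which input's crosspoint buffer to serve this cell time.

        queue_lengths[i] is the occupancy of the crosspoint buffer from input i
        to this output. Non-empty queues are scanned starting at the pointer;
        the locally longest one wins, round-robin order breaking ties.
        """
        best, best_len = None, 0
        for offset in range(self.num_inputs):
            i = (self.pointer + offset) % self.num_inputs
            if queue_lengths[i] > best_len:
                best, best_len = i, queue_lengths[i]
        if best is not None:
            self.pointer = (best + 1) % self.num_inputs  # advance past the served input
        return best
```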

2.
徐宁  余少华  汪学舜 《电子学报》2012,40(12):2360-2366
To address the problems that the Combined Input-Crosspoint Queued (CICQ) switching fabric is limited by flow-control communication delay and needs an internal speedup of 2 to emulate an Output-Queued (OQ) switch, while a pure Crosspoint-Queued (CQ) fabric suffers from insufficient throughput under non-uniform traffic, this paper proposes a new load-balanced crosspoint-buffered switch architecture. A fixed-pattern time-slot round-robin matching is used for load balancing: the non-uniform traffic arriving at the input ports is transformed into approximately uniform traffic and distributed evenly over the crosspoint buffers corresponding to the same output port, so that a small crosspoint buffer is sufficient to emulate output-queued scheduling, simplifying the scheduling process and improving throughput. Theoretical analysis proves the stability of the new architecture and its ability to emulate an output-queued switch. Simulations also show that the architecture achieves performance equivalent to an output-queued switch without internal speedup and effectively solves the throughput deficiency of crosspoint-buffered queues under non-uniform traffic.
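
A rough sketch, under the assumption that the fixed rotation pattern connects input i to intermediate row (i + t) mod N in slot t, of how such a load-balance stage spreads arrivals evenly over the crosspoint buffers of each output column; the function and variable names are illustrative, not the paper's code.

```python
def load_balance_slot(t, num_ports, head_cells, crosspoint_buffers):
    """One time slot of fixed-pattern rotation load balancing.

    head_cells[i] is (dest_output, cell) or None for input port i.
    crosspoint_buffers[r][j] is the buffer at intermediate row r, output column j.
    """
    for i in range(num_ports):
        if head_cells[i] is None:
            continue
        dest, cell = head_cells[i]
        row = (i + t) % num_ports          # fixed rotation pattern for this slot
        crosspoint_buffers[row][dest].append(cell)
```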

3.
This letter presents DTRR (Dual-Threshold Round Robin), an efficient scheduling algorithm for input-queued switches. In DTRR, an input-output pair newly matched by round robin in a cell time is locked by two self-adaptive thresholds whenever the queue length or the head-cell wait time of the corresponding Virtual Output Queue (VOQ) exceeds its threshold. The locked input and output are matched directly in the succeeding cell times until they are unlocked. By employing queue-length and wait-time thresholds that are both updated every cell time, DTRR achieves a good tradeoff between performance and hardware complexity. Simulation results indicate that the delay performance of DTRR is competitive with other typical scheduling algorithms under various traffic patterns, especially under diagonal traffic.
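
A hedged sketch of the locking test as described: after a round-robin match, the pair stays locked for the next cell time if either the VOQ length or the head-of-line wait time exceeds its threshold. The self-adaptive updating of the thresholds is omitted, and all names are illustrative.

```python
def should_lock(voq_len, hol_wait, len_threshold, wait_threshold):
    """Decide whether a newly matched input-output pair remains locked.

    voq_len        : current length of the matched VOQ (cells)
    hol_wait       : wait time of the VOQ's head cell (cell times)
    *_threshold    : self-adaptive thresholds, updated every cell time elsewhere
    """
    return voq_len > len_threshold or hol_wait > wait_threshold
```

In a full scheduler the two thresholds would themselves be recomputed each cell time from observed load, as the abstract notes.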

4.
A new ATM switch architecture is presented. Our proposed Multinet switch is a self-routing multistage switch with partially shared internal buffers capable of achieving 100% throughput under uniform traffic. Although it provides incoming ATM cells with multiple paths, the cell sequence is maintained throughout the switch fabric, thus eliminating the out-of-order cell sequence problem. Cells contending for the same output addresses are buffered internally according to a partially shared queueing discipline. In a partially shared queueing scheme, buffers are partially shared to accommodate bursty traffic and to limit the performance degradation that may occur in a completely shared system, where a small number of calls may hog the entire buffer space unfairly. Although the hardware complexity in terms of the number of crosspoints is similar to that of input queueing switches, the Multinet switch has throughput and delay performance similar to that of output queueing switches.
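
A small sketch of a partially shared queueing discipline of the kind described: each queue owns a private allotment and may additionally borrow from a common pool up to a per-queue cap, so no single queue can hog the whole shared space. The parameters and class name are illustrative assumptions, not the Multinet design itself.

```python
class PartiallySharedBuffer:
    def __init__(self, private_per_queue, shared_total, shared_cap_per_queue, num_queues):
        self.private = private_per_queue
        self.shared_free = shared_total
        self.cap = shared_cap_per_queue
        self.length = [0] * num_queues       # cells held per queue
        self.borrowed = [0] * num_queues     # shared-pool cells held per queue

    def admit(self, q):
        """Try to buffer one arriving cell for queue q; return True if accepted."""
        if self.length[q] < self.private:                     # private space first
            self.length[q] += 1
            return True
        if self.shared_free > 0 and self.borrowed[q] < self.cap:
            self.shared_free -= 1
            self.borrowed[q] += 1
            self.length[q] += 1
            return True
        return False                                          # cell is lost

    def depart(self, q):
        """One cell leaves queue q; shared space is returned before private."""
        if self.length[q] == 0:
            return
        self.length[q] -= 1
        if self.borrowed[q] > 0:
            self.borrowed[q] -= 1
            self.shared_free += 1
```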

5.
We develop a general model, called latency-rate servers (ℒℛ servers), for the analysis of traffic scheduling algorithms in broadband packet networks. The behavior of an ℒℛ server is determined by two parameters: the latency and the allocated rate. Several well-known scheduling algorithms, such as weighted fair queueing, VirtualClock, self-clocked fair queueing, weighted round robin, and deficit round robin, belong to the class of ℒℛ servers. We derive tight upper bounds on the end-to-end delay, internal burstiness, and buffer requirements of individual sessions in an arbitrary network of ℒℛ servers in terms of the latencies of the individual schedulers in the network, when the session traffic is shaped by a token bucket. The theory of ℒℛ servers enables computation of tight upper bounds on end-to-end delay and buffer requirements in a heterogeneous network, where individual servers may support different scheduling architectures and operate under different traffic models.
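
For reference, the flavor of the bound in question: if session i is shaped by a token bucket (σ_i, ρ_i) and traverses K ℒℛ servers, each allocating it a rate of at least ρ_i with latency Θ_j^{(i)}, the end-to-end delay is bounded (up to per-hop packetization terms that depend on the exact model) roughly as below.

```latex
D_i \;\le\; \frac{\sigma_i}{\rho_i} \;+\; \sum_{j=1}^{K} \Theta_j^{(i)}
```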

6.
In this paper, we propose the modified dynamic weighted round robin (MDWRR) cell scheduling algorithm, which guarantees the delay property of real-time traffic and also efficiently transmits non-real-time traffic. The proposed scheduling algorithm is a variation of the dynamic weighted round robin (DWRR) algorithm and guarantees the delay property of real-time traffic by adding a cell transmission procedure based on delay priority. It also uses a threshold to prevent the cell loss of non-real-time traffic that is due to the cell transmission procedure based on delay priority. Though the MDWRR scheduling algorithm may be more complex than the conventional DWRR scheme, considering delay priority minimizes cell delay and decreases the required size of the temporary buffer. The results of our performance study show that the proposed scheduling algorithm has better performance than the conventional DWRR scheme because of the delay guarantee of real-time traffic.
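
A rough sketch of the kind of per-slot decision described: real-time cells whose delay priority (age) has become critical are sent ahead of their DWRR turn, but only while the non-real-time queue stays below a loss-protection threshold. All names and the exact priority test are illustrative assumptions rather than the paper's specification.

```python
def pick_next_cell(rt_queue, nrt_queue, rt_deadline, nrt_threshold, dwrr_pick):
    """Choose the next cell to transmit in one cell slot.

    rt_queue / nrt_queue : FIFO lists of (age, cell) for real-time / non-real-time traffic
    rt_deadline          : age at which a real-time head cell must be sent
    nrt_threshold        : if the non-real-time queue exceeds this, fall back to DWRR
                           so non-real-time cells are not starved into loss
    dwrr_pick            : function implementing the conventional DWRR choice
    """
    if rt_queue and rt_queue[0][0] >= rt_deadline and len(nrt_queue) <= nrt_threshold:
        return rt_queue.pop(0)[1]          # urgent real-time cell jumps the round
    return dwrr_pick(rt_queue, nrt_queue)  # otherwise normal DWRR service
```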

7.
徐宁  余少华 《中国通信》2013,10(2):134-142
The fast growth of the Internet has created the need for high-speed switches. Recently, the Crosspoint-Queued switch has attracted attention because of its scalability and high performance. However, the Crosspoint-Queued switch does not perform well under non-uniform traffic. To overcome this limitation, the Load-Balanced Crosspoint-Queued switch architecture has been proposed. In this architecture, a load-balance stage is placed ahead of the Crosspoint-Queued stage. The load-balance stage transforms the incoming non-uniform traffic into nearly uniform traffic at the input port of the second stage. To avoid out-of-order cells, this stage employs flow-based queues in each crosspoint buffer. Analysis and simulation results reveal that under non-uniform traffic, this new switch architecture achieves a delay performance similar to that of the Output-Queued switch without the need for internal acceleration. In addition, its throughput is much better than that of the pure Crosspoint-Queued switch. Finally, it can achieve the same packet loss rate as the Crosspoint-Queued switch while using a buffer size that is only 65% of that used by the Crosspoint-Queued switch.
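
A minimal illustration of the order-preservation idea: if every cell of a flow is always steered into the same flow queue inside a crosspoint buffer, for example by hashing the flow identity, cells of one flow can never overtake each other in the load-balance stage. The hashing choice is an assumption for illustration, not necessarily the paper's mapping.

```python
import zlib

def flow_queue_index(src, dst, flow_label, num_flow_queues):
    """Map a flow to a fixed flow queue inside a crosspoint buffer.

    A deterministic hash of the flow identity keeps every cell of the flow
    in the same FIFO, so intra-flow order is preserved.
    """
    key = f"{src}-{dst}-{flow_label}".encode()
    return zlib.crc32(key) % num_flow_queues
```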

8.
This paper presents the performance evaluation of a new cell-based multicast switch for broadband communications. Using distributed control and a modular design, the balanced gamma (BG) switch features high performance for unicast, multicast and combined traffic under both random and bursty conditions. Although it has buffers on input and output ports, the multicast BG switch follows predominantly an output-buffered architecture. The performance is evaluated under uniform and non-uniform traffic conditions in terms of cell loss ratio and cell delay. An analytical model is presented to analyse the performance of the multicast BG switch under multicast random traffic and used to verify simulation results. The delay performance under multicast bursty traffic is compared with those from an ideal pure output-buffered multicast switch to demonstrate how close its performance is to that of the ideal but impractical switch. Performance comparisons with other published switches are also studied through simulation for non-uniform and bursty traffic. It is shown that the multicast BG switch achieves a performance close to that of the ideal switch while keeping hardware complexity reasonable.

9.
Son J.W., Lee H.T., Oh Y.Y., Lee J.Y., Lee S.B. 《Electronics letters》1997,33(14):1192-1193
A switch architecture is proposed for alleviating HOL blocking by employing even/odd dual FIFO queues at each input and even/odd dual switching planes dedicated to each even/odd queue. Under random traffic, it gives 76.4% throughput without output expansion and 100% with output expansion r=2, with the same number of crosspoints as the ordinary output expansion scheme.
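
Under one natural reading of the scheme (the even/odd split keyed on the parity of the destination output, which is an assumption here), the queueing rule is a one-liner:

```python
def enqueue(cell, dest_output, even_fifo, odd_fifo):
    """Place an arriving cell in the dual FIFO matching its destination's parity;
    each parity class is then served by its own switching plane."""
    (even_fifo if dest_output % 2 == 0 else odd_fifo).append((dest_output, cell))
```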

10.
Consider a single packet switch with a finite number of packet buffers shared between several output queues. An arriving packet is lost if no free buffer is available, as in the CIGALE network. It has been observed by simulation that if load increases too much, congestion may occur, i.e., throughput declines; it appears that the busiest link's queue tends to hog the buffers. Therefore, we limit the length of each queue, and when a queue is full the arriving packet is dropped. We expect that this restricted buffer sharing policy will avoid congestion under conditions of heavy load. A queueing model of a packet switch is defined and solved by local balance. Loss probability is evaluated, and values of the queue limit that minimize loss are found; they depend on load. A Square-Root rule is introduced to make the choice of queue limit independent of load. For a sample switch with three output links, a comparison is made between performance under different buffer sharing policies; it is shown that restricted sharing prevents congestion by making throughput an increasing function of load.
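
A sketch of restricted sharing with a load-independent queue limit. One common reading of a square-root rule for R output queues sharing B buffers sets each limit to roughly B/√R; the exact constant used below is an assumption for illustration, not the paper's derivation.

```python
import math

def queue_limit(total_buffers, num_queues):
    """Load-independent per-queue limit in the spirit of a square-root rule (assumed form)."""
    return max(1, round(total_buffers / math.sqrt(num_queues)))

def admit(queue_len, free_buffers, limit):
    """Restricted sharing: accept only if a buffer is free AND the queue is under its limit."""
    return free_buffers > 0 and queue_len < limit
```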

11.
Performance study of the RED packet discard algorithm
This paper studies the performance of the RED algorithm implemented in ATM switches. The throughput, fairness and delay of RED are examined under fixed and time-varying available bandwidth and in homogeneous and heterogeneous service environments. The study shows that RED needs to be combined with the EPD algorithm to form a RED+EPD algorithm. By controlling the average queue length, an ATM switch using RED+EPD effectively reduces the average queueing delay of the switch. A performance comparison with other packet discard algorithms shows that an ATM switch using RED+EPD provides slightly higher throughput than EPD, better fairness and lower queueing delay, and can better support services with delay requirements.
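
For reference, a compact sketch of the standard RED drop decision (average queue length tracked by an EWMA, linear drop probability between a lower and an upper threshold); the parameter names follow the usual RED description rather than anything specific to this paper's ATM setting.

```python
import random

def red_drop(avg_qlen, min_th, max_th, max_p):
    """Standard RED early-drop test on the averaged queue length."""
    if avg_qlen < min_th:
        return False                        # never drop below the lower threshold
    if avg_qlen >= max_th:
        return True                         # always drop above the upper threshold
    p = max_p * (avg_qlen - min_th) / (max_th - min_th)   # linear ramp in between
    return random.random() < p

def update_avg(avg_qlen, qlen, weight=0.002):
    """Exponentially weighted moving average of the instantaneous queue length."""
    return (1 - weight) * avg_qlen + weight * qlen
```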

12.
郑敏  郑竹林  王斌 《电子与信息学报》2007,29(12):2978-2980
A CICQ (Combined Input and Crosspoint Queued) switch is a switching fabric that adds a small amount of buffering at the crosspoints and is currently a research hotspot. This paper studies scheduling algorithms based on crosspoint buffers and CICQ-based switching fabrics, and proposes the LFF-LBF algorithm, which uses the two concepts of a smoothness degree and a congestion degree to guarantee that the most urgent cells are served first. Simulation analysis shows that the algorithm has good delay and stability performance under uniformly distributed and bursty traffic sources.

13.
A switching network that approaches a maximum throughput of 100% as buffering is increased is proposed. This self-routing switching network consists of simple 2×2 switching elements, distributors, and buffers located between stages and in the output ports. The proposed switch requires a speedup factor of two. The structure and the operation of the switching network are described, and its performance is analyzed. The switch has log2 N stages that move packets in a store-and-forward fashion, incurring a latency of log2 N time periods. The performance analysis of the switch under a uniform traffic pattern shows that the additional delay is small, and a maximum throughput of 100% is achieved as buffering is increased.
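
A minimal sketch of banyan-style self-routing through log2 N stages of 2×2 elements: at stage k the element inspects the k-th bit of the destination address and forwards the packet to its upper or lower output accordingly. The distributors, inter-stage buffers, and speedup-of-two operation of the proposed switch are not modelled; this only illustrates the bit-steered path.

```python
def self_route(dest, num_stages):
    """Return the list of 2x2-element output ports (0 = upper, 1 = lower) taken
    by a packet self-routing to output address `dest` through `num_stages` stages."""
    # Stage k examines the k-th most significant bit of the destination address.
    return [(dest >> (num_stages - 1 - k)) & 1 for k in range(num_stages)]

# Example: in an 8-port (3-stage) network, output 5 = 0b101 gives the path [1, 0, 1].
assert self_route(5, 3) == [1, 0, 1]
```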

14.
HiPower is a photonic ATM switch having a two-layered structure, consisting of an electrical control layer and an optical transport layer, realized by a detouring hypercube interconnection network structure. Four sorting-based routing algorithms suitable for high-speed hardware control of HiPower are proposed. They are evaluated by computer simulations in terms of delay and cell loss in the switch under a uniform traffic distribution. The simulation results suggest that all four methods are acceptable in their traffic characteristics and that the DD method, in which the cell nearest to its destination is given the highest priority in routing, seems to be the most attractive from the hardware implementation viewpoint. It is also confirmed that subpriority sorting based on the number of detourings reduces the delay variance. Simulation results are given which show that the detouring hypercube network is a practical and powerful architecture for a two-layered ATM cell switch and that HiPower therefore provides high throughput.

15.
A new packet switch architecture using two sets of time-division multiplexed buses is proposed. The horizontal buses collect packets from the input links, while the vertical buses distribute the packets to the output links. The two sets of buses are connected by a set of switching elements which coordinate the connections between the horizontal buses and the vertical buses so that each vertical bus is connected to only one horizontal bus at a time. The switch has the advantages of: (1) adding input and output links without increasing the bus and I/O adaptor speed; (2) being internally unbuffered; (3) having a very simple control circuit; and (4) having 100% throughput under uniform traffic. A combined analytical-simulation method is used to obtain the packet delay and packet loss probability. Numerical results show that for satisfactory performance, the buses need to run about 30% faster than the input line rate. With this speedup, even at a utilization factor of 0.9, each input adaptor requires only 31 buffers for a packet loss rate of 10^-6. The output queue behaves essentially as an M/D/1 queue.
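
Since the abstract observes that the output queue behaves essentially as an M/D/1 queue, the familiar mean waiting time for that model (utilization ρ, deterministic service time T) is worth recalling:

```latex
W_q \;=\; \frac{\rho\,T}{2\,(1 - \rho)}
```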

16.
We study the delay performance of all-optical packet communication networks configured as ring and bus topologies employing cross-connect switches (or wavelength routers). Under a cross-connect network implementation, a packet experiences no (or minimal) internal queueing delays. Thus, the network can be implemented by high speed all-optical components. We further assume a packet-switched network operation, such as that using slotted ring or bus access methods. In this case, a packet's delay is known before it is fed into the network. This can be used to determine if a packet must be dropped (when its end-to-end delay requirement is not met) at the time it accesses the network. It also leads to better utilization of network capacity resources. We also derive the delay performance for networks under a store-and-forward network operation. We show these implementations to yield very close average end-to-end packet queueing delay performance. We note that a cross-connect network operation can yield somewhat higher queueing delay variance levels. However, the mean queueing delay for all traffic flows is the same under a cross-connect network operation (under equal nodal traffic loading), while that in a store-and-forward network increases as the path length increases. For a ring network loaded by a uniform traffic matrix, the queueing delay incurred by 90% of the packets in a cross-connect network may be lower than that experienced in a store-and-forward network. We also study a store-and-forward network operation under a nodal round robin (fair queueing) scheduling policy. We show the variance performance of the packet queueing delay for such a network to be close to that exhibited by a cross-connect (all-optical) network.

17.
Scheduling algorithm for VOQ switches
A variety of matching schemes for VOQ switches that provide high throughput for uniform traffic have been proposed. The dual round robin matching (DRRM) scheme has performance similar to iSLIP and lower implementation complexity. The DRRM with exhaustive service (EDRRM) algorithm was created as a modification of DRRM with the goal of improving its performance under bursty and non-uniform traffic conditions. Under extremely unbalanced arrival traffic, however, an exhaustive service policy may lead to unfairness and starvation. This paper proposes a matching scheme for VOQ switches that provides high throughput, almost the same as the EDRRM scheme, while avoiding unfairness and starvation under unbalanced traffic.
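
A compact sketch of the basic DRRM handshake this work builds on: each input sends one request chosen by a round-robin request pointer among its non-empty VOQs, each output grants one request with a round-robin grant pointer, and pointers advance only on a successful grant. The EDRRM modification (holding a matched pair until its VOQ empties) and the paper's fairness fix are not shown.

```python
def drrm_iteration(voq_nonempty, req_ptr, grant_ptr):
    """One DRRM request/grant cycle for an N x N VOQ switch.

    voq_nonempty[i][j] : True if input i holds cells for output j
    req_ptr[i]         : round-robin request pointer of input i
    grant_ptr[j]       : round-robin grant pointer of output j
    Returns a dict {input: output} of matched pairs.
    """
    n = len(req_ptr)
    # Request phase: each input requests one output, round robin from its pointer.
    requests = {}
    for i in range(n):
        for off in range(n):
            j = (req_ptr[i] + off) % n
            if voq_nonempty[i][j]:
                requests.setdefault(j, []).append(i)
                break
    # Grant phase: each requested output grants one input, round robin from its pointer.
    match = {}
    for j, inputs in requests.items():
        granted = min(inputs, key=lambda i: (i - grant_ptr[j]) % n)
        match[granted] = j
        grant_ptr[j] = (granted + 1) % n      # pointers advance only on a grant
        req_ptr[granted] = (j + 1) % n
    return match
```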

18.
周熙  王柱京  佘阳 《电讯技术》2006,46(5):58-62
A hybrid combined free/demand assignment multiple access protocol with predictive reservation (CFDAMA-PR), in which ground stations use a predictive reservation algorithm, is proposed for satellite medium access control. CFDAMA-PR can send reservation requests in either pre-assigned reservation slots or round-robin polled reservation slots. Computer simulations compare the performance of pre-assigned-slot CFDAMA-PR with CFDAMA-PA and the basic DAMA protocol, and of polled CFDAMA-PR with CFDAMA-RR, using an IFP bursty traffic source. The simulation results show that, by using the predictive reservation algorithm, CFDAMA-PR achieves better delay/throughput performance.

19.
Three disciplines for prioritized transmission of messages in packet switching systems are considered. Messages from a finite number of classes, having their lengths drawn independently from general service time distributions, are assumed to arrive at the system according to independent Poisson processes. After entering the switch, the messages are divided into packets, overhead is added, and then the packets join a queue to be served according to one of the following disciplines: head-of-the-line (HOL), HOL with message preemption, and prioritized round robin (RR). Preemption of packet transmission is not allowed in any of the disciplines, and the service disciplines of different classes need not be the same. One result of the work presented here is a model to assess delays in the case of prioritized RR; this work appears to be the first in which messages are not of fixed length and all quanta from a given message are served at the same priority. Numerical results which illustrate the effects of the choice of packet lengths and service disciplines upon the delay of messages from the different classes are presented.

20.
On the speedup required for work-conserving crossbar switches
This paper describes the architecture for a work-conserving server using a combined I/O-buffered crossbar switch. The switch employs a novel algorithm based on output occupancy, the lowest occupancy output first algorithm (LOOFA), and a speedup of only two. A work-conserving switch provides the same throughput performance as an output-buffered switch. The work-conserving property of the switch is independent of the switch size and input traffic pattern. We also present a suite of algorithms that can be used in combination with LOOFA. These algorithms determine the fairness and delay properties of the switch. We also describe a mechanism to provide delay bounds for real-time traffic using LOOFA. These delay bounds are achievable without requiring output-buffered switch emulation.
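
A hedged sketch of the core LOOFA idea as stated: among the inputs' head-of-line candidates, outputs with the lowest current occupancy are served first, which together with a speedup of two underlies the work-conserving property. The greedy single-pass matching and tie-breaking below are simplifying assumptions, not the paper's full iterative algorithm.

```python
def loofa_match(candidates, output_occupancy):
    """Greedy lowest-occupancy-output-first matching for one scheduling phase.

    candidates       : dict {input: set of outputs it has cells for}
    output_occupancy : dict {output: current output-buffer occupancy}
    Returns {input: output}; outputs are considered in increasing occupancy.
    """
    match, used_inputs = {}, set()
    for out in sorted(output_occupancy, key=output_occupancy.get):
        for inp, outs in candidates.items():
            if inp not in used_inputs and out in outs:
                match[inp] = out
                used_inputs.add(inp)
                break
    return match
```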
