Similar Documents
20 similar documents found (search time: 31 ms)
1.
A single-stage non-blocking N × N packet switch with combined input and output queueing is considered. The limited queueing at the output ports partially resolves output port contention. Overflow at the output queues is prevented by a backpressure mechanism and additional queueing at the input ports. This paper investigates the performance of the switch under two modes of operation: asynchronous, and synchronous (slotted). For the purpose of comparison, a switch model is developed. Assuming Poisson packet arrivals, several performance measures are obtained analytically: the distribution of the delay through the switch, the input queue length distribution, packet losses at the inputs in the case of finite input queues, and the maximum switch throughput. The results demonstrate a slight performance advantage of asynchronous over synchronous operation; however, the maximum switch throughput is the same for both modes of operation.
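The combined input/output queueing with backpressure described above can be illustrated with a small slotted simulation. This is a sketch under simplifying assumptions (Bernoulli arrivals as a slotted stand-in for Poisson arrivals, uniform destinations, infinite input queues); the function and parameter names are illustrative, not from the paper.

```python
import random

def simulate_cioq(n=4, out_cap=4, load=0.8, slots=10_000, seed=1):
    """Slotted simulation of an N x N switch with combined input and
    output queueing: small output queues resolve contention, and a
    backpressure mechanism holds excess packets in the input queues."""
    rng = random.Random(seed)
    inputs = [[] for _ in range(n)]    # per-input FIFO of destination ports
    outputs = [0] * n                  # occupancy of each output queue
    delivered = 0
    for _ in range(slots):
        for j in range(n):             # each output sends one packet/slot
            if outputs[j] > 0:
                outputs[j] -= 1
                delivered += 1
        for i in range(n):             # HOL packet advances unless blocked
            if inputs[i] and outputs[inputs[i][0]] < out_cap:
                outputs[inputs[i].pop(0)] += 1
        for i in range(n):             # Bernoulli arrivals, uniform dests
            if rng.random() < load:
                inputs[i].append(rng.randrange(n))
    return delivered / (n * slots)     # per-port throughput

throughput = simulate_cioq()
```

Under moderate load the measured per-port throughput stays below the offered load because head-of-line packets are occasionally blocked by full output queues.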

2.
《Performance Evaluation》2006,63(4-5):278-294
We consider a time-slotted queueing model where each time slot can either be an arrival slot, in which new packets arrive, or a departure slot, in which packets are transmitted and hence depart from the queue. The slot scheduling strategy we consider describes periodically, and for a fixed number of time slots, which slots are arrival and departure slots. We consider a static and a dynamic strategy. For both strategies, we obtain expressions for the probability generating function of the steady-state queue length and the packet delay. The model is motivated by cable-access networks, which are often regulated by a request–grant procedure in which actual data transmission is preceded by a reservation procedure. Time slots can then either be used for reservation or for data transmission.
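The static strategy can be mimicked with a toy simulation: a fixed pattern such as "AADD" marks which slots of each frame are arrival ('A') and departure ('D') slots. The abstract derives generating functions analytically; this sketch only estimates the mean queue length empirically, with illustrative names and parameters.

```python
import random

def simulate_frame(pattern, p_arrival=0.4, frames=20_000, seed=7):
    """Simulate a queue driven by a fixed per-frame slot pattern:
    'A' slots see a Bernoulli(p_arrival) arrival, 'D' slots transmit
    one packet.  Returns the time-average queue length."""
    rng = random.Random(seed)
    q = 0
    total, slots = 0, 0
    for _ in range(frames):
        for kind in pattern:
            if kind == 'A':
                if rng.random() < p_arrival:
                    q += 1
            else:                      # departure slot
                q = max(0, q - 1)
            total += q
            slots += 1
    return total / slots

mean_q = simulate_frame("AADD")        # two arrival, two departure slots
```

With two departure slots per frame and well under two arrivals per frame on average, the queue is stable and its time-average length stays small.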

3.
尹德斌  谢剑英 《计算机仿真》2007,24(7):149-152,182
This paper proposes a new weighted fair queue scheduling algorithm, P-WFQ. The algorithm uses relative weights as service probabilities within each polling round to achieve weighted fair scheduling, avoiding the per-packet weight computation common to traditional weighted fair queueing algorithms (WFQ, WRR) and thus greatly reducing complexity. In addition, an adaptive queue management technique improves the switch's buffer utilization and reduces queueing delay jitter at the cost of a small amount of packet loss. Simulation results demonstrate the effectiveness and practicality of the algorithm.
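The core service rule of P-WFQ, as described, can be sketched as follows: in each round, a backlogged queue is selected with probability proportional to its weight, so no per-packet weight computation is needed. The code is an illustrative reconstruction, not the authors' implementation.

```python
import random

def pwfq_pick(queues, weights, rng):
    """Serve one packet: among currently backlogged queues, pick queue i
    with probability proportional to weights[i].  This replaces the
    per-packet virtual-time computation of classical WFQ."""
    backlogged = [i for i, q in enumerate(queues) if q]
    if not backlogged:
        return None
    r = rng.random() * sum(weights[i] for i in backlogged)
    for i in backlogged:
        r -= weights[i]
        if r < 0:
            queues[i].pop()            # transmit one packet from queue i
            return i

rng = random.Random(0)
queues = [[0] * 6000, [0] * 6000, [0] * 6000]   # heavily backlogged
weights = [3, 2, 1]
served = [0, 0, 0]
for _ in range(6000):
    served[pwfq_pick(queues, weights, rng)] += 1
# long-run service shares approach the 3:2:1 weight ratio
```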

4.
Describes Tiny Tera: a small, high-bandwidth, single-stage switch. Tiny Tera has 32 ports switching fixed-size packets, each operating at over 10 Gbps (approximately the Sonet OC-192e rate, a telecom standard for system interconnects). The switch distinguishes four classes of traffic and includes efficient support for multicasting. We aim to demonstrate that it is possible to use currently available CMOS technology to build this compact switch with an aggregate bandwidth of approximately 1 terabit per second and a central hub no larger than a can of soda. Such a switch could serve as a core for an ATM switch or an Internet router. Tiny Tera is an input-buffered switch, which makes it the highest bandwidth switch possible given a particular CMOS and memory technology. The switch consists of three logical elements: ports, a central crossbar switch, and a central scheduler. It queues packets at a port on entry and optionally prior to exit. The scheduler, which has a map of each port's queue occupancy, determines the crossbar configuration every packet time slot. Input queueing, parallelism, and tight integration are the keys to such a high-bandwidth switch. Input queueing reduces the memory bandwidth requirements: when a switch queues packets at the input, the buffer memories need run no faster than the line rate. Thus, there is no need for the speedup required in output-queued switches.

5.
Multicast enables efficient data transmission from one source to multiple destinations, and has been playing an important role in Internet multimedia applications. Although several multicast scheduling schemes for packet switches have been proposed in the literature, they usually aim to achieve only short multicast latency and high throughput without considering bandwidth guarantees. However, fair bandwidth allocation is critical for the quality of service (QoS) of the network, and is necessary to support multicast applications requiring guaranteed performance services, such as online audio and video streaming. This paper addresses the issue of bandwidth guaranteed multicast scheduling on virtual output queued (VOQ) switches. We propose the Credit based Multicast Fair scheduling (CMF) algorithm, which aims at achieving not only short multicast latency but also fair bandwidth allocation. CMF uses a credit based strategy to guarantee the reserved bandwidth of an input port on each output port of the switch. It keeps track of the difference between the reserved bandwidth and actually received bandwidth, and minimizes the difference to ensure fairness. Moreover, in order to fully utilize the multicast capability provided by the switch, CMF lets a multicast packet simultaneously send transmission requests to multiple output ports. In this way, a multicast packet has more chances to be delivered to multiple destination output ports in the same time slot and thus to achieve short multicast latency. Extensive simulations are conducted to evaluate the performance of CMF, and the results demonstrate that CMF achieves the two design goals: fair bandwidth allocation and short multicast latency.
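The credit accounting CMF is described as using can be sketched in a single-output form: each input earns its reserved share of the output per slot, and the input with the largest reserved-minus-received gap is served next. This is a simplified illustration (one output, always-backlogged unicast inputs), not the paper's full multicast algorithm.

```python
def credit_schedule(reserved, slots):
    """Credit-based fairness sketch for one output port: every slot each
    input earns its reserved share; the input with the largest
    accumulated credit (reserved minus received service) is served."""
    credits = [0.0] * len(reserved)
    served = [0] * len(reserved)
    for _ in range(slots):
        for i, r in enumerate(reserved):
            credits[i] += r
        winner = max(range(len(reserved)), key=lambda i: credits[i])
        credits[winner] -= 1.0         # one packet of service consumed
        served[winner] += 1
    return served

served = credit_schedule([0.5, 0.3, 0.2], 1000)   # ≈ [500, 300, 200]
```

Because serving the largest credit keeps every credit within a bounded band, each input's received service tracks its reservation to within a small constant over any interval.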

6.
In the online packet buffering problem (also known as the unweighted FIFO variant of buffer management), we focus on a single network packet switching device with several input ports and one output port. This device forwards unit-size, unit-value packets from input ports to the output port. Buffers attached to input ports may accumulate incoming packets for later transmission; if they cannot accommodate all incoming packets, their excess is lost. A packet buffering algorithm has to choose from which buffers to transmit packets in order to minimize the number of lost packets and thus maximize the throughput. We present a tight lower bound of e/(e-1) ≈ 1.582 on the competitive ratio of throughput maximization, which holds even for fractional or randomized algorithms. This improves the previously best known lower bound of 1.4659 and matches the performance of the algorithm Random Schedule. Our result contradicts the claimed performance of the algorithm Random Permutation; we point out a flaw in its original analysis.
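The bound itself is easy to evaluate numerically; the snippet below simply checks that e/(e-1) ≈ 1.582 indeed exceeds the earlier 1.4659 bound cited in the abstract.

```python
import math

ratio = math.e / (math.e - 1)   # tight competitive-ratio lower bound
previous_best = 1.4659          # earlier best known lower bound
improvement = ratio - previous_best
```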

7.
IQ switches store packets at input ports to avoid the memory speedup required by OQ switches. However, packet schedulers are needed to determine an I/O (input/output) interconnection pattern that avoids conflicts among packets at output ports. Today, centralized, single-chip scheduler implementations are largely dominant. In the near future, multi-chip scheduler implementations will be needed to reduce hardware scheduler complexity in very large, high-speed switches. However, a multi-chip implementation introduces a non-negligible delay between the input and output selectors used to determine the I/O interconnection pattern in each time slot. This delay, mainly due to inter-chip latency, requires modifications to traditional scheduling algorithms, which normally rely on the hypothesis that information exchange among selectors can be performed with negligible delay. We propose a novel multicast scheduler, named IMRR, an extension of a previously proposed multicast scheduling algorithm named mRRM, that makes it suitable for multi-chip implementation, and we examine its performance by simulation.

8.
The burstiness of video traffic in future wired and wireless networks makes dynamic management of network resources more critical. This helps to meet the stringent delay and loss requirements of video traffic and improves its Quality of Service (QoS). Since buffer management in a queueing system plays an important role in the effective control of QoS for various types of applications, we focus on its dynamic management. In this paper, we propose the ADPBS scheme for a priority queueing system. The performance of this scheme is analyzed with an MPEG-4 encoded video sequence as input to the ADPBS queue. The simulation is carried out in MATLAB for various combinations of threshold control parameters, different traffic loads and input traffic mix variations. The simulation results indicate that the dynamically controlled threshold in ADPBS yields a significant reduction in packet loss across the different priority classes compared with a static partial buffer sharing queue and a first-in first-out queue, depending on the threshold control parameters and the traffic type.
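The static partial-buffer-sharing baseline that ADPBS is compared against can be sketched as a simple admission rule; ADPBS itself, per the abstract, adapts the threshold dynamically with the traffic load. Names and parameters here are illustrative.

```python
def pbs_admit(queue_len, priority, buffer_size, threshold):
    """Static partial buffer sharing: high-priority packets are admitted
    while there is any room; low-priority packets only while the queue
    is below the threshold.  ADPBS (per the abstract) instead adjusts
    the threshold dynamically."""
    if priority == "high":
        return queue_len < buffer_size
    return queue_len < threshold

# low-priority traffic is cut off once the queue passes the threshold
admitted_low = [pbs_admit(q, "low", buffer_size=10, threshold=6) for q in range(10)]
admitted_high = [pbs_admit(q, "high", buffer_size=10, threshold=6) for q in range(10)]
```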

9.
An In-Order Parallel Switch Architecture Based on Combined Input-and-Output Queueing
戴艺  苏金树  孙志刚 《软件学报》2008,19(12):3207-3217
This paper proposes an in-order queuing (IOQ) PPS architecture in which a fixed-size buffer introduced at the demultiplexer distributes the load evenly across the switching planes. The combined input-and-output queuing (CIOQ) switching planes in the middle stage are controlled by a central scheduler, which applies the same matching to every switching plane in each time slot, a policy called synchronized scheduling. It can be proved that, under this architecture, a round-robin dispatch algorithm combined with synchronized scheduling guarantees that cells of the same flow are read out of the switching planes in order. A strict longest-queue-first synchronized scheduling algorithm is further proposed, which greatly reduces the state information the central scheduler must maintain and the cell-reordering overhead. Compared with mainstream PPS designs, IOQPPS (in-order queuing parallel packet switch) is simple to implement and well suited to hardware. Simulation results show that IOQPPS achieves optimal delay performance.

10.
Due to the rapid development of computer networks, congestion has become a critical issue. Congestion usually occurs when the connections' demands on network resources, i.e. buffer space, exceed what is available. We propose in this paper a new discrete-time queueing network analytical model based on the dynamic random early drop (DRED) algorithm to control congestion in its early stages. We apply the analytical model to a queueing network of two queue nodes. Furthermore, we compare the proposed analytical model with three known active queue management (AQM) algorithms, namely DRED, random early detection (RED) and adaptive RED, in order to determine which of them offers better quality of service (QoS). We also experimentally compare the queue nodes of the proposed analytical model and the three AQM methods in terms of different performance measures, including average queue length, average queueing delay, throughput, and packet loss probability, aiming to determine the queue node that offers better performance.
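A simplified version of the DRED control step can be sketched as an error-driven update of the drop probability toward a queue-length target; the exact gains and filtering in the paper's model may differ, and the parameter values below are illustrative.

```python
def dred_update(p, q, target, buffer_size, alpha=0.005, p_max=1.0):
    """One (simplified) DRED control step: move the packet-drop
    probability in proportion to the error between the current queue
    length and its target, clamped to [0, p_max]."""
    p += alpha * (q - target) / buffer_size
    return min(max(p, 0.0), p_max)

p = 0.0
for q in [90, 80, 70, 60]:             # queue sitting above a target of 50
    p = dred_update(p, q, target=50, buffer_size=100)
# p is nudged upward each step, pushing the queue back toward the target
```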

11.
This paper proposes a dynamic probabilistic priority algorithm, DPP, aimed at applications with relatively stringent delay and packet-loss requirements. DPP dynamically adjusts the probability parameter p according to the AF1 queue length, effectively countering the QoS degradation caused by bursty traffic. Simulation results under different experimental settings show that DPP effectively mitigates the impact of burstiness on the average packet queueing delay and reduces the packet loss rate.

12.
This paper deals with path-wise performance analysis rather than a nodal one to enrich results previously obtained in the literature under simple but unsatisfactory assumptions, e.g., Poisson processes. First deriving the per-stream loss probability, delay, and delay variance of an isolated queue with multi-class input streams modeled by heterogeneous two-state Markov-modulated Poisson processes (MMPPs), we then propose simple and novel decomposition schemes working together with an input parameter modification scheme to (approximately) extract the per-stream output process for a lossy queue receiving MMPPs under a general service time distribution. The novelty of the decompositions is that they can be easily implemented based on a lossless queueing model. Through numerical experiments, we show that the accuracy in estimating the per-stream output process using such schemes is good. These decomposition schemes together with the input parameter modification scheme and a moment-based fitting algorithm used to fit the per-stream output as a two-state MMPP make analysis of path-wise performance viable by virtually treating each node in isolation along a path to get performance measures sequentially from the source node en route to the destination node.

13.
In high-speed networks, the access rate of commodity memory has been the factor constraining improvements in router scheduling performance. This paper therefore proposes a two-stage distributed memory (TSDM) architecture, which greatly relaxes the access-rate requirements placed on commodity memory. Analysis yields a lower bound on the number of memories the architecture needs to emulate output-queued scheduling, and it is proved that the switching element can emulate output-queued scheduling without speedup. Finally, a simplified engineering design of the TSDM architecture is presented, and its performance is validated by simulation.

14.
Performance of ATM networks depends on switch performance and architecture. This paper presents a simulation study of a new dynamic allocation of input buffer space in ATM switching elements. The switching elements are composed of input and output buffers which are used to store received and forwarded cells, respectively. Efficient and fair use of buffer space in an ATM switch is essential to gain high throughput and low cell loss performance from the network. In this paper, input buffer space of each switching element is allocated dynamically as a function of traffic load. A shared buffer pool is provided with threshold-based virtual partition among input ports, which supplies the necessary input buffer space as required by each input port. The system behaviour under varying traffic loads has been investigated using a simulation program. Also, a comparison with a static allocation scheme shows that the threshold-based dynamic buffer allocation scheme ensures an increased network throughput and a fair share of the buffer space even under bursty loading conditions.

15.
Bar-Noy  Freund  Landa  Naor 《Algorithmica》2008,36(3):225-247
Consider the following problem. A switch connecting n input channels to a single output channel must deliver all incoming messages through this channel. Messages are composed of packets, and in each time slot the switch can deliver a single packet from one of the input queues to the output channel. In order to prevent packet loss, a buffer is maintained for each input channel. The goal of a switching policy is to minimize the maximum buffer size. The setting is on-line; decisions must be made based on the current state without knowledge of future events. This general scenario models multiplexing tasks in various systems such as communication networks, cable modem systems, and traffic control. Traditionally, researchers analyzed the performance of a given policy assuming some distribution on the arrival rates of messages at the input queues, or assuming that the service rate is at least the aggregate of all the input rates. We use competitive analysis, avoiding any prior assumptions on the input. We show O(log n)-competitive switching policies for the problem and demonstrate matching lower bounds.
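A natural online baseline for this n-to-1 buffering problem is to always transmit from the currently longest queue; the sketch below implements that baseline (not the paper's O(log n)-competitive policy) and reports the peak buffer occupancy. Inputs and names are illustrative.

```python
def longest_queue_first(queues, arrivals_per_slot):
    """Each slot: enqueue that slot's arrivals (a list of input indices),
    record the backlog peak, then transmit one packet from the longest
    queue.  Returns the maximum buffer size any single input needed."""
    peak = 0
    for batch in arrivals_per_slot:
        for i in batch:
            queues[i] += 1
        peak = max(peak, max(queues))
        j = max(range(len(queues)), key=lambda k: queues[k])
        if queues[j]:
            queues[j] -= 1
    return peak

peak = longest_queue_first([0, 0, 0], [[0, 1], [0, 2], [1], [], [2]])
```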

16.
Buffering High-Speed Packets with a Three-Stage Memory Array and Its Performance Analysis
王鹏  伊鹏  金德鹏  曾烈光 《软件学报》2005,16(12):2181-2189
High-speed network devices generally need large, fast packet memories to buffer incoming packets, but such memories are very hard to realize with current memory technology, which limits the development of the network as a whole. This paper proposes a new three-stage memory array architecture that solves the capacity and bandwidth problems of packet memory and can, in principle, buffer packets at arbitrarily high line rates. A "most critical queue first" algorithm manages the three-stage memory array; it is proved that this algorithm guarantees packets are scheduled out without added delay while requiring the minimum system scale, and upper and lower bounds on the system scale are derived. Finally, a practical realization of the three-stage memory array is given, making the architecture easy to implement in hardware.

17.
Decomposition of general tandem queueing networks with MMPP input
Armin   《Performance Evaluation》2001,44(1-4):5-23
For tandem queueing networks with generally distributed service times, decomposition is often the only feasible solution method besides simulation. The network is partitioned into individual nodes which are analyzed in isolation. In most existing decomposition algorithms for continuous-time networks, the output of a queue is usually approximated as a renewal process, which becomes the arrival process to the next queue. In this paper, the internal traffic processes are described as semi-Markov processes (SMPs) and Markov modulated Poisson processes (MMPPs). Thus, correlations in the traffic streams, which are known to have a considerable impact on performance, are taken into account to some extent. A two-state MMPP, which arises frequently in communications modeling, serves as input to the first queue of the tandem network. Furthermore, the single nodes may have infinite or finite buffers. Customers who arrive at a full buffer are lost.

In principle, the analysis of an individual queue as an MMPP/G/1(/K) system delivers a wide range of performance measures. For different examples of tandem networks, stationary mean queue lengths at arbitrary time are compared to simulation data. The relative errors of the results, which are computed promptly by the decomposition component of the tool TimeNET, remain within a reasonable range.
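A discrete-time stand-in for the two-state MMPP input used above can be sketched as a two-state Markov chain modulating a Bernoulli arrival probability (the continuous-time original modulates a Poisson rate instead). Rates and switching probabilities below are illustrative.

```python
import random

def mmpp2_counts(rates, switch_p, slots, seed=3):
    """Discrete-time stand-in for a two-state MMPP: a 2-state Markov
    chain picks the per-slot Bernoulli arrival probability, and after
    each slot the chain may switch state with probability
    switch_p[state]."""
    rng = random.Random(seed)
    state, counts = 0, []
    for _ in range(slots):
        counts.append(1 if rng.random() < rates[state] else 0)
        if rng.random() < switch_p[state]:
            state = 1 - state          # modulating chain changes state
    return counts

counts = mmpp2_counts(rates=(0.8, 0.1), switch_p=(0.05, 0.05), slots=10_000)
mean_rate = sum(counts) / len(counts)  # near (0.8 + 0.1) / 2 for symmetric switching
```

The resulting stream is bursty: long stretches at rate 0.8 alternate with long stretches at rate 0.1, which is exactly the kind of correlated traffic the decomposition must carry from node to node.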


18.
In recent years, network applications and hardware technology have developed at an impressive pace. A large-scale network switching system is therefore needed to satisfy the demands of network service providers and users for services such as data, voice, image, video on demand, videoconferencing, telecommunications, and remote control and teaching. A general large-scale switching system cannot fulfill all of these application needs, such as correct data transmission and extensibility of the switching system's input/output ports. In this paper, we propose a nested ring-based architecture for building very large-scale network switching systems. To satisfy the various network application needs, the nested ring-based architecture is designed as the switching element: it moves input data quickly to the destined output port, and the system's input/output ports can easily be extended to 100 or even 1000 ports. The simulation results show that the proposed system achieves good performance in data transmission delay, throughput and extensibility.

19.
孙志刚  卢锡城 《软件学报》2001,12(8):1170-1176
Input-queued switching fabrics are used in a growing number of ATM switches and high-performance routers. For independent cell arrivals, combining VOQ (virtual output queueing) with weighted scheduling algorithms such as LQF (longest queue first) and OCF (oldest cell first) can drive switch throughput to 100%. However, weighted algorithms such as LQF and OCF are too complex to implement in hardware. This paper proposes a multi-step scheduling strategy that makes hardware implementation of weighted scheduling algorithms feasible. Under this strategy, LQF still achieves 100% switch throughput for independent cell arrivals, and has good…

20.
In this paper, the average packet delay of IEEE 802.11 DCF under finite-load traffic in multi-hop ad hoc networks is analyzed. We employ a Markov chain model to analyze the probability of transmission at each node in an arbitrary slot and derive the channel access delay. We model each node as an M/G/1 queue and derive the queueing delay. The model is extended from analyzing the single-hop average packet delay to evaluating the end-to-end packet delay in multi-hop ad hoc networks, without assuming the traffic to be in a saturated state. To validate our analytic results, we have performed extensive simulations. The analytic and simulation results match very well.
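The per-node queueing delay from an M/G/1 model is classically given by the Pollaczek-Khinchine mean waiting time; the paper's own derivation may differ in detail, but the formula below is the standard building block, checked against the known M/M/1 special case.

```python
def mg1_wait(lam, mean_s, second_moment_s):
    """Pollaczek-Khinchine mean waiting time for a stable M/G/1 queue:
    W = lam * E[S^2] / (2 * (1 - rho)) with rho = lam * E[S]."""
    rho = lam * mean_s
    assert rho < 1, "queue must be stable"
    return lam * second_moment_s / (2.0 * (1.0 - rho))

# sanity check against M/M/1: exponential service with E[S] = 1,
# E[S^2] = 2, and arrival rate 0.5 gives W = rho / (mu * (1 - rho)) = 1.0
w = mg1_wait(0.5, 1.0, 2.0)
```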
