Similar Articles
20 similar articles found (search time: 15 ms).
1.
The performance of output buffers in multipath ATM switches is closely related to the output traffic distribution, which characterizes the packet arrival rate at each output link connected to the output buffer of a given output port. Many multipath switches produce nonuniform output traffic distributions even if the input traffic patterns are uniform. Focusing on this nonuniformity, we analyze the output buffer performance of several multipath switches under both uniform and nonuniform input traffic patterns. It is shown that the output traffic distributions differ across the multipath switches, and that the output buffer performance, measured in terms of packet loss probability and mean waiting time, improves as the nonuniformity of the output traffic distribution becomes higher.
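
As a quick illustration of the dependence described above (not taken from the paper), the following Python sketch simulates a finite output buffer fed by several internal links under assumed Bernoulli arrivals and compares the loss when the same aggregate load is spread uniformly versus concentrated on a few links. The buffer size, load and link probabilities are arbitrary choices.

```python
import random

def simulate_output_buffer(link_probs, buffer_size=16, slots=200_000, seed=1):
    """Discrete-time FIFO output buffer: each internal link delivers at most one
    packet per slot with its own probability; the buffer serves one packet per slot."""
    rng = random.Random(seed)
    queue = 0
    arrived = lost = 0
    for _ in range(slots):
        batch = sum(rng.random() < p for p in link_probs)   # simultaneous arrivals this slot
        arrived += batch
        accepted = min(batch, buffer_size - queue)
        lost += batch - accepted
        queue += accepted
        if queue > 0:                                        # one departure per slot
            queue -= 1
    return lost / max(arrived, 1)

total_load = 0.9                                             # aggregate packets per slot
uniform    = [total_load / 4] * 4                            # evenly spread over 4 links
nonuniform = [0.6, 0.2, 0.05, 0.05]                          # same total, concentrated
print("uniform split loss:", simulate_output_buffer(uniform))
print("nonuniform split loss:", simulate_output_buffer(nonuniform))
```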

2.
Performance of ATM networks depends on switch performance and architecture. This paper presents a simulation study of a new dynamic allocation of input buffer space in ATM switching elements. The switching elements comprise input and output buffers, which store received and forwarded cells, respectively. Efficient and fair use of buffer space in an ATM switch is essential for high throughput and low cell loss in the network. In this paper, the input buffer space of each switching element is allocated dynamically as a function of traffic load: a shared buffer pool with a threshold-based virtual partition among input ports supplies the buffer space required by each input port. The system behaviour under varying traffic loads has been investigated using a simulation program. A comparison with a static allocation scheme shows that the threshold-based dynamic buffer allocation scheme yields higher network throughput and a fair share of the buffer space even under bursty loading conditions.
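
The abstract does not specify the allocation rule itself; the Python sketch below is only one plausible reading of a shared input-buffer pool whose per-port thresholds track each port's measured arrival rate. The class name, the window-based load estimate and the load-proportional threshold rule are illustrative assumptions, not the paper's scheme.

```python
class SharedInputBuffer:
    """Shared cell pool, virtually partitioned among input ports by soft thresholds
    that are refreshed periodically in proportion to each port's recent arrivals."""

    def __init__(self, total_cells, num_ports, window=1000):
        self.total = total_cells
        self.window = window
        self.used = [0] * num_ports        # cells currently buffered per port
        self.arrivals = [1] * num_ports    # arrivals seen in the current window
        self.seen = 0
        self._recompute()

    def _recompute(self):
        total = sum(self.arrivals)
        self.threshold = [self.total * a / total for a in self.arrivals]

    def offer(self, port):
        """Try to buffer one cell arriving at `port`; return True if accepted."""
        self.arrivals[port] += 1
        self.seen += 1
        if self.seen >= self.window:       # periodically adapt the virtual partition
            self._recompute()
            self.arrivals = [1] * len(self.arrivals)
            self.seen = 0
        if sum(self.used) < self.total and self.used[port] < self.threshold[port]:
            self.used[port] += 1
            return True
        return False                       # cell dropped

    def release(self, port):
        """One cell from `port` has been switched out of the buffer."""
        if self.used[port] > 0:
            self.used[port] -= 1
```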

3.
This paper studies the performance of trunk grouping in packet switch system design, with emphasis on the analysis of maximum throughput, input queue delay and packet loss rate. The trunk grouping technique can be implemented on both sides of the switch. In principle, output trunk grouping relieves output traffic contention, while input trunk grouping, proposed in this paper, prevents individual input links from overloading. The study shows a significant advantage of combining input and output trunk grouping in removing local congestion caused by individual links, especially in a highly nonuniform traffic environment. To implement trunk grouping, we suggest not designating the connection of each virtual circuit to individual links in high-speed network protocol design.

4.
Shared-buffer switches have many advantages, such as a relatively low cell loss rate and good buffer utilization, and they are increasingly favoured in recent VLSI switch designs for ATM. However, their performance degrades dramatically under nonuniform traffic because some favoured cells monopolize the buffer. To overcome this, restricted types of sharing and hot-spot pushout (HSPO) have been proposed, and the latter has been shown by simulation to perform better in all situations. In this paper we develop an analytical model for performance evaluation of a shared-buffer asynchronous transfer mode (ATM) switch with HSPO under bursty traffic. The model is an improved version of the first model developed for this purpose: we balance the relative queue lengths to approximate the effects of pushout while keeping only four state variables, and the model shows good agreement with simulation for throughput and cell loss.

5.
Computer Networks, 1999, 31(18):1927-1933
Efficient and fair use of buffer space in an Asynchronous Transfer Mode (ATM) switch is essential for high throughput and low cell loss in the network. In this paper a shared buffer architecture with a threshold-based virtual partition among output ports is proposed. The thresholds are updated according to the traffic characteristics of each outgoing link, so as to adapt to the traffic load. The system behavior under varying traffic patterns is investigated via simulation, with cell loss rate as the quality of service (QoS) measure. Our study shows that the threshold-based dynamic buffer allocation scheme ensures a fair share of the buffer space even under bursty loading conditions.

6.
To simplify traffic control in a network, it is desirable that the traffic-control policy at a network node depend only on the external traffic loads on the input and output links, and not on the detailed addressing or distribution of packets from inputs to outputs. In other words, it should be possible to guarantee the grade of service of an input-output connection by controlling the aggregate loads on the input and output. Switch nodes in which such a traffic-control policy is possible are said to have the property of the sufficiency of the knowledge of external loads (SKEL). One way to demonstrate the feasibility of SKEL for a particular switch is to show that the performance under any nonuniform traffic distribution from inputs to outputs is better than or close to the performance under the uniform traffic distribution. The contributions of this paper are twofold: clarifying issues related to SKEL and establishing its feasibility for generic input- and output-buffered switches on a rigorous basis. Our major results are: (1) the packet-loss probability due to the Knockout switch-design principle for packets destined for an arbitrary output is maximized when the traffic to that output originates uniformly from all inputs; (2) the packet-loss probability for packets destined for a particular output under uniform traffic closely approximates the loss probability for packets from the worst-case input to that output under nonuniform traffic; (3) results similar to (1) and (2) hold for the mean and variance of delay; (4) for an input-queued switch, external link loadings that do not give rise to queue saturation under uniform traffic will not do so under nonuniform traffic either.
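
Result (1) can be checked numerically from the standard Knockout loss expression: if at most L cells per slot are accepted at an output and A is the number of simultaneous arrivals, the lost fraction is E[(A - L)+]/E[A]. The Python sketch below (with illustrative parameters, not taken from the paper) evaluates this for a uniform and for a skewed arrival pattern carrying the same offered load.

```python
from math import comb

def knockout_loss(probs, L):
    """Fraction of cells lost at one output of a Knockout concentrator that accepts
    at most L cells per slot, when input i sends a cell to this output with
    probability probs[i] in each slot (independent Bernoulli arrivals)."""
    # Distribution of the number of simultaneous arrivals: convolution of Bernoullis.
    dist = [1.0]
    for p in probs:
        dist = [(dist[k] if k < len(dist) else 0.0) * (1 - p)
                + (dist[k - 1] * p if k >= 1 else 0.0)
                for k in range(len(dist) + 1)]
    offered = sum(probs)
    lost = sum((k - L) * pk for k, pk in enumerate(dist) if k > L)
    return lost / offered

N, L, load = 32, 8, 0.9
uniform = [load / N] * N                          # every input equally likely
skewed  = [load / 2, load / 2] + [0.0] * (N - 2)  # same offered load, two hot inputs
print("uniform loss:", knockout_loss(uniform, L))
print("skewed loss:", knockout_loss(skewed, L))
```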

7.
In this paper, we study the mean delay and maximum buffer requirements at different levels of burstiness for highly bursty data traffic in an ATM node. This performance study is carried out with an event-driven simulation program that considers both real-time and data traffic. We assume that data traffic is loss-sensitive: a large buffer (fat bucket) is allocated to data traffic to accommodate sudden long bursts of cells. Real-time traffic is delay-sensitive, so we impose input traffic shaping on it using a leaky-bucket based input rate control method. Channel capacity is allocated according to the average arrival rate of each input source to maximize channel utilization. Simulation results show that both the maximum buffer requirement and the mean node delay for data traffic are directly proportional to the burstiness of the input traffic. Results for the mean node delay and cell loss probability of real-time traffic are also analyzed. The simulation program is written in C++ and has been verified using the zero-mean statistics concept, by comparing simulation results to known theoretical or observed results.
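
The paper's exact leaky-bucket parameters are not given in the abstract; the Python sketch below is the textbook token-bucket form of such an input rate controller, with names chosen for illustration only.

```python
class LeakyBucketShaper:
    """Leaky-bucket style regulator: a cell may enter the network only when a token
    is available; tokens accumulate at `rate` per second up to `bucket_size`."""

    def __init__(self, rate, bucket_size):
        self.rate = rate              # token generation rate (cells per second)
        self.bucket = bucket_size     # maximum burst tolerated
        self.tokens = bucket_size
        self.last = 0.0

    def conforming(self, now):
        """Return True if a cell arriving at time `now` may be sent immediately."""
        self.tokens = min(self.bucket, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False                  # cell must wait in the shaping buffer
```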

8.
A number of recent studies have addressed the use of priority mechanisms in Asynchronous Transfer Mode (ATM) switches. This investigation concerns the performance evaluation and dimensioning of a shared-buffer switching element with a threshold priority mechanism (partial buffer sharing). It assumes that incoming ATM cells are distinguished by a space-priority assignment, i.e., loss of a high-priority cell should be less likely than loss of a low-priority cell. The evaluation method is analytic, based on an approximate discrete-time, finite-state Markov model of a switch and its incoming traffic. The development focuses on the formulation of steady-state loss probabilities for each priority class. The model also supports the evaluation of delay measures for each class; results concerning the latter are illustrated without development. The analysis of loss probabilities is then used to dimension the buffer capacity and threshold level so that the required maximum cell loss probabilities are just satisfied for each cell type. Moreover, when dimensioned with respect to relatively stringent loss requirements, i.e., probabilities of 10^-10 and 10^-5 for high- and low-priority cells, respectively, both loss performance and resource utilization are appreciably improved over a comparable switch without such a mechanism.
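
The admission rule behind partial buffer sharing is simple enough to state in a few lines; the Python sketch below shows the generic form (capacity and threshold would be dimensioned from the Markov analysis described above, which the sketch does not reproduce).

```python
def admit(queue_length, capacity, threshold, high_priority):
    """Partial buffer sharing: high-priority cells may use the whole buffer,
    low-priority cells are admitted only while the queue is below the threshold."""
    if high_priority:
        return queue_length < capacity
    return queue_length < threshold

# Example: a buffer of 64 cells with a low-priority threshold of 48.
print(admit(50, capacity=64, threshold=48, high_priority=True))   # True
print(admit(50, capacity=64, threshold=48, high_priority=False))  # False
```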

9.
It has been shown that, for the same buffer size, switching systems based on central queueing achieve better performance. This paper describes in detail how multicast transmission can be implemented in a central-queueing switching system and how conflicts between multicast and unicast transmission can be resolved. A single-write, single-read scheme with output buffering and masked priority is proposed; it supports multicast transmission, resolves conflicts between multicast and unicast transmission, and reduces the cell loss rate and cell delay. Finally, a simulation modeling method for the scheme is given.

10.
Fuzzy-based rate control for real-time MPEG video
We propose a fuzzy logic-based control scheme for real-time Moving Picture Experts Group (MPEG) video to avoid long delay or excessive loss at the user-network interface (UNI) in an asynchronous transfer mode (ATM) network. The system consists of a shaper whose role is to smooth the MPEG output traffic and reduce the burstiness of the video stream. The input and output rates of the shaper buffer are controlled by two fuzzy logic-based controllers. To avoid a long delay at the shaper, the first controller tunes the output rate of the shaper on the video-frame time scale, based on the number of available transmission credits at the UNI and the occupancy of the shaper's buffer. Based on the average occupancy of the shaper's buffer and its variance, the second controller tunes the input rate to the shaper over a much larger time scale by applying a closed-loop MPEG encoding scheme. With this approach, the traffic enters the network at an almost constant bit rate (with very small variation), allowing simple network management functions such as admission control and bandwidth allocation, while guaranteeing relatively constant video quality, since the encoding rate is changed only in critical periods when the shaper buffer threatens to overflow. The performance of the proposed scheme is evaluated through numerical tests on real video sequences.
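
The paper's actual membership functions and rule base are not given in the abstract; the Python sketch below is a heavily simplified stand-in for the first controller, showing only the general shape of a Mamdani-style rate adjustment driven by buffer occupancy and available credits. All membership shapes, rules and rate factors are illustrative assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def shaper_output_rate(buffer_occupancy, credits, base_rate):
    """Tiny Mamdani-style controller: raise the shaper's output rate when the buffer
    is filling and credits are available, lower it when credits are scarce.
    Both inputs are assumed to be normalised to [0, 1]."""
    # Fuzzify the two inputs.
    buf_low  = tri(buffer_occupancy, -0.5, 0.0, 0.6)
    buf_high = tri(buffer_occupancy,  0.4, 1.0, 1.5)
    cr_low   = tri(credits,          -0.5, 0.0, 0.6)
    cr_high  = tri(credits,           0.4, 1.0, 1.5)
    # Rule strengths (min for AND) and their crisp rate factors.
    rules = [
        (min(buf_high, cr_high), 1.5),  # buffer filling, credits available -> speed up
        (min(buf_high, cr_low),  1.0),  # buffer filling, few credits       -> hold rate
        (min(buf_low,  cr_high), 1.0),  # buffer draining, credits available -> hold rate
        (min(buf_low,  cr_low),  0.5),  # buffer draining, few credits      -> slow down
    ]
    # Weighted-average defuzzification.
    total = sum(w for w, _ in rules)
    factor = sum(w * f for w, f in rules) / total if total else 1.0
    return base_rate * factor

print(shaper_output_rate(buffer_occupancy=0.8, credits=0.9, base_rate=4_000_000))
```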

11.
Bar-Noy, Freund, Landa, Naor. Algorithmica, 2008, 36(3):225-247
Consider the following problem. A switch connecting n input channels to a single output channel must deliver all incoming messages through this channel. Messages are composed of packets, and in each time slot the switch can deliver a single packet from one of the input queues to the output channel. In order to prevent packet loss, a buffer is maintained for each input channel. The goal of a switching policy is to minimize the maximum buffer size. The setting is on-line: decisions must be made based on the current state, without knowledge of future events. This general scenario models multiplexing tasks in various systems such as communication networks, cable modem systems, and traffic control. Traditionally, researchers analyzed the performance of a given policy assuming some distribution on the arrival rates of messages at the input queues, or assuming that the service rate is at least the aggregate of all the input rates. We use competitive analysis, avoiding any prior assumptions on the input. We show O(log n)-competitive switching policies for the problem and demonstrate matching lower bounds.
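
For intuition, the natural baseline policy in this model is to always serve the longest queue; the competitive policies analysed in the paper are more involved, so the Python sketch below is only a reference point, not the paper's algorithm.

```python
def serve_longest_queue(queues):
    """One time slot of the longest-queue-first baseline: transmit a packet from the
    currently longest input queue (ties broken by index); return the chosen queue."""
    if not any(queues):
        return None                     # nothing to send this slot
    i = max(range(len(queues)), key=lambda k: queues[k])
    queues[i] -= 1
    return i
```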

12.
13.
With the increase in Internet Protocol (IP) traffic, router performance has become an important issue in internetworking. In this paper we examine the matching algorithm in a gigabit router whose input queues use virtual output queueing. Dynamic queue scheduling is also proposed to reduce packet delay and packet loss probability. Port partitioning is employed to reduce the computational burden of the scheduler in a switch that matches input and output ports for fast packet switching. The ports are divided into two groups such that the matching algorithm runs within each pair of groups in parallel, and the matching is performed by exchanging the pair of groups at every time slot. Two algorithms are presented: maximal weight matching by port partitioning (MPP) and modified maximal weight matching by port partitioning (MMPP). In dynamic queue scheduling, a popup decision rule is applied to each delay-critical packet to reduce both the delay of delay-critical packets and the loss probability of loss-critical packets. Computational results show that MMPP has the lowest delay and requires the least buffer size. The throughput is shown to be linear in the packet arrival rate, which can be achieved under a highly efficient matching algorithm. The dynamic queue scheduling is shown to be highly effective when the occupancy of the input buffer is relatively high.

Scope and purpose: To cope with increasing Internet traffic, it is necessary to improve the performance of routers. To accelerate switching from input ports to output ports in the router, port partitioning and dynamic queueing are proposed. Input and output ports are partitioned into two groups, A/B and a/b, respectively. The matching for packet switching is performed between the group pairs (A, a) and (B, b) in parallel in one time slot, and between (A, b) and (B, a) in the next time slot, as sketched below. Dynamic queueing is proposed at each input port to reduce packet delay and packet loss probability by applying a popup decision rule to each delay-critical packet. The partitioning of ports is shown to be highly effective in terms of delay, required buffer size and throughput. The dynamic queueing also performs well when the traffic volume is high.
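
A minimal Python sketch of the port-partitioning idea follows, using a greedy approximation to maximal weight matching inside each group pair and alternating the pairing every slot. The greedy matching and all names are illustrative simplifications; the paper's MPP and MMPP algorithms differ in detail.

```python
def greedy_weight_match(voq, inputs, outputs):
    """Greedy weight matching inside one (input-group, output-group) pair:
    repeatedly pick the heaviest remaining VOQ (longest queue) among free ports."""
    edges = sorted(((voq[i][j], i, j) for i in inputs for j in outputs if voq[i][j] > 0),
                   reverse=True)
    used_in, used_out, match = set(), set(), []
    for w, i, j in edges:
        if i not in used_in and j not in used_out:
            match.append((i, j))
            used_in.add(i)
            used_out.add(j)
    return match

def port_partitioned_schedule(voq, time_slot):
    """One slot of a port-partitioned scheduler: the two group pairings are matched
    independently (conceptually in parallel), and the pairing alternates every slot."""
    n = len(voq)
    A, B = range(0, n // 2), range(n // 2, n)   # input groups
    a, b = range(0, n // 2), range(n // 2, n)   # output groups
    pairs = [(A, a), (B, b)] if time_slot % 2 == 0 else [(A, b), (B, a)]
    schedule = []
    for ins, outs in pairs:
        schedule += greedy_weight_match(voq, ins, outs)
    return schedule
```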

14.
We consider an ATM switch consisting of a number of output-buffered ATM switch elements operating in parallel. Such an architecture may lead to an uneven distribution of cells over the parallel switch element buffers if no modifications to the design are made. With this motivation, we propose a threshold-based load balancing method to distribute the cells evenly over the parallel switch element buffers, and present the details of how the method can be implemented. Our computer simulations show that the proposed load balancing method significantly reduces the cell loss probability. The study is carried out with two input traffic models: random traffic and bursty traffic.
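
The abstract does not spell out the selection rule; the Python sketch below shows one plausible threshold-based choice of switch element for an incoming cell, falling back to the least-occupied element when every buffer is above the threshold. The rule and names are assumptions for illustration.

```python
def pick_switch_element(occupancies, threshold):
    """Route an incoming cell to one of the parallel switch elements: prefer the
    first element whose buffer occupancy is below the threshold (cheap to decide);
    if all are above it, fall back to the least-occupied element."""
    for k, occ in enumerate(occupancies):
        if occ < threshold:
            return k
    return min(range(len(occupancies)), key=lambda k: occupancies[k])

# Example: four parallel elements, threshold of 32 cells.
print(pick_switch_element([40, 31, 45, 38], threshold=32))  # -> 1
```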

15.
To address the problem that existing combined input-crosspoint queued (CICQ) scheduling algorithms do not make full use of crosspoint buffer state information, a CICQ state-heap scheduling algorithm is proposed. The algorithm runs in a distributed fashion at every input port and output port of the CICQ fabric. Simulation results show that, under both uniform and nonuniform traffic models, a CICQ fabric using this algorithm achieves performance comparable to that of an output-queued fabric, with low delay.
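
The state-heap algorithm itself is not described in the abstract; the Python sketch below is a generic distributed CICQ scheduler that uses crosspoint-buffer occupancy on both the input and output sides, intended only to illustrate the kind of state information such schedulers exploit.

```python
def cicq_slot(voq, xb, xb_capacity):
    """One time slot of a generic distributed CICQ scheduler (a stand-in, not the
    state-heap algorithm).  voq[i][j]: cells at input i destined for output j;
    xb[i][j]: cells in the crosspoint buffer (i, j)."""
    n = len(voq)
    # Input side: each input independently forwards one cell to the crosspoint
    # buffer with the most free space (favouring longer VOQs on ties).
    for i in range(n):
        candidates = [j for j in range(n) if voq[i][j] > 0 and xb[i][j] < xb_capacity]
        if candidates:
            j = max(candidates, key=lambda j: (xb_capacity - xb[i][j], voq[i][j]))
            voq[i][j] -= 1
            xb[i][j] += 1
    # Output side: each output independently serves its fullest crosspoint buffer.
    departures = []
    for j in range(n):
        occupied = [i for i in range(n) if xb[i][j] > 0]
        if occupied:
            i = max(occupied, key=lambda i: xb[i][j])
            xb[i][j] -= 1
            departures.append((i, j))
    return departures
```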

16.
A fuzzy-control-based smoothing strategy for VBR video transmission over ATM networks
The burstiness of VBR video transmission is a key factor affecting quality of service in ATM networks. In this paper, a fuzzy control method is used to monitor the threshold of the video transmission flow, smoothing the traffic at the access point and allowing the transmission rate to be adjusted dynamically. The analysis uses transmission volume and burstiness level as evaluation metrics; the results demonstrate the effectiveness of the adaptive smoothing strategy for VBR video transmission.

17.
With the introduction of diverse rate requirements under a variety of statistical multiplexing schemes, the traffic burstiness behavior of a source stream and its quality-of-service (QoS) performance within ATM networks become difficult to model and analyze. In this paper, we address this issue and propose a rate-controlled service discipline that provides control of traffic burstiness while maintaining QoS guarantees for traffic flows with various rate requirements. According to our analysis, traffic streams from different connections can be well regulated at the output of each network node based on their rate requirements, so the traffic envelope and the associated burstiness behavior inside the network can be effectively characterized. In addition, by assuming a leaky-bucket constrained input source, we prove that the proposed scheme provides end-to-end delay and jitter bounds for each connection passing through a multi-hop network. Further, owing to the low traffic burstiness, only a small buffer space is required at the internal switches to guarantee the QoS requirements.

18.
Simulation of the iLQF scheduling algorithm and its parameters
This article presents an input-buffer model for an internally nonblocking ATM switch fabric and the iLQF algorithm (iterative longest queue first) for scheduling the buffered cells, and gives a concrete implementation scheme for the algorithm. Through simulation, the influence of the input buffer length and of the number of iLQF iterations on the cell loss rate is analyzed and determined.
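
iLQF follows the usual request-grant-accept pattern, iterated a small number of times. The Python sketch below is a straightforward rendering of that pattern over a matrix of VOQ lengths; it is a generic implementation, not the article's specific realization.

```python
def ilqf_match(voq, iterations=3):
    """Iterative longest-queue-first (iLQF) matching over virtual output queues.
    voq[i][j] is the number of cells queued at input i for output j."""
    n = len(voq)
    matched_in, matched_out, match = set(), set(), []
    for _ in range(iterations):
        # Request: every unmatched input implicitly requests the outputs it has cells for.
        # Grant:   every unmatched output grants the requesting input with the longest VOQ.
        grants = {}
        for j in range(n):
            if j in matched_out:
                continue
            requesters = [i for i in range(n)
                          if i not in matched_in and voq[i][j] > 0]
            if requesters:
                grants[j] = max(requesters, key=lambda i: voq[i][j])
        # Accept: every input that received grants accepts the output with the longest VOQ.
        by_input = {}
        for j, i in grants.items():
            by_input.setdefault(i, []).append(j)
        for i, outs in by_input.items():
            j = max(outs, key=lambda j: voq[i][j])
            match.append((i, j))
            matched_in.add(i)
            matched_out.add(j)
    return match
```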

19.
The simulation of high-speed telecommunication systems such as ATM (Asynchronous Transfer Mode) networks has generally required excessively long run times. This paper reviews alternative approaches that use parallelism to speed up simulations of discrete event systems, and of telecommunication networks in particular. Subsequently, a new simulation method is introduced for the fast parallel simulation of a common network element, namely a work-conserving finite-capacity statistical multiplexer of bursty ON/OFF sources arriving on input links of equal peak rate. The primary performance measure of interest is the cell loss ratio due to buffer overflows. The proposed method is based on two principal techniques: (1) the derivation of low-level (cell-level) statistics from a higher-level (burst-level) simulation, and (2) parallel execution of the burst-level simulation program. For the latter, a time-division parallel simulation method is used, in which simulations operating at different intervals of simulated time are executed concurrently on different processors. Both techniques contribute to the overall speedup. Furthermore, these techniques support simulations that are driven by traces of actual network traffic (trace-driven simulation), in addition to standard models for source traffic. An analysis of this technique is described, indicating that it offers excellent potential for delivering good performance. Measurements of an implementation running on a 32-processor KSR-2 multiprocessor demonstrate that, for certain model parameter settings, the simulator is able to simulate up to 10 billion cell arrivals per second of wallclock time.

20.
K. Begain. Calcolo, 1995, 32(3-4):137-152
The paper addresses the analysis of a single multiplexing node in ATM networks. It presents analytical models for evaluating the performance parameters of a multiplexer that has N independent and identical ON-OFF input sources, M independent constant bit rate (CBR) inputs, and an output channel with a finite buffer. The channel speed is assumed to be an integer multiple of the speed of a source in the ON state, which equals the speed of the CBR sources. A two-dimensional homogeneous discrete-time Markov chain is introduced, in which the two dimensions describe the number of ON sources and the number of cells in the finite buffer at a given time. Two time scales are defined in order to ensure accurate results when calculating the performance parameters, e.g. cell loss and cell delay. Three alternative models of the cell arrival process are discussed and the performance parameters are derived.
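
A single-time-scale simplification of such a model can be written down directly; the Python sketch below builds the (number of ON sources, buffer occupancy) Markov chain for assumed slot-level ON/OFF transition probabilities, solves it by power iteration, and reports the cell loss ratio. The paper's two-time-scale refinement and its alternative arrival models are not reproduced here, and all parameters are illustrative.

```python
import numpy as np
from math import comb

def onoff_multiplexer(N, M, alpha, beta, B, C, iters=3000):
    """Two-dimensional DTMC sketch of the multiplexer: state (m, q) = (# ON sources,
    cells in the buffer).  Per slot each ON source and each CBR source emits one cell,
    the channel serves C cells (service can clear cells arriving in the same slot),
    and the excess over buffer size B is lost.  alpha = P(OFF->ON), beta = P(ON->OFF)."""
    def next_on_dist(m):
        # Number of ON sources next slot = Binom(m, 1-beta) + Binom(N-m, alpha).
        stay = [comb(m, k) * (1 - beta) ** k * beta ** (m - k) for k in range(m + 1)]
        join = [comb(N - m, k) * alpha ** k * (1 - alpha) ** (N - m - k)
                for k in range(N - m + 1)]
        dist = np.zeros(N + 1)
        for a, pa in enumerate(stay):
            for b, pb in enumerate(join):
                dist[a + b] += pa * pb
        return dist

    states = [(m, q) for m in range(N + 1) for q in range(B + 1)]
    index = {s: k for k, s in enumerate(states)}
    P = np.zeros((len(states), len(states)))
    loss_per_state = np.zeros(len(states))
    arrivals_per_state = np.zeros(len(states))
    for (m, q), k in index.items():
        arrivals = m + M                        # cells offered in this slot
        accepted = min(arrivals, B + C - q)     # room in buffer plus this slot's service
        loss_per_state[k] = arrivals - accepted
        arrivals_per_state[k] = arrivals
        q_next = max(0, min(B, q + arrivals - C))
        for m_next, pm in enumerate(next_on_dist(m)):
            P[k, index[(m_next, q_next)]] += pm

    pi = np.full(len(states), 1.0 / len(states))  # power iteration for stationarity
    for _ in range(iters):
        pi = pi @ P
    return (pi @ loss_per_state) / (pi @ arrivals_per_state)

print("cell loss ratio:", onoff_multiplexer(N=8, M=2, alpha=0.2, beta=0.4, B=20, C=6))
```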
