Similar Documents (20 results)
1.
We apply recent results in queueing theory to propose a methodology for bounding the buffer depth and packet delay in high-radix interconnection networks. While most work on interconnection networks has focused on throughput and average latency, few studies provide statistical guarantees for buffer depth and packet delay. These parameters are key to the design and performance of a network. We present a methodology for calculating such bounds for a practical high-radix network and, through extensive simulations, show its effectiveness for both bursty and non-bursty injection traffic. Our results suggest that modest speedups and buffer depths enable reliable networks without flow control to be constructed.
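As a rough illustration of the kind of statistical buffer-depth bound discussed above (not the authors' methodology), the sketch below tracks the occupancy of a single queue under non-bursty and bursty injection with the same mean load and reads off a high-percentile depth; the load value, burst length, and 99.9th-percentile target are illustrative assumptions.

```python
import random

def tail_depth(arrival, service_rate=1.0, steps=200_000, quantile=0.999):
    """Track queue depth for a given arrival process (packets per slot) and
    return the depth exceeded only with probability ~(1 - quantile)."""
    q, samples = 0.0, []
    for _ in range(steps):
        q = max(0.0, q + arrival() - service_rate)
        samples.append(q)
    samples.sort()
    return samples[int(quantile * len(samples))]

random.seed(1)
load = 0.8  # offered load relative to the service rate

# Non-bursty: at most one packet per slot.
single = lambda: 1.0 if random.random() < load else 0.0
# Bursty: same mean load, but packets arrive in bursts of 8.
bursty = lambda: 8.0 if random.random() < load / 8 else 0.0

print("99.9th-percentile depth, non-bursty:", tail_depth(single))
print("99.9th-percentile depth, bursty   :", tail_depth(bursty))
```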

2.
Joseph Kee-Yin Ng, Shibin Song, Bihai Tang. Real-Time Systems, 2002, 23(3): 297–317.
ATM is a connection-oriented technology in which messages are divided into fixed-size packets called cells to facilitate communication. Before two hosts can communicate, a connection has to be established between them. Consider a real-time communication application running on top of an ATM network. In order to provide the real-time service, we require the network to provide a performance guarantee for the connection. There are two types of performance guarantees: deterministic and statistical. While a deterministic guarantee provides an absolute bound on the worst-case cell delay experienced in an ATM switch, a statistical guarantee provides a probabilistic bound on the worst-case cell delay. In this paper, we use a self-similar stochastic process to characterize the arrival of the real-time traffic. Extending our previous work on deterministic delay guarantees, we provide methods for determining the statistical bound on the worst-case cell delay in an ATM switch with various output port controllers. We conclude the paper with two case studies: one based on ordinary LAN traffic and one on the variable bit-rate MPEG video transmission of the movie Star Wars. To show the effectiveness of our statistical delay guarantee, we compare it with the delay bound derived by Cruz as well as the actual cell delay determined from the two traffic traces.

3.
In this paper, we investigate the performance of a nonblocking packet switch having input buffers and a limited amount of buffering within the switch fabric, where contention for the output ports occurs. A novel scheduling scheme based on head-of-line blocking is proposed, which improves the performance significantly. For uniform random traffic, a 16 × 16 switch achieves a throughput of 87.5%. We also study the performance of the switch modules under unbalanced and bursty traffic. Examination of the switch under two delay-priority classes reveals that the achievable throughput can be increased to 91%. To build a large switching system, a three-stage interconnection network is used, which meets the demands of large-scale ATM switch design, such as (1) modularity, (2) relaxed synchronization, (3) guaranteed high performance (i.e., high throughput and low delay variability) without requiring a large internal speed-up, and (4) maintaining packet sequence integrity.
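For context on why head-of-line blocking matters here, the sketch below simulates only the plain FIFO input-queued baseline, whose saturation throughput under uniform random traffic is known to approach roughly 58.6%; it does not implement the paper's proposed scheduling scheme. The switch size and the random grant policy are illustrative.

```python
import random

def hol_throughput(n=16, slots=50_000, seed=0):
    """Saturated FIFO input-queued switch: each input always has packets, and
    only the head-of-line (HOL) packet contends for its output port."""
    rng = random.Random(seed)
    heads = [rng.randrange(n) for _ in range(n)]  # destination of each HOL packet
    delivered = 0
    for _ in range(slots):
        # Each output grants one of the inputs currently requesting it.
        requests = {}
        for inp, out in enumerate(heads):
            requests.setdefault(out, []).append(inp)
        for out, inputs in requests.items():
            winner = rng.choice(inputs)
            delivered += 1
            heads[winner] = rng.randrange(n)  # next packet, uniform destination
    return delivered / (n * slots)

print("HOL-limited throughput:", hol_throughput())  # approaches ~0.586 for large n
```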

4.
A UDP Congestion Control Mechanism Supporting Minimum-Rate Guarantees
Although TCP traffic currently dominates overall Internet traffic, real-time multimedia traffic such as UDP-based audio and video keeps growing as network bandwidth is upgraded. Such real-time traffic generally requires a certain bandwidth guarantee while also needing a TCP-friendly end-to-end congestion control mechanism. Working from both the end host and the gateway queueing mechanism, this paper proposes a mechanism supporting the controlled-load service defined by the IETF: the end host is equipped with an adaptive, token-bucket-based, marking-aware rate adjustment mechanism, and the gateway applies an enhanced RED (Random Early Detection) queue management scheme that handles different traffic accordingly. Experiments on fairness and bandwidth utilization in the NS simulation environment demonstrate the effectiveness and feasibility of the mechanism.
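The abstract above combines an end-host token-bucket rate adjuster with an enhanced RED queue at the gateway. The sketch below is a minimal illustration of those two building blocks only: a token bucket that marks packets as in- or out-of-profile, and a RED-style drop test that treats marked packets more gently. The parameter values and the specific preferential treatment are assumptions, not the paper's design.

```python
import random, time

class TokenBucket:
    """Marks packets as in-profile while tokens (the reserved minimum rate)
    are available; out-of-profile packets are still sent but left unmarked."""
    def __init__(self, rate_bps, burst_bytes):
        self.rate, self.burst = rate_bps, burst_bytes
        self.tokens, self.last = burst_bytes, time.monotonic()

    def mark(self, pkt_bytes):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + self.rate / 8 * (now - self.last))
        self.last = now
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes
            return "IN"
        return "OUT"

def red_drop(avg_queue, min_th, max_th, max_p, in_profile):
    """RED-style early drop; marked (IN) packets get a gentler drop curve."""
    if in_profile:
        max_p /= 4          # illustrative preference for in-profile traffic
    if avg_queue < min_th:
        return False
    if avg_queue >= max_th:
        return True
    p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return random.random() < p

tb = TokenBucket(rate_bps=1_000_000, burst_bytes=10_000)
mark = tb.mark(1500)
print(mark, "dropped:", red_drop(avg_queue=40, min_th=20, max_th=80,
                                 max_p=0.1, in_profile=(mark == "IN")))
```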

5.
Yongning, Gee-Swee. Computer Communications, 2006, 29(18): 3833–3843.
Recently, a number of studies based on the concept of Route Interference have sought to provide deterministic end-to-end quality of service (QoS) guarantees. However, these studies tend to be confined to a simple scheduling scheme and to traffic in a single-class environment or the highest-priority traffic in a multi-class environment, which is rather restrictive. In this paper, we propose a new general service scheme for servicing flows, represented by a Latency-Rate Max–Min service curve (LRMMSC). For a network of LRMMSC servers, we prove the existence of tight bounds on the end-to-end queuing delay and on the buffer size needed for loss-free packet delivery, provided that all flows obey a given source rate condition expressed in terms of their route interference. Our approach has two salient features: (1) the general nature of the service-curve concept allows the service scheme to be implemented by many well-known scheduling disciplines, and (2) the general network model, which places no constraints on the manner of packet queuing, makes the results applicable to many complex networks. In addition, we derive a concise expression for the end-to-end delay bound that depends only on the service offered to the buffers containing the considered flow. This is very useful in practice, as the expression is simple and requires a minimal amount of input information. Simulation experiments are conducted to verify the LRMMSC model; the analytical and simulation results agree closely. The advantage of the LRMMSC scheme in providing a maximum end-to-end delay guarantee is also demonstrated.
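For intuition on latency-rate service curves, the standard single-node network-calculus bounds for a latency-rate server LR(Θ, ρ) fed by a (σ, ρ)-token-bucket constrained flow are shown below; this is a textbook illustration, not the LRMMSC end-to-end bound derived in the paper.

```latex
% Delay and backlog bounds for a single latency-rate server
% \mathcal{LR}(\Theta,\rho) with a (\sigma,\rho)-constrained input flow:
D \le \Theta + \frac{\sigma}{\rho},
\qquad
B \le \sigma + \rho\,\Theta .
```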

6.
Max-Min Fair Scheduling in Input-Queued Switches
Fairness in traffic management can improve the isolation between traffic streams, offer a more predictable performance, eliminate transient bottlenecks, mitigate the effect of certain kinds of denial-of-service attacks, and serve as a critical component of a quality-of-service strategy to achieve certain guaranteed services such as delay bounds and minimum bandwidths. In this paper, we choose a popular notion of fairness called max-min fairness and provide a rigorous definition in the context of input-queued switches. We show that being fair at the output ports alone or at the input ports alone or even at both input and output ports does not actually achieve an overall max-min fair allocation of bandwidth in a switch. Instead, we propose a new algorithm that can be readily implemented in a distributed fashion at the input and output ports to determine the exact max-min fair rate allocations for the flows through the switch. In addition to proving the correctness of the algorithm, we propose a practical scheduling strategy based on our algorithm. We present simulation results, using both real traffic traces and synthetic traffic, to evaluate the fairness of a variety of popular scheduling algorithms for input-queued switches. The results show that our scheduling strategy achieves better fairness than other known algorithms for input-queued switches and, in addition, achieves throughput performance very close to that of the best schedulers.
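The classical way to compute max-min fair rates is progressive filling over the bottleneck resources; the sketch below implements that textbook computation for arbitrary flows and capacitated resources, not the paper's distributed input/output-port algorithm.

```python
def max_min_fair(flows, capacity):
    """Progressive filling. flows: {flow_id: set_of_links}; capacity: {link: cap}.
    Returns the max-min fair rate of every flow."""
    rates = {}
    remaining = dict(capacity)
    active = dict(flows)
    while active:
        # Fair share each link could still give to its remaining flows.
        share = {l: remaining[l] / sum(1 for f in active.values() if l in f)
                 for l in remaining
                 if any(l in f for f in active.values())}
        bottleneck = min(share, key=share.get)
        fair_share = share[bottleneck]
        # Freeze every active flow crossing the bottleneck at the fair share.
        for fid in [f for f, links in active.items() if bottleneck in links]:
            rates[fid] = fair_share
            for l in active[fid]:
                remaining[l] -= fair_share
            del active[fid]
    return rates

# Two flows share link "A"; flow 2 also uses link "B".
print(max_min_fair({1: {"A"}, 2: {"A", "B"}}, {"A": 10, "B": 3}))
# -> {2: 3.0, 1: 7.0}  (flow 2 is bottlenecked on B, flow 1 gets the rest of A)
```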

7.
Optical Burst Switching (OBS) is a promising switching technology for the next generation of all-optical networks. An OBS network without wavelength converters and fiber delay lines can be implemented simply and cost-effectively using existing technology. However, this kind of network suffers from a relatively high burst loss probability at the OBS core nodes. To overcome this issue and consolidate OBS networks with QoS provisioning capabilities, we propose a wavelength partitioning approach, called the Optimization-based Topology-aware Wavelength Partitioning approach (OTWP). OTWP formulates the wavelength partitioning problem, based on the topology of the network, as an Integer Linear Programming (ILP) model and uses a tabu search algorithm (TS) to solve large instances efficiently. We use OTWP to develop an absolute QoS differentiation scheme, called the Absolute Fair Quality of service Differentiation scheme (AFQD). AFQD is the first absolute QoS provisioning scheme that guarantees loss-free transmission for high-priority traffic inside the OBS network, regardless of its topology. We also use OTWP to develop a wavelength assignment scheme, called the Best Effort Traffic Wavelength Assignment scheme (BETWA), which aims to reduce the loss probability for best-effort traffic. To make AFQD adaptive to non-uniform traffic, we develop a Wavelength Borrowing Protocol (WBP). Numerical results show the effectiveness of the proposed tabu search algorithm in solving large instances of the partitioning problem. Simulation results, using ns-2, show that: (a) AFQD provides excellent quality of service differentiation; (b) BETWA substantially decreases the loss probability of best-effort traffic to a remarkably low level for the OBS network under study; and (c) WBP makes AFQD adaptive to non-uniform traffic by efficiently reducing the blocking probability for high-priority traffic.

8.
Computer Networks, 2008, 52(5): 971–987.
Providing end-to-end delay guarantees for delay-sensitive applications is an important packet scheduling issue for routers. In this paper, to support end-to-end delay requirements, we propose a novel network scheduling scheme, called the bulk scheduling scheme (BSS), which is built on top of the existing schedulers of intermediate nodes without modifying the transmission protocols on either the sender or the receiver side. By periodically inserting special control packets, called TED (Traffic Specification with End-to-end Deadline) packets, into packet flows at the ingress router, the BSS schedulers of the intermediate nodes can dynamically allocate the necessary bandwidth to each flow to enforce the end-to-end delay, according to the information in the TED packets. The introduction of TED packets incurs less overhead than per-packet marking approaches. Three flow bandwidth estimation methods are presented and their performance properties analyzed. BSS also provides a dropping policy for discarding late packets and a feedback mechanism for discovering and resolving bottlenecks. Simulation results show that BSS performs efficiently as expected.
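One plausible way an intermediate BSS scheduler could turn TED information into a bandwidth reservation (an assumption for illustration, not necessarily one of the paper's three estimation methods) is to divide the traffic still to be delivered by the per-hop share of the time remaining before the end-to-end deadline:

```python
def required_rate(remaining_bytes, deadline_s, elapsed_s, hops_left, local_queue_bytes):
    """Bandwidth a node should reserve for a flow so its TED-advertised
    end-to-end deadline can still be met (illustrative estimator only)."""
    time_left = max(deadline_s - elapsed_s, 1e-6)
    # Assume the remaining time is shared equally by the remaining hops.
    local_budget = time_left / max(hops_left, 1)
    return (remaining_bytes + local_queue_bytes) / local_budget  # bytes/s

# A 60 kB burst, 40 ms left of a 100 ms deadline, 2 hops to go, 8 kB queued locally:
print(required_rate(60_000, 0.100, 0.060, 2, 8_000))  # -> 3,400,000 bytes/s
```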

9.
Analytical and empirical studies have shown that self-similar traffic can have a detrimental impact on network performance, including amplified queuing delay and packet loss ratio. On the flip side, the ubiquity of scale-invariant burstiness observed across diverse networking contexts can be exploited to better design resource control algorithms. In this paper, we explore the issue of exploiting the self-similar characteristics of network traffic in TCP congestion control. We show that the correlation structure present in long-range dependent traffic can be detected on-line and used to predict the future traffic. We then devise a novel scheme, called TCP with traffic prediction (TCP-TP), that exploits the prediction result to infer, in the context of AIMD steady-state dynamics, the optimal operational point at which a TCP connection should operate. Through analytical reasoning, we show that the impact of prediction errors on fairness is minimal. We also conduct ns-2 simulation and FreeBSD 4.1-based implementation studies to validate the design and to demonstrate the performance improvement in terms of packet loss ratio and throughput attained by connections.
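A minimal stand-in for the on-line traffic predictor in TCP-TP: fit a short autoregressive model to recent cross-traffic samples, predict the next sample, and size the congestion window to the bandwidth the prediction leaves free. The AR order, sample values, and window formula are illustrative assumptions, not the paper's predictor.

```python
import numpy as np

def ar_predict(samples, order=4):
    """Least-squares AR(order) fit to past rate samples, then one-step
    prediction (a stand-in for an on-line long-range-dependence predictor)."""
    x = np.asarray(samples, dtype=float)
    rows = np.array([x[t - order:t][::-1] for t in range(order, len(x))])
    coeffs, *_ = np.linalg.lstsq(rows, x[order:], rcond=None)
    return float(coeffs @ x[-1:-order - 1:-1])

def target_cwnd(link_capacity, predicted_cross_traffic, rtt, mss):
    """Operating point: use the bandwidth the predictor says will be left over."""
    spare = max(link_capacity - predicted_cross_traffic, 0.0)
    return max(spare * rtt / mss, 1.0)          # congestion window in segments

history = [4.1e6, 4.3e6, 4.0e6, 4.6e6, 4.4e6, 4.2e6, 4.5e6, 4.3e6]  # bytes/s
pred = ar_predict(history)
print("predicted cross traffic:", pred)
print("target cwnd:", target_cwnd(10e6, pred, rtt=0.05, mss=1460))
```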

10.
We investigate Leaky Bucket (LB) schemes with a threshold in the data buffer, where the leak rate changes depending on the contents of the data buffer. We use a Markov modulated Poisson process (MMPP) to model the bursty input traffic. We obtain the limiting distributions of the system state at an embedded point and at an arbitrary time. As performance measures, we obtain the cell loss probability and the mean cell delay. We present numerical results to show the effects of the threshold value, the rate of token generation, the size of the token pool and the size of the data buffer on the performance of the LB scheme with a threshold. The numerical examples show that the LB scheme with a threshold improves the system performance in comparison with the LB scheme without a threshold.
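A toy discrete-time version of the thresholded leaky bucket: cells drain at the nominal rate while the data buffer is below the threshold and at a higher rate above it. The on/off source used here is a simplification of the paper's MMPP input, and all parameter values are illustrative.

```python
import random

def leaky_bucket_with_threshold(steps=100_000, buf_size=200, threshold=80,
                                low_rate=1, high_rate=2, seed=0):
    """Cells drain at low_rate while the buffer is below the threshold and at
    high_rate above it; arrivals come from a 2-state on/off (bursty) source."""
    rng = random.Random(seed)
    buf, lost, delivered, on = 0, 0, 0, False
    for _ in range(steps):
        # On/off source: stays on with prob. 0.9, turns on with prob. 0.05.
        on = (rng.random() < 0.9) if on else (rng.random() < 0.05)
        arrivals = 3 if on else 0
        lost += max(0, buf + arrivals - buf_size)
        buf = min(buf_size, buf + arrivals)
        leak = high_rate if buf > threshold else low_rate
        served = min(buf, leak)
        buf -= served
        delivered += served
    return lost / max(lost + delivered, 1)

print("cell loss probability:", leaky_bucket_with_threshold())
```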

11.
The call types supported in high-speed packet networks vary widely in their bandwidth requirements and their tolerance to message delay and loss. In this paper, we classify the various traffic sources that are likely to be integrated in broadband ATM networks, and suggest schemes for bandwidth allocation and transmission scheduling that meet the quality and performance objectives. We propose ATM cell multiplexing using a Dynamic Time-Slice (DTS) scheme which guarantees a required bandwidth for each traffic class and/or virtual circuit (VC), and is dynamic in that it allows the different traffic classes or VCs to share the bandwidth with a soft boundary. Any bandwidth momentarily unused by a class or a VC is made available to the other traffic present in the multiplexer. The scheme guarantees a desired bandwidth to connections that require a fixed wide bandwidth, and thus facilitates setting up circuit-like connections in a network using the ATM protocol for transport. The DTS scheme is an efficient way of combining constant bit-rate (CBR) services with variable bit-rate (VBR) statistically multiplexed services. We also describe methodologies to schedule delivery of delay-tolerant data traffic within the framework of the DTS scheme. Important issues such as buffer allocation, guarantee of service quality, and ease of implementation are also discussed.
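A simplified slot-level sketch of DTS-style multiplexing: each class owns a number of cell slots per frame, and a slot its owner cannot fill is immediately offered to another backlogged class (the soft boundary). The frame structure and class names are assumptions, not the paper's exact mechanism.

```python
from collections import deque

def dts_frame(queues, slice_slots):
    """Serve one frame. queues: {cls: deque of cells}; slice_slots: {cls: slots
    guaranteed to cls per frame}. Unused slots go to other backlogged classes."""
    schedule = []
    for cls, slots in slice_slots.items():
        for _ in range(slots):
            if queues[cls]:
                schedule.append((cls, queues[cls].popleft()))
            else:
                # Soft boundary: give the idle slot to any other backlogged class.
                donor = next((c for c in queues if queues[c]), None)
                if donor is not None:
                    schedule.append((donor, queues[donor].popleft()))
    return schedule

queues = {"CBR": deque(["c1", "c2"]), "VBR": deque(["v1", "v2", "v3", "v4"]),
          "data": deque(["d1"])}
print(dts_frame(queues, {"CBR": 3, "VBR": 2, "data": 1}))
# CBR cannot fill its third slot, so VBR borrows it within the same frame.
```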

12.
A multistage switch connection approach to improve the performance of large local computer networks is devised. The single large LAN is partitioned into smaller networks, and these networks are interconnected by a Delta multistage butterfly switch. The performance of the single large network and of the proposed partitioned network has been modeled and simulated using SIMSCRIPT II.5. The scheme has been applied to the token bus topology as a case study. It is shown that the throughput of the proposed partitioned network is increased and that there is a significant decrease in the average packet delay compared to the single large network.

13.
This paper analyzes the performance of the ADRR integrated data/voice communication protocol. The results show that the protocol makes full use of the channel and can support a considerable number of voice users while keeping packet transmission delay very small, while still providing sufficient bandwidth for data traffic; its performance is superior to that of CSMA/CD and other virtual-token DAMA methods.

14.
In this paper, we propose a framework for real-time multimedia transmission in asynchronous transfer mode (ATM) networks using an efficient traffic scheduling scheme called multilayer gated frame queueing (MGFQ). MGFQ employs only one set of FIFO queues to provide a wide range of QoS for real-time applications. We also propose special cell formats for real-time multimedia transport and a hybrid design that allows MGFQ to combine its scheduling scheme with an Age Priority Packet Discarding scheme. With this hybrid design, the cell-level performance as well as the packet-level QoS can be improved at the same time. Simulation results show that the hybrid design is useful for packetized voice and progressive layer-compressed video transmission across backbone networks. With the presented framework and the MGFQ algorithm, real-time multimedia traffic streams can be much better supported in terms of cell/packet delay and jitter.

15.
Providing performance guarantees for arriving traffic flows has become an important measure of today's routing and switching systems. However, none of the current scheduling algorithms built on CICQ (combined input- and cross-point-buffered) switches can provide flow-level performance guarantees. Aiming to meet this requirement, we first discuss the feasibility of implementing flow-level scheduling and then, based on this discussion, propose a hierarchical, hybrid and stratified fair scheduling (HSFS) scheme for CICQ switches. With HSFS, each input port and output port can schedule variable-length packets independently with a complexity of O(1). Theoretical analysis shows that HSFS can provide delay-bound, service-rate and fairness guarantees without speedup. Finally, we implement HSFS in SPES (switch performance evaluation system) to verify the analytical results.

16.
Providing performance guarantees for arriving traffic is an important criterion for evaluating a switching system. Since existing scheduling policies for combined input- and cross-point-queued (CICQ) switch fabrics lack flow-based quality-of-service guarantees, this paper examines the feasibility of per-flow scheduling in CICQ switches and proposes a hierarchical hybrid scheduling policy (HSFS) that offers fair service to arriving flows. HSFS uses a hierarchical hybrid scheduling mechanism in which each input and output port can independently switch variable-length packets with O(1) complexity, giving good scalability. Theoretical analysis shows that HSFS provides delay bounds, rate guarantees, and fairness for arriving traffic without speedup. Finally, the performance of HSFS is evaluated using SPES.

17.
A bus-structured local area communications network is described, based on a unidirectional bus system over which packets are broadcast. Stations are connected to the communications channel by means of three passive taps. The multiple-access protocol is an extension of the register-insertion scheme used for loop networks; it is extremely efficient and guarantees that the packet access delay is less than a known maximum. Message-based priority functions can be introduced with minimal overhead. The channel-access protocol allows an efficient integration of real-time traffic, such as packetized voice, with bursty data traffic. Simulation results quantitatively demonstrate the protocol's performance.

18.
Networks-on-chip (NoCs) interconnect the components located inside a chip. In multicore chips, NoCs have a strong impact on overall system performance. NoC bandwidth is limited by the critical-path delay, and recent works show that the critical-path delay is heavily affected by switch port buffer size; therefore, by removing buffers, the switch clock frequency can be increased. Recently, a new switching technique for NoCs called Blind Packet Switching (BPS) has been proposed, which is based on removing the switch port buffers. Since buffers consume a high percentage of switch power and area, BPS not only improves performance but also reduces power and area. In BPS, as there are no buffers at the switch ports, packets cannot be stopped and stored in them. If contention arises, packets are dropped and later reinjected, which negatively affects performance. To prevent packet dropping, some techniques based on resource replication have been proposed. In this paper, we propose some alternative and complementary techniques that do not rely on resource replication. By using them, packet dropping is greatly reduced; in particular, it is completely removed over a very wide range of network traffic. Moreover, network throughput is increased and packet latency is reduced.

19.
Dynamic channel assignment algorithms allow wireless nodes to switch channels when their traffic loads exceed certain thresholds. These thresholds represent estimates of the nodes' throughput capacities. Unfortunately, the threshold estimation may not be accurate due to co-channel interference (CCI) and adjacent-channel interference (ACI), especially under high traffic loads in dense networks. When the link capacity is over-estimated, these channel assignment algorithms are not effective: the channel switch is not triggered even with overloaded data traffic, and the link quality decreases significantly as the channel becomes overloaded. When the link capacity is under-estimated, the link is under-utilized; moreover, when the link traffic load increases from time to time, channel switches occur frequently. Such frequent channel switches increase latency, degrade throughput, and can even cause network-wide channel oscillations. In this paper, we propose a novel threshold-based control system, called the balanced control system (BCS). The proposed threshold-based control policy decides, according to the real-time traffic load and interference, whether to switch to another channel, which channel to switch to, and how to perform the switch. Our control model is based on fuzzy logic control; the threshold that drives the channel-switch decisions is deduced dynamically from the real-time traffic of each node. We also design a novel dynamic channel assignment scheme for selecting the new channel, and a channel switch scheduler that performs channel-switch processing for sender and receiver over enhanced routing protocols. We implement our system in NS2, and the simulation results show that with the proposed system throughput improves by 12.3%–72.8% and latency is reduced by 23.2%–52.3%.
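BCS derives its channel-switch decision from a fuzzy logic controller over real-time load and interference. The sketch below is a generic two-input controller with triangular memberships and weighted-average (Sugeno-style) defuzzification; the rule base, membership shapes, and the 0.5 decision level are illustrative assumptions, not the ones used by BCS.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def switch_urgency(load, interference):
    """Weighted-average fuzzy inference: inputs in [0, 1], output in [0, 1]."""
    load_hi = tri(load, 0.5, 1.0, 1.5)
    load_lo = tri(load, -0.5, 0.0, 0.5)
    intf_hi = tri(interference, 0.5, 1.0, 1.5)
    intf_lo = tri(interference, -0.5, 0.0, 0.5)
    # Rules: (firing strength, output level of the consequent)
    rules = [(min(load_hi, intf_hi), 0.9),   # heavy load, heavy interference -> switch
             (min(load_hi, intf_lo), 0.5),
             (min(load_lo, intf_hi), 0.4),
             (min(load_lo, intf_lo), 0.1)]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(switch_urgency(load=0.8, interference=0.7))  # > 0.5 suggests switching
```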

20.
We propose a novel policy for scheduling upstream flows in Ethernet passive optical networks. This policy, called proportional sharing with load reservation (PSLR), provides bandwidth guarantees on a per-flow basis and redistributes the unused bandwidth among active flows in proportion to their priority level. We establish convergence conditions for the PSLR policy and show that it provides a fair service distribution among the flows. Moreover, we establish bounds for the backlog and delay on a per-flow basis, thus enabling a network to provide its users with absolute performance guarantees.
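The core of PSLR, as described above, is to honor each flow's reservation and then redistribute whatever bandwidth goes unused in proportion to flow priority. A minimal per-cycle allocation sketch follows; the flow structure and field names are illustrative assumptions, not the paper's formulation.

```python
def pslr_allocate(cycle_capacity, flows):
    """flows: {fid: (reserved, demand, priority)} for the active flows.
    Each flow first gets min(reserved, demand); leftover capacity is shared
    among still-unsatisfied flows in proportion to their priority."""
    alloc = {fid: min(res, dem) for fid, (res, dem, _) in flows.items()}
    leftover = cycle_capacity - sum(alloc.values())
    while leftover > 1e-9:
        hungry = {fid: f for fid, f in flows.items() if alloc[fid] < f[1]}
        if not hungry:
            break
        total_prio = sum(prio for _, _, prio in hungry.values())
        consumed = 0.0
        for fid, (res, dem, prio) in hungry.items():
            extra = min(leftover * prio / total_prio, dem - alloc[fid])
            alloc[fid] += extra
            consumed += extra
        leftover -= consumed
        if consumed == 0.0:
            break
    return alloc

# Flow 1 under-uses its reservation; flows 2 and 3 split the slack 2:1 by priority.
print(pslr_allocate(100, {1: (40, 10, 1), 2: (30, 80, 2), 3: (30, 80, 1)}))
# -> {1: 10, 2: 50.0, 3: 40.0}
```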

