Similar Literature (20 results)
1.
Service prioritization among different traffic classes is an important goal for the Internet. Conventional approaches to solving this problem consider the existing best-effort class as the low-priority class, and attempt to develop mechanisms that provide "better-than-best-effort" service. In this paper, we explore the opposite approach, and devise a new distributed algorithm to realize a low-priority service (as compared to the existing best effort) from the network endpoints. To this end, we develop TCP Low Priority (TCP-LP), a distributed algorithm whose goal is to utilize only the excess network bandwidth as compared to the "fair share" of bandwidth as targeted by TCP. The key mechanisms unique to TCP-LP congestion control are the use of one-way packet delays for early congestion indications and a TCP-transparent congestion avoidance policy. The results of our simulation and Internet experiments show that: 1) TCP-LP is largely non-intrusive to TCP traffic; 2) both single and aggregate TCP-LP flows are able to successfully utilize excess network bandwidth; moreover, multiple TCP-LP flows share excess bandwidth fairly; 3) substantial amounts of excess bandwidth are available to the low-priority class, even in the presence of "greedy" TCP flows; 4) the response times of web connections in the best-effort class decrease by up to 90% when long-lived bulk data transfers use TCP-LP rather than TCP; 5) despite their low-priority nature, TCP-LP flows are able to utilize significant amounts of available bandwidth in a wide-area network environment.
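The abstract describes inferring early congestion from one-way packet delays. The sketch below illustrates one plausible form of that check, assuming (as in the published TCP-LP design) a threshold placed between the smallest and largest observed one-way delays; the smoothing weight and threshold factor are illustrative values, not taken from this paper.

```python
class OneWayDelayDetector:
    """Illustrative early-congestion check in the spirit of TCP-LP: congestion
    is inferred when the smoothed one-way delay crosses a threshold placed
    between the minimum and maximum delays seen so far."""

    def __init__(self, gamma=0.15, alpha=1.0 / 8):
        self.gamma = gamma          # threshold position between d_min and d_max (assumed)
        self.alpha = alpha          # EWMA smoothing weight (assumed)
        self.d_min = float("inf")   # smallest one-way delay observed
        self.d_max = 0.0            # largest one-way delay observed
        self.d_smooth = None        # smoothed one-way delay

    def on_packet(self, one_way_delay):
        """Update state with a new one-way delay sample (seconds);
        return True if early congestion is indicated."""
        self.d_min = min(self.d_min, one_way_delay)
        self.d_max = max(self.d_max, one_way_delay)
        if self.d_smooth is None:
            self.d_smooth = one_way_delay
        else:
            self.d_smooth = (1 - self.alpha) * self.d_smooth + self.alpha * one_way_delay
        threshold = self.d_min + self.gamma * (self.d_max - self.d_min)
        return self.d_smooth > threshold
```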

2.
Increased performance, fairness, and security remain important goals for service providers. In this work, we design an integrated distributed monitoring, traffic conditioning, and flow control system for higher performance and security of network domains. Edge routers monitor (using tomography techniques) a network domain to detect quality of service (QoS) violations (possibly caused by underprovisioning) as well as bandwidth theft attacks. To bound the monitoring overhead, a router only verifies service level agreement (SLA) parameters such as delay, loss, and throughput when anomalies are detected. The marking component of the edge router uses TCP flow characteristics to protect ‘fragile’ flows. Edge routers may also regulate unresponsive flows, and may propagate congestion information to upstream domains. Simulation results indicate that this design increases application-level throughput of data applications such as large FTP transfers; achieves low packet delays and response times for Telnet and WWW traffic; and detects bandwidth theft attacks and service violations. Copyright © 2004 John Wiley & Sons, Ltd.

3.
Existing fair-queuing algorithms use complicated flow management mechanisms, thus making them expensive to deploy in current high-bandwidth networks. In this paper we propose a scalable SCORE (stateless core) approach to provide fair bandwidth sharing for a traffic environment composed of TCP and UDP flows. At an edge router, the arrival rate of each flow is estimated, and each packet is then labelled with this estimate. The outgoing link’s fair share at a router is estimated based on UDP traffic. Probabilistic dropping is used to regulate those flows that send more than the fair share. At a core router, all the functions performed by an edge router are repeated, excluding the flow rate estimation. The simulation results show that the degree of fairness achieved by the proposed solution is comparable to that of other algorithms, but with a lower implementation cost.
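A minimal sketch of the two steps described above: an edge-side exponential rate estimate attached to packets as a label, and a core-side probabilistic drop that compares the label against the fair share. Function names and the constant K are hypothetical, and the fair-share estimation itself (which the paper bases on UDP traffic) is not shown.

```python
import math
import random

def update_rate_estimate(prev_rate, pkt_len, inter_arrival, k=0.1):
    """Edge-router exponential rate estimate for a flow, in the style of
    core-stateless fair queueing: r_new = (1 - e^(-T/K)) * (L/T) + e^(-T/K) * r_old."""
    w = math.exp(-inter_arrival / k)
    return (1 - w) * (pkt_len / inter_arrival) + w * prev_rate

def core_drop_decision(labeled_rate, fair_share):
    """Drop probabilistically so a flow's accepted rate does not exceed the
    fair share: p_drop = max(0, 1 - fair_share / labeled_rate)."""
    if labeled_rate <= fair_share:
        return False                       # flow is within its fair share
    p_drop = 1.0 - fair_share / labeled_rate
    return random.random() < p_drop
```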

4.
A congestion-aware single-rate three-color marking algorithm for differentiated services
The realization of the Assured Service depends on the packet marking policy executed at edge routers and the queue management policy executed at core routers. Because of their congestion-adaptive behavior, TCP flows are sensitive to packet loss, and network congestion strongly affects their throughput. We therefore design a congestion-aware single-rate three-color marking algorithm, CASR3CM. Simulation results show that the algorithm not only raises the average throughput of AS TCP flows but also makes that throughput more stable; it also improves the fairness of bandwidth sharing among AS TCP flows.
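CASR3CM is described as a congestion-aware variant of single-rate three-color marking. The sketch below shows only the standard single-rate three-color marker (two token buckets refilled at the committed rate, as in RFC 2697) that such a variant would extend; it is not the CASR3CM algorithm itself, and the congestion-aware adaptation is omitted.

```python
import time

class SingleRateThreeColorMarker:
    """Baseline srTCM (color-blind mode): a committed bucket of size CBS and an
    excess bucket of size EBS, both refilled at the committed information rate CIR."""

    def __init__(self, cir_bps, cbs_bytes, ebs_bytes):
        self.cir = cir_bps / 8.0      # committed rate in bytes per second
        self.cbs = cbs_bytes
        self.ebs = ebs_bytes
        self.tc = cbs_bytes           # committed token count
        self.te = ebs_bytes           # excess token count
        self.last = time.monotonic()

    def mark(self, pkt_bytes):
        now = time.monotonic()
        tokens = (now - self.last) * self.cir
        self.last = now
        # Refill the committed bucket first; overflow spills into the excess bucket.
        new_tc = self.tc + tokens
        overflow = max(0.0, new_tc - self.cbs)
        self.tc = min(self.cbs, new_tc)
        self.te = min(self.ebs, self.te + overflow)
        if self.tc >= pkt_bytes:
            self.tc -= pkt_bytes
            return "green"
        if self.te >= pkt_bytes:
            self.te -= pkt_bytes
            return "yellow"
        return "red"
```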

5.
Explicit allocation of best-effort packet delivery service
This paper presents the “allocated-capacity” framework for providing different levels of best-effort service in times of network congestion. The framework, a set of extensions to the Internet protocols and algorithms, can allocate bandwidth to different users in a controlled and predictable way during network congestion. The framework supports two complementary ways of controlling the bandwidth allocation: sender-based and receiver-based. In today's heterogeneous and commercial Internet the framework can serve as a basis for charging for usage and for more efficiently utilizing the network resources. We focus on algorithms for essential components of the framework: a differential dropping algorithm for network routers and a tagging algorithm for profile meters at the edge of the network for bulk-data transfers. We present simulation results to illustrate the effectiveness of the combined algorithms in controlling transmission control protocol (TCP) traffic to achieve certain targeted sending rates.
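The differential dropping component described above treats tagged in-profile and out-of-profile packets differently when the router is congested. The sketch below is in the spirit of RED with In/Out (RIO); the threshold values are illustrative assumptions, not parameters from the paper.

```python
import random

def red_drop_probability(avg_queue, min_th, max_th, max_p):
    """Linear RED-style drop probability between min_th and max_th."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

def should_drop(packet_tag, avg_in_queue, avg_total_queue):
    """Differential dropping sketch: 'in' packets are judged against the average
    queue of in-packets only with gentle thresholds, 'out' packets against the
    total average queue with aggressive thresholds (all values assumed)."""
    if packet_tag == "in":
        p = red_drop_probability(avg_in_queue, min_th=40, max_th=70, max_p=0.02)
    else:
        p = red_drop_probability(avg_total_queue, min_th=10, max_th=30, max_p=0.5)
    return random.random() < p
```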

6.
In IEEE 802.16 networks, a bandwidth request-grant mechanism is used to accommodate various QoS requirements of heterogeneous traffic. However, it may not be effective for TCP flows since (a) there is no strict QoS requirement in TCP traffic; and (b) it is difficult to estimate the amount of required bandwidth due to dynamic changes of the sending rate. In this letter, we propose a new uplink scheduling scheme for best-effort TCP traffic in IEEE 802.16 networks. The proposed scheme does not need any bandwidth request process for allocation. Instead, it estimates the amount of bandwidth required for a flow based on its current sending rate. Through simulation, we show that the proposed scheme is effective in allocating bandwidth for TCP flows.
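The scheme estimates a flow's bandwidth need from its observed sending rate rather than from explicit bandwidth requests. A simple exponentially weighted estimator of that kind is sketched below; the smoothing weight and headroom factor are assumptions for illustration, not values from the letter.

```python
def estimate_uplink_grant(prev_estimate, bytes_in_frame, frame_duration,
                          weight=0.25, headroom=1.2):
    """Estimate the bandwidth to grant a best-effort TCP flow in the next frame
    from its observed sending rate, smoothed with an EWMA and padded with a
    small headroom so the TCP congestion window can keep growing."""
    observed_rate = bytes_in_frame / frame_duration        # bytes per second
    smoothed = (1 - weight) * prev_estimate + weight * observed_rate
    return smoothed * headroom
```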

7.
In addition to unresponsive UDP traffic, aggressive TCP flows pose a serious challenge to congestion control and stability of the future Internet. This paper considers the problem of dealing with such unresponsive TCP sessions that can be considered to collectively constitute a Denial-of-Service (DoS) attack on conforming TCP sessions. The proposed policing scheme, called HaDQ (HaTCh-based Dynamic Quarantine), is based on a recently proposed HaTCh mechanism, which accurately estimates the number of active flows without maintenance of per-flow states in a router. We augment HaTCh with a small Content Addressable Memory (CAM), called quarantine memory, to dynamically quarantine and penalize the unresponsive TCP flows. We exploit the advantage of the smaller, first-level cache of HaTCh for isolating and detecting the aggressive flows. The aggressive flows from the smaller cache are then moved to the quarantine memory and are precisely monitored for taking appropriate punitive action. While the proposed HaDQ technique is quite generic in that it can work with or without any AQM scheme, in this paper we have integrated HaDQ and an AQM scheme to compare it against some of the existing techniques. For this, we extend the HaTCh scheme to develop a complete AQM mechanism, called HRED. Simulation-based performance analysis indicates that by using a proper configuration of the monitoring period and the detection threshold, the proposed HaDQ scheme can achieve a low false drop rate (false positives) of less than 0.1%. Comparison with two AQM schemes (CHOKe and FRED), which were proposed for handling unresponsive UDP flows, shows that HaDQ is more effective in penalizing the bandwidth attackers and enforcing fairness between conforming and aggressive TCP flows.

8.
Classical Transmission Control Protocol (TCP) designs have never considered the identity of the competing transport protocol as useful information to TCP sources in congestion control mechanisms. When competing against a TCP flow on a bottleneck link, a User Datagram Protocol (UDP) flow can unfairly occupy the entire link bandwidth and suffocate all TCP flows on the link. If it were possible for a TCP source to know the type of transport protocol that deprives it of link access, perhaps it would be better for the TCP source to react in a way which prevents total starvation. In this paper, we use coefficient of variation and power spectral density of throughput traces to identify the presence of UDP transport protocols that compete against TCP flows on bottleneck links. Our results show clear traits that differentiate the presence of competing UDP flows from TCP flows independent of round-trip time variations. Signatures that we identified include an increase in coefficient of variation whenever a competing UDP flow joins the bottleneck link for the first time, noisy spectral density representation of a TCP flow when competing against a UDP flow in the bottleneck link, and a dominant frequency with outstanding power in the presence of TCP competition only. In addition, the results show that signatures for congestion caused by competing UDP flows are different from signatures due to congestion caused by competing TCP flows regardless of their round-trip times. The results in this paper present the first steps towards development of more ‘intelligent’ congestion control algorithms with added capability of knowing the identity of aggressor protocols against TCP, and subsequently using this additional information for rate control.
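The two signals used above, coefficient of variation and power spectral density of a throughput trace, are straightforward to compute. The sketch below uses NumPy/SciPy as an assumed toolchain and an assumed sampling interval; the decision thresholds a detector would apply to these signatures are not shown.

```python
import numpy as np
from scipy.signal import periodogram

def throughput_signatures(throughput_trace, sample_interval_s=0.1):
    """Return the coefficient of variation and the dominant spectral frequency
    of a throughput trace (samples in bytes/s taken every sample_interval_s)."""
    x = np.asarray(throughput_trace, dtype=float)
    cov = x.std() / x.mean()                          # coefficient of variation
    freqs, psd = periodogram(x, fs=1.0 / sample_interval_s)
    dominant = freqs[np.argmax(psd[1:]) + 1]          # skip the DC component
    return cov, dominant
```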

9.
A survey on TCP-friendly congestion control
Widmer J., Denda R., Mauve M. IEEE Network, 2001, 15(3): 28-37.
New trends in communication, in particular the deployment of multicast and real-time audio/video streaming applications, are likely to increase the percentage of non-TCP traffic in the Internet. These applications rarely perform congestion control in a TCP-friendly manner; they do not share the available bandwidth fairly with applications built on TCP, such as Web browsers, FTP, or e-mail clients. The Internet community strongly fears that the current evolution could lead to congestion collapse and starvation of TCP traffic. For this reason, TCP-friendly protocols are being developed that behave fairly with respect to coexistent TCP flows. We present a survey of current approaches to TCP friendliness and discuss their characteristics. Both unicast and multicast congestion control protocols are examined, and an evaluation of the different approaches is presented.
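TCP friendliness in surveys of this kind is usually anchored to the simple TCP throughput model. The block below gives the commonly cited square-root formula (segment size s, round-trip time RTT, loss rate p); it is included here for reference and is not quoted from the survey itself.

```latex
% Commonly cited "square-root" model of TCP throughput:
% a flow is TCP-friendly if its sending rate does not exceed this bound.
T_{\mathrm{TCP}} \approx \frac{s}{RTT}\,\sqrt{\frac{3}{2p}}
```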

10.
IEEE Network, 2002, 16(5): 38-46.
Today, the dominant paradigm for congestion control in the Internet is based on the notion of TCP friendliness. To be TCP-friendly, a source must behave in such a way as to achieve a bandwidth that is similar to the bandwidth obtained by a TCP flow that would observe the same round-trip time (RTT) and the same loss rate. However, with the success of the Internet comes the deployment of an increasing number of applications that do not use TCP as a transport protocol. These applications can often improve their own performance by not being TCP-friendly, which severely penalizes TCP flows. To design new applications to be TCP-friendly is often a difficult task. The idea of the fair queuing (FQ) paradigm as a means to improve congestion control was first introduced by Keshav (1991). While Keshav made a fundamental step toward a new paradigm for the design of congestion control protocols, he did not formalize his results so that his findings could be extended for the design of new congestion control protocols. We make this step and formally define the FQ paradigm as a paradigm for the design of new end-to-end congestion control protocols. This paradigm relies on per-flow FQ scheduling and longest queue drop buffer management in each router. We assume only selfish and noncollaborative end users. Our main contribution is the formal statement of the congestion control problem as a whole, which enables us to demonstrate the validity of the FQ paradigm. We also demonstrate that the FQ paradigm does not adversely impact the throughput of TCP flows and explain how to apply the FQ paradigm for the design of new congestion control protocols. As a pragmatic validation of the FQ paradigm, we discuss a new multicast congestion control protocol called packet pair receiver-driven layered multicast (PLM).
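The FQ paradigm above pairs per-flow scheduling with longest-queue-drop buffer management. The sketch below shows only the buffer-management half, under the assumption of per-flow queues sharing one buffer pool; the data structures and the drop-from-tail choice are illustrative.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Packet:
    flow_id: int
    size: int

class LongestQueueDropBuffer:
    """Per-flow queues sharing one buffer pool; on overflow, drop from the flow
    that currently occupies the most buffer space (possibly the arriving flow)."""

    def __init__(self, buffer_size):
        self.buffer_size = buffer_size
        self.used = 0
        self.queues = {}                       # flow_id -> deque of packets

    def enqueue(self, pkt: Packet):
        q = self.queues.setdefault(pkt.flow_id, deque())
        q.append(pkt)
        self.used += pkt.size
        while self.used > self.buffer_size:
            # Longest queue measured in buffered bytes.
            longest = max(self.queues, key=lambda f: sum(p.size for p in self.queues[f]))
            victim = self.queues[longest].pop()    # drop from the tail of the longest queue
            self.used -= victim.size
```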

11.
This paper reports the findings of a simulation study of the queueing behavior of “best-effort” traffic in the presence of constant bit-rate and variable bit-rate isochronous traffic. In this study, best-effort traffic refers to ATM cells that support communications between host end systems executing various applications and exchanging information using TCP/IP. The performance measures considered are TCP cell loss, TCP packet loss, mean cell queueing delay, and mean cell queue length. Our simulation results show that, under certain conditions, best-effort TCP traffic may experience as much as 2% cell loss. Our results also show that the probability of cell and packet loss decreases logarithmically with increased buffer size.

12.
TCP-Jersey for wireless IP communications
Improving the performance of the transmission control protocol (TCP) in wireless Internet protocol (IP) communications has been an active research area. The performance degradation of TCP in wireless and wired-wireless hybrid networks is mainly due to its inability to differentiate the packet losses caused by network congestion from the losses caused by wireless link errors. In this paper, we propose a new TCP scheme, called TCP-Jersey, which is capable of distinguishing the wireless packet losses from the congestion packet losses, and reacting accordingly. TCP-Jersey consists of two key components, the available bandwidth estimation (ABE) algorithm and the congestion warning (CW) router configuration. ABE is a TCP sender side addition that continuously estimates the bandwidth available to the connection and guides the sender to adjust its transmission rate when the network becomes congested. CW is a configuration of network routers such that routers alert end stations by marking all packets when there is a sign of an incipient congestion. The marking of packets by the CW configured routers helps the sender of the TCP connection to effectively differentiate packet losses caused by network congestion from those caused by wireless link errors. This paper describes the design of TCP-Jersey, and presents results from experiments using the NS-2 network simulator. Results from simulations show that in a congestion-free network with 1% of random wireless packet loss rate, TCP-Jersey achieves 17% and 85% improvements in goodput over TCP-Westwood and TCP-Reno, respectively; in a congested network where a TCP flow competes with VoIP flows, with 1% of random wireless packet loss rate, TCP-Jersey achieves 9% and 76% improvements in goodput over TCP-Westwood and TCP-Reno, respectively. Our experiments with multiple TCP flows show that TCP-Jersey maintains the fair and friendly behavior with respect to other TCP flows.
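The ABE component maintains a running estimate of the bandwidth available to the connection from the arrival pattern of ACKs. The sketch below shows an ACK-clocked, RTT-weighted estimator of that general form; the exact filter used by TCP-Jersey may differ, and the names are illustrative.

```python
class AckBandwidthEstimator:
    """Estimate available bandwidth from ACK arrivals: each ACK contributes the
    bytes it acknowledges, smoothed over roughly one RTT of history."""

    def __init__(self):
        self.rate = 0.0            # current estimate, bytes per second
        self.last_ack_time = None

    def on_ack(self, now, acked_bytes, rtt):
        if self.last_ack_time is None:
            self.last_ack_time = now
            return self.rate
        interval = now - self.last_ack_time
        # RTT-weighted time-sliding-window update (assumed form):
        # r_new = (rtt * r_old + acked_bytes) / (interval + rtt)
        self.rate = (rtt * self.rate + acked_bytes) / (interval + rtt)
        self.last_ack_time = now
        return self.rate
```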

13.
The Internet is facing a twofold challenge: to increase network capacity in order to accommodate a steadily increasing number of users; to guarantee the quality of service for existing applications and for new multimedia applications requiring real-time network response. In order to meet these requirements, IETF is currently defining the differentiated service (DiffServ) architecture, which should offer a simple and scalable platform to guarantee differentiated QoS in the Internet. In the DiffServ domain, the assured forwarding service is designed to provide data applications with acceptable performance, overcoming the limits of the Internet's current best-effort service. Since data applications mostly rely on the TCP transport protocol, it is important to examine the interaction between the congestion avoidance and control mechanisms of TCP and assured forwarding. Our main purpose is to shed light on this interaction, and to show that, in the current DiffServ framework, poor performance of TCP traffic flows can result from the existing mismatch between the assured forwarding traffic conditioning procedures and the TCP congestion management. We propose a new adaptive packet marking policy to deal with congestion situations that may occur. We show that, with this policy, the provisioned rate for TCP flows can be achieved.

14.
There has been much interest in using active queue management in routers in order to protect users from connections that are not very responsive to congestion notification. An Internet draft recommends schemes based on random early detection for achieving these goals, to the extent that it is possible, in a system without “per-flow” state. However, a “stateless” system with first-in/first-out (FIFO) queueing is very much handicapped in the degree to which flow isolation and fairness can be achieved. Starting with the observation that a “stateless” system is but one extreme in a spectrum of design choices and that per-flow queueing for a large number of flows is possible, we present active queue management mechanisms that are tailored to provide a high degree of isolation and fairness for TCP connections in a gigabit IP router using per-flow queueing. We show that IP flow state in a router can be bounded if the scheduling discipline used has finite memory, and we investigate the performance implications of different buffer management strategies in such a system. We show that merely using per-flow scheduling is not sufficient to achieve effective isolation and fairness, and it must be combined with appropriate buffer management strategies.

15.
Implicit admission control
Internet protocols currently use packet-level mechanisms to detect and react to congestion. Although these controls are essential to ensure fair sharing of the available resource between multiple flows, in some cases they are insufficient to ensure overall network stability. We believe that it is also necessary to take account of higher level concepts, such as connections, flows, and sessions when controlling network congestion. This becomes of increasing importance as more real-time traffic is carried on the Internet, since this traffic is less elastic in nature than traditional Web traffic. We argue that, in order to achieve better utility of the network as a whole, higher level congestion controls are required. By way of example, we present a simple connection admission control (CAC) scheme which can significantly improve the overall performance. This paper discusses our motivation for the use of admission control in the Internet, focusing specifically on control for TCP flows. The technique is not TCP specific, and can be applied to any type of flow in a modern IP infrastructure. Simulation results are used to show that it can drastically improve the performance of TCP over bottleneck links. We go on to describe an implementation of our algorithm for a router running the Linux 2.2.9 operating system. We show that by giving routers at bottlenecks the ability to intelligently deny admission to TCP connections, the goodput of existing connections can be significantly increased. Furthermore, the fairness of the resource allocation achieved by TCP is improved.
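The admission decision described above can be sketched as a router-side gate that leaves established connections untouched but discards the connection-opening packets of new flows while the bottleneck is saturated. The utilization measure and threshold here are assumptions for illustration, not the paper's exact criterion.

```python
def admit_packet(is_syn, link_utilization, admit_threshold=0.9):
    """Implicit admission control sketch: existing flows are never touched, but
    connection-opening (SYN) packets are dropped while the measured bottleneck
    utilization exceeds the admission threshold, so the sender retries later
    instead of adding load to an already congested link."""
    if is_syn and link_utilization > admit_threshold:
        return False          # deny admission to the new connection
    return True               # forward the packet
```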

16.
An RTT-based mechanism for guaranteeing bandwidth fairness among TCP flows
TCP's end-to-end congestion control causes the bottleneck bandwidth obtained by a TCP connection to be inversely proportional to its round-trip time (RTT). To mitigate TCP's bias toward flows with small RTTs, DiffServ traffic-conditioning mechanisms can ensure that large-RTT flows do not starve, provided the small-RTT flows have reached their target rates and surplus resources remain. Existing RTT-based traffic conditioners work well under light congestion, but under heavy congestion their over-protection of large-RTT flows starves the small-RTT flows. We therefore propose an adaptive improvement whose main idea is to adjust the degree of protection given to large-RTT flows according to the level of network congestion. Extensive simulations show that the proposed mechanism effectively guarantees bandwidth fairness among TCP flows and is more robust than existing methods.

17.
The Internet's excellent scalability and robustness result in part from the end-to-end nature of Internet congestion control. End-to-end congestion control algorithms alone, however, are unable to prevent the congestion collapse and unfairness created by applications that are unresponsive to network congestion. To address these maladies, we propose and investigate a novel congestion-avoidance mechanism called network border patrol (NBP). NBP entails the exchange of feedback between routers at the borders of a network in order to detect and restrict unresponsive traffic flows before they enter the network, thereby preventing congestion within the network. Moreover, NBP is complemented with the proposed enhanced core-stateless fair queueing (ECSFQ) mechanism, which provides fair bandwidth allocations to competing flows. Both NBP and ECSFQ are compliant with the Internet philosophy of pushing complexity toward the edges of the network whenever possible. Simulation results show that NBP effectively eliminates congestion collapse and that, when combined with ECSFQ, approximately max-min fair bandwidth allocations can be achieved for competing flows.

18.
This paper introduces a novel congestion detection scheme for high-bandwidth TCP flows over optical burst switching (OBS) networks, called statistical additive increase multiplicative decrease (SAIMD). SAIMD maintains and analyzes a number of previous round-trip times (RTTs) at the TCP senders in order to identify the confidence with which a packet loss event is due to network congestion. The confidence is derived by positioning the short-term RTT in the spectrum of long-term historical RTTs. The derived confidence corresponding to the packet loss is then used in the proposed policy for TCP congestion window adjustment. We will show through extensive simulation that the proposed scheme can effectively solve the false congestion detection problem and significantly outperform the conventional TCP counterparts without losing fairness. The advantages gained in our scheme are at the expense of introducing more overhead in the SAIMD TCP senders. Based on the proposed congestion control algorithm, a throughput model is formulated, and is further verified by simulation results.
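SAIMD derives a congestion confidence by positioning the short-term RTT within the distribution of long-term RTT history and uses it to moderate the window decrease. The sketch below is one plausible realization of that idea; the percentile mapping, window sizes, and the half-window scaling are assumptions, not the paper's exact statistics.

```python
from collections import deque

class SaimdLikeConfidence:
    """Keep a long history of RTT samples; when a loss occurs, report how far the
    recent short-term RTT sits within that history (0 = among the lowest RTTs
    seen, 1 = among the highest), and scale the window decrease accordingly."""

    def __init__(self, long_window=1000, short_window=8):
        self.history = deque(maxlen=long_window)
        self.recent = deque(maxlen=short_window)

    def on_rtt_sample(self, rtt):
        self.history.append(rtt)
        self.recent.append(rtt)

    def loss_confidence(self):
        if not self.history or not self.recent:
            return 1.0                          # no data: assume congestion
        short_term = sum(self.recent) / len(self.recent)
        rank = sum(1 for r in self.history if r <= short_term)
        return rank / len(self.history)         # fraction of history at or below the recent RTT

    def new_cwnd(self, cwnd):
        """Scale the multiplicative decrease by the confidence: full halving when
        congestion looks certain, little reduction when RTTs look normal."""
        conf = self.loss_confidence()
        return cwnd * (1.0 - 0.5 * conf)
```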

19.
We propose a distributed QoS architecture based on information scheduling (DQBI) that provides QoS guarantees for both real-time and best-effort transmission in mobile ad hoc networks. The DQBI model improves the end-to-end delay of information across the whole system and handles network congestion through request-admission control and congestion-control mechanisms. Finally, simulations comparing the MQRD and DQBI architectures show that DQBI better ensures that real-time and best-effort flows achieve their expected service levels.

20.
MBLUE: an active queue management algorithm for data flows
徐建, 李善平. Acta Electronica Sinica (电子学报), 2002, 30(11): 1732-1736.
MBLUE (Modified BLUE) is a flow-oriented active queue management algorithm. Instead of using the average queue length to indicate buffer congestion, it manages network congestion using the packet-drop frequency and the degree of queue idleness. It detects early congestion on the bottleneck link and avoids congestion by dropping and marking packets. It maintains only a single FIFO queue and allocates network bandwidth fairly among flows while keeping little per-flow state. It can absorb transient bursts, reasonably restrain non-TCP flows, and keep the average queue length short, thereby controlling and relieving network congestion. Simulations on TCP/IP networks confirm that the algorithm is robust in allocating bandwidth fairly and in reducing packet loss rates.
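MBLUE is described as driving congestion management from the packet-drop frequency and queue idleness rather than the average queue length, in the manner of BLUE. The sketch below shows only the basic BLUE-style probability update such a scheme builds on; the increment, decrement, and freeze-time constants are illustrative, and MBLUE's per-flow fairness logic is not reproduced.

```python
import random

class BlueStyleMarker:
    """BLUE-style congestion management: raise the marking/dropping probability
    when the buffer overflows, lower it when the link goes idle, and change it
    at most once per freeze interval."""

    def __init__(self, increment=0.02, decrement=0.002, freeze_time=0.1):
        self.p = 0.0
        self.increment = increment
        self.decrement = decrement
        self.freeze_time = freeze_time
        self.last_update = float("-inf")

    def on_buffer_overflow(self, now):
        if now - self.last_update >= self.freeze_time:
            self.p = min(1.0, self.p + self.increment)
            self.last_update = now

    def on_link_idle(self, now):
        if now - self.last_update >= self.freeze_time:
            self.p = max(0.0, self.p - self.decrement)
            self.last_update = now

    def should_mark(self):
        return random.random() < self.p
```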
