Similar Documents
20 similar documents found.
1.
We design and implement an efficient on-line approach, FlowMate, for clustering flows (connections) emanating from a busy server according to shared bottlenecks. Clusters can be periodically input to load balancing, congestion coordination, aggregation, admission control, or pricing modules. FlowMate uses in-band (passive) end-to-end delay measurements to infer shared bottlenecks. Delay information is piggybacked on feedback from the receivers or, when that is not possible, TCP or application-level round-trip time estimates are used. We simulate FlowMate and examine the effects of network load, traffic burstiness, network buffer sizes, and packet drop policies on clustering correctness, evaluated via a novel accuracy metric. We find that coordinated congestion management techniques are fairer when integrated with FlowMate. We also implement FlowMate in the Linux kernel v2.4.17 and evaluate its performance on the Emulab testbed, using both synthetic and tcplib-generated traffic. Our results demonstrate that clustering of medium- to long-lived flows is accurate, even with bursty background traffic. Finally, we validate our results on the PlanetLab Internet testbed.
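A minimal sketch of the underlying idea, assuming that flows whose end-to-end delay samples are strongly correlated share a bottleneck; the correlation threshold and greedy grouping below are illustrative assumptions, not FlowMate's actual test:

```python
import numpy as np

def cluster_by_delay(delay, threshold=0.6):
    """Group flows whose delay sample sequences are strongly correlated.

    delay: (n_flows, n_samples) array of end-to-end delay measurements.
    threshold: illustrative correlation cutoff (hypothetical value).
    Returns a list of clusters, each a list of flow indices.
    """
    n = delay.shape[0]
    corr = np.corrcoef(delay)          # pairwise correlation of delay series
    clusters, assigned = [], set()
    for i in range(n):
        if i in assigned:
            continue
        group = [i] + [j for j in range(i + 1, n)
                       if j not in assigned and corr[i, j] > threshold]
        assigned.update(group)
        clusters.append(group)
    return clusters
```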

2.
An RTT-Based Mechanism for Guaranteeing Bandwidth Fairness among TCP Flows (total citations: 3; self-citations: 0; citations by others: 3)
TCP's end-to-end congestion control causes the bottleneck bandwidth obtained by a TCP connection to be inversely proportional to its RTT (round-trip time). To mitigate TCP's bias toward flows with small RTTs, the traffic-conditioning mechanism of differentiated services can ensure that flows with large RTTs do not starve, provided that flows with small RTTs have reached their target rates and acquired surplus resources. Existing RTT-based traffic-conditioning mechanisms are effective under light congestion, but under heavy congestion they overprotect large-RTT flows and thereby starve small-RTT flows. We therefore propose an adaptive improvement: the degree of protection given to large-RTT flows is adjusted according to the level of network congestion. Extensive simulations show that the proposed mechanism effectively guarantees bandwidth fairness among TCP flows and is more robust than existing methods.
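The RTT bias noted above follows from the standard square-root model of steady-state TCP throughput: at equal loss rates, throughput scales inversely with RTT.

```latex
% S: segment size, p: loss probability. For two flows seeing the same p:
\[
  T \;\approx\; \frac{S}{RTT}\sqrt{\frac{3}{2p}}
  \qquad\Longrightarrow\qquad
  \frac{T_1}{T_2} \;\approx\; \frac{RTT_2}{RTT_1}
\]
```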

3.
The Internet is facing a twofold challenge: to increase network capacity in order to accommodate a steadily increasing number of users, and to guarantee quality of service both for existing applications and for new multimedia applications requiring real-time network response. To meet these requirements, the IETF is currently defining the differentiated services (DiffServ) architecture, which should offer a simple and scalable platform for guaranteeing differentiated QoS in the Internet. In the DiffServ domain, the assured forwarding service is designed to provide data applications with acceptable performance, overcoming the limits of the Internet's current best-effort service. Since data applications mostly rely on the TCP transport protocol, it is important to examine the interaction between TCP's congestion avoidance and control mechanisms and assured forwarding. Our main purpose is to shed light on this interaction, and to show that, in the current DiffServ framework, poor performance of TCP traffic flows can result from the mismatch between assured forwarding traffic conditioning procedures and TCP congestion management. We propose a new adaptive packet marking policy to deal with congestion situations that may occur, and show that, with this policy, the provisioned rate for TCP flows can be achieved.
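A minimal sketch of an adaptive token-bucket marker, assuming a two-color (IN/OUT) scheme whose marking rate is nudged upward while the flow misses its contracted rate; the adaptation rule and all constants are illustrative, not the paper's actual policy:

```python
import time

class AdaptiveMarker:
    """Token-bucket marker: packets within the committed rate are marked
    IN (low drop precedence), the rest OUT. The committed rate adapts
    while the flow under-achieves its contract. All constants here are
    hypothetical illustrations."""

    def __init__(self, target_rate_bps, bucket_bytes):
        self.rate = target_rate_bps / 8.0          # bytes per second
        self.depth = bucket_bytes
        self.tokens = bucket_bytes
        self.last = time.monotonic()

    def mark(self, pkt_len, achieved_bps, contract_bps):
        now = time.monotonic()
        self.tokens = min(self.depth, self.tokens + self.rate * (now - self.last))
        self.last = now
        if achieved_bps < contract_bps:            # flow missing its contract:
            self.rate *= 1.05                      # mark more packets IN
        if self.tokens >= pkt_len:
            self.tokens -= pkt_len
            return "IN"                            # low drop precedence
        return "OUT"                               # high drop precedence
```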

4.
Service prioritization among different traffic classes is an important goal for the Internet. Conventional approaches to solving this problem consider the existing best-effort class as the low-priority class, and attempt to develop mechanisms that provide "better-than-best-effort" service. In this paper, we explore the opposite approach, and devise a new distributed algorithm to realize a low-priority service (as compared to the existing best effort) from the network endpoints. To this end, we develop TCP Low Priority (TCP-LP), a distributed algorithm whose goal is to utilize only the excess network bandwidth as compared to the "fair share" of bandwidth as targeted by TCP. The key mechanisms unique to TCP-LP congestion control are the use of one-way packet delays for early congestion indications and a TCP-transparent congestion avoidance policy. The results of our simulation and Internet experiments show that: 1) TCP-LP is largely non-intrusive to TCP traffic; 2) both single and aggregate TCP-LP flows are able to successfully utilize excess network bandwidth; moreover, multiple TCP-LP flows share excess bandwidth fairly; 3) substantial amounts of excess bandwidth are available to the low-priority class, even in the presence of "greedy" TCP flows; 4) the response times of web connections in the best-effort class decrease by up to 90% when long-lived bulk data transfers use TCP-LP rather than TCP; 5) despite their low-priority nature, TCP-LP flows are able to utilize significant amounts of available bandwidth in a wide-area network environment.
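A sketch of the early congestion indication, assuming the form reported for TCP-LP: flag congestion once the smoothed one-way delay crosses a threshold between the observed minimum and maximum delays (δ ≈ 0.15 per the paper; the smoothing gain is our assumption):

```python
def lp_early_congestion(owd_samples, delta=0.15, alpha=0.125):
    """Return True if the smoothed one-way delay signals early congestion.

    owd_samples: sequence of one-way delay measurements (seconds).
    delta: threshold position between min and max delay (~0.15 per the paper).
    alpha: EWMA smoothing gain (illustrative assumption).
    """
    d_min, d_max = min(owd_samples), max(owd_samples)
    smoothed = owd_samples[0]
    for d in owd_samples[1:]:
        smoothed = (1 - alpha) * smoothed + alpha * d   # EWMA of delay
    return smoothed > d_min + delta * (d_max - d_min)
```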

5.
Transport protocols for Internet-compatible satellite networks (total citations: 6; self-citations: 0; citations by others: 6)
We address the question of how well end-to-end transport connections perform in a satellite environment composed of one or more satellites in geostationary orbit (GEO) or low-altitude Earth orbit (LEO), in which the connection may traverse a portion of the wired Internet. We first summarize the various ways in which latency and asymmetry can impair the performance of the Internet's transmission control protocol (TCP), and discuss extensions to standard TCP that alleviate some of these performance problems. Through analysis, simulation, and experiments, we quantify the performance of state-of-the-art TCP implementations in a satellite environment. A key part of the experimental method is the use of traffic models empirically derived from Internet traffic traces. We identify those TCP implementations that can be expected to perform reasonably well, and those that can suffer serious performance degradation. An important result is that, even with the best satellite-optimized TCP implementations, moderate levels of congestion in the wide-area Internet can seriously degrade performance for satellite connections. For scenarios in which TCP performance is poor, we investigate the potential improvement of using a satellite gateway, proxy, or Web cache to “split” transport connections in a manner transparent to end users. Finally, we describe a new transport protocol for use internally within a satellite network or as part of a split connection. This protocol, which we call the satellite transport protocol (STP), is optimized for challenging network impairments such as high latency, asymmetry, and high error rates. Among its chief benefits is up to an order-of-magnitude reduction in the bandwidth used on the reverse path, as compared to standard TCP, when conducting large file transfers. This is a particularly important attribute for the kind of asymmetric connectivity likely to dominate satellite-based Internet access.

6.
Load Balancing for Parallel Forwarding (total citations: 1; self-citations: 0; citations by others: 1)
Workload distribution is critical to the performance of network processor based parallel forwarding systems. Scheduling schemes that operate at the packet level, e.g., round-robin, cannot preserve packet ordering within individual TCP connections. Moreover, these schemes create duplicate information in processor caches and therefore are inefficient in resource utilization. Hashing operates at the flow level and is naturally able to maintain per-connection packet ordering; besides, it does not pollute caches. A pure hash-based system, however, cannot balance processor load in the face of highly skewed flow-size distributions in the Internet; usually, adaptive methods are needed. In this paper, based on measurements of Internet traffic, we examine the sources of load imbalance in hash-based scheduling schemes. We prove that under certain Zipf-like flow-size distributions, hashing alone is not able to balance workload. We introduce a new metric to quantify the effects of adaptive load balancing on overall forwarding performance. To achieve both load balancing and efficient system resource utilization, we propose a scheduling scheme that classifies Internet flows into two categories: the aggressive and the normal, and applies different scheduling policies to the two classes of flows. Compared with most state-of-the-art parallel forwarding schemes, our work exploits flow-level Internet traffic characteristics.
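A minimal sketch of the two-class idea, assuming a packet-count trigger for aggressive flows and least-loaded placement; the threshold and load accounting are illustrative simplifications, not the paper's scheme:

```python
import hashlib
from collections import defaultdict

class TwoClassScheduler:
    """Normal flows are hashed to processors (preserving per-flow packet
    order); flows detected as aggressive (by packet count) are pinned to
    the least-loaded processor. The threshold is a hypothetical value."""

    def __init__(self, n_procs, hot_threshold=1000):
        self.n = n_procs
        self.hot_threshold = hot_threshold
        self.pkt_count = defaultdict(int)
        self.load = [0] * n_procs          # packets dispatched per processor
        self.pinned = {}                   # aggressive flow -> processor

    def _hash(self, flow_id):
        return hashlib.sha1(flow_id.encode()).digest()[0] % self.n

    def schedule(self, flow_id):
        self.pkt_count[flow_id] += 1
        if flow_id in self.pinned:
            proc = self.pinned[flow_id]
        elif self.pkt_count[flow_id] > self.hot_threshold:
            proc = min(range(self.n), key=self.load.__getitem__)
            self.pinned[flow_id] = proc    # migrate the aggressive flow
        else:
            proc = self._hash(flow_id)     # hash keeps per-flow ordering
        self.load[proc] += 1
        return proc
```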

7.
Current Internet congestion control protocols operate independently on a per-flow basis. Recent work has demonstrated that cooperative congestion control strategies between flows can improve performance for a variety of applications, ranging from aggregated TCP transmissions to multiple-sender multicast applications. However, in order for this cooperation to be effective, one must first identify the flows that are congested at the same set of resources. We present techniques based on loss or delay observations at end hosts to infer whether or not two flows experiencing congestion are congested at the same network resources. Our novel result is that such detection can be achieved for unicast flows, but the techniques can also be applied to multicast flows. We validate these techniques via queueing analysis, simulation, and experimentation within the Internet. In addition, we demonstrate preliminary simulation results showing that the delay-based technique can determine whether two TCP flows are congested at the same set of resources. We also propose metrics that can be used as a measure of the amount of congestion sharing between two flows.
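The delay-based test can be summarized as a comparison of correlation statistics (notation ours; the paper also develops a loss-based variant): declare a shared bottleneck when the cross-correlation between the two flows' delay samples exceeds the auto-correlation within one flow.

```latex
% D_1, D_2: delay samples of flows 1 and 2 taken at interleaved times.
\[
  M_{\mathrm{cross}} = \operatorname{corr}\bigl(D_1(t_i),\, D_2(t'_i)\bigr),
  \qquad
  M_{\mathrm{auto}} = \operatorname{corr}\bigl(D_1(t_i),\, D_1(t_{i+1})\bigr)
\]
\[
  M_{\mathrm{cross}} > M_{\mathrm{auto}} \;\Longrightarrow\; \text{shared congestion}
\]
```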

8.
The precise nature of TCP dynamics over Internet connections has not been well understood, since existing results are solely analytical or based on simulations. We employ time-dependent exponent curves and logarithmic displacement curves to study TCP AIMD congestion window-size traces over Internet connections. We show that these dynamics have two dominant parts: a stochastic component in response to network traffic, and a deterministic chaotic component due to the nonlinearity of the protocol. These dynamics can be largely characterized as anomalous diffusion with a very large exponent.
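In diffusion terms, the claim is that the mean-square displacement of the congestion window grows as a power law in the time lag (notation ours, not the paper's):

```latex
% w(t): congestion window size; alpha = 1 is ordinary diffusion,
% alpha > 1 superdiffusion. The paper reports a very large exponent.
\[
  \bigl\langle\, [\,w(t+\tau) - w(t)\,]^{2} \,\bigr\rangle \;\sim\; \tau^{\alpha},
  \qquad \alpha \gg 1
\]
```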

9.
Internet traffic primarily consists of packets from elastic flows, i.e., Web transfers, file transfers, and e-mail, whose transmissions are mediated via the Transmission Control Protocol (TCP). In this paper, we develop a methodology for processing TCP flow measurements in order to analyze throughput correlations among TCP flow classes that can be used to infer congestion sharing in the Internet. The primary contributions of this paper are: 1) development of a technique for processing flow records suitable for inferring congested resource sharing; 2) evaluation of the use of factor analysis on processed flow records to explore which TCP flow classes might share congested resources; and 3) validation of our inference methodology using bootstrap methods and nonintrusive, flow-level measurements collected at a single network site. Our proposal for using flow-level measurements to infer congestion sharing differs significantly from previous research, which has employed packet-level measurements for making inferences. Possible applications of our method include network monitoring and root-cause analysis of poor performance.
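A toy illustration of the factor-analysis step, with synthetic throughputs standing in for real flow records; the data shape and preprocessing here are assumptions, not the paper's pipeline:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Rows: measurement intervals; columns: TCP flow classes; entries:
# per-interval throughputs. Classes loading on the same latent factor
# are candidates for sharing a congested resource.
rng = np.random.default_rng(0)
shared = rng.normal(size=(200, 1))               # latent shared bottleneck
X = np.hstack([
    5 * shared + rng.normal(size=(200, 1)),      # class A (shares bottleneck)
    4 * shared + rng.normal(size=(200, 1)),      # class B (shares bottleneck)
    rng.normal(size=(200, 2)),                   # two unrelated classes
])
fa = FactorAnalysis(n_components=1).fit(X)
print(np.round(fa.components_, 2))  # large loadings on A and B only
```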

10.
A large number of Internet applications are sensitive to overload conditions in the network. While these applications have been designed to adapt somewhat to the varying conditions of the Internet, they can benefit greatly from an increased level of predictability in network services. We propose minor extensions to the packet queueing and discard mechanisms used in routers, coupled with simple control mechanisms at the source, that enable the network to guarantee minimal levels of throughput to different sessions while sharing the residual network capacity in a cooperative manner. The service realized by the proposed mechanisms is an interpretation of the controlled-load service being standardized by the Internet Engineering Task Force. Although controlled-load service can be used in conjunction with any transport protocol, our focus in this paper is on understanding its interaction with the Transmission Control Protocol (TCP). Specifically, we study the dynamics of TCP traffic in an integrated services network that simultaneously supports both best-effort and controlled-load sessions. In light of this study, we propose and experiment with modifications to TCP's congestion control mechanisms in order to improve its performance in networks where a minimum transmission rate is guaranteed. We then investigate the effect of network transients, such as changes in traffic load and in service levels, on the performance of controlled-load as well as best-effort connections. To capture the evolution of integrated services in the Internet, we also consider situations where only a select subset of routers is capable of providing service differentiation between best-effort and controlled-load traffic. Finally, we show how the service mechanisms proposed here can be embedded within other packet and link scheduling frameworks in a fully evolved integrated services Internet.

11.
The introduction of high-bandwidth services such as multimedia has changed how services in the Internet are accessed and what quality-of-experience requirements (e.g., a limited amount of packet loss, fairness between connections) are expected to ensure smooth service delivery. Under current congestion control mechanisms, misbehaving Transmission Control Protocol (TCP) stacks can easily achieve an unfair advantage over other connections by not responding to Explicit Congestion Notification (ECN) warnings, sent by the active queue management (AQM) system when congestion in the network is imminent. In this article, we present an accountability mechanism that holds connections accountable for their actions through the detection and penalization of misbehaving TCP stacks, with the goal of restoring fairness in the network. The mechanism is specifically targeted at deployment in multimedia access networks, as these environments are most prone to fairness issues due to misbehaving TCP stacks (i.e., long-lived connections and a moderate connection pool size). We argue that a cognitive approach is best suited to cope with the dynamicity of the environment and therefore present a cognitive detection algorithm that combines machine learning algorithms to classify connections into well-behaving and misbehaving profiles. This is in turn used by a differentiated AQM mechanism that treats the well-behaving and misbehaving profiles differently. The performance of the cognitive accountability mechanism is characterized both in terms of the accuracy of the cognitive detection algorithm and in terms of its overall impact on network fairness.
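A toy sketch of the detection step on synthetic data, assuming features such as the rate reduction observed after ECN marks; the feature set and the choice of classifier are our assumptions, not necessarily the paper's:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 500
# Hypothetical feature: fractional rate drop after an ECN mark.
rate_drop_after_ecn = np.concatenate([rng.uniform(0.3, 0.6, n),   # compliant
                                      rng.uniform(0.0, 0.1, n)])  # ignores ECN
marks_per_rtt = rng.uniform(0, 5, 2 * n)
X = np.column_stack([rate_drop_after_ecn, marks_per_rtt])
y = np.array([0] * n + [1] * n)        # 1 = misbehaving profile
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[0.05, 3.0]]))      # flow that barely slows down -> [1]
```

The classifier's output would then feed the differentiated AQM stage, which penalizes flows labeled as misbehaving.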

12.
Google Quick UDP Internet Connections (QUIC) accounts for almost 10% of Internet traffic, yet the protocol has not been standardized by the Internet Engineering Task Force (IETF). We distinguish Google QUIC (GQUIC) from IETF QUIC (IQUIC), since the two may differ. Both versions run over UDP and cannot be split the way satellite systems usually split TCP connections, so the need to adapt any-QUIC parameters must be evaluated. Since GQUIC is available, we analyze its behavior over a satellite communication system. In our evaluations, GQUIC's quick connection establishment does not compensate for an ill-suited congestion control: page download time doubles with GQUIC compared to optimized split TCP connections. This paper concludes that specific tuning is required when any-QUIC runs over a high bandwidth-delay product (BDP) network.

13.
TCP Veno: TCP enhancement for transmission over wireless access networks (total citations: 18; self-citations: 0; citations by others: 18)
Wireless access networks in the form of wireless local area networks, home networks, and cellular networks are becoming an integral part of the Internet. Unlike wired networks, random packet loss due to bit errors is not negligible in wireless networks, and this causes significant performance degradation of the transmission control protocol (TCP). We propose and study a novel end-to-end congestion control mechanism called TCP Veno that is simple and effective for dealing with random packet loss. A key ingredient of Veno is that it monitors the network congestion level and uses that information to decide whether packet losses are likely due to congestion or to random bit errors. Specifically: (1) it refines the multiplicative decrease algorithm of TCP Reno (the most widely deployed TCP version in practice) by adjusting the slow-start threshold according to the perceived network congestion level rather than by a fixed drop factor, and (2) it refines the linear increase algorithm so that the connection can stay longer in an operating region in which the network bandwidth is fully utilized. Based on extensive network testbed experiments and live Internet measurements, we show that Veno achieves significant throughput improvements without adversely affecting other concurrent TCP connections, including concurrent Reno connections. In typical wireless access networks with a 1% random packet loss rate, throughput improvements of up to 80% are demonstrated. A salient feature of Veno is that it modifies only the sender-side protocol of Reno, without changing the receiver-side protocol stack.
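A sketch of Veno's loss discrimination as described above, using the Vegas-style backlog estimate N; β = 3 and the 4/5 reduction follow the published description, but treat the exact constants as indicative:

```python
def veno_on_loss(cwnd, base_rtt, rtt, beta=3):
    """Return the new congestion window after a packet loss.

    cwnd: congestion window (segments); base_rtt: minimum observed RTT;
    rtt: current smoothed RTT; beta: backlog threshold (3 per the paper).
    """
    expected = cwnd / base_rtt                 # rate with no queueing
    actual = cwnd / rtt                        # measured rate
    backlog = (expected - actual) * base_rtt   # N: packets queued in network
    if backlog < beta:
        return cwnd * 4 / 5                    # random loss: cut by only 1/5
    return cwnd / 2                            # congestive loss: Reno halving

print(veno_on_loss(cwnd=40, base_rtt=0.100, rtt=0.105))  # backlog ~1.9 -> 32.0
```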

14.
Quick User Datagram Protocol (UDP) Internet Connections (QUIC) is an experimental, low-latency transport protocol proposed by Google, still being improved and specified at the Internet Engineering Task Force (IETF). The viewer's quality of experience (QoE) in HTTP adaptive streaming (HAS) applications may be improved by QUIC's low latency, improved congestion control, and multiplexing features. We measured the streaming performance of QUIC on wireless and cellular networks to understand whether the problems that occur when running HTTP over TCP can be reduced by using HTTP over QUIC. The performance of QUIC was tested in the presence of network interface changes caused by viewer mobility. We observed that QUIC yielded quicker media stream startup and a better streaming and seeking experience, especially at higher levels of network congestion, and that it outperformed TCP when the viewer was mobile and switched between wireless networks. Furthermore, we measured QUIC's performance in an emulated network with varying amounts of loss and delay, to evaluate how QUIC's multiplexing feature benefits HAS applications. We compared HAS applications multiplexing video streams over HTTP/1.1 with multiple TCP connections, HTTP/2 over one TCP connection, and QUIC over one UDP connection. QUIC provided better performance than TCP on a network with large delays; however, it did not provide a significant improvement when the loss rate was large. Finally, we analyzed the congestion control mechanisms implemented by QUIC and TCP and tested their ability to provide fairness among streaming clients. We found that QUIC always provided fairness among QUIC flows, but was not always fair to TCP.

15.
A survey on TCP-friendly congestion control (total citations: 2; self-citations: 0; citations by others: 2)
Widmer, J., Denda, R., Mauve, M. IEEE Network, 2001, 15(3): 28–37.
New trends in communication, in particular the deployment of multicast and real-time audio/video streaming applications, are likely to increase the percentage of non-TCP traffic in the Internet. These applications rarely perform congestion control in a TCP-friendly manner; they do not share the available bandwidth fairly with applications built on TCP, such as Web browsers, FTP, or e-mail clients. The Internet community strongly fears that the current evolution could lead to congestion collapse and starvation of TCP traffic. For this reason, TCP-friendly protocols are being developed that behave fairly with respect to coexistent TCP flows. We present a survey of current approaches to TCP friendliness and discuss their characteristics. Both unicast and multicast congestion control protocols are examined, and an evaluation of the different approaches is presented.
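TCP friendliness is commonly pinned to the TCP rate equation: a non-TCP flow is considered friendly if its long-run rate does not exceed that of a conformant TCP under the same loss rate and round-trip time. In the simple square-root form (the survey also discusses more detailed models that account for timeouts):

```latex
% s: segment size, p: loss event rate. 1.22 = sqrt(3/2).
\[
  T_{\mathrm{TCP}} \;\approx\; \frac{1.22\, s}{RTT\, \sqrt{p}},
  \qquad
  T_{\mathrm{flow}} \;\le\; T_{\mathrm{TCP}} \;\Longrightarrow\; \text{TCP-friendly}
\]
```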

16.
This paper provides a parallel review of two important issues for next-generation multimedia networking. First, emerging multimedia applications require a fresh approach to congestion control in the Internet. Currently, congestion control is performed by TCP; it is optimised for data traffic flows, which are inherently elastic, and audio and video traffic are ill-served by the sudden rate fluctuations imposed by TCP's multiplicative-decrease control algorithm. The second important issue is mobility support for multimedia applications. Wireless networks are characterized by substantial packet loss due to the imperfection of the radio medium, and this increased packet loss disturbs the foundation of TCP's loss-based congestion control. This paper contributes to the ongoing discussion about Internet congestion control by providing a parallel analysis of these two issues. It describes the main challenges, design guidelines, and existing proposals for Internet congestion control optimised for multimedia traffic in wireless network environments.

17.
We propose a distributed QoS architecture based on information scheduling (DQBI), which can provide QoS guarantees for both real-time and best-effort transmissions in mobile ad hoc networks. The DQBI model improves system-wide end-to-end delay and handles network congestion through request-based admission control and congestion control mechanisms. Finally, simulations comparing the MQRD and DQBI architectures show that DQBI better ensures that real-time and best-effort flows achieve their expected service levels.

18.
A parameterizable methodology for Internet traffic flow profiling (total citations: 16; self-citations: 0; citations by others: 16)
We present a parameterizable methodology for profiling Internet traffic flows at a variety of granularities. Our methodology differs from many previous studies, which have concentrated on end-point definitions of flows in terms of state derived from observing the explicit opening and closing of TCP connections. Instead, our model defines flows based on traffic satisfying various temporal and spatial locality conditions, as observed at internal points of the network. This approach to flow characterization helps address some central problems in networking based on the Internet model, among them route caching, resource reservation at multiple service levels, usage-based accounting, and the integration of IP traffic over an ATM fabric. We first define the parameter space and then concentrate on metrics characterizing both individual flows and the aggregate flow profile. We consider various granularities of the definition of a flow, such as by destination network, host pair, or host-and-port quadruple. We include measurements based on case studies we undertook, which yield significant insights into several aspects of Internet traffic, demonstrating (i) the brevity of a significant fraction of IP flows at a variety of traffic aggregation granularities, (ii) that the number of host-pair IP flows is not significantly larger than the number of destination-network flows, and (iii) that schemes for caching traffic information could benefit significantly from using application information.
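A minimal sketch of timeout-based flow profiling at a selectable granularity; the 64-second default is a commonly used flow timeout, and the key functions are illustrative choices:

```python
from collections import defaultdict

def profile_flows(packets, granularity, timeout=64.0):
    """Count flows at a chosen granularity using a timeout definition:
    a flow is a stream of packets sharing a key with no inter-packet
    gap larger than `timeout` seconds.

    packets: iterable of (timestamp, src, dst, sport, dport) tuples.
    granularity: function mapping (src, dst, sport, dport) to a flow key.
    """
    last_seen, flow_count = {}, defaultdict(int)
    for ts, src, dst, sport, dport in sorted(packets):
        key = granularity(src, dst, sport, dport)
        if key not in last_seen or ts - last_seen[key] > timeout:
            flow_count[key] += 1          # a new flow at this granularity
        last_seen[key] = ts
    return dict(flow_count)

# Example granularities: destination network, host pair, full quadruple.
dest_net = lambda s, d, sp, dp: d.rsplit(".", 1)[0]   # crude /24 prefix
host_pair = lambda s, d, sp, dp: (s, d)
quadruple = lambda s, d, sp, dp: (s, d, sp, dp)
```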

19.
This paper examines congestion control for TCP flows that require on-the-fly in-network processing at network elements such as gateways, proxies, firewalls, and even routers. Such flows will become increasingly abundant as the Internet evolves. Since these flows consume CPU cycles in network elements, both bandwidth and CPU can become bottlenecks, and congestion control must therefore deal with "congestion" on both resources. We show that conventional TCP/AQM schemes can lose significant throughput and suffer harmful unfairness in this environment, particularly when CPU cycles become scarce (a likely trend given the recent explosive growth of bandwidth). As a solution to this problem, we establish a notion of dual-resource proportional fairness and propose an AQM scheme, called Dual-Resource Queue (DRQ), that can closely approximate proportional fairness for TCP Reno sources with in-network processing requirements. DRQ is scalable, since it maintains no per-flow state and minimizes communication among different resource queues, and it is incrementally deployable because it requires no change to TCP stacks. Simulations show that DRQ approximates proportional fairness at little implementation cost, and that even incremental deployment of DRQ at the edge of the Internet improves the fairness and throughput of these TCP flows. This work is at an early stage and may lead to interesting developments in congestion control research.
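Dual-resource proportional fairness can be stated as a utility maximization over both resources (a standard formulation; the paper's exact constraint set may differ):

```latex
% x_i: rate of flow i; C_b: link bandwidth; C_c: CPU capacity;
% w_i: CPU cycles consumed per bit of flow i.
\[
  \max_{x \ge 0} \;\sum_i \log x_i
  \quad \text{s.t.} \quad
  \sum_i x_i \le C_b,
  \qquad
  \sum_i w_i x_i \le C_c
\]
```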

20.
This paper examines the performance of TCP/IP, the Internet data transport protocol, over wide-area networks (WANs) in which data traffic may coexist with real-time traffic such as voice and video. Specifically, we attempt to develop a basic understanding, using analysis and simulation, of the properties of TCP/IP in a regime where: (1) the bandwidth-delay product of the network is high compared to the buffering in the network, and (2) packets may incur random loss (e.g., due to transient congestion caused by fluctuations in real-time traffic, or to wireless links in the path of the connection). The following key results are obtained. First, random loss leads to significant throughput deterioration when the product of the loss probability and the square of the bandwidth-delay product is larger than one. Second, for multiple connections sharing a bottleneck link, TCP is grossly unfair toward connections with higher round-trip delays; this means that a simple first-in first-out (FIFO) queueing discipline might not suffice for data traffic in WANs. Finally, while the Reno version of TCP produces less bursty traffic than the original Tahoe version, it is less robust than the latter when successive losses are closely spaced. We conclude by indicating modifications that may be required, both at the transport and network layers, to provide good end-to-end performance over high-speed WANs.
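The first result follows from the square-root law: random loss at probability p caps the congestion window near 1/√p segments, so performance degrades once that cap falls below the bandwidth-delay product (our paraphrase of the paper's condition):

```latex
% W: achievable window (segments), p: random loss probability,
% mu * tau: bandwidth-delay product in segments.
\[
  W \;\sim\; \frac{1}{\sqrt{p}}, \qquad
  \frac{1}{\sqrt{p}} < \mu\tau
  \;\Longleftrightarrow\;
  p\,(\mu\tau)^{2} > 1
  \;\Longrightarrow\; \text{significant throughput deterioration}
\]
```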
