Similar Literature
20 similar documents were found (search time: 31 ms).
1.
The end-to-end congestion control mechanism of transmission control protocol (TCP) is critical to the robustness and fairness of the best-effort Internet. Since it is no longer practical to rely on end-systems to cooperatively deploy congestion control mechanisms, the network itself must now participate in regulating its own resource utilization. To that end, fairness-driven active queue management (AQM) is promising in sharing the scarce bandwidth among competing flows in a fair manner. However, most of the existing fairness-driven AQM schemes cannot provide efficient and fair bandwidth allocation while being scalable. This paper presents a novel fairness-driven AQM scheme, called CHORD (CHOKe with recent drop history), that seeks to maximize fair bandwidth sharing among aggregate flows while retaining the scalability in terms of the minimum possible state space and per-packet processing costs. Fairness is enforced by identifying and restricting high-bandwidth unresponsive flows at the time of congestion with a lightweight control function. The identification mechanism consists of a fixed-size cache to capture the history of recent drops with a state space equal to the size of the cache. The restriction mechanism is stateless with two matching trial phases and an adaptive drawing factor to take a strong punitive measure against the identified high-bandwidth unresponsive flows in proportion to the average buffer occupancy. Comprehensive performance evaluation indicates that among other well-known AQM schemes of comparable complexities, CHORD provides enhanced TCP goodput and intra-protocol fairness and is well-suited for fair bandwidth allocation to aggregate traffic across a wide range of packet and buffer sizes at a bottleneck router.
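A minimal sketch of the identification-and-restriction idea described above, assuming a fixed-size cache of recently dropped flow IDs and a drawing factor scaled by average buffer occupancy; the class name, constants, and the single matching trial below are illustrative assumptions, not the paper's exact algorithm.

```python
import random
from collections import deque

class ChordLikeAQM:
    """Toy CHOKe-style AQM with a fixed-size cache of recently dropped flow IDs.

    Illustrative only: CHORD as described uses two matching trial phases and an
    adaptive drawing factor; the thresholds below are assumed values.
    """

    def __init__(self, cache_size=32, buffer_limit=100):
        self.drop_history = deque(maxlen=cache_size)  # recent-drop cache (the only extra state)
        self.queue = []                               # FIFO of (flow_id, packet) tuples
        self.buffer_limit = buffer_limit

    def _congested(self):
        return len(self.queue) > 0.5 * self.buffer_limit

    def enqueue(self, flow_id, packet):
        if self._congested() and self.drop_history:
            # Matching trial: compare the arrival against a random recent drop.
            # A match suggests a high-bandwidth (likely unresponsive) flow.
            if flow_id == random.choice(self.drop_history):
                occupancy = len(self.queue) / self.buffer_limit
                # Drawing factor: punish harder as the average buffer occupancy rises.
                if random.random() < occupancy:
                    self.drop_history.append(flow_id)
                    return False  # packet dropped
        if len(self.queue) >= self.buffer_limit:
            self.drop_history.append(flow_id)
            return False          # tail drop when the buffer is full
        self.queue.append((flow_id, packet))
        return True
```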

2.
AIMD-Based Layered Multicast Congestion Control
杨明  张福炎 《计算机学报》2003,26(10):1274-1279
This paper proposes a layered multicast congestion control algorithm based on AIMD. Building on the good TCP compatibility and stability of AIMD, the algorithm adopts a slow-increase, slow-decrease rate adjustment principle to avoid the rate oscillation caused by TCP's rate-halving strategy. To avoid the complexity and scalability problems introduced by feedback processing, a feedback-free method for estimating the receiver-to-sender round-trip delay is proposed. The algorithm uses a TCP-like slow-start mechanism to improve link utilization and convergence speed. Simulation results show that the algorithm exhibits good fairness towards TCP flows and towards different multicast flows, along with high bandwidth utilization and good stability.
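A rough sketch of the slow-increase, slow-decrease rate adjustment described above, in the spirit of AIMD but with a gentler decrease than TCP's rate halving; the constants are assumed values for illustration only.

```python
def adjust_rate(rate, congested, alpha=0.05, beta=0.875, max_rate=10.0):
    """AIMD-style rate update (in Mb/s) with slow increase and slow decrease.

    alpha: small additive increase per update interval (assumed value).
    beta:  multiplicative decrease factor gentler than TCP's 0.5 (assumed value).
    """
    if congested:
        return max(rate * beta, 0.01)      # slow multiplicative decrease
    return min(rate + alpha, max_rate)      # slow additive increase

# Example: a flow ramps up slowly and backs off gently on congestion indications.
rate = 1.0
for congested in [False, False, False, True, False, True]:
    rate = adjust_rate(rate, congested)
    print(f"rate = {rate:.3f} Mb/s")
```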

3.
1 Introduction In the current Internet, not all applications use TCP, and they do not all follow the same concept of fairly sharing the available bandwidth. The rapid growth of real-time streaming media applications will bring a great deal of UDP traffic into the Internet without integrating a TCP-compatible congestion control mechanism. This threatens the quality of service (QoS) of real-time applications and the stability of the current Internet. For this reason, it is desirable to define appropriate rate rule…

4.
Because traditional TCP uses a rate-halving congestion backoff mechanism, it tends to produce large rate fluctuations during data transmission. UDP, in contrast, has no congestion backoff mechanism at all: in a congested network, UDP flows seize a large share of the bandwidth of TCP flows while their own packet loss rises quickly, potentially leading to congestion collapse. Neither TCP nor UDP therefore satisfies the needs of real-time streaming media services. This paper studies the TCP-Friendly Rate Control (TFRC) protocol, a transport protocol that has a congestion backoff mechanism, exhibits small throughput fluctuations, and can share bandwidth fairly with TCP, and applies it to a real-time multimedia streaming application. Test results show that with TFRC, real-time multimedia playback is much smoother than with TCP.
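For reference, TFRC paces its sending rate to match the TCP throughput equation (as specified in RFC 5348). The helper below computes that target rate from the measured loss event rate and round-trip time, using the common simplifications b = 1 and t_RTO = 4R; the example numbers are assumptions.

```python
from math import sqrt

def tfrc_target_rate(s, R, p, b=1.0):
    """TCP-friendly target rate in bytes/s (TFRC throughput equation, RFC 5348).

    s: segment size in bytes, R: round-trip time in seconds,
    p: loss event rate (0 < p <= 1), b: packets acknowledged per ACK (commonly 1).
    """
    t_rto = 4.0 * R  # common simplification used by TFRC
    denom = (R * sqrt(2.0 * b * p / 3.0)
             + t_rto * (3.0 * sqrt(3.0 * b * p / 8.0)) * p * (1.0 + 32.0 * p ** 2))
    return s / denom

# Example: 1460-byte segments, 100 ms RTT, 1% loss event rate (assumed values).
print(f"{tfrc_target_rate(1460, 0.1, 0.01) / 1e3:.1f} kB/s")
```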

5.
Computer Networks, 2008, 52(15): 2947-2960
This paper deals with a congestion control framework for elastic and real-time traffic, where the user’s application is associated with a utility function. We allow users to have concave as well as non-concave utility functions, and aim at allocating bandwidth such that utility values are shared fairly. To achieve this, we transform all utilities into strictly concave second order utilities and interpret the resource allocation problem as the global optimization problem of maximizing aggregate second order utility. We propose a new fairness criterion, utility proportional fairness, which is characterized by the unique solution to this problem. Our fairness criterion incorporates utility max–min fairness as a limiting case. Based on our analysis, we obtain congestion control laws at links and sources that (i) are linearly stable regardless of the network topology, provided that a bound on round-trip times is known, and (ii) provide a utility proportional fair resource allocation in equilibrium. We further investigate the efficiency of utility fair resource allocations. Our measure of efficiency is defined as the worst case ratio of the total utility of a utility proportional fair rate vector and the maximum possible total utility. We present a generic technique which allows us to obtain upper bounds on the efficiency loss. For special cases, such as linear and concave utility functions, and non-concave utility functions with bounded domain, we explicitly calculate such upper bounds. Then, we study utility fair resource allocations with respect to bandwidth fairness. We derive a fairness metric assessing the aggressiveness of utility functions. This allows us to design fair utility functions for various applications. Finally, we simulate the proposed algorithms using the NS2 simulator.
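A generic illustration of the utility-maximization style of congestion control that the framework above builds on: each link prices its congestion by excess load, and each source picks the rate at which marginal utility equals its path price. The topology, weights, and log utilities below are assumptions for a toy example, not the paper's specific utility proportional fair control laws.

```python
import numpy as np

# Toy network: 2 links, 3 sources; routing matrix A[l, i] = 1 if source i uses link l.
A = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=float)
capacity = np.array([10.0, 8.0])      # link capacities (Mb/s), assumed values
w = np.array([1.0, 2.0, 1.0])         # utility weights, U_i(x) = w_i * log(x)

x = np.ones(3)                        # source rates
price = np.zeros(2)                   # link prices (congestion measures)
gamma = 0.05                          # price step size

for _ in range(2000):
    y = A @ x                                          # aggregate load per link
    price = np.maximum(price + gamma * (y - capacity) / capacity, 0.0)
    q = A.T @ price                                    # path price seen by each source
    x = w / np.maximum(q, 1e-6)                        # U_i'(x_i) = q_i  =>  x_i = w_i / q_i
    x = np.minimum(x, 100.0)                           # keep rates bounded early on

print(np.round(x, 2), np.round(A @ x, 2))              # equilibrium rates and link loads
```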

6.
TCP-Friendly Rate Control (TFRC) is a mechanism for congestion control of non-TCP flows. Because packet loss in wireless environments is not necessarily caused by network congestion, using the original TFRC protocol often leads to incorrect reactions. A biased queue management strategy is adopted; it can be added to any active queue management mechanism to distinguish congestion losses from non-congestion losses, reducing unnecessary congestion control and improving transmission performance over wireless networks. Simulation experiments with a network simulator verify that this method noticeably improves the stability of the wireless transmission rate and preserves application-layer QoS.

7.
This paper studies the flows at the base station of a network in which the Internet is accessed over a WLAN, and proposes a queue-length-based scheduling method and a channel-capacity-based congestion control scheme in order to achieve fair allocation of network resources and to address the drawbacks caused by improper handling of packets accumulated at the base station. In the proposed resource allocation model, the scheduling algorithm randomly selects the next packet to send according to the backlog (queue length) of each flow; in the congestion control scheme, link utilization serves as the congestion indicator and, after computation, is fed back equally to the sender of every flow. Each sender adjusts its sending rate according to the congestion feedback so as to achieve fair resource allocation. Simulation results show that the flows share the wireless bandwidth fairly. The main advantage of this algorithm is that the base station achieves high fairness without having to select packets according to any particular definition of fairness.
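A toy sketch of the base-station behaviour described above: the next packet is drawn at random with probability proportional to each flow's backlog, and link utilization is computed as a single congestion signal fed back identically to all senders. The class name, capacity figure, and interval handling are assumptions.

```python
import random

class BaseStationScheduler:
    """Toy model of the queue-length-weighted scheduler sketched above (assumed details)."""

    def __init__(self, capacity_pps=100):
        self.queues = {}              # flow_id -> list of queued packets
        self.capacity = capacity_pps  # channel capacity in packets per interval
        self.sent_this_interval = 0

    def enqueue(self, flow_id, packet):
        self.queues.setdefault(flow_id, []).append(packet)

    def schedule_next(self):
        """Pick the next flow at random, weighted by its backlog (queue length)."""
        backlogged = {f: q for f, q in self.queues.items() if q}
        if not backlogged:
            return None
        flows = list(backlogged)
        weights = [len(backlogged[f]) for f in flows]
        chosen = random.choices(flows, weights=weights, k=1)[0]
        self.sent_this_interval += 1
        return chosen, backlogged[chosen].pop(0)

    def congestion_feedback(self):
        """Link utilization for this interval, fed back identically to every sender."""
        utilization = self.sent_this_interval / self.capacity
        self.sent_this_interval = 0
        return utilization
```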

8.
Transmission control protocol (TCP) has been recognized as the most important transport-layer protocol for the Internet. It is distinguished by its reliable transmission, flow control, and congestion control. However, the issue of fair bandwidth-sharing among competing flows was not properly addressed in TCP. As web-based applications and interactive applications grow more popular, the number of short-lived flows conveyed on the Internet continues to rise. With conventional TCP, short-lived flows will be unable to obtain a fair share of available bandwidth. As a result, short-lived flows will suffer from longer delays and a lower service rate. It is essential for the Internet to come up with an effective solution to this problem in order to accommodate the new traffic patterns. With a more equitable sharing of bottleneck bandwidth as its goal, a stateless queue management scheme featuring early drop maximum (EDM) is developed and presented in this article. The fundamental idea is to drop packets from those flows having more than an equal share of bandwidth. The congestion window size of a TCP sender is carried in the options field on each packet. The proposed scheme will be exercised on routers and make its decision on packet dropping according to the congestion windows. In case of link congestion, the queued packet with the largest congestion window will be dropped from the queue. This will lower the sending rate of its sender and release part of the occupied bandwidth for the use of other competing flows. By so doing, the entire system will approach an equilibrium point with a rapid and fair distribution of bandwidth. As a stateless approach, the proposed scheme inherits numerous advantages in implementation and scalability. Extensive simulations were conducted to verify the feasibility and the effectiveness of the proposed approach. As revealed in the simulation results, the proposed scheme outperforms existing stateless techniques, including Drop-Tail and Random Early Drop, in many respects, such as a fairer sharing of available bandwidth and a shorter response time for short-lived flows.
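A minimal sketch of the EDM drop rule described above: each queued packet carries its sender's congestion window (modelled here as a plain field rather than a TCP option), and on congestion the packet with the largest window is evicted. Class and field names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    flow_id: int
    cwnd: int        # sender's congestion window, as carried in the options field
    payload: bytes = b""

class EDMQueue:
    """Stateless early-drop-maximum queue: drop from the flow holding the most bandwidth."""

    def __init__(self, limit=64):
        self.limit = limit
        self.queue = []

    def enqueue(self, pkt: Packet):
        if len(self.queue) >= self.limit:
            # Link congested: evict the queued packet with the largest congestion
            # window, which throttles the sender currently using the biggest share.
            victim = max(range(len(self.queue)), key=lambda i: self.queue[i].cwnd)
            self.queue.pop(victim)
        self.queue.append(pkt)

    def dequeue(self):
        return self.queue.pop(0) if self.queue else None
```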

9.
Although the Differentiated Services architecture supports scalable packet forwarding based on aggregate flows, the detailed procedure of Quality of Service (QoS) flow set-up within this architecture has not been well established. In this paper we explore the possibility of a scalable QoS flow set-up using a sink-tree paradigm. The paradigm initially constructs a sink tree at each egress edge router using network topology and bandwidth information provided by a QoS extended version of Open Shortest Path First (OSPF), which is a widely used link-state routing protocol. Our sink-tree paradigm dynamically reallocates network bandwidths online according to traffic demands. As a consequence, our paradigm easily supports QoS routing, resource allocation, and admission control at ingress edge routers without consulting core routers in a way that the QoS flow set-up time and overhead are minimized. Simulation results are very encouraging in that the proposed methodology requires significantly less communication overhead in setting up QoS flows compared to the traditional per-flow signaling-based methodology while still maintaining high resource utilization.

10.
Computer Networks, 2007, 51(7): 1800-1814
Since traditional TCP congestion control is ill-suited for high speed networks, designing a high speed replacement for TCP has become a challenge. From the simulations of some existing high speed protocols, we observe that these high speed protocols make the round-trip time bias problem and the multiple-bottleneck bias problem more serious than for standard TCP. To address these problems, we apply the population ecology theory to design a novel congestion control algorithm. By analogy, we treat the network flows as the species in nature, the sending rates of the flows as the population number, and the bottleneck bandwidth as the food resources. Then we extend the construction method of population ecology models to design a control model, and implement the corresponding end-to-end protocol with virtual load factor feedback, which is called Explicit Virtual Load Feedback TCP (EVLF-TCP). The virtual load factor is computed based on the information of the bandwidth, the aggregate incoming traffic and the queue length in routers, and then senders adjust the sending rate based on the virtual load factor. Theoretical analysis and simulation results validate that EVLF-TCP achieves high utilization, fair bandwidth allocation independent of round-trip time, and near-zero packet drops. These characteristics are desirable for high speed networks.
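A rough sketch of the router-side computation suggested above: the virtual load factor combines the aggregate incoming rate and queue backlog against link bandwidth, and the sender scales its rate by that factor. The abstract does not give the exact formula or constants, so the expressions and numbers below are assumptions.

```python
def virtual_load_factor(incoming_bps, queue_bytes, bandwidth_bps, interval_s=0.1):
    """Router-side load estimate: >1 means overload, <1 means spare capacity.

    Assumed form: (arrival rate + rate needed to drain the backlog) / bandwidth.
    """
    drain_rate = 8 * queue_bytes / interval_s        # bits/s needed to empty the queue
    return (incoming_bps + drain_rate) / bandwidth_bps

def adjust_sending_rate(rate_bps, load, min_rate=10_000):
    """Sender-side reaction: scale the rate down when load > 1, probe up otherwise."""
    if load > 1.0:
        return max(rate_bps / load, min_rate)
    return rate_bps * min(1.0 / max(load, 1e-6), 1.05)   # cap the increase per interval

# Example with assumed numbers: 9 Mb/s arriving, 50 kB queued, 10 Mb/s link.
load = virtual_load_factor(incoming_bps=9e6, queue_bytes=50_000, bandwidth_bps=10e6)
print(round(load, 2), round(adjust_sending_rate(2e6, load)))
```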

11.
We investigate schemes for achieving service differentiation via weighted end-to-end congestion control mechanisms within the framework of the additive-increase-multiplicative-decrease (AIMD) principle, and study their performance as instantiations of the TCP protocol. Our first approach considers a class of weighted AIMD algorithms. This approach does not scale well in practice because it leads to excessive loss for flows with large weights, thereby causing early timeouts and a reduction in throughput. Our second approach considers a class of loss adaptive weighted AIMD algorithms. This approach scales by an order of magnitude compared to the previous approach, but is more susceptible to short-term unfairness and is sensitive to the accuracy of loss estimates. We conclude that adapting the congestion control parameters to the loss characteristics is critical to scalable service differentiation; on the other hand, estimating loss characteristics using purely end-to-end mechanisms is an inherently difficult problem.
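A toy sketch contrasting the two approaches above: plain weighted AIMD scales its additive increase by the flow's weight, while the loss-adaptive variant also tempers its parameters using an estimate of the loss rate. The parameterizations below are assumptions chosen only to illustrate the distinction.

```python
def weighted_aimd_update(cwnd, weight, loss):
    """First approach: additive increase scaled by the flow's weight."""
    if loss:
        return max(cwnd / 2.0, 1.0)          # standard multiplicative decrease
    return cwnd + weight                      # weight-proportional increase per RTT

def loss_adaptive_update(cwnd, weight, loss, loss_rate):
    """Second approach (sketch): damp the increase as the estimated loss rate grows,
    so heavily weighted flows do not drive themselves into early timeouts."""
    if loss:
        return max(cwnd * (1.0 - 0.5 / weight), 1.0)   # gentler decrease for large weights (assumed)
    return cwnd + weight / (1.0 + 10.0 * loss_rate)    # assumed damping term

# Example: a weight-4 flow under the plain weighted AIMD rule.
cwnd = 10.0
for loss in [False, False, True, False]:
    cwnd = weighted_aimd_update(cwnd, weight=4, loss=loss)
print(round(cwnd, 1))
```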

12.
The well-known Transport Control Protocol (TCP) is a crucial component of the TCP/IP architecture on which the Internet is built, and is a de facto standard for reliable communication on the Internet. At the heart of the TCP protocol is its congestion control algorithm. While most practitioners believe that the TCP congestion control algorithm performs very well, a complete analysis of the congestion control algorithm is yet to be done. A lot of effort has, therefore, gone into the evaluation of different performance metrics like throughput and average latency under TCP. In this paper, we approach the problem from a different perspective and use the competitive analysis framework to provide some answers to the question “how good is the TCP/IP congestion control algorithm?” We describe how the TCP congestion control algorithm can be viewed as an online, distributed scheduling algorithm. We observe that existing lower bounds for non-clairvoyant scheduling algorithms imply that no online, distributed, non-clairvoyant algorithm can be competitive with an optimal offline algorithm if both algorithms were given the same resources. Therefore, in order to evaluate TCP using competitive analysis, we must limit the power of the adversary, or equivalently, allow TCP to have extra resources compared to an optimal, offline algorithm for the same problem. In this paper, we show that TCP is competitive to an optimal, offline algorithm provided the former is given more resources. Specifically, we prove first that for networks with a single bottleneck (or point of congestion), TCP is O(1)-competitive to an optimal centralized (global) algorithm in minimizing the user-perceived latency or flow time of the sessions, provided we allow TCP O(1) times as much bandwidth and O(1) extra time per session. Second, we show that TCP is fair by proving that the bandwidths allocated to sessions quickly converge to fair sharing of network bandwidth.

13.
In wireless multimedia sensor networks (WMSNs), sensor nodes use different types of sensors to gather different types of data. In multimedia applications, it is necessary to provide reliable and fair protocols in order to meet the specific quality of service (QoS) requirements of these different types of data. To prolong the system lifetime of WMSNs, it is necessary to adjust the transmission rate and to mitigate network congestion. In previous work on WMSNs, exponential weighted priority-based rate control (EWPBRC) schemes with a traffic load parameter (TLP) were used to control congestion by adjusting transmission rates according to the various data types. However, when the TLP is fixed, a large change in data transmission causes a significant difference between the input transmission rate and the estimated output transmission rate of each sensor node. This study proposes a novel fuzzy logic controller (FLC) for the TLP in an EWPBRC scheme that estimates the output transmission rate of the parent node and then assigns a suitable transmission rate based on the traffic load of each child node, taking into account the different amounts of data being transmitted. Simulation results show that our proposed scheme achieves a better transmission rate than PBRC: delay and loss probability are reduced. In addition, our proposed scheme can effectively control the different transmission data types, achieving the QoS requirements of the system while decreasing network resource consumption.
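A rough illustration of the rate-assignment idea above: the parent node's output rate is estimated with an exponentially weighted average and then divided among child nodes according to their traffic-load priorities. The fuzzy controller that tunes the traffic load parameter is replaced here by a fixed weight, so this is only a simplified sketch with assumed names and values.

```python
def ew_estimate(prev_estimate, measured_rate, weight=0.2):
    """Exponentially weighted estimate of the node's achievable output rate.
    The weight stands in for the traffic load parameter (fixed here, not fuzzy-tuned)."""
    return (1.0 - weight) * prev_estimate + weight * measured_rate

def assign_child_rates(parent_rate, child_priorities):
    """Split the parent's estimated output rate among children by priority."""
    total = sum(child_priorities.values())
    return {child: parent_rate * prio / total for child, prio in child_priorities.items()}

# Example with assumed measurements (packets/s) and priorities per data type.
estimate = 400.0
for measured in [420, 380, 450]:
    estimate = ew_estimate(estimate, measured)
print(assign_child_rates(estimate, {"scalar": 1, "audio": 2, "video": 4}))
```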

14.
Computer Networks, 2003, 41(2): 211-225
Today's Internet architecture is mainly based on unicast communication and best-effort service. However, the development of the Internet has encouraged emerging services that are sensitive to delay or packet loss, as is the case for multimedia and group applications. The deployment of these applications should not compromise the proper transmission of TCP flows and would benefit significantly from flows that are responsive to congestion. We propose the efficient congestion avoidance mechanism (ECAM) as a generic framework for congestion control in the Internet, addressing the pressing need for congestion control in the various situations that occur in the Internet. ECAM is designed for uncontrolled unicast and multicast traffic and supports both reliable and unreliable best-effort flows. ECAM works not only for the best-effort service but also supports the new differentiated services, where out-of-profile packets may experience congestion. Implementation problems are also discussed.

15.
TCP-Cherry is a novel TCP congestion control scheme that we devised for ensuring high performance over satellite IP networks and similar networks, which are characterized by long propagation delays and high link error rates. In TCP-Cherry, two new algorithms, Fast-Forward Start and First-Aid Recovery, have been proposed for congestion control. Our algorithms use supplement segments, i.e., low-priority segments, to probe the available bandwidth in the network for the TCP connections along with carrying new data blocks. In this paper, we present our new congestion control scheme, TCP-Cherry, and devise an analytical model for it. Our major contributions in this paper include the analytical model and equations for performance evaluation, validation of the analytical model through comparison between analytical and simulation results, and a guideline to tune the buffer-related parameters at both the sender and the receiver ends for optimum throughput performance. Experiments show that simulation results and the calculated throughput from our analytical model match quite closely, thereby verifying the appropriateness of the model. In addition, from analysis of the simulation results, we discover that a buffer size at the receiver, rwnd, that is around four times maxcwnd, the maximum congestion window at the sender side, is likely to maintain high throughput over a wide range of operating conditions.
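The buffer-tuning guideline reported above is easy to apply directly; the helper below simply encodes the rwnd of roughly 4 x maxcwnd rule. The segment size and the satellite-link window in the example are assumed values.

```python
def recommended_rwnd(maxcwnd_segments, mss_bytes=1460):
    """Receiver buffer suggested by the TCP-Cherry guideline: about 4 x maxcwnd."""
    return 4 * maxcwnd_segments * mss_bytes

# Example: a GEO satellite path with an assumed maximum congestion window of 300 segments.
print(recommended_rwnd(300) // 1024, "KiB")   # roughly 1710 KiB of receive buffer
```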

16.
The IETF’s recent differentiated services (DS) architecture, which specifies a scalable mechanism for treating packets differently, offers new opportunities for building end-to-end quality of service (QoS) systems. However, it also introduces new challenges. In particular, it is not clear whether TCP’s flow and congestion control mechanisms work well with the mechanisms used for end-to-end QoS. For that reason it is essential to analyze whether the existing DS mechanisms can be used with standard TCP implementations or whether it is necessary to wait for upcoming features introduced in future modified versions of TCP. The general-purpose architecture for reservation and allocation (GARA) supports flow-specific QoS specification, immediate and advance reservation, and online monitoring and control of both individual resources and heterogeneous resource ensembles. Using GARA, we evaluated actual DS mechanisms provided by Cisco routers. We present the results of this evaluation and discuss their impact on the performance of popular TCP implementations.

17.
In this paper, we propose a distributed congestion-aware channel assignment (DCACA) algorithm for multi-channel wireless mesh networks (MC–WMNs). The frequency channels are assigned according to the congestion measures which indicate the congestion status at each link. Depending on the selected congestion measure (e.g., queueing delay, packet loss probability, and differential backlog), various design objectives can be achieved. Our proposed distributed algorithm is simple to implement as it only requires each node to perform a local search. Unlike most of the previous channel assignment schemes, our proposed algorithm assigns not only the non-overlapped (i.e., orthogonal) frequency channels, but also the partially-overlapped channels. In this regard, we introduce the channel overlapping and mutual interference matrices which model the frequency overlapping among different channels. Simulation results show that in the presence of elastic traffic (e.g., TCP Vegas or TCP Reno) sources, our proposed DCACA algorithm increases the aggregate throughput and also decreases the average packet round-trip time compared with the previously proposed Load-Aware channel assignment algorithm. Furthermore, in a congested IEEE 802.11b network setting, compared with the use of three non-overlapped channels, the aggregate network throughput can further be increased by 25% and the average round-trip time can be reduced by more than one half when all the 11 partially-overlapped channels are used.

18.
In this work, we attempt to improve the performance of TCP over ad hoc wireless networks (AWNs) by using a learning technique from the theory of learning automata. It is well-known that the use of TCP in its present form for reliable transport over AWNs leads to unnecessary packet losses, thus limiting the achievable throughput. This is mainly due to the aggressive, reactive, and deterministic manner in which it updates its congestion window. As AWNs are highly bandwidth constrained, this behavior of TCP leads to high contention among the packets of the flow, thus causing a high amount of packet loss. This further leads to high power consumption at mobile nodes, as the lost packets are recovered via several retransmissions at both the TCP and MAC layers. Hence, our proposal, hereafter called Learning-TCP, focuses on updating the congestion window in an efficient manner (conservative, proactive, and with finer and more flexible updates to the congestion window) in order to reduce contention and congestion, thus improving the performance of TCP in AWNs. The key advantage of Learning-TCP is that, without relying on any network feedback such as explicit congestion and link-failure notifications, it adapts to the changing network conditions and appropriately updates the congestion window by observing the inter-arrival times of TCP acknowledgments. We implemented Learning-TCP in ns-2.28 as well as the Linux kernel 2.6, and evaluated its performance for a wide range of network conditions. In all the studies, we observed that Learning-TCP outperforms TCP-Newreno by showing significant improvement in goodput and reduction in packet loss while maintaining higher fairness to the competing flows.
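A toy sketch of the learning-automata idea above: the sender keeps probabilities over a few window-update actions and reinforces whichever action was followed by shorter ACK inter-arrival times. The action set, reward rule, and constants are assumptions, not the paper's exact scheme; the update follows the standard linear reward-inaction automaton.

```python
import random

ACTIONS = {"increase": +1.0, "hold": 0.0, "decrease": -1.0}   # cwnd deltas in segments (assumed)

class LearningWindow:
    def __init__(self, cwnd=4.0, learning_rate=0.1):
        self.cwnd = cwnd
        self.lr = learning_rate
        self.prob = {a: 1.0 / len(ACTIONS) for a in ACTIONS}   # action probabilities
        self.last_action = "hold"
        self.prev_gap = None

    def choose_action(self):
        """Sample a window update according to the current action probabilities."""
        actions, weights = zip(*self.prob.items())
        self.last_action = random.choices(actions, weights=weights, k=1)[0]
        self.cwnd = max(1.0, self.cwnd + ACTIONS[self.last_action])
        return self.cwnd

    def observe_ack_gap(self, gap_s):
        """Reinforce the last action if ACKs now arrive faster (assumed reward signal)."""
        if self.prev_gap is not None:
            rewarded = gap_s <= self.prev_gap
            if rewarded:                       # linear reward-inaction update
                for a in self.prob:
                    if a == self.last_action:
                        self.prob[a] += self.lr * (1.0 - self.prob[a])
                    else:
                        self.prob[a] *= (1.0 - self.lr)
        self.prev_gap = gap_s
```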

19.
This paper presents a link model which captures the queue dynamics in response to a change in a transmission control protocol (TCP) source's congestion window. By considering both self-clocking and the link integrator effect, the model generalizes existing models and is shown to be more accurate by both open loop and closed loop packet level simulations. It reduces to the known static link model when flows' round trip delays are identical, and approximates the standard integrator link model when there is significant cross traffic. We apply this model to the stability analysis of fast active queue management scalable TCP (FAST TCP) including its filter dynamics. Under this model, the FAST control law is linearly stable for a single bottleneck link with an arbitrary distribution of round trip delays. This result resolves the notable discrepancy between empirical observations and previous theoretical predictions. The analysis highlights the critical role of self-clocking in TCP stability, and the proof technique is new and less conservative than existing ones.

20.
The Transmission Control Protocol (TCP), the most widely used transport protocol over the Internet, has been advertised to implement fairness between flows competing for the same narrow link. However, when session round-trip-times (RTTs) radically differ, the share may be anything but fair. This RTT-unfairness represents a problem that severely affects the performance of long-RTT flows and whose solution requires a revision of TCP’s congestion control scheme. To this aim, we discuss TCP Libra, a new transport protocol able to ensure fairness and scalability regardless of the RTT, while remaining friendly towards legacy TCP. As main contributions of this paper: (i) we focus on the model derivation and show how it leads to the design of TCP Libra; (ii) we analyze the role of its parameters and suggest how they may be adjusted to lead to asymptotic stability and fast convergence; (iii) we perform model-based, simulative, and real testbed comparisons with other TCP versions that have been reported as RTT-fair in the literature. Results demonstrate the ability of TCP Libra in ensuring RTT-fairness while remaining throughput efficient and friendly towards legacy TCP.
