Similar Literature
1.
Directly applying traditional TCP congestion control algorithms to MPTCP (Multipath TCP) raises fairness problems and fails to exploit the advantages of multipath transmission, so the existing MPTCP congestion control algorithms are studied here from the perspective of fairness. The study finds that existing MPTCP congestion control algorithms all assume identical round-trip times across subflows. A link-delay-based RTT compensation algorithm (Compensating for RTT mismatch, C-RTT) is proposed. By introducing a bandwidth occupancy ratio parameter and assigning an aggressiveness factor to each subflow of an MPTCP connection, the algorithm ensures that MPTCP flows and TCP flows share the available bandwidth fairly at the bottleneck link. NS3 simulations show that the algorithm effectively compensates for the unfairness caused by RTT mismatch among links, avoids aperiodic jitter of data across links, and preserves the advantages of multipath transmission.
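The abstract does not reproduce C-RTT's update rule. As a rough illustration of how a per-subflow aggressiveness factor can enter a coupled MPTCP window increase, here is a minimal Python sketch loosely modeled on LIA-style coupling (RFC 6356); the `aggressiveness` field and its use are assumptions for illustration, not the paper's C-RTT algorithm.

```python
def coupled_increase(subflows, acked_subflow):
    """One ACK-clocked congestion-window increase on an MPTCP subflow.

    LIA-style coupled increase scaled by a per-subflow aggressiveness
    factor; the real C-RTT rule is not specified in the abstract.
    """
    # alpha caps aggregate aggressiveness so the MPTCP connection takes no
    # more at a shared bottleneck than one TCP flow on its best path would.
    best = max(sf["cwnd"] / sf["rtt"] ** 2 for sf in subflows)
    total_rate = sum(sf["cwnd"] / sf["rtt"] for sf in subflows)
    cwnd_total = sum(sf["cwnd"] for sf in subflows)
    alpha = cwnd_total * best / total_rate ** 2

    sf = acked_subflow
    increase = min(alpha / cwnd_total, 1.0 / sf["cwnd"])  # never beat plain TCP
    sf["cwnd"] += sf.get("aggressiveness", 1.0) * increase


subflows = [{"cwnd": 10.0, "rtt": 0.05, "aggressiveness": 1.2},
            {"cwnd": 20.0, "rtt": 0.20, "aggressiveness": 1.0}]
coupled_increase(subflows, subflows[0])
```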

2.
Multipath transport protocols like Stream Control Transmission Protocol (SCTP) and Multipath TCP (MPTCP) have been introduced in the past as alternatives to traditional single path transport protocols like TCP and UDP. Various approaches to divide the flow on multiple paths have also been proposed in the literature. In this work, we show that the bandwidth estimation based resource pooling (BERP) congestion control algorithm is a practical implementation of the Min–Max optimization approach for flow division and verify this through ns-2 based simulations.
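One common reading of min-max flow division is to split traffic across the available paths so that the most heavily utilized path stays as lightly loaded as possible. The greedy water-filling sketch below illustrates that idea only; the path capacities and the incremental rule are assumptions, and this is not the BERP algorithm.

```python
def min_max_split(demand, capacities, step=0.01):
    """Split `demand` across paths so the maximum path utilization is minimized.

    Greedy water-filling: every small increment of traffic is routed to the
    currently least-utilized path, which equalizes utilization over time.
    """
    loads = [0.0] * len(capacities)
    remaining = demand
    while remaining > 1e-9:
        chunk = min(step, remaining)
        least = min(range(len(capacities)), key=lambda i: loads[i] / capacities[i])
        loads[least] += chunk
        remaining -= chunk
    return loads


print(min_max_split(demand=15.0, capacities=[10.0, 5.0, 2.0]))
```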

3.
IEEE Network, 2002, 16(5): 38-46
Today, the dominant paradigm for congestion control in the Internet is based on the notion of TCP friendliness. To be TCP-friendly, a source must behave in such a way as to achieve a bandwidth that is similar to the bandwidth obtained by a TCP flow that would observe the same round-trip time (RTT) and the same loss rate. However, with the success of the Internet comes the deployment of an increasing number of applications that do not use TCP as a transport protocol. These applications can often improve their own performance by not being TCP-friendly, which severely penalizes TCP flows. To design new applications to be TCP-friendly is often a difficult task. The idea of the fair queuing (FQ) paradigm as a means to improve congestion control was first introduced by Keshav (1991). While Keshav made a fundamental step toward a new paradigm for the design of congestion control protocols, he did not formalize his results so that his findings could be extended for the design of new congestion control protocols. We make this step and formally define the FQ paradigm as a paradigm for the design of new end-to-end congestion control protocols. This paradigm relies on per-flow FQ scheduling and longest-queue-drop buffer management in each router. We assume only selfish and noncollaborative end users. Our main contribution is the formal statement of the congestion control problem as a whole, which enables us to demonstrate the validity of the FQ paradigm. We also demonstrate that the FQ paradigm does not adversely impact the throughput of TCP flows and explain how to apply the FQ paradigm for the design of new congestion control protocols. As a pragmatic validation of the FQ paradigm, we discuss a new multicast congestion control protocol called packet pair receiver-driven layered multicast (PLM).
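To make the buffer-management half of the FQ paradigm concrete, the sketch below keeps one FIFO per flow in a shared buffer, serves flows round-robin, and, when the buffer is full, drops a packet from the longest queue. The class and method names are illustrative; this is not code from the paper.

```python
from collections import defaultdict, deque

class FQLongestQueueDrop:
    """Per-flow FIFO queues in one shared buffer with longest-queue-drop."""

    def __init__(self, capacity):
        self.capacity = capacity          # total packets the buffer may hold
        self.queues = defaultdict(deque)  # flow id -> that flow's packets
        self.active = deque()             # round-robin order of backlogged flows
        self.occupancy = 0

    def enqueue(self, flow_id, packet):
        if self.occupancy >= self.capacity:
            # Longest-queue-drop: penalize the flow hogging the shared buffer.
            longest = max(self.active, key=lambda f: len(self.queues[f]))
            self.queues[longest].popleft()
            self.occupancy -= 1
            if not self.queues[longest]:
                self.active.remove(longest)
        if not self.queues[flow_id]:
            self.active.append(flow_id)
        self.queues[flow_id].append(packet)
        self.occupancy += 1

    def dequeue(self):
        if not self.active:
            return None
        flow_id = self.active.popleft()   # serve one packet per flow per round
        packet = self.queues[flow_id].popleft()
        self.occupancy -= 1
        if self.queues[flow_id]:
            self.active.append(flow_id)
        return flow_id, packet
```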

4.
To save network resources and exploit the respective strengths of TCP and UDP, a multipath, multi-stream fusion network system combining the two protocols was designed. On the data-acquisition side, where many endpoints form a complex network, the system transmits data over TCP to guarantee reliable delivery; on the data-forwarding side, a point-to-point network transmits data over UDP, saving network resources and relieving network congestion. A queue-based fusion algorithm is applied when merging the multiple TCP data streams, which solves the duplicate forwarding and data loss caused by unsynchronized streams. Long-term operational tests show that the whole system meets the design requirements for transmission rate, stability, and reliability.
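The abstract only names a queue-based fusion algorithm. As a rough sketch of the idea (the thread structure, chunk-based reads, and the UDP forwarding target are assumptions, not details from the paper), each TCP collector can push received data into one shared queue that a single UDP forwarder drains in arrival order, decoupling the unsynchronized streams from the point-to-point forwarding link.

```python
import queue
import socket
import threading

fusion_queue = queue.Queue(maxsize=10_000)   # shared buffer fusing the TCP streams

def tcp_collector(conn):
    """Read chunks from one accepted TCP connection into the fusion queue."""
    with conn:
        while True:
            chunk = conn.recv(4096)
            if not chunk:
                break
            fusion_queue.put(chunk)          # blocks if the forwarder falls behind

def udp_forwarder(target=("203.0.113.10", 9000)):
    """Drain the fused queue and forward every chunk over one UDP socket."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        sock.sendto(fusion_queue.get(), target)

# One forwarder thread; a tcp_collector thread would be started per connection.
threading.Thread(target=udp_forwarder, daemon=True).start()
```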

5.
Improving TCP performance over wireless networks at the link layer
We present the transport unaware link improvement protocol (TULIP), which dramatically improves the performance of TCP over lossy wireless links, without competing with or modifying the transport- or network-layer protocols. TULIP is tailored for the half-duplex radio links available with today's commercial radios and provides a MAC acceleration feature applicable to collision-avoidance MAC protocols (e.g., IEEE 802.11) to improve throughput. TULIP's timers rely on a maximum propagation delay over the link, rather than performing a round-trip time estimate of the channel delay. The protocol does not require a base station and keeps no TCP state. TULIP is exceptionally robust when bit error rates are high; it maintains high goodput, i.e., only those packets which are in fact dropped on the wireless link are retransmitted and then only when necessary. The performance of TULIP is compared against the performance of the Snoop protocol (a TCP-aware approach) and TCP without link-level retransmission support. The results of simulation experiments using the actual code of the Snoop protocol show that TULIP achieves higher throughput, lower packet delay, and smaller delay variance.

7.
In this paper, we explore end-to-end loss differentiation algorithms (LDAs) for use with congestion-sensitive video transport protocols for networks with either backbone or last-hop wireless links. As our basic video transport protocol, we use UDP in conjunction with a congestion control mechanism extended with an LDA. For congestion control, we use the TCP-Friendly Rate Control (TFRC) algorithm. We extend TFRC to use an LDA when a connection uses at least one wireless link in the path between the sender and receiver. We then evaluate various LDAs under different wireless network topologies, competing traffic, and fairness scenarios to determine their effectiveness. In addition to evaluating LDAs derived from previous work, we also propose and evaluate a new LDA, ZigZag, and a hybrid LDA, ZBS, that selects among base LDAs depending upon observed network conditions. We evaluate these LDAs via simulation, and find that no single base algorithm performs well across all topologies and competition. However, the hybrid algorithm performs well across topologies and competition, and in some cases exceeds the performance of the best base LDA for a given scenario. All of the LDAs are reasonably fair when competing with TCP, and their fairness among flows using the same LDA depends on the network topology. In general, ZigZag and the hybrid algorithm are the fairest among all LDAs.
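ZigZag's and ZBS's decision rules are not reproduced in this abstract. As a generic illustration of an end-to-end loss differentiation heuristic (the EWMA statistics and the threshold rule are assumptions, not ZigZag itself), a sender can label a loss as wireless when the RTT observed around the loss sits near its baseline, on the reasoning that congestion losses are usually preceded by queue build-up.

```python
class SimpleLDA:
    """Toy end-to-end loss differentiation based on smoothed RTT statistics."""

    def __init__(self, alpha=0.125):
        self.alpha = alpha      # EWMA gain, borrowed from TCP's SRTT smoothing
        self.srtt = None        # smoothed round-trip time
        self.rttvar = 0.0       # smoothed RTT deviation

    def observe_rtt(self, rtt):
        if self.srtt is None:
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            self.rttvar += self.alpha * (abs(rtt - self.srtt) - self.rttvar)
            self.srtt += self.alpha * (rtt - self.srtt)

    def classify_loss(self, rtt_at_loss):
        # Losses seen while delay is near its baseline are treated as wireless;
        # losses preceded by inflated delay are attributed to congestion.
        if self.srtt is None or rtt_at_loss < self.srtt + self.rttvar:
            return "wireless"
        return "congestion"
```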

8.
TCP was originally designed for wired networks, assuming transmission errors were negligible. In practice, any acknowledgment time-out unconditionally triggers the congestion control mechanism, even in wireless networks where this assumption is not valid. Consequently, in wireless networks, TCP performance significantly degrades. To avoid this degradation, this paper proposes the so-called split TCP and UDP. In this approach, the access point splits the TCP connection and uses a customized and lighter transport protocol for the wireless segment. It takes advantage of the IEEE 802.11e Hybrid Coordination Function Controlled Channel Access (HCCA) mechanisms to remove redundant TCP functionalities. Specifically, the HCCA scheduler allows disabling of the congestion control in the wireless link. Similarly, the IEEE 802.11e error control service makes it possible to eliminate TCP acknowledgments, thereby reducing the TCP protocol overhead. Finally, the use of an HCCA scheduler permits providing fairness among the different data flows. The proposed split scheme is evaluated via extensive simulations. Results show that split TCP and UDP outperforms the analyzed TCP flavors specifically designed for wireless environments, as well as the split TCP solution, achieving up to 95% of end-user throughput gain. Furthermore, the proposed solution is TCP friendly because TCP flows are not degraded by the presence of flows using this approach.

9.
The characteristics of TCP and UDP lead to different network transmission behaviours: TCP is responsive to network congestion whereas UDP is not. This paper proposes two mechanisms that operate at the source node to regulate TCP and UDP flows and provide a differential service for them. One is the congestion-control mechanism, which uses the congestion signals detected by TCP flows to regulate the flows at the source node. The other is the time-slot mechanism, which assigns different numbers of time slots to flows to control their transmission. Based on the priority of each flow, different bandwidth proportions are allocated to each flow and differential services are provided. Simulation results offer some insights into these two mechanisms. Moreover, we summarize the factors that may impact the performance of these two mechanisms.
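As a minimal illustration of the time-slot mechanism (the slot count, priorities, and the proportional rule are assumptions, not the paper's parameters), the slots of one sending cycle can be allocated to flows in proportion to their priorities, which fixes the bandwidth share each flow may consume.

```python
def allocate_slots(priorities, slots_per_cycle=100):
    """Split the slots of one sending cycle among flows by priority weight."""
    total = sum(priorities.values())
    allocation = {flow: (slots_per_cycle * weight) // total
                  for flow, weight in priorities.items()}
    # Hand any slots lost to integer rounding to the highest-priority flow.
    leftover = slots_per_cycle - sum(allocation.values())
    allocation[max(priorities, key=priorities.get)] += leftover
    return allocation


# e.g. a responsive TCP flow granted twice the share of a UDP flow
print(allocate_slots({"tcp-1": 2, "udp-1": 1}))   # {'tcp-1': 67, 'udp-1': 33}
```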

10.
Transmission Control Protocol (TCP) is the most widely used transport protocol in today's Internet. Despite the fact that several mechanisms have been presented in recent literature to improve TCP, there remain some vexing attributes that impair TCP's performance. This paper addresses the efficiency and fairness of TCP in multihop satellite constellations, focusing mainly on the effect of changes in the number of flows on TCP behavior. When a handover occurs, a TCP sender may be forced to share a new set of satellites with other users, which changes the number of competing flows. This paper argues that the TCP rate of each flow should be dynamically adjusted to the available bandwidth when the number of flows competing for a single link changes over time. An explicit and fair scheme is developed. The scheme matches the aggregate window size of all active TCP flows to the network pipe. At the same time, it provides all active connections with feedback proportional to their round-trip time values so that the system converges to optimal efficiency and fairness. Feedback is signaled to TCP sources through the receiver's advertised window field in the TCP header of acknowledgments, and senders regulate their sending rates accordingly. The proposed scheme is referred to as explicit and fair window adjustment (XFWA). Extensive simulation results show that the XFWA scheme substantially improves system fairness, reduces the number of packet drops, and makes better use of the bottleneck link.
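The abstract gives the gist of XFWA: match the aggregate window of the active flows to the network pipe and feed each sender a window, proportional to its RTT, through the advertised-window field of ACKs. The sketch below follows that description; the equal-rate split and the 16-bit clamp are assumptions about details the abstract does not state.

```python
def xfwa_feedback(flows, bottleneck_bw_bps, max_adv_window=65535):
    """Per-flow advertised-window feedback for one shared satellite link.

    Each flow is granted an equal share of the link rate; its window is that
    share times its own RTT, so the aggregate window tracks the network pipe
    and individual windows are proportional to RTT.
    """
    fair_rate_bytes = bottleneck_bw_bps / 8 / len(flows)   # equal rate share
    return {f["id"]: min(int(fair_rate_bytes * f["rtt"]), max_adv_window)
            for f in flows}


flows = [{"id": 1, "rtt": 0.50}, {"id": 2, "rtt": 0.25}]
print(xfwa_feedback(flows, bottleneck_bw_bps=2_000_000))   # {1: 62500, 2: 31250}
```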

11.
Promoting the use of end-to-end congestion control in the Internet
This paper considers the potentially negative impacts of an increasing deployment of non-congestion-controlled best-effort traffic on the Internet. These negative impacts range from extreme unfairness against competing TCP traffic to the potential for congestion collapse. To promote the inclusion of end-to-end congestion control in the design of future protocols using best-effort traffic, we argue that router mechanisms are needed to identify and restrict the bandwidth of selected high-bandwidth best-effort flows in times of congestion. The paper discusses several general approaches for identifying those flows suitable for bandwidth regulation. These approaches are to identify a high-bandwidth flow in times of congestion as unresponsive, "not TCP-friendly", or simply using disproportionate bandwidth. A flow that is not "TCP-friendly" is one whose long-term arrival rate exceeds that of any conformant TCP in the same circumstances. An unresponsive flow is one failing to reduce its offered load at a router in response to an increased packet drop rate, and a disproportionate-bandwidth flow is one that uses considerably more bandwidth than other flows in a time of congestion.
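The "not TCP-friendly" test can be made concrete with the standard steady-state TCP model, under which a conformant TCP flow with packet size B, round-trip time R, and drop rate p receives at most roughly 1.5 * sqrt(2/3) * B / (R * sqrt(p)). The sketch below applies that bound; the function and variable names are mine.

```python
import math

def tcp_friendly_rate(packet_size_bytes, rtt_s, loss_rate):
    """Upper bound (bytes/s) on the rate of a conformant TCP flow.

    Simple steady-state model: T = 1.5 * sqrt(2/3) * B / (R * sqrt(p)),
    i.e. roughly 1.22 * B / (R * sqrt(p)).
    """
    return 1.5 * math.sqrt(2.0 / 3.0) * packet_size_bytes / (rtt_s * math.sqrt(loss_rate))

def is_tcp_friendly(arrival_rate_bytes_s, packet_size_bytes, rtt_s, loss_rate):
    """Flag a flow whose long-term arrival rate exceeds the TCP bound."""
    return arrival_rate_bytes_s <= tcp_friendly_rate(packet_size_bytes, rtt_s, loss_rate)


# A 1 Mbyte/s flow over a 100 ms RTT path with 1% drops is well above the
# ~184 Kbyte/s a conformant TCP would achieve, so it is not TCP-friendly.
print(is_tcp_friendly(1_000_000, 1500, 0.1, 0.01))   # False
```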

12.
There is a vast literature on the throughput analysis of the IEEE 802.11 media access control (MAC) protocol. However, very little has been done on investigating the interplay between the collision avoidance mechanisms of the 802.11 MAC protocol and the dynamics of upper layer transport protocols. In this paper, we tackle this issue from an analytical, simulative, and experimental perspective. Specifically, we develop Markov chain models to compute the distribution of the number of active stations in an 802.11 wireless local area network (WLAN) when long-lived transmission control protocol (TCP) connections compete with finite-load user datagram protocol (UDP) flows. By embedding these distributions in the MAC protocol modeling, we derive approximate but accurate expressions of the TCP and UDP throughput. We validate the model accuracy through performance tests carried out in a real WLAN for a wide range of configurations. Our analytical model and the supporting experimental outcomes show that 1) the total TCP throughput is basically independent of the number of open TCP connections and the aggregate TCP traffic can be equivalently modeled as two saturated flows; and 2) in the saturated regime, n UDP flows obtain about n times the aggregate throughput achieved by the TCP flows, which is independent of the overall number of persistent TCP connections.

13.
MBLUE: a flow-oriented active queue management algorithm
徐建, 李善平. 《电子学报》(Acta Electronica Sinica), 2002, 30(11): 1732-1736
MBLUE (Modified BLUE) is a flow-oriented active queue management algorithm. Instead of using the average queue length to indicate buffer congestion, it manages network congestion from the frequency of packet drops and the degree of link idleness. It detects early congestion at the bottleneck and avoids congestion by dropping and marking packets. It maintains only a single FIFO queue and, with little per-flow state, shares network bandwidth fairly among different flows. It can absorb transient bursts, keep non-TCP flows under reasonable control, and maintain a short average queue length, thereby controlling and relieving network congestion. Simulations of TCP/IP networks confirm that the algorithm is robust in allocating bandwidth fairly and in reducing packet loss rates.
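MBLUE inherits BLUE's core control loop, which is well documented: the marking/dropping probability is raised when packets are lost to buffer overflow and lowered when the output link goes idle, with updates spaced by a freeze time. The sketch below shows that BLUE-style update; the step sizes and freeze interval are illustrative values, and MBLUE's specific modifications are not reproduced here.

```python
import time

class BlueMarker:
    """BLUE-style marking probability driven by loss and link-idle events."""

    def __init__(self, d_increase=0.0025, d_decrease=0.00025, freeze_time=0.1):
        self.p_mark = 0.0                 # probability of marking/dropping arrivals
        self.d_increase = d_increase      # step applied on buffer overflow (loss)
        self.d_decrease = d_decrease      # step applied when the link sits idle
        self.freeze_time = freeze_time    # minimum spacing between updates (s)
        self.last_update = 0.0

    def _may_update(self):
        now = time.monotonic()
        if now - self.last_update < self.freeze_time:
            return False
        self.last_update = now
        return True

    def on_packet_loss(self):
        if self._may_update():
            self.p_mark = min(1.0, self.p_mark + self.d_increase)

    def on_link_idle(self):
        if self._may_update():
            self.p_mark = max(0.0, self.p_mark - self.d_decrease)
```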

14.
Katz, D.; Ford, P.S. IEEE Network, 1993, 7(3): 38-47
The Connectionless Network Protocol (CLNP), which is supported by the associated OSI routing protocols, is proposed as a replacement for the Internet Protocol (IP). The basis of the proposal is to run the Internet transport protocols, the Transmission Control Protocol (TCP), and the User Datagram Protocol (UDP) on top of CLNP in an approach known as TCP and UDP with bigger addresses (TUBA). The fundamentals of CLNP and the OSI connectionless routing architecture, the operation of the IP suite with CLNP replacing IP, the support of Internet applications operating on top of TUBA, and a transition plan to a TUBA Internet are discussed.

15.
In this paper, we present a receiver-oriented, request/response protocol for the Web that is compatible with the dynamics of TCP's congestion control algorithm. The protocol, called WebTP, is designed to be completely receiver-based in terms of transport initiation, flow-control and congestion-control. We propose a dual window-cum-rate based congestion control mechanism that is compatible with parallel TCP flows, and in fact interacts better with a congested network state. In support of our receiver-driven design, we developed a novel retransmission scheme that is robust to delay variations. The resulting flows achieve efficient network utilization and are qualitatively fair in their interaction amongst themselves and even with competing TCP flows. The paper also provides detailed simulation results to support the protocol design.

16.

Transmission Control Protocol (TCP) is the dominant transport protocol in today's Internet. A recent congestion control algorithm is BBR (Bottleneck Bandwidth and Round-trip time) from Google, created with the aim of increasing throughput and reducing delay. Whereas loss-based congestion control algorithms probe for the congestion limit by filling router queues, BBR drains the queues at the bottleneck by sending at exactly the bottleneck link rate. It does so through its pacing rate, which is derived from the delivery rate observed at the receiver and used as the estimated bottleneck bandwidth. At high data rates, however, the pipe already fills during the startup phase, which degrades the wireless access point by inducing losses specific to that environment, so the current pacing rate is unsuited to producing higher throughput. In the proposed system, named R-BBR, the startup gain is therefore set lower than BBR's, which in turn lowers the pacing rate and relieves queue pressure at the sink node during the startup phase; BBR's startup phase is modified to solve the problem of the pipe filling up under high data rates. R-BBR has been evaluated over a wide range of wired and wireless networks by varying factors such as the startup gain, congestion window, and pacing rate. The results indicate that R-BBR outperforms BBR with a significant performance improvement.
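The knob R-BBR turns is BBR's pacing rate, which is a gain multiplied by the estimated bottleneck bandwidth; standard BBR uses a startup gain of 2/ln 2, roughly 2.89. The sketch below only shows that relationship; the reduced gain of 2.0 is an illustrative assumption, not the value chosen by the paper.

```python
import math

BBR_STARTUP_GAIN = 2.0 / math.log(2.0)   # ~2.89, BBR's standard startup gain
RBBR_STARTUP_GAIN = 2.0                  # illustrative reduced gain for R-BBR

def pacing_rate(bottleneck_bw_bps, gain):
    """Pacing rate = pacing gain x estimated bottleneck bandwidth."""
    return gain * bottleneck_bw_bps

# With a lower startup gain, packets are paced out more slowly, so the
# bottleneck (e.g. a wireless access point) queues fewer packets in startup.
bw_estimate = 50_000_000                 # 50 Mbps delivery-rate estimate
print(pacing_rate(bw_estimate, BBR_STARTUP_GAIN))    # ~144.3 Mbps
print(pacing_rate(bw_estimate, RBBR_STARTUP_GAIN))   # 100.0 Mbps
```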

17.
Optimization of SIP Session Setup Delay for VoIP in 3G Wireless Networks
Wireless networks beyond 2G aim at supporting real-time applications such as VoIP. Before a user can start a VoIP session, the end-user terminal has to establish the session using signaling protocols such as H.323 and the Session Initiation Protocol (SIP) in order to negotiate media parameters. The time interval to perform the session setup is called the session setup time. It can be affected by the quality of the wireless link, measured in terms of frame error rate (FER), which can result in retransmissions of lost packets and can lengthen the session setup time. Therefore, such protocols should have a session setup time optimized against loss. One way to do so is by choosing the appropriate retransmission timer and the underlying protocols. In this paper, we focus on SIP session setup delay and propose optimizing it using an adaptive retransmission timer. We also evaluate SIP session setup performance with various underlying protocols (Transmission Control Protocol (TCP), User Datagram Protocol (UDP), radio link protocols (RLPs)) as a function of the FER. For a 19.2 Kbps channel, the SIP session setup time can be up to 6.12 s with UDP and 7 s with TCP when the FER is up to 10 percent. The use of RLP (1, 2, 3) and RLP (1, 1, 1, 1, 1, 1) brings the session setup time down to 3.4 s under UDP and 4 s under TCP for the same FER and the same channel bandwidth. We also compare SIP and H.323 performance using an adaptive retransmission timer: SIP outperforms H.323, especially for a FER higher than 2 percent.
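The adaptive retransmission timer itself is not specified in this abstract. A common way to adapt a signaling retransmission timeout to measured network delay is the SRTT/RTTVAR estimator TCP uses (RFC 6298), combined with exponential backoff on each retransmission as in SIP's timer rules; the sketch below illustrates that combination and should not be read as the paper's exact timer.

```python
class AdaptiveRetransmitTimer:
    """Retransmission timeout adapted to measured RTT, with exponential backoff."""

    def __init__(self, initial_rto=0.5, max_rto=4.0):
        self.srtt = None        # smoothed RTT
        self.rttvar = None      # RTT variation
        self.rto = initial_rto  # current retransmission timeout (s)
        self.max_rto = max_rto

    def on_rtt_sample(self, rtt):
        if self.srtt is None:
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt)
            self.srtt = 0.875 * self.srtt + 0.125 * rtt
        self.rto = min(self.max_rto, self.srtt + 4 * self.rttvar)

    def on_timeout(self):
        # An unanswered request doubles the timer, as SIP retransmissions do.
        self.rto = min(self.max_rto, self.rto * 2)
        return self.rto
```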

18.
Quick User Datagram Protocol (UDP) Internet Connections (QUIC) is an experimental, low-latency transport protocol proposed by Google, which is still being improved and specified in the Internet Engineering Task Force (IETF). The viewer's quality of experience (QoE) in HTTP adaptive streaming (HAS) applications may be improved with the help of QUIC's low latency, improved congestion control, and multiplexing features. We measured the streaming performance of QUIC on wireless and cellular networks in order to understand whether the problems that occur when running HTTP over TCP can be reduced by using HTTP over QUIC. The performance of QUIC was tested in the presence of network interface changes caused by the mobility of the viewer. We observed that QUIC resulted in quicker start of media streams and a better streaming and seeking experience, especially during higher levels of congestion in the network, and it performed better than TCP when the viewer was mobile and switched between wireless networks. Furthermore, we measured QUIC's performance in an emulated network with varying amounts of loss and delay to evaluate how QUIC's multiplexing feature would benefit HAS applications. We compared the performance of HAS applications multiplexing video streams with HTTP/1.1 over multiple TCP connections, HTTP/2 over one TCP connection, and QUIC over one UDP connection. We observed that QUIC provided better performance than TCP on a network with large delays; however, QUIC did not provide a significant improvement when the loss rate was large. Finally, we analyzed the performance of the congestion control mechanisms implemented by QUIC and TCP, and tested their ability to provide fairness among streaming clients. We found that QUIC always provided fairness among QUIC flows, but was not always fair to TCP.

19.
The throughput degradation of Transmission Control Protocol (TCP)/Internet Protocol (IP) networks over lossy links, due to the coexistence of congestion losses and link corruption losses, is very similar to the degradation of processor performance (i.e., cycles per instruction) due to control hazards in computer design. First, the two types of loss events in networks with lossy links are analogous to the two possible outcomes of a branch in a processor (taken vs. not taken). Second, both problems result in performance degradation in their domains, i.e., penalties (in clock cycles) in a processor and throughput degradation (in bits per second) in a TCP/IP network. This has motivated us to apply speculative techniques (i.e., speculating on the outcome of branch predictions), used to overcome control dependencies in a processor, to improve throughput when lossy links are involved in TCP/IP connections. The objective of this paper is to propose a cross-layer network architecture to improve network throughput over lossy links. The system consists of protocol-level speculation based algorithms at the transport layer, and protocol enhancements at the middleware and network layers that provide control and performance parameters to transport layer functions. Simulation results show that, compared with prior research, our proposed system is effective in improving network throughput over lossy links, capable of handling incorrect speculations, fair to other competing flows, backward compatible with legacy networks, and relatively easy to implement.

20.
Video streaming is often carried out by congestion-controlled transport protocols to preserve network sustainability. However, the continued growth of such non-live video flows depends on the user's quality of experience. One possible solution is to deploy complex quality-of-service systems inside the core network. Another is to keep the end-to-end principle while making transport protocols aware of video quality rather than throughput. The objective of this article is to investigate the latter by proposing a novel transport mechanism that targets video-quality fairness among video flows. Our proposal, called VIRAL for virtual rate-quality curve, allows congestion-controlled transport protocols to provide fairness in terms of both throughput and video quality. VIRAL is compliant with any rate-based congestion control mechanism that provides a smooth sending rate for multimedia applications. Implemented inside TFRC, a TCP-friendly protocol, VIRAL is shown to enable both intra-fairness between video flows in terms of video quality and inter-fairness in terms of throughput between TCP and video flows.
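The central object in VIRAL is a per-flow rate-quality curve: quality fairness then amounts to driving flows toward a common quality level and reading each flow's target rate off its own curve. The sketch below only illustrates that mapping with piecewise-linear interpolation; the sample curves and the equal-quality rule are assumptions for illustration, not the paper's virtual-curve construction.

```python
import bisect

def rate_for_quality(curve, quality):
    """Interpolate the sending rate that yields `quality` on a rate-quality curve.

    `curve` is a list of (rate_kbps, quality) points sorted by increasing quality.
    """
    qualities = [q for _, q in curve]
    i = bisect.bisect_left(qualities, quality)
    if i == 0:
        return curve[0][0]
    if i == len(curve):
        return curve[-1][0]
    (r0, q0), (r1, q1) = curve[i - 1], curve[i]
    return r0 + (r1 - r0) * (quality - q0) / (q1 - q0)


# Two videos of different complexity: equal quality implies unequal rates.
easy_video = [(200, 30.0), (400, 36.0), (800, 40.0)]    # (kbps, PSNR in dB)
hard_video = [(400, 30.0), (900, 36.0), (1800, 40.0)]
print(rate_for_quality(easy_video, 36.0))   # 400.0 kbps
print(rate_for_quality(hard_video, 36.0))   # 900.0 kbps
```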
