Similar Documents
20 similar documents found
1.
In explicit TCP rate control, the receiver's advertised window size in acknowledgment (ACK) packets can be modified by intermediate network elements to reflect network congestion conditions. The TCP receiver's advertised window (i.e. the receive buffer of a TCP connection) limits the maximum window and consequently the throughput that can be achieved by the sender. Appropriate reduction of the advertised window can therefore control the number of packets a TCP source is allowed to send. This paper evaluates the performance of a TCP rate control scheme in which the receiver's advertised window size in ACK packets is modified in a network node in order to match the generated load to the bandwidth assigned in the node. Using simulation and performance metrics such as the packet loss rates and the cumulative number of TCP timeouts, we examine the service improvement that the TCP rate control scheme provides to users. The modified advertised windows computed in the network elements and the link utilization are also examined. Copyright © 2002 John Wiley & Sons, Ltd.
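As a rough illustration of the mechanism described above (a minimal sketch, not the paper's algorithm), an intermediate node can rewrite the advertised window carried in passing ACKs so that it never exceeds roughly the bandwidth assigned to the connection times its round-trip time; the function and parameter names below are illustrative assumptions.

    # Minimal sketch (not the paper's scheme): clamp the advertised window
    # carried in an ACK so the sender's load matches the bandwidth assigned
    # to the connection at this node. All names are illustrative.
    def clamp_advertised_window(ack_awnd_bytes: int,
                                assigned_bw_bps: float,
                                rtt_estimate_s: float,
                                mss_bytes: int = 1460) -> int:
        """Return the (possibly reduced) advertised window to rewrite into the ACK."""
        # Target window ~ assigned bandwidth x RTT (a bandwidth-delay product).
        target_bytes = int(assigned_bw_bps / 8 * rtt_estimate_s)
        # Never advertise less than one MSS, and never enlarge what the
        # receiver itself advertised.
        target_bytes = max(mss_bytes, target_bytes)
        return min(ack_awnd_bytes, target_bytes)

    # Example: a 2 Mbit/s share and a 100 ms RTT cap the window at ~25 KB,
    # even if the receiver advertised 64 KB.
    print(clamp_advertised_window(65535, 2_000_000, 0.1))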

2.
The transmission control protocol (TCP) is one of the most important Internet protocols. It provides reliable transport services between two end-hosts. Since TCP performance affects overall network performance, many studies have been done to model TCP performance in the steady state. However, recent research has shown that most TCP flows are short-lived. Therefore, it is more meaningful to model TCP performance during the initial stage of short-lived flows. In addition, the next-generation Internet will be a unified all-IP network that integrates both wireless and wired networks. In short, modelling short-lived TCP flows in wireless networks constitutes an important axis of research. In this paper, we propose simple wireless TCP models for short-lived flows that extend the existing analytical model proposed in [IEEE Commun. Lett. 2002; 6 (2):85–88]. In terms of wireless TCP, we categorize wireless TCP schemes into three types: the end-to-end scheme, the split connection scheme, and the local retransmission scheme, which is similar to the classification proposed in [IEEE/ACM Trans. Networking 1997; 756–769]. To validate the proposed models, we performed ns-2 simulations. The average differences between the session completion times calculated using the proposed models and the simulation results for the three schemes are less than 9, 16, and 7 ms, respectively. Consequently, the proposed models provide a satisfactory means of modelling the TCP performance of short-lived wireless TCP flows. Copyright © 2005 John Wiley & Sons, Ltd.

3.
While existing research shows that feedback-based congestion control mechanisms are capable of providing better video quality and higher link utilization for rate-adaptive packet video, there has been relatively little study on how to share network bandwidth among competing rate-adaptive video connections when feedback control is used in a fully distributed network. This paper addresses this issue by presenting a framework of network bandwidth sharing for transporting rate-adaptive packet video using feedback. We show how a weight-based bandwidth sharing policy can be used to allocate network bandwidth among competing video connections and design a feedback control algorithm using an Available Bit Rate (ABR)-like flow control mechanism. A novel video source rate adaptation algorithm is also introduced to decouple a video source's actual transmission rate from the rate used for distributed protocol convergence. Our feedback control algorithm provides guaranteed convergence and smooth source rate adaptation to our weight-based bandwidth sharing policy under any network configuration and any set of link distances. Finally, we show the on-line minimum rate renegotiation and weight adjustment options in our feedback control algorithm, which offer further flexibility in network bandwidth sharing for video connections. Copyright © 2000 John Wiley & Sons, Ltd.

4.
Large and sudden variations in packet transmission delays are often unavoidable in wireless networks. Such large delays, referred to as delay spikes (DSs), are likely to exceed several times the typical network round-trip time and can cause spurious TCP timeouts. Spurious timeouts lead to unnecessary retransmissions, a reduction of the TCP sender's transmission rate, and degraded TCP throughput. In this paper we propose a new scheme called DS-Agent. A spurious timeout is detected by the DS-Agent so that the TCP sender can respond to it accordingly. The simulation results show that the DS-Agent scheme performs better than F-RTO and TCP Reno in the presence of DSs caused by mobility. Copyright © 2006 John Wiley & Sons, Ltd.

5.
Mobile broadband interactive satellite communication systems are of great interest to both the academic and industrial communities. However, the conventional strictly layered protocol stack architecture and the standard TCP version perform poorly over satellite links. In this paper, we propose a comprehensive cross-layer Transmission Control Protocol (TCP) optimization architecture that considers the main factors affecting TCP performance. In our proposed architecture, we adopt two TCP split-connection performance enhancing proxies to isolate the satellite link from the terrestrial part of the broadband satellite communication system. Then, based on the proposed cross-layer architecture, we present an analytical model for the TCP throughput that takes the modulation and coding (ModCod) mode and the allocated bandwidth into account. In addition, we put forward a TCP-driven bandwidth sharing and ModCod mode optimization algorithm to maximize the TCP throughput on the satellite link. Extensive simulation results illustrate that our proposed comprehensive cross-layer TCP optimization approach is able to improve the TCP throughput significantly. Copyright © 2014 John Wiley & Sons, Ltd.

6.
The introduction of services with high bandwidth demands, such as multimedia services, has resulted in important changes in how services in the Internet are accessed and in what quality-of-experience requirements (i.e. a limited amount of packet loss, fairness between connections) are expected to ensure smooth service delivery. In the current congestion control mechanisms, misbehaving Transmission Control Protocol (TCP) stacks can easily achieve an unfair advantage over other connections by not responding to Explicit Congestion Notification (ECN) warnings, sent by the active queue management (AQM) system when congestion in the network is imminent. In this article, we present an accountability mechanism that holds connections accountable for their actions through the detection and penalization of misbehaving TCP stacks, with the goal of restoring fairness in the network. The mechanism is specifically targeted at deployment in multimedia access networks, as these environments are most prone to fairness issues due to misbehaving TCP stacks (i.e. long-lived connections and a moderate connection pool size). We argue that a cognitive approach is best suited to cope with the dynamicity of the environment and therefore present a cognitive detection algorithm that combines machine learning algorithms to classify connections into well-behaving and misbehaving profiles. This classification is in turn used by a differentiated AQM mechanism that treats well-behaving and misbehaving profiles differently. The performance of the cognitive accountability mechanism has been characterized both in terms of the accuracy of the cognitive detection algorithm and in terms of the overall impact of the mechanism on network fairness. Copyright © 2012 John Wiley & Sons, Ltd.

7.
Although many research efforts have been devoted to network congestion in the face of increasing Internet traffic, there has been little recent discussion on performance improvements for end hosts. In this paper, we propose a new architecture, called scalable socket buffer tuning (SSBT), to provide high-performance and fair service for many TCP connections at Internet end hosts. SSBT has two major features. One is to reduce the number of memory accesses at the sender host by using new system calls, referred to as the Simple Memory-copy Reduction (SMR) scheme. The other is equation-based automatic TCP buffer tuning (E-ATBT), in which the sender host estimates the 'expected' throughput of its TCP connections through a simple mathematical equation and assigns a send socket buffer to each of them according to the estimated throughput. If the socket buffer is short, a max–min fairness policy is used. We confirm the effectiveness of our proposed algorithm through both simulation and an experimental system. From the experimental results, we find that SSBT can achieve up to a 30% gain in Web server throughput, together with fair and effective usage of the sender socket buffer. Copyright © 2004 John Wiley & Sons, Ltd.
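As an illustration of the buffer-assignment step (a sketch under the assumption that per-connection demands have already been estimated, e.g. as throughput times RTT; this is not the paper's E-ATBT equation itself), a max–min fair split of a limited send-buffer pool can be computed as follows; all names are illustrative.

    # Hedged sketch: max-min fair split of a limited socket-buffer pool among
    # TCP connections whose "expected" buffer demands have already been
    # estimated (in E-ATBT, from an analytical throughput equation).
    def max_min_allocate(demands_bytes, total_pool_bytes):
        """Return per-connection buffer sizes under max-min fairness."""
        alloc = {}
        remaining = float(total_pool_bytes)
        pending = sorted(demands_bytes.items(), key=lambda kv: kv[1])  # smallest first
        while pending:
            fair_share = remaining / len(pending)
            conn, demand = pending.pop(0)
            give = min(demand, fair_share)   # fully satisfy small demands;
            alloc[conn] = int(give)          # larger ones share what is left
            remaining -= give
        return alloc

    # Example: a 256 KB pool shared by three connections with unequal demands.
    print(max_min_allocate({"c1": 40_000, "c2": 120_000, "c3": 200_000}, 262_144))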

8.
We propose a new scheme for a network service that guarantees a minimum throughput to flows accepted by admission control (AC). The whole scheme uses only a small set of packet classes in a core-stateless network. At the ingress of the network, each flow packet is marked into one of the classes, and within the network each class is assigned a different discarding priority. The AC method is based on edge-to-edge per-flow throughput measurements using the first packets of the flow, and it requires flows to send at a minimum rate. We evaluate the scheme through simulations in a simple bottleneck topology with different traffic loads consisting of TCP flows that carry files of varying sizes. We use a modified TCP source with a new algorithm that forces the source to send at a minimum rate. We compare our scheme with the best-effort service and study the influence of the measurement duration on the scheme's performance. The results show that the scheme guarantees the requested throughput to accepted flows and achieves a high utilization of network resources. Copyright © 2006 John Wiley & Sons, Ltd.
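A toy reading of the admission decision (a sketch under the assumption that the edge simply compares the throughput measured over a flow's first packets with the requested minimum rate; the paper's marking and measurement rules are more involved):

    # Toy sketch of a measurement-based admission decision. The acceptance
    # rule and all names are illustrative assumptions, not the paper's method.
    def admit_flow(first_packet_sizes_bytes, measurement_window_s, requested_min_bps):
        """Admit the flow if the throughput measured over its first packets
        meets the requested minimum rate."""
        measured_bps = sum(first_packet_sizes_bytes) * 8 / measurement_window_s
        return measured_bps >= requested_min_bps

    # Example: 20 x 1500-byte packets observed in 0.2 s -> 1.2 Mbit/s measured,
    # so a flow requesting 1 Mbit/s would be admitted.
    print(admit_flow([1500] * 20, 0.2, 1_000_000))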

9.
Adequately providing fault tolerance while using network capacity efficiently is a major topic of research in optical networks. In order to improve network utilization, grooming of low-rate connections in optical networks has usually been performed at the edge of the network. However, in all-optical networks, once a channel is assigned, its entire capacity is dedicated to the users independently of its grooming capabilities. As current users do not usually require such large capacities, bandwidth inefficiencies still occur. In this paper we address this issue by introducing unlimited grooming per link (UGPL), a new restoration mechanism for opaque mesh optical networks that grooms connections on a per-link basis. Simulation results show that UGPL provides the best bandwidth efficiency and the best blocking probability compared with traditional 1 + 1 protection and 1 : N end-to-end sharing schemes. Furthermore, we show that the 1 : N end-to-end restoration scheme provides no benefit over the simpler and faster 1 + 1 protection scheme. Copyright © 2004 John Wiley & Sons, Ltd.

10.
The demand for higher data rates has spurred the adoption of multiple-input multiple-output (MIMO) transmission techniques in IEEE 802.11 products. MIMO techniques provide an additional spatial dimension that can significantly increase the channel capacity. A number of multiuser MIMO systems have been proposed in which the multiple antennas at the physical layer are employed for multiuser access, allowing multiple users to share the same bandwidth. As these MIMO physical-layer technologies further evolve, the usable bandwidth per application increases; hence, the average service time per application decreases. However, in IEEE 802.11 distributed coordination function-based systems, a considerable amount of bandwidth is wasted during the medium access and coordination process. Therefore, as the usable bandwidth is enhanced using MIMO technology, the bandwidth wastage of medium access and coordination becomes a significant performance bottleneck. Hence, there is a fundamental need for bandwidth-sharing schemes at the medium access control (MAC) layer through which multiple connections can concurrently use the increased bandwidth provided by the physical-layer MIMO technologies. In this paper, we propose the MIMO-aware rate splitting (MRS) MAC protocol and examine its behavior under different scenarios. MRS is a distributed MAC protocol in which nodes locally cooperate with one another to share bandwidth by splitting the spatial channels of MIMO systems. Simulation results for the MRS protocol are obtained and compared with those of the IEEE 802.11n protocol. We show that our proposed MRS scheme can significantly outperform IEEE 802.11n in medium access delay and throughput. Copyright © 2012 John Wiley & Sons, Ltd.

11.
The relative differentiated service model provides assurances for the relative quality ordering between service classes, rather than for the actual service level in each class. In this paper, we describe a relative loss rate differentiation scheme in which packet drop probabilities, determined by an active queue management (AQM) mechanism based on random early detection (RED) in a first-in first-out (FIFO) queue, are weighted in inverse proportion to the price that the network operator assigns to each service class. In essence, we describe a scheme in which relative loss rate differentiation is incorporated directly into AQM. Most TCP flows today, particularly Web flows, can be characterized as short-lived flows. Using simulations with short-lived TCP flows, we show that the scheme is very effective in ensuring relative loss rate differentiation between service classes during times of network congestion. Copyright © 2004 John Wiley & Sons, Ltd.
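A small sketch of the price-weighted drop idea (the normalization below is an illustrative assumption, not the paper's exact rule): start from a common RED drop probability for the shared FIFO queue and scale it per class in inverse proportion to the class price.

    # Illustrative sketch: weight a common RED drop probability per class in
    # inverse proportion to each class's price, so higher-priced classes see
    # proportionally fewer drops. The normalization is an assumption.
    def class_drop_probs(red_drop_prob, class_prices):
        """Map a base RED drop probability to per-class drop probabilities."""
        inv = {c: 1.0 / price for c, price in class_prices.items()}
        mean_inv = sum(inv.values()) / len(inv)
        # Scale so that, averaged over classes, the base RED probability is kept.
        return {c: min(1.0, red_drop_prob * w / mean_inv) for c, w in inv.items()}

    # Example: classes priced 1, 2 and 4 units with a 6% base RED drop
    # probability end up with drop probabilities in the ratio 4 : 2 : 1.
    print(class_drop_probs(0.06, {"bronze": 1, "silver": 2, "gold": 4}))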

12.
In this paper, we propose a bidirectional bandwidth-allocation mechanism to improve TCP performance in IEEE 802.16 broadband wireless access networks. By coupling the bandwidth allocation for uplink and downlink connections, the proposed mechanism increases the throughput of the downlink TCP flow and enhances the efficiency of uplink bandwidth allocation for the TCP acknowledgment (ACK). According to the IEEE 802.16 standard, when serving a downlink TCP flow, the transmission of the uplink ACK, which is performed over a separate unidirectional connection, incurs additional bandwidth-request/allocation delay. This increases the round-trip time of the downlink TCP flow and accordingly decreases its throughput. First, we derive an analytical model to investigate the effect of the uplink bandwidth-request/allocation delay on the downlink TCP throughput. Second, we propose a simple, yet effective, bidirectional bandwidth-allocation scheme that combines proactive bandwidth allocation with piggyback bandwidth requests. The proposed scheme reduces unnecessary bandwidth-request delay and the associated signaling overhead through proactive allocation, while maintaining high efficiency of uplink bandwidth usage by using piggyback requests. Moreover, the proposed scheme is simple and practical; it can be implemented entirely in the base station without requiring any modification to the subscriber stations or resorting to cross-layer signaling mechanisms. The simulation results confirm that the proposed approach significantly increases the downlink TCP throughput and the uplink bandwidth efficiency.

13.
Building on an analysis of how optical burst switching (OBS) networks affect TCP performance, this paper studies the impact of the number of packets belonging to the same TCP/IP connection carried in a single burst on the throughput of TCP Reno, and derives a closed-form expression relating the throughput to the burst loss rate, the number of packets per burst, and the round-trip time (RTT). The analysis is validated through simulation. Both the analytical and simulation results show that, when the access link bandwidth is large, there exists an optimal number of packets per burst that maximizes TCP throughput.

14.
Internet access from mobile phones over cellular networks suffers from severe bandwidth limitations and high bit error rates on the wireless access links. Tailoring TCP connections to best fit the characteristics of this bottleneck link is thus very important for overall performance improvement. In this work, we propose a simple algorithm for deciding the optimal TCP segment size that maximizes the utilization of the bottleneck wireless TCP connection for mobile contents server access, taking the dynamic TCP window variation into account. The proposed algorithm can be used when the product of the access rate of the wireless link and the propagation time to the mobile contents servers is not large. Numerical examples show that the optimal TCP segment size becomes a constant value once the TCP window size (WS) exceeds a threshold. One can set the maximum segment size (MSS) of a wireless TCP connection to this optimal segment size for mobile contents server access to achieve maximum efficiency on the expensive wireless link. Copyright © 2007 John Wiley & Sons, Ltd.

15.
While Transmission Control Protocol (TCP) Performance Enhancing Proxy (PEP) solutions have long been the undisputed answer to the inherent problems of satellite links, the improvement of the regular end-to-end TCP congestion avoidance algorithms and the recent emphasis on the PEPs' drawbacks have opened the question of the PEPs' sustainability. Nevertheless, with a vast majority of Internet connections shorter than ten segments, TCP PEPs continue to be required to counter the poor efficiency of the end-to-end TCP start-up mechanisms. To reduce the dependency on PEPs, designing a new fast start-up TCP mechanism is therefore a major concern. However, while enlarging the Initial Window (IW) up to ten segments is, without any doubt, the fastest way to serve a short-lived connection in an uncongested network, many researchers are concerned about the impact of the large initial burst on an already congested network. Based on traffic observations and real experiments, Initial Spreading has been designed to remove those concerns whatever the load and type of network. It offers performance similar to a large IW in an uncongested network and outperforms existing end-to-end solutions in congested networks. In this paper, we show that Initial Spreading, which takes the specificities of the satellite link into account, is an efficient end-to-end alternative to TCP PEPs. Copyright © 2015 John Wiley & Sons, Ltd.

16.
Most high-speed links do not have adequate buffering, and as a result Active Queue Management (AQM) schemes that rely on queue size information for congestion control cannot be applied effectively on these links. A high-speed link will typically have small buffers in relation to the bandwidth-delay product of the link. In this paper we argue that rate-based AQM schemes should be used for such links. The goal is to match the aggregate rate of the active TCP connections to the available capacity while maintaining a minimal queue size and high link utilization. The AQM scheme described here employs a Proportional–Integral (PI) control strategy and explicitly takes into account the time delay in the control process. Copyright © 2006 John Wiley & Sons, Ltd.
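A minimal sketch of a rate-based PI update in the spirit described above (the gains, sampling interval and error definition are illustrative assumptions; the paper's controller additionally compensates explicitly for the control delay):

    # Minimal sketch of a rate-based Proportional-Integral (PI) AQM update.
    # Gains, sampling interval and the error definition are illustrative.
    class RatePIAQM:
        def __init__(self, capacity_bps, kp=1e-8, ki=1e-9, interval_s=0.01):
            self.capacity_bps = capacity_bps
            self.kp, self.ki, self.interval_s = kp, ki, interval_s
            self.integral = 0.0

        def update(self, arrival_rate_bps, queue_bytes, queue_ref_bytes=0):
            # Error: how far the offered load, plus the backlog to be drained
            # within one interval, exceeds the link capacity.
            error = (arrival_rate_bps - self.capacity_bps) \
                    + (queue_bytes - queue_ref_bytes) * 8 / self.interval_s
            self.integral += error * self.interval_s
            mark_prob = self.kp * error + self.ki * self.integral
            return min(1.0, max(0.0, mark_prob))

    # Example: a 100 Mbit/s link offered 110 Mbit/s starts marking packets.
    aqm = RatePIAQM(capacity_bps=100_000_000)
    print(aqm.update(arrival_rate_bps=110_000_000, queue_bytes=0))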

17.
With the growth in Internet access services over networks with asymmetric links, such as asymmetric digital subscriber line (ADSL) and cable-based access networks, it becomes crucial to evaluate the performance of TCP/IP over systems in which the bottleneck link speed on the reverse path is considerably slower than that on the forward path. In this paper, we provide guidelines for designing network control mechanisms for supporting TCP/IP. We determine the throughput as a function of buffering, round-trip times, and normalized asymmetry (defined as the ratio of the transmission time of acknowledgments (ACKs) on the reverse path to that of data packets on the forward path). We identify three modes of operation, which depend on the forward buffer size and the normalized asymmetry, and determine the conditions under which the forward link is fully utilized. We also show that drop-from-front discarding of ACKs on the reverse link provides performance advantages over other drop mechanisms in use. Asymmetry increases TCP's already high sensitivity to random packet losses that occur on a time scale faster than the connection round-trip time. We generalize the by-now well-known relation between the achievable TCP throughput and the inverse square root of the random loss probability, originally derived considering only data-path congestion. Specifically, random loss leads to significant throughput deterioration when the product of the loss probability, the normalized asymmetry, and the square of the bandwidth-delay product is large. Congestion on the reverse path adds considerably to TCP unfairness when multiple connections share the reverse bottleneck link. We show how such problems can be alleviated by per-connection buffer and bandwidth allocation on the reverse path.
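For reference, the classic data-path-only "square-root" relation that the authors generalize can be stated as throughput ≈ (MSS / RTT) · C / sqrt(p) with C ≈ sqrt(3/2); the tiny sketch below simply evaluates it (the paper's asymmetric-path generalization involving the normalized asymmetry is not reproduced here).

    # Classic square-root relation for TCP throughput under random loss p:
    # throughput ~ (MSS / RTT) * C / sqrt(p), with C ~ sqrt(3/2) for a
    # periodic-loss model. The paper's asymmetric-path generalization is
    # not reproduced here.
    import math

    def sqrt_loss_throughput_bps(mss_bytes, rtt_s, loss_prob, c=math.sqrt(1.5)):
        return (mss_bytes * 8 / rtt_s) * c / math.sqrt(loss_prob)

    # Example: 1460-byte segments, 100 ms RTT, 1% random loss -> ~1.4 Mbit/s.
    print(sqrt_loss_throughput_bps(1460, 0.1, 0.01))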

18.
19.
Multi-hop communication equipped with parallel relay nodes is an emerging network scenario in environments with high node density. Conventional interference-free medium access control (MAC) has little capability to utilize such parallel relays because it essentially prohibits co-channel interference and limits the feasibility of concurrent communications. This paper presents a cooperative multi-input multi-output (MIMO) space division multiple access (SDMA) design that uses each hop's parallel relay nodes to improve multi-hop throughput performance. Specifically, we use MIMO and SDMA to enable concurrent transmissions (from multiple Tx nodes to single or multiple Rx nodes) and to suppress the co-channel interference between simultaneous links. As a joint MAC/physical-layer (MAC/PHY) solution, our design has multiple MAC modules, including load balancing, which uniformly splits traffic packets across parallel relay nodes, and multi-hop scheduling, which takes co-channel interference into consideration. Our PHY-layer modules include distributed channel sounding, which exchanges channel information in a decentralized manner, and link adaptation, which estimates the instantaneous link rate per time frame. Simulation results validate that, compared with interference-free MAC or the existing Mitigating Interference using Multiple Antennas (MIMA-MAC) scheme, our proposed design can improve end-to-end throughput by around 30% to 50%. In addition, we further discuss its application to an extended multi-hop topology. Copyright © 2012 John Wiley & Sons, Ltd.

20.
An RTT-based mechanism for guaranteeing bandwidth fairness among TCP flows
TCP's end-to-end congestion control causes the bottleneck bandwidth obtained by a TCP connection to be inversely proportional to its RTT (packet round-trip time). To mitigate TCP's bias toward flows with small RTTs, the traffic-conditioning mechanisms of Differentiated Services can ensure that flows with large RTTs do not starve, provided that the small-RTT flows have already reached their target rates and acquired surplus resources. Existing RTT-based traffic-conditioning mechanisms are very effective when network congestion is light, but under heavy congestion they over-protect large-RTT flows and thereby starve small-RTT flows. We therefore propose an improved method based on an adaptive approach, whose main idea is to adjust the degree of protection given to large-RTT flows according to the level of network congestion. Extensive simulations show that the proposed mechanism effectively guarantees bandwidth fairness among TCP flows and is more robust than existing methods.
