Similar Literature
20 similar documents found (search time: 62 ms)
1.
On-line, spatially localized information about internal network performance can greatly assist dynamic routing algorithms and traffic transmission protocols. However, it is impractical to measure network traffic at all points in the network. A promising alternative is to measure only at the edge of the network and infer internal behavior from these measurements. We concentrate on the estimation and localization of internal delays based on end-to-end delay measurements from a source to receivers. We propose a sequential Monte Carlo (SMC) procedure capable of tracking nonstationary network behavior and estimating time-varying internal delay characteristics. Simulation experiments demonstrate the performance of the SMC approach.
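The sequential Monte Carlo idea described above can be sketched as a bootstrap particle filter that tracks a drifting mean delay from noisy end-to-end measurements. This is a minimal illustration on synthetic data; the random-walk state model, noise levels, and all parameter values are assumptions for the sketch, not the paper's procedure.

```python
import math
import random

random.seed(0)

def track_delay(measurements, n_particles=500, drift=0.05, obs_noise=0.2):
    """Bootstrap particle filter tracking a slowly varying mean delay.

    Each particle is one hypothesis for the current mean delay. Particles
    drift by a Gaussian random walk (the nonstationary dynamics), are
    reweighted by the likelihood of each new end-to-end measurement, and
    are then resampled in proportion to their weights.
    """
    particles = [random.uniform(0.0, 3.0) for _ in range(n_particles)]
    estimates = []
    for z in measurements:
        # Propagate: random-walk state transition.
        particles = [p + random.gauss(0.0, drift) for p in particles]
        # Weight: Gaussian observation likelihood.
        weights = [math.exp(-((z - p) ** 2) / (2 * obs_noise ** 2))
                   for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Posterior-mean estimate of the current delay.
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # Multinomial resampling.
        particles = random.choices(particles, weights=weights, k=n_particles)
    return estimates

# Synthetic end-to-end delays whose mean jumps from 0.5 to 1.5 mid-trace,
# mimicking a nonstationary internal-delay change the filter must track.
obs = [random.gauss(0.5, 0.2) for _ in range(60)] + \
      [random.gauss(1.5, 0.2) for _ in range(60)]
est = track_delay(obs)
```

The estimate follows the level shift within a handful of measurements, which is the "tracking nonstationary behavior" property the abstract emphasizes.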

2.
We present a novel methodology for identifying internal network performance characteristics based on end-to-end multicast measurements. The methodology, solidly grounded in statistical estimation theory, can be used to characterize the internal loss and delay behavior of a network. Measurements on the MBone have been used to validate the approach in the case of losses. Extensive simulation experiments provide further validation of the approach, not only for losses but also for delays. We also describe our strategy for deploying the methodology on the Internet. This includes the continued development of the National Internet Measurement Infrastructure to support RTP-based end-to-end multicast measurements and the development of software tools to analyze the traces. Once complete, this combined software/hardware infrastructure will provide a service for understanding and forecasting the performance of the Internet.

3.
A TCP Throughput Model Based on the Gilbert Packet-Loss Mechanism (cited by 1)
曾彬, 张大方, 黎文伟, 谢高岗. 《电子学报》 (Acta Electronica Sinica), 2009, 37(8): 1728-1732
The packet-loss mechanism is the key to deriving a TCP throughput model and directly affects the model's accuracy. This paper uses a four-state Gilbert loss mechanism to describe the packet-loss behavior of an end-to-end path and models TCP's congestion-control process, on which basis a more accurate TCP throughput model is proposed. Experiments show that the improved model fits measured values well and predicts the throughput of real TCP flows more accurately.
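A minimal sketch of the loss process underlying this kind of model: the classic two-state Gilbert-Elliott chain, which alternates between a low-loss "good" state and a bursty "bad" state. The paper's four-state mechanism refines this, and the transition and loss probabilities below are illustrative only.

```python
import random

random.seed(1)

def gilbert_loss_trace(n, p_good_to_bad=0.05, p_bad_to_good=0.5,
                       loss_good=0.001, loss_bad=0.3):
    """Per-packet loss indicators from a two-state Gilbert-Elliott chain.

    Losses are rare in the 'good' state and bursty in the 'bad' state,
    so losses cluster in time rather than occurring independently.
    """
    bad = False
    trace = []
    for _ in range(n):
        trace.append(random.random() < (loss_bad if bad else loss_good))
        if bad:
            bad = not (random.random() < p_bad_to_good)
        else:
            bad = random.random() < p_good_to_bad
    return trace

trace = gilbert_loss_trace(100_000)
loss_rate = sum(trace) / len(trace)
# Stationary loss rate is roughly P(bad)*loss_bad + P(good)*loss_good,
# with P(bad) = p_good_to_bad / (p_good_to_bad + p_bad_to_good).
```

Feeding such correlated loss traces into a TCP congestion-window model (instead of assuming independent losses) is what makes Gilbert-style throughput models more faithful to real paths.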

4.
In this paper, we focus on the performance of TCP enhancements for a hybrid terrestrial-satellite network. While a large body of literature exists regarding modeling TCP performance for the wired Internet, and recently over a single-hop wireless link, the literature is very sparse on TCP analysis over a hybrid wired-wireless (multi-hop) path. We seek to make a contribution to this problem (where the wireless segment is a satellite uplink) by deriving analytical estimates of TCP throughput for two widely deployed approaches, TCP splitting and E2E (end-to-end) TCP with link-layer support, as a function of key parameters such as terrestrial/satellite propagation delay, segment loss rate, and buffer size. Our analysis is supported by simulations; throughput comparisons indicate the superiority of TCP splitting over the E2E scheme in most cases. However, in situations where the end-to-end delay is dominated by the terrestrial portion and buffering is very limited at the intermediate node, E2E achieves higher throughput than TCP splitting.

5.
This work proposes a stochastic model to characterize the transmission control protocol (TCP) over optical burst switching (OBS) networks, which helps in understanding the interaction between TCP's congestion-control mechanism and the characteristic bursty losses in the OBS network. We derive the steady-state throughput of a TCP NewReno source by modeling it as a Markov chain and the OBS network as an open queueing network with rejection blocking. We model all the phases in the evolution of the TCP congestion window and evaluate the number of packets sent and the time spent in the different states of TCP. We model the mixed assembly process, the burst assembler and disassembler modules, and the core network using queueing theory, and compute the burst loss probability and end-to-end delay in the network. We derive an expression for the throughput of a TCP source by solving the models developed for the source and the network with a set of fixed-point equations. To evaluate the impact of a burst loss on each TCP flow accurately, we define the burst as a composition of per-flow bursts (a per-flow burst being a burst of packets from a single source). Analytical and simulation results validate the model and highlight the importance of accounting for the individual phases in the evolution of the TCP congestion window.

6.
A Comparative Study of Several Active Queue Management Algorithms (cited by 9)
吴春明, 姜明, 朱淼良. 《电子学报》 (Acta Electronica Sinica), 2004, 32(3): 429-434
Active Queue Management (AQM) is a router buffer-management technique proposed by the IETF to address Internet congestion control. Based on ns-2 simulation experiments, this paper compares the performance of several major AQM algorithms: RED, BLUE, ARED, and SRED. The properties studied include queue length, drop probability, packet-loss rate, the effect of the number of connections on throughput, and the effect of buffer size on link utilization. The simulation results show that BLUE, ARED, and SRED all outperform RED in these respects.
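As a rough sketch of the RED baseline compared above (the threshold and weight values are illustrative, not the paper's experimental settings): RED maintains an exponentially weighted moving average of the queue length and drops arriving packets with a probability that rises linearly between two thresholds.

```python
def ewma_queue(avg, sample, weight=0.002):
    """RED's exponentially weighted moving average of the queue length."""
    return (1 - weight) * avg + weight * sample

def red_drop_prob(avg_q, min_th=5.0, max_th=15.0, max_p=0.1):
    """Classic RED drop probability as a function of the average queue.

    Below min_th no early drops occur; above max_th every arrival is
    dropped; in between the probability ramps linearly up to max_p.
    """
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)
```

BLUE, ARED, and SRED replace or adapt this fixed linear ramp (e.g., by reacting to loss/idle events or tuning max_p on-line), which is the behavior the comparison in the paper evaluates.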

7.
With the growth in Internet access services over networks with asymmetric links such as asymmetric digital subscriber line (ADSL) and cable-based access networks, it becomes crucial to evaluate the performance of TCP/IP over systems in which the bottleneck link speed on the reverse path is considerably slower than that on the forward path. In this paper, we provide guidelines for designing network control mechanisms for supporting TCP/IP. We determine the throughput as a function of buffering, round-trip times, and normalized asymmetry (defined as the ratio of the transmission time of acknowledgments (ACKs) on the reverse path to that of data packets on the forward path). We identify three modes of operation, which depend on the forward buffer size and the normalized asymmetry, and determine the conditions under which the forward link is fully utilized. We also show that drop-from-front discarding of ACKs on the reverse link provides performance advantages over other drop mechanisms in use. Asymmetry increases TCP's already high sensitivity to random packet losses that occur on a time scale faster than the connection round-trip time. We generalize the well-known relation between TCP throughput and the square root of the random loss probability, originally derived considering only data-path congestion. Specifically, random loss leads to significant throughput deterioration when the product of the loss probability, the normalized asymmetry, and the square of the bandwidth-delay product is large. Congestion on the reverse path adds considerably to TCP's unfairness when multiple connections share the reverse bottleneck link. We show how such problems can be alleviated by per-connection buffer and bandwidth allocation on the reverse path.

8.
We explore the performance of reliable data communication in mobile computing environments. Motion across wireless cell boundaries causes increased delays and packet losses while the network learns how to route data to a host's new location. Reliable transport protocols like TCP interpret these delays and losses as signs of network congestion. They consequently throttle their transmissions, further degrading performance. We quantify this degradation through measurements of protocol behavior in a wireless networking testbed. We show how current TCP implementations introduce unacceptably long pauses in communication during cellular handoffs (800 ms and longer), and propose an end-to-end fast retransmission scheme that can reduce these pauses to levels more suitable for human interaction (200 ms). Our work makes clear the need for reliable transport protocols to differentiate between motion-related and congestion-related packet losses, and suggests how to adapt these protocols to perform better in mobile computing environments.

9.
Providers of high quality of service over telecommunication networks require accurate methods for remote measurement of link-level performance. Recent research in network tomography has demonstrated that it is possible to estimate internal link characteristics, e.g., link delays and packet losses, using unicast probing schemes in which probes are exchanged between several pairs of sites in the network. We present a new method for estimating internal link delay distributions using the end-to-end packet-pair delay statistics gathered by back-to-back unicast packet-pair probes. Our method is based on a variant of the penalized maximum-likelihood expectation-maximization (PML-EM) algorithm applied to an additive finite mixture model for the link delay probability density functions. The mixture model incorporates a combination of discrete and continuous components, and we use a minimum message length (MML) penalty for selection of the model order. We present results of Matlab and ns-2 simulations to illustrate the promise of our network tomography algorithm for light cross-traffic scenarios.

10.
In this research, we first investigate the cross-layer interaction between TCP and routing protocols in the IEEE 802.11 ad hoc network. On-demand ad hoc routing protocols respond to network events such as channel noise, mobility, and congestion in the same manner, which, in association with TCP, deteriorates the quality of an existing end-to-end connection. The poor end-to-end connectivity in turn deteriorates TCP's performance. Based on the well-known TCP-friendly equation, we conduct a quantitative study of the TCP operating range using static routing and long-lived TCP flows, and show that the additive-increase, multiplicative-decrease (AIMD) behavior of the TCP window mechanism is too aggressive for a typical multihop IEEE 802.11 network with a low bandwidth-delay product. To address these problems, we propose two complementary mechanisms: the TCP fractional window increment (FeW) scheme and the Route-failure notification using BUlk-losS Trigger (ROBUST) policy. The TCP FeW scheme is a preventive solution used to reduce congestion-driven wireless link loss. The ROBUST policy is a corrective solution that enables on-demand routing protocols to suppress the overreactions induced by aggressive TCP behavior. It is shown by computer simulation that these two mechanisms result in a significant improvement of TCP throughput without modifying the basic TCP window or wireless MAC mechanisms.

11.
Wireless link losses result in poor TCP throughput, since losses are perceived as congestion by TCP, resulting in source throttling. To mitigate this effect, 3G wireless link designers have augmented their systems with extensive local retransmission mechanisms. In addition, to increase throughput, intelligent channel-state-based scheduling has also been introduced. While these mechanisms have reduced the impact of losses on TCP throughput and improved channel utilization, the gains have come at the expense of increased delay and rate variability. In this paper, we comprehensively evaluate the impact of variable rate and variable delay on long-lived TCP performance. We propose a model to explain and predict TCP's throughput over a link with variable rate and/or delay. We also propose a network-based solution, called the Ack Regulator, that mitigates the effect of variable rate and/or delay without significantly increasing the round-trip time, while improving TCP performance by up to 100%.

12.
This paper examines the performance of TCP/IP, the Internet data transport protocol, over wide-area networks (WANs) in which data traffic could coexist with real-time traffic such as voice and video. Specifically, we attempt to develop a basic understanding, using analysis and simulation, of the properties of TCP/IP in a regime where: (1) the bandwidth-delay product of the network is high compared to the buffering in the network, and (2) packets may incur random loss (e.g., due to transient congestion caused by fluctuations in real-time traffic, or wireless links in the path of the connection). The following key results are obtained. First, random loss leads to significant throughput deterioration when the product of the loss probability and the square of the bandwidth-delay product is larger than one. Second, for multiple connections sharing a bottleneck link, TCP is grossly unfair toward connections with higher round-trip delays. This means that a simple first-in first-out (FIFO) queueing discipline might not suffice for data traffic in WANs. Finally, while the Reno version of TCP produces less bursty traffic than the original Tahoe version, it is less robust than the latter when successive losses are closely spaced. We conclude by indicating modifications that may be required both at the transport and network layers to provide good end-to-end performance over high-speed WANs.
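The inverse-square-root dependence of throughput on random loss noted in this abstract is often summarized by the well-known Mathis et al. approximation, throughput ≈ MSS / (RTT · sqrt(2p/3)). A minimal sketch (the MSS, RTT, and loss values are illustrative):

```python
import math

def tcp_throughput_bps(mss_bytes, rtt_s, loss_prob):
    """Mathis-style approximation: throughput scales as 1/(RTT * sqrt(p))."""
    return (mss_bytes * 8) / (rtt_s * math.sqrt(2 * loss_prob / 3))

# Quadrupling the random loss rate halves the achievable throughput,
# illustrating the square-root sensitivity discussed in the abstract.
t_low = tcp_throughput_bps(1460, 0.1, 0.01)
t_high = tcp_throughput_bps(1460, 0.1, 0.04)
```

With a high bandwidth-delay product, even a small loss probability caps the achievable rate well below the link capacity, which is exactly the regime the paper analyzes.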

13.
Current end-to-end Internet congestion control under tail-drop (TD) queue management experiences performance degradations such as multiple packet losses, high queueing delay, and low link utilization. In this paper, we review recently proposed active queue management (AQM) algorithms for supporting end-to-end transmission control protocol (TCP) congestion control. We focus on recently developed control-theoretic design and analysis methods for AQM-based TCP congestion-control dynamics. In this context, we analyze the problems of existing AQM proposals, in which congestion is detected and controlled reactively based on current and/or past congestion. We then argue that AQM-based TCP congestion control should be adaptive to the dynamically changing traffic situation in order to detect, control, and avoid current and incipient congestion proactively. Finally, we survey two adaptive and proactive AQM algorithms, the PID-controller and Pro-Active Queue Management (PAQM), designed using classical proportional-integral-derivative (PID) feedback control to overcome the reactive congestion-control dynamics of existing AQM algorithms. A comparative study of these algorithms against existing AQM algorithms is given. A simulation study under a wide range of realistic traffic conditions suggests that the PID-controller and PAQM outperform other AQM algorithms such as random early detection (RED) [Floyd and Jacobson, 18] and the proportional-integral (PI) controller [Hollot et al., 24].
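A generic discrete PID loop on the queue-length error gives the flavor of such controllers: the proportional term reacts to current congestion, the integral term to accumulated congestion, and the derivative term to incipient congestion. This is a hypothetical sketch with illustrative gains, not the tuned PID-controller or PAQM design surveyed in the paper.

```python
def make_pid_aqm(q_ref, kp=1e-3, ki=5e-4, kd=5e-4, dt=0.01):
    """Discrete PID loop mapping queue-length error to a drop probability."""
    state = {"integral": 0.0, "prev_err": 0.0}

    def step(q_len):
        err = q_len - q_ref
        state["integral"] += err * dt              # accumulated congestion
        deriv = (err - state["prev_err"]) / dt     # congestion trend
        state["prev_err"] = err
        p = kp * err + ki * state["integral"] + kd * deriv
        return min(max(p, 0.0), 1.0)               # clamp to a probability

    return step

aqm = make_pid_aqm(q_ref=100.0)
```

The derivative term is what makes such a controller proactive: a rapidly growing queue raises the drop probability before the queue reference is badly overshot.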

14.
The impact of multihop wireless channel on TCP performance (cited by 6)
This paper studies TCP performance in a stationary multihop wireless network using IEEE 802.11 for channel access control. We first show that, given a specific network topology and flow patterns, there exists an optimal window size W* at which TCP achieves the highest throughput via maximum spatial reuse of the shared wireless channel. However, TCP grows its window size much larger than W*, leading to throughput reduction. We then explain the TCP throughput decrease using our observations and analysis of the packet loss in an overloaded multihop wireless network. We find that network overload is typically first signified by packet drops due to wireless link-layer contention, rather than the buffer-overflow-induced losses observed in the wired Internet. As the offered load increases, the probability of packet drops due to link contention also increases, and eventually saturates. Unfortunately, the link-layer drop probability is insufficient to keep the TCP window size around W*. We model and analyze the link contention behavior, based on which we propose link RED, which fine-tunes the link-layer packet-dropping probability to stabilize the TCP window size around W*. We further devise adaptive pacing to better coordinate channel access along the packet forwarding path. Our simulations demonstrate a 5 to 30 percent improvement in TCP throughput using the proposed two techniques.

15.
1 Introduction: An analysis of performance using MPLS TE is presented in the paper. When Multiprotocol Label Switching (MPLS) is first introduced in Network Society [2-3], the original idea about MPLS is that it maps L3 routing (the traditional longest address match) to L2 switching (the fixed short la…

16.
This paper studies the feasibility of, and algorithms for, inferring the delay at each link in a communication network from a large number of end-to-end measurements. The restriction is that we are not allowed to measure directly on each link and can only observe the route delays. It is assumed that we have considerable flexibility in choosing which routes to measure. We investigate two different cases: 1) each link delay is a constant, and 2) each link delay is modeled as a random variable from a family of distributions with unknown parameters. We answer whether such indirect inference is possible at all and, when possible, how it can be carried out. The emphasis is on developing maximum-likelihood estimators for scenario 2) when the link delays are modeled by exponential random variables or mixtures of exponentials. We have derived solutions based on the EM algorithm and demonstrated that, even though they do not necessarily reflect the true model parameters, they do seem to maximize the likelihood in most cases, and the resulting probability density functions match the true functions on the regions where the probability mass concentrates.
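For scenario 2), the core EM step can be sketched by fitting a two-component exponential mixture to a set of observed delays. This is a plain mixture fit on synthetic data, not the paper's full route-based estimator, and all parameter values are illustrative.

```python
import math
import random

random.seed(2)

def em_exp_mixture(xs, iters=200):
    """EM for a mixture of two exponentials with means m1, m2 and weight w.

    E-step: compute each point's responsibility under component 1.
    M-step: reestimate the mixing weight and the component means from
    the responsibility-weighted data.
    """
    mean = sum(xs) / len(xs)
    m1, m2, w = 0.5 * mean, 2.0 * mean, 0.5
    for _ in range(iters):
        # E-step: posterior probability that each x came from component 1.
        r = []
        for x in xs:
            a = w * math.exp(-x / m1) / m1
            b = (1.0 - w) * math.exp(-x / m2) / m2
            r.append(a / (a + b))
        # M-step: weighted means and mixing weight.
        s = sum(r)
        w = s / len(xs)
        m1 = sum(ri * x for ri, x in zip(r, xs)) / s
        m2 = sum((1.0 - ri) * x for ri, x in zip(r, xs)) / (len(xs) - s)
    return m1, m2, w

# Synthetic delays: 2/3 of samples with mean 1, 1/3 with mean 10.
xs = [random.expovariate(1.0) for _ in range(2000)] + \
     [random.expovariate(0.1) for _ in range(1000)]
m1, m2, w = em_exp_mixture(xs)
```

With well-separated component scales, the fitted means land near the true values even though, as the abstract cautions, the EM solution maximizes likelihood rather than guaranteeing recovery of the true parameters.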

17.
The throughput degradation of Transport Control Protocol (TCP)/Internet Protocol (IP) networks over lossy links, due to the coexistence of congestion losses and link corruption losses, is very similar to the degradation of processor performance (i.e., cycles per instruction) due to control hazards in computer design. First, the two types of loss events in networks with lossy links are analogous to the two possible outcomes of a branch in a processor (taken vs. not taken). Second, both problems result in performance degradation in their domains: penalties (in clock cycles) in a processor, and throughput degradation (in bits per second) in a TCP/IP network. This has motivated us to apply speculative techniques (i.e., speculating on the outcome of branch predictions), used to overcome control dependencies in a processor, to improve throughput when lossy links are involved in TCP/IP connections. The objective of this paper is to propose a cross-layer network architecture to improve network throughput over lossy links. The system consists of protocol-level speculation-based algorithms at the transport layer, and protocol enhancements at the middleware and network layers that provide control and performance parameters to transport-layer functions. Simulation results show that, compared with prior research, our proposed system is effective in improving network throughput over lossy links, capable of handling incorrect speculations, fair to other competing flows, backward compatible with legacy networks, and relatively easy to implement.

18.
It is well known that the bufferless nature of optical burst-switching (OBS) networks causes random burst loss even at low traffic loads. When TCP is used over OBS, these random losses make the TCP sender decrease its congestion window even though the network may not be congested, resulting in significant TCP throughput degradation. In this paper, we propose a multi-layer loss-recovery approach with automatic repeat request (ARQ) and Snoop for OBS networks, given that TCP is used at the transport layer. We evaluate the performance of Snoop and ARQ at the lower layer over a hybrid IP-OBS network. Based on the simulation results, the proposed multi-layer hybrid ARQ + Snoop approach outperforms all other approaches even at high loss probability. We developed an analytical model for end-to-end TCP throughput and verified the model against simulation results.

19.
A comparison of mechanisms for improving TCP performance over wireless links (cited by 1)
Reliable transport protocols such as TCP are tuned to perform well in traditional networks where packet losses occur mostly because of congestion. However, networks with wireless and other lossy links also suffer from significant losses due to bit errors and handoffs. TCP responds to all losses by invoking congestion control and avoidance algorithms, resulting in degraded end-to-end performance in wireless and lossy systems. We compare several schemes designed to improve the performance of TCP in such networks. We classify these schemes into three broad categories: end-to-end protocols, where loss recovery is performed by the sender; link-layer protocols that provide local reliability; and split-connection protocols that break the end-to-end connection into two parts at the base station. We present the results of several experiments performed in both LAN and WAN environments, using throughput and goodput as the metrics for comparison. Our results show that a reliable link-layer protocol that is TCP-aware provides very good performance. Furthermore, it is possible to achieve good performance without splitting the end-to-end connection at the base station. We also demonstrate that selective acknowledgments and explicit loss notifications result in significant performance improvements.

20.
Hotspots represent transient but highly congested regions in wireless ad hoc networks that result in increased packet loss, end-to-end delay, and out-of-order packet delivery. We present a simple, effective, and scalable hotspot mitigation protocol (HMP) in which mobile nodes independently monitor local buffer occupancy, packet loss, and MAC contention and delay conditions, and take local actions in response to the emergence of hotspots, such as suppressing new route requests and rate-controlling TCP flows. We use analysis, simulation, and experimental results from a wireless testbed to demonstrate the effectiveness of HMP in mobile ad hoc networks. HMP balances resource consumption among neighboring nodes and improves end-to-end throughput, delay, and packet loss. Our results indicate that HMP can also improve network connectivity by preventing premature network partitions. We present an analysis of hotspots and the detailed design of HMP. We evaluate the protocol's ability to effectively mitigate hotspots in mobile ad hoc networks based on on-demand and proactive routing protocols.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号