Similar Documents
20 similar documents found (search time: 15 ms)
1.
Scalable IP lookup for Internet routers
Internet protocol (IP) address lookup is a central processing function of Internet routers. While a wide range of solutions to this problem have been devised, very few simultaneously achieve high lookup rates, good update performance, high memory efficiency, and low hardware cost. High performance solutions using content addressable memory devices are a popular but high-cost solution, particularly when applied to large databases. We present an efficient hardware implementation of a previously unpublished IP address lookup architecture, invented by Eatherton and Dittia (see M.S. thesis, Washington Univ., St. Louis, MO, 1998). Our experimental implementation uses a single commodity synchronous random access memory chip and less than 10% of the logic resources of a commercial configurable logic device, operating at 100 MHz. With these quite modest resources, it can perform over 9 million lookups/s, while simultaneously processing thousands of updates/s, on databases with over 100000 entries. The lookup structure requires 6.3 bytes per address prefix: less than half that required by other methods. The architecture allows performance to be scaled up by using parallel fast IP lookup (FIPL) engines, which interleave accesses to a common memory interface. This architecture allows performance to scale up directly with available memory bandwidth. We describe the tree bitmap algorithm, our implementation of it in a dynamically extensible gigabit router being developed at Washington University in Saint Louis, and the results of performance experiments designed to assess its performance under realistic operating conditions.
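The tree bitmap algorithm compresses each multibit trie node into two bitmaps: an internal bitmap marking the prefixes stored inside the node, and an external bitmap marking which child subtries exist. As a rough illustration of the first of these (a sketch only, not the authors' hardware implementation; the stride width and bit layout are illustrative assumptions), the internal-bitmap probe for the longest matching prefix within one stride can be written as:

```python
# Hedged sketch of a tree-bitmap internal-bitmap probe. For a stride
# of k bits, a node stores an internal bitmap of 2^k - 1 bits, one per
# prefix of length 0..k-1 inside the node, laid out breadth-first:
# the bit for a prefix p of length L sits at index (2^L - 1) + int(p, 2).
STRIDE = 3  # illustrative stride width; bitmap has 2**STRIDE - 1 = 7 bits

def longest_match_in_node(internal_bitmap: int, bits: str):
    """Return the bitmap index of the longest prefix of `bits` (length
    at most STRIDE - 1) whose bit is set in the node's internal bitmap,
    or None if no prefix stored in this node matches."""
    for length in range(len(bits), -1, -1):  # try the longest prefix first
        prefix = bits[:length]
        idx = (1 << length) - 1 + (int(prefix, 2) if prefix else 0)
        if internal_bitmap >> idx & 1:
            return idx
    return None
```

Because a whole stride is examined per memory access, lookup cost is proportional to the address length divided by the stride, which is what lets a single memory chip sustain the lookup rates quoted above.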

2.
This paper discusses the architecture, protocol, analysis, and experimentation of optical packet switching routers incorporating optical-label switching (OLS) technologies and electronic edge routers with traffic shaping capabilities. The core optical router incorporates all-optical switching with contention resolution in the wavelength, time, and space domains. It is also capable of accommodating traffic of any protocol and format, and supports packet, flow, burst, and circuit traffic. The edge router is designed to achieve traffic shaping with consideration for quality of service and priority-based class of service. Simulation results show packet loss rates below 0.3% at a load of 0.7 and jitter values below 18 μs. Traffic shaping reduces the packet loss rate by a factor of ~5 while adding negligible additional latency. The OLS core routers and the electronic edge routers are built around field-programmable gate arrays that implement the wavelength-aware forwarding and contention resolution algorithms. The experiment demonstrates optical-label-based packet switching with a packet loss rate near 0.2%.

3.
The rapid increase of Internet traffic is pushing the deployment of WDM technology in the next-generation high-speed Internet backbone. Routers in the backbone could still remain a potential bottleneck. In this article we consider some design issues of high-throughput optical routers which combine the advantages of WDM with the new optical switching technology. We first introduce a proposed Internet architecture based on the optical burst switching mechanism. Some important design issues in optical buffering and control functions are addressed which are expected to have significant effects on router performance. Numerical results of a simulation study are also presented along with discussions.

4.
Server-side congestion arises when a large number of users wish to retrieve files from a server over a short period of time. Under such conditions, users are in a unique position to benefit enormously by sharing retrieved files. Pseudoserving, a new paradigm for Internet access, provides incentives for users to contribute to the speedy dissemination of server files through a contract set by a “superserver”. Under this contract, the superserver grants a user a referral to where a copy of the requested file may be retrieved in exchange for the user's assurance to serve other users for a specified period of time. Simulations that consider only network congestion occurring near the server show that: (1) pseudoserving is effective because it self-scales to handle very high request rates; (2) pseudoserving is feasible because a user who participates as a pseudoserver benefits enormously in return for a relatively small contribution of the user's resources; (3) pseudoserving is robust under realistic user behavior because it can tolerate a large percentage of contract breaches; and (4) pseudoserving can exploit locality to reduce usage of network resources. Experiments performed on a local area network that account for the processing of additional layers of protocols and the finite processing and storage capacities of the server and the clients corroborate the simulation results. They also demonstrate the benefits of exploiting network locality in reducing download times and network traffic while making referrals to a pseudoserver. Limitations of pseudoserving and potential solutions to them are also discussed.

5.
In this paper, we study the problem of designing globally stable, scalable congestion control algorithms for the Internet. Prior work has primarily used linear stability as the criterion for such a design. Global stability has been studied only for single-node, single-source problems. Here, we obtain conditions for a general-topology network accessed by sources with heterogeneous delays. We obtain a sufficient condition for global stability in terms of the increase/decrease parameters of the congestion control algorithm and the price functions used at the links.

6.
Under the assumption that queueing delays will eventually become small relative to propagation delays, we derive stability results for a fluid flow model of end-to-end Internet congestion control. The theoretical results of the paper are intended to be decentralized and locally implemented: each end system needs knowledge only of its own round-trip delay. Criteria for local stability and rate of convergence are completely characterized for a single-resource, single-user system. Stability criteria are also described for networks where all users share the same round-trip delay. Numerical experiments investigate extensions to more general networks. Through simulations, we are able to evaluate the relative importance of queueing delays and propagation delays on network stability. Finally, we suggest how these results may be used to design network resources.

7.
Promoting the use of end-to-end congestion control in the Internet
This paper considers the potentially negative impacts of an increasing deployment of non-congestion-controlled best-effort traffic on the Internet. These negative impacts range from extreme unfairness against competing TCP traffic to the potential for congestion collapse. To promote the inclusion of end-to-end congestion control in the design of future protocols using best-effort traffic, we argue that router mechanisms are needed to identify and restrict the bandwidth of selected high-bandwidth best-effort flows in times of congestion. The paper discusses several general approaches for identifying those flows suitable for bandwidth regulation. These approaches are to identify a high-bandwidth flow in times of congestion as unresponsive, “not TCP-friendly”, or simply using disproportionate bandwidth. A flow that is not “TCP-friendly” is one whose long-term arrival rate exceeds that of any conformant TCP in the same circumstances. An unresponsive flow is one failing to reduce its offered load at a router in response to an increased packet drop rate, and a disproportionate-bandwidth flow is one that uses considerably more bandwidth than other flows in a time of congestion.
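The "TCP-friendly" test described above can be made concrete with the widely used throughput approximation T ≈ 1.22·B/(R·√p), where B is the packet size, R the round-trip time, and p the packet drop rate. A minimal sketch (the function and parameter names are mine, not from the paper):

```python
from math import sqrt

def tcp_friendly_rate(packet_bytes: float, rtt_s: float, loss_rate: float) -> float:
    """Approximate upper bound (bytes/s) on the long-term rate of a
    conformant TCP under the given conditions: T = 1.22 * B / (R * sqrt(p))."""
    return 1.22 * packet_bytes / (rtt_s * sqrt(loss_rate))

def is_tcp_friendly(rate: float, packet_bytes: float, rtt_s: float, loss_rate: float) -> bool:
    """A flow is 'not TCP-friendly' when its long-term arrival rate
    exceeds what a conformant TCP could achieve in the same circumstances."""
    return rate <= tcp_friendly_rate(packet_bytes, rtt_s, loss_rate)
```

For example, with 1500-byte packets, a 100 ms RTT, and a 1% drop rate, the bound is 183 kB/s; a best-effort flow sustaining more than that through congestion would be flagged for bandwidth regulation.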

8.
TCP Vegas: end to end congestion avoidance on a global Internet
Vegas is an implementation of TCP that achieves between 37% and 71% better throughput on the Internet, with one-fifth to one-half the losses, compared to the implementation of TCP in the Reno distribution of BSD Unix. This paper motivates and describes the three key techniques employed by Vegas, and presents the results of a comprehensive experimental performance study of the Vegas and Reno implementations of TCP, using both simulations and measurements on the Internet.
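The best-known of the Vegas techniques is its congestion-avoidance comparison of expected and actual throughput: with BaseRTT the minimum round-trip time observed, the sender estimates its backlog in the network as Diff = (cwnd/BaseRTT − cwnd/RTT)·BaseRTT and keeps Diff between two thresholds α and β. A simplified sketch (units and the default threshold values are illustrative, not taken from the paper):

```python
def vegas_adjust(cwnd: float, base_rtt: float, rtt: float,
                 alpha: float = 1.0, beta: float = 3.0) -> float:
    """One Vegas-style congestion-avoidance decision. `base_rtt` is the
    minimum RTT seen so far; `diff` estimates how many segments this
    connection has queued inside the network."""
    expected = cwnd / base_rtt             # throughput if queues were empty
    actual = cwnd / rtt                    # measured throughput
    diff = (expected - actual) * base_rtt  # estimated backlog in segments
    if diff < alpha:
        return cwnd + 1                    # too little data in flight: grow
    if diff > beta:
        return cwnd - 1                    # queues building up: shrink
    return cwnd                            # within the target band: hold
```

Because the window shrinks before losses occur, Vegas avoids the loss-driven oscillation of Reno, which is the source of the throughput and loss improvements reported above.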

9.
Fluid models have been the main tools for Internet congestion control. By capturing how the average rate of each flow evolves, the fluid model proves to be useful as it predicts the equilibrium point to which the system trajectory converges and also provides conditions under which convergence is ensured, i.e., the system is stable. However, due to inherent randomness in the network caused by random packet arrivals or random packet marking, the actual system evolution is always stochastic in nature. In this paper, we show that we can be better off using a stochastic approach toward congestion control. We first prove that the equilibrium point of a fluid model can be quite different from the true average rate of the corresponding stochastic system. After describing the notion of stability for the two approaches, we show that a stable fluid model can impose too much restriction on the choice of system parameters such as buffer size or link utilization. In particular, we show that under fluid models there exists a fundamental tradeoff between link utilization and the buffer size requirement for large systems, while in the more realistic setting of stochastic models there is no such tradeoff. This implies that current congestion control design can be much more flexible, to the benefit of efficient usage of network resources.

10.
11.
Mean FDE models for Internet congestion control under a many-flows regime
Congestion control algorithms used in the Internet are difficult to analyze or simulate on a large scale, i.e., when there are large numbers of nodes, links, and sources in a network. The reasons for this include the complexity of the actual implementation of the algorithm and the randomness introduced in the packet arrival and service processes by many factors, such as arrivals and departures of sources and uncontrollable short flows in the network. To make the analysis or simulation tractable, deterministic fluid approximations of these algorithms are often used. These approximations take the form of either deterministic delay differential equations or, more generally, deterministic functional-differential equations (FDEs). In this paper, we ignore the complexity introduced by the window-based implementation of such algorithms and focus on the randomness in the network. We justify the use of deterministic models for proportionally fair congestion controllers under a limiting regime where the number of flows in a network is large.

12.
Active buffer management can improve the performance of Transmission Control Protocol/Internet Protocol (TCP/IP)-based networks. Random early detection (RED), an active queue management scheme, has been proposed by the Internet Engineering Task Force for next-generation Internet routers. RED uses a number of parameters, such as buffer thresholds, a packet drop probability, and a weight parameter. RED suffers from low throughput and large delay/jitter, and can induce instability in networks. Previous attempts to improve the performance of RED were based on modifying the thresholds and drop probabilities. In this paper, we show that an optimal value of the weight parameter can improve the performance of RED, and we develop a framework to determine that optimal value. We show that the optimal weight parameter obtained from our framework improves the performance of RED.
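For reference, the weight parameter in question is the EWMA gain w in RED's average queue estimate, avg ← (1 − w)·avg + w·q; the drop probability then ramps linearly between the two buffer thresholds. A minimal sketch of these two standard RED computations (the numeric values used in the example are illustrative, not the paper's optimized values):

```python
def red_update_avg(avg: float, qlen: int, w: float) -> float:
    """EWMA of the instantaneous queue length; w is RED's weight
    parameter, the quantity whose optimal value this paper derives."""
    return (1.0 - w) * avg + w * qlen

def red_drop_prob(avg: float, min_th: float, max_th: float, max_p: float) -> float:
    """RED's drop probability: 0 below min_th, a linear ramp up to
    max_p between the thresholds, and 1 at or above max_th."""
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0
    return max_p * (avg - min_th) / (max_th - min_th)
```

A small w makes the average track the queue slowly (good for absorbing bursts, bad for reacting to congestion), while a large w does the opposite, which is why the choice of w matters for throughput, delay/jitter, and stability.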

13.
Fair bandwidth sharing is important if the Internet architecture is to be more accommodating of heterogeneity. The Internet relies primarily on end-systems to cooperatively deploy congestion control mechanisms for achieving high network utilization and some degree of fairness among flows. However, this cooperative behavior may be abandoned by end-systems that act selfishly to become more competitive through bandwidth abuse. The result can be severe unfairness and even congestion collapse. Fairness-driven active queue management thus becomes essential for allocating the shared bottleneck bandwidth fairly among competing flows. This paper proposes a novel stateless active queue management algorithm, termed CHOKeH, to enforce fairness in bottleneck routers. CHOKeH splits the queue into dynamic regions at each packet arrival and treats each region differently when performing matched drops, using a dynamically updated drawing factor based on the level of queue occupancy and the buffer size. In this way, CHOKeH can effectively identify unfair flows and prevent them from dominating the bandwidth by discarding more packets from these flows. The performance of CHOKeH is studied through extensive simulations. The results demonstrate that CHOKeH is well suited for fair bandwidth allocation even in the presence of multiple unresponsive flows and across a wide range of buffer sizes. The results also show the ability of CHOKeH to provide inter-protocol and intra-protocol fairness and protection for short-lived flows. With low per-packet processing complexity, CHOKeH is amenable to implementation in core routers, offering an effective incentive structure for end-systems to self-impose some form of congestion control.
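CHOKeH builds on the classic CHOKe idea of matched drops, which it refines with dynamic queue regions and the drawing factor described above. The base mechanism can be sketched as follows (a simplified illustration, not the CHOKeH algorithm itself; the flow-identification convention is an assumption for the example):

```python
import random

def choke_arrival(queue: list, pkt, flow_of=lambda p: p[0]):
    """One CHOKe-style matched-drop step. On each arrival, draw a
    random packet from the queue; if it belongs to the same flow as
    the arriving packet, drop both (the arrival is not enqueued),
    otherwise enqueue the arrival. High-bandwidth flows occupy more
    of the queue, so they are matched, and penalized, more often."""
    if queue:
        victim = random.randrange(len(queue))
        if flow_of(queue[victim]) == flow_of(pkt):
            del queue[victim]   # matched drop: discard victim and arrival
            return queue
    queue.append(pkt)
    return queue
```

The scheme is stateless in the sense that no per-flow counters are kept: the queue's own contents serve as an implicit sample of which flows are overusing the link.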

14.
15.
The crossbar switch is a key component of network-on-chip routers. A crossbar can be implemented with either tri-state buffers or multiplexers. This paper compares the area and power overheads of several different crossbar implementation schemes, and also designs a crossbar scheduling mechanism based on the iSLIP algorithm. A crossbar built from multiplexers composed of basic logic gates has significant power and area advantages over one implemented with tri-state gates, and the network-on-chip crossbar scheduled with the iSLIP algorithm achieves the highest maximum operating frequency.

16.
In this paper, we investigate the optimization of content router deployment in a large-scale information-centric core-edge separation Internet. We formulate the target cost function, which captures the trade-off between the cost of traffic transmission and the cost of deploying additional content routers. To solve the problem, we first derive analytical results for the traffic ratio and its closed-form approximation. We then construct a convex optimization problem from which the optimal deployment probability can be solved. Extensive numerical results demonstrate the accuracy of the closed-form approximation of the traffic ratio. We also find that the optimal deployment probability is determined by the average number of hops needed to obtain the content, the cost of traffic transmission, and the cost of extending a router into a content router.

17.
The characteristics of TCP and UDP lead to different network transmission behaviours. TCP is responsive to network congestion whereas UDP is not. This paper proposes two mechanisms that operate at the source node to regulate TCP and UDP flows and provide a differential service for them. One is the congestion-control mechanism, which uses the congestion signal detected by TCP flows to regulate the flows at the source node. The other is the time-slot mechanism, which assigns different numbers of time slots to flows to control their transmission. Based on the priority of each flow, different bandwidth proportions are allocated to each flow and differential services are provided. Simulation results offer some insights into these two mechanisms. We also summarize the factors that may affect the performance of these two mechanisms.

18.
The Internet's excellent scalability and robustness result in part from the end-to-end nature of Internet congestion control. End-to-end congestion control algorithms alone, however, are unable to prevent the congestion collapse and unfairness created by applications that are unresponsive to network congestion. To address these maladies, we propose and investigate a novel congestion-avoidance mechanism called network border patrol (NBP). NBP entails the exchange of feedback between routers at the borders of a network in order to detect and restrict unresponsive traffic flows before they enter the network, thereby preventing congestion within the network. Moreover, NBP is complemented with the proposed enhanced core-stateless fair queueing (ECSFQ) mechanism, which provides fair bandwidth allocations to competing flows. Both NBP and ECSFQ are compliant with the Internet philosophy of pushing complexity toward the edges of the network whenever possible. Simulation results show that NBP effectively eliminates congestion collapse and that, when combined with ECSFQ, approximately max-min fair bandwidth allocations can be achieved for competing flows.

19.
A Single-Buffered (SB) router is a router in which only one stage of shared buffering is sandwiched between two interconnects, in contrast to a Combined Input and Output Queued (CIOQ) router, in which a central switch fabric is sandwiched between two stages of buffering. The notion of SB routers was first proposed by the High-Performance Networking Group (HPNG) of Stanford University, along with two promising SB router designs: the Parallel Shared Memory (PSM) router and the Distributed Shared Memory (DSM) router. While the work of HPNG deserves full credit, all of the results they presented appear to rely on a Centralized Memory Management Algorithm (CMMA), which is essentially impractical because of its high processing and communication complexity. This paper attempts to make a scalable high-speed SB router fully practical by introducing a completely distributed architecture for managing the shared memory of SB routers. The resulting SB router is called a Virtual Output and Input Queued (VOIQ) router. The VOIQ scheme not only eliminates the need for the CMMA scheduler, allowing a fully distributed implementation with low processing and communication complexity, but also provides QoS guarantees and efficiently supports variable-length packets. In particular, the results of performance testing and the hardware implementation of our VOIQ-based router (NDSC SR1880-TTM series) are presented at the end of this paper. To the best of our knowledge, this is the first published distributed scheme for designing and implementing SB routers.

20.
The debate over network neutrality started with the appearance of new Internet application services that are latency sensitive and consume large amounts of bandwidth. These services are still diffusing, and more applications that use even more bandwidth with greater latency sensitivity are expected to be developed. To estimate precisely the effect of network neutrality regulation, it is necessary to forecast the number of end-users who will adopt application services. However, previous studies are limited in that they assume that the potential market and the final number of adopters are constant at the current market penetration level. Through computational experiments, this research estimates the effect of network neutrality regulation while considering the diffusion process of application services such as IPTV and VoIP. Additionally, it deduces the relation between the effect of network neutrality regulation and the level of diffusion of new Internet application services.
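The abstract does not name its diffusion model, but the standard Bass model illustrates the contrast it draws: the number of adopters evolves over time toward the market potential rather than staying fixed at current penetration. A hedged sketch, with illustrative coefficients p (innovation) and q (imitation) that are assumptions, not values from the paper:

```python
from math import exp

def bass_cumulative_adopters(m: float, p: float, q: float, t: float) -> float:
    """Cumulative adopters at time t under the Bass diffusion model:
    m * F(t), where F(t) = (1 - e^{-(p+q)t}) / (1 + (q/p) e^{-(p+q)t})
    and m is the market potential."""
    e = exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)
```

Under such a model the installed base of an IPTV- or VoIP-like service starts near zero and approaches m, so the traffic load, and hence the effect of a neutrality rule, differs at each stage of diffusion.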


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号