Related Articles
20 related articles retrieved.
1.
刘亚林  刘东  张晓 《计算机学报》2001,24(12):1272-1278
This paper studies fast route lookup algorithms for routers. To address the shortcomings of existing lookup algorithms in lookup speed, space complexity, and the difficulty of inserting and deleting table entries, a fast route lookup algorithm is proposed. The algorithm builds a two-level index table structure to reduce the number of memory accesses per lookup and thus raise lookup speed; it exploits the properties of prefix expansion and a special data structure to build the index tables, so routes can be inserted, deleted, and updated dynamically; and it compresses the second-level index table, greatly reducing the memory needed to store the routes. A lookup completes in at most four and at least two memory accesses. Because of the compression, the memory requirement is small, so the algorithm is suitable for both software and hardware implementation. Fast lookup, a small memory footprint, and support for dynamic insertion and deletion are its main features.
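To make the two-level index idea concrete, here is a minimal Python sketch; the 16-bit and 8-bit strides, the class name `TwoLevelTable`, and the shortest-prefix-first insertion assumption are illustrative choices, and the paper's compression of the second-level table is omitted.

```python
import ipaddress

class TwoLevelTable:
    """Two-level index: 16-bit first-level table, 8-bit second-level blocks.
    Sketch only: assumes routes are inserted from shortest to longest prefix
    and that no prefix is longer than /24 (the compression step is omitted)."""

    def __init__(self):
        self.level1 = [None] * (1 << 16)   # next hop, or ('L2', block id)
        self.level2 = {}                   # block id -> list of 256 next-hop slots

    def insert(self, prefix, nexthop):
        net = ipaddress.ip_network(prefix)
        base = int(net.network_address)
        if net.prefixlen <= 16:
            start = base >> 16
            for i in range(start, start + (1 << (16 - net.prefixlen))):
                self.level1[i] = nexthop               # prefix expansion
        elif net.prefixlen <= 24:
            hi = base >> 16
            if not isinstance(self.level1[hi], tuple):
                # allocate a level-2 block, seeded with the covering route
                self.level2[hi] = [self.level1[hi]] * 256
                self.level1[hi] = ('L2', hi)
            start = (base >> 8) & 0xFF
            for i in range(start, start + (1 << (24 - net.prefixlen))):
                self.level2[hi][i] = nexthop
        else:
            raise NotImplementedError('sketch handles /0../24 only')

    def lookup(self, addr):
        a = int(ipaddress.ip_address(addr))
        e = self.level1[a >> 16]                       # first memory access
        if isinstance(e, tuple):
            return self.level2[e[1]][(a >> 8) & 0xFF]  # second memory access
        return e

t = TwoLevelTable()
t.insert('10.0.0.0/8', 'eth0')
t.insert('10.1.0.0/17', 'eth1')
print(t.lookup('10.1.2.3'))    # -> eth1 (two memory accesses)
print(t.lookup('10.200.0.1'))  # -> eth0 (one memory access in this sketch)
```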

2.
《Computer Networks》2007,51(12):3354-3367
IP lookup is a key issue in the design of IP routers: for each incoming packet, the router must find the next hop by matching the destination address against the longest matching prefix in the routing table. This work places the routing table in a longest-prefix-first search tree, a heap-like structure ordered by prefix length. A router using this scheme needs fewer memory accesses per IP lookup than one designed around the Trie [E. Fredkin, Trie Memory, Communications of the ACM 3 (1960) 490–500], Patricia [K. Sklower, A tree-based routing table for Berkeley Unix, in: Proceedings of the USENIX Conference, 1991, pp. 93–99] or Prefix tree [M. Berger, IP lookup with low memory requirement and fast update, in: Proceedings of IEEE High Performance Switching and Routing, 2003, pp. 287–291]. Some nodes of the proposed tree can hold two routing-table entries, which reduces the number of tree nodes; for instance, a routing table with 163,695 entries fits in a tree with 156,191 nodes. Furthermore, an improved scheme is presented that partitions a tree into several smaller trees. Simulation reveals that the scheme not only lowers the tree height effectively but also scales well to IPv6 addresses.
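The effect of ordering the search by descending prefix length, so that the first match found is automatically the longest match, can be shown with a deliberately naive Python sketch (a sorted list standing in for the paper's heap-like tree; prefixes and next hops are invented):

```python
import ipaddress

# Keep prefixes sorted from longest to shortest.
prefixes = [('10.1.0.0/16', 'A'), ('10.0.0.0/8', 'B'), ('0.0.0.0/0', 'C')]
prefixes.sort(key=lambda p: ipaddress.ip_network(p[0]).prefixlen, reverse=True)

def lookup(addr):
    a = ipaddress.ip_address(addr)
    for net, hop in prefixes:
        if a in ipaddress.ip_network(net):
            return hop          # first hit is the longest match, by construction
    return None

print(lookup('10.1.2.3'))       # -> A
print(lookup('10.9.9.9'))       # -> B
```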

3.
《Computer Networks》2007,51(3):588-605
Backbone routers with tens-of-gigabits-per-second links are indispensable communication devices on the Internet, and the IP lookup operation is the most critical task to improve in them. In this paper, we first present a systematic method to compare prefixes of different lengths. The list of prefixes can then be sorted and stored in a sequential array, in contrast to the linked lists used in most trie-based structures. Next, fast binary and multiway prefix searches assisted by auxiliary prefixes are proposed, together with a 32-bit representation that encodes prefixes of different lengths. For the large routing tables currently available on the Internet, the proposed multiway prefix search achieves a worst case of three and four memory accesses when the CPU cache lines are 64 bytes and 32 bytes, respectively. IPv4 simulation results show that the proposed prefix searches outperform existing IP lookup schemes in lookup time and memory consumption, and simulations with IPv6 routing tables also show the performance advantages of the proposed binary prefix searches. We further analyze the performance of existing lookup schemes by jointly considering lookup speed, update speed, and memory consumption. Although the update speed of the proposed prefix search is worse than that of dynamic routing-table schemes with log(N) complexity for a table of N prefixes, our analysis shows that the overall performance of the proposed binary prefix search outperforms all existing schemes.
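The paper's 32-bit prefix encoding and auxiliary prefixes are not reproduced here, but the underlying idea of answering longest-prefix match with a search over a sorted sequential array can be sketched in Python by treating each prefix as an address interval; the interval trick and all names below are illustrative assumptions, not the authors' scheme.

```python
import bisect, ipaddress

nexthops = {'10.0.0.0/8': 'A', '10.1.0.0/16': 'B', '192.168.0.0/16': 'C'}

# Build (start, end, prefix) intervals; longer prefixes nest inside shorter ones.
intervals = []
for p in nexthops:
    n = ipaddress.ip_network(p)
    intervals.append((int(n.network_address), int(n.broadcast_address), p))
intervals.sort()
starts = [s for s, _, _ in intervals]

def lookup(addr):
    a = int(ipaddress.ip_address(addr))
    i = bisect.bisect_right(starts, a) - 1      # binary search on interval starts
    while i >= 0:
        start, end, prefix = intervals[i]
        if start <= a <= end:
            # The first enclosing interval found while scanning left is the
            # innermost one, i.e. the longest matching prefix.
            return nexthops[prefix]
        i -= 1
    return None

print(lookup('10.1.2.3'))   # -> B
print(lookup('10.9.9.9'))   # -> A
```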

4.
A high-speed IP route lookup algorithm based on LSOT   Total citations: 9; self-citations: 0; citations by others: 9
With the continuing growth of Internet speeds, traffic volume, and routing-table size, IP route lookup has become a major factor limiting router performance and has therefore attracted wide attention. Several algorithms have been proposed to solve the IP route lookup problem, but none fully meets the requirements of core routers. This paper proposes an LSOT-based IP route lookup method that uses variable-size segment tables and offset tables and can adapt to the memory capacities available in SRAM and on-chip FPGA memory. It offers high lookup speed, fast updates, low storage cost, and easy implementation: an FPGA design can satisfy core-router environments with 10 Gbps ports, and an ASIC design can satisfy those with 40 Gbps ports.

5.
Most high-performance routers available commercially today equip each of their line cards (LCs) with a forwarding engine (FE) to perform table lookups locally. This work introduces and evaluates a technique for speedy packet lookups, called SPAL, in such routers. Under SPAL the BGP routing table is fragmented into subsets that constitute the forwarding tables of different FEs, so the number of table entries in each FE drops as the router grows. This reduction in forwarding-table size drastically lowers the amount of SRAM (e.g., L3 data cache) required in each LC to hold the trie constructed by the prefix-matching algorithm. SPAL caches the lookup result of a given IP address at its home LC (denoted LC_ho, using the LR-cache), so that the result can quickly satisfy lookup requests for the same address not only from LC_ho but also from other LCs. Our trace-driven simulation reveals that SPAL improves mean lookup performance by a factor of at least 2.5 (or 4.3) for a router with three (or 16) LCs when the LR-cache contains 4K blocks. SPAL achieves this significant improvement while greatly lowering the SRAM requirement (the L3 data cache plus the LR-cache combined) in each LC, and it can shorten the worst-case lookup time (thanks to fewer memory accesses during the longest-prefix-match search) compared with a current router that does not partition the routing table. It promises good scalability with respect to routing-table growth and exhibits a small mean lookup time per packet. With its ability to speed up packet lookups while substantially lowering overall SRAM, SPAL is ideally suited to the new generation of scalable high-performance routers.
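The caching idea can be illustrated with a toy Python sketch: an LRU result cache keyed by destination address sits in front of the full longest-prefix-match lookup. The cache size, the linear-scan slow path, and the single-table setting are assumptions made for illustration and do not model SPAL's per-line-card partitioning.

```python
from collections import OrderedDict
import ipaddress

ROUTES = {'10.0.0.0/8': 'eth0', '10.1.0.0/16': 'eth1'}

def full_lookup(addr):
    """Slow path: longest-prefix match by linear scan (stand-in for the trie)."""
    a = ipaddress.ip_address(addr)
    best, best_len = None, -1
    for p, hop in ROUTES.items():
        n = ipaddress.ip_network(p)
        if a in n and n.prefixlen > best_len:
            best, best_len = hop, n.prefixlen
    return best

class LRCache:
    """Per-address lookup-result cache consulted before the full lookup."""
    def __init__(self, capacity=4096):
        self.capacity = capacity
        self.entries = OrderedDict()

    def lookup(self, addr):
        if addr in self.entries:
            self.entries.move_to_end(addr)      # cache hit
            return self.entries[addr]
        hop = full_lookup(addr)                 # cache miss: take the slow path
        self.entries[addr] = hop
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)    # evict the least recently used entry
        return hop

cache = LRCache()
print(cache.lookup('10.1.2.3'))   # miss -> eth1
print(cache.lookup('10.1.2.3'))   # hit  -> eth1
```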

6.
《Computer Networks》2008,52(2):399-417
Packet forwarding on the Internet is based solely on the destination address of packets, so it is easy to forge the source address of IP packets without affecting their delivery. To counter this, routers can check whether every packet arrives from a correct direction based on its source address field. However, because of routing asymmetry in today's Internet, a router cannot simply reverse its forwarding table to determine the correct incoming direction of a packet. In this paper, we present the source address validity enforcement protocol, SAVE, which allows routers to learn valid incoming directions for any given source address. SAVE is independent of any specific routing protocol and can work alongside any of them. By interfacing only with the forwarding table at routers, SAVE propagates valid source address information from source address spaces to all destinations and lets each router en route build and maintain an incoming tree that associates each source address prefix with a corresponding incoming interface. The incoming tree is further valuable in handling routing changes: although a routing change at one router could affect the incoming direction of source address spaces at many locations, only the router that sees the change needs to send out new updates. Finally, SAVE performs well with low overhead.
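The per-packet validation step itself is simple once the incoming directions have been learned; a minimal Python sketch follows, with an invented incoming table that maps source prefixes to expected interfaces. How SAVE learns and propagates this information is the subject of the paper and is not modeled here.

```python
import ipaddress

# Incoming table: source prefix -> interface on which its packets should arrive.
INCOMING = {
    ipaddress.ip_network('10.1.0.0/16'): 'if2',
    ipaddress.ip_network('192.168.0.0/16'): 'if0',
}

def valid_source(src_addr, arrival_if):
    """True if the packet's source address matches the expected incoming
    interface (the longest matching source prefix decides)."""
    a = ipaddress.ip_address(src_addr)
    best, best_len = None, -1
    for net, iface in INCOMING.items():
        if a in net and net.prefixlen > best_len:
            best, best_len = iface, net.prefixlen
    return best is None or best == arrival_if   # unknown sources are not dropped here

print(valid_source('10.1.2.3', 'if2'))   # True
print(valid_source('10.1.2.3', 'if0'))   # False: likely spoofed
```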

7.
Named Data Networking (NDN) aims to redesign the current Internet by using names, rather than IP addresses that locate end hosts, to identify the desired content, with the goal of substantially improving data retrieval efficiency. Unlike IP routers, NDN routers forward packets by name. An NDN name is composed of a number of variable-length components, so a name can be tens or even hundreds of characters long, and NDN routing tables can be several orders of magnitude larger than current IP routing tables. This complex name structure, combined with the huge name tables, makes wire-speed name lookup an extremely challenging task.
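Name lookup in NDN is a longest-prefix match over '/'-separated components rather than over address bits. The Python component trie below is a generic illustration of that operation, not the wire-speed structure the abstract alludes to; the names and faces are invented.

```python
class NameTrie:
    """Longest name-prefix match over '/'-separated components."""
    def __init__(self):
        self.children = {}   # component string -> NameTrie
        self.nexthop = None

    def insert(self, name, nexthop):
        node = self
        for comp in name.strip('/').split('/'):
            node = node.children.setdefault(comp, NameTrie())
        node.nexthop = nexthop

    def lookup(self, name):
        node, best = self, None
        for comp in name.strip('/').split('/'):
            node = node.children.get(comp)
            if node is None:
                break
            if node.nexthop is not None:
                best = node.nexthop      # remember the longest match seen so far
        return best

fib = NameTrie()
fib.insert('/com/example/video', 'face1')
fib.insert('/com/example', 'face0')
print(fib.lookup('/com/example/video/seg0'))  # -> face1
print(fib.lookup('/com/example/news'))        # -> face0
```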

8.
With the development of Internet technology, and in particular the continuing growth of network bandwidth, an IP router needs an extremely fast route lookup when packets arrive, yet existing lookup schemes are limited by the algorithms they were designed around and increasingly fail to meet this need. Based on an analysis of the route lookup mechanism in Linux, this paper proposes an improved, software-based, high-efficiency route lookup algorithm.

9.
《Computer Communications》1999,22(15-16):1415-1422
The key to the success of next-generation IP networks in providing good services lies in deploying high-performance routers that can do fast IP routing lookups. In this paper, we propose a new algorithm for fast IP lookups using a so-called two-trie structure. The two-trie structure has the advantage of requiring less memory to represent a routing table than the standard trie while still providing fast IP lookups. Based on the simulation results, it saves around 27% of memory over the standard trie, while a lookup takes 1.6 memory accesses on average and 8 memory accesses in the worst case. The structure also makes no assumptions about the distribution of prefix lengths in routing tables, so increasing the address length from 32 to 128 bits (from IPv4 to IPv6) does not affect the main structure.
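For reference, the standard binary trie that the two-trie structure is compared against can be written in a few lines of Python; the two-trie structure itself is not reproduced here, and the routes are invented.

```python
import ipaddress

class BinaryTrie:
    """Standard one-bit-per-level trie: the baseline the two-trie scheme improves on."""
    def __init__(self):
        self.child = [None, None]
        self.nexthop = None

    def insert(self, prefix, nexthop):
        net = ipaddress.ip_network(prefix)
        bits = int(net.network_address)
        node = self
        for i in range(net.prefixlen):
            b = (bits >> (31 - i)) & 1
            if node.child[b] is None:
                node.child[b] = BinaryTrie()
            node = node.child[b]
        node.nexthop = nexthop

    def lookup(self, addr):
        a = int(ipaddress.ip_address(addr))
        node, best = self, None
        for i in range(32):
            if node.nexthop is not None:
                best = node.nexthop          # longest match so far
            node = node.child[(a >> (31 - i)) & 1]
            if node is None:
                break
        else:
            if node.nexthop is not None:
                best = node.nexthop
        return best

t = BinaryTrie()
t.insert('10.0.0.0/8', 'A')
t.insert('10.1.0.0/16', 'B')
print(t.lookup('10.1.2.3'))  # -> B
print(t.lookup('10.9.9.9'))  # -> A
```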

10.
In this paper, a reconfigurable memory architecture and lookup technique for IP packet forwarding engines is presented. The forwarding table is partitioned into smaller partial lookup tables, one per output port, which the forwarding engine processes in parallel. This effectively reduces the complexity of the 'longest prefix match' problem to a 'first prefix match' problem. The method is a flexible technique that significantly improves the scalability of next-generation network processors and other packet-processing devices. Such scalability facilitates migration to IPv6 and benefits network equipment, especially with respect to growing routing-table size, traffic, frequency of route updates, and bandwidth requirements.
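As a rough illustration only, partitioning the forwarding table by output port looks like the Python sketch below. The sequential resolution step at the end is an assumption made for the sketch; in the paper the partial tables are processed in parallel and the problem becomes a first-prefix match, a detail this sketch does not capture.

```python
import ipaddress
from collections import defaultdict

ROUTES = [('10.0.0.0/8', 'port0'), ('10.1.0.0/16', 'port1'), ('0.0.0.0/0', 'port2')]

# Partition the table: one partial lookup table per output port.
partitions = defaultdict(list)
for prefix, port in ROUTES:
    partitions[port].append(ipaddress.ip_network(prefix))

def lookup(addr):
    a = ipaddress.ip_address(addr)
    # Each partition would be searched by its own engine in parallel; here we
    # simply collect each partition's best match and keep the longest overall.
    best_port, best_len = None, -1
    for port, nets in partitions.items():
        for net in nets:
            if a in net and net.prefixlen > best_len:
                best_port, best_len = port, net.prefixlen
    return best_port

print(lookup('10.1.2.3'))   # -> port1
print(lookup('8.8.8.8'))    # -> port2
```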

11.
As end users and managers of both corporate and Internet service provider (ISP) IP networks, we ask a lot of the devices called routers. They must be reliable and easy to manage while supporting a variety of LAN and WAN interfaces at a reasonable price, and they must forward hundreds of thousands or even millions of packets per second. For each packet, the router receives it, extracts the destination address from the header, performs a lookup in a local routing table, finds the best match, and then transmits the packet to the next-hop router. The router may even be configured to examine additional fields in the packet and, based on this analysis, decide whether to place the packet in a high-priority transmission queue for expedited service. The author discusses link-state routing protocols.

12.
《Computer Networks》2008,52(2):303-326
This paper proposes a novel Internet Protocol (IP) packet forwarding architecture for IP routers, comprising a non-blocking Multizone Pipelined Cache (MPC) and a hardware-supported IP routing lookup method; a method for expansion-free software lookups is also described. The MPC achieves lower miss rates than those reported in the literature. It uses a two-stage pipeline for a half-prefix/half-full-address IP cache, which results in lower activity than conventional caches, and its updating technique allows the IP routing lookup mechanism to decide freely when and how to issue update requests. The effective miss penalty of the MPC is reduced by a small non-blocking buffer. The design caches prefixes but requires significantly less expansion of the routing table than conventional prefix caches. The hardware-based IP lookup mechanism uses a Ternary Content Addressable Memory (TCAM) with a novel Hardware-based Longest Prefix Matching (HLPM) method. HLPM has lower signaling activity when processing short matching prefixes than alternative designs, offers a simple way to determine the longest matching prefix, and requires a single write for table updates.

13.
The main task of a router is to forward IP packets, and a fast route lookup algorithm is the key to high-speed forwarding. For IPv4 addresses, we first build three hash tables for prefix lengths 8, 16, and 24; on top of these, for prefixes of other lengths, we build multi-branch trie trees that each cover at most the remaining 8 bits. With this structure an IP route lookup takes at most 7 memory accesses, and the structure is also easy to update and easy to extend.
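A simplified Python sketch of the hash-table part of this scheme follows; the residual 8-bit multi-branch tries are omitted, so only prefixes of exactly 8, 16, and 24 bits are handled, and the table contents are invented.

```python
import ipaddress

# One hash table per anchor prefix length.
tables = {8: {}, 16: {}, 24: {}}

def insert(prefix, nexthop):
    net = ipaddress.ip_network(prefix)
    assert net.prefixlen in tables, 'sketch handles /8, /16, /24 only'
    tables[net.prefixlen][int(net.network_address)] = nexthop

def lookup(addr):
    a = int(ipaddress.ip_address(addr))
    # Probe the longest anchor length first so the first hit is the longest match.
    for length in (24, 16, 8):
        key = a & ((0xFFFFFFFF << (32 - length)) & 0xFFFFFFFF)
        if key in tables[length]:
            return tables[length][key]
    return None

insert('10.0.0.0/8', 'A')
insert('10.1.0.0/16', 'B')
insert('10.1.2.0/24', 'C')
print(lookup('10.1.2.3'))    # -> C
print(lookup('10.1.9.9'))    # -> B
print(lookup('10.200.0.1'))  # -> A
```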

14.
A fast two-dimensional packet classification algorithm based on short prefix length splitting   Total citations: 1; self-citations: 0; citations by others: 1
Packet classification is the technique by which a router matches each incoming packet against a classifier database over multiple IP packet fields to determine the packet's forwarding rule. Classifiers provide a uniform way to implement new Internet services such as firewalls and network address translation, and the two-dimensional packet classification problem occupies an important place in future Internet architectures. Several packet classification algorithms have been proposed, but none is ideal. This paper proposes a two-dimensional packet classification algorithm based on short prefix length splitting (SPLS), which splits the classifier set so that the resulting small classifier subsets can be searched with existing fast IP route lookup methods. The implementation uses a multiway trie as the basic data structure. Experiments show that it has small storage requirements, fast average query times, fast updates, and suits large classifiers, making it a good two-dimensional packet classification algorithm.

15.
The rapid growth of the Internet requires core routers to forward several million or more packets per second; the keys to high-speed forwarding are the organization of the routing table and a fast lookup algorithm. The index-separated trie structure builds a k-bit first-level index, an m-bit second-level index, and a multi-branch trie with stride s and maximum depth m/s. The complexity of a longest-prefix-match lookup in this data structure is O(m/s + 2). The algorithm is simple, fast, easy to update, and eases the transition to IPv6, making it a fast longest-prefix-match lookup algorithm with good overall performance.
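To make the stated bound concrete, here is a small worked example in Python; the parameter values k = 16, m = 16, s = 4 and the access accounting in the comments are illustrative assumptions, not values fixed by the abstract.

```python
# A plausible accounting of the stated worst case m/s + 2: one access for the
# k-bit first-level index, one for the m-bit second-level index, and at most
# m/s steps down the stride-s multibranch trie.
k, m, s = 16, 16, 4                  # illustrative parameters, not from the paper

worst_case_memory_accesses = m // s + 2
print(worst_case_memory_accesses)    # -> 6
```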

16.
Due to the tremendous increase in Internet traffic, backbone routers must be able to forward massive numbers of incoming packets at several gigabits per second, and IP address lookup is one of the most challenging tasks in high-speed packet forwarding. Some high-end routers are implemented with hardware parallelism using ternary content addressable memory (TCAM), but TCAM is much more expensive in terms of circuit complexity as well as power consumption, so efficient algorithmic solutions that can be implemented on network processors are essential as low-cost alternatives. Among the state-of-the-art algorithms for IP address lookup, a binary search based on a balanced tree is effective in providing a low-cost solution. To construct a balanced search tree, prefixes with nesting relationships must be converted into completely disjoint prefixes. A leaf-pushing technique is very useful for eliminating the nesting relationship among prefixes [V. Srinivasan, G. Varghese, Fast address lookups using controlled prefix expansion, ACM Transactions on Computer Systems 17 (1) (1999) 1-40]; however, it creates duplicate prefixes and thus expands the search tree. This paper proposes an efficient IP address lookup algorithm based on a small balanced tree using entry reduction. The leaf-pushing technique is used to create completely disjoint entries. Among the leaf-pushed prefixes there are numerous pairs of adjacent prefixes with similar prefix strings and identical output ports, and the number of entries can be significantly reduced by a new entry reduction method that merges such pairs. After sorting the reduced disjoint entries, a small balanced tree with a very small node size is constructed, on which a plain binary search can be used for address lookup. In addition, we propose a new multiway search algorithm to improve the binary search for IPv4 address lookup. As a result, the proposed algorithms offer excellent lookup performance along with reduced memory requirements, and they scale well to large amounts of routing data and to the address migration toward IPv6. Performance evaluations using various IPv4 and IPv6 routing data demonstrate that, compared with other algorithms based on a binary search, the proposed algorithms perform better in terms of lookup speed, memory requirement, and scalability with the growth of entries and with IPv6.
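The leaf-pushing step mentioned above (turning nested prefixes into completely disjoint ones by pushing next hops down to the leaves of a binary trie) can be sketched in Python as follows; the entry-merging and balanced-tree construction that the paper builds on top of it are not shown, and the example routes are invented.

```python
import ipaddress

class Node:
    def __init__(self):
        self.child = [None, None]
        self.hop = None

def insert(root, prefix, hop):
    net = ipaddress.ip_network(prefix)
    bits = int(net.network_address)
    node = root
    for i in range(net.prefixlen):
        b = (bits >> (31 - i)) & 1
        if node.child[b] is None:
            node.child[b] = Node()
        node = node.child[b]
    node.hop = hop

def leaf_push(node, inherited=None):
    """Push next hops down so that only leaves carry them, yielding disjoint prefixes."""
    hop = node.hop if node.hop is not None else inherited
    if node.child[0] is None and node.child[1] is None:
        node.hop = hop
        return
    for b in (0, 1):
        if node.child[b] is None and hop is not None:
            node.child[b] = Node()            # duplicate entry created by leaf pushing
            node.child[b].hop = hop
        if node.child[b] is not None:
            leaf_push(node.child[b], hop)
    node.hop = None                            # internal nodes no longer hold hops

def collect(node, bits=0, depth=0):
    """Enumerate the disjoint leaf prefixes."""
    if node.child[0] is None and node.child[1] is None:
        if node.hop is not None:
            yield f'{ipaddress.ip_address(bits << (32 - depth) if depth else 0)}/{depth}', node.hop
        return
    for b in (0, 1):
        if node.child[b] is not None:
            yield from collect(node.child[b], (bits << 1) | b, depth + 1)

root = Node()
insert(root, '10.0.0.0/8', 'A')
insert(root, '10.128.0.0/9', 'B')
leaf_push(root)
print(sorted(collect(root)))   # -> [('10.0.0.0/9', 'A'), ('10.128.0.0/9', 'B')]
```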

17.
A survey of route lookup algorithms   Total citations: 26; self-citations: 2; citations by others: 26
With the rapid development of the Internet, the interface rates of the core routers that interconnect backbone networks have reached 2.5 Gbps to 10 Gbps. At these rates a core router must forward several million to more than ten million packets per second. An important step in packet forwarding is looking up the routing table, so fast route lookup algorithms are the key to high-speed forwarding. Route lookup requires longest-prefix matching. In recent years researchers have proposed many route lookup algorithms to improve lookup performance. This paper analyzes the route lookup problem and its difficulties, comprehensively surveys the various lookup algorithms, analyzes and compares them in detail, and finally points out directions for further research.

18.
We propose new shared-memory multiprocessor architectures and evaluate their performance for future Internet protocol (IP) routers based on the symmetric multiprocessor (SMP) and cache-coherent nonuniform memory access (CC-NUMA) paradigms. We also propose a benchmark application suite, RouterBench, which consists of four categories of applications representing key functions on the time-critical path of packet processing in routers. An execution-driven simulation environment is created to evaluate SMP and CC-NUMA router architectures using RouterBench; execution-driven simulation can produce accurate cycle-level execution-time predictions and reveal the impact of various architectural parameters on router performance. We port the FUNET trace and its routing table for use in our experiments. We find that the CC-NUMA architecture provides excellent scalability for the design of high-performance IP routers, and the results also show that it can sustain good lookup performance even at a high frequency of route updates.

19.
As Internet speeds keep rising, traffic keeps growing, and routing tables keep expanding, IP route lookup has become the main bottleneck constraining core-router performance. This paper analyzes two common route lookup algorithms based on hardware memories and, combining their respective advantages, proposes a route lookup algorithm based on a combined RAM and TCAM storage structure. The algorithm overcomes the shortcomings of the two existing approaches and offers high lookup rates, fast updates, low storage cost, and easy implementation, making it an attractive lookup mechanism for high-speed core-router environments.

20.
In modern IP routers, Internet protocol (IP) lookup forms a bottleneck in packet forwarding because lookup speed cannot keep up with the increase in link bandwidth. Ternary content-addressable memories (TCAMs) have emerged as viable devices for designing high-throughput forwarding engines in routers. Called ternary because they store don't-care states in addition to 0s and 1s, TCAMs search the data (IP address) in a single clock cycle, which makes them particularly attractive for packet forwarding and classification. Despite these advantages, large TCAM arrays have high power consumption and lack scalable design schemes, which limits their use. We propose a two-level pipelined architecture that reduces power consumption through memory compaction and by selectively enabling only a portion of the TCAM array. We also introduce prefix aggregation and prefix expansion to reduce the number of routing-table entries in TCAMs for IP lookup, discuss an efficient incremental update scheme for the routing prefixes, and provide empirical equations for estimating the memory requirements and proportional power consumption of the proposed architecture.
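Prefix aggregation in the sense used above can be illustrated with a small Python routine: two sibling prefixes that share a next hop collapse into their parent, saving a TCAM entry. This is a generic illustration with invented entries, not the paper's algorithm.

```python
import ipaddress

def aggregate(entries):
    """entries: {prefix string: next hop}. Repeatedly merge sibling prefixes
    that share a next hop into their common parent."""
    out = dict(entries)
    changed = True
    while changed:
        changed = False
        for p in list(out):
            n = ipaddress.ip_network(p)
            if n.prefixlen == 0:
                continue
            parent = n.supernet()
            if str(parent) in out:
                continue                      # parent already present: leave as is
            sibling = next(s for s in parent.subnets() if s != n)
            if out.get(str(sibling)) == out[p]:
                hop = out.pop(p)
                out.pop(str(sibling))
                out[str(parent)] = hop        # one entry instead of two
                changed = True
                break
    return out

table = {'10.0.0.0/25': 'A', '10.0.0.128/25': 'A', '10.0.1.0/24': 'B'}
print(aggregate(table))   # -> {'10.0.1.0/24': 'B', '10.0.0.0/24': 'A'}
```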
