Similar Documents
 20 similar documents found (search time: 31 ms)
1.
刘亚林  刘东  张晓 《计算机学报》2001,24(12):1272-1278
This paper studies fast route lookup algorithms for routers. To address the shortcomings of existing lookup algorithms in lookup speed, space complexity, and the difficulty of inserting and deleting table entries, a fast route lookup algorithm is proposed. The algorithm builds a two-level index table structure to reduce the number of memory accesses per lookup and thus raise lookup speed; it exploits prefix expansion and a special data structure to construct the index tables, supporting dynamic insertion, deletion, and update of routes; and it compresses the second-level index table, greatly reducing the memory required for the routing table. A lookup completes in at most four and at least two memory accesses. Because of the compression, the required memory is small, so the algorithm is suitable for both software and hardware implementation. Fast lookup, a small memory footprint, and support for dynamic insertion and deletion are its main features.
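A minimal sketch (assumed names, illustrative 16+8-bit split) of the two-level index idea described above: the first 16 address bits index the first-level table and the next 8 bits index a second-level table. The paper's compression step and support for prefixes longer than 24 bits are omitted.

def build_tables(routes):
    """routes: list of (prefix_int, prefix_len, next_hop), prefix_len <= 24 in this sketch."""
    level1 = [None] * (1 << 16)
    for prefix, plen, nh in sorted(routes, key=lambda r: r[1]):   # shorter prefixes first
        if plen <= 16:
            base = (prefix >> 16) & 0xFFFF
            for i in range(base, base + (1 << (16 - plen))):      # prefix expansion
                level1[i] = nh
        else:
            idx1 = (prefix >> 16) & 0xFFFF
            if not isinstance(level1[idx1], list):
                level1[idx1] = [level1[idx1]] * 256                # inherit the covering route
            base = (prefix >> 8) & 0xFF
            for i in range(base, base + (1 << (24 - plen))):
                level1[idx1][i] = nh
    return level1

def lookup(level1, addr):
    entry = level1[(addr >> 16) & 0xFFFF]          # first memory access
    if isinstance(entry, list):
        return entry[(addr >> 8) & 0xFF]           # second memory access
    return entry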

2.
A high-speed IP route lookup algorithm based on LSOT   Total citations: 9 (self: 0, others: 9)
As Internet link speeds, traffic volumes, and routing table sizes keep growing, IP route lookup has become a major factor limiting router performance and has therefore received wide attention. Several algorithms have been proposed to solve the IP route lookup problem, but none fully satisfies the requirements of core routers. This paper proposes an LSOT-based IP route lookup method that uses variable-size segment tables and offset tables, adapts to the on-chip memory capacity of SRAM and FPGA devices, and offers high lookup speed, fast updates, low storage cost, and ease of implementation. An FPGA design meets the requirements of core routers with 10 Gbps ports, and an ASIC design meets the requirements of core routers with 40 Gbps ports.

3.
This paper analyzes the characteristics of Internet routing tables and route updates and proposes Leaf-TCAM, a parallel IP route lookup method that partitions the routing table by leaf nodes; the partitioned sub-tables are distributed evenly across K TCAM chips according to traffic characteristics. Analysis shows that the method achieves a speedup factor of K-1 at the cost of 0.1*(K-1) redundancy. It requires no prefix expansion, more than 90% of route prefixes need no sorting, and updates can be applied in arbitrary order; it also offers even partitioning and a low partition-overflow cost, while consuming only 12% of the power of a traditional single-chip scheme.

4.
Due to the tremendous increase in Internet traffic, backbone routers must be able to forward massive numbers of incoming packets at several gigabits per second, and IP address lookup is one of the most challenging tasks in high-speed packet forwarding. Some high-end routers achieve hardware parallelism with ternary content addressable memory (TCAM), but TCAM is much more expensive in terms of circuit complexity as well as power consumption. Efficient algorithmic solutions that run on network processors are therefore required as low-cost alternatives. Among the state-of-the-art algorithms for IP address lookup, binary search on a balanced tree is effective in providing a low-cost solution. To construct a balanced search tree, prefixes with nesting relationships must be converted into completely disjoint prefixes. Leaf-pushing is very useful for eliminating the nesting relationship among prefixes [V. Srinivasan, G. Varghese, Fast address lookups using controlled prefix expansion, ACM Transactions on Computer Systems 17 (1) (1999) 1-40], but it creates duplicate prefixes and thus expands the search tree. This paper proposes an efficient IP address lookup algorithm based on a small balanced tree with entry reduction. Leaf-pushing is used to create completely disjoint entries. Among the leaf-pushed prefixes there are numerous pairs of adjacent prefixes with similar prefix strings and identical output ports; the number of entries can be significantly reduced by a new entry-reduction method that merges such pairs. After sorting the reduced disjoint entries, a small balanced tree with a very small node size is constructed, on which a plain binary search can be used for address lookup. In addition, a new multi-way search algorithm is proposed to improve the binary search for IPv4 address lookup. As a result, the proposed algorithms offer excellent lookup performance with reduced memory requirements, and they scale well to large routing tables and to the migration toward IPv6. Performance evaluations using various IPv4 and IPv6 routing data show that, compared with other binary-search-based algorithms, the proposed algorithms perform better in terms of lookup speed, memory requirement, and scalability with respect to entry growth and IPv6.
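A minimal sketch (assumed names) of the basic mechanism this abstract builds on: once prefixes are completely disjoint, for example after leaf pushing, each one covers a contiguous address range, so a plain binary search over sorted range starts finds the match. The paper's entry-reduction merging and multi-way refinement are not reproduced.

import bisect

def prefix_to_range(prefix_int, plen, width=32):
    span = 1 << (width - plen)
    start = prefix_int & ~(span - 1)
    return start, start + span - 1

def build(disjoint_routes):
    """disjoint_routes: list of (prefix_int, plen, next_hop) with no nesting or overlap."""
    table = sorted((*prefix_to_range(p, l), nh) for p, l, nh in disjoint_routes)
    starts = [r[0] for r in table]
    return starts, table

def lookup(starts, table, addr):
    i = bisect.bisect_right(starts, addr) - 1      # last range starting at or before addr
    if i >= 0 and table[i][0] <= addr <= table[i][1]:
        return table[i][2]
    return None                                    # no matching route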

5.
Most existing IPv6 routing table lookup algorithms improve lookup performance through various optimizations but force a route update to rebuild the entire table. To address this, an IPv6 routing table lookup algorithm based on a multi-layer hybrid structure is proposed. Drawing on the idea of an optimal search tree, the first layer stores the distinct values of prefix bits 1-16 in a linear table in descending order of their probability of occurrence in the routing table; the second and third layers organize prefix bits 17-32 and 33-48 as balanced binary trees; and the fourth layer organizes bits 49-64 in a linear table. Experimental results show that the algorithm achieves fast lookup, low memory usage, and fast dynamic incremental updates.

6.
An IP router must forward packets at gigabit speeds in order to guarantee a good quality of service. Two factors make this a challenging problem: (i) for each packet, the longest matching prefix in the forwarding table must be computed quickly; (ii) routing tables contain several thousand entries and their size grows significantly every year. For this reason, parallel routers have been developed that use several processors to forward packets. In this work we present a novel algorithmic technique which, for the first time, also exploits the parallelism of the router to reduce the size of the routing table. Our method is scalable and requires only minimal additional hardware. Indeed, we prove that any IP routing table T can be split into two subtables T1 and T2 such that: (a) |T1| can be any positive integer k ≤ |T| and |T2| ≤ |T| - k + 1; (b) the two routing tables can be used separately by two processors so that the IP lookup on T is obtained by simply XOR-ing the lookups on the two tables. Our method is independent of the data structure used to implement the lookup and allows for better use of the processors' L2 caches. For real routing tables, we also show how to achieve simultaneously: (a) |T1| is roughly 7% of the original table T; (b) the lookup on table T2 does not require a best-matching-prefix computation.
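The XOR property in (b) can be illustrated with a toy construction on a leaf-pushed (disjoint) table; this is only one simple way to satisfy the stated bounds and is not necessarily the authors' construction. Next hops are assumed to be nonzero integer identifiers, and 0 means "not covered by this sub-table".

def split(disjoint_routes, k):
    """disjoint_routes: list of (prefix_int, plen, next_hop_id); next_hop_id != 0."""
    t1 = dict(((p, l), nh) for p, l, nh in disjoint_routes[:k])
    t2 = dict(((p, l), nh) for p, l, nh in disjoint_routes[k:])
    return t1, t2

def lookup_sub(table, addr, width=32):
    # longest-prefix match within one sub-table; 0 means this half does not cover addr
    for plen in range(width, -1, -1):
        key = ((addr >> (width - plen)) << (width - plen), plen)
        if key in table:
            return table[key]
    return 0

def lookup(t1, t2, addr):
    # because the prefixes are disjoint, exactly one sub-table answers non-zero
    return lookup_sub(t1, addr) ^ lookup_sub(t2, addr)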

7.
A fast IP route lookup algorithm based on hash tables and a trie   Total citations: 3 (self: 0, others: 3)
The rapid growth of the Internet requires core routers to forward several million packets per second; the keys to high-speed forwarding are the organization of the routing table and a fast route lookup algorithm. This paper proposes an IP route lookup algorithm based on an 8-bit forward lookup table (LFT) and a 7-bit simple binary backtracking lookup trie (HBT). The algorithm takes the distribution of IP addresses into account and balances lookup speed, memory utilization, hardware implementation, and the transition to IPv6. It is simple, reasonably fast, memory-efficient, easy to extend, and well suited to hardware implementation.
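A minimal sketch (assumed names, illustrative 8/24 split) of the general pattern this abstract builds on: the top 8 address bits index a first-level table, each entry holding a default next hop plus a binary trie over the remaining bits, and the trie walk remembers the best match seen so far. The paper's 7-bit backtracking trie layout is not reproduced. Prefixes of length 8 or less should be inserted in increasing length order so the expansion never overwrites a more specific entry.

class TrieNode:
    def __init__(self):
        self.next_hop = None              # set if a prefix ends here
        self.child = [None, None]         # 0-branch, 1-branch

def make_lft():
    return [{"next_hop": None, "trie": None} for _ in range(256)]

def insert(lft, prefix, plen, nh):
    if plen <= 8:                         # expand short prefixes across the LFT slots
        base = (prefix >> 24) & 0xFF
        for slot in range(base, base + (1 << (8 - plen))):
            lft[slot]["next_hop"] = nh
        return
    entry = lft[(prefix >> 24) & 0xFF]
    if entry["trie"] is None:
        entry["trie"] = TrieNode()
    node = entry["trie"]
    for bitpos in range(23, 23 - (plen - 8), -1):
        bit = (prefix >> bitpos) & 1
        if node.child[bit] is None:
            node.child[bit] = TrieNode()
        node = node.child[bit]
    node.next_hop = nh

def lookup(lft, addr):
    entry = lft[(addr >> 24) & 0xFF]
    best, node, bitpos = entry["next_hop"], entry["trie"], 23
    while node is not None and bitpos >= 0:
        if node.next_hop is not None:
            best = node.next_hop             # remember the best match along the path
        node = node.child[(addr >> bitpos) & 1]
        bitpos -= 1
    if node is not None and node.next_hop is not None:
        best = node.next_hop
    return best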

8.
Most high-performance routers available commercially these days equip each of their line cards (LCs) with a forwarding engine (FE) to perform table lookups locally. This work introduces and evaluates a technique for speedy packet lookups, called SPAL, in such routers. Under SPAL, the BGP routing table is fragmented into subsets that constitute the forwarding tables of different FEs, so that the number of table entries in each FE drops as the router grows. This reduction in forwarding table size drastically lowers the amount of SRAM (e.g., L3 data cache) required in each LC to hold the trie constructed according to the prefix-matching algorithm. SPAL caches the lookup result of a given IP address at its home LC (denoted LC_ho) using the LR-cache, so that the result can quickly satisfy lookup requests for the same address not only from LC_ho but also from other LCs. Our trace-driven simulation reveals that SPAL improves mean lookup performance by a factor of at least 2.5 (or 4.3) for a router with three (or 16) LCs, if the LR-cache contains 4K blocks. SPAL achieves this significant improvement while greatly lowering the SRAM requirement (the L3 data cache plus the LR-cache combined) in each LC and possibly shortening the worst-case lookup time (thanks to fewer memory accesses during the longest-prefix-matching search), compared with a current router that does not partition the routing table. It promises good scalability with respect to routing table growth and exhibits a small mean lookup time per packet. With its ability to speed up packet lookups while substantially lowering overall SRAM, SPAL is ideally applicable to the new generation of scalable high-performance routers.

9.
《Computer Networks》2007,51(3):588-605
Backbone routers with tens-of-gigabits-per-second links are indispensable communication devices for deployment on the Internet, and the IP lookup operation is the most critical router task to improve. In this paper, we first present a systematic method to compare prefixes of different lengths. The list of prefixes can then be sorted and stored in a sequential array, in contrast to the linked lists used in most trie-based structures. Next, fast binary and multiway prefix searches assisted by auxiliary prefixes are proposed. We also develop a 32-bit representation to encode prefixes of different lengths. For the large routing tables currently available on the Internet, the proposed multiway prefix search achieves a worst case of three and four memory accesses when the CPU cache lines are 64 bytes and 32 bytes, respectively. IPv4 simulation results show that the proposed prefix searches outperform existing IP lookup schemes in terms of lookup time and memory consumption, and simulations with IPv6 routing tables also show the performance advantages of the proposed binary prefix searches. We further analyze the performance of existing lookup schemes by considering lookup speed, update speed, and memory consumption together. Although the update speed of the proposed prefix search is worse than that of dynamic routing-table schemes with log(N) complexity for a table of N prefixes, our analysis shows that the overall performance of the proposed binary prefix search surpasses all existing schemes.

10.
In modern IP routers, Internet protocol (IP) lookup forms a bottleneck in packet forwarding because lookup speed cannot keep up with the increase in link bandwidth. Ternary content-addressable memories (TCAMs) have emerged as viable devices for designing high-throughput forwarding engines in routers. Called ternary because they store don't-care states in addition to 0s and 1s, TCAMs search the data (an IP address) in a single clock cycle, which makes them particularly attractive for packet forwarding and classification. Despite these advantages, large TCAM arrays have high power consumption and lack scalable design schemes, which limits their use. We propose a two-level pipelined architecture that reduces power consumption through memory compaction and the selective enablement of only a portion of the TCAM array. We also introduce prefix aggregation and prefix expansion to reduce the number of routing-table entries in TCAMs for IP lookup, discuss an efficient incremental update scheme for routing prefixes, and provide empirical equations for estimating the memory requirements and proportional power consumption of the proposed architecture.
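A minimal sketch of prefix aggregation, one of the two entry-reduction ideas mentioned above: two sibling prefixes of length L that share the same next hop are replaced by their common parent of length L-1, shrinking the table. The function name and table representation are illustrative; prefix expansion and the pipelined TCAM architecture are not shown.

def aggregate(routes):
    """routes: dict {(prefix_int, plen): next_hop}; returns an aggregated copy."""
    routes = dict(routes)
    changed = True
    while changed:
        changed = False
        for (p, l), nh in list(routes.items()):
            if l == 0 or (p, l) not in routes:
                continue
            sibling = (p ^ (1 << (32 - l)), l)           # flip the last prefix bit
            if routes.get(sibling) == nh:
                parent = (p & ~(1 << (32 - l)), l - 1)
                if parent not in routes:                  # do not clobber an existing parent
                    del routes[(p, l)], routes[sibling]
                    routes[parent] = nh
                    changed = True
    return routes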

11.
To overcome the limitations of the address caching and prefix caching techniques currently used for IP route lookup, this paper analyzes the prefix-overlap characteristics of backbone routing tables and proposes a threshold-based IP route caching method. The method combines address caching and prefix caching, requires no prefix expansion, and avoids both the excessive cache space of address caching and the inability of prefix caching to cache internal prefix nodes; it offers advantages in cache space, hit ratio, fairness, and incremental route updates. Simulations show that for a routing table with more than 260,000 entries and a cache of 30,000 entries, choosing the threshold K=4 allows more than 97% of the nodes to be cached 1:1, with the remaining nodes handled by address caching; the cache miss ratio stays below 0.02, so high-speed wire-rate forwarding can be achieved with a small cache.
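A heavily simplified sketch of the combined caching idea (not the paper's exact scheme): results are cached by prefix when the matched prefix is a leaf (no more-specific prefixes below it, so a cached entry can never return a wrong answer) and by full /32 address otherwise. The threshold K and the slow-path routing table are assumptions and are only stubbed here.

from collections import OrderedDict

class RouteCache:
    def __init__(self, capacity, slow_path_lookup):
        self.cache = OrderedDict()                 # (prefix, plen) -> next hop, LRU order
        self.capacity = capacity
        self.slow_path_lookup = slow_path_lookup   # addr -> (prefix, plen, nh, is_leaf)

    def _matches(self, key, addr):
        prefix, plen = key
        return (addr >> (32 - plen)) == (prefix >> (32 - plen)) if plen else True

    def lookup(self, addr):
        hit = next((k for k in self.cache if self._matches(k, addr)), None)
        if hit is not None:
            self.cache.move_to_end(hit)            # LRU refresh
            return self.cache[hit]
        prefix, plen, nh, is_leaf = self.slow_path_lookup(addr)
        key = (prefix, plen) if is_leaf else (addr, 32)   # prefix-cache leaves, address-cache the rest
        self.cache[key] = nh
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)         # evict the least recently used entry
        return nh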

12.
The main task of a router is to forward IP packets, and a fast route lookup algorithm is the key to high-speed forwarding. For IPv4 addresses, we first build three hash tables for prefix lengths 8, 16, and 24; on top of these, for prefixes of other lengths we build multi-branch tries that each cover at most the remaining 8 bits. With this structure, an IP route lookup requires at most 7 memory accesses, and the structure is easy to update and to extend.
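A minimal sketch (assumed names) of the lookup structure described above: only the three anchor hash tables for lengths 8, 16, and 24 are shown, probed from longest to shortest, while the multi-branch tries that cover the remaining bits of other prefix lengths are omitted.

def build(routes):
    """routes: list of (prefix_int, plen, next_hop) with plen in {8, 16, 24} in this sketch."""
    tables = {8: {}, 16: {}, 24: {}}
    for prefix, plen, nh in routes:
        tables[plen][prefix >> (32 - plen)] = nh
    return tables

def lookup(tables, addr):
    for plen in (24, 16, 8):                     # longest anchor length first
        nh = tables[plen].get(addr >> (32 - plen))
        if nh is not None:
            return nh                            # at most 3 probes in this sketch
    return None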

13.
A high-speed two-dimensional packet classification algorithm based on short-prefix-length splitting   Total citations: 1 (self: 0, others: 1)
Packet classification is the technique by which a router matches each incoming packet against a classifier database over multiple IP header fields to determine its forwarding rule; classifiers provide a uniform way to implement new Internet services such as firewalls and network address translation. Two-dimensional packet classification plays a very important role in future Internet architectures. Several classification algorithms have been proposed, but none is ideal. This paper proposes a two-dimensional classification algorithm based on short-prefix-length splitting (SPLS), which splits the classifier set so that the resulting small classifier subsets can be searched with existing fast IP route lookup methods. The implementation uses a multiway tree as the basic data structure. Experiments show small memory requirements, fast average query times, fast updates, and suitability for large classifiers, making it a good two-dimensional packet classification algorithm.

14.
The trie data structure is flexible to implement and requires little memory, making it an ideal choice for high-speed route lookup and packet forwarding. To meet the design requirements of microengines in a 10 Gb/s wire-speed network processor, a trie-based route lookup algorithm using optimal balancing and multi-level storage is proposed. A balanced, compressed tree structure is built in which several adjacent levels of the tree are compressed into one storage node. A dedicated storage layout reduces the search depth of the tree, trading space for time and thus improving route lookup speed and packet forwarding efficiency. The algorithm is implemented in the lookup microengine of a network processor; experiments show that a single microengine achieves a lookup rate of 4.4 Mb/s while saving memory and improving lookup efficiency.
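The paper's optimally balanced, multi-level-storage trie is not reproduced here; the sketch below only shows the underlying idea of folding several adjacent binary trie levels into one storage node, using a plain fixed-stride multibit trie with controlled prefix expansion (the names and the 4-bit stride are assumptions).

STRIDE = 4   # fold 4 binary levels into one node; search depth is at most 32/STRIDE

class MNode:
    def __init__(self):
        self.next_hop = [None] * (1 << STRIDE)   # leaf-pushed next hops for this node
        self.child = [None] * (1 << STRIDE)

def insert(root, prefix, plen, nh):
    # Insert prefixes in increasing length order so the expansion below never
    # overwrites a longer prefix stored in the same node.
    node, consumed = root, 0
    while plen - consumed > STRIDE:
        idx = (prefix >> (32 - consumed - STRIDE)) & ((1 << STRIDE) - 1)
        if node.child[idx] is None:
            node.child[idx] = MNode()
        node = node.child[idx]
        consumed += STRIDE
    rest = plen - consumed                        # 1..STRIDE remaining prefix bits
    base = (prefix >> (32 - consumed - STRIDE)) & ((1 << STRIDE) - 1)
    for i in range(base, base + (1 << (STRIDE - rest))):   # controlled prefix expansion
        node.next_hop[i] = nh

def lookup(root, addr):
    node, best, shift = root, None, 32 - STRIDE
    while node is not None and shift >= 0:
        idx = (addr >> shift) & ((1 << STRIDE) - 1)
        if node.next_hop[idx] is not None:
            best = node.next_hop[idx]
        node = node.child[idx]
        shift -= STRIDE
    return best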

15.
With the development of Internet technology, and especially the continuing growth of network bandwidth, IP routers need extremely fast route lookup as packets arrive, but existing lookup schemes are limited by their underlying algorithms and increasingly fail to meet this need. Based on an analysis of the route lookup mechanism in Linux, this paper proposes an improved, efficient software-based route lookup algorithm.

16.
TCAMs are widely used for fast route lookup: regardless of the number and length of prefixes, they solve the best-matching-prefix problem in a very short time. Compared with software-based solutions, TCAMs provide sustained throughput and a simple system architecture, which is attractive for IPv6 route lookup. However, they also have drawbacks, such as a limited number of entries, high price, and high power consumption. This paper therefore proposes an effective algorithm that reduces the required TCAM: by adding a small amount of DRAM, it eliminates 98% of the TCAM entries. Experiments show that the algorithm works well and can greatly improve IPv6 route lookup performance.

17.
Current trie-based longest-prefix-matching strategies for the variable-length, hierarchical, and potentially unbounded content names of Named Data Networking (NDN) suffer from high complexity, low lookup rates, and high update overhead in the tree data structure, which makes them inefficient. To enable fast packet forwarding, a fast greedy name lookup mechanism (FGNL) is proposed. The fast greedy component-code assignment scheme has low complexity, is easy to implement, and supports fast updates; the component encoding tree is essentially a two-dimensional state-transition table and is further converted into fast hash-table lookups; and the multi-hash-table structure is quick to build, compresses storage, and greatly accelerates name lookup. Experimental results show that, compared with a character trie, FGNL reduces memory by about 48.71%; compared with NCE, it saves 26.98% of storage and achieves a 2x speedup in lookup. The evaluation also shows that the scheme can scale up to accommodate potential future growth of the name set.
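A minimal sketch (assumed names) of the operation FGNL accelerates, longest name-prefix matching over '/'-separated components, using one Python dict as a stand-in for the hash tables; the component coding, state-transition table, and multi-hash-table layout are not reproduced.

def build_fib(entries):
    """entries: dict like {"/video/movie": "face1", "/video": "face0"}."""
    fib = {}
    for name, face in entries.items():
        fib[tuple(c for c in name.split("/") if c)] = face
    return fib

def lookup(fib, name):
    components = tuple(c for c in name.split("/") if c)
    for n in range(len(components), 0, -1):        # longest name prefix first
        face = fib.get(components[:n])
        if face is not None:
            return face
    return None

# Example: lookup(build_fib({"/video": "face0", "/video/movie": "face1"}),
#                 "/video/movie/part1") returns "face1"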

18.
With the growing number of routing entries, IP route lookup has become the major performance bottleneck in backbone routers. In this paper, a complete hardware-based route lookup system is proposed to achieve high throughput and high capacity for IPv6. The proposed system is a cache-centric, hash-based architecture that contains a route lookup application-specific integrated circuit (ASIC) and a memory set. A hash function reduces lookup time for the routing table, and ternary content addressable memory (TCAM) effectively resolves the collision problem. The gate count of the ASIC, excluding the binary content addressable memory (BCAM), is about 5306 gates, using an in-house 0.18 μm CMOS single-poly six-metal standard cell library. Post-layout simulations show that the ASIC operates in 3.6 ns, so the lookup system approaches 260 mega lookups per second (Mlps), which is sufficient for 100 Gbps networks. The memory density is good, with each routing entry requiring only 64 bits. Moreover, the routing table needs only 10.24 KB of on-chip BCAM, 20.04 KB of off-chip TCAM, and 29.29 MB of DRAM for 3.6 M routing entries in the proposed system.

19.
余晓磊  江红  杨璀琼 《计算机工程》2010,36(21):115-117
For global unicast addresses in wireless sensor networks (WSNs), a fast IPv6 route lookup mechanism is proposed. Bloom filters are used as the storage structure, with a suitable storage scheme to reduce the false-positive rate, and a longest-prefix-matching algorithm distributes prefixes appropriately so as to reduce the amount of static RAM and thereby the cost. Experimental results show that the algorithm reduces the number of hash probes per lookup, improving routing-table lookup speed and WSN performance.
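A minimal sketch (assumed names and sizes) of the common Bloom-filter-guided lookup pattern this abstract points to: one small filter per prefix length screens out lengths that cannot match, so only the surviving lengths cost a probe of the backing hash table. The paper's exact prefix allocation and SRAM dimensioning are not reproduced.

import hashlib

class Bloom:
    def __init__(self, bits=1024, hashes=3):
        self.bits, self.hashes, self.array = bits, hashes, bytearray(bits)
    def _positions(self, item):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.bits
    def add(self, item):
        for pos in self._positions(item):
            self.array[pos] = 1
    def may_contain(self, item):
        return all(self.array[pos] for pos in self._positions(item))

def build(routes, width=128):                       # width=128 for IPv6
    filters = {l: Bloom() for l in range(1, width + 1)}
    table = {}
    for prefix, plen, nh in routes:
        key = (prefix >> (width - plen), plen)
        filters[plen].add(key)
        table[key] = nh
    return filters, table

def lookup(filters, table, addr, width=128):
    for plen in range(width, 0, -1):                # longest prefix first
        key = (addr >> (width - plen), plen)
        if filters[plen].may_contain(key):          # cheap on-chip check
            nh = table.get(key)                     # hash-table probe only if needed
            if nh is not None:
                return nh
    return None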

20.
With the rapid development of the Internet and the emergence of IPv6 with its 128-bit addresses, routing tables are becoming ever larger, which places higher demands on the speed of IP destination-address lookup. IP address lookup uses longest-prefix matching rather than exact matching, which makes the lookup extremely complex. Addressing the shortcomings of existing IP lookup techniques, this paper proposes a search technique based on a multiprocessor architecture that reduces both the number of comparisons per lookup and the memory space.
