Similar Documents
19 similar documents were found.
1.
NDN (named data networking) routes packets directly on hierarchical content names, and the explosive growth in the number of content items makes the routing table grow just as explosively. Conventional single-path routing offers little help in aggregating and compressing the forwarding information base (FIB), so an FIB aggregation method with alternative next hops based on suffix digests is proposed. On the one hand, multi-next-hop routing is introduced into the FIB aggregation process so that content entries sharing a common next hop can be aggregated further, reducing the number of table entries; on the other hand, to address the suffix-hole problem caused by over-aggressive prefix aggregation, Bloom filters are used to extract suffix digests, which raises the routing success rate. Theoretical analysis and simulations show that combining suffix digests with alternative-next-hop FIB aggregation shrinks the routing table to less than 20% of its original size, and that as the number of Bloom filter hash functions increases, the content routing success rate approaches 100%.
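To make the suffix-digest idea concrete, the sketch below pairs an aggregated FIB entry with a Bloom filter over the announced suffixes, so a request is only forwarded along the aggregated route when the digest indicates the full name was actually announced. It is a minimal illustration, not the paper's algorithm: the name prefixes, face names, filter size, and hash construction are all assumptions made for the example.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: m bits, k hash positions derived from salted SHA-256."""
    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

# Aggregated FIB: one entry per name prefix, carrying a set of candidate next hops
# plus a Bloom digest of the suffixes actually announced under that prefix.
fib = {"/video": {"next_hops": ["face1", "face2"], "suffix_digest": BloomFilter()}}
for suffix in ["/movies/a.mp4", "/movies/b.mp4", "/live/ch1"]:
    fib["/video"]["suffix_digest"].add(suffix)

def forward(name):
    """Longest-prefix match on the aggregated FIB; consult the suffix digest
    before using an aggregated route, so suffix holes are filtered out."""
    for prefix in sorted(fib, key=len, reverse=True):
        if name.startswith(prefix):
            entry = fib[prefix]
            if name[len(prefix):] in entry["suffix_digest"]:  # probabilistic membership test
                return entry["next_hops"][0]                  # any of the alternative next hops
            return None                                       # likely a suffix hole
    return None

print(forward("/video/movies/a.mp4"))  # -> "face1"
print(forward("/video/news/x.mp4"))    # -> None (never announced; digest rejects it)
```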

2.
Content-based publish/subscribe is attracting growing attention and offers a good foundation for building large-scale distributed systems. In content-based event dissemination, the publishing node does not specify destination addresses; instead, an event is routed hop by hop, according to its content, toward the nodes interested in it. Existing content-based routing algorithms adapt poorly to dynamically changing subscriptions and incur high communication overhead. To address this, a content-based adaptive event routing algorithm, CAER, is proposed. By binding each subscription to its source node in the content-based routing table, the routing table is built and maintained so that routing adapts to subscription changes. Experimental results show that the algorithm not only improves the accuracy of event dissemination but also reduces network communication overhead.
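As a rough illustration of binding subscriptions to their source in the routing table, the sketch below keeps one entry per subscription, records the neighbor that leads back toward the subscriber, and forwards an event only to neighbors with a matching entry; removing a subscription then touches a single entry. This is a generic content-based forwarding sketch under assumed predicates and node names, not the CAER protocol itself.

```python
routing_table = []   # content-based routing table: (subscription_id, predicate, next_hop)

def subscribe(sub_id, predicate, next_hop):
    """Bind the subscription to the neighbor that leads back toward the subscriber."""
    routing_table.append((sub_id, predicate, next_hop))

def unsubscribe(sub_id):
    """Because entries are bound to their source, dropping a subscription touches one entry."""
    routing_table[:] = [e for e in routing_table if e[0] != sub_id]

def route_event(event):
    """Forward the event only toward neighbors with at least one matching subscription."""
    return {next_hop for _, predicate, next_hop in routing_table if predicate(event)}

# Two subscribers reachable through different neighbors.
subscribe("s1", lambda e: e.get("topic") == "stock" and e.get("price", 0) > 100, "nodeB")
subscribe("s2", lambda e: e.get("topic") == "weather", "nodeC")

print(route_event({"topic": "stock", "price": 120}))   # {'nodeB'}
unsubscribe("s1")                                       # a subscription change is a local table edit
print(route_event({"topic": "stock", "price": 120}))   # set()
```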

3.
Content-Centric Networking (CCN) is an important direction for the future Internet. In-network caching is a defining feature of CCN and strongly influences content delivery performance, and how efficiently cached content can be discovered is closely tied to caching performance. Traditional CCN cache discovery relies on Interest packets opportunistically hitting caches along the data-plane forwarding path, which is somewhat random and blind and can leave cached content under-utilized. This paper proposes a control-plane approach to cache availability: based on the topology, cache capacities, and the distribution of user requests, it computes which content is "worth" caching, stores it, and advertises it so that it participates in route computation, allowing subsequent requests to locate and use cached content quickly and accurately. Experimental results show that the method raises the cache hit ratio by about 20% and lowers server load by about 15%.
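A minimal sketch of the control-plane selection step is given below: given assumed per-router request counts and cache capacities, it greedily picks the most-requested content for each router to cache and then builds a content-to-location table that could be advertised into routing. The names, counts, and the simple popularity-only ranking are illustrative assumptions; the paper's computation also factors in topology.

```python
# Assumed per-router request statistics and capacities (illustrative only).
request_counts = {
    "r1": {"/a": 50, "/b": 10, "/c": 5},
    "r2": {"/a": 20, "/c": 30},
}
cache_capacity = {"r1": 2, "r2": 1}   # number of content objects each router can hold

def select_cache_contents(counts, capacity):
    """Greedy selection: each router caches its most-requested contents up to its capacity."""
    placement = {}
    for router, demand in counts.items():
        ranked = sorted(demand, key=demand.get, reverse=True)
        placement[router] = ranked[:capacity[router]]
    return placement

placement = select_cache_contents(request_counts, cache_capacity)
print(placement)   # {'r1': ['/a', '/b'], 'r2': ['/c']}

# Advertise the placement: a content-level location table the control plane can distribute,
# so later requests for /a can be resolved to router r1 instead of the origin server.
content_routes = {}
for router, contents in placement.items():
    for name in contents:
        content_routes.setdefault(name, []).append(router)
print(content_routes)  # {'/a': ['r1'], '/b': ['r1'], '/c': ['r2']}
```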

4.
Research on optimizing TCAM routing-table entry management algorithms
Ternary content-addressable memory (TCAM) handles longest-prefix matching well and enables fast route lookup and packet forwarding, but its requirement that entries be kept in order makes entry management complex. Building on existing TCAM entry-management algorithms and an analysis of the statistical distribution of prefix lengths, this work optimizes the allocation of routing-table space in the TCAM and introduces a new prefix-block pointer management strategy, yielding an improved entry-management method that raises routing-table update efficiency.

5.
This work analyzes existing IPv4 route lookup algorithms, the characteristics of IPv6 addresses, and the prefix distribution of backbone routing tables. Borrowing the simple structure and fast lookup of LFT hash tables, it proposes an IPv6 route lookup algorithm that combines segmented hash tables, anchored at 32-bit prefixes, with a multi-branch trie. The algorithm is simple in structure, efficient to search, and easy to update; in most cases a single memory access is enough to find the routing information, which raises the forwarding speed of IPv6 backbone routers and meets the needs of next-generation IPv6 Internet growth.

6.
The rapid growth of the Internet requires network devices to forward several million packets per second or more; the keys to achieving this are the organization of the routing table, a fast lookup algorithm, and a high-performance hardware platform. This work designs and implements a high-speed IP forwarding system on a many-core network processor, using the Tile-Gx36 as the hardware platform and a lookup algorithm that combines hashing on prefix length with a multi-branch trie. It exploits the storage and retrieval advantages of hash-on-prefix-length lookup together with the query efficiency of multi-branch trie lookup, and stores the routing table in the L2 cache to further improve access speed and hit ratio. Experimental results show that the system sustains 40 Gbps forwarding for packets of various sizes.

7.
To place and exploit returned Data packets sensibly in Content Centric Networking (CCN), this paper brings centralized control into content caching and lookup and proposes a cooperative caching and routing mechanism. For cache decisions, labels carried in Interest and Data packets identify the region along the path with the largest caching benefit; within that region, the best placement is chosen by combining the global popularity of the content with the available cache space at each node. For route lookup, in-region content placement is combined with forwarding to increase the availability of cached resources. Simulation results show that, compared with classical algorithms, the mechanism improves the cache hit ratio and hop-reduction ratio at a small extra cost, balances the cache load distribution, and raises CCN caching and delivery efficiency.

8.
Building on a study of the prefix-length distribution in routing tables, a binary search on prefix lengths is proposed. The scheme uses prefix expansion to merge several sparsely populated prefix lengths into one, which lowers the height of the search structure, reduces the number of memory accesses, and speeds up lookups. A practical algorithm for storing markers is analyzed, and route lookup for IPv6 is discussed.
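The sketch below illustrates binary search on prefix lengths for IPv4, with markers left at the probe lengths on the way to each longer prefix and each marker carrying its precomputed best match so the search never has to backtrack. The prefix-expansion step that merges sparsely populated lengths, and the IPv6 discussion, are omitted; the routes, next hops, and the brute-force marker precomputation are assumptions made for the example.

```python
def best_match_upto(bits, l, routes):
    """Brute force at build time: longest route of length <= l that matches the l-bit key."""
    best_len, best_nh = -1, None
    for rbits, rlen, nh in routes:
        if rlen <= l and (bits >> (l - rlen)) == rbits and rlen > best_len:
            best_len, best_nh = rlen, nh
    return best_nh

def build_tables(routes):
    """One table per prefix length, plus markers (carrying their best match) on the
    binary-search path toward every longer prefix."""
    lengths = sorted({l for _, l, _ in routes})
    tables = {l: {} for l in lengths}
    for bits, length, nh in routes:                     # pass 1: real prefixes
        tables[length][bits] = nh
    for bits, length, _ in routes:                      # pass 2: markers
        lo, hi = 0, len(lengths) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            l = lengths[mid]
            if l < length:
                key = bits >> (length - l)
                tables[l].setdefault(key, best_match_upto(key, l, routes))
                lo = mid + 1
            elif l > length:
                hi = mid - 1
            else:
                break
    return lengths, tables

def lookup(addr, lengths, tables):
    """Binary search over prefix lengths; a hit (prefix or marker) sends the search
    toward longer lengths, a miss toward shorter ones."""
    best, lo, hi = None, 0, len(lengths) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        l = lengths[mid]
        hit = tables[l].get(addr >> (32 - l), "miss")
        if hit == "miss":
            hi = mid - 1              # only shorter prefixes can match
        else:
            if hit is not None:
                best = hit            # real next hop, or a marker's precomputed best match
            lo = mid + 1              # longer matching prefixes may still exist
    return best

routes = [(0x80, 8, "A"), (0x8001, 16, "B"), (0xC0A800, 24, "C")]   # 128/8, 128.1/16, 192.168.0/24
lengths, tables = build_tables(routes)
print(lookup(0x80010005, lengths, tables))   # 128.1.0.5   -> "B"
print(lookup(0x80020000, lengths, tables))   # 128.2.0.0   -> "A"
print(lookup(0xC0A80007, lengths, tables))   # 192.168.0.7 -> "C"
```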

9.
《计算机工程》2017,(3):99-104
To address the large storage redundancy and low lookup efficiency caused by the ever-growing number of routing-table entries, an improved route lookup algorithm based on algebraic decision diagrams (ADDs) is proposed. Routing entries are expressed as pseudo-Boolean functions according to the properties of symbolic algorithms; taking both the structure of routing tables and the strengths of symbolic techniques into account, a prefix-based routing table is built on the ADD structure, together with update, delete, and lookup algorithms. Simulations on open routing tables provided by IPMA show that the algorithm effectively reduces the number of memory accesses per routing-table operation and saves routing-table storage space.

10.
To address the low efficiency of existing routing-table lookup methods, a lookup algorithm based on a multi-branch priority trie is proposed. Higher-priority prefixes are stored in the otherwise-empty (virtual) internal nodes of the multi-branch trie, and prefixes that would otherwise need expansion are kept in an auxiliary storage structure, so a lookup can find the longest matching prefix at an internal node without descending to a leaf, and the routing table does not have to be rebuilt on updates. Simulation results show that the proposed algorithm effectively reduces the number of memory accesses needed for lookup, insertion, and deletion, and substantially improves lookup and update rates.

11.
Current trie-based longest-prefix matching over NDN (Named Data Networking) content names, which are variable-length, hierarchical, and potentially unbounded in length, suffers from high complexity, low lookup rates, and costly updates to the tree structure, so overall efficiency is low. A fast greedy name lookup mechanism (FGNL) is proposed to enable fast packet forwarding. The fast greedy component-code assignment has low complexity, is easy to implement, and supports fast updates; the component encoding tree is essentially a two-dimensional state-transition table and is further turned into fast hash-table lookups; and the multi-hash-table structure is quick to build and compresses storage, greatly accelerating name lookup. Experimental results show that FGNL saves about 48.71% of memory compared with a character trie, saves 26.98% of storage compared with NCE, and achieves a 2x lookup speedup. The evaluation also shows that the scheme scales up to accommodate the potential future growth of name sets.
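To show the encode-then-hash flavor of component-based name lookup, the sketch below maps each name component to a small integer code and performs longest component-wise prefix matching over tuples of codes stored in an ordinary hash table. The greedy code-assignment rules and the state-transition/multi-hash structure of FGNL are not reproduced; the names, faces, and lazy code assignment are illustrative assumptions.

```python
component_codes = {}          # name component string -> small integer code

def encode(name):
    """Split '/a/b/c' into components and map each to its code, assigning new codes lazily."""
    comps = [c for c in name.split("/") if c]
    return tuple(component_codes.setdefault(c, len(component_codes)) for c in comps)

fib = {}                      # tuple of component codes -> next hop (face)

def fib_add(prefix_name, face):
    fib[encode(prefix_name)] = face

def fib_lookup(name):
    """Longest component-wise prefix match over the encoded name."""
    codes = encode(name)
    for i in range(len(codes), 0, -1):
        face = fib.get(codes[:i])
        if face is not None:
            return face
    return None

fib_add("/edu/univ", "face1")
fib_add("/edu/univ/cs/video", "face2")
print(fib_lookup("/edu/univ/cs/video/lecture1.mp4"))  # -> "face2"
print(fib_lookup("/edu/univ/math/notes"))             # -> "face1"
```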

12.
刘亚林  刘东  张晓 《计算机学报》2001,24(12):1272-1278
This paper studies fast route lookup algorithms in routers. To address the problems existing algorithms have with lookup speed, space complexity, and the difficulty of inserting and deleting entries, a fast route lookup algorithm is proposed. The algorithm builds a two-level index structure to reduce the number of memory accesses per lookup and thereby raise lookup speed; it exploits prefix expansion and a special data structure to build the index tables, supporting dynamic insertion, deletion, and updating of routes; and it compresses the second-level index table, greatly reducing the storage the routes require. A lookup completes in at most four and at least two memory accesses. Because of the compression, the storage requirement is small, so the algorithm suits both software and hardware implementation. Fast lookup, small storage, and support for dynamic insertion and deletion are its main features.

13.
《Computer Networks》2008,52(2):399-417
Packet forwarding on the Internet is solely based on the destination address of packets, and it is easy to forge the source address of IP packets without affecting the delivery of the packets. To solve this problem, one can have routers check whether or not every packet comes from a correct direction based on its source address field. However, due to routing asymmetry in today’s Internet, a router cannot simply reverse its forwarding table to determine the correct incoming direction of a packet. In this paper, we present the source address validity enforcement protocol, SAVE, which allows routers to learn valid incoming directions for any given source address. SAVE is independent of, and can work with, any specific routing protocol. By only interfacing with the forwarding table at routers, SAVE allows routers to properly propagate valid source address information from source address spaces to all destinations, and allows each router en route to build and maintain an incoming tree to associate each source address prefix with a corresponding incoming interface. The incoming tree is further valuable in handling routing changes: although a routing change at one router could affect the incoming direction of source address spaces from many locations, only the router that sees the change needs to send out new updates. Finally, SAVE achieves good performance with low overhead.
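As a minimal sketch of the incoming-tree check described above, the code below keeps a table mapping source-address prefixes to the interface on which packets from that space are expected to arrive and validates each packet's source against it with a longest-prefix match. The prefixes and interface names are assumptions; SAVE itself builds and maintains this table from protocol updates.

```python
import ipaddress

# Incoming table: source-address prefix -> expected incoming interface (illustrative values).
incoming_table = {
    ipaddress.ip_network("10.1.0.0/16"): "eth0",
    ipaddress.ip_network("10.2.0.0/16"): "eth1",
    ipaddress.ip_network("0.0.0.0/0"): "eth2",       # default: everything else
}

def expected_interface(src_ip):
    """Longest-prefix match of the source address against the incoming table."""
    src = ipaddress.ip_address(src_ip)
    matches = [net for net in incoming_table if src in net]
    return incoming_table[max(matches, key=lambda n: n.prefixlen)] if matches else None

def validate(src_ip, arriving_interface):
    return expected_interface(src_ip) == arriving_interface

print(validate("10.1.5.5", "eth0"))   # True: the packet came from the expected direction
print(validate("10.1.5.5", "eth2"))   # False: likely spoofed or mis-routed
```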

14.
《Computer Networks》2008,52(2):303-326
This paper proposes a novel Internet Protocol (IP) packet forwarding architecture for IP routers. This architecture comprises a non-blocking Multizone Pipelined Cache (MPC) and a hardware-supported IP routing lookup method. The paper also describes a method for expansion-free software lookups. The MPC achieves lower miss rates than those reported in the literature. The MPC uses a two-stage pipeline for a half-prefix/half-full address IP cache that results in lower activity than conventional caches. MPC’s updating technique allows the IP routing lookup mechanism to freely decide when and how to issue update requests. The effective miss penalty of the MPC is reduced by using a small non-blocking buffer. This design caches prefixes but requires significantly less expansion of the routing table than conventional prefix caches. The hardware-based IP lookup mechanism uses a Ternary Content Addressable Memory (TCAM) with a novel Hardware-based Longest Prefix Matching (HLPM) method. HLPM has lower signaling activity when processing short matching prefixes than alternative designs. HLPM has a simple solution to determine the longest matching prefix and requires a single write for table updates.

15.
Most of the high-performance routers available commercially these days equip each of their line cards (LCs) with a forwarding engine (FE) to perform table lookups locally. This work introduces and evaluates a technique for speedy packet lookups, called SPAL, in such routers. The BGP routing table under SPAL is fragmented into subsets which constitute forwarding tables for different FEs so that the number of table entries in each FE drops as the router grows. This reduction in the forwarding table size drastically lowers the amount of SRAM (e.g., L3 data cache) required in each LC to hold the trie constructed according to the prefix matching algorithm. SPAL calls for caching the lookup result of a given IP address at its home LC (denoted by LC_ho), using the LR-cache, such that the result can satisfy the lookup requests for the same address from not only LC_ho, but also other LCs quickly. Our trace-driven simulation reveals that SPAL leads to improved mean lookup performance by a factor of at least 2.5 (or 4.3) for a router with three (or 16) LCs, if the LR-cache contains 4K blocks. SPAL achieves this significant improvement, while greatly lowering the SRAM (i.e., the L3 data cache plus the LR-cache combined) requirement in each LC and possibly shortening the worst-case lookup time (thanks to fewer memory accesses during longest-prefix matching search) when compared with a current router without partitioning the routing table. It promises good scalability (with respect to routing table growth) and exhibits a small mean lookup time per packet. With its ability to speed up packet lookup performance while lowering overall SRAM substantially, SPAL is ideally applicable to the new generation of scalable high-performance routers.
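The sketch below illustrates the SPAL caching flow under simplifying assumptions: the routing table is fragmented across line cards by the first address octet, the same rule picks the home LC of an address, and each LC keeps an LR-cache of recent results for its home addresses. The fragmentation rule, toy string-prefix matching, and table contents are assumptions made for illustration, not the paper's exact scheme.

```python
NUM_LCS = 4
fragments = [dict() for _ in range(NUM_LCS)]   # per-LC forwarding-table fragment: prefix -> port
lr_cache = [dict() for _ in range(NUM_LCS)]    # per-LC LR-cache: full address -> lookup result

def home_lc(addr_or_prefix):
    """Pick the home LC from the first octet, so a prefix (assumed >= 8 bits here) and every
    address it covers land on the same line card."""
    return int(addr_or_prefix.split(".")[0]) % NUM_LCS

def assign_prefix(prefix, next_hop):
    fragments[home_lc(prefix)][prefix] = next_hop      # each LC holds only its own subset

def lookup(addr):
    """Any LC needing `addr` asks the home LC, which answers from its LR-cache or its fragment."""
    lc = home_lc(addr)
    if addr in lr_cache[lc]:
        return lr_cache[lc][addr]                       # cache hit: no table walk needed
    table = fragments[lc]
    matches = [p for p in table if addr.startswith(p)]  # toy LPM on dotted-string prefixes
    result = table[max(matches, key=len)] if matches else None
    lr_cache[lc][addr] = result                         # cached for later requests from any LC
    return result

assign_prefix("10.1.", "portA")
assign_prefix("10.", "portB")
print(lookup("10.1.2.3"))   # first lookup walks the fragment -> "portA"
print(lookup("10.1.2.3"))   # second lookup is served from the LR-cache -> "portA"
```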

16.
An IP router must forward packets at gigabit speed in order to guarantee a good quality of service. Two important factors make this task a challenging problem: (i) for each packet, the longest matching prefix in the forwarding table must be quickly computed; (ii) the routing tables contain several thousands of entries and their size grows significantly every year. Because of this, parallel routers have been developed which use several processors to forward packets. In this work we present a novel algorithmic technique which, for the first time, exploits the parallelism of the router also to reduce the size of the routing table. Our method is scalable and requires only minimal additional hardware. Indeed, we prove that any IP routing table T can be split into two subtables T1 and T2 such that: (a) |T1| can be any positive integer k ≤ |T| and |T2| ≤ |T| - k + 1; (b) the two routing tables can be used separately by two processors so that the IP lookup on T is obtained by simply XOR-ing the IP lookup on the two tables. Our method is independent of the data structure used to implement the lookup search and it allows for a better use of the processors' L2 cache. For real routers' routing tables, we also show how to achieve simultaneously: (a) |T1| is roughly 7% of the original table T; (b) the lookup on table T2 does not require the best-matching prefix computation.
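The toy code below demonstrates only the XOR recombination step: each prefix's next hop is split into two XOR shares held in two tables, and the answer is recovered by XOR-ing two parallel lookups. It does not reproduce the paper's construction, which additionally makes T1 much smaller than T and lets the T2 lookup skip the best-matching-prefix computation; the prefixes and next-hop identifiers are assumptions.

```python
import random

table = {"10.0.0.0/8": 1, "10.1.0.0/16": 2, "192.168.0.0/16": 3}   # prefix -> next-hop id

t1, t2 = {}, {}
for prefix, nh in table.items():
    share = random.randrange(256)
    t1[prefix] = share
    t2[prefix] = nh ^ share        # the XOR of the two shares is the real next hop

def lpm(addr_bits, tbl):
    """Toy longest-prefix match over 'a.b.c.d/len' prefixes against a 32-bit address."""
    best_len, best_val = -1, 0
    for prefix, val in tbl.items():
        net, plen = prefix.split("/")
        plen = int(plen)
        net_bits = 0
        for octet in net.split("."):
            net_bits = (net_bits << 8) | int(octet)
        if plen > best_len and (addr_bits >> (32 - plen)) == (net_bits >> (32 - plen)):
            best_len, best_val = plen, val
    return best_val

addr = (10 << 24) | (1 << 16) | (2 << 8) | 3      # 10.1.2.3
# Both tables hold the same prefixes here, so both lookups pick the same longest match
# and the XOR of their values reconstructs the real next hop.
print(lpm(addr, t1) ^ lpm(addr, t2))              # -> 2, the same answer a direct lookup in `table` gives
```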

17.
Building on a comparison of various tree-based lookup algorithms, this paper proposes an improved route lookup algorithm that offers fast lookups, small storage requirements, fast updates, and simple hardware implementation, and can meet the demands of 10 Gbps core routers.

18.
颜永红  张帆 《微计算机信息》2006,22(35):254-256
Modern core routers place ever higher demands on lookup rate, entry update speed, and lookup-table capacity. Most vendors currently adopt solutions based on TCAM (ternary content-addressable memory). TCAM's greatest strength is lookup speed, but its update algorithms waste a great deal of storage. To address this, the paper proposes an FPGA-assisted route update method: when a new entry is added, only that entry needs a one-time preprocessing step, the forwarding table no longer has to be kept sorted by prefix length, and the storage wasted on reserved free entries is eliminated.

19.
To improve IPv6 route lookup efficiency, and based on the distribution of IPv6 route prefixes and their hierarchical relationships, a two-level IPv6 lookup algorithm combining collision-free hash tables with a multibit trie is proposed. Address prefixes are partitioned into ranges and stored by length in three hash tables covering 32, 40, and 48 bits; the remaining prefix bits are stored in a multibit trie. An IPv6 lookup then proceeds in two levels, first in the collision-free hash tables and then in the multibit trie. Experiments show that the algorithm needs 1.0 to 1.7 lookup steps on average, making it suitable for high-speed IPv6 route lookup.
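A minimal sketch of the two-level structure is shown below, under simplifying assumptions: prefixes are at least 32 bits long, the first level is three hash tables keyed on the leading 32, 40, or 48 bits, and the remaining bits of longer prefixes sit in a small per-entry dictionary that stands in for the multibit trie. The prefixes and next hops are illustrative.

```python
ANCHORS = (48, 40, 32)                       # probe the longest anchor first
levels = {a: {} for a in ANCHORS}            # anchor length -> {leading bits: entry}

def add_route(prefix, plen, next_hop):
    anchor = max(a for a in ANCHORS if a <= plen)    # longest anchor not exceeding the prefix length
    key = prefix >> (plen - anchor)                  # leading `anchor` bits of the prefix
    entry = levels[anchor].setdefault(key, {"next_hop": None, "rest": {}})
    if plen == anchor:
        entry["next_hop"] = next_hop
    else:                                            # second level: the bits beyond the anchor
        entry["rest"][(prefix & ((1 << (plen - anchor)) - 1), plen - anchor)] = next_hop

def lookup(addr, width=128):
    for anchor in ANCHORS:                           # first level: hash on the leading 48/40/32 bits
        entry = levels[anchor].get(addr >> (width - anchor))
        if entry is None:
            continue
        best_len = anchor if entry["next_hop"] is not None else -1
        best_nh = entry["next_hop"]
        for (bits, rlen), nh in entry["rest"].items():          # second level: longest remainder wins
            if ((addr >> (width - anchor - rlen)) & ((1 << rlen) - 1)) == bits and anchor + rlen > best_len:
                best_len, best_nh = anchor + rlen, nh
        if best_nh is not None:
            return best_nh                           # anything found here beats all shorter anchors
    return None

add_route(0x20010db8, 32, "A")                       # 2001:db8::/32
add_route((0x20010db8 << 16) | 0x0001, 48, "B")      # 2001:db8:1::/48
addr = (0x20010db8 << 96) | (0x0001 << 80) | 0x1     # an address inside 2001:db8:1::/48
print(lookup(addr))                                  # -> "B"
print(lookup((0x20010db8 << 96) | (0x9999 << 80)))   # -> "A" (only the /32 matches)
```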
