20 similar documents found; search took 0 ms
1.
2.
To achieve wire-speed forwarding of network flows, high-performance switches widely use ternary content-addressable memory (TCAM) to build their packet-classification engines. To address TCAM's high power consumption, many low-power indexing schemes have appeared in recent years that reduce power by selectively activating TCAM storage blocks. However, these schemes are generally built with bottom-up, locally optimizing algorithms that cannot partition the flow-table rules evenly, which severely hurts both TCAM storage efficiency and the achievable power savings. This paper proposes and implements a TCAM low-power indexing scheme based on decision-tree mapping that greatly reduces power consumption while also improving TCAM storage efficiency. Exploiting the small-field property common to rules, the original rule set is first partitioned into several rule subsets; then, for the characteristic fields of each subset, a balanced decision tree is built top-down; finally, a greedy traversal of each decision tree yields the TCAM index list. Experiments show that, on a rule set of 100,000 rules, the algorithm achieves a 98.2% reduction in power consumption at the cost of only 1.3% additional storage.
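The selective-activation idea in this abstract can be illustrated with a toy sketch. Assumed details for illustration only: an 8-bit address field, and a block index derived from the top two prefix bits standing in for the paper's balanced decision tree; names such as `build_index` are not from the paper.

```python
def build_index(rules, num_blocks):
    """Partition rules into TCAM blocks keyed by the top bits of the
    8-bit destination prefix (a stand-in for the decision-tree split)."""
    blocks = [[] for _ in range(num_blocks)]
    for rule in rules:
        key = rule["prefix"] >> 6          # top 2 bits -> up to 4 blocks
        blocks[key % num_blocks].append(rule)
    return blocks

def lookup(blocks, addr):
    """Activate (search) only the single block the index points to,
    instead of the whole TCAM array -- this is where power is saved."""
    block = blocks[(addr >> 6) % len(blocks)]
    for rule in block:                     # priority order within a block
        if addr & rule["mask"] == rule["prefix"] & rule["mask"]:
            return rule["action"]
    return "default"
```

A real index would be built from the greedy traversal of the per-subset decision trees; the fixed two-bit split here only shows the activation mechanism.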
3.
In modern IP routers, Internet protocol (IP) lookup forms a bottleneck in packet forwarding because the lookup speed cannot catch up with the increase in link bandwidth. Ternary content-addressable memories (TCAMs) have emerged as viable devices for designing high-throughput forwarding engines on routers. Called ternary because they store don't-care states in addition to 0s and 1s, TCAMs search the data (IP address) in a single clock cycle. Because of this property, TCAMs are particularly attractive for packet forwarding and classifications. Despite these advantages, large TCAM arrays have high power consumption and lack scalable design schemes, which limit their use. We propose a two-level pipelined architecture that reduces power consumption through memory compaction and the selective enablement of only a portion of the TCAM array. We also introduce the idea of prefix aggregation and prefix expansion to reduce the number of routing-table entries in TCAMs for IP lookup. We also discuss an efficient incremental update scheme for the routing of prefixes and provide empirical equations for estimating memory requirements and proportional power consumption for the proposed architecture.
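Prefix aggregation, as named in this abstract, merges two sibling prefixes that share a next hop into their common parent, shrinking the routing table. A minimal sketch over a toy 8-bit address space (the function name and tuple layout are assumptions for illustration, not the paper's scheme):

```python
def aggregate(prefixes):
    """One form of prefix aggregation: two sibling /L prefixes that share
    a next hop collapse into their /(L-1) parent, repeated to a fixed point.
    Each entry is (value, length, next_hop), value left-aligned in 8 bits."""
    table = set(prefixes)
    merged = True
    while merged:
        merged = False
        for (val, length, nh) in sorted(table, key=lambda e: -e[1]):
            if length == 0:
                continue
            # The sibling differs only in the last significant prefix bit.
            sibling = (val ^ (1 << (8 - length)), length, nh)
            if sibling in table:
                table.discard((val, length, nh))
                table.discard(sibling)
                parent_val = val & ~(1 << (8 - length)) & 0xFF
                table.add((parent_val, length - 1, nh))
                merged = True
                break                      # restart on the updated table
    return table
```

For example, 10/2 and 11/2 with the same next hop collapse into 1/1, saving one TCAM entry.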
4.
To address the high complexity of traditional TCAM lookup-and-update methods, which cannot meet the wire-speed lookup and forwarding requirements of high-performance routers, a route lookup and update technique based on grouped TCAM is proposed. The system architecture of a terabit high-performance router is introduced, followed by a grouped-TCAM lookup-and-update architecture and an algorithm that reserves entry space based on statistical prediction, on which the route lookup-and-update algorithm is designed. A test and verification environment was built, using the MAE-WEST and MAE-EAST...
5.
6.
To address the low matching efficiency and maintenance difficulty of the traditional linear-matching algorithm in firewalls, this paper proposes and implements Salist, an efficient and flexible Netfilter extension framework for filtering against IP address sets. Salist consists of a table-management module based on kernel virtual files, an in-table rule-management module that automatically deduplicates, merges, and sorts IP address sets, and an efficient packet-matching module based on the Bsearch (binary-search) algorithm. Theoretical analysis and practical tests show that Salist reduces the time complexity of packet matching from the O(n) of traditional linear matching to O(log n); rule merging cuts the kernel memory occupied by the rule table by more than 10%, and the per-file rule-management mechanism simplifies rule-set maintenance. The results show that deploying Salist in core network devices can greatly increase packet-forwarding rates while reducing the memory footprint and management burden of rules.
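The O(log n) matching that Salist is described as achieving can be sketched with sorted, merged address ranges and a binary search. This is an illustrative user-space model, not the kernel implementation; the range layout is an assumption:

```python
import bisect

def normalize(ranges):
    """Deduplicate, sort, and merge overlapping or adjacent [lo, hi]
    address ranges (as the rule-management module is described to do),
    then split them into parallel start/end arrays for fast lookup."""
    merged = []
    for lo, hi in sorted(set(ranges)):
        if merged and lo <= merged[-1][1] + 1:
            merged[-1][1] = max(merged[-1][1], hi)   # extend previous range
        else:
            merged.append([lo, hi])
    starts = [lo for lo, _ in merged]
    ends = [hi for _, hi in merged]
    return starts, ends

def match(starts, ends, addr):
    """Binary search over range starts: O(log n) per packet lookup."""
    i = bisect.bisect_right(starts, addr) - 1
    return i >= 0 and addr <= ends[i]
```

Addresses are shown as plain integers; real code would convert dotted-quad IPs to 32-bit integers first.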
7.
8.
9.
Packet classification methods based on ternary content-addressable memory (TCAM) cannot effectively solve the problem of range expansion. To address this, an efficient packet classification method is proposed. The distinct ranges of each field in the packet-classification rule set are grouped, all ranges within the same group are re-encoded using the Shadow Encoding method, and the original rule set is rewritten according to the re-encoded ranges. Experimental results show that this method compresses TCAM storage space by 75.90% on average.
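The range expansion this abstract targets arises because a TCAM stores prefixes, so a single range rule must be split into many prefix entries. A minimal sketch of the standard range-to-prefix split (illustrating the blow-up that Shadow Encoding is designed to avoid, not Shadow Encoding itself):

```python
def range_to_prefixes(lo, hi, width=16):
    """Split [lo, hi] into the minimal set of (value, prefix-length)
    entries a plain TCAM would need for a width-bit field."""
    prefixes = []
    while lo <= hi:
        # Largest power-of-two block aligned at lo that fits inside [lo, hi].
        size = lo & -lo if lo else 1 << width
        while size > hi - lo + 1:
            size >>= 1
        prefixes.append((lo, width - size.bit_length() + 1))
        lo += size
    return prefixes
```

For a w-bit field the worst case is 2w - 2 entries; e.g. the common port rule "greater than 1023", i.e. [1024, 65535], needs 6 prefixes.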
10.
Computer Communications, 2001, 24(7-8): 667-676
In order to provide different service treatments to individual or aggregated flows, layer 4 routers in Integrated Services networks need to classify packets into different queues. The classification module of layer 4 routers must be fast enough to support gigabit links at a rate of millions of packets per second. In this work, we present a new software method, OLBM, to look up multiple fields of a packet, in a dynamically pre-defined order, against the classification database. The algorithm also uses a technique called bypass matching and can classify packets at a rate of well over one million packets per second while scaling to support more than 300k flows. Complexity analysis and experimental measurements are also presented in this study.
11.
Until now, existing camera pose estimation methods for the widely used square-marker-based augmented reality (AR) have been either highly sensitive to noise or very time-consuming, and developers have had to work hard to find the proper trade-off between computational speed and quality in mobile AR applications, where computational resources are limited. The major difficulty is that only the four corner points of the square AR marker are available, and no redundant point correspondences can be used for a stable estimation. To solve this problem, an efficient lookup table (LUT)-based non-iterative solution is presented in this paper; it is more stable in the presence of noise than the most robust and accurate iterative solutions in the field, with the same level of accuracy and a much lower computational complexity. Our central idea consists of extracting a key parameter β from the camera pose and creating a LUT for β that takes the symmetrical structure of the square marker into account, thereby exploiting additional information. Copyright © 2011 John Wiley & Sons, Ltd.
12.
Research on a fast packet flow-distribution algorithm. Times cited: 1 (self-citations: 0, citations by others: 1)
瞿中. Computer Engineering and Design (计算机工程与设计), 2005, 26(9): 2322-2325
Flow-based packet classification algorithms have already been applied in areas such as layer-4 switching; such algorithms are characterized by large flow tables and fast flow-table updates. The "fast packet flow-distribution algorithm" adopts the basic idea of hashing and introduces the principle of flow locality to speed up the hash-lookup process. The algorithm was simulated in software, and its performance was analyzed in terms of both time complexity and space complexity. Experimental results show that the algorithm achieves good time and space complexity and can distribute packets to flows quickly.
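The hashing-plus-locality idea can be sketched as a hash table whose bucket chains move a flow to the front on every hit, so packets of a bursty flow are matched after one comparison. Class name, bucket count, and the 5-tuple key are illustrative assumptions, not details from the paper:

```python
class FlowTable:
    """Hash-based flow lookup with a temporal-locality heuristic:
    on a hit, move the flow to the front of its bucket chain."""

    def __init__(self, num_buckets=64):
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def classify(self, key, new_class=None):
        chain = self._bucket(key)
        for i, (k, cls) in enumerate(chain):
            if k == key:
                chain.insert(0, chain.pop(i))   # exploit flow locality
                return cls
        if new_class is not None:               # first packet of a new flow
            chain.insert(0, (key, new_class))
            return new_class
        return None
```

The key would typically be the (src IP, dst IP, protocol, src port, dst port) 5-tuple identifying a flow.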
13.
Packet classification is one of the most challenging functions in Internet routers since it involves a multi-dimensional search that should be performed at wire-speed. Hierarchical packet classification is an effective solution which reduces the search space significantly whenever a field search is completed. However, the hierarchical approach using binary tries has two intrinsic problems: back-tracking and empty internal nodes. To avoid back-tracking, the hierarchical set-pruning trie applies rule copy, and the grid-of-tries uses pre-computed switch pointers. However, none of the known hierarchical algorithms simultaneously avoids empty internal nodes and back-tracking. This paper describes various packet classification algorithms and proposes a new efficient packet classification algorithm using the hierarchical approach. In the proposed algorithm, a hierarchical binary search tree, which does not involve empty internal nodes, is constructed for the pruned set of rules. Hence, both back-tracking and empty internal nodes are avoided in the proposed algorithm. Two refinement techniques are also proposed; one for reducing the rule copy caused by the set-pruning and the other for avoiding rule copy. Simulation results show that the proposed algorithm provides an improvement in search performance without increasing the memory requirement compared with other existing hierarchical algorithms.
14.
Focusing on the scalability of packet-classification algorithms, this paper analyzes in depth the time and space complexity of representative scalable packet-classification algorithms. An evaluation system for scalable packet-classification algorithms was developed on top of the ClassBench tool suite and used to benchmark the representative algorithms under different simulated scenarios, and the performance differences and applicable conditions of each algorithm are analyzed systematically. Finally, future trends in scalable packet-classification algorithms are discussed.
15.
Frederic Raspall. Computer Networks, 2012, 56(6): 1667-1684
Packet sampling is needed to measure network traffic scalably at high speeds. While many sampling-based measurement techniques have been developed in the recent past, most approaches select packets uniformly, without regard to their size. We argue that this behavior can negatively impact the performance of measurement tools, and present a sampling scheme that, by taking packet sizes into account, overcomes several weaknesses of classic sampling methods. While the idea behind our approach is conceptually not new, the way we propose to implement it yields a cost-effective solution that is suitable at very high speeds. To illustrate the advantages of the approach, we study the problem of estimating traffic volumes. Our analysis and experimental evaluation with real traffic traces show that sampling that considers packet sizes can improve the quality of measurements and make their accuracy less dependent on the properties of the traffic, at a small additional overhead compared to traditional methods.
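The size-aware idea can be sketched as sampling each packet with probability proportional to its size and weighting each sampled packet by the inverse of that probability, which keeps the volume estimate unbiased. The parameter name `bytes_per_sample` is an assumption for illustration; the paper's actual scheme and implementation may differ:

```python
import random

def size_dependent_sample(packet_sizes, bytes_per_sample, seed=0):
    """Estimate total traffic volume from size-dependent sampling:
    a packet of s bytes is kept with probability min(1, s / bytes_per_sample),
    and each kept packet contributes s / p to the estimate (inverse-
    probability weighting, so the estimator is unbiased)."""
    rng = random.Random(seed)
    estimate = 0.0
    for size in packet_sizes:
        p = min(1.0, size / bytes_per_sample)
        if rng.random() < p:
            estimate += size / p
    return estimate
```

Large packets, which dominate the byte count, are almost always sampled, so the estimate's variance depends far less on the traffic's packet-size mix than with uniform sampling.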
16.
In large networks, the sheer number of rules makes the bit vectors of the bit-vector (BV) algorithm long and sparse; implementing BV on a network processor therefore requires substantial memory, and the repeated memory reads also reduce matching efficiency. To address this bit-vector problem, the tuple-space partitioning idea is combined with the BV algorithm to shorten the bit vectors, and, by fully exploiting the parallel processing mechanisms and hardware-acceleration units of network processors, an improved algorithm suited to network processors, Tuple-BV, is proposed. Its tuple partitioning shortens the bit vectors, reducing both their storage footprint and the number of memory reads. Comparative experiments on packet-processing delay show that, with larger rule sets, Tuple-BV outperforms BV on both maximum and average delay.
17.
Multi-field packet classification is a network kernel function where packets are classified based on a set of predefined rules. Decomposition-based classification approaches are of major interest to the research community because of the parallel search in each packet header field. This paper presents four decomposition-based approaches on multi-core processors. We search in parallel for all the fields using linear search or range-tree search; we store the partial results in a linked list or a bit vector. The partial results are merged to produce the final packet header match. We evaluate the performance with respect to latency and throughput varying the rule set size (1–64 K). Experimental results show that our approaches can achieve 128 ns latency per packet and 11.5 Gbps throughput on state-of-the-art 16-core platforms.
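The bit-vector merge step described here can be sketched for a toy two-field rule set with 8-bit fields: each field is looked up independently, and the per-field rule bitmaps are intersected to find the highest-priority match. Field names and the precomputed-table layout are illustrative assumptions:

```python
def build_field_bitmaps(rules, field):
    """For one header field, map each possible value to a bitmap of the
    rules whose range for that field covers it (rule i -> bit i)."""
    bitmaps = {}
    for value in range(256):                 # toy 8-bit field
        bm = 0
        for i, rule in enumerate(rules):
            lo, hi = rule[field]
            if lo <= value <= hi:
                bm |= 1 << i
        bitmaps[value] = bm
    return bitmaps

def classify(bitmap_tables, packet):
    """Decomposition step: look up each field independently, then merge
    the partial results by intersecting (AND-ing) the bit vectors."""
    result = ~0
    for field, table in bitmap_tables.items():
        result &= table[packet[field]]
    if result == 0:
        return None                          # no rule matches all fields
    # Lowest set bit = lowest rule index = highest-priority match.
    return (result & -result).bit_length() - 1
```

In the paper's setting the per-field search would run in parallel across cores; this sequential sketch only shows the merge logic.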
18.
This paper presents a systematic technique for expressing a string search algorithm as a regular iterative expression to explore all possible processor arrays for deep packet classification. The computation domain of the algorithm is obtained and three affine scheduling functions are presented. The technique allows some of the algorithm variables to be pipelined while others are broadcast over system-wide buses. Nine possible processor array structures are obtained and analyzed in terms of speed, area, power, and I/O timing requirements. Time complexities are derived analytically and through extensive numerical simulations. The proposed designs exhibit optimum speed and area complexities. The processor arrays are compared with previously derived processor arrays for the string matching problem.
19.
瞿中. Computer Engineering and Design (计算机工程与设计), 2006, 27(9): 1554-1556
As the Internet keeps growing in scale and application technology keeps advancing, more and more services require real-time, fast classification of packets. System-on-a-programmable-chip (SOPC) design is a new and vibrant research direction in embedded-system design. Building on the characteristics and development trends of programmable logic devices, this paper discusses the concept of intellectual-property (IP) core reuse and the hardware/software design techniques of SOPCs based on embedded processor cores and Xilinx FPGAs, introduces Internet reconfigurable logic (IRL), and proposes a design and implementation method.
20.
Expert Systems with Applications, 2007, 32(2): 527-533
Texture can be defined as a local statistical pattern of texture primitives in the observer's domain of interest. Texture classification aims to assign texture labels to unknown textures according to training samples and classification rules. This paper describes the use of wavelet packet neural networks (WPNN) for the texture-classification problem. The proposed scheme is composed of a wavelet packet feature extractor and a multi-layer perceptron classifier; entropy and energy features are integrated into the wavelet feature extractor. The experimental studies performed show the effectiveness of the WPNN structure, with an overall success rate of about 95%.