Similar Articles
18 similar articles found (search time: 140 ms)
1.
The forwarding-control separation, centralized control, and open interfaces of software-defined networking (SDN) make networks flexible and controllable, and the SDN architecture has developed substantially. Thanks to its good fit with various cloud services, SDN has seen large-scale commercial deployment in recent years. In OpenFlow-based SDN architectures, to achieve fast flow-entry lookup and wildcard (masked) matching, commercially deployed hardware switches mostly use ternary content addressable memory (TCAM) to store the flow entries issued by the controller. Constrained by TCAM capacity and cost, however, current commercial OpenFlow switches can store at most tens of thousands of flow entries, leaving them vulnerable to flow-table overflow caused by traffic bursts and flow-table attacks, which seriously degrades network performance. How to build effective flow-table overflow mitigation mechanisms has therefore attracted wide research attention. This paper first analyzes the causes and effects of flow-table overflow in OpenFlow switches, then surveys and compares existing mitigation techniques in two categories, traffic bursts and attack behavior, summarizes the problems and limitations of current work, and discusses future directions and challenges.
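As a minimal illustration of the overflow problem this survey targets, the sketch below models a capacity-limited flow table with LRU eviction; the match keys, capacity, and eviction policy are hypothetical and not taken from any surveyed scheme:

```python
from collections import OrderedDict

class FlowTable:
    """Toy capacity-limited flow table with LRU eviction (illustrative only)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # match_key -> action

    def lookup(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)  # refresh recency on hit
            return self.entries[key]
        return None  # table miss -> would trigger a packet-in to the controller

    def install(self, key, action):
        if key in self.entries:
            self.entries.move_to_end(key)
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # overflow: evict least recently used
        self.entries[key] = action

table = FlowTable(capacity=2)
table.install("10.0.0.1->10.0.0.2", "output:1")
table.install("10.0.0.1->10.0.0.3", "output:2")
table.lookup("10.0.0.1->10.0.0.2")               # hit refreshes this entry
table.install("10.0.0.4->10.0.0.5", "output:3")  # table full: evicts the LRU entry
print(table.lookup("10.0.0.1->10.0.0.3"))        # None: evicted
print(table.lookup("10.0.0.1->10.0.0.2"))        # output:1
```

A burst of new flows (or an attack injecting many spurious flows) drives the eviction path constantly, which is exactly the performance problem the surveyed mitigation techniques address.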

2.
In software-defined networking (SDN), OpenFlow switches typically store flow tables in ternary content addressable memory (TCAM) to support fast wildcard lookup. However, TCAM performs parallel search and thus consumes considerable lookup energy, so an appropriate TCAM capacity must be chosen to balance packet forwarding delay against energy consumption. For the typical scenario of the software-defined data center network (SD-DCN), this paper models the packet processing of an OpenFlow switch with a multi-priority M/G/1 queueing model and derives an OpenFlow packet forwarding delay model. Based on the distribution of network flows, it further builds a TCAM flow-table hit-rate model to express forwarding delay as a function of TCAM capacity. On this basis, combining TCAM lookup energy, it formulates a joint delay-energy optimization model for OpenFlow packet forwarding and designs an optimization algorithm to solve for the optimal TCAM capacity. Experimental results show that the proposed delay model characterizes OpenFlow forwarding delay more accurately than existing models. The optimization algorithm is also used to solve for the optimal TCAM capacity under different parameter settings, providing guidance for SD-DC...
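The hit-rate-versus-capacity trade-off can be sketched under the common assumption that flow popularity follows a Zipf law; the function names, parameters, and the simple hit/miss delay formula below are illustrative stand-ins, not the paper's actual models:

```python
def zipf_hit_rate(num_flows, tcam_capacity, alpha=1.0):
    """Hit rate if the TCAM holds the `tcam_capacity` most popular of
    `num_flows` flows whose popularity follows a Zipf(alpha) law.
    (Illustrative model; the paper's hit-rate model may differ.)"""
    weights = [1.0 / (rank ** alpha) for rank in range(1, num_flows + 1)]
    return sum(weights[:tcam_capacity]) / sum(weights)

def mean_lookup_delay(hit_rate, t_tcam, t_miss):
    """Expected per-packet lookup delay: fast TCAM hit vs. slow miss path."""
    return hit_rate * t_tcam + (1.0 - hit_rate) * t_miss

h = zipf_hit_rate(num_flows=10000, tcam_capacity=1000, alpha=1.0)
print(round(h, 3))  # a small TCAM already captures most of the traffic
print(mean_lookup_delay(h, t_tcam=0.1, t_miss=10.0))
```

Sweeping `tcam_capacity` in such a model (and adding an energy term that grows with capacity) is the shape of the joint optimization the paper formulates.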

3.
傅明  何洋  熊兵 《计算机工程》2019,45(5):52-58
OpenFlow supports wildcard lookup, which creates a serious flow-table lookup performance bottleneck. Exploiting the locality of network traffic, this paper proposes a lookup method for OpenFlow virtual flow tables: by caching the connections that appear frequently in the recent packet stream together with their corresponding masks, the mask for most packets can be located directly before the flow table is searched, avoiding probing the mask array one entry at a time. Theoretical analysis and experimental results show that, compared with OFT-OVS, the flow-table lookup method used in current mainstream virtual switches, the proposed method has a shorter average search length and effectively improves the forwarding performance of OpenFlow virtual switches.
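The mask-caching idea can be sketched as follows; the data structures and keys are hypothetical simplifications of a virtual switch's classifier, not the paper's implementation:

```python
# Masked flow lookup with a per-connection mask cache (illustrative sketch).

MASKS = [0xFFFFFF00, 0xFFFF0000, 0xFF000000]  # mask array, probed in order
FLOW_TABLE = {                                # (mask, ip & mask) -> action
    (0xFFFF0000, 0x0A000000): "output:1",
    (0xFF000000, 0xC0000000): "output:2",
}
mask_cache = {}  # recent connection key -> mask that matched it

def lookup(conn_key, dst_ip):
    """Return (action, number_of_probes)."""
    cached = mask_cache.get(conn_key)
    if cached is not None:                    # fast path: a single probe
        action = FLOW_TABLE.get((cached, dst_ip & cached))
        if action is not None:
            return action, 1
    probes = 0
    for mask in MASKS:                        # slow path: probe each mask
        probes += 1
        action = FLOW_TABLE.get((mask, dst_ip & mask))
        if action is not None:
            mask_cache[conn_key] = mask       # remember for later packets
            return action, probes
    return None, probes

first = lookup("flowA", 0x0A000001)   # cache miss: probes masks in order
second = lookup("flowA", 0x0A000001)  # later packet of the same flow: 1 probe
print(first, second)
```

Under traffic locality most packets take the one-probe fast path, which is the source of the shorter average search length.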

4.
陈志鹏  徐明伟  杨芫 《计算机学报》2021,44(7):1341-1362
Software-defined networking (SDN) decouples the control plane from the data plane of traditional networks and manages the network flexibly through controllers in the control plane; the most widely used control protocol is OpenFlow. Ternary content addressable memory (TCAM) offers fast lookup and ternary (masked) storage, and is widely used in SDN. However, TCAM is expensive and power-hungry, and suffers from range expansion when storing rules whose match fields contain ranges. Consequently, the number of forwarding rules a switch can store, especially OpenFlow rules with their many match-field types, is very limited, which has become a bottleneck constraining large-scale SDN deployment. Research groups have proposed TCAM storage optimization schemes for SDN switch forwarding rules from different angles. This paper surveys related work from four perspectives: forwarding-rule storage architecture optimization, local switch rule compression, global dynamic rule optimization, and controller-assisted rule management, and proposes a comprehensive optimization scheme for forwarding-rule storage suited to future SDN networks.
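The range-expansion problem mentioned above can be illustrated with the textbook range-to-prefix conversion; this is the generic algorithm, not any specific optimization from the surveyed work:

```python
def range_to_prefixes(lo, hi, width=16):
    """Convert a numeric range [lo, hi] into the set of aligned TCAM prefixes
    (value, prefix-length) covering it -- the classic source of TCAM 'range
    expansion'. (Textbook greedy algorithm, illustrative only.)"""
    prefixes = []
    while lo <= hi:
        # largest power-of-two aligned block starting at lo that fits in range
        size = lo & -lo if lo > 0 else 1 << width
        while size > hi - lo + 1:
            size >>= 1
        prefixes.append((lo, width - size.bit_length() + 1))
        lo += size
    return prefixes

# A small port range already expands into several TCAM entries:
result = range_to_prefixes(1, 14)
print(result)  # port range 1-14 needs 6 prefixes, i.e. 6 TCAM entries
```

This multiplicative blow-up for every range field is why rules with range match fields consume TCAM so quickly.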

5.
As the OpenFlow protocol has evolved through successive versions, its fine-grained control over the data plane has improved greatly. However, because the arbitrary wildcards supported in match fields rely on TCAM for matching, the growing number of match-field types puts heavy pressure on the TCAM capacity of devices. This paper proposes FICO (a Function-Integral TCAM-saving Compression model for the OpenFlow flow table), a mathematical model that reduces the TCAM space occupied by flow tables. FICO first classifies the redundancy between match fields into three kinds according to the relationships among fields, then applies three pre-compression algorithms, inter-field merging, field mapping, and intra-field compression, and finally combines the results into narrower entries that are sent to TCAM for flow matching. Simulations show that, while preserving full OpenFlow functionality, FICO saves 60% of TCAM space compared with the uncompressed flow table, and its compression performance remains stable as the flow table grows.
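The field-mapping direction can be sketched as re-encoding a sparse match field with fewer bits; the field choice and widths below are hypothetical, and FICO's actual algorithms differ in detail:

```python
import math

def build_mapping(values):
    """Map each distinct field value to a compact code of ceil(log2(n)) bits.
    (Illustrative 'field mapping' sketch, not FICO's exact procedure.)"""
    distinct = sorted(set(values))
    code_bits = max(1, math.ceil(math.log2(len(distinct))))
    return {v: i for i, v in enumerate(distinct)}, code_bits

# A 16-bit EtherType field that only ever matches 3 values in a given table:
ethertypes = [0x0800, 0x86DD, 0x8100]
mapping, bits = build_mapping(ethertypes)
print(bits)             # 2 bits instead of 16 in every TCAM entry
print(mapping[0x0800])  # compact code looked up before the TCAM stage
```

The re-encoding must be applied to packets as well (a small exact-match pre-stage), which is the "function-integral" cost of narrowing the TCAM entry.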

6.
Flow-based packet processing is a core function of network security applications such as firewalls and intrusion detection systems, and the flow table is the key data structure of flow processing: its size and access performance directly determine processing capability and speed. Targeting the hardware implementation of very large flow tables in high-speed networks, this paper designs a hardware hash-based lookup architecture for flow tables with tens of millions of entries and implements and tests it on an FPGA platform. The scheme resolves hash collisions while maintaining memory access efficiency, supports up to 49 million flow entries with limited memory resources, achieves a measured lookup rate of 92 Mdesc/s, and sustains processing for roughly 220 Gbps of high-speed Ethernet traffic.
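A fixed-slot bucket layout with a small overflow stash is one common way hardware hash tables resolve collisions; the sketch below is illustrative and does not reproduce the paper's FPGA design (slot counts and sizes are made up):

```python
SLOTS_PER_BUCKET = 4  # hypothetical: hardware reads one bucket per access

class HashFlowTable:
    """Toy bucketed hash flow table with an overflow stash for collisions."""
    def __init__(self, num_buckets):
        self.buckets = [[] for _ in range(num_buckets)]
        self.overflow = {}  # stash for entries that overflow a full bucket

    def _index(self, key):
        return hash(key) % len(self.buckets)

    def insert(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # update in place
                return
        if len(bucket) < SLOTS_PER_BUCKET:
            bucket.append((key, value))
        else:
            self.overflow[key] = value    # bucket full -> overflow stash

    def lookup(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return self.overflow.get(key)

t = HashFlowTable(num_buckets=2)
for i in range(10):                       # force collisions in just 2 buckets
    t.insert(("10.0.0.%d" % i, 80), "flow%d" % i)
print(t.lookup(("10.0.0.7", 80)))         # still found despite heavy collisions
```

In hardware the bucket read is a single wide memory access, so keeping the overflow stash small is what preserves the deterministic lookup rate.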

7.
Hash tables play an important role in network packet processing, especially stateful processing. With the rapid growth of network traffic, traditional software hash tables struggle to meet performance requirements; lookup is one of the key factors limiting hash-table performance, and improving lookup rate remains a hard problem. Studies show that current network traffic follows a Pareto distribution: a small number of high-volume flows, elephant flows, carry most of the traffic. Building on the software-hardware co-design model widely adopted in data centers, this paper proposes a large-scale cooperative hash-table architecture based on DPDK and FPGA. Traffic is divided into elephant flows and background flows, and the hash table is likewise split into a hardware table and a software table. A small hardware table in the FPGA offloads hash computation for all packets as well as hash lookup for elephant flows; a large software table built on DPDK uses the FPGA-computed hash values to accelerate background-flow lookup. The software holds all flow information, identifies elephant flows by sampling, and pushes their key-value pairs into the FPGA hardware table to accelerate lookups that would otherwise hit the large software table. Using a Xilinx U200 accelerator card and a commodity server as the hardware platform, the authors implement the cooperative hash table and, with a traffic generator producing traffic matching current network characteristics, validate its performance using DPDK exact-match forwarding as an example. The results show that when elephant-flow hash lookup is fully offloaded...
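The sampling-based elephant-flow identification step might look like the following sketch; the sampling rate, threshold, and flow identifiers are hypothetical, as the paper's exact sampling method is not described here:

```python
import random
from collections import Counter

def detect_elephants(packets, sample_rate=0.1, threshold=5, seed=42):
    """Sample packets and promote flows whose sampled count crosses a
    threshold. (Illustrative sketch; parameters are made up.)"""
    rng = random.Random(seed)   # fixed seed for a reproducible sketch
    counts = Counter()
    elephants = set()
    for flow_id in packets:
        if rng.random() < sample_rate:
            counts[flow_id] += 1
            if counts[flow_id] >= threshold:
                elephants.add(flow_id)  # would be pushed to the FPGA table
    return elephants

# One elephant flow (900 packets) amid 100 mice (1 packet each):
trace = ["elephant"] * 900 + ["mouse%d" % i for i in range(100)]
random.Random(0).shuffle(trace)
result = detect_elephants(trace)
print(result)
```

Mice are sampled at most once each, so only the heavy flow crosses the threshold; sampling keeps the detection cost far below per-packet counting.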

8.
Compared with traditional networking, SDN separates the network's control plane from its data plane and makes the network programmable. Most common SDN switches, represented by OpenFlow, POF, and P4, are implemented on the match-action table (MAT) model. Unlike protocol-dependent technologies such as OpenFlow, protocol-oblivious SDN describes protocol fields with {offset, length} structures, enabling parsing and processing of arbitrary protocol fields. However, packets may carry headers of different lengths, so the offset of a given protocol field differs across packets, and multiple flow tables with different match-field offsets are needed to parse a flow, complicating the flow tables and the pipeline. To address this, this paper proposes a flow-table merging scheme for programmable data planes based on the MAT model: the MAT action set is extended so that, when a packet queries a flow table, a dedicated action dynamically adjusts the packet's base offset, aligning the offset of the same protocol field across different packets and allowing flow tables with identical match fields to be merged. In a POF Switch experiment compatible with VLAN and QinQ, the scheme reduces flow-table memory consumption by about 69% at the cost of executing one extra action per table jump.
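The offset-normalization idea can be sketched as follows; the Ethernet and VLAN header lengths are the standard ones, but the function names and packet representation are made up for illustration:

```python
def base_offset(packet):
    """Byte offset of the network layer, skipping any 802.1Q / QinQ tags.
    (This stands in for the extended 'adjust base offset' action.)"""
    offset = 14                       # plain Ethernet header
    for _tag in packet.get("vlan_tags", []):
        offset += 4                   # each VLAN tag adds 4 bytes
    return offset

def match_field(packet, offset_in_l3, length):
    """Match on an {offset, length} field relative to the normalized base."""
    base = base_offset(packet)
    return packet["bytes"][base + offset_in_l3 : base + offset_in_l3 + length]

plain = {"vlan_tags": [], "bytes": bytes(14) + b"IPv4...."}
qinq = {"vlan_tags": ["outer", "inner"], "bytes": bytes(22) + b"IPv4...."}
# The same {offset, length} spec hits the same protocol bytes in both packets,
# so one merged table can serve untagged, VLAN, and QinQ traffic:
print(match_field(plain, 0, 4), match_field(qinq, 0, 4))
```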

9.
In the SDN architecture, network control functions are separated from network devices (switches/routers) and centralized in a controller, while switches handle only data-plane functions (forwarding packets according to flow tables). In large data center networks, route and flow-table computation and distribution are performed entirely by the central controller, which makes the controller a performance bottleneck and a point of fragility. To address this, this paper proposes a semi-centralized SDN routing technique: each switch autonomously builds a base flow table without controller involvement and can perform basic forwarding from it, while the controller is responsible only for higher-level route selection (failure handling), greatly reducing its load. For this advanced routing work, the paper analyzes the characteristics and limitations of existing failure recovery mechanisms in SDN networks and designs a local-detour failure detection and recovery mechanism on top of the base flow table. With this mechanism, the controller can detect network failures promptly and recover from them within a very short time.

10.
For protocol-oblivious forwarding (POF), a new-generation forwarding-control separation technology for SDN, this paper proposes an SDN network virtualization architecture with fully open protocol fields. The architecture introduces label-based network virtualization and flow-table allocation for POF physical switches: a virtualization hypervisor translates the messages exchanged between POF switches and POF controllers and encapsulates the data transmitted in the physical network with labels, distinguishing the traffic of different network slices and virtual links. Compared with existing hypervisors such as FlowVisor, OpenVirteX, and CoVisor, this hypervisor fully supports the POF protocol and, through label processing of data in the physical network, achieves network virtualization with all fields of the SDN forwarding plane open. Based on this architecture, the authors implement the POF network virtualization hypervisor system POFHyperVisor and verify its functionality and performance; testing shows a virtualization message-processing capacity loss of 17.1% to 29.9%.

11.
To address the growing scale and match width of multi-field flow tables in today's Internet, which place excessive pressure on hardware storage, this paper proposes a compression scheme based on bit extraction from independent rule subsets (BEIS). First, match fields are merged according to the logical relationships among them, reducing the number of fields and the flow-table width. Second, the merged rule set is partitioned into independent rule subsets, and distinguishing bits are extracted from each subset so that lookup can be completed using only a subset of the bits, further shrinking the required ternary content addressable memory (TCAM). Finally, a hardware lookup architecture implementing the scheme is presented. Simulation results show that for OpenFlow flow tables, at acceptable time complexity, the scheme uses 20% less storage than the match-field trimming (FT) scheme; for packet classification rule sets common in practice, such as access control lists and firewalls, it achieves compression ratios of 20% to 40%.

12.
Most of the high-performance routers available commercially these days equip each of their line cards (LCs) with a forwarding engine (FE) to perform table lookups locally. This work introduces and evaluates a technique for speedy packet lookups, called SPAL, in such routers. The BGP routing table under SPAL is fragmented into subsets which constitute forwarding tables for different FEs, so that the number of table entries in each FE drops as the router grows. This reduction in forwarding table size drastically lowers the amount of SRAM (e.g., L3 data cache) required in each LC to hold the trie constructed according to the prefix matching algorithm. SPAL calls for caching the lookup result of a given IP address at its home LC (denoted LC_ho, using the LR-cache), such that the result can quickly satisfy lookup requests for the same address not only from LC_ho but also from other LCs. Our trace-driven simulation reveals that SPAL improves mean lookup performance by a factor of at least 2.5 (or 4.3) for a router with three (or 16) LCs, if the LR-cache contains 4K blocks. SPAL achieves this significant improvement while greatly lowering the SRAM requirement (i.e., the L3 data cache plus the LR-cache combined) in each LC, and possibly shortening the worst-case lookup time (thanks to fewer memory accesses during longest-prefix matching) compared with a current router that does not partition the routing table. It promises good scalability with respect to routing table growth and exhibits a small mean lookup time per packet. With its ability to speed up packet lookups while substantially lowering overall SRAM, SPAL is ideally applicable to the new generation of scalable high-performance routers.
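The per-FE trie lookup that SPAL builds on can be sketched with a plain binary trie; SPAL's actual trie layout, table fragmentation, and LR-cache are not reproduced here, and the prefixes and next hops are made up:

```python
class TrieNode:
    """Node of a binary prefix trie for longest-prefix matching."""
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = [None, None]
        self.next_hop = None

def insert(root, prefix_bits, next_hop):
    node = root
    for b in prefix_bits:
        if node.children[b] is None:
            node.children[b] = TrieNode()
        node = node.children[b]
    node.next_hop = next_hop

def longest_prefix_match(root, addr_bits):
    node, best = root, None
    for b in addr_bits:
        if node.next_hop is not None:
            best = node.next_hop          # remember the longest match so far
        node = node.children[b]
        if node is None:
            return best
    return node.next_hop if node.next_hop is not None else best

root = TrieNode()
insert(root, [1, 0], "LC1")        # prefix 10/2
insert(root, [1, 0, 1, 1], "LC2")  # prefix 1011/4 -- more specific
print(longest_prefix_match(root, [1, 0, 1, 1, 0, 0, 1, 0]))  # LC2
```

Fragmenting the table across FEs shrinks each trie, which is what lets it fit in the LC's SRAM.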

13.
In this paper, a reconfigurable memory architecture and lookup technique for IP packet forwarding engines is presented. This is achieved by partitioning the forwarding table into smaller partial lookup tables, one per output port, and allowing a forwarding engine to process them in parallel. This effectively reduces the complexity of the 'longest prefix match' problem to a 'first prefix match' problem. Our method is a flexible technique that significantly improves the scalability of next-generation network processors and other packet-processing devices. Such scalability facilitates migration to IPv6 and benefits network equipment, especially in terms of growing routing table size, traffic, frequency of route updates, and bandwidth requirements.
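The reduction from a global longest-prefix search to per-port partial tables searched in parallel can be sketched as follows; the tables and string-based prefix notation are hypothetical simplifications:

```python
# Each output port owns a partial table; hardware probes all of them at once.
# (Illustrative only; prefixes are written as dotted-decimal string prefixes.)
PORT_TABLES = {
    1: {"10.0."},
    2: {"10.0.1.", "192."},
}

def first_prefix_match(dst):
    """Probe every port's partial table 'in parallel'; the port holding the
    longest matching prefix wins."""
    best_port, best_len = None, -1
    for port, prefixes in PORT_TABLES.items():   # parallel in hardware
        hit = max((len(p) for p in prefixes if dst.startswith(p)), default=-1)
        if hit > best_len and hit >= 0:
            best_port, best_len = port, hit
    return best_port

print(first_prefix_match("10.0.1.5"))   # 2: port 2 holds the longer prefix
print(first_prefix_match("10.0.2.5"))   # 1: only port 1's prefix matches
```

Within each partial table a single match suffices, so the per-table search is simple; only the final cross-port selection compares lengths.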

14.
As an implementation of software-defined networking, OpenFlow allows a programmable controller to externally control the packet-forwarding behavior of high-speed switches. This paper proposes a software-router acceleration scheme that uses a software router (Quagga) as the routing engine: based on the state of its routing table, it sends control messages to the controller, which in turn drives an OpenFlow-capable high-speed switch to act as the router's forwarding plane.

15.
In next-generation networks, packet classification is important in fulfilling the requirements of multimedia services, including VoIP and VoD. Using pre-defined filters, incoming packets can be categorized to determine the forwarding class to which each packet belongs. Packet classification is essentially a problem of multidimensional range matching. Tuple space search is a well-known solution based on multiple hash accesses for the various filter length combinations. The tuple-based rectangle search algorithm is highly scalable with respect to the number of filters; however, it suffers from memory explosion, and its lookup performance is not fast enough for high-speed packet classification. This work proposes an improved scheme that reduces the required storage and achieves OC-192 wire-speed forwarding. The scheme consists of two parts. The "Tuple Reduction Algorithm" drastically reduces the number of tuples by duplicating filters; dynamic programming is used to optimize the reduction, and two heuristic approaches are introduced to simplify the optimization process. The "Look-ahead Caching" scheme then improves lookup performance: the basic idea is to prevent unnecessary tuple probing by filtering out incoming packets known not to match. Experimental results show that combining the tuple reduction algorithm with look-ahead caching increases lookup speed by a factor of six while requiring only about one third of the storage. An extension from multiple fields to more general filters is also addressed.
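Basic tuple space search, the starting point of this work, can be sketched as one hash probe per prefix-length combination; the filters and length combinations below are made up for illustration:

```python
def mask(ip, plen):
    """Keep the top `plen` bits of a 32-bit address."""
    return ip >> (32 - plen) if plen else 0

# tuple (src_plen, dst_plen) -> hash table of (src_key, dst_key) -> filter id
TUPLES = {
    (8, 24): {(mask(0x0A000000, 8), mask(0xC0A80100, 24)): "f1"},
    (16, 0): {(mask(0x0A0B0000, 16), 0): "f2"},
}

def classify(src, dst):
    """Probe each tuple's hash table once; return (matches, probe count)."""
    probes = 0
    matches = []
    for (sp, dp), table in TUPLES.items():
        probes += 1
        hit = table.get((mask(src, sp), mask(dst, dp)))
        if hit is not None:
            matches.append(hit)
    return matches, probes

result, probes = classify(0x0A0B0C0D, 0xC0A80142)
print(result, probes)  # both tuples probed; both filters match this packet
```

The paper's tuple reduction shrinks the number of entries in `TUPLES` (fewer probes) and look-ahead caching skips probes that cannot match.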

16.
In today’s networks, load balancing and priority queues in switches are used to support various quality-of-service (QoS) features and provide preferential treatment to certain types of traffic. Traditionally, network operators use ‘traceroute’ and ‘ping’ to troubleshoot load balancing and QoS problems. However, these tools are not supported by the common OpenFlow-based switches in software-defined networking (SDN). In addition, traceroute and ping have potential problems. Because load balancing mechanisms balance flows to different paths, it is impossible for these tools to send a single type of probe packet to find the forwarding paths of flows and measure latencies. Therefore, tracing flows’ real forwarding paths is needed before measuring their latencies, and path tracing and latency measurement should be jointly considered. To this end, FlowTrace is proposed to find arbitrary flow paths and measure flow latencies in OpenFlow networks. FlowTrace collects all flow entries and calculates flow paths according to the collected flow entries. However, polling flow entries from switches will induce high overhead in the control plane of SDN. Therefore, a passive flow table collecting method with zero control plane overhead is proposed to address this problem. After finding flows’ real forwarding paths, FlowTrace uses a new measurement method to measure the latencies of different flows. Results of experiments conducted in Mininet indicate that FlowTrace can correctly find flow paths and accurately measure the latencies of flows in different priority classes.

17.
Routing table lookup is an important operation in packet forwarding, with a significant influence on the overall performance of network processors. Routing tables are usually stored in main memory, which has a large access time, so small fast cache memories are used to improve access time. In this paper, we propose a novel routing table compaction scheme, in three versions, that reduces the number of entries in the routing table. The scheme takes advantage of ternary content addressable memory (TCAM) features: two or more routing entries are compacted into one using don't-care elements in TCAM. A small compacted routing table increases the cache hit rate, which in turn provides fast address lookups. We have evaluated this compaction scheme through extensive simulations involving IPv4 and IPv6 routing tables and routing traces. The original routing tables are compacted by over 60% of their original sizes, and the average cache hit rate improves by up to 15% over the original tables. We also analyze port errors caused by caching and develop a new sampling technique to alleviate this problem. The simulations show that sampling is an effective port error-control scheme that does not degrade cache performance.
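The core compaction step, merging two entries that differ in exactly one bit and share an output port into one entry with a don't-care at that bit, can be sketched as follows (the entry format is simplified for illustration):

```python
def try_merge(a, b):
    """a, b: (pattern over '0'/'1'/'*', port). Return the merged TCAM entry,
    or None if the pair cannot be compacted. (Illustrative sketch.)"""
    (pa, port_a), (pb, port_b) = a, b
    if port_a != port_b or len(pa) != len(pb):
        return None                      # must share the output port
    diff = [i for i, (x, y) in enumerate(zip(pa, pb)) if x != y]
    if len(diff) != 1:
        return None                      # must differ in exactly one position
    i = diff[0]
    if pa[i] == "*" or pb[i] == "*":
        return None                      # the differing position must be 0 vs 1
    return (pa[:i] + "*" + pa[i + 1:], port_a)

merged = try_merge(("1100", 3), ("1101", 3))
print(merged)  # ('110*', 3): one TCAM entry replaces two
```

Applying this pairwise merge repeatedly over the table is what yields the reported size reductions.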

18.
Because network security devices have special requirements for packet forwarding, they need a more efficient and stable forwarding mechanism than ordinary devices. Based on a study of IP packet forwarding mechanisms, this paper proposes a forwarding mechanism built on IFPLUT and TCAM to meet those requirements: the lookup table is partitioned by output port into several small lookup tables that the lookup engine processes in parallel, effectively reducing the complex "longest prefix match" problem to a "first prefix match" problem.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号