Similar Documents
20 similar documents retrieved (search time: 46 ms)
1.
Named Data Networking (NDN) is a candidate next-generation Internet architecture designed to overcome the fundamental limitations of the current IP-based Internet, in particular by providing strong security. Ubiquitous in-network caching is a key NDN feature. However, pervasive caching exacerbates security problems, namely cache pollution attacks, which include cache poisoning (i.e., introducing malicious content into caches as false-locality) and cache pollution (i.e., ruining cache locality with new unpopular content as locality-disruption). In this paper, a new cache replacement method based on the Adaptive Neuro-Fuzzy Inference System (ANFIS) is presented to mitigate cache pollution attacks in NDN. The ANFIS structure is built using input data related to the inherent characteristics of the cached content and an output related to the content type (i.e., healthy, locality-disruption, and false-locality). The proposed method detects false-locality attacks, locality-disruption attacks, and combinations of the two on different topologies with high accuracy, and mitigates them efficiently with little additional computational cost compared to the most common policies.
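The abstract names only the ANFIS and the three content labels; purely to illustrate the classification step it describes, the sketch below substitutes a scikit-learn decision tree for the ANFIS. The features, training values, and labels are invented for illustration and are not from the paper.

```python
# Not the paper's ANFIS: a hedged stand-in that illustrates classifying cached
# content into healthy / locality-disruption / false-locality from simple
# per-content features. Feature choices and training data are assumptions.

from sklearn.tree import DecisionTreeClassifier

# assumed features per cached item: [request_rate, inter_request_variance, hit_ratio]
X_train = [[0.9, 0.1, 0.80],   # healthy, genuinely popular content
           [0.7, 0.9, 0.10],   # locality-disruption: many one-off unpopular requests
           [0.8, 0.2, 0.05]]   # false-locality: repeatedly requested bogus content
y_train = ['healthy', 'locality-disruption', 'false-locality']

clf = DecisionTreeClassifier().fit(X_train, y_train)

def classify_cached_item(features):
    """Label a cached item so the replacement policy can evict attack content first."""
    return clf.predict([features])[0]

print(classify_cached_item([0.85, 0.15, 0.75]))   # expected: 'healthy'
```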

2.
In information-centric networking, in-network caching has the potential to improve network efficiency and content distribution performance by satisfying user requests with cached content rather than downloading the requested content from remote sources. In this respect, users who request, download, and keep content may be able to contribute to in-network caching by sharing their downloaded content with other users in the same network domain (i.e., user-assisted in-network caching). In this paper, we examine various aspects of user-assisted in-network caching in the hope of efficiently utilizing user resources for in-network caching. Through simulations, we first show that user-assisted in-network caching has attractive features, such as self-scalable caching, a near-optimal cache hit ratio (the ratio achievable when content is fully cached in the network) with stable caching, and performance improvements over in-network caching. We then examine caching strategies for user-assisted in-network caching: three strategies based on a centralized server that maintains all content availability information and informs each user of what to cache, and three strategies based on each user's own content availability information. We show that the caching strategy affects the distribution of upload overhead across users and the number of cache hits for each segment. One interesting observation is that, even with a small storage space (i.e., 0.1% of the content size per user), the centralized and distributed approaches improve the cache hit ratio by 50% and 45%, respectively. With an overall view of caching information, the centralized approach can achieve a higher cache hit ratio than the distributed approach. Based on this observation, we discuss a distributed approach with a larger view of caching information than the basic distributed approach and, through simulations, confirm that a larger view leads to a higher cache hit ratio. Another interesting observation is that the random distributed strategy yields performance comparable to more complex strategies.
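As an illustration of the random distributed strategy mentioned at the end of the abstract, here is a toy sketch in which each user caches a few randomly chosen segments within a small budget; the segment count and per-user budget are assumptions.

```python
# Toy sketch of a random distributed user-assisted caching strategy: each user
# independently keeps a few random segments. Constants are illustrative only.

import random

NUM_SEGMENTS = 1000          # segments making up the content catalog (assumed)
USER_BUDGET = 1              # segments each user can store (~0.1% in the text)

def user_cache(rng=random):
    """Segments a single user chooses to keep for others in the same domain."""
    return set(rng.sample(range(NUM_SEGMENTS), USER_BUDGET))

# With many users, independent random choices collectively cover most segments.
users = [user_cache() for _ in range(2000)]
covered = set().union(*users)
print(len(covered) / NUM_SEGMENTS)    # fraction of the catalog available from users
```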

3.
Router nodes in Named Data Networking (NDN) have caching capability, which greatly improves the efficiency of data delivery and retrieval in the network. However, since router cache capacity is limited, designing an effective caching strategy remains a pressing task. To address this problem, a dynamic popularity-based cache decision and replacement strategy (DPDR) is proposed. DPDR jointly considers content popularity and cache capacity, using an additive-increase/multiplicative-decrease (AIMD) algorithm to dynamically adjust the popularity threshold and admitting into the cache only content whose popularity exceeds the threshold. A cache replacement algorithm is also proposed that jointly considers the popularity of cached content and the time it was last accessed, evicting the content with the smallest replacement value. Extensive simulation results show that, compared with other algorithms, the proposed algorithm effectively improves the cache hit ratio, shortens the average hit distance, and increases network throughput.
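A minimal sketch of an AIMD-adjusted popularity threshold for cache admission, in the spirit of DPDR; the trigger conditions, constants, and occupancy thresholds are assumptions rather than the paper's exact rules.

```python
# Hedged sketch: a popularity threshold controlled AIMD-style. Content is
# admitted only if its request count exceeds the current threshold.

ALPHA = 1.0   # additive increase step for the threshold (assumed)
BETA = 0.5    # multiplicative decrease factor (assumed)

class AIMDThreshold:
    def __init__(self, initial=1.0):
        self.threshold = initial
        self.requests = {}            # content name -> request count (popularity proxy)

    def should_cache(self, name, cache_occupancy):
        """Admit `name` only if its popularity exceeds the current threshold."""
        self.requests[name] = self.requests.get(name, 0) + 1
        popular_enough = self.requests[name] >= self.threshold
        # AIMD adjustment: raise the bar additively under cache pressure,
        # lower it multiplicatively when the cache has spare room.
        if cache_occupancy > 0.9:
            self.threshold += ALPHA
        elif cache_occupancy < 0.5:
            self.threshold = max(1.0, self.threshold * BETA)
        return popular_enough
```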

4.
In the Content-Centric Networking (CCN) architecture, popular content can be cached in some intermediate network devices while being delivered, and subsequent requests for the cached content can be handled efficiently by the caches. Thus, the design of in-network caching is important for reducing both the traffic load and the delivery delay. In this paper, we propose a caching framework of Prefix-based Popularity Prediction (PPP) for efficient caching in CCN. PPP assigns a lifetime (in a cache) to the prefix of each cached object's name based on its access history (or popularity), which is represented as a Prefix-Tree (PT). We demonstrate PPP's ability to predict content popularity in CCN using both traces and simulations. The evaluation results show that PPP achieves more cache hits and a lower traffic load than traditional caching algorithms (i.e., LRU and LFU). Moreover, its performance gain grows as user mobility increases.
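The sketch below illustrates the prefix-tree idea: per-prefix access counts drive the cache lifetime of names sharing that prefix. The lifetime formula (BASE_TTL times the prefix count) is an assumed placeholder, not PPP's actual rule.

```python
# Illustrative prefix tree tracking per-prefix access counts and mapping them
# to a cache lifetime. The lifetime formula is an assumption.

BASE_TTL = 10.0  # seconds of lifetime per recorded access (assumed)

class PrefixTree:
    def __init__(self):
        self.children = {}
        self.count = 0

    def record(self, name):
        """Record an access for every prefix of a '/'-separated name."""
        node = self
        for component in name.strip('/').split('/'):
            node = node.children.setdefault(component, PrefixTree())
            node.count += 1

    def lifetime(self, name):
        """Cache lifetime for a name, driven by its longest known prefix."""
        node, count = self, 0
        for component in name.strip('/').split('/'):
            if component not in node.children:
                break
            node = node.children[component]
            count = node.count
        return BASE_TTL * count

pt = PrefixTree()
pt.record('/video/channel1/clip42')
pt.record('/video/channel1/clip43')
print(pt.lifetime('/video/channel1/clip99'))   # inherits the popular prefix's history
```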

5.
To address the problem of how to efficiently replace data in Named Data Networking (NDN) caches, a data replacement strategy is proposed that jointly considers data popularity and data retrieval cost. The strategy dynamically assigns the weights of the popularity factor and the retrieval-cost factor according to the inter-request time of the data, so that nodes cache data with high popularity and high retrieval cost. When a user requests the data again, it can be served from the local node, reducing response time and link congestion. Simulation results show that the strategy effectively improves the in-network cache hit ratio, reduces the time users need to obtain data, and shortens the distance over which data are fetched.
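A hedged sketch of a replacement value that mixes popularity and retrieval cost, with the mixing weight driven by the inter-request interval as the abstract describes; the exponential weighting function and the parameter tau are assumptions.

```python
# Replacement value mixing popularity and retrieval cost; the entry with the
# lowest value is evicted. Weighting function and tau are assumptions.

import math

def replacement_value(request_count, last_interval, retrieval_cost, tau=60.0):
    """Higher value = more worth keeping.

    request_count  : number of requests seen for the content (popularity proxy)
    last_interval  : seconds between the two most recent requests
    retrieval_cost : e.g. hop count or RTT back to the original producer
    tau            : assumed time scale steering the popularity/cost trade-off
    """
    # Short inter-request intervals weight popularity more heavily;
    # long intervals shift the weight toward retrieval cost.
    w = math.exp(-last_interval / tau)
    return w * request_count + (1.0 - w) * retrieval_cost

def choose_victim(cache_entries):
    """cache_entries: dict name -> (count, interval, cost); returns the name to evict."""
    return min(cache_entries, key=lambda n: replacement_value(*cache_entries[n]))
```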

6.
To counter cache pollution attacks in Content-Centric Networking (CCN), a defense mechanism based on diversified storage is proposed. Different types of traffic are cached differently to reduce the network's attack surface: traffic is divided into three classes with distinct caching strategies; private and real-time traffic is not cached; streaming-media traffic is probabilistically pushed to caches at the network edge; other file-type content is pushed progressively from upstream nodes toward the edge. Different defenses against cache pollution are deployed at different nodes: edge nodes detect attacks through changes in the arrival probability of content requests, while upstream nodes use filtering rules to exclude content with low request probability from the cache. Simulation results show that, compared with the defense achieved under CCN's conventional caching strategy, the mechanism improves the average network cache hit ratio by 17.3% and effectively strengthens the network's resistance to cache pollution attacks.
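The class-differentiated caching decision could look roughly like the following sketch; the class names, the streaming push probability, and the way progressive upstream-to-edge caching is modeled are assumptions for illustration.

```python
# Illustrative per-class caching decision at one node. All class names and
# thresholds are assumptions, not the paper's parameters.

import random

STREAMING_EDGE_PROB = 0.3   # assumed probability of caching streaming content at the edge

def should_cache(content_class, is_edge_node, hops_from_edge):
    """Decide whether this node caches the passing content, by traffic class."""
    if content_class in ('private', 'realtime'):
        return False                                    # never cached
    if content_class == 'streaming':
        # probabilistically pushed to caches at the network edge only
        return is_edge_node and random.random() < STREAMING_EDGE_PROB
    # other file-type content: pushed progressively from upstream toward the
    # edge; modeled crudely here as "cache once the content is near the edge"
    return hops_from_edge <= 1
```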

7.
In-network caching is the key technique by which Named Data Networking achieves efficient information retrieval and effectively reduces traffic on the Internet backbone. It adds caching as a universal capability to every network node: when a user requests information, any network node (e.g., a router) that has cached the content can return it directly upon receiving the request, improving response efficiency. However, because NDN adopts ubiquitous caching, the delivery path from the content publisher to the user...

8.
Content-Centric Networking (CCN) is a form of information-centric networking and one of the most promising future Internet architectures; it has become a research hotspot for the next-generation Internet. New CCN features such as content-based routing, in-network caching, and receiver-driven transport improve content distribution efficiency but also introduce new security challenges. Based on an analysis of how CCN works, this paper surveys CCN's security threats, security requirements, and existing solutions, and outlines directions for CCN security research. First, it describes CCN's principles and workflow in detail, contrasts CCN with TCP/IP networks, and analyzes the security threats CCN faces and the resulting requirements. Second, it reviews, analyzes, and summarizes the state of research on privacy protection, flooding attacks, cache pollution, and congestion control in CCN, discusses the strengths and weaknesses of existing schemes, and considers possible solutions. Finally, it discusses the challenges facing CCN security technology and looks ahead to future research directions and trends. By summarizing and analyzing existing work, the paper identifies potential research directions and key problems in CCN security, providing a useful reference for follow-up research.

9.
Exploiting the respective strengths of deterministic and probabilistic caching in Named Data Networking (NDN), a hybrid NDN caching strategy (HDP) combining the two is proposed. Based on the idea of dividing the network into regions, a popularity-based deterministic caching strategy is used at the network edge, while a probabilistic strategy based on caching gain and content popularity is used in the network core, combining the advantages of both to further improve NDN caching performance. Experiments show that, compared with existing NDN caching methods, the strategy effectively improves the cache service rate and hit ratio, helps reduce content access latency, and improves user experience.
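A minimal sketch of the hybrid edge/core decision in the spirit of HDP: deterministic popularity-based caching at the edge, probabilistic caching driven by caching gain and popularity in the core. The threshold and the probability formula are assumptions.

```python
# Hybrid caching decision sketch. POP_THRESHOLD and the probability formula
# are assumptions, not HDP's exact parameters.

import random

POP_THRESHOLD = 5      # assumed popularity threshold for deterministic edge caching

def hdp_should_cache(is_edge_node, popularity, caching_gain, max_gain):
    if is_edge_node:
        # edge region: deterministic, popularity-driven
        return popularity >= POP_THRESHOLD
    # core region: probabilistic, driven by caching gain and popularity
    p = min(1.0, (caching_gain / max_gain) * (popularity / (popularity + POP_THRESHOLD)))
    return random.random() < p
```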

10.
With the massive growth of information generation, processing, and distribution in the Internet of Things (IoT), existing cloud architectures need to be designed more effectively using fog networks. The current IP-address-based Internet architecture is unable to deliver the desired Quality-of-Service (QoS) for the increasing demands of fog-networking-based applications. To this end, Content-Centric Networking (CCN) has been developed as a potential future Internet architecture. CCN provides name-based content delivery and is established as an architecture for next-generation fog applications. The CCN-based fog environment uses the caches of in-network fog nodes to place contents near the end-user devices. Generally, the caching capacity of the fog nodes is very small compared to the content catalog size; therefore, efficient content placement decisions are vital for improving network performance. To enhance content retrieval performance for end-users, a novel content caching scheme named “Dynamic Partitioning and Popularity based Caching for Optimized Performance (DPPCOP)” is proposed in this paper. First, the proposed scheme partitions the fog network by grouping the fog nodes into non-overlapping partitions to improve content distribution in the network and to ensure efficient content placement decisions. During partitioning, the scheme uses the Elbow method to obtain a “good” number of partitions. Then, the DPPCOP scheme analyzes the partition information along with content popularity and distance metrics to place popular contents near the end-user devices. Extensive simulations on realistic network topologies demonstrate the superiority of the DPPCOP caching strategy over existing schemes across performance metrics such as cache hit ratio, delay, and average network traffic load. This makes the proposed scheme suitable for next-generation CCN-based fog networks and future Internet architectures for Industry 4.0.
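To illustrate the partitioning step, the sketch below applies the Elbow method on top of scikit-learn's KMeans over fog-node coordinates; the clustering features and the 10% drop-off rule for picking the elbow are assumptions, not DPPCOP's exact procedure.

```python
# Choosing a "good" number of fog-node partitions with the Elbow method.
# KMeans over node coordinates is an illustrative stand-in.

import numpy as np
from sklearn.cluster import KMeans

def elbow_partitions(node_coords, k_max=10):
    """node_coords: (n_nodes, 2) array of fog-node positions; return chosen k."""
    ks = list(range(1, k_max + 1))
    inertias = []
    for k in ks:
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(node_coords)
        inertias.append(km.inertia_)
    # pick the k where the marginal drop in inertia falls below 10% (assumed rule)
    for i in range(1, len(inertias)):
        if (inertias[i - 1] - inertias[i]) / inertias[0] < 0.10:
            return ks[i - 1]
    return k_max

coords = np.random.rand(50, 2)       # 50 fog nodes at random positions
print(elbow_partitions(coords))
```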

11.
With the increasing demand for content dissemination, Content-Centric Networking (CCN) has been proposed as a promising architecture for the future Internet. In response to the challenges in CCN caching, we develop an online caching scheme (named RBC-CC) exploiting the concept of Routing Betweenness Centrality (RBC) and “prefetching”, aiming to significantly reduce costly inter-ISP traffic and content access hops. Simulation results demonstrate that the proposed caching scheme significantly outperforms popular caching approaches in terms of inter-ISP traffic savings. RBC-CC also performs well in reducing average access hops and incurs the fewest cache evictions. Further, we present a thorough analysis of the impact of access pattern, cache size, content popularity, and population on caching performance, and conclude that our scheme is effective and offers good stability and scalability.
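Purely as an illustration of centrality-driven cache placement, the sketch below ranks nodes by NetworkX's shortest-path betweenness centrality as a stand-in for the Routing Betweenness Centrality used by RBC-CC; the paper's RBC computation and prefetching logic are not reproduced here.

```python
# Illustrative only: rank nodes for caching priority by betweenness centrality,
# a stand-in for RBC. Topology generation is a toy assumption.

import networkx as nx

def caching_priority(graph):
    """Return nodes ordered from highest to lowest betweenness centrality."""
    bc = nx.betweenness_centrality(graph)
    return sorted(bc, key=bc.get, reverse=True)

g = nx.barabasi_albert_graph(30, 2)       # toy ISP-like topology
print(caching_priority(g)[:5])            # candidate nodes for on-path caching
```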

12.
Named Data Networking (NDN) routes data based on their application-layer content names, enabling location independence, in-network caching, and enhanced security. A proof-of-concept solution is presented that demonstrates the applicability of NDN to multi-user, multi-application, and multi-sensor data-fusion systems. The system consists of a collaborative network of weather radars that name data based on their geographic location and weather feature (e.g., cloud reflectivity and wind velocity). This enables end users to specify an area of interest for a particular weather feature while remaining oblivious to the placement of radars and the associated computing facilities. Conversely, the data-fusion system can also use its knowledge of the underlying system to decide the best sensing and data-processing strategies. Such sensor-independent names also enhance resilience, enable processing of data close to the source, and benefit from NDN features such as in-network caching and duplicate query suppression, consequently reducing the bandwidth requirements of the entire data-fusion system. The solution is implemented as an overlaid NDN, combining the benefits of NDN and overlay networks. Simulation-based analysis using reflectivity data from an actual weather event showed an 84% reduction in the radars' peak bandwidth consumption and a 95% reduction in peak query-resolution latency.
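A hypothetical sketch of location- and feature-based naming in this spirit; the name structure /weather/&lt;lat&gt;/&lt;lon&gt;/&lt;feature&gt;/&lt;time&gt; is an invented example, not the paper's actual naming scheme.

```python
# Hypothetical location/feature-based NDN names for radar data. The name
# layout is an assumption made for illustration.

def radar_data_name(lat, lon, feature, timestamp):
    """Build a location- and feature-based content name for a radar observation."""
    return f"/weather/{lat:.2f}/{lon:.2f}/{feature}/{timestamp}"

# A consumer interested in reflectivity over an area expresses interest by
# name only, without knowing which radar or server produces the data.
print(radar_data_name(42.39, -72.53, "reflectivity", 1700000000))
```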

13.
Named Data Networking (NDN) is a new network architecture that inherently supports multipath forwarding and in-network caching, which leaves a large amount of redundant data in the network and greatly increases the likelihood of congestion. To address this problem, and with the goal of reducing the number of forwarded Interest and Data packets, a redundancy control algorithm based on Extended Link State Advertisements (ELSA), called ELSA-based Redundant Control (ELSA-RC), is proposed. On one hand, the algorithm adds a hop-count database (hopDB) at each routing node to store the latest hop counts of Interest packets and sends ELSAs to help neighbors update their hopDBs, thereby reducing the Interest forwarding depth; on the other hand, it uses the interface information in the Pending Interest Table together with received ELSA messages to prevent duplicate Data packets from being returned. ndnSIM-based simulation results show that, compared with conventional NDN, ELSA-RC reduces the number of transmitted Interest and Data packets by about 15% and 26%, respectively, and reduces average delay by about 14%. ELSA-RC thus significantly reduces the ineffective diffusion and duplicate transmission of Interest and Data packets while also lowering network delay, improving NDN performance.
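A rough sketch of the hop-count database (hopDB) idea: keep the best known hop count per name, update it from local observations or neighbors' ELSAs, and suppress Interests that arrive with a worse hop count. The field names and the comparison rule are assumptions.

```python
# Per-node hopDB sketch. The update and forwarding rules below are assumptions
# meant only to illustrate the idea of limiting Interest forwarding depth.

class HopDB:
    def __init__(self):
        self.best_hops = {}     # content name -> lowest hop count seen so far

    def update(self, name, hops):
        """Record a hop count learned locally or from a neighbor's ELSA."""
        if name not in self.best_hops or hops < self.best_hops[name]:
            self.best_hops[name] = hops

    def should_forward(self, name, interest_hops):
        """Forward only Interests that are no worse than the best known path."""
        best = self.best_hops.get(name)
        return best is None or interest_hops <= best
```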

14.
Content-Centric Networking (CCN) is an emerging paradigm being considered as a possible replacement for the current IP-based host-centric Internet infrastructure. In CCN, named content – rather than addressable hosts – becomes a first-class entity. Content is therefore decoupled from its location. This allows, among other things, the implementation of ubiquitous caching.

15.
To address the low storage utilization, high content access delay, and poor overall caching performance of the ALWAYS caching policy in Content-Centric Networking, a cooperative caching algorithm based on node similarity is proposed. The algorithm preferentially forwards Interest packets to the most similar node, increasing the probability that related requests are served nearby, and ensures during cache decisions that the same copy is not stored repeatedly across cooperating nodes, reducing redundancy while increasing cache diversity. Experimental results show that, compared with existing algorithms, the algorithm reduces routing hops and request delay while improving the cache hit ratio.
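As an illustration of similarity-driven forwarding, the sketch below picks the most similar cooperating node by Jaccard similarity between cached-content sets; the paper's actual definition of node similarity may differ.

```python
# Pick the "most similar" cooperating node by Jaccard similarity of cached
# content sets. The similarity measure is an assumption for illustration.

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a or b) else 0.0

def most_similar_node(local_cache, neighbor_caches):
    """neighbor_caches: dict node_id -> set of cached content names."""
    return max(neighbor_caches,
               key=lambda n: jaccard(local_cache, neighbor_caches[n]))

local = {'/videos/a', '/videos/b', '/news/1'}
neighbors = {'r1': {'/videos/a', '/videos/c'}, 'r2': {'/news/2'}}
print(most_similar_node(local, neighbors))    # Interests are forwarded toward 'r1'
```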

16.
Named Data Networking (NDN) is considered an appropriate architecture for the IoT as it naturally supports consumer mobility and provides in-network caching capabilities that can be leveraged to meet IoT requirements. Some caching techniques have been introduced to meet IoT application requirements and enforce caching at the network edge. However, it remains challenging to design a popularity- and freshness-aware caching technique that places cached contents at the edge of the network, as close to consumers as possible, in a natural and simple manner without resorting to cumbersome networking mechanisms and hard-to-ensure assumptions. In this paper, we propose PF-EdgeCache, an efficient popularity- and freshness-aware caching technique that naturally brings requested popular contents to the edge of the network in a manner fully compliant with the NDN standard. Simulations performed using the ndnSIM simulator and a large transit-stub topology clearly show the competitiveness of PF-EdgeCache in terms of server hit reduction, eviction rate, and retrieval time compared to representative schemes from the literature.
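A hedged sketch of a combined popularity/freshness admission score in the spirit of PF-EdgeCache; the scoring formula, the saturating popularity proxy, and the weight are assumptions.

```python
# Popularity- and freshness-aware score; higher = more worth caching at the
# edge. The formula and the default weight are assumptions.

import time

def pf_score(request_count, produced_at, freshness_period, now=None, w_pop=0.7):
    """Combine a popularity proxy with remaining freshness of the content."""
    now = time.time() if now is None else now
    remaining = max(0.0, 1.0 - (now - produced_at) / freshness_period)   # 1 = fresh, 0 = stale
    popularity = request_count / (request_count + 1.0)                   # saturating proxy
    return w_pop * popularity + (1.0 - w_pop) * remaining
```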

17.
The Internet has evolved into a network dominated by content-distribution services such as live video and video on demand. Traditional IP networks support video-distribution tasks poorly: multicast is complex to deploy and costly, multiple paths cannot be used effectively to retrieve content, mobility support is weak, and it is difficult to satisfy reliability and low-latency requirements at the same time. Named Data Networking (NDN), a new next-generation Internet architecture, supports...

18.
汪漪  刘斌 《软件学报》2016,27(S2):234-242
Content-Centric Networking improves overall network performance by caching content at routers. To prevent polluted data from spreading through the network, routers must verify content entering the network. The original verification mechanism requires asymmetric-key decryption of each content item's digital signature, so content verification is too slow for high-speed routers. A coloring-based fast content verification mechanism is proposed to reduce the computational complexity of verification and speed up content checking. The mechanism "colors" correct content the first time it enters the network to attest its correctness. When colored content re-enters the network, routers can quickly verify its correctness from the coloring information, improving detection speed.
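A hypothetical sketch of the coloring idea: verify the expensive signature once, then attach a cheap symmetric tag that routers sharing a key can re-check quickly. Using an HMAC as the color tag is an assumption for illustration; the paper's coloring scheme is not specified here.

```python
# Stand-in for the coloring mechanism: one full signature verification at
# first entry, then a cheap HMAC check on later hops. Key sharing among
# routers is an assumed deployment detail.

import hmac, hashlib

COLOR_KEY = b'shared-router-key'          # assumed key shared among routers

def color(content: bytes) -> bytes:
    """Attach a color tag after full signature verification has succeeded."""
    return hmac.new(COLOR_KEY, content, hashlib.sha256).digest()

def fast_verify(content: bytes, tag: bytes) -> bool:
    """Cheap re-verification of already-colored content."""
    return hmac.compare_digest(tag, color(content))

tag = color(b'some data packet payload')                # done once, at first entry
print(fast_verify(b'some data packet payload', tag))    # True on later hops
```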

19.
To address how to efficiently replace data within a node in Named Data Networking, a popularity-based replacement strategy, Po-Rep, is proposed: for data chunks already cached in a node, their current popularity is estimated accurately from request frequency and inter-request time. When data returned from a hit node is to be stored at a given node, the least popular data in that node is evicted and replaced. The strategy keeps the node's contents at maximum value for serving subsequent user requests. Simulation results show that the strategy effectively improves the in-network cache hit ratio, reduces server load, and improves overall network performance.

20.
To place different types of service content sensibly in Content-Centric Networking (CCN), a hybrid caching mechanism based on service classification and node partitioning is proposed. Differentiated caching strategies are designed according to the characteristics of each service: for streaming video-on-demand traffic, popularity-based push-pull caching stores content in order at the network edge; for non-streaming shared content, hash-based explicit caching places a single copy in the core network. Simulation results show that, compared with classical algorithms, the mechanism improves the cache hit ratio and the hop-reduction ratio and lowers the average request delay.
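The hash-based explicit caching for non-streaming content can be illustrated with a simple deterministic mapping from content name to a single core node; the hash function and node list below are assumptions.

```python
# Hash-based explicit caching sketch: each content name maps to exactly one
# core node, so only a single copy exists in the core. Constants are assumed.

import hashlib

CORE_NODES = ['core-1', 'core-2', 'core-3', 'core-4']   # assumed core routers

def responsible_core_node(content_name: str) -> str:
    digest = hashlib.sha1(content_name.encode()).hexdigest()
    return CORE_NODES[int(digest, 16) % len(CORE_NODES)]

# Every router computes the same mapping, so the content is cached (and later
# found) at exactly one core node.
print(responsible_core_node('/files/dataset.tar'))
```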
