Similar Documents
10 similar documents found (search time: 140 ms)
1.
Named Data Networking (NDN) is a candidate next-generation Internet architecture designed to overcome the fundamental limitations of the current IP-based Internet and, in particular, to provide strong security. Ubiquitous in-network caching is a key NDN feature. However, pervasive caching aggravates security problems, namely cache pollution attacks: cache poisoning (introducing malicious content into caches as false-locality) and cache pollution (ruining cache locality with new unpopular content as locality-disruption). In this paper, a new cache replacement method based on an Adaptive Neuro-Fuzzy Inference System (ANFIS) is presented to mitigate cache pollution attacks in NDN. The ANFIS structure is built from input data describing the inherent characteristics of the cached content, with the output giving the content type (healthy, locality-disruption, or false-locality). The proposed method detects both false-locality and locality-disruption attacks, as well as combinations of the two, on different topologies with high accuracy, and mitigates them efficiently at little additional computational cost compared to the most common policies.
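The abstract above classifies cached content into three types before eviction. As a minimal sketch of that idea — substituting simple crisp rules for the paper's trained ANFIS, with feature names and thresholds that are purely illustrative assumptions — the detection-then-eviction flow might look like:

```python
# Simplified, rule-based stand-in for the ANFIS classifier described above.
# The features (request rate, requester diversity, signature validity) and
# the thresholds are illustrative assumptions, not the paper's inputs.

def classify_content(request_rate, requester_diversity, validated):
    """Label cached content as 'healthy', 'locality-disruption', or
    'false-locality'.

    request_rate        -- requests per measurement interval for this content
    requester_diversity -- fraction of distinct clients among requesters (0..1)
    validated           -- whether the content's signature verified correctly
    """
    if not validated:
        # Poisoned content injected into the cache (false-locality).
        return "false-locality"
    if request_rate < 1.0 and requester_diversity < 0.2:
        # Rarely requested, by few clients: unpopular junk crowding out
        # genuinely popular content (locality-disruption).
        return "locality-disruption"
    return "healthy"

def evict_candidates(cache):
    """Order cache entries so attack content is evicted before healthy
    content; within a class, less-requested entries go first.
    Entries are (name, request_rate, requester_diversity, validated)."""
    order = {"false-locality": 0, "locality-disruption": 1, "healthy": 2}
    return sorted(cache, key=lambda c: (order[classify_content(*c[1:])], c[1]))
```

A replacement policy built this way preferentially reclaims space from suspected attack traffic instead of applying LRU/LFU blindly.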

2.
In information-centric networking, in-network caching can improve network efficiency and content distribution performance by satisfying user requests from cached content rather than downloading the requested content from remote sources. In this respect, users who request, download, and keep content can contribute to in-network caching by sharing their downloaded content with other users in the same network domain (user-assisted in-network caching). In this paper, we examine various aspects of user-assisted in-network caching with the aim of efficiently utilizing user resources for caching. Through simulations, we first show that user-assisted in-network caching has attractive features, such as self-scalable caching, a near-optimal cache hit ratio (approaching the ratio achievable when content is fully cached in-network) based on stable caching, and performance improvements over router-only in-network caching. We then examine caching strategies for user-assisted in-network caching: three centralized strategies, in which a server maintains all content availability information and tells each user what to cache, and three distributed strategies, in which each user decides based on its own content availability information. We show that the choice of strategy affects the distribution of upload overhead across users and the number of cache hits per segment. One interesting observation is that, even with a small storage space (0.1% of the content size per user), the centralized and distributed approaches improve the cache hit ratio by 50% and 45%, respectively. With an overall view of caching information, the centralized approach achieves a higher cache hit ratio than the distributed approach. Based on this observation, we discuss a distributed approach with an enlarged view of caching information and, through simulations, confirm that a larger view leads to a higher cache hit ratio. Another interesting observation is that the random distributed strategy yields performance comparable to that of more complex strategies.
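The random distributed strategy mentioned above is simple enough to sketch: each user independently caches a few random segments, and a request is a hit if any user in the domain holds the segment. This toy model (parameters and request distribution are illustrative assumptions, not the paper's simulation settings) shows how domain-wide coverage emerges from uncoordinated choices:

```python
import random

# Toy model of a "random distributed" user-assisted caching strategy: each
# user caches a uniformly random subset of segments, and a request hits if
# any user in the domain holds the requested segment. Uniform requests and
# all parameters are illustrative assumptions.

def simulate_random_caching(num_users, segments_per_user, num_segments,
                            num_requests, seed=0):
    rng = random.Random(seed)
    cached = set()
    for _ in range(num_users):
        # Each user independently picks which segments to keep.
        cached.update(rng.sample(range(num_segments), segments_per_user))
    hits = sum(1 for _ in range(num_requests)
               if rng.randrange(num_segments) in cached)
    return hits / num_requests
```

Even with a tiny per-user quota, the union of many independent random caches covers most segments, which is consistent with the observation that the random strategy competes with more complex ones.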

3.
Router nodes in Named Data Networking (NDN) have caching capability, which greatly improves the efficiency of data delivery and retrieval in the network. However, since router cache capacity is limited, designing effective caching strategies remains a pressing task. To address this problem, a dynamic popularity-based cache decision and replacement policy (DPDR) is proposed. DPDR jointly considers content popularity and cache capacity, using an additive-increase/multiplicative-decrease (AIMD) algorithm to dynamically adjust the popularity threshold, and admits into the cache only content whose popularity exceeds the threshold. A cache replacement algorithm is also proposed that combines the popularity of cached content with the time at which it was last accessed, evicting the content with the smallest replacement value. Extensive simulation results show that, compared with other algorithms, the proposed algorithm effectively improves the cache hit ratio, shortens the average hit distance, and increases network throughput.
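The two DPDR ingredients — an AIMD-adjusted popularity threshold for admission, and a replacement value mixing popularity with last-access time — can be sketched as below. The step sizes, the trigger for each AIMD branch, and the exact replacement-value formula are illustrative assumptions, not the paper's parameters:

```python
# Sketch of DPDR-style admission and replacement. Constants and formulas
# are illustrative assumptions.

class AIMDAdmission:
    def __init__(self, threshold=5.0, add_step=1.0, mult_factor=0.5):
        self.threshold = threshold
        self.add_step = add_step        # additive increase step
        self.mult_factor = mult_factor  # multiplicative decrease factor

    def should_cache(self, popularity, cache_full):
        """Admit content whose popularity clears the threshold, then adapt:
        raise the bar additively under cache pressure, lower it
        multiplicatively when space is free (assumed trigger)."""
        admit = popularity >= self.threshold
        if cache_full:
            self.threshold += self.add_step
        else:
            self.threshold *= self.mult_factor
        return admit

def evict(cache, now):
    """cache: name -> (popularity, last_access_time). Evict the entry with
    the smallest replacement value popularity / (age + 1), so unpopular,
    long-unaccessed content goes first."""
    victim = min(cache, key=lambda n: cache[n][0] / (now - cache[n][1] + 1))
    del cache[victim]
    return victim
```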

4.
To address the problem of efficiently replacing data in Named Data Networking (NDN) caches, a replacement policy that jointly considers data popularity and data request cost is proposed. The policy dynamically assigns weights to the popularity factor and the request-cost factor according to the data request interval, so that nodes cache data with high popularity and high request cost. When a user requests such data again, it can be served from the local node, reducing response time and link congestion. Simulation results show that the policy effectively improves the in-network cache hit ratio and reduces both the time and the distance needed for users to obtain data.
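One way to realize "interval-driven weighting of popularity versus request cost" is a weight that decays as the request interval grows. This is a sketch under that assumption — the decay function and the time constant `tau` are illustrative, not the paper's formula:

```python
# Illustrative replacement value combining popularity and request cost,
# weighted by the observed request interval: frequently re-requested data
# leans on popularity, rarely requested data on the cost of re-fetching it.
# The weighting function and tau are assumptions.

def replacement_value(popularity, request_cost, request_interval, tau=10.0):
    # Popularity weight decays from 1 toward 0 as the interval grows.
    w = tau / (tau + request_interval)
    return w * popularity + (1.0 - w) * request_cost

def choose_victim(cache):
    """cache: name -> (popularity, request_cost, request_interval).
    Evict the entry with the lowest replacement value."""
    return min(cache, key=lambda n: replacement_value(*cache[n]))
```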

5.
To counter cache pollution attacks in Content-Centric Networking (CCN), a defence mechanism based on diversified storage is proposed. It caches different classes of traffic differently to reduce the network's attack surface, dividing traffic into three classes with distinct caching policies: private and real-time traffic is not cached; streaming media is probabilistically pushed to caches at the network edge; other file-type content is pushed progressively from upstream nodes toward the edge. Different defences against cache pollution are configured at different nodes: edge nodes detect attacks through changes in the arrival probability of content requests, while upstream nodes set filtering rules that exclude rarely requested content from the cache. Simulation results show that, compared with the defence achieved under conventional CCN caching policies, the mechanism raises the average network cache hit ratio by 17.3% and effectively strengthens the network's resistance to cache pollution attacks.
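The two per-node defences described above can be sketched as follows. The distance measure (L1 between windows) and both thresholds are illustrative assumptions; the abstract only says edge nodes watch for changes in request-arrival probability and upstream nodes filter out low-probability content:

```python
# Sketch of the two defence hooks: edge-side distribution-shift detection
# and upstream-side low-popularity filtering. Metrics and thresholds are
# illustrative assumptions.

def edge_detects_attack(prev_probs, cur_probs, threshold=0.25):
    """Edge node: flag an attack when the request-arrival probability
    distribution shifts sharply between measurement windows.
    prev_probs/cur_probs: name -> arrival probability in that window."""
    names = set(prev_probs) | set(cur_probs)
    shift = sum(abs(prev_probs.get(n, 0.0) - cur_probs.get(n, 0.0))
                for n in names)
    return shift > threshold

def upstream_filter(request_counts, total_requests, min_prob=0.01):
    """Upstream node: keep only content whose request probability is high
    enough to deserve cache space; everything else is excluded."""
    return {n for n, c in request_counts.items()
            if c / total_requests >= min_prob}
```

A locality-disruption flood of one-off names raises the window-to-window shift at the edge and simultaneously fails the upstream probability filter.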

6.
Content-Centric Networking (CCN), a form of information-centric networking, is one of the most promising architectures for the future Internet and has become a research focus for next-generation network architecture. New CCN features such as content routing, in-network caching, and receiver-driven transport improve content distribution efficiency, but they also introduce new security challenges. Building on an analysis of how CCN works, this paper surveys CCN's security threats, security requirements, and existing solutions, and looks ahead to directions for CCN security research. First, the principles and workflow of CCN are described in detail, the differences between CCN and TCP/IP networks are analyzed, and CCN's security threats and requirements are identified. Second, the state of research on privacy protection, flooding attacks, cache pollution, and congestion control in CCN is summarized, the strengths and weaknesses of existing schemes are assessed, and possible solutions are discussed. Finally, the challenges facing CCN security technology are analyzed and future research directions and trends are projected. By summarizing and analyzing existing work, the paper identifies potential research directions and key problems in CCN security, providing a useful reference for follow-up research.

7.
In-network caching is the key technique by which Named Data Networking achieves efficient information retrieval and substantially reduces traffic on the Internet backbone. It adds caching as a universal capability to every network node: when a user requests information, any node holding the content (e.g., a router) can return it directly upon receiving the request, improving response efficiency. However, NDN's ubiquitous caching causes every node on the path from publisher to user to cache content repeatedly and indiscriminately, producing redundant copies and treating all content alike. To address this, a content-type based skip-hop probabilistic caching mechanism is proposed. First, content is divided into four types according to traffic characteristics (e.g., delay requirements and bandwidth usage): dynamic, real-time, big-data, and small-data. Second, a skip-hop candidate caching policy stores data on non-consecutive nodes along the transmission path, reducing redundant caching spatially. Finally, differentiated caching services are provided for the different content types — no caching, probabilistic caching at the network edge, probabilistic caching next to the edge, and probabilistic caching in the network core — further reducing redundant data while improving the efficiency with which users obtain content. Experimental results show that the mechanism reduces redundant caching and lowers content retrieval delay.
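A possible shape for the decision logic above is sketched here. The abstract does not say which content type maps to which of the four strategies, so the mapping below (dynamic → no caching, real-time → edge, small-data → next-to-edge, big-data → core), the caching probability `p`, and the convention that only even hops from the edge are caching candidates are all assumptions:

```python
# Sketch of content-type based skip-hop probabilistic caching. The
# type-to-strategy mapping, probability p, and hop bands are assumptions.

def cache_probability(content_type, hop_from_edge, total_hops, p=0.6):
    if content_type == "dynamic":
        return 0.0                                       # never cached
    if content_type == "realtime":
        return p if hop_from_edge == 0 else 0.0          # edge only
    if content_type == "small":
        return p if hop_from_edge == 2 else 0.0          # next-to-edge
    if content_type == "big":
        return p if hop_from_edge >= total_hops - 2 else 0.0  # core
    raise ValueError(content_type)

def should_cache(content_type, hop_from_edge, total_hops, draw):
    """draw: a uniform random number in [0, 1) supplied by the caller.
    Skip-hop rule: only every other router (even hops from the edge) may
    cache, so copies never sit on consecutive nodes of the path."""
    if hop_from_edge % 2 == 1:
        return False
    return draw < cache_probability(content_type, hop_from_edge, total_hops)
```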

8.
Considering the respective characteristics of deterministic and probabilistic caching in Named Data Networking (NDN), a hybrid NDN caching strategy (HDP) combining the two is proposed. Based on the idea of region partitioning, a popularity-based deterministic caching policy is used at the network edge, while a probabilistic policy based on caching benefit and content popularity is used in the network core, combining the advantages of both approaches to further improve NDN caching performance. Experiments show that, compared with existing NDN caching methods, the strategy effectively improves the cache service rate and hit ratio, helps reduce content access delay, and improves user experience.
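The edge/core split in HDP suggests a two-branch decision. This sketch assumes a popularity threshold for the deterministic edge branch and a probability proportional to popularity times an estimated caching benefit for the core branch; both the threshold and the probability formula are illustrative, not the paper's:

```python
# Sketch of an HDP-style hybrid decision: deterministic popularity-based
# caching at edge nodes, benefit/popularity-driven probabilistic caching in
# the core. Threshold and probability formula are assumptions.

def hdp_decision(is_edge_node, popularity, cache_benefit, draw,
                 hot_threshold=10.0):
    """draw: a uniform random number in [0, 1) supplied by the caller."""
    if is_edge_node:
        # Deterministic: cache every item hotter than the threshold.
        return popularity >= hot_threshold
    # Probabilistic core: probability grows with both popularity and the
    # estimated caching benefit, capped at 1.
    p = min(1.0, (popularity / hot_threshold) * cache_benefit)
    return draw < p
```

Separating the regimes this way keeps hot content pinned where users are, while the core only spends space where the expected benefit justifies it.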

9.
With the massive growth of information generation, processing, and distribution in the Internet of Things (IoT), existing cloud architectures need to be designed more effectively using fog networks. The current IP-address-based Internet architecture cannot deliver the Quality of Service (QoS) demanded by fog-networking-based applications. To this end, Content-Centric Networking (CCN) has been developed as a potential future Internet architecture. CCN provides name-based content delivery and has been established as an architecture for next-generation fog applications. A CCN-based fog environment uses the caches of in-network fog nodes to place content near end-user devices. Since the caching capacity of fog nodes is generally very small compared to the content catalog size, efficient content placement decisions are vital for network performance. To improve content retrieval performance for end users, this paper proposes a novel content caching scheme named "Dynamic Partitioning and Popularity based Caching for Optimized Performance (DPPCOP)". The scheme first partitions the fog network by grouping fog nodes into non-overlapping partitions, improving content distribution and enabling efficient placement decisions; during partitioning, it uses the Elbow method to obtain a "good" number of partitions. DPPCOP then analyzes each partition's information together with content popularity and distance metrics to place popular content near end-user devices. Extensive simulations on realistic network topologies demonstrate the superiority of DPPCOP over existing schemes on performance measures such as cache hit ratio, delay, and average network traffic load. This makes the proposed scheme well suited to next-generation CCN-based fog networks and future Internet architectures for Industry 4.0.
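The Elbow method mentioned above picks the point on a cost-versus-k curve where adding partitions stops paying off. A common way to locate that point — used here as an illustrative sketch, since the abstract does not give the paper's exact criterion — is to take the k whose cost lies farthest from the straight line joining the curve's endpoints:

```python
# Elbow heuristic for choosing the number of partitions k: pick the k whose
# cost point is farthest (perpendicular distance) from the line joining the
# first and last points of the cost curve. The cost values would come from
# running the partitioning for each candidate k; this criterion is one
# common formulation, assumed rather than taken from the paper.

def elbow_k(costs):
    """costs[i] is the partitioning cost for k = i + 1. Returns the elbow k."""
    n = len(costs)
    if n < 3:
        return n
    x1, y1, x2, y2 = 1.0, costs[0], float(n), costs[-1]
    denom = ((y2 - y1) ** 2 + (x2 - x1) ** 2) ** 0.5
    best_k, best_d = 1, -1.0
    for i, c in enumerate(costs):
        x = i + 1.0
        # Distance from (x, c) to the line through the two endpoints.
        d = abs((y2 - y1) * x - (x2 - x1) * c + x2 * y1 - y2 * x1) / denom
        if d > best_d:
            best_k, best_d = int(x), d
    return best_k
```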

10.
With the increasing demand for content dissemination, Content-Centric Networking (CCN) has been proposed as a promising architecture for the future Internet. In response to the challenges of CCN caching, we develop an online caching scheme (RBC-CC) that exploits Routing Betweenness Centrality (RBC) and prefetching, aiming to significantly reduce costly inter-ISP traffic and the number of content access hops. Simulation results demonstrate that the proposed scheme significantly outperforms popular caching approaches in the inter-ISP traffic saving rate; RBC-CC also performs well in reducing average access hops and incurs the fewest cache evictions. We further present a thorough analysis of how access pattern, cache size, content popularity, and content population affect caching performance, and conclude that the scheme is effective, stable, and scalable.
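The centrality measure behind RBC-CC rewards routers that many request paths traverse. As a simplified illustration — literature definitions of RBC weight paths by routing probabilities, whereas this sketch just counts appearances on a given set of paths, which are assumed inputs here rather than derived from routing tables — caching at high-RBC nodes looks like:

```python
from collections import Counter

# Simplified routing betweenness: count, for each node, how many routing
# paths it appears on as an intermediate (non-endpoint) hop. Real RBC
# weights paths by routing probabilities; the explicit path list is an
# assumption for illustration.

def routing_betweenness(paths):
    rbc = Counter()
    for path in paths:
        for node in path[1:-1]:
            rbc[node] += 1
    return rbc

def pick_cache_nodes(paths, k):
    """Cache content at the k highest-RBC nodes, in the spirit of RBC-CC:
    nodes that many request paths traverse serve the most requests."""
    rbc = routing_betweenness(paths)
    return [n for n, _ in rbc.most_common(k)]
```

Placing copies at such junction routers intercepts requests before they cross ISP boundaries, which is the traffic-saving mechanism the abstract reports.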


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号