Similar Documents
20 similar documents found.
1.
Research on in-network caching in information-centric networking   Total citations: 2 (self-citations: 0, other citations: 2)
张国强  李杨  林涛  唐晖 《软件学报》2014,25(1):154-175
Internet usage is shifting from host-oriented point-to-point communication to large-scale content retrieval. To accommodate this shift, the research community has proposed a variety of new information/content-centric network architectures. One of the most important features of these architectures is the use of in-network caching to improve the transmission efficiency of receiver-driven content retrieval and the utilization of network resources. Compared with traditional Web caching and CDN caching, ICN cache systems exhibit new characteristics such as transparency, ubiquity, and fine granularity, which pose new challenges for the modeling, behavioral understanding, and optimization of cache systems. After introducing these new characteristics and the challenges they bring, this paper surveys and compares optimization methods for cache networks from several perspectives, reviews the state of the art in theoretical modeling of cache networks, and then analyzes the key open problems and future research directions.

2.
In existing Information-Centric Networks (ICN), the spatial and temporal distribution of cached content is often suboptimal, leading to problems such as useless caching and homogeneous caching. To address this, this paper proposes an ICN caching and replacement strategy based on content popularity and community importance. It combines content popularity with the community importance of nodes to select cache nodes, dispersing content of different popularity levels onto nodes of different community importance, which makes the spatial distribution of cached content more reasonable and increases cache diversity. Meanwhile, replacing cached content according to community-local popularity helps optimize the temporal distribution of cached content and dynamically adjusts its spatial distribution. Experimental results show that the proposed strategy effectively reduces the average response time of user requests and improves the cache hit ratio and network-wide cache diversity.
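The placement rule described above can be pictured with a small sketch in Python. This is not the paper's algorithm: the functions select_cache_node and evict, the rank-to-importance matching, and the toy scores are assumptions used only to illustrate spreading content of different popularity across nodes of different community importance and evicting by community-local popularity.

```python
# Illustrative sketch (not the paper's code): spread content of different
# popularity over nodes of different community importance, and evict by
# community-local popularity.

def select_cache_node(content_rank, candidate_nodes, importance):
    """Pick the candidate node whose community-importance rank best matches
    the content's popularity rank (hot content -> important nodes)."""
    ordered = sorted(candidate_nodes, key=lambda n: importance[n], reverse=True)
    idx = min(content_rank, len(ordered) - 1)   # rank 0 = most popular
    return ordered[idx]

def evict(cache, local_popularity):
    """Replace the item that is least popular within the node's community."""
    victim = min(cache, key=lambda item: local_popularity.get(item, 0))
    cache.remove(victim)
    return victim

# toy usage
importance = {"A": 0.9, "B": 0.5, "C": 0.2}          # community importance
node = select_cache_node(content_rank=0, candidate_nodes=["A", "B", "C"],
                         importance=importance)      # -> "A"
cache = {"v1", "v2", "v3"}
evict(cache, local_popularity={"v1": 12, "v2": 3, "v3": 7})  # evicts "v2"
print(node)
```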

3.
The ever-growing demand for content retrieval on the Internet has prompted academia to propose a variety of information-centric future network architectures, which shift the communication model from host-centric to content-centric. One of the most important features of Information-Centric Networking (ICN) is the use of in-network caching to reduce content-retrieval latency, save network bandwidth, and relieve congestion. Compared with traditional Content Delivery Networks (CDN), peer-to-peer (P2P) networks, and Web caching systems, ICN cache systems exhibit a series of new characteristics. This paper analyzes the challenges these new characteristics pose for ICN research, elaborates on optimization methods for ICN caching from several perspectives, analyzes and compares different caching strategies in detail, and concludes with directions for future research.

4.
Cooperative caching is an efficient way to improve the performance of data access in mobile wireless networks: cache nodes select different data items for their limited storage in order to reduce the total access delay. With growing demand for sharing videos and other data, especially for mobile applications in an Internet-based Mobile Ad Hoc Network, considering the relations among data items in cooperative caching has become more important than before. However, most existing works do not consider the inherent relations among data items, such as logical, temporal, or spatial relations. In this paper, we present a novel solution, Gossip-based Cooperative Caching (GosCC), to address the cache placement problem while taking the sequential relation among data items into account. Each mobile node stores the IDs of the data items cached locally and the ID of the data item in use in its progress report, and uses these progress reports to determine whether a data item should be cached locally. The progress reports are propagated within the network in a gossip-based way. To improve the user experience, GosCC aims to provide users with an uninterrupted data access service. Simulation results show that GosCC achieves better performance than Benefit-based Data Caching and HybridCache in terms of average interruption intervals and average interruption times, while sacrificing message cost to a certain degree.
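A rough sketch of the progress-report mechanism summarized above, not the authors' implementation: the Node class, the gossip fanout, and the "needed soon" test on item IDs are illustrative assumptions about how the sequential relation and the gossip exchange might be wired together.

```python
# Rough sketch of the gossip-based progress-report idea (assumed data
# structures; not the GosCC implementation).
import random

class Node:
    def __init__(self, node_id, capacity):
        self.node_id = node_id
        self.capacity = capacity
        self.cached = set()          # IDs of locally cached data items
        self.in_use = None           # ID of the item currently being consumed
        self.reports = {}            # node_id -> (cached IDs, in-use ID)

    def progress_report(self):
        return (frozenset(self.cached), self.in_use)

    def gossip(self, peers, fanout=2):
        """Push this node's progress report to a few random peers."""
        for peer in random.sample(peers, min(fanout, len(peers))):
            peer.reports[self.node_id] = self.progress_report()

    def should_cache(self, item_id):
        """Cache an item if nobody we know of holds it yet, or if it lies
        'ahead' of some neighbour's playback point (sequential relation)."""
        held_elsewhere = any(item_id in cached for cached, _ in self.reports.values())
        needed_soon = any(in_use is not None and item_id > in_use
                          for _, in_use in self.reports.values())
        return (not held_elsewhere or needed_soon) and len(self.cached) < self.capacity

# toy usage
a, b = Node("a", capacity=4), Node("b", capacity=4)
b.cached, b.in_use = {1, 2}, 1
b.gossip([a])
print(a.should_cache(3))   # True: item 3 lies ahead of b's playback point
```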

5.
Information-centric networking (ICN) shifts the network communication model from the current address-centric one to an information-centric one. Ubiquitous caching is one of ICN's key features: by giving any node in the network the ability to cache, it relieves server load and reduces user access latency. However, lacking awareness of the distribution of content popularity, existing ICN caching strategies still suffer from low cache utilization and poorly planned cache placement. To address these problems, this paper proposes a cache coordination scheme based on a two-level cache (CSTC). Each node's cache space is divided into a popularity-aware part and a cooperative-allocation part, providing different caching strategies for content of different popularity. Combined with the proposed popularity-filtering mechanism and routing strategy, the scheme reduces cache redundancy and optimizes cache placement. Simulations on real network topologies show that CSTC increases the amount of cached moderately popular content two-fold, improves the cache hit ratio by nearly 50%, and achieves a better average round-trip hop count than existing on-path caching in most cases.
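The two-level split can be pictured with the following minimal sketch; the hot/cooperative sizes, the LRU stand-in for the popularity-aware part, and the assign_cooperative hook for a coordinator are assumptions, since the paper's filtering and routing logic is not reproduced here.

```python
# Minimal sketch of the two-level cache split described for CSTC
# (assumed sizes and policies; not the paper's implementation).
from collections import OrderedDict

class TwoLevelCache:
    def __init__(self, hot_size, coop_size):
        self.hot = OrderedDict()     # popularity-aware part (LRU as a stand-in)
        self.hot_size = hot_size
        self.coop = {}               # cooperative part, filled by a coordinator
        self.coop_size = coop_size

    def assign_cooperative(self, name, data):
        """Content placed here by the (assumed) network-wide coordinator."""
        if len(self.coop) < self.coop_size:
            self.coop[name] = data

    def get(self, name):
        if name in self.coop:
            return self.coop[name]
        if name in self.hot:
            self.hot.move_to_end(name)           # refresh recency
            return self.hot[name]
        return None                              # miss -> forward upstream

    def insert_hot(self, name, data, popularity_threshold, popularity):
        """Only content above a popularity threshold enters the hot part."""
        if popularity < popularity_threshold:
            return
        if len(self.hot) >= self.hot_size:
            self.hot.popitem(last=False)         # evict least recently used
        self.hot[name] = data

# toy usage
c = TwoLevelCache(hot_size=2, coop_size=2)
c.assign_cooperative("/video/ep1", b"...")
c.insert_hot("/news/today", b"...", popularity_threshold=5, popularity=9)
print(c.get("/video/ep1") is not None, c.get("/news/today") is not None)
```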

6.
Advances in network technology have accelerated the development of multimedia applications over wired and wireless communication. To alleviate network congestion and to reduce latency and workload on multimedia servers, the multimedia proxy has been proposed to cache popular content. Caching data objects relieves the bandwidth demand on the external network and reduces the average time to load a remote data object to the local side. Since the effectiveness of a proxy server depends largely on its cache replacement policy, various approaches have been proposed in recent years. In this paper, we discuss cache replacement in a multimedia transcoding proxy. Unlike cache replacement for conventional Web objects, replacing elements in the cache of a transcoding proxy must also take into account the transcoding relationship among the cached items. To maintain the transcoding relationship and to perform cache replacement, we propose the RESP framework (standing for REplacement with Shortest Path). The RESP framework contains two primary components: procedure MASP (standing for Minimum Aggregate Cost with Shortest Path) and algorithm EBR (standing for Exchange-Based Replacement). Procedure MASP maintains the transcoding relationship using a shortest-path table, whereas algorithm EBR performs cache replacement according to an exchanging strategy. The experimental results show that the RESP framework approximates optimal cache replacement with much lower execution time for processing user queries.
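The shortest-path view behind MASP can be approximated as follows: treat cached versions as zero-cost starting points in a directed graph of transcoding costs and run Dijkstra to the requested version. The graph encoding and the min_serve_cost function are illustrative; the actual RESP bookkeeping and the EBR exchange strategy are omitted.

```python
# Sketch of the shortest-path view of transcoding cost (inspired by the MASP
# idea; not the RESP framework itself).
import heapq

def min_serve_cost(transcode_cost, cached_versions, requested):
    """transcode_cost: dict {(src_version, dst_version): cost}.
    Returns the minimum aggregate cost of producing `requested` from any
    cached version via chained transcoding (0 if it is already cached)."""
    adj = {}
    for (u, v), c in transcode_cost.items():
        adj.setdefault(u, []).append((v, c))
    # Dijkstra from a virtual source attached to every cached version at cost 0
    dist = {v: 0 for v in cached_versions}
    heap = [(0, v) for v in cached_versions]
    heapq.heapify(heap)
    while heap:
        d, u = heapq.heappop(heap)
        if u == requested:
            return d
        if d > dist.get(u, float("inf")):
            continue                              # stale heap entry
        for v, c in adj.get(u, []):
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")   # not reachable: must fetch from the origin server

# toy usage: 1080p cached, 480p requested, transcoding 1080p->720p->480p
costs = {("1080p", "720p"): 4, ("720p", "480p"): 2, ("1080p", "480p"): 7}
print(min_serve_cost(costs, cached_versions={"1080p"}, requested="480p"))  # 6
```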

7.
The cache control mechanism in distributed video-on-demand (VOD) is a core technique for improving system efficiency; a good caching mechanism can effectively reduce the loss rate of user requests. This paper proposes a novel hierarchical architecture for distributed VOD that adopts a two-layer cache replacement mechanism, links the memory of all nodes in the local server cluster into a single global virtual cache, and presents a multicast scheduling scheme for video files based on this cache.

8.
Content-Centric Networking (CCN) is an important direction for the future Internet. In-network caching is a key feature of CCN and has a significant impact on its content delivery performance, and the efficiency of discovering cached content is closely tied to in-network caching performance. The traditional CCN cache discovery method forwards request packets on the data plane and hits caches opportunistically along the way; this is somewhat random and blind, and cached content may not be used efficiently. This paper proposes a control-plane approach to cache availability: it combines the topology, cache capacities, and the distribution of user requests to compute which content is "worth" caching, stores that content, and advertises it so that it participates in route computation, allowing subsequent requests to discover and use cached content quickly and accurately. Experimental results show that the method improves the cache hit ratio by about 20% and reduces server load by about 15%.
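One way to picture the control-plane computation is a greedy plan that scores (router, content) pairs by an assumed benefit of request rate times hop savings, fills router capacities, and then advertises the result to routing. The scoring formula and the plan_caching function are assumptions, not the paper's algorithm.

```python
# Hedged sketch of the control-plane idea: rank (router, content) pairs by an
# assumed benefit score, fill capacities greedily, and emit the placements
# that would be advertised into routing. Names are illustrative.
def plan_caching(request_rate, hops_to_server, hops_to_router, capacity):
    """request_rate[content]; hops_to_server[client]; hops_to_router[client][r];
    capacity[r] in number of items. Returns {router: [contents to cache]}."""
    candidates = []
    for content, rate in request_rate.items():
        for r in capacity:
            saving = sum(max(hops_to_server[c] - hops_to_router[c][r], 0)
                         for c in hops_to_server)
            candidates.append((rate * saving, r, content))
    plan = {r: [] for r in capacity}
    placed = set()
    for score, r, content in sorted(candidates, reverse=True):
        if content not in placed and len(plan[r]) < capacity[r] and score > 0:
            plan[r].append(content)   # store it and advertise it to routing
            placed.add(content)
    return plan

# toy usage: two clients, one router near them, one farther away
rates = {"/a": 10, "/b": 2}
plan = plan_caching(rates,
                    hops_to_server={"c1": 5, "c2": 6},
                    hops_to_router={"c1": {"r1": 1, "r2": 4},
                                    "c2": {"r1": 2, "r2": 5}},
                    capacity={"r1": 1, "r2": 1})
print(plan)   # "/a" lands on r1, "/b" on r2
```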

9.
熊炼  李朋明  陈翔  朱红梅 《计算机应用》2018,38(12):3509-3513
In Content-Centric Networking (CCN), nodes by default cache all content that passes through them, which prevents selective caching and optimal placement. To address this, a cooperative caching strategy based on user preference (CCUP) is proposed. First, users' preferences for content types and content popularity are combined into a local user-preference metric that is used to select which content to cache. Then, a differentiated caching policy is applied: globally active content is cached at important central nodes, while inactive content is cached by matching its local preference level to the node's distance level from the user. In this way, users can fetch locally preferred content nearby while globally active content is distributed quickly. Simulation results show that, compared with typical caching strategies (LCE, Prob(0.6), Betw), CCUP has clear advantages in average cache hit ratio and average request delay.
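A minimal sketch of the placement rule, with assumed weights and level mapping: local preference is taken as the product of type preference and popularity, globally active content goes to the most central on-path node, and otherwise a higher preference maps to a node closer to the user. The functions local_preference and choose_cache_node are illustrative only.

```python
# Illustrative sketch of the CCUP placement rule described above; the
# preference weights, thresholds, and level mapping are assumptions.
def local_preference(type_pref, popularity, content_type):
    """Local preference = user's taste for the content type x its popularity."""
    return type_pref.get(content_type, 0.0) * popularity

def choose_cache_node(path, centrality, pref, globally_active):
    """path: routers from the user to the server (index 0 = closest to user)."""
    if globally_active:
        return max(path, key=lambda r: centrality[r])   # important central node
    # map preference to a distance level: higher preference -> closer to user
    levels = len(path)
    level = min(int((1.0 - pref) * levels), levels - 1)
    return path[level]

# toy usage
path = ["edge", "mid", "core"]
centrality = {"edge": 0.2, "mid": 0.5, "core": 0.9}
pref = local_preference({"video": 0.8}, popularity=0.9, content_type="video")  # 0.72
print(choose_cache_node(path, centrality, pref, globally_active=False))  # "edge"
print(choose_cache_node(path, centrality, pref=0.1, globally_active=True))  # "core"
```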

10.
Named Data Networking (NDN) is a candidate next-generation Internet architecture designed to overcome fundamental limitations of the current IP-based Internet, in particular by providing strong security. Ubiquitous in-network caching is a key NDN feature. However, pervasive caching aggravates security problems, namely cache pollution attacks, which include cache poisoning (introducing malicious content into caches as false locality) and cache pollution (ruining cache locality with new unpopular content as locality disruption). In this paper, a new cache replacement method based on the Adaptive Neuro-Fuzzy Inference System (ANFIS) is presented to mitigate cache pollution attacks in NDN. The ANFIS structure is built using input data related to the inherent characteristics of the cached content and an output related to the content type (i.e., healthy, locality-disruption, or false-locality). The proposed method detects both false-locality and locality-disruption attacks, as well as combinations of the two, on different topologies with high accuracy, and mitigates them efficiently without much additional computational cost compared with the most common policies.

11.
田铭  邬江兴  兰巨龙 《计算机科学》2016,43(11):164-171
By modeling in-network node caches in information-centric networking, the analysis shows that replacement strategies based on global content popularity are ill-suited to ICN's distributed mode of operation. A cache replacement strategy based on local content activity, LAU, is therefore proposed, together with an adaptive on-path caching algorithm, ACAP, built on it, which places content along the access path in order of its local activity. Simulation results show that LAU improves the single-node cache hit ratio, and that ACAP achieves a lower server hit ratio and hop ratio than existing on-path caching algorithms. Finally, the cache structures and topologies for which the algorithm is suitable are discussed and analyzed.
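A stand-in for a local-activity replacement policy in the spirit of LAU: each node keeps its own decayed activity score per content item and evicts the least locally active one. The decay factor, the one-second aging tick, and the scoring are assumptions rather than the paper's definitions.

```python
# Minimal stand-in for local-activity-based replacement (assumed parameters).
import time

class LocalActivityCache:
    def __init__(self, capacity, decay=0.9):
        self.capacity = capacity
        self.decay = decay            # how fast old activity fades
        self.store = {}               # name -> data
        self.activity = {}            # name -> locally measured activity score
        self.last_tick = time.time()

    def _age(self):
        """Periodically decay every score so 'activity' stays local in time."""
        now = time.time()
        if now - self.last_tick >= 1.0:
            for name in self.activity:
                self.activity[name] *= self.decay
            self.last_tick = now

    def get(self, name):
        self._age()
        if name in self.store:
            self.activity[name] = self.activity.get(name, 0.0) + 1.0
            return self.store[name]
        return None

    def put(self, name, data):
        self._age()
        if len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda n: self.activity.get(n, 0.0))
            del self.store[victim]
            self.activity.pop(victim, None)
        self.store[name] = data
        self.activity.setdefault(name, 1.0)

cache = LocalActivityCache(capacity=2)
cache.put("/a", b"1"); cache.get("/a"); cache.put("/b", b"2"); cache.put("/c", b"3")
print(sorted(cache.store))   # "/b" (least locally active) was evicted
```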

12.
A mobile-agent-based cache invalidation scheme for mobile computing environments   Total citations: 2 (self-citations: 2, other citations: 2)
1 Introduction. Caching is an important technique in distributed computing environments, as it can improve overall system performance (e.g., query response time and throughput). The mobile computing environment is a special kind of distributed environment; compared with traditional distributed systems it has distinctive characteristics: mobility, frequent disconnection, bandwidth diversity, scalability, weak reliability, asymmetric network communication, limited battery power, and so on. These characteristics make caching especially important in mobile computing, because caching can effectively reduce bandwidth demand and save energy on mobile computers.

13.
Caching reduces the average cost of retrieving data by amortizing the lookup cost over several references to the data. Problems with maintaining strong cache consistency in a distributed system can be avoided by treating cached information as hints. A new approach to managing caches of hints suggests maintaining a minimum level of cache accuracy, rather than maximizing the cache hit ratio, in order to guarantee performance improvements. The desired accuracy is based on the ratio of lookup costs to the costs of detecting and recovering from invalid cache entries. Cache entries are aged so that they get purged when their estimated accuracy falls below the desired level. The age thresholds are dictated solely by clients' accuracy requirements instead of being suggested by data storage servers or system administrators.
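A small worked example of the accuracy-threshold idea. Two modeling choices here are not from the paper and are stated as assumptions: an invalid hint costs a fixed recovery penalty on top of the normal lookup, and hint validity decays exponentially with age.

```python
# A small worked example of the accuracy-threshold idea, under two explicit
# assumptions that are NOT from the paper: (1) a hint that turns out invalid
# costs `recovery_cost` on top of the normal lookup, and (2) hint validity
# decays exponentially with age at rate 1/mean_lifetime.
import math

def required_accuracy(lookup_cost, recovery_cost):
    """Using the hint beats a fresh lookup when
    (1 - accuracy) * recovery_cost <= lookup_cost."""
    return max(0.0, 1.0 - lookup_cost / recovery_cost)

def age_threshold(lookup_cost, recovery_cost, mean_lifetime):
    """Purge a cached hint once its estimated accuracy e^(-age/lifetime)
    drops below the required accuracy."""
    acc = required_accuracy(lookup_cost, recovery_cost)
    if acc <= 0.0:
        return float("inf")           # recovery is cheap: never purge by age
    return -mean_lifetime * math.log(acc)

# toy numbers: a lookup costs 10 ms, recovering from a bad hint costs 100 ms,
# and entries stay valid for about 60 s on average
print(required_accuracy(10, 100))           # 0.9
print(round(age_threshold(10, 100, 60.0)))  # ~6 s
```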

14.
A popularity-prediction-based cache replacement algorithm for streaming-media proxies   Total citations: 2 (self-citations: 0, other citations: 2)
Because the popularity of streaming-media files changes over time, a popularity prediction algorithm based on regression analysis is presented and, at the cost of a small amount of extra storage and computation, applied to the cache replacement algorithm of a streaming-media proxy cache server. Simulation experiments show that the method reduces the number of cache replacements, improves the cache hit ratio, and performs well.
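A sketch of the prediction step applied to replacement: fit a straight line to each file's recent per-interval request counts, extrapolate one interval ahead, and evict the file with the lowest predicted popularity. The window length, the linear model, and the toy data are illustrative assumptions.

```python
# Sketch of regression-based popularity prediction applied to replacement
# (illustrative assumptions, not the paper's exact algorithm).
import numpy as np

def predict_popularity(request_counts):
    """Fit a straight line to per-interval request counts and extrapolate
    one interval ahead."""
    x = np.arange(len(request_counts), dtype=float)
    slope, intercept = np.polyfit(x, np.asarray(request_counts, dtype=float), 1)
    return slope * len(request_counts) + intercept

def choose_victim(history):
    """history: {file: [requests in interval 1, interval 2, ...]}.
    Evict the file whose predicted next-interval popularity is lowest."""
    return min(history, key=lambda f: predict_popularity(history[f]))

history = {
    "news.mp4":  [50, 40, 30, 20],   # clearly cooling down
    "movie.mp4": [10, 15, 20, 25],   # heating up
}
print(choose_victim(history))        # news.mp4, despite its higher past counts
```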

15.
This paper studies the challenging problem of cache placement in wireless multi-hop ad hoc networks. More specifically, we study how to achieve an optimal tradeoff between total access delay and caching overheads by properly selecting a subset of wireless nodes as cache nodes when the network topology changes. We assume a data source updates a data item to be accessed by other client nodes. Most existing cache placement algorithms use hop counts to measure the total cost of a caching system, but hop delay in wireless networks varies greatly due to contention among nodes and the traffic load on each link. Therefore, we evaluate the per-hop delay of each link according to the contention a wireless node detects at the MAC layer. We propose two heuristic cache placement algorithms, the Centralized Contention-aware Caching Algorithm (CCCA) and the Distributed Contention-aware Caching Algorithm (DCCA), both of which detect variations in contention and changes in traffic flows in order to evaluate the benefit of selecting a node as a cache node. We also apply a TTL-based cache consistency strategy to maintain delta consistency among all the cache nodes. Simulation results show that the proposed algorithms achieve better performance than alternative ones in terms of average query delay, caching overheads, and query success ratio.
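The benefit computation can be sketched centrally, in the spirit of CCCA: with per-hop delays given as edge weights, the gain of caching at a candidate node is the total delay clients save by fetching from the nearer of the server or the candidate. Here the contention-inflated delays are simply assumed as inputs, whereas the paper derives them from MAC-layer measurements; node names are made up.

```python
# Sketch of a centralized, contention-aware benefit computation
# (illustrative; not the CCCA/DCCA algorithms themselves).
import networkx as nx

def caching_benefit(G, clients, server, candidate, weight="delay"):
    """Total delay saved if `candidate` holds a copy and each client fetches
    from the nearer of {server, candidate}."""
    saved = 0.0
    for c in clients:
        d_server = nx.shortest_path_length(G, c, server, weight=weight)
        d_cache = nx.shortest_path_length(G, c, candidate, weight=weight)
        saved += max(d_server - d_cache, 0.0)
    return saved

G = nx.Graph()
# edges annotated with contention-inflated per-hop delay (ms)
G.add_edge("c1", "r1", delay=2); G.add_edge("c2", "r1", delay=3)
G.add_edge("r1", "r2", delay=8); G.add_edge("r2", "server", delay=2)

best = max(["r1", "r2"], key=lambda n: caching_benefit(G, ["c1", "c2"], "server", n))
print(best)   # "r1": caching near the clients avoids the congested r1-r2 link
```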

16.
Research on cache resource management mechanisms for chip multiprocessors   Total citations: 2 (self-citations: 1, other citations: 1)
As chip multiprocessors become the mainstream of processor development and on-chip cache resources keep growing, cache resource management has become a key issue for chip multiprocessors. This paper surveys research progress on cache resource management in chip multiprocessors, dividing the topic into cache partitioning and cache sharing. For cache partitioning, it discusses the main components and general forms, and analyzes and compares typical partitioning mechanisms. For cache sharing, it outlines the main research topics and introduces and compares several mainstream sharing mechanisms. The analysis suggests that page-based partitioning under coordinated software/hardware management should be the focus of future research on cache partitioning, while research on cache sharing should start from the cache behavior of target applications.

17.
In-network caching is one of the most important features of Information-Centric Networking (ICN), greatly reducing the response time of information requests and in-network traffic. Allocating each router's cache space sensibly has a significant impact on network performance and can also save network cost. To configure router cache sizes properly, this paper first defines a new metric, called node weight, that combines a router's degree weight, closeness, network centrality, and request influence, and then proposes a node-weight-based cache size allocation scheme that distributes the required network-wide capacity to routers in proportion to their weights. Simulation results show that, compared with uniform allocation, router cache utilization improves by at least 8% and the hit ratio by at least 6%; compared with an allocation scheme based on request influence alone, cache utilization improves by at least 3% and the hit ratio by at least 3%.
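The proportional allocation can be sketched as follows; combining the four metrics with equal weights after normalization is an assumption, since the paper defines its own node-weight formula.

```python
# Proportional cache-size allocation sketch for the node-weight idea above
# (equal-weight combination is an assumption, not the paper's definition).
import networkx as nx

def allocate_cache(G, request_influence, total_capacity):
    degree = dict(G.degree())
    closeness = nx.closeness_centrality(G)
    betweenness = nx.betweenness_centrality(G)

    def norm(d):
        m = max(d.values()) or 1
        return {n: v / m for n, v in d.items()}

    deg_n, clo_n, bet_n, req_n = map(norm, (degree, closeness, betweenness,
                                            request_influence))
    weight = {n: deg_n[n] + clo_n[n] + bet_n[n] + req_n[n] for n in G}
    total_w = sum(weight.values())
    return {n: int(total_capacity * weight[n] / total_w) for n in G}

G = nx.path_graph(["r1", "r2", "r3"])            # a 3-router line topology
sizes = allocate_cache(G, {"r1": 5, "r2": 20, "r3": 10}, total_capacity=1000)
print(sizes)    # the middle router r2 gets the largest share
```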

18.
In-network caching in Named Data Networking (NDN) based Internet of Things (IoT) plays a central role in efficient data dissemination. Data cached throughout the network may quickly become obsolete, as they are transient and frequently updated by their producers. As such, NDN-based IoT networks impose stringent requirements in terms of data freshness. While various cache replacement policies have been proposed, none has considered the cache freshness requirement. In this paper, we introduce a novel cache replacement policy called Least Fresh First (LFF) that integrates the cache freshness requirement. LFF evicts invalid cached contents based on time-series forecasting of sensors' future events. Extensive simulations are performed to evaluate the performance of LFF and to compare it with well-known cache replacement policies in ICN-based IoT networks. The obtained results show that LFF significantly improves data freshness compared to other policies, while also enhancing the server hit reduction ratio, the hop reduction ratio, and the response latency.
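A rough sketch of least-fresh-first eviction: forecast each cached item's next producer update and evict the item expected to go stale soonest. The mean-interval forecast used below is a simple stand-in for the paper's time-series model, and the names are invented.

```python
# Rough sketch of "least fresh first" eviction with a mean-interval forecast
# standing in for the paper's time-series forecasting.
import time

def predicted_next_update(update_times):
    """Forecast the next update as last update + mean inter-update interval."""
    if len(update_times) < 2:
        return float("inf")                      # no basis to forecast yet
    gaps = [b - a for a, b in zip(update_times, update_times[1:])]
    return update_times[-1] + sum(gaps) / len(gaps)

def lff_victim(cache_update_history):
    """cache_update_history: {name: [timestamps of observed updates]}.
    Returns the cached name expected to become stale the soonest."""
    return min(cache_update_history,
               key=lambda name: predicted_next_update(cache_update_history[name]))

now = time.time()
history = {
    "/temp/room1": [now - 30, now - 20, now - 10],   # updates every ~10 s
    "/door/front": [now - 500, now - 200],           # updates every ~300 s
}
print(lff_victim(history))   # "/temp/room1": it will be refreshed (go stale) first
```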

19.
Competitive snoopy caching   Total citations: 8 (self-citations: 1, other citations: 8)
In a snoopy cache multiprocessor system, each processor has a cache in which it stores blocks of data. Each cache is connected to a bus used to communicate with the other caches and with main memory. Each cache monitors the activity on the bus and in its own processor and decides which blocks of data to keep and which to discard. For several of the proposed architectures for snoopy caching systems, we present new on-line algorithms to be used by the caches to decide which blocks to retain and which to drop in order to minimize communication over the bus. We prove that, for any sequence of operations, our algorithms' communication costs are within a constant factor of the minimum required for that sequence; for some of our algorithms we prove that no on-line algorithm has this property with a smaller constant. A preliminary and condensed version of this paper appeared in the Proceedings of the 27th Annual Symposium on the Foundations of Computer Science, IEEE, 1986.

20.
Efficient delivery of streaming media over the Internet is the foundation for popularizing applications such as video-on-demand. Existing schemes consider only prefix caching with a single-proxy structure and server scheduling to reduce backbone bandwidth consumption and server load. Building on batch patching with prefix caching, this paper proposes a dynamic suffix caching algorithm, ICBR, together with a multi-cache cooperation architecture and cooperation algorithm, MCC, based on the ICBR caching algorithm. Simulation results show that ICBR-based multi-cache cooperation significantly reduces the backbone bandwidth consumed in fetching patches, improves client QoS, and also reduces server load.
