Similar Literature
20 similar documents found (search time: 31 ms)
1.
One of the key research fields in content-centric networking (CCN) is the design of more efficient cache replacement policies to improve the hit ratio of CCN in-network caching. However, most existing cache strategies, designed mainly on the basis of the time or frequency of content access, cannot properly handle the dynamic nature of content popularity in the network. In this paper, we propose a fast-convergence cache replacement algorithm based on a dynamic classification method for CCN, named FCDC. It develops a dynamic classification method that reduces the time complexity of cache inquiry and achieves a higher cache hit rate than random classification under dynamically changing content popularity. Meanwhile, to relieve the influence of dynamic content popularity, it designs a weighting function to speed up the convergence of the cache hit rate in the CCN router. Experimental results show that the proposed scheme outperforms replacement policies based on least recently used (LRU) and recent usage frequency (RUF) in cache hit rate and resiliency when content popularity in the network varies.

2.
The development of proxy caching is essential in the area of video-on-demand (VoD) to meet users' expectations. VoD requires high bandwidth and creates heavy traffic due to the nature of the media. Many researchers have developed proxy caching models to reduce bandwidth consumption and traffic. Proxy caching keeps part of a media object so as to satisfy users' viewing expectations without delay and to provide interactive playback. If caching proceeds continuously, the entire cache space is eventually exhausted, so the proxy server must apply cache replacement policies to replace existing objects and allocate cache space for incoming objects. Researchers have developed many cache replacement policies considering parameters such as recency, access frequency, cost of retrieval, and object size. In this paper, the Weighted-Rank Cache replacement Policy (WRCP) is proposed. This policy uses parameters such as access frequency, aging, and mean access gap ratio, together with the size and retrieval cost of the object. The WRCP applies our previously developed proxy caching model, Hot-Point Proxy, at four levels of replacement, depending on the cache requirement. Simulation results show that the WRCP outperforms our earlier model, the Dual Cache Replacement Policy.
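The abstract names WRCP's inputs (access frequency, aging, mean access gap ratio, object size, retrieval cost) but not its formula. A minimal sketch of one plausible weighting, where the eviction victim is the object of lowest score, might look like this; the weighting itself is an assumption, not the published policy:

```python
import time

def wrcp_score(freq, last_access, mean_gap, size, cost, now=None):
    """Toy weighted-rank score. The paper names these parameters but not
    the exact formula, so this particular combination is an assumption."""
    now = time.time() if now is None else now
    age = now - last_access  # aging term
    # Favour objects that are frequent, recent, costly to refetch, and small.
    return (freq * cost) / (size * (1.0 + age) * (1.0 + mean_gap))

def evict_candidate(cache, now):
    """Return the key with the lowest score, i.e. the replacement victim."""
    return min(cache, key=lambda k: wrcp_score(*cache[k], now=now))

# cache: key -> (freq, last_access, mean_gap, size, cost)
cache = {
    "a.mpg": (10, 90.0, 1.0, 100.0, 5.0),  # hot, recent, costly to refetch
    "b.mpg": (1, 10.0, 4.0, 500.0, 1.0),   # cold, stale, large
}
victim = evict_candidate(cache, now=100.0)  # "b.mpg" scores lowest
```

A real policy would also recompute the mean access gap ratio online; here the values are supplied directly for illustration.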

3.
This paper aims at finding fundamental design principles for hierarchical Web caching. An analytical modeling technique is developed to characterize an uncooperative two-level hierarchical caching system where the least recently used (LRU) algorithm is locally run at each cache. With this modeling technique, we are able to identify a characteristic time for each cache, which plays a fundamental role in understanding the caching processes. In particular, a cache can be viewed roughly as a low-pass filter with its cutoff frequency equal to the inverse of the characteristic time. Documents with access frequencies lower than this cutoff frequency have good chances to pass through the cache without cache hits. This viewpoint enables us to take any branch of the cache tree as a tandem of low-pass filters at different cutoff frequencies, which further results in the finding of two fundamental design principles. Finally, to demonstrate how to use the principles to guide the caching algorithm design, we propose a cooperative hierarchical Web caching architecture based on these principles. Both model-based and real trace simulation studies show that the proposed cooperative architecture results in more than 50% memory saving and substantial central processing unit (CPU) power saving for the management and update of cache entries compared with the traditional uncooperative hierarchical caching architecture.
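The characteristic time of an LRU cache can be computed numerically via the well-known Che approximation: solve sum_i (1 - exp(-lambda_i * T)) = C for T, after which item i's hit probability is roughly 1 - exp(-lambda_i * T). Items with rates well below 1/T "pass through the filter" with few hits. A small sketch, with illustrative Zipf-like request rates:

```python
import math

def characteristic_time(rates, capacity, tol=1e-9):
    """Solve sum_i (1 - exp(-lambda_i * T)) = C for T by bisection
    (the Che approximation for an LRU cache under independent requests)."""
    assert capacity < len(rates)
    def filled(T):
        return sum(1.0 - math.exp(-lam * T) for lam in rates)
    lo, hi = 0.0, 1.0
    while filled(hi) < capacity:   # grow the bracket until it contains T
        hi *= 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if filled(mid) < capacity:
            lo = mid
        else:
            hi = mid
    return hi

# Zipf-like request rates for 100 documents, cache of 10 slots
rates = [1.0 / (i + 1) for i in range(100)]
T = characteristic_time(rates, capacity=10)
hit = [1.0 - math.exp(-lam * T) for lam in rates]  # per-document hit probability
```

The hit probabilities sum to the cache capacity, and popular documents (rates above the 1/T cutoff) get hit probabilities near 1, matching the low-pass-filter view in the abstract.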

4.
GroCoca: group-based peer-to-peer cooperative caching in mobile environment
In a mobile cooperative caching environment, we observe the need for cooperating peers to cache useful data items together so as to improve the cache hit rate from peers. This can be achieved by capturing the data requirements of individual peers in conjunction with their mobility patterns, which we realize via a GROup-based COoperative CAching scheme (GroCoca). In GroCoca, we define a tightly-coupled group (TCG) as a collection of peers that possess a similar mobility pattern and display similar data affinity. A family of algorithms is proposed to discover and maintain all TCGs dynamically. Furthermore, two cooperative cache management protocols, namely cooperative cache admission control and replacement, are designed to control data replicas and improve data accessibility in TCGs. A cache signature scheme is also adopted in GroCoca to provide information for mobile clients to determine whether their TCG members are likely to be caching their desired data items and to perform cooperative cache replacement. Experimental results show that GroCoca outperforms both a conventional caching scheme and the standard COoperative CAching scheme (COCA) in terms of access latency and global cache hit ratio, although GroCoca generally incurs higher power consumption.

5.
To address the effective utilization and optimized provisioning of node storage resources in content-centric networking (CCN), a replacement-rate-based dynamic cache space borrowing mechanism is proposed on top of homogeneous cache allocation. Starting from the dynamic differences in the usage state of node storage space, the mechanism first proves the rationality of borrowing cache resources; it then performs cache borrowing dynamically according to each node's demand for storage, assigning relatively idle storage resources to the nodes that need them most, in exchange for improved caching performance at overloaded nodes. The mechanism reduces content request hop counts and improves the cache hit ratio, trading a small extra cost for a significant reduction in content request overhead and raising overall storage resource utilization. Simulation results verify its effectiveness.

6.
廖建新  杨波  朱晓民  王纯 《通信学报》2007,28(11):51-58
A two-level cache mobile streaming architecture, 2CMSA (two-level cache mobile streaming architecture), suited to mobile communication networks is proposed; it overcomes the limitations of small terminal cache space and narrow radio-access-network bandwidth in mobile streaming systems. For the 2CMSA structure, a two-level-cache-based mobile streaming scheduling algorithm, 2CMSS (two-level cache based mobile streaming scheduling algorithm), is designed, and a mathematical model is built to analyze its performance. Simulation experiments show that, compared with the original mobile streaming system, the 2CMSS scheduling algorithm effectively saves network transmission overhead and reduces user startup latency.

7.
Multimedia applications involving image retrieval demand fast and efficient response. The efficiency of search and retrieval of information in a database system depends on the index. Generally, a two-level indexing scheme in an image database can help to reduce the search space for a given query image. In such an indexing scheme, the first level is required to significantly reduce the search space for the second stage of comparisons, must be computationally efficient, and must guarantee that no false negatives result. The second level of indexing involves more detailed analysis and comparison of potentially relevant images. In this paper, we present an efficient signature representation scheme for the first level of a two-level image indexing scheme, based on hierarchical decomposition of the image space into spatial arrangements of image features. Experimental results demonstrate that our signature representation scheme yields fewer matching signatures in the first level and significantly improves the overall computational time. As this scheme relies on corner points as the salient feature points used to describe image content, we also compare results using several contemporary corner detection methods. Further, we formally prove that the proposed signature representation scheme not only yields fewer signatures but also produces no false negatives.

8.
An overview of web caching replacement algorithms
The increasing demand for World Wide Web (WWW) services has made document caching a necessity to decrease download times and reduce Internet traffic. To make effective use of caching, an informed decision has to be made as to which documents are to be evicted from the cache in case of cache saturation. This is particularly important in a wireless network, where the size of the client cache at the mobile terminal (MT) is small. Several types of caching are used over the Internet, including client caching, server caching, and, more recently, proxy caching. In this article we review some of the well-known proxy-caching policies for the Web. We describe these policies, show how they operate, and discuss the main traffic properties they incorporate in their design. We argue that a good caching policy adapts itself to changes in Web workload characteristics. We make a qualitative comparison between these policies after classifying them according to the traffic properties they consider in their designs. Furthermore, we compare a selected subset of these policies using trace-driven simulations.
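As a concrete reference point for the recency-based family of policies surveyed here, a minimal LRU cache (ignoring object sizes and retrieval costs, which real proxy policies also weigh) can be written as:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal recency-based (LRU) cache. Real proxy policies additionally
    weigh object size and retrieval cost; those are omitted for brevity."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None                     # miss: fetch from origin in practice
        self.store.move_to_end(key)         # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1); cache.put("b", 2)
cache.get("a")                              # "a" becomes most recent
cache.put("c", 3)                           # evicts "b", the LRU entry
```

Frequency-based policies (LFU and variants) differ only in the eviction key: a counter per object instead of recency order.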

9.
刘银龙  汪敏  周旭 《通信学报》2015,36(3):187-194
To reduce the global overhead of a P2P caching system, a cooperative caching strategy that minimizes total cost is proposed. The strategy jointly considers the transmission and storage overheads of the P2P caching system, measuring a file's caching gain by cross-ISP inter-domain link cost, popularity, file size, and storage cost. When replacement is needed, the content with the smallest caching gain is evicted first. Experimental results show that the proposed strategy effectively reduces the total system overhead.

10.
Existing cooperative caching algorithms for mobile ad hoc networks face serious challenges due to message overhead and scalability issues. To address these issues, we propose an adaptive virtual-backbone-based cooperative caching scheme that uses a connected dominating set (CDS) to find the location of cached data. Message overhead in cooperative caching is mainly due to the cache lookup process. The idea of this scheme is to reduce the number of nodes involved in cache lookup by constructing a virtual backbone that adapts to the dynamic topology of mobile ad hoc networks. The proposed algorithm is decentralized, and the nodes in the CDS perform data dissemination and discovery. Simulation results show that the message overhead created by the proposed cooperative caching technique is much lower than that of other approaches. Moreover, owing to the CDS-based cache discovery applied in this work, the proposed scheme has the potential to increase the cache hit ratio and reduce average delay.
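The CDS backbone idea can be illustrated with a simple greedy construction. This generic sketch (not the paper's adaptive, decentralized algorithm) grows the backbone from the highest-degree node, so only backbone nodes need to participate in cache lookup:

```python
def greedy_cds(adj):
    """Greedy connected dominating set: start at the highest-degree node and
    repeatedly add the frontier node that dominates the most uncovered
    neighbours. Picking candidates adjacent to the current set keeps the
    backbone connected. A textbook sketch, not the paper's algorithm."""
    start = max(adj, key=lambda v: len(adj[v]))
    cds = {start}
    covered = {start} | set(adj[start])
    while covered != set(adj):
        frontier = {v for u in cds for v in adj[u]} - cds
        best = max(frontier, key=lambda v: len(set(adj[v]) - covered))
        cds.add(best)
        covered |= {best} | set(adj[best])
    return cds

# small ad hoc topology: a 0-1-2-3 path plus leaf node 4
adj = {0: [1], 1: [0, 2, 4], 2: [1, 3], 3: [2], 4: [1]}
backbone = greedy_cds(adj)   # every node is in or adjacent to the backbone
```

In the cooperative caching setting, only `backbone` nodes would answer cache discovery queries, which is where the message-overhead saving comes from.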

11.
Mobile Ad hoc NETworks (MANETs) present a constrained communication environment due to fundamental limitations of client resources, insufficient wireless bandwidth, and users' frequent mobility. Caching frequently accessed data in such an environment is a potential technique to improve data access performance and availability. Cooperative caching, which allows the sharing and coordination of cached data among clients, can further exploit the potential of caching techniques. In this paper, we propose a novel scheme, called zone cooperative (ZC) caching, for MANETs. In the ZC scheme, the one-hop neighbours of a mobile client form a cooperative cache zone. On a miss in the local cache, each client first searches for the data in its zone before forwarding the request to the next client along the routing path towards the server. As part of cache management, cache admission control and a value-based replacement policy are developed to improve data accessibility and reduce the local cache miss ratio. An analytical study of ZC based on data popularity, node density, and transmission range is also performed. Simulation experiments show that the ZC caching mechanism achieves significant improvements in cache hit ratio and average query latency in comparison with other caching strategies. Copyright © 2006 John Wiley & Sons, Ltd.
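The ZC lookup order (local cache, then the one-hop zone, then forward towards the server) can be sketched as follows; node names and data structures are illustrative, not from the paper:

```python
def zone_lookup(client, caches, adj, request):
    """Zone-cooperative lookup sketch: check the local cache, then the
    one-hop zone, then report a miss (which would be forwarded along the
    routing path towards the server)."""
    if request in caches[client]:
        return ("local", client)
    for peer in adj[client]:          # one-hop neighbours form the zone
        if request in caches[peer]:
            return ("zone", peer)
    return ("miss", None)             # forward towards the server next

# toy topology and cache contents
adj = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}
caches = {"A": {"x"}, "B": {"y"}, "C": set()}
```

For example, client `A` resolves `"x"` locally, resolves `"y"` from zone member `B`, and misses on `"z"`, which would then travel towards the server.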

12.
In-network caching has always been a core problem in Information-Centric Networking (ICN) research. Traditional caching schemes usually apply a single global cooperation policy across the whole network; however, different network regions have different caching goals, so the performance gains of traditional schemes are limited and their scalability is poor. Moreover, jointly coordinating cache placement and request routing is more effective than caching alone. This paper proposes and evaluates a novel region-oriented cooperative hybrid cache placement and request routing mechanism. The mechanism divides the caching network into a core network and edge networks: the core network uses an off-path hash-based cooperation mechanism, while the edge networks use an on-path fallback cooperation mechanism, and a two-tuple label is created to record cache placement information for use in subsequent request forwarding. Simulation comparisons show that, relative to traditional policies, the proposed cooperative hybrid caching mechanism better balances network resource utilization and user experience, and scales better in large ISPs.
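The off-path hash cooperation in the core network resembles consistent hashing, which deterministically maps each content name to one core router so that both placement and request routing agree on where a copy lives. A generic sketch follows; the virtual-node count and naming are assumptions, and the paper's exact scheme may differ:

```python
import hashlib
from bisect import bisect_right

def _h(key):
    """Stable hash of a string onto a large integer ring."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

def build_ring(nodes, vnodes=32):
    """Each core router owns `vnodes` points on the ring; a content name is
    assigned to the first point clockwise from its own hash."""
    return sorted((_h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes))

def assign(ring, name):
    keys = [k for k, _ in ring]
    idx = bisect_right(keys, _h(name)) % len(ring)  # wrap around the ring
    return ring[idx][1]

ring = build_ring(["core-1", "core-2", "core-3"])
owner = assign(ring, "/videos/clip.mp4")   # the core router caching this name
```

Because the mapping is deterministic, any router can compute `owner` locally and forward requests off-path toward it without flooding, which is the appeal of hash cooperation in the core.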

13.
To address the effective utilization of storage space and the efficient caching of response content in Named Data Networking (NDN), this paper adopts differentiated caching and proposes a cooperative caching algorithm based on the correlation of content request sequences. During content requests, parallel predictive requests for subsequent related data units are issued in advance, increasing the probability that content requests are served nearby. For cache decisions, a two-dimensional differentiated caching strategy combining spatial storage location with cache residence time is proposed: following the trend of content activity, the storage location is advanced hop by hop in the spatial dimension, and the caching time is adjusted dynamically in the temporal dimension, progressively pushing truly popular content towards storage at the network edge. The algorithm reduces content request latency and cache redundancy and improves the cache hit ratio; simulation results verify its effectiveness.

14.
In local loss recovery schemes, a small number of recovery nodes distributed along the transmission paths save incoming packets temporarily in accordance with a specified cache policy and retransmit these packets if they subsequently receive a request message from a downstream receiver. To reduce the recovery latency, the cache policy should ensure that the recovery nodes are always able to satisfy the retransmission requests of the downstream receivers. However, owing to the limited cache size of the recovery nodes and the behavior of the cache policy, this cannot always be achieved, and thus some of the packets must be retransmitted by the sender. Accordingly, this paper develops a new network-coding-based cache policy, designated as network-coding-based FIFO (NCFIFO), which extends the caching time of the packets at the recovery nodes without dropping any of the incoming packets. As a result, the lost packets can be always recovered from the nearest recovery nodes and the recovery latency is significantly reduced. The loss recovery performance of the NCFIFO cache policy is compared with that of existing cache policies by performing a series of simulation experiments using both a uniform error model and a burst error model. The simulation results show that the NCFIFO cache policy not only achieves a better recovery performance than existing cache policies, but also provides a more effective solution for managing a small amount of cache size in environments characterized by a high packet arrival rate. Copyright © 2011 John Wiley & Sons, Ltd.
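One way to extend caching time without dropping packets, in the spirit of NCFIFO, is to fold evicted packets into a coded slot by XOR instead of discarding them: a downstream node that already holds the other folded packets can still recover a lost one from the coded slot. This toy single-slot, fixed-packet-size version is a sketch of the idea, not the paper's exact policy:

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

class NCFIFO:
    """Sketch: when the FIFO buffer is full, the oldest packet is XOR-folded
    into one coded slot rather than dropped. Assumes fixed-size packets and
    a single coded slot; the published policy is more general."""
    def __init__(self, capacity, pkt_len):
        self.capacity = capacity
        self.buf = []                     # plain packets, FIFO order
        self.coded = bytes(pkt_len)       # XOR of all folded packets
        self.folded = []                  # ids folded into `coded`

    def push(self, pkt_id, payload):
        if len(self.buf) == self.capacity:
            old_id, old_payload = self.buf.pop(0)
            self.coded = xor_bytes(self.coded, old_payload)  # fold, don't drop
            self.folded.append(old_id)
        self.buf.append((pkt_id, payload))

    def recover(self, lost_id, known):
        """Recover a folded packet given the payloads of the other folded ones."""
        out = self.coded
        for pid in self.folded:
            if pid != lost_id:
                out = xor_bytes(out, known[pid])
        return out

q = NCFIFO(capacity=2, pkt_len=4)
q.push(1, b"\x01\x01\x01\x01")
q.push(2, b"\x02\x02\x02\x02")
q.push(3, b"\x03\x03\x03\x03")   # buffer full: packet 1 is folded, not dropped
```

After the third push, packet 1 no longer occupies a buffer slot, yet `q.recover(1, {})` still reconstructs it, which is exactly the "extended caching time" effect.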

15.
Under a converged software-defined networking (SDN) and content-centric networking (CCN) architecture, in order to fully exploit the control plane's global awareness of network topology and cache resources and to optimize the use of cache resources across the whole network, a centrally controlled cache decision optimization scheme is proposed. In this scheme, particle swarm optimization (PSO) is applied to make centralized cache decisions for cache resources and content according to node edge degree, node importance, and content popularity, so that content is cached sensibly at different nodes. Simulation results, evaluating the effect of cache size on caching performance, show that the PSO cache decision method achieves a better cache hit ratio and path stretch than the LCE and PROB cache decision policies and clearly reduces the number of cache replacements at caching nodes, achieving overall cache optimization.
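A minimal global-best PSO loop, of the kind that could drive such a centralized cache decision, is sketched below. The sphere objective is a stand-in: the paper's cache-utility function (built from node edge degree, node importance, and content popularity) is not given in the abstract:

```python
import random

def pso(f, dim, n_particles=20, iters=200, seed=1):
    """Minimal global-best particle swarm optimiser (minimisation).
    Inertia and acceleration constants are common textbook defaults."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:          # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:         # update global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# stand-in objective: squared distance from an "ideal" cache allocation at 0
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```

In the paper's setting, each particle would encode a candidate content-to-node placement and `f` would score it by hit ratio and path stretch; the swarm mechanics are unchanged.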

16.
Edge caching in heterogeneous networks is one of the reliable techniques for relieving the excessive load on traditional backhaul links, but existing caching policies often fail to match the popularity of the requested data. To solve this problem, this paper proposes a popularity-matched edge caching policy (PMCP), which matches file caching probabilities to the popularity parameter in order to maximize communication reliability and reduce backhaul bandwidth pressure. The planar positions of base stations are modeled by stochastic geometry, and file request probabilities are modeled by a Zipf distribution. Monte Carlo simulation results show that the caching mechanism effectively reduces backhaul bandwidth pressure and that the proposed policy is more reliable than the comparison policies.
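The two modelling ingredients named above, Zipf-distributed request probabilities and popularity-matched caching probabilities, can be sketched as follows. The proportional matching rule is an assumption for illustration, since the abstract does not give PMCP's exact mapping:

```python
def zipf_popularity(n, alpha):
    """Request probability of the i-th most popular of n files (Zipf law)."""
    norm = sum(1.0 / (k ** alpha) for k in range(1, n + 1))
    return [(1.0 / ((i + 1) ** alpha)) / norm for i in range(n)]

def matched_cache_probs(pop, cache_size):
    """Popularity-matched placement sketch: caching probability proportional
    to request probability, scaled so the expected number of cached files is
    about the cache size (capped at 1, so the expectation is approximate).
    This rule is an assumption; the paper derives its own probabilities."""
    scale = cache_size / sum(pop)     # sum(pop) == 1, so scale == cache_size
    return [min(1.0, p * scale) for p in pop]

pop = zipf_popularity(n=50, alpha=0.8)
probs = matched_cache_probs(pop, cache_size=10)   # hottest files cached w.p. 1
```

With a skewed Zipf exponent, the head of the catalogue is cached deterministically and the tail only probabilistically, which is the popularity-matching behaviour the policy aims for.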

17.
A cooperative caching and routing mechanism based on local request similarity in Named Data Networking
To achieve efficient caching and utilization of response content in Named Data Networking (NDN), this paper proposes a cooperative caching and routing mechanism based on the local similarity of content request distributions. For cache decisions, redundancy elimination along the vertical request path is effectively combined with content placement within a horizontal local region. In the vertical direction, a path caching strategy based on the maximum content activity factor determines the hottest request region along the forwarding path; in the horizontal direction, consistent-hash cooperative caching is adopted to store response content at designated nodes within the region. For routing, local node caches are incorporated into forwarding decisions, and local cache lookups are performed dynamically according to content activity level, increasing the probability that content requests are answered nearby. The mechanism reduces content request latency and cache redundancy and improves the cache hit ratio, achieving a large reduction in content request overhead at a small extra cost; simulation results verify its effectiveness.

18.
罗熹  安莹  王建新  刘耀 《电子与信息学报》2015,37(11):2790-2794
Content-centric networking (CCN) is a new network architecture proposed to adapt to the shift in future network communication patterns and to provide native support for scalable and efficient content retrieval; its content caching mechanism is one of the key research problems. Existing mechanisms tend to concentrate cache placement on a few nodes, producing a severely uneven cache load distribution that greatly reduces network resource utilization and overall caching performance. This paper proposes a cooperative caching mechanism based on cache migration. When selecting caching nodes, node centrality is considered so that content is cached at the most important locations as far as possible. When cache pressure becomes too high, a suitable neighbor node is chosen, based on available cache space, cache replacement rate, and the stability of network connections, to take over migrated cache content, making full use of neighbor resources for load sharing. Simulation results show that the mechanism effectively improves the balance of cache load across nodes, increases the cache hit ratio and cache resource utilization, and reduces the average access cost.

19.
Aiming at reducing the backhaul link load in fog radio access networks (F-RAN) with edge caching, a multi-tier cooperative caching scheme in the F-RAN is proposed to further reduce the backhaul traffic load. In particular, by considering the network topology, content popularity prediction, and link capacities, the optimization problem is decomposed into knapsack subproblems across the tiers, and effective greedy algorithms are proposed to solve the corresponding subproblems. Simulation results show that the proposed multi-tier cooperative caching scheme effectively reduces backhaul traffic while achieving a relatively high cache hit rate.
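Each per-tier subproblem is a knapsack, which greedy algorithms commonly attack by value density. A generic sketch, where a file's value stands for the backhaul traffic its cached copy would save (item sizes and values are illustrative, not from the paper):

```python
def greedy_knapsack(items, capacity):
    """0/1 knapsack by value density: sort by value/size and pack greedily.
    value ~ predicted backhaul traffic saved; weight ~ file size."""
    order = sorted(items, key=lambda it: it["value"] / it["size"], reverse=True)
    chosen, used = [], 0
    for it in order:
        if used + it["size"] <= capacity:   # fits in the tier's cache budget
            chosen.append(it["name"])
            used += it["size"]
    return chosen, used

items = [
    {"name": "f1", "size": 4, "value": 10},  # density 2.5
    {"name": "f2", "size": 3, "value": 9},   # density 3.0
    {"name": "f3", "size": 5, "value": 5},   # density 1.0
]
chosen, used = greedy_knapsack(items, capacity=8)  # picks f2 then f1
```

Density-greedy packing is not optimal for 0/1 knapsack in general, but it is fast and usually close, which is why such greedy solvers are a natural fit for per-tier cache provisioning.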

20.
Power consumption is an increasingly pressing problem in modern processor design. Since on-chip caches usually consume a significant amount of power, they are one of the most attractive targets for power reduction. This paper presents a two-level filter scheme, consisting of L1 and L2 filters, to reduce the power consumption of the on-chip cache. The main idea of the proposed scheme is motivated by the substantial number of unnecessary activities in conventional cache architectures. We use a single block buffer as the L1 filter to eliminate unnecessary cache accesses. In the L2 filter, we then propose a new sentry-tag architecture to further filter out unnecessary way activities on an L1 filter miss. We use SimpleScalar to simulate the SPEC2000 benchmarks and perform HSPICE simulations to evaluate the proposed architecture. Experimental results show that the two-level filter scheme effectively reduces cache power consumption by eliminating most unnecessary cache activities, while the compromise in system performance is negligible. Compared to a conventional instruction cache (32 kB, two-way) implemented with only the L1 filter, the use of the two-level filter yields roughly a 30% reduction in total cache power consumption; similarly, compared to a conventional data cache (32 kB, four-way) implemented with only the L1 filter, the total cache power reduction is approximately 46%.
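The L1 filter's effect is easy to see in a toy model: a single block buffer in front of the cache absorbs consecutive accesses to the same block, so only block-crossing accesses activate the cache's tag and data arrays. The energy accounting below is illustrative, not the paper's HSPICE model:

```python
class BlockBufferedCache:
    """Sketch of the L1 filter: one block buffer holds the most recently
    fetched line; hits in the buffer skip the cache lookup entirely (and
    hence its tag/data-array energy). The counter is a stand-in for energy."""
    def __init__(self):
        self.buf_tag = None
        self.cache_accesses = 0          # proxy for energy spent in the cache

    def access(self, block_addr):
        if block_addr == self.buf_tag:
            return "buffer-hit"          # filtered: no cache-array activity
        self.cache_accesses += 1         # fall through to the real cache
        self.buf_tag = block_addr        # refill the block buffer
        return "cache-access"

c = BlockBufferedCache()
# sequential code has strong spatial locality, so repeated same-block accesses
trace = [0x10, 0x10, 0x10, 0x24, 0x24, 0x10]
results = [c.access(a) for a in trace]   # half the accesses never reach the cache
```

On this six-access trace, three accesses are filtered by the buffer, so the cache arrays are activated only three times; the paper's L2 sentry-tag filter then trims way activity on the remaining misses.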
