1.
To mitigate the frequent storage reads and wasted cache space caused by the dense graph data generated by trending topics on social networks, and guided by the evolutionary pattern of topic emergence and decay, a cache replacement algorithm based on topic heat evolution acceleration (THEA-CR) is proposed. The algorithm first partitions social network data into topic-cluster entities and identifies anchor targets. It then computes the acceleration of topic heat evolution to assess the priority of hot data. Finally, a double-queue cache replacement strategy is designed that replaces and updates cache space according to topic attention and access frequency. Extensive comparative experiments against classic cache replacement algorithms on a Sina Weibo dataset verify the feasibility and effectiveness of the proposed algorithm: across different graph-query operations on dense social-network graph data, THEA-CR improves the cache hit ratio by about 31.4% on average and shortens query response time by about 27.1%.
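The two ingredients of the abstract above can be sketched roughly as follows. This is a hedged illustration only: the "acceleration" is taken as the second difference of a topic's recent heat samples, and the double-queue policy (hot queue for accelerating topics, cold queue for the rest, evicting cold entries first in LRU order) is an assumption, not the paper's exact design.

```python
from collections import OrderedDict

def heat_acceleration(heat_series):
    """Second difference of the three most recent heat samples:
    positive means the topic is heating up faster, negative means
    it is cooling or decelerating."""
    if len(heat_series) < 3:
        return 0.0
    h = heat_series[-3:]
    return (h[2] - h[1]) - (h[1] - h[0])

class DualQueueCache:
    """Hot queue for topics with positive heat acceleration, cold queue
    for the rest; evictions come from the cold queue first, in LRU order."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.hot = OrderedDict()
        self.cold = OrderedDict()

    def put(self, key, heat_series):
        accel = heat_acceleration(heat_series)
        # Remove any stale copy before re-inserting in the right queue.
        self.hot.pop(key, None)
        self.cold.pop(key, None)
        target = self.hot if accel > 0 else self.cold
        target[key] = heat_series
        while len(self.hot) + len(self.cold) > self.capacity:
            victim_queue = self.cold if self.cold else self.hot
            victim_queue.popitem(last=False)   # evict least recently inserted
```

A topic whose heat series is still accelerating survives pressure on the cache even when colder topics were inserted more recently.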
2.
Existing Web caches are mostly implemented with traditional in-memory caching algorithms; because Web requests are heterogeneous, these traditional replacement algorithms do not work effectively in the Web environment. This work examines the basis for Web cache replacement decisions and analyzes the shortcomings of earlier algorithms. Taking into account a Web document's size, retrieval cost, access frequency, degree of user interest, and time of most recent access, it introduces the concept of Web cache object roles and builds a new, high-precision object-role-based Web cache replacement algorithm (ORB). Simulations on NASA and DEC proxy server traces compare the algorithm with LRU, LFU, SIZE, and Hybrid; the results demonstrate that the ORB algorithm…
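A multi-factor cache value function of the kind this abstract lists (size, retrieval cost, access frequency, interest, recency) might look like the sketch below. The particular combination and weights are illustrative assumptions; the abstract is truncated before ORB's actual formula is given.

```python
def object_value(size, cost, freq, interest, age, w=(1.0, 1.0, 1.0, 1.0)):
    """Higher value = more worth keeping in cache.
    `age` is time since last access; larger and staler objects score lower.
    The weights `w` = (w_cost, w_freq, w_interest, w_age) are illustrative."""
    w_cost, w_freq, w_int, w_age = w
    return (w_cost * cost + w_freq * freq + w_int * interest) / (
        size * (1.0 + w_age * age)
    )

def choose_victim(cache_meta):
    """Evict the object with the smallest value.
    `cache_meta` maps keys to (size, cost, freq, interest, age) tuples."""
    return min(cache_meta, key=lambda k: object_value(*cache_meta[k]))
```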
3.
Web proxy caching can, to some extent, alleviate user access latency and network congestion, and a proxy cache's replacement policy directly affects its hit ratio and thus the responsiveness of network requests. To this end, a fixed-size circular sliding window is used to extract multiple features from Web log data, and a Gaussian mixture model clusters the log data to predict which Web objects are likely to be accessed again within the window. Combined with the least recently used (LRU) algorithm, this yields a new GMM-based Web proxy cache replacement strategy. Experimental results show that, compared with the traditional LRU, LFU, FIFO, and GDSF replacement policies, the strategy effectively improves the proxy cache's request hit ratio and byte hit ratio.
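The windowed-feature extraction and prediction-assisted LRU step can be sketched as below. The Gaussian mixture clustering itself is replaced here by a pluggable predicate (`likely_reaccessed`), since the paper's feature set and model parameters are not given in the abstract; everything in this sketch is illustrative.

```python
from collections import Counter, OrderedDict

def window_features(log, window_size):
    """Per-object features over the last `window_size` requests:
    (access count in window, distance of most recent access from window end).
    These would feed the clustering model in the real strategy."""
    window = log[-window_size:]
    counts = Counter(window)
    last_pos = {obj: i for i, obj in enumerate(window)}
    return {obj: (counts[obj], len(window) - 1 - last_pos[obj]) for obj in counts}

def evict_lru_unlikely(cache, likely_reaccessed):
    """Prefer evicting the least-recently-used object predicted NOT to be
    re-accessed within the window; fall back to plain LRU if every cached
    object looks likely to return."""
    for key in cache:              # OrderedDict iterates LRU-first here
        if not likely_reaccessed(key):
            return key
    return next(iter(cache))
```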
4.
To address the GDSF replacement algorithm's lack of access-frequency prediction, a collaborative-filtering-based GDSF cache replacement algorithm (GDSF-CF) is proposed. The algorithm considers the similarity between Web objects and users' access time intervals, applies collaborative filtering to generate predicted access frequencies for Web objects, and improves the GDSF objective function with a Zipf's-law parameter. When a replacement is needed, the objective function computes a cache value for every Web object in the cache, and the object with the smallest cache value is replaced. Simulation results show substantial improvements in both hit ratio (HR) and byte hit ratio (BHR).
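For reference, the baseline GDSF policy that GDSF-CF extends can be sketched as follows: each object gets priority H = L + freq x cost / size, where L is an aging value set to the priority of the last evicted object. The Zipf-adjusted objective and the collaborative-filtering frequency predictor of GDSF-CF are not reproduced; this is only an illustration of the baseline.

```python
class GDSFCache:
    """Greedy-Dual-Size-Frequency sketch: evict the object with the lowest
    priority H = L + freq * cost / size, where L ages the whole cache."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.L = 0.0                       # inflation / aging value
        self.entries = {}                  # key -> (size, cost, freq, priority)

    def _priority(self, size, cost, freq):
        return self.L + freq * cost / size

    def access(self, key, size, cost=1.0):
        """Record an access; return True on hit, False on miss."""
        if key in self.entries:
            s, c, f, _ = self.entries[key]
            f += 1
            self.entries[key] = (s, c, f, self._priority(s, c, f))
            return True
        # Miss: evict lowest-priority objects until the new one fits.
        while self.used + size > self.capacity and self.entries:
            victim = min(self.entries, key=lambda k: self.entries[k][3])
            vsize, _, _, vprio = self.entries[victim]
            self.L = vprio                 # age the cache on eviction
            self.used -= vsize
            del self.entries[victim]
        if size <= self.capacity:
            self.entries[key] = (size, cost, 1, self._priority(size, cost, 1))
            self.used += size
        return False
```

GDSF-CF would replace the raw `freq` counter above with a frequency predicted via collaborative filtering over similar objects.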
5.
6.
7.
A game particle-field method for parallel allocation of cache resources in content distribution networks (cited by 1: 0 self-citations, 1 other)
This paper applies a game particle-field method to the cache allocation problem in content distribution networks (CDNs). By building a corresponding mathematical model, the two-stage Web-server/proxy-server cache resource allocation problem is mapped onto the motion of particles in two dual force fields; all particles move according to the rules defined in the model until a stable state is reached, and that stable state is then mapped back to a solution of the allocation problem. The proposed game generalized particle-field (G-PF) replacement method for CDNs overcomes the inability of common replacement algorithms such as MFU, LFU, and LRU to cooperate across caches, developing them into a cooperative, game-theoretic replacement algorithm. Game theory is used to give a simple proof that the obtained solution is globally Pareto-optimal. As a result, the G-PF replacement algorithm can approach the theoretical Optimal replacement algorithm and performs better than the cooperative replacement algorithm proposed by Korupolu et al.
8.
9.
10.
11.
Implementing proxy caching based on analysis of user access behavior (cited by 3: 0 self-citations, 3 others)
This paper proposes Site-Graph, a model describing the structure of the WWW, and analyzes user access behavior on top of it, leading to URAC, a proxy caching system that takes real access-request patterns into account. The paper describes URAC's working principles in detail, discusses the main problems that proxy cache design must solve, namely hit ratio, consistency, and the replacement algorithm, and presents a performance analysis, concluding that URAC, which aims to raise the hit ratio and reduce access latency, is a more practical proxy caching system.
12.
In this paper, we propose a novel approach to enhancing web proxy caching, an approach that integrates content management with performance tuning techniques. We first develop a hierarchical model for management of web data, which consists of physical pages, logical pages and topics corresponding to different abstraction levels. Content management based on this model enables active utilization of the cached web contents. By defining priority on each abstraction level, the cache manager can make replacement decisions on topics, logical pages, and physical pages hierarchically. As a result, a cache can always keep the most relevant, popular, and high-quality content. To verify the proposed approach, we have designed a content-aware replacement algorithm, LRU-SP+. We evaluate the algorithm through preliminary experiments. The results show that content management can achieve 30% improvement of caching performance in terms of hit ratios and profit ratios (considering significance of topics) compared to content-blind schemes.
Received 19 March 2001 / Revised 17 April 2001 / Accepted in revised form 25 May 2001
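The hierarchical replacement decision described above (topic, then logical page, then physical page) can be sketched minimally as below. The priority definitions and the nesting of levels are illustrative assumptions, not LRU-SP+ itself.

```python
def pick_victim(topics):
    """Hierarchical eviction sketch: choose the lowest-priority topic,
    then the lowest-priority page within it.

    `topics` maps topic -> {"priority": float, "pages": {page: priority}}.
    A real implementation would insert a logical-page level in between."""
    topic = min(topics, key=lambda t: topics[t]["priority"])
    pages = topics[topic]["pages"]
    page = min(pages, key=pages.get)
    return topic, page
```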
13.
In traditional proxy caches, any visited page from any Web server is cached independently, ignoring connections between pages, and users still have to frequently visit indexing pages just to reach useful informative ones, which causes significant waste of caching space and unnecessary Web traffic. To solve this problem, this paper introduces a site graph model to describe the WWW, and a site-based replacement strategy is built on it. The concept of "access frequency" is developed for evaluating whether a Web page is worth keeping in caching space. On the basis of the user's access history, auxiliary navigation information is provided to help the user reach target pages more quickly. Performance test results have shown that the proposed proxy cache system can achieve a higher hit ratio than traditional ones and can reduce the user's access latency effectively.
14.
Athena Vakali, Distributed and Parallel Databases, 2002, 11(1): 93-116
Web caching has been proposed as an effective solution to the problems of network traffic and congestion, Web object access, and Web load balancing. This paper presents a model for optimizing Web cache content by applying either a genetic algorithm or an evolutionary programming scheme for Web cache content replacement. Three policies are proposed for each of the genetic algorithm and the evolutionary programming techniques, in relation to object staleness factors and retrieval rates. A simulation model is developed and long-term trace-driven simulation is used to experiment on the proposed techniques. The results indicate that all evolutionary techniques are beneficial to cache replacement, compared to the conventional replacement applied in most Web cache servers. Under an appropriate objective function, the genetic algorithm has been proven to be the best of all approaches with respect to cache hit and byte hit ratios.
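A toy version of the genetic-algorithm idea can be sketched as follows: treat the cache content as a bit string (object in or out) and evolve it toward higher expected value under the capacity constraint. The fitness function, operators, and rates here are assumptions for illustration, not the paper's three policies.

```python
import random

def fitness(bits, sizes, values, capacity):
    """Total value of selected objects; infeasible (oversized) content
    mixes score zero."""
    size = sum(s for b, s in zip(bits, sizes) if b)
    if size > capacity:
        return 0.0
    return float(sum(v for b, v in zip(bits, values) if b))

def evolve(sizes, values, capacity, pop_size=30, generations=60, seed=0):
    """Simple elitist GA: keep the top half, refill with one-point
    crossover plus a single bit-flip mutation per child."""
    rng = random.Random(seed)
    n = len(sizes)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda b: fitness(b, sizes, values, capacity), reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]
            child[rng.randrange(n)] ^= 1   # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda b: fitness(b, sizes, values, capacity))
```

The paper's policies additionally fold object staleness and retrieval rates into the objective; here the per-object `values` stand in for that combined score.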
15.
16.
Web-log mining for predictive Web caching (cited by 3: 0 self-citations, 3 others)
Caching is a well-known strategy for improving the performance of Web-based systems. The heart of a caching system is its page replacement policy, which selects the pages to be replaced in a cache when a request arrives. In this paper, we present a Web-log mining method for caching Web objects and use this algorithm to enhance the performance of Web caching systems. In our approach, we develop an n-gram-based prediction algorithm that can predict future Web requests. The prediction model is then used to extend the well-known GDSF caching policy. We empirically show that the system performance is improved using the predictive-caching approach.
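An n-gram next-request predictor of the kind described above can be sketched from session logs as follows. The exact model, training procedure, and its integration with GDSF are not specified in the abstract, so everything here is an illustrative assumption.

```python
from collections import defaultdict

def build_ngram_model(sessions, n=2):
    """Count how often each request follows each length-n request context."""
    model = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for i in range(len(session) - n):
            context = tuple(session[i:i + n])
            model[context][session[i + n]] += 1
    return model

def predict_next(model, recent, n=2):
    """Most likely next request given the n most recent ones, or None if
    the context was never observed."""
    followers = model.get(tuple(recent[-n:]))
    if not followers:
        return None
    return max(followers, key=followers.get)
```

A predictive extension of GDSF could, for example, boost the priority of objects the model expects to be requested soon.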
17.
In-network caching in Named Data Networking (NDN) based Internet of Things (IoT) plays a central role in efficient data dissemination. Data cached throughout the network may quickly become obsolete, as they are transient and frequently updated by their producers. As such, NDN-based IoT networks impose stringent requirements in terms of data freshness. While various cache replacement policies have been proposed, none has considered the cache freshness requirement. In this paper, we introduce a novel cache replacement policy called Least Fresh First (LFF) that integrates the cache freshness requirement. LFF evicts invalid cached contents based on time-series forecasting of sensors' future events. Extensive simulations are performed to evaluate the performance of LFF and to compare it to well-known cache replacement policies in ICN-based IoT networks. The obtained results show that LFF significantly improves data freshness compared to other policies, while enhancing the server hit reduction ratio, the hop reduction ratio, and the response latency.
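The core eviction rule of LFF can be sketched minimally as below: evict the cached item whose producer is predicted to refresh it soonest, i.e. whose remaining freshness is smallest. The time-series forecaster LFF uses for producer events is stubbed out here as a plain dict of predicted next-update times; this is an assumption for illustration.

```python
def lff_victim(cache_keys, predicted_next_update, now):
    """Return the key with the least remaining freshness.
    `predicted_next_update[k]` is the forecast time of the producer's next
    update of content k; a value below `now` means the content is already
    predicted stale."""
    return min(cache_keys, key=lambda k: predicted_next_update[k] - now)
```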
18.
Abstract. Caching web pages at proxies and in web servers' memories can greatly enhance performance. Proxy caching is known to reduce network load, and both proxy and server caching can significantly decrease latency. Web caching problems have different properties than traditional operating systems caching, and cache replacement can benefit by recognizing and exploiting these differences. We address two aspects of the predictability of traffic patterns: the overall load experienced by large proxy and web servers, and the distinct access patterns of individual pages.
We formalize the notion of "cache load" under various replacement policies, including LRU and LFU, and demonstrate that the trace of a large proxy server exhibits regular load. Predictable load allows for improved design, analysis, and experimental evaluation of replacement policies. We provide a simple and (near) optimal replacement policy when each page request has an associated distribution function on the next request time of the page. Without the predictable load assumption, no such online policy is possible, and it is known that even obtaining an offline optimum is hard. For experiments, predictable load enables comparing and evaluating cache replacement policies using partial traces, containing requests made to only a subset of the pages.
Our results are based on considering a simpler caching model which we call the interval caching model. We relate traditional and interval caching policies under predictable load, and derive (near-)optimal replacement policies from their optimal interval caching counterparts.
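The Belady-style rule underlying such next-request-time policies can be sketched as follows: given a per-page estimate of when each page will next be requested, evict the page expected to be needed furthest in the future. The paper's distributional treatment and interval-caching machinery are not reproduced; this is only the point estimate version.

```python
def evict_furthest(cached_pages, expected_next_request):
    """Return the cached page whose estimated next request time is furthest
    away, mirroring Belady's offline-optimal rule with predictions standing
    in for perfect future knowledge."""
    return max(cached_pages, key=lambda p: expected_next_request[p])
```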