Similar Documents
19 similar documents found
1.
To ease the frequent storage reads and wasted cache space caused by the dense graph data that hot topics generate in social networks, and exploiting the evolutionary pattern by which topics emerge and die out, a cache replacement algorithm based on topic heat evolution acceleration (THEA-CR) is proposed. The algorithm first partitions the social network data into topic-cluster entities and identifies anchor targets; it then computes the topic heat evolution acceleration to judge the priority of hot data; finally, a dual-queue cache replacement strategy is designed that replaces and updates cache space according to topic attention and access frequency. Extensive comparison experiments against classical cache replacement algorithms on a Sina Weibo dataset verify the feasibility and effectiveness of the proposed algorithm. The results show that THEA-CR raises the cache hit ratio by about 31.4% on average across different graph query operations on dense social network graph data and shortens query response time by about 27.1%.
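As a rough illustration of the dual-queue idea in this abstract, here is a minimal Python sketch; the heat-sampling scheme, the acceleration rule (a second difference of the heat series), and the queue handling are assumptions made for illustration, not the paper's THEA-CR design:

```python
from collections import OrderedDict

def heat_acceleration(heat):
    """Second difference of a topic's sampled heat: positive values suggest a
    topic whose attention is still accelerating, negative values a decaying one."""
    if len(heat) < 3:
        return 0.0
    return (heat[-1] - heat[-2]) - (heat[-2] - heat[-3])

class DualQueueCache:
    """Hot queue for topics with rising heat acceleration, warm LRU queue for the rest."""
    def __init__(self, hot_size, warm_size):
        self.hot_size, self.warm_size = hot_size, warm_size
        self.hot = {}              # topic_id -> (value, heat samples)
        self.warm = OrderedDict()  # topic_id -> value, kept in LRU order

    def put(self, topic_id, value, heat):
        if heat_acceleration(heat) > 0 and len(self.hot) < self.hot_size:
            self.hot[topic_id] = (value, heat)
            return
        self.warm[topic_id] = value
        self.warm.move_to_end(topic_id)
        if len(self.warm) > self.warm_size:
            self.warm.popitem(last=False)  # evict the least recently used entry

    def get(self, topic_id):
        if topic_id in self.hot:
            return self.hot[topic_id][0]
        if topic_id in self.warm:
            self.warm.move_to_end(topic_id)
            return self.warm[topic_id]
        return None
```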

2.
Existing Web caches are mostly implemented with traditional in-memory cache algorithms, but because Web request workloads are heterogeneous, those traditional replacement algorithms do not work well in a Web environment. This paper examines the grounds on which Web cache replacement decisions are made, analyzes the shortcomings of earlier replacement algorithms, and, taking into account document size, access cost, access frequency, degree of access interest, and the time of the most recent access, introduces the concept of the role of a cached Web object and builds a new, more precise role-based Web cache replacement algorithm (ORB). Using proxy server traces from NASA and DEC, the algorithm is compared in simulation with LRU, LFU, SIZE, and Hybrid; the results show that the ORB algorithm performs better than the compared algorithms.

3.
A Web proxy cache can to some extent mitigate user access latency and network congestion, and the proxy's cache replacement policy directly determines the cache hit ratio and hence the responsiveness of network requests. This paper extracts multiple features from Web log data through a fixed-size circular sliding window, applies a Gaussian mixture model to cluster the log data and predict which Web objects are likely to be accessed again within the window, and, combined with the Least Recently Used (LRU) algorithm, proposes a new Gaussian-mixture-model-based replacement policy for Web proxy caches. Experimental results show that, compared with the traditional replacement policies LRU, LFU, FIFO, and GDSF, the proposed policy effectively improves the request hit ratio and byte hit ratio of the Web proxy cache.
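A minimal sketch of the clustering step using scikit-learn's GaussianMixture; the feature set (recency, frequency, size) and the rule for flagging likely re-accessed objects are assumptions for illustration, not the paper's exact design:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def likely_reaccessed(features, n_components=3):
    """features: one row per cached object, e.g. [recency, frequency, size],
    extracted from the current sliding window of the Web log.
    Returns a boolean mask marking objects in the cluster with the highest
    mean access frequency (assumed here to be feature column 1)."""
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(features)
    labels = gmm.predict(features)
    hot_cluster = int(np.argmax(gmm.means_[:, 1]))
    return labels == hot_cluster
```

Objects flagged this way could, for example, be protected from LRU eviction until the window slides forward.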

4.
To address the GDSF replacement algorithm's lack of any prediction of access frequency, a GDSF cache replacement algorithm based on collaborative filtering (GDSF-CF) is proposed. The algorithm considers the similarity between Web objects and the intervals between user accesses, uses collaborative filtering to generate a predicted access frequency for each Web object, and improves the GDSF objective function with a Zipf's-law parameter. When a replacement is needed, the objective function is used to compute the cache value of every Web object in the cache, and the object with the smallest cache value is evicted. Simulation results show that the algorithm noticeably improves both the hit ratio (HR) and the byte hit ratio (BHR).
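For reference, the classic GDSF priority is H(p) = L + F(p)·C(p)/S(p), where L is an aging term, F the access frequency, C the retrieval cost, and S the object size. A hedged sketch of the modification described above, with the predicted frequency plugged in and an assumed placement of the Zipf parameter (the paper's exact objective function may differ, and the object attributes are illustrative):

```python
def gdsf_cf_priority(L, predicted_freq, cost, size, alpha=0.8):
    """GDSF-style cache value: aging term plus a collaborative-filtering
    predicted frequency, scaled by an assumed Zipf exponent alpha, times
    retrieval cost per byte."""
    return L + (predicted_freq ** alpha) * cost / size

def evict(cache, L):
    """Evict the object with the smallest cache value and advance the aging
    term L to that value, as in standard GDSF.
    cache: iterable of objects with predicted_freq, cost and size attributes."""
    victim = min(cache, key=lambda o: gdsf_cf_priority(L, o.predicted_freq, o.cost, o.size))
    return victim, gdsf_cf_priority(L, victim.predicted_freq, victim.cost, victim.size)
```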

5.
Research on the Cacheability of Web Objects and an Acceleration Scheme
石磊  卫琳  古志民  石云 《计算机工程》2005,31(18):74-75,89
This paper discusses the general working principle of Web caching and describes how to judge whether a Web object is cacheable. To reduce the impact of non-cacheable objects on system efficiency, an acceleration scheme for Web caching is proposed: requests for non-cacheable Web objects are sent directly to the corresponding Web server instead of being forwarded through the proxy, which improves system efficiency to a certain extent.

6.
Streaming-media cache proxy servers located between the Internet backbone and the same access network can cooperate with one another to raise the cache hit ratio and keep the load balanced. This paper proposes a tightly coupled cooperation mechanism in which multiple proxy servers share cache space, and gives the cache replacement strategy and load-balancing algorithm for the cooperating proxies. NS2 simulations verify that the mechanism lets the system maintain better performance.

7.
A Game Particle-Field Method for Parallel Allocation of Cache Resources in Content Distribution Networks
This paper applies a game particle-field method to the cache allocation problem in content distribution networks (CDN). By building the corresponding mathematical model, the two-stage Web server / proxy server cache resource allocation problem is mapped onto the motion of particles in two dual force fields; all particles in the fields move according to the rules defined in the model until a stable state is reached, and that stable state is then mapped back to a solution of the Web server / proxy server cache allocation problem. The proposed game generalized particle-field (G-PF) replacement method for CDNs overcomes the inability of the commonly used MFU, LFU, and LRU replacement algorithms to cooperate across caches, turning replacement into a cooperative game, and game theory is used to give a simple proof that the resulting solution is a global Pareto optimum. The G-PF replacement algorithm can therefore approach the theoretically optimal replacement algorithm and performs better than the cooperative replacement algorithm proposed by Korupolu et al.

8.
For a website receiving tens of thousands of visits a day or more, browsing speed can become a system bottleneck. Optimizing the content publishing system and converting pages that are not updated in real time into static pages can improve browsing speed markedly. This paper proposes a three-level caching mechanism built on Java and XML technology on top of the MVC2 architecture, caching static pages, components, and data objects separately and refreshing the cache according to hit counts; this effectively reduces the communication between the tiers of a multi-tier architecture and improves Web response performance.

9.
Building on the LRU algorithm, an improved cooperative Web cache replacement algorithm is proposed. The algorithm applies different storage policies to documents of different sizes: it stores more copies of small documents within the cache group to raise the local hit ratio, and fewer copies of large documents to save space across the whole cache group. Simulation results show that the algorithm achieves good performance.
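A minimal sketch of the size-dependent replication rule implied above; the threshold and replica counts are illustrative values, not taken from the paper:

```python
def target_copies(doc_size, small_threshold=16 * 1024, group_size=4):
    """Keep several copies of a small document across the cooperating caches
    (higher local hit ratio), but only one copy of a large document
    (less space consumed by the cache group as a whole)."""
    return min(3, group_size) if doc_size <= small_threshold else 1
```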

10.
Research and Implementation of Response-Performance Optimization for a High-Traffic Information Platform
When the traffic of an online information platform surges to the system's designed performance limit, access tends to stall. Based on analysis and experiments, an effective solution is proposed: partially converting the Web system's pages to static pages, caching list pages dynamically, and optimizing the page-update strategy. A new dynamic Web cache replacement policy, LFU*, is proposed to improve the Web server's capability under peak load, thereby optimizing the performance of the online information service platform. Performance tests demonstrate the effectiveness of the scheme.

11.
Implementing a Proxy Cache Based on Analysis of User Access Behavior
This paper proposes Site-Graph, a model describing the structure of the WWW, and analyzes user access behavior on top of it, leading to URAC, a proxy caching system that takes actual access request patterns into account. The paper describes how URAC works in detail, discusses the main problems a proxy cache design has to solve, namely hit ratio, consistency, and the replacement algorithm, and gives a performance analysis, concluding that URAC, which aims at a higher hit ratio and lower access latency, is a more practical proxy caching system.

12.
In this paper, we propose a novel approach to enhancing web proxy caching, an approach that integrates content management with performance tuning techniques. We first develop a hierarchical model for management of web data, which consists of physical pages, logical pages and topics corresponding to different abstraction levels. Content management based on this model enables active utilization of the cached web contents. By defining priority on each abstraction level, the cache manager can make replacement decisions on topics, logical pages, and physical pages hierarchically. As a result, a cache can always keep the most relevant, popular, and high-quality content. To verify the proposed approach, we have designed a content-aware replacement algorithm, LRU-SP+, and evaluated it through preliminary experiments. The results show that content management can achieve a 30% improvement in caching performance in terms of hit ratios and profit ratios (considering significance of topics) compared to content-blind schemes. Received 19 March 2001 / Revised 17 April 2001 / Accepted in revised form 25 May 2001
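A minimal sketch of the hierarchical eviction walk described above (illustrative data structure, not the authors' LRU-SP+ implementation): priorities are defined per abstraction level, and the victim is found by always descending into the lowest-priority child.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """A topic, logical page, or physical page; lower priority = less worth keeping."""
    name: str
    priority: float
    children: List["Node"] = field(default_factory=list)

def pick_victim(topics: List[Node]) -> Node:
    """Descend topic -> logical page -> physical page along the lowest-priority
    branch and return the physical page to evict."""
    node = min(topics, key=lambda n: n.priority)
    while node.children:
        node = min(node.children, key=lambda n: n.priority)
    return node
```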

13.
A site-based proxy cache
In traditional proxy caches, every visited page from any Web server is cached independently, ignoring the connections between pages, and users still have to visit indexing pages frequently just to reach the genuinely informative ones, which wastes caching space and generates unnecessary Web traffic. To solve this problem, this paper introduces a site graph model to describe the WWW and builds a site-based replacement strategy on top of it. The concept of "access frequency" is developed for evaluating whether a Web page is worth keeping in the caching space. On the basis of the user's access history, auxiliary navigation information is provided to help the user reach target pages more quickly. Performance tests have shown that the proposed proxy cache system achieves a higher hit ratio than traditional ones and effectively reduces user access latency.

14.
Web caching has been proposed as an effective solution to the problems of network traffic and congestion, Web object access and Web load balancing. This paper presents a model for optimizing Web cache content by applying either a genetic algorithm or an evolutionary programming scheme for Web cache content replacement. Three policies are proposed for each of the genetic algorithm and the evolutionary programming techniques, in relation to object staleness factors and retrieval rates. A simulation model is developed and long-term trace-driven simulation is used to experiment on the proposed techniques. The results indicate that all evolutionary techniques are beneficial to cache replacement, compared to the conventional replacement applied in most Web cache servers. Under an appropriate objective function the genetic algorithm has proven to be the best of all approaches with respect to cache hit and byte hit ratios.

15.
An Effective Web Proxy Cache Replacement Algorithm
A well-designed Web cache replacement policy lets network resources be used to best effect. This paper designs a fairly efficient Web cache replacement policy, LFRU, aimed at fetching network resources in a better way and at improving the performance and quality of service of Web caching. Experimental results show that the policy achieves high document hit ratios and byte hit ratios.

16.
Web-log mining for predictive Web caching
Caching is a well-known strategy for improving the performance of Web-based systems. The heart of a caching system is its page replacement policy, which selects the pages to be replaced in a cache when a request arrives. In this paper, we present a Web-log mining method for caching Web objects and use this algorithm to enhance the performance of Web caching systems. In our approach, we develop an n-gram-based prediction algorithm that can predict future Web requests. The prediction model is then used to extend the well-known GDSF caching policy. We empirically show that the system performance is improved using the predictive-caching approach.
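A minimal sketch of an order-n next-request predictor of the kind described above (a plain successor-count model; the paper's exact n-gram construction and its coupling to GDSF are not reproduced here):

```python
from collections import defaultdict, deque

class NGramPredictor:
    """Counts which URL tends to follow each length-n context in the request
    stream and predicts the most frequent successor of the current context."""
    def __init__(self, n=2):
        self.n = n
        self.context = deque(maxlen=n)
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, url):
        if len(self.context) == self.n:
            self.counts[tuple(self.context)][url] += 1
        self.context.append(url)

    def predict(self):
        successors = self.counts.get(tuple(self.context))
        return max(successors, key=successors.get) if successors else None
```

A predicted URL could then, for example, have its GDSF priority boosted so that it survives the next round of replacement.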

17.
In-network caching in Named Data Networking (NDN) based Internet of Things (IoT) plays a central role in efficient data dissemination. Data cached throughout the network may quickly become obsolete, as they are transient and frequently updated by their producers. As such, NDN-based IoT networks impose stringent requirements in terms of data freshness. While various cache replacement policies have been proposed, none has considered the cache freshness requirement. In this paper, we introduce a novel cache replacement policy called Least Fresh First (LFF) that integrates the cache freshness requirement. LFF evicts invalid cached contents based on time-series forecasting of sensors' future events. Extensive simulations are performed to evaluate the performance of LFF and to compare it to well-known cache replacement policies in ICN-based IoT networks. The results show that LFF significantly improves data freshness compared to other policies, while also improving the server hit reduction ratio, the hop reduction ratio and the response latency.
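A minimal sketch of the eviction rule implied by the policy's name; the forecasting interface is an assumption (in the paper it comes from time-series forecasting of each producer's update events):

```python
def lff_victim(cached_names, now, forecast_next_update):
    """Least Fresh First (sketch): the content whose producer is forecast to
    publish a new version soonest has the least remaining freshness, so it is
    evicted first. forecast_next_update(name) -> predicted update timestamp."""
    return min(cached_names, key=lambda name: forecast_next_update(name) - now)
```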

18.
Exploiting Regularities in Web Traffic Patterns for Cache Replacement
Cohen  Kaplan 《Algorithmica》2002,33(3):300-334
Caching web pages at proxies and in web servers' memories can greatly enhance performance. Proxy caching is known to reduce network load, and both proxy and server caching can significantly decrease latency. Web caching problems have different properties than traditional operating systems caching, and cache replacement can benefit by recognizing and exploiting these differences. We address two aspects of the predictability of traffic patterns: the overall load experienced by large proxy and web servers, and the distinct access patterns of individual pages. We formalize the notion of "cache load" under various replacement policies, including LRU and LFU, and demonstrate that the trace of a large proxy server exhibits regular load. Predictable load allows for improved design, analysis, and experimental evaluation of replacement policies. We provide a simple and (near) optimal replacement policy when each page request has an associated distribution function on the next request time of the page. Without the predictable load assumption, no such online policy is possible, and it is known that even obtaining an offline optimum is hard. For experiments, predictable load enables comparing and evaluating cache replacement policies using partial traces, containing requests made to only a subset of the pages. Our results are based on considering a simpler caching model which we call the interval caching model. We relate traditional and interval caching policies under predictable load, and derive (near-)optimal replacement policies from their optimal interval caching counterparts.

19.
石磊  孟彩霞  韩英杰 《计算机应用》2007,27(8):1842-1845
To improve Web cache performance, a prediction mechanism is added on top of the cache replacement algorithm, yielding the prediction-based Web replacement policy P-Re. The prediction algorithm uses a PPM context model; when the cache has no room left for a new object, P-Re evicts objects that have small key values and have not been predicted as future requests. Experiments show that the prediction-based replacement algorithm P-Re achieves higher hit ratios and byte hit ratios than traditional replacement algorithms.
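A minimal sketch of the eviction rule described above; the key values are assumed to come from the underlying replacement algorithm and the predicted set from the PPM model (the interfaces are illustrative):

```python
def p_re_victim(key_values, predicted):
    """P-Re (sketch): evict the smallest-key object among those NOT predicted
    to be requested again; if every cached object was predicted, fall back to
    the globally smallest key. key_values: {url: key}; predicted: set of urls."""
    candidates = {u: k for u, k in key_values.items() if u not in predicted}
    pool = candidates or key_values
    return min(pool, key=pool.get)
```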
