Similar Literature
20 similar records found.
1.
Web proxy caches are used to reduce the strain of contemporary web traffic on web servers and network bandwidth providers. In this research, a novel approach to web proxy cache replacement which utilizes neural networks for replacement decisions is developed and analyzed. Neural networks are trained to classify cacheable objects from real world data sets using information known to be important in web proxy caching, such as frequency and recency. Correct classification ratios between 0.85 and 0.88 are obtained both for data used for training and data not used for training. Our approach is compared with Least Recently Used (LRU), Least Frequently Used (LFU) and the optimal case which always rates an object with the number of future requests. Performance is evaluated in simulation for various neural network structures and cache conditions. The final neural networks achieve hit rates that are 86.60% of the optimal in the worst case and 100% of the optimal in the best case. Byte-hit rates are 93.36% of the optimal in the worst case and 99.92% of the optimal in the best case. We examine the input-to-output mappings of individual neural networks and analyze the resulting caching strategy with respect to specific cache conditions.
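
The abstract above does not give the network architecture or features beyond frequency and recency, so the following is only a minimal sketch of the general idea: train a small neural classifier on recency/frequency/size features and use its output to rank eviction candidates. The synthetic data, feature set, labeling rule and use of scikit-learn's MLPClassifier are assumptions for illustration, not the authors' setup.

```python
# Illustrative sketch only: a small neural classifier rating objects by
# recency/frequency/size features, in the spirit of the abstract above.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
recency = rng.exponential(scale=300.0, size=n)      # seconds since last request (synthetic)
frequency = rng.zipf(a=2.0, size=n).astype(float)   # request count so far (synthetic)
size_kb = rng.lognormal(mean=3.0, sigma=1.0, size=n)

# Toy label: "worth caching" if recently and frequently requested (assumed rule).
score = frequency / (1.0 + recency / 60.0)
y = (score > np.median(score)).astype(int)
X = np.column_stack([recency, frequency, size_kb])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
print("correct classification ratio:", clf.score(X_te, y_te))

# At replacement time, one plausible use of the classifier is to evict the
# cached object with the lowest predicted probability of being "cacheable".
```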

2.
Exploiting Regularities in Web Traffic Patterns for Cache Replacement
Cohen  Kaplan 《Algorithmica》2002,33(3):300-334
Caching web pages at proxies and in web servers' memories can greatly enhance performance. Proxy caching is known to reduce network load, and both proxy and server caching can significantly decrease latency. Web caching problems have different properties than traditional operating systems caching, and cache replacement can benefit by recognizing and exploiting these differences. We address two aspects of the predictability of traffic patterns: the overall load experienced by large proxy and web servers, and the distinct access patterns of individual pages. We formalize the notion of "cache load" under various replacement policies, including LRU and LFU, and demonstrate that the trace of a large proxy server exhibits regular load. Predictable load allows for improved design, analysis, and experimental evaluation of replacement policies. We provide a simple and (near) optimal replacement policy when each page request has an associated distribution function on the next request time of the page. Without the predictable load assumption, no such online policy is possible and it is known that even obtaining an offline optimum is hard. For experiments, predictable load enables comparing and evaluating cache replacement policies using partial traces, containing requests made to only a subset of the pages. Our results are based on considering a simpler caching model which we call the interval caching model. We relate traditional and interval caching policies under predictable load, and derive (near)-optimal replacement policies from their optimal interval caching counterparts.
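
The abstract states the policy only at a high level (each page has a distribution on its next request time). A rough illustrative heuristic in that spirit, not the paper's exact policy, could evict the cached page least likely to be re-requested within a look-ahead window:

```python
# Illustrative heuristic only: given an estimated per-page probability of
# reuse within a look-ahead window, evict the least likely page first.
from dataclasses import dataclass

@dataclass
class CachedPage:
    url: str
    size: int
    prob_request_within_window: float  # assumed to come from a fitted per-page model

def choose_victim(cache: list[CachedPage]) -> CachedPage:
    # Lowest probability of near-term reuse goes first (ties broken by larger size).
    return min(cache, key=lambda p: (p.prob_request_within_window, -p.size))

cache = [CachedPage("/a.html", 4096, 0.70),
         CachedPage("/b.css", 1024, 0.05),
         CachedPage("/c.png", 8192, 0.40)]
print(choose_victim(cache).url)  # -> /b.css
```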

3.
Integrating Web caching and Web prefetching in client-side proxies
Web caching and Web prefetching are two important techniques used to reduce the noticeable response time perceived by users. By integrating Web caching and Web prefetching, these two techniques can complement each other, since Web caching exploits the temporal locality of Web objects whereas Web prefetching utilizes their spatial locality. However, without circumspect design, the integration of these two techniques might cause significant performance degradation to each other. In view of this, we propose in this paper an innovative cache replacement algorithm, which not only considers the caching effect in the Web environment, but also evaluates the prefetching rules provided by various prefetching schemes. Specifically, we formulate a normalized profit function to evaluate the profit from caching an object (i.e., either a nonimplied object or an implied object according to some prefetching rule). Based on this normalized profit function, we devise a Web cache replacement algorithm, referred to as Algorithm IWCP (standing for the Integration of Web Caching and Prefetching). Using an event-driven simulation, we evaluate the performance of Algorithm IWCP under several circumstances. The experimental results show that Algorithm IWCP consistently outperforms the companion schemes in various performance metrics.
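
The abstract does not reproduce the normalized profit function itself, so the sketch below is only one plausible reading of the idea: value an object by its expected latency saving per byte, discounting implied objects by the confidence of the prefetching rule that implied them. The function shape, the rule_confidence discount and the numbers are assumptions.

```python
# Sketch of a normalized profit function in the spirit of the IWCP abstract.
def normalized_profit(access_prob, fetch_cost, size, rule_confidence=None):
    if rule_confidence is not None:           # implied object: discount by rule confidence (assumption)
        access_prob = access_prob * rule_confidence
    return access_prob * fetch_cost / size    # expected saving per byte

# Replacement: evict the cached (or prefetched) object with the lowest profit.
objects = {
    "/index.html": normalized_profit(0.6, 120.0, 15_000),
    "/logo.png":   normalized_profit(0.3, 40.0, 30_000),
    "/next.html":  normalized_profit(0.5, 100.0, 12_000, rule_confidence=0.7),
}
print("evict:", min(objects, key=objects.get))
```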

4.
Web-log mining for predictive Web caching
Caching is a well-known strategy for improving the performance of Web-based systems. The heart of a caching system is its page replacement policy, which selects the pages to be replaced in a cache when a request arrives. In this paper, we present a Web-log mining method for caching Web objects and use this algorithm to enhance the performance of Web caching systems. In our approach, we develop an n-gram-based prediction algorithm that can predict future Web requests. The prediction model is then used to extend the well-known GDSF caching policy. We empirically show that the system performance is improved using the predictive-caching approach.
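
As a rough illustration of extending GDSF with a request predictor, the sketch below keeps the standard GDSF priority K = L + F·C/S and scales it by a weight coming from a next-request predictor (the n-gram model itself is not implemented here). Folding the prediction in as a multiplicative weight is an assumption, not necessarily the paper's formulation.

```python
# Sketch: GDSF priority K = L + F*C/S, with an extra predictive weight W.
class PredictiveGDSF:
    def __init__(self):
        self.L = 0.0                       # inflation value (aging), as in GDSF
        self.meta = {}                     # url -> (freq, cost, size, key)

    def _key(self, freq, cost, size, w):
        return self.L + w * freq * cost / size

    def access(self, url, cost, size, predicted_weight=1.0):
        freq = self.meta.get(url, (0, cost, size, 0.0))[0] + 1
        key = self._key(freq, cost, size, predicted_weight)
        self.meta[url] = (freq, cost, size, key)

    def evict(self):
        victim = min(self.meta, key=lambda u: self.meta[u][3])
        self.L = self.meta[victim][3]      # GDSF: raise inflation to the evicted key
        del self.meta[victim]
        return victim

c = PredictiveGDSF()
c.access("/a", cost=100, size=5000, predicted_weight=1.5)  # predicted to recur soon
c.access("/b", cost=100, size=5000, predicted_weight=0.5)
print(c.evict())  # -> /b
```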

5.
Design and Implementation of a Web Object Access Characteristics Simulator
石磊  陶永才 《计算机仿真》2006,23(1):133-136
Web caching is a highly effective way to improve Web performance; a cache can be placed at different points in the network: the client, the proxy server, or the origin server. Studies show that Web cache hit rates can reach 30%-50%. The biggest problem in applying Web caching is cache management, and studying Web access characteristics is the basis for managing caches effectively. A Web log generator is of great help in studying Web caching systems. There are currently two ways to generate simulated Web access logs: the trace-driven method and the mathematical-simulation method. The trace-driven method transforms historical logs to generate new ones, whereas the mathematical-simulation method studies Web object access characteristics and generates logs from mathematical models of them. By analyzing Web object access characteristics, this paper uses the mathematical-simulation method to model the popularity of Web objects in both the high-frequency and low-frequency regions, the heavy-tailed distribution of Web object sizes, and the temporal locality of Web accesses, and designs and implements a Web log generator called WEBSIM. The simulator not only generates Web object access logs but is also highly flexible, providing a basis for further research on Web caching and prefetching.
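
A minimal sketch of the mathematical-simulation approach described above might draw object popularity from a Zipf-like law and object sizes from a heavy-tailed distribution, then emit a synthetic request log. The parameter values and log format below are illustrative assumptions, not those of WEBSIM.

```python
# Generate a synthetic Web access log: Zipf-like popularity, Pareto sizes.
import numpy as np

rng = np.random.default_rng(1)
num_objects, num_requests = 1000, 10000

# Zipf-like popularity over object ranks 1..num_objects.
alpha = 0.8
weights = 1.0 / np.arange(1, num_objects + 1) ** alpha
popularity = weights / weights.sum()

# Heavy-tailed object sizes (Pareto plus a minimum size), in bytes.
sizes = (1 + rng.pareto(a=1.2, size=num_objects)) * 1000

requests = rng.choice(num_objects, size=num_requests, p=popularity)
with open("synthetic_access.log", "w") as log:
    for t, obj in enumerate(requests):
        log.write(f"{t}\tobj{obj}\t{int(sizes[obj])}\n")
```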

6.
With the recent explosion in usage of the World Wide Web, Web caching has become increasingly important. However, due to the non-uniform cost/size property of data objects in this environment, designing an efficient caching algorithm is an even more difficult problem than traditional caching. In this paper, we propose the Least Expected Cost (LEC) replacement algorithm for Web caches, which provides a simple and robust framework for estimating reference probability and fairly evaluating non-uniform Web objects. LEC evaluates a Web object based on its cost per unit size multiplied by the estimated reference probability of the object. This results in a normalized assessment of the contribution to the cost-savings ratio, leading to a fair replacement algorithm. We show that this normalization method finds the optimal solution under some assumptions. Trace-driven simulations with actual Web cache logs show that LEC matches the performance of caches more than twice its size running the other algorithms we considered. Nevertheless, it is simple, having no parameters to tune. We also show how the algorithm can be effectively implemented as a Web cache replacement module.
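
A small sketch of the LEC ranking as stated in the abstract: each object is valued by its cost per unit size multiplied by an estimated reference probability, and the object with the smallest value is evicted. The probability estimator below (exponential decay of time since last reference) is only an assumed stand-in for the paper's estimator.

```python
# Sketch of the LEC idea: value = (cost / size) * estimated reference probability.
import math, time

def lec_value(cost, size, last_ref_time, now, decay=1 / 600.0):
    ref_prob = math.exp(-decay * (now - last_ref_time))   # assumed estimator
    return (cost / size) * ref_prob

now = time.time()
cache = {
    "/big.iso":   lec_value(cost=500.0, size=700_000_000, last_ref_time=now - 30,   now=now),
    "/page.html": lec_value(cost=80.0,  size=20_000,      last_ref_time=now - 1200, now=now),
}
print("evict:", min(cache, key=cache.get))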

7.
Proxy caches are essential to improve the performance of the World Wide Web and to reduce user-perceived latency. Appropriate cache management strategies are crucial to achieve these goals. In our previous work, we introduced Web object-based caching policies. A Web object consists of the main HTML page and all of its constituent embedded files. Our studies have shown that these policies improve proxy cache performance substantially. In this paper, we propose a new Web object-based policy to manage the storage system of a proxy cache. We propose two techniques to improve the storage system performance. The first technique prefetches the related files belonging to a Web object from the disk to main memory. This prefetching improves performance as most of the files can be provided from main memory rather than from the proxy disk. The second technique stores the Web object members in contiguous disk blocks in order to reduce the disk access time. We used trace-driven simulations to study the performance improvements one can obtain with these two techniques. Our results show that the first technique by itself provides up to a 50% reduction in hit latency, which is the delay involved in providing a hit document by the proxy. An additional 5% improvement can be obtained by incorporating the second technique.

8.
Web proxy caches can alleviate user access latency and network congestion to some extent, and the cache replacement policy of a Web proxy directly affects the hit rate and hence the responsiveness of network requests. To address this, multiple features are extracted from Web log data through a fixed-size circular sliding window, and a Gaussian mixture model is used to cluster the log data and predict which Web objects are likely to be accessed again within the window; combined with the Least Recently Used (LRU) algorithm, a new Web proxy cache replacement policy based on the Gaussian mixture model is proposed. Experimental results show that, compared with the traditional replacement policies LRU, LFU, FIFO and GDSF, the proposed policy effectively improves the request hit ratio and byte hit ratio of the Web proxy cache.
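
A rough sketch of the pipeline described above: per-object features are extracted over a fixed-size sliding window, a Gaussian mixture model clusters them, and objects falling in a "likely to be re-accessed" cluster are protected from plain LRU eviction. The feature choice, number of mixture components and protection rule are assumptions for illustration.

```python
# Cluster per-object window features with a GMM and flag "hot" objects.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Per-object features inside the window (synthetic): [request count, mean inter-arrival, size in KB]
features = np.column_stack([
    rng.poisson(3, 500),
    rng.exponential(120.0, 500),
    rng.lognormal(3.0, 1.0, 500),
])

gmm = GaussianMixture(n_components=3, random_state=0).fit(features)
labels = gmm.predict(features)

# Assume the cluster with the highest mean request count is the "hot" one.
means = [features[labels == k][:, 0].mean() if np.any(labels == k) else -np.inf
         for k in range(3)]
hot_cluster = int(np.argmax(means))
protected = labels == hot_cluster   # these objects are skipped first by LRU eviction
print("protected objects in window:", int(protected.sum()))
```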

9.
Web caching has been proposed as an effective solution to the problems of network traffic and congestion, Web object access and Web load balancing. This paper presents a model for optimizing Web cache content by applying either a genetic algorithm or an evolutionary programming scheme for Web cache content replacement. Three policies are proposed for each of the genetic algorithm and evolutionary programming techniques, in relation to object staleness factors and retrieval rates. A simulation model is developed and long-term trace-driven simulation is used to experiment with the proposed techniques. The results indicate that all evolutionary techniques are beneficial to cache replacement compared to the conventional replacement applied in most Web cache servers. Under an appropriate objective function, the genetic algorithm has been proven to be the best of all approaches with respect to cache hit and byte hit ratios.

10.
This paper proposes a novel contribution in the Web caching area, especially in Web cache replacement, called the intelligent client-side Web caching scheme (ICWCS). This approach splits the client-side cache into two caches: a short-term cache that receives Web objects directly from the Internet, and a long-term cache that receives Web objects from the short-term cache. Objects in the short-term cache are removed by the least recently used (LRU) algorithm when the short-term cache is full. More significantly, when the long-term cache saturates, a neuro-fuzzy system is employed to manage its contents. The proposed solution is validated by trace-driven simulation, and the results are compared with the least recently used (LRU) and least frequently used (LFU) algorithms, the most common baselines for evaluating Web caching performance. The simulation results reveal that the proposed approach improves the performance of Web caching in terms of hit ratio (HR) by up to 14.8% and 17.9% over LRU and LFU, respectively. In terms of byte hit ratio (BHR), performance is improved by up to 2.57% and 26.25%, and for latency saving ratio (LSR), performance is better by 8.3% and 18.9% over LRU and LFU, respectively.
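
A structural sketch of the two-level client-side cache described above: a short-term LRU cache whose evictions are demoted into a long-term cache. The long-term victim selection below is a simple placeholder score standing in for the paper's neuro-fuzzy system; capacities and the score are assumptions.

```python
# Two-level cache sketch: short-term LRU feeding a long-term cache.
from collections import OrderedDict

class TwoLevelCache:
    def __init__(self, short_cap=4, long_cap=8):
        self.short = OrderedDict()               # url -> (size, freq), LRU order
        self.long = {}
        self.short_cap, self.long_cap = short_cap, long_cap

    def get(self, url, size=1000):
        for level in (self.short, self.long):
            if url in level:
                size, freq = level[url]
                level[url] = (size, freq + 1)
                if level is self.short:
                    self.short.move_to_end(url)  # refresh LRU position
                return True                      # hit
        self.short[url] = (size, 1)              # miss: admit into short-term cache
        if len(self.short) > self.short_cap:     # LRU eviction demotes to long-term
            old_url, meta = self.short.popitem(last=False)
            self.long[old_url] = meta
            if len(self.long) > self.long_cap:   # placeholder for the neuro-fuzzy score
                victim = min(self.long, key=lambda u: self.long[u][1] / self.long[u][0])
                del self.long[victim]
        return False

cache = TwoLevelCache()
for url in ["/a", "/b", "/a", "/c", "/d", "/e", "/a"]:
    cache.get(url)
print(len(cache.short), len(cache.long))  # -> 4 1
```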

11.
Intelligent Web Acceleration Based on Network Performance: Caching and Prefetching
Web traffic accounts for a large share of network traffic; when the network bandwidth cannot be expanded, techniques are needed to use the available bandwidth efficiently and improve network performance. This paper studies intelligent Web acceleration techniques based on network performance metrics such as RTT (round trip time). Based on an analysis of the traffic at a Web proxy server and on measurements of network RTT, an intelligent prefetching control technique and a new cache replacement method are proposed. Simulations of the new algorithm show that it improves the cache hit rate. The study also shows that prefetching improves response time and effectively improves Web access performance without noticeably increasing network load.

12.
Web object caching is an important means of reducing traffic to Web servers and access latency. Although Web caching greatly reduces server load, eases network congestion and lowers client-perceived latency, it also introduces cache consistency problems, so the data a client obtains may not be the latest version. By analyzing existing cache consistency policies, this paper proposes a strong cache consistency algorithm suited to the Web.

13.
Belloum  A.  Hertzberger  L.O.  Muller  H. 《World Wide Web》2001,4(4):255-275
Web caches are traditionally organised in a simple tree-like hierarchy. In this paper, a new architecture is proposed, in which federations of caches are distributed globally, each caching data partially. The advantages of the proposed system are that contention on global caches is reduced, while at the same time the scalability of the system is improved, since extra cache resources can be added on the fly. Among the other topics discussed in this paper are the scalability of the proposed system, the algorithms used to control the federations of Web caches, and the approach used to identify potential Web cache partners. In order to obtain a successful collaborative Web caching system, the formation of federations must be controlled by an algorithm that takes the dynamics of Internet traffic into consideration. We use the history of Web cache accesses to determine how federations should be formed. Initial performance results from a simulation of a number of nodes are promising.

14.
Web 2.0 systems are more unpredictable and customizable than traditional web applications. As a result, performance techniques such as web caching deliver limited improvements. Our study was based on the hypothesis that the use of web caching in Web 2.0 applications, particularly in content aggregation systems, can be improved by adapting the content fragment designs. We proposed to base this adaptation on the analysis of the characterization parameters of the content elements and on a classification algorithm. This algorithm was deployed with decision trees, created in an off-line knowledge discovery process. We also defined a framework to create and adapt fragments of web documents to reduce the user-perceived latency in web caches. The experimental results showed that our solution achieved a remarkable reduction in user-perceived latency, despite losses in the cache hit ratios and the overhead generated on the system, in comparison with other web cache schemes.

15.
Vakali  Athena 《World Wide Web》2001,4(4):277-297
Access to and circulation of Web objects have been facilitated by the design and implementation of effective caching schemes. Web caching has been integrated in prototype and commercial Web-based information systems in order to reduce the overall bandwidth and increase the system's fault tolerance. This paper presents an overview of a series of Web cache replacement algorithms based on the idea of preserving a history record for cached Web objects. The number of references to Web objects over a certain time period is a critical parameter for cache content replacement. The proposed algorithms are simulated and evaluated under a real workload of Web cache traces provided by a major (Squid) proxy cache server installation. Cache and byte hit rates are given with respect to different cache sizes and a varying number of request workload sets, and it is shown that the proposed cache replacement algorithms improve both cache and byte hit rates.
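
As an illustration of a history-record-based replacement decision, the sketch below keeps per-object reference timestamps and evicts the cached object with the fewest references inside a recent time window. The window length and tie-breaking are assumptions, not the specific algorithms evaluated in the paper.

```python
# History-record sketch: evict the object with the fewest recent references.
from collections import defaultdict, deque

WINDOW = 3600.0                                  # seconds of history considered (assumed)
history = defaultdict(deque)                     # url -> timestamps of references

def record_reference(url, now):
    h = history[url]
    h.append(now)
    while h and now - h[0] > WINDOW:             # drop references outside the window
        h.popleft()

def choose_victim(cached_urls, now):
    def recent_refs(url):
        return sum(1 for t in history[url] if now - t <= WINDOW)
    return min(cached_urls, key=recent_refs)

for t, url in [(0, "/a"), (10, "/b"), (20, "/a"), (30, "/c")]:
    record_reference(url, t)
print(choose_victim(["/a", "/b", "/c"], now=40))  # -> /b (ties with /c, first wins)
```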

16.
A Web Cache Replacement Algorithm Based on Limited-History Multiple LRU
The core of Web caching is the cache replacement algorithm. For dynamic and uncertain network environments, this paper proposes a limited-history multiple-LRU (LH-MLRU) Web cache replacement algorithm, which is low-overhead, high-performance and adaptive. LH-MLRU considers several factors jointly and manages Web objects in multiple LRU queues, introducing each object's recent access history as a key factor in replacement decisions in order to predict the probability that the object will be accessed again. Its parameters are retrained periodically so that it adapts to dynamic, uncertain network conditions. Trace-driven simulations show that LH-MLRU outperforms the other algorithms on all performance metrics and can significantly improve Web cache performance.

17.
Analysis of Web traffic has shown that user access patterns for Web objects follow Zipf's law or a Zipf-like law. In Web cache design, a designer can use Zipf's law to approximately compute the cache size required to reach a desired object hit ratio, so Zipf's law provides an important basis for designing Web cache architectures. An appropriate cache size combined with a P-LFU replacement policy can achieve a very high Web cache hit ratio.
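
A worked sketch of the sizing argument above, assuming an idealized frequency-based cache: under a Zipf-like popularity law, the hit ratio of a cache holding the k most popular of N objects is roughly the cumulative probability mass of those k objects, so the cache size needed for a target hit ratio can be read off the cumulative Zipf distribution. The values of alpha and N below are illustrative, not measurements.

```python
# Estimate the cache size (in objects) needed for a target hit ratio under Zipf.
import numpy as np

def cache_size_for_hit_ratio(target, num_objects=100_000, alpha=0.8):
    weights = 1.0 / np.arange(1, num_objects + 1) ** alpha
    cumulative = np.cumsum(weights / weights.sum())   # approximate hit ratio vs. cache size
    return int(np.searchsorted(cumulative, target) + 1)

for target in (0.3, 0.5, 0.7):
    print(f"hit ratio {target:.0%} needs ~{cache_size_for_hit_ratio(target)} objects")
```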

18.
The article provides a primer on Web resource caching, one technology used to make the Web scalable. Web caching can reduce bandwidth usage, decrease user-perceived latencies, and reduce Web server loads transparently. As a result, caching has become a significant part of the Web's infrastructure. Caching has even spawned a new industry: content delivery networks, which are also growing at a fantastic rate. Readers familiar with relatively advanced Web caching topics such as the Internet Cache Protocol (ICP), invalidation, and interception proxies are not likely to learn much here. Instead, the article is designed for the general audience of Web users. Rather than a how-to guide to caching technology deployment, it is a high-level argument for the value of Web caching to content consumers and producers. The article defines caching, explains how it applies to the Web, and describes when and why it is useful.

19.
Caching and prefetching play an important role in improving Web access performance in wireless environments. This paper studies Web caching and prefetching mechanisms for wireless LANs, proposing prediction algorithms based on sequence mining and deferred updating, drawing on data mining and information theory respectively, and designing a context-aware prefetching algorithm and a benefit-driven cache replacement mechanism. These algorithms have been implemented in the Web caching system OnceEasyCache. Performance evaluation shows that integrating them effectively improves the cache hit ratio and the latency-saving ratio.

20.
Performance Analysis of Proxy Web Caches
Using Web caching to improve the performance of today's Internet has become a mainstream research area; it works on the same principle as the multi-level caches in processors and file systems. Large-scale Web caching systems have become an important part of the Internet infrastructure in many countries. Starting from the trace logs of three proxy Web caches with different access scales, this paper analyzes statistical characteristics such as user access patterns, cache hit rates and cache server processing delays, and proposes a two-level cooperative Web cache cluster technique that combines distributed shared RAM with external storage, which can provide scalable, high-performance parallel Web caching services.
