Similar Documents
20 similar documents found.
1.
Exploiting Regularities in Web Traffic Patterns for Cache Replacement   (Cited by: 2; self-citations: 0; citations by others: 2)
Cohen, Kaplan. Algorithmica, 2002, 33(3): 300-334
Abstract. Caching web pages at proxies and in web servers' memories can greatly enhance performance. Proxy caching is known to reduce network load, and both proxy and server caching can significantly decrease latency. Web caching problems have different properties than traditional operating-system caching, and cache replacement can benefit from recognizing and exploiting these differences. We address two aspects of the predictability of traffic patterns: the overall load experienced by large proxy and web servers, and the distinct access patterns of individual pages. We formalize the notion of "cache load" under various replacement policies, including LRU and LFU, and demonstrate that the trace of a large proxy server exhibits regular load. Predictable load allows for improved design, analysis, and experimental evaluation of replacement policies. We provide a simple and (near-)optimal replacement policy for the case in which each page request has an associated distribution function on the next request time of the page. Without the predictable-load assumption, no such online policy is possible, and it is known that even obtaining an offline optimum is hard. For experiments, predictable load enables comparing and evaluating cache replacement policies using partial traces, containing requests made to only a subset of the pages. Our results are based on a simpler caching model which we call the interval caching model. We relate traditional and interval caching policies under predictable load, and derive (near-)optimal replacement policies from their optimal interval caching counterparts.
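The key primitive here, a per-page distribution on the next request time, lends itself to a compact illustration. The Python sketch below shows only the flavor of such a policy; the class name, the dict layout, and the use of a plain mean are assumptions for illustration, not the paper's derivation from the interval caching model.

```python
# Illustrative sketch only (not the paper's policy): if each page's next-request
# time has a known distribution, an online policy can evict the page whose
# expected next request lies furthest in the future.
class DistributionCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.expected_next = {}  # page -> estimated time of its next request

    def access(self, page, now, mean_gap):
        # mean_gap: the (assumed known) mean inter-request time of this page,
        # taken from its per-page distribution.
        self.expected_next[page] = now + mean_gap
        if len(self.expected_next) > self.capacity:
            # Farthest expected next request = eviction candidate, echoing
            # Belady's offline rule, made online by the distribution assumption.
            victim = max(self.expected_next, key=self.expected_next.get)
            del self.expected_next[victim]
```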

2.
Existing Web caches are implemented mainly with traditional in-memory caching algorithms, but because Web request streams are heterogeneous, those traditional replacement algorithms do not work well in the Web environment. This paper studies the criteria behind Web cache replacement, analyzes the shortcomings of earlier replacement algorithms, and, taking into account document size, fetch cost, access frequency, degree of user interest, and time of last access, introduces the notion of an object role for cached Web objects and builds a new, high-precision object-role-based Web cache replacement algorithm (ORB). Using proxy-server traces from NASA and DEC, the algorithm is compared in simulation against LRU, LFU, SIZE, and Hybrid; the results show that the ORB alg…
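The abstract lists the five factors ORB weighs but not the formula. The sketch below combines them into one hypothetical score; the product/ratio form, the field names, and the pick_victim helper are all illustrative assumptions, not the published ORB algorithm.

```python
def orb_score(obj, now):
    # Hypothetical composite "role" value: higher means more worth keeping.
    # The real ORB weighting is not given in the abstract.
    age = now - obj["last_access"] + 1.0  # recency term; +1 avoids divide-by-zero
    keep_value = obj["frequency"] * obj["fetch_cost"] * obj["interest"]
    return keep_value / (obj["size"] * age)

def pick_victim(cache, now):
    # cache: url -> {"frequency", "fetch_cost", "interest", "size", "last_access"}
    return min(cache, key=lambda url: orb_score(cache[url], now))
```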

3.
Tang Bing, Zhang Li. Journal of Computer Applications, 2014, 34(11): 3109-3111
To increase access speed and reduce cost for cloud storage, a cost-oriented caching strategy is proposed. Several desktop computers on an essentially free local-area network form a local distributed file system that serves as a cache for the remote cloud storage. On a file read, the cache is checked first and the file is served from it on a hit; on a miss the file is fetched from the remote cloud storage. The Least Recently Used (LRU) algorithm is adopted for replacement, evicting cold data from the cache. With Amazon Simple Storage Service (S3) as the remote cloud storage service, a simple performance test of the prototype system shows that the proposed caching strategy significantly speeds up file reads while reducing cost.

4.
Song Jiang, Xiaodong Zhang. Performance Evaluation, 2005, 60(1-4): 5-29
Most computer systems use a global page replacement policy based on the LRU principle: an approximately least recently used page is selected for replacement from the entire user memory space. During execution, a memory page can be marked as LRU even while its program is servicing page faults. We call LRU pages arising under such conditions false LRU pages, because they are not produced by program memory-reference delays, which is inconsistent with the LRU principle. False LRU pages can significantly increase page faults and even cause system thrashing. The risk is more serious in large parallel systems with distributed memories, where processes running on individual nodes coordinate with one another: thrashing on one node or a few nodes can severely affect other nodes running coordinating processes, and can even crash the whole system. In this paper, we focus on improving the page replacement algorithm running on a single node.

After a careful study characterizing memory usage and thrashing behavior in multiprogramming systems under LRU replacement, we propose an LRU alternative, called token-ordered LRU, that eliminates or reduces unnecessary page faults by effectively ordering and scheduling memory-space allocations. Compared with traditional thrashing-protection mechanisms such as load control, our policy allows more processes to keep running, supporting synchronous distributed process computing. We have implemented the token-ordered LRU algorithm in a Linux kernel to show its effectiveness.
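A minimal sketch of the token idea follows, assuming a single token that shields the faulting process's pages from eviction; the class layout and the simple skip rule are assumptions, and the actual kernel implementation is more involved.

```python
from collections import OrderedDict

class TokenOrderedLRU:
    def __init__(self, nframes):
        self.nframes = nframes
        self.lru = OrderedDict()  # page -> owning pid, least recently used first
        self.token_holder = None  # pid whose pages are currently protected

    def reference(self, page, pid):
        if page in self.lru:
            self.lru.move_to_end(page)  # ordinary LRU update on a hit
            return
        if len(self.lru) >= self.nframes:
            # Skip pages of the token holder so its "false LRU" pages,
            # marked stale while it was busy faulting, are not evicted.
            victim = next((p for p, owner in self.lru.items()
                           if owner != self.token_holder), None)
            if victim is None:  # every frame belongs to the token holder
                victim = next(iter(self.lru))
            del self.lru[victim]
        self.lru[page] = pid
```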


5.
We study the minimization of page faults in a demand-paging environment where program behavior is described by the LRU stack model. Previous work on this subject considered a specific type of stack probability distribution; as practical situations exist that do not satisfy this assumption, we extend the analysis to arbitrary distributions. The minimization is stated as a constrained optimization problem, a method for obtaining a class of optimal solutions is presented, and a fixed-space replacement algorithm implementing these solutions is proposed. The parameters of this replacement algorithm can be varied to adapt to specific stack probability distributions and to the number of pages allocated in memory. The paper also determines a necessary and sufficient condition for the optimality of the LRU algorithm.

6.
Page replacement algorithms for main memory in modern operating systems are crucial to system performance. When memory is full, a page replacement algorithm exploits temporal locality and the frequency of page references to evict the page least likely to be accessed in the near future. Serving most data directly from memory then improves performance by reducing the I/O waits of accessing slow storage. Replacement algorithms that maximize hit ratio while incurring as little overhead as possible have been studied continually. In this paper, we propose a time-shift least recently used (TSLRU) algorithm that converts frequency information of page references into temporal locality. Frequent accesses of a page are thus recognized and accumulated in terms of time. Moreover, pages loaded into memory for the first time are not necessarily treated as the most recently used pages; as a result, one-pass pages are evicted sooner in our algorithm than under the traditional LRU algorithm. Our performance evaluations show that TSLRU outperforms conventional page replacement algorithms on both artificial and real application traces. For example, the hit ratio of TSLRU exceeds ARC by 4.17% and LRU by 5.91% on normally distributed workloads, and TSLRU outperforms ARC by over 2% on half of the application traces tested.
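A sketch of the time-shift idea under stated assumptions: each hit pushes a page's logical timestamp into the future in proportion to its accumulated hit count, while first-time pages start slightly in the past, so one-pass pages fall out early. The shift and penalty constants are illustrative, not from the paper.

```python
import itertools

class TSLRU:
    def __init__(self, capacity, shift=2, penalty=1):
        self.capacity, self.shift, self.penalty = capacity, shift, penalty
        self.clock = itertools.count()
        self.stamp, self.hits = {}, {}  # page -> logical timestamp / hit count

    def access(self, page):
        now = next(self.clock)
        if page in self.stamp:
            # Frequency folded into recency: more hits, bigger forward shift.
            self.hits[page] += 1
            self.stamp[page] = now + self.shift * self.hits[page]
        else:
            if len(self.stamp) >= self.capacity:
                victim = min(self.stamp, key=self.stamp.get)
                del self.stamp[victim], self.hits[victim]
            # New pages are deliberately not treated as most recently used.
            self.stamp[page] = now - self.penalty
            self.hits[page] = 0
```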

7.
In manycore systems, eviction decisions related to caches and memory coherence greatly impact system performance, emphasizing their importance. Extensive research has produced numerous standalone eviction policies such as LRU, LFU, and FIFO, all aiming to approach the Bélády optimum. Standalone eviction policies optimize for a single attribute (recency, frequency, etc.), limiting their impact on applications exhibiting non-uniform memory access patterns. The Hybrid Voting-based Eviction Policy (HyVE) extends multiple standalone eviction policies with a ranking system and evaluates them using concepts from voting theory. The goal of HyVE is to make better replacement decisions by creating a consensus among its constituent eviction policies. With its inherent voting properties, HyVE takes different replacement decisions than its standalone counterparts, making it a new and distinct eviction policy. We deploy and evaluate HyVE in two case studies: last-level cache replacement in a generic manycore environment, and sparse directory eviction on a tile-based distributed shared memory (DSM) architecture. We explore different variants of HyVE and evaluate them using workloads from the PARSEC and SPLASH-2 benchmark suites in a simulation environment. We also compare HyVE to state-of-the-art set-dueling and learning-based eviction policies. For last-level cache replacement, HyVE reduces cache misses by 7.4% compared to the LRU policy, whereas DRRIP and Hawkeye reduce cache misses by 5.5% and 9.2%, respectively. Though Hawkeye performs better on average, HyVE offers a unique advantage for certain workloads by using a voting-based approach to the replacement problem. For sparse directory eviction, results show that HyVE reduces coherence traffic and execution time by up to 11% compared to the LRU policy. We have synthesized HyVE on an FPGA prototype. Hardware analysis shows that HyVE's constituent policies contribute most of its overheads, while the ranking and voting extensions add little; timing analysis shows that HyVE's logic delay is comparable to that of standalone eviction policies. Lastly, we evaluate HyVE on the FPGA prototype using characteristic micro-benchmarks that further demonstrate its ability to remain agnostic to varying data access patterns.
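The consensus mechanism invites a small sketch. Below, each constituent policy ranks the eviction candidates and a Borda-style count aggregates the rankings; Borda is one standard rule from voting theory and is an assumption here, since the abstract does not name HyVE's exact rule.

```python
def hyve_victim(candidates, meta, rankers):
    # candidates: hashable block ids; meta: id -> per-block stats.
    scores = {c: 0 for c in candidates}
    for rank in rankers:
        for pos, blk in enumerate(rank(candidates, meta)):
            scores[blk] += pos  # position 0 = most evictable under that policy
    return min(scores, key=scores.get)  # strongest eviction consensus

# Three standalone policies expressed as rankers (most evictable first):
lru_rank  = lambda cs, m: sorted(cs, key=lambda b: m[b]["last_use"])
lfu_rank  = lambda cs, m: sorted(cs, key=lambda b: m[b]["uses"])
fifo_rank = lambda cs, m: sorted(cs, key=lambda b: m[b]["loaded_at"])
```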

8.
The Internet now offers more than just simple information to users. Decision makers can issue analytical, as opposed to transactional, queries that involve massive data (such as aggregations over millions of rows in a relational database) in order to identify useful trends and patterns. Such queries are often referred to as On-Line Analytical Processing (OLAP). Typically, pages carrying query results do not exhibit temporal locality and are therefore not considered for caching at Internet proxies. In OLAP processing, this is a major problem, as the cost of these queries is significantly larger than that of transactional queries. This paper proposes a technique to reduce the response time of OLAP queries originating from geographically distributed private LANs and issued through the Web toward a central data warehouse (DW) of an enterprise. An active caching scheme is introduced that enables the LAN proxies to cache some parts of the data, together with the semantics of the DW, in order to process queries and construct the resulting pages. OLAP queries arriving at a proxy are satisfied either locally or from the DW, depending on the relative access costs. We formulate a cost model characterizing the respective latencies, taking into consideration the combined effects of both common Web access and query processing. We propose a cache admission and replacement algorithm that operates on a hybrid Web-OLAP input, outperforming both pure-Web and pure-OLAP caching schemes.

9.
Real-time multimedia applications operate under strict timing constraints, and storage access latency is one of the most critical factors affecting system performance. For real-time multimedia data, prefetching and replacement strategies built on LRU and its variants for conventional data can no longer meet practical QoS requirements. Targeting real-time multimedia streams, this paper builds a task-path-tree model of real-time media objects in a terminal environment and presents MO-VFT, a prefetching and replacement algorithm for media streams under this model. Simulation results show that the model is sound, the algorithm's overhead is small, and performance improves by 20-35%.

10.
Data caching at mobile clients is an important technique for improving the performance of wireless data dissemination systems. However, variable data sizes, data updates, limited client resources, and frequent client disconnections make cache management a challenge. We propose a gain-based cache replacement policy, Min-SAUD, for wireless data dissemination when cache consistency must be enforced before a cached item is used. Min-SAUD considers several factors that affect cache performance, namely, access probability, update frequency, data size, retrieval delay, and cache validation cost. The paper employs stretch as the major performance metric since it accounts for the data service time and, thus, is fair when items have different sizes. We prove that Min-SAUD achieves optimal stretch under some standard assumptions. Moreover, a series of simulation experiments have been conducted to thoroughly evaluate the performance of Min-SAUD under various system configurations. The simulation results show that, in most cases, the Min-SAUD replacement policy substantially outperforms two existing policies, namely, LRU and SAIU.
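The five factors Min-SAUD weighs can be tied together in a gain-per-byte sketch. The formula below is one plausible reading, not the paper's exact definition: under an assumed model of independent access and update processes, an access finds the cached copy still valid with probability a/(a+u).

```python
def min_saud_gain(access_rate, update_rate, size, fetch_delay, validate_cost):
    # Probability an access still finds the cached copy valid (assumption:
    # Poisson-like independent access and update processes).
    p_valid = access_rate / (access_rate + update_rate)
    # Expected delay saved per access, net of the cost of validating the copy.
    saved_per_access = p_valid * fetch_delay - validate_cost
    return access_rate * saved_per_access / size  # gain per byte of cache

# Replacement rule: when space is needed, evict the item with the lowest gain.
```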

11.
Replication algorithms in a remote caching architecture   (Cited by: 1; self-citations: 0; citations by others: 1)
This paper studies cache performance in a remote caching architecture. The authors develop a set of distributed object replication policies designed to implement different optimization goals. Each site is responsible for local cache decisions and modifies cache contents in response to decisions made by other sites. The authors use the optimal and greedy policies as upper and lower bounds, respectively, for performance in this environment. Critical system parameters are identified and their effect on system performance studied. Performance of the distributed algorithms is found to be close to optimal, while that of the greedy algorithms is far from optimal.

12.
To simplify the implementation and improve the performance of embedded virtual memory, an improved least-recently-used page replacement algorithm is proposed on the basis of a comparative analysis of common page replacement algorithms. The algorithm combines hardware and software techniques including the memory management unit, a cross-page access counter, access-order registers, and overflow interrupt handling. Experimental results show that the algorithm improves page replacement efficiency and overall system performance in embedded systems, and is broadly applicable to IoT and embedded systems.

13.
In this paper, we propose a novel approach to enhancing web proxy caching that integrates content management with performance-tuning techniques. We first develop a hierarchical model for the management of web data, which consists of physical pages, logical pages, and topics corresponding to different abstraction levels. Content management based on this model enables active utilization of the cached web contents. By defining a priority on each abstraction level, the cache manager can make replacement decisions on topics, logical pages, and physical pages hierarchically. As a result, a cache can always keep the most relevant, popular, and high-quality content. To verify the proposed approach, we have designed a content-aware replacement algorithm, LRU-SP+, and evaluated it through preliminary experiments. The results show that content management can achieve a 30% improvement in caching performance in terms of hit ratios and profit ratios (considering the significance of topics) compared to content-blind schemes.

14.
Donald B. Innes. Software, 1977, 7(2): 271-273
Many implementations of paged virtual memory systems employ demand fetching with least recently used (LRU) replacement. The stack characteristic of LRU replacement implies that a reference string which repeatedly accesses a number of pages in sequence will cause a page fault for each successive page referenced whenever the number of pages exceeds the number of page frames allocated to the program's LRU stack. In certain circumstances, when the individual operations performed on the reference string are independent, or more precisely commutative, the order of alternate page reference sequences can be reversed. This paper considers sequences which cannot be reversed and shows how the placement of information on pages can achieve a similar effect if at least half the pages can be held in the LRU stack.
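The effect is easy to reproduce. The simulation below (page counts and frame counts are illustrative parameters) counts LRU faults when 6 pages cycle through 4 frames: a plain loop faults on every reference, while reversing alternate passes, the property the paper builds on, faults far less.

```python
from collections import OrderedDict

def lru_faults(trace, frames):
    cache, faults = OrderedDict(), 0
    for p in trace:
        if p in cache:
            cache.move_to_end(p)           # refresh recency on a hit
        else:
            faults += 1
            if len(cache) >= frames:
                cache.popitem(last=False)  # evict the least recently used page
            cache[p] = True
    return faults

pages, passes, frames = list(range(6)), 4, 4
loop   = pages * passes                                # 0..5, 0..5, ...
zigzag = sum((pages if i % 2 == 0 else pages[::-1]
              for i in range(passes)), [])             # reverse alternate passes
print(lru_faults(loop, frames), lru_faults(zigzag, frames))  # 24 vs. 12 faults
```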

15.
Caching strategies to improve disk system performance   (Cited by: 1; self-citations: 0; citations by others: 1)
Karedla R., Love J.S., Wherry B.G. Computer, 1994, 27(3): 38-46
I/O subsystem manufacturers attempt to reduce latency by increasing disk rotation speeds, incorporating more intelligent disk scheduling algorithms, increasing I/O bus speed, using solid-state disks, and implementing caches at various places in the I/O stream. In this article, we examine the use of caching as a means to improve system response time and the data throughput of the disk subsystem. Caching can help alleviate I/O subsystem bottlenecks caused by mechanical latencies. The article describes a caching strategy that offers the performance of caches twice its size. After explaining some basic caching issues, we examine some popular caching strategies and cache replacement algorithms, as well as the advantages and disadvantages of caching at different levels of the computer system hierarchy. Finally, we investigate the performance of three cache replacement algorithms: random replacement (RR), least recently used (LRU), and a frequency-based variation of LRU known as segmented LRU (SLRU).
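Since SLRU is the article's highlighted strategy, here is a compact sketch of the usual two-segment formulation; the segment sizes and dict-based bookkeeping are illustrative assumptions.

```python
from collections import OrderedDict

class SLRU:
    def __init__(self, prob_size, prot_size):
        self.prob_size, self.prot_size = prob_size, prot_size
        self.prob, self.prot = OrderedDict(), OrderedDict()  # LRU order each

    def access(self, key):
        if key in self.prot:
            self.prot.move_to_end(key)      # hit in protected: refresh recency
            return True
        if key in self.prob:
            del self.prob[key]              # hit in probationary: promote
            self.prot[key] = True
            if len(self.prot) > self.prot_size:
                demoted, _ = self.prot.popitem(last=False)
                self._insert_prob(demoted)  # overflow demotes rather than evicts
            return True
        self._insert_prob(key)              # miss: enter probationary segment
        return False

    def _insert_prob(self, key):
        if len(self.prob) >= self.prob_size:
            self.prob.popitem(last=False)   # victims come from probationary LRU end
        self.prob[key] = True
```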

16.
HCache: an Effective Hybrid P2P Web Caching System   (Cited by: 1; self-citations: 0; citations by others: 1)
Li Tianliang, Shi Lei. Journal of Computer Applications, 2008, 28(6): 1478-1480
To address the excessive number of replicas in current P2P Web caching systems, an effective hybrid P2P Web caching system, HCache, is proposed. HCache caches pages selectively according to users' access patterns and page priorities, thereby reducing the number of replicas in the P2P Web cache. The LRU replacement policy is extended (ELRU) with the current popularity of Web objects, raising the hit ratio of the P2P Web cache. Trace-driven simulation shows that HCache improves both the hit ratio and the overall performance of Web caching.

17.
This paper proposes a novel contribution in the Web caching area, specifically in Web cache replacement: an intelligent client-side Web caching scheme (ICWCS). The approach splits the client-side cache into two caches: a short-term cache that receives Web objects directly from the Internet, and a long-term cache that receives Web objects from the short-term cache. Objects in the short-term cache are removed by the least recently used (LRU) algorithm when the short-term cache is full. More significantly, when the long-term cache saturates, a neuro-fuzzy system is employed to manage its contents. The proposed solution is validated by trace-driven simulation, and the results are compared with the least recently used (LRU) and least frequently used (LFU) algorithms, the most common baselines for evaluating Web caching performance. The simulation results show that the proposed approach improves Web caching performance in terms of hit ratio (HR) by up to 14.8% and 17.9% over LRU and LFU, respectively; in terms of byte hit ratio (BHR) by up to 2.57% and 26.25%; and in terms of latency saving ratio (LSR) by 8.3% and 18.9%.
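The two-cache split can be sketched as follows. Here score_fn stands in for the paper's neuro-fuzzy system (it is assumed to be any callable scoring an object's keep-worthiness), and the demotion path from short-term to long-term follows the abstract's description, not the paper's full details.

```python
from collections import OrderedDict

class TwoLevelClientCache:
    def __init__(self, short_size, long_size, score_fn):
        self.short, self.long = OrderedDict(), {}
        self.short_size, self.long_size = short_size, long_size
        self.score_fn = score_fn  # placeholder for the neuro-fuzzy scorer

    def fetch(self, url, stats):
        if url in self.short:
            self.short.move_to_end(url)      # plain LRU in the short-term cache
            return "short-hit"
        if url in self.long:
            return "long-hit"
        if len(self.short) >= self.short_size:
            # LRU eviction from the short-term cache feeds the long-term cache...
            old_url, old_stats = self.short.popitem(last=False)
            if len(self.long) >= self.long_size:
                # ...whose own victims are chosen by the learned score.
                worst = min(self.long, key=lambda u: self.score_fn(self.long[u]))
                del self.long[worst]
            self.long[old_url] = old_stats
        self.short[url] = stats              # new object fetched from the Internet
        return "miss"
```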

18.
Modern CMP processors typically design the shared last-level cache (LLC) around the LRU replacement policy or its approximations. However, as LLC capacity and associativity grow, the performance gap between LRU and the theoretically optimal replacement algorithm keeps widening. Many cache management policies have been proposed to address this problem, but most of them target only a single type of memory access and pay little attention to the frequency information of cache accesses, so their performance gains are quite limited…

19.
Optical disc libraries are an important medium for mass information storage today. To reduce their access response time, this paper places a disk cache system in front of the optical disc library. It analyzes the access characteristics of the library, then designs and implements a file-type-based replica cache structure with a corresponding cache replacement algorithm. Compared with a traditional structure, this cache structure achieves a higher hit ratio, consumes less memory, and greatly improves the manageability of the local cache space.

20.
Parallel Computing, 2014, 40(10): 710-721
In this paper, we investigate the problem of fair storage-cache allocation among multiple competing applications with diversified access rates. Commonly used cache replacement policies like LRU and most of its variants are inherently unfair in cache allocation for heterogeneous applications: they implicitly give more cache to applications with high access rates and less to applications with slow access rates. However, applications with fast access rates do not always gain higher performance from additional cache blocks, while slow applications suffer poor performance with a reduced cache size. It is beneficial, in terms of both performance and fairness, to allocate cache blocks by their utility. We propose a partition-based cache management algorithm for a shared cache. The goal of the algorithm is to find an allocation such that all heterogeneous applications achieve a specified fairness degree with as little performance degradation as possible. To achieve this goal, we present an adaptive partition framework that partitions the shared cache among competing applications and dynamically adjusts partition sizes based on the predicted utility of both fairness and performance. We implement the algorithm in a storage simulator and evaluate fairness and performance with various workloads. Experimental results show that, compared with LRU, our algorithm achieves a large improvement in fairness and a slight improvement in performance.
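A sketch of one adaptive step, assuming a marginal_gain(app, size) predictor (for example, read off a per-application miss-rate curve); the paper's utility additionally folds in the target fairness degree, which this sketch omits.

```python
def rebalance(partitions, marginal_gain, step=1):
    # Move one block from the application that profits least from its last
    # block to the application that would profit most from an extra block.
    loser  = min(partitions, key=lambda a: marginal_gain(a, partitions[a]))
    winner = max(partitions, key=lambda a: marginal_gain(a, partitions[a] + step))
    if winner != loser and partitions[loser] > step:
        partitions[loser]  -= step
        partitions[winner] += step
    return partitions  # total cache size is conserved across applications
```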

