Similar Documents
20 similar documents retrieved (search time: 242 ms)
1.
Web caching has been widely used to alleviate Internet traffic congestion in World Wide Web (WWW) services. To reduce download traffic, an effective web cache management strategy is needed that exploits web usage information when deciding which stored document to evict once the cache is saturated. This paper presents the Learning Based Replacement algorithm (LBR), a hybrid approach to an efficient replacement model for web caching that incorporates a machine learning technique (naive Bayes) into the LRU replacement method to improve the prediction, from the access history in a web log, of the probability that a cached page will be revisited by a succeeding request. The learned knowledge indicates which URL objects in the cache should be kept or evicted; the model is acquired to represent the hidden aspects of user request patterns for predicting re-reference probability. A series of experiments shows that LBR improves revisit-probability prediction, hit rate, and byte hit rate over the traditional methods LRU, LFU, and GDSF.
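As a rough illustration of how a learned revisit predictor can be grafted onto LRU, the Python sketch below lets LRU order nominate eviction candidates while a naive-Bayes-style estimate of revisit probability picks the victim among them. The feature (an access-count bucket), the toy probability table, and the candidate-window size k are assumptions for the example, not details from the paper.

```python
from collections import OrderedDict

class LBRCache:
    """Sketch of an LBR-style cache: LRU order supplies eviction candidates,
    and a learned estimate of revisit probability picks the actual victim
    among the K least recently used entries."""

    def __init__(self, capacity, nb_model, k=4):
        self.capacity = capacity
        self.nb = nb_model          # maps an access-count feature to P(revisit)
        self.k = k
        self.store = OrderedDict()  # key -> (value, access_count)

    def get(self, key):
        if key not in self.store:
            return None
        value, count = self.store.pop(key)
        self.store[key] = (value, count + 1)   # move to MRU, bump frequency
        return value

    def put(self, key, value):
        if key in self.store:
            self.store.pop(key)
        elif len(self.store) >= self.capacity:
            # Among the K oldest entries, evict the one the learned model
            # deems least likely to be revisited.
            candidates = list(self.store.items())[:self.k]
            victim = min(candidates, key=lambda kv: self.nb(kv[1][1]))[0]
            self.store.pop(victim)
        self.store[key] = (value, 1)

# Toy stand-in for the trained naive Bayes: revisit probability learned from
# a web log, here keyed only on the access-count bucket (hypothetical values).
def toy_nb(access_count):
    return {1: 0.2, 2: 0.45, 3: 0.6}[min(access_count, 3)]

cache = LBRCache(capacity=3, nb_model=toy_nb)
for url in ["a", "b", "a", "c", "d"]:   # "b" is cold and gets evicted
    cache.get(url) or cache.put(url, "page")
print(list(cache.store))                # ['a', 'c', 'd']
```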

2.
Research and Implementation of an Improved Cache Scheme Based on the LRU Algorithm
廖鑫 《电子工程师》2008,34(7):46-48
The LRU (least recently used) replacement algorithm is widely used in many single-processor applications. In multiprocessor systems, however, traditional LRU is not optimal for reducing the miss rate of a shared cache. This paper studies basic cache-block replacement algorithms and, building on an analysis of LRU, proposes an improved cache scheme based on LRU and access probability: the replacement candidate is chosen by jointly considering how recently and how frequently a block has been accessed, which makes the replacement algorithm better suited to multiprocessors.
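A minimal sketch of a combined recency/frequency victim choice of this kind is shown below; the linear score and the weight alpha are illustrative assumptions, since the abstract does not state the exact combination rule.

```python
from collections import OrderedDict

class LRUFreqCache:
    """Victim selection by a weighted score of recency rank and access
    frequency rather than recency alone; `alpha` is an assumed weight."""

    def __init__(self, capacity, alpha=0.5):
        self.capacity = capacity
        self.alpha = alpha
        self.store = OrderedDict()   # key -> [value, access_count]

    def access(self, key, value=None):
        if key in self.store:
            entry = self.store.pop(key)
            entry[1] += 1
            self.store[key] = entry          # promote to MRU on a hit
            return entry[0]
        if len(self.store) >= self.capacity:
            # recency rank 0 = least recently used; lowest score is evicted
            scored = [
                (self.alpha * rank + (1 - self.alpha) * entry[1], k)
                for rank, (k, entry) in enumerate(self.store.items())
            ]
            self.store.pop(min(scored)[1])
        self.store[key] = [value, 1]
        return value

c = LRUFreqCache(2)
for key in ["a", "b", "a", "c"]:
    c.access(key, value=key.upper())
print(list(c.store))    # ['a', 'c']: "b" was old and accessed only once
```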

3.
黄丹  宋荣方 《电信科学》2018,34(11):59-66
Cache replacement is one of the key research problems in content-centric networking. Given the limited cache space, replacing cached content sensibly is a key factor in overall network performance. This paper therefore designs a content-value-based cache replacement scheme. The scheme jointly considers the dynamic popularity of a content item, its caching cost, and the time it was most recently requested to construct a more realistic content-value function, and uses this function to design an effective content storage and replacement scheme. Specifically, when cache space is insufficient, cached items are evicted in order of increasing value. Simulation results show that, compared with the traditional replacement algorithms LRU, LFU, and FIFO, the proposed scheme effectively improves the content cache hit ratio of network nodes and reduces the average hop count for users to obtain content.
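The sketch below illustrates one plausible form of such a value function, with value growing in popularity and caching cost and decaying with time since the last request; the exact functional form and the numbers are assumptions for illustration.

```python
import time

def content_value(popularity, cache_cost, last_request_time, now=None):
    """Illustrative content value: higher for popular, costly-to-refetch
    items; lower the longer an item has gone unrequested."""
    now = time.time() if now is None else now
    age = max(now - last_request_time, 1e-9)
    return popularity * cache_cost / age

def evict_lowest_value(cache, now=None):
    # cache: name -> (popularity, cache_cost, last_request_time)
    victim = min(cache, key=lambda n: content_value(*cache[n], now=now))
    del cache[victim]
    return victim

store = {
    "video/1": (0.60, 5.0, 100.0),   # popular and costly to re-fetch
    "img/7":   (0.05, 1.0, 120.0),   # unpopular, cheap
}
print(evict_lowest_value(store, now=130.0))   # -> img/7
```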

4.
In this paper, we analyze access traces to a Web proxy, looking at statistical parameters to be used in the design of a replacement policy for documents held in the cache. In the first part of this paper, we present a number of properties of the lifetime and statistics of access to documents, derived from two large trace sets coming from very different proxies and spanning time intervals of up to five months. In the second part, we propose a novel replacement policy, called LRV, which selects for replacement the document with the lowest relative value among those in cache. In LRV, the value of a document is computed adaptively based on information readily available to the proxy server. The algorithm has no hardwired constants, and the computations associated with the replacement policy require only a small constant time. We show how LRV outperforms least recently used (LRU) and other policies and can significantly improve the performance of the cache, especially a small one.
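The sketch below shows the flavor of a relative-value computation: the probability of re-access, estimated from the number of previous accesses and the time since the last one, scaled by per-byte miss cost. The probability table and decay constant are illustrative stand-ins for the statistics LRV estimates online from proxy data.

```python
import math

def lrv_value(n_prev_accesses, time_since_last, cost, size,
              p_again=(0.15, 0.45, 0.65), decay=1e-3):
    """Illustrative relative value: P(re-access) scaled by miss cost per
    byte. `p_again` and `decay` are hypothetical, not LRV's fitted values."""
    idx = min(n_prev_accesses, len(p_again)) - 1
    p = p_again[idx] * math.exp(-decay * time_since_last)
    return p * cost / size

docs = {
    "a.html": (1, 600.0, 2.0, 4096),   # (accesses, age_s, cost, size_bytes)
    "b.css":  (3, 60.0, 1.0, 1024),
}
victim = min(docs, key=lambda d: lrv_value(*docs[d]))
print(victim)   # the document with the lowest relative value is replaced
```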

5.
This paper aims at finding fundamental design principles for hierarchical Web caching. An analytical modeling technique is developed to characterize an uncooperative two-level hierarchical caching system where the least recently used (LRU) algorithm is locally run at each cache. With this modeling technique, we are able to identify a characteristic time for each cache, which plays a fundamental role in understanding the caching processes. In particular, a cache can be viewed roughly as a low-pass filter with its cutoff frequency equal to the inverse of the characteristic time. Documents with access frequencies lower than this cutoff frequency are likely to pass through the cache without a hit. This viewpoint lets us treat any branch of the cache tree as a tandem of low-pass filters with different cutoff frequencies, which leads to two fundamental design principles. Finally, to demonstrate how to use the principles to guide caching algorithm design, we propose a cooperative hierarchical Web caching architecture based on them. Both model-based and real-trace simulation studies show that the proposed cooperative architecture yields more than 50% memory saving and substantial central processing unit (CPU) power saving for the management and update of cache entries compared with the traditional uncooperative hierarchical caching architecture.
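The characteristic time can be computed numerically. Under the standard working approximation for an LRU cache of capacity C receiving independent requests at per-document rates lambda_i, T solves sum_i (1 - exp(-lambda_i * T)) = C; the sketch below finds T by bisection. The Zipf-like rates are illustrative inputs.

```python
import math

def characteristic_time(rates, capacity, iters=50):
    """Solve sum_i (1 - exp(-lambda_i * T)) = C for T. Documents with
    request rate well below 1/T mostly pass through without a hit, which
    is the low-pass-filter view described in the abstract."""
    def filled(t):
        return sum(1.0 - math.exp(-lam * t) for lam in rates)
    lo, hi = 0.0, 1.0
    while filled(hi) < capacity:      # grow the bracket until it contains T
        hi *= 2.0
    for _ in range(iters):            # bisection on a monotone function
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if filled(mid) < capacity else (lo, mid)
    return (lo + hi) / 2.0

# Zipf-like request rates for 1000 documents, cache of 100 entries
rates = [1.0 / (i + 1) for i in range(1000)]
T = characteristic_time(rates, 100)
print("T =", T, "cutoff frequency ~", 1.0 / T)
```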

6.
Web caching is an important strategy for improving Web performance. In this paper, we design SmartCache, a router-based system for reducing Web page load time in home broadband access networks, composed of a cache, an SVM trainer and classifier, and a browser extension. More specifically, the browser extension interacts with users to collect their experienced satisfaction and to prepare the training dataset for the SVM trainer. With the desired features extracted from the training dataset, the SVM classifier predicts the classes of Web objects. Then, integrated with LFU, the cache performs replacement according to the SVM-LFU policy. Finally, by implementing SmartCache on a Netgear router and Chrome browsers, we evaluate our SVM-LFU algorithm in terms of Web page load time, SVM accuracy, and cache performance; the experimental results show that SmartCache can greatly reduce Web page load time.
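A toy sketch of the SVM-LFU decision is given below: a classifier trained on labeled object features marks objects as low- or high-value, and LFU eviction prefers victims among the low-value class. The features, labels, and data are invented for illustration, and the example assumes scikit-learn is available.

```python
from sklearn import svm

# Hypothetical training data: [size_bytes, reference_count, satisfaction]
X_train = [[2048, 1, 0.2], [512, 9, 0.9], [4096, 2, 0.1], [256, 12, 0.8]]
y_train = [0, 1, 0, 1]            # 1 = object judged valuable by users
clf = svm.SVC()
clf.fit(X_train, y_train)

def pick_victim(cache):
    # cache: key -> {"freq": int, "features": [size, refs, satisfaction]}
    preds = {k: clf.predict([v["features"]])[0] for k, v in cache.items()}
    low = [k for k, p in preds.items() if p == 0] or list(cache)
    return min(low, key=lambda k: cache[k]["freq"])   # LFU among low-value

cache = {
    "a": {"freq": 5, "features": [1024, 2, 0.3]},
    "b": {"freq": 3, "features": [300, 10, 0.85]},
}
print(pick_victim(cache))   # "a": predicted low-value despite higher freq
```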

7.
The growth of the World Wide Web and web-based applications is creating demand for high-performance web servers to offer better throughput and shorter user-perceived latency. This demand leads to widely used cluster-based web servers in the Internet infrastructure. Load balancing algorithms play an important role in boosting the performance of cluster web servers. Previous load balancing algorithms suffer a significant performance drop under dynamic and database-driven workloads. We propose an estimation-based load balancing algorithm with admission control for cluster-based web servers. Because it is difficult to determine the load of web servers accurately, we propose an approximate policy. The algorithm classifies requests based on their service times and tracks the number of outstanding requests of each class in each web server node to dynamically estimate the load state of each web server. The available capacity of each web server is then computed and used for load balancing and admission control decisions. The implementation results confirm that the proposed scheme improves both the mean response time and the throughput of clusters compared to rival load balancing algorithms and prevents clusters from being overloaded even when request rates exceed the cluster capacity.
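The sketch below captures the dispatch loop as described: requests are classified by nominal service time, each node tracks outstanding requests per class to estimate its load, and a request is rejected when even the least-loaded node lacks capacity. The class weights and capacities are illustrative assumptions.

```python
# Nominal per-class service times (illustrative units of work)
SERVICE_TIME = {"static": 1.0, "dynamic": 8.0, "db": 25.0}

class Node:
    def __init__(self, name, capacity):
        self.name, self.capacity = name, capacity
        self.outstanding = {cls: 0 for cls in SERVICE_TIME}

    def load(self):
        # Estimated load: outstanding requests weighted by service time
        return sum(n * SERVICE_TIME[c] for c, n in self.outstanding.items())

def dispatch(nodes, req_class):
    node = min(nodes, key=Node.load)          # least estimated load
    if node.load() + SERVICE_TIME[req_class] > node.capacity:
        return None                           # admission control: reject
    node.outstanding[req_class] += 1
    return node

nodes = [Node("w1", 30.0), Node("w2", 30.0)]
for cls in ["db", "static", "db", "dynamic"]:
    n = dispatch(nodes, cls)
    print(cls, "->", n.name if n else "rejected")
```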

8.
The LRU replacement algorithm is widely used in single-core processors, while multi-core systems mostly adopt a last-level cache (LLC) shared among cores. As LLC capacity and associativity increase and the working sets of multi-core applications grow, the gap between LRU and the theoretically optimal replacement algorithm keeps widening. This paper proposes ALRU-F, a frequency-based replacement algorithm for a multi-core shared cache under even partitioning, which keeps the currently needed part of the working set in the cache and evicts useless blocks, and further proposes BLRU-F, a frequency-based replacement algorithm under block-granularity dynamic partitioning. Compared with traditional LRU, ALRU-F lowers the miss rate by 26.59% and raises IPC (instructions per clock) by 13.59%; BLRU-F improves performance further, lowering the miss rate by 33.72% and raising IPC by 16.59%. Both algorithms achieve these gains without noticeably increasing energy consumption.

9.
司成祥  孟晓烜  许鲁 《电子学报》2011,39(5):1205-1209
By analyzing web-search workloads, this paper summarizes the characteristics of their access patterns and, on that basis, proposes a new cache replacement algorithm, ERDP-LRU. It differs from the traditional LRU algorithm in adopting a placement policy based on reuse distance. Validation through simulation and on a real system shows that ERDP-LRU outperforms other replacement algorithms across a variety of typical workloads and cache sizes.
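The abstract does not detail the placement rule, but the sketch below shows the general shape of reuse-distance-based placement: blocks whose estimated reuse distance exceeds the cache size are inserted at the LRU end instead of the MRU end, so a long scan cannot flush blocks that will be reused sooner.

```python
class ReuseDistanceLRU:
    """Sketch of reuse-distance-aware placement (details are assumptions)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.stack = []                  # index 0 = LRU end, -1 = MRU end

    def access(self, key, est_reuse_distance):
        if key in self.stack:
            self.stack.remove(key)
            self.stack.append(key)       # promote to MRU on a hit
            return True
        if len(self.stack) >= self.capacity:
            self.stack.pop(0)            # evict from the LRU end
        if est_reuse_distance > self.capacity:
            self.stack.insert(0, key)    # place at LRU end: no reuse soon
        else:
            self.stack.append(key)       # normal MRU placement
        return False

c = ReuseDistanceLRU(3)
for key, dist in [("a", 1), ("b", 2), ("scan1", 99), ("scan2", 99), ("a", 1)]:
    print(key, c.access(key, dist), c.stack)   # scans evict only each other
```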

10.
Cost-Effective Caching for Mobility Support in IEEE 802.1X Frameworks
This paper is concerned with caching support in access points (APs) for fast handoff within IEEE 802.11 networks. A common flavor of current schemes is to let a mobile station preauthenticate, or to distribute the station's security context proactively to neighboring APs. Each target AP caches the received context beforehand and can spare itself a backend-network authentication if the station reassociates. We present an approach to improving cache effectiveness under the least recently used (LRU) replacement policy, additionally allowing for distinct cache miss penalties reflecting authentication delay. We leverage widely used LRU caching techniques in a new model where high-penalty cache entries are prevented from being prematurely evicted under the conventional replacement policy, saving frequent, expensive authentications with remote sites. This is accomplished by introducing software-generated reference requests that trigger the cache hardware machinery in APs to refresh certain entries in an automated manner. Performance evaluations are conducted using simulation and analytical modeling. The results show that our approach, compared with the base LRU scheme, reduces authentication delay by more than 51 percent and cache miss ratio by over 28 percent on average. Quantitative and qualitative discussions indicate that our approach is applicable in pragmatic settings.
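A simplified sketch of the refresh mechanism: entries whose miss penalty (re-authentication delay with a remote site) exceeds a threshold receive periodic software-generated "touch" references, so the ordinary LRU machinery keeps them near the MRU end. The threshold and penalty values are illustrative.

```python
from collections import OrderedDict

class APCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()         # station -> miss penalty (ms)

    def reference(self, station, penalty=0.0):
        if station in self.entries:
            self.entries.move_to_end(station)        # ordinary LRU promotion
        else:
            if len(self.entries) >= self.capacity:
                self.entries.popitem(last=False)     # evict true LRU
            self.entries[station] = penalty

    def refresh_high_penalty(self, threshold_ms=200.0):
        # Software-generated references: re-touch costly entries so the
        # LRU hardware keeps them near the MRU end.
        for s in [s for s, p in self.entries.items() if p > threshold_ms]:
            self.entries.move_to_end(s)

cache = APCache(2)
cache.reference("sta-remote", penalty=500.0)   # expensive remote-site auth
cache.reference("sta-local", penalty=20.0)
cache.refresh_high_penalty()                   # sta-remote jumps past sta-local
cache.reference("sta-new", penalty=20.0)       # evicts sta-local, not sta-remote
print(list(cache.entries))                     # ['sta-remote', 'sta-new']
```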

11.
黄涛  王晶  管雪涛  钟祺  王克义 《电子学报》2012,40(12):2433-2438
Most existing cache replacement algorithms cannot effectively identify the locality characteristics of data, so data about to be accessed may be evicted from the cache by data that will never be accessed again, causing cache pollution. The performance loss due to last-level cache (LLC) pollution grows as the performance gap between processor and memory widens, making it one of the major bottlenecks limiting system performance. Targeting LLC pollution, this paper profiles the page-level memory access behavior of memory-intensive programs and proposes a software-controlled LLC insertion policy. By controlling and guiding the insertion position of data at page granularity, the method restricts pages with poor locality to a limited portion of the LLC, thereby reducing LLC pollution. Experimental results show that, compared with the LRU and DIP policies, the method effectively lowers the LLC miss rate and improves program performance.
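One way to read the approach is sketched below: page-level profiling tags pages with little reuse as low-locality, and blocks from those pages are then inserted at the LRU position rather than the MRU position. The page-classification threshold and the two-position insertion rule are assumptions for illustration.

```python
from collections import Counter

def classify_pages(trace, page_size=4096, reuse_threshold=2):
    """Profile page-level access behavior: pages touched fewer than
    `reuse_threshold` times are tagged low-locality (threshold assumed)."""
    counts = Counter(addr // page_size for addr in trace)
    return {page for page, n in counts.items() if n < reuse_threshold}

def insertion_position(addr, low_locality_pages, page_size=4096):
    # Low-locality pages are inserted at the LRU position, confining them
    # to a small fraction of the set; others get the normal MRU insertion.
    return "LRU" if addr // page_size in low_locality_pages else "MRU"

trace = [0x1000, 0x1040, 0x2000, 0x3000, 0x1080]   # page 1 reused, 2 and 3 not
cold = classify_pages(trace)
print(cold, insertion_position(0x2000, cold), insertion_position(0x1000, cold))
```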

12.
LRU-Assist: An Efficient Algorithm for Controlling Cache Leakage Power
As integrated-circuit fabrication enters the very deep submicron era, leakage power accounts for an ever larger share of total microprocessor power. Beyond developing new low-leakage processes and circuit techniques, controlling and optimizing leakage power at the architecture level has become a research focus. The cache occupies the largest area in a microprocessor and is therefore the primary target for leakage control. LRU is the most common replacement algorithm for set-associative caches, and studies show that memory accesses rarely hit the lower half of the LRU stack. Building on control policies such as Drowsy Cache and Cache Decay, the LRU-Assist algorithm uses the existing LRU information to raise the average fraction of cache lines that can be turned off by 15% without affecting processor performance, greatly reducing leakage power.
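The observation behind LRU-Assist can be sketched as follows: since hits in the lower half of the LRU stack are rare, the ways holding that half are candidates for power gating (a drowsy or off state), to be woken on the rare hit. The set model below is deliberately simplified.

```python
class LeakageAwareSet:
    """Toy LRU set that reports which lines fall in the lower (least
    recently used) half of the LRU ordering and are thus gating candidates."""

    def __init__(self, ways):
        self.ways = ways
        self.stack = []                 # index 0 = MRU ... end = LRU

    def access(self, tag):
        hit = tag in self.stack
        if hit:
            self.stack.remove(tag)
        elif len(self.stack) >= self.ways:
            self.stack.pop()            # evict LRU
        self.stack.insert(0, tag)       # accessed line becomes MRU
        return hit

    def powered_off_lines(self):
        # Lower half of the LRU ordering is eligible for power gating
        return self.stack[self.ways // 2:]

s = LeakageAwareSet(4)
for t in ["a", "b", "c", "d", "a"]:
    s.access(t)
print("gated:", s.powered_off_lines())   # ['c', 'b']: rarely-hit half
```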

13.
A Data Cache Design with Statically Controllable Power
In current microprocessor designs, on-chip cache memory accounts for an ever larger share of energy consumption. This paper presents a data cache design method that effectively reduces power consumption. By statically adjusting the set-mapping policy, the method tunes the data cache capacity to the characteristics of the application and selects a suitable replacement algorithm, reducing energy consumption while maintaining high performance.

14.
A Disk Cache Replacement Algorithm Combined with a Dynamic Write Policy
Disk caching is a technique for improving I/O performance. By analyzing how cache write policies and the LRU and LFU replacement algorithms affect disk cache performance, this paper introduces a dynamic write policy and an improved replacement algorithm, combining the frequency-based block replacement algorithm FBR with the dynamic write policy. The combination applies well to disk access, fully exploits locality, improves I/O performance, and makes the disk perform better across a variety of workloads and cache sizes.

15.
This paper proposes an algorithm for partitioning the shared last-level cache based on an improved LRU replacement policy. The approach isolates data conflicts between threads and implements the improved replacement policy; by partitioning the shared last-level cache it also reduces memory access latency and improves system throughput.

16.
One of the key research fields of content-centric networking (CCN) is the development of more efficient cache replacement policies to improve the hit ratio of CCN in-network caching. However, most existing cache strategies, designed mainly on the basis of the time or frequency of content access, cannot properly handle the dynamics of content popularity in the network. In this paper, we propose FCDC, a fast-convergence cache replacement algorithm for CCN based on a dynamic classification method. The dynamic classification method reduces the time complexity of cache inquiry and achieves a higher cache hit rate than random classification under dynamically changing content popularity. Meanwhile, to mitigate the influence of dynamic content popularity, a weighting function is designed to speed up cache hit rate convergence in the CCN router. Experimental results show that the proposed scheme outperforms replacement policies based on least recently used (LRU) and recent usage frequency (RUF) in cache hit rate and resiliency when content popularity in the network varies.
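The abstract does not give the weighting function, but an exponentially weighted popularity estimate of the kind sketched below converges quickly when popularity shifts, and binning the estimate into classes keeps cache inquiry cheap. The smoothing factor and class boundaries are assumptions.

```python
def update_popularity(est, requests_this_slot, alpha=0.6):
    # Recent slots get exponentially more weight, so the estimate tracks
    # shifts in popularity quickly (alpha is an assumed smoothing factor).
    return alpha * requests_this_slot + (1 - alpha) * est

def classify(est, boundaries=(1.0, 10.0)):
    # Dynamic classes keep cache inquiry cheap: hot classes searched first
    for cls, b in enumerate(boundaries):
        if est < b:
            return cls
    return len(boundaries)

est = 0.0
for slot_requests in [0, 2, 15, 18, 20]:        # popularity ramps up
    est = update_popularity(est, slot_requests)
    print(round(est, 2), "class", classify(est))
```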

17.
Information-centric networking (ICN) has emerged as a promising candidate for designing content-based future Internet paradigms. ICN increases network utilization through location-independent content naming and in-network content caching. In routers, the cache replacement policy determines which content to replace when cache free space runs short; it thus has a direct influence on user experience, especially content delivery time. Meanwhile, content can be provided from different locations simultaneously because of the multi-source property of content in ICN. To the best of our knowledge, no work has yet studied the impact of the cache replacement policy on content delivery time considering multi-source content delivery in ICN, the issue addressed in this paper. As our contribution, we analytically quantify the average content delivery time under different cache replacement policies, namely least recently used (LRU) and random replacement (RR). Notably, we show that which policy is superior depends on the popularity distribution of the contents. The expected content delivery time in an example network topology was studied by both theoretical and experimental methods. On the basis of the obtained results, some interesting findings on the performance of the studied cache replacement policies are provided.

18.
A Novel Pseudo-LRU-Based Shared Cache Partitioning Mechanism
倪亚路  周晓方 《电子学报》2013,41(4):681-684
This paper proposes PLRU-SCP, a novel dynamic shared-cache partitioning policy based on the pseudo-LRU method. The proposed policy introduces a new binary-tree-based analysis method in the monitoring circuit and a non-traversing partitioning algorithm in the partitioning circuit, and further proposes a new shared cache structure. Compared with an unpartitioned LRU-based shared cache and with a utility-optimal partitioning policy, the proposed policy improves performance by 11.05% and 8.66%, respectively.
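For background, the sketch below implements textbook tree pseudo-LRU for a 4-way set, the base policy that PLRU-SCP builds on: three bits form a binary tree, each pointing toward the less recently used half. This is the standard scheme, not the paper's analysis or partitioning circuits.

```python
class TreePLRU4:
    """Textbook tree pseudo-LRU for a 4-way set: bits[0] is the root,
    bits[1]/bits[2] cover ways {0,1} and {2,3}; each bit points toward
    the half to evict from next."""

    def __init__(self):
        self.bits = [0, 0, 0]

    def victim(self):
        b0 = self.bits[0]                # pick the colder half
        b1 = self.bits[1 + b0]           # pick the colder way within it
        return 2 * b0 + b1

    def touch(self, way):
        b0, b1 = way >> 1, way & 1
        self.bits[0] = 1 - b0            # point away from the accessed half
        self.bits[1 + b0] = 1 - b1       # and away from the accessed way

p = TreePLRU4()
for w in [0, 1]:
    p.touch(w)
print(p.victim())   # 2: an untouched way from the cold half is chosen
```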

19.
Information-centric networking (ICN) proposes a content-centric paradigm with some attractive advantages, such as reduced network load, low dissemination latency, and energy efficiency. In this paper, based on an analytical model of ICN with a receiver-driven transport protocol employing the least recently used (LRU) replacement policy, we derive expressions for the average content delivery time of the request arrival sequence at a single cache, and then extend the expressions to a cascade-of-caches scenario. From these expressions, we obtain the quantitative relationship among delivery time, cache size, and bandwidth. Our results, which analyze the trade-offs between performance and resources in ICN, can be used as a guide for designing ICN and evaluating its performance.
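The general shape of such an expression can be sketched numerically: the mean delivery time telescopes over per-level hit probabilities in the cascade. The hit probabilities and delays below are illustrative inputs, not the paper's derived closed forms.

```python
def mean_delivery_time(hit_probs, delays, server_delay):
    """hit_probs[i], delays[i]: hit probability and access delay at cache
    level i of the cascade; misses at every level go to the origin server."""
    t, p_reach = 0.0, 1.0
    for h, d in zip(hit_probs, delays):
        t += p_reach * h * d        # request served at this level
        p_reach *= (1 - h)          # miss: forwarded to the next level
    return t + p_reach * server_delay

# Two cache levels (5 ms and 20 ms away) in front of a 100 ms origin server
print(mean_delivery_time([0.4, 0.3], [5.0, 20.0], 100.0))   # 47.6
```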

20.
In this paper we discuss the performance of a document distribution model that interconnects Web caches through a satellite channel. In recent years, Web caching has emerged as an important way to reduce client-perceived latency and network resource requirements in the Internet. Satellite distribution is also being rapidly deployed to offer Internet services while avoiding highly congested terrestrial links. When Web caches are interconnected through satellite distribution, the caches end up containing all documents requested by a huge community of clients. With a large community of clients connected to a cache, the probability that a client is the first to request a document is very small, and the number of requests hit in the cache increases. In this paper we develop analytical models to study the performance of a cache-satellite distribution. We derive simple expressions for the hit rate of the caches, the bandwidth in the satellite channel, the latency experienced by the clients, and the required capacity of the caches. Additionally, we use trace-driven simulations to validate our model and evaluate the performance of a real cache-satellite distribution.
