Similar Documents
20 similar documents found (search time: 250 ms)
1.
Song  Xiaodong   《Performance Evaluation》2005,60(1-4):5-29
Most computer systems use a global page replacement policy based on the LRU principle to approximately select the least recently used page for replacement across the entire user memory space. During execution, a memory page can be marked as LRU even while its program is servicing page faults. We define LRU pages arising under this condition as false LRU pages, because they are not produced by program memory-reference delays, which is inconsistent with the LRU principle. False LRU pages can significantly increase page faults and can even cause system thrashing. This poses a more serious risk in large parallel systems with distributed memory because of the coordination among processes running on individual nodes: thrashing on a single node or a small number of nodes can severely affect other nodes running coordinating processes, and can even bring down the whole system. In this paper, we focus on improving the page replacement algorithm running on one node.

After a careful study characterizing memory usage and thrashing behavior in multiprogramming systems under LRU replacement, we propose an LRU replacement alternative, called token-ordered LRU, to eliminate or reduce unnecessary page faults by effectively ordering and scheduling memory space allocations. Compared with traditional thrashing-protection mechanisms such as load control, our policy allows more processes to keep running, which supports synchronous distributed process computing. We have implemented the token-ordered LRU algorithm in a Linux kernel to show its effectiveness.
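A minimal user-level sketch of the token idea as this abstract describes it: one process holds the token, and its resident pages are exempt from LRU eviction so its false LRU pages are not reclaimed while it resolves page faults. The class and method names are illustrative; the actual mechanism lives in the kernel's reclaim path.

```python
from collections import OrderedDict

class TokenOrderedLRU:
    """Sketch of token-ordered LRU: the token holder's pages are exempt
    from eviction; everyone else is replaced in plain LRU order."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()   # (pid, page_no) -> True, LRU-first order
        self.token_pid = None        # pid currently holding the token

    def grant_token(self, pid):
        self.token_pid = pid

    def access(self, pid, page_no):
        key = (pid, page_no)
        if key in self.pages:
            self.pages.move_to_end(key)       # hit: mark most recently used
            return "hit"
        if len(self.pages) >= self.capacity:
            self._evict()
        self.pages[key] = True                # fault: load the page
        return "miss"

    def _evict(self):
        # Evict the least recently used page NOT owned by the token holder.
        victim = next((k for k in self.pages if k[0] != self.token_pid), None)
        if victim is not None:
            del self.pages[victim]
        else:
            self.pages.popitem(last=False)    # all pages are the holder's
```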


2.
The minimization of page faults in a demand-paging environment where program behavior is described by the LRU stack model is studied. Previous work on this subject considered a specific type of stack probability distribution; since practical situations exist that do not satisfy this assumption, we extend the analysis to arbitrary distributions. The minimization is stated as a constrained optimization problem, a method to obtain a class of optimal solutions is presented, and a fixed-space replacement algorithm implementing these solutions is proposed. The parameters of this replacement algorithm can be varied to adapt to specific stack probability distributions and to the number of allocated pages in memory. This paper also determines a necessary and sufficient condition for the optimality of the LRU algorithm.
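As background for this abstract, a standard identity from the LRU stack model (notation assumed here, not taken from the paper): let a_i be the probability that a reference has stack distance i. Since LRU with m allocated frames keeps exactly the m most recently used pages resident, a reference faults precisely when its stack distance exceeds m:

```latex
F_{\mathrm{LRU}}(m) \;=\; \sum_{i=m+1}^{\infty} a_i ,
\qquad \sum_{i=1}^{\infty} a_i = 1 .
```

The paper's optimization chooses, for arbitrary distributions {a_i}, a fixed-space replacement rule minimizing the corresponding fault-rate expression; its condition for LRU itself being optimal is not reproduced here.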

3.
蒋飞虎  舒平 《微机发展》2006,16(5):42-43
Page replacement is an important part of virtual memory management in operating systems. Improving the page replacement algorithm can lower the page-fault rate and thus effectively improve system performance. The page replacement algorithm most widely used in virtual memory management today is Least Recently Used (LRU). Using page access interval counts, this paper analyzes how different page access sequences affect the LRU algorithm and classifies them into three types: LRU-friendly, LRU-unfriendly, and unfriendly page access sequences, providing a basis for improving the LRU page replacement algorithm.
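The abstract does not give the paper's exact classification criteria; the sketch below only computes the inter-reference intervals such an analysis starts from (a rough proxy for the "page access interval count"; the paper's definition may differ), and the example trace is the classic LRU-unfriendly cyclic pattern.

```python
def reference_intervals(trace):
    """Inter-reference interval of each access: number of references since
    the previous access to the same page (None for first accesses)."""
    last_seen = {}
    intervals = []
    for t, page in enumerate(trace):
        intervals.append(t - last_seen[page] if page in last_seen else None)
        last_seen[page] = t
    return intervals

# A cyclic pattern slightly larger than the cache (4 pages looping over a
# 3-frame cache) yields constant intervals of 4 and zero LRU hits, a
# textbook LRU-unfriendly sequence.
trace = [1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4]
print(reference_intervals(trace))  # [None, None, None, None, 4, 4, ...]
```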

4.
Replacement algorithms have been widely used as key technologies for cache management in areas such as file systems and database management. A replacement algorithm determines which page to evict when the cache is full and a new page is referenced. Because replacement policies that consider only recency or only frequency, such as LRU (Least Recently Used) and LFU (Least Frequently Used), do not perform well, policies that take both recency and frequency into account have been intensively studied. As a classical replacement policy, LRFU (Least Recently/Frequently Used) subsumes both LRU and LFU. However, because LFU cannot adapt to changes in the page access pattern and it is hard to select a suitable λ for each trace, LRFU cannot always guarantee good performance. In this paper, we propose a Window-LRFU policy that subsumes the LRU and Window-LFU policies. Experimental results show that Window-LRFU outperforms LRFU and performs at least competitively with other classical algorithms. Copyright © 2015 John Wiley & Sons, Ltd.
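For reference, a sketch of the classic LRFU bookkeeping this paper builds on: each cached block carries a Combined Recency and Frequency value CRF = Σ_i (1/2)^(λ(t−t_i)) over its past reference times t_i, updated incrementally; λ = 0 degenerates to LFU and λ = 1 to LRU. The windowing that Window-LRFU adds (limiting how far back references count) is the paper's contribution and is omitted here.

```python
class LRFUCache:
    """Classic LRFU: evict the block with the smallest decayed CRF value.
    CRF is maintained incrementally: on a hit after a gap of d time units,
    new_crf = old_crf * 0.5**(lam * d) + 1."""

    def __init__(self, capacity, lam=0.5):
        self.capacity = capacity
        self.lam = lam
        self.blocks = {}   # block -> (crf, last_access_time)
        self.clock = 0

    def _decayed(self, crf, last_time):
        return crf * 0.5 ** (self.lam * (self.clock - last_time))

    def access(self, block):
        self.clock += 1
        if block in self.blocks:
            crf, last = self.blocks[block]
            self.blocks[block] = (self._decayed(crf, last) + 1.0, self.clock)
            return "hit"
        if len(self.blocks) >= self.capacity:
            victim = min(self.blocks,
                         key=lambda b: self._decayed(*self.blocks[b]))
            del self.blocks[victim]
        self.blocks[block] = (1.0, self.clock)
        return "miss"
```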

5.
Page replacement algorithms for main memory in modern operating systems are crucial to system performance. When memory is full, a page replacement algorithm exploits temporal locality and the frequency of page references to evict the page least likely to be accessed in the near future. Loading the majority of data directly from memory then improves performance by reducing the I/O waits of accessing slow storage. Replacement algorithms that maximize hit ratio while incurring as little overhead as possible have been studied continually. In this paper, we propose a time-shift least recently used (TSLRU) algorithm that converts the frequency information of page references into temporal locality. Frequent accesses to a page are thus recognized and accumulated in terms of time. Moreover, pages being loaded into memory for the first time are not necessarily treated as the most recently used pages; as a result, one-pass pages are evicted sooner in our algorithm than in the traditional LRU algorithm. Our performance evaluations show that TSLRU outperforms conventional page replacement algorithms on both artificial and real application traces. For example, the hit ratio of TSLRU exceeds ARC by 4.17% and LRU by 5.91% on normally distributed workloads. Moreover, TSLRU outperforms ARC by over 2% on half of the application traces tested.
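A loose sketch of the time-shift idea, to the extent the abstract reveals it: page priority is a logical timestamp, repeated hits shift it forward in time (frequency becomes recency), and first-touch pages start aged so one-pass pages leave sooner than under plain LRU. The constants and the exact shift rule below are hypothetical, not the paper's.

```python
class TSLRUSketch:
    """Illustrative time-shift LRU: evict the page with the smallest
    logical timestamp. Repeated hits push a page's timestamp beyond the
    current clock; brand-new pages start behind it."""

    FIRST_TOUCH_AGE = 8   # hypothetical aging applied to new pages
    SHIFT_PER_HIT = 4     # hypothetical forward shift per repeated hit

    def __init__(self, capacity):
        self.capacity = capacity
        self.prio = {}     # page -> logical timestamp (larger = kept longer)
        self.hits = {}
        self.clock = 0

    def access(self, page):
        self.clock += 1
        if page in self.prio:
            self.hits[page] += 1
            self.prio[page] = self.clock + self.SHIFT_PER_HIT * self.hits[page]
            return "hit"
        if len(self.prio) >= self.capacity:
            victim = min(self.prio, key=self.prio.get)
            del self.prio[victim], self.hits[victim]
        self.prio[page] = self.clock - self.FIRST_TOUCH_AGE
        self.hits[page] = 0
        return "miss"
```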

6.
An Improved Adaptive Page Replacement Algorithm
This paper studies buffer page replacement strategies and algorithms, in particular adaptive ones, and proposes HA, an adaptive page replacement algorithm based on dual management chains. HA improves on the DMC(2c) algorithm: it introduces a dynamic replacement point, determines the algorithm's working chain from the number of page faults, and selects the more efficient replacement strategy according to local characteristics of the page access sequence. Experimental results show that HA effectively reduces the number of page faults and the page-fault rate; in particular, on page access sequences of the third pattern, its page-fault rate is nearly 30% lower than that of the algorithm it improves upon.

7.
A Flash Cache Algorithm Using Page Reconstruction and Data Temperature Identification
Flash-based solid-state drives (SSDs) offer far better performance than magnetic disks and are gradually replacing them in desktop systems. However, even though SSDs embed DRAM as a cache, flash can exhibit unstable write performance under sustained writes, mainly because logical page writes frequently trigger out-of-place writes and garbage collection. To address this problem, a new flash cache management method called PRLRU is proposed, which, through a page reconstruction mechanism and data temperature identi…

8.
In a multinode data sharing environment, different buffer coherency control schemes based on various lock retention mechanisms can be designed to exploit the concept of deferring the propagation or writing of dirty pages to disk to improve normal performance. Two types of deferred write policies are considered. One policy propagates dirty pages to disk only when dirty pages are flushed out of the buffer under LRU buffer replacement. The other policy also performs writes when dirty pages are transferred across nodes. The dirty page propagation policy can have significant implications for the database recovery time. In this paper, we provide an analytical modeling framework for the analysis of recovery times under the two deferred write policies, and demonstrate how these policies can be mapped onto a unified analytic modeling framework. The main challenge in the analysis is to obtain the pending-update count distribution, which can be used to determine the average numbers of log records and data I/Os that must be applied during recovery. The analysis goes beyond previous work on modeling buffer hit probability in a data sharing system, where only the average buffer composition, not the distribution, needs to be estimated, and on recovery analysis in a single-node environment, where the complexities of tracking the propagation of dirty pages across nodes and the buffer invalidation effect do not arise.

9.
The possibility of fast access to the main memory of remote sites has been advanced as a potential performance improvement in distributed systems. Even if a page is not available in local memory, sites need not perform a disk access; instead, they can use efficient mechanisms that support rapid request/response exchanges to access pages currently buffered at a remote site. Hardware and software support in such a remote caching architecture must also include algorithms that determine which pages should be buffered at which sites. When each site uses the classic LRU replacement algorithm, performance can be much worse than optimal in many system configurations: because sites do not coordinate individual decisions, overall system buffering/caching decisions yield very inefficient global configurations. This paper proposes an easily implementable modification of the LRU replacement algorithm for LAN environments that reduces replication. The algorithm substantially improves hit ratios, and thus performance, over a wide range of parameters. The relatively simple LAN topology means that much less state information is needed for good replacement decisions than in general network topologies. The implications of two variations of the algorithm are explored. In an environment where the network is not a performance bottleneck and performance is memory-limited, the performance of the proposed replacement algorithm is shown to be close to optimal.

10.
Exploiting Regularities in Web Traffic Patterns for Cache Replacement
Cohen  Kaplan 《Algorithmica》2002,33(3):300-334
Abstract. Caching web pages at proxies and in web servers' memories can greatly enhance performance. Proxy caching is known to reduce network load, and both proxy and server caching can significantly decrease latency. Web caching problems have different properties than traditional operating systems caching, and cache replacement can benefit by recognizing and exploiting these differences. We address two aspects of the predictability of traffic patterns: the overall load experienced by large proxy and web servers, and the distinct access patterns of individual pages. We formalize the notion of "cache load" under various replacement policies, including LRU and LFU, and demonstrate that the trace of a large proxy server exhibits regular load. Predictable load allows for improved design, analysis, and experimental evaluation of replacement policies. We provide a simple and (near) optimal replacement policy when each page request has an associated distribution function on the next request time of the page. Without the predictable-load assumption, no such online policy is possible, and it is known that even obtaining an offline optimum is hard. For experiments, predictable load enables comparing and evaluating cache replacement policies using partial traces, containing requests made to only a subset of the pages. Our results are based on considering a simpler caching model which we call the interval caching model. We relate traditional and interval caching policies under predictable load, and derive (near-)optimal replacement policies from their optimal interval caching counterparts.

11.
Operating system designers attempt to keep CPU utilization high by maintaining an optimal multiprogramming level (MPL). Although running more processes makes it less likely that the CPU is left idle, too many processes incur serious memory competition and can even introduce thrashing, which eventually lowers CPU utilization. A common practice to address the problem is to lower the MPL with the aid of process swap-out/swap-in operations. This approach is expensive and is only used when the system begins thrashing seriously. The objective of our study is to provide highly responsive and cost-effective thrashing protection by adaptively conducting priority page replacement in a timely manner. We have designed a dynamic Thrashing Protection Facility (TPF) in the system kernel. Once TPF detects system thrashing, one of the active processes is identified for protection. The identified process is given a short period of privilege during which it does not contribute its least recently used (LRU) pages for removal, so that it can quickly establish its working set, improving CPU utilization. With the support of TPF, thrashing can be eliminated in its early stage by adaptive page replacement, so that process swapping is avoided or delayed until it is truly necessary. We have implemented TPF in a current and representative Linux kernel running on an Intel Pentium machine. Compared with the original Linux page replacement, we show that TPF consistently and significantly reduces page faults and the execution time of each individual job in several groups of interacting SPEC CPU2000 programs. We also show that TPF introduces little additional overhead to program execution, and that its implementation in Linux (or Unix) systems is straightforward. Copyright © 2002 John Wiley & Sons, Ltd.
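A control-loop sketch of the TPF idea described above: when aggregate page-fault pressure signals thrashing, pick one faulting process and exempt its resident pages from LRU reclaim for a short privilege window. The thresholds and the selection rule are illustrative assumptions; the real facility sits inside the kernel's page reclaim path.

```python
import time

class ThrashingProtector:
    """Sketch of a TPF-style protector: detect thrashing from sampled
    fault rates, protect the hardest-faulting process for a fixed window,
    and let the replacement policy consult may_evict() before reclaiming."""

    FAULT_RATE_THRESHOLD = 100.0   # total faults/sec (hypothetical)
    PRIVILEGE_SECONDS = 2.0        # protection window length (hypothetical)

    def __init__(self):
        self.protected_pid = None
        self.privilege_ends = 0.0

    def on_sample(self, fault_rates):
        """fault_rates: dict pid -> recent page-fault rate."""
        now = time.monotonic()
        if self.protected_pid is not None and now >= self.privilege_ends:
            self.protected_pid = None              # privilege expired
        total = sum(fault_rates.values())
        if self.protected_pid is None and total > self.FAULT_RATE_THRESHOLD:
            # Protect the process faulting hardest so it can rebuild its
            # working set instead of losing pages while waiting on I/O.
            self.protected_pid = max(fault_rates, key=fault_rates.get)
            self.privilege_ends = now + self.PRIVILEGE_SECONDS

    def may_evict(self, pid):
        """Called by the replacement policy before reclaiming a page."""
        return pid != self.protected_pid
```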

12.
Abstract. We study Web caching when the input sequence is a depth-first-search traversal of some tree. There are at least two good motivations for investigating tree traversal as a search technique on the WWW. First, empirical studies of people browsing and searching the WWW have shown that user access patterns commonly are nearly depth-first traversals of some tree. Secondly (as we show in this paper), the problem of visiting all the pages on a WWW site using anchor clicks (clicks on links) and back-button clicks, by far the two most common user actions, reduces to the problem of how best to cache a tree traversal sequence (up to constant factors). We show that for tree traversal sequences the optimal offline strategy can be computed efficiently. In the bit model, where the access time of a page is proportional to its size, we show that the online algorithm LRU is (1 + 1/ε)-competitive against an adversary with an unbounded cache as long as LRU has a cache of size at least (1 + ε) times the size of the largest item in the input sequence. In the general model, where pages have arbitrary access times and sizes, we show that in order to be constant-competitive, any online algorithm needs a cache large enough to store Ω(log n) pages, where n is the number of distinct pages in the input sequence. We provide a matching upper bound by showing that the online algorithm Landlord is constant-competitive against an adversary with an unbounded cache if Landlord has a cache large enough to store the Ω(log n) largest pages. This is further theoretical evidence that Landlord is the "right" algorithm for Web caching.
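For readers unfamiliar with Landlord (Young's algorithm, referenced above), a compact sketch of one access step: every cached object holds credit; on a fault, rent proportional to size is charged to all tenants until zero-credit objects free enough space. The helper name and dict layout are illustrative.

```python
def landlord_access(cache, page, size, cost, capacity):
    """One step of the Landlord algorithm.
    cache: dict page -> {'size': s, 'credit': c}. On a hit, the page's
    credit is refreshed to its cost (an LRU-like choice; Landlord allows
    any value up to cost). Assumes size <= capacity."""
    assert size <= capacity, "object must fit in the cache"
    if page in cache:
        cache[page]['credit'] = cost              # hit: refresh credit
        return "hit"
    used = sum(e['size'] for e in cache.values())
    while used + size > capacity:
        # Charge rent: the smallest credit-per-size tenant reaches zero.
        delta = min(e['credit'] / e['size'] for e in cache.values())
        for e in cache.values():
            e['credit'] -= delta * e['size']
        for q in [q for q, e in cache.items() if e['credit'] <= 1e-12]:
            used -= cache[q]['size']
            del cache[q]                          # evict zero-credit pages
    cache[page] = {'size': size, 'credit': cost}
    return "miss"
```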

13.
Because of the diversity of I/O workloads, neither pure LRU-type nor pure LFU-type algorithms can improve buffer efficiency on their own. The adaptive dual-stack LRU replacement algorithm (AD-LRU below) solves this problem well: LRS holds low-recency pages, HRS holds pages of both high recency and high frequency, and the sizes of the two are adjusted according to their efficiency. Experiments show that AD-LRU effectively improves the buffer hit ratio.
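A two-queue sketch following the description above: first-touch pages enter the cold queue (LRS), pages re-referenced while cached are promoted to the hot queue (HRS), and the split point adapts. The adaptation rule here, growing whichever queue has been hitting more, is a hypothetical stand-in for the paper's efficiency-based adjustment.

```python
from collections import OrderedDict

class ADLRUSketch:
    """Illustrative dual-queue LRU: LRS = cold (one touch), HRS = hot
    (re-referenced). Eviction drops the LRU end of whichever queue
    exceeds its adaptive target."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.lrs, self.hrs = OrderedDict(), OrderedDict()
        self.target_lrs = capacity // 2       # desired LRS size
        self.lrs_hits = self.hrs_hits = 0

    def access(self, page):
        if page in self.hrs:
            self.hrs.move_to_end(page)
            self.hrs_hits += 1
            return "hit"
        if page in self.lrs:                  # second touch: promote
            del self.lrs[page]
            self.hrs[page] = True
            self.lrs_hits += 1
            self._adapt()
            return "hit"
        self.lrs[page] = True                 # first touch goes to LRS
        self._evict()
        return "miss"

    def _adapt(self):
        if self.lrs_hits > self.hrs_hits and self.target_lrs < self.capacity - 1:
            self.target_lrs += 1              # cold queue is earning hits
        elif self.hrs_hits > self.lrs_hits and self.target_lrs > 1:
            self.target_lrs -= 1              # hot queue is earning hits

    def _evict(self):
        while len(self.lrs) + len(self.hrs) > self.capacity:
            src = self.lrs if (len(self.lrs) > self.target_lrs
                               or not self.hrs) else self.hrs
            src.popitem(last=False)           # drop LRU end of chosen queue
```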

14.
15.
This paper proposes an algorithm that solves the cache coherence and synchronization problems in PACT01, a novel processor architecture that integrates a dynamically programmable gate array (DPGA), and also addresses the fast context switching and fast user-level operations required for multithreading support. The memory replacement mechanism is a hardware method that handles cache coherence and, on a cache miss, moves data from local or remote memory into the cache; conflicts arise when parallel threads write and read the same location, or read or write the same location. The paper adopts an associative mapping strategy together with the least recently used (LRU) algorithm, replacing the least recently used reference block on a cache miss; to implement LRU, a counter is associated with each block.
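A software model of the classic per-block counter scheme mentioned above: within one associative set, each block keeps an age counter that always forms a permutation of 0..ways-1, so the block whose counter equals ways-1 is exactly the least recently used victim. This is the textbook counter-LRU construction, sketched here outside the paper's hardware context.

```python
class CounterLRUSet:
    """Counter-based LRU for one associative set: on an access, every
    counter younger than the touched block is incremented and the touched
    block's counter is reset to 0, keeping counters a permutation of
    0..ways-1."""

    def __init__(self, ways):
        self.ways = ways
        self.tags = [None] * ways
        self.ctr = list(range(ways))            # distinct ages 0..ways-1

    def access(self, tag):
        if tag in self.tags:
            i = self.tags.index(tag)
            hit = True
        else:
            i = self.ctr.index(self.ways - 1)   # oldest block is the victim
            self.tags[i] = tag
            hit = False
        old = self.ctr[i]
        for j in range(self.ways):              # age blocks newer than i
            if self.ctr[j] < old:
                self.ctr[j] += 1
        self.ctr[i] = 0                         # touched block is youngest
        return "hit" if hit else "miss"
```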

16.
M. Chrobak  J. Noga 《Algorithmica》1999,23(2):180-185
In the paging problem we have to manage a two-level memory system in which the first level has short access time but can hold only up to k pages, while the second level is very large but slow. We use competitive analysis to study the relative performance of the two best-known paging algorithms, LRU and FIFO. Sleator and Tarjan proved that the competitive ratio of both LRU and FIFO is k. In practice, however, LRU is known to perform much better than FIFO. It is believed that the superiority of LRU can be attributed to the locality of reference exhibited in request sequences. In order to study this phenomenon, Borodin et al. [2] refined the competitive approach by introducing the concept of access graphs. They conjectured that the competitive ratio of LRU on each access graph is less than or equal to the competitive ratio of FIFO. We prove this conjecture in this paper. Received June 2, 1997; revised January 28, 1998.
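For readers new to the terminology, the standard definition of competitiveness used above (standard notation, not specific to this paper): an online algorithm ALG is c-competitive if there is a constant b such that, for every request sequence σ,

```latex
\mathrm{ALG}(\sigma) \;\le\; c \cdot \mathrm{OPT}(\sigma) + b ,
```

where OPT is the optimal offline algorithm. Sleator and Tarjan's result says LRU and FIFO with k frames both achieve c = k in the worst case; the access-graph refinement compares the best achievable c for LRU and FIFO on each graph.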

17.
Do users wait less if proxy caches incorporate estimates of current network conditions into document replacement algorithms? To answer this, we explore two new caching algorithms: (1) keep in the cache documents that take the longest to retrieve; and (2) use a hybrid of several factors, trying to keep in the cache documents from servers that take a long time to connect to, that must be loaded over the slowest Internet links, that have been referenced most frequently, and that are small. The algorithms work by estimating Web page download delays or proxy-to-Web-server bandwidth from recent page fetches. The new algorithms are compared to the three best existing policies (LRU, LFU, and SIZE) on three measures (user response time, and the ability to minimize Web server load and network bandwidth consumed) using workloads from Virginia Tech and Boston University.
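One plausible form of the hybrid scoring the abstract describes, combining connect time, link bandwidth, reference frequency, and size; the weights and exponents below are illustrative assumptions, not the paper's calibrated values.

```python
def hybrid_value(connect_time_s, bandwidth_bps, num_refs, size_bytes,
                 w_freq=1.0, w_size=1.0):
    """Hypothetical utility score: keep documents that are expensive to
    re-fetch (slow connect, slow link), popular, and small. Evict the
    cached document with the LOWEST score."""
    refetch_cost = connect_time_s + size_bytes / bandwidth_bps
    return refetch_cost * (num_refs ** w_freq) / (size_bytes ** w_size)

# A small, popular page behind a slow server scores high (keep it);
# a large, once-referenced page on a fast link scores low (evict first).
print(hybrid_value(0.5, 1e6, 10, 4_000))      # slow connect, small, popular
print(hybrid_value(0.05, 1e7, 1, 2_000_000))  # fast link, big, one reference
```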

18.
To support ubiquitous computing, the underlying data have to be persistent and available anywhere, anytime. The data thus have to migrate from devices local to individual computers to shared storage volumes that are accessible over an open network. This potentially exposes the data to heightened security risks. In particular, the activity on a database exhibits regular page reference patterns that could help attackers learn logical links among physical pages and then launch additional attacks. We propose two countermeasures to mitigate the risk of attacks initiated by analyzing the shared storage server's activity for those page patterns. The first countermeasure relocates data pages according to which page sequences they belong to. The second enhances the first by randomly prefetching pages from predicted page sequences. We have implemented the two countermeasures in MySQL, and experimental results demonstrate their effectiveness and practicality.

19.
Six buffer coherency policies for a multisystem transaction processing environment are compared. These policies differ in their basic approaches to how and when invalidated pages are identified, or whether updated pages are propagated to the buffers of remote nodes. They can be classified as detection-, notification- (of invalid pages), and (update-) propagation-oriented approaches. The policies trade the CPU overhead of coherency messages against buffer hit probability in different ways, resulting in a trade-off between response time and maximum throughput. The main contribution is the development of analytical models that predict buffer hit probabilities under various buffer coherency policies, assuming the LRU replacement policy and the independent reference model (IRM). The buffer models are validated against simulation models and show excellent agreement. Integrated analytic models capturing buffer hit probability and CPU overhead are developed to predict overall response times under these coherency policies. The differences in buffer hit probabilities among the policies are found to be very sensitive to the skew of data access.

20.
To simplify the implementation of embedded virtual memory and improve its performance, this paper compares and analyzes common page replacement algorithms and, on that basis, proposes an improved least-recently-used page replacement algorithm. The algorithm combines hardware and software techniques such as the memory management unit (MMU), cross-page access counters, access-order registers, and overflow interrupt handling. Experimental results show that the algorithm improves the page replacement efficiency of embedded systems and raises overall system performance, and that it can be widely applied in various Internet-of-Things and embedded systems.
