Similar Documents
20 similar documents found (search time: 15 ms)
1.
Operating system designers attempt to keep CPU utilization high by maintaining an optimal multiprogramming level (MPL). Although running more processes makes it less likely that the CPU will sit idle, too many processes cause serious memory contention and can even introduce thrashing, which ultimately lowers CPU utilization. A common practice to address the problem is to lower the MPL by swapping processes out and in. This approach is expensive and is only used when the system is already thrashing seriously. The objective of our study is to provide highly responsive and cost-effective thrashing protection by adaptively conducting priority page replacement in a timely manner. We have designed a dynamic Thrashing Protection Facility (TPF) in the system kernel. Once TPF detects system thrashing, one of the active processes is identified for protection. The identified process is granted a short period of privilege in which it does not contribute its least recently used (LRU) pages for removal, so that it can quickly establish its working set, improving CPU utilization. With the support of TPF, thrashing can be eliminated at an early stage by adaptive page replacement, so that process swapping is avoided or delayed until it is truly necessary. We have implemented TPF in a current and representative Linux kernel running on an Intel Pentium machine. Compared with the original Linux page replacement, we show that TPF consistently and significantly reduces page faults and the execution time of each individual job in several groups of interacting SPEC CPU2000 programs. We also show that TPF introduces little additional overhead to program execution, and that its implementation in Linux (or Unix) systems is straightforward. Copyright © 2002 John Wiley & Sons, Ltd.
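A hedged sketch of the mechanism the abstract describes: a user-level model of global LRU replacement in which one process can be granted a short eviction-exemption window. The class and all names (`protect`, `protect_until`, the 0.5 s default window) are illustrative assumptions, not the authors' kernel code.

```python
from collections import OrderedDict
import time

class GlobalLRU:
    """Global LRU pool with a TPF-style protection window: while one process
    is protected, its pages are skipped during eviction so it can rebuild
    its working set quickly (sketch only; the real TPF lives in the kernel)."""

    def __init__(self, frames):
        self.frames = frames
        self.pages = OrderedDict()          # (pid, vpage) -> True; oldest first
        self.protected_pid = None
        self.protect_until = 0.0

    def protect(self, pid, window=0.5):
        """Grant `pid` a short privilege period (seconds) on detected thrashing."""
        self.protected_pid, self.protect_until = pid, time.monotonic() + window

    def touch(self, pid, vpage):
        """Reference a page; returns True on hit, False on page fault."""
        key = (pid, vpage)
        if key in self.pages:
            self.pages.move_to_end(key)     # hit: move to MRU end
            return True
        if len(self.pages) >= self.frames:
            self._evict()
        self.pages[key] = True              # fault: load at MRU end
        return False

    def _evict(self):
        now = time.monotonic()
        for key in self.pages:              # scan from the LRU end
            if key[0] == self.protected_pid and now < self.protect_until:
                continue                    # protected process keeps its pages
            del self.pages[key]
            return
        self.pages.popitem(last=False)      # every page protected: plain LRU
```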

2.
It has been suggested that the algorithm used to schedule the processes that are active and in main memory can affect memory contention. We create models for memory contention in a system that uses global LRU replacement and either round-robin or priority internal scheduling. Parameters to our model include the ratio of secondary-storage to primary-storage access times, thus allowing consideration of a variety of storage technologies. The round-robin quantum size is included and is shown to have some effect. Our model uses LRU miss-ratio curves and thus reflects actual program characteristics. Trace-driven simulations are used to verify the accuracy of the models. We find that in most cases internal scheduling has only a small effect on page-fault rates and CPU utilization. In certain cases, however, priority scheduling is found to be significant in relieving thrashing.

3.
Donald B. Innes. Software, 1977, 7(2): 271–273
Many implementations of paged virtual memory systems employ demand fetching with least recently used (LRU) replacement. The stack characteristic of LRU replacement implies that a reference string which repeatedly accesses a number of pages in sequence will cause a page fault for each successive page referenced whenever the number of pages exceeds the number of page frames allocated to the program's LRU stack. In certain circumstances, when the individual operations performed on the referenced data are independent, or more precisely commutative, the direction of alternate passes over the page sequence can be reversed. This paper considers sequences which cannot be reversed and shows how the placement of information on pages can achieve a similar effect if at least half the pages can be held in the LRU stack.
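The effect is easy to reproduce. The sketch below (our illustration, not from the paper) counts LRU faults for a strict cyclic scan of 6 pages in 5 frames, where every reference faults, versus the same passes with alternate directions reversed, where only one fault per pass remains after warm-up.

```python
from collections import OrderedDict

def lru_faults(trace, frames):
    """Count page faults of LRU replacement over a reference trace."""
    mem, n = OrderedDict(), 0
    for p in trace:
        if p in mem:
            mem.move_to_end(p)              # hit: refresh recency
        else:
            n += 1
            if len(mem) >= frames:
                mem.popitem(last=False)     # evict least recently used
            mem[p] = True
    return n

pages, passes, frames = list(range(6)), 10, 5
forward = [p for _ in range(passes) for p in pages]
zigzag = [p for i in range(passes)
          for p in (pages if i % 2 == 0 else pages[::-1])]
print(lru_faults(forward, frames))  # 60: every reference faults
print(lru_faults(zigzag, frames))   # 15: one fault per pass after warm-up
```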

4.
Page replacement algorithms for main memory in modern operating systems are crucial to system performance. When memory is full, a page replacement algorithm exploits the temporal locality and frequency of page references to evict the page that is least likely to be accessed in the near future. Loading the majority of data directly from memory then improves performance by reducing the I/O waits incurred by accessing slow storage. Replacement algorithms that maximize hit ratio while incurring as little overhead as possible have been studied constantly. In this paper, we propose a time-shift least recently used (TSLRU) algorithm that converts frequency information of page references into temporal locality. Frequent accesses to a page are thus recognized and accumulated in terms of time. Moreover, pages being loaded into memory for the first time are not necessarily treated as the most recently used pages. As a result, one-pass pages are evicted sooner in our algorithm than in the traditional LRU algorithm. Our performance evaluations show that TSLRU outperforms conventional page replacement algorithms on both artificial and real application traces. For example, the hit ratio of TSLRU exceeds ARC by 4.17% and LRU by 5.91% on normally distributed workloads. Moreover, TSLRU outperforms ARC by over 2% on half of the application traces tested.
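The abstract gives the two ingredients but not the exact rules, so the following is a speculative sketch: first-touch pages are inserted mid-stack rather than at the MRU end (so one-pass pages age out quickly), and each hit shifts a page toward MRU by an amount that grows with its accumulated access count (folding frequency into recency). All positions and constants are assumptions.

```python
class TimeShiftLRU:
    """Sketch of the two ideas named in the TSLRU abstract; the paper's
    actual rules are not specified there, so details are illustrative."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.stack = []        # index 0 = LRU end, index -1 = MRU end
        self.count = {}        # page -> accumulated access count

    def access(self, page):
        hit = page in self.count
        if hit:
            self.stack.remove(page)
            self.count[page] += 1
            # shift toward MRU by an amount growing with frequency
            pos = min(len(self.stack),
                      len(self.stack) // 2 + self.count[page])
            self.stack.insert(pos, page)
        else:
            if len(self.stack) >= self.capacity:
                victim = self.stack.pop(0)          # evict from LRU end
                del self.count[victim]
            self.count[page] = 1
            self.stack.insert(len(self.stack) // 2, page)  # mid-stack insert
        return hit
```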

5.
The minimization of page faults in a demand paging environment where program behavior is described by the LRU stack model is studied. Previous work on this subject considered a specific type of stack probability distribution. As practical situations exist which do not satisfy this assumption, we extend the analysis to arbitrary distributions. The minimization is stated as a constrained optimization problem, a method to obtain a class of optimal solutions is presented, and a fixed-space replacement algorithm implementing these solutions is proposed. The parameters of this replacement algorithm can be varied to adapt to specific stack probability distributions and to the number of pages allocated in memory. The paper also determines a necessary and sufficient condition for the optimality of the LRU algorithm.
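For context, in the LRU stack model each reference touches stack distance $i$ with probability $a_i$; with a fixed allocation of $m$ frames, the quantity being minimized for LRU is the tail of that distribution (standard stack-model notation, not quoted from the paper):

```latex
F_{\mathrm{LRU}}(m) = \sum_{i=m+1}^{\infty} a_i
\quad \text{(a fault occurs whenever the referenced stack distance exceeds } m\text{)}
```

A general replacement policy changes which $m$ pages are retained and hence which terms survive in the sum; the classical case in which LRU itself is optimal is a distribution $(a_i)$ that is nonincreasing in $i$.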

6.
In a multinode data sharing environment, different buffer coherency control schemes based on various lock retention mechanisms can be designed to exploit the concept of deferring the propagation or writing of dirty pages to disk, improving normal performance. Two types of deferred write policy are considered. One policy propagates dirty pages to disk only when they are flushed out of the buffer under LRU buffer replacement. The other also performs writes when dirty pages are transferred across nodes. The dirty-page propagation policy can have significant implications for database recovery time. In this paper, we provide an analytical modeling framework for analyzing the recovery times under the two deferred write policies and demonstrate how both can be mapped onto a unified framework. The main challenge in the analysis is to obtain the pending-update count distribution, which determines the average numbers of log records and data I/Os that must be applied during recovery. The analysis goes beyond previous work on modeling buffer hit probability in a data sharing system, where only the average buffer composition, not the distribution, needs to be estimated, and beyond recovery analysis in a single-node environment, where the complexities of tracking the propagation of dirty pages across nodes and the buffer invalidation effect do not arise.

7.
Jiang Feihu, Shu Ping. 微机发展 (Microcomputer Development), 2006, 16(5): 42–43
Page replacement algorithms are an important part of virtual memory management in an operating system. Improving the page replacement algorithm can lower the page-fault rate and thus effectively improve system performance. The page replacement algorithm most widely used in virtual memory management is the Least Recently Used (LRU) algorithm. Using inter-reference gap counts, this paper analyzes the influence of different page access sequences on the LRU algorithm and classifies page access sequences into three categories: LRU-friendly sequences, LRU-unfriendly sequences, and unfriendly sequences, providing a basis for improving the LRU page replacement algorithm.
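A small sketch of the gap statistic such a classification can be built on (our illustration, not the paper's code): the LRU stack distance of each re-reference, i.e. the number of distinct pages touched since the previous reference to the same page. A trace whose distances mostly stay below the frame count is LRU-friendly.

```python
def reuse_distances(trace):
    """For each re-reference, count the distinct pages touched since the
    previous reference to the same page (the LRU stack distance).
    References with a distance below the frame count would be LRU hits."""
    last_seen, dists = {}, []
    for i, page in enumerate(trace):
        if page in last_seen:
            dists.append(len(set(trace[last_seen[page] + 1 : i])))
        last_seen[page] = i
    return dists

# small, constant distances -> an LRU-friendly access sequence
print(reuse_distances(list("abcabc")))   # [2, 2, 2]
```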

8.
The commonly used LRU replacement policy causes thrashing for memory-intensive workloads. A simple mechanism that dynamically changes the insertion policy used by LRU replacement reduces cache misses by 21 percent and requires a total storage overhead of less than 2 bytes.
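This abstract appears to describe the dynamic insertion policy (DIP) family, in which missing lines are inserted at the LRU position (LIP), occasionally promoted to MRU (BIP), and a set-dueling mechanism picks between BIP and plain LRU at run time. A minimal sketch of the two insertion policies, with the set-dueling selector omitted:

```python
import random
from collections import deque

def access_lip(stack, capacity, line, bip_epsilon=0.0):
    """One access under LRU with a modified insertion policy.
    LIP inserts a missing line at the LRU position instead of MRU, so a
    never-reused line is the next victim; BIP (bip_epsilon > 0) promotes a
    small fraction of insertions to MRU so the cache can adapt when the
    working set changes. Illustrative sketch, not the paper's hardware."""
    if line in stack:                 # hit: promote to MRU (rightmost)
        stack.remove(line)
        stack.append(line)
        return True
    if len(stack) >= capacity:
        stack.popleft()               # evict LRU (leftmost)
    if random.random() < bip_epsilon:
        stack.append(line)            # rare MRU insertion (BIP)
    else:
        stack.appendleft(line)        # LRU insertion (LIP)
    return False

cache = deque()
for line in [1, 2, 3, 1, 4, 1]:
    access_lip(cache, 3, line, bip_epsilon=1 / 64)
```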

9.
The possibility of fast access to the main memory of remote sites has been advanced as a potential performance improvement in distributed systems. Even if a page is not available in local memory, sites need not perform a disk access; instead, they can use efficient mechanisms that support rapid request/response exchanges to access pages currently buffered at a remote site. Hardware and software support in such a remote caching architecture must also include algorithms that determine which pages should be buffered at which sites. When each site uses the classic LRU replacement algorithm, performance can be much worse than optimal in many system configurations: because sites do not coordinate their individual decisions, overall buffering/caching decisions yield very inefficient global configurations. This paper proposes an easily implementable modification of the LRU replacement algorithm for LAN environments that reduces replication. The algorithm substantially improves hit ratios, and thus performance, over a wide range of parameters. The relatively simple LAN topology implies that much less state information is needed for good replacement decisions than in general network topologies. The implications of two variations of the algorithm are explored. In an environment where the network is not a performance bottleneck and performance is memory-limited, the performance of the proposed replacement algorithm is shown to be close to optimal.
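One way to read "reduces replication" as code (a sketch under our own assumptions, not the paper's algorithm): when evicting, prefer the least recently used page that some other LAN site also buffers, since dropping a replicated page removes nothing from the cluster-wide cache.

```python
from collections import OrderedDict

def evict_reducing_replication(local_lru, cached_elsewhere):
    """Pick a victim from an LRU-ordered dict (oldest first), preferring the
    least recently used page that is also buffered at another LAN site.
    Falls back to plain LRU if every resident page is the sole copy.
    `cached_elsewhere` is a set of pages known to be buffered remotely."""
    for page in local_lru:                    # scan oldest -> newest
        if page in cached_elsewhere:
            del local_lru[page]               # losing a replica costs nothing
            return page
    return local_lru.popitem(last=False)[0]   # plain LRU fallback
```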

10.
An Improved Adaptive Page Replacement Algorithm
This paper studies buffer page replacement policies and algorithms, in particular adaptive ones, and proposes HA, an adaptive page replacement algorithm based on dual management lists. HA improves on the DMC(2c) algorithm: it introduces a dynamic replacement point, determines the algorithm's working list according to the number of page faults, and selects the more efficient replacement policy according to the local characteristics of the page access sequence. Experimental results show that HA effectively reduces the number of page faults and lowers the page-fault rate; in particular, when handling page access sequences of the third pattern, its page-fault rate is nearly 30% lower than that of the algorithm it improves upon.

11.
This paper deals with the following mechanism for controlling the multiprogramming set in a demand paging system: processes are dynamically divided into several categories according to the number of page faults generated during their residence in main memory. A process is admitted into the multiprogramming set only if there is enough free space in main memory to contain the number of pages corresponding to the process's current category. Using a queueing network model of an interactive system with such a control mechanism, we study the effectiveness of the control considered and, more particularly, its ability to partition the memory space according to the locality of processes.

12.
To simplify the implementation of virtual memory in embedded systems and improve its performance, this paper compares and analyzes common page replacement algorithms and proposes an improved least-recently-used page replacement algorithm. The algorithm is based on combined hardware and software techniques, including the memory management unit, a cross-page access counter, access-order registers, and overflow interrupt handling. Experimental results show that the algorithm improves the page replacement efficiency of embedded systems and their overall performance, and can be widely applied in IoT and embedded systems.

13.
Exploiting Regularities in Web Traffic Patterns for Cache Replacement
Cohen, Kaplan. Algorithmica, 2002, 33(3): 300–334
Caching web pages at proxies and in web servers' memories can greatly enhance performance. Proxy caching is known to reduce network load, and both proxy and server caching can significantly decrease latency. Web caching problems have different properties than traditional operating systems caching, and cache replacement can benefit by recognizing and exploiting these differences. We address two aspects of the predictability of traffic patterns: the overall load experienced by large proxy and web servers, and the distinct access patterns of individual pages. We formalize the notion of "cache load" under various replacement policies, including LRU and LFU, and demonstrate that the trace of a large proxy server exhibits regular load. Predictable load allows for improved design, analysis, and experimental evaluation of replacement policies. We provide a simple and (near-)optimal replacement policy when each page request has an associated distribution function on the next request time of the page. Without the predictable-load assumption, no such online policy is possible, and even obtaining an offline optimum is known to be hard. For experiments, predictable load enables comparing and evaluating cache replacement policies using partial traces, containing requests made to only a subset of the pages. Our results are based on considering a simpler caching model which we call the interval caching model. We relate traditional and interval caching policies under predictable load, and derive (near-)optimal replacement policies from their optimal interval caching counterparts.

14.
This paper defines and analyzes two new strategy-independent program restructuring algorithms. The more effective of the two combines the critical-reference principle used in strategy-dependent algorithms with the bounded locality interval mechanism. Limited though encouraging results are presented which show that this new algorithm can be at least as effective as the Critical LRU algorithm, even when the memory management policy is LRU itself, and at least as effective as Critical Working Set, even when the memory management policy is the working set policy. The new algorithm thus combines the benefits of strategy-independent restructuring with the performance improvements previously possible only with strategy-dependent methods.

15.
We describe the evolution of a distributed shared memory (DSM) system, Mirage, and the difficulties encountered when moving the system from a Unix-based kernel on the VAX to a Unix-based kernel on personal computers. Mirage provides a network-transparent form of shared memory for a loosely coupled environment. The system hides network boundaries for processes that access shared memory and is upward compatible with the Unix System V Interface Definition. This paper addresses the architectural dependencies in the design of the system and evaluates the performance of the implementation. The new version, MIRAGE+, performs well compared to Mirage even though eight times as much data is sent on each page fault because of the larger page size used in the implementation. We show that the performance of systems with a large ratio of page size to network packet size can be dramatically improved on conventional hardware by applying three well-known techniques: packet blasting, compression, and running at interrupt level. The measured time for a page fault in MIRAGE+ was reduced 37 per cent by sending a page using packet blasting instead of a handshake for each portion of the page. When compression was added to MIRAGE+, the time to fault a page across the network improved by a further 47 per cent when the page compressed into one network packet. Our measured performance compares favorably with the time it takes to fault a page from disk. Lastly, running at interrupt level may improve performance 16 per cent when faulting pages without compression.

16.
Traditional cache replacement algorithms perform poorly because they cannot adapt to the streaming access behavior of applications. We design a period-detection-based prediction method, analyze the regularity of reuse distances in program memory accesses and the complexity of streaming accesses, and propose RDP, a reuse-distance-prediction algorithm that adapts to both simple and complex streaming access patterns. The basic idea of RDP is to predict reuse distances and dynamically maintain reuse-distance counts, dynamically adjusting the replacement order of cached data, while reducing storage overhead through stream sampling. Experimental results show that RDP adapts well to the diverse streaming access patterns found in programs; its overall performance is better than that of the LRU and DIP algorithms, reducing cache misses by 27.5% on average compared with traditional LRU on a 32 MB cache.
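A minimal sketch of prediction-driven replacement (the last-value predictor below is a stand-in for RDP's period detection and stream sampling, which the abstract does not spell out): predict each block's next reuse distance from its history and evict the block predicted to be reused farthest away, treating unpredicted blocks as one-pass streams.

```python
class ReuseDistancePredictor:
    """Last-value predictor: assume each block's next reuse distance repeats
    its previous one. Illustrative stand-in for RDP's period detection."""

    def __init__(self):
        self.clock = 0
        self.last_use = {}      # block -> logical time of last access
        self.predicted = {}     # block -> predicted next reuse distance

    def access(self, block):
        if block in self.last_use:
            self.predicted[block] = self.clock - self.last_use[block]
        self.last_use[block] = self.clock
        self.clock += 1

    def victim(self, resident):
        # Evict the block predicted to be reused farthest in the future;
        # blocks with no prediction look like one-pass streams and go first.
        return max(resident,
                   key=lambda b: self.predicted.get(b, float("inf")))
```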

17.
The buffer manager of the OSCAR object-relational database management system is the software layer responsible for fetching pages from disk into main memory when necessary. Because main memory cannot hold all the pages on disk, the buffer manager must handle page replacement and write-back. This paper presents the key techniques used in the design of the OSCAR buffer manager, including asynchronous I/O, page read/write checksumming, the page replacement policy, and dynamic adjustment of the buffer's physical memory.

18.
An object-oriented multilevel model of computation is used to discuss recoverability and crash-resistance issues in distributed systems. Of particular importance are the issues raised when recoverability and crash-resistance properties are desired of objects whose concrete representations are distributed over several nodes. The execution of a program at one node of the system can give rise to a hierarchy of processes executing various parts of the program at different nodes. Recoverability and crash-resistance properties are needed to ensure that such a group of processes leaves the system state consistent despite faults in the system.

19.
Replacement algorithms have been widely used as key technologies for cache management in areas such as file systems and database management. A replacement algorithm determines which page to evict when the cache is full and a new page is referenced. Because replacement policies that consider only recency or only frequency, such as LRU (Least Recently Used) and LFU (Least Frequently Used), do not perform well, policies that take both recency and frequency into account have been studied intensively. As a classical replacement policy, the LRFU (Least Recently/Frequently Used) policy subsumes the LRU and LFU policies. However, because LFU cannot adapt to changes in the page access pattern and it is hard to select a suitable λ for each trace, LRFU cannot always guarantee good performance. In this paper, we propose a Window-LRFU policy that subsumes the LRU and Window-LFU policies. Experimental results show that Window-LRFU outperforms LRFU and performs at least competitively with other classical algorithms. Copyright © 2015 John Wiley & Sons, Ltd.
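For context, LRFU scores each page by a Combined Recency and Frequency (CRF) value using the weighting function F(x) = (1/2)^(λx) and evicts the page with the smallest score; λ near 0 approaches LFU, while large λ approaches LRU. A sketch follows, where restricting the sum to the last W references is our reading of the windowed variant, not a definition taken from the paper:

```python
def crf(reference_times, now, lam=0.1, window=None):
    """Combined Recency and Frequency value used by LRFU:
    CRF(p) = sum over past reference times t of 0.5 ** (lam * (now - t)).
    With window set, only the most recent `window` references count
    (our assumed Window-LRFU reading). Evict the page with the smallest CRF."""
    times = sorted(reference_times)[-window:] if window else reference_times
    return sum(0.5 ** (lam * (now - t)) for t in times)

print(crf([1, 5, 9], now=10))            # frequent and recent -> high score
print(crf([1, 5, 9], now=10, window=2))  # windowed variant ignores t = 1
```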

20.
High-speed networks and rapidly improving microprocessor performance make networks of workstations an important platform for parallel computing to speed up the execution of scientific applications. Shared memory is an attractive programming model for designing parallel and distributed applications, since the programmer can focus on algorithmic development rather than data partitioning and communication. Based on this characteristic, systems have been developed that provide the shared memory abstraction on physically distributed memory machines, known as Distributed Shared Memory (DSM). DSM is built using software that combines a number of computer hardware resources into one computing environment, which not only provides an easy way to execute parallel applications but also combines the available computational resources to speed up their execution. DSM systems need to maintain data consistency in memory, which usually incurs communication overhead, and a number of strategies exist to mitigate this overhead and improve overall performance. Strategies such as prefetching have been shown to perform well in DSM systems, since they reduce the latency of data accesses to remote nodes; on the other hand, they may also transfer unnecessary pages to remote nodes. In this paper, we focus on the access pattern during execution of a parallel application and analyze the data types and behavior of parallel applications. We propose an adaptive data classification scheme to improve the prefetching strategy with the goal of improving overall performance. The scheme classifies data according to the accessing sequence of pages, so that the home node uses the past access-pattern history of remote nodes to decide whether related pages need to be transferred to them. Experimental results show that our proposed method increases the accuracy of prefetching, reducing the number of page faults and mis-prefetches. Using the proposed classification scheme, the benchmark applications show a performance improvement of about 9–25% over the same applications running on top of the original JIAJIA DSM system.
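A hedged sketch of the home-node decision the abstract describes (the names, thresholds, and successor-counting scheme are our assumptions, not the JIAJIA implementation): learn, per page, which pages each remote node tends to fault on next, and piggyback the likely successors when their support is high enough.

```python
from collections import defaultdict, deque

class PrefetchPredictor:
    """Home-node sketch of history-based prefetching: remember which pages
    tended to follow each faulted page, and ship likely successors along
    with the faulting page. Illustrative only."""

    def __init__(self, depth=2, min_support=3):
        self.next_counts = defaultdict(lambda: defaultdict(int))
        self.recent = defaultdict(lambda: deque(maxlen=depth))
        self.min_support = min_support

    def record_fault(self, node, page):
        """Learn successor patterns from each remote node's fault stream."""
        for prev in self.recent[node]:
            self.next_counts[prev][page] += 1
        self.recent[node].append(page)

    def pages_to_prefetch(self, page, limit=2):
        """Successors of `page` seen often enough to be worth pushing."""
        succ = self.next_counts[page]
        likely = [p for p, c in sorted(succ.items(), key=lambda kv: -kv[1])
                  if c >= self.min_support]
        return likely[:limit]
```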
