Similar literature
20 similar documents found (search time: 15 ms)
1.
Donald B. Innes 《Software》1977,7(2):271-273
Many implementations of paged virtual memory systems employ demand fetching with least recently used (LRU) replacement. The stack characteristic of LRU replacement implies that a reference string which repeatedly accesses a number of pages in sequence will cause a page fault for each successive page referenced whenever the number of pages exceeds the number of page frames allocated to the program's LRU stack. In certain circumstances, when the individual operations performed on the referenced data are independent, or more precisely commutative, the order of alternate page reference sequences can be reversed. This paper considers sequences which cannot be reversed and shows how the placement of information on pages can achieve a similar effect if at least half the pages can be held in the LRU stack.

2.
Because I/O workloads are diverse, neither pure LRU-style nor pure LFU-style algorithms can improve buffer efficiency on their own. The adaptive dual-stack LRU replacement algorithm (AD-LRU) addresses this problem: its LRS stack stores low-recency pages, its HRS stack stores pages with both high recency and high reference frequency, and the sizes of the two are adjusted according to their efficiency. Experiments show that the algorithm effectively improves the buffer hit ratio.
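The dual-queue structure described above can be sketched with two ordered maps; the sketch below is illustrative only (the queue-sizing and promotion rules are simplified assumptions, not the published AD-LRU policy).

```python
from collections import OrderedDict

class TwoQueueLRU:
    """Illustrative two-queue LRU sketch (not the published AD-LRU):
    first-time pages go into a cold low-recency queue (lrs); pages hit
    again are promoted to a hot queue (hrs). Eviction prefers the cold
    queue, so one-shot pages cannot flush out frequently used ones."""

    def __init__(self, capacity, hot_fraction=0.5):
        self.capacity = capacity
        self.hot_capacity = max(1, int(capacity * hot_fraction))
        self.lrs = OrderedDict()   # low-recency (cold) pages
        self.hrs = OrderedDict()   # high-recency, high-frequency (hot) pages
        self.hits = self.misses = 0

    def access(self, page):
        if page in self.hrs:               # hot hit: refresh recency
            self.hrs.move_to_end(page)
            self.hits += 1
        elif page in self.lrs:             # cold hit: promote to hot queue
            del self.lrs[page]
            self.hrs[page] = True
            self.hits += 1
            while len(self.hrs) > self.hot_capacity:
                demoted, _ = self.hrs.popitem(last=False)
                self.lrs[demoted] = True   # demote oldest hot page
        else:                              # miss: admit into cold queue
            self.misses += 1
            self.lrs[page] = True
        while len(self.lrs) + len(self.hrs) > self.capacity:
            victim = self.lrs if self.lrs else self.hrs
            victim.popitem(last=False)     # evict coldest page first
```

With capacity 3 and the trace 1, 2, 1, 3, 4, the repeated page 1 is promoted to the hot queue and survives the eviction that removes the one-shot page 2.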

3.
Memory management in operating systems has long been an important research direction in computing. This paper analyzes several page replacement algorithms commonly used in memory management and the problems they exhibit, presents the LRU page replacement algorithm as the one that comes closest to the ideal algorithm in operating-system memory management, and explains the principle of implementing this page replacement algorithm using a matrix method.
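The matrix method referred to here is the classic bit-matrix LRU bookkeeping scheme; a minimal sketch, assuming an n-frame cache where row i is set to ones and column i cleared on each reference to frame i:

```python
class MatrixLRU:
    """Classic matrix method for tracking LRU order: on a reference to
    frame i, set row i to all ones, then clear column i. The frame whose
    row encodes the smallest binary number is the least recently used."""

    def __init__(self, n_frames):
        self.n = n_frames
        self.matrix = [[0] * n_frames for _ in range(n_frames)]

    def reference(self, i):
        for j in range(self.n):
            self.matrix[i][j] = 1    # set row i
        for k in range(self.n):
            self.matrix[k][i] = 0    # clear column i

    def lru_frame(self):
        # Lexicographic comparison of equal-length 0/1 rows is the same
        # as comparing the rows as binary numbers; the minimum is LRU.
        return min(range(self.n), key=lambda r: self.matrix[r])
```

After referencing frames 0, 1, 2 in order, frame 0 is the LRU victim; referencing 0 again makes frame 1 the victim.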

4.
For cache analytical modeling, stack distance theory is widely used to predict LRU-cache behavior. Typically, the stack distance histogram is collected by profiling memory references. However, the profiled references reflect only instruction fetches and load/store executions, i.e., the memory accesses to first-level (L1) caches, which is why these traces cannot be applied directly to construct stack distance histograms for downstream (L2 and L3) caches. This paper therefore proposes a stack distance probability model that extends stack distance theory to multi-level LRU cache behavior prediction. The inputs of the model are the L1 cache stack distance histograms and the multi-level LRU cache configurations; the outputs are the L2 and L3 cache stack distance histograms, with which the conflict misses in L2 and L3 caches can be quantified quickly and precisely. Fifteen benchmarks chosen from Mobybench 2.0, MiBench I, and MediaBench II are used to evaluate the accuracy of the model. Compared to simulation results from gem5 in AtomicSimpleCPU mode, the average absolute error in predicting cache misses for the I/D-shared L2 cache is less than 5%, while that for the L3 cache misses is less than 7%. Furthermore, compared to the time overhead of gem5 AtomicSimpleCPU simulations, the model speeds up cache miss prediction by about 100x on average.
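The L1-style histogram that serves as this model's input can be collected naively as sketched below (the paper's probability model for deriving L2/L3 histograms is not reproduced here; this only shows how stack distances are extracted from a reference trace):

```python
def stack_distance_histogram(trace):
    """Collect LRU stack distances for a reference trace. Naive O(N*M)
    list-based sketch; production profilers use tree structures for
    O(N log M). Returns (histogram, cold_misses). For an LRU cache of
    C lines, misses = cold_misses + counts at distances >= C."""
    stack, hist, cold = [], {}, 0
    for addr in trace:
        if addr in stack:
            d = stack.index(addr)       # depth from top = stack distance
            hist[d] = hist.get(d, 0) + 1
            stack.remove(addr)
        else:
            cold += 1                   # first touch: infinite distance
        stack.insert(0, addr)           # move/push to MRU position
    return hist, cold

def misses_for_cache_size(hist, cold, c):
    """Misses of a fully associative LRU cache with c lines."""
    return cold + sum(n for d, n in hist.items() if d >= c)
```

For the cyclic trace 1, 2, 3, 1, 2, 3 every reuse has stack distance 2, so a 2-line LRU cache misses on every access while a 3-line cache misses only on the three cold references.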

5.
We propose a simple solution to the problem of efficient stack evaluation of LRU multiprocessor cache memories with arbitrary set-associative mapping. It is an extension of existing stack evaluation techniques for set-associative LRU uniprocessor caches. Special marker entries are used in the stack to represent data blocks (or lines) deleted by an invalidation-based cache coherence protocol, and a marker-splitting method is employed when a data block below a marker in the stack is accessed. Using this technique, a single pass over a memory reference trace yields hit ratios for all cache sizes and set-associative mappings of multiprocessor caches. Simulation experiments on multiprocessor trace data show an order-of-magnitude speed-up in simulation time using this one-pass technique.

6.
魏赟  丁宇琛 《计算机应用研究》2020,37(10):3043-3047
Because cache replacement in the parallel computing framework Spark is coarse-grained, and the LRU (least recently used) algorithm does not account for RDD reuse, highly reused data blocks are easily evicted from memory and job execution efficiency is low. By optimizing the weight model and improving the replacement policy, this paper proposes an efficient RDD automatic cache replacement strategy (ERAC), comprising a high-reuse automatic caching algorithm and a tiered cache replacement algorithm, which enables automatic caching of high-value RDDs and tiered replacement of cache targets. Finally, ERAC is compared experimentally with LRU, RA (register allocation), and other algorithms on standard datasets provided by SNAP (Stanford Network Analysis Project); the results show that ERAC effectively improves Spark's memory utilization and task execution efficiency.

7.
A minimal-utility cache replacement algorithm for streaming media   (Cited by: 7; self-citations: 0, external: 7)
This paper proposes the minimal-utility cache replacement algorithm SCU-K, which jointly considers the most recent K accesses to a streaming-media file so that the cache size adapts dynamically to media popularity, byte utility, and the size of the cached portion. This lowers the probability that the prefix of a file is evicted and avoids the continuous eviction of media files that occurs under LRU and LFU. In comparative experiments against LRU, LFU, and LRU-2, SCU-K shows better performance in cache-space utilization, byte hit ratio, and startup latency.

8.
Recently, mobile apps have added ever more features to improve user convenience, and mobile operating systems keep as many apps as possible in memory for faster app launching and execution. Least recently used (LRU)-based termination of cached apps is a widely adopted approach when free main-memory space runs low. However, LRU-based termination does not distinguish between frequently and infrequently used apps, and app launch performance degrades if LRU terminates frequently used apps. Recent studies have suggested using app usage patterns to predict the next app launch and thereby address the limitations of the LRU approach, but existing methods focus only on the probability of the next launch and do not consider how soon the app will be launched again. In this paper, we present a new approach for predicting future app launches that utilizes the relaunch distance, which we define as the interval between two consecutive launches of an app, and propose memory management based on app relaunch prediction (M2ARP). M2ARP uses past app usage patterns to predict the relaunch distance, then terminates the apps least likely to be launched soon, improving the efficiency of the main memory.
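The relaunch-distance idea can be illustrated with a toy predictor; the mean-interval model below is an assumption made for illustration, not the paper's M2ARP prediction model:

```python
from statistics import mean

def predicted_relaunch_distance(launch_times, now):
    """Toy relaunch-distance predictor (illustrative, not M2ARP):
    estimate the next launch as the last launch plus the mean of the
    past inter-launch intervals, and return how far away that is."""
    if len(launch_times) < 2:
        return float("inf")   # too little history: assume far in the future
    intervals = [b - a for a, b in zip(launch_times, launch_times[1:])]
    return launch_times[-1] + mean(intervals) - now

def choose_victim(apps, now):
    """Terminate the cached app whose predicted next launch is farthest,
    instead of the LRU one. `apps` maps app name -> launch timestamps."""
    return max(apps, key=lambda a: predicted_relaunch_distance(apps[a], now))
```

An app launched every 10 minutes is kept even if it was used less recently than an app launched roughly every 100 minutes.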

9.
Considers the use of massively parallel architectures to execute a trace-driven simulation of a single cache set. A method is presented for the least recently used (LRU) policy which, regardless of the set size C, runs in time O(log N) using N processors on the EREW (exclusive read, exclusive write) parallel model. A simpler LRU simulation algorithm is given that runs in O(C log N) time using N/log N processors. We present timings of this algorithm's implementation on the MasPar MP-1, a machine with 16384 processors. A broad class of reference-based line replacement policies is considered, including LRU as well as the least frequently used (LFU) and random replacement policies. A simulation method is presented for any such policy that, on any trace of length N directed to a C-line set, runs in O(C log N) time with high probability using N processors on the EREW model. The algorithms are simple, have very little space overhead, and are well suited for SIMD implementation.

10.
Reliable methods for generating pseudo-random numbers from specific distributions are increasingly important in all branches of applied mathematics. In Monte Carlo studies, generating random variables from specific continuous probability distributions, whether symmetric or asymmetric, is a fundamental consideration. A composite uniform U(0,1) generator algorithm is described and statistically tested, and algorithms for transforming the U(0,1) output to a set of selected continuous probability distributions are also validated.

11.
王庆桦 《计算机仿真》2020,37(2):294-298
To address the insufficient router cache-hit performance of traditional distributed cache replacement algorithms, a distributed cache replacement algorithm for a dynamic data processing platform is proposed. The method describes the platform's distributed cache information and builds a cache architecture table that is replaced as the cache state changes; the continually updated table is used to improve a weighted replacement algorithm, into which a cached-object parameter is added. The improved algorithm computes each cached object's updated weight value and its weight cost, and the element at the tail pointer of the LRU list is replaced according to this cost. When an element already in the cache is hit, or a newly requested element arrives, the LRU list is updated and a new LRU list is constructed; the rebuilt list drives the distributed cache replacement policy, completing the construction of the algorithm. To demonstrate its advantages, the algorithm is compared with a traditional distributed cache replacement algorithm; experimental results show that its router cache-hit performance is superior and that it is better suited to distributed cache replacement on a dynamic data processing platform.

12.
The relative worst-order ratio, a relatively new measure for the quality of on-line algorithms, is extended and applied to the paging problem. We obtain results significantly different from those obtained with the competitive ratio. First, we devise a new deterministic paging algorithm, Retrospective-LRU, and show that, according to the relative worst-order ratio and in contrast with the competitive ratio, it performs better than LRU. Our experimental results, though not conclusive, are slightly positive and leave it possible that Retrospective-LRU or similar algorithms may be worth considering in practice. Furthermore, the relative worst-order ratio (and practice) indicates that LRU is better than the marking algorithm FWF, though all deterministic marking algorithms have the same competitive ratio. Look-ahead is also shown to be a significant advantage with this new measure, whereas the competitive ratio does not reflect that look-ahead can be helpful. Finally, with the relative worst-order ratio, as with the competitive ratio, no deterministic marking algorithm can be significantly better than LRU, but the randomized algorithm MARK is better than LRU.

13.
Song  Xiaodong   《Performance Evaluation》2005,60(1-4):5-29
Most computer systems use a global page replacement policy based on the LRU principle to approximately select a least recently used page for replacement in the entire user memory space. During execution interactions, a memory page can be marked as LRU even while its program is incurring page faults. We call LRU pages arising under such a condition false LRU pages, because they are not produced by program memory reference delays, which is inconsistent with the LRU principle. False LRU pages can significantly increase page faults and even cause system thrashing. This poses a more serious risk in large parallel systems with distributed memories because of the coordination among processes running on individual nodes: thrashing in a single node or a small number of nodes can severely affect other nodes running coordinating processes, and even crash the whole system. In this paper, we focus on improving the page replacement algorithm running on one node.

After a careful study characterizing memory usage and thrashing behavior in multiprogramming systems using LRU replacement, we propose an LRU replacement alternative, called token-ordered LRU, to eliminate or reduce unnecessary page faults by effectively ordering and scheduling memory space allocations. Compared with traditional thrashing-protection mechanisms such as load control, our policy allows more processes to keep running, supporting synchronous distributed process computing. We have implemented the token-ordered LRU algorithm in a Linux kernel to show its effectiveness.


14.
Timely and accurate extraction of elephant flows is of great significance for defending against large-scale network security incidents. To remedy the shortcomings of extracting elephant flows with LRU or SCBF alone, an extraction method based on both, the LRU_SCBF algorithm, is proposed. The algorithm uses a two-level storage structure consisting of an LRU list and an SCBF (space-code Bloom filter) array: arriving mice flows are recorded in the SCBF and promoted to the LRU list once they reach a threshold; when the LRU list is full, the least recently used entries are demoted back to the SCBF. Cycling in this way, elephant flows and mice flows accumulate separately. Theoretical analysis and simulation show that LRU_SCBF occupies little space and has low false-positive and false-negative rates, achieving timely and accurate elephant-flow extraction in high-speed network environments. Applied to DDoS defense, it enables timely detection and traceback of DDoS attacks.
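The two-level promote/demote loop can be sketched with a plain counting dictionary standing in for the SCBF (a real SCBF is a probabilistic counting sketch; exact counters are used here purely for clarity):

```python
from collections import OrderedDict

class ElephantFlowExtractor:
    """Two-level sketch of the LRU_SCBF idea, with an exact counting
    dict standing in for the space-code Bloom filter (SCBF): small
    flows are counted in the filter, promoted to the LRU list once
    past a threshold, and demoted back when the list overflows."""

    def __init__(self, lru_capacity, threshold):
        self.lru = OrderedDict()   # suspected elephant flows
        self.counts = {}           # stand-in for the SCBF counters
        self.capacity = lru_capacity
        self.threshold = threshold

    def packet(self, flow):
        if flow in self.lru:                      # known elephant: refresh
            self.lru[flow] += 1
            self.lru.move_to_end(flow)
            return
        self.counts[flow] = self.counts.get(flow, 0) + 1
        if self.counts[flow] >= self.threshold:   # promote to LRU list
            self.lru[flow] = self.counts.pop(flow)
            if len(self.lru) > self.capacity:     # demote LRU victim
                victim, cnt = self.lru.popitem(last=False)
                self.counts[victim] = cnt

    def elephants(self):
        return list(self.lru)
```

With a threshold of 3 packets, flows "a" and "b" are classified as elephants while the single-packet flow "c" remains in the mice-flow counters.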

15.
For the permutation flow-shop scheduling problem of minimizing total tardiness, this paper proposes a multi-agent evolutionary search algorithm. The algorithm uses a tardiness-rank-based learning search strategy to quickly generate high-quality new individuals and evolves the agent grid according to a probabilistic update model. The influence of parameter settings on performance is explored through design of experiments. To validate the algorithm, the 540 test problems in the Vallada benchmark set were solved and the results compared with those of several representative algorithms, confirming the algorithm's effectiveness.

16.
王楠  吴云 《计算机应用研究》2023,40(4):1154-1159
Because MySQL adjusts the linear read-ahead threshold and the hot/cold ratio of its hot-cold LRU algorithm through configuration parameters, the buffer pool suffers a performance bottleneck. To address this, a method for adaptive buffer management is proposed that uses regret-minimizing online reinforcement learning to design an adaptive threshold-adjustment algorithm and an adaptive hot/cold cache replacement algorithm. First, MySQL's read-ahead algorithm and hot/cold cache replacement algorithm are studied in depth to clarify exactly how the read-ahead threshold and the hot/cold ratio affect them. Second, a parameter-evaluation workflow is designed, using a FIFO history queue and additional auxiliary fields, to assess in real time whether the current parameter value is too large or too small. Finally, a parameter-adjustment model is designed that uses the performance-monitoring metrics of MySQL's native read-ahead and cache replacement algorithms to adjust the parameters sensibly. In 900 simulation experiments on the FIU traces, the two adapted algorithms reduced disk I/O by 8% and raised the cache hit ratio by 24% relative to MySQL's native baseline read-ahead and hot/cold cache algorithms, with essentially no sacrifice in running speed; relative to a state-of-the-art cache replacement algorithm, the adapted hot/cold replacement algorithm ran 1.6 times faster while maintaining the cache hit ratio.

17.
This paper explains the basic principle of the LRU algorithm, presents a design for an LRU-based cache on the .NET Framework platform, and gives a concrete implementation of a general-purpose, high-performance, GB-scale, thread-safe LRUCache class with generics support.
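The described class targets .NET, but the same structure can be sketched in a few lines of Python (the class name and capacity policy here are illustrative assumptions, not the paper's implementation):

```python
from collections import OrderedDict
from threading import Lock

class LRUCache:
    """Minimal thread-safe LRU cache sketch: an ordered map plus a
    lock, evicting the least recently used entry once capacity is
    exceeded. The .NET version described in the paper adds generics
    and tuning, but the core bookkeeping is the same."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()
        self._lock = Lock()

    def get(self, key, default=None):
        with self._lock:
            if key not in self._data:
                return default
            self._data.move_to_end(key)        # mark as most recently used
            return self._data[key]

    def put(self, key, value):
        with self._lock:
            self._data[key] = value
            self._data.move_to_end(key)
            if len(self._data) > self.capacity:
                self._data.popitem(last=False)  # evict the LRU entry
```

Reading a key refreshes its recency, so a recently read entry survives the eviction triggered by a later insert.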

18.
19.
A time-interval-based cache algorithm for P2P live streaming systems   (Cited by: 1; self-citations: 0, external: 1)
For the client-side caching problem in P2P live streaming systems based on chunked transmission, this paper proposes a cache replacement algorithm driven by neighbor request volume, aiming to raise the hit ratio of chunk requests among peers and to avoid large-scale chunk redundancy. The algorithm divides a chunk's residence time in a node's cache into equal intervals and uses Markov-chain transition probability matrices to predict the chunk's cache value at the next moment. Comparative experiments against the traditional FIFO and LRU algorithms show that, under the same conditions, the proposed algorithm achieves a better data hit ratio.

20.
刘锐  董社勤  洪先龙  龙迪  顾钧 《软件学报》2004,15(5):641-649
In analog integrated circuit design, stacks that are symmetric about both the X and Y axes, together with the merging of modules, are essential for improving device matching and controlling parasitics. This paper describes a two-axis-symmetric stack generation algorithm and a module merging algorithm for analog ICs. Through a study of symmetric Euler graphs and symmetric Euler paths, several theoretical results are obtained; on this basis, an O(n) pseudo-device insertion algorithm, a symmetric Euler path construction algorithm, and a two-axis-symmetric stack generation algorithm are presented. The generated stacks are not only symmetric about both axes but also have a common-centroid structure. A module merging algorithm is also described, with a formula for computing the maximum merging distance. The approach is essentially independent of any topological representation, and experimental results confirm its effectiveness.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号