Similar Documents
20 similar documents found.
1.
Young. Algorithmica, 2002, 33(3): 371-383
Abstract. Consider the following file caching problem: in response to a sequence of requests for files, where each file has a specified size and retrieval cost, maintain a cache of files of total size at most some specified k so as to minimize the total retrieval cost. Specifically, when a requested file is not in the cache, bring it into the cache and pay the retrieval cost, and remove other files from the cache so that the total size of files remaining in the cache is at most k. This problem generalizes previous paging and caching problems by allowing objects of arbitrary size and cost, both important attributes when caching files for world-wide-web browsers, servers, and proxies. We give a simple deterministic on-line algorithm that generalizes many well-known paging and weighted-caching strategies, including least-recently-used, first-in-first-out, flush-when-full, and the balance algorithm. On any request sequence, the total cost incurred by the algorithm is at most k/(k-h+1) times the minimum possible using a cache of size h ≤ k. For any algorithm satisfying the latter bound, we show that for most choices of k the retrieval cost is either insignificant or at most a constant (independent of k) times the optimum. This helps explain why the competitive ratios of many on-line paging algorithms have been observed to be constant in practice.
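The deterministic strategy described in this abstract is widely known as the Landlord algorithm. The Python sketch below is a hedged reconstruction of that scheme, not the paper's own pseudocode: file names, sizes, and costs are caller-supplied placeholders, and it assumes every requested file fits in the cache.

```python
class Landlord:
    """Sketch of a Landlord-style file cache: a size- and cost-aware
    generalization of LRU/FIFO-style paging (illustrative reconstruction)."""

    def __init__(self, capacity):
        self.capacity = capacity   # the cache size k
        self.credit = {}           # cached file -> remaining credit
        self.size = {}             # cached file -> size
        self.used = 0              # total size currently cached

    def request(self, name, size, cost):
        """Serve one request; return the retrieval cost paid (0 on a hit)."""
        if name in self.credit:
            # On a hit the credit may be raised anywhere up to the full cost;
            # resetting it to `cost` gives LRU-like behavior, leaving it
            # unchanged gives FIFO-like behavior.
            self.credit[name] = cost
            return 0
        # Miss: charge every cached file "rent" proportional to its size and
        # evict files whose credit reaches zero, until the new file fits.
        # (Assumes size <= capacity.)
        while self.used + size > self.capacity:
            delta = min(self.credit[g] / self.size[g] for g in self.credit)
            for g in list(self.credit):
                self.credit[g] -= delta * self.size[g]
                if self.credit[g] <= 1e-12:
                    self.used -= self.size[g]
                    del self.credit[g], self.size[g]
        self.credit[name] = cost
        self.size[name] = size
        self.used += size
        return cost


# Tiny usage example with made-up files (name, size, cost):
cache = Landlord(capacity=10)
requests = [("a", 4, 2), ("b", 6, 3), ("a", 4, 2), ("c", 8, 5), ("a", 4, 2)]
print("total retrieval cost:", sum(cache.request(*r) for r in requests))
```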

2.
We address a hierarchical generalization of the well-known disk paging problem. In the hierarchical cooperative caching problem, a set of n machines residing in an ultrametric space cooperate with one another to satisfy a sequence of read requests to a collection of read-only files. A seminal result in the area of competitive analysis states that the "least recently used" (LRU) paging algorithm is constant-competitive if it is given a constant-factor blowup in capacity over the offline algorithm. Does such a constant-competitive deterministic algorithm, with a constant-factor blowup in the machine capacities, exist for the hierarchical cooperative caching problem? In this paper we present a deterministic hierarchical generalization of LRU that is constant-competitive when the capacity blowup is linear in d, the depth of the cache hierarchy. Furthermore, we exhibit an infinite family of depth-d hierarchies such that any randomized hierarchical cooperative caching algorithm with capacity blowup b has competitive ratio Ω(log(d/b)) against an oblivious adversary. Thus, our upper and lower bounds imply a tight bound of Θ(d) on the capacity blowup required to achieve constant competitiveness.

3.
A new caching model for cluster file systems is proposed. It fully exploits the aggregated resources and high-speed interconnect of the cluster, separates file system metadata from content data, and manages them with a distributed metadata cache and a unified content cache, respectively. The metadata cache uses an improved broadcast consistency protocol with LRU replacement. The unified content cache combines whole-file caching on disk with file-block caching in memory to form a single-image, multi-level distributed cooperative cache, and it uses single-copy-first LRU and forward-pass scheduling for cache block replacement, together with a greedy data prefetching method. Experimental results show that using the two caching mechanisms together greatly improves the performance of the cluster file system.

4.
To improve the overall performance of search engine retrieval systems, a cache optimization method based on inverted file indexes is proposed. The study first analyzes the architecture and data loading of inverted-file caches, then discusses the impact of workload data on inverted-file caching and on cache replacement algorithms, and finally investigates inverted-file cache optimization through simulation experiments. The results show that the proposed method can significantly reduce the number of disk I/O accesses and improve the utilization of disk bandwidth.

5.
When a major natural disaster strikes, base stations and other communication equipment in the affected area are easily damaged and cannot meet victims' communication needs. For this scenario, a UAV-assisted cooperative file caching algorithm for ground-air heterogeneous networks is proposed under a wireless mesh network architecture. Mesh routers serve as the primary caching devices: cached file blocks are encoded as genes of the caching strategy, and a genetic algorithm realizes cooperative file caching, reducing the response delay of user file requests for emergency communication. UAVs act as relay nodes: turning angles are encoded as genes for trajectory planning, and a genetic algorithm finds the optimal relay positions to assist the ground mesh routers' caches, so that file blocks are delivered to users jointly. Simulation results show that the proposed algorithm significantly reduces the average delay for users to obtain files, meeting the goal of emergency communication.

6.
A Caching Strategy for Distributed File Storage Systems in P2P Environments
Caching is widely used to improve performance in distributed file storage systems. Targeting the characteristics of distributed file storage in P2P environments, this paper proposes a flexible caching strategy that balances user access efficiency with replica consistency. Unlike existing P2P storage systems, it uses a threshold to separate files into hot and non-hot files and caches only the hot ones; by setting different thresholds according to cache space utilization and file type, the strategy remains flexible and effective. The strategy is analyzed theoretically, and its feasibility is then verified through trace-driven simulation.
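A minimal sketch of the threshold idea described above, assuming an LRU-managed cache that admits only files whose request count has reached a per-type threshold; the type names, threshold values, and the rule that tightens the threshold as the cache fills are illustrative assumptions, not taken from the paper.

```python
from collections import Counter, OrderedDict

BASE_THRESHOLD = {"video": 5, "doc": 3, "other": 4}   # hypothetical per-type thresholds

class HotFileCache:
    """Admit a file only once it is 'hot', i.e. requested often enough."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.counts = Counter()        # request counts per file
        self.cache = OrderedDict()     # cached file -> size, in LRU order
        self.used = 0

    def _threshold(self, ftype):
        # Placeholder rule: demand more requests as the cache gets fuller.
        fill = self.used / self.capacity if self.capacity else 1.0
        return BASE_THRESHOLD.get(ftype, BASE_THRESHOLD["other"]) * (1 + fill)

    def request(self, name, size, ftype="other"):
        self.counts[name] += 1
        if name in self.cache:
            self.cache.move_to_end(name)          # refresh LRU position
            return "hit"
        if self.counts[name] < self._threshold(ftype):
            return "miss (not hot, not cached)"
        while self.used + size > self.capacity and self.cache:
            _, evicted_size = self.cache.popitem(last=False)   # evict LRU file
            self.used -= evicted_size
        self.cache[name] = size
        self.used += size
        return "miss (now cached as hot)"
```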

7.
Online auction sites have very specific workloads and user behavior characteristics. Previous studies on workload characterization conducted by the authors showed that (1) bidding activity on auctions increases considerably after 90% of an auction's lifetime has elapsed, (2) a very large percentage of auctions have a relatively low number of bids and bidders and a very small percentage of auctions have a high number of bids and bidders, and (3) prices rise very fast after an auction has lasted more than 90% of its lifetime. Thus, if bidders are not able to successfully bid at the very last moments of an auction because of site overload, the final price may not be as high as it could be, and sellers, and consequently the auction site, may lose revenue. In this paper, we propose server-side caching strategies in which cache placement and replacement policies are based on auction-related parameters such as the number of bids placed or the percentage of remaining time until closing. A main-memory auction cache at the application server can be used to reduce accesses to the back-end database server. Trace-based simulations were used to evaluate these caching strategies in terms of cache hit ratio and cache efficiency. The performance characteristics of the best policies were then evaluated through experiments conducted on a benchmark online auction system.
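A minimal sketch of such an auction-aware, main-memory cache at the application server. The scoring rule below (evict the auction with the fewest bids, breaking ties toward the one with the most remaining lifetime) is an illustrative stand-in for the bid-count and remaining-time policies the abstract mentions, and `fetch_from_db` is a hypothetical callback to the back-end database server.

```python
import time

def eviction_score(auction, now):
    """Lower score = better eviction victim (few bids, far from closing)."""
    lifetime = auction["close_time"] - auction["open_time"]
    remaining = (auction["close_time"] - now) / lifetime
    return (auction["num_bids"], -remaining)

class AuctionCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.auctions = {}                          # auction id -> record

    def get(self, auction_id, fetch_from_db):
        if auction_id in self.auctions:
            return self.auctions[auction_id]        # served from memory
        record = fetch_from_db(auction_id)          # fall back to the database
        if len(self.auctions) >= self.capacity:
            now = time.time()
            victim = min(self.auctions,
                         key=lambda a: eviction_score(self.auctions[a], now))
            del self.auctions[victim]
        self.auctions[auction_id] = record
        return record
```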

8.
A Development Method for Online File Management Systems
This paper introduces a general method for developing client-side file management systems using Microsoft's FileSystemObject, and applies this method together with VBScript to implement a well-functioning prototype of a Web-based online file management system.

9.
Because performance testing tools designed specifically for distributed file systems are lacking, many studies still reuse traditional file system benchmarks and methods, which cannot meet the needs of distributed file system testing (such as avoiding cache effects and supporting concurrent multi-client tests). We therefore developed an online performance testing platform for distributed file systems. By providing a variety of large-data-volume workloads, the platform effectively avoids the influence of system caches and system aging on test results; it supports online performance testing of distributed file systems, supports concurrent tests from multiple clients, and can visualize and analyze the results. The platform is also extensible to accommodate diverse testing needs.

10.
With the widespread use of online transactions, security has become an important issue that cannot be ignored. XML encryption is a key technique for ensuring the security of XML. This paper discusses the encryption of XML documents in online transactions and, starting from the practical requirements of online trading, establishes an XML encryption method.

11.
We study client-server caching of data with expiration timestamps. Although motivated by the potential for caching in telecommunication applications, our work extends to the general case of caching data that has known expiration times. Toward this end, we tailor caching algorithms to consider expiration timestamps. Next, we consider several different client-server paradigms that differ in whether and how the server updates client caches. Finally, we perform simulation studies to evaluate the empirical performance of a variety of strategies for managing a single cache independent of the server and for managing caches in a client-server setting.
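A minimal sketch of an expiration-aware client cache in this spirit: expired entries count as misses, and the victim chosen on overflow is the entry closest to its expiration timestamp. The class and field names are assumptions for illustration, not the paper's algorithms.

```python
import time
from dataclasses import dataclass

@dataclass
class Entry:
    value: object
    expires_at: float   # absolute expiration timestamp supplied by the server

class ExpirationAwareCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}

    def get(self, key):
        entry = self.entries.get(key)
        if entry is None or entry.expires_at <= time.time():
            self.entries.pop(key, None)   # drop stale data
            return None                   # caller must re-fetch from the server
        return entry.value

    def put(self, key, value, expires_at):
        if key not in self.entries and len(self.entries) >= self.capacity:
            # Evict the entry nearest to (or past) its expiration timestamp.
            victim = min(self.entries, key=lambda k: self.entries[k].expires_at)
            del self.entries[victim]
        self.entries[key] = Entry(value, expires_at)
```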

12.
Mobile edge computing deploys communication, computation, and storage resources on edge devices, effectively overcoming the long transmission distances and slow response times of traditional cloud computing and meeting the service requirements of emerging computation-intensive and delay-sensitive applications. However, edge devices have limited resources, and the load across multiple edge devices is unbalanced. Cooperation among multiple edge devices is therefore inevitable, but it faces two major challenges: task offloading and service caching are tightly coupled, and the task load and resource state of edge devices vary in both time and space, which greatly increases the difficulty of the problem. To address these challenges, this paper proposes an online joint optimization mechanism for task offloading and service caching oriented to multi-edge-device cooperation, decoupling the joint problem into a service caching subproblem and a task offloading subproblem. For the service caching subproblem, a cooperative service caching algorithm based on contextual combinatorial multi-armed bandits is proposed; for the task offloading subproblem, a preference-based bilateral matching algorithm is designed. Simulations show that the proposed algorithms effectively reduce overall task execution delay while balancing the load among edge devices.

13.
In this paper, we propose a novel approach to enhancing web proxy caching, an approach that integrates content management with performance tuning techniques. We first develop a hierarchical model for management of web data, which consists of physical pages, logical pages, and topics corresponding to different abstraction levels. Content management based on this model enables active utilization of the cached web contents. By defining a priority on each abstraction level, the cache manager can make replacement decisions on topics, logical pages, and physical pages hierarchically. As a result, a cache can always keep the most relevant, popular, and high-quality content. To verify the proposed approach, we have designed a content-aware replacement algorithm, LRU-SP+. We evaluate the algorithm through preliminary experiments. The results show that content management can achieve a 30% improvement in caching performance in terms of hit ratios and profit ratios (considering significance of topics) compared to content-blind schemes.
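The sketch below only illustrates the hierarchical, priority-driven eviction order that the model implies (lowest-priority topic first, then lowest-priority logical page, then the least-recently-used physical page within it); it is not a faithful implementation of LRU-SP+, and the priority values are assumed to be supplied by the content manager.

```python
from collections import OrderedDict

class HierarchicalCache:
    def __init__(self):
        self.topics = {}            # topic -> logical page -> OrderedDict of physical pages
        self.topic_priority = {}    # topic -> priority
        self.page_priority = {}     # (topic, logical page) -> priority

    def access(self, topic, logical, physical, content, topic_prio=1.0, page_prio=1.0):
        pages = self.topics.setdefault(topic, {}).setdefault(logical, OrderedDict())
        pages[physical] = content
        pages.move_to_end(physical)                 # most recently used last
        self.topic_priority[topic] = topic_prio
        self.page_priority[(topic, logical)] = page_prio

    def evict_one(self):
        """Pick a victim topic, then a logical page, then its LRU physical page."""
        if not self.topics:
            return None
        topic = min(self.topics, key=lambda t: self.topic_priority[t])
        logical = min(self.topics[topic], key=lambda l: self.page_priority[(topic, l)])
        physical, _ = self.topics[topic][logical].popitem(last=False)
        if not self.topics[topic][logical]:          # clean up empty levels
            del self.topics[topic][logical]
            if not self.topics[topic]:
                del self.topics[topic]
        return (topic, logical, physical)
```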

14.
Irani. Algorithmica, 2002, 32(4): 624-640
We consider a special case of the weighted caching problem where the weight of every page is either 1 or some fixed number M > 1. We present a randomized algorithm which achieves a competitive ratio of O(log k), where k is the number of pages which can fit in the cache.

15.
The Minimum Order Earliness/Tardiness Problem with Multiple Product Types
For the order scheduling problem on a single machine with multiple products, multiple orders, and sequence-independent machine setup times, a mixed-integer programming model is built with the objective of minimizing the total earliness and tardiness penalties of the orders. The NP-hardness of the problem and the optimization properties of adjacent batch groups in an optimal sequence are analyzed, and a pseudo-polynomial-time algorithm based on filtered beam search is proposed. Complexity analysis and simulation results verify the effectiveness of the algorithm.

16.
We address the one-machine problem in which the jobs have distinct due dates, earliness costs, and tardiness costs. In order to determine the minimal cost of such a problem, a new lower bound is proposed. It is based on the decomposition of each job into unary operations that are then assigned to time slots, which gives a preemptive schedule. Assignment costs are defined so that the minimum assignment cost is a valid lower bound. A branch-and-bound algorithm based on this lower bound and on some new dominance rules is experimentally tested.
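The mechanics of such an assignment-based bound can be sketched as follows: split each job into unit-length operations, build a unit-by-slot cost matrix, and solve the resulting assignment problem; its optimal value is the bound. The per-unit cost function is deliberately left as a parameter, since the paper's own cost definition is what makes the bound valid; the `naive_unit_cost` shown is only a placeholder, not the paper's definition.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assignment_lower_bound(jobs, horizon, unit_cost):
    """jobs: dicts with processing time 'p' (plus whatever unit_cost needs);
    horizon: number of unit time slots (must be >= total processing time);
    unit_cost(job, t): cost of running one unit of `job` in slot t (1-based)."""
    units = [job for job in jobs for _ in range(job["p"])]
    cost = np.array([[unit_cost(job, t) for t in range(1, horizon + 1)]
                     for job in units])
    rows, cols = linear_sum_assignment(cost)   # optimal preemptive assignment
    return cost[rows, cols].sum()

# Placeholder cost: the job's earliness/tardiness penalty if it finished in
# slot t, spread evenly over its units (illustrative only).
def naive_unit_cost(job, t):
    return max(job["alpha"] * (job["d"] - t), job["beta"] * (t - job["d"]), 0) / job["p"]
```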

17.
Irani. Algorithmica, 2002, 32(4): 624-640
Abstract. We consider a special case of the weighted caching problem where the weight of every page is either 1 or some fixed number M > 1. We present a randomized algorithm which achieves a competitive ratio of O(log k), where k is the number of pages which can fit in the cache.

18.
This paper designs and implements a new online file system checking tool, OnlineFSCK (On-Line File System Checker), which can perform consistency checks on a file system that is online. While OnlineFSCK is checking a file system, the file system can continue to serve requests normally. An algorithm for generating an image of an online file system is proposed and implemented, and it is combined with existing file system checking tools to achieve checking of online file systems. A prototype of OnlineFSCK was implemented for the ext3 file system; experimental results show that, while meeting the performance requirements, OnlineFSCK achieves the same checking capability as traditional file system checkers. The implementation does not modify the file system's kernel source code and can be extended to support multiple file systems.

19.
We study the on-line caching problem in a restricted cache where each memory item can be placed in only a restricted subset of cache locations. Examples of restricted caches in practice include victim caches, assist caches, and skew caches. To the best of our knowledge, all previous on-line caching studies have considered on-line caching in identical or fully-associative caches where every memory item can be placed in any cache location. In this paper, we focus on companion caches, a simple restricted cache that includes victim caches and assist caches as special cases. Our results show that restricted caches are significantly more complex than identical caches. For example, we show that the commonly studied Least Recently Used algorithm is not competitive unless cache reorganization is allowed, while the performance of the First In First Out algorithm is competitive but not optimal. We also present two near-optimal algorithms for this problem as well as lower bound arguments.
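A schematic model of a companion cache, sketched under the assumption of a direct-mapped main cache plus a small fully-associative FIFO companion buffer (which covers victim caches as a special case). Memory items are modeled as integers, and the swap-on-companion-hit rule is an illustrative choice rather than one of the paper's algorithms.

```python
from collections import deque

class CompanionCache:
    def __init__(self, main_slots, companion_slots):
        self.main = [None] * main_slots                    # direct-mapped main cache
        self.companion = deque(maxlen=companion_slots)     # small FIFO companion buffer

    def access(self, item):
        slot = item % len(self.main)                       # item's only legal main slot
        if self.main[slot] == item:
            return "hit (main)"
        if item in self.companion:
            # Victim-cache style swap: promote the item back to its main slot.
            self.companion.remove(item)
            if self.main[slot] is not None:
                self.companion.append(self.main[slot])
            self.main[slot] = item
            return "hit (companion)"
        # Miss: install in the main slot; the displaced item moves to the companion.
        if self.main[slot] is not None:
            self.companion.append(self.main[slot])         # oldest entry drops out if full
        self.main[slot] = item
        return "miss"
```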

20.
A new Web caching architecture model, content-based Web caching, is proposed. The model jointly considers the proxy's operational information and the content characteristics of Web documents, defines virtual user communities and a proxy "personality", and uses Ontology techniques to characterize that personality. Simulation experiments show that incorporating content attributes further improves Web caching performance.
