On-Line File Caching
Authors: Neal E. Young
Affiliation: (1) Dartmouth College, Hanover, NH 03755, USA, and Akamai Technologies, Cambridge, MA 02138, USA. neal@young.name
Abstract: Consider the following file caching problem: in response to a sequence of requests for files, where each file has a specified size and retrieval cost, maintain a cache of files of total size at most some specified k so as to minimize the total retrieval cost. Specifically, when a requested file is not in the cache, bring it into the cache and pay the retrieval cost, and remove other files from the cache so that the total size of files remaining in the cache is at most k. This problem generalizes previous paging and caching problems by allowing objects of arbitrary size and cost, both important attributes when caching files for World Wide Web browsers, servers, and proxies. We give a simple deterministic on-line algorithm that generalizes many well-known paging and weighted-caching strategies, including least-recently-used, first-in-first-out, flush-when-full, and the balance algorithm. On any request sequence, the total cost incurred by the algorithm is at most k/(k-h+1) times the minimum possible using a cache of size h ≤ k. For any algorithm satisfying the latter bound, we show it is also the case that for most choices of k, the retrieval cost is either insignificant or at most a constant (independent of k) times the optimum. This helps explain why competitive ratios of many on-line paging algorithms have been typically observed to be constant in practice.
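The deterministic algorithm described in the abstract is widely known as Landlord: each cached file holds a credit, misses uniformly "charge rent" to cached files in proportion to their sizes until some file's credit reaches zero and can be evicted, and hits replenish a file's credit. The following is a minimal Python sketch based on that standard description, not code from the paper itself; the File class, the choice to reset credit to the full retrieval cost on a hit (the analysis permits any value up to the cost), and the assumption that every file fits in the cache are all illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class File:
    name: str
    size: int    # file size (same units as the cache capacity k)
    cost: float  # retrieval cost paid on a miss


def landlord(requests, capacity):
    """Serve a request sequence with a cache of total size at most `capacity`.

    Returns the total retrieval cost paid. Sketch of a Landlord-style
    credit scheme; assumes every requested file has size <= capacity.
    """
    cached = {}   # name -> File currently in the cache
    credit = {}   # name -> remaining credit of that cached file
    total_cost = 0.0

    for f in requests:
        assert f.size <= capacity, "assumed: each file fits in the cache"

        if f.name in cached:
            # Hit: replenish credit (any value up to f.cost is allowed;
            # here we reset it to the full retrieval cost).
            credit[f.name] = f.cost
            continue

        # Miss: make room, then retrieve the file and pay its cost.
        while sum(g.size for g in cached.values()) + f.size > capacity:
            # Charge rent: lower every cached file's credit in proportion
            # to its size, by the largest amount keeping credits nonnegative.
            delta = min(credit[g.name] / g.size for g in cached.values())
            for g in cached.values():
                credit[g.name] -= delta * g.size
            # Evict some file whose credit has reached (essentially) zero.
            victim = next(g for g in cached.values() if credit[g.name] <= 1e-12)
            del cached[victim.name]
            del credit[victim.name]

        cached[f.name] = f
        credit[f.name] = f.cost
        total_cost += f.cost

    return total_cost


if __name__ == "__main__":
    a, b, c = File("a", 3, 1.0), File("b", 2, 5.0), File("c", 4, 2.0)
    print(landlord([a, b, a, c, b], capacity=5))
```

With unit sizes and unit costs this sketch degenerates to a paging rule in the LRU/FIFO family, which is one way to see how the algorithm generalizes those strategies.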
Keywords: Paging, Caching, On-line algorithms, Competitive analysis