Similar Articles
20 similar articles found
1.
The paging problem is that of deciding which pages to keep in a memory of k pages in order to minimize the number of page faults. We develop the partitioning algorithm, a randomized on-line algorithm for the paging problem. We prove that its expected cost on any sequence of requests is within a factor of H_k of optimum. (H_k is the kth harmonic number, which is about ln k.) No on-line algorithm can perform better by this measure. Our result improves on the best previous algorithm by a factor of two.
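The partitioning algorithm itself is involved; as an illustration of randomized paging in the same spirit, here is a minimal Python sketch of the related randomized marking algorithm (a different, simpler algorithm, known to be roughly 2H_k-competitive), not the partitioning algorithm of this paper:

```python
import random

def randomized_marking(requests, k):
    """Randomized marking paging: on a fault, evict a uniformly random
    unmarked cached page. When every cached page is marked and a fault
    occurs, all marks are cleared (a new phase begins). Returns faults."""
    cache, marked, faults = set(), set(), 0
    for p in requests:
        if p in cache:
            marked.add(p)
            continue
        faults += 1
        if len(cache) == k:
            if not (cache - marked):          # all marked: start a new phase
                marked.clear()
            victim = random.choice(sorted(cache - marked))
            cache.remove(victim)
        cache.add(p)
        marked.add(p)
    return faults
```

For example, `randomized_marking([1, 2, 3, 1, 4, 2], k=3)` counts the faults of one random run; averaging over runs approximates the algorithm's expected cost on that sequence.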

2.
In this paper, we give a finer separation of several known paging algorithms using a new technique called relative interval analysis. This technique compares the fault rate of two paging algorithms across the entire range of inputs of a given size, rather than in the worst case alone. Using this technique, we characterize the relative performance of LRU and LRU-2, as well as LRU and FWF, among others. We also show that look-ahead is beneficial for a paging algorithm, a fact that is well known in practice but was, until recently, not verified by theory.

3.
N. Young 《Algorithmica》1994,11(6):525-541
Weighted caching is a generalization of paging in which the cost to evict an item depends on the item. We study both of these problems as restrictions of the well-known k-server problem, which involves moving servers in a graph in response to requests so as to minimize the distance traveled. We give a deterministic on-line strategy for weighted caching that, on any sequence of requests, given a cache holding k items, incurs a cost within a factor of k/(k−h+1) of the minimum cost possible given a cache holding h items. The strategy generalizes least recently used, one of the best paging strategies in practice. The analysis is a primal-dual analysis, the first for an on-line problem, exploiting the linear programming structure of the k-server problem. We introduce loose competitiveness, motivated by Sleator and Tarjan's complaint [ST] that the standard competitive ratios for paging strategies are too high. A k-server strategy is loosely c(k)-competitive if, for any sequence, for almost all k, the cost incurred by the strategy with k servers either is no more than c(k) times the minimum cost or is insignificant. We show that certain paging strategies (including least recently used, and first in first out) that are k-competitive in the standard model are loosely c(k)-competitive provided c(k)/ln k → ∞ and both k/c(k) and c(k) are nondecreasing. We show that the marking algorithm, a randomized paging strategy that is (ln k)-competitive in the standard model, is loosely c(k)-competitive provided c(k) − 2 ln ln k → ∞ and both 2 ln k − c(k) and c(k) are nondecreasing. This paper is the journal version of On-line Caching as Cache Size Varies, which appeared in the Proceedings of the 2nd Annual ACM-SIAM Symposium on Discrete Algorithms (1991). Details beyond those in this paper may be found in Competitive Paging and Dual-Guided Algorithms for Weighted Caching and Matching, which is the author's thesis and is available as Technical Report CS-TR-348-91 from the Computer Science Department at Princeton University. This research was performed while the author was at the Computer Science Department, Princeton University, Princeton, NJ 08544, USA, and was supported by the Hertz Foundation.
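To make the k/(k−h+1) resource-augmentation bound concrete, a small illustrative computation (the helper name is ours):

```python
def weighted_caching_bound(k, h):
    """Competitive factor k/(k-h+1) of the on-line strategy with a cache
    of k items against an optimum restricted to h <= k items."""
    assert 1 <= h <= k
    return k / (k - h + 1)

# Equal cache sizes recover the classical ratio k; giving the on-line
# strategy twice the optimum's cache drops the factor to about 2.
print(weighted_caching_bound(10, 10))   # 10.0
print(weighted_caching_bound(10, 5))    # ~1.67
print(weighted_caching_bound(100, 50))  # ~1.96
```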

4.
We introduce the continuously compounded interest rate into a generalized ski-rental problem with two options: either pay some rental proportionally to the usage time (the rent option), or buy the equipment and then pay some reduced rental proportionally to the usage time (the generalized buy option). Under the framework of competitive analysis, a randomized algorithm for the modified model is constructed and then is proved optimal by Yao's Lemma, and thus the optimal competitive ratio is obtained. The result shows that the introduction of the interest rate puts off the optimal purchase date and diminishes the uncertainty involved in the decision making.
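For context, a minimal sketch of the classical deterministic ski-rental rule that this model generalizes (no interest rate, no reduced post-purchase rental; the break-even rule and its 2-competitiveness are standard):

```python
def ski_rental_cost(days_skied, buy_cost, rent=1):
    """Classical deterministic ski rental: rent for buy_cost/rent - 1 days,
    then buy. Total cost stays below 2x the offline optimum."""
    threshold = buy_cost // rent - 1        # days to keep renting
    if days_skied <= threshold:
        return days_skied * rent            # season ended before buying
    return threshold * rent + buy_cost      # bought on day threshold + 1
```

The paper's result says that adding continuously compounded interest pushes the optimal purchase date later than this classical break-even point.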

5.
Competitive randomized algorithms for nonuniform problems
Competitive analysis is concerned with comparing the performance of on-line algorithms with that of optimal off-line algorithms. In some cases randomization can lead to algorithms with improved performance ratios on worst-case sequences. In this paper we present new randomized on-line algorithms for snoopy caching and the spin-block problem. These algorithms achieve competitive ratios approaching e/(e−1) ≈ 1.58 against an oblivious adversary. These ratios are optimal and are a surprising improvement over the best possible ratio in the deterministic case, which is 2. We also consider the situation when the request sequences for these problems are generated according to an unknown probability distribution. In this case we show that deterministic algorithms that adapt to the observed request statistics also have competitive factors approaching e/(e−1). Finally, we obtain randomized algorithms for the 2-server problem on a class of isosceles triangles. These algorithms are optimal against an oblivious adversary and have competitive ratios that approach e/(e−1). This compares with the ratio of 3/2 that can be achieved on an equilateral triangle. Supported in part by the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS), an NSF Science and Technology Center funded under NSF Contract STC-88-09648 and supported by the New Jersey Commission on Science and Technology.
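The e/(e−1) ratio is the one achieved by the standard randomized rent-or-buy strategy, to which snoopy caching and spin-block are closely related; a sketch of that standard strategy (not the paper's algorithms), with the buy threshold drawn from the density e^t/(e−1) on [0, 1]:

```python
import math, random

def sample_buy_threshold():
    """Sample a buy threshold T in [0, 1] (in units of the purchase cost)
    with density e^t / (e - 1), via inverse transform sampling."""
    u = random.random()
    return math.log(1 + u * (math.e - 1))   # inverse CDF of (e^t - 1)/(e - 1)

# Renting until T * buy_cost has been spent and then buying yields an
# expected cost at most e/(e-1) times the offline optimum.
print(math.e / (math.e - 1))                # 1.5819767...
```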

6.
Young 《Algorithmica》2002,33(3):371-383
Abstract. Consider the following file caching problem: in response to a sequence of requests for files, where each file has a specified size and retrieval cost, maintain a cache of files of total size at most some specified k so as to minimize the total retrieval cost. Specifically, when a requested file is not in the cache, bring it into the cache and pay the retrieval cost, and remove other files from the cache so that the total size of files remaining in the cache is at most k. This problem generalizes previous paging and caching problems by allowing objects of arbitrary size and cost, both important attributes when caching files for world-wide-web browsers, servers, and proxies. We give a simple deterministic on-line algorithm that generalizes many well-known paging and weighted-caching strategies, including least-recently-used, first-in-first-out, flush-when-full, and the balance algorithm. On any request sequence, the total cost incurred by the algorithm is at most k/(k−h+1) times the minimum possible using a cache of size h ≤ k. For any algorithm satisfying the latter bound, we show it is also the case that for most choices of k, the retrieval cost is either insignificant or at most a constant (independent of k) times the optimum. This helps explain why competitive ratios of many on-line paging algorithms have typically been observed to be constant in practice.
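The deterministic strategy described here matches what the later literature calls the Landlord algorithm; a minimal sketch under that assumption (variable names are ours, and the hit-handling is the LRU-like choice of resetting credit to full):

```python
def landlord(requests, size, cost, k):
    """Landlord file caching: each cached file holds a credit. On a fault,
    uniformly drain credit per unit size from all cached files, evict files
    whose credit reaches zero, then admit the new file with credit equal to
    its retrieval cost. Assumes size[f] <= k. Returns total retrieval cost."""
    credit, total = {}, 0
    for f in requests:
        if f in credit:
            credit[f] = cost[f]             # refresh credit on a hit
            continue
        assert size[f] <= k
        total += cost[f]
        while sum(size[g] for g in credit) + size[f] > k:
            delta = min(credit[g] / size[g] for g in credit)
            for g in list(credit):
                credit[g] -= delta * size[g]
                if credit[g] <= 0:
                    del credit[g]           # evict zero-credit files
        credit[f] = cost[f]
    return total
```

With unit sizes and unit costs this degenerates to a paging strategy, which is how it subsumes the classical rules named in the abstract.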

7.
M. Chrobak  J. Noga 《Algorithmica》1999,23(2):180-185
In the paging problem we have to manage a two-level memory system, in which the first level has short access time but can hold only up to k pages, while the second level is very large but slow. We use competitive analysis to study the relative performance of the two best known algorithms for paging, LRU and FIFO. Sleator and Tarjan proved that the competitive ratio of both LRU and FIFO is k. In practice, however, LRU is known to perform much better than FIFO. It is believed that the superiority of LRU can be attributed to locality of reference exhibited in request sequences. In order to study this phenomenon, Borodin et al. [2] refined the competitive approach by introducing the concept of access graphs. They conjectured that the competitive ratio of LRU on each access graph is less than or equal to the competitive ratio of FIFO. We prove this conjecture in this paper. Received June 2, 1997; revised January 28, 1998.
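A small fault-counting harness makes the LRU/FIFO comparison easy to try on concrete sequences (illustrative code, not from the paper):

```python
from collections import OrderedDict, deque

def lru_faults(requests, k):
    """Count page faults for LRU with a cache of k pages."""
    cache, faults = OrderedDict(), 0
    for p in requests:
        if p in cache:
            cache.move_to_end(p)               # mark p most recently used
        else:
            faults += 1
            if len(cache) == k:
                cache.popitem(last=False)      # evict least recently used
            cache[p] = True
    return faults

def fifo_faults(requests, k):
    """Count page faults for FIFO with a cache of k pages."""
    cache, queue, faults = set(), deque(), 0
    for p in requests:
        if p not in cache:
            faults += 1
            if len(cache) == k:
                cache.remove(queue.popleft())  # evict the oldest loaded page
            cache.add(p)
            queue.append(p)
    return faults

# A sequence with locality of reference, where LRU does strictly better:
seq = [1, 2, 3, 1, 2, 4, 1, 2, 5, 1, 2, 3]
print(lru_faults(seq, 3), fifo_faults(seq, 3))  # prints: 6 8
```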

8.
S. Albers 《Algorithmica》1997,18(3):283-305
We introduce a new model of lookahead for on-line paging algorithms and study several algorithms using this model. A paging algorithm is on-line with strong lookahead l if it sees the present request and a sequence of future requests that contains l pairwise distinct pages. We show that strong lookahead has practical as well as theoretical importance and improves the competitive factors of on-line paging algorithms. This is the first model of lookahead having such properties. In addition to lower bounds, we present a number of deterministic and randomized on-line paging algorithms with strong lookahead which are optimal or nearly optimal. Received April 8, 1994; revised May 15, 1995.

9.
We prove upper and lower bounds on the competitiveness of randomized algorithms for the list update problem of Sleator and Tarjan. We give a simple and elegant randomized algorithm that is more competitive than the best previous randomized algorithm due to Irani. Our algorithm uses randomness only during an initialization phase, and from then on runs completely deterministically. It is the first randomized competitive algorithm with this property to beat the deterministic lower bound. We generalize our approach to a model in which access costs are fixed but update costs are scaled by an arbitrary constant d. We prove lower bounds for deterministic list update algorithms and for randomized algorithms against oblivious and adaptive on-line adversaries. In particular, we show that for this problem adaptive on-line and adaptive off-line adversaries are equally powerful. A preliminary version of these results appeared in a joint paper with S. Irani in the Proceedings of the 2nd Symposium on Discrete Algorithms, 1991 [17]. This research was partially supported by NSF Grants CCR-8808949 and CCR-8958528. This research was partially supported by NSF Grant CCR-9009753. This research was supported in part by the National Science Foundation under Grant CCR-8658139, by DIMACS, a National Science Foundation Science and Technology Center, Grant No. NSF-STC88-09648.
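An algorithm fitting this description (random bits at initialization, deterministic afterwards) is the well-known BIT algorithm; a sketch under that assumption:

```python
import random

def bit_list_update(requests, items):
    """BIT list update: each item gets a random bit at initialization.
    On each access the item's bit is flipped, and the item is moved to
    the front iff its bit becomes 1. Accessing position i (1-based)
    costs i; returns the total access cost."""
    lst = list(items)
    bit = {x: random.randint(0, 1) for x in lst}   # the only randomness
    cost = 0
    for x in requests:
        i = lst.index(x)
        cost += i + 1                              # pay the position
        bit[x] ^= 1
        if bit[x] == 1:                            # move-to-front on half the accesses
            lst.pop(i)
            lst.insert(0, x)
    return cost
```

Once the bits are fixed the run is fully deterministic, matching the abstract's description; BIT is known to be 1.75-competitive against an oblivious adversary, below the deterministic bound of 2.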

10.
Abstract. In this paper we introduce a generalization of paging to the case where there are many threads of requests. This models situations in which the requests come from more than one independent source. Hence, apart from deciding how to serve a request, at each stage it is necessary to decide which request to serve among several possibilities. Four different on-line problems arise, depending on whether we consider fairness restrictions or not, with finite or infinite input sequences. We study all of them, proving lower and upper bounds for the competitiveness of on-line algorithms. The main competitiveness results presented in this paper state that when no fairness restrictions are imposed it is possible to obtain competitive algorithms for finite and infinite inputs. On the other hand, for the fair case in general there exist no competitive algorithms. In addition, we consider three definitions of competitiveness for infinite inputs. One of them forces algorithms to behave efficiently at every finite stage, while the other two aim at comparing the algorithms' steady-state performances. A priori, the three definitions seem different. We study them and find, however, that they are essentially equivalent. This suggests that the competitiveness results we obtain reflect the intrinsic difficulty of the problem and are not a consequence of a too strict definition of competitiveness.

11.
A new measure for the study of on-line algorithms
An accepted measure for the performance of an on-line algorithm is the competitive ratio introduced by Sleator and Tarjan. This measure is well motivated and has led to the development of a mathematical theory for on-line algorithms. We investigate the behavior of this measure with respect to memory needs and benefits of lookahead and find some counterintuitive features. We present lower bounds on the size of memory devoted to recording the past. It is also observed that the competitive ratio reflects no improvement in the performance of an on-line algorithm due to any (finite) amount of lookahead. We offer an alternative measure that exhibits a different and, in some respects, more intuitive behavior. In particular, we demonstrate the use of our new measure by analyzing the tradeoff between the amortized cost of on-line algorithms for the paging problem and the amount of lookahead available to them. We also derive on-line algorithms for the K-server problem on any bounded metric space which, relative to the new measure, are optimal among all on-line algorithms (up to a factor of 2) and are within a factor of 2K of the optimal off-line performance.

12.
A generalized paging problem is considered. Each request is expressed as a set of u pages. In order to satisfy the request, at least one of these pages must be in the cache. Therefore, on a page fault, the algorithm must load into the cache at least one page out of the u pages given in the request. The problem arises in systems in which requests can be serviced by various utilities (e.g., a request for data that appear on various web pages) and a single utility can service many requests (e.g., a web page containing various data). The server has the freedom to select the utility that will service the next request and, hopefully, additional requests in the future. The case u=1 is simply the classical paging problem, which is known to be polynomially solvable. We show that for any u>1 the offline problem is NP-hard and hard to approximate if the cache size k is part of the input, but solvable in polynomial time for constant values of k. We consider mainly online algorithms, and design competitive algorithms for arbitrary values of k and u. We study in more detail the cases where u and k are small. We also give an algorithm which uses resource augmentation and which is asymptotically optimal for u=2. A preliminary version of this paper appeared in Proc. Scandinavian Workshop on Algorithm Theory (SWAT 2006), pp. 124–135, 2006. Research of R. van Stee supported by the Alexander von Humboldt Foundation.

13.
Cohen  Kaplan  Zwick 《Algorithmica》2002,33(4):511-516
Abstract. We present a competitive analysis of the LRFU paging algorithm, a hybrid of the LRU (Least Recently Used) and LFU (Least Frequently Used) paging algorithms. We show that the competitive ratio of LRFU is k + log(1−λ)/log λ − 1, where 1/2 < λ ≤ 1 is the decay parameter used by the LRFU algorithm, and k is the size of the cache. This supplies, in particular, the first natural paging algorithms that are competitive but are not optimally competitive, answering a question of Borodin and El-Yaniv. Although LRFU, as it turns out, is not optimally competitive, it is expected to behave well in practice, as it combines the benefits of both LRU and LFU.
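Plugging sample values into the stated ratio (illustrative; we take λ strictly below 1, since the expression degenerates at λ = 1):

```python
import math

def lrfu_competitive_ratio(k, lam):
    """Competitive ratio k + log(1-lam)/log(lam) - 1 of LRFU from the
    abstract above, for decay parameter 1/2 < lam < 1 and cache size k."""
    assert 0.5 < lam < 1
    return k + math.log(1 - lam) / math.log(lam) - 1

print(lrfu_competitive_ratio(10, 0.51))  # ~10.06: close to LRU's ratio k
print(lrfu_competitive_ratio(10, 0.9))   # ~30.9: the LFU-like regime costs more
```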

14.
In this paper we investigate parallel searches on m concurrent rays for a point target t located at some unknown distance along one of the rays. A group of p agents or robots moving at unit speed searches for t. The search succeeds when an agent reaches the point t. Given a strategy S, the competitive ratio is the ratio of the time needed by the agents to find t using S to the time needed if the location of t had been known in advance. We provide a strategy with competitive ratio of 1 + 2(m/p − 1)(m/(m − p))^(m/p) and prove that this is optimal. This problem has applications in multiple heuristic searches in AI as well as robot motion planning. The case p = 1 is known in the literature as the cow path problem.
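Plugging in small values (a quick check; p = 1, m = 2 recovers the classical competitive ratio 9 for the cow path problem):

```python
def ray_search_ratio(m, p):
    """Optimal competitive ratio 1 + 2(m/p - 1)(m/(m - p))^(m/p)
    for p unit-speed agents searching m rays (requires p < m)."""
    assert 1 <= p < m
    return 1 + 2 * (m / p - 1) * (m / (m - p)) ** (m / p)

print(ray_search_ratio(2, 1))  # 9.0, the classical cow path bound
print(ray_search_ratio(4, 2))  # 9.0 as well: 2 agents on 4 rays
print(ray_search_ratio(3, 1))  # 14.5
```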

15.
We consider the online problem of scheduling jobs with equal processing times on a single machine. Each job has a release time and a deadline, and the goal is to maximize the number of jobs completed by their deadlines. Chrobak et al. (2007, SICOMP 36:6) introduce a preempt-restart model in which progress toward completing a preempted job is lost, yet that job can be restarted from scratch. They provide a 3/2-competitive deterministic algorithm and show that this is the optimal competitiveness. Their analysis is based on a complex charging scheme among individual jobs and the use of several partial functions and mappings for assigning the charges. In this paper, we provide an alternative proof of the result using a more global potential argument to compare the relative progress of the algorithm versus the optimal schedule over time. This new proof is significantly simpler and more intuitive than the original, and our technique is applicable to related problems.

16.
We show that the Shortest Processing Time (SPT) algorithm is (Δ+1)/2-competitive for nonpreemptive uniprocessor total flow time with release dates, where Δ is the ratio between the longest and shortest job lengths. This is best possible for a deterministic algorithm and improves on the (Δ+1)-competitive ratio shown by Epstein and van Stee using different methods.
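A minimal simulation of SPT in this setting (single machine, nonpreemptive, release dates; illustrative code, with ties broken arbitrarily):

```python
import heapq

def spt_flow_time(jobs):
    """Shortest Processing Time: whenever the machine goes idle, start the
    shortest released-but-unscheduled job. jobs is a list of
    (release, length) pairs; returns total flow time (completion - release)."""
    jobs = sorted(jobs)                     # by release time
    heap, i, t, flow = [], 0, 0, 0
    while i < len(jobs) or heap:
        if not heap and t < jobs[i][0]:
            t = jobs[i][0]                  # idle until the next release
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(heap, (jobs[i][1], jobs[i][0]))  # keyed by length
            i += 1
        length, release = heapq.heappop(heap)
        t += length                         # run the job to completion
        flow += t - release
    return flow

print(spt_flow_time([(0, 3), (1, 1), (2, 1)]))  # total flow time 9
```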

17.
Randomized algorithms are algorithms that employ randomness in their solution method. We show that the performance of randomized algorithms is less affected by factors that prevent most parallel deterministic algorithms from attaining their theoretical speedup bounds. A major reason is that the mapping of randomized algorithms onto multiprocessors involves very little scheduling or communication overhead. Furthermore, reliability is enhanced because the failure of a single processor leads only to degradation, not failure, of the algorithm. We present results of an extensive simulation done on a multiprocessor simulator, running a randomized branch-and-bound algorithm. The particular case we consider is the knapsack problem, due to its ease of formulation. We observe the largest speedups in precisely those problems that take large amounts of time to solve. This work has been supported by the U.S. Army Research Office under Contract No. DAAG 29-85-K-0236.

18.
In a FOCS 1990 paper, S. Irani proved that the First-Fit online algorithm for coloring a graph uses at most O(k log n) colors for k-inductive graphs. In this note we provide a very short proof of this fact.
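First-Fit itself is simple; a sketch for an online vertex stream, where each vertex arrives together with its edges to earlier vertices:

```python
def first_fit_coloring(vertices, edges_to_earlier):
    """First-Fit online coloring: give each arriving vertex the smallest
    color (0, 1, 2, ...) not used by its already-colored neighbors.
    edges_to_earlier[v] lists v's neighbors among earlier vertices."""
    color = {}
    for v in vertices:
        taken = {color[u] for u in edges_to_earlier[v]}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

# A path 1-2-3-4 presented in order needs only 2 colors under First-Fit.
print(first_fit_coloring([1, 2, 3, 4], {1: [], 2: [1], 3: [2], 4: [3]}))
```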

19.
In a k-server routing problem, k ≥ 1 servers move in a metric space in order to visit specified points or carry objects from sources to destinations. In the online version, requests arrive online while the servers are traveling. Two classical objective functions are to minimize the makespan, i.e., the time when the last server has completed its tour (k-Traveling Salesman Problem or k-tsp), and to minimize the sum of completion times (k-Traveling Repairman Problem or k-trp). Both problems, the k-tsp and the k-trp, have been studied from a competitive analysis point of view, where the cost of an online algorithm is compared to that of an optimal offline algorithm. However, the gaps between the obtained competitive ratios and the corresponding lower bounds have mostly been quite large for k>1, in particular for randomized algorithms against an oblivious adversary. We reduce a number of gaps by providing new lower bounds for randomized algorithms. The most dramatic improvement is in the lower bound for the k-Dial-a-Ride Problem (the k-trp when objects need to be carried), which we raise to 3; this is currently also the best lower bound for deterministic algorithms.

20.
On Capital Investment
We deal with the problem of making capital investments in machines for manufacturing a product. Opportunities for investment occur over time; every such option consists of a capital cost for a new machine and a resulting productivity gain, i.e., a lower production cost for one unit of product. The goal is to minimize the total production and capital costs when future demand for the product and future investment opportunities are unknown. This can be viewed as a generalization of the ski-rental problem and is related to the mortgage problem [3]. If all possible capital investments obey the rule that lower production costs require higher capital investments, then we present an algorithm with constant competitive ratio. If new opportunities may be strictly superior to previous ones (in terms of both capital cost and production cost), then we give an algorithm which is O(min{1+log C, 1+log log P, 1+log M})-competitive, where C is the ratio between the highest and the lowest capital costs, P is the ratio between the highest and the lowest production costs, and M is the number of investment opportunities. We also present a lower bound on the competitive ratio of any on-line algorithm for this case, which is Ω(min{log C, log log P/log log log P, log M/log log M}). This shows that the competitive ratio of our algorithm is tight (up to constant factors) as a function of C, and not far from the best achievable as a function of P and M. Received February 6, 1997; revised November 17, 1997.
