Found 11 similar documents; search took 15 ms.
1.
Abstract. This paper concerns the online list accessing problem. In the first part of the paper we present two new families of list accessing algorithms. The first family is of optimal, 2-competitive, deterministic online algorithms. This family, called the MRI (MOVE-TO-RECENT-ITEM) family, includes as members the well-known MOVE-TO-FRONT (MTF) algorithm and the recent, more "conservative" algorithm TIMESTAMP due to Albers. So far MOVE-TO-FRONT and TIMESTAMP were the only algorithms known to be optimal in terms of their competitive ratio. This new family contains a sequence of algorithms { A(i) }, i ≥ 1, where A(1) is equivalent to TIMESTAMP and the limit element A(∞) is MTF. Further, in this class, for each i, the algorithm A(i) is more conservative than algorithm A(i+1) in the sense that it is more reluctant to move an accessed item to the front, thus giving a gradual transition from the conservative TIMESTAMP to the "reckless" MTF. The second new family, called the PRI (PASS-RECENT-ITEM) family, is also infinite and contains TIMESTAMP. We show that most algorithms in this family attain a competitive ratio of 3.
In the second, experimental, part of the paper we report the results of an extensive empirical study of the performances
of a large set of online list accessing algorithms (including members of our MRI and PRI families). The algorithms' access
cost performances were tested with respect to a number of different request sequences. These include sequences of independent
requests generated by probability distributions and sequences generated by Markov sources to examine the influence of locality.
It turns out that the degree of locality has a considerable influence on the algorithms' absolute and relative costs, as well
as on their rankings. In another experiment we tested the algorithms' performances as data compressors in two variants of
the compression scheme of Bentley et al. In both experiments, members of the MRI and PRI families were found to be among the
best performing algorithms.
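The list accessing cost model described above can be sketched in a few lines. The "reluctant" rule below, which moves an item to the front only once it has been requested a threshold number of times, is purely illustrative of the conservative-to-reckless spectrum of the MRI family; it is not the paper's actual A(i) definition.

```python
# Minimal sketch of the list accessing cost model. Accessing the item at
# 1-based position i costs i. With move_threshold=1 this is plain MTF;
# larger thresholds give a toy "more conservative" rule (illustrative
# only, not the paper's A(i)).
def access_costs(initial, requests, move_threshold=1):
    lst, total, hits = list(initial), 0, {}
    for x in requests:
        i = lst.index(x)                  # 0-based position of the item
        total += i + 1                    # access cost = 1-based position
        hits[x] = hits.get(x, 0) + 1
        if hits[x] >= move_threshold:     # reluctant: wait for popularity
            lst.insert(0, lst.pop(i))     # move the item to the front
    return total
```

For example, on the list (a, b, c) with requests c, c, a, MTF pays 3 + 1 + 2 = 6, while the threshold-2 variant leaves c in place on its first access and pays 3 + 3 + 2 = 8, showing how reluctance trades immediate cost for stability.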
2.
In this paper we study the performance of list update algorithms under arbitrary distributions that exhibit strict locality of reference and prove that Move-To-Front (MTF) is the best list update algorithm under any such distribution. We also show that the performance of MTF depends on the amount of locality of reference, while the performance of any static list update algorithm is independent of the amount of locality.
3.
We consider the problem of dynamic reorganization of a linear list, where requests for the elements are generated randomly with fixed, unknown probabilities. The objective is to obtain the smallest expected cost per access. It has been shown that when no a priori information is given on the reference probabilities, the Counter Scheme (CS) provides an optimal reorganization rule, which applies to all possible distributions. In this paper we show that for a list of n elements, arbitrary request probabilities, and any ε > 0, the expected cost under CS achieves a ratio of at most 1 + ε to the optimal (minimal) expected cost within Q n lg² n reorganization steps, for a Q we can compute.
Received January 20, 1998; revised March 1, 1998.
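The Counter Scheme above can be sketched as follows. For simplicity this version re-sorts the whole list after each request rather than promoting only the accessed item past elements with lower counters; the behavior (list ordered by non-increasing frequency counts) is the same.

```python
# Minimal sketch of the Counter Scheme (CS): keep a frequency counter
# per element and maintain the list in non-increasing counter order, so
# the access cost converges toward that of the optimal static ordering.
# Re-sorting every step is a simplification of the actual promotion rule.
def counter_scheme_cost(initial_list, requests):
    lst = list(initial_list)
    count = {x: 0 for x in lst}
    total = 0
    for x in requests:
        total += lst.index(x) + 1          # cost = 1-based position
        count[x] += 1
        lst.sort(key=lambda y: -count[y])  # stable sort keeps tie order
    return total, lst
```

On the list (a, b, c) with requests c, c, b, the first access to c costs 3 and promotes it to the front, so the second costs only 1.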
4.
In this paper the problem of efficiently serving a sequence of requests presented in an on-line fashion located at points
of a metric space is considered. We call this problem the On-Line Travelling Salesman Problem (OLTSP). It has a variety of
relevant applications in logistics and robotics.
We consider two versions of the problem. In the first one the server is not required to return to the departure point after
all presented requests have been served. For this problem we derive a lower bound of 2 on the competitive ratio on the real line. Besides, a 2.5-competitive algorithm for a wide class of metric spaces and a 7/3-competitive algorithm for the real line are provided.
For the other version of the problem, in which returning to the departure point is required, we present an optimal 2-competitive algorithm for the above-mentioned general class of metric spaces. If in this case the metric space is the real line, we present a 1.75-competitive algorithm that compares with a ≈ 1.64 lower bound.
Received November 12, 1997; revised June 8, 1998.
5.
S. Albers, Algorithmica 1997, 18(3):283-305
We introduce a new model of lookahead for on-line paging algorithms and study several algorithms using this model. A paging
algorithm is on-line with strong lookahead l if it sees the present request and a sequence of future requests that contains l pairwise distinct pages. We show that strong lookahead has practical as well as theoretical importance and improves the competitive
factors of on-line paging algorithms. This is the first model of lookahead having such properties. In addition to lower bounds
we present a number of deterministic and randomized on-line paging algorithms with strong lookahead which are optimal or nearly
optimal.
Received April 8, 1994; revised May 15, 1995.
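One natural way to exploit lookahead can be sketched as follows. This is an illustrative heuristic, not Albers' actual algorithm: on a fault, prefer to evict a cached page that does not occur in the visible lookahead window.

```python
# Hedged sketch of paging with lookahead (illustrative only): on a
# fault with a full cache, evict a page absent from the next `lookahead`
# requests if one exists, otherwise fall back to the oldest page (FIFO).
def serve_with_lookahead(requests, cache_size, lookahead):
    cache, faults = [], 0
    for t, page in enumerate(requests):
        if page in cache:
            continue                      # hit: no cost in the fault model
        faults += 1
        if len(cache) == cache_size:
            window = set(requests[t + 1 : t + 1 + lookahead])
            # evict a page outside the lookahead window when possible
            victim = next((p for p in cache if p not in window), cache[0])
            cache.remove(victim)
        cache.append(page)
    return faults
```

On the cyclic sequence 1, 2, 3, 1, 2, 3 with a cache of size 2, zero lookahead (plain FIFO) faults on every request, while lookahead 2 saves one fault by keeping a soon-needed page.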
6.
We propose a framework to model on-line resource management problems based on an on-line version of positive linear programming.
We consider both min cost problems and max benefit problems and propose logarithmic competitive algorithms that are optimal
up to a constant factor.
The proposed framework provides a general methodology that applies to a wide class of on-line problems: shop scheduling,
packet routing, and in general a class of packing and assignment problems. Previously studied problems such as on-line multiprocessor scheduling and on-line virtual circuit routing can also be modeled within this framework.
Received December 18, 1996; revised March 2, 1997.
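The flavor of an on-line positive (covering) linear program can be sketched as follows: constraint rows arrive one at a time, and the fractional solution is only ever increased, multiplicatively, until the new row is satisfied. The exact update rule below is illustrative and not the paper's algorithm.

```python
# Hedged sketch of on-line fractional covering: minimize sum(c[i]*x[i])
# subject to each arriving row a satisfying sum(a[i]*x[i]) >= 1, where
# variables may only increase over time. The multiplicative-plus-seed
# update is an illustrative choice in the spirit of such algorithms.
def online_fractional_cover(costs, rows):
    x = [0.0] * len(costs)
    for a in rows:
        while sum(ai * xi for ai, xi in zip(a, x)) < 1:
            for i, ai in enumerate(a):
                if ai > 0:
                    # cheap variables grow faster; the additive seed
                    # guarantees progress from x[i] == 0
                    x[i] = x[i] * (1 + ai / costs[i]) + ai / (len(a) * costs[i])
    return x
```

The key on-line property is monotonicity: decisions made for earlier constraints are never revoked, only augmented.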
7.
Irani, Algorithmica 2002, 33(3):384-409
Abstract. We consider the paging problem where the pages have varying size. This problem has applications to page replacement policies
for caches containing World Wide Web documents. We consider two models for the cost of an algorithm on a request sequence.
In the first (the FAULT model) the goal is to minimize the number of page faults. In the second (the BIT model) the goal is
to minimize the total number of bits that have to be read into the cache. We show offline algorithms for both cost models
that obtain approximation factors of O(log k), where k is the ratio of the size of the cache to the size of the smallest page. We show randomized online algorithms for both cost models that are O(log² k)-competitive. In addition, if the input sequence is generated by a known distribution, we show an algorithm for the FAULT model whose expected cost is within a factor of O(log k) of any other online algorithm.
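The two cost models can be contrasted with a small simulation. The eviction rule here (evict the largest cached page first) is an illustrative assumption, not one of the paper's algorithms.

```python
# Hedged sketch of variable-size paging under the two cost models:
# FAULT counts misses, BIT counts bytes brought into the cache. The
# evict-largest-first rule is illustrative only.
def run_cache(requests, sizes, capacity):
    cache, faults, bits = set(), 0, 0
    for p in requests:
        if p in cache:
            continue
        faults += 1
        bits += sizes[p]                  # BIT model: pay the page size
        while sum(sizes[q] for q in cache) + sizes[p] > capacity:
            cache.remove(max(cache, key=lambda q: sizes[q]))
        cache.add(p)
    return faults, bits
```

Note how one large page can dominate the BIT cost while contributing only a single fault, which is why the two models can rank policies differently.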
8.
Relative Principal Component Analysis and Its Application to Data Compression and Fault Diagnosis
Traditional principal component analysis (PCA) ignores the effect of physical units (dimensions) on the system, so the selected principal components are often unrepresentative; after dimensional standardization, on the other hand, the resulting eigenvalues are frequently nearly equal, which prevents effective principal component extraction. To address this problem, this paper introduces the concepts of the relative transform (RT), relative principal components (RPCs), and a "uniform" distribution, and establishes a new method called relative principal component analysis (RPCA). The method first performs dimensional standardization of the system's components; it then analyzes and determines the importance of each component from prior information about the system; next, under a system energy conservation criterion, it assigns corresponding weights to the components; finally, RPCA is applied to the system using the established relative principal component model. Numerical examples are used to study the application of RPCA to data compression and fault diagnosis. Both theoretical analysis and simulation experiments show that the principal components selected by RPCA are more representative and have a clearer geometric meaning; together with the flexibility of principal component selection, this gives the new method broad application prospects.
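The pre-processing idea behind the relative transform can be sketched as follows: each variable is first made dimensionless by scaling with its standard deviation, then re-weighted by a prior importance weight. The weights and the exact scaling are illustrative assumptions, not the paper's precise RT definition.

```python
# Hedged sketch of a relative-transform-style standardization: divide
# each column by its standard deviation (removing units) and multiply
# by a prior importance weight. Weights here are illustrative.
def relative_transform(data, weights):
    """data: list of row samples; weights: prior importance per column."""
    cols = list(zip(*data))
    def std(col):
        m = sum(col) / len(col)
        return (sum((v - m) ** 2 for v in col) / len(col)) ** 0.5
    scales = [w / std(c) for w, c in zip(weights, cols)]
    return [[v * s for v, s in zip(row, scales)] for row in data]
```

After this step the column scales reflect assigned importance rather than arbitrary measurement units, so a subsequent eigendecomposition no longer produces the near-equal eigenvalues that plain standardization suffers from.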
9.
In this paper we present a method for clustering geo-referenced data suitable for applications in spatial data mining, based
on the medoid method. The medoid method is related to k -MEANS, with the restriction that cluster representatives be chosen from among the data elements. Although the medoid method
in general produces clusters of high quality, especially in the presence of noise, it is often criticized for the Ω(n²) time that it requires.
Our method incorporates both proximity and density information to achieve high-quality clusters in subquadratic time; it
does not require that the user specify the number of clusters in advance. The time bound is achieved by means of a fast approximation
to the medoid objective function, using Delaunay triangulations to store proximity information.
Received December 21, 1998; revised August 25, 1999, and October 25, 1999.
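The medoid objective itself is simple to state in code. The brute-force search below, shown here on 1-D points for readability, is what the paper's fast approximation avoids; it is illustrative, not the paper's method.

```python
# Sketch of the medoid objective: representatives must be data points,
# and the cost of a medoid set is the summed distance from every point
# to its nearest medoid. Brute force over all k-subsets is exponential;
# even evaluating swaps naively costs Omega(n^2), motivating the paper's
# subquadratic approximation. 1-D points are used for simplicity.
from itertools import combinations

def medoid_cost(points, medoids):
    return sum(min(abs(p - m) for m in medoids) for p in points)

def best_medoids(points, k):
    return min(combinations(points, k),
               key=lambda ms: medoid_cost(points, ms))
```

For two well-separated groups such as {0, 1, 2} and {10, 11, 12}, the optimal pair of medoids is the two group centers.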
10.
Abstract. We investigate a variant of on-line edge-coloring in which there is a fixed number of colors available and the aim is to
color as many edges as possible. We prove upper and lower bounds on the performance of different classes of algorithms for
the problem. Moreover, we determine the performance of two specific algorithms, First-Fit and Next-Fit.
Specifically, algorithms that never reject edges that they are able to color are called fair algorithms. We consider the four combinations of fair/not fair and deterministic/randomized.
We show that the competitive ratio of deterministic fair algorithms can vary only between approximately 0.4641 and 1/2, and that Next-Fit is worst possible among fair algorithms. Moreover, we show that no algorithm is better than 4/7-competitive.
If the graphs are all k-colorable, any fair algorithm is at least 1/2-competitive. Again, this performance is matched by Next-Fit, while the competitive ratio for First-Fit is shown to be k/(2k-1), which is significantly better, as long as k is not too large.
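First-Fit with a fixed palette can be sketched directly; this is a standard formulation of the rule (color each arriving edge with the lowest-numbered color free at both endpoints) and it is fair, rejecting an edge only when no color fits.

```python
# Sketch of fair online edge coloring with k colors: First-Fit assigns
# each arriving edge the smallest color unused at both endpoints, and
# rejects the edge only when all k colors are blocked (fairness).
def first_fit_edge_coloring(edges, k):
    used = {}        # vertex -> set of colors on incident colored edges
    colored = []     # list of ((u, v), color) for accepted edges
    for u, v in edges:
        taken = used.setdefault(u, set()) | used.setdefault(v, set())
        color = next((c for c in range(k) if c not in taken), None)
        if color is not None:
            used[u].add(color)
            used[v].add(color)
            colored.append(((u, v), color))
    return colored
```

With k = 2 colors, the triangle edges (1,2), (2,3), (1,3) force a rejection of the third edge, since any two colors are already blocked at its endpoints.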
11.
Choosing balls that best approximate a 3D object is a non-trivial problem. To answer it, we first address the inner approximation problem, which consists of approximating an object defined by a union of n balls with k balls defining an inner region. This solution is further used to construct an outer approximation enclosing the initial shape, and an interpolated approximation sandwiched between the inner and outer approximations. The inner approximation problem is reduced to a geometric generalization of weighted max k-cover, solved with the greedy strategy, which achieves the classical 1 − 1/e lower bound. The outer approximation is obtained by exploiting the partition of the boundary of the inner approximation by the Apollonius Voronoi diagram of its defining balls. Implementation-wise, we present robust software incorporating the calculation of the exact Delaunay triangulation of points with degree two algebraic coordinates, of the exact medial axis of a union of balls, and of a certified estimate of the volume of a union of balls. Application-wise, we exhibit accurate coarse-grain molecular models using a number of balls 20 times smaller than the number of atoms, a key requirement to simulate crowded cellular environments.
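The combinatorial core of the inner approximation, greedy weighted max k-cover, can be sketched abstractly: repeatedly pick the set (here, a ball represented by the elements it covers) with the largest yet-uncovered weight. The element/weight encoding below is an illustrative abstraction of the geometric version.

```python
# Sketch of greedy weighted max k-cover: pick, k times, the set covering
# the most uncovered weight. This achieves the classical 1 - 1/e
# approximation guarantee for the covered weight.
def greedy_max_k_cover(sets, weights, k):
    covered, chosen = set(), []
    for _ in range(k):
        best = max(range(len(sets)),
                   key=lambda i: sum(weights[e] for e in sets[i] - covered))
        chosen.append(best)
        covered |= sets[best]
    return chosen, sum(weights[e] for e in covered)
```

Note the greedy choice is adaptive: after the first pick, the second set is scored only on the weight it adds, so an overlapping set can lose to a smaller disjoint one.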