20 matching records found; search took 31 ms.
1.
We continue the study of priority or “greedy-like” algorithms as initiated in Borodin et al. (2003) [10] and as extended to graph theoretic problems in Davis and Impagliazzo (2009) [12]. Graph theoretic problems pose some modeling problems that did not exist in the original applications of Borodin et al. and Angelopoulos and Borodin (2002) [3]. Following the work of Davis and Impagliazzo, we further clarify these concepts. In the graph theoretic setting, there are several natural input formulations for a given problem and we show that priority algorithm bounds in general depend on the input formulation. We study a variety of graph problems in the context of arbitrary and restricted priority models corresponding to known “greedy algorithms”.
2.
We consider a single-source network design problem from a game-theoretic perspective. Gupta, Kumar and Roughgarden (Proc.
35th Annual ACM STOC, pp. 365–372, 2003) developed a simple method for a single-source rent-or-buy problem that also yields the best-known approximation ratio for
the problem. We show how to use a variant of this method to develop an approximately budget-balanced and group strategyproof
cost-sharing method for the problem.
The novelty of our approach stems from our obtaining the cost-sharing methods for the rent-or-buy problem by carefully combining
cost-shares for the simpler Steiner tree problem. Our algorithm is conceptually simpler than the previous such cost-sharing
method due to Pál and Tardos (Proc. 44th Annual FOCS, pp. 584–593, 2003), and improves the previously-known approximation factor of 15 to 4.6.
A preliminary version of this work appears in the Proc. International Workshop on Approximation Algorithms for Combinatorial Optimization Problems, 2004. This research was done in part during the IMA Workshop on Network Management and Design at the University of Minnesota, April 2003.
A. Gupta supported in part by an NSF CAREER award CCF-0448095, and by an Alfred P. Sloan Fellowship.
A. Srinivasan supported in part by the National Science Foundation under Grant No. 0208005 and ITR Award CNS-0426683.
Research of É. Tardos supported in part by ONR grant N00014-98-1-0589, and NSF grants CCR-0311333 and CCR-0325453.
3.
The data migration problem is to compute an efficient plan for moving data stored on devices in a network from one configuration to another.
It is modeled by a transfer graph, where vertices represent the storage devices, and edges represent data transfers required
between pairs of devices. Each vertex has a non-negative weight, and each edge has a processing time. A vertex completes when
all the edges incident on it complete; the constraint is that two edges incident on the same vertex cannot be processed simultaneously.
The objective is to minimize the sum of weighted completion times of all vertices. Kim (J. Algorithms 55, 42–57, 2005) gave an LP-rounding 3-approximation algorithm when edges have unit processing times. We give a more efficient primal-dual
algorithm that achieves the same approximation guarantee. When edges have arbitrary processing times we give a primal-dual
5.83-approximation algorithm. We also study a variant of the open shop scheduling problem. This is a special case of the data
migration problem in which the transfer graph is bipartite and the objective is to minimize the sum of completion times of
edges. We present a simple algorithm that achieves an approximation ratio of
, thus improving the 1.796-approximation given by Gandhi et al. (ACM Trans. Algorithms 2(1), 116–129, 2006). We show that the analysis of our algorithm is almost tight.
A preliminary version of the paper appeared in the Proceedings of the 9th International Workshop on Approximation Algorithms
for Combinatorial Optimization Problems, APPROX 2006.
Research of R. Gandhi partially supported by Rutgers University Research Council Grant.
Research of J. Mestre done at the University of Maryland; supported by NSF Awards CCR-0113192 and CCF-0430650, and the University
of Maryland Dean’s Dissertation Fellowship.
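As an illustration of the model in this abstract, the objective and the matching constraint can be sketched in a few lines. The greedy round-builder below is only a toy scheduler for exercising the objective, not the paper's primal-dual algorithm; all names are illustrative.

```python
def weighted_completion_cost(weights, schedule):
    """Sum of weights[v] * C_v, where C_v is the round in which the
    last transfer touching device v finishes (unit processing times)."""
    finish = {v: 0 for v in weights}
    for round_no, matching in enumerate(schedule, start=1):
        used = set()
        for u, v in matching:
            assert u not in used and v not in used, "round must be a matching"
            used.update((u, v))
            finish[u] = finish[v] = round_no
    return sum(weights[v] * finish[v] for v in weights)

def greedy_rounds(edges):
    """Pack transfers into rounds, each round a matching (toy scheduler)."""
    remaining, schedule = list(edges), []
    while remaining:
        used, matching, rest = set(), [], []
        for u, v in remaining:
            if u in used or v in used:
                rest.append((u, v))
            else:
                matching.append((u, v))
                used.update((u, v))
        schedule.append(matching)
        remaining = rest
    return schedule

weights = {"a": 3, "b": 1, "c": 2, "d": 1}
sched = greedy_rounds([("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")])
print(len(sched), weighted_completion_cost(weights, sched))  # → 3 16
```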
4.
The notion of ε-kernel was introduced by Agarwal et al. (J. ACM 51:606–635, 2004) to set up a unified framework for computing various extent measures of a point set P approximately. Roughly speaking, a subset Q⊆P is an ε-kernel of P if for every slab W containing Q, the expanded slab (1+ε)W contains P. They illustrated the significance of ε-kernel by showing that it yields approximation algorithms for a wide range of geometric optimization problems.
We present a simpler and more practical algorithm for computing the ε-kernel of a set P of points in ℝ^d. We demonstrate the practicality of our algorithm by showing its empirical performance on various inputs. We then describe
an incremental algorithm for fitting various shapes and use the ideas of our algorithm for computing ε-kernels to analyze the performance of this algorithm. We illustrate the versatility and practicality of this technique by
implementing approximation algorithms for minimum enclosing cylinder, minimum-volume bounding box, and minimum-width annulus.
Finally, we show that ε-kernels can be effectively used to expedite the algorithms for maintaining extents of moving points.
A preliminary version of the paper appeared in Proceedings of the 20th Annual ACM Symposium on Computational Geometry, 2004, pp. 263–272. Research by the first two authors is supported by NSF under grants CCR-00-86013, EIA-98-70724, EIA-01-31905,
and CCR-02-04118, and by a grant from the US–Israel Binational Science Foundation. Research by the fourth author is supported
by NSF CAREER award CCR-0237431.
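The slab definition above has an equivalent directional-width reading: Q is an ε-kernel of P when (1+ε)·width(Q, u) ≥ width(P, u) for every direction u. Below is a toy checker over sampled directions in the plane; the sampling resolution and all names are illustrative, not from the paper.

```python
import math

def width(points, ux, uy):
    """Extent of the projection of points onto direction (ux, uy)."""
    proj = [x * ux + y * uy for x, y in points]
    return max(proj) - min(proj)

def is_eps_kernel(Q, P, eps, n_dirs=360):
    """Sampled check of the directional-width condition:
    (1 + eps) * width(Q, u) >= width(P, u) for each sampled u."""
    for i in range(n_dirs):
        t = math.pi * i / n_dirs            # directions cover a half-circle
        ux, uy = math.cos(t), math.sin(t)
        if (1 + eps) * width(Q, ux, uy) < width(P, ux, uy) - 1e-12:
            return False
    return True

P = [(0, 0), (4, 0), (0, 4), (4, 4), (2, 2)]
Q = [(0, 0), (4, 0), (0, 4), (4, 4)]    # hull corners carry every width
print(is_eps_kernel(Q, P, 0.01))        # → True
```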
5.
The “Priority Algorithm” is a model of computation introduced by Borodin, Nielsen and Rackoff ((Incremental) Priority algorithms, Algorithmica 37(4):295–326, 2003) which formulates a wide class of greedy algorithms. For an arbitrary set 𝕊 of jobs, we are interested in whether or not there exists a priority algorithm that gains optimal profit on every subset of 𝕊. In the case where the jobs are all intervals, we characterize such sets 𝕊 and give an efficient algorithm (when 𝕊 is finite) for determining this. We show that in general, however, the problem is NP-hard.
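For intuition, the textbook earliest-finish-time rule is the canonical priority algorithm on intervals: it orders the jobs by a fixed priority and makes an irrevocable accept/reject decision on each. It is optimal for unit-profit interval scheduling, the simplest instance of the question this abstract studies; the sketch below is illustrative and is not the paper's characterization.

```python
def earliest_finish_first(intervals):
    """intervals: list of (start, finish); returns a maximum-size set of
    pairwise-disjoint intervals (unit profits)."""
    chosen, last_finish = [], float("-inf")
    for s, f in sorted(intervals, key=lambda iv: iv[1]):  # fixed priority
        if s >= last_finish:          # irrevocable accept iff disjoint
            chosen.append((s, f))
            last_finish = f
    return chosen

jobs = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(len(earliest_finish_first(jobs)))  # → 3
```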
6.
V. S. Anil Kumar Madhav V. Marathe Srinivasan Parthasarathy Aravind Srinivasan 《Algorithmica》2009,55(1):205-226
We present polylogarithmic approximations for the R|prec|C_max and R|prec|∑_j w_j C_j problems, when the precedence constraints are “treelike”—i.e., when the undirected graph underlying the precedences is a forest. These are the first non-trivial generalizations of the job shop scheduling problem to scheduling with precedence constraints that are not just chains. These are also the first non-trivial results for the weighted completion time objective on unrelated machines with precedence constraints of any kind. We obtain improved bounds for the weighted completion time and flow time for the case of chains with restricted assignment—this generalizes the job shop problem to these objective functions. We use the same lower bound of “congestion + dilation”, as in other job shop scheduling approaches (e.g. Shmoys, Stein and Wein, SIAM J. Comput. 23, 617–632, 1994). The first step in our algorithm for the R|prec|C_max problem with treelike precedences involves using the algorithm of Lenstra, Shmoys and Tardos to obtain a processor assignment with the congestion + dilation value within a constant factor of the optimal. We then show how to generalize the random-delays technique of Leighton, Maggs and Rao to the case of trees. For the special case of chains, we show a dependent rounding technique which leads to a bicriteria approximation algorithm for minimizing the flow time, a notoriously hard objective function.
A preliminary version of this paper appeared in the Proc. International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX), pages 146–157, 2005.
V.S. Anil Kumar supported in part by NSF Award CNS-0626964. Part of this work was done while at the Los Alamos National Laboratory,
and supported in part by the Department of Energy under Contract W-7405-ENG-36.
M.V. Marathe supported in part by NSF Award CNS-0626964. Part of this work was done while at the Los Alamos National Laboratory,
and supported in part by the Department of Energy under Contract W-7405-ENG-36.
Part of this work by S. Parthasarathy was done while at the Department of Computer Science, University of Maryland, College
Park, MD 20742, and in part while visiting the Los Alamos National Laboratory. Research supported in part by NSF Award CCR-0208005
and NSF ITR Award CNS-0426683.
Research of A. Srinivasan supported in part by NSF Award CCR-0208005, NSF ITR Award CNS-0426683, and NSF Award CNS-0626636.
7.
Polynomial-time approximation algorithms with nontrivial performance guarantees are presented for the problems of (a) partitioning the vertices of a weighted graph into k blocks so as to maximize the weight of crossing edges, and (b) partitioning the vertices of a weighted graph into two blocks of equal cardinality, again so as to maximize the weight of crossing edges. The approach, pioneered by Goemans and Williamson, is via a semidefinite programming relaxation.
The first author was supported in part by NSF Grant CCR-9225008. The work described here was undertaken while the second author
was visiting Carnegie Mellon University; at that time he was a Nuffield Science Research Fellow, and was supported in part
by Grant GR/F 90363 of the UK Science and Engineering Research Council, and Esprit Working Group 7097 “RAND”.
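For scale, the trivial baseline the semidefinite programming approach improves on is a uniformly random partition: each edge crosses with probability 1 − 1/k, so the expected crossing weight is (1 − 1/k) times the total weight. A sketch estimating that expectation empirically; all names are illustrative.

```python
import random

def random_k_cut_weight(edges, vertices, k, trials=2000, seed=0):
    """Average crossing weight of a uniformly random k-partition.
    Each edge crosses with probability 1 - 1/k, so the expectation is
    (1 - 1/k) * total weight -- the bound the SDP rounding beats."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        block = {v: rng.randrange(k) for v in vertices}
        total += sum(w for u, v, w in edges if block[u] != block[v])
    return total / trials

verts = list(range(5))
edges = [(u, v, 1.0) for u in verts for v in verts if u < v]  # K5, 10 edges
w = random_k_cut_weight(edges, verts, k=2)
print(round(w, 2))  # expectation is (1 - 1/2) * 10 = 5.0
```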
8.
We prove upper and lower bounds on the competitiveness of randomized algorithms for the list update problem of Sleator and Tarjan. We give a simple and elegant randomized algorithm that is more competitive than the best previous randomized algorithm due to Irani. Our algorithm uses randomness only during an initialization phase, and from then on runs completely deterministically. It is the first randomized competitive algorithm with this property to beat the deterministic lower bound. We generalize our approach to a model in which access costs are fixed but update costs are scaled by an arbitrary constant d. We prove lower bounds for deterministic list update algorithms and for randomized algorithms against oblivious and adaptive on-line adversaries. In particular, we show that for this problem adaptive on-line and adaptive off-line adversaries are equally powerful. A preliminary version of these results appeared in a joint paper with S. Irani in the Proceedings of the 2nd Symposium on Discrete Algorithms, 1991 [17]. This research was partially supported by NSF Grants CCR-8808949 and CCR-8958528. This research was partially supported by NSF Grant CCR-9009753. This research was supported in part by the National Science Foundation under Grant CCR-8658139, by DIMACS, a National Science Foundation Science and Technology Center, Grant No. NSF-STC88-09648.
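As a reference point for the cost model in this abstract, the deterministic move-to-front rule can be simulated in a few lines (standard model: accessing the item at position i costs i, counted from 1, and moving the accessed item to the front is free). This is only the classical baseline, not the randomized algorithm of the paper.

```python
def mtf_cost(initial_list, requests):
    """Total access cost of move-to-front on a request sequence."""
    lst = list(initial_list)
    total = 0
    for x in requests:
        i = lst.index(x)           # 0-based position of the item
        total += i + 1             # access cost is its 1-based position
        lst.insert(0, lst.pop(i))  # move accessed item to front, free
    return total

print(mtf_cost(["a", "b", "c"], ["c", "c", "b", "a"]))  # → 10
```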
9.
Timothy M. Chan 《Algorithmica》2008,50(2):236-243
We describe an O(n^3/log n)-time algorithm for the all-pairs-shortest-paths problem for a real-weighted directed graph with n vertices. This slightly improves a series of previous, slightly subcubic algorithms by Fredman (SIAM J. Comput. 5:49–60, 1976), Takaoka (Inform. Process. Lett. 43:195–199, 1992), Dobosiewicz (Int. J. Comput. Math. 32:49–60, 1990), Han (Inform. Process. Lett. 91:245–250, 2004), Takaoka (Proc. 10th Int. Conf. Comput. Comb., Lect. Notes Comput. Sci., vol. 3106, pp. 278–289, Springer, 2004), and Zwick (Proc. 15th Int. Sympos. Algorithms and Computation, Lect. Notes Comput. Sci., vol. 3341, pp. 921–932, Springer, 2004). The new algorithm is surprisingly simple and different from previous ones.
A preliminary version of this paper appeared in Proc. 9th Workshop Algorithms Data Struct. (WADS), Lect. Notes Comput. Sci.,
vol. 3608, pp. 318–324, Springer, 2005.
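For contrast, the classical O(n^3) Floyd–Warshall algorithm solves the same problem (real edge weights, directed graph, all pairs); the abstract's contribution is shaving a log n factor off this running time. A baseline sketch:

```python
def floyd_warshall(n, weighted_edges):
    """All-pairs shortest paths on a real-weighted digraph with
    vertices 0..n-1 (no negative cycles assumed)."""
    INF = float("inf")
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in weighted_edges:
        d[u][v] = min(d[u][v], w)
    for k in range(n):                # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

d = floyd_warshall(4, [(0, 1, 2.5), (1, 2, -1.0), (2, 3, 4.0), (0, 3, 9.0)])
print(d[0][3])  # → 5.5 via 0 -> 1 -> 2 -> 3
```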
10.
Giorgio Ausiello Camil Demetrescu Paolo G. Franciosa Giuseppe F. Italiano Andrea Ribichini 《Algorithmica》2009,55(2):346-374
This article reports the results of an extensive experimental analysis of efficient algorithms for computing graph spanners
in the data streaming model, where an (α,β)-spanner of a graph G is a subgraph S⊆G such that for each pair of vertices the distance in S is at most α times the distance in G plus β. To the best of our knowledge, this is the first computational study of graph spanner algorithms in a streaming setting.
We compare experimentally the randomized algorithms proposed by Baswana and by Elkin (In: Proceedings of the 34th International Colloquium on Automata, Languages and Programming (ICALP 2007), Wroclaw, Poland, pp. 716–727, 9–13 July 2007) for general stretch factors with the deterministic algorithm presented by Ausiello et al. (In: Proceedings of the 15th Annual European Symposium on Algorithms (ESA 2007), Engineering and Applications Track, Eilat, Israel, 8–10 October 2007. LNCS, vol. 4698, pp. 605–617, 2007), designed for building small-stretch spanners. All the algorithms we implemented work in a data streaming model where the input graph is given as a stream of edges in arbitrary order, and all of them need a single pass over the data. Differently from the algorithm in Ausiello et al., the algorithms in Baswana and Elkin need to know in advance the number of vertices in the graph.
The results of our experimental investigation on several input families confirm that all these algorithms are very efficient
in practice, finding spanners with stretch and size much smaller than the theoretical bounds and comparable to those obtainable
by off-line algorithms. Moreover, our experimental findings confirm that small values of the stretch factor are the case of
interest in practice, and that the algorithm by Ausiello et al. tends to produce spanners of better quality than the algorithms by Baswana and Elkin, while still using a comparable amount
of time and space resources.
Work partially supported by the Italian Ministry of University and Research under Project MAINSTREAM “Algorithms for Massive Information Structures and Data Streams”. A preliminary version of this paper was presented at the 15th Annual European Symposium on Algorithms (ESA 2007).
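The (α,β)-spanner condition stated above is directly checkable on small inputs: for every pair of vertices, the distance in S must be at most α times the distance in G plus β. A toy verifier for unweighted graphs using BFS distances; all names are illustrative.

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src in an adjacency-list graph."""
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def is_spanner(vertices, g_edges, s_edges, alpha, beta):
    """Check d_S(u,v) <= alpha * d_G(u,v) + beta for all pairs."""
    def adjacency(edges):
        adj = {v: [] for v in vertices}
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        return adj
    g, s = adjacency(g_edges), adjacency(s_edges)
    for u in vertices:
        dg, ds = bfs_dist(g, u), bfs_dist(s, u)
        for v, d in dg.items():
            if ds.get(v, float("inf")) > alpha * d + beta:
                return False
    return True

verts = [0, 1, 2, 3]
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
tree = [(0, 1), (1, 2), (2, 3)]          # drop one cycle edge
print(is_spanner(verts, cycle, tree, alpha=3, beta=0))  # → True
```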
11.
Xin He 《Algorithmica》1995,13(6):553-572
We present an efficient parallel algorithm for constructing rectangular duals of plane triangular graphs. This problem finds applications in VLSI design and floor-planning problems. No NC algorithm for solving this problem was previously known. The algorithm takes O(log^2 n) time with O(n) processors on a CRCW PRAM, where n is the number of vertices of the graph. This research was supported by NSF Grants CCR-9011214 and CCR-9205982.
12.
We apply and extend the priority algorithm framework
introduced by Borodin, Nielsen, and
Rackoff to define greedy-like algorithms for the
(uncapacitated) facility location problems and set cover problems. These problems
have been the focus of extensive research from the point of view of
approximation algorithms and for both problems greedy-like algorithms
have been proposed and analyzed. The priority algorithm definitions are general enough to capture a broad class of algorithms that can be characterized as greedy-like, while still allowing non-trivial lower bounds on the approximability of the problems by algorithms in this class. Our results are orthogonal to complexity considerations, and hence apply to algorithms that are not necessarily polynomial time.
13.
We study three new techniques that speed up the branch-and-bound algorithm for the MAX-2-SAT problem. The first technique is a group of new lower bound functions for the algorithm; we show that these functions are admissible and consistently better than other known lower bound functions. The other two techniques are based on the strongly connected components of the implication graph of a 2CNF formula: one uses the graph to simplify the formula, and the other uses the graph to design a new variable ordering. The experiments show that the simplification can reduce the size of the input substantially, regardless of the clause-to-variable ratio, and that the new variable ordering performs much better when the clause-to-variable ratio is less than 2. A direct outcome of this research is a high-performance implementation of an exact algorithm for MAX-2-SAT which outperforms any implementation we know about in the same category. We also show that our implementation is a feasible and effective tool to solve large instances of the Max-Cut problem in graph theory. Preliminary results of this paper appeared in [20,21]. This research was supported in part by NSF under grant CCR-0098093.
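The implication graph mentioned above is built from a 2CNF formula by turning each clause (a ∨ b) into the implications ¬a → b and ¬b → a; its strongly connected components are the object both techniques operate on. Below is a sketch of that building block only (Kosaraju's two-pass DFS); how the components are then used to simplify the MAX-2-SAT instance is the paper's contribution and is not reproduced here.

```python
def implication_graph(clauses, n_vars):
    """Literals are +v / -v for v in 1..n_vars; a clause (a or b)
    contributes the implications -a -> b and -b -> a."""
    adj = {lit: [] for v in range(1, n_vars + 1) for lit in (v, -v)}
    for a, b in clauses:
        adj[-a].append(b)
        adj[-b].append(a)
    return adj

def sccs(adj):
    """Kosaraju: finish order on adj, then collect components on the
    reversed graph in reverse finish order."""
    seen = set()

    def dfs(u, graph, out):
        stack = [(u, iter(graph[u]))]
        seen.add(u)
        while stack:
            node, it = stack[-1]
            advanced = False
            for v in it:
                if v not in seen:
                    seen.add(v)
                    stack.append((v, iter(graph[v])))
                    advanced = True
                    break
            if not advanced:
                stack.pop()
                out.append(node)   # post-order: node is finished

    order = []
    for u in adj:
        if u not in seen:
            dfs(u, adj, order)
    rev = {u: [] for u in adj}
    for u in adj:
        for v in adj[u]:
            rev[v].append(u)
    seen.clear()
    comps = []
    for u in reversed(order):
        if u not in seen:
            comp = []
            dfs(u, rev, comp)
            comps.append(sorted(comp))
    return comps

# (x1 or x2), (not x1 or x2), (x1 or not x2)
comps = sccs(implication_graph([(1, 2), (-1, 2), (1, -2)], 2))
print(len(comps))  # → 2
```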
14.
Computing the duplication history of a tandem repeated region is an important problem in computational biology (Fitch in Genetics
86:623–644, 1977; Jaitly et al. in J. Comput. Syst. Sci. 65:494–507, 2002; Tang et al. in J. Comput. Biol. 9:429–446, 2002). In this paper, we design a polynomial-time approximation scheme (PTAS) for the case where the size of the duplication block
is 1. Our PTAS is faster than the previously best PTAS in Jaitly et al. (J. Comput. Syst. Sci. 65:494–507, 2002). For example, to achieve a ratio of 1.5, our PTAS takes O(n^5) time while the PTAS in Jaitly et al. (J. Comput. Syst. Sci. 65:494–507, 2002) takes O(n^11) time. We also design a ratio-6 polynomial-time approximation algorithm for the case where the size of each duplication block is at most 2. This is the first polynomial-time approximation algorithm with a guaranteed ratio for this case.
Part of this work was done while Z.-Z. Chen was visiting City University of Hong Kong.
15.
Stanley P. Y. Fung Feifeng Zheng Wun-Tat Chan Francis Y. L. Chin Chung Keung Poon Prudence W. H. Wong 《Journal of Scheduling》2008,11(4):299-308
We study an on-line broadcast scheduling problem in which requests have deadlines, and the objective is to maximize the weighted
throughput, i.e., the weighted total length of the satisfied requests. For the case where all requested pages have the same
length, we present an online deterministic algorithm named BAR and prove that it is 4.56-competitive. This improves the previous algorithm of Kim and Chwa (Theor. Comput. Sci. 325(3):479–488, 2004), which is shown to be 5-competitive by Chan et al. (Lecture Notes in Computer Science, vol. 3106, pp. 210–218, 2004). In the case that pages may have different lengths, we give a ( )-competitive algorithm, where Δ is the ratio of maximum to minimum page lengths. This improves the (4Δ+3)-competitive algorithm of Chan et al. (Lecture Notes in Computer Science, vol. 3106, pp. 210–218, 2004). We also prove an almost matching lower bound of Ω(Δ/log Δ). Furthermore, for small values of Δ we give better lower bounds.
The work described in this paper was fully supported by grants from the Research Grants Council of the Hong Kong SAR, China
[CityU 1198/03E, HKU 7142/03E, HKU 5172/03E], an NSF Grant of China [No. 10371094], and a Nuffield Foundation Grant of UK
[NAL/01004/G].
16.
On approximating the longest path in a graph
We consider the problem of approximating the longest path in undirected graphs. In an attempt to pin down the best achievable
performance ratio of an approximation algorithm for this problem, we present both positive and negative results. First, a
simple greedy algorithm is shown to find long paths in dense graphs. We then consider the problem of finding paths in graphs
that are guaranteed to have extremely long paths. We devise an algorithm that finds paths of a logarithmic length in Hamiltonian
graphs. This algorithm works for a much larger class of graphs (weakly Hamiltonian), where the result is the best possible.
Since the hard case appears to be that of sparse graphs, we also consider sparse random graphs. Here we show that a relatively long path can be obtained, thereby partially answering an open problem of Broder et al.
To explain the difficulty of obtaining better approximations, we also prove hardness results. We show that, for any ε<1, the problem of finding a path of length n − n^ε in an n-vertex Hamiltonian graph is NP-hard. We then show that no polynomial-time algorithm can find a constant-factor approximation to the longest-path problem unless P=NP. We conjecture that the result can be strengthened to say that, for some constant δ>0, finding an approximation of ratio n^δ is also NP-hard. As evidence toward this conjecture, we show that if any polynomial-time algorithm can approximate the longest path to a ratio of , for any ε>0, then NP has a quasi-polynomial deterministic time simulation. The hardness results apply even to the special case where the input consists of bounded-degree graphs.
D. Karger was supported by an NSF Graduate Fellowship, NSF Grant CCR-9010517, and grants from the Mitsubishi Corporation and
OTL. R. Motwani was supported by an Alfred P. Sloan Research Fellowship, an IBM Faculty Development Award, grants from Mitsubishi
and OTL, NSF Grant CCR-9010517, and NSF Young Investigator Award CCR-9357849, with matching funds from IBM, the Schlumberger
Foundation, the Shell Foundation, and the Xerox Corporation. G. D. S. Ramkumar was supported by a grant from the Toshiba Corporation.
Communicated by M. X. Goemans.
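The "simple greedy algorithm" for dense graphs can be pictured in its most naive form: repeatedly extend the current path by any unvisited neighbor. On dense graphs a maximal path found this way is long; the sketch below is purely illustrative and carries no guarantee on sparse graphs.

```python
def greedy_long_path(adj, start):
    """Extend a simple path from start by any unvisited neighbor until
    stuck; returns a maximal (not necessarily maximum) path."""
    path, used = [start], {start}
    while True:
        nxt = next((v for v in adj[path[-1]] if v not in used), None)
        if nxt is None:
            return path
        path.append(nxt)
        used.add(nxt)

# K4 plus a pendant vertex: the dense core lets the greedy walk go far
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4], 4: [3]}
p = greedy_long_path(adj, 0)
print(len(p))  # → 5 (the path visits every vertex)
```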
17.
We present two new algorithms, Arc Length and Peer Count, for choosing a peer uniformly at random from the set of all peers in Chord (Proceedings of the ACM SIGCOMM 2001 Technical
Conference, 2001). We show analytically that, in expectation, both algorithms have latency O(log n) and send O(log n) messages. Moreover, we show empirically that the average latency and message cost of Arc Length is 10.01 log n and that the average latency and message cost of Peer Count is 20.02 log n. To the best of our knowledge, these two algorithms are the first fully distributed algorithms for choosing a peer uniformly
at random from the set of all peers in a Distributed Hash Table (DHT). Our motivation for studying this problem is threefold:
to enable data collection by statistically rigorous sampling methods; to provide support for randomized, distributed algorithms
over peer-to-peer networks; and to support the creation and maintenance of random links, and thereby offer a simple means
of improving fault-tolerance.
Research of S. Lewis, J. Saia and M. Young was partially supported by NSF grant CCR-0313160 and Sandia University Research
Program grant No. 191445.
18.
We study approximation algorithms and hardness of approximation for several versions of the problem of packing Steiner trees.
For packing edge-disjoint Steiner trees of undirected graphs, we show APX-hardness for four terminals. For packing Steiner-node-disjoint Steiner trees of undirected graphs, we show a logarithmic hardness result, and give an approximation guarantee of O(√n log n), where n denotes the number of nodes. For the directed setting (packing edge-disjoint Steiner trees of directed graphs), we show a hardness result of Θ(m^(1/3−ε)) and give an approximation guarantee of O(m^(1/2+ε)), where m denotes the number of edges. We have similar results for packing Steiner-node-disjoint priority Steiner trees of undirected graphs.
Supported by NSERC Grant No. OGP0138432.
Supported by an NSERC postdoctoral fellowship, Department of Combinatorics and Optimization at University of Waterloo, and
a University start-up fund at University of Alberta.
19.
We study a capacitated network design problem with applications in
local access network design. Given a network, the problem is to route
flow from several sources to a sink and to install
capacity on the edges to support the flow at minimum cost.
Capacity can be purchased only in multiples of a fixed quantity.
All the flow from a source must be routed
in a single path to the sink. This NP-hard problem generalizes the
Steiner tree problem and also more effectively models the applications
traditionally formulated as capacitated tree problems. We present
an approximation algorithm with performance ratio (ρ_ST + 2), where ρ_ST is the performance ratio of any approximation algorithm for the minimum Steiner tree problem. When all sources have unit demand, the ratio improves to (ρ_ST + 1) and, in particular, to 2 when all nodes in the graph are sources.
20.
Bernhard Fuchs 《Information Processing Letters》2003,87(4):219-220
In 2002, Lin and Xue [Inform. Process. Lett. 84 (2002) 103-107] introduced a variant of the graph Steiner tree problem in which each terminal vertex is required to be a leaf in the solution Steiner tree. They presented a (ρ+2)-approximation algorithm, where ρ is the approximation ratio of the best known efficient algorithm for the regular graph Steiner tree problem. In this note, we derive a simplified algorithm with an improved approximation ratio of 2ρ (currently ρ≈1.550, see [SODA 2000, 2000, pp. 770-790]).