1.
We consider the problem of approximately integrating a Lipschitz function f (with a known Lipschitz constant) over an interval. The goal is to achieve an additive error of at most ε using as few samples of f as possible. We use the adaptive framework: on all problem instances an adaptive algorithm should perform almost as well as the best possible algorithm tuned for the particular problem instance. We distinguish between the performances of the best possible deterministic and randomized algorithms, respectively. We give a deterministic algorithm that is asymptotically optimal among deterministic algorithms: no such algorithm can use asymptotically fewer samples. However, any deterministic algorithm requires more samples than the randomized optimum on some problem instances. By combining a deterministic adaptive algorithm with Monte Carlo sampling and variance reduction, we give a randomized algorithm whose sample bound is matched by a lower bound: any algorithm requires as many samples in expectation on some problem instance (f,ε), which proves that our algorithm is optimal.
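As background for the deterministic side, here is a minimal adaptive integration sketch (our illustration, not the paper's algorithm): on a subinterval [a, b], the midpoint rule applied to an L-Lipschitz function has error at most L(b−a)²/4, so we split intervals recursively, halving the error budget, until each local bound is met.

```python
# Hedged sketch of deterministic adaptive integration of a Lipschitz function.
# The midpoint rule on [lo, hi] has additive error at most L*(hi-lo)^2/4 for an
# L-Lipschitz integrand, and the budgets of the subintervals sum to eps, so the
# total additive error is at most eps.

def integrate_lipschitz(f, a, b, L, eps):
    """Approximate the integral of f on [a, b] within additive error eps,
    given Lipschitz constant L."""
    total, stack = 0.0, [(a, b, eps)]
    while stack:
        lo, hi, budget = stack.pop()
        width = hi - lo
        if L * width * width / 4 <= budget:      # midpoint error bound met
            total += f((lo + hi) / 2) * width
        else:                                    # split; halve the budget too
            mid = (lo + hi) / 2
            stack.append((lo, mid, budget / 2))
            stack.append((mid, hi, budget / 2))
    return total
```

Flat regions of f end up covered by few wide intervals while steep regions are refined further, which is exactly the instance-dependent behavior the adaptive framework rewards.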
2.
We study dynamic routing in store-and-forward packet networks where each network link has bounded buffer capacity for receiving incoming packets and is capable of transmitting a fixed number of packets per unit of time. At any moment in time, packets are injected at various network nodes, each packet specifying its destination node. The goal is to maximize the throughput, defined as the number of packets delivered to their destinations.
In this paper, we make some progress on throughput maximization in various network topologies. Let n and m denote the number of nodes and links in the network, respectively. For line networks, we show that the competitive ratio of Nearest-to-Go (NTG), a natural distributed greedy algorithm, essentially matches a known lower bound on the performance of any greedy algorithm. We also show that if we allow the online routing algorithm to make centralized decisions, there is a randomized polylog(n)-competitive algorithm for line networks as well as for rooted tree networks, where each packet is destined for the root of the tree. For grid graphs, we establish an upper bound on the competitive ratio of NTG, together with a lower bound that no greedy algorithm can beat. Finally, for arbitrary network topologies, we bound the competitive ratio of NTG, improving upon an earlier bound of O(mn).
An extended abstract appeared in the Proceedings of the 8th Workshop on Approximation Algorithms for Combinatorial Optimization Problems, APPROX 2005, Berkeley, CA, USA, pp. 1–13, Lecture Notes in Computer Science, vol. 1741, Springer, Berlin.
S. Angelov is supported in part by NSF Career Award CCR-0093117, NSF Award ITR 0205456 and NIGMS Award 1-P20-GM-6912-1.
S. Khanna is supported in part by an NSF Career Award CCR-0093117, NSF Award CCF-0429836, and a US-Israel Binational Science
Foundation Grant.
K. Kunal is supported in part by an NSF Career Award CCR-0093117 and NSF Award CCF-0429836.
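To make the Nearest-to-Go rule concrete, here is a toy discrete-time simulation on a line network (an illustrative sketch under simplifying assumptions, namely unit link capacities and drop-on-overflow, not the paper's exact model): each link forwards, among its buffered packets, one with the fewest remaining hops to its destination.

```python
# Toy sketch of the Nearest-to-Go (NTG) greedy rule on a line network.
# Nodes 0..n-1; each directed link i -> i+1 has a buffer of capacity B and
# transmits one packet per time step. Packets arriving at a full buffer drop.

def simulate_ntg(n, B, injections, steps):
    """injections: dict step -> list of (source, dest) packets, dest > source.
    Returns the number of packets delivered to their destinations."""
    buffers = [[] for _ in range(n - 1)]   # buffers[i] feeds link i -> i+1
    delivered = 0
    for t in range(steps):
        # Inject new packets at their source buffers (drop on overflow).
        for (src, dst) in injections.get(t, []):
            if len(buffers[src]) < B:
                buffers[src].append(dst)
        # Each link forwards the buffered packet nearest to its destination.
        moves = []
        for i in range(n - 1):
            if buffers[i]:
                dst = min(buffers[i], key=lambda d: d - (i + 1))
                buffers[i].remove(dst)
                moves.append((i + 1, dst))
        # Forwarded packets arrive at the next node or are absorbed.
        for (node, dst) in moves:
            if dst == node:
                delivered += 1
            elif len(buffers[node]) < B:
                buffers[node].append(dst)  # overflow here drops the packet
    return delivered
```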
3.
An instance of the path hitting problem consists of two families of paths in a common undirected graph: a set of demand paths and a set ℋ of hitting paths, where each path in ℋ is associated with a non-negative cost. When a hitting path p∈ℋ and a demand path q share at least one edge, we say that p hits q. The objective is to find a minimum-cost subset of ℋ whose members collectively hit all demand paths. In this paper we provide constant-factor approximation algorithms for path hitting, confined to instances in which the underlying graph is a tree, a spider, or a star. Although such restricted settings may appear to be very simple, we demonstrate that they still capture some of the most basic covering problems in graphs. Our approach combines several novel ideas: we extend the algorithm of Garg, Vazirani and Yannakakis (Algorithmica 18:3–20, 1997) for approximate multicuts and multicommodity flows in trees to prove new integrality properties; we present a reduction that involves multiple calls to this extended algorithm; and we introduce a polynomial-time solvable variant of the edge cover problem, which may be of independent interest.
An extended abstract of this paper appeared in Proceedings of the 14th Annual European Symposium on Algorithms, 2006.
This work is part of D. Segev’s Ph.D. thesis prepared at Tel-Aviv University under the supervision of Prof. Refael Hassin.
4.
We study an online job scheduling problem arising in networks with aggregated links. The goal is to schedule n jobs, divided into k disjoint chains, on m identical machines, without preemption, so that the jobs within each chain complete in the order of their release times and the maximum flow time is minimized.
We present a deterministic online algorithm together with a matching lower bound on the competitive ratio, even for randomized algorithms. The performance bound we derive in the paper is, in fact, more subtle than a standard competitive-ratio bound: it shows that in overload conditions (when many jobs are released in a short amount of time), the algorithm's performance is close to the optimum.
We also show how to compute an offline solution efficiently for k=1, and that minimizing the maximum flow time is NP-hard for k,m≥2. As by-products of our method, we obtain two offline polynomial-time algorithms for minimizing makespan: an optimal algorithm for k=1, and a 2-approximation algorithm for any k.
W. Jawor and M. Chrobak supported by NSF grants OISE-0340752 and CCR-0208856.
Work of C. Dürr conducted while being affiliated with the Laboratoire de Recherche en Informatique, Université Paris-Sud,
91405 Orsay. Supported by the CNRS/NSF grant 17171 and ANR Alpage.
5.
We analyze approximation algorithms for several variants of the traveling salesman problem with multiple objective functions.
First, we consider the symmetric TSP (STSP) with γ-triangle inequality. For this problem, we present a deterministic polynomial-time approximation algorithm and a randomized approximation algorithm that achieves a better ratio. In particular, we obtain a 2+ε approximation for multi-criteria metric STSP.
Then we show that multi-criteria cycle cover problems admit fully polynomial-time randomized approximation schemes. Based on these schemes, we present randomized approximation algorithms for STSP with γ-triangle inequality, ATSP (asymmetric TSP) with γ-triangle inequality, STSP with weights one and two (ratio 4/3), and ATSP with weights one and two (ratio 3/2).
A preliminary version of this work has been presented at the 4th Workshop on Approximation and Online Algorithms (WAOA 2006)
(Lecture Notes in Computer Science, vol. 4368, pp. 302–315, 2007).
B. Manthey is supported by the Postdoc-Program of the German Academic Exchange Service (DAAD). He is on leave from Saarland
University and has done part of the work at the Institute for Theoretical Computer Science of the University of Lübeck supported
by DFG research grant RE 672/3 and at the Department of Computer Science at Saarland University.
6.
Hash tables on external memory are commonly used for indexing in database management systems. In this paper we present an algorithm that, in an asymptotic sense, achieves the best possible I/O and space complexities. Let B denote the number of records that fit in a block, and let N denote the total number of records. Our hash table achieves the optimal expected number of I/Os for looking up a record (no matter whether it is present or not). Inserting, deleting, or changing a record that has just been looked up likewise requires the optimal amortized expected number of I/Os, including the I/Os for reorganizing the hash table when the size of the database changes. The expected external space usage is close to the optimum of N/B blocks, and just O(1) blocks of internal memory are needed.
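A toy illustration (not the paper's data structure) of why packing B records per block makes roughly one I/O per lookup possible: hash each record to one of ⌈N/B⌉ blocks and count block reads. Lookups cost a single I/O whenever the home bucket did not overflow; the class name and I/O accounting here are our own simplifications.

```python
# Toy model of a static external-memory hash table: N records spread over
# ceil(N/B) blocks of at most B records each. A lookup reads the record's home
# block (one I/O); a shared overflow area stands in, pessimistically, for the
# extra I/Os a real structure would spend on bucket overflows.

class ToyExternalHashTable:
    def __init__(self, records, B):
        self.B = B
        self.num_blocks = max(1, -(-len(records) // B))  # ceil(N/B)
        self.blocks = [[] for _ in range(self.num_blocks)]
        self.overflow = []                 # records that did not fit their block
        for key in records:
            block = self.blocks[hash(key) % self.num_blocks]
            if len(block) < B:
                block.append(key)
            else:
                self.overflow.append(key)  # would cost extra I/Os in practice
        self.io_count = 0

    def lookup(self, key):
        self.io_count += 1                 # read the home block: one I/O
        if key in self.blocks[hash(key) % self.num_blocks]:
            return True
        if self.overflow:
            self.io_count += 1             # pessimistic extra I/O for overflow
            return key in self.overflow
        return False
```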
7.
We consider the problem of finding a stable matching of maximum size when both ties and unacceptable partners are allowed in preference lists. This problem is NP-hard, and the previously best known approximation algorithm achieves the ratio 2−c(log N)/N, where c is an arbitrary positive constant and N is the number of men in an input. In this paper, we improve the ratio to 2−c/√N, where c is an arbitrary positive constant that satisfies a fixed upper bound.
A preliminary version of this paper was presented at the 16th Annual International Symposium on Algorithms and Computation, ISAAC 2005.
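For contrast with the hard variant above, the classic case with no ties and complete preference lists is solved exactly by the Gale-Shapley proposal algorithm; a minimal sketch (included as background, not as the paper's approximation algorithm):

```python
# Classic Gale-Shapley deferred-acceptance algorithm for stable matching
# without ties and with complete preference lists.

def gale_shapley(men_prefs, women_prefs):
    """men_prefs/women_prefs: dict person -> preference list (most preferred
    first). Returns a stable matching as a dict woman -> man."""
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    free = list(men_prefs)           # men not yet engaged
    next_choice = {m: 0 for m in men_prefs}
    engaged = {}                     # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]   # m's best not-yet-tried woman
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])  # w trades up; her old partner is free again
            engaged[w] = m
        else:
            free.append(m)           # w rejects m
    return engaged
```

With ties and incompleteness, the matching produced by such proposal dynamics need no longer have maximum size, which is what makes the problem above hard to approximate.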
8.
We consider the problems of enumerating all minimal strongly connected subgraphs and all minimal dicuts of a given strongly connected directed graph G=(V,E). We show that the first of these problems can be solved in incremental polynomial time, while the second problem is NP-hard: given a collection of minimal dicuts for G, it is NP-hard to tell whether it can be extended. The latter result implies, in particular, that for a given set of points, it is NP-hard to generate all of its maximal subsets contained in a closed half-space through the origin. We also discuss the enumeration of all minimal subsets of the point set whose convex hull contains the origin as an interior point, and show that this problem includes as a special case the well-known hypergraph transversal problem.
This research was supported by the National Science Foundation (Grant IIS-0118635). The third and fourth authors are also
grateful for the partial support by DIMACS, the National Science Foundation’s Center for Discrete Mathematics and Theoretical
Computer Science.
Our friend and co-author, Leonid Khachiyan, tragically passed away on April 29, 2005.
9.
Chvátal-Gomory cuts are among the most well-known classes of cutting planes for general integer linear programs (ILPs). In case the constraint multipliers are either 0 or 1/2, such cuts are known as {0,1/2}-cuts. It has been proven by Caprara and Fischetti (Math. Program. 74:221–235, 1996) that separation of {0,1/2}-cuts is NP-hard.
In this paper, we study ways to separate {0,1/2}-cuts effectively in practice. We propose a range of preprocessing rules to reduce the size of the separation problem. The core of the preprocessing is a Gaussian-elimination-like procedure. To separate the most violated {0,1/2}-cut, we formulate the (reduced) problem as an integer linear program. Some simple heuristic separation routines complete the algorithmic framework.
Computational experiments on benchmark instances show that combining preprocessing with exact and/or heuristic separation is a very effective way to generate strong generic cutting planes for integer linear programs and to reduce the overall computation times of state-of-the-art ILP solvers.
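The underlying cut construction can be illustrated directly: for an integer system Ax ≤ b and multipliers u ∈ {0, 1/2}, rounding down yields the Chvátal-Gomory cut ⌊uA⌋x ≤ ⌊ub⌋. A small sketch (the function name is ours, not from the paper):

```python
# Construct a {0,1/2}-Chvatal-Gomory cut floor(uA) x <= floor(ub) from integer
# constraints A x <= b and multipliers u with entries in {0, 1/2}. Exact
# rational arithmetic via Fraction avoids floating-point rounding issues.
from fractions import Fraction
from math import floor

def zero_half_cut(A, b, u):
    """A: list of integer rows, b: integer right-hand sides, u: multipliers.
    Returns (cut coefficients, cut right-hand side)."""
    half = Fraction(1, 2)
    assert all(ui in (0, half) for ui in u)
    coeffs = [floor(sum(ui * aij for ui, aij in zip(u, col)))
              for col in zip(*A)]          # columns of A
    rhs = floor(sum(ui * bi for ui, bi in zip(u, b)))
    return coeffs, rhs
```

For the triangle constraints x1+x2 ≤ 1, x2+x3 ≤ 1, x1+x3 ≤ 1, choosing all multipliers equal to 1/2 yields the classic odd-cycle cut x1+x2+x3 ≤ 1, which is valid for all integer points but cuts off the fractional point (1/2, 1/2, 1/2).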
10.
We present an O(1.7548^n)-time algorithm finding a minimum feedback vertex set in an undirected graph on n vertices. We also prove that a graph on n vertices can contain at most 1.8638^n minimal feedback vertex sets, and that there exist graphs having 105^(n/10) ≈ 1.5926^n minimal feedback vertex sets.
Preliminary extended abstracts of this paper appeared in the proceedings of SWAT’06 [29] and IWPEC’06 [18].
Additional support of F.V. Fomin, S. Gaspers and A.V. Pyatkin by the Research Council of Norway.
The work of A.V. Pyatkin was partially supported by grants of the Russian Foundation for Basic Research (project code 05-01-00395),
INTAS (project code 04–77–7173).
I. Razgon is supported by Science Foundation Ireland (Grant Number 05/IN/I886).
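For intuition about the objects being counted, here is a brute-force enumeration of minimal feedback vertex sets on small graphs (illustrative only; the point of the paper is precisely to beat this kind of 2^n enumeration):

```python
# Brute-force minimal feedback vertex sets (FVS): a vertex set S is an FVS if
# deleting S leaves the graph acyclic, and minimal if no proper subset is.
from itertools import combinations

def is_acyclic(n, edges, removed):
    """DFS cycle check on the simple undirected graph induced on V minus removed."""
    adj = {v: [] for v in range(n) if v not in removed}
    for u, v in edges:
        if u not in removed and v not in removed:
            adj[u].append(v)
            adj[v].append(u)
    seen = set()
    for s in adj:
        if s in seen:
            continue
        seen.add(s)
        stack = [(s, None)]
        while stack:
            v, parent = stack.pop()
            for w in adj[v]:
                if w == parent:
                    continue
                if w in seen:
                    return False          # back edge closes a cycle
                seen.add(w)
                stack.append((w, v))
    return True

def minimal_fvs(n, edges):
    """All minimal feedback vertex sets, by checking every vertex subset."""
    fvs = [set(c) for k in range(n + 1)
           for c in combinations(range(n), k)
           if is_acyclic(n, edges, set(c))]
    return [s for s in fvs if not any(t < s for t in fvs)]
```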
11.
It is proved that “FIFO” worksharing protocols provide asymptotically optimal solutions to two problems related to sharing large collections of independent tasks in a heterogeneous network of workstations (HNOW). In the first problem, one seeks to accomplish as much work as possible on the HNOW during a prespecified fixed period of L time units. In the second, one seeks to complete W units of work by “renting” the HNOW for as short a time as necessary. The worksharing protocols we study are crafted within an architectural model that characterizes the HNOW via parameters that measure its workstations’ computational and communicational powers. All valid protocols are self-scheduling, in the sense that they determine completely both an amount of work to allocate to each of the HNOW’s workstations and a schedule for all related interworkstation communications. The schedules provide either a value for W given L, or a value for L given W, and hence solve both of the motivating problems. A protocol observes a FIFO regimen if it has the HNOW’s workstations finish their assigned work, and return their results, in the same order in which they are supplied with their workloads. The proven optimality of FIFO protocols resides in the fact that they accomplish at least as much work as any other protocol during all sufficiently long worksharing episodes, and that they complete sufficiently large given collections of tasks at least as fast as any other protocol. Simulation experiments illustrate that the superiority of FIFO protocols is often observed during worksharing episodes of only a few minutes’ duration.
A portion of this research was presented at the 15th ACM Symp. on Parallelism in Algorithms and Architectures (2003).
12.
Romeo Rizzi, Algorithmica 53(3):402–424, 2009
In recent years, new variants of the minimum cycle basis (MCB) problem and new classes of cycle bases have been introduced, motivated by several applications from disparate areas of scientific and technological inquiry. At present, the complexity status of the MCB problem is settled only for undirected, directed, and strictly fundamental cycle bases (SFCBs). Weakly fundamental cycle bases (WFCBs) form a natural superclass of SFCBs: a cycle basis 𝒞 of a graph G is a WFCB iff ν=0 or there exists an edge e of G and a circuit C in 𝒞 such that 𝒞∖{C} is a WFCB of G∖e. WFCBs still possess several of the nice properties offered by SFCBs. At the same time, several classes of graphs enjoying WFCBs of cost asymptotically inferior to the cost of the cheapest SFCBs have been found and exhibited in the literature. Given also the computational difficulty of finding cheap SFCBs, these works advocated an in-depth study of WFCBs. In this paper, we settle the complexity status of the MCB problem for WFCBs (the MWFCB problem): the problem turns out to be NP-hard. However, we also offer a simple and practical 2⌈log2 n⌉-approximation algorithm for the MWFCB problem. In O(nν) time, this algorithm actually returns a WFCB whose cost is at most 2⌈log2 n⌉·∑e∈E(G) w_e, thus allowing a fast 2⌈log2 n⌉-approximation also for the MCB problem. With this algorithm, we provide tight bounds on the cost of any MCB and MWFCB.
13.
We consider the following problem of scheduling with conflicts (swc): find a minimum makespan schedule on identical machines where conflicting jobs cannot be scheduled concurrently. We study the problem when conflicts between jobs are modeled by general graphs.
Our first main positive result is an exact algorithm for two machines and job sizes in {1,2}. For job sizes in {1,2,3}, we obtain an approximation ratio that improves on the one previously known for this case. Our main negative result is that for job sizes in {1,2,3,4} the problem is APX-hard.
Our second contribution is the initiation of the study of an online model for swc, where we present the first results in this model. Specifically, we prove a lower bound on the competitive ratio of any deterministic online algorithm for m machines and unit jobs, and an upper bound of 2 when the algorithm is not restricted computationally. For three machines we show that an efficient greedy algorithm achieves this bound. For two machines we present a more complex algorithm that achieves a better competitive ratio when the number of jobs is known in advance to the algorithm.
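A toy version of the online setting with unit jobs (a hedged sketch of a generic greedy rule, not the paper's two- and three-machine algorithms): in each time step, run up to m pairwise non-conflicting waiting jobs together.

```python
# Greedy scheduling with conflicts (swc), unit jobs: each step picks up to m
# waiting jobs that form an independent set in the conflict graph and runs
# them concurrently; the makespan is the number of steps used.

def greedy_swc(num_jobs, conflicts, m):
    """conflicts: set of frozenset pairs of conflicting jobs. Returns makespan."""
    waiting = list(range(num_jobs))
    time = 0
    while waiting:
        batch = []
        for j in waiting:
            if len(batch) == m:
                break
            if all(frozenset((j, k)) not in conflicts for k in batch):
                batch.append(j)          # j conflicts with no job in the batch
        waiting = [j for j in waiting if j not in batch]
        time += 1
    return time
```

When the conflict graph is a clique, every batch has size one and the greedy makespan degrades to the number of jobs, which is the kind of instance that drives the lower bounds.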
14.
Tomasz Jurdziński, Friedrich Otto, František Mráz, Martin Plátek, Theory of Computing Systems 42(4):488–518, 2008
The R-automaton is the weakest form of the nondeterministic version of the restarting automaton that was introduced by Jančar et al. to model the so-called analysis by reduction. Here it is shown that the class ℒ(R) of languages that are accepted by R-automata is incomparable under set inclusion to the class of Church-Rosser languages and to the class of growing context-sensitive languages. In fact, this already holds for the class of languages that are accepted by 2-monotone R-automata. In addition, we prove that the latter class contains NP-complete languages, showing that already the 2-monotone R-automaton has a surprisingly large expressive power.
The results of this paper have been announced at DLT 2004 in Auckland, New Zealand.
This work was mainly carried out while T. Jurdziński was visiting the University of Kassel, supported by a grant from the
Deutsche Forschungsgemeinschaft (DFG).
F. Mráz and M. Plátek were partially supported by the Grant Agency of the Czech Republic under Grant-No. 201/04/2102 and by
the program ‘Information Society’ under project 1ET100300517. F. Mráz was also supported by the Grant Agency of Charles University
in Prague under Grant-No. 358/2006/A-INF/MFF.
15.
On the Competitive Ratio for Online Facility Location
Dimitris Fotakis, Algorithmica 50(1):1–57, 2008
We consider the problem of Online Facility Location, where the demand points arrive online and must be assigned irrevocably to an open facility upon arrival. The objective is to minimize the sum of facility and assignment costs. We prove that the competitive ratio for Online Facility Location is Θ(log n/log log n). On the negative side, we show that no randomized algorithm can achieve a competitive ratio better than Ω(log n/log log n) against an oblivious adversary, even if the demands lie on a line segment. On the positive side, we present a deterministic algorithm which achieves a competitive ratio of O(log n/log log n) in every metric space.
A preliminary version of this work appeared in the Proceedings of the 30th International Colloquium on Automata, Languages
and Programming (ICALP 2003), Lecture Notes in Computer Science 2719. This work was done while the author was at the Max-Planck-Institut
für Informatik, Saarbrücken, Germany, and was partially supported by the Future and Emerging Technologies programme of the
EU under contract number IST-1999-14186 (ALCOM–FT).
16.
A traveling salesman game is a cooperative game (N, c_D). Here N, the set of players, is the set of cities (or the vertices of the complete graph) and c_D is the characteristic function, where D is the underlying cost matrix. For all S⊆N, define c_D(S) to be the cost of a minimum cost Hamiltonian tour through the vertices of S∪{0}, where 0 is called the home city. Define Core(c_D) as the core of the traveling salesman game (N, c_D). Okamoto (Discrete Appl. Math. 138:349–369, 2004) conjectured that for the traveling salesman game (N, c_D) with D satisfying the triangle inequality, the problem of testing whether Core(c_D) is empty or not is NP-hard. We prove that this conjecture is true. This result directly implies NP-hardness for the general case when D is asymmetric. We also study approximately fair cost allocations for these games. For this, we introduce the cycle cover games and show that the core of a cycle cover game is non-empty by finding a fair cost allocation vector in polynomial time. For a traveling salesman game, an ε-approximate core, for a given ε>1, consists of the cost allocation vectors x that recover the total cost and satisfy x(S)≤ε⋅c_D(S) for all S⊆N. By viewing an approximate fair cost allocation vector for this game as a sum of exact fair cost allocation vectors of several related cycle cover games, we provide a polynomial-time algorithm demonstrating the non-emptiness of the log2(|N|−1)-approximate core by exhibiting a vector in this approximate core for the asymmetric traveling salesman game. We improve this further by finding, in polynomial time, an approximate core with a smaller factor, for some constant c. We also show that there exists an ε0>1 such that it is NP-hard to decide whether the ε0-Core is empty or not.
A preliminary version of the paper appeared in the third Workshop on Approximation and Online Algorithms (WAOA), 2005.
17.
We consider the management of FIFO buffers for network switches providing differentiated services. In each time step, an arbitrary number of packets arrive, and only one packet can be sent. The buffer can store a limited number of packets and, due to the FIFO property, the sequence of sent packets has to be a subsequence of the arriving packets. The differentiated service model is abstracted by attributing each packet with a value according to its service level. A buffer management strategy can drop packets, and the goal is to maximize the sum of the values of sent packets.
For only two different packet values, we introduce the account strategy and prove that this strategy achieves an optimal competitive ratio if the buffer size tends to infinity, as well as an optimal competitive ratio for arbitrary buffer sizes. For general packet values, the simple preemptive greedy strategy (PG) is studied. We show that PG achieves a competitive ratio that is the best known upper bound for this problem. In addition, we give a lower bound on the competitive ratio of PG which improves the previously known lower bound. As a consequence, the competitive ratio of PG cannot be improved significantly.
Supported by the DFG grant WE 2842/1. A preliminary version of this paper appeared in Proceedings of the 14th Annual European
Symposium on Algorithms (ESA), 2006.
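A simplified stand-in for preemptive greedy buffering (illustrative only; the paper's PG strategy and its analysis are more refined): accept an arriving packet if there is room, otherwise evict the cheapest buffered packet when the arrival is strictly more valuable, and always transmit from the head of the FIFO queue.

```python
# Toy preemptive greedy FIFO buffer: maximize the total value of sent packets
# under bounded buffer capacity, one transmission per 'send' event.
from collections import deque

def fifo_buffer(events, capacity):
    """events: list of ('arrive', value) or ('send',). Returns value sent."""
    buf = deque()
    sent = 0
    for ev in events:
        if ev[0] == 'arrive':
            v = ev[1]
            if len(buf) < capacity:
                buf.append(v)
            else:
                cheapest = min(buf)
                if v > cheapest:          # preempt the least valuable packet
                    buf.remove(cheapest)  # FIFO order of the rest is preserved
                    buf.append(v)
        else:                             # 'send': transmit the head packet
            if buf:
                sent += buf.popleft()
    return sent
```

On a burst of low-value packets followed by a high-value one, the preemption step is what lets the strategy recover most of the achievable value despite the FIFO constraint.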
18.
Radio networks model wireless data communication when the bandwidth is limited to one wave frequency. The key restriction of such networks is mutual interference of packets arriving simultaneously at a node. The many-to-many (m2m) communication primitive involves p participant nodes from among n nodes in the network, where the distance between any pair of participants is at most d. The task is to have all the participants get to know all the input messages. We consider three cases of the m2m communication problem. In the ad-hoc case, each participant knows only its name and the values of n, p and d. In the partially centralized case, each participant knows the topology of the network and the values of p and d, but does not know the names of the other participants. In the centralized case, each participant knows the topology of the network and the names of all the participants. For the centralized m2m problem, we give deterministic protocols, for both undirected and directed networks, working in provably optimal time. For the partially centralized m2m problem, we give a randomized protocol for undirected networks whose running-time bound holds with high probability (whp), and we show a lower bound on the time required by any deterministic protocol. For the ad-hoc m2m problem, we develop a randomized protocol for undirected networks whose running-time bound holds whp. We show two lower bounds for the ad-hoc m2m problem: one on the expected time of any randomized protocol, and one stating that for any deterministic protocol there is a network on which the protocol requires a certain amount of time when n−p(n)=Ω(n) and d>1, and Ω(n) time when n−p(n)=o(n).
The results of this paper appeared in a preliminary form in “On many-to-many communication in packet radio networks” in Proceedings
of the 10th Conference on Principles of Distributed Systems (OPODIS), Bordeaux, France, 2006, Lecture Notes in Computer Science
4305, Springer, Heidelberg, pp. 258–272.
The work of B.S. Chlebus was supported by NSF Grant 0310503.
19.
Daniel P. Friedman, Abdulaziz Ghuloum, Jeremy G. Siek, Onnie Lynn Winebarger, Higher-Order and Symbolic Computation 20(3):271–293, 2007
Krivine presents an abstract machine that produces weak head normal form results. Sestoft introduces several call-by-need variants of the Krivine machine that implement result sharing by pushing update markers on the stack, in a way similar to the TIM and the STG machine. When a sequence of consecutive markers appears on the stack, all but the first cause redundant updates. Improvements related to these sequences have dealt with either the consumption of the markers or the removal of the markers once they appear. Here we present an improvement that eliminates the production of marker sequences of length greater than one, resulting in a more space- and time-efficient variant of the machine. We then apply the classic optimization of short-circuiting operand variable dereferences to create a further call-by-need machine. Finally, we combine the two improvements in a single machine. On our benchmarks this machine uses half the stack space, performs one quarter as many updates, and executes between 27% faster and 17% slower than our ℒ variant of Sestoft’s lazy Krivine machine. More interestingly, on one benchmark the other machines consume unbounded space, but the combined machine consumes constant space. Our comparisons to Sestoft’s Mark 2 machine are not exact, however, since we restrict ourselves to unpreprocessed closed lambda terms. Our variant of his machine does no environment trimming or conversion to de Bruijn-style variable access, and does not provide basic constants, data type constructors, or the recursive let. (The Y combinator is used instead.)
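As background, the call-by-name core that all of these machines refine can be written in a few lines; this is the plain Krivine machine on de Bruijn-indexed terms (a sketch, not any of the paper's call-by-need variants):

```python
# Minimal call-by-name Krivine machine. States are (term, environment, stack);
# the machine reduces closed lambda terms to weak head normal form.
# Terms: ('var', i) with de Bruijn index i, ('lam', body), ('app', f, a).
# Environments are lists of closures (term, environment).

def krivine(term):
    env, stack = [], []
    while True:
        tag = term[0]
        if tag == 'app':           # push the argument closure, enter the head
            stack.append((term[2], env))
            term = term[1]
        elif tag == 'lam':
            if not stack:          # weak head normal form reached
                return term, env
            env = [stack.pop()] + env   # bind the top closure to index 0
            term = term[1]
        else:                      # variable: enter the closure it denotes
            term, env = env[term[1]]
```

Call-by-need variants extend exactly this loop with heap-allocated thunks and the update markers discussed above.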
20.
Joel Ratsaby, Annals of Mathematics and Artificial Intelligence 52(1):55–65, 2008
Consider a class of binary functions h: X→{−1,+1} on an interval X. Define the sample width of h on a finite subset (a sample) S⊂X as ω_S(h) = min_{x∈S} |ω_h(x)|, where ω_h(x) = h(x)·max{a≥0 : h(z)=h(x) for all z with x−a ≤ z ≤ x+a}. Consider the space of all samples in X of cardinality ℓ and the corresponding sets of wide samples, i.e., hypersets of samples on which the sample width is at least β, for β>0. Through an application of the Sauer-Shelah result on the density of sets, an upper estimate is obtained on the growth function (or trace) of this class of hypersets, i.e., on the number of possible dichotomies obtained by intersecting all hypersets with a fixed collection of samples of cardinality m.