2.
A traveling salesman game is a cooperative game (N, c_D). Here N, the set of players, is the set of cities (the vertices of the complete graph), and c_D is the characteristic function, where D is the underlying cost matrix. For all S ⊆ N, define c_D(S) to be the cost of a minimum-cost Hamiltonian tour through the vertices of S ∪ {0}, where 0 is called the home city. Define Core(D) as the core of the traveling salesman game. Okamoto (Discrete Appl. Math. 138:349–369, 2004) conjectured that for the traveling salesman game with D satisfying the triangle inequality, the problem of testing whether Core(D) is empty or not is NP-hard. We prove that this conjecture is true. This result directly implies NP-hardness for the general case, when D is asymmetric. We also study approximately fair cost allocations for these games. For this, we introduce cycle cover games and show that the core of a cycle cover game is non-empty by finding a fair cost allocation vector in polynomial time.
For a traveling salesman game, let ε-Core(D) = {x ∈ ℝ^N : x(N) ≥ c_D(N) and ∀ S ⊆ N, x(S) ≤ ε·c_D(S)} be an ε-approximate core, for a given ε > 1. By viewing an approximate fair cost allocation vector for this game as a sum of exact fair cost allocation vectors of several related cycle cover games, we provide a polynomial-time algorithm demonstrating the non-emptiness of the log₂(|N|−1)-approximate core for the asymmetric traveling salesman game, by exhibiting a vector in this approximate core. We improve this further by finding, for some constant c, an approximate core with a better factor in polynomial time. We also show that there exists an ε₀ > 1 such that it is NP-hard to decide whether the ε₀-Core is empty or not.
A preliminary version of the paper appeared in the Third Workshop on Approximation and Online Algorithms (WAOA), 2005.
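As a toy illustration of these definitions (our own example, not from the paper), the characteristic function c_D and the core conditions can be checked by brute force. The Euclidean instance and the candidate allocation below are assumptions made for the sketch:

```python
import itertools
import math

def tour_cost(points, order):
    """Cost of the Hamiltonian tour 0 -> order[0] -> ... -> order[-1] -> 0."""
    seq = [0, *order, 0]
    return sum(math.dist(points[a], points[b]) for a, b in zip(seq, seq[1:]))

def c_D(points, S):
    """Characteristic function: min-cost Hamiltonian tour through S ∪ {0}."""
    return min(tour_cost(points, p) for p in itertools.permutations(S))

def in_core(points, x, players, tol=1e-9):
    """Core test: x(N) = c_D(N) and x(S) <= c_D(S) for every nonempty S ⊆ N."""
    if abs(sum(x[i] for i in players) - c_D(points, players)) > tol:
        return False
    for r in range(1, len(players) + 1):
        for S in itertools.combinations(players, r):
            if sum(x[i] for i in S) > c_D(points, S) + tol:
                return False
    return True

# Home city 0 plus players 1..3 at unit-square corners (assumed toy instance).
pts = {0: (0, 0), 1: (1, 0), 2: (0, 1), 3: (1, 1)}
players = (1, 2, 3)
x = {1: 1.2, 2: 1.2, 3: 1.6}     # candidate fair cost allocation
print(in_core(pts, x, players))  # this allocation lies in the core
```

Brute force is exponential in |N|, of course; the point of the sketch is only to make the coalition constraints concrete.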
4.
We present an algorithm that takes O(sort(N)) I/Os (sort(N) = Θ((N/(DB)) log_{M/B}(N/B)) is the number of I/Os it takes to sort N data items) to compute a tree decomposition of width at most k, for any graph G of treewidth at most k and size N, where k is a constant. Given such a tree decomposition, we use a dynamic programming framework to solve a wide variety of problems on G in O(sort(N)) I/Os, including the single-source shortest path problem and a number of problems that are NP-hard on general graphs. The tree decomposition can also be used to obtain an optimal separator decomposition of G. We use such a decomposition to perform depth-first search in G in O(sort(N)) I/Os.
As important tools used in the tree decomposition algorithm, we introduce flippable DAGs and present an algorithm that computes a perfect elimination ordering of a k-tree in O(sort(N)) I/Os.
The second contribution of our paper, which is of independent interest, is a general and simple framework for obtaining I/O-efficient algorithms for a number of graph problems that can be solved using greedy algorithms in internal memory. We apply this framework to obtain an improved algorithm for finding a maximal matching and the first deterministic I/O-efficient algorithm for finding a maximal independent set of an arbitrary graph. Both algorithms take O(sort(|V| + |E|)) I/Os. The maximal matching algorithm is used in the tree decomposition algorithm.
An abstract of this paper was presented at the 12th Annual ACM-SIAM Symposium on Discrete Algorithms, Proceedings, pp. 89–90,
2001.
Research of A. Maheshwari supported by NSERC.
Part of this work was done while the second author was a Ph.D. student at the School of Computer Science of Carleton University.
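The internal-memory greedy routine that underlies the maximal matching result can be sketched as follows (a plain in-memory sketch of the classical greedy pass, not the paper's I/O-efficient version):

```python
def maximal_matching(edges):
    """Greedy maximal matching: scan the edges once and keep an edge iff
    neither endpoint is matched yet. The result is maximal (no remaining
    edge can be added) though not necessarily of maximum cardinality."""
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# Path graph 1-2-3-4-5: greedy picks (1,2) and (3,4); edge (4,5) is blocked.
print(maximal_matching([(1, 2), (2, 3), (3, 4), (4, 5)]))  # [(1, 2), (3, 4)]
```

The framework's contribution is making exactly this kind of sequential, conflict-driven scan work when the edge list does not fit in memory.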
5.
It is proved that "FIFO" worksharing protocols provide asymptotically optimal solutions to two problems related to sharing large collections of independent tasks in a heterogeneous network of workstations (HNOW). In the first problem, one seeks to accomplish as much work as possible on the HNOW during a prespecified fixed period of L time units. In the second problem, one seeks to complete W units of work by "renting" the HNOW for as short a time as necessary. The worksharing protocols we study are crafted within an architectural model that characterizes the HNOW via parameters that measure its workstations' computational and communication powers. All valid protocols are self-scheduling, in the sense that they completely determine both an amount of work to allocate to each of the HNOW's workstations and a schedule for all related interworkstation communications. The schedules provide either a value for W given L, or a value for L given W, and hence solve both of the motivating problems. A protocol observes a FIFO regimen if it has the HNOW's workstations finish their assigned work, and return their results, in the same order in which they are supplied with their workloads. The proven optimality of FIFO protocols resides in the fact that they accomplish at least as much work as any other protocol during all sufficiently long worksharing episodes, and that they complete sufficiently large given collections of tasks at least as fast as any other protocol. Simulation experiments illustrate that the superiority of FIFO protocols is often observed during worksharing episodes of only a few minutes' duration.
A portion of this research was presented at the 15th ACM Symposium on Parallelism in Algorithms and Architectures (2003).
7.
We consider the problem of approximately integrating a Lipschitz function f (with a known Lipschitz constant) over an interval. The goal is to achieve an additive error of at most ε using as few samples of f as possible. We use the adaptive framework: on all problem instances, an adaptive algorithm should perform almost as well as the best possible algorithm tuned for the particular problem instance. We distinguish between DOPT(f,ε) and ROPT(f,ε), the performances of the best possible deterministic and randomized algorithms, respectively. We give a deterministic algorithm whose sample complexity is close to DOPT(f,ε) and show that an asymptotically better deterministic algorithm is impossible. However, any deterministic algorithm requires asymptotically more samples than ROPT(f,ε) on some problem instance. By combining a deterministic adaptive algorithm and Monte Carlo sampling with variance reduction, we give a randomized algorithm whose expected sample complexity is matched by a lower bound holding for every algorithm on some problem instance (f,ε), which proves that our algorithm is optimal.
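A minimal deterministic adaptive scheme of the kind discussed can be sketched as follows, using the fact that the midpoint estimate over a piece of width w errs by at most Lw²/4 for an L-Lipschitz f. This is our illustrative sketch, not the paper's algorithm:

```python
import heapq
import math

def integrate_lipschitz(f, a, b, lipschitz, eps):
    """Adaptive deterministic integration: midpoint rule per piece.
    For a piece of width w, |integral - f(mid)*w| <= lipschitz * w**2 / 4,
    so we repeatedly split the piece with the largest error bound until
    the summed bounds drop below eps."""
    def piece(lo, hi):
        w = hi - lo
        return (-lipschitz * w * w / 4, lo, hi)   # max-heap via negation
    heap = [piece(a, b)]
    total = lipschitz * (b - a) ** 2 / 4
    while total > eps:
        neg, lo, hi = heapq.heappop(heap)
        total += neg                              # drop the old bound
        mid = (lo + hi) / 2
        for p in (piece(lo, mid), piece(mid, hi)):
            total -= p[0]                         # add each new bound
            heapq.heappush(heap, p)
    return sum((hi - lo) * f((lo + hi) / 2) for _, lo, hi in heap)

est = integrate_lipschitz(math.sin, 0.0, math.pi, 1.0, 1e-2)
print(abs(est - 2.0) < 1e-2)  # ∫ sin over [0, π] = 2, within the budget
```

Adaptivity shows up in that flat regions of f are never refined beyond what their error bound requires.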
8.
Dmitry Tsarkov, Ian Horrocks, Peter F. Patel-Schneider 《Journal of Automated Reasoning》2007,39(3):277-316
9.
Chvátal-Gomory cuts are among the most well-known classes of cutting planes for general integer linear programs (ILPs). In case the constraint multipliers are either 0 or 1/2, such cuts are known as {0,1/2}-cuts. It has been proven by Caprara and Fischetti (Math. Program. 74:221–235, 1996) that separation of {0,1/2}-cuts is NP-hard.
In this paper, we study ways to separate {0,1/2}-cuts effectively in practice. We propose a range of preprocessing rules to reduce the size of the separation problem. The core of the preprocessing is a Gaussian-elimination-like procedure. To separate the most violated {0,1/2}-cut, we formulate the (reduced) problem as an integer linear program. Some simple heuristic separation routines complete the algorithmic framework.
Computational experiments on benchmark instances show that the combination of preprocessing with exact and/or heuristic separation is an effective way to generate strong generic cutting planes for integer linear programs and to reduce the overall computation times of state-of-the-art ILP solvers.
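Since {0,1/2}-cut separation works with the constraint system modulo 2, the Gaussian-elimination-like preprocessing step can be illustrated by generic elimination over GF(2). This is a sketch under our own bitmask encoding, not the paper's implementation:

```python
def gf2_eliminate(rows):
    """Gaussian elimination over GF(2). Each row is an int bitmask
    (bit j = coefficient of column j, taken mod 2). Returns a list of
    independent rows with pairwise distinct leading bits; its length
    is the GF(2) rank of the input."""
    basis = []
    for row in rows:
        for b in basis:
            row = min(row, row ^ b)   # clear b's leading bit if present
        if row:                       # row is independent of the basis
            basis.append(row)
            basis.sort(reverse=True)
    return basis

# Rows 1011, 0110, 0001 over four columns are independent mod 2.
print(len(gf2_eliminate([0b1011, 0b0110, 0b0001])))  # 3
# 1101 = 1011 XOR 0110, so adding it does not raise the rank.
print(len(gf2_eliminate([0b1011, 0b0110, 0b1101])))  # 2
```

Detecting such mod-2 dependencies is exactly what lets a preprocessing pass shrink the separation problem before the exact ILP step.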
10.
The authors' ATR programming formalism is a version of call-by-value PCF under a complexity-theoretically motivated type system. ATR programs run in type-2 polynomial time, and all standard type-2 basic feasible functionals are ATR-definable (ATR types are confined to levels 0, 1, and 2). A limitation of the original version of ATR is that the only directly expressible recursions are tail-recursions. Here we extend ATR so that a broad range of affine recursions are directly expressible. In particular, the revised ATR can fairly naturally express the classic insertion- and selection-sort algorithms, thus overcoming a sticking point of most prior implicit-complexity-based formalisms. The paper's main work is in refining the original time-complexity semantics for ATR to show that these new recursion schemes do not lead out of the realm of feasibility.
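The sorting algorithms mentioned are naturally written as structural (non-tail) recursions; here is insertion sort in that recursive style, in plain Python only to illustrate the recursion pattern, not the formalism's type system:

```python
def insert(x, sorted_list):
    """Insert x into an already sorted list, one recursive step per element."""
    if not sorted_list or x <= sorted_list[0]:
        return [x] + sorted_list
    return [sorted_list[0]] + insert(x, sorted_list[1:])

def insertion_sort(xs):
    """Insertion sort as structural recursion over the input list: the
    recursive call is not in tail position, which is exactly the kind of
    recursion the original tail-recursive-only formalism could not express."""
    if not xs:
        return []
    return insert(xs[0], insertion_sort(xs[1:]))

print(insertion_sort([3, 1, 2]))  # [1, 2, 3]
```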
11.
We present nonconforming, rectangular mixed finite element methods based on the Hellinger-Reissner variational principle in both two and three dimensions and show stability and convergence. An optimal-order error estimate is obtained for the displacement, along with a suboptimal-order error estimate for the stress, in both dimensions.
12.
Daniel P. Friedman, Abdulaziz Ghuloum, Jeremy G. Siek, Onnie Lynn Winebarger 《Higher-Order and Symbolic Computation》2007,20(3):271-293
Krivine presents a machine that produces weak head normal form results. Sestoft introduces several call-by-need variants of the Krivine machine that implement result sharing by pushing update markers on the stack, in a way similar to the TIM and the STG machine. When a sequence of consecutive markers appears on the stack, all but the first cause redundant updates. Improvements related to these sequences have dealt with either the consumption of the markers or the removal of the markers once they appear. Here we present an improvement that eliminates the production of marker sequences of length greater than one, resulting in a more space- and time-efficient variant of the lazy Krivine machine.
We then apply the classic optimization of short-circuiting operand variable dereferences to create a further call-by-need machine, and finally combine the two improvements in a single machine. On our benchmarks this combined machine uses half the stack space, performs one quarter as many updates, and executes between 27% faster and 17% slower than our ℒ variant of Sestoft's lazy Krivine machine. More interesting is that on one benchmark ℒ and both singly-improved machines consume unbounded space, while the combined machine consumes constant space. Our comparisons to Sestoft's Mark 2 machine are not exact, however, since we restrict ourselves to unpreprocessed closed lambda terms. Our variant of his machine does no environment trimming and no conversion to de Bruijn-style variable access, and does not provide basic constants, data type constructors, or the recursive let. (The Y combinator is used instead.)
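A minimal call-by-name Krivine machine, without the call-by-need update markers discussed above, can be sketched as follows (our illustration; the tuple term encoding is an assumption):

```python
# Terms: ('var', name) | ('lam', name, body) | ('app', fun, arg)
def krivine(term, env=None, stack=None):
    """Minimal call-by-name Krivine machine for closed lambda terms
    (no sharing, hence no update markers). Reduces to weak head normal
    form: it stops at a lambda once the argument stack is empty."""
    env, stack = env or {}, stack or []
    while True:
        tag = term[0]
        if tag == 'app':                  # push argument closure, enter function
            stack.append((term[2], env))
            term = term[1]
        elif tag == 'lam':
            if not stack:                 # weak head normal form reached
                return term, env
            arg = stack.pop()             # bind top closure to the parameter
            env = {**env, term[1]: arg}
            term = term[2]
        else:                             # variable: enter its closure
            term, env = env[term[1]]

I = ('lam', 'z', ('var', 'z'))
K = ('lam', 'x', ('lam', 'y', ('var', 'x')))
# (K I) (λw.w w): the discarded argument would loop if it were ever forced.
omega_arg = ('lam', 'w', ('app', ('var', 'w'), ('var', 'w')))
whnf, _ = krivine(('app', ('app', K, I), omega_arg))
print(whnf)  # ('lam', 'z', ('var', 'z'))
```

Call-by-need machines refine exactly this loop: variable entry pushes an update marker so the result of the closure can be shared.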
13.
Remco Duits, Michael Felsberg, Gösta Granlund, Bart ter Haar Romeny 《International Journal of Computer Vision》2007,72(1):79-102
Inspired by the early visual system of many mammals, we consider the construction of, and reconstruction from, an orientation score as a local orientation representation of an image. The mapping from image to orientation score is a wavelet transform corresponding to a reducible representation of the Euclidean motion group and an oriented wavelet. This wavelet transform is a special case of a recently developed generalization of standard wavelet theory, and it has the practical advantage over the usual wavelet approaches in image analysis (constructed from irreducible representations of the similitude group) that it allows a stable reconstruction from one (single-scale) orientation score. Since our wavelet transform is a unitary mapping with stable inverse, we can directly relate operations on orientation scores to operations on images in a robust manner.
Furthermore, by geometrical examination of the Euclidean motion group G, which is the domain of our orientation scores, we deduce that an operator Φ on orientation scores must be left invariant to ensure that the corresponding operator on images is Euclidean invariant. As an example we consider all linear second-order left-invariant evolutions on orientation scores corresponding to stochastic processes on G. As an application we detect elongated structures in (medical) images and automatically close the gaps between them.
Finally, we consider robust orientation estimates by means of channel representations, where we combine robust orientation estimation and learning of wavelets, resulting in an auto-associative processing of orientation features. Here, linear averaging of the channel representation is equivalent to robust orientation estimation, and an adaptation of the wavelet to the statistics of the considered image class leads to an auto-associative behavior of the system.
The Netherlands Organization for Scientific Research is gratefully acknowledged for financial support. This work has been
supported by EC Grant IST-2003-004176 COSPAL.
14.
Anne Berry, Jean-Paul Bordat, Alain Sigayret 《Annals of Mathematics and Artificial Intelligence》2007,49(1-4):117-136
Generating the concepts defined by a binary relation between a set of properties and a set of objects is one of the important current problems encountered in Data Mining and Knowledge Discovery in Databases. We present a new algorithmic process which computes all the concepts, without requiring an exponential-size data structure, and with a good worst-case time complexity analysis, which makes it competitive with the best existing algorithms for this problem. Our algorithm can also be used to compute the edges of the lattice at no extra cost.
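For a small context, the concepts themselves can be enumerated by brute force over attribute subsets, which makes the object-property closure concrete, though, unlike the paper's algorithm, this sketch is exponential (the toy context below is our own):

```python
from itertools import combinations

def concepts(objects, attributes, incidence):
    """Enumerate all formal concepts (extent, intent) of a binary relation
    by brute force: close every attribute subset and deduplicate."""
    def extent(B):
        return frozenset(o for o in objects
                         if all((o, a) in incidence for a in B))
    def intent(O):
        return frozenset(a for a in attributes
                         if all((o, a) in incidence for o in O))
    found = set()
    for r in range(len(attributes) + 1):
        for B in combinations(attributes, r):
            E = extent(frozenset(B))
            found.add((E, intent(E)))   # closure collapses duplicates
    return found

# Toy context: objects 1..3, attributes a..c (incidence chosen arbitrarily).
inc = {(1, 'a'), (1, 'b'), (2, 'b'), (2, 'c'), (3, 'c')}
cs = concepts({1, 2, 3}, {'a', 'b', 'c'}, inc)
for E, I in sorted(cs, key=lambda c: (len(c[0]), sorted(c[1]))):
    print(sorted(E), sorted(I))
```

The point of the paper's algorithm is precisely to avoid this exponential sweep while still producing every concept (and, optionally, the lattice edges).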
16.
We consider the problems of enumerating all minimal strongly connected subgraphs and all minimal dicuts of a given strongly connected directed graph G = (V,E). We show that the first of these problems can be solved in incremental polynomial time, while the second problem is NP-hard: given a collection of minimal dicuts for G, it is NP-hard to tell whether it can be extended. The latter result implies, in particular, that for a given set of points, it is NP-hard to generate all maximal subsets of the set contained in a closed half-space through the origin. We also discuss the enumeration of all minimal subsets of the point set whose convex hull contains the origin as an interior point, and show that this problem includes the well-known hypergraph transversal problem as a special case.
This research was supported by the National Science Foundation (Grant IIS-0118635). The third and fourth authors are also
grateful for the partial support by DIMACS, the National Science Foundation’s Center for Discrete Mathematics and Theoretical
Computer Science.
Our friend and co-author, Leonid Khachiyan, tragically passed away on April 29, 2005.
17.
Relations between states and maps, which are known for quantum systems in finite-dimensional Hilbert spaces, are formulated rigorously in geometrical terms with no use of coordinate (matrix) interpretation. In a tensor product realization they are represented simply by a permutation of factors. This leads to natural generalizations for infinite-dimensional Hilbert spaces and a simple proof of a generalized Choi theorem. The natural framework is based on spaces of Hilbert-Schmidt operators and the corresponding tensor products of Hilbert spaces. It is proved that the corresponding isomorphisms cannot be naturally extended to compact (or bounded) operators, nor reduced to the trace-class operators. On the other hand, it is proven that there is a natural continuous map from the trace-class operators (with the nuclear norm) into compact operators, mapping the space of all bounded operators into trace-class operators (with the operator norm). Also in the infinite-dimensional context, the Schmidt measure of entanglement and multipartite generalizations of the state-map relations are considered in the paper.
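For the finite-dimensional situation the abstract starts from, the state-map correspondence and Choi's theorem can be written as follows (standard textbook formulation; the notation is our assumption, not the paper's):

```latex
% Choi-Jamiolkowski correspondence, finite-dimensional sketch.
% A linear map \Phi : B(H_A) \to B(H_B), with \dim H_A = d, corresponds to
% its Choi operator on H_A \otimes H_B:
J(\Phi) \;=\; (\mathrm{id}\otimes\Phi)\bigl(|\Omega\rangle\langle\Omega|\bigr),
\qquad
|\Omega\rangle \;=\; \sum_{i=1}^{d} |i\rangle\otimes|i\rangle .
% Choi's theorem: \Phi is completely positive  \iff  J(\Phi) \ge 0.
```

The paper's contribution is making this correspondence coordinate-free and extending it, with the stated limitations, to infinite-dimensional Hilbert spaces.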
18.
Jérémy Barbay, Francisco Claude, Travis Gagie, Gonzalo Navarro, Yakov Nekrich 《Algorithmica》2014,69(1):232-268
We present a data structure that stores a sequence s[1..n] over alphabet [1..σ] in nH₀(s) + o(n)(H₀(s)+1) bits, where H₀(s) is the zero-order entropy of s. This structure supports the queries access, rank and select, which are fundamental building blocks for many other compressed data structures, in worst-case time O(lg lg σ) and average time O(lg H₀(s)). The worst-case complexity matches the best previous results, yet these had been achieved with data structures using nH₀(s) + o(n lg σ) bits. On highly compressible sequences the o(n lg σ) bits of the redundancy may be significant compared to the nH₀(s) bits that encode the data. Our representation, instead, compresses the redundancy as well. Moreover, our average-case complexity is unprecedented. Our technique is based on partitioning the alphabet into characters of similar frequency. The subsequence corresponding to each group can then be encoded using fast uncompressed representations without harming the overall compression ratios, even in the redundancy. The result also improves upon the best current compressed representations of several other data structures. For example, we achieve (i) compressed redundancy, retaining the best time complexities, for the smallest existing full-text self-indexes; (ii) compressed permutations π with times for π() and π⁻¹() improved to loglogarithmic; and (iii) the first compressed representation of dynamic collections of disjoint sets. We also point out various applications to inverted indexes, suffix arrays, binary relations, and data compressors. Our structure is practical on large alphabets. Our experiments show that, as predicted by theory, it dominates the space/time tradeoff map of all the sequence representations, both in synthetic and application scenarios.
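The zero-order entropy H₀(s) appearing in the space bound is computed directly from symbol frequencies; a minimal sketch (our illustration):

```python
import math
from collections import Counter

def H0(s):
    """Zero-order (empirical) entropy of a sequence: the sum over symbols c
    of (n_c / n) * log2(n / n_c), where n_c is c's frequency. n * H0(s)
    lower-bounds the bits used by any code that assigns one fixed codeword
    per symbol of s."""
    n = len(s)
    return sum((m / n) * math.log2(n / m) for m in Counter(s).values())

print(H0("abab"))  # two equally frequent symbols -> 1.0 bit per symbol
print(H0("aaaa"))  # a single symbol -> 0.0 bits
```

The alphabet-partitioning idea groups symbols whose n_c/n are similar, so each group can use a plain (uncompressed) representation without spoiling the nH₀(s)-bit budget.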
19.
Tomasz Jurdziński, Friedrich Otto, František Mráz, Martin Plátek 《Theory of Computing Systems》2008,42(4):488-518
The R-automaton is the weakest form of the nondeterministic version of the restarting automaton, which was introduced by Jančar et al. to model the so-called analysis by reduction. Here it is shown that the class ℒ(R) of languages that are accepted by R-automata is incomparable under set inclusion to the class of Church-Rosser languages and to the class of growing context-sensitive languages. In fact, this already holds for the class of languages that are accepted by 2-monotone R-automata. In addition, we prove that the latter class already contains NP-complete languages, showing that the 2-monotone R-automaton has a surprisingly large expressive power.
The results of this paper have been announced at DLT 2004 in Auckland, New Zealand.
This work was mainly carried out while T. Jurdziński was visiting the University of Kassel, supported by a grant from the
Deutsche Forschungsgemeinschaft (DFG).
F. Mráz and M. Plátek were partially supported by the Grant Agency of the Czech Republic under Grant-No. 201/04/2102 and by
the program ‘Information Society’ under project 1ET100300517. F. Mráz was also supported by the Grant Agency of Charles University
in Prague under Grant-No. 358/2006/A-INF/MFF.
20.
Tomasz Jurdziński 《Theory of Computing Systems》2009,45(1):74-107
Hardness of separating nondeterminism, randomization, and determinism for polynomial-time computations has motivated the analysis of this issue for restricted models of computation. Following this line of research, we consider randomized length-reducing two-pushdown automata, a natural extension of pushdown automata. Our main results are as follows. We show that the deterministic automata of this kind are weaker than Las Vegas ones, which in turn are weaker than Monte Carlo ones. Moreover, bounded two-sided-error automata are stronger than Monte Carlo automata, and they are able to recognize some languages which cannot be recognized nondeterministically. Finally, we prove that amplification is impossible for Las Vegas and Monte Carlo automata.
Partially supported by MNiSW grant number N206 024 31/3826, 2006–2008. An extended abstract of this paper appeared in the MFCS 2006 Proceedings (Lecture Notes in Computer Science, vol. 4162, pp. 561–572, 2006).