Similar Articles
 20 similar articles found (search time: 31 ms).
1.
In this article, we formulate and study quantum analogues of randomized search heuristics, which make use of Grover search (in Proceedings of the 28th Annual ACM Symposium on Theory of Computing, pp. 212–219, ACM, New York, 1996) to accelerate the search for improved offspring. We then specialize this formulation to two specific search heuristics: Random Local Search and the (1+1) Evolutionary Algorithm. We call the resulting quantum versions of these search heuristics Quantum Local Search and the (1+1) Quantum Evolutionary Algorithm. We conduct a rigorous runtime analysis of these quantum search heuristics in the computation model of quantum algorithms, which, besides classical computation steps, also permits steps unique to quantum computing devices. To this end, we study the six elementary pseudo-Boolean optimization problems OneMax, LeadingOnes, Discrepancy, Needle, Jump, and TinyTrap. It turns out that the advantage of the respective quantum search heuristic over its classical counterpart varies with the problem structure and ranges from no speedup at all for Discrepancy to an exponential speedup for TinyTrap. We show that these runtime behaviors are closely linked to the probabilities of performing successful mutations in the classical algorithms.
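For orientation, the sketch below shows the classical (1+1) Evolutionary Algorithm on OneMax that serves as the baseline here; the quantum variant analyzed in the paper replaces the mutate-and-accept step with a Grover-search-based search for an improving offspring, which is not reproduced. All names and parameters are illustrative.

```python
import random

def one_max(x):
    """OneMax fitness: number of 1-bits in the bit string."""
    return sum(x)

def one_plus_one_ea(n, fitness, max_iters=100_000, seed=0):
    """Classical (1+1) EA: flip each bit independently with probability 1/n
    and keep the offspring if it is at least as fit as the parent."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    best = fitness(parent)
    for _ in range(max_iters):
        offspring = [b ^ (rng.random() < 1.0 / n) for b in parent]
        f = fitness(offspring)
        if f >= best:
            parent, best = offspring, f
        if best == n:  # optimum of OneMax reached
            break
    return parent, best

if __name__ == "__main__":
    _, value = one_plus_one_ea(50, one_max)
    print(value)  # typically 50 after enough iterations
```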

2.
The AtMostSeqCard constraint is the conjunction of a cardinality constraint on a sequence of n variables and of n − q + 1 constraints AtMost_u on each subsequence of size q. This constraint is useful in car-sequencing and crew-rostering problems. In van Hoeve et al. (Constraints 14(2):273–292, 2009), two algorithms designed for the AmongSeq constraint were adapted to this constraint with O(2^q n) and O(n^3) worst-case time complexity, respectively. In Maher et al. (2008), another algorithm similarly adaptable to filter the AtMostSeqCard constraint with a time complexity of O(n^2) was proposed. In this paper, we introduce an algorithm for achieving arc consistency on the AtMostSeqCard constraint with an O(n) (hence optimal) worst-case time complexity. Next, we show that this algorithm can be easily modified to achieve arc consistency on some extensions of this constraint. In particular, the conjunction of a set of m AtMostSeqCard constraints sharing the same scope can be filtered in O(nm). We then empirically study the efficiency of our propagator on instances of the car-sequencing and crew-rostering problems.
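To make the constraint concrete, here is a small checker for ground 0/1 assignments. It assumes, as is common in the car-sequencing setting, that the cardinality part fixes the total number of 1s to a demand d; it only tests feasibility and is not the O(n) arc-consistency propagator of the paper.

```python
def at_most_seq_card(x, q, u, d):
    """Check AtMostSeqCard on a fixed 0/1 assignment x:
    - the total number of 1s equals d (cardinality part, assumed here), and
    - every window of q consecutive variables contains at most u ones."""
    n = len(x)
    if sum(x) != d:
        return False
    window = sum(x[:q])
    if window > u:
        return False
    for i in range(q, n):            # slide the window one position at a time
        window += x[i] - x[i - q]
        if window > u:
            return False
    return True

# 10 slots, windows of length 4 with at most 2 ones, exactly 4 ones overall
print(at_most_seq_card([1, 0, 1, 0, 0, 1, 0, 0, 1, 0], q=4, u=2, d=4))  # True
```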

3.
We analyze the classic board game of Mastermind with n holes and a constant number of colors. The classic result of Chvátal (Combinatorica 3:325–329, 1983) states that the codebreaker can find the secret code with Θ(n/log n) questions. We show that this bound remains valid if the codebreaker may only store a constant number of guesses and answers. In addition to an intrinsic interest in this question, our result also disproves a conjecture of Droste, Jansen, and Wegener (Theory Comput. Syst. 39:525–544, 2006) on the memory-restricted black-box complexity of the OneMax function class.
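For readers unfamiliar with the game, the following sketch computes the standard Mastermind feedback (black and white pegs) for a guess against a secret code; the paper's point is that a codebreaker keeping only O(1) such guess/answer pairs still needs only Θ(n/log n) questions. The function is illustrative and not taken from the paper.

```python
from collections import Counter

def mastermind_feedback(secret, guess):
    """Standard Mastermind answer: 'black' counts positions where the guess is
    exactly right, 'white' counts remaining colour matches in wrong positions."""
    black = sum(s == g for s, g in zip(secret, guess))
    common = sum((Counter(secret) & Counter(guess)).values())  # colour multiset overlap
    return black, common - black

print(mastermind_feedback("ABAB", "AABB"))  # (2, 2)
```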

4.
In this paper, we consider multi-objective evolutionary algorithms for the Vertex Cover problem in the context of parameterized complexity. We consider two different measures for the problem. The first measure is a very natural multi-objective one for the use of evolutionary algorithms and takes into account the number of chosen vertices and the number of edges that remain uncovered. The second fitness function is based on a linear programming formulation and proves to give better results. We point out that both approaches lead to a kernelization for the Vertex Cover problem. Based on this, we show that evolutionary algorithms solve the Vertex Cover problem efficiently if the size of a minimum vertex cover is not too large, i.e., the expected runtime is bounded by O(f(OPT) · n^c), where c is a constant and f a function that only depends on OPT. This shows that evolutionary algorithms are randomized fixed-parameter tractable algorithms for the Vertex Cover problem.
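A minimal sketch of the first ("natural") fitness function and of Pareto dominance between objective vectors is given below; the LP-based fitness and the evolutionary loop itself are not reproduced, and all names are illustrative.

```python
def fitness(cover, edges):
    """The 'natural' two-objective fitness from the abstract (to be minimized):
    (number of chosen vertices, number of edges left uncovered)."""
    uncovered = sum(1 for u, v in edges if u not in cover and v not in cover)
    return (len(cover), uncovered)

def dominates(a, b):
    """Pareto dominance for minimization: a is at least as good in both
    objectives and strictly better in at least one."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

# tiny example: path 1-2-3; {2} covers everything with a single vertex
edges = [(1, 2), (2, 3)]
print(fitness({2}, edges))                                   # (1, 0)
print(dominates(fitness({2}, edges), fitness({1, 3}, edges)))  # True: (1,0) beats (2,0)
```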

5.
Vertex deletion and edge deletion problems play a central role in parameterized complexity. Examples include classical problems like Feedback Vertex Set, Odd Cycle Transversal, and Chordal Deletion. The study of analogous edge contraction problems has so far been left largely unexplored from a parameterized perspective. We consider two basic problems of this type: Tree Contraction and Path Contraction. These two problems take as input an undirected graph G on n vertices and an integer k, and the task is to determine whether we can obtain a tree or a path, respectively, by a sequence of at most k edge contractions in G. For Tree Contraction, we present a randomized 4^k · n^O(1) time polynomial-space algorithm, as well as a deterministic 4.98^k · n^O(1) time algorithm, based on a variant of the color coding technique of Alon, Yuster and Zwick. We also present a deterministic 2^(k+o(k)) + n^O(1) time algorithm for Path Contraction. Furthermore, we show that Path Contraction has a kernel with at most 5k+3 vertices, while Tree Contraction does not have a polynomial kernel unless NP ⊆ coNP/poly. We find the latter result surprising because of the connection between Tree Contraction and Feedback Vertex Set, which is known to have a kernel with 4k^2 vertices.

6.
We study the complexity of some algorithmic problems on directed hypergraphs and their strongly connected components (SCCs). The main contribution is an almost linear time algorithm computing the terminal strongly connected components (i.e., SCCs which do not reach any components but themselves). Almost linear here means that the complexity of the algorithm is linear in the size of the hypergraph up to a factor α(n), where α is the inverse of the Ackermann function and n is the number of vertices. Our motivation to study this problem arises from a recent application of directed hypergraphs to computational tropical geometry. We also discuss the problem of computing all SCCs. We establish a superlinear lower bound on the size of the transitive reduction of the reachability relation in directed hypergraphs, showing that it is combinatorially more complex than in directed graphs. Besides, we prove a linear time reduction from the well-studied problem of finding all minimal sets among a given family to the problem of computing the SCCs. Only subquadratic time algorithms are known for the former problem. These results strongly suggest that the problem of computing the SCCs is harder in directed hypergraphs than in directed graphs.
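As a plain-graph analogue (not the hypergraph algorithm of the paper), the sketch below computes terminal SCCs of an ordinary directed graph as the sinks of its condensation, assuming the third-party networkx library is available.

```python
import networkx as nx

def terminal_sccs(g: nx.DiGraph):
    """Terminal SCCs of an ordinary directed graph: strongly connected
    components with no edges leaving them, i.e. the sinks of the condensation.
    (The paper works on directed *hypergraphs*; this is only for intuition.)"""
    cond = nx.condensation(g)            # DAG of SCCs; each node carries 'members'
    return [cond.nodes[c]["members"]     # keep SCCs whose condensation node is a sink
            for c in cond.nodes
            if cond.out_degree(c) == 0]

g = nx.DiGraph([(1, 2), (2, 1), (2, 3), (3, 4), (4, 3), (5, 5)])
print(terminal_sccs(g))  # e.g. [{3, 4}, {5}]; {1, 2} reaches {3, 4}, so it is not terminal
```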

7.
We strengthen a previously known connection between the size complexity of two-way finite automata (2FAs) and the space complexity of Turing machines (TMs). Specifically, we prove that
  • every s-state 2NFA has a poly(s)-state 2DFA that agrees with it on all inputs of length ≤ s if and only if NL ⊆ L/poly, and
  • every s-state 2NFA has a poly(s)-state 2DFA that agrees with it on all inputs of length ≤ 2^s if and only if NLL ⊆ LL/polylog.
Here, 2DFAs and 2NFAs are the deterministic and nondeterministic 2FAs, NL and L/poly are the standard classes of languages recognizable in logarithmic space by nondeterministic TMs and by deterministic TMs with access to polynomially long advice, and NLL and LL/polylog are the corresponding complexity classes for space O(log log n) and advice length poly(log n). Our arguments strengthen and extend an old theorem by Berman and Lingas and can be used to obtain variants of the above statements for other modes of computation or other combinations of bounds for the input length, the space usage, and the length of advice.

    8.
    We study the Cutwidth problem, where the input is a graph G, and the objective is to find a linear layout of the vertices that minimizes the maximum number of edges intersected by any vertical line inserted between two consecutive vertices. We give an algorithm for Cutwidth with running time O(2^k n^O(1)). Here k is the size of a minimum vertex cover of the input graph G, and n is the number of vertices in G. As a corollary, our algorithm gives an O(2^(n/2) n^O(1)) time algorithm for Cutwidth on bipartite graphs. This is the first non-trivial exact exponential time algorithm for Cutwidth on a graph class where the problem remains NP-complete. Additionally, we show that Cutwidth parameterized by the size of the minimum vertex cover of the input graph does not admit a polynomial kernel unless NP ⊆ coNP/poly. Our kernelization lower bound contrasts with the recent results of Bodlaender et al. (ICALP, Springer, Berlin, 2011; SWAT, Springer, Berlin, 2012) that both Treewidth and Pathwidth parameterized by vertex cover do admit polynomial kernels.
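The sketch below only illustrates the objective: it evaluates the width of a given linear layout and, for tiny graphs, finds the exact cutwidth by brute force over all layouts; it does not implement the 2^k · n^O(1) vertex-cover-parameterized algorithm of the abstract.

```python
from itertools import permutations

def layout_cutwidth(order, edges):
    """Width of a given linear layout: the maximum, over the gaps between
    consecutive vertices, of the number of edges with one endpoint on each side."""
    pos = {v: i for i, v in enumerate(order)}
    return max(sum(1 for u, v in edges
                   if min(pos[u], pos[v]) <= gap < max(pos[u], pos[v]))
               for gap in range(len(order) - 1))

def cutwidth_bruteforce(vertices, edges):
    """Exact cutwidth by trying every layout -- O(n!) and only for tiny graphs."""
    return min(layout_cutwidth(p, edges) for p in permutations(vertices))

# 4-cycle: every first gap cuts the two edges of a degree-2 vertex, so cutwidth is 2
print(cutwidth_bruteforce([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 2
```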

    9.
    A circle graph is the intersection graph of a set of chords in a circle. Keil [Discrete Appl. Math., 42(1):51–63, 1993] proved that Dominating Set, Connected Dominating Set, and Total Dominating Set are NP-complete in circle graphs. To the best of our knowledge, nothing was known about the parameterized complexity of these problems in circle graphs. In this paper we prove the following results, which contribute in this direction:
    • Dominating Set, Independent Dominating Set, Connected Dominating Set, Total Dominating Set, and Acyclic Dominating Set are W[1]-hard in circle graphs, parameterized by the size of the solution.
    • Whereas both Connected Dominating Set and Acyclic Dominating Set are W[1]-hard in circle graphs, it turns out that Connected Acyclic Dominating Set is polynomial-time solvable in circle graphs.
    • If T is a given tree, deciding whether a circle graph G has a dominating set inducing a graph isomorphic to T is NP-complete when T is in the input, and FPT when parameterized by t=|V(T)|. We prove that the FPT algorithm runs in subexponential time, namely $2^{\mathcal{O}(t \cdot\frac{\log\log t}{\log t})} \cdot n^{\mathcal{O}(1)}$ , where n=|V(G)|.

    10.
    Approximation Algorithms for the Directed k-Tour and k-Stroll Problems
    We consider two natural generalizations of the Asymmetric Traveling Salesman problem: the k-Stroll and the k-Tour problems. The input to the k-Stroll problem is a directed n-vertex graph with nonnegative edge lengths, an integer k, as well as two special vertices s and t. The goal is to find a minimum-length s-t walk containing at least k distinct vertices (including the endpoints s, t). The k-Tour problem can be viewed as a special case of k-Stroll, where s=t. That is, the walk is required to be a tour containing some pre-specified vertex s. When k=n, the k-Stroll problem becomes equivalent to Asymmetric Traveling Salesman Path, and k-Tour to Asymmetric Traveling Salesman. Our main result is a polylogarithmic approximation algorithm for the k-Stroll problem. Prior to our work, only bicriteria (O(log^2 k), 3)-approximation algorithms were known, producing walks whose length is bounded by 3·OPT, while the number of vertices visited is Ω(k/log^2 k). We also show a simple O(log^2 n / log log n)-approximation algorithm for the k-Tour problem. The best previously known approximation algorithms achieved min(O(log^3 k), O(log^2 n · log k / log log n)) approximation in polynomial time, and O(log^2 k) approximation in quasipolynomial time.

    11.
    We use algorithmic tools for graphs of small treewidth to address questions in complexity theory. For our main construction, we prove that multiplicatively disjoint arithmetic circuits of size n^O(1) and treewidth k can be simulated by bounded fan-in arithmetic formulas of depth O(k^2 log n). From this we derive an analogous statement for syntactically multilinear arithmetic circuits, which strengthens the central theorem of M. Mahajan and B.V.R. Rao (Proc. 33rd International Symposium on Mathematical Foundations of Computer Science, vol. 5162, pp. 455–466, 2008). We show our main construction has the following three applications:
    • Bounded-width arithmetic circuits of size n^O(1) can be balanced to depth O(log n), provided chains of iterated multiplication in the circuit are of length O(1).
    • Boolean bounded fan-in circuits of size n^O(1) and treewidth k can be simulated by bounded fan-in formulas of depth O(k^2 log n). This strengthens, in the non-uniform setting, the known inclusion SC^0 ⊆ NC^1.
    • We demonstrate treewidth-restricted cases of Directed Reachability and the Circuit Value Problem that can be solved in LogDCFL.
    We also give a construction showing, for both arithmetic and Boolean circuits, that any circuit of size n^O(1) and treewidth O(log^i n) can be simulated by a circuit of width O(log^(i+1) n) and size n^c, where c = O(1) if i = 0, and c = O(log log n) otherwise.

    12.
    In the k-Feedback Arc/Vertex Set problem we are given a directed graph D and a positive integer k, and the objective is to check whether it is possible to delete at most k arcs/vertices from D to make it acyclic. Dom et al. (J. Discrete Algorithms 8(1):76–86, 2010) initiated a study of the Feedback Arc Set problem on bipartite tournaments (k-FASBT) in the realm of parameterized complexity. They showed that k-FASBT can be solved in time O(3.373^k n^6) on bipartite tournaments having n vertices. However, until now there was no known polynomial-sized problem kernel for k-FASBT. In this paper we obtain a cubic vertex kernel for k-FASBT. This completes the kernelization picture for the Feedback Arc/Vertex Set problem on tournaments and bipartite tournaments, as polynomial kernels were already known for all the other problems. We obtain our kernel using a non-trivial application of “independent modules”, which could be of independent interest.

    13.
    We propose a novel distributed algorithm for the minimum cut problem. Motivated by applications like volumetric segmentation in computer vision, we aim at solving large sparse problems. When the problem does not fully fit in memory, we need to either process it by parts, looking at one part at a time, or distribute it across several computers. Many mincut/maxflow algorithms are designed for the shared-memory architecture and do not scale to this setting. We consider algorithms that work on disjoint regions of the problem and exchange messages between the regions. We show that the region push-relabel algorithm of Delong and Boykov (A scalable graph-cut algorithm for N-D grids, in CVPR, 2008) uses Θ(n^2) rounds of message exchange, where n is the number of vertices. Our new algorithm performs path augmentations inside the regions and push-relabel style updates between the regions. It uses asymptotically fewer message exchanges, $O(|\mathcal{B}|^{2})$, where $\mathcal{B}$ is the set of boundary vertices. The sequential and parallel versions of our algorithm are competitive with the state of the art in the shared-memory model. By achieving a lower number of message exchanges (even asymptotically lower in our synthetic experiments), they are better suited to solving large problems using disk storage or a distributed system.

    14.
    Fast Algorithms for max independent set
    We first propose a method, called the “bottom-up method”, that, informally, “propagates” improvements of the worst-case complexity from “sparse” instances to “denser” ones, and we show an easy though non-trivial application of it to the min set cover problem. We then tackle max independent set. Here, we propagate improvements of worst-case complexity from graphs of average degree d to graphs of average degree greater than d. Indeed, using algorithms for max independent set in graphs of average degree 3, we successively solve max independent set in graphs of average degree 4, 5 and 6. Then, we combine the bottom-up technique with measure-and-conquer techniques to get improved running times for graphs of maximum degree 5 and 6, but also for general graphs. The computation bounds obtained for max independent set are O*(1.1571^n), O*(1.1895^n) and O*(1.2050^n) for graphs of maximum (or, more generally, average) degree 4, 5 and 6 respectively, and O*(1.2114^n) for general graphs. These results improve upon the best known results for these cases for polynomial space algorithms.
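For intuition, here is the textbook branch-on-a-vertex skeleton for max independent set that analyses such as bottom-up and measure-and-conquer refine; this plain version does not achieve the bounds quoted above, and the reduction and branching rules are deliberately minimal.

```python
def max_independent_set(adj):
    """Plain branching for Maximum Independent Set on an adjacency dict
    {v: set(neighbours)}: greedily take degree<=1 vertices, otherwise branch
    on a maximum-degree vertex (take it or discard it)."""
    if not adj:
        return set()
    v = min(adj, key=lambda u: len(adj[u]))
    if len(adj[v]) <= 1:
        # a degree<=1 vertex always belongs to some maximum independent set
        return {v} | max_independent_set(_remove(adj, {v} | adj[v]))
    w = max(adj, key=lambda u: len(adj[u]))
    with_w = {w} | max_independent_set(_remove(adj, {w} | adj[w]))
    without_w = max_independent_set(_remove(adj, {w}))
    return with_w if len(with_w) >= len(without_w) else without_w

def _remove(adj, removed):
    """Adjacency dict of the graph induced on the remaining vertices."""
    return {u: adj[u] - removed for u in adj if u not in removed}

# 5-cycle: any maximum independent set has exactly 2 vertices
cycle5 = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
print(len(max_independent_set(cycle5)))  # 2
```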

    15.
    S-Net is a declarative coordination language and component technology aimed at radically facilitating software engineering for modern parallel compute systems by near-complete separation of concerns between application (component) engineering and concurrency orchestration. S-Net builds on the concept of stream processing to structure networks of communicating asynchronous components implemented in a conventional (sequential) language. In this paper we present the design, implementation and evaluation of a new and innovative runtime system for S-Net streaming networks. The Front runtime system outperforms the existing implementations of S-Net by orders of magnitude for stress-test benchmarks, significantly reduces runtimes of fully-fledged parallel applications with compute-intensive components and achieves good scalability on our 48-core test system.

    16.
    The Pathwidth One Vertex Deletion (POVD) problem asks whether, given an undirected graph G and an integer k, one can delete at most k vertices from G so that the remaining graph has pathwidth at most 1. The question can be considered as a natural variation of the extensively studied Feedback Vertex Set (FVS) problem, where the deletion of at most k vertices has to result in the remaining graph having treewidth at most 1 (i.e., being a forest). Recently Philip et al. (WG, Lecture Notes in Computer Science, vol. 6410, pp. 196–207, 2010) initiated the study of the parameterized complexity of POVD, showing a quartic kernel and an algorithm which runs in time 7^k n^O(1). In this article we improve these results by showing a quadratic kernel and an algorithm with time complexity 4.65^k n^O(1), thus obtaining almost tight kernelization bounds when compared to the general result of Dell and van Melkebeek (STOC, pp. 251–260, ACM, New York, 2010). Techniques used in the kernelization are based on the quadratic kernel for FVS due to Thomassé (ACM Trans. Algorithms 6(2), 2010).

    17.
    M. Praveen, Algorithmica, 2013, 65(4):713–753
    The coverability and boundedness problems for Petri nets are known to be Expspace-complete. Given a Petri net, we associate a graph with it. With the vertex cover number k of this graph and the maximum arc weight W as parameters, we show that coverability and boundedness are in ParaPspace. This means that these problems can be solved in space $\mathcal{O}(\mathit{ef}(k, W)\,\mathit{poly}(n))$, where ef(k, W) is some super-polynomial function and poly(n) is some polynomial in the size of the input n. We then extend the ParaPspace result to model checking a logic that can express some generalizations of coverability and boundedness.

    18.
    Given a graph with n vertices, k terminals and positive integer weights not larger than c, we compute a minimum Steiner Tree in $\mathcal{O}^{\star}(2^{k}c)$ time and $\mathcal{O}^{\star}(c)$ space, where the $\mathcal{O}^{\star}$ notation omits terms bounded by a polynomial in the input size. We obtain the result by defining a generalization of walks, called branching walks, and combining it with the Inclusion-Exclusion technique. Using this combination we also give $\mathcal{O}^{\star}(2^{n})$-time polynomial-space algorithms for Degree Constrained Spanning Tree, Maximum Internal Spanning Tree and #Spanning Forest with a given number of components. Furthermore, using related techniques, we also present new polynomial-space algorithms for computing the Cover Polynomial of a graph, Convex Tree Coloring, and counting the number of perfect matchings of a graph.
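The sketch below illustrates only the inclusion-exclusion-over-walks principle (counting walks that visit a required vertex set by summing, with signs, the walks that avoid each subset of it); the paper's Steiner Tree algorithm uses the richer notion of branching walks, which is not reproduced here.

```python
from itertools import combinations

def count_walks_visiting(adj, start, length, required):
    """Number of walks with exactly `length` edges from `start` that visit every
    vertex in `required`, via inclusion-exclusion over the avoided subsets."""
    def walks_avoiding(avoid):
        if start in avoid:
            return 0
        # DP over walk length in the graph with `avoid` deleted
        counts = {v: 0 for v in adj if v not in avoid}
        counts[start] = 1
        for _ in range(length):
            counts = {v: sum(counts[u] for u in adj[v] if u not in avoid)
                      for v in counts}
        return sum(counts.values())

    required = list(required)
    total = 0
    for r in range(len(required) + 1):
        for avoided in combinations(required, r):
            total += (-1) ** r * walks_avoiding(set(avoided))
    return total

# triangle 0-1-2: walks of length 2 from 0 that visit both 1 and 2
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(count_walks_visiting(triangle, 0, 2, {1, 2}))  # 2: 0-1-2 and 0-2-1
```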

    19.
    Measure & Conquer (M&C) is a prominent technique for analyzing exact algorithms for computationally hard problems, in particular, graph problems. It tries to balance worse and better situations within the algorithm analysis. This has led, e.g., to algorithms for Minimum Vertex Cover with a running time of $\mathcal{O}(c^{n})$ for some constant c ≈ 1.2, where n is the number of vertices in the graph. Several obstacles prevent the application of this technique in parameterized algorithmics, making it rarely applied in this area. However, these difficulties can be handled in some situations. We will exemplify this with two problems related to Vertex Cover, namely Connected Vertex Cover and Edge Dominating Set. For both problems, several parameterized algorithms have been published, all based on the idea of first enumerating minimal vertex covers. Using M&C in this context will allow us to improve on the hitherto published running times. In contrast to some of the earlier suggested algorithms, ours will use polynomial space.

    20.
    Given a relation R ⊆ O × A on a set O of objects and a set A of attributes, the AOC-poset (Attribute/Object Concept poset) is the partial order defined on the “introducers” of objects and attributes in the corresponding concept lattice. In this paper, we present Hermes, a simple and efficient algorithm for building an AOC-poset, which runs in O(min{nm, n^α}), where n is the number of objects plus the number of attributes, m is the size of the relation, and n^α is the time required to perform matrix multiplication (currently α = 2.376). Finally, we compare the runtime of Hermes with the runtime of other algorithms computing the AOC-poset: Ares, Ceres and Pluton. We characterize the cases where each algorithm is the most relevant.
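As a point of comparison, the following brute-force sketch builds the AOC-poset directly from its definition, computing the extent of each object-concept and attribute-concept and ordering them by extent inclusion; it is illustrative only, whereas Hermes achieves the stated bounds. Labels such as gamma(o) and mu(a) are our own notation, and concepts sharing the same extent coincide in the actual poset.

```python
def aoc_poset(context):
    """Naive AOC-poset of a formal context {object: set(attributes)}:
    extents of object-concepts ((o')') and attribute-concepts (a'),
    ordered by strict extent inclusion."""
    objects = set(context)
    attributes = set().union(*context.values()) if context else set()

    def extent_of_intent(intent):
        return frozenset(o for o in objects if intent <= context[o])

    concepts = {}
    for o in objects:                       # introducer of object o
        concepts[f"gamma({o})"] = extent_of_intent(context[o])
    for a in attributes:                    # introducer of attribute a
        concepts[f"mu({a})"] = frozenset(o for o in objects if a in context[o])

    order = [(x, y) for x in concepts for y in concepts
             if x != y and concepts[x] < concepts[y]]   # strict extent inclusion
    return concepts, order

ctx = {"o1": {"a", "b"}, "o2": {"b", "c"}, "o3": {"b"}}
concepts, order = aoc_poset(ctx)
print(sorted(order))  # e.g. gamma(o1) and mu(a) sit below gamma(o3) and mu(b)
```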
