Similar Literature
A total of 20 similar articles were found (search time: 31 ms).
1.
A circle graph is the intersection graph of a set of chords in a circle. Keil [Discrete Appl. Math., 42(1):51–63, 1993] proved that Dominating Set, Connected Dominating Set, and Total Dominating Set are NP-complete in circle graphs. To the best of our knowledge, nothing was known about the parameterized complexity of these problems in circle graphs. In this paper we prove the following results, which contribute in this direction (a small sketch of the circle-graph model follows the list below):
  • Dominating Set, Independent Dominating Set, Connected Dominating Set, Total Dominating Set, and Acyclic Dominating Set are W[1]-hard in circle graphs, parameterized by the size of the solution.
  • Whereas both Connected Dominating Set and Acyclic Dominating Set are W[1]-hard in circle graphs, it turns out that Connected Acyclic Dominating Set is polynomial-time solvable in circle graphs.
  • If T is a given tree, deciding whether a circle graph G has a dominating set inducing a graph isomorphic to T is NP-complete when T is in the input, and FPT when parameterized by t=|V(T)|. We prove that the FPT algorithm runs in subexponential time, namely $2^{\mathcal{O}(t\cdot\frac{\log\log t}{\log t})}\cdot n^{\mathcal{O}(1)}$, where n=|V(G)|.
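To make the circle-graph model in this entry concrete, here is a minimal Python sketch (an illustration only, not taken from the paper): it builds the intersection graph of chords given as endpoint pairs on the circle, and checks the domination condition by brute force. The function names are hypothetical.

```python
from itertools import combinations

def circle_graph(chords):
    """Build the intersection graph of chords on a circle.

    Each chord is a pair (a, b) of distinct positions on the circle
    (e.g. integers 0..2n-1).  Two chords intersect, and hence are
    adjacent in the circle graph, iff their endpoints interleave.
    """
    def interleave(c, d):
        a, b = sorted(c)
        x, y = sorted(d)
        return (a < x < b < y) or (x < a < y < b)

    n = len(chords)
    adj = {i: set() for i in range(n)}
    for i, j in combinations(range(n), 2):
        if interleave(chords[i], chords[j]):
            adj[i].add(j)
            adj[j].add(i)
    return adj

def is_dominating(adj, S):
    """Check that every vertex is in S or has a neighbour in S."""
    return all(v in S or adj[v] & S for v in adj)

# Chords 0-1 cross, 1-2 cross, 0-2 do not: the circle graph is a path.
chords = [(0, 2), (1, 4), (3, 5)]
adj = circle_graph(chords)
assert is_dominating(adj, {1})        # the middle chord dominates all
assert not is_dominating(adj, {0})    # chord 0 misses chord 2
```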

2.
The Power Dominating Set problem is an extension of the well-known domination problem on graphs, obtained by adding a second propagation rule: given a graph G=(V,E), a set P⊆V is a power dominating set if every vertex is observed after the exhaustive application of the following two rules. First, a vertex v is observed if v∈P or it has a neighbor in P. Second, if an observed vertex has exactly one unobserved neighbor u, then u becomes observed as well. We show that Power Dominating Set remains NP-hard on cubic graphs. We design an algorithm solving this problem in time $\mathcal{O}^{*}(1.7548^{n})$ on general graphs, using only polynomial space. To achieve this, we introduce so-called reference search trees that can be seen as a compact representation of usual search trees, providing non-local pointers in order to indicate pruned subtrees.
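The two observation rules translate directly into a fixpoint computation. The sketch below (plain Python, adjacency-dictionary graphs, hypothetical function name) only checks a candidate set; it is not the paper's $\mathcal{O}^{*}(1.7548^{n})$ search-tree algorithm.

```python
def is_power_dominating(adj, P):
    """Check whether P is a power dominating set of the graph adj.

    Rule 1: a vertex is observed if it lies in P or has a neighbour in P.
    Rule 2: if an observed vertex has exactly one unobserved neighbour u,
            then u becomes observed too.  Rule 2 is applied exhaustively.
    """
    observed = set(P)
    for v in P:
        observed |= adj[v]                      # rule 1
    changed = True
    while changed:                              # rule 2, to fixpoint
        changed = False
        for v in list(observed):
            unobserved = adj[v] - observed
            if len(unobserved) == 1:
                observed |= unobserved
                changed = True
    return observed == set(adj)

# A path on 5 vertices is power dominated by a single end vertex:
path = {i: {j for j in (i - 1, i + 1) if 0 <= j <= 4} for i in range(5)}
assert is_power_dominating(path, {0})
```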

3.
In the Parameterized Connected Dominating Set problem the input consists of a graph G and a positive integer k, and the question is whether there is a set S of at most k vertices in G—a connected dominating set of G—such that (i) S is a dominating set of G, and (ii) the subgraph G[S] induced by S is connected; the parameter is k. The underlying decision problem is a basic connectivity problem which has long been known to be NP-complete, and it has been extensively studied using several algorithmic approaches. Parameterized Connected Dominating Set is W[2]-hard, and therefore it is unlikely (Downey and Fellows, Parameterized Complexity, Springer, 1999) that the problem has fixed-parameter tractable (FPT) algorithms or polynomial kernels in graphs in general. We investigate the effect of excluding short cycles, as subgraphs, on the kernelization complexity of Parameterized Connected Dominating Set. The girth of a graph G is the length of a shortest cycle in G (both notions are illustrated with a small checker after the list below). It turns out that the Parameterized Connected Dominating Set problem is hard on graphs with small cycles, and becomes progressively easier as the girth increases. More precisely, we obtain the following kernelization landscape: Parameterized Connected Dominating Set
  • does not have a kernel of any size on graphs of girth three or four (since the problem is W[2]-hard);
  • admits a kernel of size $2^{k}k^{3k}$ on graphs of girth at least five;
  • has no polynomial kernel (unless the Polynomial Hierarchy collapses to the third level) on graphs of girth at most six, and,
  • has a cubic ($\mathcal{O}(k^{3})$) vertex kernel on graphs of girth at least seven.
While there is a large and growing collection of parameterized complexity results available for problems on graph classes characterized by excluded minors, our results add to the very few known in the field for graph classes characterized by excluded subgraphs.
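As a concrete reference for the two definitions above, the following brute-force Python helpers (illustration only, for tiny graphs; the names are hypothetical) check whether a set is a connected dominating set and compute the girth by BFS from every vertex.

```python
def is_connected_dominating_set(adj, S):
    """S dominates every vertex and G[S] is connected."""
    if not all(v in S or adj[v] & S for v in adj):
        return False
    S = set(S)
    if not S:
        return False
    seen, stack = set(), [next(iter(S))]
    while stack:                          # DFS inside G[S]
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend((adj[v] & S) - seen)
    return seen == S

def girth(adj):
    """Length of a shortest cycle (BFS from every vertex); inf if acyclic."""
    best = float("inf")
    for s in adj:
        dist, parent = {s: 0}, {s: None}
        queue = [s]
        for v in queue:
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    parent[w] = v
                    queue.append(w)
                elif parent[v] != w:      # non-tree edge closes a cycle
                    best = min(best, dist[v] + dist[w] + 1)
    return best

# C5 (5-cycle): girth 5; {0,1,2} is a connected dominating set.
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
assert girth(c5) == 5
assert is_connected_dominating_set(c5, {0, 1, 2})
```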

4.
Inclusion/exclusion and measure and conquer are two central techniques from the field of exact exponential-time algorithms that recently received a lot of attention. In this paper, we show that both techniques can be used in a single algorithm. This is done by looking at the principle of inclusion/exclusion as a branching rule. This inclusion/exclusion-based branching rule can be combined in a branch-and-reduce algorithm with traditional branching rules and reduction rules. The resulting algorithms can be analysed using measure and conquer allowing us to obtain good upper bounds on their running times. In this way, we obtain the currently fastest exact exponential-time algorithms for a number of domination problems in graphs. Among these are faster polynomial-space and exponential-space algorithms for #Dominating Set and Minimum Weight Dominating Set (for the case where the set of possible weight sums is polynomially bounded), and a faster polynomial-space algorithm for Domatic Number. This approach is also extended in this paper to the setting where not all requirements in a problem need to be satisfied. This results in faster polynomial-space and exponential-space algorithms for Partial Dominating Set, and faster polynomial-space and exponential-space algorithms for the well-studied parameterised problem k-Set Splitting and its generalisation k-Not-All-Equal Satisfiability.
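For background on the inclusion/exclusion ingredient, the sketch below shows the textbook inclusion/exclusion identity for counting dominating sets of a given size, with polynomial work per subset and polynomial space. It only illustrates the principle; it is not the combined branch-and-reduce algorithm of the paper.

```python
from itertools import combinations
from math import comb

def count_dominating_sets(adj, k):
    """Count vertex sets of size k that dominate the graph, via
    inclusion/exclusion over the set X of vertices left undominated:

        #DS_k = sum over X of (-1)^|X| * C(|V - N[X]|, k),

    where N[X] is the union of closed neighbourhoods of X.
    Runs over all 2^n subsets with polynomial work per subset.
    """
    V = list(adj)
    total = 0
    for r in range(len(V) + 1):
        for X in combinations(V, r):
            forbidden = set(X)
            for x in X:
                forbidden |= adj[x]            # N[X]
            free = len(V) - len(forbidden)     # vertices S may still use
            total += (-1) ** r * comb(free, k)
    return total

# Triangle: each of the 3 single vertices is a dominating set.
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
assert count_dominating_sets(triangle, 1) == 3
```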

5.
Vertex deletion and edge deletion problems play a central role in parameterized complexity. Examples include classical problems like Feedback Vertex Set, Odd Cycle Transversal, and Chordal Deletion. The study of analogous edge contraction problems has so far been left largely unexplored from a parameterized perspective. We consider two basic problems of this type: Tree Contraction and Path Contraction. These two problems take as input an undirected graph G on n vertices and an integer k, and the task is to determine whether we can obtain a tree or a path, respectively, by a sequence of at most k edge contractions in G. For Tree Contraction, we present a randomized $4^{k}\cdot n^{O(1)}$ time polynomial-space algorithm, as well as a deterministic $4.98^{k}\cdot n^{O(1)}$ time algorithm, based on a variant of the color coding technique of Alon, Yuster and Zwick. We also present a deterministic $2^{k+o(k)}+n^{O(1)}$ time algorithm for Path Contraction. Furthermore, we show that Path Contraction has a kernel with at most 5k+3 vertices, while Tree Contraction does not have a polynomial kernel unless NP ⊆ coNP/poly. We find the latter result surprising because of the connection between Tree Contraction and Feedback Vertex Set, which is known to have a kernel with $4k^{2}$ vertices.

6.
In the k-Feedback Arc/Vertex Set problem we are given a directed graph D and a positive integer k, and the objective is to check whether it is possible to delete at most k arcs/vertices from D to make it acyclic. Dom et al. (J. Discrete Algorithms 8(1):76–86, 2010) initiated a study of the Feedback Arc Set problem on bipartite tournaments (k-FASBT) in the realm of parameterized complexity. They showed that k-FASBT can be solved in time $O(3.373^{k} n^{6})$ on bipartite tournaments having n vertices. However, until now there was no known polynomial-sized problem kernel for k-FASBT. In this paper we obtain a cubic vertex kernel for k-FASBT. This completes the kernelization picture for the Feedback Arc/Vertex Set problem on tournaments and bipartite tournaments, since polynomial kernels were known before for all the other problems. We obtain our kernel using a non-trivial application of “independent modules”, which could be of independent interest.

7.
We study the Cutwidth problem, where the input is a graph G, and the objective is to find a linear layout of the vertices that minimizes the maximum number of edges intersected by any vertical line inserted between two consecutive vertices. We give an algorithm for Cutwidth with running time $O(2^{k} n^{O(1)})$. Here k is the size of a minimum vertex cover of the input graph G, and n is the number of vertices in G. As a corollary, our algorithm gives an $O(2^{n/2} n^{O(1)})$ time algorithm for Cutwidth on bipartite graphs. This is the first non-trivial exact exponential-time algorithm for Cutwidth on a graph class where the problem remains NP-complete. Additionally, we show that Cutwidth parameterized by the size of the minimum vertex cover of the input graph does not admit a polynomial kernel unless NP ⊆ coNP/poly. Our kernelization lower bound contrasts with the recent results of Bodlaender et al. (ICALP, Springer, Berlin, 2011; SWAT, Springer, Berlin, 2012) that both Treewidth and Pathwidth parameterized by vertex cover do admit polynomial kernels.
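The layout objective can be checked directly. Below is a tiny brute-force Python routine (exponential in n, illustration only, hypothetical name) that computes the cutwidth of a small graph by trying every vertex ordering; it is unrelated to the $O(2^{k} n^{O(1)})$ vertex-cover-parameterized algorithm of the entry.

```python
from itertools import permutations

def cutwidth(vertices, edges):
    """Exact cutwidth by trying all linear layouts (fine for tiny graphs).

    For a layout, the width is the maximum, over the n-1 gaps between
    consecutive vertices, of the number of edges with one endpoint on
    each side of the gap.
    """
    best = float("inf")
    for order in permutations(vertices):
        pos = {v: i for i, v in enumerate(order)}
        width = max(
            sum(1 for u, v in edges if (pos[u] <= i) != (pos[v] <= i))
            for i in range(len(order) - 1)
        )
        best = min(best, width)
    return best

# A 4-cycle has cutwidth 2.
assert cutwidth(range(4), [(0, 1), (1, 2), (2, 3), (3, 0)]) == 2
```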

8.
Measure & Conquer (M&C) is a prominent technique for analyzing exact algorithms for computationally hard problems, in particular, graph problems. It tries to balance worse and better situations within the algorithm analysis. This has led, e.g., to algorithms for Minimum Vertex Cover with a running time of $\mathcal{O}(c^{n})$ for some constant c ≈ 1.2, where n is the number of vertices in the graph. Several obstacles prevent the application of this technique in parameterized algorithmics, making it rarely applied in this area. However, these difficulties can be handled in some situations. We will exemplify this with two problems related to Vertex Cover, namely Connected Vertex Cover and Edge Dominating Set. For both problems, several parameterized algorithms have been published, all based on the idea of first enumerating minimal vertex covers. Using M&C in this context will allow us to improve on the hitherto published running times. In contrast to some of the earlier suggested algorithms, ours will use polynomial space.

9.
A planning and scheduling (P&S) system takes as input a domain model and a goal, and produces a plan of actions to be executed, which will achieve the goal. A P&S system typically also offers plan execution and monitoring engines. Due to the non-deterministic nature of planning problems, it is a challenge to construct correct and reliable P&S systems, including, for example, declarative domain models. Verification and validation (V&V) techniques have been applied to address these issues. Furthermore, V&V systems have been applied to actually perform planning, and conversely, P&S systems have been applied to perform V&V of more traditional software. This article overviews some of the literature on the fruitful interaction between V&V and P&S.

10.
We consider the Partition Into Triangles problem on bounded-degree graphs. We show that this problem is polynomial-time solvable on graphs of maximum degree three by giving a linear-time algorithm. We also show that this problem becomes $\mathcal{NP}$-complete on graphs of maximum degree four. Moreover, we show that there is no subexponential-time algorithm for this problem on graphs of maximum degree four unless the Exponential-Time Hypothesis fails. However, the Partition Into Triangles problem on graphs of maximum degree at most four is in many cases practically solvable, as we give an algorithm for this problem that runs in $\mathcal{O}(1.02220^{n})$ time and linear space.
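For reference, the problem itself is easy to state in code. The following Python sketch (naive exponential-time recursion, illustration only, hypothetical name) finds a partition of a small graph into vertex-disjoint triangles if one exists; it is not the linear-time degree-3 algorithm or the $\mathcal{O}(1.02220^{n})$ algorithm of the entry.

```python
from itertools import combinations

def partition_into_triangles(adj):
    """Return a partition of V into vertex-disjoint triangles, or None.

    Simple exponential-time recursion for illustration: repeatedly pick
    the smallest unused vertex and try every triangle through it.
    """
    def solve(remaining):
        if not remaining:
            return []
        v = min(remaining)
        for u, w in combinations(remaining - {v}, 2):
            if u in adj[v] and w in adj[v] and w in adj[u]:
                rest = solve(remaining - {v, u, w})
                if rest is not None:
                    return [(v, u, w)] + rest
        return None

    return solve(set(adj))

# Two disjoint triangles.
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {4, 5}, 4: {3, 5}, 5: {3, 4}}
print(partition_into_triangles(g))   # e.g. [(0, 1, 2), (3, 4, 5)]
```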

11.
We consider a CNF formula F as a multiset of clauses: $F=\{c_{1},\ldots,c_{m}\}$. The set of variables of F will be denoted by V(F). Let $B_{F}$ denote the bipartite graph with partite sets V(F) and F and with an edge between $v\in V(F)$ and $c\in F$ if $v\in c$ or $\bar{v}\in c$. The matching number ν(F) of F is the size of a maximum matching in $B_{F}$. In our main result, we prove that the following parameterization of MaxSat (denoted by (ν(F)+k)-SAT) is fixed-parameter tractable: given a formula F, decide whether we can satisfy at least ν(F)+k clauses in F, where k is the parameter. A formula F is called variable-matched if ν(F)=|V(F)|. Let $\delta(F)=|F|-|V(F)|$ and $\delta^{*}(F)=\max_{F'\subseteq F}\delta(F')$. Our main result implies fixed-parameter tractability of MaxSat parameterized by δ(F) for variable-matched formulas F; this complements related results of Kullmann (IEEE Conference on Computational Complexity, pp. 116–124, 2000) and Szeider (J. Comput. Syst. Sci. 69(4):656–674, 2004) for MaxSat parameterized by $\delta^{*}(F)$. To obtain our main result, we reduce (ν(F)+k)-SAT to the following parameterization of the Hitting Set problem (denoted by (m−k)-Hitting Set): given a collection $\mathcal{C}$ of m subsets of a ground set U of n elements, decide whether there is $X\subseteq U$ such that $C\cap X\neq\emptyset$ for each $C\in\mathcal{C}$ and $|X|\leq m-k$, where k is the parameter. Gutin, Jones and Yeo (Theor. Comput. Sci. 412(41):5744–5751, 2011) proved that (m−k)-Hitting Set is fixed-parameter tractable by obtaining an exponential kernel for the problem. We obtain two algorithms for (m−k)-Hitting Set: a deterministic algorithm of runtime $O((2e)^{2k+O(\log^{2} k)}(m+n)^{O(1)})$ and a randomized algorithm of expected runtime $O(8^{k+O(\sqrt{k})}(m+n)^{O(1)})$. Our deterministic algorithm improves an algorithm that follows from the kernelization result of Gutin, Jones and Yeo (Theor. Comput. Sci. 412(41):5744–5751, 2011).
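The parameter ν(F) is just a bipartite matching number and is easy to compute. The sketch below (plain Python, Kuhn's augmenting-path algorithm, with a hypothetical function name and clause encoding) builds the clause-variable incidence graph and returns ν(F); it does not implement the (ν(F)+k)-SAT algorithms of the entry.

```python
def matching_number(formula):
    """nu(F): maximum matching in the clause-variable incidence graph B_F.

    `formula` is a list of clauses, each a set of non-zero ints, where a
    negative literal -v stands for the negation of variable v.  Uses the
    simple augmenting-path (Kuhn) algorithm.
    """
    clauses = [{abs(l) for l in c} for c in formula]
    match = {}                      # variable -> clause index

    def augment(i, seen):
        for v in clauses[i]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match or augment(match[v], seen):
                match[v] = i
                return True
        return False

    return sum(augment(i, set()) for i in range(len(clauses)))

# F = (x1 v x2) ^ (-x1) ^ (-x2 v x3):  nu(F) = 3, so F is variable-matched.
F = [{1, 2}, {-1}, {-2, 3}]
assert matching_number(F) == 3
```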

12.
Target Set Selection, a prominent NP-hard problem occurring in social network analysis and distributed computing, is notoriously hard both in terms of achieving useful polynomial-time approximations and in terms of fixed-parameter algorithms. Given an undirected graph, the task is to select a minimum number of vertices into a “target set” such that all other vertices will become active in the course of a dynamic process (which may go through several activation rounds). A vertex, equipped with a threshold value t, becomes active once at least t of its neighbors are active; initially, only the target set vertices are active. We contribute further insights into the existence of islands of tractability for Target Set Selection by spotting new parameterizations characterizing some sparse graphs as well as some “cliquish” graphs, and by developing corresponding fixed-parameter tractability and (parameterized) hardness results. In particular, we demonstrate that upper-bounding the thresholds by a constant may significantly alleviate the search for efficiently solvable, but still meaningful, special cases of Target Set Selection.
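The activation process is straightforward to simulate. The following Python sketch (illustration only; adjacency-dictionary graphs and a hypothetical function name) checks whether a given target set activates the whole graph under the threshold rule.

```python
def activates_all(adj, thresholds, target_set):
    """Simulate the activation process of Target Set Selection.

    Initially only the target set is active; a vertex v becomes active
    once at least thresholds[v] of its neighbours are active.  Returns
    True iff eventually every vertex is active.
    """
    active = set(target_set)
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v not in active and len(adj[v] & active) >= thresholds[v]:
                active.add(v)
                changed = True
    return active == set(adj)

# Star K_{1,3} with all thresholds 1: the centre alone activates everyone.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
assert activates_all(star, {v: 1 for v in star}, {0})
```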

13.
The Induced Graph Matching problem asks to find \(k\) disjoint induced subgraphs isomorphic to a given graph \(H\) in a given graph \(G\) such that there are no edges between vertices of different subgraphs. This problem generalizes the classical Independent Set and Induced Matching problems, among several other problems. We show that Induced Graph Matching is fixed-parameter tractable in \(k\) on claw-free graphs when \(H\) is a fixed connected graph, and even admits a polynomial kernel when \(H\) is a complete graph. Both results rely on a new, strong, and generic algorithmic structure theorem for claw-free graphs. Complementing the above positive results, we prove \(\mathsf{W}[1]\)-hardness of Induced Graph Matching on graphs excluding \(K_{1,4}\) as an induced subgraph, for any fixed complete graph \(H\). In particular, we show that Independent Set is \(\mathsf{W}[1]\)-hard on \(K_{1,4}\)-free graphs. Finally, we consider the complexity of Induced Graph Matching on a large subclass of claw-free graphs, namely on proper circular-arc graphs. We show that the problem is either polynomial-time solvable or \(\mathsf{NP}\)-complete, depending on the connectivity of \(H\) and the structure of \(G\).

14.
In this paper, we present ParaMiner, a generic and parallel algorithm for closed pattern mining. ParaMiner is built on the principles of pattern enumeration in strongly accessible set systems. Its efficiency is due to a novel dataset reduction technique (which we call EL-reduction), combined with novel techniques for performing dataset reduction during parallel execution on a multi-core architecture. We illustrate ParaMiner's genericity by using this algorithm to solve three different pattern mining problems: the frequent itemset mining problem, the problem of mining frequent connected relational graphs, and the problem of mining gradual itemsets. In this paper, we prove the soundness and the completeness of ParaMiner. Furthermore, our experiments show that despite being a generic algorithm, ParaMiner can compete with specialized state-of-the-art algorithms designed for the pattern mining problems mentioned above. Moreover, for the particular problem of gradual itemset mining, ParaMiner outperforms the state-of-the-art algorithm by two orders of magnitude.

15.
The Contractibility problem takes as input two graphs G and H, and the task is to decide whether H can be obtained from G by a sequence of edge contractions. The Induced Minor and Induced Topological Minor problems are similar, but the former allows both edge contractions and vertex deletions, whereas the latter allows only vertex deletions and vertex dissolutions. All three problems are NP-complete, even for certain fixed graphs H. We show that these problems can be solved in polynomial time for every fixed H when the input graph G is chordal. Our results can be considered tight, since these problems are known to be W[1]-hard on chordal graphs when parameterized by the size of H. To solve Contractibility and Induced Minor, we define and use a generalization of the classic Disjoint Paths problem, where we require the vertices of each of the k paths to be chosen from a specified set. We prove that this variant is NP-complete even when k=2, but that it is polynomial-time solvable on chordal graphs for every fixed k. Our algorithm for Induced Topological Minor is based on another generalization of Disjoint Paths called Induced Disjoint Paths, where the vertices from different paths may no longer be adjacent. We show that this problem, which is known to be NP-complete when k=2, can be solved in polynomial time on chordal graphs even when k is part of the input. Our results fit into the general framework of graph containment problems, where the aim is to decide whether a graph can be modified into another graph by a sequence of specified graph operations. Allowing combinations of the four well-known operations edge deletion, edge contraction, vertex deletion, and vertex dissolution results in the following ten containment relations: (induced) minor, (induced) topological minor, (induced) subgraph, (induced) spanning subgraph, dissolution, and contraction. Our results, combined with existing results, settle the complexity of each of the ten corresponding containment problems on chordal graphs.

16.
Fast Algorithms for max independent set
We first propose a method, called the “bottom-up method”, that, informally, “propagates” improvements of the worst-case complexity for “sparse” instances to “denser” ones, and we show an easy though non-trivial application of it to the min set cover problem. We then tackle max independent set. Here, we propagate improvements of worst-case complexity from graphs of average degree d to graphs of average degree greater than d. Indeed, using algorithms for max independent set in graphs of average degree 3, we successively solve max independent set in graphs of average degree 4, 5 and 6. Then, we combine the bottom-up technique with measure and conquer techniques to get improved running times for graphs of maximum degree 5 and 6, but also for general graphs. The computation bounds obtained for max independent set are $O^{*}(1.1571^{n})$, $O^{*}(1.1895^{n})$ and $O^{*}(1.2050^{n})$ for graphs of maximum (or more generally average) degree 4, 5 and 6 respectively, and $O^{*}(1.2114^{n})$ for general graphs. These results improve upon the best known results for these cases for polynomial-space algorithms.
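The standard branching scheme that such analyses refine can be written in a few lines: either a chosen vertex is excluded, or it is taken together with the removal of its closed neighborhood. The Python sketch below (illustration only, hypothetical names, without the refined case analysis or measure-and-conquer bounds of the entry) computes the size of a maximum independent set this way.

```python
def max_independent_set(adj):
    """Size of a maximum independent set by branching on a vertex of
    maximum degree: either exclude v, or include v and delete N[v].
    Degree-0 and degree-1 vertices are taken greedily (a safe reduction).
    """
    adj = {v: set(nb) for v, nb in adj.items()}   # work on a copy

    def remove(g, vs):
        return {v: nb - vs for v, nb in g.items() if v not in vs}

    def mis(g):
        if not g:
            return 0
        v = min(g, key=lambda x: len(g[x]))
        if len(g[v]) <= 1:                        # reduction: take v
            return 1 + mis(remove(g, {v} | g[v]))
        v = max(g, key=lambda x: len(g[x]))       # branch on max degree
        return max(1 + mis(remove(g, {v} | g[v])),    # v in the set
                   mis(remove(g, {v})))               # v not in the set

    return mis(adj)

# C5 has independence number 2.
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
assert max_independent_set(c5) == 2
```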

17.
The NP-complete geometric covering problem Rectangle Stabbing is defined as follows: given a set R of axis-parallel rectangles in the plane, a set L of horizontal and vertical lines in the plane, and a positive integer k, select at most k of the lines such that every rectangle is intersected by at least one of the selected lines. While it is known that the problem can be approximated in polynomial time within a factor of two, its parameterized complexity with respect to the parameter k was open so far. Giving two fixed-parameter reductions, one from the W[1]-complete problem Multicolored Clique and one to the W[1]-complete problem Short Turing Machine Acceptance, we prove that Rectangle Stabbing is W[1]-complete with respect to the parameter k, which in particular means that there is no hope for an algorithm running in $f(k)\cdot|R\cup L|^{O(1)}$ time. Our reductions also show the W[1]-completeness of the more general problem Set Cover on instances that “almost have the consecutive-ones property”, that is, on instances whose matrix representation has at most two blocks of 1s per row. We also show that the special case of Rectangle Stabbing where all rectangles are squares of the same size is W[1]-hard. The case where the input consists of non-overlapping rectangles was open for some time and has recently been shown to be fixed-parameter tractable (Heggernes et al., Fixed-parameter algorithms for cochromatic number and disjoint rectangle stabbing, 2009). By giving an algorithm running in $(2k)^{k}\cdot|R\cup L|^{O(1)}$ time, we show that Rectangle Stabbing is fixed-parameter tractable in the still NP-hard case where both these restrictions apply, that is, in the case of disjoint squares of the same size. This algorithm is faster than the one in Heggernes et al. (Fixed-parameter algorithms for cochromatic number and disjoint rectangle stabbing, 2009) for the disjoint rectangles case. Moreover, we show fixed-parameter tractability for the restrictions where the rectangles have bounded width or height, or where each horizontal line intersects only a bounded number of rectangles.
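The problem statement itself is compact. The following Python sketch (brute force over all subsets of at most k lines, illustration only, hypothetical names) decides a small Rectangle Stabbing instance; it is not one of the parameterized algorithms discussed in the entry.

```python
from itertools import combinations

def rectangle_stabbing(rectangles, lines, k):
    """Decide whether at most k of the given lines stab every rectangle.

    A rectangle is (x1, y1, x2, y2); a line is ('h', y) or ('v', x).
    Brute force over all subsets of at most k lines, for illustration.
    """
    def stabs(line, rect):
        kind, c = line
        x1, y1, x2, y2 = rect
        return (y1 <= c <= y2) if kind == 'h' else (x1 <= c <= x2)

    for r in range(k + 1):
        for chosen in combinations(lines, r):
            if all(any(stabs(l, rect) for l in chosen) for rect in rectangles):
                return True
    return False

rects = [(0, 0, 1, 1), (2, 0, 3, 1), (0, 2, 1, 3)]
lines = [('v', 0.5), ('v', 2.5), ('h', 2.5), ('h', 0.5)]
assert rectangle_stabbing(rects, lines, 2)        # e.g. y=0.5 and y=2.5
assert not rectangle_stabbing(rects, lines, 1)
```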

18.
A sunflower in a hypergraph is a set of hyperedges pairwise intersecting in exactly the same vertex set. Sunflowers are a useful tool in polynomial-time data reduction for problems formalizable as d-Hitting Set, the problem of covering all hyperedges (whose cardinality is bounded from above by a constant d) of a hypergraph by at most k vertices. Additionally, in fault diagnosis, sunflowers yield concise explanations for “highly defective structures”. We provide a linear-time algorithm that, by finding sunflowers, transforms an instance of d-Hitting Set into an equivalent instance comprising at most $O(k^{d})$ hyperedges and vertices. In terms of parameterized complexity, we show a problem kernel with asymptotically optimal size (unless \(\operatorname{coNP}\subseteq\operatorname{NP/poly}\)) and provide experimental results that show the practical applicability of our algorithm. Finally, we show that the number of vertices can be reduced to $O(k^{d-1})$ with additional processing in $O(k^{1.5d})$ time—nontrivially combining the sunflower technique with problem kernels due to Abu-Khzam and Moser.
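To illustrate the central notion, the sketch below searches for a sunflower with r petals by trying every candidate core and greedily packing disjoint petals. The greedy step may miss some sunflowers; this is an illustration of the definition only, not the paper's linear-time kernelization, and the names are hypothetical.

```python
from itertools import chain, combinations

def find_sunflower(edges, r):
    """Find r hyperedges that pairwise intersect in exactly the same core.

    Brute force for illustration: try every subset of every hyperedge as
    the core, then greedily collect edges whose petals (edge minus core)
    are pairwise disjoint.
    """
    edges = [frozenset(e) for e in edges]
    cores = {frozenset(c)
             for e in edges
             for c in chain.from_iterable(combinations(e, i)
                                          for i in range(len(e) + 1))}
    for core in cores:
        petals_used, flower = set(), []
        for e in edges:
            petal = e - core
            if core <= e and not (petal & petals_used):
                flower.append(e)
                petals_used |= petal
        if len(flower) >= r:
            return core, flower[:r]
    return None

edges = [{1, 2, 3}, {1, 2, 4}, {1, 2, 5}, {3, 4, 5}]
core, flower = find_sunflower(edges, 3)
print(sorted(core), flower)          # core {1, 2}, three disjoint petals
```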

19.
The adoption of Artificial Neural Networks (ANNs) in safety-related applications is often avoided because it is difficult to rule out possible misbehaviors with traditional analytical or probabilistic techniques. In this paper we present NeVer, our tool for checking safety of ANNs. NeVer encodes the problem of verifying safety of ANNs into the problem of satisfying corresponding Boolean combinations of linear arithmetic constraints. We describe the main verification algorithm and the structure of NeVer, and we present empirical results confirming the effectiveness of NeVer on realistic case studies.

20.
Given a set of points V in the plane, the Euclidean bottleneck matching problem is to match each point with some other point such that the longest Euclidean distance between matched points, resulting from this matching, is minimized. To solve this problem, we define k-relative neighborhood graphs (kRNG), which are derived from Toussaint's relative neighborhood graphs (RNG). Two points are called k-relative neighbors if and only if there are fewer than k points of V which are closer to both of the two points than the two points are to each other. A kRNG is an undirected graph $(V, E_{r}^{k})$ where $E_{r}^{k}$ is the set of pairs of points of V which are k-relative neighbors. We prove that there exists an optimal solution of the Euclidean bottleneck matching problem which is a subset of $E_{r}^{17}$. We also prove that $|E_{r}^{k}| < 18kn$, where n is the number of points in the set V. Our algorithm first constructs a 17RNG, which takes $O(n^{2})$ time. We then use Gabow and Tarjan's bottleneck maximum cardinality matching algorithm for general graphs, whose time complexity is $O((n\log n)^{0.5} m)$, where m is the number of edges in the graph, to solve the bottleneck maximum cardinality matching problem in the 17RNG. This takes $O(n^{1.5}\log^{0.5} n)$ time. The total time complexity of our algorithm for the Euclidean bottleneck matching problem is $O(n^{2} + n^{1.5}\log^{0.5} n)$.
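Taking the entry's theorem that an optimal bottleneck matching lives inside $E_{r}^{17}$ at face value, a naive version of the pipeline can be sketched in Python: build the kRNG edges, then test candidate bottleneck values with a brute-force perfect-matching check instead of Gabow and Tarjan's algorithm. Names are hypothetical and the matching step is exponential, so this is for tiny inputs only.

```python
from itertools import combinations
from math import dist

def krng_edges(points, k):
    """Edges of the k-relative neighbourhood graph: p and q are joined
    iff fewer than k other points are closer to both p and q than
    d(p, q)."""
    edges = []
    for p, q in combinations(range(len(points)), 2):
        d_pq = dist(points[p], points[q])
        blockers = sum(1 for r in range(len(points))
                       if r not in (p, q)
                       and dist(points[r], points[p]) < d_pq
                       and dist(points[r], points[q]) < d_pq)
        if blockers < k:
            edges.append((p, q, d_pq))
    return edges

def bottleneck_matching(points, k=17):
    """Minimise the longest matched distance: try candidate bottleneck
    values among the kRNG edge lengths in increasing order, each time
    checking by brute force whether a perfect matching that avoids
    longer edges exists.  Exponential-time illustration only."""
    edges = krng_edges(points, k)

    def perfect(avail, verts):
        if not verts:
            return True
        v = min(verts)
        return any(perfect(avail, verts - {a, b})
                   for a, b in avail if v in (a, b) and {a, b} <= verts)

    n = len(points)
    for _, _, limit in sorted(edges, key=lambda e: e[2]):
        avail = [(a, b) for a, b, d in edges if d <= limit]
        if perfect(avail, set(range(n))):
            return limit
    return None

pts = [(0, 0), (0, 1), (10, 0), (10, 1)]
print(bottleneck_matching(pts))      # 1.0: match each point to its neighbour
```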
