Similar Documents
20 similar documents found (search time: 31 ms)
1.
The Planar Feedback Vertex Set problem asks whether an n-vertex planar graph contains at most k vertices meeting all its cycles. The Face Cover problem asks whether all vertices of a plane graph G lie on the boundary of at most k faces of G. Standard techniques from parameterized algorithm design indicate that both problems can be solved by sub-exponential parameterized algorithms (where k is the parameter). In this paper we improve the algorithmic analysis of both problems by proving a series of combinatorial results relating the branchwidth of planar graphs to their face cover. Combining these results with duality properties of branchwidth allows us to derive analogous results for feedback vertex set. As a consequence, it follows that Planar Feedback Vertex Set and Face Cover can be solved in \(O(2^{15.11\cdot\sqrt{k}}+n^{2})\) and \(O(2^{10.1\cdot\sqrt{k}}+n^{2})\) steps, respectively.

2.
We present a framework for the automated generation of exact search tree algorithms for NP-hard problems. The purpose of our approach is twofold: rapid development and improved upper bounds. Many search tree algorithms for various problems in the literature are based on complicated case distinctions. Our approach may lead to a much simpler process of developing and analyzing these algorithms. Moreover, using the sheer computing power of machines, it may also lead to improved upper bounds on search tree sizes (i.e., faster exact solving algorithms) in comparison with previously developed hand-made search trees. One such example is the NP-complete Cluster Editing problem (also known as Correlation Clustering on complete unweighted graphs), which asks for the minimum number of edge additions and deletions needed to create a graph that is a disjoint union of cliques. The hand-made search tree for Cluster Editing had worst-case size O(2.27^k), which our new method improves to O(1.92^k). (Herein, k denotes the number of edge modifications allowed.)

3.
Graph-Modeled Data Clustering: Exact Algorithms for Clique Generation
We present efficient fixed-parameter algorithms for the NP-complete edge modification problems Cluster Editing and Cluster Deletion. Here, the goal is to make the fewest changes to the edge set of an input graph such that the new graph is a vertex-disjoint union of cliques. Allowing up to k edge additions and deletions (Cluster Editing), we solve this problem in O(2.27^k + |V|^3) time; allowing only up to k edge deletions (Cluster Deletion), we solve this problem in O(1.77^k + |V|^3) time. The key ingredients of our algorithms are two easy-to-implement bounded search tree algorithms and an efficient polynomial-time reduction to a problem kernel of size O(k^3). This improves and complements previous work. Finally, we discuss further improvements on search tree sizes using computer-generated case distinctions.
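To make the bounded search tree idea concrete, here is a minimal Python sketch (our own code and naming, not the authors' implementation). It branches on a conflict triple, i.e. a vertex with two neighbours that are not adjacent to each other, which yields the naive O(3^k) search tree; the refined case distinctions behind the O(2.27^k) bound are omitted.

```python
from itertools import combinations

def conflict_triple(adj):
    """Return (u, v, w) with edges uv and uw present but vw absent, or None."""
    for u in adj:
        for v, w in combinations(adj[u], 2):
            if w not in adj[v]:
                return u, v, w
    return None

def cluster_editing(adj, k):
    """Naive O(3^k) search tree: can adj (dict vertex -> set of neighbours)
    be turned into a disjoint union of cliques with at most k edge edits?"""
    triple = conflict_triple(adj)
    if triple is None:                    # already a cluster graph
        return True
    if k == 0:
        return False
    u, v, w = triple
    # Branch: delete uv, delete uw, or insert vw.
    for op, (a, b) in (("del", (u, v)), ("del", (u, w)), ("add", (v, w))):
        new = {x: set(nb) for x, nb in adj.items()}
        if op == "del":
            new[a].discard(b); new[b].discard(a)
        else:
            new[a].add(b); new[b].add(a)
        if cluster_editing(new, k - 1):
            return True
    return False

# A path on three vertices needs exactly one modification.
g = {1: {2}, 2: {1, 3}, 3: {2}}
print(cluster_editing(g, 0), cluster_editing(g, 1))  # False True
```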

4.
We present two parameterized algorithms for the Minimum Fill-in problem, also known as Chordal Completion: given an arbitrary graph G and integer k, can we add at most k edges to G to obtain a chordal graph? Our first algorithm has running time \(\mathcal{O}(k^{2}nm+3.0793^{k})\) and requires polynomial space. This improves the base of the exponential term in the best previously known running time of a parameterized algorithm for this problem. We are able to improve this running time even further, at the cost of more space. Our second algorithm has running time \(\mathcal{O}(k^{2}nm+2.35965^{k})\) and requires \(\mathcal{O}^{\ast}(1.7549^{k})\) space. To achieve these results, we present a new lemma describing the edges that can safely be added to achieve a chordal completion with the minimum number of edges, regardless of k.

5.
A graph G is said to be a bicluster graph if G is a disjoint union of bicliques (complete bipartite subgraphs), and a cluster graph if G is a disjoint union of cliques (complete subgraphs). In this work, we study the parameterized versions of the NP-hard Bicluster Graph Editing and Cluster Graph Editing problems. The former consists of obtaining a bicluster graph by making the minimum number of modifications in the edge set of an input bipartite graph. When at most k modifications are allowed (the Bicluster(k) Graph Editing problem), this problem is FPT and can be solved in O(4^k nm) time by a standard search tree algorithm. We develop an algorithm of time complexity O(4^k + n + m), which uses a strategy based on modular decomposition techniques; we slightly generalize the original problem, since the input graph need not be bipartite. The algorithm first builds a problem kernel with O(k^2) vertices in O(n+m) time, and then applies a bounded search tree. We also show how this strategy based on modular decomposition leads to a new way of obtaining a problem kernel with O(k^2) vertices for the Cluster(k) Graph Editing problem, in O(n+m) time. This problem consists of obtaining a cluster graph by modifying at most k edges in an input graph. A previous FPT algorithm of time O(1.92^k + n^3) for this problem was presented by Gramm et al. (Theory Comput. Syst. 38(4), 373–392, 2005; Algorithmica 39(4), 321–347, 2004). In their solution, a problem kernel with O(k^2) vertices is built in O(n^3) time.

6.
Kernelization algorithms for the cluster editing problem have been a popular topic in recent research on parameterized computation. Most kernelization algorithms for the problem are based on the concept of critical cliques. In this paper, we present new observations and new techniques for the study of kernelization algorithms for the cluster editing problem. Our techniques are based on the study of the relationship between cluster editing and graph edge-cuts. As an application, we present a simple algorithm that constructs a 2k-vertex kernel for the integral-weighted version of the cluster editing problem. Our result matches the best kernel bound for the unweighted version of the cluster editing problem, and significantly improves the previous best kernel bound for the weighted version of the problem. For the more general real-weighted version of the problem, our techniques lead to a simple kernelization algorithm that constructs a kernel of at most 4k vertices.

7.
Vertex deletion and edge deletion problems play a central role in parameterized complexity. Examples include classical problems like Feedback Vertex Set, Odd Cycle Transversal, and Chordal Deletion. The study of analogous edge contraction problems has so far been left largely unexplored from a parameterized perspective. We consider two basic problems of this type: Tree Contraction and Path Contraction. These two problems take as input an undirected graph G on n vertices and an integer k, and the task is to determine whether we can obtain a tree or a path, respectively, by a sequence of at most k edge contractions in G. For Tree Contraction, we present a randomized 4^k · n^{O(1)} time polynomial-space algorithm, as well as a deterministic 4.98^k · n^{O(1)} time algorithm, based on a variant of the color coding technique of Alon, Yuster and Zwick. We also present a deterministic 2^{k+o(k)} + n^{O(1)} time algorithm for Path Contraction. Furthermore, we show that Path Contraction has a kernel with at most 5k+3 vertices, while Tree Contraction does not have a polynomial kernel unless NP ⊆ coNP/poly. We find the latter result surprising because of the connection between Tree Contraction and Feedback Vertex Set, which is known to have a kernel with 4k^2 vertices.

8.
In the framework of parameterized complexity, exploring how one parameter affects the complexity of a different parameterized (or unparameterized) problem is of general interest. A well-developed example is the investigation of how the parameter treewidth influences the complexity of (other) graph problems. The reason why such investigations are of general interest is that real-world input distributions for computational problems often inherit structure from the natural computational processes that produce the problem instances (not necessarily in obvious, or well-understood ways). The max leaf number ml(G) of a connected graph G is the maximum number of leaves in a spanning tree for G. Exploring questions analogous to the well-studied case of treewidth, we can ask: how hard is it to solve 3-Coloring, Hamilton Path, Minimum Dominating Set, Minimum Bandwidth or many other problems, for graphs of bounded max leaf number? What optimization problems are W[1]-hard under this parameterization? We do two things:
  (1) We describe much improved FPT algorithms for a large number of graph problems, for input graphs G for which ml(G) ≤ k, based on the polynomial-time extremal structure theory canonically associated to this parameter. We consider improved algorithms both from the point of view of kernelization bounds and in terms of improved fixed-parameter tractable (FPT) running times O^*(f(k)).
  (2) The way that we obtain these concrete algorithmic results is general and systematic. We describe the approach, and raise programmatic questions.

9.
An instance of the maximum constraint satisfaction problem (Max CSP) is a finite collection of constraints on a set of variables, and the goal is to assign values to the variables so as to maximise the number of satisfied constraints. Max CSP captures many well-known problems (such as Max k-SAT and Max Cut) and is consequently NP-hard. Thus, it is natural to study how restrictions on the allowed constraint types (or constraint language) affect the complexity and approximability of Max CSP. The PCP theorem is equivalent to the existence of a constraint language for which Max CSP has a hard gap at location 1; i.e. it is NP-hard to distinguish between satisfiable instances and instances where at most some constant fraction of the constraints are satisfiable. All constraint languages for which the CSP problem (i.e., the problem of deciding whether all constraints can be satisfied) is currently known to be NP-hard have a certain algebraic property. We prove that any constraint language with this algebraic property makes Max CSP have a hard gap at location 1, which, in particular, implies that such problems cannot have a PTAS unless P=NP. We then apply this result to Max CSP restricted to a single constraint type; this class of problems contains, for instance, Max Cut and Max DiCut. Assuming P ≠ NP, we show that such problems do not admit a PTAS except in some trivial cases. Our results hold even if the number of occurrences of each variable is bounded by a constant. Finally, we give some applications of our results.

10.
In this paper we provide improved approximation algorithms for the Min-Max Tree Cover and Bounded Tree Cover problems. Given a graph G=(V,E) with weights w:E→ℝ+, a set T_1, T_2, …, T_k of subtrees of G is called a tree cover of G if $V=\bigcup_{i=1}^{k} V(T_{i})$. In the Min-Max k-Tree Cover problem we are given a graph G and a positive integer k, and the goal is to find a tree cover with k trees such that the weight of the largest tree in the cover is minimized. We present a 3-approximation algorithm for this problem, improving on the two different 4-approximation algorithms presented in Arkin et al. (J. Algorithms 59:1–18, 2006) and Even et al. (Oper. Res. Lett. 32(4):309–315, 2004). The problem is known to have an APX-hardness lower bound of $\frac{3}{2}$ (Xu and Wen in Oper. Res. Lett. 38:169–173, 2010). In the Bounded Tree Cover problem we are given a graph G and a bound λ, and the goal is to find a tree cover with the minimum number of trees such that each tree has weight at most λ. We present a 2.5-approximation algorithm for this problem, improving on the 3-approximation bound in Arkin et al. (J. Algorithms 59:1–18, 2006).
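As an illustration of the problem statement only (not of the approximation algorithms themselves), here is a small Python sketch with our own, hypothetical function names: it checks the covering condition from the definition above and evaluates the Min-Max objective on a toy instance. It assumes each T_i is already a tree and only verifies coverage and weights.

```python
def is_tree_cover(vertices, trees):
    """Check the covering condition V = union of V(T_i).
    Each tree is given as a list of weighted edges (u, v, w);
    tree-ness of each subgraph is assumed, not verified."""
    covered = set()
    for tree in trees:
        for u, v, _ in tree:
            covered.update((u, v))
    return covered == set(vertices)

def minmax_objective(trees):
    """Weight of the heaviest tree: the quantity minimized in Min-Max k-Tree Cover."""
    return max(sum(w for _, _, w in tree) for tree in trees)

# Toy instance: a weighted path on 4 vertices split into two subtrees.
V = {1, 2, 3, 4}
cover = [[(1, 2, 5), (2, 3, 1)], [(3, 4, 2)]]
print(is_tree_cover(V, cover), minmax_objective(cover))  # True 6
```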

11.
We study the Cutwidth problem, where the input is a graph G, and the objective is to find a linear layout of the vertices that minimizes the maximum number of edges intersected by any vertical line inserted between two consecutive vertices. We give an algorithm for Cutwidth with running time O(2^k n^{O(1)}). Here k is the size of a minimum vertex cover of the input graph G, and n is the number of vertices in G. As a corollary, our algorithm gives an O(2^{n/2} n^{O(1)}) time algorithm for Cutwidth on bipartite graphs. This is the first non-trivial exact exponential time algorithm for Cutwidth on a graph class where the problem remains NP-complete. Additionally, we show that Cutwidth parameterized by the size of the minimum vertex cover of the input graph does not admit a polynomial kernel unless NP ⊆ coNP/poly. Our kernelization lower bound contrasts with the recent results of Bodlaender et al. (ICALP, Springer, Berlin, 2011; SWAT, Springer, Berlin, 2012) that both Treewidth and Pathwidth parameterized by vertex cover do admit polynomial kernels.
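To fix the layout objective, here is a brute-force Python sketch (our own code, exponential in n, unlike the paper's 2^k · poly(n) algorithm parameterized by vertex cover size):

```python
from itertools import permutations

def cutwidth(n, edges):
    """Brute-force cutwidth: the minimum over all linear layouts of the
    maximum number of edges crossing a gap between consecutive positions."""
    best = float("inf")
    for order in permutations(range(n)):
        pos = {v: i for i, v in enumerate(order)}
        width = 0
        for gap in range(n - 1):          # cut between positions gap and gap+1
            crossing = sum(1 for u, v in edges
                           if min(pos[u], pos[v]) <= gap < max(pos[u], pos[v]))
            width = max(width, crossing)
        best = min(best, width)
    return best

# A 4-cycle has cutwidth 2.
print(cutwidth(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 2
```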

12.
The Contractibility problem takes as input two graphs G and H, and the task is to decide whether H can be obtained from G by a sequence of edge contractions. The Induced Minor and Induced Topological Minor problems are similar, but the former allows both edge contractions and vertex deletions, whereas the latter allows only vertex deletions and vertex dissolutions. All three problems are NP-complete, even for certain fixed graphs H. We show that these problems can be solved in polynomial time for every fixed H when the input graph G is chordal. Our results can be considered tight, since these problems are known to be W[1]-hard on chordal graphs when parameterized by the size of H. To solve Contractibility and Induced Minor, we define and use a generalization of the classic Disjoint Paths problem, where we require the vertices of each of the k paths to be chosen from a specified set. We prove that this variant is NP-complete even when k=2, but that it is polynomial-time solvable on chordal graphs for every fixed k. Our algorithm for Induced Topological Minor is based on another generalization of Disjoint Paths called Induced Disjoint Paths, where the vertices from different paths may no longer be adjacent. We show that this problem, which is known to be NP-complete when k=2, can be solved in polynomial time on chordal graphs even when k is part of the input. Our results fit into the general framework of graph containment problems, where the aim is to decide whether a graph can be modified into another graph by a sequence of specified graph operations. Allowing combinations of the four well-known operations edge deletion, edge contraction, vertex deletion, and vertex dissolution results in the following ten containment relations: (induced) minor, (induced) topological minor, (induced) subgraph, (induced) spanning subgraph, dissolution, and contraction. Our results, combined with existing results, settle the complexity of each of the ten corresponding containment problems on chordal graphs.

13.
Approximation Algorithms for the Directed k-Tour and k-Stroll Problems
We consider two natural generalizations of the Asymmetric Traveling Salesman problem: the k-Stroll and the k-Tour problems. The input to the k-Stroll problem is a directed n-vertex graph with nonnegative edge lengths, an integer k, as well as two special vertices s and t. The goal is to find a minimum-length s-t walk containing at least k distinct vertices (including the endpoints s,t). The k-Tour problem can be viewed as a special case of k-Stroll, where s=t. That is, the walk is required to be a tour containing some pre-specified vertex s. When k=n, the k-Stroll problem becomes equivalent to Asymmetric Traveling Salesman Path, and k-Tour to Asymmetric Traveling Salesman. Our main result is a polylogarithmic approximation algorithm for the k-Stroll problem. Prior to our work, only bicriteria (O(log^2 k), 3)-approximation algorithms had been known, producing walks whose length is bounded by 3·OPT, while the number of vertices visited is Ω(k/log^2 k). We also show a simple O(log^2 n/log log n)-approximation algorithm for the k-Tour problem. The best previously known approximation algorithms achieved min(O(log^3 k), O(log^2 n · log k/log log n)) approximation in polynomial time, and O(log^2 k) approximation in quasipolynomial time.

14.
International Journal of Computer Mathematics, 2012, 89(14): 3157–3174
Based on the methodology we presented in earlier work on parameterized algorithms for 3-Hitting Set, we develop simple search tree-based algorithms for d-Hitting Set. We considerably improve on the bounds previously derived for these problems.
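A minimal Python sketch of the generic search tree for d-Hitting Set (our own naming; it realizes only the trivial O(d^k)-size branching, not the refined bounds developed in the paper):

```python
def hitting_set(sets, k):
    """Naive search tree for d-Hitting Set: decide whether at most k elements
    can be picked so that every set in `sets` contains a picked element.
    Branching on the <= d elements of the first unhit set gives O(d^k) leaves."""
    if not sets:
        return True                    # every set is hit
    if k == 0:
        return False
    first = sets[0]
    for e in first:
        remaining = [s for s in sets if e not in s]   # sets hit by e disappear
        if hitting_set(remaining, k - 1):
            return True
    return False

# Example with d = 3: two disjoint triples require two elements.
instance = [{1, 2, 3}, {4, 5, 6}]
print(hitting_set(instance, 1), hitting_set(instance, 2))  # False True
```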

15.
In the k-Feedback Arc/Vertex Set problem we are given a directed graph D and a positive integer k, and the objective is to check whether it is possible to delete at most k arcs/vertices from D to make it acyclic. Dom et al. (J. Discrete Algorithms 8(1):76–86, 2010) initiated a study of the Feedback Arc Set problem on bipartite tournaments (k-FASBT) in the realm of parameterized complexity. They showed that k-FASBT can be solved in time O(3.373^k n^6) on bipartite tournaments having n vertices. However, until now there was no known polynomial-sized problem kernel for k-FASBT. In this paper we obtain a cubic vertex kernel for k-FASBT. This completes the kernelization picture for the Feedback Arc/Vertex Set problem on tournaments and bipartite tournaments, as polynomial kernels were previously known for all the other problems. We obtain our kernel using a non-trivial application of “independent modules”, which could be of independent interest.

16.
The k-truss of a graph is the largest edge-induced subgraph such that every edge is contained in at least k triangles within the subgraph, where a triangle is a cycle consisting of three vertices. As a new notion of cohesive subgraph, the truss has recently attracted a lot of research attention in the database and data mining fields. At the same time, uncertainty is an intrinsic property of massive graph data, and truss decomposition (i.e., finding all k-trusses of a graph) has become a key primitive on uncertain graphs. In this paper, we study the truss decomposition problem on uncertain graphs, that is, finding all highly probable k-trusses of an uncertain graph. We first give a formal statement of the truss decomposition problem on uncertain graphs. Then, we prove that the truss decomposition of an uncertain graph attains two elegant properties, namely uniqueness and hierarchy. We show that the truss decomposition of an uncertain graph can be found in \(O(m^{1.5}Q)\) time by proposing an in-memory algorithm called \(\mathtt {TD_{mem}}\), where m is the number of edges of the uncertain graph, and Q is at most the maximum number of common neighbors of the endpoints of an edge. When an uncertain graph is too large to fit into main memory, we propose an external-memory algorithm \(\mathtt {TD_{I/O}}\) to find the truss decomposition of the uncertain graph. Extensive experiments have been carried out to evaluate the practical performance of the proposed algorithms. The experimental results verify that both \(\mathtt {TD_{mem}}\) and \(\mathtt {TD_{I/O}}\) are efficient when an uncertain graph is small enough to fit into main memory, and that \(\mathtt {TD_{I/O}}\) is much faster than \(\mathtt {TD_{mem}}\) when the graph is too large to fit into main memory.
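For intuition, the following Python sketch (our own code) peels a deterministic, non-uncertain graph down to its k-truss, using the convention from the abstract that every surviving edge must lie in at least k triangles of the subgraph. The paper's algorithms \(\mathtt {TD_{mem}}\) and \(\mathtt {TD_{I/O}}\) handle the probabilistic case and are not reproduced here.

```python
def k_truss(adj, k):
    """Repeatedly remove edges whose triangle support inside the current
    subgraph is below k; adj maps each vertex to a set of neighbours."""
    adj = {v: set(nb) for v, nb in adj.items()}     # work on a copy
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            for v in list(adj[u]):
                if u < v:                            # inspect each edge once
                    support = len(adj[u] & adj[v])   # common neighbours = triangles on uv
                    if support < k:
                        adj[u].discard(v); adj[v].discard(u)
                        changed = True
    return {(u, v) for u in adj for v in adj[u] if u < v}

# In a 4-clique every edge lies in exactly 2 triangles, so it survives k = 2 but not k = 3.
K4 = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
print(len(k_truss(K4, 2)), len(k_truss(K4, 3)))  # 6 0
```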

17.
The Pathwidth One Vertex Deletion (POVD) problem asks whether, given an undirected graph G and an integer k, one can delete at most k vertices from G so that the remaining graph has pathwidth at most 1. The question can be considered as a natural variation of the extensively studied Feedback Vertex Set (FVS) problem, where the deletion of at most k vertices has to result in the remaining graph having treewidth at most 1 (i.e., being a forest). Recently Philip et al. (WG, Lecture Notes in Computer Science, vol. 6410, pp. 196–207, 2010) initiated the study of the parameterized complexity of POVD, showing a quartic kernel and an algorithm which runs in time 7^k n^{O(1)}. In this article we improve these results by showing a quadratic kernel and an algorithm with time complexity 4.65^k n^{O(1)}, thus obtaining almost tight kernelization bounds when compared to the general result of Dell and van Melkebeek (STOC, pp. 251–260, ACM, New York, 2010). Techniques used in the kernelization are based on the quadratic kernel for FVS, due to Thomassé (ACM Trans. Algorithms 6(2), 2010).

18.
This paper considers QLtl, a quantitative analogue of Ltl, and presents algorithms for model checking QLtl over quantitative versions of Kripke structures and Markov chains.

19.
We study the problem of packing element-disjoint Steiner trees in graphs. We are given a graph and a designated subset of terminal nodes, and the goal is to find a maximum cardinality set of element-disjoint trees such that each tree contains every terminal node. An element means a non-terminal node or an edge. (Thus, each non-terminal node and each edge must be in at most one of the trees.) We show that the problem is APX-hard when there are only three terminal nodes, thus answering an open question. Our main focus is on the special case when the graph is planar. We show that the problem of finding two element-disjoint Steiner trees in a planar graph is NP-hard. Similarly, the problem of finding two edge-disjoint Steiner trees in a planar graph is NP-hard. We design an algorithm for planar graphs that achieves an approximation guarantee close to 2. In fact, given a planar graph that is k element-connected on the terminals (k is an upper bound on the number of element-disjoint Steiner trees), the algorithm returns $\lfloor\frac{k}{2} \rfloor-1$ element-disjoint Steiner trees. Using this algorithm, we get an approximation algorithm for the edge-disjoint version of the problem on planar graphs that improves on the previous approximation guarantees. We also show that the natural LP relaxation of the planar problem has an integrality ratio approaching 2.

20.
We consider the $\mathcal{NP}$-hard problem of finding a spanning tree with a maximum number of internal vertices. This problem is a generalization of the famous Hamiltonian Path problem. Our dynamic-programming algorithms for general and degree-bounded graphs have running times of the form $\mathcal{O}^{*}(c^{n})$ with c≤2. For graphs with bounded degree, c<2. The main result, however, is a branching algorithm for graphs with maximum degree three. It only needs polynomial space and has a running time of $\mathcal{O}(1.8612^{n})$ when analyzed with respect to the number of vertices. We also show that its running time is $2.1364^{k} n^{\mathcal{O}(1)}$ when the goal is to find a spanning tree with at least k internal vertices. Both running time bounds are obtained via a Measure & Conquer analysis, the latter one being a novel use of this kind of analysis for parameterized algorithms.

