Similar Articles (20 results)
1.
Thorup and Zwick (J. ACM 52(1):1–24, 2005 and STOC'01) in their seminal work introduced the notion of distance oracles. Given an n-vertex weighted undirected graph with m edges, they show that for any integer k≥1 it is possible to preprocess the graph in $\tilde{O}(mn^{1/k})$ time and generate a compact data structure of size O(kn^{1+1/k}). For each pair of vertices, it is then possible to retrieve an estimated distance with multiplicative stretch 2k−1 in O(k) time. For k=2 this gives an oracle of O(n^{1.5}) size that produces in constant time estimated distances with stretch 3. Recently, Pătrașcu and Roditty (In: Proc. of 51st FOCS, 2010) broke the theoretical status quo in the field of distance oracles and obtained a distance oracle for sparse unweighted graphs of O(n^{5/3}) size that produces in constant time estimated distances with stretch 2. In this paper we show that it is possible to break the stretch 2 barrier at the price of non-constant query time in unweighted undirected graphs. We present a data structure that produces estimated distances with 1+ε stretch. The size of the data structure is O(nm^{1−ε}) and the query time is $\tilde{O}(m^{1-\varepsilon'})$. Using it for sparse unweighted graphs we can get a data structure of size O(n^{1.87}) that can supply in O(n^{0.87}) time estimated distances with multiplicative stretch 1.75.
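As a point of reference for what a distance oracle looks like, the classic k=2 construction of Thorup and Zwick can be sketched for unweighted graphs: sample about √n landmark vertices, store each vertex's exact distances inside its ball (the vertices closer than its nearest landmark) plus exact BFS distances from every landmark, and route queries through the nearest landmark whenever the target lies outside the ball. The Python sketch below is our own illustration of that stretch-3 oracle, with deliberately naive O(nm) preprocessing and a connected-graph assumption; it is not the 1+ε-stretch data structure of the paper, and the names build_oracle/query are ours.

```python
import random
from collections import deque

def bfs(adj, sources):
    """Multi-source BFS on an unweighted graph; adj maps a vertex to its neighbor list."""
    dist = {s: 0 for s in sources}
    q = deque(sources)
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def build_oracle(adj):
    # Assumes a connected input graph.
    n = len(adj)
    # Sample landmarks with probability n^{-1/2}; expected |A| is about sqrt(n).
    A = [v for v in adj if random.random() < n ** -0.5] or [next(iter(adj))]
    from_A = {a: bfs(adj, [a]) for a in A}      # exact distances from every landmark
    d_A = bfs(adj, A)                           # distance of each vertex to its nearest landmark
    p, ball = {}, {}
    for v in adj:                               # naive preprocessing: one BFS per vertex
        full = bfs(adj, [v])
        p[v] = min(A, key=lambda a: full[a])    # nearest landmark of v
        # B(v) = {w : d(v, w) < d(v, A)}; expected size O(sqrt(n)).
        ball[v] = {w: d for w, d in full.items() if d < d_A[v]}
    return p, from_A, ball

def query(oracle, u, v):
    p, from_A, ball = oracle
    if v in ball[u]:
        return ball[u][v]                       # exact distance
    # Otherwise d(u, A) <= d(u, v), so the detour through p(u) has stretch at most 3.
    return from_A[p[u]][u] + from_A[p[u]][v]
```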

2.
We study the Cutwidth problem, where the input is a graph G, and the objective is to find a linear layout of the vertices that minimizes the maximum number of edges intersected by any vertical line inserted between two consecutive vertices. We give an algorithm for Cutwidth with running time O(2^k n^{O(1)}). Here k is the size of a minimum vertex cover of the input graph G, and n is the number of vertices in G. Our algorithm gives an O(2^{n/2} n^{O(1)}) time algorithm for Cutwidth on bipartite graphs as a corollary. This is the first non-trivial exact exponential time algorithm for Cutwidth on a graph class where the problem remains NP-complete. Additionally, we show that Cutwidth parameterized by the size of the minimum vertex cover of the input graph does not admit a polynomial kernel unless NP ⊆ coNP/poly. Our kernelization lower bound contrasts with the recent results of Bodlaender et al. (ICALP, Springer, Berlin, 2011; SWAT, Springer, Berlin, 2012) that both Treewidth and Pathwidth parameterized by vertex cover do admit polynomial kernels.
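To pin down the objective, the brute-force sketch below (our own illustrative code, not the paper's O(2^k n^{O(1)}) algorithm) enumerates every linear layout of a tiny graph and reports the smallest possible maximum number of edges crossing a gap between consecutive vertices.

```python
from itertools import permutations

def cut_value(order, edges):
    """Maximum number of edges crossing any gap between consecutive positions of the layout."""
    pos = {v: i for i, v in enumerate(order)}
    worst = 0
    for gap in range(len(order) - 1):
        crossing = sum(1 for u, v in edges
                       if min(pos[u], pos[v]) <= gap < max(pos[u], pos[v]))
        worst = max(worst, crossing)
    return worst

def cutwidth(vertices, edges):
    """Exact cutwidth by trying all n! layouts -- only feasible for very small graphs."""
    return min(cut_value(order, edges) for order in permutations(vertices))

# Example: the 4-cycle has cutwidth 2.
print(cutwidth([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]))
```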

3.
Vertex deletion and edge deletion problems play a central role in parameterized complexity. Examples include classical problems like Feedback Vertex Set, Odd Cycle Transversal, and Chordal Deletion. The study of analogous edge contraction problems has so far been left largely unexplored from a parameterized perspective. We consider two basic problems of this type: Tree Contraction and Path Contraction. These two problems take as input an undirected graph G on n vertices and an integer k, and the task is to determine whether we can obtain a tree or a path, respectively, by a sequence of at most k edge contractions in G. For Tree Contraction, we present a randomized 4^k · n^{O(1)} time polynomial-space algorithm, as well as a deterministic 4.98^k · n^{O(1)} time algorithm, based on a variant of the color coding technique of Alon, Yuster and Zwick. We also present a deterministic 2^{k+o(k)} + n^{O(1)} time algorithm for Path Contraction. Furthermore, we show that Path Contraction has a kernel with at most 5k+3 vertices, while Tree Contraction does not have a polynomial kernel unless NP ⊆ coNP/poly. We find the latter result surprising because of the connection between Tree Contraction and Feedback Vertex Set, which is known to have a kernel with 4k^2 vertices.
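The problem statement can be verified directly on small instances: contracting a set S of edges has the same effect as contracting the connected components of (V, S), so a brute-force search over all edge sets of size at most k suffices. The sketch below is only this naive check (exponential in the number of edges), not the 4^k-time algorithms of the paper; the helper names are ours.

```python
from itertools import combinations

def quotient_is_tree(vertices, edges, contracted):
    """Contract the edges in `contracted` and test whether the resulting simple graph is a tree."""
    parent = {v: v for v in vertices}
    def find(v):                                  # union-find over the contracted edges
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in contracted:
        parent[find(u)] = find(v)
    q_vertices = {find(v) for v in vertices}
    q_edges = {frozenset((find(u), find(v))) for u, v in edges if find(u) != find(v)}
    if len(q_edges) != len(q_vertices) - 1:       # a tree has exactly |V| - 1 edges...
        return False
    adj = {v: set() for v in q_vertices}
    for e in q_edges:
        a, b = tuple(e)
        adj[a].add(b); adj[b].add(a)
    seen, stack = set(), [next(iter(q_vertices))]
    while stack:                                  # ...and must be connected
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] - seen)
    return len(seen) == len(q_vertices)

def tree_contraction(vertices, edges, k):
    """Is G contractible to a tree with at most k edge contractions? (brute force)"""
    return any(quotient_is_tree(vertices, edges, S)
               for r in range(k + 1) for S in combinations(edges, r))
```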

4.
Consider the NP-hard problem of finding, given a simple graph G, a series-parallel subgraph of G with the maximum number of edges. The algorithm that, given a connected graph G, outputs a spanning tree of G, is a $\frac{1}{2}$-approximation. Indeed, if n is the number of vertices in G, any spanning tree in G has n−1 edges and any series-parallel graph on n vertices has at most 2n−3 edges. We present a $\frac{7}{12}$-approximation for this problem and results showing the limits of our approach.
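The 1/2-approximation baseline mentioned above is nothing more than a spanning tree computation; a minimal union-find sketch (our own code, assuming a connected input graph) is shown below.

```python
def spanning_tree(vertices, edges):
    """Kruskal-style scan: keep every edge that joins two different components."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    tree = []
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v))
    return tree   # n - 1 edges for a connected graph, hence at least half of the optimum
```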

5.
We consider the cover time E_u[G], the expected time it takes a random walk that starts at vertex u to visit all n vertices of a connected graph G. Aleliunas et al. introduced the spanning tree argument: for any spanning tree T of the graph G, E_u[G] ≤ W[T], where W[T] is the sum of commute times along the edges of T. By refining the spanning tree argument we obtain: $$E_u[G] \leqslant \frac{1}{2}\Bigl(\min_T W[T] + \max_{v \in G} \bigl(H[u,v] - H[v,u]\bigr)\Bigr)$$ where H[u,v] is the hitting time from u to v. We use this bound to show:
  1. max_G min_u E_u[G] = (1+o(1)) · 2n^3/27. This answers an open question of Aldous.
  2. The n-path is the n-vertex tree on which the cover time is maximized. This confirms a conjecture of Brightwell and Winkler.
  3. For regular graphs, E_u[G] < 2n^2. This improves the leading constant in previously known upper bounds.
We also provide upper bounds on E_u^+[G], the expected time to cover G and return to u.
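For intuition about the quantity being bounded, E_u[G] can be estimated empirically by simulating random walks until every vertex has been visited; the sketch below is a hypothetical helper of ours (not part of the paper) that is convenient for sanity-checking the bounds on small graphs.

```python
import random

def estimate_cover_time(adj, u, trials=1000):
    """Monte Carlo estimate of E_u[G]; adj maps each vertex to its list of neighbors."""
    total = 0
    for _ in range(trials):
        visited, current, steps = {u}, u, 0
        while len(visited) < len(adj):
            current = random.choice(adj[current])   # one step of the simple random walk
            visited.add(current)
            steps += 1
        total += steps
    return total / trials

# Example: on a path, the cover time from an endpoint grows quadratically in n.
path = {i: [j for j in (i - 1, i + 1) if 0 <= j < 8] for i in range(8)}
print(estimate_cover_time(path, 0))
```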

6.
Yuichi Yoshida, Hiro Ito. Algorithmica, 2012, 62(3-4): 701-712.
We present an algorithm for testing the k-vertex-connectivity of graphs with a given maximum degree bound. The time complexity of the algorithm is independent of the number of vertices and edges of the graph. For a fixed degree bound d, a graph G with n vertices and maximum degree at most d is called ε-far from k-vertex-connectivity when at least $\frac{\epsilon dn}{2}$ edges must be added to or removed from G to obtain a k-vertex-connected graph with maximum degree at most d. The algorithm always accepts every graph that is k-vertex-connected and rejects every graph that is ε-far from k-vertex-connectivity with probability at least 2/3. The algorithm runs in $O(d(\frac{c}{\epsilon d})^{k}\log\frac{1}{\epsilon d})$ time (c>1 is a constant) for (k−1)-vertex-connected graphs, and in $O(d(\frac{ck}{\epsilon d})^{k}\log\frac{k}{\epsilon d})$ time (c>1 is a constant) for general graphs. It is the first constant-time k-vertex-connectivity testing algorithm for general k≥4.

7.
Minimum witnesses for Boolean matrix multiplication play an important role in several graph algorithms. For two Boolean matrices A and B of order n, with one of the matrices having at most m nonzero entries, the fastest known algorithms for computing the minimum witnesses of their product run in either O(n^{2.575}) time or in O(n^2 + mn·log(n^2/m)/log^2 n) time. We present a new algorithm for this problem. Our algorithm runs either in time $\tilde{O}(n^{\frac{3}{4-\omega}}m^{1-\frac{1}{4-\omega}})$, where ω<2.376 is the matrix multiplication exponent, or, if fast rectangular matrix multiplication is used, in time $O(n^{1.939}m^{0.318})$. In particular, if ω−1<α<2, where m=n^α, the new algorithm is faster than both of the aforementioned algorithms.
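The definition itself is easy to state in code: the naive O(n^3) baseline below (our own sketch, not the new algorithm) scans candidate witnesses in increasing order and keeps the first one found for each entry of the product.

```python
def minimum_witnesses(A, B):
    """For each (i, j), the smallest k with A[i][k] = B[k][j] = 1, or None if the entry is 0."""
    n = len(A)
    W = [[None] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if A[i][k] and B[k][j]:
                    W[i][j] = k      # k is scanned in increasing order, so this is minimal
                    break
    return W
```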

8.
Traditionally, the quality of orthogonal planar drawings is quantified by either the total number of bends or the maximum number of bends per edge. However, this neglects that in typical applications, edges have varying importance. In this work, we investigate an approach that allows specifying the maximum number of bends for each edge individually, depending on its importance. We consider a new problem called FlexDraw that is defined as follows. Given a planar graph G=(V,E) on n vertices with maximum degree 4 and a function $\operatorname{flex}: E \longrightarrow\mathbb{N}_{0}$ that assigns a flexibility to each edge, does G admit a planar embedding on the grid such that each edge e has at most $\operatorname{flex}(e)$ bends? Note that in our setting the combinatorial embedding of G is not fixed. FlexDraw directly extends the problem β-embeddability, which asks whether G can be embedded with at most β bends per edge. We give an algorithm with running time O(n^2) solving FlexDraw when the flexibility of each edge is positive. This includes 1-embeddability as a special case and thus closes the complexity gap between 0-embeddability, which is $\mathcal{NP}$-hard to decide, and 2-embeddability, which is efficiently solvable since every planar graph with maximum degree 4 admits a 2-embedding except for the octahedron. In addition to the polynomial-time algorithm we show that FlexDraw is $\mathcal{NP}$-hard even if the edges with flexibility 0 induce a tree or a union of disjoint stars.

9.
Let α(G) denote the independence number of a graph G, that is, the maximum number of pairwise independent vertices in G. We present a parallel algorithm that computes in a planar graph G = (V, E) an independent set \(I \subseteq V\) such that |I| ≥ α(G)/2. The algorithm runs in time O(log^2 n) and requires a linear number of processors. This is achieved by defining a new set of reductions that can be executed “locally” and simultaneously; furthermore, it is shown that a constant fraction of the vertices in the graph are reducible. This is the best known approximation scheme when the number of processors available is linear; parallel implementation of known sequential algorithms requires many more processors.

10.
The Pathwidth One Vertex Deletion (POVD) problem asks whether, given an undirected graph G and an integer k, one can delete at most k vertices from G so that the remaining graph has pathwidth at most 1. The question can be considered as a natural variation of the extensively studied Feedback Vertex Set (FVS) problem, where the deletion of at most k vertices has to result in the remaining graph having treewidth at most 1 (i.e., being a forest). Recently Philip et al. (WG, Lecture Notes in Computer Science, vol. 6410, pp. 196–207, 2010) initiated the study of the parameterized complexity of POVD, showing a quartic kernel and an algorithm which runs in time 7^k n^{O(1)}. In this article we improve these results by showing a quadratic kernel and an algorithm with time complexity 4.65^k n^{O(1)}, thus obtaining almost tight kernelization bounds when compared to the general result of Dell and van Melkebeek (STOC, pp. 251–260, ACM, New York, 2010). Techniques used in the kernelization are based on the quadratic kernel for FVS, due to Thomassé (ACM Trans. Algorithms 6(2), 2010).

11.
Zeev Nutov. Algorithmica, 2012, 63(1-2): 398-410.
We consider the (undirected) Node Connectivity Augmentation (NCA) problem: given a graph J=(V,E_J) and connectivity requirements $\{r(u,v): u,v \in V\}$, find a minimum size set I of new edges (any edge is allowed) such that the graph J ∪ I contains r(u,v) internally-disjoint uv-paths, for all u,v ∈ V. In Rooted NCA there is s ∈ V such that r(u,v)>0 implies u=s or v=s. For large values of k = max_{u,v∈V} r(u,v), NCA is at least as hard to approximate as Label-Cover and thus it is unlikely to admit an approximation ratio polylogarithmic in k. Rooted NCA is at least as hard to approximate as Hitting-Set. The previously best approximation ratios for the problem were O(k ln n) for NCA and O(ln n) for Rooted NCA. In this paper we give an approximation algorithm with ratios O(k ln^2 k) for NCA and O(ln^2 k) for Rooted NCA. This is the first approximation algorithm with ratio independent of n, and thus is a constant for any fixed k. Our algorithm is based on the following new structural result which is of independent interest. If $\mathcal{D}$ is a set of node pairs in a graph J, then the maximum degree in the hypergraph formed by the inclusion-minimal tight sets separating at least one pair in $\mathcal{D}$ is O(ℓ^2), where ℓ is the maximum connectivity in J of a pair in $\mathcal{D}$.

12.
Let G be a graph with n vertices and m edges. A sparsifier of G is a sparse graph on the same vertex set approximating G in some natural way. It allows us to say useful things about G while considering much fewer than m edges. The strongest commonly-used notion of sparsification is spectral sparsification; H is a spectral sparsifier of G if the quadratic forms induced by the Laplacians of G and H approximate one another well. This notion is strictly stronger than the earlier concept of combinatorial sparsification. In this paper, we consider a semi-streaming setting, where we have only $\tilde{O}(n)$ storage space, and we thus cannot keep all of G. In this case, maintaining a sparsifier instead gives us a useful approximation to G, allowing us to answer certain questions about the original graph without storing all of it. We introduce an algorithm for constructing a spectral sparsifier of G with O(n log n/ε^2) edges (where ε is a parameter measuring the quality of the sparsifier), taking $\tilde{O}(m)$ time and requiring only one pass over G. In addition, our algorithm has the property that it maintains at all times a valid sparsifier for the subgraph of G that we have received. Our algorithm is natural and conceptually simple. As we read edges of G, we add them to the sparsifier H. Whenever H gets too big, we resparsify it in $\tilde{O}(n)$ time. Adding edges to a graph changes the structure of its sparsifier’s restriction to the already existing edges. It would thus seem that the above procedure would cause errors to compound each time that we resparsify, and that we should need to either retain significantly more information or reexamine previously discarded edges in order to construct the new sparsifier. However, we show how to use the information contained in H to perform this resparsification using only the edges retained by earlier steps in nearly linear time.

13.
We present the design and analysis of a nearly-linear work parallel algorithm for solving symmetric diagonally dominant (SDD) linear systems. On input an SDD n-by-n matrix A with m nonzero entries and a vector b, our algorithm computes a vector \(\tilde{x}\) such that \(\|\tilde{x} - A^{+}b\|_{A} \leq\varepsilon\cdot\|{A^{+}b}\|_{A}\) in \(O(m\log^{O(1)}{n}\log {\frac{1}{\varepsilon}})\) work and \(O(m^{1/3+\theta}\log\frac{1}{\varepsilon})\) depth for any θ>0, where A^+ denotes the Moore-Penrose pseudoinverse of A. The algorithm relies on a parallel algorithm for generating low-stretch spanning trees or spanning subgraphs. To this end, we first develop a parallel decomposition algorithm that, in O(m log^{O(1)} n) work and polylogarithmic depth, partitions a graph with n nodes and m edges into components with polylogarithmic diameter such that only a small fraction of the original edges are between the components. This can be used to generate low-stretch spanning trees with average stretch O(n^α) in O(m log^{O(1)} n) work and O(n^α) depth for any α>0. Alternatively, it can be used to generate spanning subgraphs with polylogarithmic average stretch in O(m log^{O(1)} n) work and polylogarithmic depth. We apply this subgraph construction to derive a parallel linear solver. By using this solver in known applications, our results imply improved parallel randomized algorithms for several problems, including single-source shortest paths, maximum flow, minimum-cost flow, and approximate maximum flow.

14.
An important result in the study of polynomial-time preprocessing shows that there is an algorithm which, given an instance (G,k) of Vertex Cover, outputs an equivalent instance (G′,k′) in polynomial time with the guarantee that G′ has at most 2k′ vertices (and thus $\mathcal{O}((k')^{2})$ edges) with k′≤k. Using the terminology of parameterized complexity we say that k-Vertex Cover has a kernel with 2k vertices. There is complexity-theoretic evidence that both 2k vertices and Θ(k^2) edges are optimal for the kernel size. In this paper we consider the Vertex Cover problem with a different parameter, the size fvs(G) of a minimum feedback vertex set for G. This refined parameter is structurally smaller than the parameter k associated to the vertex covering number vc(G), since fvs(G) ≤ vc(G) and the difference can be arbitrarily large. We give a kernel for Vertex Cover with a number of vertices that is cubic in fvs(G): an instance (G,X,k) of Vertex Cover, where X is a feedback vertex set for G, can be transformed in polynomial time into an equivalent instance (G′,X′,k′) such that |V(G′)|≤2k and $|V(G')| \in\mathcal{O}(|X'|^{3})$. A similar result holds when the feedback vertex set X is not given along with the input. In sharp contrast we show that the Weighted Vertex Cover problem does not have a polynomial kernel when parameterized by the cardinality of a given vertex cover of the graph unless NP ⊆ coNP/poly and the polynomial hierarchy collapses to the third level.
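For readers new to kernelization, the sketch below shows the much simpler Buss reduction rules for k-Vertex Cover, which already give an O(k^2)-edge kernel: any vertex of degree greater than k must be in the cover, and a reduced instance with more than k^2 edges is a no-instance. This is not the 2k-vertex kernel nor the fvs-parameterized kernel of the paper; the function name buss_kernel is ours.

```python
def buss_kernel(edges, k):
    """Return (kernel_edges, k') or None if (G, k) is certainly a no-instance."""
    edges = {frozenset(e) for e in edges}        # isolated vertices are irrelevant and dropped
    changed = True
    while changed and k >= 0:
        changed = False
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        for v, d in degree.items():
            if d > k:                            # v must belong to every vertex cover of size <= k
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
    if k < 0 or len(edges) > k * k:              # max degree <= k, so k vertices cover <= k^2 edges
        return None
    return edges, k
```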

15.
We give an approximation algorithm for fractional packing and covering linear programs (linear programs with non-negative coefficients). Given a constraint matrix with n non-zeros, r rows, and c columns, the algorithm (with high probability) computes feasible primal and dual solutions whose costs are within a factor of 1+ε of opt (the optimal cost) in time O((r+c)log(n)/ε^2 + n).

16.
Given a graph G=(V,E), a vertex v of G is a median vertex if it minimizes the sum of the distances to all other vertices of G. The median problem consists of finding the set of all median vertices of G. In this note, we present self-stabilizing algorithms for the median problem in partial rectangular grids and relatives. Our algorithms are based on the fact that partial rectangular grids can be isometrically embedded into the Cartesian product of two trees, to which we apply the algorithm proposed by Antonoiu and Srimani (J. Comput. Syst. Sci. 58:215–221, 1999) and Bruell et al. (SIAM J. Comput. 29:600–614, 1999) for computing the medians in trees. Then we extend our approach from partial rectangular grids to a more general class of plane quadrangulations. We also show that the characterization of medians of trees given by Gerstel and Zaks (Networks 24:23–29, 1994) extends to cube-free median graphs, a class of graphs which includes these quadrangulations.
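Setting self-stabilization aside, the median set itself is straightforward to compute centrally with one BFS per vertex; the sketch below (our own helper, assuming a connected unweighted graph) simply spells out the definition.

```python
from collections import deque

def median_vertices(adj):
    """Vertices minimizing the sum of distances to all other vertices (connected, unweighted)."""
    def total_distance(s):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        return sum(dist.values())
    totals = {v: total_distance(v) for v in adj}
    best = min(totals.values())
    return {v for v, t in totals.items() if t == best}
```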

17.
The Hamiltonian Cycle problem is the problem of deciding whether an n-vertex graph G has a cycle passing through all vertices of G. This problem is a classic NP-complete problem. Finding an exact algorithm that solves it in ${\mathcal {O}}^{*}(\alpha^{n})$ time for some constant α<2 was a notorious open problem until very recently, when Björklund presented a randomized algorithm that uses ${\mathcal {O}}^{*}(1.657^{n})$ time and polynomial space. The Longest Cycle problem, in which the task is to find a cycle of maximum length, is a natural generalization of the Hamiltonian Cycle problem. For a claw-free graph G, finding a longest cycle is equivalent to finding a closed trail (i.e., a connected even subgraph, possibly consisting of a single vertex) that dominates the largest number of edges of some associated graph H. Using this translation we obtain two deterministic algorithms that solve the Longest Cycle problem, and consequently the Hamiltonian Cycle problem, for claw-free graphs: one algorithm that uses ${\mathcal {O}}^{*}(1.6818^{n})$ time and exponential space, and one algorithm that uses ${\mathcal {O}}^{*}(1.8878^{n})$ time and polynomial space.
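As a baseline for the running times quoted above, the classical Held-Karp dynamic program decides Hamiltonian Cycle in O*(2^n) time and exponential space. The compact sketch below is our own rendering of that textbook method; it is neither Björklund's randomized algorithm nor the claw-free algorithms of this paper, and it is only practical for very small n.

```python
def hamiltonian_cycle(adj):
    """Held-Karp DP: reach[mask][last] is True if some path from vertex 0 visits exactly `mask` and ends at `last`."""
    vertices = sorted(adj)
    n = len(vertices)
    index = {v: i for i, v in enumerate(vertices)}
    reach = [[False] * n for _ in range(1 << n)]
    reach[1][0] = True                            # start at vertex 0
    for mask in range(1 << n):
        for last in range(n):
            if not reach[mask][last]:
                continue
            for w in adj[vertices[last]]:
                j = index[w]
                if not mask & (1 << j):
                    reach[mask | (1 << j)][j] = True
    full = (1 << n) - 1
    # A Hamiltonian cycle exists iff some full path ends at a neighbor of the start vertex.
    return any(reach[full][j] and vertices[0] in adj[vertices[j]] for j in range(1, n))
```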

18.
C. C. McGeoch. Algorithmica, 1995, 13(5): 426-441.
The essential subgraph H of a weighted graph or digraph G contains an edge (v, w) if that edge is uniquely the least-cost path between its vertices. Let s denote the number of edges of H. This paper presents an algorithm for solving all-pairs shortest paths on G that requires O(ns + n^2 log n) worst-case running time. In general the time is equivalent to that of solving n single-source problems using only edges in H. For general models of random graphs and digraphs G, s = O(n log n) almost surely. The subgraph H is optimal in the sense that it is the smallest subgraph sufficient for solving shortest-path problems in G. Lower bounds on the largest-cost edge of H and on the diameter of H and G are obtained for general randomly weighted graphs. Experimental results produce some new conjectures about essential subgraphs and distances in graphs with uniform edge costs. Much of this research was carried out while the author was a Visiting Fellow at the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS).
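The essential subgraph is simple to compute naively: an edge is essential exactly when its weight is strictly smaller than the cheapest path between its endpoints that avoids the edge itself. The sketch below (our own code, one Dijkstra run per edge, undirected case only) illustrates the definition rather than the O(ns + n^2 log n) algorithm of the paper.

```python
import heapq

def dijkstra(adj, source, skip_edge=None):
    """Shortest distances from source; adj maps a vertex to (neighbor, weight) pairs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue
        for w, cost in adj[u]:
            if skip_edge and {u, w} == set(skip_edge):
                continue                          # pretend the skipped edge is absent
            if d + cost < dist.get(w, float('inf')):
                dist[w] = d + cost
                heapq.heappush(heap, (d + cost, w))
    return dist

def essential_subgraph(adj):
    """Edges that are uniquely the least-cost path between their endpoints."""
    essential = []
    for u in adj:
        for w, cost in adj[u]:
            if u < w:                             # each undirected edge once (orderable labels)
                alternative = dijkstra(adj, u, skip_edge=(u, w)).get(w, float('inf'))
                if cost < alternative:
                    essential.append((u, w, cost))
    return essential
```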

19.
Minor Containment is a fundamental problem in Algorithmic Graph Theory used as a subroutine in numerous graph algorithms. A model of a graph H in a graph G is a set of disjoint connected subgraphs of G indexed by the vertices of H, such that if {u,v} is an edge of H, then there is an edge of G between components C_u and C_v. A graph H is a minor of G if G contains a model of H as a subgraph. We give an algorithm that, given a planar n-vertex graph G and an h-vertex graph H, either finds in time $\mathcal{O}(2^{\mathcal{O}(h)} \cdot n +n^{2}\cdot\log n)$ a model of H in G, or correctly concludes that G does not contain H as a minor. Our algorithm is the first single-exponential algorithm for this problem and improves all previous minor testing algorithms in planar graphs. Our technique is based on a novel approach called partially embedded dynamic programming.

20.
We present an approximation algorithm to find a weighted matching of a graph in the one-pass semi-streaming model. The semi-streaming model forbids random access to the input graph and restricts the memory to ${\mathcal{O}}(n\cdot\operatorname{polylog}n)$ bits, where n denotes the number of vertices of the input graph. We obtain an approximation ratio of 5.58, while the previously best algorithm achieves a ratio of 5.82.
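For context, the sketch below shows the simple one-pass replacement rule from earlier semi-streaming matching work (often attributed to Feigenbaum et al.): an arriving edge evicts the matched edges it conflicts with only if its weight beats theirs by a fixed factor. With factor 2 this rule is known to give a constant-factor guarantee (6 in the original analysis); it is not the 5.58-approximation algorithm of this paper, and the function name and parameter below are ours.

```python
def one_pass_matching(edge_stream, factor=2.0):
    """Maintain a matching over a stream of (u, v, weight) edges using O(n) memory."""
    matched = {}                                   # vertex -> the (u, v, w) edge matching it
    for u, v, w in edge_stream:
        conflicts = [e for e in {matched.get(u), matched.get(v)} if e is not None]
        if w > factor * sum(e[2] for e in conflicts):
            for cu, cv, _ in conflicts:            # evict the conflicting matched edges
                matched.pop(cu, None)
                matched.pop(cv, None)
            matched[u] = matched[v] = (u, v, w)
    return set(matched.values())                   # each kept edge is stored under both endpoints
```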
