Similar Literature
20 similar documents found; search time 15 ms.
1.
The vertex updating problem for a minimum spanning tree (MST) is defined as follows: given a graph G = (V, E_G) and an MST T for G, find a new MST for G to which a new vertex z has been added along with weighted edges that connect z with the vertices of G. We present a set of rules that produce simple optimal parallel algorithms running in O(lg n) time using n/lg n EREW PRAM processors, where n = |V|. These algorithms employ any valid tree-contraction schedule that can be produced within the stated resource bounds. The rules can also be used to derive simple linear-time sequential algorithms for the same problem. The previously best-known parallel result was a rather complicated algorithm that used n processors in the more powerful CREW PRAM model. Furthermore, we show how our solution can be used to solve the multiple vertex updating problem: update a given MST when k new vertices are introduced simultaneously. This problem is solved in O(lg k · lg n) parallel time using (k·n)/(lg k · lg n) EREW PRAM processors, which is optimal for graphs having Ω(kn) edges. Part of this work was done while P. Metaxas was with the Department of Mathematics and Computer Science, Dartmouth College.
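The rules above yield parallel (and linear-time sequential) algorithms; as a purely illustrative sequential sketch of the problem itself, the hypothetical helper below relies on the standard fact that some MST of the augmented graph uses only the old MST edges plus the edges incident to the new vertex z, and simply re-runs Kruskal over those O(n) candidate edges. This is correct but O(n log n), not the paper's linear-time bound, and none of the names come from the paper.

```python
# Naive sequential sketch (not the paper's rule-based algorithm): re-run
# Kruskal on the old MST edges plus the new vertex's edges. Some MST of the
# augmented graph is contained in this O(n)-edge candidate set.
def update_mst_with_new_vertex(mst_edges, z_edges):
    """mst_edges: list of (u, v, w) tuples forming the current MST.
    z_edges: list of (z, v, w) tuples joining the new vertex to old vertices.
    Returns the edges of an MST of the augmented graph."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra == rb:
            return False
        parent[ra] = rb
        return True

    candidates = sorted(mst_edges + z_edges, key=lambda e: e[2])
    return [e for e in candidates if union(e[0], e[1])]
```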

2.
This paper determines upper bounds on the expected time complexity for a variety of parallel algorithms for undirected and directed random graph problems. For connectivity, biconnectivity, transitive closure, minimum spanning trees, and all pairs minimum cost paths, we prove the expected time to be O(log log n) for the CRCW PRAM (this parallel RAM machine allows resolution of write conflicts) and O(log n · log log n) for the CREW PRAM (which allows simultaneous reads but not simultaneous writes). We also show that the problem of graph isomorphism has expected parallel time O(log log n) for the CRCW PRAM and O(log n) for the CREW PRAM. Most of these results follow because of upper bounds on the mean depth of a graph, derived in this paper, for more general graphs than was known before. For undirected connectivity especially, we present a new probabilistic algorithm which runs on a randomized input and has an expected running time of O(log log n) on the CRCW PRAM, with O(n) expected number of processors only. Our results also improve known upper bounds on the expected space required for sequential graph algorithms. For example, we show that the problems of finding connected components, transitive closure, minimum spanning trees, and minimum cost paths have expected sequential space O(log n · log log n) on a deterministic Turing Machine. We use a simulation of the CRCW PRAM to get these expected sequential space bounds. This research was supported by National Science Foundation Grant DCR-85-03251 and Office of Naval Research Contract N00014-80-C-0647. This research was partially supported by the National Science Foundation Grants MCS-83-00630, DCR-8503497, by the Greek Ministry of Research and Technology, and by the ESPRIT Basic Research Actions Project ALCOM.

3.
E. Ruppert, Algorithmica, 2000, 28(2): 242-254
A concurrent-read exclusive-write PRAM algorithm is developed to find the k shortest paths between pairs of vertices in an edge-weighted directed graph. Repetitions of vertices along the paths are allowed. The algorithm computes an implicit representation of the k shortest paths to a given destination vertex from every vertex of a graph with n vertices and m edges, using O(m + nk log²k) work and O(log³k log*k + log n(log log k + log*n)) time, assuming that a shortest path tree rooted at the destination is pre-computed. The paths themselves can be extracted from the implicit representation in O(log k + log n) time and O(n log n + L) work, where L is the total length of the output. Received July 2, 1997; revised June 18, 1998.
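For intuition, a simple sequential analogue (not the paper's CREW PRAM algorithm) of finding k shortest paths when vertex repetitions are allowed is a Dijkstra-style search that pops each vertex from the priority queue up to k times; it assumes non-negative edge weights, and the graph at the bottom is only a made-up example.

```python
import heapq

# Sequential sketch: the i-th time t is popped, d is the length of the i-th
# shortest s-t walk (vertex repetitions allowed), for non-negative weights.
def k_shortest_walk_lengths(adj, s, t, k):
    """adj: dict mapping vertex -> list of (neighbor, weight) pairs."""
    pops = {v: 0 for v in adj}
    heap = [(0, s)]
    answers = []
    while heap and len(answers) < k:
        d, v = heapq.heappop(heap)
        if pops[v] >= k:
            continue
        pops[v] += 1
        if v == t:
            answers.append(d)
        for w, cost in adj[v]:
            if pops[w] < k:
                heapq.heappush(heap, (d + cost, w))
    return answers

# Made-up example: the two shortest s-t walks have lengths 2 and 4.
graph = {'s': [('a', 1), ('t', 5)], 'a': [('t', 1), ('s', 1)], 't': []}
print(k_shortest_walk_lengths(graph, 's', 't', 2))  # [2, 4]
```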

4.
We present a model for edge updates with restricted randomness in dynamic graph algorithms and a general technique for analyzing the expected running time of an update operation. This model is able to capture the average case in many applications, since (1) it allows restrictions on the set of edges which can be used for insertions and (2) the type (insertion or deletion) of each update operation is arbitrary, i.e., not random. We use our technique to analyze existing and new dynamic algorithms for the following problems: maximum cardinality matching, minimum spanning forest, connectivity, 2-edge connectivity, k-edge connectivity, k-vertex connectivity, and bipartiteness. Given a random graph G with m_0 edges and n vertices and a sequence of l update operations such that the graph contains m_i edges after operation i, the expected time for performing the updates for any l is in the case of minimum spanning forests, connectivity, 2-edge connectivity, and bipartiteness. The expected time per update operation is O(n) in the case of maximum matching. We also give improved bounds for k-edge and k-vertex connectivity. Additionally we give an insertions-only algorithm for maximum cardinality matching with worst-case O(n) amortized time per insertion. Received June 11, 1995; revised March 8, 1996.

5.
International Journal of Computer Mathematics, 2012, 89(3-4): 205-226
Ghosh and Bhattacharjee [2] (Intern. J. Computer Math., 1984, Vol. 15, pp. 255-268) propose an algorithm for determining breadth-first spanning trees of graphs, which requires that the input graph contain vertices from which every other vertex can be reached; these are called starting vertices. The complexity of the GB algorithm is O(log² n) using O(n³) processors. In this paper an algorithm named BREADTH, also computing breadth-first spanning trees, is proposed; its complexity is O(log² n) using O(n³/log n) processors. An efficient parallel algorithm named BREADTHFOREST is then proposed, which generalizes BREADTH: applied to a general graph, which may not contain any starting vertices, it outputs a breadth-first spanning forest of the input graph. The complexity of BREADTHFOREST is the same as that of BREADTH.
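As a point of reference, the sequential counterpart of a breadth-first spanning forest is straightforward; the sketch below (sequential only, not the parallel BREADTHFOREST algorithm) starts a BFS from every vertex not yet reached.

```python
from collections import deque

# Sequential sketch: BFS spanning forest of an undirected graph; parent[v] is
# v's BFS-tree parent, and roots have parent None.
def bfs_spanning_forest(adj):
    """adj: dict mapping vertex -> iterable of neighbors."""
    parent = {}
    for root in adj:
        if root in parent:
            continue
        parent[root] = None
        queue = deque([root])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in parent:
                    parent[w] = v
                    queue.append(w)
    return parent
```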

6.
In this paper we give efficient parallel algorithms for solving a number of visibility and shortest-path problems for simple polygons. Our algorithms all run in O(log n) time and are based on the use of a new data structure for implicitly representing all shortest paths in a simple polygon P, which we call the stratified decomposition tree. We use this approach to derive efficient parallel methods for computing the visibility of P from an edge, constructing the visibility graph of the vertices of P (using an output-sensitive number of processors), constructing the shortest-path tree from a vertex of P, and determining all-farthest neighbors for the vertices in P. The computational model we use is the CREW PRAM. This research was announced in preliminary form in the Proceedings of the 6th ACM Symposium on Computational Geometry, 1990, pp. 73–82. The research of Michael T. Goodrich was supported by the National Science Foundation under Grants CCR-8810568 and CCR-9003299, and by the NSF and DARPA under Grant CCR-8908092.

7.
In this paper we give efficient parallel algorithms for solving a number of visibility and shortest-path problems for simple polygons. Our algorithms all run in O(log n) time and are based on the use of a new data structure for implicitly representing all shortest paths in a simple polygon P, which we call the stratified decomposition tree. We use this approach to derive efficient parallel methods for computing the visibility of P from an edge, constructing the visibility graph of the vertices of P (using an output-sensitive number of processors), constructing the shortest-path tree from a vertex of P, and determining all-farthest neighbors for the vertices in P. The computational model we use is the CREW PRAM.

8.
Computing cut vertices (articulation points) is an important operation in graph applications. The depth-first-search-tree algorithm solves the cut-vertex problem, but it has drawbacks that prevent it from being applied well in practice. This is due to two characteristics of today's data: first, data volumes are huge, which poses challenging requirements for many graph operations; second, data change frequently, and the large number of daily updates forces traditional algorithms to recompute from scratch after each update, wasting time and space. The DFS-tree algorithm runs in O(|V|+|E|) time, where |V| and |E| are the numbers of vertices and edges of the graph. It copes well with the first characteristic, but is powerless against the second. This paper proposes a compression-based cut-vertex algorithm to address this problem. The algorithm compresses the graph using naive vertex similarity in O(|E|) time. Cut vertices are then computed on the resulting lossless compressed graph, while vertex and edge updates are maintained dynamically on the compressed graph, so that updates are applied without decompressing the graph and cut vertices are recomputed on the updated graph, greatly reducing time and space consumption. The compressed graph produced by this compression algorithm is also applicable to other graph operations.
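The classical DFS-tree algorithm the abstract refers to can be sketched as follows; this is the standard sequential O(|V|+|E|) articulation-point algorithm, not the paper's compression-based method.

```python
import sys

# Classical DFS-tree (Tarjan-style) articulation-point algorithm for a simple
# undirected graph, O(|V| + |E|).
def articulation_points(adj):
    """adj: dict mapping vertex -> iterable of neighbors."""
    sys.setrecursionlimit(max(10000, 2 * len(adj)))
    disc, low, cut = {}, {}, set()
    timer = [0]

    def dfs(v, parent):
        disc[v] = low[v] = timer[0]
        timer[0] += 1
        children = 0
        for w in adj[v]:
            if w not in disc:
                children += 1
                dfs(w, v)
                low[v] = min(low[v], low[w])
                # Non-root v is a cut vertex if some child cannot reach above v.
                if parent is not None and low[w] >= disc[v]:
                    cut.add(v)
            elif w != parent:
                low[v] = min(low[v], disc[w])
        # The root is a cut vertex iff it has at least two DFS children.
        if parent is None and children >= 2:
            cut.add(v)

    for v in adj:
        if v not in disc:
            dfs(v, None)
    return cut
```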

9.
This paper determines upper bounds on the expected time complexity for a variety of parallel algorithms for undirected and directed random graph problems. For connectivity, biconnectivity, transitive closure, minimum spanning trees, and all pairs minimum cost paths, we prove the expected time to be O(log log n) for the CRCW PRAM (this parallel RAM machine allows resolution of write conflicts) and O(log n · log log n) for the CREW PRAM (which allows simultaneous reads but not simultaneous writes). We also show that the problem of graph isomorphism has expected parallel time O(log log n) for the CRCW PRAM and O(log n) for the CREW PRAM. Most of these results follow because of upper bounds on the mean depth of a graph, derived in this paper, for more general graphs than was known before. For undirected connectivity especially, we present a new probabilistic algorithm which runs on a randomized input and has an expected running time of O(log log n) on the CRCW PRAM, with O(n) expected number of processors only. Our results also improve known upper bounds on the expected space required for sequential graph algorithms. For example, we show that the problems of finding connected components, transitive closure, minimum spanning trees, and minimum cost paths have expected sequential space O(log n · log log n) on a deterministic Turing Machine. We use a simulation of the CRCW PRAM to get these expected sequential space bounds.

10.
Efficiently searching for top-k representative vertices is crucial for understanding the structure of large dynamic graphs. Recent studies show that communities formed by a vertex with high local clustering coefficient and its neighbours can achieve enhanced information propagation speed as well as disease transmission speed. However, the local clustering coefficient, which measures the cliquishness of a vertex in its local neighbourhood, prefers vertices with small degrees. To remedy this issue, in this paper we propose a new ranking measure, the weighted clustering coefficient (WCC) of vertices, which integrates both the local clustering coefficient and the degree. WCC not only inherits the properties of the local clustering coefficient but also approximately measures the density (i.e., average degree) of a vertex's neighbourhood subgraph. Thus, vertices with higher WCC are more likely to be representative. We study efficiently computing and monitoring top-k representative vertices based on WCC over large dynamic graphs. To reduce the search space, we propose a series of heuristic upper bounds for WCC to prune a large portion of disqualifying vertices from the search space. We also develop an approximation algorithm by utilizing the Flajolet-Martin sketch to trade acceptable accuracy for enhanced efficiency. An efficient incremental algorithm dealing with frequent updates in dynamic graphs is explored as well. Extensive experimental results on a variety of real-life graph datasets demonstrate the efficiency and effectiveness of our approaches.
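For concreteness, the local clustering coefficient can be computed as below; since the abstract does not give the exact WCC formula, the combined score cc(v) · deg(v) used in the second function is only an assumed placeholder, not the paper's definition.

```python
from itertools import combinations

# Local clustering coefficient of each vertex of a simple undirected graph.
def clustering_coefficients(adj):
    """adj: dict mapping vertex -> set of neighbors."""
    cc = {}
    for v, nbrs in adj.items():
        d = len(nbrs)
        if d < 2:
            cc[v] = 0.0
            continue
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        cc[v] = 2.0 * links / (d * (d - 1))
    return cc

def top_k_by_combined_score(adj, k):
    cc = clustering_coefficients(adj)
    score = {v: cc[v] * len(adj[v]) for v in adj}   # assumed combination, not the paper's WCC
    return sorted(score, key=score.get, reverse=True)[:k]
```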

11.
We consider graphs whose vertices may be in one of two different states: either on or off. We wish to maintain such graphs dynamically under an intermixed sequence of updates and queries. An update may reverse the status of a vertex, by switching it either on or off, and may insert a new edge or delete an existing edge. A query tests whether any two given vertices are connected in the subgraph induced by the vertices that are on. We give efficient algorithms that maintain information about connectivity on planar graphs in O(log³ n) amortized time per query, insert, delete, switch-on, and switch-off operation over sequences of at least Ω(n) operations, where n is the number of vertices of the graph. Received September 1997; revised January 1999.
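A naive baseline for the same interface (kept only to make the operations concrete; it has none of the O(log³ n) guarantees of the paper's planar structure) answers each query by a BFS restricted to vertices that are on.

```python
from collections import deque

# Naive baseline: explicit vertex states and edge sets, with an O(n + m) BFS
# over the "on" subgraph per connectivity query.
class OnOffConnectivity:
    def __init__(self, vertices):
        self.on = {v: True for v in vertices}
        self.adj = {v: set() for v in vertices}

    def switch(self, v, state):             # switch v on (True) or off (False)
        self.on[v] = state

    def insert_edge(self, u, v):
        self.adj[u].add(v)
        self.adj[v].add(u)

    def delete_edge(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)

    def connected(self, s, t):
        if not (self.on[s] and self.on[t]):
            return False
        seen, queue = {s}, deque([s])
        while queue:
            v = queue.popleft()
            if v == t:
                return True
            for w in self.adj[v]:
                if self.on[w] and w not in seen:
                    seen.add(w)
                    queue.append(w)
        return False
```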

12.
The connected vertex cover problem is a variant of the vertex cover problem in which a vertex cover is additionally required to induce a connected subgraph of a given connected graph. The problem is known to be NP-hard and to be at least as hard to approximate as the vertex cover problem. While several 2-approximation NC algorithms are known for vertex cover, whether unweighted or weighted, no parallel algorithm with guaranteed approximation is known for connected vertex cover. Moreover, converting the existing sequential 2-approximation algorithms for connected vertex cover to parallel ones results in RNC algorithms of rather high complexity at best. In this paper we present a 2-approximation NC (and RNC) algorithm for connected vertex cover (and tree cover). The NC algorithm runs in O(log² n) time using O(Δ²(m+n)/log n) processors on an EREW PRAM, while the RNC algorithm runs in O(log n) expected time using O(m+n) processors on a CRCW PRAM, when the given graph has n vertices and m edges with maximum vertex degree Δ.
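For comparison, a classical sequential 2-approximation for connected vertex cover on a connected graph takes the internal (non-leaf) vertices of any depth-first search tree; the sketch below implements that folklore approach, not the paper's NC/RNC algorithm.

```python
# Sequential 2-approximation sketch: the internal vertices of a DFS tree of a
# connected graph form a connected vertex cover of size at most twice the
# minimum vertex cover.
def connected_vertex_cover_2approx(adj, root):
    """adj: dict mapping vertex -> iterable of neighbors (connected graph)."""
    visited = {root}
    internal = set()
    stack = [(root, iter(adj[root]))]
    while stack:
        v, it = stack[-1]
        advanced = False
        for w in it:
            if w not in visited:
                visited.add(w)
                internal.add(v)            # v gains a DFS child, hence internal
                stack.append((w, iter(adj[w])))
                advanced = True
                break
        if not advanced:
            stack.pop()
    return internal
```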

13.
A k-core of a graph is a maximal connected subgraph in which every vertex is connected to at least k vertices in the subgraph. k-core decomposition is often used in large-scale network analysis, such as community detection, protein function prediction, visualization, and solving NP-hard problems on real networks efficiently, like maximal clique finding. In many real-world applications, networks change over time. As a result, it is essential to develop efficient incremental algorithms for dynamic graph data. In this paper, we propose a suite of incremental k-core decomposition algorithms for dynamic graph data. These algorithms locate a small subgraph that is guaranteed to contain the list of vertices whose maximum k-core values have changed and efficiently process this subgraph to update the k-core decomposition. We present incremental algorithms for both insertion and deletion operations, and propose auxiliary vertex state maintenance techniques that can further accelerate these operations. Our results show a significant reduction in runtime compared to non-incremental alternatives. We illustrate the efficiency of our algorithms on different types of real and synthetic graphs, at varying scales. For a graph of 16 million vertices, we observe relative throughput reaching a million times that of the non-incremental algorithms.
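The non-incremental baseline that such algorithms improve on is the standard peeling procedure: repeatedly remove a vertex of minimum remaining degree. A simple (quadratic, purely illustrative) sequential sketch:

```python
# Sequential k-core decomposition by peeling: process vertices in order of
# minimum remaining degree; the core number of a vertex is the largest
# minimum-degree value seen up to its removal. O(n^2) illustrative version.
def core_numbers(adj):
    """adj: dict mapping vertex -> set of neighbors (simple undirected graph)."""
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    removed = set()
    core = {}
    current = 0
    while len(removed) < len(adj):
        v = min((u for u in adj if u not in removed), key=degree.get)
        current = max(current, degree[v])
        core[v] = current
        removed.add(v)
        for w in adj[v]:
            if w not in removed:
                degree[w] -= 1
    return core
```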

14.
The Bandwidth Minimization Problem (BMP) is the problem, given a graph G and an integer k, of mapping the vertices of G to distinct positive integers so that no edge of G has its endpoints mapped to integers that differ by more than k. There is no known approximation algorithm for this problem, even for the case of trees. We present an approximation algorithm for the BMP for a special class of graphs, called caterpillars. The BMP arises from many different engineering applications which try to achieve efficient storage and processing, and has been studied extensively, especially in relation to other graph layout problems. In particular, the BMP for caterpillars is related to multiprocessor scheduling. It has been shown to be NP-complete, even for degree-3 trees. Our algorithm produces a bandwidth within a log n factor of optimal, where n is the number of nodes of the caterpillar. It is based on the idea of level algorithms. This work has been, in part, supported by the Computer Learning Research (CLEAR) Center at UTD.
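To make the objective concrete: the bandwidth of a layout is the largest difference between the positions assigned to the endpoints of an edge. The brute-force sketch below only illustrates the definition on tiny graphs; it is exponential and is not the paper's approximation algorithm for caterpillars.

```python
from itertools import permutations

# Bandwidth of a given layout, plus an exponential exact search for tiny graphs.
def layout_bandwidth(edges, position):
    """position: dict mapping vertex -> distinct positive integer."""
    return max(abs(position[u] - position[v]) for u, v in edges)

def exact_bandwidth(vertices, edges):
    best = None
    for perm in permutations(range(1, len(vertices) + 1)):
        pos = dict(zip(vertices, perm))
        b = layout_bandwidth(edges, pos)
        best = b if best is None else min(best, b)
    return best
```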

15.
Consider a weighted transitive graph, where each vertex is assigned a positive weight. Given a positive integer k, the maximum k-covering problem is to find k disjoint cliques covering a set of vertices with maximum total weight. An O(kn²)-time algorithm to solve the problem in a transitive graph is proposed, where n is the number of vertices. Based on the proposed algorithm, the weighted version of a number of problems in VLSI layout (e.g., k-layer topological via minimization), computational geometry (e.g., maximum multidimensional k-chain), graph theory (e.g., maximum k-independent set in interval graphs), and sequence manipulation (e.g., maximum increasing k-subsequence) can be solved in O(kn²), where n is the input size. This work was supported in part by the National Science Foundation under Grants MIP-8709074 and MIP-8921540.

16.
We address the problem of very efficient reasoning and update in ≤,<-chains, where a ≤,<-chain is a directed acyclic graph such that there is a directed path between every pair of vertices, and edges are consistently labeled by ≤ or <. It is easy to show that, subsequent to an O(n) labeling scheme, queries concerning implied ≤ and < relations can be answered in O(1) time. However this scheme does not allow for efficient updates to a chain. We show via a novel encoding of precedence information that updates to chains can be accomplished very efficiently. For edge or vertex addition, updates can be carried out in O(log n) time, with manageable degradation for queries: ≤ queries are answered in O(1) time while < queries require O(log n) time. This result is surprising, in that the obvious approach to updates requires O(n) time.

17.
C. C. McGeoch, Algorithmica, 1995, 13(5): 426-441
The essential subgraph H of a weighted graph or digraph G contains an edge (v, w) if that edge is uniquely the least-cost path between its vertices. Let s denote the number of edges of H. This paper presents an algorithm for solving all-pairs shortest paths on G that requires O(ns + n² log n) worst-case running time. In general the time is equivalent to that of solving n single-source problems using only edges in H. For general models of random graphs and digraphs G, s = O(n log n) almost surely. The subgraph H is optimal in the sense that it is the smallest subgraph sufficient for solving shortest-path problems in G. Lower bounds on the largest-cost edge of H and on the diameter of H and G are obtained for general randomly weighted graphs. Experimental results produce some new conjectures about essential subgraphs and distances in graphs with uniform edge costs. Much of this research was carried out while the author was a Visiting Fellow at the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS).
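The defining test for an essential edge can be checked directly on small graphs: an edge (v, w) is essential iff it is strictly cheaper than every v-w path through an intermediate vertex. The O(n³) sketch below, which assumes positive edge weights, only illustrates the definition; it is not the paper's algorithm.

```python
# Definition check only: Floyd-Warshall distances, then keep an edge iff no
# path through an intermediate vertex matches its cost.
def essential_subgraph(vertices, weight):
    """weight: dict mapping ordered pair (u, v) -> positive cost of edge u -> v."""
    INF = float('inf')
    dist = {(u, v): INF for u in vertices for v in vertices}
    for v in vertices:
        dist[(v, v)] = 0.0
    for (u, v), w in weight.items():
        dist[(u, v)] = min(dist[(u, v)], w)
    for k in vertices:
        for i in vertices:
            for j in vertices:
                if dist[(i, k)] + dist[(k, j)] < dist[(i, j)]:
                    dist[(i, j)] = dist[(i, k)] + dist[(k, j)]
    essential = set()
    for (u, v), w in weight.items():
        detour = min((dist[(u, x)] + dist[(x, v)]
                      for x in vertices if x not in (u, v)), default=INF)
        if w < detour:                      # no equally cheap alternative path
            essential.add((u, v))
    return essential
```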

18.
19.
The Feedback Vertex Set problem on unweighted, undirected graphs is considered. Improving upon a result by Burrage et al. (Proceedings 2nd International Workshop on Parameterized and Exact Computation, pp. 192–202, 2006), we show that this problem has a kernel with O(k³) vertices, i.e., there is a polynomial time algorithm that, given a graph G and an integer k, finds a graph G′ with O(k³) vertices and an integer k′ ≤ k, such that G has a feedback vertex set of size at most k if and only if G′ has a feedback vertex set of size at most k′. Moreover, the algorithm can be made constructive: if the reduced instance G′ has a feedback vertex set of size k′, then we can easily transform a minimum size feedback vertex set of G′ into a minimum size feedback vertex set of G. This kernelization algorithm can be used as the first step of an FPT algorithm for Feedback Vertex Set, but also as a preprocessing heuristic for Feedback Vertex Set.
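A much weaker but well-known preprocessing step in the same spirit is to delete vertices of degree at most one, since they lie on no cycle; the sketch below shows only this simple rule, not the paper's O(k³) kernelization.

```python
# Simple Feedback Vertex Set preprocessing rule: a vertex of degree <= 1 lies
# on no cycle, so it can be removed without affecting any feedback vertex set.
def strip_low_degree_vertices(adj):
    """adj: dict mapping vertex -> set of neighbors; returns a reduced copy."""
    g = {v: set(nbrs) for v, nbrs in adj.items()}
    queue = [v for v in g if len(g[v]) <= 1]
    while queue:
        v = queue.pop()
        if v not in g:
            continue
        for w in g[v]:
            g[w].discard(v)
            if len(g[w]) <= 1:
                queue.append(w)
        del g[v]
    return g
```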

20.
In this paper we describe a technique for finding efficient parallel algorithms for problems on directed graphs that involve checking the existence of certain kinds of paths in the graph. This technique provides efficient algorithms for finding dominators in flow graphs, performing interval and loop analysis on reducible flow graphs, and finding the feedback vertices of a digraph. Each of these algorithms takes O(log² n) time using the same number of processors needed for fast matrix multiplication. All of these bounds are for an EREW PRAM.
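As a sequential point of reference for the dominator problem (not the paper's O(log² n) parallel technique): d dominates v exactly when v becomes unreachable from the root once d is removed, which the naive sketch below checks with one graph traversal per candidate d.

```python
# Naive dominator computation for a flow graph: O(n * (n + m)), one
# reachability test per candidate dominator.
def dominators(adj, root):
    """adj: dict mapping vertex -> iterable of successors in a digraph."""
    def reachable(banned):
        seen, stack = set(), [root]
        if root == banned:
            return seen
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            for w in adj.get(v, ()):
                if w != banned and w not in seen:
                    stack.append(w)
        return seen

    base = reachable(banned=None)
    dom = {v: {v} for v in base}            # every vertex dominates itself
    for d in base:
        lost = base - reachable(banned=d)
        for v in lost:
            dom[v].add(d)
    return dom
```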
