Similar Documents
20 similar documents found (search time: 344 ms)
1.
Given a collection of segments in the plane, if we regard the segments as opaque barriers, their upper envelope consists of the portions of the segments visible from the point (0, +∞). In this paper, we present deterministic parallel methods for constructing the upper envelope of segments on the weakest shared-memory model, the EREW PRAM. We show that we can find the upper envelope of n line segments optimally in O(log n) time using O(n) processors. Furthermore, if the segments are nonintersecting and their endpoints are sorted by x-coordinate, then we can reduce the number of processors to O(n/log n). Our method implies that we can find the upper envelope sequentially in O(n log log n) time, which improves previous results. We also show that we can find the upper envelope of n k-intersecting segments (any pair of the segments intersects at most k times) with slightly larger time and processor bounds.
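For intuition, here is a brute-force sequential sketch of the definition above, restricted to pairwise non-crossing segments (the function name and segment representation are assumptions of mine, not the paper's): between two consecutive endpoint x-coordinates the topmost segment cannot change, so probing each elementary slab recovers the envelope in O(n²) time, far from the bounds claimed in the paper but handy as a reference implementation.

    def upper_envelope_noncrossing(segments):
        """Brute-force upper envelope of pairwise non-crossing segments.

        segments: list of ((x1, y1), (x2, y2)) with x1 < x2.
        Returns (x_left, x_right, seg_index) pieces telling which segment is
        topmost (i.e. visible from (0, +infinity)) over each elementary x-slab.
        O(n^2) reference baseline only -- not the paper's EREW PRAM method.
        """
        xs = sorted({x for p, q in segments for x in (p[0], q[0])})
        pieces = []
        for xl, xr in zip(xs, xs[1:]):
            xm = (xl + xr) / 2.0                      # probe the middle of the slab
            best, best_y = None, float("-inf")
            for i, ((x1, y1), (x2, y2)) in enumerate(segments):
                if x1 <= xm <= x2:                    # segment spans this slab
                    y = y1 + (y2 - y1) * (xm - x1) / (x2 - x1)
                    if y > best_y:
                        best, best_y = i, y
            if best is not None:
                pieces.append((xl, xr, best))
        return pieces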

2.
We present a technique that can be used to obtain efficient parallel geometric algorithms in the EREW PRAM computational model. This technique enables us to solve optimally a number of geometric problems in O(log n) time using O(n/log n) EREW PRAM processors, where n is the input size of a problem. These problems include: computing the convex hull of a set of points in the plane that are given sorted, computing the convex hull of a simple polygon, computing the common intersection of half-planes whose slopes are given sorted, finding the kernel of a simple polygon, triangulating a set of points in the plane that are given sorted, triangulating monotone polygons and star-shaped polygons, and computing the all dominating neighbors of a sequence of values. PRAM algorithms for these problems were previously known to be optimal (i.e., in O(log n) time and using O(n/log n) processors) only on the CREW PRAM, which is a stronger model than the EREW PRAM.

3.
Given a set of n intervals representing an interval graph, the problem of finding a maximum matching between pairs of disjoint (nonintersecting) intervals has been considered in the sequential model. In this paper we present parallel algorithms for computing maximum cardinality matchings among pairs of disjoint intervals in interval graphs in the EREW PRAM and hypercube models. For the general case of the problem, our algorithms compute a maximum matching in O(log³ n) time using O(n/log² n) processors on the EREW PRAM and using n processors on hypercubes. For the case of proper interval graphs, our algorithm runs in O(log n) time on the EREW PRAM, using O(n) processors if the input intervals are not given already sorted and O(n/log n) processors otherwise. On n-processor hypercubes, our algorithm for the proper interval case takes O(log n log log n) time for unsorted input and O(log n) time for sorted input. Our parallel results also lead to optimal sequential algorithms for computing maximum matchings among disjoint intervals. In addition, we present an improved parallel algorithm for maximum matching between overlapping intervals in proper interval graphs.

4.
We present four polylog-time parallel algorithms for matching parentheses on an exclusive-read and exclusive-write (EREW) parallel random-access machine (PRAM) model. These algorithms provide new insights into the parentheses-matching problem. The first algorithm has a time complexity of O(log² n) employing O(n/log n) processors for an input string containing n parentheses. Although this algorithm is not cost-optimal, it is extremely simple to implement. The remaining three algorithms, which are based on a different approach, achieve O(log n) time complexity in each case and represent successive improvements. The second algorithm requires O(n) processors and working space, and it is comparable to the first algorithm in its ease of implementation. The third algorithm uses O(n/log n) processors and O(n log n) space. Thus, it is cost-optimal, but uses extra space compared to the standard stack-based sequential algorithm. The last algorithm reduces the space complexity to O(n) while maintaining the same processor and time complexities. Compared to other existing time-optimal algorithms for the parentheses-matching problem, which either employ extensive pipelining or use linked lists and comparable data structures and employ sorting or a linked-list ranking algorithm as subroutines, the last two algorithms have two distinct advantages: first, they employ arrays as their basic data structures; second, they do not use any pipelining, sorting, or linked-list ranking algorithms.
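For reference, the "standard stack-based sequential algorithm" mentioned above is a few lines of code (the function name is mine); it is the O(n)-time, O(n)-space baseline that the parallel algorithms trade for O(log n) EREW PRAM time.

    def match_parentheses(s):
        """Stack-based sequential parentheses matching.

        Returns match[i] = index of the parenthesis matching position i,
        or raises ValueError if s is not well formed.  O(n) time, O(n) space.
        """
        match = [-1] * len(s)
        stack = []
        for i, c in enumerate(s):
            if c == '(':
                stack.append(i)
            elif c == ')':
                if not stack:
                    raise ValueError("unmatched ')' at position %d" % i)
                j = stack.pop()
                match[i], match[j] = j, i
        if stack:
            raise ValueError("unmatched '(' at position %d" % stack[-1])
        return match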

5.
In this paper we consider the problem of computing the connected components of the complement of a given graph. We describe a simple sequential algorithm for this problem, which works on the input graph and not on its complement, and which for a graph on n vertices and m edges runs in optimal O(n+m) time. Moreover, unlike previous linear co-connectivity algorithms, this algorithm admits efficient parallelization, leading to an optimal O(log n)-time and O((n+m)/log n)-processor algorithm on the EREW PRAM model of computation. It is worth noting that, for the related problem of computing the connected components of a graph, no optimal deterministic parallel algorithm is currently available. The co-connectivity algorithms find applications in a number of problems. In fact, we also include a parallel recognition algorithm for weakly triangulated graphs, which takes advantage of the parallel co-connectivity algorithm and achieves an O(log² n) time complexity using O((n+m²)/log n) processors on the EREW PRAM model of computation.
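The flavor of "working on the input graph and not on its complement" can be illustrated by the following sequential sketch (all identifiers are mine; this is the textbook complement-BFS idea, not necessarily the paper's algorithm): a BFS over complement edges keeps the not-yet-visited vertices in a set and, for each dequeued vertex, absorbs every unvisited non-neighbour into the current component.

    from collections import deque

    def co_components(n, adj):
        """Connected components of the complement of G, computed on G itself.

        n: number of vertices 0..n-1; adj: list of neighbour sets in G.
        The complement graph is never built: for each dequeued vertex, the
        unvisited vertices adjacent to it in G stay behind, and all the
        others are reached via complement edges.
        """
        unvisited = set(range(n))
        components = []
        while unvisited:
            start = unvisited.pop()
            comp, queue = [start], deque([start])
            while queue:
                v = queue.popleft()
                stay = unvisited & adj[v]   # G-neighbours of v: no complement edge
                grab = unvisited - stay     # non-neighbours: complement edges from v
                unvisited = stay
                comp.extend(grab)
                queue.extend(grab)
            components.append(comp)
        return components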

6.

This paper presents an optimal sequential and an optimal parallel algorithm to compute a minimum cardinality Steiner set and a Steiner tree. The sequential algorithm takes O(n) time, and the parallel algorithm takes O(log n) time and O(n/log n) processors on an EREW PRAM model.

7.
An Optimal Algorithm for the Complete Euclidean Distance Transform
陈Leng (Chen Leng), 《计算机学报》 (Chinese Journal of Computers), 1995, 18(8): 611-616
The Euclidean distance transform (EDT) finds, for every pixel of a binary image made up of black and white pixels, the distance to its nearest black pixel; it is used in image analysis and computer vision. Before this paper, the best known complexity for the problem was O(n² log n). This paper presents an algorithm of complexity O(n²), which makes the complexity optimal. The algorithm can be parallelized: on an EREW PRAM with r processing units, the time complexity is O(n/r) if r log r ≤ (22/6)n, and O(n log r) otherwise.
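As a sanity-check baseline only (names are mine, and this is emphatically not the paper's algorithm), the transform can be computed straight from its definition; the cost of doing so for an n × n image is what the optimal algorithm above eliminates.

    import math

    def naive_edt(image):
        """Brute-force Euclidean distance transform of a binary image.

        image[i][j] == 1 marks a black pixel; returns each pixel's Euclidean
        distance to the nearest black pixel.  Direct evaluation costs up to
        O(n^4) for an n x n image -- a reference baseline, nothing more.
        """
        n, m = len(image), len(image[0])
        black = [(i, j) for i in range(n) for j in range(m) if image[i][j] == 1]
        dist = [[math.inf] * m for _ in range(n)]
        for i in range(n):
            for j in range(m):
                for bi, bj in black:
                    d = math.hypot(i - bi, j - bj)
                    if d < dist[i][j]:
                        dist[i][j] = d
        return dist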

8.
Given a graph G = (V, E) with n vertices and m edges, the k-connectivity of G denotes either the k-edge connectivity or the k-vertex connectivity of G. In this paper, we deal with the fully dynamic maintenance of the k-connectivity of G in the parallel setting for k = 2, 3. We study the problem of maintaining the k-edge/vertex connected components of a graph undergoing repeated dynamic updates, such as edge insertions and deletions, and of answering queries of whether two vertices are included in the same k-edge/vertex connected component. Our major results are the following: (1) An NC algorithm for the 2-edge connectivity problem is proposed, which runs in O(log n log(m/n)) time using O(n^{3/4}) processors per update and query. (2) It is shown that the biconnectivity problem can be solved in O(log² n) time using O(nα(2n, n)/log n) processors per update and O(1) time with a single processor per query, or in O(log n log(m/n)) time using O(nα(2n, n)/log n) processors per update and O(log n) time using O(nα(2n, n)/log n) processors per query, where α(·,·) is the inverse of Ackermann's function. (3) An NC algorithm for the triconnectivity problem is also derived, which takes O(log n log(m/n) + log n log log n/α(3n, n)) time using O(nα(3n, n)/log n) processors per update and O(1) time with a single processor per query. (4) An NC algorithm for the 3-edge connectivity problem is obtained, which has the same time and processor complexities as the algorithm for the triconnectivity problem. To the best of our knowledge, the proposed algorithms are the first NC algorithms for these problems using O(n) processors, in contrast to Ω(m) processors for solving them from scratch. In particular, the proposed NC algorithm for the 2-edge connectivity problem uses only O(n^{3/4}) processors. All the proposed algorithms run on a CRCW PRAM.

9.
For 2 ⩽ k ⩽ n, the k-merge problem is to merge a collection of k sorted sequences of total length n into a new sorted sequence. The k-merge problem is fundamental, as it provides a common generalization of both merging and sorting. The main contribution of this work is to give simple and intuitive work-time optimal algorithms for the k-merge problem on three PRAM models, thus settling the status of the k-merge problem. We first prove that Ω(n log k) work is required to solve the k-merge problem on the PRAM models. We then show that the EREW-PRAM requires Ω(log n) time and that both the CREW-PRAM and the CRCW-PRAM require Ω(log log n + log k) time, provided that the amount of work is bounded by O(n log k). Our first k-merge algorithm runs in Θ(log n) time and performs Θ(n log k) work on the EREW-PRAM. Finally, we design a work-time optimal CREW-PRAM k-merge algorithm that runs in Θ(log log n + log k) time and performs Θ(n log k) work; this latter algorithm is also work-time optimal on the CRCW-PRAM model. Our algorithms completely settle the status of the k-merge problem on the three main PRAM models.
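A minimal sequential counterpart (function name mine): a heap-based k-way merge that performs the Θ(n log k) work which, by the lower bound above, no PRAM algorithm can avoid.

    import heapq

    def k_way_merge(lists):
        """Merge k sorted lists of total length n in O(n log k) time.

        Equivalent in spirit to heapq.merge; written out to show the
        mechanism: the heap always holds at most one candidate per list.
        """
        heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
        heapq.heapify(heap)
        out = []
        while heap:
            val, i, j = heapq.heappop(heap)
            out.append(val)
            if j + 1 < len(lists[i]):
                heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
        return out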

10.
Let A be a sorted array of n numbers and B a sorted array of m numbers, both in nondecreasing order, with n ⩽ m. We consider the problem of determining, for each element A(j), j = 1, 2, …, n, the unique index i, 0 ⩽ i ⩽ m, such that B(i) ⩽ A(j) < B(i+1), under the convention B(0) = −∞ and B(m+1) = +∞.
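Sequentially this ranking is a one-liner with binary search (sketch below; the function name and the use of Python lists are my assumptions, and the 1-based B with sentinels B(0) = −∞, B(m+1) = +∞ maps to 0-based indexing here). The interest of the cited work lies in achieving it within fast parallel bounds.

    import bisect

    def rank_a_in_b(A, B):
        """For each A[j], return the count i of elements of B that are <= A[j],
        i.e. the unique i with B(i) <= A(j) < B(i+1) in the abstract's 1-based
        notation.  O(n log m) sequential time.
        """
        return [bisect.bisect_right(B, a) for a in A]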

11.
In the literature, there are quite a few sequential and parallel algorithms for solving problems on distance-hereditary graphs. With an n-vertex and m-edge distance-hereditary graph G, we show that the efficient domination problem on G can be solved in O(log² n) time using O(n + m) processors on a CREW PRAM. Moreover, if a binary tree representation of G is given, the problem can be optimally solved in O(log n) time using O(n/log n) processors on an EREW PRAM.

12.
The problem of merging k (k ⩾ 2) sorted lists is considered. We give an optimal parallel algorithm which takes O((n log k)/p + log n) time using p processors on a parallel random-access machine that allows concurrent reads and exclusive writes, where n is the total size of the input lists. This algorithm achieves O(log n) time using p = n log k/log n processors. Most of the previous research for this problem has focused on the case k = 2; very recently, parallel solutions for that case have been reported. Our solution is the first logarithmic-time optimal parallel algorithm for the problem for general k ⩾ 2. It can also be seen as a unified optimal parallel algorithm for sorting and merging. In order to support the algorithm, a new processor assignment strategy is also presented.

13.
Considers the use of massively parallel architectures to execute a trace-driven simulation of a single cache set. A method is presented for the least-recently-used (LRU) policy which, regardless of the set size C, runs in O(log N) time using N processors on the EREW (exclusive-read, exclusive-write) parallel model. A simpler LRU simulation algorithm is given that runs in O(C log N) time using N/log N processors. We present timings of this algorithm's implementation on the MasPar MP-1, a machine with 16384 processors. A broad class of reference-based line replacement policies is considered, which includes LRU as well as the least-frequently-used (LFU) and random replacement policies. A simulation method is presented for any such policy that, on any trace of length N directed to a C-line set, runs in O(C log N) time with high probability using N processors on the EREW model. The algorithms are simple, have very little space overhead, and are well suited for SIMD implementation.
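For concreteness, here is a plain sequential version of the object being simulated (identifiers are mine): a single C-line set under LRU replacement processing a trace of tags and counting misses. The cited work carries out exactly this computation in parallel.

    from collections import OrderedDict

    def simulate_lru_set(trace, C):
        """Trace-driven simulation of one C-line, LRU-managed cache set.

        trace: iterable of line tags mapped to this set; returns the number
        of misses.  O(N) sequential time for a trace of length N.
        """
        lines = OrderedDict()                  # resident tags, in LRU order
        misses = 0
        for tag in trace:
            if tag in lines:
                lines.move_to_end(tag)         # hit: mark most recently used
            else:
                misses += 1
                if len(lines) == C:
                    lines.popitem(last=False)  # evict the least recently used line
                lines[tag] = True
        return misses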

14.
An edge is a bisector of a simple path if it contains the middle point of the path. Let T = (V, E) be a tree. Given a source vertex s ∈ V, the single-source tree bisector problem is to find, for every vertex v ∈ V, a bisector of the simple path from s to v. The all-pairs tree bisector problem is to find, for every pair of vertices u, v ∈ V, a bisector of the simple path from u to v. In this paper, it is first shown that solving the single-source tree bisector problem of a weighted tree has a sequential time lower bound of Ω(n log n). Then, efficient parallel algorithms are proposed on the EREW PRAM for the single-source and all-pairs tree bisector problems. Two O(log n)-time single-source algorithms are proposed: one uses O(n) work and is for unweighted trees; the other uses O(n log n) work and is for weighted trees. Previous algorithms for the single-source problem could achieve the same O(log n) time and the same optimal work, O(n) for unweighted trees and O(n log n) for weighted trees, only on the CRCW PRAM; the contribution of our single-source algorithms is the improvement from CRCW to EREW. One all-pairs parallel algorithm is proposed; it requires O(log n) time using O(n²) work. All the proposed algorithms are cost-optimal. Efficient tree bisector algorithms have practical applications to several location problems on trees. Using the proposed algorithms, efficient parallel solutions for those problems are also presented.
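A simple sequential sketch of the single-source problem (all names are mine, and its quadratic worst case is far from the bounds above): root the tree at s, record parent pointers and distances, and for each vertex climb back toward s until reaching the edge that spans half the path length.

    def single_source_bisectors(n, adj, s):
        """For every vertex v != s, return the edge on the s-to-v path that
        contains the path's midpoint.

        adj: list of (neighbour, weight) lists for vertices 0..n-1.
        Worst-case O(n^2) time -- a reference sketch, not the paper's method.
        """
        parent = [-1] * n
        dist = [0.0] * n
        order, seen = [s], {s}
        for v in order:                          # BFS from s, recording parents
            for u, w in adj[v]:
                if u not in seen:
                    seen.add(u)
                    parent[u], dist[u] = v, dist[v] + w
                    order.append(u)
        bisector = {}
        for v in range(n):
            if v == s:
                continue
            half = dist[v] / 2.0
            u = v
            while dist[parent[u]] > half:        # climb until the edge spans the midpoint
                u = parent[u]
            bisector[v] = (parent[u], u)         # dist[parent[u]] <= half <= dist[u]
        return bisector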

15.
In this paper, a parallel algorithm is presented to find all cut-vertices and blocks of an interval graph. If the sorted list of the interval endpoints is given, the proposed algorithm takes O(log n) time and O(n/log n) processors on an EREW PRAM; if the sorted list is not given, the time and processor complexities are O(log n) and O(n), respectively.

16.
In the literature, there are quite a few sequential and parallel algorithms to solve problems on distance-hereditary graphs. Two well-known classes of graphs, trees and cographs, are contained in the class of distance-hereditary graphs. We consider the vertex-coloring problem on distance-hereditary graphs. Let T_d(|V|, |E|) and P_d(|V|, |E|) denote the time and processor complexities, respectively, required to construct a decomposition tree representation of a distance-hereditary graph G = (V, E) on a PRAM model M_d. Our algorithm runs in O(T_d(|V|, |E|) + log |V|) time using O(P_d(|V|, |E|) + |V|/log |V|) processors on M_d. The best known result for constructing a decomposition tree needs O(log² |V|) time using O(|V| + |E|) processors on a CREW PRAM. If a decomposition tree is provided as input, we solve the problem in O(log |V|) time using O(|V|/log |V|) processors on an EREW PRAM. To the best of our knowledge, no previous parallel algorithm for this problem on distance-hereditary graphs was known.

17.
Data compression based on dictionary techniques works by replacing phrases in the input string with indices into some dictionary. The dictionary can be static or dynamic. In static dictionary compression, the dictionary contains a predetermined, fixed set of entries. In dynamic dictionary compression, the dictionary changes its entries during compression. We present parallel algorithms for two parsing strategies for static dictionary compression. One is the optimal parsing strategy with dictionaries that have the prefix property, for which our algorithm requires O(L + log n) time and O(n) processors, where n is the number of symbols in the input string and L is the maximum length of the dictionary entries, while previous results run in O(L + log n) time using O(n²) processors or in O(L + log² n) time using O(n) processors. The other is the longest-fragment-first (LFF) parsing strategy, for which our algorithm requires O(L + log n) time and O(n log L) processors, while a previous result obtained O(L log n) time on O(n/log n) processors. For both strategies, we derive our parallel algorithms by modifying the corresponding on-line algorithms using a pointer-doubling technique.
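For orientation only, the simplest sequential parsing strategy is greedy longest-match against a static dictionary (sketch below; this is neither the optimal nor the LFF strategy discussed above, all identifiers are mine, and it assumes every single symbol is itself a dictionary entry so the parse always terminates).

    def greedy_parse(text, dictionary):
        """Greedy longest-match parsing with a static dictionary.

        dictionary: a set of phrases that includes every single symbol of the
        alphabet, so a match is always found.  O(n * L) time for an input of
        n symbols and maximum phrase length L.
        """
        L = max(len(w) for w in dictionary)
        i, phrases = 0, []
        while i < len(text):
            for length in range(min(L, len(text) - i), 0, -1):
                if text[i:i + length] in dictionary:
                    phrases.append(text[i:i + length])
                    i += length
                    break
        return phrases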

18.
We efficiently map a priority queue onto the hypercube architecture in a load-balanced manner, with no additional communication overhead, and present optimal parallel algorithms for performing insert and deletemin operations. Two implementations of these operations are proposed on the single-port hypercube model. In a b-bandwidth, n-item priority queue in which every node contains b items in sorted order, the first implementation achieves an optimal speedup of O(min{log n, b log n/log b + log log n}) for inserting b presorted items or deleting the b smallest items, where b = O(n^{1/c}) with c > 1. In particular, single insertion and deletion operations are cost-optimal and require O(log n/p + log p) time using O(log n/log log n) processors. The second implementation is more scalable, since it uses a larger number of processors, and attains a "nearly" optimal speedup on the single hypercube: the insertion of log n presorted items or the deletion of the log n smallest items is accomplished in O((log log n)²) time using O(log² n/log log n) processors. Finally, on the slightly more powerful pipelined hypercube model, the second implementation performs log n operations in O(log log n) time using O(log² n/log log n) processors, thus achieving an optimal speedup. To the best of our knowledge, our algorithms are the first implementations of b-bandwidth distributed priority queues that are load-balanced and yet guarantee optimal speedups.

19.
We consider the following problem. For a binary tree T = (V, E) where V = {1, 2, ..., n}, given its inorder traversal and either its preorder or its postorder traversal, reconstruct the binary tree. We present a new parallel algorithm for this problem. Our algorithm requires O(n) space. The main idea of our algorithm is to reduce the reconstruction process to merging two sorted sequences. With the best parallel merging algorithms, our algorithm can be implemented in O(log log n) time using O(n/log log n) processors on the CREW PRAM (or in O(log n) time using O(n/log n) processors on the EREW PRAM). Our result provides one more example of a fundamental problem which can be solved by optimal parallel algorithms in O(log log n) time on the CREW PRAM.
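A sequential sketch of the preorder + inorder case (names are mine; the paper's point is the reduction to merging and the resulting parallel bounds, not this recursion):

    def rebuild(preorder, inorder):
        """Reconstruct a binary tree from its preorder and inorder traversals.

        Assumes the labels 1..n are distinct and the two traversals come from
        the same tree.  Returns nested (label, left, right) tuples, with None
        for empty subtrees.  O(n) sequential time.
        """
        pos = {v: i for i, v in enumerate(inorder)}   # label -> inorder position
        it = iter(preorder)                           # preorder is consumed root-first

        def build(lo, hi):                            # inorder slice [lo, hi)
            if lo >= hi:
                return None
            root = next(it)
            k = pos[root]
            return (root, build(lo, k), build(k + 1, hi))

        return build(0, len(inorder))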

20.
We present a parallel algorithm for finding the convex hull of a sorted planar point set. Our algorithm runs in O(log n) time using O(n/log n) processors in the CREW PRAM computational model, which is optimal. One of the techniques we use to achieve these optimal bounds is the use of a parallel data structure which we call the hull tree.
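For comparison, the sequential counterpart that the parallel bound is measured against: since the points are already sorted, Andrew's monotone chain builds the hull in linear time (sketch below; identifiers are mine, and the paper's hull tree data structure is not shown).

    def cross(o, a, b):
        """z-component of (a - o) x (b - o); <= 0 means a non-left turn."""
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def hull_of_sorted_points(points):
        """Convex hull of points sorted by x (ties by y), via monotone chain.

        O(n) time given the sorted input; returns the hull vertices in
        counter-clockwise order.
        """
        if len(points) <= 2:
            return list(points)
        lower, upper = [], []
        for p in points:
            while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                lower.pop()
            lower.append(p)
        for p in reversed(points):
            while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                upper.pop()
            upper.append(p)
        return lower[:-1] + upper[:-1]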

