Similar Documents
1.
We present an algorithm for finding optimum partitions of simple monotone rectilinear polygons into star-shaped polygons. The algorithm may introduce Steiner points, and its time complexity is O(n), where n is the number of vertices in the polygon. We then use this algorithm to obtain an O(n log n) approximation algorithm for partitioning simple rectilinear polygons into star-shaped polygons, with the size of the partition being at most six times the optimum.

2.
Consider a collection of disjoint polygons in the plane containing a total of n edges. We show how to build, in O(n^2) time and space, a data structure from which, in O(n) time, we can compute the visibility polygon of a given point with respect to the polygon collection. As an application of this structure, the visibility graph of the given polygons can be constructed in O(n^2) time and space. This implies that the shortest path that connects two points in the plane and avoids the polygons in our collection can be computed in O(n^2) time, improving earlier O(n^2 log n) results.

3.
In a proportional contact representation of a planar graph, each vertex is represented by a simple polygon with area proportional to a given weight, and edges are represented by adjacencies between the corresponding pairs of polygons. In this paper we first study proportional contact representations that use rectilinear polygons without wasted areas (white space). In this setting, the best known algorithm for proportional contact representation of a maximal planar graph uses 12-sided rectilinear polygons and takes O(n log n) time. We describe a new algorithm that guarantees 10-sided rectilinear polygons and runs in O(n) time. We also describe a linear-time algorithm for proportional contact representation of planar 3-trees with 8-sided rectilinear polygons and show that this is optimal, as there exist planar 3-trees that require 8-sided polygons. We then show that a maximal outer-planar graph admits a proportional contact representation using rectilinear polygons with 6 sides when the outer boundary is a rectangle and with 4 sides otherwise. Finally, we study maximal series-parallel graphs. Here we show that O(1)-sided rectilinear polygons are not possible unless we allow holes, but 6-sided polygons can be achieved with arbitrarily small holes.

4.
We prove that the greedy triangulation heuristic for minimum weight triangulation of convex polygons yields solutions within a constant factor of the optimum. For interesting classes of convex polygons, we derive small upper bounds on the constant approximation factor. Our results contrast with Kirkpatrick's Ω(n) bound on the approximation factor of the Delaunay triangulation heuristic for minimum weight triangulation of convex n-vertex polygons. On the other hand, we present a straightforward implementation of the greedy triangulation heuristic for an n-vertex convex point set or a convex polygon taking O(n^2) time and O(n) space. To derive the latter result, we show that given a convex polygon P, one can find for all vertices v of P a shortest diagonal of P incident to v in linear time. Finally, we observe that the greedy triangulation for convex polygons having the so-called semicircular property can be constructed in O(n log n) time.
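To make the heuristic concrete, here is a rough Python sketch of greedy triangulation on a convex polygon: repeatedly take the shortest remaining diagonal that does not properly cross a diagonal already chosen. This is a cubic-time brute-force version, not the O(n^2) time, O(n) space implementation described above; the vertex-list representation and helper names are our own.

```python
from itertools import combinations
from math import dist

def _orient(a, b, c):
    v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (v > 0) - (v < 0)

def _proper_cross(p, q, r, s):
    """True iff segments pq and rs cross at a point interior to both."""
    o1, o2 = _orient(p, q, r), _orient(p, q, s)
    o3, o4 = _orient(r, s, p), _orient(r, s, q)
    return o1 != o2 and o3 != o4 and 0 not in (o1, o2, o3, o4)

def greedy_triangulation(poly):
    """poly: convex polygon as a list of (x, y) vertices in order.
    Returns the chosen diagonals as index pairs (i, j)."""
    n = len(poly)
    # All diagonals (vertex pairs that are not polygon edges), shortest first.
    diags = [(i, j) for i, j in combinations(range(n), 2)
             if (j - i) % n not in (1, n - 1)]
    diags.sort(key=lambda ij: dist(poly[ij[0]], poly[ij[1]]))
    chosen = []
    for i, j in diags:
        if all(not _proper_cross(poly[i], poly[j], poly[a], poly[b])
               for a, b in chosen):
            chosen.append((i, j))
    return chosen  # a triangulation of a convex n-gon has n - 3 diagonals

# Example: a convex pentagon
print(greedy_triangulation([(0, 0), (4, 0), (5, 3), (2, 5), (-1, 3)]))
# [(0, 3), (1, 3)]
```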

5.
The length-constrained heaviest path (LCHP) in a weighted tree T, where each edge is assigned a weight and a length, is the path P in T with maximum total path weight and total path length bounded by a given value B. This paper presents an O(n log n) time LCHP algorithm which utilizes a data structure constructed from the spine decomposition of the input tree. This is an improvement over the existing algorithm by Wu et al. (1999), which runs in O(n log^2 n) time. Our method also improves on a previous O(n log n) time algorithm by Kim (2005) for the special case of finding a longest nonnegative path in a constant-degree tree, in that we can handle trees of arbitrary degree within the same time bounds.
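As a baseline for the problem statement above, the following Python sketch finds the length-constrained heaviest path by brute force: a DFS from every vertex enumerates all paths, keeping the best weight among those with total length at most B. This is an O(n^2) illustration, not the O(n log n) spine-decomposition algorithm of the paper; the adjacency-map representation is an assumption of ours.

```python
def heaviest_path_within_length(adj, B):
    """adj: {u: [(v, weight, length), ...]} for an undirected tree.
    Returns the maximum total weight over all paths of total length <= B
    (single-vertex paths count as weight 0). Brute force, for illustration."""
    best = float("-inf")

    def dfs(u, parent, weight, length):
        nonlocal best
        if length <= B:
            best = max(best, weight)
        for v, w, l in adj[u]:
            if v != parent:
                dfs(v, u, weight + w, length + l)

    for root in adj:                 # every vertex may be a path endpoint
        dfs(root, None, 0, 0)
    return best

# Example: the path 0-1-2 with (weight, length) edges (5, 2) and (-1, 1)
tree = {0: [(1, 5, 2)], 1: [(0, 5, 2), (2, -1, 1)], 2: [(1, -1, 1)]}
print(heaviest_path_within_length(tree, B=2))   # 5 (the single edge 0-1)
```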

6.
We present a simple parallel algorithm for computing the greatest common divisor (gcd) of two n-bit integers in the Common version of the CRCW model of computation. The run-time of the algorithm in terms of bit operations is O(n/log n), using n^(1+ε) processors, where ε is any positive constant. This improves on the algorithm of Kannan, Miller, and Rudolph, the only sublinear algorithm known previously, both in run time and in number of processors; they require O(n log log n/log n) time and n^2 log^2 n processors, respectively, in the same CRCW model. We give an alternative implementation of our algorithm in the CREW model. Its run-time is O(n log log n/log n), using n^(1+ε) processors. Both implementations can be modified to yield the extended gcd, within the same complexity bounds.

7.
M. Orlowski, Algorithmica, 1990, 5(1-4):65-73
A rectangle A and a set S of n points in A are given. We present a new simple algorithm for the so-called largest empty rectangle problem, i.e., the problem of finding a maximum-area rectangle contained in A and not containing any point of S in its interior. The computational complexity of the presented algorithm is O(n log n + s), where s is the number of possible restricted rectangles considered. Moreover, the expected performance is O(n log n).
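A brute-force reference for the largest empty rectangle problem: since some maximum-area empty rectangle can be grown until each side touches either a point of S or the boundary of A, it suffices to enumerate candidate side coordinates drawn from the point coordinates and the boundary. The sketch below does exactly that in roughly O(n^5) time; it only illustrates the problem, not the algorithm of the paper, and the rectangle encoding is ours.

```python
from itertools import combinations

def largest_empty_rectangle(A, S):
    """A = (x1, y1, x2, y2) axis-parallel bounding rectangle, S = list of points.
    Returns (area, (left, bottom, right, top)) of a maximum-area axis-parallel
    rectangle inside A with no point of S in its interior. Brute force over
    candidate side coordinates; for illustration only."""
    x1, y1, x2, y2 = A
    xs = sorted({x1, x2, *(p[0] for p in S)})
    ys = sorted({y1, y2, *(p[1] for p in S)})
    best = (0, None)
    for left, right in combinations(xs, 2):
        for bottom, top in combinations(ys, 2):
            # keep the rectangle only if no point lies strictly inside it
            if all(not (left < px < right and bottom < py < top) for px, py in S):
                area = (right - left) * (top - bottom)
                if area > best[0]:
                    best = (area, (left, bottom, right, top))
    return best

print(largest_empty_rectangle((0, 0, 10, 10), [(5, 5), (2, 8)]))
# (50, (0, 0, 10, 5)) -- the bottom half of A
```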

8.
A string w is primitive if it is not a power of another string (i.e., writing w = v^k implies k = 1). Conversely, w is a square if w = vv, with v a primitive string. A string x is square-free if it has no nonempty substring of the form ww. It is shown that the square-freedom of a string of n symbols over an arbitrary alphabet can be tested by a CRCW PRAM with n processors in O(log n) time and linear auxiliary space. If the cardinality of the input alphabet is bounded by a constant independent of the input size, then the number of processors can be reduced to n/log n without affecting the time complexity of this strategy. The fastest sequential algorithms solve this problem in O(n log n) or O(n) time, depending on whether the cardinality of the input alphabet is unbounded or bounded, and either performance is known to be optimal within its class. More elaborate constructions lead to a CRCW PRAM algorithm for detecting, within the same n-processor bounds, all positioned squares in x in time O(log n) and using linear auxiliary space. The fastest sequential algorithms solve this problem in O(n log n) time, and such a performance is known to be optimal.
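The definitions above translate directly into a naive sequential test for square-freedom: check every substring of the form ww. The sketch below runs in roughly O(n^3) time, far from the optimal sequential bounds quoted above, and is meant only to pin down the property being tested; the function name is ours.

```python
def is_square_free(x):
    """True if no nonempty substring of x has the form ww.
    Naive check; optimal sequential bounds are O(n log n) (unbounded
    alphabet) or O(n) (bounded alphabet)."""
    n = len(x)
    for i in range(n):
        # try every possible half-length of a square starting at i
        for half in range(1, (n - i) // 2 + 1):
            if x[i:i + half] == x[i + half:i + 2 * half]:
                return False
    return True

print(is_square_free("abcacb"))   # True: no substring of the form ww
print(is_square_free("abcabc"))   # False: the whole string is ("abc")("abc")
```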

9.
We present an algorithm for maintaining the biconnected components of a graph during a sequence of edge insertions and deletions. It requires linear storage and preprocessing time. The amortized running time for insertions and for deletions is O(m^(2/3)), where m is the number of edges in the graph. Any query of the form "Are the vertices u and v biconnected?" can be answered in time O(1). This is the first sublinear algorithm for this problem. We can also output all articulation points separating any two vertices efficiently. If the input is a plane graph, the amortized running time for insertions and deletions drops to O(√n log n) and the query time is O(log^2 n), where n is the number of vertices in the graph. The best previously known solution takes time O(n^(2/3)) per update or query.

10.
In this paper a general technique for reducing processors in simulation without any increase in time is described. This results in an O(√log n) time algorithm for simulating one step of PRIORITY on TOLERANT with a processor-time product of O(n log log n); the same as that for simulating PRIORITY on ARBITRARY. This is used to obtain an O(log n/log log n + √log n (log log m - log log n)) time algorithm for sorting n integers from the set {0,...,m - 1}, m ≥ n, with a processor-time product of O(n log log m log log n) on a TOLERANT CRCW PRAM. New upper and lower bounds for the ordered chaining problem on an allocated COMMON CRCW model are also obtained. The algorithm for ordered chaining takes O(log n/log log n) time on an allocated PRAM of size n. It is shown that this result is best possible (up to a constant multiplicative factor) by obtaining a lower bound of Ω(r log n/(log r + log log n)) for finding the first (leftmost) live processor on an allocated COMMON PRAM of size n with r-slow virtual processors (one processor simulates r processors of the allocated PRAM). As a result, for the ordered chaining problem, the processor-time product has to be at least Ω(n log n/log log n) for any poly-logarithmic time algorithm. The algorithm for the ordered chaining problem yields an O(log N/log log N) time algorithm for (stable) sorting of n integers from the set {0,...,m - 1} with n processors on a COMMON CRCW PRAM; here N = max(n, m). In particular, if m = n^(O(1)), then sorting takes Θ(log n/log log n) time on both TOLERANT and COMMON CRCW PRAMs. The processor-time product for TOLERANT is O(n (log log n)^2); the algorithm for COMMON uses n processors.
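As a sequential point of reference for the integer-sorting problem considered here (stably sorting n integers drawn from {0,...,m - 1}), a textbook counting sort runs in O(n + m) time. The sketch below is this baseline only, not the PRAM algorithm of the paper; the function name is ours.

```python
def stable_counting_sort(a, m):
    """Stably sort the integers in a, each drawn from {0, ..., m-1},
    in O(n + m) sequential time."""
    count = [0] * (m + 1)
    for x in a:
        count[x + 1] += 1
    for k in range(m):
        count[k + 1] += count[k]      # prefix sums: first output slot of key k
    out = [None] * len(a)
    for x in a:                       # left-to-right scan keeps the sort stable
        out[count[x]] = x
        count[x] += 1
    return out

print(stable_counting_sort([3, 1, 4, 1, 5, 0, 2], m=6))   # [0, 1, 1, 2, 3, 4, 5]
```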

11.
This paper re-examines, in a unified framework, two classic approaches to the problem of finding a longest common subsequence (LCS) of two strings, and proposes faster implementations for both. Let l be the length of an LCS between two strings of length m and n ≥ m, respectively, and let s be the alphabet size. The first revised strategy follows the paradigm of a previous O(ln) time algorithm by Hirschberg. The new version can be implemented in time O(lm · min{log s, log m, log(2n/m)}), which is profitable when the input strings differ considerably in size (a looser bound for both versions is O(mn)). The second strategy improves on the Hunt-Szymanski algorithm. The latter takes time O((r + n) log n), where r ≤ mn is the total number of matches between the two input strings. Such a performance is quite good (O(n log n)) when r ~ n, but it degrades to Θ(mn log n) in the worst case. On the other hand, the variation presented here is never worse than linear time in the product mn. The exact time bound derived for this second algorithm is O(m log n + d log(2mn/d)), where d ≤ r is the number of dominant matches (elsewhere referred to as minimal candidates) between the two strings. Both algorithms require an O(n log s) preprocessing that is nearly standard for the LCS problem, and they make use of simple and handy auxiliary data structures.
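For orientation, here is a compact Hunt-Szymanski-style LCS length computation: list the matches of each symbol of the first string in the second string, scan them right-to-left, and maintain a threshold array with binary search, for O((r + n) log n) time overall. This is the classical scheme the paper improves on, not its refined O(m log n + d log(2mn/d)) variant; all identifiers are ours.

```python
from bisect import bisect_left
from collections import defaultdict

def lcs_length(a, b):
    """Length of a longest common subsequence of a and b,
    Hunt-Szymanski style: O((r + n) log n), r = number of matches."""
    occ = defaultdict(list)          # positions of each symbol in b
    for j, ch in enumerate(b):
        occ[ch].append(j)
    # thresholds[k] = smallest end position in b of a common
    # subsequence of length k+1 found so far (strictly increasing)
    thresholds = []
    for ch in a:
        # scan matches right-to-left so that two matches in the same
        # row of the match matrix cannot extend one another
        for j in reversed(occ.get(ch, ())):
            k = bisect_left(thresholds, j)
            if k == len(thresholds):
                thresholds.append(j)
            else:
                thresholds[k] = j
    return len(thresholds)

print(lcs_length("XMJYAUZ", "MZJAWXU"))   # 4, e.g. "MJAU"
```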

12.
We consider a chromatic variant of the art gallery problem, where each guard is assigned one of k distinct colors. A placement of such colored guards is conflict-free if each point of the polygon is seen by some guard whose color appears exactly once among the guards visible to that point. What is the smallest number k(n) of colors that ensures a conflict-free covering of all n-vertex polygons? We call this the conflict-free chromatic art gallery problem. Our main result shows that k(n) is O(log n) for orthogonal and for monotone polygons, and O(log^2 n) for arbitrary simple polygons. By contrast, if all guards visible from each point must have distinct colors, then k(n) is Ω(n) for arbitrary simple polygons, as shown by Erickson and LaValle (Robotics: Science and Systems, vol. VII, pp. 81-88, 2012). The problem is motivated by applications in distributed robotics and wireless sensor networks, but is also of interest from a theoretical point of view.

13.
Efficient data structures are given for the following two query problems: (i) preprocess a set P of simple polygons with a total of n edges, so that all polygons of P intersected by a query segment can be reported efficiently, and (ii) preprocess a set S of n segments, so that the connected components of the arrangement of S intersected by a query segment can be reported quickly. In these problems we do not want to return the polygons or connected components explicitly (i.e., we do not wish to report the segments defining the polygon, or the segments lying in the connected components). Instead, we assume that the polygons (or connected components) are labeled and we just want to report their labels. We present data structures of size O(n^(1+ε)) that can answer a query in time O(n^(1+ε) + k), where k is the output size. If the edges of P (or the segments in S) are orthogonal, the query time can be improved to O(log n + k) using O(n log n) space. We also present data structures that can maintain the connected components as we insert new segments. For arbitrary segments the amortized update and query times are O(n^(1/2+ε)) and O(n^(1/2+ε) + k), respectively, and the space used by the data structure is O(n^(1+ε)). If we allow O(n^(4/3+ε)) space, the amortized update and query times can be improved to O(n^(1/3+ε)) and O(n^(1/3+ε) + k), respectively. For orthogonal segments the amortized update and query times are O(log^2 n) and O(log^2 n + k log n), and the space used by the data structure is O(n log n). Some other related results are also mentioned. Part of this work was done while the second author was visiting the first author on a grant from the Dutch Organization for Scientific Research (N.W.O.). The research of the second author was also supported by the ESPRIT Basic Research Action No. 3075 (project ALCOM). The research of the first author was supported by National Science Foundation Grant CCR-91-06514.

14.
An optimal algorithm for the maximum-density path in a tree
We study the problem of finding the maximum-density path in a tree. Using spine decomposition and a linear-time algorithm for the maximum-density segment problem, we develop an O(n log n) time algorithm, which improves the previously best result of O(n log^2 n) obtained by centroid decomposition. We also show that the time complexity is optimal in the algebraic computation tree model.

15.
Let s be a point source of light inside a polygon P of n vertices. A polygonal path from s to some point t inside P is called a diffuse reflection path if the turning points of the path lie on edges of P. A diffuse reflection path is said to be optimal if it has the minimum number of reflections on the path. The problem of computing a diffuse reflection path from s to t inside P has not been considered explicitly in the past. We present three different algorithms for this problem which produce suboptimal paths. For constructing such a path, the first algorithm uses a greedy method, the second algorithm uses a transformation of a minimum link path, and the third algorithm uses the edge-edge visibility graph of P. The first two algorithms are for polygons without holes, and they run in O(n + k log n) time, where k denotes the number of reflections in the constructed path. The third algorithm is for polygons with or without holes, and it runs in O(n^2) time. The number of reflections in the path produced by this third algorithm can be at most three times that of an optimal diffuse reflection path. Though the combinatorial approach used in the third algorithm gives a better bound on the number of reflections on the path, the first and the second algorithms stand on the merit of their elegant geometric approaches based on local geometric information.

16.
An integrative overview of the algorithmic characteristics of three well-known polynomial-time heuristics for the undirected Steiner minimum tree problem is given: the shortest path heuristic (SPH), the distance network heuristic (DNH), and the average distance heuristic (ADH). The performance of these single-pass heuristics (and some variants) is compared and contrasted with several heuristics based on repetitive applications of the SPH. It is shown that two of these repetitive SPH variants generate solutions that in general are better than solutions obtained by any single-pass heuristic. The worst-case time complexity of the two new variants is O(pn^3) and O(p^3 n^2), while the worst-case time complexity of the SPH, DNH, and ADH is respectively O(pn^2), O(m + n log n), and O(n^3), where p is the number of vertices to be spanned, n is the total number of vertices, and m is the total number of edges. However, the use of a few simple tests is shown to provide large reductions of problem instances (both in terms of vertices and in terms of edges). As a consequence, a substantial speed-up is obtained, so that the repetitive variants are also competitive with respect to running times.
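To illustrate the single-pass SPH mentioned above, the sketch below grows a tree from one terminal and repeatedly attaches the nearest remaining terminal along a shortest path to the current tree (the Takahashi-Matsuyama flavor of the heuristic). It is a plain O(p·m log n)-style baseline, not any of the repetitive variants analyzed in the paper; the graph encoding and function names are assumptions of ours.

```python
import heapq

def dijkstra(adj, sources):
    """Multi-source Dijkstra. adj: {u: [(v, w), ...]}. Returns (dist, parent)."""
    dist = {u: float("inf") for u in adj}
    parent = {}
    pq = []
    for s in sources:
        dist[s] = 0.0
        heapq.heappush(pq, (0.0, s))
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                parent[v] = u
                heapq.heappush(pq, (nd, v))
    return dist, parent

def steiner_sph(adj, terminals):
    """Shortest path heuristic: grow a tree from one terminal, repeatedly
    attaching the nearest remaining terminal along a shortest path to the
    current tree. Returns a set of tree edges."""
    terminals = list(terminals)
    tree_nodes = {terminals[0]}
    tree_edges = set()
    remaining = set(terminals[1:])
    while remaining:
        dist, parent = dijkstra(adj, tree_nodes)
        t = min(remaining, key=lambda x: dist[x])
        v = t
        while v not in tree_nodes:       # walk the path from t back to the tree
            u = parent[v]
            tree_edges.add((min(u, v), max(u, v)))
            tree_nodes.add(v)
            v = u
        remaining.discard(t)
        remaining -= tree_nodes          # terminals absorbed along the way
    return tree_edges

# Example: a small weighted graph; span terminals 0, 2, 3
g = {0: [(1, 1), (3, 4)], 1: [(0, 1), (2, 1), (3, 1)],
     2: [(1, 1), (3, 5)], 3: [(0, 4), (1, 1), (2, 5)]}
print(steiner_sph(g, [0, 2, 3]))   # {(0, 1), (1, 2), (1, 3)}, total weight 3
```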

17.
We consider the following problem: Given an unsorted array of n elements, and a sequence of intervals in the array, compute the median in each of the subarrays defined by the intervals. We describe a simple algorithm which needs O(n log k + k log n) time to answer k such median queries. This improves previous algorithms by a logarithmic factor and matches a comparison lower bound for k = O(n). The space complexity of our simple algorithm is O(n log n) in the pointer machine model, and O(n) in the RAM model. In the latter model, a more involved O(n) space data structure can be constructed in O(n log n) time, where the time per query is reduced to O(log n/log log n). We also give efficient dynamic variants of both data structures, achieving O(log^2 n) query time using O(n log n) space in the comparison model and O((log n/log log n)^2) query time using O(n log n/log log n) space in the RAM model, and show that in the cell-probe model, any data structure which supports updates in O(log^(O(1)) n) time must have Ω(log n/log log n) query time. Our approach naturally generalizes to higher-dimensional range median problems, where element positions and query ranges are multidimensional: it reduces a range median query to a logarithmic number of range counting queries.
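Before any preprocessing, the obvious per-query baseline for the range median problem is simply to sort each queried subarray and read off its median, at O(l log l) cost for an interval of length l. The sketch below shows this baseline only (not the O(n log k + k log n) algorithm above); the lower-median convention for even-length intervals is our choice.

```python
def range_medians(arr, queries):
    """arr: unsorted list; queries: list of (lo, hi) inclusive index pairs.
    Answers each query independently by sorting the subarray; the lower
    median is returned for even-length intervals."""
    out = []
    for lo, hi in queries:
        sub = sorted(arr[lo:hi + 1])          # O(l log l) per query
        out.append(sub[(hi - lo) // 2])       # lower median
    return out

print(range_medians([5, 1, 9, 3, 7, 2], [(0, 5), (1, 3), (2, 4)]))   # [3, 3, 7]
```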

18.
We present a parallel algorithm for performing boolean set operations on generalized polygons that have holes in them. The intersection algorithm has a processor complexity of O(m^2 n^2) processors and a time complexity of O(max(2 log m, log^2 n)), where m is the maximum number of vertices in any loop of a polygon, and n is the maximum number of loops per polygon. The union and difference algorithms have a processor complexity of O(m^2 n^2) and time complexities of O(log m) and O(max(2 log m, log n)), respectively. The algorithm is based on the EREW PRAM model. The algorithm tries to minimize the intersection point computations by intersecting only a subset of the loops of the polygons, taking advantage of the topological structure of the two polygons. We believe this will result in better performance on the average as compared to the worst case. Though all the algorithms presented here are deterministic, randomized algorithms such as sample sort can be used for the sorting subcomponent of the algorithms to obtain fast practical implementations.
