Similar Articles
1.
The problem of computing the convex hull of a set of n sorted points in the plane is one of the fundamental tasks in image processing, pattern recognition, cellular network design, and robotics, among many others. Somewhat surprisingly, in spite of a great deal of effort, the best previously known algorithm to solve this problem on a reconfigurable mesh of size √n×√n ran in O(log² n) time. It remained open for more than ten years whether this important problem could be solved in sublogarithmic time. Our main contribution is to provide the first breakthrough: we propose an almost optimal convex hull algorithm running in O((log log n)²) time on a reconfigurable mesh of size √n×√n. With slight modifications, this algorithm can be implemented to run in O((log log n)²) time on a reconfigurable mesh of size √(n/log log n)×√(n/log log n). Clearly, the latter algorithm is work-optimal. We also show that any algorithm that computes the convex hull of a set of n sorted points on an n-processor reconfigurable mesh must take Ω(log log n) time. Our result opens the door to an entire slew of efficient convex-hull-based algorithms on reconfigurable meshes.
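For readers unfamiliar with the underlying problem, the following is a minimal sequential sketch of computing the convex hull of x-sorted points (Andrew's monotone chain). It only illustrates what the paper above computes; it is not the reconfigurable-mesh algorithm, and the function name and the input assumptions (points sorted by x, ties by y, no duplicates) are ours.

```python
def convex_hull_sorted(points):
    """Convex hull of 2-D points already sorted by x (ties by y), CCW order.

    Classic monotone-chain scan: O(n) for sorted input.  Illustrative only;
    unrelated to the parallel reconfigurable-mesh algorithms discussed above.
    """
    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    if len(points) <= 2:
        return list(points)
    lower, upper = [], []
    for p in points:                      # build the lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(points):            # build the upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]        # endpoints would otherwise repeat
```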

2.
Abstract. We present an optimal parallel randomized algorithm for the Voronoi diagram of a set of n nonintersecting (except possibly at endpoints) line segments in the plane. Our algorithm runs in O(log n) time with high probability using O(n) processors on a CRCW PRAM. This algorithm is optimal in terms of work done, since the sequential time bound for this problem is Ω(n log n). Our algorithm improves by an O(log n) factor on the previously best known deterministic parallel algorithm, given by Goodrich, Ó Dúnlaing, and Yap, which runs in O(log² n) time using O(n) processors. We obtain this result by using a new "two-stage" random sampling technique. By choosing large samples in the first stage of the algorithm, we avoid the hurdle of problem-size "blow-up" that is typical in recursive parallel geometric algorithms. We combine the two-stage sampling technique with efficient search and merge procedures to obtain an optimal algorithm. This technique also gives an alternative optimal algorithm for the Voronoi diagram of points (all other optimal parallel algorithms for this problem use the transformation to three-dimensional half-space intersection).

3.
A matrix A of size m×n containing items from a totally ordered universe is termed monotone if, for every i, j with 1⩽i<j⩽m, … In case m=n^ε for some constant ε (0<ε⩽1), our algorithm runs in O(log log n) time.

4.
In this paper we consider the problem of computing the connected components of the complement of a given graph. We describe a simple sequential algorithm for this problem, which works on the input graph and not on its complement, and which for a graph on n vertices and m edges runs in optimal O(n+m) time. Moreover, unlike previous linear co-connectivity algorithms, this algorithm admits efficient parallelization, leading to an optimal O(log n)-time and O((n+m)/log n)-processor algorithm on the EREW PRAM model of computation. It is worth noting that, for the related problem of computing the connected components of a graph, no optimal deterministic parallel algorithm is currently available. The co-connectivity algorithms find applications in a number of problems. In fact, we also include a parallel recognition algorithm for weakly triangulated graphs, which takes advantage of the parallel co-connectivity algorithm and achieves an O(log² n) time complexity using O((n+m²)/log n) processors on the EREW PRAM model of computation.
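The abstract does not spell the sequential algorithm out, but a standard way to find the connected components of a complement graph in O(n+m) amortized time, without ever building the complement, is the "shrinking unvisited set" BFS sketched below. It is offered only as an illustration of the idea and is not necessarily the authors' exact procedure; all names are ours.

```python
from collections import deque

def complement_components(n, edges):
    """Connected components of the complement of G = (V, E), without building it.

    n: number of vertices, labelled 0..n-1; edges: iterable of (u, v) pairs of G.
    Each scan step is charged either to a removed vertex or to an edge of G,
    giving O(n + m) amortized time.
    """
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    unvisited = set(range(n))
    components = []
    while unvisited:
        start = unvisited.pop()
        comp, queue = [start], deque([start])
        while queue:
            u = queue.popleft()
            keep = set()
            for w in unvisited:
                if w in adj[u]:
                    keep.add(w)        # w stays; cost charged to edge (u, w) of G
                else:
                    comp.append(w)     # w is a neighbour of u in the complement
                    queue.append(w)
            unvisited = keep
        components.append(comp)
    return components
```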

5.
Finding a vast array of applications, the list-ranking problem has emerged as one of the fundamental techniques in parallel algorithm design. Surprisingly, the best previously known algorithm to rank a list of n items on a reconfigurable mesh of size … ran in O(log n) time. It had remained open for more than eight years to obtain a faster algorithm for this important problem. Our main contribution is to provide the first breakthrough: we propose a deterministic list-ranking algorithm that runs in O(log* n) time as well as a randomized one running in O(1) expected time, both on a reconfigurable mesh of size …. Our results open the door to a large number of efficient list-ranking-based algorithms on reconfigurable meshes. Received February 1997, and in final form February 1998.
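As background for what "list ranking" means, here is the textbook pointer-jumping scheme (Wyllie's algorithm), simulated sequentially. It uses O(log n) synchronous rounds, so it is far slower than the O(log* n) and O(1) reconfigurable-mesh results claimed above and is shown purely to make the problem concrete; the function name and list encoding are our assumptions.

```python
import math

def list_rank(succ):
    """Rank a linked list by pointer jumping (Wyllie's PRAM scheme, run sequentially).

    succ[i] is the successor of node i; the tail points to itself.
    Returns rank[i] = number of links between node i and the tail.
    """
    n = len(succ)
    nxt = list(succ)
    rank = [0 if succ[i] == i else 1 for i in range(n)]
    for _ in range(max(1, math.ceil(math.log2(max(n, 2))))):
        # Read the old arrays and write fresh ones to mimic a synchronous PRAM round.
        rank = [rank[i] + rank[nxt[i]] for i in range(n)]
        nxt = [nxt[nxt[i]] for i in range(n)]
    return rank

# Example: the list 0 -> 1 -> 2 (tail) gives ranks [2, 1, 0].
assert list_rank([1, 2, 2]) == [2, 1, 0]
```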

6.
Finding a vast array of applications, the list-ranking problem has emerged as one of the fundamental techniques in parallel algorithm design. Surprisingly, the best previously known algorithm to rank a list of n items on a reconfigurable mesh of size … ran in O(log n) time. It had remained open for more than eight years to obtain a faster algorithm for this important problem. Our main contribution is to provide the first breakthrough: we propose a deterministic list-ranking algorithm that runs in O(log* n) time as well as a randomized one running in O(1) expected time, both on a reconfigurable mesh of size …. Our results open the door to a large number of efficient list-ranking-based algorithms on reconfigurable meshes. Received November 1996, and in final form February 1998.

7.
In this paper, we present a progressive compression algorithm for textured surface meshes, which is able to handle polygonal non‐manifold meshes as well as discontinuities in the texture mapping. Our method applies iterative batched simplifications, which create high quality levels of detail by preserving both the geometry and the texture mapping. The main features of our algorithm are (1) generic edge collapse and vertex split operators suited for polygonal non‐manifold meshes with arbitrary texture seam configurations, and (2) novel geometry‐driven prediction schemes and entropy reduction techniques for efficient encoding of connectivity and texture mapping. To our knowledge, our method is the first progressive algorithm to handle polygonal non‐manifold models. For geometry and connectivity encoding of triangular manifolds and non‐manifolds, our method is competitive with state‐of‐the‐art and even better at low/medium bitrates. Moreover, our method allows progressive encoding of texture coordinates with texture seams; it outperforms state‐of‐the‐art approaches for texture coordinate encoding. We also present a bit‐allocation framework which multiplexes mesh and texture refinement data using a perceptually‐based image metric, in order to optimize the quality of levels of detail.
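To make the edge-collapse operator mentioned in point (1) concrete, here is a bare-bones collapse on a plain triangle list. It ignores everything that makes the paper's operator interesting (texture seams, non-manifold bookkeeping, vertex splits for the inverse operation) and is only an illustrative sketch with names of our own choosing.

```python
def edge_collapse(faces, u, v):
    """Collapse edge (u, v) into vertex u on a triangle index list.

    faces: iterable of (a, b, c) vertex-index triples.
    Every occurrence of v is rewritten to u; triangles that become degenerate
    (fewer than three distinct corners) are dropped.
    """
    new_faces = []
    for face in faces:
        face = tuple(u if w == v else w for w in face)
        if len(set(face)) == 3:        # keep only non-degenerate triangles
            new_faces.append(face)
    return new_faces

# Example: collapsing edge (0, 1) removes the triangle (0, 1, 2) entirely.
assert edge_collapse([(0, 1, 2), (1, 2, 3)], 0, 1) == [(0, 2, 3)]
```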

8.
In computer graphics, 3D shapes can be represented in many forms, including meshes, voxels, multi-view images, point clouds, parametric surfaces, and implicit surfaces. The 3D mesh is one of the most common representations: it is the collection of vertices, edges, and faces that make up a 3D object, and it is typically used to describe both the surface and the volumetric properties of digital 3D objects. Over the past two decades, mesh-based virtual reality, real-time simulation, and interactive 3D design have been widely deployed in industrial, medical, and entertainment settings, and watermarking, steganography, and steganalysis techniques that use 3D meshes as cover media have likewise attracted researchers' attention. Compared with steganography in images, audio, or video, 3D meshes offer advantages of their own, such as flexible embedding methods and a variable carrier format. This paper reviews the development of 3D mesh steganography and steganalysis and gives a systematic summary and classification of existing work. According to the embedding method and embedding location, steganographic algorithms are divided into four categories: two-state modulation steganography, least-significant-bit steganography, permutation steganography, and transform-domain steganography; according to the feature-extraction perspective, steganalysis algorithms are divided into two categories: universal steganalysis and targeted steganalysis. The techniques in each category are then introduced, and their strengths and weaknesses are analyzed in terms of security, robustness, capacity, and computational efficiency; the current state of the art is summarized, and a performance comparison of steganalysis algorithms on two datasets under different embedding rates is provided. Finally, the limitations of existing 3D steganography and steganalysis techniques are discussed and potential research directions are explored, with the aim of guiding future researchers in further advancing 3D steganography and steganalysis.

9.
This paper studies Voronoi diagrams on 2‐manifold meshes based on the geodesic metric (a.k.a. geodesic Voronoi diagrams or GVDs), which have polyline generators. We show that our general setting leads to situations more complicated than conventional 2D Euclidean Voronoi diagrams as well as point‐source based GVDs, since a typical bisector contains line segments, hyperbolic segments and parabolic segments. To tackle this challenge, we introduce a new concept, called the local Voronoi diagram (LVD), which is a combination of an additively weighted Voronoi diagram and a line‐segment Voronoi diagram on a mesh triangle. We show that when restricted to a single mesh triangle, the GVD is a subset of the LVD and only two types of mesh triangles can contain GVD edges. Based on these results, we propose an efficient algorithm for constructing the GVD with polyline generators. Our algorithm runs in O(nN log N) time and takes O(nN) space on an n‐face mesh with m generators, where N = max{m, n}. Computational results on real‐world models demonstrate the efficiency of our algorithm.

10.
In this paper, we describe a novel approach for the reconstruction of animated meshes from a series of time‐deforming point clouds. Given a set of unordered point clouds that have been captured by a fast 3‐D scanner, our algorithm is able to compute coherent meshes which approximate the input data at arbitrary time instances. Our method is based on the computation of an implicit function in ℝ⁴ that approximates the time‐space surface of the time‐varying point cloud. We then use the four‐dimensional implicit function to reconstruct a polygonal model for the first time‐step. By sliding this template mesh along the time‐space surface in an as‐rigid‐as‐possible manner, we obtain reconstructions for further time‐steps which have the same connectivity as the previously extracted mesh while recovering rigid motion exactly. The resulting animated meshes allow accurate motion tracking of arbitrary points and are well suited for animation compression. We demonstrate the qualities of the proposed method by applying it to several data sets acquired by real‐time 3‐D scanners.

11.
We consider the problem of generating random permutations with uniform distribution. That is, we require that for an arbitrary permutation π of n elements, with probability 1/n! the machine halts with the i-th output cell containing π(i), for 1 ≤ i ≤ n. We study this problem on two models of parallel computation: the CREW PRAM and the EREW PRAM. The main result of the paper is an algorithm for generating random permutations that runs in O(log log n) time and uses O(n^{1+o(1)}) processors on the CREW PRAM. This is the first o(log n)-time CREW PRAM algorithm for this problem. On the EREW PRAM we present a simple algorithm that generates a random permutation in time O(log n) using n processors and O(n) space. This algorithm outperforms each of the previously known algorithms for the exclusive-write PRAMs. The common and novel feature of both our algorithms is to first design a suitable random switching network generating a permutation and then to simulate this network on the PRAM model in a fast way. Received November 1996; revised March 1997.
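For contrast with these parallel bounds, the standard sequential baseline is the Fisher-Yates shuffle, which meets the exact-uniformity requirement (each of the n! permutations with probability 1/n!) in O(n) time. The sketch below is only this baseline, not the switching-network construction of the paper; the function name is ours.

```python
import random

def random_permutation(n):
    """Uniformly random permutation of 0..n-1 via the Fisher-Yates shuffle.

    Assuming random.randint draws independent uniform integers, every one of
    the n! permutations is produced with probability exactly 1/n!.
    """
    perm = list(range(n))
    for i in range(n - 1, 0, -1):
        j = random.randint(0, i)            # uniform over {0, ..., i}
        perm[i], perm[j] = perm[j], perm[i]
    return perm
```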

12.
For 2⩽k⩽n, the k-merge problem is to merge a collection of k sorted sequences of total length n into a new sorted sequence. The k-merge problem is fundamental, as it provides a common generalization of both merging and sorting. The main contribution of this work is to give simple and intuitive work-time optimal algorithms for the k-merge problem on three PRAM models, thus settling the status of the k-merge problem. We first prove that Ω(n log k) work is required to solve the k-merge problem on the PRAM models. We then show that the EREW-PRAM requires Ω(log n) time and that both the CREW-PRAM and the CRCW-PRAM require Ω(log log n+log k) time, provided that the amount of work is bounded by O(n log k). Our first k-merge algorithm runs in Θ(log n) time and performs Θ(n log k) work on the EREW-PRAM. Finally, we design a work-time optimal CREW-PRAM k-merge algorithm that runs in Θ(log log n+log k) time and performs Θ(n log k) work. This latter algorithm is also work-time optimal on the CRCW-PRAM model. Our algorithms completely settle the status of the k-merge problem on the three main PRAM models.
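A convenient way to see where the Ω(n log k) work bound comes from is the sequential k-way merge with a size-k min-heap, which performs O(n log k) comparisons. The sketch below is this sequential baseline only, not the PRAM algorithms of the paper; the function name is ours.

```python
import heapq

def k_merge(seqs):
    """Merge k sorted sequences into one sorted list using a size-k min-heap.

    Each of the n output elements costs O(log k) heap work, so the total is
    O(n log k), matching the work bound discussed above.
    """
    heap = [(seq[0], i, 0) for i, seq in enumerate(seqs) if seq]
    heapq.heapify(heap)
    merged = []
    while heap:
        val, i, j = heapq.heappop(heap)
        merged.append(val)
        if j + 1 < len(seqs[i]):
            heapq.heappush(heap, (seqs[i][j + 1], i, j + 1))
    return merged

# Example: three sorted runs merged into one.
assert k_merge([[1, 4], [2, 5], [3]]) == [1, 2, 3, 4, 5]
```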

13.
The problem of merging k (k⩾2) sorted lists is considered. We give an optimal parallel algorithm which takes O((n log k)/p + log n) time using p processors on a parallel random access machine that allows concurrent reads and exclusive writes, where n is the total size of the input lists. This algorithm achieves O(log n) time using p = n log k/log n processors. Most of the previous research for this problem has been focused on the case when k=2. Very recently, parallel solutions for the case when k=2 have been reported. Our solution is the first logarithmic-time optimal parallel algorithm for the problem when k⩾2. It can also be seen as a unified optimal parallel algorithm for sorting and merging. In order to support the algorithm, a new processor assignment strategy is also presented.

14.
We present a high-capacity steganographic approach for three-dimensional (3D) polygonal meshes. We first use the representation information of a 3D model to embed messages. Our approach successfully combines both the spatial domain and the representation domain for steganography. In the spatial domain, every vertex of a 3D polygonal mesh can be represented by at least three bits using a modified multi-level embed procedure (MMLEP). In the representation domain, the representation order of vertices and polygons and even the topology information of polygons can be represented with an average of six bits per vertex using the proposed representation rearrangement procedure (RRP). Experimental results show that the proposed technique is efficient and secure, has high capacity and low distortion, and is robust against affine transformations. Our technique is a feasible alternative to other steganographic approaches.
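As a very rough illustration of spatial-domain embedding in vertex data (and emphatically not the MMLEP/RRP of this paper), one bit can be hidden in the parity of a quantized coordinate. Everything in this toy sketch, including the quantization step size, is an assumption of ours.

```python
def embed_bit(coord, bit, step=1e-4):
    """Toy spatial-domain embedding: force the parity of the quantized
    coordinate index to equal the message bit (0 or 1)."""
    q = round(coord / step)
    if q % 2 != bit:
        q += 1
    return q * step

def extract_bit(coord, step=1e-4):
    """Recover the embedded bit from the quantization parity."""
    return round(coord / step) % 2

# Example: embed a 1 into an x-coordinate and read it back.
assert extract_bit(embed_bit(0.123456, 1)) == 1
```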

15.
A New Mesh Simplification Algorithm Based on Vertex Clustering
In computer graphics, polygonal meshes are commonly used to describe object models. Because rendering time and storage grow in proportion to the number of polygons, excessively large mesh models are often impractical, and model simplification therefore has broad application prospects in computer animation, virtual reality, interactive visualization, and other graphics applications. This paper proposes a new mesh simplification algorithm based on vertex clustering. The algorithm uses an octree to adaptively partition the mesh, presents an effective error-control method based on the point-to-plane distance, and achieves substantial simplification by clustering the vertices of the original mesh within a user-specified error bound. The algorithm is simple to implement, fast, and preserves boundary features well. A set of examples illustrates its effectiveness.
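To give a flavour of vertex-clustering simplification, the sketch below snaps vertices to a uniform grid and merges each cell's vertices at their centroid. It deliberately omits the paper's key ingredients (octree-based adaptive partitioning and point-to-plane error control); the grid cell size, the function name, and the NumPy dependency are our assumptions.

```python
import numpy as np

def grid_cluster_simplify(vertices, faces, cell_size):
    """Uniform-grid vertex clustering: a simpler cousin of octree clustering.

    vertices: (n, 3) float array; faces: iterable of vertex-index triples.
    Vertices falling in the same grid cell are merged at the cell centroid;
    triangles that collapse (two or more corners merged) are dropped.
    """
    vertices = np.asarray(vertices, dtype=float)
    cells = np.floor(vertices / cell_size).astype(int)
    cell_index = {}                 # cell key -> new vertex index
    sums, counts = [], []
    remap = np.empty(len(vertices), dtype=int)
    for i, key in enumerate(map(tuple, cells)):
        if key not in cell_index:
            cell_index[key] = len(sums)
            sums.append(np.zeros(3))
            counts.append(0)
        idx = cell_index[key]
        sums[idx] += vertices[i]
        counts[idx] += 1
        remap[i] = idx
    new_vertices = np.array([s / c for s, c in zip(sums, counts)])
    new_faces = []
    for a, b, c in faces:
        ra, rb, rc = remap[a], remap[b], remap[c]
        if len({ra, rb, rc}) == 3:  # drop degenerate triangles
            new_faces.append((ra, rb, rc))
    return new_vertices, new_faces
```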

16.
Given a set of n intervals representing an interval graph, the problem of finding a maximum matching between pairs of disjoint (nonintersecting) intervals has been considered in the sequential model. In this paper we present parallel algorithms for computing maximum cardinality matchings among pairs of disjoint intervals in interval graphs in the EREW PRAM and hypercube models. For the general case of the problem, our algorithms compute a maximum matching in O(log³ n) time using O(n/log² n) processors on the EREW PRAM and using n processors on hypercubes. For the case of proper interval graphs, our algorithm runs in O(log n) time using O(n) processors if the input intervals are not given already sorted, and using O(n/log n) processors otherwise, on the EREW PRAM. On n-processor hypercubes, our algorithm for the proper interval case takes O(log n log log n) time for unsorted input and O(log n) time for sorted input. Our parallel results also lead to optimal sequential algorithms for computing maximum matchings among disjoint intervals. In addition, we present an improved parallel algorithm for maximum matching between overlapping intervals in proper interval graphs. Received November 20, 1995; revised September 3, 1998.

17.
Consider a set P of points in the plane sorted by x-coordinate. A point p in P is said to be a proximate point if there exists a point q on the x-axis such that p is the closest point to q over all points in P. The proximate point problem is to determine all the proximate points in P. Our main contribution is to propose optimal parallel algorithms for solving instances of size n of the proximate points problem. We begin by developing a work-time optimal algorithm running in O(log log n) time and using n/log log n Common-CRCW processors. We then go on to show that this algorithm can be implemented to run in O(log n) time using n/log n EREW processors. In addition to being work-time optimal, our EREW algorithm turns out to also be time-optimal. Our second main contribution is to show that the proximate points problem finds interesting, and quite unexpected, applications to digital geometry and image processing. As a first application, we present a work-time optimal parallel algorithm for finding the convex hull of a set of n points in the plane sorted by x-coordinate; this algorithm runs in O(log log n) time using n/log log n Common-CRCW processors. We then show that this algorithm can be implemented to run in O(log n) time using n/log n EREW processors. Next, we show that the proximate points algorithms afford us work-time optimal (resp. time-optimal) parallel algorithms for various fundamental digital geometry and image processing problems.
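The connection to convex hulls hinted at above can be made explicit: the squared distance from (x, 0) to p = (a, b) is x² − 2ax + (a² + b²), so after dropping the common x² term, p is proximate exactly when the line y = −2ax + (a² + b²) appears on the lower envelope of all such lines. For x-sorted points with distinct x-coordinates this envelope can be found by a single stack scan, as in the sequential sketch below; this is only our illustration of the reduction, not the parallel algorithm of the paper.

```python
def proximate_points(points):
    """Proximate points of x-sorted planar points with distinct x-coordinates.

    points: list of (a, b) pairs sorted by a.  Point p is proximate iff its
    line y = -2*a*x + (a*a + b*b) lies on the lower envelope of all such
    lines, which this O(n) stack scan computes (slopes strictly decrease).
    """
    lines = [(-2.0 * a, a * a + b * b, (a, b)) for a, b in points]

    def redundant(l1, l2, l3):
        # The middle line never attains the minimum once l1 and l3 are present.
        (m1, c1, _), (m2, c2, _), (m3, c3, _) = l1, l2, l3
        return (c3 - c1) * (m1 - m2) <= (c2 - c1) * (m1 - m3)

    stack = []
    for line in lines:
        while len(stack) >= 2 and redundant(stack[-2], stack[-1], line):
            stack.pop()
        stack.append(line)
    return [p for _, _, p in stack]

# Example: the middle point (1, 10) is screened from the x-axis by its neighbours.
assert proximate_points([(0, 1), (1, 10), (2, 1)]) == [(0, 1), (2, 1)]
```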

18.
In this paper we present a new framework for subdivision surface approximation of three‐dimensional models represented by polygonal meshes. Our approach, particularly suited for mechanical or Computer Aided Design (CAD) parts, produces a mixed quadrangle‐triangle control mesh, optimized in terms of face and vertex numbers while remaining independent of the connectivity of the input mesh. Our algorithm begins with a decomposition of the object into surface patches. The main idea is to approximate the region boundaries first and then the interior data. Thus, for each patch, a first step approximates the boundaries with subdivision curves (associated with control polygons) and creates an initial subdivision surface by linking the boundary control points with respect to the lines of curvature of the target surface. Then, a second step optimizes the initial subdivision surface by iteratively moving control points and enriching regions according to the error distribution. The final control mesh defining the whole model is then created by assembling all of the local subdivision control meshes. This control polyhedron is much more compact than the original mesh and visually represents the same shape after several subdivision steps, hence it is particularly suitable for compression and visualization tasks. Experiments conducted on several mechanical models have proven the coherency and the efficiency of our algorithm, compared with existing methods.
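Since the compactness argument rests on the fact that a coarse control mesh regenerates a dense surface after a few subdivision steps, the sketch below shows the simplest possible illustration: one step of 1-to-4 midpoint subdivision of a triangle mesh. It applies no smoothing rules and is not the quad/triangle subdivision scheme used in the paper; all names are ours.

```python
def midpoint_subdivide(vertices, faces):
    """One step of 1-to-4 midpoint subdivision of a triangle mesh.

    vertices: list of (x, y, z) tuples; faces: list of vertex-index triples.
    Each triangle is split into four by inserting edge midpoints, which are
    shared between neighbouring triangles via the edge_mid cache.
    """
    vertices = [tuple(v) for v in vertices]
    edge_mid = {}

    def midpoint(a, b):
        key = (min(a, b), max(a, b))
        if key not in edge_mid:
            va, vb = vertices[a], vertices[b]
            vertices.append(tuple((x + y) / 2.0 for x, y in zip(va, vb)))
            edge_mid[key] = len(vertices) - 1
        return edge_mid[key]

    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return vertices, new_faces
```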

19.
Signed distance fields obtained from polygonal meshes are commonly used in various applications. However, they can have C¹ discontinuities, causing creases to appear when applying operations such as blending or metamorphosis. The focus of this work is to efficiently evaluate the signed distance function and to apply a smoothing filter to it while preserving the shape of the initial mesh. The resulting function is smooth almost everywhere, while preserving the exact shape of the polygonal mesh. Due to its low complexity, the proposed filtering technique remains fast compared to its main alternatives providing C¹-continuous distance field approximation. Several applications are presented, such as blending, metamorphosis and heterogeneous modelling with polygonal meshes.

20.
We address the problem of generating quality surface triangle meshes from 3D point clouds sampled on piecewise smooth surfaces. Using a feature detection process based on the covariance matrices of Voronoi cells, we first extract from the point cloud a set of sharp features. Our algorithm also runs on the input point cloud a reconstruction process, such as Poisson reconstruction, providing an implicit surface. A feature preserving variant of a Delaunay refinement process is then used to generate a mesh approximating the implicit surface and containing a faithful representation of the extracted sharp edges. Such a mesh provides an enhanced trade‐off between accuracy and mesh complexity. The whole process is robust to noise and made versatile through a small set of parameters which govern the mesh sizing, approximation error and shape of the elements. We demonstrate the effectiveness of our method on a variety of models including laser scanned datasets ranging from indoor to outdoor scenes.
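To illustrate the kind of covariance-based feature detection mentioned above, the sketch below scores each point by the surface-variation ratio of its local covariance, using k-nearest-neighbour neighbourhoods instead of the Voronoi cells used in the paper. The SciPy dependency, the neighbourhood size k, and the function name are all our assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree   # assumed available

def sharp_feature_scores(points, k=16):
    """Per-point surface-variation score from local covariance analysis.

    points: (n, 3) float array with n > k.  For each point, the covariance of
    its k nearest neighbours is formed; the ratio of the smallest eigenvalue
    to the eigenvalue sum is near 0 on smooth sheets and grows near sharp
    edges and corners, so thresholding the score gives candidate features.
    """
    points = np.asarray(points, dtype=float)
    k = min(k, len(points) - 1)
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)        # neighbours incl. the point itself
    scores = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)
        eigvals = np.linalg.eigvalsh(cov)       # ascending order
        scores[i] = eigvals[0] / max(eigvals.sum(), 1e-12)
    return scores
```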
