Similar Literature
20 similar documents retrieved (search time: 31 ms)
1.
The technique of Delaunay refinement has been recognized as a versatile tool to generate Delaunay meshes of a variety of geometries. Despite its usefulness, it suffers from one lacuna that limits its application: it does not scale well with the mesh size. As the sample point set grows, the Delaunay triangulation starts stressing the available memory space, which ultimately stalls any effective progress. A natural solution to the problem is to maintain the point set in clusters and run the refinement on each individual cluster. However, this needs a careful point insertion strategy and a balanced coordination among the neighboring clusters to ensure consistency across individual meshes. We design an octree-based localized Delaunay refinement method for meshing surfaces in three dimensions which meets these goals. We prove that the algorithm terminates and provide guarantees about structural properties of the output mesh. Experimental results show that the method can avoid memory thrashing while computing large meshes and thus scales much better than the standard Delaunay refinement method.
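The clustering idea above can be illustrated with a simple octree partition of the sample points. This is only a sketch of the localization step, not the paper's algorithm: the leaf capacity, depth guard, and function names are illustrative assumptions, and the per-cluster refinement with cross-cluster coordination is omitted.

```python
import numpy as np

def octree_clusters(points, capacity=10_000, max_depth=12):
    """Partition a 3D point set into octree leaf clusters of at most
    `capacity` points (or until `max_depth` is reached). Returns a list of
    index arrays, one per leaf. Only the clustering stage is shown; the
    localized refinement inside each leaf and the coordination between
    neighboring leaves are not."""
    points = np.asarray(points, dtype=float)

    def split(idx, lo, hi, depth):
        if len(idx) <= capacity or depth >= max_depth:
            return [idx]
        mid = 0.5 * (lo + hi)
        leaves = []
        for code in range(8):                 # the 8 octants of the cell
            mask = np.ones(len(idx), dtype=bool)
            new_lo, new_hi = lo.copy(), hi.copy()
            for axis in range(3):
                if (code >> axis) & 1:
                    mask &= points[idx, axis] >= mid[axis]
                    new_lo[axis] = mid[axis]
                else:
                    mask &= points[idx, axis] < mid[axis]
                    new_hi[axis] = mid[axis]
            child = idx[mask]
            if len(child):
                leaves.extend(split(child, new_lo, new_hi, depth + 1))
        return leaves

    return split(np.arange(len(points)), points.min(axis=0), points.max(axis=0), 0)

# Example: cluster 100k random samples into pieces a refiner could handle locally.
clusters = octree_clusters(np.random.rand(100_000, 3))
print(len(clusters), "clusters, largest:", max(len(c) for c in clusters))
```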

2.
Feature preserving Delaunay mesh generation from 3D multi-material images
Generating realistic geometric models from 3D segmented images is an important task in many biomedical applications. Segmented 3D images impose particular challenges for meshing algorithms because they contain multi-material junctions forming features such as surface patches, edges and corners. The resulting meshes should preserve these features to ensure the visual quality and the mechanical soundness of the models. We present a feature preserving Delaunay refinement algorithm which can be used to generate high-quality tetrahedral meshes from segmented images. The idea is to explicitly sample corners and edges from the input image and to constrain the Delaunay refinement algorithm to preserve these features in addition to the surface patches. Our experimental results on segmented medical images have shown that, within a few seconds, the algorithm outputs a tetrahedral mesh in which each material is represented as a consistent submesh without gaps and overlaps. The optimization property of the Delaunay triangulation makes these meshes suitable for the purpose of realistic visualization or finite element simulations.

3.
We present an isosurface meshing algorithm, DelIso, based on the Delaunay refinement paradigm. This paradigm has been successfully applied to mesh a variety of domains with guarantees for topology, geometry, mesh gradedness, and triangle shape. A restricted Delaunay triangulation, dual of the intersection between the surface and the three-dimensional Voronoi diagram, is often the main ingredient in Delaunay refinement. Computing and storing three-dimensional Voronoi/Delaunay diagrams become bottlenecks for Delaunay refinement techniques since isosurface computations generally have large input datasets and output meshes. A highlight of our algorithm is that we find a simple way to recover the restricted Delaunay triangulation of the surface without computing the full 3D structure. We employ techniques for efficient ray tracing of isosurfaces to generate surface sample points, and demonstrate the effectiveness of our implementation using a variety of volume datasets.

4.
We develop a novel isotropic remeshing method based on the constrained centroidal Delaunay mesh (CCDM), a generalization of centroidal patch triangulation from 2D to mesh surfaces. Our method starts with resampling an input mesh with a vertex distribution according to a user-defined density function. The initial remeshing result is then progressively optimized by alternately recovering the Delaunay mesh and moving each vertex to the centroid of its 1-ring neighborhood. The key to making such simple iterations work is an efficient optimization framework that combines both local and global optimization methods. Our method is parameterization-free, thus avoiding the metric distortion introduced by parameterization and generating better-shaped triangles. Our method guarantees that the topology of the surface is preserved without requiring geodesic information. We conduct various experiments to demonstrate the simplicity, efficacy, and robustness of the presented method.
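As a concrete reading of the vertex update described above, the sketch below moves each vertex to the centroid of its 1-ring neighbours. It is a minimal illustration under the assumption of a known 1-ring adjacency list; the full CCDM method additionally recovers the Delaunay connectivity after every update and keeps the vertices constrained to the input surface, neither of which is shown here.

```python
import numpy as np

def one_ring_centroid_step(vertices, one_rings, damping=1.0):
    """One relaxation step: move every vertex toward the centroid of its
    1-ring neighbours. `one_rings[i]` lists the vertex indices adjacent to
    vertex i. Surface constraints and Delaunay connectivity recovery,
    which the full method alternates with this step, are omitted."""
    updated = vertices.copy()
    for i, ring in enumerate(one_rings):
        if ring:                                   # skip isolated vertices
            centroid = vertices[ring].mean(axis=0)
            updated[i] = (1.0 - damping) * vertices[i] + damping * centroid
    return updated

# Toy usage: a planar fan; the hub vertex moves to the centroid of its ring.
V = np.array([[0.3, 0.2, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])
rings = [[1, 2, 3], [0, 2], [0, 1, 3], [0, 2]]
V_new = one_ring_centroid_step(V, rings)
```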

5.
We present a method for producing quad-dominant subdivided meshes, which supports both adaptive refinement and adaptive coarsening. A hierarchical structure is stored implicitly in a standard half-edge data structure, while allowing us to efficiently navigate through the different levels of subdivision. Subdivided meshes contain a majority of quad elements and a moderate number of triangles and pentagons in the regions of transition across different levels of detail. Topological LOD editing is controlled with local conforming operators, which support both mesh refinement and mesh coarsening. We show two possible applications of this method: we define an adaptive subdivision surface scheme that is topologically and geometrically consistent with the Catmull–Clark subdivision; and we present a remeshing method that produces semi-regular adaptive meshes.

6.
When simulating fluids, tetrahedral methods provide flexibility and ease of adaptivity that Cartesian grids find difficult to match. However, this approach has so far been limited by two conflicting requirements. First, accurate simulation requires quality Delaunay meshes and the use of circumcentric pressures. Second, meshes must align with potentially complex moving surfaces and boundaries, necessitating continuous remeshing. Unfortunately, sacrificing mesh quality in favour of speed yields inaccurate velocities and simulation artifacts. We describe how to eliminate the boundary-matching constraint by adapting recent embedded boundary techniques to tetrahedra, so that neither air nor solid boundaries need to align with mesh geometry. This enables the use of high quality, arbitrarily graded, non-conforming Delaunay meshes, which are simpler and faster to generate. Temporal coherence can also be exploited by reusing meshes over adjacent timesteps to further reduce meshing costs. Lastly, our free surface boundary condition eliminates the spurious currents that previous methods exhibited for slow or static scenarios. We provide several examples demonstrating that our efficient tetrahedral embedded boundary method can substantially increase the flexibility and accuracy of adaptive Eulerian fluid simulation.

7.
8.
In medical imaging, the generation of surface representations of anatomical objects obtained by labeling images from various modalities is a critical component for visualization, simulation, and analysis. The interfaces between labeled regions can meet at arbitrary angles and with complex topologies, causing most automatic meshing algorithms to fail. We apply a recent Delaunay refinement algorithm to generate high quality triangular meshes that approximate the interface surfaces. This algorithm has proven guarantees for meshing piecewise-smooth shapes and its implementation overhead is low. Consequently, the approach is applicable to labeled datasets generated from binary segmentations as well as from probabilistic segmentation algorithms. We show the effectiveness of this technique on data from a variety of medical fields and discuss its ability to control the quality and size of the output meshes. The same algorithm can be used to generate tetrahedral meshes of the segmentation space.

9.
Updating a Delaunay triangulation when data points are slightly moved is the computational bottleneck in variational methods for mesh generation and remeshing. Utilizing the connectivity coherence between two consecutive Delaunay triangulations for computation speedup is the key to solving this problem. Our contribution is an effective filtering technique that confirms most bi-cells whose Delaunay connectivities remain unchanged after the points are perturbed. Based on bi-cell flipping, we present an efficient algorithm for updating two-dimensional and three-dimensional Delaunay triangulations of dynamic point sets. Experimental results show that our algorithm outperforms previous methods.
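A filter of the kind described above must decide cheaply whether the connectivity of a bi-cell (two triangles sharing an edge) is still Delaunay after its vertices move. The 2D sketch below shows the underlying empty-circumcircle test; it assumes counter-clockwise orientation, uses plain floating point rather than exact predicates, and is not the paper's specific filtering criterion.

```python
import numpy as np

def incircle(a, b, c, d):
    """> 0 iff d lies strictly inside the circumcircle of triangle (a, b, c),
    assuming (a, b, c) is counter-clockwise. Floating-point version of the
    standard determinant predicate (no exactness guarantees)."""
    m = np.array([[p[0] - d[0], p[1] - d[1],
                   (p[0] - d[0])**2 + (p[1] - d[1])**2] for p in (a, b, c)])
    return np.linalg.det(m)

def bicell_edge_is_delaunay(a, b, c, d):
    """Bi-cell = triangles (a, b, c) and (a, b, d) sharing edge ab, with
    (a, b, c) counter-clockwise. The edge stays locally Delaunay after a
    perturbation iff d is not inside the circumcircle of (a, b, c); only
    bi-cells failing this check need to be flipped."""
    return incircle(a, b, c, d) <= 0.0

# Example: two triangles above and below the shared edge ab.
a, b = (0.0, 0.0), (1.0, 0.0)
c, d = (0.5, 1.0), (0.5, -1.0)
print(bicell_edge_is_delaunay(a, b, c, d))  # True: d is outside the circumcircle
```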

10.
The discovery of meaningful parts of a shape is required for many geometry processing applications, such as parameterization, shape correspondence, and animation. It is natural to consider primitives such as spheres, cylinders and cones as the building blocks of shapes, and thus to discover parts by fitting such primitives to a given surface. This approach, however, will break down if primitive parts have undergone almost-isometric deformations, as is the case, for example, for articulated human models. We suggest that parts can be discovered instead by finding intrinsic primitives, which we define as parts that possess an approximate intrinsic symmetry. We employ the recently-developed method of computing discrete approximate Killing vector fields (AKVFs) to discover intrinsic primitives by investigating the relationship between the AKVFs of a composite object and the AKVFs of its parts. We show how to leverage this relationship with a standard clustering method to extract k intrinsic primitives and the remaining asymmetric parts of a shape for a given k. We demonstrate the value of this approach for identifying the prominent symmetry generators of the parts of a given shape. Additionally, we show how our method can be modified slightly to segment an entire surface without marking asymmetric connecting regions and compare this approach to state-of-the-art methods using the Princeton Segmentation Benchmark.

11.
Generation of a finite element MESH from stereolithography (STL) files
The aim of the method proposed here is to show the possibility of generating adaptive surface meshes suitable for the finite element method directly from an approximated boundary representation of an object created with CAD software. First, we describe the boundary representation, which is composed of a simple triangulation of the surface of the object. Then we show how to obtain a conforming, size-adapted mesh. Size adaptation accounts for the geometric approximation and follows an isotropic size map provided by an error estimator. The mesh can be used “as is” for a finite element computation (with shell elements), or can serve as a surface mesh to initiate a volume meshing algorithm (Delaunay or advancing front). Mesh generation is based on the Delaunay method, combined with refinement algorithms and smoothing. Finally, we show that not using the parametric representation of the geometric model allows us to overcome some of the limitations of conventional meshing software based on an exact representation of the geometry.

12.
Deep neural networks provide a promising tool for incorporating semantic information in geometry processing applications. Unlike image and video processing, however, geometry processing requires handling unstructured geometric data, and thus data representation becomes an important challenge in this framework. Existing approaches tackle this challenge by converting point clouds, meshes, or polygon soups into regular representations using, e.g., multi-view images, volumetric grids or planar parameterizations. In each of these cases, geometric data representation is treated as a fixed pre-process that is largely disconnected from the machine learning tool. In contrast, we propose to optimize for the geometric representation during the network learning process using a novel metric alignment layer. Our approach maps unstructured geometric data to a regular domain by minimizing the metric distortion of the map using the regularized Gromov–Wasserstein objective. This objective is parameterized by the metric of the target domain and is differentiable; thus, it can be easily incorporated into a deep network framework. Furthermore, the objective aims to align the metrics of the input and output domains, promoting consistent output for similar shapes. We show the effectiveness of our layer within a deep network trained for shape classification, demonstrating state-of-the-art performance for nonrigid shapes.
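For reference, the (unregularized) Gromov–Wasserstein distortion that such a metric alignment objective builds on can be evaluated directly from the two pairwise-distance matrices and a soft coupling. The sketch below uses the standard algebraic decomposition of the cost; the entropic regularization and the differentiable network layer of the paper are not reproduced, and the variable names are illustrative.

```python
import numpy as np

def gw_distortion(Dx, Dy, T):
    """Gromov-Wasserstein distortion of a soft coupling T (n x m) between a
    source with pairwise metric Dx (n x n) and a target with metric Dy (m x m):
        sum_{i,j,k,l} (Dx[i,k] - Dy[j,l])^2 * T[i,j] * T[k,l]
    computed via the expansion into marginal terms plus a cross term."""
    p = T.sum(axis=1)                           # source marginal
    q = T.sum(axis=0)                           # target marginal
    const = (Dx**2 @ p) @ p + (Dy**2 @ q) @ q   # terms independent of the pairing
    cross = np.sum(T * (Dx @ T @ Dy.T))         # sum_{ijkl} Dx_ik Dy_jl T_ij T_kl
    return const - 2.0 * cross

# Toy usage: two random point sets coupled uniformly.
X, Y = np.random.rand(30, 3), np.random.rand(40, 2)
Dx = np.linalg.norm(X[:, None] - X[None], axis=-1)
Dy = np.linalg.norm(Y[:, None] - Y[None], axis=-1)
T = np.full((30, 40), 1.0 / (30 * 40))          # uniform coupling
print(gw_distortion(Dx, Dy, T))
```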

13.
Style transfer aims to apply the style of an exemplar model to a target one, while retaining the target's structure. The main challenge in this process is to algorithmically distinguish style from structure, a high-level, potentially ill-posed cognitive task. Inspired by cognitive science research, we recast style transfer in terms of shape analogies. In IQ testing, shape analogy queries present the subject with three shapes: source, target and exemplar, and ask them to select an output such that the transformation, or analogy, from the exemplar to the output is similar to that from the source to the target. The logical process involved in identifying the source-to-target analogies implicitly detects the structural differences between the source and target and can be used effectively to facilitate style transfer. Since the exemplar has a similar structure to the source, applying the analogy to the exemplar will provide the output we seek. The main technical challenge we address is to compute the source-to-target analogies, consistent with human logic. We observe that the typical analogies we look for consist of a small set of simple transformations, which when applied to the exemplar generate a continuous, seamless output model. To assemble a shape analogy, we compute an optimal set of source-to-target transformations, such that the assembled analogy best fits these criteria. The assembled analogy is then applied to the exemplar shape to produce the desired output model. We use the proposed framework to seamlessly transfer a variety of style properties between 2D and 3D objects and demonstrate significant improvements over the state of the art in style transfer. We further show that our framework can be used to successfully complete partial scans with the help of a user-provided structural template, coherently propagating scan style across the completed surfaces.

14.
We present the first 3D algorithm capable of answering the question: what would a Mandelbrot-like set in the shape of a bunny look like? More concretely, can we find an iterated quaternion rational map whose potential field contains an isocontour with a desired shape? We show that it is possible to answer this question by casting it as a shape optimization that discovers novel, highly complex shapes. The problem can be written as an energy minimization, the optimization can be made practical by using an efficient method for gradient evaluation, and convergence can be accelerated by using a variety of multi-resolution strategies. The resulting shapes are not invariant under common operations such as translation, and instead undergo intricate, non-linear transformations.
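The basic object being optimized is an iterated quaternion map and the potential field it induces. The sketch below shows only the classic polynomial case q → q² + c with a smooth escape-time potential estimate; the paper searches over far more general rational maps, and the constant c, starting point, and bailout threshold here are arbitrary assumptions.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) arrays."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def julia_potential(q0, c, max_iter=50, bailout=4.0):
    """Approximate log-potential of the quaternion Julia map q -> q^2 + c
    at the 4-vector q0. For a degree-2 map the potential is roughly
    log|q_n| / 2^n once the orbit escapes; 0 is returned for points that
    stay bounded (treated as inside the filled Julia set)."""
    q = np.asarray(q0, dtype=float)
    for n in range(max_iter):
        q = quat_mul(q, q) + c
        r = np.linalg.norm(q)
        if r > bailout:
            return np.log(r) / (2.0 ** (n + 1))
    return 0.0

c = np.array([-0.2, 0.6, 0.2, 0.0])
print(julia_potential([1.2, 0.3, 0.0, 0.0], c))
```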

15.
The generation of discrete stream surfaces is an important and challenging task in scientific visualization, which can be considered a particular instance of geometric modeling. The quality of numerically integrated stream surfaces depends on a number of parameters that can be controlled locally, such as the time step or the distance of adjacent vertices on the front line. In addition, there is a parameter that cannot be controlled locally: stream surface meshes tend to show high-quality, well-shaped elements only if the current front line is “globally” approximately perpendicular to the flow direction. We analyze the impact of this geometric property and present a novel solution – a stream surface integrator that forces the front line to be perpendicular to the flow and that generates quad-dominant meshes with well-shaped and well-aligned elements. It is based on the integration of a scaled version of the flow field, and requires repeated minimization of an error functional along the current front line. We show that this leads to computing the 1-dimensional kernel of a bidiagonal matrix: a linear problem that can be solved efficiently. We compare our method with existing stream surface integrators and apply it to a number of synthetic and real-world data sets.
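The linear problem mentioned at the end reduces to a null-space computation that a simple recurrence solves. The sketch below assumes an (n-1) × n upper-bidiagonal system with nonzero superdiagonal entries, which is an illustrative shape rather than the paper's exact matrix.

```python
import numpy as np

def bidiagonal_kernel(d, e):
    """Kernel vector of the (n-1) x n upper-bidiagonal matrix B with
    B[i, i] = d[i] and B[i, i+1] = e[i]. Each row imposes
        d[i] * x[i] + e[i] * x[i+1] = 0,
    so the kernel is spanned by the recurrence x[i+1] = -d[i] * x[i] / e[i]
    (assumes all e[i] != 0 so the kernel is one-dimensional)."""
    d = np.asarray(d, dtype=float)
    e = np.asarray(e, dtype=float)
    x = np.empty(len(d) + 1)
    x[0] = 1.0
    for i in range(len(d)):
        x[i + 1] = -d[i] * x[i] / e[i]
    return x / np.linalg.norm(x)   # normalized kernel direction

# Quick check against the definition B @ x = 0.
d, e = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
B = np.zeros((3, 4))
for i in range(3):
    B[i, i], B[i, i + 1] = d[i], e[i]
x = bidiagonal_kernel(d, e)
print(np.allclose(B @ x, 0.0))  # True
```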

16.
Point cloud data is one of the most common types of input for geometric processing applications. In this paper, we study the point cloud density adaptation problem that underlies many pre-processing tasks on point data. Specifically, given a (sparse) set of points Q sampling an unknown surface and a target density function, the goal is to adapt Q to match the target distribution. We propose a simple and robust framework that is effective at achieving both local uniformity and precise global density distribution control. Our approach relies on the Gaussian-weighted graph Laplacian and works purely in the points setting. While it is well known that the graph Laplacian is related to mean-curvature flow and thus has denoising ability, our algorithm uses certain information encoded in the graph Laplacian that is orthogonal to the mean-curvature flow. Furthermore, by leveraging the natural scale parameter contained in the Gaussian kernel and combining it with a simulated annealing idea, our algorithm moves points in a multi-scale manner. Compared with many previous refinement-based methods, the resulting algorithm depends far less on the input points having a good initial distribution (they need be neither uniform nor close to the target density). We demonstrate the simplicity and effectiveness of our algorithm with point clouds sampled from different underlying surfaces with various geometric and topological properties.
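The Gaussian-weighted graph Laplacian at the heart of the method above can be assembled from a k-nearest-neighbour graph. The sketch below builds a dense L = D - W with Gaussian weights; the values of k and sigma and the dense storage are illustrative choices, and the split into normal (mean-curvature) and tangential components that the paper exploits is not shown.

```python
import numpy as np
from scipy.spatial import cKDTree

def gaussian_graph_laplacian(points, sigma, k=16):
    """Gaussian-weighted graph Laplacian L = D - W over the k-nearest-
    neighbour graph of a point set, with
        W_ij = exp(-||p_i - p_j||^2 / (2 * sigma^2)).
    Returned as a dense matrix for brevity; sigma is the scale parameter
    that would be annealed in a multi-scale scheme."""
    n = len(points)
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k + 1)   # first neighbour is the point itself
    W = np.zeros((n, n))
    for i in range(n):
        for dist, j in zip(dists[i, 1:], idx[i, 1:]):
            w = np.exp(-dist * dist / (2.0 * sigma * sigma))
            W[i, j] = max(W[i, j], w)
            W[j, i] = W[i, j]                  # symmetrize the kNN graph
    return np.diag(W.sum(axis=1)) - W

pts = np.random.rand(500, 3)
L = gaussian_graph_laplacian(pts, sigma=0.1)
displacement = L @ pts      # Laplacian applied to the coordinate functions
```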

17.
We present a Conforming Delaunay Triangulation (CDT) algorithm based on maximal Poisson disk sampling. Points are unbiased, meaning the probability of introducing a vertex in a disk-free subregion is proportional to its area, except in a neighborhood of the domain boundary. In contrast, Delaunay refinement CDT algorithms place points dependent on the geometry of empty circles in intermediate triangulations, usually near the circle centers. Unconstrained angles in our mesh are between 30° and 120°, matching some biased CDT methods. Points are placed on the boundary using a one-dimensional maximal Poisson disk sampling. Any triangulation method producing angles bounded away from 0° and 180° must have some bias near the domain boundary to avoid placing vertices infinitesimally close to the boundary. Random meshes are preferred for some simulations, such as fracture simulations where cracks must follow mesh edges, because deterministic meshes may introduce non-physical phenomena. An ensemble of random meshes aids simulation validation. Poisson-disk triangulations also avoid some graphics rendering artifacts, and have the blue-noise property. We mesh two-dimensional domains that may be non-convex with holes, required points, and multiple regions in contact. Our algorithm is also fast and uses little memory. We have recently developed a method for generating a maximal Poisson distribution of n output points, where n = Θ(Area/r²) and r is the sampling radius. It takes O(n) memory and O(n log n) expected time; in practice the time is nearly linear. This, or a similar subroutine, generates our random points. Except for this subroutine, we provably use O(n) time and space. The subroutine gives the location of points in a square background mesh. Given this, the neighborhood of each point can be meshed independently in constant time. These features facilitate parallel and GPU implementations. Our implementation works well in practice as illustrated by several examples and comparison to Triangle.
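For intuition about the background-grid idea, the sketch below performs plain grid-accelerated dart throwing in a rectangle: cells of size r/√2 hold at most one sample, so each conflict check touches a constant number of cells. This is not the provably maximal, near-linear-time sampler used in the paper, and the attempt budget is an arbitrary assumption.

```python
import numpy as np

def poisson_disk_darts(width, height, r, attempts=30000, seed=0):
    """Grid-accelerated dart throwing for a Poisson-disk point set with
    radius r in a [0, width] x [0, height] rectangle. The background grid
    uses cells of size r / sqrt(2), so each cell holds at most one sample
    and a conflict check only inspects a 5x5 block of cells. Plain dart
    throwing does not guarantee maximality."""
    rng = np.random.default_rng(seed)
    cell = r / np.sqrt(2.0)
    gw, gh = int(np.ceil(width / cell)), int(np.ceil(height / cell))
    grid = -np.ones((gw, gh), dtype=int)       # -1 means empty cell
    samples = []
    for _ in range(attempts):
        p = rng.random(2) * (width, height)
        gx, gy = int(p[0] / cell), int(p[1] / cell)
        ok = True
        for i in range(max(gx - 2, 0), min(gx + 3, gw)):
            for j in range(max(gy - 2, 0), min(gy + 3, gh)):
                s = grid[i, j]
                if s >= 0 and np.linalg.norm(samples[s] - p) < r:
                    ok = False
                    break
            if not ok:
                break
        if ok:
            grid[gx, gy] = len(samples)
            samples.append(p)
    return np.array(samples)

pts = poisson_disk_darts(1.0, 1.0, r=0.05)
print(len(pts), "samples")
```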

18.
We introduce a method for surface reconstruction from point sets that is able to cope with noise and outliers. First, a splat-based representation is computed from the point set. A robust local 3D RANSAC-based procedure is used to filter the point set for outliers; then a local jet surface – a low-degree surface approximation – is fitted to the inliers. Second, we extract the reconstructed surface in the form of a surface triangle mesh through Delaunay refinement. The Delaunay refinement meshing approach requires computing intersections between line segment queries and the surface to be meshed. In the present case, intersection queries are solved from the set of splats through a 1D RANSAC procedure.
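The outlier-filtering step can be pictured with an ordinary RANSAC plane fit over a local neighbourhood, as sketched below. The paper's procedure is a local 3D RANSAC followed by a jet (low-degree polynomial) fit; here only a plane model is used, and the iteration count and inlier tolerance are illustrative.

```python
import numpy as np

def ransac_plane_inliers(points, n_iter=100, tol=0.01, seed=0):
    """Classic RANSAC plane fit over a local neighbourhood: repeatedly pick
    3 points, build the plane through them, and keep the plane supported
    by the most points within distance `tol`. Returns a boolean inlier
    mask; the subsequent jet fitting to the inliers is not shown."""
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(b - a, c - a)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                         # degenerate (collinear) sample
        n /= norm
        mask = np.abs((points - a) @ n) < tol
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

# Toy usage: a noisy planar patch plus a few outliers.
pts = np.concatenate([np.c_[np.random.rand(200, 2), 0.002 * np.random.randn(200)],
                      np.random.rand(10, 3)])
print(ransac_plane_inliers(pts).sum(), "inliers")
```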

19.
This paper develops a scale-space strategy for exactly and completely orienting and meshing a raw point set. The scale space is based on the intrinsic heat equation, also called mean curvature motion (MCM). A simple iterative scheme implementing MCM directly on the raw point set is described, and a mathematical proof of its consistency with MCM is given. Points evolved by this MCM implementation can be trivially backtracked to their initial raw positions. Therefore, both the orientation and mesh of the data point set obtained at a smooth scale can be transported back onto the original. The gain in visual accuracy is demonstrated on archaeological objects by comparison with several state-of-the-art meshing methods.
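One simplified reading of such an MCM-style iteration on a raw point set is to project every point onto the regression plane of its radius-neighbourhood, as sketched below. The paper's specific scheme and its consistency proof are not reproduced; the neighbourhood radius and minimum neighbour count are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def mcm_step(points, radius):
    """One smoothing step in the spirit of mean curvature motion on a raw
    point set: project every point onto the regression plane of its
    radius-neighbourhood (plane through the barycentre, normal given by
    the smallest covariance eigenvector). Keeping the original indices
    makes it trivial to backtrack smoothed points to their raw positions."""
    tree = cKDTree(points)
    out = points.copy()
    for i, p in enumerate(points):
        nbr = points[tree.query_ball_point(p, radius)]
        if len(nbr) < 4:
            continue                             # too few neighbours to fit a plane
        bary = nbr.mean(axis=0)
        cov = np.cov((nbr - bary).T)
        _, vecs = np.linalg.eigh(cov)
        normal = vecs[:, 0]                      # smallest-eigenvalue direction
        out[i] = p - np.dot(p - bary, normal) * normal
    return out

pts = np.random.rand(2000, 3)                    # replace with a real raw scan
smoothed = mcm_step(pts, radius=0.08)
```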

20.
We address the problem of generating quality surface triangle meshes from 3D point clouds sampled on piecewise smooth surfaces. Using a feature detection process based on the covariance matrices of Voronoi cells, we first extract from the point cloud a set of sharp features. Our algorithm also runs a reconstruction process, such as Poisson reconstruction, on the input point cloud to provide an implicit surface. A feature preserving variant of a Delaunay refinement process is then used to generate a mesh approximating the implicit surface and containing a faithful representation of the extracted sharp edges. Such a mesh provides an enhanced trade-off between accuracy and mesh complexity. The whole process is robust to noise and made versatile through a small set of parameters which govern the mesh sizing, approximation error and shape of the elements. We demonstrate the effectiveness of our method on a variety of models including laser-scanned datasets ranging from indoor to outdoor scenes.
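As a rough stand-in for the Voronoi-cell covariance analysis used for feature detection, the sketch below examines the eigenvalue spread of each point's local covariance matrix: on a smooth patch the smallest eigenvalue (normal direction) is negligible, while near a crease or corner it is not. The radius and ratio threshold are illustrative, and this is only a heuristic surrogate for the paper's method.

```python
import numpy as np
from scipy.spatial import cKDTree

def sharp_feature_candidates(points, radius, ratio=0.2):
    """Flag points whose local covariance has a non-negligible smallest
    eigenvalue relative to the middle one. Near a smooth patch the
    neighbourhood is almost planar (tiny smallest eigenvalue); near a
    crease it spans two planes, so the spread changes. A crude surrogate
    for Voronoi-cell covariance analysis."""
    tree = cKDTree(points)
    flags = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        nbr = points[tree.query_ball_point(p, radius)]
        if len(nbr) < 6:
            continue
        cov = np.cov((nbr - nbr.mean(axis=0)).T)
        w = np.linalg.eigvalsh(cov)              # ascending eigenvalues
        flags[i] = w[0] > ratio * w[1]           # neighbourhood is "thick" -> crease/corner
    return flags

pts = np.random.rand(3000, 3)                    # replace with a real surface sampling
flags = sharp_feature_candidates(pts, radius=0.05)
```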
