Similar Documents
20 similar documents found (search time: 46 ms)
1.
This paper considers the problem of interactively finding the cutting contour to extract components from a given mesh. Some existing methods support cuts of arbitrary shape but require careful and tedious input from the user. Others need little user input, but they are sensitive to that input and need a postprocessing step to smooth the generated jaggy cutting contours. The popular geometric snake can be used to optimize the cutting contour, but it cannot deal with topology changes. In this paper, we propose a geodesic curvature flow based framework to overcome all these problems. Since in many cases the meaningful cutting contour on a 3D mesh is locally shortest in the sense of some weighted curve length, the geodesic curvature flow is an ideal tool for our problem. It evolves the cutting contour to the nearby local minimum. We should mention that the previous numerical scheme, discretized geodesic curvature flow (dGCF), is too slow and has not been applied to mesh segmentation. Through a careful examination of dGCF, we devise here a fast computation scheme called fast geodesic curvature flow (FGCF), which only needs to solve a smaller and easier problem. The initial cutting contour is generated by a variant of the random walks algorithm, which is very fast and gives reasonable cutting results with little user input. Experimental results on the benchmark mesh segmentation data set show that our proposed framework is robust to user input and capable of producing good results reflecting geometric features and human shape perception.
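As a minimal, purely illustrative analogue of driving a contour toward a local minimum of curve length, the 2D snippet below applies a discrete curve-shortening (Laplacian) step to a closed polyline; the paper's flow additionally runs on the mesh surface, minimises a feature-weighted length, and handles topology changes, none of which is shown here.

```python
# Toy 2D curve-shortening: move each vertex of a closed polyline toward the midpoint
# of its two neighbours. This is the unweighted, planar special case of evolving a
# contour toward a (locally) shorter configuration.
import numpy as np

def shorten_contour(pts, steps=50, step_size=0.2):
    pts = pts.copy()
    for _ in range(steps):
        midpoints = 0.5 * (np.roll(pts, 1, axis=0) + np.roll(pts, -1, axis=0))
        pts += step_size * (midpoints - pts)      # explicit Euler step toward neighbours
    return pts

def perimeter(pts):
    closed = np.vstack([pts, pts[:1]])
    return np.linalg.norm(np.diff(closed, axis=0), axis=1).sum()

theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
noisy = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * np.random.default_rng(0).standard_normal((100, 2))
print("length before/after:", perimeter(noisy), perimeter(shorten_contour(noisy)))
```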

2.
Appearance-Attribute-Preserving Simplification of 3D Mesh Models   Total citations: 3 (self: 0, by others: 3)
卢威, 曾定浩, 潘金贵. 《软件学报》 (Journal of Software), 2009, 20(3): 713-723
Building on an analysis of existing 3D mesh simplification techniques, the QEM (quadric error metric) algorithm is improved with half-edge collapse operations, yielding a QEM-based mesh simplification algorithm that resolves the distortion of discontinuous appearance attributes during simplification. By analyzing the relationship between vertices and discontinuous appearance seams, a new edge-collapse cost formula is derived that postpones appearance distortion for as long as possible during simplification; in addition, when performing a half-edge collapse, a suitable replacement wedge is found for each affected triangle to prevent appearance distortion from occurring. Experimental results show that the algorithm retains the efficiency of the QEM algorithm while achieving satisfactory simplification results for both geometric and appearance attributes.
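The quadric error metric that this work builds on can be illustrated in a few lines of numpy. This is only a minimal sketch of standard QEM with a half-edge collapse; the paper's appearance seams, wedge replacement, and modified collapse cost are not reproduced here.

```python
# Standard QEM sketch: each vertex accumulates Q = sum of p p^T over the planes of its
# incident triangles; the cost of collapsing the half-edge (u -> v) is the squared
# distance of the surviving position v to all planes accumulated at u and v.
import numpy as np

def plane_quadric(p0, p1, p2):
    """Fundamental error quadric K_p = p p^T of the triangle's supporting plane."""
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)
    p = np.append(n, -np.dot(n, p0))          # plane coefficients (a, b, c, d), |n| = 1
    return np.outer(p, p)

def halfedge_collapse_cost(Q_u, Q_v, target):
    h = np.append(target, 1.0)                # homogeneous coordinates
    return float(h @ (Q_u + Q_v) @ h)

# Toy configuration: edge (u, v) shared by two triangles, plus one extra triangle at u.
u, v = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
a, b, c = np.array([0.5, 1.0, 0.0]), np.array([0.5, -1.0, 0.2]), np.array([-1.0, 0.0, 0.5])
Q_u = plane_quadric(u, v, a) + plane_quadric(u, b, v) + plane_quadric(u, a, c)
Q_v = plane_quadric(u, v, a) + plane_quadric(u, b, v)
print(halfedge_collapse_cost(Q_u, Q_v, v))    # cost of collapsing u onto the kept vertex v
```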

3.
We present a novel hierarchical grid based method for fast collision detection (CD) for deformable models on GPU architectures. A two‐level grid is employed to accommodate the non‐uniform distribution of practical scene geometry. A bottom‐to‐top method is implemented to assign the triangles to the hierarchical grid without any iteration, while a deferred scheme is introduced to efficiently update the data structure. To address the issue of load balancing, which greatly influences performance in SIMD parallelism, a propagation scheme which utilizes a parallel scan and a segmented scan is presented, distributing workloads evenly across all concurrent threads. The proposed method supports both discrete collision detection (DCD) and continuous collision detection (CCD) with self‐collision. Some typical benchmarks are tested to verify the effectiveness of our method. The results highlight our speedups over prior algorithms on different commodity GPUs.

4.
张欣, 秦茂玲, 谢堂龙. 《微机发展》 (Microcomputer Development), 2012, (1): 94-97, 102
To address problems such as the loss of feature detail and overly uniform simplification results during model simplification, this paper proposes an improved, feature-preserving triangle-collapse mesh simplification algorithm. Triangles of the original model are pre-classified before simplification; during simplification, the quadric error metric measures the simplification error, while triangle slenderness, local region area, and local region sharpness control the order in which triangles are simplified, and different strategies are used for boundary and interior triangles, so as to preserve model features and reduce algorithmic complexity. The algorithm is implemented in the Visual C++ 6.0 development environment together with OpenGL. Experimental results show that, by deferring the simplification of feature regions and well-shaped triangles, the improved algorithm effectively preserves the original features of the model and simplifies quickly.
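As a rough illustration of how such a collapse ordering could be assembled, the sketch below combines a quadric-error term with hypothetical shape and sharpness factors; the actual weights, the paper's definition of slenderness, and its classification rules are not given in the abstract, so everything below is an assumption.

```python
# Hypothetical collapse-priority key (smaller key = collapsed earlier): slender triangles
# are collapsed first, while sharp (feature) regions are deferred. Not the paper's formula.
import numpy as np

def slenderness(p0, p1, p2):
    edges = [np.linalg.norm(p1 - p0), np.linalg.norm(p2 - p1), np.linalg.norm(p0 - p2)]
    area = 0.5 * np.linalg.norm(np.cross(p1 - p0, p2 - p0))
    return (max(edges) ** 2) / (4.0 * area + 1e-12)     # large for long, thin triangles

def collapse_priority(quadric_error, tri, local_sharpness):
    shape_factor = 1.0 / (1.0 + slenderness(*tri))      # slender triangles get a smaller key
    feature_factor = 1.0 + local_sharpness              # sharp regions get a larger key
    return quadric_error * shape_factor * feature_factor

tri = (np.array([0.0, 0.0, 0.0]), np.array([3.0, 0.0, 0.0]), np.array([1.5, 0.1, 0.0]))
print(collapse_priority(quadric_error=1e-4, tri=tri, local_sharpness=0.5))
```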

5.
We present a part‐type segmentation method for articulated voxel‐shapes based on curve skeletons. Shapes are considered to consist of several simpler, intersecting shapes. Our method is based on the junction rule: the observation that two intersecting shapes generate an additional junction in their joined curve‐skeleton near the place of intersection. For each curve‐skeleton point, we construct a piecewise‐geodesic loop on the shape surface. Starting from the junctions, we search along the curve skeleton for points whose associated loops make for suitable part cuts. The segmentations are robust to noise and discretization artifacts, because the curve skeletonization incorporates a single user‐parameter to filter spurious curve‐skeleton branches. Furthermore, segment borders are smooth and minimally twisting by construction. We demonstrate our method on several real‐world examples and compare it to existing part‐type segmentation methods.  相似文献   

6.
In the field of computer vision, the introduction of a low‐level preprocessing step to oversegment images into superpixels – relatively small regions whose boundaries agree with those of the semantic entities in the scene – has enabled advances in segmentation by reducing the number of elements to be labeled from hundreds of thousands, or millions, to just a few hundred. While some recent works in mesh processing have used an analogous oversegmentation, they were not intended to be general and have relied on graph cut techniques that do not scale to current mesh sizes. Here, we present an iterative superfacet algorithm and introduce adaptations of undersegmentation error and compactness, which are well‐motivated and principled metrics from the vision community. We demonstrate that our approach produces results comparable to those of the normalized cuts algorithm when evaluated on the Princeton Segmentation Benchmark, while requiring orders of magnitude less time and memory and easily scaling to, and enabling the processing of, much larger meshes.
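For reference, the sketch below shows one common formulation of undersegmentation error from the superpixel literature, written here for a labelled triangle mesh; the paper's exact adaptation, and its compactness measure (typically an isoperimetric-style quotient), may differ in normalisation.

```python
# Undersegmentation error: how much superfacet area "leaks" across ground-truth segment
# boundaries, normalised by total surface area. Assumes per-face areas and labels.
import numpy as np

def undersegmentation_error(face_area, gt_label, sp_label):
    total = face_area.sum()
    leak = 0.0
    for g in np.unique(gt_label):
        in_g = gt_label == g
        for s in np.unique(sp_label[in_g]):              # superfacets touching segment g
            leak += face_area[sp_label == s].sum()
        leak -= face_area[in_g].sum()                    # subtract the segment's own area
    return leak / total

# Tiny example: 6 faces, 2 ground-truth segments, 3 superfacets (one straddles the boundary).
area = np.ones(6)
gt   = np.array([0, 0, 0, 1, 1, 1])
sp   = np.array([0, 0, 1, 1, 2, 2])
print(undersegmentation_error(area, gt, sp))
```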

7.
We develop a novel isotropic remeshing method based on constrained centroidal Delaunay mesh (CCDM), a generalization of centroidal patch triangulation from 2D to mesh surfaces. Our method starts by resampling an input mesh with a vertex distribution according to a user‐defined density function. The initial remeshing result is then progressively optimized by alternately recovering the Delaunay mesh and moving each vertex to the centroid of its 1‐ring neighborhood. The key to making such simple iterations work is an efficient optimization framework that combines both local and global optimization methods. Our method is parameterization‐free, thus avoiding the metric distortion introduced by parameterization, and generating more well‐shaped triangles. Our method guarantees that the topology of the surface is preserved without requiring geodesic information. We conduct various experiments to demonstrate the simplicity, efficacy, and robustness of the presented method.
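The structure of the alternating iteration can be sketched with a planar stand-in: rebuild a Delaunay triangulation, then move every free vertex to the centroid of its 1-ring. On a mesh surface both steps are intrinsic and constrained; the scipy-based toy below only illustrates the shape of the loop, not the paper's CCDM machinery.

```python
# Lloyd-like relaxation in 2D: alternate Delaunay recovery and 1-ring centroid moves.
import numpy as np
from scipy.spatial import Delaunay

def relax(points, boundary_mask, iterations=20):
    pts = points.copy()
    for _ in range(iterations):
        tri = Delaunay(pts)                          # (re)recover the Delaunay mesh
        neighbors = [set() for _ in range(len(pts))]
        for simplex in tri.simplices:
            for i in simplex:
                neighbors[i].update(simplex)
        for i, nbrs in enumerate(neighbors):
            if boundary_mask[i] or not nbrs:
                continue                             # keep constrained vertices fixed
            ring = np.array(sorted(nbrs - {i}))
            pts[i] = pts[ring].mean(axis=0)          # move to the 1-ring centroid
    return pts

rng = np.random.default_rng(0)
corners = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
pts = np.vstack([corners, rng.uniform(0.1, 0.9, size=(80, 2))])
fixed = np.array([True] * 4 + [False] * 80)
print(relax(pts, fixed).shape)
```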

8.
This paper proposes rules for composing and retrieving n-gons (n a positive integer) from combinations of triangles in a triangle mesh, and presents a mesh simplification algorithm based on n-gon collapse that follows a similar collapse pattern. The algorithm takes n-gon collapse as its basic simplification operation and uses the quadric error as its error metric. Each n-gon collapse removes n-1 vertices and 2(n-1) triangles, so the larger n is, the fewer collapses are needed to reach a given simplification target, and the faster the simplification may be. By choosing appropriate values of n and appropriate positions for the new vertex, the new algorithm reduces to the three known element-removal algorithms of vertex removal, edge collapse, and triangle collapse, and can therefore be regarded as a unifying algorithm for element-removal simplification based on the quadric error metric. Finally, experimental data are given for several values of n to demonstrate the effectiveness of the algorithm.
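The counting behind the speed argument is easy to make explicit; the snippet below is only that arithmetic, with n = 2 and n = 3 reproducing the familiar edge and triangle collapses.

```python
# One n-gon collapse removes n-1 vertices and 2(n-1) triangles, so reaching a given
# triangle budget needs proportionally fewer operations as n grows.
def removed_per_collapse(n):
    return n - 1, 2 * (n - 1)                    # (vertices removed, triangles removed)

def collapses_needed(triangles_to_remove, n):
    _, tris = removed_per_collapse(n)
    return -(-triangles_to_remove // tris)       # ceiling division

for n in (2, 3, 4):
    print(n, removed_per_collapse(n), collapses_needed(10000, n))
```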

9.
Updating a Delaunay triangulation when data points are slightly moved is the bottleneck of computation time in variational methods for mesh generation and remeshing. Utilizing the connectivity coherence between two consecutive Delaunay triangulations for computation speedup is the key to solving this problem. Our contribution is an effective filtering technique that confirms most bi‐cells whose Delaunay connectivities remain unchanged after the points are perturbed. Based on bi‐cell flipping, we present an efficient algorithm for updating two‐dimensional and three‐dimensional Delaunay triangulations of dynamic point sets. Experimental results show that our algorithm outperforms previous methods.  相似文献   
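A 2D toy version of the kind of test such a filter depends on: a bi-cell (two triangles sharing an edge) keeps its Delaunay connectivity as long as the classical in-circle predicate still holds after the perturbation. Exact and robust predicates, and the paper's actual filtering conditions, are not reproduced here.

```python
# In-circle predicate for a planar bi-cell: shared edge (a, b), opposite vertices c and d.
import numpy as np

def in_circle(a, b, c, d):
    """> 0 if d lies strictly inside the circumcircle of the ccw triangle (a, b, c)."""
    m = np.array([
        [a[0] - d[0], a[1] - d[1], (a[0] - d[0])**2 + (a[1] - d[1])**2],
        [b[0] - d[0], b[1] - d[1], (b[0] - d[0])**2 + (b[1] - d[1])**2],
        [c[0] - d[0], c[1] - d[1], (c[0] - d[0])**2 + (c[1] - d[1])**2],
    ])
    return np.linalg.det(m)

def locally_delaunay(a, b, c, d):
    """True if the bi-cell needs no edge flip (assumes (a, b, c) is ccw)."""
    return in_circle(a, b, c, d) <= 0.0

a, b = np.array([0.0, 0.0]), np.array([2.0, 0.0])
c, d = np.array([1.0, 1.0]), np.array([1.0, -1.5])
print(locally_delaunay(a, b, c, d))   # True: d lies outside the circumcircle of (a, b, c)
```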

10.
We propose a connectivity editing framework for quad‐dominant meshes. In our framework, the user can edit the mesh connectivity to control the location, type, and number of irregular vertices (with more or fewer than four neighbors) and irregular faces (non‐quads). We provide a theoretical analysis of the problem, discuss what edits are possible and impossible, and describe how to implement an editing framework that realizes all possible editing operations. In the results, we show example edits and illustrate the advantages and disadvantages of different strategies for quad‐dominant mesh design.  相似文献   

11.
Approximating Gradients for Meshes and Point Clouds via Diffusion Metric   Total citations: 1 (self: 0, by others: 1)
The gradient of a function defined on a manifold is perhaps one of the most important differential objects in data analysis. Most often in practice, the input function is available only at discrete points sampled from the underlying manifold, and the manifold is approximated by either a mesh or simply a point cloud. While many methods exist for computing gradients of a function defined over a mesh, computing and simplifying the gradient of a function, and related quantities such as its critical points, from a point cloud is non-trivial.
In this paper, we initiate the investigation of computing gradients under a metric on the manifold different from the natural metric induced by the ambient space. Specifically, we map the input manifold to the eigenspace spanned by its Laplacian eigenfunctions and consider the so-called diffusion distance metric associated with it. We show how the gradient under this metric relates to that under the original metric. It turns out that once the Laplace operator is constructed, it is easier to approximate gradients in the eigenspace for discrete inputs (especially point clouds), and the approach is robust to noise in the input function and in the underlying manifold. More importantly, we can easily smooth the gradient field at different scales within this eigenspace framework. We demonstrate the use of our new eigen-gradients with two applications: approximating/simplifying the critical points of a function, and the Jacobi sets of two input functions (which describe the correlation between these two functions), from point cloud data.
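Under one common convention, the diffusion metric in question is simply the Euclidean metric after embedding each point by its Laplacian eigenfunction values damped by exp(-lambda * t); the numpy sketch below assumes the eigenpairs are already computed and ignores all discretisation details of the paper.

```python
# Diffusion embedding and diffusion distance from precomputed Laplacian eigenpairs.
import numpy as np

def diffusion_embedding(eigvals, eigvecs, t):
    """eigvals: (k,) non-negative eigenvalues; eigvecs: (n, k) eigenfunction values at
    the n input points. Each point is mapped to (exp(-l_i t) * phi_i(x))_i."""
    return eigvecs * np.exp(-t * eigvals)            # broadcast over columns

def diffusion_distance(eigvals, eigvecs, t, i, j):
    emb = diffusion_embedding(eigvals, eigvecs, t)
    return np.linalg.norm(emb[i] - emb[j])

# Tiny usage on the combinatorial Laplacian of a path graph with 20 nodes.
n = 20
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1
eigvals, eigvecs = np.linalg.eigh(L)
print(diffusion_distance(eigvals, eigvecs, t=1.0, i=0, j=1),
      diffusion_distance(eigvals, eigvecs, t=1.0, i=0, j=19))
```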

12.
We present a method for simplifying a polygonal character with an associated skeletal deformation such that the simplified character approximates the original shape well when deformed. As input, we require a set of example poses that are representative of the types of deformations the character undergoes, and we produce a multi-resolution hierarchy for the simplified character where all simplified vertices also have associated skin weights. We create this hierarchy by minimizing an error metric for a simplified set of vertices and their skin weights, and we show that this quartic error metric can be effectively minimized using alternating quadratic minimization for the vertices and weights separately. To enable efficient GPU-accelerated deformations of the simplified character, we also provide a method that guarantees the maximum number of bone weights per simplified vertex is less than a user-specified threshold at all levels of the hierarchy.
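The alternating strategy itself is generic: when an error is quadratic in each block of unknowns with the other block held fixed, it can be decreased by solving two linear least-squares problems in turn. The toy below demonstrates the pattern on a plain matrix-factorisation objective; it is not the paper's skinning error metric or constraints.

```python
# Alternating least squares on M ~= U @ V.T: with V fixed the residual is linear in U,
# and vice versa, so each half-step is an ordinary least-squares solve.
import numpy as np

def als_factorize(M, rank=2, iters=50):
    rng = np.random.default_rng(0)
    n, m = M.shape
    U = rng.standard_normal((n, rank))
    V = rng.standard_normal((m, rank))
    for _ in range(iters):
        U = np.linalg.lstsq(V, M.T, rcond=None)[0].T   # solve for U with V fixed
        V = np.linalg.lstsq(U, M, rcond=None)[0].T     # solve for V with U fixed
    return U, V

M = np.arange(20.0).reshape(4, 5)                      # rank-2 test matrix
U, V = als_factorize(M)
print(np.abs(M - U @ V.T).max())                       # residual close to zero
```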

13.
The notion of parts in a shape plays an important role in many geometry problems, including segmentation, correspondence, recognition, editing, and animation. As the fundamental geometric representation of 3D objects in computer graphics is surface-based, solutions of many such problems utilize a surface metric, a distance function defined over pairs of points on the surface, to assist shape analysis and understanding. The main contribution of our work is to bring together these two fundamental concepts: shape parts and surface metric. Specifically, we develop a surface metric that is part-aware. To encode part information at a point on a shape, we model its volumetric context – called the volumetric shape image (VSI) – inside the shape's enclosed volume, to capture relevant visibility information. We then define the part-aware metric by combining an appropriate VSI distance with geodesic distance and normal variation. We show how the volumetric view on part separation addresses certain limitations of the surface view, which relies on concavity measures over a surface as implied by the well-known minima rule. We demonstrate how the new metric can be effectively utilized in various applications including mesh segmentation, shape registration, part-aware sampling and shape retrieval.  相似文献   

14.
Triangle meshes have been nearly ubiquitous in computer graphics, and a large body of data structures and geometry processing algorithms based on them has been developed in the literature. At the same time, quadrilateral meshes, especially semi‐regular ones, have advantages for many applications, and significant progress was made in quadrilateral mesh generation and processing during the last several years. In this survey we discuss the advantages and problems of techniques operating on quadrilateral meshes, including surface analysis and mesh quality, simplification, adaptive refinement, alignment with features, parametrisation and remeshing.  相似文献   

15.
Representing digital objects with structured meshes that embed a coarse block decomposition is a relevant problem in applications like computer animation, physically‐based simulation and Computer Aided Design (CAD). One of the key ingredients in producing coarse block structures is achieving a good alignment between the mesh singularities (i.e., the corners of each block). In this paper we improve on the polycube‐based meshing pipeline to produce both surface and volumetric coarse block‐structured meshes of general shapes. To this aim we add a new step to the pipeline. Our goal is to optimize the positions of the polycube corners to produce base complexes that are as coarse as possible. We rely on re‐mapping the positions of the corners onto an integer grid and then using integer numerical programming to reach the optimum. To the best of our knowledge this is the first attempt to solve the singularity misalignment problem directly in polycube space. Previous methods for polycube generation did not specifically address this issue. Our corner optimization strategy is efficient and adds negligible extra running time to the meshing pipeline. In the paper we show that our optimized polycubes produce coarser block-structured surface and volumetric meshes than previous approaches. They also induce higher quality hexahedral meshes and are better suited for spline fitting because they reduce the number of splines necessary to cover the domain, thus improving both the efficiency and the overall level of smoothness throughout the volume.

16.
The discovery of meaningful parts of a shape is required for many geometry processing applications, such as parameterization, shape correspondence, and animation. It is natural to consider primitives such as spheres, cylinders and cones as the building blocks of shapes, and thus to discover parts by fitting such primitives to a given surface. This approach, however, will break down if primitive parts have undergone almost‐isometric deformations, as is the case, for example, for articulated human models. We suggest that parts can be discovered instead by finding intrinsic primitives, which we define as parts that possess an approximate intrinsic symmetry. We employ the recently‐developed method of computing discrete approximate Killing vector fields (AKVFs) to discover intrinsic primitives by investigating the relationship between the AKVFs of a composite object and the AKVFs of its parts. We show how to leverage this relationship with a standard clustering method to extract k intrinsic primitives and the remaining asymmetric parts of a shape for a given k. We demonstrate the value of this approach for identifying the prominent symmetry generators of the parts of a given shape. Additionally, we show how our method can be modified slightly to segment an entire surface without marking asymmetric connecting regions and compare this approach to state‐of‐the‐art methods using the Princeton Segmentation Benchmark.

17.
This paper poses the problem of fabricating physical construction sets from example geometry: a construction set provides a small number of different types of building blocks from which the example model as well as many similar variants can be reassembled. This process is formalized by tiling grammars. Our core contribution is an approach for simplifying tiling grammars such that we obtain physically manufacturable building blocks of controllable granularity while retaining variability, i.e., the ability to construct many different, related shapes. Simplification is performed by sequences of two types of elementary operations: non‐local joint edge collapses in the tile graphs reduce the granularity of the decomposition, and approximate replacement operations reduce redundancy. We evaluate our method on abstract graph grammars in addition to computing several physical construction sets, which are manufactured using a commodity 3D printer.

18.
Simplicial meshes are useful as discrete approximations of continuous spaces in numerical simulations. In some applications, however, meshes need to be modified over time. Mesh update operations are often expensive and brittle, making the simulations unstable. In this paper we propose a framework for updating simplicial meshes that undergo geometric and topological changes. Instead of explicitly maintaining connectivity information, we keep a collection of weights associated with mesh vertices, using a Weighted Delaunay Triangulation (WDT). These weights implicitly define mesh connectivity and allow direct merging of triangulations. We propose two formulations for computing the weights, and two techniques for merging triangulations, and finally illustrate our results with examples in two and three dimensions.  相似文献   

19.
Scalar functions defined on manifold triangle meshes are a starting point for many geometry processing algorithms such as mesh parametrization, skeletonization, and segmentation. In this paper, we propose the Auto Diffusion Function (ADF), which is a linear combination of the eigenfunctions of the Laplace-Beltrami operator, formed in a way that has a simple physical interpretation. The ADF of a given 3D object has a number of further desirable properties: its extrema are generally at the tips of features of a given object, its gradients and level sets follow or encircle features, respectively, it is controlled by a single parameter which can be interpreted as feature scale, and, finally, the ADF is invariant to rigid and isometric deformations.
We describe the ADF and its properties in detail and compare it to other choices of scalar functions on manifolds. As an example application, we present a pose-invariant, hierarchical skeletonization and segmentation algorithm which makes direct use of the ADF.
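A function of the kind described here can be evaluated directly from Laplace-Beltrami eigenpairs; the sketch below uses exponentially decaying weights on squared eigenfunctions with a single scale parameter t, which matches the spirit of the ADF but not necessarily the paper's exact normalisation.

```python
# ADF-style scalar field from precomputed discrete Laplace-Beltrami eigenpairs.
import numpy as np

def auto_diffusion_like(eigvals, eigvecs, t):
    """eigvals: (k,) eigenvalues in ascending order (eigvals[0] ~ 0);
    eigvecs: (n, k) eigenfunction values at the n mesh vertices.
    Returns one scalar per vertex; larger t keeps only low-frequency content."""
    weights = np.exp(-t * eigvals)
    return (eigvecs ** 2) @ weights
```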

20.
Updating a Delaunay triangulation when its vertices move is a bottleneck in several domains of application. Rebuilding the whole triangulation from scratch is surprisingly a very viable option compared to relocating the vertices. This can be explained by several recent advances in efficient construction of Delaunay triangulations. However, when all points move with a small magnitude, or when only a fraction of the vertices move, rebuilding is no longer the best option. This paper considers the problem of efficiently updating a Delaunay triangulation when its vertices are moving under small perturbations. The main contribution is a set of filters based upon the concept of vertex tolerances. Experiments show that filtering relocations is faster than rebuilding the whole triangulation from scratch under certain conditions.  相似文献   
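Schematically, a tolerance-based filter works as sketched below: vertices whose displacement stays within a precomputed tolerance are simply moved, and only the rest trigger real connectivity updates. How the tolerances are derived and maintained is the substance of the paper and is not shown.

```python
# Split relocations into "certified safe" moves and moves that need a connectivity update.
import numpy as np

def filter_relocations(old_pts, new_pts, tolerance):
    moved = np.linalg.norm(new_pts - old_pts, axis=1)
    cheap = moved <= tolerance          # within tolerance: just move the point
    return cheap, ~cheap                # the rest may change connectivity

old = np.random.default_rng(1).random((1000, 2))
new = old + 1e-3 * np.random.default_rng(2).standard_normal((1000, 2))
tol = np.full(1000, 2e-3)               # hypothetical per-vertex tolerances
cheap, expensive = filter_relocations(old, new, tol)
print(expensive.sum(), "of", len(old), "vertices need a connectivity update")
```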
