Similar Documents
A total of 19 similar documents were found (search time: 130 ms).
1.
金耀  熊宇龙  周泳全  张华熊  何利力 《软件学报》2019,30(12):3862-3875
To address the heavy cost of interpolating the rotation and scale fields in traditional geodesic-based Poisson fusion methods, which hinders their use in interactive modeling, an efficient fusion method based on reusing the Laplacian operator is proposed. The method casts geometric fusion and the interpolation of the rotation and scale fields as Laplace (Poisson) equations, so a single Cholesky factorization followed by several back-substitutions yields the eight scalar fields required for fusion, roughly two orders of magnitude faster than traditional geodesic-based interpolation. The mesh near the fusion boundary is then optimized with a robust method based on constrained Delaunay triangulation and discrete minimal surfaces, achieving efficient mesh fusion. By reusing the Laplacian operator once more, texture coordinates are fused quickly alongside the geometry. The algorithm handles models with complex topology and multiple boundaries, produces results comparable to traditional Poisson fusion, and is significantly more efficient, meeting the demands of interactive response.
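
The central efficiency idea, factoring the mesh Laplacian once and reusing the factorization for every scalar field, can be illustrated with a minimal sketch. This is not the authors' implementation: it uses a uniform graph Laplacian, SciPy's sparse LU (splu) in place of a Cholesky factorization, and soft Dirichlet constraints; the function and argument names are hypothetical.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import splu

    def graph_laplacian(n_vertices, edges):
        # Uniform (combinatorial) Laplacian: L = D - A.
        i, j = np.array(edges).T
        data = np.ones(2 * len(edges))
        A = sp.coo_matrix((data, (np.r_[i, j], np.r_[j, i])),
                          shape=(n_vertices, n_vertices)).tocsr()
        return sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A

    def solve_fields(L, boundary_idx, boundary_values_per_field, w=1e6):
        # Soft-constrained Laplace solves sharing one factorization.
        n = L.shape[0]
        p = np.zeros(n)
        p[boundary_idx] = w
        lu = splu((L + sp.diags(p)).tocsc())            # factor once
        fields = []
        for vals in boundary_values_per_field:          # e.g. the 8 scalar fields
            rhs = np.zeros(n)
            rhs[boundary_idx] = w * np.asarray(vals)
            fields.append(lu.solve(rhs))                # cheap back-substitution
        return fields

Each additional field then costs only a back-substitution, which is where the reported two-orders-of-magnitude speedup over per-field geodesic interpolation comes from.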

2.
To address the boundary discontinuities that can appear when the Image Quilting texture synthesis algorithm pastes patches, an improved synthesis algorithm based on a multi-level error surface is proposed. The algorithm computes the minimum-error boundary cut from a multi-level error surface, yielding a more accurate optimal cutting path; after the patches are pasted along this path, Poisson Blending is used to repair the remaining discontinuous boundary regions, so that the boundaries become smooth and the synthesized texture better meets visual requirements. The improved algorithm is also extended to texture transfer. Experimental results show that it overcomes the shortcomings of the original Image Quilting algorithm and produces good synthesis results.
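
For context, the single-level minimum-error boundary cut of classic Image Quilting is a short dynamic program over the squared-difference error surface of the overlap region. The sketch below shows that baseline cut, not the multi-level variant proposed above, and assumes the two overlaps are float arrays with a trailing channel axis.

    import numpy as np

    def min_error_vertical_cut(overlap_a, overlap_b):
        """Return, for each row, the column of the minimum-error vertical seam
        through the overlap between two patches (classic Image Quilting cut)."""
        e = np.sum((overlap_a - overlap_b) ** 2, axis=-1)   # per-pixel error surface
        h, w = e.shape
        cost = e.copy()
        for y in range(1, h):                               # accumulate costs downward
            for x in range(w):
                lo, hi = max(0, x - 1), min(w, x + 2)
                cost[y, x] += cost[y - 1, lo:hi].min()
        seam = np.empty(h, dtype=int)
        seam[-1] = int(np.argmin(cost[-1]))
        for y in range(h - 2, -1, -1):                      # backtrack the cheapest path
            x = seam[y + 1]
            lo, hi = max(0, x - 1), min(w, x + 2)
            seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
        return seam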

3.
This paper proposes a new user-controllable algorithm for generating highly regular triangle meshes. Three scalar fields are constructed on the mesh surface, and the intersections of their isocontours are used to generate a highly regular triangle mesh. The algorithm uses an N-symmetry direction field to guide the edge orientation of the generated mesh and a density field specified on the surface to control the sampling density; it also supports feature alignment and models with boundaries. All control requirements are incorporated into the scalar field solving framework and optimized jointly. Experiments show that the method satisfies a variety of user control requirements and generates highly regular triangle meshes.

4.
An effective segmentation method for triangle mesh models is proposed. Dijkstra's algorithm is used to compute the shortest-path tree from an arbitrarily chosen base point to all other vertices; a maximal spanning tree of the model's dual graph is computed such that its edges do not cross the edges of the shortest-path tree; then all edges that neither belong to the shortest-path tree nor cross the dual spanning tree are collected. Each such edge, together with the shortest-path tree, forms a shortest loop, and the set of these loops is a basis of the fundamental group at the base point; cutting the mesh along these loops turns it into a region topologically homeomorphic to a disk. Experimental results show that the method segments meshes quickly and effectively.
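
This is essentially a tree-cotree style construction, and it can be outlined with networkx. The sketch below is illustrative, not the paper's code: it assumes the mesh is supplied as a vertex graph with 'length' edge weights, a face-adjacency (dual) graph, and a hypothetical shared_edge lookup from a pair of adjacent faces to the primal edge they share.

    import networkx as nx

    def cut_loop_edges(vertex_graph, dual_graph, base, shared_edge):
        """vertex_graph: nx.Graph over vertices with 'length' weights.
           dual_graph:   nx.Graph over faces.
           shared_edge:  dict mapping frozenset({f1, f2}) -> primal edge (u, v)
                         shared by adjacent faces f1, f2 (hypothetical helper)."""
        # 1. Shortest-path tree from the base vertex (Dijkstra).
        preds, _ = nx.dijkstra_predecessor_and_distance(vertex_graph, base, weight='length')
        tree = {frozenset((v, p[0])) for v, p in preds.items() if p}
        # 2. Spanning tree of the dual graph, using only dual edges whose shared
        #    primal edge is not in the shortest-path tree.
        allowed = nx.Graph([(f1, f2) for f1, f2 in dual_graph.edges
                            if frozenset(shared_edge[frozenset((f1, f2))]) not in tree])
        cotree = {frozenset(shared_edge[frozenset(e)])
                  for e in nx.minimum_spanning_edges(allowed, data=False)}
        # 3. Edges in neither tree each close one loop of the fundamental-group basis.
        return [e for e in vertex_graph.edges
                if frozenset(e) not in tree and frozenset(e) not in cotree]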

5.
A remeshing algorithm is presented for 3D mesh models of arbitrary genus with arbitrary boundaries. The algorithm applies a series of adaptive local modification operations to the original mesh to improve the quality of its triangles and the distribution of its vertices. To reduce the accumulation of error during optimization, a distance-field-based algorithm is proposed to keep the newly generated vertices on the original mesh surface; it is simple to implement and requires no complicated global parameterization. Experimental results show that the algorithm is effective, fast, and stable.

6.
Poisson mesh editing based on subdivision surfaces (cited 2 times: 0 self-citations, 2 by others)
For 3D mesh models with rich geometric detail, traditional editing algorithms based on direct coordinate manipulation inevitably fail to preserve detail features during editing. Combining the strengths of subdivision-surface-based space deformation and differential-domain mesh editing, a Poisson mesh editing method based on subdivision surfaces is proposed. First, an enclosing cage mesh of the model to be deformed is built, and the subdivision surface determined by the cage is used as the deformation control surface; then the user manipulates the cage according to the intended deformation, and the change of the corresponding subdivision surface is converted into a change of the model's Poisson gradient field; finally, the mesh is reconstructed from the modified gradient field. The method offers simple, intuitive interaction, has the advantages of multiresolution editing, and effectively preserves the detail features of the mesh model. Numerous deformation examples demonstrate its effectiveness and feasibility.
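
The reconstruction step, finding vertex positions whose discrete differential coordinates match the edited field while a few vertices stay anchored, amounts to a sparse linear least-squares solve. The sketch below is a generic version using a uniform Laplacian and delta coordinates rather than the paper's cotangent Poisson setup; all names are placeholders.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import lsqr

    def reconstruct(V, L, delta_target, anchor_idx, anchor_pos, w=10.0):
        """Solve min ||L X - delta_target||^2 + w^2 ||X[anchors] - anchor_pos||^2.
           V: (n,3) original vertices, L: (n,n) sparse Laplacian,
           delta_target: (n,3) edited differential coordinates."""
        n = V.shape[0]
        m = len(anchor_idx)
        C = sp.coo_matrix((np.full(m, w), (np.arange(m), anchor_idx)), shape=(m, n))
        A = sp.vstack([L, C]).tocsr()
        X = np.empty_like(V)
        for k in range(3):                                  # solve per coordinate
            b = np.concatenate([delta_target[:, k], w * anchor_pos[:, k]])
            X[:, k] = lsqr(A, b)[0]
        return X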

7.
A 3D center-path extraction algorithm based on dual distance fields (cited 6 times: 0 self-citations, 6 by others)
Automatically extracting the center path through a real 3D data field is the key problem in automatic fly-through navigation. To overcome the poor results and heavy computation of existing automatic center-path extraction algorithms, a fast 3D center-path extraction algorithm based on dual distance fields is proposed. For any given connectable start and end points, the algorithm first builds a source distance field rooted at the start point and a boundary distance field rooted at the object boundary, and then quickly extracts a center path connecting the start and end points under the joint constraint of the two fields. To ensure a smooth fly-through, the extracted path is further smoothed with a cubic B-spline curve. The algorithm was implemented and tested on a PC; the experimental results show that it is not only fast and effective but also highly flexible.
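
A minimal voxel-grid version of the idea can be sketched with SciPy and scikit-image: the boundary distance field comes from a Euclidean distance transform, and a boundary-penalizing traversal cost plays the role of the joint constraint when tracing the start-to-end path. The cost weighting and smoothing parameter below are hypothetical choices, not the paper's formulation, and scikit-image is assumed to be available.

    import numpy as np
    from scipy import ndimage
    from scipy.interpolate import splprep, splev
    from skimage.graph import route_through_array

    def center_path(mask, start, end, smoothing=5.0):
        """mask: 3D boolean array of the navigable region; start/end: voxel index tuples."""
        # Boundary distance field: distance of every inside voxel to the boundary.
        dist_boundary = ndimage.distance_transform_edt(mask)
        # Traversal cost that pushes the path toward the centerline
        # (large near the boundary, small deep inside); outside voxels are blocked.
        cost = np.where(mask, 1.0 / (dist_boundary + 1e-3), 1e9)
        # Minimal-cost path from start to end under this cost
        # (plays the role of the source distance field plus backtracking).
        path, _ = route_through_array(cost, start, end, fully_connected=True)
        path = np.array(path, dtype=float)
        # Smooth the voxel path with a cubic B-spline for fly-through.
        tck, _ = splprep(path.T, s=smoothing, k=3)
        return np.array(splev(np.linspace(0, 1, 200), tck)).T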

8.
To improve the robustness of planar parameterization, a planar parameterization algorithm based on a multi-level structure is proposed; it consists of two main steps, simplification and refinement by re-inserting vertices. For a triangle mesh topologically homeomorphic to a disk, the mesh is first simplified while the topological information of the removed vertices is stored; the simplified mesh is then mapped onto a disk; next, vertices are re-inserted in batches according to the stored topological information until all vertices of the original mesh are recovered, while the mesh is continuously optimized to prevent triangle flips and keep the vertices evenly distributed; finally, the disk mesh with all vertices recovered is optimized to obtain the final parameterized mesh. Experimental results show that the algorithm is considerably more robust than current algorithms.
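
The "map the simplified mesh onto a disk" step is commonly realized with a Tutte-style embedding: boundary vertices are pinned to a circle and interior positions are obtained from a Laplace solve. The sketch below is that standard ingredient with uniform weights, not the paper's specific mapping or optimization.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import splu

    def tutte_disk_embedding(n_vertices, edges, boundary_loop):
        """edges: list of (i, j) pairs; boundary_loop: ordered boundary vertex indices."""
        # Pin the boundary loop to the unit circle.
        t = 2 * np.pi * np.arange(len(boundary_loop)) / len(boundary_loop)
        uv = np.zeros((n_vertices, 2))
        uv[boundary_loop, 0], uv[boundary_loop, 1] = np.cos(t), np.sin(t)
        # Uniform Laplacian L = D - A.
        i, j = np.array(edges).T
        A = sp.coo_matrix((np.ones(2 * len(edges)), (np.r_[i, j], np.r_[j, i])),
                          shape=(n_vertices, n_vertices)).tocsr()
        L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A
        # Interior positions solve L_II * uv_I = -L_IB * uv_B.
        interior = np.setdiff1d(np.arange(n_vertices), boundary_loop)
        L_II = L[interior][:, interior].tocsc()
        rhs = -(L[interior][:, boundary_loop] @ uv[boundary_loop])
        uv[interior] = splu(L_II).solve(rhs)
        return uv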

9.
To improve the accuracy and efficiency of surface reconstruction from large-scale point clouds, an implicit surface reconstruction algorithm based on globally supported radial basis functions (GSRBF) with topological invariance is proposed. Combined with the Hausdorff algorithm, a threshold derived from the principal and Gaussian curvatures of the point cloud is introduced to avoid large errors when extracting feature points, and a topological structure homeomorphic to the feature point cloud is constructed. Octree grid subdivision is introduced to build the topological relations of the point cloud, and the topology of the surface is reconstructed by constructing a structure homeomorphic to the model's control grid. Basis functions are constructed to determine the influence range of the feature points and normalized to obtain a partition of unity on the surface topology; compositing the partition of unity with the feature points yields the implicit surface. Experimental results show that the algorithm is applicable to surface reconstruction of arbitrary topology with high accuracy and efficiency.
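
As background, a generic globally supported RBF implicit fit, in the style of classic RBF surface reconstruction with off-surface constraint points offset along the normals, reduces to one dense linear solve. This sketch is not the GSRBF construction above; the triharmonic kernel, offset eps, and affine polynomial term are standard but assumed choices.

    import numpy as np

    def fit_rbf_implicit(points, normals, eps=0.01):
        """Fit f(x) = sum_i w_i ||x - c_i||^3 + affine term, with f = 0 on the
        surface and f = +/-eps at points offset along the unit normals."""
        centers = np.vstack([points, points + eps * normals, points - eps * normals])
        values = np.concatenate([np.zeros(len(points)),
                                 np.full(len(points), eps),
                                 np.full(len(points), -eps)])
        n = len(centers)
        r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
        K = r ** 3                                        # globally supported kernel
        P = np.hstack([np.ones((n, 1)), centers])         # affine polynomial part
        A = np.block([[K, P], [P.T, np.zeros((4, 4))]])
        b = np.concatenate([values, np.zeros(4)])
        sol = np.linalg.solve(A, b)
        w, coeffs = sol[:n], sol[n:]
        def f(x):                                          # zero level set is the surface
            d = np.linalg.norm(x[None, :] - centers, axis=-1)
            return d ** 3 @ w + coeffs[0] + x @ coeffs[1:]
        return f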

10.
To improve the efficiency of scalar multiplication on hyperelliptic curves, the skew-Frobenius map on elliptic curves is generalized to hyperelliptic curves. A skew-Frobenius map is constructed on hyperelliptic curves of genus 4, and from the skew-Frobenius maps on curves of genus 2, 3, and 4 a general form of the skew-Frobenius map on hyperelliptic curves is derived. Based on this general form, a new scalar multiplication algorithm is constructed that improves the efficiency of scalar multiplication on hyperelliptic curves. Experimental results show that the proposed skew-Frobenius-based scalar multiplication is 39% faster than the binary scalar multiplication algorithm.
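
For reference, the binary baseline used in the comparison is the left-to-right double-and-add method, which works in any additively written group (for hyperelliptic curves, divisor classes on the Jacobian with Cantor arithmetic). The sketch below takes the group operations as opaque callables; it shows only the baseline, not the proposed Frobenius-expansion method.

    def scalar_mul_binary(k, P, add, double, identity):
        """Left-to-right double-and-add: returns k*P using the supplied group ops.
        'add', 'double', and 'identity' are placeholders for divisor-class
        arithmetic on the Jacobian (e.g. Cantor's algorithm), not shown here."""
        R = identity
        for bit in bin(k)[2:]:            # most significant bit first
            R = double(R)
            if bit == '1':
                R = add(R, P)
        return R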

11.
Contour trees are extensively used in scalar field analysis. The contour tree is a data structure that tracks the evolution of level set topology in a scalar field. Scalar fields are typically available as samples at vertices of a mesh and are linearly interpolated within each cell of the mesh. A more suitable way of representing scalar fields, especially when a smoother function needs to be modeled, is via higher order interpolants. We propose an algorithm to compute the contour tree for such functions. The algorithm computes a local structure by connecting critical points using a numerically stable monotone path tracing procedure. Such structures are computed for each cell and are stitched together to obtain the contour tree of the function. The algorithm is scalable to higher degree interpolants whereas previous methods were restricted to quadratic or linear interpolants. The algorithm is intrinsically parallelizable and has potential applications to isosurface extraction.

12.
We present a new method to construct a trivariate T-spline representation of complex solids for the application of isogeometric analysis. We take a genus-zero solid as the basis of our study, but at the end of the work we explain how to generalize the results to solids of any genus. The proposed technique only demands a surface triangulation of the solid as input data. The key to this method lies in obtaining a volumetric parameterization between the solid and the parametric domain, the unit cube. To do that, an adaptive tetrahedral mesh of the parametric domain is isomorphically transformed onto the solid by applying a mesh untangling and smoothing procedure. The control points of the trivariate T-spline are calculated by imposing interpolation conditions at points located both in the interior and on the surface of the solid. The distribution of the interpolating points is adapted to the singularities of the domain to preserve the features of the surface triangulation. We present some results of the application of isogeometric analysis with T-splines to the resolution of the Poisson equation in solids parameterized with this technique.

13.
We consider a tangent-space representation of surfaces that maps each point on a surface to the tangent plane of the surface at that point. Such representations are known to facilitate the solution of several visibility problems, in particular, those involving silhouette analysis. In this paper, we introduce a novel class of distance fields for a given surface defined by its tangent planes. At each point in space, we assign a scalar value which is a weighted sum of distances to these tangent planes. We call the resulting scalar field a 'tangential distance field' (TDF). When applied to triangle mesh models, the tangent planes become supporting planes of the mesh triangles. The weighting scheme used to construct a TDF for a given mesh and the way the TDF is utilized can be closely tailored to a specific application. At the same time, the TDFs are continuous, lending themselves to standard optimization techniques such as greedy local search, thus leading to efficient algorithms. In this paper, we use four applications to illustrate the benefit of using TDFs: multi-origin silhouette extraction in Hough space, silhouette-based viewpoint selection, camera path planning and light source placement.
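
The field itself is easy to state in code: each triangle contributes its supporting plane, and the TDF at a query point is a weighted sum of distances to those planes. The sketch below uses unsigned distances and a uniform placeholder weighting; as the abstract notes, the actual weighting scheme is application-specific.

    import numpy as np

    def triangle_planes(vertices, faces):
        """Unit normals n and offsets d of each triangle's supporting plane (n.x + d = 0)."""
        a, b, c = (vertices[faces[:, k]] for k in range(3))
        n = np.cross(b - a, c - a)
        n /= np.linalg.norm(n, axis=1, keepdims=True)
        return n, -np.einsum('ij,ij->i', n, a)

    def tangential_distance_field(query, normals, offsets, weights=None):
        """TDF value at each query point: weighted sum of (unsigned) distances to
        all supporting planes. Uniform weights are a placeholder choice."""
        if weights is None:
            weights = np.ones(len(normals)) / len(normals)
        dist = np.abs(query @ normals.T + offsets)        # (n_query, n_planes)
        return dist @ weights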

14.
This paper presents a novel algorithm for generating a highly regular triangle mesh under various user requirements. Three scalar fields are first computed on the input mesh. Then, the intersections of their isocontours with one another are used to construct the highly regular mesh result. The proposed algorithm uses the N-symmetry direction field to guide the edge orientation. Size control is achieved by using a density function on the surface. All user requirements are incorporated into an energy optimization framework.

15.
Given a large set of unorganized point sample data, we propose a new framework for computing a triangular mesh representing an approximating piecewise smooth surface. The data may be non-uniformly distributed, noisy, and may contain holes. This framework is based on the combination of two types of surface representations, triangular meshes and T-spline level sets, which are implicit surfaces defined by refinable spline functions allowing T-junctions. Our method contains three main steps. Firstly, we construct an implicit representation of a smooth (C^2 in our case) surface, by using an evolution process of T-spline level sets, such that the implicit surface captures the topology and outline of the object to be reconstructed. The initial mesh with high quality is obtained through the marching triangulation of the implicit surface. Secondly, we project each data point to the initial mesh, and get a scalar displacement field. Detailed features will be captured by the displaced mesh. Finally, we present an additional evolution process, which combines data-driven velocities and feature-preserving bilateral filters, in order to reproduce sharp features. We also show that various shape constraints, such as distance field constraints, range constraints and volume constraints can be naturally added to our framework, which is helpful to obtain a desired reconstruction result, especially when the given data contains noise and inaccuracies.

16.
In this paper we define a new 3D vector field distance transform to implicitly represent a mesh surface. We show that this new representation is more accurate than the classic scalar field distance transform by comparing both representations with an error metric evaluation. The widely used marching cubes triangulation algorithm is adapted to the new vector field distance transform to correctly reconstruct the resulting explicit surface. In the reconstruction process of 3D scanned data, the useful mesh denoising operation is extended to the new vector field representation, which enables adaptive and selective filtering features. Results show that mesh processing with this new vector field representation is more accurate than with the scalar field distance transform and that it outperforms previous mesh filtering algorithms. Future work is discussed to extend this new vector field representation to other useful mesh operations and applications.
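
The difference between the two representations can be sketched on a sampled surface: a scalar distance transform keeps only the distance to the nearest surface point, while the vector field distance transform keeps the full displacement vector. The sketch below approximates both with a KD-tree over surface samples; an exact point-to-triangle query would replace it in practice.

    import numpy as np
    from scipy.spatial import cKDTree

    def vector_distance_transform(surface_points, grid_points):
        """For every grid point, store the full displacement vector to the nearest
        surface sample (instead of only the scalar distance). surface_points are
        assumed to be a dense sampling of the mesh."""
        tree = cKDTree(surface_points)
        _, idx = tree.query(grid_points)
        vectors = surface_points[idx] - grid_points       # vector field DT
        distances = np.linalg.norm(vectors, axis=1)       # classic scalar DT, for comparison
        return vectors, distances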

17.
18.
This paper introduces a novel, non-local characterization of critical points and their global relation in 2D uncertain scalar fields. The characterization is based on the analysis of the support of the probability density functions (PDF) of the input data. Given two scalar fields representing reliable estimations of the bounds of this support, our strategy identifies mandatory critical points: spatial regions and function ranges where critical points have to occur in any realization of the input. The algorithm provides a global pairing scheme for mandatory critical points which is used to construct mandatory join and split trees. These trees enable a visual exploration of the common topological structure of all possible realizations of the uncertain data. To allow multi-scale visualization, we introduce a simplification scheme for mandatory critical point pairs revealing the most dominant features. Our technique is purely combinatorial and handles parametric distribution models and ensemble data. It does not depend on any computational parameter and does not suffer from numerical inaccuracy or global inconsistency. The algorithm exploits ideas of the established join/split tree computation. It is therefore simple to implement, and its complexity is output-sensitive.  We illustrate, evaluate, and verify our method on synthetic and real-world data.
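
Since the method builds on the established join/split tree computation, it may help to recall that baseline: a sweep over vertices in decreasing function value with a union-find structure yields the join tree (the split tree is symmetric, sweeping upward). The sketch below is that textbook procedure, not the mandatory-critical-point algorithm itself.

    import numpy as np

    def join_tree(values, neighbors):
        """Classic join-tree sweep over a piecewise-linear scalar field.
        values: per-vertex scalars; neighbors: adjacency list (list of index lists).
        Returns a list of (lower_end, upper_end ... actually (previous lowest, v)) arcs."""
        values = np.asarray(values)
        order = np.argsort(-values)                    # sweep from high to low
        parent = {}                                    # union-find forest
        lowest = {}                                    # current lowest vertex per component

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]          # path compression
                v = parent[v]
            return v

        arcs = []
        for v in order:
            parent[v] = v
            lowest[v] = v
            for u in neighbors[v]:
                if u in parent:                        # u already swept (higher value)
                    ru, rv = find(u), find(v)
                    if ru != rv:
                        arcs.append((lowest[ru], v))   # attach that component below at v
                        parent[ru] = rv
                        lowest[rv] = v                 # v is now the lowest of the merge
        return arcs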

19.
We consider scientific data sets that describe density functions over three-dimensional geometric domains. Such data sets are often large and coarsened representations are needed for visualization and analysis. Assuming a tetrahedral mesh representation, we construct such representations with a simplification algorithm that combines three goals: the approximation of the function, the preservation of the mesh topology, and the improvement of the mesh quality. The third goal is achieved with a novel extension of the well-known quadric error metric. We perform a number of computational experiments to understand the effect of mesh quality improvement on the density map approximation. In addition, we study the effect of geometric simplification on the topological features of the function by monitoring its critical points.
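
The well-known quadric error metric that the paper extends assigns each supporting plane a 4x4 quadric and contracts vertex pairs at the position minimizing the accumulated quadratic error. A minimal sketch of those two standard ingredients (not the paper's extension) follows.

    import numpy as np

    def face_quadric(p0, p1, p2):
        """4x4 quadric K = p p^T of a triangle's supporting plane (Garland-Heckbert)."""
        n = np.cross(p1 - p0, p2 - p0)
        n = n / np.linalg.norm(n)
        p = np.append(n, -np.dot(n, p0))          # plane [a, b, c, d], ax + by + cz + d = 0
        return np.outer(p, p)

    def optimal_contraction(Q):
        """Position minimizing v^T Q v for an accumulated vertex-pair quadric Q."""
        A = Q.copy()
        A[3] = [0, 0, 0, 1]                       # enforce homogeneous coordinate = 1
        try:
            v = np.linalg.solve(A, [0, 0, 0, 1])
        except np.linalg.LinAlgError:             # singular: caller falls back to midpoint
            return None, None
        cost = float(v @ Q @ v)
        return v[:3], cost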
