Similar Documents
20 similar documents found (search time: 281 ms)
1.
We present a method for producing quad‐dominant subdivided meshes, which supports both adaptive refinement and adaptive coarsening. A hierarchical structure is stored implicitly in a standard half‐edge data structure, while still allowing efficient navigation through the different levels of subdivision. Subdivided meshes contain a majority of quad elements and a moderate number of triangles and pentagons in the regions of transition across different levels of detail. Topological LOD editing is controlled with local conforming operators, which support both mesh refinement and mesh coarsening. We show two possible applications of this method: we define an adaptive subdivision surface scheme that is topologically and geometrically consistent with the Catmull–Clark subdivision; and we present a remeshing method that produces semi‐regular adaptive meshes.
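To make the implicit-hierarchy idea concrete, here is a minimal Python sketch of a half-edge record extended with a per-face subdivision level; the field names and the per-face level tag are illustrative assumptions, not the paper's actual data layout.

```python
# Minimal half-edge sketch (hypothetical field names).  Tagging each face with
# its subdivision level is one simple way to keep the hierarchy implicit while
# still being able to walk faces of mixed degree (triangles, quads, pentagons).
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class HalfEdge:
    origin: int                          # vertex index the half-edge starts at
    twin: Optional["HalfEdge"] = None    # opposite half-edge
    next: Optional["HalfEdge"] = None    # next half-edge around the face
    face: Optional["Face"] = None

@dataclass
class Face:
    edge: Optional[HalfEdge] = None      # one of the face's half-edges
    level: int = 0                       # subdivision level of this face

def face_vertices(face: Face) -> List[int]:
    """Collect the vertex loop of a face, whatever its degree."""
    verts, he = [], face.edge
    while True:
        verts.append(he.origin)
        he = he.next
        if he is face.edge:
            return verts
```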

2.
A Reverse-Subdivision-Based Compression Algorithm for Triangle Meshes on Mobile Terminals   (total citations: 2; self-citations: 0; citations by others: 2)
马建平, 罗笑南, 陈渤, 李峥. Journal of Software (《软件学报》), 2009, 20(9): 3607-2615
To meet the real-time display needs of mobile users, a triangle mesh compression algorithm based on reverse subdivision is proposed. By improving the reverse Butterfly simplification algorithm and adopting a modified reverse Loop scheme, a dense triangle mesh is simplified into a progressive mesh consisting of a sparse base mesh and a series of offsets; a wavelet tree of the offsets is then designed so that the progressive mesh can be compressed with embedded zerotree coding. Experimental results show that, compared with previous methods, the algorithm achieves a high compression ratio while running faster, making it suitable for progressive network transmission of geometric models and real-time rendering of 3D graphics on mobile terminals.
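A minimal sketch of the base-mesh-plus-offsets reconstruction described above, assuming a `subdivide` callback that stands in for the modified reverse-Loop refinement rule; the zerotree coding of the offset wavelet tree is not shown.

```python
# Progressive reconstruction: subdivide the sparse base mesh level by level
# and add back the transmitted offsets.  `subdivide` is a placeholder for the
# paper's refinement rule, not its actual implementation.
import numpy as np

def reconstruct(base_vertices, offsets_per_level, subdivide):
    v = np.asarray(base_vertices, dtype=np.float64)
    for offsets in offsets_per_level:      # coarse -> fine
        v = subdivide(v)                   # predicted positions at the next level
        v = v + np.asarray(offsets)        # correct them with the stored offsets
    return v

# Toy demo: a "subdivision" that duplicates each vertex, with matching offsets.
coarse = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]
offsets = [np.full((4, 3), 0.1)]
print(reconstruct(coarse, offsets, lambda v: np.repeat(v, 2, axis=0)))
```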

3.
This paper proposes an efficient face-based algorithm for triangle mesh connectivity compression. It is a single-resolution, lossless algorithm that improves on Edgebreaker in two respects. In the mesh traversal stage, an adaptive traversal strategy keeps the split operations, which strongly hurt the compression ratio, to a minimum. In the entropy coding stage, a template is designed for each operator produced by the traversal; the operator's binary representation is determined from its template and then compressed with adaptive arithmetic coding to obtain the final result. Compared with the compression ratios of the best face-based algorithms in mesh connectivity compression, the proposed algorithm achieves a substantially higher compression ratio.
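To illustrate the entropy-coding side, here is a toy adaptive frequency model over Edgebreaker's C/L/E/R/S operators; it only estimates ideal adaptive code lengths and does not reproduce the per-operator binary templates or the actual arithmetic coder from the paper.

```python
# Order-0 adaptive model: symbol counts are updated after each coded operator,
# and the ideal code length -log2(p) is accumulated as a compression estimate.
import math
from collections import Counter

def adaptive_code_length(symbols, alphabet="CLERS"):
    counts = Counter({s: 1 for s in alphabet})   # Laplace-smoothed initial counts
    bits = 0.0
    for s in symbols:
        total = sum(counts.values())
        bits += -math.log2(counts[s] / total)    # cost of coding s under the model
        counts[s] += 1                           # adapt the model
    return bits

print(adaptive_code_length("CCCCRCCRRE"))        # skewed sequences cost fewer bits
```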

4.
In this paper, we present a progressive compression algorithm for textured surface meshes, which is able to handle polygonal non‐manifold meshes as well as discontinuities in the texture mapping. Our method applies iterative batched simplifications, which create high quality levels of detail by preserving both the geometry and the texture mapping. The main features of our algorithm are (1) generic edge collapse and vertex split operators suited for polygonal non‐manifold meshes with arbitrary texture seam configurations, and (2) novel geometry‐driven prediction schemes and entropy reduction techniques for efficient encoding of connectivity and texture mapping. To our knowledge, our method is the first progressive algorithm to handle polygonal non‐manifold models. For geometry and connectivity encoding of triangular manifolds and non‐manifolds, our method is competitive with the state of the art and even better at low and medium bitrates. Moreover, our method allows progressive encoding of texture coordinates with texture seams; it outperforms state‐of‐the‐art approaches for texture coordinate encoding. We also present a bit‐allocation framework which multiplexes mesh and texture refinement data using a perceptually‐based image metric, in order to optimize the quality of levels of detail.

5.
Most state‐of‐the‐art compression algorithms use complex connectivity traversal and prediction schemes, which are not efficient enough for online compression of large meshes. In this paper we propose a scalable, massively parallel approach for compression and decompression of large triangle meshes using the GPU. Our method traverses the input mesh in a parallel breadth‐first manner and encodes the connectivity data similarly to the well‐known cut‐border machine. Geometry data is compressed using a local prediction strategy. In contrast to the original cut‐border machine, we can additionally handle triangle meshes with inconsistently oriented faces. Our approach is more than one order of magnitude faster than currently used methods and achieves competitive compression rates.

6.
In this paper, we introduce a new formalism for mesh geometry prediction. We derive a class of smooth linear predictors from a simple approach based on the Taylor expansion of the mesh geometry function. We use this method as a generic way to compute weights for various linear predictors used for mesh compression and compare them with those of existing methods. We show that our scheme is actually equivalent to the Modified Butterfly subdivision scheme used for wavelet mesh compression. We also build new efficient predictors that can be used for connectivity‐driven compression in place of other schemes such as Average/Dual Parallelogram Prediction and High Degree Polygon Prediction. The new predictors use the same neighbourhood, but do not make any assumption about mesh anisotropy. In the case of Average Parallelogram Prediction, our new weights improve compression rates by 3% to 18% on our test meshes. For Dual Parallelogram Prediction, our weights are equivalent to those of the earlier Freelence approach, which outperforms traditional schemes by 16% on average. Our method effectively shows that these weights are optimal for the class of smooth meshes. Adopting our method in existing schemes is essentially free, because only the prediction weights in the code have to change.
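For reference, the baseline parallelogram prediction that these Taylor-derived weights generalize can be sketched as follows; the fixed (+1, +1, -1) coefficients below are the classic weights, and the point of the work above is precisely to replace them with optimized ones.

```python
# Classic parallelogram prediction across the edge shared by two triangles:
# vertices a and b span the shared edge, c is the opposite vertex of the
# already-decoded triangle, and the new vertex is predicted as a + b - c.
import numpy as np

def parallelogram_predict(a, b, c):
    a, b, c = (np.asarray(x, dtype=np.float64) for x in (a, b, c))
    return a + b - c

actual = np.array([1.0, 2.1, 0.0])
residual = actual - parallelogram_predict([0.0, 0.0, 0.0],
                                          [1.0, 1.0, 0.0],
                                          [0.0, -1.0, 0.0])
print(residual)   # only this small residual needs to be entropy coded
```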

7.
Angle-Analyzer: A Triangle-Quad Mesh Codec   (total citations: 2; self-citations: 0; citations by others: 2)

8.
9.
We present a new approach to dynamic mesh compression, which combines compression with simplification to achieve improved compression results, natural support for incremental transmission, and level of detail. The algorithm allows fast progressive transmission of dynamic 3D content. Our scheme exploits both the temporal and spatial coherency of the input data, and is especially efficient for highly detailed dynamic meshes. The algorithm can be seen as an ultimate extension of the clustering and local coordinate frame (LCF)‐based approaches, where each vertex is expressed within its own specific coordinate system. The presented results show that we achieve better compression efficiency than state‐of‐the‐art methods.
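The local coordinate frame (LCF) idea can be sketched as follows: an orthonormal frame is built from already-known reference points and a vertex is expressed in it. In the paper the frames come from the decoded neighbourhood of each vertex, which this illustrative snippet does not model.

```python
# Build a local frame from three reference points (Gram-Schmidt style) and
# express a position in that frame; residuals in local frames are typically
# smaller and easier to compress than raw global coordinates.
import numpy as np

def local_frame(p0, p1, p2):
    p0, p1, p2 = (np.asarray(p, dtype=np.float64) for p in (p0, p1, p2))
    x = p1 - p0
    x /= np.linalg.norm(x)
    n = np.cross(x, p2 - p0)
    n /= np.linalg.norm(n)
    y = np.cross(n, x)
    return p0, np.stack([x, y, n])           # origin and 3x3 rotation matrix

def to_local(v, origin, axes):
    return axes @ (np.asarray(v, dtype=np.float64) - origin)

origin, axes = local_frame([0, 0, 0], [1, 0, 0], [0, 1, 0])
print(to_local([0.5, 0.5, 0.2], origin, axes))   # coordinates in the local frame
```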

10.
The paper investigates the set of all selectively refined meshes that can be obtained from a progressive mesh. We call this set the transitive mesh space of a progressive mesh and present a theoretical analysis of the space. We define selective edge collapse and vertex split transformations, which we use to traverse all selectively refined meshes in the transitive mesh space. We propose a complete selective refinement scheme for a progressive mesh based on these transformations and compare the scheme with previous selective refinement schemes both theoretically and experimentally. In our comparison, we show that the complete scheme always generates selectively refined meshes with fewer vertices and faces than previous schemes for a given refinement criterion. The concept of dual pieces of the vertices in the vertex hierarchy plays a central role in the analysis of the transitive mesh space and in the design of the selective edge collapse and vertex split transformations.
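The two inverse transformations can be sketched on a plain indexed face list as below; this is a toy, index-based illustration that ignores validity and fold-over checks, and the record fields are assumptions rather than the paper's data structures.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class SplitRecord:
    v_s: int                               # surviving vertex
    v_t: int                               # vertex removed by the collapse
    v_t_position: Vec3
    faces_removed: List[Tuple[int, ...]]   # faces that contained both vertices
    faces_repointed: List[Tuple[int, ...]] # original faces whose v_t was replaced by v_s

def edge_collapse(faces, positions, v_s, v_t):
    """Collapse v_t into v_s; return the new face list plus an undo record."""
    removed = [f for f in faces if v_s in f and v_t in f]
    repointed = [f for f in faces if v_t in f and f not in removed]
    new_faces = [tuple(v_s if v == v_t else v for v in f) if f in repointed else f
                 for f in faces if f not in removed]
    return new_faces, SplitRecord(v_s, v_t, positions[v_t], removed, repointed)

def vertex_split(faces, positions, rec):
    """Inverse operation: restore v_t, re-point its faces, re-add the removed faces."""
    positions[rec.v_t] = rec.v_t_position
    restored = list(faces)
    for orig in rec.faces_repointed:
        merged = tuple(rec.v_s if v == rec.v_t else v for v in orig)
        restored[restored.index(merged)] = orig
    return restored + rec.faces_removed

# Tiny demo: collapse edge (0, 1) in a small fan, then undo it with a split.
faces = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (1, 4, 0), (1, 5, 2)]
positions: Dict[int, Vec3] = {i: (float(i), 0.0, 0.0) for i in range(6)}
coarse, rec = edge_collapse(faces, positions, v_s=0, v_t=1)
print(sorted(vertex_split(coarse, positions, rec)) == sorted(faces))   # True
```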

11.
We present a generic framework for compression of densely sampled three‐dimensional (3D) surfaces in order to satisfy the increasing demand for storing large amounts of 3D content. We decompose a given surface into patches that are parameterized as elevation maps over planar domains and resampled on regular grids. The resulting shaped images are encoded using a state‐of‐the‐art wavelet image coder. We show that our method is not only applicable to mesh‐ and point‐based geometry, but also outperforms current surface encoders for both primitives.

12.
We present a novel method to adaptively apply modifications to scene data stored in GPU memory. Such modifications may include interactive painting and sculpting operations in an authoring tool, or deformations resulting from collisions between scene objects detected by a physics engine. We only allocate GPU memory for the faces affected by these modifications to store fine‐scale colour or displacement values. This requires dynamic GPU memory management in order to assign and adaptively apply edits to individual faces at runtime. We present such a memory management technique based on a scan‐operation that is efficiently parallelizable. Since our approach runs entirely on the GPU, we avoid costly CPU–GPU memory transfer and eliminate typical bandwidth limitations. This minimizes runtime overhead to under a millisecond and makes our method ideally suited to many real‐time applications such as video games and interactive authoring tools. In addition, our algorithm significantly reduces storage requirements and allows for much higher resolution content compared to traditional global texturing approaches. Our technique can be applied to various mesh representations, including Catmull–Clark subdivision surfaces, as well as standard triangle and quad meshes. In this paper, we demonstrate several scenarios for these mesh types where our algorithm enables adaptive mesh refinement, local surface deformations and interactive on‐mesh painting and sculpting.
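The scan-based allocation idea can be illustrated with an exclusive prefix sum: per-face flags mark which faces received an edit, and the scan yields each edited face's slot in a tightly packed buffer. numpy's cumsum stands in for the GPU scan here, and the names and sizes are illustrative assumptions.

```python
import numpy as np

edited = np.array([0, 1, 0, 0, 1, 1, 0, 1], dtype=np.int64)   # 1 = face has an edit
slots = np.cumsum(edited) - edited        # exclusive scan: packed slot per edited face
n_slots = int(edited.sum())               # total number of fine-scale records to allocate

edit_buffer = np.zeros((n_slots, 4))      # e.g. one colour/displacement record per slot
face_to_slot = np.where(edited == 1, slots, -1)
print(face_to_slot)                       # [-1  0 -1 -1  1  2 -1  3]
```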

13.
Science and engineering applications often have anisotropic physics and therefore require anisotropic mesh adaptation. In common with previous researchers on this topic, we use metrics to specify the desired mesh. Where previous approaches are typically heuristic and sometimes require expensive optimization steps, our approach is an extension of isotropic Delaunay meshing methods and requires only occasional, relatively inexpensive optimization operations. We use a discrete metric formulation, with the metric defined at vertices. To map a local sub-mesh to the metric space, we compute metric lengths for edges, and use those lengths to construct a triangulation in the metric space. Based on the metric edge lengths, we define a quality measure in the metric space similar to the well-known shortest-edge to circumradius ratio for isotropic meshes. We extend the common mesh swapping, Delaunay insertion, and vertex removal primitives for use in the metric space. We give examples demonstrating our scheme’s ability to produce a mesh consistent with a discontinuous, anisotropic mesh metric and the use of our scheme in solution adaptive refinement.
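The discrete-metric edge length used in such schemes can be written as sqrt(e^T M e) for an edge vector e and a symmetric positive-definite metric M; the sketch below averages the lengths obtained with the metrics at the edge's two endpoints, which is one common convention and an assumption here, not necessarily the paper's exact formula.

```python
import numpy as np

def metric_length(a, b, Ma, Mb):
    """Length of edge (a, b) under vertex metrics Ma and Mb (endpoint average)."""
    e = np.asarray(b, dtype=np.float64) - np.asarray(a, dtype=np.float64)
    la = np.sqrt(e @ Ma @ e)              # length measured with the metric at a
    lb = np.sqrt(e @ Mb @ e)              # length measured with the metric at b
    return 0.5 * (la + lb)

M_iso = np.eye(3)                          # isotropic unit metric
M_aniso = np.diag([16.0, 1.0, 1.0])        # requests 4x shorter edges along x
print(metric_length([0, 0, 0], [1, 0, 0], M_iso, M_aniso))   # (1 + 4) / 2 = 2.5
```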

14.
The size of geometric data sets in scientific and industrial applications is constantly increasing. Storing surface or volume meshes in standard uncompressed formats results in large files that are expensive to store and slow to load and transmit. Scientists and engineers often refrain from using mesh compression because currently available schemes modify the mesh data. While connectivity is encoded in a lossless manner, the floating-point coordinates associated with the vertices are quantized onto a uniform integer grid to enable efficient predictive compression. Although a fine enough grid can usually represent the data with sufficient precision, the original floating-point values will change, regardless of grid resolution. In this paper we describe a method for compressing floating-point coordinates with predictive coding in a completely lossless manner. The initial quantization step is omitted and predictions are calculated in floating-point. The predicted and the actual floating-point values are broken up into sign, exponent, and mantissa and their corrections are compressed separately with context-based arithmetic coding. As the quality of the predictions varies with the exponent, we use the exponent to switch between different arithmetic contexts. We report compression results using the popular parallelogram predictor, but our approach will work with any prediction scheme. The achieved bit-rates for lossless floating-point compression nicely complement those resulting from uniformly quantizing with different precisions.
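The sign/exponent/mantissa split at the heart of this scheme can be sketched in a few lines; only the bit split and the per-field corrections are shown, not the context-based arithmetic coder.

```python
# Split an IEEE-754 double into its sign, exponent and mantissa fields.
import struct

def split_double(x: float):
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF
    mantissa = bits & ((1 << 52) - 1)
    return sign, exponent, mantissa

actual, predicted = 1.2345678, 1.2345      # e.g. a coordinate and its prediction
for name, (a, p) in zip(("sign", "exponent", "mantissa"),
                        zip(split_double(actual), split_double(predicted))):
    print(name, "correction:", a - p)      # each field's correction is coded separately
```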

15.
Triangle strips provide a compact representation of triangle meshes and make fast rendering and transmission possible, so compressing meshes built from triangle strips is of considerable practical importance. This paper uses the Triangle Fixer method to compress the connectivity of 3D models composed of triangle strips and further improves the compression ratio with order-3 adaptive arithmetic coding; the geometry is compressed by combining quantization, parallelogram prediction of vertex coordinates, and arithmetic coding. Good compression performance is achieved with essentially no loss in the quality of the geometric model.
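The quantization step mentioned above maps floating-point coordinates onto a uniform integer grid; a minimal sketch with an arbitrary 12-bit setting follows (strip decoding, prediction and arithmetic coding are omitted).

```python
import numpy as np

def quantize(vertices, bits=12):
    """Map coordinates onto a uniform (2^bits - 1) integer grid per axis."""
    v = np.asarray(vertices, dtype=np.float64)
    lo, hi = v.min(axis=0), v.max(axis=0)
    scale = (2 ** bits - 1) / np.maximum(hi - lo, 1e-12)
    return np.round((v - lo) * scale).astype(np.int64), lo, scale

def dequantize(q, lo, scale):
    return q / scale + lo

verts = np.random.rand(100, 3)
q, lo, scale = quantize(verts)
print(np.abs(dequantize(q, lo, scale) - verts).max())   # error bounded by half a grid step
```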

16.
Laplacian mesh compression, also known as high‐pass mesh coding, is a popular technique for efficiently storing both static and dynamic triangle meshes that gained further recognition with the advent of perceptual mesh distortion evaluation metrics. Currently, the usual rule of thumb that drives the choice of a mesh compression algorithm is whether or not accuracy in absolute scale is required: Laplacian mesh encoding is chosen when perceptual quality is the main objective, while other techniques provide better results in terms of mechanistic error measures such as mean squared error. In this work, we present a modification of the Laplacian mesh encoding algorithm that preserves its benefits while substantially reducing the resulting absolute error. Our approach is based on analyzing the reconstruction stage and modifying the quantization of differential coordinates, so that the decoded result stays close to the input even in areas that are distant from anchor points. In our approach, we avoid solving an overdetermined system of linear equations and thus reduce data redundancy, improve conditioning and achieve faster processing. Our approach can be directly applied to both static and dynamic mesh compression, and we provide quantitative results comparing it with state‐of‐the‐art methods.
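The differential (Laplacian) coordinates whose quantization this work revisits can be computed, with uniform weights, as each vertex's offset from the average of its one-ring neighbours; anchor handling and the modified reconstruction are not shown in this sketch.

```python
import numpy as np

def differential_coordinates(vertices, neighbors):
    """neighbors[i] lists the vertex indices adjacent to vertex i."""
    v = np.asarray(vertices, dtype=np.float64)
    delta = np.empty_like(v)
    for i, ring in enumerate(neighbors):
        delta[i] = v[i] - v[ring].mean(axis=0)   # offset from the one-ring centroid
    return delta

verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 1.]])
rings = [[1, 2, 3], [0, 2, 3], [0, 1, 3], [0, 1, 2]]
print(differential_coordinates(verts, rings))
```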

17.
We present intrinsic methods to address the fundamental problem of segmenting a mesh into a specified number of patches with a uniform size and a controllable overlap. Although never addressed in the literature, such a segmentation is useful for a wide range of processing operations where patches represent local regions and overlaps regularize solutions in neighbour patches. Further, we propose a symmetry‐aware distance measure and symmetric modification to furthest‐point sampling, so that our methods can operate on semantically symmetric meshes. We introduce quantitative measures of patch size uniformity and symmetry, and show that our segmentation outperforms state‐of‐the‐art alternatives in experiments on a well‐known dataset. We also use our segmentation in illustrative applications to texture stitching and synthesis where we improve results over state‐of‐the‐art approaches.
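The furthest-point sampling that the symmetric modification builds on can be sketched as follows; for brevity this uses Euclidean rather than intrinsic (geodesic) distances and ignores the symmetry-aware measure.

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedily pick k points, each as far as possible from those already chosen."""
    pts = np.asarray(points, dtype=np.float64)
    chosen = [seed]
    dist = np.linalg.norm(pts - pts[seed], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))                    # farthest from the current seeds
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(pts - pts[nxt], axis=1))
    return chosen

print(farthest_point_sampling(np.random.rand(200, 3), 5))
```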

18.
This paper presents a novel wavelet‐based transform and coding scheme for irregular meshes. The transform preserves geometric features at lower resolutions by adaptive vertex sampling and retriangulation, resulting in more accurate subsampling and better avoidance of smoothing and aliasing artefacts. By employing octree‐based coding techniques, the encoding of both connectivity and geometry information is decoupled from any mesh traversal order, and allows for exploiting the intra‐band statistical dependencies between wavelet coefficients. Improvements over the state of the art obtained by our approach are three‐fold: (1) improved rate–distortion performance over Wavemesh and IPR for both the Hausdorff and root mean square distances at low‐to‐mid‐range bitrates, most obvious when clear geometric features are present while remaining competitive for smooth, feature‐poor models; (2) improved rendering performance at any triangle budget, translating to a better quality for the same runtime memory footprint; (3) improved visual quality when applying similar limits to the bitrate or triangle budget, showing more pronounced improvements than rate–distortion curves.

19.
When simulating fluids, tetrahedral methods provide flexibility and ease of adaptivity that Cartesian grids find difficult to match. However, this approach has so far been limited by two conflicting requirements. First, accurate simulation requires quality Delaunay meshes and the use of circumcentric pressures. Second, meshes must align with potentially complex moving surfaces and boundaries, necessitating continuous remeshing. Unfortunately, sacrificing mesh quality in favour of speed yields inaccurate velocities and simulation artifacts. We describe how to eliminate the boundary‐matching constraint by adapting recent embedded boundary techniques to tetrahedra, so that neither air nor solid boundaries need to align with mesh geometry. This enables the use of high quality, arbitrarily graded, non‐conforming Delaunay meshes, which are simpler and faster to generate. Temporal coherence can also be exploited by reusing meshes over adjacent timesteps to further reduce meshing costs. Lastly, our free surface boundary condition eliminates the spurious currents that previous methods exhibited for slow or static scenarios. We provide several examples demonstrating that our efficient tetrahedral embedded boundary method can substantially increase the flexibility and accuracy of adaptive Eulerian fluid simulation.

20.
This paper addresses the problem of representing dynamic 3D meshes in a compact way, so that they can be stored and transmitted efficiently. We focus on sequences of triangle meshes with shared connectivity, avoiding the necessity of having a skinning structure. Our method first computes an average mesh of the whole sequence in edge shape space. A discrete geometric Laplacian of this average surface is then used to encode the coefficients that describe the trajectories of the mesh vertices. Optionally, a novel spatio‐temporal predictor may be applied to the trajectories to further improve the compression rate. We demonstrate that our approach outperforms the current state of the art in terms of low data rate at a given perceived distortion, as measured by the STED and KG error metrics.
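The average-mesh-plus-trajectories decomposition can be sketched as below; the average here is taken naively in vertex-coordinate space, whereas the paper computes it in edge shape space and encodes the trajectory coefficients with a discrete Laplacian of the average surface.

```python
import numpy as np

frames = np.random.rand(10, 100, 3)        # (frames, vertices, xyz), shared connectivity
average = frames.mean(axis=0)              # one "average mesh" for the whole sequence
trajectories = frames - average            # per-vertex residual trajectories to encode
print(average.shape, trajectories.shape)   # (100, 3) (10, 100, 3)
```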

