Similar Documents
 16 similar documents found (search time: 859 ms)
1.
The Face Fixer method is used to compress the topology information of 3-D models built from general polygonal meshes, with order-3 adaptive arithmetic coding applied to further raise the compression ratio. Geometry information is compressed by transforming vertex coordinates into a local coordinate system and combining quantization, parallelogram vertex-coordinate prediction, and arithmetic coding. Good compression performance is obtained with essentially no loss in the quality of the geometric model.
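The quantization-plus-parallelogram-prediction step described in this abstract can be sketched as follows; the helper names and the quantization step size are illustrative assumptions, not details from the paper:

```python
# Sketch of parallelogram prediction for mesh vertex coordinates. The paper's
# actual pipeline also includes a local coordinate transform and arithmetic
# coding of the residuals, both omitted here.

def quantize(value, step=0.01):
    """Uniform scalar quantization of one coordinate (step is an assumption)."""
    return round(value / step)

def parallelogram_predict(a, b, c):
    """Predict the fourth vertex d of parallelogram a-b-c-d: d = b + c - a.

    a is the vertex opposite the shared edge (b, c) of the already-decoded
    triangle; only the residual d_actual - d_predicted gets entropy coded.
    """
    return tuple(bi + ci - ai for ai, bi, ci in zip(a, b, c))

# Example: decoded triangle (0,0), (1,0), (0,1); the completion is (1,1).
pred = parallelogram_predict((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
actual = (1.05, 0.98)
residual = tuple(quantize(x - p) for x, p in zip(actual, pred))
```

Because the predictor completes the parallelogram from an adjacent decoded triangle, residuals stay small on smooth surfaces and compress well under entropy coding.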

2.
To obtain good triangle-mesh compression performance, a non-progressive triangle-mesh compression method based on the wavelet transform is proposed. The method first removes most of the connectivity information by remeshing, then exploits the strong decorrelating power of the wavelet transform to compress the geometry information. After remeshing and the wavelet transform, all wavelet coefficients are scanned into a sequence in a fixed order and then quantized and arithmetic coded. In addition, for the adaptive semi-regular sampling pattern produced by remeshing, an adaptive subdivision-information coding algorithm is designed so that the decoder knows on which vertex each wavelet coefficient should be placed. Experiments show that, when compressing complex meshes acquired by 3-D scanners, the method achieves clearly better rate-distortion performance than the Edgebreaker method; at 10-bit quantization the compression ratio is around 200:1, more than twice that of Edgebreaker.
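As a schematic of the "wavelet transform, then quantize and entropy code the coefficients" pipeline, here is a one-level 1-D Haar transform with uniform quantization; real mesh coders run wavelets on the semi-regular remeshed surface, so this is only a toy sketch with assumed parameter values:

```python
def haar_1d(values):
    """One level of the 1-D Haar transform: pairwise averages and details."""
    coarse = [(values[i] + values[i + 1]) / 2 for i in range(0, len(values), 2)]
    detail = [(values[i] - values[i + 1]) / 2 for i in range(0, len(values), 2)]
    return coarse, detail

def quantize(coeffs, bits=10, vmax=1.0):
    """Uniform quantization of coefficients in [-vmax, vmax] to integer bins."""
    step = 2 * vmax / (1 << bits)
    return [round(c / step) for c in coeffs]

coarse, detail = haar_1d([0.5, 0.52, 0.9, 0.1])
# Smooth regions give near-zero details, which quantize to cheap symbols.
q = quantize(detail)
```

The decorrelation claim shows up directly: the smooth pair (0.5, 0.52) yields a tiny detail coefficient, while the sharp pair (0.9, 0.1) yields a large one.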

3.
刘迎  刘学慧  吴恩华 《软件学报》2008,19(4):1016-1025
An efficient method is proposed for compressing the topology information of triangle-mesh models. Unlike previous approaches that simply apply arithmetic coding or Huffman coding to the topology stream generated by traversing the mesh, the method exploits the characteristics of triangle meshes (particularly regular ones) to adaptively raise the prediction accuracy for the symbol currently being encoded, achieving efficient compression of the mesh's topology information. The algorithm first traverses the triangle mesh to obtain an operator sequence, then applies variable-template adaptive arithmetic coding to each operator in the sequence. During encoding, a template is computed for the current operator from the preceding operator, the characteristics of the triangle-mesh model, and the traversal method; within this template, operators with high predicted probability are represented by shorter binary strings. From the current operator's variable template, its binary representation is obtained, and each bit of that representation is adaptively arithmetic coded. The method performs single-resolution, face-based lossless compression of the topology information of manifold triangle meshes and achieves very good compression results, with compression ratios even better than those of the TG algorithm, the best-performing algorithm in the field of topology compression.
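A much-simplified sketch of the variable-template idea: per-context symbol counts are kept, and the template reorders symbols so that the currently most likely one gets the shortest binary string. Prefix codes stand in here for the per-bit adaptive arithmetic coding, and the Edgebreaker-style alphabet is an assumption, not the paper's actual operator set:

```python
from collections import defaultdict

class AdaptiveTemplate:
    """Per-context frequency counts; the template ranks symbols so that the
    currently most frequent one receives the shortest binary string."""

    SYMBOLS = ["C", "L", "E", "R", "S"]
    CODES = ["0", "10", "110", "1110", "1111"]  # shortest code first

    def __init__(self):
        # Each context starts with uniform counts of 1 (Laplace smoothing).
        self.counts = defaultdict(lambda: {s: 1 for s in self.SYMBOLS})

    def encode(self, prev, symbol):
        table = self.counts[prev]
        ranked = sorted(self.SYMBOLS, key=lambda s: -table[s])  # stable sort
        code = self.CODES[ranked.index(symbol)]
        table[symbol] += 1  # adapt: frequent symbols migrate to shorter codes
        return code

enc = AdaptiveTemplate()
# "^" marks the start-of-stream context; prevs are the stream shifted by one.
bits = "".join(enc.encode(p, s) for p, s in zip("^CCCR", "CCCRC"))
```

After a few repetitions the dominant symbol in each context costs one bit, which is the effect the abstract attributes to its prediction-accuracy improvement.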

4.
A feature-preserving method for mesh simplification and compressed progressive transmission   Cited: 1 (self-citations: 0, other citations: 1)
To meet the needs of transmitting 3-D collection objects over the network in digital museums while preserving their features during transmission, a mesh simplification method that preserves topological and texture features is proposed. Building on the triangle-collapse simplification algorithm, the concepts of boundary triangles and color-discontinuity triangles are introduced to improve the computation of the error matrix and the error-control method, preserving feature information such as the original model's geometric boundary and texture attributes. Combined with progressive meshes and compression coding, a progressive mesh file based on octree coding is constructed, realizing a network-based progressive transmission system for 3-D models.
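The error-matrix computation that this kind of triangle-collapse simplification builds on resembles a quadric error metric; a minimal pure-Python sketch, with the penalty weight for boundary or color-discontinuity triangles as an illustrative assumption:

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def plane_quadric(p0, p1, p2, penalty=1.0):
    """Fundamental error quadric K = w * (n|d)(n|d)^T for a triangle's plane."""
    n = cross(sub(p1, p0), sub(p2, p0))
    inv = 1.0 / math.sqrt(dot(n, n))
    n = tuple(c * inv for c in n)
    plane = n + (-dot(n, p0),)            # homogeneous plane (a, b, c, d)
    return [[penalty * plane[i] * plane[j] for j in range(4)] for i in range(4)]

def vertex_error(K, p):
    """v^T K v: weighted squared distance of p to the accumulated planes."""
    v = tuple(p) + (1.0,)
    return sum(v[i] * K[i][j] * v[j] for i in range(4) for j in range(4))

# A boundary triangle in the z = 0 plane, weighted 10x to resist collapsing:
K = plane_quadric((0, 0, 0), (1, 0, 0), (0, 1, 0), penalty=10.0)
err = vertex_error(K, (0.3, 0.3, 0.2))   # 10 * 0.2**2 = 0.4
```

Summing such quadrics per vertex and penalizing boundary or color-discontinuity triangles is one plausible reading of the "improved error matrix" the abstract describes.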

5.
An efficient entropy-coding method that is generally applicable to mesh topology compression is proposed. Unlike previous approaches that simply apply arithmetic coding or Huffman coding to the topology stream generated by traversing the mesh, the method first computes a Huffman code for each symbol of the topology stream and then encodes the Huffman values with context-based arithmetic coding, using the second-to-last symbol of the already-encoded sequence as the context, thereby compressing the mesh's topology information effectively. Experimental results show that the entropy-coding method applies to the topology streams produced by a wide range of mesh topology compression methods, and its compression results generally beat the entropy of the topology stream, which is the best compression ratio most topology compression algorithms can reach on their own.
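Beating "the entropy of the topology stream" refers to its zeroth-order entropy, the bound that no memoryless (context-free) coder can beat; a quick sketch with a made-up op-code string:

```python
import math
from collections import Counter

def entropy_bits_per_symbol(stream):
    """Zeroth-order entropy H = -sum p log2 p over symbol frequencies."""
    counts = Counter(stream)
    n = len(stream)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Hypothetical op-code stream: C half the time, R a quarter, S and E an eighth.
h = entropy_bits_per_symbol("CCCCRRSE")  # 1.75 bits/symbol
```

Context-based coding can go below this figure because it models conditional probabilities rather than the marginal symbol frequencies the entropy is computed from.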

6.
This paper presents a new algorithm for geometric data compression. The basic idea is as follows: given the boundary of the object's mesh, first find the concave points of the boundary, then build a special tree structure over the mesh nodes, the cross-section tree, and represent the relations between the mesh nodes of adjacent tree nodes as linked lists (triangle strips); the connectivity of the mesh nodes is encoded, stored, and transmitted according to the cross-section-tree structure and the linked lists (triangle strips). Unlike Gabriel Taubin's algorithm, this algorithm has many advantages, including lossless compression of vertex coordinates, attribute coordinates, and triangle connectivity.

7.
Geometry compression of general polygonal meshes   Cited: 8 (self-citations: 1, other citations: 8)
A general-purpose geometry compression algorithm for general polygonal meshes is proposed. Since most current 3-D topology compression algorithms apply only to triangle meshes, an existing algorithm is effectively extended so that meshes containing polygons with any number of edges can be compression coded efficiently. In addition, exploiting the fact that the vertices of any polygon in a polygonal mesh are coplanar, a vertex-coordinate compression scheme is proposed; combined with the above topology compression algorithm, it significantly reduces the bandwidth needed to transmit general polygonal-mesh data over a network. Finally, the output stream produced during encoding is further compressed with a hybrid of run-length coding and arithmetic coding, raising the compression ratio further.
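The coplanarity observation can be illustrated as follows: once a frame for a polygon's plane is fixed, each of its vertices needs only two in-plane coordinates instead of three (the frame construction below is an illustrative choice, not the paper's scheme):

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def normalize(a):
    inv = 1.0 / math.sqrt(dot(a, a))
    return tuple(c * inv for c in a)

def plane_frame(p0, p1, p2):
    """Origin and orthonormal in-plane axes (u, v) for the polygon's plane."""
    u = normalize(sub(p1, p0))
    n = normalize(cross(u, sub(p2, p0)))
    v = cross(n, u)
    return p0, u, v

def to_plane_2d(points, origin, u, v):
    """Coplanar vertices projected to two in-plane coordinates each."""
    return [(dot(sub(p, origin), u), dot(sub(p, origin), v)) for p in points]

# A planar quad lying in the z = 1 plane:
origin, u, v = plane_frame((0, 0, 1), (1, 0, 1), (1, 1, 1))
pts2d = to_plane_2d([(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)], origin, u, v)
```

Storing one plane plus 2-D coordinates per polygon cuts a third of the raw coordinate data before quantization and entropy coding even begin.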

8.
宫法明  徐涛  周笑天 《计算机工程与设计》2007,28(19):4800-4802,4830
Single-rate 3-D mesh compression techniques compress a mesh's geometry information and topological connectivity information independently. Connectivity compression usually traverses the mesh connectivity in some structured representation and entropy codes the traversal; geometry compression generally involves three processing steps: quantization, prediction, and entropy coding. Based on a survey of this class of algorithms, a unified single-rate compression framework for triangle meshes is proposed and designed, and experiments were conducted with the Edgebreaker algorithm as an example, implemented with OpenGL and Visual C++ 6.0.

9.
A chunked progressive compression algorithm for geometric models is proposed. Using a displaced subdivision-surface representation, the model's control mesh is partitioned into multiple charts according to a mesh-partitioning criterion. After a wavelet transform of the surface displacements, a zerotree coding algorithm compresses the wavelet coefficients of each chart, producing multiple mutually independent progressive bitstreams. The algorithm supports dynamic multiresolution decoding and can obtain a view-dependent multiresolution representation of the model directly from the compressed result. Experimental results show that the algorithm improves the efficiency of multiresolution model representation and rendering and reduces the network-bandwidth requirements of distributed systems.

10.
Mesh topology compression is a fundamental algorithm in computer graphics. The method in this paper is single-resolution and performs lossless compression of the topology information of non-triangular mesh models. The algorithm first traverses all polygons of the mesh to obtain an operator sequence, then Huffman codes the sequence, and finally applies context-based variable-length arithmetic coding to the Huffman-coded result to obtain the final compressed output. Compared with high-ratio algorithms for compressing non-triangular mesh topology, this algorithm yields better compression results. Another notable advantage is improved decoding time and space: the new algorithm can decode the code for a polygon immediately after receiving it and then discard that code, which makes it particularly suitable for real-time and interactive applications with online transmission and decoding. The algorithm can also handle models with holes and handles.

11.
In this paper, we present an adaptive-coding method for generic triangular meshes including both regular and irregular meshes. Though it is also based on iterative octree decomposition of the object space for the original mesh, as some prior art, it has novelties in the following two aspects. First, it mathematically models the occupancy codes containing only a single "1" bit for accurate initialization of the arithmetic coder at each octree level. Second, it adaptively prioritizes the bits in an occupancy code using a local surface smoothness measure that is based on triangle areas and therefore mitigates the effect of non-uniform vertex sampling over the surface. As a result, the proposed 3D mesh coder yields outstanding coding performance for both regular and irregular meshes and especially for the latter, as demonstrated by the experiments.
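Octree geometry coders of this kind emit one occupancy code per subdivided cell; a minimal sketch with an assumed octant-numbering convention:

```python
# Sketch of octree occupancy codes: each cell subdivision yields an 8-bit code
# whose i-th bit says whether child octant i contains any vertex. The octant
# numbering below is an illustrative convention, not the paper's.

def octant_index(point, center):
    """Octant 0..7 from the sign of each coordinate relative to the center."""
    return sum(1 << axis for axis in range(3) if point[axis] >= center[axis])

def occupancy_code(points, center):
    code = 0
    for p in points:
        code |= 1 << octant_index(p, center)
    return code

# Two vertices in a unit cube centred at (0.5, 0.5, 0.5):
# (0.1, 0.1, 0.1) falls in octant 0 and (0.9, 0.2, 0.2) in octant 1,
# so bits 0 and 1 of the code are set.
code = occupancy_code([(0.1, 0.1, 0.1), (0.9, 0.2, 0.2)], (0.5, 0.5, 0.5))
```

Codes with a single "1" bit, the case the paper models specially, arise whenever a cell contains exactly one occupied child, which is common at the deepest octree levels.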

12.
We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream also documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder has only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes—even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k hexahedra “blade” mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB) with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.
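The role of topological finalization tags can be sketched as below: a tiny replay of a (vertex, hexahedron, finalize) stream showing that only a small working set of vertices is ever live. The record format is an illustrative assumption, not the paper's file format:

```python
def stream_max_resident(records):
    """Replay a vertex/hexahedron/finalize stream and report the peak number
    of vertices that must be held in memory at once."""
    live, max_live, next_id = {}, 0, 0
    for kind, payload in records:
        if kind == "v":               # new vertex: keep until finalized
            live[next_id] = payload
            next_id += 1
            max_live = max(max_live, len(live))
        elif kind == "h":             # cell references currently-live vertices
            assert all(i in live for i in payload)
        elif kind == "f":             # finalization tag: release the vertex
            del live[payload]
    return max_live

stream = [("v", (0, 0, 0)), ("v", (1, 0, 0)), ("h", (0, 1)), ("f", 0),
          ("v", (2, 0, 0)), ("h", (1, 2)), ("f", 1), ("f", 2)]
peak = stream_max_resident(stream)  # only 2 vertices are live at any time
```

Without the "f" records the working set would grow with the whole mesh; with them, memory tracks the stream's front, which is what lets the coder handle meshes larger than RAM.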

13.
Multiresolution meshes provide an efficient and structured representation of geometric objects. To increase the mesh resolution only at vital parts of the object, adaptive refinement is widely used. We propose a lossless compression scheme for these adaptive structures that exploits the parent–child relationships inherent to the mesh hierarchy. We use the rules that correspond to the adaptive refinement scheme and store bits only where some freedom of choice is left, leading to compact codes that are free of redundancy. Moreover, we extend the coder to sequences of meshes with varying refinement. The connectivity compression ratio of our method exceeds that of state‐of‐the‐art coders by a factor of 2–7. For efficient compression of vertex positions we adapt popular wavelet‐based coding schemes to the adaptive triangular and quadrangular cases to demonstrate the compatibility with our method. Akin to state‐of‐the‐art coders, we use a zerotree to encode the resulting coefficients. Using improved context modelling we enhanced the zerotree compression, cutting the overall geometry data rate by 7% below those of the successful Progressive Geometry Compression. More importantly, by exploiting the existing refinement structure we achieve compression factors that are four times greater than those of coders which can handle irregular meshes.

14.
Modern graphics applications need to render large amounts of geometry, which puts memory and bandwidth pressure on the rendering hardware. One way to address this is to compress static 3-D geometry in a preprocessing stage. This paper presents a new triangle-mesh compression/decompression algorithm that decomposes the triangle mesh into a set of triangle strips and sequential vertex chains, then entropy codes the vertex connectivity. Compared with the existing GTM compression algorithm, it improves the compression ratio by 32% and supports parallel decompression. A parallelogram prediction method for compressing vertex coordinates is also presented.

15.
Edgebreaker: connectivity compression for triangle meshes   Cited: 10 (self-citations: 0, other citations: 10)
Edgebreaker is a simple scheme for compressing the triangle/vertex incidence graphs (sometimes called connectivity or topology) of three-dimensional triangle meshes. Edgebreaker improves upon the storage required by previously reported schemes, most of which can guarantee only an O(t log(t)) storage cost for the incidence graph of a mesh of t triangles. Edgebreaker requires at most 2t bits for any mesh homeomorphic to a sphere and supports fully general meshes by using additional storage per handle and hole. For large meshes, entropy coding yields less than 1.5 bits per triangle. Edgebreaker's compression and decompression processes perform identical traversals of the mesh from one triangle to an adjacent one. At each stage, compression produces an op-code describing the topological relation between the current triangle and the boundary of the remaining part of the mesh. Decompression uses these op-codes to reconstruct the entire incidence graph. Because Edgebreaker's compression and decompression are independent of the vertex locations, they may be combined with a variety of vertex-compressing techniques that exploit topological information about the mesh to better estimate vertex locations. Edgebreaker may be used to compress the connectivity of an entire mesh bounding a 3D polyhedron or the connectivity of a triangulated surface patch whose boundary need not be encoded. The paper also offers a comparative survey of the rapidly growing field of geometric compression.
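The 2t-bit guarantee follows from giving the frequent C op-code one bit and the other four op-codes three bits each: roughly one C is emitted per vertex, and t ≈ 2v for meshes homeomorphic to a sphere, so about half the symbols are C. A sketch with one common code assignment (the exact table is an assumption):

```python
# One-bit code for C, three-bit codes for L, E, R, S (an assumed assignment).
CODE = {"C": "0", "L": "110", "E": "111", "R": "101", "S": "100"}

def encode_clers(ops):
    """Encode an Edgebreaker CLERS op-code string, one code per triangle."""
    return "".join(CODE[op] for op in ops)

# A made-up stream of 10 triangles, 6 of them C:
# 6*1 + 4*3 = 18 bits, within the 2t = 20-bit bound.
bits = encode_clers("CCRCCSCLCE")
```

Entropy coding the same stream, as the abstract notes, pushes the cost further below 1.5 bits per triangle in practice.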

16.
Most state‐of‐the‐art compression algorithms use complex connectivity traversal and prediction schemes, which are not efficient enough for online compression of large meshes. In this paper we propose a scalable massively parallel approach for compression and decompression of large triangle meshes using the GPU. Our method traverses the input mesh in a parallel breadth‐first manner and encodes the connectivity data similarly to the well known cut‐border machine. Geometry data is compressed using a local prediction strategy. In contrast to the original cut‐border machine, we can additionally handle triangle meshes with inconsistently oriented faces. Our approach is more than one order of magnitude faster than currently used methods and achieves competitive compression rates.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号