Similar Articles
17 similar articles found (search time: 156 ms)
1.
Triangle strips provide a compact representation of triangle meshes and make fast rendering and transmission possible, so studying the compression of strip-based meshes is of real significance. This paper compresses the topology of 3D models composed of triangle strips using the Triangle Fixer method, and further improves the compression ratio with order-3 adaptive arithmetic coding; the geometry is compressed by combining quantization, parallelogram vertex-coordinate prediction, and arithmetic coding. Good compression performance is obtained with essentially no loss in geometric quality.
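The quantization-plus-parallelogram-prediction pipeline this abstract combines with arithmetic coding is a standard one; below is a minimal Python sketch of those two steps. All function names and the 12-bit precision are illustrative assumptions, not code from the paper.

```python
def quantize(vertices, bits=12):
    """Map float coordinates onto a uniform integer grid, per axis."""
    dims = len(vertices[0])
    lo = [min(v[d] for v in vertices) for d in range(dims)]
    hi = [max(v[d] for v in vertices) for d in range(dims)]
    scale = [(2**bits - 1) / (hi[d] - lo[d]) if hi[d] > lo[d] else 0.0
             for d in range(dims)]
    return [tuple(round((v[d] - lo[d]) * scale[d]) for d in range(dims))
            for v in vertices]

def parallelogram_predict(a, b, c):
    """Predict the vertex completing the parallelogram over edge (b, c),
    opposite vertex a: p = b + c - a."""
    return tuple(b[d] + c[d] - a[d] for d in range(len(a)))

# Toy example: on a flat quad the prediction is exact, so the residual
# (the only thing that needs entropy coding) is zero.
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]
q = quantize(verts)
pred = parallelogram_predict(q[0], q[1], q[2])
residual = tuple(q[3][d] - pred[d] for d in range(3))
```

On smooth meshes most residuals stay small, which is what makes the subsequent arithmetic coding effective.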

2.
Geometric compression of general polygonal meshes   Cited: 8 (self-citations: 1, other citations: 8)
A general geometric compression algorithm for arbitrary polygonal meshes is proposed. Since most existing 3D topology-compression algorithms apply only to triangle meshes, the algorithm extends prior work so that meshes containing polygons with any number of edges can be encoded effectively. In addition, exploiting the fact that the vertices of any polygon in the mesh are coplanar, a vertex-coordinate compression scheme is proposed; combined with the topology coder, it markedly reduces the bandwidth needed to transmit general polygonal mesh data over networks. Finally, the output stream of the encoding stage is compressed with a hybrid of run-length and arithmetic coding to further raise the compression ratio.

3.
To further improve 3D mesh compression performance, a progressive 3D mesh compression algorithm with local coordinates and hierarchical skip partitioning, based on Bayesian entropy coding, is proposed on top of a Gaussian hybrid probability model (GHPM). The GHPM drives vertex creation, edge-trigger design, face-orientation prediction, and hierarchical skip partitioning, yielding posterior-probability estimates of the geometry and topology symbols for a given vertex. An arithmetic coder driven by these posteriors encodes the topology symbols, with separate designs for different contexts; a progressive label-prediction process makes full use of already-coded group information, and a local coordinate frame compresses the geometric residuals effectively. Comparative experiments show that the proposed algorithm achieves higher compression ratio, higher accuracy, and better computational performance than the AD, Wavemesh, AAD, and RDO coders.

4.
To achieve good triangle-mesh compression performance, a non-progressive compression method based on the wavelet transform is proposed. The method first removes most of the connectivity information by remeshing, then exploits the strong decorrelation power of the wavelet transform to compress the geometry. After remeshing and the wavelet transform, all wavelet coefficients are scanned into a sequence in a fixed order, then quantized and arithmetic-coded. For the adaptive semi-regular sampling pattern produced by remeshing, an adaptive subdivision-information coding algorithm is also designed so that the decoder knows on which vertex each wavelet coefficient should be placed. Experiments show that on complex meshes acquired by 3D scanners the method achieves clearly better rate-distortion performance than the Edgebreaker method; at 10-bit quantization the compression factor is about 200x, more than twice that of Edgebreaker.
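The decorrelation step this abstract relies on can be illustrated in one dimension with a Haar wavelet transform: smooth "geometry" produces many small detail coefficients, which quantize and entropy-code cheaply. This is only a 1D illustration of the principle, not the semi-regular mesh wavelet the paper uses; all names are made up.

```python
def haar_forward(signal):
    """One level of the Haar transform: pairwise averages + details."""
    n = len(signal) // 2
    avg = [(signal[2*i] + signal[2*i + 1]) / 2 for i in range(n)]
    det = [(signal[2*i] - signal[2*i + 1]) / 2 for i in range(n)]
    return avg, det

def haar_inverse(avg, det):
    """Exact reconstruction from averages and details."""
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out

smooth = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]   # smooth "geometry"
avg, det = haar_forward(smooth)
# det values are all close to -0.05: small and highly compressible
recovered = haar_inverse(avg, det)
```

Real mesh coders apply the same idea hierarchically over subdivision levels and then quantize and entropy-code the detail coefficients.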

5.
A survey of 3D mesh compression methods   Cited: 2 (self-citations: 0, other citations: 2)
3D mesh compression addresses the mismatch between the large size of 3D mesh data and the limits of 3D graphics-engine throughput and network bandwidth. Classifying methods into static compression and progressive meshes, and taking topology-driven and geometry-driven approaches as the two main threads, this survey reviews and compares the 3D mesh compression methods of the past decade at home and abroad, and outlines future development trends.

6.
刘迎, 刘学慧, 吴恩华. 《软件学报》 (Journal of Software), 2008, 19(4): 1016-1025
An efficient method for compressing the topology of triangle meshes is proposed. Unlike earlier work that applies plain arithmetic or Huffman coding to the topology stream produced by traversing the mesh, it exploits the characteristics of triangle meshes (regular ones in particular) to adaptively improve the prediction accuracy for the symbol currently being coded, achieving highly efficient topology compression. The algorithm first traverses the mesh to obtain an operator sequence; it then applies adaptive arithmetic coding with a variable template to each operator. During coding, a template is computed for the current operator from its preceding operator, the characteristics of the mesh, and the traversal method; within this template, operators predicted with high accuracy receive shorter binary strings. From the current operator's variable template its binary representation is obtained, and each bit of that representation is adaptively arithmetic-coded. The method performs single-resolution, face-based lossless compression of the topology of manifold triangle meshes and achieves excellent results; its compression ratio even exceeds that of the TG algorithm, the best in the topology-compression literature.

7.
Compression and online browsing of 3D graphics data   Cited: 2 (self-citations: 0, other citations: 2)
Although compressed images abound on the Internet, 3D graphics data remains scarce; one important reason is its sheer size, which calls for efficient compression to save storage space and network bandwidth. The paper combines the Edgebreaker connectivity-coding algorithm, parallelogram vertex-coordinate prediction, and arithmetic coding to compress triangle meshes, obtaining compression ratios of about 50x. An "eb" file format for storing the compressed 3D graphics data is then designed, and an IE browser plug-in supporting online browsing of the compressed data is implemented, applicable to 3D web pages, digital museums, and similar applications.

8.
A discussion of truly selective refinement in progressive meshes   Cited: 1 (self-citations: 0, other citations: 1)
Selective refinement of 3D geometric models, built on mesh LOD techniques, locally simplifies or refines a mesh according to some criterion (such as viewpoint position and angle) so as to reproduce the model's detail with as few vertices as possible. Its core is how vertex topology information is recorded and retrieved. The advantage of truly selective refinement is that simplification and refinement operations are confined to a vertex and its neighborhood, avoiding potentially uncontrolled chain reactions. Applying this idea to progressive meshes built with half-edge collapses, a concrete topology-compression data structure is proposed, together with suggestions on the conditions for implementing it. The scheme is compact and well suited to transmitting large 3D geometric models over the Internet and displaying them dynamically on personal computers.

9.
宫法明, 徐涛, 周笑天. 《计算机工程与设计》 (Computer Engineering and Design), 2007, 28(19): 4800-4802, 4830
Single-rate 3D mesh compression compresses a mesh's geometry and connectivity independently. Connectivity compression typically traverses the connectivity under some structural representation and encodes the traversal; geometry compression generally involves three steps: quantization, prediction, and entropy coding. Summarizing this class of algorithms, a unified single-rate compression framework for triangle meshes is proposed and designed, and an experiment is carried out with the Edgebreaker algorithm as an example, implemented on OpenGL and Visual C++ 6.0.

10.
An efficient entropy-coding method generally applicable to mesh topology compression is proposed. Instead of applying plain arithmetic or Huffman coding to the topology stream produced by traversing the mesh, each symbol's Huffman code is computed first, and the Huffman values are then coded with context-based arithmetic coding (using the second-to-last symbol of the already-coded sequence as the context), compressing the mesh topology effectively. Experiments show the entropy-coding method applies to the topology streams produced by a wide range of topology-compression methods, and its results are generally better than the entropy of the topology-stream sequence, which is the best that most topology-compression algorithms achieve on their own.
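The gain from conditioning the probability model on an earlier symbol, as this abstract describes, can be illustrated by comparing the ideal code length of an order-0 adaptive model against a context-conditioned one on a toy topology stream. The stream and alphabet below are made up for illustration; this sketch computes the bound an arithmetic coder approaches, not the coder itself.

```python
from collections import defaultdict
from math import log2

def ideal_code_length(stream, context_of):
    """Total -log2 p(symbol | context) under adaptive, Laplace-smoothed
    frequency counts; an arithmetic coder's output approaches this bound."""
    alphabet = sorted(set(stream))
    counts = defaultdict(lambda: {s: 1 for s in alphabet})  # one model per context
    bits = 0.0
    for i, sym in enumerate(stream):
        model = counts[context_of(stream, i)]
        bits += -log2(model[sym] / sum(model.values()))
        model[sym] += 1
    return bits

stream = list("CR" * 30)  # a highly regular toy topology stream
order0 = ideal_code_length(stream, lambda s, i: None)
ctx    = ideal_code_length(stream, lambda s, i: s[i - 2] if i >= 2 else None)
# ctx is far smaller: here the second-to-last symbol predicts the next one
```

On repetitive topology streams the context model's cost drops well below the order-0 entropy, which is the effect the abstract reports.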

11.
This paper presents a novel algorithm for hierarchical random accessible mesh decompression. Our approach progressively decompresses the requested parts of a mesh without decoding less interesting parts. Previous approaches divided a mesh into independently compressed charts and a base coarse mesh. We propose a novel hierarchical representation of the mesh. We build this representation by using a boundary-based approach to recursively split the mesh in two parts, under the constraint that any of the two resulting submeshes should be reconstructible independently.
In addition to this decomposition technique, we introduce the concepts of opposite vertex and context-dependent numbering. This enables us to achieve significantly better compression ratios than previous work on quad and higher-degree polygonal meshes. Our coder uses about 3 bits per polygon for connectivity and 14 bits per vertex for geometry with 12-bit quantization.

12.
Mesh topology compression is a fundamental algorithm of computer graphics. The method presented here is single-resolution and performs lossless compression of the topology of non-triangular meshes. The algorithm first traverses all polygons of the mesh to obtain an operator sequence; it then Huffman-codes the sequence, and finally applies context-based, variable-length arithmetic coding to the Huffman output to obtain the compressed result. Compared with the highest-ratio algorithms for non-triangular topology compression, it achieves better results. Another notable advantage is improved decoding time and space: the new algorithm can decode each polygon's code as soon as it is received and then discard it, making it particularly suitable for real-time and interactive applications with online transmission and decoding. It can also handle models with holes and handles.

13.
The Alliez-Desbrun (AD) coder has achieved the best compression ratios for multiresolution 2-manifold meshes in the last decade. This paper presents a Bayesian AD coder with better compression ratios in connectivity coding than the original coder, based on a mesh-aware valence-coding scheme for multiresolution meshes. In contrast to the original AD coder, which directly encodes a valence for each decimated vertex, our coder encodes the valence indirectly via its rank in a list sorted by the mesh-aware scores of the possible valences. Experimental results show that the Bayesian AD coder improves connectivity coding by 8.5-36.2% over the original AD coder, despite the fact that only a simple coarse-to-fine step of mesh-aware valence coding is plugged into the original algorithm.
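The rank-based idea can be sketched as follows: instead of coding the valence value itself, code its position in a candidate list sorted by a score. The score function below (favoring valences near 6, the regular-mesh value) and the candidate range are made-up stand-ins for the paper's mesh-aware scores.

```python
CANDIDATES = range(3, 13)                 # assumed plausible valence range

def score(v):
    """Illustrative stand-in for a mesh-aware score: prefer near-regular."""
    return -abs(v - 6)

def valence_to_rank(valence):
    ranked = sorted(CANDIDATES, key=score, reverse=True)
    return ranked.index(valence)

def rank_to_valence(rank):
    ranked = sorted(CANDIDATES, key=score, reverse=True)
    return ranked[rank]

valences = [6, 6, 5, 6, 7, 6, 4, 6]              # a near-regular mesh
ranks = [valence_to_rank(v) for v in valences]   # mostly 0: cheap to code
```

Because good scores concentrate the ranks near zero, the rank stream has a much more skewed distribution than the raw valences, which is exactly what an entropy coder exploits.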

14.
Multiresolution meshes provide an efficient and structured representation of geometric objects. To increase the mesh resolution only at vital parts of the object, adaptive refinement is widely used. We propose a lossless compression scheme for these adaptive structures that exploits the parent-child relationships inherent to the mesh hierarchy. We use the rules that correspond to the adaptive refinement scheme and store bits only where some freedom of choice is left, leading to compact codes that are free of redundancy. Moreover, we extend the coder to sequences of meshes with varying refinement. The connectivity compression ratio of our method exceeds that of state-of-the-art coders by a factor of 2-7. For efficient compression of vertex positions we adapt popular wavelet-based coding schemes to the adaptive triangular and quadrangular cases to demonstrate the compatibility with our method. Akin to state-of-the-art coders, we use a zerotree to encode the resulting coefficients. Using improved context modelling we enhanced the zerotree compression, cutting the overall geometry data rate to 7% below that of the successful Progressive Geometry Compression. More importantly, by exploiting the existing refinement structure we achieve compression factors that are four times greater than those of coders which can handle irregular meshes.

15.
In this paper, we present an adaptive-coding method for generic triangular meshes including both regular and irregular meshes. Though it is also based on iterative octree decomposition of the object space for the original mesh, as in some prior art, it has novelties in the following two aspects. First, it mathematically models the occupancy codes containing only a single "1" bit for accurate initialization of the arithmetic coder at each octree level. Second, it adaptively prioritizes the bits in an occupancy code using a local surface-smoothness measure that is based on triangle areas and therefore mitigates the effect of non-uniform vertex sampling over the surface. As a result, the proposed 3D mesh coder yields outstanding coding performance for both regular and irregular meshes, especially the latter, as demonstrated by the experiments.
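The occupancy codes this abstract refers to are straightforward to compute: each octree cell splits into eight octants, and one bit per octant records whether it contains any vertices. Codes with a single "1" bit (all points in one octant) are the special case the paper models separately. The sketch below uses illustrative names and is not code from the paper.

```python
def occupancy_code(points, center):
    """8-bit code for one octree cell; bit k is set iff some point
    lies in octant k relative to the cell's center."""
    code = 0
    for x, y, z in points:
        k = (x >= center[0]) | ((y >= center[1]) << 1) | ((z >= center[2]) << 2)
        code |= 1 << k
    return code

def has_single_one(code):
    """True iff exactly one bit of the occupancy code is set."""
    return code != 0 and code & (code - 1) == 0

pts = [(0.1, 0.2, 0.3), (0.9, 0.8, 0.1)]
c = occupancy_code(pts, (0.5, 0.5, 0.5))   # bits 0 and 3 set -> 9
```

Recursing on each non-empty octant and entropy-coding the resulting 8-bit codes is the core of octree-based geometry coders.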

16.
Adaptive arithmetic coding is an efficient entropy-coding method. This paper describes its working principle and implementation in some detail, along with its application in JPEG2000; it analyzes the process of applying adaptive arithmetic coding to image compression and discusses several technical problems that must be solved in the implementation.

17.
We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream also documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder has only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes—even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k hexahedra “blade” mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB) with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.
