Similar Documents
18 similar documents found (search time: 156 ms)
1.
刘迎, 刘学慧, 吴恩华. 《软件学报》 (Journal of Software), 2008, 19(4): 1016-1025
An efficient method is proposed for compressing the topology (connectivity) information of triangle meshes. Unlike previous approaches, which simply apply arithmetic coding or Huffman coding to the topology stream generated by traversing the mesh, the method exploits the characteristics of triangle meshes (regular meshes in particular) to adaptively raise the prediction accuracy for the symbol currently being encoded. The algorithm first traverses the mesh to obtain an operator sequence, then applies adaptive arithmetic coding with a variable template to each operator. During encoding, a template for the current operator is computed from the preceding operator, the characteristics of the mesh, and the traversal method; within this template, operators with high predicted probability are assigned shorter binary strings. The operator's binary representation is read off its variable template, and each bit of that representation is encoded with adaptive arithmetic coding. The method performs single-resolution, face-based, lossless compression of the connectivity of manifold triangle meshes and yields excellent results, with compression ratios exceeding even those of the TG algorithm, the best-performing algorithm in the connectivity compression literature.
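The key mechanism above, predicting the current operator from the one before it so that an adaptive coder spends fewer bits on well-predicted symbols, can be sketched in a few lines. This is not the paper's template scheme: the order-1 context model, the Laplace smoothing, and the Edgebreaker-style C/R opcode stream are illustrative assumptions, and the ideal code length -log2(p) stands in for a real arithmetic coder.

```python
import math
from collections import defaultdict

def adaptive_order1_bits(symbols):
    """Ideal code length of an opcode stream under an order-1 adaptive
    model: the previous symbol is the context, counts start at 1
    (Laplace smoothing) and are updated after every symbol, so the
    model sharpens as encoding proceeds, much as in adaptive
    arithmetic coding."""
    alphabet = sorted(set(symbols))
    counts = defaultdict(lambda: {s: 1 for s in alphabet})
    total_bits, prev = 0.0, None
    for s in symbols:
        ctx = counts[prev]
        p = ctx[s] / sum(ctx.values())
        total_bits += -math.log2(p)   # bits an arithmetic coder would spend
        ctx[s] += 1                   # adapt the model
        prev = s
    return total_bits

# Regular meshes produce long runs of the same opcode, which the model
# quickly learns to predict cheaply:
stream = list("CCCCCCRCCCCCCRCCCCCCR")
print(adaptive_order1_bits(stream) / len(stream), "bits per opcode")
```

On the regular stream above the average cost falls well below a fixed-length encoding, which is the effect the paper's templates are designed to amplify.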

2.
Existing mesh compression algorithms must traverse the mesh in some order before encoding vertex and connectivity information, and must build temporary data structures that support topological queries, which makes them ill-suited to large meshes. This paper abandons the traditional organization of mesh data altogether, presents a new layout in which vertex and connectivity information are interleaved, and proposes a progressive encoding algorithm that processes the mesh data in the order it is read. Experimental results show that the algorithm outperforms traditional compression methods in both processing speed and memory footprint.

3.
张文博, 刘波, 张鸿宾. 《自动化学报》 (Acta Automatica Sinica), 2007, 33(10): 1023-1028
A rate-distortion optimized progressive geometry compression method is proposed. Exploiting the uneven spatial distribution of detail in a 3D model, the method partitions the detail information into blocks and encodes each block independently; the per-block bitstreams are then assembled into the final stream so as to minimize the geometric distortion of the reconstructed mesh at a given bit rate, allowing limited network bandwidth to be allocated preferentially to detail-rich blocks during progressive transmission. Experiments show that, compared with progressive geometry compression (PGC), the method improves the peak signal-to-noise ratio (PSNR) of the reconstructed mesh by about 2.25 dB at low bit rates. The method also offers a new approach to region-of-interest coding for 3D meshes.

4.
Mesh connectivity compression is a fundamental algorithm in computer graphics. The method in this paper is single-resolution and performs lossless compression of the connectivity of non-triangular (polygonal) meshes. The algorithm first traverses all polygons of the mesh to obtain an operator sequence, then Huffman-codes the sequence, and finally applies context-based arithmetic coding with variable context length to the Huffman output to produce the compressed result. The algorithm yields better results than the highest-ratio algorithms previously reported for polygonal connectivity compression. Another notable advantage lies in decoding time and space: the decoder can decode each polygon as soon as its code is received and then discard that code, which makes the algorithm particularly suitable for real-time and interactive applications involving online transmission and decoding. The algorithm also handles models with holes and handles.

5.
To achieve good triangle-mesh compression performance, a non-progressive compression method based on the wavelet transform is proposed. The method first removes most of the connectivity information by remeshing, then exploits the strong decorrelating power of the wavelet transform to compress the geometry. After remeshing and wavelet transformation, all wavelet coefficients are scanned in a fixed order into a sequence, which is then quantized and arithmetic-coded. In addition, for the adaptive semi-regular sampling pattern produced by remeshing, an adaptive subdivision-information coding algorithm is designed so that the decoder knows at which vertex each wavelet coefficient should be placed. Experiments show that on complex meshes acquired by 3D scanners the method achieves clearly better rate-distortion performance than the Edgebreaker method; at 10-bit quantization, the compression factor is around 200, more than twice that of Edgebreaker.
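As a toy illustration of the transform-plus-quantization stage of such a pipeline (a one-level 1D Haar transform with uniform quantization, not the semi-regular mesh wavelet the paper uses; the signal and step size are invented for the example):

```python
def haar_forward(x):
    """One level of the Haar wavelet: pairwise averages (low-pass)
    and pairwise half-differences (high-pass). len(x) must be even."""
    lo = [(x[2*i] + x[2*i+1]) / 2 for i in range(len(x) // 2)]
    hi = [(x[2*i] - x[2*i+1]) / 2 for i in range(len(x) // 2)]
    return lo, hi

def haar_inverse(lo, hi):
    x = []
    for a, d in zip(lo, hi):
        x.extend([a + d, a - d])
    return x

def quantize(coeffs, step):
    return [round(c / step) for c in coeffs]

def dequantize(q, step):
    return [v * step for v in q]

# Smooth geometry -> small detail (hi) coefficients -> cheap to code.
signal = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
lo, hi = haar_forward(signal)
step = 0.05
rec = haar_inverse(dequantize(quantize(lo, step), step),
                   dequantize(quantize(hi, step), step))
print("max reconstruction error:", max(abs(a - b) for a, b in zip(signal, rec)))
```

For a smooth signal the high-pass coefficients are small and quantize to a narrow range of integers, which is exactly the decorrelation the abstract relies on.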

6.
Representative existing triangle-mesh compression methods compress connectivity by traversing the mesh in a fixed manner, and predict each vertex's coordinates from neighboring vertices on the traversal path using the parallelogram rule to compress geometry. Their main drawbacks are that parallelogram prediction is not very accurate and that it is constrained by the chosen traversal. This paper proposes a new geometry compression method. During encoding, each vertex's coordinates are predicted from its full neighborhood, which is more accurate than parallelogram prediction and independent of the traversal. During decoding, a preconditioned conjugate gradient method solves the sparse linear system formed by the prediction equations of all vertices, recovering all vertex coordinates simultaneously. A progressive decoding scheme is used to reduce the user's waiting time while the sparse system is being solved.
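The parallelogram rule that this method improves on is simple to state: the vertex v4 across edge (v1, v2) from v3 is guessed as v1 + v2 - v3, the point completing the parallelogram, and only the prediction residual is encoded. A minimal sketch (the coordinates are invented):

```python
def parallelogram_predict(v1, v2, v3):
    """Classic parallelogram rule: predict the vertex of the triangle
    adjacent across edge (v1, v2) as v1 + v2 - v3."""
    return tuple(a + b - c for a, b, c in zip(v1, v2, v3))

def residual(actual, predicted):
    """The encoder stores only this (typically small) correction."""
    return tuple(a - p for a, p in zip(actual, predicted))

v1, v2, v3 = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 0.0)
v4 = (1.1, 0.9, 0.05)                  # the real vertex, nearly planar
pred = parallelogram_predict(v1, v2, v3)
print(pred)                            # (1.0, 1.0, 0.0)
print(residual(v4, pred))
```

On curved or irregular regions the residuals grow, which is the inaccuracy the neighborhood prediction in this abstract targets.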

7.
Geometric Compression of General Polygonal Meshes
A general-purpose geometric compression algorithm for arbitrary polygonal meshes is proposed. Since most existing 3D connectivity compression algorithms apply only to triangle meshes, the algorithm effectively generalizes prior work so that meshes containing polygons with any number of edges can be compressed. In addition, exploiting the fact that the vertices of any single polygon in the mesh are coplanar, a vertex-coordinate compression scheme is proposed; combined with the connectivity compression above, it significantly reduces the bandwidth needed to transmit general polygonal mesh data over a network. Finally, the output stream produced by encoding is further compressed with a hybrid of run-length coding and arithmetic coding, raising the compression ratio further.

8.
A Survey of 3D Mesh Compression Methods
To address the conflict between the large data volume of 3D meshes on the one hand and the processing power of 3D graphics engines and network bandwidth on the other, 3D mesh compression techniques offer a family of solutions. This survey classifies methods along two axes, static compression versus progressive meshes, and follows two main threads, connectivity-driven and geometry-driven, to summarize and compare the 3D mesh compression methods of the past decade at home and abroad, and outlines future trends.

9.
An efficient entropy coding method that applies to mesh connectivity compression in general is proposed. Unlike previous approaches, which simply apply arithmetic coding or Huffman coding to the topology stream generated by mesh traversal, the method first computes the Huffman code of each symbol in the stream and then encodes the Huffman values with context-based arithmetic coding (using the second-to-last symbol of the already-encoded sequence as the context), thereby compressing the mesh connectivity effectively. Experimental results show that the entropy coder is applicable to the topology streams produced by a wide range of connectivity compression methods, and that it generally compresses below the entropy of the topology stream, which is the best compression most individual topology compression algorithms achieve on their own.

10.
To further improve the connectivity compression ratio of triangle meshes, an efficient lossless connectivity compression algorithm is proposed. Unlike the traversal orders used by existing connectivity compression algorithms, this algorithm compresses the mesh face by face along a Hamiltonian cycle, so the connectivity of the original mesh can be represented with only four operators (HETS), lowering the entropy of the operator sequence. In addition, exploiting the correlations between operators, the operators are entropy-coded in combined pairs, shortening the operator sequence. Experiments show that, compared with current connectivity compression algorithms, the algorithm achieves a substantially lower (better) compression rate across a variety of triangle mesh models.
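The benefit of coding operators in pairs can be seen with order-0 entropies: correlated neighbours make the pair alphabet skewed, so one entropy symbol covers two operators for less than twice the cost. A sketch with an invented HETS-style stream (the real operator distribution depends on the mesh):

```python
import math
from collections import Counter

def entropy_bits(seq):
    """Empirical (order-0) entropy in bits per element."""
    counts = Counter(seq)
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Four HETS-style opcodes with strong pairwise correlation (invented):
ops = list("HEHEHTHT" * 8 + "SSHE")
pairs = [ops[i] + ops[i + 1] for i in range(0, len(ops) - 1, 2)]

per_op_single = entropy_bits(ops)          # bits per opcode, coded singly
per_op_paired = entropy_bits(pairs) / 2    # one pair symbol = two opcodes
print(per_op_single, per_op_paired)
```

Here the paired coding costs fewer bits per opcode because "H is almost always followed by E or T" is information the single-symbol model cannot use.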

11.
An efficient face-based connectivity compression algorithm for triangle meshes is proposed. It is a single-resolution lossless algorithm that improves Edgebreaker in two places. In the mesh traversal stage, an adaptive traversal method keeps the split operations, which badly hurt the compression ratio, as rare as possible. In the entropy coding stage, a template is designed for each operator produced by the traversal; the template determines the operator's binary representation, which is then compressed with adaptive arithmetic coding to give the final result. Compared with the compression ratios of the best face-based algorithms in the mesh connectivity compression literature, the ratios achieved by this algorithm are considerably higher.

12.
Angle-Analyzer: A Triangle-Quad Mesh Codec

13.
This paper proposes a novel and efficient algorithm for single-rate compression of triangle meshes. The input mesh is traversed along its greedy Hamiltonian cycle in O(n) time. Based on the Hamiltonian cycle, the mesh connectivity can be encoded by a face label sequence with low entropy containing only four kinds of labels (HETS), and the transmission delay at the decoding end that frequently occurs in conventional single-rate approaches is markedly reduced. The mesh geometry is compressed with a global coordinate concentration strategy and a novel local parallelogram error prediction scheme. Experiments on realistic 3D models demonstrate the effectiveness of our approach in terms of compression rates and run-time performance compared to the leading single-rate and progressive mesh compression methods.

14.
Connectivity compression techniques for very large 3D triangle meshes are based on clever traversals of the graph representing the mesh, so as to avoid repeated references to vertices. In this paper we present a new algorithm for compressing large 3D triangle meshes through the successive conquest of triangle fans. The connectivity of vertices in a fan is implied. As each fan is traversed, the current mesh boundary is advanced by the fan-front. The process continues recursively until the entire mesh is traversed. The mesh is then compactly encoded as a sequence of fan configuration codes. The fan configuration code comprehensively encodes the connectivity of the fan with the rest of the mesh. There is no need for any further special operators such as split codes and additional vertex offsets. The number of fans is typically one-fourth of the total number of triangles. Only a few of the fan configurations occur with high frequency, enabling excellent connectivity compression using range encoding. A simple implementation shows significant improvements, on average, in bit-rate per vertex compared to earlier reported techniques.

15.
In certain practical situations, the connectivity of a triangle mesh needs to be transmitted or stored given a fixed set of 3D vertices that is known at both ends of the transaction (encoder/decoder). This task is different from a typical mesh compression scenario, in which the connectivity and geometry (vertex positions) are encoded either simultaneously or in reversed order (connectivity first), usually exploiting the freedom in vertex/triangle re-indexation. Previously proposed algorithms for encoding the connectivity for a known geometry were based on a canonical mesh traversal and predicting which vertex is to be connected to the part of the mesh that is already processed. In this paper, we take this scheme a step further by replacing the fixed traversal with a priority queue of open expansion gates, out of which in each step a gate is selected that has the most certain prediction, that is one in which there is a candidate vertex that exhibits the largest advantage in comparison with other possible candidates, according to a carefully designed quality metric. Numerical experiments demonstrate that this improvement leads to a substantial reduction in the required data rate in comparison with the state of the art.
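The gate-selection idea, always expanding the gate whose prediction is most certain, maps naturally onto a priority queue. A minimal sketch with hypothetical gate names and certainty scores (the paper's quality metric is far more elaborate than a single number):

```python
import heapq

def most_certain_first(gates):
    """Process open expansion gates in order of prediction certainty
    rather than in a fixed traversal order. Each gate carries a score,
    e.g. the advantage of its best candidate vertex over the runner-up
    (larger = more certain). heapq is a min-heap, so scores are negated."""
    heap = [(-score, name) for name, score in gates]
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)   # in a real codec: encode this gate's vertex
    return order

# Hypothetical gates and scores; g2's prediction is the most certain:
gates = [("g1", 0.2), ("g2", 0.9), ("g3", 0.5)]
print(most_certain_first(gates))   # ['g2', 'g3', 'g1']
```

In the actual algorithm new gates would be pushed onto the queue as the mesh front advances, and scores would be recomputed as the processed region grows.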

16.
This paper proposes a novel scheme for 3D model compression based on mesh segmentation using multiple principal plane analysis. The algorithm first performs a mesh segmentation scheme, based on a fusion of the well-known k-means clustering and the proposed principal plane analysis, to separate the input 3D mesh into a set of disjoint polygonal regions. A boundary indexing scheme for the whole object is created by assembling the local regions. Finally, a triangle traversal scheme is proposed to encode the connectivity and geometry information simultaneously for every patch under the guidance of the boundary indexing scheme. Simulation results demonstrate that the proposed algorithm achieves good performance in terms of compression rate and reconstruction quality.
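The segmentation stage builds on k-means clustering; a plain k-means over 3D points, without the principal-plane fusion the paper adds, might look like this (the point set is invented):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: assign each point to its nearest centroid, then
    move each centroid to the mean of its cluster, and repeat."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            d2 = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d2.index(min(d2))].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # keep the old centroid if a cluster goes empty
                centroids[j] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return centroids, clusters

# Two well-separated groups of points split cleanly into two clusters:
pts = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (5.0, 5.0, 5.0), (5.1, 5.0, 5.0)]
cents, cls = kmeans(pts, 2)
print(sorted(len(c) for c in cls))   # [2, 2]
```

In the paper the clustered entities would be mesh elements and the distance measure would incorporate the principal-plane analysis, but the assign-then-update loop is the same.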

17.
We present a new, single-rate method for compressing the connectivity information of a connected 2-manifold triangle mesh with or without boundary. Traditional compression schemes interleave geometry and connectivity coding, and are thus typically unable to utilize information from vertices (mesh regions) they have not yet processed. With the advent of competitive point cloud compression schemes, it has become feasible to develop separate connectivity encoding schemes that can exploit complete, global vertex position information to improve performance. Our scheme demonstrates the utility of this separation of vertex and connectivity coding. By traversing the mesh edges in a consistent fashion, and using global vertex information, we can predict the position of the vertex that completes the unprocessed triangle attached to a given edge. We then rank the vertices in the neighborhood of this predicted position by their Euclidean distance. The distance rank of the correct closing vertex is stored. Typically, these rank values are small, and the set of rank values thus possesses low entropy and compresses very well. The sequence of rank values is all that is required to represent the mesh connectivity; no special split or merge codes are necessary. Results indicate improvements over traditional valence-based schemes for more regular triangulations. Highly irregular triangulations or those containing a large number of slivers are not well modelled by our current set of predictors and may yield poorer connectivity compression rates than those provided by the best valence-based schemes.
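The rank-coding step is easy to sketch: candidate vertices are sorted by distance to the predicted position, and only the rank of the true closing vertex is stored. The predicted position and candidate list below are invented for illustration:

```python
def distance_rank(predicted, candidates, actual_index):
    """Rank candidate vertices by squared Euclidean distance to the
    predicted position and return the rank of the vertex that actually
    closes the triangle (0 = the prediction's nearest neighbour)."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    order = sorted(range(len(candidates)),
                   key=lambda i: d2(predicted, candidates[i]))
    return order.index(actual_index)

# With a good prediction the true vertex usually gets rank 0 or 1,
# so the stream of ranks has very low entropy:
pred = (1.0, 1.0, 0.0)
verts = [(5.0, 5.0, 0.0), (1.05, 0.95, 0.0), (0.0, 3.0, 1.0)]
print(distance_rank(pred, verts, actual_index=1))   # 0
```

The better the position predictor, the more the rank distribution concentrates at zero, which is what makes the rank sequence compress so well.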

18.
A rate-distortion (R-D) optimized progressive coding algorithm for three-dimensional (3D) meshes is proposed in this work. We propose the prioritized gate selection and the curvature prediction to improve the connectivity and geometry compression performance, respectively. Furthermore, based on the bit plane coding, we develop a progressive transmission method, which improves the qualities of intermediate meshes as well as that of the fully reconstructed mesh, and extend it to the view-dependent transmission method. Experiments on various 3D mesh models show that the proposed algorithm provides significantly better compression performance than the conventional algorithms, while supporting progressive reconstruction efficiently.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号