Similar Documents
3 similar documents found
1.
Laplacian mesh compression, also known as high-pass mesh coding, is a popular technique for efficiently storing both static and dynamic triangle meshes, and it gained further recognition with the advent of perceptual mesh distortion evaluation metrics. Currently, the usual rule of thumb that drives the choice of a mesh compression algorithm is whether accuracy in absolute scale is required: Laplacian mesh encoding is chosen when perceptual quality is the main objective, while other techniques provide better results in terms of mechanistic error measures such as mean squared error. In this work, we present a modification of the Laplacian mesh encoding algorithm that preserves its benefits while substantially reducing the resulting absolute error. Our approach is based on analyzing the reconstruction stage and modifying the quantization of differential coordinates, so that the decoded result stays close to the input even in areas that are distant from anchor points. In our approach, we avoid solving an overdetermined system of linear equations and thus reduce data redundancy, improve conditioning, and achieve faster processing. Our approach can be applied directly to both static and dynamic mesh compression, and we provide quantitative results comparing it with state-of-the-art methods.
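For orientation, here is a minimal Python sketch of the standard high-pass coding pipeline this paper builds on: a uniform-weight Laplacian, uniform quantization of the differential coordinates, and least-squares reconstruction against soft anchor constraints. It shows the baseline with the overdetermined solve, not the proposed modification that avoids it; all names, weights, and the quantization step are illustrative.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def uniform_laplacian(n_verts, faces):
    """Uniform-weight Laplacian L = I - D^{-1} A from triangle connectivity."""
    i = np.concatenate([faces[:, 0], faces[:, 1], faces[:, 2],
                        faces[:, 1], faces[:, 2], faces[:, 0]])
    j = np.concatenate([faces[:, 1], faces[:, 2], faces[:, 0],
                        faces[:, 0], faces[:, 1], faces[:, 2]])
    A = sp.coo_matrix((np.ones(len(i)), (i, j)), shape=(n_verts, n_verts)).tocsr()
    A.data[:] = 1.0                                  # collapse duplicate edge entries
    deg = np.asarray(A.sum(axis=1)).ravel()
    return sp.identity(n_verts) - sp.diags(1.0 / np.maximum(deg, 1)) @ A

def encode(verts, faces, step=1e-3):
    delta = uniform_laplacian(len(verts), faces) @ verts   # differential coordinates
    return np.round(delta / step).astype(np.int32)         # uniform quantization

def decode(qdelta, faces, anchors, anchor_pos, step=1e-3, w=1.0):
    n = qdelta.shape[0]
    L = uniform_laplacian(n, faces)
    # Soft positional constraints at the anchor vertices, stacked under L,
    # give the classic overdetermined least-squares reconstruction.
    C = sp.coo_matrix((np.full(len(anchors), w),
                       (np.arange(len(anchors)), anchors)),
                      shape=(len(anchors), n))
    M = sp.vstack([L, C]).tocsr()
    rhs = M.T @ np.vstack([qdelta * step, w * np.asarray(anchor_pos)])
    solve = spla.factorized((M.T @ M).tocsc())       # normal equations, LU-factorized
    return np.column_stack([solve(rhs[:, k]) for k in range(rhs.shape[1])])
```

Reconstruction error in this baseline grows with distance from the anchors, which is exactly the behavior the paper's modified quantization targets.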

2.
Lossy compression of motion capture data can alleviate the problems of efficient storage and transmission by exploiting the redundancy and superfluous precision of the data. When deciding how much distortion is acceptable, perceptual issues have to be taken into account. Current state-of-the-art methods reduce the data rate required for high-quality storage of motion capture data using various techniques. Most of them, however, do not use the common tools of general data compression, such as the method of Lagrange multipliers, and thus obtain sub-optimal results, making a fair comparison of their performance difficult. In this paper, we present a general preprocessing step based on Lagrange multipliers, which makes it possible to rigorously adjust the precision of each degree of freedom of the input data according to how much influence that degree of freedom has on the overall distortion. We then present a simple compression method based on Principal Component Analysis which, in combination with the proposed preprocessing, achieves significantly better results than current state-of-the-art methods. It allows optimization with respect to various distortion metrics, and we discuss the choice of metric in two common but distinct scenarios, proposing a perceptually oriented comparison metric based on the relation of the problem at hand to the problem of compression of dynamic meshes.
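As a rough illustration, not the paper's exact algorithm, the sketch below pairs a per-degree-of-freedom sensitivity scaling (standing in for the Lagrange-multiplier-derived precision allocation: scaling equalizes the marginal distortion of a unit quantization error across DOFs) with a plain PCA coder. The sensitivity weights, component count, and quantization step are assumed inputs.

```python
import numpy as np

def sensitivity_scale(X, sens):
    """X: frames-by-DOF motion capture matrix; sens: per-DOF weights estimating
    how much a unit error in that DOF contributes to the chosen distortion
    metric. After scaling, uniform per-coordinate precision approximates the
    Lagrange-multiplier optimality condition."""
    return X * sens

def pca_encode(Xw, n_components, step):
    mean = Xw.mean(axis=0)
    # PCA via thin SVD of the centered data; keep the leading components.
    _, _, Vt = np.linalg.svd(Xw - mean, full_matrices=False)
    basis = Vt[:n_components]
    coeffs = (Xw - mean) @ basis.T
    return np.round(coeffs / step).astype(np.int32), basis, mean

def pca_decode(q, basis, mean, sens, step):
    # Dequantize, project back, undo centering and the sensitivity scaling.
    return ((q * step) @ basis + mean) / sens
```

The actual preprocessing in the paper derives the per-DOF precision from a rate-distortion optimization; the fixed `sens` vector here is only a placeholder for its output.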

3.
Most surfaces, be they from fine-art artifacts or mechanical objects, are characterized by strong self-similarity. This property stems from the natural structure of objects but also from the fabrication processes: the regularity of the sculpting technique, or of the machine tool. In this paper, we propose to exploit the self-similarity of the underlying shapes for compressing point cloud surfaces, which can contain millions of points at a very high precision. Our approach locally resamples the point cloud in order to highlight the self-similarity of the shape, while remaining consistent with the original shape and the scanner precision. It then uses this self-similarity to create an ad hoc dictionary on which the local neighborhoods are sparsely represented, allowing for a lightweight representation of the total surface. We demonstrate the validity of our approach on several point clouds from fine-art and mechanical objects, as well as an urban scene. In addition, we show that our approach also filters out noise whose magnitude is smaller than the scanner precision.
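A minimal sketch of the dictionary-coding idea, assuming local k-nearest-neighbor patches are used directly as patch vectors rather than the paper's self-similarity-aware resampling; the scikit-learn classes are real, but the pipeline and all parameter choices are illustrative.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.decomposition import MiniBatchDictionaryLearning

def local_patches(points, k=64):
    """Gather k-nearest-neighbor patches, centered on each seed point."""
    nn = NearestNeighbors(n_neighbors=k).fit(points)
    _, idx = nn.kneighbors(points)
    patches = points[idx] - points[:, None, :]   # offsets in the local frame
    return patches.reshape(len(points), -1)      # flatten to k*3 vectors

def encode(points, n_atoms=128, sparsity=5):
    P = local_patches(points)
    dl = MiniBatchDictionaryLearning(n_components=n_atoms,
                                     transform_algorithm='omp',
                                     transform_n_nonzero_coefs=sparsity)
    codes = dl.fit(P).transform(P)               # sparse coefficients per patch
    return codes, dl.components_                 # store codes + learned dictionary
```

Decoding is simply `codes @ dictionary` followed by un-centering; because each patch is approximated by a few dictionary atoms, sub-precision noise that no atom captures is implicitly filtered out, mirroring the denoising effect reported in the abstract.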
