Similar Documents (20 results)
1.
Development of geometry data compression techniques in recent years has been limited by the lack of a metric with proven correlation with human perception of mesh distortion. Many algorithms have been proposed, but usually the aim has been to minimise mean squared error or measures derived from it. In the field of dynamic mesh compression, the situation has changed with the recent proposal of the STED metric, which has been shown to capture the human perception of mesh distortion much better than previous metrics. In this paper we show how existing algorithms can be steered to provide optimal results with respect to this metric, and we propose a novel dynamic mesh compression algorithm, based on trajectory space PCA and Laplacian coordinates, specifically designed to minimise the STED error. Our experiments show that the proposed algorithm reduces the required data rate by up to 50% for the same STED error.
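As a rough illustration of the trajectory-space PCA component, a few lines of NumPy suffice; this is a generic sketch, not the paper's implementation, and the basis size `k` is an assumed tuning parameter:

```python
# Minimal sketch of trajectory-space PCA for dynamic mesh compression.
# `anim` has shape (F, V, 3): F frames, V vertices.
import numpy as np

def compress_trajectories(anim, k):
    F, V, _ = anim.shape
    T = anim.transpose(1, 0, 2).reshape(V, 3 * F)   # one row per vertex trajectory
    mean = T.mean(axis=0)
    U, s, Vt = np.linalg.svd(T - mean, full_matrices=False)
    coeffs = U[:, :k] * s[:k]    # V x k coefficients to store
    basis = Vt[:k]               # k x 3F trajectory basis
    return coeffs, basis, mean

def decompress_trajectories(coeffs, basis, mean, F):
    T = coeffs @ basis + mean
    return T.reshape(-1, F, 3).transpose(1, 0, 2)   # back to (F, V, 3)
```

Only the coefficients, the small basis, and the mean need to be stored, which is where the rate reduction comes from.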

2.
This paper addresses the problem of representing dynamic 3D meshes in a compact way, so that they can be stored and transmitted efficiently. We focus on sequences of triangle meshes with shared connectivity, avoiding the need for a skinning structure. Our method first computes an average mesh of the whole sequence in edge shape space. A discrete geometric Laplacian of this average surface is then used to encode the coefficients that describe the trajectories of the mesh vertices. Optionally, a novel spatio-temporal predictor may be applied to the trajectories to further improve the compression rate. We demonstrate that our approach outperforms the current state of the art in terms of low data rate at a given perceived distortion, as measured by the STED and KG error metrics.
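A minimal sketch of the encoding idea, assuming a uniform graph Laplacian built from the shared connectivity; the paper uses a discrete geometric Laplacian of the average surface, for which this is a simplified stand-in:

```python
# Encoding vertex data as Laplacian (delta) coordinates concentrates the
# values around zero, which makes them cheap to quantize and entropy-code.
import numpy as np
import scipy.sparse as sp

def uniform_laplacian(n_verts, edges):
    i, j = np.asarray(edges).T
    W = sp.coo_matrix((np.ones(len(edges)), (i, j)), shape=(n_verts, n_verts))
    W = W + W.T                                      # symmetric adjacency
    D = sp.diags(np.asarray(W.sum(axis=1)).ravel())
    return (D - W).tocsr()

# Usage sketch: deltas = uniform_laplacian(V, edges) @ trajectories,
# where `trajectories` stacks each vertex's per-frame coefficients row-wise.
```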

3.
We present a multi-view stereo reconstruction technique that directly produces a complete high-fidelity head model with consistent facial mesh topology. While existing techniques decouple shape estimation and facial tracking, our framework jointly optimizes for stereo constraints and consistent mesh parameterization. Our method is therefore free from drift and fully parallelizable for dynamic facial performance capture. We produce highly detailed facial geometries with artist-quality UV parameterization, including secondary elements such as eyeballs, mouth pockets, nostrils, and the back of the head. Our approach consists of deforming a common template model to match multi-view input images of the subject, while satisfying cross-view, cross-subject, and cross-pose consistencies using a combination of 2D landmark detection, optical flow, and surface and volumetric Laplacian regularization. Since the flow is never computed between frames, our method is trivially parallelized by processing each frame independently. Accurate rigid head pose is extracted using a PCA-based dimension reduction and denoising scheme. We demonstrate high-fidelity performance capture results with challenging head motion and complex facial expressions around eye and mouth regions. While the quality of our results is on par with the current state-of-the-art, our approach can be fully parallelized, does not suffer from drift, and produces face models with production-quality mesh topologies.

4.
We propose a lossless, single-rate triangle mesh topology codec tailored for fast data-parallel GPU decompression. Our compression scheme coherently orders generalized triangle strips in memory. To unpack generalized triangle strips efficiently, we propose a novel parallel and scalable algorithm. We order vertices coherently to further improve our compression scheme. We use a variable bit-length code for additional compression benefits, for which we propose a scalable data-parallel decompression algorithm. For a set of standard benchmark models, we obtain (min: 3.7, med: 4.6, max: 7.6) bits per triangle. Our CUDA decompression requires only about 15% of the time it takes to render the model, even with a simple shader.
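For reference, unpacking a triangle strip sequentially can be sketched as follows; the paper's contribution is a data-parallel GPU version of this loop, and the `RESTART` sentinel is an assumed encoding detail:

```python
# Sequential reference decoder for triangle strips: each new index forms a
# triangle with the previous two, with alternating winding; RESTART begins
# a new strip.
RESTART = -1

def unpack_strips(indices):
    tris, a, b, flip = [], None, None, False
    for v in indices:
        if v == RESTART:
            a, b, flip = None, None, False
            continue
        if a is not None and b is not None:
            tris.append((a, v, b) if flip else (a, b, v))
            flip = not flip
        a, b = b, v
    return tris

# unpack_strips([0, 1, 2, 3]) -> [(0, 1, 2), (1, 3, 2)]
```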

5.
Reconstructing a surface mesh from a set of discrete point samples is a fundamental problem in geometric modeling. It becomes challenging in the presence of 'singularities' such as boundaries, sharp features, and non-manifolds. Some current research on reconstruction has addressed handling individual kinds of singularities, but a unified approach that handles them all is missing. In this paper we allow the presence of various singularities by requiring that the sampled object is a collection of smooth surface patches with boundaries that can meet or intersect. Our algorithm first identifies and reconstructs the features where singularities occur. Next, it reconstructs the surface patches containing these feature curves. The identification and reconstruction of feature curves are achieved by a novel combination of the Gaussian-weighted graph Laplacian and Reeb graphs. The global reconstruction is achieved by a method akin to the well-known Cocone reconstruction, but with a weighted Delaunay triangulation that allows protecting the feature samples with balls. We provide various experimental results to demonstrate the effectiveness of our feature-preserving singular surface reconstruction algorithm.
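The Gaussian-weighted graph Laplacian that drives the feature identification can be sketched on a k-nearest-neighbour graph; `k` and `sigma` here are assumed tuning parameters, not values from the paper:

```python
# Gaussian-weighted graph Laplacian over a k-NN graph of the point samples.
import numpy as np
import scipy.sparse as sp
from scipy.spatial import cKDTree

def gaussian_graph_laplacian(points, k=10, sigma=0.05):
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k + 1)   # first hit is the point itself
    n = len(points)
    rows = np.repeat(np.arange(n), k)
    cols = idx[:, 1:].ravel()
    w = np.exp(-dists[:, 1:].ravel() ** 2 / (2 * sigma ** 2))
    W = sp.coo_matrix((w, (rows, cols)), shape=(n, n))
    W = 0.5 * (W + W.T)                        # symmetrize directed k-NN edges
    D = sp.diags(np.asarray(W.sum(axis=1)).ravel())
    return D - W
```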

6.
Many processing operations, such as compression, watermarking and remeshing, are nowadays applied to 3D meshes. These processes are mostly driven and/or evaluated using simple distortion measures like the Hausdorff distance and the root mean square error; however, these measures do not correlate with human visual perception, while the visual quality of the processed meshes is a crucial issue. In that context we introduce a full-reference 3D mesh quality metric; this metric can compare two meshes with arbitrary connectivity or sampling density and produces a score that predicts the distortion visibility between them; a visual distortion map is also created. Our metric outperforms its counterparts from the state of the art in terms of correlation with mean opinion scores coming from subjective experiments on three existing databases. Additionally, we present an application of this new metric to the improvement of rate-distortion evaluation of recent progressive compression algorithms.

7.
This paper presents a novel interactive approach for adding depth information to hand-drawn cartoon images and animations. In comparison to previous depth assignment techniques, our solution requires minimal user effort and enables the creation of consistent pop-ups in a matter of seconds. Inspired by perceptual studies, we formulate a custom-tailored optimization framework that tries to mimic the way a human reconstructs depth information from a single image. Its key advantage is that it completely avoids inputs requiring knowledge of absolute depth and instead uses a set of sparse depth (in)equalities that are much easier to specify. Since these constraints lead to a solution based on quadratic programming that is time-consuming to evaluate, we propose a simple approximate algorithm yielding similar results with much lower computational overhead. We demonstrate its usefulness in the context of a cartoon animation production pipeline, including applications such as enhancement, registration, composition, 3D modelling and stereoscopic display.
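A toy version of depth assignment from sparse (in)equalities, using a generic penalty minimization in place of the paper's quadratic program; the `margin` and `smooth` weights are assumed:

```python
# Assign a depth value per region from sparse ordering constraints.
import numpy as np
from scipy.optimize import minimize

def assign_depths(n, less_than, equal, margin=1.0, smooth=1e-3):
    def energy(d):
        e = smooth * float(d @ d)                  # keep the solution regular
        for i, j in less_than:                     # want d[i] + margin <= d[j]
            e += max(0.0, d[i] + margin - d[j]) ** 2
        for i, j in equal:
            e += (d[i] - d[j]) ** 2
        return e
    return minimize(energy, np.zeros(n)).x

# e.g. region 0 in front of 1, 1 in front of 2, region 3 at the depth of 1:
depths = assign_depths(4, less_than=[(0, 1), (1, 2)], equal=[(3, 1)])
```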

8.
Raster-based topographic maps are commonly used in geoinformation systems to overlay geographic entities on top of digital terrain models. Using compressed texture formats to encode topographic maps reduces latency when visualizing large geographic datasets. Topographic maps combine high-frequency content with large uniform regions, making current compressed texture formats inappropriate for encoding them. In this paper we present a method for locally-adaptive compression of topographic maps. Key elements include a Hilbert scan to maximize spatial coherence, efficient encoding of homogeneous image regions through arbitrarily-sized texel runs, a cumulative run-length encoding supporting fast random access, and a compression algorithm supporting lossless and lossy compression. Our scheme can be easily implemented on current programmable graphics hardware, allowing real-time GPU decompression and rendering of bilinear-filtered topographic maps.
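The cumulative run-length idea with fast random access fits in a few lines; this is a schematic CPU sketch, whereas the paper targets GPU decompression:

```python
# Cumulative run-length coding: store one value per run plus the cumulative
# texel count at the end of each run; a texel lookup is then a binary search.
import bisect

def rle_encode(texels):
    values, ends = [], []
    for t in texels:
        if values and values[-1] == t:
            ends[-1] += 1
        else:
            values.append(t)
            ends.append((ends[-1] if ends else 0) + 1)
    return values, ends             # ends[i]: cumulative count after run i

def rle_lookup(values, ends, i):
    return values[bisect.bisect_right(ends, i)]   # O(log #runs) random access
```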

9.
We present an example-based approach to rendering hand-colored animations which delivers visual richness comparable to real artwork while enabling control over the amount of perceived temporal noise. This is important both for artistic purposes and viewing comfort, but is tedious or even intractable to achieve manually. We analyse typical features of real hand-colored animations and propose an algorithm that tries to mimic them using only static examples of drawing media. We apply the algorithm to various animations using different drawing media and compare the quality of synthetic results with real artwork. To verify our method perceptually, we conducted experiments confirming that our method delivers distinguishable noise levels and reduces eye strain. Finally, we demonstrate the capabilities of our method to mask imperfections such as shower-door artifacts.

10.
We propose a new real-time temporal filtering and antialiasing (AA) method for rasterization graphics pipelines. Our method is based on Pixel History Linear Models (PHLM), a new concept for modeling the history of pixel shading values over time using linear models. Based on PHLM, our method can predict per-pixel variations of the shading function between consecutive frames. This combines temporal reprojection with per-pixel shading predictions in order to provide temporally coherent shading, even in the presence of very noisy input images. Our method can address both spatial and temporal aliasing problems within a single filtering framework that minimizes filtering error through a recursive least squares algorithm. We demonstrate our method working with a commercial deferred shading engine for rasterization and with our own OpenGL deferred shading renderer. We have implemented our method on the GPU, where it shows a significant reduction of temporal flicker in very challenging scenarios including foliage rendering, complex non-linear camera motions, dynamic lighting, reflections, shadows and fine geometric details. Our approach, based on PHLM, avoids visible ghosting artifacts and reduces the over-blur characteristic of temporal deflickering methods. At the same time, the results are comparable to state-of-the-art real-time filters in terms of temporal coherence.
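A per-pixel recursive least squares update of a small linear model is the kind of fit a PHLM-style filter maintains; the linear-trend model and the forgetting factor `lam` below are assumptions, not the paper's exact formulation:

```python
# Recursive least squares (with forgetting) for one pixel's shading history.
import numpy as np

class PixelRLS:
    def __init__(self, lam=0.9):
        self.lam = lam
        self.P = np.eye(2) * 1e3    # inverse covariance; large = uncertain
        self.w = np.zeros(2)        # model: value ~ w[0] + w[1] * t

    def update(self, t, value):
        x = np.array([1.0, t])
        Px = self.P @ x
        gain = Px / (self.lam + x @ Px)
        self.w += gain * (value - x @ self.w)          # correct the model
        self.P = (self.P - np.outer(gain, Px)) / self.lam
        return x @ self.w           # filtered prediction for this frame
```

The forgetting factor controls how quickly old frames stop influencing the fit, which is what trades temporal smoothing against responsiveness.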

11.
Most state-of-the-art compression algorithms use complex connectivity traversal and prediction schemes, which are not efficient enough for online compression of large meshes. In this paper we propose a scalable, massively parallel approach for compression and decompression of large triangle meshes using the GPU. Our method traverses the input mesh in a parallel breadth-first manner and encodes the connectivity data similarly to the well-known cut-border machine. Geometry data is compressed using a local prediction strategy. In contrast to the original cut-border machine, we can additionally handle triangle meshes with inconsistently oriented faces. Our approach is more than one order of magnitude faster than currently used methods and achieves competitive compression rates.

12.
We propose a versatile pipeline to render B-Rep models interactively, precisely and without rendering-related artifacts such as cracks. Our rendering method is based on dynamic surface evaluation using both tessellation and ray-casting, and on direct GPU surface trimming. An initial rendering of the scene is performed using dynamic tessellation. The algorithm we propose reliably detects and then fills cracks in the rendered image. Crack detection works in image space, using depth information, while crack-filling is achieved either in image space using a simple classification process, or in object space through selective ray-casting. The crack-filling method can be changed dynamically at runtime. Our image-space crack filling approach has a limited runtime cost and enables high-quality, real-time navigation. Our higher-quality, object-space approach results in a rendering of similar quality to full-scene ray-casting, but is 2 to 6 times faster, can be used during navigation, and provides accurate, reliable rendering. Integration of our work with existing tessellation-based rendering engines is straightforward.
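As a crude image-space stand-in for the crack detector, one can flag strong depth discontinuities; the actual method must distinguish cracks from genuine silhouettes, which this sketch does not attempt, and `eps` is an assumed threshold:

```python
# Flag pixels whose depth differs sharply from a neighbour; such pixels are
# candidates for re-classification or selective ray-casting.
import numpy as np

def crack_candidates(depth, eps=1e-3):
    gx = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
    gy = np.abs(np.diff(depth, axis=0, prepend=depth[:1, :]))
    return (gx > eps) | (gy > eps)
```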

13.
We propose a watermarking algorithm for polygonal meshes based on the modification of the Laplacian coordinates. More specifically, we first compute the Laplacian coordinates (x,y,z) of the mesh vertices, then construct the histogram of the lengths of the (x,y,z) vectors, and finally insert the watermark by altering the shape of that histogram. The watermark extraction is carried out blindly, with no reference to the host model. The proposed method is more robust than several existing high-capacity watermarking algorithms. In particular, it is able to resist attacks such as translation, rotation, uniform scaling and vertex reordering, owing to the invariance of the histogram of the Laplacian vector lengths under such transformations. Compared to existing robust watermarking methods, our experiments show that the proposed method can better resist common mesh editing attacks, due to the good behaviour of the Laplacian coordinates under such operations.
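The invariant the watermark relies on can be sketched directly; the embedding step (reshaping the histogram) is omitted, and `n_bins` is an assumed choice:

```python
# Histogram of Laplacian-coordinate vector lengths for a mesh.
import numpy as np

def laplacian_length_histogram(verts, neighbors, n_bins=64):
    # delta (Laplacian) coordinates: vertex minus the mean of its neighbours
    delta = np.array([verts[i] - verts[list(nbrs)].mean(axis=0)
                      for i, nbrs in enumerate(neighbors)])
    lengths = np.linalg.norm(delta, axis=1)
    # translation and rotation leave the lengths unchanged, vertex reordering
    # leaves the histogram unchanged, and uniform scaling only stretches the
    # bin range; this is what makes the embedding robust to those attacks
    return np.histogram(lengths, bins=n_bins)
```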

14.
Particle-based simulation techniques, like the discrete element method or molecular dynamics, are widely used in many research fields. In real-time explorative visualization it is common to render the resulting data using opaque spherical glyphs with local lighting only. Due to massive overlaps, however, inner structures of the data are often occluded, rendering visual analysis impossible. Furthermore, local lighting alone is not sufficient, as several important features such as complex shapes, holes, rifts or filaments cannot be perceived well. To address both problems we present a new technique that jointly supports transparency and ambient occlusion in a consistent illumination model. Our approach is based on the emission-absorption model of volume rendering. We provide analytic solutions to the volume rendering integral for several density distributions within a spherical glyph. Compared to constant transparency, our approach preserves the three-dimensional impression of the glyphs much better. We approximate ambient illumination with a fast hierarchical voxel cone-tracing approach, which builds on a new real-time voxelization of the particle data. Our implementation achieves interactive frame rates for millions of static or dynamic particles without any preprocessing. We illustrate the merits of our method on real-world data sets, gaining several new insights.
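For the simplest density distribution, constant density inside the glyph, the emission-absorption integral has an obvious closed form; `sigma` is an assumed extinction coefficient, and this is only one of the cases such a method would solve analytically:

```python
# A view ray with impact parameter b (distance of the ray to the sphere
# centre) traverses a chord of length 2*sqrt(R^2 - b^2) through a sphere of
# radius R, so constant density gives a closed-form opacity per pixel.
import numpy as np

def sphere_alpha(b, radius, sigma):
    chord = 2.0 * np.sqrt(np.maximum(radius ** 2 - b ** 2, 0.0))
    return 1.0 - np.exp(-sigma * chord)   # opaque core, soft silhouette
```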

15.
This paper proposes two variants of a simple but efficient algorithm for structure-preserving halftoning. Our algorithm extends Floyd-Steinberg error diffusion; the goal of our extension is not only to produce good tone similarity but also to preserve structure and especially contrast, motivated by our intuition that human perception is sensitive to contrast. By enhancing contrast we attempt to preserve and enhance structure as well. Our basic algorithm employs an adaptive, contrast-aware mask. To enhance contrast, darker pixels should be more likely to be chosen as black while lighter pixels should be more likely to be set as white. Therefore, when positive error is diffused to nearby pixels in a mask, dark pixels absorb less error and light pixels absorb more; conversely, negative error is distributed preferentially to dark pixels. We also propose using a mask whose values drop off steeply from the centre, intended to promote good spatial distribution. The method is very fast, with speed mainly dependent on the size of the mask, but it suffers from distracting patterns. We therefore propose a variant of the basic idea which overcomes the first algorithm's shortcomings while maintaining its advantages through a priority-aware scheme. Rather than proceeding in random or raster order, we sort the image first: each pixel is assigned a priority based on its up-to-date distance to black or to white, and pixels with extreme intensities are processed earlier. Since we use the same mask strategy as before, we promote good spatial distribution and high contrast. We use tone similarity, structure similarity and contrast similarity to validate our algorithm. Comparisons with recent structure-aware algorithms show that our method gives better results without sacrificing speed.
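The contrast-aware error distribution can be sketched as a Floyd-Steinberg variant; the weighting below is an illustrative guess at the described behaviour, not the paper's exact mask:

```python
# Floyd-Steinberg dithering where the error shares are reweighted by how
# light or dark each neighbour currently is: positive error flows to lighter
# pixels, negative error to darker ones.
import numpy as np

FS_TAPS = [(0, 1, 7/16), (1, -1, 3/16), (1, 0, 5/16), (1, 1, 1/16)]

def contrast_aware_dither(img):          # img: float grayscale in [0, 1]
    f = img.astype(float)
    h, w = f.shape
    out = np.zeros_like(f)
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if f[y, x] >= 0.5 else 0.0
            err = f[y, x] - out[y, x]
            taps = [(y + dy, x + dx, wt) for dy, dx, wt in FS_TAPS
                    if 0 <= y + dy < h and 0 <= x + dx < w]
            if not taps:
                continue
            raw = [float(np.clip(f[yy, xx] if err > 0 else 1 - f[yy, xx], 0, 1))
                   for yy, xx, _ in taps]
            total = sum(r * wt for (_, _, wt), r in zip(taps, raw)) or 1.0
            for (yy, xx, wt), r in zip(taps, raw):
                f[yy, xx] += err * r * wt / total
    return out
```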

16.
Poisson-disk sampling is a popular sampling method because of its blue noise power spectrum, but generating these samples is computationally very expensive. In this paper, we propose an efficient method for fast generation of a large number of blue noise samples from a small initial patch of Poisson-disk samples that can be generated with any existing approach. Our main idea is to convolve this set of samples with another to generate our final set of samples. We use the convolution theorem from signal processing to show that the spectrum of the resulting sample set preserves the blue noise properties. Since our method is approximate, it introduces error with respect to true Poisson-disk samples, but we show both mathematically and practically that this error is only a function of the number of samples in the small initial patch and is therefore bounded. Our method is parallelizable and we demonstrate a GPU implementation running more than 10 times faster than any previous method and generating more than 49 million 2D samples per second. We can also use the proposed approach to generate multidimensional blue noise samples.
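Convolving two point sets (Dirac trains) amounts to placing a copy of the small patch at every sample of the coarser set, i.e. taking all pairwise position sums, which broadcasts to one line of NumPy; scaling the patch to the coarse spacing is assumed to happen beforehand:

```python
# Convolution of two sample sets: one shifted patch copy per coarse sample.
import numpy as np

def convolve_samples(patch, coarse):
    # (1, P, 2) + (C, 1, 2) -> (C, P, 2) -> (C * P, 2) output samples
    return (patch[None, :, :] + coarse[:, None, :]).reshape(-1, 2)
```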

17.
Image calibration requires both linearization of pixel values and scaling, so that values in the image correspond to real-world luminances. In this paper we focus on the latter and, rather than relying on camera characterization, we calibrate images by analysing their content and metadata, obviating the need for expensive measuring devices or modeling of lens and camera combinations. Our analysis correlates sky pixel values to the luminances that would be expected based on geographical metadata. Combined with high dynamic range (HDR) imaging, which gives us linear pixel data, our algorithm allows us to find absolute luminance values for each pixel, effectively turning digital cameras into absolute light meters. To validate our algorithm we have collected and annotated a calibrated set of HDR images and compared our estimation with several other approaches, showing that our approach is able to more accurately recover absolute luminance. We discuss various applications and demonstrate the utility of our method in the context of calibrated color appearance reproduction and lighting design.
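Once pixels are linear, the scaling step reduces to a single factor; in this sketch the expected sky luminance is an assumed input, whereas the paper derives it from the image's geographic metadata:

```python
# Map linear HDR pixel values to absolute luminance (cd/m^2) by anchoring
# the mean of the sky pixels to an expected sky luminance.
import numpy as np

def calibrate(linear_img, sky_mask, expected_sky_cdm2):
    scale = expected_sky_cdm2 / linear_img[sky_mask].mean()
    return linear_img * scale      # per-pixel absolute luminance
```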

18.
Motion Compression using Principal Geodesics Analysis
Due to the growing need for large quantities of human animation data in the entertainment industry, it has become a necessity to compress motion capture sequences in order to ease their storage and transmission. We present a novel, lossy compression method for human motion data that exploits both temporal and spatial coherence. Given one motion, we first approximate the pose manifold using Principal Geodesics Analysis (PGA) in the configuration space of the skeleton. We then search this approximate manifold for poses matching end-effector constraints using an iterative minimization algorithm that allows for real-time, data-driven inverse kinematics. The compression is achieved by storing only the approximate manifold parametrization along with the end-effector and root-joint trajectories, themselves also compressed, in the output data. We recover poses by applying the IK algorithm to the end-effector trajectories. Our experimental results show that considerable compression rates can be obtained using our method, with small reconstruction and perceptual error.

19.
In this paper, we describe a novel approach for the reconstruction of animated meshes from a series of time-deforming point clouds. Given a set of unordered point clouds that have been captured by a fast 3-D scanner, our algorithm is able to compute coherent meshes which approximate the input data at arbitrary time instances. Our method is based on the computation of an implicit function in ℝ⁴ that approximates the time-space surface of the time-varying point cloud. We then use the four-dimensional implicit function to reconstruct a polygonal model for the first time-step. By sliding this template mesh along the time-space surface in an as-rigid-as-possible manner, we obtain reconstructions for further time-steps which have the same connectivity as the previously extracted mesh while recovering rigid motion exactly. The resulting animated meshes allow accurate motion tracking of arbitrary points and are well suited for animation compression. We demonstrate the qualities of the proposed method by applying it to several data sets acquired by real-time 3-D scanners.

20.
Mobile phones and tablets are rapidly gaining significance as omnipresent image and video capture devices. In this context we present an algorithm that allows such devices to capture high dynamic range (HDR) video. The design of the algorithm was informed by a perceptual study that assesses the relative importance of motion and dynamic range. We found that ghosting artefacts are more visually disturbing than a reduction in dynamic range, even if a comparable number of pixels is affected by each. We incorporated these findings into a real-time, adaptive metering algorithm that seamlessly adjusts its settings to take exposures that will lead to minimal visual artefacts after recombination into an HDR sequence. It is uniquely suitable for real-time selection of exposure settings. Finally, we present an off-line HDR reconstruction algorithm that is matched to the adaptive nature of our real-time metering approach.
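The off-line recombination builds on standard weighted HDR merging; the hat weighting function below is an assumption, not necessarily what the paper uses:

```python
# Merge linear exposures into one radiance map: divide each frame by its
# exposure time and blend with weights that favour well-exposed pixels.
import numpy as np

def merge_hdr(frames, exposure_times):
    num = np.zeros_like(frames[0], dtype=float)
    den = np.zeros_like(num)
    for z, t in zip(frames, exposure_times):   # z: linear values in [0, 1]
        w = 1.0 - np.abs(2.0 * z - 1.0)        # hat weight: trust mid-range
        num += w * z / t
        den += w
    return num / np.maximum(den, 1e-8)
```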
