Similar Documents
20 similar documents found (search time: 15 ms)
1.
This paper presents a novel wavelet‐based transform and coding scheme for irregular meshes. The transform preserves geometric features at lower resolutions by adaptive vertex sampling and retriangulation, resulting in more accurate subsampling and better avoidance of smoothing and aliasing artefacts. By employing octree‐based coding techniques, the encoding of both connectivity and geometry information is decoupled from any mesh traversal order, which allows the intra‐band statistical dependencies between wavelet coefficients to be exploited. Our approach improves over the state of the art in three ways: (1) improved rate–distortion performance over Wavemesh and IPR for both the Hausdorff and root mean square distances at low‐to‐mid‐range bitrates, most obvious when clear geometric features are present, while remaining competitive for smooth, feature‐poor models; (2) improved rendering performance at any triangle budget, translating to better quality for the same runtime memory footprint; (3) improved visual quality when applying similar limits to the bitrate or triangle budget, with improvements more pronounced than the rate–distortion curves suggest.

2.
We address the problem of generating quality surface triangle meshes from 3D point clouds sampled on piecewise smooth surfaces. Using a feature detection process based on the covariance matrices of Voronoi cells, we first extract from the point cloud a set of sharp features. Our algorithm also runs a reconstruction process, such as Poisson reconstruction, on the input point cloud to provide an implicit surface. A feature‐preserving variant of a Delaunay refinement process is then used to generate a mesh approximating the implicit surface and containing a faithful representation of the extracted sharp edges. Such a mesh provides an enhanced trade‐off between accuracy and mesh complexity. The whole process is robust to noise and made versatile through a small set of parameters which govern the mesh sizing, approximation error and shape of the elements. We demonstrate the effectiveness of our method on a variety of models including laser‐scanned datasets ranging from indoor to outdoor scenes.
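As a rough illustration of covariance-based feature detection on point clouds (a minimal sketch, not the paper's Voronoi-cell formulation; the neighbourhood size `k` and threshold `tau` are hypothetical choices):

```python
import numpy as np
from scipy.spatial import cKDTree

def detect_sharp_candidates(points, k=16, tau=0.05):
    """Flag points whose local covariance spectrum suggests a crease or corner."""
    tree = cKDTree(points)
    flags = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)
        nbrs = points[idx]
        cov = np.cov(nbrs.T)                      # 3x3 covariance of the neighbourhood
        evals = np.sort(np.linalg.eigvalsh(cov))  # ascending eigenvalues
        # surface variation: near zero on flat regions, larger near sharp features
        variation = evals[0] / max(evals.sum(), 1e-12)
        flags[i] = variation > tau
    return flags
```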

3.
This paper presents a novel algorithm for hierarchical random-accessible mesh decompression. Our approach progressively decompresses the requested parts of a mesh without decoding less interesting parts. Previous approaches divided a mesh into independently compressed charts and a base coarse mesh. We propose a novel hierarchical representation of the mesh. We build this representation by using a boundary-based approach to recursively split the mesh into two parts, under the constraint that each of the resulting submeshes can be reconstructed independently.
In addition to this decomposition technique, we introduce the concepts of opposite vertex and context-dependent numbering. These enable us to achieve better compression ratios than previous work on quad and higher-degree polygonal meshes. Our coder uses about 3 bits per polygon for connectivity and 14 bits per vertex for geometry with 12-bit quantization.
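To convey the general flavour of random-accessible hierarchical decompression (a hypothetical data layout, not the paper's boundary-based coder), one can picture the split hierarchy as a binary tree whose leaves hold independently compressed submeshes; decoding a requested region touches only the branch that covers it:

```python
import zlib

class SplitNode:
    def __init__(self, bbox, payload=None, left=None, right=None):
        self.bbox = bbox          # region of space covered by this node
        self.payload = payload    # compressed submesh bytes (leaves only)
        self.left, self.right = left, right

    def decode_region(self, query_bbox):
        """Decompress only the submeshes whose regions overlap the query."""
        if not boxes_overlap(self.bbox, query_bbox):
            return []
        if self.payload is not None:              # leaf: decompress on demand
            return [zlib.decompress(self.payload)]
        parts = []
        for child in (self.left, self.right):
            if child is not None:
                parts += child.decode_region(query_bbox)
        return parts

def boxes_overlap(a, b):
    (ax0, ay0, az0, ax1, ay1, az1), (bx0, by0, bz0, bx1, by1, bz1) = a, b
    return (ax0 <= bx1 and bx0 <= ax1 and
            ay0 <= by1 and by0 <= ay1 and
            az0 <= bz1 and bz0 <= az1)
```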

4.
A recent technique that forms virtual ray lights (VRLs) from path segments in media reduces the artifacts common to VPL approaches in participating media; however, distracting singularities still remain. We present Virtual Beam Lights (VBLs), a progressive many‐lights algorithm for rendering complex indirect transport paths in, from, and to media. VBLs are efficient and can handle heterogeneous media, anisotropic scattering, and moderately glossy surfaces, while provably converging to ground truth. We inflate ray lights into beam lights with finite thicknesses to eliminate the remaining singularities. Furthermore, we devise several practical schemes for importance sampling the various transport contributions between camera rays, light rays, and surface points. VBLs produce artifact‐free images faster than VRLs, especially when glossy surfaces and/or anisotropic phase functions are present. Lastly, we employ a progressive thickness reduction scheme for VBLs in order to render results that converge to ground truth.

5.
We propose a novel algorithm for constructing bounding volume hierarchies (BVHs) on multi‐core CPU architectures. The algorithm constructs the BVH by a divisive top‐down approach using a progressively refined cut of an existing auxiliary BVH. We propose a new strategy for refining the cut that significantly reduces the workload of the individual steps of BVH construction. Additionally, we propose a new method for integrating spatial splits into the BVH construction algorithm. The auxiliary BVH is constructed using a very fast method such as LBVH based on Morton codes. We show that the method provides a very good trade‐off between build time and ray tracing performance. We evaluated the method within the Embree ray tracing framework and show that it compares favorably with the Embree BVH builders regarding build time while maintaining comparable ray tracing speed.
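The auxiliary LBVH mentioned above relies on sorting primitives by Morton code. A minimal sketch of that ingredient (standard 30-bit Morton codes on a 10-bit grid per axis, not the paper's cut-refinement strategy):

```python
import numpy as np

def expand_bits(v):
    # spread the lower 10 bits of v so there are two zero bits between each bit
    v = (v * 0x00010001) & 0xFF0000FF
    v = (v * 0x00000101) & 0x0F00F00F
    v = (v * 0x00000011) & 0xC30C30C3
    v = (v * 0x00000005) & 0x49249249
    return v

def morton_codes(points):
    """30-bit Morton codes for 3D points quantized to a 1024^3 grid."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    q = np.clip((points - lo) / np.maximum(hi - lo, 1e-12) * 1023, 0, 1023).astype(np.uint64)
    return (expand_bits(q[:, 0]) << 2) | (expand_bits(q[:, 1]) << 1) | expand_bits(q[:, 2])

# order = np.argsort(morton_codes(centroids))  # spatially coherent primitive order
```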

6.
Social networks collected by historians or sociologists typically have a large number of actors and edge attributes. Applying social network analysis (SNA) algorithms to these networks produces additional attributes such as degree, centrality, and clustering coefficients. Understanding the effects of this plethora of attributes is one of the main challenges of multivariate SNA. We present the design of GraphDice, a multivariate network visualization system for exploring the attribute space of edges and actors. GraphDice builds upon the ScatterDice system for its main multidimensional navigation paradigm, and extends it with novel mechanisms to support network exploration in general and SNA tasks in particular. These novel mechanisms include visualization of interval-type attributes and projection of numerical edge attributes onto node attributes. We show how these extensions to the original ScatterDice system allow us to support complex visual analysis tasks on networks with hundreds of actors and up to 30 attributes, while providing a simple and consistent interface for interacting with network data.
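For readers unfamiliar with the computed attributes mentioned above, a minimal sketch (assuming networkx, which is not part of GraphDice) of deriving degree, centrality and clustering coefficients as node attributes for later exploration:

```python
import networkx as nx

def add_sna_attributes(G):
    """Attach common SNA measures to each actor as node attributes."""
    nx.set_node_attributes(G, dict(G.degree()), "degree")
    nx.set_node_attributes(G, nx.betweenness_centrality(G), "betweenness")
    nx.set_node_attributes(G, nx.clustering(G), "clustering")
    return G

# G = nx.karate_club_graph(); add_sna_attributes(G)
```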

7.
We introduce an affective image recoloring method for changing the overall mood of an image in a numerically measurable way. Given a semantically segmented source image and a target emotion, our system finds reference image segments from a collection of images that have been tagged via crowdsourcing with numerically measured emotion labels. We then recolor the source segments using colors from the selected target segments while preserving the gradient of the source image to generate a seamless and natural result. A user study confirms the effectiveness of our method in accomplishing the stated goal of altering the mood of the image to match the target emotion level.
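A minimal, hypothetical stand-in for the recoloring step (a simple per-segment statistics transfer in the spirit of Reinhard-style colour transfer, not the paper's gradient-preserving method):

```python
import numpy as np

def transfer_segment_color(src, src_mask, ref, ref_mask):
    """Match mean and spread of each colour channel of a source segment to a reference segment.

    src, ref: float RGB arrays in [0, 1]; src_mask, ref_mask: boolean segment masks.
    """
    out = src.copy()
    for c in range(3):
        s = src[..., c][src_mask]
        r = ref[..., c][ref_mask]
        scaled = (s - s.mean()) / (s.std() + 1e-8) * r.std() + r.mean()
        out[..., c][src_mask] = np.clip(scaled, 0.0, 1.0)
    return out
```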

8.
Many processing operations, such as compression, watermarking and remeshing, are nowadays applied to 3D meshes. These processes are mostly driven and/or evaluated using simple distortion measures such as the Hausdorff distance and the root mean square error; however, these measures do not correlate with human visual perception, even though the visual quality of the processed meshes is a crucial issue. In that context we introduce a full‐reference 3D mesh quality metric. This metric can compare two meshes with arbitrary connectivity or sampling density and produces a score that predicts the distortion visibility between them; a visual distortion map is also created. Our metric outperforms its counterparts from the state of the art in terms of correlation with mean opinion scores coming from subjective experiments on three existing databases. Additionally, we present an application of this new metric to the improvement of rate‐distortion evaluation of recent progressive compression algorithms.
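For context, the purely geometric measures that the metric is contrasted with can be sketched as symmetric distances between dense point samplings of the two meshes (a simplification: nearest-neighbour distances between samples stand in for true point-to-surface distances):

```python
import numpy as np
from scipy.spatial import cKDTree

def rms_distance(samples_a, samples_b):
    """Symmetric root-mean-square distance between two point samplings."""
    d_ab, _ = cKDTree(samples_b).query(samples_a)   # A -> B distances
    d_ba, _ = cKDTree(samples_a).query(samples_b)   # B -> A distances
    return np.sqrt((np.sum(d_ab**2) + np.sum(d_ba**2)) / (len(d_ab) + len(d_ba)))

def hausdorff_distance(samples_a, samples_b):
    """Symmetric Hausdorff distance between two point samplings."""
    d_ab, _ = cKDTree(samples_b).query(samples_a)
    d_ba, _ = cKDTree(samples_a).query(samples_b)
    return max(d_ab.max(), d_ba.max())
```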

9.
10.
We propose a lossless, single‐rate triangle mesh topology codec tailored for fast data‐parallel GPU decompression. Our compression scheme coherently orders generalized triangle strips in memory. To unpack generalized triangle strips efficiently, we propose a novel parallel and scalable algorithm. We order vertices coherently to further improve our compression scheme. We use a variable bit‐length code for additional compression benefits, for which we propose a scalable data‐parallel decompression algorithm. For a set of standard benchmark models, we obtain (min: 3.7, med: 4.6, max: 7.6) bits per triangle. Our CUDA decompression requires only about 15% of the time it takes to render the model even with a simple shader.
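As background, the serial baseline that the paper parallelizes can be sketched as plain triangle-strip decoding (the paper's generalized strips and variable bit-length codes are not shown here):

```python
def decode_strip(strip):
    """Unpack a sequential triangle strip (list of vertex indices) into triangles."""
    triangles = []
    for i in range(len(strip) - 2):
        a, b, c = strip[i], strip[i + 1], strip[i + 2]
        if a == b or b == c or a == c:      # degenerate triangle used for stitching
            continue
        # flip orientation on every other step to keep consistent winding
        triangles.append((a, b, c) if i % 2 == 0 else (a, c, b))
    return triangles
```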

11.
State‐of‐the‐art density estimation methods for rendering participating media rely on a dense photon representation of the radiance distribution within a scene. A critical bottleneck of such kernel‐based approaches is the excessive number of photons that are required in practice to resolve fine illumination details, while controlling the amount of noise. In this paper, we propose a parametric density estimation technique that represents radiance using a hierarchical Gaussian mixture. We efficiently obtain the coefficients of this mixture using a progressive and accelerated form of the Expectation‐Maximization algorithm. After this step, we are able to create noise‐free renderings of high‐frequency illumination using only a few thousand Gaussian terms, where millions of photons are traditionally required. Temporal coherence is trivially supported within this framework, and the compact footprint is also useful in the context of real‐time visualization. We demonstrate a hierarchical ray tracing‐based implementation, as well as a fast splatting approach that can interactively render animated volume caustics.
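A minimal sketch of the parametric idea (assuming scikit-learn's standard EM fit, not the paper's progressive, accelerated, hierarchical variant): fit a Gaussian mixture to photon positions and evaluate it as a compact density estimate:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

photons = np.random.rand(100_000, 3)                 # placeholder photon positions
gmm = GaussianMixture(n_components=64, covariance_type="full").fit(photons)

query = np.random.rand(10, 3)
density = np.exp(gmm.score_samples(query))           # per-point density estimate
```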

12.
Topological and geometrical methods constitute common tools for the analysis of high‐dimensional scientific data sets. Geometrical methods such as projection algorithms focus on preserving distances in the data set. Topological methods such as contour trees, by contrast, focus on preserving structural and connectivity information. By combining both types of methods, we want to benefit from their individual advantages. To this end, we describe an algorithm that uses persistent homology to analyse the topology of a data set. Persistent homology identifies high‐dimensional holes in data sets, describing them as simplicial chains. We localize these chains using geometrical information of the data set, which we obtain from geodesic distances on a neighbourhood graph. The localized chains describe the structure of point clouds. We represent them using an interactive graph, in which each node describes a single chain and its geometrical properties. This graph yields a more intuitive understanding of multivariate point clouds and simplifies comparisons of time‐varying data. Our method focuses on detecting and analysing inhomogeneous regions, i.e. holes, in a data set because these regions characterize data in a different manner, thereby leading to new insights. We demonstrate the potential of our method on data sets from particle physics, political science and meteorology.
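The geodesic-distance ingredient can be sketched as shortest paths on a k-nearest-neighbour graph (a minimal sketch; the persistent-homology computation itself would need a dedicated library and is omitted):

```python
from scipy.sparse.csgraph import shortest_path
from sklearn.neighbors import kneighbors_graph

def geodesic_distances(points, k=10):
    """Approximate geodesic distances as shortest paths on a kNN graph."""
    graph = kneighbors_graph(points, n_neighbors=k, mode="distance")
    graph = graph.maximum(graph.T)            # symmetrize the neighbourhood graph
    return shortest_path(graph, method="D", directed=False)
```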

13.
Recently, approaches have been put forward that focus on recognizing mesh semantics. These methods usually need prior knowledge learned from a training dataset, but when the training dataset is small, or the meshes are too complex, segmentation performance is greatly affected. This paper introduces an approach to semantic mesh segmentation and labeling that incorporates knowledge from both segmented, labeled meshes and unsegmented, unlabeled meshes. We propose a Conditional Random Field (CRF) based objective function that measures the consistency between labels and faces and between the labels of neighbouring faces. To incorporate information from the unlabeled meshes, we add an unlabeled conditional entropy term to the objective function. With this entropy term the objective function is no longer convex and is hard to optimize, so we modify Virtual Evidence Boosting (VEB) to solve the semi‐supervised problem efficiently. Our approach yields better results than methods that only use limited labeled meshes, especially when many unlabeled meshes exist. The approach reduces the overall system cost as well as the human labelling cost required during training. We also show that combining knowledge from labeled and unlabeled meshes outperforms using either type of meshes alone.
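A minimal sketch of the unlabeled-entropy idea in isolation (not the paper's CRF/VEB formulation): penalize uncertain predictions on unlabeled faces by adding their mean Shannon entropy to the objective:

```python
import numpy as np

def unlabeled_entropy(probabilities, eps=1e-12):
    """Mean Shannon entropy of predicted label distributions on unlabeled faces.

    probabilities: array of shape (n_unlabeled_faces, n_labels), rows summing to 1.
    """
    p = np.clip(probabilities, eps, 1.0)
    return float(np.mean(-np.sum(p * np.log(p), axis=1)))
```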

14.
Applying lossy data compression to climate model output is an attractive means of reducing the enormous volumes of data generated by climate models. However, because lossy data compression does not exactly preserve the original data, its application to scientific data must be done judiciously. To this end, a collection of measures is being developed to evaluate various aspects of lossy compression quality on climate model output. Given the importance of data visualization to climate scientists interacting with model output, any suite of measures must include a means of assessing whether images generated from the compressed model data are noticeably different from images based on the original model data. Therefore, in this work we conduct a forced‐choice visual evaluation study with climate model data that surveyed more than one hundred participants with domain-relevant expertise. In addition to the images created from unaltered climate model data, study images are generated from model data that is subjected to two different types of lossy compression approaches and multiple levels (amounts) of compression. Study participants indicate whether a visual difference can be seen, with respect to the reference image, due to lossy compression effects. We assess the relationship between the perceptual scores from the user study and a number of common (full-reference) image quality assessment (IQA) measures, and use statistical models to suggest appropriate measures and thresholds for evaluating lossily compressed climate data. We find the structural similarity index (SSIM) to perform the best, and our findings indicate that the threshold required for climate model data is much higher than previous findings in the literature.
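A minimal sketch of the kind of SSIM check the study motivates (assuming scikit-image; the 0.995 threshold is purely illustrative, not the study's recommended value):

```python
from skimage.metrics import structural_similarity

def passes_ssim_check(reference_rgb, compressed_rgb, threshold=0.995):
    """Compare an image rendered from compressed data against the reference image.

    Both inputs are float RGB arrays in [0, 1] with the same shape.
    """
    score = structural_similarity(reference_rgb, compressed_rgb,
                                  channel_axis=-1, data_range=1.0)
    return score, score >= threshold
```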

15.
Compression of Dense and Regular Point Clouds   (total citations: 4; self-citations: 0; citations by others: 4)
We present a simple technique for single‐rate compression of point clouds sampled from a surface, based on a spanning tree of the points. Unlike previous methods, we predict future vertices using both a linear predictor, which uses the previous edge as a predictor for the current edge, and lateral predictors that rotate the previous edge 90° left or right about an estimated normal. By careful construction of the spanning tree and choice of prediction rules, our method improves upon existing compression rates when applied to regularly sampled point sets, such as those produced by laser range scanning or uniform tessellation of higher‐order surfaces. For less regular sets of points, the compression rate is still generally within 1.5 bits per point of other compression algorithms.
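The two prediction rules can be sketched directly (a minimal sketch of the predictors only, not the spanning-tree construction or entropy coding; the normal is assumed to be estimated elsewhere, and the residual against the best prediction is what would actually be encoded):

```python
import numpy as np

def rotate_about_axis(v, axis, angle):
    """Rodrigues' rotation of vector v about a (non-zero) axis by the given angle."""
    axis = axis / np.linalg.norm(axis)
    return (v * np.cos(angle)
            + np.cross(axis, v) * np.sin(angle)
            + axis * np.dot(axis, v) * (1.0 - np.cos(angle)))

def predict_next(prev_vertex, curr_vertex, normal):
    """Candidate positions for the next vertex from the previous edge."""
    edge = curr_vertex - prev_vertex
    return [curr_vertex + edge,                                          # linear predictor
            curr_vertex + rotate_about_axis(edge, normal,  np.pi / 2),   # lateral left
            curr_vertex + rotate_about_axis(edge, normal, -np.pi / 2)]   # lateral right
```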

16.
We present the results from a user study looking at the ability of observers to mentally integrate wind direction and magnitude over a vector field. The data set chosen for the study is an MM5 (PSU/NCAR Mesoscale Model) simulation of Hurricane Lili over the Gulf of Mexico as it approaches the southeastern United States. Nine observers participated in the study. This study investigates the effect of layering on the observer's ability to determine the magnitude and direction of a vector field. We found a tendency for observers to underestimate the magnitude of the vectors and a counter‐clockwise bias when determining the average direction of a vector field. We completed an additional study with two observers to try to uncover the source of the counter‐clockwise bias. These results have direct implications for atmospheric scientists, but may also apply to other fields that use 2D vector fields.

17.
This paper considers the problem of interactively finding the cutting contour to extract components from a given mesh. Some existing methods support cuts of arbitrary shape but require careful and tedious input from the user. Others need little user input; however, they are sensitive to that input and need a postprocessing step to smooth the resulting jaggy cutting contours. The popular geometric snake can be used to optimize the cutting contour, but it cannot deal with topology changes. In this paper, we propose a geodesic curvature flow based framework to overcome all these problems. Since in many cases the meaningful cutting contour on a 3D mesh is locally shortest in the sense of some weighted curve length, geodesic curvature flow is an ideal tool for our problem: it evolves the cutting contour to the nearby local minimum. We should mention that the previous numerical scheme, discretized geodesic curvature flow (dGCF), is too slow and has not been applied to mesh segmentation. Through a careful examination of dGCF, we devise a fast computation scheme called fast geodesic curvature flow (FGCF), which only needs to solve a smaller and easier problem. The initial cutting contour is generated by a variant of the random walks algorithm, which is very fast and gives reasonable cutting results with little user input. Experimental results on the benchmark mesh segmentation data set show that our proposed framework is robust to user input and capable of producing good results reflecting geometric features and human shape perception.

18.
Directors employ a process called “color grading” to add color styles to feature films. Color grading is used for a number of reasons, such as accentuating a certain emotion or expressing the signature look of a director. We collect a database of feature film clips and label them with tags such as director, emotion, and genre. We then learn a model that maps from the low‐level color and tone properties of film clips to the associated labels. This model allows us to examine a number of common hypotheses on the use of color to achieve goals, such as specific emotions. We also describe a method to apply our learned color styles to new images and videos. Along with our analysis of color grading techniques, we demonstrate a number of images and videos that are automatically filtered to resemble certain film styles.
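A minimal, hypothetical sketch of such an analysis pipeline: summarize each clip by simple colour/tone statistics and fit a classifier that predicts a tag such as emotion (the feature set and model here are illustrative; the paper's actual features and learning method may differ):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def color_tone_features(frames):
    """Simple colour/tone descriptor for a clip of shape (n_frames, H, W, 3), float RGB in [0, 1]."""
    means = frames.reshape(-1, 3).mean(axis=0)        # average colour
    stds = frames.reshape(-1, 3).std(axis=0)          # colour spread
    luminance = frames @ np.array([0.2126, 0.7152, 0.0722])
    return np.concatenate([means, stds, [luminance.mean(), luminance.std()]])

# X = np.stack([color_tone_features(clip) for clip in clips])
# model = RandomForestClassifier().fit(X, emotion_labels)
```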

19.
The use of high dynamic range (HDR) textures in real‐time graphics applications can increase realism and provide a more vivid experience. However, the increased bandwidth and storage requirements for uncompressed HDR data can become a major bottleneck. Hence, several recent algorithms for HDR texture compression have been proposed. In this paper, we discuss several practical issues one has to confront in order to develop and implement HDR texture compression schemes. These include improved texture filtering and efficient offline compression. For compression, we describe how Procrustes analysis can be used to quickly match a predefined template shape against chrominance data. To reduce the cost of HDR texture filtering, we perform filtering prior to the colour transformation, and use a simple trick to reduce the incurred errors. We also introduce a number of novel compression modes, which can be combined with existing compression schemes, or used on their own.
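As background, the luminance/chrominance split that HDR texture codecs commonly start from can be sketched as follows (a generic decomposition, not the paper's specific compression modes or filtering trick):

```python
import numpy as np

def hdr_split(rgb):
    """Split an HDR image (float RGB, values may exceed 1) into log-luminance and chrominance."""
    luminance = rgb @ np.array([0.2126, 0.7152, 0.0722])
    log_lum = np.log2(np.maximum(luminance, 1e-6))
    chroma = rgb / np.maximum(luminance, 1e-6)[..., None]   # luminance-normalized colour
    return log_lum, chroma
```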

20.
We introduce adaptive volumetric shadow maps (AVSM), a real‐time shadow algorithm that supports high‐quality shadowing from dynamic volumetric media such as hair and smoke. The key contribution of AVSM is the introduction of a streaming simplification algorithm that generates an accurate volumetric light attenuation function using a small fixed memory footprint. This compression strategy leads to high performance because the visibility data can remain in on‐chip memory during simplification and can be efficiently sampled during rendering. We demonstrate that AVSM compression closely approximates the ground‐truth solution and performs competitively with existing real‐time rendering techniques while providing higher quality volumetric shadows.
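The fixed-memory curve simplification at the heart of AVSM can be sketched as repeatedly removing the interior node whose removal changes the area under the transmittance curve the least (a minimal, non-streaming sketch of that idea only):

```python
def simplify_transmittance(nodes, max_nodes):
    """Reduce a piecewise-linear (depth, transmittance) curve to at most max_nodes nodes.

    nodes: list of (depth, transmittance) pairs, sorted by depth.
    """
    nodes = list(nodes)
    while len(nodes) > max(max_nodes, 2):
        best_i, best_err = None, float("inf")
        for i in range(1, len(nodes) - 1):
            (x0, y0), (x1, y1), (x2, y2) = nodes[i - 1], nodes[i], nodes[i + 1]
            # area change caused by dropping the middle node (triangle area)
            err = abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)) * 0.5
            if err < best_err:
                best_i, best_err = i, err
        nodes.pop(best_i)
    return nodes
```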

