Similar Documents
Found 20 similar documents.
1.
Matrix Trees     
We propose a new data representation for octrees and kd‐trees that improves upon the memory size and algorithm speed of existing techniques. While pointerless approaches exploit the regular structure of the tree to facilitate efficient data access, their memory footprint becomes prohibitively large as the height of the tree increases. Pointer‐based trees require memory consumption proportional to the number of tree nodes, thus exploiting the typical sparsity of large trees. Yet, their traversal is slowed by the need to follow explicit pointers across the different levels. Our solution is a pointerless approach that represents each tree level with its own matrix, as opposed to traditional pointerless trees that use only a single vector. This novel data organization allows us to fully exploit the tree's regular structure and improve the performance of tree operations. By using a sparse matrix data structure we obtain a representation that is suited for sparse and dense trees alike. In particular, it uses less total memory than pointer‐based trees even when the data set is extremely sparse. We show how our approach is easily implemented on the GPU and illustrate its performance in typical visualization scenarios.
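
As a rough illustration of the core idea, here is a minimal Python sketch (not the paper's GPU implementation) of a quadtree stored as one sparse matrix per level, with a dictionary standing in for the sparse matrix; child indices are computed from the tree's regular structure instead of being followed through pointers.

```python
class MatrixQuadtree:
    """Pointerless quadtree: one sparse matrix (here a dict keyed by
    (row, col)) per level. The children of cell (r, c) at level d sit at
    rows 2r..2r+1, cols 2c..2c+1 of the level-(d+1) matrix."""

    def __init__(self, depth):
        self.depth = depth
        self.levels = [dict() for _ in range(depth + 1)]

    def insert(self, x, y, value):
        """Store a leaf value and mark all ancestor cells as occupied."""
        r, c = y, x
        self.levels[self.depth][(r, c)] = value
        for d in range(self.depth - 1, -1, -1):
            r, c = r // 2, c // 2
            self.levels[d].setdefault((r, c), True)  # occupancy flag

    def traverse(self, r=0, c=0, d=0):
        """Top-down traversal without pointers: children are computed."""
        if (r, c) not in self.levels[d]:
            return
        if d == self.depth:
            yield (r, c, self.levels[d][(r, c)])
            return
        for dr in (0, 1):
            for dc in (0, 1):
                yield from self.traverse(2 * r + dr, 2 * c + dc, d + 1)

tree = MatrixQuadtree(depth=3)   # an 8x8 leaf grid
tree.insert(5, 2, "sample")
print(list(tree.traverse()))     # [(2, 5, 'sample')]
```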

2.
We introduce a novel method for non‐rigid shape matching, designed to address the symmetric ambiguity problem present when matching shapes with intrinsic symmetries. Unlike the majority of existing methods which try to overcome this ambiguity by sampling a set of landmark correspondences, we address this problem directly by performing shape matching in an appropriate quotient space, where the symmetry has been identified and factored out. This allows us to both simplify the shape matching problem by matching between subspaces, and to return multiple solutions with equally good dense correspondences. Remarkably, both symmetry detection and shape matching are done without establishing any landmark correspondences between either points or parts of the shapes. This allows us to avoid an expensive combinatorial search present in most intrinsic symmetry detection and shape matching methods. We compare our technique with state‐of‐the‐art methods and show that superior performance can be achieved both when the symmetry on each shape is known and when it needs to be estimated.
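
The following toy Python sketch illustrates the quotient-space idea only, not the paper's actual formulation. It assumes each shape comes with point descriptors and a known intrinsic symmetry given as an index permutation `sym` (a hypothetical input): averaging a descriptor with its symmetric counterpart makes it symmetry invariant, and each match in the quotient expands into two equally good correspondences.

```python
import numpy as np

def quotient_descriptors(desc, sym):
    # Averaging over the symmetry group makes the descriptor invariant,
    # i.e. it lives in the quotient space where the ambiguity is gone.
    return 0.5 * (desc + desc[sym])

def match_in_quotient(descA, symA, descB, symB):
    qA = quotient_descriptors(descA, symA)
    qB = quotient_descriptors(descB, symB)
    # Nearest neighbour in the quotient space.
    d = ((qA[:, None, :] - qB[None, :, :]) ** 2).sum(-1)
    base = d.argmin(axis=1)
    # Each quotient match expands to two equally good correspondences.
    return base, symB[base]

# 4 points, 2-D descriptors; the symmetry swaps 0<->1 and 2<->3.
sym = np.array([1, 0, 3, 2])
descA = np.array([[0.0, 1], [0, -1], [2, 1], [2, -1]])
descB = descA[[1, 0, 3, 2]]       # same shape with labels swapped
m1, m2 = match_in_quotient(descA, sym, descB, sym)
print(m1, m2)                     # two symmetric solutions per point
```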

3.
We propose a method for mapping polynomial volumes. Given a closed surface and an initial template volume grid, our method deforms the template grid by fitting its boundary to the input surface while minimizing a volume distortion criterion. The result is a point‐to‐point map distorting linear cells into curved ones. Our method is based on several extensions of Voronoi Squared Distance Minimization (VSDM) combined with a higher‐order finite element formulation of the deformation energy. This allows us to globally optimize the mapping without prior parameterization. The anisotropic VSDM formulation allows sharp and semi‐sharp features to be implicitly preserved without tagging. We use a hierarchical finite element function basis that selectively adapts to the geometric details. This makes the method more efficient and the representation more compact. We apply our method to geometric modeling applications in computer‐aided design and computer graphics, including mixed‐element meshing, mesh optimization, subdivision volume fitting, and shell meshing.
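
A loose 2-D analogue of this kind of fitting energy, assuming a toy least-squares setup rather than the paper's VSDM and higher-order finite elements: boundary nodes are pulled toward the target curve while an edge-length term penalizes distortion. All names and constants below are made up for illustration.

```python
import numpy as np

def fit(nodes, edges, boundary, target, iters=200, lam=0.5, step=0.1):
    """Gradient descent on: sum of squared boundary-to-target distances
    plus lam * sum of squared edge-length deviations (distortion proxy)."""
    rest = {e: np.linalg.norm(nodes[e[0]] - nodes[e[1]]) for e in edges}
    for _ in range(iters):
        grad = np.zeros_like(nodes)
        for i in boundary:  # pull boundary nodes toward the target curve
            closest = target[np.argmin(((target - nodes[i]) ** 2).sum(1))]
            grad[i] += 2 * (nodes[i] - closest)
        for a, b in edges:  # resist stretching/compression of the template
            d = nodes[a] - nodes[b]
            length = np.linalg.norm(d) + 1e-12
            g = 2 * (length - rest[(a, b)]) * d / length
            grad[a] += lam * g
            grad[b] -= lam * g
        nodes = nodes - step * grad
    return nodes

theta = np.linspace(0, 2 * np.pi, 64)
target = 0.5 + 0.8 * np.c_[np.cos(theta), np.sin(theta)]  # target circle
nodes = np.array([[0.0, 0.0], [1, 0], [0, 1], [1, 1]])    # template square
print(fit(nodes, [(0, 1), (1, 3), (3, 2), (2, 0)], [0, 1, 2, 3], target))
```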

4.
Research on microscopy data from developing biological samples usually requires tracking individual cells over time. When cells are three‐dimensionally and densely packed in a time‐dependent scan of volumes, tracking results can become unreliable and uncertain. Not only are cell segmentation results often inaccurate to start with, but there is also no simple method to evaluate the tracking outcome. Previous cell tracking methods have been validated against benchmark data from real scans or artificial data, whose ground truth results are established by manual work or simulation. However, the wide variety of real‐world data makes an exhaustive validation impossible. Established cell tracking tools often fail on new data, whose issues are also difficult to diagnose with manual examination alone. Therefore, data‐independent tracking evaluation methods are needed to cope with the explosion of microscopy data of ever increasing scale and resolution. In this paper, we propose the uncertainty footprint, an uncertainty quantification and visualization technique that examines nonuniformity at local convergence for an iterative evaluation process on a spatial domain supported by partially overlapping bases. We demonstrate that the patterns revealed by the uncertainty footprint indicate data processing quality in two algorithms from a typical cell tracking workflow – cell identification and association. A detailed analysis of the patterns further allows us to diagnose issues and design methods for improvements. A 4D cell tracking workflow equipped with the uncertainty footprint is capable of self‐diagnosis and correction, yielding higher accuracy than previous methods whose evaluation is limited by manual examination.
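
Reading the definition loosely, one toy interpretation in Python (an assumption on our part, not the authors' algorithm) is to record how many iterations each grid cell needs to converge and report the spread of that number inside overlapping local windows; uneven local convergence then flags uncertain regions.

```python
import numpy as np

def iterations_to_converge(field, tol=1e-3, max_iter=500):
    """Run a toy per-cell fixed-point iteration, record convergence speed."""
    iters = np.zeros(field.shape, dtype=int)
    x = np.zeros_like(field)
    for k in range(1, max_iter + 1):
        x_new = 0.5 * (x + field)            # toy fixed-point update
        active = np.abs(x_new - x) > tol
        iters[active] = k                    # last iteration still active
        x = x_new
    return iters

def uncertainty_footprint(iters, radius=1):
    """Nonuniformity of convergence inside overlapping local windows."""
    h, w = iters.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = iters[max(0, i - radius):i + radius + 1,
                        max(0, j - radius):j + radius + 1]
            out[i, j] = win.max() - win.min()
    return out

# Columns alternate between easy (scale 1) and hard (scale 100) cells.
field = np.random.default_rng(0).random((16, 16)) * np.array([1, 100] * 8)
print(uncertainty_footprint(iterations_to_converge(field)).max())
```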

5.
In this paper we present an extended critical point concept which allows us to apply vector field topology in the case of unsteady flow. We propose a measure for unsteadiness which describes the rate of change of the velocities in a fluid element over time. This measure allows us to select particles for which topological properties remain intact inside a finite spatio‐temporal neighborhood. One benefit of this approach is that the classification of critical points based on the eigenvalues of the Jacobian remains meaningful. In the steady case the proposed criterion reduces to the classical definition of critical points. As a first step we show that an optimal Galilean frame of reference can be found implicitly by analyzing the acceleration field. In a second step we show that this approach can be extended by switching to the Lagrangian frame of reference. This way the criterion can detect critical points moving along intricate trajectories. We analyze the behavior of the proposed criterion based on two analytical vector fields for which a correct solution is defined by their inherent symmetries and present results for numerical vector fields.
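
The first step has a direct discrete analogue. The sketch below (assumed finite-difference discretization, not the authors' code) evaluates the acceleration field a = ∂v/∂t + (v·∇)v on a space-time grid; since the acceleration is invariant under Galilean transformations, its zero tracks the critical point even when the pattern moves with constant velocity.

```python
import numpy as np

def acceleration(u, v, dx, dy, dt):
    """u, v: velocity components sampled as arrays of shape (t, y, x)."""
    du_dt = np.gradient(u, dt, axis=0)
    dv_dt = np.gradient(v, dt, axis=0)
    du_dy, du_dx = np.gradient(u, dy, axis=1), np.gradient(u, dx, axis=2)
    dv_dy, dv_dx = np.gradient(v, dy, axis=1), np.gradient(v, dx, axis=2)
    ax = du_dt + u * du_dx + v * du_dy   # material derivative of u
    ay = dv_dt + u * dv_dx + v * dv_dy   # material derivative of v
    return ax, ay

# Vortex whose center translates with constant speed c (a Galilean motion):
# the zero of the acceleration field follows the moving critical point.
t, y, x = np.meshgrid(np.linspace(0, 1, 8), np.linspace(-1, 1, 32),
                      np.linspace(-1, 1, 32), indexing="ij")
c = 0.5
u, v = c - y, x - c * t
ax, ay = acceleration(u, v, 2 / 31, 2 / 31, 1 / 7)
mag = np.hypot(ax, ay)
for k in (0, 7):
    iy, ix = np.unravel_index(mag[k].argmin(), mag[k].shape)
    print(f"t={k / 7:.2f}: zero of a near ({x[k, iy, ix]:+.2f}, {y[k, iy, ix]:+.2f})")
```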

6.
Eleven tone‐mapping operators intended for video processing are analyzed and evaluated with camera‐captured and computer‐generated high‐dynamic‐range content. After optimizing the parameters of the operators in a formal experiment, we inspect and rate the artifacts (flickering, ghosting, temporal color consistency) and color rendition problems (brightness, contrast and color saturation) they produce. This allows us to identify major problems and challenges that video tone‐mapping needs to address. Then, we compare the tone‐mapping results in a pairwise comparison experiment to identify the operators that, on average, can be expected to perform better than the others and to assess the magnitude of differences between the best performing operators.

7.
In this paper we show how to use two‐colored pixels as a generic tool for image processing. We apply two‐colored pixels as a basic operator as well as a supporting data structure for several image processing applications. Traditionally, images are represented by a regular grid of square pixels with one constant color each. In the two‐colored pixel representation, we reduce the image resolution and replace blocks of N × N pixels by one square that is split by a (feature) line into two regions with constant colors. We show how the conversion of standard mono‐colored pixel images into two‐colored pixel images can be computed efficiently by applying a hierarchical algorithm along with a CUDA‐based implementation. Two‐colored pixels overcome some of the limitations that classical pixel representations have, and their feature lines provide minimal geometric information about the underlying image region that can be effectively exploited for a number of applications. We show how to use two‐colored pixels as an interactive brush tool, achieving real‐time performance for image abstraction and non‐photorealistic filtering. Additionally, we propose a real‐time solution for image retargeting, defined as a linear minimization problem on a regular or even adaptive two‐colored pixel image. The concept of two‐colored pixels can be easily extended to a video volume, and we demonstrate this for the example of video retargeting.
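
A minimal sketch of the block-to-two-colored-pixel conversion, using a simple gradient-based heuristic rather than the paper's hierarchical CUDA fitting: the split line's normal follows the mean intensity gradient of the block, and each side stores its mean color.

```python
import numpy as np

def two_colored_pixel(block):
    """Fit one two-colored pixel to an N x N grayscale block: a line
    through the block center splits it into two constant-color regions."""
    gy, gx = np.gradient(block.astype(float))
    n = np.array([gx.sum(), gy.sum()])       # line normal from gradients
    if np.linalg.norm(n) < 1e-9:
        return block.mean(), block.mean(), n # flat block: one color
    n /= np.linalg.norm(n)
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    side = (xs - (w - 1) / 2) * n[0] + (ys - (h - 1) / 2) * n[1] > 0
    return block[~side].mean(), block[side].mean(), n

# A block containing a soft vertical edge: dark left, bright right.
block = np.tile(np.linspace(0.1, 0.9, 8), (8, 1))
lo, hi, normal = two_colored_pixel(block)
print(round(lo, 2), round(hi, 2), normal.round(2))  # ~0.27, ~0.73, (1, 0)
```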

8.
9.
Dart‐throwing can generate ideal Poisson‐disk distributions with excellent blue noise properties, but is very computationally expensive if a maximal point set is desired. In this paper, we observe that the Poisson‐disk sampling problem can be posed in terms of importance sampling by representing the available space to be sampled as a probability density function (pdf). This allows us to develop an efficient algorithm for the generation of maximal Poisson‐disk distributions with quality similar to naïve dart‐throwing but without rejection of samples. In our algorithm, we first position samples in one dimension based on the pdf's marginal cumulative distribution function (cdf). We then throw samples in the other dimension only in the regions which are available for sampling. After each 2D sample is placed, we update the cdf and data structures to keep track of the available regions. In addition to uniform sampling, our method is able to perform variable‐density sampling with small modifications. Finally, we also propose a new min‐conflict metric for variable‐density sampling which results in better adaptation of samples to the underlying importance field.
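
The central trick, sampling the available space by cdf inversion so nothing is rejected, can be shown in one dimension. This simplified sketch (not the full 2-D maximal algorithm; disk coverage is only tracked at bin resolution) treats the remaining free space as a piecewise-constant pdf and zeroes the bins covered by each accepted sample.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_from_pdf(pdf, bin_edges):
    """Draw one sample from a piecewise-constant pdf by cdf inversion."""
    cdf = np.cumsum(pdf)
    if cdf[-1] == 0:
        return None                      # domain exhausted: maximal set
    cdf = cdf / cdf[-1]
    i = np.searchsorted(cdf, rng.random(), side="right")
    return rng.uniform(bin_edges[i], bin_edges[i + 1])

bins = np.linspace(0.0, 1.0, 101)        # 100 bins over [0, 1]
pdf = np.ones(100)                       # available space, initially all
radius, samples = 0.08, []
while True:
    x = sample_from_pdf(pdf, bins)
    if x is None:
        break
    samples.append(x)                    # never rejected
    covered = (bins[:-1] > x - radius) & (bins[1:] < x + radius)
    pdf[covered] = 0.0                   # remove covered bins from the pdf
print(len(samples), sorted(round(s, 2) for s in samples))
```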

10.
We present photon beam diffusion, an efficient numerical method for accurately rendering translucent materials. Our approach interprets incident light as a continuous beam of photons inside the material. Numerically integrating diffusion from such extended sources has long been assumed computationally prohibitive, leading to the ubiquitous single‐depth dipole approximation and the recent analytic sum‐of‐Gaussians approach employed by Quantized Diffusion. In this paper, we show that numerical integration of the extended beam is not only feasible, but provides increased speed, flexibility, numerical stability, and ease of implementation, while retaining the benefits of previous approaches. We leverage the improved diffusion model, but propose an efficient and numerically stable Monte Carlo integration scheme that gives equivalent results using only 3–5 samples instead of 20–60 Gaussians as in previous work. Our method can account for finite and multi‐layer materials, and additionally supports directional incident effects at surfaces. We also propose a novel diffuse exact single‐scattering term which can be integrated in tandem with the multi‐scattering approximation. Our numerical approach furthermore allows us to easily correct inaccuracies of the diffusion model and even combine it with more general Monte Carlo rendering algorithms. We provide practical details necessary for efficient implementation, and demonstrate the versatility of our technique by incorporating it on top of several rendering algorithms in both research and production rendering systems.
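
A stripped-down numerical sketch of the idea, with made-up optical constants and plain exponential depth sampling standing in for the paper's integration scheme: classical diffusion dipoles are slid along the beam and their reflectance is averaged over a handful of importance-sampled depths, instead of placing a single dipole at one fixed depth.

```python
import numpy as np

sigma_a, sigma_s = 0.01, 1.0              # made-up absorption / scattering
sigma_t = sigma_a + sigma_s
D = 1.0 / (3.0 * sigma_t)                 # classical diffusion coefficient
sigma_tr = np.sqrt(sigma_a / D)           # effective transport coefficient
zb = 2.0 * D                              # simplified extrapolated boundary

def dipole_Rd(r, z):
    """Classical dipole reflectance for a real source at depth z."""
    zr, zv = z, -(z + 2.0 * zb)           # real and mirrored virtual source
    dr, dv = np.hypot(r, zr), np.hypot(r, zv)
    term = lambda zz, dd: zz * (sigma_tr + 1.0 / dd) * np.exp(-sigma_tr * dd) / dd**2
    return (term(zr, dr) - term(zv, dv)) / (4.0 * np.pi)

def beam_reflectance(r, n=5, rng=np.random.default_rng(0)):
    """Average dipoles along the beam; depths drawn ~ sigma_t*exp(-sigma_t*z)."""
    u = (np.arange(n) + rng.random(n)) / n       # stratified in [0, 1)
    z = -np.log(1.0 - u) / sigma_t               # exponential depth samples
    pdf = sigma_t * np.exp(-sigma_t * z)
    source = sigma_s * np.exp(-sigma_t * z)      # first-scatter source term
    return np.mean(source * dipole_Rd(r, z) / pdf)

# Extended-beam integral vs. a single dipole at one fixed depth:
print(beam_reflectance(0.5), dipole_Rd(0.5, 1.0 / sigma_t))
```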

11.
We present a novel physically‐based method to visualize stress tensor fields. By incorporating photoelasticity into traditional raycasting and extending it with reflection and refraction, taking into account polarization, we obtain the virtual counterpart to traditional experimental polariscopes. This allows us to provide photoelastic analysis of stress tensor fields in arbitrary domains. In our model, the optical material properties, such as stress‐optic coefficient and refractive index, can either be chosen in compliance with the subject under investigation, or, in case of stress problems that do not model optical properties or that are not transparent, be chosen according to known or even new transparent materials. This enables direct application of established polariscope methodology together with respective interpretation. Using a GPU‐based implementation, we compare our technique to experimental data, and demonstrate its utility with several simulated datasets.
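
A minimal sketch of the photoelastic image formation such a virtual polariscope builds on, for a single straight ray through a plane polariscope with no reflection or refraction (the stress-optic coefficient and stresses below are made-up values): accumulate the relative retardation along the ray and evaluate the classic fringe term I = I0·sin²(2θ)·sin²(δ/2).

```python
import numpy as np

C = 4.0e-12            # stress-optic coefficient [1/Pa], assumed material
wavelength = 550e-9    # [m]

def polariscope_intensity(sigma1, sigma2, theta, ds, I0=1.0):
    """sigma1/sigma2: principal stresses per ray segment [Pa]; theta:
    isoclinic angle (assumed constant along the ray); ds: step [m]."""
    # Relative retardation accumulated over all segments of the ray.
    delta = 2.0 * np.pi * C / wavelength * np.sum((sigma1 - sigma2) * ds)
    # Dark-field plane polariscope fringe equation.
    return I0 * np.sin(2.0 * theta) ** 2 * np.sin(delta / 2.0) ** 2

# Ray through 100 segments with a linearly increasing stress difference.
sigma1 = np.linspace(0, 50e6, 100)   # [Pa]
sigma2 = np.zeros(100)
print(polariscope_intensity(sigma1, sigma2, theta=np.pi / 8, ds=1e-4))
```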

12.
In this paper we present a hybrid approach to reconstruct hair dynamics from multi‐view video sequences, captured under uncontrolled lighting conditions. The key to this method is a refinement approach that combines image‐based reconstruction techniques with physically based hair simulation. Given an initially reconstructed sequence of hair fiber models, we develop a hair dynamics refinement system using particle‐based simulation and incompressible fluid simulation. The system allows us to improve reconstructed hair fiber motions and complete missing fibers caused by occlusion or tracking failure. The refined space‐time hair dynamics are consistent with the video inputs and can also be used to generate novel hair animations of different hair styles. We validate this method through various real hair examples.

13.
Semi‐regular triangle remeshing algorithms convert irregular surface meshes into semi‐regular ones. Especially in the field of computer graphics, semi‐regularity is an interesting property because it makes meshes highly suitable for multi‐resolution analysis. In this paper, we survey the numerous remeshing algorithms that have been developed over the past two decades. We propose different classifications to give new and comprehensible insights into both existing methods and issues. We describe how considerable obstacles have already been overcome, and discuss promising perspectives.

14.
Skeletons are powerful geometric abstractions that provide useful representations for a number of geometric operations. The straight skeleton has a lower combinatorial complexity than the medial axis. Moreover, while the medial axis of a polyhedron is composed of quadric surfaces, the straight skeleton consists only of planar faces. Although several methods exist to compute the straight skeleton of a polygon, the straight skeleton of polyhedra has received much less attention. We need to compute the skeleton of very large data sets storing orthogonal polyhedra, and we must handle the geometric degeneracies that commonly arise when dealing with orthogonal polyhedra. We present a new approach to robustly compute the straight skeleton of orthogonal polyhedra. We follow a geometric technique that works directly with the boundary of an orthogonal polyhedron. Our approach is output sensitive with respect to the number of vertices of the skeleton and resolves geometric degeneracies. Unlike existing straight skeleton algorithms that shrink the object boundary to obtain the skeleton, our algorithm relies on the plane sweep paradigm. The resulting skeleton is composed only of axis‐aligned and 45° rotated planar faces and edges.

15.
In this paper, we present new solutions for the interactive modeling of city layouts that combine the power of procedural modeling with the flexibility of manual modeling. Procedural modeling enables us to quickly generate large city layouts, while manual modeling allows us to hand‐craft every aspect of a city. We introduce transformation and merging operators for both topology preserving and topology changing transformations based on graph cuts. In combination with a layering system, this allows intuitive manipulation of urban layouts using operations such as drag and drop, translation, rotation, etc. In contrast to previous work, these operations always generate valid, i.e., intersection‐free, layouts. Furthermore, we introduce anchored assignments to make sure that modifications are persistent even if the whole urban layout is regenerated.

16.
Significant progress has been made in high-quality hair rendering, but it remains difficult to choose parameter values that reproduce a given real hair appearance. In particular, for applications such as games where naive users want to create their own avatars, tuning complex parameters is not practical. Our approach analyses a single flash photograph and estimates model parameters that reproduce the visual likeness of the observed hair. The estimated parameters include color absorptions, three reflectance lobe parameters of a multiple-scattering rendering model, and a geometric noise parameter. We use a novel melanin-based model to capture the natural subspace of hair absorption parameters. At its core, the method assumes that images of hair with similar color distributions are also similar in appearance. This allows us to recast the issue as an image retrieval problem where the photo is matched with a dataset of rendered images; we thus also match the model parameters used to generate these images. An earth-mover's distance is used between luminance-weighted color distributions to gauge similarity. We conduct a perceptual experiment to evaluate this metric in the context of hair appearance and demonstrate the method on 64 photographs, showing that it can achieve a visual likeness for a large variety of input photos.
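
A simplified stand-in for the retrieval core (the paper's exact feature and distance differ): build luminance-weighted histograms of a per-pixel color coordinate for the photo and each pre-rendered image, rank the renders by a 1-D earth-mover's distance, which equals the L1 distance between cumulative histograms, and reuse the parameters of the best match.

```python
import numpy as np

def luminance_weighted_hist(rgb, coord, bins=32):
    """Histogram of a per-pixel color coordinate, weighted by luminance."""
    lum = rgb @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luminance
    hist, _ = np.histogram(coord, bins=bins, range=(0, 1), weights=lum)
    return hist / max(hist.sum(), 1e-12)

def emd_1d(p, q):
    """1-D earth-mover's distance = L1 distance between the two cdfs."""
    return np.abs(np.cumsum(p) - np.cumsum(q)).sum()

def retrieve(photo_rgb, photo_coord, dataset):
    """dataset: list of (rendered_rgb, rendered_coord, hair_params)."""
    h = luminance_weighted_hist(photo_rgb, photo_coord)
    dists = [emd_1d(h, luminance_weighted_hist(rgb, coord))
             for rgb, coord, _ in dataset]
    return dataset[int(np.argmin(dists))][2]   # parameters of best match

rng = np.random.default_rng(2)
dataset = [(rng.random((500, 3)), rng.random(500), {"melanin": m})
           for m in (0.1, 0.5, 0.9)]
print(retrieve(rng.random((500, 3)), rng.random(500), dataset))
```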

17.
The ability to interactively render dynamic scenes with global illumination is one of the main challenges in computer graphics. Improvements in the performance of interactive ray tracing, brought about by significant advances in hardware and careful exploitation of coherence, have turned the potential of interactive global illumination into a reality. However, the simulation of complex light transport phenomena, such as diffuse interreflections, is still quite costly to compute in real time. In this paper we present a caching scheme, termed Instant Caching, based on a combination of irradiance caching and instant radiosity. By reusing calculations from neighbouring computations, it achieves a speedup over previous instant radiosity‐based approaches. Additionally, temporal coherence is exploited by identifying which computations have been invalidated due to geometric transformations and updating only those paths. The exploitation of spatial and temporal coherence allows us to achieve superior frame rates for interactive global illumination within dynamic scenes, without any precomputation or quality loss compared to previous methods; handling of lighting and material changes is also demonstrated.
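
A toy combination of the two ingredients (illustrative only; Instant Caching itself adds the reuse of neighbouring computations and temporal invalidation): irradiance at a query point is either interpolated from nearby cache records, using Ward-style weights, or computed from a set of virtual point lights as in instant radiosity and then cached.

```python
import numpy as np

rng = np.random.default_rng(3)
vpls = [(rng.random(3) * 10, 5.0) for _ in range(64)]  # (position, power)
cache = []                                             # (point, E, radius)

def irradiance_from_vpls(p):
    """Instant radiosity: sum contributions of all virtual point lights."""
    return sum(power / max(np.sum((p - q) ** 2), 1e-6) for q, power in vpls)

def irradiance(p, R=1.0, alpha=0.2):
    """Irradiance caching: interpolate nearby records, else compute + cache."""
    weights, values = [], []
    for q, E, r in cache:
        d = np.linalg.norm(p - q)
        if d < alpha * r:                 # record usable at p
            weights.append(r / max(d, 1e-6))
            values.append(E)
    if weights:                           # cache hit: weighted interpolation
        return np.average(values, weights=weights)
    E = irradiance_from_vpls(p)           # cache miss: full VPL gather
    cache.append((p, E, R))
    return E

pts = rng.random((200, 3)) * 10
vals = [irradiance(p) for p in pts]
print(f"{len(cache)} cache records served {len(pts)} queries")
```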

18.
The wide availability of 3D acquisition devices makes them viable for shape monitoring. Current techniques for the analysis of time‐varying data can efficiently detect actual significant geometric changes and rule out differences due to irrelevant variations (such as sampling, lighting and coverage). On the other hand, effectively visualizing such detected changes can be challenging when we also want to show the original appearance of the 3D model. In this paper, we propose a dynamic technique for the effective visualization of detected differences between two 3D scenes. The presented approach, while retaining the original appearance, allows the user to switch between the two models in a way that enhances the geometric differences detected as significant. Additionally, the same technique visually hides the remaining negligible, yet visible, variations. The main idea is to use two distinct screen‐space, time‐based interpolation functions: one for the significant 3D differences and one for the small variations to hide. We have validated the proposed approach in a user study on different classes of datasets, proving the objective and subjective effectiveness of the method.
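
A sketch of the two time-based blend functions (the shapes are our assumption, not the published ones): pixels flagged as significant flip sharply between the two models so the change pops out, while merely visible variations cross-fade smoothly so they stay unobtrusive.

```python
import numpy as np

def blend_weight(t, significant):
    """t in [0, 1) is the animation phase; returns the weight of model B."""
    if significant:
        return float((t % 1.0) > 0.5)            # hard toggle: enhance
    return 0.5 - 0.5 * np.cos(2 * np.pi * t)     # smooth fade: hide

def composite(pix_a, pix_b, sig_mask, t):
    """Blend two aligned renderings with per-pixel weights from the mask."""
    w = np.where(sig_mask, blend_weight(t, True), blend_weight(t, False))
    w = w[..., None]                             # broadcast over RGB
    return (1 - w) * pix_a + w * pix_b

a = np.zeros((2, 2, 3))                          # rendering of model A
b = np.ones((2, 2, 3))                           # rendering of model B
mask = np.array([[True, False], [False, True]])  # significant differences
print(composite(a, b, mask, t=0.25)[..., 0])     # 0 = toggled, 0.5 = fading
```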

19.
We present an approach for extracting extremal feature lines of scalar indicators on surface meshes, based on discrete Morse theory. By computing initial Morse‐Smale complexes of the scalar indicators of the mesh, we obtain a candidate set of extremal feature lines of the surface. A hierarchy of Morse‐Smale complexes is computed by prioritizing feature lines according to a novel criterion and applying a cancellation procedure that allows us to select the most significant lines. Given the scalar indicators on the vertices of the mesh, the presented feature line extraction scheme is interpolation free and needs no derivative estimates. The technique is insensitive to noise and depends on only one parameter: the feature significance. We use the technique to extract surface features, yielding impressive non‐photorealistic images.
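
The seeding step has a standard discrete form. The sketch below classifies a vertex as minimum, maximum, regular, or (multiple) saddle by counting sign changes of f(neighbor) − f(vertex) around its ordered one-ring — the usual piecewise-linear criticality test that Morse-Smale construction starts from (simplified; ties between function values are assumed away).

```python
def classify_vertex(f_v, f_ring):
    """f_v: scalar value at the vertex; f_ring: values at the one-ring
    neighbors in cyclic order (no value equal to f_v)."""
    signs = [1 if f > f_v else -1 for f in f_ring]
    # Count sign changes cyclically around the ring.
    changes = sum(signs[i] != signs[i - 1] for i in range(len(signs)))
    if changes == 0:
        return "minimum" if signs[0] > 0 else "maximum"
    if changes == 2:
        return "regular"
    return f"saddle (multiplicity {changes // 2 - 1})"

# Monkey saddle: six ring neighbors alternate above/below the center value.
print(classify_vertex(0.0, [1, -1, 1, -1, 1, -1]))  # saddle (multiplicity 2)
print(classify_vertex(0.0, [1, 1, 1, 1, 1, 1]))     # minimum
print(classify_vertex(0.0, [-1, 1, 1, 1, 1, 1]))    # regular
```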

20.
Molecular dynamics simulations are a principal tool for studying molecular systems. Such simulations are used to investigate molecular structure, dynamics, and thermodynamical properties, as well as a replacement for, or complement to, costly and dangerous experiments. With the increasing availability of computational power, the resulting data sets are becoming ever larger, and benchmarks indicate that interactive visualization on desktop computers becomes challenging when substantially more than a few million glyphs have to be rendered. Trading visual quality for rendering performance is a common approach when interactivity has to be guaranteed. In this paper we address both problems and present a method for high‐quality visualization of massive molecular dynamics data sets. We employ several optimization strategies on different levels of granularity, such as data quantization, data caching in video memory, and a two‐level occlusion culling strategy: coarse culling via hardware occlusion queries and vertex‐level culling using maximum depth mipmaps. To ensure optimal image quality we employ GPU raycasting and deferred shading with smooth normal vector generation. We demonstrate that our method allows us to interactively render data sets containing tens of millions of high‐quality glyphs.
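
One of the listed optimizations, data quantization, is easy to make concrete. The sketch below (assumed 16-bit layout; the culling and raycasting stages are GPU-side and omitted) stores atom positions as uint16 offsets within the data set's bounding box, halving position memory versus float32 while keeping the decode trivial.

```python
import numpy as np

def quantize(positions):
    """Map float positions to uint16 offsets inside the bounding box."""
    lo = positions.min(axis=0)
    span = positions.max(axis=0) - lo
    q = np.round((positions - lo) / span * 65535).astype(np.uint16)
    return q, lo, span

def dequantize(q, lo, span):
    """Inverse mapping, e.g. performed per-vertex on the GPU."""
    return q.astype(np.float32) / 65535 * span + lo

pos = np.random.default_rng(4).random((1_000_000, 3)).astype(np.float32) * 120
q, lo, span = quantize(pos)
err = np.abs(dequantize(q, lo, span) - pos).max()
print(q.nbytes / pos.nbytes, err)   # 0.5 of the memory, ~1e-3 max error
```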
