Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
The topological structure of scalar, vector, and second‐order tensor fields provides an important mathematical basis for data analysis and visualization. In this paper, we extend this framework towards higher‐order tensors. First, we establish formal uniqueness properties for a geometrically constrained tensor decomposition. This allows us to define and visualize topological structures in symmetric tensor fields of orders three and four. We clarify that in 2D, degeneracies occur at isolated points, regardless of tensor order. However, for orders higher than two, they are no longer equivalent to isotropic tensors, and their fractional Poincaré index prevents us from deriving continuous vector fields from the tensor decomposition. Instead, sorting the terms by magnitude leads to a new type of feature, lines along which the resulting vector fields are discontinuous. We propose algorithms to extract these features and present results on higher‐order derivatives and higher‐order structure tensors.
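The Poincaré index mentioned above can be made concrete with a short numerical sketch (our own illustration, not the paper's algorithm): the index of an isolated critical point is the winding number of the field along a small loop around it. For the direction fields obtained from higher-order tensor decompositions the same construction can yield fractional values, which is exactly what obstructs deriving a continuous global vector field. All names below are hypothetical.

```python
import numpy as np

def poincare_index(field, center=(0.0, 0.0), radius=0.5, n=360):
    """Estimate the Poincare index of an isolated critical point by
    accumulating the winding of the vector field along a small circle."""
    cx, cy = center
    thetas = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    pts = np.stack([cx + radius * np.cos(thetas),
                    cy + radius * np.sin(thetas)], axis=1)
    vecs = np.array([field(x, y) for x, y in pts])
    angles = np.arctan2(vecs[:, 1], vecs[:, 0])
    # wrap each angle step into (-pi, pi] before summing the total winding
    d = np.diff(np.concatenate([angles, angles[:1]]))
    d = (d + np.pi) % (2.0 * np.pi) - np.pi
    return d.sum() / (2.0 * np.pi)

source = lambda x, y: (x, y)    # index +1
saddle = lambda x, y: (x, -y)   # index -1
```

For smooth vector fields the result is an integer; running the same loop over a line field (defined only up to sign) would require tracking directions modulo pi and produces half-integer indices.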

2.
In this paper, we introduce a novel coordinate‐free method for manipulating and analyzing vector fields on discrete surfaces. Unlike the commonly used representations of a vector field as an assignment of vectors to the faces of the mesh, or as real values on edges, we argue that vector fields can also be naturally viewed as operators whose domain and range are functions defined on the mesh. Although this point of view is common in differential geometry it has so far not been adopted in geometry processing applications. We recall the theoretical properties of vector fields represented as operators, and show that composition of vector fields with other functional operators is natural in this setup. This leads to the characterization of vector field properties through commutativity with other operators such as the Laplace‐Beltrami and symmetry operators, as well as to a straightforward definition of differential properties such as the Lie derivative. Finally, we demonstrate a range of applications, such as Killing vector field design, symmetric vector field estimation and joint design on multiple surfaces.
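The operator view can be sketched on a flat periodic grid instead of a mesh (a simplification of the paper's discrete-surface setting; the names are ours): a vector field becomes the operator f ↦ v·∇f, and the commutator of two such operators approximates the operator of the Lie bracket [u,v] = (u·∇)v − (v·∇)u.

```python
import numpy as np

# periodic grid on [0, 2*pi)^2, x along axis 0
N, L = 64, 2.0 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
h = L / N

def ddx(f): return (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * h)
def ddy(f): return (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * h)

def as_operator(vx, vy):
    """View the vector field (vx, vy) as the operator f -> vx df/dx + vy df/dy."""
    return lambda f: vx * ddx(f) + vy * ddy(f)

U = as_operator(np.sin(Y), np.zeros_like(X))   # u = (sin y, 0)
V = as_operator(np.zeros_like(X), np.sin(X))   # v = (0, sin x)

f = np.cos(X) * np.sin(Y)
commutator = U(V(f)) - V(U(f))

# analytic Lie bracket [u, v] = (-sin x cos y, cos x sin y), as an operator
lie = as_operator(-np.sin(X) * np.cos(Y), np.cos(X) * np.sin(Y))
expected = lie(f)
```

Up to the second-order finite-difference error, `commutator` agrees with `expected`, which is the discrete analogue of the commutativity characterization used in the paper.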

3.
In this paper we present several techniques to interactively explore representations of 2D vector fields. Through a set of simple hand postures used on large, touch‐sensitive displays, our approach allows individuals to custom‐design glyphs (arrows, lines, etc.) that best reveal patterns of the underlying dataset. Interactive exploration of vector fields is facilitated through freedom of glyph placement, glyph density control, and animation. The custom glyphs can be applied individually to probe specific areas of the data but can also be applied in groups to explore larger regions of a vector field. Re‐positionable sources from which glyphs—animated according to the local vector field—continue to emerge are used to examine the vector field dynamically. The combination of these techniques results in an engaging visualization with which the user can rapidly explore and analyze varying types of 2D vector fields, using a virtually infinite number of custom‐designed glyphs.

4.
Traditionally, vector field visualization is concerned with 2D and 3D flows. Yet, many concepts can be extended to general dynamical systems, including the higher‐dimensional problem of modeling the motion of finite‐sized objects in fluids. In the steady case, the trajectories of these so‐called inertial particles appear as tangent curves of a 4D or 6D vector field. These higher‐dimensional flows are difficult to map to lower‐dimensional spaces, which makes their visualization a challenging problem. We focus on vector field topology, which allows scientists to study asymptotic particle behavior. As recent work on the 2D case has shown, both extraction and classification of isolated critical points depend on the underlying particle model. In this paper, we aim for a model‐independent classification technique, which we apply to two different particle models not only in 2D but also in 3D. We show that the classification can be done by performing an eigenanalysis of the spatial derivatives' velocity subspace of the higher‐dimensional 4D or 6D flow. We construct glyphs that depict not only the types of critical points, but also encode the directional information given by the eigenvectors. We show that the eigenvalues and eigenvectors of the inertial phase space have sufficient symmetries and structure so that they can be depicted in 2D or 3D, instead of 4D or 6D.
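The eigenanalysis-based classification can be sketched for the classical (non-inertial) case; the paper performs the analogous analysis on the velocity subspace of the 4D or 6D inertial phase space. This minimal helper (our own naming) classifies a first-order critical point from the eigenvalues of the Jacobian:

```python
import numpy as np

def classify_critical_point(J, tol=1e-9):
    """Classify an isolated first-order critical point of a flow from the
    eigenvalues of its Jacobian (velocity-gradient) matrix J."""
    ev = np.linalg.eigvals(np.asarray(J, dtype=float))
    re, im = ev.real, ev.imag
    # mixed-sign real parts: attracting and repelling directions coexist
    if np.any(re > tol) and np.any(re < -tol):
        return "saddle"
    rotating = np.any(np.abs(im) > tol)
    if np.all(re < -tol):
        return "spiral sink" if rotating else "sink"
    if np.all(re > tol):
        return "spiral source" if rotating else "source"
    return "center/degenerate"
```

The directional information encoded in the paper's glyphs corresponds to the eigenvectors, obtainable from `np.linalg.eig` in the same way.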

5.
In the visualization of flow simulation data, feature detectors often produce an overly rich response, making some sort of filtering or simplification necessary to convey meaningful images. In this paper we present an approach that builds upon a decomposition of the flow field according to dynamical importance of different scales of motion energy. Focusing on the high‐energy scales leads to a reduction of the flow field while retaining the underlying physical process. The presented method acknowledges the intrinsic structures of the flow according to its energy and therefore allows us to focus on the energetically most interesting aspects of the flow. Our analysis shows that this approach can be used for methods based on both local feature extraction and particle integration and we provide a discussion of the error caused by the approximation. Finally, we illustrate the use of the proposed approach for both a local and a global feature detector and in the context of numerical flow simulations.
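A crude stand-in for the energy-based reduction (not the paper's decomposition, which ranks scales by dynamical importance) is to keep only the most energetic Fourier modes of a periodic vector field; the function name and the plain-energy ranking are our assumptions:

```python
import numpy as np

def keep_high_energy_modes(vx, vy, frac=0.1):
    """Reduce a periodic 2-D vector field to its most energetic Fourier
    modes: keep the fraction `frac` of modes with the largest spectral
    kinetic energy |vx_hat|^2 + |vy_hat|^2 and zero out the rest."""
    fx, fy = np.fft.fft2(vx), np.fft.fft2(vy)
    energy = np.abs(fx) ** 2 + np.abs(fy) ** 2
    k = max(1, int(frac * energy.size))
    thresh = np.sort(energy.ravel())[-k]
    mask = energy >= thresh
    return (np.fft.ifft2(fx * mask).real, np.fft.ifft2(fy * mask).real)
```

Feature detectors or particle integration can then be run on the reduced field, with the discarded modes bounding the approximation error.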

6.
Inspired by vector field topology, an established tool for the extraction and identification of important features of flows and vector fields, we develop means for the analysis of the structure of light transport. For that, we derive an analogy to vector field topology that defines coherent structures in light transport. We also introduce Finite‐Time Path Deflection (FTPD), a scalar quantity that represents the deflection characteristic of all light transport paths passing through a given point in space. For virtual scenes, the FTPD can be computed directly using path‐space Monte Carlo integration. We visualize the FTPD field for several example scenes and discuss the revealed structures. Lastly, we show that the coherent regions visualized by the FTPD are closely related to the coherent regions in our new topologically motivated analysis of light transport. FTPD visualizations are thus also visualizations of the structure of light transport.

7.
A tangent vector field on a surface is the generator of a smooth family of maps from the surface to itself, known as the flow. Given a scalar function on the surface, it can be transported, or advected, by composing it with a vector field's flow. Such transport is exhibited by many physical phenomena, e.g., in fluid dynamics. In this paper, we are interested in the inverse problem: given source and target functions, compute a vector field whose flow advects the source to the target. We propose a method for addressing this problem, by minimizing an energy given by the advection constraint together with a regularizing term for the vector field. Our approach is inspired by a similar method in computational anatomy, known as LDDMM, yet leverages the recent framework of functional vector fields for discretizing the advection and the flow as operators on scalar functions. The latter allows us to efficiently generalize LDDMM to curved surfaces, without explicitly computing the flow lines of the vector field we are optimizing for. We show two approaches for the solution: using linear advection with multiple vector fields, and using non‐linear advection with a single vector field. We additionally derive an approximated gradient of the corresponding energy, which is based on a novel vector field transport operator. Finally, we demonstrate applications of our machinery to intrinsic symmetry analysis, function interpolation and map improvement.
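The flavor of the inverse advection problem can be shown with a heavily simplified sketch on a flat grid: linearize the advection constraint v·∇f ≈ (f_src − f_tgt)/dt and solve a pointwise Tikhonov-regularized least-squares problem (in the spirit of optical-flow normal equations; the paper's LDDMM-style energy instead couples all points through the flow and a regularizer on curved surfaces). Function name and parameters are our own.

```python
import numpy as np

def advecting_field(f_src, f_tgt, dt=1.0, spacing=1.0, lam=1e-3):
    """One linearized step of the inverse advection problem: find v
    minimizing (v . grad f_src - (f_src - f_tgt)/dt)^2 + lam * |v|^2
    independently at every grid point (closed-form normal equations)."""
    gx, gy = np.gradient(f_src, spacing)
    r = (f_src - f_tgt) / dt
    denom = gx ** 2 + gy ** 2 + lam
    return r * gx / denom, r * gy / denom
```

Where the gradient of `f_src` vanishes the regularizer drives v to zero, which is why the full method needs a smoothness term to propagate information.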

8.
In this paper, we propose a novel framework to represent visual information. Extending the notion of conventional image-based rendering, our framework makes joint use of both light fields and holograms as complementary representations. We demonstrate how light fields can be transformed into holograms, and vice versa. By exploiting the advantages of either representation, our proposed dual representation and processing pipeline is able to overcome the limitations inherent to light fields and holograms alone. We show various examples from synthetic and real light fields to digital holograms demonstrating advantages of either representation, such as speckle-free images, ghosting-free images, aliasing-free recording, natural light recording, aperture-dependent effects and real-time rendering which can all be achieved using the same framework. Capturing holograms under white light illumination is one promising application for future work.

9.
We present an algorithm that allows stream surfaces to recognize and adapt to vector field topology. Standard stream surface algorithms either refine the surface in an uncontrolled manner near critical points, which slows down the computation considerably and may lead to a poor surface approximation, or omit the affected region from the stream surface by severing it into two parts, thus generating an incomplete stream surface. Our algorithm utilizes topological information to provide a fast, accurate, and complete triangulation of the stream surface near critical points. The required topological information is calculated in a preprocessing step. We compare our algorithm against the standard approach both visually and in performance.
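Stream surfaces are built from families of streamlines, and the difficulty near critical points already shows up in a single trace: the velocity vanishes there, so a naive integrator stalls or over-refines. A minimal RK4 tracer with an explicit stopping criterion (our own sketch, not the paper's triangulation scheme) looks like this:

```python
import numpy as np

def streamline(field, p0, h=0.01, steps=2000, eps=1e-6):
    """Trace a streamline with classic RK4, stopping once the velocity
    magnitude drops below eps, i.e. when approaching a critical point
    where unbounded refinement would otherwise set in."""
    p = np.asarray(p0, dtype=float)
    pts = [p.copy()]
    for _ in range(steps):
        k1 = field(p)
        if np.linalg.norm(k1) < eps:
            break
        k2 = field(p + 0.5 * h * k1)
        k3 = field(p + 0.5 * h * k2)
        k4 = field(p + h * k3)
        p = p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        pts.append(p.copy())
    return np.array(pts)

sink = lambda p: -p  # linear field with a critical point at the origin
```

The paper replaces such per-streamline heuristics with precomputed topological information so the surface can be triangulated completely through the critical region.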

10.
The parallel vectors (PV) operator is a feature extraction approach for defining line‐type features such as creases (ridges and valleys) in scalar fields, as well as separation, attachment, and vortex core lines in vector fields. In this work, we extend PV feature extraction to higher‐order data represented by piecewise analytical functions defined over grid cells. The extraction uses PV in two distinct stages. First, seed points on the feature lines are placed by evaluating the inclusion form of the PV criterion with reduced affine arithmetic. Second, a feature flow field is derived from the higher‐order PV expression where the features can be extracted as streamlines starting at the seeds. Our approach allows for guaranteed bounds regarding accuracy with respect to existence, position, and topology of the features obtained. The method is suitable for parallel implementation and we present results obtained with our GPU‐based prototype. We apply our method to higher‐order data obtained from discontinuous Galerkin fluid simulations.
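The PV criterion itself is simple to state: two vector fields v and w are parallel where their cross product vanishes. In 2D this reduces to the zero set of the scalar s = vx·wy − vy·wx, and candidate cells can be found by a sign test on the cell corners (a much weaker guarantee than the paper's reduced-affine-arithmetic inclusion test, which certifies existence and topology; the helper name is ours):

```python
import numpy as np

def parallel_vector_cells(vx, vy, wx, wy):
    """Flag grid cells that may contain a parallel-vectors solution in 2-D:
    s = vx*wy - vy*wx vanishes where v and w are parallel, so any cell
    whose corner values of s change sign is a candidate."""
    s = vx * wy - vy * wx
    corners = np.stack([s[:-1, :-1], s[1:, :-1], s[:-1, 1:], s[1:, 1:]])
    return (corners.min(axis=0) < 0) & (corners.max(axis=0) > 0)
```

Sign tests can miss solutions (even numbers of crossings per cell) and prove nothing about topology, which is precisely the gap the interval-based seeding in the paper closes.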

11.
Captured reflectance fields tend to provide a relatively coarse sampling of the incident light directions. As a result, sharp illumination features, such as highlights or shadow boundaries, are poorly reconstructed during relighting; highlights are disconnected, and shadows show banding artefacts. In this paper, we propose a novel interpolation technique for 4D reflectance fields that reconstructs plausible images even for non-observed light directions. Given a sparsely sampled reflectance field, we can effectively synthesize images as they would have been obtained from denser sampling. The processing pipeline consists of three steps: (1) segmentation of regions where intermediate lighting cannot be obtained by blending, (2) appropriate flow algorithms for highlights and shadows, and (3) a final reconstruction technique that uses image-based priors to faithfully correct errors that might be introduced by the segmentation or flow step. The algorithm reliably reproduces scenes that contain specular highlights, interreflections, shadows or caustics.

12.
We present photon beam diffusion, an efficient numerical method for accurately rendering translucent materials. Our approach interprets incident light as a continuous beam of photons inside the material. Numerically integrating diffusion from such extended sources has long been assumed computationally prohibitive, leading to the ubiquitous single‐depth dipole approximation and the recent analytic sum‐of‐Gaussians approach employed by Quantized Diffusion. In this paper, we show that numerical integration of the extended beam is not only feasible, but provides increased speed, flexibility, numerical stability, and ease of implementation, while retaining the benefits of previous approaches. We leverage the improved diffusion model, but propose an efficient and numerically stable Monte Carlo integration scheme that gives equivalent results using only 3–5 samples instead of 20–60 Gaussians as in previous work. Our method can account for finite and multi‐layer materials, and additionally supports directional incident effects at surfaces. We also propose a novel diffuse exact single‐scattering term which can be integrated in tandem with the multi‐scattering approximation. Our numerical approach furthermore allows us to easily correct inaccuracies of the diffusion model and even combine it with more general Monte Carlo rendering algorithms. We provide practical details necessary for efficient implementation, and demonstrate the versatility of our technique by incorporating it on top of several rendering algorithms in both research and production rendering systems.
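Why a handful of well-placed samples suffices can be illustrated with a toy version of the beam integral (a drastic simplification of the paper's diffusion model: source power decays exponentially along the beam, and each beam point contributes a generic diffusion falloff exp(−σtr·d)/d to a receiver at perpendicular distance r). Samples are placed at equal quantiles of the exponential decay, a deterministic stand-in for importance sampling; all names and constants are illustrative.

```python
import numpy as np

def beam_diffusion(r, sigma_t=1.0, sigma_tr=0.5, n=5):
    """Toy integration of a diffusion kernel along a semi-infinite beam:
    I = int_0^inf exp(-sigma_t * t) * exp(-sigma_tr * d) / d dt,
    with d = sqrt(t^2 + r^2), estimated by importance sampling t from the
    pdf sigma_t * exp(-sigma_t * t) at deterministic quantile midpoints."""
    u = (np.arange(n) + 0.5) / n
    t = -np.log(1.0 - u) / sigma_t        # inverse-CDF sample positions
    d = np.sqrt(t ** 2 + r ** 2)
    contrib = np.exp(-sigma_tr * d) / d   # f/p: the exponential cancels
    return contrib.mean() / sigma_t
```

Because the sampling density matches the dominant exponential falloff, even n = 5 lands within a few percent of a brute-force quadrature in this toy setting.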

13.
Current unsteady multi‐field simulation datasets consist of millions of data points. To efficiently reduce this enormous amount of information, local statistical complexity was recently introduced as a method that identifies distinctive structures using concepts from information theory. Due to high computational costs, this method was so far limited to 2D data. In this paper we propose a new strategy for the computation that is substantially faster and allows for a more precise analysis. The bottleneck of the original method is the division of spatio‐temporal configurations in the field (light‐cones) into different classes of behavior. The new algorithm uses a density‐driven Voronoi tessellation for this task that more accurately captures the distribution of configurations in the sparsely sampled high‐dimensional space. The efficient computation is achieved using structures and algorithms from graph theory. The ability of the method to detect distinctive regions in 3D is illustrated using flow and weather simulations.

14.
Despite their high popularity, common high dynamic range (HDR) methods are still limited in their practical applicability: They assume that the input images are perfectly aligned, which is often violated in practice. Our paper does not only free the user from this unrealistic limitation, but even turns the missing alignment into an advantage: By exploiting the multiple exposures, we can create a super‐resolution image. The alignment step is performed by a modern energy‐based optic flow approach that takes into account the varying exposure conditions. Moreover, it produces dense displacement fields with subpixel precision. As a consequence, our approach can handle arbitrarily complex motion patterns, caused by severe camera shake and moving objects. Additionally, it benefits from several advantages over existing strategies: (i) It is robust to outliers (noise, occlusions, saturation problems) and allows for sharp discontinuities in the displacement field. (ii) The alignment step neither requires camera calibration nor knowledge of the exposure times. (iii) It can be efficiently implemented on CPU and GPU architectures. After the alignment is performed, we use the obtained subpixel accurate displacement fields as input for an energy‐based, joint super‐resolution and HDR (SR‐HDR) approach. It introduces robust data terms and anisotropic smoothness terms to the SR‐HDR literature. Our experiments with challenging real world data demonstrate that these novelties are pivotal for the favourable performance of our approach.
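Once the exposures are aligned, the classical HDR merge step is straightforward; this sketch (assuming a linear camera response and values in [0, 1], with a standard hat weighting) shows the merge only, not the paper's contributions of subpixel alignment and the joint SR-HDR energy:

```python
import numpy as np

def merge_hdr(images, exposures, eps=1e-4):
    """Merge aligned low-dynamic-range exposures into a radiance map.
    A hat weight downweights under- and over-exposed pixels, so each
    pixel's radiance is taken mostly from well-exposed frames."""
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, t in zip(images, exposures):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # 0 at 0 and 1, maximal at 0.5
        num += w * img / t                 # divide by exposure time -> radiance
        den += w
    return num / np.maximum(den, eps)
```

Pixels saturated in the long exposure get their radiance entirely from the short one, which is how the merged range exceeds any single frame.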

15.
We present a novel physically‐based method to visualize stress tensor fields. By incorporating photoelasticity into traditional raycasting and extending it with reflection and refraction, taking into account polarization, we obtain the virtual counterpart to traditional experimental polariscopes. This allows us to provide photoelastic analysis of stress tensor fields in arbitrary domains. In our model, the optical material properties, such as stress‐optic coefficient and refractive index, can either be chosen in compliance with the subject under investigation, or, in case of stress problems that do not model optical properties or that are not transparent, be chosen according to known or even new transparent materials. This enables direct application of established polariscope methodology together with respective interpretation. Using a GPU‐based implementation, we compare our technique to experimental data, and demonstrate its utility with several simulated datasets.
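The core of photoelastic fringe formation can be sketched for a single thin plate under plane stress (a far simpler setting than the paper's full raycasting with reflection, refraction, and polarization tracking): the isochromatic intensity depends on the principal stress difference through the stress-optic law. The material constants below are illustrative placeholders, not values from the paper.

```python
import numpy as np

def isochromatic_intensity(sxx, syy, sxy, C=2e-12, thickness=0.01, wavelength=550e-9):
    """Intensity through a crossed circular polariscope for a plane stress
    state: retardation = C * thickness * (sigma1 - sigma2) (stress-optic
    law), giving fringes I = sin^2(pi * retardation / wavelength)."""
    # principal stress difference of the 2x2 symmetric stress tensor
    ds = np.sqrt((sxx - syy) ** 2 + 4.0 * sxy ** 2)
    return np.sin(np.pi * C * thickness * ds / wavelength) ** 2
```

Evaluating this over a stress field yields the familiar dark/bright fringe pattern; the paper generalizes the idea to volumetric raycasting with physically chosen or borrowed optical constants.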

16.
In this paper, we consider Centroidal Voronoi Tessellations (CVTs) and study their regularity. CVTs are geometric structures that enable regular tessellations of geometric objects and are widely used in shape modelling and analysis. While several efficient iterative schemes, with defined local convergence properties, have been proposed to compute CVTs, little attention has been paid to the evaluation of the resulting cell decompositions. In this paper, we propose a regularity criterion that allows us to evaluate and compare CVTs on a common basis, independently of their sizes and of their cell numbers. It builds on earlier theoretical work showing that second moments of cells converge to a lower bound when optimizing CVTs. In addition to proposing a regularity criterion, this paper also considers computational strategies to determine regular CVTs. We introduce a hierarchical framework that propagates regularity over decomposition levels and hence provides CVTs with provably better regularities than existing methods. We illustrate these principles with a wide range of experiments on synthetic and real models.
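The standard iterative scheme for computing CVTs is Lloyd relaxation, and the second-moment quantity the regularity criterion builds on is the CVT energy. A discretized sketch (our own, with the domain represented by sample points rather than exact Voronoi cells):

```python
import numpy as np

def lloyd_step(seeds, pts):
    """One Lloyd iteration on a discretized domain: assign each sample
    point to its nearest seed, then move every seed to the centroid of
    its cell."""
    d2 = ((pts[:, None, :] - seeds[None, :, :]) ** 2).sum(-1)
    label = d2.argmin(axis=1)
    new = seeds.copy()
    for i in range(len(seeds)):
        cell = pts[label == i]
        if len(cell):
            new[i] = cell.mean(axis=0)
    return new, label

def cvt_energy(seeds, pts):
    """Mean squared distance of samples to their nearest seed (the CVT
    energy, i.e. the sum of cell second moments up to discretization);
    lower values indicate a more regular tessellation."""
    d2 = ((pts[:, None, :] - seeds[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean()
```

Each Lloyd step is guaranteed not to increase this energy, which is the monotone quantity underlying the paper's size- and count-independent regularity criterion.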

17.
Cartoon animation, image warping, and several other tasks in two‐dimensional computer graphics reduce to the formulation of a reasonable model for planar deformation. A deformation is a map from a given shape to a new one, and its quality is determined by the type of distortion it introduces. In many applications, a desirable map is as isometric as possible. Finding such deformations, however, is a nonlinear problem, and most of the existing solutions approach it by minimizing a nonlinear energy. Such methods are not guaranteed to converge to a global optimum and often suffer from robustness issues. We propose a new approach based on approximate Killing vector fields (AKVFs), first introduced in shape processing. AKVFs generate near‐isometric deformations, which can be motivated as direction fields minimizing an “as‐rigid‐as‐possible” (ARAP) energy to first order. We first solve for an AKVF on the domain given user constraints via a linear optimization problem and then use this AKVF as the initial velocity field of the deformation. In this way, we transfer the inherent nonlinearity of the deformation problem to finding trajectories for each point of the domain having the given initial velocities. We show that a specific class of trajectories — the set of logarithmic spirals — is especially suited for this task both in practice and through its relationship to linear holomorphic vector fields. We demonstrate the effectiveness of our method for planar deformation by comparing it with existing state‐of‐the‐art deformation methods.  
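The connection between logarithmic spirals and linear holomorphic vector fields is direct enough to sketch: identifying the plane with the complex numbers, the field z' = (a + ib)(z − c) has the closed-form flow z(t) = c + exp((a + ib)t)(z₀ − c), whose trajectories are logarithmic spirals, so no numerical integration is needed per point. The function name and parameters are ours.

```python
import numpy as np

def spiral_deform(points, a, b, t=1.0, center=0j):
    """Advect 2-D points (given as complex numbers) along the linear
    holomorphic field z' = (a + i*b) * (z - center). The exact flow is
    a logarithmic spiral: uniform rotation by b*t combined with uniform
    scaling by exp(a*t) about the center."""
    z = np.asarray(points, dtype=complex)
    return center + np.exp((a + 1j * b) * t) * (z - center)
```

The a = 0 case is a rigid rotation (an exact Killing field), which is why trajectories of this family mesh well with AKVF initial velocities.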

18.
The evolution of strain and development of material anisotropy in models of the Earth’s mantle flow convey important information about how to interpret the geometric relationship between observation of seismic anisotropy and the actual mantle flow field. By combining feature extraction techniques such as path line integration and tensor accumulation, we compute time‐varying strain vector fields that build the foundation for a number of feature extraction and visualization techniques. The proposed field segmentation, clustering, histograms and multi‐volume visualization techniques facilitate an intuitive understanding of three‐dimensional strain in such flow fields, overcoming limitations of previous methods such as 2‐D line plots and slicing. We present applications of our approach to an artificial time‐varying flow dataset and a real‐world example of stationary flow in a subduction zone and discuss the challenges of processing these geophysical data sets as well as the insights gained.
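The tensor-accumulation step along a path line amounts to integrating the deformation gradient F, which obeys dF/dt = J(t)·F for the velocity-gradient tensor J evaluated along the particle trajectory. A minimal explicit-Euler sketch (our own; the paper combines this with path line integration of the particle position itself):

```python
import numpy as np

def accumulate_strain(jacobian, dt, steps):
    """Accumulate the deformation gradient F along a path line by
    integrating dF/dt = J(t) @ F with explicit Euler. `jacobian(t)`
    returns the 2x2 (or 3x3) velocity-gradient tensor at the particle
    position at time t; F starts as the identity."""
    F = np.eye(len(jacobian(0.0)))
    for k in range(steps):
        F = F + dt * jacobian(k * dt) @ F
    return F
```

Strain and anisotropy measures (e.g. principal stretches) then follow from the singular values of the accumulated F.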

19.
We present a novel framework for efficiently computing the indirect illumination in diffuse and moderately glossy scenes using density estimation techniques. Many existing global illumination approaches either quickly compute an overly approximate solution or perform an orders of magnitude slower computation to obtain high-quality results for the indirect illumination. The proposed method improves photon density estimation and leads to significantly better visual quality in particular for complex geometry, while only slightly increasing the computation time. We perform direct splatting of photon rays, which allows us to use simpler search data structures. Since our density estimation is carried out in ray space rather than on surfaces, as in the commonly used photon mapping algorithm, the results are more robust against geometrically incurred sources of bias. This holds also in combination with final gathering where photon mapping often overestimates the illumination near concave geometric features. In addition, we show that our photon splatting technique can be extended to handle moderately glossy surfaces and can be combined with traditional irradiance caching for sparse sampling and filtering in image space.
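The splatting idea inverts the usual gather: instead of searching for the k nearest photons around every shading point, each photon scatters its power into a kernel footprint. A minimal surface-domain sketch on a unit square (the paper splats photon *rays* in ray space, which this simplification does not capture; names are ours):

```python
import numpy as np

def splat_photons(positions, powers, grid_n=64, bandwidth=0.05):
    """Kernel density estimation by splatting: every photon deposits its
    power into a normalized Gaussian footprint on a regular grid over the
    unit square, so no nearest-neighbor search structure is required."""
    axis = (np.arange(grid_n) + 0.5) / grid_n
    X, Y = np.meshgrid(axis, axis, indexing="ij")
    density = np.zeros((grid_n, grid_n))
    norm = 2.0 * np.pi * bandwidth ** 2  # 2-D Gaussian normalization
    for (px, py), w in zip(positions, powers):
        d2 = (X - px) ** 2 + (Y - py) ** 2
        density += w * np.exp(-0.5 * d2 / bandwidth ** 2) / norm
    return density
```

Because each photon's kernel integrates to its power, total energy is conserved up to boundary clipping, one of the properties that makes splatting robust compared with fixed-k gathering near geometric features.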

20.
Defocus Magnification
A blurry background due to shallow depth of field is often desired for photographs such as portraits, but, unfortunately, small point-and-shoot cameras do not permit enough defocus because of the small diameter of their lenses. We present an image-processing technique that increases the defocus in an image to simulate the shallow depth of field of a lens with a larger aperture. Our technique estimates the spatially-varying amount of blur over the image, and then uses a simple image-based technique to increase defocus. We first estimate the size of the blur kernel at edges and then propagate this defocus measure over the image. Using our defocus map, we magnify the existing blurriness, which means that we blur blurry regions and keep sharp regions sharp. In contrast to more difficult problems such as depth from defocus, we do not require precise depth estimation and do not need to disambiguate textureless regions.
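The "magnify existing blurriness" step can be sketched as a per-pixel blend between the sharp image and a blurred copy, driven by a defocus map in [0, 1] (the paper estimates that map from blur at edges and propagates it; here it is taken as given, and a box blur stands in for a proper defocus kernel — both simplifying assumptions of ours):

```python
import numpy as np

def box_blur(img, r):
    """Separable box blur with an edge-clamped window of radius r."""
    out = img.astype(float)
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for s in range(-r, r + 1):
            idx = np.clip(np.arange(out.shape[axis]) + s, 0, out.shape[axis] - 1)
            acc += np.take(out, idx, axis=axis)
        out = acc / (2 * r + 1)
    return out

def magnify_defocus(img, defocus, r=3):
    """Blend each pixel between the sharp image and a blurred copy by its
    defocus estimate: blurry regions get blurrier while sharp regions
    (defocus ~ 0) stay untouched."""
    return (1.0 - defocus) * img + defocus * box_blur(img, r)
```

Already-blurry regions tolerate the extra blur, while the defocus map protects in-focus regions, which is why no precise depth estimate is needed.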


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号