Similar Documents
20 similar records found (search time: 15 ms)
1.
When rendering effects such as motion blur and defocus blur, shading can become very expensive if done in a naïve way, i.e. shading each visibility sample. To improve performance, previous work often decouples shading from visibility sampling using shader caching algorithms. We present a novel technique for reusing shading in a stochastic rasterizer. Shading is computed hierarchically and sparsely in an object‐space texture, and by selecting an appropriate mipmap level for each triangle, we ensure that the shading rate is sufficiently high that no noticeable blurring is introduced in the rendered image. Furthermore, with a two‐pass algorithm, we separate shading from reuse and thus avoid GPU thread synchronization. Our method runs at real‐time frame rates and is up to 3× faster than previous methods. This is an important step forward for stochastic rasterization in real time.
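The per-triangle mipmap selection can be sketched as a simple resolution-matching heuristic: pick the level at which one shaded texel covers roughly one screen pixel. This is an illustrative stand-in, not the paper's exact selection rule; the function name and parameters are hypothetical.

```python
import math

def shading_mip_level(tex_area_texels, screen_area_pixels, max_level):
    """Pick a mipmap level so the shading rate roughly matches one
    shaded texel per covered pixel (illustrative heuristic, not the
    paper's exact selection rule)."""
    if screen_area_pixels <= 0:
        return max_level
    # Each mip level halves the resolution in both axes, quartering the area,
    # so the level is half the log2 of the texel-to-pixel area ratio.
    level = 0.5 * math.log2(max(tex_area_texels / screen_area_pixels, 1.0))
    return min(max_level, int(level))

# A triangle covering 64x64 texels but only 16x16 pixels: shade 2 levels up.
print(shading_mip_level(64 * 64, 16 * 16, max_level=10))  # -> 2
```

Shading at a coarser level than this would blur the image; shading finer wastes work, which is the trade-off the hierarchy navigates.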

2.
Inertial particles are finite‐sized objects traveling with a certain velocity that differs from the underlying carrying flow, i.e., they are mass‐dependent and subject to inertia. Their backward integration is in practice infeasible, since a slight change in the initial velocity causes extreme changes in the recovered position. Thus, if an inertial particle is observed, it is difficult to recover where it came from. This is known as the source inversion problem, which has many practical applications in recovering the source of airborne or waterborne pollution. Inertial trajectories live in a higher dimensional spatio‐velocity space. In this paper, we show that this space is only sparsely populated. Assuming that inertial particles are released with a given initial velocity (e.g., from rest), particles may reach a certain location only with a limited set of possible velocities. In fact, with increasing integration duration and dependent on the particle response time, inertial particles converge to a terminal velocity. We show that the set of initial positions that lead to the same location form a curve. We extract these curves by devising a derived vector field in which they appear as tangent curves. Most importantly, the derived vector field only involves forward integrated flow map gradients, which are much more stable to compute than backward trajectories. After extraction, we interactively visualize the curves in the domain and display the reached velocities using glyphs. In addition, we encode the rate of change of the terminal velocity along the curves, which gives a notion of the convergence to the terminal velocity. With this, we present the first solution to the source inversion problem that considers actual inertial trajectories. We apply the method to steady and unsteady flows in both 2D and 3D domains.
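The key ingredient above, a forward-integrated flow map gradient, can be approximated by central differences of the flow map itself. The velocity field and all numerical parameters below are illustrative assumptions (a rigid rotation, explicit Euler integration), not the paper's data or scheme.

```python
import math

def velocity(x, y):
    # Simple steady rotational flow as a stand-in for a real data set.
    return -y, x

def flow_map(x, y, t_end, dt=1e-3):
    """Forward-integrate a massless tracer with explicit Euler."""
    t = 0.0
    while t < t_end:
        u, v = velocity(x, y)
        x, y = x + dt * u, y + dt * v
        t += dt
    return x, y

def flow_map_gradient(x, y, t_end, h=1e-4):
    """Central-difference approximation of the 2x2 flow map gradient,
    the quantity the paper's derived vector field is built from."""
    xp = flow_map(x + h, y, t_end); xm = flow_map(x - h, y, t_end)
    yp = flow_map(x, y + h, t_end); ym = flow_map(x, y - h, t_end)
    return [[(xp[0] - xm[0]) / (2 * h), (yp[0] - ym[0]) / (2 * h)],
            [(xp[1] - xm[1]) / (2 * h), (yp[1] - ym[1]) / (2 * h)]]

J = flow_map_gradient(1.0, 0.0, t_end=math.pi / 2)
# For a rigid rotation by 90 degrees the gradient is the rotation matrix
# [[0, -1], [1, 0]] (up to integration error).
print(J)
```

Because the trajectories are integrated forward, this computation stays well-conditioned, which is exactly why the paper avoids backward integration of inertial particles.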

3.
In this paper, we propose an interactive technique for constructing a 3D scene via sparse user inputs. We represent a 3D scene in the form of a Layered Depth Image (LDI) which is composed of a foreground layer and a background layer, and each layer has a corresponding texture and depth map. Given user‐specified sparse depth inputs, depth maps are computed based on superpixels using interpolation with geodesic‐distance weighting and an optimization framework. This computation is done immediately, which allows the user to edit the LDI interactively. Additionally, our technique automatically estimates depth and texture in occluded regions using the depth discontinuity. In our interface, the user paints strokes on the 3D model directly. The drawn strokes serve as 3D handles with which the user can pull out or push the 3D surface easily and intuitively with real‐time feedback. We show that our technique enables efficient modeling of LDIs that produce convincing 3D effects.

4.
We present a theoretical analysis of error of combinations of Monte Carlo estimators used in image synthesis. Importance sampling and multiple importance sampling are popular variance‐reduction strategies. Unfortunately, neither strategy improves the rate of convergence of Monte Carlo integration. Jittered sampling (a type of stratified sampling), on the other hand, is known to improve the convergence rate. Most rendering software optimistically combines importance sampling with jittered sampling, hoping to achieve both. We derive the exact error of the combination of multiple importance sampling with jittered sampling. In addition, we demonstrate a further benefit of introducing negative correlations (antithetic sampling) between estimates to the convergence rate. As with importance sampling, antithetic sampling is known to reduce error for certain classes of integrands without affecting the convergence rate. In this paper, our analysis and experiments reveal that importance and antithetic sampling, if used judiciously and in conjunction with jittered sampling, may improve convergence rates. We show the impact of such combinations of strategies on the convergence rate of estimators for direct illumination.
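The convergence-rate benefit of jittering can be observed empirically with a toy 1D integral: one stratified sample per cell versus plain uniform sampling. The integrand and sample counts are illustrative choices, not the paper's experiments.

```python
import random

def mc_estimate(f, n, jittered, rng):
    """Estimate the integral of f over [0, 1] with n samples, either
    plain uniform or one jittered sample per stratum."""
    total = 0.0
    for i in range(n):
        u = (i + rng.random()) / n if jittered else rng.random()
        total += f(u)
    return total / n

def empirical_variance(jittered, trials=2000, n=16):
    rng = random.Random(1)
    f = lambda x: x * x            # exact integral over [0, 1] is 1/3
    est = [mc_estimate(f, n, jittered, rng) for _ in range(trials)]
    mean = sum(est) / trials
    return sum((e - mean) ** 2 for e in est) / trials

v_plain, v_jit = empirical_variance(False), empirical_variance(True)
# Stratification drops the variance from O(1/n) to O(1/n^3) for smooth
# integrands, so the jittered estimator is far less noisy here.
print(v_plain > 10 * v_jit)
```

Importance or antithetic sampling would change the constant in front of these rates; the paper's contribution is analyzing what happens to the *rate* when the strategies are combined.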

5.
In this paper, we present Hypersliceplorer, an algorithm for generating 2D slices of multi‐dimensional shapes defined by a simplicial mesh. Often, slices are generated by using a parametric form and then constraining parameters to view the slice. In our case, we developed an algorithm to slice a simplicial mesh of any number of dimensions with a two‐dimensional slice. In order to get a global appreciation of the multi‐dimensional object, we show multiple slices by sampling a number of different slicing points and projecting the slices into a single view per dimension pair. These slices are shown in an interactive viewer which can switch between a global view (all slices) and a local view (single slice). We show how this method can be used to study regular polytopes, differences between spaces of polynomials, and multi‐objective optimization surfaces.

6.
This survey provides an overview of perceptually motivated techniques for the visualization of medical image data, including physics‐based lighting techniques as well as illustrative rendering techniques that incorporate spatial depth and shape cues. Additionally, we discuss evaluations that were conducted in order to study the perceptual effects of these visualization techniques as compared to conventional techniques. These evaluations assessed depth and shape perception with depth judgment, orientation matching, and related tasks. This overview of existing techniques and their evaluation serves as a basis for defining the evaluation process of medical visualizations and for discussing a research agenda.

7.
A bipartite graph is a powerful abstraction for modeling relationships between two collections. Visualizations of bipartite graphs allow users to understand the mutual relationships between the elements in the two collections, e.g., by identifying clusters of similarly connected elements. However, commonly‐used visual representations do not scale for the analysis of large bipartite graphs containing tens of millions of vertices, often resorting to an a‐priori clustering of the sets. To address this issue, we present the Who's‐Active‐On‐What‐Visualization (WAOW‐Vis) that allows for multiscale exploration of a bipartite social‐network without imposing an a‐priori clustering. To this end, we propose to treat a bipartite graph as a high‐dimensional space and we create the WAOW‐Vis by adapting the multiscale dimensionality‐reduction technique HSNE. The application of HSNE to bipartite graphs requires several modifications that form the contributions of this work. Given the nature of the problem, a set‐based similarity is proposed. For efficient and scalable computations, we use compressed bitmaps to represent sets and we present a novel space‐partitioning tree to efficiently compute similarities: the Sets Intersection Tree. Finally, we validate WAOW‐Vis on several datasets connecting Twitter‐users and ‐streams in different domains: news, computer science and politics. We show how WAOW‐Vis is particularly effective in identifying hierarchies of communities among social‐media users.
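The set-based similarity idea can be sketched with Jaccard similarity over adjacency bitmaps, here stored as plain Python integers used as (uncompressed) bitsets; WAOW-Vis uses compressed bitmaps plus the Sets Intersection Tree instead. The users and streams are made up for illustration.

```python
def jaccard(a, b):
    """Set-based similarity between two adjacency bitmaps stored as
    Python ints used as bitsets: |A ∩ B| / |A ∪ B|."""
    inter = bin(a & b).count("1")
    union = bin(a | b).count("1")
    return inter / union if union else 0.0

# Users as bitmaps over 8 "streams" they follow (bit i = stream i).
alice = 0b1011_0110
bob   = 0b1011_0010
carol = 0b0100_1001
print(round(jaccard(alice, bob), 3))   # 4 of 5 streams shared -> 0.8
print(round(jaccard(alice, carol), 3)) # disjoint -> 0.0
```

Similarly connected users (alice, bob) get a high similarity and would land near each other in the HSNE embedding, while carol would not.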

8.
Scatter plots are mostly used for correlation analysis, but are also a useful tool for understanding the distribution of high‐dimensional point cloud data. An important characteristic of such distributions are clusters, and scatter plots have been used successfully to identify clusters in data. Another characteristic of point cloud data that has received less attention so far is regions that contain no or only very few data points. We show that augmenting scatter plots by projections of flow lines along the gradient vector field of the distance function to the point cloud reveals such empty regions or voids. The augmented scatter plots, which we call sclow plots, enable a much better understanding of the geometry underlying the point cloud than traditional scatter plots, and by that support tasks like dimension inference, detecting outliers, or identifying data points at the interface between clusters. We demonstrate the feasibility of our approach on synthetic and real world data sets.
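The flow lines in question follow the gradient of the distance-to-point-cloud function, which points away from the nearest data point and therefore pushes curves into empty regions. A minimal 2D sketch with brute-force nearest-neighbor search and fixed-step integration (not the paper's projection pipeline):

```python
import math

def nearest(points, x, y):
    return min(points, key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)

def trace_flow_line(points, x, y, step=0.05, n_steps=200):
    """Follow the gradient of d(x) = min_k ||x - p_k||: it points away
    from the nearest point, so the curve drifts into voids."""
    path = [(x, y)]
    for _ in range(n_steps):
        px, py = nearest(points, x, y)
        dx, dy = x - px, y - py
        d = math.hypot(dx, dy)
        if d < 1e-12:
            break  # exactly on a data point: gradient undefined
        x, y = x + step * dx / d, y + step * dy / d
        path.append((x, y))
    return path

# Two clusters; a seed between them drifts toward the gap separating them.
pts = [(0, 0), (0.1, 0.1), (2, 0), (2.1, 0.1)]
path = trace_flow_line(pts, 0.9, 0.0, n_steps=5)
print(path[-1])
```

With a fixed step the trace oscillates around the medial region between the clusters, which is precisely the void structure a sclow plot visualizes.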

9.
Tile maps are an important tool in thematic cartography with distinct qualities (and limitations) that distinguish them from better‐known techniques such as choropleths, cartograms and symbol maps. Specifically, tile maps display geographic regions as a grid of identical tiles so large regions do not dominate the viewer's attention and small regions are easily seen. Furthermore, complex data such as time series can be shown on each tile in a consistent format, and the grid layout facilitates comparisons across tiles. Whilst a small number of handcrafted tile maps have become popular, the time‐consuming process of creating new tile maps limits their wider use. To address this issue, we present an algorithm that generates a tile map of the specified type (e.g. square, hexagon, triangle) from raw shape data. Since the ‘best’ tile map depends on the specific geography visualized and the task to be performed, the algorithm generates and ranks multiple tile maps and allows the user to choose the most appropriate. The approach is demonstrated on a range of examples using a prototype browser‐based application.

10.
Displaying geometry in flow visualization is often accompanied by occlusion problems, making it difficult to perceive information that is relevant in the respective application. In a recent technique, named opacity optimization, the balance of occlusion avoidance and the selection of meaningful geometry was recognized to be a view‐dependent, global optimization problem. The method solves a bounded‐variable least‐squares problem, which minimizes energy terms for the reduction of occlusion and background clutter, together with smoothness and regularization terms. The original technique operates on an object‐space discretization and was shown for line and surface geometry. Recently, it has been extended to volumes, where it was solved locally per ray by dropping the smoothness energy term and replacing it by pre‐filtering the importance measure. In this paper, we pick up the idea of splitting the opacity optimization problem into two smaller problems. The first problem is a minimization with an analytic solution, and the second problem is a smoothing of the obtained minimizer in object‐space. Thereby, the minimization problem can be solved locally per pixel, making it possible to combine all geometry types (points, lines and surfaces) consistently in a single optimization framework. We call this decoupled opacity optimization and apply it to a number of steady 3D vector fields.
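The two-stage split can be sketched with an illustrative quadratic energy whose per-sample minimizer is closed-form, followed by smoothing along the geometry. The energy below is a hypothetical stand-in; the paper's actual energy terms and weights differ.

```python
def analytic_opacity(importance, occlusion, lam=1.0):
    """Stage 1: per-sample minimizer of the illustrative energy
    E(a) = importance*(a - 1)^2 + lam*occlusion*a^2, which has the
    closed-form solution a* = importance / (importance + lam*occlusion)."""
    denom = importance + lam * occlusion
    return importance / denom if denom > 0 else 0.0

def smooth(values, radius=1):
    """Stage 2: smooth the minimizers in object-space (here along a 1D
    polyline) with a simple moving average."""
    out = []
    for i in range(len(values)):
        window = values[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

importance = [0.9, 0.9, 0.1, 0.9, 0.9]   # per-vertex importance of a line
occlusion  = [0.1, 0.1, 0.1, 0.1, 0.1]   # how much each sample occludes
alphas = [analytic_opacity(g, o) for g, o in zip(importance, occlusion)]
smoothed = smooth(alphas)
print([round(a, 2) for a in smoothed])
```

Because stage 1 is local and analytic, it can run per pixel for points, lines and surfaces alike, and only stage 2 couples neighboring samples.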

11.
We consider the problem of sampling points from a collection of smooth curves in the plane, such that the Crust family of proximity‐based reconstruction algorithms can rebuild the curves. Reconstruction requires a dense sampling of local features, i.e., parts of the curve that are close in Euclidean distance but far apart geodesically. We show that ε < 0.47‐sampling is sufficient for our proposed HNN‐Crust variant, improving upon the state‐of‐the‐art requirement of ε < 1/3‐sampling. Thus we may reconstruct curves with many fewer samples. We also present a new sampling scheme that reduces the required density even further than ε < 0.47‐sampling. We achieve this by better controlling the spacing between geodesically consecutive points. Our novel sampling condition is based on the reach, the minimum local feature size along intervals between samples. This is mathematically closer to the reconstruction density requirements, particularly near sharp‐angled features. We prove lower and upper bounds on reach ρ‐sampling density in terms of lfs ε‐sampling and demonstrate that we typically reduce the required number of samples for reconstruction by more than half.

12.
Intrinsic images are a mid‐level representation of an image that decompose the image into reflectance and illumination layers. The reflectance layer captures the color/texture of surfaces in the scene, while the illumination layer captures shading effects caused by interactions between scene illumination and surface geometry. Intrinsic images have a long history in computer vision and recently in computer graphics, and have been shown to be a useful representation for tasks ranging from scene understanding and reconstruction to image editing. In this report, we review and evaluate past work on this problem. Specifically, we discuss each work in terms of the priors they impose on the intrinsic image problem. We introduce a new synthetic ground‐truth dataset that we use to evaluate the validity of these priors and the performance of the methods. Finally, we evaluate the performance of the different methods in the context of image‐editing applications.

13.
For an ensemble of iso‐contours in multi‐dimensional scalar fields, we present new methods to (a) visualize their dominant spatial patterns of variability, and (b) compute the conditional probability of the occurrence of a contour at one location given the occurrence at some other location. We first show how to derive a statistical model describing the contour variability, by representing the contours implicitly via signed distance functions and clustering similar functions in a reduced order space. We show that the spatial patterns of the ensemble can then be derived by analytically transforming the boundaries of a confidence interval computed from each cluster into the spatial domain. Furthermore, we introduce a mathematical basis for computing correlations between the occurrences of iso‐contours at different locations. We show that the computation of these correlations can be posed in the reduced order space as an integration problem over a region bounded by four hyper‐planes. To visualize the derived statistical properties we employ a variant of variability plots for streamlines, now including the color coding of probabilities of joint contour occurrences. We demonstrate the use of the proposed techniques for ensemble exploration in a number of 2D and 3D examples, using artificial and meteorological data sets.
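The two ingredients, an implicit signed-distance representation of each contour and a conditional occurrence probability, can be illustrated on a toy ensemble of circles with a random radius. The paper derives these probabilities analytically in a reduced-order space; here they are simply counted empirically, and all names and parameters are illustrative.

```python
import math, random

def signed_distance_circle(cx, cy, r, x, y):
    """Implicit contour representation: the iso-contour is the zero
    level set of this signed distance function."""
    return math.hypot(x - cx, y - cy) - r

def occurs(member, point, eps=0.1):
    # "The contour occurs at this location" = |SDF| below a tolerance.
    return abs(signed_distance_circle(*member, *point)) < eps

# Toy ensemble: unit circles with a normally distributed radius.
rng = random.Random(0)
ensemble = [(0.0, 0.0, 1.0 + 0.05 * rng.gauss(0, 1)) for _ in range(5000)]

a, b = (1.0, 0.0), (0.0, 1.05)
hit_a = [m for m in ensemble if occurs(m, a)]
both = [m for m in hit_a if occurs(m, b)]
p = len(both) / len(hit_a)   # P(contour at b | contour at a)
print(round(p, 2))
```

Even in this toy setting the conditional probability is well below one: observing the contour at one location constrains, but does not determine, its occurrence elsewhere.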

14.
The visual analysis of flows with inertial particle trajectories is a challenging problem because time‐dependent particle trajectories additionally depend on mass, which gives rise to an infinite number of possible trajectories passing through every point in space‐time. This paper presents an approach to a comparative visualization of the inertial particles' separation behavior. For this, we define the Finite‐Time Mass Separation (FTMS), a scalar field that measures at each point in the domain how quickly inertial particles separate that were released from the same location but with slightly different mass. Extracting and visualizing the mass that induces the largest separation provides a simplified view of the critical masses. By using complementary coordinated views, we additionally visualize corresponding inertial particle trajectories in space‐time by integral curves and surfaces. For a quantitative analysis, we plot Euclidean and arc length‐based distances to a reference particle over time, which allows us to observe the temporal evolution of separation events. We demonstrate our approach on a number of analytic data sets and one real‐world unsteady 2D field.
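A finite-difference sketch of the FTMS idea: integrate two inertial particles from the same point with slightly different response time (a common proxy for mass) and measure how fast they separate. The saddle flow, the response-time particle model and all constants are illustrative assumptions, not the paper's exact definition or data.

```python
import math

def velocity(x, y):
    # Simple steady saddle flow as a stand-in data set.
    return x, -y

def inertial_trajectory(x, y, tau, t_end, dt=1e-3):
    """Inertial particle under the response-time model
    dv/dt = (u(x) - v) / tau, released from rest (explicit Euler)."""
    vx = vy = 0.0
    t = 0.0
    while t < t_end:
        ux, uy = velocity(x, y)
        vx += dt * (ux - vx) / tau
        vy += dt * (uy - vy) / tau
        x, y = x + dt * vx, y + dt * vy
        t += dt
    return x, y

def ftms(x, y, tau, t_end, dtau=1e-3):
    """Finite-time mass separation, sketched as a central difference:
    how strongly the end position reacts to a small change of tau."""
    x1, y1 = inertial_trajectory(x, y, tau - dtau, t_end)
    x2, y2 = inertial_trajectory(x, y, tau + dtau, t_end)
    sep = math.hypot(x2 - x1, y2 - y1)
    return math.log(sep / (2 * dtau)) / t_end

val = ftms(1.0, 1.0, tau=0.5, t_end=2.0)
print(val)
```

Sampling this scalar on a grid of release points yields the field the paper visualizes; maximizing over tau would give the critical-mass view.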

15.
In this paper, we propose a method to maintain the temporal coherence of stylized feature lines extracted from 3D models and preserve an artistically intended stylization provided by the user. We formally define the problem of combining spatio‐temporal continuity and artistic intention as a weighted energy minimization problem of competing constraints. The proposed method updates the style properties to provide real‐time smooth transitions from current to goal stylization, by assuring first‐ and second‐order temporal continuity, as well as spatial continuity along each stroke. The proposed weighting scheme guarantees that the stylization of strokes maintains motion coherence with respect to the apparent motion of the underlying surface in consecutive frames. This weighting scheme emphasizes temporal continuity for small apparent motions where the human vision system is able to keep track of the scene, and prioritizes the artistic intention for large apparent motions where temporal coherence is not expected. The proposed method produces temporally coherent and visually pleasing animations without the flickering artifacts of previous methods, while also maintaining the artistic intention of a goal stylization provided by the user.
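The weighting scheme's core behavior can be sketched as a per-frame blend from current to goal style whose weight grows with apparent motion. This is an illustrative first-order update with made-up style properties and gain, not the paper's actual energy minimization.

```python
def update_style(current, goal, apparent_motion, k=10.0, dt=1.0 / 30):
    """Blend each stroke's style properties toward the artist's goal.
    Small apparent motion -> small weight (temporal continuity wins);
    large apparent motion -> weight saturates at 1 (artistic intention
    wins). Illustrative weighting, not the paper's exact scheme."""
    w = min(1.0, k * apparent_motion * dt)
    return [(1 - w) * c + w * g for c, g in zip(current, goal)]

current = [0.2, 1.0]   # e.g. stroke thickness, saturation
goal    = [0.8, 0.4]
slow = update_style(current, goal, apparent_motion=0.01)  # tiny step
fast = update_style(current, goal, apparent_motion=5.0)   # snap to goal
print(slow, fast)
```

Repeating this update every frame produces the smooth, flicker-free transitions the abstract describes, with the goal stylization reached quickly wherever the surface moves fast on screen.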

16.
Linear projections are one of the most common approaches to visualize high‐dimensional data. Since the space of possible projections is large, existing systems usually select a small set of interesting projections by ranking a large set of candidate projections based on a chosen quality measure. However, while highly ranked projections can be informative, some lower ranked ones could offer important complementary information. Therefore, selection based on ranking may miss projections that are important to provide a global picture of the data. The proposed work fills this gap by presenting the Grassmannian Atlas, a framework that captures the global structures of quality measures in the space of all projections, which enables a systematic exploration of many complementary projections and provides new insights into the properties of existing quality measures.

17.
One main task for domain experts in analysing their nD data is to detect and interpret class/cluster separations and outliers. An important question is which features/dimensions separate classes best or allow a cluster‐based classification of the data. Common approaches rely on projections from nD to 2D, which come with several challenges: the space of projections contains an infinite number of candidates, so how does one find the right one? Projections suffer from distortions and misleading effects, so how much can the projected class/cluster separation be trusted? And projections involve the complete set of dimensions/features, so how can irrelevant dimensions be identified? To address these challenges, we introduce a visual analytics concept for feature selection based on linear discriminative star coordinates (DSC), which generate optimal cluster‐separating views in a linear sense for both labeled and unlabeled data. This way the user is able to explore how each dimension contributes to clustering. To support the exploration of relations between clusters and data dimensions, we provide a set of cluster‐aware interactions that allow the user to iterate through subspaces of both records and features in a guided manner. We demonstrate our feature selection approach for optimal cluster/class separation analysis with experiments on several real‐life benchmark high‐dimensional data sets.
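The underlying star coordinates projection maps an nD point to 2D as the sum of its coordinates times per-dimension 2D axis vectors. A minimal sketch with uniformly spread axes; DSC additionally optimizes these axis vectors for cluster separation, which is not done here.

```python
import math

def star_coordinates(point, axes):
    """Project an nD point to 2D: x_2d = sum_j point[j] * axes[j],
    where each axis is a 2D vector for one data dimension."""
    x = sum(p * ax for p, (ax, ay) in zip(point, axes))
    y = sum(p * ay for p, (ax, ay) in zip(point, axes))
    return x, y

# Four dimensions, axes spread uniformly around the unit circle.
n = 4
axes = [(math.cos(2 * math.pi * j / n), math.sin(2 * math.pi * j / n))
        for j in range(n)]
proj = star_coordinates([1.0, 0.0, 0.0, 0.0], axes)
print(proj)  # a point varying only in dimension 0 lands on axis 0
```

Shrinking an axis vector toward the origin removes that dimension's influence on the view, which is exactly the lever the feature selection interactions operate on.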

18.
Restricted Voronoi diagrams are a fundamental geometric structure used in many applications such as surface reconstruction from point sets or optimal transport. Given a set of sites V = {v_k}_{k=1}^n ⊂ ℝ^d and a mesh X with vertices in ℝ^d connected by triangles, the restricted Voronoi diagram partitions X by computing for each site the portion of X for which the site is the nearest. The restricted Voronoi diagram is the intersection between the regular Voronoi diagram and the mesh. Depending on the site distribution or the ambient space dimension, computing the regular Voronoi diagram may not be feasible using classical algorithms. In this paper, we extend Lévy and Bonneel's approach [LB12] based on nearest neighbor queries. We show that their method is limited when the sites are not located on X. We propose a new algorithm for computing restricted Voronoi diagrams which reduces the number of sites considered for each triangle of the mesh and scales smoothly when the sites are far from the surface.
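The nearest-neighbor view of the problem can be sketched by sampling points on a mesh triangle and labeling each with its closest site; the set of labels shows which restricted Voronoi cells touch the triangle. This brute-force sampling sketch replaces the exact clipping and acceleration structures of the actual algorithms.

```python
def nearest_site(sites, p):
    """Brute-force nearest-neighbor query over the sites."""
    return min(range(len(sites)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(sites[i], p)))

def sample_triangle(a, b, c, res=4):
    """Barycentric sampling of one triangle of the mesh X."""
    pts = []
    for i in range(res + 1):
        for j in range(res + 1 - i):
            u, v = i / res, j / res
            w = 1 - u - v
            pts.append(tuple(u * pa + v * pb + w * pc
                             for pa, pb, pc in zip(a, b, c)))
    return pts

sites = [(0.0, 0.0), (0.8, 0.8)]
tri = ((0, 0), (1, 0), (0, 1))
labels = {nearest_site(sites, p) for p in sample_triangle(*tri)}
print(labels)  # both restricted Voronoi cells touch this triangle
```

Pruning the sites that can possibly be nearest for a given triangle, especially when sites lie far from the surface, is the part the paper improves.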

19.
Cartograms visualize quantitative data about a set of regions such as countries or states. There are several different types of cartograms and – for some – algorithms to automatically construct them exist. We focus on mosaic cartograms: cartograms that use multiples of simple tiles – usually squares or hexagons – to represent regions. Mosaic cartograms communicate well data that consist of, or can be cast into, small integer units (for example, electoral college votes). In addition, they allow users to accurately compare regions and can often maintain a (schematized) version of the input regions' shapes. We propose the first fully automated method to construct mosaic cartograms. To do so, we first introduce mosaic drawings of triangulated planar graphs. We then show how to modify mosaic drawings into mosaic cartograms with low cartographic error while maintaining correct adjacencies between regions. We validate our approach experimentally and compare to other cartogram methods.
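Casting region data into small integer tile counts is the first step of any mosaic cartogram; a standard way to do it with low cartographic error is largest-remainder rounding. This sketch covers only that step (the function is a hypothetical helper, not the paper's layout algorithm):

```python
def tile_counts(values, total_tiles):
    """Turn raw region values into integer tile counts summing to
    total_tiles via largest-remainder rounding: floor every quota,
    then hand leftover tiles to the largest fractional parts."""
    s = sum(values)
    quotas = [v * total_tiles / s for v in values]
    counts = [int(q) for q in quotas]
    leftovers = sorted(range(len(values)),
                       key=lambda i: quotas[i] - counts[i], reverse=True)
    for i in leftovers[: total_tiles - sum(counts)]:
        counts[i] += 1
    return counts

# Three regions sharing 10 tiles in proportion to their values.
print(tile_counts([55, 25, 20], 10))  # -> [6, 2, 2]
```

The harder part, which the paper solves, is laying these tiles out so that region adjacencies and schematized shapes are preserved.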

20.
In volume visualization, transfer functions are used to classify the volumetric data and assign optical properties to the voxels. In general, transfer functions are generated in a transfer function space, which is the feature space constructed by data values and properties derived from the data. If volumetric objects have the same or overlapping data values, it is difficult to separate them in the transfer function space. In this paper, we present a rule‐enhanced transfer function design method that allows important structures of the volume to be more effectively separated and highlighted. We define a set of rules based on the local frequency distribution of volume attributes. A rule‐selection method based on a genetic algorithm is proposed to learn the set of rules that can distinguish the user‐specified target tissue from other tissues. In the rendering stage, voxels satisfying these rules are rendered with higher opacities in order to highlight the target tissue. The proposed method was tested on various volumetric datasets to enhance the visualization of important structures that are difficult to visualize with traditional transfer function design methods. The results demonstrate the effectiveness of the proposed method.
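A minimal genetic algorithm for rule selection: individuals are bitmasks over a candidate rule set, and fitness rewards keeping target-tissue voxels while rejecting others. The rules, voxel features and GA parameters are all hypothetical stand-ins for the paper's local-frequency-distribution rules.

```python
import random

# Candidate rules over hypothetical (intensity, gradient) voxel features.
RULES = [lambda v, g: v > 0.6, lambda v, g: v < 0.3,
         lambda v, g: g > 0.5, lambda v, g: g < 0.2]

TARGET = [(0.7, 0.6), (0.8, 0.7)]              # voxels of the target tissue
OTHER  = [(0.7, 0.1), (0.2, 0.6), (0.1, 0.1)]  # voxels of other tissues

def fitness(mask):
    """A voxel is selected if it satisfies every active rule; score =
    target voxels kept minus other-tissue voxels wrongly kept."""
    def selected(v, g):
        return all(r(v, g) for i, r in enumerate(RULES) if mask >> i & 1)
    keep = sum(selected(*p) for p in TARGET)
    leak = sum(selected(*p) for p in OTHER)
    return keep - leak

def evolve(pop_size=8, generations=30, rng=random.Random(3)):
    pop = [rng.randrange(1, 1 << len(RULES)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]              # keep the fittest half
        children = [p ^ (1 << rng.randrange(len(RULES)))  # flip one rule bit
                    for p in parents]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Voxels satisfying the selected rule set would then be rendered with higher opacity to highlight the target tissue.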
