Similar Documents
 20 similar documents found
1.
We present a natural extension of two‐dimensional parallel‐coordinates plots for revealing relationships in time‐dependent multi‐attribute data by building on the idea that time can be considered as the third dimension. A time slice through the visualization represents a certain point in time and can be viewed as a regular parallel‐coordinates display. A vertical slice through one of the axes of the parallel‐coordinates display would show a time‐series plot. For a focus‐and‐context integration of both views, we embed time‐series plots between two adjacent axes of the parallel‐coordinates plot. Both time‐series plots are drawn using a pseudo three‐dimensional perspective with a single vanishing point. An independent parallel‐coordinates panel that connects the two perspectively displayed time‐series plots can move forward and backward in time to reveal changes in the relationship between the time‐dependent attributes. The visualization of time‐series plots in the context of the parallel‐coordinates plot facilitates the exploration of time‐related aspects of the data without the need to switch to a separate display. We provide a consistent set of tools for selecting and contrasting subsets of the data, which are important for various application domains.
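For illustration, a minimal sketch (not the authors' focus‐and‐context system) of the basic time‐slice view: one time step of a records × attributes × time array rendered as an ordinary parallel‐coordinates plot. The random data and the "attr i" axis labels are placeholders.

```python
# Draw one time slice of time-dependent multi-attribute data as a
# standard parallel-coordinates plot (matplotlib; placeholder data).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
data = rng.random((60, 4, 50))          # 60 records, 4 attributes, 50 time steps
t = 25                                  # the time slice to display

slice_t = data[:, :, t]
# Normalize each attribute to [0, 1] so all axes share one vertical scale.
lo, hi = slice_t.min(axis=0), slice_t.max(axis=0)
norm = (slice_t - lo) / (hi - lo)

xs = np.arange(norm.shape[1])
fig, ax = plt.subplots()
for row in norm:
    ax.plot(xs, row, color="steelblue", alpha=0.3)   # one polyline per record
for x in xs:
    ax.axvline(x, color="black", lw=0.8)             # the parallel axes
ax.set_xticks(xs, [f"attr {i}" for i in xs])
ax.set_title(f"Parallel coordinates at time step {t}")
plt.show()
```

Scrubbing `t` forward and backward plays the role of the movable panel described in the abstract.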

2.
3.
In this paper, we present Hypersliceplorer, an algorithm for generating 2D slices of multi‐dimensional shapes defined by a simplicial mesh. Often, slices are generated by using a parametric form and then constraining parameters to view the slice. In our case, we developed an algorithm to slice a simplicial mesh of any number of dimensions with a two‐dimensional slice. In order to get a global appreciation of the multi‐dimensional object, we show multiple slices by sampling a number of different slicing points and projecting the slices into a single view per dimension pair. These slices are shown in an interactive viewer which can switch between a global view (all slices) and a local view (single slice). We show how this method can be used to study regular polytopes, differences between spaces of polynomials, and multi‐objective optimization surfaces.
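A minimal sketch of the geometric core, under simplifying assumptions (a single full‐dimensional simplex in generic position), not the Hypersliceplorer algorithm itself: the intersection of a d‐simplex with a 2‐flat x = o + s·u + t·v is a convex polygon in (s, t), obtained by clipping a large square against the half‐planes induced by the barycentric non‐negativity constraints.

```python
# Slice one d-simplex with a 2-flat via Sutherland-Hodgman clipping.
import numpy as np

def slice_simplex(V, o, u, v, big=1e3):
    M = np.vstack([V.T, np.ones(len(V))])            # maps barycentric -> (x, 1)
    lam_o = np.linalg.solve(M, np.append(o, 1.0))    # barycentric coords are
    lam_u = np.linalg.solve(M, np.append(u, 0.0))    # affine in (s, t)
    lam_v = np.linalg.solve(M, np.append(v, 0.0))
    # Start from a big square in (s, t) and clip by each lam_i(s, t) >= 0.
    poly = [np.array(p) for p in [(-big, -big), (big, -big), (big, big), (-big, big)]]
    for a, b, c in zip(lam_u, lam_v, lam_o):
        f = lambda p: a * p[0] + b * p[1] + c        # signed constraint value
        out = []
        for p, q in zip(poly, poly[1:] + poly[:1]):
            fp, fq = f(p), f(q)
            if fp >= 0:
                out.append(p)
            if fp * fq < 0:                          # edge crosses the boundary
                out.append(p + fp / (fp - fq) * (q - p))
        poly = out
        if not poly:                                 # simplex misses the 2-flat
            break
    return [o + s * u + t * v for s, t in poly]      # polygon back in R^d

# Tetrahedron in R^3 sliced by the plane z = 0.25 (a triangle):
V = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
print(slice_simplex(V, o=np.array([0, 0, 0.25]),
                    u=np.array([1., 0, 0]), v=np.array([0., 1, 0])))
```

Running this over every simplex of a mesh and over many sampled slicing points would give the overlaid global view the abstract describes.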

4.
We propose a new technique for in‐core and out‐of‐core GPU ray tracing using a generalization of hierarchical occlusion culling in the style of the CHC++ method. Our method exploits the rasterization pipeline and hardware occlusion queries in order to create coherent batches of work for localized shader‐based ray tracing kernels. By combining hierarchies in both ray space and object space, the method is able to share intermediate traversal results among multiple rays. We exploit temporal coherence among similar ray sets between frames as well as within a given frame. A suitable management of the current visibility state makes it possible to benefit from occlusion culling even for less coherent ray types such as diffuse reflections. Since large scenes are still a challenge for modern GPU ray tracers, our method is most useful for scenes of medium to high complexity, especially since it inherently supports ray tracing of highly complex scenes that do not fit in GPU memory. For in‐core scenes, our method is comparable to CUDA ray tracing and performs up to 5.94× better than pure shader‐based ray tracing.
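The full method drives hardware occlusion queries through the rasterizer and cannot be reduced to a few lines; the toy sketch below illustrates only the batching idea: group rays by the top‐level hierarchy node (axis‐aligned box) they enter, so each batch can be traced by a localized kernel. The scene, node layout, and counts are made up.

```python
# Group rays into coherent per-node batches using a slab AABB test.
import numpy as np
from collections import defaultdict

def hits_aabb(orig, direc, lo, hi):
    """Does the ray orig + t*direc, t >= 0, hit the box [lo, hi]?
    Assumes no exactly-zero direction components."""
    inv = 1.0 / direc
    t0, t1 = (lo - orig) * inv, (hi - orig) * inv
    tmin = np.minimum(t0, t1).max()
    tmax = np.maximum(t0, t1).min()
    return tmax >= max(tmin, 0.0)

nodes = [(np.array([0., 0, 0]), np.array([1., 1, 1])),    # two top-level boxes
         (np.array([2., 0, 0]), np.array([3., 1, 1]))]

rng = np.random.default_rng(1)
origins = rng.uniform(-1, 4, size=(100, 3))
dirs = rng.normal(size=(100, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

batches = defaultdict(list)                  # node index -> ray indices
for i, (o, d) in enumerate(zip(origins, dirs)):
    for n, (lo, hi) in enumerate(nodes):
        if hits_aabb(o, d, lo, hi):
            batches[n].append(i)             # ray i joins node n's batch
for n, rays in batches.items():
    print(f"node {n}: batch of {len(rays)} rays")
```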

5.
Maintaining consistent styles across glyphs is an arduous task in typeface design. In this work we introduce FlexyFont, a flexible tool for synthesizing a complete typeface that has a consistent style with a given small set of glyphs. Motivated by the key fact that typeface designers often maintain a library of glyph parts to achieve a consistent typeface, we intend to learn part consistency between glyphs of different characters across typefaces. We take a part‐assembling approach by first decomposing the given glyphs into semantic parts and then assembling them according to learned sets of transfer rules to reconstruct the missing glyphs. To maintain style consistency, we represent the style of a font as a vector of pairwise part similarities. By learning a distribution over these feature vectors, we are able to predict the style of a novel typeface given only a few examples. We utilize a popular machine learning method as well as retrieval‐based methods to quantitatively assess the performance of our feature vector, with favorable results. We also present an intuitive interface that allows users to interactively create novel typefaces with ease. The synthesized fonts can be directly used in real‐world design.
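A minimal sketch of the style representation described above: a font's style is encoded as the vector of pairwise similarities between descriptors of its glyph parts. The 8‐D random part descriptors below are placeholders for real shape features.

```python
# Encode a font's style as pairwise cosine similarities of its parts.
import numpy as np
from itertools import combinations

def style_vector(parts):
    """parts: (n_parts, n_features) array -> vector of pairwise similarities."""
    unit = parts / np.linalg.norm(parts, axis=1, keepdims=True)
    return np.array([unit[i] @ unit[j]
                     for i, j in combinations(range(len(parts)), 2)])

rng = np.random.default_rng(0)
font_a = rng.normal(size=(5, 8))    # 5 glyph parts, 8 features each
font_b = rng.normal(size=(5, 8))
va, vb = style_vector(font_a), style_vector(font_b)
print("style distance:", np.linalg.norm(va - vb))
```

Because the vector depends only on *relative* part similarities, two fonts with differently shaped but equally consistent parts map to nearby points, which is what makes a distribution over such vectors useful for predicting the style of unseen typefaces.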

6.
Linear projections are one of the most common approaches to visualize high‐dimensional data. Since the space of possible projections is large, existing systems usually select a small set of interesting projections by ranking a large set of candidate projections based on a chosen quality measure. However, while highly ranked projections can be informative, some lower ranked ones could offer important complementary information. Therefore, selection based on ranking may miss projections that are important to provide a global picture of the data. The proposed work fills this gap by presenting the Grassmannian Atlas, a framework that captures the global structures of quality measures in the space of all projections, which enables a systematic exploration of many complementary projections and provides new insights into the properties of existing quality measures.
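A minimal sketch of the candidate‐ranking setup the Atlas builds on (not the Atlas itself): sample random 2D linear projections, i.e. points on the Grassmannian Gr(2, d), via QR factorization of Gaussian matrices, and score each with a quality measure. The variance‐based measure below is a placeholder for the measures studied in the paper.

```python
# Sample random orthonormal 2-frames and rank them by a quality measure.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 6))             # 500 points in 6 dimensions

def random_projection(d, rng):
    q, _ = np.linalg.qr(rng.normal(size=(d, 2)))
    return q                                 # d x 2, orthonormal columns

def quality(proj2d):
    return proj2d.var(axis=0).sum()          # total variance retained

frames = [random_projection(6, rng) for _ in range(200)]
scores = [quality(data @ f) for f in frames]
best = np.argsort(scores)[::-1][:5]
print("top-5 projection scores:", [round(scores[i], 2) for i in best])
```

Keeping only `best` is exactly the ranking-based selection the abstract argues against; the Atlas instead studies the structure of `scores` over the whole space of frames.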

7.
Multi‐dimensional continuous functions are commonly visualized with 2D slices or topological views. Here, we explore 1D slices as an alternative approach to show such functions. Our goal with 1D slices is to combine the benefits of topological views, that is, screen space efficiency, with those of slices, that is, a close resemblance to the underlying function. We compare 1D slices to 2D slices and topological views, first by looking at their performance with respect to common function analysis tasks. We also demonstrate three usage scenarios: the 2D sinc function, neural network regression, and optimization traces. Based on this evaluation, we characterize the advantages and drawbacks of each of these approaches, and show how interaction can be used to overcome some of the shortcomings.
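A minimal sketch of 1D slicing: evaluate the 2D sinc function (one of the usage scenarios named above) along axis‐aligned lines through a chosen focus point, giving one 1D slice per dimension. The focus point is an arbitrary placeholder.

```python
# One 1D slice per dimension of the 2D sinc through a focus point.
import numpy as np
import matplotlib.pyplot as plt

def sinc2d(x, y):
    r = np.sqrt(x**2 + y**2)
    return np.sinc(r)                        # np.sinc(r) = sin(pi*r)/(pi*r)

focus = (0.5, -0.3)                          # the slicing point
ts = np.linspace(-5, 5, 400)
fig, axes = plt.subplots(1, 2, figsize=(8, 3))
axes[0].plot(ts, sinc2d(ts, focus[1]))       # vary x, hold y at the focus
axes[0].set_title(f"slice along x (y = {focus[1]})")
axes[1].plot(ts, sinc2d(focus[0], ts))       # vary y, hold x at the focus
axes[1].set_title(f"slice along y (x = {focus[0]})")
plt.tight_layout(); plt.show()
```

Each panel costs one screen row rather than a full 2D image, which is the screen‐space argument the abstract makes for 1D slices.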

8.
Given a planar point set sampled from an object boundary, the process of approximating the original shape is called curve reconstruction. In this paper, we propose a novel non‐parametric curve reconstruction algorithm based on Delaunay triangulation and theoretically prove that it reconstructs the original curve under ε‐sampling. Starting from an initial Delaunay seed edge, the algorithm proceeds by finding an appropriate neighbouring point and adding an edge between them. Experimental results show that the proposed algorithm is capable of reconstructing curves with different features such as sharp corners, outliers, multiple objects and objects with holes. The proposed method also works for open curves. Based on a small user study, the paper also discusses an application of the proposed algorithm to reconstructing hand‐drawn skip‐stroke sketches, which is useful in various sketch‐based interfaces.
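For context, a simplified NN‐Crust‐style baseline, not the paper's Delaunay‐seeded algorithm: connect each sample to its nearest neighbour, then add a "half neighbour" edge on the opposite side. On a densely sampled smooth closed curve this recovers the boundary polygon.

```python
# NN-Crust-style baseline: nearest neighbour plus opposite half neighbour.
import numpy as np

def nn_crust(pts):
    n = len(pts)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    np.fill_diagonal(d, np.inf)
    nearest = d.argmin(axis=1)
    edges = set()
    for p in range(n):
        edges.add(tuple(sorted((p, nearest[p]))))      # nearest-neighbour edge
    for p in range(n):
        q = nearest[p]
        # candidates lying on the opposite side of p from q
        opp = [s for s in range(n)
               if s != p and (pts[s] - pts[p]) @ (pts[q] - pts[p]) < 0]
        if opp:
            s = min(opp, key=lambda k: d[p, k])
            edges.add(tuple(sorted((p, s))))           # half-neighbour edge
    return edges

theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
circle = np.c_[np.cos(theta), np.sin(theta)]
print(len(nn_crust(circle)), "edges")   # 40: the closed polygon of the circle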

9.
Intrinsic images are a mid‐level representation of an image that decomposes the image into reflectance and illumination layers. The reflectance layer captures the color/texture of surfaces in the scene, while the illumination layer captures shading effects caused by interactions between scene illumination and surface geometry. Intrinsic images have a long history in computer vision and, more recently, in computer graphics, and have been shown to be a useful representation for tasks ranging from scene understanding and reconstruction to image editing. In this report, we review and evaluate past work on this problem. Specifically, we discuss each work in terms of the priors it imposes on the intrinsic image problem. We introduce a new synthetic ground‐truth dataset that we use to evaluate the validity of these priors and the performance of the methods. Finally, we evaluate the performance of the different methods in the context of image‐editing applications.
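To make the decomposition concrete, a deliberately naive baseline (not one of the surveyed methods): assuming I = R · S, estimate the illumination S as a heavy Gaussian blur of luminance (the "shading is smooth" prior in its crudest form) and recover reflectance by division.

```python
# Naive intrinsic decomposition: I = R * S with S = blurred luminance.
import numpy as np
from scipy.ndimage import gaussian_filter

def naive_intrinsic(img, sigma=15.0, eps=1e-4):
    lum = img.mean(axis=2)                          # luminance proxy
    shading = gaussian_filter(lum, sigma) + eps     # smooth illumination layer
    reflectance = img / shading[..., None]          # R = I / S
    return reflectance, shading

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))                       # placeholder image
R, S = naive_intrinsic(img)
print(R.shape, S.shape)
```

The surveyed methods differ precisely in which priors replace the blur here (piecewise‐constant reflectance, sparse reflectance palettes, shading smoothness along surface normals, etc.).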

10.
We present a theoretical analysis of the error of combinations of Monte Carlo estimators used in image synthesis. Importance sampling and multiple importance sampling are popular variance‐reduction strategies. Unfortunately, neither strategy improves the rate of convergence of Monte Carlo integration. Jittered sampling (a type of stratified sampling), on the other hand, is known to improve the convergence rate. Most rendering software optimistically combines importance sampling with jittered sampling, hoping to achieve both benefits. We derive the exact error of the combination of multiple importance sampling with jittered sampling. In addition, we demonstrate a further benefit to the convergence rate of introducing negative correlations (antithetic sampling) between estimates. As with importance sampling, antithetic sampling is known to reduce error for certain classes of integrands without affecting the convergence rate. Our analysis and experiments reveal that importance and antithetic sampling, if used judiciously and in conjunction with jittered sampling, may improve convergence rates. We show the impact of such combinations of strategies on the convergence rate of estimators for direct illumination.
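A small experiment in the spirit of this analysis: compare the RMSE of plain uniform Monte Carlo against jittered (stratified) sampling for a smooth 1D integrand. The faster error decay of the jittered estimator is the rate improvement the paper combines with importance and antithetic sampling.

```python
# RMSE of uniform vs. jittered Monte Carlo for a smooth 1D integrand.
import numpy as np

f = lambda x: np.sin(np.pi * x) ** 2          # integral over [0, 1] is 0.5
rng = np.random.default_rng(0)

def rmse(sampler, n, trials=200):
    ests = [f(sampler(n)).mean() for _ in range(trials)]
    return np.sqrt(np.mean((np.array(ests) - 0.5) ** 2))

uniform = lambda n: rng.random(n)
jittered = lambda n: (np.arange(n) + rng.random(n)) / n   # one sample per stratum

for n in [16, 64, 256]:
    print(f"n={n:4d}  uniform {rmse(uniform, n):.5f}"
          f"  jittered {rmse(jittered, n):.5f}")
```

For this integrand the uniform error falls like N^(-1/2) while the jittered error falls markedly faster, so quadrupling N shrinks the jittered column much more than the uniform one.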

11.
We consider the problem of sampling points from a collection of smooth curves in the plane, such that the Crust family of proximity‐based reconstruction algorithms can rebuild the curves. Reconstruction requires a dense sampling of local features, i.e., parts of the curve that are close in Euclidean distance but far apart geodesically. We show that ε < 0.47‐sampling is sufficient for our proposed HNN‐Crust variant, improving upon the state‐of‐the‐art requirement of ε < 1/3‐sampling. Thus we may reconstruct curves with many fewer samples. We also present a new sampling scheme that reduces the required density even further than ε < 0.47‐sampling. We achieve this by better controlling the spacing between geodesically consecutive points. Our novel sampling condition is based on the reach, the minimum local feature size along intervals between samples. This is mathematically closer to the reconstruction density requirements, particularly near sharp‐angled features. We prove lower and upper bounds on reach ρ‐sampling density in terms of lfs ε‐sampling and demonstrate that we typically reduce the required number of samples for reconstruction by more than half.
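A worked check of the lfs‐based condition on the simplest possible case: for a circle of radius r the medial axis is its centre, so lfs = r everywhere, and N uniformly spaced samples form an ε‐sampling exactly when the farthest curve point from its nearest sample, at chord distance 2r·sin(π/(2N)), lies within ε·r. This gives the minimum N directly (a sketch of the counting argument only, not the paper's bounds).

```python
# Smallest N of uniform circle samples satisfying the eps-sampling
# condition 2*sin(pi/(2N)) <= eps (lfs = r cancels out).
import math

def min_samples(eps):
    return math.ceil(math.pi / (2 * math.asin(eps / 2)))

for eps in [1 / 3, 0.47]:
    print(f"eps = {eps:.3f}: N >= {min_samples(eps)}")
```

Even on this toy case, relaxing the bound from ε < 1/3 to ε < 0.47 drops the required sample count from 10 to 7, illustrating why weaker sampling conditions matter.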

12.
Displaying geometry in flow visualization is often accompanied by occlusion problems, making it difficult to perceive information that is relevant in the respective application. In a recent technique, named opacity optimization, balancing occlusion avoidance against the selection of meaningful geometry was recognized to be a view‐dependent, global optimization problem. The method solves a bounded‐variable least‐squares problem, which minimizes energy terms for the reduction of occlusion and background clutter and for smoothness and regularization. The original technique operates on an object‐space discretization and was shown for line and surface geometry. Recently, it has been extended to volumes, where it was solved locally per ray by dropping the smoothness energy term and replacing it with pre‐filtering of the importance measure. In this paper, we pick up the idea of splitting the opacity optimization problem into two smaller problems. The first problem is a minimization with an analytic solution, and the second problem is a smoothing of the obtained minimizer in object space. Thereby, the minimization problem can be solved locally per pixel, making it possible to combine all geometry types (points, lines and surfaces) consistently in a single optimization framework. We call this decoupled opacity optimization and apply it to a number of steady 3D vector fields.
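A minimal sketch of the decoupling idea, with placeholder weights rather than the paper's energy terms: step 1 minimizes, per element, a quadratic E(α) = g·(1−α)² + o·α², where g rewards showing the element and o penalizes the occlusion it causes; setting dE/dα = 0 gives the closed form α = g/(g + o). Step 2 smooths the resulting opacities along each primitive in object space.

```python
# Decoupled opacity optimization, toy version: analytic per-element
# minimizer followed by object-space smoothing along one line.
import numpy as np
from scipy.ndimage import uniform_filter1d

rng = np.random.default_rng(0)
g = rng.random(100)                  # importance of showing each vertex
o = rng.random(100)                  # occlusion pressure at each vertex

alpha = g / (g + o)                  # step 1: closed-form minimizer of E
alpha_smooth = uniform_filter1d(alpha, size=9)   # step 2: smoothing pass
print(alpha_smooth[:5])
```

Because step 1 needs no global solve, it can run independently per pixel, which is what lets the method treat points, lines, and surfaces uniformly.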

13.
Tile maps are an important tool in thematic cartography with distinct qualities (and limitations) that distinguish them from better‐known techniques such as choropleths, cartograms and symbol maps. Specifically, tile maps display geographic regions as a grid of identical tiles so large regions do not dominate the viewer's attention and small regions are easily seen. Furthermore, complex data such as time series can be shown on each tile in a consistent format, and the grid layout facilitates comparisons across tiles. Whilst a small number of handcrafted tile maps have become popular, the time‐consuming process of creating new tile maps limits their wider use. To address this issue, we present an algorithm that generates a tile map of the specified type (e.g. square, hexagon, triangle) from raw shape data. Since the ‘best’ tile map depends on the specific geography visualized and the task to be performed, the algorithm generates and ranks multiple tile maps and allows the user to choose the most appropriate. The approach is demonstrated on a range of examples using a prototype browser‐based application.
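A minimal sketch of the core layout step only (not the generation‐and‐ranking pipeline): assign each geographic region to one square grid tile by minimizing total centroid‐to‐tile displacement with the Hungarian algorithm. The centroids below are made‐up placeholders for real shape data.

```python
# One-region-per-tile assignment by minimum total displacement.
import numpy as np
from scipy.optimize import linear_sum_assignment

centroids = np.array([[0.1, 0.9], [0.4, 0.8], [0.2, 0.5],
                      [0.6, 0.6], [0.8, 0.9], [0.7, 0.2]])
gw, gh = 3, 2                                   # a 3 x 2 grid of square tiles
tiles = np.array([[(x + 0.5) / gw, (y + 0.5) / gh]
                  for y in range(gh) for x in range(gw)])

cost = np.linalg.norm(centroids[:, None] - tiles[None, :], axis=2)
rows, cols = linear_sum_assignment(cost)        # optimal bijection
for r, c in zip(rows, cols):
    print(f"region {r} -> tile ({c % gw}, {c // gw})")
```

Re-running this for several grid shapes (and for hexagonal or triangular tilings) and scoring the results would yield the ranked set of candidate maps the abstract describes.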

14.
This paper investigates contrast enhancement as an approach to tone reduction, aiming to convert a photograph to black and white. Using a filter‐based approach to strengthen contrast, we avoid making a hard decision about how to assign tones to segmented regions. Our method is inspired by sticks filtering, which has been used to enhance medical images but not previously in non‐photorealistic rendering. We amplify the contrast of pixels along the direction of greatest local difference from the mean, strengthening even weak features if they are most prominent. A final thresholding step converts the contrast‐enhanced image to black and white. Local smoothing and contrast enhancement balance abstraction against structure preservation; the main advantage of our method is its faithful depiction of image detail. Our method can create a range of effects: line drawing, hatching, and black and white, all preserving detail better than previous black‐and‐white methods.
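A simplified variant of this pipeline, with the directional sticks filter replaced by an isotropic local mean for brevity: amplify each pixel's deviation from its neighbourhood mean, then threshold to black and white. The gain and threshold values are arbitrary placeholders.

```python
# Local contrast amplification followed by hard thresholding.
import numpy as np
from scipy.ndimage import uniform_filter

def enhance_and_threshold(gray, radius=5, gain=3.0, thresh=0.5):
    local_mean = uniform_filter(gray, size=2 * radius + 1)
    enhanced = local_mean + gain * (gray - local_mean)   # boost local contrast
    return (enhanced > thresh).astype(np.uint8)          # final tone reduction

rng = np.random.default_rng(0)
gray = rng.random((64, 64))               # placeholder grayscale image
bw = enhance_and_threshold(gray)
print(bw.mean())                          # fraction of white pixels
```

The paper's directional sticks filter amplifies along the locally most contrasting orientation instead of isotropically, which is what preserves thin features that an isotropic mean would wash out.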

15.
Vector field topology is a powerful and mature tool for the study of the asymptotic behavior of tracer particles in steady flows. Yet, it does not capture the behavior of finite‐sized particles, because they develop inertia and do not move tangentially to the flow. In this paper, we use the fact that the trajectories of inertial particles can be described as tangent curves of a higher‐dimensional vector field. Using this, we conduct a full classification of the first‐order critical points of this higher‐dimensional flow, and devise a method for their efficient extraction. Further, we interactively visualize the asymptotic behavior of finite‐sized particles by a glyph visualization that encodes the outcome of any initial condition of the governing ODE, i.e., for a varying initial position and/or initial velocity. With this, we present a first approach to extending traditional vector field topology to the inertial case.
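A minimal sketch of the key observation: the trajectory of an inertial particle with position x and velocity v is a tangent curve of the phase‐space field (dx/dt, dv/dt) = (v, (u(x) − v)/r), where u is the carrying flow and r a response time standing in for particle size. The swirl field and parameter values are placeholders.

```python
# Integrate an inertial particle as a tangent curve of a 4D phase-space field.
import numpy as np
from scipy.integrate import solve_ivp

def u(x):                                   # steady 2D carrier flow (a swirl)
    return np.array([-x[1], x[0]])

def phase_field(t, state, r):
    x, v = state[:2], state[2:]
    return np.concatenate([v, (u(x) - v) / r])   # the 4D tangent vector

state0 = [1.0, 0.0, 0.0, 0.0]               # initial position and velocity
sol = solve_ivp(phase_field, (0, 10), state0, args=(0.5,), max_step=0.01)
print("final position:", sol.y[:2, -1])
```

Critical points of `phase_field` (where both v and u(x) − v vanish) are exactly the first‐order critical points the paper classifies.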

16.
In this paper, we propose a method to maintain the temporal coherence of stylized feature lines extracted from 3D models and preserve an artistically intended stylization provided by the user. We formally define the problem of combining spatio‐temporal continuity and artistic intention as a weighted energy minimization problem of competing constraints. The proposed method updates the style properties to provide real‐time smooth transitions from the current to the goal stylization, by assuring first‐ and second‐order temporal continuity, as well as spatial continuity along each stroke. The proposed weighting scheme guarantees that the stylization of strokes maintains motion coherence with respect to the apparent motion of the underlying surface in consecutive frames. This weighting scheme emphasizes temporal continuity for small apparent motions, where the human vision system is able to keep track of the scene, and prioritizes the artistic intention for large apparent motions, where temporal coherence is not expected. The proposed method produces temporally coherent and visually pleasing animations without the flickering artifacts of previous methods, while also maintaining the artistic intention of a goal stylization provided by the user.
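A minimal sketch of the weighting idea only; the exponential weight and the two‐component style are placeholders, not the paper's energy terms: small apparent motion favours continuity with the previous frame's style, large motion snaps toward the artist's goal stylization.

```python
# Motion-dependent blend between temporal continuity and artistic intent.
import numpy as np

def blend_style(prev_style, goal_style, apparent_motion, sigma=2.0):
    w = np.exp(-apparent_motion / sigma)     # w -> 1 for small motion
    return w * prev_style + (1 - w) * goal_style

prev, goal = np.array([0.2, 1.0]), np.array([0.8, 3.0])  # e.g. (opacity, width)
for motion in [0.1, 2.0, 10.0]:
    print(motion, blend_style(prev, goal, motion))
```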

17.
The visual analysis of flows with inertial particle trajectories is a challenging problem because time‐dependent particle trajectories additionally depend on mass, which gives rise to an infinite number of possible trajectories passing through every point in space‐time. This paper presents an approach to the comparative visualization of the inertial particles’ separation behavior. For this, we define the Finite‐Time Mass Separation (FTMS), a scalar field that measures at each point in the domain how quickly inertial particles separate that were released from the same location but with slightly different masses. Extracting and visualizing the mass that induces the largest separation provides a simplified view of the critical masses. Using complementary coordinated views, we additionally visualize the corresponding inertial particle trajectories in space‐time by integral curves and surfaces. For a quantitative analysis, we plot Euclidean and arc‐length‐based distances to a reference particle over time, which allows us to observe the temporal evolution of separation events. We demonstrate our approach on a number of analytic fields and one real‐world unsteady 2D field.
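A minimal sketch of the FTMS idea, reusing the phase‐space model from the sketch after reference 15: release two inertial particles from the same point with slightly different response times r and r + δ, integrate both, and turn their end‐point separation into an FTLE‐style exponent. The toy flow and constants are placeholders, and the finite difference here stands in for the paper's exact definition.

```python
# Finite-Time Mass Separation, toy version via a finite difference in mass.
import numpy as np
from scipy.integrate import solve_ivp

def u(x):
    return np.array([-x[1], x[0] + 0.3 * np.sin(x[0])])   # placeholder 2D flow

def rhs(t, s, r):
    x, v = s[:2], s[2:]
    return np.concatenate([v, (u(x) - v) / r])

def ftms(x0, r=0.5, dr=1e-3, T=5.0):
    end = lambda r_: solve_ivp(rhs, (0, T), [*x0, 0, 0], args=(r_,),
                               max_step=0.01).y[:2, -1]
    sep = np.linalg.norm(end(r + dr) - end(r))
    return np.log(sep / dr) / T            # separation exponent per unit time

print(ftms([1.0, 0.0]))
```

Evaluating `ftms` on a grid of seed points (and sweeping r for the maximizing mass) yields the scalar field the abstract visualizes.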

18.
Visualization researchers have been increasingly leveraging crowdsourcing approaches to overcome a number of limitations of controlled laboratory experiments, including small participant sample sizes and narrow demographic backgrounds of study participants. However, as a community, we have little understanding of when, where, and how researchers use crowdsourcing approaches for visualization research. In this paper, we review the use of crowdsourcing for evaluation in visualization research. We analyzed 190 crowdsourcing experiments, reported in 82 papers that were published in major visualization conferences and journals between 2006 and 2017. We tagged each experiment along 36 dimensions that we identified for crowdsourcing experiments. We grouped these dimensions into six important aspects: study design & procedure, task type, participants, measures & metrics, quality assurance, and reproducibility. We report the main findings of our review and discuss challenges and opportunities for improvements in conducting crowdsourcing studies for visualization research.

19.
This paper generalizes the well‐known Diffusion Curve Images (DCI), which are composed of a set of Bezier curves with colors specified on either side. These colors are diffused as Laplace functions over the image domain, resulting in smooth color gradients interrupted by the Bezier curves. Our new formulation allows for more color control away from the boundary, providing expressive power similar to that of recent Bilaplace image models without introducing their associated issues and computational costs. The new model is based on a special Laplace function blending and a new edge blur formulation. We demonstrate that, given some user‐defined boundary curves over an input raster image, fitting colors and edge blur from the image to the new model and subsequently editing and animating the result is just as convenient as with DCIs. Numerous examples and comparisons to DCIs are presented.
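A minimal sketch of the machinery underlying DCIs: colors fixed on curve pixels diffuse harmonically into the rest of the domain via relaxation of the Laplace equation. The single straight "curve" with one color channel is a placeholder for real two‐sided Bezier boundary curves.

```python
# Harmonic color diffusion from fixed curve pixels (Jacobi relaxation).
import numpy as np

n = 64
img = np.zeros((n, n))
fixed = np.zeros((n, n), dtype=bool)
img[n // 2, 10:54] = 1.0                     # color specified along a curve
fixed[n // 2, 10:54] = True
fixed[0, :] = fixed[-1, :] = fixed[:, 0] = fixed[:, -1] = True  # zero border

for _ in range(2000):                        # Jacobi iterations
    avg = 0.25 * (np.roll(img, 1, 0) + np.roll(img, -1, 0)
                  + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    img = np.where(fixed, img, avg)          # keep curve/border pixels fixed

print(img[n // 2 + 5, 32])                   # diffused color near the curve
```

The paper's generalization blends several such Laplace solutions, which is what restores color control away from the boundary without resorting to a fourth‐order Bilaplace solve.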

20.
Given a cross field over a triangulated surface, we present a practical and robust method to compute a field‐aligned coarse quad layout over the surface. The method works directly on a triangle mesh without requiring any parametrization; it is based on a new technique for tracing field‐coherent geodesic paths directly on a triangle mesh, and on a new relaxed formulation of a binary LP problem, which allows us to extract both conforming quad layouts and coarser layouts containing t‐junctions. Our method is easy to implement, very robust, and, being directly based on the input cross field, able to generate better‐aligned layouts, even with complicated fields containing many singularities. We show results on a number of datasets and comparisons with state‐of‐the‐art methods.
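A toy illustration of the extraction machinery only, not the paper's actual constraint system: relax binary path‐selection variables to [0, 1], solve the LP, and round; this is the general relaxation pattern behind picking a consistent subset of traced paths. The costs and coverage constraints below are invented.

```python
# Relaxed binary LP: select candidate paths at minimum cost.
import numpy as np
from scipy.optimize import linprog

cost = np.array([3.0, 1.0, 2.0, 5.0])        # per-candidate-path cost
# Toy constraints: candidates 0+1 cover one side, 2+3 the other;
# at least one must be picked from each pair (written as <= after negation).
A_ub = np.array([[-1, -1, 0, 0], [0, 0, -1, -1]])
b_ub = np.array([-1, -1])
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 4)
picked = res.x > 0.5                          # round the relaxed solution
print(res.x, "->", picked)
```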
