Similar Documents
20 similar documents found.
1.
Visual representation techniques enable perception and exploration of scientific data. Following the topological landscapes metaphor of Weber et al., we provide a new algorithm for visualizing scalar functions defined on simply connected domains of arbitrary dimension. For a potentially high-dimensional scalar field, our algorithm produces a collection of two-dimensional terrain models, complete in a well-defined sense, whose contour trees and corresponding topological persistences are identical to those of the input scalar field. The algorithm exactly preserves the volume of each region corresponding to an arc in the contour tree. We also introduce an efficiently computable metric on the terrain models we generate. Based on this metric, we develop a tool that helps users explore the space of possible terrain models.

2.
Empty-space skipping is an essential acceleration technique for volume rendering. Image-order empty-space skipping is not well suited to GPU implementation, since it must perform checks on essentially a per-sample basis, as in kd-tree traversal; this can lead to a great deal of divergent branching at runtime, which is very expensive in a modern GPU pipeline. In contrast, object-order empty-space skipping is extremely fast on a GPU and has negligible overhead compared with approaches without empty-space skipping, since it employs the hardware rasterisation unit. However, previous object-order algorithms have been able to skip only exterior empty space, not the interior empty space that lies inside or between volume objects. In this paper, we address these issues by proposing a multi-layer depth-peeling approach that obtains all of the depth layers of the tight-fitting bounding geometry of the isosurface in a single rasterisation pass. Our approach can peel up to thousands of layers while maintaining 32-bit floating-point accuracy, which was not possible previously. By ray tracing only the valid ray segments between each consecutive pair of depth layers, we can skip both interior and exterior empty space efficiently. In comparisons with three state-of-the-art GPU isosurface rendering algorithms, this technique achieved much faster rendering across a variety of data sets.
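To make the ray-segment idea concrete, here is a minimal CPU-side Python sketch (not the authors' GPU implementation): given the peeled depth layers along one pixel's ray, samples are generated only inside the intervals bounded by consecutive entry/exit layer pairs, so the empty space outside the bounding geometry is never visited. The function name and the uniform step size are illustrative assumptions.

```python
import numpy as np

def samples_between_depth_layers(depth_layers, step):
    """Return ray-sample depths restricted to the intervals between
    consecutive (entry, exit) depth-layer pairs of one pixel's ray.

    depth_layers : sorted 1D array of peeled depths (even count assumed);
                   layers 0-1, 2-3, ... bound the non-empty regions.
    step         : sampling distance along the ray.
    """
    depths = np.asarray(depth_layers, dtype=np.float64)
    segments = depths.reshape(-1, 2)           # (entry, exit) pairs
    samples = []
    for entry, exit_ in segments:
        if exit_ > entry:                      # skip degenerate layers
            samples.append(np.arange(entry, exit_, step))
    return np.concatenate(samples) if samples else np.empty(0)

# Example: two occupied intervals; the gap between them is never sampled.
print(samples_between_depth_layers([1.0, 1.5, 4.0, 4.2], 0.1))
```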

3.
Salience detection is a principal mechanism for facilitating visual attention. A good visualization guides the observer's attention to the relevant aspects of the representation. Hence, the distribution of salience over a visualization image is an essential measure of the quality of the visualization. We describe a method for computing such a metric for a visualization image in the context of a given dataset. We show how this technique can be used to analyze a visualization's salience, improve an existing visualization, and choose the best representation from a set of alternatives. The usefulness of the proposed metric is illustrated using examples from information visualization, volume visualization, and flow visualization.
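The abstract does not state how salience is computed, so the sketch below uses a generic center-surround luminance contrast (the difference of two Gaussian blurs) as a stand-in salience map. The blur scales and luminance weights are assumptions and this is not the paper's metric; it only illustrates turning an image into a salience distribution that could then be compared against the intended focus regions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def salience_map(rgb, sigma_center=2.0, sigma_surround=8.0):
    """Center-surround contrast salience of an RGB image with values in [0, 1]."""
    luminance = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    center = gaussian_filter(luminance, sigma_center)
    surround = gaussian_filter(luminance, sigma_surround)
    salience = np.abs(center - surround)
    return salience / (salience.max() + 1e-12)   # normalise for comparison

# A visualization whose salience concentrates on the relevant regions
# would score well under such a per-image distribution measure.
img = np.random.rand(128, 128, 3)
print(salience_map(img).mean())
```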

4.
We present a practical real-time approach for rendering lens-flare effects. While previous work employed costly ray tracing or complex polynomial expressions, we present a coarser but significantly faster solution. Our method is based on a first-order approximation of the ray transfer in an optical system, which allows us to derive a matrix that maps flare-producing light rays directly to the sensor. The resulting approach is easy to implement and produces physically plausible images at high frame rates on standard off-the-shelf graphics hardware.
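The first-order (paraxial) ray-transfer formalism mentioned above composes 2x2 matrices for free-space propagation and refraction; a ray is a (height, angle) pair, and the product of the element matrices maps it directly to the sensor plane. The sketch below shows that composition for a single thin lens; the focal length and distances are illustrative values, not taken from the paper.

```python
import numpy as np

def propagation(d):
    """Free-space propagation over distance d (paraxial)."""
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    """Refraction at a thin lens of focal length f (paraxial)."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# System matrix: lens of f = 50 mm with the sensor 55 mm behind it (illustrative).
system = propagation(55.0) @ thin_lens(50.0)

# A flare-producing ray entering at height 3 mm with angle 0.02 rad
# is mapped directly to its (height, angle) on the sensor.
ray_in = np.array([3.0, 0.02])
print(system @ ray_in)
```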

5.
In this paper we show how to use two-colored pixels as a generic tool for image processing. We apply two-colored pixels both as a basic operator and as a supporting data structure for several image processing applications. Traditionally, images are represented by a regular grid of square pixels, each with one constant color. In the two-colored pixel representation, we reduce the image resolution and replace each block of N × N pixels by one square that is split by a (feature) line into two regions with constant colors. We show how the conversion of standard mono-colored pixel images into two-colored pixel images can be computed efficiently with a hierarchical algorithm and a CUDA-based implementation. Two-colored pixels overcome some of the limitations of classical pixel representations, and their feature lines provide minimal geometric information about the underlying image region that can be effectively exploited for a number of applications. We show how to use two-colored pixels as an interactive brush tool, achieving real-time performance for image abstraction and non-photorealistic filtering. Additionally, we propose a real-time solution for image retargeting, formulated as a linear minimization problem on a regular or even adaptive two-colored pixel image. The concept of two-colored pixels extends easily to a video volume, which we demonstrate for the example of video retargeting.
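As a rough illustration of the two-colored pixel representation (not the paper's hierarchical CUDA conversion), the sketch below fits one N × N block by brute force: it tests a small set of candidate split lines, averages the colors on each side, and keeps the line with the lowest squared error. The candidate-line sampling density is an assumption.

```python
import numpy as np

def fit_two_colored_pixel(block, n_angles=16, n_offsets=9):
    """Fit a single split line and two constant colors to an NxN RGB block.

    Returns (normal, offset, color_a, color_b, sse): pixels whose centre
    satisfies x . normal < offset get color_a, the rest get color_b.
    """
    n = block.shape[0]
    ys, xs = np.mgrid[0:n, 0:n]
    coords = np.stack([xs + 0.5, ys + 0.5], axis=-1)      # pixel centres
    best = None
    for angle in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        normal = np.array([np.cos(angle), np.sin(angle)])
        proj = coords @ normal
        for offset in np.linspace(proj.min(), proj.max(), n_offsets):
            side = proj < offset
            if side.all() or (~side).all():
                continue                                   # degenerate split
            color_a = block[side].mean(axis=0)
            color_b = block[~side].mean(axis=0)
            approx = np.where(side[..., None], color_a, color_b)
            sse = float(((block - approx) ** 2).sum())
            if best is None or sse < best[-1]:
                best = (normal, offset, color_a, color_b, sse)
    return best

block = np.random.rand(8, 8, 3)
print(fit_two_colored_pixel(block)[-1])   # residual of the best split found
```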

6.
Segmenting a moving foreground (fg) from its background (bg) is a fundamental step in many Machine Vision and Computer Graphics applications. Nevertheless, hardly any attempts have been made to tackle this problem in dynamic 3D scanned scenes. Scanned dynamic scenes are typically challenging due to noise and large missing parts. Here, we present a novel approach for motion segmentation in dynamic point-cloud scenes designed to cater to the unique properties of such data. Our key idea is to augment fg/bg classification with an active learning framework that refines the segmentation in an adaptive manner. Our method initially classifies the scene points as either fg or bg in an unsupervised manner by training discriminative RBF-SVM classifiers on automatically labeled, high-certainty fg/bg points. Next, we adaptively detect unreliable classification regions (i.e. where fg/bg separation is uncertain), locally add more training examples to better capture the motion in these areas, and re-train the classifiers to fine-tune the segmentation. This not only improves segmentation accuracy, but also allows our method to proceed in a coarse-to-fine manner and thereby efficiently process high-density point clouds. Additionally, we present a unique interactive paradigm for enhancing this learning process using a manual editing tool: the user explicitly edits the RBF-SVM decision borders in unreliable regions in order to refine and correct the classification. We provide extensive qualitative and quantitative experiments on both real (scanned) and synthetic dynamic scenes.
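A minimal sketch of the classification stage using scikit-learn: an RBF-kernel SVM is trained on automatically labelled high-certainty fg/bg points and applied to the remaining scene, with low-confidence predictions flagged as regions that need more local training data. The per-point features and the confidence threshold are illustrative assumptions; the paper's adaptive refinement loop and interactive boundary editing are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Assumed per-point motion features for labelled high-certainty points
# (e.g. displacement magnitude and a local noise estimate).
X_train = rng.normal(size=(200, 2))
y_train = (X_train[:, 0] > 0.0).astype(int)      # 1 = foreground, 0 = background

clf = SVC(kernel="rbf", probability=True)
clf.fit(X_train, y_train)

# Classify the remaining scene points and flag unreliable regions,
# i.e. points whose fg probability is close to the decision boundary.
X_scene = rng.normal(size=(1000, 2))
prob_fg = clf.predict_proba(X_scene)[:, 1]
unreliable = np.abs(prob_fg - 0.5) < 0.15        # candidates for more training data
print(unreliable.sum(), "points need local refinement")
```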

7.
We present an automatic image-recoloring technique for enhancing color contrast for dichromats whose computational cost varies linearly with the number of input pixels. Our approach can be efficiently implemented on GPUs, and we show that for typical image sizes it is up to two orders of magnitude faster than the current state-of-the-art technique. Unlike previous approaches, ours preserves temporal coherence and is therefore suitable for video recoloring. We demonstrate the effectiveness of our technique by integrating it into a visualization system and showing, for the first time, real-time high-quality recolored visualizations for dichromats.

8.
We present user-controllable and plausible defocus blur for a stochastic rasterizer. We modify circle-of-confusion coefficients per vertex to express more general defocus blur, and show how the method can be applied to limit the foreground blur, extend the in-focus range, simulate tilt-shift photography, and specify per-object defocus blur. Furthermore, with two simplifying assumptions, we show that existing triangle coverage tests and tile culling tests can be used with very modest modifications. Our solution is temporally stable and handles simultaneous motion blur and depth of field.
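To make the per-vertex control concrete, here is a sketch built on the textbook thin-lens circle-of-confusion formula, with an illustrative per-vertex scale and clamp applied afterwards to mimic effects such as limiting foreground blur or extending the in-focus range; the parameter names and values are assumptions, not the paper's exact coefficients.

```python
def circle_of_confusion(z, focus_dist, focal_len, aperture,
                        coc_scale=1.0, coc_max=float("inf")):
    """Thin-lens circle-of-confusion diameter at depth z, with an
    illustrative per-vertex scale and clamp for artistic control.

    z, focus_dist and focal_len share the same units (e.g. metres);
    aperture is the lens diameter.
    """
    coc = aperture * abs(z - focus_dist) / z * focal_len / (focus_dist - focal_len)
    return min(coc * coc_scale, coc_max)

# Physically based blur at 1 m when focused at 3 m with a 50 mm, f/2 lens...
print(circle_of_confusion(1.0, 3.0, 0.05, 0.025))
# ...and the same vertex with its foreground blur limited by a clamp.
print(circle_of_confusion(1.0, 3.0, 0.05, 0.025, coc_max=0.002))
```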

9.
Medical illustrations have long been used for teaching and for communicating information for diagnosis or surgery planning. Illustrative visualization systems provide methods and tools that adapt traditional illustration techniques to enhance the results of renderings. Clipping the volume is a popular operation in volume rendering for inspecting inner parts, though it may remove contextual information that is worth preserving. In this paper we present a new editing technique based on clipping planes, direct structure extrusion, and illustrative methods, which preserves the context by adapting the extruded region to the structures of interest of the volumetric model. We show that users can interactively modify the clipping plane and edit the structures to highlight in order to easily create the desired result. Our approach works with both segmented and non-segmented volume models; in the latter case, a local segmentation is performed on the fly. We demonstrate the efficiency and utility of our method.

10.
We present a new algorithm for efficient rendering of high-quality depth-of-field (DoF) effects. We start with a single rasterized view (reference view) of the scene, and sample the light field by warping the reference view to nearby views. We implement the algorithm using NVIDIA's CUDA to achieve parallel processing, and exploit atomic operations to resolve visibility when multiple pixels warp to the same image location. We then directly synthesize DoF effects from the sampled light field. To reduce aliasing artifacts, we propose an image-space filtering technique that compensates for spatial undersampling using MIP mapping. The main advantages of our algorithm are its simplicity and generality. We demonstrate interactive rendering of DoF effects in several complex scenes. Compared to existing methods, ours does not require ray tracing and hence scales well with scene complexity.
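A CPU sketch of the warp-and-resolve idea: each reference-view pixel is shifted by a disparity proportional to its signed defocus for a given lens sample, and when several pixels land on the same target location the nearest one wins, which is the role the atomic operations play on the GPU. The disparity model and the array-based depth test are illustrative simplifications of the CUDA implementation.

```python
import numpy as np

def warp_reference_view(color, depth, lens_offset, focus_depth, strength):
    """Forward-warp a reference view to one lens sample.

    color       : (H, W, 3) reference image
    depth       : (H, W) per-pixel depth
    lens_offset : (dx, dy) position of the lens sample on the aperture
    Disparity grows with the signed defocus (1 - focus_depth / depth).
    """
    h, w, _ = color.shape
    out_color = np.zeros_like(color)
    out_depth = np.full((h, w), np.inf)          # plays the role of atomicMin
    ys, xs = np.mgrid[0:h, 0:w]
    defocus = 1.0 - focus_depth / depth
    tx = np.clip(np.round(xs + strength * lens_offset[0] * defocus), 0, w - 1).astype(int)
    ty = np.clip(np.round(ys + strength * lens_offset[1] * defocus), 0, h - 1).astype(int)
    for y in range(h):                           # scatter; nearest depth wins
        for x in range(w):
            u, v = ty[y, x], tx[y, x]
            if depth[y, x] < out_depth[u, v]:
                out_depth[u, v] = depth[y, x]
                out_color[u, v] = color[y, x]
    return out_color

img = np.random.rand(32, 32, 3)
z = np.random.uniform(1.0, 5.0, (32, 32))
warped = warp_reference_view(img, z, (1.0, 0.0), focus_depth=2.5, strength=4.0)
```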

11.
In this paper, we develop an interactive analysis and visualization tool for probabilistic segmentation results in medical imaging. We provide a systematic approach to analyzing, interacting with, and highlighting regions of segmentation uncertainty. We introduce a set of visual analysis widgets integrating different approaches to analyze multivariate probabilistic field data with direct volume rendering. We demonstrate the user's ability to identify suspicious regions (e.g. tumors) and correct misclassification results using a novel uncertainty-based segmentation editing technique. We evaluate our system and demonstrate its usefulness in the context of static and time-varying medical imaging datasets.
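One common way to quantify where a probabilistic segmentation is uncertain is the per-voxel entropy of the class probabilities; the sketch below uses that measure to flag candidate regions for inspection or editing. Entropy is a stand-in here, since the abstract does not state which uncertainty measure the system uses.

```python
import numpy as np

def voxel_uncertainty(prob):
    """Normalised entropy per voxel for a (X, Y, Z, C) probability field."""
    p = np.clip(prob, 1e-12, 1.0)
    entropy = -(p * np.log(p)).sum(axis=-1)
    return entropy / np.log(prob.shape[-1])       # 0 = certain, 1 = maximally uncertain

# Three-class probabilistic segmentation of a small volume (illustrative data).
rng = np.random.default_rng(1)
prob = rng.dirichlet([1.0, 1.0, 1.0], size=(16, 16, 16))
uncertainty = voxel_uncertainty(prob)
suspicious = uncertainty > 0.8                    # regions worth interactive editing
print(suspicious.sum(), "voxels flagged")
```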

12.
Bidirectional Texture Functions (BTFs) are among the highest-quality material representations available today and are thus well suited whenever an exact reproduction of the appearance of a material or complete object is required. In recent years, BTFs have started to find application in various industrial settings, and there is also growing interest in the cultural heritage domain. BTFs are usually measured from real-world samples and easily consist of tens or hundreds of gigabytes. By using data-driven compression schemes, such as matrix or tensor factorization, a more compact but still faithful representation can be derived. This way, BTFs can be employed for real-time rendering of photo-realistic materials on the GPU. However, scenes containing multiple BTFs or even single objects with high-resolution BTFs easily exceed the available GPU memory on today's consumer graphics cards unless quality is drastically reduced by the compression. In this paper, we propose the Bidirectional Sparse Virtual Texture Function, a hierarchical level-of-detail approach for the real-time rendering of large BTFs that requires only a small amount of GPU memory. More importantly, for larger numbers of BTFs or higher resolutions, the GPU and CPU memory demand grows only marginally and the GPU workload remains constant. For this, we extend the concept of sparse virtual textures by choosing an appropriate prioritization, finding a trade-off between factorization components and spatial resolution. Besides GPU memory, the high demand on bandwidth poses a serious limitation for the deployment of conventional BTFs. We show that our proposed representation can be combined with an additional transmission compression and then be employed for streaming the BTF data to the GPU from local storage media or over the Internet. In combination with the introduced prioritization, this allows for fast visualization of relevant content in the user's field of view and subsequent progressive refinement.
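The data-driven compression referred to above is typically a truncated matrix factorization of the BTF arranged as an angular x spatial matrix; the SVD sketch below shows the basic trade-off between retained components and reconstruction error. The matrix layout, sizes, and component count are illustrative; the paper builds its level-of-detail and virtual-texture scheme on top of such a factorization rather than the plain SVD shown here.

```python
import numpy as np

# Toy "BTF" matrix: rows = view/light directions, columns = texels (illustrative sizes).
rng = np.random.default_rng(2)
btf = rng.random((151 * 4, 256))                  # angular x spatial

# Truncated SVD: keep k components as the compressed representation.
u, s, vt = np.linalg.svd(btf, full_matrices=False)
k = 16
compressed = (u[:, :k] * s[:k], vt[:k])           # factors stored instead of the full matrix

# Reconstruction error versus the uncompressed data.
reconstructed = compressed[0] @ compressed[1]
rel_error = np.linalg.norm(btf - reconstructed) / np.linalg.norm(btf)
print(f"relative error with {k} components: {rel_error:.3f}")
```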

13.
We present a new approach aimed at understanding the structure of connections in edge-bundling layouts. We combine the advantages of edge bundles with a bundle-centric simplified visual representation of a graph's structure. For this, we first compute a hierarchical edge clustering of a given graph layout which groups similar edges together. Next, we render clusters at a user-selected level of detail using a new image-based technique that combines distance-based splatting and shape skeletonization. The overall result displays a given graph as a small set of overlapping shaded edge bundles. Luminance, saturation, hue, and shading encode edge density, edge types, and edge similarity. Finally, we add brushing and a new type of semantic lens to help navigation where local structures overlap. We illustrate the proposed method on several real-world graph datasets.

14.
During the development of car engines, regression models based on machine learning techniques are increasingly important for tasks that require predicting results in real time. While the validation of a model is a key part of its identification process, existing computation- or visualization-based techniques do not adequately support all aspects of model validation. The main contribution of this paper is an interactive approach called HyperMoVal that is designed to support multiple tasks related to model validation: 1) comparing known and predicted results, 2) analyzing regions with a bad fit, 3) assessing the physical plausibility of models even outside regions covered by validation data, and 4) comparing multiple models. The key idea is to visually relate one or more n-dimensional scalar functions to known validation data within a combined visualization. HyperMoVal lays out multiple 2D and 3D sub-projections of the n-dimensional function space around a focal point. We describe how linking HyperMoVal to other views further extends the possibilities for model validation. Based on this integration, we discuss steps towards supporting the entire workflow of identifying regression models. An evaluation illustrates a typical workflow in the application context of car-engine design and reports general feedback from domain experts and users of our approach. These results indicate that our approach significantly accelerates the identification of regression models and increases confidence in the overall engineering process.
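Tasks 1 and 2 above, comparing known and predicted results and locating regions with a bad fit, reduce to residual analysis over the validation data; the sketch below computes residuals for an assumed regression model and flags the validation points with the largest errors. The model, data, and threshold are placeholders, not the paper's car-engine use case.

```python
import numpy as np

def worst_fit_regions(model, X_val, y_val, quantile=0.9):
    """Return validation points whose absolute residual is in the top decile."""
    residuals = np.abs(model(X_val) - y_val)
    threshold = np.quantile(residuals, quantile)
    return X_val[residuals >= threshold], residuals

# Assumed 2D regression model and noisy validation measurements.
rng = np.random.default_rng(3)
X_val = rng.uniform(-1.0, 1.0, size=(500, 2))
y_val = np.sin(3.0 * X_val[:, 0]) + 0.1 * rng.normal(size=500)
model = lambda X: np.sin(3.0 * X[:, 0]) * (np.abs(X[:, 1]) < 0.8)   # deliberately poor near |x2| ~ 1

bad_points, residuals = worst_fit_regions(model, X_val, y_val)
print(len(bad_points), "validation points lie in badly fitted regions")
```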

15.
The visual analysis of multivariate projections is a challenging task because complex visual structures occur. This causes fatigue or misinterpretations, which distort the analysis. In fact, the same projection can lead to different analysis results. We provide visual guidance pictograms to improve the objectivity of the visual search. A visual guidance pictogram is an iconic visual density map encoding the visual structure of certain data properties. By using them to guide the analysis, structures in the projection can be better understood and mentally linked to properties in the data. We introduce a systematic scheme for designing such pictograms and provide a set of pictograms for standard visual tasks, such as correlation and distribution analysis, for standard projections like scatterplots, RadVis, and Star Coordinates. We conduct a study that compares the visual analysis of real data with and without the support of guidance pictograms. Our tests show that supporting the user's visual search with guidance pictograms decreases training effort and reduces analysis bias.

16.
In this paper we present a method for automatic interpolation between adjacent discrete levels of detail to achieve smooth LOD changes in image space. We achieve this by breaking the problem into two passes: we render the two LOD levels individually and combine them in a separate pass afterwards. The interpolation is formulated so that only one level has to be updated per frame and the other can be reused from the previous frame, thereby incurring roughly the same render cost as simple non-interpolated discrete LOD rendering, plus only the slight overhead of the final combination pass. Additionally, we describe customized interpolation schemes using visibility textures. The method was designed with ease of integration into existing engines in mind. It requires neither sorting nor blending of objects, nor does it introduce any constraints on the LODs used. The LODs can be coplanar, alpha-masked, animated, impostors, or intersecting, while still interpolating smoothly.

17.
In this report, we review the current state of the art of web-based visualization applications. Recently, an increasing number of web-based visualization applications have emerged, because new technologies offered by modern browsers have greatly increased the capabilities for visualization on the web. We first review the technical aspects enabling this development. These include not only improvements for local rendering like WebGL and HTML5, but also infrastructures like grid or cloud computing platforms. Another important factor is the transfer of data between the server and the client. We therefore also discuss advances in this field, for example methods to reduce bandwidth requirements such as compression, as well as other optimizations such as progressive rendering and streaming. After establishing these technical foundations, we review existing web-based visualization applications and prototypes from various application domains. Furthermore, we propose a classification of these web-based applications based on the technologies and algorithms they employ. Finally, we discuss promising application areas that would benefit from web-based visualization and assess their feasibility based on existing approaches.

18.
We present a flexible and highly efficient hardware-assisted volume renderer grounded in the original Projected Tetrahedra (PT) algorithm. Unlike recent similar approaches, our method is based exclusively on the rasterization of simple geometric primitives and takes full advantage of graphics hardware. Both vertex and geometry shaders are used to compute the tetrahedral projection, while the volume ray integral is evaluated in a fragment shader; hence, volume rendering is performed entirely on the GPU within a single pass through the pipeline. We apply a CUDA-based visibility ordering, achieving rendering and sorting performance of over 6 M tetrahedra per second for unstructured datasets. Furthermore, as each tetrahedron is processed independently, we employ a data-parallel solution which is neither bound by GPU memory size nor reliant on auxiliary volume information. In addition, iso-surfaces can be readily extracted during the rendering process, and time-varying data are handled without extra burden.

19.
The parallel vectors (PV) operator is a feature extraction approach for defining line-type features such as creases (ridges and valleys) in scalar fields, as well as separation, attachment, and vortex core lines in vector fields. In this work, we extend PV feature extraction to higher-order data represented by piecewise analytical functions defined over grid cells. The extraction uses PV in two distinct stages. First, seed points on the feature lines are placed by evaluating the inclusion form of the PV criterion with reduced affine arithmetic. Second, a feature flow field is derived from the higher-order PV expression, from which the features are extracted as streamlines starting at the seeds. Our approach provides guaranteed accuracy bounds on the existence, position, and topology of the extracted features. The method is suitable for parallel implementation, and we present results obtained with our GPU-based prototype. We apply our method to higher-order data obtained from discontinuous Galerkin fluid simulations.
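The PV criterion itself is simple: a point lies on a feature line when the two derived vector fields v and w are parallel there, i.e. v x w = 0. The sketch below evaluates that condition on grid samples to collect seed candidates; the tolerance and the example fields are assumptions, and the paper's reduced-affine-arithmetic inclusion test and feature-flow integration are not reproduced.

```python
import numpy as np

def parallel_vectors_seeds(points, v_field, w_field, tol=1e-3):
    """Return sample points where v(x) and w(x) are (nearly) parallel."""
    v = v_field(points)
    w = w_field(points)
    cross = np.cross(v, w)                                    # zero where parallel
    measure = np.linalg.norm(cross, axis=-1)
    scale = np.linalg.norm(v, axis=-1) * np.linalg.norm(w, axis=-1) + 1e-12
    return points[measure / scale < tol]

# Illustrative fields: v is a rotation about the z-axis and w = J.v its "acceleration";
# they are parallel exactly on the rotation axis (the classic vortex-core criterion).
grid = np.stack(np.meshgrid(*[np.linspace(-1, 1, 21)] * 3, indexing="ij"), axis=-1).reshape(-1, 3)
v_field = lambda p: np.stack([-p[:, 1], p[:, 0], np.zeros(len(p))], axis=-1)
w_field = lambda p: np.stack([-p[:, 0], -p[:, 1], np.zeros(len(p))], axis=-1)
print(parallel_vectors_seeds(grid, v_field, w_field).shape)
```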

20.
This paper proposes two variants of a simple but efficient algorithm for structure-preserving halftoning. Our algorithm extends Floyd-Steinberg error diffusion; the goal of our extension is not only to produce good tone similarity but also to preserve structure and especially contrast, motivated by our intuition that human perception is sensitive to contrast. By enhancing contrast we aim to preserve and enhance structure as well. Our basic algorithm employs an adaptive, contrast-aware mask. To enhance contrast, darker pixels should be more likely to be set black while lighter pixels should be more likely to be set white. Therefore, when positive error is diffused to nearby pixels in a mask, dark pixels absorb less error and light pixels absorb more; conversely, negative error is distributed preferentially to dark pixels. We also propose using a mask with values that drop off steeply from the centre, intended to promote good spatial distribution. This basic method is very fast, with speed mainly dependent on the mask size, but it suffers from distracting patterns. We then propose a variant that overcomes the first algorithm's shortcomings while maintaining its advantages through a priority-aware scheme. Rather than proceeding in random or raster order, we sort the image first: each pixel is assigned a priority based on its up-to-date distance to black or to white, and pixels with extreme intensities are processed earlier. Since we use the same mask strategy as before, we promote good spatial distribution and high contrast. We use tone similarity, structure similarity, and contrast similarity to validate our algorithm. Comparisons with recent structure-aware algorithms show that our method gives better results without sacrificing speed.
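For reference, the Floyd-Steinberg baseline that the method extends fits in a few lines; the sketch below shows it together with an illustrative contrast-aware weighting in the spirit of the description (lighter neighbours absorb more positive error, darker neighbours more negative error). The exact mask shape, weighting, and priority ordering of the paper are not reproduced.

```python
import numpy as np

FS_OFFSETS = [(0, 1, 7 / 16), (1, -1, 3 / 16), (1, 0, 5 / 16), (1, 1, 1 / 16)]

def error_diffusion(gray, contrast_aware=False):
    """Floyd-Steinberg error diffusion on a grayscale image in [0, 1].

    With contrast_aware=True the diffusion weights are biased so that lighter
    neighbours absorb more positive error and darker neighbours more negative
    error (an illustrative variant, not the paper's exact mask).
    """
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    halftone = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            halftone[y, x] = 1.0 if img[y, x] >= 0.5 else 0.0
            err = img[y, x] - halftone[y, x]
            taps = [(y + dy, x + dx, wt) for dy, dx, wt in FS_OFFSETS
                    if 0 <= y + dy < h and 0 <= x + dx < w]
            if not taps:
                continue
            weights = np.array([wt for _, _, wt in taps])
            if contrast_aware:
                intensity = np.array([img[ny, nx] for ny, nx, _ in taps])
                bias = intensity if err > 0 else 1.0 - intensity
                weights = weights * (0.25 + bias)
            weights = weights / weights.sum()            # conserve the total error
            for (ny, nx, _), wt in zip(taps, weights):
                img[ny, nx] += err * wt
    return halftone

ramp = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
print(error_diffusion(ramp, contrast_aware=True).mean())   # overall tone roughly preserved
```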
