Similar Articles
A total of 20 similar articles were found (search time: 9 ms).
1.
In this paper, we present a rapid prototyping framework for GPU‐based volume rendering. To this end, we propose a dynamic shader pipeline based on the SuperShader concept, illustrate the design decisions, and present the key requirements that guided the development of our system. In our approach, we break the rendering shader down into areas containing code for different computations, which are defined as freely combinable, modularized shader blocks. Hence, high‐level changes of the rendering configuration result in an implicit modification of the underlying shader pipeline. Furthermore, the prototyping system allows custom shader code to be inserted between shader blocks of the pipeline at run‐time. A suitable user interface within the prototyping environment allows intuitive modification of the shader pipeline, so that appropriate solutions to visualization problems can be developed interactively. We demonstrate the usage and usefulness of our framework with implementations of dynamic rendering effects for medical applications.
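As an illustrative aside (not the paper's implementation), the sketch below shows the general idea of assembling a rendering shader from freely combinable, modularized blocks. The block names, the GLSL-like snippet strings and the `custom_code` hook are hypothetical placeholders.

```python
# Minimal sketch: assembling a volume-rendering fragment shader body from
# freely combinable, modularized shader blocks (hypothetical snippets).

SHADER_BLOCKS = {
    "ray_setup":      "vec3 pos = rayStart; vec3 step = rayDir * stepSize;",
    "sampling":       "float density = texture(volumeTex, pos).r;",
    "classification": "vec4 color = texture(transferFunc, density);",
    "shading":        "color.rgb *= simpleLighting(pos, color.rgb);",
    "compositing":    "dst.rgb += (1.0 - dst.a) * color.a * color.rgb;"
                      " dst.a += (1.0 - dst.a) * color.a;",
}

def build_shader(pipeline, custom_code=None):
    """Concatenate the selected blocks into one shader body.

    `custom_code` maps a block name to extra code inserted right after that
    block, mimicking run-time insertion of custom shader code."""
    custom_code = custom_code or {}
    body = []
    for name in pipeline:
        body.append(f"// --- block: {name} ---")
        body.append(SHADER_BLOCKS[name])
        if name in custom_code:
            body.append("// --- custom code ---")
            body.append(custom_code[name])
    return "\n".join(body)

# A high-level change (e.g. dropping "shading") implicitly rebuilds the pipeline.
print(build_shader(["ray_setup", "sampling", "classification", "compositing"],
                   custom_code={"classification": "color.a *= 0.5;"}))
```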

2.
Pixel‐based visualizations have become popular, because they are capable of displaying large amounts of data and at the same time provide many details. However, pixel‐based visualizations are only effective if the data set is not sparse and the data distribution not random. Single pixels – no matter if they are in an empty area or in the middle of a large area of differently colored pixels – are perceptually difficult to discern and may therefore easily be missed. Furthermore, trends and interesting passages may be camouflaged in the sea of details. In this paper we compare different approaches for visual boosting in pixel‐based visualizations. Several boosting techniques such as halos, background coloring, distortion, and hatching are discussed and assessed with respect to their effectiveness in boosting single pixels, trends, and interesting passages. Application examples from three different domains (document analysis, genome analysis, and geospatial analysis) show the general applicability of the techniques and the derived guidelines.

3.
We present a real‐time method for rendering a depth‐of‐field effect based on per‐pixel layered splatting, where source pixels are scattered onto one of three layers of a destination pixel. In addition, the missing information behind foreground objects is filled in with an additional image of the areas occluded by nearer objects. The method creates high‐quality depth‐of‐field results even in the presence of partial occlusion, without the major artifacts often present in previous real‐time methods. The method can also be applied to simulating defocused highlights. The entire framework is GPU‐accelerated, enabling real‐time post‐processing for both off‐line and interactive applications.
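As a hedged illustration of one standard ingredient of such depth-of-field methods (not the paper's exact scheme), the sketch below computes a thin-lens circle of confusion per pixel from a depth buffer and bins pixels into near / in-focus / far layers. The three-way split threshold `eps` and all parameter names are illustrative assumptions.

```python
import numpy as np

def circle_of_confusion(depth, focus_dist, focal_len, aperture):
    """Thin-lens circle-of-confusion diameter (same units as focal_len)
    for objects at distance `depth`, using the standard thin-lens formula."""
    return aperture * np.abs(depth - focus_dist) / depth \
           * focal_len / (focus_dist - focal_len)

def assign_layers(depth, focus_dist, eps=0.05):
    """Illustrative three-way split into near / in-focus / far layers."""
    layer = np.full(depth.shape, 1, dtype=np.int8)       # 1 = in focus
    layer[depth < focus_dist * (1.0 - eps)] = 0           # 0 = near (in front)
    layer[depth > focus_dist * (1.0 + eps)] = 2           # 2 = far (behind)
    return layer

# Example: depth buffer in metres, 50 mm lens focused at 2 m, f/2 aperture.
depth = np.random.uniform(0.5, 10.0, size=(4, 4))
coc = circle_of_confusion(depth, focus_dist=2.0, focal_len=0.05, aperture=0.025)
layers = assign_layers(depth, focus_dist=2.0)
```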

4.
This paper introduces a framebuffer level of detail algorithm for controlling the pixel workload in an interactive rendering application. Our basic strategy is to evaluate the shading in a low resolution buffer and, in a second rendering pass, resample this buffer at the desired screen resolution. The size of the lower resolution buffer provides a trade‐off between rendering time and the level of detail in the final shading. In order to reduce approximation error we use a feature‐preserving reconstruction technique that more faithfully approximates the shading near depth and normal discontinuities. We also demonstrate how intermediate components of the shading can be selectively resized to provide finer‐grained control over resource allocation. Finally, we introduce a simple control mechanism that continuously adjusts the amount of resizing necessary to maintain a target framerate. These techniques do not require any preprocessing, are straightforward to implement on modern GPUs, and are shown to provide significant performance gains for several pixel‐bound scenes.
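A minimal sketch, under stated assumptions, of the two pieces that are easiest to isolate from this abstract: shading into a reduced-resolution buffer that is resampled to screen resolution, and a simple feedback controller that adjusts the resize factor toward a target frame time. The nearest-neighbour resampling and the controller gain are assumptions; the paper uses a feature-preserving reconstruction instead.

```python
import numpy as np

def shade_low_res(shade_fn, width, height, scale):
    """Evaluate the (expensive) shading at reduced resolution, then resample
    to screen resolution (nearest neighbour here, as a stand-in for the
    paper's feature-preserving reconstruction)."""
    lw, lh = max(1, int(width * scale)), max(1, int(height * scale))
    low = shade_fn(lw, lh)                           # (lh, lw, 3) buffer
    ys = np.arange(height) * lh // height
    xs = np.arange(width) * lw // width
    return low[ys[:, None], xs[None, :]]

def update_scale(scale, frame_ms, target_ms, gain=0.05, lo=0.1, hi=1.0):
    """Continuously adjust the resize factor toward a target frame time."""
    scale *= 1.0 + gain * (target_ms - frame_ms) / target_ms
    return float(np.clip(scale, lo, hi))

# Usage sketch: a dummy shader and a per-frame feedback loop.
def dummy_shader(w, h):
    return np.random.rand(h, w, 3)

scale = 0.5
for frame_ms in [40.0, 35.0, 20.0, 12.0]:            # measured frame times
    image = shade_low_res(dummy_shader, 1280, 720, scale)
    scale = update_scale(scale, frame_ms, target_ms=16.7)
```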

5.
The quest for the ideal flow visualization reveals two major challenges: interactivity and accuracy. Interactivity stands for explorative capabilities and real‐time control. Accuracy is a prerequisite for every professional visualization in order to provide a reliable base for analysis of a data set. Geometric flow visualization has a long tradition and comes in very different flavors. Among these, stream, path and streak lines are known to be very useful for both 2D and 3D flows. Despite their importance in practice, appropriate algorithms suited for contemporary hardware are rare. In particular, the adaptive construction of the different line types is not sufficiently studied. This study provides a profound representation and discussion of stream, path and streak lines. Two algorithms are proposed for efficiently and accurately generating these lines using modern graphics hardware. Each includes a scheme for adaptive time‐stepping. The adaptivity for stream and path lines is achieved through a new processing idea we call ‘selective transform feedback’. The adaptivity for streak lines combines adaptive time‐stepping and a geometric refinement of the curve itself. Our visualization is applied, among others, to a data set representing a simulated typhoon. The storage as a set of 3D textures requires special attention. Both algorithms explicitly support this storage, as well as the use of precomputed adaptivity information.
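Adaptive time-stepping is central to this abstract. The sketch below shows a generic CPU step-doubling RK4 integrator for stream line construction, not the paper's GPU 'selective transform feedback' scheme; the velocity-field sampler, the tolerance and the step-size heuristics are placeholders.

```python
import numpy as np

def rk4_step(v, p, h):
    """One classical RK4 step of dp/dt = v(p)."""
    k1 = v(p)
    k2 = v(p + 0.5 * h * k1)
    k3 = v(p + 0.5 * h * k2)
    k4 = v(p + h * k3)
    return p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def streamline_adaptive(v, seed, h0=0.1, tol=1e-4, n_max=1000):
    """Stream line with step-doubling error control: compare one full step
    with two half steps and grow/shrink the step size accordingly."""
    p, h, pts = np.asarray(seed, float), h0, [np.asarray(seed, float)]
    for _ in range(n_max):
        full = rk4_step(v, p, h)
        half = rk4_step(v, rk4_step(v, p, 0.5 * h), 0.5 * h)
        err = np.linalg.norm(full - half)
        if err > tol:                    # too inaccurate: halve the step, retry
            h *= 0.5
            continue
        p = half
        pts.append(p)
        if err < 0.25 * tol:             # very accurate: allow a larger step
            h *= 2.0
    return np.array(pts)

# Usage: a simple 2D circular field, seeded at (1, 0).
circular = lambda p: np.array([-p[1], p[0]])
line = streamline_adaptive(circular, seed=[1.0, 0.0])
```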

6.
One of the standard techniques to visualize three‐dimensional flow is to use geometry primitives. This solution, when opaque primitives are used, results in high levels of occlusion, especially with dense streamline seeding. Using semi‐transparent geometry primitives can alleviate the problem of occlusion. However, with semi‐transparency some parts of the data set become too vague and blurry, while others are still heavily occluded. We conducted a user study that provided us with results on the perceptual limits of using semi‐transparent geometry primitives for flow visualization. Texture models for semi‐transparent streamlines were introduced. Test subjects were shown multiple overlapping layers of streamlines and reported how many different flow directions they were able to perceive. The user study allowed us to identify a set of top‐scoring textures. We discuss the results of the user study, provide guidelines on using semi‐transparency for three‐dimensional flow visualization and show how varying textures for different streamlines can further enhance the perception of dense streamlines. We also discuss strategies for dealing with very high levels of occlusion: per‐pixel filtering of flow directions, where only some of the streamlines are rendered at a particular pixel, and opacity normalization, a way of altering the opacity of overlapping streamlines with the same direction. We illustrate our results with a variety of visualizations.
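The opacity-normalization strategy is described only at a high level above. As a hedged sketch of the underlying compositing arithmetic (an assumption, not the paper's formulation): n overlapping fragments of per-layer opacity a accumulate to 1-(1-a)^n under standard alpha compositing, so the per-layer opacity of same-direction streamlines can be solved from a desired combined opacity.

```python
import numpy as np

def accumulated_opacity(alpha, n):
    """Combined opacity of n overlapping layers of opacity `alpha`
    under standard alpha compositing."""
    return 1.0 - (1.0 - alpha) ** n

def normalized_layer_opacity(target, n):
    """Per-layer opacity so that n same-direction overlapping streamlines
    together reach `target` opacity (illustrative normalization)."""
    return 1.0 - (1.0 - target) ** (1.0 / n)

# Example: 5 overlapping streamlines with the same direction.
a = normalized_layer_opacity(target=0.6, n=5)
print(a, accumulated_opacity(a, 5))   # combined opacity is back at ~0.6
```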

7.
We introduce a multifield comparison measure for scalar fields that helps in studying relations between them. The comparison measure is insensitive to noise in the scalar fields and to noise in their gradients. Further, it can be computed robustly and efficiently. Results from the visual analysis of various data sets from climate science and combustion applications demonstrate the effective use of the measure.

8.
We present a dimension reduction and feature extraction method for the visualization and analysis of function field data. Function fields are a class of high-dimensional, multi-variate data in which data samples are one-dimensional scalar functions. Our approach focuses upon the creation of high-dimensional range-space segmentations, from which we can generate meaningful visualizations and extract separating surfaces between features. We demonstrate our approach on high-dimensional spectral imagery, and particulate pollution data from air quality simulations.

9.
The rendering of large data sets can result in cluttered displays and non‐interactive update rates, leading to time‐consuming analyses. A straightforward solution is to reduce the number of items, thereby producing an abstraction of the data set. For the visual analysis to remain accurate, the graphical representation of the abstraction must preserve the significant features present in the original data. This paper presents a screen space quality method, based on distance transforms, that measures the visual quality of a data abstraction. This screen space measure is shown to better capture significant visual structures in data, compared with data space measures. The presented method is implemented on the GPU, allowing interactive creation of high quality graphical representations of multivariate data sets containing tens of thousands of items.
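A hedged sketch of the general idea (not the paper's exact formulation): rasterize the original data and its abstraction into binary screen-space masks, take Euclidean distance transforms, and use the distances from one rendering's covered pixels to the other's as a symmetric quality score. The mask construction and the averaging below are assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def screen_space_quality(mask_full, mask_abstract):
    """Symmetric screen-space difference between two binary renderings
    (True = pixel covered), based on Euclidean distance transforms.
    Lower values mean the abstraction better preserves visual structure."""
    # Distance from every pixel to the nearest covered pixel of each image.
    dist_to_full = distance_transform_edt(~mask_full)
    dist_to_abstract = distance_transform_edt(~mask_abstract)
    # How far abstraction pixels are from the original, and vice versa.
    d_ab = dist_to_full[mask_abstract].mean() if mask_abstract.any() else 0.0
    d_ba = dist_to_abstract[mask_full].mean() if mask_full.any() else 0.0
    return 0.5 * (d_ab + d_ba)

# Example: a dense scatter plot versus a subsampled abstraction of it.
rng = np.random.default_rng(0)
full = np.zeros((200, 200), bool)
full[rng.integers(0, 200, 5000), rng.integers(0, 200, 5000)] = True
abstract = full & (rng.random((200, 200)) < 0.1)     # keep ~10% of the items
print(screen_space_quality(full, abstract))
```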

10.
Display devices, more than ever, are finding their way into electronic consumer goods as a result of recent trends toward more functionality and user interaction. Combined with new developments in display technology towards a higher reproducible luminance range, the mobility and the variation in capability of display devices are constantly increasing. Consequently, in real-life usage it is now very likely that the display emission will be distorted by spatially and temporally varying reflections, and that the observer's visual system will not be adapted to the particular display she is viewing at that moment. The actual perception of the display content cannot be fully understood by considering only steady-state illumination and adaptation conditions. We propose an objective method for display visibility analysis, formulating the problem as a full-reference image quality assessment problem in which the display emission under "ideal" conditions is used as the reference for real-life conditions. Our work includes a human visual system model that accounts for maladaptation and the temporal recovery of sensitivity. As an example application, we integrate our method into a global illumination simulator and analyze the visibility of a car interior display under realistic lighting conditions.

11.
The derivation, manipulation and verification of analytical models from raw data is a process that requires a transformation of information across different levels of abstraction. We introduce a concept for coupling data classification with interactive visualization in order to make this transformation visible and steerable for the human user. Data classification techniques generate mappings that formally group data items into categories. Interactive visualization includes the user in an iterative refinement process: the user identifies and selects interesting patterns to define these categories. The following step is the transformation of a visible pattern into the formal definition of a classifier. In the last step the classifier is transformed back into a pattern that is blended with the original data in the same visual display. Our approach allows an intuitive assessment of a formal classifier and its model, the detection of outliers and the handling of noisy data using visual pattern‐matching. We instantiated the concept using decision trees for classification and KVMaps as the visualization technique. The generation of a classifier from visual patterns and its verification is thus transformed from a cognitive to a mostly pre‐cognitive task.

12.
This survey gives an overview of the current state of the art in GPU techniques for interactive large‐scale volume visualization. Modern techniques in this field have brought about a sea change in how interactive visualization and analysis of giga‐, tera‐ and petabytes of volume data can be enabled on GPUs. In addition to combining the parallel processing power of GPUs with out‐of‐core methods and data streaming, a major enabler for interactivity is making both the computational and the visualization effort proportional to the amount and resolution of data that is actually visible on screen, i.e. ‘output‐sensitive’ algorithms and system designs. This leads to recent output‐sensitive approaches that are ‘ray‐guided’, ‘visualization‐driven’ or ‘display‐aware’. In this survey, we focus on these characteristics and propose a new categorization of GPU‐based large‐scale volume visualization techniques based on the notions of actual output‐resolution visibility and the current working set of volume bricks—the current subset of data that is minimally required to produce an output image of the desired display resolution. Furthermore, we discuss the differences and similarities of different rendering and data traversal strategies in volume rendering by putting them into a common context—the notion of address translation. For our purposes here, we view parallel (distributed) visualization using clusters as an orthogonal set of techniques that we do not discuss in detail but that can be used in conjunction with what we present in this survey.

13.
14.
Material interface reconstruction (MIR) is the task of constructing boundary interfaces between regions of homogeneous material, while satisfying volume constraints, over a structured or unstructured spatial domain. In this paper, we present a discrete approach to MIR based upon optimizing the labeling of fractional volume elements within a discretization of the problem's original domain. We detail how to construct and initially label a discretization, and introduce a volume conservative swap move for optimization. Furthermore, we discuss methods for extracting and visualizing material interfaces from the discretization. Our technique has significant advantages over previous methods: we produce interfaces between multiple materials that are continuous across cell boundaries for time‐varying and static data in arbitrary dimension with bounded error.
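A minimal 2D illustration of the discrete labeling idea, under stated assumptions: cells carry material labels, a swap move exchanges the labels of two cells of different materials (which leaves each material's cell count, and hence its volume, unchanged), and a move is kept only if it shortens the material interface. The greedy acceptance rule and the interface-length energy are assumptions, not the paper's optimizer.

```python
import numpy as np

def interface_length(labels):
    """Number of axis-aligned cell faces whose two cells differ in material."""
    return (np.count_nonzero(labels[1:, :] != labels[:-1, :]) +
            np.count_nonzero(labels[:, 1:] != labels[:, :-1]))

def volume_conserving_swaps(labels, n_moves=20000, seed=0):
    """Greedy optimization with swap moves: exchanging the labels of two
    cells of different materials conserves each material's cell count."""
    rng = np.random.default_rng(seed)
    labels = labels.copy()
    energy = interface_length(labels)
    h, w = labels.shape
    for _ in range(n_moves):
        a = (rng.integers(h), rng.integers(w))
        b = (rng.integers(h), rng.integers(w))
        if labels[a] == labels[b]:
            continue
        labels[a], labels[b] = labels[b], labels[a]
        new_energy = interface_length(labels)
        if new_energy <= energy:          # keep moves that shorten the interface
            energy = new_energy
        else:                             # otherwise undo the swap
            labels[a], labels[b] = labels[b], labels[a]
    return labels

# Example: a noisy two-material labeling; per-material volume stays exact.
rng = np.random.default_rng(1)
noisy = (rng.random((32, 32)) < 0.5).astype(np.int8)
smoothed = volume_conserving_swaps(noisy)
assert noisy.sum() == smoothed.sum()      # material volumes are unchanged
```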

15.
In this paper we present a method for automatic interpolation between adjacent discrete levels of detail to achieve smooth LOD changes in image space. We achieve this by breaking the problem into two passes: we render the two LOD levels individually and combine them in a separate pass afterwards. The interpolation is formulated in such a way that only one level has to be updated per frame and the other can be reused from the previous frame, thereby causing roughly the same render cost as simple non‐interpolated discrete LOD rendering, with only the slight overhead of the final combination pass. Additionally, we describe customized interpolation schemes using visibility textures. The method was designed with ease of integration into existing engines in mind. It requires neither sorting nor blending of objects, nor does it introduce any constraints on the LODs used. The LODs can be coplanar, alpha‐masked, animated, impostors and intersecting, while still interpolating smoothly.
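A minimal sketch of the final combination pass described above, assuming both LOD levels have already been rendered into colour buffers of the same size. The linear ramp of the transition weight is an assumption, and the paper's visibility-texture schemes are not shown.

```python
import numpy as np

def combine_lod_levels(buffer_lod_a, buffer_lod_b, t):
    """Final combination pass: cross-fade the two rendered LOD levels in
    image space with interpolation weight t in [0, 1]."""
    t = float(np.clip(t, 0.0, 1.0))
    return (1.0 - t) * buffer_lod_a + t * buffer_lod_b

# Usage sketch: advance the transition over a few frames; only one of the
# two LOD buffers would need to be re-rendered per frame.
lod_a = np.random.rand(720, 1280, 3)      # detailed level
lod_b = np.random.rand(720, 1280, 3)      # coarser level
for frame, t in enumerate(np.linspace(0.0, 1.0, 8)):
    frame_image = combine_lod_levels(lod_a, lod_b, t)
```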

16.
Vortices are commonly understood as rotating motions in fluid flows. The analysis of vortices plays an important role in numerous scientific applications, such as in engineering, meteorology, oceanology, medicine and many more. The successful analysis consists of three steps: vortex definition, extraction and visualization. All three have a long history, and the early themes and topics from the 1970s survived to this day, namely, the identification of vortex cores, their extent and the choice of suitable reference frames. This paper provides an overview over the advances that have been made in the last 40 years. We provide sufficient background on differential vector field calculus, extraction techniques like critical point search and the parallel vectors operator, and we introduce the notion of reference frame invariance. We explain the most important region‐based and line‐based methods, integration‐based and geometry‐based approaches, recent objective techniques, the selection of reference frames by means of flow decompositions, as well as a recent local optimization‐based technique. We point out relationships between the various approaches, classify the literature and identify open problems and challenges for future work.

17.
Prostate cancer is one of the most prevalent cancers among males, and the use of magnetic resonance imaging (MRI) has been suggested for its detection. A framework is presented for scoring and visualizing various MR data in an efficient and intuitive manner. A classification method is introduced in which a cumulative score volume is created that takes into account each of the three acquisition types. This score volume is integrated into a volume rendering framework that allows the user to view the prostate gland, the multi‐modal score values and the surrounding anatomy. A visibility persistence mode is introduced to automatically avoid full occlusion of a selected score and to indicate overlaps. The use of GPU‐accelerated multi‐modal single‐pass ray casting provides an interactive experience. User‐driven importance rendering allows the user to gain insight into the data and can assist in localization of the disease and in treatment planning. We evaluate our results against pathology and radiologists' determinations.

18.
Reviewing the literature of a research field is always important for academics. One could use Google‐like information‐seeking tools, but would often end up with too many possibly related papers, as well as the papers in the associated citation network. During such a process, a user may easily get lost after following a few links for searching or cross‐referencing, and it is difficult to identify the relevant and important papers in the resulting huge collection. Our work, called PaperVis, endeavors to provide a user‐friendly interface that helps users quickly grasp the intrinsically complex citation‐reference structures among a specific group of papers. We modify the existing Radial Space Filling (RSF) and Bullseye View techniques to arrange the involved papers as a node‐link graph that better depicts the relationships among them while saving screen space at the same time. PaperVis applies visual cues to present node attributes and their transitions during interactions, and it categorizes papers into semantically meaningful hierarchies to facilitate subsequent literature exploration. We conduct experiments on the InfoVis 2004 Contest Dataset to demonstrate the effectiveness of PaperVis.

19.
At a microscopic resolution, biological structures are composed of cells, red blood corpuscles (RBCs), cytoplasm and other microstructural components. There is a natural pattern in terms of distribution, arrangement and packing density of these components in biological organization. In this work, we propose to use N‐point correlation functions to guide the analysis and exploration process in microscopic datasets. These functions provide useful feature spaces to aid segmentation and visualization tasks. We show 3D visualizations of mouse placenta tissue layers and mouse mammary ducts as well as 2D segmentation/tracking of clonal populations. Further confidence in our results stems from validation studies that were performed with manual ground‐truth for segmentation.
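The abstract relies on N-point correlation functions as feature descriptors. The sketch below estimates the simplest case, a two-point correlation over distance bins for a 2D point set, using a basic DD/RR estimator against a uniform random reference sample; the binning and the estimator choice are assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.spatial.distance import pdist

def two_point_correlation(points, bins, n_random=None, seed=0):
    """Estimate the two-point correlation over distance `bins` for a point
    set, by comparing data-data pair counts with pair counts of a uniform
    random set over the same bounding box (simple DD/RR estimator)."""
    rng = np.random.default_rng(seed)
    n_random = n_random or len(points)
    lo, hi = points.min(axis=0), points.max(axis=0)
    random_pts = rng.uniform(lo, hi, size=(n_random, points.shape[1]))

    dd, _ = np.histogram(pdist(points), bins=bins)
    rr, _ = np.histogram(pdist(random_pts), bins=bins)

    # Normalize pair counts by the number of pairs in each set.
    n_d, n_r = len(points), len(random_pts)
    dd_norm = dd / (n_d * (n_d - 1) / 2)
    rr_norm = rr / (n_r * (n_r - 1) / 2)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(rr_norm > 0, dd_norm / rr_norm - 1.0, np.nan)

# Example: clustered "cell centre" positions versus uniform randomness.
rng = np.random.default_rng(2)
centres = rng.uniform(0, 100, size=(20, 2))
cells = np.vstack([c + rng.normal(0, 2.0, size=(30, 2)) for c in centres])
xi = two_point_correlation(cells, bins=np.linspace(0.5, 30, 16))
```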

20.
This paper describes a fast rendering algorithm for the verification of spectacle lens design. Our method simulates refraction corrections for astigmatism as well as myopia or presbyopia. Refraction and defocus are the main issues in the simulation. For refraction, our method performs ray tracing on a per-vertex basis, warping the environment map to produce a real-time refracted image that is subjectively as good as ray-traced output. Conventional defocus simulation was previously done by distribution ray tracing, for which a real-time solution was impossible. We introduce the concept of a blur field, which we use to displace every vertex according to its position. The blurring information is precomputed as a set of field values distributed over voxels formed by evenly subdividing the perspective-projected space. The field values can be determined by tracing a wavefront from each voxel through the lens and the eye, and by evaluating the spread of light at the retina, taking into account the best human accommodation effort. The blur field is stored as texture data and is referred to by the vertex shader that displaces each vertex. At an interactive frame rate, blending the multiple rendering results produces a blurred image comparable to distribution ray tracing output.
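The core data structure above is the blur field: precomputed values stored on a voxel grid of the perspective-projected space and looked up per vertex. The sketch below shows only that lookup, a trilinear interpolation into a 3D array; the normalized grid coordinates and the meaning of the stored value (here a scalar blur radius) are assumptions.

```python
import numpy as np

def sample_blur_field(field, pos):
    """Trilinearly interpolate a precomputed 3D blur field at a position
    given in normalized grid coordinates (each component in [0, 1])."""
    shape = np.array(field.shape) - 1
    x, y, z = np.clip(pos, 0.0, 1.0) * shape          # continuous voxel coords
    i0 = np.floor([x, y, z]).astype(int)
    i1 = np.minimum(i0 + 1, shape)
    fx, fy, fz = [x, y, z] - i0

    def f(ix, iy, iz):
        return field[ix, iy, iz]

    # Interpolate along x, then y, then z.
    c00 = f(i0[0], i0[1], i0[2]) * (1 - fx) + f(i1[0], i0[1], i0[2]) * fx
    c10 = f(i0[0], i1[1], i0[2]) * (1 - fx) + f(i1[0], i1[1], i0[2]) * fx
    c01 = f(i0[0], i0[1], i1[2]) * (1 - fx) + f(i1[0], i0[1], i1[2]) * fx
    c11 = f(i0[0], i1[1], i1[2]) * (1 - fx) + f(i1[0], i1[1], i1[2]) * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz

# Example: a hypothetical 16^3 blur field, sampled at one vertex position.
blur_field = np.random.rand(16, 16, 16)               # precomputed offline
radius = sample_blur_field(blur_field, pos=np.array([0.3, 0.7, 0.5]))
```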
