Similar Documents
20 similar documents found (search time: 15 ms)
1.
Volumetric rendering is widely used to examine 3D scalar fields from CT/MRI scanners and numerical simulation datasets. One key aspect of volumetric rendering is the ability to provide perceptual cues to aid in understanding structure contained in the data. While shading models that reproduce natural lighting conditions have been shown to better convey depth information and spatial relationships, they traditionally require considerable (pre)computation. In this paper, a shading model for interactive direct volume rendering is proposed that provides perceptual cues similar to those of ambient occlusion, for both solid and transparent surface-like features. An image space occlusion factor is derived from the radiative transport equation based on a specialized phase function. The method does not rely on any precomputation and thus allows for interactive explorations of volumetric data sets via on-the-fly editing of the shading model parameters or (multi-dimensional) transfer functions while modifications to the volume via clipping planes are incorporated into the resulting occlusion-based shading.

2.
Particle-based simulation techniques, like the discrete element method or molecular dynamics, are widely used in many research fields. In real-time explorative visualization it is common to render the resulting data using opaque spherical glyphs with local lighting only. Due to massive overlaps, however, inner structures of the data are often occluded, rendering visual analysis impossible. Furthermore, local lighting is not sufficient, as several important features like complex shapes, holes, rifts or filaments cannot be perceived well. To address both problems, we present a new technique that jointly supports transparency and ambient occlusion in a consistent illumination model. Our approach is based on the emission-absorption model of volume rendering. We provide analytic solutions to the volume rendering integral for several density distributions within a spherical glyph. Compared to constant transparency, our approach preserves the three-dimensional impression of the glyphs much better. We approximate ambient illumination with a fast hierarchical voxel cone-tracing approach, which builds on a new real-time voxelization of the particle data. Our implementation achieves interactive frame rates for millions of static or dynamic particles without any preprocessing. We illustrate the merits of our method on real-world data sets, gaining several new insights.
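The closed-form transparency for the simplest such density distribution, a constant density inside the glyph, follows directly from the emission-absorption model: the extinction integral along a ray reduces to the chord length through the sphere. A minimal sketch (function names and the extinction value are illustrative, not taken from the paper):

```python
import math

def chord_length(radius, impact):
    """Length of a ray's chord through a sphere, given the ray's
    perpendicular distance from the center (impact parameter)."""
    if impact >= radius:
        return 0.0
    return 2.0 * math.sqrt(radius * radius - impact * impact)

def glyph_alpha(radius, impact, sigma):
    """Opacity of a constant-density spherical glyph: the emission-
    absorption integral along the chord has the closed form
    alpha = 1 - exp(-sigma * chord)."""
    return 1.0 - math.exp(-sigma * chord_length(radius, impact))

# Rays through the center traverse a longer chord and so appear more
# opaque than grazing rays -- the depth cue that constant alpha destroys.
center = glyph_alpha(1.0, 0.0, 1.5)
edge = glyph_alpha(1.0, 0.9, 1.5)
```

Other density profiles (e.g. falling off toward the glyph boundary) simply change the integrand, which is why the paper derives a family of analytic solutions.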

3.
In this paper, we present a novel technique which simulates directional light scattering for more realistic interactive visualization of volume data. Our method extends the recent directional occlusion shading model by enabling light source positioning with practically no performance penalty. Light transport is approximated using a tilted cone-shaped function which leaves elliptic footprints in the opacity buffer during slice-based volume rendering. We perform an incremental blurring operation on the opacity buffer for each slice in front-to-back order. This buffer is then used to define the degree of occlusion for the subsequent slice. Our method is capable of generating high-quality soft shadowing effects, allows interactive modification of all illumination and rendering parameters, and requires no pre-computation.
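The front-to-back slice sweep can be sketched in a few lines. Here the tilted cone-shaped footprint is simplified to a symmetric box blur on 1D slices; shifting the blur window off-center would emulate a tilted light. All names and parameter values are illustrative, not from the paper:

```python
def box_blur(row, radius):
    """Symmetric box blur; stands in for the paper's tilted, elliptic
    cone footprint on the opacity buffer."""
    out = []
    for i in range(len(row)):
        lo, hi = max(0, i - radius), min(len(row), i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def directional_occlusion(slices, radius=1, strength=0.8):
    """Front-to-back sweep over 1D slices of opacities: each slice is
    attenuated by a blurred copy of the opacity accumulated by the
    slices in front of it, then added into the buffer for the slices
    behind it."""
    occlusion = [0.0] * len(slices[0])
    shaded = []
    for opacities in slices:
        # light reaching each sample is reduced by the blurred buffer
        shaded.append([a * (1.0 - strength * o)
                       for a, o in zip(opacities, occlusion)])
        # accumulate this slice, then blur for the subsequent slice
        occlusion = box_blur([min(1.0, o + a)
                              for o, a in zip(occlusion, opacities)], radius)
    return shaded

# A sample behind an opaque slice is darkened; without an occluder it
# keeps its full shading.
lit = directional_occlusion([[0.0] * 3, [0.5] * 3])
shadowed = directional_occlusion([[1.0] * 3, [0.5] * 3])
```

Because the blur is incremental, the cost per slice stays constant, which is why the method needs no pre-computation.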

4.
Direct volume rendering has become a popular method for visualizing volumetric datasets. Even though computers are continually getting faster, it remains a challenge to incorporate sophisticated illumination models into direct volume rendering while maintaining interactive frame rates. In this paper, we present a novel approach for advanced illumination in direct volume rendering based on GPU ray-casting. Our approach features directional soft shadows taking scattering into account, ambient occlusion and color bleeding effects while achieving very competitive frame rates. In particular, multiple dynamic lights and interactive transfer function changes are fully supported. Commonly, direct volume rendering is based on a very simplified discrete version of the original volume rendering integral, including the development of the original exponential extinction into α-blending. In contrast to α-blending, which forms a product when sampling along a ray, the exponent of the original exponential extinction is an integral, and its discretization a Riemann sum. The fact that it is a sum can cleverly be exploited to implement volume lighting effects, i.e. soft directional shadows, ambient occlusion and color bleeding. We will show how this can be achieved and how it can be implemented on the GPU.
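The observation about the discretization can be made concrete: because the exponent is a Riemann sum, the transmittance factors exactly into per-sample terms, which is what α-blending accumulates as a product. A toy sketch with illustrative extinction values:

```python
import math

def transmittance_sum(sigmas, dt):
    """Discrete extinction with a Riemann sum in the exponent:
    T = exp(-sum(sigma_i * dt))."""
    return math.exp(-sum(s * dt for s in sigmas))

def transmittance_product(sigmas, dt):
    """The same quantity as alpha-blending accumulates it: a product of
    per-sample transparencies (1 - alpha_i), alpha_i = 1 - exp(-sigma_i*dt)."""
    t = 1.0
    for s in sigmas:
        t *= 1.0 - (1.0 - math.exp(-s * dt))  # = exp(-s * dt)
    return t

# Illustrative extinction samples along one ray.
sigmas = [0.3, 1.2, 0.7, 0.1]
dt = 0.25
T_sum = transmittance_sum(sigmas, dt)
T_prod = transmittance_product(sigmas, dt)
# Because the exponent is a sum, partial sums along secondary (shadow,
# ambient) rays can be reused -- the property the paper exploits.
```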

5.
In this paper, a method for interactive direct volume rendering is proposed for computing depth of field effects, which previously were shown to aid observers in depth and size perception of synthetically generated images. The presented technique extends those benefits to volume rendering visualizations of 3D scalar fields from CT/MRI scanners or numerical simulations. It is based on incremental filtering and as such does not depend on any pre-computation, thus allowing interactive explorations of volumetric data sets via on-the-fly editing of the shading model parameters or (multi-dimensional) transfer functions.

6.
The selection of an appropriate global transfer function is essential for visualizing time-varying simulation data. This is especially challenging when the global data range is not known in advance, as is often the case in remote and in-situ visualization settings. Since the data range may vary dramatically as the simulation progresses, volume rendering using local transfer functions may not be coherent for all time steps. We present an exploratory technique that enables coherent classification of time-varying volume data. Unlike previous approaches, which require pre-processing of all time steps, our approach lets the user explore the transfer function space without accessing the original 3D data. This is useful for interactive visualization, and absolutely essential for in-situ visualization, where the entire simulation data range is not known in advance. Our approach generates a compact representation of each time step at rendering time in the form of ray attenuation functions, which are used for subsequent operations on the opacity and color mappings. The presented approach offers interactive exploration of time-varying simulation data that alleviates the cost associated with reloading and caching large data sets.

7.
In volume visualization, transfer functions are used to classify the volumetric data and assign optical properties to the voxels. In general, transfer functions are generated in a transfer function space, which is the feature space constructed by data values and properties derived from the data. If volumetric objects have the same or overlapping data values, it is difficult to separate them in the transfer function space. In this paper, we present a rule-enhanced transfer function design method that allows important structures of the volume to be more effectively separated and highlighted. We define a set of rules based on the local frequency distribution of volume attributes. A rule-selection method based on a genetic algorithm is proposed to learn the set of rules that can distinguish the user-specified target tissue from other tissues. In the rendering stage, voxels satisfying these rules are rendered with higher opacities in order to highlight the target tissue. The proposed method was tested on various volumetric datasets to enhance the visualization of important structures that are difficult to visualize with traditional transfer function design methods. The results demonstrate the effectiveness of the proposed method.

8.
We present a GPU accelerated volume ray casting system interactively driving a multi-user light field display. The display, driven by a single programmable GPU, is based on a specially arranged array of projectors and a holographic screen and provides full horizontal parallax. The characteristics of the display are exploited to develop a specialized volume rendering technique able to provide multiple freely moving naked-eye viewers the illusion of seeing and manipulating virtual volumetric objects floating in the display workspace. In our approach, a GPU ray-caster follows rays generated by a multiple-center-of-projection technique while sampling pre-filtered versions of the dataset at resolutions that match the varying spatial accuracy of the display. The method achieves interactive performance and provides rapid visual understanding of complex volumetric data sets even when using depth oblivious compositing techniques.

9.
Research issues in volume visualization (cited by 6; 0 self-citations)
Volume visualization is a method of extracting meaningful information from volumetric data sets through the use of interactive graphics and imaging. It addresses the representation, manipulation, and rendering of volumetric data sets, providing mechanisms for peering into structures and understanding their complexity and dynamics. Typically, the data set is represented as a 3D regular grid of volume elements (voxels) and stored in a volume buffer (also called a cubic frame buffer), which is a large 3D array of voxels. However, data is often defined at scattered or irregular locations that require using alternative representations and rendering algorithms. There are eight major research issues in volume visualization: volume graphics, volume rendering, transform coding of volume data, scattered data, enriching volumes with knowledge, segmentation, real-time rendering and parallelism, and special purpose hardware.
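For the regular-grid case, reconstructing a value between voxel centers is typically done by trilinear interpolation over the eight neighboring voxels of the volume buffer. A minimal sketch (indexing convention and names are illustrative; positions are assumed to lie strictly inside the grid):

```python
def trilinear_sample(volume, x, y, z):
    """Reconstruct a value at a continuous position inside the volume
    buffer (indexed volume[z][y][x]) by trilinear interpolation of the
    eight surrounding voxels. Assumes 0 <= pos < size - 1 on each axis."""
    x0, y0, z0 = int(x), int(y), int(z)
    fx, fy, fz = x - x0, y - y0, z - z0
    value = 0.0
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                # each corner's weight is the product of 1D weights
                weight = ((fx if dx else 1.0 - fx) *
                          (fy if dy else 1.0 - fy) *
                          (fz if dz else 1.0 - fz))
                value += weight * volume[z0 + dz][y0 + dy][x0 + dx]
    return value

# 2x2x2 toy volume whose value grows linearly with x; trilinear
# reconstruction is exact for such fields.
vol = [[[0.0, 1.0], [0.0, 1.0]],
       [[0.0, 1.0], [0.0, 1.0]]]
mid = trilinear_sample(vol, 0.5, 0.5, 0.5)
```

Scattered or irregular data, mentioned above, is exactly the case where this simple lookup breaks down and alternative reconstruction schemes are needed.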

10.
Common practice in brain research and brain surgery involves the multi-modal acquisition of brain anatomy and brain activation data. These highly complex three-dimensional data have to be displayed simultaneously in order to convey spatial relationships. Unique challenges in information and interaction design have to be solved in order to keep the visualization sufficiently complete and uncluttered at the same time. The visualization method presented in this paper addresses these issues by using a hybrid combination of polygonal rendering of brain structures and direct volume rendering of activation data. Advanced rendering techniques including illustrative display styles and ambient occlusion calculations enhance the clarity of the visual output. The presented rendering pipeline produces real-time frame rates and offers a high degree of configurability. Newly designed interaction and measurement tools are provided, which enable the user to explore the data at large, but also to inspect specific features closely. We demonstrate the system in the context of a cognitive neurosciences dataset. An initial informal evaluation shows that our visualization method is deemed useful for clinical research.

11.
A practical approach to spectral volume rendering (cited by 1; 0 self-citations)
To make a spectral representation of color practicable for volume rendering, a new low-dimensional subspace method is used to act as the carrier of spectral information. With that model, spectral light-material interaction can be integrated into existing volume rendering methods at almost no penalty. In addition, slow rendering methods can profit from the new technique of post-illumination, which generates spectral images in real time for arbitrary light spectra under a fixed viewpoint. Thus, the capability of spectral rendering to create distinct impressions of a scene under different lighting conditions is established as a method of real-time interaction. Although we use an achromatic opacity in our rendering, we show how spectral rendering permits different data set features to be emphasized or hidden as long as they have not been entirely obscured. The use of post-illumination is an order of magnitude faster than changing the transfer function and repeating the projection step. To put the user in control of the spectral visualization, we devise a new widget, a "light-dial", for interactively changing the illumination, and include a usability study of this new light space exploration tool. Applied to spectral transfer functions, different lights bring out or hide specific qualities of the data. In conjunction with post-illumination, this provides a new means for preparing data for visualization and forms a new degree of freedom for guided exploration of volumetric data sets.
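The post-illumination idea can be sketched in the abstract: if both reflectance and light spectra are carried as coefficients in a shared orthonormal basis, relighting a stored pixel reduces to a per-pixel dot product. This toy version collapses the result to a single scalar intensity and uses a hypothetical two-vector basis over four spectral samples; it is not the paper's actual subspace:

```python
def project(spectrum, basis):
    """Coefficients of a sampled spectrum in an orthonormal
    low-dimensional basis (basis vectors as sample lists)."""
    return [sum(s * b for s, b in zip(spectrum, vec)) for vec in basis]

def relight(pixel_coeffs, light_coeffs):
    """Post-illumination: with spectra stored as basis coefficients,
    swapping the light spectrum is just a dot product per pixel --
    no re-projection of the volume is needed."""
    return sum(p * l for p, l in zip(pixel_coeffs, light_coeffs))

# Hypothetical orthonormal basis over 4 spectral samples.
BASIS = [[0.5, 0.5, 0.5, 0.5],
         [0.5, 0.5, -0.5, -0.5]]
reflectance = [1.0, 1.0, 0.0, 0.0]   # stored once per pixel, as coefficients
daylight = [2.0, 2.0, 2.0, 2.0]      # a light the user dials in later
pixel = project(reflectance, BASIS)
intensity = relight(pixel, project(daylight, BASIS))
```

Orthonormality is what makes the dot product equal the sample-wise integral of reflectance times light, which is why the choice of subspace matters.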

12.
LiveSync: deformed viewing spheres for knowledge-based navigation (cited by 1; 0 self-citations)
Although real-time interactive volume rendering is available even for very large data sets, this visualization method is used quite rarely in the clinical practice. We suspect this is because it is very complicated and time consuming to adjust the parameters to achieve meaningful results. The clinician has to take care of the appropriate viewpoint, zooming, transfer function setup, clipping planes and other parameters. Because of this, most often only 2D slices of the data set are examined. Our work introduces LiveSync, a new concept to synchronize 2D slice views and volumetric views of medical data sets. Through intuitive picking actions on the slice, the users define the anatomical structures they are interested in. The 3D volumetric view is updated automatically with the goal that the users are provided with expressive result images. To achieve this live synchronization we use a minimal set of derived information without the need for segmented data sets or data-specific pre-computations. The components we consider are the picked point, slice view zoom, patient orientation, viewpoint history, local object shape and visibility. We introduce deformed viewing spheres which encode the viewpoint quality for the components. A combination of these deformed viewing spheres is used to estimate a good viewpoint. Our system provides the physician with synchronized views which help to gain deeper insight into the medical data with minimal user interaction.

13.
We introduce Boundary-Aware Extinction Maps for interactive rendering of massive heterogeneous volumetric datasets. Our approach is based on the projection of the extinction along light rays into a boundary-aware function space, focusing on the most relevant sections of the light paths. This technique also provides an alternative representation of the set of participating media, allowing scattering simulation methods to be applied on arbitrary volume representations. Combined with a simple out-of-core rendering framework, Boundary-Aware Extinction Maps are valuable tools for interactive applications as well as production previsualization and rendering.

14.
We propose a technique named progressive light volume to support advanced volumetric illumination effects, such as single and multiple scattering. The light volume stores direct lighting information for sample points of the volume data. Using the light volume, we are able to compute the direct lighting for any point in the volume data with a single texture lookup. In order to keep the rendering at an interactive frame rate, we build the light volume progressively if necessary. During the light volume construction period, we use a fast ray casting algorithm to produce a rough rendering estimate from the light volume. After the light volume is built, we use a path tracer to continuously estimate the light intensity for each pixel. Our method puts no restrictions on the number, position, and type of lights. We conducted a comprehensive evaluation for various datasets. The rendering results show that our method is able to produce compelling images, and the performance results indicate that our method is practical for interactive use. Copyright © 2016 John Wiley & Sons, Ltd.

15.
A good transfer function in volume rendering requires careful consideration of the materials present in a volume. A manual creation is tedious and prone to errors. Furthermore, the user interaction to design a higher dimensional transfer function gets complicated. In this work, we present a graph-based approach to design a transfer function that takes volumetric structures into account. Our novel contribution is in proposing an algorithm for robust deduction of a material graph from a set of disconnected edges. We incorporate stable graph creation under varying noise levels in the volume. We show that the deduced material graph can be used to automatically create a transfer function using the occlusion spectrum of the input volume. Since we compute material topology of the objects, an enhanced rendering is possible with our method. This also allows us to selectively render objects and depict adjacent materials in a volume. Our method considerably reduces manual effort required in designing a transfer function and provides an easy interface for interaction with the volume.

16.
Visualizing dynamic participating media in particle form by fully solving equations from the light transport theory is a computationally very expensive process. In this paper, we present a computational pipeline for particle volume rendering that is easily accelerated by the current GPU. To fully harness its massively parallel computing power, we transform input particles into a volumetric density field using a GPU-assisted, adaptive density estimation technique that iteratively adapts the smoothing length for local grid cells. Then, the volume data is visualized efficiently based on the volume photon mapping method where our GPU techniques further improve the rendering quality offered by previous implementations while performing rendering computation in acceptable time. It is demonstrated that high quality volume renderings can be easily produced from large particle datasets in time frames of a few seconds to less than a minute.

17.
Glossy-to-glossy reflections are light bounced between glossy surfaces. Such directional light transport is important for humans to perceive glossy materials, but difficult to simulate. This paper proposes a new method for rendering screen-space glossy-to-glossy reflections in real time. We use spherical von Mises-Fisher (vMF) distributions to model glossy BRDFs at surfaces, and employ the screen space directional occlusion (SSDO) rendering framework to trace indirect light transport bounced in the screen space. As our main contributions, we derive a new parameterization of the vMF distribution so as to convert the non-linear fit of multiple vMF distributions into a linear sum in the new space. Then, we present a new linear filtering technique to build MIP-maps on glossy BRDFs, which allows us to create filtered radiance transfer functions at runtime and efficiently estimate indirect glossy-to-glossy reflections. We demonstrate our method in a real-time application for rendering scenes with dynamic glossy objects. Compared with screen space directional occlusion, our approach requires only one extra texture and has a negligible overhead, a 3% to 6% frame-rate loss, but enables glossy-to-glossy reflections.

18.
In contrast to 2D scatterplots, the existing 3D variants have the advantage of showing one additional data dimension, but suffer from inadequate spatial and shape perception and therefore are not well suited to display structures of the underlying data. We improve shape perception by applying a new illumination technique to the pointcloud representation of 3D scatterplots. Points are classified as locally linear, planar, and volumetric structures, according to the eigenvalues of the inverse distance-weighted covariance matrix at each data element. Based on this classification, different lighting models are applied: codimension-2 illumination, surface illumination, and emissive volumetric illumination. Our technique lends itself to efficient GPU point rendering and can be combined with existing methods like semi-transparent rendering, halos, and depth or attribute based color coding. The user can interactively navigate in the dataset and manipulate the classification and other visualization parameters. We demonstrate our visualization technique by showing examples of multi-dimensional data and of generic pointcloud data.
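The eigenvalue-based classification can be sketched as follows. This version uses a plain (unweighted) covariance matrix rather than the paper's inverse distance weighting, and the thresholds are illustrative, not the paper's:

```python
import numpy as np

def classify(points, linear_thresh=0.9, planar_thresh=0.9):
    """Classify a local point neighborhood as 'linear', 'planar' or
    'volumetric' from the eigenvalues of its covariance matrix.
    Simplification: plain covariance instead of inverse-distance
    weighting; thresholds are illustrative."""
    cov = np.cov(np.asarray(points, dtype=float).T)
    w = np.sort(np.linalg.eigvalsh(cov))[::-1]  # descending eigenvalues
    total = w.sum()
    if total == 0:
        return 'volumetric'
    if w[0] / total > linear_thresh:          # one dominant direction
        return 'linear'
    if (w[0] + w[1]) / total > planar_thresh:  # two dominant directions
        return 'planar'
    return 'volumetric'

# Points on a line and points on a plane, to exercise both branches.
line = [(t, 0.0, 0.0) for t in range(10)]
plane = [(x, y, 0.0) for x in range(4) for y in range(4)]
```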

19.
We develop a volumetric video system which supports interactive browsing of compressed time-varying volumetric features (significant isosurfaces and interval volumes). Since the size of even one volumetric frame in a time-varying 3D data set is very large, transmission and on-line reconstruction are the main bottlenecks for interactive remote visualization of time-varying volume and surface data. We describe a compression scheme for encoding time-varying volumetric features in a unified way, which allows for on-line reconstruction and rendering. To increase the run-time decompression speed and compression ratio, we decompose the volume into small blocks and encode only the significant blocks that contribute to the isosurfaces and interval volumes. The results show that our compression scheme achieves high compression ratio with fast reconstruction, which is effective for interactive client-side rendering of time-varying volumetric features.
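The block-culling step can be sketched on a flattened toy volume: a block is significant for an isosurface only if its value range straddles the isovalue, so all other blocks can be skipped before encoding. Names and the 1D block layout are illustrative, not from the paper:

```python
def significant_blocks(volume, block_size, isovalue):
    """Decompose a flattened toy volume into fixed-size blocks and keep
    only the significant ones: blocks whose [min, max] range straddles
    the isovalue and can therefore contribute to the isosurface."""
    kept = {}
    for start in range(0, len(volume), block_size):
        values = volume[start:start + block_size]
        if min(values) <= isovalue <= max(values):
            kept[start] = values
    return kept

# Only the middle block crosses the isovalue 2.5; the flat blocks on
# either side are dropped before encoding.
vol = [0.0] * 4 + [1.0, 2.0, 3.0, 4.0] + [9.0] * 4
kept = significant_blocks(vol, 4, 2.5)
```

An interval volume would use the same test with the interval's bounds instead of a single isovalue.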

20.
Style Transfer Functions for Illustrative Volume Rendering (cited by 3; 0 self-citations)
Illustrative volume visualization frequently employs non-photorealistic rendering techniques to enhance important features or to suppress unwanted details. However, it is difficult to integrate multiple non-photorealistic rendering approaches into a single framework due to great differences in the individual methods and their parameters. In this paper, we present the concept of style transfer functions. Our approach enables flexible data-driven illumination which goes beyond using the transfer function to just assign colors and opacities. An image-based lighting model uses sphere maps to represent non-photorealistic rendering styles. Style transfer functions allow us to combine a multitude of different shading styles in a single rendering. We extend this concept with a technique for curvature-controlled style contours and an illustrative transparency model. Our implementation of the presented methods allows interactive generation of high-quality volumetric illustrations.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号