Similar Documents
20 similar documents found (search time: 906 ms)
1.
In this paper we present a fast visualization technique for volumetric data, based on a recent non-photorealistic rendering technique. Our new approach enables alternative insights into 3D data sets (compared to traditional approaches such as direct volume rendering or iso-surface rendering). Object contours, which are usually characterized by locally high gradient values, are visualized regardless of their density values. The cumbersome tuning of transfer functions usually needed to set up DVR views is avoided; instead, a small number of parameters is available to adjust the non-photorealistic display. Based on the magnitude of the local gradient as well as on the angle between the viewing direction and the gradient vector, data values are mapped to visual properties (color, opacity), which are then combined to form the rendered image (MIP is proposed as the default compositing strategy). Due to the fast implementation of this alternative rendering approach, it is possible to interactively investigate the 3D data and quickly learn about internal structures. Several further extensions of our approach, such as level lines, are also presented.
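As a rough illustration of the mapping described above, here is a minimal NumPy sketch: the synthetic sphere volume, the exact contour weighting, and the orthographic view along +z are assumptions made for the demo, not the authors' implementation.

```python
import numpy as np

# Synthetic test volume: a solid sphere in a 64^3 grid (assumption for demo).
n = 64
z, y, x = np.mgrid[:n, :n, :n]
vol = ((x - n/2)**2 + (y - n/2)**2 + (z - n/2)**2 < (n/3)**2).astype(float)

# Local gradient: magnitude and direction.
gz, gy, gx = np.gradient(vol)
gmag = np.sqrt(gx**2 + gy**2 + gz**2)
eps = 1e-8

# View direction along +z (orthographic, an assumption of this sketch).
# Contour emphasis: strong where the gradient is large and nearly
# perpendicular to the viewing direction (|cos| small).
cos_vg = np.abs(gz) / (gmag + eps)
contour = (gmag / (gmag.max() + eps)) * (1.0 - cos_vg)

# MIP compositing along the viewing axis, the default strategy proposed above.
image = contour.max(axis=0)
print(image.shape, image.max())
```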

2.
In volume visualization, transfer functions are used to classify the volumetric data and assign optical properties to the voxels. In general, transfer functions are generated in a transfer function space, which is the feature space constructed by data values and properties derived from the data. If volumetric objects have the same or overlapping data values, it is difficult to separate them in the transfer function space. In this paper, we present a rule-enhanced transfer function design method that allows important structures of the volume to be more effectively separated and highlighted. We define a set of rules based on the local frequency distribution of volume attributes. A rule-selection method based on a genetic algorithm is proposed to learn the set of rules that can distinguish the user-specified target tissue from other tissues. In the rendering stage, voxels satisfying these rules are rendered with higher opacities in order to highlight the target tissue. The proposed method was tested on various volumetric datasets to enhance the visualization of important structures that are difficult to visualize with traditional transfer function design methods. The results demonstrate the effectiveness of the proposed method.

3.
We propose a method for rendering volumetric data sets at interactive frame rates while supporting dynamic ambient occlusion as well as an approximation to color bleeding. In contrast to ambient occlusion approaches for polygonal data, techniques for volumetric data sets have to face additional challenges, since by changing rendering parameters, such as the transfer function or the thresholding, the structure of the data set and thus the light interactions may vary drastically. Therefore, during a preprocessing step, which is independent of the rendering parameters, we capture light interactions for all combinations of structures extractable from a volumetric data set. In order to compute the light interactions between the different structures, we combine this preprocessed information during rendering based on the rendering parameters defined interactively by the user. Thus our method supports interactive exploration of a volumetric data set but still gives the user control over the most important rendering parameters. For instance, if the user alters the transfer function to extract different structures from a volumetric data set, the light interactions between the extracted structures are captured in the rendering while still allowing interactive frame rates. Compared to known local illumination models for volume rendering, our method does not introduce any substantial rendering overhead and can be integrated easily into existing volume rendering applications. In this paper we will explain our approach, discuss the implications for interactive volume rendering and present the achieved results.

4.
The design of transfer functions for volume rendering is a non-trivial task. This is particularly true for multi-channel data sets, where multiple data values exist for each voxel, requiring multi-dimensional transfer functions. In this paper, we propose a new method for multi-dimensional transfer function design. Our method provides a framework to combine multiple computational approaches and extends gradient-based multi-dimensional transfer functions to multiple channels, while keeping the dimensionality of transfer functions at a manageable level, i.e., a maximum of three dimensions, which can be displayed visually in a straightforward way. Our approach utilizes channel intensity, gradient, curvature and texture properties of each voxel. Applying recently developed nonlinear dimensionality reduction algorithms reduces the high-dimensional data of the domain. In this paper, we use Isomap and Locally Linear Embedding as well as a traditional algorithm, Principal Component Analysis. Our results show that these dimensionality reduction algorithms significantly improve the transfer function design process without compromising visualization accuracy. We demonstrate the effectiveness of these dimensionality reduction algorithms with two volumetric confocal microscopy data sets.
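The dimensionality reduction step can be prototyped directly with scikit-learn, which provides Isomap, Locally Linear Embedding, and PCA. The random feature matrix below stands in for the real per-voxel attributes and is purely illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap, LocallyLinearEmbedding

# Hypothetical per-voxel feature matrix: rows are sampled voxels, columns
# are intensity, gradient, curvature and texture measures per channel.
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 8))   # 1000 sampled voxels, 8 attributes

# Reduce to at most 3 dimensions so the transfer function domain stays
# visually displayable, as the paper argues.
for reducer in (PCA(n_components=3),
                Isomap(n_components=3, n_neighbors=10),
                LocallyLinearEmbedding(n_components=3, n_neighbors=10)):
    embedded = reducer.fit_transform(features)
    print(type(reducer).__name__, embedded.shape)   # (1000, 3) each
```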

5.
Illustrative context-preserving exploration of volume data (cited 2 times: 0 self-citations, 2 by others)
In volume rendering, it is very difficult to simultaneously visualize interior and exterior structures while preserving clear shape cues. Highly transparent transfer functions produce cluttered images with many overlapping structures, while clipping techniques completely remove possibly important context information. In this paper, we present a new model for volume rendering, inspired by techniques from illustration. It provides a means of interactively inspecting the interior of a volumetric data set in a feature-driven way which retains context information. The context-preserving volume rendering model uses a function of shading intensity, gradient magnitude, distance to the eye point, and previously accumulated opacity to selectively reduce the opacity in less important data regions. It is controlled by two user-specified parameters. This new method represents an alternative to conventional clipping techniques, sharing their easy and intuitive user control, but does not suffer from the drawback of missing context information.
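A hedged sketch of how such an opacity modulation could look in Python/NumPy; the exact exponent form and the default parameter values below are assumptions of this sketch, not the published equation.

```python
import numpy as np

def context_preserving_alpha(alpha_tf, grad_mag, shading, depth, alpha_acc,
                             kappa_t=1.5, kappa_s=0.5):
    """One plausible instantiation of the model: the transfer-function
    opacity is attenuated where shading is bright, the gradient is weak,
    the sample is close to the eye, and little opacity has accumulated.
    kappa_t / kappa_s stand in for the two user parameters; the exponent
    form is an assumption of this sketch, not the authors' equation."""
    g = np.clip(grad_mag, 0.0, 1.0)    # normalized gradient magnitude
    exponent = (kappa_t * shading * (1.0 - depth) * (1.0 - alpha_acc)) ** kappa_s
    return alpha_tf * g ** exponent

# A flat region (low gradient) near the eye becomes almost transparent,
# while a strong boundary keeps most of its opacity.
print(context_preserving_alpha(0.8, grad_mag=0.05, shading=1.0,
                               depth=0.1, alpha_acc=0.0))
print(context_preserving_alpha(0.8, grad_mag=0.95, shading=1.0,
                               depth=0.1, alpha_acc=0.0))
```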

6.
Halos are generally used to enhance depth perception and display spatial relationships in illustrative visualization. In this paper, we present a simple and effective method to create volumetric halo illustration. At the preprocessing stage, we generate, on graphics hardware, a view-independent halo intensity volume, which contains all of the potential halos around the boundaries of features, based on the opacity volume. During halo rendering, the halo intensity volume is used to extract halos only around the contours of structures for the current viewpoint. Our approach is significantly faster than previous halo illustration methods, which perform both halo generation and rendering during direct volume rendering. We further propose depth-dependent halo effects, including depth color fading and depth width fading. These halo effects adaptively modulate the visual properties of halos to provide more perceptual cues for depth interpretation. Experimental results demonstrate the efficiency of our proposed approach and the effectiveness of depth-dependent halos.
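One plausible CPU approximation of the preprocessing step in Python: the dilation/blur construction of the halo volume and the linear depth fading are assumptions of this sketch; the paper generates the halo intensity volume on graphics hardware.

```python
import numpy as np
from scipy.ndimage import grey_dilation, gaussian_filter

def halo_intensity_volume(opacity, radius=2):
    """View-independent halo volume (a sketch under assumptions): grow the
    opaque regions and keep only the grown shell, so potential halos sit
    just outside feature boundaries."""
    grown = grey_dilation(opacity, size=(2 * radius + 1,) * 3)
    shell = np.clip(grown - opacity, 0.0, 1.0)
    return gaussian_filter(shell, sigma=1.0)    # soften the halo falloff

def depth_faded_halo(halo, depth, near=1.0, far=0.3):
    """Depth color fading: halos of distant structures get dimmer (the
    paper also thins them); linear fading is an assumption."""
    return halo * (near + (far - near) * depth)

opacity = np.zeros((32, 32, 32))
opacity[12:20, 12:20, 12:20] = 1.0              # a single opaque feature
halo = halo_intensity_volume(opacity)
print(halo.max(), depth_faded_halo(halo, depth=0.8).max())
```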

7.
In this paper, we present an interactive high dynamic range volume visualization framework (HDR VolVis) for visualizing volumetric data with both high spatial and intensity resolutions. Volumes with high dynamic range values require high precision computing during the rendering process to preserve data precision. Furthermore, it is desirable to render high resolution volumes with low opacity values to reveal detailed internal structures, which also requires high precision compositing. High precision rendering results in a high precision intermediate image (also known as a high dynamic range image). Simply rounding pixel values to regular display scales loses the computed details. Our method performs high precision compositing followed by dynamic tone mapping to preserve details on regular display devices. Rendering high precision volume data requires corresponding resolution in the transfer function. To assist users in designing a high resolution transfer function on a limited resolution display device, we propose a novel transfer function specification interface with nonlinear magnification of the density range and logarithmic scaling of the color/opacity range. By leveraging modern commodity graphics hardware, multiresolution rendering techniques and out-of-core acceleration, our system can effectively produce an interactive visualization of large volume data, such as 2048^3.
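The core idea, float-precision compositing followed by tone mapping, can be sketched in a few lines of Python; the Reinhard-style operator and the synthetic ray below are stand-ins, not the paper's dynamic tone mapping.

```python
import numpy as np

def composite_hdr(samples, alphas):
    """Front-to-back compositing kept in double precision so low-opacity
    samples still contribute measurable radiance (the 'high precision'
    stage)."""
    radiance, transmittance = 0.0, 1.0
    for c, a in zip(samples, alphas):
        radiance += transmittance * a * c
        transmittance *= (1.0 - a)
    return radiance

def tone_map(hdr, exposure=1.0):
    """Simple global Reinhard-style operator (an assumption; the paper's
    dynamic tone mapping is not specified here)."""
    v = hdr * exposure
    return v / (1.0 + v)

# Many nearly transparent but very bright samples: rounding to 8 bits per
# step would lose them, while float compositing preserves the detail.
samples = np.full(500, 40.0)      # high dynamic range sample values
alphas = np.full(500, 0.002)      # low opacities to reveal interiors
hdr = composite_hdr(samples, alphas)
print(hdr, tone_map(hdr))         # display value ends up in [0, 1)
```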

8.
Photographic volumes present a unique, interesting challenge for volume rendering. In photographic volumes, the voxel color is pre-determined, making color selection through transfer functions unnecessary. However, photographic data does not contain a clear mapping from the multi-valued color values to a scalar density or opacity, making projection and compositing much more difficult than with traditional volumes. Moreover, because of the nonlinear nature of color spaces, there is no meaningful norm for the multi-valued voxels. Thus, the individual color channels of photographic data must be treated as incomparable data tuples rather than as vector values. Traditional differential geometric tools, such as intensity gradients, density and Laplacians, are distorted by the nonlinear, non-orthonormal color spaces that are the domain of the voxel values. We have developed different techniques for managing these issues while directly rendering volumes from photographic data. We present and justify the normalization of color values by mapping RGB values to the CIE L*u*v* color space. We explore and compare different opacity transfer functions that map three-channel color values to opacity. We apply these many-to-one mappings to the original RGB values as well as to the voxels after conversion to L*u*v* space. Direct rendering using transfer functions allows us to explore photographic volumes without having to commit to an a priori segmentation that might mask fine variations of interest. We empirically compare the combined effects of each of the two color spaces with our opacity transfer functions using source data from the Visible Human project.
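The RGB to CIE L*u*v* normalization is standard colorimetry and can be sketched as follows; linear sRGB primaries and a D65 white point are assumed, and gamma decoding is omitted for brevity.

```python
import numpy as np

# Linear-RGB -> XYZ matrix (sRGB primaries, D65 white).
M = np.array([[0.4124564, 0.3575761, 0.1804375],
              [0.2126729, 0.7151522, 0.0721750],
              [0.0193339, 0.1191920, 0.9503041]])
Xn, Yn, Zn = 0.95047, 1.0, 1.08883
un = 4 * Xn / (Xn + 15 * Yn + 3 * Zn)     # white-point chromaticity u'
vn = 9 * Yn / (Xn + 15 * Yn + 3 * Zn)     # white-point chromaticity v'

def rgb_to_luv(rgb):
    """Map voxel colors (..., 3) to CIE L*u*v*, where Euclidean distances
    are approximately perceptually uniform, unlike in RGB."""
    xyz = rgb @ M.T
    X, Y, Z = xyz[..., 0], xyz[..., 1], xyz[..., 2]
    denom = X + 15 * Y + 3 * Z + 1e-12
    up, vp = 4 * X / denom, 9 * Y / denom
    t = Y / Yn
    L = np.where(t > (6 / 29) ** 3, 116 * np.cbrt(t) - 16, (29 / 3) ** 3 * t)
    return np.stack([L, 13 * L * (up - un), 13 * L * (vp - vn)], axis=-1)

print(rgb_to_luv(np.array([0.8, 0.2, 0.2])))  # a reddish voxel
```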

9.
In direct volume rendering, features of interest are still typically classified by a transfer function based on the volume data's intensity and derived properties. Despite the efforts of previous research, classification remains a challenge. This paper presents a framework for designing new transfer functions that use bionic algorithms to map the frequency of particle occurrences to the color and opacity values. This allows us to extract features from the volume data. In particular, a novel approach is presented to allow a user to design a transfer function using the techniques of swarm intelligence. This approach consists of a population of simple agents interacting locally with one another and with the volume data. The agents scatter around the volume data and approach areas that contain features. Their movements are not only based on solution optimization, but are also governed by global optimization. After the agents have finished searching for features in the volume data, they can automatically modify the transfer function according to the agents' behavior. With these agents, we do not have to preprocess the volume data for visualizing and exploring the features.
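A heavily simplified Python sketch of the agent idea: the agents here only hill-climb on gradient magnitude and then vote on intensity ranges, whereas the paper's agents also interact locally and follow a global optimization term; all parameters are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(6)
z, y, x = np.mgrid[:48, :48, :48]
sphere = ((x - 24)**2 + (y - 24)**2 + (z - 24)**2 < 12**2).astype(float)
vol = gaussian_filter(sphere, 2)                   # smooth synthetic volume
gm = np.linalg.norm(np.gradient(vol), axis=0)      # feature = high gradient

agents = rng.integers(1, 47, size=(200, 3))        # random start positions
offsets = np.array([(dz, dy, dx) for dz in (-1, 0, 1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)])
for _ in range(30):                                # greedy local moves
    for k, p in enumerate(agents):
        nbrs = np.clip(p + offsets, 0, 47)
        agents[k] = nbrs[np.argmax(gm[nbrs[:, 0], nbrs[:, 1], nbrs[:, 2]])]

# Intensities the agents settled on: raising opacity in these bins is the
# automatic transfer-function modification, in spirit.
visited = vol[agents[:, 0], agents[:, 1], agents[:, 2]]
opacity_boost, _ = np.histogram(visited, bins=16, range=(0, 1), density=True)
print(opacity_boost.round(2))                      # peaks at boundary values
```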

10.
Direct volume rendering is an important tool for visualizing complex data sets. However, in the process of generating 2D images from 3D data, information is lost in the form of attenuation and occlusion. The lack of a feedback mechanism to quantify the loss of information in the rendering process makes the design of good transfer functions a difficult and time-consuming task. In this paper, we present the general notion of visibility histograms, which are multidimensional graphical representations of the distribution of visibility in a volume-rendered image. Specifically, we explore the 1D and 2D transfer functions that result from intensity values and gradient magnitude. With the help of these histograms, users can manage a complex set of transfer function parameters that maximize the visibility of the intervals of interest and provide high-quality images of volume data. We present a semiautomated method for generating transfer functions, which progressively explores the transfer function space toward the goal of maximizing visibility of important structures. Our methodology can be easily deployed in most visualization systems and can be used together with traditional 1D and 2D opacity transfer functions based on scalar values, as well as with other more sophisticated rendering algorithms.
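A per-ray visibility histogram is straightforward to accumulate during compositing; the following Python sketch shows the 1D (intensity-binned) case, with synthetic samples standing in for real ray data. Summing over all rays of an image gives the distribution the paper describes.

```python
import numpy as np

def visibility_histogram(intensities, alphas, n_bins=64):
    """Visibility histogram along one ray: each sample's visibility is its
    opacity weighted by the transmittance in front of it, accumulated into
    bins of the (normalized) scalar value."""
    hist = np.zeros(n_bins)
    transmittance = 1.0
    for s, a in zip(intensities, alphas):
        visibility = transmittance * a
        hist[min(int(s * n_bins), n_bins - 1)] += visibility
        transmittance *= (1.0 - a)
    return hist

# Samples deep along the ray contribute little visibility even with the
# same opacity, which is exactly the information a transfer-function
# designer otherwise has to guess.
rng = np.random.default_rng(1)
s = rng.uniform(0, 1, 200)
a = np.full(200, 0.05)
hist = visibility_histogram(s, a)
print(round(hist.sum(), 4))   # close to 1.0: almost all visibility spent
```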

11.
The visualization of complex 3D images remains a challenge, a fact that is magnified by the difficulty of classifying or segmenting volume data. In this paper, we introduce size-based transfer functions, which map the local scale of features to color and opacity. Features in a data set with similar or identical scalar values can be classified based on their relative size. We achieve this with the use of scale fields, which are 3D fields that represent the relative size of the local feature at each voxel. We present a mechanism for obtaining these scale fields at interactive rates, through a continuous scale-space analysis and a set of detection filters. Through a number of examples, we show that size-based transfer functions can improve classification and enhance volume rendering techniques, such as maximum intensity projection. The ability to classify objects based on local size at interactive rates proves to be a powerful method for complex data exploration.
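A scale field can be approximated with a classic normalized scale-space detector; the Laplacian-of-Gaussian choice below is an assumption of this sketch (the paper defines its own detection filters), but it shows how voxels with identical scalar values become separable by size.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def scale_field(vol, sigmas=(1, 2, 4, 8)):
    """Assign each voxel the scale that maximizes the sigma^2-normalized
    Laplacian-of-Gaussian response, a standard scale-space blob detector."""
    responses = np.stack([(-s**2) * gaussian_laplace(vol, sigma=s)
                          for s in sigmas])      # negated: bright features
    best = np.argmax(responses, axis=0)          # index of winning scale
    return np.asarray(sigmas)[best]              # per-voxel feature size

# A small and a large blob with identical scalar values: the scale field
# still separates them.
z, y, x = np.mgrid[:48, :48, :48]
vol = (((x - 12)**2 + (y - 24)**2 + (z - 24)**2) < 3**2).astype(float) \
    + (((x - 34)**2 + (y - 24)**2 + (z - 24)**2) < 9**2).astype(float)
sf = scale_field(vol)
print(sf[24, 24, 12], sf[24, 24, 34])   # small blob vs. large blob
```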

12.
Volumetric rendering is widely used to examine 3D scalar fields from CT/MRI scanners and numerical simulation datasets. One key aspect of volumetric rendering is the ability to provide perceptual cues to aid in understanding structure contained in the data. While shading models that reproduce natural lighting conditions have been shown to better convey depth information and spatial relationships, they traditionally require considerable (pre)computation. In this paper, a shading model for interactive direct volume rendering is proposed that provides perceptual cues similar to those of ambient occlusion, for both solid and transparent surface-like features. An image space occlusion factor is derived from the radiative transport equation based on a specialized phase function. The method does not rely on any precomputation and thus allows for interactive explorations of volumetric data sets via on-the-fly editing of the shading model parameters or (multi-dimensional) transfer functions, while modifications to the volume via clipping planes are incorporated into the resulting occlusion-based shading.

13.
Topology provides a foundation for the development of mathematically sound tools for processing and exploration of scalar fields. Existing topology-based methods can be used to identify interesting features in volumetric data sets, to find seed sets for accelerated isosurface extraction, or to treat individual connected components as distinct entities for isosurfacing or interval volume rendering. We describe a framework for direct volume rendering based on segmenting a volume into regions of equivalent contour topology and applying separate transfer functions to each region. Each region corresponds to a branch of a hierarchical contour tree decomposition, and a separate transfer function can be defined for it. The novel contributions of our work are: 1) a volume rendering framework and interface where a unique transfer function can be assigned to each subvolume corresponding to a branch of the contour tree, 2) a runtime method for adjusting data values to reflect contour tree simplifications, 3) an efficient way of mapping a spatial location into the contour tree to determine the applicable transfer function, and 4) an algorithm for hardware-accelerated direct volume rendering that visualizes the contour tree-based segmentation at interactive frame rates using graphics processing units (GPUs) that support loops and conditional branches in fragment programs.
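Contribution 3, mapping a sample to its branch's transfer function, reduces to a two-level lookup once a branch-label volume exists; the Python sketch below assumes that segmentation is already available and uses random RGBA tables purely for illustration.

```python
import numpy as np

# Assumed precomputed: each voxel stores the id of its contour-tree branch
# (the contour-tree construction and GPU rendering are outside this sketch).
# Each branch then gets its own 1D transfer function, indexed by scalar value.
n_branches, n_bins = 3, 256
rng = np.random.default_rng(2)
branch_tf = rng.uniform(0, 1, size=(n_branches, n_bins, 4))  # RGBA per branch

def classify(scalar, branch_id):
    """Map a spatial sample to optical properties: first the branch that
    contains it, then that branch's transfer function."""
    bin_idx = np.clip((scalar * n_bins).astype(int), 0, n_bins - 1)
    return branch_tf[branch_id, bin_idx]

scalars = np.array([0.2, 0.2, 0.2])
branches = np.array([0, 1, 2])
print(classify(scalars, branches))  # same value, three different appearances
```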

14.
A practical approach to spectral volume rendering (cited 1 time: 0 self-citations, 1 by others)
To make a spectral representation of color practicable for volume rendering, a new low-dimensional subspace method is used as the carrier of spectral information. With that model, spectral light-material interaction can be integrated into existing volume rendering methods at almost no penalty. In addition, slow rendering methods can profit from the new technique of postillumination: generating spectral images in real time for arbitrary light spectra under a fixed viewpoint. Thus, the capability of spectral rendering to create distinct impressions of a scene under different lighting conditions is established as a method of real-time interaction. Although we use an achromatic opacity in our rendering, we show how spectral rendering permits different data set features to be emphasized or hidden as long as they have not been entirely obscured. The use of postillumination is an order of magnitude faster than changing the transfer function and repeating the projection step. To put the user in control of the spectral visualization, we devise a new widget, a "light-dial", for interactively changing the illumination, and include a usability study of this new light space exploration tool. Applied to spectral transfer functions, different lights bring out or hide specific qualities of the data. In conjunction with postillumination, this provides a new means for preparing data for visualization and forms a new degree of freedom for guided exploration of volumetric data sets.
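The postillumination idea can be sketched as follows: if each pixel stores coefficients in a low-dimensional spectral basis, relighting is a dot product per pixel. The PCA basis over random spectra and all sizes below are assumptions standing in for the paper's subspace model.

```python
import numpy as np

n_bands, n_basis = 31, 4                      # 400-700 nm in 10 nm steps
rng = np.random.default_rng(3)

# Build a low-dimensional basis from (here: random) reflectance spectra.
spectra = rng.uniform(0, 1, (500, n_bands))
mean = spectra.mean(axis=0)
_, _, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
basis = Vt[:n_basis]                          # (n_basis, n_bands)

# "Rendered" image: per pixel, coefficients of the accumulated spectrum.
coeffs = rng.normal(size=(64, 64, n_basis))   # stands in for the projection

def postilluminate(coeffs, light_spectrum):
    """Re-evaluate the image under a new light without re-projecting the
    volume: reconstruct spectra from the basis and integrate per band."""
    pixel_spectra = coeffs @ basis + mean     # (64, 64, n_bands)
    return pixel_spectra @ light_spectrum     # scalar radiance per pixel

warm = np.linspace(0.2, 1.0, n_bands)         # more energy at long wavelengths
cool = np.linspace(1.0, 0.2, n_bands)
print(postilluminate(coeffs, warm).shape, postilluminate(coeffs, cool).mean())
```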

15.
Opacity Peeling for Direct Volume Rendering (cited 3 times: 0 self-citations, 3 by others)

16.
This paper presents an interactive technique for the dense texture-based visualization of unsteady 3D flow, taking into account issues of computational efficiency and visual perception. High efficiency is achieved by a 3D graphics processing unit (GPU)-based texture advection mechanism that implements logical 3D grid structures by physical memory in the form of 2D textures. This approach results in fast read and write access to physical memory, independent of GPU architecture. Slice-based direct volume rendering is used for the final display. We investigate two alternative methods for the volumetric illumination of the result of texture advection: First, gradient-based illumination that employs a real-time computation of gradients, and, second, line-based lighting based on illumination in codimension 2. In addition to the Phong model, perception-guided rendering methods are considered, such as cool/warm shading, halo rendering, or color-based depth cueing. The problems of clutter and occlusion are addressed by supporting a volumetric importance function that enhances features of the flow and reduces visual complexity in less interesting regions. GPU implementation aspects, performance measurements, and a discussion of results are included to demonstrate our visualization approach.
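A single semi-Lagrangian advection step, the core of texture advection, can be sketched on the CPU with SciPy; the GPU texture packing that the paper is actually about is not reproduced here, and the uniform flow is a test assumption.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def advect(texture, velocity, dt=1.0):
    """One semi-Lagrangian texture-advection step (CPU sketch of what the
    paper does on the GPU): trace each grid point backwards along the flow
    and resample the property texture there."""
    nz, ny, nx = texture.shape
    grid = np.mgrid[:nz, :ny, :nx].astype(float)    # (3, nz, ny, nx)
    backtraced = grid - dt * velocity               # velocity: (3, ...)
    return map_coordinates(texture, backtraced, order=1, mode='grid-wrap')

# Noise texture carried by a uniform flow in +x; repeated calls with a
# time-dependent velocity field give the unsteady-flow animation.
rng = np.random.default_rng(4)
tex = rng.uniform(0, 1, (16, 16, 16))
vel = np.zeros((3, 16, 16, 16))
vel[2] = 1.0                                        # axis 2 = x component
print(np.allclose(advect(tex, vel), np.roll(tex, 1, axis=2)))  # True
```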

17.
Maximum intensity projection (MIP) displays the voxel with the maximum intensity along the viewing ray, which makes it simple to use: it does not require a complex transfer function, whose specification is a highly challenging and time-consuming process in direct volume rendering (DVR). However, MIP also has an inherent limitation: the loss of spatial context and shape information. This paper proposes a novel technique, shape-enhanced maximum intensity projection (SEMIP), to resolve this limitation. Inspired by lighting in DVR, which emphasizes surface structures, SEMIP searches for a valid gradient for the maximum intensity of each viewing ray and applies gradient-based shading to improve shape and depth perception of structures. As SEMIP may produce pixel values that exceed the maximum intensity of the display device, a tone reduction technique is introduced to compress the intensity range of the rendered image while preserving the original local contrast. In addition, depth-based color cues are employed to enhance the visual perception of internal structures, and a focus-and-context interaction is used to highlight structures of interest. We demonstrate the effectiveness of the proposed SEMIP on several volume data sets, especially from the medical field.
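A simplified per-ray sketch of the idea in Python: it shades the maximum-intensity sample with its own gradient (the paper searches nearby samples for a more reliable gradient) and uses a simple Reinhard-style curve as a stand-in for the paper's tone reduction.

```python
import numpy as np

def semip_ray(intensities, gradients, light_dir, k_d=0.7):
    """SEMIP along one ray, heavily simplified: take the maximum-intensity
    sample as in MIP, modulate it with gradient-based diffuse shading so
    surface shape becomes visible, then compress values that exceed the
    display range while keeping local contrast."""
    i = int(np.argmax(intensities))
    n = gradients[i] / (np.linalg.norm(gradients[i]) + 1e-8)
    shading = 1.0 + k_d * max(np.dot(n, light_dir), 0.0)  # can exceed 1.0
    value = intensities[i] * shading
    return value / (1.0 + value)                          # tone reduction

rng = np.random.default_rng(5)
vals = rng.uniform(0, 1, 64)          # synthetic samples along one ray
grads = rng.normal(size=(64, 3))      # synthetic gradients at the samples
print(semip_ray(vals, grads, light_dir=np.array([0.0, 0.0, 1.0])))
```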

18.
This paper describes a novel approach to tissue classification using three-dimensional (3D) derivative features in the volume rendering pipeline. In conventional tissue classification for a scalar volume, tissues of interest are characterized by an opacity transfer function defined as a one-dimensional (1D) function of the original volume intensity. To overcome the limitations inherent in conventional 1D opacity functions, we propose a tissue classification method that employs a multidimensional opacity function, which is a function of the 3D derivative features calculated from a scalar volume as well as the volume intensity. Tissues of interest are characterized by explicitly defined classification rules based on 3D filter responses highlighting local structures, such as edge, sheet, line, and blob, which typically correspond to tissue boundaries, cortices, vessels, and nodules, respectively, in medical volume data. The 3D local structure filters are formulated using the gradient vector and Hessian matrix of the volume intensity function combined with isotropic Gaussian blurring. These filter responses and the original intensity define a multidimensional feature space in which multichannel tissue classification strategies are designed. The usefulness of the proposed method is demonstrated by comparisons with conventional single-channel classification using both synthesized data and clinical data acquired with CT (computed tomography) and MRI (magnetic resonance imaging) scanners. The improvement in image quality obtained using multichannel classification is confirmed by evaluating the contrast and contrast-to-noise ratio in the resultant volume-rendered images with variable opacity values.
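The Hessian-eigenvalue machinery behind such sheet/line/blob filters can be sketched with NumPy/SciPy; the eigenvalue rules in the comments are common rules of thumb, not the paper's exact filter definitions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_eigenvalues(vol, sigma=1.5):
    """Eigenvalues of the Gaussian-blurred Hessian at every voxel, sorted
    by magnitude; they characterize the local second-order structure."""
    g = gaussian_filter(vol, sigma)
    grads = np.gradient(g)
    H = np.empty(vol.shape + (3, 3))
    for i in range(3):
        for j, d in enumerate(np.gradient(grads[i])):
            H[..., i, j] = d
    ev = np.linalg.eigvalsh(H)                     # ascending eigenvalues
    order = np.argsort(np.abs(ev), axis=-1)
    return np.take_along_axis(ev, order, axis=-1)  # |l1| <= |l2| <= |l3|

# Rule-of-thumb responses for bright structures (a sketch):
#   sheet: l3 << 0, l1 ~ l2 ~ 0   (boundaries, cortices)
#   line:  l2 ~ l3 << 0, l1 ~ 0   (vessels)
#   blob:  l1 ~ l2 ~ l3 << 0      (nodules)
z, y, x = np.mgrid[:32, :32, :32]
tube = ((x - 16)**2 + (y - 16)**2 < 4**2).astype(float)  # line along z
l1, l2, l3 = np.moveaxis(hessian_eigenvalues(tube), -1, 0)
print(l1[16, 16, 16], l2[16, 16, 16], l3[16, 16, 16])    # l2, l3 negative
```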

19.
This paper advocates the use of a group of renderers rather than any specific rendering method. We describe a bundle containing four alternative approaches to visualizing volume data. One new approach uses realistic volumetric gas rendering techniques to produce photo-realistic images and animations. The second uses ray casting that is based on a simpler illumination model and is mainly centered around a versatile new tool for the design of transfer functions. The third method employs a simple illumination model and rapid rendering mechanisms to provide efficient preview capabilities. The last one reduces data magnitude by displaying the most visible components and exploits rendering hardware to provide real-time browsing capabilities. We show that each rendering tool provides a unique service and demonstrate the combined utility of our group of volume renderers in computational fluid dynamics (CFD) visualization. While one tool allows the explorer to render rapidly for navigation through the data, another tool allows one to emphasize data features (e.g., shock waves), and yet another tool allows one to realistically render the data. We believe that only through the deployment of groups of renderers will the scientist be well served and equipped to form numerous perspectives of the same dataset, each providing different insights into the data.

20.
The transfer function bake-off (cited 12 times: 0 self-citations, 12 by others)
Direct volume rendering is a key technology for visualizing large 3D data sets from scientific or medical applications. Transfer functions are particularly important to the quality of direct volume-rendered images. A transfer function assigns optical properties, such as color and opacity, to original values of the data set being visualized. Unfortunately, finding good transfer functions proves difficult. It is one of the major problems in volume visualization. The article examines four of the currently most promising approaches to transfer function design. The four approaches are: trial and error, with minimum computer aid; data-centric, with no underlying assumed model; data-centric, using an underlying data model; and image-centric, using organized sampling.
