Similar Literature
20 similar documents found
1.
We propose a method for rendering volumetric data sets at interactive frame rates while supporting dynamic ambient occlusion as well as an approximation to color bleeding. In contrast to ambient occlusion approaches for polygonal data, techniques for volumetric data sets face additional challenges: changing rendering parameters, such as the transfer function or the thresholding, can drastically alter the structure of the data set and thus the light interactions. Therefore, during a preprocessing step that is independent of the rendering parameters, we capture light interactions for all combinations of structures extractable from a volumetric data set. To compute the light interactions between the different structures, we combine this preprocessed information during rendering based on the rendering parameters defined interactively by the user. Our method thus supports interactive exploration of a volumetric data set while still giving the user control over the most important rendering parameters. For instance, if the user alters the transfer function to extract different structures from a volumetric data set, the light interactions between the extracted structures are captured in the rendering while still allowing interactive frame rates. Compared to known local illumination models for volume rendering, our method does not introduce any substantial rendering overhead and can be integrated easily into existing volume rendering applications. In this paper we explain our approach, discuss the implications for interactive volume rendering, and present the achieved results.
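A minimal sketch of the combination step this abstract describes, under assumptions of ours: the preprocessing yields a pairwise occlusion table between extractable structures, and at render time each structure's ambient term is obtained by weighting that table with the opacities the current transfer function assigns. All names and the table contents are hypothetical placeholders, not the paper's data layout.

```python
import numpy as np

n_structures = 4
# occlusion[i, j]: precomputed fraction of ambient light that structure j
# blocks from structure i (computed once, independent of the transfer function).
occlusion = np.random.rand(n_structures, n_structures)

def ambient_term(structure_id, tf_opacity):
    """Ambient visibility of a structure under the current transfer function.

    tf_opacity[j] is the opacity the user's TF currently assigns to
    structure j; fully transparent structures contribute no occlusion.
    """
    blocked = np.sum(occlusion[structure_id] * tf_opacity)
    return max(0.0, 1.0 - blocked)

tf_opacity = np.array([0.0, 0.8, 0.3, 1.0])  # set interactively by the user
ambient = [ambient_term(i, tf_opacity) for i in range(n_structures)]
```

Because only this small weighted sum depends on the transfer function, editing the TF changes the lighting without re-running the preprocessing, which is what keeps the exploration interactive.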

2.
In this paper, we present a novel technique which simulates directional light scattering for more realistic interactive visualization of volume data. Our method extends the recent directional occlusion shading model by enabling light source positioning with practically no performance penalty. Light transport is approximated using a tilted cone-shaped function which leaves elliptic footprints in the opacity buffer during slice-based volume rendering. We perform an incremental blurring operation on the opacity buffer for each slice in front-to-back order. This buffer is then used to define the degree of occlusion for the subsequent slice. Our method is capable of generating high-quality soft shadowing effects, allows interactive modification of all illumination and rendering parameters, and requires no pre-computation.
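An illustrative sketch of the opacity-buffer scheme, not the paper's exact kernel: slices are composited front-to-back, and between slices the accumulated opacity buffer is shifted toward the light and blurred, approximating the tilted elliptic cone footprint. The shift amount and blur width are assumed parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def render_slices(slices_opacity, light_offset=(1.5, 0.0), sigma=1.2):
    h, w = slices_opacity[0].shape
    occ_buffer = np.zeros((h, w))          # accumulated, blurred opacity
    lit = []
    for a in slices_opacity:               # front-to-back slice order
        occlusion = np.clip(occ_buffer, 0.0, 1.0)
        lit.append(a * (1.0 - occlusion))  # darker where light is blocked
        occ_buffer += a                    # deposit this slice's opacity
        # incremental "tilted cone": shift toward the light, then blur
        occ_buffer = gaussian_filter(shift(occ_buffer, light_offset), sigma)
    return lit

slices = [np.random.rand(64, 64) * 0.1 for _ in range(32)]
lit_slices = render_slices(slices)
```

Changing `light_offset` repositions the light with no extra cost per frame, which mirrors the abstract's claim that light positioning carries practically no performance penalty.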

3.
Prostate cancer is one of the most prevalent cancers among males, and the use of magnetic resonance imaging (MRI) has been suggested for its detection. A framework is presented for scoring and visualizing various MR data in an efficient and intuitive manner. A classification method is introduced in which a cumulative score volume is created that takes into account each of three acquisition types. This score volume is integrated into a volume rendering framework which allows the user to view the prostate gland, the multi-modal score values, and the surrounding anatomy. A visibility persistence mode is introduced to automatically avoid full occlusion of a selected score and indicate overlaps. The use of GPU-accelerated multi-modal single-pass ray casting provides an interactive experience. User-driven importance rendering allows the user to gain insight into the data and can assist in localization of the disease and treatment planning. We evaluate our results against pathology and radiologists' determinations.

4.
Objective: Ray casting is an important direct volume rendering algorithm, but its effectiveness depends on complex transfer functions. We therefore propose an adaptive ray casting algorithm based on structural features, so that the features of volume data can be revealed well even with simple transfer functions. Methods: First, the variation trend of the scalar values along each ray is analyzed to extract structural features, called feature segments. Then, the visibility of each feature segment is computed automatically from several heuristic rules with clear meanings (the order, scale, and importance of the feature segments), and the opacity of each sample point is adjusted according to the visibility of its feature segment. Finally, rendering is performed using the adjusted opacities. Results: Tests on synthetic data, real medical scan data, and industrial CT (computed tomography) data show that the proposed algorithm outperforms other similar algorithms in revealing the internal structural features of volume data, especially fine structures. The algorithm is slower than DVR (direct volume rendering) but still meets interactive requirements. In addition, the algorithm exposes several intuitive parameters with clear meanings for the user to adjust, which further increases its flexibility. Conclusion: The proposed adaptive ray casting algorithm allows users to reveal the features of volume data effectively using simple transfer functions and intuitively meaningful parameter adjustments, further improving the intuitiveness of ray casting.
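A hedged sketch of the feature-segment idea, under our own assumptions: a "feature segment" is taken to be a monotonic run of the scalar profile along the ray, and its visibility is a weighted sum of heuristics for order (earlier segments count more), scale (relative length), and importance (here approximated by segment contrast). The weights and the contrast proxy are illustrative, not the paper's rules.

```python
import numpy as np

def feature_segments(profile):
    """Split a ray's scalar profile into monotonic index runs."""
    d = np.sign(np.diff(profile))
    breaks = np.where(np.diff(d) != 0)[0] + 1
    return np.split(np.arange(len(profile)), breaks)

def segment_visibility(segments, profile, w_order=0.5, w_scale=0.3, w_imp=0.2):
    vis = []
    for k, seg in enumerate(segments):
        order = 1.0 / (1.0 + k)             # earlier segment = more visible
        scale = len(seg) / len(profile)     # relative size of the segment
        imp = np.ptp(profile[seg])          # contrast as a proxy for importance
        vis.append(w_order * order + w_scale * scale + w_imp * imp)
    return np.asarray(vis)

profile = np.sin(np.linspace(0, 6, 128)) + 0.05 * np.random.randn(128)
segs = feature_segments(profile)
vis = segment_visibility(segs, profile)
opacity = np.empty_like(profile)
for seg, v in zip(segs, vis):
    opacity[seg] = np.clip(v, 0, 1)         # modulate each sample's opacity
```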

5.
In traditional illustration the choice of appropriate styles and rendering techniques is guided by the intention of the artist. For illustrative volume visualizations it is difficult to specify the mapping between the 3D data and the visual representation in a way that preserves the intention of the user. The semantic layers concept establishes this mapping with a linguistic formulation of rules that directly map data features to rendering styles. With semantic layers, fuzzy logic is used to evaluate the user-defined illustration rules in a preprocessing step. In this paper we introduce interaction-dependent rules that are evaluated for each frame and are therefore computationally more expensive. Enabling interaction-dependent rules, however, allows the use of a new class of semantics, resulting in more expressive interactive illustrations. We show that the evaluation of the fuzzy logic can be done on the graphics hardware, enabling the efficient use of interaction-dependent semantics. Furthermore, we introduce the flat rendering mode and discuss how different rendering parameters are influenced by the rule base. Our approach provides high-quality illustrative volume renderings at interactive frame rates, guided by the specification of illustration rules.
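A small sketch of how a fuzzy illustration rule of this kind can be evaluated, e.g. "if density is high and distance-to-focus is low, then contour strength is strong". The membership functions and the min operator for fuzzy AND are textbook fuzzy logic; the specific rule, shapes, and parameters are invented for illustration and are not taken from the paper's rule base.

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function rising on [a, b], plateau [b, c]."""
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

def rule_contour_strength(density, dist_to_focus):
    high_density = trapezoid(density, 0.5, 0.7, 1.0, 1.01)
    near_focus = trapezoid(dist_to_focus, -0.01, 0.0, 0.1, 0.3)
    return np.minimum(high_density, near_focus)   # fuzzy AND = min

density = np.random.rand(1000)        # per-sample data feature
dist = np.random.rand(1000)           # interaction-dependent feature
contour = rule_contour_strength(density, dist)    # per-sample style parameter
```

Because `dist_to_focus` changes with every user interaction, a rule like this must be re-evaluated per frame, which is why mapping the evaluation to graphics hardware matters.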

6.
Two-level volume rendering   (cited 7 times: 0 self-citations, 7 by others)
Presents a two-level approach for volume rendering which allows for selectively using different rendering techniques for different subsets of a 3D data set. Different structures within the data set are rendered locally on an object-by-object basis by either direct volume rendering (DVR), maximum-intensity projection (MIP), surface rendering, value integration (X-ray-like images), or non-photorealistic rendering (NPR). All the results of subsequent object renderings are combined globally in a merging step (usually compositing in our case). This allows us to selectively choose the most suitable technique for depicting each object within the data while keeping the amount of information contained in the image at a reasonable level. This is especially useful when inner structures should be visualized together with semi-transparent outer parts, similar to the focus+context approach known from information visualization. We also present an implementation of our approach which allows us to explore volumetric data using two-level rendering at interactive frame rates.
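A minimal sketch of the two-level idea along a single ray: each sample is rendered by the technique assigned to its object (here only DVR and MIP are modeled), and the per-object partial results are then composited globally front-to-back. The object assignment, transfer function, and the simplification of treating a MIP result as fully opaque are all our assumptions for illustration.

```python
import numpy as np

def render_ray(samples, object_ids, technique, tf_opacity):
    # local pass: one partial result per contiguous object run
    results = []                     # (intensity, opacity) pairs
    start = 0
    for i in range(1, len(samples) + 1):
        if i == len(samples) or object_ids[i] != object_ids[start]:
            run, oid = samples[start:i], object_ids[start]
            if technique[oid] == "MIP":
                results.append((run.max(), 1.0))   # simplification: opaque
            else:                    # DVR: front-to-back compositing
                c, a = 0.0, 0.0
                for s in run:
                    sa = tf_opacity(s)
                    c += (1 - a) * sa * s
                    a += (1 - a) * sa
                results.append((c, a))
            start = i
    # global pass: merge the per-object results front-to-back
    color, alpha = 0.0, 0.0
    for c, a in results:
        color += (1 - alpha) * c
        alpha += (1 - alpha) * a
    return color

samples = np.random.rand(64)
object_ids = np.repeat([0, 1, 0], [20, 24, 20])    # outer, inner, outer
out = render_ray(samples, object_ids, {0: "DVR", 1: "MIP"},
                 tf_opacity=lambda s: 0.05 * s)
```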

7.
In this paper, a method for interactive direct volume rendering is proposed for computing depth of field effects, which were previously shown to aid observers in depth and size perception of synthetically generated images. The presented technique extends those benefits to volume rendering visualizations of 3D scalar fields from CT/MRI scanners or numerical simulations. It is based on incremental filtering and as such does not depend on any pre-computation, thus allowing interactive exploration of volumetric data sets via on-the-fly editing of the shading model parameters or (multi-dimensional) transfer functions.
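A sketch of depth of field by incremental filtering during slice-based compositing, under assumptions of ours: the accumulated image is re-blurred a little between slices, so content far behind the focal plane accumulates more total blur. The sigma schedule is invented, and blur for slices in front of the focal plane is omitted for brevity; this is not the paper's exact filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dof_composite(slices_rgb_a, focal_index, blur_per_slice=0.4):
    acc_c = np.zeros_like(slices_rgb_a[0][0])
    acc_a = np.zeros_like(slices_rgb_a[0][1])
    # march back-to-front starting from the farthest slice
    for i, (c, a) in enumerate(reversed(slices_rgb_a)):
        acc_c = c * a + acc_c * (1 - a)        # back-to-front "over"
        acc_a = a + acc_a * (1 - a)
        slice_index = len(slices_rgb_a) - 1 - i
        if slice_index > focal_index:          # behind focus: keep blurring
            acc_c = gaussian_filter(acc_c, blur_per_slice)
            acc_a = gaussian_filter(acc_a, blur_per_slice)
    return acc_c, acc_a

slices = [(np.random.rand(64, 64), np.full((64, 64), 0.05)) for _ in range(32)]
image, alpha = dof_composite(slices, focal_index=16)
```

Since the blur is applied incrementally as slices are composited, nothing has to be precomputed, so shading parameters and transfer functions can be edited on the fly.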

8.
High-quality and interactive animations of 3D time-varying vector fields   (cited 1 time: 0 self-citations, 1 by others)
In this paper, we present an interactive texture-based method for visualizing three-dimensional unsteady vector fields. The visualization method uses a sparse and global representation of the flow, so that it does not suffer from the perceptual issues that arise when visualizing dense representations. The animation is made by injecting a collection of particles evenly distributed throughout the physical domain. These particles are then tracked along their path lines. At each time step, these particles are used as seed points to generate field lines using any vector field, such as the velocity field or vorticity field. In this way, the animation shows the advection of particles while each frame in the animation shows the instantaneous vector field. In order to maintain a coherent particle density and to avoid clustering as time passes, we have developed a novel particle advection strategy which produces approximately evenly-spaced field lines at each time step. To improve rendering performance, we decouple the rendering stage from the preceding stages of the visualization method. This allows interactive exploration of multiple fields simultaneously, which sets the stage for a more complete analysis of the flow field. The final display is rendered using texture-based direct volume rendering.
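A sketch of the path-line tracking step with fourth-order Runge-Kutta integration through an unsteady field; `velocity(p, t)` is a toy analytic field standing in for the simulation data, and the reseeding/evenly-spaced placement strategy from the paper is omitted.

```python
import numpy as np

def velocity(p, t):
    """Toy time-varying 2D swirl; replace with sampled simulation data."""
    x, y = p[..., 0], p[..., 1]
    return np.stack([-y + 0.3 * np.sin(t), x], axis=-1)

def rk4_step(p, t, dt):
    k1 = velocity(p, t)
    k2 = velocity(p + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = velocity(p + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = velocity(p + dt * k3, t + dt)
    return p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

particles = np.random.uniform(-1, 1, size=(500, 2))   # evenly seeded
path = [particles]
t, dt = 0.0, 0.05
for _ in range(100):                 # track particles along their path lines
    particles = rk4_step(particles, t, dt)
    t += dt
    path.append(particles)
```

At each stored time step the current particle positions would seed the instantaneous field lines, so the animation shows advection while each frame shows a snapshot of the field.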

9.
This paper presents a texture-mapping volume rendering algorithm with interactive multi-dimensional transfer functions based on spatial information. The algorithm assigns rendering colors and opacities not only according to the intensity of the volume data but also according to the spatial positions of the voxels. Using a distinctive spatial projection transform, the volume data is partitioned into different regions according to the user's needs, and a separate transfer function is defined for each region. This property allows the algorithm to support effective interactive analysis of volume data. The implementation exploits the programmability of commodity graphics hardware and achieves good rendering quality and interactive speed on an ordinary PC.
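A sketch of a transfer function that depends on both intensity and voxel position: a toy spatial partition (a sphere, standing in for the paper's spatial projection transform, whose exact form we do not reproduce) assigns each voxel to a region, and each region gets its own opacity ramp.

```python
import numpy as np

def region_of(coords, center=(0.5, 0.5, 0.5), radius=0.3):
    """Toy spatial partition: inside vs. outside a sphere."""
    d = np.linalg.norm(coords - np.asarray(center), axis=-1)
    return (d < radius).astype(int)

def apply_tf(intensity, coords, tfs):
    region = region_of(coords)
    return np.where(region == 0, tfs[0](intensity), tfs[1](intensity))

vol = np.random.rand(32, 32, 32)
grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 32)] * 3, indexing="ij"),
                axis=-1)                       # per-voxel (x, y, z) positions
tfs = {0: lambda s: 0.02 * s,                  # context: nearly transparent
       1: lambda s: np.clip(2 * s - 0.5, 0, 1)}  # focus region: strong ramp
opacity = apply_tf(vol, grid, tfs)
```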

10.
Style Transfer Functions for Illustrative Volume Rendering   (cited 3 times: 0 self-citations, 3 by others)
Illustrative volume visualization frequently employs non-photorealistic rendering techniques to enhance important features or to suppress unwanted details. However, it is difficult to integrate multiple non-photorealistic rendering approaches into a single framework due to great differences in the individual methods and their parameters. In this paper, we present the concept of style transfer functions. Our approach enables flexible data-driven illumination which goes beyond using the transfer function to just assign colors and opacities. An image-based lighting model uses sphere maps to represent non-photorealistic rendering styles. Style transfer functions allow us to combine a multitude of different shading styles in a single rendering. We extend this concept with a technique for curvature-controlled style contours and an illustrative transparency model. Our implementation of the presented methods allows interactive generation of high-quality volumetric illustrations.
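A sketch of the lit-sphere mechanism behind sphere-map styles: an eye-space normal indexes a sphere map that stores a painted shading style, and blending two sphere maps by the (normalized) data value mimics a style transfer function. The sphere maps here are synthetic placeholders, and the nearest-pixel lookup is a simplification.

```python
import numpy as np

def lit_sphere_lookup(sphere_map, normal):
    """Sample a sphere map with a unit eye-space normal (nx, ny, nz)."""
    h, w = sphere_map.shape[:2]
    u = int((normal[0] * 0.5 + 0.5) * (w - 1))   # project the normal's x, y
    v = int((normal[1] * 0.5 + 0.5) * (h - 1))   # straight onto the disk
    return sphere_map[v, u]

def style_tf(value, style_a, style_b, normal):
    """Interpolate between two styles by the normalized data value."""
    ca = lit_sphere_lookup(style_a, normal)
    cb = lit_sphere_lookup(style_b, normal)
    return (1 - value) * ca + value * cb

style_a = np.random.rand(64, 64, 3)   # placeholder: e.g. a metallic sphere
style_b = np.random.rand(64, 64, 3)   # placeholder: e.g. a cartoon sphere
n = np.array([0.3, 0.5, 0.81])        # unit normal, e.g. from the gradient
color = style_tf(0.7, style_a, style_b, n)
```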

11.
Interactive tensor field design and visualization on surfaces   (cited 1 time: 0 self-citations, 1 by others)
Designing tensor fields in the plane and on surfaces is a necessary task in many graphics applications, such as painterly rendering, pen-and-ink sketching of smooth surfaces, and anisotropic remeshing. In this article, we present an interactive design system that allows a user to create a wide variety of symmetric tensor fields over 3D surfaces either from scratch or by modifying a meaningful input tensor field such as the curvature tensor. Our system converts each user specification into a basis tensor field and combines them with the input field to make an initial tensor field. However, such a field often contains unwanted degenerate points which cannot always be eliminated due to topological constraints of the underlying surface. To reduce the artifacts caused by these degenerate points, our system allows the user to move a degenerate point or to cancel a pair of degenerate points that have opposite tensor indices. These operations provide control over the number and location of the degenerate points in the field. We observe that a tensor field can be locally converted into a vector field so that there is a one-to-one correspondence between the set of degenerate points in the tensor field and the set of singularities in the vector field. This conversion allows us to effectively perform degenerate point pair cancellation and movement by using similar operations for vector fields. In addition, we adapt the image-based flow visualization technique to tensor fields, therefore allowing interactive display of tensor fields on surfaces. We demonstrate the capabilities of our tensor field design system with painterly rendering, pen-and-ink sketching of surfaces, and anisotropic remeshing.

12.
This paper presents a novel method for interactive exploration of industrial CT volumes such as cast metal parts, with the goal of interactively detecting, classifying, and quantifying features using a visualization-driven approach. The standard approach for defect detection builds on region growing, which requires manually tuning parameters such as target ranges for density and size, variance, as well as the specification of seed points. If the results are not satisfactory, region growing must be performed again with different parameters. In contrast, our method allows interactive exploration of the parameter space, completely separated from region growing in an unattended pre-processing stage. The pre-computed feature volume tracks a feature size curve for each voxel over time, which is identified with the main region growing parameter such as variance. A novel 3D transfer function domain over (density, feature size, time) allows for interactive exploration of feature classes. Features and feature size curves can also be explored individually, which helps with transfer function specification and allows coloring individual features and disabling features resulting from CT artifacts. Based on the classification obtained through exploration, the classified features can be quantified immediately.
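A sketch of classification in a (density, feature size, time) domain, under assumptions of ours about the data layout: each voxel carries a feature-size curve size(t) over the region-growing parameter t, and a box in the 3D domain selects voxels whose curve passes through it. Ranges and the monotone synthetic curves are illustrative.

```python
import numpy as np

def classify(density, size_curves, box):
    """box = (d_lo, d_hi, s_lo, s_hi, t_lo, t_hi); returns a voxel mask."""
    d_lo, d_hi, s_lo, s_hi, t_lo, t_hi = box
    n_t = size_curves.shape[-1]
    t = np.linspace(0.0, 1.0, n_t)
    in_t = (t >= t_lo) & (t <= t_hi)     # time steps covered by the box
    in_s = (size_curves[..., in_t] >= s_lo) & (size_curves[..., in_t] <= s_hi)
    return (density >= d_lo) & (density <= d_hi) & in_s.any(axis=-1)

density = np.random.rand(32, 32, 32)
# synthetic monotone feature-size curves, one per voxel, 10 parameter steps
size_curves = np.cumsum(np.random.rand(32, 32, 32, 10), axis=-1)
mask = classify(density, size_curves, box=(0.2, 0.6, 1.0, 4.0, 0.3, 0.9))
```

Because the curves are precomputed once, moving the box re-classifies instantly without re-running region growing, which is the point of separating exploration from the pre-processing stage.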

13.
In recent years, interactive methods for segmentation are increasing in popularity due to their success in different domains such as medical image processing, photo editing, etc. We present an interactive segmentation algorithm that can segment an object of interest from its background with minimum guidance from the user, who just has to select a single seed pixel inside the object of interest. Due to minimal requirements from the user, we call our algorithm semiautomatic. To obtain a reliable and robust segmentation with such low user guidance, we have to make several assumptions. Our main assumption is that the object to be segmented is of compact shape, or can be approximated by several connected roughly collinear compact pieces. We base our work on the powerful graph cut segmentation algorithm of Boykov and Jolly, which allows straightforward incorporation of the compact shape constraint. In order to make the graph cut approach suitable for our semiautomatic framework, we address several well-known issues of graph cut segmentation technique. In particular, we counteract the bias towards shorter segmentation boundaries and develop a method for automatic selection of parameters. We demonstrate the effectiveness of our approach on the challenging industrial application of transistor gate segmentation in images of integrated chips. Our approach produces highly accurate results in real-time.

14.
The system described in this paper provides a real-time 3D visual experience by using an array of 64 video cameras and an integral photography display with 60 viewing directions. The live 3D scene in front of the camera array is reproduced by the full-color, full-parallax autostereoscopic display with interactive control of viewing parameters. The main technical challenge is fast and flexible conversion of the data from the 64 multicamera images to the integral photography format. Based on image-based rendering techniques, our conversion method first renders 60 novel images corresponding to the viewing directions of the display, and then arranges the rendered pixels to produce an integral photography image. For real-time processing on a single PC, all the conversion processes are implemented on a GPU with GPGPU techniques. The conversion method also allows a user to interactively control viewing parameters of the displayed image for reproducing the dynamic 3D scene with desirable parameters. This control is performed as a software process, without reconfiguring the hardware system, by changing the rendering parameters such as the convergence point of the rendering cameras and the interval between the viewpoints of the rendering cameras.
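A sketch of the view-to-integral-photography rearrangement: given one rendered image per display direction, the pixel block behind each lens collects that lens's sample from every direction. Modeling the 60 directions as a 6x10 grid per lens is our assumption; a real display arranges directions according to its lens optics.

```python
import numpy as np

def to_integral_photography(views, lens_grid=(6, 10)):
    """views: (n_views, H, W[, C]) -> interleaved integral photography image."""
    gy, gx = lens_grid
    n, h, w = views.shape[:3]
    assert n == gy * gx
    v = views.reshape(gy, gx, h, w, *views.shape[3:])
    # output pixel (y*gy+dy, x*gx+dx) shows lens (y, x) from direction (dy, dx)
    ip = np.moveaxis(v, (0, 1), (1, 3))        # -> (h, gy, w, gx, ...)
    return ip.reshape(h * gy, w * gx, *views.shape[3:])

views = np.random.rand(60, 48, 64, 3)          # 60 rendered novel views
ip_image = to_integral_photography(views)      # shape (288, 640, 3)
```

On the GPU this rearrangement is a pure gather per output pixel, which is why the whole conversion pipeline can run in real time on a single PC.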

15.
16.
This paper describes a level-of-detail rendering technique for large-scale irregular volume datasets. It is well known that the memory bandwidth consumed by visibility sorting becomes the limiting factor when carrying out volume rendering of such datasets. To develop a sorting-free volume rendering technique, we previously proposed a particle-based technique that generates opaque and emissive particles using a density function constant within an irregular volume cell and projects the particles onto an image plane with sub-pixels. When the density function changes significantly in an irregular volume cell, the cell boundary may become prominent, which can cause blocky noise. When the number of the sub-pixels increases, the required frame buffer tends to be large. To solve this problem, this work proposes a new particle-based volume rendering which generates particles using Metropolis sampling and renders the particles using the ensemble average. To confirm the effectiveness of this method, we applied our proposed technique to several irregular volume datasets, with the result that the ensemble average outperforms the sub-pixel average in computational complexity and memory usage. In addition, the ensemble average technique allowed us to implement a level of detail in the interactive rendering of a 71-million-cell hexahedral volume dataset and a 26-million-cell quadratic tetrahedral volume dataset.
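A sketch of generating render particles with Metropolis-Hastings: a random walk whose stationary distribution is proportional to the density function rho, so particles concentrate where the volume is dense. The toy blob density, the proposal scale, and the unit-cube domain are placeholders, not the paper's setup.

```python
import numpy as np

def rho(p):
    """Toy density: a Gaussian blob centered in the unit cube."""
    return np.exp(-8.0 * np.sum((p - 0.5) ** 2))

def metropolis_particles(n, step=0.08, burn_in=500, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.random(3)                               # start anywhere in [0,1]^3
    out = []
    for i in range(burn_in + n):
        cand = x + rng.normal(scale=step, size=3)   # symmetric proposal
        if np.all((cand >= 0) & (cand <= 1)):       # rho = 0 outside: reject
            if rng.random() < min(1.0, rho(cand) / rho(x)):
                x = cand                            # accept the move
        if i >= burn_in:
            out.append(x.copy())
    return np.array(out)

particles = metropolis_particles(10000)   # opaque, emissive render particles
```

Averaging several such independently sampled particle ensembles per pixel (the ensemble average) replaces the memory-hungry sub-pixel buffer of the earlier approach.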

17.
A model for volume lighting and modeling   (cited 4 times: 0 self-citations, 4 by others)
Direct volume rendering is a commonly used technique in visualization applications. Many of these applications require sophisticated shading models to capture subtle lighting effects and characteristics of volumetric data and materials. For many volumes, homogeneous regions pose problems for typical gradient-based surface shading. Many common objects and natural phenomena exhibit visual quality that cannot be captured using simple lighting models or cannot be solved at interactive rates using more sophisticated methods. We present a simple yet effective interactive shading model that captures volumetric light attenuation effects, incorporating volumetric shadows, an approximation to phase functions, an approximation to forward scattering, and chromatic attenuation that provides the subtle appearance of translucency. We also present a technique for volume displacement or perturbation that allows realistic interactive modeling of high frequency detail for both real and synthetic volumetric data.
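A sketch of the core attenuation term behind volumetric shadows: opacity is accumulated along a secondary ray toward the light and converted to transmittance. This is a brute-force per-sample version purely to show the term; interactive systems like the one above compute the attenuation incrementally rather than per sample.

```python
import numpy as np

def transmittance_to_light(volume_opacity, pos, light_dir, step=1.0):
    """March from a sample toward the light, accumulating optical depth."""
    tau, p = 0.0, np.asarray(pos, dtype=float)
    shape = np.array(volume_opacity.shape)
    while np.all(p >= 0) and np.all(p < shape):
        tau += volume_opacity[tuple(p.astype(int))] * step
        p = p + light_dir * step
    return np.exp(-tau)              # fraction of light reaching the sample

vol = np.random.rand(32, 32, 32) * 0.1   # per-voxel extinction
shadow = transmittance_to_light(vol, pos=(16, 16, 16),
                                light_dir=np.array([0.0, 0.0, 1.0]))
shaded = 0.2 + 0.8 * shadow              # ambient term + shadowed direct light
```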

18.
We present a new algorithm for efficient incremental rendering of volumetric datasets. The primary goal of this algorithm is to give average workstations the ability to efficiently render volume data received over relatively low-bandwidth network links in such a way that rapid user feedback is maintained. Common limitations of workstation rendering of volume data include large memory overheads, the need for expensive rendering hardware, and demands for high-speed processing. The rendering algorithm presented here overcomes these problems by making use of the efficient Shear-Warp Factorisation method, which does not require specialised graphics hardware. However, the original Shear-Warp algorithm suffers from a high memory overhead and does not provide for incremental rendering, which is required if rapid user feedback is to be maintained. Our algorithm represents the volumetric data using a hierarchical data structure which provides for the incremental classification and rendering of volume data. This exploits the multiscale nature of the octree data structure. The algorithm reduces the memory footprint of the original Shear-Warp Factorisation algorithm by a factor of more than two, while maintaining good rendering performance. These factors make our octree algorithm more suitable for implementation on average desktop workstations for the purposes of interactive exploration of volume models over a network. Results from tests using typical volume datasets are presented which demonstrate the ability of the algorithm to achieve high rendering rates for both incremental rendering and standard rendering while reducing the runtime memory requirements.

19.
The selection of an appropriate global transfer function is essential for visualizing time-varying simulation data. This is especially challenging when the global data range is not known in advance, as is often the case in remote and in-situ visualization settings. Since the data range may vary dramatically as the simulation progresses, volume rendering using local transfer functions may not be coherent for all time steps. We present an exploratory technique that enables coherent classification of time-varying volume data. Unlike previous approaches, which require pre-processing of all time steps, our approach lets the user explore the transfer function space without accessing the original 3D data. This is useful for interactive visualization, and absolutely essential for in-situ visualization, where the entire simulation data range is not known in advance. Our approach generates a compact representation of each time step at rendering time in the form of ray attenuation functions, which are used for subsequent operations on the opacity and color mappings. The presented approach offers interactive exploration of time-varying simulation data that alleviates the cost associated with reloading and caching large data sets.
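A sketch of the "re-composite without the 3D data" idea, under an assumption of ours: each ray caches its sequence of sampled scalar values as a compact stand-in for the paper's ray attenuation functions, and a newly edited transfer function is applied to the cache rather than to the volume.

```python
import numpy as np

def composite(ray_samples, tf_opacity, tf_color):
    c, a = 0.0, 0.0
    for s in ray_samples:            # front-to-back over the cached ray
        sa = tf_opacity(s)
        c += (1 - a) * sa * tf_color(s)
        a += (1 - a) * sa
        if a > 0.99:
            break                    # early ray termination
    return c, a

cached_rays = [np.random.rand(128) for _ in range(4)]   # one cache per pixel
# the user edits the transfer function; only the caches are re-traversed,
# so the original volume never has to be reloaded
new_image = [composite(r, tf_opacity=lambda s: 0.1 * s,
                       tf_color=lambda s: s ** 2) for r in cached_rays]
```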

20.
In recent years, many volumetric illumination models have been proposed, which have the potential to simulate advanced lighting effects and thus support improved image comprehension. Although volume ray-casting is widely accepted as the volume rendering technique which achieves the highest image quality, so far no volumetric illumination algorithm has been designed to be directly incorporated into the ray-casting process. In this paper we propose image plane sweep volume illumination (IPSVI), which allows the integration of advanced illumination effects into a GPU-based volume ray-caster by exploiting the plane sweep paradigm. Thus, we are able to reduce the problem complexity and achieve interactive frame rates, while supporting scattering as well as shadowing. Since all illumination computations are performed directly within a single rendering pass, IPSVI does not require any preprocessing nor does it need to store intermediate results within an illumination volume. It therefore has a significantly lower memory footprint than other techniques. This makes IPSVI directly applicable to large data sets. Furthermore, the integration into a GPU-based ray-caster allows for high image quality as well as improved rendering performance by exploiting early ray termination. This paper discusses the theory behind IPSVI, describes its implementation, demonstrates its visual results and provides performance measurements.
