Similar Documents (20 results)
1.
Interactive ray tracing for volume visualization
Presents a brute-force ray-tracing system for interactive volume visualization. The system runs on a conventional (distributed) shared-memory multiprocessor machine. For each pixel, we trace a ray through a volume to compute the color for that pixel. Although this method has a high intrinsic computational cost, its simplicity and scalability make it ideal for large data sets on current high-end parallel systems. To gain efficiency, several optimizations are used, including a volume bricking scheme and a shallow data hierarchy. These optimizations are used in three separate visualization algorithms: isosurfacing of rectilinear data, isosurfacing of unstructured data, and maximum-intensity projection on rectilinear data. The system runs interactively (i.e., at several frames per second) on an SGI Reality Monster. The graphics capabilities of the Reality Monster are used only for display of the final color image.
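As a rough illustration of the per-pixel idea only (not the paper's parallel SGI implementation), the following NumPy sketch computes a maximum-intensity projection by marching axis-aligned rays through a small synthetic rectilinear volume; the volume, image size, and sample count are invented for the example.

```python
import numpy as np

def mip_render(volume, num_samples=128):
    """Toy maximum-intensity projection: each pixel's ray is an axis-aligned
    column of samples through the volume, and we keep the largest value seen."""
    zs = np.linspace(0, volume.shape[2] - 1, num_samples).astype(int)
    return volume[:, :, zs].max(axis=2)

# Synthetic 64^3 volume with a bright Gaussian blob in the centre.
x, y, z = np.mgrid[0:64, 0:64, 0:64]
volume = np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2 + (z - 32.0) ** 2) / 200.0)
image = mip_render(volume)          # 64x64 grayscale MIP image
print(image.shape, float(image.max()))
```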

2.
Most of the available algorithms for scalar volume visualization offer predefined techniques such as display of volumetric regions defined by scalar threshold values. The regions can usually be drawn opaque or transparent or appear in combinations. This paper presents an implementation of a volume visualization concept where several modelling and rendering techniques can be applied in any combination, mainly bounded by the creativity of the user. The concept is based on the use of a model for light scattering in a field of varying density emitters, and the use of fixed visual references to improve readability and disambiguate interpretation. An image is computed by means of an interval-based mapping from scalar range to visual parameters. Each interval has a set of associated parameters, such as colour and attenuation. In addition, each interval of the scalar range must be mapped into relative density by means of a transfer function which is selected in a ‘natural way’ depending on the application. A methodology is suggested which enables the piecewise transfer functions to be easily determined. A prototype user interface for the mapping from scalar values to visual parameters is demonstrated. The interface is easy to use for the beginner, while it also encourages creativity and intuition in the process of selecting parameters.
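To make the interval-based mapping concrete, here is a minimal sketch with invented interval boundaries and visual parameters; the paper's application-dependent choice of the relative-density transfer function is not reproduced.

```python
# Illustrative interval table: (low, high, RGB colour, attenuation, relative density).
intervals = [
    (0.0, 0.3, (0.2, 0.2, 1.0), 0.05, 0.1),   # e.g. background / soft material
    (0.3, 0.7, (0.2, 1.0, 0.2), 0.30, 0.5),   # intermediate scalar range
    (0.7, 1.0, (1.0, 0.2, 0.2), 0.80, 1.0),   # dense material
]

def map_scalar(s):
    """Piecewise mapping from a normalised scalar value to visual parameters."""
    for lo, hi, colour, attenuation, density in intervals:
        if lo <= s <= hi:
            return colour, attenuation, density
    return (0.0, 0.0, 0.0), 0.0, 0.0

print(map_scalar(0.55))
```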

3.
We propose clipping methods that are capable of using complex geometries for volume clipping. The clipping tests exploit per-fragment operations on the graphics hardware to achieve high frame rates. In combination with texture-based volume rendering, these techniques enable the user to interactively select and explore regions of the data set. We present depth-based clipping techniques that analyze the depth structure of the boundary representation of the clip geometry to decide which parts of the volume have to be clipped. In another approach, a voxelized clip object is used to identify the clipped regions. Furthermore, the combination of volume clipping and volume shading is considered. An optical model is introduced to merge aspects of surface-based and volume-based illumination in order to achieve a consistent shading of the clipping surface. It is demonstrated how this model can be efficiently incorporated in the aforementioned clipping techniques.
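The following CPU-side sketch shows a depth-based clip test for a convex clip geometry whose front and back depth layers have been rendered per pixel; the values are invented, and the actual technique runs per fragment on graphics hardware and also covers voxelized clip objects and consistent shading.

```python
import numpy as np

def clip_mask(sample_depth, clip_front, clip_back, clip_inside=True):
    """Depth-based clip test: samples between the first and second depth layer
    of a convex clip geometry lie inside it and are discarded (or kept)."""
    inside = (sample_depth >= clip_front) & (sample_depth <= clip_back)
    return ~inside if clip_inside else inside

depths = np.linspace(0.0, 1.0, 11)                       # sample depths along one ray
keep = clip_mask(depths, clip_front=0.3, clip_back=0.6)  # False where clipped away
print(keep)
```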

4.
Large scale and structurally complex volume datasets from high-resolution 3D imaging devices or computational simulations pose a number of technical challenges for interactive visual analysis. In this paper, we present the first integration of a multiscale volume representation based on tensor approximation within a GPU-accelerated out-of-core multiresolution rendering framework. Specific contributions include (a) a hierarchical brick-tensor decomposition approach for pre-processing large volume data, (b) a GPU accelerated tensor reconstruction implementation exploiting CUDA capabilities, and (c) an effective tensor-specific quantization strategy for reducing data transfer bandwidth and out-of-core memory footprint. Our multiscale representation allows for the extraction, analysis and display of structural features at variable spatial scales, while adaptive level-of-detail rendering methods make it possible to interactively explore large datasets within a constrained memory footprint. The quality and performance of our prototype system is evaluated on large structurally complex datasets, including gigabyte-sized micro-tomographic volumes.
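As a hedged sketch of the brick-tensor idea, the NumPy code below computes a rank-truncated higher-order SVD (a Tucker-style approximation) of a single brick and reconstructs it; the paper's hierarchical decomposition, quantization, and CUDA reconstruction are not shown, and the brick size and ranks are invented.

```python
import numpy as np

def mode_product(tensor, matrix, mode):
    """Multiply a tensor by a matrix along the given mode."""
    out = np.tensordot(matrix, tensor, axes=(1, mode))
    return np.moveaxis(out, 0, mode)

def truncated_hosvd(brick, ranks):
    """Rank-truncated higher-order SVD: a small core tensor plus one factor per mode."""
    factors = []
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(brick, mode, 0).reshape(brick.shape[mode], -1)
        u, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(u[:, :r])
    core = brick.astype(np.float64)
    for mode, u in enumerate(factors):
        core = mode_product(core, u.T, mode)
    return core, factors

def reconstruct(core, factors):
    out = core
    for mode, u in enumerate(factors):
        out = mode_product(out, u, mode)
    return out

brick = np.random.default_rng(0).random((32, 32, 32))
core, factors = truncated_hosvd(brick, ranks=(8, 8, 8))
approx = reconstruct(core, factors)
print(core.shape, float(np.abs(brick - approx).max()))
```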

5.
Insight into the global structure of a state space is of great help in the analysis of the underlying process. We advocate the use of visualization for this purpose and present a method to visualize the structure of very large state spaces with millions of nodes. The method uses a clustering based on an equivalence relation to obtain a simplified representation, which is used as a backbone for the display of the entire state space. With this visualization we are able to answer questions about the global structure of a state space that cannot easily be answered by conventional methods. We show this by presenting a number of visualizations of real-world protocols.
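A toy sketch of the clustering step, under assumptions: states are grouped by a hypothetical equivalence key (here, the set of enabled action labels), and the reduced graph between clusters serves as the backbone; the relation and layout used by the actual tool differ.

```python
from collections import defaultdict

transitions = {            # state -> list of (action, successor); a tiny invented example
    0: [("a", 1), ("b", 2)],
    1: [("a", 3)],
    2: [("a", 3)],
    3: [],
}

def key(state):
    """Equivalence key: the set of actions enabled in the state."""
    return frozenset(action for action, _ in transitions[state])

clusters = defaultdict(set)
for state in transitions:
    clusters[key(state)].add(state)

backbone = {(key(s), key(t)) for s, edges in transitions.items()
            for _, t in edges if key(s) != key(t)}
print(len(clusters), "clusters,", len(backbone), "backbone edges")
```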

6.
This paper presents a novel framework for visualizing volumetric data specified on complex polyhedral grids, without the need to perform any kind of a priori tetrahedralization. These grids are composed of polyhedra that often are non-convex and have an arbitrary number of faces, where the faces can be non-planar with an arbitrary number of vertices. The importance of such grids in state-of-the-art simulation packages is increasing rapidly. We propose a very compact, face-based data structure for representing such meshes for visualization, called two-sided face sequence lists (TSFSL), as well as an algorithm for direct GPU-based ray-casting using this representation. The TSFSL data structure is able to represent the entire mesh topology in a 1D TSFSL data array of face records, which facilitates the use of efficient 1D texture accesses for visualization. In order to scale to large data sizes, we employ a mesh decomposition into bricks that can be handled independently, where each brick is then composed of its own TSFSL array. This bricking enables memory savings and performance improvements for large meshes. We illustrate the feasibility of our approach with real-world application results, by visualizing highly complex polyhedral data from commercial state-of-the-art simulation packages.

7.
Finding relevant information in a large and comprehensive collection of cross-referenced documents like Wikipedia usually requires a quite accurate idea of where to look for the pieces of data being sought. A user might not yet have enough domain-specific knowledge to form a precise search query to get the desired result on the first try. Another problem arises from the usually highly cross-referenced structure of such document collections. When researching a subject, users usually follow some references to get additional information not covered by a single document. With each document, more opportunities to navigate are added and the structure and relations of the visited documents get harder to understand.

8.
Riemannian metric tensors are used to control the adaptation of meshes for finite element and finite volume computations. To study the numerous metric construction and manipulation techniques, a new method has been developed to visualize two-dimensional metrics without interference from an adaptation algorithm. This method traces a network of orthogonal tensor lines, tangent to the eigenvectors of the metric field, to form a pseudo-mesh visually close to a perfectly adapted mesh but without many of its constraints. Anisotropic metrics can be visualized directly using such pseudo-meshes but, for isotropic metrics, the eigensystem is degenerate and an anisotropic perturbation has to be used. This perturbation merely preserves directional information usually present during metric construction and is small enough, about 1% of the prescribed target element size, to be visually imperceptible. Both analytical and solution-based examples show the effectiveness and usefulness of the present method. As an example, pseudo-meshes are used to visualize the effect on metrics of Laplacian-like smoothing and gradation control techniques. Application to adaptive quadrilateral mesh generation is also discussed.
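A hedged sketch of tracing one tensor line: integrate along an eigenvector of a 2x2 metric field with a fixed step, adding a tiny anisotropic perturbation where the eigensystem is nearly degenerate; the metric field, step size, and perturbation magnitude are invented for illustration.

```python
import numpy as np

def metric(p):
    """Hypothetical smooth 2D metric field (an SPD 2x2 matrix at each point)."""
    x, y = p
    return np.array([[1.0 + 0.5 * x * x, 0.1 * x * y],
                     [0.1 * x * y, 1.0 + 0.5 * y * y]])

def trace_tensor_line(start, steps=200, h=0.01, eps=0.01):
    """Follow an eigenvector of the metric, perturbing nearly isotropic points
    slightly so that a preferred direction is still defined."""
    p = np.asarray(start, dtype=float)
    path = [p.copy()]
    prev = None
    for _ in range(steps):
        m = metric(p)
        w, v = np.linalg.eigh(m)
        if abs(w[1] - w[0]) < eps * abs(w[1]):   # degenerate (isotropic) metric
            w, v = np.linalg.eigh(m + np.diag([eps, 0.0]))
        d = v[:, 0]                              # eigenvector of the smaller eigenvalue
        if prev is not None and np.dot(d, prev) < 0:
            d = -d                               # keep a consistent orientation
        p = p + h * d
        prev = d
        path.append(p.copy())
    return np.array(path)

line = trace_tensor_line(start=(0.2, 0.1))
print(line.shape)      # (201, 2) polyline approximating one tensor line
```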

9.
Volumetric datasets with multiple variables on each voxel over multiple time steps are often complex, especially when considering the exponentially large attribute space formed by the variables in combination with the spatial and temporal dimensions. It is intuitive, practical, and thus often desirable, to interactively select a subset of the data from within that high-dimensional value space for efficient visualization. This approach is straightforward to implement if the dataset is small enough to be stored entirely in-core. However, this simplistic approach becomes infeasible for datasets sized at hundreds of gigabytes and beyond, and more sophisticated solutions are needed. In this work, we developed a system that supports efficient visualization of an arbitrary subset, selected by range queries, of a large multivariate time-varying dataset. By employing specialized data structures and schemes of data distribution, our system can leverage a large number of networked computers as parallel data servers, and guarantees a near-optimal load balance. We demonstrate our system of scalable data servers using two large time-varying simulation datasets.
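An in-core sketch of the range-query selection on a small synthetic multivariate, time-varying dataset; the paper's actual contribution, serving such queries out of core from many parallel data servers with balanced load, is not attempted here, and the variable names and ranges are invented.

```python
import numpy as np

# Synthetic data: 10 time steps of a 32^3 volume with two variables per voxel.
rng = np.random.default_rng(0)
temperature = rng.uniform(200.0, 400.0, size=(10, 32, 32, 32))
pressure = rng.uniform(0.5, 2.0, size=(10, 32, 32, 32))

def range_query(t, temp_range, pres_range):
    """Return voxel indices at time step t whose values fall inside both ranges."""
    mask = ((temperature[t] >= temp_range[0]) & (temperature[t] <= temp_range[1]) &
            (pressure[t] >= pres_range[0]) & (pressure[t] <= pres_range[1]))
    return np.argwhere(mask)

selected = range_query(t=3, temp_range=(350.0, 400.0), pres_range=(1.5, 2.0))
print(len(selected), "voxels selected for visualization")
```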

10.
Interactive texture-based volume rendering for large data sets
To employ direct volume rendering, TRex uses parallel graphics hardware, software-based compositing, and high-performance I/O to provide near-interactive display rates for time-varying, terabyte-sized data sets. We present a scalable, pipelined approach for rendering data sets too large for a single graphics card. To do so, we take advantage of multiple hardware rendering units and parallel software compositing. The goals of TRex, our system for interactive volume rendering of large data sets, are to provide near-interactive display rates for time-varying, terabyte-sized, uniformly sampled data sets and to provide a low-latency platform for volume visualization in immersive environments. We consider 5 frames per second (fps) to be near-interactive for normal viewing environments and take immersive environments to require a lower-bound frame rate of 10 fps. Using TRex for virtual reality environments requires low latency - around 50 ms per frame, or 100 ms per view update or stereo pair. To achieve lower latency renderings, we either render smaller portions of the volume on more graphics pipes or subsample the volume to render fewer samples per frame by each graphics pipe. Unstructured data sets must be resampled to appropriately leverage the 3D texture volume rendering method.
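A hedged sketch of the software compositing step: partial images rendered from different sub-volumes are blended front to back with the over operator; the real system distributes the rendering across graphics pipes and composites in parallel, and the image contents here are invented.

```python
import numpy as np

def over(front_rgba, back_rgba):
    """'Over' compositing of two premultiplied RGBA images, front over back."""
    a = front_rgba[..., 3:4]
    return front_rgba + (1.0 - a) * back_rgba

# Hypothetical partial images from two bricks, ordered front to back.
h, w = 4, 4
front = np.tile(np.array([0.2, 0.0, 0.0, 0.3]), (h, w, 1))
back = np.tile(np.array([0.0, 0.4, 0.0, 0.6]), (h, w, 1))

final = over(front, back)
print(final[0, 0])      # composited premultiplied RGBA of one pixel
```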

11.
We first present the volume-rendering pipeline and the most typical of the existing methods for each pipeline stage. The complexity of each stage in terms of computing time is analyzed for each method. Then the demands and the scope of interactive volume rendering are briefly summarized. Based on this analysis we examine alternate solutions to optimize each pipeline stage in order to allow interactive visualization while maintaining the image quality. The proposed method maximizes interactive manipulation possibilities and minimizes runtimes by sampling at the Nyquist rate and by flexibly trading off quality for performance at any pipeline level. Our approach is suitable for rendering large, scalar, discrete volume fields such as semitransparent clouds (or X-rays) on the fly.

12.
Interactive visualization of volume models on standard mobile devices is a challenging problem of increasing interest from new application fields like telemedicine. The complexity of volume models in medical applications is continuously increasing, widening the gap between the available models and the rendering capabilities of low-end mobile clients. New and efficient rendering algorithms and interaction paradigms are required for these small platforms. In this paper, we propose a transfer function-aware compression and interaction scheme for client-server architectures with visualization on standard mobile devices. The scheme is block-based, supporting adaptive ray-casting in the client. Our two-level ray-casting allows focusing on small details in targeted regions while keeping bounded memory requirements in the GPU of the client. Our approach includes a transfer function-aware compression scheme based on a local wavelet transformation, together with a bricking scheme that supports interactive inspection and levels of detail in the mobile device client. We also use a quantization technique that takes into account a perceptual metric of the visual error. Our results show that we can have full interaction with high compression rates and with transmitted model sizes that can be of the order of a single photographic image.
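A heavily simplified sketch of the per-block compression idea: a one-level 3D Haar-style transform followed by thresholding of coefficients, with the threshold loosened for blocks that a hypothetical transfer function maps to mostly transparent values; the real scheme's wavelet, quantizer, and perceptual error metric differ.

```python
import numpy as np

def haar3d(block):
    """One-level 3D Haar-style transform: averages and differences along each axis."""
    out = block.astype(np.float64)
    for axis in range(3):
        a = np.take(out, np.arange(0, out.shape[axis], 2), axis=axis)
        b = np.take(out, np.arange(1, out.shape[axis], 2), axis=axis)
        out = np.concatenate(((a + b) / 2.0, (a - b) / 2.0), axis=axis)
    return out

def compress_block(block, opacity, base_threshold=0.01):
    """Zero small coefficients; nearly transparent blocks get a coarser threshold
    because their error is less visible after the transfer function is applied."""
    coeffs = haar3d(block)
    visible = opacity(block).mean()                  # fraction of "visible" voxels
    threshold = base_threshold / max(visible, 0.05)
    coeffs[np.abs(coeffs) < threshold] = 0.0
    return coeffs

opacity = lambda v: (v > 0.5).astype(float)          # hypothetical transfer function
block = np.random.default_rng(1).random((16, 16, 16))
coeffs = compress_block(block, opacity)
print("non-zero coefficients:", int(np.count_nonzero(coeffs)), "of", coeffs.size)
```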

13.
Data visualization of high-dimensional data is possible through the use of dimensionality reduction techniques. However, in deciding which dimensionality reduction techniques to use in practice, quantitative metrics are necessary for evaluating the results of the transformation and visualization of the lower dimensional embedding. In this paper, we propose a manifold visualization metric based on the pairwise correlation of the geodesic distance in a data manifold. This metric is compared with other metrics based on the Euclidean distance, Mahalanobis distance, City Block metric, Minkowski metric, cosine distance, Chebychev distance, and Spearman distance. The results of applying different dimensionality reduction techniques on various types of nonlinear manifolds are compared and discussed. Our experiments show that our proposed metric is suitable for quantitatively evaluating the results of the dimensionality reduction techniques if the data lies on an open planar nonlinear manifold. This has practical significance in the implementation of knowledge-based visualization systems and the application of knowledge-based dimensionality reduction methods.
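A sketch of this style of metric under assumptions: geodesic distances are approximated by shortest paths on a k-nearest-neighbour graph of the original data and then correlated with pairwise distances in the low-dimensional embedding; the neighbourhood size, the PCA embedding, and the swiss-roll test data are illustrative, and the paper's exact formulation may differ.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import pdist, squareform
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.neighbors import kneighbors_graph

X, _ = make_swiss_roll(n_samples=400, random_state=0)   # nonlinear manifold data
Y = PCA(n_components=2).fit_transform(X)                 # one candidate embedding

# Approximate geodesic distances: shortest paths on a k-NN graph of the input.
knn = kneighbors_graph(X, n_neighbors=8, mode="distance")
geodesic = shortest_path(knn, directed=False)

embedded = squareform(pdist(Y))                          # distances after reduction

# Pairwise correlation between geodesic and embedded distances (upper triangle).
iu = np.triu_indices_from(geodesic, k=1)
finite = np.isfinite(geodesic[iu])                       # ignore disconnected pairs
score = np.corrcoef(geodesic[iu][finite], embedded[iu][finite])[0, 1]
print("geodesic-distance correlation:", round(float(score), 3))
```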

14.
In this paper, we present an algorithm that accelerates 3D texture-based volume rendering of large, sparse data sets, i.e., data sets where only a fraction of the voxels contain relevant information. In texture-based approaches, the rendering performance is affected by the fill-rate, the size of texture memory, and the texture I/O bandwidth. For sparse data, these limitations can be circumvented by restricting most of the rendering work to the relevant parts of the volume. In order to efficiently enclose the corresponding regions with axis-aligned boxes, we employ a hierarchical data structure, known as an AMR (adaptive mesh refinement) tree. The hierarchy is generated utilizing a clustering algorithm. A good balance is thereby achieved between the size of the enclosed volume, i.e., the amount to render in graphics hardware, and the number of axis-aligned regions, i.e., the number of texture coordinates to compute in software. The waste of texture memory caused by the power-of-two restriction is minimized by a 3D packing algorithm which arranges texture bricks economically in memory. Compared to an octree approach, the rendering performance is significantly increased and less parameter tuning is necessary.
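A minimal sketch of the enclosing step under assumptions: the volume is partitioned into fixed-size bricks, only bricks containing relevant voxels are kept, and the fraction of the volume that still has to be rendered is reported; the clustering into an AMR tree and the power-of-two texture packing are not shown, and the brick size and test volume are invented.

```python
import numpy as np

def relevant_bricks(volume, brick=16, threshold=0.0):
    """Return origin and shape of every brick containing a voxel above threshold."""
    boxes = []
    nx, ny, nz = volume.shape
    for i in range(0, nx, brick):
        for j in range(0, ny, brick):
            for k in range(0, nz, brick):
                sub = volume[i:i + brick, j:j + brick, k:k + brick]
                if (sub > threshold).any():
                    boxes.append(((i, j, k), sub.shape))
    return boxes

# Sparse synthetic volume: zero almost everywhere, one small feature.
volume = np.zeros((64, 64, 64))
volume[40:48, 8:20, 30:34] = 1.0
boxes = relevant_bricks(volume)
rendered = sum(int(np.prod(shape)) for _, shape in boxes)
print(len(boxes), "bricks,", round(100.0 * rendered / volume.size, 1), "% of volume rendered")
```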

15.
In this paper, we explore a novel idea of using high dynamic range (HDR) technology for uncertainty visualization. We focus on scalar volumetric data sets where every data point is associated with scalar uncertainty. We design a transfer function that maps each data point to a color in HDR space. The luminance component of the color is exploited to capture uncertainty. We modify existing tone mapping techniques and suitably integrate them with volume ray casting to obtain a low dynamic range (LDR) image. The resulting image is displayed on a conventional 8-bits-per-channel display device. The usage of HDR mapping reveals fine details in uncertainty distribution and enables the users to interactively study the data in the context of corresponding uncertainty information. We demonstrate the utility of our method and evaluate the results using data sets from ocean modeling.
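An illustrative sketch of the mapping idea: the data value picks a colour, the associated uncertainty scales the HDR luminance, and a simple Reinhard-style operator tone-maps the result into displayable range; the colour map, luminance scale, and tone mapper here are stand-ins for the paper's transfer function and modified tone mapping.

```python
import numpy as np

def to_hdr(value, uncertainty, max_luminance=100.0):
    """Map a normalised data value to an HDR colour whose luminance encodes
    certainty: confident samples are rendered much brighter than uncertain ones."""
    colour = np.stack([value, 0.2 * np.ones_like(value), 1.0 - value], axis=-1)
    luminance = max_luminance * (1.0 - uncertainty)
    return colour * luminance[..., None]

def reinhard(hdr):
    """Simple global Reinhard tone mapping to a displayable [0, 1) range."""
    return hdr / (1.0 + hdr)

value = np.random.default_rng(2).random((8, 8))          # toy data slice
uncertainty = np.random.default_rng(3).random((8, 8))    # per-sample uncertainty
ldr = reinhard(to_hdr(value, uncertainty))
print(ldr.shape, float(ldr.min()), float(ldr.max()))     # 8x8 RGB image in [0, 1)
```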

16.
In Virtual Reality, immersive systems such as the CAVE provide an important tool for the collaborative exploration of large 3D data. Unlike head-mounted displays, these systems are often only partially immersive due to space, access, or cost constraints. The resulting loss of visual information becomes a major obstacle for critical tasks that need to utilize the users' entire field of vision. We have developed a conformal visualization technique that establishes a conformal mapping between the full 360° field of view and the display geometry of a given visualization system. The mapping is provably angle-preserving and has the desirable property of preserving shapes locally, which is important for identifying shape-based features in the visual data. We apply the conformal visualization to both forward and backward rendering pipelines in a variety of retargeting scenarios, including CAVEs and angled arrangements of flat panel displays. In contrast to image-based retargeting approaches, our technique constructs accurate stereoscopic images that are free of resampling artifacts. Our user study shows that on the visual polyp detection task in Immersive Virtual Colonoscopy, conformal visualization leads to improved sensitivity at comparable examination times compared with the traditional rendering approach. We also develop a novel user interface based on the interactive recreation of the conformal mapping and the real-time regeneration of the view direction correspondence.

17.
18.

This paper presents a fused feature metric for no-reference image quality assessment of natural images. Natural images exhibit strong statistical properties across the visual contents such as leading edges, high-dimensional singularities, scale invariance, etc. The leading edge represents the strong presence of continuous points, whereas high-dimensional singularity conveys information about non-continuous points along curves. Both edges and curves are equally important in perceiving natural images. Distortions to the image affect the intensities of these points. The change in the intensities of these key points can be measured using SIFT. However, SIFT tends to ignore certain points, such as points in low-contrast regions, which can be identified by the curvelet transform. Therefore, we propose a fusion of SIFT key points and the points identified by the curvelet transform to model these changes. The proposed fused feature metric is computationally efficient and light on resources. The neurofuzzy classifier is employed to evaluate the proposed feature metric. Experimental results show a good correlation between subjective and objective scores for the public datasets LIVE, TID2008, and TID2013.
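A hedged fragment of the SIFT side of the feature extraction, using OpenCV; the curvelet component, the fusion, and the neurofuzzy classifier are not reproduced, and the tiny summary vector is only a placeholder for the paper's metric.

```python
import cv2
import numpy as np

def sift_point_features(gray_image):
    """Detect SIFT key points and summarise them as (count, mean response, mean scale)."""
    sift = cv2.SIFT_create()
    keypoints = sift.detect(gray_image, None)
    if not keypoints:
        return np.zeros(3)
    responses = np.array([kp.response for kp in keypoints])
    sizes = np.array([kp.size for kp in keypoints])
    return np.array([len(keypoints), responses.mean(), sizes.mean()])

# Synthetic grayscale test image: a bright square over mild noise.
image = (np.random.default_rng(4).random((128, 128)) * 50).astype(np.uint8)
image[40:90, 40:90] += 120
print(sift_point_features(image))
```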


19.
In the medical field, interactive three-dimensional volume visualization of large volume datasets is a challenging task. One of the major challenges in graphics processing unit (GPU)-based volume rendering algorithms is the limited size of texture memory imposed by current GPU architectures. We attempt to overcome this limitation by rendering only the visible parts of large CT datasets. In this paper, we present an efficient, high-quality volume rendering algorithm using GPUs for rendering large CT datasets at interactive frame rates on standard PC hardware. We subdivide the volume dataset into uniformly sized blocks and take advantage of combinations of early ray termination, empty-space skipping and visibility culling to accelerate the whole rendering process and render only the visible parts of the volume data. We have implemented our volume rendering algorithm for a large volume dataset of 512 x 304 x 1878 dimensions (visible female), and achieved interactive performance (i.e., 3-4 frames per second) on a Pentium 4 2.4 GHz PC equipped with an NVIDIA GeForce 6600 graphics card (256 MB video memory). This method can be used as a 3D visualization tool of large CT datasets for doctors or radiologists.
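A CPU-side sketch of two of the accelerations named above, empty-space skipping and early ray termination, for a single ray through a block-subdivided volume; the GPU implementation, visibility culling, and per-block bookkeeping in the paper are more involved, and the opacities are invented.

```python
def cast_ray(blocks, block_opacity, samples_per_block=16, termination=0.98):
    """March through consecutive blocks along one ray, skipping empty blocks and
    stopping once the accumulated opacity is close to 1 (early ray termination)."""
    colour, alpha = 0.0, 0.0
    for block in blocks:
        if block_opacity[block] == 0.0:              # empty-space skipping
            continue
        sample_alpha = block_opacity[block] / samples_per_block
        for _ in range(samples_per_block):
            colour += (1.0 - alpha) * sample_alpha   # white emission, front to back
            alpha += (1.0 - alpha) * sample_alpha
            if alpha >= termination:                 # early ray termination
                return colour, alpha
    return colour, alpha

block_opacity = {0: 0.0, 1: 0.0, 2: 0.9, 3: 0.7, 4: 0.8}   # per-block maximum opacity
print(cast_ray(blocks=[0, 1, 2, 3, 4], block_opacity=block_opacity))
```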

20.
SVD-based quality metric for image and video using machine learning
We study the use of machine learning for visual quality evaluation with comprehensive singular value decomposition (SVD)-based visual features. In this paper, the two-stage process and the relevant work in the existing visual quality metrics are first introduced followed by an in-depth analysis of SVD for visual quality assessment. Singular values and vectors form the selected features for visual quality assessment. Machine learning is then used for the feature pooling process and demonstrated to be effective. This is to address the limitations of the existing pooling techniques, like simple summation, averaging, Minkowski summation, etc., which tend to be ad hoc. We advocate machine learning for feature pooling because it is more systematic and data driven. The experiments show that the proposed method outperforms the eight existing relevant schemes. Extensive analysis and cross validation are performed with ten publicly available databases (eight for images with a total of 4042 test images and two for video with a total of 228 videos). We use all publicly accessible software and databases in this study, as well as making our own software public, to facilitate comparison in future research.
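A hedged sketch of the two-stage idea: per-block distances between the singular-value spectra of a reference and a distorted image serve as the features, and a learned regressor replaces ad hoc pooling; the block size, feature definition, model choice, and synthetic training data are all illustrative.

```python
import numpy as np
from sklearn.svm import SVR

def svd_features(reference, distorted, block=32):
    """Per-block distance between the singular-value spectra of two images."""
    feats = []
    for i in range(0, reference.shape[0], block):
        for j in range(0, reference.shape[1], block):
            sr = np.linalg.svd(reference[i:i + block, j:j + block], compute_uv=False)
            sd = np.linalg.svd(distorted[i:i + block, j:j + block], compute_uv=False)
            feats.append(np.linalg.norm(sr - sd))
    return np.array(feats)

rng = np.random.default_rng(5)
reference = rng.random((128, 128))
# Synthetic training set: increasing noise plays the role of increasing distortion.
levels = np.linspace(0.0, 0.5, 20)
X = np.array([svd_features(reference, reference + lvl * rng.random((128, 128)))
              for lvl in levels])
y = levels                                   # stand-in for subjective quality scores
model = SVR().fit(X, y)                      # learned pooling of the SVD features
print(round(float(model.predict(X[10:11])[0]), 3))
```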
