Similar Documents
10 similar documents found.
1.
Visualizing large-scale volume data has long been considered difficult in science and engineering, especially for data that require very large run-time storage. This article discusses an efficient wavelet-based 3D compression scheme for large-scale volume data. In designing the compression scheme, a trade-off is made between two important factors: a high compression ratio and fast run-time random access. Experimental results on the Visible Human data set show that the method achieves a fairly good compression ratio while keeping the overhead caused by run-time reconstruction of voxel values to a minimum. This 3D compression scheme is very useful for developing interactive visualization systems for large-scale data, and it makes visualization techniques accessible to more users, such as those working on personal computers or low-end workstations with limited capacity.
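The abstract does not give implementation details; the following numpy sketch only illustrates the general idea of block-wise wavelet compression with run-time random access to individual blocks (one level of a 3D Haar transform plus coefficient thresholding). The block size, the Haar basis, and the kept-coefficient fraction are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def haar3d(block):
    """One level of a separable, orthonormal 3D Haar transform."""
    for axis in range(3):
        a = np.moveaxis(block, axis, 0)
        lo = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        hi = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        block = np.moveaxis(np.concatenate([lo, hi]), 0, axis)
    return block

def ihaar3d(coeffs):
    """Inverse of haar3d (axis order does not matter for separable transforms)."""
    for axis in range(3):
        a = np.moveaxis(coeffs, axis, 0)
        n = a.shape[0] // 2
        lo, hi = a[:n], a[n:]
        out = np.empty_like(a)
        out[0::2] = (lo + hi) / np.sqrt(2.0)
        out[1::2] = (lo - hi) / np.sqrt(2.0)
        coeffs = np.moveaxis(out, 0, axis)
    return coeffs

def compress_block(block, keep=0.05):
    """Keep only the largest `keep` fraction of wavelet coefficients (lossy)."""
    c = haar3d(block.astype(np.float32))
    mask = np.abs(c) >= np.quantile(np.abs(c), 1.0 - keep)
    return np.flatnonzero(mask), c[mask]

def decompress_block(indices, values, shape=(8, 8, 8)):
    c = np.zeros(shape, dtype=np.float32)
    c.flat[indices] = values
    return ihaar3d(c)

# Compress a volume block by block so that any block can be reconstructed on demand.
B = 8
vol = np.random.rand(64, 64, 64).astype(np.float32)
blocks = {(z, y, x): compress_block(vol[z:z+B, y:y+B, x:x+B])
          for z in range(0, 64, B) for y in range(0, 64, B) for x in range(0, 64, B)}

# Run-time random access: only the requested block is decompressed.
idx, vals = blocks[(16, 24, 8)]
approx = decompress_block(idx, vals)
```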

2.
We develop a volumetric video system which supports interactive browsing of compressed time-varying volumetric features (significant isosurfaces and interval volumes). Since the size of even one volumetric frame in a time-varying 3D data set is very large, transmission and on-line reconstruction are the main bottlenecks for interactive remote visualization of time-varying volume and surface data. We describe a compression scheme for encoding time-varying volumetric features in a unified way, which allows for on-line reconstruction and rendering. To increase the run-time decompression speed and compression ratio, we decompose the volume into small blocks and encode only the significant blocks that contribute to the isosurfaces and interval volumes. The results show that our compression scheme achieves a high compression ratio with fast reconstruction, which is effective for interactive client-side rendering of time-varying volumetric features.
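As a rough illustration of the block-based idea (encode only blocks that can contribute to an isosurface), here is a minimal numpy sketch. The block size, the 8-bit quantization, and the min/max significance test are illustrative assumptions; the paper's actual codec, interval-volume handling, and streaming format are not reproduced.

```python
import numpy as np

def encode_frame(frame, isovalue, block=16):
    """Store only blocks whose value range straddles the isovalue,
    i.e. blocks that can contribute to the isosurface."""
    kept = {}
    nz, ny, nx = frame.shape
    for z in range(0, nz, block):
        for y in range(0, ny, block):
            for x in range(0, nx, block):
                b = frame[z:z+block, y:y+block, x:x+block]
                lo, hi = float(b.min()), float(b.max())
                if lo <= isovalue <= hi:
                    q = np.round((b - lo) / (hi - lo + 1e-12) * 255).astype(np.uint8)
                    kept[(z, y, x)] = (lo, hi, q)        # cheap 8-bit block encoding
    return kept

def decode_block(lo, hi, q):
    """Client-side reconstruction of one significant block."""
    return lo + q.astype(np.float32) / 255.0 * (hi - lo)

# Synthetic time step: a radial field whose isosurface is a sphere of radius 20.
z, y, x = np.mgrid[0:64, 0:64, 0:64].astype(np.float32)
frame = np.sqrt((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2)
enc = encode_frame(frame, isovalue=20.0)
print(f"kept {len(enc)} of {(64 // 16) ** 3} blocks")
```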

3.
Advances in graphics hardware provide the hardware basis for real-time volume data visualization. However, as scanning technology develops, visualization of large data sets still faces the problem of insufficient texture memory, so it is important to study compressed representations that preserve data features. Applying the idea of tensor approximation, we build a multi-scale representation and visualization method for volume data: on the one hand, the multi-scale tensor approximation achieves data compression and addresses the problem of rendering large data; on the other hand, the adaptive compression bases of the tensor approximation preserve the scale features of the volume data. Experimental results show that the method is effective.
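The abstract does not specify the tensor model; a common choice for volume data is a truncated Tucker decomposition computed by higher-order SVD. The numpy sketch below shows that generic idea, with the truncation ranks as the compression knob; the ranks, the test volume, and the absence of the paper's multi-scale, adaptive-basis machinery are all simplifying assumptions.

```python
import numpy as np

def unfold(t, mode):
    """Mode-n unfolding of a 3rd-order tensor into a matrix."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def mode_multiply(t, m, mode):
    """Multiply tensor t by matrix m along the given mode."""
    return np.moveaxis(np.tensordot(m, np.moveaxis(t, mode, 0), axes=1), 0, mode)

def hosvd(vol, ranks):
    """Truncated higher-order SVD: a small core tensor plus one factor matrix per mode."""
    factors = [np.linalg.svd(unfold(vol, mode), full_matrices=False)[0][:, :r]
               for mode, r in enumerate(ranks)]
    core = vol
    for mode, u in enumerate(factors):
        core = mode_multiply(core, u.T, mode)
    return core, factors

def reconstruct(core, factors):
    out = core
    for mode, u in enumerate(factors):
        out = mode_multiply(out, u, mode)
    return out

# Smooth synthetic 64^3 volume; real CT/simulation data would be loaded instead.
g = np.exp(-((np.arange(64) - 32) / 12.0) ** 2).astype(np.float32)
vol = g[:, None, None] * g[None, :, None] * g[None, None, :]
core, factors = hosvd(vol, ranks=(16, 16, 16))
approx = reconstruct(core, factors)
stored = core.size + sum(f.size for f in factors)
print(f"compression ratio {vol.size / stored:.1f}:1, "
      f"relative error {np.linalg.norm(vol - approx) / np.linalg.norm(vol):.3f}")
```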

4.
In the medical area, interactive three-dimensional volume visualization of large volume datasets is a challenging task. One of the major challenges in graphics processing unit (GPU)-based volume rendering algorithms is the limited size of texture memory imposed by current GPU architecture. We attempt to overcome this limitation by rendering only the visible parts of large CT datasets. In this paper, we present an efficient, high-quality volume rendering algorithm using GPUs for rendering large CT datasets at interactive frame rates on standard PC hardware. We subdivide the volume dataset into uniformly sized blocks and combine early ray termination, empty-space skipping and visibility culling to accelerate the whole rendering process and render only the visible parts of the volume data. We have implemented our volume rendering algorithm for a large volume data set of 512 × 304 × 1878 dimensions (Visible Female) and achieved real-time performance (i.e., 3-4 frames per second) on a Pentium 4 2.4 GHz PC equipped with an NVIDIA GeForce 6600 graphics card (256 MB video memory). This method can be used as a 3D visualization tool of large CT datasets for doctors or radiologists.
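The abstract names the main acceleration techniques (block subdivision, empty-space skipping, early ray termination). The Python sketch below is a CPU stand-in for a single ray of such a GPU kernel, just to show how a per-block maximum table and an opacity threshold interact; the toy transfer function, block size, and thresholds are made-up placeholders rather than the paper's settings.

```python
import numpy as np

def march_ray(vol, block_max, origin, direction, block=32,
              step=0.5, value_min=0.3, opacity_scale=0.05, max_steps=4000):
    """Front-to-back compositing along one ray, with empty-space skipping
    (per-block maximum below the visible range -> jump a whole block)
    and early ray termination (stop once the ray is nearly opaque)."""
    color, alpha = 0.0, 0.0
    pos = np.asarray(origin, dtype=np.float32)
    d = np.asarray(direction, dtype=np.float32)
    d = d / np.linalg.norm(d)
    for _ in range(max_steps):
        i, j, k = int(pos[0]), int(pos[1]), int(pos[2])
        if not (0 <= i < vol.shape[0] and 0 <= j < vol.shape[1] and 0 <= k < vol.shape[2]):
            break
        if block_max[i // block, j // block, k // block] < value_min:
            pos = pos + d * block                      # empty-space skipping
            continue
        v = float(vol[i, j, k])
        a = opacity_scale * max(v - value_min, 0.0)    # toy transfer function
        color += (1.0 - alpha) * a * v
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                               # early ray termination
            break
        pos = pos + d * step
    return color, alpha

# A mostly empty 128^3 volume with one dense region, plus the per-block maxima.
vol = np.zeros((128, 128, 128), dtype=np.float32)
vol[48:80, 48:80, 48:80] = np.random.rand(32, 32, 32)
B = 32
block_max = vol.reshape(128 // B, B, 128 // B, B, 128 // B, B).max(axis=(1, 3, 5))
print(march_ray(vol, block_max, origin=(0.0, 64.0, 64.0),
                direction=(1.0, 0.0, 0.0), block=B))
```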

5.
We present a scalable volume rendering technique that exploits lossy compression and low-cost commodity hardware to permit highly interactive exploration of time-varying scalar volume data. A palette-based decoding technique and an adaptive bit allocation scheme are developed to fully utilize the texturing capability of a commodity 3D graphics card. Using a single PC equipped with a modest amount of memory, a texture-capable graphics card and an inexpensive disk array, we are able to render hundreds of time steps of regularly gridded volume data (up to 42 million voxels each time step) at interactive rates. By clustering multiple PCs together, we demonstrate the data-size scalability of our method. The frame rates achieved make possible the interactive exploration of data in the temporal, spatial and transfer function domains. A comprehensive evaluation of our method based on experimental studies using data sets (up to 134 million voxels per time step) from turbulence flow simulations is also presented.
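Palette-based decoding and adaptive bit allocation are described only at a high level; the sketch below is one plausible reading in plain numpy (per-block palettes, with more bits given to higher-variance blocks). The variance-ranking heuristic and palette construction are assumptions, and in the real system the palette lookup happens in the graphics card's texture hardware rather than in Python.

```python
import numpy as np

def encode_time_step(frame, block=16, min_bits=2, max_bits=8):
    """Quantize each block against its own palette; blocks with more variance
    are given more bits (adaptive bit allocation)."""
    coords, stds = [], []
    nz, ny, nx = frame.shape
    for z in range(0, nz, block):
        for y in range(0, ny, block):
            for x in range(0, nx, block):
                coords.append((z, y, x))
                stds.append(frame[z:z+block, y:y+block, x:x+block].std())
    ranks = np.argsort(np.argsort(stds)) / max(len(stds) - 1, 1)   # 0 = flattest block
    bits = (min_bits + np.round(ranks * (max_bits - min_bits))).astype(int)

    encoded = {}
    for (z, y, x), b in zip(coords, bits):
        blk = frame[z:z+block, y:y+block, x:x+block]
        lo, hi = float(blk.min()), float(blk.max())
        levels = 2 ** int(b)
        idx = np.round((blk - lo) / (hi - lo + 1e-12) * (levels - 1)).astype(np.uint8)
        palette = lo + np.arange(levels, dtype=np.float32) / (levels - 1) * (hi - lo)
        encoded[(z, y, x)] = (palette, idx)
    return encoded

def decode_block(palette, idx):
    """Decoding is a single palette (lookup-table) indexing operation."""
    return palette[idx]

frame = np.random.rand(64, 64, 64).astype(np.float32)
enc = encode_time_step(frame)
palette, idx = enc[(0, 0, 0)]
print(decode_block(palette, idx).shape, len(palette), "palette entries")
```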

6.
Many phenomena in nature and engineering happen simultaneously on rather diverse spatial and temporal scales. In other words, they exhibit a multi-scale character. A special numerical multilevel technique associated with a particular hierarchical data structure is adaptive mesh refinement (AMR). This scheme achieves locally very high spatial and temporal resolutions. Due to its popularity, many scientists are in need of interactive visualization tools for AMR data. In this article, we present a 3D texture-based volume-rendering algorithm for AMR data that directly utilizes the hierarchical structure. This achieves fast rendering performance even for high-resolution data sets. To avoid rendering regions that are covered by grids of different resolution levels more than once, we propose a space partitioning scheme to decompose the volume into axis-aligned regions of equal-sized cells. Furthermore, the problems of interpolation artifacts, opacity corrections, and texture memory limitations are addressed.
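To make the space-partitioning idea concrete, here is a small numpy sketch that splits the coarse AMR level into axis-aligned boxes of cells not covered by any finer grid, so each region is rendered exactly once. The greedy box-growing strategy is an illustrative assumption; the paper's actual decomposition, interpolation, and opacity-correction details are not shown.

```python
import numpy as np

def uncovered_boxes(coarse_shape, fine_boxes):
    """Decompose the coarse AMR level into axis-aligned boxes of cells that are
    NOT covered by a finer grid, so no region is rendered more than once."""
    covered = np.zeros(coarse_shape, dtype=bool)
    for (z0, y0, x0), (z1, y1, x1) in fine_boxes:      # fine grids, in coarse-cell indices
        covered[z0:z1, y0:y1, x0:x1] = True

    assigned = covered.copy()
    boxes = []
    for z, y, x in zip(*np.nonzero(~assigned)):
        if assigned[z, y, x]:
            continue
        x1 = x                                         # grow the box greedily along x, y, z
        while x1 + 1 < coarse_shape[2] and not assigned[z, y, x1 + 1]:
            x1 += 1
        y1 = y
        while y1 + 1 < coarse_shape[1] and not assigned[z, y1 + 1, x:x1 + 1].any():
            y1 += 1
        z1 = z
        while z1 + 1 < coarse_shape[0] and not assigned[z1 + 1, y:y1 + 1, x:x1 + 1].any():
            z1 += 1
        assigned[z:z1 + 1, y:y1 + 1, x:x1 + 1] = True
        boxes.append(((int(z), int(y), int(x)), (int(z1) + 1, int(y1) + 1, int(x1) + 1)))
    return boxes

# A 16^3 coarse grid with two refined patches; the returned boxes are rendered at
# the coarse resolution, the two patches at their own finer resolution.
boxes = uncovered_boxes((16, 16, 16),
                        [((0, 0, 0), (8, 8, 8)), ((8, 8, 8), (16, 16, 16))])
print(len(boxes), "coarse-level boxes outside the refined regions")
```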

7.
Histology is the study of the structure of biological tissue using microscopy techniques. As digital imaging technology advances, high resolution microscopy of large tissue volumes is becoming feasible; however, new interactive tools are needed to explore and analyze the enormous datasets. In this paper, we present a visualization framework that specifically targets interactive examination of arbitrarily large image stacks. Our framework is built upon two core techniques: display-aware processing and GPU-accelerated texture compression. With display-aware processing, only the currently visible image tiles are fetched and aligned on-the-fly, reducing memory bandwidth and minimizing the need for time-consuming global pre-processing. Our novel texture compression scheme for GPUs is tailored for quick browsing of image stacks. We evaluate the usability of our viewer for two histology applications: digital pathology and visualization of neural structure at nanoscale resolution in serial electron micrographs.
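Display-aware processing boils down to mapping the current viewport to the set of image tiles that are actually visible and fetching only those. The sketch below illustrates that with a hypothetical `load_tile` reader and an LRU cache; the tile size, the pyramid-level convention, and the reader interface are assumptions, and the GPU texture-compression part of the paper is not shown.

```python
from functools import lru_cache

TILE = 512  # tile edge length in pixels (assumed)

@lru_cache(maxsize=256)
def load_tile(slice_index, level, ty, tx):
    """Hypothetical tile reader (disk or server). The cache means that panning
    or zooming only fetches tiles that have not been seen yet."""
    return f"pixels of slice {slice_index}, level {level}, tile ({ty}, {tx})"

def visible_tiles(viewport, level):
    """Tile indices intersecting a viewport (x, y, width, height) given in
    full-resolution coordinates, at a pyramid level that halves resolution per step."""
    x0, y0, w, h = viewport
    span = TILE * (2 ** level)               # one tile covers this many full-res pixels
    tx0, ty0 = int(x0 // span), int(y0 // span)
    tx1, ty1 = int((x0 + w) // span), int((y0 + h) // span)
    return [(ty, tx) for ty in range(ty0, ty1 + 1) for tx in range(tx0, tx1 + 1)]

# Fetch only what the display needs for the current slice, position and zoom level.
for ty, tx in visible_tiles(viewport=(40_000, 12_000, 1920, 1080), level=2):
    load_tile(slice_index=137, level=2, ty=ty, tx=tx)
print(load_tile.cache_info())
```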

8.
Interactive visualization of volume models on standard mobile devices is a challenging problem of increasing interest in new application fields such as telemedicine. The complexity of volume models in medical applications is continuously increasing, widening the gap between the available models and the rendering capabilities of low-end mobile clients. New, efficient rendering algorithms and interaction paradigms are required for these small platforms. In this paper, we propose a transfer function-aware compression and interaction scheme for client-server architectures with visualization on standard mobile devices. The scheme is block-based, supporting adaptive ray-casting in the client. Our two-level ray-casting allows focusing on small details in targeted regions while keeping memory requirements in the client GPU bounded. Our approach includes a transfer function-aware compression scheme based on a local wavelet transform, together with a bricking scheme that supports interactive inspection and levels of detail in the mobile client. We also use a quantization technique that takes into account a perceptual metric of the visual error. Our results show that we achieve full interaction at high compression rates, with transmitted model sizes on the order of a single photographic image.
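Of the several components (local wavelet transform, bricking, two-level ray casting, perceptual quantization), the transfer-function awareness is the easiest to sketch: a block whose value range the transfer function makes nearly invisible can be quantized much more coarsely. The bit-allocation rule and the Gaussian transfer function below are illustrative assumptions only, not the paper's scheme.

```python
import numpy as np

def tf_opacity(values, tf_alpha):
    """Look up opacity in a 256-entry transfer-function table (values in [0, 1])."""
    idx = np.clip((values * 255).astype(int), 0, 255)
    return tf_alpha[idx]

def tf_aware_quantize(block, tf_alpha, min_bits=2, max_bits=8):
    """Give more quantization bits to blocks the transfer function makes visible,
    fewer to blocks that are nearly transparent anyway."""
    visibility = float(tf_opacity(block, tf_alpha).max())     # 0 = invisible, 1 = opaque
    bits = int(round(min_bits + visibility * (max_bits - min_bits)))
    lo, hi = float(block.min()), float(block.max())
    levels = 2 ** bits
    q = np.round((block - lo) / (hi - lo + 1e-12) * (levels - 1)).astype(np.uint8)
    return lo, hi, bits, q

def dequantize(lo, hi, bits, q):
    return lo + q.astype(np.float32) / (2 ** bits - 1) * (hi - lo)

# A transfer function that only shows values around 0.7 (e.g. one tissue of interest).
tf_alpha = np.exp(-((np.linspace(0.0, 1.0, 256) - 0.7) ** 2) / 0.002).astype(np.float32)
block = np.random.rand(32, 32, 32).astype(np.float32)
lo, hi, bits, q = tf_aware_quantize(block, tf_alpha)
print(bits, "bits allocated to this block")
```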

9.
For large volume visualization, an image-based quality metric is difficult to incorporate into level-of-detail selection and rendering without sacrificing interactivity, because it is usually time-consuming to update view-dependent information and to respond to transfer function changes. In this paper, we introduce an image-based level-of-detail selection algorithm for interactive visualization of large volumetric data. Our quality metric is designed around an efficient way to evaluate the contribution of multiresolution data blocks to the final image. To ensure real-time update of the quality metric and interactive level-of-detail decisions, we propose a summary table scheme that responds to runtime transfer function changes and a GPU-based solution for visibility estimation. Experimental results on large scientific and medical data sets demonstrate the effectiveness and efficiency of our algorithm.
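The summary-table idea can be illustrated with per-block value histograms: they are computed once, and every transfer-function change only requires re-weighting the histograms to estimate each block's contribution before choosing its level of detail. The importance measure and the two-level block budget below are simplifications, and the paper's GPU-based, view-dependent visibility estimation is not included.

```python
import numpy as np

def build_summary_table(vol, block=32, bins=64):
    """Precompute one value histogram per block; reused for every
    transfer-function change without touching the raw data again."""
    nz, ny, nx = (s // block for s in vol.shape)
    table = np.zeros((nz, ny, nx, bins), dtype=np.int64)
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                b = vol[z*block:(z+1)*block, y*block:(y+1)*block, x*block:(x+1)*block]
                table[z, y, x], _ = np.histogram(b, bins=bins, range=(0.0, 1.0))
    return table

def select_lod(table, tf_alpha, budget_blocks):
    """Importance = how many voxels the transfer function makes visible in a block;
    the most important blocks get full resolution, the rest a coarser level."""
    bins = table.shape[-1]
    alpha_per_bin = tf_alpha[np.linspace(0, len(tf_alpha) - 1, bins).astype(int)]
    importance = (table * alpha_per_bin).sum(axis=-1)
    order = np.argsort(importance.ravel())[::-1]
    lod = np.ones(importance.size, dtype=int)          # 1 = coarse level
    lod[order[:budget_blocks]] = 0                      # 0 = full resolution
    return lod.reshape(importance.shape)

vol = np.random.rand(128, 128, 128).astype(np.float32)
table = build_summary_table(vol)
tf_alpha = np.linspace(0.0, 1.0, 256)                   # simple opacity ramp
print(select_lod(table, tf_alpha, budget_blocks=16))
```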

10.
Most popular methods in cloth rendering rely on volumetric data in order to model complex optical phenomena such as sub-surface scattering. These approaches are able to produce very realistic illumination results, but their volumetric representations are costly to compute and render, forfeiting any interactive feedback. In this paper, we introduce a method based on the Graphics Processing Unit (GPU) for voxelization and visualization, suitable for both interactive and offline rendering. Recent features in the OpenGL model, like the ability to dynamically address arbitrary buffers and allocate bindless textures, are combined into our pipeline to interactively voxelize millions of polygons into a set of large three-dimensional (3D) textures (>10⁹ elements), generating a volume with sub-voxel accuracy, which is suitable even for high-density woven cloth such as linen.
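The GPU voxelization pipeline itself depends on specific OpenGL features, but the underlying operation (scatter triangle coverage into a 3D grid) can be sketched on the CPU. The point-sampling voxelizer below is an illustrative stand-in only: the sample density, grid size, and the single test triangle are assumptions, and it has none of the sub-voxel accuracy or performance of the paper's method.

```python
import numpy as np

def voxelize(triangles, grid_shape, samples=128):
    """Mark every voxel hit by points sampled on each triangle. The barycentric
    sample spacing must stay below one voxel across the triangle, or holes appear."""
    grid = np.zeros(grid_shape, dtype=bool)
    u = np.linspace(0.0, 1.0, samples)
    uu, vv = np.meshgrid(u, u)
    keep = uu + vv <= 1.0                               # points inside the unit triangle
    bary = np.stack([uu[keep], vv[keep], 1.0 - uu[keep] - vv[keep]], axis=1)
    limit = np.array(grid_shape) - 1
    for a, b, c in triangles:                           # vertices already in voxel coordinates
        pts = bary @ np.stack([a, b, c])                # (n_samples, 3) points on the triangle
        idx = np.clip(pts.astype(int), 0, limit)
        grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

# One triangle scattered into a 64^3 grid (a cloth model would contribute millions).
tri = (np.array([4.0, 4.0, 4.0]), np.array([60.0, 8.0, 12.0]), np.array([30.0, 55.0, 40.0]))
grid = voxelize([tri], (64, 64, 64))
print(int(grid.sum()), "voxels set")
```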
