Similar Documents
20 similar documents found (search time: 62 ms)
1.
Particle‐based simulation techniques, like the discrete element method or molecular dynamics, are widely used in many research fields. In real‐time explorative visualization it is common to render the resulting data using opaque spherical glyphs with local lighting only. Due to massive overlaps, however, inner structures of the data are often occluded, rendering visual analysis impossible. Furthermore, local lighting alone is not sufficient, as several important features like complex shapes, holes, rifts or filaments cannot be perceived well. To address both problems we present a new technique that jointly supports transparency and ambient occlusion in a consistent illumination model. Our approach is based on the emission‐absorption model of volume rendering. We provide analytic solutions to the volume rendering integral for several density distributions within a spherical glyph. Compared to constant transparency, our approach preserves the three‐dimensional impression of the glyphs much better. We approximate ambient illumination with a fast hierarchical voxel cone‐tracing approach, which builds on a new real‐time voxelization of the particle data. Our implementation achieves interactive frame rates for millions of static or dynamic particles without any preprocessing. We illustrate the merits of our method on real‐world data sets, gaining several new insights.
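As a concrete illustration of the emission-absorption idea for a single glyph, the sketch below (not the authors' code) assumes the simplest case of a constant density inside the sphere: the opacity along a ray is 1 - exp(-sigma * l), where l is the chord length obtained from the ray-sphere intersection. All names and parameters are illustrative.

```python
import numpy as np

def sphere_chord_length(ray_o, ray_d, center, radius):
    """Length of the ray segment inside the sphere (0 if the ray misses)."""
    oc = ray_o - center
    b = np.dot(oc, ray_d)                # ray_d assumed normalized
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc <= 0.0:
        return 0.0
    sq = np.sqrt(disc)
    t0, t1 = max(-b - sq, 0.0), -b + sq  # clamp the entry point to the ray origin
    return max(t1 - t0, 0.0)

def glyph_alpha(ray_o, ray_d, center, radius, sigma):
    """Emission-absorption opacity for a homogeneous density sigma inside the glyph."""
    length = sphere_chord_length(ray_o, ray_d, center, radius)
    return 1.0 - np.exp(-sigma * length)

# Example: a ray through the glyph center gives the maximum opacity.
print(glyph_alpha(np.array([0.0, 0.0, -5.0]), np.array([0.0, 0.0, 1.0]),
                  np.array([0.0, 0.0, 0.0]), 1.0, sigma=1.5))
```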

2.
Molecular visualization is an important tool for analysing the results of biochemical simulations. With modern GPU ray casting approaches, it is only possible to render several million atoms interactively unless advanced acceleration methods are employed. Whole‐cell simulations consist of at least several billion atoms even for simplified cell models. However, the intracellular environment contains many instances of only a few different proteins, which can be exploited to fit the data into graphics memory: for each protein species, one model is stored and rendered once per instance. The proposed method exploits recent algorithmic advances for particle rendering and the repetitive nature of intracellular proteins to visualize dynamic results from mesoscopic simulations of cellular transport processes. We present two out‐of‐core optimizations for the interactive visualization of data sets composed of billions of atoms as well as details on the data preparation and the employed rendering techniques. Furthermore, we apply advanced shading methods to improve the image quality, including methods that enhance depth and shape perception as well as non‐photorealistic rendering methods. We also show that the method can be used to render scenes that are composed of triangulated instances, not only implicit surfaces.
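A minimal sketch of the instancing idea described above, with hypothetical species names, placeholder coordinates and identity transforms: one atom template is stored per protein species, and world-space positions are reconstructed per instance on demand, which is what keeps billions of atoms within graphics memory limits.

```python
import numpy as np

# One atom template per protein species (stored once), plus a lightweight
# per-instance record: species id, rotation and translation.
templates = {
    "ribosome": np.random.rand(5000, 3).astype(np.float32),   # placeholder coordinates
    "kinase":   np.random.rand(800, 3).astype(np.float32),
}
instances = [("ribosome", np.eye(3, dtype=np.float32), np.array([10.0, 0.0, 0.0], np.float32)),
             ("kinase",   np.eye(3, dtype=np.float32), np.array([0.0, 5.0, 2.0], np.float32))]

def instance_atoms(species, rotation, translation):
    """Reconstruct world-space atom positions of one instance on demand."""
    return templates[species] @ rotation.T + translation

world = [instance_atoms(s, r, t) for s, r, t in instances]
print("atoms reconstructed on the fly:", sum(len(a) for a in world))
```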

3.
Image space occlusion culling is a powerful approach to reduce the rendering load of large polygonal models. However, occlusion culling is not free; it trades overhead costs against the rendering costs of the possibly occluded geometry. Occlusion queries based on image space occlusion culling are now supported on modern graphics hardware, but these queries consume significant fill-rate bandwidth and incur latency costs. In this paper, we propose new techniques to reduce redundant occlusion queries. Our approach uses several "Occupancy Maps" to organize scene traversal. The respective information is accumulated efficiently by hardware‐supported asynchronous occlusion queries. To avoid redundant requests, we arrange these multiple occlusion queries according to the information in the Occupancy Maps. The presented technique is conservative and benefits from a partial depth order of the geometry.
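The sketch below illustrates the general idea of gating occlusion queries with an occupancy map on the CPU side; it is a simplified stand-in for the hardware-assisted pipeline described in the abstract, and the tile grid, node records and query callback are assumptions.

```python
import numpy as np

class OccupancyMap:
    """Coarse screen-space grid marking tiles already covered by rendered geometry."""
    def __init__(self, tiles_x, tiles_y):
        self.covered = np.zeros((tiles_y, tiles_x), dtype=bool)

    def fully_covered(self, rect):                      # rect = (x0, y0, x1, y1) in tiles
        x0, y0, x1, y1 = rect
        return bool(self.covered[y0:y1, x0:x1].all())

    def mark(self, rect):
        x0, y0, x1, y1 = rect
        self.covered[y0:y1, x0:x1] = True

def traverse(nodes, occupancy, issue_query):
    """Front-to-back traversal that skips queries for nodes whose tiles are already occupied."""
    drawn = []
    for name, rect in nodes:                            # assumed sorted front to back
        if occupancy.fully_covered(rect):
            continue                                    # redundant occlusion query avoided
        if issue_query(name):                           # asynchronous hardware query in the real system
            drawn.append(name)
            occupancy.mark(rect)
    return drawn

omap = OccupancyMap(8, 8)
print(traverse([("wall", (0, 0, 8, 8)), ("chair", (2, 2, 4, 4))], omap, lambda n: True))
```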

4.
Modern supercomputers enable increasingly large N‐body simulations using unstructured point data. The structures implied by these points can be reconstructed implicitly. Direct volume rendering of radial basis function (RBF) kernels in domain space offers flexible classification and robust feature reconstruction, but achieving performant RBF volume rendering remains a challenge for existing methods on both CPUs and accelerators. In this paper, we present a fast CPU method for direct volume rendering of particle data with RBF kernels. We propose a novel two‐pass algorithm: first sampling the RBF field using coherent bounding volume hierarchy traversal, then integrating the samples along ray segments. Our approach performs at interactive rates for a range of data sets from molecular dynamics and astrophysics with up to 82 million particles. It does not rely on level of detail or subsampling, and it offers better reconstruction quality than structured volume rendering of the same data, with comparable performance and no additional preprocessing or memory footprint beyond the BVH. Lastly, our technique enables multi‐field, multi‐material classification of particle data, providing better insight and analysis.
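A simplified sketch of the two-pass idea, assuming Gaussian kernels and a brute-force field evaluation in place of the coherent BVH traversal: pass one samples the RBF field at positions along the ray, pass two integrates the samples with front-to-back emission-absorption. The kernel radius, weights and transfer mapping are placeholders.

```python
import numpy as np

def rbf_field(points, weights, radius, x):
    """Sum of Gaussian RBF kernels centred at the particles, evaluated at x."""
    d2 = np.sum((points - x) ** 2, axis=1)
    return np.sum(weights * np.exp(-d2 / (radius * radius)))

def integrate_ray(points, weights, radius, origin, direction, t0, t1, dt, sigma_scale=1.0):
    """Pass 2: front-to-back emission-absorption over samples of the RBF field."""
    color, alpha = 0.0, 0.0
    for t in np.arange(t0, t1, dt):
        density = rbf_field(points, weights, radius, origin + t * direction)
        a = 1.0 - np.exp(-sigma_scale * density * dt)     # opacity of this segment
        color += (1.0 - alpha) * a * density               # density reused as a grey emission term
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                                    # early ray termination
            break
    return color, alpha

pts = np.random.rand(1000, 3)
print(integrate_ray(pts, np.ones(1000), 0.05, np.array([0.5, 0.5, -1.0]),
                    np.array([0.0, 0.0, 1.0]), 0.0, 2.0, 0.01))
```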

5.
Common practice in brain research and brain surgery involves the multi‐modal acquisition of brain anatomy and brain activation data. These highly complex three‐dimensional data have to be displayed simultaneously in order to convey spatial relationships. Unique challenges in information and interaction design have to be solved in order to keep the visualization both sufficiently complete and uncluttered. The visualization method presented in this paper addresses these issues by using a hybrid combination of polygonal rendering of brain structures and direct volume rendering of activation data. Advanced rendering techniques, including illustrative display styles and ambient occlusion calculations, enhance the clarity of the visual output. The presented rendering pipeline produces real‐time frame rates and offers a high degree of configurability. Newly designed interaction and measurement tools are provided, which enable the user both to explore the data at large and to inspect specific features closely. We demonstrate the system in the context of a cognitive neurosciences dataset. An initial informal evaluation shows that our visualization method is deemed useful for clinical research.

6.
We propose an efficient approach for interactive visualization of massive models with CPU ray tracing. A voxel‐based hierarchical level‐of‐detail (LOD) framework is employed to minimize rendering time and required system memory. In a pre‐processing phase, a compressed out‐of‐core data structure is constructed, which contains the original primitives of the model and the LOD voxels, organized into a kd‐tree. During rendering, data is loaded asynchronously to ensure smooth inspection of the model regardless of the available I/O bandwidth. With our technique, we are able to explore data sets consisting of hundreds of millions of triangles in real time on a desktop PC with a quad‐core CPU.
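A small sketch of the kind of screen-space error test such a hierarchical LOD scheme typically relies on (the exact criterion used in the paper is not given here): refine a node only while its LOD voxel still projects to more than a pixel threshold. Parameter names and the threshold are illustrative.

```python
import math

def refine_further(node_extent, node_distance, fov_radians, image_height, pixel_threshold=1.0):
    """Descend the LOD hierarchy only while a voxel still projects to more than
    `pixel_threshold` pixels on screen."""
    projected = node_extent / (2.0 * node_distance * math.tan(fov_radians / 2.0)) * image_height
    return projected > pixel_threshold

# A 1-unit LOD voxel seen from 500 units away with a 60-degree vertical FOV at 1080 px height.
print(refine_further(1.0, 500.0, math.radians(60.0), 1080))
```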

7.
In this paper, we present a novel technique that simulates directional light scattering for more realistic interactive visualization of volume data. Our method extends the recent directional occlusion shading model by enabling light source positioning with practically no performance penalty. Light transport is approximated using a tilted cone‐shaped function which leaves elliptic footprints in the opacity buffer during slice‐based volume rendering. We perform an incremental blurring operation on the opacity buffer for each slice in front‐to‐back order. This buffer is then used to define the degree of occlusion for the subsequent slice. Our method is capable of generating high‐quality soft shadowing effects, allows interactive modification of all illumination and rendering parameters, and requires no pre‐computation.
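The sketch below mimics the described opacity-buffer scheme on NumPy arrays (SciPy assumed available): slices are composited front to back, and after each slice the opacity buffer is blurred and shifted towards the light to approximate the tilted cone footprint. The offset, kernel size and ambient term are illustrative, not the paper's parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter, shift

def composite_with_directional_occlusion(slices, light_offset=(2.0, 0.0), blur_size=5, ambient=0.3):
    """Front-to-back slice compositing; occlusion is read from an incrementally blurred opacity buffer."""
    h, w = slices[0].shape
    occlusion_buf = np.zeros((h, w))      # accumulated, blurred opacity towards the light
    color = np.zeros((h, w))
    alpha = np.zeros((h, w))
    for slice_opacity in slices:          # slices ordered front to back
        lighting = ambient + (1.0 - ambient) * (1.0 - occlusion_buf)
        color += (1.0 - alpha) * slice_opacity * lighting
        alpha += (1.0 - alpha) * slice_opacity
        # Incremental blur with an offset approximates the tilted cone footprint.
        occlusion_buf = np.clip(occlusion_buf + slice_opacity, 0.0, 1.0)
        occlusion_buf = shift(uniform_filter(occlusion_buf, blur_size), light_offset, order=1)
    return color, alpha

vol = [np.clip(np.random.rand(64, 64) - 0.7, 0.0, 1.0) for _ in range(32)]
print(composite_with_directional_occlusion(vol)[1].max())
```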

8.
Higher‐order finite element methods have emerged as an important discretization scheme for simulation. They are increasingly used in contemporary numerical solvers, generating a new class of data that must be analyzed by scientists and engineers. Currently available visualization tools for this type of data are either batch oriented or limited to certain cell types and polynomial degrees. Other approaches approximate higher‐order data by resampling, resulting in trade‐offs between interactivity and quality. To overcome these limitations, we have developed a distributed visualization system which allows for interactive exploration of non‐conforming unstructured grids, resulting from space‐time discontinuous Galerkin simulations, in which each cell has its own higher‐order polynomial solution. Our system employs GPU‐based raycasting for direct volume rendering of complex grids which feature non‐convex, curvilinear cells with varying polynomial degree. Frequency‐based adaptive sampling accounts for the high variations along rays. For distribution across a GPU cluster, the initial object‐space partitioning is determined by cell characteristics like the polynomial degree and is adapted at runtime by a load balancing mechanism. The performance and utility of our system are evaluated for different aeroacoustic simulations involving the propagation of shock fronts.
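As a rough stand-in for the frequency-based adaptive sampling, the sketch below refines the step size along a ray wherever consecutive samples of a (here analytic) field differ strongly; the actual criterion, thresholds and field evaluation in the paper differ.

```python
import math

def adaptive_samples(field, t0, t1, base_dt, max_delta=0.05):
    """Sample a scalar field along a ray, halving the step wherever consecutive samples
    differ strongly (a simple stand-in for a frequency-based refinement criterion)."""
    t, prev = t0, field(t0)
    samples, dt = [(t, prev)], base_dt
    while t < t1:
        value = field(t + dt)
        if abs(value - prev) > max_delta and dt > base_dt / 8.0:
            dt *= 0.5                      # refine where the solution varies rapidly
            continue
        t += dt
        samples.append((t, value))
        prev, dt = value, base_dt          # reset to the base step after an accepted sample
    return samples

print(len(adaptive_samples(lambda t: math.sin(8.0 * t), 0.0, 1.0, 0.05)))
```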

9.
Occlusion culling and level-of-detail rendering have become two powerful tools for accelerating the handling of very large models in real-time visualization applications. We present a framework that combines both techniques to improve rendering times. Classical occlusion culling algorithms compute potentially visible sets (PVS), which are supersets of the sets of visible polygons. The novelty of our approach is to estimate the degree of visibility of each object of the PVS using synthesized coarse occluders. This allows the objects of each PVS to be arranged into several Hardly-Visible Sets (HVS) with similar occlusion degrees. Depending on image accuracy and frame rate requirements, the HVS provide a way to avoid sending to the graphics pipeline those objects whose pixel contribution is low due to partial occlusion. The image error can be bounded by the user at navigation time. Moreover, since the HVS offer a tighter estimate of the pixel contribution of each scene object, they can also guide the selection of the level of detail at which objects are rendered. In this paper, we describe the new framework, provide details of its implementation using a visibility octree as the occlusion culling data structure, and show experimental results on image quality.
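A small sketch of how a PVS could be partitioned into Hardly-Visible Sets by occlusion degree and how the estimated pixel contribution could drive LOD selection; the bin counts, thresholds and object names are invented for illustration and are not taken from the paper.

```python
def build_hvs(pvs, num_sets=4):
    """Partition a potentially visible set into Hardly-Visible Sets by occlusion degree.
    Each PVS entry is (object_id, visible_fraction) with visible_fraction in [0, 1]."""
    sets = [[] for _ in range(num_sets)]
    for obj, visible_fraction in pvs:
        index = min(int(visible_fraction * num_sets), num_sets - 1)
        sets[index].append(obj)
    return sets          # sets[0]: almost fully occluded ... sets[-1]: mostly visible

def select_lod(visible_fraction, projected_pixels, lod_count=4):
    """Pick a coarser level of detail when few pixels actually contribute."""
    effective_pixels = visible_fraction * projected_pixels
    if effective_pixels < 32:
        return lod_count - 1                              # coarsest level
    if effective_pixels < 256:
        return 2
    if effective_pixels < 2048:
        return 1
    return 0                                              # full detail

print(build_hvs([("car", 0.05), ("tree", 0.7), ("lamp", 0.4)]))
print(select_lod(0.1, 5000))
```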

10.
The selection of an appropriate global transfer function is essential for visualizing time‐varying simulation data. This is especially challenging when the global data range is not known in advance, as is often the case in remote and in‐situ visualization settings. Since the data range may vary dramatically as the simulation progresses, volume rendering using local transfer functions may not be coherent for all time steps. We present an exploratory technique that enables coherent classification of time‐varying volume data. Unlike previous approaches, which require pre‐processing of all time steps, our approach lets the user explore the transfer function space without accessing the original 3D data. This is useful for interactive visualization, and absolutely essential for in‐situ visualization, where the entire simulation data range is not known in advance. Our approach generates a compact representation of each time step at rendering time in the form of ray attenuation functions, which are used for subsequent operations on the opacity and color mappings. The presented approach offers interactive exploration of time‐varying simulation data while alleviating the cost associated with reloading and caching large data sets.
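The sketch below uses a per-ray value histogram as a simplified stand-in for the paper's ray attenuation functions: once a compact per-ray representation exists, a new opacity transfer function can be applied without touching the original 3D data. Ordering along the ray, and hence exact color compositing, is ignored in this toy version.

```python
import numpy as np

def ray_value_histogram(samples_along_ray, bins=64, value_range=(0.0, 1.0)):
    """Compact per-ray proxy: how often each data value occurs along the ray."""
    hist, _ = np.histogram(samples_along_ray, bins=bins, range=value_range)
    return hist

def reclassify(hist, opacity_tf, dt=0.01):
    """Re-apply a new opacity transfer function to the stored per-ray representation
    without accessing the original volume data."""
    bin_centers = (np.arange(len(hist)) + 0.5) / len(hist)
    optical_depth = dt * np.sum(hist * opacity_tf(bin_centers))
    return 1.0 - np.exp(-optical_depth)          # resulting pixel opacity

samples = np.random.rand(500)                    # placeholder samples along one ray
hist = ray_value_histogram(samples)
print(reclassify(hist, lambda v: np.where(v > 0.8, 5.0, 0.0)))
```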

11.
This paper explores the benefits that occlusion culling strategies can achieve for stereoscopic visualization. Graphics processing units (GPUs) are improving their computational power and functionalities. On the other hand, models are also increasing in size and users demand higher image quality. Occlusion culling provides significant frame rate speed‐ups in densely occluded scenarios. This paper shows the limitations imposed by compact, not densely occluded scenarios and the gains that can be achieved when rendering stereo images using occlusion culling. The experiments carried out test combinations of occlusion stereo coherence and occlusion frame coherence. The methods also take advantage of an efficient depth sorting strategy and aggressive occlusion frame coherence, whose image artefacts have been found to be negligible. Results clearly show that eye‐independent frame coherence offers more benefits than mixing frame and stereo coherence. As in monoscopic occlusion culling, using simplified geometric models instead of bounding boxes when issuing occlusion queries significantly increases the number of objects identified as occluded. The algorithm presented can be easily implemented and provides a significant boost in performance. Copyright © 2007 John Wiley & Sons, Ltd.

12.
When visualizing large and complex data, often only the combination of geometric and volumetric data sets can describe the phenomenon under observation. When semi‐transparent geometry is present, correct rendering results require sorting of transparent structures. Additional complexity is introduced as the contributions from volumetric data have to be partitioned according to the geometric objects in the scene. The A‐buffer, an enhanced framebuffer with additional per‐pixel information, has previously been introduced to deal with the complexity caused by transparent objects. In this paper, we present an optimized rendering algorithm for hybrid volume‐geometry data based on the A‐buffer concept. We propose two novel components for modern GPUs that tailor memory utilization to the depth complexity of individual pixels. The proposed components are compatible with modern A‐buffer implementations and yield performance gains of up to eight times compared to existing approaches through reduced allocation and reuse of fast cache memory. We demonstrate the applicability of our approach and its performance with several examples from molecular biology, space weather and medical visualization containing both volumetric data and geometric structures.
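A minimal sketch of the A-buffer resolve step for one pixel: fragments collected in arbitrary order are sorted by depth and blended front to back, and per-pixel storage grows only with that pixel's depth complexity. The GPU memory-management components proposed in the paper are not modeled; fragment values are placeholders.

```python
def resolve_pixel(fragments, background=0.0):
    """Resolve one A-buffer pixel: fragments are (depth, color, alpha) tuples collected
    in arbitrary order during rendering."""
    color, transmittance = 0.0, 1.0
    for depth, frag_color, frag_alpha in sorted(fragments, key=lambda f: f[0]):
        color += transmittance * frag_alpha * frag_color   # front-to-back blending
        transmittance *= (1.0 - frag_alpha)
        if transmittance < 0.01:                            # early termination
            break
    return color + transmittance * background, 1.0 - transmittance

# Two transparent geometry fragments and one volume-segment contribution for a pixel.
print(resolve_pixel([(2.0, 0.8, 0.5), (1.0, 0.2, 0.3), (1.5, 0.6, 0.4)]))
```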

13.
The BqR‐Tree, the data structure presented in this paper, is an improved R‐Tree based on a quadtree spatial partition that improves rendering speed over standard R‐Trees when view culling is implemented, especially in urban scenes. The city is split by means of a spatial quadtree partition, and the block is adopted as the basic urban unit. One advantage of blocks is that they can be easily identified in any urban environment, regardless of the origin and structure of the input data. The aim of the structure is to accelerate the visualization of complex scenes containing not only static but also dynamic elements. The usefulness of the structure has been tested with loosely structured data, which makes it applicable to almost all city data. The tests show that when using the BqR‐Tree to perform walkthroughs and flights, rendering times improve considerably compared to the data structures that have yielded the best results to date, with average improvements of around 30%.
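The sketch below shows the underlying idea of a quadtree over city blocks with a simple 2D overlap test standing in for view-frustum culling; it is not the published BqR-Tree implementation, and the block records, bounds and capacity are invented.

```python
class QuadNode:
    """Quadtree node over city blocks; leaves keep the blocks whose footprint falls inside."""
    def __init__(self, bounds, blocks, max_blocks=8):
        self.bounds, self.children, self.blocks = bounds, [], []
        x0, y0, x1, y1 = bounds
        if len(blocks) <= max_blocks:
            self.blocks = blocks
            return
        mx, my = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        for cb in [(x0, y0, mx, my), (mx, y0, x1, my), (x0, my, mx, y1), (mx, my, x1, y1)]:
            inside = [b for b in blocks if cb[0] <= b["x"] < cb[2] and cb[1] <= b["y"] < cb[3]]
            self.children.append(QuadNode(cb, inside, max_blocks))

    def visible_blocks(self, view_rect):
        """Collect blocks of nodes overlapping the 2D view region (stand-in for frustum culling)."""
        x0, y0, x1, y1 = self.bounds
        vx0, vy0, vx1, vy1 = view_rect
        if x1 < vx0 or vx1 < x0 or y1 < vy0 or vy1 < y0:
            return []
        found = list(self.blocks)
        for child in self.children:
            found += child.visible_blocks(view_rect)
        return found

blocks = [{"x": i % 10, "y": i // 10, "id": i} for i in range(100)]   # a 10 x 10 grid of blocks
tree = QuadNode((0, 0, 10, 10), blocks)
print(len(tree.visible_blocks((0, 0, 3, 3))))
```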

14.
In this survey we review, classify and compare existing approaches for real‐time crowd rendering. We first give an overview of character animation techniques, as they are highly tied to crowd rendering performance, and then analyze the state of the art in crowd rendering. We discuss different representations for level‐of‐detail (LoD) rendering of animated characters, including polygon‐based, point‐based, and image‐based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo‐instancing, palette skinning, and dynamic key‐pose caching, which benefit from current graphics hardware. We also address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.

15.
In this paper we present several techniques to interactively explore representations of 2D vector fields. Through a set of simple hand postures used on large, touch‐sensitive displays, our approach allows individuals to custom‐design glyphs (arrows, lines, etc.) that best reveal patterns of the underlying dataset. Interactive exploration of vector fields is facilitated through freedom of glyph placement, glyph density control, and animation. The custom glyphs can be applied individually to probe specific areas of the data, but can also be applied in groups to explore larger regions of a vector field. Re‐positionable sources, from which glyphs animated according to the local vector field continue to emerge, are used to examine the vector field dynamically. The combination of these techniques results in an engaging visualization with which the user can rapidly explore and analyze various types of 2D vector fields, using a virtually unlimited number of custom‐designed glyphs.

16.
Many 3D scenes (e.g. generated from CAD data) are composed of a multitude of objects that are nested in each other. A showroom, for instance, may contain multiple cars, and every car has a gearbox with many gearwheels located inside. Because the objects occlude each other, only few are visible from outside. We present a new technique, Spherical Visibility Sampling (SVS), for real‐time 3D rendering of such scenes, which may be highly complex. SVS exploits this occlusion and annotates hierarchically structured objects with directional visibility information in a preprocessing step. For different directions, the directional visibility encodes which objects of a scene's region are visible from outside the region's enclosing bounding sphere. Since there is no need to store a separate view space subdivision as in most techniques based on preprocessed visibility, a small memory footprint is achieved. Using the directional visibility information for an interactive walkthrough, the potentially visible objects can be retrieved very efficiently without the need for further visibility tests. Our evaluation shows that SVS allows complex 3D scenes to be preprocessed quickly and visualized in real time (e.g., a Power Plant model and five animated Boeing 777 models with billions of triangles). Because SVS does not require hardware support for occlusion culling during rendering, it is even applicable to rendering large scenes on mobile devices.
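A toy sketch of the directional visibility lookup: for one region, the preprocessing step would store, per sampled direction, the objects visible from outside the region's bounding sphere, and at runtime the view direction is mapped to the nearest sampled direction. The six axis-aligned directions and the object ids are placeholders, not the paper's sampling scheme.

```python
import numpy as np

def direction_bin(view_dir, bin_dirs):
    """Map a viewing direction to the closest sampled direction."""
    view_dir = view_dir / np.linalg.norm(view_dir)
    return int(np.argmax(bin_dirs @ view_dir))

# Preprocessed directional visibility for one region: per sampled direction,
# the objects visible from outside the region's bounding sphere.
bin_dirs = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]], float)
directional_visibility = {0: {"car_body", "wheel_fl"}, 1: {"car_body"},
                          2: {"car_body", "roof"}, 3: {"car_body", "chassis"},
                          4: {"car_body", "roof"}, 5: {"chassis"}}

def potentially_visible(view_dir):
    """Runtime lookup: no further visibility tests are required for this region."""
    return directional_visibility[direction_bin(view_dir, bin_dirs)]

print(potentially_visible(np.array([0.2, 0.9, 0.1])))
```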

17.
In this report, we review the current state of the art of web‐based visualization applications. Recently, an increasing number of web‐based visualization applications have emerged. This is because new technologies offered by modern browsers have greatly increased the capabilities for visualization on the web. We first review the technical aspects enabling this development. This includes not only improvements for local rendering like WebGL and HTML5, but also infrastructures like grid or cloud computing platforms. Another important factor is the transfer of data between the server and the client. We therefore also discuss advances in this field, for example methods to reduce bandwidth requirements such as compression, as well as other optimizations such as progressive rendering and streaming. After establishing these technical foundations, we review existing web‐based visualization applications and prototypes from various application domains. Furthermore, we propose a classification of these web‐based applications based on the technologies and algorithms they employ. Finally, we also discuss promising application areas that would benefit from web‐based visualization and assess their feasibility based on the existing approaches.

18.
We present a novel line drawing algorithm that illustrates surfaces in real time to convey their shape. We use line integral convolution (LIC) and employ ambient occlusion for illustrative surface rendering. Furthermore, our method depicts salient regions based on the illumination gradient. Our method works on animated surfaces in a frame‐coherent manner and requires no preprocessing step, so it also yields an illustrative representation of time‐dependent surfaces. In this paper, the method is used to highlight the structure of molecular surfaces and to illustrate important surface features like cavities, channels, and pockets. The benefit of our method was evaluated with domain experts. We also demonstrate the applicability of our method to medical visualization.
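For reference, a minimal 2D line integral convolution, which averages a noise texture along short streamlines of a vector field; the paper applies LIC on surfaces in image space together with ambient occlusion, which this toy version does not attempt. Streamline length, step size and the vortex test field are arbitrary choices.

```python
import numpy as np

def lic(noise, vx, vy, length=10, step=0.5):
    """Minimal line integral convolution: average the noise texture along short
    streamlines of the (vx, vy) field traced through every pixel."""
    h, w = noise.shape
    out = np.zeros_like(noise)
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for sign in (1.0, -1.0):                     # trace forward and backward
                px, py = float(x), float(y)
                for _ in range(length):
                    ix, iy = int(px) % w, int(py) % h    # wrap around at the borders
                    total += noise[iy, ix]
                    count += 1
                    norm = np.hypot(vx[iy, ix], vy[iy, ix]) + 1e-8
                    px += sign * step * vx[iy, ix] / norm
                    py += sign * step * vy[iy, ix] / norm
            out[y, x] = total / count
    return out

h = w = 64
ys, xs = np.mgrid[0:h, 0:w]
field_x, field_y = -(ys - h / 2.0), (xs - w / 2.0)       # a simple vortex field
print(lic(np.random.rand(h, w), field_x, field_y).shape)
```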

19.
We present a new algorithm for efficient occlusion culling using hardware occlusion queries. The algorithm significantly improves on previous techniques by making better use of the temporal and spatial coherence of visibility. This is achieved by using adaptive visibility prediction and query batching. As a result of the new optimizations, the number of issued occlusion queries and the number of rendering state changes are significantly reduced. We also propose a simple method for determining tighter bounding volumes for occlusion queries and a method which further reduces pipeline stalls. The proposed method provides up to an order of magnitude speedup over the previous state of the art. The new technique is simple to implement, does not rely on hardware calibration and integrates well with modern game engines.
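A schematic sketch of the temporal-coherence part: nodes found visible within the last few frames are rendered without a new query, and the remaining queries are issued as one batch. The query callback stands in for hardware occlusion queries, and the fixed frame threshold is illustrative rather than the paper's adaptive visibility prediction.

```python
def cull_with_temporal_coherence(nodes, issue_query, frame, assume_visible_frames=2):
    """Skip queries for nodes seen recently (temporal coherence) and batch the rest,
    so fewer queries and state changes reach the graphics pipeline."""
    to_query, drawn = [], []
    for node in nodes:                                    # assumed front-to-back order
        if frame - node.get("last_visible", -999) <= assume_visible_frames:
            drawn.append(node["name"])                    # render without waiting for a query
        else:
            to_query.append(node)
    for node in to_query:                                 # one batch of (asynchronous) queries
        if issue_query(node):
            node["last_visible"] = frame
            drawn.append(node["name"])
    return drawn

scene = [{"name": "teapot", "last_visible": 9}, {"name": "bunny"}]
print(cull_with_temporal_coherence(scene, lambda n: n["name"] == "bunny", frame=10))
```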

20.
Time-varying three-dimensional scattered data representing snapshots of atomic configurations produced by molecular dynamics simulations are not illuminating by themselves; gaining insight into them poses a tremendous challenge. To take full advantage of the information offered by these simulations, we propose an efficient scheme that integrates various analysis and rendering tasks in order to support interactive visualization of the data at multiple spatial and temporal resolutions. Additional data produced on the fly by various analytical techniques represent the atomic system under consideration at diverse length scales (e.g., nearest neighbor, next-nearest neighbor or beyond) and time scales (e.g., instantaneous, finite intervals or overall averages). In particular, radial distribution functions, coordination environments, clusters and rings are computed and visualized to understand the structural behavior, whereas a variety of displacement data and covariance matrices are explored to understand the dynamical behavior. While the spatial distributions of atoms need to be reproduced correctly during rendering, we have considerable freedom in how other attributes are rendered, since they lack a direct physical interpretation. A combination of techniques including animation, color maps, pathlines, different types of glyphs, and hardware-accelerated rendering is used to display the original and derived data. First-principles molecular dynamics simulation data for liquid systems are used to demonstrate the effectiveness and usefulness of the proposed scheme.
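As an example of the structural analysis mentioned above, the sketch below computes a radial distribution function g(r) for particles in a cubic periodic box using the minimum-image convention; the box size and particle data are placeholders.

```python
import numpy as np

def radial_distribution(positions, box_length, dr=0.1, r_max=None):
    """Radial distribution function g(r) for particles in a cubic periodic box."""
    n = len(positions)
    r_max = r_max or box_length / 2.0
    bins = np.arange(0.0, r_max + dr, dr)
    hist = np.zeros(len(bins) - 1)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box_length * np.round(d / box_length)        # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r[r < r_max], bins=bins)[0]
    shell_volumes = 4.0 / 3.0 * np.pi * (bins[1:] ** 3 - bins[:-1] ** 3)
    density = n / box_length ** 3
    ideal = density * shell_volumes * (n - 1) / 2.0       # expected pair counts for an ideal gas
    return 0.5 * (bins[1:] + bins[:-1]), hist / ideal

pos = np.random.rand(200, 3) * 10.0                       # placeholder configuration
r, g = radial_distribution(pos, box_length=10.0)
print(g[:5])
```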
