Similar Documents
A total of 20 similar documents were found (search time: 604 ms).
1.
In this paper, we present a new approach for shape‐grammar‐based generation and rendering of huge cities in real‐time on the graphics processing unit (GPU). Traditional approaches rely on evaluating a shape grammar and storing the geometry produced as a preprocessing step. During rendering, the pregenerated data is then streamed to the GPU. By interweaving generation and rendering, we overcome the problems and limitations of streaming pregenerated data. Using our methods of visibility pruning and adaptive level of detail, we are able to dynamically generate only the geometry needed to render the current view in real‐time directly on the GPU. We also present a robust and efficient way to dynamically update a scene's derivation tree and geometry, enabling us to exploit frame‐to‐frame coherence. Our combined generation and rendering is significantly faster than all previous work. For detailed scenes, we are capable of generating geometry more rapidly than even just copying pregenerated data from main memory, enabling us to render cities with thousands of buildings at up to 100 frames per second, even with the camera moving at supersonic speed.
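As a rough illustration of the interleaved generate-and-cull loop described above, here is a minimal CPU-side C++ sketch; the paper performs this on the GPU, and every name here (Shape, inViewFrustum, the toy split rule) is an illustrative assumption, not the authors' API.

```cpp
// Sketch: grammar derivation with visibility pruning and distance-based LOD.
#include <cmath>
#include <cstdio>
#include <queue>
#include <vector>

struct Shape { float x, y, z, size; int depth; };

static bool inViewFrustum(const Shape& s) {
    // Placeholder test: accept everything in front of the plane z = 0.
    return s.z + s.size >= 0.0f;
}

int main() {
    const float camX = 0, camY = 0, camZ = -10;     // assumed camera position
    std::queue<Shape> work;
    std::vector<Shape> terminals;                   // geometry for this frame
    work.push({0, 0, 0, 64.0f, 0});                 // axiom: one city block

    while (!work.empty()) {
        Shape s = work.front(); work.pop();
        if (!inViewFrustum(s)) continue;            // visibility pruning
        float dist = std::sqrt((s.x-camX)*(s.x-camX) + (s.y-camY)*(s.y-camY)
                             + (s.z-camZ)*(s.z-camZ));
        // Adaptive LOD: stop refining once the shape is small on screen.
        if (s.size / dist < 0.5f || s.depth >= 6) { terminals.push_back(s); continue; }
        // Toy split rule: subdivide the block into four children.
        float h = s.size * 0.5f;
        for (int i = 0; i < 4; ++i)
            work.push({s.x + (i % 2) * h, s.y, s.z + (i / 2) * h, h, s.depth + 1});
    }
    std::printf("generated %zu terminal shapes\n", terminals.size());
}
```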

2.
We present a novel multi‐view, projective texture mapping technique. While previous multi‐view texturing approaches lead to blurring and ghosting artefacts if 3D geometry and/or camera calibration are imprecise, we propose a texturing algorithm that warps (“floats”) projected textures during run‐time to preserve crisp, detailed texture appearance. Our GPU implementation achieves interactive to real‐time frame rates. The method is very generally applicable and can be used in combination with many image‐based rendering methods or projective texturing applications. By using Floating Textures in conjunction with, e.g., visual hull rendering, light field rendering, or free‐viewpoint video, improved rendering results are obtained from fewer input images, less accurately calibrated cameras, and coarser 3D geometry proxies.
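The core warping idea can be hedged into a few lines: rather than blending projected camera textures directly, each texture is sampled at a flow-compensated coordinate before blending. The sketch below is a CPU toy under assumed data; sampleView and the hard-coded flow offsets stand in for GPU texture fetches and an optical-flow estimate.

```cpp
// Sketch: warp each projected texture by a per-pixel flow, then blend.
#include <cstdio>

const int N = 3;                                   // number of camera views

float sampleView(int v, float u, float vv) {       // toy texture fetch
    return 0.3f * v + 0.5f * u + 0.2f * vv;
}

int main() {
    float u = 0.4f, v = 0.6f;                      // pixel coordinate
    float w[N]  = {0.5f, 0.3f, 0.2f};              // view blend weights
    float du[N] = {0.01f, -0.02f, 0.0f};           // assumed flow offsets
    float dv[N] = {0.00f,  0.01f, -0.01f};
    float color = 0.0f;
    for (int i = 0; i < N; ++i)
        color += w[i] * sampleView(i, u + du[i], v + dv[i]);  // warp, then blend
    std::printf("blended texel value: %f\n", color);
}
```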

3.
Adaptive Caustic Maps Using Deferred Shading
Caustic maps provide an interactive image-space method to render caustics, the focusing of light via reflection and refraction. Unfortunately, caustic mapping suffers problems similar to shadow mapping: aliasing from poor sampling and map projection as well as temporal incoherency from frame-to-frame sampling variations. To reduce these problems, researchers have suggested methods ranging from caustic blurring to building a multiresolution caustic map. Yet these all require a fixed photon sampling, precluding the use of importance-based photon densities. This paper introduces adaptive caustic maps. Instead of densely sampling photons via a rasterization pass, we adaptively emit photons using a deferred shading pass. We describe deferred rendering for refractive surfaces, which speeds rendering of refractive geometry by up to 25%; combined with adaptive sampling, it speeds caustic rendering by up to 200%. These benefits are particularly noticeable for complex geometry or when using millions of photons. While developed for a GPU rasterizer, adaptive caustic map creation can be performed by any renderer that individually traces photons, e.g., a GPU ray tracer.
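The adaptive emission can be pictured as a quadtree refinement over the light's view: cells that miss the refractive object are culled, and surviving cells are subdivided until a target photon resolution is reached. This minimal C++ sketch uses an assumed analytic refractor test in place of the paper's deferred shading pass.

```cpp
// Sketch: emit photons only where cells overlap the refractive object.
#include <cmath>
#include <cstdio>
#include <queue>

struct Cell { float cx, cy, half; };

bool hitsRefractor(const Cell& c) {        // toy test: a disk of radius 0.3
    return std::sqrt(c.cx * c.cx + c.cy * c.cy) - 0.3f < c.half * 1.4142f;
}

int main() {
    std::queue<Cell> work;
    work.push({0, 0, 1.0f});               // whole light view
    int photons = 0;
    while (!work.empty()) {
        Cell c = work.front(); work.pop();
        if (!hitsRefractor(c)) continue;    // cull: no caustic contribution
        if (c.half < 1.0f / 64.0f) { ++photons; continue; }  // emit photon
        float h = c.half * 0.5f;            // subdivide into four sub-cells
        work.push({c.cx - h, c.cy - h, h}); work.push({c.cx + h, c.cy - h, h});
        work.push({c.cx - h, c.cy + h, h}); work.push({c.cx + h, c.cy + h, h});
    }
    std::printf("adaptively emitted %d photons\n", photons);
}
```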

4.
Great advancements in commodity graphics hardware have favoured graphics processing unit (GPU)‐based volume rendering as the main adopted solution for interactive exploration of rectilinear scalar volumes on commodity platforms. Nevertheless, long data transfer times and GPU memory size limitations are often the main limiting factors, especially for massive, time‐varying or multi‐volume visualization, as well as for networked visualization on the emerging mobile devices. To address these issues, a variety of level‐of‐detail (LOD) data representations and compression techniques have been introduced. In order to improve capabilities and performance over the entire storage, distribution and rendering pipeline, the encoding/decoding process is typically highly asymmetric, and systems should ideally compress at data production time and decompress on demand at rendering time. Compression and LOD pre‐computation do not have to adhere to real‐time constraints and can be performed off‐line for high‐quality results. In contrast, adaptive real‐time rendering from compressed representations requires fast, transient and spatially independent decompression. In this report, we review the existing compressed GPU volume rendering approaches, covering sampling grid layouts, compact representation models, compression techniques, GPU rendering architectures and fast decoding techniques.
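A trivial instance of the asymmetric encode/decode pattern the report surveys is fixed-rate min-max quantization of a volume brick: slow, quality-oriented compression at production time paired with a fast, spatially independent per-sample decode. The brick size and layout below are assumptions for illustration.

```cpp
// Sketch: offline min-max quantization of a 4x4x4 brick, fast decode.
#include <cstdint>
#include <cstdio>

const int B = 4 * 4 * 4;                           // one 4x4x4 brick

struct Brick { float lo, hi; uint8_t q[B]; };      // 72 bytes vs 256 raw

Brick encode(const float* v) {                     // offline, quality side
    Brick b{v[0], v[0], {}};
    for (int i = 0; i < B; ++i) { if (v[i] < b.lo) b.lo = v[i]; if (v[i] > b.hi) b.hi = v[i]; }
    float s = (b.hi > b.lo) ? 255.0f / (b.hi - b.lo) : 0.0f;
    for (int i = 0; i < B; ++i) b.q[i] = (uint8_t)((v[i] - b.lo) * s + 0.5f);
    return b;
}

float decode(const Brick& b, int i) {              // transient, per-sample
    return b.lo + (b.hi - b.lo) * (b.q[i] / 255.0f);
}

int main() {
    float v[B]; for (int i = 0; i < B; ++i) v[i] = 0.01f * i;
    Brick b = encode(v);
    std::printf("v[13]=%f decoded=%f\n", v[13], decode(b, 13));
}
```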

5.
Molecular dynamics simulations are a principal tool for studying molecular systems. Such simulations are used to investigate molecular structure, dynamics, and thermodynamical properties, as well as a replacement for, or complement to, costly and dangerous experiments. With the increasing availability of computational power, the resulting data sets are becoming ever larger, and benchmarks indicate that interactive visualization on desktop computers becomes challenging when substantially more than a few million glyphs must be rendered. Trading visual quality for rendering performance is a common approach when interactivity has to be guaranteed. In this paper we address both problems and present a method for high‐quality visualization of massive molecular dynamics data sets. We employ several optimization strategies on different levels of granularity, such as data quantization, data caching in video memory, and a two‐level occlusion culling strategy: coarse culling via hardware occlusion queries and a vertex‐level culling using maximum depth mipmaps. To ensure optimal image quality we employ GPU raycasting and deferred shading with smooth normal vector generation. We demonstrate that our method allows us to interactively render data sets containing tens of millions of high‐quality glyphs.
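The vertex-level culling step can be sketched in a few lines: a mipmap storing the maximum depth of each screen region is built from the depth buffer, and a glyph is discarded if even its nearest depth lies behind that conservative bound. Buffer sizes and values below are made up for illustration.

```cpp
// Sketch: one level of a maximum-depth mipmap, used to cull a glyph.
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    const int W = 8;
    std::vector<float> depth(W * W, 0.4f);          // toy depth buffer
    depth[3 + 3 * W] = 0.9f;                        // one far-away hole
    // Each mip texel keeps the MAX depth of a 2x2 block: a conservative
    // "everything here is nearer than this" bound.
    const int M = W / 2;
    std::vector<float> mip(M * M);
    for (int y = 0; y < M; ++y)
        for (int x = 0; x < M; ++x)
            mip[x + y * M] = std::max(
                std::max(depth[2*x + 2*y*W],       depth[2*x+1 + 2*y*W]),
                std::max(depth[2*x + (2*y+1)*W],   depth[2*x+1 + (2*y+1)*W]));
    // A glyph covering mip texel (0,0): its nearest depth 0.7 is behind the
    // max depth 0.4 already in the buffer, so it is fully occluded.
    float glyphNearestDepth = 0.7f;
    bool occluded = glyphNearestDepth > mip[0];
    std::printf("glyph %s\n", occluded ? "culled" : "rendered");
}
```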

6.
We introduce a GPU-friendly technique that efficiently exploits the highly structured nature of urban environments to ensure rendering quality and interactive performance of city exploration tasks. Central to our approach is a novel discrete representation, called BlockMap, for the efficient encoding and rendering of a small set of textured buildings far from the viewer. A BlockMap compactly represents a set of textured vertical prisms with a bounded on-screen footprint. BlockMaps are stored into small fixed size texture chunks and efficiently rendered through GPU raycasting. BlockMaps can be seamlessly integrated into hierarchical data structures for interactive rendering of large textured urban models. We illustrate an efficient output-sensitive framework in which a visibility-aware traversal of the hierarchy renders components close to the viewer with textured polygons and employs BlockMaps for far away geometry. Our approach provides a bounded size far distance representation of cities, naturally scales with the improving shader technology, and outperforms current state of the art approaches. Its efficiency and generality are demonstrated with the interactive exploration of a large textured model of the city of Paris on a commodity graphics platform.
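To get a feel for raycasting vertical prisms, here is a heavily simplified CPU sketch: the "BlockMap" is reduced to a grid of per-cell prism heights, and a ray is marched in small fixed steps until it dips below a prism top. The grid contents, ray, and stepping are all made-up assumptions; the actual BlockMap encoding is richer (textures, walls, bounded footprints).

```cpp
// Sketch: march a ray through a grid of vertical prisms (height field).
#include <cstdio>

const int N = 4;
float height[N][N] = {{1,2,1,0},{0,3,1,0},{0,1,1,2},{0,0,0,1}};

int main() {
    // Ray (ox,oy,oz) + t*(dx,dy,dz), stepped naively for brevity.
    float ox = 0.1f, oy = 0.1f, oz = 4.0f;
    float dx = 0.6f, dy = 0.5f, dz = -0.62f;
    for (float t = 0; t < 10; t += 0.05f) {
        float x = ox + t * dx, y = oy + t * dy, z = oz + t * dz;
        int i = (int)x, j = (int)y;
        if (i < 0 || j < 0 || i >= N || j >= N) break;   // left the grid
        if (z <= height[i][j]) {                          // hit a prism
            std::printf("hit prism (%d,%d) at t=%.2f, z=%.2f\n", i, j, t, z);
            return 0;
        }
    }
    std::printf("ray missed all prisms\n");
}
```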

7.
In this paper, we propose an efficient solution that addresses the performance problems of current single-pass GPU raycasting algorithms. Our approach provides more control over the rendering process by introducing tighter ray segments for raycasting, while at the same time avoiding the introduction of any new rendering artefacts. We achieve this by dynamically generating, on the GPU, a coarsely fitted proxy geometry, composed of spheres, for the active blocks. The spheres are then rasterised into two z-buffers by a single rendering pass. The resulting two z-buffers are used as the first-hit and last-hit points for the subsequent raycaster. With this approach, only the valid ray segments between the two z-buffers need to be sampled during raycasting. This also provides more coherent parallelism on the GPU, due to more consistent ray lengths and the avoidance of the overheads and dynamic branching of per-sample checks during the raycasting pass.
Our technique is ideal for dynamic data exploration in which both the transfer function and view parameters need to be changed frequently at runtime. The rendering results of our algorithm are identical to the general cube-based proxy geometry algorithm, but the performance can be up to 15.7 times faster. Furthermore, the approach can be adopted by any existing raycasting system in a straightforward way.
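The two-z-buffer idea reduces, per pixel, to taking the nearest entry and farthest exit over all proxy spheres covering that pixel, then sampling only the segment in between. The sketch below analytically "rasterizes" two assumed spheres for a single axis-aligned view ray; real systems do this with a rasterization pass.

```cpp
// Sketch: first-hit / last-hit depths from sphere proxies, then tight sampling.
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Sphere { float cx, cy, cz, r; };

int main() {
    Sphere spheres[2] = {{0, 0, 5, 1}, {0.2f, 0, 8, 1.5f}};   // active blocks
    // One pixel's view ray along +z through the origin.
    float zNear = 1e9f, zFar = -1e9f;
    for (const Sphere& s : spheres) {
        float d2 = s.cx * s.cx + s.cy * s.cy;          // ray-center distance^2
        if (d2 > s.r * s.r) continue;                  // sphere misses pixel
        float h = std::sqrt(s.r * s.r - d2);
        zNear = std::min(zNear, s.cz - h);             // first-hit z-buffer
        zFar  = std::max(zFar,  s.cz + h);             // last-hit z-buffer
    }
    if (zFar < zNear) { std::printf("empty pixel, no sampling\n"); return 0; }
    int samples = 0;
    for (float z = zNear; z <= zFar; z += 0.1f) ++samples;   // tight segment
    std::printf("segment [%.2f, %.2f], %d samples\n", zNear, zFar, samples);
}
```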

8.
9.
We develop an approach for hardware‐accelerated, high‐quality rendering of volume data using trivariate splines. The proposed quasi‐interpolating schemes are real‐time reconstructions. The low total degrees provide several advantages for our GPU implementation. In particular, intersecting rays with spline isosurfaces for direct Phong illumination is performed by simple root finding algorithms (analytic and iterative), while the necessary normals result from blossoming. Since visualization is performed on a per‐fragment basis, our renderer for isosurfaces includes an automatic level of detail. While we use well‐known spatial data structures in the CPU part of the algorithm for hierarchical view frustum culling and memory reduction, our GPU implementations have to take the highly complex structure of the splines into account. These include an appropriate organization of the data streams, i.e. we develop an advanced encoding scheme for the spline coefficients, as well as an implicit scheme for bounding geometry retrieval. In addition, we propose an elaborated clipping procedure to be performed in the fragment shader. These features essentially reduce bus traffic, memory consumption, and data access on the GPU, leading to interactive frame rates for renderings of high visual quality. Compared with pure CPU implementations and existing GPU implementations for trivariate polynomials, frame rates increase by factors between 10 and 100.
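Because the spline restricted to a short ray segment is a low-degree univariate polynomial, the per-fragment isosurface intersection amounts to bracketed root finding. This minimal sketch uses bisection on an assumed toy cubic; the paper also uses analytic root finding, which this does not show.

```cpp
// Sketch: find f(t) = iso along a ray segment by bracketed bisection.
#include <cstdio>

float f(float t) { return ((t - 1.2f) * t + 0.1f) * t + 0.3f; }  // toy cubic

int main() {
    float iso = 0.25f, a = 0.0f, b = 1.0f;          // segment [a, b]
    float fa = f(a) - iso;
    if (fa * (f(b) - iso) > 0) { std::printf("no crossing\n"); return 0; }
    for (int i = 0; i < 32; ++i) {                   // bisection refinement
        float m = 0.5f * (a + b), fm = f(m) - iso;
        if (fa * fm <= 0) b = m; else { a = m; fa = fm; }
    }
    std::printf("isosurface hit at t = %f\n", 0.5f * (a + b));
}
```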

10.
Visualizing dynamic participating media in particle form by fully solving equations from light transport theory is a computationally very expensive process. In this paper, we present a computational pipeline for particle volume rendering that is easily accelerated by the current GPU. To fully harness its massively parallel computing power, we transform input particles into a volumetric density field using a GPU-assisted, adaptive density estimation technique that iteratively adapts the smoothing length for local grid cells. Then, the volume data is visualized efficiently based on the volume photon mapping method where our GPU techniques further improve the rendering quality offered by previous implementations while performing rendering computation in acceptable time. It is demonstrated that high quality volume renderings can be easily produced from large particle datasets in time frames of a few seconds to less than a minute.
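The adaptive smoothing-length iteration can be shown in one dimension: a cell starts with a global smoothing length and enlarges it until enough particles fall inside the kernel support, then deposits kernel-weighted density. Kernel, growth factor, and target count below are assumptions, not the paper's exact scheme.

```cpp
// Sketch: grow the smoothing length h until enough neighbours are found.
#include <cmath>
#include <cstdio>

int main() {
    const int N = 6;
    float particles[N] = {0.1f, 0.15f, 0.2f, 0.8f, 2.5f, 2.6f};
    float cell = 0.5f;                                 // evaluation point
    float h = 0.2f;                                    // initial smoothing length
    const int wanted = 3;                              // target neighbour count
    int inside = 0;
    do {                                               // iterate: grow support
        inside = 0;
        for (float p : particles) if (std::fabs(p - cell) < h) ++inside;
        if (inside < wanted) h *= 1.5f;
    } while (inside < wanted && h < 10.0f);
    float density = 0.0f;                              // simple hat kernel
    for (float p : particles) {
        float q = std::fabs(p - cell) / h;
        if (q < 1.0f) density += (1.0f - q) / h;
    }
    std::printf("h = %.3f, neighbours = %d, density = %.3f\n", h, inside, density);
}
```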

11.
Interactive visualization of large forest scenes is challenging due to the large amount of geometric detail that needs to be generated and stored, particularly in scenarios with a moving observer such as forest walkthroughs or overflights. Here, we present a new method for large‐scale procedural forest generation and visualization at interactive rates. We propose a hybrid approach by combining geometry‐based and volumetric modelling techniques with gradually transitioning level of detail (LOD). Nearer trees are constructed using an extended particle flow algorithm, in which particle trails outline the tree ramification in an inverse direction, i.e. from the leaves towards the roots. A reduced geometric representation of a tree is obtained by subsampling the trails. For distant trees, a new volumetric rendering technique in pixel‐space is introduced, which avoids geometry formation altogether and enables visualization of vast forest areas with millions of unique trees. We demonstrate that a GPU‐based implementation of the proposed method provides interactive frame rates in forest overflight scenarios, where new trees are constructed and their LOD adjusted on the fly.
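A toy version of the inverse particle-flow idea, under strong assumptions: particles seeded at leaf positions drift toward the trunk axis as they descend, their trails outline the branching, and subsampling the trails yields a reduced-LOD skeleton. Everything here (attraction rule, subsampling rate) is illustrative.

```cpp
// Sketch: leaf-to-root particle trails, subsampled for a reduced LOD.
#include <cstdio>

int main() {
    const int P = 4, STEPS = 20;
    float x[P] = {-1.0f, -0.3f, 0.4f, 1.1f};       // leaf positions (y = 1)
    for (int s = 0; s <= STEPS; ++s) {
        float y = 1.0f - (float)s / STEPS;          // flow from crown to root
        for (int p = 0; p < P; ++p) {
            x[p] *= 0.85f;                          // attraction to trunk axis
            if (s % 5 == 0)                         // subsample trail -> LOD
                std::printf("branch node p%d: (%.3f, %.2f)\n", p, x[p], y);
        }
    }
}
```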

12.
Dynamic Sampling and Rendering of Algebraic Point Set Surfaces
Algebraic Point Set Surfaces (APSS) define a smooth surface from a set of points using local moving least‐squares (MLS) fitting of algebraic spheres. In this paper we first revisit the spherical fitting problem and provide a new, more generic solution that includes intuitive parameters for curvature control of the fitted spheres. As a second contribution we present a novel real‐time rendering system for such surfaces using a dynamic up‐sampling strategy combined with a conventional splatting algorithm for high quality rendering. Our approach also includes a new view‐dependent geometric error tailored to efficient and adaptive up‐sampling of the surface. One of the key features of our system is its high degree of flexibility that enables us to achieve high performance even for highly dynamic data or complex models by exploiting temporal coherence at the primitive level. We also address the issue of efficient spatial search data structures with respect to construction, access and GPU friendliness. Finally, we present an efficient parallel GPU implementation of the algorithms and search structures.
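A hedged sketch of the algebraic fitting step behind APSS, in its 2D analogue: a circle is fitted to sample points by linear least squares on x² + y² = 2ax + 2by + c, solved here by Cramer's rule. This is the classic unweighted algebraic fit, not the paper's full MLS formulation with curvature control.

```cpp
// Sketch: least-squares algebraic circle fit (2D analogue of sphere fitting).
#include <cmath>
#include <cstdio>

int main() {
    // Samples of a circle centred at (1,2) with radius 1.5.
    const int N = 4;
    float px[N] = {2.5f, 1.0f, -0.5f, 1.0f};
    float py[N] = {2.0f, 3.5f, 2.0f, 0.5f};
    // Accumulate normal equations M * (a,b,c)^T = r.
    double M[3][3] = {}, r[3] = {};
    for (int i = 0; i < N; ++i) {
        double row[3] = {2.0 * px[i], 2.0 * py[i], 1.0};
        double rhs = px[i] * px[i] + py[i] * py[i];
        for (int j = 0; j < 3; ++j) {
            r[j] += row[j] * rhs;
            for (int k = 0; k < 3; ++k) M[j][k] += row[j] * row[k];
        }
    }
    auto det3 = [](double m[3][3]) {
        return m[0][0]*(m[1][1]*m[2][2]-m[1][2]*m[2][1])
             - m[0][1]*(m[1][0]*m[2][2]-m[1][2]*m[2][0])
             + m[0][2]*(m[1][0]*m[2][1]-m[1][1]*m[2][0]);
    };
    double D = det3(M), sol[3];
    for (int j = 0; j < 3; ++j) {                    // Cramer's rule
        double T[3][3];
        for (int a = 0; a < 3; ++a) for (int b = 0; b < 3; ++b) T[a][b] = M[a][b];
        for (int a = 0; a < 3; ++a) T[a][j] = r[a];
        sol[j] = det3(T) / D;
    }
    double radius = std::sqrt(sol[2] + sol[0]*sol[0] + sol[1]*sol[1]);
    std::printf("centre (%.2f, %.2f), radius %.2f\n", sol[0], sol[1], radius);
}
```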

13.
We propose a unified rendering approach that jointly handles motion and defocus blur for transparent and opaque objects at interactive frame rates. Our key idea is to create a sampled representation of all parts of the scene geometry that are potentially visible at any point in time for the duration of a frame in an initial rasterization step. We store the resulting temporally‐varying fragments (t‐fragments) in a bounding volume hierarchy which is rebuilt every frame using a fast spatial median construction algorithm. This makes our approach suitable for interactive applications with dynamic scenes and animations. Next, we perform spatial sampling to determine all t‐fragments that intersect with a specific viewing ray at any point in time. Viewing rays are sampled according to the lens uv‐sampling for depth‐of‐field effects. In a final temporal sampling step, we evaluate the predetermined viewing ray/t‐fragment intersections for one or multiple points in time. This allows us to incorporate all standard shading effects including transparency. We describe the overall framework, present our GPU implementation, and evaluate our rendering approach with respect to scalability, quality, and performance.
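The joint lens/time sampling can be caricatured in a few lines: each pixel sample draws a lens position (defocus) and a time (motion), tests it against a moving fragment, and the results are averaged. The moving disc, thin-lens offset, and sample counts below are all made-up stand-ins for the paper's t-fragments and BVH queries.

```cpp
// Sketch: average visibility over joint lens (u) and time (t) samples.
#include <cstdio>
#include <cstdlib>

float frand() { return (float)rand() / RAND_MAX; }

int main() {
    srand(42);
    const int SAMPLES = 64;
    float hitSum = 0.0f;
    for (int i = 0; i < SAMPLES; ++i) {
        float t = frand();                           // time within the frame
        float u = frand() - 0.5f;                    // lens uv sample
        // "t-fragment": a disc moving from x = -1 to x = +1 during the frame.
        float fragX = -1.0f + 2.0f * t;
        float rayX = u * 0.2f;                       // toy thin-lens ray offset
        if (fragX - 0.3f < rayX && rayX < fragX + 0.3f)  // disc radius 0.3
            hitSum += 1.0f;                          // fragment covers sample
    }
    std::printf("pixel coverage (motion+defocus blurred): %.2f\n",
                hitSum / SAMPLES);
}
```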

14.
Depth and visual hulls are useful for quick reconstruction and rendering of a 3D object based on a number of reference views. However, for many scenes, especially multi‐object, these hulls may contain significant artifacts known as phantom geometry. In depth hulls the phantom geometry appears behind the scene objects in regions occluded from all the reference views. In visual hulls the phantom geometry may also appear in front of the objects because there is not enough information to unambiguously infer the object positions. In this work we identify which parts of the depth and visual hull might constitute phantom geometry. We define the notion of reduced depth hull and reduced visual hull as the parts of the corresponding hull that are phantom‐free. We analyze the role of the depth information in identification of the phantom geometry. Based on this, we provide an algorithm for rendering the reduced depth hull at interactive frame rates and suggest an approach for rendering the reduced visual hull. The rendering algorithms take advantage of modern GPU programming techniques. Our techniques bypass explicit reconstruction of the hulls, rendering the reduced depth or visual hull directly from the reference views.
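The basic membership test the paper builds on is simple: a point belongs to the visual hull if it projects inside the silhouette of every reference view. The sketch below uses an assumed scene (one sphere, three orthographic views) and picks a point outside the true object yet inside every silhouette, exactly the kind of region where over-conservative hull geometry, and in multi-object scenes phantom geometry, lives.

```cpp
// Sketch: silhouette-consistency test against three orthographic views.
#include <cstdio>

const int VIEWS = 3;

// Toy scene: one sphere at the origin, radius 1, seen along the three axes.
bool inSilhouette(int view, float x, float y, float z) {
    float u, v;
    if (view == 0)      { u = y; v = z; }   // camera looking along +x
    else if (view == 1) { u = x; v = z; }   // along +y
    else                { u = x; v = y; }   // along +z
    return u * u + v * v <= 1.0f;           // orthographic silhouette disc
}

int main() {
    float p[3] = {0.6f, 0.6f, 0.6f};        // outside the sphere (|p| > 1) ...
    bool inHull = true;
    for (int v = 0; v < VIEWS; ++v)
        inHull = inHull && inSilhouette(v, p[0], p[1], p[2]);
    // ... yet inside every silhouette, so it survives the visual hull test.
    std::printf("point %s the visual hull\n", inHull ? "is inside" : "is outside");
}
```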

15.
Depth-of-Field Rendering by Pyramidal Image Processing
We present an image-based algorithm for interactively rendering depth-of-field effects in images with depth maps. While previously published methods for interactive depth-of-field rendering suffer from various rendering artifacts such as color bleeding and sharpened or darkened silhouettes, our algorithm achieves a significantly improved image quality by employing recently proposed GPU-based pyramid methods for image blurring and pixel disocclusion. For the same reason, our algorithm offers interactive rendering performance on modern GPUs and is suitable for real-time rendering with small circles of confusion. We validate the image quality provided by our algorithm by side-by-side comparisons with results obtained by distributed ray tracing.
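The pyramidal blur at the heart of such methods can be sketched in one dimension: the image is repeatedly downsampled, and each pixel reads from the pyramid level matching its circle of confusion, so large blurs cost only a few reads. The level-selection rule below is a simplification of the paper's pyramid analysis.

```cpp
// Sketch: 1D blur pyramid; pick the level matching the circle of confusion.
#include <cstdio>
#include <vector>

int main() {
    std::vector<std::vector<float>> pyr;
    pyr.push_back({1, 0, 0, 0, 1, 1, 0, 0});        // level 0: sharp image
    while (pyr.back().size() > 1) {                 // build blur pyramid
        const auto& src = pyr.back();
        std::vector<float> dst(src.size() / 2);
        for (size_t i = 0; i < dst.size(); ++i)
            dst[i] = 0.5f * (src[2 * i] + src[2 * i + 1]);
        pyr.push_back(dst);
    }
    // A pixel with a ~4-pixel circle of confusion reads level 2 (2^2 = 4).
    int x = 4, level = 2;
    std::printf("blurred value at x=%d: %f\n", x, pyr[level][x >> level]);
}
```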

16.
The rendering of large data sets can result in cluttered displays and non‐interactive update rates, leading to time‐consuming analyses. A straightforward solution is to reduce the number of items, thereby producing an abstraction of the data set. For the visual analysis to remain accurate, the graphical representation of the abstraction must preserve the significant features present in the original data. This paper presents a screen space quality method, based on distance transforms, that measures the visual quality of a data abstraction. This screen space measure is shown to better capture significant visual structures in data, compared with data space measures. The presented method is implemented on the GPU, allowing interactive creation of high quality graphical representations of multivariate data sets containing tens of thousands of items.
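A minimal sketch of the measurement idea, under assumptions: both the full data rendering and its abstraction are binarized, a distance transform is taken of each, and the abstraction's error is the disagreement between the two distance fields. A 1D two-pass chamfer transform keeps the example short.

```cpp
// Sketch: compare distance transforms of full vs. abstracted renderings.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

std::vector<float> distanceTransform(const std::vector<int>& img) {
    const float INF = 1e9f;
    std::vector<float> d(img.size());
    for (size_t i = 0; i < img.size(); ++i) d[i] = img[i] ? 0.0f : INF;
    for (size_t i = 1; i < d.size(); ++i)                     // forward pass
        d[i] = std::min(d[i], d[i-1] + 1);
    for (size_t i = d.size() - 1; i-- > 0; )                  // backward pass
        d[i] = std::min(d[i], d[i+1] + 1);
    return d;
}

int main() {
    std::vector<int> full = {1,1,0,1,0,0,1,1,1,0};  // all data items
    std::vector<int> abs1 = {1,0,0,1,0,0,0,1,0,0};  // an abstraction (subset)
    auto df = distanceTransform(full), da = distanceTransform(abs1);
    float err = 0;
    for (size_t i = 0; i < df.size(); ++i) err += std::fabs(df[i] - da[i]);
    std::printf("screen-space abstraction error: %.1f\n", err / df.size());
}
```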

17.
Style Transfer Functions for Illustrative Volume Rendering
Illustrative volume visualization frequently employs non-photorealistic rendering techniques to enhance important features or to suppress unwanted details. However, it is difficult to integrate multiple non-photorealistic rendering approaches into a single framework due to great differences in the individual methods and their parameters. In this paper, we present the concept of style transfer functions. Our approach enables flexible data-driven illumination which goes beyond using the transfer function to just assign colors and opacities. An image-based lighting model uses sphere maps to represent non-photorealistic rendering styles. Style transfer functions allow us to combine a multitude of different shading styles in a single rendering. We extend this concept with a technique for curvature-controlled style contours and an illustrative transparency model. Our implementation of the presented methods allows interactive generation of high-quality volumetric illustrations.
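A sketch of the lit-sphere lookup underlying this lighting model: the view-space normal indexes a painted sphere image, and a transfer function over the scalar value selects which style sphere to sample. The two analytic "style spheres" and the 1D style selection below are made-up stand-ins for actual sphere-map textures.

```cpp
// Sketch: style selected by the transfer function, shaded via a sphere map.
#include <cstdio>

// Toy "style spheres": brightness as a function of sphere-map coordinates.
float styleSphere(int style, float u, float v) {
    return style == 0 ? 0.5f + 0.5f * v            // top-lit smooth style
                      : (v > 0.2f ? 1.0f : 0.2f);  // hard cartoon style
}

int main() {
    float density = 0.7f;                           // sample's scalar value
    int style = density > 0.5f ? 1 : 0;             // 1D style transfer function
    float n[3] = {0.3f, 0.6f, 0.742f};              // unit view-space normal
    // Sphere-map lookup: (nx, ny) indexes the unit disc of the style image.
    float shade = styleSphere(style, n[0], n[1]);
    std::printf("style %d, shade %.2f\n", style, shade);
}
```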

18.
Polyhedral meshes consisting of triangles, quads and pentagons, as well as polar configurations, cover all major sampling and modeling scenarios. We give an algorithm for efficient local, parallel conversion of such meshes to an everywhere smooth surface consisting of low‐degree polynomial pieces. Quadrilateral facets with 4‐valent vertices are ‘regular’ and are mapped to bi‐cubic patches so that adjacent bi‐cubics join C2 as for cubic tensor‐product splines. The algorithm can be implemented in the vertex and geometry shaders of the GPU pipeline and does not use the fragment shader. Its implementation in DirectX 10 achieves conversion plus rendering at 659 frames per second, processing 42.5 million triangles per second, on an input model of 1300 facets of which 60% are not regular.
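For the regular case, the C2 behavior comes for free from uniform bicubic tensor-product B-splines: a quad facet surrounded by 4-valent vertices carries a 4x4 control grid, evaluated as below. The control heights are an assumed toy example; the paper's conversion of irregular facets is not shown.

```cpp
// Sketch: evaluate a uniform bicubic tensor-product B-spline patch.
#include <cstdio>

float bspline(int i, float t) {                     // uniform cubic basis
    float s = 1.0f - t;
    switch (i) {
        case 0:  return s * s * s / 6.0f;
        case 1:  return (3*t*t*t - 6*t*t + 4) / 6.0f;
        case 2:  return (-3*t*t*t + 3*t*t + 3*t + 1) / 6.0f;
        default: return t * t * t / 6.0f;
    }
}

int main() {
    float z[4][4] = {{0,0,0,0},{0,1,1,0},{0,1,1,0},{0,0,0,0}};  // control heights
    float u = 0.5f, v = 0.5f, h = 0.0f;
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            h += bspline(i, u) * bspline(j, v) * z[i][j];
    std::printf("patch height at (0.5, 0.5) = %f\n", h);
}
```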

19.
We present a flexible and highly efficient hardware‐assisted volume renderer based on the original Projected Tetrahedra (PT) algorithm. Unlike recent similar approaches, our method is exclusively based on the rasterization of simple geometric primitives and takes full advantage of graphics hardware. Both vertex and geometry shaders are used to compute the tetrahedral projection, while the volume ray integral is evaluated in a fragment shader; hence, volume rendering is performed entirely on the GPU within a single pass through the pipeline. We apply a CUDA‐based visibility ordering achieving rendering and sorting performance of over 6 M Tet/s for unstructured datasets. Furthermore, as each tetrahedron is processed independently, we employ a data‐parallel solution that is neither bound by GPU memory size nor reliant on auxiliary volume information. In addition, iso‐surfaces can be readily extracted during the rendering process, and time‐varying data are handled without extra burden.
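The fragment-stage work in PT-style renderers reduces to evaluating the emission-absorption ray integral for one tetrahedron: interpolated entry/exit scalars plus the segment thickness along the ray. The sketch below uses a trivial transfer function as an assumption; real systems use pre-integrated lookup tables.

```cpp
// Sketch: emission-absorption integral across one tetrahedron segment.
#include <cmath>
#include <cstdio>

int main() {
    float sEntry = 0.2f, sExit = 0.8f;              // interpolated scalars
    float len = 0.35f;                              // thickness along the ray
    float sMid = 0.5f * (sEntry + sExit);           // midpoint approximation
    float emission = sMid;                          // colour ~ scalar (toy TF)
    float tau = 4.0f * sMid * len;                  // optical depth
    float alpha = 1.0f - std::exp(-tau);            // opacity from absorption
    std::printf("fragment colour %.3f, alpha %.3f\n", emission * alpha, alpha);
}
```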

20.
Simulation of light transport through lens systems plays an important role in graphics. While basic imaging properties can be conveniently derived from linear models (like ABCD matrices), these approximations fail to describe nonlinear effects and aberrations that arise in real optics. Such effects can be computed by proper ray tracing, for which, however, finding suitable sampling and filtering strategies is often not a trivial task. Inspired by aberration theory, which describes the deviation from the linear ray transfer in terms of wavefront distortions, we propose a ray‐space formulation for nonlinear effects. In particular, we approximate the analytical solution to the ray tracing problem by means of a Taylor expansion in the ray parameters. This representation enables a construction‐kit approach to complex optical systems in the spirit of matrix optics. It is also very simple to evaluate, which allows for efficient execution on CPU and GPU alike, including the computation of mixed derivatives of any order. We evaluate fidelity and performance of our polynomial model, and show applications in high‐quality offline rendering and at interactive frame rates.
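The flavor of such a polynomial ray-transfer model can be shown in one line of algebra: the paraxial (ABCD) slope change of a thin lens is augmented with a cubic term in the aperture coordinate, the lowest-order Taylor signature of spherical aberration. The focal length and aberration coefficient below are made-up numbers, not fitted values from the paper.

```cpp
// Sketch: linear thin-lens transfer plus one cubic aberration term.
#include <cstdio>

int main() {
    float f = 50.0f;                                // focal length (mm), assumed
    float x = 10.0f, u = 0.02f;                     // ray height and slope
    // Linear (paraxial) transfer through the lens: u' = u - x / f.
    float uLinear = u - x / f;
    // Taylor-expanded nonlinear transfer: add an x^3 aberration term.
    float a3 = -1.5e-6f;                            // assumed aberration coeff
    float uPoly = uLinear + a3 * x * x * x;
    std::printf("paraxial slope %.5f, with aberration %.5f\n", uLinear, uPoly);
}
```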
