Similar Literature
 20 similar documents found (search time: 31 ms)
1.
We present a performance comparison of bounding volume hierarchies and kd-trees for ray tracing on many-core architectures (GPUs). The comparison is focused on rendering times and traversal characteristics on the GPU using data structures that were optimized for very high performance of tracing rays. To achieve low rendering times, we extensively examine the constants used in termination criteria for the two data structures. We show that for a contemporary GPU architecture (NVIDIA Kepler) bounding volume hierarchies have higher ray tracing performance than kd-trees for simple and moderately complex scenes. On the other hand, kd-trees have higher performance for complex scenes, in particular for those with high depth complexity. Finally, we analyse the causes of the performance discrepancies using the profiling characteristics of the ray tracing kernels.
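The abstract does not spell out the termination criteria whose constants are tuned; as a hedged point of reference, a common form is the surface area heuristic (SAH) leaf test, in which a node P with N primitives becomes a leaf when a split into children L and R is not expected to pay off:

```latex
% Hedged sketch of a standard SAH-style termination test, not necessarily the
% paper's exact criterion. C_trav and C_isect are the tunable traversal and
% intersection cost constants; SA(.) denotes surface area.
C_{\mathrm{leaf}} = C_{\mathrm{isect}}\,N ,
\qquad
C_{\mathrm{split}} = C_{\mathrm{trav}}
  + C_{\mathrm{isect}}\left(\frac{SA(L)}{SA(P)}\,N_L
  + \frac{SA(R)}{SA(P)}\,N_R\right)
```

The node is made a leaf whenever C_leaf <= C_split, typically combined with a maximum-depth bound.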

2.
We propose a new technique for in-core and out-of-core GPU ray tracing using a generalization of hierarchical occlusion culling in the style of the CHC++ method. Our method exploits the rasterization pipeline and hardware occlusion queries in order to create coherent batches of work for localized shader-based ray tracing kernels. By combining hierarchies in both ray space and object space, the method is able to share intermediate traversal results among multiple rays. We exploit temporal coherence among similar ray sets between frames and also within the given frame. A suitable management of the current visibility state makes it possible to benefit from occlusion culling for less coherent ray types like diffuse reflections. Since large scenes are still a challenge for modern GPU ray tracers, our method is most useful for scenes with medium to high complexity, especially since our method inherently supports ray tracing highly complex scenes that do not fit in GPU memory. For in-core scenes our method is comparable to CUDA ray tracing and performs up to 5.94× better than pure shader-based ray tracing.

3.
We present a hybrid ray tracing system, where the work is divided between the CPU cores and the GPU in an integrated chip, and communication occurs via shared memory. Rays are organized in large packets that can be distributed between the two units as needed. Testing visibility between rays and the scene is mostly performed using an optimized kernel on the GPU, but the CPU can help as necessary. The CPU cores typically handle most or all shading, which makes it easy to support complex appearances. For efficiency, the CPU cores shade whole batches of rays by sorting them on material and shading each material using a vectorized kernel. In addition, we introduce a method to support light paths with arbitrary recursion, such as multiple recursive Whitted-style ray tracing and adaptive sampling where the result of a ray is examined before sending the next, while still batching up rays for the benefit of GPU-accelerated traversal and vectorized shading. This allows our system to achieve high rendering performance while maintaining the flexibility to accommodate different rendering algorithms.
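A minimal CPU-side sketch of the "sort by material, shade in batches" step described above; the Hit layout and the shadeBatch callable are hypothetical stand-ins for the system's actual interfaces, and the real system shades each batch with a vectorized kernel rather than the scalar call shown here.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Hit {
    uint32_t materialId;  // material of the intersected surface
    uint32_t rayIndex;    // which ray in the packet produced this hit
    float    t, u, v;     // hit distance and barycentric coordinates
};

// Sort hits so that each material's shader runs over one contiguous batch,
// then invoke the per-material shading kernel batch by batch.
template <class ShadeBatch>
void shadeSortedByMaterial(std::vector<Hit>& hits, ShadeBatch&& shadeBatch) {
    std::sort(hits.begin(), hits.end(),
              [](const Hit& a, const Hit& b) { return a.materialId < b.materialId; });

    for (size_t begin = 0; begin < hits.size();) {
        size_t end = begin;
        while (end < hits.size() && hits[end].materialId == hits[begin].materialId)
            ++end;
        shadeBatch(hits[begin].materialId, &hits[begin], end - begin);  // one coherent batch
        begin = end;
    }
}
```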

4.
Out-of-core Data Management for Path Tracing on Hybrid Resources
We present a software system that enables path-traced rendering of complex scenes. The system consists of two primary components: an application layer that implements the basic rendering algorithm, and an out-of-core scheduling and data-management layer designed to assist the application layer in exploiting hybrid computational resources (e.g., CPUs and GPUs) simultaneously. We describe the basic system architecture, discuss design decisions of the system's data-management layer, and outline an efficient implementation of a path tracer application, where GPUs perform functions such as ray tracing, shadow tracing, importance-driven light sampling, and surface shading. The use of GPUs speeds up the runtime of these components by factors ranging from two to twenty, resulting in a substantial overall increase in rendering speed. The path tracer scales well with respect to CPUs, GPUs and memory per node, as well as with the number of nodes. The result is a system that can render large complex scenes with strong performance and scalability.

5.
We present a spatial index structure to accelerate ray tracing on GPUs. It is a flat, non-hierarchical spatial subdivision of the scene into axis-aligned cells of varying size. In order to construct it, we first nest an octree into each cell of a uniform grid. We then apply two optimization passes to increase ray traversal performance: First, we reduce the expected cost for ray traversal by merging cells together. This adapts the structure to complex primitive distributions, solving the “teapot in a stadium” problem. Second, we decouple the cell boundaries used during traversal for rays entering and exiting a given cell. This allows us to extend the exiting boundaries over adjacent cells that are either empty or do not contain additional primitives. Now, exiting rays can skip empty space and avoid repeating intersection tests. Finally, we demonstrate that in addition to the fast ray traversal performance, the structure can be rebuilt efficiently in parallel, allowing for ray tracing dynamic scenes.

6.
We present a novel approach to ray tracing execution on commodity graphics hardware using CUDA. We decompose a standard ray tracing algorithm into several data-parallel stages that are mapped efficiently to the massively parallel architecture of modern GPUs. These stages include: ray sorting into coherent packets, creation of frustums for packets, breadth-first frustum traversal through a bounding volume hierarchy for the scene, and localized ray-primitive intersections. We utilize the well-known parallel primitives scan and segmented scan in order to process irregular data structures, to remove the need for a stack, and to minimize branch divergence in all stages. Our ray sorting stage is based on applying hash values to individual rays, ray stream compression, sorting and decompression. Our breadth-first BVH traversal is based on parallel frustum-bounding box intersection tests and a parallel scan per BVH level. We demonstrate our algorithm with area light sources to produce soft shadows and show that our concept is well suited to GPU implementation. For the same data sets and ray-primitive intersection routines, our pipeline is ~3x faster than an optimized standard depth-first ray tracer implemented in a single kernel.
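A sketch of the ray-sorting idea under stated assumptions: the key layout (a coarse origin grid plus the direction octant) and the CPU-side std::sort are illustrative, whereas the paper performs the corresponding steps (hashing, stream compaction, sorting) with data-parallel primitives on the GPU.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct Ray { float ox, oy, oz, dx, dy, dz; };

// Rays that start in the same coarse grid cell and point into the same octant
// receive identical keys and therefore end up adjacent after sorting.
uint32_t coherenceHash(const Ray& r, float cellSize) {
    uint32_t gx = uint32_t(int32_t(std::floor(r.ox / cellSize))) & 0x1FF;
    uint32_t gy = uint32_t(int32_t(std::floor(r.oy / cellSize))) & 0x1FF;
    uint32_t gz = uint32_t(int32_t(std::floor(r.oz / cellSize))) & 0x1FF;
    uint32_t octant = (r.dx < 0 ? 1u : 0u) | (r.dy < 0 ? 2u : 0u) | (r.dz < 0 ? 4u : 0u);
    return (octant << 27) | (gx << 18) | (gy << 9) | gz;   // 3 + 9 + 9 + 9 bits
}

void sortRaysForCoherence(std::vector<Ray>& rays, float cellSize) {
    std::sort(rays.begin(), rays.end(), [&](const Ray& a, const Ray& b) {
        return coherenceHash(a, cellSize) < coherenceHash(b, cellSize);
    });
}
```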

7.
We present an efficient algorithm for object-space proximity queries between multiple deformable triangular meshes. Our approach uses the rasterization capabilities of the GPU to produce an image-space representation of the vertices. Using this image-space representation, inter-object vertex-triangle distances and closest points lying under a user-defined threshold are computed in parallel by conservative rasterization of bounding primitives and sorted using atomic operations. We additionally introduce a similar technique to detect penetrating vertices. We show how mechanisms of modern GPUs such as mipmapping, Early-Z and Early-Stencil culling can optimize the performance of our method. Our algorithm is able to compute dense proximity information for complex scenes made of more than a hundred thousand triangles in real time, outperforming a CPU implementation based on bounding volume hierarchies by more than an order of magnitude.

8.
Stackless traversal algorithms for ray tracing acceleration structures require significantly less storage per ray than ordinary stack-based ones. This advantage is important for massively parallel rendering methods, where there are many rays in flight. On SIMD architectures, a commonly used acceleration structure is the MBVH, which has multiple bounding boxes per node for improved parallelism. It scales to branching factors higher than two, for which, however, only stack-based traversal methods have been proposed so far. In this paper, we introduce a novel stackless traversal algorithm for MBVHs with up to four-way branching. Our approach replaces the stack with a small bitmask, supports dynamic ordered traversal, and has a low computation overhead. We also present efficient implementation techniques for recent CPU, MIC (Intel Xeon Phi) and GPU (NVIDIA Kepler) architectures.
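A hedged sketch of the per-node step such a traversal relies on: intersect the ray against a node's four child boxes, store the hits in a 4-bit mask, and consume the mask in near-to-far order, with the mask taking over the bookkeeping a stack would normally do. The full algorithm in the paper also encodes how to resume at ancestor nodes, which this fragment omits; the data layout is illustrative.

```cpp
#include <algorithm>
#include <cstdint>
#include <limits>
#include <utility>

struct Ray      { float o[3], invD[3], tMin, tMax; };
struct MBVHNode { float lo[4][3], hi[4][3]; };  // four child AABBs per node

// Slab test for child c; returns the entry distance or +inf on a miss.
float intersectChild(const MBVHNode& n, const Ray& r, int c) {
    float tNear = r.tMin, tFar = r.tMax;
    for (int a = 0; a < 3; ++a) {
        float t0 = (n.lo[c][a] - r.o[a]) * r.invD[a];
        float t1 = (n.hi[c][a] - r.o[a]) * r.invD[a];
        if (t0 > t1) std::swap(t0, t1);
        tNear = std::max(tNear, t0);
        tFar  = std::min(tFar, t1);
    }
    return tNear <= tFar ? tNear : std::numeric_limits<float>::infinity();
}

// Build a 4-bit hit mask and visit the hit children in near-to-far order,
// clearing each child's bit as it is taken (the role a stack would play).
template <class Visit>
void visitChildren(const MBVHNode& n, const Ray& r, Visit&& visit) {
    float dist[4];
    uint32_t mask = 0;
    for (int c = 0; c < 4; ++c) {
        dist[c] = intersectChild(n, r, c);
        if (dist[c] < std::numeric_limits<float>::infinity()) mask |= 1u << c;
    }
    while (mask) {
        int best = -1;
        for (int c = 0; c < 4; ++c)          // pick the nearest remaining child
            if ((mask & (1u << c)) && (best < 0 || dist[c] < dist[best])) best = c;
        mask &= ~(1u << best);
        visit(best, dist[best]);
    }
}
```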

9.
This survey gives an overview of the current state of the art in GPU techniques for interactive large-scale volume visualization. Modern techniques in this field have brought about a sea change in how interactive visualization and analysis of giga-, tera- and petabytes of volume data can be enabled on GPUs. In addition to combining the parallel processing power of GPUs with out-of-core methods and data streaming, a major enabler for interactivity is making both the computational and the visualization effort proportional to the amount and resolution of data that is actually visible on screen, i.e. ‘output-sensitive’ algorithms and system designs. This leads to recent output-sensitive approaches that are ‘ray-guided’, ‘visualization-driven’ or ‘display-aware’. In this survey, we focus on these characteristics and propose a new categorization of GPU-based large-scale volume visualization techniques based on the notions of actual output-resolution visibility and the current working set of volume bricks—the current subset of data that is minimally required to produce an output image of the desired display resolution. Furthermore, we discuss the differences and similarities of different rendering and data traversal strategies in volume rendering by putting them into a common context—the notion of address translation. For our purposes here, we view parallel (distributed) visualization using clusters as an orthogonal set of techniques that we do not discuss in detail but that can be used in conjunction with what we present in this survey.
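To make the address-translation notion concrete, here is a small sketch under assumed data layouts (the PageEntry/PageTable types are illustrative, not from any particular system): a coarse page table maps a brick coordinate either to its location in a resident brick pool or to a "not resident" flag, at which point a ray-guided renderer would request the brick or fall back to a coarser resolution.

```cpp
#include <cstdint>
#include <vector>

struct PageEntry {
    bool     resident;              // is the brick currently in the pool?
    uint32_t poolX, poolY, poolZ;   // brick origin inside the pool texture
};

struct PageTable {
    uint32_t bricksX, bricksY, bricksZ;  // page-table dimensions in bricks
    uint32_t brickSize;                  // voxels per brick edge
    std::vector<PageEntry> entries;

    const PageEntry& lookup(uint32_t bx, uint32_t by, uint32_t bz) const {
        return entries[(bz * bricksY + by) * bricksX + bx];
    }
};

// Translate a voxel coordinate into a brick-pool coordinate; returns false when
// the brick is not resident, in which case a real renderer would request it or
// sample a coarser level (this is what keeps the pipeline output-sensitive).
bool translate(const PageTable& pt, uint32_t vx, uint32_t vy, uint32_t vz,
               uint32_t out[3]) {
    uint32_t b = pt.brickSize;
    const PageEntry& e = pt.lookup(vx / b, vy / b, vz / b);
    if (!e.resident) return false;
    out[0] = e.poolX + vx % b;
    out[1] = e.poolY + vy % b;
    out[2] = e.poolZ + vz % b;
    return true;
}
```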

10.
We develop an approach for hardware-accelerated, high-quality rendering of volume data using trivariate splines. The proposed quasi-interpolating schemes are real-time reconstructions. The low total degrees provide several advantages for our GPU implementation. In particular, intersecting rays with spline isosurfaces for direct Phong illumination is performed by simple root finding algorithms (analytic and iterative), while the necessary normals result from blossoming. Since visualization is performed on a per-fragment basis, our isosurface renderer includes an automatic level of detail. While we use well-known spatial data structures in the CPU part of the algorithm for hierarchical view frustum culling and memory reduction, our GPU implementations have to take the highly complex structure of the splines into account. These include an appropriate organization of the data streams, i.e. we develop an advanced encoding scheme for the spline coefficients, as well as an implicit scheme for bounding geometry retrieval. In addition, we propose an elaborate clipping procedure to be performed in the fragment shader. These features essentially reduce bus traffic, memory consumption, and data access on the GPU, leading to interactive frame rates for renderings of high visual quality. Compared with pure CPU implementations and existing GPU implementations for trivariate polynomials, frame rates increase by factors between 10 and 100.

11.
Photorealistic image synthesis is a computationally demanding task that relies on ray tracing for the evaluation of integrals. Rendering time is dominated by tracing long paths that are very incoherent by construction. We therefore investigate the use of SIMD instructions to accelerate incoherent rays. SIMD is used in the hierarchy construction, the tree traversal and the leaf intersection. This is achieved by increasing the arity of acceleration structures, which also reduces memory requirements. We show that the resulting hierarchies can be built quickly and are smaller than acceleration structures known so far, while at the same time outperforming them for incoherent rays. Our new acceleration structure speeds up ray tracing by a factor of 1.6 to 2.0 compared to a highly optimized bounding interval hierarchy implementation, and 1.3 to 1.6 compared to an efficient kd-tree. At the same time, the memory requirements are reduced by 10–50%. Additionally, we show how a caching mechanism in conjunction with this memory-efficient hierarchy can be used to speed up shadow rays in a global illumination algorithm without increasing the memory footprint. This optimization decreased the number of traversal steps by up to 50%.
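The abstract does not describe the caching mechanism in detail; the sketch below shows a classic "shadow cache" in that spirit, purely as an assumption: the triangle that occluded the previous shadow ray is tested first, and full traversal is only needed when that test fails. The testTri and traverse callables are hypothetical hooks into the host renderer.

```cpp
#include <cstdint>

struct Ray { float o[3], d[3], tMax; };

struct ShadowCache {
    static constexpr uint32_t kInvalid = ~0u;
    uint32_t lastOccluder = kInvalid;   // triangle that blocked the previous ray

    // testTri(id, ray) -> bool and traverse(ray, &id) -> bool are supplied by
    // the renderer (exact triangle test and any-hit hierarchy traversal).
    template <class TestTri, class Traverse>
    bool occluded(const Ray& shadowRay, TestTri&& testTri, Traverse&& traverse) {
        if (lastOccluder != kInvalid && testTri(lastOccluder, shadowRay))
            return true;                      // cache hit: traversal skipped entirely
        uint32_t tri = kInvalid;
        if (traverse(shadowRay, &tri)) {      // cache miss: full any-hit traversal
            lastOccluder = tri;
            return true;
        }
        lastOccluder = kInvalid;              // unoccluded: invalidate the cache
        return false;
    }
};
```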

12.
Depth-of-Field Rendering by Pyramidal Image Processing
We present an image-based algorithm for interactive rendering of depth-of-field effects in images with depth maps. While previously published methods for interactive depth-of-field rendering suffer from various rendering artifacts such as color bleeding and sharpened or darkened silhouettes, our algorithm achieves a significantly improved image quality by employing recently proposed GPU-based pyramid methods for image blurring and pixel disocclusion. For the same reason, our algorithm offers interactive rendering performance on modern GPUs and is suitable for real-time rendering with small circles of confusion. We validate the image quality provided by our algorithm by side-by-side comparisons with results obtained by distributed ray tracing.
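For context on the circle of confusion mentioned above (a standard thin-lens quantity, not a formula taken from the paper), the blur diameter that a depth-map-driven method of this kind approximates per pixel is:

```latex
% Thin-lens circle-of-confusion diameter: A = aperture diameter, f = focal
% length, z_f = focus distance, z = scene depth read from the depth map.
c(z) = \frac{A\, f\, \lvert z - z_f \rvert}{z\,(z_f - f)}
```

Small values of c(z) keep the required blur radii small, which is the regime the abstract identifies as suitable for real-time use.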

13.
We present an efficient Graphics Processing Unit (GPU)-based implementation of the Projected Tetrahedra (PT) algorithm. By reducing most of the CPU–GPU data transfer, the algorithm achieves interactive frame rates (up to 2.0 M Tets/s) on current graphics hardware. Since no topology information is stored, it requires substantially less memory than recent interactive ray casting approaches. The method uses a two-pass GPU approach with two fragment shaders. This work includes extended volume inspection capabilities by supporting interactive transfer function editing and isosurface highlighting using a Phong illumination model.

14.
Metaballs are implicit surfaces widely used to model curved objects, represented by the isosurface of a density field defined by a set of points. Recently, the results of particle-based simulations have often been visualized using a large number of metaballs; however, such visualizations have high rendering costs. In this paper, we propose a fast technique for rendering metaballs on the GPU. Instead of using polygonization, the isosurface is directly evaluated in a per-pixel manner. For such evaluation, all metaballs contributing to the isosurface need to be extracted along each viewing ray, within the limited memory of GPUs. We handle this by keeping a list of metaballs contributing to the isosurface and efficiently updating it. Our method requires neither expensive precomputation nor the acceleration data structures often used in existing ray tracing techniques. With several optimizations, we can display a large number of moving metaballs quickly.
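A simplified sketch of per-pixel isosurface evaluation for metaballs, under assumptions: the falloff kernel, the fixed marching step and the bisection refinement are illustrative choices, and the per-ray list of contributing metaballs that the paper maintains on the GPU is not modelled here (every ball is simply tested).

```cpp
#include <cmath>
#include <vector>

struct Vec3     { float x, y, z; };
struct Metaball { Vec3 c; float R; };   // center and radius of influence

// Sum a compactly supported falloff kernel over all metaballs at point p.
float density(const std::vector<Metaball>& balls, Vec3 p) {
    float sum = 0.0f;
    for (const Metaball& m : balls) {
        float dx = p.x - m.c.x, dy = p.y - m.c.y, dz = p.z - m.c.z;
        float r2 = (dx * dx + dy * dy + dz * dz) / (m.R * m.R);
        if (r2 < 1.0f) {                // compact support: no effect beyond R
            float w = 1.0f - r2;
            sum += w * w * w;           // smooth falloff kernel
        }
    }
    return sum;
}

// March along o + t*d; return true and the hit parameter when the field
// crosses the iso value, refined by bisection.
bool intersectIsosurface(const std::vector<Metaball>& balls, Vec3 o, Vec3 d,
                         float tMax, float iso, float step, float* tHit) {
    float prevT = 0.0f, prevF = density(balls, o) - iso;
    for (float t = step; t <= tMax; t += step) {
        Vec3 p{o.x + t * d.x, o.y + t * d.y, o.z + t * d.z};
        float f = density(balls, p) - iso;
        if (prevF < 0.0f && f >= 0.0f) {         // sign change: bracket found
            float a = prevT, b = t;
            for (int i = 0; i < 16; ++i) {       // bisection refinement
                float mid = 0.5f * (a + b);
                Vec3 q{o.x + mid * d.x, o.y + mid * d.y, o.z + mid * d.z};
                (density(balls, q) - iso < 0.0f ? a : b) = mid;
            }
            *tHit = 0.5f * (a + b);
            return true;
        }
        prevT = t; prevF = f;
    }
    return false;
}
```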

15.
We propose a method for creating a bounding volume hierarchy (BVH) that is optimized for all frames of a given animated scene. The method is based on a novel extension of the surface area heuristic to the temporal domain (T-SAH). We perform iterative BVH optimization using T-SAH and create a single BVH accounting for scene geometry distribution at different frames of the animation. Having a single optimized BVH for the whole animation makes our method extremely easy to integrate into any application using BVHs, limiting the per-frame overhead to refitting the bounding volumes. We evaluated the T-SAH-optimized BVHs in the context of real-time GPU ray tracing. We demonstrate that our method can handle even highly complex inputs with large deformations and significant topology changes. The results show that in the vast majority of tested scenes our method provides significantly better run-time performance than traditional SAH, and also better performance than GPU-based per-frame BVH rebuild.
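Read from the abstract, and hedged accordingly (the paper may weight frames differently), a natural form of such a temporal cost is the standard SAH node cost with surface areas evaluated per animation frame and combined over all F frames:

```latex
% Hedged sketch of a temporally extended SAH node cost; SA_f(.) is the surface
% area of a bounding volume at frame f, n_c the primitive count under child c.
C_{\mathrm{T\text{-}SAH}}(N) = \frac{1}{F}\sum_{f=1}^{F}
  \left[ C_{\mathrm{trav}}
  + C_{\mathrm{isect}} \sum_{c \in \mathrm{children}(N)}
    \frac{SA_f(c)}{SA_f(N)}\, n_c \right]
```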

16.
Ray-traced global illumination (GI) is becoming widespread in production rendering, but incoherent secondary ray traversal limits practical rendering to scenes that fit in memory. Incoherent shading also leads to intractable performance with production-scale textures, forcing renderers to resort to caching of irradiance, radiosity, and other values to amortize expensive shading. Unfortunately, such caching strategies complicate artist workflow, are difficult to parallelize effectively, and contend for precious memory. Worse, these caches involve approximations that compromise quality. In this paper, we introduce a novel path-tracing framework that avoids these tradeoffs. We sort large, potentially out-of-core ray batches to ensure coherence of ray traversal. We then defer shading of ray hits until we have sorted them, achieving perfectly coherent shading and avoiding the need for shading caches.

17.
This paper deals with the problem of finding valid solutions to systems of polynomial constraints. Although there have been several quite successful algorithms based on domain subdivision to resolve this problem, some major issues still demand further research. The prime obstacles in developing an efficient subdivision-based polynomial constraint solver are the exhaustive, although hierarchical, search of the zero-set in the parameter domain, which is computationally demanding, and limited scalability in terms of the number of variables. In this paper, we present a hybrid parallel algorithm for solving systems of multivariate constraints by exploiting both the CPU and the GPU multicore architectures. We dedicate the CPU to the traversal of the subdivision tree and the GPU to the multivariate polynomial subdivision. By decomposing the constraint solving technique into two different components, hierarchy traversal and polynomial subdivision, which are better suited to the CPU and the GPU, respectively, our solver can fully exploit the availability of hybrid, multicore architectures of CPUs and GPUs. Furthermore, our GPU-based subdivision method takes advantage of the inherent parallelism in the multivariate polynomial subdivision. We demonstrate the efficacy and scalability of the proposed parallel solver through several examples in geometric applications, including Hausdorff distance queries, contact point computations, surface–surface intersections, ray trap constructions, and bisector surface computations. In our experiments, the proposed parallel method achieves up to two orders of magnitude improvement in performance compared to the state-of-the-art subdivision-based CPU solver.
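A one-variable illustration of the subdivision principle such solvers build on (a sketch of the general technique, not the paper's algorithm): a polynomial in Bernstein form cannot vanish on a subdomain whose Bernstein coefficients all share a strict sign, so those subdomains are pruned and the remaining ones are split (de Casteljau) until they are smaller than a tolerance. The paper applies this idea hierarchically to multivariate systems and offloads the subdivision step to the GPU.

```cpp
#include <utility>
#include <vector>

// Split Bernstein coefficients at the parameter midpoint; 'left' and 'right'
// receive the coefficients of the two half-intervals (de Casteljau scheme).
static void deCasteljauSplit(std::vector<double> c,
                             std::vector<double>& left,
                             std::vector<double>& right) {
    size_t n = c.size();
    left.resize(n);
    right.resize(n);
    for (size_t k = 0; k < n; ++k) {
        left[k] = c[0];
        right[n - 1 - k] = c[n - 1 - k];
        for (size_t i = 0; i + 1 < n - k; ++i) c[i] = 0.5 * (c[i] + c[i + 1]);
    }
}

// Collect subintervals of [lo, hi] that may contain a zero, down to width 'tol'.
static void isolateRoots(const std::vector<double>& coeffs, double lo, double hi,
                         double tol, std::vector<std::pair<double, double>>& out) {
    bool allPos = true, allNeg = true;
    for (double c : coeffs) { allPos = allPos && c > 0.0; allNeg = allNeg && c < 0.0; }
    if (allPos || allNeg) return;                   // convex-hull property: prune
    if (hi - lo < tol) { out.push_back({lo, hi}); return; }
    std::vector<double> left, right;
    deCasteljauSplit(coeffs, left, right);
    double mid = 0.5 * (lo + hi);
    isolateRoots(left, lo, mid, tol, out);
    isolateRoots(right, mid, hi, tol, out);
}
```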

18.
Image-based rendering techniques are a powerful alternative to traditional polygon-based computer graphics. This paper presents a novel light field rendering technique which performs per-pixel depth correction of rays for high-quality reconstruction. Our technique stores combined RGB and depth values in a parabolic 2D texture for every light field sample acquired at discrete positions on a uniform spherical setup. Image synthesis is implemented on the GPU as a fragment program which extracts the correct image information from adjacent cameras for each fragment by applying per-pixel depth correction of rays. We show that the presented image-based rendering technique provides a significant improvement compared to previous approaches. We explain two different rendering implementations which make use of a uniform parametrisation to minimise disparity problems and ensure full six degrees of freedom for virtual view synthesis. While one rendering algorithm implements an iterative refinement approach for rendering light fields with per-pixel depth correction, the other approach employs a raycaster, which provides superior rendering quality at moderate frame rates. GPU-based per-fragment depth correction of rays, used in both implementations, helps reduce ghosting artifacts to an imperceptible level and yields a rendering technique that requires neither exhaustive pre-processing for 3D object reconstruction nor real-time ray-object intersection calculations at rendering time.

19.
We present a novel framework for efficiently computing the indirect illumination in diffuse and moderately glossy scenes using density estimation techniques. Many existing global illumination approaches either quickly compute an overly approximate solution or require orders of magnitude more computation to obtain high-quality results for the indirect illumination. The proposed method improves photon density estimation and leads to significantly better visual quality, in particular for complex geometry, while only slightly increasing the computation time. We perform direct splatting of photon rays, which allows us to use simpler search data structures. Since our density estimation is carried out in ray space rather than on surfaces, as in the commonly used photon mapping algorithm, the results are more robust against geometrically incurred sources of bias. This also holds in combination with final gathering, where photon mapping often overestimates the illumination near concave geometric features. In addition, we show that our photon splatting technique can be extended to handle moderately glossy surfaces and can be combined with traditional irradiance caching for sparse sampling and filtering in image space.
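For readers unfamiliar with density estimation in this context, the conventional surface-based photon-mapping radiance estimate is reproduced below as background (a textbook formula, not the paper's contribution; the paper instead performs the estimate in ray space by splatting photon rays):

```latex
% Standard photon-mapping density estimate at surface point x: the n nearest
% photons within a disc of radius r contribute their flux Delta Phi_p weighted
% by the BRDF f_r.
L_r(x, \omega) \approx \sum_{p=1}^{n}
  f_r(x, \omega_p, \omega)\, \frac{\Delta\Phi_p}{\pi r^2}
```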

20.
Beam tracing combines the flexibility of ray tracing and the speed of polygon rasterization. However, beam tracing so far only handles linear transformations; thus, it is only applicable to linear effects such as planar mirror reflections but not to non-linear effects such as curved mirror reflection, refraction, caustics and shadows. In this paper, we introduce non-linear beam tracing to render these non-linear effects. Non-linear beam tracing is highly challenging because commodity graphics hardware supports only linear vertex transformation and triangle rasterization. We overcome this difficulty by designing a non-linear graphics pipeline and implementing it on top of a commodity GPU. This allows beams to be non-linear, where rays within the same beam do not have to be parallel or intersect at a single point. Using these non-linear beams, real-time GPU applications can render secondary rays via polygon streaming, similar to how they render primary rays. A major strength of this methodology is that it naturally supports fully dynamic scenes without the need to pre-store a scene database. Utilizing our approach, non-linear ray tracing effects can be rendered in real time on a commodity GPU under a unified framework.
