Similar Literature
20 similar documents found.
1.
This paper proposes an adaptive rendering technique for ray-bundle tracing. Ray-bundle tracing can be performed by per-pixel linked-list construction on a GPU rasterization pipeline. This rasterization-based approach offers significant benefits for the efficient generation of light maps (e.g., hardware acceleration, tessellation, and recycling of shaders used in real-time graphics). However, it is inapplicable to large and complex scenes, because the high-resolution frame buffer and high-capacity node buffer it requires for the linked lists exceed the limited capacity of GPU memory. In addition, the per-pixel linked lists can overflow, since their memory usage is usually unknown before the rendering process. We introduce an adaptive tiling technique with memory-usage prediction. Our method uses an appropriately tiled frame buffer, and our adaptive tile subdivision scheme eliminates almost all overflow risk. Using this technique, we are able to render high-quality light maps of large and complex scenes that cannot be computed with previous ray-bundle based methods.
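The tiling scheme lends itself to a simple recursive formulation. The following C++ sketch splits a tile whenever its predicted linked-list footprint exceeds the node-buffer budget; `predictNodes` is a hypothetical stand-in for the paper's memory-usage predictor, not part of the published method.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// A screen-space tile of the ray-bundle frame buffer.
struct Tile { int x, y, w, h; };

// Recursively subdivide tiles until the predicted linked-list node count of
// each leaf fits the node-buffer budget.
std::vector<Tile> adaptiveTiling(Tile root, std::size_t nodeBudget,
                                 const std::function<std::size_t(const Tile&)>& predictNodes)
{
    std::vector<Tile> leaves;
    std::vector<Tile> stack{root};
    while (!stack.empty()) {
        Tile t = stack.back();
        stack.pop_back();
        // A tile that fits the budget (or cannot be split further) is kept.
        if (predictNodes(t) <= nodeBudget || (t.w <= 1 && t.h <= 1)) {
            leaves.push_back(t);
            continue;
        }
        // Otherwise split along the longer axis and re-test both halves.
        if (t.w >= t.h) {
            stack.push_back({t.x,           t.y, t.w / 2,       t.h});
            stack.push_back({t.x + t.w / 2, t.y, t.w - t.w / 2, t.h});
        } else {
            stack.push_back({t.x, t.y,           t.w, t.h / 2});
            stack.push_back({t.x, t.y + t.h / 2, t.w, t.h - t.h / 2});
        }
    }
    return leaves;
}
```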

2.
With ever-increasing display resolution for wide field-of-view displays (such as head-mounted displays or 8k projectors), shading has become the major computational cost in rasterization. To reduce this effort, we propose an algorithm that shades only the visible features of the image while cost-effectively interpolating the remaining ones without affecting perceived quality. In contrast to previous approaches, we not only simulate acuity falloff but also introduce a sampling scheme that incorporates multiple aspects of the human visual system: acuity, eye motion, contrast (stemming from geometry, material, or lighting properties), and brightness adaptation. Our sampling scheme is incorporated into a deferred shading pipeline to shade the image's perceptually relevant fragments, while a pull-push algorithm interpolates the radiance for the rest of the image. Our approach does not impose any restrictions on the shading performed. We conduct a number of psycho-visual experiments to validate the scene- and task-independence of our approach. The number of fragments that need to be shaded is reduced by 50% to 80%. Our algorithm scales favorably with increasing resolution and field of view, making it well suited for head-mounted displays and wide-field-of-view projection.
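Pull-push is the reconstruction half of such a pipeline: sparse shaded fragments are averaged down into a coarser level (pull), and holes in the finer level are then filled from the coarser one (push). Below is a minimal single-channel C++ sketch, assuming power-of-two dimensions and a nearest-neighbour push; the actual algorithm may use smoother interpolation kernels. A full reconstruction repeats pull down to a single texel, then pushes back up to the finest level.

```cpp
#include <vector>

// One colour channel with a validity weight (w = 1: shaded, w = 0: hole).
struct Texel { float c; float w; };
using Image = std::vector<std::vector<Texel>>;

// Pull: build a coarser level by averaging the valid texels of each 2x2 block.
Image pull(const Image& fine) {
    int h = static_cast<int>(fine.size()) / 2;
    int w = static_cast<int>(fine[0].size()) / 2;
    Image coarse(h, std::vector<Texel>(w, {0.f, 0.f}));
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float c = 0.f, wt = 0.f;
            for (int dy = 0; dy < 2; ++dy)
                for (int dx = 0; dx < 2; ++dx) {
                    const Texel& t = fine[2 * y + dy][2 * x + dx];
                    c += t.w * t.c;
                    wt += t.w;
                }
            coarse[y][x] = {wt > 0.f ? c / wt : 0.f, wt > 0.f ? 1.f : 0.f};
        }
    return coarse;
}

// Push: fill the holes of the fine level from the coarser level.
void push(Image& fine, const Image& coarse) {
    for (std::size_t y = 0; y < fine.size(); ++y)
        for (std::size_t x = 0; x < fine[0].size(); ++x)
            if (fine[y][x].w == 0.f)
                fine[y][x] = {coarse[y / 2][x / 2].c, 1.f};
}
```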

3.
Anti-aliasing has recently been employed as a post-processing step to remain compatible with the deferred shading technique in real-time applications. Some existing algorithms store supersampled geometric information in a geometry buffer (G-buffer) to detect and alleviate sub-pixel aliasing artifacts. However, an anti-aliasing filter based only on sampled sub-pixel geometry may introduce unfaithful shading information into the sub-pixel color in uniform-geometry regions, and a large G-buffer increases memory storage and fetch overheads. In this paper, we present a new Triangle-based Geometry Anti-Aliasing (TGAA) algorithm to address these problems. The covering triangle of each screen pixel is accessed, and the coverage information between the triangle and neighboring sub-pixels is stored in a screen-resolution bitmask, which allows the geometric information to be stored and accessed inexpensively. Using triangle-based geometry, TGAA can exclude irrelevant neighboring shading samples and achieve faithful anti-aliasing filtering. In addition, a morphological method for estimating geometric edges in high-frequency geometry is incorporated into TGAA's anti-aliasing filter to complement the algorithm. The implementation results demonstrate that the algorithm is efficient and scalable for generating high-quality anti-aliased images.
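A screen-resolution coverage bitmask of this kind can be built with standard edge functions: each bit records whether one sub-pixel sample falls inside the pixel's covering triangle. A minimal C++ sketch with a hypothetical 8x4 sample layout; the paper's exact sample distribution and bitmask layout may differ.

```cpp
#include <cstdint>

struct Vec2 { float x, y; };

// Signed area of edge (a -> b) against point p; positive means "inside"
// for a counter-clockwise triangle.
static float edge(const Vec2& a, const Vec2& b, const Vec2& p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Pack the coverage of an 8x4 grid of sub-pixel samples against triangle
// (v0, v1, v2) into a 32-bit mask for pixel (px, py). One bit per sample.
uint32_t coverageMask(Vec2 v0, Vec2 v1, Vec2 v2, float px, float py) {
    uint32_t mask = 0;
    for (int i = 0; i < 32; ++i) {
        // Sample positions on a regular grid inside the pixel.
        Vec2 p{px + (i % 8 + 0.5f) / 8.f, py + (i / 8 + 0.5f) / 4.f};
        bool inside = edge(v0, v1, p) >= 0.f &&
                      edge(v1, v2, p) >= 0.f &&
                      edge(v2, v0, p) >= 0.f;
        if (inside) mask |= 1u << i;
    }
    return mask;
}
```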

4.
When rendering effects such as motion blur and defocus blur, shading can become very expensive if done naïvely, i.e., by shading each visibility sample. To improve performance, previous work often decouples shading from visibility sampling using shader caching algorithms. We present a novel technique for reusing shading in a stochastic rasterizer. Shading is computed hierarchically and sparsely in an object-space texture, and by selecting an appropriate mipmap level for each triangle, we ensure that the shading rate is sufficiently high that no noticeable blurring is introduced into the rendered image. Furthermore, with a two-pass algorithm, we separate shading from reuse and thus avoid GPU thread synchronization. Our method runs at real-time frame rates and is up to 3x faster than previous methods. This is an important step forward for stochastic rasterization in real time.
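The core of the shading-rate control is the mipmap level selection: pick the coarsest level of the object-space shading texture whose texel density still matches the screen-space sampling density. A minimal sketch, assuming `texelsPerPixel` is the ratio of base-level texels covered by a triangle to the visibility samples it receives (a hypothetical input; the paper derives its own per-triangle estimate):

```cpp
#include <algorithm>
#include <cmath>

// Pick the mip level of the object-space shading texture so that roughly one
// shaded texel maps to one screen-space visibility sample. Level 0 is the
// finest level; each coarser level quarters the texel count.
int shadingMipLevel(float texelsPerPixel, int maxLevel) {
    // log4 of the area ratio == 0.5 * log2; clamp so we never go below the
    // screen sampling rate (which would introduce visible blur).
    float level = 0.5f * std::log2(std::max(texelsPerPixel, 1.0f));
    return std::min(static_cast<int>(level), maxLevel);
}
```

Truncating (rather than rounding) the level errs on the side of a finer shading rate, which trades a little extra shading work for the guarantee that no blurring is visible.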

5.
We introduce a screen-space statistical filtering method for real-time rendering with global illumination. It is inspired by the statistical filtering proposed by Meyer et al., which reduces noise in global illumination over time by estimating the principal components of all rendered frames. Our work extends their method to achieve nearly real-time performance on modern GPUs. More specifically, our method employs candid covariance-free incremental PCA to overcome several limitations of the original algorithm by Meyer et al., such as the high computational cost and memory usage that hinder its implementation on GPUs. By combining reprojection and per-pixel weighting techniques, our method also handles view changes and object movement in dynamic scenes.
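Candid covariance-free incremental PCA (CCIPCA, Weng et al. 2003) updates each principal component directly from the data stream, with no covariance matrix, which is what makes a GPU implementation feasible. A minimal CPU sketch of one update step, assuming mean-subtracted frame vectors; the amnesic parameter `l` down-weights old frames, and `n` should exceed `l + 1` before the update weights become meaningful.

```cpp
#include <cmath>
#include <vector>

using Vec = std::vector<float>;

static float dot(const Vec& a, const Vec& b) {
    float s = 0.f;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

// One CCIPCA step: refine the first k principal components v[0..k-1] with a
// new observation x (a mean-subtracted frame vector) at time step n.
void ccipcaUpdate(std::vector<Vec>& v, Vec x, int n, float l = 2.f) {
    for (std::size_t i = 0; i < v.size(); ++i) {
        float norm = std::sqrt(dot(v[i], v[i]));
        if (norm == 0.f) {
            v[i] = x;  // bootstrap the component with the current residual
        } else {
            // Blend the old estimate with the new sample's contribution;
            // no covariance matrix is ever formed.
            float proj = dot(x, v[i]) / norm;
            float a = (n - 1 - l) / n, b = (1 + l) / n;
            for (std::size_t j = 0; j < x.size(); ++j)
                v[i][j] = a * v[i][j] + b * proj * x[j];
        }
        // Deflate: remove this component's direction from x before the next.
        float n2 = dot(v[i], v[i]);
        if (n2 == 0.f) break;  // nothing left to learn from this sample
        float coeff = dot(x, v[i]) / n2;
        for (std::size_t j = 0; j < x.size(); ++j)
            x[j] -= coeff * v[i][j];
    }
}
```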

6.
Adaptive Caustic Maps Using Deferred Shading
Caustic maps provide an interactive image-space method to render caustics, the focusing of light via reflection and refraction. Unfortunately, caustic mapping suffers from problems similar to shadow mapping: aliasing from poor sampling and map projection, as well as temporal incoherence from frame-to-frame sampling variations. To reduce these problems, researchers have suggested methods ranging from caustic blurring to building a multiresolution caustic map. Yet these all require a fixed photon sampling, precluding the use of importance-based photon densities. This paper introduces adaptive caustic maps. Instead of densely sampling photons via a rasterization pass, we adaptively emit photons using a deferred shading pass. We describe deferred rendering for refractive surfaces, which speeds up rendering of refractive geometry by up to 25% and, with adaptive sampling, speeds up caustic rendering by up to 200%. These benefits are particularly noticeable for complex geometry or when using millions of photons. While developed for a GPU rasterizer, adaptive caustic map creation can be performed by any renderer that traces photons individually, e.g., a GPU ray tracer.

7.
We present a fast reconstruction filtering method for images generated with Monte Carlo-based rendering techniques. Our approach specializes in reducing global illumination noise in the presence of depth-of-field effects at very low sampling rates and interactive frame rates. We employ edge-aware filtering in the sample space to locally improve the outgoing radiance of each sample. The improved samples are then distributed in the image plane using a fast, linear, manifold-based approach that supports very large circles of confusion. We evaluate our filter by applying it to several images containing noise caused by Monte Carlo-simulated global illumination, area light sources, and depth of field. We show that our filter can efficiently denoise such images at interactive frame rates on current GPUs with as few as 4-16 samples per pixel. Our method operates only on the colour and geometric sample information output by the initial rendering process. It makes no assumptions about the underlying rendering technique and sampling strategy, and can therefore be implemented entirely as a post-process filter.

8.
Level-of-Detail (LoD) structures are a key component of scalable rendering. Built from raw 3D data, these structures are often defined as Bounding Volume Hierarchies, providing coarse-to-fine adaptive approximations that are well suited to many-view rasterization. Here, the total number of pixels in each view is usually low, while the cost of choosing the appropriate LoD for each view is high. This task represents a challenge for existing GPU algorithms. We propose ManyLoDs, a new GPU algorithm that efficiently computes many LoDs from a Bounding Volume Hierarchy in parallel by balancing the workload within and among LoDs. Our approach is not specific to a particular rendering technique, can be used on lazy representations such as polygon soups, and can handle dynamic scenes. We apply our method to various many-view rasterization applications, including Instant Radiosity, Point-Based Global Illumination, and reflection/refraction mapping. For each of these, we achieve real-time performance in complex scenes at high resolutions.

9.
The incident indirect light over a range of image pixels is often coherent. Two common approaches that exploit this inter-pixel coherence to improve rendering performance are Irradiance Caching and Radiance Caching. Both compute incident indirect light only for a small subset of pixels (the cache) and later interpolate between pixels. Irradiance Caching uses scalar values that can be interpolated efficiently, but cannot account for shading variations caused by normal and reflectance variation between cache items. Radiance Caching maintains directional information, e.g., to allow highlights between cache items, but at the cost of storing and evaluating a Spherical Harmonics (SH) function per pixel. The arithmetic and bandwidth cost of this evaluation is linear in the number of coefficients and can be substantial. In this paper, we propose a method to replace it with an efficient per-cache-item pre-filtering based on MIP maps, as previously done for environment maps, leading to a single constant-time lookup per pixel. Additionally, per-cache-item geometry statistics stored in distance MIP maps are used to improve the quality of each pixel's lookup. Our approximate interactive global illumination approach is an order of magnitude faster than Radiance Caching with Phong BRDFs and can be combined with Monte Carlo ray tracing, Point-based Global Illumination, or Instant Radiosity.
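The constant-time lookup replaces the per-pixel SH evaluation with a single fetch from a pre-filtered MIP pyramid, where wider BRDF lobes select coarser (more blurred) levels, in the spirit of pre-filtered environment maps. The exponent-to-level mapping below is a common heuristic, not the paper's exact formula:

```cpp
#include <algorithm>
#include <cmath>

// Map a Phong lobe to a level of the pre-filtered radiance MIP pyramid.
// A high exponent means a narrow lobe -> fine (sharp) level; an exponent
// near 1 is almost diffuse -> coarsest (most blurred) level. The returned
// fractional level can be used for trilinear filtering between levels.
float mipLevelForLobe(float phongExponent, int maxLevel) {
    float level = maxLevel - 0.5f * std::log2(std::max(phongExponent, 1.0f));
    return std::clamp(level, 0.0f, static_cast<float>(maxLevel));
}
```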

10.
This paper aims at rendering, in real time, the interactive visual effects inherent to the complex interactions between trees and rain, in order to increase the realism of natural rainy scenes. Such a complex phenomenon involves a great number of physical processes influenced by various interlinked factors, and rendering it represents a thorough challenge in Computer Graphics. We approach this problem by introducing an original method to render drops dripping from leaves after raindrops are intercepted by foliage. Our method introduces a new hydrological model representing the interactions between rain and foliage through a phenomenological approach. Our model reduces the complexity of the phenomenon by representing multiple dripping drops with a new fully functional form that is evaluated per pixel on the fly and provides improved control over density and physical properties. Furthermore, an efficient real-time rendering scheme, taking full advantage of the latest GPU hardware capabilities, allows a large number of dripping drops to be rendered even in complex scenes.

11.
Ambient occlusion is a cheap but effective approximation of global illumination. Recently, screen-space ambient occlusion (SSAO) methods, which sample the frame buffer as a discretization of the scene geometry, have become very popular for real-time rendering. We present temporal SSAO (TSSAO), a new algorithm that exploits temporal coherence to produce high-quality ambient occlusion in real time. Compared to conventional SSAO, our method reduces both noise and the blurring artefacts caused by strong spatial filtering, faithfully representing fine-grained geometric structures. Our algorithm caches and reuses previously computed SSAO samples, and adaptively applies more samples and spatial filtering only in regions that do not yet have enough information available from previous frames. The method works well for both static and dynamic scenes.
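Temporal reuse of this kind typically amounts to a running average with an invalidation test. A minimal sketch, assuming a boolean reprojection-validity flag per pixel; names and the clamping policy are illustrative, not the paper's exact code:

```cpp
#include <algorithm>

// Per-pixel AO history carried between frames.
struct AOHistory { float ao; float sampleCount; };

// Blend the current frame's noisy AO estimate with the reprojected history.
// When reprojection fails (disocclusion, moving object), the history is
// discarded and accumulation starts over with full spatial filtering.
AOHistory accumulateAO(float aoNew, int newSamples, AOHistory prev,
                       bool reprojectionValid, float maxSamples = 64.f)
{
    if (!reprojectionValid)
        return {aoNew, static_cast<float>(newSamples)};
    // Running average; the sample count is clamped so stale history cannot
    // dominate forever in slowly changing scenes.
    float n = std::min(prev.sampleCount + newSamples, maxSamples);
    float w = newSamples / n;
    return {prev.ao * (1.f - w) + aoNew * w, n};
}
```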

12.
We present an efficient and scalable system that enables programmable motion effects on GPUs. Our system is based on the framework proposed by Schmid et al. [SSBG10], which extends the concept of a surface shader to that of a programmable motion effect. While capable of expressing a variety of motion depiction styles, the execution of motion effect programs requires global knowledge about all portions of an object's surface that pass in front of a pixel during an arbitrarily long period of time. This results in extremely high memory usage and significantly restricts the degree of parallelism of typical GPU rendering algorithms, which parallelize computations over the pixels of each frame of an animation. To address this problem, we design our system to process multiple frames of a pixel in parallel. This new parallelization approach enables better utilization of GPU memory and also makes it possible to design the efficient out-of-core algorithm required for rendering real-world animations. We also develop an analytical visibility algorithm to resolve depth conflicts between objects, reducing the required temporal resampling rate and further exposing parallelism. Experiments show that we are able to handle very large scenes and improve runtime performance by up to an order of magnitude.

13.
We propose a method for creating a bounding volume hierarchy (BVH) that is optimized for all frames of a given animated scene. The method is based on a novel extension of the surface area heuristic to the temporal domain (T-SAH). We perform iterative BVH optimization using T-SAH and create a single BVH that accounts for the scene geometry distribution at different frames of the animation. Having a single optimized BVH for the whole animation makes our method extremely easy to integrate into any application using BVHs, limiting the per-frame overhead to refitting the bounding volumes. We evaluated the T-SAH-optimized BVHs in the scope of real-time GPU ray tracing. We demonstrate that our method can handle even highly complex inputs with large deformations and significant topology changes. The results show that in the vast majority of tested scenes our method provides significantly better run-time performance than traditional SAH, and also better performance than GPU-based per-frame BVH rebuilds.
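The classic SAH cost of a split is C = C_trav + (SA(L)/SA(P)) * N_L * C_isect + (SA(R)/SA(P)) * N_R * C_isect. T-SAH, as described above, evaluates the surface-area terms at several representative frames of the animation and combines them, so the single BVH is good across all frames. A minimal sketch with uniform frame weights (the paper may weight frames differently):

```cpp
#include <vector>

// Time-averaged SAH cost of splitting a node P into children L and R.
// Each input vector holds the surface area of the corresponding bounding
// volume at one representative frame of the animation.
float tsahCost(const std::vector<float>& saParent,  // SA(P) per frame
               const std::vector<float>& saLeft,    // SA(L) per frame
               const std::vector<float>& saRight,   // SA(R) per frame
               int numLeft, int numRight,
               float cTrav = 1.f, float cIsect = 2.f)
{
    float cost = 0.f;
    for (std::size_t f = 0; f < saParent.size(); ++f)
        cost += cTrav + cIsect * (saLeft[f]  / saParent[f] * numLeft +
                                  saRight[f] / saParent[f] * numRight);
    return cost / saParent.size();  // uniform average over frames
}
```

Minimizing this cost during construction penalizes splits that are good in one frame but poor in another, which is exactly why one BVH can serve the whole animation.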

14.
Screen-space ambient occlusion and obscurance have become established methods for rendering global illumination effects in real-time applications. While they have seen a steady line of refinements, their computational complexity has remained largely unchanged, and either undersampling artefacts or excessive render times limit their scalability. In this paper we show how the fundamentally quadratic per-pixel complexity of previous work can be reduced to linear complexity. We solve obscurance in discrete azimuthal directions by performing line sweeps across the depth buffer in each direction. Our method builds on the insight that scene points along each line can be incrementally inserted into a data structure such that querying for the largest occluder among the visited samples along the line can be achieved at amortized constant cost. The obscurance radius therefore has no impact on the execution time, and our method produces accurate results with smooth occlusion gradients in a few milliseconds per frame on commodity hardware.
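The amortized-constant query relies on a convex-hull-style structure: the largest occluder of the current sample is always a vertex of the upper convex hull of the samples already visited along the line. A simplified C++ sketch of one sweep; the paper's variant additionally prunes during the query walk, which is what yields the amortized O(1) bound.

```cpp
#include <cmath>
#include <vector>

// One sample along a sweep line: distance along the line and eye-space height.
struct Sample { float x; float h; };

static float cross2(const Sample& o, const Sample& a, const Sample& b) {
    return (a.x - o.x) * (b.h - o.h) - (a.h - o.h) * (b.x - o.x);
}

// For every sample along one azimuthal line, find the maximum elevation
// (horizon) angle toward the previously visited samples, i.e., the dominant
// occluder. Previously visited samples are kept as an upper convex hull.
std::vector<float> sweepHorizonAngles(const std::vector<Sample>& line) {
    std::vector<float> angle(line.size(), -1e9f);  // -1e9 == no occluder yet
    std::vector<int> hull;                         // upper-hull vertex indices
    for (int i = 0; i < static_cast<int>(line.size()); ++i) {
        // Query: walk inward along the hull while the elevation improves;
        // the angle is unimodal over the hull vertices, so we can stop early.
        float best = -1e9f;
        for (int k = static_cast<int>(hull.size()) - 1; k >= 0; --k) {
            const Sample& s = line[hull[k]];
            float a = std::atan2(s.h - line[i].h, line[i].x - s.x);
            if (a < best) break;
            best = a;
        }
        angle[i] = best;
        // Insert: pop vertices that fall under the hull with the new sample.
        while (hull.size() >= 2 &&
               cross2(line[hull[hull.size() - 2]], line[hull.back()], line[i]) >= 0.f)
            hull.pop_back();
        hull.push_back(i);
    }
    return angle;
}
```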

15.
Visual formats have advanced beyond single-view images and videos: 3D movies are commonplace, researchers have developed multi-view navigation systems, and VR is helping to push light field cameras to the mass market. However, editing tools for these media are still nascent, and even simple filtering operations like color correction or stylization are problematic: naively applying image filters per frame or per view rarely produces satisfying results due to temporal and spatial inconsistencies. Our method preserves and stabilizes filter effects while being agnostic to the inner workings of the filter. It captures filter effects in the gradient domain, then uses input frame gradients as a reference to impose temporal and spatial consistency. Our least-squares formulation adds minimal overhead compared to naive data processing. Further, when filter cost is high, we introduce a filter transfer strategy that reduces the number of per-frame filtering computations by an order of magnitude, with only a small reduction in visual quality. We demonstrate our algorithm on several camera array formats including stereo videos, light fields, and wide baselines.

16.
We present an efficient algorithm for object-space proximity queries between multiple deformable triangular meshes. Our approach uses the rasterization capabilities of the GPU to produce an image-space representation of the vertices. Using this representation, inter-object vertex-triangle distances and closest points lying under a user-defined threshold are computed in parallel by conservative rasterization of bounding primitives and sorted using atomic operations. We additionally introduce a similar technique to detect penetrating vertices. We show how mechanisms of modern GPUs, such as mipmapping, Early-Z, and Early-Stencil culling, can optimize the performance of our method. Our algorithm is able to compute dense proximity information for complex scenes of more than a hundred thousand triangles in real time, outperforming a CPU implementation based on bounding volume hierarchies by more than an order of magnitude.

17.
We introduce a method to dynamically construct highly concurrent linked lists on modern graphics processors. Once constructed, these data structures can be used to implement a host of algorithms useful for creating complex rendering effects in real time. We present a straightforward way to create these linked lists using the generic atomic operations available in APIs such as OpenGL 4.0 and DirectX 11. We also describe several possible applications of our algorithm. One example uses per-pixel linked lists for order-independent transparency; as a consequence, we are able to directly implement fully programmable blending, which frees developers from the restrictions imposed by current graphics APIs. A second uses linked lists to implement real-time indirect shadows.
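The construction pattern is compact: an atomic counter allocates nodes from a pre-sized pool, and an atomic exchange of the per-pixel head pointer links each new node in front of its list. A CPU analogue in C++ using std::atomic (illustrative, not the paper's shader code; a GPU version would use the corresponding OpenGL/DirectX atomics):

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <vector>

// One list node: fragment data plus the index of the next node (-1 = end).
struct Node { float depth; uint32_t rgba; int32_t next; };

struct PerPixelLists {
    std::vector<std::atomic<int32_t>> head;  // one head index per pixel
    std::vector<Node> nodes;                 // preallocated node pool
    std::atomic<int32_t> counter{0};

    PerPixelLists(std::size_t pixels, std::size_t maxNodes)
        : head(pixels), nodes(maxNodes) {
        for (auto& h : head) h.store(-1);    // all lists start empty
    }

    // Called once per fragment; safe from many threads concurrently.
    void insert(std::size_t pixel, float depth, uint32_t rgba) {
        int32_t i = counter.fetch_add(1);    // allocate a node atomically
        if (i >= static_cast<int32_t>(nodes.size())) return;  // pool exhausted
        nodes[i].depth = depth;
        nodes[i].rgba = rgba;
        // Atomically link the node in front of the pixel's current list.
        nodes[i].next = head[pixel].exchange(i);
    }
};
```

For order-independent transparency, a second resolve pass then walks each pixel's list, sorts the few fragments by depth, and blends them in order.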

18.
This paper introduces an accurate real-time soft shadow algorithm based on sample-based visibility. Initially, we present a GPU-based alias-free hard shadow map algorithm that typically requires only a single render pass from the light, in contrast to using depth peeling and one pass per layer. For closed objects, we also eliminate the need for a bias. The method is extended to soft shadow sampling for an arbitrarily shaped area or volumetric light source using 128-1024 light samples per screen pixel. The alias-free shadow map guarantees that visibility is accurately sampled per screen-space pixel, even for arbitrarily shaped (e.g., non-planar) surfaces or solid objects. Another contribution is a smooth, coherent shading model that avoids the light leakage near shadow borders commonly caused by normal interpolation.

19.
Decomposing an input image into its intrinsic shading and reflectance components is a long-standing, ill-posed problem. We present a novel algorithm that requires no user strokes and works on a single image. Based on simple assumptions about its reflectance and luminance, we first find clusters of similar reflectance in the image and build a linear system describing the connections and relations between them. Our assumptions are less restrictive than those of widely adopted Retinex-based approaches, and can be further relaxed in conflicting situations. The resulting system is robust even in the presence of areas where our assumptions do not hold. We show a wide variety of results, including natural images, objects from the MIT dataset, and texture images, along with several applications, demonstrating the versatility of our method.

20.
We propose a unified rendering approach that jointly handles motion and defocus blur for transparent and opaque objects at interactive frame rates. Our key idea is to create, in an initial rasterization step, a sampled representation of all parts of the scene geometry that are potentially visible at any point in time during a frame. We store the resulting temporally-varying fragments (t-fragments) in a bounding volume hierarchy that is rebuilt every frame using a fast spatial-median construction algorithm. This makes our approach suitable for interactive applications with dynamic scenes and animations. Next, we perform spatial sampling to determine all t-fragments that intersect a specific viewing ray at any point in time. Viewing rays are sampled according to the lens uv-sampling for depth-of-field effects. In a final temporal sampling step, we evaluate the predetermined viewing-ray/t-fragment intersections at one or multiple points in time. This allows us to incorporate all standard shading effects, including transparency. We describe the overall framework, present our GPU implementation, and evaluate our rendering approach with respect to scalability, quality, and performance.
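Once built, a t-fragment only needs to expose its state at a given time sample; with linear per-frame motion this is a single interpolation. A minimal sketch with illustrative field names (the paper's fragments carry more shading data):

```cpp
// A temporally-varying fragment: the rasterized geometry sample with its
// state at shutter open (t = 0) and shutter close (t = 1) of one frame.
struct TFragment {
    float depth0, depth1;  // view depth at t = 0 and t = 1
    float alpha;           // transparency of the surface
    float t0, t1;          // time span during which the fragment exists
};

// Evaluate the t-fragment at a temporal sample t in [0, 1]; returns false if
// the fragment does not exist at that time (no intersection to shade).
bool evalTFragment(const TFragment& f, float t, float& depthOut) {
    if (t < f.t0 || t > f.t1) return false;
    depthOut = f.depth0 + (f.depth1 - f.depth0) * t;  // linear motion in depth
    return true;
}
```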

