Similar Literature
20 similar documents found (search time: 31 ms)
1.
We present a fast reconstruction filtering method for images generated with Monte Carlo–based rendering techniques. Our approach specializes in reducing global illumination noise in the presence of depth-of-field effects at very low sampling rates and interactive frame rates. We employ edge-aware filtering in the sample space to locally improve the outgoing radiance of each sample. The improved samples are then distributed in the image plane using a fast, linear manifold-based approach supporting very large circles of confusion. We evaluate our filter by applying it to several images containing noise caused by Monte Carlo–simulated global illumination, area light sources and depth of field. We show that our filter can efficiently denoise such images at interactive frame rates on current GPUs with as few as 4–16 samples per pixel. Our method operates only on the colour and geometric sample information output by the initial rendering process. It makes no assumptions about the underlying rendering technique or sampling strategy and can therefore be implemented entirely as a post-process filter.
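The core mechanism here, averaging radiance across samples while stopping at geometric edges, is in the family of joint (cross) bilateral filters. Below is a minimal, single-threaded sketch of that weighting idea, assuming per-pixel depth and normal buffers; the function and parameter names are illustrative and not the paper's implementation:

```python
import numpy as np

def edge_aware_filter(color, depth, normal, radius=4,
                      sigma_s=2.0, sigma_d=0.05, sigma_n=0.2):
    """Joint bilateral filter: smooth noisy color, stop at depth/normal edges."""
    h, w, _ = color.shape
    out = np.zeros_like(color)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            dy, dx = np.mgrid[y0 - y:y1 - y, x0 - x:x1 - x]
            w_s = np.exp(-(dy**2 + dx**2) / (2 * sigma_s**2))          # spatial
            w_d = np.exp(-(depth[y0:y1, x0:x1] - depth[y, x])**2
                         / (2 * sigma_d**2))                           # depth edge-stop
            w_n = np.exp(-np.sum((normal[y0:y1, x0:x1] - normal[y, x])**2, axis=-1)
                         / (2 * sigma_n**2))                           # normal edge-stop
            wgt = (w_s * w_d * w_n)[..., None]
            out[y, x] = (wgt * color[y0:y1, x0:x1]).sum(axis=(0, 1)) / wgt.sum()
    return out
```

A GPU version evaluates the same weights per pixel in a compute or fragment shader; the depth and normal terms are what keep the filter from blurring across silhouettes.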

2.
Light field reconstruction algorithms can substantially decrease the noise in stochastically rendered images. Recent algorithms for defocus blur alone are both fast and accurate. However, motion blur is a considerably more complex type of camera effect, and as a consequence, current algorithms are either too slow or too imprecise to use in high-quality rendering. We extend previous work on real-time light field reconstruction for defocus blur to handle the case of simultaneous defocus and motion blur. By carefully introducing a few approximations, we derive a very efficient sheared reconstruction filter, which produces high-quality images even for a low number of input samples. Our algorithm is temporally robust and about two orders of magnitude faster than previous work, making it suitable both for real-time rendering and as a post-processing pass for offline rendering.

3.
Interactive computation of global illumination is a major challenge in current computer graphics research. Global illumination heavily affects the visual quality of generated images and is therefore a key attribute for the perception of photo-realistic images. Path tracing can simulate the physical behaviour of light using Monte Carlo techniques, but its computational burden prohibits interactive, high-quality rendering on standard commodity hardware. Evaluating the Monte Carlo integral with fewer samples produces characteristically noisy images. Global illumination filtering methods take advantage of the fact that the integrals for neighbouring pixels may be very similar: averaging samples of similar characteristics in screen space can approximate the correct integral, but may produce visible outliers. In this paper, we present a novel path tracing pipeline based on an edge-aware filtering method for the indirect illumination which produces visually more pleasing results without noticeable outliers. The key idea is not to filter the noisy path-traced image itself but to use it as guidance for filtering a second image composed of characteristic scene attributes that are noise-free by construction. We show that our approach approximates the Monte Carlo integral better than previous methods. Since the computation is carried out entirely in screen space, the approach is applicable to fully dynamic scenes and arbitrary lighting, and allows for high-quality path tracing at interactive frame rates on commodity hardware.
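One standard way to realize "filter one image under the guidance of another" is the guided filter of He et al.; the paper's own edge-aware filter differs in its details, so treat this single-channel sketch as a stand-in for the principle (uniform_filter supplies the box-window means):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, r=8, eps=1e-3):
    """He et al.-style guided filter (single channel): the output takes its
    content from `src` but inherits the edge structure of `guide` via a
    local linear model q = a * guide + b fitted in each window."""
    mean = lambda im: uniform_filter(im, size=2 * r + 1)
    mg, ms = mean(guide), mean(src)
    a = (mean(guide * src) - mg * ms) / (mean(guide * guide) - mg ** 2 + eps)
    b = ms - a * mg
    return mean(a) * guide + mean(b)
```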

4.
We propose a novel pre-filtering method that reduces the noise introduced by depth-of-field and motion blur effects in geometric buffers (G-buffers) such as texture, normal and depth images. Our pre-filtering uses world positions and their variances to effectively remove high-frequency noise while carefully preserving high-frequency edges in the G-buffers. We design a new anisotropic filter based on a per-pixel covariance matrix of world position samples. A general error estimator, Stein's unbiased risk estimator (SURE), is then applied to estimate the optimal trade-off between the bias and variance of the pre-filtered results. We demonstrate that our pre-filtering improves the results of existing filtering methods both numerically and visually for challenging scenes in which depth of field and motion blur introduce a significant amount of noise into the G-buffers.
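SURE makes the bias/variance trade-off selectable in practice because it estimates mean-squared error without access to the ground truth. Below is a generic Monte-Carlo SURE sketch in the style of Ramani et al., where the divergence term is estimated with a random probe; the Gaussian filter family and noise level are illustrative assumptions, not the paper's anisotropic setup:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mc_sure(y, denoise, sigma, eps=1e-3, seed=0):
    """Monte-Carlo SURE: unbiased estimate of the mean per-pixel MSE of a
    denoiser f applied to y = truth + N(0, sigma^2) noise, with the
    divergence term estimated by a single random probe."""
    rng = np.random.default_rng(seed)
    n = y.size
    f_y = denoise(y)
    b = rng.standard_normal(y.shape)                     # probe vector
    div = (b * (denoise(y + eps * b) - f_y)).sum() / eps
    return ((f_y - y) ** 2).sum() / n - sigma ** 2 + 2 * sigma ** 2 * div / n

# e.g. pick the filter bandwidth with the lowest estimated risk:
y = np.random.default_rng(1).normal(0.0, 0.1, (128, 128))
best_s = min((0.5, 1.0, 2.0, 4.0),
             key=lambda s: mc_sure(y, lambda im, s=s: gaussian_filter(im, s), 0.1))
```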

5.
This paper presents a GPU-based rendering algorithm for real-time defocus blur effects which significantly improves on accumulation buffering. The algorithm combines three distinctive techniques: (1) adaptive discrete geometric level of detail (LOD), made popping-free by blending visibility samples across the two adjacent geometric levels; (2) adaptive visibility/shading sampling via sample reuse; and (3) visibility supersampling via height-field ray casting. All three techniques are seamlessly integrated to lower the rendering cost of smooth defocus blur with high visibility sampling rates, while maintaining most of the quality of brute-force accumulation buffering.

6.
We present a novel multi-view, projective texture mapping technique. While previous multi-view texturing approaches lead to blurring and ghosting artefacts when the 3D geometry and/or camera calibration are imprecise, we propose a texturing algorithm that warps ("floats") projected textures at run time to preserve a crisp, detailed texture appearance. Our GPU implementation achieves interactive to real-time frame rates. The method is very generally applicable and can be combined with many image-based rendering methods or projective texturing applications. By using Floating Textures in conjunction with, e.g., visual hull rendering, light field rendering or free-viewpoint video, improved rendering results are obtained from fewer input images, less accurately calibrated cameras and coarser 3D geometry proxies.

7.
Image-based rendering techniques are a powerful alternative to traditional polygon-based computer graphics. This paper presents a novel light field rendering technique which performs per-pixel depth correction of rays for high-quality reconstruction. Our technique stores combined RGB and depth values in a parabolic 2D texture for every light field sample acquired at discrete positions on a uniform spherical setup. Image synthesis is implemented on the GPU as a fragment program which extracts the correct image information from adjacent cameras for each fragment by applying per-pixel depth correction of rays. We show that the presented image-based rendering technique provides a significant improvement over previous approaches. We describe two rendering implementations, both of which use a uniform parametrisation to minimise disparity problems and ensure full six degrees of freedom for virtual view synthesis. One algorithm implements an iterative refinement approach for rendering light fields with per-pixel depth correction; the other employs a raycaster, which provides superior rendering quality at moderate frame rates. GPU-based per-fragment depth correction of rays, used in both implementations, reduces ghosting artifacts to an unnoticeable level and yields a rendering technique that requires neither exhaustive pre-processing for 3D object reconstruction nor real-time ray-object intersection calculations at rendering time.

8.
Distribution effects such as diffuse global illumination, soft shadows and depth of field are most accurately rendered using Monte Carlo ray or path tracing. However, physically accurate algorithms can take hours to converge to a noise-free image. A recent body of work has begun to bridge this gap, showing that both individual and multiple effects can be achieved accurately and efficiently. These methods use sparse sampling, GPU ray tracers and adaptive filtering for reconstruction. They are based on a Fourier analysis which models distribution effects as a wedge in the frequency domain. The wedge can be approximated as a single large axis-aligned filter, which is fast but retains a large area outside the wedge and therefore requires a higher sampling rate, or as a tighter sheared filter, which is slow to compute. The state-of-the-art fast sheared filtering method combines a low sampling rate with efficient filtering, but has been demonstrated for individual distribution effects only and is limited by high-dimensional data storage and processing. We present a novel filter for efficient rendering of combined effects, involving soft shadows and depth of field with global (diffuse indirect) illumination. We approximate the wedge spectrum with multiple axis-aligned filters, marrying the speed of axis-aligned filtering with an even more accurate (compact and tighter) representation than sheared filtering. We demonstrate rendering of single effects at sampling rates and frame rates comparable to fast sheared filtering. Our main practical contribution is in rendering multiple distribution effects, which have not even been demonstrated accurately with sheared filtering; for this case, we obtain an average speedup of 6× over previous axis-aligned filtering methods.
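For readers unfamiliar with the wedge picture, here is a hedged summary of the standard light-field frequency analysis (after Chai et al. and Egan et al.; not this paper's exact derivation):

```latex
% A Lambertian point at depth z contributes spectral energy along a line
% through the origin whose slope s(z) depends on depth:
\[
  \Omega_u = s(z)\,\Omega_x .
\]
% A depth range z in [z_min, z_max] therefore confines the spectrum to the wedge
\[
  s_{\min} \;\le\; \Omega_u / \Omega_x \;\le\; s_{\max}.
\]
% A sheared filter hugs this wedge (tight but expensive to evaluate); a single
% axis-aligned filter is the wedge's bounding box (cheap but loose, so more
% samples are needed). Partitioning [s_min, s_max] into k slope bins and
% bounding each sub-wedge with its own axis-aligned box keeps axis-aligned
% speed while shrinking the over-covered spectral area roughly by a factor k.
```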

9.
This paper introduces an accurate real-time soft shadow algorithm that uses sample-based visibility. We first present a GPU-based alias-free hard shadow map algorithm that typically requires only a single render pass from the light, in contrast to approaches using depth peeling with one pass per layer. For closed objects, we also eliminate the need for a depth bias. The method is then extended to soft shadow sampling for arbitrarily shaped area or volumetric light sources using 128–1024 light samples per screen pixel. The alias-free shadow map guarantees that visibility is accurately sampled per screen-space pixel, even for arbitrarily shaped (e.g. non-planar) surfaces or solid objects. A further contribution is a smooth, coherent shading model that avoids the light leakage near shadow borders commonly caused by normal interpolation.

10.
We present a real-time method for rendering a depth-of-field effect based on per-pixel layered splatting, where source pixels are scattered onto one of three layers of a destination pixel. In addition, the missing information behind foreground objects is filled in using an additional image of the areas occluded by nearer objects. The method creates high-quality depth-of-field results even in the presence of partial occlusion, without the major artifacts often present in previous real-time methods, and can also be applied to simulating defocused highlights. The entire framework is GPU-accelerated, enabling real-time post-processing for both off-line and interactive applications.
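Layered splatting starts from a per-pixel circle of confusion. A minimal sketch of that computation under the standard thin-lens model, plus an illustrative three-layer split; the thresholds and names are mine, not the paper's:

```python
import numpy as np

def coc_radius_px(z, z_focus, focal_len, aperture, px_per_m):
    """Signed circle-of-confusion radius in pixels under the thin-lens model
    (all distances in metres, aperture = lens diameter). Negative values
    mark foreground blur (z < z_focus), positive values background blur."""
    c = aperture * focal_len * (z - z_focus) / (z * (z_focus - focal_len))
    return 0.5 * c * px_per_m                # diameter -> radius, m -> px

def layer_of(coc_px, z, z_focus):
    """Illustrative 3-layer split for per-pixel layered splatting."""
    if abs(coc_px) < 0.5:
        return 1                             # in-focus layer
    return 0 if z < z_focus else 2           # foreground vs background layer
```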

11.
Defocus Magnification
A blurry background due to shallow depth of field is often desired for photographs such as portraits, but, unfortunately, small point-and-shoot cameras do not permit enough defocus because of the small diameter of their lenses. We present an image-processing technique that increases the defocus in an image to simulate the shallow depth of field of a lens with a larger aperture. Our technique estimates the spatially-varying amount of blur over the image, and then uses a simple image-based technique to increase defocus. We first estimate the size of the blur kernel at edges and then propagate this defocus measure over the image. Using our defocus map, we magnify the existing blurriness, which means that we blur blurry regions and keep sharp regions sharp. In contrast to more difficult problems such as depth from defocus, we do not require precise depth estimation and do not need to disambiguate textureless regions.
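The "magnify the existing blurriness" step can be sketched as blending a small stack of progressively blurred copies according to the amplified per-pixel blur estimate. A hedged sketch that assumes the defocus map has already been estimated and propagated; the gain and sigma spacing are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def magnify_defocus(img, defocus_map, gain=2.0, sigmas=(0, 1, 2, 4, 8)):
    """Amplify existing blur: blend a small stack of Gaussian-blurred copies
    by the magnified per-pixel blur estimate, so blurry regions get blurrier
    while pixels with zero estimated blur stay sharp."""
    target = np.clip(gain * defocus_map, sigmas[0], sigmas[-1])
    stack = [gaussian_filter(img, (s, s, 0)) for s in sigmas]
    out = np.array(stack[0], copy=True)
    for i in range(len(sigmas) - 1):
        lo, hi = sigmas[i], sigmas[i + 1]
        m = (target >= lo) & (target <= hi)
        t = np.clip((target - lo) / (hi - lo), 0, 1)[..., None]
        blend = (1 - t) * stack[i] + t * stack[i + 1]
        out[m] = blend[m]
    return out
```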

12.
Depth-of-Field Rendering by Pyramidal Image Processing
We present an image-based algorithm for interactive rendering of depth-of-field effects in images with depth maps. While previously published methods for interactive depth-of-field rendering suffer from various rendering artifacts such as color bleeding and sharpened or darkened silhouettes, our algorithm achieves significantly improved image quality by employing recently proposed GPU-based pyramid methods for image blurring and pixel disocclusion. For the same reason, our algorithm offers interactive rendering performance on modern GPUs and is suitable for real-time rendering with small circles of confusion. We validate the image quality provided by our algorithm through side-by-side comparisons with results obtained by distributed ray tracing.
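The pyramid idea in brief: level k of a blurred, downsampled pyramid approximates a blur radius of roughly 2^k pixels, so each pixel can select and blend the levels matching its circle of confusion. A minimal sketch of the selection step only; it omits the paper's pixel disocclusion handling, and all names are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def pyramid_dof(img, coc_px, max_level=4):
    """Per-pixel pyramid-level selection: pick level k = log2(CoC) and blend
    between the two nearest levels for a smooth blur transition."""
    levels = [img.astype(float)]
    for _ in range(max_level):
        levels.append(gaussian_filter(levels[-1], (1, 1, 0))[::2, ::2])
    # bring every level back to full resolution so selection is a lookup
    full = [zoom(l, (img.shape[0] / l.shape[0], img.shape[1] / l.shape[1], 1),
                 order=1) for l in levels]
    k = np.clip(np.log2(np.maximum(coc_px, 1e-3)), 0, max_level)
    lo = np.floor(k).astype(int)
    hi = np.minimum(lo + 1, max_level)
    t = (k - lo)[..., None]
    yy, xx = np.indices(coc_px.shape)
    stack = np.stack(full)                       # (levels, H, W, 3)
    return (1 - t) * stack[lo, yy, xx] + t * stack[hi, yy, xx]
```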

13.
Domain-continuous visibility determination algorithms have proved very efficient at reducing the noise otherwise prevalent in stochastic sampling. Even though they incur increased overhead in geometrical tests and visibility information management, their analytical nature provides such accurate integration that the pay-off is often worth it. This paper presents a time-continuous primary visibility algorithm for motion blur, aimed at ray tracing. Two novel intersection tests are derived and implemented: ray versus moving triangle, and ray versus moving AABB. A novel take on shading is presented as well, in which the time continuum of visible geometry is adaptively point-sampled. Static geometry is handled using supplemental stochastic rays in order to reduce spatial aliasing. Finally, a prototype ray tracer with a fully time-continuous traversal kernel is presented in detail. The results cover a variety of test scenarios and show that, even though our time-continuous algorithm has limitations, it outperforms multi-jittered quasi-Monte Carlo ray tracing in terms of image quality at equal rendering time across a wide range of sampling rates.

14.
This paper introduces a framebuffer level-of-detail algorithm for controlling the pixel workload in an interactive rendering application. Our basic strategy is to evaluate the shading in a low-resolution buffer and, in a second rendering pass, resample this buffer at the desired screen resolution. The size of the lower-resolution buffer provides a trade-off between rendering time and the level of detail in the final shading. To reduce approximation error, we use a feature-preserving reconstruction technique that more faithfully approximates the shading near depth and normal discontinuities. We also demonstrate how intermediate components of the shading can be selectively resized to provide finer-grained control over resource allocation. Finally, we introduce a simple control mechanism that continuously adjusts the amount of resizing necessary to maintain a target frame rate. These techniques require no preprocessing, are straightforward to implement on modern GPUs, and are shown to provide significant performance gains for several pixel-bound scenes.
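The feature-preserving resampling step is commonly realized as a joint bilateral upsample: bilinear weights on the four nearest low-resolution shading taps, modulated by agreement with the full-resolution depth. A sketch under that assumption (the paper's reconstruction may differ in detail; a normal term would be added the same way):

```python
import numpy as np

def joint_bilateral_upsample(shade_lo, depth_lo, depth_hi, scale, sigma_d=0.05):
    """Resample low-res shading at full resolution: bilinear weights on the
    four nearest low-res taps, modulated by full-res vs low-res depth
    agreement so shading does not bleed across silhouettes."""
    H, W = depth_hi.shape
    ys, xs = np.mgrid[0:H, 0:W]
    fy, fx = ys / scale, xs / scale
    y0 = np.clip(np.floor(fy).astype(int), 0, depth_lo.shape[0] - 2)
    x0 = np.clip(np.floor(fx).astype(int), 0, depth_lo.shape[1] - 2)
    out = np.zeros((H, W, shade_lo.shape[2]))
    wsum = np.zeros((H, W, 1))
    for dy in (0, 1):
        for dx in (0, 1):
            w_bi = (fy - y0 if dy else 1 - (fy - y0)) * \
                   (fx - x0 if dx else 1 - (fx - x0))        # bilinear part
            w_d = np.exp(-(depth_lo[y0 + dy, x0 + dx] - depth_hi) ** 2
                         / (2 * sigma_d ** 2))               # depth agreement
            w = (w_bi * w_d)[..., None]
            out += w * shade_lo[y0 + dy, x0 + dx]
            wsum += w
    return out / np.maximum(wsum, 1e-8)
```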

15.
This paper describes a fast rendering algorithm for verifying spectacle lens designs. Our method simulates refraction corrections for astigmatism as well as myopia and presbyopia. Refraction and defocus are the main issues in the simulation. For refraction, our proposed method uses per-vertex ray tracing, which warps the environment map and produces in real time a refracted image that is subjectively as good as ray-traced output. Defocus simulation was conventionally done by distribution ray tracing, and a real-time solution was impossible. We introduce the concept of a blur field, which we use to displace every vertex according to its position. The blurring information is precomputed as a set of field values distributed over voxels formed by evenly subdividing the perspective-projected space. The field values are determined by tracing a wavefront from each voxel through the lens and the eye, and by evaluating the spread of light at the retina under the best human accommodation effort. The blur field is stored as texture data and referenced by the vertex shader that displaces each vertex. At an interactive frame rate, blending the multiple rendering results produces a blurred image comparable to distribution ray tracing output.

16.
Sharp edges are important shape features, and their extraction has been extensively studied on both point clouds and surfaces. We consider the problem of extracting sharp edges from a sparse set of colour-and-depth (RGB-D) images. The noise-ridden depth measurements are challenging for existing feature extraction methods that work solely in the geometric domain (e.g. points or meshes). By utilizing both colour and depth information, we propose a novel feature extraction method that produces much cleaner and more coherent feature lines. We make two technical contributions. First, we show that intensity edges can augment the depth map to improve normal estimation and feature localization from a single RGB-D image. Second, we design a novel algorithm for consolidating feature points obtained from multiple RGB-D images. By utilizing the normals and ridge/valley types associated with the feature points, our algorithm effectively suppresses noise without smearing nearby features.

17.
This survey gives an overview of the current state of the art in GPU techniques for interactive large-scale volume visualization. Modern techniques in this field have brought about a sea change in how interactive visualization and analysis of giga-, tera- and petabytes of volume data can be enabled on GPUs. In addition to combining the parallel processing power of GPUs with out-of-core methods and data streaming, a major enabler for interactivity is making both the computational and the visualization effort proportional to the amount and resolution of data that is actually visible on screen, i.e. 'output-sensitive' algorithms and system designs. This leads to recent output-sensitive approaches that are 'ray-guided', 'visualization-driven' or 'display-aware'. In this survey, we focus on these characteristics and propose a new categorization of GPU-based large-scale volume visualization techniques based on the notions of actual output-resolution visibility and the current working set of volume bricks: the current subset of data that is minimally required to produce an output image of the desired display resolution. Furthermore, we discuss the differences and similarities of different rendering and data traversal strategies in volume rendering by putting them into a common context: the notion of address translation. For our purposes here, we view parallel (distributed) visualization using clusters as an orthogonal set of techniques that we do not discuss in detail but that can be used in conjunction with what we present in this survey.

18.
In this work, we present a non-photorealistic rendering technique for creating stylized abstractions from color images and videos. Our approach is based on adaptive line integral convolution in combination with directional shock filtering. The smoothing process regularizes directional image features, while the shock filter provides a sharpening effect. Both operations are guided by a flow field derived from the structure tensor. To obtain a high-quality flow field, we present a novel smoothing scheme for the structure tensor based on Poisson's equation. Our approach effectively regularizes anisotropic image regions while preserving the overall image structure and achieving a consistent level of abstraction. Moreover, it is suitable for per-frame filtering of video and can be implemented efficiently to process content in real time.
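The flow field that guides both the line integral convolution and the shock filter comes from the smoothed structure tensor, whose minor eigenvector points along local image features. A minimal sketch that uses Gaussian smoothing as a stand-in for the paper's Poisson-based scheme:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def flow_from_structure_tensor(gray, sigma=2.0):
    """Per-pixel structure tensor [[Jxx, Jxy], [Jxy, Jyy]], smoothed; returns
    the unit eigenvector of the smaller eigenvalue (direction of least
    contrast), i.e. the local flow direction used to guide LIC."""
    ix = sobel(gray, axis=1)
    iy = sobel(gray, axis=0)
    jxx = gaussian_filter(ix * ix, sigma)
    jxy = gaussian_filter(ix * iy, sigma)
    jyy = gaussian_filter(iy * iy, sigma)
    lam = 0.5 * (jxx + jyy - np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2))
    vx, vy = lam - jyy, jxy                  # eigenvector for eigenvalue lam
    norm = np.sqrt(vx ** 2 + vy ** 2) + 1e-9
    return vx / norm, vy / norm
```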

19.
We present a new algorithm for efficient rendering of high-quality depth-of-field (DoF) effects. We start with a single rasterized view (the reference view) of the scene and sample the light field by warping the reference view to nearby views. We implement the algorithm in NVIDIA's CUDA for parallel processing and exploit atomic operations to resolve visibility when multiple pixels warp to the same image location. We then synthesize DoF effects directly from the sampled light field. To reduce aliasing artifacts, we propose an image-space filtering technique that compensates for spatial undersampling using MIP mapping. The main advantages of our algorithm are its simplicity and generality. We demonstrate interactive rendering of DoF effects in several complex scenes. Compared to existing methods, ours does not require ray tracing and hence scales well with scene complexity.
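The visibility resolve is essentially a scatter with a depth test. A CPU sketch of the same logic using np.minimum.at as a stand-in for CUDA's atomicMin; the horizontal-disparity warp model and all names are illustrative assumptions:

```python
import numpy as np

def warp_to_view(color, depth, disparity, dx):
    """Forward-warp the reference view by per-pixel disparity and resolve
    collisions with a min-depth test (the CPU analogue of CUDA atomicMin)."""
    H, W, _ = color.shape
    ys, xs = np.mgrid[0:H, 0:W]
    xw = np.clip(np.round(xs + dx * disparity).astype(int), 0, W - 1)
    flat = (ys * W + xw).ravel()                 # target pixel per sample
    zbuf = np.full(H * W, np.inf)
    np.minimum.at(zbuf, flat, depth.ravel())     # scatter-min, like atomicMin
    win = depth.ravel() == zbuf[flat]            # samples that survived
    out = np.zeros((H * W, 3))                   # disoccluded holes stay black
    out[flat[win]] = color.reshape(-1, 3)[win]
    return out.reshape(H, W, 3)
```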

20.
We describe how a pipeline for online 3D reconstruction using commodity depth and image scanning hardware can be made scalable to large spatial extents and high scanning resolutions. Our modified pipeline requires less than 10% of the memory required by previous approaches at similar speed and resolution. To achieve this, we avoid storing a 3D distance field and weight map during online scene reconstruction. Instead, surface samples are binned into a high-resolution binary voxel grid. This grid is used in combination with caching and deferred processing of depth images to reconstruct the scene geometry. For pose estimation, GPU ray casting is performed on the binary voxel grid. A one-to-one comparison with level-set ray casting in a distance volume indicates slightly lower pose accuracy. To enable unlimited spatial extents and to store acquired samples at the appropriate level of detail, we combine a hash map with a hierarchical tree representation.
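The memory saving comes from storing a single occupancy bit per voxel instead of a distance value and a weight. A hedged sketch of binning samples into a sparse, brick-hashed binary grid; the brick size and bit layout are my assumptions, not the paper's exact format:

```python
import numpy as np

BRICK = 8  # 8^3 voxels per brick, 1 bit per voxel -> 64 bytes per brick

def bin_samples(points, voxel_size, grid=None):
    """Bin surface samples (N x 3 array, metres) into a sparse binary voxel
    grid: a hash map from brick coordinates to a packed bit array. No
    distance field and no weight map are stored, only occupancy."""
    grid = {} if grid is None else grid
    v = np.floor(np.asarray(points) / voxel_size).astype(np.int64)
    bricks, locals_ = v // BRICK, v % BRICK
    for b, l in zip(map(tuple, bricks), locals_):
        bits = grid.setdefault(b, np.zeros(BRICK ** 3 // 8, dtype=np.uint8))
        idx = (l[0] * BRICK + l[1]) * BRICK + l[2]   # linear voxel index
        bits[idx >> 3] |= np.uint8(1 << (idx & 7))   # set occupancy bit
    return grid

def occupied(grid, p, voxel_size):
    """Occupancy query, e.g. as used by a ray caster marching the grid."""
    v = np.floor(np.asarray(p) / voxel_size).astype(np.int64)
    b, l = tuple(v // BRICK), v % BRICK
    if b not in grid:
        return False
    idx = (l[0] * BRICK + l[1]) * BRICK + l[2]
    return bool(grid[b][idx >> 3] & (1 << (idx & 7)))
```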
