Similar Documents
A total of 20 similar documents were retrieved (search time: 31 ms).
1.
When rendering effects such as motion blur and defocus blur, shading can become very expensive if done in a naïve way, i.e. shading each visibility sample. To improve performance, previous work often decouples shading from visibility sampling using shader caching algorithms. We present a novel technique for reusing shading in a stochastic rasterizer. Shading is computed hierarchically and sparsely in an object‐space texture, and by selecting an appropriate mipmap level for each triangle, we ensure that the shading rate is sufficiently high so that no noticeable blurring is introduced in the rendered image. Furthermore, with a two‐pass algorithm, we separate shading from reuse and thus avoid GPU thread synchronization. Our method runs at real‐time frame rates and is up to 3× faster than previous methods. This is an important step forward for stochastic rasterization in real time.
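To make the mipmap-level idea concrete, here is a minimal Python sketch of choosing a shading resolution so that the object-space texel rate roughly matches the number of screen pixels the (blurred) triangle covers. The function name, arguments and the blur-scale heuristic are illustrative assumptions, not the paper's actual selection rule.

```python
import math

def select_shading_mip_level(base_texel_count, covered_pixel_count, blur_scale=1.0):
    """Pick a mip level of the object-space shading texture so that the number
    of shaded texels roughly matches the number of screen pixels the triangle
    covers; blur_scale > 1 would allow coarser shading under strong blur.
    (Illustrative heuristic only, not the paper's formulation.)"""
    if covered_pixel_count <= 0:
        return 0
    ratio = base_texel_count / (covered_pixel_count * blur_scale)
    # each coarser mip level has 4x fewer texels, hence the factor 0.5 on log2
    return max(0, int(math.floor(0.5 * math.log2(max(ratio, 1.0)))))

# a triangle with 65536 base-level texels that covers ~1024 blurred pixels
print(select_shading_mip_level(65536, 1024))   # -> 3
```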

2.
This paper presents a GPU‐based rendering algorithm for real‐time defocus blur effects, which significantly improves on accumulation buffering. The algorithm combines three distinctive techniques: (1) adaptive discrete geometric level of detail (LOD), made popping‐free by blending visibility samples across the two adjacent geometric levels; (2) adaptive visibility/shading sampling via sample reuse; (3) visibility supersampling via height‐field ray casting. All three techniques are seamlessly integrated to lower the rendering cost of smooth defocus blur with high visibility sampling rates, while maintaining most of the quality of brute‐force accumulation buffering.
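For reference, the brute-force accumulation buffering that this method improves on can be sketched in a few lines: render the scene from many sample positions on the lens aperture and average the results. The `render_from_lens_sample` callable is a hypothetical user-supplied renderer, not part of the paper.

```python
import numpy as np

def accumulation_buffer_dof(render_from_lens_sample, lens_radius, n_samples=32, seed=0):
    """Brute-force defocus blur: average renderings taken from n_samples
    positions on the lens aperture (the supplied renderer is assumed to keep
    the focal plane fixed for every lens offset)."""
    rng = np.random.default_rng(seed)
    accum = None
    for _ in range(n_samples):
        # uniform sample on the lens disc
        r = lens_radius * np.sqrt(rng.random())
        phi = 2.0 * np.pi * rng.random()
        img = render_from_lens_sample((r * np.cos(phi), r * np.sin(phi)))
        accum = img.astype(np.float64) if accum is None else accum + img
    return accum / n_samples
```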

3.
Depth‐of‐field is one of the most crucial rendering effects for synthesizing photorealistic images. Unfortunately, this effect is also extremely costly. It can take hundreds to thousands of samples to achieve noise‐free results using Monte Carlo integration. This paper introduces an efficient adaptive depth‐of‐field rendering algorithm that achieves noise‐free results using significantly fewer samples. Our algorithm consists of two main phases: adaptive sampling and image reconstruction. In the adaptive sampling phase, the adaptive sample density is determined by a ‘blur‐size’ map and a ‘pixel‐variance’ map computed during initialization. In the image reconstruction phase, based on the blur‐size map, we use a novel multiscale reconstruction filter to dramatically reduce the noise in the defocused areas where the sampled radiance has high variance. Because of the efficiency of this new filter, only a few samples are required. With the combination of the adaptive sampler and the multiscale filter, our algorithm renders near‐reference quality depth‐of‐field images with significantly fewer samples than previous techniques.
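A toy version of the sample-allocation step might look as follows; the weighting of the blur-size and pixel-variance maps and all constants are assumptions for illustration, since the paper's actual density computation is more involved.

```python
import numpy as np

def adaptive_sample_counts(blur_size, pixel_variance, s_min=4, s_max=64, k=32.0):
    """Allocate per-pixel sample counts from an initial blur-size map and
    pixel-variance map: noisy pixels get more samples, while strongly
    defocused pixels can lean on the multiscale reconstruction filter and
    therefore need fewer. (Illustrative heuristic, not the paper's formula.)"""
    importance = pixel_variance / (1.0 + blur_size)
    return np.clip(np.round(k * importance), s_min, s_max).astype(int)
```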

4.
We propose a unified rendering approach that jointly handles motion and defocus blur for transparent and opaque objects at interactive frame rates. Our key idea is to create a sampled representation of all parts of the scene geometry that are potentially visible at any point in time for the duration of a frame in an initial rasterization step. We store the resulting temporally‐varying fragments (t‐fragments) in a bounding volume hierarchy which is rebuilt every frame using a fast spatial median construction algorithm. This makes our approach suitable for interactive applications with dynamic scenes and animations. Next, we perform spatial sampling to determine all t‐fragments that intersect with a specific viewing ray at any point in time. Viewing rays are sampled according to the lens uv‐sampling for depth‐of‐field effects. In a final temporal sampling step, we evaluate the predetermined viewing ray/t‐fragment intersections for one or multiple points in time. This allows us to incorporate all standard shading effects, including transparency. We describe the overall framework, present our GPU implementation, and evaluate our rendering approach with respect to scalability, quality, and performance.

5.
In this paper, we extend the concept of pre‐filtered shadow mapping to stochastic rasterization, enabling real‐time rendering of soft shadows from planar area lights. Most existing soft shadow mapping methods lose important visibility information by relying on pinhole renderings from an area light source, providing plausible results only for small light sources. Since we sample the entire 4D shadow light field stochastically, we are able to closely approximate shadows of large area lights as well. In order to efficiently reconstruct smooth shadows from this sparse data, we exploit the analogy of soft shadow computation to rendering defocus blur, and introduce a multiplane pre‐filtering algorithm. We demonstrate how existing pre‐filterable approximations of the visibility function, such as variance shadow mapping, can be extended to four dimensions within our framework.
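One of the pre-filterable visibility approximations mentioned, variance shadow mapping, reduces to a single Chebyshev bound per lookup; the sketch below shows that standard 2D building block (the paper's contribution is its extension to the 4D shadow light field, which is not reproduced here).

```python
def vsm_visibility(mean_z, mean_z_sq, receiver_depth, min_variance=1e-4):
    """Chebyshev upper bound used by variance shadow mapping.
    mean_z, mean_z_sq -- pre-filtered first and second depth moments from the shadow map
    receiver_depth    -- light-space depth of the point being shaded
    Returns an estimate of the fraction of light reaching the point."""
    if receiver_depth <= mean_z:
        return 1.0                                  # in front of the average occluder: lit
    variance = max(mean_z_sq - mean_z * mean_z, min_variance)
    d = receiver_depth - mean_z
    return variance / (variance + d * d)
```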

6.
We present user‐controllable and plausible defocus blur for a stochastic rasterizer. We modify circle of confusion coefficients per vertex to express more general defocus blur, and show how the method can be applied to limit the foreground blur, extend the in‐focus range, simulate tilt‐shift photography, and specify per‐object defocus blur. Furthermore, with two simplifying assumptions, we show that existing triangle coverage tests and tile culling tests can be used with very modest modifications. Our solution is temporally stable and handles simultaneous motion blur and depth of field.
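The kind of per-vertex control described here can be illustrated with the standard thin-lens circle of confusion plus two toy controls; the parameter names and the specific clamping rules below are assumptions, not the paper's coefficients.

```python
def controlled_coc(depth, focus_dist, aperture, focal_len,
                   max_foreground_coc=None, in_focus_range=0.0):
    """Signed thin-lens circle of confusion (negative in front of the focal
    plane) with two illustrative user controls: an extended in-focus range and
    a clamp on foreground blur."""
    if abs(depth - focus_dist) < 0.5 * in_focus_range:
        return 0.0                                          # extended sharp range
    coc = aperture * focal_len * (depth - focus_dist) / (depth * (focus_dist - focal_len))
    if max_foreground_coc is not None and coc < -max_foreground_coc:
        coc = -max_foreground_coc                           # limit foreground blur only
    return coc
```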

7.
This paper presents a novel framework for elliptical weighted average (EWA) surface splatting with time‐varying scenes. We extend the theoretical basis of the original framework by replacing the 2D surface reconstruction filters with 3D kernels that unify the spatial and temporal components of moving objects. Based on the newly derived mathematical framework, we introduce a rendering algorithm that supports the generation of high‐quality motion blur for point‐based objects using a piecewise linear approximation of the motion. The rendering algorithm applies ellipsoids as rendering primitives which are constructed by extending planar EWA surface splats into the temporal dimension along the instantaneous motion vector. Finally, we present an implementation of the proposed rendering algorithm with approximated occlusion handling using advanced features of modern GPUs and show its capability of producing motion‐blurred result images at interactive frame rates.

8.
Image‐based rendering techniques are a powerful alternative to traditional polygon‐based computer graphics. This paper presents a novel light field rendering technique which performs per‐pixel depth correction of rays for high‐quality reconstruction. Our technique stores combined RGB and depth values in a parabolic 2D texture for every light field sample acquired at discrete positions on a uniform spherical setup. Image synthesis is implemented on the GPU as a fragment program which extracts the correct image information from adjacent cameras for each fragment by applying per‐pixel depth correction of rays. We show that the presented image‐based rendering technique provides a significant improvement compared to previous approaches. We explain two different rendering implementations which make use of a uniform parametrisation to minimise disparity problems and ensure full six degrees of freedom for virtual view synthesis. While one rendering algorithm implements an iterative refinement approach for rendering light fields with per‐pixel depth correction, the other approach employs a raycaster, which provides superior rendering quality at moderate frame rates. GPU‐based per‐fragment depth correction of rays, used in both implementations, helps reduce ghosting artifacts to a non‐noticeable level and yields a rendering technique that requires neither exhaustive pre‐processing for 3D object reconstruction nor real‐time ray‐object intersection calculations at rendering time.

9.
Advances in computer graphics enable us to create digital images of astonishing complexity and realism. However, processing resources are still a limiting factor. Hence, many costly but desirable aspects of realism are often not accounted for, including global illumination, accurate depth of field and motion blur, spectral effects, etc., especially in real‐time rendering. At the same time, there is a strong trend towards more pixels per display due to larger displays, higher pixel densities or larger fields of view. Further observable trends in current display technology include more bits per pixel (high dynamic range, wider color gamut/fidelity), increasing refresh rates (better motion depiction), and an increasing number of displayed views per pixel (stereo, multi‐view, all the way to holographic or lightfield displays). These developments cause significant unsolved technical challenges due to aspects such as limited compute power and bandwidth. Fortunately, the human visual system has certain limitations, which mean that providing the highest possible visual quality is not always necessary. In this report, we present the key research and models that exploit the limitations of perception to tackle visual quality and workload alike. Moreover, we present the open problems and promising future research targeting the question of how we can minimize the effort to compute and display only the necessary pixels while still offering the user a full visual experience.

10.
We present a novel multi‐view, projective texture mapping technique. While previous multi‐view texturing approaches lead to blurring and ghosting artefacts if 3D geometry and/or camera calibration are imprecise, we propose a texturing algorithm that warps (“floats”) projected textures during run‐time to preserve crisp, detailed texture appearance. Our GPU implementation achieves interactive to real‐time frame rates. The method is very generally applicable and can be used in combination with many image‐based rendering methods or projective texturing applications. By using Floating Textures in conjunction with, e.g., visual hull rendering, light field rendering, or free‐viewpoint video, improved rendering results are obtained from fewer input images, less accurately calibrated cameras, and coarser 3D geometry proxies.

11.
We present a real‐time method for rendering a depth‐of‐field effect based on per‐pixel layered splatting, where source pixels are scattered onto one of three layers of a destination pixel. In addition, the missing information behind foreground objects is filled with an additional image of the areas occluded by nearer objects. The method creates high‐quality depth‐of‐field results even in the presence of partial occlusion, without the major artifacts often present in previous real‐time methods. The method can also be applied to simulating defocused highlights. The entire framework is accelerated on the GPU, enabling real‐time post‐processing for both off‐line and interactive applications.
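A heavily simplified flavour of the layering step is sketched below: pixels are assigned to a background, in-focus or foreground layer by their signed circle of confusion. The real method scatters source pixels per destination pixel and fills disoccluded regions from an extra occlusion image, which this sketch does not attempt; the threshold and layer count handling here are assumptions.

```python
import numpy as np

def split_into_layers(color, signed_coc, focus_eps=0.5):
    """Assign each pixel to one of three layers (background, in-focus,
    foreground) based on its signed circle of confusion; negative CoC means
    the pixel lies in front of the focal plane. Returns the layers in
    back-to-front compositing order together with their masks."""
    fg = signed_coc < -focus_eps
    bg = signed_coc > focus_eps
    in_focus = ~(fg | bg)
    masks = [bg, in_focus, fg]                      # back to front
    layers = [np.where(m[..., None], color, 0.0) for m in masks]
    return layers, masks
```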

12.
Automatic camera control for scenes depicting human motion is an important topic in motion capture‐based animation, computer games, and other animation‐based fields. This control problem is complex, combining geometric constraints, visibility requirements, and aesthetic elements. Therefore, existing optimization‐based approaches for human action overview are often too demanding for online computation. In this paper, we introduce an effective automatic camera control method that is extremely efficient and allows online performance. Rather than optimizing a complex quality measure, at each time step it selects one active camera from a multitude of cameras that render the dynamic scene. The selection is based on the correlation between each view stream and the human motion in the scene. Two factors allow for rapid selection among tens of candidate views in real time, even for complex multi‐character scenes: the efficient rendering of the multitude of view streams, and optimized calculations of the correlations using a modified CCA. In addition to the method's simplicity and speed, it exhibits good agreement with both cinematic idioms and previous human motion camera control work. Our evaluations show that the method is able to cope with the challenges posed by severe occlusions, multiple characters and complex scenes.
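The selection step can be approximated with plain correlation as a stand-in for the modified CCA used in the paper; the input representations below (a per-frame image-change magnitude per camera and a scalar motion signal) are illustrative assumptions.

```python
import numpy as np

def select_active_camera(view_streams, motion_signal):
    """Pick the camera whose view stream correlates best with the character
    motion. view_streams has shape (n_cameras, n_frames); motion_signal has
    shape (n_frames,). Plain Pearson correlation replaces the paper's
    modified CCA for illustration."""
    corrs = [abs(np.corrcoef(stream, motion_signal)[0, 1]) for stream in view_streams]
    return int(np.argmax(corrs))

# toy example: camera 1 tracks the motion signal most closely
motion = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
views = np.array([[0.2, 0.1, 0.3, 0.2, 0.1],
                  [0.1, 1.1, 2.2, 0.9, 0.2],
                  [0.5, 0.4, 0.5, 0.6, 0.5]])
print(select_active_camera(views, motion))   # -> 1
```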

13.
Owing to recent advances in depth sensors and computer vision algorithms, depth images are often available with co-registered color images. In this paper, we propose a simple but effective method for obtaining an all-in-focus (AIF) color image from a database of color and depth image pairs. Since the defocus blur is inherently depth-dependent, the color pixels are first grouped according to their depth values. The defocus blur parameters are then estimated from the amount of defocus blur of the grouped pixels. Given a defocused color image and its estimated blur parameters, the AIF image is produced by adopting the conventional pixel-wise mapping technique. In addition, the availability of the depth image disambiguates objects located nearer or farther than the in-focus object and thus facilitates image refocusing. We demonstrate the effectiveness of the proposed algorithm using both synthetic and real color and depth images.
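The depth-grouping idea can be illustrated with a crude stand-in: quantize the depth map into groups, estimate a blur size for each group from its distance to the in-focus depth, and sharpen that group accordingly. An unsharp mask replaces the paper's estimated blur parameters and pixel-wise mapping here, and the calibration constant is purely illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def naive_all_in_focus(color, depth, focus_depth, blur_per_unit_depth, n_groups=8):
    """Group pixels by depth, estimate a per-group defocus blur from the
    distance to the in-focus depth, and sharpen each group with an unsharp
    mask of matching size. (A crude illustration, not the paper's method.)"""
    colf = color.astype(float)
    out = colf.copy()
    edges = np.linspace(depth.min(), depth.max(), n_groups + 1)
    groups = np.clip(np.digitize(depth, edges) - 1, 0, n_groups - 1)
    for g in range(n_groups):
        mask = groups == g
        if not mask.any():
            continue
        group_depth = 0.5 * (edges[g] + edges[g + 1])
        sigma = blur_per_unit_depth * abs(group_depth - focus_depth)
        if sigma < 0.3:
            continue                                  # group is already in focus
        blurred = gaussian_filter(colf, sigma=(sigma, sigma, 0))
        out[mask] = np.clip(2.0 * colf[mask] - blurred[mask], 0.0, 255.0)
    return out
```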

14.
Motion blur is a fundamental cue in the perception of objects in motion. This phenomenon manifests as a visible trail along the trajectory of the object and is the result of the combination of relative motion and light integration taking place in film and electronic cameras. In this work, we analyse the mechanisms that produce motion blur in recording devices and the methods that can simulate it in computer generated images. Light integration over time is one of the most expensive processes to simulate in high‐quality renders; as such, we make an in‐depth review of the existing algorithms and categorize them in the context of a formal model that highlights their differences, strengths and limitations. We conclude this report by proposing a number of alternative classifications that will help the reader identify the best technique for a particular scenario.
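The temporal light integration at the heart of such a formal model can be written in a few lines: the pixel value is the shutter-weighted average of the instantaneous radiance over the exposure. The callables and the simple Riemann-sum quadrature below are illustrative.

```python
import numpy as np

def shutter_integrate(radiance, shutter, t_open, t_close, n=256):
    """Motion blur as temporal light integration: average the instantaneous
    radiance L(t) over the exposure [t_open, t_close], weighted by the shutter
    response s(t). Both radiance(t) and shutter(t) are user-supplied callables."""
    ts = np.linspace(t_open, t_close, n)
    w = np.array([shutter(t) for t in ts])
    vals = np.array([radiance(t) for t in ts])
    return float((w * vals).sum() / w.sum())

# box shutter over a 20 ms exposure of a flickering light source
print(shutter_integrate(lambda t: 0.5 + 0.5 * np.cos(200.0 * t), lambda t: 1.0, 0.0, 0.02))
```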

15.
We present a new post‐processing method for simulating depth of field based on accurate calculations of circles of confusion. Compared to previous work, our method derives actual scene depth information directly from the existing depth buffer, requires no specialized rendering passes, and allows easy integration into existing rendering applications. Our implementation uses an adaptive, two‐pass filter, producing a high‐quality depth‐of‐field effect that can be executed entirely on the GPU, taking advantage of the parallelism of modern graphics cards and permitting real‐time performance when applied to large numbers of pixels.
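The core calculation, recovering view-space depth from the existing depth buffer and converting it to a circle of confusion with the thin-lens model, can be sketched as follows; the depth-buffer convention (non-reversed, [0,1] range) and the units are assumptions that would need to match the actual renderer.

```python
def linear_depth(d_buf, z_near, z_far):
    """Convert a [0,1] hardware depth-buffer value (standard, non-reversed
    perspective projection assumed) back to view-space distance."""
    return (z_near * z_far) / (z_far - d_buf * (z_far - z_near))

def coc_from_depth_buffer(d_buf, z_near, z_far, focus_dist, aperture, focal_len):
    """Thin-lens circle-of-confusion diameter computed directly from a depth
    buffer sample, as a post process. All distances in the same scene units."""
    z = linear_depth(d_buf, z_near, z_far)
    return abs(aperture * focal_len * (z - focus_dist) / (z * (focus_dist - focal_len)))

# a pixel halfway through the depth buffer with a 50 mm lens focused at 5 m
print(coc_from_depth_buffer(0.5, z_near=0.1, z_far=100.0, focus_dist=5.0,
                            aperture=0.025, focal_len=0.05))
```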

16.
Domain‐continuous visibility determination algorithms have proved to be very efficient at reducing noise otherwise prevalent in stochastic sampling. Even though they come with an increased overhead in terms of geometrical tests and visibility information management, their analytical nature provides such a rich integral that the pay‐off is often worth it. This paper presents a time‐continuous, primary visibility algorithm for motion blur aimed at ray tracing. Two novel intersection tests are derived and implemented. The first is for ray versus moving triangle intersection and the second for ray versus moving AABB intersection. A novel take on shading is presented as well, where the time continuum of visible geometry is adaptively point‐sampled. Static geometry is handled using supplemental stochastic rays in order to reduce spatial aliasing. Finally, a prototype ray tracer with a full time‐continuous traversal kernel is presented in detail. The results are based on a variety of test scenarios and show that even though our time‐continuous algorithm has limitations, it outperforms multi‐jittered quasi‐Monte Carlo ray tracing in terms of image quality at equal rendering time, within wide sampling rate ranges.
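As a point of comparison, a conservative (non-analytic) ray versus moving-AABB test can be written by testing the ray against the box swept over the whole shutter interval with an ordinary slab test; the analytic, time-continuous test derived in the paper is tighter and is not reproduced here.

```python
def ray_vs_swept_aabb(ray_o, ray_d, box_min, box_max, velocity, t0, t1, eps=1e-12):
    """Conservative test: expand the AABB by its linear motion over [t0, t1]
    and run a standard slab test against the swept box. May report false
    positives compared to a true time-continuous test."""
    t_near, t_far = 0.0, float("inf")
    for i in range(3):
        lo = box_min[i] + min(velocity[i] * t0, velocity[i] * t1)
        hi = box_max[i] + max(velocity[i] * t0, velocity[i] * t1)
        if abs(ray_d[i]) < eps:
            if ray_o[i] < lo or ray_o[i] > hi:
                return False
            continue
        inv = 1.0 / ray_d[i]
        ta, tb = (lo - ray_o[i]) * inv, (hi - ray_o[i]) * inv
        if ta > tb:
            ta, tb = tb, ta
        t_near, t_far = max(t_near, ta), min(t_far, tb)
        if t_near > t_far:
            return False
    return True

# a ray along +x hits a unit box that slides downward during the shutter interval
print(ray_vs_swept_aabb((0, 0.5, 0.5), (1, 0, 0), (2, 1, 0), (3, 2, 1), (0, -1, 0), 0.0, 1.0))
```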

17.
Images synthesized by light field rendering exhibit aliasing artifacts when the light field is undersampled; adding new light field samples improves the image quality and reduces aliasing, but new samples are expensive to acquire. Light field rays are traditionally gathered directly from the source images, but new rays can also be inferred through geometry estimation. This paper describes a light field rendering approach based on this principle that estimates geometry from the set of source images using multi‐baseline stereo reconstruction to supplement the existing light field rays to meet the minimum sampling requirement. The rendering and reconstruction steps are computed over a set of planes in the scene volume, and output images are synthesized by compositing results from these planes together. The planes are each processed independently and the number of planes can be adjusted to scale the amount of computation to achieve the desired frame rate. The reconstruction fidelity (and by extension image quality) is improved by a library of matching templates to support matches along discontinuities in the image or geometry (e.g. object profiles and concavities). Given a set of silhouette images, the visual hull can be constructed and applied to further improve reconstruction by removing outlier matches. The algorithm is efficiently implemented by a set of image filter operations on commodity graphics hardware and achieves image synthesis at interactive rates.

18.
The ability to interpolate between images taken at different times and viewpoints directly in image space opens up new possibilities. The goal of our work is to create plausible in‐between images in real time without the need for an intermediate 3D reconstruction. This enables us to also interpolate between images recorded with uncalibrated and unsynchronized cameras. In our approach, we use a novel discontinuity‐preserving image deformation model to robustly estimate dense correspondences based on local homographies. Once correspondences have been computed, we are able to render plausible in‐between images in real time while properly handling occlusions. We discuss the relation of our approach to human motion perception and other image interpolation techniques.
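As a much-simplified illustration of homography-based interpolation, the sketch below warps one image with a single global homography interpolated toward the identity and cross-fades with the second image; the paper instead estimates dense correspondences from local homographies with a discontinuity-preserving deformation model and handles occlusions, none of which is attempted here.

```python
import numpy as np

def interpolate_views(img_a, img_b, H_ab, alpha):
    """Crude in-between image: warp img_a by a homography blended linearly
    between the identity (alpha=0) and H_ab (alpha=1), then cross-fade with
    img_b. Nearest-neighbour backward warping, no occlusion handling."""
    h, w = img_a.shape[:2]
    H = (1.0 - alpha) * np.eye(3) + alpha * np.asarray(H_ab, dtype=float)
    ys, xs = np.mgrid[0:h, 0:w]
    dst = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = np.linalg.inv(H) @ dst                      # backward mapping
    sx = np.clip(np.round(src[0] / src[2]).astype(int), 0, w - 1)
    sy = np.clip(np.round(src[1] / src[2]).astype(int), 0, h - 1)
    warped = img_a[sy, sx].reshape(img_a.shape)
    return ((1.0 - alpha) * warped + alpha * img_b.astype(float)).astype(img_a.dtype)
```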

19.
Capturing exposure sequences to compute high dynamic range (HDR) images causes motion blur in cases of camera movement. This also applies to light‐field cameras: frames rendered from multiple blurred HDR light‐field perspectives are also blurred. While the recording times of exposure sequences cannot be reduced for a single‐sensor camera, we demonstrate how this can be achieved for a camera array. Thus, we decrease capturing time and reduce motion blur for HDR light‐field video recording. Applying a spatio‐temporal exposure pattern while capturing frames with a camera array reduces the overall recording time and enables the estimation of camera movement within one light‐field video frame. By estimating depth maps and local point spread functions (PSFs) from multiple perspectives with the same exposure, regional motion deblurring can be supported. Missing exposures at various perspectives are then interpolated.

20.
This paper describes a fast rendering algorithm for verification of spectacle lens design. Our method simulates refraction corrections of astigmatism as well as myopia or presbyopia. Refraction and defocus are the main issues in the simulation. For refraction, our proposed method uses ray tracing on a per-vertex basis, which warps the environment map and produces a real-time refracted image that is subjectively as good as ray tracing. Conventional defocus simulation was previously done by distribution ray tracing, and a real-time solution was impossible. We introduce the concept of a blur field, which we use to displace every vertex according to its position. The blurring information is precomputed as a set of field values distributed to voxels which are formed by evenly subdividing the perspective-projected space. The field values can be determined by tracing a wavefront from each voxel through the lens and the eye, and by evaluating the spread of light at the retina, considering the best human accommodation effort. The blur field is stored as texture data and referred to by the vertex shader that displaces each vertex. With an interactive frame rate, blending the multiple rendering results produces a blurred image comparable to distribution ray tracing output.
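The blur field lookup itself is a plain trilinear interpolation in a voxel grid, which the sketch below reproduces on the CPU; the grid layout, bounds handling and the meaning of the stored values are simplified assumptions relative to the paper's perspective-space subdivision.

```python
import numpy as np

def sample_blur_field(blur_field, pos, grid_min, grid_max):
    """Trilinear lookup of a precomputed blur value at a 3D position.
    blur_field -- array of shape (nx, ny, nz) or (nx, ny, nz, c)
    pos        -- 3D query position inside the box [grid_min, grid_max]"""
    n = np.array(blur_field.shape[:3])
    u = (np.asarray(pos, float) - grid_min) / (np.asarray(grid_max, float) - grid_min)
    u = np.clip(u, 0.0, 1.0) * (n - 1)
    i0 = np.minimum(np.floor(u).astype(int), n - 2)   # lower corner of the cell
    f = u - i0                                        # fractional position in the cell
    out = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (f[0] if dx else 1 - f[0]) * \
                    (f[1] if dy else 1 - f[1]) * \
                    (f[2] if dz else 1 - f[2])
                out = out + w * blur_field[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return out
```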
