20 similar documents found.
1.
Recent advances have made interactive ray tracing (IRT) possible on consumer desktop machines. These advances have brought about the potential for interactive global illumination (IGI) with enhanced realism through physically based lighting. Unlike IRT, IGI has a much higher computational complexity. Furthermore, since non-primary rays constitute the majority of the computation, the rays are predominantly incoherent, making impractical many of the methods that have made IRT possible. Two methods that have already shown promise in decreasing the computational time of the GI solution are interleaved sampling and adaptive rendering. Interleaved sampling is a generalized sampling scheme that smoothly blends between regular and irregular sampling while maintaining coherence. Adaptive rendering algorithms adjust rendering quality non-uniformly using a guidance scheme. While adaptive rendering has been shown to provide speed-ups for off-line rendering, it has not been utilized in IRT due to its naturally incoherent nature. In this paper, we combine adaptive rendering and interleaved sampling within a component-based solution into a new approach we term adaptive interleaved sampling. This allows us to tailor new adaptive heuristics for interleaved sampling of the individual components of the GI solution, significantly improving overall performance. We present a novel component-based IGI framework for which we achieve interactive frame rates for a range of effects such as indirect diffuse lighting, soft shadows and single-scatter homogeneous participating media.
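A minimal sketch of the interleaved-sampling idea itself (not the authors' component-based heuristics): each pixel is assigned one of a small set of precomputed sample sets according to its position inside a repeating tile, so nearby pixels use different samples while pixels at the same tile position remain coherent. The tile size, sample count and jittered sample generator below are illustrative choices.

```python
import random

def make_interleaved_sampler(tile_w=3, tile_h=3, samples_per_set=4, seed=0):
    """Precompute tile_w * tile_h independent sample sets (here: random 2D
    offsets); each pixel picks a set by its position inside the tile."""
    rng = random.Random(seed)
    sample_sets = [
        [(rng.random(), rng.random()) for _ in range(samples_per_set)]
        for _ in range(tile_w * tile_h)
    ]

    def samples_for_pixel(x, y):
        # The set index depends only on the pixel's position within the
        # repeating tile: regular across tiles, irregular within a tile.
        set_index = (y % tile_h) * tile_w + (x % tile_w)
        return sample_sets[set_index]

    return samples_for_pixel

sampler = make_interleaved_sampler()
print(sampler(0, 0))   # the same set is reused at (3, 0), (0, 3), ...
print(sampler(1, 0))   # the neighbouring pixel uses a different set
```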
2.
This paper presents an improvement to stochastic progressive photon mapping (SPPM), a method for robustly simulating complex global illumination with distributed ray tracing effects. Like photon mapping and other particle-tracing algorithms, SPPM becomes inefficient when the photons are poorly distributed: an inordinate number of photons is required to reduce the error caused by noise and bias to acceptable levels. In order to optimize the distribution of photons, we propose an extension of SPPM with a Metropolis-Hastings algorithm, effectively exploiting local coherence among the light paths that contribute to the rendered image. A well-designed scalar contribution function is introduced as our Metropolis sampling strategy, targeting image areas with large error to improve the efficiency of the radiance estimator. Experimental results demonstrate that the new Metropolis sampling based approach maintains the robustness of the standard SPPM method, while significantly improving the rendering efficiency for a wide range of scenes with complex lighting.
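For illustration, a generic Metropolis-Hastings chain driven by a scalar contribution function, which is the sampling machinery the abstract refers to. It is a sketch over a toy 1D state space, not the paper's path-space mutation strategy; the mutation kernel, its step size and the toy contribution function are assumptions.

```python
import math
import random

def metropolis_sample(contribution, mutate, x0, n_steps, rng=random.Random(1)):
    """Metropolis-Hastings: states with larger scalar contribution are visited
    proportionally more often.  `mutate` is assumed symmetric, so the
    acceptance probability reduces to min(1, f(y) / f(x))."""
    x, fx = x0, contribution(x0)
    samples = []
    for _ in range(n_steps):
        y = mutate(x, rng)
        fy = contribution(y)
        if fx == 0.0 or rng.random() < min(1.0, fy / fx):
            x, fx = y, fy            # accept the mutation
        samples.append(x)            # a rejected mutation repeats the old state
    return samples

# Toy state space: numbers in [0, 1]; the contribution peaks at 0.7, standing
# in for an image-space error measure the chain should concentrate on.
contribution = lambda x: math.exp(-((x - 0.7) ** 2) / 0.01)
mutate = lambda x, rng: min(1.0, max(0.0, x + rng.gauss(0.0, 0.05)))
chain = metropolis_sample(contribution, mutate, 0.5, 10000)
print(sum(chain) / len(chain))       # the chain concentrates near 0.7
```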
3.
Interactive computation of global illumination is a major challenge in current computer graphics research. Global illumination heavily affects the visual quality of generated images and is therefore a key attribute for the perception of photo-realistic images. Path tracing is able to simulate the physical behaviour of light using Monte Carlo techniques. However, the computational burden of this technique prohibits high-quality interactive rendering on standard commodity hardware. Evaluating the Monte Carlo integral with fewer samples results in characteristically noisy images. Global illumination filtering methods take advantage of the fact that the integral for neighbouring pixels may be very similar. Averaging samples of similar characteristics in screen space may approximate the correct integral, but may result in visible outliers. In this paper, we present a novel path tracing pipeline based on an edge-aware filtering method for the indirect illumination which produces visually more pleasing results without noticeable outliers. The key idea is not to filter the noisy path-traced image itself but to use it as guidance for filtering a second image composed from characteristic scene attributes that are noise-free by default. We show that our approach better approximates the Monte Carlo integral compared to previous methods. Since the computation is carried out completely in screen space, the method is applicable to fully dynamic scenes and arbitrary lighting, and allows for high-quality path tracing at interactive frame rates on commodity hardware.
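As an illustration of guided, edge-aware filtering in screen space, the sketch below is a conventional cross-bilateral filter: the noisy indirect-illumination buffer is averaged with weights taken from noise-free depth and normal buffers. Note that the paper inverts these roles (it uses the noisy image to guide filtering of a noise-free attribute image); this sketch only shows the basic guided-weight machinery, and the guidance buffers, kernel radius and sigmas are illustrative assumptions.

```python
import numpy as np

def cross_bilateral_filter(noisy, depth, normal, radius=4,
                           sigma_depth=0.05, sigma_normal=0.2):
    """Average the noisy buffer with weights derived from noise-free feature
    buffers (depth, normals), so averaging stops at geometric edges.
    Brute-force O(radius^2) per pixel; real-time versions use separable or
    a-trous variants."""
    h, w = noisy.shape
    out = np.zeros_like(noisy)
    for y in range(h):
        for x in range(w):
            acc, wsum = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if not (0 <= yy < h and 0 <= xx < w):
                        continue
                    wd = np.exp(-(depth[yy, xx] - depth[y, x]) ** 2
                                / (2.0 * sigma_depth ** 2))
                    wn = np.exp(-np.sum((normal[yy, xx] - normal[y, x]) ** 2)
                                / (2.0 * sigma_normal ** 2))
                    acc += wd * wn * noisy[yy, xx]
                    wsum += wd * wn
            out[y, x] = acc / max(wsum, 1e-8)
    return out
```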
4.
Illustrative volume visualization frequently employs non-photorealistic rendering techniques to enhance important features or to suppress unwanted details. However, it is difficult to integrate multiple non-photorealistic rendering approaches into a single framework due to great differences in the individual methods and their parameters. In this paper, we present the concept of style transfer functions. Our approach enables flexible data-driven illumination which goes beyond using the transfer function to just assign colors and opacities. An image-based lighting model uses sphere maps to represent non-photorealistic rendering styles. Style transfer functions allow us to combine a multitude of different shading styles in a single rendering. We extend this concept with a technique for curvature-controlled style contours and an illustrative transparency model. Our implementation of the presented methods allows interactive generation of high-quality volumetric illustrations.
5.
Depth-of-Field Rendering by Pyramidal Image Processing
We present an image-based algorithm for interactively rendering depth-of-field effects in images with depth maps. While previously published methods for interactive depth-of-field rendering suffer from various rendering artifacts such as color bleeding and sharpened or darkened silhouettes, our algorithm achieves a significantly improved image quality by employing recently proposed GPU-based pyramid methods for image blurring and pixel disocclusion. For the same reason, our algorithm offers interactive rendering performance on modern GPUs and is suitable for real-time rendering with small circles of confusion. We validate the image quality provided by our algorithm through side-by-side comparisons with results obtained by distributed ray tracing.
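A toy version of the pyramidal idea, assuming a grayscale image and a per-pixel circle-of-confusion map expressed directly in pyramid levels: build a blur pyramid by repeated downsampling and, per pixel, read from the level matching the desired blur. The actual algorithm also handles disocclusion and interpolates smoothly between levels, which is omitted here.

```python
import numpy as np

def downsample(img):
    """One pyramid level: 2x2 average pooling."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def dof_from_pyramid(sharp, coc_levels, levels=4):
    """Per pixel, look up the pyramid level whose blur roughly matches the
    circle of confusion (given here directly as a level index)."""
    pyramid = [sharp]
    for _ in range(levels):
        pyramid.append(downsample(pyramid[-1]))
    h, w = sharp.shape
    out = np.empty_like(sharp)
    for y in range(h):
        for x in range(w):
            lvl = int(np.clip(coc_levels[y, x], 0, levels))
            py = min(y >> lvl, pyramid[lvl].shape[0] - 1)
            px = min(x >> lvl, pyramid[lvl].shape[1] - 1)
            out[y, x] = pyramid[lvl][py, px]   # nearest-neighbour lookup
    return out
```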
6.
M. Eisemann, B. De Decker, M. Magnor, P. Bekaert, E. de Aguiar, N. Ahmed, C. Theobalt, A. Sellent. Computer Graphics Forum, 2008, 27(2): 409-418
We present a novel multi-view projective texture mapping technique. While previous multi-view texturing approaches lead to blurring and ghosting artefacts if the 3D geometry and/or camera calibration are imprecise, we propose a texturing algorithm that warps ("floats") projected textures at run time to preserve a crisp, detailed texture appearance. Our GPU implementation achieves interactive to real-time frame rates. The method is generally applicable and can be used in combination with many image-based rendering methods or projective texturing applications. By using Floating Textures in conjunction with, e.g., visual hull rendering, light field rendering, or free-viewpoint video, improved rendering results are obtained from fewer input images, less accurately calibrated cameras, and coarser 3D geometry proxies.
7.
This paper presents a method for the accurate rendering of path-based surface details such as grooves, scratches and similar features. The method is based on a continuous representation of the features in texture space, and the rendering is performed by means of two approaches: one for isolated or non-intersecting grooves and another for special situations such as intersections or groove ends. The proposed solutions perform correct antialiasing and take into account visibility and inter-reflections with little computational effort and low memory requirements. Compared to anisotropic BRDFs and scratch models, we impose no limitations on the distribution of grooves over the surface or on their geometry, thus allowing more general patterns. Compared to displacement mapping techniques, we can efficiently simulate features of all sizes without requiring additional geometry or multiple representations.
8.
Yonghao Yue, Kei Iwasaki, Bing-Yu Chen, Yoshinori Dobashi, Tomoyuki Nishita. Computer Graphics Forum, 2011, 30(7): 1911-1919
Photo-realistic rendering of inhomogeneous participating media that accounts for light scattering is important in computer graphics, and is typically computed using Monte Carlo based methods. The key technique in such methods is free path sampling, which determines the distance (free path) between successive scattering events. Recently, it has been shown that efficient and unbiased free path sampling methods can be constructed based on Woodcock tracking. The key concept for improving the efficiency is to utilize space partitioning (e.g., a kd-tree or uniform grid), and a better space partitioning scheme is important for better sampling efficiency. Thus, an estimation framework for investigating the gain in sampling efficiency is important for determining how to partition the space. However, no such estimation framework currently works in 3D space. In this paper, we propose a new estimation framework to overcome this problem. Using our framework, we can analytically estimate the sampling efficiency for any typical partitioned space. Conversely, we can also use this estimation framework for determining the optimal space partitioning. As an application, we show that new space partitioning schemes can be constructed using our estimation framework. Moreover, we show that the differences in performance between different schemes can be predicted fairly well using our estimation framework.
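Woodcock tracking, the free path sampling technique the abstract builds on, is compact enough to sketch directly. The toy extinction function and majorant below are assumptions; in partitioned-space schemes such as those discussed in the paper, the single global majorant is replaced by a tighter per-cell bound so fewer tentative collisions are rejected.

```python
import math
import random

def woodcock_free_path(sigma_t, sigma_max, origin, direction, t_max=1e4,
                       rng=random.Random(0)):
    """Sample a free path in a heterogeneous medium with extinction
    sigma_t(x) <= sigma_max.  Tentative collision distances are drawn
    against the majorant and accepted with probability sigma_t/sigma_max,
    which yields an unbiased free-path distribution."""
    t = 0.0
    while True:
        t -= math.log(1.0 - rng.random()) / sigma_max
        if t > t_max:
            return math.inf          # escaped the medium without a collision
        x = [o + t * d for o, d in zip(origin, direction)]
        if rng.random() < sigma_t(x) / sigma_max:
            return t                 # distance to the next scattering event

# Toy heterogeneous extinction, bounded above by the majorant 1.0.
sigma_t = lambda x: 0.8 + 0.2 * math.sin(5.0 * x[2])
print(woodcock_free_path(sigma_t, 1.0, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))
```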
9.
Jonathan Brouillat, Christian Bouville, Brad Loos, Charles Hansen, Kadi Bouatouch. Computer Graphics Forum, 2009, 28(8): 2315-2329
Most Monte Carlo rendering algorithms rely on importance sampling to reduce the variance of estimates. Importance sampling is efficient when the proposal sample distribution is well-suited to the form of the integrand, but fails otherwise. The main reason is that sample location information is not exploited: all sample values are given the same importance regardless of their proximity to one another. Two samples falling at nearly the same location are given equal importance even though they are likely to contain redundant information. The Bayesian approach we propose in this paper uses both the location and value of the data to infer an integral value based on a prior probabilistic model of the integrand. The Bayesian estimate depends only on the sample values and locations, and not on how these samples have been chosen. We show how this theory can be applied to the final gathering problem and present results that clearly demonstrate the benefits of Bayesian Monte Carlo.
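A minimal Bayesian Monte Carlo estimator in one dimension, assuming a zero-mean Gaussian-process prior with a squared-exponential covariance and an integration domain of [0, 1]; the hyper-parameters and test integrand are illustrative. Unlike a plain Monte Carlo average, the estimate z^T (K + sigma^2 I)^{-1} y weights each sample using its location relative to the others.

```python
import numpy as np
from math import erf, sqrt, pi

def bayesian_mc_1d(xs, ys, length_scale=0.1, noise=1e-6):
    """Bayesian quadrature estimate of integral_0^1 f(x) dx from samples
    (xs, ys), under a zero-mean GP prior with squared-exponential covariance."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    K = np.exp(-0.5 * ((xs[:, None] - xs[None, :]) / length_scale) ** 2)
    # z_i = integral over [0, 1] of the covariance between x and sample x_i.
    z = np.array([length_scale * sqrt(pi / 2.0) *
                  (erf((1.0 - xi) / (sqrt(2.0) * length_scale)) +
                   erf(xi / (sqrt(2.0) * length_scale)))
                  for xi in xs])
    weights = np.linalg.solve(K + noise * np.eye(len(xs)), ys)
    return float(z @ weights)

xs = np.linspace(0.02, 0.98, 15)      # sample locations
ys = np.sin(np.pi * xs)               # sample values of the integrand
print(bayesian_mc_1d(xs, ys))         # true integral is 2/pi ~ 0.6366
```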
10.
Jiating Chen, Bin Wang, Yuxiang Wang, Ryan S. Overbeck, Jun-Hai Yong, Wenping Wang. Computer Graphics Forum, 2011, 30(6): 1667-1680
Depth-of-field is one of the most crucial rendering effects for synthesizing photorealistic images. Unfortunately, this effect is also extremely costly: it can take hundreds to thousands of samples to achieve noise-free results using Monte Carlo integration. This paper introduces an efficient adaptive depth-of-field rendering algorithm that achieves noise-free results using significantly fewer samples. Our algorithm consists of two main phases: adaptive sampling and image reconstruction. In the adaptive sampling phase, the adaptive sample density is determined by a 'blur-size' map and a 'pixel-variance' map computed during initialization. In the image reconstruction phase, based on the blur-size map, we use a novel multiscale reconstruction filter to dramatically reduce the noise in the defocused areas where the sampled radiance has high variance. Because of the efficiency of this new filter, only a few samples are required. With the combination of the adaptive sampler and the multiscale filter, our algorithm renders near-reference quality depth-of-field images with significantly fewer samples than previous techniques.
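One plausible way to turn the two maps into a sample budget, shown as a sketch (the product heuristic, minimum sample count and budget below are assumptions, not the paper's exact allocation rule): every pixel gets a base number of samples, and the rest of the budget is distributed proportionally to blur size times variance.

```python
import numpy as np

def allocate_samples(blur_size, pixel_variance, total_budget, min_spp=4):
    """Distribute a sample budget: a fixed minimum per pixel plus a share of
    the remainder proportional to the blur-size * variance importance score."""
    importance = blur_size * pixel_variance
    importance = importance / max(importance.sum(), 1e-12)
    extra = max(total_budget - min_spp * blur_size.size, 0)
    return min_spp + np.floor(extra * importance).astype(int)

blur = np.array([[0.0, 2.0], [1.0, 4.0]])    # per-pixel blur-size map
var = np.array([[0.1, 0.5], [0.2, 1.0]])     # per-pixel variance map
print(allocate_samples(blur, var, total_budget=64))
```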
11.
We present an optimized pruning algorithm that allows for considerable geometry reduction in large botanical scenes while maintaining high and coherent rendering quality. We improve upon previous techniques by applying model-specific geometry reduction functions and optimized scaling functions. For this we introduce the use of Precision and Recall (PR) as a measure of rendering quality and show how PR-scores can be used to predict better scaling values. We conducted a user study in which subjects adjusted the scaling value, and it shows that the predicted scaling matches the preferred one. Finally, we extend the originally purely stochastic geometry prioritization for pruning to account for view-optimized geometry selection, which allows global scene information, such as occlusion, to be taken into consideration. We demonstrate our method on the rendering of scenes with thousands of complex tree models in real time.
12.
K. Debattista, P. Dubla, F. Banterle, L.P. Santos, A. Chalmers. Computer Graphics Forum, 2009, 28(8): 2216-2228
The ability to interactively render dynamic scenes with global illumination is one of the main challenges in computer graphics. Improvements in the performance of interactive ray tracing, brought about by significant advances in hardware and careful exploitation of coherence, have made interactive global illumination a realistic prospect. However, the simulation of complex light transport phenomena, such as diffuse interreflections, is still quite costly to compute in real time. In this paper we present a caching scheme, termed Instant Caching, based on a combination of irradiance caching and instant radiosity. By reusing calculations from neighbouring computations, it achieves a speedup over previous instant radiosity-based approaches. Additionally, temporal coherence is exploited by identifying which computations have been invalidated by geometric transformations and updating only those paths. The exploitation of spatial and temporal coherence allows us to achieve superior frame rates for interactive global illumination within dynamic scenes, without any precomputation or quality loss compared to previous methods; handling of lighting and material changes is also demonstrated.
13.
T. Schultz, N. Sauber, A. Anwander, H. Theisel, H.-P. Seidel. Computer Graphics Forum, 2008, 27(3): 1063-1070
Fiber tracking is a standard tool to estimate the course of major white matter tracts from diffusion tensor magnetic resonance imaging (DT-MRI) data. In this work, we aim at supporting the visual analysis of classical streamlines from fiber tracking by integrating context from anatomical data, acquired by a T1-weighted MRI measurement. To this end, we suggest a novel visualization metaphor, which is based on data-driven deformation of geometry and has been inspired by a technique for anatomical fiber preparation known as Klingler dissection. We demonstrate that our method conveys the relation between streamlines and surrounding anatomical features more effectively than standard techniques like slice images and direct volume rendering. The method works automatically, but its GPU-based implementation allows for additional, intuitive interaction.
14.
Interactive Simulation of the Human Eye Depth of Field and Its Correction by Spectacle Lenses
Masanori Kakimoto, Tomoaki Tatsukawa, Yukiteru Mukai, Tomoyuki Nishita. Computer Graphics Forum, 2007, 26(3): 627-636
This paper describes a fast rendering algorithm for verifying spectacle lens designs. Our method simulates refraction corrections for astigmatism as well as myopia and presbyopia. Refraction and defocus are the main issues in the simulation. For refraction, our proposed method uses per-vertex ray tracing, which warps the environment map and produces a real-time refracted image that is subjectively as good as ray tracing. Defocus was conventionally simulated by distribution ray tracing, for which a real-time solution was impossible. We introduce the concept of a blur field, which we use to displace every vertex according to its position. The blurring information is precomputed as a set of field values distributed over voxels formed by evenly subdividing the perspective-projected space. The field values are determined by tracing a wavefront from each voxel through the lens and the eye, and by evaluating the spread of light at the retina considering the best human accommodation effort. The blur field is stored as texture data and referenced by the vertex shader that displaces each vertex. At an interactive frame rate, blending the multiple rendering results produces a blurred image comparable to distribution ray tracing output.
15.
This paper introduces a framebuffer level-of-detail algorithm for controlling the pixel workload in an interactive rendering application. Our basic strategy is to evaluate the shading in a low-resolution buffer and, in a second rendering pass, resample this buffer at the desired screen resolution. The size of the lower-resolution buffer provides a trade-off between rendering time and the level of detail in the final shading. In order to reduce approximation error we use a feature-preserving reconstruction technique that more faithfully approximates the shading near depth and normal discontinuities. We also demonstrate how intermediate components of the shading can be selectively resized to provide finer-grained control over resource allocation. Finally, we introduce a simple control mechanism that continuously adjusts the amount of resizing necessary to maintain a target framerate. These techniques do not require any preprocessing, are straightforward to implement on modern GPUs, and are shown to provide significant performance gains for several pixel-bound scenes.
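The control mechanism mentioned at the end lends itself to a small sketch: a proportional feedback loop that shrinks the off-screen shading buffer when the last frame missed the target time and grows it back when there is headroom. The gain, clamping range and 30 fps target below are illustrative assumptions, not the paper's controller.

```python
def update_resize_factor(scale, frame_time_ms, target_ms=33.3,
                         gain=0.1, min_scale=0.25, max_scale=1.0):
    """Adjust the resolution scale of the shading buffer so the measured
    frame time converges toward the target frame time."""
    error = (target_ms - frame_time_ms) / target_ms   # > 0 means headroom
    scale *= 1.0 + gain * error
    return max(min_scale, min(max_scale, scale))

# A frame took 50 ms at scale 0.8, so the buffer is shrunk slightly.
print(update_resize_factor(0.8, 50.0))
```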
16.
In this paper we present a technique for computing translational gradients of indirect surface reflectance in scenes containing participating media and significant occlusions. These gradients describe how the incident radiance field changes with respect to translation on surfaces. Previous techniques for computing gradients ignore the effects of volume scattering and attenuation and assume that radiance is constant along rays connecting surfaces. We present a novel gradient formulation that correctly captures the influence of participating media. Our formulation accurately accounts for changes of occlusion, including the effect of surfaces occluding scattering media. We show how the proposed gradients can be used within an irradiance caching framework to more accurately handle scenes with participating media, providing significant improvements in interpolation quality.
17.
Witawat Rungjiratananon, Zoltan Szego, Yoshihiro Kanamori, Tomoyuki Nishita. Computer Graphics Forum, 2008, 27(7): 1887-1893
Recent advances in physically-based simulation have made it possible to generate realistic animations. However, in the case of solid-fluid coupling, wetting effects have received little attention despite their visual importance, especially in interactions between fluids and granular materials. This paper presents a simple particle-based method to model the physical mechanism by which wetness propagates through granular materials: fluid particles are absorbed into the spaces between the granular particles, and these wetted granular particles then stick together due to liquid bridges caused by surface tension, which subsequently disappear when over-wetting occurs. Our method handles these phenomena by introducing a wetness value for each granular particle and by integrating the wetness-dependent aspects of behaviour into the simulation framework. Using this method, a GPU-based simulator can achieve highly dynamic animations that include wetting effects in real time.
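A sketch of the wetness bookkeeping described above, with illustrative absorption and bridge thresholds (not the paper's constants): nearby fluid volume is absorbed into a per-particle wetness value, and cohesive liquid bridges are flagged as active only while wetness stays in an intermediate range.

```python
def update_wetness(granular, fluid, absorb_radius=0.1, absorb_rate=0.2,
                   bridge_min=0.05, bridge_max=0.8):
    """Absorb nearby fluid into each granular particle's wetness value and
    activate liquid bridges only for moderately wet particles
    (over-wetting removes them)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    for g in granular:                       # g: {'pos': (x, y, z), 'wetness': w}
        for f in fluid:                      # f: {'pos': (x, y, z), 'volume': v}
            if f['volume'] > 0.0 and dist2(g['pos'], f['pos']) < absorb_radius ** 2:
                absorbed = min(f['volume'], absorb_rate * (1.0 - g['wetness']))
                f['volume'] -= absorbed
                g['wetness'] = min(1.0, g['wetness'] + absorbed)
        g['bridges_active'] = bridge_min < g['wetness'] < bridge_max

granular = [{'pos': (0.0, 0.0, 0.0), 'wetness': 0.0}]
fluid = [{'pos': (0.05, 0.0, 0.0), 'volume': 1.0}]
update_wetness(granular, fluid)
print(granular[0]['wetness'], granular[0]['bridges_active'])   # 0.2 True
```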
18.
We present a real-time rendering algorithm for inhomogeneous, single-scattering media, in which all-frequency shading effects such as glows, light shafts, and volumetric shadows can be captured. The algorithm first computes source radiance at a small number of sample points in the medium, then interpolates these values at other points in the volume using a gradient-based scheme that is efficiently applied by sample splatting. The sample points are determined dynamically by a recursive sample-splitting procedure that adapts their number and locations for accurate and efficient reproduction of shading variations in the medium. The entire pipeline can be easily implemented on the GPU to achieve real-time performance for dynamic lighting and scenes. Rendering results of our method are shown to be comparable to those from ray tracing.
19.
We present a new and accurate method to render the atmosphere in real time from any viewpoint, from ground level to outer space, while taking Rayleigh and Mie multiple scattering into account. Our method reproduces many effects of the scattering of light, such as the daylight and twilight sky color and aerial perspective for all view and light directions, or the Earth and mountain shadows (light shafts) inside the atmosphere. Our method is based on a formulation of the light transport equation that is precomputable for all viewpoints, view directions and sun directions. We show how to store this data compactly and propose a GPU-compliant algorithm to precompute it in a few seconds. The precomputed data allows us to evaluate the light transport equation at runtime in constant time, without any sampling, while taking the ground into account for shadows and light shafts.
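The Rayleigh and Mie scattering the abstract refers to are usually characterized by their phase functions; below are the standard Rayleigh phase function and the Cornette-Shanks approximation commonly used as an analytic stand-in for Mie scattering in sky models. The asymmetry parameter g = 0.76 is a typical value, not necessarily the one used in the paper.

```python
import math

def rayleigh_phase(mu):
    """Rayleigh phase function, p(mu) = 3 / (16*pi) * (1 + mu^2),
    where mu is the cosine of the scattering angle."""
    return 3.0 / (16.0 * math.pi) * (1.0 + mu * mu)

def mie_phase_cornette_shanks(mu, g=0.76):
    """Cornette-Shanks phase function for aerosol (Mie) scattering
    (g is the asymmetry parameter; g = 0 reduces to Rayleigh-like shape)."""
    return (3.0 / (8.0 * math.pi)) * ((1.0 - g * g) * (1.0 + mu * mu)) / \
           ((2.0 + g * g) * (1.0 + g * g - 2.0 * g * mu) ** 1.5)

print(rayleigh_phase(1.0), mie_phase_cornette_shanks(1.0))   # forward-scattering values
```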
20.
Generating plausible deformations of a character's skin within the standard production pipeline is a challenge. This paper presents a volume preservation method dedicated to skinned characters. As usual, the character is defined by a skin mesh at some rest pose and an animation skeleton. At each animation step, skin deformations are first computed using standard SSD (skeleton subspace deformation). Our method corrects the result using a set of local deformations which model the fold-over-free, constant-volume behavior of soft tissues. This is done geometrically, without the need for any physically-based simulation. To make the method easily applicable, we also provide automatic ways to extract the local regions where volume is to be preserved and to compute adequate skinning weights, both based on the character's morphology.
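The SSD step that the correction builds on is standard linear blend skinning and is easy to sketch: each deformed vertex is the weighted blend of the vertex transformed by every influencing bone, v' = sum_i w_i M_i v. The matrix layout and the toy two-bone example are illustrative.

```python
import numpy as np

def ssd_skinning(rest_verts, weights, bone_transforms):
    """Linear blend skinning (SSD).
    rest_verts:      (V, 3) rest-pose positions
    weights:         (V, B) skinning weights, rows summing to 1
    bone_transforms: (B, 4, 4) current bone matrices (rest pose -> posed)."""
    V = rest_verts.shape[0]
    homo = np.hstack([rest_verts, np.ones((V, 1))])              # (V, 4)
    per_bone = np.einsum('bij,vj->vbi', bone_transforms, homo)   # (V, B, 4)
    blended = np.einsum('vb,vbi->vi', weights, per_bone)         # (V, 4)
    return blended[:, :3]

# One vertex, two bones: identity and a +1 translation in x, blended 50/50.
rest = np.array([[1.0, 0.0, 0.0]])
w = np.array([[0.5, 0.5]])
M0 = np.eye(4)
M1 = np.eye(4); M1[0, 3] = 1.0
print(ssd_skinning(rest, w, np.stack([M0, M1])))   # [[1.5, 0.0, 0.0]]
```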