Similar Articles
20 similar articles found.
1.
In this paper, we extend the concept of pre‐filtered shadow mapping to stochastic rasterization, enabling real‐time rendering of soft shadows from planar area lights. Most existing soft shadow mapping methods lose important visibility information by relying on pinhole renderings from an area light source, providing plausible results only for small light sources. Since we sample the entire 4D shadow light field stochastically, we are able to closely approximate shadows of large area lights as well. In order to efficiently reconstruct smooth shadows from this sparse data, we exploit the analogy of soft shadow computation to rendering defocus blur, and introduce a multiplane pre‐filtering algorithm. We demonstrate how existing pre‐filterable approximations of the visibility function, such as variance shadow mapping, can be extended to four dimensions within our framework.
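As a point of reference for the pre‐filterable visibility approximation named above, the sketch below shows the standard variance shadow mapping test (the Chebyshev upper bound). The function name and the idea of feeding it moments filtered over the 4D shadow light field are illustrative assumptions, not the paper's exact formulation.

    def vsm_visibility(moments, receiver_depth, min_variance=1e-4):
        # `moments` holds pre-filtered depth moments (E[z], E[z^2]); in a 4D
        # extension these would be filtered over the stochastically sampled
        # shadow light field rather than a plain 2D shadow map.
        mean, mean_sq = moments
        variance = max(mean_sq - mean * mean, min_variance)
        if receiver_depth <= mean:
            return 1.0  # receiver is in front of the average occluder: fully lit
        d = receiver_depth - mean
        return variance / (variance + d * d)  # Chebyshev upper bound on visibility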

2.
This paper proposes a pipeline to accurately acquire, efficiently reproduce and intuitively manipulate phosphorescent appearance. In contrast to common appearance models, a model of phosphorescence needs to account for temporal change (decay) and previous illumination (saturation). For reproduction, we propose a rate equation that can be efficiently solved in combination with other illumination in a mixed integro‐differential equation system. We describe an acquisition system to measure spectral coefficients of this rate equation for actual materials. Our model is evaluated by comparison to photographs of actual phosphorescent objects. Finally, we propose an artist‐friendly interface to control the behavior of phosphorescent materials by specifying spatiotemporal appearance constraints.
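To make the decay/saturation behaviour concrete, here is a minimal explicit‐Euler step of an illustrative phosphorescence rate equation. The parameter names and the specific charge/decay terms are assumptions chosen for illustration, not the measured spectral model described in the abstract.

    def step_phosphor(energy, irradiance, dt,
                      absorb=0.8, saturation=1.0, decay_tau=2.0):
        # Stored energy is charged by incident irradiance until it saturates,
        # and decays exponentially; the decayed energy is what gets re-emitted.
        charge = absorb * irradiance * (1.0 - energy / saturation)
        emitted = energy / decay_tau
        energy = max(energy + dt * (charge - emitted), 0.0)
        return energy, emitted  # new stored energy, emitted radiance (per band)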

3.
We present a novel appearance model for paper. Based on our appearance measurements for matte and glossy paper, we find that paper exhibits a combination of subsurface scattering, specular reflection, retroreflection, and surface sheen. Classic microfacet and simple diffuse reflection models cannot simulate the double‐sided appearance of a thin layer. Our novel BSDF model matches our measurements for paper and accounts for both reflection and transmission properties. At the core of the BSDF model is a method for converting a multi‐layer subsurface scattering model (BSSRDF) into a BSDF, which allows us to retain physically‐based absorption and scattering parameters obtained from the measurements. We also introduce a method for computing the amount of light available for subsurface scattering due to transmission through a rough dielectric surface. Our final model accounts for multiple scattering, single scattering, and surface reflection and is capable of rendering paper with varying levels of roughness and glossiness on both sides.

4.
Distribution effects such as diffuse global illumination, soft shadows and depth of field are most accurately rendered using Monte Carlo ray or path tracing. However, physically accurate algorithms can take hours to converge to a noise‐free image. A recent body of work has begun to bridge this gap, showing that both individual and multiple effects can be achieved accurately and efficiently. These methods use sparse sampling, GPU raytracers, and adaptive filtering for reconstruction. They are based on a Fourier analysis, which models distribution effects as a wedge in the frequency domain. The wedge can be approximated as a single large axis‐aligned filter, which is fast but retains a large area outside the wedge and therefore requires a higher sampling rate, or as a tighter sheared filter, which is slow to compute. The state‐of‐the‐art fast sheared filtering method combines a low sampling rate with efficient filtering, but has been demonstrated for individual distribution effects only, and is limited by high‐dimensional data storage and processing. We present a novel filter for efficient rendering of combined effects, involving soft shadows and depth of field, with global (diffuse indirect) illumination. We approximate the wedge spectrum with multiple axis‐aligned filters, marrying the speed of axis‐aligned filtering with an even more accurate (compact and tighter) representation than sheared filtering. We demonstrate rendering of single effects at sampling and frame rates comparable to fast sheared filtering. Our main practical contribution is in rendering multiple distribution effects, which have not even been demonstrated accurately with sheared filtering. For this case, we present an average speedup of 6× compared with previous axis‐aligned filtering methods.

5.
We present a new compressed sensing framework for reconstruction of incomplete and possibly noisy images and their higher dimensional variants, e.g. animations and light‐fields. The algorithm relies on a learning‐based basis representation. We train an ensemble of intrinsically two‐dimensional (2D) dictionaries that operate locally on a set of 2D patches extracted from the input data. We show that one can convert the problem of 2D sparse signal recovery to an equivalent 1D form, enabling us to utilize a large family of sparse solvers. The proposed framework represents the input signals in a reduced union of subspaces model, while allowing sparsity in each subspace. Such a model leads to a much sparser representation than widely used methods such as K‐SVD. To evaluate our method, we apply it to three different scenarios where the signal dimensionality varies from 2D (images) to 3D (animations) and 4D (light‐fields). We show that our method outperforms state‐of‐the‐art algorithms in the computer graphics and image processing literature.
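The 2D‐to‐1D conversion mentioned in the abstract can be illustrated with the standard Kronecker identity for separable dictionaries; the dictionaries and sparse codes below are random placeholders, and the paper's actual construction may differ.

    import numpy as np

    # A patch approximated as D1 @ X @ D2.T with sparse coefficients X satisfies
    # vec(P) = kron(D2, D1) @ vec(X) (column-major vec), so any 1D sparse solver
    # (OMP, LASSO, ...) can be used to recover vec(X).
    rng = np.random.default_rng(0)
    D1 = rng.standard_normal((8, 16))   # dictionary acting on patch rows
    D2 = rng.standard_normal((8, 16))   # dictionary acting on patch columns
    X = np.zeros((16, 16))
    X[3, 5], X[10, 2] = 1.0, -0.5       # a sparse 2D coefficient matrix
    patch = D1 @ X @ D2.T
    assert np.allclose(patch.flatten(order="F"),
                       np.kron(D2, D1) @ X.flatten(order="F"))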

6.
Game and movie studios are switching to physically based rendering en masse, but physically accurate filter convolution is difficult to do quickly enough to update reflection probes in real time. Cubemap filtering has also become a bottleneck in the content processing pipeline. We have developed a two‐pass filtering algorithm that is specialized for isotropic reflection kernels, is several times faster than existing algorithms, and produces superior results. The first pass uses a quadratic b‐spline recurrence that is modified for cubemaps. The second pass uses lookup tables to determine optimal sampling in terms of placement, mipmap level, and coefficients. Filtering a full 128² cubemap on an NVIDIA GeForce GTX 980 takes between 160 µs and 730 µs with our method, depending on the desired quality.

7.
In photorealistic image synthesis the radiative transfer equation is often not solved by simulating every wavelength of light, but instead by computing tristimulus transport, for instance using sRGB primaries as a basis. This choice is convenient, because input texture data is usually stored in RGB colour spaces. However, there are problems with this approach which are often overlooked or ignored. By comparing to spectral reference renderings, we show how rendering in tristimulus colour spaces introduces colour shifts in indirect light, violation of energy conservation, and unexpected behaviour in participating media. Furthermore, we introduce a fast method to compute spectra from almost any given XYZ input colour. It creates spectra that match the input colour precisely. Additionally, as in natural reflectance spectra, their energy is smoothly distributed over wide wavelength bands. This method is useful both for upsampling RGB input data when spectral transport is used and as an intermediate step for corrected tristimulus‐based transport. Finally, we show how energy conservation can be enforced in RGB by mapping colours to valid reflectances.

8.
Glossy‐to‐glossy reflections are light bounced between glossy surfaces. Such directional light transport is important for humans to perceive glossy materials, but is difficult to simulate. This paper proposes a new method for rendering screen‐space glossy‐to‐glossy reflections in real time. We use spherical von Mises‐Fisher (vMF) distributions to model glossy BRDFs at surfaces, and employ the screen‐space directional occlusion (SSDO) rendering framework to trace indirect light transport bounced in screen space. As our main contributions, we derive a new parameterization of the vMF distribution that converts the non‐linear fit of multiple vMF distributions into a linear sum in the new space. We then present a new linear filtering technique to build MIP‐maps on glossy BRDFs, which allows us to create filtered radiance transfer functions at runtime and efficiently estimate indirect glossy‐to‐glossy reflections. We demonstrate our method in a real‐time application for rendering scenes with dynamic glossy objects. Compared with screen‐space directional occlusion, our approach requires only one extra texture and incurs a negligible overhead (a 3%–6% frame rate loss), but enables glossy‐to‐glossy reflections.
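For context, the sketch below shows a common trick that makes vMF lobes compatible with linear (MIP‐map) filtering: average unit directions linearly, then recover the concentration from the length of the averaged vector using the standard approximation of Banerjee et al. This is a generic illustration, not necessarily the parameterization derived in the paper.

    def vmf_from_average(r_bar):
        # r_bar: componentwise average of unit directions (a 3-element list).
        r = sum(c * c for c in r_bar) ** 0.5          # length of the average
        r = min(r, 1.0 - 1e-6)
        mu = [c / max(r, 1e-6) for c in r_bar]        # mean direction
        kappa = (3.0 * r - r ** 3) / (1.0 - r * r)    # concentration estimate
        return mu, kappa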

9.
We present a spectral rendering technique that offers a compelling set of advantages over existing approaches. The key idea is to propagate energy along paths for a small, constant number of changing wavelengths. The first of these, the hero wavelength, is randomly sampled for each path, and all directional sampling is solely based on it. The additional wavelengths are placed at equal distances from the hero wavelength, so that all path wavelengths together always evenly cover the visible range. A related technique, spectral multiple importance sampling, was already introduced a few years ago. We propose a simplified and optimised version of this approach which is easier to implement, has good performance characteristics, and is actually more powerful than the original method. Our proposed method is also superior to techniques which use a static spectral representation, as it does not suffer from any inherent representation bias. We demonstrate the performance of our method in several application areas that are of critical importance for production work, such as fidelity of colour reproduction, sub‐surface scattering, dispersion and volumetric effects. We also discuss how to couple our proposed approach with several technologies that are important in current production systems, such as photon maps, bidirectional path tracing, environment maps, and participating media.
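A minimal sketch of the wavelength placement described above: one hero wavelength is drawn uniformly and the remaining wavelengths are rotated copies at equal spacing, wrapping around the visible range. The bounds 380–720 nm and the count of four wavelengths are illustrative assumptions.

    import random

    LAMBDA_MIN, LAMBDA_MAX = 380.0, 720.0   # assumed visible range in nm

    def sample_hero_wavelengths(n=4, u=None):
        # Draw the hero wavelength uniformly, then place n-1 companions at
        # equal spacing modulo the range so the set always covers it evenly.
        span = LAMBDA_MAX - LAMBDA_MIN
        hero = LAMBDA_MIN + (random.random() if u is None else u) * span
        step = span / n
        return [LAMBDA_MIN + (hero - LAMBDA_MIN + j * step) % span
                for j in range(n)]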

10.
Rendering translucent materials in real time is usually done using surface diffusion and/or (translucent) shadow maps. The downsides of these approaches are that surface diffusion cannot handle the translucency effects that show up when rendering thin objects, and that translucent shadow maps are only available for point light sources. Furthermore, translucent shadow maps impose limitations on shadow mapping techniques that exploit the same maps. In this paper we present a novel approach for rendering translucent materials at interactive frame rates. Our approach allows for an efficient calculation of translucency with native support for general illumination conditions, especially area and environment lighting, at high accuracy. The proposed technique's only parameter is the diffusion profile used, so it works out of the box without any parameter tuning. Furthermore, it can be combined with any existing surface diffusion technique to add translucency effects. Our approach introduces Spatial Adjacency Maps, which rely on precomputation for fixed meshes. We show that these maps can be updated in real time to also handle deforming meshes, and that our results are of superior quality compared to other well‐known real‐time techniques for rendering translucency.

11.
We present a new technique to jointly MIP‐map BRDF and normal maps. Starting by generating an instant BRDF map, our technique builds its MIP‐mapped versions using a highly efficient algorithm that interpolates von Mises‐Fisher (vMF) distributions. In our BRDF MIP‐maps, each pixel stores a vMF mixture approximating the average of all BRDF lobes from the finest level. Our method is capable of jointly MIP‐mapping BRDF and normal maps, even with high‐frequency variations, in real time while preserving high‐quality reflectance details. Further, it is very fast, easy to implement, and requires no precomputation.

12.
Procedural shaders are a vital part of modern rendering systems. Despite their prevalence, however, procedural shaders remain sensitive to aliasing any time they are sampled at a rate below the Nyquist limit. Antialiasing is typically achieved through numerical techniques like supersampling or precomputing integrals stored in mipmaps. This paper explores the problem of analytically computing a band‐limited version of a procedural shader as a continuous function of the sampling rate. There is currently no known way of analytically computing these integrals in general. We explore the conditions under which exact solutions are possible and develop several approximation strategies for when they are not. Compared to supersampling methods, our approach produces shaders that are less expensive to evaluate and closer to ground truth in many cases. Compared to mipmapping or precomputation, our approach produces shaders that support an arbitrary bandwidth parameter and require less storage. We evaluate our method on a range of spatially‐varying shader functions, automatically producing antialiased versions that have comparable error to 4×4 multisampling but can be over an order of magnitude faster. While not complete, our approach is a promising first step toward this challenging goal and indicates a number of interesting directions for future work.
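As a concrete instance of the analytic band‐limiting discussed above, a sinusoidal stripe shader has a closed‐form answer: convolving sin(ωx) with a Gaussian pixel footprint of standard deviation σ scales its amplitude by exp(-(ωσ)²/2). The shader and parameter names are illustrative; the paper targets far more general shader programs.

    import math

    def stripes(x, freq=40.0):
        # Aliases badly once samples are spaced wider than the stripe period.
        return 0.5 + 0.5 * math.sin(freq * x)

    def stripes_bandlimited(x, footprint_sigma, freq=40.0):
        # Exact expectation of stripes(x + n) for Gaussian jitter n ~ N(0, sigma^2):
        # the sinusoid keeps its phase but its amplitude decays with the footprint.
        attenuation = math.exp(-0.5 * (freq * footprint_sigma) ** 2)
        return 0.5 + 0.5 * attenuation * math.sin(freq * x)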

13.
Virtual point lights (VPLs) are well established for real‐time global illumination. However, the method suffers from spiky artifacts and flickering caused by VPL singularities, highly glossy materials, high‐frequency textures, and discontinuous geometry. To avoid these artifacts, this paper introduces a virtual spherical Gaussian light (VSGL), which roughly represents a set of VPLs. For a VSGL, the total radiant intensity and positional distribution of the VPLs are approximated using spherical Gaussians and a Gaussian distribution, respectively. Since this approximation can be computed from sums of VPL parameters, VSGLs can be generated dynamically using mipmapped reflective shadow maps. Our VSGL generation is simple and independent of scene geometry. In addition, the reflected radiance for a VSGL is calculated using an analytic formula. Hence, we are able to render one‐bounce glossy interreflections at real‐time frame rates with fewer artifacts.
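For reference, a spherical Gaussian lobe and its closed‐form integral over the sphere are sketched below; analytic expressions of this kind are what make shading from a compact light representation cheap. This is a generic SG toolbox snippet, not the paper's VPL‐clustering or shading formula.

    import math

    def sg_eval(dot_mu_v, lam, amplitude):
        # Spherical Gaussian G(v) = a * exp(lam * (dot(mu, v) - 1)).
        return amplitude * math.exp(lam * (dot_mu_v - 1.0))

    def sg_integral(lam, amplitude):
        # Integral of G over the whole sphere: 2*pi*a*(1 - exp(-2*lam)) / lam.
        return 2.0 * math.pi * amplitude * (1.0 - math.exp(-2.0 * lam)) / lam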

14.
This paper proposes an interactive rendering method for cloth fabrics under environment lighting. The outgoing radiance from cloth fabrics in the microcylinder model is calculated by integrating the product of the distant environment lighting, the visibility function, the weighting function that includes shadowing/masking effects of threads, and the light scattering function of threads. The radiance calculation at each shading point of the cloth fabric is simplified to a linear combination of triple product integrals of two circular Gaussians and the visibility function, multiplied by precomputed spherical Gaussian convolutions of the weighting function. We propose an efficient method for calculating the triple product of two circular Gaussians and the visibility function by using the gradient of the signed distance function to the visibility boundary, where the binary visibility changes in the angular domain of the hemisphere. Our GPU implementation enables interactive rendering of static cloth fabrics with dynamic viewpoints and lighting. In addition, parameters of the scattering function that control the visual appearance of the cloth (e.g. thread albedo) can be edited interactively.

15.
In this paper we present an image‐based algorithm to render visually plausible anti‐aliased soft shadows in real time. Our technique employs a new shadow pre‐filtering method based on an extended exponential shadow mapping theory. The algorithm achieves faithful contact shadows by adopting an optimal approximation to the exponential shadow reconstruction function. Benefiting from a novel overflow‐free summed area table tile grid data structure, numerical stability is guaranteed and erroneous filter responses are avoided. By integrating an adaptive anisotropic filtering method, the proposed algorithm produces high‐quality smooth shadows both in large penumbra areas and in high‐frequency sharp transitions, while keeping memory consumption low and performance high.
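The classic exponential shadow mapping test that the pre‐filtering above extends can be sketched as follows: the map stores exp(c·z_occluder), which is linearly pre‐filterable (e.g. with a summed area table), and visibility is reconstructed by multiplying with exp(-c·z_receiver). The constant c and the function name are illustrative assumptions, not the paper's extended reconstruction.

    import math

    def esm_visibility(filtered_exp_depth, receiver_depth, c=80.0):
        # filtered_exp_depth: pre-filtered exp(c * occluder_depth) fetched from
        # the shadow map / summed area table; clamp because the exponential
        # approximation can overshoot near contact points.
        return min(filtered_exp_depth * math.exp(-c * receiver_depth), 1.0)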

16.
Multiresolution Hierarchies (MH) and Directed Acyclic Graphs (DAG) are two recent approaches for the compression of high‐resolution shadow information. In this paper, we introduce Merged Multiresolution Hierarchies (MMH), a novel data structure that unifies both concepts. An MMH leverages both the hierarchical homogeneity exploited in MHs and the topological similarities exploited in DAG representations. We propose an efficient hash‐based technique to quickly identify and remove redundant subtree instances in a modified relative MH representation. Our solution remains lossless and significantly improves the compression rate compared to both preceding shadow map compression algorithms, while retaining the full run‐time performance of traditional MH representations.
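The hash‐based redundancy removal can be pictured as bottom‐up hash‐consing of subtrees, as in the sketch below; the tuple encoding of nodes is a placeholder, and the paper operates on a modified relative MH representation rather than this toy tree.

    def hash_cons(node, pool):
        # node = (payload, [child nodes]); returns a canonical key so that
        # structurally identical subtrees collapse to a single shared instance.
        payload, children = node
        key = (payload, tuple(hash_cons(child, pool) for child in children))
        pool.setdefault(key, key)   # remember only the first occurrence
        return key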

17.
Renderings of animation sequences with physics‐based Monte Carlo light transport simulations are exceedingly costly to generate frame‐by‐frame, yet much of this computation is highly redundant due to the strong coherence in space, time and among samples. A promising approach pursued in prior work entails subsampling the sequence in space, time, and number of samples, followed by image‐based spatio‐temporal upsampling and denoising. These methods can provide significant performance gains, though major issues remain: firstly, in a multiple scattering simulation, the final pixel color is the composite of many different light transport phenomena, and this conflicting information causes artifacts in image‐based methods. Secondly, motion vectors are needed to establish correspondence between the pixels in different frames, but it is unclear how to obtain them for most kinds of light paths (e.g. an object seen through a curved glass panel). To reduce these ambiguities, we propose a general decomposition framework, where the final pixel color is separated into components corresponding to disjoint subsets of the space of light paths. Each component is accompanied by motion vectors and other auxiliary features such as reflectance and surface normals. The motion vectors of specular paths are computed using a temporal extension of manifold exploration and the remaining components use a specialized variant of optical flow. Our experiments show that this decomposition leads to significant improvements in three image‐based applications: denoising, spatial upsampling, and temporal interpolation.

18.
We present an approach for editing shadows in all‐frequency lighting environments. To support artistic control, we propose to decouple shadowing from lighting and focus on providing intuitive controls to edit the former. To accomplish this task, we precompute and store scene visibility information separately from lighting and BRDFs and allow artists to edit visibility directly, by providing operations to select shadows and edit their shape. To facilitate a wider range of editing operations, we generalize visibility from binary to three‐channel floating point quantities and introduce a novel shadow representation based on the computation of visibility ratios between the original render and the edited one. We demonstrate our results for diffuse and glossy surfaces, still scenes and animations.

19.
We present a new technique called Multiple Vertex Next Event Estimation, which outperforms current direct lighting techniques in forward‐scattering, optically dense media with the Henyey‐Greenstein phase function. Instead of a one‐segment connection from a vertex within the medium to the light source, an entire subpath of arbitrary length can be created, and we show experimentally that 4–10 segments work best in practice. This is done by perturbing a seed path within the Monte Carlo context. Our technique was integrated into a Monte Carlo renderer, combining random walk path tracing with multiple vertex next event estimation via multiple importance sampling for an unbiased result. We evaluate this new technique against standard next event estimation and show that it significantly reduces noise and increases the performance of multiple scattering renderings in highly anisotropic, optically dense media. Additionally, we discuss multiple light sources and the performance implications of memory‐heavy heterogeneous media.
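The Henyey‐Greenstein phase function referred to above has a simple closed form; for g close to 1 it is strongly forward peaked, which is exactly the regime where single‐segment next event estimation struggles. Sketch only, with the standard normalization over solid angle.

    import math

    def henyey_greenstein(cos_theta, g):
        # p(theta) = (1 - g^2) / (4*pi * (1 + g^2 - 2*g*cos(theta))^(3/2)),
        # normalized over the sphere; g in (-1, 1) controls the anisotropy.
        denom = 1.0 + g * g - 2.0 * g * cos_theta
        return (1.0 - g * g) / (4.0 * math.pi * denom ** 1.5)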

20.
Crowded motions refer to multiple objects moving around and interacting, such as crowds and pedestrians. We capture crowded scenes using a depth scanner at video frame rates. Thus, our input is a set of depth frames which sample the scene over time. Processing such data is challenging, as it is highly unorganized, with large spatio‐temporal holes due to many occlusions. As no correspondence is given, locally tracking 3D points across frames is hard due to noise and missing regions. Furthermore, global segmentation and motion completion in the presence of large occlusions is ambiguous and hard to predict. Our algorithm utilizes the Gestalt principles of common fate and good continuity to compute motion tracking and completion, respectively. Our technique does not assume any pre‐given markers or motion template priors. Our key idea is to reduce the motion completion problem to a 1D curve fitting and matching problem which can be solved efficiently using a global optimization scheme. We demonstrate our segmentation and completion method on a variety of synthetic and real‐world crowded scanned scenes.
