Similar Articles
20 similar articles found (search time: 15 ms)
1.
BRDFs are commonly used for material appearance representation in applications ranging from gaming and the movie industry to product design and specification. Most applications rely on isotropic BRDFs due to their better availability, a result of their easier acquisition process. Anisotropic BRDFs, on the other hand, are more challenging to measure and process because of their structure-dependent anisotropic highlights. This paper therefore simplifies the measurement of anisotropic BRDFs by representing them as collections of isotropic BRDFs. Our method decomposes an anisotropic BRDF database into training isotropic slices that form a linear basis, in which appropriate sparse samples are identified using numerical optimization. When an unknown anisotropic BRDF is measured, these samples are repeatedly captured in a small set of azimuthal directions. All collected samples are then used to reconstruct the entire measured BRDF from the linear isotropic basis. Typically, fewer than 100 samples are sufficient to capture the main visual features of complex anisotropic materials, and we provide a minimal set of directional samples to be measured at each sample rotation. We conclude that even simple setups relying on five bidirectional samples (at most five stationary sensors/lights) in combination with eight rotations (a rotation stage for the specimen) can yield a promising reconstruction of anisotropic behavior. Next, we outline an extension of the proposed approach to adaptive sampling of anisotropic BRDFs to gain even better performance. Finally, we show that our method allows standard geometries, including industrial multi-angle reflectometers, to be used for the fast measurement of anisotropic BRDFs.
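As a rough sketch of the reconstruction step only (not the authors' exact pipeline), sparse anisotropic measurements can be fitted to a linear isotropic basis by non-negative least squares; the basis matrix, sample count and data below are hypothetical placeholders.

```python
# Minimal sketch: reconstruct a BRDF slice from ~100 sparse samples using a
# linear isotropic basis (hypothetical data; the paper's basis and sample
# selection come from an optimized database decomposition).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

n_dirs, n_basis = 10000, 30          # bidirectional bins, isotropic basis slices
B = rng.random((n_dirs, n_basis))    # columns: isotropic basis BRDF slices
w_true = rng.random(n_basis)         # unknown mixing weights
full_brdf = B @ w_true               # "ground truth" slice of the anisotropic BRDF

sample_idx = rng.choice(n_dirs, size=100, replace=False)  # sparse measurements
y = full_brdf[sample_idx]            # measured values at the sparse directions

# Non-negative least squares keeps the reconstructed reflectance plausible.
w_est, _ = nnls(B[sample_idx], y)

reconstruction = B @ w_est           # full slice recovered from 100 samples
print("relative error:",
      np.linalg.norm(reconstruction - full_brdf) / np.linalg.norm(full_brdf))
```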

2.
We present a technique to efficiently importance sample distant, all-frequency illumination in indoor scenes. Standard environment sampling is inefficient in such cases since the distant lighting is typically only visible through small openings (e.g. windows). This visibility is often addressed by manually placing a portal around each window to direct samples towards the openings; however, uniformly sampling the portal (its area or solid angle) disregards the possibly high-frequency environment map. We propose a new portal importance sampling technique which takes into account both the environment map and its visibility through the portal, drawing samples proportional to the product of the two. To make this practical, we propose a novel, portal-rectified reparametrization of the environment map with the key property that the visible region induced by a rectangular portal projects to an axis-aligned rectangle. This allows us to sample according to the desired product distribution at an arbitrary shading location using a single (precomputed) summed-area table per portal. Our technique is unbiased, relevant to many renderers, and can also be applied to rectangular light sources with directional emission profiles, enabling efficient rendering of non-diffuse light sources with soft shadows.
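A minimal sketch of the underlying machinery, drawing texels inside an axis-aligned rectangle proportionally to their values using a single summed-area table; the paper's portal-rectified reparametrization of the environment map is omitted, and the map, portal bounds and sizes below are hypothetical.

```python
import numpy as np

def build_sat(img):
    """Summed-area table with a zero row/column so rectangle sums are O(1)."""
    sat = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    sat[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return sat

def rect_sum(sat, r0, r1, c0, c1):
    """Sum of img[r0:r1, c0:c1] from four table lookups."""
    return sat[r1, c1] - sat[r0, c1] - sat[r1, c0] + sat[r0, c0]

def sample_in_rect(sat, r0, r1, c0, c1, u1, u2):
    """Draw a texel inside the rectangle with probability proportional to its value."""
    # Marginal over rows: cumulative mass of rows r0..r restricted to the rectangle.
    row_cdf = sat[r0 + 1:r1 + 1, c1] - sat[r0 + 1:r1 + 1, c0] - sat[r0, c1] + sat[r0, c0]
    r = r0 + int(np.searchsorted(row_cdf, u1 * row_cdf[-1], side='right'))
    r = min(r, r1 - 1)
    # Conditional over columns within the chosen row.
    col_cdf = sat[r + 1, c0 + 1:c1 + 1] - sat[r, c0 + 1:c1 + 1] - sat[r + 1, c0] + sat[r, c0]
    c = c0 + int(np.searchsorted(col_cdf, u2 * col_cdf[-1], side='right'))
    return r, min(c, c1 - 1)

rng = np.random.default_rng(0)
env = rng.random((64, 128))                     # hypothetical radiance map
sat = build_sat(env)
print(rect_sum(sat, 10, 30, 40, 90))            # total energy visible through the "portal"
print(sample_in_rect(sat, 10, 30, 40, 90, rng.random(), rng.random()))
```

The same prefix sums also give the probability of the drawn texel, which is what an unbiased Monte Carlo estimator would need for its sample weight.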

3.
We present a novel appearance model for paper. Based on our appearance measurements for matte and glossy paper, we find that paper exhibits a combination of subsurface scattering, specular reflection, retroreflection, and surface sheen. Classic microfacet and simple diffuse reflection models cannot simulate the double-sided appearance of a thin layer. Our novel BSDF model matches our measurements for paper and accounts for both reflection and transmission properties. At the core of the BSDF model is a method for converting a multi-layer subsurface scattering model (BSSRDF) into a BSDF, which allows us to retain physically-based absorption and scattering parameters obtained from the measurements. We also introduce a method for computing the amount of light available for subsurface scattering due to transmission through a rough dielectric surface. Our final model accounts for multiple scattering, single scattering, and surface reflection and is capable of rendering paper with varying levels of roughness and glossiness on both sides.

4.
We present a new technique to jointly MIP-map BRDF and normal maps. Starting by generating an instant BRDF map, our technique builds its MIP-mapped versions based on a highly efficient algorithm that interpolates von Mises-Fisher (vMF) distributions. In our BRDF MIP-maps, each pixel stores a vMF mixture approximating the average of all BRDF lobes from the finest level. Our method is capable of jointly MIP-mapping BRDF and normal maps, even with high-frequency variations, in real time while preserving high-quality reflectance details. Further, it is very fast, easy to implement, and requires no precomputation.
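One common way to average vMF lobes for MIP-mapping, shown below as an assumed, simplified stand-in for the paper's interpolation algorithm, is to average the lobes' expected direction vectors and refit the concentration with the standard Banerjee approximation; the example lobes are hypothetical.

```python
import numpy as np

def vmf_mean_resultant_length(kappa):
    """A(kappa) = coth(kappa) - 1/kappa: expected length of the mean direction vector."""
    return 1.0 / np.tanh(kappa) - 1.0 / kappa

def fit_kappa(r):
    """Banerjee et al. approximation inverting A(kappa) for 3D directions."""
    return r * (3.0 - r * r) / (1.0 - r * r)

def average_vmf(mus, kappas):
    """Average several vMF lobes into a single lobe (a coarser MIP texel).

    Each lobe is reduced to its expected direction vector A(kappa_i) * mu_i;
    the vectors are averaged and a new (mu, kappa) is fitted to the result.
    """
    vecs = vmf_mean_resultant_length(kappas)[:, None] * mus   # (N, 3)
    mean = vecs.mean(axis=0)
    r = np.linalg.norm(mean)
    return mean / r, fit_kappa(r)

# Example: four child texels with slightly different normals and sharpness.
mus = np.array([[0.0, 0.1, 1.0], [0.1, 0.0, 1.0], [-0.1, 0.0, 1.0], [0.0, -0.1, 1.0]])
mus /= np.linalg.norm(mus, axis=1, keepdims=True)
kappas = np.array([80.0, 100.0, 120.0, 90.0])
mu, kappa = average_vmf(mus, kappas)
print(mu, kappa)   # averaged lobe is broader (smaller kappa) than the inputs
```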

5.
Renderings of animation sequences with physics-based Monte Carlo light transport simulations are exceedingly costly to generate frame-by-frame, yet much of this computation is highly redundant due to the strong coherence in space, time and among samples. A promising approach pursued in prior work entails subsampling the sequence in space, time, and number of samples, followed by image-based spatio-temporal upsampling and denoising. These methods can provide significant performance gains, though major issues remain: firstly, in a multiple scattering simulation, the final pixel color is the composite of many different light transport phenomena, and this conflicting information causes artifacts in image-based methods. Secondly, motion vectors are needed to establish correspondence between the pixels in different frames, but it is unclear how to obtain them for most kinds of light paths (e.g. an object seen through a curved glass panel). To reduce these ambiguities, we propose a general decomposition framework, where the final pixel color is separated into components corresponding to disjoint subsets of the space of light paths. Each component is accompanied by motion vectors and other auxiliary features such as reflectance and surface normals. The motion vectors of specular paths are computed using a temporal extension of manifold exploration and the remaining components use a specialized variant of optical flow. Our experiments show that this decomposition leads to significant improvements in three image-based applications: denoising, spatial upsampling, and temporal interpolation.

6.
We present a spectral rendering technique that offers a compelling set of advantages over existing approaches. The key idea is to propagate energy along paths for a small, constant number of changing wavelengths. The first of these, the hero wavelength, is randomly sampled for each path, and all directional sampling is solely based on it. The additional wavelengths are placed at equal distances from the hero wavelength, so that all path wavelengths together always evenly cover the visible range. A related technique, spectral multiple importance sampling, was already introduced a few years ago. We propose a simplified and optimised version of this approach which is easier to implement, has good performance characteristics, and is actually more powerful than the original method. Our proposed method is also superior to techniques which use a static spectral representation, as it does not suffer from any inherent representation bias. We demonstrate the performance of our method in several application areas that are of critical importance for production work, such as fidelity of colour reproduction, sub-surface scattering, dispersion and volumetric effects. We also discuss how to couple our proposed approach with several technologies that are important in current production systems, such as photon maps, bidirectional path tracing, environment maps, and participating media.
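A minimal sketch of the wavelength placement described above, assuming a visible range of 380-720 nm and four wavelengths per path; the spectral MIS weighting of the companion wavelengths is omitted.

```python
import numpy as np

LAMBDA_MIN, LAMBDA_MAX = 380.0, 720.0   # assumed bounds of the visible range (nm)
RANGE = LAMBDA_MAX - LAMBDA_MIN

def hero_wavelengths(u, count=4):
    """Hero wavelength plus (count - 1) companions at equal spacing,
    wrapped so that the set always covers the visible range evenly."""
    hero = LAMBDA_MIN + u * RANGE
    j = np.arange(count)
    return LAMBDA_MIN + np.mod(hero - LAMBDA_MIN + j * RANGE / count, RANGE)

# One path transports 'count' wavelengths at once; directional sampling would be
# driven by wavelengths[0] (the hero wavelength) only.
wavelengths = hero_wavelengths(u=np.random.default_rng(1).random(), count=4)
print(wavelengths)
```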

7.
Procedural shaders are a vital part of modern rendering systems. Despite their prevalence, however, procedural shaders remain sensitive to aliasing any time they are sampled at a rate below the Nyquist limit. Antialiasing is typically achieved through numerical techniques like supersampling or precomputing integrals stored in mipmaps. This paper explores the problem of analytically computing a band-limited version of a procedural shader as a continuous function of the sampling rate. There is currently no known way of analytically computing these integrals in general. We explore the conditions under which exact solutions are possible and develop several approximation strategies for when they are not. Compared to supersampling methods, our approach produces shaders that are less expensive to evaluate and closer to ground truth in many cases. Compared to mipmapping or precomputation, our approach produces shaders that support an arbitrary bandwidth parameter and require less storage. We evaluate our method on a range of spatially-varying shader functions, automatically producing antialiased versions that have comparable error to 4×4 multisampling but can be over an order of magnitude faster. While not complete, our approach is a promising first step toward this challenging goal and indicates a number of interesting directions for future work.
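A classic closed-form example in the spirit of this idea (not one of the paper's shaders): a sine-stripe shader whose box-filtered average over a pixel footprint has an exact analytic form, so the stripes fade smoothly instead of aliasing.

```python
import numpy as np

def sine_shader(x):
    """A toy procedural shader: plain sine stripes (aliases when undersampled)."""
    return 0.5 + 0.5 * np.sin(x)

def sine_shader_bandlimited(x, footprint):
    """Closed-form box-filtered version: the average of the shader over a pixel
    footprint of width 'footprint' centred at x.  Integrating sin analytically
    gives sin(x) * sinc(footprint / 2), so the stripes attenuate smoothly as the
    sampling rate drops instead of aliasing."""
    w = np.maximum(footprint, 1e-8)
    return 0.5 + 0.5 * np.sin(x) * np.sin(0.5 * w) / (0.5 * w)

x = np.linspace(0.0, 200.0, 64)                    # 64 pixels over many stripe periods
print(sine_shader(x)[:4])                          # aliased point samples
print(sine_shader_bandlimited(x, 200.0 / 64)[:4])  # analytically prefiltered
```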

8.
Distribution effects such as diffuse global illumination, soft shadows and depth of field are most accurately rendered using Monte Carlo ray or path tracing. However, physically accurate algorithms can take hours to converge to a noise-free image. A recent body of work has begun to bridge this gap, showing that both individual and multiple effects can be achieved accurately and efficiently. These methods use sparse sampling, GPU raytracers, and adaptive filtering for reconstruction. They are based on a Fourier analysis, which models distribution effects as a wedge in the frequency domain. The wedge can be approximated as a single large axis-aligned filter, which is fast but retains a large area outside the wedge and therefore requires a higher sampling rate, or as a tighter sheared filter, which is slow to compute. The state-of-the-art fast sheared filtering method combines a low sampling rate with efficient filtering, but has been demonstrated for individual distribution effects only and is limited by high-dimensional data storage and processing. We present a novel filter for efficient rendering of combined effects, involving soft shadows and depth of field, with global (diffuse indirect) illumination. We approximate the wedge spectrum with multiple axis-aligned filters, marrying the speed of axis-aligned filtering with an even more accurate (compact and tighter) representation than sheared filtering. We demonstrate rendering of single effects at sampling and frame rates comparable to fast sheared filtering. Our main practical contribution is in rendering multiple distribution effects, which have not even been demonstrated accurately with sheared filtering. For this case, we present an average speedup of 6× compared with previous axis-aligned filtering methods.

9.
Virtual point lights (VPLs) are well established for real-time global illumination. However, this method suffers from spiky artifacts and flickering caused by singularities of VPLs, highly glossy materials, high-frequency textures, and discontinuous geometries. To avoid these artifacts, this paper introduces a virtual spherical Gaussian light (VSGL) which roughly represents a set of VPLs. For a VSGL, the total radiant intensity and positional distribution of the VPLs are approximated using spherical Gaussians and a Gaussian distribution, respectively. Since this approximation can be computed using summations of VPL parameters, VSGLs can be dynamically generated using mipmapped reflective shadow maps. Our VSGL generation is simple and independent of the scene geometry. In addition, the reflected radiance for a VSGL is calculated using an analytic formula. Hence, we are able to render one-bounce glossy interreflections at real-time frame rates with fewer artifacts.
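A heavily simplified sketch of the aggregation idea, assuming scalar VPL intensities and ignoring the spherical-Gaussian emission lobe: the VSGL ingredients reduce to sums and weighted averages of per-VPL quantities, which is what makes mipmap-style accumulation of a reflective shadow map possible.

```python
import numpy as np

def aggregate_vpls(positions, intensities):
    """Collapse a set of VPLs into total power, an intensity-weighted mean position
    and an isotropic positional variance (the ingredients of a Gaussian position
    distribution).  All quantities are plain sums or weighted averages of per-VPL
    values, so they could equally be accumulated hierarchically by mipmapping."""
    w = intensities / intensities.sum()
    mean = (w[:, None] * positions).sum(axis=0)
    var = (w * ((positions - mean) ** 2).sum(axis=1)).sum() / 3.0
    return intensities.sum(), mean, var

rng = np.random.default_rng(2)
positions = rng.normal(size=(256, 3))        # hypothetical VPL positions
intensities = rng.random(256)                # hypothetical scalar VPL intensities
total, mean, var = aggregate_vpls(positions, intensities)
print(total, mean, var)
```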

10.
In this paper, we present a method to model hyperelasticity that is well suited for representing the nonlinearity of real-world objects, as well as for estimating it from deformation examples. Previous approaches suffer several limitations, such as lack of integrability of elastic forces, failure to enforce energy convexity, lack of robustness in parameter estimation, or difficulty modeling cross-modal effects. Our method avoids these problems by relying on a general energy-based definition of elastic properties. The accuracy of the resulting elastic model is maximized by defining an additive model of separable energy terms, which allow progressive parameter estimation. In addition, our method supports efficient modeling of extreme nonlinearities thanks to energy-limiting constraints. We combine our energy-based model with an optimization method to estimate model parameters from force-deformation examples, and we show successful modeling of diverse deformable objects, including cloth, human finger skin, and internal human anatomy in a medical imaging application.

11.
We present a new approach to microfacet-based BSDF importance sampling. Previously proposed sampling schemes for popular analytic BSDFs typically begin by choosing a microfacet normal at random in a way that is independent of the direction of the incident light. Sampling the full BSDF using these normals requires arbitrarily large sample weights, leading to possible fireflies. Additionally, at grazing angles nearly half of the sampled normals face away from the incident ray and must be rejected, making the sampling scheme inefficient. Instead, we show how to use the distribution of visible normals directly to generate samples, where normals are weighted by their projection factor toward the incident direction. In this way, no backfacing normals are sampled and the sample weights contain only the shadowing factor of the outgoing rays (and additionally a Fresnel term for conductors). Arbitrarily large sample weights are avoided and variance is reduced. Since the BSDF depends on the microsurface model, we describe our sampling algorithm for two models: the V-cavity and the Smith models. We demonstrate results for both isotropic and anisotropic rough conductors and dielectrics with Beckmann and GGX distributions.
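For reference, a sketch of visible-normal sampling for the Smith/GGX case, following the later, widely used formulation by Heitz (2018) rather than necessarily the exact procedure of this paper; directions are in tangent space.

```python
import numpy as np

def sample_ggx_vndf(wi, alpha_x, alpha_y, u1, u2):
    """Sample a microfacet normal from the GGX distribution of *visible* normals.

    wi is the (unit) incident direction in tangent space with wi[2] > 0, where
    z is the geometric normal; alpha_x/alpha_y are the anisotropic roughnesses.
    """
    # Stretch the view direction into the hemisphere configuration of unit-roughness GGX.
    vh = np.array([alpha_x * wi[0], alpha_y * wi[1], wi[2]])
    vh /= np.linalg.norm(vh)
    # Orthonormal basis around vh.
    lensq = vh[0] ** 2 + vh[1] ** 2
    t1 = np.array([-vh[1], vh[0], 0.0]) / np.sqrt(lensq) if lensq > 0 else np.array([1.0, 0.0, 0.0])
    t2 = np.cross(vh, t1)
    # Sample a point on a warped half-disc (accounts for the projected area).
    r = np.sqrt(u1)
    phi = 2.0 * np.pi * u2
    p1 = r * np.cos(phi)
    p2 = r * np.sin(phi)
    s = 0.5 * (1.0 + vh[2])
    p2 = (1.0 - s) * np.sqrt(max(0.0, 1.0 - p1 * p1)) + s * p2
    # Reproject onto the hemisphere and unstretch back to the anisotropic roughness.
    nh = p1 * t1 + p2 * t2 + np.sqrt(max(0.0, 1.0 - p1 * p1 - p2 * p2)) * vh
    ne = np.array([alpha_x * nh[0], alpha_y * nh[1], max(1e-6, nh[2])])
    return ne / np.linalg.norm(ne)

wi = np.array([0.3, 0.0, np.sqrt(0.91)])
print(sample_ggx_vndf(wi, 0.2, 0.2, 0.4, 0.7))   # never back-facing w.r.t. wi
```

The sampled normal is then used to reflect (or refract) the incident direction, and, as the abstract states, the remaining sample weight reduces to the shadowing term of the outgoing ray.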

12.
We present a technique for controlling physically simulated characters using user inputs from an off-the-shelf depth camera. Our controller takes a real-time stream of user poses as input, and simulates a stream of target poses of a biped based on it. The simulated biped mimics the user's actions while moving forward at a modest speed and maintaining balance. The controller is parameterized over a set of modulated reference motions that aims to cover the range of possible user actions. For real-time simulation, the best set of control parameters for the current input pose is chosen from the parameterized sets of pre-computed control parameters via a regression method. By applying the chosen parameters at each moment, the simulated biped can imitate a range of user actions while walking in various interactive scenarios.

13.
In this paper, we propose an efficient data-guided method based on Model Predictive Control (MPC) to synthesize full-body motion. Guided by a reference motion, our method repeatedly plans the full-body motion to produce an optimal control policy for predictive control while sliding a fixed-span window along the time axis. Based on this policy, the method computes the joint torques of the character at every time step. Together with contact forces and external perturbations, if any, the joint torques are used to update the state of the character. Because the contact forces are not included in the control vector, our formulation of the trajectory optimization problem enables automatic adjustment of contact timings and positions for balancing in response to environmental changes and external perturbations. For efficiency, we adopt derivative-based trajectory optimization on top of state-of-the-art smoothed contact dynamics. The use of derivatives enables our method to run much faster than existing sampling-based methods. To further accelerate MPC, we propose efficient numerical differentiation of the system dynamics of a full-body character based on two schemes: data reuse and data interpolation. The former scheme exploits data dependency to reuse physical quantities of the system dynamics at nearby time points. The latter scheme allows the use of derivatives at sparse sample points to interpolate those at other time points in the window. We further accelerate evaluation of the system dynamics by exploiting the sparsity of physical quantities such as the Jacobian matrix, which results from the tree-like structure of the articulated body. Through experiments, we show that the proposed method can efficiently synthesize realistic motions such as locomotion, dancing, gymnastic motions, and martial arts at interactive rates using moderate computing resources.

14.
Due to recent advancements in computer graphics hardware and software algorithms, deformable characters have become more and more popular in real-time applications such as computer games. While there are mature techniques to generate primary deformation from skeletal movement, simulating realistic and stable secondary deformation, such as the jiggling of fat, remains challenging. On one hand, traditional volumetric approaches such as the finite element method require higher computational cost and are infeasible for limited hardware such as game consoles. On the other hand, while shape matching based simulations can produce plausible deformation in real time, they suffer from a stiffness problem in which particles either show unrealistic deformation due to high gains or cannot catch up with the body movement. In this paper, we propose a unified multi-layer lattice model to simulate the primary and secondary deformation of skeleton-driven characters. The core idea is to voxelize the input character mesh into multiple anatomical layers including bone, muscle, fat and skin. Primary deformation is applied to the bone voxels with lattice-based skinning. The movement of these voxels is propagated to the other voxel layers using lattice shape matching simulation, creating natural secondary deformation. Our multi-layer lattice framework can produce simulation quality comparable to that of other volumetric approaches at a significantly smaller computational cost. It is best suited to real-time applications such as console games or interactive animation creation.

15.
Despite recent advances in Monte Carlo rendering techniques, dense, high-albedo participating media such as wax or skin still remain a difficult problem. In such media, random walks tend to become very long, but may still lead to a large contribution to the image. The Dwivedi sampling scheme, which is based on zero-variance random walks, biases the sampling probability distributions to exit the medium as quickly as possible. This can reduce variance considerably under the assumption of a locally homogeneous medium with a constant phase function. Prior work uses the normal at the point of entry as the bias direction. We demonstrate that this technique can fail in common scenarios such as thin geometry with a strong backlight. We propose two new biasing strategies, Closest Point and Incident Illumination biasing, and show that these techniques can speed up convergence by up to an order of magnitude. Additionally, we propose a heuristic approach for combining biased and classical sampling techniques using Multiple Importance Sampling.
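A minimal sketch of the classical Dwivedi cosine sampling that both the prior work and the proposed biasing strategies build on; the parameter nu0 (derived from the single-scattering albedo) and the choice of bias direction are taken as given here.

```python
import numpy as np

def sample_dwivedi_cosine(nu0, xi):
    """Sample the cosine between the new direction and the bias direction from the
    Dwivedi distribution p(mu) proportional to 1/(nu0 - mu) on [-1, 1], nu0 > 1
    (nu0 is derived from the single-scattering albedo; its derivation is omitted).
    Cosines near +1 (towards the bias direction) are strongly favoured, which
    pushes random walks out of the medium quickly."""
    mu = nu0 - (nu0 + 1.0) ** (1.0 - xi) * (nu0 - 1.0) ** xi
    pdf = 1.0 / ((nu0 - mu) * np.log((nu0 + 1.0) / (nu0 - 1.0)))
    return mu, pdf

rng = np.random.default_rng(3)
mus, pdfs = zip(*(sample_dwivedi_cosine(1.1, rng.random()) for _ in range(5)))
print(np.round(mus, 3))   # cosines biased towards +1, i.e. towards the bias direction
```

The returned pdf is what a renderer would feed into multiple importance sampling when combining this biased strategy with classical phase-function sampling, as the abstract suggests.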

16.
Rendering with accurate camera models greatly increases realism and improves the match of synthetic imagery to real-life footage. Photographic lenses can be simulated by ray tracing, but the performance depends on the complexity of the lens system, and some operations required for modern algorithms, such as deterministic connections, can be difficult to achieve. We generalise the approach of polynomial optics, i.e. expressing the light field transformation from the sensor to the outer pupil using a polynomial, to work with extreme wide-angle (fisheye) lenses and aspherical elements. We also show how sparse polynomials can be constructed from the large space of high-degree terms (we tested up to degree 15). We achieve this using a variant of orthogonal matching pursuit instead of a Taylor series when computing the polynomials. We show two applications: photorealistic rendering using Monte Carlo methods, where we introduce a new aperture sampling technique that is suitable for light tracing, and an interactive preview method suitable for rendering with deep images.
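A generic orthogonal matching pursuit sketch, with a hypothetical random dictionary standing in for the candidate polynomial terms and traced data; it illustrates how a small set of high-degree terms can be selected greedily rather than truncating a Taylor series.

```python
import numpy as np

def orthogonal_matching_pursuit(A, b, n_terms):
    """Greedy OMP: pick the column most correlated with the residual, re-fit all
    chosen columns by least squares, and repeat.  In the lens-fitting setting,
    A would hold candidate polynomial terms evaluated at traced rays and b the
    traced result (both are hypothetical placeholders here)."""
    residual = b.copy()
    chosen = []
    coeffs = np.zeros(A.shape[1])
    for _ in range(n_terms):
        correlations = np.abs(A.T @ residual)
        if chosen:
            correlations[chosen] = 0.0             # do not re-pick a column
        chosen.append(int(np.argmax(correlations)))
        sol, *_ = np.linalg.lstsq(A[:, chosen], b, rcond=None)
        residual = b - A[:, chosen] @ sol
    coeffs[chosen] = sol
    return coeffs

rng = np.random.default_rng(4)
A = rng.normal(size=(500, 200))                    # 200 candidate terms, 500 samples
x_true = np.zeros(200); x_true[[3, 50, 120]] = [1.0, -2.0, 0.5]
b = A @ x_true
x_est = orthogonal_matching_pursuit(A, b, n_terms=3)
print(np.nonzero(x_est)[0])                        # recovers the three active terms
```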

17.
Many-light methods approximate the light transport in a scene by computing the direct illumination from many virtual point light sources (VPLs), and render low-noise images covering a wide range of performance and quality goals. However, they are very inefficient at representing glossy light transport. This is because a VPL on a glossy surface illuminates only a small fraction of the scene, and a tremendous number of VPLs might be necessary to render acceptable images. In this paper, we introduce Rich-VPLs which, in contrast to standard VPLs, represent a multitude of light paths and thus have a more widespread emission profile on glossy surfaces and in scenes with multiple primary light sources. As a result, a single Rich-VPL contributes to larger portions of a scene with negligible additional shading cost. Our second contribution is a placement strategy for (Rich-)VPLs proportional to sensor importance times radiance. Although both Rich-VPLs and the improved placement can be used individually, they complement each other ideally and share interim computation. Furthermore, both complement existing many-light methods, e.g. Lightcuts or the Virtual Spherical Lights method, and can improve their efficiency as well as their applicability to scenes with glossy materials and many primary light sources.

18.
Particle-based simulation techniques, like the discrete element method or molecular dynamics, are widely used in many research fields. In real-time explorative visualization it is common to render the resulting data using opaque spherical glyphs with local lighting only. Due to massive overlaps, however, inner structures of the data are often occluded, rendering visual analysis impossible. Furthermore, local lighting alone is not sufficient, as several important features like complex shapes, holes, rifts or filaments cannot be perceived well. To address both problems we present a new technique that jointly supports transparency and ambient occlusion in a consistent illumination model. Our approach is based on the emission-absorption model of volume rendering. We provide analytic solutions to the volume rendering integral for several density distributions within a spherical glyph. Compared to constant transparency, our approach preserves the three-dimensional impression of the glyphs much better. We approximate ambient illumination with a fast hierarchical voxel cone-tracing approach, which builds on a new real-time voxelization of the particle data. Our implementation achieves interactive frame rates for millions of static or dynamic particles without any preprocessing. We illustrate the merits of our method on real-world data sets, gaining several new insights.
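For the simplest case of a constant density inside a glyph, the emission-absorption integral has the closed form sketched below; the paper derives analytic solutions for several further density profiles, which are not reproduced here.

```python
import numpy as np

def sphere_opacity(ray_origin, ray_dir, center, radius, sigma_t):
    """Opacity of a homogeneous spherical glyph under the emission-absorption model.

    For a constant extinction sigma_t, the volume rendering integral reduces to
    1 - exp(-sigma_t * chord_length), where chord_length is the length of the ray
    segment inside the sphere.
    """
    d = np.asarray(ray_dir, dtype=float)
    d /= np.linalg.norm(d)
    oc = np.asarray(ray_origin, dtype=float) - np.asarray(center, dtype=float)
    b = np.dot(d, oc)
    disc = b * b - (np.dot(oc, oc) - radius * radius)
    if disc <= 0.0:
        return 0.0                       # ray misses the glyph
    chord = 2.0 * np.sqrt(disc)
    return 1.0 - np.exp(-sigma_t * chord)

# A ray through the centre of a unit glyph sees the full diameter (chord = 2).
print(sphere_opacity([0, 0, -5], [0, 0, 1], [0, 0, 0], 1.0, sigma_t=0.8))
```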

19.
We present a physically based real-time water simulation and rendering method that brings volumetric foam to the real-time domain, significantly increasing the realism of dynamic fluids. We do this by combining a particle-based fluid model that is capable of accounting for the formation of foam with a layered rendering approach that accounts for the volumetric properties of water and foam. Foam formation is simulated through Weber number thresholding. For rendering, we approximate the resulting water and foam volumes by storing their respective boundary surfaces in depth maps. This allows us to calculate the attenuation of light rays that pass through these volumes very efficiently. We also introduce an adaptive curvature flow filter that produces consistent fluid surfaces from particles independent of the viewing distance.
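A minimal sketch of Weber number thresholding; the threshold and the per-particle velocity and length scales are illustrative assumptions, not values from the paper.

```python
def weber_number(density, rel_velocity, length_scale, surface_tension):
    """We = rho * v^2 * L / sigma: ratio of inertial to surface-tension forces."""
    return density * rel_velocity ** 2 * length_scale / surface_tension

def is_foam_candidate(density, rel_velocity, length_scale, surface_tension, threshold):
    """Flag a fluid particle as a foam candidate when its Weber number exceeds a
    user-chosen threshold (threshold and scales are scene-dependent assumptions)."""
    return weber_number(density, rel_velocity, length_scale, surface_tension) > threshold

# Example: water-like parameters (SI units) for a fast particle near the surface.
print(is_foam_candidate(density=1000.0, rel_velocity=3.0, length_scale=0.01,
                        surface_tension=0.072, threshold=1000.0))
```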

20.
A new unbiased sampling approach is presented, which allows the direct illumination from disk and cylinder light sources to be sampled with a uniform probability distribution within their solid angles, as seen from each illuminated point. This approach applies to any form of global illumination path tracing algorithm (forward or bidirectional) where the direct illumination integral from light sources needs to be estimated. We show that taking samples based on the solid angle of these two light sources leads to improved estimates and reduced variance of the Monte Carlo integral for direct illumination. This work follows from previously known unbiased methods for the solid angle sampling of triangular and rectangular light sources and extends the class of lights that can be rendered with these improved sampling algorithms.
