20 similar documents found (search time: 15 ms)
1.
In this paper we present the first practical method for importance sampling functions represented as spherical harmonics (SH). Given a spherical probability density function (PDF) represented as a vector of SH coefficients, our method warps an input point set to match the target PDF using hierarchical sample warping. Our approach is efficient and produces high quality sample distributions. As a by-product of the sampling procedure we produce a multi-resolution representation of the density function as either a spherical mip-map or Haar wavelet. By exploiting this implicit conversion we can extend the method to distribute samples according to the product of an SH function with a spherical mip-map or Haar wavelet. This generalization has immediate applicability in rendering, e.g., importance sampling the product of a BRDF and an environment map where the lighting is stored as a single high-resolution wavelet and the BRDF is represented in spherical harmonics. Since spherical harmonics can be efficiently rotated, this product can be computed on-the-fly even if the BRDF is stored in local-space. Our sampling approach generates over 6 million samples per second while significantly reducing precomputation time and storage requirements compared to previous techniques.
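To make the hierarchical sample warping step concrete, the sketch below warps uniformly distributed 2D samples to match a piecewise-constant density stored on a power-of-two grid (the role played by the mip-map derived from the SH coefficients). The flat 2D setting and all function names are my own simplification; the paper works on the sphere and obtains the densities from the SH expansion.

```python
import numpy as np

def hierarchical_warp(u, density):
    """Warp uniform samples u (shape [N, 2], values in [0, 1)) so that their
    distribution matches the piecewise-constant density on an n x n grid
    (n a power of two, density[j, i] is the cell at x-index i, y-index j)."""
    u = np.array(u, dtype=float)
    _warp(u, np.array(density, dtype=float))
    return u

def _warp(u, d):
    n = d.shape[0]
    if n == 1 or len(u) == 0 or d.sum() <= 0.0:
        return
    h = n // 2
    # Split samples along x in proportion to the mass of the left half.
    p_left = d[:, :h].sum() / d.sum()
    left = u[:, 0] < p_left
    u[left, 0] = 0.5 * u[left, 0] / max(p_left, 1e-12)
    u[~left, 0] = 0.5 + 0.5 * (u[~left, 0] - p_left) / max(1.0 - p_left, 1e-12)
    # Within each half, split along y in proportion to that half's bottom mass.
    for sel, dh in ((left, d[:, :h]), (~left, d[:, h:])):
        if dh.sum() <= 0.0:
            continue
        p_bot = dh[:h, :].sum() / dh.sum()
        v = u[sel, 1]
        bot = v < p_bot
        v[bot] = 0.5 * v[bot] / max(p_bot, 1e-12)
        v[~bot] = 0.5 + 0.5 * (v[~bot] - p_bot) / max(1.0 - p_bot, 1e-12)
        u[sel, 1] = v
    # Recurse into the four quadrants, rescaling each to the unit square.
    quads = (((0, 0), d[:h, :h]), ((1, 0), d[:h, h:]),
             ((0, 1), d[h:, :h]), ((1, 1), d[h:, h:]))
    for (qx, qy), dq in quads:
        off = np.array([0.5 * qx, 0.5 * qy])
        inq = np.all((u >= off) & (u < off + 0.5), axis=1)
        sub = (u[inq] - off) * 2.0
        _warp(sub, dq)
        u[inq] = sub * 0.5 + off
```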
2.
The visibility function in direct illumination describes the binary visibility over a light source, e.g., an environment map. Intuitively, the visibility is often strongly correlated between nearby locations in time and space, but exploiting this correlation without introducing noticeable errors is a hard problem. In this paper, we first study the statistical characteristics of the visibility function. Then, we propose a robust and unbiased method for using estimated visibility information to improve the quality of Monte Carlo evaluation of direct illumination. Our method is based on the theory of control variates, and it can be used on top of existing state‐of‐the‐art schemes for importance sampling. The visibility estimation is obtained by sparsely sampling and caching the 4D visibility field in a compact bitwise representation. In addition to Monte Carlo rendering, the stored visibility information can be used in a number of other applications, for example, ambient occlusion and lighting design.
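As a sketch of the control-variate idea the method builds on: when an approximate, cheaply integrable version of the integrand is available (here, the integrand with the cached visibility estimate substituted for the true visibility), Monte Carlo only has to estimate the difference, which typically has much lower variance. The function below is generic; the names and the way the cached visibility enters are illustrative assumptions, not the paper's exact formulation.

```python
def control_variate_estimate(sample, f, h, mu_h, n, beta=1.0):
    """Unbiased control-variate estimator of E[f(X)]:
    E[f] = E[f - beta * h] + beta * E[h], where E[h] = mu_h is known.
    In the visibility setting, f(x) would be the visibility-weighted integrand
    for direction x, h(x) the same integrand using the cached visibility
    estimate, and mu_h its precomputed integral."""
    acc = 0.0
    for _ in range(n):
        x = sample()                  # draw a direction from the importance PDF
        acc += f(x) - beta * h(x)     # only the (small) difference is sampled
    return acc / n + beta * mu_h
```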
3.
This paper presents a novel method that effectively combines both control variates and importance sampling in a sequential Monte Carlo context. The radiance estimates computed during the rendering process are cached in a 5D adaptive hierarchical structure that defines dynamic predicate functions for both variance reduction techniques and guarantees well‐behaved PDFs, yielding continually increasing efficiency at a marginal computational overhead. While remaining unbiased, the technique is effective within a single pass as both estimation and caching are done online, exploiting the coherence in illumination while being independent of the actual scene representation. The method is relatively easy to implement and to tune via a single parameter, and we demonstrate its practical benefits with substantial gains in convergence rate and results competitive with state-of-the-art techniques.
4.
Yonghao Yue, Kei Iwasaki, Bing‐Yu Chen, Yoshinori Dobashi, Tomoyuki Nishita. Computer Graphics Forum, 2011, 30(7): 1911-1919
Photo‐realistic rendering of inhomogeneous participating media with light scattering taken into consideration is important in computer graphics, and is typically computed using Monte Carlo based methods. The key technique in such methods is free path sampling, which determines the distance (free path) between successive scattering events. Recently, it has been shown that efficient and unbiased free path sampling methods can be constructed based on Woodcock tracking. The key to improving efficiency is to utilize space partitioning (e.g., a kd‐tree or uniform grid), and a better partitioning scheme yields better sampling efficiency. Thus, an estimation framework for investigating the gain in sampling efficiency is important for determining how to partition the space. However, no such estimation framework currently works in 3D space. In this paper, we propose a new estimation framework to overcome this problem. Using our framework, we can analytically estimate the sampling efficiency for any typical partitioned space. Conversely, we can also use this estimation framework for determining the optimal space partitioning. As an application, we show that new space partitioning schemes can be constructed using our estimation framework. Moreover, we show that the differences in performance between different schemes can be predicted fairly well using our estimation framework.
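For reference, the baseline free path sampling discussed above is Woodcock (delta) tracking; a minimal sketch follows. The paper's contribution lies in estimating how a given space partition (and thus the per-region majorants) affects the efficiency of this loop; names here are illustrative.

```python
import math
import random

def woodcock_free_path(sigma_t, sigma_maj, t_max, rng=random.random):
    """Sample a free path through a heterogeneous medium with Woodcock (delta)
    tracking. sigma_t(t) returns the extinction coefficient at distance t along
    the ray and must be bounded by the majorant sigma_maj. Returns the sampled
    collision distance, or None if the ray leaves the medium before colliding."""
    t = 0.0
    while True:
        t -= math.log(1.0 - rng()) / sigma_maj      # tentative (fictitious) step
        if t >= t_max:
            return None                             # escaped the medium
        if rng() < sigma_t(t) / sigma_maj:
            return t                                # accept as a real collision
```

A tighter majorant over a smaller region means fewer rejected tentative steps, which is exactly the efficiency gain the proposed estimation framework quantifies for a candidate partition.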
5.
Recent research in bidirectional importance sampling has focused primarily on structured illumination sources such as distant environment maps, while unstructured illumination has received little attention. In this paper, we present a method for bidirectional importance sampling of unstructured illumination, allowing us to use the same method for sampling both distant and local/indirect sources. Building upon recent work in [WFA*05], we model complex illumination as a large set of point lights. The subsequent sampling process draws samples only from this point set. We start by constructing a piecewise constant approximation of the lighting using an illumination cut [CPWAP08]. We show that this cut can be used directly for illumination importance sampling. We then use BRDF importance sampling followed by sample counting to update the cut, resulting in a bidirectional distribution that closely approximates the product of the illumination and BRDF. Drawing visibility samples from this new distribution significantly reduces the sampling variance. As a main advance over previous work, our method allows for unstructured sources, including arbitrary local direct lighting and one bounce of indirect lighting.
6.
Kirill Garanzha. Computer Graphics Forum, 2009, 28(4): 1199-1206
In this paper we present a hybrid algorithm for building the bounding volume hierarchy (BVH) that is used in accelerating ray tracing of animated models. This algorithm precomputes densely packed clusters of triangles on surfaces. Following that, the set of clusters is used to rebuild the BVH in every frame. Our approach relies on the assumption that groups of connected triangles remain connected throughout the course of the animation. We introduce a novel heuristic to create triangle clusters that are designed for high-performance ray tracing. This heuristic combines connectivity density, geometric size, and cluster shape.
Our approach accelerates the BVH builder by an order of magnitude by rebuilding only the set of clusters, which is much smaller than the original set of triangles. The speed-up is measured against a 'brute-force' BVH builder that repartitions all triangles in every frame of the animation without any pre-clustering. The rendering performance is not affected when a cluster contains a few dozen triangles. We demonstrate real-time/interactive ray tracing performance for highly dynamic, complex models.
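The per-frame rebuild can be pictured as an ordinary top-down BVH build, but run over per-cluster bounding boxes instead of individual triangles; a minimal median-split version is sketched below. The cluster-creation heuristic (connectivity density, size, shape) is the paper's actual contribution and is not shown; the names and the simple split rule are my own.

```python
import numpy as np

class BVHNode:
    def __init__(self, lo, hi, left=None, right=None, items=None):
        self.lo, self.hi = lo, hi            # node bounding box (min/max corner)
        self.left, self.right = left, right  # children (internal nodes)
        self.items = items                   # cluster indices (leaves only)

def rebuild_bvh(cluster_bounds, leaf_size=2):
    """Rebuild a BVH over per-cluster AABBs each frame. cluster_bounds has
    shape [k, 2, 3] holding the min and max corner of every cluster, refit to
    the animated vertex positions before the call."""
    cluster_bounds = np.asarray(cluster_bounds, dtype=float)
    centers = cluster_bounds.mean(axis=1)

    def build(ids):
        lo = cluster_bounds[ids, 0].min(axis=0)
        hi = cluster_bounds[ids, 1].max(axis=0)
        if len(ids) <= leaf_size:
            return BVHNode(lo, hi, items=ids)
        axis = int(np.argmax(hi - lo))                    # split the widest axis
        order = ids[np.argsort(centers[ids, axis])]       # median split
        mid = len(order) // 2
        return BVHNode(lo, hi, build(order[:mid]), build(order[mid:]))

    return build(np.arange(len(cluster_bounds)))
```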
7.
Interactive computation of global illumination is a major challenge in current computer graphics research. Global illumination heavily affects the visual quality of generated images and is therefore a key attribute for the perception of photo‐realistic images. Path tracing is able to simulate the physical behaviour of light using Monte Carlo techniques. However, the computational burden of this technique prohibits interactive, high‐quality rendering on standard commodity hardware. Trying to solve the Monte Carlo integration with fewer samples results in images with characteristic noise. Global illumination filtering methods take advantage of the fact that the integral for neighbouring pixels may be very similar. Averaging samples of similar characteristics in screen space may approximate the correct integral, but may result in visible outliers. In this paper, we present a novel path tracing pipeline based on an edge‐aware filtering method for the indirect illumination which produces visually more pleasing results without noticeable outliers. The key idea is not to filter the noisy path‐traced image directly but to use it as guidance to filter a second image composed of characteristic scene attributes that are noise‐free by default. We show that our approach better approximates the Monte Carlo integral compared to previous methods. Since the computation is carried out completely in screen space, the approach is applicable to fully dynamic scenes and arbitrary lighting, and allows for high‐quality path tracing at interactive frame rates on commodity hardware.
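A common building block for this kind of edge-aware screen-space filtering is a cross (joint) bilateral filter, sketched naively below: weights come from spatial distance and from differences in a per-pixel feature vector. The paper's pipeline assigns the roles of the noisy image and the attribute image as described above and is considerably more elaborate; this sketch only illustrates the weighting idea, with parameter names of my choosing.

```python
import numpy as np

def cross_bilateral(values, features, radius=3, sigma_s=2.0, sigma_f=0.1):
    """Filter a grayscale image 'values' [H, W] using per-pixel feature vectors
    'features' [H, W, C] (e.g. normal, depth, albedo) as the edge-stopping
    guide. Naive O(radius^2) work per pixel; real implementations use
    separable or a-trous variants on the GPU."""
    h, w = values.shape
    out = np.zeros_like(values)
    for y in range(h):
        for x in range(w):
            acc, wsum = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if not (0 <= yy < h and 0 <= xx < w):
                        continue
                    d2 = dx * dx + dy * dy
                    f2 = float(np.sum((features[yy, xx] - features[y, x]) ** 2))
                    wgt = np.exp(-d2 / (2 * sigma_s ** 2) - f2 / (2 * sigma_f ** 2))
                    acc += wgt * values[yy, xx]
                    wsum += wgt
            out[y, x] = acc / wsum
    return out
```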
8.
In this paper we revisit the computation and visualization of equivalents to isocontours in uncertain scalar fields. We model uncertainty by discrete random fields and, in contrast to previous methods, also take arbitrary spatial correlations into account. Starting with joint distributions of the random variables associated to the sample locations, we compute level crossing probabilities for cells of the sample grid. This corresponds to computing the probabilities that the well‐known symmetry‐reduced marching cubes cases occur in random field realizations. For Gaussian random fields, only marginal density functions that correspond to the vertices of the considered cell need to be integrated. We compute the integrals for each cell in the sample grid using a Monte Carlo method. The probabilistic ansatz does not suffer from degenerate cases that usually require case distinctions and solutions of ill‐conditioned problems. Applications in 2D and 3D, both to synthetic and real data from ensemble simulations in climate research, illustrate the influence of spatial correlations on the spatial distribution of uncertain isocontours.
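A minimal sketch of the per-cell Monte Carlo step: given the joint Gaussian distribution of one cell's vertex values (mean vector and covariance matrix, which encodes the spatial correlation), the level-crossing probability is the probability that a realization has vertices on both sides of the isovalue. The paper's estimator is more refined (it distinguishes the symmetry-reduced marching cubes cases); function and argument names are mine.

```python
import numpy as np

def level_crossing_probability(mean, cov, iso, n_samples=10000, seed=0):
    """Monte Carlo estimate of the probability that the isocontour at 'iso'
    crosses a grid cell whose vertex values are jointly Gaussian with the
    given mean vector and covariance matrix (one entry per cell vertex)."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    below = (samples < iso).any(axis=1)     # some vertex below the isovalue...
    above = (samples > iso).any(axis=1)     # ...and some vertex above it
    return float(np.mean(below & above))
```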
9.
10.
Visualizing dynamic participating media in particle form by fully solving the equations of light transport theory is computationally very expensive. In this paper, we present a computational pipeline for particle volume rendering that is easily accelerated by current GPUs. To fully harness their massively parallel computing power, we transform the input particles into a volumetric density field using a GPU-assisted, adaptive density estimation technique that iteratively adapts the smoothing length for local grid cells. The volume data is then visualized efficiently using the volume photon mapping method, where our GPU techniques further improve the rendering quality offered by previous implementations while keeping the rendering computation within acceptable time. We demonstrate that high-quality volume renderings can easily be produced from large particle datasets in time frames ranging from a few seconds to less than a minute.
11.
Bochang Moon, Jong Yun Jun, JongHyeob Lee, Kunho Kim, Toshiya Hachisuka, Sung‐Eui Yoon. Computer Graphics Forum, 2013, 32(1): 139-151
We propose an efficient and robust image‐space denoising method for noisy images generated by Monte Carlo ray tracing methods. Our method is based on two new concepts: virtual flash images and homogeneous pixels. Inspired by recent developments in flash photography, virtual flash images emulate photographs taken with a flash, to capture various features of rendered images without taking additional samples. Using a virtual flash image as an edge‐stopping function, our method can preserve image features that are not captured well by existing edge‐stopping functions such as normals and depth values alone. While denoising each pixel, we consider only homogeneous pixels, i.e., pixels that are statistically equivalent to each other. This makes it possible to define a stochastic error bound for our method, and this bound goes to zero as the number of ray samples goes to infinity, irrespective of the denoising parameters. To highlight the benefits of our method, we apply it to two Monte Carlo ray tracing methods, photon mapping and path tracing, with various input scenes. We demonstrate that using virtual flash images and homogeneous pixels with a standard denoising method outperforms state‐of‐the‐art image‐space denoising methods.
12.
We present metalights, a novel Virtual Point Light (VPL) encapsulating structure that enhances classic interleaved shading by improving VPL sampling, using a few initial screen‐space samples to estimate each VPL's contribution to the current view. Our method leads to a significant reduction of noise variance in the final picture while adding only a small fraction of computation. The implementation is straightforward and well adapted to both CPU‐ and GPU‐based engines. We also present different image‐space assignment schemes for the VPL subsets to break the regularity of the noise pattern or to adapt it to simple antialiasing.
13.
The efficient evaluation of visibility in a three‐dimensional scene is a longstanding problem in computer graphics. Visibility evaluations come in many different forms: figuring out which object is visible in a pixel, determining whether a point is visible to a light source, or evaluating the mutual visibility between two surface points. This paper provides a new, experimental view on visibility, based on a probabilistic evaluation of the visibility function. Instead of checking visibility against all possible intervening geometry, the visibility between two points is evaluated by testing only a random subset of objects. The result is not a Boolean value that is either 0 or 1, but a numerical value that can even be negative. Because we use the visibility evaluation as part of the integrand in illumination computations, the probabilistic evaluation of visibility becomes part of the Monte Carlo procedure for estimating the illumination integral, and results in an unbiased computation of illumination values in the scene. Moreover, the number of intersection tests for any given ray is decreased, since only a random selection of geometric primitives is tested. Although probabilistic visibility is a new and experimental idea, we present a practical algorithm for direct illumination that exploits the probabilistic nature of these visibility evaluations.
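One way to build such a randomized, unbiased visibility estimate is sketched below: the true visibility is a product of per-primitive terms, each primitive is intersection-tested only with some probability p, and a correction factor keeps the expectation exact while allowing negative values, as noted above. This is an illustrative construction under my own assumptions (including the hypothetical prim.intersects interface), not necessarily the paper's exact estimator.

```python
import random

def probabilistic_visibility(ray, primitives, p=0.3):
    """Randomized, unbiased estimate of V = prod_i v_i, where v_i = 0 if
    primitive i blocks the ray and v_i = 1 otherwise. Each factor is tested
    only with probability p; its expectation is p * (1 + (v - 1) / p)
    + (1 - p) * 1 = v, so the product is unbiased, but a blocked primitive
    contributes the negative factor 1 - 1/p."""
    estimate = 1.0
    for prim in primitives:
        if random.random() < p:
            v = 0.0 if prim.intersects(ray) else 1.0
            estimate *= 1.0 + (v - 1.0) / p
    return estimate
```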
14.
Simulation of light transport through lens systems plays an important role in graphics. While basic imaging properties can be conveniently derived from linear models (like ABCD matrices), these approximations fail to describe nonlinear effects and aberrations that arise in real optics. Such effects can be computed by proper ray tracing, for which, however, finding suitable sampling and filtering strategies is often not a trivial task. Inspired by aberration theory, which describes the deviation from the linear ray transfer in terms of wavefront distortions, we propose a ray‐space formulation for nonlinear effects. In particular, we approximate the analytical solution to the ray tracing problem by means of a Taylor expansion in the ray parameters. This representation enables a construction‐kit approach to complex optical systems in the spirit of matrix optics. It is also very simple to evaluate, which allows for efficient execution on CPU and GPU alike, including the computation of mixed derivatives of any order. We evaluate the fidelity and performance of our polynomial model, and show applications in high‐quality offline rendering and at interactive frame rates.
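The evaluation side of such a polynomial (Taylor) ray-transfer model is indeed simple, which is the point made above; a toy sketch follows. The coefficient layout and the thin-lens example are my own illustration: a degree-1 polynomial reproduces the linear ABCD behaviour, while higher-order monomials model aberrations.

```python
def eval_ray_poly(coeffs, params):
    """Evaluate one output component of a polynomial ray-transfer model.
    'coeffs' maps exponent tuples (one exponent per input ray parameter,
    e.g. (x, y, dx, dy)) to coefficients; 'params' holds the input values."""
    total = 0.0
    for exps, c in coeffs.items():
        term = c
        for value, e in zip(params, exps):
            term *= value ** e
        total += term
    return total

# Example: the linear thin-lens relation dx' = dx - x / f for inputs (x, dx),
# written as a polynomial; aberrations would add higher-order exponent tuples.
f = 50.0
dx_out = eval_ray_poly({(1, 0): -1.0 / f, (0, 1): 1.0}, (1.2, 0.05))
```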
15.
Christian Eisenacher, Gregory Nichols, Andrew Selle, Brent Burley. Computer Graphics Forum, 2013, 32(4): 125-132
Ray‐traced global illumination (GI) is becoming widespread in production rendering, but incoherent secondary ray traversal limits practical rendering to scenes that fit in memory. Incoherent shading also leads to intractable performance with production‐scale textures, forcing renderers to resort to caching of irradiance, radiosity, and other values to amortize expensive shading. Unfortunately, such caching strategies complicate artist workflow, are difficult to parallelize effectively, and contend for precious memory. Worse, these caches involve approximations that compromise quality. In this paper, we introduce a novel path‐tracing framework that avoids these tradeoffs. We sort large, potentially out‐of‐core ray batches to ensure coherence of ray traversal. We then defer shading of ray hits until we have sorted them, achieving perfectly coherent shading and avoiding the need for shading caches.
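The coherence-by-sorting idea can be illustrated with a simple sort key: group rays by direction octant and dominant axis, then order them by origin along that axis, so that neighbouring keys tend to traverse the same BVH nodes. This toy key is only a stand-in for the paper's large, potentially out-of-core batch sorts; the exact key is an assumption on my part.

```python
def traversal_sort_key(ray):
    """Sort key for one ray given as (origin, direction), each a 3-tuple:
    first the sign octant of the direction, then the dominant axis, then the
    origin coordinate along that axis."""
    origin, direction = ray
    octant = tuple(c < 0.0 for c in direction)
    axis = max(range(3), key=lambda i: abs(direction[i]))
    return (octant, axis, origin[axis])

# ray_batch.sort(key=traversal_sort_key)   # sort the batch before traversal
```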
16.
Thomas Engelhardt, Jan Novák, Thorsten‐W. Schmidt, Carsten Dachsbacher. Computer Graphics Forum, 2012, 31(7): 2145-2154
In this paper we present a novel method for high‐quality rendering of scenes with participating media. Our technique is based on instant radiosity, which is used to approximate indirect illumination between surfaces by gathering light from a set of virtual point lights (VPLs). It has been shown that this principle can be applied to participating media as well, so that the combined single scattering contribution of VPLs within the medium yields full multiple scattering. As in the surface case, VPL methods for participating media are prone to singularities, which appear as bright “splotches” in the image. These artifacts are usually countered by clamping the VPLs' contribution, but this leads to energy loss within the short‐distance light transport. Bias compensation recovers the missing energy, but previous approaches are prohibitively costly. We investigate VPL‐based methods for rendering scenes with participating media, and propose a novel and efficient approximate bias compensation technique. We evaluate our technique using various test scenes, showing it to be visually indistinguishable from ground truth.
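The clamping mentioned above is usually applied to the geometry term of each VPL's contribution; a diffuse-only surface sketch is shown below. Bounding the 1/d^2 falloff removes the singular splotches but also removes short-distance energy, which is what bias compensation has to put back. The vpl fields and the absence of a visibility test are simplifying assumptions of this sketch.

```python
import math

def clamped_vpl_contribution(x, n_x, vpl, clamp_dist):
    """Diffuse contribution of one virtual point light (with fields .position,
    .normal and scalar .flux) to a surface point x with unit normal n_x, using
    the usual clamped geometry term G = cos_x * cos_vpl / max(d^2, b^2)."""
    d = [vpl.position[i] - x[i] for i in range(3)]
    d2 = sum(c * c for c in d)
    dist = math.sqrt(d2)
    w = [c / dist for c in d]                                  # x -> VPL direction
    cos_x = max(0.0, sum(w[i] * n_x[i] for i in range(3)))
    cos_vpl = max(0.0, -sum(w[i] * vpl.normal[i] for i in range(3)))
    g = cos_x * cos_vpl / max(d2, clamp_dist * clamp_dist)     # clamped 1/d^2
    return vpl.flux * g / math.pi
```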
17.
We propose an algorithm for interactive ray‐casting of algebraic surfaces of high degree. A key point of our approach is a polynomial form adapted to the view frustum. This so‐called frustum form yields simple expressions for the Bernstein form of the ray equations, which can be computed efficiently using matrix products and pre‐computed quantities. Numerical root‐finding is performed using B‐spline and Bézier techniques, and we compare the performance of recent and classical algorithms. Furthermore, we propose a simple and fairly efficient anti‐aliasing scheme based on a combination of screen‐space and object‐space techniques. We show how our algorithms can be implemented on streaming architectures with single precision, and demonstrate interactive frame rates for degrees up to 16.
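Root finding on the Bernstein form exploits the fact that the coefficients bound the polynomial on the interval: if they do not change sign, the interval can be discarded, otherwise it is subdivided with de Casteljau's algorithm. The sketch below shows this bisection-style scheme; the paper compares more sophisticated B-spline/Bézier solvers, and the function names here are mine.

```python
def de_casteljau_split(b, t=0.5):
    """Split Bernstein coefficients b (of a polynomial on [0, 1]) at parameter
    t, returning the coefficients of the left and right halves."""
    left, right = [b[0]], [b[-1]]
    pts = list(b)
    while len(pts) > 1:
        pts = [(1.0 - t) * p + t * q for p, q in zip(pts, pts[1:])]
        left.append(pts[0])
        right.append(pts[-1])
    return left, right[::-1]

def bernstein_roots(b, lo=0.0, hi=1.0, eps=1e-7, out=None):
    """Collect approximate roots in [lo, hi] of the polynomial whose Bernstein
    coefficients on that interval are b. Intervals whose coefficients all have
    the same sign cannot contain a root and are pruned; near a root this simple
    version may report the same value from both neighbouring intervals."""
    out = [] if out is None else out
    if all(c > 0.0 for c in b) or all(c < 0.0 for c in b):
        return out
    if hi - lo < eps:
        out.append(0.5 * (lo + hi))
        return out
    left, right = de_casteljau_split(b)
    mid = 0.5 * (lo + hi)
    bernstein_roots(left, lo, mid, eps, out)
    bernstein_roots(right, mid, hi, eps, out)
    return out
```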
18.
There are two major ways of calculating ray and parametric surface intersections in rendering. The first is through the use of tessellated triangles, and the second is to use parametric surfaces together with numerical methods such as Newton's method. Both methods are computationally expensive and complicated to implement. In this paper, we focus on Phong Tessellation and introduce a simple direct ray tracing method for it. Our method enables rendering smooth surfaces in a computationally inexpensive yet robust way.
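For context, the surface being intersected is the Phong Tessellation patch, which bends the flat triangle toward barycentrically weighted projections of the interpolated point onto the vertex tangent planes; a sketch of the evaluation follows. The direct ray-intersection procedure that is the paper's contribution is not shown, and the shape factor alpha = 3/4 is the commonly used default rather than a value taken from this paper.

```python
import numpy as np

def phong_tessellation_point(p, n, bary, alpha=0.75):
    """Evaluate the Phong Tessellation surface point of a triangle with vertex
    positions p[0..2], unit vertex normals n[0..2] and barycentric weights
    'bary' (u, v, w summing to one)."""
    p = np.asarray(p, dtype=float)
    n = np.asarray(n, dtype=float)
    u = np.asarray(bary, dtype=float)
    flat = u @ p                                        # flat (linear) point
    proj = [flat - np.dot(flat - p[i], n[i]) * n[i]     # project onto tangent
            for i in range(3)]                          # plane of each vertex
    curved = sum(u[i] * proj[i] for i in range(3))
    return (1.0 - alpha) * flat + alpha * curved        # blend flat and curved
```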
19.
In this paper, we propose a new constrained interpolation profile (CIP) method that is stable and accurate but requires less computation than existing CIP‐based solvers. CIP is a high‐order fluid advection solver that can reproduce rich details of fluids. It has third‐order accuracy, but its computation is performed over a compact stencil. These advantageous features of CIP are, however, diluted by two shortcomings: (1) CIP contains a defect in its utilization of the grid data, which makes the method suitable only for simulations with a tight CFL restriction; and (2) CIP does not guarantee unconditional stability. There have been several attempts to fix these problems, but they have been only partially successful. The solutions that fixed both problems ended up introducing other undesirable features, namely increased computation time and/or reduced accuracy. This paper proposes a novel modification of the original CIP method that fixes all of the above problems without increasing the computational load or reducing the accuracy. Both quantitative and visual experiments were performed to compare the performance of the new CIP with existing fluid solvers. The results show that the proposed method brings significant improvements in both accuracy and speed.
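For readers unfamiliar with CIP, the sketch below shows one semi-Lagrangian step of the classic third-order scheme in 1D: both the advected quantity and its spatial derivative are stored, and a cubic Hermite profile between a node and its upwind neighbour is evaluated at the departure point. This is the baseline the paper modifies (note that it assumes a CFL number of at most one, the restriction mentioned above), not the paper's new variant; periodic boundaries are an assumption of the sketch.

```python
import numpy as np

def cip_advect_1d(f, g, u, dt, dx):
    """One step of classic 1D CIP advection with periodic boundaries.
    f: advected values, g: their spatial derivatives, u: per-node velocities.
    Assumes |u| * dt <= dx so the departure point lies in the upwind cell."""
    n = len(f)
    f_new, g_new = np.empty(n), np.empty(n)
    for i in range(n):
        up = (i - 1) % n if u[i] >= 0.0 else (i + 1) % n   # upwind neighbour
        D = -dx if u[i] >= 0.0 else dx                     # signed cell width
        xi = -u[i] * dt                                    # departure offset
        a = (g[i] + g[up]) / D**2 + 2.0 * (f[i] - f[up]) / D**3
        b = 3.0 * (f[up] - f[i]) / D**2 - (2.0 * g[i] + g[up]) / D
        f_new[i] = ((a * xi + b) * xi + g[i]) * xi + f[i]  # Hermite cubic value
        g_new[i] = (3.0 * a * xi + 2.0 * b) * xi + g[i]    # and its derivative
    return f_new, g_new
```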
20.
This paper proposes a new technique for generating three-dimensional speech animation. The proposed technique takes advantage of both data-driven and machine learning approaches. It seeks to utilize the most relevant part of the captured utterances for the synthesis of input phoneme sequences. If highly relevant data are missing or lacking, then it utilizes less relevant (but more abundant) data and relies more heavily on machine learning for the lip-synch generation. This hybrid approach produces results that are more faithful to real data than conventional machine learning approaches, while being better able to handle incompleteness or redundancy in the database than conventional data-driven approaches. Experimental results, obtained by applying the proposed technique to the utterance of various words and phrases, show that (1) the proposed technique generates lip-synchs of different qualities depending on the availability of the data, and (2) the new technique produces more realistic results than conventional machine learning approaches.