Similar Documents
20 similar documents found (search time: 15 ms).
1.
We present a new technique called Multiple Vertex Next Event Estimation, which outperforms current direct lighting techniques in forward-scattering, optically dense media with the Henyey-Greenstein phase function. Instead of a one-segment connection from a vertex within the medium to the light source, an entire subpath of arbitrary length can be created, and we show experimentally that 4–10 segments work best in practice. This is done by perturbing a seed path within the Monte Carlo context. Our technique was integrated into a Monte Carlo renderer, combining random-walk path tracing with multiple vertex next event estimation via multiple importance sampling for an unbiased result. We evaluate this new technique against standard next event estimation and show that it significantly reduces noise and increases the performance of multiple-scattering renderings in highly anisotropic, optically dense media. Additionally, we discuss multiple light sources and the performance implications of memory-heavy heterogeneous media.
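A minimal sketch of two generic ingredients the abstract mentions, not the paper's MVNEE algorithm itself: sampling the Henyey-Greenstein phase function and weighting two sampling strategies with the multiple-importance-sampling balance heuristic. The stand-in pdf for the competing strategy is an assumption for illustration only.

```python
import math, random

def sample_hg_cos_theta(g, xi):
    """Sample cos(theta) from the Henyey-Greenstein distribution (asymmetry g)."""
    if abs(g) < 1e-3:                       # isotropic limit
        return 1.0 - 2.0 * xi
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - s * s) / (2.0 * g)

def hg_pdf(g, cos_theta):
    """HG phase function value = pdf per unit solid angle (integrates to 1)."""
    denom = 1.0 + g * g - 2.0 * g * cos_theta
    return (1.0 - g * g) / (4.0 * math.pi * denom ** 1.5)

def balance_heuristic(pdf_a, pdf_b):
    """MIS weight for a sample drawn from strategy A."""
    return pdf_a / (pdf_a + pdf_b)

# usage: weight a phase-sampled direction against a stand-in
# next-event-estimation pdf (here: uniform sphere, 1/(4*pi))
g = 0.9
cos_t = sample_hg_cos_theta(g, random.random())
w = balance_heuristic(hg_pdf(g, cos_t), 1.0 / (4.0 * math.pi))
```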

2.
Robust statistical methods are employed to reduce the noise in Monte Carlo ray tracing. Through the use of resampling, the sample-mean distribution is determined for each pixel. Because this distribution is unimodal and normal for a large sample size, robust estimates converge to the true mean of the pixel values. Compared to existing methods, less additional storage is required at each pixel because the sample-mean distribution can be distilled down to a compact size, and fewer computations are necessary because the robust estimation process is sampling-independent and needs only a small input size to compute pixel values. The robust statistical pixel estimators are not only resistant to impulse noise, but they also remove general noise from fat-tailed distributions. A substantial speedup in rendering can therefore be achieved by reducing the number of samples required for a desired image quality. The effectiveness of the proposed approach is demonstrated for path tracing simulations.
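A minimal sketch in the spirit of this approach, not the paper's exact estimator: split a pixel's samples into blocks, form the distribution of block means, and take a robust location estimate (the median), so a single "firefly" outlier has bounded influence on the pixel value.

```python
import numpy as np

def median_of_means(samples, n_blocks=8):
    samples = np.asarray(samples, dtype=np.float64)
    blocks = np.array_split(samples, n_blocks)    # disjoint resampled groups
    block_means = np.array([b.mean() for b in blocks])
    return np.median(block_means)                 # robust estimate of the pixel mean

# usage: 63 well-behaved samples plus one huge outlier
rng = np.random.default_rng(0)
vals = np.concatenate([rng.normal(0.5, 0.05, 63), [500.0]])
print(vals.mean())              # ~8.3, ruined by the outlier
print(median_of_means(vals))    # ~0.5, close to the true mean
```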

3.
Adaptive filtering techniques have proven successful in handling non-uniform noise in Monte Carlo rendering approaches. A recent trend is to choose an optimal filter per pixel from a selection of filters that are not spatially varying. Nonetheless, the best filter choice is difficult to predict in the absence of a reference rendering. Our approach relies on the observation that the reconstruction error is locally smooth for a given filter. Hence, we propose to construct a dense error prediction from a small set of sparse but robust estimates. The filter selection is then formulated as a non-local optimization problem, which we solve via graph cuts to avoid visual artifacts due to inconsistent filter choices. Our approach does not impose any restrictions on the filters used, outperforms previous state-of-the-art techniques and provides an extensible framework for future reconstruction techniques.

4.
Bidirectional path tracing is known to perform poorly when rendering highly occluded scenes. Indeed, the connection strategy between light and eye subpaths does not take the visibility factor into account, so many sampled paths yield no contribution. To improve the efficiency of bidirectional path tracing, we propose a new method for adaptive resampling of connections between light and eye subpaths. To this end, we build discrete probability distributions of light subpaths based on a skeleton of the empty space of the scene. To demonstrate the efficiency of our algorithm, we compare our method to both standard bidirectional path tracing and a recent importance caching method.

5.
In this paper, we present improvements to half vector space light transport (HSLT) [KHD14], which make this approach more practical, robust for difficult input geometry, and faster. Our first contribution is the computation of half vector space ray differentials in a different domain than the original work. This enables a more uniform stratification over the image plane during Markov chain exploration. Furthermore, we introduce a new multi-chain perturbation in half vector space which, if combined appropriately with the half vector perturbation, makes the mutation strategy both more robust to geometric configurations with fine displacements and faster due to a reduced number of ray casts. We provide and analyze the results of improved HSLT and discuss possible applications of our new half vector ray differentials.

6.
A new unbiased sampling approach is presented, which allows the direct illumination from disk and cylinder light sources to be sampled with a uniform probability distribution within their solid angles, as seen from each illuminated point. This approach applies to any form of global illumination path tracing algorithm (forward or bidirectional) where the direct illumination integral from light sources needs to be estimated. We show that taking samples based on the solid angle of these two light sources leads to improved estimates and reduced variance of the Monte Carlo integral for direct illumination. This work follows from previously known unbiased methods for the solid angle sampling of triangular and rectangular light sources and extends the class of lights that can be rendered with these improved sampling algorithms.
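A minimal sketch of the baseline the abstract improves on: uniform area sampling of a disk light and the standard conversion of the area pdf to the solid-angle measure, p_omega = p_A * dist^2 / cos(theta_light). The paper's contribution, sampling uniformly in solid angle directly, requires a more involved construction that is not reproduced here.

```python
import math
import numpy as np

def sample_disk_area(center, normal, radius, rng):
    """Uniformly sample a point on a disk light; returns (point, area_pdf)."""
    center, normal = np.asarray(center, float), np.asarray(normal, float)
    r = radius * math.sqrt(rng.random())
    phi = 2.0 * math.pi * rng.random()
    # build an orthonormal basis (t, b, normal) around the disk normal
    t = np.cross(normal, [0.0, 1.0, 0.0])
    if np.linalg.norm(t) < 1e-6:
        t = np.cross(normal, [1.0, 0.0, 0.0])
    t = t / np.linalg.norm(t)
    b = np.cross(normal, t)
    p = center + r * math.cos(phi) * t + r * math.sin(phi) * b
    return p, 1.0 / (math.pi * radius * radius)

def area_pdf_to_solid_angle(area_pdf, shading_point, light_point, light_normal):
    """Convert an area-measure pdf to the solid-angle measure at the shading point."""
    d = np.asarray(light_point, float) - np.asarray(shading_point, float)
    dist2 = float(np.dot(d, d))
    cos_light = abs(float(np.dot(light_normal, -d))) / math.sqrt(dist2)
    return area_pdf * dist2 / max(cos_light, 1e-8)
```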

7.
Despite recent advances in Monte Carlo rendering techniques, dense, high-albedo participating media such as wax or skin still remain a difficult problem. In such media, random walks tend to become very long, but may still contribute significantly to the image. The Dwivedi sampling scheme, which is based on zero-variance random walks, biases the sampling probability distributions so that paths exit the medium as quickly as possible. This can reduce variance considerably under the assumption of a locally homogeneous medium with a constant phase function. Prior work uses the surface normal at the point of entry as the bias direction. We demonstrate that this technique can fail in common scenarios such as thin geometry with a strong backlight. We propose two new biasing strategies, Closest Point and Incident Illumination biasing, and show that these techniques can speed up convergence by up to an order of magnitude. Additionally, we propose a heuristic approach for combining biased and classical sampling techniques using multiple importance sampling.
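A minimal sketch of the directional ingredient of the Dwivedi scheme as it is usually stated in prior work: the cosine mu toward a chosen bias direction is drawn with pdf proportional to 1/(v0 - mu) on [-1, 1], which pulls the random walk out of the medium. Here v0 > 1, which depends on the single-scattering albedo, is taken as given, and the combination with classical sampling is shown only as a balance-heuristic weight.

```python
import math, random

def sample_dwivedi_cosine(v0, xi):
    """Invert the CDF of p(mu) ~ 1/(v0 - mu) on [-1, 1]."""
    return v0 - (v0 + 1.0) ** (1.0 - xi) * (v0 - 1.0) ** xi

def dwivedi_pdf(v0, mu):
    """pdf of mu for the distribution above (per unit cosine)."""
    norm = math.log((v0 + 1.0) / (v0 - 1.0))
    return 1.0 / ((v0 - mu) * norm)

def mis_weight(pdf_used, pdf_other):
    """Balance-heuristic weight for the strategy actually used."""
    return pdf_used / (pdf_used + pdf_other)

# usage: draw a biased cosine and weight it against isotropic sampling
v0 = 1.2
mu = sample_dwivedi_cosine(v0, random.random())
w = mis_weight(dwivedi_pdf(v0, mu), 0.5)   # 0.5 = isotropic pdf per unit cosine
```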

8.
Rendering with accurate camera models greatly increases realism and improves the match of synthetic imagery to real-life footage. Photographic lenses can be simulated by ray tracing, but the performance depends on the complexity of the lens system, and some operations required for modern algorithms, such as deterministic connections, can be difficult to achieve. We generalise the approach of polynomial optics, i.e. expressing the light field transformation from the sensor to the outer pupil using a polynomial, to work with extreme wide-angle (fisheye) lenses and aspherical elements. We also show how sparse polynomials can be constructed from the large space of high-degree terms (we tested up to degree 15). We achieve this by using a variant of orthogonal matching pursuit instead of a Taylor series when computing the polynomials. We show two applications: photorealistic rendering using Monte Carlo methods, where we introduce a new aperture sampling technique that is suitable for light tracing, and an interactive preview method suitable for rendering with deep images.
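A minimal sketch of fitting one output component of a lens transform with a sparse polynomial selected by orthogonal matching pursuit, as opposed to a truncated Taylor series. The monomial dictionary, the synthetic target function and the term count are illustrative assumptions, not the paper's pipeline.

```python
import itertools
import numpy as np

def monomial_dictionary(X, degree):
    """Columns are monomials x1^e1 * ... * xd^ed with e1 + ... + ed <= degree."""
    n, d = X.shape
    exps = [e for e in itertools.product(range(degree + 1), repeat=d) if sum(e) <= degree]
    cols = [np.prod(X ** np.array(e), axis=1) for e in exps]
    return np.stack(cols, axis=1), exps

def omp(A, y, n_terms):
    """Greedy orthogonal matching pursuit: pick n_terms columns of A."""
    residual, chosen, coeffs = y.copy(), [], None
    for _ in range(n_terms):
        scores = np.abs(A.T @ residual)
        scores[chosen] = -np.inf                    # never pick a column twice
        chosen.append(int(np.argmax(scores)))
        coeffs, *_ = np.linalg.lstsq(A[:, chosen], y, rcond=None)
        residual = y - A[:, chosen] @ coeffs        # re-fit, then update residual
    return chosen, coeffs

# usage: recover a sparse polynomial from sampled data (synthetic stand-in)
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 2))
y = 0.7 * X[:, 0] - 0.2 * X[:, 0] * X[:, 1] ** 2 + 0.05 * X[:, 1] ** 4
A, exps = monomial_dictionary(X, degree=5)
terms, coeffs = omp(A, y, n_terms=3)
print([exps[t] for t in terms], coeffs)
```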

9.
We present a novel, compact bounding volume hierarchy, the TSS BVH, for ray tracing subdivision surfaces computed by the Catmull-Clark scheme. We use the Tetrahedron Swept Sphere (TSS) as a bounding volume to tightly bound the limit surfaces of such subdivision surfaces given a user tolerance. The geometric coordinates defining our TSS bounding volumes are implicitly computed from the subdivided mesh via a simple vertex ordering method, and each level of our TSS BVH is associated with a single distance bound derived from the Catmull-Clark scheme. These features result in a space complexity that is linear in the tree depth, while many prior BVHs have exponential space complexity. We have tested our method against different benchmarks with path tracing and photon mapping. We found that our method achieves up to two orders of magnitude of memory reduction, with a high culling ratio, over prior AABB BVH methods when representing models with two to four subdivision levels. As a consequence, our method achieves a threefold overall performance improvement. These results build on a theorem that allows our TSS bounding volumes to be computed rigorously.

10.
We present a technique to efficiently importance sample distant, all-frequency illumination in indoor scenes. Standard environment sampling is inefficient in such cases since the distant lighting is typically only visible through small openings (e.g. windows). This visibility is often addressed by manually placing a portal around each window to direct samples towards the openings; however, uniformly sampling the portal (over its area or solid angle) disregards the possibly high-frequency environment map. We propose a new portal importance sampling technique which takes into account both the environment map and its visibility through the portal, drawing samples proportional to the product of the two. To make this practical, we propose a novel, portal-rectified reparametrization of the environment map with the key property that the visible region induced by a rectangular portal projects to an axis-aligned rectangle. This allows us to sample according to the desired product distribution at an arbitrary shading location using a single (precomputed) summed-area table per portal. Our technique is unbiased, relevant to many renderers, and can also be applied to rectangular light sources with directional emission profiles, enabling efficient rendering of non-diffuse light sources with soft shadows.
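A minimal sketch of the sampling step only: given a luminance image already stored in the portal-rectified parametrization (the paper's contribution, assumed precomputed here), a summed-area table allows a texel inside any axis-aligned visible rectangle to be drawn proportionally to luminance without per-shading-point tables.

```python
import numpy as np

def build_sat(img):
    """Summed-area table: sat[r, c] = sum of img[:r+1, :c+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(sat, r0, r1, c0, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) using the summed-area table."""
    total = sat[r1 - 1, c1 - 1]
    if r0 > 0: total -= sat[r0 - 1, c1 - 1]
    if c0 > 0: total -= sat[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0: total += sat[r0 - 1, c0 - 1]
    return total

def sample_in_rect(sat, r0, r1, c0, c1, u1, u2):
    """Pick (row, col) in the visible rectangle with probability ~ luminance."""
    # column first: binary search on the cumulative column mass
    target = u1 * box_sum(sat, r0, r1, c0, c1)
    lo, hi = c0, c1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if box_sum(sat, r0, r1, c0, mid) < target: lo = mid
        else: hi = mid
    col = lo
    # then a row within the chosen column
    target = u2 * box_sum(sat, r0, r1, col, col + 1)
    lo, hi = r0, r1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if box_sum(sat, r0, mid, col, col + 1) < target: lo = mid
        else: hi = mid
    return lo, col

# usage: sample a texel inside the rectangle visible through one portal
rng = np.random.default_rng(2)
env = rng.random((64, 128))
sat = build_sat(env)
print(sample_in_rect(sat, 10, 40, 20, 90, rng.random(), rng.random()))
```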

11.
Many-light methods approximate the light transport in a scene by computing the direct illumination from many virtual point light sources (VPLs), and render low-noise images covering a wide range of performance and quality goals. However, they are very inefficient at representing glossy light transport. This is because a VPL on a glossy surface illuminates only a small fraction of the scene, and a tremendous number of VPLs might be necessary to render acceptable images. In this paper, we introduce Rich-VPLs which, in contrast to standard VPLs, represent a multitude of light paths and thus have a more widespread emission profile on glossy surfaces and in scenes with multiple primary light sources. In this way, a single Rich-VPL contributes to larger portions of a scene with negligible additional shading cost. Our second contribution is a placement strategy for (Rich-)VPLs proportional to sensor importance times radiance. Although both Rich-VPLs and the improved placement can be used individually, they complement each other ideally and share interim computation. Furthermore, both complement existing many-light methods, e.g. Lightcuts or the Virtual Spherical Lights method, and can improve their efficiency as well as their applicability to scenes with glossy materials and many primary light sources.

12.
Presenting stereoscopic content on 3D displays is a challenging task, usually requiring manual adjustments. A number of techniques have been developed to aid this process, but they account for the binocular disparity only of surfaces that are diffuse and opaque. However, combinations of transparent and specular materials are common in the real and virtual worlds, and pose a significant problem. For example, excessive disparities can be created which cannot be fused by the observer. Also, multiple stereo interpretations become possible, e.g. for glass that both reflects and refracts, which may confuse the observer and result in a poor 3D experience. In this work, we propose an efficient method for analyzing and controlling disparities in computer-generated images of such scenes where surface positions and a layer decomposition are available. Instead of assuming a single per-pixel disparity value, we estimate all possibly perceived disparities at each image location. Based on this representation, we define an optimization to find the best per-pixel camera parameters, ensuring that all disparities can be easily fused by a human. A preliminary perceptual study indicates that our approach combines comfortable viewing with realistic depiction of typical specular scenes.

13.
In this paper, we present a method to model hyperelasticity that is well suited for representing the nonlinearity of real-world objects, as well as for estimating it from deformation examples. Previous approaches suffer from several limitations, such as a lack of integrability of elastic forces, failure to enforce energy convexity, a lack of robustness in parameter estimation, or difficulty in modeling cross-modal effects. Our method avoids these problems by relying on a general energy-based definition of elastic properties. The accuracy of the resulting elastic model is maximized by defining an additive model of separable energy terms, which allows progressive parameter estimation. In addition, our method supports efficient modeling of extreme nonlinearities thanks to energy-limiting constraints. We combine our energy-based model with an optimization method to estimate model parameters from force-deformation examples, and we show successful modeling of diverse deformable objects, including cloth, human finger skin, and internal human anatomy in a medical imaging application.
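A toy 1D analogue of the additive, separable-energy idea: the strain energy is a sum of even-power terms with non-negative coefficients (so it stays convex), and the coefficients are estimated from force-displacement examples. This is only an illustration of the fitting strategy under those assumptions, not the paper's constitutive model.

```python
import numpy as np
from scipy.optimize import nnls

def fit_energy_coeffs(x, f, powers=(2, 4, 6)):
    """Fit E(x) = sum_k c_k x^p_k with c_k >= 0 from force samples f = -dE/dx."""
    A = np.stack([p * x ** (p - 1) for p in powers], axis=1)
    c, _ = nnls(A, -f)              # non-negativity keeps the energy convex
    return dict(zip(powers, c))

# usage: synthetic force-displacement measurements of a stiffening material
x = np.linspace(-0.5, 0.5, 50)
f_true = -(2.0 * x + 12.0 * x ** 3)      # forces from E = x^2 + 3 x^4
print(fit_energy_coeffs(x, f_true))      # ~{2: 1.0, 4: 3.0, 6: 0.0}
```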

14.
We present a spatial index structure to accelerate ray tracing on GPUs. It is a flat, non-hierarchical spatial subdivision of the scene into axis-aligned cells of varying size. In order to construct it, we first nest an octree into each cell of a uniform grid. We then apply two optimization passes to increase ray traversal performance: First, we reduce the expected cost of ray traversal by merging cells together. This adapts the structure to complex primitive distributions, solving the “teapot in a stadium” problem. Second, we decouple the cell boundaries used during traversal for rays entering and exiting a given cell. This allows us to extend the exiting boundaries over adjacent cells that are either empty or do not contain additional primitives. Exiting rays can then skip empty space and avoid repeated intersection tests. Finally, we demonstrate that in addition to fast ray traversal, the structure can be rebuilt efficiently in parallel, allowing for ray tracing of dynamic scenes.

15.
In this paper we present a novel approach to simulating image formation for a wide range of real-world lenses in the Monte Carlo ray tracing framework. Our approach sidesteps the overhead of tracing rays through a system of lenses and requires no tabulation. To this end, we first improve the precision of polynomial optics to closely match ground-truth ray tracing. Second, we show how the Jacobian of the optical system enables efficient importance sampling, which is crucial for difficult paths such as sampling the aperture, which is hidden behind lenses on both sides. Our results show that this yields converged images significantly faster than previous methods and accurately renders complex lens systems with negligible overhead compared to simple models, e.g. the thin-lens model. We demonstrate the practicality of our method by incorporating it into a bidirectional path tracing framework and show how it can provide the information needed for sophisticated light transport algorithms.

16.
Crowded motion refers to multiple objects, such as pedestrians in a crowd, moving around and interacting. We capture crowded scenes using a depth scanner at video frame rates; thus, our input is a set of depth frames which sample the scene over time. Processing such data is challenging as it is highly unorganized, with large spatio-temporal holes due to many occlusions. As no correspondence is given, locally tracking 3D points across frames is hard due to noise and missing regions. Furthermore, global segmentation and motion completion in the presence of large occlusions is ambiguous and hard to predict. Our algorithm utilizes the Gestalt principles of common fate and good continuity to compute motion tracking and completion, respectively. Our technique does not assume any pre-given markers or motion template priors. Our key idea is to reduce the motion completion problem to a 1D curve fitting and matching problem, which can be solved efficiently using a global optimization scheme. We demonstrate our segmentation and completion method on a variety of synthetic and real-world crowded scanned scenes.

17.
The most common solutions to the light transport problem rely on either Monte Carlo (MC) integration or density estimation methods, such as uni- & bi-directional path tracing or photon mapping. Recent gradient-domain extensions of MC approaches show great promise; here, gradients of the final image are estimated numerically (instead of the image intensities themselves) with coherent paths generated from a deterministic shift mapping. We extend gradient-domain approaches to light transport simulation based on density estimation. As with previous gradient-domain methods, we detail the important considerations that arise when moving from a primal- to a gradient-domain estimator. We provide an efficient and straightforward solution to these problems. Our solution supports stochastic progressive density estimation, so it is robust to complex transport effects. We show that gradient-domain photon density estimation converges faster than its primal-domain counterpart, as well as being generally more robust than gradient-domain uni- & bi-directional path tracing for scenes dominated by complex transport.
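A minimal 1D sketch of the reconstruction step that gradient-domain methods share: combine a noisy primal estimate P with noisy finite-difference gradient estimates G by solving the screened Poisson problem min_I alpha*||I - P||^2 + ||D I - G||^2 via its normal equations. Real renderers solve this in 2D per color channel; the weight alpha and the test signal below are illustrative assumptions.

```python
import numpy as np

def reconstruct(P, G, alpha=0.2):
    """Screened Poisson reconstruction of a 1D signal from primal + gradients."""
    n = P.size
    D = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    D[idx, idx], D[idx, idx + 1] = -1.0, 1.0          # forward differences
    A = alpha * np.eye(n) + D.T @ D                   # normal equations
    b = alpha * P + D.T @ G
    return np.linalg.solve(A, b)

# usage: gradients are less noisy than the primal estimate, so the result is too
rng = np.random.default_rng(3)
truth = np.sin(np.linspace(0, np.pi, 128))
P = truth + rng.normal(0, 0.3, truth.size)                 # noisy primal image
G = np.diff(truth) + rng.normal(0, 0.02, truth.size - 1)   # less noisy gradients
I = reconstruct(P, G)
print(float(np.mean((P - truth) ** 2)), float(np.mean((I - truth) ** 2)))
```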

18.
Due to the recent advancement of computer graphics hardware and software algorithms, deformable characters have become more and more popular in real-time applications such as computer games. While there are mature techniques to generate primary deformation from skeletal movement, simulating realistic and stable secondary deformation, such as the jiggling of fat, remains challenging. On the one hand, traditional volumetric approaches such as the finite element method require a higher computational cost and are infeasible for limited hardware such as game consoles. On the other hand, while shape-matching-based simulations can produce plausible deformation in real time, they suffer from a stiffness problem in which particles either show unrealistic deformation due to high gains, or cannot catch up with the body movement. In this paper, we propose a unified multi-layer lattice model to simulate the primary and secondary deformation of skeleton-driven characters. The core idea is to voxelize the input character mesh into multiple anatomical layers including bone, muscle, fat and skin. Primary deformation is applied to the bone voxels with lattice-based skinning. The movement of these voxels is propagated to the other voxel layers using lattice shape matching simulation, creating natural secondary deformation. Our multi-layer lattice framework can produce simulation quality comparable to that of other volumetric approaches at a significantly smaller computational cost. It is best suited to real-time applications such as console games or interactive animation creation.
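A minimal sketch of the shape-matching core that lattice-based methods build on: find the best-fit rigid rotation between the rest and current particle positions of one cluster and pull particles toward the resulting goal positions. The stiffness and time step are illustrative, and the paper's multi-layer voxel propagation is not reproduced here.

```python
import numpy as np

def shape_matching_goals(x_rest, x_cur):
    """Return goal positions for one cluster of particles (rows are 3D points)."""
    c0, c = x_rest.mean(axis=0), x_cur.mean(axis=0)
    P, Q = x_cur - c, x_rest - c0
    Apq = P.T @ Q                              # 3x3 covariance of the deformation
    U, _, Vt = np.linalg.svd(Apq)
    R = U @ Vt                                 # rotation from polar decomposition
    if np.linalg.det(R) < 0:                   # avoid reflections
        U[:, -1] *= -1.0
        R = U @ Vt
    return (R @ Q.T).T + c

def step(x_rest, x_cur, v, stiffness=0.5, dt=1.0 / 60.0):
    """Pull particles toward their goal positions (secondary 'jiggle' dynamics)."""
    goals = shape_matching_goals(x_rest, x_cur)
    v = v + stiffness * (goals - x_cur) / dt
    return x_cur + v * dt, v
```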

19.
We present a novel approach for analyzing the quality of multi-agent crowd simulation algorithms. Our approach is data-driven, taking as input a set of user-defined metrics and reference training data, either synthetic or from video footage of real crowds. Given a simulation, we formulate the crowd analysis problem as an anomaly detection problem and exploit state-of-the-art outlier detection algorithms to address it. To that end, we introduce a new framework for the visual analysis of crowd simulations. Our framework allows us to capture potentially erroneous behaviors on a per-agent basis, either by automatically detecting outliers based on individual evaluation metrics or by accounting for multiple evaluation criteria in a principled fashion using Principal Component Analysis and the notion of Pareto optimality. We discuss the optimizations necessary to allow real-time performance on large datasets and demonstrate the applicability of our framework through the analysis of simulations created by several widely used methods, including a simulation from a commercial game.
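A minimal sketch of the two aggregation ideas named in the abstract, applied to per-agent metric vectors: a PCA reconstruction-error outlier score, and a Pareto-optimality test that flags agents not dominated by any other agent (assuming larger metric values mean "more anomalous"). This is illustrative, not the paper's framework.

```python
import numpy as np

def pca_outlier_scores(M, n_components=2):
    """M: (agents x metrics). Score = reconstruction error in a low-rank basis."""
    Mc = M - M.mean(axis=0)
    _, _, Vt = np.linalg.svd(Mc, full_matrices=False)
    basis = Vt[:n_components]
    recon = (Mc @ basis.T) @ basis
    return np.linalg.norm(Mc - recon, axis=1)

def pareto_front(M):
    """Indices of agents whose metric vectors are not dominated by any other."""
    front = []
    for i in range(M.shape[0]):
        dominated = np.any(np.all(M >= M[i], axis=1) & np.any(M > M[i], axis=1))
        if not dominated:
            front.append(i)
    return front

# usage: 100 agents, 3 evaluation metrics
rng = np.random.default_rng(4)
M = rng.random((100, 3))
print(np.argsort(pca_outlier_scores(M))[-5:], pareto_front(M))
```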

20.
Recalculating the subspace basis of a deformable body is a mandatory procedure for subspace simulation after the body gets modified by an interactive application. However, using linear modal analysis to calculate the basis from scratch is known to be computationally expensive. In this paper, we show that the subspace of a modified body can be efficiently obtained from the subspace of its original version if the mesh changes are small. Our basic idea is to approximate the stiffness matrix by its low-frequency component, so we can calculate new linear deformation modes by solving an incremental eigenvalue decomposition problem. To further handle nonlinear deformations in the subspace, we present a hybrid approach to calculate modal derivatives from both new and original linear modes. Finally, we demonstrate that the cubature samples trained for the original mesh can be reused for fast reduced force and stiffness matrix evaluation, and we explore the use of our techniques in various simulation problems. Our experiments show that the updated subspace basis still allows a simulator to generate visually plausible deformation effects. The whole system is efficient, and it is compatible with other subspace construction approaches.
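A minimal sketch of the incremental idea using an off-the-shelf iterative eigensolver rather than the paper's own incremental decomposition: if the mesh edit is small, the old linear modes are a good starting subspace, so re-solving the generalized eigenproblem K phi = lambda M phi for the modified body converges in few iterations. This assumes the stiffness matrix is positive definite (i.e. the body is constrained).

```python
import numpy as np
from scipy.sparse.linalg import lobpcg

def update_linear_modes(K_new, M_new, old_modes, tol=1e-6, maxiter=50):
    """old_modes: (n_dofs x n_modes) basis of the original body, reused as the
    warm-start guess for the modified body's lowest-frequency modes."""
    eigvals, modes = lobpcg(K_new, old_modes, B=M_new,
                            largest=False, tol=tol, maxiter=maxiter)
    order = np.argsort(eigvals)                 # sort by frequency
    return eigvals[order], modes[:, order]
```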
