Similar Articles
20 similar articles found.
1.
Monte Carlo rendering algorithms generally rely on some form of importance sampling to evaluate the measurement equation. However, most of these importance sampling methods take only local information into account, so the actual importance function used may not closely resemble the light distribution in the scene. In this paper, we present Table‐driven Adaptive Importance Sampling (TAIS), a sampling technique that augments existing importance functions with tabular importance maps that direct sampling towards undersampled regions of path space. The importance maps are constructed lazily, relying on information gathered during the course of sampling. During sampling, the importance maps act either in parallel with or as a preprocess to existing importance sampling methods. We show that our adaptive importance maps can be effective at reducing variance in a number of rendering situations.
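As a rough illustration of the underlying mechanism, the sketch below shows a tabular importance map over a discretized 1D domain whose bins can be reinforced and then sampled in proportion to their weights; the class name, table layout, and update rule are illustrative assumptions, not the authors' TAIS implementation.

    #include <cstdio>
    #include <random>
    #include <vector>

    // Illustrative tabular importance map over [0,1): bins with adjustable
    // weights, sampled via a discrete distribution. Names and the update
    // rule are assumptions for illustration only.
    class ImportanceTable {
    public:
        explicit ImportanceTable(int bins) : w_(bins, 1.0) {}

        // Increase the weight of the bin containing x, e.g. when the sampler
        // detects that this region of path space is undersampled.
        void reinforce(double x, double amount) { w_[bin(x)] += amount; }

        // Draw a sample x in [0,1) and return its probability density.
        double sample(std::mt19937& rng, double* pdf) const {
            std::discrete_distribution<int> pick(w_.begin(), w_.end());
            int b = pick(rng);                       // choose a bin proportional to weight
            std::uniform_real_distribution<double> u(0.0, 1.0);
            double x = (b + u(rng)) / w_.size();     // uniform inside the bin
            double total = 0.0;
            for (double wi : w_) total += wi;
            *pdf = w_[b] / total * w_.size();        // piecewise-constant pdf
            return x;
        }

    private:
        int bin(double x) const {
            int b = static_cast<int>(x * w_.size());
            return b < 0 ? 0 : (b >= (int)w_.size() ? (int)w_.size() - 1 : b);
        }
        std::vector<double> w_;
    };

    int main() {
        std::mt19937 rng(7);
        ImportanceTable table(16);
        table.reinforce(0.8, 10.0);                  // pretend 0.8 was undersampled
        double pdf;
        double x = table.sample(rng, &pdf);
        std::printf("x = %.3f, pdf = %.3f\n", x, pdf);
    }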

2.
Historically, rendering system development has been mainly focused on improving the numerical accuracy of the rendering algorithms and their runtime efficiency. In this paper, we propose a method to improve the correctness not of the algorithms themselves, but of their implementation. Specifically, we show that by combining static type checking and generic programming, rendering system and shader development can take advantage of compile‐time checking to perform dimensional analysis, i.e. to enforce the correctness of physical dimensions and units in light transport, and geometric space analysis, i.e. to ensure that geometric computations respect the spaces in which points, vectors and normals were defined. We demonstrate our methods by implementing a CPU path tracer and a GPU renderer which previews direct illumination. While we build on prior work to develop our implementations, the main contribution of our work is to show that dimensional analysis and geometric space checking can be successfully integrated into the development of rendering systems and shaders.
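A minimal sketch of how static typing can enforce geometric-space correctness at compile time; the phantom space tags and the single operator below are an illustrative assumption, not the paper's actual type system.

    #include <cstdio>

    // Illustrative phantom-type tags for geometric spaces. The real systems
    // described in the paper are richer; this only shows the compile-time idea.
    struct WorldSpace {};
    struct ObjectSpace {};

    template <typename Space>
    struct Vec3 { double x, y, z; };

    template <typename Space>
    struct Point3 { double x, y, z; };

    // Adding a vector to a point is only defined when both live in the same space.
    template <typename Space>
    Point3<Space> operator+(const Point3<Space>& p, const Vec3<Space>& v) {
        return {p.x + v.x, p.y + v.y, p.z + v.z};
    }

    int main() {
        Point3<WorldSpace> p{1, 2, 3};
        Vec3<WorldSpace>   w{0, 1, 0};
        Vec3<ObjectSpace>  o{1, 0, 0};
        (void)o;                          // o exists only to show the error case below

        Point3<WorldSpace> q = p + w;     // fine: both operands are in world space
        // Point3<WorldSpace> r = p + o;  // compile error: spaces do not match
        std::printf("q = (%g, %g, %g)\n", q.x, q.y, q.z);
    }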

3.
Interactive computation of global illumination is a major challenge in current computer graphics research. Global illumination heavily affects the visual quality of generated images and is therefore a key attribute for the perception of photo‐realistic images. Path tracing is able to simulate the physical behaviour of light using Monte Carlo techniques. However, the computational burden of this technique prohibits high‐quality interactive rendering on standard commodity hardware. Solving the Monte Carlo integration with fewer samples results in characteristically noisy images. Global illumination filtering methods take advantage of the fact that the integral for neighbouring pixels may be very similar. Averaging samples of similar characteristics in screen space may approximate the correct integral, but may result in visible outliers. In this paper, we present a novel path tracing pipeline based on an edge‐aware filtering method for the indirect illumination which produces visually more pleasing results without noticeable outliers. The key idea is not to filter the noisy path‐traced image directly, but to use it as guidance to filter a second image composed from characteristic scene attributes that do not contain noise by default. We show that our approach better approximates the Monte Carlo integral compared to previous methods. Since the computation is carried out entirely in screen space, the method is applicable to fully dynamic scenes and arbitrary lighting, and allows for high‐quality path tracing at interactive frame rates on commodity hardware.
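To make the guidance idea concrete, here is a minimal 1D cross-bilateral filter in which the weights come from a noise-free guide signal (standing in for scene attributes such as normals or depth) rather than from the noisy radiance itself; the kernel, parameters, and buffers are illustrative assumptions, not the paper's pipeline.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Illustrative 1D cross-bilateral filter: the noisy signal is smoothed,
    // but edges present in the noise-free guide are preserved.
    std::vector<double> crossBilateral(const std::vector<double>& noisy,
                                       const std::vector<double>& guide,
                                       int radius, double sigmaSpatial,
                                       double sigmaGuide) {
        const int n = static_cast<int>(noisy.size());
        std::vector<double> out(n, 0.0);
        for (int i = 0; i < n; ++i) {
            double sum = 0.0, wsum = 0.0;
            for (int j = std::max(0, i - radius); j <= std::min(n - 1, i + radius); ++j) {
                double ds = (i - j) / sigmaSpatial;             // spatial distance
                double dg = (guide[i] - guide[j]) / sigmaGuide; // guide difference
                double w = std::exp(-0.5 * (ds * ds + dg * dg));
                sum += w * noisy[j];
                wsum += w;
            }
            out[i] = sum / wsum;
        }
        return out;
    }

    int main() {
        // Noisy step signal; the guide encodes the true edge at index 4.
        std::vector<double> noisy = {0.1, -0.1, 0.2, 0.0, 1.1, 0.9, 1.2, 1.0};
        std::vector<double> guide = {0, 0, 0, 0, 1, 1, 1, 1};
        std::vector<double> out = crossBilateral(noisy, guide, 3, 2.0, 0.1);
        for (double v : out) std::printf("%.3f ", v);
        std::printf("\n");
    }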

4.
The standard C/C++ implementation of a spatial partitioning data structure, such as an octree or quadtree, is often inefficient in terms of storage requirements, particularly when the memory overhead for maintaining parent‐to‐child pointers is significant with respect to the amount of actual data in each tree node. In this work, we present a novel data structure that implements uniform spatial partitioning without storing explicit parent‐to‐child pointer links. Our linkless tree encodes the storage locations of subdivided nodes using perfect hashing while retaining important properties of uniform spatial partitioning trees, such as coarse‐to‐fine hierarchical representation, efficient storage usage, and efficient random accessibility. We demonstrate the performance of our linkless trees using image compression and path planning examples.
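A minimal sketch of addressing quadtree nodes without parent-to-child pointers: each node is keyed by its (level, cell index), and child locations are computed rather than stored. A std::unordered_map stands in for the perfect hash the paper constructs, and all names are illustrative.

    #include <cstdint>
    #include <cstdio>
    #include <unordered_map>

    // Illustrative pointer-free quadtree keyed by (level, row-major index).
    struct NodeKey {
        std::uint32_t level;
        std::uint64_t index;
        bool operator==(const NodeKey& o) const {
            return level == o.level && index == o.index;
        }
    };

    struct NodeKeyHash {
        std::size_t operator()(const NodeKey& k) const {
            return std::hash<std::uint64_t>()((std::uint64_t(k.level) << 58) ^ k.index);
        }
    };

    struct LinklessQuadtree {
        std::unordered_map<NodeKey, float, NodeKeyHash> nodes;

        // Child locations are computed, not stored: child (cx, cy) of cell (x, y)
        // on level l is cell (2x + cx, 2y + cy) on level l + 1.
        static NodeKey child(const NodeKey& k, int cx, int cy) {
            std::uint64_t side = 1ull << k.level;        // cells per row on this level
            std::uint64_t x = k.index % side, y = k.index / side;
            std::uint64_t nx = 2 * x + cx, ny = 2 * y + cy;
            return {k.level + 1, ny * (2 * side) + nx};
        }
    };

    int main() {
        LinklessQuadtree tree;
        NodeKey root{0, 0};
        tree.nodes[root] = 1.0f;                         // store data at the root
        NodeKey c = LinklessQuadtree::child(root, 1, 0); // address child (1, 0)
        tree.nodes[c] = 0.5f;
        std::printf("child level %u index %llu -> %.1f\n",
                    (unsigned)c.level, (unsigned long long)c.index, tree.nodes[c]);
    }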

5.
The visibility function in direct illumination describes the binary visibility over a light source, e.g., an environment map. Intuitively, the visibility is often strongly correlated between nearby locations in time and space, but exploiting this correlation without introducing noticeable errors is a hard problem. In this paper, we first study the statistical characteristics of the visibility function. Then, we propose a robust and unbiased method for using estimated visibility information to improve the quality of Monte Carlo evaluation of direct illumination. Our method is based on the theory of control variates, and it can be used on top of existing state‐of‐the‐art schemes for importance sampling. The visibility estimation is obtained by sparsely sampling and caching the 4D visibility field in a compact bitwise representation. In addition to Monte Carlo rendering, the stored visibility information can be used in a number of other applications, for example, ambient occlusion and lighting design.
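The control-variate idea the method builds on can be shown in a few lines: if a cheap approximation g with a known expectation is correlated with the integrand f, the estimator E[f] = E[f - g] + E[g] stays unbiased while its variance drops. The 1D integrand below is a made-up stand-in for direct illumination with visibility, not the paper's estimator.

    #include <cmath>
    #include <cstdio>
    #include <random>

    // Illustrative control-variate estimator on [0, 1]:
    //   E[f] = E[f - g] + E[g],  with E[g] known analytically.
    // Here g plays the role of the cached, approximate visibility term.
    int main() {
        std::mt19937 rng(42);
        std::uniform_real_distribution<double> u(0.0, 1.0);

        auto f = [](double x) { return std::exp(x); };   // "true" integrand
        auto g = [](double x) { return 1.0 + x; };       // cheap approximation
        const double gMean = 1.5;                        // known integral of (1+x) over [0,1]

        const int n = 100000;
        double plain = 0.0, cv = 0.0;
        for (int i = 0; i < n; ++i) {
            double x = u(rng);
            plain += f(x);
            cv += f(x) - g(x);                           // reduced-variance part
        }
        plain /= n;
        cv = cv / n + gMean;                             // add back E[g]

        std::printf("reference %.5f  plain MC %.5f  control variate %.5f\n",
                    std::exp(1.0) - 1.0, plain, cv);
    }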

6.
We present a novel framework for efficiently computing the indirect illumination in diffuse and moderately glossy scenes using density estimation techniques. Many existing global illumination approaches either quickly compute an overly approximate solution or obtain high-quality results for the indirect illumination at the cost of computations that are orders of magnitude slower. The proposed method improves photon density estimation and leads to significantly better visual quality, in particular for complex geometry, while only slightly increasing the computation time. We perform direct splatting of photon rays, which allows us to use simpler search data structures. Since our density estimation is carried out in ray space rather than on surfaces, as in the commonly used photon mapping algorithm, the results are more robust against geometrically incurred sources of bias. This also holds in combination with final gathering, where photon mapping often overestimates the illumination near concave geometric features. In addition, we show that our photon splatting technique can be extended to handle moderately glossy surfaces and can be combined with traditional irradiance caching for sparse sampling and filtering in image space.

7.
We describe algorithms for canonically partitioning semi‐regular quadrilateral meshes into structured submeshes, using an adaptation of the geometric motorcycle graph of Eppstein and Erickson to quad meshes. Our partitions may be used to efficiently find isomorphisms between quad meshes. In addition, they may be used as a highly compressed representation of the original mesh. These partitions can be constructed in sublinear time from a list of the extraordinary vertices in a mesh. We also study the problem of further reducing the number of submeshes in our partitions: we prove that optimizing this number is NP‐hard, but it can be efficiently approximated.

8.
Compared with its competitors such as the bounding volume hierarchy, a drawback of the kd‐tree structure is that a large number of triangles are repeatedly duplicated during its construction, which often leads to inefficient, large and tall binary trees with high triangle redundancy. In this paper, we propose a space‐efficient kd‐tree representation where, unlike commonly used methods, an inner node is allowed to optionally store a reference to a triangle, so highly redundant triangles in a kd‐tree can be culled from the leaf nodes and moved to the inner nodes. Storing triangles in inner nodes can be computationally harmful, however, because ray‐triangle intersection tests may then have to be performed early, and possibly unnecessarily, during kd‐tree traversal; to avoid building such ineffective kd‐trees, we present heuristic measures for determining when and how to choose triangles for inner nodes during kd‐tree construction. Based on these metrics, we describe how the new form of kd‐tree is constructed and stored compactly using a carefully designed data layout. Our experiments with several example scenes showed that our kd‐tree representation technique significantly reduced the memory requirements for storing the kd‐tree structure, while effectively suppressing the frame‐rate degradation that would otherwise be observed during ray tracing.
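A minimal sketch of a node layout in which an inner node may optionally reference a triangle, so a heavily duplicated triangle can live once in an ancestor instead of in many leaves; the field names and packing are illustrative assumptions, not the paper's compact data layout.

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Illustrative kd-tree node where an inner node may optionally carry one
    // triangle reference. The paper uses a carefully packed layout; this
    // struct only shows the idea.
    struct KdNode {
        // Inner-node data.
        float        splitPos   = 0.0f;
        std::uint8_t splitAxis  = 0;        // 0 = x, 1 = y, 2 = z
        std::int32_t leftChild  = -1;       // -1 marks a leaf
        std::int32_t rightChild = -1;

        // Optional triangle stored directly in this (inner) node; a highly
        // duplicated triangle is culled from the leaves below and kept here.
        std::int32_t inlineTriangle = -1;   // -1 means "none"

        // Leaf data: range into a shared triangle-index array.
        std::uint32_t firstTriangle = 0;
        std::uint32_t triangleCount = 0;

        bool isLeaf() const { return leftChild < 0 && rightChild < 0; }
    };

    int main() {
        std::vector<KdNode> nodes(3);
        nodes[0].splitAxis = 0;
        nodes[0].splitPos = 0.5f;
        nodes[0].leftChild = 1;
        nodes[0].rightChild = 2;
        nodes[0].inlineTriangle = 17;       // triangle 17 straddles the split plane
        std::printf("root stores triangle %d, leaf? %d\n",
                    nodes[0].inlineTriangle, (int)nodes[0].isLeaf());
    }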

9.
We present an approach to improve the search efficiency for near‐optimal motion synthesis using motion graphs. An optimal or near‐optimal path through a motion graph often leads to the most intuitive result. However, finding such a path can be computationally expensive. Our main contribution is a bidirectional search algorithm. We dynamically divide the search space evenly and merge two search trees to obtain the final solution. This cuts the maximum search depth almost in half and leads to significant speedup. To illustrate the benefits of our approach, we present an interactive sketching interface that allows users to specify complex motions quickly and intuitively.
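A minimal sketch of bidirectional breadth-first search on a generic unweighted graph, which meets in the middle and thereby roughly halves the effective search depth as described above; the adjacency-list graph and integer node indices are illustrative, not a motion graph.

    #include <cstdio>
    #include <queue>
    #include <unordered_map>
    #include <vector>

    // Illustrative bidirectional BFS: two frontiers grow from the start and the
    // goal, and the search stops as soon as they meet.
    int bidirectionalBfs(const std::vector<std::vector<int>>& adj, int start, int goal) {
        if (start == goal) return 0;
        std::unordered_map<int, int> distA{{start, 0}}, distB{{goal, 0}};
        std::queue<int> qA, qB;
        qA.push(start);
        qB.push(goal);

        // Expand one complete BFS level of one side; return the combined path
        // length if the frontiers meet, otherwise -1.
        auto expand = [&](std::queue<int>& q, std::unordered_map<int, int>& dist,
                          const std::unordered_map<int, int>& other) -> int {
            int levelSize = (int)q.size();
            while (levelSize-- > 0) {
                int u = q.front(); q.pop();
                for (int v : adj[u]) {
                    if (dist.count(v)) continue;
                    dist[v] = dist[u] + 1;
                    auto it = other.find(v);
                    if (it != other.end()) return dist[v] + it->second; // frontiers met
                    q.push(v);
                }
            }
            return -1;
        };

        while (!qA.empty() && !qB.empty()) {
            int d = (qA.size() <= qB.size()) ? expand(qA, distA, distB)
                                             : expand(qB, distB, distA);
            if (d >= 0) return d;
        }
        return -1;   // no path
    }

    int main() {
        // A small chain graph 0-1-2-3-4-5.
        std::vector<std::vector<int>> adj = {{1}, {0, 2}, {1, 3}, {2, 4}, {3, 5}, {4}};
        std::printf("shortest path length = %d\n", bidirectionalBfs(adj, 0, 5));
    }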

10.
This paper introduces an accurate real‐time soft shadow algorithm that uses sample‐based visibility. Initially, we present a GPU‐based alias‐free hard shadow map algorithm that typically requires only a single render pass from the light, in contrast to using depth peeling and one pass per layer. For closed objects, we also suppress the need for a bias. The method is extended to soft shadow sampling for an arbitrarily shaped area‐/volumetric light source using 128–1024 light samples per screen pixel. The alias‐free shadow map guarantees that the visibility is accurately sampled per screen‐space pixel, even for arbitrarily shaped (e.g. non‐planar) surfaces or solid objects. Another contribution is a smooth coherent shading model to avoid common light leakage near shadow borders due to normal interpolation.

11.
Higher‐order finite element methods have emerged as an important discretization scheme for simulation. They are increasingly used in contemporary numerical solvers, generating a new class of data that must be analyzed by scientists and engineers. Currently available visualization tools for this type of data are either batch oriented or limited to certain cell types and polynomial degrees. Other approaches approximate higher‐order data by resampling, resulting in trade‐offs between interactivity and quality. To overcome these limitations, we have developed a distributed visualization system which allows for interactive exploration of non‐conforming unstructured grids, resulting from space‐time discontinuous Galerkin simulations, in which each cell has its own higher‐order polynomial solution. Our system employs GPU‐based raycasting for direct volume rendering of complex grids which feature non‐convex, curvilinear cells with varying polynomial degree. Frequency‐based adaptive sampling accounts for the high variations along rays. For distribution across a GPU cluster, the initial object‐space partitioning is determined by cell characteristics such as the polynomial degree and is adapted at runtime by a load balancing mechanism. The performance and utility of our system are evaluated for different aeroacoustic simulations involving the propagation of shock fronts.

12.
Many‐light rendering, which converts complex global illumination computations into a simple sum of the illumination from virtual point lights (VPLs), has become increasingly popular for predictive rendering in recent years. A huge number of VPLs is usually required for predictive rendering, at the cost of extensive computation time. While previous methods can achieve significant speedups by clustering VPLs, none of them can estimate the total error due to clustering. This drawback forces users into tedious trial‐and‐error processes to obtain rendered images with reliable accuracy. In this paper, we propose an error estimation framework for many‐light rendering. Our method transforms VPL clustering into stratified sampling combined with confidence intervals, which enables the user to estimate the error due to clustering without the costly computation of summing the illumination from all the VPLs. Our estimation framework is capable of handling arbitrary BRDFs and is accelerated by visibility caching, both of which make our method more practical. The experimental results demonstrate that our method estimates the error much more accurately than the previous clustering method.
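The statistical machinery referred to above can be illustrated with a small example: estimate a large sum by stratified sampling and report a normal-approximation confidence interval. The strata, synthetic values, and 95% z-value are illustrative assumptions, not the paper's clustering or error model.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <random>
    #include <vector>

    // Illustrative stratified estimate of a large sum (think: total illumination
    // from all VPLs) with a 95% normal-approximation confidence interval.
    int main() {
        std::mt19937 rng(1);
        // Three strata ("clusters") of per-VPL contributions, synthetic data.
        std::vector<std::vector<double>> strata(3);
        std::normal_distribution<double> d0(1.0, 0.1), d1(5.0, 1.0), d2(0.2, 0.05);
        for (int i = 0; i < 1000; ++i) strata[0].push_back(std::max(0.0, d0(rng)));
        for (int i = 0; i < 200;  ++i) strata[1].push_back(std::max(0.0, d1(rng)));
        for (int i = 0; i < 5000; ++i) strata[2].push_back(std::max(0.0, d2(rng)));

        const int samplesPerStratum = 20;
        double estimate = 0.0, variance = 0.0, exact = 0.0;
        for (const auto& s : strata) {
            for (double v : s) exact += v;               // reference, normally unknown

            std::uniform_int_distribution<int> pick(0, (int)s.size() - 1);
            std::vector<double> drawn;
            for (int i = 0; i < samplesPerStratum; ++i) drawn.push_back(s[pick(rng)]);

            double mean = 0.0, m2 = 0.0;
            for (double v : drawn) mean += v;
            mean /= samplesPerStratum;
            for (double v : drawn) m2 += (v - mean) * (v - mean);
            double sampleVar = m2 / (samplesPerStratum - 1);

            estimate += s.size() * mean;                 // N_h * mean_h
            variance += (double)s.size() * s.size() * sampleVar / samplesPerStratum;
        }
        double half = 1.96 * std::sqrt(variance);        // 95% half-width
        std::printf("exact %.1f  estimate %.1f  95%% CI +- %.1f\n", exact, estimate, half);
    }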

13.
A recent technique that forms virtual ray lights (VRLs) from path segments in media reduces the artifacts common to VPL approaches in participating media; however, distracting singularities still remain. We present Virtual Beam Lights (VBLs), a progressive many‐lights algorithm for rendering complex indirect transport paths in, from, and to media. VBLs are efficient and can handle heterogeneous media, anisotropic scattering, and moderately glossy surfaces, while provably converging to ground truth. We inflate ray lights into beam lights with finite thicknesses to eliminate the remaining singularities. Furthermore, we devise several practical schemes for importance sampling the various transport contributions between camera rays, light rays, and surface points. VBLs produce artifact‐free images faster than VRLs, especially when glossy surfaces and/or anisotropic phase functions are present. Lastly, we employ a progressive thickness reduction scheme for VBLs in order to render results that converge to ground truth.

14.
We present a new technique called Multiple Vertex Next Event Estimation, which outperforms current direct lighting techniques in forward‐scattering, optically dense media with the Henyey‐Greenstein phase function. Instead of a one‐segment connection from a vertex within the medium to the light source, an entire sub‐path of arbitrary length can be created, and we show experimentally that 4–10 segments work best in practice. This is done by perturbing a seed path within the Monte Carlo context. Our technique was integrated into a Monte Carlo renderer, combining random‐walk path tracing with multiple vertex next event estimation via multiple importance sampling for an unbiased result. We evaluate this new technique against standard next event estimation and show that it significantly reduces noise and increases the performance of multiple‐scattering renderings in highly anisotropic, optically dense media. Additionally, we discuss multiple light sources and the performance implications of memory‐heavy heterogeneous media.
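The abstract relies on the Henyey‐Greenstein phase function; the sketch below shows the standard inversion formula for sampling a scattering-angle cosine from it, as a piece of textbook machinery rather than the paper's multi-vertex perturbation scheme. The anisotropy value and sanity check are illustrative.

    #include <cmath>
    #include <cstdio>
    #include <random>

    // Standard inversion sampling of the Henyey-Greenstein phase function over
    // the scattering-angle cosine mu in [-1, 1]; g in (-1, 1) is the anisotropy.
    double sampleHgCosine(double g, double xi) {
        if (std::fabs(g) < 1e-3) return 2.0 * xi - 1.0;    // near-isotropic case
        double s = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi);
        return (1.0 + g * g - s * s) / (2.0 * g);
    }

    // Phase-function density over mu (normalized to integrate to 1 on [-1, 1]).
    double hgPdf(double g, double mu) {
        double denom = 1.0 + g * g - 2.0 * g * mu;
        return 0.5 * (1.0 - g * g) / (denom * std::sqrt(denom));
    }

    int main() {
        std::mt19937 rng(3);
        std::uniform_real_distribution<double> u(0.0, 1.0);
        const double g = 0.9;                              // strongly forward scattering
        double meanMu = 0.0;
        const int n = 100000;
        for (int i = 0; i < n; ++i) meanMu += sampleHgCosine(g, u(rng));
        meanMu /= n;
        // For Henyey-Greenstein the mean cosine equals g, a quick sanity check.
        std::printf("g = %.2f, sampled mean cosine = %.3f, pdf(1) = %.3f\n",
                    g, meanMu, hgPdf(g, 1.0));
    }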

15.
In this paper, we present a rapid prototyping framework for GPU‐based volume rendering. To this end, we propose a dynamic shader pipeline based on the SuperShader concept and illustrate the design decisions, along with the requirements that guided the development of our system. In our approach, we break the rendering shader down into areas containing code for different computations, which are defined as freely combinable, modularized shader blocks. Hence, high‐level changes to the rendering configuration result in the implicit modification of the underlying shader pipeline. Furthermore, the prototyping system allows custom shader code to be inserted between shader blocks of the pipeline at run time. A suitable user interface is available within the prototyping environment to allow intuitive modification of the shader pipeline. Thus, appropriate solutions for visualization problems can be developed interactively. We demonstrate the usage and usefulness of our framework with implementations of dynamic rendering effects for medical applications.
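A minimal sketch of assembling a shader from freely combinable source blocks at run time; the block names and GLSL snippets are made up to illustrate the composition idea, not the framework's actual SuperShader pipeline.

    #include <cstdio>
    #include <string>
    #include <vector>

    // Illustrative shader-block composition: a fragment shader is assembled from
    // modular snippets, so high-level configuration changes simply swap blocks.
    struct ShaderBlock {
        std::string name;
        std::string code;   // GLSL appended into main()
    };

    std::string assembleFragmentShader(const std::vector<ShaderBlock>& blocks) {
        std::string src =
            "#version 330 core\n"
            "out vec4 fragColor;\n"
            "void main() {\n"
            "    vec4 color = vec4(0.0);\n";
        for (const ShaderBlock& b : blocks) {
            src += "    // block: " + b.name + "\n";
            src += b.code;
        }
        src += "    fragColor = color;\n}\n";
        return src;
    }

    int main() {
        std::vector<ShaderBlock> pipeline = {
            {"sampleVolume",  "    color = vec4(0.3);\n"},
            {"applyTransfer", "    color.rgb *= vec3(1.0, 0.8, 0.6);\n"},
            // Custom code can be inserted between blocks at run time:
            {"userDebugTint", "    color.rgb += vec3(0.0, 0.1, 0.0);\n"},
        };
        std::printf("%s", assembleFragmentShader(pipeline).c_str());
    }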

16.
The efficiency of Monte Carlo algorithms for light transport simulation is directly related to their ability to importance‐sample the product of the illumination and reflectance in the rendering equation. Since the optimal sampling strategy would require knowledge of the transport solution itself, importance sampling most often follows only one of the known factors – the BRDF or an approximation of the incident illumination. To address this issue, we propose to represent the illumination and the reflectance factors by Gaussian mixture models (GMMs), which we fit using a combination of weighted expectation maximization and non‐linear optimization methods. The GMM representation then allows us to obtain the resulting product distribution for importance sampling on the fly at each scene point. For its efficient evaluation and sampling, we perform an up‐front adaptive decimation of both factor mixtures. In comparison to state‐of‐the‐art sampling methods, we show that our product importance sampling can lead to significantly better convergence in scenes with complex illumination and reflectance.
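The key identity behind this kind of product sampling is that the product of two Gaussians is again an (unnormalized) Gaussian, so the product of two mixtures is a mixture whose component weights are available in closed form. The 1D sketch below shows that identity and how the product mixture could be sampled; the setting and parameters are illustrative, not the paper's directional mixtures.

    #include <cmath>
    #include <cstdio>
    #include <random>
    #include <vector>

    // Illustrative 1D product of two Gaussian mixtures: the product of
    // N(mu1, s1^2) and N(mu2, s2^2) is scale * N(mu, s^2) with a closed-form scale.
    struct Gaussian { double w, mu, var; };   // weight, mean, variance

    static const double kPi = 3.14159265358979323846;

    std::vector<Gaussian> productMixture(const std::vector<Gaussian>& a,
                                         const std::vector<Gaussian>& b) {
        std::vector<Gaussian> out;
        for (const Gaussian& gi : a)
            for (const Gaussian& gj : b) {
                double varSum = gi.var + gj.var;
                double scale = std::exp(-0.5 * (gi.mu - gj.mu) * (gi.mu - gj.mu) / varSum)
                               / std::sqrt(2.0 * kPi * varSum);
                Gaussian g;
                g.var = gi.var * gj.var / varSum;
                g.mu  = (gi.mu * gj.var + gj.mu * gi.var) / varSum;
                g.w   = gi.w * gj.w * scale;  // unnormalized product weight
                out.push_back(g);
            }
        return out;
    }

    int main() {
        // Stand-ins for an "illumination" and a "reflectance" mixture.
        std::vector<Gaussian> illum = {{1.0, 0.0, 1.0}};
        std::vector<Gaussian> brdf  = {{1.0, 2.0, 0.5}};
        std::vector<Gaussian> prod  = productMixture(illum, brdf);

        // Sample the product: pick a component by weight, then draw from it.
        std::mt19937 rng(5);
        std::vector<double> weights;
        for (const Gaussian& g : prod) weights.push_back(g.w);
        std::discrete_distribution<int> pick(weights.begin(), weights.end());
        const Gaussian& g = prod[pick(rng)];
        std::normal_distribution<double> n(g.mu, std::sqrt(g.var));
        std::printf("product component: mu = %.3f, var = %.3f, sample = %.3f\n",
                    g.mu, g.var, n(rng));
    }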

17.
We propose a new adaptive algorithm for determining virtual point lights (VPLs) in the scope of real‐time instant radiosity methods, which use a limited number of VPLs. The proposed method is based on Metropolis‐Hastings sampling and exhibits better temporal coherence of VPLs, which is particularly important for real‐time applications dealing with dynamic scenes. We evaluate the properties of the proposed method in the context of the algorithm based on imperfect shadow maps and compare it with the commonly used inverse transform method. The results indicate that the proposed technique can significantly reduce temporal flickering artifacts even for scenes with complex materials and textures. Further, we propose a novel splatting scheme for imperfect shadow maps using hardware tessellation. This scheme significantly improves rendering performance, particularly for complex and deformable scenes. We thoroughly analyze the performance of the proposed techniques on test scenes with detailed materials, a moving camera, and deforming geometry.
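For readers unfamiliar with the sampler family the method builds on, here is a minimal Metropolis‐Hastings chain with a symmetric random-walk proposal over a 1D target known only up to a constant; the target function and step size are placeholders, not the paper's VPL importance.

    #include <cmath>
    #include <cstdio>
    #include <random>

    // Illustrative Metropolis-Hastings sampler with a symmetric Gaussian
    // random-walk proposal; target() only needs to be known up to a constant.
    double target(double x) {
        // Unnormalized bimodal density on the real line.
        return std::exp(-0.5 * (x - 2.0) * (x - 2.0)) +
               0.5 * std::exp(-0.5 * (x + 2.0) * (x + 2.0));
    }

    int main() {
        std::mt19937 rng(11);
        std::normal_distribution<double> step(0.0, 0.8);
        std::uniform_real_distribution<double> u(0.0, 1.0);

        double x = 0.0, fx = target(x), mean = 0.0;
        const int n = 200000;
        for (int i = 0; i < n; ++i) {
            double y = x + step(rng);                  // propose (symmetric)
            double fy = target(y);
            if (u(rng) < fy / fx) { x = y; fx = fy; }  // accept with min(1, fy/fx)
            mean += x;
        }
        std::printf("empirical mean of the chain: %.3f\n", mean / n);
    }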

18.
We present a new method for estimating the radiance function of complex area light sources. The method is based on Jensen's photon mapping algorithm. In order to capture high angular frequencies in the radiance function, we incorporate the angular domain into the density estimation. However, density estimation in position-direction space makes it necessary to find a tradeoff between the spatial and angular accuracy of the estimation. We identify the parameters which are important for this tradeoff and investigate the typical estimation errors. We show how the large data size, which is inherent to the underlying problem, can be handled. The method is applied to different automotive tail lights. It can be applied to a wide range of other real-world light sources.

19.
State‐of‐the‐art density estimation methods for rendering participating media rely on a dense photon representation of the radiance distribution within a scene. A critical bottleneck of such kernel‐based approaches is the excessive number of photons that are required in practice to resolve fine illumination details, while controlling the amount of noise. In this paper, we propose a parametric density estimation technique that represents radiance using a hierarchical Gaussian mixture. We efficiently obtain the coefficients of this mixture using a progressive and accelerated form of the Expectation‐Maximization algorithm. After this step, we are able to create noise‐free renderings of high‐frequency illumination using only a few thousand Gaussian terms, where millions of photons are traditionally required. Temporal coherence is trivially supported within this framework, and the compact footprint is also useful in the context of real‐time visualization. We demonstrate a hierarchical ray tracing‐based implementation, as well as a fast splatting approach that can interactively render animated volume caustics.

20.
This paper presents an improvement to stochastic progressive photon mapping (SPPM), a method for robustly simulating complex global illumination with distributed ray tracing effects. Normally, similar to photon mapping and other particle tracing algorithms, SPPM becomes inefficient when the photons are poorly distributed: an inordinate number of photons is required to reduce the error caused by noise and bias to acceptable levels. In order to optimize the distribution of photons, we propose an extension of SPPM with a Metropolis‐Hastings algorithm, effectively exploiting local coherence among the light paths that contribute to the rendered image. A well‐designed scalar contribution function is introduced as our Metropolis sampling strategy, targeting specific image areas with large error to improve the efficiency of the radiance estimator. Experimental results demonstrate that the new Metropolis‐sampling‐based approach maintains the robustness of the standard SPPM method, while significantly improving rendering efficiency for a wide range of scenes with complex lighting.
