Similar Literature
 Found 20 similar documents; search took 10 ms
1.
Several fast global illumination algorithms rely on the Virtual Point Lights framework. This framework separates illumination into two steps: first, propagate radiance in the scene and store it in virtual lights, then gather illumination from these virtual lights. To accelerate the second step, virtual lights and receiving points are grouped hierarchically, for example using Multi-Dimensional Lightcuts. Computing visibility between clusters of virtual lights and receiving points is a bottleneck. Separately, matrix completion algorithms reconstruct a complete low-rank matrix from an incomplete set of sampled elements. In this paper, we use adaptive matrix completion to approximate visibility information after an initial clustering step. We reconstruct visibility information using as few as 10%–20% of the samples for most scenes, and combine it with shading information computed separately, in parallel on the GPU. Overall, our method computes global illumination three or more times faster than previous state-of-the-art methods.
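A minimal sketch of the kind of low-rank reconstruction this abstract relies on, using a plain SoftImpute-style iteration (soft-thresholded SVD) on a toy continuous visibility matrix; the paper's adaptive sampling scheme and GPU parallelism are not reproduced, and the matrix sizes, sampling rate, and threshold are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_lights, n_receivers, rank = 64, 64, 4

# Toy low-rank "visibility" matrix (continuous stand-in for 0/1 visibility).
V = rng.random((n_lights, rank)) @ rng.random((rank, n_receivers))

# Sample only ~15% of the entries (the expensive shadow rays).
mask = rng.random(V.shape) < 0.15
X = np.where(mask, V, 0.0)

tau = 1.0                                       # singular-value threshold
for _ in range(200):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = (U * np.maximum(s - tau, 0.0)) @ Vt     # soft-threshold the spectrum
    X[mask] = V[mask]                           # keep sampled entries exact

print("relative error:", np.linalg.norm(X - V) / np.linalg.norm(V))
```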

2.
Creating photorealistic materials for light transport algorithms requires carefully fine-tuning a set of material properties to achieve a desired artistic effect. This is typically a lengthy process that involves a trained artist with specialized knowledge. In this work, we present a technique that aims to empower novice and intermediate-level users to synthesize high-quality photorealistic materials by requiring only basic image processing knowledge. In the proposed workflow, the user starts with an input image and applies a few intuitive transforms (e.g., colorization, image inpainting) within a 2D image editor of their choice; in the next step, our technique produces a photorealistic result that approximates this target image. Our method combines the advantages of a neural network-augmented optimizer and an encoder neural network to produce high-quality output results within 30 seconds. We also demonstrate that it is resilient against poorly edited target images and propose a simple extension to predict image sequences with a strict time budget of 1–2 seconds per image.
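The pipeline above is neural, but the underlying objective is an inverse-rendering fit: adjust material parameters until a render matches the user-edited target. The sketch below shows only that objective on a hypothetical two-parameter toy shading model with finite-difference descent; the networks, the real renderer, and all constants are stand-ins.

```python
import numpy as np

def shade(albedo, roughness):
    """Toy two-pixel 'render' (two lighting conditions) standing in for a full render."""
    return np.array([0.5 * albedo, albedo * np.exp(-4.0 * roughness)])

target = shade(0.8, 0.3)            # stands in for the user's edited target image
theta = np.array([0.5, 0.5])        # initial (albedo, roughness) guess

def loss(t):
    return np.sum((shade(*t) - target) ** 2)

lr, eps = 0.5, 1e-4
for _ in range(1000):
    grad = np.array([
        (loss(theta + [eps, 0]) - loss(theta - [eps, 0])) / (2 * eps),
        (loss(theta + [0, eps]) - loss(theta - [0, eps])) / (2 * eps),
    ])
    theta = np.clip(theta - lr * grad, 0.0, 1.0)
print(theta)   # converges near (0.8, 0.3)
```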

3.
We present a novel technique to efficiently render complex direct illumination in real time. It is based on a spatio-temporal randomized mixture model of von Mises-Fisher (vMF) distributions in screen space. For every pixel we determine the vMF distribution to sample from using a Markov chain process targeted to capture important features of the integrand. By this we avoid the storage overhead of finite-component deterministic mixture models, for which, in addition, determining the optimal component count is challenging. We use stochastic multiple importance sampling (SMIS) to be independent of the equilibrium distribution of our Markov chain process, since it cancels out in the estimator. Further, we use the same sample to advance the Markov chain and to construct the SMIS estimator; local Markov chain state permutations avoid the resulting bias due to dependent sampling. As a consequence we require only one ray per sample and pixel. We evaluate our technique using implementations in a research renderer as well as a classic game engine with highly dynamic content. Our results show that it is efficient and quickly readapts to dynamic conditions. We compare to spatio-temporal resampling (ReSTIR), which can suffer from correlation artifacts due to its non-adapting candidate distributions, which can deviate strongly from the integrand. While we focus on direct illumination, our approach is more widely applicable, and we exemplarily show the rendering of caustics.
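For reference, this is textbook sampling of a single von Mises-Fisher lobe on the sphere, the building block of the mixture model described above; the Markov-chain selection, SMIS weighting, and permutation machinery of the paper are not shown. The inverse-CDF formula for the cosine around the mean is standard, and the tangent-frame construction is one common choice.

```python
import numpy as np

def sample_vmf(mu, kappa, n, rng):
    """Draw n directions from vMF(mu, kappa) on the unit sphere S^2."""
    u = rng.random(n)
    # Inverse CDF of cos(theta) around the mean for the S^2 vMF density.
    w = 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * kappa)) / kappa
    phi = 2.0 * np.pi * rng.random(n)
    r = np.sqrt(np.maximum(0.0, 1.0 - w * w))
    local = np.stack([r * np.cos(phi), r * np.sin(phi), w], axis=-1)
    # Orthonormal frame with mu as the z-axis.
    a = np.array([1.0, 0.0, 0.0]) if abs(mu[2]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t = np.cross(a, mu); t /= np.linalg.norm(t)
    b = np.cross(mu, t)
    return local @ np.stack([t, b, mu])

rng = np.random.default_rng(1)
dirs = sample_vmf(np.array([0.0, 0.0, 1.0]), kappa=50.0, n=4, rng=rng)
print(dirs)   # directions clustered around +z for large kappa
```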

4.
Gradient-domain rendering can greatly improve the convergence of light transport simulation by exploiting smoothness in image space. These methods generate image gradients and solve an image reconstruction problem using the rendered image and the gradient images. Recently, gradient-domain volumetric photon density estimation was proposed for homogeneous participating media. However, its image reconstruction relies on traditional L1 reconstruction, which leads to obvious artifacts when only a few rendering passes are performed. Deep-learning-based reconstruction methods have been exploited for surface rendering, but they are not suitable for volume density estimation. In this paper, we propose an unsupervised neural network for image reconstruction of gradient-domain volumetric photon density estimation, more specifically for volumetric photon mapping, using a variant of GradNet with an encoded shift connection and a separate auxiliary feature branch, which includes volume-based auxiliary features such as transmittance and photon density. Our network smooths the image at a global scale while preserving high-frequency details at a small scale. We demonstrate that our network produces higher-quality results than previous work. Although we only considered volumetric photon mapping, it is straightforward to extend our method to other forms, such as beam radiance estimation.
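As context for the reconstruction step, here is a minimal L2 ("screened Poisson") gradient-domain reconstruction, a traditional baseline of the kind the paper's unsupervised network replaces; image size, noise levels, and the primal weight alpha are illustrative.

```python
import numpy as np
from scipy.sparse import lil_matrix, identity, vstack
from scipy.sparse.linalg import lsqr

h, w = 16, 16
rng = np.random.default_rng(2)
truth = np.linspace(0, 1, h * w).reshape(h, w)      # toy radiance image
noisy = truth + 0.1 * rng.standard_normal((h, w))   # noisy primal estimate
gx = np.diff(truth, axis=1)                          # (nearly) clean gradients
gy = np.diff(truth, axis=0)

def grad_op(h, w, axis):
    """Sparse finite-difference operator matching np.diff along `axis`."""
    rows = h * (w - 1) if axis == 1 else (h - 1) * w
    D, r = lil_matrix((rows, h * w)), 0
    for i in range(h - (axis == 0)):
        for j in range(w - (axis == 1)):
            p = i * w + j
            q = p + (1 if axis == 1 else w)
            D[r, p], D[r, q] = -1.0, 1.0
            r += 1
    return D.tocsr()

Dx, Dy = grad_op(h, w, 1), grad_op(h, w, 0)
alpha = 0.2                                          # weight of the primal term
A = vstack([alpha * identity(h * w), Dx, Dy])
b = np.concatenate([alpha * noisy.ravel(), gx.ravel(), gy.ravel()])
recon = lsqr(A, b)[0].reshape(h, w)
print("mean error, noisy :", np.abs(noisy - truth).mean())
print("mean error, recon :", np.abs(recon - truth).mean())
```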

5.
Recent advances in bidirectional path tracing (BPT) reveal that the use of multiple light sub-paths and the resampling of a small number of them can improve the efficiency of BPT. By increasing the number of pre-sampled light sub-paths, the possibility of generating light paths that provide large contributions can be better explored, and the correlation of light paths due to the reuse of pre-sampled light sub-paths by all eye sub-paths can be alleviated. The increased number of pre-sampled light sub-paths, however, also incurs a high computational cost. In this paper, we propose a two-stage resampling method for BPT to efficiently handle a large number of pre-sampled light sub-paths. We also derive a weighting function that accounts for the changes in path probability due to the two-stage resampling. Our method can handle two orders of magnitude more pre-sampled light sub-paths than previous methods in equal-time rendering, resulting in stable and better noise reduction than state-of-the-art methods.
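A toy sketch of two-stage resampling over a pool of pre-sampled candidates: a cheap proxy weight thins the pool, and refined weights resample the survivors with the stage-1 probabilities divided out. The proxy and refined weights here are synthetic stand-ins, not the paper's path contributions or its MIS weighting function.

```python
import numpy as np

rng = np.random.default_rng(3)
M = 100_000                                    # pre-sampled light sub-paths
contrib = rng.exponential(1.0, M) ** 2         # "true" contribution (expensive)
cheap = contrib * np.exp(0.3 * rng.standard_normal(M))   # cheap proxy weight

# Stage 1: thin the pool to K survivors, proportionally to the cheap proxy.
K = 256
p1 = cheap / cheap.sum()
idx = rng.choice(M, size=K, replace=True, p=p1)

# Stage 2: resample among survivors with refined weights, dividing out
# the stage-1 probabilities so the final pick targets the true contribution.
w2 = contrib[idx] / p1[idx]
chosen = idx[rng.choice(K, p=w2 / w2.sum())]

# The running sum also yields an unbiased estimate of the total contribution.
print(f"picked sub-path {chosen}; estimate {w2.sum() / K:.0f} "
      f"vs. truth {contrib.sum():.0f}")
```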

6.
Fluorescent materials can shift energy between wavelengths, thereby creating bright and saturated colors in both natural and artificial materials. However, rendering fluorescence for continuous wavelengths, or combined with wavelength-dependent path configurations, has so far only been feasible using spectral unidirectional methods. We present a regularization-based approach for supporting fluorescence in a spectral bidirectional path tracer. Our algorithm samples camera and light sub-paths with independent wavelengths, and when connecting them mollifies the BSDF at one of the connecting vertices so that it reradiates light across multiple wavelengths. We discuss the issues that arise, such as color bias in early iterations, consistency of the method, and MIS weights in the presence of spectral mollification. We demonstrate our method in scenes combining fluorescence and transport phenomena that are difficult to render with unidirectional or spectrally discrete methods.
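A sketch of the mollification idea at a connection vertex, assuming a simple box kernel over wavelength: a camera/light connection with mismatched wavelengths, which would normally contribute nothing, receives a normalized contribution when the wavelengths fall within a small window. The kernel shape and width are illustrative, not the paper's.

```python
import numpy as np

def mollified_connection(f, lambda_cam, lambda_light, eps=10.0):
    """Box mollifier: spread a connection's BSDF value f over a +-eps nm window."""
    if abs(lambda_cam - lambda_light) >= eps:
        return 0.0
    return f / (2.0 * eps)   # normalized: the kernel integrates to 1 over wavelength

rng = np.random.default_rng(11)
lam_c, lam_l = 380.0 + 400.0 * rng.random(2)    # independently sampled wavelengths
print(mollified_connection(1.0, lam_c, lam_l))  # nonzero only within 10 nm
```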

7.
We present a physically based method to render the atmosphere of a planet from ground to space views. Our method is cheap to compute and, compared to previous successful methods, does not require any high-dimensional lookup tables (LUTs), and thus does not suffer from the visual artifacts associated with them. We also propose a new approximation to evaluate multiple scattering of light within the atmosphere in real time. We take a new look at what it means to render natural atmospheric effects, and propose a set of simple lookup tables and parameterizations to render a sky and its aerial perspective. The atmosphere composition can change dynamically to match artistic visions and weather changes without requiring heavy LUT updates. The complete technique can be used in real-time applications such as games, simulators, or architectural pre-visualizations. The technique also scales from power-efficient mobile platforms up to PCs with high-end GPUs, and is also useful for accelerating path tracing.
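To make the LUT idea concrete, here is a sketch of precomputing a small sun-transmittance table for a single exponential (Rayleigh-only) atmosphere by ray-marching optical depth; the parameterization, resolution, and constants are illustrative and much simpler than a production sky model.

```python
import numpy as np

R_ground, R_top = 6360e3, 6460e3                 # planet radii in meters
beta_r = np.array([5.8e-6, 13.5e-6, 33.1e-6])    # Rayleigh extinction (RGB)
H_r = 8000.0                                     # Rayleigh scale height

def transmittance(h, mu, steps=64):
    """Transmittance from altitude h toward zenith-cosine mu, up to the top shell."""
    o = np.array([0.0, R_ground + h, 0.0])
    d = np.array([np.sqrt(max(0.0, 1.0 - mu * mu)), mu, 0.0])
    # Ray-sphere intersection with the atmosphere's top shell (|o + t d| = R_top).
    b = np.dot(o, d); c = np.dot(o, o) - R_top ** 2
    t_max = -b + np.sqrt(b * b - c)
    t = (np.arange(steps) + 0.5) * (t_max / steps)
    alt = np.linalg.norm(o[None, :] + t[:, None] * d[None, :], axis=1) - R_ground
    tau = beta_r[None, :] * np.exp(-alt / H_r)[:, None]
    return np.exp(-tau.sum(axis=0) * (t_max / steps))

# 16x16 LUT over altitude x sun angle; cheap enough to rebuild on weather changes.
lut = np.array([[transmittance(h, mu)
                 for mu in np.linspace(0.0, 1.0, 16)]
                for h in np.linspace(0.0, 80e3, 16)])
print(lut.shape, lut[0, -1])   # RGB transmittance at ground, sun at zenith
```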

8.
Despite recent advances in Monte Carlo path tracing at interactive rates, denoised image sequences generated with few samples per pixel often yield temporally unstable results and loss of high-frequency details. We present a novel adaptive rendering method that increases the temporal stability and image fidelity of low-sample-count path tracing by distributing samples via spatio-temporal joint optimization of sampling and denoising. Adding temporal optimization to the sample predictor enables it to learn spatio-temporal sampling strategies such as placing more samples in disoccluded regions and tracking specular highlights; adding temporal feedback to the denoiser boosts the effective input sample count and increases temporal stability. The temporal approach also allows us to remove the initial uniform sampling step typically present in adaptive sampling algorithms. The sample predictor and denoiser are deep neural networks that we co-train end-to-end over multiple consecutive frames. Our approach is scalable, allowing a trade-off between quality and performance, and runs at near real-time rates while achieving significantly better image quality and temporal stability than previous methods.
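For contrast, this is the classic two-pass adaptive-sampling heuristic whose initial uniform step the abstract says it removes: a uniform pilot pass estimates per-pixel variance, then the remaining budget is distributed proportionally. The integrand and budgets are toys.

```python
import numpy as np

rng = np.random.default_rng(12)
h, w, pilot, budget = 16, 16, 4, 16 * 16 * 4

# Pilot pass: `pilot` uniform samples per pixel of a toy noisy integrand
# (a checkerboard of low- and high-variance pixels).
sigma = np.where(np.indices((h, w)).sum(axis=0) % 2 == 0, 0.05, 1.0)
pilot_samples = rng.standard_normal((pilot, h, w)) * sigma
var = pilot_samples.var(axis=0, ddof=1)

# Distribute the remaining budget proportionally to the estimated variance.
alloc = np.floor(budget * var / var.sum()).astype(int)
print("extra samples per pixel, min/max:", alloc.min(), alloc.max())
```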

9.
We propose a novel approach for denoising Monte Carlo path-traced images, which uses data from individual samples rather than relying on pixel aggregates. Samples are partitioned into layers, which are filtered separately, giving the network more freedom to handle outliers and complex visibility. Finally, the layers are composited front-to-back using alpha blending. The system is trained end-to-end, with learned layer partitioning, filter kernels, and compositing. We obtain image quality similar to recent state-of-the-art sample-based denoisers at a fraction of the computational cost and memory requirements.
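The final compositing step is standard front-to-back "over" blending; a minimal sketch with placeholder layer colors and alphas (in the paper, the layers and their contents come from the network):

```python
import numpy as np

def composite_front_to_back(layers):
    """layers: list of (rgb, alpha) images, nearest layer first."""
    out = np.zeros_like(layers[0][0])
    transmittance = np.ones(layers[0][1].shape)
    for rgb, a in layers:
        out += transmittance[..., None] * a[..., None] * rgb
        transmittance *= (1.0 - a)
    return out

h, w = 4, 4
near = (np.full((h, w, 3), [1.0, 0.0, 0.0]), np.full((h, w), 0.5))  # 50% red
far = (np.full((h, w, 3), [0.0, 0.0, 1.0]), np.full((h, w), 1.0))   # opaque blue
img = composite_front_to_back([near, far])
print(img[0, 0])   # 0.5*red + 0.5*blue = [0.5, 0.0, 0.5]
```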

10.
Looking at a cup of hot tea, an observer can see color patterns and granular textures both on the water surface and in the steam. Motivated by this example, we model the appearance of iridescent water droplets. Mie scattering describes the scattering of light waves by individual spherical particles and is the building block for both effects, but we show that other mechanisms must also be considered to faithfully reproduce the appearance. Iridescence on the water surface is caused by droplets levitating above the surface, and interference between light scattered by the droplets and light reflected by the water surface, known as Quetelet scattering, is essential to producing the color. We propose a model, new to computer graphics, for rendering this phenomenon, which we validate against photographs. For iridescent steam, we show that variation in droplet size is essential to the characteristic color patterns. We build a droplet growth model and apply it as a post-processing step to an existing computer graphics fluid simulation to compute collections of particles for rendering. We significantly accelerate the rendering of sparse particles with motion blur by intersecting rays with particle trajectories, blending contributions along viewing rays. Our model reproduces the distinctive color patterns correlated with the steam flow. For both effects, we instantiate individual droplets and render them explicitly, since the granularity of droplets is readily observed in reality, and we demonstrate that Mie scattering alone cannot reproduce the visual appearance.

11.
Research in rendering large point clouds has traditionally focused on the generation and use of hierarchical acceleration structures that allow systems to load and render the smallest fraction of the data with the largest impact on the output. The generation of these structures is slow and time-consuming, however, and therefore ill-suited for tasks such as quickly looking at scan data stored in widely used unstructured file formats, or immediately displaying the results of point-cloud processing tasks. We propose a progressive method that is capable of rendering any point cloud that fits in GPU memory in real time, without the need to generate hierarchical acceleration structures in advance. Our method supports data sets with a large number of attributes per point, achieves a load performance of up to 100 million points per second, displays already-loaded data in real time while the remaining data is still being loaded, and is capable of rendering up to one billion points using an on-the-fly generated shuffled vertex buffer as its data structure, instead of slow-to-generate hierarchical structures. Shuffling is done during loading to allow holes to be filled efficiently with random subsets, which leads to higher-quality convergence behavior.
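The shuffled-vertex-buffer idea fits in a few lines: once the points are stored in random order, any prefix of the buffer is a uniform random subset of the whole cloud, so a partial load or a reduced draw count still covers the full model. A sketch with a synthetic cloud:

```python
import numpy as np

rng = np.random.default_rng(4)
points = rng.random((1_000_000, 3)).astype(np.float32)   # stand-in point cloud

# Shuffle once while "loading"; GPU upload order = this permutation.
order = rng.permutation(len(points))
shuffled = points[order]

# Rendering with a budget of N points = drawing the first N buffer entries.
budget = 100_000
visible_subset = shuffled[:budget]
# The prefix covers the full bounding box, unlike a file-order prefix would.
print(visible_subset.min(axis=0), visible_subset.max(axis=0))
```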

12.
By-example aperiodic tilings are popular texture synthesis techniques that allow fast, on-the-fly generation of unbounded, non-periodic textures whose appearance matches an arbitrary input sample called the “exemplar”. But by relying on uniform random sampling, these algorithms fail to preserve the autocovariance function, resulting in correlations that do not match those of the exemplar. The output can then be perceived as excessively random. In this work, we present a new method that preserves the autocovariance function of the exemplar well. It consists of fetching content with an importance sampler that takes the explicit autocovariance function as the probability density function (pdf) of the sampler. Our method can be controlled to increase or decrease the randomness of the texture. Besides significantly improving synthesis quality for classes of textures characterized by pronounced autocovariance functions, we propose a real-time tiling and blending scheme that permits the generation of high-quality textures faster than previous algorithms, with minimal downsides, by reducing the number of texture fetches.
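A sketch of the core sampling idea, assuming a grayscale exemplar: estimate the (circular) autocovariance via the Wiener-Khinchin theorem, clamp it into a pdf, and draw content offsets by inverse-CDF sampling instead of uniformly. The clamping and normalization choices here are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(5)
exemplar = rng.random((64, 64))          # stand-in for a texture exemplar
x = exemplar - exemplar.mean()

# Circular autocovariance via the Wiener-Khinchin theorem.
F = np.fft.fft2(x)
acov = np.fft.ifft2(F * np.conj(F)).real
pdf = np.maximum(acov, 0.0)              # clamp negatives to get a valid pdf
pdf /= pdf.sum()

# Inverse-CDF sampling of content offsets according to the autocovariance.
flat_cdf = np.cumsum(pdf.ravel())
u = rng.random(8)
offsets = np.unravel_index(np.searchsorted(flat_cdf, u), pdf.shape)
print(np.stack(offsets, axis=1))         # (dy, dx) fetch offsets
```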

13.
We present a new software ray tracing solution that efficiently computes visibilities in dynamic scenes. We first introduce a novel scene representation: the ray-aligned occupancy map array (ROMA), generated by rasterizing the dynamic scene once per frame. Our key contribution is a fast, low-divergence tracing method that computes visibilities in constant time, without constructing and traversing traditional intersection acceleration data structures such as BVHs. To further improve accuracy and alleviate aliasing, we use a spatio-temporal scheme to stochastically distribute the candidate ray samples. We demonstrate the practicality of our method by integrating it into a modern real-time renderer and showing better performance compared to existing techniques based on distance fields (DFs). Our method is free of the typical artifacts caused by incomplete scene information, and is about 2.5×–10× faster than generating and tracing DFs at the same resolution and equal storage.
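The constant-time flavor of occupancy-map tracing is easy to see in a reduced setting: pack 64 depth slices into one 64-bit word per cell, and an occlusion query between two depths becomes a single mask test. The paper's ROMA generalizes this to arbitrary ray directions by rasterizing an array of maps each frame; the sketch below fixes one axis.

```python
import numpy as np

res = 64
occ = np.zeros((res, res), dtype=np.uint64)   # bit d set => voxel (x, y, d) is solid

def set_voxel(x, y, d):
    occ[x, y] |= np.uint64(1) << np.uint64(d)

def visible(x, y, d0, d1):
    """True if no solid voxel lies strictly between depths d0 and d1."""
    lo, hi = min(d0, d1), max(d0, d1)
    span = ((np.uint64(1) << np.uint64(hi)) - np.uint64(1)) & \
           ~((np.uint64(1) << np.uint64(lo + 1)) - np.uint64(1))
    return (occ[x, y] & span) == 0

set_voxel(10, 10, 30)                         # an occluder at depth slice 30
print(visible(10, 10, 5, 50))                 # False: blocked
print(visible(10, 10, 35, 50))                # True: behind the occluder
```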

14.
Environment maps are commonly used to represent and compute far-field illumination in virtual scenes. However, they are expensive to evaluate and sample from, limiting their applicability to real-time rendering. Previous methods have focused on compression through spherical-domain approximations, or on learning priors for natural, daylight illumination. These hinder both accuracy and generality, and do not provide the probability information required for importance sampling in Monte Carlo integration. We propose NEnv, a fully differentiable deep-learning method capable of compressing, and learning to sample from, a single environment map. NEnv is composed of two different neural networks: a normalizing flow that maps samples from uniform distributions to the probability density of the illumination, also providing their corresponding probabilities; and an implicit neural representation that compresses the environment map into an efficient differentiable function. The computation time of environment samples with NEnv is two orders of magnitude less than with traditional methods. NEnv makes no assumptions regarding the content (i.e., natural illumination), thus achieving higher generality than previous learning-based approaches. We share our implementation and a diverse dataset of trained neural environment maps, which can be easily integrated into existing rendering engines.
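For scale, this is the traditional tabulated inverse-CDF environment-map sampling that methods like NEnv are measured against: a marginal CDF over rows and conditional CDFs over columns, with a luminance-times-sin(theta) target. The map here is random stand-in data.

```python
import numpy as np

rng = np.random.default_rng(6)
env = rng.random((32, 64))                      # stand-in luminance map (H x W)
sin_theta = np.sin((np.arange(32) + 0.5) * np.pi / 32)
p = env * sin_theta[:, None]                    # solid-angle weighting
p /= p.sum()

marginal_cdf = np.cumsum(p.sum(axis=1))         # over rows (theta)
cond_cdf = np.cumsum(p, axis=1) / p.sum(axis=1, keepdims=True)

def sample_env(u1, u2):
    """Inverse-CDF texel sampling; returns (row, col, pdf w.r.t. uniform texels)."""
    row = np.searchsorted(marginal_cdf, u1)
    col = np.searchsorted(cond_cdf[row], u2)
    return row, col, p[row, col] * p.size

print(sample_env(rng.random(), rng.random()))
```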

15.
In this paper, we provide a comprehensive theory of anti-aliasing sampling patterns that explains and revises known results, and we introduce a variational optimization framework to generate point patterns with any desired power spectrum and anti-aliasing properties. We start by deriving the exact spectral expression for the expected error in reconstructing a function in terms of the power spectrum of the sampling pattern, and analyzing how the shape of the power spectrum relates to anti-aliasing properties. Based on this analysis, we then formulate the generation of anti-aliasing sampling patterns as a constrained variational optimization over power spectra. This allows us to avoid relying on any parametric form, and thus explore the whole space of realizable spectra. We show that the resulting optimized sampling patterns lead to reconstructions with less visible aliasing artifacts, while keeping low frequencies as clean as possible. Although we focus on image-plane sampling, our theory and algorithms apply in any dimension, and the variational optimization framework can be utilized in any problem where point-pattern characteristics are given or optimized.
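The quantity being constrained, the power spectrum of a point pattern, can be estimated directly with a periodogram. A sketch comparing white noise (flat spectrum, approximately 1 away from DC) against a jittered grid, whose low frequencies are suppressed:

```python
import numpy as np

def power_spectrum(pts, res=64):
    """Periodogram |sum_j exp(-2*pi*i <f, x_j>)|^2 / N on an integer frequency grid."""
    n = len(pts)
    f = np.arange(-res // 2, res // 2)
    fx, fy = np.meshgrid(f, f)
    phase = -2j * np.pi * (fx[..., None] * pts[:, 0] + fy[..., None] * pts[:, 1])
    return np.abs(np.exp(phase).sum(axis=-1)) ** 2 / n

rng = np.random.default_rng(7)
white = rng.random((256, 2))                            # uniform random points
jitter = (np.indices((16, 16)).reshape(2, -1).T
          + rng.random((256, 2))) / 16.0                # one point per grid cell
print(power_spectrum(white).mean(), power_spectrum(jitter).mean())
```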

16.
Oriented bounding box (OBB) hierarchies can be used instead of hierarchies based on axis-aligned bounding boxes (AABBs), providing a tighter fit to the underlying geometric structures and resulting in improved interference tests, such as ray-geometry intersections. In this paper, we present a method for the fast, parallel transformation of an existing bounding volume hierarchy (BVH) based on AABBs into a hierarchy based on oriented bounding boxes. To this end, we parallelise a high-quality OBB extraction algorithm from the literature to operate as a standalone OBB estimator, and further extend it to efficiently build an OBB hierarchy in a bottom-up manner. This agglomerative approach allows for fast parallel execution and the formation of arbitrary, high-quality OBBs in bounding volume hierarchies. The method is fully implemented on the GPU and extensively evaluated with ray intersections.
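As a reference point, the classic PCA covariance fit below produces an OBB from a point set; it is a simple stand-in for the higher-quality extraction algorithm the paper parallelises, but it already shows the tighter fit over an AABB on elongated, rotated data.

```python
import numpy as np

def pca_obb(pts):
    """Return (center, axes, half_extents) of a PCA-aligned bounding box."""
    mean = pts.mean(axis=0)
    cov = np.cov((pts - mean).T)
    _, axes = np.linalg.eigh(cov)          # columns: orthonormal box axes
    local = (pts - mean) @ axes            # project into the box frame
    lo, hi = local.min(axis=0), local.max(axis=0)
    center = mean + axes @ ((lo + hi) / 2.0)
    return center, axes, (hi - lo) / 2.0

rng = np.random.default_rng(8)
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
# Elongated point set, rotated so an axis-aligned box fits it poorly.
pts = (rng.standard_normal((500, 3)) * [5.0, 1.0, 0.2]) @ R.T

center, axes, half = pca_obb(pts)
aabb_half = np.ptp(pts, axis=0) / 2.0
print("OBB volume :", 8 * np.prod(half))
print("AABB volume:", 8 * np.prod(aabb_half))
```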

17.
Recently, deep-learning-based denoising approaches have led to dramatic improvements in low-sample-count Monte Carlo rendering. These approaches are aimed at path tracing, which is not ideal for simulating challenging light transport effects like caustics, where photon mapping is the method of choice. However, photon mapping requires very large numbers of traced photons to achieve high-quality reconstructions. In this paper, we develop the first deep-learning-based method for particle-based rendering, focusing specifically on photon density estimation, the core of all particle-based methods. We train a novel deep neural network to predict a kernel function that aggregates photon contributions at shading points. Our network encodes individual photons into per-photon features, aggregates them in the neighborhood of a shading point to construct a photon local context vector, and infers a kernel function from the per-photon and photon local context features. This network is easy to incorporate into many previous photon mapping methods (by simply swapping the kernel density estimator) and can produce high-quality reconstructions of complex global illumination effects like caustics with an order of magnitude fewer photons than previous photon mapping methods, significantly advancing the computational efficiency of photon mapping.
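For comparison, this is the classic k-nearest-neighbor photon density estimate whose fixed disc kernel the paper's network learns to replace; 2D positions and unit total power keep the toy small.

```python
import numpy as np

def knn_density(photons, power, query, k=16):
    """Density estimate at `query` from the k nearest photons (disc kernel)."""
    d2 = np.sum((photons - query) ** 2, axis=1)
    nearest = np.argpartition(d2, k)[:k]
    r2 = d2[nearest].max()                 # disc radius enclosing the k photons
    return power[nearest].sum() / (np.pi * r2)

rng = np.random.default_rng(9)
photons = rng.normal(0.0, 1.0, (10_000, 2))    # photons concentrated at the origin
power = np.full(len(photons), 1.0 / len(photons))
print(knn_density(photons, power, np.array([0.0, 0.0])))   # ~1/(2*pi), the peak
print(knn_density(photons, power, np.array([3.0, 0.0])))   # much lower density
```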

18.
Layered materials capture subtle, realistic reflection behaviors that traditional single-layer models lack. Much of this is due to the complex subsurface light transport at the interfaces of layers and in the media between them. Rendering with these materials can be costly, since we must simulate these transport effects at every evaluation of the underlying reflectance model, and rendering an image requires thousands of such evaluations per pixel. Recent work treats this complexity by introducing significant approximations, requiring large precomputed datasets per material, or simplifying the light transport simulation within the materials. Even the most effective of these methods struggle with the complexity induced by high-frequency variation in reflectance parameters and micro-surface normals, as well as anisotropic volumetric scattering between the layer interfaces. We present a more efficient, unbiased estimator for light transport in such general, complex layered appearance models. By analyzing the types of transport paths that contribute most to the aggregate reflectance dynamics, we propose an effective and unbiased path sampling method that reduces variance in the reflectance evaluations. Our method additionally supports reflectance importance sampling and does not rely on any precomputation, and so integrates readily into existing renderers. We consistently outperform the state of the art by roughly 2–6× in equal-quality (i.e., equal-error) comparisons.

19.
Procedural material models have been gaining traction in many applications thanks to their flexibility, compactness, and easy editability. We explore the inverse rendering problem of estimating procedural material parameters from photographs, presenting a unified view of the problem in a Bayesian framework. In addition to computing point estimates of the parameters by optimization, our framework uses a Markov Chain Monte Carlo approach to sample the space of plausible material parameters, providing a collection of plausible matches that a user can choose from, and efficiently handling both discrete and continuous model parameters. To demonstrate the effectiveness of our framework, we fit procedural models of a range of materials (wall plaster, leather, wood, anisotropic brushed metals, and layered metallic paints) to both synthetic and real target images.
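The MCMC component can be sketched with a plain random-walk Metropolis-Hastings sampler over a bounded parameter vector; the Gaussian "image match" log-likelihood below is a stand-in for rendering the parameters and comparing against the photograph, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(10)
target = np.array([0.7, 0.2, 0.5])             # "true" material parameters

def log_likelihood(theta):
    # Placeholder for -||render(theta) - photo||^2 / (2 sigma^2).
    return -np.sum((theta - target) ** 2) / (2 * 0.05 ** 2)

theta = rng.random(3)                           # start from the uniform prior
samples, step = [], 0.05
for _ in range(20_000):
    prop = theta + step * rng.standard_normal(3)
    in_bounds = np.all((prop >= 0.0) & (prop <= 1.0))   # uniform prior support
    if in_bounds and np.log(rng.random()) < log_likelihood(prop) - log_likelihood(theta):
        theta = prop
    samples.append(theta)

samples = np.array(samples[5_000:])             # discard burn-in
print("posterior mean:", samples.mean(axis=0))  # clusters around `target`
```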

20.
Computer graphics artists often resort to compositing to rework lighting effects in a synthetic image without requiring a new render. Shadows are primary subjects of artistic manipulation, as they carry important stylistic information while our perception is tolerant of their editing. In this paper we formalize the notion of a global shadow, generalizing the direct shadows found in previous work to a global illumination context. We define an object's shadow layer as the difference between two altered renders of the scene. A shadow layer contains the radiance lost on the camera film because of a given object. We translate this definition into the theoretical framework of Monte Carlo integration, obtaining a concise expression for the shadow layer. Building on it, we propose a path tracing algorithm that renders both the original image and any number of shadow layers in a single pass: the user may choose to separate shadows on a per-object and per-light basis, enabling intuitive and decoupled edits.
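The shadow-layer definition translates directly into code: render once with the object and once with it removed, and the difference is the layer, which can then be rescaled at compositing time. The "renderer" below is a stand-in constant-lighting toy, not the paper's single-pass path tracer.

```python
import numpy as np

def render(shape, with_object):
    """Stand-in renderer: constant lighting, darkened where the object casts shadow."""
    img = np.ones(shape)
    if with_object:
        img[2:5, 2:5] = 0.2        # hypothetical shadowed region
    return img

original = render((8, 8), with_object=True)
altered = render((8, 8), with_object=False)       # same scene, occluder removed
shadow_layer = altered - original                  # radiance lost to the object
# Compositing-time edit: soften the shadow to half strength, no re-render.
edited = original + 0.5 * shadow_layer
print(edited[3, 3])                                # 0.2 + 0.5 * 0.8 = 0.6
```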
