Found 20 similar documents; search took 15 ms
1.
It is still challenging to render directional but non-specular reflections in complex scenes. The SG-based (Spherical Gaussian) many-light framework provides a scalable solution but still requires a large number of glossy virtual lights to avoid spikes and to reduce clamping errors. Directly gathering contributions from these glossy virtual lights to each pixel in a pairwise way is very inefficient. In this paper, we propose an adaptive algorithm with tighter error bounds to efficiently compute glossy interreflections from glossy virtual lights. This approach extends Lightcuts by building hierarchies on both lights and pixels, with new error bounds and new GPU-based traversal methods between the light and pixel hierarchies. Results demonstrate that our method faithfully and efficiently computes glossy interreflections in scenes with highly glossy and spatially varying reflectance. Compared with the conventional Lightcuts method, our approach generates lightcuts with only one-fourth to one-fifth as many light nodes and therefore exhibits better scalability. Additionally, when implemented on the GPU, our algorithm achieves an order of magnitude faster performance than the previous method.
2.
At the foundation of many rendering algorithms lies the symmetry between the path traversed by light and its adjoint path starting from the camera. However, several effects, including polarization and fluorescence, break that symmetry and are defined only along the direction of light propagation. This limits the applicability of bidirectional methods, which exploit this symmetry to simulate light transport effectively. In this work, we focus on how to include these non-symmetric effects within a bidirectional rendering algorithm. We generalize the path integral to support the constraints imposed by non-symmetric light transport. Based on this theoretical framework, we propose modifications to two bidirectional methods, namely bidirectional path tracing and photon mapping, extending them to support polarization and fluorescence in both steady and transient state.
3.
Monte Carlo rendering requires determining the visibility between scene points in order to establish paths between the camera and light sources; this is the most common and most compute-intensive operation. Unfortunately, many tests reveal occlusions, and the corresponding paths do not contribute to the final image. In this work, we present next event estimation++ (NEE++): a visibility mapping technique that performs visibility tests in a more informed way by caching voxel-to-voxel visibility probabilities. We show two scenarios: Russian-roulette-style rejection of visibility tests and direct importance sampling of the visibility. We show applications to next event estimation and light sampling in a unidirectional path tracer, and to light-subpath sampling in bidirectional path tracing. The technique is simple to implement, easy to add to existing rendering systems, and comes at almost no cost, as the required information can be extracted directly from the rendering process itself. It discards up to 80% of visibility tests on average, while reducing variance by ~20% compared to other state-of-the-art light sampling techniques with the same number of samples. It gracefully handles complex scenes with efficiency similar to Metropolis light transport techniques but with more uniform convergence.
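As a hedged illustration of the cached-visibility idea in this abstract, the sketch below assumes a hypothetical hash-map cache keyed by voxel pairs; the Russian-roulette test and the 1/p weighting illustrate the general technique, not the authors' implementation:

```python
import random

class VisibilityCache:
    """Caches voxel-to-voxel visibility statistics (hypothetical layout)."""
    def __init__(self):
        self.hits = {}   # (voxel_a, voxel_b) -> number of unoccluded tests
        self.tests = {}  # (voxel_a, voxel_b) -> total number of tests

    def record(self, a, b, visible):
        key = (min(a, b), max(a, b))
        self.tests[key] = self.tests.get(key, 0) + 1
        if visible:
            self.hits[key] = self.hits.get(key, 0) + 1

    def probability(self, a, b):
        key = (min(a, b), max(a, b))
        n = self.tests.get(key, 0)
        return self.hits.get(key, 0) / n if n else 1.0  # optimistic prior

def russian_roulette_visibility(cache, a, b, trace_shadow_ray, rng=random.random):
    """Skip the expensive shadow ray with probability 1 - p_visible and
    weight surviving tests by 1/p so the estimator stays unbiased."""
    p = max(cache.probability(a, b), 0.05)  # keep a minimum survival probability
    if rng() < p:
        visible = trace_shadow_ray()
        cache.record(a, b, visible)
        return (1.0 / p) if visible else 0.0
    return 0.0  # test skipped; this sample contributes nothing
```

The 1/p weighting is what keeps the Russian-roulette rejection unbiased: skipped tests contribute zero, surviving ones are boosted to compensate.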
4.
Stavros Diolatzis Jan Novak Fabrice Rousselle Jonathan Granskog Miika Aittala Ravi Ramamoorthi George Drettakis 《Computer Graphics Forum》2023,42(6):e14846
We introduce MesoGAN, a model for generative 3D neural textures. This new graphics primitive represents mesoscale appearance by combining the strengths of generative adversarial networks (StyleGAN) and volumetric neural field rendering. The primitive can be applied to surfaces as a neural reflectance shell; a thin volumetric layer above the surface with appearance parameters defined by a neural network. To construct the neural shell, we first generate a 2D feature texture using StyleGAN with carefully randomized Fourier features to support arbitrarily sized textures without repeating artefacts. We augment the 2D feature texture with a learned height feature, which aids the neural field renderer in producing volumetric parameters from the 2D texture. To facilitate filtering, and to enable end-to-end training within memory constraints of current hardware, we utilize a hierarchical texturing approach and train our model on multi-scale synthetic datasets of 3D mesoscale structures. We propose one possible approach for conditioning MesoGAN on artistic parameters (e.g. fibre length, density of strands, lighting direction) and demonstrate and discuss integration into physically based renderers.
5.
The wide adoption of path-tracing algorithms in high-end realistic rendering has stimulated many diverse research initiatives. In this paper we present a coherent survey of methods that utilize Monte Carlo integration for estimating light transport in scenes containing participating media. Our work complements the volume-rendering state-of-the-art report by Cerezo et al. [CPP*05]; we review publications accumulated since its publication over a decade ago, and include earlier methods that are key for building light transport paths in a stochastic manner. We begin by describing analog and non-analog procedures for free-path sampling and discuss various expected-value, collision, and track-length estimators for computing transmittance. We then review the various rendering algorithms that employ these as building blocks for path sampling. Special attention is devoted to null-collision methods that utilize fictitious matter to handle spatially varying densities; we import two "next-flight" estimators originally developed in nuclear sciences. Whenever possible, we draw connections between image-synthesis techniques and methods from particle physics and neutron transport to provide the reader with a broader context.
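The null-collision idea mentioned above can be sketched compactly. The ratio-tracking transmittance estimator below assumes a constant majorant `sigma_bar` and a user-supplied extinction function `sigma_t` (both names are illustrative); it is one of the standard estimators a survey of this kind covers, not a method specific to this report:

```python
import math
import random

def transmittance_ratio_tracking(sigma_t, sigma_bar, t_max, rng=random.random):
    """Null-collision (ratio-tracking) transmittance estimator: tentative
    collisions are sampled from the constant majorant sigma_bar, and the
    running estimate is multiplied by the probability that each tentative
    collision was fictitious (a 'null' collision)."""
    t, T = 0.0, 1.0
    while True:
        t += -math.log(1.0 - rng()) / sigma_bar  # free flight w.r.t. the majorant
        if t >= t_max:
            return T                             # escaped the segment
        T *= 1.0 - sigma_t(t) / sigma_bar        # survive as a null collision
```

For homogeneous media with `sigma_t == sigma_bar` the estimator collapses to analog tracking (the first real collision returns zero).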
6.
Adaptive Representation of Specular Light (cited 1 time: 0 self-citations, 1 by others)
Caustics produce beautiful and intriguing illumination patterns. However, their complex behavior makes them difficult to simulate accurately in all but the simplest configurations. To capture their appearance, we present an adaptive approach based upon light beams. Exploiting the coherence between the light rays forming a beam greatly reduces the number of samples required for precise illumination reconstruction. The beams characterize the light distribution due to interactions with specular surfaces in 3D space. They thus allow for the treatment of illumination within single-scattering participating media. A hierarchical structure enclosing the light beams possesses inherent properties to detect efficiently all beams reaching any 3D point, to adapt itself according to illumination effects in the final image, and to reduce memory consumption via caching.
7.
Carsten Dachsbacher Jaroslav Křivánek Miloš Hašan Adam Arbree Bruce Walter Jan Novák 《Computer Graphics Forum》2014,33(1):88-104
Recent years have seen increasing attention and significant progress in many-light rendering, a class of methods for efficient computation of global illumination. The many-light formulation offers a unified mathematical framework for the problem, reducing the full light transport simulation to the calculation of direct illumination from many virtual light sources. These methods are unrivaled in their scalability: they can produce plausible images in a fraction of a second but also converge to the full solution over time. In this state-of-the-art report, we give an easy-to-follow, introductory tutorial of the many-light theory; provide a comprehensive, unified survey of the topic with a comparison of the main algorithms; discuss limitations regarding materials and light transport phenomena; and present a vision to motivate and guide future research. We cover the fundamental concepts as well as improvements, extensions, and applications of many-light rendering.
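To make the many-light formulation concrete, here is a minimal, hedged sketch of gathering direct illumination from virtual point lights (VPLs), including the usual distance clamp that trades spikes for bias; the data layout and `visible` callback are assumptions for illustration, not taken from the report:

```python
import math

def shade_from_vpls(point, normal, vpls, visible, clamp_dist=0.1):
    """Many-light estimate at a diffuse point: sum the contribution of each
    virtual point light (position, normal, power), clamping the geometry
    term's 1/r^2 singularity to avoid the characteristic VPL spikes."""
    L = 0.0
    for lp, ln, power in vpls:
        d = [lp[k] - point[k] for k in range(3)]
        r2 = sum(x * x for x in d)
        r = math.sqrt(r2)
        w = [x / r for x in d]                                # direction to the VPL
        cos_p = max(sum(normal[k] * w[k] for k in range(3)), 0.0)
        cos_l = max(-sum(ln[k] * w[k] for k in range(3)), 0.0)
        G = cos_p * cos_l / max(r2, clamp_dist * clamp_dist)  # clamped geometry term
        if G > 0.0 and visible(point, lp):
            L += power * G
    return L
```

The clamp is exactly the source of the "clamping errors" that several entries in this listing (e.g. the SG-based many-light paper above) try to reduce.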
8.
This paper presents a novel algorithm for volumetric reconstruction of objects from planar sections using Delaunay triangulation, which addresses the main problems faced by models defined through reconstruction, particularly from the viewpoint of producing meshes suitable for interaction and simulation tasks. The requirements of these applications are discussed here and the results of the method are presented. Additionally, the method is compared to another commonly used reconstruction algorithm based on Delaunay triangulation, showing the advantages of the reconstructions obtained with our technique.
9.
We propose a method to accelerate direct volume rendering using programmable graphics hardware (GPU). In this method, texture slices are grouped together to form texture slabs. Rendering non-empty slabs in front-to-back viewing order generates the resulting image. Considering each pixel of the image as a ray, slab silhouette maps (SSMs) are used to skip empty space along the ray direction on a per-pixel basis. Additionally, SSMs contain terminated-ray information. The method relies on hardware z-occlusion culling and hardware occlusion queries to accelerate ray traversal. The advantage of this method is that SSMs are created on the fly by the GPU without any pre-processing. The cost of generating the acceleration structure is very small with respect to the total rendering time.
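The GPU implementation above relies on occlusion queries, but the underlying slab logic can be sketched on the CPU. In this hedged illustration a ray is simply a list of per-slice (color, alpha) samples; empty slabs are skipped wholesale and the ray terminates early once nearly opaque:

```python
def composite_ray(slice_samples, slab_size=4, term_alpha=0.99):
    """Front-to-back alpha compositing over slabs of slices.
    slice_samples: list of (color, alpha) per slice along one ray.
    Slabs whose samples all have zero alpha are skipped (empty-space
    skipping); the ray terminates once opacity reaches term_alpha."""
    color, alpha = 0.0, 0.0
    for i in range(0, len(slice_samples), slab_size):
        slab = slice_samples[i:i + slab_size]
        if all(a == 0.0 for _, a in slab):
            continue  # empty-space skip: the whole slab contributes nothing
        for c, a in slab:
            color += (1.0 - alpha) * a * c
            alpha += (1.0 - alpha) * a
            if alpha >= term_alpha:
                return color, alpha  # early ray termination
    return color, alpha
```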
10.
José Pedro Aguerre Elena García-Nevado Jairo Acuña Paz y Miño Eduardo Fernández Benoit Beckers 《Computer Graphics Forum》2020,39(6):377-391
Urban thermography is a non-invasive measurement technique commonly used for building diagnosis and energy efficiency evaluation. The physical interpretation of thermal images is a challenging task because they do not necessarily depict the real temperature of the surfaces, but one estimated from the measured incoming radiation. In this sense, the computational rendering of a thermal image can be useful to understand the results captured in a measurement campaign. The computer graphics community has proposed techniques for light rendering that are used for its thermal counterpart. In this work, a physically based simulation methodology based on a combination of the finite element method (FEM) and ray tracing is presented. The proposed methods were tested using a highly detailed urban geometry. Directional emissivity models, glossy reflectivity functions and importance sampling were used to render thermal images. The simulation results were compared with a set of measured thermograms, showing good agreement between them.
11.
Ray tracing algorithms that sample both the light received directly from light sources and the light received indirectly by diffuse reflection from other patches can accurately render the global illumination in a scene and can display complex scenes with accurate shadowing. A drawback of these algorithms, however, is the high cost of sampling the direct light, which is done by shadow-ray testing. Although several strategies are available to reduce the number of shadow rays, a large number of rays is still needed, in particular to sample large area light sources. An adaptive sampling strategy is proposed that reduces the number of shadow rays by using statistical information from the sampling process and by applying information from a radiosity preprocessing step. A further reduction in shadow rays is obtained by exploiting shadow-pattern coherence, i.e. reusing the adaptive sampling pattern for neighboring sampling points.
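A hedged sketch of the kind of statistical stopping rule such an adaptive strategy might use: shadow rays toward an area light are drawn until the standard error of the visibility estimate is small, so fully lit or fully shadowed points stop after the minimum budget. The thresholds and interface are illustrative assumptions, not the paper's:

```python
import math

def sample_direct_light(visible_fn, max_samples=64, min_samples=8, rel_err=0.05):
    """Adaptively sample an area light's visibility: stop early once the
    standard error of the running estimate drops below rel_err times the
    mean. Returns (estimated visibility, shadow rays actually used)."""
    s = s2 = 0.0
    n = 0
    while n < max_samples:
        v = 1.0 if visible_fn() else 0.0  # one shadow ray to a light sample
        s += v
        s2 += v * v
        n += 1
        if n >= min_samples:
            mean = s / n
            var = max(s2 / n - mean * mean, 0.0)
            if math.sqrt(var / n) <= rel_err * max(mean, 1e-3):
                break  # estimate is stable enough; stop casting rays
    return s / n, n
```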
12.
Today, Monte Carlo light transport algorithms are used in many applications to render realistic images. Depending on the complexity of the methods used, certain light effects can or cannot be found by the sampling process. In particular, specular and smooth glossy surfaces often lead to high noise and missing light effects. Path space regularization provides a solution that improves any sampling algorithm by modifying the material-evaluation code. Previously, Kaplanyan and Dachsbacher [KD13] introduced the concept for purely specular interactions. We extend this idea to the commonly used microfacet models by manipulating the roughness parameter prior to evaluation. We also show that this kind of regularization requires a change in the MIS weight computation, and we provide the solution. Finally, we propose two heuristics to adaptively reduce the introduced bias. Using our method, many complex light effects are reproduced and the fidelity of smooth objects is increased. Additionally, if a path was samplable before, its variance is partially reduced.
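One simple form the roughness manipulation described above can take is a lower clamp whose width decays over progressive-rendering iterations, which is how the introduced bias can be driven toward zero. The constants below are illustrative assumptions, and the paper's accompanying MIS-weight correction is not shown:

```python
def regularized_roughness(alpha, iteration, base_width=0.1, decay=0.75):
    """Mollify a microfacet roughness 'alpha' by clamping it from below;
    the clamp width shrinks geometrically with each progressive iteration,
    so the regularization bias vanishes in the limit."""
    width = base_width * decay ** iteration
    return max(alpha, width)
```

A perfectly specular lobe (`alpha == 0`) becomes narrowly glossy and therefore connectable by bidirectional strategies, while already-rough materials are left unchanged.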
13.
Samples with high contribution but low probability density, often called fireflies, occur in all practical Monte Carlo estimators and are part of computing unbiased estimates. For finite-sample estimates, however, they can lead to excessive variance. Rejecting all samples classified as outliers, as suggested in previous work, leads to estimates that are too low and can cause undesirable artefacts. In this paper, we show how samples can be re-weighted depending on their contribution and sampling frequency such that the finite-sample estimate gets closer to the correct expected value and the variance can be controlled. For this, we first derive a theory for how samples should ideally be re-weighted, showing that this would require the probability density function of the optimal sampling strategy. As this probability density function is generally unknown, we show how the discrepancy between the optimal and the actual sampling strategy can be estimated and used for re-weighting in practice. We describe an efficient algorithm that allows for the necessary analysis of per-pixel sample distributions in the context of Monte Carlo rendering without storing any individual samples, with only minimal changes to the rendering algorithm. It causes negligible runtime overhead, works in constant memory and is well suited for parallel and progressive rendering. The re-weighting runs as a fast post-process, can be controlled interactively, and our approach is non-destructive in that the unbiased result can be reconstructed at any time.
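The paper derives its weights from the discrepancy between the actual and the optimal sampling density. As a much cruder stand-in that still shows the non-destructive character of such a post-process, the sketch below caps contributions at a multiple of the per-pixel median; the cap rule is my assumption, not the authors' derivation:

```python
def reweight_samples(samples, cap_factor=8.0):
    """Down-weight firefly outliers: any contribution more than cap_factor
    times the median is scaled down to the cap. The input list is left
    untouched, so the unbiased mean can still be reconstructed from it."""
    ordered = sorted(samples)
    median = ordered[len(ordered) // 2]
    cap = cap_factor * max(median, 1e-12)  # guard against an all-zero pixel
    return [min(v, cap) for v in samples]
```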
14.
José M. Noguera Antonio J. Rueda Miguel A. Espada Máximo Martín 《Computer Graphics Forum》2014,33(8):145-156
The generation of a stereoscopic animation film requires doubling the rendering times and hence the cost. In this paper, we address this problem and propose an automatic system for generating a stereo pair from a given image and its depth map. Although several solutions exist in the literature, the high standards of image quality required in the context of a professional animation studio forced us to develop specially crafted algorithms that avoid artefacts caused by occlusions, anti-aliasing filters, etc. This paper describes all the algorithms involved in our system and provides their GPU implementation. The proposed system has been tested with real-life working scenarios. Our experiments show that the second view of the stereoscopic pair can be computed with as little as 15% of the effort of the original image while guaranteeing a similar quality.
15.
K. Arnavaz M. Kragballe Nielsen P. G. Kry M. Macklin K. Erleben 《Computer Graphics Forum》2023,42(1):277-289
In this work, we present a novel approach for calibrating material model parameters for soft body simulations using real data. We use a fully differentiable pipeline, combining a differentiable soft body simulator and differentiable depth rendering, which permits fast gradient-based optimizations. Our method requires no data pre-processing, and minimal experimental set-up, as we directly minimize the L2-norm between raw LIDAR scans and rendered simulation states. In essence, we provide the first marker-free approach for calibrating a soft-body simulator to match observed real-world deformations. Our approach is inexpensive as it solely requires a consumer-level LIDAR sensor compared to acquiring a professional marker-based motion capture system. We investigate the effects of different material parameterizations and evaluate convergence for parameter optimization in both single and multi-material scenarios of varying complexity. Finally, we show that our set-up can be extended to optimize for dynamic behaviour as well.
16.
17.
Bochang Moon Jong Yun Jun JongHyeob Lee Kunho Kim Toshiya Hachisuka Sung‐Eui Yoon 《Computer Graphics Forum》2013,32(1):139-151
We propose an efficient and robust image-space denoising method for noisy images generated by Monte Carlo ray tracing methods. Our method is based on two new concepts: virtual flash images and homogeneous pixels. Inspired by recent developments in flash photography, virtual flash images emulate photographs taken with a flash, capturing various features of rendered images without taking additional samples. Using a virtual flash image as an edge-stopping function, our method can preserve image features that are not well captured by existing edge-stopping functions such as normals and depth values. While denoising each pixel, we consider only homogeneous pixels—pixels that are statistically equivalent to each other. This makes it possible to define a stochastic error bound for our method, and this bound goes to zero as the number of ray samples goes to infinity, irrespective of denoising parameters. To highlight the benefits of our method, we apply it to two Monte Carlo ray tracing methods, photon mapping and path tracing, with various input scenes. We demonstrate that using virtual flash images and homogeneous pixels with a standard denoising method outperforms state-of-the-art image-space denoising methods.
18.
View-Dependent Layer Sampling: A Hardware-Accelerated Volume Ray Casting Algorithm (cited 2 times: 0 self-citations, 2 by others)
Ray casting is a high-quality volume rendering method. It proceeds in image-space order, traversing and sampling the volume data ray by ray. Traditionally it could therefore only be implemented on the CPU, making it slow and poorly interactive. We propose a new view-dependent layer sampling (VDLS) structure. VDLS reorganizes all sample points along the rays into a series of layers, simplifies them into two view-dependent geometry buffers, and represents these in the GPU (graphics processing unit) as two dynamic textures. Exploiting the programmability of the GPU, all six steps of the ray casting algorithm (ray generation, ray traversal, interpolation, classification, shading, and color compositing) are implemented entirely on the GPU. On this basis, two acceleration techniques based on volume-space and image-space coherence are proposed to quickly cull ineffective rays. Combined with further rendering and compositing techniques, VDLS turns a polygon-oriented graphics engine into a volume ray casting engine that, under perspective projection, can process 150 million interpolated, post-classified, and shaded ray samples per second. Experimental results show that the proposed method enables fast, interactive visualization and walkthroughs of scalar volume data in medical visualization, simulation of real physical phenomena, and material inspection.
19.
Gurprit Singh Kartic Subr David Coeurjolly Victor Ostromoukhov Wojciech Jarosz 《Computer Graphics Forum》2020,39(1):7-19
Fourier analysis is gaining popularity in image synthesis as a tool for the analysis of error in Monte Carlo (MC) integration. Still, existing tools are only able to analyse convergence under simplifying assumptions (such as randomized shifts) which are not applied in practice during rendering. We reformulate the expressions for bias and variance of sampling-based integrators to unify non-uniform sample distributions [importance sampling (IS)] as well as correlations between samples while respecting finite sampling domains. Our unified formulation hints at fundamental limitations of Fourier-based tools in performing variance analysis for MC integration. At the same time, it reveals that, when combined with correlated sampling, IS can impact convergence rate by introducing or inhibiting discontinuities in the integrand. We demonstrate that the convergence of multiple importance sampling (MIS) is determined by the strategy which converges slowest and propose several simple approaches to overcome this limitation. We show that smoothing light boundaries (as commonly done in production to reduce variance) can improve (M)IS convergence (at a cost of introducing a small amount of bias) since it removes C0 discontinuities within the integration domain. We also propose practical integrand- and sample-mirroring approaches which cancel the impact of boundary discontinuities on the convergence rate of estimators.
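For context, the MIS estimator whose convergence is analysed above combines several sampling strategies, typically with the balance heuristic. A minimal one-sample-per-strategy sketch (the interfaces are illustrative, not from the paper) is:

```python
def balance_heuristic(pdfs, i):
    """Balance-heuristic MIS weight for a sample drawn from strategy i,
    given its density under every combined strategy."""
    return pdfs[i] / sum(pdfs)

def mis_estimate(f, strategies, rng):
    """One sample per strategy; each contributes w_i(x) * f(x) / p_i(x).
    'strategies' is a list of (sample, pdf_all) pairs, where pdf_all(x)
    returns the density of x under every strategy."""
    total = 0.0
    for i, (sample, pdf_all) in enumerate(strategies):
        x = sample(rng)
        pdfs = pdf_all(x)
        total += balance_heuristic(pdfs, i) * f(x) / pdfs[i]
    return total
```

The paper's observation is about the asymptotics of this combination: the overall error decay of `mis_estimate` is dictated by the slowest-converging strategy in the list.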
20.
Hidden images contain one or several concealed foregrounds that can be recognized with the assistance of clues preserved by the artist. Experienced artists train for years to become skilled enough to find appropriate hidden positions in a given image. However, it is not easy for amateurs to quickly find these positions when they try to create satisfactory hidden images. In this paper, we present an interactive framework that suggests hidden positions and the corresponding results. The suggested results generated by our approach are ordered according to the level of their recognition difficulty. To this end, we propose a novel approach for assessing the recognition difficulty of hidden images, and a new hidden-image synthesis method that takes spatial influence into account to make the foreground harmonious with its local surroundings. During the synthesis stage, we extract the characteristics of the foreground as clues, based on a visual attention model. We validate the effectiveness of our approach through two user studies, covering the quality of the hidden images and the suggestion accuracy.