Similar Literature
20 similar documents found.
1.
We present a photon mapping technique capable of computing high quality global illumination at interactive frame rates. By extending the concept of photon differentials to efficiently handle diffuse reflections, we generate footprints at all photon hit points. These enable illumination reconstruction by density estimation with variable kernel bandwidths without having to locate the k nearest photon hits first. Adapting an efficient BVH construction process for ray tracing acceleration, we build photon maps that enable the fast retrieval of all hits relevant to a shading point. We present a heuristic that automatically tunes the BVH build's termination criterion to the scene and illumination conditions. As all stages of the algorithm are highly parallelizable, we demonstrate an implementation using NVidia's CUDA manycore architecture running at interactive rates on a single GPU. Both light source and camera may be freely moved with global illumination fully recalculated in each frame.
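A minimal C++ sketch of the density-estimation idea described above, assuming a per-photon footprint radius derived from the photon differential and a 2D Epanechnikov kernel. The struct layout, kernel choice and the pre-filtered photon list handed to the function are illustrative assumptions, not the paper's CUDA implementation.

```cpp
// Sketch: variable-bandwidth radiance estimate from photon footprints,
// instead of locating the k nearest photon hits first.
#include <vector>

struct Vec3 { float x, y, z; };
static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  sub(const Vec3& a, const Vec3& b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }

struct PhotonHit {
    Vec3  position;        // photon hit point
    Vec3  power;           // photon flux (RGB)
    float footprintRadius; // bandwidth derived from the photon differential
};

// Reflected radiance at shading point x for a diffuse BRDF (albedo / pi).
// 'hits' would normally come from a BVH query returning only photons whose
// footprints overlap x.
Vec3 estimateRadiance(const Vec3& x, const Vec3& albedo,
                      const std::vector<PhotonHit>& hits)
{
    const float pi = 3.14159265f;
    Vec3 sum{0, 0, 0};
    for (const PhotonHit& p : hits) {
        Vec3 d  = sub(p.position, x);
        float r2 = p.footprintRadius * p.footprintRadius;
        float t  = dot(d, d) / r2;
        if (t >= 1.0f) continue;                 // outside this photon's footprint
        float k = 2.0f / (pi * r2) * (1.0f - t); // 2D Epanechnikov kernel, integrates to 1
        sum.x += p.power.x * k;
        sum.y += p.power.y * k;
        sum.z += p.power.z * k;
    }
    return { albedo.x / pi * sum.x, albedo.y / pi * sum.y, albedo.z / pi * sum.z };
}
```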

2.
The ability to interactively render dynamic scenes with global illumination is one of the main challenges in computer graphics. The improvement in performance of interactive ray tracing brought about by significant advances in hardware and careful exploitation of coherence has rendered the potential of interactive global illumination a reality. However, the simulation of complex light transport phenomena, such as diffuse interreflections, is still quite costly to compute in real time. In this paper we present a caching scheme, termed Instant Caching, based on a combination of irradiance caching and instant radiosity. Reutilising calculations from neighbouring computations results in a speedup over previous instant radiosity-based approaches. Additionally, temporal coherence is exploited by identifying which computations have been invalidated due to geometric transformations and updating only those paths. The exploitation of spatial and temporal coherence allows us to achieve superior frame rates for interactive global illumination within dynamic scenes, without any precomputation or quality loss when compared to previous methods; handling of lighting and material changes is also demonstrated.
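The instant-radiosity half of the scheme can be illustrated with a small sketch: diffuse indirect illumination at a shading point gathered from virtual point lights (VPLs). The caching half of the paper, which reuses such sums at neighbouring shading points and invalidates only paths affected by geometric changes, is omitted; all names and the clamped geometry term below are assumptions, not the authors' code.

```cpp
// Sketch: gathering indirect illumination from VPLs (instant radiosity).
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3  sub(const Vec3& a, const Vec3& b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct VPL { Vec3 position, normal, flux; };

// Visibility would be answered by the ray tracer; stubbed to keep the sketch self-contained.
static bool visible(const Vec3&, const Vec3&) { return true; }

Vec3 indirectFromVPLs(const Vec3& x, const Vec3& n, const Vec3& albedo,
                      const std::vector<VPL>& vpls, float minDist = 0.1f)
{
    const float invPi = 1.0f / 3.14159265f;
    Vec3 sum{0, 0, 0};
    for (const VPL& v : vpls) {
        if (!visible(x, v.position)) continue;
        Vec3 d = sub(v.position, x);
        float len2 = dot(d, d);
        if (len2 == 0.0f) continue;
        float len = std::sqrt(len2);
        Vec3 w{d.x / len, d.y / len, d.z / len};        // direction towards the VPL
        float cosReceiver = std::max(0.0f, dot(n, w));
        float cosVpl      = std::max(0.0f, -dot(v.normal, w));
        // Geometry term, clamped to avoid the usual VPL singularity spikes.
        float g = cosReceiver * cosVpl / std::max(len2, minDist * minDist);
        sum.x += albedo.x * invPi * v.flux.x * g;
        sum.y += albedo.y * invPi * v.flux.y * g;
        sum.z += albedo.z * invPi * v.flux.z * g;
    }
    return sum;
}
```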

3.
We present a new method for estimating the radiance function of complex area light sources. The method is based on Jensen's photon mapping algorithm. In order to capture high angular frequencies in the radiance function, we incorporate the angular domain into the density estimation. However, density estimation in position-direction space makes it necessary to find a tradeoff between the spatial and angular accuracy of the estimation. We identify the parameters which are important for this tradeoff and investigate the typical estimation errors. We show how the large data size, which is inherent to the underlying problem, can be handled. The method is applied to different automotive tail lights. It can be applied to a wide range of other real-world light sources.

4.
Real-time Light Animation
Light source animation is a particularly hard field of real-time global illumination algorithms, since moving light sources result in drastic illumination changes and make coherence techniques less effective. However, the animation of small (point-like) light sources represents a special but practically very important case, for which the reuse of the results of other frames is possible. This paper presents a fast light source animation algorithm based on the virtual light sources illumination method. The speed-up is close to the length of the animation, and is due to reusing paths in all frames rather than only in the frame where they were obtained. Possible applications of this algorithm include lighting design and systems that convey shape and features through relighting.

5.
《Computers & Graphics》2012,36(8):1096-1108
In this paper we propose a new method for solving inverse lighting design problems that can include diverse sources such as diffuse roof skylights or artificial light sources. Given a user specification of illumination requirements, our approach provides optimal light source positions as well as optimal shapes for skylight installations in interior architectural models. The well-known, huge computational effort involved in searching for an optimal solution is tackled by combining two concepts: exploiting scene coherence to compute global illumination and using a metaheuristic technique for optimization. Results and analysis show that our method provides both fast and accurate results, making it suitable for lighting design in indoor environments while supporting interactive visualization of global illumination.
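As a rough illustration of the optimization side, below is a generic simulated-annealing loop over candidate light positions. The cost function is a hypothetical placeholder standing in for "distance between the simulated illumination and the user requirements" (which the paper evaluates with a coherence-accelerated global illumination pass); the perturbation step and cooling schedule are generic choices, not the paper's metaheuristic.

```cpp
// Sketch: metaheuristic (simulated annealing) search over light positions.
#include <cmath>
#include <random>
#include <vector>

struct Vec3 { float x, y, z; };
using Configuration = std::vector<Vec3>;   // one position per light / skylight patch

// Placeholder objective: squared distance of the lights' centroid from a target
// point. A real implementation would compare simulated illumination against the
// user-specified requirements.
static float illuminationError(const Configuration& lights)
{
    Vec3 c{0, 0, 0};
    for (const Vec3& p : lights) { c.x += p.x; c.y += p.y; c.z += p.z; }
    float n = float(lights.size());
    const Vec3 target{0.0f, 3.0f, 0.0f};
    float dx = c.x / n - target.x, dy = c.y / n - target.y, dz = c.z / n - target.z;
    return dx*dx + dy*dy + dz*dz;
}

Configuration anneal(Configuration current, int iterations)
{
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    std::normal_distribution<float> jitter(0.0f, 0.25f);

    float currentCost = illuminationError(current);
    Configuration best = current;
    float bestCost = currentCost;
    float temperature = 1.0f;

    for (int i = 0; i < iterations; ++i) {
        Configuration candidate = current;
        for (Vec3& p : candidate) {            // perturb every light position
            p.x += jitter(rng); p.y += jitter(rng); p.z += jitter(rng);
        }
        float cost = illuminationError(candidate);
        // Always accept improvements; accept worse candidates with Boltzmann probability.
        if (cost < currentCost ||
            uni(rng) < std::exp((currentCost - cost) / temperature)) {
            current = candidate;
            currentCost = cost;
            if (cost < bestCost) { best = current; bestCost = cost; }
        }
        temperature *= 0.995f;                 // geometric cooling schedule
    }
    return best;
}
```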

6.
Interactive global illumination for fully deformable scenes with dynamic relighting is currently a very elusive goal in the area of realistic rendering. In this work we propose a system that is based on explicit visibility calculations and which is highly efficient and scalable. The rendering equation defines the light exchange between surfaces, which we approximate by subsampling. By utilizing the power of modern parallel GPUs using the CUDA framework we achieve interactive frame rates. Since we update the global illumination continuously in an asynchronous fashion, we maintain interactivity at all times for moderately complex scenes. We show that we can achieve higher frame rates for scenes with moving light sources, diffuse indirect illumination and dynamic geometry than other current methods, while maintaining a high image quality.

7.
We present a fast reconstruction filtering method for images generated with Monte Carlo-based rendering techniques. Our approach specializes in reducing global illumination noise in the presence of depth-of-field effects at very low sampling rates and interactive frame rates. We employ edge-aware filtering in the sample space to locally improve the outgoing radiance of each sample. The improved samples are then distributed in the image plane using a fast, linear manifold-based approach supporting very large circles of confusion. We evaluate our filter by applying it to several images containing noise caused by Monte Carlo-simulated global illumination, area light sources and depth of field. We show that our filter can efficiently denoise such images at interactive frame rates on current GPUs and with as few as 4–16 samples per pixel. Our method operates only on the colour and geometric sample information output of the initial rendering process. It does not make any assumptions on the underlying rendering technique and sampling strategy and can therefore be implemented completely as a post-process filter.
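A minimal sketch of the edge-aware weighting idea, assuming an ordinary per-pixel G-buffer (noisy colour, depth and normal). The paper filters in sample space and redistributes samples along a manifold to support large circles of confusion; that part is not shown, and all parameters below are illustrative.

```cpp
// Sketch: cross-bilateral filtering of noisy Monte Carlo radiance guided by
// depth and normal buffers.
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

struct Gbuffer {
    int width, height;
    std::vector<Vec3>  color;    // noisy Monte Carlo radiance
    std::vector<float> depth;    // linear depth
    std::vector<Vec3>  normal;   // shading normal
};

// Weight: nearby in screen space, similar depth, similar normal.
static float weight(const Gbuffer& g, int center, int neighbor, float dx, float dy)
{
    const float sigmaS = 4.0f, sigmaD = 0.05f;
    float ws = std::exp(-(dx*dx + dy*dy) / (2.0f * sigmaS * sigmaS));
    float dd = g.depth[center] - g.depth[neighbor];
    float wd = std::exp(-(dd * dd) / (2.0f * sigmaD * sigmaD));
    const Vec3& a = g.normal[center];
    const Vec3& b = g.normal[neighbor];
    float wn = std::max(0.0f, a.x*b.x + a.y*b.y + a.z*b.z);
    return ws * wd * wn * wn;    // squaring sharpens the normal edge-stopping term
}

Vec3 filterPixel(const Gbuffer& g, int x, int y, int radius)
{
    int center = y * g.width + x;
    Vec3 sum{0, 0, 0};
    float wsum = 0.0f;
    for (int j = -radius; j <= radius; ++j)
        for (int i = -radius; i <= radius; ++i) {
            int nx = x + i, ny = y + j;
            if (nx < 0 || ny < 0 || nx >= g.width || ny >= g.height) continue;
            int n = ny * g.width + nx;
            float w = weight(g, center, n, float(i), float(j));
            sum.x += w * g.color[n].x;
            sum.y += w * g.color[n].y;
            sum.z += w * g.color[n].z;
            wsum  += w;
        }
    if (wsum == 0.0f) return g.color[center];
    return {sum.x / wsum, sum.y / wsum, sum.z / wsum};
}
```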

8.
We present an algorithm that enables real-time dynamic shading in direct volume rendering using general lighting, including directional lights, point lights, and environment maps. Real-time performance is achieved by encoding local and global volumetric visibility using spherical harmonic (SH) basis functions stored in an efficient multiresolution grid over the extent of the volume. Our method enables high-frequency shadows in the spatial domain, but is limited to a low-frequency approximation of visibility and illumination in the angular domain. In a first pass, level of detail (LOD) selection in the grid is based on the current transfer function setting. This enables rapid online computation and SH projection of the local spherical distribution of visibility information. Using a piecewise integration of the SH coefficients over the local regions, the global visibility within the volume is then computed. By representing the light sources using their SH projections, the integral over lighting, visibility, and isotropic phase functions can be efficiently computed during rendering. The utility of our method is demonstrated in several examples showing the generality and interactive performance of the approach.
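The final step, integrating lighting against visibility and an isotropic phase function, becomes cheap once both are expressed in the same orthonormal SH basis: the spherical integral of the product of two band-limited functions reduces to a dot product of their coefficient vectors. A small sketch, with the band count and data layout as assumptions:

```cpp
// Sketch: spherical integral of the product of two band-limited functions
// reduces to a coefficient dot product (orthonormality of the SH basis).
#include <array>

constexpr int kBands  = 4;                    // low-frequency angular approximation
constexpr int kCoeffs = kBands * kBands;      // 16 SH coefficients

using SHCoeffs = std::array<float, kCoeffs>;

// Integral over the sphere of light(w) * (visibility(w) * phase): sum_i l_i * v_i.
float integrateProduct(const SHCoeffs& light, const SHCoeffs& visibilityTimesPhase)
{
    float sum = 0.0f;
    for (int i = 0; i < kCoeffs; ++i)
        sum += light[i] * visibilityTimesPhase[i];
    return sum;
}
```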

9.
Despite great efforts in recent years to accelerate global illumination computation, the real-time ray tracing of fully dynamic scenes to support photorealistic indirect illumination effects has yet to be achieved in computer graphics. In this paper, we propose an extended ray tracing model that can be readily implemented on a GPU to facilitate the interactive generation of diffuse indirect illumination, the quality of which is comparable to that generated by the traditional, time-consuming photon mapping method with final gathering. Our method employs three types of (multilevel) grids to represent the indirect light in a scene using a form that facilitates the efficient estimation of the reflected radiance caused by diffuse interreflection. This method includes the mathematical tool of spherical harmonics and a rendering scheme that performs the final gathering step with a minimal cost during ray tracing, which guarantees interactive frame rates. We evaluated our technique using several dynamic scenes with nontrivial complexity, which demonstrated its effectiveness.

10.
We present a novel visibility representation, the Intersection Field (i-Field), to compute global illumination at interactive rates. The i-Field provides fast visibility and line-scene intersection queries. We factorize the direct illumination into local irradiance and a visibility ratio; the latter is efficiently evaluated by querying the i-Field. The indirect illumination is simulated by photon tracing, which is also accelerated by the i-Field. By quickly detecting invalid portions, our approach can handle highly dynamic scenes, allowing light sources and scene geometries to be manipulated at interactive rates through rigid transformations and free deformations.

11.
This paper presents an interactive system for realistic visualization of earth-scale clouds. Realistic images can be generated at interactive frame rates while the viewpoint and the sunlight directions can be changed interactively. The realistic display of earth-scale clouds requires us to render large volume data representing the density distribution of the clouds. However, this is generally time-consuming and it is difficult to achieve interactive performance, especially when the sunlight direction changes.

12.
This paper introduces a caching technique based on a volumetric representation that captures low-frequency indirect illumination. This structure is intended for efficient storage and manipulation of illumination. It is based on a 3D grid that stores a fixed set of irradiance vectors. During preprocessing, this representation can be built using almost any existing global illumination software. During rendering, the indirect illumination within a voxel is interpolated from its associated irradiance vectors, and is used as additional local light sources. Compared with other techniques, the 3D vector-based representation of our technique offers increased robustness against local geometric variations of a scene. We thus demonstrate that it may be employed as an efficient and high-quality caching data structure for bidirectional rendering techniques such as particle tracing or photon mapping.
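A minimal sketch of such a grid under an assumed data layout: one irradiance vector per grid vertex, with the value inside a voxel reconstructed by trilinear interpolation of the eight surrounding vectors. How the vectors are built (by an existing global illumination solver, per the paper) is not shown.

```cpp
// Sketch: regular 3D grid of irradiance vectors with trilinear lookup.
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

struct IrradianceGrid {
    int nx, ny, nz;                  // grid vertices per axis (>= 2 each)
    Vec3 minCorner, cellSize;        // world-space placement of the grid
    std::vector<Vec3> vectors;       // nx * ny * nz irradiance vectors

    const Vec3& at(int i, int j, int k) const {
        return vectors[(size_t(k) * ny + j) * nx + i];
    }

    // Indirect illumination inside a voxel: trilinear interpolation of the
    // eight surrounding irradiance vectors.
    Vec3 sample(const Vec3& p) const {
        float gx = std::clamp((p.x - minCorner.x) / cellSize.x, 0.0f, nx - 1.001f);
        float gy = std::clamp((p.y - minCorner.y) / cellSize.y, 0.0f, ny - 1.001f);
        float gz = std::clamp((p.z - minCorner.z) / cellSize.z, 0.0f, nz - 1.001f);
        int i = int(gx), j = int(gy), k = int(gz);
        float fx = gx - i, fy = gy - j, fz = gz - k;
        auto lerp = [](const Vec3& a, const Vec3& b, float t) {
            return Vec3{a.x + t * (b.x - a.x), a.y + t * (b.y - a.y), a.z + t * (b.z - a.z)};
        };
        Vec3 c0 = lerp(lerp(at(i, j,     k), at(i + 1, j,     k), fx),
                       lerp(at(i, j + 1, k), at(i + 1, j + 1, k), fx), fy);
        Vec3 c1 = lerp(lerp(at(i, j,     k + 1), at(i + 1, j,     k + 1), fx),
                       lerp(at(i, j + 1, k + 1), at(i + 1, j + 1, k + 1), fx), fy);
        return lerp(c0, c1, fz);
    }
};
```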

13.
We present a novel framework for efficiently computing the indirect illumination in diffuse and moderately glossy scenes using density estimation techniques. Many existing global illumination approaches either quickly compute an overly approximate solution or perform an orders of magnitude slower computation to obtain high-quality results for the indirect illumination. The proposed method improves photon density estimation and leads to significantly better visual quality in particular for complex geometry, while only slightly increasing the computation time. We perform direct splatting of photon rays, which allows us to use simpler search data structures. Since our density estimation is carried out in ray space rather than on surfaces, as in the commonly used photon mapping algorithm, the results are more robust against geometrically incurred sources of bias. This holds also in combination with final gathering where photon mapping often overestimates the illumination near concave geometric features. In addition, we show that our photon splatting technique can be extended to handle moderately glossy surfaces and can be combined with traditional irradiance caching for sparse sampling and filtering in image space.

14.
This paper presents a novel compressed sensing (CS) algorithm and camera design for light field video capture using a single sensor consumer camera module. Unlike microlens light field cameras, which sacrifice spatial resolution to obtain angular information, our CS approach is designed for capturing light field videos with high angular, spatial, and temporal resolution. The compressive measurements required by CS are obtained using a random color-coded mask placed between the sensor and aperture planes. The convolution of the incoming light rays from different angles with the mask results in a single image on the sensor, hence achieving a significant reduction in the required bandwidth for capturing light field videos. We propose changing the random pattern on the spectral mask between consecutive frames in a video sequence and extracting spatio-angular-spectral-temporal 6D patches. Our CS reconstruction algorithm for light field videos recovers each frame while taking into account the neighboring frames to achieve significantly higher reconstruction quality with reduced temporal incoherencies, as compared with previous methods. Moreover, a thorough analysis of various sensing models for compressive light field video acquisition is conducted to highlight the advantages of our method. The results show a clear advantage of our method for monochrome sensors, as well as sensors with color filter arrays.
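The coded-mask measurement can be sketched as a simple forward model: each sensor pixel records a mask-weighted sum of the angular views of the light field. The array layouts and per-view mask representation below are assumptions; the CS reconstruction itself is not shown.

```cpp
// Sketch: compressive sensing forward model for a coded-mask light field camera.
#include <vector>

// lightField[a] is one angular view (width * height * 3 floats, RGB per pixel);
// mask[a] holds the per-view modulation induced by the colour-coded mask.
std::vector<float> senseFrame(const std::vector<std::vector<float>>& lightField,
                              const std::vector<std::vector<float>>& mask,
                              size_t pixelsTimesChannels)
{
    std::vector<float> sensor(pixelsTimesChannels, 0.0f);
    for (size_t a = 0; a < lightField.size(); ++a)        // angular views
        for (size_t i = 0; i < pixelsTimesChannels; ++i)  // pixels x RGB
            sensor[i] += mask[a][i] * lightField[a][i];   // coded sum on the sensor
    return sensor;
}
```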

15.
Real-Time Cloud Rendering
This paper presents a method for realistic real-time rendering of clouds suitable for flight simulation and games. It provides a cloud shading algorithm that approximates multiple forward scattering in a preprocess, and first order anisotropic scattering at runtime. Impostors are used to accelerate cloud rendering by exploiting frame-to-frame coherence in an interactive flight simulation. Impostors are shown to be particularly well suited to clouds, even in circumstances under which they cannot be applied to the rendering of polygonal geometry. The method allows hundreds of clouds and hundreds of thousands of particles to be rendered at high frame rates, and improves interaction with clouds by reducing artifacts introduced by direct particle rendering techniques.

16.
In augmented reality, one of the key tasks in achieving a convincing visual appearance consistency between virtual objects and video scenes is to maintain coherent illumination along the whole sequence. As outdoor illumination is largely dependent on the weather, the lighting condition may change from frame to frame. In this paper, we propose a fully image-based approach for online tracking of outdoor illumination variations from videos captured with moving cameras. Our key idea is to estimate the relative intensities of sunlight and skylight via a sparse set of planar feature points extracted from each frame. To address the inevitable feature misalignments, a set of constraints is introduced to select the most reliable ones. Exploiting the spatial and temporal coherence of illumination, the relative intensities of sunlight and skylight are finally estimated by using an optimization process. We validate our technique on a set of real-life videos and show that the results with our estimations are visually coherent along the video sequences.
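A heavily simplified sketch of the core estimation idea (an assumption, not the paper's exact formulation): if each reliable planar feature point contributes one linear equation intensity ≈ sun·s + sky·k, with s and k being precomputed per-point sun and sky shading terms, the relative intensities follow from a 2×2 least-squares solve.

```cpp
// Sketch: relative sun/sky intensity estimation via normal equations of a
// two-parameter linear least-squares fit over feature-point samples.
#include <cmath>
#include <vector>

struct Sample { float intensity, sunTerm, skyTerm; };

bool estimateSunSky(const std::vector<Sample>& samples, float& sun, float& sky)
{
    double a11 = 0, a12 = 0, a22 = 0, b1 = 0, b2 = 0;
    for (const Sample& s : samples) {
        a11 += double(s.sunTerm) * s.sunTerm;
        a12 += double(s.sunTerm) * s.skyTerm;
        a22 += double(s.skyTerm) * s.skyTerm;
        b1  += double(s.sunTerm) * s.intensity;
        b2  += double(s.skyTerm) * s.intensity;
    }
    double det = a11 * a22 - a12 * a12;
    if (std::abs(det) < 1e-9) return false;   // degenerate: sun and sky terms not independent
    sun = float((a22 * b1 - a12 * b2) / det);
    sky = float((a11 * b2 - a12 * b1) / det);
    return true;
}
```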

17.
Environment sampling is a popular technique for rendering scenes with distant environment illumination. However, the temporal consistency of animations synthesized under dynamic environment sequences has not been fully studied. This paper addresses this problem and proposes a novel method, namely spatiotemporal sampling, to fully exploit both the temporal and spatial coherence of environment sequences. Our method treats an environment sequence as a spatiotemporal volume and samples the sequence by stratifying the volume adaptively. For this purpose, we first present a new metric to measure the importance of each stratified volume. A stratification algorithm is then proposed to adaptively suppress the abrupt temporal and spatial changes in the generated sampling patterns. The proposed method is able to automatically adjust the number of samples for each environment frame and produce temporally coherent sampling patterns. Comparative experiments demonstrate the capability of our method to produce smooth and consistent animations under dynamic environment sequences.

18.
We present a hybrid approach to simulate global illumination and soft shadows at interactive frame rates. The strengths of hardware-accelerated GPU techniques are combined with CPU methods to achieve physically consistent results while maintaining reasonable performance. The process of image synthesis is subdivided into multiple passes accounting for the different illumination effects. While direct lighting is rendered efficiently by rasterization, soft shadows are simulated using a novel approach combining the speed of shadow mapping and the accuracy of visibility ray tracing. A shadow refinement mask is derived from the result of the direct lighting pass and from a small number of shadow maps to identify the penumbral region of an area light source. This region is accurately rendered by ray tracing. For diffuse indirect illumination, we introduce radiosity photons to profit from the flexibility of a point-based sampling while maintaining the benefits of interpolation over scattered data approximation or density estimation. A sparse sampling of the scene is generated by particle tracing. An area is approximated for each point sample to compute the radiosity solution using a relaxation approach. The indirect illumination is interpolated between neighboring radiosity photons, stored in a multidimensional search tree. We compare different neighborhood search algorithms in terms of image quality and performance. Our method yields interactive frame rates and results consistent with path tracing reference solutions.
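The shadow refinement mask can be illustrated with a small sketch under assumed interfaces: a handful of shadow maps, one per sample position on the area light, are queried per pixel; pixels where the lookups disagree lie in the penumbra and are flagged for accurate visibility ray tracing, while unanimous pixels keep the cheap rasterized result.

```cpp
// Sketch: building a shadow refinement mask from a few shadow-map lookups.
#include <vector>

// Hypothetical callback: returns 1 if the surface point of pixel 'pixel' is lit
// according to shadow map 'shadowMap' (one map per sample on the area light), else 0.
using ShadowTest = int (*)(int pixel, int shadowMap);

std::vector<bool> buildRefinementMask(int pixelCount, int shadowMapCount, ShadowTest lit)
{
    std::vector<bool> needsRayTracing(pixelCount, false);
    for (int p = 0; p < pixelCount; ++p) {
        int litCount = 0;
        for (int s = 0; s < shadowMapCount; ++s) litCount += lit(p, s);
        // Agreement (all lit or all shadowed): keep the rasterized result.
        // Disagreement: penumbra, refine with visibility rays.
        needsRayTracing[p] = (litCount != 0 && litCount != shadowMapCount);
    }
    return needsRayTracing;
}
```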

19.
Under complex weather conditions, mesoscale circulations are often masked by large-scale circulations, so the spatio-temporal density of mesoscale cloud clusters cannot be obtained in time, which degrades the visual simulation of their three-dimensional flow fields. To improve visualization performance, a visual simulation and analysis method for the 3D flow field of mesoscale cloud clusters based on a quadtree level-of-detail (LOD) algorithm is proposed. The 3D relative flow field of mesoscale cloud clusters is analysed to derive the motion rules of the flow field, and flow-field features are extracted from a two-dimensional histogram. A particle source is then combined with a quadtree structure to form a quadtree particle system; the extracted features are mapped into this system, and its particles are used to render the 3D flow field, thereby realising the visual simulation of the mesoscale cloud cluster's 3D flow field. Experimental results of comparative tests on rendering frame rate, simulation speed and simulation quality show that the proposed method achieves high clarity, a high degree of visualization, strong practicality and high reliability.

20.
While many methods exist for simulating diffuse light inter-reflections, relatively few of them are adapted to dynamic scenes. Despite approximations made on the formal rendering equation, managing dynamic environments at interactive or real-time frame rates still remains one of the most challenging problems. This paper presents a lighting simulation system based on photon streaming, performed continuously on the central processing unit (CPU). The power corresponding to each photon impact is accumulated onto predefined points, called virtual light accumulators (VLA). VLA are used during the rendering phase as virtual light sources. We also introduce a priority management system that automatically adapts to abrupt changes during lighting simulation (for instance due to visibility changes or fast object motion). Our system naturally benefits from multi-core architectures. The rendering process is performed in real time using the graphics processing unit (GPU), independently from the lighting simulation process. As shown in the results, our method provides high frame rates for dynamic scenes, with moving viewpoint, objects and light sources.
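A minimal sketch of the accumulation step under assumed data structures: each photon impact deposits its power on the nearest predefined virtual light accumulator, which later acts as a virtual light source during GPU rendering. The nearest-VLA search is brute force here; the paper's priority management and streaming are not shown.

```cpp
// Sketch: accumulating streamed photon power onto virtual light accumulators (VLA).
#include <limits>
#include <vector>

struct Vec3 { float x, y, z; };
static float dist2(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx*dx + dy*dy + dz*dz;
}

struct VLA          { Vec3 position; Vec3 accumulatedPower{0, 0, 0}; };
struct PhotonImpact { Vec3 position; Vec3 power; };

void accumulate(std::vector<VLA>& vlas, const std::vector<PhotonImpact>& impacts)
{
    for (const PhotonImpact& ph : impacts) {
        // Find the VLA closest to the photon impact (a spatial structure would be used in practice).
        size_t best = 0;
        float bestD = std::numeric_limits<float>::max();
        for (size_t i = 0; i < vlas.size(); ++i) {
            float d = dist2(vlas[i].position, ph.position);
            if (d < bestD) { bestD = d; best = i; }
        }
        // Deposit the photon's power on that accumulator.
        vlas[best].accumulatedPower.x += ph.power.x;
        vlas[best].accumulatedPower.y += ph.power.y;
        vlas[best].accumulatedPower.z += ph.power.z;
    }
}
```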
