Similar Literature
20 similar documents retrieved (search time: 46 ms)
1.
We propose an approach to create plausible free-viewpoint relighting video using a multi-view camera array under general illumination. Given a multi-view video dataset recorded with a set of industrial cameras under general, uncontrolled, and unknown illumination, we first reconstruct a 3D model of the captured target using an existing multi-view stereo approach. From this coarse geometry reconstruction, we estimate the spatially varying surface reflectance in the spherical harmonics domain, enforcing spatial and temporal coherence. With the estimated geometry and reflectance, the 3D target is relit under novel illumination given by an environment map of the target environment. The relit performance is enhanced using a flow- and quotient-based transfer strategy to achieve detailed and plausible performance relighting. Finally, the free-viewpoint video is generated using a view-dependent rendering strategy. Experimental results on various datasets show that our approach enables plausible free-viewpoint relighting and opens up a path towards relightable free-viewpoint video using less complex acquisition setups.
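The spherical-harmonics relighting step described in this abstract can be illustrated with a short sketch. The function names, constants layout, and the random lighting coefficients below are illustrative assumptions, not the authors' code; only the standard order-2 SH diffuse relighting formula is shown.

```python
import numpy as np

def sh_basis(n):
    """Evaluate the 9 real spherical harmonic basis functions at unit normal n = (x, y, z)."""
    x, y, z = n
    return np.array([
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ])

# Cosine-lobe convolution factors per SH band (Ramamoorthi & Hanrahan 2001).
A = np.array([np.pi] + [2.0 * np.pi / 3.0] * 3 + [np.pi / 4.0] * 5)

def relight(albedo, normal, L_env):
    """albedo: (3,) RGB reflectance, normal: (3,) unit vector,
    L_env: (9, 3) SH coefficients of the novel environment lighting.
    Irradiance E(n) = sum_lm A_l * L_lm * Y_lm(n); relit color = albedo * E(n)."""
    E = (A[:, None] * L_env * sh_basis(normal)[:, None]).sum(axis=0)
    return albedo * np.maximum(E, 0.0)

# Toy usage with a made-up lighting environment.
L_env = np.random.rand(9, 3) * 0.3
print(relight(np.array([0.6, 0.5, 0.4]), np.array([0.0, 0.0, 1.0]), L_env))
```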

2.
Image-based lighting has allowed the creation of photo-realistic computer-generated content. However, it requires the accurate capture of the illumination conditions, a task neither easy nor intuitive, especially to the average digital photography enthusiast. This paper presents an approach to directly estimate an HDR light probe from a single LDR photograph, shot outdoors with a consumer camera, without specialized calibration targets or equipment. Our insight is to use a person's face as an outdoor light probe. To estimate HDR light probes from LDR faces, we use an inverse rendering approach which employs data-driven priors to guide the estimation of realistic, HDR lighting. We build compact, realistic representations of outdoor lighting both parametrically and in a data-driven way, by training a deep convolutional autoencoder on a large dataset of HDR sky environment maps. Our approach can recover high-frequency, extremely high dynamic range lighting environments. For quantitative evaluation of lighting estimation accuracy and relighting accuracy, we also contribute a new database of face photographs with corresponding HDR light probes. We show that relighting objects with HDR light probes estimated by our method yields realistic results in a wide variety of settings.

3.
Existing techniques for fast, high-quality rendering of translucent materials often fix BSSRDF parameters at precomputation time. We present a novel method for accurate rendering and relighting of translucent materials that also enables real-time editing and manipulation of homogeneous diffuse BSSRDFs. We first apply PCA to diffuse multiple scattering to derive a compact basis set, consisting of only twelve 1D functions. We discovered that this small basis set is accurate enough to approximate a general diffuse scattering profile. For each basis, we then precompute light transport data representing the translucent transfer from a set of local illumination samples to each rendered vertex. This local transfer model allows our system to integrate a variety of lighting models in a single framework, including environment lighting, local area lights, and point lights. To reduce the PRT data size, we compress both the illumination and spatial dimensions using efficient nonlinear wavelets. To edit material properties in real time, a user-defined diffuse BSSRDF is dynamically projected onto our precomputed basis set, and is then multiplied with the translucent transfer information on the fly. Using our system, we demonstrate realistic, real-time translucent material editing and relighting effects under a variety of complex, dynamic lighting scenarios.
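A minimal sketch of the projection step this abstract describes: a user-edited 1D scattering profile is projected onto a small PCA basis, and the resulting weights combine per-basis precomputed transfer vectors. The basis, transfer data, and dimensions below are made-up placeholders, not the paper's data.

```python
import numpy as np

# Assume: radial scattering profiles are sampled at R distances, and a PCA basis
# of K = 12 components was fit offline to many such profiles (placeholders here).
R, K, V = 64, 12, 1000                            # profile samples, basis size, vertex count
mean_profile = np.random.rand(R)                  # PCA mean profile (placeholder)
basis = np.linalg.qr(np.random.randn(R, K))[0]    # orthonormal basis (placeholder)
transfer = np.random.rand(K, V)                   # per-basis precomputed translucent transfer

def edit_and_relight(user_profile):
    """Project a user-edited diffuse scattering profile onto the PCA basis,
    then blend the per-basis precomputed transfer to get per-vertex radiance."""
    weights = basis.T @ (user_profile - mean_profile)   # (K,) projection coefficients
    return weights @ transfer                            # (V,) outgoing radiance per vertex

radiance = edit_and_relight(np.random.rand(R))
print(radiance.shape)   # (1000,)
```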

4.
Precomputed Radiance Transfer (PRT) remains an attractive solution for real-time rendering of complex light transport effects such as glossy global illumination. After precomputation, we can relight the scene with new environment maps while changing viewpoint in real time. However, practical PRT methods are usually limited to low-frequency spherical harmonic lighting. All-frequency techniques using wavelets are promising but have so far had little practical impact. The curse of dimensionality and much higher data requirements have typically limited them to relighting with fixed view or only direct lighting with triple product integrals. In this paper, we demonstrate a hybrid neural-wavelet PRT solution to high-frequency indirect illumination, including glossy reflection, for relighting with changing view. Specifically, we seek to represent the light transport function in the Haar wavelet basis. For global illumination, we learn the wavelet transport using a small multi-layer perceptron (MLP) applied to a feature field as a function of spatial location and wavelet index, with reflected direction and material parameters being other MLP inputs. We optimize/learn the feature field (compactly represented by a tensor decomposition) and MLP parameters from multiple images of the scene under different lighting and viewing conditions. We demonstrate real-time (512 x 512 at 24 FPS, 800 x 600 at 13 FPS) precomputed rendering of challenging scenes involving view-dependent reflections and even caustics.
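The relighting step in such a neural-wavelet PRT formulation reduces to a dot product between wavelet coefficients of the environment map and transport coefficients predicted per wavelet index. The toy random-weight MLP and the tiny 8x8 "environment map" below are placeholder assumptions that show only the structure, not the paper's trained model.

```python
import numpy as np

def haar2d(img):
    """Orthonormal nonstandard 2D Haar transform (side must be a power of two)."""
    out = img.astype(float).copy()
    n = out.shape[0]
    while n > 1:
        for axis in (0, 1):
            a = np.take(out[:n, :n], list(range(0, n, 2)), axis=axis)
            b = np.take(out[:n, :n], list(range(1, n, 2)), axis=axis)
            out[:n, :n] = np.concatenate([(a + b) / np.sqrt(2), (a - b) / np.sqrt(2)], axis=axis)
        n //= 2
    return out

# Toy stand-in for the learned transport: a random two-layer MLP mapping
# (position, reflected direction, wavelet index encoding) -> transport coefficient.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(32, 8)), np.zeros(32)
W2, b2 = rng.normal(size=(1, 32)), np.zeros(1)

def transport(pos, refl_dir, wavelet_idx, n_wavelets):
    feat = np.concatenate([pos, refl_dir, [wavelet_idx / n_wavelets, 1.0]])
    return float((W2 @ np.maximum(W1 @ feat, 0.0) + b2)[0])   # ReLU MLP, scalar output

def relight(pos, refl_dir, env_map, keep=16):
    """Radiance = sum over the largest wavelet coefficients of L_j * T_j(pos, dir)."""
    coeffs = haar2d(env_map).ravel()
    top = np.argsort(-np.abs(coeffs))[:keep]                   # all-frequency sparsity
    return sum(coeffs[j] * transport(pos, refl_dir, j, coeffs.size) for j in top)

env = rng.random((8, 8))                                        # placeholder environment map
print(relight(np.array([0.1, 0.2, 0.3]), np.array([0.0, 0.0, 1.0]), env))
```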

5.
Although realistic textile rendering has been widely used in virtual garment and try-on systems, a robust method to simulate textiles with a realistic appearance and high fidelity is yet to be established. We propose a novel hybrid geometric- and image-based rendering (GIBR) method to achieve photo-realistic representation of textile products. The image-based technique, with its radiance synthesis algorithm, enables us to recover the reflectance properties of the textile in an environment photo and thus render the appearance of the textile material. The geometry-based technique, with its traditional illumination model that assigns illumination parameters extracted from the original scene (such as radiance and chroma dispatch), makes it possible to interactively manipulate 3D virtual objects in the “real” environment. Our realistic textile rendering method has advantages over traditional ones in its ease of implementation and its wide range of applications.

6.
Systems for the creation of photorealistic models using range scans and digital photographs are becoming increasingly popular in a wide range of fields, from reverse engineering to cultural heritage preservation. These systems employ a range finder to acquire the geometry information and a digital camera to measure color detail. But bringing together a set of range scans and color images to produce an accurate and usable model is still an area of research with many unsolved problems. In this paper we address the problem of how to build illumination coherent integrated texture maps from images that were taken under different illumination conditions. To achieve this we present two different solutions. The first one is to align all the images to the same illumination, for which we have developed a technique that computes a relighting operator over the area of overlap of a pair of images that we then use to relight the entire image. Our proposed method can handle images with shadows and can effectively remove the shadows from the image, if required. The second technique uses the ratio of two images to factor out the diffuse reflectance of an image from its illumination. We do this without any light measuring device. By computing the actual reflectance we remove from the images any effects of the illumination, allowing us to create new renderings under novel illumination conditions.
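A tiny sketch of the ratio idea in this abstract, under the usual Lambertian assumption I = albedo * shading: dividing two registered images of the same surface cancels the diffuse reflectance and leaves only an illumination ratio, which can then transfer one image's lighting onto another. The array shapes and epsilon guard below are assumptions for illustration only.

```python
import numpy as np

EPS = 1e-6  # guard against division by zero in dark pixels

def illumination_ratio(img_a, img_b):
    """For registered Lambertian images I = albedo * shading, the albedo cancels:
    img_a / img_b ~= shading_a / shading_b, independent of surface reflectance."""
    return img_a / np.maximum(img_b, EPS)

def relight_by_ratio(source_img, ratio):
    """Transfer the lighting change encoded by `ratio` onto another image of the scene."""
    return np.clip(source_img * ratio, 0.0, 1.0)

# Toy example: one synthetic albedo lit by two different shading fields.
albedo = 0.1 + 0.9 * np.random.rand(4, 4)
shade_a, shade_b = np.random.rand(4, 4), 0.1 + 0.9 * np.random.rand(4, 4)
img_a, img_b = albedo * shade_a, albedo * shade_b
ratio = illumination_ratio(img_a, img_b)          # equals shade_a / shade_b
print(np.allclose(relight_by_ratio(img_b, ratio), img_a))   # True: albedo factored out
```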

7.
We present two practical methods for measuring spectral skin reflectance suited for live subjects, and drive a spectral BSSRDF model of appropriate complexity to match skin appearance in photographs, including human faces. Our primary measurement method illuminates a subject with two complementary uniform spectral illumination conditions using a multispectral LED sphere to estimate spatially varying chromophore parameters, including melanin and hemoglobin concentration, melanin blend-type fraction, and epidermal hemoglobin fraction. We demonstrate that our proposed complementary measurements enable higher-quality estimates of chromophores than those obtained using standard broadband illumination, while being suitable for integration with multiview facial capture using regular color cameras. Besides novel optimal measurements under controlled illumination, we also demonstrate how to adapt practical skin patch measurements made with a hand-held dermatological device, a Miravex Antera 3D camera, for skin appearance reconstruction and rendering. Furthermore, we introduce a neural-network approach for parameter estimation given the measurements, which is significantly faster than a lookup table search and avoids parameter quantization. We demonstrate high-quality matches of skin appearance with photographs for a variety of skin types using our proposed practical measurement procedures, including photorealistic spectral reproduction and renderings of facial appearance.

8.
Interactive global illumination for fully deformable scenes with dynamic relighting is currently a very elusive goal in the area of realistic rendering. In this work we propose a system that is based on explicit visibility calculations and which is highly efficient and scalable. The rendering equation defines the light exchange between surfaces, which we approximate by subsampling. By utilizing the power of modern parallel GPUs using the CUDA framework we achieve interactive frame rates. Since we update the global illumination continuously in an asynchronous fashion, we maintain interactivity at all times for moderately complex scenes. We show that we can achieve higher frame rates for scenes with moving light sources, diffuse indirect illumination and dynamic geometry than other current methods, while maintaining a high image quality.

9.
Image-based techniques have become very popular over the past couple of years. Ranging from modeling to rendering and lighting, the use of images as direct input for graphics algorithms has become as important as processing polygons or other forms of data. This talk will focus on some of the challenges posed by image-based relighting. Starting from a set of photographs of an object under various illumination conditions, image-based relighting computes novel renderings of the object. The inverse problem, stated as "What is the required lighting configuration to reach a desired illumination on the object?", will also be discussed, as well as some thoughts on how results from computer vision can be used to accelerate the process.

10.
We propose a novel rendering method which supports interactive BRDF editing as well as relighting on a 3D scene. For interactive BRDF editing, we linearize an analytic BRDF model with basis BRDFs obtained from a principal component analysis. For each basis BRDF, the radiance transfer is precomputed and stored in vector form. At rendering time, the illumination of a point is computed by multiplying the radiance transfer vectors of the basis BRDFs by the incoming radiance from gather samples and then linearly combining the results weighted by user-controlled parameters. To improve the level of accuracy, a set of sub-area samples associated with a gather sample refines the glossy reflection of the geometric details without increasing the precomputation time. We demonstrate this approach with a number of examples to verify the real-time performance of relighting and BRDF editing on 3D scenes with complex lighting and geometry.
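The runtime combination step this abstract describes is a weighted sum of precomputed per-basis transfer vectors dotted with incoming radiance. The sketch below uses random placeholder transfer data, weights, and sizes to show that structure; it is not the authors' implementation.

```python
import numpy as np

K, S, V = 8, 128, 500    # basis BRDFs, gather samples, vertices (placeholder sizes)
rng = np.random.default_rng(1)
transfer = rng.random((K, V, S))     # precomputed radiance transfer per basis BRDF

def shade(weights, incoming):
    """weights: (K,) user-controlled basis-BRDF weights from projecting the edited BRDF.
    incoming: (S,) radiance arriving from the gather samples.
    Returns per-vertex outgoing radiance: sum_k w_k * (T_k @ L)."""
    per_basis = transfer @ incoming          # (K, V): each basis transfer dotted with L
    return weights @ per_basis               # (V,): linear combination, editable in real time

radiance = shade(rng.random(K), rng.random(S))
print(radiance.shape)    # (500,)
```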

11.
We present a method for uniformly sampling points inside the projection of a spherical cap onto a plane through the sphere's center. To achieve this, we devise two novel area-preserving mappings from the unit square to this projection, which is often an ellipse but generally has a more complex shape. Our maps allow for low-variance rendering of direct illumination from finite and infinite (e.g. sun-like) spherical light sources by sampling their projected solid angle in a stratified manner. We discuss the practical implementation of our maps and show significant quality improvement over traditional uniform spherical cap sampling in a production renderer.
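For context, the traditional uniform spherical-cap sampling that this paper improves upon (by instead sampling the cap's planar projection) looks roughly like the following. This is the baseline technique, not the authors' area-preserving maps, and the frame construction is a common convention assumed here.

```python
import numpy as np

def sample_spherical_cap(u1, u2, cos_theta_max, axis):
    """Traditional uniform (solid-angle) sampling of a spherical cap of half-angle
    theta_max around `axis`, from two stratified random numbers in [0, 1)."""
    cos_theta = 1.0 - u1 * (1.0 - cos_theta_max)          # uniform in cos(theta)
    sin_theta = np.sqrt(max(0.0, 1.0 - cos_theta * cos_theta))
    phi = 2.0 * np.pi * u2
    local = np.array([sin_theta * np.cos(phi), sin_theta * np.sin(phi), cos_theta])
    # Build an orthonormal frame with `axis` as the z-direction.
    w = axis / np.linalg.norm(axis)
    helper = np.array([1.0, 0.0, 0.0]) if abs(w[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(helper, w); u /= np.linalg.norm(u)
    v = np.cross(w, u)
    return local[0] * u + local[1] * v + local[2] * w

d = sample_spherical_cap(0.3, 0.7, np.cos(np.radians(20.0)), np.array([0.0, 0.0, 1.0]))
print(d, np.linalg.norm(d))   # a unit direction inside the 20-degree cap
```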

12.
Capturing the appearance of objects under different lighting conditions is useful in texture acquisition, bidirectional reflectance distribution function (BRDF) measurement, and image-based data acquisition. With the acquired data, we can render virtual objects realistically. During the data capture, we must record both the radiance and the light vector. Measuring the light vector is not necessarily easy. We propose a capturing system that estimates the light vector by applying a vision-based technique. It lets the user freely position a handheld light source during capture. The system estimates light-source orientation in real time through a webcam mounted on the light source. Software running on an ordinary PC analyzes images from the webcam to recognize the light source direction (the light vector). Our goal is to design a low-cost, portable, and adaptable system. Vision-based approaches offer such characteristics. We use a pose-estimation system to acquire image-based data for relighting.

13.
We present an image-based approach to relighting photographs of tree canopies. Our goal is to minimize capture overhead; thus the only input required is a set of photographs of the tree taken at a single time of day, while allowing relighting at any other time. We first analyze lighting in a tree canopy both theoretically and using simulations. From this analysis, we observe that tree canopy lighting is similar to volumetric illumination. We assume a single-scattering volumetric lighting model for tree canopies, and diffuse leaf reflectance; we validate our assumptions with synthetic renderings. We create a volumetric representation of the tree from 10-12 images taken at a single time of day and use a single-scattering participating media lighting model. An analytical sun and sky illumination model provides consistent representation of lighting for the captured input and unknown target times. We relight the input image by applying a ratio of the target and input time lighting representations. We compute this representation efficiently by simultaneously coding transmittance from the sky and to the eye in spherical harmonics. We validate our method by relighting images of synthetic trees and comparing to path-traced solutions. We also present results for photographs, validating with time-lapse ground truth sequences.
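The relighting rule in this abstract is a per-pixel ratio of two lighting representations. One minimal way to express it, assuming per-pixel spherical-harmonic vectors for sky transmittance and SH coefficients for the input-time and target-time sun-and-sky models (all placeholders below), is:

```python
import numpy as np

H, W, C = 32, 32, 9          # image size and number of SH coefficients (placeholders)
rng = np.random.default_rng(2)
transmittance_sh = rng.random((H, W, C))   # per-pixel SH-coded sky transmittance
L_input = rng.random(C)                     # SH coefficients of lighting at capture time
L_target = rng.random(C)                    # SH coefficients of lighting at target time
photo = rng.random((H, W))                  # input photograph (single channel for brevity)

def relight(photo, transmittance_sh, L_input, L_target, eps=1e-6):
    """Relight by the ratio of target-time to input-time shading, each computed as a
    dot product between per-pixel SH transmittance and the SH lighting coefficients."""
    shade_in = np.maximum(transmittance_sh @ L_input, eps)
    shade_out = transmittance_sh @ L_target
    return photo * (shade_out / shade_in)

print(relight(photo, transmittance_sh, L_input, L_target).shape)   # (32, 32)
```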

14.
Rendering a complex surface accurately and without aliasing requires the evaluation of an integral for each pixel, namely, a weighted average of the outgoing radiance over the pixel footprint on the surface. The outgoing radiance is itself given by a local illumination equation as a function of the incident radiance and of the surface properties. Computing all this numerically during rendering can be extremely costly. For efficiency, especially for real-time rendering, it is necessary to use precomputations. When the fine-scale surface geometry, reflectance, and illumination properties are specified with maps on a coarse mesh (such as color maps, normal maps, horizon maps, or shadow maps), a frequently used simple idea is to prefilter each map linearly and separately. The averaged outgoing radiance, i.e., the average of the values given by the local illumination equation, is then estimated by applying this equation to the averaged surface parameters. But this is not accurate, because the equation is nonlinear due to self-occlusions, self-shadowing, nonlinear reflectance functions, etc. Some methods use more complex prefiltering algorithms to cope with these nonlinear effects. This paper is a survey of these methods. We start with a general presentation of the problem of prefiltering complex surfaces. We then present and classify the existing methods according to the approximations they make to tackle this difficult problem. Finally, an analysis of these methods allows us to highlight some generic tools to prefilter maps used in nonlinear functions, and to identify open issues to address the general problem.
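A small numerical illustration of the nonlinearity this survey is about: shading with a linearly averaged normal is not the same as averaging the shaded values, even for simple clamped Lambertian shading. The values below are arbitrary.

```python
import numpy as np

def shade(normal, light_dir):
    """Clamped Lambertian shading: a nonlinear function of the normal."""
    n = normal / np.linalg.norm(normal)
    return max(0.0, float(n @ light_dir))

light = np.array([0.0, 0.0, 1.0])
# Fine-scale normals inside one coarse texel of a normal map (placeholder values).
normals = np.array([[0.6, 0.0, 0.8], [-0.6, 0.0, 0.8], [0.0, 0.8, 0.6], [0.0, -0.8, 0.6]])

correct = np.mean([shade(n, light) for n in normals])   # average of shaded values: 0.7
naive = shade(normals.mean(axis=0), light)              # shade the averaged normal: 1.0

print(correct, naive)   # the two disagree, which is why nonlinear prefiltering is needed
```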

15.
Spectral reflectance is an intrinsic characteristic of objects that is independent of the illumination and of the imaging sensors used. This direct representation of objects is useful for various computer vision tasks, such as color constancy and material discrimination. In this work, we present a novel system for spectral reflectance recovery with high temporal resolution by exploiting the unique color-forming mechanism of digital light processing (DLP) projectors. DLP projectors use color wheels, which are composed of a number of color segments and rotate quickly to produce the desired colors. Making effective use of this mechanism, we show that a DLP projector can be used as a light source with spectrally distinct illuminations when the appearance of a scene under the projector's irradiation is captured with a high-speed camera. Based on the measurements, the spectral reflectance of scene points can be recovered using a linear approximation of the surface reflectance. Our imaging system is built from off-the-shelf devices, and is capable of taking multi-spectral measurements as fast as 100 Hz. We carefully evaluated the accuracy of our system and demonstrated its effectiveness by spectral relighting of static as well as dynamic scenes containing different objects.
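The recovery step, "a linear approximation of the surface reflectance", typically means expressing the unknown reflectance in a low-dimensional basis and solving a small per-pixel least-squares problem. The basis, illuminant spectra, and camera sensitivity below are random placeholders, not the paper's calibration data.

```python
import numpy as np

L_BINS, K, M = 31, 6, 8        # wavelength samples, basis functions, illumination conditions
rng = np.random.default_rng(3)
basis = rng.random((L_BINS, K))         # reflectance basis b_k(lambda) (placeholder)
illums = rng.random((M, L_BINS))        # spectra of the M distinct projector illuminations
sensor = rng.random(L_BINS)             # camera spectral sensitivity (placeholder)

# Forward model: measurement_i = sum over lambda of sensor * illums_i * reflectance.
A = (illums * sensor) @ basis           # (M, K) maps basis coefficients to measurements

def recover_reflectance(measurements):
    """Least-squares estimate of per-pixel reflectance from M brightness measurements."""
    coeffs, *_ = np.linalg.lstsq(A, measurements, rcond=None)
    return basis @ coeffs                # reconstructed spectrum, (L_BINS,)

true_r = basis @ rng.random(K)          # synthetic ground-truth reflectance in the basis
meas = (illums * sensor) @ true_r       # simulated camera measurements
print(np.allclose(recover_reflectance(meas), true_r))   # True in this noise-free toy case
```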

16.
Many high-level image processing tasks require an estimate of the positions, directions and relative intensities of the light sources that illuminated the depicted scene. In image-based rendering, augmented reality and computer vision, such tasks include matching image contents based on illumination, inserting rendered synthetic objects into a natural image, intrinsic images, shape from shading and image relighting. Yet, accurate and robust illumination estimation, particularly from a single image, is a highly ill-posed problem. In this paper, we present a new method to estimate the illumination in a single image as a combination of achromatic lights with their 3D directions and relative intensities. In contrast to previous methods, we base our azimuth angle estimation on curve fitting and recursive refinement of the number of light sources. Similarly, we present a novel surface normal approximation using an osculating arc for the estimation of zenith angles. By means of a new data set of ground-truth data and images, we demonstrate that our approach produces more robust and accurate results, and show its versatility through novel applications such as image compositing and analysis.

17.
Image Synthesis of Objects under Varying Illumination
徐丹, 王平安. 《软件学报》 (Journal of Software), 2002, 13(4): 501-509
Illumination is a very important factor in realistic image synthesis and many image-based applications. This paper proposes a purely image-based method to capture the effect of illumination changes when rendering object images. Rather than directly estimating the parameters of an object reflectance model or fitting a BRDF, the proposed method uses singular value decomposition (SVD) to fit the set of all images of a Lambertian object under varying illumination and geometric orientation. An analytic expression of the light direction can be derived from the sample images, the basis images, and image sets of objects of a known class, and the image of the object under a novel light direction can be rendered efficiently as an appropriate linear combination of the basis images. In addition, linear interpolation of the SVD coefficients can generate continuous morphs that reflect changes in the object's geometric orientation and lighting.
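A hypothetical sketch of the SVD idea this abstract describes: stack images of a Lambertian object under varying lighting as columns, take an SVD to obtain basis images, and synthesize novel-lighting images as linear combinations of those basis images. The synthetic albedo, normals, and light directions below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)
H, W, N = 16, 16, 50
albedo = rng.random(H * W)
normals = rng.normal(size=(H * W, 3)); normals /= np.linalg.norm(normals, axis=1, keepdims=True)
lights = rng.normal(size=(N, 3)); lights /= np.linalg.norm(lights, axis=1, keepdims=True)

# Sample images: Lambertian shading albedo * max(0, n . l), one column per lighting.
images = albedo[:, None] * np.maximum(normals @ lights.T, 0.0)    # (H*W, N)

U, S, Vt = np.linalg.svd(images, full_matrices=False)
K = 3                                                              # rank of the basis
basis = U[:, :K]                                                   # basis images

def relight(coeffs):
    """Render an image from K basis-image coefficients (e.g. interpolated or
    regressed from a desired light direction)."""
    return basis @ coeffs

# Example: approximate one captured image from its projection onto the basis.
coeffs = basis.T @ images[:, 0]
print(np.linalg.norm(relight(coeffs) - images[:, 0]) / np.linalg.norm(images[:, 0]))
```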

18.
郑作勇, 马利庄, 曾洲. 《软件学报》 (Journal of Software), 2008, 19(11): 3083-3090
This paper proposes a method for extracting textures from real objects. A reference sphere with complex texture is used as the sampled object; the BRDF (bidirectional reflectance distribution function) model parameters of its constituent materials and the proportion of each material at every point are computed, forming a material weight map. After this map is texture-mapped onto a 3D object, it is rendered together with the BRDF model parameters, yielding a texture suitable for relighting. The rendered object exhibits natural variations of light and shadow according to its own orientation and the brightness and direction of the light source, achieving a fairly realistic appearance.
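A minimal sketch of the rendering idea in this abstract: each surface point blends a few per-material BRDFs according to a material weight map before shading. The simple diffuse-only "BRDF parameters" and random weight map below are placeholders, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(5)
H, W, M = 8, 8, 3                       # texture size and number of materials
weights = rng.random((H, W, M))
weights /= weights.sum(axis=2, keepdims=True)   # material weight map (rows sum to 1)
diffuse = rng.random((M, 3))            # per-material diffuse RGB (placeholder parameters)

def shade_texel(i, j, n, l, light_rgb):
    """Blend the per-material diffuse parameters by the weight map, then apply
    Lambertian shading for light direction l and surface normal n."""
    blended_albedo = weights[i, j] @ diffuse        # (3,) weighted mix of material colors
    return blended_albedo * light_rgb * max(0.0, float(n @ l))

print(shade_texel(2, 3, np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.6, 0.8]), np.ones(3)))
```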

19.
Helmholtz Stereopsis is a powerful technique for reconstruction of scenes with arbitrary reflectance properties. However, previous formulations have been limited to static objects due to the requirement to sequentially capture reciprocal image pairs (i.e. two images with the camera and light source positions mutually interchanged). In this paper, we propose Colour Helmholtz Stereopsis, a novel framework for Helmholtz Stereopsis based on wavelength multiplexing. To address the new set of challenges introduced by multispectral data acquisition, the proposed Colour Helmholtz Stereopsis pipeline uniquely combines a tailored photometric calibration for multiple camera/light source pairs, a novel procedure for spatio-temporal surface chromaticity calibration, and a state-of-the-art Bayesian formulation necessary for accurate reconstruction from a minimal number of reciprocal pairs. In this framework, reflectance is spatially unconstrained both in terms of its chromaticity and the directional component dependent on the illumination incidence and viewing angles. The proposed approach for the first time enables modelling of dynamic scenes with arbitrary unknown and spatially varying reflectance using a practical acquisition set-up consisting of a small number of cameras and light sources. Experimental results demonstrate the accuracy and flexibility of the technique on a variety of static and dynamic scenes with arbitrary unknown BRDF and chromaticity ranging from uniform to arbitrary and spatially varying.

20.
4D Video Textures (4DVT) introduce a novel representation for rendering video-realistic interactive character animation from a database of 4D actor performance captured in a multiple camera studio. 4D performance capture reconstructs dynamic shape and appearance over time but is limited to free-viewpoint video replay of the same motion. Interactive animation from 4D performance capture has so far been limited to surface shape only. 4DVT is the final piece in the puzzle enabling video-realistic interactive animation through two contributions: a layered view-dependent texture map representation which supports efficient storage, transmission and rendering from multiple view video capture; and a rendering approach that combines multiple 4DVT sequences in a parametric motion space, maintaining video quality rendering of dynamic surface appearance whilst allowing high-level interactive control of character motion and viewpoint. 4DVT is demonstrated for multiple characters and evaluated both quantitatively and through a user study which confirms that the visual quality of captured video is maintained. The 4DVT representation achieves >90% reduction in size and halves the rendering cost.
