Similar Documents
20 similar documents found (search time: 31 ms)
1.
Stereo Light Probe
In this paper we present a practical, simple and robust method to acquire the spatially-varying illumination of a real-world scene. The basic idea of the proposed method is to acquire the radiance distribution of the scene using high-dynamic-range images of two reflective balls. The use of two light probes instead of a single one makes it possible to estimate not only the direction and intensity of the light sources, but also their actual position in space. To achieve this robustly, we first rectify the two input spherical images and then, using a region-based stereo matching algorithm, establish correspondences and compute the position of each light. The radiance distribution obtained in this way can be used for augmented reality applications, photo-realistic rendering and accurate estimation of reflectance properties. The accuracy and effectiveness of the method have been tested by measuring the computed light positions and by rendering a synthetic version of a real object in the same scene. A comparison with a standard method that uses a simple spherical lighting environment is also shown.
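A minimal sketch of the triangulation step is given below, assuming each probe yields a unit direction toward a light and that the two probe centres are known; the light position is estimated as the closest point between the two viewing rays. The names and numbers are illustrative, not taken from the paper.

```python
import numpy as np

def triangulate_light(c1, d1, c2, d2):
    """Return the midpoint of the shortest segment between two rays.

    c1, c2 : ray origins (probe centres); d1, d2 : directions toward
    the light as seen from each probe.
    """
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve for t1, t2 minimising |(c1 + t1*d1) - (c2 + t2*d2)|
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = c1 - c2
    denom = a * c - b * b              # zero only for parallel rays
    t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    p1, p2 = c1 + t1 * d1, c2 + t2 * d2
    return 0.5 * (p1 + p2)             # midpoint = light position estimate

# Illustrative probes 20 cm apart, both observing a light near (0.3, 1.2, 0.5)
c1, c2 = np.array([0.0, 0.0, 0.0]), np.array([0.2, 0.0, 0.0])
light = np.array([0.3, 1.2, 0.5])
print(triangulate_light(c1, light - c1, c2, light - c2))
```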

2.
We propose a novel rendering method which supports interactive BRDF editing as well as relighting on a 3D scene. For interactive BRDF editing, we linearize an analytic BRDF model with basis BRDFs obtained from a principal component analysis. For each basis BRDF, the radiance transfer is precomputed and stored in vector form. At rendering time, the illumination of a point is computed by multiplying the radiance transfer vectors of the basis BRDFs by the incoming radiance from gather samples and then linearly combining the results, weighted by user-controlled parameters. To improve accuracy, a set of sub-area samples associated with each gather sample refines the glossy reflection of geometric details without increasing the precomputation time. We demonstrate our approach with a number of examples that verify the real-time performance of relighting and BRDF editing on 3D scenes with complex lighting and geometry.
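The shading step described above reduces to a very small computation once the transfer vectors exist: one dot product per basis BRDF, followed by a user-weighted combination. The sketch below illustrates this; the array shapes, random data and weights are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_basis, n_samples = 4, 128          # basis BRDFs, gather samples

# Precomputed offline: one radiance-transfer vector per basis BRDF for a point
transfer = rng.random((n_basis, n_samples))

def shade(incoming, weights):
    """Radiance = sum_k w_k * <T_k, L_in>, evaluated per frame."""
    per_basis = transfer @ incoming   # (n_basis,) dot products
    return weights @ per_basis        # user-weighted linear combination

incoming = rng.random(n_samples)               # gathered at render time
weights = np.array([0.6, 0.3, 0.1, 0.0])       # edited BRDF coefficients
print(shade(incoming, weights))
```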

3.
This paper presents methods for photo-realistic rendering using strongly spatially variant illumination captured from real scenes. The illumination is captured along arbitrary paths in space using a high dynamic range (HDR) video camera system with position tracking. Light samples are rearranged into 4D incident light fields (ILFs) suitable for direct use as illumination in renderings. Analysis of the captured data allows for estimation of the shape, position, and spatial and angular properties of the light sources in the scene. The estimated light sources can be extracted from the large 4D data set and handled separately to render scenes more efficiently and with higher quality. The ILF lighting can also be edited for detailed artistic control.

4.
Lighting design plays a crucial role in interior illumination, computer cinematography and many other applications. Computer-assisted lighting design aims to find a lighting configuration that best approximates the illumination effect specified by designers. In this paper, we present an automatic approach for lighting design in which both discrete and continuous optimization of the lighting configuration, including the number, intensity, and position of lights, is performed. Our lighting design algorithm consists of two major steps. The first step estimates an initial lighting configuration by light sampling and clustering; the initial light clusters are then recursively merged to form a light hierarchy. The second step optimizes the lighting configuration by alternately selecting a light cut on the light hierarchy to determine the number of representative lights and optimizing the lighting parameters using the simplex method. To speed up the optimization, only the illumination at scene vertices that are important to the rendering is sampled and taken into account. Using the proposed approach, we develop a lighting design system that can compute appropriate lighting configurations to reproduce the illumination effects interactively painted and modified by a designer.
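As a toy illustration of the continuous-optimization step, the sketch below runs a downhill-simplex (Nelder-Mead) search over the position and intensity of a single representative light against a painted target. The diffuse point-light model, the sampled vertices and the target values are stand-ins for the paper's full objective.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
verts = rng.random((50, 3))              # important scene vertices
target = rng.random(50)                  # painted target illumination at the vertices

def illum(params):
    """Diffuse-only point light: intensity / squared distance (toy model)."""
    pos, intensity = params[:3], params[3]
    d2 = np.sum((verts - pos) ** 2, axis=1) + 1e-6
    return intensity / d2

def objective(params):
    return np.sum((illum(params) - target) ** 2)   # L2 error at sampled vertices

x0 = np.array([0.5, 0.5, 2.0, 1.0])                # initial light guess
res = minimize(objective, x0, method="Nelder-Mead")
print("optimised light position/intensity:", res.x)
```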

5.
Noisy volumetric details like clouds, ground, plaster, bark, roughcast, etc. are frequently encountered in nature and make an important contribution to the realism of outdoor scenes. We introduce a new interactive approach that eases the creation of procedural representations of “stochastic” volumetric details by using a single example photograph. Instead of attempting to reconstruct an accurate geometric representation from the photograph, we use a stochastic multi-scale approach that fits the parameters of a multi-layered noise-based 3D deformation model, using a multi-resolution filter-bank error metric. Once computed, visually similar details can be applied to arbitrary objects with a high degree of visual realism, since lighting and parallax effects are naturally taken into account. Our approach is inspired by image-based techniques. In practice, the user supplies a photograph of an object covered by noisy details, and provides a corresponding coarse approximation of the shape of this object as well as an estimated lighting condition (generally a light source direction). Our system then determines the corresponding noise-based representation as well as diffuse, ambient, specular and semi-transparency reflectance parameters. The resulting details are fully procedural and, as such, have the advantage of extreme compactness, while they can be extended indefinitely without repetition in order to cover huge surfaces.

6.
Inappropriate lighting is often responsible for poor-quality video. In most offices and homes, lighting is not designed for video conferencing, which can result in unevenly lit faces, distracting shadows, and unnatural colors. We present a method for relighting faces that reduces the effects of uneven lighting and color. Our setup consists of a compact lighting rig and a camera that is both inexpensive and inconspicuous to the user. We use imperceptible infrared (IR) lights to obtain an illumination basis of the scene. Our algorithm computes an optimally weighted combination of the IR bases to minimize lighting inconsistencies in foreground areas and to reduce the effects of colored monitor light. However, IR relighting alone results in images with an unnatural, ghostly appearance, so a retargeting technique is presented which removes the unnatural IR effects and produces videos with substantially more balanced intensity and color than the original video.
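A minimal sketch of the basis-weighting idea follows: solve for non-negative weights of the IR illumination bases so that their combination approaches an evenly lit foreground. The generic non-negative least-squares solver and the synthetic data are assumptions for illustration, not necessarily the paper's solver.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_pixels, n_lights = 1000, 4

# Column k = foreground intensities captured with IR light k switched on
ir_basis = rng.random((n_pixels, n_lights))

# Goal: evenly lit foreground at a chosen reference level
target = np.full(n_pixels, 0.6)

weights, residual = nnls(ir_basis, target)   # w >= 0 minimising |B w - t|
relit = ir_basis @ weights                   # relit (pre-retargeting) foreground
print("weights:", weights, "residual:", residual)
```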

7.
This paper proposes a method for efficiently rendering indirect highlights. Indirect highlights are caused by the primary light source reflecting off two or more glossy surfaces. Accurately simulating such highlights is important to convey the realistic appearance of materials such as chrome and shiny metal. Our method models the glossy BRDF at a surface point as a directional distribution, using a spherical von Mises-Fisher (vMF) distribution. As our main contribution, we merge multiple vMFs into a combined multimodal distribution. This effectively creates a filtered radiance response function, allowing us to efficiently estimate indirect highlights. We demonstrate our method in a near-interactive application for rendering scenes with highly glossy objects. Our results produce realistic reflections under both local and environment lighting.
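One common way to merge vMF lobes is moment matching on the mean resultant vector, with a standard approximation for the merged concentration parameter. The sketch below illustrates that idea with made-up lobes and weights; it is not claimed to be the paper's exact procedure.

```python
import numpy as np

def vmf_mean_resultant(kappa):
    """A(kappa) = coth(kappa) - 1/kappa for the 3-D vMF distribution."""
    return 1.0 / np.tanh(kappa) - 1.0 / kappa

def merge_vmf(mus, kappas, weights):
    """Fit a single vMF to a weighted mixture by matching the first moment."""
    mus = np.asarray(mus, float)
    r = np.sum(weights[:, None] * vmf_mean_resultant(kappas)[:, None] * mus, axis=0)
    r /= np.sum(weights)
    rbar = np.linalg.norm(r)                               # mean resultant length
    mu = r / rbar
    kappa = rbar * (3.0 - rbar ** 2) / (1.0 - rbar ** 2)   # standard approximation
    return mu, kappa

mus = np.array([[0.0, 0.0, 1.0], [0.2, 0.0, 0.98]])
mus /= np.linalg.norm(mus, axis=1, keepdims=True)
mu, kappa = merge_vmf(mus, kappas=np.array([80.0, 60.0]), weights=np.array([0.7, 0.3]))
print(mu, kappa)
```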

8.
We propose a novel approach to simulating the illumination of an augmented outdoor scene based on a legacy photograph. Unlike previous works, which take only surface radiosity or lighting-related prior information as the basis of illumination estimation, our method integrates both. By adopting spherical harmonics, we derive a linear model with only six illumination parameters. The illumination of an outdoor scene is then calculated by solving a linear least-squares problem with color constraints on the sunlight and the skylight. A high-quality environment map is then set up, leading to realistic rendering results. We also explore the problem of shadow casting between real and virtual objects without knowing the geometry of the objects that cast the shadows. An efficient method is proposed to project complex shadows (such as tree shadows) on the ground of the real scene onto the surface of the virtual object with texture mapping. Finally, we present a unified scheme for image composition of a real outdoor scene with virtual objects that ensures their illumination consistency and shadow consistency. Experiments demonstrate the effectiveness and flexibility of our method.
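The six-parameter linear model could, for example, consist of RGB intensities for the sun and for the sky; the sketch below fits such a model by linear least squares from hypothetical per-pixel basis responses. The basis functions, the fabricated data, and where the color constraints would be added are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500                                            # observed scene pixels

# Hypothetical per-pixel responses (e.g. from SH shading of known geometry):
# how strongly each pixel responds to unit sunlight / unit skylight.
sun_basis = rng.random(n)
sky_basis = rng.random(n)
A = np.column_stack([sun_basis, sky_basis])

# Ground-truth sun/sky RGB intensities used only to fabricate observations
true_sun, true_sky = np.array([1.2, 1.1, 0.9]), np.array([0.3, 0.4, 0.6])
observed = np.outer(sun_basis, true_sun) + np.outer(sky_basis, true_sky)

# Solve the 6-parameter model channel by channel; color constraints would
# enter as extra weighted rows of A and observed.
params = np.linalg.lstsq(A, observed, rcond=None)[0]
print("sun RGB:", params[0], "sky RGB:", params[1])
```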

9.
Thanks to an increase in rendering efficiency, indirect illumination has recently begun to be integrated into cinematic lighting design, an application where physical accuracy is less important than careful control of scene appearance. This paper presents a comprehensive, efficient, and intuitive representation for artistic control of indirect illumination. We encode the user's adjustments to indirect lighting as scale and offset coefficients of the transfer operator. We take advantage of the nature of indirect illumination, and of the edits themselves, to efficiently sample and compress them. A major benefit of this sampled representation, compared to encoding adjustments as procedural shaders, is renderer independence. This allowed us to easily implement several tools to produce our final images: an interactive relighting engine to view adjustments, a painting interface to define them, and a final renderer to render high-quality results. We demonstrate edits to scenes with diffuse and glossy surfaces and animation.
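A tiny sketch of how a scale-and-offset edit of the indirect term might be applied at shading time, assuming the direct and unedited indirect contributions are available per pixel; the edit values and array shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
direct = rng.random((4, 4, 3))          # per-pixel direct lighting
indirect = rng.random((4, 4, 3))        # unedited indirect lighting

# Artist edits (in the paper, sampled and compressed over the scene):
scale = np.full((4, 4, 3), 1.5)         # boost bounce light
offset = np.full((4, 4, 3), 0.05)       # add a faint fill

edited = direct + scale * indirect + offset   # final image with edited indirect term
print(edited.shape)
```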

10.
We present a new method, suitable for general-purpose graphics processing units, to render self-shadows on dynamic height fields under dynamic light environments in real time. Visibility for each point in the height field is determined as the exact horizon for a set of azimuthal directions, in time linear in the height field size and the number of directions. The surface is shaded using the horizon information and a high-resolution light environment extracted on-line from a high dynamic range cube map, allowing for detailed extended shadows. The desired accuracy for any geometric content and lighting complexity can be matched by choosing a suitable number of azimuthal directions. Our method is able to represent arbitrary features of both high and low frequency, unifying hard and soft shadowing. We achieve 23 fps on 1024×1024 height fields with 64 azimuthal directions under 256×64 environment lighting on an Nvidia GTX 280 GPU.
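The sketch below shows, by brute force, what is computed for a single azimuthal direction: the horizon elevation angle of every texel, against which a light elevation is tested for hard shadowing. The paper's linear-time sweep is not reproduced here; this only illustrates the quantity, on a small made-up height field.

```python
import numpy as np

def horizon_angles(height, step, max_steps=64, cell=1.0):
    """Per-texel horizon elevation angle along one azimuthal direction.

    height : 2-D height field; step : (dy, dx) integer direction;
    cell   : horizontal spacing of the grid.
    """
    h, w = height.shape
    horizon = np.full((h, w), -np.pi / 2)
    for y in range(h):
        for x in range(w):
            for k in range(1, max_steps):
                yy, xx = y + k * step[0], x + k * step[1]
                if not (0 <= yy < h and 0 <= xx < w):
                    break
                rise = height[yy, xx] - height[y, x]
                run = k * cell * np.hypot(step[0], step[1])
                horizon[y, x] = max(horizon[y, x], np.arctan2(rise, run))
    return horizon

hf = np.abs(np.random.default_rng(5).standard_normal((32, 32))).cumsum(axis=1) * 0.05
hz = horizon_angles(hf, step=(0, 1))            # horizon toward +x
sun_elevation = np.deg2rad(20.0)
lit = sun_elevation > hz                        # hard shadow test per texel
print(lit.mean())
```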

11.
In this paper, we present EnvyDepth, an interface for recovering local illumination from a single HDR environment map. In EnvyDepth, the user quickly draws strokes to mark regions of the environment map that should be grouped together into a single geometric primitive. From these annotated strokes, EnvyDepth uses edit propagation to create a detailed collection of virtual point lights that reproduce both the local and the distant lighting effects of the original scene. Compared to the sole use of distant illumination, the added spatial information better reproduces a variety of local effects such as shadows, highlights and caustics. Without the effort needed to create precise scene reconstructions, EnvyDepth annotations take only tens of seconds to produce plausible lighting without visible artifacts. This holds even for complex scenes, both indoors and outdoors. The generated lighting environments work well in a production pipeline since they are efficient to use and able to produce accurate renderings.
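Once a per-pixel depth has been propagated from the strokes, environment-map texels can be converted into virtual point lights. The sketch below uses a lat-long parameterization and a per-texel solid-angle flux approximation, which are standard choices rather than EnvyDepth's exact formulation; the data are synthetic.

```python
import numpy as np

def envmap_to_vpls(radiance, depth, stride=8):
    """radiance (H, W, 3) and depth (H, W) lat-long maps -> list of VPLs."""
    h, w, _ = radiance.shape
    vpls = []
    for y in range(0, h, stride):
        for x in range(0, w, stride):
            theta = (y + 0.5) / h * np.pi            # polar angle
            phi = (x + 0.5) / w * 2.0 * np.pi        # azimuth
            d = np.array([np.sin(theta) * np.cos(phi),
                          np.cos(theta),
                          np.sin(theta) * np.sin(phi)])
            # Solid angle of a lat-long texel, aggregated over the stride block
            omega = (2 * np.pi / w) * (np.pi / h) * np.sin(theta) * stride * stride
            flux = radiance[y, x] * omega
            vpls.append((depth[y, x] * d, flux))     # (position, power)
    return vpls

rng = np.random.default_rng(6)
vpls = envmap_to_vpls(rng.random((64, 128, 3)), 2.0 + rng.random((64, 128)))
print(len(vpls), vpls[0])
```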

12.
Finding the best makeup for a given human face is an art in its own right. Experienced makeup artists train for years to be skilled enough to propose a best-fit makeup for an individual. In this work we propose a system that automates this task. We acquired the appearance of 56 human faces, both without and with professional makeup. To this end, we use a controlled-light setup, which allows us to capture detailed facial appearance information, such as diffuse reflectance, normals, subsurface scattering, specularity and glossiness. A 3D morphable face model is used to obtain 3D positional information and to register all faces into a common parameterization. We then define makeup as the change of facial appearance and use the acquired database to find a mapping from the space of human facial appearance to makeup. Our main application is to use this mapping to suggest the best-fit makeup for novel faces that are not in the database. Further applications are makeup transfer, automatic rating of makeup, makeup training, and makeup exaggeration. As our makeup representation captures a change in reflectance and scattering, it allows us to synthesize faces with makeup in novel 3D views and under novel lighting with high realism. The effectiveness of our approach is further validated in a user study.

13.
Glare is a consequence of light scattered within the human eye when looking at bright light sources. This effect can be exploited for tone mapping, since adding glare to the depiction of high dynamic range (HDR) imagery on a low dynamic range (LDR) medium can dramatically increase perceived contrast. Even though most, if not all, subjects report perceiving glare as a bright pattern that fluctuates in time, up to now it has only been modeled as a static phenomenon. We argue that the temporal properties of glare are a strong means to increase perceived brightness and to produce realistic and attractive renderings of bright light sources. Based on the anatomy of the human eye, we propose a model that enables real-time simulation of dynamic glare on a GPU. This allows an improved depiction of HDR images on LDR media for interactive applications like games and feature films, or even for adding movement to initially static HDR images. By conducting psychophysical studies, we validate that our method improves perceived brightness and that dynamic glare renderings are often perceived as more attractive, depending on the chosen scene.
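A much-simplified sketch of the rendering idea: extract the bright part of an HDR frame and convolve it with a point-spread function whose fine structure changes over time. The radial-falloff-plus-streak PSF below is a crude stand-in for the anatomically derived one in the paper, and all constants are arbitrary.

```python
import numpy as np
from scipy.signal import fftconvolve

def glare_psf(size, t, streaks=6):
    """Toy time-varying PSF: radial falloff plus slowly rotating streaks."""
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2] + 0.5
    r = np.hypot(x, y) + 1e-3
    ang = np.arctan2(y, x)
    psf = 1.0 / r ** 3                                             # broad scattering
    psf += 0.02 / r ** 2 * np.cos(streaks * (ang + 0.2 * t)) ** 8  # fluctuating streaks
    return psf / psf.sum()

def add_glare(hdr, t, threshold=2.0):
    bright = np.maximum(hdr - threshold, 0.0)        # only bright sources bloom
    glare = fftconvolve(bright, glare_psf(64, t), mode="same")
    return hdr + glare

frame = np.zeros((128, 128))
frame[60:64, 60:64] = 50.0                           # small, very bright light source
out = add_glare(frame, t=0.3)
print(out.max())
```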

14.
We present a statistical method for estimating the Spatially Varying Bidirectional Reflectance Distribution Function (SVBRDF) of an object with complex geometry, starting from video sequences acquired under fixed but general lighting conditions. The aim of this work is to define a method that simplifies the acquisition of the object's surface appearance and allows us to reconstruct an approximate SVBRDF. The final output is suitable to be used with a 3D model of the object to obtain accurate and photo-realistic renderings. The method is composed of three steps: approximation of the environment map of the acquisition scene, using the object itself as a probe; estimation of the diffuse color of the object; and estimation of the specular components of the main materials of the object, using a Phong model. All the steps are based on statistical analysis of the color samples projected from the video sequences onto the surface of the object. Although the method has some limitations, the trade-off between ease of acquisition and the obtained results makes it useful for practical applications.
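One plausible reading of the diffuse-estimation step is that, across many views, the darker color samples at a surface point are dominated by the diffuse term, so a low-percentile statistic approximates the diffuse color. The sketch below illustrates that assumption on synthetic data; the percentile choice and the toy specular spikes are arbitrary, not the paper's exact statistics.

```python
import numpy as np

rng = np.random.default_rng(7)
n_points, n_views = 200, 60

diffuse_true = rng.random((n_points, 3))
# Samples = diffuse color plus sparse, view-dependent specular spikes (toy data)
samples = np.repeat(diffuse_true[:, None, :], n_views, axis=1)
spikes = rng.random((n_points, n_views, 3)) * (rng.random((n_points, n_views, 1)) > 0.8)
samples = samples + 2.0 * spikes

# Robust per-point diffuse estimate: low percentile over the view samples
diffuse_est = np.percentile(samples, 30, axis=1)
print(np.abs(diffuse_est - diffuse_true).mean())
```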

15.
Image calibration requires both linearization of pixel values and scaling so that values in the image correspond to real-world luminances. In this paper we focus on the latter and, rather than relying on camera characterization, we calibrate images by analysing their content and metadata, obviating the need for expensive measuring devices or modeling of lens and camera combinations. Our analysis correlates sky pixel values to the luminances that would be expected based on geographical metadata. Combined with high dynamic range (HDR) imaging, which gives us linear pixel data, our algorithm allows us to find absolute luminance values for each pixel, effectively turning digital cameras into absolute light meters. To validate our algorithm we have collected and annotated a calibrated set of HDR images and compared our estimation with several other approaches, showing that our approach is able to more accurately recover absolute luminance. We discuss various applications and demonstrate the utility of our method in the context of calibrated color appearance reproduction and lighting design.
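A sketch of the scaling step, assuming the HDR pixels are already linear and that an expected sky luminance (in cd/m²) has been derived from the geographic metadata; the sky mask, the luminance weights and the numbers are placeholders.

```python
import numpy as np

rng = np.random.default_rng(8)
hdr = rng.random((100, 100, 3)) * 50.0          # linear but unscaled HDR image
sky_mask = np.zeros((100, 100), dtype=bool)
sky_mask[:30, :] = True                         # detected sky region

# Relative luminance of the sky pixels (Rec. 709 weights on linear RGB)
rel_lum = hdr @ np.array([0.2126, 0.7152, 0.0722])
expected_sky_luminance = 8000.0                 # cd/m^2, from sun position / metadata

scale = expected_sky_luminance / rel_lum[sky_mask].mean()
absolute_hdr = hdr * scale                      # pixel values now in cd/m^2
print("scale factor:", scale)
```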

16.
Creating realistic human movement is a time-consuming and labour-intensive task. The major difficulty is that the user has to edit individual joints while maintaining an overall realistic and collision-free posture. Previous research suggests the use of data-driven inverse kinematics, such that one can focus on the control of a few joints while the system automatically composes a natural posture. However, as a common problem of kinematic synthesis, penetration of body parts is difficult to avoid in complex movements. In this paper, we propose a new data-driven inverse kinematics framework that conserves the topology of the synthesized postures. Our system monitors and regulates topology changes using the Gauss Linking Integral (GLI), such that penetration can be efficiently prevented. As a result, complex motions with tight body movements, as well as those involving interaction with external objects, can be simulated with minimal manual intervention. Experimental results show that, using our system, the user can create high-quality human motion in real time by controlling a few joints with a mouse or a multi-touch screen. The movement generated is both realistic and penetration free. Our system is best applied to interactive motion design in computer animations and games.
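The quantity being monitored can be approximated numerically as below: a midpoint-rule discretization of the Gauss linking integral between two polylines (for example, two limb centre-lines). This is only a sketch of the measure itself, not the paper's implementation, and the test curves are synthetic.

```python
import numpy as np

def gauss_linking_integral(curve_a, curve_b):
    """Midpoint-rule approximation of the Gauss linking integral.

    curve_a, curve_b : (N, 3) and (M, 3) polyline vertex arrays.
    """
    da = np.diff(curve_a, axis=0)              # segment vectors
    db = np.diff(curve_b, axis=0)
    ma = curve_a[:-1] + 0.5 * da               # segment midpoints
    mb = curve_b[:-1] + 0.5 * db
    total = 0.0
    for i in range(len(da)):
        r = ma[i] - mb                         # (M-1, 3) separations
        cross = np.cross(da[i], db)            # (M-1, 3)
        total += np.sum(np.einsum('ij,ij->i', cross, r) / np.linalg.norm(r, axis=1) ** 3)
    return total / (4.0 * np.pi)

t = np.linspace(0, 2 * np.pi, 200)
loop = np.column_stack([np.cos(t), np.sin(t), np.zeros_like(t)])     # closed loop
rod = np.column_stack([np.zeros_like(t), np.zeros_like(t), np.linspace(-5, 5, 200)])
print(gauss_linking_integral(loop, rod))       # magnitude ~1 when the rod threads the loop
```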

17.
Many-light rendering, which converts complex global illumination computations into a simple sum of the illumination from virtual point lights (VPLs), has become increasingly popular for predictive rendering in recent years. A huge number of VPLs is usually required for predictive rendering, at the cost of extensive computation time. While previous methods can achieve significant speedups by clustering VPLs, none of them can estimate the total error due to clustering. This drawback imposes tedious trial-and-error processes on users to obtain rendered images with reliable accuracy. In this paper, we propose an error estimation framework for many-light rendering. Our method transforms VPL clustering into stratified sampling combined with confidence intervals, which enables the user to estimate the error due to clustering without the costly computation required to sum the illumination from all the VPLs. Our estimation framework is capable of handling arbitrary BRDFs and is accelerated by visibility caching, both of which make our method more practical. The experimental results demonstrate that our method can estimate the error much more accurately than the previous clustering method.
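A sketch of the statistical machinery: treat each VPL cluster as a stratum, evaluate only a few lights per stratum, and form a confidence interval for the total illumination from the stratified estimate. The cluster sizes, toy contributions, sample counts and the 95% normal quantile are placeholders, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy setup: 8 clusters (strata) of VPLs with per-light contributions to one pixel
clusters = [rng.random(size) * scale
            for size, scale in zip([300, 200, 150, 120, 100, 80, 30, 20],
                                   [0.1, 0.2, 0.05, 0.3, 0.15, 0.4, 1.0, 2.0])]

estimate, variance = 0.0, 0.0
for contrib in clusters:
    n_total = len(contrib)                     # lights in this stratum
    n_sample = min(8, n_total)                 # lights actually evaluated
    sample = rng.choice(contrib, n_sample, replace=False)
    estimate += n_total * sample.mean()        # stratified total estimate
    # Variance of the stratum total, with finite-population correction
    variance += (n_total ** 2) * sample.var(ddof=1) / n_sample * (1 - n_sample / n_total)

half_width = 1.96 * np.sqrt(variance)          # ~95% confidence interval
exact = sum(c.sum() for c in clusters)
print(f"estimate {estimate:.2f} ± {half_width:.2f}  (exact {exact:.2f})")
```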

18.
In computer cinematography, artists routinely use non-physical lighting models to achieve desired appearances. This paper presents BendyLights, a non-physical lighting model in which light travels nonlinearly along splines, allowing artists to control light direction and shadow position at different points in the scene independently. Since the light deformation is smoothly defined at all world-space positions, the resulting non-physical lighting effects remain spatially consistent, avoiding the frequent incongruences of many non-physical models. BendyLights are controlled simply by reshaping splines, using familiar interfaces, and require very few parameters. BendyLight control points can be keyframed to support animated lighting effects. We demonstrate BendyLights both in a real-time rendering system for editing and in a production renderer for final rendering, where we show that BendyLights can also be used with global illumination.
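A small sketch of the core idea: the light direction used to shade a point is taken from the tangent of a control spline rather than from a straight line to the source. The cubic Bezier and the nearest-parameter search below are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def bezier(p, t):
    """Cubic Bezier point for control points p (4, 3) and scalar parameter t."""
    u = 1.0 - t
    return u**3 * p[0] + 3 * u**2 * t * p[1] + 3 * u * t**2 * p[2] + t**3 * p[3]

def bezier_tangent(p, t):
    u = 1.0 - t
    d = 3 * u**2 * (p[1] - p[0]) + 6 * u * t * (p[2] - p[1]) + 3 * t**2 * (p[3] - p[2])
    return d / np.linalg.norm(d)

def bendy_light_dir(ctrl, x, samples=64):
    """Light direction at shading point x = tangent at the closest spline point."""
    ts = np.linspace(0.0, 1.0, samples)
    pts = np.array([bezier(ctrl, t) for t in ts])
    t_near = ts[np.argmin(np.sum((pts - x) ** 2, axis=1))]
    return bezier_tangent(ctrl, t_near)

ctrl = np.array([[0, 5, 0], [2, 3, 0], [1, 1, 2], [3, 0, 3]], float)  # reshaped beam
print(bendy_light_dir(ctrl, x=np.array([2.5, 0.5, 2.5])))
```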

19.
Since high dynamic range (HDR) displays are not yet widely available, there is still a need to perform a dynamic range reduction of HDR content to reproduce it properly on standard dynamic range (SDR) displays. The most common techniques for performing this reduction are termed tone-mapping operators (TMOs). Although mobile devices are becoming widespread, methods for displaying HDR content on their SDR screens are still very much in their infancy. While several studies have been conducted to evaluate TMOs, few have done so with the goal of testing small screen displays (SSDs), common on mobile devices. This paper presents an evaluation of six state-of-the-art HDR video TMOs. The experiments considered three different levels of ambient luminance, under which 180 participants were asked to rank the TMOs for seven tone-mapped HDR video sequences. A comparison was conducted between tone-mapped HDR video footage shown on an SSD and on a large-screen SDR display, using an HDR display as reference. The results show that there are differences in the performance of the TMOs under different ambient lighting levels, and that TMOs which perform well on traditional large-screen displays also perform well on SSDs at the same luminance level.

20.
In recent years, much work has been devoted to the design of light editing methods such as relighting and light path editing. So far, little work has addressed the target-based manipulation and animation of caustics, for instance blending a caustic toward a differently shaped caustic, a text or an image. The aim of this work is the animation of caustics by blending towards a given target irradiance distribution. This enables an artist to coherently change the appearance and style of caustics, e.g., for marketing applications and visual effects. Generating a smooth animation is nontrivial, as photon density and caustic structure may change significantly. Our method is based on the efficient solution of a discrete assignment problem that incorporates constraints appropriate to make intermediate blends plausibly resemble caustics. The algorithm generates temporally coherent results that are rendered with stochastic progressive photon mapping. We demonstrate our system in a number of scenes and show blends as well as a key frame animation.
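A sketch of the assignment-and-blend idea using a generic minimum-cost matching between current photon positions and target sample positions; `linear_sum_assignment` and the ring-shaped target are stand-ins for the paper's constrained discrete solver and an artist-specified target distribution.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(10)
n = 300

source = rng.normal([0.0, 0.0], 0.3, size=(n, 2))          # current caustic photons
theta = rng.uniform(0, 2 * np.pi, n)                        # target: a ring-shaped caustic
target = np.column_stack([np.cos(theta), np.sin(theta)]) + rng.normal(0, 0.02, (n, 2))

cost = cdist(source, target, metric="sqeuclidean")          # squared-distance cost
rows, cols = linear_sum_assignment(cost)                    # one-to-one matching

def blend(alpha):
    """Photon positions at blend parameter alpha in [0, 1]."""
    return (1.0 - alpha) * source[rows] + alpha * target[cols]

for a in (0.0, 0.5, 1.0):
    print(a, blend(a).mean(axis=0))
```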
