Similar articles
20 similar articles found.
1.
In many remote sensing and machine vision applications, the shape of a specular surface such as water, glass, or polished metal must be determined instantaneously and under natural lighting conditions. Most image analysis techniques, however, assume surface reflectance properties or lighting conditions that are incompatible with these situations. To retrieve the shape of smooth specular surfaces, a technique known as specular surface stereo was developed. The method analyzes multiple images of a surface and finds a surface shape that results in a set of synthetic images that match the observed ones. An image synthesis model is used to predict image irradiance values as a function of the shape and reflectance properties of the surface, camera geometry, and radiance distribution of the illumination. The specular surface stereo technique was tested by processing four numerical simulations: a water surface illuminated by a low- and a high-contrast extended light source, and a mirrored surface illuminated by a low- and a high-contrast extended light source. Under these controlled circumstances, the recovered surface shape showed good agreement with the known input.
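A minimal sketch of the analysis-by-synthesis idea described above, assuming an orthographic camera, a tiny height-field parameterization, and a made-up smooth "sky" radiance function standing in for the extended light source; it is not the authors' implementation, and a single view is used for brevity rather than the multiple views of specular surface stereo.

```python
# Guess a specular height field, synthesize an image by reflecting view rays into an
# environment radiance function, and adjust heights until synthetic and observed agree.
import numpy as np
from scipy.optimize import minimize

H = W = 16                          # tiny height-field grid (assumption)
view = np.array([0.0, 0.0, -1.0])   # orthographic viewing direction (assumption)

def sky_radiance(d):
    """Hypothetical extended source: radiance as a smooth function of ray direction."""
    return 0.5 + 0.5 * d[..., 0]    # brighter toward +x

def synthesize(heights):
    """Predict image irradiance for a perfectly specular height field."""
    z = heights.reshape(H, W)
    dz_dy, dz_dx = np.gradient(z)                 # surface slopes
    n = np.dstack([-dz_dx, -dz_dy, np.ones_like(z)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    # mirror reflection of the viewing direction about the local normal
    r = view - 2.0 * (n @ view)[..., None] * n
    return sky_radiance(r)

def cost(heights, observed):
    return np.sum((synthesize(heights) - observed) ** 2)

# "Observed" image comes from a known synthetic surface; recover from a flat guess.
true_surface = 0.05 * np.sin(np.linspace(0, 2 * np.pi, H))[:, None] * np.ones((H, W))
observed = synthesize(true_surface.ravel())
res = minimize(cost, np.zeros(H * W), args=(observed,), method="L-BFGS-B")
print("residual:", res.fun)
```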

2.
In vision and graphics, advanced object models require not only 3D shape, but also surface detail. While several scanning devices exist to capture the global shape of an object, few methods concentrate on capturing the fine-scale detail. Fine-scale surface geometry (relief texture), such as surface markings, roughness, and imprints, is essential in highly realistic rendering and accurate prediction. We present a novel approach for measuring the relief texture of specular or partially specular surfaces using a specialized imaging device with a concave parabolic mirror to view multiple angles in a single image. Laser scanning typically fails for specular surfaces because of light scattering, but our method is explicitly designed for specular surfaces. Also, the spatial resolution of the measured geometry is significantly higher than standard methods, so very small surface details are captured. Furthermore, spatially varying reflectance is measured simultaneously, i.e., both texture color and texture shape are retrieved.

3.
Several techniques have been developed for recovering the reflectance properties of real surfaces under unknown illumination. In most cases, however, those techniques assume that the light sources are located at infinity, an assumption that cannot safely be applied to, for example, reflectance modeling of indoor environments. In this paper, we propose two methods to estimate the surface reflectance property of an object, as well as the position of a light source, from a single view without the distant-illumination assumption, thus relaxing the conditions required by previous methods. Given a real image and a 3D geometric model of an object with specular reflection as inputs, the first method estimates the light source position by fitting to the Lambertian diffuse component while separating the specular and diffuse components with an iterative relaxation scheme. The second method extends the first by taking as input a specular component image, acquired by analyzing multiple polarization images taken from a single view, thereby removing the constraints on the diffuse reflectance property. This method simultaneously recovers the reflectance properties and the light source positions by optimizing the linearity of a log-transformed Torrance-Sparrow model. By estimating the object's reflectance property and the light source position, we can freely generate synthetic images of the target object under arbitrary lighting conditions, modifying not only the source direction but also the source-surface distance. Experimental results show the accuracy of our estimation framework.
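A sketch of the log-linearity that the second method exploits, assuming a simplified Torrance-Sparrow specular term of the form I_s = (k_s / cos θ_r) · exp(−α² / (2σ²)); taking logs makes the model linear in α², so k_s and σ can be read off a straight-line fit to specular-only samples. The data below are synthetic and the simplified lobe is an assumption, not the paper's exact formulation.

```python
# log(I_s * cos(theta_r)) = log(k_s) - alpha^2 / (2 * sigma^2): a line in alpha^2.
import numpy as np

def fit_torrance_sparrow(I_spec, alpha, theta_r):
    """Recover (k_s, sigma) from specular-only intensities by a line fit in alpha^2."""
    y = np.log(I_spec * np.cos(theta_r))
    slope, intercept = np.polyfit(alpha ** 2, y, 1)
    k_s = np.exp(intercept)
    sigma = np.sqrt(-1.0 / (2.0 * slope))
    return k_s, sigma

# Synthetic check with hypothetical values.
alpha = np.linspace(0.0, 0.4, 50)
theta_r = np.full_like(alpha, 0.3)
I = (2.0 / np.cos(theta_r)) * np.exp(-alpha ** 2 / (2 * 0.1 ** 2))
print(fit_torrance_sparrow(I, alpha, theta_r))   # ~ (2.0, 0.1)
```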

4.
In this paper, we present a new method to modify the appearance of a face image by manipulating the illumination condition when the face geometry and albedo information is unknown. This problem is particularly difficult when only a single image of the subject is available. Recent research demonstrates that the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace using a spherical harmonic representation. Moreover, morphable models are statistical ensembles of facial properties such as shape and texture. In this paper, we integrate spherical harmonics into the morphable model framework by proposing a 3D spherical harmonic basis morphable model (SHBMM). The proposed method can represent a face under arbitrary unknown lighting and pose simply by three low-dimensional vectors, i.e., shape parameters, spherical harmonic basis parameters, and illumination coefficients, which are called the SHBMM parameters. However, when the image is taken under an extreme lighting condition, the approximation error can be large, making it difficult to recover albedo information. To address this problem, we propose a subregion-based framework that uses a Markov random field to model the statistical distribution and spatial coherence of face texture, which makes our approach not only robust to extreme lighting conditions but also insensitive to partial occlusions. The performance of our framework is demonstrated through various experimental results, including improved recognition rates for face recognition under extreme lighting conditions.
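The low-dimensional lighting representation underlying the SHBMM can be illustrated with the standard order-2 real spherical-harmonic basis: for a convex Lambertian surface, image intensity is approximately the albedo times a linear combination of nine basis functions of the surface normal. The constants below are the commonly tabulated real SH values, with any remaining normalization folded into the nine weights, which play the role of the abstract's illumination coefficients; this is an illustrative sketch rather than the paper's code.

```python
import numpy as np

def sh_basis(n):
    """Nine order-<=2 real spherical harmonics evaluated at unit normals n (..., 3)."""
    x, y, z = n[..., 0], n[..., 1], n[..., 2]
    return np.stack([
        0.282095 * np.ones_like(x),                       # Y00
        0.488603 * y, 0.488603 * z, 0.488603 * x,         # Y1-1, Y10, Y11
        1.092548 * x * y, 1.092548 * y * z,               # Y2-2, Y2-1
        0.315392 * (3 * z ** 2 - 1),                      # Y20
        1.092548 * x * z, 0.546274 * (x ** 2 - y ** 2),   # Y21, Y22
    ], axis=-1)

def shade(normals, albedo, light_coeffs):
    """Approximate Lambertian intensity: albedo times the SH-lit irradiance."""
    return albedo * (sh_basis(normals) @ light_coeffs)

# Hypothetical use: the nine coefficients could be estimated from an image by a
# linear least-squares fit once normals and albedo are (approximately) known.
normals = np.array([[0.0, 0.0, 1.0], [0.6, 0.0, 0.8]])
coeffs = np.array([0.9, 0.0, 0.5, 0.1, 0.0, 0.0, 0.05, 0.0, 0.0])
print(shade(normals, albedo=1.0, light_coeffs=coeffs))
```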

5.
Image Synthesis of Objects under Varying Illumination
徐丹  王平安 《软件学报》2002,13(4):501-509
Illumination is a very important factor in realistic rendering and in many image-based applications. We propose a purely image-based method to capture the effect of illumination changes when rendering images of an object. Instead of directly estimating the parameters of an object reflectance model or fitting a BRDF, the method uses singular value decomposition (SVD) to fit the set of all images of a Lambertian object under varying illumination and geometric orientation. An analytic expression for the light direction can be derived from the sample images, the basis images, and an image set of a known object class, and the image of the object under a new light direction can then be rendered efficiently as an appropriate linear combination of the basis images. In addition, linear interpolation of the SVD coefficients can generate continuous morphs that reflect changes in both the object's geometric orientation and the lighting.
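A minimal sketch of the SVD fitting step, assuming a stack of registered images of one object taken under different light directions: the leading left singular vectors serve as basis images, and a new image is rendered as a linear combination of them. The weights here are hypothetical placeholders for the analytically derived light-direction coefficients mentioned in the abstract.

```python
import numpy as np

def basis_images(image_stack, k=3):
    """image_stack: (num_images, H, W) -> (k, H, W) basis images from the SVD."""
    m, h, w = image_stack.shape
    A = image_stack.reshape(m, h * w).T           # columns are vectorized images
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k].T.reshape(k, h, w)

def relight(basis, weights):
    """New image as a linear combination of basis images (weights are hypothetical,
    e.g. derived from a desired light direction as the abstract describes)."""
    return np.tensordot(weights, basis, axes=1)

imgs = np.random.rand(12, 32, 32)                 # stand-in for the sample images
B = basis_images(imgs, k=3)
new_img = relight(B, np.array([0.7, 0.2, 0.1]))
print(new_img.shape)                              # (32, 32)
```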

6.
We propose an image-matching method for the 3D morphable model that jointly exploits a hypersphere-manifold constraint on the face space, gradient-based heuristic global optimization, a spherical-harmonic description of illumination, and a direct hidden-point removal method based on the convex hull of the visible point set. First, the camera parameters and shape parameters are solved by a global optimization algorithm under the hypersphere-manifold constraint on shape. These parameters, together with the direct hidden-point removal method on the convex-hull point set, are then used to determine the correspondence between object points and image points. Finally, based on this correspondence, the illumination parameters and albedo parameters are solved by a global optimization algorithm under the hypersphere-manifold constraint on albedo. Quantitative comparison experiments show that this method recovers all parameters of the 3D morphable model (3DMM) from a single image without resorting to region-based fitting, manually estimated parameter values, hierarchical matching strategies, or complex feature combinations.

7.
Recent face recognition algorithms can achieve high accuracy when the tested face samples are frontal. However, when the face pose changes substantially, the performance of existing methods drops drastically. Pose-robust face recognition is therefore highly desirable, especially when each face class has only one frontal training sample. In this study, we propose a 2D face fitting-assisted 3D face reconstruction algorithm that aims at recognizing faces of different poses when each face class has only one frontal training sample. For each frontal training sample, a 3D face is reconstructed by optimizing the parameters of a 3D morphable model (3DMM). By rotating the reconstructed 3D face to different views, virtual face images under varying pose are generated to enlarge the training set for face recognition. Unlike conventional 3D face reconstruction methods, the proposed algorithm utilizes automatic 2D face fitting to assist 3D face reconstruction. We automatically locate 88 sparse points of the frontal face with a 2D face-fitting algorithm, the Random Forest Embedded Active Shape Model, which embeds random forest learning into the framework of the Active Shape Model. The 2D face-fitting results are added to the 3D face reconstruction objective function as shape constraints, so the optimization objective takes not only image intensity but also the 2D fitting results into account. Shape and texture parameters of the 3DMM are thus estimated by fitting the 3DMM to the 2D frontal face sample, which is a non-linear optimization problem. We evaluate the proposed method on the publicly available CMU PIE database, which includes faces viewed from 11 different poses, and the results show that the proposed method is effective and that the face recognition results under pose variation are promising.
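A minimal sketch of the kind of combined objective the abstract describes, with an image-intensity term plus a 2D landmark (shape-constraint) term; `render`, `project_landmarks`, and the weight `lam` are hypothetical placeholders for the components of a real 3DMM fitter.

```python
import numpy as np

def fitting_energy(params, image, landmarks_2d, render, project_landmarks, lam=1.0):
    """params: 3DMM shape/texture parameters; landmarks_2d: (88, 2) detected points.
    render(params) -> synthetic image; project_landmarks(params) -> (88, 2) points."""
    intensity_term = np.sum((render(params) - image) ** 2)
    landmark_term = np.sum((project_landmarks(params) - landmarks_2d) ** 2)
    return intensity_term + lam * landmark_term
```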

8.
In this paper, we present a complete framework for recovering an object's shape and estimating its reflectance properties and light sources from a set of images. The whole process is performed automatically. We use the shape-from-silhouette approach proposed by R. Szeliski (1993), combined with image pixels, to reconstruct a triangular mesh with the marching cubes algorithm. A classification process identifies regions of the object having the same appearance. For each region, a single point or directional light source is detected, using specular lobes, Lambertian regions of the surface, or specular highlights seen in the images. An identification method jointly (i) decides which light sources are actually significant and (ii) estimates the diffuse and specular coefficients for a surface represented by the modified Phong model (Lewis, 1994). To validate the efficiency of our algorithm, we present a case study with various objects, light sources, and surface properties. As the results show, our system proves accurate even for images of real objects obtained with an inexpensive acquisition system.
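A minimal sketch of the modified (energy-conserving) Phong model referenced above, in the normalization commonly attributed to Lewis (1994); in the described framework, the diffuse and specular coefficients and the exponent would be estimated per region by fitting this forward model to observed pixels, whereas here they are simply supplied as example values.

```python
# f(l, v) = kd / pi + ks * (n_exp + 2) / (2 * pi) * max(cos(alpha), 0) ** n_exp,
# where alpha is the angle between the mirror direction of the light and the view.
import numpy as np

def modified_phong(normal, light_dir, view_dir, kd, ks, n_exp):
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    r = 2.0 * np.dot(n, l) * n - l               # mirror direction of the light
    diffuse = kd / np.pi
    specular = ks * (n_exp + 2.0) / (2.0 * np.pi) * max(np.dot(r, v), 0.0) ** n_exp
    return (diffuse + specular) * max(np.dot(n, l), 0.0)

print(modified_phong(np.array([0, 0, 1.0]), np.array([0, 0.5, 1.0]),
                     np.array([0, -0.5, 1.0]), kd=0.6, ks=0.3, n_exp=20))
```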

9.
In this paper we show how to estimate facial surface reflectance properties (a slice of the BRDF and the albedo) in conjunction with the facial shape from a single image. The key idea underpinning our approach is to iteratively interleave the two processes of estimating reflectance properties based on the current shape estimate and updating the shape estimate based on the current estimate of the reflectance function. For frontally illuminated faces, the reflectance properties can be described by a function of one variable which we estimate by fitting a curve to the scattered and noisy reflectance samples provided by the input image and estimated shape. For non-frontal illumination, we fit a smooth surface to the scattered 2D reflectance samples. We make use of a novel statistical face shape constraint which we term ‘model-based integrability’ which we use to regularise the shape estimation. We show that the method is capable of recovering accurate shape and reflectance information from single grayscale or colour images using both synthetic and real world imagery. We use the estimated reflectance measurements to render synthetic images of the face in varying poses. To synthesise images under novel illumination, we show how to fit a parametric model of reflectance to the estimated reflectance function.
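A minimal sketch of the frontal-illumination curve fit: with the light along the viewing direction, each pixel contributes a noisy sample (cos θ, reflectance), and the one-variable reflectance function is fitted to the scattered samples. A low-order polynomial is used here purely for illustration; the paper's actual curve model may differ.

```python
import numpy as np

def fit_reflectance_curve(cos_theta, reflectance_samples, degree=4):
    """Fit a smooth 1D reflectance function to scattered, noisy samples."""
    coeffs = np.polyfit(cos_theta, reflectance_samples, degree)
    return np.poly1d(coeffs)            # callable curve: reflectance as f(cos theta)

# Noisy synthetic samples from a hypothetical Lambertian-plus-sheen curve.
ct = np.random.uniform(0.1, 1.0, 500)
samples = 0.8 * ct + 0.1 * ct ** 8 + 0.02 * np.random.randn(500)
curve = fit_reflectance_curve(ct, samples)
print(curve(0.5), curve(1.0))
```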

10.
HMM-Based Face Recognition from a Single Sample under Varying Illumination and Pose
We propose an HMM-based algorithm for face recognition from a single sample under varying illumination and pose. The algorithm first uses a manually registered training set to automatically register a single frontal input face image to the Candide-3 model, and reconstructs a person-specific 3D face model on the basis of this registration. Rotating the reconstructed model to various angles yields digital faces under different poses, and adjusting the illumination coefficients of these digital faces with spherical-harmonic basis images yields digital faces under different illumination. The generated digital faces with varying illumination and pose, together with the original sample image, are used as training data to build an individual hidden Markov model for each user. The proposed algorithm was evaluated on existing face databases and compared with recognition methods based on illumination compensation and pose correction. The results show that the algorithm effectively avoids the low recognition rates those methods suffer when certain illumination or pose conditions are poorly corrected, and adapts better to face recognition under varying illumination and pose.

11.
We consider the problem of estimating the 3D shape and reflectance properties of an object made of a single material from a set of calibrated views. To model the reflectance, we propose to use the View Independent Reflectance Map (VIRM), a representation of the joint effect of the diffuse-plus-specular Bidirectional Reflectance Distribution Function (BRDF) and the environment illumination. The object shape is parameterized using a triangular mesh. We pose the estimation problem as minimizing the cost of matching the input images and the images synthesized using the shape and VIRM estimates. We show that by enforcing a constant value of VIRM as a global constraint, we can minimize the cost function by iterating between VIRM and shape estimation. Experimental results on both synthetic and real objects show that our algorithm can recover both the 3D shape and the diffuse/specular reflectance information. Our algorithm does not require the light sources to be known or calibrated. The estimated VIRM can be used to predict the appearance of objects of the same material from novel viewpoints and under transformed illumination. The support of the National Science Foundation under grant ECS 02-25523 is gratefully acknowledged. Tianli Yu was supported in part by a Beckman Institute Graduate Fellowship.
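A minimal sketch of one half of the alternating scheme, assuming the shape (and hence the per-pixel normals) is fixed: a reflectance-map-like table indexed by surface normal is estimated by averaging observed intensities and can then predict intensities for the synthesis/matching step. This is a simplified stand-in, not the paper's exact VIRM parameterization.

```python
import numpy as np

def estimate_reflectance_map(normals, intensities, bins=16):
    """normals: (N, 3) unit vectors; intensities: (N,).  Bin by spherical angles."""
    theta = np.arccos(np.clip(normals[:, 2], -1.0, 1.0))
    phi = np.arctan2(normals[:, 1], normals[:, 0])
    ti = np.minimum((theta / np.pi * bins).astype(int), bins - 1)
    pi_ = np.minimum(((phi + np.pi) / (2 * np.pi) * bins).astype(int), bins - 1)
    table = np.zeros((bins, bins))
    count = np.zeros((bins, bins))
    np.add.at(table, (ti, pi_), intensities)      # accumulate observed intensities
    np.add.at(count, (ti, pi_), 1.0)
    return np.divide(table, count, out=np.zeros_like(table), where=count > 0)

def predict_intensity(table, normal, bins=16):
    """Synthesize the intensity a surface point with this normal should have."""
    theta = np.arccos(np.clip(normal[2], -1.0, 1.0))
    phi = np.arctan2(normal[1], normal[0])
    ti = min(int(theta / np.pi * bins), bins - 1)
    pi_ = min(int((phi + np.pi) / (2 * np.pi) * bins), bins - 1)
    return table[ti, pi_]
```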

12.
To study the faithful reproduction of color for different materials, we applied the different computational methods used in computer graphics for non-specular and specular materials, together with the different computations of the direct illumination model and the global illumination model. Rendering experiments with the two materials under the two illumination models were carried out. Analysis of the results shows that the color of non-specular materials is reproduced fairly well under the direct illumination model, while the color of specular materials is reproduced more faithfully under the global illumination model.

13.
In this paper we study the problem of recovering the 3D shape, reflectance, and non-rigid motion properties of a dynamic 3D scene. Because these properties are completely unknown and because the scene's shape and motion may be non-smooth, our approach uses multiple views to build a piecewise-continuous geometric and radiometric representation of the scene's trace in space-time. A basic primitive of this representation is the dynamic surfel, which (1) encodes the instantaneous local shape, reflectance, and motion of a small and bounded region in the scene, and (2) enables accurate prediction of the region's dynamic appearance under known illumination conditions. We show that complete surfel-based reconstructions can be created by repeatedly applying an algorithm called Surfel Sampling that combines sampling and parameter estimation to fit a single surfel to a small, bounded region of space-time. Experimental results with the Phong reflectance model and complex real scenes (clothing, shiny objects, skin) illustrate our method's ability to explain pixels and pixel variations in terms of their underlying causes: shape, reflectance, motion, illumination, and visibility.

14.
Variations in illumination degrade the performance of appearance-based face recognition. We present a novel algorithm for the normalization of color facial images using a single image and its co-registered 3D pointcloud (3D image). The algorithm borrows the physically based Phong lighting model from computer graphics, normally used for rendering computer images, and employs it in reverse mode to calculate the face albedo from real facial images. Our algorithm estimates the number of dominant light sources and their directions from the specularities in the facial image and the corresponding 3D points. The intensities of the light sources and the parameters of the Phong model are estimated by fitting the Phong model to the facial skin data. Unlike existing approaches, our algorithm takes into account both Lambertian and specular reflections as well as attached and cast shadows. Moreover, our algorithm is invariant to facial pose and expression and can effectively handle the case of multiple extended light sources. The algorithm was tested on the challenging FRGC v2.0 data and satisfactory results were achieved. The mean fitting error was 6.3% of the maximum color value. Performing face recognition using the normalized images increased both identification and verification rates.
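A minimal sketch of using a Phong-style model "in reverse", assuming the light direction, light intensity, and specular parameters are already known (the paper estimates them from the specularities and the 3D data): the per-pixel albedo follows by subtracting the specular term and dividing by the Lambertian shading factor. Shadow handling is reduced here to skipping unlit pixels, a simplification of the attached/cast-shadow treatment described above.

```python
import numpy as np

def albedo_from_image(intensity, normals, view_dir, light_dir, light_intensity,
                      ks, shininess, eps=1e-3):
    """intensity: (N,); normals: (N, 3) unit vectors; returns per-pixel albedo."""
    n_dot_l = normals @ light_dir
    r = 2.0 * n_dot_l[:, None] * normals - light_dir          # mirror of light dir
    spec = ks * np.clip(r @ view_dir, 0.0, None) ** shininess * light_intensity
    diffuse_shading = light_intensity * np.clip(n_dot_l, 0.0, None)
    albedo = np.zeros_like(intensity)
    lit = diffuse_shading > eps                                # skip shadowed pixels
    albedo[lit] = (intensity[lit] - spec[lit]) / diffuse_shading[lit]
    return np.clip(albedo, 0.0, None)

# Hypothetical two-pixel example with an overhead light along the view direction.
normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.6, 0.8]])
I_obs = np.array([0.75, 0.60])
print(albedo_from_image(I_obs, normals, np.array([0.0, 0.0, 1.0]),
                        np.array([0.0, 0.0, 1.0]), 1.0, ks=0.2, shininess=30))
```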

15.
The irradiance volume
A major goal in computer graphics is realistic image synthesis. To this end, illumination methods have evolved from simple local shading models to physically based global illumination algorithms. Local illumination methods consider only the light energy transfer between an emitter and a surface (direct lighting), while global methods account for light energy interactions between all surfaces in an environment, considering both direct and indirect lighting. Even though the realistic effects that global illumination algorithms provide are frequently desirable, the computational expense of these methods is too great for many applications. Dynamic environments and scenes containing a very large number of surfaces often pose problems for global illumination methods. This article presents a different approach to calculating the global illumination of objects. Instead of striving for accuracy at the expense of performance, we rephrase the goal: to achieve a reasonable approximation with high performance. This places global illumination effects within reach of many applications in which visual appearance is more important than absolute numerical accuracy.
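A minimal sketch of why such a precomputed representation is fast to use: the irradiance at an arbitrary point can be approximated by trilinear interpolation of the eight surrounding samples of a regular grid. A single scalar per grid point is stored here for brevity, whereas an actual irradiance volume stores a directional distribution per sample.

```python
import numpy as np

def sample_irradiance(grid, p):
    """grid: (X, Y, Z) precomputed irradiance samples; p: continuous grid coordinates."""
    i0 = np.clip(np.floor(p).astype(int), 0, np.array(grid.shape) - 2)
    fx, fy, fz = p - i0
    x, y, z = i0
    c00 = grid[x, y, z] * (1 - fx) + grid[x + 1, y, z] * fx
    c10 = grid[x, y + 1, z] * (1 - fx) + grid[x + 1, y + 1, z] * fx
    c01 = grid[x, y, z + 1] * (1 - fx) + grid[x + 1, y, z + 1] * fx
    c11 = grid[x, y + 1, z + 1] * (1 - fx) + grid[x + 1, y + 1, z + 1] * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz

grid = np.random.rand(8, 8, 8)                 # stand-in for baked irradiance samples
print(sample_irradiance(grid, np.array([3.2, 4.7, 1.5])))
```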

16.
Standard texture mapping hardware enables rapid rendering of color mapped surfaces with interpolated surface shading. New algorithms extend this to bump mapping, Phong shading, and reflection mapping. We first introduce the bidirectional reflectance function we wish to optimize, split into diffuse, specular and environment terms. Casting the diffuse term as a table lookup, we introduce lighting tables and efficient ways to compute them for distant lights. We also revisit the geometry of bump mapping, extending Blinn's (1978) results. We consider caching intermediate results for rendering animated rigid bodies, generalizing this to animated surfaces using a technique called parametric rasterization. Finally, we describe efficient reflection mapping and discuss implications for bump-mapped surfaces. We present a fast method for rendering Phong highlights and discuss a special case of a planar surface with simulated water ripples.
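A minimal sketch of casting the diffuse term as a table lookup in the spirit of the lighting tables mentioned above: for distant lights, diffuse irradiance depends only on the surface normal, so it can be precomputed over a quantized set of normal directions and fetched at render time. The angular parameterization and resolution below are illustrative assumptions.

```python
import numpy as np

def build_lighting_table(light_dirs, light_intensities, res=64):
    """Table over (theta, phi)-quantized normals of sum_i L_i * max(n.l_i, 0)."""
    theta = np.linspace(0, np.pi, res)
    phi = np.linspace(-np.pi, np.pi, res)
    T, P = np.meshgrid(theta, phi, indexing="ij")
    n = np.stack([np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)], axis=-1)
    table = np.zeros((res, res))
    for l, c in zip(light_dirs, light_intensities):
        table += c * np.clip(n @ (l / np.linalg.norm(l)), 0.0, None)
    return table

def lookup(table, normal):
    res = table.shape[0]
    theta = np.arccos(np.clip(normal[2], -1, 1))
    phi = np.arctan2(normal[1], normal[0])
    return table[min(int(theta / np.pi * (res - 1)), res - 1),
                 min(int((phi + np.pi) / (2 * np.pi) * (res - 1)), res - 1)]

tbl = build_lighting_table([np.array([0, 0, 1.0]), np.array([1.0, 0, 0])], [1.0, 0.5])
print(lookup(tbl, np.array([0.0, 0.0, 1.0])))   # ~1.0 from the overhead light
```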

17.
Estimation of human shape from images has numerous applications ranging from graphics to surveillance. A single image provides insufficient constraints (e.g., because of clothing), making human shape estimation more challenging. We propose a method to simultaneously estimate a person's clothed and naked shapes from a single image of that person wearing clothing. The key component of our method is a deformable model of clothed human shape. We learn our deformable model, which spans variations in pose, body, and clothes, from a training dataset. These variations are derived from non-rigid surface deformation and encoded in a number of low-dimensional parameters. Our deformable model can be used to produce clothed 3D meshes for different people in different poses, neither of which needs to appear in the training dataset. Given an input image, our deformable model is initialized with a few user-specified 2D joints and contours of the person. We optimize the parameters of the deformable model by pose fitting and body fitting in an iterative way, after which the clothed and naked 3D shapes of the person are obtained simultaneously. We illustrate our method for texture mapping and animation. The experimental results on real images demonstrate the effectiveness of our method.

18.
Separation of Reflection Components Using Color and Polarization
Specular reflections and interreflections produce strong highlights in brightness images. These highlights can cause vision algorithms for segmentation, shape from shading, binocular stereo, and motion estimation to produce erroneous results. A technique is developed for separating the specular and diffuse components of reflection from images. The approach is to use color and polarization information, simultaneously, to obtain constraints on the reflection components at each image point. Polarization yields local and independent estimates of the color of specular reflection. The result is a linear subspace in color space in which the local diffuse component must lie. This subspace constraint is applied to neighboring image points to determine the diffuse component. In contrast to previous separation algorithms, the proposed method can handle highlights on surfaces with substantial texture, smoothly varying diffuse reflectance, and varying material properties. The separation algorithm is applied to several complex scenes with textured objects and strong interreflections. The separation results are then used to solve three problems pertinent to visual perception: determining illumination color, estimating illumination direction, and shape recovery.
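A minimal sketch of the per-pixel color constraint, assuming the specular color s has been obtained from polarization and that the diffuse (body) color d is known from a nearby highlight-free pixel: each pixel color is modeled as I = m_d·d + m_s·s and the two magnitudes are solved by least squares. The paper itself does not assume d is known; it propagates the linear-subspace constraint across neighboring points instead.

```python
import numpy as np

def separate_pixel(I, diffuse_color, specular_color):
    """Least-squares magnitudes (m_d, m_s) and the two reflection components."""
    A = np.column_stack([diffuse_color, specular_color])       # 3 x 2 color basis
    (m_d, m_s), *_ = np.linalg.lstsq(A, I, rcond=None)
    return m_d * diffuse_color, m_s * specular_color

I = np.array([0.9, 0.55, 0.35])            # observed RGB with a highlight
d = np.array([0.8, 0.4, 0.2])              # hypothetical diffuse (body) color
s = np.array([1.0, 1.0, 1.0])              # specular color ~ illumination color
print(separate_pixel(I, d, s))
```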

19.
This paper presents an interactive technique for the dense texture-based visualization of unsteady 3D flow, taking into account issues of computational efficiency and visual perception. High efficiency is achieved by a 3D graphics processing unit (GPU)-based texture advection mechanism that implements logical 3D grid structures by physical memory in the form of 2D textures. This approach results in fast read and write access to physical memory, independent of GPU architecture. Slice-based direct volume rendering is used for the final display. We investigate two alternative methods for the volumetric illumination of the result of texture advection: first, gradient-based illumination that employs a real-time computation of gradients; and second, line-based lighting based on illumination in codimension 2. In addition to the Phong model, perception-guided rendering methods are considered, such as cool/warm shading, halo rendering, or color-based depth cueing. The problems of clutter and occlusion are addressed by supporting a volumetric importance function that enhances features of the flow and reduces visual complexity in less interesting regions. GPU implementation aspects, performance measurements, and a discussion of results are included to demonstrate our visualization approach.
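A minimal sketch of the cool/warm shading mentioned among the perception-guided alternatives to Phong: intensity is mapped to a blend between a cool and a warm color so that orientation is conveyed without relying on absolute brightness. The particular colors and the simple blend below are illustrative (Gooch-style) choices, not the paper's exact shading.

```python
import numpy as np

def cool_warm_shade(normal, light_dir,
                    cool=np.array([0.0, 0.0, 0.55]), warm=np.array([0.3, 0.3, 0.0])):
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    t = 0.5 * (1.0 + np.dot(n, l))        # map n.l from [-1, 1] to [0, 1]
    return t * warm + (1.0 - t) * cool

print(cool_warm_shade(np.array([0, 0, 1.0]), np.array([0.5, 0.5, 1.0])))
```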

20.
Exchanging Faces in Images
