Similar Documents (20 results)
1.
Faces in natural images are often occluded by a variety of objects. We propose a fully automated, probabilistic and occlusion-aware 3D morphable face model adaptation framework following an analysis-by-synthesis setup. The key idea is to segment the image into regions explained by separate models. Our framework includes a 3D morphable face model, a prototype-based beard model and a simple model for occlusions and background regions. The segmentation and all the model parameters have to be inferred from the single target image. Face model adaptation and segmentation are solved jointly using an expectation–maximization-like procedure: during the E-step we update the segmentation, and during the M-step we update the face model parameters. For face model adaptation we apply a stochastic sampling strategy based on the Metropolis–Hastings algorithm. For segmentation, we apply loopy belief propagation for inference in a Markov random field. Illumination estimation is critical for occlusion handling, and our combined segmentation and model adaptation needs a proper initialization of the illumination parameters. We propose a RANSAC-based robust illumination estimation technique. By applying this method to a large face image database, we obtain a first empirical distribution of real-world illumination conditions. The obtained empirical distribution is made publicly available and can be used as a prior in probabilistic frameworks, for regularization, or to synthesize data for deep learning methods.
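A minimal sketch of the Metropolis–Hastings sampling strategy mentioned above, applied to a toy one-parameter posterior; the Gaussian likelihood, synthetic data, and proposal step size are illustrative stand-ins for the actual face-model posterior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the face-model posterior: a Gaussian
# likelihood over one scalar parameter theta, with a flat prior.
data = rng.normal(loc=2.0, scale=0.5, size=200)

def log_posterior(theta):
    return -0.5 * np.sum((data - theta) ** 2) / 0.5**2

def metropolis_hastings(n_steps=5000, step=0.1):
    theta = 0.0
    lp = log_posterior(theta)
    samples = []
    for _ in range(n_steps):
        proposal = theta + rng.normal(scale=step)   # symmetric random walk
        lp_prop = log_posterior(proposal)
        # Accept with probability min(1, p(proposal) / p(theta)).
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = proposal, lp_prop
        samples.append(theta)
    return np.array(samples)

samples = metropolis_hastings()
posterior_mean = samples[2000:].mean()   # discard burn-in
```

The same accept/reject loop carries over to high-dimensional face-model parameters; only the posterior and the proposal change.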

2.
Total variation models for variable lighting face recognition
In this paper, we present the logarithmic total variation (LTV) model for face recognition under varying illumination, including natural lighting conditions, where we rarely know the strength, direction, or number of light sources. The proposed LTV model can factorize a single face image and obtain the illumination-invariant facial structure, which is then used for face recognition. Our model is inspired by the SQI model but has better edge-preserving ability and simpler parameter selection. The merit of this model is that it requires neither a lighting assumption nor any training. The LTV model reaches very high recognition rates in tests on the Yale and CMU PIE face databases as well as on a face database containing 765 subjects under outdoor lighting conditions.

3.
As part of the face recognition task in a robust security system, we propose a novel approach for the illumination recovery of faces with cast shadows and specularities. Given a single 2D face image, we relight the face by extracting the nine spherical harmonic bases and the face's spherical illumination coefficients using the properties of face spherical spaces. First, an illumination training database is generated by computing the properties of the spherical spaces from face albedo and normal values estimated from 2D training images. The training database is then divided along two axes: the illumination quality and the light direction of each image. Based on the generated multi-level illumination-discriminative training space, we analyze the target face pixels and compare them with the appropriate training subspace using pre-generated tiles. The framework was designed with practical real-time processing speed and small image sizes in mind. In contrast to other approaches, our technique requires neither 3D face models nor restricted illumination conditions for the training process. Furthermore, the proposed approach uses a single face image to estimate the face albedo and the face spherical spaces. We also provide the results of a series of experiments performed on publicly available databases, showing significant improvements in face recognition rates.

4.
In this paper, we propose two novel methods for face recognition under arbitrary unknown lighting using a spherical harmonics illumination representation; they require only one training image per subject and no 3D shape information. Our methods build on the result that the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be accurately approximated by a low-dimensional linear subspace. We provide two methods to estimate the spherical harmonic basis images spanning this space from just one image. Our first method builds a statistical model from a collection of 2D basis images. We demonstrate that, using the learned statistics, we can estimate the spherical harmonic basis images from a single image taken under arbitrary illumination conditions if there is no pose variation. The second method builds the statistical models directly in 3D space, combining the spherical harmonic illumination representation with a 3D morphable model of human faces to recover basis images from images across both poses and illuminations. After estimating the basis images, both methods use the same recognition scheme: we recognize the face for which there exists a weighted combination of basis images closest to the test face image. We provide a series of experiments that achieve high recognition rates under a wide range of illumination conditions, including multiple sources of illumination. Our methods achieve accuracy comparable to methods with much more onerous training data requirements. A comparison of the two methods is also provided.
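The nine spherical harmonic basis images referred to in entries 3 and 4 are low-order polynomials in the surface normal scaled by the albedo; a sketch with the standard Lambertian constants, using a synthetic sphere in place of a real face:

```python
import numpy as np

def sh_basis_images(normals, albedo):
    """First nine real spherical harmonic basis images for a
    Lambertian surface (standard constants, Basri-Jacobs ordering)."""
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    b = np.stack([
        0.282095 * np.ones_like(nx),    # Y_00  (constant)
        0.488603 * nz,                  # Y_10
        0.488603 * nx,                  # Y_11e
        0.488603 * ny,                  # Y_11o
        0.315392 * (3 * nz**2 - 1),     # Y_20
        1.092548 * nx * nz,             # Y_21e
        1.092548 * ny * nz,             # Y_21o
        0.546274 * (nx**2 - ny**2),     # Y_22e
        1.092548 * nx * ny,             # Y_22o
    ], axis=0)
    return albedo[None, ...] * b

# Synthetic "face": unit-sphere normals on a 32x32 grid,
# background pixels set to a z-up normal.
ys, xs = np.mgrid[-1:1:32j, -1:1:32j]
r2 = xs**2 + ys**2
nz = np.sqrt(np.clip(1 - r2, 0, 1))
normals = np.dstack([xs, ys, nz])
normals[r2 > 1] = [0, 0, 1]
albedo = np.ones((32, 32))

basis = sh_basis_images(normals, albedo)   # shape (9, 32, 32)
```

Any image of the object under distant lighting is then approximated by a weighted sum of these nine images, which is what both recognition schemes exploit.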

5.
Pattern Recognition, 2005, 38(10): 1705-1716
The appearance of a face varies drastically when the illumination changes, and variations in lighting conditions make face recognition an even more challenging task. In this paper, we propose a novel approach to handle the illumination problem. Our method can restore a face image captured under arbitrary lighting conditions to one with frontal illumination by using a ratio image between the face image and a reference face image, both of which are blurred by a Gaussian filter. An iterative algorithm is then used to update the reference image, which is reconstructed from the restored image by means of principal component analysis (PCA), in order to obtain a visually better restored image. Image processing techniques are also used to improve the quality of the restored image. To evaluate the performance of our algorithm, the restored images with frontal illumination are used for face recognition by means of PCA. Experimental results demonstrate that face recognition using our method achieves a higher recognition rate on the Yale B and Yale databases. Our algorithm has several advantages over previous algorithms: (1) it does not need to estimate the face surface normals or the light source directions; (2) it does not need many images captured under different lighting conditions for each person, nor a set of bootstrap images that includes many images with different illuminations; and (3) it does not need to detect accurate positions of facial feature points or to warp the image for alignment.
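The core restoration step of entry 5 is a ratio image between blurred versions of a reference face and the input. The sketch below substitutes a separable box blur for the paper's Gaussian filter, uses a uniformly dimmed copy as the badly lit input, and omits the iterative PCA update of the reference:

```python
import numpy as np

def box_blur(img, k=9):
    """Separable box blur (a stand-in for the paper's Gaussian blur)."""
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="same"), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="same"), 1, out)

def restore(face, reference, eps=1e-6):
    """Relight `face` toward `reference` via the blurred ratio image."""
    return face * box_blur(reference) / (box_blur(face) + eps)

rng = np.random.default_rng(1)
reference = rng.uniform(0.2, 1.0, size=(64, 64))  # frontal-lit "face"
dim_face = 0.5 * reference                        # same face under half the light
restored = restore(dim_face, reference)
```

Because the blur captures only the large-scale lighting field, multiplying by the ratio corrects the illumination while leaving fine facial structure in place.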

6.
Exchanging Faces in Images

7.
3D face reconstruction is an effective method for pedestrian recognition in non-cooperative environments because of its robustness to uncontrolled pose and illumination changes. Visual sensor networks are widely used in target surveillance as powerful unattended distributed measurement systems. This paper proposes a collaborative multi-view non-cooperative 3D face reconstruction method for visual sensor networks. A peer-to-peer paradigm-based visual sensor network is employed for distributed pedestrian tracking and optimal face image acquisition. Gaussian probability distribution-based multi-view data fusion is used for target localization, and a Kalman filter is applied for target tracking. A lightweight face image quality evaluation method is presented to search for optimal face images. A self-adaptive morphable model is designed for multi-view 3D face reconstruction; the optimal face images and their pose estimates are used to adjust it. Cooperative chaotic particle swarm optimization is employed to optimize the parameters of the self-adaptive morphable model. Experimental results on real data show that the proposed method can acquire optimal face images and achieve non-cooperative 3D reconstruction efficiently.

8.
Liu Hao, Hu Kexin, Liu Yanli. Journal of Software (《软件学报》), 2014, 25(S2): 236-246
We propose a face illumination transfer algorithm based on intrinsic image decomposition. First, to address the incompleteness of existing intrinsic image decompositions, an improved intrinsic image decomposition method is proposed. On this basis, to preserve facial detail, an edge-preserving illumination filtering algorithm is proposed that transfers the illumination of a reference face to the target face; finally, the target reflectance image is fused with the filtered illumination image to relight the face. Experimental results show that, compared with existing algorithms, the proposed algorithm preserves the skin color of the relit face well and produces more accurate and natural relighting results.

9.
This paper proposes a novel illumination compensation algorithm, which can compensate for the uneven illuminations on human faces and reconstruct face images in normal lighting conditions. A simple yet effective local contrast enhancement method, namely block-based histogram equalization (BHE), is first proposed. The resulting image processed using BHE is then compared with the original face image processed using histogram equalization (HE) to estimate the category of its light source. In our scheme, we divide the light source for a human face into 65 categories. Based on the category identified, a corresponding lighting compensation model is used to reconstruct an image that will visually be under normal illumination. In order to eliminate the influence of uneven illumination while retaining the shape information about a human face, a 2D face shape model is used. Experimental results show that, with the use of principal component analysis for face recognition, the recognition rate can be improved by 53.3% to 62.6% when our proposed algorithm for lighting compensation is used.
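A minimal version of the block-based histogram equalization (BHE) step described above; the block size and the 8-bit quantization are our choices:

```python
import numpy as np

def hist_equalize(block, n_levels=256):
    """Classic histogram equalization of one 8-bit block."""
    hist, _ = np.histogram(block, bins=n_levels, range=(0, n_levels))
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each gray level through the normalized cumulative histogram.
    lut = np.round((cdf - cdf_min) / max(cdf[-1] - cdf_min, 1) * (n_levels - 1))
    return lut[block.astype(np.intp)].astype(np.uint8)

def block_hist_equalize(img, block=16):
    """Equalize each (block x block) tile independently: BHE."""
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            tile = img[i:i + block, j:j + block]
            out[i:i + block, j:j + block] = hist_equalize(tile)
    return out

rng = np.random.default_rng(2)
img = rng.integers(80, 120, size=(64, 64), dtype=np.uint8)  # low-contrast input
out = block_hist_equalize(img)
```

Equalizing per block rather than globally boosts local contrast, which is what makes the subsequent comparison with globally equalized images informative about the light-source category.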

10.
We develop a face recognition algorithm which is insensitive to large variation in lighting direction and facial expression. Taking a pattern classification approach, we consider each pixel in an image as a coordinate in a high-dimensional space. We take advantage of the observation that the images of a particular face, under varying illumination but fixed pose, lie in a 3D linear subspace of the high-dimensional image space, provided the face is a Lambertian surface without shadowing. However, since faces are not truly Lambertian surfaces and do indeed produce self-shadowing, images will deviate from this linear subspace. Rather than explicitly modeling this deviation, we linearly project the image into a subspace in a manner which discounts those regions of the face with large deviation. Our projection method is based on Fisher's linear discriminant and produces well-separated classes in a low-dimensional subspace, even under severe variation in lighting and facial expressions. The eigenface technique, another method based on linearly projecting the image space to a low-dimensional subspace, has similar computational requirements. Yet extensive experimental results demonstrate that the proposed "Fisherface" method has lower error rates than the eigenface technique on the Harvard and Yale face databases.
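The "Fisherface" projection is PCA followed by Fisher's linear discriminant; a two-class sketch on synthetic data (the vector sizes, class means, and subspace dimension below are fabricated for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "images": two classes of 100-pixel vectors, 40 samples each.
n, d = 40, 100
class_a = rng.normal(0.0, 1.0, (n, d)) + np.linspace(0, 2, d)
class_b = rng.normal(0.0, 1.0, (n, d)) + np.linspace(2, 0, d)
X = np.vstack([class_a, class_b])
y = np.array([0] * n + [1] * n)

# Step 1: PCA to a low-dimensional subspace (keeps scatter matrices
# non-singular when there are more pixels than samples).
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
W_pca = Vt[:10].T
Z = (X - mean) @ W_pca

# Step 2: Fisher's linear discriminant in the PCA subspace.
m0, m1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
Sw = np.cov(Z[y == 0].T) + np.cov(Z[y == 1].T)   # within-class scatter
w = np.linalg.solve(Sw, m1 - m0)                 # Fisher direction

proj = Z @ w
threshold = (proj[y == 0].mean() + proj[y == 1].mean()) / 2
pred = (proj > threshold).astype(int)
accuracy = (pred == y).mean()
```

Dividing by the within-class scatter is what discounts high-variation regions (shadows, expressions); plain PCA, by contrast, keeps whatever varies most, including lighting.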

11.
Many classification algorithms see a reduction in performance when tested on data with properties different from that used for training. This problem arises very naturally in face recognition, where images corresponding to the source domain (gallery, training data) and the target domain (probe, testing data) are acquired under varying degrees of illumination, expression, blur and alignment. In this paper, we account for the domain shift by deriving a latent subspace or domain, which jointly characterizes the multifactor variations using appropriate image formation models for each factor. We formulate the latent domain as a product of Grassmann manifolds based on the underlying geometry of the tensor space, and perform recognition across domain shift using statistics consistent with the tensor geometry. More specifically, given a face image from the source or target domain, we first synthesize multiple images of that subject under different illuminations, blur conditions and 2D perturbations to form a tensor representation of the face. The orthogonal matrices obtained from the decomposition of this tensor, where each matrix corresponds to a factor variation, are used to characterize the subject as a point on a product of Grassmann manifolds. For cases with only one image per subject in the source domain, the identity of target domain faces is estimated using the geodesic distance on product manifolds. When multiple images per subject are available, an extension of kernel discriminant analysis is developed using a novel kernel based on the projection metric on product spaces. Furthermore, a probabilistic approach to the problem of classifying image sets on product manifolds is introduced. We demonstrate the effectiveness of our approach through comprehensive evaluations on constrained and unconstrained face datasets, including still images and videos.

12.
Face recognition under varying lighting conditions is challenging, especially for single-image-based recognition systems. Extracting illumination-invariant features is an effective approach to this problem. However, it is hard for existing methods to extract multi-scale and multi-directional geometric structures at the same time, which is important for capturing the intrinsic features of a face image. In this paper, we propose to utilize the logarithmic nonsubsampled contourlet transform (LNSCT) to estimate the reflectance component from a single face image and take it as the illumination-invariant feature for face recognition, where NSCT is a fully shift-invariant, multi-scale, and multi-direction transform. LNSCT can extract strong edges, weak edges, and noise from a face image using NSCT in the logarithm domain. We show that, in the logarithm domain, the low-pass subband of a face image and the low-frequency part of strong edges can be regarded as illumination effects, while the weak edges and the high-frequency part of strong edges can be considered the reflectance component. Moreover, even when a face image is polluted by noise (in particular multiplicative noise), the reflectance component can still be well estimated while the noise is removed. LNSCT can be applied flexibly, as it requires neither an assumption on the lighting conditions nor information about 3D shape. Experimental results show the promising performance of LNSCT for face recognition on the Extended Yale B and CMU PIE databases.

13.
A face recognition system must recognize a face from a novel image despite the variations between images of the same face. A common approach to overcoming image variations caused by changes in the illumination conditions is to use image representations that are relatively insensitive to these variations. Examples of such representations are edge maps, image intensity derivatives, and images convolved with 2D Gabor-like filters. Here we present an empirical study that evaluates the sensitivity of these representations to changes in illumination, as well as viewpoint and facial expression. Our findings indicate that none of the representations considered is sufficient by itself to overcome image variations caused by a change in the direction of illumination. Similar results were obtained for changes due to viewpoint and expression. Image representations that emphasized the horizontal features were found to be less sensitive to changes in the direction of illumination. However, systems based only on such representations failed to recognize up to 20 percent of the faces in our database. Humans performed considerably better under the same conditions. We discuss possible reasons for this superiority and alternative methods for overcoming illumination effects in recognition.

14.
One of the main challenges in face recognition is represented by pose and illumination variations that drastically affect the recognition performance, as confirmed by the results of recent face recognition large-scale evaluations. This paper presents a new technique for face recognition, based on the joint use of 3D models and 2D images, specifically conceived to be robust with respect to pose and illumination changes. A 3D model of each user is exploited in the training stage (i.e. enrollment) to generate a large number of 2D images representing virtual views of the face with varying pose and illumination. Such images are then used to learn in a supervised manner a set of subspaces constituting the user's template. Recognition occurs by matching 2D images with the templates and no 3D information (neither images nor face models) is required. The experiments carried out confirm the efficacy of the proposed technique.

15.
Face recognition from visible-light images is easily affected by illumination and expression changes, while facial expression changes are confined to local regions and the phase congruency features of an image are unaffected by its brightness or contrast. Exploiting these properties, a face recognition algorithm based on block-wise phase congruency is proposed. The algorithm filters the image with log-Gabor filters and extracts phase congruency feature images using the phase congruency model; block-wise principal component analysis (PCA) is then applied to each feature image; finally, the distance information of all sub-images is fused and a nearest-neighbor classifier performs recognition. Experiments show that the method achieves better recognition performance.

16.
Modeling illumination effects and pose variations of a face is of fundamental importance in the field of facial image analysis. Most of the conventional techniques that simultaneously address both of these problems work with the Lambertian assumption and thus fall short of accurately capturing the complex intensity variation that the facial images exhibit or recovering their 3D shape in the presence of specularities and cast shadows. In this paper, we present a novel Tensor-Spline-based framework for facial image analysis. We show that, using this framework, the facial apparent BRDF field can be accurately estimated while seamlessly accounting for cast shadows and specularities. Further, using local neighborhood information, the same framework can be exploited to recover the 3D shape of the face (to handle pose variation). We quantitatively validate the accuracy of the Tensor Spline model using a more general model based on the mixture of single-lobed spherical functions. We demonstrate the effectiveness of our technique by presenting extensive experimental results for face relighting, 3D shape recovery, and face recognition using the Extended Yale B and CMU PIE benchmark data sets.

17.
We propose and implement a 3D head reconstruction algorithm based on two orthogonal images and a standard 3D head model, using matching between 2D image feature points and 3D model feature points. First, the face and hair regions are segmented, and color transfer is applied to the input images. For the frontal image, facial feature points are located with an improved ASM (Active Shape Model). An improved local maximum curvature tracking (LMCT) method locates profile feature points more robustly. After matching the image feature points with the feature points predefined on the standard 3D head, the standard head is deformed with radial basis functions to obtain the person-specific 3D head shape model. The reconstructed 3D head is then used as a bridge to automatically align the input images and perform seamless texture fusion. Finally, the texture is mapped onto the shape model to obtain a photorealistic, person-specific 3D head model corresponding to the input images.

18.
We introduce a system that processes a sequence of images of a front-facing human face and recognises a set of facial expressions. We use an efficient appearance-based face tracker to locate the face in the image sequence and estimate the deformation of its non-rigid components. The tracker works in real time. It is robust to strong illumination changes and factors out changes in appearance caused by illumination from changes due to face deformation. We adopt a model-based approach for facial expression recognition. In our model, an image of a face is represented by a point in a deformation space. The variability of the classes of images associated with facial expressions is represented by a set of samples which model a low-dimensional manifold in the space of deformations. We introduce a probabilistic procedure based on a nearest-neighbour approach to combine the information provided by the incoming image sequence with the prior information stored in the expression manifold to compute a posterior probability associated with a facial expression. In the experiments conducted we show that this system is able to work in an unconstrained environment with strong changes in illumination and face location. It achieves an 89% recognition rate in a set of 333 sequences from the Cohn–Kanade database.

19.
Xiong Ping, Lu Ye. Journal of Computer Applications (《计算机应用》), 2013, 33(8): 2359-2361
Traditional 3D face reconstruction algorithms have difficulty determining the face shape and are computationally complex. To address this, we propose a method that obtains the face contour with a level-set method and reconstructs the 3D model with a shape-from-shading (SFS) algorithm, requiring only a single frontal face photograph. First, an active shape model locates the face contour, which serves as the initial evolution curve of the level set to segment the complete face shape. Then a gray-level transform is applied to the face region to obtain a gray-scale image. Finally, the SFS algorithm reconstructs 3D models of face images under known illumination; these models are used as references and matched against the gray-scale image to determine its illumination conditions and 3D model. Experimental results show that, compared with mesh-model-based algorithms, the method quickly reconstructs a face model with complete shape.

20.
A Linear Reconstruction Method for Standard-Illumination Face Images
Based on the stable relationship between different face images under the same illumination and their standard-illumination counterparts, this paper proposes a method for reconstructing the standard-illumination image of a face. First, to remove the influence of facial structure, 3D face warping is introduced to achieve pixel-level image alignment. Second, an image-block-based illumination classification method is given according to the light-dark variation of the image. Finally, for the shape-aligned samples of each illumination class, a subspace-based linear reconstruction model is trained. The method avoids the texture loss caused by traditional preprocessing methods and the image distortion caused by subspace methods. Experiments on the Extended Yale B database show that the method improves both image fidelity and face recognition rates, and also verify the effectiveness of the proposed face alignment and illumination classification methods.
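The per-illumination-class linear reconstruction model of entry 20 amounts to a least-squares mapping from aligned images of one lighting class to their standard-illumination counterparts; a toy sketch in which the "illumination class" is a hypothetical fixed linear distortion of the standard-illumination image:

```python
import numpy as np

rng = np.random.default_rng(4)

d, n_train = 50, 200
# Standard-illumination "images" (flattened, already aligned).
Y = rng.uniform(0.2, 1.0, (n_train, d))
# One illumination class, modeled here as a fixed linear distortion.
A = np.eye(d) * 0.6 + 0.01 * rng.normal(size=(d, d))
X = Y @ A.T   # the same faces observed under this illumination class

# Train the class-specific reconstruction model W:
# solve min_W ||X @ W - Y||^2 by least squares.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Reconstruct the standard-illumination version of a new image.
y_new = rng.uniform(0.2, 1.0, d)
x_new = A @ y_new
y_rec = x_new @ W
```

In the paper the mapping is learned per illumination class after 3D alignment; here the subspace step is dropped and the full-pixel least-squares map stands in for it.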
