Related articles
20 related articles found.
1.
We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render (or synthesize) images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone. Test results show that the method performs almost without error, except on the most extreme lighting directions.
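Below is a minimal sketch of the final matching step described above: each identity's illumination cone (per pose) is approximated by a low-dimensional linear subspace spanned by rendered images, and a test image is assigned to the identity whose subspace is closest. The rendering stage is assumed to have already produced `rendered[name]`, a `(num_samples, num_pixels)` array per identity; names, shapes, and the subspace dimension are illustrative, not taken from the paper.

```python
import numpy as np

def subspace_basis(images, dim=9):
    """Orthonormal basis (dim x num_pixels) spanning the flattened rendered images."""
    _, _, vt = np.linalg.svd(images, full_matrices=False)
    return vt[:dim]

def distance_to_subspace(x, basis):
    """Norm of the component of x orthogonal to the subspace."""
    projection = basis.T @ (basis @ x)
    return np.linalg.norm(x - projection)

def classify(test_image, rendered):
    """Assign the identity whose approximated illumination cone is closest."""
    x = test_image.ravel().astype(float)
    bases = {name: subspace_basis(imgs) for name, imgs in rendered.items()}
    return min(bases, key=lambda name: distance_to_subspace(x, bases[name]))
```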

2.
Recently, the importance of face recognition has been increasingly emphasized as inexpensive CCD cameras have spread to a wide range of applications. However, facial appearance changes dramatically under varying lighting, causing serious performance degradation in face recognition. Many researchers have tried to overcome this illumination problem with diverse approaches, which typically require multiple registered images per person or prior knowledge of the lighting conditions. In this paper, we propose a new method for face recognition under arbitrary lighting conditions, given only a single registered image and training data captured under unknown illuminations. The proposed method is based on illuminated exemplars synthesized from photometric stereo images of the training data. A linear combination of illuminated exemplars can represent a new face, and the weighting coefficients of those exemplars are used as the identity signature. We conduct experiments to verify our approach and compare it with two traditional approaches. Higher recognition rates are reported in these experiments on the illumination subsets of the Max-Planck Institute face database and the Korean face database.

3.
Face recognition under variable pose and illumination is a challenging problem in computer vision. In this paper, we address it by proposing a new residual-based deep face reconstruction neural network that extracts discriminative pose-and-illumination-invariant (PII) features. Our deep model can transform a face image under arbitrary pose and illumination into a frontal view with standard illumination. We propose a new triplet-loss training method, instead of a Euclidean loss, to optimize the model, which has two advantages: (a) training triplets can be easily augmented by freely choosing combinations of labeled face images, so overfitting can be avoided; (b) triplet-loss training makes the PII features more discriminative even when training samples have similar appearance. Using our PII features, we achieve 83.8% average recognition accuracy on the MultiPIE face dataset, which is competitive with state-of-the-art face recognition methods.
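A minimal numpy sketch of the triplet-loss idea mentioned above: the loss pushes the anchor-positive distance below the anchor-negative distance by a margin. The feature extractor itself (the residual reconstruction network) is not reproduced here, and the margin value and function names are illustrative rather than the paper's settings.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss on batches of embedding vectors of shape (batch, dim)."""
    d_pos = np.sum((anchor - positive) ** 2, axis=1)   # squared anchor-positive distance
    d_neg = np.sum((anchor - negative) ** 2, axis=1)   # squared anchor-negative distance
    return np.mean(np.maximum(0.0, d_pos - d_neg + margin))
```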

4.
Face recognition under uncontrolled illumination conditions is still considered an unsolved problem. In order to correct for these illumination conditions, we propose a virtual illumination grid (VIG) approach to model the unknown illumination. Furthermore, we use coupled subspace models of both the facial surface and the albedo to estimate the face shape. In order to obtain a representation of the face under frontal illumination, we relight the estimated face shape. We show that the frontally illuminated facial images achieve better face recognition performance. We have performed the challenging Experiment 4 of the FRGCv2 database, which compares uncontrolled probe images to controlled gallery images. Our illumination correction method results in considerably better recognition rates for a number of well-known face recognition methods. By fusing our global illumination correction method with a local illumination correction method, further improvements are achieved.

5.
6.
Face recognition is challenging because variations in pose, lighting, scale, and expression can be introduced into the pattern of a face. A new face recognition approach using rank correlation of Gabor-filtered images is presented. In this technique, Gabor filters of different sizes and orientations are applied to images before rank correlation is used to match the face representations. The representation of each face is computed from the Gabor-filtered images and the original image. Although training requires a fairly substantial length of time, the computation time required for recognition is very short. Recognition rates between 83.5% and 96% are obtained on the AT&T (formerly ORL) database with different permutations of 5 and 9 training images per subject. In addition, the effect of pose variation on the recognition system is systematically determined using images from the UMIST database.
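A rough sketch, under stated assumptions, of matching faces by rank correlation of Gabor-filtered images: both images are filtered at several frequencies and orientations, and Spearman's rank correlation of the flattened responses (together with the original image) serves as the similarity score. The exact filter bank and representation used in the paper are not reproduced; the parameters below are illustrative.

```python
import numpy as np
from scipy.stats import spearmanr
from skimage.filters import gabor

def gabor_representation(image, frequencies=(0.1, 0.2), n_orientations=4):
    responses = [image.astype(float).ravel()]               # keep the original image as well
    for f in frequencies:
        for k in range(n_orientations):
            real, imag = gabor(image, frequency=f, theta=k * np.pi / n_orientations)
            responses.append(np.hypot(real, imag).ravel())   # magnitude of the filter response
    return np.concatenate(responses)

def rank_similarity(image_a, image_b):
    """Spearman rank correlation between the two Gabor-based representations."""
    rho, _ = spearmanr(gabor_representation(image_a), gabor_representation(image_b))
    return rho
```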

7.
This paper presents a novel illumination normalization approach for face recognition under varying lighting conditions. In the proposed approach, a discrete cosine transform (DCT) is employed to compensate for illumination variations in the logarithm domain. Since illumination variations mainly lie in the low-frequency band, an appropriate number of DCT coefficients are truncated to minimize variations under different lighting conditions. Experimental results on the Yale B database and CMU PIE database show that the proposed approach improves the performance significantly for the face images with large illumination variations. Moreover, the advantage of our approach is that it does not require any modeling steps and can be easily implemented in a real-time face recognition system.
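A minimal sketch, assuming a grayscale float image, of the idea described above: take the logarithm, zero a small block of low-frequency 2-D DCT coefficients (keeping the DC term so overall brightness is preserved), and invert the DCT. The block size and normalization are illustrative choices, not the paper's exact settings.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_illumination_normalize(image, cutoff=10):
    log_img = np.log1p(image.astype(float))        # work in the logarithm domain
    coeffs = dctn(log_img, norm='ortho')
    dc = coeffs[0, 0]
    coeffs[:cutoff, :cutoff] = 0.0                 # discard the low-frequency (illumination) band
    coeffs[0, 0] = dc                              # restore overall brightness
    return np.expm1(idctn(coeffs, norm='ortho'))
```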

8.
The features of a face can change drastically as the illumination changes. In contrast to pose and expression, illumination changes present a much greater challenge to face recognition. In this paper, we propose a novel wavelet-based approach that considers the correlation of neighboring wavelet coefficients to extract an illumination invariant. This invariant represents the key facial structure needed for face recognition. Using wavelet-based NeighShrink denoising, our method better preserves edges in the low-frequency illumination field and better retains useful information in the high-frequency fields. Because training and testing images generally have different illuminations, the method processes them differently; in particular, a simple algorithm with low time complexity can be applied to the testing image, which makes the method easy to apply in real face recognition systems. Experimental results on the Yale face database B and the CMU PIE face database show that excellent recognition rates can be achieved by the proposed method.
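A generic sketch of the NeighShrink shrinkage rule the abstract builds on: each wavelet detail coefficient is scaled by max(0, 1 - lambda^2 / S^2), where S^2 is the energy of its 3x3 neighborhood. The paper's full pipeline (illumination-invariant extraction and the separate treatment of training and testing images) is not reproduced; the wavelet, noise level, and threshold are illustrative assumptions.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def neighshrink(detail, threshold):
    """Shrink each coefficient by the energy of its 3x3 neighborhood."""
    energy = uniform_filter(detail ** 2, size=3) * 9          # neighborhood sum of squares
    factor = np.clip(1.0 - threshold ** 2 / np.maximum(energy, 1e-12), 0.0, None)
    return detail * factor

def denoise_one_level(image, wavelet='db4', sigma=5.0):
    approx, (horiz, vert, diag) = pywt.dwt2(image.astype(float), wavelet)
    threshold = sigma * np.sqrt(2.0 * np.log(image.size))     # universal threshold
    details = tuple(neighshrink(d, threshold) for d in (horiz, vert, diag))
    return pywt.idwt2((approx, details), wavelet)
```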

9.
Illumination variation is one of the critical factors affecting the face recognition rate. A novel approach for face illumination compensation is presented in this paper. It constructs a nine-dimensional face illumination subspace based on the quotient image. In addition, to improve efficiency, a half-face illumination image is proposed, and a low-dimensional training set of face images under different illumination conditions is obtained by means of PCA and the wavelet transform. Two illumination compensation strategies are then given: adding light and removing light. Based on these strategies, typical-illumination and standard-illumination sample images are synthesized in the PCA feature subspace and the wavelet transform subspace, respectively, and illumination compensation for both grayscale and color images is realized. Experimental results on the Yale Face Database B, the Extended Yale Face Database B and the CAS-PEAL Face Database indicate that the execution time after compensation is roughly halved and the face recognition rate is improved by 20% compared with the original images.

10.
In this paper, we propose two novel methods for face recognition under arbitrary unknown lighting using a spherical harmonics illumination representation, which require only one training image per subject and no 3D shape information. Our methods are based on the result that the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace. We provide two methods to estimate the spherical harmonic basis images spanning this space from just one image. Our first method builds a statistical model based on a collection of 2D basis images. We demonstrate that, using the learned statistics, we can estimate the spherical harmonic basis images from a single image taken under arbitrary illumination conditions if there is no pose variation. The second method builds the statistical model directly in 3D space by combining the spherical harmonic illumination representation with a 3D morphable model of human faces to recover basis images from images across both pose and illumination. After estimating the basis images, we use the same recognition scheme for both methods: we recognize the face for which there exists a weighted combination of basis images that is closest to the test face image. We provide a series of experiments that achieve high recognition rates under a wide range of illumination conditions, including multiple sources of illumination. Our methods achieve levels of accuracy comparable to methods with much more onerous training data requirements. A comparison of the two methods is also provided.
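A minimal sketch of the shared recognition step described above: given estimated spherical harmonic basis images for each subject (here nine flattened basis images stacked as matrix rows), find the weighted combination closest to the test image in the least-squares sense and pick the subject with the smallest residual. The basis-estimation stage is assumed to have been done elsewhere; the data structures are illustrative.

```python
import numpy as np

def residual(test, basis):
    """basis: (9, num_pixels) harmonic basis images; returns the least-squares residual norm."""
    weights, _, _, _ = np.linalg.lstsq(basis.T, test, rcond=None)
    return np.linalg.norm(basis.T @ weights - test)

def recognize(test_image, harmonic_bases):
    """harmonic_bases: dict mapping subject -> (9, num_pixels) basis image matrix."""
    x = test_image.ravel().astype(float)
    return min(harmonic_bases, key=lambda subject: residual(x, harmonic_bases[subject]))
```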

11.
The paper proposes a novel, pose-invariant face recognition system based on a deformable, generic 3D face model that is a composite of: (1) an edge model, (2) a color region model and (3) a wireframe model for jointly describing the shape and important features of the face. The first two submodels are used for image analysis and the third mainly for face synthesis. In order to match the model to face images in arbitrary poses, the 3D model can be projected onto different 2D viewplanes based on rotation, translation and scale parameters, thereby generating multiple face-image templates (in different sizes and orientations). Face shape variations among people are taken into account by the deformation parameters of the model. Given an unknown face, its pose is estimated by model matching, and the system synthesizes face images of known subjects in the same pose. The face is then classified as the subject whose synthesized image is most similar. The synthesized images are generated using a 3D face representation scheme that encodes the 3D shape and texture characteristics of the faces. This face representation is automatically derived from training face images of the subject. Experimental results show that the method is capable of determining pose and recognizing faces accurately over a wide range of poses and under naturally varying lighting conditions. Recognition rates of 92.3% have been achieved by the method with 10 training face images per person.

12.
Pattern Recognition, 2005, 38(10): 1705-1716
The appearance of a face will vary drastically when the illumination changes. Variations in lighting conditions make face recognition an even more challenging and difficult task. In this paper, we propose a novel approach to handle the illumination problem. Our method can restore a face image captured under arbitrary lighting conditions to one with frontal illumination by using a ratio-image between the face image and a reference face image, both of which are blurred by a Gaussian filter. An iterative algorithm is then used to update the reference image, which is reconstructed from the restored image by means of principal component analysis (PCA), in order to obtain a visually better restored image. Image processing techniques are also used to improve the quality of the restored image. To evaluate the performance of our algorithm, restored images with frontal illumination are used for face recognition by means of PCA. Experimental results demonstrate that face recognition using our method can achieve a higher recognition rate based on the Yale B database and the Yale database. Our algorithm has several advantages over other previous algorithms: (1) it does not need to estimate the face surface normals and the light source directions, (2) it does not need many images captured under different lighting conditions for each person, nor a set of bootstrap images that includes many images with different illuminations, and (3) it does not need to detect accurate positions of some facial feature points or to warp the image for alignment, etc.
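A rough sketch of the core restoration step described above: the input image is multiplied by the ratio of a Gaussian-blurred reference image (frontally illuminated) to the Gaussian-blurred input, which transfers the reference's smooth illumination onto the input. The iterative PCA-based update of the reference image is omitted; the blur width is an illustrative choice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ratio_image_relight(image, reference, sigma=8.0, eps=1e-6):
    """Restore frontal illumination using the blurred-image ratio trick."""
    img = image.astype(float)
    ref = reference.astype(float)
    ratio = gaussian_filter(ref, sigma) / (gaussian_filter(img, sigma) + eps)
    return img * ratio
```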

13.
The appearance of a face image is severely affected by illumination conditions, which hinders automatic face recognition. To recognize faces under varying lighting conditions, a homomorphic filtering-based illumination normalization method is proposed in this paper. In this work, the effect of illumination is effectively reduced by a modified implementation of homomorphic filtering whose key component is a Difference of Gaussian (DoG) filter, and the contrast is enhanced by histogram equalization. The resulting face image not only has reduced illumination effects but also preserves the edges and details that facilitate the subsequent recognition task. Among others, our method has the following advantages: (1) it requires neither prior information about 3D shape or light sources nor many training samples, so it can be applied directly when only a single training image per person is available; and (2) it is simple and computationally fast, because mature and fast algorithms exist for the Fourier transform used in the homomorphic filter. The Eigenfaces method is chosen to recognize the normalized face images. Experimental results on the Yale face database B and the CMU PIE face database demonstrate the significant performance improvement of the proposed method for face images with large illumination variations.
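A minimal sketch, under stated assumptions, of the processing chain described above: filtering in the log domain with a band-pass Difference-of-Gaussian (DoG) filter, followed by histogram equalization. For brevity the DoG is realized as the difference of two spatial Gaussian smoothings of the log image rather than an explicit FFT-domain transfer function; the sigma values are illustrative, not the paper's.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.exposure import equalize_hist

def dog_homomorphic_normalize(image, sigma_inner=1.0, sigma_outer=2.0):
    log_img = np.log1p(image.astype(float))            # multiplicative model -> additive in log domain
    band = gaussian_filter(log_img, sigma_inner) - gaussian_filter(log_img, sigma_outer)
    return equalize_hist(band)                         # contrast enhancement, output in [0, 1]
```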

14.
Illumination and pose variation are two major bottlenecks in automatic face recognition. A processing method for eliminating both effects is proposed. First, gray-level normalization is applied to the images in the training set to reduce sensitivity to illumination intensity. Pose is then estimated, and eigenface subspaces are computed for the different poses. Finally, the concept of a Pose's Weight Value (PWV) is introduced, and a Weighted Minimum Distance Classifier (WMDC) is designed on this basis; assigning different pose weights eliminates the influence of pose variation, as shown in the sketch below. Experimental results on the FERET and Yale B databases show that this method substantially improves the recognition rate under changes in illumination and pose.
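A minimal sketch, assuming per-pose eigenface subspaces have already been trained, of the weighted minimum-distance classification step: the distance to each gallery identity is computed in every pose subspace and combined using that pose's weight (PWV). The data structures and the combination rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def wmdc_classify(probe, pose_projections, gallery_features, pose_weights):
    """pose_projections[p]: callable mapping an image to pose p's eigenface features;
    gallery_features[identity][p]: stored feature vector; pose_weights[p]: PWV for pose p."""
    scores = {}
    for identity, per_pose in gallery_features.items():
        scores[identity] = sum(
            pose_weights[p] * np.linalg.norm(pose_projections[p](probe) - feat)
            for p, feat in per_pose.items()
        )
    return min(scores, key=scores.get)   # identity with the smallest weighted distance
```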

15.
何晓光  田捷  毋立芳  张瑶瑶  杨鑫 《软件学报》2007,18(9):2318-2325
Face recognition under complex lighting conditions is a difficult but urgent problem, and an effective illumination normalization algorithm is proposed for it. Based on the characteristics of facial illumination, the method normalizes face images captured under various lighting conditions using mathematical morphology and the quotient image technique, and it is further extended to estimate illumination intensity dynamically, which strengthens the removal of illumination while preserving facial features. Compared with traditional techniques, the method requires neither a training data set nor assumptions about light source positions, and only one registered image per person is needed. Tests on the Yale face database B show that the algorithm achieves excellent recognition performance at a small computational cost.

16.
Total variation models for variable lighting face recognition
In this paper, we present the logarithmic total variation (LTV) model for face recognition under varying illumination, including natural lighting conditions, where the strength, direction, or number of light sources is rarely known. The proposed LTV model can factorize a single face image and obtain the illumination-invariant facial structure, which is then used for face recognition. Our model is inspired by the SQI model but has better edge-preserving ability and simpler parameter selection. A merit of this model is that it requires neither a lighting assumption nor any training. The LTV model reaches very high recognition rates in tests on both the Yale and CMU PIE face databases, as well as on a face database containing 765 subjects under outdoor lighting conditions.
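A rough sketch of the factorization the LTV model performs: in the logarithmic domain the face is split into a large-scale component u (illumination) and a small-scale component v = log(I) - u (illumination-invariant structure used for recognition). As a stand-in for the paper's TV-L1 solver, the TV denoiser from scikit-image (an ROF/TV-L2 solver) is used here, so this only approximates the LTV decomposition; the weight is an illustrative value.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def ltv_like_structure(image, weight=0.4):
    log_img = np.log1p(image.astype(float))
    large_scale = denoise_tv_chambolle(log_img, weight=weight)   # smooth, illumination-like part u
    return log_img - large_scale                                 # small-scale facial structure v
```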

17.
Face recognition is a rapidly growing research area due to increasing demands for security in commercial and law enforcement applications. This paper provides an up-to-date review of research efforts in face recognition techniques based on two-dimensional (2D) images in the visual and infrared (IR) spectra. Face recognition systems based on visual images have reached a significant level of maturity with some practical success. However, the performance of visual face recognition may degrade under poor illumination conditions or for subjects of various skin colors. IR imagery represents a viable alternative to visible imaging in the search for a robust and practical identification system. While visual face recognition systems perform relatively reliably under controlled illumination conditions, thermal IR face recognition systems are advantageous when there is no control over illumination or for detecting disguised faces. Face recognition using 3D images is another active area of face recognition, which provides robust face recognition with changes in pose. Recent research has also demonstrated that the fusion of different imaging modalities and spectral components can improve the overall performance of face recognition.

18.
Face recognition under arbitrary illumination based on spherical harmonic basis images
An illumination compensation algorithm based on spherical harmonic basis images is proposed for face recognition under arbitrary lighting conditions. The algorithm proceeds in two steps: illumination estimation and illumination compensation. Under the assumptions that face shapes are roughly the same and that the albedo of every face is approximately equal, the nine low-frequency harmonic coefficients of the illumination of the input face image are first estimated. Based on the estimated illumination, two compensation methods are proposed: the texture image and the difference image. The texture image is the quotient of the input image and its illumination irradiance map, and is independent of the lighting conditions of the input image. The difference image is the difference between the input image and the image of the average face under the same illumination; subtracting the latter weakens the influence of illumination. Experiments on the CMU-PIE and Yale B face databases show that illumination compensation greatly improves the recognition rate for face images under different illuminations.

19.
Objective: Face images collected in real environments are usually affected by illumination, occlusion and other environmental factors, so that images of the same subject differ to varying degrees while images of different subjects may be similar to varying degrees, which greatly reduces recognition accuracy. To address this, a face recognition algorithm based on discriminative structured low-rank dictionary learning, built on low-rank matrix recovery theory, is proposed. Method: Using the label information of the training samples, the algorithm introduces both low-rank regularization and structured sparsity into the learned discriminative dictionary. During dictionary learning, the reconstruction error of the samples first constrains the relation between the samples and the dictionary; the Fisher criterion is then applied to the sparse coding stage so that the coding coefficients become discriminative; because noise in the training samples degrades the discriminability of the dictionary, low-rank regularization based on low-rank matrix recovery is applied during dictionary learning; structured sparsity is further added so that structural information is not lost and samples can be classified optimally; finally, test samples are classified by reconstruction error. Results: The algorithm was evaluated on the AR and ORL face databases. On the AR database, to analyze the effect of feature dimension, six images per person from the first session were selected as training samples: one with scarf occlusion, two with sunglasses occlusion, and three unoccluded images with expression and illumination variations; test samples with the same composition were selected. For every method, the recognition rate increases with feature dimension. Comparing SRC (sparse representation based classification) with DKSVD (discriminative K-means singular value decomposition) shows that the dictionary learning in DKSVD alleviates the influence of uncertainty in the training samples; comparing DLRD_SR (discriminative low-rank dictionary learning for sparse representation) with FDDL (Fisher discriminative dictionary learning) shows that, when occlusion or other noise is present, a low-rank dictionary improves the recognition rate by at least 5.8%; comparing the proposed algorithm with DLRD_SR shows that adding the Fisher criterion to dictionary learning significantly raises the recognition rate, while the ideal sparse code guarantees optimal classification. When the feature dimension reaches 500, the recognition rate under scarf and sunglasses occlusion reaches 85.2%; the sunglasses and scarf occlusions cover roughly 20% and 40% of the face, respectively. To verify the effectiveness of the algorithm under different expression, illumination and occlusion conditions, experiments were carried out with several training-sample compositions, and in every composition the proposed algorithm has a clear advantage when occlusion is present. When the training samples contain only expression variation, illumination variation and sunglasses occlusion, its recognition rate is at least 2.7% higher than the other algorithms; with expression variation, illumination variation and scarf occlusion, at least 3.6% higher; and with expression variation, illumination variation, scarf occlusion and sunglasses occlusion, at least 1.9% higher. On the ORL database, the recognition rate reaches 95.2% without occlusion, slightly below that of FDDL; with 20% random block occlusion the proposed algorithm achieves the highest recognition rate among SRC, DKSVD, FDDL and DLRD_SR; with 50% random block occlusion none of the algorithms performs well, but the proposed algorithm still has the highest rate. Conclusion: The algorithm is robust when face images are affected by occlusion and similar factors, and the experimental results show that it is feasible for face recognition.

20.
To investigate the robustness of face recognition algorithms under complicated variations of illumination, facial expression and pose, the advantages and disadvantages of seven typical algorithms for extracting global and local features are studied experimentally on the Olivetti Research Laboratory (ORL) database and on three further databases (subsets for illumination, expression and pose constructed by selecting images from several existing face databases). Based on these results, two face recognition schemes built on decision-level fusion of two-dimensional linear discriminant analysis (2DLDA) and local binary patterns (LBP) are proposed to raise the recognition rate. In addition, the face is partitioned non-uniformly for its LBP histograms to improve performance. Our experiments show that the two kinds of features, 2DLDA and LBP, are complementary, and verify the effectiveness of the proposed fusion algorithms.
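A minimal sketch of the LBP half of the fusion scheme described above: uniform LBP codes are computed and the face is partitioned into blocks whose code histograms are concatenated into a feature vector (a uniform grid is used here for simplicity; the paper partitions the face non-uniformly). The 2DLDA branch and the decision-fusion rule are not reproduced; the grid size and LBP parameters are illustrative.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_block_histograms(image, grid=(7, 7), points=8, radius=1):
    codes = local_binary_pattern(image, points, radius, method='uniform')
    n_bins = points + 2                                   # number of uniform LBP labels
    features = []
    for row in np.array_split(codes, grid[0], axis=0):
        for block in np.array_split(row, grid[1], axis=1):
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins))
            features.append(hist / max(block.size, 1))    # normalized per-block histogram
    return np.concatenate(features)
```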

