Similar Documents
20 similar documents found.
1.
Illumination- and Pose-Invariant Face Recognition Based on 3D Face Reconstruction
Differences in pose and illumination between a probe face image and the stored prototype (gallery) images are two major bottlenecks in automatic face recognition; existing solutions typically handle only one of the two and cannot deal with illumination and pose simultaneously. This paper proposes a method that corrects pose and illumination variations in a face image at the same time: through an illumination-invariant 3D face reconstruction process, both pose and lighting are normalized to predefined standard conditions. First, a prior statistical deformable model, combined with a few key points on the face image, is used to recover a fairly detailed 3D face shape. Based on this reconstructed shape, a spherical-harmonic quotient-image method estimates the illumination of the input image and extracts its illumination-independent texture, so that an illumination-free 3D face can be fully reconstructed. Virtual views of the input face under the standard pose and illumination are then synthesized for the final classification, handling the illumination and pose problems jointly. Experimental results on the CMU PIE database show that this method substantially improves the accuracy of existing face recognition methods when the gallery and probe images differ in pose and illumination.
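As background for the spherical-harmonic quotient-image step, a minimal sketch of the standard low-order harmonic relighting relation (generic notation, not necessarily the paper's own):

```latex
% Low-order spherical-harmonic image model for a Lambertian face:
% rho = albedo, n(p) = surface normal at pixel p, H_k = harmonic basis, l_k = lighting coefficients.
I(p) \;\approx\; \rho(p) \sum_{k=1}^{9} l_k \, H_k\!\bigl(n(p)\bigr)
% Quotient image (illumination-free texture) recovered from the input image and the
% reconstructed 3D shape, then re-rendered under canonical lighting l^{can}:
Q(p) \;=\; \frac{I_{\mathrm{in}}(p)}{\sum_{k=1}^{9} \hat{l}_k \, H_k\!\bigl(n(p)\bigr)},
\qquad
I_{\mathrm{can}}(p) \;=\; Q(p) \sum_{k=1}^{9} l^{\mathrm{can}}_k \, H_k\!\bigl(n(p)\bigr)
```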

2.
HMM-Based Single-Sample Face Recognition under Varying Illumination and Pose
This paper proposes an HMM-based single-sample face recognition algorithm for varying illumination and pose. The algorithm first uses a manually registered training set to automatically align a single frontal input face image with the Candide-3 model, and reconstructs a person-specific 3D face model on top of this alignment. Rotating the reconstructed model to various angles yields synthetic faces with different poses, and adjusting the lighting coefficients of spherical-harmonic basis images produces synthetic faces under different illumination. The synthesized faces with varying illumination and pose, together with the original sample image, are used as training data to build an individual hidden Markov model for each user. The algorithm is evaluated on existing face databases and compared with recognition methods based on illumination compensation and pose correction. The results show that it avoids the low recognition rates those methods suffer when illumination or pose correction is imperfect, and adapts better to face recognition under varying illumination and pose.
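A minimal sketch of the per-user HMM training stage, using hmmlearn as a stand-in for the paper's HMM and a hypothetical row-strip feature extractor in place of whatever observation sequence the paper actually uses; the synthesized pose/illumination variants plus the original sample serve as training images:

```python
import numpy as np
from hmmlearn import hmm  # assumption: GaussianHMM as a stand-in for the paper's HMM

def strip_features(face, n_strips=16):
    """Hypothetical observation sequence: mean-normalized horizontal strips, top to bottom."""
    h = face.shape[0] // n_strips
    strips = [face[i * h:(i + 1) * h].ravel() for i in range(n_strips)]
    return np.array([(s - s.mean()) / (s.std() + 1e-8) for s in strips])

def train_user_model(training_faces, n_states=5):
    """One HMM per user, trained on the original sample plus synthesized pose/illumination variants."""
    seqs = [strip_features(f) for f in training_faces]
    X = np.vstack(seqs)
    lengths = [len(s) for s in seqs]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    model.fit(X, lengths)
    return model

def identify(face, user_models):
    """Assign the identity whose HMM gives the highest log-likelihood for the probe face."""
    obs = strip_features(face)
    return max(user_models, key=lambda uid: user_models[uid].score(obs))
```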

3.
Differences in pose and illumination between the probe face image and the original image are two major bottlenecks in automatic face recognition. This paper uses a 3D face model to address the effect of pose variation on face recognition. From frontal and profile images, three-dimensional faces are reconstructed by combining B-spline curves with radial basis functions, producing a 3D face model library. The three pose angles of the probe face image are computed to quickly estimate its pose; combining the estimated pose parameters with the 3D model library yields 2D face images from the library rendered in the same pose as the probe. Finally, 2D face recognition is carried out between faces of the same pose. Experimental results show that the method needs no complex equipment, is simple to implement, and has a short recognition time, making it a practical way to handle the pose problem in face recognition.

4.
Illumination differences between the probe face image and the prototype images in the database are one of the main bottlenecks in automatic face recognition. This paper proposes an example-based 3D face shape reconstruction method that can both generate database face images under arbitrary lighting and relight the image to be recognized, synthesizing shadow-free images. When building the face database, photometric stereo is used to separate the texture and shape information of the face images, the 3D information is recovered in the least-squares sense with a polyhedral model, and the normal field is updated to suppress shadow errors, so that computer graphics techniques can synthesize face images under arbitrary illumination and small pose changes. At recognition time, the input image is modeled as a linear combination of the 3D data in the database to estimate its 3D information, which allows it to be relit. Experiments on the Yale B face database show that, once the 3D face database has been built, the method can quickly recover the 3D information of the face in a single input image and generate images of that face under arbitrary illumination.
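The photometric-stereo step can be illustrated with a small least-squares sketch; this assumes known light directions and Lambertian reflectance, which is the textbook setting rather than the paper's exact pipeline:

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel albedo and unit normals from images taken under known light directions.

    images:     array of shape (k, h, w), k >= 3 grayscale images of the same face
    light_dirs: array of shape (k, 3), unit lighting directions for each image
    Lambertian model: I_j(p) = albedo(p) * (n(p) . s_j); shadows ignored in this sketch.
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                             # (k, h*w)
    # Least-squares solve  light_dirs @ g = I  for g(p) = albedo(p) * n(p) at every pixel.
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)    # (3, h*w)
    albedo = np.linalg.norm(g, axis=0)                    # (h*w,)
    normals = g / (albedo + 1e-8)                         # unit normals, (3, h*w)
    return albedo.reshape(h, w), normals.reshape(3, h, w)
```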

5.
王奇  雷航  王旭鹏 《计算机应用》2023,43(2):595-600
Face verification is widely used in everyday scenarios, but ordinary RGB images depend on lighting conditions. To counter the interference of illumination and head pose, a convolutional Siamese network, L2-Siamese, was proposed. First, pairs of depth maps are taken directly as input; then two weight-sharing convolutional neural networks extract facial features, and an L2 norm is introduced to constrain the features of faces in different poses onto a hypersphere of fixed radius; finally, a fully connected layer maps the difference between the features to a probability in (0, 1) that decides whether the pair of images belongs to the same person. To verify the effectiveness of L2-Siamese, it was tested on the public Pandora dataset. The experimental results show that L2-Siamese performs well overall. When Pandora is split by the severity of head-pose interference, the network improves prediction accuracy under the largest head-pose interference by 4 percentage points over the best existing algorithm, a fully convolutional Siamese network, a clear improvement.
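A minimal numpy sketch of the verification head described above: features from the two branches are L2-normalized onto a fixed-radius hypersphere, and their difference is mapped to a probability in (0, 1). The feature extractor and the weights of the final layer are placeholders, not the published network:

```python
import numpy as np

RADIUS = 1.0  # fixed hypersphere radius the L2 constraint maps features onto

def l2_project(feat, radius=RADIUS):
    """Constrain a feature vector onto a hypersphere of fixed radius (the L2-Siamese constraint)."""
    return radius * feat / (np.linalg.norm(feat) + 1e-8)

def same_person_probability(feat_a, feat_b, w, b):
    """Map the difference of two projected features to a probability via a hypothetical linear layer + sigmoid."""
    diff = np.abs(l2_project(feat_a) - l2_project(feat_b))
    logit = float(w @ diff + b)
    return 1.0 / (1.0 + np.exp(-logit))

# Usage with placeholder embeddings from the two weight-sharing branches:
rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=128), rng.normal(size=128)
w, b = rng.normal(size=128), 0.0
print(same_person_probability(f1, f2, w, b))
```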

6.
In unconstrained environments, illumination, pose, expression, occlusion, and cluttered backgrounds severely affect face recognition. The Active Appearance Model (AAM) can build a prior model containing face shape and texture information, match it to the face in an image, and synthesize new face images. Gabor features are widely used in face recognition and have achieved good results. This paper uses AAM to correct the pose of a face image and synthesize a canonical frontal face, then extracts entropy-enhanced Gabor jet features from the image and performs recognition with a thresholded Borda count classifier. Experiments on the IMM database show that the improved method is more robust to pose, expression, and occlusion, and achieves better recognition results.
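A small sketch of the Gabor-jet extraction stage, using skimage's gabor filter; the orientations, frequencies, and the entropy weighting are placeholders (the abstract's "entropy-enhanced" step is only hinted at here):

```python
import numpy as np
from skimage.filters import gabor  # returns (real, imaginary) filter responses

def gabor_jet(image, point, frequencies=(0.1, 0.2, 0.3), n_orientations=8):
    """Gabor jet: filter-response magnitudes at one landmark over several frequencies and orientations."""
    y, x = point
    jet = []
    for f in frequencies:
        for k in range(n_orientations):
            real, imag = gabor(image, frequency=f, theta=k * np.pi / n_orientations)
            jet.append(np.hypot(real[y, x], imag[y, x]))
    return np.array(jet)

# Jets from several landmarks (eyes, nose, mouth, ...) would be concatenated and,
# per the abstract, reweighted by an entropy measure before Borda-count classification.
```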

7.
Multi-Pose Face Tracking in Video Images Based on Feature Triangles
A robust and effective face tracking algorithm for video images is proposed that can track faces in complex environments. The algorithm constructs feature triangles, including isosceles and right triangles, from facial features, and generates candidate face-tracking rectangles based on rigid-body constraints. It can detect faces at different scales and under different illumination, poses, and expressions, and even under noise, with an effectiveness of 98.18%.
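One plausible reading of the feature-triangle constraint, sketched as a simple geometric check on eye and mouth coordinates; the exact tolerances and the right-triangle variant are assumptions:

```python
import numpy as np

def is_face_triangle(left_eye, right_eye, mouth, tol=0.2):
    """Check the eye-eye-mouth triangle is roughly isosceles: the two eye-to-mouth sides have similar length."""
    le, re, m = map(np.asarray, (left_eye, right_eye, mouth))
    d_left = np.linalg.norm(le - m)
    d_right = np.linalg.norm(re - m)
    return abs(d_left - d_right) <= tol * max(d_left, d_right)

print(is_face_triangle((30, 40), (30, 80), (70, 60)))   # roughly frontal face -> True
```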

8.
To address the effect of illumination changes on static head-pose estimation, the paper proposes a pose feature based on histograms of oriented gradients (HOG) and principal component analysis (PCA), classified with an SVM. The algorithm is tested on the CMU Pose, Illumination, and Expression (PIE) database and the CVL face database. Experiments show that even under large illumination changes the algorithm still estimates head pose accurately, with a recognition rate above 90%.
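A compact sketch of the HOG + PCA + SVM pipeline the abstract describes, using skimage and scikit-learn; the HOG cell sizes, PCA dimensionality, and SVM kernel are assumptions, not the paper's settings:

```python
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def hog_features(images):
    """HOG descriptor per grayscale image; parameters are typical defaults."""
    return np.array([
        hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for img in images
    ])

def train_pose_estimator(train_images, pose_labels, n_components=50):
    """Project HOG descriptors with PCA, then train an SVM on the reduced features."""
    X = hog_features(train_images)
    pca = PCA(n_components=n_components).fit(X)
    clf = SVC(kernel="rbf").fit(pca.transform(X), pose_labels)
    return pca, clf

def estimate_pose(image, pca, clf):
    x = hog_features([image])
    return clf.predict(pca.transform(x))[0]
```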

9.
A Matching Method for Face Image Recognition
For face image recognition, feature points of the facial image need to be extracted. The nose tip, eye corners, nose corners, and mouth corners are extracted, and with the nose tip as the reference point, its relations to the left and right corners of both eyes, the two nose corners, and the two mouth corners form an 11-dimensional feature vector, which is stored in the database as the standard face feature vector. When analyzing a face image, an 11-dimensional feature vector is extracted in the same way and compared against the vectors in the database; when the data are close or matching, the two faces are judged to be the same person. Experiments show that the recognition rate of the system meets the design requirements.
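A tiny numpy sketch of the matching step. The abstract does not spell out how the 11 components are formed, so this sketch simply uses distances from the nose tip to the other landmarks as a stand-in feature and thresholded nearest-neighbour matching:

```python
import numpy as np

def face_vector(nose_tip, landmarks):
    """Stand-in feature: Euclidean distances from the nose tip (reference point) to the other landmarks."""
    nose = np.asarray(nose_tip, dtype=float)
    return np.array([np.linalg.norm(np.asarray(p, dtype=float) - nose) for p in landmarks])

def match(query_vec, database, threshold=5.0):
    """Return the identity whose stored vector is closest to the query, if it is close enough."""
    best_id, best_dist = None, np.inf
    for identity, vec in database.items():
        d = np.linalg.norm(query_vec - vec)
        if d < best_dist:
            best_id, best_dist = identity, d
    return best_id if best_dist <= threshold else None
```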

10.
To address the inconsistency in facial pose, illumination, and color distribution between the input and target images in face transplantation (face swapping), an automatic face-photo transplantation method based on multi-scale analysis is proposed. A multilinear model recovers a 3D face model from a single image, which is used to automatically transform the pose of the face in the input image. A multi-scale enhancement and fusion algorithm is proposed that automatically adjusts the input image according to the detail features of the target image and synthesizes a new face photo by seamless blending. Experimental results show that the method effectively matches the input image to the shading and color distribution of the target image and adaptively adjusts local details. The method is robust across transplantation between various face images, and the synthesized photos look realistic.

11.
Feature extraction from images, which are typically of high dimensionality, is crucial to recognition performance. To explore the discriminative information while suppressing the intra-class variations due to variable illumination and viewing conditions, we propose a factor analysis framework for separating "content" from "style": identifying a familiar face seen under unfamiliar viewing conditions, classifying familiar poses presented in an unfamiliar face, and estimating age across unfamiliar faces. The framework applies efficient algorithms derived from objective factor-separating functions and space-mapping functions, which produce sufficiently expressive representations for feature extraction and dimensionality reduction. We report promising results on three different tasks in high-dimensional image perceptual domains: face identification with two benchmark face databases, facial pose classification with a benchmark facial pose database, and extrapolation of age to unseen facial images. Experimental results show that our approach produces higher classification performance than classical LDA, WLDA, LPP, MFA, and DLA algorithms.

12.
We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render (or synthesize) images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone. Test results show that the method performs almost without error, except on the most extreme lighting directions.
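The key fact the method builds on can be stated compactly (standard notation for a convex Lambertian object, not the paper's exact symbols):

```latex
% Image of a convex Lambertian object in fixed pose under a single distant light s:
x \;=\; \max(Bs,\,0), \qquad B \in \mathbb{R}^{n \times 3},\;\; B_{p,:} = \rho(p)\, n(p)^{\!\top}
% (row p of B holds the albedo-scaled surface normal of pixel p).
% The set of all images under arbitrary combinations of lights is the convex cone
\mathcal{C} \;=\; \Bigl\{ \textstyle\sum_i \max(Bs_i,\,0) \;:\; s_i \in \mathbb{R}^3 \Bigr\},
% and a test image is assigned to the identity whose (pose-sampled, low-dimensionally
% approximated) cone it is closest to.
```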

13.
Face Image Recognition under Variable Illumination Conditions
To deal with the effect of illumination variation in face image recognition, the traditional solution is to apply illumination compensation to the probe image, converting it to an image under standard lighting, and then match it against template images. To improve the recognition rate when the lighting varies over a wide range, a new method for face image recognition under variable illumination is proposed. The method first uses nine images, each captured under one of nine basic lighting directions, to construct a face illumination feature space; through this space, the face images in the gallery are transformed into images with the same illumination as the probe image and used as templates; recognition is then performed with the eigenface method. Experimental results show that the method not only effectively counteracts the drop in recognition rate caused by illumination changes, but also achieves a relatively high recognition rate when the illumination varies over a wide range.
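One plausible reading of the relighting step, sketched as a least-squares fit: the nine basis images of a subject span that subject's illumination space, lighting coefficients are fitted to the probe, and a gallery face is re-rendered with those coefficients. The exact construction of the illumination feature space in the paper may differ:

```python
import numpy as np

def fit_lighting(basis_images, probe_image):
    """Least-squares lighting coefficients expressing the probe in the span of the nine basis images."""
    B = np.stack([b.ravel() for b in basis_images], axis=1)     # (n_pixels, 9)
    c, *_ = np.linalg.lstsq(B, probe_image.ravel(), rcond=None)
    return c                                                     # (9,)

def relight_gallery(gallery_basis_images, coeffs):
    """Synthesize the gallery face under the probe's estimated lighting."""
    B = np.stack([b.ravel() for b in gallery_basis_images], axis=1)
    shape = gallery_basis_images[0].shape
    return (B @ coeffs).reshape(shape)
```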

14.
Face Recognition with Facial Expressions Based on Feature Motion
Recognizing faces with facial expressions has always been a difficult problem in face recognition. To improve the robustness of expression-bearing face recognition, a face recognition method based on feature motion is proposed. The method first uses block matching to determine the motion vectors between an expressive face and a neutral face, then applies principal component analysis (PCA) to these motion vectors to produce a low-dimensional subspace called the feature-motion space. At test time, the motion vector between the test face and the neutral face is projected into the feature-motion space, and recognition is performed according to the residual of that motion vector in the space. Person-specific and common-model variants of the feature-motion method are also described. Experimental results show that the new algorithm outperforms the eigenface method on expressive faces and achieves a very high recognition rate.
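A numpy sketch of the feature-motion idea: PCA over training motion-vector fields gives the feature-motion space, and a test motion vector is scored by its reconstruction residual in that space. The block-matching step that produces the motion vectors is omitted:

```python
import numpy as np

def build_feature_motion_space(motion_vectors, n_components=10):
    """PCA basis of training motion vectors (each row: one flattened expressive-vs-neutral motion field)."""
    X = np.asarray(motion_vectors, dtype=float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]          # mean motion and principal motion directions

def residual(motion_vec, mean, basis):
    """Reconstruction error of a test motion vector in the feature-motion space (used for recognition)."""
    centered = motion_vec - mean
    projection = basis.T @ (basis @ centered)
    return np.linalg.norm(centered - projection)
```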

15.
In this paper, an effective method of facial feature detection is proposed for human-robot interaction (HRI). Given the mobility of a mobile robot, any vision system it carries is bound to face various imaging conditions such as pose variations, illumination changes, and cluttered backgrounds. To detect faces correctly under such difficult conditions, we focus on the local intensity pattern of the facial features. Their relatively dark and directionally distinct patterns provide robust clues for detecting facial features. Based on this observation, we suggest a new directional template for detecting the major facial features, namely the two eyes and the mouth. By applying this template to a facial image, we produce a new convolved image, which we refer to as the edge-like blob map. One distinctive characteristic of this map image is that it provides local and directional convolution values for each image pixel, which makes it easier to construct the candidate blobs of the major facial features without information about the facial boundary. These candidates are then filtered using conditions on the spatial relationship of the two eyes and the mouth, and face detection is completed by applying appearance-based facial templates to the refined facial features. The overall detection results obtained with various color images and gray-level face database images demonstrate the usefulness of the proposed method in HRI applications.

16.
This paper presents a new face recognition algorithm that is insensitive to variations in lighting conditions. In the proposed algorithm, the MCT (Modified Census Transform) is embedded to extract local facial features that are invariant under illumination changes. In this study, we also employ an appearance-based method to incorporate both local and global features. First, input facial images are transformed by the MCT, and the bit string from the MCT is converted to a decimal number to generate an MCT domain image. This domain image is recognized using principal component analysis (PCA) or linear discriminant analysis (LDA). Experimental results reveal that the recognition rate of the proposed approach is better than that of conventional appearance-based algorithms by approximately 20% on the Yale B database in the case of severe variations in illumination. We also found that the proposed algorithm yields better performance on the Yale database across various facial expressions, eyewear, and lighting conditions.
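A small numpy sketch of the Modified Census Transform: each pixel's 3×3 neighbourhood is compared against the neighbourhood mean, and the resulting 9-bit pattern is read as a decimal value, giving an illumination-insensitive domain image (border pixels are simply skipped here):

```python
import numpy as np

def modified_census_transform(img):
    """MCT domain image: 9-bit comparison of each 3x3 neighbourhood against its own mean."""
    img = img.astype(float)
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint16)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            bits = (patch > patch.mean()).astype(np.uint8).ravel()
            out[y, x] = int("".join(map(str, bits)), 2)   # 9-bit code as a decimal number
    return out

# The MCT domain image would then be fed to PCA or LDA for recognition, as the abstract describes.
```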

17.
Recently, the importance of face recognition has been increasingly emphasized as inexpensive CCD cameras have spread to various applications. However, facial images are dramatically changed by lighting variations, and these changes in facial appearance cause serious performance degradation in face recognition. Many researchers have tried to overcome these illumination problems with diverse approaches, which have required multiple registered images per person or prior knowledge of the lighting conditions. In this paper, we propose a new method for face recognition under arbitrary lighting conditions, given only a single registered image and training data under unknown illuminations. The proposed method is based on illuminated exemplars which are synthesized from photometric stereo images of the training data. A linear combination of illuminated exemplars can represent a new face, and the weighted coefficients of those illuminated exemplars are used as an identity signature. We conduct experiments to verify our approach and compare it with two traditional approaches. Higher recognition rates are reported in these experiments using the illumination subset of the Max-Planck Institute face database and the Korean face database.

18.
Pattern Recognition, 2005, 38(10): 1705-1716
The appearance of a face will vary drastically when the illumination changes. Variations in lighting conditions make face recognition an even more challenging and difficult task. In this paper, we propose a novel approach to handle the illumination problem. Our method can restore a face image captured under arbitrary lighting conditions to one with frontal illumination by using a ratio-image between the face image and a reference face image, both of which are blurred by a Gaussian filter. An iterative algorithm is then used to update the reference image, which is reconstructed from the restored image by means of principal component analysis (PCA), in order to obtain a visually better restored image. Image processing techniques are also used to improve the quality of the restored image. To evaluate the performance of our algorithm, restored images with frontal illumination are used for face recognition by means of PCA. Experimental results demonstrate that face recognition using our method can achieve a higher recognition rate based on the Yale B database and the Yale database. Our algorithm has several advantages over previous algorithms: (1) it does not need to estimate the face surface normals and the light source directions, (2) it does not need many images captured under different lighting conditions for each person, nor a set of bootstrap images that includes many images with different illuminations, and (3) it does not need to detect accurate positions of some facial feature points or to warp the image for alignment, etc.
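A minimal sketch of the blurred ratio-image restoration step, using a Gaussian filter from scipy; the choice of sigma and of the reference image (e.g., a mean frontally lit face) are assumptions, and the iterative PCA update of the reference is omitted:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def restore_frontal_illumination(image, reference, sigma=15.0, eps=1e-6):
    """Restore frontal-like lighting: scale the input by the ratio of blurred reference to blurred input."""
    image = image.astype(float)
    reference = reference.astype(float)
    ratio = gaussian_filter(reference, sigma) / (gaussian_filter(image, sigma) + eps)
    return np.clip(image * ratio, 0, 255)
```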

19.
In this paper, we present a new method to modify the appearance of a face image by manipulating the illumination condition, when the face geometry and albedo information is unknown. This problem is particularly difficult when there is only a single image of the subject available. Recent research demonstrates that the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace using a spherical harmonic representation. Moreover, morphable models are statistical ensembles of facial properties such as shape and texture. In this paper, we integrate spherical harmonics into the morphable model framework by proposing a 3D spherical harmonic basis morphable model (SHBMM). The proposed method can represent a face under arbitrary unknown lighting and pose simply by three low-dimensional vectors, i.e., shape parameters, spherical harmonic basis parameters, and illumination coefficients, which are called the SHBMM parameters. However, when the image was taken under an extreme lighting condition, the approximation error can be large, thus making it difficult to recover albedo information. In order to address this problem, we propose a subregion-based framework that uses a Markov random field to model the statistical distribution and spatial coherence of face texture, which makes our approach not only robust to extreme lighting conditions, but also insensitive to partial occlusions. The performance of our framework is demonstrated through various experimental results, including the improved rates for face recognition under extreme lighting conditions.

20.
In this paper, we propose two novel methods for face recognition under arbitrary unknown lighting by using spherical harmonics illumination representation, which require only one training image per subject and no 3D shape information. Our methods are based on the result which demonstrated that the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace. We provide two methods to estimate the spherical harmonic basis images spanning this space from just one image. Our first method builds the statistical model based on a collection of 2D basis images. We demonstrate that, by using the learned statistics, we can estimate the spherical harmonic basis images from just one image taken under arbitrary illumination conditions if there is no pose variation. Compared to the first method, the second method builds the statistical models directly in 3D spaces by combining the spherical harmonic illumination representation and a 3D morphable model of human faces to recover basis images from images across both poses and illuminations. After estimating the basis images, we use the same recognition scheme for both methods: we recognize the face for which there exists a weighted combination of basis images that is the closest to the test face image. We provide a series of experiments that achieve high recognition rates, under a wide range of illumination conditions, including multiple sources of illumination. Our methods achieve comparable levels of accuracy with methods that have much more onerous training data requirements. Comparison of the two methods is also provided.
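The recognition rule shared by the two methods can be written compactly (generic notation): once the nine harmonic basis images of each subject have been estimated from the single training image, the test image is assigned to the subject whose basis span it lies closest to:

```latex
% B_i \in \mathbb{R}^{n \times 9}: estimated spherical-harmonic basis images of subject i
% (columns = basis images, n = number of pixels); x_test: vectorized test image.
\hat{\imath} \;=\; \arg\min_{i}\; \min_{a \in \mathbb{R}^{9}} \;\bigl\| B_i\, a - x_{\mathrm{test}} \bigr\|_2
```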
