Similar Articles
20 similar articles found
1.
In this paper, we present algorithms to assess the quality of facial images affected by factors such as blurriness, lighting conditions, head pose variations, and facial expressions. We developed face recognition prediction functions for images affected by blurriness, lighting conditions, and head pose variations based upon the eigenface technique. We also developed a classifier for images affected by facial expressions to assess their quality for recognition by the eigenface technique. Our experiments using different facial image databases showed that our algorithms are capable of assessing the quality of facial images. These algorithms could be used in a module for facial image quality assessment in a face recognition system. In the future, we will integrate the different measures of image quality to produce a single measure that indicates the overall quality of a face image.

2.
We develop a face recognition algorithm which is insensitive to large variation in lighting direction and facial expression. Taking a pattern classification approach, we consider each pixel in an image as a coordinate in a high-dimensional space. We take advantage of the observation that the images of a particular face, under varying illumination but fixed pose, lie in a 3D linear subspace of the high-dimensional image space, provided the face is a Lambertian surface without shadowing. However, since faces are not truly Lambertian surfaces and do indeed produce self-shadowing, images will deviate from this linear subspace. Rather than explicitly modeling this deviation, we linearly project the image into a subspace in a manner which discounts those regions of the face with large deviation. Our projection method is based on Fisher's linear discriminant and produces well-separated classes in a low-dimensional subspace, even under severe variation in lighting and facial expressions. The eigenface technique, another method based on linearly projecting the image space to a low-dimensional subspace, has similar computational requirements. Yet extensive experimental results demonstrate that the proposed “Fisherface” method has error rates that are lower than those of the eigenface technique for tests on the Harvard and Yale face databases.
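To make the projection pipeline concrete, here is a minimal Fisherface-style sketch, not the authors' implementation: PCA first reduces dimensionality so the within-class scatter is non-singular, Fisher's linear discriminant then projects to at most c-1 dimensions, and a nearest-neighbor rule classifies in that subspace. The array shapes, component counts, and scikit-learn estimators are illustrative assumptions.

```python
# Minimal Fisherface-style sketch (PCA followed by LDA), assuming X is an
# (n_samples, n_pixels) array of flattened face crops and y holds class labels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def make_fisherface_classifier(n_classes, n_samples):
    # Reduce to at most (n_samples - n_classes) dimensions so the within-class
    # scatter matrix used by LDA is non-singular, then keep c-1 discriminants.
    pca_dim = max(1, n_samples - n_classes)
    return make_pipeline(
        PCA(n_components=pca_dim),
        LinearDiscriminantAnalysis(n_components=n_classes - 1),
        KNeighborsClassifier(n_neighbors=1),
    )

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.random((40, 32 * 32))   # stand-in for flattened face images
    y = np.repeat(np.arange(8), 5)  # 8 identities, 5 images each
    clf = make_fisherface_classifier(n_classes=8, n_samples=len(y))
    clf.fit(X, y)
    print(clf.predict(X[:3]))
```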

3.
Face images are difficult to interpret because they are highly variable. Sources of variability include individual appearance, 3D pose, facial expression, and lighting. We describe a compact parametrized model of facial appearance which takes into account all these sources of variability. The model represents both shape and gray-level appearance, and is created by performing a statistical analysis over a training set of face images. A robust multiresolution search algorithm is used to fit the model to faces in new images. This allows the main facial features to be located and a set of shape and gray-level appearance parameters to be recovered. A good approximation to a given face can be reconstructed using fewer than 100 of these parameters. This representation can be used for tasks such as image coding, person identification, 3D pose recovery, gender recognition, and expression recognition. Experimental results are presented for a database of 690 face images obtained under widely varying conditions of 3D pose, lighting, and facial expression. The system performs well on all the tasks listed above.

4.
Illumination and pose variation are two major bottlenecks in automatic face recognition. This paper proposes a processing pipeline to reduce both effects. First, grayscale normalization is applied to the training images to lower the sensitivity to illumination intensity; next, the head pose is estimated and eigenface subspaces are computed for each pose; finally, the concept of a Pose's Weight Value (PWV) is introduced and used to design a Weighted Minimum Distance Classifier (WMDC) that assigns different weights to different poses to suppress the effect of pose variation. Experimental results on the FERET and Yale B databases show that the method substantially improves the recognition rate under changes in illumination and pose.
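A rough sketch of the per-pose eigenface subspaces and the weighted minimum-distance classifier described above, assuming faces arrive already grouped by estimated pose; the actual Pose's Weight Value formula is not given in the abstract, so the weights here are caller-supplied placeholders.

```python
# Sketch of pose-specific eigenface subspaces with a weighted minimum-distance
# classifier. The pose weights are placeholders supplied by the caller.
import numpy as np
from sklearn.decomposition import PCA

class PoseWeightedEigenfaces:
    def __init__(self, n_components=20):
        self.n_components = n_components
        self.models = {}   # pose -> (PCA, class means in that subspace, classes)

    def fit(self, images_by_pose, labels_by_pose):
        for pose, X in images_by_pose.items():
            X = np.asarray(X, dtype=float) / 255.0   # crude intensity normalization
            pca = PCA(n_components=min(self.n_components, len(X) - 1)).fit(X)
            Z = pca.transform(X)
            labels = np.asarray(labels_by_pose[pose])
            classes = np.unique(labels)
            means = np.stack([Z[labels == c].mean(axis=0) for c in classes])
            self.models[pose] = (pca, means, classes)
        return self

    def predict(self, x, pose_weights):
        # Combine distances from every pose subspace, each scaled by its weight.
        best_label, best_score = None, np.inf
        for pose, (pca, means, classes) in self.models.items():
            z = pca.transform((np.asarray(x, dtype=float) / 255.0).reshape(1, -1))
            d = np.linalg.norm(means - z, axis=1) * pose_weights.get(pose, 1.0)
            i = int(np.argmin(d))
            if d[i] < best_score:
                best_score, best_label = d[i], classes[i]
        return best_label
```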

5.
Eigen-motion based recognition of faces with facial expressions   Cited by: 3
Facial expression has long been a difficulty for face recognition. To improve robustness to expressions, an eigen-motion based face recognition method is proposed. Block matching is first used to determine the motion vectors between an expressive face and a neutral face; principal component analysis (PCA) is then applied to these motion vectors to obtain a low-dimensional subspace called the eigen-motion space. At test time, the motion vectors between the test face and the neutral face are projected into the eigen-motion space, and recognition is performed according to the residual of the motion vector in that space. Both a person-specific model and a common model based on eigen-motion are described. Experimental results show that the new algorithm outperforms the eigenface method on expressive faces and achieves a very high recognition rate.
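The eigen-motion idea above can be sketched as follows, under the assumption that the neutral and expressive faces are equally sized grayscale arrays; the block size, search range, and subspace dimension are illustrative, not the paper's settings.

```python
# Rough sketch of the eigen-motion idea: block-matching motion vectors between
# an expression image and a neutral image are projected onto a PCA "eigen-motion"
# subspace, and the reconstruction residual serves as the match score.
import numpy as np
from sklearn.decomposition import PCA

def block_motion_vectors(neutral, expressive, block=8, search=4):
    """Return a flat array of (dy, dx) per block via exhaustive block matching."""
    h, w = neutral.shape
    vectors = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            ref = neutral[y:y + block, x:x + block].astype(float)
            best, best_v = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        cand = expressive[yy:yy + block, xx:xx + block].astype(float)
                        err = np.sum((ref - cand) ** 2)
                        if err < best:
                            best, best_v = err, (dy, dx)
            vectors.extend(best_v)
    return np.array(vectors)

# Training (not shown): fit the eigen-motion space on many pairs, e.g.
#   pca = PCA(n_components=10).fit(np.stack([block_motion_vectors(n, e) for n, e in pairs]))
def residual(pca, v):
    """Reconstruction error of a probe motion vector outside the eigen-motion space."""
    v = v.reshape(1, -1)
    return float(np.linalg.norm(v - pca.inverse_transform(pca.transform(v))))
```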

6.
This paper presents a new technique of unified probabilistic models for face recognition from only a single example image per person. The unified models, trained on a training set with multiple samples per person, are used to recognize facial images from another disjoint database with a single sample per person. Variations between facial images are modeled as two unified probabilistic models: within-class variations and between-class variations. Gaussian mixture models are used to approximate the distributions of the two variations, and a classifier combination method is exploited to improve performance. Extensive experimental results on the ORL face database and the authors' database (the ICT-JDL database), comprising a total of 1,750 facial images of 350 individuals, demonstrate that the proposed technique is a significantly more effective and robust approach for face recognition than the traditional eigenface method and several other well-known algorithms.
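A hedged sketch of the core modeling step, approximating within-class and between-class variation with Gaussian mixtures fitted to feature differences of image pairs and scoring a pair by a likelihood ratio; the pair-sampling scheme, mixture sizes, and diagonal covariances are assumptions, and the classifier-combination stage is omitted.

```python
# Sketch of modeling within-class and between-class variation with Gaussian
# mixtures. Feature differences between image pairs are scored by a likelihood ratio.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_variation_models(features, labels, n_components=3, seed=0):
    """features: (n, d) array, labels: (n,). Assumes enough same-identity pairs exist."""
    rng = np.random.default_rng(seed)
    within, between = [], []
    n = len(features)
    for _ in range(2000):                      # sample random image pairs
        i, j = rng.integers(0, n, size=2)
        if i == j:
            continue
        diff = features[i] - features[j]
        (within if labels[i] == labels[j] else between).append(diff)
    gm_w = GaussianMixture(n_components=n_components, covariance_type="diag",
                           random_state=seed).fit(np.array(within))
    gm_b = GaussianMixture(n_components=n_components, covariance_type="diag",
                           random_state=seed).fit(np.array(between))
    return gm_w, gm_b

def same_person_score(gm_w, gm_b, feat_a, feat_b):
    d = (feat_a - feat_b).reshape(1, -1)
    # Higher score -> more likely the two images show the same person.
    return float(gm_w.score_samples(d) - gm_b.score_samples(d))
```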

7.
This paper proposes a novel framework for real-time face tracking and recognition that combines two eigen-based methods. The first is a novel extension of the eigenface called the augmented eigenface; the second is a sparse 3D eigentemplate tracker controlled by a particle filter. The augmented eigenface is an eigenface augmented with an associative mapping to 3D shape, specified by a set of volumetric face models. The paper discusses how to construct the augmented eigenface and how it can be used to infer 3D shape from partial images. The associative mapping is also generalized to subspace-to-one mappings to cover photometric image changes for a fixed shape, and a technique called photometric adjustment is introduced for simple implementation of the associative mapping when an image subspace must be combined with a shape. The sparse 3D eigentemplate tracker extends the 3D template tracker proposed by Oka et al. Combined with the augmented eigenface, it enables real-time 3D tracking and recognition from a monocular image sequence: during tracking, the sparse 3D eigentemplate is updated by the augmented eigenface while the face pose is estimated by the sparse eigentracker. Since the augmented eigenface is built on conventional eigenfaces, face identification and expression recognition are also performed efficiently during tracking. In the experiment, an augmented eigenface was constructed from 25 faces, with 24 images taken under different lighting conditions for each face. Experimental results show that the augmented eigenface works with the 3D eigentemplate tracker for real-time tracking and recognition.

8.
This paper reports a prototype system for multi-pose face image recognition. Unlike existing systems and methods, it recognizes face images of cooperative subjects while allowing pose variation (in-plane rotation and in-depth rotation, limited to cases where both eyes remain visible). Because the imaging conditions are relaxed, the system can be expected to apply to identity verification, security, and video conferencing. Facial feature detection, pose estimation, recognition modeling, and template-correlation matching under varying pose are studied in depth, and the effects of illumination, pose, and resolution changes on recognition are analyzed. Experimental results show a 100% recognition rate on a test set of 30 subjects with 18 images per person.

9.
In this work, we propose a self-adaptive radial basis function neural network (RBFNN)-based method for high-speed recognition of human faces. The variation between images of the same person under changing pose, facial expression, illumination, and so on is quite high, so to achieve a high recognition rate it is necessary to exploit the structural information in these images during classification. In the present study, this is realized by modeling each training image as a hidden-layer neuron in the proposed RBFNN. To classify a facial image, a confidence measure is imposed on the outputs of the hidden-layer neurons to reduce the influence of images belonging to other classes. This makes the RBFNN self-adaptive in choosing a subset of hidden-layer neurons, those in the close neighborhood of the input image, for classifying it. The process reduces computation time at the output layer by neglecting ineffective radial basis functions, allowing the method to recognize faces at high speed, within the inter-frame period of video. Performance is evaluated in terms of sensitivity and specificity on two popular face recognition databases, the ORL and UMIST face databases. On the ORL database, the best average sensitivity (recognition) and specificity rates are 97.30% and 99.94%, respectively, using five samples per person in the training set; on the UMIST database they are 96.36% and 99.81%, respectively, using eight samples per person. The experimental results indicate that the proposed method outperforms several existing face recognition approaches.
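A minimal sketch of the self-adaptive RBFNN idea, with one hidden unit per training image and a confidence threshold that keeps only units close to the probe; the Gaussian kernel width and the relative threshold are illustrative choices, not the paper's.

```python
# Sketch of a self-adaptive RBFNN: every training image is a hidden RBF unit,
# and at test time only units whose activation clears a confidence threshold
# contribute, which prunes faraway classes and speeds up the output layer.
import numpy as np

class SelfAdaptiveRBFNN:
    def __init__(self, sigma=10.0, confidence=0.1):
        self.sigma = sigma              # Gaussian kernel width (assumed value)
        self.confidence = confidence    # fraction of the peak activation to keep

    def fit(self, X, y):
        self.centers = np.asarray(X, dtype=float)   # one hidden unit per image
        self.labels = np.asarray(y)
        self.classes = np.unique(self.labels)
        return self

    def predict(self, x):
        d2 = np.sum((self.centers - np.asarray(x, dtype=float)) ** 2, axis=1)
        act = np.exp(-d2 / (2.0 * self.sigma ** 2))
        keep = act >= self.confidence * act.max()   # self-adaptive neuron subset
        scores = [act[keep][self.labels[keep] == c].sum() for c in self.classes]
        return self.classes[int(np.argmax(scores))]
```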

10.
刘树利  胡茂林 《微机发展》2006,16(6):213-215
For face models obtained from different viewpoints, this paper proposes a recognition method based on the face surface: a planar projective transformation warps each face image onto a common image so that the images are aligned, and principal component analysis (PCA) is then used for classification. With this approach, unwanted variation caused by changes in lighting, facial expression, and pose can be removed or neglected, enabling reasonably accurate face recognition. Experimental results show that the method provides a better representation of the face model and achieves a lower face recognition error rate.
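A small sketch of the alignment-then-PCA pipeline, assuming four corresponding landmarks per face are available from some detector; cv2.getPerspectiveTransform estimates the planar projective transform and the warped images are fed to PCA. The canonical points, crop size, and component count are assumptions.

```python
# Sketch of planar projective alignment followed by PCA. Landmark coordinates
# are assumed to come from an external detector; images are assumed grayscale.
import numpy as np
import cv2
from sklearn.decomposition import PCA

def align_face(img, src_pts, dst_pts, size=(64, 64)):
    """Warp img so the four src landmarks map onto the canonical dst landmarks."""
    H = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    return cv2.warpPerspective(img, H, size)

def build_eigen_model(images, landmark_sets, canonical_pts, n_components=30):
    aligned = [align_face(im, pts, canonical_pts).ravel()
               for im, pts in zip(images, landmark_sets)]
    X = np.asarray(aligned, dtype=float)
    pca = PCA(n_components=min(n_components, len(X) - 1)).fit(X)
    return pca, pca.transform(X)   # subspace model and projected training faces
```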

11.
Low-dimensional illumination space representation of face images under arbitrary lighting   Cited by: 3
This paper proposes a low-dimensional illumination space representation of face images under different lighting conditions. The representation can not only estimate the lighting parameters of an input image but also generate virtual face images under a given lighting condition. Principal component analysis and nearest-neighbor clustering are used to determine the positions of nine basic point light sources, which can approximate almost all lighting conditions encountered in face recognition applications. The nine face basis images captured under these nine light sources form the low-dimensional face illumination space, which can represent face images under different lighting; combined with the illumination ratio image method, virtual face images under new lighting can be generated. The main advantage of the proposed illumination space is that a space built from images of one face can be applied to different faces. Experiments on image reconstruction and on face recognition under different illumination demonstrate the effectiveness of the algorithm.
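A sketch of how a nine-basis-image illumination space could be used, assuming the nine basis images are already available as rows of a matrix: lighting coefficients are recovered by least squares and a virtual image under new lighting is their weighted sum. The construction of the basis itself (PCA plus nearest-neighbor clustering) and the ratio-image step are not reproduced.

```python
# Sketch of a low-dimensional illumination space spanned by nine basis images.
import numpy as np

def estimate_lighting(basis, image):
    """basis: (9, n_pixels) rows; image: (n_pixels,). Returns 9 lighting coefficients."""
    coeffs, *_ = np.linalg.lstsq(basis.T, image, rcond=None)
    return coeffs

def render_under_lighting(basis, coeffs):
    """Synthesize a face image as a weighted sum of the nine basis images."""
    return basis.T @ coeffs

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    basis = rng.random((9, 64 * 64))          # stand-in for the 9 basis images
    target = 0.5 * basis[0] + 0.3 * basis[4]  # image lit by two of the lights
    c = estimate_lighting(basis, target)
    print(np.allclose(render_under_lighting(basis, c), target))  # True
```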

12.
An ICA mixed-feature face recognition method based on the PCA residual-image space   Cited by: 1
武妍  宋金晶 《计算机应用》2005,25(7):1608-1610
To remedy the weakness of the traditional eigenface-based approach on faces with large illumination changes, a face recognition method using mixed ICA features from the "PCA residual-image space" is proposed. Unlike second-order PCA face recognition, independent component analysis replaces principal component analysis: independent-component features are extracted from the "PCA residual eigenface set" to obtain independent components of each face image in the PCA residual-image space, and these are combined with the independent components of the original images to form the mixed feature used for final recognition. Experiments show that, for face images with large variations in illumination, expression, and other external factors, the method outperforms the traditional eigenface method, ICA-based recognition, and second-order PCA face recognition, and has strong applicability.
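A hedged sketch of the mixed-feature idea: ICA features are extracted both from the original images and from their PCA residual images (original minus eigenface reconstruction) and concatenated; the component counts and the use of scikit-learn's FastICA are assumptions, and the final classifier is omitted.

```python
# Sketch of mixed ICA features drawn from the original images and from their
# PCA residual images. Dimensions are illustrative; assumes enough samples.
import numpy as np
from sklearn.decomposition import PCA, FastICA

def mixed_ica_features(X, n_pca=20, n_ica=15, seed=0):
    X = np.asarray(X, dtype=float)
    pca = PCA(n_components=min(n_pca, len(X) - 1)).fit(X)
    residual = X - pca.inverse_transform(pca.transform(X))   # "PCA residual" images
    ica_orig = FastICA(n_components=n_ica, random_state=seed, max_iter=1000)
    ica_res = FastICA(n_components=n_ica, random_state=seed, max_iter=1000)
    f_orig = ica_orig.fit_transform(X)
    f_res = ica_res.fit_transform(residual)
    # Concatenate the two sets of independent components as the mixed feature.
    return np.hstack([f_orig, f_res]), (pca, ica_orig, ica_res)
```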

13.
Face detection in color images   Cited by: 9
Human face detection plays an important role in applications such as video surveillance, human-computer interfaces, face recognition, and face image database management. We propose a face detection algorithm for color images in the presence of varying lighting conditions as well as complex backgrounds. Based on a novel lighting compensation technique and a nonlinear color transformation, our method detects skin regions over the entire image and then generates face candidates based on the spatial arrangement of these skin patches. The algorithm constructs eye, mouth, and boundary maps for verifying each face candidate. Experimental results demonstrate successful face detection over a wide range of facial variations in color, position, scale, orientation, 3D pose, and expression in images from several photo collections (both indoors and outdoors).
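The skin-detection front end could look roughly like the following sketch: a simple reference-white lighting compensation followed by thresholding in YCbCr; the compensation rule and the Cr/Cb thresholds are generic assumptions, and the paper's nonlinear color transform and eye/mouth/boundary verification maps are not reproduced.

```python
# Sketch of a skin-detection front end: reference-white lighting compensation
# followed by Cr/Cb thresholding. Thresholds are generic values, not the paper's.
import numpy as np
import cv2

def lighting_compensation(bgr):
    img = bgr.astype(np.float32)
    top = np.percentile(img.sum(axis=2), 95)          # brightest pixels ~ reference white
    ref = img[img.sum(axis=2) >= top].mean(axis=0) + 1e-6
    return np.clip(img * (255.0 / ref), 0, 255).astype(np.uint8)

def skin_mask(bgr):
    ycrcb = cv2.cvtColor(lighting_compensation(bgr), cv2.COLOR_BGR2YCrCb)
    _, cr, cb = cv2.split(ycrcb)
    # Commonly used Cr/Cb ranges for skin; tune for your data.
    return ((cr > 133) & (cr < 173) & (cb > 77) & (cb < 127)).astype(np.uint8)
```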

14.
Face recognition in hyperspectral images   Cited by: 3
Hyperspectral cameras provide useful discriminants for human face recognition that cannot be obtained by other imaging methods. We examine the utility of using near-infrared hyperspectral images for the recognition of faces over a database of 200 subjects. The hyperspectral images were collected using a CCD camera equipped with a liquid crystal tunable filter to provide 31 bands over the near-infrared (0.7–1.0 μm). Spectral measurements over the near-infrared allow the sensing of subsurface tissue structure which is significantly different from person to person, but relatively stable over time. The local spectral properties of human tissue are nearly invariant to face orientation and expression which allows hyperspectral discriminants to be used for recognition over a large range of poses and expressions. We describe a face recognition algorithm that exploits spectral measurements for multiple facial tissue types. We demonstrate experimentally that this algorithm can be used to recognize faces over time in the presence of changes in facial pose and expression.

15.
Multi-view face recognition based on neural network ensembles   Cited by: 15
When a face rotates in depth, images of even the same person can change dramatically. Here, a neural network ensemble is applied to multi-view face recognition, where the facial features are obtained through multi-view eigenface analysis. One neural network is trained for the feature space of each view, and another network combines their outputs. Recognition with the trained ensemble requires no rotation-angle estimation as preprocessing, and it can output an angle estimate along with the recognition result. Experimental results show that the recognition accuracy of this method exceeds what can be achieved by selecting the best single network according to exact rotation-angle estimates.

16.
Variations in illumination, disguise, and pose remain challenging problems in face recognition, and feature extraction is a critical step. To improve the recognition rate, this paper proposes a new feature extraction method that combines compressed sensing with a spatial pyramid model: scale-invariant feature transform (SIFT) features are first extracted from the image and then sparsely encoded against a randomly generated dictionary; a pyramid model extracts features at different spatial scales, which are fused by max pooling; finally, kernel sparse representation is used for classification. Experimental results on the Extended Yale B, AR, and CMU PIE face databases show that the method is robust to changes in illumination, disguise, and pose in face images, and that the algorithm runs quickly.

17.
Multi-pose face recognition based on factor analysis and sparse representation   Cited by: 1
In uncontrolled environments, pose variation and occlusion are among the hardest problems for face recognition. Sparse-representation-based methods express a test face as a sparse linear combination of training faces and recognize it from the sparsity of the combination coefficients. Such methods are robust to noise and occlusion but handle pose variation poorly, because images of the same person under different poses are hard to put into correspondence, which violates the premise of a linear combination. To overcome this weakness, factor analysis is applied to separate out the pose factor and synthesize a frontal face, and sparse representation is then used for classification. Experimental results show that the method is robust to both occlusion and pose variation.
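A sketch of the sparse-representation classification stage only, assuming pose has already been normalized by the factor-analysis step; orthogonal matching pursuit stands in for the paper's sparse solver, and the probe is assigned to the class whose coefficients reconstruct it with the smallest residual.

```python
# Sketch of sparse-representation classification (SRC): the probe is coded as a
# sparse combination of training faces, then scored by class-wise residuals.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(train_X, train_y, probe, n_nonzero=10):
    D = np.asarray(train_X, dtype=float).T          # dictionary: columns = training faces
    D = D / (np.linalg.norm(D, axis=0) + 1e-12)     # unit-norm atoms
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(D, np.asarray(probe, dtype=float))
    coef = omp.coef_
    labels = np.asarray(train_y)
    best_label, best_res = None, np.inf
    for c in np.unique(labels):
        partial = np.zeros_like(coef)
        partial[labels == c] = coef[labels == c]    # keep only class-c coefficients
        res = np.linalg.norm(probe - D @ partial)   # class-wise reconstruction error
        if res < best_res:
            best_res, best_label = res, c
    return best_label
```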

18.
Objective: Face recognition is widely deployed, but recognition under large pose variation remains imperfectly solved. Existing methods either extract pose-robust features or frontalize the face. The mainstream frontalization approaches are 2D regression-based generation and 3D model-based deformation: the former can generate relatively natural, realistic faces but introduces additional noise that distorts image information; the latter preserves the original facial structure but, being based on a physical model, is less natural and flexible. Combining the advantages of the 2D and 3D approaches, this paper proposes a face frontalization method based on a coarse-to-fine deformation field. Method: The deformation field is learned by a deep network in a 2D regression manner and captures semantic-level correspondences between pixels of face images taken from different viewpoints, so non-frontal faces can be frontalized in a 3D-like way; the method thus combines the flexibility of 2D frontalization with the fidelity of 3D frontalization. Following a step-by-step, progressive strategy, a coarse-to-fine learning framework is proposed to obtain a more accurate and robust deformation field. Results: Large-pose face recognition experiments validate the method, which achieves higher recognition accuracy than existing methods on four datasets: MultiPIE (multi pose, illumination, expressions), LFW (labeled faces in the wild), CFP (celebrities in frontal-profile in the wild), and IJB-A (intelligence advanced research projects activity Janus benchmark-A). Conclusion: The proposed frontalization method based on coarse-to-fine deformation field learning combines the strengths of 2D and 3D frontalization, makes the learning of frontalized faces more flexible and accurate, and preserves more of the identity information useful for recognition.

19.
This paper presents a new face recognition algorithm that is insensitive to variations in lighting conditions. In the proposed algorithm, the MCT (Modified Census Transform) was embedded to extract the local facial features that are invariant under illumination changes. In this study, we also employed an appearance-based method to incorporate both local and global features. First, input facial images are transformed by the MCT, and the bit string from the MCT is converted to a decimal number to generate an MCT-domain image. This domain image is recognized using principal component analysis (PCA) or linear discriminant analysis (LDA). Experimental results reveal that the recognition rate of the proposed approach is better than that of conventional appearance-based algorithms by approximately 20% for the Yale B database in the case of severe variations in illumination conditions. We also found that the proposed algorithm yields better performance for the Yale database for various facial expressions, eye-wear, and lighting conditions.
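A minimal sketch of the Modified Census Transform preprocessing, assuming a grayscale image: each pixel's 3x3 neighborhood is compared with the neighborhood mean to form a 9-bit code, and the resulting MCT image would then go to PCA or LDA as in the abstract. The helper name and the downstream vectorization are assumptions.

```python
# Sketch of the Modified Census Transform (MCT): each pixel's 3x3 neighborhood
# is compared against the neighborhood mean, giving a 9-bit code per pixel that
# is largely invariant to monotonic illumination changes.
import numpy as np

def mct_image(gray):
    g = np.asarray(gray, dtype=float)
    h, w = g.shape
    # Stack the nine 3x3-neighborhood shifts of every interior pixel.
    shifts = np.stack([g[dy:h - 2 + dy, dx:w - 2 + dx]
                       for dy in range(3) for dx in range(3)])
    mean = shifts.mean(axis=0)
    bits = (shifts > mean).astype(np.uint16)          # 1 where pixel exceeds the mean
    weights = (2 ** np.arange(9, dtype=np.uint16)).reshape(9, 1, 1)
    return (bits * weights).sum(axis=0).astype(np.uint16)   # 9-bit MCT code per pixel

# Example use: feats = mct_image(face_gray).ravel() would then feed PCA or LDA.
```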

20.
This paper presents a hierarchical multi-state pose-dependent approach for facial feature detection and tracking under varying facial expression and face pose. For effective and efficient representation of feature points, a hybrid representation that integrates Gabor wavelets and gray-level profiles is proposed. To model the spatial relations among feature points, a hierarchical statistical face shape model is proposed to characterize both the global shape of human face and the local structural details of each facial component. Furthermore, multi-state local shape models are introduced to deal with shape variations of some facial components under different facial expressions. During detection and tracking, both facial component states and feature point positions, constrained by the hierarchical face shape model, are dynamically estimated using a switching hypothesized measurements (SHM) model. Experimental results demonstrate that the proposed method accurately and robustly tracks facial features in real time under different facial expressions and face poses.
