Similar Literature
20 similar documents found (search time: 31 ms).
1.
Most face recognition techniques have been successful in dealing with high-resolution (HR) frontal face images. However, real-world face recognition systems are often confronted with low-resolution (LR) face images exhibiting pose and illumination variations. This is a very challenging issue, especially under the constraint of using only a single gallery image per person. To address the problem, we propose a novel approach called coupled kernel-based enhanced discriminant analysis (CKEDA). CKEDA simultaneously projects the features of LR non-frontal probe images and HR frontal gallery images into a common space where the discriminative power is maximized. The proposed approach has four advantages: 1) with an appropriate kernel function the data become linearly separable, which benefits recognition; 2) inspired by linear discriminant analysis (LDA), we integrate multiple discriminant factors into the objective function to enhance discrimination; 3) we use a gallery-extension trick to improve recognition performance in the single-gallery-image-per-person setting; 4) our approach can match LR non-frontal probe images against HR frontal gallery images, which is difficult for most existing face recognition techniques. Experimental evaluation on the Multi-PIE dataset demonstrates the highly competitive performance of our algorithm.
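As a rough illustration of the general recipe behind CKEDA (kernelize the features from both domains, then learn a shared discriminant projection), the sketch below uses off-the-shelf components: an approximate RBF kernel map plus LDA on synthetic placeholder features. It is not the authors' CKEDA objective, and the feature dimensions, kernel width and nearest-neighbour matching rule are assumptions.

```python
# A minimal sketch (not the authors' CKEDA): kernelize LR-probe and HR-gallery
# features and learn a shared discriminant projection with LDA.
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_subjects, d_hr, d_lr = 10, 200, 50
labels = np.repeat(np.arange(n_subjects), 5)

# Placeholder features; in practice these come from HR frontal gallery images
# and LR non-frontal probe images.
hr_feats = rng.normal(size=(len(labels), d_hr))
lr_feats = rng.normal(size=(len(labels), d_lr))

# Bring both domains to a common input dimensionality (simple truncation here,
# for brevity), then lift with an approximate RBF kernel map.
d = min(d_hr, d_lr)
X = np.vstack([hr_feats[:, :d], lr_feats[:, :d]])
y = np.concatenate([labels, labels])
phi = Nystroem(kernel="rbf", gamma=1.0 / d, n_components=100, random_state=0).fit(X)

# Shared discriminant subspace learned on kernelized features from both domains.
lda = LinearDiscriminantAnalysis(n_components=n_subjects - 1).fit(phi.transform(X), y)
gallery_proj = lda.transform(phi.transform(hr_feats[:, :d]))
probe_proj = lda.transform(phi.transform(lr_feats[:, :d]))

# Match each projected probe to the nearest projected gallery sample.
knn = KNeighborsClassifier(n_neighbors=1).fit(gallery_proj, labels)
print("rank-1 accuracy:", (knn.predict(probe_proj) == labels).mean())
```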

2.
The open-set problem is among the problems that have significantly affected the performance of face recognition algorithms in real-world scenarios. Open-set recognition operates under the assumption that not every probe has a counterpart in the gallery. Most real-world face recognition systems focus on handling pose, expression and illumination variations. In addition to these challenges, as the number of enrolled subjects grows, the problems are intensified by look-alike faces: pairs of subjects for whom the inter-class similarity exceeds the intra-class variation. Such look-alike faces may be intrinsic, situation-dependent, or created by facial plastic surgery. This work introduces three real-world open-set face recognition methods that handle facial plastic surgery changes and look-alike faces using 3D face reconstruction and sparse representation. Since some real-world face recognition databases provide only one image per subject in the gallery, this paper proposes a novel idea to overcome this limitation by building 3D models from the gallery images and synthesizing them to generate several images. Accordingly, a 3D model is first reconstructed from each frontal face image in the real-world gallery. Each reconstructed 3D face is then synthesized into several possible views, and a sparse dictionary is built from the synthesized face images of each person. A likeness dictionary is also defined, and its optimization problem is solved by the proposed method. Finally, open-set face recognition is performed using three proposed representation-based classifications. Promising results are achieved for face recognition across plastic surgery and look-alike faces on three databases, including the plastic surgery face, look-alike face and LFW databases, compared with several state-of-the-art methods. Several real-world, open-set scenarios are also run on these databases to evaluate the proposed method.
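The sparse-representation step can be illustrated with a minimal sketch: a dictionary whose columns stand in for the synthesized gallery views, an L1 solver, class-wise residual classification, and a crude open-set rejection rule. The dictionary contents, the Lasso penalty and the rejection threshold are assumptions, not the paper's likeness-dictionary formulation.

```python
# A minimal sketch of sparse-representation classification (SRC) with a simple
# open-set rejection rule; penalty and threshold below are assumptions.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n_subjects, views_per_subject, dim = 8, 6, 120
labels = np.repeat(np.arange(n_subjects), views_per_subject)

# Columns of D would be synthesized views rendered from each subject's
# reconstructed 3D face; random placeholders here.
D = rng.normal(size=(dim, n_subjects * views_per_subject))
D /= np.linalg.norm(D, axis=0)

def src_identify(probe, reject_threshold=0.9):
    """Return the identity with the smallest class-wise residual, or -1 (unknown)."""
    coef = Lasso(alpha=0.001, max_iter=5000).fit(D, probe).coef_
    residuals = []
    for c in range(n_subjects):
        mask = labels == c
        residuals.append(np.linalg.norm(probe - D[:, mask] @ coef[mask]))
    residuals = np.array(residuals)
    best = int(residuals.argmin())
    # Open-set rule: reject if the best residual is not clearly better than the rest.
    return -1 if residuals[best] > reject_threshold * np.median(residuals) else best

probe = D[:, 3] + 0.05 * rng.normal(size=dim)   # noisy view of subject 0
print("predicted identity:", src_identify(probe))
```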

3.
Recent face recognition algorithms can achieve high accuracy when the tested face samples are frontal. However, when the face pose changes substantially, the performance of existing methods drops drastically. Efforts on pose-robust face recognition are highly desirable, especially when each face class has only one frontal training sample. In this study, we propose a 2D face fitting-assisted 3D face reconstruction algorithm that aims at recognizing faces across poses when each face class has only one frontal training sample. For each frontal training sample, a 3D face is reconstructed by optimizing the parameters of a 3D morphable model (3DMM). By rotating the reconstructed 3D face to different views, virtual face images at various poses are generated to enlarge the training set. Unlike conventional 3D face reconstruction methods, the proposed algorithm uses automatic 2D face fitting to assist the 3D reconstruction: 88 sparse facial points are automatically located on the frontal face by a 2D face-fitting algorithm, the Random Forest Embedded Active Shape Model, which embeds random forest learning into the Active Shape Model framework. The 2D fitting results are added to the 3D reconstruction objective function as shape constraints, so the optimization energy accounts for both image intensity and the 2D fitting results. The shape and texture parameters of the 3DMM are thus estimated by fitting the 3DMM to the 2D frontal face sample, which is a non-linear optimization problem. We evaluate the proposed method on the publicly available CMU PIE database, which includes faces viewed from 11 different poses, and the results show that the proposed method is effective and that recognition under pose variation is promising.
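A minimal sketch of the landmark-constraint part of such a fitting objective: solve a regularized least-squares problem so that the projected model landmarks match the 88 detected 2D points. The basis, projection and regularization weight below are toy placeholders rather than the paper's full intensity-plus-landmark energy.

```python
# A minimal sketch of landmark-constrained 3DMM shape fitting: solve for shape
# coefficients alpha so projected model landmarks match detected 2D points,
# with a Tikhonov prior. All quantities here are toy placeholders.
import numpy as np

rng = np.random.default_rng(2)
n_landmarks, n_coeffs = 88, 30
mean_shape = rng.normal(size=(n_landmarks, 2))          # projected mean landmarks
basis = rng.normal(size=(n_landmarks * 2, n_coeffs))    # projected shape basis

def fit_shape(detected_2d, lam=1.0):
    """Regularized least squares: min ||B a - (x - mean)||^2 + lam ||a||^2."""
    b = (detected_2d - mean_shape).ravel()
    A = basis.T @ basis + lam * np.eye(n_coeffs)
    return np.linalg.solve(A, basis.T @ b)

true_alpha = rng.normal(size=n_coeffs)
detected = mean_shape + (basis @ true_alpha).reshape(n_landmarks, 2) \
           + 0.01 * rng.normal(size=(n_landmarks, 2))
alpha = fit_shape(detected)
print("coefficient error:", np.linalg.norm(alpha - true_alpha))
```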

4.
Recognizing face images across pose is one of the challenging tasks for reliable face recognition. This paper presents a new method to tackle this challenge based on orthogonal discriminant vectors (ODVs). Our theoretical analysis shows that an individual's probe image captured in a new pose can be represented by a linear combination of his/her gallery images. Based on this observation, and in contrast to conventional methods that model the face images of different individuals on a single manifold, we propose to model the face images of different individuals on different linear manifolds. The contributions of our approach are: (1) proving that orthogonality to the ODVs is a pose-invariant feature; (2) characterizing each person with a set of ODVs onto which his/her face images have zero projections while other persons' images have maximal projections; (3) defining a metric that measures the distance between a face image and an ODV, and classifying face images based on this metric. Our experimental results validate the feasibility of modeling the face images of different individuals on different linear manifolds. The proposed method achieves higher accuracy on face recognition and verification than existing techniques.
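A minimal sketch of the ODV idea under the stated observation: directions orthogonal to the span of one person's gallery images receive near-zero projections from that person's probes, so a probe can be assigned to the class with the smallest projection energy. The dimensions and per-person galleries below are random stand-ins.

```python
# A minimal sketch: per-person "ODVs" taken as the null space of that person's
# gallery span; classify a probe by the smallest projection energy onto them.
import numpy as np

rng = np.random.default_rng(3)
dim, imgs_per_person, n_persons = 60, 4, 5
gallery = {p: rng.normal(size=(imgs_per_person, dim)) for p in range(n_persons)}

def odv_basis(images):
    """Orthonormal basis of the subspace orthogonal to the person's gallery images."""
    _, _, vt = np.linalg.svd(images, full_matrices=True)
    return vt[images.shape[0]:]          # null-space directions of the gallery span

odvs = {p: odv_basis(imgs) for p, imgs in gallery.items()}

def identify(probe):
    # Smaller projection onto a person's ODVs => probe lies closer to that span.
    scores = {p: np.linalg.norm(v @ probe) for p, v in odvs.items()}
    return min(scores, key=scores.get)

# A new-pose probe modeled as a linear combination of person 2's gallery images.
probe = np.array([0.5, 0.2, 0.2, 0.1]) @ gallery[2]
print("identified as person:", identify(probe))
```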

5.

Face recognition techniques are widely used in many applications, such as automatic detection of crime scenes from surveillance cameras for public safety. In such real cases, pose and illumination differences between two matching faces strongly influence identification performance, and handling pose changes is an especially challenging task. In this paper, we propose a learned-warps-based similarity method to deal with face recognition across pose. Warps between patches from probe faces and gallery faces are learned with the Lucas-Kanade algorithm. Based on these warps, a frontal face registered in the gallery is transformed into a series of non-frontal viewpoints, which enables matching non-frontal probe faces against the frontal gallery face. Scale-invariant feature transform (SIFT) keypoints (interest points) are detected on the generated viewpoints and matched with the probe faces. Moreover, based on the learned warps, a probability likelihood is used to compute the probability that two faces belong to the same subject. Finally, a hybrid similarity combining the number of matching keypoints and the probability likelihood is proposed to describe the similarity between a gallery face and a probe face. Experimental results show that the proposed method achieves better recognition accuracy than the algorithms it was compared with, especially when the pose difference is within 40 degrees.

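A minimal sketch of the keypoint-matching half of the hybrid similarity: count SIFT matches (with Lowe's ratio test) between a synthesized gallery view and the probe, then blend the normalized count with a warp likelihood. The blending weight and the likelihood value are assumptions; learning the warps themselves is not shown.

```python
# A minimal sketch of the hybrid similarity: SIFT ratio-test matches blended with
# a (placeholder) warp likelihood. Weights and normalization are assumptions.
import cv2
import numpy as np

def keypoint_matches(view_img, probe_img, ratio=0.75):
    """Number of SIFT matches passing Lowe's ratio test."""
    sift = cv2.SIFT_create()
    _, des1 = sift.detectAndCompute(view_img, None)
    _, des2 = sift.detectAndCompute(probe_img, None)
    if des1 is None or des2 is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = 0
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good += 1
    return good

def hybrid_similarity(view_img, probe_img, warp_likelihood, alpha=0.5, max_matches=100.0):
    """Blend the normalized match count with the probability of a same-subject match."""
    match_score = min(keypoint_matches(view_img, probe_img) / max_matches, 1.0)
    return alpha * match_score + (1.0 - alpha) * warp_likelihood

# Toy usage with a random texture; real inputs are synthesized gallery views and probes.
a = (np.random.default_rng(4).random((128, 128)) * 255).astype(np.uint8)
print("similarity:", hybrid_similarity(a, a.copy(), warp_likelihood=0.8))
```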

6.
Objective: Pose deviation is an important factor affecting face recognition accuracy. Using the 3D morphable model commonly employed in 3D face reconstruction together with deep convolutional neural networks, this paper proposes a face pose correction algorithm for multi-pose face recognition, which improves recognition accuracy under large poses to a certain extent. Method: The traditional 3D morphable model fitting procedure is improved: the model is parameterized by facial shape and expression parameters, and the landmarks in different facial regions are assigned different weights, so that the weighted fitting handles face images with varying poses and expressions better. The 3D face model is then pose-corrected, and deep learning is used to inpaint the irregular hole regions of the face image; the latest partial convolution technique is adopted and the convolutional neural network is retrained on a new dataset so that the network parameters reach their optimum. Results: The proposed algorithm is compared with other methods on the LFW (Labeled Faces in the Wild) face database and the Stirling/ESRC (Economic and Social Research Council) 3D face database, and the experiments show that its face recognition accuracy improves to a certain degree. On LFW, after pose correction and inpainting of face images with arbitrary poses, the method achieves a recognition accuracy of 96.57%. On the Stirling/ESRC database, the recognition accuracy increases by 5.195% and 2.265% at poses of ±22°, and by 5.875% and 11.095% at poses of ±45°; the average recognition rate increases by 5.53% and 7.13%, respectively. The comparative results show that the proposed pose correction algorithm effectively improves face recognition accuracy. Conclusion: The proposed face pose correction algorithm combines the advantages of the 3D morphable model and deep learning models, and improves face recognition accuracy to a certain extent at all face pose angles.
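A minimal sketch of the pose-correction step described above: once a 3D face has been fitted, undo the estimated head rotation and reproject to obtain a frontalized view. The yaw angle, vertex set and orthographic projection are illustrative assumptions; the inpainting of self-occluded regions with partial convolutions is not shown.

```python
# A minimal sketch of frontalization: rotate fitted 3D vertices by the inverse
# of the estimated yaw and reproject orthographically.
import numpy as np

def yaw_matrix(deg):
    r = np.deg2rad(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def frontalize(vertices_3d, estimated_yaw_deg):
    """Undo the estimated yaw and orthographically project to the image plane."""
    frontal = vertices_3d @ yaw_matrix(-estimated_yaw_deg).T
    return frontal[:, :2]            # drop depth after frontalization

rng = np.random.default_rng(5)
frontal_verts = rng.normal(size=(500, 3))                 # fitted 3D face vertices
posed_verts = frontal_verts @ yaw_matrix(35.0).T          # face observed at 35° yaw
recovered_2d = frontalize(posed_verts, estimated_yaw_deg=35.0)
print("reprojection error:", np.abs(recovered_2d - frontal_verts[:, :2]).max())
```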

7.
Simulation Study of a Joint-Model-Based Face Localization and Recognition Algorithm
This work studies the accuracy of face localization and recognition. Because factors such as deformation, position changes and illumination changes during image capture can blur the face, a new face localization and recognition algorithm that iterates a joint ASM and AAM model is proposed to improve localization and recognition accuracy. First, ASM is used to extract key-point features along the face contour and to localize the face. Based on this initial localization, AAM is used to project the face and generate synthetic face data that do not appear in the training set. These two steps alternate to produce a sufficient number of stable deformed face images. During recognition, the transformation matrix is compared with the original synthetic data. Simulation results show that the improved method extracts face contours stably, localizes faces accurately, and achieves high recognition efficiency.

8.
Mosaicing entails the consolidation of information represented by multiple images through the application of a registration and blending procedure. We describe a face mosaicing scheme that generates a composite face image during enrollment based on the evidence provided by frontal and semiprofile face images of an individual. Face mosaicing obviates the need to store multiple face templates representing multiple poses of a user's face image. In the proposed scheme, the side profile images are aligned with the frontal image using a hierarchical registration algorithm that exploits neighborhood properties to determine the transformation relating the two images. Multiresolution splining is then used to blend the side profiles with the frontal image, thereby generating a composite face image of the user. A texture-based face recognition technique that is a slightly modified version of the C2 algorithm proposed by Serre et al. is used to compare a probe face image with the gallery face mosaic. Experiments conducted on three different databases indicate that face mosaicing, as described in this paper, offers significant benefits by accounting for the pose variations that are commonly observed in face images.
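The multiresolution-splining step can be sketched with classic Laplacian-pyramid blending of the registered images; the pyramid depth and the vertical seam mask below are assumptions, and the hierarchical registration itself is not shown.

```python
# A minimal sketch of multiresolution splining (Laplacian pyramid blending) of a
# registered side profile into the frontal image. Depth and mask are assumptions.
import cv2
import numpy as np

def blend_pyramid(img_a, img_b, mask, levels=4):
    """Blend two aligned images with a mask using Laplacian pyramids."""
    ga, gb, gm = [img_a.astype(np.float32)], [img_b.astype(np.float32)], [mask.astype(np.float32)]
    for _ in range(levels):
        ga.append(cv2.pyrDown(ga[-1]))
        gb.append(cv2.pyrDown(gb[-1]))
        gm.append(cv2.pyrDown(gm[-1]))
    # Start from the coarsest Gaussian level, then add blended Laplacian details.
    blended = ga[-1] * gm[-1] + gb[-1] * (1.0 - gm[-1])
    for i in range(levels - 1, -1, -1):
        size = (ga[i].shape[1], ga[i].shape[0])
        la = ga[i] - cv2.pyrUp(ga[i + 1], dstsize=size)
        lb = gb[i] - cv2.pyrUp(gb[i + 1], dstsize=size)
        blended = cv2.pyrUp(blended, dstsize=size) + la * gm[i] + lb * (1.0 - gm[i])
    return np.clip(blended, 0, 255).astype(np.uint8)

frontal = np.full((256, 256), 180, np.uint8)      # placeholder registered images
profile = np.full((256, 256), 90, np.uint8)
mask = np.zeros((256, 256), np.float32)
mask[:, :128] = 1.0                               # keep the left half from the frontal image
print(blend_pyramid(frontal, profile, mask).shape)
```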

9.
A Face Recognition Method Based on Support Vector Machines
1. Introduction. The human face is a common pattern in human vision, and face recognition has broad application prospects in security verification systems, public security (criminal identification, etc.), medicine, video conferencing, traffic monitoring, and more. Existing biometric recognition technologies, including speech recognition, iris recognition and fingerprint recognition, are already in commercial use. Face recognition nevertheless remains the most attractive, because from the perspective of human-computer interaction it best matches people's expectations. Although humans can recognize faces and their expressions effortlessly, automatic face recognition by machines is still a challenging research field. Due to the complexity of facial structure, the diversity of facial expressions, and the imaging…
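As a generic, hedged illustration of an SVM-based face recognition pipeline in the spirit of the entry above (not necessarily the authors' exact method), the sketch below combines PCA "eigenface" features with a linear SVM on a public dataset.

```python
# A generic eigenfaces + linear SVM pipeline; dataset, component count and C are
# assumptions for illustration only.
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

faces = fetch_olivetti_faces()                       # 40 subjects, 10 images each
X_tr, X_te, y_tr, y_te = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

model = make_pipeline(PCA(n_components=80, whiten=True, random_state=0),
                      SVC(kernel="linear", C=1.0))
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```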

10.
Research on a Recognition Method Based on Multi-Pose Face Image Synthesis
To address the multi-pose face recognition problem, a new method for synthesizing frontal faces based on independent component analysis (ICA) is proposed. First, ICA and PCA are used to extract feature subspaces of faces at different poses; then a pose-transformation matrix obtained through training is used to synthesize the corresponding frontal face image. Experiments show that the ICA-based recognition algorithm outperforms the PCA-based one. On this basis, the face images are preprocessed with wavelets, and classification is performed directly on the frontal-face feature coefficients obtained via the pose-transformation matrix, which greatly improves the recognition rate.
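A loose sketch of the pose-transformation idea: extract ICA features separately for non-frontal and frontal images, learn a linear transformation matrix mapping non-frontal feature vectors to frontal ones by least squares, and classify transformed probes against the frontal gallery. The synthetic data, subspace sizes and the omission of the wavelet preprocessing are all assumptions.

```python
# A loose sketch: ICA feature subspaces for each pose plus a least-squares
# pose-transformation matrix. Data and dimensions are synthetic placeholders.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(6)
n_subjects, dim = 15, 100
frontal = rng.normal(size=(n_subjects, dim))
profile = frontal @ rng.normal(size=(dim, dim)) * 0.1 + 0.05 * rng.normal(size=(n_subjects, dim))

ica_f = FastICA(n_components=10, random_state=0).fit(frontal)
ica_p = FastICA(n_components=10, random_state=0).fit(profile)
F, P = ica_f.transform(frontal), ica_p.transform(profile)

# Pose-transformation matrix T: P @ T ≈ F, learned on the training subjects.
T, *_ = np.linalg.lstsq(P, F, rcond=None)

probe = ica_p.transform(profile[[7]]) @ T             # transform a profile probe
pred = np.argmin(np.linalg.norm(F - probe, axis=1))   # nearest frontal feature
print("predicted subject:", pred)
```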

11.
Pose and low resolution seriously affect the synthesis of high-quality frontal face images. With the development of deep learning, many models based on deep neural networks have been used to address face pose and image super-resolution. However, synthesizing a high-resolution frontal face is still a problem that has not been fully studied. Therefore, in this paper, we propose a method that performs image super-resolution and face frontalization simultaneously. Specifically, we propose FFSR_GAN, a frontal-face super-resolution model that mainly tackles low resolution and large face pose. There are two main improvements: 1) to reduce artifacts in the images produced by the frontalization module, that module is designed based on 3DDFA and CBAM; 2) to address the low resolution of the generated frontal faces, a face super-resolution module is carefully designed to super-resolve the generated frontal face. The method proposed in this paper addresses face pose and super-resolution jointly for the first time and improves recognition accuracy on low-resolution face images with large poses. Experimental results on existing public datasets demonstrate the advantages of the FFSR_GAN model.
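As an illustration of the attention mechanism the frontalization module is said to build on, here is a generic CBAM-style block (channel attention followed by spatial attention) in PyTorch. This is a textbook re-implementation of CBAM, not the FFSR_GAN architecture; the reduction ratio and kernel size are the usual defaults assumed here.

```python
# A generic CBAM-style block: channel attention (shared MLP over avg/max pooled
# descriptors) followed by spatial attention (conv over mean/max maps).
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)      # channel attention
        sp = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(sp))            # spatial attention

feat = torch.randn(2, 64, 32, 32)
print(CBAM(64)(feat).shape)          # torch.Size([2, 64, 32, 32])
```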

12.
A major drawback of statistical models of non-rigid, deformable objects, such as the active appearance model (AAM), is the required pseudo-dense annotation of landmark points for every training image. We propose a regression-based approach for automatic annotation of face images at arbitrary pose and expression, and for deformable model building using only the annotated frontal images. We pose the problem of learning the pattern of manual annotation as a data-driven regression problem and explore several regression strategies to effectively predict the spatial arrangement of the landmark points for unseen face images, with arbitrary expression, at arbitrary poses. We show that the proposed fully sparse non-linear regression approach outperforms other regression strategies by effectively modelling the changes in the shape of the face under varying pose and is capable of capturing the subtleties of different facial expressions at the same time, thus ensuring the high quality of the generated synthetic images. We show the generalisability of the proposed approach by automatically annotating the face images from four different databases and verifying the results by comparing them with a ground truth obtained from manual annotations.
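A hedged stand-in for the "annotation as regression" formulation: the paper proposes a fully sparse non-linear regressor, whereas the sketch below simply uses kernel ridge regression to map image features to a flattened landmark vector on synthetic data, just to make the problem setup concrete.

```python
# A simplified stand-in regressor (kernel ridge, not the paper's sparse non-linear
# model): image features -> flattened landmark coordinates, on synthetic data.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(7)
n_train, n_feat, n_landmarks = 200, 64, 68
X = rng.normal(size=(n_train, n_feat))                       # image features
W = rng.normal(size=(n_feat, n_landmarks * 2))
Y = np.tanh(X @ W * 0.1) + 0.01 * rng.normal(size=(n_train, n_landmarks * 2))

reg = KernelRidge(kernel="rbf", alpha=1.0, gamma=1.0 / n_feat).fit(X, Y)
pred = reg.predict(rng.normal(size=(1, n_feat))).reshape(n_landmarks, 2)
print("predicted landmark array shape:", pred.shape)
```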

13.
Pose-Robust Facial Expression Recognition Using View-Based 2D+3D AAM
This paper proposes a pose-robust face tracking and facial expression recognition method using a view-based 2D+3D active appearance model (AAM) that extends the 2D+3D AAM to the view-based approach, where one independent face model is used for a specific view and an appropriate face model is selected for the input face image. Our extension has been conducted in many aspects. First, we use principal component analysis with missing data to construct the 2D+3D AAM due to the missing data in the posed face images. Second, we develop an effective model selection method that directly uses the estimated pose angle from the 2D+3D AAM, which makes face tracking pose-robust and feature extraction for facial expression recognition accurate. Third, we propose a double-layered generalized discriminant analysis (GDA) for facial expression recognition. Experimental results show the following: 1) The face tracking by the view-based 2D+3D AAM, which uses multiple face models with one face model per view, is more robust to pose change than that by an integrated 2D+3D AAM, which uses an integrated face model for all three views; 2) the double-layered GDA extracts good features for facial expression recognition; and 3) the view-based 2D+3D AAM outperforms other existing models at pose-varying facial expression recognition.
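The model-selection rule (pick the view-specific model from the estimated pose angle) can be sketched directly; the three yaw ranges below are illustrative assumptions.

```python
# A minimal sketch of view-based model selection from an estimated yaw angle.
from dataclasses import dataclass

@dataclass
class ViewModel:
    name: str
    yaw_min: float
    yaw_max: float

VIEW_MODELS = [ViewModel("left", -90.0, -20.0),
               ViewModel("frontal", -20.0, 20.0),
               ViewModel("right", 20.0, 90.0)]

def select_view_model(estimated_yaw_deg):
    """Return the view-specific face model whose yaw range contains the estimate."""
    for m in VIEW_MODELS:
        if m.yaw_min <= estimated_yaw_deg < m.yaw_max:
            return m
    return VIEW_MODELS[-1] if estimated_yaw_deg >= 0 else VIEW_MODELS[0]

print(select_view_model(-35.0).name)   # -> left
```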

14.
Face recognition under uncontrolled illumination conditions is still considered an unsolved problem. In order to correct for these illumination conditions, we propose a virtual illumination grid (VIG) approach to model the unknown illumination conditions. Furthermore, we use coupled subspace models of both the facial surface and albedo to estimate the face shape. In order to obtain a representation of the face under frontal illumination, we relight the estimated face shape. We show that the frontally illuminated facial images achieve better performance in face recognition. We have performed the challenging Experiment 4 of the FRGCv2 database, which compares uncontrolled probe images to controlled gallery images. Our illumination correction method results in considerably better recognition rates for a number of well-known face recognition methods. By fusing our global illumination correction method with a local illumination correction method, further improvements are achieved.

15.
16.
In this work, we propose a self-adaptive radial basis function neural network (RBFNN)-based method for high-speed recognition of human faces. The variations between images of the same person under varying pose, facial expression, illumination, etc., can be quite large. Therefore, to achieve a high recognition rate, it is necessary to consider the structural information within these images during classification. In the present study, this is realized by modeling each training image as a hidden-layer neuron of the proposed RBFNN. To classify a facial image, a confidence measure is imposed on the outputs of the hidden-layer neurons to reduce the influence of images belonging to other classes. This makes the RBFNN self-adaptive in choosing the subset of hidden-layer neurons, lying in the close neighborhood of the input image, that are considered for classification. The process reduces the computation time at the output layer of the RBFNN by neglecting the ineffective radial basis functions, enabling the proposed method to recognize face images at high speed, within the inter-frame period of video. The performance of the proposed method has been evaluated in terms of sensitivity and specificity on two popular face recognition databases, the ORL and UMIST face databases. On the ORL database, the best average sensitivity (recognition) and specificity rates are 97.30% and 99.94%, respectively, using five samples per person in the training set; on the UMIST database, they are 96.36% and 99.81%, respectively, using eight samples per person. The experimental results indicate that the proposed method outperforms several existing face recognition approaches.
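A minimal sketch of the classification rule described above: every training image acts as an RBF hidden neuron, and only neurons whose activation clears a confidence threshold (training images close to the probe) contribute to the class scores. The RBF width and threshold are assumptions.

```python
# A minimal sketch of the self-adaptive RBF classification rule: training images
# as hidden neurons, thresholded activations, per-class score accumulation.
import numpy as np

rng = np.random.default_rng(8)
n_classes, per_class, dim = 6, 5, 256
train = rng.normal(size=(n_classes * per_class, dim))
train_labels = np.repeat(np.arange(n_classes), per_class)

def classify(probe, sigma=10.0, confidence=0.5):
    act = np.exp(-np.sum((train - probe) ** 2, axis=1) / (2 * sigma ** 2))
    act = np.where(act >= confidence * act.max(), act, 0.0)   # self-adaptive neuron subset
    scores = np.array([act[train_labels == c].sum() for c in range(n_classes)])
    return int(scores.argmax())

probe = train[12] + 0.1 * rng.normal(size=dim)   # near a class-2 training image
print("predicted class:", classify(probe))
```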

17.
Sparse representation has achieved excellent results on face recognition, but its performance drops sharply under the single-sample condition. To improve the applicability of sparse representation with a single sample per person, a robust sparse representation algorithm for single-sample face recognition (RSR) is proposed. A set of location images is created from each face image to expand the training samples of every subject, and an L2,1-norm constraint is used to ensure that RSR selects the location images of the correct subject. Evaluations on the AR and Extended Yale B face databases show that RSR effectively handles face images with occlusion or illumination changes, achieves good single-sample recognition accuracy, and is highly robust.
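A hedged sketch of the single-sample expansion idea: the one gallery image per subject is expanded into several shifted "location images" that form a dictionary, and a probe is classified by class-wise reconstruction residual. Plain L1 (Lasso) is used here in place of the paper's L2,1-constrained formulation, and the image data are random stand-ins.

```python
# A simplified sketch: shifted location images per subject + L1 sparse coding
# (the paper uses an L2,1 constraint); data are random stand-ins.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(9)
h = w = 16
subjects = {s: rng.normal(size=(h + 4, w + 4)) for s in range(5)}   # one (padded) image each

def location_images(img, size=16, step=2):
    """Crops at shifted positions of the single gallery image ("large window, small step")."""
    crops = []
    for dy in range(0, img.shape[0] - size + 1, step):
        for dx in range(0, img.shape[1] - size + 1, step):
            crops.append(img[dy:dy + size, dx:dx + size].ravel())
    return np.array(crops)

atoms, labels = [], []
for s, img in subjects.items():
    crops = location_images(img)
    atoms.append(crops)
    labels += [s] * len(crops)
D = np.vstack(atoms).T                      # dictionary: columns are location images
D /= np.linalg.norm(D, axis=0)
labels = np.array(labels)

probe = subjects[3][2:18, 2:18].ravel()     # a shifted view of subject 3
coef = Lasso(alpha=0.01, max_iter=5000).fit(D, probe).coef_
residual = [np.linalg.norm(probe - D[:, labels == s] @ coef[labels == s]) for s in subjects]
print("predicted subject:", int(np.argmin(residual)))
```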

18.
19.
Face Image Recognition under Variable Illumination Conditions
To cope with illumination variation in face recognition, the traditional solution is to apply illumination compensation to the probe image, converting it to an image under standard lighting, and then match it against template images. To improve the recognition rate when illumination varies over a wide range, a new face recognition method for variable illumination conditions is proposed. The method first uses nine images captured under nine basic lighting directions to construct an illumination feature space for the face. Through this space, the face images in the gallery are transformed into images with the same illumination conditions as the probe image and used as templates; recognition is then performed with the eigenface method. Experimental results show that this method not only effectively addresses the drop in recognition rate caused by illumination changes, but also achieves a relatively high correct recognition rate even when the illumination conditions vary over a wide range.
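A minimal sketch of the relighting idea: the nine images taken under the nine basic lighting directions span an illumination subspace for each gallery subject; the probe's lighting is estimated by least squares in that subspace, and the relit template is compared with the probe. Matching by residual (instead of the paper's eigenface stage) and the random stand-in data are simplifications.

```python
# A minimal sketch: nine-image illumination subspace per subject, least-squares
# relighting, residual-based matching (the paper uses eigenfaces afterwards).
import numpy as np

rng = np.random.default_rng(10)
n_pixels, n_basis = 1024, 9
# basis[s] holds the nine basis-lighting images (as columns) for subject s.
basis = {s: rng.random((n_pixels, n_basis)) for s in range(4)}

def relight_and_match(probe):
    """For each gallery subject, render that subject under the probe's lighting and compare."""
    errors = {}
    for s, B in basis.items():
        coeffs, *_ = np.linalg.lstsq(B, probe, rcond=None)   # lighting estimated in s's subspace
        errors[s] = np.linalg.norm(B @ coeffs - probe)
    return min(errors, key=errors.get)

true_coeffs = rng.random(n_basis)
probe = basis[1] @ true_coeffs + 0.01 * rng.normal(size=n_pixels)   # subject 1, unknown lighting
print("matched gallery subject:", relight_and_match(probe))
```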

20.
For face recognition with a single training sample per person, traditional algorithms designed for multiple training samples per person perform poorly. In particular, some methods based on Fisher's linear discriminant criterion cannot work at all, because the within-class scatter matrix becomes a zero matrix. This problem is analyzed, and a new sample-expansion method, the generalized sliding-window approach, is proposed. Window images are collected and samples are expanded under a "large window, small step" scheme, which not only increases the number of training samples but also preserves and strengthens the intra-class and inter-class information inherent in the original sample patterns. Weighted two-dimensional linear discriminant analysis (Weighted 2DLDA) is then used to extract features from the window images obtained above. Experiments on the standard ORL face database demonstrate the feasibility and effectiveness of the proposed algorithm.
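A minimal sketch of the 2DLDA feature-extraction step applied to the expanded window images: between- and within-class scatter are computed directly on 2D image matrices, and features are obtained by projecting each image onto the leading generalized eigenvectors. The paper's weighting scheme is omitted and the window images are random stand-ins.

```python
# A minimal sketch of (unweighted) 2DLDA on window images: image-level scatter
# matrices and a generalized eigenproblem for the projection directions.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(11)
n_classes, per_class, h, w = 5, 8, 20, 20
means = rng.normal(size=(n_classes, h, w))
images = np.array([[m + 0.3 * rng.normal(size=(h, w)) for _ in range(per_class)] for m in means])

global_mean = images.mean(axis=(0, 1))
class_means = images.mean(axis=1)
Sw = sum((img - class_means[c]).T @ (img - class_means[c])
         for c in range(n_classes) for img in images[c])
Sb = sum(per_class * (class_means[c] - global_mean).T @ (class_means[c] - global_mean)
         for c in range(n_classes))

# Generalized eigenproblem Sb v = lambda Sw v; keep the leading few directions.
eigvals, eigvecs = eigh(Sb, Sw)
W = eigvecs[:, ::-1][:, :4]                  # top-4 discriminant directions (columns)
features = images[2, 0] @ W                  # a 20x4 feature matrix for one image
print("feature matrix shape:", features.shape)
```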

