Similar Documents (20 results)
1.
A major drawback of statistical models of non-rigid, deformable objects, such as the active appearance model (AAM), is the required pseudo-dense annotation of landmark points for every training image. We propose a regression-based approach for automatic annotation of face images at arbitrary pose and expression, and for deformable model building using only the annotated frontal images. We pose the problem of learning the pattern of manual annotation as a data-driven regression problem and explore several regression strategies to effectively predict the spatial arrangement of the landmark points for unseen face images with arbitrary expressions at arbitrary poses. We show that the proposed fully sparse non-linear regression approach outperforms other regression strategies by effectively modelling the changes in the shape of the face under varying pose, and is capable of capturing the subtleties of different facial expressions at the same time, thus ensuring the high quality of the generated synthetic images. We show the generalisability of the proposed approach by automatically annotating the face images from four different databases and verifying the results by comparing them with a ground truth obtained from manual annotations.
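The landmark-prediction step described above can be viewed as a generic non-linear regression from a frontal annotation plus target pose parameters to the landmark arrangement in the new view. Below is a minimal sketch using kernel ridge regression from scikit-learn; the paper's fully sparse non-linear regressor is not reproduced here, and the array sizes and synthetic data are assumptions for illustration only.

```python
# Hedged sketch: predict landmark coordinates for a new pose from
# frontal landmarks + a pose parameter, via kernel ridge regression.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
n_train, n_landmarks = 500, 68          # assumed sizes, not from the paper

# Inputs: flattened frontal landmarks (x, y) plus a yaw angle in degrees.
frontal = rng.normal(size=(n_train, n_landmarks * 2))
yaw = rng.uniform(-45, 45, size=(n_train, 1))
X = np.hstack([frontal, yaw])

# Targets: the landmark arrangement observed at that yaw (synthetic here).
Y = frontal + 0.01 * yaw * rng.normal(size=(n_train, n_landmarks * 2))

reg = KernelRidge(kernel="rbf", alpha=1e-2, gamma=1e-3)
reg.fit(X, Y)

# Predict the landmark layout of an unseen frontal annotation at yaw = 30 deg.
x_new = np.hstack([frontal[:1], [[30.0]]])
pred_landmarks = reg.predict(x_new).reshape(n_landmarks, 2)
```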

2.
3.
Recent face recognition algorithms can achieve high accuracy when the tested face samples are frontal. However, when the face pose changes significantly, the performance of existing methods drops drastically. Efforts on pose-robust face recognition are therefore highly desirable, especially when each face class has only one frontal training sample. In this study, we propose a 2D face fitting-assisted 3D face reconstruction algorithm that aims at recognizing faces of different poses when each face class has only one frontal training sample. For each frontal training sample, a 3D face is reconstructed by optimizing the parameters of a 3D morphable model (3DMM). By rotating the reconstructed 3D face to different views, virtual face images at various poses are generated to enlarge the training set for face recognition. Unlike conventional 3D face reconstruction methods, the proposed algorithm utilizes automatic 2D face fitting to assist 3D face reconstruction: 88 sparse points of the frontal face are located automatically by a 2D face-fitting algorithm, the Random Forest Embedded Active Shape Model, which embeds random forest learning into the framework of the Active Shape Model. The 2D face-fitting results are added to the 3D face reconstruction objective function as shape constraints, so the optimization energy takes not only image intensity but also the 2D fitting results into account. Shape and texture parameters of the 3DMM are thus estimated by fitting the 3DMM to the 2D frontal face sample, which is a non-linear optimization problem. We evaluate the proposed method on the publicly available CMU-PIE database, which includes faces viewed from 11 different poses, and the results show that the proposed method is effective and that the face recognition results under pose variation are promising.
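The landmark-constrained fitting objective can be illustrated with a toy energy of the form E(p) = ||I_obs - I_model(p)||^2 + lambda * ||L_obs - P(S(p))||^2. The following is a minimal sketch with a linear shape model and an orthographic projection, minimized with SciPy; the shape basis, the omission of the intensity term, and the weight are illustrative assumptions, not the paper's 3DMM.

```python
# Hedged sketch of a landmark-constrained model-fitting energy.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_verts, n_coeffs = 88, 10              # 88 sparse points, as in the abstract
mean_shape = rng.normal(size=(n_verts, 3))
basis = rng.normal(size=(n_coeffs, n_verts, 3))   # toy shape basis

def shape(p):
    """Linear shape model: mean + sum_k p_k * basis_k."""
    return mean_shape + np.tensordot(p, basis, axes=1)

def project(S):
    """Orthographic projection onto the image plane (keep x, y)."""
    return S[:, :2]

observed_2d = project(shape(rng.normal(size=n_coeffs)))  # synthetic fitting target

lam = 1.0
def energy(p):
    # The intensity term is omitted in this sketch; only the landmark
    # (shape-constraint) term of the energy is shown.
    return lam * np.sum((observed_2d - project(shape(p))) ** 2)

res = minimize(energy, np.zeros(n_coeffs), method="L-BFGS-B")
fitted_coeffs = res.x
```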

4.
Face recognition based on fitting a 3D morphable model
This paper presents a method for face recognition across variations in pose, ranging from frontal to profile views, and across a wide range of illuminations, including cast shadows and specular reflections. To account for these variations, the algorithm simulates the process of image formation in 3D space, using computer graphics, and it estimates 3D shape and texture of faces from single images. The estimate is achieved by fitting a statistical, morphable model of 3D faces to images. The model is learned from a set of textured 3D scans of heads. We describe the construction of the morphable model, an algorithm to fit the model to images, and a framework for face identification. In this framework, faces are represented by model parameters for 3D shape and texture. We present results obtained with 4,488 images from the publicly available CMU-PIE database and 1,940 images from the FERET database.
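Since faces are represented here by fitted model parameters for shape and texture, identification reduces to comparing the probe's coefficient vector against those stored for the gallery. Below is a minimal sketch of such a coefficient-space comparison with cosine similarity; the coefficient dimensions and the gallery are made up for illustration, and the paper's actual distance measure may differ.

```python
# Hedged sketch: identify a probe by comparing fitted model coefficients
# (shape + texture) against gallery coefficients with cosine similarity.
import numpy as np

rng = np.random.default_rng(2)
n_gallery, dim = 50, 199 + 199          # assumed shape + texture dimensions
gallery = rng.normal(size=(n_gallery, dim))
probe = gallery[17] + 0.1 * rng.normal(size=dim)   # noisy copy of subject 17

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

scores = np.array([cosine(probe, g) for g in gallery])
identity = int(np.argmax(scores))        # -> 17 for this synthetic example
```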

5.
HMM-based single-sample face recognition under varying illumination and pose
We propose an HMM-based algorithm for single-sample face recognition under varying illumination and pose. The algorithm first uses a manually registered training set to automatically register a single frontal input face image to the Candide3 model, and reconstructs a person-specific 3D face model on the basis of this registration. Rotating the reconstructed model to various angles produces digital faces at different poses, and adjusting the illumination coefficients of the digital faces with spherical harmonic basis images produces digital faces under different illuminations. The generated digital faces with varying illumination and pose, together with the original sample image, are used as training data to build an individual hidden Markov model for each user. The proposed algorithm is evaluated on existing face databases and compared with recognition methods based on illumination compensation and pose correction. The results show that it effectively avoids the low recognition rates those methods suffer when illumination compensation or pose correction is imperfect, and that it adapts better to face recognition under varying illumination and pose.
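The relighting step can be illustrated as a linear combination of the nine spherical-harmonic basis images of the reconstructed face, I(l) = sum_k l_k * b_k. The sketch below uses random arrays as stand-ins for the basis images that would be rendered from the reconstructed model; the basis images and lighting coefficients here are placeholders.

```python
# Hedged sketch: relight a reconstructed face as a linear combination of
# its nine spherical-harmonic (SH) basis images.
import numpy as np

rng = np.random.default_rng(3)
h, w = 64, 64
sh_basis = rng.normal(size=(9, h, w))    # placeholder SH basis images

def relight(lighting_coeffs):
    """Image under the lighting described by nine SH coefficients."""
    return np.tensordot(lighting_coeffs, sh_basis, axes=1)

# Generate several virtual training images under different illuminations.
virtual_faces = [relight(rng.normal(size=9)) for _ in range(10)]
```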

6.
We present a novel approach to face recognition by constructing facial identity structures across views and over time, referred to as identity surfaces, in a Kernel Discriminant Analysis (KDA) feature space. This approach is aimed at addressing three challenging problems in face recognition: modelling faces across multiple views, extracting non-linear discriminatory features, and recognising faces over time. First, a multi-view face model is designed which can be automatically fitted to face images and sequences to extract the normalised facial texture patterns. This model is capable of dealing with faces with large pose variation. Second, KDA is developed to compute the most significant non-linear basis vectors with the intention of maximising the between-class variance and minimising the within-class variance. We apply KDA to the problem of multi-view face recognition and achieve a significant improvement in reliability and accuracy. Third, identity surfaces are constructed in a pose-parameterised discriminatory feature space. Dynamic face recognition is then performed by matching the object trajectory computed from a video input against model trajectories constructed on the identity surfaces. These two types of trajectories encode the spatio-temporal dynamics of moving faces.
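A common way to approximate a KDA-style non-linear discriminant feature space is to map the inputs with a kernel method and then apply linear discriminant analysis. The sketch below uses scikit-learn's KernelPCA followed by LDA as a stand-in for the paper's KDA formulation; the data are synthetic and the kernel parameters are arbitrary assumptions.

```python
# Hedged sketch: non-linear discriminant features via KernelPCA + LDA,
# in the spirit of Kernel Discriminant Analysis.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 200))          # synthetic normalised texture patterns
y = rng.integers(0, 10, size=300)        # 10 identities

kda_like = make_pipeline(
    KernelPCA(n_components=50, kernel="rbf", gamma=1e-3),
    LinearDiscriminantAnalysis(n_components=9),   # at most n_classes - 1
)
features = kda_like.fit_transform(X, y)  # discriminative non-linear features
```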

7.
Matching 2.5D face scans to 3D models
The performance of face recognition systems that use two-dimensional images depends on factors such as lighting and the subject's pose. We are developing a face recognition system that utilizes three-dimensional shape information to make the system more robust to arbitrary pose and lighting. For each subject, a 3D face model is constructed by integrating several 2.5D face scans which are captured from different views. 2.5D is a simplified 3D (x, y, z) surface representation that contains at most one depth value (z direction) for every point in the (x, y) plane. Two different modalities provided by the facial scan, namely shape and texture, are utilized and integrated for face matching. The recognition engine consists of two components, surface matching and appearance-based matching. The surface matching component is based on a modified iterative closest point (ICP) algorithm. The candidate list from the gallery used for appearance matching is dynamically generated based on the output of the surface matching component, which reduces the complexity of the appearance-based matching stage. Three-dimensional models in the gallery are used to synthesize new appearance samples with pose and illumination variations, and the synthesized face images are used in discriminant subspace analysis. The weighted sum rule is applied to combine the scores given by the two matching components. Experimental results are given for matching a database of 200 3D face models with 598 independent 2.5D test scans acquired under different poses and some lighting and expression changes. These results show the feasibility of the proposed matching scheme.
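The surface-matching component rests on an ICP-style loop: find closest points, estimate the rigid transform, apply it, repeat. Below is a minimal point-to-point ICP sketch with a k-d tree and SVD (Kabsch) alignment; this is generic ICP rather than the paper's modified variant, and the point clouds are synthetic.

```python
# Hedged sketch: basic point-to-point ICP between a 2.5D scan and a 3D model.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(scan, model, n_iter=30):
    tree = cKDTree(model)
    current = scan.copy()
    for _ in range(n_iter):
        _, idx = tree.query(current)      # closest model point per scan point
        R, t = best_rigid_transform(current, model[idx])
        current = current @ R.T + t
    residual = np.mean(np.linalg.norm(current - model[tree.query(current)[1]], axis=1))
    return current, residual              # the residual can serve as a match score

rng = np.random.default_rng(5)
model = rng.normal(size=(1000, 3))
scan = model[:400] + 0.05                 # toy misaligned partial scan
aligned, score = icp(scan, model)
```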

8.
In this paper, we propose a novel Patch Geodesic Distance (PGD) to transform the texture map of an object through its shape data for robust 2.5D object recognition. Local geodesic paths within patches and global geodesic paths for patches are combined in a coarse-to-fine hierarchical computation of PGD for each surface point to tackle the missing data problem in 2.5D images. Shape-adjusted texture patches are encoded into local patterns for similarity measurement between two 2.5D images with different viewing angles and/or shape deformations. An extensive experimental investigation is conducted on 2.5D face images using the publicly available BU-3DFE and Bosphorus databases, covering face recognition under expression and pose changes. The performance of the proposed method is compared with that of three benchmark approaches. The experimental results demonstrate that the proposed method provides a very encouraging new solution for 2.5D object recognition.
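Geodesic distances on a 2.5D surface can be computed by building a graph over neighbouring pixels, weighting each edge by the 3D distance between the corresponding surface points, and running Dijkstra's algorithm. The sketch below uses SciPy's sparse-graph routines on a synthetic depth map; the neighbourhood and weighting are simplified relative to the paper's patch-wise hierarchical scheme.

```python
# Hedged sketch: geodesic distances on a depth map via Dijkstra on a grid graph.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

rng = np.random.default_rng(6)
h, w = 32, 32
depth = rng.normal(scale=0.1, size=(h, w))        # synthetic 2.5D depth values
xs, ys = np.meshgrid(np.arange(w), np.arange(h))
points = np.stack([xs, ys, depth], axis=-1).reshape(-1, 3)

graph = lil_matrix((h * w, h * w))
for y in range(h):
    for x in range(w):
        i = y * w + x
        for dy, dx in ((0, 1), (1, 0)):            # 4-neighbourhood (right, down)
            ny, nx = y + dy, x + dx
            if ny < h and nx < w:
                j = ny * w + nx
                d = np.linalg.norm(points[i] - points[j])
                graph[i, j] = graph[j, i] = d

# Geodesic distance from the centre pixel to every other surface point.
source = (h // 2) * w + w // 2
geodesic = dijkstra(graph.tocsr(), directed=False, indices=source)
```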

9.
Face recognition from three-dimensional (3D) shape data has been proposed as a biometric identification method that either supplants or reinforces a two-dimensional approach. This paper presents a 3D face recognition system capable of recognizing the identity of an individual from a 3D facial scan in any pose across the view-sphere, by suitably comparing it with a set of models (all in frontal pose) stored in a database. The system makes use of only 3D shape data, ignoring textural information completely. Firstly, we propose a generic learning strategy using support vector regression [Burges, Data Mining Knowl Discov 2(2):121–167, 1998] to estimate the approximate pose of a 3D head. The support vector machine (SVM) is trained on range images in several poses belonging to only a small set of individuals and is able to coarsely estimate the pose of any unseen facial scan. Secondly, we propose a hierarchical two-step strategy to normalize a facial scan to a nearly frontal pose before performing any recognition. The first step consists of either a coarse normalization making use of facial features or the generic learning algorithm using the SVM. This is followed by an iterative technique to refine the alignment to the frontal pose, which is basically an improved form of the Iterated Closest Point Algorithm [Besl and McKay, IEEE Trans Pattern Anal Mach Intell 14(2):239–256, 1992]. The latter step produces a residual error value, which can be used as a metric to gauge the similarity between two faces. Our two-step approach is experimentally shown to outperform both of the individual normalization methods in terms of recognition rates, over a very wide range of facial poses. Our strategy has been tested on a large database of 3D facial scans in which the training and test images of each individual were acquired at significantly different times, unlike all except two of the existing 3D face recognition methods.
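The coarse pose-estimation step can be sketched as support vector regression from a downsampled, flattened range image to a pose angle. The minimal sketch below uses scikit-learn's SVR with synthetic range images and a single yaw angle; the paper's training protocol and features are not reproduced.

```python
# Hedged sketch: coarse head-pose (yaw) estimation from range images with SVR.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(7)
n_train, img_dim = 200, 32 * 32
range_images = rng.normal(size=(n_train, img_dim))   # synthetic range images
yaw = rng.uniform(-90, 90, size=n_train)             # known training poses

pose_regressor = SVR(kernel="rbf", C=10.0, epsilon=1.0)
pose_regressor.fit(range_images, yaw)

# Coarse pose of an unseen scan; a refined alignment (e.g. ICP) would follow.
estimated_yaw = pose_regressor.predict(range_images[:1])[0]
```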

10.
Face recognition under uncontrolled illumination conditions is still considered an unsolved problem. In order to correct for these illumination conditions, we propose a virtual illumination grid (VIG) approach to model the unknown illumination conditions. Furthermore, we use coupled subspace models of both the facial surface and albedo to estimate the face shape. In order to obtain a representation of the face under frontal illumination, we relight the estimated face shape. We show that the frontally illuminated facial images achieve better performance in face recognition. We have performed the challenging Experiment 4 of the FRGCv2 database, which compares uncontrolled probe images to controlled gallery images. Our illumination correction method results in considerably better recognition rates for a number of well-known face recognition methods. By fusing our global illumination correction method with a local illumination correction method, further improvements are achieved.

11.
Automatic face recognition based on shape and texture
Face recognition has broad application prospects in commerce and law enforcement, and is also very useful for security surveillance. Its main task is to identify one or more faces in still or video images using an existing face image database. In our experiments, the shape and texture features of face images are first extracted, and the generalized Karhunen-Loeve (KL) transform is applied to reduce the dimensionality of the shape and texture spaces, circumventing the small-sample-size limitation of face recognition. At the same time, an optimal discriminant transform with statistical uncorrelatedness is used to extract effective discriminative features from the face images. Recognition experiments on the NUST603 face database, which contains 960 face images, yield an error rate below 4%, and the method shows a degree of insensitivity to variations in pose and expression.

12.
One of the main challenges in face recognition comes from pose and illumination variations, which drastically affect recognition performance, as confirmed by the results of recent large-scale face recognition evaluations. This paper presents a new technique for face recognition, based on the joint use of 3D models and 2D images, specifically conceived to be robust with respect to pose and illumination changes. A 3D model of each user is exploited in the training stage (i.e. enrollment) to generate a large number of 2D images representing virtual views of the face with varying pose and illumination. Such images are then used to learn, in a supervised manner, a set of subspaces constituting the user's template. Recognition occurs by matching 2D images with the templates, and no 3D information (neither images nor face models) is required. The experiments carried out confirm the efficacy of the proposed technique.

13.
The paper proposes a novel, pose-invariant face recognition system based on a deformable, generic 3D face model that is a composite of: (1) an edge model, (2) a color region model and (3) a wireframe model for jointly describing the shape and important features of the face. The first two submodels are used for image analysis and the third mainly for face synthesis. In order to match the model to face images in arbitrary poses, the 3D model can be projected onto different 2D viewplanes based on rotation, translation and scale parameters, thereby generating multiple face-image templates (in different sizes and orientations). Face shape variations among people are taken into account by the deformation parameters of the model. Given an unknown face, its pose is estimated by model matching, and the system synthesizes face images of known subjects in the same pose. The face is then classified as the subject whose synthesized image is most similar. The synthesized images are generated using a 3D face representation scheme which encodes the 3D shape and texture characteristics of the faces. This face representation is automatically derived from training face images of the subject. Experimental results show that the method is capable of determining pose and recognizing faces accurately over a wide range of poses and under naturally varying lighting conditions. Recognition rates of 92.3% have been achieved by the method with 10 training face images per person.
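Projecting the 3D model onto a 2D viewplane under rotation, translation and scale can be sketched as x_2d = s * P * R(yaw, pitch, roll) * X + t. The minimal NumPy sketch below uses random vertices as placeholders for the wireframe model; the rotation convention is an assumption.

```python
# Hedged sketch: generate a 2D template by rotating, scaling and projecting
# a 3D face model onto the image plane.
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """Rotation about the y (yaw), x (pitch) and z (roll) axes, in radians."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    return Rz @ Rx @ Ry

def project(model_3d, yaw=0.0, pitch=0.0, roll=0.0, scale=1.0, t=(0.0, 0.0)):
    """Orthographic projection of the rotated, scaled model points."""
    rotated = model_3d @ rotation_matrix(yaw, pitch, roll).T
    return scale * rotated[:, :2] + np.asarray(t)

rng = np.random.default_rng(8)
wireframe = rng.normal(size=(500, 3))                 # placeholder 3D vertices
template_30deg = project(wireframe, yaw=np.deg2rad(30), scale=1.2, t=(64, 64))
```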

14.
To address the problems that pose variation degrades face recognition performance and that local facial details are easily lost during pose normalization, a multi-task-learning-based multi-pose face reconstruction and recognition method, the Multi-task Learning Stacked Auto-Encoder (MtLSAE), is proposed. Using a multi-task learning mechanism, the method jointly considers two related tasks, frontal pose recovery and preservation of local facial detail: while recovering the frontal pose progressively, layer by layer, a non-negativity-constrained sparse auto-encoder is introduced so that it can learn part-based features of the face. The whole network is then learned by sharing parameters between the pose-recovery and detail-preservation tasks. Finally, the reconstructed frontal face images are reduced in dimensionality by Fisherface to extract discriminative features, and a nearest-neighbor classifier is used for recognition. Experimental results show that MtLSAE achieves good pose reconstruction quality with clearly preserved local texture, and improves the recognition rate compared with classical methods such as Local Gabor Binary Patterns (LGBP), View-based Active Appearance Models (VAAM) and the Stacked Progressive Auto-Encoder (SPAE).
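The final recognition stage described above, Fisherface dimensionality reduction followed by a nearest-neighbour classifier, can be sketched with a PCA + LDA + 1-NN pipeline in scikit-learn. The images below are synthetic stand-ins for the reconstructed frontal faces, and the component counts are assumptions.

```python
# Hedged sketch: Fisherface-style features (PCA + LDA) with a 1-NN classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(9)
X = rng.normal(size=(400, 1024))          # reconstructed frontal faces (synthetic)
y = rng.integers(0, 20, size=400)         # 20 identities

fisherface_1nn = make_pipeline(
    PCA(n_components=100),                # keep enough variance before LDA
    LinearDiscriminantAnalysis(n_components=19),
    KNeighborsClassifier(n_neighbors=1),
)
fisherface_1nn.fit(X, y)
predicted_id = fisherface_1nn.predict(X[:1])[0]
```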

15.
The open-set problem is among the problems that most significantly affect the performance of face recognition algorithms in real-world scenarios. Open-set recognition operates under the assumption that not all probes have a counterpart in the gallery. Most real-world face recognition systems focus on handling the pose, expression and illumination problems. In addition to these challenges, when the number of subjects increases, these problems are intensified by look-alike faces: pairs of subjects for which the inter-class similarity is higher than the intra-class variation. Such look-alike faces can be intrinsic, situational, or produced by facial plastic surgery. This work introduces three real-world open-set face recognition methods that handle facial plastic surgery changes and look-alike faces by means of 3D face reconstruction and sparse representation. Since some real-world face recognition databases provide only one image per subject in the gallery, this paper proposes to overcome this limitation by building a 3D model from each gallery image and synthesizing it to generate several images. Accordingly, a 3D model is first reconstructed from each frontal face image in the real-world gallery. Each reconstructed 3D face is then synthesized into several possible views, and a sparse dictionary is generated from the synthesized face images for each person. A likeness dictionary is also defined, and its optimization problem is solved by the proposed method. Finally, open-set face recognition is performed using three proposed representation-based classifications. Promising results are achieved for face recognition across plastic surgery and look-alike faces on three databases (the plastic surgery face, look-alike face and LFW databases) compared to several state-of-the-art methods. Several real-world open-set scenarios are also used to evaluate the proposed method on these databases.
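The sparse-representation classification at the core of such methods can be sketched as: code the probe over a dictionary of (synthesized) gallery images with an l1-regularized solver, then assign the class with the smallest reconstruction residual. The sketch below uses scikit-learn's Lasso on a synthetic dictionary; the paper's likeness dictionary and its three classification rules are not reproduced.

```python
# Hedged sketch: sparse representation classification (SRC) by class residual.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(10)
n_subjects, views_per_subject, dim = 20, 9, 256
# Dictionary columns: synthesized views of each gallery subject (synthetic here).
D = rng.normal(size=(dim, n_subjects * views_per_subject))
D /= np.linalg.norm(D, axis=0)
labels = np.repeat(np.arange(n_subjects), views_per_subject)

probe = D[:, labels == 7] @ rng.random(views_per_subject)   # probe from class 7

coder = Lasso(alpha=0.01, max_iter=5000, fit_intercept=False)
coder.fit(D, probe)                       # solve the l1-regularized coding problem
x = coder.coef_

# Class-wise reconstruction residuals; the smallest residual gives the identity.
residuals = [np.linalg.norm(probe - D[:, labels == c] @ x[labels == c])
             for c in range(n_subjects)]
identity = int(np.argmin(residuals))
```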

16.
Exchanging Faces in Images

17.
Multi-pose face recognition based on factor analysis and sparse representation
In uncontrolled environments, pose variation and occlusion are among the greatest difficulties facing face recognition. Sparse-representation-based face recognition represents a test face as a sparse linear combination of training faces and classifies it according to the sparsity of the combination coefficients. This approach is highly robust to noise and occlusion, but it handles pose variation very poorly, because faces of the same person under different poses are hard to bring into correspondence, which violates the premise of the linear combination. To overcome this weakness, factor analysis is applied to the faces to separate out the pose factor and synthesize a frontal face, and sparse representation is then used for classification. Experimental results show that the method is robust to both occlusion and pose variation.

18.
Face recognition under varying pose, illumination and expression (PIE) is a challenging problem. In this paper, we propose an analysis-by-synthesis framework for face recognition under variant PIE. First, an efficient two-dimensional (2D)-to-three-dimensional (3D) integrated face reconstruction approach is introduced to reconstruct a personalized 3D face model from a single frontal face image with neutral expression and normal illumination. Then, realistic virtual faces with different PIE are synthesized based on the personalized 3D face to characterize the face subspace. Finally, face recognition is conducted based on these representative virtual faces. Compared with other related work, this framework has the following advantages: (1) only a single frontal face is required for face recognition, which avoids burdensome enrollment work; (2) the synthesized face samples provide the capability to conduct recognition under difficult conditions like complex PIE; and (3) compared with other 3D reconstruction approaches, our proposed 2D-to-3D integrated face reconstruction approach is fully automatic and more efficient. The extensive experimental results show that the synthesized virtual faces significantly improve the accuracy of face recognition under changing PIE.

19.
Recognition method based on multi-pose face image synthesis
To address multi-pose face recognition, a new method for frontal face synthesis based on independent component analysis (ICA) is proposed. ICA and PCA are first used to extract feature subspaces of faces at different poses, and the corresponding frontal face images are then synthesized using a pose transformation matrix learned during training. Experiments show that ICA-based face recognition outperforms PCA-based recognition. On this basis, the face images are preprocessed with wavelets and classification is performed directly on the frontal face feature coefficients obtained via the pose transformation matrix, which greatly improves the recognition rate.
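The pose-transformation step can be sketched as: extract ICA coefficients for paired non-frontal and frontal faces, then learn a linear mapping (the pose transformation matrix) between the two coefficient spaces by least squares. The sketch below uses scikit-learn's FastICA; all data are synthetic and the wavelet preprocessing is omitted.

```python
# Hedged sketch: learn a linear pose-transformation matrix between ICA
# coefficients of non-frontal and frontal face images.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(11)
n_pairs, dim, n_comp = 300, 1024, 40
profile_faces = rng.normal(size=(n_pairs, dim))       # synthetic non-frontal images
frontal_faces = rng.normal(size=(n_pairs, dim))       # synthetic paired frontal images

ica_profile = FastICA(n_components=n_comp, random_state=0)
ica_frontal = FastICA(n_components=n_comp, random_state=0)
C_profile = ica_profile.fit_transform(profile_faces)  # coefficients per pose
C_frontal = ica_frontal.fit_transform(frontal_faces)

# Pose transformation matrix W: C_frontal ~= C_profile @ W (least squares).
W, *_ = np.linalg.lstsq(C_profile, C_frontal, rcond=None)

# Map an unseen profile face to frontal ICA coefficients for classification.
frontal_coeffs = ica_profile.transform(profile_faces[:1]) @ W
```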

20.
Face recognition under variable pose and illumination is a challenging problem in computer vision tasks. In this paper, we solve this problem by proposing a new residual-based deep face reconstruction neural network to extract discriminative pose-and-illumination-invariant (PII) features. Our deep model can transform face images with arbitrary pose and illumination into the frontal view under standard illumination. We propose a new triplet-loss training method instead of a Euclidean loss to optimize our model, which has two advantages: (a) the training triplets can easily be augmented by freely choosing combinations of labeled face images, so that overfitting can be avoided; (b) the triplet-loss training makes the PII features more discriminative even when training samples have similar appearance. Using our PII features, we achieve 83.8% average recognition accuracy on the MultiPIE face dataset, which is competitive with state-of-the-art face recognition methods.
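A triplet loss penalizes an anchor-positive distance that is not smaller than the anchor-negative distance by at least a margin: L = max(0, ||f_a - f_p||^2 - ||f_a - f_n||^2 + m). The minimal NumPy sketch below computes this loss over a batch of feature triplets; the feature dimension and margin are placeholders, and the paper's network is not reproduced.

```python
# Hedged sketch: triplet loss over a batch of (anchor, positive, negative) features.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Mean of max(0, d(a, p) - d(a, n) + margin) over the batch."""
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    return np.mean(np.maximum(0.0, d_pos - d_neg + margin))

rng = np.random.default_rng(12)
a, p, n = (rng.normal(size=(32, 128)) for _ in range(3))   # batch of 128-D PII features
loss = triplet_loss(a, p, n)
```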
