Similar Documents
20 similar documents found (search time: 840 ms)
1.
《Pattern recognition》2014,47(2):544-555
This paper proposes a novel method of supervised and unsupervised multi-linear neighborhood preserving projection (MNPP) for face recognition. Unlike conventional neighborhood preserving projections, the MNPP method operates directly on tensorial data rather than on vectors or matrices, and solves problems of tensorial representation for multi-dimensional feature extraction, classification, and recognition. As opposed to traditional approaches such as NPP and 2DNPP, which derive only one subspace, the MNPP method obtains multiple interrelated subspaces by unfolding the tensor along different tensorial directions; the number of subspaces derived by MNPP is determined by the order of the tensor space. This approach is applied to face recognition and biometric security classification problems involving higher-order tensors. The performance of the proposed and existing techniques is analyzed on three benchmark facial datasets: ORL, AR, and FERET. The results show that MNPP outperforms the standard approaches in terms of error rate.
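The key operation behind obtaining one subspace per tensor order is mode-n unfolding. A minimal NumPy sketch of that step is below; it is illustrative only (the column ordering of an unfolding varies between conventions, and this is not the paper's code):

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move axis `mode` to the front, then flatten the
    remaining axes into columns (one common tensor-algebra layout)."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# A 3rd-order "image stack": 4 images of size 8x6.
X = np.random.rand(4, 8, 6)

# One unfolding (and hence one candidate subspace) per tensor mode.
unfoldings = [unfold(X, m) for m in range(X.ndim)]
print([u.shape for u in unfoldings])  # [(4, 48), (8, 24), (6, 32)]
```

Each unfolding would then feed a separate neighborhood-preserving projection, which is how the number of subspaces ends up equal to the tensor order.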

2.
Objective: Pose variation is a major factor affecting face recognition accuracy. Using the 3D morphable model (3DMM) common in 3D face reconstruction together with deep convolutional neural networks, this paper proposes a face pose correction algorithm for multi-pose face recognition that improves recognition accuracy under large poses. Method: The traditional 3DMM fitting procedure is improved: the model is parameterized by facial shape and expression parameters, different weights are assigned to landmarks in different facial regions, and the 3DMM is fitted with these weights, which yields better fits for face images with varying poses and expressions. The 3D face model is then pose-corrected, and deep learning is used to inpaint the irregular hole regions of the face; the latest partial-convolution technique is employed and the convolutional network is retrained on a new dataset to obtain optimal parameters. Results: The algorithm is compared with other methods on the LFW (labeled faces in the wild) database and the Stirling/ESRC (Economic and Social Research Council) 3D face database, and the experiments show a clear improvement in recognition accuracy. On LFW, after pose correction and inpainting of face images with arbitrary poses, the method achieves 96.57% recognition accuracy. On Stirling/ESRC, recognition accuracy improves by 5.195% and 2.265% at poses of ±22°, by 5.875% and 11.095% at poses of ±45°, and by 5.53% and 7.13% on average. Conclusion: The proposed pose correction algorithm combines the advantages of the 3D morphable model and deep learning models, and improves face recognition accuracy at every pose angle.
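The weighted landmark fitting described above reduces, in the simplest linear case, to a weighted least-squares problem. The sketch below assumes a hypothetical linear landmark model (a stand-in for a full 3DMM with shape and expression bases); all names and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear landmark model: l(p) = mean + B @ p, where p stacks
# shape and expression coefficients (a toy stand-in for a 3DMM).
n_landmarks, n_params = 10, 4
mean = rng.normal(size=2 * n_landmarks)           # x,y of each landmark
B = rng.normal(size=(2 * n_landmarks, n_params))  # model basis

p_true = rng.normal(size=n_params)
observed = mean + B @ p_true                      # noiseless observations

# Per-landmark weights: e.g. trust eye/nose points more than contour points.
w = np.ones(n_landmarks)
w[:3] = 4.0
W = np.repeat(np.sqrt(w), 2)                      # one weight per coordinate

# Weighted least squares: minimize sum_i w_i * ||l(p)_i - observed_i||^2.
p_est, *_ = np.linalg.lstsq(W[:, None] * B, W * (observed - mean), rcond=None)
print(np.allclose(p_est, p_true))  # True (noise-free case)
```

With noisy observations the weights determine which regions dominate the fit, which is the effect the paper exploits across poses and expressions.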

3.
Objective: Face recognition is widely deployed, but recognition under large poses remains unsolved. Existing methods either extract pose-robust features or frontalize the face. The mainstream frontalization approaches are 2D regression-based generation and 3D model-based deformation: the former can generate relatively natural, realistic faces but introduces extra noise that distorts image information, while the latter preserves the original facial structure but its generation process is physically model-based and not flexible or natural enough. Combining the advantages of the 2D and 3D approaches, this paper proposes a face frontalization method based on a coarse-to-fine deformation field. Method: The deformation field is learned by a deep network in a 2D regression manner and captures semantic-level correspondences between pixels of face images from different views, so it can frontalize non-frontal face images in a 3D-like way; the method therefore combines the flexibility of 2D frontalization with the fidelity of 3D frontalization. Following a progressive, step-by-step strategy, a coarse-to-fine learning framework is proposed to obtain a more accurate and robust deformation field. Results: Large-pose face recognition experiments validate the method on MultiPIE (multi pose, illumination, expressions), LFW (labeled faces in the wild), CFP (celebrities in frontal-profile in the wild)...

4.
Traditional matrix-based feature extraction methods, widely used in face recognition, essentially work on the facial image matrices in only one or two directions. For example, 2DPCA can be seen as row-based PCA: it reflects only the information in each row, and some structural information cannot be uncovered by it. In this paper, we propose directional 2DPCA (D2DPCA), which can extract features from matrices in any direction. To effectively use all the features extracted by D2DPCA, we combine a bank of D2DPCA transforms performed in different directions into a matching-score-level fusion method, named multi-directional 2DPCA, for face recognition. Experiments on the AR and FERET datasets show that the proposed method obtains higher accuracy than previous matrix-based feature extraction methods.
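A minimal NumPy sketch of the row-based 2DPCA this abstract builds on, applied in two axis-aligned directions (the paper's D2DPCA generalizes to arbitrary directions and fuses matching scores; that part is omitted here, and all names are assumptions):

```python
import numpy as np

def twodpca_projection(images, k):
    """Row-based 2DPCA: image covariance G = mean((A - Abar)^T (A - Abar));
    the top-k eigenvectors of G form the projection W, features Y = A @ W."""
    A = np.asarray(images, dtype=float)
    Abar = A.mean(axis=0)
    G = np.mean([(a - Abar).T @ (a - Abar) for a in A], axis=0)
    vals, vecs = np.linalg.eigh(G)          # eigenvalues in ascending order
    return vecs[:, ::-1][:, :k]             # keep the top-k eigenvectors

rng = np.random.default_rng(1)
imgs = rng.normal(size=(20, 8, 6))          # 20 images of size 8x6

W_row = twodpca_projection(imgs, k=2)                     # row direction
W_col = twodpca_projection(imgs.transpose(0, 2, 1), k=2)  # column direction

feats_row = imgs @ W_row                      # shape (20, 8, 2)
feats_col = imgs.transpose(0, 2, 1) @ W_col   # shape (20, 6, 2)
print(feats_row.shape, feats_col.shape)
```

Score-level fusion would then combine per-direction distances between these feature matrices into a single matching score.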

5.
In this paper, we propose an efficient face recognition scheme with two features: 1) representation of face images by two-dimensional (2D) wavelet subband coefficients and 2) recognition by a modular, personalised classification method based on kernel associative memory models. Compared to PCA projections and low-resolution "thumbnail" image representations, wavelet subband coefficients can efficiently capture substantial facial features while keeping computational complexity low. Because training samples are usually very limited, we construct an associative memory (AM) model for each person and improve the AM models with kernel methods: we first apply kernel transforms to each possible pair of training face samples and then map the high-dimensional feature space back to the input space. Our use of modular autoassociative memory for face recognition is inspired by the same motivation as using autoencoders for optical character recognition (OCR), for which the advantages have been proven. With associative memory, all the prototypical faces of one particular person are used to reconstruct themselves, and the reconstruction error for a probe face image decides whether the probe face belongs to the corresponding person. We carried out extensive experiments on three standard face recognition datasets: FERET, XM2VTS, and ORL. Detailed comparisons with earlier published results are provided, and our scheme offers better recognition accuracy on all of these face datasets.
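The per-person reconstruction-error scheme can be sketched with a plain linear autoassociative memory (the kernel transform and wavelet features of the paper are omitted; the toy vectors below are stand-ins):

```python
import numpy as np

rng = np.random.default_rng(2)

def train_am(prototypes):
    """Linear autoassociative memory: W = X X^+ projects onto the span of a
    person's prototype faces, so those faces reconstruct themselves exactly."""
    X = np.column_stack(prototypes)          # each column is one face vector
    return X @ np.linalg.pinv(X)

def reconstruction_error(W, probe):
    return np.linalg.norm(probe - W @ probe)

# Two "persons", each with 3 prototype vectors in a 12-dim feature space
# (stand-ins for wavelet-subband coefficient vectors).
persons = [[rng.normal(size=12) for _ in range(3)] for _ in range(2)]
models = [train_am(p) for p in persons]

probe = persons[0][1]                        # a face of person 0
errors = [reconstruction_error(W, probe) for W in models]
print(int(np.argmin(errors)))  # 0: the smallest reconstruction error wins
```

The probe lies in person 0's prototype span, so its model reconstructs it with near-zero error while the other model does not; that gap is the decision rule.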

6.
Face datasets are a primary tool for evaluating the efficacy of face recognition methods. Here we show that in many commonly used face datasets, face images can be recognized at a rate significantly higher than chance even when no face, hair, or clothing features appear in the image. The experiments were done by cutting a small background area from each face image, so that each face dataset yielded a new image dataset containing only seemingly blank images. An image classification method was then used to check the classification accuracy. Experimental results show that classification accuracy ranged from 13.5% (color FERET) to 99% (YaleB). These results indicate that the performance of face recognition methods measured on face image datasets may be biased. Compilable source code used for this experiment is freely available for download via the Internet.
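The bias mechanism is easy to reproduce: if each subject's images share acquisition conditions, even a blank corner carries identity. A hedged simulation of the experiment (the real paper crops actual dataset images; this toy version fakes per-subject background bias):

```python
import numpy as np

rng = np.random.default_rng(3)

def corner_patch(image, size=4):
    """Cut a small background area (top-left corner) from a face image,
    mimicking the paper's 'seemingly blank' crops."""
    return image[:size, :size].ravel()

def nearest_neighbour_label(train_patches, train_labels, probe):
    d = np.linalg.norm(train_patches - probe, axis=1)
    return train_labels[int(np.argmin(d))]

# Simulated dataset: each subject is photographed against a slightly
# different background brightness (per-subject acquisition bias).
images, labels = [], []
for s in range(5):
    bias = s * 10.0
    for _ in range(4):
        images.append(bias + rng.normal(size=(16, 16)))
        labels.append(s)

patches = np.array([corner_patch(im) for im in images])
labels = np.array(labels)

# A new "blank" crop from subject 2 is still identified correctly.
probe = corner_patch(2 * 10.0 + rng.normal(size=(16, 16)))
print(nearest_neighbour_label(patches, labels, probe))  # 2
```

Because identity is recoverable from the background alone, reported face-recognition accuracy on such datasets conflates face cues with acquisition cues.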

7.
Automatic makeup application aims to edit and synthesize facial makeup via computer algorithms and belongs to the field of face image analysis. It plays an important role in interactive entertainment applications, image and video editing, and assisting face recognition. As a face editing task, however, it remains difficult to produce edits that are natural and realistic while still satisfying the editing requirements, and problems persist in precisely controlling the edited region, keeping the image consistent before and after editing, and achieving sufficiently fine image quality. To address these difficulties, this paper proposes a novel mask-controlled automatic makeup generative adversarial network. Using a mask, the network concentrates its edits on the makeup regions and constrains regions that need no editing to stay unchanged, preserving the subject's identity. It can also edit local facial regions such as the eye shadow, lips, and cheeks individually, enabling region-specific makeup and enriching the makeup functionality. In addition, the network supports joint training on multiple datasets: besides the makeup dataset, other face datasets can serve as auxiliaries to strengthen the model's generalization and yield more natural makeup results. Finally, thorough qualitative and quantitative experiments under several evaluation criteria, including comparisons with current mainstream algorithms, comprehensively assess the performance of the proposed method.
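The mask-control idea can be written as two penalty terms: match the reference inside the mask, stay identical to the source outside it. The sketch below is a simplified stand-in for the paper's GAN objective (names and weighting are assumptions):

```python
import numpy as np

def masked_makeup_losses(generated, source, reference, mask):
    """Mask-controlled objective sketch: inside the mask the output should
    take on the reference makeup; outside it, the output must stay identical
    to the source, preserving identity."""
    makeup_term = np.mean((mask * (generated - reference)) ** 2)
    preserve_term = np.mean(((1 - mask) * (generated - source)) ** 2)
    return float(makeup_term), float(preserve_term)

source = np.zeros((8, 8))                      # bare face (toy image)
reference = np.ones((8, 8))                    # made-up reference (toy image)
mask = np.zeros((8, 8))
mask[2:4, 2:6] = 1                             # e.g. the lip region

# The ideal edit copies the reference inside the mask and the source outside.
ideal = np.where(mask == 1, reference, source)
print(masked_makeup_losses(ideal, source, reference, mask))  # (0.0, 0.0)
```

A real network would minimize a weighted sum of these terms plus adversarial and perceptual losses; the mask is also what makes region-specific editing (lips only, cheeks only) possible.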

8.
9.
Most existing approaches to multimodal 2D + 3D face recognition exploit the 2D and 3D information at the feature or score level and do not fully benefit from the dependency between the modalities. Exploiting this dependency at an early stage is more effective than at a later stage, because early-fusion data contains richer information about the input biometric than compressed features or matching scores. We propose an image recombination scheme for face recognition that explores the dependency between modalities at the image level. Facial cues from the 2D and 3D images are recombined into more independent and discriminating data by finding transformation axes that account for the maximal amount of variance in the images. We also introduce a complete framework for multimodal 2D + 3D face recognition that utilizes the 2D and 3D facial information at the enrollment, image, and score levels. Experimental results on the NTU-CSP and Bosphorus 3D face databases show that our face recognition system using image recombination outperforms systems based on pixel- or score-level fusion.

10.
We propose in this paper two improved manifold learning methods, diagonal discriminant locality preserving projections (Dia-DLPP) and weighted two-dimensional discriminant locality preserving projections (W2D-DLPP), for face and palmprint recognition. Motivated by the fact that diagonal images outperform the original images for conventional two-dimensional (2D) subspace learning methods such as 2D principal component analysis (2DPCA) and 2D linear discriminant analysis (2DLDA), we first propose applying diagonal images to the recently proposed 2D discriminant locality preserving projections (2D-DLPP) algorithm, and formulate the Dia-DLPP method for feature extraction from face and palmprint images. Moreover, we show that transforming an image into a diagonal image is equivalent to assigning an appropriate weight to each pixel of the original image to emphasize its different importance for recognition, which provides the rationale for, and the superiority of, using diagonal images in 2D subspace learning. Inspired by this finding, we further propose a new discriminant weighting method that explicitly calculates the discriminative score of each pixel within a face or palmprint sample to duly emphasize its importance, and incorporate it into 2D-DLPP to formulate the W2D-DLPP method, improving the recognition performance of both 2D-DLPP and Dia-DLPP. Experimental results on the widely used FERET face and PolyU palmprint databases demonstrate the efficacy of the proposed methods.
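A diagonal image rearranges pixels so that every output row mixes all rows of the original, which is why row-oriented 2D subspace methods see more of the image's structure. The construction below is one wrapped-diagonal variant found in the diagonal-image literature; the paper's exact construction may differ in details:

```python
import numpy as np

def diagonal_image(A):
    """One common 'diagonal image' construction: row i of the output
    collects pixels along the i-th wrapped diagonal, so each output row
    mixes every row of the original image."""
    m, n = A.shape
    return np.array([[A[(i + j) % m, j] for j in range(n)] for i in range(m)])

A = np.arange(12).reshape(3, 4)
D = diagonal_image(A)
print(D)
# [[ 0  5 10  3]
#  [ 4  9  2  7]
#  [ 8  1  6 11]]
```

Feeding `D` instead of `A` into a method like 2D-DLPP is all Dia-DLPP changes at the data level.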

11.
Recent face recognition algorithms can achieve high accuracy when the tested face samples are frontal. However, when the face pose changes greatly, the performance of existing methods drops drastically. Pose-robust face recognition is highly desirable, especially when each face class has only one frontal training sample. In this study, we propose a 2D face-fitting-assisted 3D face reconstruction algorithm that aims to recognize faces of different poses when each face class has only one frontal training sample. For each frontal training sample, a 3D face is reconstructed by optimizing the parameters of a 3D morphable model (3DMM). By rotating the reconstructed 3D face to different views, virtual face images at various poses are generated to enlarge the training set for face recognition. Unlike conventional 3D face reconstruction methods, the proposed algorithm uses automatic 2D face fitting to assist the 3D reconstruction. We automatically locate 88 sparse points of the frontal face with a 2D face-fitting algorithm, the so-called Random Forest Embedded Active Shape Model, which embeds random forest learning into the framework of the Active Shape Model. The 2D fitting results are added to the 3D face reconstruction objective function as shape constraints, so the optimization energy function takes into account not only image intensity but also the 2D fitting results. The shape and texture parameters of the 3DMM are thus estimated by fitting the 3DMM to the 2D frontal face sample, a non-linear optimization problem. We evaluate the proposed method on the publicly available CMU PIE database, which includes faces viewed from 11 different poses, and the results show that the method is effective and the recognition results under pose variation are promising.

12.
3D face reconstruction and recognition based on binocular passive stereo vision   (Cited by 4: self-citations 0, external citations 4)
This paper proposes a 3D face recognition method based on binocular passive vision. The method uses a non-contact face data acquisition technique: weak-feature detection in the images performs face detection and initial disparity estimation in the binocular views, and complex-wavelet phase correlation performs sub-pixel small-region matching of the face surface to reconstruct the 3D face point cloud. A neural network with an adjustable number of training iterations performs multi-level facial surface reconstruction, and the reconstructed surface is affinely normalized with the aid of the 2D face image, after which feature extraction and recognition proceed iteratively. Experimental results show that the binocular approach makes face data acquisition friendly and unobtrusive; in correspondence matching, the complex-wavelet phase-correlation algorithm obtains dense registered point pairs at sub-pixel precision, and the neural network method reconstructs the facial surface correctly. The recognition process is robust to the environment and to face pose and expression. The system is very inexpensive and suitable for wide application in many fields.

13.
Facial landmark localization automatically locates key facial feature points defined in advance according to facial physiology, such as the eye corners, nose tip, mouth corners, and face contour, from input face data, and plays a crucial role in face recognition and analysis systems. This paper surveys deep-learning-based automatic facial landmark localization: it explains what automatic landmark localization means, summarizes the public face datasets in common use, systematically describes automatic localization methods for landmarks in 2D and 3D data, reviews the state of research on each method and its applications, and analyzes the current status, open problems, and development trends of deep-learning applications in facial landmark localization. Different methods are compared on public 2D and 3D face datasets. The survey shows that research on deep-learning-based 2D facial landmark localization is relatively mature, whereas 3D landmark localization still faces challenges in model representation, processing methods, and sample size. Deep-learning-based 3D facial landmark localization is likely to become a future research trend.

14.
The increasing availability of 3D facial data offers the potential to overcome the intrinsic difficulties of conventional face recognition from 2D images. Instead of extending 2D recognition algorithms to 3D, this letter proposes a novel strategy for 3D face recognition: represent each 3D facial surface with a 2D attribute image and take advantage of the advances in 2D face recognition. In our approach, each 3D facial surface is mapped homeomorphically onto a 2D lattice, where the value at each site is an attribute representing the local 3D geometric or textural properties of the surface, and is therefore invariant to pose changes. This lattice is then interpolated to generate a 2D attribute image, and 3D face recognition is achieved by applying traditional 2D face recognition techniques to the resulting attribute images. In this study, we chose the pose-invariant local mean curvature calculated at each vertex of the 3D facial surface to construct the 2D attribute image, and adopted the eigenface algorithm for attribute image recognition. We compared our approach with state-of-the-art 3D face recognition algorithms on the FRGC (version 2.0), GavabDB, and NPU3D databases. The results show that the proposed approach improves robustness to head pose variation and produces more accurate 3D multi-pose face recognition.
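For a surface given as a depth map z = f(x, y), the mean curvature used to fill the attribute image has a closed form in terms of first and second derivatives. A minimal finite-difference sketch (grid-unit spacing; a real pipeline would use the mesh geometry and physical units):

```python
import numpy as np

def mean_curvature(z):
    """Mean curvature of a Monge patch z = f(x, y):
    H = ((1+fy^2)fxx - 2 fx fy fxy + (1+fx^2)fyy) / (2 (1+fx^2+fy^2)^1.5)."""
    fx, fy = np.gradient(z)
    fxx, fxy = np.gradient(fx)
    _, fyy = np.gradient(fy)
    num = (1 + fy**2) * fxx - 2 * fx * fy * fxy + (1 + fx**2) * fyy
    return num / (2 * (1 + fx**2 + fy**2) ** 1.5)

# Toy "facial surface": a downward paraboloid cap, curving the same way
# everywhere, so its mean curvature is negative across the patch.
y, x = np.mgrid[-1:1:32j, -1:1:32j]
z = -(x**2 + y**2)
H = mean_curvature(z)
print(H.shape)
```

Each vertex's curvature value becomes one pixel of the 2D attribute image, after which standard eigenfaces apply unchanged.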

15.
张帆  赵世坤  袁操  陈伟  刘小丽  赵涵捷 《软件学报》2022,33(7):2411-2446
Face recognition theory and technology have achieved great success and are widely used in critical domains such as government, finance, and the military. Like other information systems, face recognition systems face various security problems, among which face spoofing (FS) is one of the most serious: an attacker uses printed photos, video replay, 3D masks, and similar attacks to trick the face recognition system into a wrong decision, so countering it is a key problem that face recognition systems must solve. This paper surveys recent progress in face anti-spoofing (FAS). It first outlines the basic concepts of FAS; it then introduces the main scientific problems FAS currently faces together with the main solutions and their strengths and weaknesses. On this basis, existing FAS work is divided into traditional methods and deep-learning methods, each discussed in detail. Next, the domain generalization and interpretability of deep-learning-based FAS are examined from both theoretical and practical perspectives. The paper then presents the typical datasets used in FAS research and their characteristics, and gives evaluation criteria for FAS algorithms together with experimental comparisons. Finally, future research directions for FAS are summarized and development trends discussed.

16.
In this paper, we propose two novel methods for face recognition under arbitrary unknown lighting using a spherical-harmonics illumination representation; they require only one training image per subject and no 3D shape information. Our methods build on the result that the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be accurately approximated by a low-dimensional linear subspace. We provide two methods for estimating, from just one image, the spherical harmonic basis images spanning this space. Our first method builds a statistical model from a collection of 2D basis images; we demonstrate that, using the learned statistics, the spherical harmonic basis images can be estimated from a single image taken under arbitrary illumination when there is no pose variation. Compared to the first method, the second builds the statistical models directly in 3D, combining the spherical-harmonics illumination representation with a 3D morphable model of human faces to recover basis images from images across both pose and illumination. After estimating the basis images, both methods use the same recognition scheme: we recognize the face for which some weighted combination of basis images is closest to the test face image. We present a series of experiments achieving high recognition rates under a wide range of illumination conditions, including multiple sources of illumination. Our methods reach levels of accuracy comparable to methods with much more onerous training-data requirements. A comparison of the two methods is also provided.
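The recognition step, finding the subject whose weighted combination of basis images is closest to the test image, is a per-subject least-squares fit. A sketch with random stand-ins for the nine spherical-harmonic basis images (sizes and names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical per-subject basis images (e.g. 9 spherical-harmonic basis
# images, flattened to pixel vectors).
n_pixels, n_basis, n_subjects = 100, 9, 3
bases = [rng.normal(size=(n_pixels, n_basis)) for _ in range(n_subjects)]

def residual(B, t):
    """Distance from test image t to the lighting subspace spanned by B."""
    coeffs, *_ = np.linalg.lstsq(B, t, rcond=None)
    return np.linalg.norm(B @ coeffs - t)

# A test image lit by some weighted combination of subject 1's basis.
lighting = rng.normal(size=n_basis)
test = bases[1] @ lighting

print(int(np.argmin([residual(B, test) for B in bases])))  # 1
```

Because the test image lies in subject 1's lighting subspace, that subject's residual is (near) zero regardless of the particular illumination weights, which is what makes the scheme lighting-insensitive.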

17.
Objective: Local occlusion of 3D face point clouds is a major factor degrading 3D face recognition accuracy. To overcome the effect of local occlusion, this paper proposes a 3D face recognition method based on radial curves and local features. Method: First, to make full use of the neighborhood information of each radial curve, the curve is represented by a set of local features. Second, to handle the uneven sampling caused by sparse point clouds, some adjacent local regions are merged to reduce the impact of nonuniform sampling. A cost function is then built from the neighborhood information of the radial curves, and similarity vectors between corresponding curves are constructed from it. Finally, the similarity vectors are used to match radial curves and complete 3D face recognition. Results: In tests of different local features on the FRGC v2.0 database, the chosen local features reach a Rank-1 recognition rate of 95.2%, higher than the other local features. In experiments with different algorithms under local occlusion on the Bosphorus database, the method achieves the highest Rank-1 recognition rate of 92.0%. In a further time-complexity comparison on Bosphorus it is also the fastest, at 8.17 s. The algorithm thus achieves the best results in both accuracy and run time. Conclusion: The radial-curve and local-feature method effectively extracts the local information around each radial curve, and the similarity vectors generated from the local-feature cost function effectively reduce the impact of local occlusion. The experiments show that the algorithm is accurate, fast, and fairly robust to local occlusion; it suits 3D face recognition under local occlusion, but it cannot recognize faces whose nose-tip region is occluded.

18.
In this paper, we present a novel image-based technique that transfers illumination from a source face image to a target face image based on the Logarithmic Total Variation (LTV) model. Our method requires no prior information about the lighting conditions or the 3D geometry of the underlying faces. We first use a deformation technique based on Radial Basis Functions (RBFs) to align key facial features of the reference 2D face with those of the target face. We then employ the LTV model to factorize each of the two aligned face images into an illumination-dependent component and an illumination-invariant component. Finally, illumination transfer is achieved by replacing the illumination-dependent component of the target face with that of the reference face. We tested the technique on numerous grayscale and color face images from various face datasets, including the Yale Face Database, as well as on the application of illumination-preserving face coloring.
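The factorize-and-swap pipeline can be sketched with a crude stand-in for the LTV factorization: a local mean in the log domain approximates the illumination-dependent (large-scale) component, and the residual is the illumination-invariant (small-scale) component. This is an assumption-laden simplification, not the LTV model itself:

```python
import numpy as np

def split_illumination(face, ksize=7):
    """Crude LTV stand-in: local mean of log(face) = illumination component;
    residual = illumination-invariant detail. Both returned in log domain."""
    log_face = np.log(face + 1e-6)
    pad = ksize // 2
    padded = np.pad(log_face, pad, mode="edge")
    large = np.zeros_like(log_face)
    h, w = log_face.shape
    for i in range(h):
        for j in range(w):
            large[i, j] = padded[i:i + ksize, j:j + ksize].mean()
    return large, log_face - large

def transfer_illumination(target, reference):
    illum_ref, _ = split_illumination(reference)
    _, detail_tgt = split_illumination(target)
    return np.exp(illum_ref + detail_tgt)   # swap the illumination components

rng = np.random.default_rng(5)
target = 0.5 + 0.05 * rng.random((16, 16))              # evenly lit face
ramp = np.linspace(0.2, 1.0, 16)[:, None]
reference = ramp * (0.5 + 0.05 * rng.random((16, 16)))  # lit from below

relit = transfer_illumination(target, reference)
print(relit[0].mean() < relit[-1].mean())  # True: lighting gradient transferred
```

The relit target keeps its own small-scale detail but inherits the reference's top-to-bottom lighting gradient, which is exactly the behavior the LTV-based method formalizes with an edge-preserving decomposition.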

19.
孙强, 谭晓阳. Journal of Computer Applications, 2017, 37(11): 3226-3230
To address the problem that face recognition accuracy suffers from facial pose, occlusion, image resolution, and other factors, this paper proposes a super-resolution frontalization method that operates on low-quality unconstrained input images and generates high-resolution standard frontal views. The method estimates the projection matrix between the input image and a 3D model to produce a standard frontal view, and exploits facial symmetry to fill in facial pixels missing because of pose, occlusion, and similar causes. During frontalization, to raise image resolution and avoid losing facial pixel information, a 16-layer deep recursive convolutional neural network performs super-resolution reconstruction, with two extensions, recursive supervision and skip connections, introduced to ease network training and shrink the model. Experiments on a processed LFW dataset show that the method significantly improves the performance of face recognition and gender detection algorithms.

20.
Pose-Robust Facial Expression Recognition Using View-Based 2D + 3D AAM   (Cited by 1: self-citations 0, external citations 1)
This paper proposes a pose-robust face tracking and facial expression recognition method using a view-based 2D + 3D active appearance model (AAM), which extends the 2D + 3D AAM to a view-based approach in which one independent face model is used for each specific view and an appropriate face model is selected for the input face image. Our extension covers several aspects. First, we use principal component analysis with missing data to construct the 2D + 3D AAM, owing to the missing data in posed face images. Second, we develop an effective model selection method that directly uses the pose angle estimated by the 2D + 3D AAM, which makes face tracking pose-robust and makes feature extraction for facial expression recognition accurate. Third, we propose a double-layered generalized discriminant analysis (GDA) for facial expression recognition. Experimental results show the following: 1) face tracking by the view-based 2D + 3D AAM, which uses multiple face models with one model per view, is more robust to pose change than tracking by an integrated 2D + 3D AAM, which uses a single integrated model for all three views; 2) the double-layered GDA extracts good features for facial expression recognition; and 3) the view-based 2D + 3D AAM outperforms other existing models at pose-varying facial expression recognition.
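The model selection step reduces to picking the view model whose canonical angle is nearest to the estimated pose. A compact sketch (the three-view setup and names are assumptions consistent with the abstract, not the paper's code):

```python
import numpy as np

def select_view_model(yaw_deg, view_models):
    """Pick the face model whose canonical view is nearest to the pose angle
    estimated by the 2D + 3D AAM (one independent model per view)."""
    views = np.array(sorted(view_models))
    nearest = int(views[np.argmin(np.abs(views - yaw_deg))])
    return nearest, view_models[nearest]

# Hypothetical three-view setup: frontal and +/-30 degree models.
models = {-30: "left_model", 0: "frontal_model", 30: "right_model"}
print(select_view_model(-22.5, models))  # (-30, 'left_model')
```

Because the pose angle comes directly from the fitted AAM, no separate pose classifier is needed, which is the point of the paper's second contribution.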


Copyright©北京勤云科技发展有限公司  京ICP备09084417号