Similar Documents
20 similar documents found
1.
Deformation modeling for robust 3D face matching (Cited: 1; self-citations: 0; by others: 1)
Face recognition based on 3D surface matching is promising for overcoming some of the limitations of current 2D image-based face recognition systems. The 3D shape is generally invariant to pose and lighting changes, but not to non-rigid facial movements such as expressions. Collecting and storing multiple templates to account for various expressions for each subject in a large database is not practical. We propose a facial surface modeling and matching scheme to match 2.5D facial scans in the presence of both non-rigid deformations and pose changes (multiview) to a 3D face template. A hierarchical geodesic-based resampling approach is applied to extract landmarks for modeling facial surface deformations. We are able to synthesize the deformations learned from a small group of subjects (control group) onto a 3D neutral model (not in the control group), resulting in a deformed template. A user-specific (3D) deformable model is built by combining the templates with synthesized deformations. The matching distance is computed by fitting this generative deformable model to a test scan. A fully automatic prototype 3D face matching system has been developed. Experimental results demonstrate that the proposed deformation modeling scheme increases 3D face matching accuracy.
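The matching step described above (fitting a user-specific deformable template to a test scan and using the fitting residual as a matching distance) can be illustrated with a minimal sketch. The hierarchical geodesic resampling, landmark extraction, and deformation synthesis of the paper are not reproduced here; the template is simply modeled as a neutral shape plus one weighted deformation mode, and the residual is a mean nearest-neighbour distance. All names and values are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.optimize import minimize_scalar

def matching_distance(neutral, deformation, scan):
    """Fit a one-mode deformable template (neutral + w * deformation)
    to a test scan and return the residual as a matching distance.

    neutral, deformation : (N, 3) template vertices and a synthesized
        deformation direction (e.g. a learned expression mode).
    scan : (M, 3) 2.5D test scan points, assumed coarsely aligned.
    """
    tree = cKDTree(scan)

    def residual(w):
        deformed = neutral + w * deformation      # deformed template
        d, _ = tree.query(deformed)               # closest scan point per vertex
        return np.mean(d)                         # average surface distance

    # search for the deformation weight that best explains the scan
    res = minimize_scalar(residual, bounds=(0.0, 1.0), method="bounded")
    return res.fun                                # smaller = better match

# toy usage with synthetic data
rng = np.random.default_rng(0)
neutral = rng.normal(size=(500, 3))
deformation = 0.1 * rng.normal(size=(500, 3))
scan = neutral + 0.4 * deformation + 0.01 * rng.normal(size=(500, 3))
print(matching_distance(neutral, deformation, scan))
```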

2.
Multi-information fusion for facial landmark localization on multi-pose 3D faces (Cited: 1; self-citations: 0; by others: 1)
To address the sensitivity of facial landmark localization on 3D face models to pose variation, a multi-pose 3D facial landmark localization method based on multi-information fusion is proposed. First, feature points are detected on the 2D face texture image with the affine-invariant Affine-SIFT (ASIFT) method and projected into 3D space using the 2D-3D mapping. The facial landmarks are then localized precisely by combining a maximum-local-neighbourhood-curvature-variation rule with iterative constrained optimization. Experimental results on FRGC 2.0 and the self-built NPU3D database show that the method requires no prior estimation of pose or predefined format for the 3D data, has low algorithmic complexity, and is strongly robust to the pose of the face model, achieving higher localization accuracy than existing landmark localization methods.
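The local refinement rule, moving a coarse 3D landmark (obtained by mapping a 2D keypoint into 3D) to the nearby surface point with the strongest curvature, can be sketched as a simple neighbourhood search. Per-vertex curvature values are assumed to be precomputed, and the iterative constrained optimization of the paper is not reproduced; radius and names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def refine_landmark(vertices, curvature, coarse_point, radius=8.0):
    """Snap a coarse landmark estimate to the vertex with maximum curvature
    within `radius`, as a stand-in for the local refinement rule.

    vertices  : (N, 3) mesh vertices
    curvature : (N,)   precomputed per-vertex curvature magnitude
    """
    tree = cKDTree(vertices)
    idx = tree.query_ball_point(coarse_point, r=radius)
    if not idx:                                   # nothing nearby: keep the estimate
        return coarse_point
    idx = np.asarray(idx)
    best = idx[np.argmax(curvature[idx])]
    return vertices[best]

# toy usage: the landmark snaps to the highest-curvature neighbour
rng = np.random.default_rng(9)
verts = rng.uniform(-50, 50, size=(2000, 3))
curv = rng.uniform(size=2000)
print(refine_landmark(verts, curv, np.zeros(3), radius=10.0))
```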

3.
Pose-Robust Facial Expression Recognition Using View-Based 2D+3D AAM (Cited: 1; self-citations: 0; by others: 1)
This paper proposes a pose-robust face tracking and facial expression recognition method using a view-based 2D+3D active appearance model (AAM) that extends the 2D+3D AAM to the view-based approach, where one independent face model is used for a specific view and an appropriate face model is selected for the input face image. Our extension has been conducted in several aspects. First, we use principal component analysis with missing data to construct the 2D+3D AAM, due to the missing data in the posed face images. Second, we develop an effective model selection method that directly uses the pose angle estimated from the 2D+3D AAM, which makes face tracking pose-robust and feature extraction for facial expression recognition accurate. Third, we propose a double-layered generalized discriminant analysis (GDA) for facial expression recognition. Experimental results show the following: 1) face tracking by the view-based 2D+3D AAM, which uses multiple face models with one face model per view, is more robust to pose change than tracking by an integrated 2D+3D AAM, which uses a single integrated face model for all three views; 2) the double-layered GDA extracts good features for facial expression recognition; and 3) the view-based 2D+3D AAM outperforms other existing models at pose-varying facial expression recognition.
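The view-based model selection step, picking one of several per-view models from the pose angle estimated by the 2D+3D AAM, reduces to a simple lookup. The sketch below assumes three yaw bins (left, frontal, right); the angle threshold and the model objects are placeholders, not values from the paper.

```python
from typing import Any, Dict

def select_view_model(yaw_deg: float, models: Dict[str, Any],
                      frontal_limit: float = 15.0) -> Any:
    """Pick the per-view face model whose yaw range contains the
    estimated pose angle (illustrative thresholds only)."""
    if yaw_deg < -frontal_limit:
        return models["left"]
    if yaw_deg > frontal_limit:
        return models["right"]
    return models["frontal"]

# usage: in practice the values would be three independently trained 2D+3D AAMs
models = {"left": "AAM_left", "frontal": "AAM_frontal", "right": "AAM_right"}
print(select_view_model(-32.0, models))   # -> AAM_left
```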

4.
In this paper, we present algorithms to assess the quality of facial images affected by factors such as blurriness, lighting conditions, head pose variations, and facial expressions. We develop face recognition prediction functions for images affected by blurriness, lighting conditions, and head pose variations based upon the eigenface technique. We also develop a classifier for images affected by facial expressions to assess their quality for recognition by the eigenface technique. Our experiments using different facial image databases show that our algorithms are capable of assessing the quality of facial images. These algorithms could be used in a module for facial image quality assessment in a face recognition system. In the future, we will integrate the different measures of image quality to produce a single measure that indicates the overall quality of a face image.
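One common way to turn the eigenface technique into a quality score is to project the face into the PCA subspace and measure the reconstruction error ("distance from face space"). The abstract does not give the exact prediction functions, so the sketch below is only a generic illustration of that idea, with assumed image sizes and component counts.

```python
import numpy as np

def fit_eigenfaces(train_images: np.ndarray, n_components: int = 50):
    """train_images: (n_samples, n_pixels) flattened, aligned face images."""
    mean = train_images.mean(axis=0)
    centered = train_images - mean
    # principal components via SVD of the centered data matrix
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]            # (n_pixels,), (k, n_pixels)

def reconstruction_error(image: np.ndarray, mean: np.ndarray,
                         components: np.ndarray) -> float:
    """Distance from face space: a large error suggests a degraded image
    (blur, extreme lighting, pose) that the eigenface model explains poorly."""
    x = image - mean
    coeffs = components @ x
    recon = components.T @ coeffs
    return float(np.linalg.norm(x - recon) / (np.linalg.norm(x) + 1e-12))

# toy usage with random data standing in for aligned 32x32 faces
rng = np.random.default_rng(1)
train = rng.normal(size=(200, 32 * 32))
mean, comps = fit_eigenfaces(train, n_components=20)
probe = rng.normal(size=32 * 32)
print(reconstruction_error(probe, mean, comps))
```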

5.
Objective: Pose deviation is an important factor affecting face recognition accuracy. Using the 3D morphable model commonly employed in 3D face reconstruction together with deep convolutional neural networks, this paper proposes a face pose-correction algorithm for multi-pose face recognition, which improves recognition accuracy under large poses to a certain degree. Method: The traditional 3D morphable model fitting is improved: the model is parameterized with face shape and expression parameters, keypoints in different facial regions are assigned different weights, and the morphable model is fitted in a weighted manner so that face images with different poses and expressions are fitted better. The 3D face model is then pose-corrected, and deep learning is used to inpaint the face image, repairing irregular hole regions; the recent partial convolution technique is adopted and the convolutional neural network is retrained on a new dataset so that the network parameters reach their optimum. Results: The proposed algorithm is compared with other methods on the LFW (labeled faces in the wild) database and the Stirling ESRC (Economic Social Research Council) 3D face database, and the experimental results show that its face recognition accuracy improves to a certain degree. On LFW, after pose correction and inpainting of face images with arbitrary poses, the method reaches 96.57% recognition accuracy. On Stirling ESRC, recognition accuracy improves by 5.195% and 2.265% at poses of ±22°, and by 5.875% and 11.095% at poses of ±45°; the average recognition rate improves by 5.53% and 7.13%. The comparative experiments show that the proposed pose-correction algorithm effectively improves face recognition accuracy. Conclusion: The proposed face pose-correction algorithm combines the advantages of the 3D morphable model and the deep learning model and improves face recognition accuracy to some extent at every pose angle.
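Once the pose is fixed, the weighted fitting of shape and expression parameters to facial keypoints is, at its core, a weighted linear least-squares problem. A minimal sketch, assuming a linear morphable model (mean plus shape/expression basis) already projected onto the 2D landmark positions; the region weights, basis, and regularizer below are placeholders, not the paper's values.

```python
import numpy as np

def fit_weighted_3dmm(A: np.ndarray, b: np.ndarray, w: np.ndarray,
                      reg: float = 1e-3) -> np.ndarray:
    """Solve min_c || W (A c - b) ||^2 + reg ||c||^2.

    A : (2L, K) projected shape+expression basis at the L landmarks
    b : (2L,)   observed 2D landmark offsets from the projected mean shape
    w : (L,)    per-landmark weights (e.g. larger for rigid regions)
    """
    W = np.repeat(w, 2)                        # same weight for x and y
    Aw = A * W[:, None]
    bw = b * W
    lhs = Aw.T @ Aw + reg * np.eye(A.shape[1])
    rhs = Aw.T @ bw
    return np.linalg.solve(lhs, rhs)           # shape/expression coefficients

# toy usage: 68 landmarks, 40 combined shape+expression components
rng = np.random.default_rng(2)
A = rng.normal(size=(136, 40))
c_true = rng.normal(size=40)
b = A @ c_true
w = np.ones(68)
w[27:36] = 2.0                                 # e.g. up-weight a rigid region
print(np.allclose(fit_weighted_3dmm(A, b, w, reg=1e-8), c_true, atol=1e-3))
```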

6.
HMM-based single-sample face recognition under varying illumination and pose (Cited: 2; self-citations: 1; by others: 2)
An HMM-based algorithm is proposed for single-sample face recognition under varying illumination and pose. The algorithm first uses a manually registered training set to automatically register a single frontal input face image to the Candide-3 model and, on that basis, reconstructs a person-specific 3D face model. Rotating the reconstructed model to various angles yields synthetic faces with different poses, and adjusting the illumination coefficients of these synthetic faces with spherical-harmonic basis images yields synthetic faces under different illumination. The synthesized faces with varying illumination and pose, together with the original sample image, are used as training data to build an individual hidden Markov model for each user. The algorithm was evaluated on existing face databases and compared with recognition methods based on illumination compensation and pose correction. The results show that it avoids the low recognition rates that arise when illumination compensation or pose correction fails for certain illuminations and poses, and adapts better to face recognition under varying illumination and pose.
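The relighting step, adjusting illumination coefficients over spherical-harmonic basis images to synthesize differently lit faces, is linear once the basis images are available. A minimal sketch, assuming the nine low-order spherical-harmonic basis images have already been computed from the reconstructed model's normals and albedo; names, shapes, and values are illustrative.

```python
import numpy as np

def relight(basis_images: np.ndarray, coeffs: np.ndarray) -> np.ndarray:
    """Synthesize a face under new lighting as a linear combination of
    spherical-harmonic basis images.

    basis_images : (9, H, W) low-order SH basis images of the subject
    coeffs       : (9,)      lighting coefficients for the target illumination
    """
    img = np.tensordot(coeffs, basis_images, axes=1)   # (H, W)
    return np.clip(img, 0.0, 1.0)

# toy usage: generate several training images under random lighting
rng = np.random.default_rng(3)
basis = rng.uniform(0.0, 0.2, size=(9, 64, 64))
training_set = [relight(basis, rng.normal(size=9)) for _ in range(10)]
print(len(training_set), training_set[0].shape)
```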

7.
A survey of 3D face recognition research (Cited: 10; self-citations: 0; by others: 10)
Over the past two decades, image-based face recognition has made great progress and can achieve good performance in constrained environments, but it is still strongly affected by variations in illumination, pose, and expression; the underlying reason is that an image is a reduced 2D projection of a 3D object. Face recognition using an explicit 3D representation of the facial surface has therefore become a research hotspot in recent years. This paper analyzes the motivation, concepts, and basic pipeline of 3D face recognition. According to the form of the features, 3D face recognition algorithms are surveyed in three categories: direct spatial matching, local-feature matching, and holistic-feature matching. Bimodal 2D+3D fusion methods are classified and described, representative 3D face databases are listed, and several methods are compared experimentally with an analysis of why they are effective. Finally, the current advantages and difficulties of 3D face recognition are summarized and future research trends are discussed.

8.
Variations in illumination degrade the performance of appearance-based face recognition. We present a novel algorithm for the normalization of color facial images using a single image and its co-registered 3D point cloud (3D image). The algorithm borrows the physically based Phong lighting model from computer graphics, where it is used for rendering, and employs it in reverse to calculate the face albedo from real facial images. Our algorithm estimates the number of dominant light sources and their directions from the specularities in the facial image and the corresponding 3D points. The intensities of the light sources and the parameters of the Phong model are estimated by fitting the Phong model to the facial skin data. Unlike existing approaches, our algorithm takes into account both Lambertian and specular reflections as well as attached and cast shadows. Moreover, our algorithm is invariant to facial pose and expression and can effectively handle the case of multiple extended light sources. The algorithm was tested on the challenging FRGC v2.0 data and satisfactory results were achieved. The mean fitting error was 6.3% of the maximum color value. Performing face recognition using the normalized images increased both identification and verification rates.
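Once a light direction is estimated, recovering albedo amounts to fitting the Phong reflection equation to the observed skin intensities and dividing out the estimated shading. A minimal sketch with a single distant light, a single colour channel, and least-squares estimation of the diffuse/specular/shininess parameters; this is only the generic Phong model, not the paper's full multi-source, shadow-aware formulation, and all starting values and bounds are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def phong_intensity(params, n_dot_l, r_dot_v):
    ka, kd, ks, alpha = params
    return ka + kd * np.maximum(n_dot_l, 0.0) + ks * np.maximum(r_dot_v, 0.0) ** alpha

def estimate_albedo(intensity, normals, light_dir, view_dir):
    """intensity: (N,) observed skin intensities; normals: (N, 3) unit normals;
    light_dir, view_dir: (3,) unit vectors. Returns a relative albedo per point."""
    n_dot_l = normals @ light_dir
    refl = 2.0 * n_dot_l[:, None] * normals - light_dir   # mirror reflection of the light
    r_dot_v = refl @ view_dir

    # fit global Phong parameters to the observed intensities
    fit = least_squares(lambda p: phong_intensity(p, n_dot_l, r_dot_v) - intensity,
                        x0=[0.1, 0.6, 0.2, 10.0],
                        bounds=([0, 0, 0, 1], [1, 2, 2, 100]))
    ka, kd, ks, alpha = fit.x

    # divide out the estimated shading (ambient + diffuse) after removing speculars
    shading = ka + kd * np.maximum(n_dot_l, 0.0)
    specular = ks * np.maximum(r_dot_v, 0.0) ** alpha
    return (intensity - specular) / np.maximum(shading, 1e-3)

# toy usage: spatially varying albedo under a frontal light
rng = np.random.default_rng(4)
normals = rng.normal(size=(1000, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
light = view = np.array([0.0, 0.0, 1.0])
ndl = normals @ light
rdv = (2.0 * ndl[:, None] * normals - light) @ view
albedo_true = rng.uniform(0.4, 0.9, size=1000)
intensity = albedo_true * (0.1 + 0.6 * np.maximum(ndl, 0.0)) + 0.2 * np.maximum(rdv, 0.0) ** 10
est = estimate_albedo(intensity, normals, light, view)
print(np.corrcoef(est, albedo_true)[0, 1])   # typically close to 1: albedo recovered up to a global scale
```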

9.
This paper proposes an accurate, rotation-invariant, and fast approach for detecting facial features in thermal images. The proposed approach combines both appearance and geometric information to detect the facial features. A texture-based detector is built using Haar features and the AdaBoost algorithm. The relation between these facial features is then modeled with a complex Gaussian distribution, which is invariant to rotation. Experiments show that our approach outperforms existing algorithms for facial feature detection in thermal images. The approach's performance is illustrated in a face recognition framework based on extracting a local signature around the facial features. The paper also presents a comparative study of different signature techniques at different facial image resolutions. The results of this study suggest the minimum facial image resolution in thermal images that can be used for face recognition, and provide a guideline for choosing a good signature that leads to the best recognition rate.

10.
Head pose estimation is a key task for visual surveillance, HCI, and face recognition applications. In this paper, a new approach is proposed for estimating 3D head pose from a monocular image. The approach assumes the full perspective projection camera model. It employs general prior knowledge of face structure and the corresponding geometric constraints provided by the location of a certain vanishing point to determine the pose of human faces. To achieve this, the eye-lines, formed from the far and near eye corners, and the mouth-line through the mouth corners are assumed parallel in 3D space. The vanishing point of these parallel lines, found as the intersection of the eye-line and the mouth-line in the image, can then be used to infer the 3D orientation and location of the human face. In order to deal with the variance of the facial model parameters, e.g. the ratio between the eye-line and the mouth-line, an EM framework is applied to update the parameters. We first compute the 3D pose using initially learnt parameters (such as ratio and length) and then adapt the parameters statistically for individual persons and their facial expressions by minimizing the residual errors between the projections of the model feature points and the actual features in the image. In doing so, we assume every facial feature point can be associated with each of the feature points in the 3D model with some a posteriori probability. The expectation step of the EM algorithm provides an iterative framework for computing the a posteriori probabilities using Gaussian mixtures defined over the parameters. A robustness analysis of the algorithm on synthetic data and on real images with known ground truth is included.
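The geometric core, intersecting the eye-line and the mouth-line to obtain a vanishing point that constrains the 3D face orientation, is a one-line computation in homogeneous coordinates. The sketch below computes only the vanishing point and the direction of the parallel eye/mouth lines in camera coordinates; the EM parameter adaptation and the full pose recovery of the paper are not reproduced, and the focal length and pixel coordinates are assumed values.

```python
import numpy as np

def vanishing_point(eye_l, eye_r, mouth_l, mouth_r):
    """Intersect the eye-line and the mouth-line (assumed parallel in 3D)
    in homogeneous image coordinates; returns the vanishing point (u, v)."""
    to_h = lambda p: np.array([p[0], p[1], 1.0])
    eye_line = np.cross(to_h(eye_l), to_h(eye_r))
    mouth_line = np.cross(to_h(mouth_l), to_h(mouth_r))
    vp = np.cross(eye_line, mouth_line)
    return vp[:2] / vp[2]          # degenerate if the image lines are exactly parallel

def line_direction_3d(vp_uv, focal, principal=(0.0, 0.0)):
    """3D direction (in camera coordinates) of the parallel eye/mouth lines
    implied by their vanishing point, for an assumed focal length."""
    d = np.array([vp_uv[0] - principal[0], vp_uv[1] - principal[1], focal])
    return d / np.linalg.norm(d)

# usage with illustrative pixel coordinates (principal point at the origin)
vp = vanishing_point((-40, -10), (40, -2), (-15, 55), (18, 58))
print(vp, line_direction_3d(vp, focal=800.0))
```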

11.
Objective: Face recognition is widely deployed, but recognition under large poses remains an open problem. Existing methods either extract pose-robust features or frontalize the face. Mainstream frontalization methods include 2D regression-based generation and 3D morphable-model-based warping: the former can generate relatively natural, realistic faces but introduces extra noise that distorts the image content, while the latter preserves the original facial structure but its generation is based on a physical model and is not flexible or natural enough. Combining the advantages of the 2D and 3D approaches, this paper proposes a face frontalization method based on a coarse-to-fine deformation field. Method: The deformation field is learned by a deep network in a 2D regression manner and captures the semantic-level pixel correspondence between face images of different views, so non-frontal faces can be frontalized in a 3D-like way; the method therefore combines the flexibility of 2D frontalization with the fidelity of 3D frontalization. Following a step-by-step, progressive strategy, a coarse-to-fine deformation-field learning framework is proposed to obtain a more accurate and robust deformation field. Results: Large-pose face recognition experiments are used to verify the effectiveness of the method on MultiPIE (multi pose, illumination, expressions), LFW (labeled faces in the wild), and CFP (celebrities in frontal-profile in the wild)...
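The key operation the deformation field enables, moving every pixel of a non-frontal face to its frontal position, is ordinary backward warping. A minimal sketch, assuming the coarse-to-fine network has already produced a dense flow field that, for each output (frontal) pixel, points to the source pixel in the input view; the network itself is not reproduced and the flow convention is an assumption.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_with_flow(image: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Backward-warp `image` (H, W) with a dense deformation field.

    flow : (2, H, W) -- for each output pixel (y, x), flow[:, y, x] is the
           (dy, dx) offset to the source pixel in the input image.
    """
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    coords = np.stack([yy + flow[0], xx + flow[1]])     # sampling locations
    return map_coordinates(image, coords, order=1, mode="nearest")

# toy usage: a constant flow just translates the image by one pixel
img = np.arange(25, dtype=float).reshape(5, 5)
flow = np.zeros((2, 5, 5))
flow[1] = 1.0                                           # sample one pixel to the right
print(warp_with_flow(img, flow))
```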

12.
3D face models are widely used in video telephony, video conferencing, film production, computer games, face recognition, and other fields. Current 3D face modeling generally requires multiple images with neutral expression. This paper proposes a method for reconstructing a 3D face with arbitrary expression from a frontal and a profile image. Facial features are first extracted from the 2D images; then, based on a statistical 3D face model, a person-specific 3D face is obtained through scaling, translation, rotation, and global and local matching. Using the facial texture information in the 2D images, a complete textured 3D face is obtained by texture mapping. Reconstruction from a large number of real 2D face images confirms the effectiveness and robustness of the method.

13.
This paper presents a novel automatic framework for 3D face recognition. The proposed method uses a Simulated Annealing-based approach (SA) for range image registration with the Surface Interpenetration Measure (SIM) as the similarity measure in order to match two face images. The authentication score is obtained by combining the SIM values corresponding to the matching of four different face regions: circular and elliptical areas around the nose, the forehead, and the entire face region. A modified SA approach is then proposed that takes advantage of invariant face regions to better handle facial expressions. Comprehensive experiments were performed on the FRGC v2 database, the largest available database of 3D face images, composed of 4,007 images with different facial expressions. The experiments simulated both verification and identification systems, and the results were compared to those reported by state-of-the-art works. Using all of the images in the database, a verification rate of 96.5 percent was achieved at a False Acceptance Rate (FAR) of 0.1 percent. In the identification scenario, a rank-one accuracy of 98.4 percent was achieved. To the best of our knowledge, this is the highest rank-one score achieved on the FRGC v2 database compared to results published in the literature.
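The registration strategy, perturbing a rigid transform and accepting or rejecting moves under a simulated-annealing schedule, can be sketched compactly. The score below is a plain mean nearest-neighbour distance rather than the Surface Interpenetration Measure used in the paper, and the schedule, step sizes, and absence of region weighting are illustrative simplifications.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation

def sa_register(source, target, n_iter=2000, t0=1.0, cooling=0.997, seed=0):
    """Rigidly align `source` (N, 3) to `target` (M, 3) by simulated annealing.
    Returns the best (rotation_vector, translation) found and its score."""
    rng = np.random.default_rng(seed)
    tree = cKDTree(target)

    def cost(rvec, t):
        moved = Rotation.from_rotvec(rvec).apply(source) + t
        d, _ = tree.query(moved)
        return d.mean()                                   # stand-in for the SIM score

    rvec, t = np.zeros(3), np.zeros(3)
    cur = best = cost(rvec, t)
    best_params, temp = (rvec.copy(), t.copy()), t0
    for _ in range(n_iter):
        cand_r = rvec + rng.normal(scale=0.02, size=3)    # small rotation step
        cand_t = t + rng.normal(scale=0.5, size=3)        # small translation step
        c = cost(cand_r, cand_t)
        if c < cur or rng.random() < np.exp((cur - c) / temp):
            rvec, t, cur = cand_r, cand_t, c              # accept the move
            if cur < best:
                best, best_params = cur, (rvec.copy(), t.copy())
        temp *= cooling                                   # cool down
    return best_params, best

# toy usage: recover a small known translation
rng = np.random.default_rng(1)
pts = rng.normal(size=(300, 3))
params, score = sa_register(pts, pts + np.array([2.0, 0.0, 0.0]), n_iter=500)
print(params[1], score)
```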

14.
Sotiris  Michael G.  Pattern Recognition, 2005, 38(12): 2537-2548
The paper addresses the problem of face recognition under varying pose and illumination. Robustness to appearance variations is achieved not only by using a combination of a 2D color image and a 3D image of the face, but mainly by using face geometry information to cope with the pose and illumination variations that inhibit the performance of 2D face recognition. A face normalization approach is proposed which, unlike state-of-the-art techniques, is computationally efficient and does not require an extended training set. Experimental results on a large data set show that template-based face recognition performance benefits significantly from applying the proposed normalization algorithms prior to classification.

15.
In recent years, face recognition based on 3D techniques has emerged as a technology that demonstrates better results than conventional 2D approaches. Using texture (a 180° multi-view image) and depth maps is expected to increase robustness to the two main challenges in face recognition: pose and illumination. Nevertheless, 3D data must be acquired under highly controlled conditions and in most cases depends on the cooperation of the subject to be recognized. Thus, in applications such as surveillance or access control points, this kind of 3D data may not be available during the recognition process. This leads to a new paradigm of mixed 2D-3D face recognition systems in which 3D data is used in training but either 2D or 3D information can be used in recognition, depending on the scenario. Following this concept, where only part of the information (the partial concept) is used in recognition, a novel method is presented in this work. It has been called Partial Principal Component Analysis (P2CA), since it fuses the partial concept with the fundamentals of the well-known PCA algorithm. This strategy has proven to be very robust in pose-variation scenarios, showing that the 3D training process retains all the spatial information of the face while the 2D picture effectively recovers the face information from the available data. Furthermore, this work presents a novel approach for the automatic creation of 180° aligned cylindrically projected face images using nine different views. These face images are created by using a cylindrical approximation of the real object surface. The alignment is done by first applying a global 2D affine transformation of the image, and afterward a local transformation of the desired face features using a triangle mesh. This local alignment allows a closer look at the feature properties rather than the differences. Finally, these aligned face images are used to train a pose-invariant face recognition approach (P2CA).
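The 180° cylindrical face image used for training can be illustrated with a simple projection: each textured 3D point is mapped to (angle around the vertical axis, height) and rasterized into an image. The global affine and local triangle-mesh alignment described in the paper are omitted, and the resolution and axis conventions below are assumptions.

```python
import numpy as np

def cylindrical_projection(points, colors, width=180, height=200):
    """Project textured 3D face points onto a 180-degree cylindrical image.

    points : (N, 3) x, y, z coordinates; y = vertical axis, z toward the camera
    colors : (N,)   gray values (process one color channel at a time)
    """
    x, y, z = points.T
    theta = np.degrees(np.arctan2(x, z))              # -180..180, 0 = frontal
    u = np.clip(((theta + 90.0) / 180.0 * (width - 1)).astype(int), 0, width - 1)
    ymin, ymax = y.min(), y.max()
    v = np.clip(((y - ymin) / (ymax - ymin + 1e-9) * (height - 1)).astype(int), 0, height - 1)

    img = np.zeros((height, width))
    count = np.zeros((height, width))
    np.add.at(img, (height - 1 - v, u), colors)       # accumulate colors per cell
    np.add.at(count, (height - 1 - v, u), 1.0)
    return img / np.maximum(count, 1.0)               # average where several points fall

# toy usage: a front-facing hemisphere of points with constant texture
rng = np.random.default_rng(5)
p = rng.normal(size=(5000, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)
p = p[p[:, 2] > 0]                                    # keep the front-facing half
print(cylindrical_projection(p, np.ones(len(p))).shape)
```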

16.
Matching 2.5D face scans to 3D models (Cited: 7; self-citations: 0; by others: 7)
The performance of face recognition systems that use two-dimensional images depends on factors such as lighting and the subject's pose. We are developing a face recognition system that utilizes three-dimensional shape information to make the system more robust to arbitrary pose and lighting. For each subject, a 3D face model is constructed by integrating several 2.5D face scans captured from different views. 2.5D is a simplified 3D (x, y, z) surface representation that contains at most one depth value (z direction) for every point in the (x, y) plane. Two different modalities provided by the facial scan, namely shape and texture, are utilized and integrated for face matching. The recognition engine consists of two components, surface matching and appearance-based matching. The surface matching component is based on a modified iterative closest point (ICP) algorithm. The candidate list from the gallery used for appearance matching is dynamically generated based on the output of the surface matching component, which reduces the complexity of the appearance-based matching stage. Three-dimensional models in the gallery are used to synthesize new appearance samples with pose and illumination variations, and the synthesized face images are used in discriminant subspace analysis. The weighted sum rule is applied to combine the scores given by the two matching components. Experimental results are given for matching a database of 200 3D face models against 598 independent 2.5D test scans acquired under different poses and some lighting and expression changes. These results show the feasibility of the proposed matching scheme.
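The surface-matching component is built on the iterative closest point idea: repeatedly pair each scan point with its nearest model point and solve for the rigid transform that best aligns the pairs. Below is a minimal point-to-point ICP sketch (SVD-based rigid step); it does not include the control-point selection or other modifications of the paper's modified ICP, and the toy data are synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with R @ src_i + t ~ dst_i."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp(scan, model, n_iter=30):
    """Align a 2.5D scan (N, 3) to a 3D model point set (M, 3); returns the
    aligned scan and the final mean residual (usable as a match score)."""
    tree = cKDTree(model)
    cur = scan.copy()
    for _ in range(n_iter):
        _, idx = tree.query(cur)              # closest model point per scan point
        R, t = best_rigid_transform(cur, model[idx])
        cur = cur @ R.T + t
    d, _ = tree.query(cur)
    return cur, d.mean()

# toy usage: a slightly rotated and shifted copy of the model aligns back
rng = np.random.default_rng(6)
model = rng.normal(size=(800, 3))
a = np.deg2rad(5)
Rz = np.array([[np.cos(a), -np.sin(a), 0.0], [np.sin(a), np.cos(a), 0.0], [0.0, 0.0, 1.0]])
scan = model @ Rz.T + np.array([0.1, 0.05, 0.0])
print(icp(scan, model)[1])                    # residual shrinks toward zero
```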

17.
In this paper, we present an approach for 3D face recognition from frontal range data based on the ridge lines on the surface of the face. We use the principal curvature, k_max, to represent the face image as a 3D binary image called the ridge image. The ridge image shows the locations of the ridge points around the important facial regions (i.e., the eyes, the nose, and the mouth). We utilize the robust Hausdorff distance and the iterative closest point (ICP) algorithm to match the ridge image of a given probe image against the ridge images of the facial images in the gallery. To evaluate the performance of our approach, we performed experiments on the GavabDB face database (a small database) and the Face Recognition Grand Challenge V2.0 (a large database). The results show that the ridge lines have great capability for 3D face recognition. In addition, we found that as long as the database is small, the performance of ICP-based matching and robust Hausdorff matching is comparable; when the size of the database increases, ICP-based matching outperforms the robust Hausdorff matching technique.
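A "robust" (partial) Hausdorff distance replaces the maximum point-to-set distance with a ranked quantile, so a few spurious ridge points do not dominate the score. A minimal sketch over two ridge point sets; the quantile is an illustrative choice, and the ICP alternative from the paper is shown in the previous sketch rather than repeated here.

```python
import numpy as np
from scipy.spatial import cKDTree

def partial_hausdorff(A: np.ndarray, B: np.ndarray, frac: float = 0.9) -> float:
    """Directed partial Hausdorff distance: the `frac` quantile of the
    distances from each point of A to its nearest point of B."""
    d, _ = cKDTree(B).query(A)
    return float(np.quantile(d, frac))

def robust_hausdorff(A: np.ndarray, B: np.ndarray, frac: float = 0.9) -> float:
    """Symmetric version: the larger of the two directed distances."""
    return max(partial_hausdorff(A, B, frac), partial_hausdorff(B, A, frac))

# toy usage: two ridge point sets, one with a few outliers
rng = np.random.default_rng(7)
ridge_a = rng.normal(size=(400, 3))
ridge_b = ridge_a + 0.01 * rng.normal(size=(400, 3))
ridge_b[:5] += 10.0                                  # spurious ridge points
print(robust_hausdorff(ridge_a, ridge_b, frac=0.9))  # stays small despite the outliers
```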

18.
Objective: Expression variation is the main problem facing 3D face recognition. To overcome the influence of expression, an expression-robust 3D face recognition method based on facial profile curves is proposed. Method: First, the face is preprocessed, including face-region cropping, smoothing, and pose normalization, so that all faces are placed in a pose-aligned coordinate system. Then, multiple vertical profile curves are extracted from the semi-rigid region of the 3D face model to represent the facial surface. Finally, an elastic curve matching algorithm computes the geodesic distance in preshape space between corresponding profile curves of different 3D face models as the similarity measure, and the similarity vectors of all curves are fused with weights to obtain the overall similarity used for classification. Results: Recognition experiments on the FRGC v2.0 database achieve a rank-1 recognition rate of 97.1%. Conclusion: By representing the face with multiple profile curves extracted from its semi-rigid region, the method weakens the influence of expression to some extent and also speeds up face matching. The experimental results show that the method has strong recognition performance and good robustness to expression variation.
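A simplified stand-in for the curve comparison: resample each profile curve, remove translation and scale (Kendall preshape), align rotation by Procrustes, and take the spherical geodesic distance arccos⟨q1, q2⟩. The paper uses elastic curve matching, which additionally optimizes over reparameterization; that step is omitted here, so this is only an illustration of a geodesic distance in preshape space.

```python
import numpy as np

def to_preshape(curve: np.ndarray) -> np.ndarray:
    """Center and scale an (n, 3) sampled curve onto the preshape sphere."""
    c = curve - curve.mean(axis=0)
    return c / np.linalg.norm(c)

def preshape_geodesic(c1: np.ndarray, c2: np.ndarray) -> float:
    """Geodesic distance between two curves after optimal rotation alignment
    (reflections are not excluded in this simplified version)."""
    q1, q2 = to_preshape(c1), to_preshape(c2)
    U, _, Vt = np.linalg.svd(q1.T @ q2)       # orthogonal Procrustes alignment
    R = U @ Vt
    inner = np.clip(np.sum(q1 * (q2 @ R.T)), -1.0, 1.0)
    return float(np.arccos(inner))

# toy usage: a rotated, rescaled copy of a curve is at distance ~0
t = np.linspace(0, 1, 100)
curve = np.stack([t, np.sin(3 * t), 0.2 * t ** 2], axis=1)
a = np.deg2rad(25)
Rz = np.array([[np.cos(a), -np.sin(a), 0.0], [np.sin(a), np.cos(a), 0.0], [0.0, 0.0, 1.0]])
print(preshape_geodesic(curve, 2.0 * curve @ Rz.T))
```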

19.
To address the sensitivity of 2D face recognition to pose and illumination variation, a 2D face recognition method based on 3D data and mixture of multi-scale singular value (MMSV) features is proposed. In the training stage, 3D face data and an illumination model are used to generate a large number of virtual 2D images with different poses and illumination conditions, laying the foundation for a complete feature template; subset partitioning effectively alleviates the nonlinearity in the face feature extraction process; finally, MMSV features are extracted from the face images, fusing global and local facial features. In the recognition stage, classification is performed by computing distances in the MMSV feature subspace. Experiments show that the extracted MMSV features contain more discriminative information and are robust to pose and illumination variation. The method achieves a recognition rate of about 98.4% on the WHU-3D database.
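The abstract does not detail how the mixture of multi-scale singular value (MMSV) features is built, so the sketch below only illustrates the general idea of multi-scale singular-value descriptors: compute the leading singular values of the image at several scales and of local blocks, and concatenate them into a feature vector. Block sizes, scales, and the number of singular values kept are assumptions, not the paper's design.

```python
import numpy as np

def singular_values(img: np.ndarray, k: int) -> np.ndarray:
    s = np.linalg.svd(img, compute_uv=False)   # singular values, descending
    out = np.zeros(k)
    out[:min(k, s.size)] = s[:k]
    return out

def multiscale_sv_features(img: np.ndarray, scales=(1, 2, 4), k: int = 10,
                           block: int = 16) -> np.ndarray:
    """Concatenate global singular values at several downsampling scales with
    per-block singular values at the finest scale (illustrative layout)."""
    feats = [singular_values(img[::s, ::s], k) for s in scales]   # global, coarse-to-fine
    h, w = img.shape
    for i in range(0, h - block + 1, block):                      # local blocks
        for j in range(0, w - block + 1, block):
            feats.append(singular_values(img[i:i + block, j:j + block], k))
    v = np.concatenate(feats)
    return v / (np.linalg.norm(v) + 1e-12)

# toy usage on a synthetic 64x64 image standing in for a face
rng = np.random.default_rng(8)
print(multiscale_sv_features(rng.uniform(size=(64, 64))).shape)
```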

20.
Face recognition based on 3D face modeling effectively overcomes the drawback of 2D face recognition systems, whose recognition rate is easily affected by illumination, pose, and expression. This paper adopts an effective algorithm that adaptively adjusts a generic 3D face model according to a face image, constructs a person-specific face model, and applies it to face recognition. By comparing the facial feature points estimated from the face image with the projections of the generic 3D face model onto the image plane, the generic model is adjusted globally and locally to fit the individual characteristics of the eyes, mouth, and nose. Finally, an example illustrates the application of the algorithm.
