Similar Documents
20 similar documents found.
1.
To address the sensitivity of 2D face recognition to pose and illumination variation, a 2D face recognition method based on 3D data and mixture of multi-scale singular value (MMSV) features is proposed. In the training stage, 3D face data and an illumination model are used to generate a large number of 2D virtual images with different poses and lighting conditions, laying the foundation for constructing a complete feature template; at the same time, subset partitioning effectively alleviates the nonlinearity problem in face feature extraction. Finally, MMSV features are extracted from the face images, fusing the global and local characteristics of the face. In the recognition stage, classification is performed by computing distances in the MMSV feature subspace. Experiments show that the extracted MMSV features contain more discriminative information and are robust to pose and illumination changes. The method achieves a recognition rate of about 98.4% on the WHU-3D database.
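A minimal sketch of the multi-scale singular-value idea described above, assuming a simple block partition at a few scales and plain nearest-neighbour matching; the exact MMSV mixture model, subset partitioning and subspace distance used in the paper are not reproduced here.

```python
# Hedged sketch: multi-scale singular-value features for a 2D face image.
# The grid sizes, the number of retained singular values and the Euclidean
# matcher are illustrative assumptions, not the paper's MMSV formulation.
import numpy as np

def singular_values(patch, k=10):
    """Return the k largest singular values of an image patch
    (k must not exceed the smaller patch dimension)."""
    s = np.linalg.svd(patch.astype(np.float64), compute_uv=False)
    return s[:k]

def mmsv_like_feature(img, grid_sizes=(1, 2, 4), k=10):
    """Concatenate singular values of the whole image and of sub-blocks
    at several scales, fusing global and local information."""
    h, w = img.shape
    feats = []
    for g in grid_sizes:
        bh, bw = h // g, w // g
        for i in range(g):
            for j in range(g):
                block = img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
                feats.append(singular_values(block, k))
    return np.concatenate(feats)

def match(probe_feat, gallery_feats, gallery_labels):
    """Nearest-neighbour matching; gallery_feats is an (N, D) array."""
    d = np.linalg.norm(gallery_feats - probe_feat, axis=1)
    return gallery_labels[int(np.argmin(d))]
```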

2.
Matching 2.5D face scans to 3D models
The performance of face recognition systems that use two-dimensional images depends on factors such as lighting and the subject's pose. We are developing a face recognition system that utilizes three-dimensional shape information to make the system more robust to arbitrary pose and lighting. For each subject, a 3D face model is constructed by integrating several 2.5D face scans which are captured from different views. 2.5D is a simplified 3D (x, y, z) surface representation that contains at most one depth value (z direction) for every point in the (x, y) plane. Two different modalities provided by the facial scan, namely, shape and texture, are utilized and integrated for face matching. The recognition engine consists of two components, surface matching and appearance-based matching. The surface matching component is based on a modified iterative closest point (ICP) algorithm. The candidate list from the gallery used for appearance matching is dynamically generated based on the output of the surface matching component, which reduces the complexity of the appearance-based matching stage. Three-dimensional models in the gallery are used to synthesize new appearance samples with pose and illumination variations, and the synthesized face images are used in discriminant subspace analysis. The weighted sum rule is applied to combine the scores given by the two matching components. Experimental results are given for matching a database of 200 3D face models with 598 independent 2.5D test scans acquired under different poses and some lighting and expression changes. These results show the feasibility of the proposed matching scheme.
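As an illustration of the score-level fusion step, the sketch below combines an ICP-style distance and an appearance similarity with a weighted sum rule; min-max normalisation and equal weights are assumptions, not the paper's tuned settings.

```python
# Hedged sketch of a weighted sum rule for combining two matcher scores.
# ICP gives a distance (smaller is better) while an appearance matcher may
# give a similarity, so the distance is flipped before fusion.
import numpy as np

def minmax(scores):
    """Normalise scores into [0, 1]."""
    s = np.asarray(scores, dtype=float)
    rng = s.max() - s.min()
    return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

def fuse_scores(icp_distances, appearance_similarities, w_surface=0.5):
    surf = 1.0 - minmax(icp_distances)          # distance -> similarity
    appr = minmax(appearance_similarities)
    return w_surface * surf + (1.0 - w_surface) * appr

# The gallery entry with the highest fused score is taken as the match.
fused = fuse_scores([1.2, 0.4, 0.9], [0.3, 0.8, 0.5])
best_index = int(np.argmax(fused))
```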

3.
Sotiris  Michael G. 《Pattern recognition》2005,38(12):2537-2548
The paper addresses the problem of face recognition under varying pose and illumination. Robustness to appearance variations is achieved not only by using a combination of a 2D color image and a 3D image of the face, but mainly by using face geometry information to cope with the pose and illumination variations that inhibit the performance of 2D face recognition. A face normalization approach is proposed which, unlike state-of-the-art techniques, is computationally efficient and does not require an extended training set. Experimental results on a large data set show that template-based face recognition performance benefits significantly from the application of the proposed normalization algorithms prior to classification.

4.
The BJUT-3D 3D Face Database and Its Processing Techniques
BJUT-3D is currently the largest 3D face database of Chinese subjects, containing preprocessed 3D face data of 1,200 Chinese individuals; this resource is of great significance for research on 3D face recognition and modeling. This paper first describes the data acquisition conditions and data format of the BJUT-3D database, and then discusses the data preprocessing techniques used in building the database. Finally, as a direct application of the database, multi-pose face recognition and face pose estimation algorithms are studied. Experimental results confirm that the algorithms perform well.

5.
Model-based face analysis is a general paradigm with applications that include face recognition, expression recognition, lip-reading, head pose estimation, and gaze estimation. A face model is first constructed from a collection of training data, either 2D images or 3D range scans. The face model is then fit to the input image(s) and the model parameters are used in the application at hand. Most existing face models can be classified as either 2D (e.g. Active Appearance Models) or 3D (e.g. Morphable Models). In this paper we compare 2D and 3D face models along three axes: (1) representational power, (2) construction, and (3) real-time fitting. For each axis in turn, we outline the differences that result from using a 2D or a 3D face model.

6.
The increasing availability of 3D facial data offers the potential to overcome the intrinsic difficulties faced by conventional face recognition using 2D images. Instead of extending 2D recognition algorithms for 3D purposes, this letter proposes a novel strategy for 3D face recognition from the perspective of representing each 3D facial surface with a 2D attribute image and taking advantage of the advances in 2D face recognition. In our approach, each 3D facial surface is mapped homeomorphically onto a 2D lattice, where the value at each site is an attribute that represents the local 3D geometrical or textural properties on the surface and is therefore invariant to pose changes. This lattice is then interpolated to generate a 2D attribute image. 3D face recognition can be achieved by applying traditional 2D face recognition techniques to the obtained attribute images. In this study, we chose the pose-invariant local mean curvature calculated at each vertex on the 3D facial surface to construct the 2D attribute image and adopted the eigenface algorithm for attribute image recognition. We compared our approach to state-of-the-art 3D face recognition algorithms on the FRGC (Version 2.0), GavabDB and NPU3D databases. Our results show that the proposed approach improves robustness to head pose variation and produces more accurate multi-pose 3D face recognition.
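A minimal sketch of the eigenface step applied to flattened 2D attribute images (e.g. mean-curvature maps); the number of retained components and the nearest-neighbour matcher are illustrative assumptions.

```python
# Hedged sketch: eigenfaces (PCA) over 2D attribute images.
# n_components must not exceed the number of training images.
import numpy as np

def train_eigenfaces(attribute_images, n_components=50):
    """attribute_images: (N, H*W) matrix, one flattened attribute image per row."""
    X = np.asarray(attribute_images, dtype=np.float64)
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centred data gives the principal directions directly.
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, vt[:n_components]          # eigenfaces as rows

def project(img_flat, mean, eigenfaces):
    return eigenfaces @ (img_flat - mean)

def identify(probe_flat, gallery_flat, labels, mean, eigenfaces):
    """Nearest neighbour in the eigenface subspace."""
    p = project(probe_flat, mean, eigenfaces)
    g = np.array([project(x, mean, eigenfaces) for x in gallery_flat])
    return labels[int(np.argmin(np.linalg.norm(g - p, axis=1)))]
```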

7.
As is well known, traditional 2D face recognition based on optical (intensity or color) images faces many challenges, such as illumination, expression, and pose variation. In fact, the human face generates not only 2D texture information but also 3D shape information. In this paper, we investigate what contributions depth and intensity information make to face recognition when expression and pose variations are taken into account, and we propose a novel system for combining depth and intensity information to improve face recognition systems. In our system, local features described by Gabor wavelets are extracted from depth and intensity images, which are obtained from 3D data after fine alignment. A novel hierarchical selection scheme embedded in linear discriminant analysis (LDA) and AdaBoost learning is then proposed to select the most effective and most robust features and to construct a strong classifier. Experiments are performed on the CASIA 3D face database and the FRGC V2.0 database, two data sets with complex variations, including expressions, poses and long time lapses between two scans. Experimental results demonstrate the promising performance of the proposed method. In our system, all processes are performed automatically, thus providing a prototype of automatic face recognition combining depth and intensity information.
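The sketch below illustrates Gabor-wavelet feature extraction from aligned depth and intensity images; the filter-bank settings (5 scales, 8 orientations) and the simple concatenation follow common practice and are assumptions, and the paper's hierarchical LDA/AdaBoost selection would operate on these features afterwards.

```python
# Hedged sketch: Gabor magnitude features from depth and intensity images.
import numpy as np
import cv2

def gabor_bank(n_scales=5, n_orientations=8, ksize=31):
    """Build a bank of Gabor kernels over several scales and orientations."""
    kernels = []
    for s in range(n_scales):
        lambd = 4.0 * (2 ** (s * 0.5))           # wavelength grows with scale
        for o in range(n_orientations):
            theta = np.pi * o / n_orientations
            # args: ksize, sigma, theta, lambd, gamma, psi
            kernels.append(cv2.getGaborKernel((ksize, ksize), lambd / 2.0,
                                              theta, lambd, 0.5, 0))
    return kernels

def gabor_features(image, kernels, downsample=4):
    """Filter the image with each kernel and keep downsampled magnitudes."""
    feats = []
    img = image.astype(np.float32)
    for k in kernels:
        resp = cv2.filter2D(img, cv2.CV_32F, k)
        feats.append(np.abs(resp)[::downsample, ::downsample].ravel())
    return np.concatenate(feats)

def depth_plus_intensity(depth_img, intensity_img, kernels):
    """Concatenate features from both modalities; feature selection
    (LDA + AdaBoost in the paper) would follow this step."""
    return np.concatenate([gabor_features(depth_img, kernels),
                           gabor_features(intensity_img, kernels)])
```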

8.
Face recognition with variant pose, illumination and expression (PIE) is a challenging problem. In this paper, we propose an analysis-by-synthesis framework for face recognition with variant PIE. First, an efficient two-dimensional (2D)-to-three-dimensional (3D) integrated face reconstruction approach is introduced to reconstruct a personalized 3D face model from a single frontal face image with neutral expression and normal illumination. Then, realistic virtual faces with different PIE are synthesized based on the personalized 3D face to characterize the face subspace. Finally, face recognition is conducted based on these representative virtual faces. Compared with other related work, this framework has the following advantages: (1) only one single frontal face is required for face recognition, which avoids burdensome enrollment work; (2) the synthesized face samples provide the capability to conduct recognition under difficult conditions like complex PIE; and (3) compared with other 3D reconstruction approaches, our proposed 2D-to-3D integrated face reconstruction approach is fully automatic and more efficient. Extensive experimental results show that the synthesized virtual faces significantly improve the accuracy of face recognition with changing PIE.

9.
We propose a novel 2D image-based approach that can simultaneously handle illumination and pose variations to enhance the face recognition rate. It is much simpler and requires much less computational effort than methods based on 3D models, and provides a comparable or better recognition rate.

10.
As part of the face recognition task in a robust security system, we propose a novel approach for the illumination recovery of faces with cast shadows and specularities. Given a single 2D face image, we relight the face object by extracting the nine spherical harmonic bases and the face spherical illumination coefficients using the properties of face spherical spaces. First, an illumination training database is generated by computing the properties of the spherical spaces out of face albedo and normal values estimated from 2D training images. The training database is then discriminately divided into two directions in terms of the illumination quality and light direction of each image. Based on the generated multi-level illumination discriminative training space, we analyze the target face pixels and compare them with the appropriate training subspace using pre-generated tiles. When designing the framework, practical real-time processing speed and small image size were considered. In contrast to other approaches, our technique requires neither 3D face models nor restricted illumination conditions for the training process. Furthermore, the proposed approach uses one single face image to estimate the face albedo and face spherical spaces. In this work, we also provide the results of a series of experiments performed on publicly available databases to show the significant improvements in face recognition rates.

11.
张睿  于忠党 《计算机工程》2008,34(9):216-218
To reduce the effect of large illumination variation on the recognition rate, a face recognition method based on second-order bidirectional two-dimensional principal component analysis (Sec-(2D)2PCA) is proposed. The first few (2D)2PCA principal components of a face image, which mainly reflect illumination information, are discarded, and (2D)2PCA is applied again to the residual images. Experimental results on the Yale Face Database B and the Yale face database show that this method outperforms 2DPCA, (2D)2PCA and Sec-2DPCA in recognition performance.
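A rough sketch of the Sec-(2D)2PCA idea: strip the part of each image spanned by the first few (2D)2PCA axes, which mostly carry illumination, then run (2D)2PCA again on the residuals. The number of discarded axes and the reconstruction used here are assumptions.

```python
# Hedged sketch of a second-order bidirectional 2DPCA pipeline.
import numpy as np

def twod2pca_axes(images, n_row, n_col):
    """Row- and column-direction projection axes of (2D)^2PCA."""
    X = np.asarray(images, dtype=np.float64)
    mean = X.mean(axis=0)
    Xc = X - mean
    G_col = sum(a.T @ a for a in Xc) / len(Xc)   # column-direction covariance
    G_row = sum(a @ a.T for a in Xc) / len(Xc)   # row-direction covariance
    _, U = np.linalg.eigh(G_col)                 # eigenvalues ascending
    _, V = np.linalg.eigh(G_row)
    return mean, U[:, ::-1][:, :n_col], V[:, ::-1][:, :n_row]

def remove_leading(images, n_discard=3):
    """Subtract the part of each image spanned by the leading axes,
    assumed to reflect mainly illumination."""
    mean, U, V = twod2pca_axes(images, n_discard, n_discard)
    residuals = []
    for a in np.asarray(images, dtype=np.float64):
        c = a - mean
        residuals.append(c - V @ (V.T @ c @ U) @ U.T)
    return residuals

def sec_2d2pca_features(images, n_row=10, n_col=10, n_discard=3):
    """Second-stage (2D)^2PCA features computed on the residual images."""
    residuals = remove_leading(images, n_discard)
    mean, U, V = twod2pca_axes(residuals, n_row, n_col)
    return [V.T @ (r - mean) @ U for r in residuals]
```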

12.
Pose-Robust Facial Expression Recognition Using View-Based 2D + 3D AAM
This paper proposes a pose-robust face tracking and facial expression recognition method using a view-based 2D + 3D active appearance model (AAM) that extends the 2D + 3D AAM to the view-based approach, where one independent face model is used for a specific view and an appropriate face model is selected for the input face image. Our extension has been conducted in many aspects. First, we use principal component analysis with missing data to construct the 2D + 3D AAM due to the missing data in the posed face images. Second, we develop an effective model selection method that directly uses the estimated pose angle from the 2D + 3D AAM, which makes face tracking pose-robust and feature extraction for facial expression recognition accurate. Third, we propose a double-layered generalized discriminant analysis (GDA) for facial expression recognition. Experimental results show the following: 1) The face tracking by the view-based 2D + 3D AAM, which uses multiple face models with one face model per view, is more robust to pose change than that by an integrated 2D + 3D AAM, which uses an integrated face model for all three views; 2) the double-layered GDA extracts good features for facial expression recognition; and 3) the view-based 2D + 3D AAM outperforms other existing models at pose-varying facial expression recognition.
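A minimal sketch of the view-based model selection idea, assuming three view-specific models selected by the yaw angle estimated from the current fit; the ±20° boundaries are illustrative.

```python
# Hedged sketch: pick the view-specific face model from the estimated yaw.
def select_view_model(yaw_degrees, models):
    """models: dict with keys 'left', 'frontal', 'right' holding
    view-specific AAMs; the thresholds are assumed values."""
    if yaw_degrees < -20.0:
        return models['left']
    if yaw_degrees > 20.0:
        return models['right']
    return models['frontal']
```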

13.
A Survey of 3D Face Recognition Research
Over the past two decades, image-based face recognition has made great progress and can achieve good performance in constrained environments, but it is still strongly affected by variations in illumination, pose and expression; the fundamental reason is that an image is only a simplified projection of a three-dimensional object onto a two-dimensional space. Face recognition based on explicit 3D representations of the facial surface has therefore become a research hotspot in recent years. This paper analyzes the motivation, concepts and basic pipeline of 3D face recognition; according to the form of the features used, 3D face recognition algorithms are surveyed in three categories: direct matching in the spatial domain, matching based on local features, and matching based on holistic features. Bimodal 2D+3D fusion methods are also reviewed by category, and representative 3D face databases are listed. Some methods are compared experimentally and the reasons for their effectiveness are analyzed. Finally, the advantages and difficulties of current 3D face recognition techniques are summarized and future research trends are discussed.

14.
In this paper, we present a new algorithm that utilizes low-quality red, green, blue and depth (RGB-D) data from the Kinect sensor for face recognition under challenging conditions. This algorithm extracts multiple features and fuses them at the feature level. A Finer Feature Fusion technique is developed that removes redundant information and retains only the meaningful features for maximum possible class separability. We also introduce a new 3D face database acquired with the Kinect sensor, which has been released to the research community. This database contains over 5,000 facial images (RGB-D) of 52 individuals under varying pose, expression, illumination and occlusion. Under the first three variations and using only the noisy depth data, the proposed algorithm achieves a 72.5% recognition rate, which is significantly higher than the 41.9% achieved by the baseline LDA method. Combined with the texture information, a 91.3% recognition rate has been achieved under illumination, pose and expression variations. These results suggest the feasibility of low-cost 3D sensors for real-time face recognition.

15.
Face recognition is a rapidly growing research area due to increasing demands for security in commercial and law enforcement applications. This paper provides an up-to-date review of research efforts in face recognition techniques based on two-dimensional (2D) images in the visual and infrared (IR) spectra. Face recognition systems based on visual images have reached a significant level of maturity with some practical success. However, the performance of visual face recognition may degrade under poor illumination conditions or for subjects of various skin colors. IR imagery represents a viable alternative to visible imaging in the search for a robust and practical identification system. While visual face recognition systems perform relatively reliably under controlled illumination conditions, thermal IR face recognition systems are advantageous when there is no control over illumination or for detecting disguised faces. Face recognition using 3D images is another active area of face recognition, which provides robust face recognition with changes in pose. Recent research has also demonstrated that the fusion of different imaging modalities and spectral components can improve the overall performance of face recognition.

16.
In this paper, we propose two novel methods for face recognition under arbitrary unknown lighting by using spherical harmonics illumination representation, which require only one training image per subject and no 3D shape information. Our methods are based on the result which demonstrated that the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace. We provide two methods to estimate the spherical harmonic basis images spanning this space from just one image. Our first method builds the statistical model based on a collection of 2D basis images. We demonstrate that, by using the learned statistics, we can estimate the spherical harmonic basis images from just one image taken under arbitrary illumination conditions if there is no pose variation. Compared to the first method, the second method builds the statistical models directly in 3D spaces by combining the spherical harmonic illumination representation and a 3D morphable model of human faces to recover basis images from images across both poses and illuminations. After estimating the basis images, we use the same recognition scheme for both methods: we recognize the face for which there exists a weighted combination of basis images that is the closest to the test face image. We provide a series of experiments that achieve high recognition rates, under a wide range of illumination conditions, including multiple sources of illumination. Our methods achieve comparable levels of accuracy with methods that have much more onerous training data requirements. Comparison of the two methods is also provided.
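Both methods share the recognition rule sketched below: fit the probe image as a linear combination of each subject's nine estimated harmonic basis images and pick the subject with the smallest residual; the plain least-squares solver is an implementation choice, not prescribed by the paper.

```python
# Hedged sketch of recognition with spherical harmonic basis images.
import numpy as np

def residual_to_subject(probe, basis_images):
    """probe: (H*W,) vector; basis_images: (H*W, 9) matrix for one subject.
    Returns the reconstruction residual of the best lighting coefficients."""
    coeffs, _, _, _ = np.linalg.lstsq(basis_images, probe, rcond=None)
    return np.linalg.norm(basis_images @ coeffs - probe)

def recognize(probe, gallery):
    """gallery: list of (label, basis_images) pairs; returns the label of the
    subject whose basis best reconstructs the probe image."""
    residuals = [(residual_to_subject(probe, B), label) for label, B in gallery]
    return min(residuals, key=lambda t: t[0])[1]
```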

17.
Face recognition based on 3D face modeling effectively overcomes the drawback of 2D face recognition systems, whose recognition rate is easily affected by illumination, pose and expression. This paper adopts an effective algorithm that adaptively adjusts a generic 3D face model according to a face image, constructs a person-specific face model and applies it to face recognition. By comparing feature points estimated from the face image with the projections of the generic face model onto the image plane, the generic 3D face model is adjusted globally and locally to fit the individual characteristics of the eyes, mouth and nose. Finally, an example illustrates the application of the algorithm.

18.
Illumination variation that occurs on face images degrades the performance of face recognition. In this paper, we propose a novel approach to handling illumination variation for face recognition. Since most human faces are similar in shape, we can identify the shadow characteristics that illumination variation produces on faces, which depend on the direction of the light. By using these characteristics, we can compensate for the illumination variation on face images. The proposed method is simple and requires much less computational effort than other methods based on 3D models, and at the same time provides a comparable recognition rate.

19.
The paper proposes a novel, pose-invariant face recognition system based on a deformable, generic 3D face model, which is a composite of: (1) an edge model, (2) a color region model and (3) a wireframe model for jointly describing the shape and important features of the face. The first two submodels are used for image analysis and the third mainly for face synthesis. In order to match the model to face images in arbitrary poses, the 3D model can be projected onto different 2D viewplanes based on rotation, translation and scale parameters, thereby generating multiple face-image templates (in different sizes and orientations). Face shape variations among people are taken into account by the deformation parameters of the model. Given an unknown face, its pose is estimated by model matching and the system synthesizes face images of known subjects in the same pose. The face is then classified as the subject whose synthesized image is most similar. The synthesized images are generated using a 3D face representation scheme which encodes the 3D shape and texture characteristics of the faces. This face representation is automatically derived from training face images of the subject. Experimental results show that the method is capable of determining pose and recognizing faces accurately over a wide range of poses and with naturally varying lighting conditions. Recognition rates of 92.3% have been achieved by the method with 10 training face images per person.
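A minimal sketch of projecting the 3D model onto a 2D view plane under rotation, translation and scale to generate pose-specific templates; a weak-perspective (scaled orthographic) camera is assumed here, which may differ from the paper's exact projection model.

```python
# Hedged sketch: generate a 2D template from 3D model vertices under
# rotation, translation and scale parameters.
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """Compose rotations about the y, x and z axes (angles in radians)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    return Rz @ Rx @ Ry

def project_model(vertices, yaw=0.0, pitch=0.0, roll=0.0,
                  scale=1.0, tx=0.0, ty=0.0):
    """vertices: (N, 3) model points -> (N, 2) template points."""
    R = rotation_matrix(yaw, pitch, roll)
    rotated = vertices @ R.T
    return scale * rotated[:, :2] + np.array([tx, ty])
```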

20.
Recent face recognition algorithms can achieve high accuracy when the tested face samples are frontal. However, when the face pose changes substantially, the performance of existing methods drops drastically. Efforts on pose-robust face recognition are highly desirable, especially when each face class has only one frontal training sample. In this study, we propose a 2D face fitting-assisted 3D face reconstruction algorithm that aims at recognizing faces of different poses when each face class has only one frontal training sample. For each frontal training sample, a 3D face is reconstructed by optimizing the parameters of a 3D morphable model (3DMM). By rotating the reconstructed 3D face to different views, virtual face images of varying pose are generated to enlarge the training set for face recognition. Different from conventional 3D face reconstruction methods, the proposed algorithm utilizes automatic 2D face fitting to assist 3D face reconstruction. We automatically locate 88 sparse points of the frontal face with a 2D face-fitting algorithm, the so-called Random Forest Embedded Active Shape Model, which embeds random forest learning into the framework of the Active Shape Model. The results of 2D face fitting are added to the 3D face reconstruction objective function as shape constraints. The optimization objective energy function takes not only image intensity but also the 2D fitting results into account. Shape and texture parameters of the 3DMM are thus estimated by fitting the 3DMM to the 2D frontal face sample, which is a non-linear optimization problem. We evaluate the proposed method on the publicly available CMU PIE database, which includes faces viewed from 11 different poses, and the results show that the proposed method is effective and the face recognition results under pose variation are promising.
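A rough sketch of the fitting objective described above, combining an image-intensity term with a term that penalises the distance between projected 3DMM landmarks and the 88 points from the 2D fitter; `render_intensity`, `project_landmarks` and the weight `lam` are hypothetical placeholders for a real 3DMM renderer, projector and tuned weight.

```python
# Hedged sketch of a landmark-constrained 3DMM fitting energy. A real fit
# would plug in an actual 3DMM renderer/projector and minimise this energy
# with a non-linear optimiser (e.g. scipy.optimize.minimize).
import numpy as np

def fitting_energy(params, image, landmarks_2d,
                   render_intensity, project_landmarks, lam=10.0):
    """params: 3DMM shape/texture/pose parameters packed into one vector.
    render_intensity(params) -> synthesised image, same shape as `image`.
    project_landmarks(params) -> (88, 2) projected model landmarks."""
    synth = render_intensity(params)
    e_img = np.sum((image.astype(np.float64) - synth) ** 2)          # intensity term
    e_lmk = np.sum((project_landmarks(params) - landmarks_2d) ** 2)  # 2D fitting constraint
    return e_img + lam * e_lmk
```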
