Similar Documents
A total of 20 similar documents were found.
1.
An ear recognition method fusing the 2D and 3D modalities is proposed. An AdaBoost-based ear detector locates the ear in the 2D image, and the ear region is then located in the corresponding depth image. For the 2D ear image, features are extracted with kernel Fisher discriminant analysis and classified with a nearest-neighbour classifier; for the 3D ear depth map, features are extracted with 3D local binary patterns, feature points are matched between the probe ear and the enrolled prototype ear under geometric and positional constraints, and the number of matched points is used for recognition. Finally, the two modalities are fused at the decision level. Experimental results on the UND ear database show that, compared with 2D-only or 3D-only ear recognition, the proposed 2D + 3D fusion method achieves better recognition performance under illumination variation.
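As a purely illustrative sketch (not the paper's implementation), the decision-level fusion step described above can be approximated by normalizing the 2D matcher scores and the 3D matched-point counts to a common range and combining them with a weighted sum; the min-max normalization and the weight w are assumptions.

```python
import numpy as np

def min_max_normalize(scores):
    """Scale a set of raw matcher outputs to [0, 1]."""
    scores = np.asarray(scores, dtype=float)
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

def fuse_decisions(scores_2d, scores_3d, w=0.5):
    """Weighted-sum decision fusion; returns the index of the best gallery match."""
    s2 = min_max_normalize(scores_2d)   # e.g. nearest-neighbour similarities from the 2D branch
    s3 = min_max_normalize(scores_3d)   # e.g. matched-point counts from the 3D branch
    fused = w * s2 + (1.0 - w) * s3
    return int(np.argmax(fused)), fused
```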

2.
Most existing approaches to multimodal 2D + 3D face recognition exploit the 2D and 3D information at the feature or score level. They do not fully benefit from the dependency between modalities. Exploiting this dependency at an early stage is more effective than at a later stage, because early-fusion data contains richer information about the input biometric than compressed features or matching scores. We propose an image recombination for face recognition that explores the dependency between modalities at the image level. Facial cues from the 2D and 3D images are recombined into more independent and discriminating data by finding transformation axes that account for the maximal amount of variance in the images. We also introduce a complete framework for multimodal 2D + 3D face recognition that utilizes the 2D and 3D facial information at the enrollment, image and score levels. Experimental results based on the NTU-CSP and Bosphorus 3D face databases show that our face recognition system using image recombination outperforms other face recognition systems based on pixel- or score-level fusion.
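A rough sketch, under assumptions, of the image-level recombination idea: co-registered 2D and 3D images are stacked per sample and projected onto variance-maximizing axes (plain PCA via scikit-learn is used here as a stand-in); the stacking scheme and component count are illustrative, not the paper's exact transform.

```python
import numpy as np
from sklearn.decomposition import PCA

def recombine_2d_3d(gray_images, depth_images, n_components=50):
    """Stack co-registered 2D and 3D images per subject and project them
    onto axes that capture the maximal amount of variance (PCA)."""
    assert len(gray_images) == len(depth_images)
    # Flatten and concatenate each 2D/3D pair into a single sample vector.
    X = np.stack([np.concatenate([g.ravel(), d.ravel()])
                  for g, d in zip(gray_images, depth_images)])
    pca = PCA(n_components=min(n_components, len(X)), whiten=True)
    recombined = pca.fit_transform(X)   # more decorrelated, discriminating representation
    return recombined, pca
```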

3.
Image-based 2D face recognition is becoming mature, but it is still affected by variations in illumination, pose and expression. Using 3D face models to improve recognition performance and applying them in practice has become a research trend in recent years. This paper presents the SWJTU multimodal face database (SWJTU-MF Database), which contains four kinds of face samples of 200 Chinese subjects with neutral expressions: visible-light images, 2D video sequences, high-precision 3D face scans and stereo video sequences. The paper first categorizes existing 3D face recognition algorithms and surveys related multimodal face databases, then introduces the SWJTU-MF database, describing its acquisition devices, acquisition environment, acquisition procedure and data contents, followed by a brief description of the data normalization process. Finally, the application research targeted by the database is discussed and a recommended evaluation protocol for SWJTU-MF is given.

4.
As is well known, traditional 2D face recognition based on optical (intensity or color) images faces many challenges, such as illumination, expression, and pose variation. In fact, the human face generates not only 2D texture information but also 3D shape information. In this paper, we investigate what contributions depth and intensity information make to face recognition when expression and pose variations are taken into account, and we propose a novel system for combining depth and intensity information to improve face recognition. In our system, local features described by Gabor wavelets are extracted from depth and intensity images, which are obtained from 3D data after fine alignment. A novel hierarchical selection scheme embedded in linear discriminant analysis (LDA) and AdaBoost learning is then proposed to select the most effective and most robust features and to construct a strong classifier. Experiments are performed on the CASIA 3D face database and the FRGC V2.0 database, two data sets with complex variations, including expressions, poses and long time lapses between two scans. Experimental results demonstrate the promising performance of the proposed method. All processes in our system are performed automatically, thus providing a prototype of automatic face recognition combining depth and intensity information.
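As a hedged illustration of the local-feature step (not the authors' code), the following extracts Gabor-magnitude responses from an intensity or depth image with OpenCV; the filter-bank parameters and downsampling factor are assumptions.

```python
import cv2
import numpy as np

def gabor_features(img, scales=(4, 8, 16), orientations=8):
    """Convolve an intensity or depth image with a small Gabor bank and
    return the concatenated, coarsely sampled magnitude responses."""
    img = np.float32(img)
    feats = []
    for lambd in scales:                       # wavelength of the sinusoid
        for k in range(orientations):
            theta = k * np.pi / orientations   # filter orientation
            kern = cv2.getGaborKernel((21, 21), sigma=lambd / 2.0,
                                      theta=theta, lambd=lambd,
                                      gamma=0.5, psi=0)
            resp = cv2.filter2D(img, cv2.CV_32F, kern)
            feats.append(np.abs(resp)[::8, ::8].ravel())  # coarse spatial sampling
    return np.concatenate(feats)
```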

5.
In this paper, we propose a novel Patch Geodesic Distance (PGD) to transform the texture map of an object through its shape data for robust 2.5D object recognition. Local geodesic paths within patches and global geodesic paths for patches are combined in a coarse-to-fine hierarchical computation of PGD for each surface point to tackle the missing-data problem in 2.5D images. Shape-adjusted texture patches are encoded into local patterns for similarity measurement between two 2.5D images with different viewing angles and/or shape deformations. An extensive experimental investigation is conducted on 2.5D face images using the publicly available BU-3DFE and Bosphorus databases, covering face recognition under expression and pose changes. The performance of the proposed method is compared with that of three benchmark approaches. The experimental results demonstrate that the proposed method provides a very encouraging new solution for 2.5D object recognition.
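A heavily simplified sketch of the geodesic-path idea on a 2.5D range image (not the paper's coarse-to-fine patch scheme): pixels are treated as nodes of a 4-connected graph whose edge weights are 3D step lengths, and Dijkstra's algorithm gives approximate geodesic distances from a seed point; the pixel spacing and grid connectivity are assumptions.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def geodesic_from_point(depth, seed_rc, pixel_size=1.0):
    """Approximate geodesic distance from one surface point to all others
    on a 2.5D range image (4-connected grid, edge weight = 3D step length)."""
    h, w = depth.shape
    idx = np.arange(h * w).reshape(h, w)
    rows, cols, weights = [], [], []
    for dr, dc in ((0, 1), (1, 0)):            # right and down neighbours
        z0 = depth[:h - dr, :w - dc]
        z1 = depth[dr:, dc:]
        step = np.sqrt((pixel_size * dr) ** 2 + (pixel_size * dc) ** 2 + (z1 - z0) ** 2)
        rows.append(idx[:h - dr, :w - dc].ravel())
        cols.append(idx[dr:, dc:].ravel())
        weights.append(step.ravel())
    g = coo_matrix((np.concatenate(weights),
                    (np.concatenate(rows), np.concatenate(cols))),
                   shape=(h * w, h * w))
    seed = seed_rc[0] * w + seed_rc[1]
    return dijkstra(g, directed=False, indices=seed).reshape(h, w)
```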

6.
A face recognition method based on multi-channel Log-Gabor wavelets and (2D)^2PCALDA
火元莲 《计算机应用》2010,30(11):2970-2973
To reduce the impact of illumination variation on the performance of subspace-based face recognition methods, a new face recognition method is proposed that combines a multi-channel Log-Gabor strategy with (2D)^2PCALDA feature extraction. Each scale and orientation is treated as an independent channel; within each channel, (2D)^2PCALDA extracts features from the Log-Gabor representation of the face image and classifies it, and the per-channel classification results are then fused at the decision level to obtain the final class label. Experimental results on the CAS-PEAL-R1, ORL and Yale face databases show that the algorithm achieves good recognition performance.
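As an illustration only (not the paper's code), the per-channel decision fusion can be sketched as an optionally weighted majority vote over the labels predicted in each Log-Gabor channel; the channel weights are hypothetical.

```python
from collections import Counter

def fuse_channel_votes(channel_predictions, channel_weights=None):
    """Decision-level fusion: each Log-Gabor channel votes for a class label,
    and the votes are combined by (optionally weighted) majority."""
    channel_predictions = list(channel_predictions)
    if channel_weights is None:
        channel_weights = [1.0] * len(channel_predictions)
    tally = Counter()
    for label, w in zip(channel_predictions, channel_weights):
        tally[label] += w
    return tally.most_common(1)[0][0]

# Example: five channels voting on a probe image
print(fuse_channel_votes(['id_03', 'id_03', 'id_07', 'id_03', 'id_01']))  # -> 'id_03'
```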

7.
One of the main challenges in face recognition is represented by pose and illumination variations that drastically affect recognition performance, as confirmed by the results of recent large-scale face recognition evaluations. This paper presents a new technique for face recognition, based on the joint use of 3D models and 2D images, specifically conceived to be robust with respect to pose and illumination changes. A 3D model of each user is exploited in the training stage (i.e. enrollment) to generate a large number of 2D images representing virtual views of the face with varying pose and illumination. Such images are then used to learn in a supervised manner a set of subspaces constituting the user's template. Recognition occurs by matching 2D images with the templates, and no 3D information (neither images nor face models) is required. The experiments carried out confirm the efficacy of the proposed technique.

8.
Model-based face analysis is a general paradigm with applications that include face recognition, expression recognition, lip-reading, head pose estimation, and gaze estimation. A face model is first constructed from a collection of training data, either 2D images or 3D range scans. The face model is then fit to the input image(s), and the model parameters are used in whatever the application is. Most existing face models can be classified as either 2D (e.g. Active Appearance Models) or 3D (e.g. Morphable Models). In this paper we compare 2D and 3D face models along three axes: (1) representational power, (2) construction, and (3) real-time fitting. For each axis in turn, we outline the differences that result from using a 2D or a 3D face model.

9.
A complete authentication system based on fusion of 3D face and hand biometrics is presented and evaluated in this paper. The system relies on a low cost real-time sensor, which can simultaneously acquire a pair of depth and color images of the scene. By combining 2D and 3D facial and hand geometry features, we are able to provide highly reliable user authentication robust to appearance and environmental variations. The design of the proposed system addresses two basic requirements of biometric technologies: dependable performance under real-world conditions along with user convenience. Experimental evaluation on an extensive database recorded in a real working environment demonstrates the superiority of the proposed multimodal scheme against unimodal classifiers in the presence of numerous appearance and environmental variations, thus making the proposed system an ideal solution for a wide range of real-world applications, from high-security to personalization of services and attendance control.

10.
张睿  于忠党 《计算机工程》2008,34(9):216-218
To overcome the effect of large illumination variation on the recognition rate, a face recognition method based on second-order bidirectional two-dimensional principal component analysis (Sec-(2D)^2PCA) is proposed. The first few (2D)^2PCA principal components extracted from the face image, which mainly reflect illumination information, are discarded, and (2D)^2PCA is then applied again to the residual image. Experimental results on the Yale face database B and the Yale face database show that this method outperforms the 2DPCA, (2D)^2PCA and Sec-2DPCA methods in recognition performance.
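A minimal sketch of bidirectional (2D)^2PCA with an option to skip the first few eigenvectors (the components the abstract associates with illumination); the `skip` parameter and eigendecomposition details are assumptions, not the paper's exact Sec-(2D)^2PCA procedure.

```python
import numpy as np

def two_directional_2dpca(images, p, q, skip=0):
    """Bidirectional (2D)^2PCA: project image matrices from both sides.
    `skip` drops the first eigenvectors (e.g. illumination-dominated ones)."""
    A = np.stack([np.asarray(im, dtype=float) for im in images])
    mean = A.mean(axis=0)
    D = A - mean
    # Row-direction covariance (acts on columns) and column-direction covariance.
    G_row = np.mean([d.T @ d for d in D], axis=0)
    G_col = np.mean([d @ d.T for d in D], axis=0)

    def top_eigvecs(G, k, skip):
        # Eigenvectors sorted by descending eigenvalue, skipping the first `skip`.
        vals, vecs = np.linalg.eigh(G)
        order = np.argsort(vals)[::-1]
        return vecs[:, order[skip:skip + k]]

    Q = top_eigvecs(G_row, q, skip)   # right projection (n x q)
    Z = top_eigvecs(G_col, p, skip)   # left projection  (m x p)
    features = np.stack([Z.T @ a @ Q for a in A])   # p x q feature matrices
    return features, Z, Q, mean
```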

11.
12.
In this paper, we present an approach for 3D face recognition from frontal range data based on the ridge lines on the surface of the face. We use the principal curvature, kmax, to represent the face image as a 3D binary image called the ridge image. The ridge image shows the locations of the ridge points around the important facial regions (i.e., the eyes, the nose, and the mouth). We use the robust Hausdorff distance and the iterative closest point (ICP) algorithm to match the ridge image of a given probe image to the ridge images of the facial images in the gallery. To evaluate the performance of our approach, we performed experiments on the GavabDB face database (a small database) and the Face Recognition Grand Challenge V2.0 database (a large database). The results of the experiments show that the ridge lines have great capability for 3D face recognition. In addition, we found that as long as the size of the database is small, the performance of ICP-based matching and robust Hausdorff matching is comparable; but when the size of the database increases, ICP-based matching outperforms the robust Hausdorff matching technique.
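As a hedged example of the matching stage (not the authors' implementation), a quantile-based "robust" Hausdorff distance between two ridge point sets can be computed with k-d trees; the 0.9 quantile is an assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def robust_hausdorff(points_a, points_b, quantile=0.9):
    """Partial (quantile-based) Hausdorff distance between two 3D point sets,
    which tolerates outliers better than the classical max-of-min distance."""
    d_ab, _ = cKDTree(points_b).query(points_a)   # nearest-neighbour distances A -> B
    d_ba, _ = cKDTree(points_a).query(points_b)   # nearest-neighbour distances B -> A
    return max(np.quantile(d_ab, quantile), np.quantile(d_ba, quantile))
```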

13.
The quality of biometric samples plays an important role in biometric authentication systems because it has a direct impact on verification or identification performance. In this paper, we present a novel 3D face recognition system which performs quality assessment on input images prior to recognition. More specifically, a reject option is provided to allow the system operator to eliminate incoming images of poor quality, e.g. failed acquisition of the 3D image, exaggerated facial expressions, etc. Furthermore, an automated approach for preprocessing is presented to reduce the number of failure cases in that stage. The experimental results show that 3D face recognition performance is significantly improved by taking the quality of 3D facial images into account. The proposed system achieves a verification rate of 97.09% at a False Acceptance Rate (FAR) of 0.1% on the FRGC v2.0 data set.

14.
The paper proposes a novel, pose-invariant face recognition system based on a deformable, generic 3D face model, which is a composite of: (1) an edge model, (2) a color region model and (3) a wireframe model for jointly describing the shape and important features of the face. The first two submodels are used for image analysis and the third mainly for face synthesis. In order to match the model to face images in arbitrary poses, the 3D model can be projected onto different 2D viewplanes based on rotation, translation and scale parameters, thereby generating multiple face-image templates (in different sizes and orientations). Face shape variations among people are taken into account by the deformation parameters of the model. Given an unknown face, its pose is estimated by model matching, and the system synthesizes face images of known subjects in the same pose. The face is then classified as the subject whose synthesized image is most similar. The synthesized images are generated using a 3D face representation scheme which encodes the 3D shape and texture characteristics of the faces. This face representation is automatically derived from training face images of the subject. Experimental results show that the method is capable of determining pose and recognizing faces accurately over a wide range of poses and with naturally varying lighting conditions. Recognition rates of 92.3% have been achieved by the method with 10 training face images per person.

15.
Matching 2.5D face scans to 3D models
The performance of face recognition systems that use two-dimensional images depends on factors such as lighting and subject's pose. We are developing a face recognition system that utilizes three-dimensional shape information to make the system more robust to arbitrary pose and lighting. For each subject, a 3D face model is constructed by integrating several 2.5D face scans which are captured from different views. 2.5D is a simplified 3D (x,y,z) surface representation that contains at most one depth value (z direction) for every point in the (x, y) plane. Two different modalities provided by the facial scan, namely, shape and texture, are utilized and integrated for face matching. The recognition engine consists of two components, surface matching and appearance-based matching. The surface matching component is based on a modified iterative closest point (ICP) algorithm. The candidate list from the gallery used for appearance matching is dynamically generated based on the output of the surface matching component, which reduces the complexity of the appearance-based matching stage. Three-dimensional models in the gallery are used to synthesize new appearance samples with pose and illumination variations and the synthesized face images are used in discriminant subspace analysis. The weighted sum rule is applied to combine the scores given by the two matching components. Experimental results are given for matching a database of 200 3D face models with 598 2.5D independent test scans acquired under different pose and some lighting and expression changes. These results show the feasibility of the proposed matching scheme.
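A minimal point-to-point ICP sketch in the spirit of the surface-matching component (the paper uses a modified ICP; this plain SVD/Kabsch version is only an illustration, and the iteration count is an assumption).

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=30):
    """Minimal point-to-point ICP: iteratively match nearest neighbours and
    solve for the best rigid transform with the SVD (Kabsch) solution."""
    src = np.asarray(source, dtype=float).copy()
    tgt = np.asarray(target, dtype=float)
    tree = cKDTree(tgt)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        _, idx = tree.query(src)
        matched = tgt[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:              # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t                   # apply the incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
    rmse = np.sqrt(np.mean(np.sum((src - tgt[tree.query(src)[1]]) ** 2, axis=1)))
    return R_total, t_total, rmse
```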

16.
Researchers have suggested that the ear may have advantages over the face for biometric recognition. Our previous experiments with ear and face recognition, using the standard principal component analysis approach, showed lower recognition performance using ear images. We report results of similar experiments on larger data sets that are more rigorously controlled for relative quality of face and ear images. We find that recognition performance is not significantly different between the face and the ear, for example, 70.5 percent versus 71.6 percent, respectively, in one experiment. We also find that multimodal recognition using both the ear and face results in statistically significant improvement over either individual biometric, for example, 90.9 percent in the analogous experiment.

17.
This paper presents a novel scheme for feature extraction, namely the generalized two-dimensional Fisher's linear discriminant (G-2DFLD) method, and its use for face recognition with multi-class support vector machines as the classifier. The G-2DFLD method is an extension of the 2DFLD method for feature extraction. Like the 2DFLD method, the G-2DFLD method is based on the original 2D image matrix. However, unlike the 2DFLD method, which maximizes class separability either in the row or the column direction, the G-2DFLD method maximizes class separability in both the row and column directions simultaneously. To realize this, two alternative Fisher criteria are defined corresponding to the row- and column-wise projection directions. Unlike in the 2DFLD method, the principal components extracted from an image matrix by the G-2DFLD method are scalars, yielding a much smaller image feature matrix. The proposed G-2DFLD method was evaluated on two popular face recognition databases, the AT&T (formerly ORL) and the UMIST face databases. The experimental results using different experimental strategies show that the new G-2DFLD scheme outperforms the PCA, 2DPCA, FLD and 2DFLD schemes, not only in terms of computation time, but also for the task of face recognition using multi-class support vector machines (SVM) as the classifier. The proposed method also outperforms some of the neural-network and other SVM-based methods for face recognition reported in the literature.
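A compact sketch, under assumptions, of the G-2DFLD idea: between- and within-class image scatter matrices are built in both the row and column directions, and the top generalized eigenvectors give the left and right projections; the small regularization term added to the within-class scatter is an assumption, not part of the paper's formulation.

```python
import numpy as np
from scipy.linalg import eigh

def g_2dfld(images, labels, p, q):
    """Generalized 2D Fisher discriminant: build row- and column-direction
    between/within-class scatter of image matrices and project from both sides."""
    images = np.stack([np.asarray(im, dtype=float) for im in images])
    labels = np.asarray(labels)
    classes = np.unique(labels)
    M = images.mean(axis=0)

    def fisher_axes(transpose, k):
        Sb = np.zeros_like(M.T @ M if transpose else M @ M.T)
        Sw = np.zeros_like(Sb)
        for c in classes:
            Ac = images[labels == c]
            Mc = Ac.mean(axis=0)
            dB = (Mc - M).T if transpose else (Mc - M)
            Sb += len(Ac) * (dB @ dB.T)
            for a in Ac:
                dW = (a - Mc).T if transpose else (a - Mc)
                Sw += dW @ dW.T
        # Generalized eigenproblem Sb v = lambda Sw v; keep the top-k eigenvectors.
        vals, vecs = eigh(Sb, Sw + 1e-6 * np.eye(Sw.shape[0]))
        return vecs[:, np.argsort(vals)[::-1][:k]]

    Q = fisher_axes(transpose=True, k=q)    # row-direction (right) projection
    Z = fisher_axes(transpose=False, k=p)   # column-direction (left) projection
    return np.stack([Z.T @ a @ Q for a in images]), Z, Q
```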

18.
Objective: Current 2D expression recognition methods have low recognition rates for easily confused expressions and are sensitive to variations in head pose and illumination. Using 3D facial landmark data acquired with an RGB-D camera (Kinect), a real-time expression recognition method combining 2D pixel features and 3D landmark features is proposed. Method: First, 2D pixel features of facial expressions are extracted with three classical descriptors: LBP (local binary patterns), Gabor filters and HOG (histograms of oriented gradients). Because 2D pixel features have limited descriptive power for facial expressions, three kinds of 3D expression features are further extracted from the facial landmarks, namely inter-landmark angles, distances and normal vectors, to describe the variations of different expressions in finer detail. To improve the recognition of easily confused expressions and increase robustness, three groups of random forest models are trained on the 2D pixel features and three on the 3D landmark features, and the final expression class is obtained by a weighted combination of the outputs of the six random forest classifiers. Results: The algorithm is evaluated on nine expressions of the 3D expression dataset Face3D. The results show that combining 2D pixel features and 3D landmark features benefits expression recognition: the average recognition rate reaches 84.7%, 4.5% higher than the best method proposed in recent years, and 3.0% and 5.8% higher than using the 2D features alone and the 3D features alone, respectively; the recognition rates for the easily confused expressions anger, sadness and fear are all above 80%, and the system runs in real time at 10-15 frames/s. Conclusion: By combining the 2D pixel features and 3D landmark features of expression images, the method improves the description of facial expression variations; for confusable expression classes, the weighted averaging of multiple random forest classifiers effectively reduces the interference between confusable expressions and improves robustness. The experimental results show that, compared with ordinary 2D or 3D features, the method is not only better for expression recognition but also meets real-time requirements.
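As an illustration only (not the paper's code), the weighted combination of per-channel random forests can be sketched with scikit-learn; the channel weights and tree counts are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_forests(feature_sets, labels, n_trees=100):
    """Train one random forest per feature channel (e.g. LBP, Gabor, HOG,
    landmark angles, distances, normal vectors)."""
    return [RandomForestClassifier(n_estimators=n_trees).fit(X, labels)
            for X in feature_sets]

def fuse_forest_probabilities(forests, probe_features, weights):
    """Weighted average of the per-forest class-probability vectors; the class
    with the highest fused probability wins."""
    probs = [w * f.predict_proba(x.reshape(1, -1))[0]
             for f, x, w in zip(forests, probe_features, weights)]
    fused = np.sum(probs, axis=0) / np.sum(weights)
    return forests[0].classes_[int(np.argmax(fused))], fused
```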

19.
20.
Objective: Pose deviation is an important factor affecting face recognition accuracy. Using the 3D morphable model commonly employed in 3D face reconstruction together with deep convolutional neural networks, this paper proposes a face pose correction algorithm for multi-pose face recognition, which improves recognition accuracy under large pose variation. Method: The traditional 3D morphable model fitting method is improved: the model is parameterized with facial shape and expression parameters, landmarks in different facial regions are given different weights, and the 3D morphable model is fitted in a weighted manner, so that face images with different poses and facial expressions are fitted better. The 3D face model is then pose-corrected, and deep learning is used to inpaint the face image, repairing irregular hole regions; the convolutional neural network is retrained on a new dataset using recent partial-convolution techniques so that the network parameters reach their optimum. Results: The algorithm is compared with other methods on the LFW (Labeled Faces in the Wild) face database and the Stirling/ESRC (Economic and Social Research Council) 3D face database, and the experimental results show that its face recognition accuracy is improved. On LFW, after pose correction and inpainting of face images with arbitrary poses, the method reaches a face recognition accuracy of 96.57%. On the Stirling/ESRC database, the recognition accuracy increases by 5.195% and 2.265% at poses of ±22°, and by 5.875% and 11.095% at poses of ±45°, with average increases of 5.53% and 7.13%, respectively. The comparative experiments show that the proposed pose correction algorithm effectively improves face recognition accuracy. Conclusion: The proposed face pose correction algorithm combines the advantages of the 3D morphable model and deep learning models, and improves face recognition accuracy to some extent at every pose angle.
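A simplified sketch of weighted landmark fitting of a 3D morphable model (shape coefficients only, with observed 3D landmarks and a ridge term); pose, expression and camera projection are omitted, and all parameter names are assumptions rather than the paper's notation.

```python
import numpy as np

def fit_3dmm_landmarks(mean_shape, shape_basis, landmarks_3d, landmark_idx,
                       weights, reg=1e-3):
    """Weighted linear least-squares fit of 3DMM shape coefficients alpha so that
    the model's landmark vertices match observed 3D landmarks.
    mean_shape: (3N,), shape_basis: (3N, K), landmarks_3d: (L, 3),
    landmark_idx: vertex index of each landmark, weights: per-landmark weights."""
    # Rows of the basis / mean that correspond to the selected landmark vertices.
    rows = np.concatenate([[3 * i, 3 * i + 1, 3 * i + 2] for i in landmark_idx])
    B = shape_basis[rows]                        # (3L, K)
    b = landmarks_3d.ravel() - mean_shape[rows]  # (3L,) residual to explain
    w = np.repeat(np.asarray(weights, dtype=float), 3)   # weight applied to x, y, z
    # Solve (B^T W B + reg I) alpha = B^T W b  (ridge-regularized weighted LS).
    A = B.T @ (w[:, None] * B) + reg * np.eye(B.shape[1])
    alpha = np.linalg.solve(A, B.T @ (w * b))
    return alpha
```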
