Similar Articles
A total of 20 similar articles were found (search time: 0 ms)
1.
Pattern Recognition, 2014, 47(2): 509-524
This paper presents a computationally efficient 3D face recognition system based on a novel facial signature called the Angular Radial Signature (ARS), which is extracted from the semi-rigid region of the face. Kernel Principal Component Analysis (KPCA) is then used to extract mid-level features from the extracted ARSs to improve the discriminative power. The mid-level features are concatenated into a single feature vector and fed into a Support Vector Machine (SVM) to perform face recognition. The proposed approach addresses the expression variation problem by training on facial scans with various expressions from different individuals. We conducted a number of experiments on the Face Recognition Grand Challenge (FRGC v2.0) and the 3D track of the Shape Retrieval Contest (SHREC 2008) datasets, achieving superior recognition performance. Our experimental results show that the proposed system achieves very high Verification Rates (VRs) of 97.8% and 88.5% at a 0.1% False Acceptance Rate (FAR) for the "neutral vs. nonneutral" experiments on the FRGC v2.0 and SHREC 2008 datasets respectively, and 96.7% for the ROC III experiment of the FRGC v2.0 dataset. Our experiments also demonstrate the computational efficiency of the proposed approach.
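As a rough illustration of the mid-level feature and classification stages described in this abstract, the sketch below applies Kernel PCA per signature, concatenates the results, and trains an SVM. The ARS extraction itself is not reproduced; the array shapes, component counts, and random placeholder data are assumptions for illustration only.

```python
# Minimal sketch of the KPCA + SVM stage, assuming the Angular Radial
# Signatures (ARS) have already been extracted; random data stands in for them.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_scans, n_signatures, ars_len = 200, 8, 40      # hypothetical sizes
ars = rng.normal(size=(n_scans, n_signatures, ars_len))
labels = rng.integers(0, 10, size=n_scans)       # hypothetical subject IDs

# One KernelPCA per signature extracts mid-level features, which are concatenated.
kpcas = [KernelPCA(n_components=10, kernel="rbf").fit(ars[:, i])
         for i in range(n_signatures)]
features = np.hstack([k.transform(ars[:, i]) for i, k in enumerate(kpcas)])

# A single SVM performs the final recognition on the concatenated features.
clf = SVC(kernel="rbf").fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```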

2.
A local descriptor based on iso-geodesic contours is proposed for 3D face recognition. The 3D face data are first preprocessed to obtain a uniform facial region and to normalize pose. Points at equal geodesic distance from the nose tip are then extracted to form iso-geodesic contours; each contour is resampled, and a local descriptor is computed for the neighborhood of every sampled point. Finally, after point correspondences between the probe face and the gallery faces are established, the local descriptors are fused with weights and compared to produce the final recognition result. The algorithm was tested on the FRGC (Face Recognition Grand Challenge) v2.0 database, and the experimental results show good recognition performance.
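For illustration, one way to obtain iso-geodesic contours on a triangulated scan is to treat the mesh as a weighted graph and take the vertices lying at a fixed geodesic radius from the nose tip; the sketch below does this with Dijkstra's algorithm. The mesh edges, nose-tip index, radius, and tolerance are assumed inputs, not values from the paper.

```python
# Illustrative sketch: approximate geodesic distances from the nose tip over a
# mesh treated as a weighted graph, then sample one iso-geodesic contour.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def iso_geodesic_contour(vertices, edges, nose_idx, radius, tol=1.0):
    # vertices: (n, 3) float array; edges: (m, 2) int array of mesh edges.
    i, j = edges[:, 0], edges[:, 1]
    w = np.linalg.norm(vertices[i] - vertices[j], axis=1)   # edge lengths
    n = len(vertices)
    graph = csr_matrix((np.r_[w, w], (np.r_[i, j], np.r_[j, i])), shape=(n, n))
    geo = dijkstra(graph, indices=nose_idx)                 # geodesic distances
    # Vertices whose geodesic distance is close to the requested radius.
    return np.flatnonzero(np.abs(geo - radius) < tol)
```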

3.
Automatic Registration and 3D Fusion of Multi-view 2.5D Face Data
An effective method is proposed for fusing three-view 2.5D face images into a complete 3D face model. Manually selected feature regions on the three-view face images are first coarsely registered with ICP (Iterative Closest Point); an energy-optimization adjustment is then applied for fine registration, and the complete 3D face model is finally synthesized. Similarity measurements on the fusion results demonstrate the effectiveness and superiority of the method.
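The coarse registration step is based on ICP; a minimal point-to-point ICP with a closed-form (Kabsch/SVD) rigid update is sketched below, assuming the feature regions have already been selected as point sets. The energy-based fine registration of the paper is not reproduced.

```python
# Minimal point-to-point ICP sketch (coarse rigid alignment only).
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=30):
    src = src.copy()
    tree = cKDTree(dst)
    for _ in range(iters):
        # Nearest-neighbour correspondences from source to destination.
        _, idx = tree.query(src)
        matched = dst[idx]
        # Closed-form rigid transform (Kabsch / SVD).
        mu_s, mu_d = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:        # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        src = src @ R.T + t
    return src
```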

4.
3D face scans have been widely used for face modeling and analysis. Because face scans provide variable point clouds across frames, they may not capture complete facial data or may miss point-to-point correspondences across different facial scans, making such data difficult to use for analysis. This paper presents an efficient approach to representing facial shapes from face scans through the reconstruction of face models based on regional information and a generic model. A new approach for 3D feature detection and a hybrid approach using two vertex mapping algorithms, displacement mapping and point-to-surface mapping, together with a regional blending algorithm, are proposed to reconstruct the facial surface detail. The resulting models represent individual facial shapes consistently and adaptively, establishing facial point correspondences across individual models. The accuracy of the generated models is evaluated quantitatively. The applicability of the models is validated through 3D facial expression recognition using the static 3DFE and dynamic 4DFE databases. A comparison with the state of the art is also reported.
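As a toy illustration of the displacement-mapping idea only (the point-to-surface mapping and the regional blending are not shown), the sketch below snaps each generic-model vertex to its nearest scan point; the function name and inputs are assumptions for illustration.

```python
# Rough sketch of displacement mapping: move each generic-model vertex to the
# nearest point of the scan point cloud.
import numpy as np
from scipy.spatial import cKDTree

def displacement_map(generic_vertices, scan_points):
    tree = cKDTree(scan_points)
    _, idx = tree.query(generic_vertices)
    return scan_points[idx]          # displaced vertex positions
```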

5.
The paper proposes a novel, pose-invariant face recognition system based on a deformable, generic 3D face model, that is a composite of: (1) an edge model, (2) a color region model and (3) a wireframe model for jointly describing the shape and important features of the face. The first two submodels are used for image analysis and the third mainly for face synthesis. In order to match the model to face images in arbitrary poses, the 3D model can be projected onto different 2D viewplanes based on rotation, translation and scale parameters, thereby generating multiple face-image templates (in different sizes and orientations). Face shape variations among people are taken into account by the deformation parameters of the model. Given an unknown face, its pose is estimated by model matching and the system synthesizes face images of known subjects in the same pose. The face is then classified as the subject whose synthesized image is most similar. The synthesized images are generated using a 3D face representation scheme which encodes the 3D shape and texture characteristics of the faces. This face representation is automatically derived from training face images of the subject. Experimental results show that the method is capable of determining pose and recognizing faces accurately over a wide range of poses and with naturally varying lighting conditions. Recognition rates of 92.3% have been achieved by the method with 10 training face images per person.
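The template-generation idea, projecting the 3D model onto 2D view planes under rotation, translation, and scale, can be sketched with a weak-perspective projection as below; this parameterization is an assumption for illustration, not the paper's exact camera model.

```python
# Toy sketch: project 3D model points onto the image plane under different
# rotation / translation / scale parameters to produce 2D templates.
import numpy as np

def rot(yaw, pitch, roll):
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    return Rz @ Rx @ Ry

def project(points3d, yaw=0.0, pitch=0.0, roll=0.0, scale=1.0, t=(0.0, 0.0)):
    # Weak-perspective projection: rotate, scale, drop depth, translate.
    p = points3d @ rot(yaw, pitch, roll).T
    return scale * p[:, :2] + np.asarray(t)
```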

6.
李昕昕, 龚勋. 《计算机应用》(Journal of Computer Applications), 2017, 37(1): 262-267
To overcome the many scene restrictions of existing 3D face acquisition techniques, a 3D face modeling technique based on multiple images captured in unconstrained scenes is proposed and validated. First, an iterative pose-and-depth estimation model is proposed to accurately estimate the depth of feature points; then, depth values from multiple images are fused and the overall shape is modeled; finally, the iterative depth optimization algorithm (IPDO) is compared with the state-of-the-art nonlinear least squares method (NLS1_SR) on the Bosphorus Database, improving modeling accuracy by 9%, and the projections of the reconstructed 3D face models show high similarity to the 2D images. Experimental results show that, under large pose variations, the recognition rate is improved by more than 50% when the algorithm exploits the 3D information compared with when it does not.

7.
We present a multimodal approach for face modeling and recognition. The algorithm uses three cameras to capture stereo images, two frontal and one profile, of the face. 2D facial features are extracted from one of the frontal images and a dense disparity map is computed from the two frontal images. Using the extracted 2D features and their corresponding disparities, we compute their 3D coordinates. We next align a low resolution 3D mesh model to the 3D features, re-project its vertices onto the frontal 2D image and adjust its profile silhouette vertices using the profile view image. We increase the resolution of the resulting 2D model at its center region to obtain a facial mask model covering distinctive features of the face. The 2D coordinates of the vertices, along with their disparities, result in a deformed 3D mask model specific to a given subject’s face. Our method integrates information from the extracted facial features from the 2D image modality with information from the 3D modality obtained from the stereo images. Application of the models in 3D face recognition, for 112 subjects, validates the algorithm with a 95% identification rate and 92% verification rate at 0.1% false acceptance rate.
Mohammad H. Mahoor
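A back-of-the-envelope sketch of turning matched feature disparities into 3D coordinates for a rectified frontal stereo pair is given below; the focal length, baseline, and principal point are placeholder values, not the calibration used by the authors.

```python
# Recover 3D coordinates of matched 2D features from their stereo disparity,
# assuming a rectified frontal pair. All camera parameters are placeholders.
import numpy as np

def features_to_3d(u, v, disparity, f=800.0, b=0.06, cx=320.0, cy=240.0):
    z = f * b / disparity          # depth from disparity
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    return np.stack([x, y, z], axis=-1)
```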

8.
9.
A-Nasser, Mohamed. Pattern Recognition, 2005, 38(12): 2549-2563
We present a fully automated algorithm for facial feature extraction and 3D face modeling from a pair of orthogonal frontal and profile view images of a person's face taken by calibrated cameras. The algorithm starts by automatically extracting corresponding 2D landmark facial features from both view images and then computes their 3D coordinates. Further, we estimate the coordinates of the features that are hidden in the profile view based on the visible features extracted from the two orthogonal face images. The 3D coordinates of the selected feature points obtained from the images are used first to align, and then to locally deform, the corresponding facial vertices of the generic 3D model. Preliminary experiments to assess the applicability of the resulting models for face recognition show encouraging results.
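The core geometric idea, that a frontal view supplies (x, y) and a profile view supplies (z, y) of each landmark, with the shared y coordinate tying the two views together, can be sketched as below; the averaging of the two y estimates and the pixel-to-metric scale are simplifying assumptions.

```python
# Fuse landmark coordinates from orthogonal frontal and profile views into 3D.
import numpy as np

def fuse_orthogonal(frontal_xy, profile_zy, scale=1.0):
    # frontal_xy: (n, 2) landmark (x, y); profile_zy: (n, 2) landmark (z, y).
    x, y_f = frontal_xy[:, 0], frontal_xy[:, 1]
    z, y_p = profile_zy[:, 0], profile_zy[:, 1]
    y = 0.5 * (y_f + y_p)          # the two calibrated views should agree on y
    return scale * np.stack([x, y, z], axis=1)
```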

10.
11.
A Survey of Research Progress on Local Binary Descriptors
Local binary descriptors are an important topic within local invariant features and are widely used in computer vision and pattern recognition. In recent years, local binary descriptors represented by the BRIEF descriptor have appeared in succession; this paper surveys the research achievements and development directions of local binary descriptors over the past decade, aiming to provide a reference for beginning researchers and engineering practitioners. First, typical modern local binary descriptors are reviewed; second, methods for optimizing local binary descriptors are analyzed; finally, the relevant experimental evaluation criteria are discussed, current problems are summarized, and prospects for future research are given. Overall, local binary descriptors have made remarkable progress in recent years, with many studies achieving results in generality, robustness, and efficiency. For different application scenarios, some optimized descriptors are now capable of handling practical problems. These advances lay a solid foundation and provide new ideas for higher-level development and broader application of local binary descriptors. The success of local binary descriptors marks the progress of computer vision technology, but common problems and contradictions remain in their development and call for further in-depth research.
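A minimal BRIEF-style binary descriptor, the family this survey centers on, can be written in a few lines: each bit is an intensity comparison between a randomly chosen pair of pixels in a patch, and descriptors are compared with the Hamming distance. The patch size, bit count, and sampling pattern here are illustrative, not those of any specific descriptor discussed in the survey.

```python
# Minimal BRIEF-style binary descriptor: bits are pairwise intensity
# comparisons at randomly chosen positions inside a (smoothed) patch.
import numpy as np

def brief_descriptor(patch, n_bits=256, seed=0):
    rng = np.random.default_rng(seed)
    h, w = patch.shape
    # n_bits pairs of (row, col) sample positions inside the patch.
    pts = rng.integers([0, 0], [h, w], size=(n_bits, 2, 2))
    a = patch[pts[:, 0, 0], pts[:, 0, 1]]
    b = patch[pts[:, 1, 0], pts[:, 1, 1]]
    return (a < b).astype(np.uint8)   # compare descriptors with Hamming distance
```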

12.
To address the sensitivity of 2D face recognition to pose and illumination variations, a 2D face recognition method based on 3D data and mixture of multi-scale singular value (MMSV) features is proposed. In the training stage, 3D face data and an illumination model are used to generate a large number of 2D virtual images with different poses and illumination conditions, laying the foundation for building complete feature templates; subset partitioning effectively alleviates the nonlinearity in facial feature extraction; finally, MMSV features are extracted from the face images, fusing global and local facial characteristics. In the recognition stage, classification is performed by computing distances in the MMSV feature subspace. Experiments show that the extracted MMSV features carry more discriminative information and are robust to pose and illumination variations. The method achieves a recognition rate of about 98.4% on the WHU-3D database.
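The singular-value ingredient of such features can be illustrated as below: the leading singular values of an image and of downsampled copies are concatenated into a descriptor. This only sketches the multi-scale SVD idea, not the full MMSV construction or the subset partitioning; the scales and the number of retained values are assumptions.

```python
# Toy multi-scale singular-value descriptor: concatenate the leading singular
# values of an image at several downsampling factors.
import numpy as np

def multi_scale_singular_values(img, scales=(1, 2, 4), k=10):
    feats = []
    for s in scales:
        small = img[::s, ::s].astype(float)           # naive downsampling
        sv = np.linalg.svd(small, compute_uv=False)   # singular values only
        feats.append(sv[:k])
    return np.concatenate(feats)
```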

13.
3D face recognition is the future direction of face recognition and is expected to solve the bottleneck problems of 2D face recognition. Based on the MEGI model, the spherical correlation coefficient is extended and applied to 3D face recognition. Experiments show that the MEGI-based method can be used for 3D face recognition.

14.
鹿乐, 周大可, 胡阳明. 《计算机应用》(Journal of Computer Applications), 2012, 32(11): 3189-3192
To address the low efficiency of traditional 3D face reconstruction algorithms and their difficulty in meeting practical requirements, a 3D face reconstruction algorithm based on feature blocks is proposed and applied to 3D face recognition, yielding weighted, feature-block-based 3D face recognition. First, the raw data are normalized with a non-uniform resampling method based on a planar template; second, the Active Shape Model (ASM) algorithm locates features and partitions both the 3D faces and the 2D face images into feature blocks; then, a sparse morphable model algorithm based on block-wise principal component analysis (PCA) reconstructs each facial block in 3D; finally, the algorithm is applied to 3D face recognition. Experiments show that the reconstruction algorithm achieves high accuracy and efficiency, can reach a global optimum, and improves the 3D face recognition rate.
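The block-wise PCA (morphable model) reconstruction step can be illustrated with a least-squares fit of PCA coefficients per block, as sketched below; the sparsity constraints and the ASM-based block partition of the paper are not reproduced, and the inputs are assumed to be given.

```python
# Toy block-wise PCA reconstruction: express an observed facial block as the
# mean shape plus a linear combination of PCA basis vectors.
import numpy as np

def fit_block(block_observed, mean_shape, basis):
    # block_observed, mean_shape: (dim,) vectors; basis: (n_components, dim).
    coeffs, *_ = np.linalg.lstsq(basis.T, block_observed - mean_shape, rcond=None)
    reconstruction = mean_shape + basis.T @ coeffs
    return reconstruction, coeffs
```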

15.
One of the main challenges in face recognition is pose and illumination variation, which drastically affects recognition performance, as confirmed by the results of recent large-scale face recognition evaluations. This paper presents a new technique for face recognition, based on the joint use of 3D models and 2D images, specifically conceived to be robust with respect to pose and illumination changes. A 3D model of each user is exploited in the training stage (i.e. enrollment) to generate a large number of 2D images representing virtual views of the face with varying pose and illumination. Such images are then used to learn, in a supervised manner, a set of subspaces constituting the user's template. Recognition occurs by matching 2D images with the templates, and no 3D information (neither images nor face models) is required. The experiments carried out confirm the efficacy of the proposed technique.
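A minimal sketch of the subspace-template matching idea follows: one PCA subspace is learned per enrolled user from that user's (virtual) training views, and a probe image is assigned to the user whose subspace reconstructs it with the smallest error. The subspace type, dimensionality, and distance are assumptions; the paper's supervised subspace learning is not reproduced.

```python
# Per-user PCA subspace templates with nearest-subspace recognition.
import numpy as np
from sklearn.decomposition import PCA

def enroll(images_per_user, n_components=20):
    # images_per_user: dict user -> (n_images, n_pixels) array of virtual views.
    return {u: PCA(n_components=min(n_components, x.shape[0])).fit(x)
            for u, x in images_per_user.items()}

def recognize(probe, templates):
    # probe: (n_pixels,) vectorized 2D image.
    def error(pca):
        rec = pca.inverse_transform(pca.transform(probe[None]))
        return np.linalg.norm(rec - probe)
    return min(templates, key=lambda u: error(templates[u]))
```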

16.
Facial expression analysis has interested many researchers in the past decade due to its potential applications in various fields such as human–computer interaction, psychological studies, and facial animation. Three-dimensional facial data has been proven to be insensitive to illumination condition and head pose, and has hence gathered attention in recent years. In this paper, we focus on discrete expression classification using 3D data from the human face. The paper is divided into two parts. In the first part, we present improvements to the fitting of the Annotated Face Model (AFM) so that a dense point correspondence can be found, in terms of both position and semantics, among static 3D face scans or frames in 3D face sequences. Then, an expression recognition framework on static 3D images is presented. It is based on a Point Distribution Model (PDM) which can be built on different features. In the second part of this article, a systematic pipeline that operates on dynamic 3D sequences (4D datasets or 3D videos) is proposed and alternative modules are investigated as a comparative study. We evaluated both 3D and 4D Facial Expression Recognition pipelines on two publicly available facial expression databases and obtained promising results.

17.
Objective: The expression information of a 3D face is unevenly distributed around the facial features and cheeks; describing expressions adequately and assigning reasonable weights is an important way to improve recognition. To improve the accuracy of 3D facial expression recognition, a 3D facial expression recognition algorithm based on weighted local curl patterns is proposed. Method: First, to extract features with strong expression discriminability, the curl vectors of the 3D face are encoded to obtain local curl patterns as expression features. Then, the ICNP (interactive closest normal points) algorithm is combined with a minimum projection deviation algorithm: the former performs an irregular partition of the 3D face into 11 sub-regions that preserve the integrity of the facial features and muscles under expression changes, and the latter assigns weights to the local curl pattern features of each region according to its contribution to expression recognition. Finally, the weighted local curl pattern features are fed into a classifier for expression recognition. Result: The proposed local curl pattern features were evaluated on the BU-3DFE 3D facial expression database and proved more discriminative than other expression features; in expression recognition experiments on BU-3DFE, compared with other 3D facial expression recognition algorithms, the proposed algorithm achieved the highest average recognition rate, 89.67%, with low misclassification rates for easily confused expressions such as sadness, anger, and disgust. Conclusion: Local curl pattern features have strong representational power for 3D facial expressions; combining the ICNP algorithm with the minimum projection deviation algorithm enables effective region partitioning and accurate weight computation, effectively improving the features' ability to discriminate expressions. Experimental results show that the algorithm achieves a high recognition rate for 3D facial expressions and remains effective for easily confused, similar expressions.
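As a toy illustration of the curl ingredient only (the encoding into local curl patterns, the ICNP-based region partition, and the weighting are not shown), the sketch below computes the z-component of the curl of a 2D vector field sampled on a grid using finite differences.

```python
# z-component of the curl of a 2D vector field (vx, vy) on a regular grid.
import numpy as np

def curl_z(vx, vy, spacing=1.0):
    dvy_dx = np.gradient(vy, spacing, axis=1)   # d(vy)/dx
    dvx_dy = np.gradient(vx, spacing, axis=0)   # d(vx)/dy
    return dvy_dx - dvx_dy
```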

18.
19.
20.
A 3D reconstruction method based on binocular vision is proposed that requires neither expensive equipment nor a generic model. The effective face reconstruction region is extracted from the rectified images to reduce the overall processing time. The Realtime local stereo matching algorithm and the region-growing algorithm constrained by seed disparities are improved: Realtime threshold ordering and confidence ordering are combined for region growing, which improves the reliability of seed pixel extraction and reduces the likelihood of mismatches during region growing. Finally, texture mapping is studied to improve the realism of the reconstructed model. Experimental results show that the method produces realistic, smooth 3D face models.
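An illustrative sketch of disparity-constrained region growing is given below: starting from seed pixels with trusted disparities, 4-connected neighbours are added while their disparity stays within a tolerance of the current pixel. The Realtime matching improvements, threshold ordering, and confidence ordering of the paper are not reproduced; the tolerance is a placeholder.

```python
# Disparity-constrained region growing from seed pixels (BFS over the grid).
import numpy as np
from collections import deque

def grow_region(disparity, seeds, tol=1.0):
    h, w = disparity.shape
    grown = np.zeros((h, w), dtype=bool)
    queue = deque(seeds)
    for r, c in seeds:
        grown[r, c] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not grown[nr, nc]
                    and abs(disparity[nr, nc] - disparity[r, c]) < tol):
                grown[nr, nc] = True
                queue.append((nr, nc))
    return grown
```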
