Similar Documents
20 similar documents found (search time: 15 ms)
1.
Deformation modeling for robust 3D face matching
Face recognition based on 3D surface matching is promising for overcoming some of the limitations of current 2D image-based face recognition systems. The 3D shape is generally invariant to pose and lighting changes, but not to non-rigid facial movement such as expressions. Collecting and storing multiple templates to account for various expressions for each subject in a large database is not practical. We propose a facial surface modeling and matching scheme to match 2.5D facial scans in the presence of both non-rigid deformations and pose changes (multiview) to a 3D face template. A hierarchical geodesic-based resampling approach is applied to extract landmarks for modeling facial surface deformations. We are able to synthesize the deformation learned from a small group of subjects (control group) onto a 3D neutral model (not in the control group), resulting in a deformed template. A user-specific (3D) deformable model is built by combining the templates with synthesized deformations. The matching distance is computed by fitting this generative deformable model to a test scan. A fully automatic prototype 3D face matching system has been developed. Experimental results demonstrate that the proposed deformation modeling scheme increases 3D face matching accuracy.

2.
In this paper, we present a fully automatic, real-time approach for person-independent recognition of facial expressions from dynamic sequences of 3D face scans. In the proposed solution, a set of 3D facial landmarks is first detected automatically; the local characteristics of the face in the neighborhoods of these landmarks and their mutual distances are then used to model the facial deformation. By training two hidden Markov models for each facial expression to be recognized and combining them into a multiclass classifier, an average recognition rate of 79.4% is obtained on the 3D dynamic sequences showing the six prototypical facial expressions of the Binghamton University 4D Facial Expression database. Comparisons with competing approaches on the same database show that our solution obtains effective results while being able to process facial sequences in real time.

3.

The development of web cameras and smartphones has matured, more and more face recognition applications are implemented on embedded systems, and the demand for real-time face recognition on such systems is increasing. To improve accuracy, most modern face recognition systems consist of multiple deep neural network models. However, on an embedded system, integrating these complex neural network models and executing them simultaneously makes it difficult to achieve real-time recognition of human faces and their identities. In view of this, this study proposes a new frame analysis mechanism, the continuous frames skipping mechanism (CFSM), which analyzes each frame in real time to determine whether face recognition needs to be performed on it. Frames that do not need to be re-recognized are skipped, greatly reducing the workload of the face recognition system and achieving real-time face recognition on the embedded system. Experimental results show that the proposed CFSM mechanism greatly increases the speed of face recognition in video on the embedded system, achieving the goal of real-time face recognition.

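The abstract does not spell out how CFSM decides that a frame need not be re-analyzed. A minimal sketch of one plausible skip rule (mean absolute inter-frame difference against an assumed threshold — an illustration, not the authors' actual criterion) is:

```python
import numpy as np

def should_recognize(prev_gray, curr_gray, threshold=8.0):
    # Mean absolute difference between the current frame and the last
    # frame on which recognition was actually run; re-run recognition
    # only when the scene has changed enough.
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    return bool(diff.mean() > threshold)
```

A video loop would call this per frame, keeping the last recognized frame as `prev_gray` and skipping the recognition models whenever it returns `False`.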

4.
A fully automated, multistage system for real-time recognition of facial expression is presented. The system uses facial motion to characterize monochrome frontal views of facial expressions and is able to operate effectively in cluttered and dynamic scenes, recognizing the six emotions universally associated with unique facial expressions, namely happiness, sadness, disgust, surprise, fear, and anger. Faces are located using a spatial ratio template tracker algorithm. Optical flow of the face is subsequently determined using a real-time implementation of a robust gradient model. The expression recognition system then averages facial velocity information over identified regions of the face and cancels out rigid head motion by taking ratios of this averaged motion. The motion signatures produced are then classified using Support Vector Machines as either nonexpressive or as one of the six basic emotions. The completed system is demonstrated in two simple affective computing applications that respond in real-time to the facial expressions of the user, thereby providing the potential for improvements in the interaction between a computer user and technology.

5.
For effective interaction between humans and socially adept, intelligent service robots, a key capability required by this class of sociable robots is the successful interpretation of visual data. In addition to crucial techniques like human face detection and recognition, an important next step for enabling intelligence and empathy within social robots is emotion recognition. In this paper, an automated and interactive computer vision system is investigated for human facial expression recognition and tracking based on facial structure features and movement information. Twenty facial features are adopted because they are more informative and prominent, reducing ambiguity during classification. An unsupervised learning algorithm, distributed locally linear embedding (DLLE), is introduced to recover the inherent properties of scattered data lying on a manifold embedded in high-dimensional input facial images. The selected person-dependent facial expression images in a video are classified using the DLLE. In addition, facial expression motion energy is introduced to describe the tension of the facial muscles during expressions for person-independent recognition. This method takes advantage of optical flow, which tracks the movement of the feature points. Finally, experimental results show that our approach is able to separate different expressions successfully.

6.
7.
Anthropometric 3D Face Recognition
We present a novel anthropometric three dimensional (Anthroface 3D) face recognition algorithm, which is based on a systematically selected set of discriminatory structural characteristics of the human face derived from the existing scientific literature on facial anthropometry. We propose a novel technique for automatically detecting 10 anthropometric facial fiducial points that are associated with these discriminatory anthropometric features. We isolate and employ unique textural and/or structural characteristics of these fiducial points, along with the established anthropometric facial proportions of the human face, for detecting them. Lastly, we develop a completely automatic face recognition algorithm that employs facial 3D Euclidean and geodesic distances between these 10 automatically located anthropometric facial fiducial points and a linear discriminant classifier. On a database of 1149 facial images of 118 subjects, we show that the standard deviation of the Euclidean distance of each automatically detected fiducial point from its manually identified position is less than 2.54 mm. We further show that the proposed Anthroface 3D recognition algorithm performs well (equal error rate of 1.98% and a rank 1 recognition rate of 96.8%), outperforms three of the existing benchmark 3D face recognition algorithms, and is robust to the observed fiducial point localization errors.
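The Euclidean part of the distance features described above can be sketched as plain pairwise distances over the fiducial points (the geodesic distances require the mesh surface and are omitted, and the function name is illustrative, not from the paper):

```python
import numpy as np

def distance_features(points):
    # Pairwise 3D Euclidean distances between fiducial points,
    # flattened into one feature vector; 10 points give 45 distances.
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    return np.array([np.linalg.norm(pts[i] - pts[j])
                     for i in range(n) for j in range(i + 1, n)])
```

The resulting vectors would then be fed to the linear discriminant classifier mentioned in the abstract.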

8.
Face detection and facial feature localization are important components of face analysis, the goal being to locate facial features (eyes, nose, mouth, ears, etc.) in an image. Although humans perform this task effortlessly, it remains extremely difficult for machines. The technology has advanced considerably in recent years and has been applied successfully in many fields, including face recognition, pose recognition, expression recognition, and facial animation. This paper implements a real-time facial feature localization system based on the multi-stage facial feature detection method proposed by David Cristinacce and Tim Cootes. Several improvements to the original algorithm are also made, greatly increasing its speed with only a negligible loss of accuracy.

9.
Facial features under varying expressions and partial occlusions can degrade overall face recognition performance. As a solution, we suggest that the contribution of these features to the final classification should be determined. In order to represent the contribution of facial features according to their variations, we propose a feature selection process that describes facial features as local independent component analysis (ICA) features. These local features are acquired using a locally lateral subspace (LLS) strategy. Then, through linear discriminant analysis (LDA), we investigate the intraclass and interclass representation of each local ICA feature and express each feature's contribution via a weighting process. Using these weights, we define the contribution of each feature at the local classifier level. In order to recognize faces under the single-sample constraint, we implement the LLS strategy on locally linear embedding (LLE) along with the proposed feature selection, and we highlight the efficiency of this implementation. The overall accuracy achieved by our approach on datasets with different facial expressions and partial occlusions, such as AR, JAFFE, FERET and CK+, is 90.70%. We also present survey results on face recognition performance and physiological feature selection performed by human subjects.

10.
This paper reports a prototype system for multi-pose face image recognition. Unlike existing systems and methods, it recognizes face images of cooperative subjects under varying pose (in-plane rotation and in-depth rotation, limited to cases where both eyes are visible). Because the imaging conditions are relaxed, the system is promising for applications such as identity verification, security, and video conferencing. Facial feature detection under variable pose, pose estimation, recognition modeling, and template-correlation-based matching are studied in depth, and the effects of illumination, pose, and resolution changes on recognition are analyzed. Experiments on a test set of 30 subjects with 18 images each achieved a 100% recognition rate.

11.
Toward automatic simulation of aging effects on face images
The process of aging causes significant alterations in the facial appearance of individuals. When compared with other sources of variation in face images, appearance variation due to aging displays some unique characteristics. Changes in facial appearance due to aging can even affect discriminatory facial features, resulting in deterioration of the ability of humans and machines to identify aged individuals. We describe how the effects of aging on facial appearance can be explained using learned age transformations and present experimental results to show that reasonably accurate estimates of age can be made for unseen images. We also show that we can improve our results by taking into account the fact that different individuals age in different ways and by considering the effect of lifestyle. Our proposed framework can be used for simulating aging effects on new face images in order to predict how an individual might look in the future or how he/she used to look in the past. The methodology presented has also been used for designing a face recognition system robust to aging variation. In this context, the perceived age of the subjects in the training and test images is normalized before the training and classification procedure so that aging variation is eliminated. Experimental results demonstrate that, when age normalization is used, the performance of our face recognition system can be improved.

12.
钟锐, 吴怀宇, 何云. 《计算机科学》 (Computer Science), 2018, 45(6): 308-313
Traditional face recognition models are trained offline, and the high dimensionality of face features limits the real-time performance of such algorithms. This paper builds a fast face recognition algorithm from two directions: the face features and the classifier. First, the SDM (Supervised Descent Method) algorithm locates the facial landmarks, and local MB-CSLBP (Multi-Block Center-Symmetric Local Binary Patterns) features are extracted in the neighborhood of each landmark. The neighborhood features of all landmarks are concatenated into a local fusion feature, the proposed LFP-MB-CSLBP (Local Fusion Feature of MB-CSLBP). These features are fed into a hierarchical incremental tree (HI-tree) to train the face recognition model online. The HI-tree uses hierarchical clustering to realize incremental learning, so the recognition model can be trained online with high real-time performance and accuracy. Finally, the recognition rate and real-time performance of the algorithm are tested on three face databases and on face video captured by a camera. Experimental results show that, compared with other current algorithms, the proposed algorithm achieves a higher face recognition rate and better real-time performance.
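The center-symmetric LBP operator underlying MB-CSLBP compares the four center-symmetric neighbor pairs of a pixel rather than each neighbor against the center. A minimal single-pixel sketch (the multi-block averaging step is omitted, and the threshold value is an assumption) is:

```python
import numpy as np

def cs_lbp(patch, t=0.01):
    # 4-bit center-symmetric LBP code of a 3x3 patch: each bit is 1
    # when one neighbor exceeds its diametrically opposite neighbor
    # by more than the small threshold t.
    n = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
         patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for i in range(4):
        if n[i] - n[i + 4] > t:
            code |= 1 << i
    return code
```

With only 16 possible codes per pixel (versus 256 for classic LBP), the per-landmark histograms stay short, which is one reason CS-LBP variants suit real-time settings.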

13.
Speech is one of the most important skills in normal human life; it results from the coordinated movement of speech-related muscles under the control of the central nervous system. Surface electromyography (sEMG) is currently a common method for acquiring muscle electrical signals and can capture reliable electrophysiological information. When sEMG signals are used for speech classification, the chosen electrode positions strongly affect classification accuracy. However, current sEMG-based speech recognition methods lack an objective criterion for selecting electrode positions and numbers, and it is unclear whether electrodes at symmetric left and right positions on the face and neck contribute redundantly to sEMG speech recognition. In this paper, 120-channel electrodes (symmetric about the facial and neck midlines) recorded facial and neck sEMG from eight subjects with normal pronunciation while each spoke five Chinese words and five English words, and the contributions of sEMG from symmetric left and right positions to speech recognition were examined. The results show that left- and right-side muscle activity follows similar patterns during speech, but the correlation between symmetric positions is lower on the face than on the neck. Classification accuracy differs little between left-side and right-side neck sEMG, whereas the results from left- and right-side facial sEMG differ markedly. Therefore, sEMG signals at symmetric neck positions contribute consistently to speech recognition while those on the face do not, which offers a new approach for reducing the number of electrodes and selecting the best channels in future work.
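The left-right redundancy question above reduces to correlating each channel with the channel at its mirrored position. A minimal sketch (the function name is illustrative) is:

```python
import numpy as np

def bilateral_correlation(left_channel, right_channel):
    # Pearson correlation between one sEMG channel and the channel at
    # the mirrored position on the other side of the midline; values
    # near 1 suggest the mirrored channel is largely redundant.
    return float(np.corrcoef(left_channel, right_channel)[0, 1])
```

Averaging this value over word utterances, separately for facial and neck pairs, would reproduce the kind of comparison the abstract describes.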

14.
In this paper, the effect of partial occlusion on facial expression recognition is analyzed. Classification of partially occluded images into one of the six basic facial expressions is performed using a method based on Gabor wavelet texture information extraction, a supervised image decomposition method based on discriminant non-negative matrix factorization, and a shape-based method that exploits the geometrical displacement of certain facial features. We demonstrate how partial occlusion affects the above methods in the classification of the six basic facial expressions, and indicate how partial occlusion affects human observers when recognizing facial expressions. An attempt is also made to specify which part of the face (left, right, lower or upper region) contains more discriminant information for each facial expression, and conclusions are drawn regarding the pairs of facial expressions that each type of occlusion causes to be confused.
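The four occlusion types studied (left, right, upper, lower region) can be simulated by masking the corresponding half of a face image. A minimal sketch, not the authors' exact protocol, is:

```python
import numpy as np

def occlude(img, region):
    # Zero out one half of a grayscale face image to simulate the
    # four partial-occlusion types considered in the study.
    out = np.array(img, dtype=float, copy=True)
    h, w = out.shape
    if region == "left":
        out[:, :w // 2] = 0
    elif region == "right":
        out[:, w // 2:] = 0
    elif region == "upper":
        out[:h // 2, :] = 0
    elif region == "lower":
        out[h // 2:, :] = 0
    return out
```

Running the same classifier on original and occluded versions of each test image is the usual way such per-region analyses are set up.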

15.
The increasing availability of 3D facial data offers the potential to overcome the intrinsic difficulties faced by conventional face recognition using 2D images. Instead of extending 2D recognition algorithms for 3D purposes, this letter proposes a novel strategy for 3D face recognition from the perspective of representing each 3D facial surface with a 2D attribute image, taking advantage of the advances in 2D face recognition. In our approach, each 3D facial surface is mapped homeomorphically onto a 2D lattice, where the value at each site is an attribute that represents the local 3D geometrical or textural properties of the surface and is therefore invariant to pose changes. This lattice is then interpolated to generate a 2D attribute image, and 3D face recognition is achieved by applying traditional 2D face recognition techniques to the obtained attribute images. In this study, we chose the pose-invariant local mean curvature calculated at each vertex on the 3D facial surface to construct the 2D attribute image and adopted the eigenface algorithm for attribute image recognition. We compared our approach to state-of-the-art 3D face recognition algorithms on the FRGC (Version 2.0), GavabDB and NPU3D databases. Our results show that the proposed approach improves robustness to head pose variation and can produce more accurate 3D multi-pose face recognition.

16.
Sub-pattern based Gabor feature fusion for single-sample face recognition
To address the poor performance of traditional face recognition methods when only a single training sample is available, a sub-pattern based Gabor feature fusion method is proposed for single-sample face recognition. The Gabor transform first extracts local facial information. To exploit the spatial position information of the facial organs, the Gabor face images are partitioned into blocks that form sub-patterns, and each sub-pattern is classified with a minimum-distance classifier. The sub-pattern results are then combined by decision-level fusion to produce the final classification. Two sub-pattern Gabor feature fusion methods are proposed, differing in the sub-pattern construction principle and the decision-level fusion strategy. Experiments and comparative analysis on the ORL and CAS-PEAL-R1 face databases show that the proposed methods effectively improve the accuracy of single-sample face recognition and the performance of single-sample face recognition systems.

17.
A human face not only plays a role in the identification of an individual but also communicates useful information about the person's emotional state at a particular time. No wonder automatic facial expression recognition has become an area of great interest within the computer science, psychology, medicine, and human-computer interaction research communities. Various feature extraction techniques, from statistical to geometrical, have been used for recognition of expressions from static images as well as real-time video. In this paper, we present a method for automatic recognition of facial expressions from face images by providing discrete wavelet transform features to a bank of seven parallel support vector machines (SVMs). Each SVM is trained to recognize a particular facial expression, so that it is most sensitive to that expression. Multi-class classification is achieved by combining multiple SVMs performing binary classification using the one-against-all approach, and the outputs of all SVMs are combined using a maximum function. The classification efficiency is tested on static images from the publicly available Japanese Female Facial Expression database. Experiments using the proposed method demonstrate promising results.
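The one-against-all bank with a maximum combination described above can be sketched with scikit-learn's `SVC`. The linear kernel and all parameter choices here are assumptions, not the paper's settings, and the discrete wavelet feature extraction is omitted:

```python
import numpy as np
from sklearn.svm import SVC

class OneVsAllMax:
    # One binary SVM per expression; the prediction is the class whose
    # SVM returns the largest decision value (maximum combination).
    def __init__(self, classes):
        self.classes = list(classes)
        self.svms = {c: SVC(kernel="linear") for c in self.classes}

    def fit(self, X, y):
        y = np.asarray(y)
        for c, svm in self.svms.items():
            # one-against-all: current class vs. everything else
            svm.fit(X, (y == c).astype(int))
        return self

    def predict(self, X):
        scores = np.column_stack(
            [self.svms[c].decision_function(X) for c in self.classes])
        return [self.classes[i] for i in scores.argmax(axis=1)]
```

With seven expression labels this yields exactly the seven-SVM bank the abstract describes, each binary machine most sensitive to its own expression.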

18.
杨光, 王晅, 徐鹏, 陈丹丹. 《计算机工程》 (Computer Engineering), 2012, 38(22): 151-153
To improve the robustness of face recognition to changes in pose, position, and expression, a facial feature extraction method based on the non-subsampled contourlet transform (NSCT) and a modified pulse-coupled neural network (M-PCNN) is proposed. NSCT performs multi-scale decomposition and multi-directional sparse decomposition of the input image to capture its high-dimensional singular information; the M-PCNN model then extracts the information entropy of each subband as the facial feature, and a support vector machine (SVM) performs classification and recognition. Simulation results show that the method is robust and performs well in recognition and classification.
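Using subband information entropy as a scalar feature, as described above, can be sketched as a histogram-based entropy estimate (the bin count is an assumption; the NSCT decomposition and M-PCNN model themselves are not reproduced here):

```python
import numpy as np

def band_entropy(band, bins=64):
    # Shannon entropy (in bits) of the histogram of one subband's
    # coefficients, used as a single scalar feature for that subband.
    hist, _ = np.histogram(np.ravel(band), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

Concatenating one such value per NSCT subband gives a compact feature vector suitable for an SVM.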

19.
3D face recognition under expression variation based on feature points
Objective: To overcome the effect of expression variation on 3D face recognition, a method that extracts local region features at facial feature points is proposed. Method: First, the 2D ASM (active shape model) algorithm roughly locates the facial feature points on the depth image, and the shape index feature then locates them precisely on the facial point cloud. Next, a series of iso-geodesic contours centered at the nose tip is extracted to characterize the facial shape, and pose-invariant Procrustean vector features (distances and angles) are extracted as recognition features. Finally, the classification results of the individual iso-geodesic contours are compared and combined by decision-level fusion. Results: In feature point localization and recognition experiments on the FRGC V2.0 face database, the average localization error is below 2.36 mm and the rank-1 recognition rate is 98.35%. Conclusion: By extracting features at feature points in the approximately rigid regions of the face, the method effectively avoids the mouth region, which is strongly affected by expression. Experiments show that the method achieves high recognition accuracy and is robust to pose and expression variation.
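The decision-level fusion across the per-contour classifiers is not specified in detail in the abstract; a simple majority-vote stand-in (an assumption, not the paper's actual rule) would be:

```python
from collections import Counter

def fuse_decisions(contour_labels):
    # Majority vote over the identity decisions produced by the
    # individual iso-geodesic contour classifiers.
    return Counter(contour_labels).most_common(1)[0][0]
```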

20.
With the rapid development of computer technology, multi-platform computer vision libraries have emerged. OpenCV, an open-source computer vision library, is widely used in many fields thanks to its cross-platform compatibility and broad interfaces. Under low-illumination conditions, large differences in lighting or insufficient light prevent traditional image acquisition systems from capturing high-quality face images, limiting their usefulness. This paper presents a super-resolution face image acquisition system for low-illumination conditions, designed with 3D face recognition algorithms built on OpenCV in a C++ environment. Experiments show that the design is real-time (fast focusing), fast (0.05 s per capture), and accurate (99.3% face recognition rate), fully meeting the requirements of super-resolution face image acquisition under low illumination.

Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号