Similar Documents
Found 20 similar documents (search time: 33 ms)
1.
3D face models are widely used in video telephony, video conferencing, film production, computer games, face recognition, and other fields. Current 3D face modeling typically requires multiple images and a neutral expression. This paper proposes a method for reconstructing a 3D face from frontal and profile images with arbitrary expressions. First, facial features are extracted from the 2D images; then, based on a 3D statistical face model, a subject-specific 3D face is obtained through scaling, translation, rotation, and global and local matching. Using the facial texture information in the 2D images, a complete textured 3D face is obtained via texture mapping. Reconstruction experiments on a large number of real 2D face images confirm the effectiveness and robustness of the method.

2.
Most research into facial expression recognition has focused on the visible spectrum, which is sensitive to illumination change. In this paper, we integrate thermal infrared data with visible-spectrum images for spontaneous facial expression recognition. First, active appearance model (AAM) parameters and three defined head-motion features are extracted from visible-spectrum images, and several thermal statistical features are extracted from infrared (IR) images. Second, feature selection is performed using the F-test statistic. Third, Bayesian networks (BNs) and support vector machines (SVMs) are employed for both decision-level and feature-level fusion. Experiments on the Natural Visible and Infrared facial Expression (NVIE) spontaneous database show the effectiveness of the proposed methods and demonstrate the supplementary role of thermal IR images in visible facial expression recognition.
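The F-test feature selection step in the abstract above can be illustrated with a minimal numpy-only sketch: a one-way ANOVA F-statistic scores each feature by between-class versus within-class variance. The synthetic data, class layout, and top-k cutoff are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def f_statistic(X, y):
    """One-way ANOVA F-statistic for each feature column of X given labels y."""
    classes = np.unique(y)
    n, overall_mean = len(y), X.mean(axis=0)
    # Between-class and within-class sums of squares, per feature
    ss_between = sum(len(X[y == c]) * (X[y == c].mean(axis=0) - overall_mean) ** 2
                     for c in classes)
    ss_within = sum(((X[y == c] - X[y == c].mean(axis=0)) ** 2).sum(axis=0)
                    for c in classes)
    df_between, df_within = len(classes) - 1, n - len(classes)
    return (ss_between / df_between) / (ss_within / df_within)

rng = np.random.default_rng(0)
# 40 samples, 5 features; only feature 0 is shifted by class, the rest are noise
y = np.repeat([0, 1], 20)
X = rng.normal(size=(40, 5))
X[:, 0] += 3 * y
F = f_statistic(X, y)
top2 = np.argsort(F)[::-1][:2]   # keep the two highest-scoring features
assert 0 in top2                 # the informative feature survives selection
```

Features with the largest F-values are kept; in the paper this score would rank the AAM, head-motion, and thermal statistical features before fusion.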

3.
Face recognition algorithms based on 2D images extract facial texture features, but illumination, expression, and pose variations degrade their performance. 3D facial features describe the geometric structure of the face more precisely and are less affected by makeup and illumination, yet 3D data alone lacks texture information. This paper therefore fuses 2D and 3D facial features for recognition, combining Gabor-transform-based 2D features with 3D gradient-histogram features computed under a new block-partition strategy. First, Gabor features are extracted from the 2D face image. Then, 3D gradient-histogram features are extracted from the 3D face using the new partition strategy, aiming to capture discriminative facial characteristics. Next, the 2D and 3D features are each trained with a linear discriminant analysis (LDA) subspace algorithm, and the similarity matrices of the two feature types are fused by the sum rule. Finally, the recognition result is output.

4.
Facial Expression Recognition Based on Two-Dimensional Principal Component Analysis
A facial expression recognition method is proposed that combines two-dimensional principal component analysis (2DPCA), applied directly to image matrices, with multi-classifier fusion. 2DPCA is first used for feature extraction; a fuzzy-integral-based combination of multiple classifiers then recognizes seven expressions (anger, disgust, fear, happiness, neutral, sadness, surprise). Experiments on the JAFFE static facial expression database show that, compared with conventional principal component analysis (PCA), feature extraction with 2DPCA achieves both a higher recognition rate and substantially faster computation.
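The core of 2DPCA, operating on image matrices rather than flattened vectors, can be sketched as follows. This is a minimal illustration on random toy images, with assumed image size and component count, not the paper's code.

```python
import numpy as np

def twodpca_projection(images, k):
    """2DPCA: learn a projection directly from image matrices (no vectorization).
    Returns the top-k eigenvectors of the image covariance matrix G."""
    mean = np.mean(images, axis=0)
    # G = (1/M) * sum_j (A_j - mean)^T (A_j - mean), a (w x w) matrix
    G = sum((A - mean).T @ (A - mean) for A in images) / len(images)
    eigvals, eigvecs = np.linalg.eigh(G)   # eigh returns ascending order
    return eigvecs[:, ::-1][:, :k]         # top-k principal axes

rng = np.random.default_rng(0)
images = rng.normal(size=(10, 32, 32))     # 10 toy "face" images
W = twodpca_projection(images, k=4)
features = images[0] @ W                   # 2DPCA feature matrix: 32 x 4
assert features.shape == (32, 4)
```

Because G is only w-by-w (here 32-by-32) instead of the (h*w)-by-(h*w) covariance of vectorized PCA, the eigendecomposition is far cheaper, which matches the speed advantage reported in the abstract.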

5.
Objective: Current 2D expression recognition methods perform poorly on easily confused expressions and are sensitive to pose and illumination changes. Using 3D facial landmark data captured with the Kinect RGBD camera, a real-time expression recognition method is proposed that combines 2D pixel features with 3D landmark features. Method: First, three classic descriptors, LBP (local binary patterns), Gabor filters, and HOG (histogram of oriented gradients), extract 2D pixel features. Because 2D pixel features alone describe facial expressions only to a limited extent, three further 3D features, the angles, distances, and normal vectors between facial landmarks, are extracted to characterize expression changes in finer detail. To improve discrimination of confusable expressions and increase robustness, three random forest models are trained on the 2D pixel features and three on the 3D landmark features, and the outputs of the six random forest classifiers are combined by weighted voting to produce the final expression label. Results: Evaluated on the 3D expression dataset Face3D over nine expressions, combining 2D pixel and 3D landmark features improves recognition: the average recognition rate reaches 84.7%, 4.5% higher than the best method proposed in recent years, and 3.0% and 5.8% higher than using the 2D or 3D features alone, respectively. Recognition rates for the strongly confusable expressions anger, sadness, and fear all exceed 80%, and the method runs in real time at 10-15 frames/s. Conclusion: Combining the 2D pixel features and 3D landmark features of expression images strengthens the description of facial expression changes, and weighted averaging over the multiple random forest classifiers effectively reduces interference between confusable expressions and improves robustness. The experiments show the method outperforms plain 2D or 3D features while remaining real-time.
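The weighted combination of the six classifiers' outputs described above can be sketched generically as a weighted average of class-probability vectors. The weights, class count, and probability values below are hypothetical toy numbers; no random forest training is shown.

```python
import numpy as np

def weighted_fusion(prob_list, weights):
    """Combine per-classifier class-probability vectors by weighted averaging."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()          # normalize weights to sum to 1
    fused = sum(w * p for w, p in zip(weights, prob_list))
    return fused, int(np.argmax(fused))        # fused distribution and label

# Six classifiers (e.g. 3 on 2D features, 3 on 3D features), three classes
probs = [np.array([0.60, 0.30, 0.10]), np.array([0.50, 0.40, 0.10]),
         np.array([0.20, 0.70, 0.10]), np.array([0.30, 0.60, 0.10]),
         np.array([0.40, 0.50, 0.10]), np.array([0.55, 0.35, 0.10])]
weights = [2, 2, 1, 1, 1, 2]                   # hypothetical per-classifier trust
fused, label = weighted_fusion(probs, weights)
```

In the paper the weights would reflect each random forest's validation accuracy; here they are simply assumed.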

6.
Objective: Expression information in a 3D face is unevenly distributed around the facial features and cheeks; describing expressions fully and assigning reasonable weights are important ways to improve recognition. To raise 3D facial expression recognition accuracy, an algorithm based on weighted local curl patterns is proposed. Method: First, to obtain features with strong expression discrimination, the curl vectors of the 3D face are encoded, yielding local curl patterns as expression features. Then the ICNP (iterative closest normal points) algorithm is combined with a minimum-projection-deviation algorithm: the former performs an irregular partition of the 3D face into 11 sub-regions that preserve the integrity of the facial features and muscles under expression changes, and the latter assigns each region's local-curl-pattern features a weight according to that region's contribution to expression recognition. Finally, the weighted local-curl-pattern features are fed into a classifier for recognition. Results: Evaluation on the BU-3DFE 3D facial expression database shows that the proposed local curl pattern is more discriminative than other expression features; in recognition experiments on BU-3DFE, the algorithm achieves the highest average recognition rate among the compared 3D methods, 89.67%, with low misclassification among the easily confused expressions sadness, anger, and disgust. Conclusion: Local curl patterns represent 3D facial expressions effectively, and combining ICNP with minimum projection deviation yields effective region partitioning and accurate weight computation, improving the features' discriminative power. The experiments confirm a high recognition rate, including good results on similar, easily confused expressions.

7.
Coping with non-frontal head poses during facial expression recognition considerably reduces accuracy and robustness when capturing expressions that occur during natural communication. In this paper, we recognize facial expressions under poses with large rotation angles from 2D videos. A depth-patch-based 4D expression representation model is proposed, reconstructed from 2D dynamic images to delineate continuous spatial changes and temporal context in non-frontal cases. Furthermore, we present an effective deep neural network classifier that accurately captures pose-variant expression features from the depth patches and recognizes non-frontal expressions. Experimental results on the BU-4DFE database show that the proposed method achieves a high recognition accuracy of 86.87% for non-frontal facial expressions within a head rotation range of up to 52°, outperforming existing methods. We also present a quantitative analysis of the components contributing to the performance gain through tests on the BU-4DFE and Multi-PIE datasets.

8.
Unsupervised Texture Segmentation Supported by Gabor Filters and ICA
Texture segmentation has made considerable progress but still lacks a portable solution. An unsupervised texture segmentation framework is established whose core idea is to treat the features extracted by Gabor filters as statistics, integrate them with independent component analysis (ICA), and use the independent components as new texture features, thereby sidestepping the difficult problem of Gabor filter parameter selection. Experimental results show that ICA is better suited than principal component analysis for reshaping texture features, and the method yields satisfactory segmentation for most natural textures.
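The Gabor feature extraction underlying this framework (and item 12's) can be sketched with a small hand-built filter bank; the kernel sizes, orientations, and wavelengths below are illustrative assumptions, and responses are computed on a single patch rather than by full convolution.

```python
import numpy as np

def gabor_kernel(size, theta, wavelength, sigma, gamma=0.5):
    """Real part of a 2D Gabor kernel oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)  # sinusoidal carrier
    return envelope * carrier

# A small bank: 4 orientations x 2 wavelengths (parameters assumed)
bank = [gabor_kernel(15, theta, lam, sigma=3.0)
        for theta in np.linspace(0, np.pi, 4, endpoint=False)
        for lam in (4.0, 8.0)]

rng = np.random.default_rng(0)
patch = rng.normal(size=(15, 15))                  # toy texture patch
# One feature per filter: magnitude of the response at the patch center
features = np.array([np.abs((patch * k).sum()) for k in bank])
assert features.shape == (8,)
```

In the paper these per-filter responses, computed over the whole image, would then be passed to ICA so that the independent components replace hand-tuned filter outputs as texture features.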

9.
Active appearance models (AAMs) are an effective statistical approach to building 2D object models that fuse the target's shape and texture information. Building on 3D face imaging with a correlation-type image sensor, a method for constructing 3D face models is proposed: the depth information obtained by the correlation-type image sensor and the corresponding intensity information are used to extend 2D AAMs to 3D AAMs, fusing facial shape, texture, and depth to build the 3D face model. Face recognition experiments show that under varying pose, expression, and illumination the method outperforms Eigenface and 2D AAMs.

10.
This paper employs both two-dimensional (2D) and three-dimensional (3D) palmprint features for recognition. While a 2D palmprint image contains rich texture information, a 3D palmprint image contains the depth information of the palm surface. Using the two kinds of features together, we achieve higher recognition accuracy and better robustness than with either alone. To recognize palmprints, we use two-phase test sample representation (TPTSR), which has proved successful in face recognition. Before TPTSR, we apply principal component analysis to extract global features from the 2D and 3D palmprint images, and we make the final decision by fusing the 2D and 3D feature matching scores. Experiments on the PolyU 2D + 3D palmprint database, which contains 8,000 samples, show satisfactory recognition performance.

11.
To address complex backgrounds and robustness in facial expression recognition, a method based on Dempster-Shafer (DS) evidence theory is proposed that fuses active shape model (ASM) differential texture features with local directional pattern (LDP) features. ASM differential texture effectively suppresses differences between individual faces while retaining facial expression information. LDP encodes an image by computing edge responses in eight directions, giving it strong noise resistance and the ability to capture the subtle facial changes produced by expressions. During DS evidence fusion, the basic probability assignments are computed with different weight coefficients according to each feature's expression recognition rate. Experiments on a combined JAFFE and Cohn-Kanade database yield an average expression recognition rate of 97.08%, one percentage point higher than LDP alone, effectively improving recognition rate and robustness.
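Dempster's rule, the combination step behind the DS fusion above, can be sketched for two basic probability assignments (BPAs). The mass values and the three-expression frame of discernment are toy assumptions; only singleton hypotheses plus the full frame are modeled.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset hypothesis -> mass)
    by Dempster's rule of combination."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb        # mass falling on the empty set
    # Normalize by 1 - K, where K is the total conflict
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

theta = frozenset({"happy", "sad", "angry"})   # frame of discernment
# Hypothetical BPAs from the two feature channels (LDP and ASM texture)
m_ldp = {frozenset({"happy"}): 0.6, frozenset({"sad"}): 0.2, theta: 0.2}
m_asm = {frozenset({"happy"}): 0.5, frozenset({"angry"}): 0.3, theta: 0.2}
fused = dempster_combine(m_ldp, m_asm)
best = max(fused, key=fused.get)               # most supported hypothesis
```

In the paper, the masses for each channel would additionally be scaled by per-feature weight coefficients reflecting each feature's recognition rate before combination.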

12.
To extract local image features more effectively, a method based on two-dimensional partial least squares (2DPLS) is proposed and applied to facial expression recognition. The method first uses the local binary pattern (LBP) operator to extract texture features from all sub-blocks of an image and assembles them into a local texture feature matrix. Because each sample image is thus converted into a local texture feature matrix, traditional PLS can be generalized to 2DPLS to extract the discriminative information it contains. 2DPLS adapts the construction of the class-membership matrix to the matrix form of the samples and reflects the varying importance of local facial regions. For the singularity of the class-membership covariance matrix, an analytical expression for its generalized inverse is also derived. Experiments on the JAFFE facial expression database show that the method extracts local image features effectively and achieves good expression recognition results.

13.
Pose-Robust Facial Expression Recognition Using View-Based 2D + 3D AAM
This paper proposes a pose-robust face tracking and facial expression recognition method using a view-based 2D + 3D active appearance model (AAM) that extends the 2D + 3D AAM to the view-based approach, where one independent face model is used for a specific view and an appropriate face model is selected for the input face image. Our extension has been conducted in many aspects. First, we use principal component analysis with missing data to construct the 2D + 3D AAM, due to the missing data in the posed face images. Second, we develop an effective model selection method that directly uses the pose angle estimated by the 2D + 3D AAM, which makes face tracking pose-robust and feature extraction for facial expression recognition accurate. Third, we propose a double-layered generalized discriminant analysis (GDA) for facial expression recognition. Experimental results show the following: 1) face tracking by the view-based 2D + 3D AAM, which uses multiple face models with one face model per view, is more robust to pose change than tracking by an integrated 2D + 3D AAM, which uses a single face model for all three views; 2) the double-layered GDA extracts good features for facial expression recognition; and 3) the view-based 2D + 3D AAM outperforms other existing models at pose-varying facial expression recognition.

14.
Face images are difficult to interpret because they are highly variable. Sources of variability include individual appearance, 3D pose, facial expression, and lighting. We describe a compact parametrized model of facial appearance which takes into account all these sources of variability. The model represents both shape and gray-level appearance, and is created by performing a statistical analysis over a training set of face images. A robust multiresolution search algorithm is used to fit the model to faces in new images. This allows the main facial features to be located and a set of shape and gray-level appearance parameters to be recovered. A good approximation to a given face can be reconstructed using fewer than 100 of these parameters. This representation can be used for tasks such as image coding, person identification, 3D pose recovery, gender recognition, and expression recognition. Experimental results are presented for a database of 690 face images obtained under widely varying conditions of 3D pose, lighting, and facial expression. The system performs well on all the tasks listed above.

15.
3D Face Recognition under Expression Variation Based on Facial Feature Points
Objective: To overcome the impact of expression variation on 3D face recognition, a method is proposed that extracts local-region features at facial feature points. Method: First, the 2D ASM (active shape model) algorithm coarsely locates facial feature points on the depth image, and the shape index feature then refines their positions on the facial point cloud. Second, a series of iso-geodesic contours centered at the nose tip is extracted to characterize the facial shape; pose-invariant Procrustean vector features (distances and angles) are then extracted as recognition features. Finally, the classification results of the individual iso-geodesic contour features are compared, and the results are fused at the decision level. Results: Feature-point localization and recognition experiments on the FRGC v2.0 face database give a mean localization error below 2.36 mm and a rank-1 recognition rate of 98.35%. Conclusion: By extracting features in the approximately rigid facial regions around feature points, the method effectively avoids the mouth region, which is strongly affected by expression. Experiments confirm high recognition accuracy and a degree of robustness to pose and expression variation.

16.
In this paper, an efficient method for human facial expression recognition is presented. We first propose a representation model for facial expressions, the spatially maximum occurrence model (SMOM), which is based on the statistical characteristics of training facial images and has powerful representation capability. The elastic shape-texture matching (ESTM) algorithm is then used to measure the similarity between images based on shape and texture information. Combining SMOM and ESTM, the resulting algorithm, SMOM-ESTM, achieves a higher recognition performance. The recognition rates of the SMOM-ESTM algorithm on the AR database and the Yale database are 94.5% and 94.7%, respectively.

17.
18.
This study proposes a novel deep learning approach for fusing 2D and 3D modalities in in-the-wild facial expression recognition (FER). Unlike other studies, we exploit 3D facial information for in-the-wild FER. Because in-the-wild 3D FER datasets are not widely available, 3D facial data are constructed from available 2D datasets using recent advances in 3D face reconstruction. The 3D facial geometry features are then extracted by deep learning to exploit mid-level details, which provide meaningful cues for recognition. In addition, to demonstrate the potential of 3D data for FER, the 2D projected images of the 3D faces are taken as additional input, and their features are jointly fused with the 2D features obtained from the original input. The fused features are then classified by support vector machines (SVMs). The results show that the proposed approach achieves state-of-the-art recognition performance on Real-World Affective Faces (RAF), Static Facial Expressions in the Wild (SFEW 2.0), and the AffectNet dataset. The approach is also applied to a 3D FER dataset, BU-3DFE, to compare the effectiveness of reconstructed and captured 3D face data for FER. This is the first time such a deep learning combination of 3D and 2D facial modalities has been presented in the context of in-the-wild FER.

19.
In this paper, we present a 3D face photography system based on a facial expression training dataset composed of both facial range images (3D geometry) and facial texture (2D photography). The proposed system obtains a 3D geometry representation of a face provided as a 2D photograph, which undergoes a series of transformations through the estimated texture and geometry spaces. In the training phase, facial landmarks are obtained by an active shape model (ASM) extracted from the 2D gray-level photographs. Principal component analysis (PCA) is then used to represent the face dataset, defining an orthonormal basis of texture and another of geometry. In the reconstruction phase, the input is a face image to which the ASM is matched; the extracted facial landmarks and the face image are fed to the PCA basis transform, and a 3D version of the 2D input image is built. Experimental tests using a new training dataset of 70 facial expressions from ten subjects show rapidly reconstructed 3D faces that maintain spatial coherence consistent with human perception, corroborating the efficiency and applicability of the proposed system.

20.
Facial expression recognition has recently become an important research area, and many efforts have been made in facial feature extraction and classification to improve face recognition systems. Most researchers adopt a posed facial expression database in their experiments, but in real-life situations facial expressions may not be very obvious. This article describes the extraction of the minimum number of Gabor wavelet parameters for recognizing natural facial expressions. The objective of our research was to investigate the performance of a facial expression recognition system with a minimum number of Gabor wavelet features. Principal component analysis (PCA) is employed to compress the Gabor features, and we discuss the selection of the minimum number of Gabor features that perform best in a recognition task with a multiclass support vector machine (SVM) classifier. The performance of our approach is compared with results obtained previously by other researchers using other approaches. Experimental results show that the proposed technique successfully recognizes natural facial expressions using a small number of Gabor features, with an 81.7% recognition rate. In addition, we identify the relationship between human vision and computer vision in recognizing natural facial expressions.
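The PCA compression step described in this last abstract, reducing a high-dimensional Gabor feature vector to a few components, can be sketched via the SVD. The feature dimension (200) and target dimension (10) are assumed toy values, and the data are random rather than actual Gabor responses.

```python
import numpy as np

def pca_compress(X, k):
    """Project row-vector samples X onto their top-k principal components."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data; rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]
    return Xc @ components.T, components, mean

rng = np.random.default_rng(0)
# 30 samples of a 200-dim "Gabor feature" vector, compressed to 10 dims
X = rng.normal(size=(30, 200))
Z, components, mean = pca_compress(X, k=10)
X_approx = Z @ components + mean   # approximate reconstruction from 10 dims
assert Z.shape == (30, 10)
```

The compressed vectors Z would then be fed to the multiclass SVM classifier; the abstract's contribution is choosing how small k can be while keeping recognition accuracy high.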


Copyright©北京勤云科技发展有限公司  京ICP备09084417号