Similar Articles
20 similar articles retrieved (search time: 31 ms)
1.
2.
3.
In this paper we propose a method that exploits 3D motion-based features between frames of 3D facial geometry sequences for dynamic facial expression recognition. An expressive sequence is modelled to contain an onset followed by an apex and an offset. Feature selection methods are applied in order to extract features for each of the onset and offset segments of the expression. These features are then used to train GentleBoost classifiers and build a Hidden Markov Model in order to model the full temporal dynamics of the expression. The proposed fully automatic system was employed on the BU-4DFE database for distinguishing between the six universal expressions: Happy, Sad, Angry, Disgust, Surprise and Fear. Comparisons with a similar 2D system based on the motion extracted from facial intensity images were also performed. The attained results suggest that the use of the 3D information does indeed improve the recognition accuracy when compared to the 2D data in a fully automatic manner.
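The onset–apex–offset temporal model in item 3 can be illustrated with a left-to-right HMM. The sketch below is an illustration, not the authors' BU-4DFE system: the transition and emission probabilities, and the three-symbol observation alphabet standing in for quantised GentleBoost scores, are invented. It decodes the most likely onset/apex/offset segmentation of a sequence with the Viterbi algorithm:

```python
import numpy as np

def viterbi(obs, log_A, log_B, log_pi):
    """Most likely state path for a discrete-emission HMM (log domain)."""
    T, N = len(obs), log_pi.shape[0]
    delta = np.full((T, N), -np.inf)
    psi = np.zeros((T, N), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A      # (from-state, to-state)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta[-1].argmax())]                # backtrack best path
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

# States 0..2 = onset, apex, offset; left-to-right transitions only.
A = np.array([[0.8, 0.2, 0.0],
              [0.0, 0.8, 0.2],
              [0.0, 0.0, 1.0]])
# Emissions over 3 coarse motion symbols (hypothetical classifier scores).
B = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.2, 0.7],
              [0.6, 0.3, 0.1]])
pi = np.array([1.0, 0.0, 0.0])                      # sequences start at onset
with np.errstate(divide="ignore"):                  # log(0) -> -inf is intended
    path = viterbi([0, 0, 2, 2, 2, 0, 0],
                   np.log(A), np.log(B), np.log(pi))
print(path)  # → [0, 0, 1, 1, 1, 2, 2]
```

The decoded path segments the sequence into onset, apex, and offset, which is exactly what the abstract's per-segment feature extraction assumes.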

4.
Robust Facial Expression Recognition Based on Generative Adversarial Networks
Natural emotional communication is often accompanied by head rotation and body movement, which frequently cause large-area facial occlusion and the loss of expression information in face images. Most existing expression recognition methods rely on generic facial features and recognition algorithms, without accounting for the difference between expression and identity, and are therefore insufficiently robust to new users. This paper proposes a method for subject-independent expression recognition on partially occluded face images. The method comprises a face image generation network based on the Wasserstein generative adversarial net (WGAN), which completes occluded regions with context-consistent content, and an expression recognition network that extracts subject-independent expression features and infers the expression class by establishing an adversarial relationship between the expression recognition task and the identity recognition task. Experimental results show that the method achieves a subject-independent average recognition accuracy above 90% on a mixed dataset composed of CK+, Multi-PIE, and JAFFE, and 96% subject-independent accuracy on CK+, of which a 4.5% gain is attributable to the proposed adversarial expression feature extraction. In addition, within a 45° range of head rotation, the method also improves the recognition accuracy of non-frontal expressions.

5.
This paper presents an approach to recognize Facial Expressions of different intensities using 3D flow of facial points. 3D flow is the geometrical displacement (in 3D) of a facial point from its position in a neutral face to that in the expressive face. Experiments are performed on 3D face models from the BU-3DFE database. Four different intensities of expressions are used for analyzing the relevance of intensity of the expression for the task of FER. It was observed that high intensity expressions are easier to recognize and there is a need to develop algorithms for recognizing low intensity facial expressions. The proposed features outperform difference of facial distances and 2D optical flow. Performances of two classifiers, SVM and LDA are compared wherein SVM performs better. Feature selection did not prove useful.
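The 3D-flow feature of item 5 is simply the per-landmark displacement from the neutral scan to the expressive scan, stacked into one feature vector. A minimal sketch follows; the three landmarks and their coordinates are hypothetical, whereas the BU-3DFE models used in the paper provide dense point correspondences:

```python
import numpy as np

def flow_3d(neutral, expressive):
    """3D flow: per-landmark displacement from the neutral face to the
    expressive face, flattened into a single feature vector."""
    neutral = np.asarray(neutral, dtype=float)
    expressive = np.asarray(expressive, dtype=float)
    return (expressive - neutral).ravel()

# Toy example: 3 landmarks with (x, y, z) coordinates.
neutral = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
smile   = [[0.0, 0.0, 0.1], [1.2, 0.1, 0.0], [0.0, 1.0, 0.0]]
f = flow_3d(neutral, smile)
print(f.shape)  # (9,) — one 3-vector per landmark
```

The resulting vector would then be fed to a classifier such as the SVM the abstract reports as the stronger of the two.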

6.
This study presents a facial expression recognition system which separates the non-rigid facial expression from the rigid head rotation and estimates the 3D rigid head rotation angle in real time. The extracted trajectories of the feature points contain both rigid head motion components and non-rigid facial expression motion components. A 3D virtual face model is used to obtain accurate estimation of the head rotation angle such that the non-rigid motion components can be precisely separated to enhance the facial expression recognition performance. The separation performance of the proposed system is further improved through the use of a restoration mechanism designed to recover feature points lost during large pan rotations. Having separated the rigid and non-rigid motions, hidden Markov models (HMMs) are employed to recognize a prescribed set of facial expressions defined in terms of facial action coding system (FACS) action units (AUs).
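Item 6 separates rigid head rotation from non-rigid expression motion using a 3D virtual face model. As a generic stand-in for that separation step (an assumption, not the paper's exact estimator), the Kabsch algorithm below finds the best-fit rigid rotation and translation between two landmark sets; whatever remains after removing the rigid fit is the non-rigid (expression) residual:

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping point set P onto Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

rng = np.random.default_rng(0)
P = rng.normal(size=(10, 3))                  # hypothetical neutral landmarks
theta = np.deg2rad(20)                        # simulated rigid head pan
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
Q = P @ Rz.T + np.array([0.1, 0.0, 0.0])      # rotated + translated copy
R, t = kabsch(P, Q)
residual = Q - (P @ R.T + t)                  # non-rigid part (≈0 here, since
print(np.abs(residual).max())                 # the motion is purely rigid)
```

With a real expressive face the residual would carry the expression motion that the paper's HMMs classify.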

7.
In this paper we address the problem of 3D facial expression recognition. We propose a local geometric shape analysis of facial surfaces coupled with machine learning techniques for expression classification. Computing the length of the geodesic path between corresponding patches in a shape space, using a Riemannian framework, provides quantitative information about their similarity. These measures are then used as inputs to several classification methods. The experimental results demonstrate the effectiveness of the proposed approach. Using multiboosting and support vector machines (SVM) classifiers, we achieved 98.81% and 97.75% average recognition rates, respectively, for the six prototypical facial expressions on the BU-3DFE database. A comparative study using the same experimental setting shows that the suggested approach outperforms previous work.

8.
Bilinear Models for 3-D Face and Facial Expression Recognition
In this paper, we explore bilinear models for jointly addressing 3-D face and facial expression recognition. An elastically deformable model algorithm that establishes correspondence among a set of faces is proposed first and then bilinear models that decouple the identity and facial expression factors are constructed. Fitting these models to unknown faces enables us to perform face recognition invariant to facial expressions and facial expression recognition with unknown identity. A quantitative evaluation of the proposed technique is conducted on the publicly available BU-3DFE face database in comparison with our previous work on face recognition and other state-of-the-art algorithms for facial expression recognition. Experimental results demonstrate an overall 90.5% facial expression recognition rate and an 86% rank-1 face recognition rate.

9.
Facial expression recognition generally requires that faces be described in terms of a set of measurable features. The selection and quality of the features representing each face have a considerable bearing on the success of subsequent facial expression classification. Feature selection is the process of choosing a subset of features in order to increase classifier efficiency and allow higher classification accuracy. Many current dimensionality reduction techniques used for facial expression recognition involve linear transformations of the original pattern vectors to new vectors of lower dimensionality. In this paper, we present a methodology for the selection of features that uses the nondominated sorting genetic algorithm-II (NSGA-II), one of the latest genetic algorithms developed for solving multiobjective problems with high accuracy. In the proposed feature selection process, NSGA-II optimizes a vector of feature weights, which increases discrimination by means of class separation. The proposed methodology is evaluated using the 3D facial expression database BU-3DFE. Classification results validate the effectiveness and the flexibility of the proposed approach when compared with results reported in the literature using the same experimental settings.

10.
To address problems encountered in current facial expression recognition, this work studies point-cloud alignment of the 3D expression data in the BU-3DFE database and the construction of a bilinear model on the aligned data. The bilinear-model recognition algorithm is modified into a new classification algorithm that reduces the weight of identity features in the computation, minimizing as far as possible the influence of identity on the whole expression recognition process. The aim is to improve recognition results and ultimately achieve highly robust 3D expression recognition.

11.
12.
We introduce a novel approach to recognizing facial expressions over a large range of head poses. Like previous approaches, we map the features extracted from the input image to the corresponding features of the face with the same facial expression but seen in a frontal view. This allows us to collect all training data into a common referential and therefore benefit from more data to learn to recognize the expressions. However, by contrast with such previous work, our mapping depends on the pose of the input image: We first estimate the pose of the head in the input image, and then apply the mapping specifically learned for this pose. The features after mapping are therefore much more reliable for recognition purposes. In addition, we introduce a non-linear form for the mapping of the features, and we show that it is robust to occasional mistakes made by the pose estimation stage. We evaluate our approach with extensive experiments on two protocols of the BU3DFE and Multi-PIE datasets, and show that it outperforms the state-of-the-art on both datasets.

13.
Objective: Expression information in a 3D face is unevenly distributed around the facial features and cheeks, so describing expressions fully and weighting regions sensibly are important ways to improve recognition. To raise the accuracy of 3D facial expression recognition, an algorithm based on weighted local curl patterns is proposed. Method: First, to extract features with strong discriminative power for expressions, the curl vectors of the 3D face are encoded to obtain local curl patterns as expression features. Then the ICNP (interactive closest normal points) algorithm is combined with a minimum projection deviation algorithm: the former performs an irregular partition of the 3D face into 11 sub-regions that preserve the integrity of the facial features and muscles under expression changes, while the latter assigns a weight to the local curl pattern features of each region according to its contribution to recognition. Finally, the weighted features are fed into a classifier for expression recognition. Results: Evaluation on the BU-3DFE database shows that the proposed local curl pattern feature is more discriminative than other expression features. In recognition experiments on BU-3DFE, the algorithm achieves the highest average recognition rate among the compared 3D methods, 89.67%, with low confusion rates for the easily confused expressions Sad, Angry, and Disgust. Conclusion: Local curl patterns represent 3D facial expressions well; combining ICNP with minimum projection deviation yields effective region partitioning and accurate weight computation, markedly improving recognition. The results show high recognition rates for 3D expressions, including similar, easily confused ones.

14.
3D facial expression recognition has great potential in human computer interaction and intelligent robot systems. In this paper, we propose a two-step approach which combines both the feature selection and the feature fusion techniques to choose more comprehensive and discriminative features for 3D facial expression recognition. In the feature selection stage, we utilize a novel normalized cut-based filter (NCBF) algorithm to select the high relevant and low redundant geometrically localized features (GLF) and surface curvature features (SCF), respectively. Then in the feature fusion stage, PCA is performed on the selected GLF and SCF in order to avoid the curse-of-dimensionality challenge. Finally, the processed GLF and SCF are fused together to capture the most discriminative information in 3D expressional faces. Experiments are carried out on the BU-3DFE database, and the proposed approach outperforms conventional methods.
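The fusion stage of item 14, PCA per modality followed by concatenation, can be sketched as follows. The dimensions and the random "selected" GLF/SCF matrices are placeholders, and the NCBF selection step is omitted; only the PCA-then-concatenate pattern the abstract describes is shown:

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                     # center each feature
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                        # (n_samples, k) scores

rng = np.random.default_rng(1)
glf = rng.normal(size=(60, 40))   # toy "selected" geometric features
scf = rng.normal(size=(60, 30))   # toy "selected" curvature features

# Reduce each modality separately, then concatenate (feature-level fusion).
fused = np.hstack([pca_reduce(glf, 10), pca_reduce(scf, 10)])
print(fused.shape)  # (60, 20)
```

Reducing each modality before concatenation keeps the fused vector short, which is the curse-of-dimensionality safeguard the abstract mentions.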

15.
16.
刘洁  李毅  朱江平 《计算机应用》2021,41(3):839-844
To generate 3D virtual-human animation with rich expressions and fluent motion, a method is proposed that synchronously captures facial expression and body pose with two cameras. First, temporal synchronization of the two cameras is achieved with a Transmission Control Protocol (TCP) network timestamp method, and spatial synchronization with Zhang's calibration method. The two cameras then capture facial expression and body pose separately. For facial expression, 2D feature points are extracted from the image and regressed to Facial Action Coding System (FACS) action units in preparation for expression animation; taking standard 3D head coordinates as reference and using the camera intrinsics, head pose is estimated with the efficient perspective-n-point (EPnP) algorithm, and the expression information is then matched with the head-pose estimate. For body pose, the occlusion-robust pose-maps (ORPM) method computes the pose and outputs the position, rotation angle, and other data of each skeletal joint. Finally, the constructed 3D virtual-human model is driven in Unreal Engine 4 (UE4) to demonstrate the data-driven animation. Experimental results show that the method captures facial expression and body pose synchronously, reaches a frame rate of 20 fps in tests, and generates natural, realistic 3D animation in real time.

17.
Facial expression recognition is an active research topic in computer vision. To handle the multi-view variation and missing facial information of faces in natural conditions, a multi-view facial expression recognition method based on MVFE-LightNet (Multi-View Facial Expression Lightweight Network) is proposed. First, a convolutional network built on residual networks extracts expression features under different views, with depthwise separable convolutions introduced to reduce the number of parameters. Second, squeeze-and-excitation modules are embedded to learn feature weights, recalibrating features to raise the network's representational power, and spatial pyramid pooling is added to strengthen robustness. Finally, to further improve the recognition results, the AdamW (Adam with Weight decay) optimizer is adopted to accelerate convergence. Experiments on the RaFD, BU-3DFE, and Fer2013 expression databases show that the method attains high recognition rates while reducing network computation time.

18.
This study proposes a novel deep learning approach for the fusion of 2D and 3D modalities in in-the-wild facial expression recognition (FER). Different from other studies, we exploit 3D facial information in in-the-wild FER. In particular, in-the-wild 3D FER datasets are not widely available; therefore, 3D facial data are constructed from available 2D datasets thanks to recent advances in 3D face reconstruction. The 3D facial geometry features are then extracted by deep learning techniques to exploit mid-level details, which provide meaningful cues for recognition. In addition, to demonstrate the potential of 3D data for FER, 2D projected images of the 3D faces are taken as additional input to FER. These features are then jointly fused with 2D features obtained from the original input, and the fused features are classified by support vector machines (SVMs). The results show that the proposed approach achieves state-of-the-art recognition performance on the Real-World Affective Faces (RAF), Static Facial Expressions in the Wild (SFEW 2.0), and AffectNet datasets. The approach is also applied to a 3D FER dataset, i.e. BU-3DFE, to compare the effectiveness of reconstructed and available 3D face data for FER. This is the first time such a deep learning combination of 3D and 2D facial modalities has been presented in the context of in-the-wild FER.

19.
In 2017, artificial intelligence was formally elevated to a national strategy in China, and facial expression recognition, an important research direction within AI, has drawn wide attention from researchers at home and abroad. Traditional facial expression recognition techniques, however, cannot meet the demands of expression recognition in natural environments, so non-frontal expression recognition has become the key to making the technology practical. Existing non-frontal research faces many difficulties: head rotation not only distorts the recognized image but also occludes part of the face, severely hampering the extraction and recognition of expression features. In view of this, researchers have combined deep learning with non-frontal expression recognition, leveraging deep representations of non-frontal expression images to improve recognition capability. This survey describes the structure of deep neural networks in detail, classifies and compares the latest deep learning approaches, and offers an outlook on future research and challenges.

20.
This paper explores the use of multisensory information fusion techniques with dynamic Bayesian networks (DBN) for modeling and understanding the temporal behaviors of facial expressions in image sequences. Our facial feature detection and tracking based on active IR illumination provides reliable visual information under variable lighting and head motion. Our approach to facial expression recognition lies in the proposed dynamic and probabilistic framework based on combining DBN with Ekman's facial action coding system (FACS) for systematically modeling the dynamic and stochastic behaviors of spontaneous facial expressions. The framework not only provides a coherent and unified hierarchical probabilistic framework to represent spatial and temporal information related to facial expressions, but also allows us to actively select the most informative visual cues from the available information sources to minimize the ambiguity in recognition. The recognition of facial expressions is accomplished by fusing not only the current visual observations but also the previous visual evidence. Consequently, the recognition becomes more robust and accurate through explicitly modeling the temporal behavior of facial expressions. In this paper, we present the theoretical foundation underlying the proposed probabilistic and dynamic framework for facial expression modeling and understanding. Experimental results demonstrate that our approach can accurately and robustly recognize spontaneous facial expressions from an image sequence under different conditions.
