Similar Literature
20 similar documents found.
1.
Past work on automatic analysis of facial expressions has focused mostly on detecting prototypic expressions of basic emotions like happiness and anger. The method proposed here enables the detection of a much larger range of facial behavior by recognizing facial muscle actions [action units (AUs)] that compound expressions. AUs are agnostic, leaving the inference about conveyed intent to higher order decision making (e.g., emotion recognition). The proposed fully automatic method not only allows the recognition of 22 AUs but also explicitly models their temporal characteristics (i.e., sequences of temporal segments: neutral, onset, apex, and offset). To do so, it uses a facial point detector based on Gabor-feature-based boosted classifiers to automatically localize 20 facial fiducial points. These points are tracked through a sequence of images using a method called particle filtering with factorized likelihoods. To encode AUs and their temporal activation models based on the tracking data, it applies a combination of GentleBoost, support vector machines, and hidden Markov models. We attain an average AU recognition rate of 95.3% when tested on a benchmark set of deliberately displayed facial expressions and 72% when tested on spontaneous expressions.
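To make the temporal-segment modeling concrete, here is a minimal sketch of how per-frame AU scores (e.g., SVM outputs turned into log-likelihoods) can be decoded into a neutral-onset-apex-offset sequence with a Viterbi pass; the transition matrix, priors, and all numbers are illustrative assumptions, not the authors' trained HMM.

```python
import numpy as np

STATES = ["neutral", "onset", "apex", "offset"]

# Illustrative transitions: segments persist and advance in order
# (neutral -> onset -> apex -> offset -> neutral).
A = np.array([[0.90, 0.10, 0.00, 0.00],
              [0.00, 0.80, 0.20, 0.00],
              [0.00, 0.00, 0.85, 0.15],
              [0.25, 0.00, 0.00, 0.75]])

def viterbi(emission_log_probs, trans=A):
    """emission_log_probs: (T, 4) per-frame log-likelihood of each segment."""
    T, S = emission_log_probs.shape
    logA = np.log(trans + 1e-12)
    delta = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    delta[0] = np.log([0.97, 0.01, 0.01, 0.01]) + emission_log_probs[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA   # scores[i, j]: from i to j
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + emission_log_probs[t]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):               # backtrace the best path
        path.append(back[t, path[-1]])
    return [STATES[s] for s in reversed(path)]
```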

2.
3.
The Facial Action Coding System defines, from the perspective of facial anatomy, a set of facial action units (AUs) that precisely characterize changes in facial expression. Each AU describes the appearance change produced by the movement of a group of facial muscles, and combinations of AUs can express any facial expression. AU detection is a multi-label classification problem whose challenges include insufficient annotated data, interference from head pose, individual differences, and class imbalance across AUs. To summarize recent progress in AU detection, this paper systematically reviews representative methods since 2016, grouping them by input modality into static-image-based, dynamic-video-based, and other-modality-based AU detection methods, and discusses the weakly supervised AU detection methods introduced under different modalities to reduce data dependence. For static images, it further covers AU detection based on local feature learning, AU relationship modeling, multi-task learning, and weakly supervised learning. For dynamic videos, it mainly covers AU detection based on temporal features and on self-supervised AU feature learning. Finally, the paper compares and summarizes the strengths and weaknesses of the representative methods, and on that basis discusses the challenges and future directions of facial AU detection.
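As a toy illustration of the multi-label formulation mentioned above (each face can activate several AUs at once), the following scikit-learn sketch trains one binary classifier per AU; the feature vectors, label matrix, and model choice are stand-in assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))           # stand-in appearance features
Y = rng.random(size=(200, 12)) < 0.3     # 12 AUs, ~30% activation rate

# One independent binary classifier per AU (multi-label indicator matrix).
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
au_probs = clf.predict_proba(X[:1])      # per-AU activation probabilities
print(au_probs.round(2))
```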

4.
In this paper, we address the analysis and recognition of facial expressions in continuous videos. More precisely, we study the performance of classifiers that exploit head-pose-independent temporal facial action parameters. These are provided by an appearance-based 3D face tracker that simultaneously estimates the 3D head pose and facial actions. The use of such a tracker makes the recognition pose- and texture-independent. Two different schemes are studied. The first scheme adopts a dynamic time warping technique for recognizing expressions, where training data are given by temporal signatures associated with different universal facial expressions. The second scheme models the temporal signatures associated with facial actions as fixed-length feature vectors (observations) and uses machine learning algorithms to recognize the displayed expression. Experiments carried out on CMU video sequences and home-made video sequences quantified the performance of the different schemes. The results show that the use of dimension reduction techniques on the extracted time series can improve classification performance, and that the best recognition rate can exceed 90%.
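A minimal sketch of the first scheme's core ingredient, nearest-neighbor matching of temporal signatures under dynamic time warping; the distance definition and data layout are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two multivariate time series a (Ta, d)
    and b (Tb, d), via the standard dynamic-programming recursion."""
    Ta, Tb = len(a), len(b)
    D = np.full((Ta + 1, Tb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[Ta, Tb]

def classify(query, templates):
    """templates: {label: list of training signatures}; return the
    label of the nearest training signature under DTW."""
    return min(((dtw_distance(query, t), lbl)
                for lbl, series in templates.items() for t in series))[1]
```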

5.
A system that could automatically analyze facial actions in real time would have applications in a wide range of fields. However, developing such a system is challenging due to the richness, ambiguity, and dynamic nature of facial actions. Although a number of research groups attempt to recognize facial action units (AUs) by improving either facial feature extraction or AU classification techniques, these methods often recognize AUs or certain AU combinations individually and statically, ignoring the semantic relationships among AUs and their dynamics. Hence, these approaches cannot always recognize AUs reliably, robustly, and consistently. In this paper, we propose a novel approach that systematically accounts for the relationships among AUs and their temporal evolution. Specifically, we use a dynamic Bayesian network (DBN) to model the relationships among different AUs. The DBN provides a coherent and unified hierarchical probabilistic framework to represent probabilistic relationships among various AUs and to account for temporal changes in facial action development. Within our system, robust computer vision techniques are used to obtain AU measurements, which are then applied as evidence to the DBN for inferring various AUs. The experiments show that integrating AU relationships and AU dynamics with AU measurements yields significant improvement in AU recognition, especially for spontaneous facial expressions and in more realistic environments, including illumination variation, face pose variation, and occlusion.
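The full DBN is beyond a short example, but the following sketch shows the underlying fusion idea in its simplest form: a per-AU forward filter that combines frame-level measurement likelihoods with temporal transition probabilities. All probabilities are illustrative assumptions, and the AU-to-AU relational structure is omitted.

```python
import numpy as np

# P(AU_t | AU_{t-1}); rows/cols = (off, on). Values are illustrative.
P_STAY = np.array([[0.9, 0.1],
                   [0.2, 0.8]])

def forward_filter(meas_lik, prior=np.array([0.8, 0.2])):
    """meas_lik: (T, 2) likelihood of each frame's measurement given
    AU off/on (e.g., from a calibrated detector). Returns P(on) per frame."""
    belief = prior.copy()
    p_on = []
    for lik in meas_lik:
        belief = (belief @ P_STAY) * lik   # predict with dynamics, update
        belief /= belief.sum()             # renormalize posterior
        p_on.append(belief[1])
    return np.array(p_on)
```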

6.
The relationship between nonverbal behavior and severity of depression was investigated by following depressed participants over the course of treatment and video recording a series of clinical interviews. Facial expressions and head pose were analyzed from video using manual and automatic systems. Both systems were highly consistent for FACS action units (AUs) and showed similar effects for change over time in depression severity. When symptom severity was high, participants made fewer affiliative facial expressions (AUs 12 and 15) and more non-affiliative facial expressions (AU 14). Participants also exhibited diminished head motion (i.e., amplitude and velocity) when symptom severity was high. These results are consistent with the Social Withdrawal hypothesis: that depressed individuals use nonverbal behavior to maintain or increase interpersonal distance. As individuals recover, they send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and revealed the same pattern of findings suggests that automatic facial expression analysis may be ready to relieve the burden of manual coding in behavioral and clinical science.
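As a rough illustration of the head-motion measures discussed above, the sketch below computes amplitude and mean angular velocity from a tracked head-pose time series; the exact definitions and the frame rate are assumptions, not the study's protocol.

```python
import numpy as np

def head_motion_stats(pose, fps=30.0):
    """pose: (T, 3) per-frame head-pose angles (pitch, yaw, roll) in
    degrees from any tracker. Returns per-axis amplitude (range) and
    mean angular velocity in degrees/second."""
    amplitude = pose.max(axis=0) - pose.min(axis=0)
    velocity = np.abs(np.diff(pose, axis=0)) * fps
    return amplitude, velocity.mean(axis=0)
```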

7.
Facial expression provides a crucial behavioral measure for studies of human emotion, cognitive processes, and social interaction. In this paper, we focus on recognizing facial action units (AUs), which represent subtle changes of facial expression. We adopt ICA (independent component analysis) as the feature extraction and representation method and an SVM (support vector machine) as the pattern classifier. Compared with three existing systems (those of Tian, Donato, and Bazzo), our proposed system achieves the highest recognition rates. Furthermore, the proposed system is fast, taking only 1.8 ms to classify a test image.
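A minimal scikit-learn sketch of the ICA-plus-SVM pipeline named in the abstract, run on random stand-in data; the component count, kernel, and data shapes are assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 1024))   # flattened face patches (stand-in)
y_train = rng.integers(0, 2, size=300)   # AU present / absent

# Learn independent components as the feature representation,
# then train an SVM on the transformed features.
ica = FastICA(n_components=50, max_iter=500, random_state=0).fit(X_train)
clf = SVC(kernel="rbf").fit(ica.transform(X_train), y_train)

x_test = rng.normal(size=(1, 1024))
print(clf.predict(ica.transform(x_test)))
```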

8.
Facial expression analysis is a technique by which computers attempt to understand human emotion by analyzing facial information; it has become a hot topic in computer vision. Its challenges include difficult data annotation, poor inter-annotator label consistency, large head poses in natural environments, and occlusion. To advance facial expression analysis, this paper surveys the relevant tasks, progress, challenges, and future trends. First, it briefly describes several common tasks, basic algorithmic frameworks, and databases. Second, it reviews facial expression recognition methods, including traditional hand-crafted feature design and deep learning methods. It then reflects on the open problems and challenges, and finally discusses future directions. The survey draws the following conclusions: 1) given the small scale of reliably labeled expression databases, transfer learning from face recognition models and semi-supervised learning with unlabeled data are two important strategies; 2) owing to ambiguous expressions, low-quality images, and annotator subjectivity, labels for in-the-wild expression data carry inherent uncertainty, and suppressing these factors helps deep networks learn genuine expression features; 3) for occlusion and large head pose, fusing local patches is an effective strategy; another worth considering is to first learn a model robust to occlusion and pose on a large-scale face recognition database and then transfer it to expression recognition; 4) because deep-learning-based expression recognition is sensitive to many hyperparameters, current methods are not easily comparable, and different methods should be evaluated against simple common baselines. Although expression analysis in unconstrained natural environments has developed rapidly, the above problems and challenges remain open. Facial expression analysis is a practical task; future work should consider not only accuracy but also runtime and memory cost, and could infer expression categories from high-accuracy facial action unit detection in unconstrained environments.

9.
Facial expression is central to human experience. Its efficient and valid measurement is a challenge that automated facial image analysis seeks to address. Most publicly available databases are limited to 2D static images or video of posed facial behavior. Because posed and un-posed (aka "spontaneous") facial expressions differ along several dimensions including complexity and timing, well-annotated video of un-posed facial behavior is needed. Moreover, because the face is a three-dimensional deformable object, 2D video may be insufficient, and therefore 3D video archives are required. We present a newly developed 3D video database of spontaneous facial expressions in a diverse group of young adults. Well-validated emotion inductions were used to elicit expressions of emotion and paralinguistic communication. Frame-level ground truth for facial actions was obtained using the Facial Action Coding System. Facial features were tracked in both 2D and 3D domains. To the best of our knowledge, this new database is the first of its kind for the public. The work promotes the exploration of 3D spatiotemporal features in subtle facial expression, better understanding of the relation between pose and motion dynamics in facial action units, and deeper understanding of naturally occurring facial action.

10.
As facial expression recognition moves from controlled laboratory settings to challenging real-world environments, and with the rapid development of deep learning, deep neural networks able to learn discriminative features are increasingly applied to automatic facial expression recognition. Current deep facial expression recognition systems aim to solve two problems: 1) overfitting caused by the lack of sufficient training data, and 2) interference from expression-irrelevant factors in real-world settings, such as illumination, head pose, and identity. This paper first reviews the past decade of deep facial expression recognition research and the development of related expression databases. It then divides current deep-learning-based methods into two categories, static and dynamic facial expression recognition, and reviews each in turn. For state-of-the-art deep expression recognition algorithms, it compares their performance on common expression databases and analyzes the advantages and disadvantages of each class of algorithms in detail. Finally, it summarizes future research directions, opportunities, and challenges: since expressions are essentially dynamic activities of facial muscle movement, deep networks based on dynamic sequences often achieve better recognition than static networks. Moreover, combining other expression models such as facial action units, and other multimedia modalities such as audio and human physiological signals, can extend expression recognition to scenarios of greater practical value.

11.

Facial expressions are essential in community-based interactions and in the analysis of emotional behaviour. The automatic identification of faces is a motivating topic for researchers because of its numerous applications, such as health care, video conferencing, and cognitive science. In computer vision, the automatic detection of facial expressions from facial images remains a challenging issue. An innovative methodology for the recognition of facial expressions is introduced in the presented work, proceeding in the following stages. First, an input image is taken from the facial expression database and pre-processed with high-frequency emphasis (HFE) filtering and modified histogram equalization (MHE). After image enhancement, the Viola-Jones (VJ) framework is utilized to detect the face in the images, and the face region is cropped by finding the face coordinates. Afterwards, several effective features are extracted: shape information with an enhanced histogram of gradients (EHOG feature); intensity variation with mean, standard deviation, and skewness; facial movement variation with facial action coding (FAC); texture with a weighted patch-based local binary pattern (WLBP); and spatial information with an entropy-based spatial feature. Subsequently, the dimensionality of the features is reduced by retaining the most relevant features using a Residual Network (ResNet). Finally, an extended wavelet deep convolutional neural network (EWDCNN) classifier uses the extracted features and detects the facial expression as one of the sad, happy, anger, fear, disgust, surprise, and neutral classes. The implementation platform used in the work is PYTHON. The presented technique is tested on three datasets: JAFFE, CK+, and Oulu-CASIA.
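A hedged sketch of the detection-and-cropping stage using OpenCV's stock Viola-Jones cascade, with plain histogram equalization standing in for the paper's MHE step; the later stages (EHOG, WLBP, ResNet, EWDCNN) are the paper's own components and are not reproduced here.

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_and_crop(bgr_image):
    """Detect the largest face with Viola-Jones and return the crop."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)          # stand-in for the MHE step
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return bgr_image[y:y + h, x:x + w]
```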


12.
Computing environments are moving towards human-centered designs instead of computer-centered designs, and humans tend to communicate a wealth of information through affective states or expressions. Traditional Human Computer Interaction (HCI) based systems ignore the bulk of the information communicated through those affective states and cater only for the user's intentional input. Generally, for evaluating and benchmarking different facial expression analysis algorithms, standardized databases are needed to enable meaningful comparison. In the absence of comparative tests on such standardized databases, it is difficult to find the relative strengths and weaknesses of different facial expression recognition algorithms. In this article we present a novel video database for Children's Spontaneous facial Expressions (LIRIS-CSE). The proposed video database contains six basic spontaneous facial expressions shown by 12 ethnically diverse children between the ages of 6 and 12 years, with a mean age of 7.3 years. To the best of our knowledge, this database is the first of its kind, as it records and shows spontaneous facial expressions of children. Previously there were few databases of children's expressions, and all of them show posed or exaggerated expressions, which differ from spontaneous or natural expressions. Thus, this database will be a milestone for human behavior researchers and an excellent resource for the vision community for benchmarking and comparing results. In this article, we have also proposed a framework for automatic expression recognition based on a Convolutional Neural Network (CNN) architecture with a transfer learning approach. The proposed architecture achieved an average classification accuracy of 75% on our proposed database, LIRIS-CSE.
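A minimal transfer-learning sketch in the spirit of the CNN approach described above; the backbone, input size, classification head, and training setup are assumptions, not the authors' exact architecture.

```python
import tensorflow as tf

# Frozen ImageNet backbone plus a small head for the six basic expressions.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(6, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```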

13.
An approach to the analysis of dynamic facial images for the purposes of estimating and resynthesizing dynamic facial expressions is presented. The approach exploits a sophisticated generative model of the human face originally developed for realistic facial animation. The face model, which may be simulated and rendered at interactive rates on a graphics workstation, incorporates a physics-based synthetic facial tissue and a set of anatomically motivated facial muscle actuators. The estimation of dynamic facial muscle contractions from video sequences of expressive human faces is considered. An estimation technique that uses deformable contour models (snakes) to track the nonrigid motions of facial features in video images is developed. The technique estimates muscle actuator controls with sufficient accuracy to permit the face model to resynthesize transient expressions.
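A small sketch of feature tracking with a deformable contour (snake), the device the abstract names for tracking nonrigid facial features, using scikit-image; the smoothing and snake parameters are illustrative assumptions.

```python
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def track_contour(frame, prev_snake):
    """One tracking step: relax a snake on the current frame, starting
    from the previous frame's contour so it follows the facial feature.
    frame: grayscale image; prev_snake: (N, 2) contour points."""
    smoothed = gaussian(frame, sigma=3)    # smooth to widen the basin
    return active_contour(smoothed, prev_snake,
                          alpha=0.015, beta=10.0, gamma=0.001)
```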

14.
We develop a new 3D hierarchical model of the human face. The model incorporates a physically-based approximation to facial tissue and a set of anatomically-motivated facial muscle actuators. Despite its sophistication, the model is efficient enough to produce facial animation at interactive rates on a high-end graphics workstation. A second contribution of this paper is a technique for estimating muscle contractions from video sequences of human faces performing expressive articulations. These estimates may be input as dynamic control parameters to the face model in order to produce realistic animation. Using an example, we demonstrate that our technique yields sufficiently accurate muscle contraction estimates for the model to reconstruct expressions from dynamic images of faces.

15.
Automatic detection of facial expressions attracts great attention due to its potential applications in human-computer interaction as well as in human facial behavior research. Most of the research has so far been performed in 2D. However, as the limitations of 2D data have become understood, expression analysis research is being pursued in the 3D face modality. 3D can capture true facial surface data and is less disturbed by illumination and head pose. At this juncture we have conducted a comparative evaluation of the 3D and 2D face modalities. We have investigated extensively 25 action units (AUs) defined in the Facial Action Coding System. For fairness we map facial surface geometry into 2D and apply totally data-driven techniques in order to avoid biases due to design. We have demonstrated that overall 3D data performs better, especially for lower-face AUs, and that there is room for improvement by fusion of the 2D and 3D modalities. Our study involves the determination of the best feature set from the 2D and 3D modalities and of the most effective classifier, both from several alternatives. Our detailed analysis puts into evidence the merits and some shortcomings of the 3D modality over 2D in classifying facial expressions from single images.
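The study's protocol of selecting the best feature set and classifier from several alternatives can be expressed as a standard cross-validated grid search, sketched below with scikit-learn; the candidate grid is illustrative, not the study's actual alternatives.

```python
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Jointly search over feature-subset size and classifier settings.
pipe = Pipeline([("select", SelectKBest(f_classif)), ("clf", SVC())])
grid = {"select__k": [50, 100, 200],
        "clf__kernel": ["linear", "rbf"],
        "clf__C": [0.1, 1, 10]}
search = GridSearchCV(pipe, grid, cv=5)
# search.fit(X_features, y_au_labels)  # 2D, 3D, or fused feature sets
```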

16.
Objective: Facial expression recognition is one of the core problems in computer vision. On the one hand, an expression corresponds to a continuous dynamic process of facial muscle movement; on the other hand, the peak (apex) frame within that process usually contains the complete information needed to recognize the expression. Most existing facial expression recognition algorithms rely either on expression video sequences or on a single apex image. We therefore propose a deep neural network that fuses temporal and spatial features to analyze and understand the expression information in video sequences and improve recognition performance. Method: The network contains two feature extraction modules that learn, respectively, static spatial expression features from a single apex image and dynamic temporal expression features from the video sequence. First, a triplet-based deep metric fusion technique is proposed: by using different margins in the triplet loss, multiple expression representations are learned from a single apex image and combined into a robust and more discriminative spatial feature. Second, to exploit prior knowledge of key facial components and accurately extract the temporal motion of expressions, a convolutional neural network based on facial landmark trajectories is proposed, which learns dynamic temporal features by analyzing the trajectories of facial landmarks in the video. Finally, a fine-tuning fusion strategy is proposed that achieves the best fusion of temporal and spatial features. Results: On three commonly used video-based facial expression datasets, CK+ (the e…
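A simplified sketch of the multi-margin triplet idea described above, in PyTorch: the same embedding is trained under several margins and the per-margin losses are combined; the margin values and the plain-sum combination are assumptions for illustration.

```python
import torch.nn as nn

# One triplet loss per margin; smaller margins shape fine-grained
# structure, larger ones enforce coarser class separation.
margins = [0.2, 0.5, 1.0]
losses = [nn.TripletMarginLoss(margin=m) for m in margins]

def multi_margin_triplet_loss(anchor, positive, negative):
    """anchor/positive/negative: (B, D) embeddings of apex frames."""
    return sum(loss_fn(anchor, positive, negative) for loss_fn in losses)
```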

17.
Automatic recognition of facial gestures (i.e., facial muscle activity) is rapidly becoming an area of intense interest in the research field of machine vision. In this paper, we present an automated system that we developed to recognize facial gestures in static, frontal- and/or profile-view color face images. A multidetector approach to facial feature localization is utilized to spatially sample the profile contour and the contours of the facial components such as the eyes and the mouth. From the extracted contours of the facial features, we extract ten profile-contour fiducial points and 19 fiducial points of the contours of the facial components. Based on these, 32 individual facial muscle actions (AUs) occurring alone or in combination are recognized using rule-based reasoning. With each scored AU, the utilized algorithm associates a factor denoting the certainty with which the pertinent AU has been scored. A recognition rate of 86% is achieved.
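A toy example of the rule-based reasoning with certainty factors described above, scoring a single hypothetical AU from fiducial-point displacements; the point names, threshold, and certainty mapping are all invented for illustration.

```python
import numpy as np

def score_au12(points):
    """Toy rule: AU12 (lip corner puller) fires when the mouth corners
    rise relative to a neutral-frame baseline; image y grows downward.
    points: dict of (x, y) arrays with keys 'mouth_left', 'mouth_right',
    'neutral_left', 'neutral_right'."""
    lift_l = points["neutral_left"][1] - points["mouth_left"][1]
    lift_r = points["neutral_right"][1] - points["mouth_right"][1]
    lift = (lift_l + lift_r) / 2.0
    fired = lift > 2.0                                  # pixels of lift
    certainty = float(np.clip(lift / 6.0, 0.0, 1.0)) if fired else 0.0
    return fired, certainty                             # AU + certainty factor
```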

18.
Over the last decade, automatic facial expression analysis has become an active research area that finds potential applications in areas such as more engaging human-computer interfaces, talking heads, image retrieval and human emotion analysis. Facial expressions reflect not only emotions, but other mental activities, social interaction and physiological signals. In this survey, we introduce the most prominent automatic facial expression analysis methods and systems presented in the literature. Facial motion and deformation extraction approaches as well as classification methods are discussed with respect to issues such as face normalization, facial expression dynamics and facial expression intensity, but also with regard to their robustness towards environmental changes.

19.
20.
To reduce the influence of appearance, pose, glasses, and inconsistent expression definitions on facial expression recognition, a collaborative expression recognition algorithm independent of facial appearance is proposed. First, an automatic face detection algorithm locates and aligns the face region in each video frame, and the peak-expression face is selected from the face video sequence. Then, the peak face and all faces within an expression class are used to generate intra-class difference face information, and a collaborative expression representation is obtained by computing the difference between the peak-expression face and the intra-class difference faces. Finally, a sparse-representation-based classifier over the expression representations determines the label of each facial expression. Simulation experiments on databases of European/American and Asian faces show that the algorithm achieves good expression recognition accuracy and also recognizes well faces with different appearances and subjects wearing glasses.
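A hedged sketch of the final sparse-representation classification step: the query representation is coded against per-class dictionaries with an l1 penalty, and the label with the smallest class-wise reconstruction residual wins; the solver choice, alpha, and data layout are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_label(y, dictionaries):
    """y: (d,) expression representation; dictionaries: {label: (d, n_i)}
    matrices of training representations per class."""
    labels = list(dictionaries)
    D = np.hstack([dictionaries[l] for l in labels])         # (d, n)
    coef = Lasso(alpha=0.01, max_iter=5000).fit(D, y).coef_  # sparse code
    residuals, start = {}, 0
    for l in labels:
        n = dictionaries[l].shape[1]
        part = np.zeros_like(coef)
        part[start:start + n] = coef[start:start + n]        # class atoms only
        residuals[l] = np.linalg.norm(y - D @ part)
        start += n
    return min(residuals, key=residuals.get)
```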
