Similar Articles
 20 similar articles found (search time: 31 ms)
1.
Automatic emotion recognition based on facial cues, such as facial action units (AUs), has received considerable attention in the last decade due to its wide variety of applications. Current computer-based automated two-phase facial emotion recognition procedures first detect AUs from input images and then infer target emotions from the detected AUs. However, more robust AU detection and AU-to-emotion mapping methods are required to deal with the error accumulation problem inherent in the multiphase scheme. Motivated by our key observation that a single AU detector does not perform equally well for all AUs, we propose a novel two-phase facial emotion recognition framework, where the presence of AUs is detected by group decisions of multiple AU detectors and a target emotion is inferred from the combined AU detection decisions. Our emotion recognition framework consists of three major components: multiple AU detection, AU detection fusion, and AU-to-emotion mapping. The experimental results on two real-world face databases demonstrate an improved performance over the previous two-phase method using a single AU detector, in terms of both AU detection accuracy and correct emotion recognition rate.
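As a rough illustration of the group-decision idea, the sketch below fuses several AU detectors by majority vote and then maps the fused AUs to an emotion with a small rule table. The detector outputs, AU subsets, and AU-to-emotion rules are illustrative assumptions, not the paper's actual detectors or mapping.

```python
# Hedged sketch: majority-vote fusion of multiple AU detectors followed by a
# rule-based AU-to-emotion mapping. All AU sets and rules below are toy values.
from collections import Counter

def fuse_au_detections(detector_outputs):
    """Each detector reports the set of AUs it believes are present.
    An AU is accepted when a strict majority of detectors agree."""
    votes = Counter(au for aus in detector_outputs for au in set(aus))
    quorum = len(detector_outputs) / 2
    return {au for au, n in votes.items() if n > quorum}

# Toy AU-to-emotion rules (a small subset of common FACS-style associations).
EMOTION_RULES = {
    "happiness": {"AU6", "AU12"},
    "surprise":  {"AU1", "AU2", "AU5", "AU26"},
    "sadness":   {"AU1", "AU4", "AU15"},
}

def infer_emotion(detected_aus):
    """Pick the emotion whose required AUs are best covered."""
    best, best_score = "neutral", 0.0
    for emotion, required in EMOTION_RULES.items():
        score = len(required & detected_aus) / len(required)
        if score > best_score:
            best, best_score = emotion, score
    return best

outputs = [{"AU6", "AU12"}, {"AU12"}, {"AU6", "AU12", "AU4"}]
fused = fuse_au_detections(outputs)   # AU4 has only one vote and is dropped
emotion = infer_emotion(fused)
```

The group decision suppresses spurious detections (AU4 above) that any single detector would have passed through to the emotion-mapping stage.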

2.
3.
People instinctively recognize facial expression as a key to nonverbal communication, which has been confirmed by many different research projects. A change in the intensity or magnitude of even one specific facial expression can cause different interpretations. A systematic method for generating facial expression syntheses that mimics realistic facial expressions and intensities is strongly needed in various applications. Although manually produced animation is typically of high quality, the process is slow and costly, and therefore often impractical for low-polygonal applications. In this paper, we present a simple and efficient emotional-intensity-based expression cloning process for low-polygonal applications, by generating a customized face as well as by cloning facial expressions. We define intensity mappings to measure expression intensity. Once a source expression is determined by a set of suitable parameter values in a customized 3D face and its embedded muscles, expressions for any target face(s) can be easily cloned by using the same set of parameters. Through experimental study, including facial expression simulation and cloning with intensity mapping, our research reconfirms traditional psychological findings. Additionally, we discuss the method's overall usability and how it allows us to automatically adjust a customized face with embedded facial muscles while mimicking the user's facial configuration, expression, and intensity.
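The cloning step described above can be sketched as a parameter-driven deformation: the same muscle-activation parameters, scaled by an intensity value, are applied to each face's own displacement basis. The linear blendshape form, vertex counts, and muscle names below are illustrative simplifications, not the paper's muscle model.

```python
# Hedged sketch: intensity-scaled, parameter-driven expression cloning.
# Each muscle contributes a per-vertex displacement; the same parameter set
# drives both the source and the target face through their own bases.

def apply_expression(neutral_verts, muscle_deltas, params, intensity):
    """neutral_verts: list of (x, y, z); muscle_deltas: {name: per-vertex (dx, dy, dz)};
    params: {name: activation}; intensity in [0, 1] scales the whole expression."""
    out = [list(v) for v in neutral_verts]
    for name, activation in params.items():
        w = intensity * activation
        for i, (dx, dy, dz) in enumerate(muscle_deltas[name]):
            out[i][0] += w * dx
            out[i][1] += w * dy
            out[i][2] += w * dz
    return [tuple(v) for v in out]

# The same parameter set is reused on two different faces (toy one-vertex meshes).
params = {"zygomatic": 1.0}
src_neutral = [(0.0, 0.0, 0.0)]
src_deltas = {"zygomatic": [(1.0, 2.0, 0.0)]}
tgt_neutral = [(5.0, 5.0, 5.0)]
tgt_deltas = {"zygomatic": [(2.0, 4.0, 0.0)]}   # the target face's own basis

half_smile_src = apply_expression(src_neutral, src_deltas, params, 0.5)
half_smile_tgt = apply_expression(tgt_neutral, tgt_deltas, params, 0.5)
```

Because only the parameter set is transferred, the target face expresses the same emotion at the same intensity while keeping its own geometry.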

4.
王春峰, 李军 《光电子.激光》(Journal of Optoelectronics · Laser), 2020, 31(11): 1197-1203
Facial emotion recognition has become an important part of visible-light face recognition applications and is one of the most important areas of optical pattern recognition research. To further enable automatic recognition of facial emotion under visible light, this paper proposes an automatic facial emotion recognition algorithm combining Viola-Jones, adaptive histogram equalization (AHE), the discrete wavelet transform (DWT), and a deep convolutional neural network (CNN). The algorithm uses Viola-Jones to locate the face and facial features, AHE to enhance the facial image, and the DWT to extract facial features; finally, the extracted features are fed directly into the deep CNN for training, achieving automatic facial emotion recognition. Simulation experiments were conducted on the CK+ database and on visible-light face images, achieving an average accuracy of 97% on the CK+ dataset and 95% on the visible-light face images. The results show that, across different facial features and emotions, the algorithm accurately locates visible-light facial features, equalizes visible-light image information, and automatically recognizes emotion categories; it can also recognize multiple facial emotions simultaneously within the same frame, with a high recognition rate and good robustness.
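One concrete way to realize the DWT feature-extraction stage is a single-level 2D Haar transform; the paper does not specify the wavelet, so Haar is assumed here purely for illustration.

```python
# Hedged sketch: single-level 2D Haar DWT, a common concrete choice for the
# DWT feature-extraction step (the wavelet is an assumption, not the paper's).

def haar_dwt_1d(row):
    """One-level Haar transform: pairwise averages (approximation)
    followed by pairwise half-differences (detail)."""
    approx = [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]
    detail = [(row[i] - row[i + 1]) / 2 for i in range(0, len(row), 2)]
    return approx + detail

def haar_dwt_2d(img):
    """Apply the 1D transform to every row, then to every column."""
    rows = [haar_dwt_1d(r) for r in img]
    cols = list(zip(*rows))
    out_cols = [haar_dwt_1d(list(c)) for c in cols]
    return [list(r) for r in zip(*out_cols)]

img = [[4.0, 2.0], [2.0, 0.0]]
coeffs = haar_dwt_2d(img)   # coeffs[0][0] is the low-low (LL) band
```

The LL coefficient equals the image mean over each 2x2 block, while the remaining bands carry the horizontal, vertical, and diagonal detail that feeds the CNN.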

5.
Most emotion recognition research focuses on information from a single person. However, people's emotions affect each other; for example, when a teacher is angry, students' nervousness increases. Yet the facial expression information of even a single person is already substantial, which means that group emotion recognition encounters a severe traffic bottleneck. A vast amount of data collected by end-devices must therefore be uploaded to an emotion cloud for big-data analysis. Because different emotions may require different analytical methods, connecting different emotion clouds is a viable alternative to extending a single emotion cloud's hardware in the face of diverse big data. In this paper, we build a software-defined networking (SDN) multi-emotion-cloud platform that connects different emotion clouds. By exploiting the separation of the control plane and the data plane, the routing path can be changed in software. This means that the individual condition of each student can be handled by a dedicated system via a service function (SF), and load balancing across emotion clouds is achieved by optimizing the service function chain (SFC). In addition, we propose an SFC-based dynamic load-balancing mechanism that eliminates a large number of SFC creation processes. Simulation results show that the proposed mechanism effectively allocates resources to different emotion clouds to achieve real-time emotion recognition. This is the first strategy to use SFCs to balance emotion data so that teachers can adjust their teaching policy in a timely manner in response to students' emotions.
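The load-balancing idea can be sketched as a greedy least-loaded routing rule: each incoming emotion-analysis flow is sent to the least-loaded cloud offering the required service function. The cloud names, capacities, and the greedy policy below are illustrative assumptions; the paper's SFC optimization is more involved than this rule.

```python
# Hedged sketch: route each flow to the eligible emotion cloud with the
# lowest relative load. Cloud inventory and capacities are toy values.

def pick_cloud(clouds, required_sf):
    """clouds: {name: {"sfs": set of service functions, "load": int, "cap": int}}.
    Return the eligible cloud with the lowest relative load, or None if all
    clouds lacking the SF or already at capacity."""
    eligible = [(info["load"] / info["cap"], name)
                for name, info in clouds.items()
                if required_sf in info["sfs"] and info["load"] < info["cap"]]
    return min(eligible)[1] if eligible else None

clouds = {
    "cloud_a": {"sfs": {"face", "voice"}, "load": 8, "cap": 10},
    "cloud_b": {"sfs": {"face"},          "load": 2, "cap": 10},
}
target = pick_cloud(clouds, "face")   # cloud_b has the lower relative load
clouds[target]["load"] += 1           # assign the flow
```

A real SFC controller would also weigh chain length and link latency; the point of the sketch is only that routing decisions move flows away from saturated clouds.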

6.
A Micro-Expression (MiE) is an involuntary facial reaction that reflects the real emotions and thoughts of a human being. It is very difficult for a normal human observer to detect a MiE, since it is a very fast, localized facial reaction of low intensity. Consequently, building an automatic system for MiE recognition is a challenging task for researchers. Previous work on MiE recognition has attempted to use the whole face, yet a MiE appears in a small region of the face, which makes the extraction of relevant features hard. In this paper, we propose a novel deep learning approach that leverages the locality of MiEs by learning spatio-temporal features from local facial regions using a composite architecture of a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM). The proposed solution succeeds in extracting relevant local features for MiE recognition. Experimental results on benchmark datasets demonstrate that our solution achieves the highest recognition accuracy with respect to state-of-the-art methods.

7.
Human facial expressions are the most direct depiction of changes in psychological state, and facial expressions vary greatly across individuals. Existing expression recognition methods all rely on statistical facial features to distinguish expressions and lack deep mining of fine facial detail. According to psychologists' definitions of facial action coding, local facial detail determines the meaning of an expression. This paper therefore proposes a facial expression recognition method based on multi-scale detail enhancement. Since facial expressions are strongly affected by image detail, a Gaussian pyramid is used to extract detail information and enhance the image, thereby strengthening the expression information. To address the local nature of facial expressions, a hierarchical local gradient feature computation is proposed to describe the local shape around facial landmarks. Finally, a support vector machine (SVM) classifies the expressions. Experiments on the CK+ expression database show that the method not only confirms the important role of image detail in facial expression recognition but also achieves very good results with small-scale training data, reaching an average expression recognition rate of 98.19%.
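The detail-enhancement step can be sketched as a high-boost filter: subtract a low-pass version of the signal to isolate the detail layer, then add the detail back amplified. A 1D box blur stands in for the Gaussian pyramid here, purely as an illustrative simplification.

```python
# Hedged sketch: detail layer = original - low-pass; enhanced = original + alpha * detail.
# A box blur substitutes for the Gaussian low-pass to keep the sketch dependency-free.

def box_blur(signal, radius=1):
    """Simple moving-average blur standing in for the Gaussian low-pass (edges clamped)."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def detail_enhance(signal, alpha=1.0):
    """Amplify the high-frequency residual that the blur removed."""
    blurred = box_blur(signal)
    return [s + alpha * (s - b) for s, b in zip(signal, blurred)]

signal = [0.0, 0.0, 3.0, 0.0, 0.0]
enhanced = detail_enhance(signal, alpha=1.0)   # the peak at index 2 is boosted
```

On an image this sharpens exactly the fine local structure (wrinkles, lid creases, mouth-corner edges) that the abstract argues carries the expression's meaning.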

8.
Facial expressions carry most of the information on the human face that is essential for human–computer interaction. Developing robust algorithms for automatic recognition of facial expressions with high recognition rates has been a challenge for the last decade. In this paper, we propose a novel feature selection procedure that recognizes the basic facial expressions with high recognition rates by utilizing three-dimensional (3D) geometric facial feature positions. The system classifies expressions into one of the six basic emotional categories: anger, disgust, fear, happiness, sadness, and surprise. The paper's contribution is to select features for each expression independently, achieving high recognition rates with the geometric facial features selected per expression. The novel feature selection procedure is entropy-based and is employed independently for each of the six basic expressions. The system's performance is evaluated on the 3D facial expression database BU-3DFE. Experimental results show that the proposed method outperforms the latest methods reported in the literature.
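A minimal sketch of entropy-based feature selection, assuming discretized feature values: rank features by information gain with respect to the class labels and keep the top-k. The toy features and labels below are invented for illustration; the paper applies its criterion per expression on geometric features.

```python
# Hedged sketch: information-gain ranking of discrete features.
import math

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def information_gain(feature_values, labels):
    """H(Y) minus the weighted entropy of each value-split of the feature."""
    n = len(labels)
    splits = {}
    for x, y in zip(feature_values, labels):
        splits.setdefault(x, []).append(y)
    cond = sum(len(ys) / n * entropy(ys) for ys in splits.values())
    return entropy(labels) - cond

def select_features(feature_matrix, labels, k):
    """feature_matrix: {name: list of discrete values}. Keep the top-k by gain."""
    ranked = sorted(feature_matrix,
                    key=lambda f: information_gain(feature_matrix[f], labels),
                    reverse=True)
    return ranked[:k]

labels = ["anger", "anger", "happy", "happy"]
features = {
    "mouth_open": [0, 0, 1, 1],   # perfectly separates the two classes
    "brow_raise": [0, 1, 0, 1],   # carries no class information here
}
best = select_features(features, labels, 1)
```

Running the selection once per expression, as the abstract describes, lets each classifier keep only the features that are discriminative for that particular expression.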

9.
This paper presents a hierarchical animation method for transferring facial expressions extracted from a performance video to different facial sketches. Without any expression example obtained from the target faces, our approach can transfer expressions to facial sketches by motion retargeting. In practical applications, however, image noise in each frame reduces the accuracy of feature extraction from source faces, and the shape difference between source and target faces influences the animation quality when representing expressions. To address these difficulties, we propose a robust neighbor-expression transfer (NET) model, which models the spatial relations among sparse facial features. By learning expression behaviors from neighboring face examples, the NET model can reconstruct facial expressions from noisy signals. Based on the NET model, we present a hierarchical method to animate facial sketches: the motion vectors on the source face are adjusted from coarse to fine on the target face, and the animation results are generated to replicate the source expressions. Experimental results demonstrate that the proposed method can effectively and robustly transfer expressions from noisy animation signals.

10.
Coarse-grained affective computing (AC) has been developed and widely applied in many fields. Electroencephalogram (EEG) signals contain abundant emotional information, but fine-grained AC is difficult to develop due to the lack of finely labeled data and of suitable visualization methods for EEG data with fine labels. To achieve a fine mapping from EEG data directly to facial images, we propose a conditional generative adversarial network (cGAN) that establishes the relationship between emotion-related EEG data, a coarse label, and a facial expression image. A corresponding training strategy is also proposed to realize fine-grained estimation and visualization of EEG-based emotion. The experiments demonstrate the validity of the proposed method for generating fine-grained facial expressions, and the image entropy of the generated images indicates that the method provides a satisfactory visualization of fine-grained facial expressions.

11.
Most previous methods for emotion recognition focus on facial emotion and ignore the rich contextual information that conveys important emotion states. To make full use of contextual information as a complement to facial information, we propose the Context-Dependent Net (CD-Net) for robust context-aware human emotion recognition. Inspired by the long-range dependency modeling of the transformer, we introduce a tubal transformer that forms a shared feature representation space to facilitate interactions among face, body, and context features. We also introduce hierarchical feature fusion to recombine the enhanced multi-scale face, body, and context features for emotion classification. Experimentally, we verify the effectiveness of the proposed CD-Net on two large emotion datasets, CAER-S and EMOTIC. The quantitative evaluation results demonstrate the superiority of CD-Net over other state-of-the-art methods, while the visualization results show that CD-Net captures the dependencies among the face, body, and context components and focuses on the features relevant to emotion.

12.
With a better understanding of face anatomy and technical advances in computer graphics, 3D face synthesis has become one of the most active research fields for many human–machine applications, ranging from immersive telecommunication to the video game industry. In this paper, we propose a method that automatically extracts features such as the eyes, mouth, eyebrows, and nose from a given frontal face image. A generic 3D face model is then superimposed onto the face in accordance with the extracted facial features, fitting the input face image by transforming the vertex topology of the generic face model. The specific 3D face is finally synthesized by texturing the individualized face model. Once the model is ready, six basic facial expressions are generated with the help of MPEG-4 facial animation parameters. To generate transitions between these facial expressions, we use 3D shape morphing between the corresponding face models and blend the corresponding textures. The novelty of our method is the automatic generation of the 3D model and the synthesis of faces with different expressions from a neutral frontal face image. Our method has the advantage of being fully automatic, robust, and fast, and it can generate various views of the face by rotating the 3D model. It can be used in a variety of applications for which depth accuracy is not critical, such as games, avatars, and face recognition. We have tested and evaluated our system using a standard database, BU-3DFE.
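The expression-transition step can be sketched as linear 3D shape morphing between two meshes with identical vertex topology; the vertex data below are invented for illustration, and the texture-blending half of the transition is omitted.

```python
# Hedged sketch: linear vertex morph between two expression meshes that share
# the same vertex ordering. t = 0 gives the first mesh, t = 1 the second.

def morph(verts_a, verts_b, t):
    """Interpolate each vertex component-wise: a + t * (b - a)."""
    return [tuple(a + t * (b - a) for a, b in zip(va, vb))
            for va, vb in zip(verts_a, verts_b)]

# Toy two-vertex meshes standing in for neutral and smiling face models.
neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
smile   = [(0.0, 1.0, 0.0), (1.0, 2.0, 0.0)]
halfway = morph(neutral, smile, 0.5)
```

Sampling t from 0 to 1 over successive frames produces the smooth transition between two MPEG-4-driven expression poses; textures would be alpha-blended with the same t.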

13.
14.
In the domain of telecommunication applications such as videophony and teleconferencing, the representation and modeling of the human face and its expressions has seen important development. In this paper, we present the basic principles of image sequence coding, with the main approaches and methods leading to 3D model-based coding. We then introduce our 3D wire-frame model, with which we have developed compression and triangulated-surface representation methods. An original approach to simulating and reproducing facial expressions with radial basis functions is also presented.
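A minimal sketch of RBF-driven expression reproduction, assuming Gaussian kernels: each free vertex is displaced by a kernel-weighted sum of control-point displacements. Full RBF interpolation would first solve a linear system for the kernel weights; using the raw kernel values directly, as here, is a deliberate simplification that is exact at an isolated control point and decays smoothly away from it.

```python
# Hedged sketch: Gaussian-kernel-weighted deformation driven by a few control
# points, a simplified stand-in for true RBF interpolation.
import math

def rbf_deform(points, controls, displacements, sigma=1.0):
    """Displace each point by a Gaussian-weighted sum of control displacements."""
    out = []
    for p in points:
        d = [0.0] * len(p)
        for c, disp in zip(controls, displacements):
            dist2 = sum((pi - ci) ** 2 for pi, ci in zip(p, c))
            w = math.exp(-dist2 / (2 * sigma ** 2))
            for k in range(len(p)):
                d[k] += w * disp[k]
        out.append(tuple(pi + di for pi, di in zip(p, d)))
    return out

# One hypothetical control point at a mouth corner, displaced upward by 1 unit.
controls = [(0.0, 0.0)]
displacements = [(0.0, 1.0)]
near = rbf_deform([(0.0, 0.0)], controls, displacements)    # follows exactly
far = rbf_deform([(10.0, 0.0)], controls, displacements)    # barely moves
```

This locality is what makes the RBF formulation attractive for expression coding: a handful of control displacements reproduces a smooth, spatially bounded deformation.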

15.
Visual face tracking is an important building block for intelligent living and working spaces, as it can locate people without any human intervention or the need for users to carry sensors. In this paper, we present a novel face tracking system built on a particle filtering framework that facilitates the use of non-linear visual measurements of the facial area. We concentrate on three such non-linear visual measurement cues: object detection, foreground segmentation, and colour matching. We derive robust measurement likelihoods under a unified representation scheme and fuse them into our face tracking algorithm. The algorithm is complemented with optimal selection of the particle filter's object model and a target handling scheme. The resulting face tracking system is extensively evaluated and compared to baseline systems.
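The likelihood-fusion idea can be sketched with a minimal 1D particle filter whose weights multiply several independent measurement likelihoods, standing in for the detection, foreground, and colour cues. The Gaussian likelihood shape, cue positions, and noise levels are illustrative assumptions, not the paper's measurement models.

```python
# Hedged sketch: 1D particle filter with multiplicative likelihood fusion.
import math
import random

def gaussian_like(x, mean, sigma):
    """Unnormalized Gaussian likelihood of a particle position."""
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

def particle_filter_step(particles, cue_means, sigma=1.0, noise=0.1, rng=random):
    # 1. Predict: diffuse particles with small random motion.
    moved = [p + rng.gauss(0.0, noise) for p in particles]
    # 2. Update: fuse the cues by multiplying their likelihoods.
    weights = []
    for p in moved:
        w = 1.0
        for m in cue_means:
            w *= gaussian_like(p, m, sigma)
        weights.append(w)
    total = sum(weights)
    weights = [w / total for w in weights]
    # 3. Resample proportionally to the fused weight.
    return rng.choices(moved, weights=weights, k=len(moved))

rng = random.Random(0)
particles = [rng.uniform(-5.0, 5.0) for _ in range(500)]
for _ in range(20):
    # Two slightly disagreeing cues; the fused posterior peaks between them.
    particles = particle_filter_step(particles, cue_means=[2.0, 2.2], rng=rng)
estimate = sum(particles) / len(particles)   # settles between the two cues
```

Multiplying the cue likelihoods means a particle survives resampling only if every cue finds it plausible, which is what gives the fused tracker its robustness to any single failing cue.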

16.
A 3D facial reconstruction and expression modeling system that creates 3D video sequences of test subjects and facilitates interactive generation of novel facial expressions is described. Dynamic 3D video sequences are generated using computational binocular stereo matching with active illumination and are used for interactive expression modeling. An individual's 3D video set is annotated with control points associated with face subregions. Dragging a control point updates texture and depth only in the associated subregion, so the user can generate new composite expressions unseen in the original source video sequences. Such interactive manipulation of dynamic 3D face reconstructions requires as little preparation of the test subject as possible. Dense depth data combined with video-based texture yields realistic and convincing facial animations, a feature lacking in conventional marker-based motion capture systems.

17.
18.
吴晓军, 鞠光亮 《电子学报》(Acta Electronica Sinica), 2016, 44(9): 2141-2147
A markerless facial expression capture method is proposed. First, a uniform facial mesh model covering 85% of the facial features is generated from ASM (Active Shape Model) facial landmarks. Second, an expression capture method based on this face model is proposed: optical flow tracks the displacement of the landmarks, with particle filtering used to stabilize the tracking results; the landmark displacements drive the overall mesh deformation, serving as the initial value for mesh tracking, and a mesh deformation algorithm drives the mesh itself. Finally, the captured expression data are used to drive different face models, with different driving methods chosen according to the model's dimensionality to reproduce the expression animation. Experimental results show that the proposed algorithm captures facial expressions well, and mapping the captured expressions onto both a 2D cartoon face and a 3D virtual face model yields good animation results.

19.
20.