Similar Documents
20 similar documents found.
1.
In this paper, we propose an automatic facial expression exaggeration system, consisting of face detection, facial expression recognition, and facial expression exaggeration components, that generates exaggerated views of different expressions for an input face video. In addition, parallelized algorithms for the system are developed to reduce its execution time on a multi-core embedded system. The experimental results show satisfactory expression exaggeration and computational efficiency in cluttered environments, and quantitative comparisons show that the proposed parallelization strategies provide significant speedup over a single-processor implementation on a multi-core embedded platform.
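As a rough illustration of the parallelization idea in this abstract (not the paper's actual algorithms), the sketch below distributes per-frame processing across worker processes; the three stage functions and the frame data are placeholders.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def process_frame(frame):
    # Placeholder stages: a real system would run face detection,
    # expression recognition, and expression exaggeration here.
    face = frame
    expression = int(face.mean() * 6) % 6
    exaggerated = np.clip(face * 1.2, 0.0, 1.0)
    return expression, exaggerated

if __name__ == "__main__":
    frames = [np.random.rand(240, 320) for _ in range(32)]   # synthetic "video"
    with ProcessPoolExecutor(max_workers=4) as pool:          # one worker per core
        results = list(pool.map(process_frame, frames))
    print(len(results), "frames processed")
```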

2.
In this paper, two novel methods for facial expression recognition in facial image sequences are presented. The user has to manually place some of the Candide grid nodes on face landmarks depicted in the first frame of the image sequence under examination. The grid-tracking and deformation system used, based on deformable models, tracks the grid in consecutive video frames over time, as the facial expression evolves, until the frame that corresponds to the greatest facial expression intensity. The geometrical displacement of certain selected Candide nodes, defined as the difference of the node coordinates between the first frame and the frame of greatest facial expression intensity, is used as an input to a novel multiclass Support Vector Machine (SVM) system of classifiers, which is used to recognize either the six basic facial expressions or a set of chosen Facial Action Units (FAUs). Results on the Cohn-Kanade database show a recognition accuracy of 99.7% for facial expression recognition using the proposed multiclass SVMs and 95.1% for facial expression recognition based on FAU detection.
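A minimal sketch of the classification step described above, assuming tracked Candide node coordinates are available for the first frame and for the apex (greatest-intensity) frame; synthetic arrays stand in for the grid tracker's output, and the node count and SVM parameters are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_seq, n_nodes = 120, 62                      # sequences, selected Candide nodes
neutral = rng.random((n_seq, n_nodes, 2))     # node coordinates, first frame
apex = neutral + rng.normal(scale=0.05, size=(n_seq, n_nodes, 2))  # apex frame
labels = rng.integers(0, 6, n_seq)            # six basic facial expressions

# Geometric displacement of each node between the first and the apex frame.
X = (apex - neutral).reshape(n_seq, -1)
clf = SVC(kernel="rbf", decision_function_shape="ovo")   # multiclass SVM
clf.fit(X, labels)
print(clf.predict(X[:5]))
```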

3.
This paper presents a hierarchical animation method for transferring facial expressions extracted from a performance video to different facial sketches. Without any expression example obtained from the target faces, our approach can transfer expressions to facial sketches by motion retargeting. In practical applications, however, image noise in each frame reduces the accuracy of feature extraction from the source faces, and the shape difference between source and target faces affects the animation quality when representing expressions. To address these difficulties, we propose a robust neighbor-expression transfer (NET) model, which aims at modeling the spatial relations among sparse facial features. By learning expression behaviors from neighboring face examples, the NET model can reconstruct facial expressions from noisy signals. Based on the NET model, we present a hierarchical method to animate facial sketches. The motion vectors on the source face are adjusted from coarse to fine on the target face. Accordingly, the animation results are generated to replicate the source expressions. Experimental results demonstrate that the proposed method can effectively and robustly transfer expressions from noisy animation signals.

4.
With a better understanding of face anatomy and technical advances in computer graphics, 3D face synthesis has become one of the most active research fields for many human-machine applications, ranging from immersive telecommunication to the video games industry. In this paper we propose a method that automatically extracts features such as the eyes, mouth, eyebrows, and nose from a given frontal face image. A generic 3D face model is then superimposed onto the face in accordance with the extracted facial features, fitting the input face image by transforming the vertex topology of the generic face model. The person-specific 3D face can finally be synthesized by texturing the individualized face model. Once the model is ready, six basic facial expressions are generated with the help of MPEG-4 facial animation parameters. To generate transitions between these facial expressions we use 3D shape morphing between the corresponding face models and blend the corresponding textures. The novelty of our method is the automatic generation of the 3D model and the synthesis of faces with different expressions from a single frontal neutral face image. Our method has the advantage of being fully automatic, robust, and fast, and it can generate various views of the face by rotating the 3D model. It can be used in a variety of applications for which depth accuracy is not critical, such as games, avatars, and face recognition. We have tested and evaluated our system using a standard database, namely BU-3DFE.
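The transition step (3D shape morphing plus texture blending) can be illustrated with the hedged sketch below; the meshes and textures are synthetic placeholders that share one vertex topology, as the abstract assumes.

```python
import numpy as np

rng = np.random.default_rng(0)
verts_neutral = rng.random((5000, 3))                   # individualized model, neutral
verts_smile = verts_neutral + rng.normal(scale=0.01, size=(5000, 3))
tex_neutral = rng.random((256, 256, 3))                 # per-expression textures
tex_smile = rng.random((256, 256, 3))

def morph(t):
    """Linear 3D shape morph and texture blend between two expressions, t in [0, 1]."""
    verts = (1.0 - t) * verts_neutral + t * verts_smile
    tex = (1.0 - t) * tex_neutral + t * tex_smile
    return verts, tex

# Ten in-between frames for the neutral-to-smile transition.
transition = [morph(t) for t in np.linspace(0.0, 1.0, 10)]
```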

5.
We present a novel method for caricature synthesis based on mean value coordinates (MVC). Our method can be applied to any single frontal face image to learn a specified caricature face pair for frontal and 3D caricature synthesis. This technique only requires one or a small number of exemplar pairs and a natural frontal face image training set, while the system can transfer the style of the exemplar pair across individuals. Further exaggeration can be fulfilled in a controllable way. Our method is further applied to facial expression transfer, interpolation, and exaggeration, which are applications of expression editing. Additionally, we have extended our approach to 3D caricature synthesis based on the 3D version of MVC. With experiments we demonstrate that the transferred expressions are credible and the resulting caricatures can be characterized and recognized.
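For readers unfamiliar with mean value coordinates, the sketch below computes MVC weights of a point with respect to a closed 2D polygon, the basic building block rather than the paper's full caricature pipeline; the example polygon and point are arbitrary.

```python
import numpy as np

def mean_value_coordinates(p, poly):
    """Return MVC weights of point p with respect to polygon vertices poly (n, 2)."""
    d = poly - p
    r = np.linalg.norm(d, axis=1)
    angles = np.arctan2(d[:, 1], d[:, 0])
    alpha = np.diff(np.append(angles, angles[0]))      # angle at p spanned by each edge
    alpha = (alpha + np.pi) % (2 * np.pi) - np.pi      # wrap to (-pi, pi]
    tan_half = np.tan(alpha / 2.0)
    w = (np.roll(tan_half, 1) + tan_half) / r          # w_i = (tan(a_{i-1}/2)+tan(a_i/2)) / |v_i - p|
    return w / w.sum()

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
print(mean_value_coordinates(np.array([0.25, 0.5]), square))
```

For a point inside a convex polygon the weights are positive and sum to one, so the point is reproduced as a weighted average of the polygon vertices.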

6.
This paper describes a new and efficient method for facial expression generation on cloned synthetic head models. The system uses abstract facial muscles called action units (AUs) based on both anatomical muscles and the Facial Action Coding System. The facial expression generation method has real-time performance, is less computationally expensive than physically based models, and has greater anatomical correspondence than rational free-form deformation or spline-based techniques. Automatic cloning of a real human head is done by adapting a generic facial and head mesh to Cyberware laser-scanned data. The conformation of the generic head to the individual data and the fitting of texture onto it are based on a fully automatic feature extraction procedure. Individual facial animation parameters are also automatically estimated during the conformation process. The entire animation system is hierarchical; emotions and visemes (the visual mouth shapes that occur during speech) are defined in terms of the AUs, and higher-level gestures are defined in terms of AUs, emotions, and visemes, as well as the temporal relationships between them. The main emphasis of the paper is on the abstract muscle model, along with a limited discussion of the automatic cloning process and higher-level animation control aspects.
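A toy sketch of the hierarchy described above, treating each action unit as a basis of per-vertex displacements and each emotion as a weighted combination of AUs; the AU names, weights, and mesh data are invented for illustration only and are not the paper's values.

```python
import numpy as np

n_vertices = 1000
rng = np.random.default_rng(0)
# Each AU is a (hypothetical) per-vertex displacement basis on the head mesh.
au_basis = {au: rng.normal(scale=0.001, size=(n_vertices, 3))
            for au in ("AU1", "AU4", "AU12", "AU15")}

emotions = {                      # assumed AU weights, for illustration only
    "happiness": {"AU12": 1.0},
    "sadness":   {"AU1": 0.6, "AU4": 0.5, "AU15": 0.8},
}

def apply_emotion(neutral_vertices, emotion, intensity=1.0):
    """Displace the neutral mesh by the AU combination that defines an emotion."""
    v = neutral_vertices.copy()
    for au, w in emotions[emotion].items():
        v += intensity * w * au_basis[au]
    return v

neutral = rng.normal(size=(n_vertices, 3))
happy = apply_emotion(neutral, "happiness", intensity=0.7)
```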

7.

Face recognition has become an accessible topic for experts as well as ordinary people, as it is a focal non-intrusive biometric modality. In this paper, we introduce a new approach to perform face recognition under varying facial expressions. The proposed approach consists of two main steps: facial expression recognition and face recognition. They are two complementary steps for improving face recognition across facial expression variation. In the first step, we select the most expressive regions responsible for facial expression appearance using the Mutual Information technique. Such a process helps not only improve the facial expression classification accuracy but also reduce the feature vector size. In the second step, we use Principal Component Analysis (PCA) to build Eigenfaces for each facial expression class. Face recognition is then performed by projecting the face onto the corresponding facial expression Eigenfaces. The PCA technique significantly reduces the dimensionality of the original space, since face recognition is carried out in the reduced Eigenfaces space. An experimental study was conducted to evaluate the performance of the proposed approach in terms of face recognition accuracy and spatial-temporal complexity.
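A minimal sketch of the second step (per-expression Eigenfaces and projection), using synthetic image vectors; the class names, sizes, and the nearest-neighbour matching rule are assumptions made for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
expressions = ["neutral", "happy", "surprise"]
train = {e: rng.random((50, 64 * 64)) for e in expressions}   # 50 vectorized faces each

# One Eigenfaces subspace (PCA basis) per facial-expression class.
eigenfaces = {e: PCA(n_components=20).fit(X) for e, X in train.items()}

def recognize(face_vec, expression, gallery, gallery_ids):
    """Project probe and gallery into the expression's Eigenfaces space and
    return the identity of the nearest gallery face."""
    pca = eigenfaces[expression]
    probe = pca.transform(face_vec[None, :])
    gal = pca.transform(gallery)
    return gallery_ids[np.argmin(np.linalg.norm(gal - probe, axis=1))]

gallery = rng.random((10, 64 * 64))
ids = list(range(10))
print(recognize(gallery[3] + 0.01 * rng.random(64 * 64), "happy", gallery, ids))
```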


8.
A 3D facial reconstruction and expression modeling system which creates 3D video sequences of test subjects and facilitates interactive generation of novel facial expressions is described. Dynamic 3D video sequences are generated using computational binocular stereo matching with active illumination and are used for interactive expression modeling. An individual’s 3D video set is annotated with control points associated with face subregions. Dragging a control point updates texture and depth in only the associated subregion so that the user generates new composite expressions unseen in the original source video sequences. Such an interactive manipulation of dynamic 3D face reconstructions requires as little preparation on the test subject as possible. Dense depth data combined with video-based texture results in realistic and convincing facial animations, a feature lacking in conventional marker-based motion capture systems.

9.
Su Yuting, Chen Yao, Lü Wei. Infrared Technology, 2019, 41(4): 377-382
On-duty detection is an important research direction of video analysis in modern security and has a wide range of applications. This paper designs and implements an embedded personnel on-duty detection system. To increase the running speed of the embedded system, an improved facial landmark detection method is proposed; to improve the detection accuracy, a near-infrared face sample database is built. The system captures real-time images through a near-infrared camera and then performs facial landmark detection to obtain the facial information of the monitored person. According to a set of violation-judgment rules, it determines whether a violation is currently occurring and raises an alarm. Experimental results show that, under the specified conditions, the facial landmark detection accuracy of the system reaches 95%, the detection accuracy for both abnormal situations exceeds 94%, and the system has good real-time performance.

10.
The face is the window to the soul. This is what the 19th-century French doctor Duchenne de Boulogne thought. Using electric shocks to stimulate muscular contractions and induce bizarre-looking expressions, he wanted to understand how muscles produce facial expressions and reveal the most hidden human emotions. Two centuries later, this research field remains very active. We see automatic systems for recognizing emotion and facial expression being applied in medicine, security and surveillance systems, advertising and marketing, among other areas. However, there are still fundamental questions that scientists are trying to answer when analyzing a person’s emotional state from their facial expressions. Is it possible to reliably infer someone’s internal state based only on their facial muscles’ movements? Is there a universal facial configuration to express basic emotions such as anger, disgust, fear, happiness, sadness, and surprise? In this research, we seek to address some of these questions through convolutional neural networks. Unlike most studies in the prior art, we are particularly interested in examining whether characteristics learned from one group of people can be generalized to predict another’s emotions successfully. In this sense, we adopt a cross-dataset evaluation protocol to assess the performance of the proposed methods. Our baseline is a custom-tailored model initially used in face recognition to categorize emotion. By applying data visualization techniques, we improve our baseline model, deriving two other methods. The first method aims to direct the network’s attention to regions of the face considered important in the literature but ignored by the baseline model, using patches to hide random parts of the facial image so that the network can learn discriminative characteristics in different regions. The second method explores a loss function that generates data representations in high-dimensional spaces so that examples of the same emotion class are close and examples of different classes are distant. Finally, we investigate the complementarity between these two methods, proposing a late-fusion technique that combines their outputs through the multiplication of probabilities. We compare our results against an extensive list of works evaluated on the same datasets. In all of them, our methods achieve results competitive with works that followed an intra-dataset protocol. Under a cross-dataset protocol, we achieve state-of-the-art results, outperforming even commercial off-the-shelf solutions from well-known tech companies.
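The late-fusion step can be sketched as below, assuming two trained emotion classifiers that each output per-class probabilities for the same images; the probabilities here are random placeholders rather than real model outputs.

```python
import numpy as np

def late_fusion(probs_patch_model, probs_metric_model):
    """Combine two emotion classifiers by multiplying their class probabilities,
    renormalizing, and picking the highest-scoring class per image."""
    fused = probs_patch_model * probs_metric_model
    fused /= fused.sum(axis=1, keepdims=True)
    return fused.argmax(axis=1)

rng = np.random.default_rng(0)
p1 = rng.dirichlet(np.ones(6), size=4)   # 4 images, 6 basic emotions
p2 = rng.dirichlet(np.ones(6), size=4)
print(late_fusion(p1, p2))
```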

11.
Face recognition has been addressed with pattern recognition techniques such as composite correlation filters. These filters are synthesized from training sets which are representative of the facial classes; for this reason, filter performance depends greatly on the appropriate selection of the training set. This set can be selected either by a filter designer or by a conventional method. This paper presents an optimization-based methodology for the automatic selection of the training set. Given an optimization algorithm, the proposed methodology uses its main mechanics to iteratively examine a given set of available images in order to find the best subset for the training set. To this end, three objective functions are proposed as optimization criteria for training set selection. The proposed methodology was evaluated by undertaking face recognition under variable illumination and facial expressions. Four optimization algorithms and three composite correlation filters were used to test the proposed methodology. The Maximum Average Correlation Height filter designed with the Grey Wolf Optimizer obtained the best performance under homogeneous illumination and facial expressions, while the Unconstrained Nonlinear Composite Filter designed with either the Grey Wolf Optimizer or the (1+1)-Evolution Strategy obtained the best performance under variable illumination. The proposed methodology selects training sets for the synthesis of composite filters with results comparable to those reported in the face recognition literature.
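A hedged sketch of the optimization-based selection loop: random search over candidate subsets with a placeholder fitness function. The real methodology uses correlation-filter performance criteria and metaheuristics such as the Grey Wolf Optimizer; both the objective and the search strategy below are stand-in assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
available = rng.random((40, 32 * 32))       # 40 candidate face images (synthetic)

def objective(subset_idx):
    """Placeholder fitness: spread of the selected images (assumption, not the
    paper's filter-based criteria)."""
    return np.var(available[subset_idx], axis=0).mean()

best_idx, best_score = None, -np.inf
for _ in range(200):                         # iteratively examine candidate subsets
    idx = rng.choice(len(available), size=8, replace=False)
    score = objective(idx)
    if score > best_score:
        best_idx, best_score = idx, score

training_set = available[best_idx]           # subset used to synthesize the filter
```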

12.
Human facial expressions are the most direct depiction of changes in psychological state, and facial expressions differ greatly between individuals. Existing expression recognition methods distinguish expressions using statistical facial features and lack deep mining of facial detail information. According to psychologists' definition of facial action coding, the local details of a face determine the meaning of its expression. This paper therefore proposes a facial expression recognition method based on multi-scale detail enhancement. Since facial expressions are strongly affected by image details, a Gaussian pyramid is used to extract image detail information and the image is detail-enhanced, thereby strengthening the facial expression information. To account for the local nature of facial expressions, a hierarchical local gradient feature computation is proposed to describe the local shape around facial landmarks. Finally, a Support Vector Machine (SVM) is used to classify the facial expressions. Experimental results on the CK+ expression database show that the method not only verifies the important role of image detail in facial expression recognition, but also achieves very good recognition results with small-scale training data, reaching an average expression recognition rate of 98.19%.
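A minimal sketch of Gaussian-pyramid detail enhancement on a grayscale face image, assuming OpenCV; the number of pyramid levels, the gain, and the file names are illustrative assumptions rather than the paper's settings.

```python
import cv2
import numpy as np

def enhance_details(face, levels=3, gain=1.5):
    """Add back the detail lost at each pyramid level to strengthen local structure."""
    enhanced = face.astype(np.float32)
    current = face.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        detail = current - up                              # detail lost at this level
        enhanced += gain * cv2.resize(detail, (face.shape[1], face.shape[0]))
        current = down
    return np.clip(enhanced, 0, 255).astype(np.uint8)

face = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)        # hypothetical input path
if face is not None:
    cv2.imwrite("face_enhanced.png", enhance_details(face))
```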

13.
Wu Xiaojun, Ju Guangliang. Acta Electronica Sinica, 2016, 44(9): 2141-2147
A markerless facial expression capture method is proposed. First, a uniform face mesh model covering 85% of the facial features is generated from ASM (Active Shape Model) facial landmarks. Second, an expression capture method based on this face model is proposed: optical flow is used to track the displacement of the landmarks, aided by a particle filter that stabilizes the tracking results; the landmark displacements drive the overall mesh motion and serve as the initial values for mesh tracking, with a mesh deformation algorithm used to drive the mesh. Finally, the captured expression data drive different face models, and different driving methods are used according to the dimensionality of each model to reproduce the expression animation. Experimental results show that the proposed algorithm captures facial expressions well, and mapping the captured expressions onto both a 2D cartoon face and a 3D virtual face model yields good animation results.
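The landmark-tracking step can be sketched with pyramidal Lucas-Kanade optical flow in OpenCV, as below; the video path and the initial point grid are placeholders (the method above initializes from an ASM fit and further stabilizes the flow with a particle filter, which is omitted here).

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("expression_sequence.avi")   # hypothetical input video
ok, prev = cap.read()
if ok:
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    h, w = prev_gray.shape
    # Placeholder landmark grid; a real run would use the ASM facial landmarks.
    xs, ys = np.meshgrid(np.linspace(0.3, 0.7, 8), np.linspace(0.3, 0.7, 8))
    pts = np.float32(np.stack([xs.ravel() * w, ys.ravel() * h], axis=1)).reshape(-1, 1, 2)
    lk = dict(winSize=(21, 21), maxLevel=3,
              criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None, **lk)
        displacement = new_pts - pts          # would drive the mesh deformation step
        prev_gray, pts = gray, new_pts
```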

14.
Recently, Facial Expression Recognition (FER) has gained much attention in research for its various applications. In the facial expression recognition task, the subject-dependence issue is predominant when a small-scale database is used to train the system. The proposed Auxiliary Classifier Generative Adversarial Network (AC-GAN) based model regenerates ten expressions (angry, contempt, disgust, embarrassment, fear, joy, neutral, pride, sad, surprise) from an input face image and recognizes its expression. To alleviate the subject-dependence issue, we train the model person-wise, generate all of the above expressions for a person, and allow the discriminator to classify the expressions. The generator of our model uses the U-Net architecture, and the discriminator uses Capsule Networks for improved feature extraction. The model has been evaluated on the ADFES-BIV dataset, yielding an overall classification accuracy of 93.4%. We also compare our model with existing methods by evaluating it on commonly used datasets such as CK+ and KDEF.

15.
Facial expressions contain most of the information on the human face, which is essential for human–computer interaction. Developing robust algorithms that automatically recognize facial expressions with high recognition rates has been a challenge for the last ten years. In this paper, we propose a novel feature selection procedure which recognizes basic facial expressions with high recognition rates by utilizing three-dimensional (3D) geometrical facial feature positions. The paper presents a system that classifies expressions into one of the six basic emotional categories: anger, disgust, fear, happiness, sadness, and surprise. The paper's contribution lies in selecting features for each expression independently, achieving high recognition rates with the proposed geometric facial features selected per expression. The novel feature selection procedure is entropy based, and it is employed independently for each of the six basic expressions. The system’s performance is evaluated using the 3D facial expression database BU-3DFE. Experimental results show that the proposed method outperforms the latest methods reported in the literature.
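A rough sketch of entropy-based feature ranking for a single expression, on synthetic geometric features; the discretization, the per-expression criterion, and the number of retained features are assumptions and may differ from the paper's actual procedure.

```python
import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(0)
X = rng.random((200, 60))            # 200 samples x 60 3D geometric features (synthetic)
y = rng.integers(0, 2, 200)          # 1 = target expression (e.g. anger), 0 = other

def feature_entropy(values, bins=10):
    """Entropy of a feature's discretized value distribution."""
    hist, _ = np.histogram(values, bins=bins)
    return entropy(hist / hist.sum())

# Rank features by how consistent they are within the target expression class.
scores = np.array([feature_entropy(X[y == 1, j]) for j in range(X.shape[1])])
selected = np.argsort(scores)[:15]   # keep the 15 lowest-entropy features (assumed)
print(selected)
```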

16.
Expression cloning plays an important role in facial expression synthesis. In this paper, a novel algorithm is proposed for facial expression cloning. The proposed algorithm first introduces a new elastic model to balance the global and local warping effects, such that the impacts from facial feature diversity among people can be minimized, and thus more effective geometric warping results can be achieved. Furthermore, a muscle-distribution-based (MD) model is proposed, which utilizes the muscle distribution of the human face and results in more accurate facial illumination details. In addition, we also propose a new distance-based metric to automatically select the optimal parameters such that the global and local warping effects in the elastic model can be suitably balanced. Experimental results show that our proposed algorithm outperforms the existing methods.

17.
In this paper, the authors have developed a system that animates 3D facial agents based on real-time facial expression analysis techniques and research on synthesizing facial expressions and text-to-speech capabilities. This system combines visual, auditory, and primary interfaces to communicate one coherent multimodal chat experience. Users can represent themselves using agents they select from a group that we have predefined. When a user shows a particular expression while typing a text, the 3D agent at the receiving end speaks the message aloud while it replays the recognized facial expression sequences and also augments the synthesized voice with appropriate emotional content. Because the visual data exchange is based on the MPEG-4 high-level Facial Animation Parameter for facial expressions (FAP 2), rather than real-time video, the method requires very low bandwidth.

18.
19.
Emotions of human beings are largely conveyed by facial expressions. Facial expressions, simple as well as complex, are well decoded by facial action units: any facial expression can be detected and analyzed if its facial action units are decoded well. In the presented work, an attempt has been made to detect facial action unit intensity by mapping the features based on their cosine similarity. Distance metric learning based on cosine similarity maps the data by learning a metric that measures orientation rather than magnitude. The motivation behind using cosine similarity is that changes in facial expression are better represented by changes in orientation than by changes in magnitude. The features are fed to a support vector machine for classification of the various intensities of the action units. Experimental results on widely used databases, namely the DISFA database and the UNBC-McMaster shoulder pain database, confirm the efficacy of the proposed approach.
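A minimal sketch of the idea: map raw facial features to cosine similarities against a set of reference vectors, then classify action-unit intensity with an SVM. The data, the choice of reference vectors, and the intensity scale are assumptions for illustration, not the paper's metric-learning formulation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
X = rng.random((300, 40))                 # facial features per frame (synthetic)
y = rng.integers(0, 6, 300)               # assumed AU intensity levels 0..5
references = X[rng.choice(300, 10, replace=False)]   # assumed reference vectors

# Orientation-based representation: similarity to each reference vector.
X_cos = cosine_similarity(X, references)
clf = SVC(kernel="rbf").fit(X_cos, y)
print(clf.predict(X_cos[:5]))
```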

20.
3D facial animation is a hot topic in computer graphics. To address the difficulty and limited realism of simulating faces with current 3D animation models, and to simulate facial expression motion simply and realistically, a fitted abstract muscle model is proposed. The model is based on the abstract muscle model commonly used in facial animation, improves the mathematical model of its wide linear muscle, uses deformation parameters to control the shape of the wide linear muscle, and directly simulates facial muscle actions. Simulation experiments show that the fitted abstract muscle model can reproduce complex mouth motions more realistically. Therefore, compared with the traditional abstract muscle model, the fitted abstract muscle model has low computational complexity and more realistic simulation results, giving it broad application prospects.
