Similar Documents
20 similar documents found.
1.
This paper presents a hierarchical multi-state pose-dependent approach for facial feature detection and tracking under varying facial expression and face pose. For effective and efficient representation of feature points, a hybrid representation that integrates Gabor wavelets and gray-level profiles is proposed. To model the spatial relations among feature points, a hierarchical statistical face shape model is proposed to characterize both the global shape of human face and the local structural details of each facial component. Furthermore, multi-state local shape models are introduced to deal with shape variations of some facial components under different facial expressions. During detection and tracking, both facial component states and feature point positions, constrained by the hierarchical face shape model, are dynamically estimated using a switching hypothesized measurements (SHM) model. Experimental results demonstrate that the proposed method accurately and robustly tracks facial features in real time under different facial expressions and face poses.
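The Gabor-wavelet half of the hybrid representation above can be illustrated with a small NumPy sketch: a bank of complex Gabor kernels is applied at a feature point and the response magnitudes form the local descriptor ("jet"). The kernel size, scale, and wavelength below are illustrative choices, not the paper's parameters.

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lam):
    """Complex Gabor kernel: Gaussian envelope times a complex sinusoid."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)     # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.exp(1j * 2 * np.pi * xr / lam)
    return envelope * carrier

def gabor_jet(patch, kernels):
    """Magnitudes of Gabor responses at the patch centre -> feature vector."""
    return np.array([np.abs(np.sum(patch * k)) for k in kernels])

# Bank of kernels at 4 orientations (illustrative parameters)
kernels = [gabor_kernel(15, 3.0, t, 8.0)
           for t in np.linspace(0, np.pi, 4, endpoint=False)]
patch = np.random.default_rng(0).random((15, 15))  # stand-in image patch
feat = gabor_jet(patch, kernels)
```

In a full tracker, one such jet per feature point would be concatenated with the gray-level profile to form the hybrid descriptor.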

2.
This paper proposes a facial feature point tracker motivated by applications such as human-computer interfaces and facial expression analysis systems. The proposed tracker is based on a graphical model framework. The facial features are tracked through video streams by incorporating statistical relations in time as well as spatial relations between feature points. By exploiting the spatial relationships between feature points, the method remains robust in real-world conditions such as arbitrary head movements and occlusions. A Gabor feature-based occlusion detector is developed and used to handle occlusions. The performance of the proposed tracker has been evaluated on real video data under various conditions, including occluded facial gestures and head movements. It is also compared to two popular methods: one based on Kalman filtering exploiting temporal relations, and the other based on active appearance models (AAM). Improvements provided by the proposed approach are demonstrated through both visual displays and quantitative analysis.
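The Kalman-filtering baseline mentioned above can be sketched in a few lines: a constant-velocity model per feature point, with noisy 2-D position measurements. The process/measurement noise values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    x = F @ x                       # predict state
    P = F @ P @ F.T + Q             # predict covariance
    y = z - H @ x                   # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Constant-velocity model for one 2-D feature point: state = [x, y, vx, vy]
dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = 1e-3 * np.eye(4)   # small process noise
R = 0.5 * np.eye(2)    # measurement noise

rng = np.random.default_rng(1)
x, P = np.zeros(4), np.eye(4)
truth, vel = np.array([0.0, 0.0]), np.array([1.0, 0.5])
for _ in range(50):
    truth = truth + vel * dt
    z = truth + rng.normal(0, 0.5, 2)   # noisy landmark measurement
    x, P = kalman_step(x, P, z, F, H, Q, R)
```

The purely temporal model above is exactly what the paper's graphical model improves on by adding spatial relations between points.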

3.
For effective interaction between humans and socially adept, intelligent service robots, a key capability required by this class of sociable robots is the successful interpretation of visual data. In addition to crucial techniques like human face detection and recognition, an important next step for enabling intelligence and empathy within social robots is emotion recognition. In this paper, an automated and interactive computer vision system is investigated for human facial expression recognition and tracking based on facial structure features and movement information. Twenty facial features are adopted since they are informative and prominent enough to reduce ambiguity during classification. An unsupervised learning algorithm, distributed locally linear embedding (DLLE), is introduced to recover the inherent properties of scattered data lying on a manifold embedded in high-dimensional input facial images. The selected person-dependent facial expression images in a video are classified using the DLLE. In addition, facial expression motion energy is introduced to describe the facial muscles' tension during expressions for person-independent recognition. This method takes advantage of optical flow, which tracks the feature points' movement information. Finally, experimental results show that our approach is able to separate different expressions successfully.
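The optical-flow ingredient of the motion-energy idea can be sketched with a single-point Lucas-Kanade estimator: solve a small least-squares system over a window around each feature point, then sum flow magnitudes as a crude "motion energy". This is a generic sketch on synthetic frames, not the paper's implementation.

```python
import numpy as np

def lucas_kanade_point(I1, I2, cx, cy, half=7):
    """Lucas-Kanade flow (u, v) at one feature point from a local window."""
    Ix = (np.roll(I1, -1, axis=1) - np.roll(I1, 1, axis=1)) / 2.0  # d/dx
    Iy = (np.roll(I1, -1, axis=0) - np.roll(I1, 1, axis=0)) / 2.0  # d/dy
    It = I2 - I1
    win = (slice(cy - half, cy + half + 1), slice(cx - half, cx + half + 1))
    A = np.stack([Ix[win].ravel(), Iy[win].ravel()], axis=1)
    b = -It[win].ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)  # solves A @ (u,v) = b
    return flow

# Synthetic frames: the second frame is the first shifted by (0.5, 0.3) pixels.
ys, xs = np.mgrid[0:64, 0:64].astype(float)
f = lambda x, y: np.sin(0.15 * x) + np.cos(0.1 * y)
I1 = f(xs, ys)
I2 = f(xs - 0.5, ys - 0.3)

u, v = lucas_kanade_point(I1, I2, 32, 32)
# Motion energy over a set of tracked points: sum of flow magnitudes.
points = [(20, 20), (32, 32), (44, 44)]
energy = sum(np.linalg.norm(lucas_kanade_point(I1, I2, px, py))
             for px, py in points)
```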

4.
Automatic analysis of head gestures and facial expressions is a challenging research area with significant applications in human-computer interfaces. We develop a face and head gesture detector for video streams. The detector is based on the facial landmark paradigm, in that both appearance and configuration information of landmarks are used. First, we accurately detect and track facial landmarks using adaptive templates, a Kalman predictor, and subspace regularization. Then the trajectories (time series) of facial landmark positions over the course of the head gesture or facial expression are converted into various discriminative features. Features can be landmark coordinate time series, facial geometric features, or patches on expressive regions of the face. We comparatively evaluate two feature sequence classifiers, Hidden Markov Models (HMM) and Hidden Conditional Random Fields (HCRF), and various feature subspace classifiers, namely ICA (Independent Component Analysis) and NMF (Non-negative Matrix Factorization), on the spatiotemporal data. We achieve 87.3% correct gesture classification on a seven-gesture test database, and performance reaches 98.2% correct detection under a fusion scheme. Promising and competitive results are also achieved on classification of naturally occurring gesture clips of the LIlir TwoTalk Corpus.
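The HMM side of the sequence classification above can be sketched with the scaled forward algorithm: train (or here, hand-specify) one HMM per gesture and pick the model with the highest sequence likelihood. The two toy "nod"/"shake" models and the discrete symbols (a quantized landmark trajectory) are illustrative assumptions.

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log p(obs | HMM) for a discrete sequence."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        ll += np.log(s)
        alpha /= s          # rescale to avoid underflow
    return ll

def classify(obs, models):
    """Pick the gesture model with the highest sequence likelihood."""
    return int(np.argmax([log_likelihood(obs, *m) for m in models]))

pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])          # sticky 2-state dynamics
# Toy emissions: "nod" mostly emits symbol 0, "shake" mostly symbol 1.
B_nod = np.array([[0.9, 0.1], [0.8, 0.2]])
B_shake = np.array([[0.1, 0.9], [0.2, 0.8]])
models = [(pi, A, B_nod), (pi, A, B_shake)]
```

In the paper's setting the observations would be continuous landmark features rather than two symbols, but the decision rule is the same.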

5.
6.
A Robust and Efficient Method for Facial Feature Point Tracking
黄琛, 丁晓青, 方驰. Acta Automatica Sinica (《自动化学报》), 2012, 38(5): 788-796
Facial feature point tracking recovers precise information about facial components, beyond a coarse face position and motion trajectory, and plays an important role in computer vision research. The Active Appearance Model (AAM) is one of the most effective ways to describe facial feature point positions, but its high-dimensional parameter space and gradient-descent optimization make it sensitive to initialization and prone to local minima. Consequently, traditional AAM-based feature point trackers cannot simultaneously handle large pose, illumination, and expression variation. Within a multi-view AAM framework, this paper proposes a real-time pose estimation algorithm combining random forests and linear discriminant analysis (LDA) to pre-estimate and update the pose of the tracked face, effectively handling large pose changes in video. An improved Online Appearance Model (OAM) is proposed to assess tracking accuracy, and the AAM texture model is adaptively updated through incremental principal component analysis (PCA) learning, greatly improving tracking stability and the model's robustness to illumination and expression changes. Experimental results show that the algorithm performs well in accuracy, robustness, and real-time speed for facial feature point tracking in video.
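The adaptive texture-model update above relies on incremental PCA. A simplified stand-in (not the paper's algorithm) keeps a running mean and covariance via Welford-style updates and re-derives the eigenbasis on demand; reconstruction error against that basis then serves as the OAM-style tracking-quality score.

```python
import numpy as np

class OnlineAppearanceModel:
    """Running mean/covariance + eigenbasis: a simplified incremental PCA."""
    def __init__(self, dim, k):
        self.n = 0
        self.mean = np.zeros(dim)
        self.M2 = np.zeros((dim, dim))   # sum of outer products (Welford)
        self.k = k

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.M2 += np.outer(delta, x - self.mean)

    def basis(self):
        cov = self.M2 / max(self.n - 1, 1)
        w, V = np.linalg.eigh(cov)
        return V[:, np.argsort(w)[::-1][:self.k]]  # top-k eigenvectors

    def reconstruction_error(self, x):
        """Distance of x from the learned appearance subspace."""
        V = self.basis()
        c = x - self.mean
        return np.linalg.norm(c - V @ (V.T @ c))

# Feed samples that lie near a 1-D subspace (direction d) plus small noise.
rng = np.random.default_rng(0)
d = np.zeros(5); d[0] = 1.0
model = OnlineAppearanceModel(dim=5, k=1)
for _ in range(200):
    model.update(rng.normal() * 3.0 * d + rng.normal(0, 0.05, 5))
```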

7.
Automatic facial expression recognition constitutes an active research field due to the latest advances in computing technology that make the user's experience a clear priority. The majority of work conducted in this area involves 2D imagery, despite the problems this presents due to inherent pose and illumination variations. In order to deal with these problems, 3D and 4D (dynamic 3D) recordings are increasingly used in expression analysis research. In this paper we survey the recent advances in 3D and 4D facial expression recognition. We discuss developments in 3D facial data acquisition and tracking, and present currently available 3D/4D face databases suitable for 3D/4D facial expression analysis, as well as a detailed account of the existing facial expression recognition systems that exploit either 3D or 4D data. Finally, challenges that have to be addressed if 3D facial expression recognition systems are to become a part of future applications are extensively discussed.

8.
This paper investigates the effect of partial occlusion on facial expression recognition. Classification of partially occluded images into one of the six basic facial expressions is performed using a method based on Gabor wavelet texture information extraction, a supervised image decomposition method based on Discriminant Non-negative Matrix Factorization, and a shape-based method that exploits the geometrical displacement of certain facial features. We demonstrate how partial occlusion affects the above-mentioned methods in the classification of the six basic facial expressions, and indicate how partial occlusion affects human observers when recognizing facial expressions. An attempt is also made to identify which part of the face (left, right, lower, or upper region) contains more discriminant information for each facial expression, and conclusions are drawn regarding the pairs of facial expressions that each type of occlusion causes to be confused.
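The NMF backbone of the decomposition method above can be sketched with the classic multiplicative updates, V ≈ W·H with nonnegative factors. The discriminant constraints of DNMF are omitted here; this is plain NMF on a random low-rank matrix, purely as a sketch.

```python
import numpy as np

def nmf(V, k, iters=500, seed=0):
    """Multiplicative-update NMF: factor V ≈ W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 0.1
    H = rng.random((k, n)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # update H, keep W fixed
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # update W, keep H fixed
    return W, H

# Exactly rank-2, nonnegative data: NMF should reconstruct it closely.
rng = np.random.default_rng(1)
V = rng.random((20, 2)) @ rng.random((2, 30))
W, H = nmf(V, k=2)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

In the facial-expression setting, columns of V would be vectorized face images and columns of W the learned (discriminant, in DNMF) basis images.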

9.
Facial Expression Feature Extraction Based on the Gabor Wavelet Transform
叶敬福, 詹永照. Computer Engineering (《计算机工程》), 2005, 31(15): 172-174
This paper proposes a facial expression feature extraction algorithm based on the Gabor wavelet transform. For static grayscale images containing expression information, the images are first preprocessed; a Gabor wavelet transform is then applied to the expression subregions to extract expression feature vectors and build an expression elastic graph. Finally, the features extracted from the six basic expressions performed by different subjects under different illumination conditions are analyzed and compared. The results show that the Gabor wavelet transform effectively extracts features related to expression changes while suppressing the influence of illumination variation and individual differences.

10.
In this paper a real-time 3D pose estimation algorithm using range data is described. The system relies on a novel 3D sensor that generates a dense range image of the scene. By not relying on brightness information, the proposed system guarantees robustness under a variety of illumination conditions and scene contents. Efficient face detection using global features and exploitation of prior knowledge, along with novel feature localization and tracking techniques, are described. Experimental results demonstrate accurate estimation of the six degrees of freedom of the head and robustness under occlusions, facial expressions, and head shape variability.

11.
To address factors such as ethnicity, gender, and age variation faced by facial expression recognition in unconstrained environments, a robust facial expression recognition method based on deep conditional random forests is proposed. Unlike traditional single-task facial expression recognition methods, a multi-task recognition model is designed in which expression recognition is the primary task and gender and age recognition are auxiliary tasks. The study finds that face attributes such as gender and age have some influence on expression recognition; to capture the relationships among them, ...

12.
A new method for facial expression recognition in video is proposed. The method splits recognition into two parts: facial expression feature extraction and classification. First, a point-tracking Active Shape Model (ASM) extracts geometric expression features from faces in video; then a new local support vector machine classifier classifies the expressions. Comparative experiments with the KNN, SVM, KNN-SVM, and LSVM classifiers on the Cohn-Kanade database verify the effectiveness of the proposed method.

13.
This paper explores the use of multisensory information fusion with dynamic Bayesian networks (DBN) for modeling and understanding the temporal behaviors of facial expressions in image sequences. Our facial feature detection and tracking based on active IR illumination provides reliable visual information under variable lighting and head motion. Our approach to facial expression recognition lies in the proposed dynamic and probabilistic framework, which combines DBN with Ekman's facial action coding system (FACS) to systematically model the dynamic and stochastic behaviors of spontaneous facial expressions. The framework not only provides a coherent and unified hierarchical probabilistic framework to represent spatial and temporal information related to facial expressions, but also allows us to actively select the most informative visual cues from the available information sources to minimize the ambiguity in recognition. The recognition of facial expressions is accomplished by fusing not only the current visual observations but also previous visual evidence. Consequently, the recognition becomes more robust and accurate through explicitly modeling the temporal behavior of facial expression. In this paper, we present the theoretical foundation underlying the proposed probabilistic and dynamic framework for facial expression modeling and understanding. Experimental results demonstrate that our approach can accurately and robustly recognize spontaneous facial expressions from an image sequence under different conditions.

14.
Facial expression analysis, in which a computer attempts to understand human emotion by analyzing facial information, has become a hot topic in computer vision. Its challenges include difficult data annotation, poor label consistency across annotators, large facial pose variation in natural environments, and occlusion. To promote the development of facial expression analysis, this paper surveys its related tasks, progress, challenges, and future trends. First, it briefly describes the common tasks, basic algorithmic frameworks, and databases of facial expression analysis; second, it reviews facial expression recognition methods, including traditional hand-crafted feature methods and deep learning methods; it then summarizes the open problems and challenges of facial expression recognition; finally, it discusses future trends. Through this comprehensive review and discussion, the following conclusions are drawn: 1) for the small scale of reliable facial expression databases, transfer learning from face recognition models and semi-supervised learning on unlabeled data are two important strategies; 2) affected by ambiguous expressions, low-quality images, and annotator subjectivity, the labels of in-the-wild facial expression data carry some uncertainty, and suppressing these factors allows deep networks to learn genuine expression features; 3) for occlusion and large pose, fusing local patches is an effective strategy; another worthwhile strategy is to first learn an occlusion- and pose-robust model on a large-scale face recognition database and then transfer it to facial expression recognition; 4) because deep-learning-based expression recognition methods depend on many hyperparameters, current methods are not readily comparable, and different methods should be evaluated against common simple baselines. Although expression analysis in unconstrained natural environments has developed rapidly, the above problems and challenges remain open. Facial expression analysis is a practical task; beyond accuracy, future work should also consider runtime and storage cost, and could infer expression categories from high-accuracy facial action unit detection results in unconstrained environments.

15.
A Robust Fully Automatic Facial Feature Point Localization Method
The goal of facial feature point localization is fully automatic, precise localization of facial landmarks. The publication of the Active Shape Model (ASM) and the Active Appearance Model (AAM) provided an effective framework for this task, and much subsequent work has improved on them. However, existing work does not yet handle localization well under expression, illumination, and pose variation. This paper proposes a fully automatic facial feature point localization algorithm within the ASM framework. It differs from traditional ASM and its variants in two respects: 1) an effective machine learning method is introduced to build the local texture model. Instead of modeling local texture with gray-level gradient distributions as in traditional ASM, a random forest classifier over point-pair comparison features is used; based on statistical learning from large sample sets, this effectively addresses the illumination and expression difficulties in facial feature point localization. 2) In model parameter optimization, the classifier outputs are successfully incorporated into the objective function, and a shape-constraint term is added to make the objective more reasonable. Experiments on face data with expression, illumination, and pose variation show that the proposed fully automatic method adapts effectively to illumination and expression changes; test results on a pose database confirm its effectiveness.
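The point-pair comparison features plus random-forest idea can be sketched as follows; the patch generator, pair sampling, and scikit-learn's `RandomForestClassifier` are stand-ins for the paper's own training setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pair_features(patch, pairs):
    """Point-pair comparison features: 1 if intensity at p exceeds q, else 0."""
    flat = patch.ravel()
    return np.array([1.0 if flat[p] > flat[q] else 0.0 for p, q in pairs])

rng = np.random.default_rng(0)
n_pix = 11 * 11
# Randomly sampled pixel-index pairs (fixed once, reused for every patch).
pairs = [(rng.integers(n_pix), rng.integers(n_pix)) for _ in range(64)]

def make_patch(kind):
    """Synthetic 11x11 patches: 'edge' is bright on the left, 'flat' is noise."""
    base = rng.random((11, 11)) * 0.2
    if kind == "edge":
        base[:, :5] += 1.0
    return base

X = np.stack([pair_features(make_patch(k), pairs)
              for k in ["edge"] * 40 + ["flat"] * 40])
y = np.array([1] * 40 + [0] * 40)   # 1 = feature-point-like patch

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
```

Because the features are pure intensity comparisons, they are invariant to any monotone illumination change, which is the property the paper exploits.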

16.
In the real world, each individual expresses the same expression in a different way. Based on this observation, this paper proposes a Local Feature Clustering (LFA) loss function that, during deep neural network training, reduces the differences between images of the same class and enlarges the differences between images of different classes, thereby weakening the effect of expression polymorphism on features extracted by deep learning. Since local regions with rich expression convey facial expression features better, a deep learning network framework incorporating the LFA loss is proposed, and the extracted local features of facial images are used for facial expression recognition. Experimental results demonstrate the effectiveness of the method on the in-the-wild RAF dataset and the lab-controlled CK+ dataset.
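The pull-together/push-apart objective can be sketched numerically: an intra-class term pulls features toward their class centre, and a hinged inter-class term penalizes centres closer than a margin. This is a simplified, non-differentiated sketch of such a clustering loss, not the paper's exact LFA formulation; the margin value is an assumption.

```python
import numpy as np

def clustering_loss(features, labels, margin=4.0):
    """Intra-class pull + margin-based inter-class push between class centres."""
    classes = sorted(set(int(l) for l in labels))
    centers = {c: features[labels == c].mean(axis=0) for c in classes}
    # Pull: mean squared distance of each feature to its own class centre.
    intra = np.mean([np.sum((f - centers[int(l)]) ** 2)
                     for f, l in zip(features, labels)])
    # Push: hinge on squared distance between every pair of class centres.
    inter, pairs = 0.0, 0
    for i, a in enumerate(classes):
        for b in classes[i + 1:]:
            d2 = np.sum((centers[a] - centers[b]) ** 2)
            inter += max(0.0, margin - d2)   # only penalise close centres
            pairs += 1
    return intra + inter / max(pairs, 1)

# Two feature configurations: tight well-separated clusters vs loose overlapping ones.
rng = np.random.default_rng(0)
labels = np.array([0] * 20 + [1] * 20)
tight_far = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
                       rng.normal(5.0, 0.1, (20, 2))])
loose_near = np.vstack([rng.normal(0.0, 1.0, (20, 2)),
                        rng.normal(0.5, 1.0, (20, 2))])
```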

17.
朱虹, 李千目, 李德强. Computer Science (《计算机科学》), 2018, 45(4): 273-277, 284
Deep learning has achieved notable results in facial landmark localization. However, owing to the complex variability of facial images caused by pose, illumination, expression, and occlusion, localizing a large number of facial landmarks remains a challenging problem. Existing deep learning methods for facial landmark localization are based on cascaded networks or task-constrained deep convolutional networks, which are not only complex but also very difficult to train. To address these problems, a new multi-landmark localization method based on a single convolutional neural network is proposed. Unlike cascaded networks, this network contains three stacked groups, each consisting of two convolutional layers and a max-pooling layer. This structure extracts more global high-level features and represents facial landmarks more precisely. Extensive experiments show that the proposed method outperforms existing methods under complex variations in pose, illumination, expression, and occlusion.

18.
In 3D facial expression recognition, algorithms based on the Local Binary Pattern (LBP) operator extract features more accurately and finely than traditional feature extraction algorithms and are invariant to illumination, but they also suffer from high histogram dimensionality, weak discriminative power, and large redundancy. This paper proposes a CBP algorithm that extracts CBP features from multi-scale blocks over the whole image, extracting classification features more effectively, combined with a sparse representation classifier for feature classification and recognition. Experimental results show that, compared with the traditional LBP algorithm and SVM classification, the proposed algorithm substantially improves the recognition rate for facial expressions.
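The LBP baseline and the block-wise histogram descriptor can be sketched in NumPy: compute the 8-neighbour binary code per pixel, then concatenate normalized per-block histograms. The block grid size is an illustrative choice; the CBP variant and the sparse-representation classifier are not reproduced here.

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour LBP code for each interior pixel."""
    c = img[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]   # shifted neighbour view
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def block_histograms(code, blocks=4):
    """Concatenate normalized 256-bin histograms over a blocks x blocks grid."""
    h, w = code.shape
    feats = []
    for by in range(blocks):
        for bx in range(blocks):
            b = code[by * h // blocks:(by + 1) * h // blocks,
                     bx * w // blocks:(bx + 1) * w // blocks]
            hist, _ = np.histogram(b, bins=256, range=(0, 256))
            feats.append(hist / max(b.size, 1))
    return np.concatenate(feats)

img = (np.arange(100).reshape(10, 10) % 7).astype(np.uint8)
feat = block_histograms(lbp_image(img), blocks=2)
```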

19.
Person-independent, emotion-specific facial feature tracking has been actively studied in the machine vision community for decades. Among the various methods, the Constrained Local Model (CLM) has shown significant results in person-independent feature tracking. In this paper, we propose an automatic, efficient, and robust method for emotion-specific facial feature detection and tracking from image sequences. A novel tracking system, along with a 17-point feature model on the frontal face region, is also proposed to facilitate the tracking of basic human facial expressions. The proposed feature tracking system keeps patch images and face shapes up to a certain number of key frames, incorporating a CLM-based tracker. After that, incremental patch and shape clustering algorithms are applied to build an appearance model of similar patches and a structure model of similar shapes, respectively. The clusters in each model are built and updated incrementally and online, controlled by the amount of facial muscle movement. The overall performance of the proposed Robust Incremental Clustering-based Facial Feature Tracking (RICFFT) is evaluated on the FGnet database and the Extended Cohn-Kanade (CK+) database. RICFFT demonstrates mean tracking accuracy of 97.45% and 96.64% on the FGnet and CK+ databases, respectively. RICFFT is also more robust, with a minimized average shape distortion error of 0.20% and 1.86% for the FGnet and CK+ (apex frame) databases, compared with the classic CLM method.

20.
Taking facial expression video sequences as the object of study, facial expressions are recognized and their intensity measured using a statistical learning method, the support vector machine. An improved feature point tracking method is used to extract facial feature deformations. A nonlinear dimensionality reduction method, isometric feature mapping (Isomap), automatically generates the expression intensity range, extracting a one-dimensional expression intensity from the high-dimensional feature point trajectories. Finally, SVMs are used to build the expression model and the intensity model, classifying expressions and grading the intensity levels of the happiness expression. Experiments demonstrate the effectiveness of this expression analysis method.
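The Isomap step above (high-dimensional trajectories to a 1-D intensity axis) can be sketched from scratch: build a k-nearest-neighbour graph, compute geodesic distances with Floyd-Warshall, then apply classical MDS and keep the top coordinate. This is generic textbook Isomap on a toy curve, not the paper's pipeline; k is an illustrative choice.

```python
import numpy as np

def isomap_1d(X, k=4):
    """Minimal Isomap: kNN graph -> geodesic distances -> classical MDS (1-D)."""
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    idx = np.argsort(D, axis=1)[:, 1:k + 1]       # k nearest neighbours
    for i in range(n):
        for j in idx[i]:
            G[i, j] = G[j, i] = D[i, j]           # symmetric kNN edges
    for m in range(n):                            # Floyd-Warshall shortest paths
        G = np.minimum(G, G[:, m:m + 1] + G[m:m + 1, :])
    # Classical MDS on squared geodesic distances, keep the top component.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (G ** 2) @ J
    w, V = np.linalg.eigh(B)
    return V[:, -1] * np.sqrt(max(w[-1], 0.0))

# Toy "trajectory": points along a semicircle; the recovered 1-D coordinate
# should vary monotonically along the curve (an intensity-like ordering).
t = np.linspace(0, np.pi, 30)
X = np.stack([np.cos(t), np.sin(t)], axis=1)
emb = isomap_1d(X, k=4)
```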
