Similar Articles
1.
Yeon-Sik Ryu, Se-Young Oh. Pattern Recognition, 2001, 34(12): 2459-2466
This paper presents a novel algorithm for extracting the eye and mouth (facial feature) fields from 2-D gray-level face images. The fundamental idea is that eigenfeatures, derived from the eigenvalues and eigenvectors of the binary edge data set constructed from the eye and mouth fields, are very good features for locating these fields efficiently. The eigenfeatures extracted from positive and negative training samples of the facial features are used to train a multilayer perceptron whose output indicates the degree to which a particular image window contains an eye or a mouth. It turns out that only a small number of frontal faces are sufficient to train the networks. Furthermore, the networks generalize well to non-frontal poses and even to other people's faces. It has been experimentally verified that the proposed algorithm is robust to facial size and slight variations in pose.
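As a rough illustration of the eigenfeature idea (not the authors' exact pipeline), the eigenvalues of the covariance of edge-pixel coordinates inside a window summarize the spatial spread of a binary edge pattern; the window below is synthetic:

```python
import numpy as np

def eigenfeatures(edge_window: np.ndarray) -> np.ndarray:
    """Eigenvalues of the (y, x) coordinate covariance of edge pixels
    in a binary window -- a rough stand-in for the paper's eigenfeatures."""
    ys, xs = np.nonzero(edge_window)
    coords = np.stack([ys, xs], axis=1).astype(float)
    cov = np.cov(coords, rowvar=False)      # 2x2 covariance of edge pixels
    eigvals, _ = np.linalg.eigh(cov)        # ascending eigenvalues
    return eigvals

# A horizontal bar of edge pixels: variance along x dominates.
win = np.zeros((9, 9), dtype=int)
win[4, 1:8] = 1
vals = eigenfeatures(win)
```

For an elongated edge pattern like this, one eigenvalue is near zero and the other large, so the pair discriminates shapes such as eye and mouth edges.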

2.
Feature-point-based 3D face recognition under expression variation
Objective: To overcome the effect of expression variation on 3D face recognition, a method is proposed that extracts local-region features at facial feature points. Method: First, facial feature points are coarsely located on the depth image with the 2D ASM (active shape model) algorithm and then refined on the face point cloud using shape index features. Next, a series of iso-geodesic contour lines centered at the nose is extracted to characterize the facial shape, and pose-invariant Procrustean vector features (distances and angles) are extracted as recognition features. Finally, the classification results of the individual iso-geodesic contours are compared and fused at the decision level. Results: Feature-point localization and recognition experiments on the FRGC v2.0 face database give a mean localization error below 2.36 mm and a Rank-1 recognition rate of 98.35%. Conclusion: By extracting features from the approximately rigid facial region around the feature points, the method effectively avoids the mouth region, which is strongly affected by expression. Experiments confirm high recognition accuracy and a degree of robustness to pose and expression variation.
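The shape index used to refine the feature points is a standard function of the two principal curvatures; a minimal sketch, using one common convention (values in [0, 1]; under this convention 0 is a dome/cap, 0.5 a saddle, 1 a cup) that may differ from the paper's exact formulation:

```python
import numpy as np

def shape_index(k1, k2):
    """Shape index from principal curvatures (order-independent).
    Convention here: 0 = dome/cap, 0.5 = saddle, 1 = cup."""
    hi, lo = np.maximum(k1, k2), np.minimum(k1, k2)
    # arctan2 handles the umbilic case hi == lo without division by zero
    return 0.5 - (1.0 / np.pi) * np.arctan2(hi + lo, hi - lo)

s_saddle = shape_index(1.0, -1.0)   # symmetric saddle
s_dome = shape_index(1.0, 1.0)      # spherical cap
```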

3.
The eyebrows, eyes, and mouth carry the most expression information in a face. To exploit these features fully, reduce the memory consumed by irrelevant image content during recognition, and improve the accuracy and speed of facial expression recognition, the system first detects the face with a Haar + AdaBoost face detection algorithm, extracts the eyebrow, eye, and mouth regions, and generates local (eyebrow, eye, mouth) binarized maps. PCA is then applied to reduce the dimensionality of the face image, and the reduced global and local gray-level feature values are concatenated into a single column vector. Samples are drawn from an expression database, and expression recognition is performed after neural-network training. Results show that the system recognizes expressions noticeably faster than a Gabor-wavelet algorithm, with higher accuracy than PCA or a neural network used alone, and with lower memory consumption and smoother operation than the Gabor-wavelet approach. Conclusion: extracting the eyebrow, eye, and mouth regions, where expression information is concentrated, preserves as much of this local feature information as possible and thus raises recognition accuracy, while PCA dimensionality reduction of the original images effectively removes redundant information.
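The PCA dimensionality-reduction step described above can be sketched generically with an SVD; the sample count, pixel count, and number of components below are made up for illustration:

```python
import numpy as np

def pca_reduce(X: np.ndarray, k: int):
    """Project row-vector samples X (n_samples x n_pixels) onto the
    top-k principal components -- a generic sketch of the PCA step."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data: right singular vectors = principal axes
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k], mean

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 64))          # 20 "images" of 64 pixels each
Z, components, mean = pca_reduce(X, k=5)
```

The reduced vectors `Z` (here 5-dimensional instead of 64) are what would be concatenated with the local features before classification.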

4.
Liu Shanshan, Wang Ling. Journal of Computer Applications (计算机应用), 2009, 29(11): 3040-3043
For static gray-level images containing expression information, a local Gabor-wavelet facial expression recognition algorithm based on automatic segmentation is proposed. First, mathematical morphology combined with integral projection locates the eyebrow-eye region, and an in-template mean computation locates the mouth region, automatically segmenting the expression subregions. Gabor wavelet transforms are then applied to the segmented subregions to extract expression features, which are selected with Fisher linear discriminant analysis to remove their redundancy and correlation. Finally, a support vector machine classifies the facial expressions. The algorithm was tested on the JAFFE (Japanese Female Facial Expression) database; it is fully automatic and easy to implement, and the results confirm its effectiveness.
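The Gabor features come from convolving the subregions with a bank of Gabor kernels; a minimal sketch of one real-valued kernel (the size, scale, and wavelength values are arbitrary, not the paper's):

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0):
    """Real part of a 2-D Gabor kernel: a Gaussian envelope times a
    cosine carrier along orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam)
    return envelope * carrier

k = gabor_kernel()
```

A full bank would vary `theta` and `lam` over several orientations and scales and convolve each kernel with the image region.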

5.
To improve the accuracy of facial age prediction, a deep-learning-based algorithm is proposed. Face images are preprocessed to obtain local images of four parts: the left eye, right eye, nose, and mouth. The Inception V4 model from the TensorFlow deep learning library is transferred to extract multi-scale local features from the four parts, and the extracted local features are concatenated into a fused feature. The fused features for different ages are then fed into a bidirectional long short-term memory (BiLSTM) network to learn the correlations among fused features of different ages, completing the age prediction. Experimental results on the public FG-NET and MORPH datasets show that, by exploiting multi-scale fused facial features and the correlations among fused features of different ages, the algorithm significantly improves the accuracy and robustness of age prediction.

6.
7.
A robust facial feature localization method
A facial feature localization method based on the AdaBoost algorithm and the C-V (Chan-Vese) method is proposed. First, face, eye, nose, and mouth detectors are trained on samples with AdaBoost. The face detector, combined with prior rules on face edge images, extracts the face region; the eye, nose, and mouth detectors then locate the rectangular regions containing the facial features within it. Finally, the C-V method segments the feature contours from each region, yielding the positions of the key facial feature points. On the DTU IMM face test set, the detection rates are 100% for eyes, 95.3% for noses, and 98.4% for mouths, with accurately located feature points. The experimental results show the method is effective and robust.

8.
This paper applies convolutional neural networks (CNN) to video understanding and proposes a driver fatigue detection algorithm based on fused multi-facial features. Multi-Task Cascaded Convolutional Networks (MTCNN) locate the driver's mouth and left eye; a CNN extracts static features from the mouth and left-eye images and dynamic features from their optical-flow maps, and the combined features are used for training and classification. Experimental results show that the algorithm outperforms fatigue detection using static images alone, reaching 87.4% accuracy, and distinguishes well between yawning and talking, which look very similar in static images.

9.
Mouth feature extraction is an important step in human-computer interaction, face recognition, and face reconstruction. An improved deformable-template algorithm for mouth feature extraction is proposed. After preprocessing, an improved integral projection method determines the horizontal positions of the facial features, and analysis of the projection curve locates the mouth center. Edge and gray-level information then drive a global search for the optimal parameters of the designed deformable template. The algorithm first extracts the mouth contour and then judges from the curve constraints whether the mouth is open. Experiments show that for face images with simple backgrounds the algorithm extracts mouth features well and runs fast.
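The integral projection step amounts to summing pixel intensities along rows and columns; a minimal sketch with a synthetic image standing in for a face region:

```python
import numpy as np

def integral_projections(gray: np.ndarray):
    """Horizontal and vertical integral projections of a gray image --
    the classic cue used to find a feature's vertical/horizontal position."""
    horiz = gray.sum(axis=1)   # one value per row
    vert = gray.sum(axis=0)    # one value per column
    return horiz, vert

img = np.zeros((6, 8))
img[4, 2:6] = 255              # a bright "mouth" stripe on row 4
h, v = integral_projections(img)
mouth_row = int(np.argmax(h))
```

In practice the projection of a preprocessed (e.g. edge or inverted-intensity) image is analyzed, so peaks correspond to the dark mouth line rather than to bright pixels.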

10.
Building on the theory of facial landmark detection, a method based on a deep convolutional neural network (DCNN) is proposed for predicting five facial landmarks (eye corners, nose, mouth corners). Network depth is increased steadily by adding more convolutional layers, all using 3×3 convolution filters, which effectively reduces the parameter count and handles the landmark detection problem well. The homography between the plane formed by the two eye corners and the mouth corners and the same plane in a frontal view is then computed, and an equivalent algorithm solves for the driver's facial rotation angle. Experimental results show a landmark detection accuracy of 97.96% and an angle-estimation error of 1° to 5°, demonstrating the algorithm's effectiveness for monitoring driver distraction.
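The homography between the two planes can be estimated with the standard direct linear transform (DLT); the point correspondences below are synthetic (a pure translation), not facial data:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (>= 4 point pairs)
    with the direct linear transform."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # the homography is the null vector of A (smallest singular vector)
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Corners of a unit square mapped by a pure translation of (2, 1).
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(2, 1), (3, 1), (3, 2), (2, 2)]
H = homography_dlt(src, dst)
```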

11.
This paper presents a wavelet-based texture segmentation method using multilayer perceptron (MLP) networks and Markov random fields (MRF) in a multi-scale Bayesian framework. Inputs and outputs of the MLP networks are constructed to estimate a posterior probability. The multi-scale features produced by multi-level wavelet decompositions of textured images are classified at each scale by maximum a posteriori (MAP) classification using the posterior probabilities from the MLP networks. An MRF model is used to model the prior distribution of each texture class, and a factor, which fuses the classification information across scales and acts as a guide for the labeling decision, is incorporated into the MAP classification at each scale. By fusing the multi-scale MAP classifications sequentially from coarse to fine scales, the proposed method obtains the final, improved segmentation result at the finest scale. In this fusion process, the MRF model serves as the smoothness constraint and the Gibbs sampler acts as the MAP classifier. The method was applied to segmentation of gray-level textured images and shows better performance than texture segmentation using the hidden Markov trees (HMT) model and the HMTseg algorithm, a multi-scale Bayesian image segmentation algorithm.

12.
For faces with a rotation angle, the initial fitting position and the model's orientation strongly affect facial landmark localization. The traditional AAM (Active Appearance Models) method does not specifically address this problem, so its localization accuracy and speed on rotated faces are unsatisfactory. To solve this, a method is proposed that computes the face rotation angle from the coordinates of the two eye centers and the mouth center and uses these coordinates and the angle to determine the initial fitting position and template. AdaBoost and YCbCr color filtering pre-detect the face; the rotation angle is computed from the detected feature regions; and an inverse fitting algorithm combined with the template at that angle localizes the landmarks. Test results show that for rotated faces the method improves on the traditional approach in both accuracy and speed.
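Computing an in-plane rotation angle from two eye centers, as described above, reduces to an arctangent; a minimal sketch with made-up pixel coordinates:

```python
import numpy as np

def face_roll_deg(left_eye, right_eye):
    """In-plane rotation (roll) of a face, in degrees, from the image
    coordinates (x, y) of its two eye centers."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return float(np.degrees(np.arctan2(dy, dx)))

angle = face_roll_deg((100, 120), (160, 150))   # right eye 30 px lower
```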

13.
Objective: Expression variation is the main problem facing 3D face recognition. To overcome its influence, an expression-robust 3D face recognition method based on facial profile curves is proposed. Method: First, faces are preprocessed: the face region is cropped, smoothed, and pose-normalized so that all faces share a pose coordinate frame. Then, multiple vertical profile curves are extracted from the semi-rigid region of the 3D face model to represent the facial surface. Finally, an elastic curve matching algorithm computes the geodesic distance in preshape space between corresponding profile curves of different 3D face models as the similarity measure, and the similarity vectors of all profile curves are fused with weights into an overall similarity used for classification. Results: Recognition experiments on the FRGC v2.0 database achieve a Rank-1 recognition rate of 97.1%. Conclusion: Representing the face by multiple profile curves extracted from its semi-rigid region weakens the influence of expression to some extent and also speeds up face matching. The experimental results show strong recognition performance and good robustness to expression variation.

14.
《Real》2000,6(1):3-16
Automatic wire-frame fitting and automatic wire-frame tracking are the two most important and most difficult issues associated with semantic-based moving image coding. A novel approach to high-speed tracking of important facial features is presented as part of a complete fitting-tracking system. The method allows real-time processing of head-and-shoulders sequences using software tools only. The algorithm is based on eigenvalue decomposition of the sub-images extracted from subsequent frames of the video sequence. Each important facial feature (the left eye, the right eye, the nose, and the lips) is tracked separately using the same method. The algorithm was tested on widely used head-and-shoulders video sequences containing the speaker's head pan, rotation, and zoom, with remarkably good results. These experiments prove that it is possible to maintain tracking even when the facial features are partially occluded.

15.
Towards a system for automatic facial feature detection
A model-based methodology is proposed to detect facial features from a front-view ID-type picture. The system is composed of three modules: context (i.e. face location), eye, and mouth. The context module is a low-resolution module which defines a face template in terms of intensity valley regions. The valley regions are detected using morphological filtering and 8-connected blob coloring. The objective is to generate a list of hypothesized face locations ranked by face likelihood. The detailed analysis is left to the high-resolution eye and mouth modules, whose aim is to confirm as well as refine the locations and shapes of their respective features of interest. The detection is done via a two-step modelling approach based on the Hough transform and the deformable template technique. The results show that with the proposed system, facial features can be located very quickly with adequate or better fit in over 80% of the images.
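Valley detection with morphological filtering is commonly done as a bottom-hat transform (grayscale closing minus the image), which responds strongly at dark regions such as eyes and mouths; a minimal sketch on a synthetic image, assuming this standard operator rather than the paper's exact filter:

```python
import numpy as np
from scipy.ndimage import grey_closing

def valley_map(gray: np.ndarray, size: int = 3) -> np.ndarray:
    """Morphological bottom-hat (closing minus image): bright response
    at intensity valleys."""
    return grey_closing(gray, size=size) - gray

img = np.full((7, 7), 200.0)
img[3, 3] = 50.0                 # one dark "valley" pixel
v = valley_map(img)
```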

16.
Face detection underpins fully automatic face recognition systems and many surveillance systems, and it is widely applied in many fields. This paper proposes a face detection method based on multi-component information fusion. First, a color-space transform is performed to detect the skin-color regions in the image. An eye color model is then built, and the eyes and mouth are extracted according to their distribution characteristics in different components. Finally, the information from the candidate eye and mouth regions is fused, and a feature-invariant method confirms the face. Experimental results show that the method detects faces quickly and effectively and can determine the positions of the eyes and mouth.

17.
Head pose estimation is a key task for visual surveillance, HCI, and face recognition applications. In this paper, a new approach is proposed for estimating 3D head pose from a monocular image. The approach assumes the full perspective projection camera model. It employs general prior knowledge of face structure and the geometrical constraints provided by the location of a certain vanishing point to determine the pose of human faces. To achieve this, the eye-lines, formed from the far and near eye corners, and the mouth-line of the mouth corners are assumed parallel in 3D space. The vanishing point of these parallel lines, found by the intersection of the eye-line and mouth-line in the image, can then be used to infer the 3D orientation and location of the human face. In order to deal with the variance of the facial model parameters, e.g. the ratio between the eye-line and the mouth-line, an EM framework is applied to update the parameters. We first compute the 3D pose using some initially learnt parameters (such as ratio and length) and then adapt the parameters statistically for individual persons and their facial expressions by minimizing the residual errors between the projection of the model feature points and the actual features on the image. In doing so, we assume every facial feature point can be associated with each of the feature points in the 3D model with some a posteriori probability. The expectation step of the EM algorithm provides an iterative framework for computing the a posteriori probabilities using Gaussian mixtures defined over the parameters. A robustness analysis of the algorithm on synthetic data and on real images with known ground truth is included.
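Intersecting the eye-line and mouth-line to obtain the vanishing point is a one-liner in homogeneous coordinates (a line through two points, and the intersection of two lines, are both cross products); the corner coordinates below are synthetic:

```python
import numpy as np

def vanishing_point(eye_l, eye_r, mouth_l, mouth_r):
    """Intersection of the eye-line and mouth-line, computed with
    homogeneous coordinates."""
    def h(p):
        return np.array([p[0], p[1], 1.0])
    eye_line = np.cross(h(eye_l), h(eye_r))        # line through eye corners
    mouth_line = np.cross(h(mouth_l), h(mouth_r))  # line through mouth corners
    vp = np.cross(eye_line, mouth_line)            # their intersection
    return vp[:2] / vp[2]                          # back to image coordinates

vp = vanishing_point((0, 0), (2, 1), (0, 2), (2, 2))
```

If the two lines are parallel in the image (frontal pose), the homogeneous intersection has a zero third component, i.e. the vanishing point is at infinity; real code should check for that case.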

18.
Multi-pose face detection based on facial features and the AdaBoost algorithm
An improved multi-pose face detection algorithm based on facial features and the AdaBoost algorithm is proposed. First, skin-color features quickly eliminate the vast majority of background regions; then the eye and mouth regions are searched within the skin regions, and roughly upright face candidate regions are segmented according to the face orientation determined by the geometric features of the eye and mouth regions; finally, the AdaBoost algorithm classifies the candidate regions. Experiments show that the algorithm detects multi-pose faces quickly and is quite robust to facial expressions and occlusion.
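Skin-color filtering in a chrominance space is often implemented as a simple box rule on the Cb/Cr channels; a minimal sketch using commonly cited thresholds (77 ≤ Cb ≤ 127, 133 ≤ Cr ≤ 173), which are an assumption here, not the paper's exact values:

```python
import numpy as np

def skin_mask(ycbcr: np.ndarray) -> np.ndarray:
    """Boolean skin mask from an image in YCbCr layout (..., [Y, Cb, Cr])
    using a box rule on the chrominance channels."""
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

img = np.zeros((2, 2, 3))
img[0, 0] = (120, 100, 150)   # skin-like Cb/Cr
img[1, 1] = (120, 200, 60)    # clearly non-skin
mask = skin_mask(img)
```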

19.
A novel method for eye and mouth detection and eye center and mouth corner localization, based on geometrical information is presented in this paper. First, a face detector is applied to detect the facial region, and the edge map of this region is calculated. The distance vector field of the face is extracted by assigning to every facial image pixel a vector pointing to the closest edge pixel. The x and y components of these vectors are used to detect the eyes and mouth regions. Luminance information is used for eye center localization, after removing unwanted effects, such as specular highlights, whereas the hue channel of the lip area is used for the detection of the mouth corners. The proposed method has been tested on the XM2VTS and BioID databases, with very good results.
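A distance vector field of the kind described (one vector per pixel pointing at the nearest edge pixel) can be sketched with a Euclidean distance transform that also returns nearest-feature indices; the tiny edge map below is synthetic:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_vector_field(edges: np.ndarray) -> np.ndarray:
    """For every pixel, the (dy, dx) vector to the nearest edge pixel."""
    # EDT measures distance to the nearest zero, so invert the edge map;
    # return_indices gives the coordinates of that nearest edge pixel.
    _, idx = distance_transform_edt(~edges.astype(bool), return_indices=True)
    grid = np.indices(edges.shape)
    return idx - grid                  # shape (2, H, W): dy and dx planes

edges = np.zeros((5, 5), dtype=bool)
edges[2, 2] = True                     # a single edge pixel in the center
field = distance_vector_field(edges)
```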

20.
Most current facial expression recognition research uses convolutional neural networks (CNN) to extract and classify facial features, but CNNs have complex network structures and consume substantial computing resources. To address these drawbacks, this paper adopts a Mixer Layer network structure based on multilayer perceptrons (MLP) for facial expression recognition. Data augmentation and transfer learning are used to compensate for insufficient dataset samples, and Mixer Layer networks of different depths are built. Experimental comparison shows that a 4-layer Mixer Layer network reaches recognition accuracies of 98.71% on CK+ and 95.93% on JAFFE, and an 8-layer Mixer Layer network reaches 63.06% on Fer2013. The results show that the convolution-free Mixer Layer network exhibits good learning and generalization ability on the facial expression recognition task.
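The core of a Mixer layer is two MLPs applied along orthogonal axes of the patch-feature matrix: one mixes across tokens (patches), the other across channels. A heavily simplified numpy sketch (single linear maps with random stand-in weights; LayerNorm and GELU omitted), not the paper's trained model:

```python
import numpy as np

def mixer_layer(x, w_tok, w_ch):
    """One simplified Mixer layer with residual connections:
    token-mixing across the patch axis, then channel-mixing."""
    x = x + (w_tok @ x)            # mix information across tokens (patches)
    x = x + (x @ w_ch)             # mix information across channels
    return x

rng = np.random.default_rng(0)
tokens, channels = 16, 8           # e.g. 16 image patches, 8 features each
x = rng.normal(size=(tokens, channels))
w_tok = rng.normal(size=(tokens, tokens)) * 0.1
w_ch = rng.normal(size=(channels, channels)) * 0.1
y = mixer_layer(x, w_tok, w_ch)
```

Stacking several such layers and pooling over tokens before a classification head gives the convolution-free architecture the abstract evaluates.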


Copyright©北京勤云科技发展有限公司  京ICP备09084417号