Similar Documents
19 similar documents found (search time: 171 ms)
1.
To synthesize images of a face under other poses and expressions from a single face image, a multi-pose facial expression synthesis method based on tensor subspaces is proposed. First, a four-dimensional texture-feature tensor and a shape tensor are constructed from a set of face images with labeled feature points. Next, tensor decomposition yields a core tensor and the projection subspaces of each mode (identity, expression, pose, and feature). Finally, the core tensor together with the expression and pose subspaces is used to construct a new tensor for pose and expression synthesis, so that the intrinsic relationships among the factors affecting the face are fully exploited when synthesizing new face images. Experimental results show that the proposed method can synthesize natural and plausible images of a face under other poses and expressions from a single image with known expression and pose.
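The core-tensor decomposition the abstract describes can be sketched with a small higher-order SVD (HOSVD) in NumPy. The four-mode layout (identity × expression × pose × feature) follows the abstract, but the sizes and random data below are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    """Higher-order SVD: one factor matrix per mode plus a core tensor."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0] for n in range(T.ndim)]
    core = T
    for n, Un in enumerate(U):
        # Mode-n product with Un.T: contract mode n against the factor matrix
        core = np.moveaxis(np.tensordot(Un.T, np.moveaxis(core, n, 0), axes=1), 0, n)
    return core, U

# Toy 4-way tensor: identity x expression x pose x feature
rng = np.random.default_rng(0)
T = rng.standard_normal((5, 3, 4, 20))
core, U = hosvd(T)

# Reconstruct by multiplying the core back with every mode's factor matrix
R = core
for n, Un in enumerate(U):
    R = np.moveaxis(np.tensordot(Un, np.moveaxis(R, n, 0), axes=1), 0, n)
print(np.allclose(R, T))
```

Synthesis in the paper then works by fixing the identity factor and swapping in different rows of the expression and pose factor matrices before recombining with the core tensor.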

2.
To address the effect of facial expression variation on face recognition, a new facial feature extraction method is proposed that combines the discrete wavelet transform (DWT), the eigenface method (PCA), and linear discriminant analysis (LDA). First, the low-frequency component of the face image is extracted with a two-dimensional wavelet transform (2DWT); then the low-frequency image is mapped into a low-dimensional space by PCA; finally, LDA is applied in this low-dimensional space to extract facial features. Tests on the ORL and Yale face databases show that this method achieves more accurate feature extraction and effectively mitigates the impact of expression variation on face recognition. Experimental results show that the method improves both the recognition rate and the recognition speed.
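The three-stage pipeline (wavelet low band → PCA → LDA) can be sketched as follows; the single-level Haar low-pass, the toy two-subject data, and all dimensions are illustrative assumptions rather than the paper's configuration:

```python
import numpy as np

def haar_lowpass(img):
    """One level of 2D Haar DWT: the LL band is (up to scale) the mean of 2x2 blocks."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def pca(X, k):
    """Project rows of X onto the top-k principal components."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return (X - mu) @ Vt[:k].T

def lda(X, y):
    """Fisher discriminant directions from within/between-class scatter."""
    mu = X.mean(axis=0)
    Sw = sum(np.cov(X[y == c].T, bias=True) * (y == c).sum() for c in np.unique(y))
    Sb = sum((y == c).sum() * np.outer(m - mu, m - mu)
             for c in np.unique(y) for m in [X[y == c].mean(axis=0)])
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-evals.real)
    return evecs.real[:, order[:len(np.unique(y)) - 1]]

# Toy data: 2 subjects x 4 images of 8x8, with a class-dependent mean shift
rng = np.random.default_rng(1)
imgs = np.stack([rng.standard_normal((8, 8)) + c for c in [0, 0, 0, 0, 3, 3, 3, 3]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

X = np.stack([haar_lowpass(im).ravel() for im in imgs])  # DWT low band
Z = pca(X, 3)                                            # PCA to 3 dims
W = lda(Z, y)                                            # LDA in the PCA space
F = Z @ W                                                # final discriminant features
print(F[y == 0].mean(), F[y == 1].mean())
```

Working in the low band first is what gives the expression insensitivity; PCA then makes the LDA scatter matrices well conditioned.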

3.
A New Face Recognition Method Based on Spectroface and Fisherface (total citations: 1; self-citations: 1; other citations: 0)
韩凌, 王宏. 《计算机仿真》 (Computer Simulation), 2006, 23(7): 187-190
The spectroface method mainly uses the two-dimensional wavelet transform and the Fourier transform. Because the low-frequency part of a face image is insensitive to changes of facial expression, a two-dimensional wavelet transform is applied to extract the low-frequency part of the face image. The Fourier transform is then applied to this low-frequency part, yielding a low-dimensional representation of the original face image. However, the dimensionality of the spectroface features is still high, so Fisherface features are further extracted from the face spectrum images on top of the spectroface method, reducing the feature dimensionality and improving recognition efficiency. Experiments on the ORL face database show that the recognition system has good recognition ability.

4.
To address face recognition under interference such as pose and expression variation with small sample sizes, a method is proposed that uses a random forest classifier based on Haar features to adaptively locate key points in both the enrolled samples and the face image to be recognized, and then determines initially matched and re-matched key points by the Euclidean distance between SURF (Speeded-Up Robust Features) descriptors to complete recognition, thereby solving the problem of recognizing multi-pose face images in small-sample settings. Experimental results show that the method effectively improves the recognition rate of small-sample face recognition under expression and pose variation.

5.
To reduce the computational complexity caused by high-dimensional data in face research, a manifold learning method based on wavelet decomposition is proposed to reduce the dimensionality of the data. The method applies wavelet decomposition at different levels to the face images, keeps the low-frequency component, and then applies two manifold learning algorithms: locally linear embedding (LLE) and locality preserving projections (LPP). Experiments on the Frey and CMU PIE face databases show how pose and expression variation is distributed in the embeddings, and the running time and the energy of the low-frequency subimages obtained by wavelet decomposition are analyzed. The results show that manifold learning based on wavelet decomposition is effective at reducing computational complexity while preserving image information.

6.
罗瑜, 李涛, 何大可, 徐图. 《计算机工程》 (Computer Engineering), 2008, 34(4): 198-200
A component-based face classification method is proposed, in which the discrete cosine transform (DCT) coefficients of facial components serve as feature vectors and classifiers are trained with support vector machines (SVM). Component classifiers locate the component regions in a face image, and the face classifier determines the class of the face image. Simulation experiments on the ORL face database show that the method is robust to expression and pose variation.
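A minimal sketch of the DCT-coefficients-into-SVM idea, assuming scikit-learn for the classifier and synthetic "component" images in place of ORL data (the low-frequency block size and class construction are made up for illustration):

```python
import numpy as np
from scipy.fft import dctn
from sklearn.svm import SVC

def dct_features(img, k=6):
    """Keep the k x k top-left (low-frequency) block of the 2D DCT as the feature vector."""
    return dctn(img, norm="ortho")[:k, :k].ravel()

# Toy "component" images: two classes with different dominant frequency content
rng = np.random.default_rng(2)
def sample(c, n):
    base = np.outer(np.sin(np.linspace(0, (c + 1) * np.pi, 16)), np.ones(16))
    return [base + 0.1 * rng.standard_normal((16, 16)) for _ in range(n)]

X = np.stack([dct_features(im) for c in (0, 1) for im in sample(c, 20)])
y = np.repeat([0, 1], 20)

clf = SVC(kernel="rbf").fit(X[::2], y[::2])   # train on half the samples
print(clf.score(X[1::2], y[1::2]))            # evaluate on the other half
```

Keeping only the low-frequency DCT block is what makes the features compact and comparatively stable under expression changes.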

7.
A face recognition method based on generalized discriminant analysis is proposed. Samples are mapped into a high-dimensional linear space via a nonlinear kernel function, and a linear discriminant algorithm is then applied in that space, yielding nonlinear discriminant features of the input space that adapt well to complex variations in illumination, expression, and pose in face images. Experiments show that the method achieves higher classification accuracy with fewer feature vectors than the eigenface and Fisherfaces algorithms.

8.
A Face Detection Method Based on Convolutional Neural Networks with a Shunting Inhibition Mechanism (total citations: 1; self-citations: 1; other citations: 1)
A face detection method based on a convolutional neural network with a shunting inhibition mechanism (SICNN) is proposed. The method has a simple structure, few training parameters, and strong adaptability. After the test image is processed by neurons with the shunting inhibition mechanism, the feature information is further enhanced. The resulting face detector can locate faces in images with different expressions, poses, scales, and backgrounds, achieving a high detection rate, fast detection speed, and a low number of false alarms.

9.
刘树利, 胡茂林. 《微机发展》 (Microcomputer Development), 2006, 16(6): 213-215
For face models obtained from different viewpoints, a recognition method based on the face surface is proposed: a planar projective transformation warps the face images onto a common image so that they are aligned, and principal component analysis (PCA) is then used for classification. With this approach, unwanted variation caused by changes in illumination, facial expression, and pose can be removed or neglected, allowing relatively accurate face recognition. Experimental results show that the method provides a better representation of the face model and a lower face recognition error rate.
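The planar projective alignment step can be sketched with a direct linear transform (DLT) homography estimate; the four landmark correspondences below are made-up coordinates, not data from the paper:

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 planar projective transform mapping src -> dst (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography vector spans the (numerical) null space of A
    _, _, Vt = np.linalg.svd(np.array(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pts):
    """Apply a homography to an (n, 2) array of points."""
    p = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return p[:, :2] / p[:, 2:]

# Align four facial landmarks of a skewed view onto a canonical frame
src = np.array([[30., 40.], [90., 38.], [60., 70.], [60., 95.]])   # eyes, nose, mouth
dst = np.array([[20., 30.], [80., 30.], [50., 60.], [50., 90.]])   # canonical positions
H = homography(src, dst)
print(np.allclose(apply_h(H, src), dst))
```

Once every image is warped into the canonical frame, PCA classification proceeds exactly as for frontal images.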

10.
张龙媛, 陈莹. 《计算机工程》 (Computer Engineering), 2012, 38(12): 125-128
Considering the impact of pose and expression variation on face recognition, the SIFT descriptor, which is invariant to image rotation and scale changes, is adopted as the facial feature. Similarity measures are established for each facial subregion, and Gaussian mixtures model the similarity probabilities between same-subject and different-subject samples under different deformations. On this basis, probabilistic weights are obtained from each subregion's discriminative ability, and the recognition result is determined in a probabilistic framework built on Bayes' formula. Experimental results show that, compared with face recognition using the SIFT descriptor directly, the method markedly improves the recognition rate under large pose and expression variation.
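One way to sketch the probabilistic fusion described above: per-subregion similarity scores are evaluated under match/non-match likelihood models and combined through Bayes' formula with subregion weights. All parameters and weights below are invented for illustration, and single Gaussians stand in for the Gaussian mixtures the paper learns:

```python
import numpy as np

def gauss(x, mu, sigma):
    """Gaussian pdf, used as a stand-in likelihood model."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Hypothetical per-subregion models: (mu_match, sd_match, mu_nonmatch, sd_nonmatch, weight)
regions = {
    "eyes":  (0.8, 0.1, 0.3, 0.15, 0.5),
    "nose":  (0.7, 0.1, 0.4, 0.15, 0.3),
    "mouth": (0.6, 0.2, 0.3, 0.20, 0.2),
}

def p_same(scores, prior=0.5):
    """Weighted Bayesian fusion of per-region similarity scores."""
    log_lr = 0.0
    for name, s in scores.items():
        m_mu, m_sd, n_mu, n_sd, w = regions[name]
        log_lr += w * (np.log(gauss(s, m_mu, m_sd)) - np.log(gauss(s, n_mu, n_sd)))
    odds = prior / (1 - prior) * np.exp(log_lr)
    return odds / (1 + odds)

print(p_same({"eyes": 0.82, "nose": 0.75, "mouth": 0.65}))  # similar scores: near 1
print(p_same({"eyes": 0.30, "nose": 0.35, "mouth": 0.30}))  # dissimilar scores: near 0
```

Giving the most discriminative subregion (here, hypothetically, the eyes) the largest weight is what lets the fusion tolerate local deformation in the others.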

11.
Proof and testing are two complementary ways of verifying whether a specification is correct. To address the difficulty of proving software specifications, a theory of completeness testing over the state space is proposed. Using the concepts of constructor functions and restricted state spaces, a method for testing the existence of initial states in the Z specification language is discussed, and its feasibility is demonstrated with an example.

12.
This paper describes a real-time 3D pose-estimation algorithm that uses range data. The system relies on a novel 3D sensor that generates a dense range image of the scene. By not relying on brightness information, the proposed system guarantees robustness under a variety of illumination conditions and scene contents. Efficient face detection using global features, exploitation of prior knowledge, and novel feature localization and tracking techniques are described. Experimental results demonstrate accurate estimation of the six degrees of freedom of the head and robustness under occlusions, facial expressions, and head-shape variability.

13.
A face pose estimation method based on 3D facial depth data is proposed. Using the depth data of the face and its pixel-wise registered grayscale image, key facial feature points are located according to differential-geometric principles, the corresponding curvature computations, and the grayscale features of the face data; the three pose angles of the face in 3D space are then computed. Experiments show that the method accurately estimates the face rotation angles under pose variation, providing a basis for further face recognition and expression analysis.
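Computing three pose angles from located 3D key points can be sketched as follows; the landmark choice (two eyes and the nose tip), the image-style coordinate convention (y grows downward), and the Euler-angle convention are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def pose_angles(l_eye, r_eye, nose):
    """Yaw, pitch, roll (degrees) from three 3D facial key points.

    Builds a face-fixed frame: x along the eye line, y toward the nose
    (image coordinates, y downward), z as the face normal, then reads
    Euler angles off the resulting rotation matrix.
    """
    x = r_eye - l_eye
    x = x / np.linalg.norm(x)
    y = nose - (l_eye + r_eye) / 2
    y = y / np.linalg.norm(y)
    z = np.cross(x, y)                     # face normal
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)                     # re-orthogonalize the frame
    R = np.column_stack([x, y, z])         # face frame -> camera frame
    yaw   = np.degrees(np.arctan2(R[0, 2], R[2, 2]))
    pitch = np.degrees(np.arcsin(-R[1, 2]))
    roll  = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    return yaw, pitch, roll

# A frontal face (eyes level, nose below the eye midpoint) gives near-zero angles
print(pose_angles(np.array([-3., 0., 10.]),
                  np.array([3., 0., 10.]),
                  np.array([0., 4., 10.])))
```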

14.
An Artificial Expression Synthesis Algorithm Based on Expression Decomposition and Warping (total citations: 1; self-citations: 0; other citations: 1)
To generate facial expression images of arbitrary intensity quickly and effectively, a robust artificial expression synthesis algorithm is proposed. The algorithm first applies higher-order singular value decomposition (HOSVD) to decompose the training set into three subspaces (person, expression, and feature) and maps them into the expression subspace, which is used to synthesize images of any expression at any intensity from any frontal face photograph. When generating images, instead of the commonly used linear combination of basis images, the source image is warped; this not only greatly reduces the training data and the amount of computation, but also allows expression images of any size, background, illumination, color, or pose to be generated, and expression images of arbitrary intensity can be obtained through quadratic interpolation. Experiments show that the algorithm is efficient and that the generated images are of high quality.

15.
Variations in illumination degrade the performance of appearance-based face recognition. We present a novel algorithm for the normalization of color facial images using a single image and its co-registered 3D point cloud (3D image). The algorithm borrows the physically based Phong lighting model from computer graphics, where it is used for rendering, and employs it in reverse to calculate face albedo from real facial images. Our algorithm estimates the number of dominant light sources and their directions from the specularities in the facial image and the corresponding 3D points. The intensities of the light sources and the parameters of the Phong model are estimated by fitting the model to the facial skin data. Unlike existing approaches, our algorithm accounts for both Lambertian and specular reflections as well as attached and cast shadows. Moreover, it is invariant to facial pose and expression and can effectively handle multiple extended light sources. The algorithm was tested on the challenging FRGC v2.0 data with satisfactory results: the mean fitting error was 6.3% of the maximum color value, and performing face recognition on the normalized images increased both identification and verification rates.
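A heavily simplified "reverse lighting model" sketch: dropping the specular and shadow terms and assuming a single known light, the Phong equation reduces to I = albedo · (ka + kd · max(N·L, 0)), so the albedo follows by dividing out the shading. The synthetic normals and coefficients below are stand-ins for the registered 3D data and fitted parameters the paper uses:

```python
import numpy as np

def albedo_from_shading(I, normals, light, ka=0.2, kd=0.8, eps=1e-6):
    """Recover albedo by dividing out the Lambertian+ambient shading factor."""
    shading = ka + kd * np.clip(normals @ light, 0.0, None)
    return I / np.maximum(shading, eps)

rng = np.random.default_rng(3)
normals = rng.standard_normal((100, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)  # unit surface normals
light = np.array([0.0, 0.0, 1.0])                           # known light direction

# Forward-render intensities from a known albedo, then invert the model
true_albedo = rng.uniform(0.2, 1.0, 100)
I = true_albedo * (0.2 + 0.8 * np.clip(normals @ light, 0.0, None))

est = albedo_from_shading(I, normals, light)
print(np.allclose(est, true_albedo))
```

The paper's full algorithm additionally estimates the lights and Phong parameters from the data and handles specular reflections and shadows, which this sketch omits.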

16.
We propose a facial motion tracking and expression recognition system based on video data. Using a 3D deformable facial model, an online statistical model (OSM) and a cylinder head model (CHM) are combined to track 3D facial motion in a particle-filtering framework. For facial expression recognition, two algorithms were developed: one fast and efficient, the other robust and precise. In the first, facial animation and facial expression are retrieved sequentially: once the facial animation is obtained, the expression is recognized using static facial expression knowledge learned from anatomical analysis. In the second, facial animation and expression are retrieved simultaneously to increase reliability and robustness with noisy input data; the expression is recognized by fusing static and dynamic facial expression knowledge, the latter learned by training a multi-class expressional Markov process on a video database. The experiments show that facial motion tracking by OSM+CHM is more robust to pose than tracking by OSM alone, and that the expression score of the robust and precise algorithm is higher than those of other state-of-the-art facial expression recognition methods.

17.
We introduce a novel approach to recognizing facial expressions over a large range of head poses. Like previous approaches, we map the features extracted from the input image to the corresponding features of a face with the same facial expression but seen in a frontal view. This allows us to collect all training data into a common reference frame and therefore benefit from more data when learning to recognize the expressions. In contrast with such previous work, however, our mapping depends on the pose of the input image: we first estimate the head pose in the input image, and then apply the mapping learned specifically for that pose. The features after mapping are therefore much more reliable for recognition. In addition, we introduce a non-linear form for the mapping and show that it is robust to occasional mistakes made by the pose-estimation stage. We evaluate our approach with extensive experiments on two protocols of the BU3DFE and Multi-PIE datasets, and show that it outperforms the state of the art on both.

18.
We present a random-forest-based framework for real-time head pose estimation from depth images and extend it to localize a set of facial features in 3D. Our algorithm takes a voting approach, where each patch extracted from the depth image can directly cast a vote for the head pose or for each of the facial features. The system proves capable of handling large rotations, partial occlusions, and the noisy depth data acquired by commercial sensors. Moreover, the algorithm works on each frame independently and achieves real-time performance without resorting to parallel computation on a GPU. We present extensive experiments on publicly available, challenging datasets and introduce a new annotated head pose database recorded using a Microsoft Kinect.
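The patch-voting idea can be sketched with an off-the-shelf random forest regressor (scikit-learn here, not the paper's implementation): every patch regresses the pose, and the per-image estimate aggregates the patch votes. The synthetic "depth patches", whose statistics encode yaw, stand in for real sensor data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)

def patches_for(yaw, n=20):
    """n noisy 16-dim patches whose statistics depend on the pose (synthetic)."""
    return yaw / 90.0 + 0.05 * rng.standard_normal((n, 16))

# Training set: many poses, many patches each, every patch labeled with its pose
train_yaws = rng.uniform(-60, 60, 200)
X = np.concatenate([patches_for(a) for a in train_yaws])
y = np.repeat(train_yaws, 20)

forest = RandomForestRegressor(n_estimators=30, random_state=0).fit(X, y)

def estimate_yaw(yaw_true):
    votes = forest.predict(patches_for(yaw_true))  # one vote per patch
    return votes.mean()                            # aggregate the votes

print(estimate_yaw(25.0))
```

Because each patch votes independently, occluded or noisy patches merely dilute the estimate instead of breaking it, which is the robustness property the abstract highlights.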

19.
Yeon-Sik Ryu, Se-Young Oh. Pattern Recognition, 2001, 34(12): 2459-2466
This paper presents a novel algorithm for extracting the eye and mouth (facial-feature) fields from 2-D gray-level face images. The fundamental idea is that eigenfeatures, derived from the eigenvalues and eigenvectors of the binary edge data set constructed from the eye and mouth fields, are very effective for locating these fields efficiently. The eigenfeatures extracted from positive and negative training samples of the facial features are used to train a multilayer perceptron whose output indicates the degree to which a particular image window contains an eye or a mouth. It turns out that only a small number of frontal faces is sufficient to train the networks, which moreover generalize well to non-frontal poses and even to other people's faces. It has been experimentally verified that the proposed algorithm is robust against facial size and slight variations of pose.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) | 京ICP备09084417号