Similar Articles
19 similar articles found (search time: 171 ms)
1.
A Robust Fully Automatic Facial Feature Point Localization Method (cited by 4)
The goal of facial feature point localization is to locate facial feature points fully automatically and accurately. The publication of the Active Shape Model (ASM) and Active Appearance Model (AAM) provided an effective framework for fully automatic facial feature point localization, and much subsequent work has improved on the ASM and AAM frameworks. However, existing work has not adequately solved localization under variations in facial expression, illumination, and pose. This paper proposes a fully automatic facial feature point localization algorithm based on the ASM framework. It differs from the traditional ASM and its variants in two respects: 1) An effective machine learning method is introduced to build the local texture model. This replaces the traditional ASM's local texture model, based on gradient distributions of the gray-level image, with one based on random forest classifiers and point-pair comparison features. Built on statistical learning from a large number of samples, this approach effectively handles the difficult cases of illumination and expression variation. 2) In the model parameter optimization step, the classifier outputs are incorporated into the objective function, and a shape-constraint term is added to make the objective more reasonable. Experiments on face data containing expression, illumination, and pose variations show that the proposed fully automatic method adapts effectively to illumination and expression changes; results on a pose database further demonstrate its effectiveness.
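As an illustration of the local texture model described above, the following toy sketch trains a random forest on point-pair comparison features. The patch data, pair sampling, and parameters are all hypothetical stand-ins, not the paper's setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def pair_features(patch, pairs):
    """Compare intensities at pre-sampled pixel pairs (1 if the first is brighter)."""
    flat = patch.ravel()
    return (flat[pairs[:, 0]] > flat[pairs[:, 1]]).astype(np.uint8)

P = 9 * 9                                  # 9x9 local patch around a landmark
pairs = rng.integers(0, P, size=(64, 2))   # 64 random pixel pairs

# Synthetic stand-in data: "positive" patches (centered on the landmark)
# have a bright central blob, "negative" patches are plain noise.
def make_patch(positive):
    patch = rng.normal(0.0, 1.0, (9, 9))
    if positive:
        patch[3:6, 3:6] += 3.0
    return patch

X = np.array([pair_features(make_patch(i % 2 == 0), pairs) for i in range(400)])
y = np.array([1 if i % 2 == 0 else 0 for i in range(400)])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Classifier output for a candidate patch, usable as a local texture score.
test_patch = make_patch(positive=True)
prob = clf.predict_proba(pair_features(test_patch, pairs)[None, :])[0, 1]
print(f"P(landmark) = {prob:.2f}")
```

Because point-pair comparisons depend only on intensity ordering, the features are insensitive to monotonic illumination changes, which is the property the abstract exploits.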

2.
A Robust and Efficient Facial Feature Point Tracking Method (cited by 2)
黄琛, 丁晓青, 方驰. 《自动化学报》 (Acta Automatica Sinica), 2012, 38(5): 788-796
Facial feature point tracking obtains precise information about facial components beyond the coarse face position and motion trajectory, and plays an important role in computer vision research. The Active Appearance Model (AAM) is one of the most effective ways to describe facial feature point locations, but its high-dimensional parameter space and gradient-descent optimization make it sensitive to initial parameters and prone to local minima. Consequently, facial feature point tracking methods based on the traditional AAM cannot simultaneously cope well with large pose, illumination, and expression variations. Within a multi-view AAM framework, this paper proposes a real-time pose estimation algorithm combining random forests and linear discriminant analysis (LDA) to pre-estimate and update the pose of the tracked face, effectively handling large pose variations in video. An improved online appearance model (OAM) is proposed to evaluate tracking accuracy, and the AAM texture model is adaptively updated via incremental principal component analysis (PCA) learning, greatly improving tracking stability and the model's ability to cope with illumination and expression changes. Experimental results show that the algorithm performs well in accuracy, robustness, and real-time performance for facial feature point tracking in video.
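The incremental PCA texture update can be sketched with scikit-learn's `IncrementalPCA`; the synthetic appearance vectors, dimensions, and batch sizes below are illustrative assumptions, not the paper's data:

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(1)

# Stand-in "texture" vectors: frames drawn from a low-dimensional appearance
# subspace plus noise, arriving in mini-batches as tracking proceeds.
basis = rng.normal(size=(5, 200))          # hypothetical 5-D appearance subspace
def next_batch(n=20):
    coeff = rng.normal(size=(n, 5))
    return coeff @ basis + 0.01 * rng.normal(size=(n, 200))

ipca = IncrementalPCA(n_components=5)
for _ in range(10):                        # 10 batches of tracked frames
    ipca.partial_fit(next_batch())         # online update of the texture model

# Reconstruction error of a fresh frame measures how well the updated
# subspace explains the current appearance (a tracking-quality cue).
frame = next_batch(1)
recon = ipca.inverse_transform(ipca.transform(frame))
err = np.linalg.norm(frame - recon) / np.linalg.norm(frame)
print(f"relative reconstruction error: {err:.3f}")
```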

3.
Facial texture mapping is a special photorealistic rendering technique in computer-aided craniofacial reconstruction. To address the difficulty of accurately mapping textures onto facial organs, a facial texture mapping method based on feature point constraints is proposed. Feature point constraints are enforced by fixing vertices during least-squares conformal map parameterization. Mapping experiments using a large number of single frontal photographs as textures confirm that the method achieves good mapping results. Experimental results show that the method is robust and efficient, and reduces algorithmic complexity.

4.
黄华, 樊鑫, 齐春, 朱世华. 《软件学报》 (Journal of Software), 2006, 17(12): 2529-2536
Face image super-resolution reconstruction is formulated as Bayesian probability estimation of the texture and position parameters of a mixture face model, placing the two stages of super-resolution, image registration and pixel fusion, within a unified probabilistic estimation framework. A particle-filter-based parameter estimation algorithm estimates texture and position parameters simultaneously, thereby achieving super-resolution reconstruction of face images. The mixture face model, which carries both gray-level and position priors, is used in both stages of the reconstruction, improving registration accuracy and the performance of the reconstruction algorithm. This avoids the dilemma of conventional methods, where an accurate and robust motion-field estimate requires a sharp high-resolution image, while obtaining a sharp high-resolution image in turn requires an accurate and robust motion-field estimate. Experiments on synthesized frontal face image sequences show that the method yields satisfactory reconstruction results.
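A minimal particle filter of the kind this formulation builds on can be sketched as follows. It estimates a single scalar parameter from noisy observations, a drastically simplified stand-in for the joint texture-and-position estimation; all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal SIR (sampling-importance-resampling) particle filter estimating a
# scalar "position" parameter from noisy measurements.
true_pos = 2.0
n_particles = 500
particles = rng.uniform(-5, 5, n_particles)      # prior over the parameter
weights = np.full(n_particles, 1.0 / n_particles)

for _ in range(30):                               # 30 noisy measurements
    z = true_pos + rng.normal(0, 0.5)
    # Likelihood-weighted update (Gaussian measurement model).
    weights *= np.exp(-0.5 * ((z - particles) / 0.5) ** 2)
    weights /= weights.sum()
    # Resampling plus small jitter to keep particle diversity.
    idx = rng.choice(n_particles, n_particles, p=weights)
    particles = particles[idx] + rng.normal(0, 0.02, n_particles)
    weights = np.full(n_particles, 1.0 / n_particles)

estimate = particles.mean()
print(f"estimated position: {estimate:.2f}")
```

In the paper's setting the state would be the full vector of texture and position parameters and the likelihood would come from the mixture face model; the weighting/resampling loop is the same.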

5.
Multi-Pose Face Tracking in Video Images Based on Feature Triangles (cited by 2)
A robust and effective algorithm for tracking faces in video images in complex environments is proposed. The algorithm constructs feature triangles, including isosceles and right triangles, from facial features and generates candidate face-tracking rectangles under rigid-body constraints. It can detect faces under different scales, illumination, poses, expressions, and even noise, with an effectiveness of 98.18%.
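The isosceles-triangle constraint on facial features can be sketched as a simple geometric test; the coordinates and tolerance below are hypothetical, not values from the paper:

```python
import numpy as np

def is_isosceles_face_triangle(left_eye, right_eye, mouth, tol=0.15):
    """The eyes-mouth triangle of a near-frontal face is roughly isosceles:
    the two eye-to-mouth sides should have similar length."""
    left_eye, right_eye, mouth = map(np.asarray, (left_eye, right_eye, mouth))
    a = np.linalg.norm(left_eye - mouth)
    b = np.linalg.norm(right_eye - mouth)
    return bool(abs(a - b) / max(a, b) <= tol)

# Hypothetical feature points in pixel coordinates.
print(is_isosceles_face_triangle((40, 50), (80, 50), (60, 95)))   # True
print(is_isosceles_face_triangle((40, 50), (80, 50), (95, 95)))   # False
```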

6.
To track facial feature points precisely, an online reference appearance model (ORAM) algorithm is proposed. First, an online-updated reference model is added to the original active appearance model (AAM); then, with an online subspace self-updating mechanism, incremental learning is used to update the AAM texture model and the reference model online. On this basis, a feature point fitting algorithm for ORAM is built on simultaneous inverse compositional fitting. To reduce the error accumulated during tracking, a texture-subspace reset mechanism based on the initial stable tracking results completes facial feature point tracking. Experimental results show that the ORAM algorithm requires no training set and tracks faces accurately and quickly under pose, expression, and illumination variations.

7.
An algorithm for building a parameter space of 3D face scan models is proposed. Its template fitting rests on an energy-minimization mechanism: a nonlinear optimization solves for the displacement vector at each vertex of the template model so that it approaches the target model. The objective function consists of the following measures: distance, smoothness, and facial feature correspondences such as feature curves, boundaries, and feature point pairs. The algorithm can establish correspondences between different faces and between different expression models, yielding consistently parameterized face shape and expression spaces. In the presented system, 3D facial feature curves are obtained automatically with the Canny edge detection algorithm; these automatically detected feature curves reduce the dimensionality of the 3D shape description, while the complete facial geometry is obtained by radial basis function interpolation. The consistent parameterization over neutral and expressive face models provides a platform for many applications, such as shape morphing, texture transfer, and expression transfer. Exploiting the similarity between the automatically extracted feature curves and 2D line-drawing cartoon faces, an iterative optimization algorithm transfers the pose of a 2D line-drawing cartoon face to a 3D realistic face model.
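The radial basis function interpolation step can be sketched with SciPy's `RBFInterpolator`; the sparse surface samples below are a synthetic stand-in for depth values known only along detected feature curves:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)

# Sparse samples of a smooth surface z = f(x, y), standing in for facial
# depth values sampled along feature curves.
pts = rng.uniform(-1, 1, (80, 2))
depth = np.cos(np.pi * pts[:, 0] / 2) * np.cos(np.pi * pts[:, 1] / 2)

# Thin-plate-spline RBF interpolant through the sparse samples.
rbf = RBFInterpolator(pts, depth, kernel="thin_plate_spline")

# Densely evaluate the full facial region from the sparse curves.
query = np.array([[0.0, 0.0], [0.5, -0.5]])
print(rbf(query))
```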

8.
Face recognition via 3D face modeling effectively overcomes the drawback of 2D face recognition systems, whose recognition rates are easily affected by illumination, pose, and expression. This paper adopts an effective algorithm that adaptively adjusts a generic 3D face model according to a face image, constructing a person-specific face model for use in face recognition. By comparing feature points estimated from the face image with the projections of the generic face model onto the image plane, the generic 3D face model is adjusted globally and locally to fit the individual characteristics of the eyes, mouth, and nose. An example illustrates the application of the algorithm.

9.
Face Recognition under Pose and Expression Variation Based on the Candide-3 Model (cited by 1)
To address the severe impact of pose and expression on face recognition accuracy, a method based on a simplified Candide-3 model is proposed, combining facial geometry reconstruction by fitting shape and expression key points with texture mapping over a triangular mesh model. The method locates key feature points, determines the pose angle from the geometric structure of the face, extracts the corresponding shape and expression points of the Candide-3 model, adjusts the model parameters, and reconstructs the geometric structure; texture mapping is then applied to each triangle of the mesh, yielding a realistic person-specific face model. Experimental results show that the method speeds up face reconstruction and reduces the influence of pose and expression on face recognition.
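Per-triangle texture mapping rests on barycentric coordinates: a point inside a mesh triangle receives the texture coordinate obtained by applying the same barycentric weights to the texture-space vertices. A minimal sketch with hypothetical triangles:

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of point p in triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - v - w, v, w])

# Hypothetical mesh triangle (projected to 2D) and its texture triangle.
tri_mesh = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
tri_uv = [np.array([0.1, 0.1]), np.array([0.9, 0.1]), np.array([0.1, 0.9])]

p = np.array([0.25, 0.25])
wts = barycentric(p, *tri_mesh)
uv = sum(wi * vi for wi, vi in zip(wts, tri_uv))
print(uv)   # texture coordinate assigned to p
```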

10.
Adaptive Robust Tracking Control for Uncertain Systems (cited by 4)
李昇平. 《自动化学报》 (Acta Automatica Sinica), 2003, 29(6): 883-892
The adaptive robust tracking control problem is investigated for systems with uncertainties such as unknown disturbances and unmodeled dynamics. First, a design method for the optimal robust steady-state tracking controller is given by incorporating the deadbeat design method of l1-optimal controllers. Then, following the idea of set-membership identification, a recursive parameter estimation method is proposed that treats the nominal model parameters and the magnitudes of the unmodeled dynamics and disturbances as unknown parameters. Finally, combining these results, an adaptive robust tracking control strategy is proposed; the global convergence of the adaptive algorithm is proved, and a tight upper bound on the robust tracking performance index is given. Compared with existing results, the proposed adaptive control achieves non-conservative robust stability and asymptotically optimal robust tracking performance.

11.
We propose a facial motion tracking and expression recognition system based on video data. Using a 3D deformable facial model, the online statistical model (OSM) and cylinder head model (CHM) are combined to track 3D facial motion in a particle filtering framework. For facial expression recognition, two algorithms are developed: one fast and efficient, the other robust and precise. With the first, facial animation and facial expression are retrieved sequentially: once the facial animation is obtained, the facial expression is recognized using static facial expression knowledge learned from anatomical analysis. With the second, facial animation and facial expression are retrieved simultaneously to increase reliability and robustness under noisy input data; the facial expression is recognized by fusing static and dynamic facial expression knowledge, the latter learned by training a multi-class expressional Markov process on a video database. Experiments show that facial motion tracking with OSM+CHM is more robust to pose than with OSM alone, and that the expression recognition score of the robust and precise algorithm is higher than those of other state-of-the-art facial expression recognition methods.

12.
13.
Face images are difficult to interpret because they are highly variable. Sources of variability include individual appearance, 3D pose, facial expression, and lighting. We describe a compact parametrized model of facial appearance that takes all these sources of variability into account. The model represents both shape and gray-level appearance, and is created by performing a statistical analysis over a training set of face images. A robust multiresolution search algorithm is used to fit the model to faces in new images. This allows the main facial features to be located and a set of shape and gray-level appearance parameters to be recovered. A good approximation to a given face can be reconstructed using fewer than 100 of these parameters. This representation can be used for tasks such as image coding, person identification, 3D pose recovery, gender recognition, and expression recognition. Experimental results are presented for a database of 690 face images obtained under widely varying conditions of 3D pose, lighting, and facial expression. The system performs well on all the tasks listed above.
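The core idea, a statistical model whose few parameters approximate a face, can be sketched with PCA; the synthetic "faces" below are illustrative stand-ins, not real landmark or gray-level data:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)

# Stand-in training set: 300 "faces", each a 200-D shape+gray-level vector
# generated from 10 hypothetical latent factors plus noise.
latent = rng.normal(size=(300, 10))
mixing = rng.normal(size=(10, 200))
faces = latent @ mixing + 0.05 * rng.normal(size=(300, 200))

# Statistical model: keep enough components to explain 98% of the variance.
model = PCA(n_components=0.98).fit(faces)
print("parameters per face:", model.n_components_)

# A face is approximated by far fewer parameters than its raw dimension.
params = model.transform(faces[:1])
recon = model.inverse_transform(params)
err = np.linalg.norm(faces[0] - recon) / np.linalg.norm(faces[0])
print(f"relative reconstruction error: {err:.3f}")
```

The paper's model additionally couples shape and texture and fits with a multiresolution search; the parameter count and reconstruction step shown here are the same in spirit.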

14.
In this paper, we propose an On-line Appearance-Based Tracker (OABT) for simultaneous tracking of 3D head pose, lips, eyebrows, eyelids and irises in monocular video sequences. In contrast to previously proposed tracking approaches, which deal with face and gaze tracking separately, our OABT can also be used for eyelid and iris tracking, as well as 3D head pose, lips and eyebrows facial actions tracking. Furthermore, our approach applies an on-line learning of changes in the appearance of the tracked target. Hence, the prior training of appearance models, which usually requires a large amount of labeled facial images, is avoided. Moreover, the proposed method is built upon a hierarchical combination of three OABTs, which are optimized using a Levenberg–Marquardt Algorithm (LMA) enhanced with line-search procedures. This, in turn, makes the proposed method robust to changes in lighting conditions, occlusions and translucent textures, as evidenced by our experiments. Finally, the proposed method achieves head and facial actions tracking in real-time.
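The Levenberg–Marquardt optimization at the heart of such trackers can be sketched with SciPy's `least_squares(method="lm")`; the exponential model fitted here is a hypothetical stand-in for the appearance-registration residual:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)

# Noisy samples of y = a * exp(-b * t); (a, b) play the role of the
# appearance/pose parameters being registered.
t = np.linspace(0, 2, 50)
y = 3.0 * np.exp(-1.5 * t) + 0.01 * rng.normal(size=t.size)

def residuals(p):
    a, b = p
    return a * np.exp(-b * t) - y

# Levenberg-Marquardt minimization of the residual vector.
fit = least_squares(residuals, x0=[1.0, 1.0], method="lm")
print("estimated (a, b):", fit.x)
```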

15.
Precise localization of rail fasteners is a prerequisite for fastener defect detection, but conventional machine-vision-based fastener localization methods adapt poorly and are easily disturbed by illumination changes and occlusion. To achieve fast and precise localization of rail fasteners, a localization method based on edge features is proposed. A matching template is generated by selecting a standard fastener region in a reference image, and the positions and gradient directions of edge points are obtained with the Canny edge filter. A search model is then built, and an image-pyramid matching search strategy yields candidate matching points according to their matching scores. A matching-threshold scheme tracks the candidate points level by level down to the bottom of the image pyramid, improving localization speed. Finally, the pose parameters are refined by least squares to subpixel accuracy. Experiments show that the method is robust, fast, and insensitive to illumination changes and occlusion, with a localization accuracy of 1/15 pixel and a success rate above 95%, meeting the localization requirements for ballastless-track fasteners.
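The coarse-to-fine image-pyramid matching strategy can be sketched as follows; a plain SSD score replaces the edge-gradient score for brevity, and the scene, template, and offsets are synthetic:

```python
import numpy as np

rng = np.random.default_rng(6)

def downsample(img):
    """One pyramid level: 2x2 box-filter downsampling."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def match_ssd(img, tmpl, rows, cols):
    """SSD template match restricted to candidate top-left positions."""
    best, best_pos = np.inf, None
    th, tw = tmpl.shape
    for r in rows:
        for c in cols:
            if r + th <= img.shape[0] and c + tw <= img.shape[1]:
                d = np.sum((img[r:r+th, c:c+tw] - tmpl) ** 2)
                if d < best:
                    best, best_pos = d, (r, c)
    return best_pos

# Synthetic scene with the "fastener" template pasted at (38, 80).
tmpl = rng.normal(size=(16, 16))
scene = rng.normal(size=(128, 256))
scene[38:54, 80:96] = tmpl

# Coarse pass on the half-resolution pyramid level ...
pos2 = match_ssd(downsample(scene), downsample(tmpl), range(0, 64), range(0, 128))
# ... then refine at full resolution in a small window around the coarse hit.
r0, c0 = 2 * pos2[0], 2 * pos2[1]
pos = match_ssd(scene, tmpl, range(max(0, r0-2), r0+3), range(max(0, c0-2), c0+3))
print("located at:", pos)   # (38, 80)
```

The coarse level cuts the number of full-resolution score evaluations from the whole image to a handful of candidates, which is the speedup the abstract relies on.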

16.
Interaction between a personal service robot and a human user is contingent on being aware of the posture and facial expression of users in the home environment. In this work, we propose algorithms to robustly and efficiently track the head, facial gestures, and the upper body movements of a user. The face processing module consists of 3D head pose estimation, modeling nonrigid facial deformations, and expression recognition. Thus, it can detect and track the face, and classify expressions under various poses, which is the key for human–robot interaction. For body pose tracking, we develop an efficient algorithm based on bottom-up techniques to search in a tree-structured 2D articulated body model, and identify multiple pose candidates to represent the state of current body configuration. We validate these face and body modules in varying experiments with different datasets, and the experimental results are reported. The implementation of both modules can run in real-time, which meets the requirement for real-world human–robot interaction task. These two modules have been ported onto a real robot platform by the Electronics and Telecommunications Research Institute.

17.
The recognition of facial gestures and expressions in image sequences is an important and challenging problem. Most of the existing methods adopt the following paradigm. First, facial actions/features are retrieved from the images, then the facial expression is recognized based on the retrieved temporal parameters. In contrast to this mainstream approach, this paper introduces a new approach allowing the simultaneous retrieval of facial actions and expression using a particle filter adopting multi-class dynamics that are conditioned on the expression. For each frame in the video sequence, our approach is split into two consecutive stages. In the first stage, the 3D head pose is retrieved using a deterministic registration technique based on Online Appearance Models. In the second stage, the facial actions as well as the facial expression are simultaneously retrieved using a stochastic framework based on second-order Markov chains. The proposed fast scheme is either as robust as, or more robust than existing ones in a number of respects. We describe extensive experiments and provide evaluations of performance to show the feasibility and robustness of the proposed approach.

18.
Variations in illumination degrade the performance of appearance based face recognition. We present a novel algorithm for the normalization of color facial images using a single image and its co-registered 3D pointcloud (3D image). The algorithm borrows the physically based Phong’s lighting model from computer graphics which is used for rendering computer images and employs it in a reverse mode for the calculation of face albedo from real facial images. Our algorithm estimates the number of the dominant light sources and their directions from the specularities in the facial image and the corresponding 3D points. The intensities of the light sources and the parameters of the Phong’s model are estimated by fitting the Phong’s model onto the facial skin data. Unlike existing approaches, our algorithm takes into account both Lambertian and specular reflections as well as attached and cast shadows. Moreover, our algorithm is invariant to facial pose and expression and can effectively handle the case of multiple extended light sources. The algorithm was tested on the challenging FRGC v2.0 data and satisfactory results were achieved. The mean fitting error was 6.3% of the maximum color value. Performing face recognition using the normalized images increased both identification and verification rates.
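The inverse-rendering idea, dividing observed intensity by the estimated shading to recover albedo, can be sketched for the ambient-plus-Lambertian part of the model (the paper's specular and shadow handling is omitted, and all coefficients are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)

# Per-pixel surface normals from a hypothetical co-registered 3D pointcloud.
n_pix = 1000
normals = rng.normal(size=(n_pix, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

light = np.array([0.0, 0.0, 1.0])   # estimated dominant light direction
ka, kd = 0.2, 0.8                   # ambient / diffuse coefficients (assumed fitted)
true_albedo = rng.uniform(0.3, 0.9, n_pix)

# Forward model: I = albedo * (ka + kd * max(n.l, 0)); then invert it.
shading = ka + kd * np.clip(normals @ light, 0.0, None)
image = true_albedo * shading
albedo = image / shading            # inverse rendering (ka > 0 keeps this safe)

err = np.max(np.abs(albedo - true_albedo))
print(f"max albedo error: {err:.2e}")
```

With real images the shading term must first be fitted (light directions from specularities, coefficients from the skin data, plus specular and shadow terms), after which the same per-pixel division yields the normalized, illumination-free image.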

19.
Simultaneously tracking multiple features of a face with rich expressions is a challenging problem. A method based on a spatio-temporal probabilistic graphical model is proposed. In the temporal domain, several mutually independent Condensation-style particle filters track each facial feature separately. Particle filtering is very effective for independent visual tracking problems, but multiple independent trackers ignore the spatial constraints of the face and the natural interdependence of facial features. In the spatial domain, therefore, the interrelations of facial feature contours are learned in advance from a facial expression database, and Bayesian inference via belief propagation refines the contour positions of the facial features. Experimental results show that the algorithm can robustly track multiple facial features simultaneously even under large inter-frame motion.
