Similar Documents
A total of 18 similar documents were found (search time: 93 ms).
1.
田卓  佘青山  甘海涛  孟明 《计量学报》2019,40(4):576-582
To improve the recognition of facial information against complex backgrounds, a deep convolutional neural network (DCNN) method is proposed that treats facial landmark localization and head pose estimation as cooperating tasks. A face is first detected in the video image; a deep convolutional network is then designed to optimize the two tasks jointly, regressing the landmark coordinates and pose angles simultaneously, which are fused to generate the corresponding human-computer interaction information. Finally, the method is tested on public datasets and real-world data and compared with existing approaches. Experimental results show good performance in both landmark localization and pose estimation, as well as good accuracy and robustness for human-computer interaction under illumination changes, expression changes, and partial occlusion, with an average processing speed of about 16 frames/s, indicating practical applicability.
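A minimal PyTorch sketch of the joint landmark-and-pose regression idea described above. The layer sizes, the 68-landmark count, and the loss weighting are illustrative assumptions, not the authors' architecture.

```python
# Minimal multi-task CNN sketch: shared trunk, two regression heads
# (landmark coordinates + yaw/pitch/roll). All sizes are illustrative.
import torch
import torch.nn as nn

class LandmarkPoseNet(nn.Module):
    def __init__(self, num_landmarks=68):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.landmark_head = nn.Linear(64, num_landmarks * 2)  # (x, y) per point
        self.pose_head = nn.Linear(64, 3)                      # yaw, pitch, roll

    def forward(self, x):
        feat = self.trunk(x)
        return self.landmark_head(feat), self.pose_head(feat)

# Joint loss: both tasks share the trunk, so one backward pass optimizes both.
model = LandmarkPoseNet()
img = torch.randn(4, 3, 96, 96)
pts_gt, pose_gt = torch.randn(4, 136), torch.randn(4, 3)
pts, pose = model(img)
loss = nn.functional.mse_loss(pts, pts_gt) + 0.5 * nn.functional.mse_loss(pose, pose_gt)
loss.backward()
```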

2.
Because existing head pose estimation methods are easily affected by self-occlusion, an improved ASM algorithm is used to extract facial landmarks, and geometric-statistical knowledge of facial morphology is used to estimate the depth of each landmark. A sparse face model is built from the main landmarks; after the pose is first approximated from the relevant landmarks, the 3D pose is refined by least squares. Experimental results show that the method still estimates pose well under self-occlusion and achieves good accuracy compared with similar approaches.
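A minimal sketch of the final least-squares pose step, assuming a generic sparse 3D landmark model and already-detected 2D points; it relies on OpenCV's solvePnP rather than the authors' own solver, and all coordinates below are illustrative.

```python
# Least-squares pose from sparse 2D-3D landmark correspondences.
# Model points, image points, and intrinsics are illustrative values.
import numpy as np
import cv2

# Generic 3D positions (mm) of a few landmarks: nose tip, chin,
# outer eye corners, mouth corners.
model_pts = np.array([
    [0.0, 0.0, 0.0], [0.0, -63.6, -12.5],
    [-43.3, 32.7, -26.0], [43.3, 32.7, -26.0],
    [-28.9, -28.9, -24.1], [28.9, -28.9, -24.1],
], dtype=np.float64)

# Corresponding 2D landmarks detected in the image (pixels).
image_pts = np.array([
    [359, 391], [399, 561], [337, 297], [513, 301], [345, 465], [453, 469],
], dtype=np.float64)

h, w = 720, 720
K = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, None,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)   # rotation matrix; Euler angles can be derived from it
print(ok, R, tvec)
```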

3.
An effective facial feature localization method is proposed that combines skin-color segmentation with a symmetry transform on the grayscale image. To reduce the influence of background and head pose, Blob analysis and ellipse fitting are applied on top of conventional skin-color segmentation to reject invalid regions, and candidate face regions are first rectified to an upright orientation. The lips are segmented and detected directly from chrominance information within the candidate region; the symmetry transform of the grayscale image is then used to find candidate eye positions. Based on the lip and candidate eye positions, an eye-matching cost function built on facial geometric-model knowledge is proposed and solved by combinatorial optimization to locate the true eyes; the remaining facial features are finally localized precisely. Experimental results show strong robustness to hair occlusion, large tilt angles, and interference from glasses.

4.
A hierarchical face recognition method that matches 2D face images against 3D face models, together with a fuzzy-mathematics-based head pose angle estimation algorithm, is proposed. Multi-pose 2D images are partitioned into pose subspaces, and principal component analysis (PCA) is used to build multi-pose eigenfaces. During recognition, the pose and fuzzy pose angle of the test image are estimated first; PCA-based matching within the estimated pose subspace yields candidate individuals in the first layer; virtual images are then generated from the candidates' 3D models combined with the fuzzy pose angle, and correlation performs the second-layer recognition. Experiments show the method is robust to pose variation.
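A minimal sketch of the first-layer step (PCA eigenfaces plus nearest-neighbor candidate selection within one pose bin) using scikit-learn; the data shapes are placeholders, and the 3D second-layer verification is not shown.

```python
# First-layer recognition sketch: PCA "eigenfaces" within one pose bin,
# followed by nearest-neighbor matching in the projected space.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
train_faces = rng.random((40, 64 * 64))   # 40 flattened training faces (placeholder)
train_ids = np.repeat(np.arange(10), 4)   # 10 identities, 4 images each
probe = rng.random((1, 64 * 64))          # test image assigned to this pose bin

pca = PCA(n_components=20).fit(train_faces)
train_proj = pca.transform(train_faces)
probe_proj = pca.transform(probe)

dists = np.linalg.norm(train_proj - probe_proj, axis=1)
candidates = train_ids[np.argsort(dists)[:3]]   # candidate identities for layer two
print(candidates)
```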

5.
To meet the speed and accuracy requirements of facial landmark detection (face alignment) in practical applications, additional uniformly distributed feature layers are first fused on top of SSD and face-box coordinates are predicted in a cascade, yielding MR-SSD, a deep learning detector with a more robust response to faces at multiple scales. Second, building on the cascaded shape regression method with local binary features (LBF), a multi-angle initialization algorithm based on facial pixel differences is proposed. Five landmark-shape sets covering in-plane tilts from -90° to +90° relative to an upright face are used for initialization; for each set, the pixel mean-square deviation at the eye landmarks of the regressed shape is computed, and the shape with the largest value is taken as the final regression result, giving excellent fitting for faces tilted at various angles. The proposed architecture obtains highly robust face-box coordinates in real time and detects landmarks on faces tilted at multiple angles.

6.
Color face detection based on skin-color features and dynamic clustering   (cited 2 times: 1 self-citation, 1 by others)
何光宏  潘英俊  吴芳 《光电工程》2004,31(11):47-50
Based on the human visual mechanism and the clustering property of skin color, a face detection method for complex backgrounds is proposed. The method applies K-means dynamic clustering, using human skin-color features to detect face-like regions in the input image as candidate faces, and then scans the candidate regions with the same procedure to obtain the true faces. Experimental results show a correct detection rate of 84%, little sensitivity to background, illumination, viewing angle, and pose, and good robustness.
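A minimal sketch of the K-means skin-color clustering idea, assuming (Cr, Cb) chrominance features and scikit-learn's KMeans; the cluster count and the nominal skin center are illustrative, and the second-pass rescan of candidates is omitted.

```python
# K-means clustering of chrominance values to isolate candidate skin regions.
import cv2
import numpy as np
from sklearn.cluster import KMeans

img = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)   # stand-in BGR frame
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
crcb = ycrcb[:, :, 1:3].reshape(-1, 2).astype(np.float32)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(crcb)

# Pick the cluster whose center is closest to a nominal skin chrominance.
skin_center = np.array([150.0, 110.0])            # (Cr, Cb), illustrative
skin_cluster = np.argmin(np.linalg.norm(km.cluster_centers_ - skin_center, axis=1))
mask = (km.labels_ == skin_cluster).reshape(img.shape[:2]).astype(np.uint8) * 255

# Connected components of the mask give candidate face regions.
num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
print(num - 1, "candidate regions")
```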

7.
Exploiting the temporal continuity and spatial coherence of video surveillance images, symmetric differencing over three consecutive frames is used to locate the motion region; combined with the clustering property of facial skin color, candidate face regions are then determined. The projection-based face localization algorithm is improved by extending a single projection to multiple projections and incorporating facial geometric features, enabling detection of multiple faces against complex backgrounds in video surveillance. Experiments show that the algorithm has low complexity, fairly high accuracy, and good robustness to changes in pose, expression, and background.
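A minimal sketch of the symmetric three-frame differencing step used to locate the motion region; the threshold is illustrative, and the skin-color and multi-projection stages are not shown.

```python
# Symmetric differencing over three consecutive frames: a pixel is marked
# as moving only if it differs from both the previous and the next frame.
import numpy as np
import cv2

def motion_mask(prev, curr, nxt, thresh=25):
    d1 = cv2.absdiff(curr, prev)
    d2 = cv2.absdiff(nxt, curr)
    m1 = cv2.threshold(d1, thresh, 255, cv2.THRESH_BINARY)[1]
    m2 = cv2.threshold(d2, thresh, 255, cv2.THRESH_BINARY)[1]
    return cv2.bitwise_and(m1, m2)        # moving region = intersection

# Stand-in grayscale frames; in practice these come from the video stream.
frames = [np.random.randint(0, 256, (240, 320), np.uint8) for _ in range(3)]
mask = motion_mask(*frames)

# Horizontal / vertical projections of the mask can then bound candidate faces.
col_proj, row_proj = mask.sum(axis=0), mask.sum(axis=1)
print(int(col_proj.argmax()), int(row_proj.argmax()))
```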

8.
A new color-space-based face detection algorithm is proposed and implemented. Taking full account of the color characteristics of face and hair and their geometric relationship, a geometric model for representing face and hair is given. After skin-color regions and hair regions are detected separately, geometric constraints between the regions are used to discriminate the areas where a face and hair may exist. Detection results on faces in different poses demonstrate the feasibility and robustness of the algorithm.

9.
A new adaptive illumination compensation method for color images   (cited 7 times: 0 self-citations, 7 by others)
Changes in ambient illumination and uneven lighting affect the correct detection of faces in color images. Building on a fusion of different spatial-domain solutions, a new adaptive illumination compensation method for color images is proposed. Over-bright, over-dark, and mid-gray regions are handled adaptively: the darkest and brightest 5% of pixels are compressed to 0 and 255 respectively after the transform, provided there are enough such pixels (more than 100 in this paper), and a logarithmic function is used as the nonlinear transform to correct the mid-gray range. Experiments on unevenly illuminated color images from the PIE face database verify that the method effectively compensates illumination for face detection.
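A minimal sketch of the compensation rule as summarized above (compress the darkest and brightest 5% of pixels to 0/255 when there are more than 100 of them, log-correct the mid-range); applying it to a single luminance channel is an assumption of this sketch.

```python
# Adaptive illumination compensation sketch: compress extreme 5% tails to
# 0/255 (if populous enough) and apply a log curve to mid-range luminance.
import numpy as np

def compensate(gray, tail=0.05, min_count=100):
    gray = gray.astype(np.float64)
    lo, hi = np.quantile(gray, [tail, 1.0 - tail])
    out = gray.copy()
    dark, bright = gray <= lo, gray >= hi
    if dark.sum() > min_count:
        out[dark] = 0.0
    if bright.sum() > min_count:
        out[bright] = 255.0
    mid = ~(dark | bright)
    # Log curve rescaled so that the interval [lo, hi] maps back onto [lo, hi].
    out[mid] = lo + (hi - lo) * np.log1p(gray[mid] - lo) / np.log1p(hi - lo)
    return out.astype(np.uint8)

frame = np.random.randint(0, 256, (240, 320), np.uint8)   # stand-in luminance
out = compensate(frame)
print(out.min(), out.max())
```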

10.
In unconstrained open environments, face detection remains challenging because of pose variation, complex backgrounds, and motion blur. Targeting the in-plane rotation problem in video face detection, this paper combines facial landmarks with pyramid optical flow and proposes a rotation-invariant face detection algorithm based on a cascaded network and pyramid optical flow. A cascaded progressive convolutional neural network first localizes face positions and landmarks in the previous frame of the video stream; to obtain the optical-flow mapping between the landmarks and the face candidate boxes, an independent landmark detection network re-localizes the current frame; the optical-flow displacement of the landmarks between the two frames is then computed; finally, the detected faces are corrected through the mapping between the landmark displacement and the candidate boxes, achieving detection that is invariant to in-plane rotation. Tests on the public FDDB dataset show high accuracy, and dynamic tests on the Boston face tracking dataset show that the algorithm effectively handles in-plane rotated faces. Compared with other detectors, it has a clear speed advantage, and window jitter in video is also well suppressed.
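A minimal sketch of the pyramid Lucas-Kanade step, assuming landmark points from the previous frame are already available; the in-plane angle estimate from the eye points is an illustrative simplification of the paper's candidate-box correction.

```python
# Pyramid Lucas-Kanade flow of facial landmarks between consecutive frames,
# then a rough in-plane rotation estimate from the displaced eye points.
import numpy as np
import cv2

prev_gray = np.random.randint(0, 256, (240, 320), np.uint8)  # stand-in frames
curr_gray = np.random.randint(0, 256, (240, 320), np.uint8)

# Landmarks from the previous frame (left eye, right eye, nose, mouth corners).
prev_pts = np.array([[110, 100], [170, 100], [140, 130], [118, 160], [162, 160]],
                    np.float32).reshape(-1, 1, 2)

curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, None,
                                               winSize=(21, 21), maxLevel=3)

ok = status.ravel() == 1
if ok[0] and ok[1]:
    # In-plane rotation of the eye line; used to de-rotate the face box.
    left, right = curr_pts[0, 0], curr_pts[1, 0]
    dx, dy = right - left
    print("in-plane angle:", float(np.degrees(np.arctan2(dy, dx))))
```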

11.
In this article, we proposed a novel teleconferencing system that combines a facial muscle model and the techniques of face detection and facial feature extraction to synthesize a sequence of life-like face animation. The proposed system can animate realistic 3D face images in a low-bandwidth environment to support virtual videoconferencing. Based on the technique of feature extraction, a face detection algorithm for the virtual conferencing system is proposed in this article. In the proposed face detection algorithm, the YCbCr skin color model is used to detect the possible face area of the image; the feature points of the face are determined by using the symmetry property of the face and the gray-level characteristics of the eyes and the mouth. According to the positions of the feature points on a facial image, we can compute the transformation values of the feature points. These values are then sent via a network from the sender's side to the receiver's side frame by frame, and the realistic facial animation is synthesized on the receiver's side from them. Experimental results show that the proposed system can achieve a practical animated face-to-face virtual conference with good facial expressions and a low-bandwidth requirement. © 2010 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 20, 323-332, 2010
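A minimal sketch of the YCbCr skin-color step mentioned in the abstract; the Cr/Cb bounds are commonly cited values, not necessarily the paper's thresholds.

```python
# YCbCr skin-color masking: keep pixels whose Cr/Cb fall inside a nominal
# skin range, then take the largest connected region as the face candidate.
import numpy as np
import cv2

img = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)   # stand-in BGR frame
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)

# Commonly used skin chrominance bounds (illustrative, not the paper's values).
mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
if num > 1:
    biggest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    x, y, w, h = stats[biggest, :4]
    print("face candidate box:", x, y, w, h)
```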

12.
《中国工程学刊》2012,35(5):529-534
Faces are highly deformable objects which may easily change their appearance over time. Not all face areas are subject to the same variability. Therefore, decoupling of the information from independent areas of the face is of paramount importance to improve the robustness of any face recognition technique. The aim of this article is to present a robust face recognition technique based on the extraction and matching of probabilistic graphs drawn on scale invariant feature transform (SIFT) features related to independent face areas. The face matching strategy is based on matching individual salient facial graphs characterized by SIFT features as connected to facial landmarks such as the eyes and the mouth. In order to reduce the face matching errors, the Dempster-Shafer decision theory is applied to fuse the individual matching scores obtained from each pair of salient facial features. The proposed algorithm is evaluated with the Olivetti Research Lab (ORL) and the Indian Institute of Technology Kanpur (IITK) face databases. The experimental results demonstrate the effectiveness and potential of the proposed face recognition technique, even in the case of partially occluded faces.

13.
江艳霞  任波 《光电工程》2011,38(5):139-144
This paper proposes a sub-pattern-based face recognition algorithm using a weighted neighborhood maximum margin criterion. During training, face images are first divided into sub-blocks, and the class information of neighboring points is used to compute each block's weight adaptively, improving robustness to variations in pose, expression, and illumination. Features are then extracted from each block with the weighted neighborhood maximum margin criterion, which makes full use of the class information of the data and selects the optimal reconstruction coefficients of the neighboring points for use in the objective function...

14.
王娟  江艳霞  唐彩虹 《光电工程》2012,39(10):32-39
In practical face tracking, changes in illumination and pose and interference from background colors greatly weaken the effectiveness of color features, making tracking unstable. To address this, this paper proposes a particle-filter face tracking algorithm that uses color and contour distributions as cues. The algorithm has three main features. First, within the basic particle-filter framework, a new histogram-based description of the face contour is introduced, which effectively handles the influence of illumination, face rotation, and partial occlusion, and can promptly recapture targets lost due to large-area occlusion; the number of feature points per frame is also adjusted in real time, improving tracking efficiency. Second, a method for suppressing interference from similar background colors is proposed. Third, a real-time template update scheme is proposed to improve tracking accuracy. Experiments show the algorithm performs well for face tracking.

15.
Race classification is a long-standing challenge in the field of face image analysis. The investigation of salient facial features is an important task to avoid processing all face parts. Face segmentation strongly benefits several face analysis tasks, including ethnicity and race classification. We propose a race-classification algorithm using a prior face segmentation framework. A deep convolutional neural network (DCNN) was used to construct a face segmentation model. For training the DCNN, we label face images according to seven different classes, that is, nose, skin, hair, eyes, brows, back, and mouth. The DCNN model developed in the first phase was used to create segmentation results. The probabilistic classification method is used, and probability maps (PMs) are created for each semantic class. We investigated five salient facial features from among seven that help in race classification. Features are extracted from the PMs of five classes, and a new model is trained based on the DCNN. We assessed the performance of the proposed race classification method on four standard face datasets, reporting superior results compared with previous studies.

16.
To generate realistic three-dimensional animation of a virtual character, capturing real facial expressions is the primary task. Because of diverse facial expressions and complex backgrounds, facial landmarks recognized by existing strategies suffer from deviations and low accuracy. Therefore, a method for facial expression capture based on a two-stage neural network is proposed in this paper, which takes advantage of an improved multi-task cascaded convolutional network (MTCNN) and a high-resolution network. First, the convolution operations of the traditional MTCNN are improved: face information in the input image is quickly filtered by feature fusion in the first stage, and Octave Convolution replaces the original convolutions in the second stage to enhance the network's feature extraction ability, further rejecting a large number of false candidates. The model outputs more accurate face candidate windows for better landmark recognition and locates the faces. The images cropped after face detection are then fed into the high-resolution network. Multi-scale feature fusion is realized by parallel connection of multi-resolution streams, yielding rich high-resolution heatmaps of the facial landmarks. Finally, the changes in the recognized facial landmarks are tracked in real time; the expression parameters are extracted and transmitted to the Unity3D engine to drive the virtual character's face, realizing synchronized facial expression animation. Extensive experimental results on the WFLW database demonstrate the superiority of the proposed method in terms of accuracy and robustness, especially for diverse expressions and complex backgrounds. The method can accurately capture facial expressions and generate three-dimensional animation effects, making online entertainment and social interaction more immersive in a shared virtual space.
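A minimal sketch of the face-detection stage using the reference MTCNN from the facenet-pytorch package as a stand-in for the paper's improved variant; the HRNet landmark stage and the Unity3D driving step are not shown.

```python
# Reference MTCNN (facenet-pytorch) used as a stand-in for the improved
# variant described in the paper: detect face boxes and five landmarks.
import numpy as np
from facenet_pytorch import MTCNN

mtcnn = MTCNN(keep_all=True, device="cpu")

frame = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)  # stand-in RGB frame
boxes, probs, points = mtcnn.detect(frame, landmarks=True)

if boxes is not None:
    for box, p, pts in zip(boxes, probs, points):
        # box: [x1, y1, x2, y2]; pts: five landmarks (eyes, nose, mouth corners)
        print(np.round(box).astype(int), float(p), pts.shape)
```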

17.
Hu S  Maschal R  Young SS  Hong TH  Phillips PJ 《Applied optics》2012,51(18):4250-4259
With the prevalence of surveillance systems, face recognition is crucial to aiding the law enforcement community and homeland security in identifying suspects and suspicious individuals on watch lists. However, face recognition performance is severely affected by the low face resolution of individuals in typical surveillance footage, oftentimes due to the distance of individuals from the cameras as well as the small pixel count of low-cost surveillance systems. Superresolution image reconstruction has the potential to improve face recognition performance by using a sequence of low-resolution images of an individual's face in the same pose to reconstruct a more detailed high-resolution facial image. This work conducts an extensive performance evaluation of superresolution for a face recognition algorithm using a methodology and experimental setup consistent with real world settings at multiple subject-to-camera distances. Results show that superresolution image reconstruction improves face recognition performance considerably at the examined midrange and close range.

18.
Objective: In robot vision applications, the relative pose between the camera and the target must be determined so that the robot can perform welding, handling, tracking, and similar tasks; a target pose measurement method is proposed for this purpose. Methods: Target features are acquired with a single camera; the coordinate transformation parameters are expressed as a dual quaternion so that the rotation matrix and translation vector are computed simultaneously; error equations between the measured and model values of the position and direction vectors are constructed; and a Hopfield neural network implements the Lagrange multiplier method to solve for the optimal target pose. Results: On the Matlab platform, the SVD and DQ algorithms were compared with the proposed one; simulation results show that the pose parameters computed by the Hopfield-network/dual-quaternion algorithm have the smallest error, and its accuracy improves as the number of measurement points increases. Conclusion: The dual quaternion solves the rotation and translation components of the pose transformation simultaneously, which eliminates computational error; with the Hopfield network and the Lagrange multiplier method, the computation is fast and accurate and converges to the optimal target pose.
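The abstract compares against a classical SVD-based solution; below is a minimal sketch of that baseline (Kabsch-style rigid alignment by SVD), not the paper's dual-quaternion Hopfield solver.

```python
# SVD (Kabsch) baseline: least-squares rotation R and translation t that map
# model points onto measured points. Used here only as the comparison method.
import numpy as np

def rigid_fit(model, measured):
    cm, cs = model.mean(axis=0), measured.mean(axis=0)
    H = (model - cm).T @ (measured - cs)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = cs - R @ cm
    return R, t

rng = np.random.default_rng(1)
model = rng.random((8, 3))
true_R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(true_R) < 0:          # ensure a proper rotation
    true_R[:, 0] *= -1
measured = model @ true_R.T + np.array([0.1, -0.2, 0.3])
R, t = rigid_fit(model, measured)
print(np.allclose(R, true_R), np.round(t, 3))
```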
