Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
In this paper, an effective method of facial feature detection is proposed for human-robot interaction (HRI). Because a mobile robot moves, its vision system inevitably faces varied imaging conditions such as pose variations, illumination changes, and cluttered backgrounds. To detect faces correctly under such difficult conditions, we focus on the local intensity pattern of the facial features. Their characteristically dark, directionally distinct patterns provide robust cues for detection. Based on this observation, we suggest a new directional template for detecting the major facial features, namely the two eyes and the mouth. Applying this template to a facial image produces a new convolved image, which we refer to as the edge-like blob map. One distinctive characteristic of this map is that it provides local, directional convolution values for each image pixel, which makes it easier to construct candidate blobs of the major facial features without information about the facial boundary. These candidates are then filtered using conditions on the spatial relationship of the two eyes and the mouth, and face detection is completed by applying appearance-based facial templates to the refined facial features. Detection results on various color images and gray-level face database images demonstrate the usefulness of the proposed method in HRI applications.
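The directional-template convolution described in the abstract above can be sketched as follows. The 3-row bright-dark-bright kernel shape and its weights are illustrative assumptions, not the paper's actual template:

```python
import numpy as np

def edge_like_blob_map(gray, half_w=2):
    """Convolve a grayscale image with a hypothetical horizontal directional
    template: bright-dark-bright from top to bottom, elongated horizontally,
    so dark horizontal blobs (eyes, mouth) give a high response."""
    k = np.ones((3, 2 * half_w + 1), dtype=float)
    k[1, :] = -2.0                        # dark middle row of the template
    h, w = gray.shape
    pad = np.pad(gray.astype(float), ((1, 1), (half_w, half_w)), mode='edge')
    out = np.zeros((h, w), dtype=float)
    for dy in range(3):                   # direct convolution, no SciPy needed
        for dx in range(k.shape[1]):
            out += k[dy, dx] * pad[dy:dy + h, dx:dx + w]
    return out
```

On a synthetic image, a dark horizontal bar on a bright background yields its maximum response on the bar, while uniform regions respond with zero.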

2.
Towards a system for automatic facial feature detection   (Cited by 21; 0 self-citations)
A model-based methodology is proposed to detect facial features in a front-view ID-type picture. The system is composed of three modules: context (i.e., face location), eye, and mouth. The context module is a low-resolution module which defines a face template in terms of intensity valley regions. The valley regions are detected using morphological filtering and 8-connected blob coloring. The objective is to generate a list of hypothesized face locations ranked by face likelihood, leaving detailed analysis to the high-resolution eye and mouth modules. The aim of both is to confirm and refine the locations and shapes of their respective features of interest. Detection is done via a two-step modeling approach based on the Hough transform and the deformable template technique. The results show that the proposed system locates facial features very quickly, with adequate or better fit in over 80% of the images.

3.
Objective: To address the incomplete information and insufficiently realistic texture detail obtained when recovering a facial texture map from a single face image, a method for generating full panoramic facial texture maps based on a generative adversarial network is proposed. Method: The feature relationship between the 2D face image and the 3D face model is converted into conditional parameters of the encoder; the probability distribution of the latent data is obtained from the multivariate Gaussian distribution of the image data and the face condition parameters, and is used in the generator to learn the subject's head and facial texture features. A panoramic texture generation model is trained on a newly created facial texture dataset, and discriminators for different attributes evaluate the output and provide feedback, improving the completeness and realism of the generated texture maps. Results: Experiments were compared with current state-of-the-art methods. Single frontal face test images were randomly selected from the CelebA-HQ and LFW (Labeled Faces in the Wild) datasets; visual comparison of the generated results and of the 3D-mapped renderings shows that the completeness and display quality of the texture maps are superior to other methods. Quantitative pixel-level comparisons over global and facial regions show that, compared with UVGAN, global peak signal-to-noise ratio (PSNR) and global structural similarity (SSIM) improve by 7.9 dB and 0.088 respectively, and local PSNR and local SSIM improve by 2.8 dB and 0...

4.
This paper presents an integrated approach for tracking hands, faces, and specific facial features (eyes, nose, and mouth) in image sequences. For hand and face tracking, we employ a state-of-the-art blob tracker which is specifically trained to track skin-colored regions. We extend the skin color tracker by proposing an incremental probabilistic classifier, used to maintain and continuously update the belief about the class of each tracked blob (left hand, right hand, or face) as well as to associate hand blobs with their corresponding faces. An additional contribution of this paper is a novel method for the detection and tracking of specific facial features within each detected facial blob, consisting of an appearance-based detector and a feature-based tracker. The proposed approach is intended to provide input for the analysis of hand gestures and facial expressions that humans use while engaged in various conversational states with robots operating autonomously in public places. It has been integrated into a system which runs in real time on a conventional personal computer located on a mobile robot. Experimental results confirm its effectiveness for the specific task at hand.

5.
A novel method for eye and mouth detection and eye center and mouth corner localization, based on geometrical information is presented in this paper. First, a face detector is applied to detect the facial region, and the edge map of this region is calculated. The distance vector field of the face is extracted by assigning to every facial image pixel a vector pointing to the closest edge pixel. The x and y components of these vectors are used to detect the eyes and mouth regions. Luminance information is used for eye center localization, after removing unwanted effects, such as specular highlights, whereas the hue channel of the lip area is used for the detection of the mouth corners. The proposed method has been tested on the XM2VTS and BioID databases, with very good results.
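The distance vector field described in the abstract above can be sketched brute-force; the function name and the O(pixels x edges) approach are ours, and a production system would use a distance transform instead:

```python
import numpy as np

def distance_vector_field(edge_map):
    """For every pixel, return the (dy, dx) vector pointing to the nearest
    edge pixel of a boolean edge map. Brute-force illustrative sketch."""
    edges = np.argwhere(edge_map)                    # (N, 2) edge coordinates
    h, w = edge_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([ys, xs], axis=-1).reshape(-1, 1, 2)
    d2 = ((pix - edges[None]) ** 2).sum(-1)          # squared distances (P, N)
    nearest = edges[d2.argmin(1)]                    # closest edge per pixel
    return (nearest - pix[:, 0]).reshape(h, w, 2)    # vectors to nearest edge
```

The x and y components of this field are exactly the two channels the abstract uses to detect the eye and mouth regions.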

6.
Proportional symbol maps visualize numerical data associated with point locations by placing a scaled symbol—typically an opaque disk or square—at the corresponding point on a map. The area of each symbol is proportional to the numerical value associated with its location. Every visually meaningful proportional symbol map will contain at least some overlapping symbols. These need to be drawn in such a way that the user can still judge their relative sizes accurately.
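Since the abstract above makes symbol area, not radius, proportional to the value, the radius must scale with the square root; a minimal sketch:

```python
import math

def symbol_radius(value, max_value, max_radius):
    """Area-proportional scaling: symbol AREA encodes the value, so the
    radius grows with the square root of the value, not linearly."""
    return max_radius * math.sqrt(value / max_value)
```

A value four times larger thus gets a disk of twice the radius, which keeps the visual (area) comparison faithful.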

7.
A face pose estimation method based on 3D facial depth data is proposed. Using the depth data of the face together with its pixel-aligned grayscale image, key facial feature points are located according to principles of differential geometry, corresponding curvature computations, and grayscale features of the face data; the three pose angles of the face in 3D space are then computed. Experiments show that the method can accurately estimate face rotation angles under pose variation, providing a basis for further face recognition and expression analysis.

8.
In this paper a method is proposed for person identification and verification. The method proposed in Viola and Jones (2001) is used to detect the face region in the image. The detected face region is processed to determine the locations of the eyes and mouth. The facial and mouth features are extracted relative to the locations of the eyes and mouth. A new feature called fovea intensity comparison code (FICC) is obtained from intensity values of the face/mouth region. The dimension of the FICC is reduced using principal component analysis (PCA). Euclidean distance matching is used for identification and verification. The performance of the system is evaluated in real time in the laboratory environment, and the system achieves a recognition rate (RR) of 99.0% and an equal error rate (EER) of about 0.84% for 50 subjects. The performance of the system is also evaluated on the eXtended Multi Modal Verification for Teleservices and Security (XM2VTS) database, where the system achieves a recognition rate of 100% and an equal error rate (EER) of about 0.23%.
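The PCA-plus-Euclidean-distance matching stage of the abstract above can be sketched as follows. The FICC feature itself is not reproduced; the gallery rows stand in for FICC vectors, and all names here are ours:

```python
import numpy as np

def pca_project(X, k):
    """Fit PCA on the gallery matrix X (samples x features) via SVD and
    project onto the top-k principal components."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    comps = Vt[:k]
    return Xc @ comps.T, mu, comps

def identify(probe, gallery_proj, mu, comps):
    """Nearest-neighbour identification by Euclidean distance in PCA space."""
    p = (probe - mu) @ comps.T
    d = np.linalg.norm(gallery_proj - p, axis=1)
    return int(d.argmin())
```

A probe vector close to one gallery entry is identified as that subject.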

9.
As a research hotspot in computer vision, facial attribute transfer is important for digital entertainment production, assisted face recognition, and related fields. Existing algorithms suffer from blurred generated images and unwanted changes in attribute-irrelevant regions. To address these shortcomings, a facial attribute transfer model based on a visual-attention generative adversarial network is proposed. To reduce changes in attribute-irrelevant regions, the generator introduces visual attention and separately outputs an RGB image and an attention map, which are fused to obtain the attribute-transfer result. A multi-scale discriminator preserves the details of high-dimensional feature maps. Cycle-consistency loss and attention-map loss are added to the constraints to preserve facial identity and focus the transfer on attribute-relevant regions. Experiments show that the model reduces changes in attribute-irrelevant regions and improves the quality of facial attribute transfer.

10.
Objective: Face super-resolution reconstruction is a domain-specific super-resolution problem. To make full use of facial prior knowledge, a deep face super-resolution algorithm based on joint multi-task learning is proposed. Method: First, residual learning and a symmetric skip-connection network extract multi-level features of the low-resolution face; loss weights and loss thresholds are set according to the learning difficulty of each task, and the network is trained with joint multi-attribute learning. A perceptual loss function then measures the semantic gap between the HR (high-resolution) and SR (super-resolution) images, and the effectiveness of perceptual loss in improving the reconstruction of facial semantic information is demonstrated. Finally, the face attribute dataset is augmented, and joint multi-task learning is performed on it to obtain super-resolution results with more realistic visual quality. Results: Peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are used as objective evaluation criteria, and the results are compared with other mainstream methods. On the face attribute dataset (CelebA), at 8x magnification, the proposed algorithm improves PSNR by about 2.15 dB and 1.2 dB over the general-purpose super-resolution MemNet (persistent memory network) algorithm and the face super-resolution FSRNet (end-to-end learning face super-resolution network) algorithm, respectively. Conclusion: The experimental data and result images show that the algorithm makes better use of facial prior knowledge and produces facial edges and texture details that are visually more realistic and sharper.
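The PSNR figures quoted in several of the abstracts above follow the standard definition; a minimal sketch over flat pixel sequences:

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel
    sequences: 10 * log10(peak^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    return 10.0 * math.log10(peak ** 2 / mse)
```

Identical images give infinite PSNR (zero MSE), so the function is only meaningful when the reconstruction differs from the reference.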

11.
Given a person’s neutral face, we can predict his/her unseen expression by machine learning techniques for image processing. Different from the prior expression cloning or image analogy approaches, we try to hallucinate the person’s plausible facial expression with the help of a large face expression database. In the first step, regularization network based nonlinear manifold learning is used to obtain a smooth estimation for unseen facial expression, which is better than the reconstruction results of PCA. In the second step, Markov network is adopted to learn the low-level local facial feature’s relationship between the residual neutral and the expressional face image’s patches in the training set, then belief propagation is employed to infer the expressional residual face image for that person. By integrating the two approaches, we obtain the final results. The experimental results show that the hallucinated facial expression is not only expressive but also close to the ground truth.

12.
A face image (called the source image) is mapped onto another face image (called the target image), morphing the face to achieve particular expressive effects. The key problem is maintaining the constrained mapping between the facial features of the two faces. A technique is proposed that realizes this mapping effectively: a facial feature detection algorithm detects the feature points of the source and target faces; based on these feature points, the target face is converted into a triangular mesh by Delaunay triangulation; harmonic mapping is then used to compute the corresponding coordinates of all mesh vertices in the source face image; once the texture coordinates of the mesh vertices are obtained, the mapping from the source face to the target face can be carried out. Experiments show that the method can realize mappings between arbitrary faces while preserving the correspondence between facial features well.
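The per-triangle step of a Delaunay-based warp like the one above, transferring a point from a target triangle to the corresponding source triangle via barycentric coordinates, might look like this (the harmonic mapping of mesh vertices is not reproduced; this is only the final texture-lookup step):

```python
def map_point(p, tri_dst, tri_src):
    """Express point p in barycentric coordinates of the destination
    triangle, then evaluate those coordinates in the source triangle."""
    (x1, y1), (x2, y2), (x3, y3) = tri_dst
    px, py = p
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    a = ((y2 - y3) * (px - x3) + (x3 - x2) * (py - y3)) / det
    b = ((y3 - y1) * (px - x3) + (x1 - x3) * (py - y3)) / det
    c = 1.0 - a - b
    sx = a * tri_src[0][0] + b * tri_src[1][0] + c * tri_src[2][0]
    sy = a * tri_src[0][1] + b * tri_src[1][1] + c * tri_src[2][1]
    return sx, sy
```

Applied per pixel over every triangle of the mesh, this pulls source-face texture onto the target-face geometry.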

13.
For facial expression recognition, the traditional approach first extracts image features and then applies machine learning methods; feature extraction is complex and generalization is poor. To achieve better facial expression recognition, this paper proposes a method combining feature extraction with a convolutional neural network. First, the AdaBoost algorithm with Haar-like features detects the face region in the original database images; then the Local Binary Patterns (LBP) feature map of the face region is extracted, normalized in size, and fed into an improved LeNet-5 network for recognition. Ten-fold cross-validation experiments on the CK+ and JAFFE datasets yield accuracies of 98.19% and 96.35%, respectively. The experimental results show that the method is competitive and effective for facial expression recognition compared with other mainstream methods.
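A basic 3x3 LBP feature map, as used in the abstract above, can be sketched as follows; the neighbor ordering and the `>=` comparison convention are common choices, not necessarily the paper's:

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 Local Binary Patterns: each interior pixel becomes an
    8-bit code built from comparisons with its eight neighbours."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = gray[1:-1, 1:-1]
    # clockwise neighbour offsets starting from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (nb >= center).astype(np.uint8) << bit
    return out
```

The resulting code image (or its histogram) is what would be size-normalized and fed into the classifier network.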

14.
We present a multimodal approach for face modeling and recognition. The algorithm uses three cameras to capture stereo images, two frontal and one profile, of the face. 2D facial features are extracted from one of the frontal images and a dense disparity map is computed from the two frontal images. Using the extracted 2D features and their corresponding disparities, we compute their 3D coordinates. We next align a low resolution 3D mesh model to the 3D features, re-project its vertices onto the frontal 2D image and adjust its profile silhouette vertices using the profile view image. We increase the resolution of the resulting 2D model at its center region to obtain a facial mask model covering distinctive features of the face. The 2D coordinates of the vertices, along with their disparities, result in a deformed 3D mask model specific to a given subject’s face. Our method integrates information from the extracted facial features from the 2D image modality with information from the 3D modality obtained from the stereo images. Application of the models in 3D face recognition, for 112 subjects, validates the algorithm with a 95% identification rate and 92% verification rate at 0.1% false acceptance rate.
Mohammad H. Mahoor

15.
Extraction of binary facial edge images suitable for expression analysis   (Cited by 1; 0 self-citations)
Edge images are an important representation of face images and play an important role in face image analysis. A new method for extracting binary facial edge images is proposed. The method uses the wavelet transform to reconstruct the high-frequency content of the image and exploits the multi-scale analysis property of wavelets to extract image edges, with two binarization passes and one denoising pass. The extracted edge images are of high quality: edges belonging to the same facial component are well connected, adhesion between edges of different components is rare, and the result is robust to illumination changes. Using the extracted binary edge images to recognize four facial action units on images from the AR and Yale databases achieves recognition rates above 93%, a preliminary indication that the method is suitable for facial expression analysis.
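As a rough stand-in for the wavelet high-frequency reconstruction plus binarization described above, one-level Haar detail energy per 2x2 block can be thresholded; this is a deliberate simplification of the paper's multi-scale, twice-binarized method:

```python
import numpy as np

def haar_edge_map(gray, thresh):
    """Combined one-level Haar detail energy per 2x2 block (image dimensions
    assumed even), binarized by a threshold to give an edge map."""
    g = gray.astype(float)
    a = g[0::2, 0::2]; b = g[0::2, 1::2]      # top-left, top-right
    c = g[1::2, 0::2]; d = g[1::2, 1::2]      # bottom-left, bottom-right
    detail = np.abs(a - b) + np.abs(a - c) + np.abs(a - d)
    return detail > thresh
```

Blocks straddling an intensity boundary light up; uniform blocks stay dark, regardless of their absolute brightness, which hints at the illumination robustness the abstract reports.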

16.
When deep-learning-based image super-resolution methods reconstruct low-resolution face images, the results are often blurred and differ substantially from the real images. To address this, a face super-resolution reconstruction method that fuses a reference image is proposed, enabling effective reconstruction of low-resolution face images. A reference-image feature-extraction subnetwork extracts multi-scale features of the reference image, retaining detailed feature information about facial demeanor and key regions while discarding redundant information such as face contour and facial expression. Based on the extracted multi-scale reference features, a stage-by-stage super-resolution main network progressively fills in the features of the low-resolution face image and finally reconstructs a high-resolution face image. Experiments on benchmark datasets show that the method effectively reconstructs low-resolution face images and is robust.

17.
1. Introduction. Face modeling and animation is one of the most challenging topics in computer graphics. First, the geometry of the human face is extremely complex: its surface carries countless fine wrinkles as well as subtle variations of color and texture, so building an accurate face model and rendering a photorealistic face is very difficult. Second, facial motion results from the combined action of bone, muscle, subcutaneous tissue, and skin, and its mechanics are very complex, so generating realistic facial animation is also very difficult. In addition, we humans are born with an ability to recognize and...

18.
Blobs are a basic grayscale structure in images and have important applications in image analysis. This paper proposes a new Blob Form Filter (BFF) for detecting blob-shaped targets in images. Because the BFF is based on rank-order filtering, it is robust to noise. Experiments show that the BFF can effectively detect blob regions of a specified size in noise-corrupted binary and grayscale images, and that the extracted blob shapes closely match the original blobs.
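Rank-order filtering, the principle behind the BFF's noise robustness in the abstract above, can be sketched as a windowed sort; the blob-shape matching itself is not reproduced here:

```python
import numpy as np

def rank_filter(gray, size=3, rank=4):
    """Rank-order filtering: each output pixel is the rank-th smallest value
    in its size x size neighbourhood (size=3, rank=4 gives the median)."""
    h, w = gray.shape
    r = size // 2
    pad = np.pad(gray, r, mode='edge')
    out = np.empty_like(gray)
    for y in range(h):
        for x in range(w):
            out[y, x] = np.sort(pad[y:y + size, x:x + size], axis=None)[rank]
    return out
```

An isolated noise pixel never reaches the median rank in its window and is suppressed, while a blob at least as large as the window survives, which is why rank-order filtering preserves blob shape under impulse noise.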

19.
An oscillatory network model with controllable coupling and self-organized synchronization-based performance was developed for image processing. The model demonstrates the following capabilities: (a) brightness segmentation of real grey-level images; (b) colored image segmentation; (c) selective image segmentation—extraction of the subset of image fragments with brightness values contained in an arbitrary given interval. An additional capability—successive selection of spatially separated fragments of a visual scene—has been achieved via further model extension. The fragment selection (under minor natural restrictions on mutual fragment locations) is based on in-phase internal synchronization of oscillator ensembles, corresponding to all the fragments, and distinct phase shifts between different ensembles.

20.
For facial expression recognition, we selected three images: (i) just before speaking, (ii) speaking the first vowel, and (iii) speaking the last vowel in an utterance. In this study, as a pre-processing module, we added a judgment function to distinguish a front-view face for facial expression recognition. A frame of the front-view face in a dynamic image is selected by estimating the face direction. The judgment function measures four feature parameters using thermal image processing, and selects the thermal images that have all the values of the feature parameters within limited ranges which were decided on the basis of training thermal images of front-view faces. As an initial investigation, we adopted the utterance of the Japanese name “Taro,” which is semantically neutral. The mean judgment accuracy of the front-view face was 99.5% for six subjects who changed their face direction freely. Using the proposed method, the facial expressions of six subjects were distinguishable with 84.0% accuracy when they exhibited one of the intentional facial expressions of “angry,” “happy,” “neutral,” “sad,” and “surprised.” We expect the proposed method to be applicable for recognizing facial expressions in daily conversation.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号