Similar Documents
 Found 20 similar documents (search time: 750 ms)
1.
Multi-view face detection based on the AdaBoost algorithm   Cited by: 2 (self-citations: 1, others: 1)
Long Min, Huang Fuzhen, Bian Houqin. 《计算机仿真》 (Computer Simulation), 2007, 24(11): 206-209
This paper proposes a multi-view face detection method based on the AdaBoost algorithm. Compared with frontal face detection, research on multi-view face detection remains relatively weak and is still far from meeting the demands of practical applications. First, Haar features are used to design and construct the weak-classifier space, and AdaBoost learning produces a view-based face detector built from a cascade of multiple classifiers. Multi-view faces are then divided into three categories: full profile, half profile, and frontal, and a separate detector is trained for each view range. In simulation experiments on the CMU profile face test set, the AdaBoost-based method achieved a detection rate of 89.8% with 243 false positives, outperforming the method of Schneiderman et al.
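The AdaBoost learning step described in item 1 can be illustrated in miniature. The sketch below runs discrete AdaBoost over 1-D threshold stumps, a toy stand-in for the Haar-feature weak classifiers used in the paper; the data, round count, and stump form are illustrative assumptions, not the paper's actual setup.

```python
import math

def stump_predict(x, thresh, polarity):
    # Weak classifier: +1 if polarity*(x - thresh) > 0, else -1.
    return 1 if polarity * (x - thresh) > 0 else -1

def best_stump(xs, ys, w):
    # Exhaustively pick the threshold/polarity pair with lowest weighted error.
    best = None
    for thresh in xs:
        for polarity in (1, -1):
            err = sum(wi for xi, yi, wi in zip(xs, ys, w)
                      if stump_predict(xi, thresh, polarity) != yi)
            if best is None or err < best[0]:
                best = (err, thresh, polarity)
    return best

def adaboost(xs, ys, rounds=3):
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []                       # list of (alpha, thresh, polarity)
    for _ in range(rounds):
        err, thresh, polarity = best_stump(xs, ys, w)
        err = max(err, 1e-10)           # avoid log(0) on a perfect stump
        if err >= 0.5:
            break
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, thresh, polarity))
        # Re-weight: boost the weight of misclassified samples.
        w = [wi * math.exp(-alpha * yi * stump_predict(xi, thresh, polarity))
             for xi, yi, wi in zip(xs, ys, w)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def classify(ensemble, x):
    # Weighted vote of all weak classifiers.
    score = sum(a * stump_predict(x, t, p) for a, t, p in ensemble)
    return 1 if score > 0 else -1
```

A real detector would use thousands of Haar-feature stumps arranged in a cascade so that most non-face windows are rejected early; the boosting loop itself, however, is the same round-by-round re-weighting shown here.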

2.
Recent face recognition algorithms can achieve high accuracy when the tested face samples are frontal. However, when the face pose changes significantly, the performance of existing methods drops drastically. Pose-robust face recognition is therefore highly desirable, especially when each face class has only one frontal training sample. In this study, we propose a 2D face fitting-assisted 3D face reconstruction algorithm that aims at recognizing faces of different poses when each face class has only one frontal training sample. For each frontal training sample, a 3D face is reconstructed by optimizing the parameters of a 3D morphable model (3DMM). By rotating the reconstructed 3D face to different views, virtual face images at different poses are generated to enlarge the training set for face recognition. Unlike conventional 3D face reconstruction methods, the proposed algorithm utilizes automatic 2D face fitting to assist 3D face reconstruction. We automatically locate 88 sparse points on the frontal face with a 2D face-fitting algorithm, the Random Forest Embedded Active Shape Model, which embeds random forest learning into the framework of the Active Shape Model. The 2D face-fitting results are added to the 3D face reconstruction objective function as shape constraints, so the optimization objective energy function takes not only image intensity but also the 2D fitting results into account. The shape and texture parameters of the 3DMM are thus estimated by fitting the 3DMM to the 2D frontal face sample, which is a non-linear optimization problem. We evaluate the proposed method on the publicly available CMU PIE database, which includes faces viewed from 11 different poses, and the results show that the proposed method is effective and that its face recognition results under pose variation are promising.

3.
《Image and vision computing》2002,20(9-10):657-664
We demonstrate that a small number of 2D linear statistical models are sufficient to capture the shape and appearance of a face from a wide range of viewpoints. Such models can be used to estimate head orientation and track faces through large angles. Given multiple images of the same face we can learn a coupled model describing the relationship between the frontal appearance and the profile of a face. This relationship can be used to predict new views of a face seen from one view and to constrain search algorithms which seek to locate a face in multiple views simultaneously.

4.
Automatic analysis of human facial expression is a challenging problem with many applications. Most existing automated systems for facial expression analysis attempt to recognize a few prototypic emotional expressions, such as anger and happiness. Instead of presenting another approach to machine analysis of prototypic facial expressions of emotion, the method presented in this paper attempts to handle a large range of human facial behavior by recognizing the facial muscle actions that produce expressions. Virtually all existing vision systems for facial muscle action detection deal only with frontal-view face images and cannot handle the temporal dynamics of facial actions. In this paper, we present a system for automatic recognition of facial action units (AUs) and their temporal models from long, profile-view face image sequences. We exploit particle filtering to track 15 facial points in an input face-profile sequence, and we introduce facial-action-dynamics recognition from continuous video input using temporal rules. The algorithm performs both automatic segmentation of an input video into the facial expressions pictured and recognition of the temporal segments (i.e., onset, apex, offset) of 27 AUs occurring alone or in combination in the input face-profile video. A recognition rate of 87% is achieved.
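The onset/apex/offset segmentation in item 4 can be illustrated with a toy rule-based labeler over a 1-D activation signal. The thresholds and rules below are illustrative assumptions, not the paper's actual temporal rules, which operate on tracked facial-point trajectories.

```python
def segment_au(signal, rise=0.05, apex_level=0.5):
    # Toy temporal rules over a per-frame activation value:
    #   apex    - activation high and roughly flat
    #   onset   - activation increasing
    #   offset  - activation decreasing
    #   neutral - otherwise
    labels = []
    for i, v in enumerate(signal):
        d = signal[i] - signal[i - 1] if i > 0 else 0.0
        if v >= apex_level and abs(d) < rise:
            labels.append("apex")
        elif d >= rise:
            labels.append("onset")
        elif d <= -rise:
            labels.append("offset")
        else:
            labels.append("neutral")
    return labels
```

Applied to a rise-hold-fall activation profile, the labeler recovers the expected onset, apex, and offset phases frame by frame.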

5.
Existing face hallucination methods assume that the face images are well-aligned. However, in practice, given a low-resolution face image, it is very difficult to perform precise alignment. As a result, the quality of the super-resolved image is degraded dramatically. In this paper, we propose a near frontal-view face hallucination method which is robust to face image mis-alignment. Based on the discriminative nature of sparse representation, we propose a global face sparse representation model that can reconstruct images with mis-alignment variations. We further propose an iterative method combining the global sparse representation and the local linear regression using the Expectation Maximization (EM) algorithm, in which the face hallucination is converted into a parameter estimation problem with incomplete data. Since the proposed algorithm is independent of the face similarity resulting from precise alignment, the proposed algorithm is robust to mis-alignment. In addition, the proposed iterative manner not only combines the merits of the global and local face hallucination, but also provides a convenient way to integrate different strategies to handle the mis-alignment problem. Experimental results show that the proposed method achieves better performance than existing methods, especially for mis-aligned face images.

6.
In this paper, we propose a new face recognition algorithm based on a single frontal-view image per face subject, which exploits the structure of the face manifold. To compare two near-frontal face images, each face is treated as a sequence of local image blocks, and each block of one image can be reconstructed from the corresponding local block of the other face image. An elastic local reconstruction (ELR) method is then proposed to measure the similarities between block pairs, and thereby the difference between the two face images. Our algorithm not only benefits from the face manifold structure, making it robust to various image variations, but is also computationally simple because there is no need to build the face manifold explicitly. We evaluate the proposed algorithm on different databases produced under various conditions, e.g. lighting, expression, perspective, with/without glasses, and occlusion. Consistent and promising experimental results were obtained, showing that our algorithm can greatly improve recognition rates under all these conditions.

7.
This paper demonstrates how a weighted fusion of multiple Active Shape Models (ASM) or Active Appearance Models (AAM) can be used to perform multi-view facial segmentation with only a limited number of views available for training. The idea is to construct models only from frontal and profile views and subsequently fuse these models with adequate weights to segment any facial view. This reduces the problem of multi-view facial segmentation to one of weight estimation, for which an algorithm is proposed as well. The evaluation is performed on a set of 280 landmarked static face images corresponding to seven different rotation angles and on several video sequences from the AV@CAR database. It demonstrates that the weight estimation does not have to be very accurate in the case of ASM, while for AAM the influence of correct weight estimation is more critical. Segmentation with the proposed weight estimation method was accurate in 91% of the 280 test images, with the median point-to-point error varying from two to eight pixels (1.8–7.2% of the average inter-eye distance).

8.
This paper provides a simple and effective method for modeling a specific human face. Given frontal and profile photographs of the specific face, together with an elastic face mesh model embedded with facial feature information, facial features are identified using a wavelet-analysis-based method. Based on the displacements of the specific face's feature lines relative to those of the generic face model, the displacements of all mesh points are solved according to elasticity coefficients, fitting the geometry of the specific face. After texture mapping, a highly realistic specific face that can be viewed from arbitrary directions is generated. The method can be implemented quickly and conveniently on an inexpensive PC platform.

9.
In recent years, face recognition based on 3D techniques has emerged as a technology that demonstrates better results than conventional 2D approaches. Using texture (a 180° multi-view image) and depth maps is expected to increase robustness towards the two main challenges in face recognition: pose and illumination. Nevertheless, 3D data must be acquired under highly controlled conditions and in most cases depends on the collaboration of the subject to be recognized. Thus, in applications such as surveillance or access control points, this kind of 3D data may not be available during the recognition process. This leads to a new paradigm of mixed 2D-3D face recognition systems, where 3D data is used in training but either 2D or 3D information can be used in recognition depending on the scenario. Following this concept, where only part of the information (the partial concept) is used in recognition, a novel method is presented in this work, called Partial Principal Component Analysis (P2CA) since it fuses the partial concept with the fundamentals of the well-known PCA algorithm. This strategy has proven to be very robust in pose-variation scenarios, showing that the 3D training process retains all the spatial information of the face while the 2D picture effectively recovers the face information from the available data. Furthermore, this work presents a novel approach for the automatic creation of 180° aligned cylindrically projected face images using nine different views. These face images are created by using a cylindrical approximation of the real object surface. The alignment is done by first applying a global 2D affine transformation of the image, and afterward a local transformation of the desired face features using a triangle mesh. This local alignment allows a closer look at the feature properties rather than the differences. Finally, these aligned face images are used for training a pose-invariant face recognition approach (P2CA).

10.
We propose an analytic-to-holistic approach which can identify faces under different perspective variations. The database for the test consists of 40 frontal-view faces. The first step is to locate 15 feature points on a face. A head model is proposed, and the rotation of the face can be estimated using geometrical measurements. The positions of the feature points are adjusted so that their corresponding frontal-view positions are approximated. These feature points are then compared with the feature points of the faces in a database using a similarity transform. In the second step, we set up windows for the eyes, nose, and mouth. These feature windows are compared with those in the database by correlation. Results show that this approach achieves a similar level of performance across different viewing directions of a face. Under different perspective variations, the overall recognition rates are over 84 percent and 96 percent for the first and the first three most likely matched faces, respectively.
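The similarity-transform comparison in item 10 can be sketched as follows. This minimal version estimates a 2D similarity transform (rotation, scale, translation) from just two point correspondences using complex arithmetic, a deliberate simplification of the paper's 15-feature-point setup; the points and helper names are illustrative.

```python
def similarity_from_two_points(src, dst):
    # Solve dst = a*src + b over the complex plane, where a encodes
    # rotation+scale and b the translation. Two correspondences suffice
    # for an exact solution (a least-squares fit would be used for more).
    s0, s1 = (complex(*p) for p in src)
    d0, d1 = (complex(*p) for p in dst)
    a = (d1 - d0) / (s1 - s0)
    b = d0 - a * s0
    return a, b

def apply_transform(a, b, pt):
    # Map a 2D point through the estimated similarity transform.
    z = a * complex(*pt) + b
    return (z.real, z.imag)
```

With matched landmark sets in hand, each probe landmark can be mapped into the gallery's frame this way before the window-correlation stage.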

11.
Mug shot photography has been used by the police to identify criminals for more than a century. However, the common scenario of face recognition using frontal and side-view mug shots as the gallery remains largely uninvestigated in computerized face recognition across pose. This paper presents a novel appearance-based approach using frontal and side-face images to handle pose variations in face recognition, which has great potential in forensic and security applications involving police mug shot databases. Virtual views in different poses are generated in two steps: 1) shape modelling and 2) texture synthesis. In the shape modelling step, a multilevel variation minimization approach is applied to generate personalized 3-D face shapes. In the texture synthesis step, face surface properties are analyzed and virtual views under arbitrary viewing conditions are rendered, taking diffuse and specular reflections into account. Appearance-based face recognition is performed with the augmentation of synthesized virtual views covering possible viewing angles to recognize probe views under arbitrary conditions. The encouraging experimental results demonstrate that the proposed approach using frontal and side-view images is a feasible and effective solution for recognizing rotated faces, and can lead to better, practical use of existing forensic databases in computerized face recognition applications.

12.
Reconstructing 3D face models from 2D face images is usually done using a single reference 3D face model or a few gender/ethnicity-specific 3D face models. However, different persons, even those of the same gender or ethnicity, usually have significantly different faces in terms of overall appearance, which forms the basis of person recognition via faces. Consequently, existing 3D reference-model-based methods have limited capability of reconstructing precise 3D face models for a large variety of persons. In this paper, we propose to explore a reservoir of diverse reference models for 3D face reconstruction from forensic mug shot face images, where facial exemplars coherent with the input determine the final shape estimation. Specifically, our 3D face reconstruction is formulated as an energy minimization problem with: 1) a shading constraint from multiple input face images, 2) distortion- and self-occlusion-based color consistency between different views, and 3) a depth-uncertainty-based smoothness constraint on adjacent pixels. The proposed energy is minimized in a coarse-to-fine way, where the shape refinement step uses a multi-label segmentation algorithm. Experimental results on challenging datasets demonstrate that the proposed algorithm is capable of recovering high-quality 3D face models. We also show that our reconstructed models successfully boost face recognition accuracy.

13.
The open-set problem is among the problems that have significantly affected the performance of face recognition algorithms in real-world scenarios. Open-set recognition operates under the supposition that not all probes have a pair in the gallery. Most face recognition systems in real-world scenarios focus on handling pose, expression, and illumination problems. In addition to these challenges, as the number of subjects increases, these problems are intensified by look-alike faces: pairs of subjects whose inter-class similarity is higher than their intra-class variation. Such look-alike faces can be intrinsic, situation-based, or created by facial plastic surgery. This work introduces three real-world open-set face recognition methods robust to facial plastic surgery changes and look-alike faces, based on 3D face reconstruction and sparse representation. Since some real-world face recognition databases have only one image per subject in the gallery, this paper proposes a novel way to overcome this challenge by 3D modeling from gallery images and synthesizing them to generate several images. Accordingly, a 3D model is initially reconstructed from the frontal face images in a real-world gallery. Each reconstructed 3D face in the gallery is then synthesized into several possible views, and a sparse dictionary is generated from the synthesized face images for each person. A likeness dictionary is also defined, and its optimization problem is solved by the proposed method. Finally, open-set face recognition is performed using three proposed representation classifications. Promising results are achieved for face recognition across plastic surgery and look-alike faces on three databases, including the plastic surgery face, look-alike face, and LFW databases, compared with several state-of-the-art methods. Several real-world, open-set scenarios are also used to evaluate the proposed method on these databases.

14.
To meet the practical needs of online interpersonal communication, a convenient image-based facial animation method is proposed. Given frontal and half-profile images of the same face, a generic 3D head model is introduced to estimate the initial head orientation, and 2D mesh models are built for the two images separately. Image interpolation and warping techniques are then used to synthesize faces at different orientations, with occlusion handled appropriately. Lip motion and facial expressions are obtained by projecting the deformation parameters of the MPEG-4 3D model onto the corresponding 2D meshes. The method requires neither reconstruction of a 3D model of the subject's face nor camera calibration, and can simultaneously synthesize facial expressions and small-range orientation changes. Experimental results demonstrate the convenience and cross-platform generality of the method, with good synthesis quality.

15.
Matching 2.5D face scans to 3D models   Cited by: 7 (self-citations: 0, others: 7)
The performance of face recognition systems that use two-dimensional images depends on factors such as lighting and subject's pose. We are developing a face recognition system that utilizes three-dimensional shape information to make the system more robust to arbitrary pose and lighting. For each subject, a 3D face model is constructed by integrating several 2.5D face scans which are captured from different views. 2.5D is a simplified 3D (x,y,z) surface representation that contains at most one depth value (z direction) for every point in the (x, y) plane. Two different modalities provided by the facial scan, namely, shape and texture, are utilized and integrated for face matching. The recognition engine consists of two components, surface matching and appearance-based matching. The surface matching component is based on a modified iterative closest point (ICP) algorithm. The candidate list from the gallery used for appearance matching is dynamically generated based on the output of the surface matching component, which reduces the complexity of the appearance-based matching stage. Three-dimensional models in the gallery are used to synthesize new appearance samples with pose and illumination variations and the synthesized face images are used in discriminant subspace analysis. The weighted sum rule is applied to combine the scores given by the two matching components. Experimental results are given for matching a database of 200 3D face models with 598 2.5D independent test scans acquired under different pose and some lighting and expression changes. These results show the feasibility of the proposed matching scheme.
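The weighted sum rule that item 15 uses to combine surface-matching and appearance-matching scores can be sketched as below. The min-max normalization and the equal default weight are illustrative assumptions; the paper's actual weights and score scales may differ.

```python
def minmax(scores):
    # Map raw matcher scores into [0, 1] so the two modalities are comparable.
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in scores]

def fuse(surface_scores, appearance_scores, w_surface=0.5):
    # Weighted sum rule over normalized per-gallery-candidate scores.
    s = minmax(surface_scores)
    a = minmax(appearance_scores)
    return [w_surface * si + (1 - w_surface) * ai for si, ai in zip(s, a)]
```

The fused list is ranked and the top-scoring gallery candidate is returned as the match; a candidate that scores well under both ICP surface matching and appearance matching dominates one that is strong in only one modality.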

16.

This paper proposes a novel image forensics technique to detect splicing forgeries in digital images. The method is applicable to images containing two or more persons, where the near frontal views of the faces are available. Firstly, a low-dimensional lighting model is created from a set of front pose face images of a single individual under different directional lighting environments. For this, the set of images is decomposed using principal component analysis. This low-dimensional model, which captures the lighting variation in faces, is then used to estimate the lighting environment (LE) from a given near front pose face image. In a spliced image, the LE estimated from the spliced face will be different from that estimated from the original faces. Therefore, finding the inconsistencies among the LEs estimated from different faces could reveal the splicing forgery. The experimental results on Yale Face Database B and a set of authentic and forged real-life images show the efficacy of the proposed method.


17.
Most face recognition techniques have been successful in dealing with high-resolution (HR) frontal face images. However, real-world face recognition systems are often confronted with low-resolution (LR) face images with pose and illumination variations. This is a very challenging issue, especially under the constraint of using only a single gallery image per person. To address the problem, we propose a novel approach called coupled kernel-based enhanced discriminant analysis (CKEDA). CKEDA aims to simultaneously project the features from LR non-frontal probe images and HR frontal gallery ones into a common space where the discrimination property is maximized. The proposed approach has four advantages: 1) by using an appropriate kernel function, the data becomes linearly separable, which is beneficial for recognition; 2) inspired by linear discriminant analysis (LDA), we integrate multiple discriminant factors into our objective function to enhance the discrimination property; 3) we use the gallery-extension trick to improve recognition performance for the single-gallery-image-per-person problem; 4) our approach can match LR non-frontal probe images with HR frontal gallery images, which is difficult for most existing face recognition techniques. Experimental evaluation on the Multi-PIE dataset demonstrates the highly competitive performance of our algorithm.

18.
This paper proposes and implements an algorithm for 3D head model reconstruction based on two orthogonal images and a standard 3D head model, matching 2D image feature points with 3D model feature points. First, the face and hair regions are segmented, and color transfer is applied to the input images. For the frontal image, feature points are located using an improved ASM (Active Shape Model). An improved local maximum curvature tracing (LMCT) method locates the profile feature points more robustly. By matching the image feature points with feature points predefined on the standard 3D head, radial basis functions are used to deform the standard head, yielding the person-specific 3D head shape model. The reconstructed 3D head then serves as a bridge to automatically match the input images for seamless texture fusion. Finally, the resulting texture is mapped onto the shape model to obtain a realistic person-specific 3D head model corresponding to the input images.

19.
Face recognition based on fitting a 3D morphable model   Cited by: 31 (self-citations: 0, others: 31)
This paper presents a method for face recognition across variations in pose, ranging from frontal to profile views, and across a wide range of illuminations, including cast shadows and specular reflections. To account for these variations, the algorithm simulates the process of image formation in 3D space, using computer graphics, and it estimates 3D shape and texture of faces from single images. The estimate is achieved by fitting a statistical, morphable model of 3D faces to images. The model is learned from a set of textured 3D scans of heads. We describe the construction of the morphable model, an algorithm to fit the model to images, and a framework for face identification. In this framework, faces are represented by model parameters for 3D shape and texture. We present results obtained with 4,488 images from the publicly available CMU-PIE database and 1,940 images from the FERET database.

20.
We propose a novel method for unsupervised face recognition from time-varying sequences of face images obtained in real-world environments. The method utilizes the higher level of sensory variation contained in the input image sequences to autonomously organize the data in an incrementally built graph structure, without relying on category-specific information provided in advance. This is achieved by "chaining" together similar views across the spatio-temporal representations of the face sequences in image space by two types of connecting edges depending on local measures of similarity. Experiments with real-world data gathered over a period of several months and including both frontal and side-view faces of 17 different subjects were used to test the method, achieving a correct self-organization rate of 88.6%. The proposed method can be used in video surveillance systems or for content-based information retrieval.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号