Similar Documents
Found 20 similar documents (search time: 375 ms)
1.
Recently, the importance of face recognition has been increasingly emphasized as inexpensive CCD cameras have spread to a wide range of applications. However, facial images change dramatically under varying lighting, and these appearance changes cause serious performance degradation in face recognition. Many researchers have tried to overcome such illumination problems with diverse approaches, which typically require multiple registered images per person or prior knowledge of the lighting conditions. In this paper, we propose a new method for face recognition under arbitrary lighting conditions, given only a single registered image and training data captured under unknown illuminations. The proposed method is based on illuminated exemplars, which are synthesized from photometric-stereo images of the training data. A linear combination of illuminated exemplars can represent a new face, and the weighting coefficients of those exemplars are used as an identity signature. We conduct experiments to verify our approach and compare it with two traditional approaches. Higher recognition rates are reported in these experiments on the illumination subset of the Max-Planck Institute face database and on the Korean face database.
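The identity-signature idea in item 1 can be illustrated with a least-squares sketch: a probe image is expressed as a linear combination of illuminated exemplar images, and the recovered weight vector is what gets compared across subjects. This is a minimal NumPy toy, assuming vectorized images; the 4-pixel "images" and the function names are illustrative, not taken from the paper.

```python
import numpy as np

def exemplar_signature(probe, exemplars):
    # Least-squares weights expressing the probe image as a linear
    # combination of the illuminated exemplars (columns of `exemplars`).
    # The weight vector serves as the identity signature.
    w, *_ = np.linalg.lstsq(exemplars, probe, rcond=None)
    return w

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy data: 4-pixel "images", 3 illuminated exemplars as columns.
exemplars = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0],
                      [1.0, 1.0, 1.0]])
probe = exemplars @ np.array([0.5, 0.3, 0.2])   # exactly representable
sig = exemplar_signature(probe, exemplars)      # recovers the weights
```

In the full method, the signature of a probe would be matched against gallery signatures with a similarity measure such as `cosine_similarity`.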

2.
Face images are difficult to interpret because they are highly variable. Sources of variability include individual appearance, 3D pose, facial expression, and lighting. We describe a compact parametrized model of facial appearance that takes all of these sources of variability into account. The model represents both shape and gray-level appearance, and is created by performing a statistical analysis over a training set of face images. A robust multiresolution search algorithm is used to fit the model to faces in new images. This allows the main facial features to be located and a set of shape and gray-level appearance parameters to be recovered. A good approximation to a given face can be reconstructed using fewer than 100 of these parameters. This representation can be used for tasks such as image coding, person identification, 3D pose recovery, gender recognition, and expression recognition. Experimental results are presented for a database of 690 face images obtained under widely varying conditions of 3D pose, lighting, and facial expression. The system performs well on all the tasks listed above.
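The statistical analysis behind such a shape-and-appearance model is essentially PCA over the training set: a face is encoded by its projection onto a few principal modes of variation and reconstructed from those parameters. A minimal sketch with synthetic "shapes"; the data, dimensions and variable names are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: 50 "shapes" of 10 landmark coordinates each,
# generated from 2 underlying modes plus a little noise.
modes = rng.normal(size=(2, 10))
coeffs = rng.normal(size=(50, 2))
shapes = coeffs @ modes + 0.01 * rng.normal(size=(50, 10))

mean = shapes.mean(axis=0)
# Principal components of the mean-centred training shapes.
_, s, vt = np.linalg.svd(shapes - mean, full_matrices=False)
P = vt[:2]                      # keep the 2 main modes of variation

def project(x):                 # shape -> model parameters b
    return P @ (x - mean)

def reconstruct(b):             # parameters -> approximate shape
    return mean + P.T @ b

b = project(shapes[0])
approx = reconstruct(b)
err = np.linalg.norm(approx - shapes[0]) / np.linalg.norm(shapes[0])
```

With only two retained modes the reconstruction error is on the order of the added noise, which is the sense in which "fewer than 100 parameters" can approximate a face well.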

3.
4.
The paper proposes a novel pose-invariant face recognition system based on a deformable, generic 3D face model that is a composite of (1) an edge model, (2) a color-region model and (3) a wireframe model, jointly describing the shape and the important features of the face. The first two submodels are used for image analysis and the third mainly for face synthesis. In order to match the model to face images in arbitrary poses, the 3D model can be projected onto different 2D viewplanes based on rotation, translation and scale parameters, thereby generating multiple face-image templates (in different sizes and orientations). Face-shape variations among people are taken into account by the deformation parameters of the model. Given an unknown face, its pose is estimated by model matching, and the system synthesizes face images of known subjects in the same pose. The face is then classified as the subject whose synthesized image is most similar. The synthesized images are generated using a 3D face representation scheme which encodes the 3D shape and texture characteristics of the faces. This face representation is automatically derived from training face images of the subject. Experimental results show that the method is capable of determining pose and recognizing faces accurately over a wide range of poses and under naturally varying lighting conditions. Recognition rates of 92.3% have been achieved with 10 training face images per person.
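The projection of a 3D model onto different 2D viewplanes under rotation, translation and scale can be sketched as a weak-perspective projection with a yaw rotation. The point coordinates, parameter values and function name below are illustrative, not from the paper.

```python
import numpy as np

def project_model(points3d, theta, scale, t):
    # Weak-perspective projection of 3D model points after a yaw
    # rotation by `theta`, isotropic scaling and 2D translation:
    # one pose hypothesis for template generation.
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])
    rotated = points3d @ R.T
    return scale * rotated[:, :2] + t

# A single model point (e.g. a nose tip at unit depth).
nose = np.array([[0.0, 0.0, 1.0]])
frontal = project_model(nose, 0.0, 100.0, np.array([0.0, 0.0]))
profile = project_model(nose, np.pi / 2, 100.0, np.array([0.0, 0.0]))
```

Sweeping `theta`, `scale` and `t` over a grid yields the multiple face-image templates in different sizes and orientations that the abstract describes.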

5.
This paper proposes a robust face tracking method based on the Condensation algorithm that uses skin color and facial shape as observation measures. Two trackers are used for robust tracking: one tracks the skin-color regions and the other tracks the facial-shape regions. The two trackers are coupled using an importance sampling technique, where the skin-color density obtained from the skin-color tracker is used as the importance function to generate samples for the shape tracker. The samples of the skin-color tracker within the chosen shape region are updated with higher weights. In addition, an adaptive color model is used to suppress the effect of illumination changes on the skin-color tracker. The proposed face tracker performs more robustly than either the skin-color-based tracker or the facial-shape-based tracker alone in the presence of background clutter and/or illumination changes.
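The importance-sampling coupling in item 5 can be illustrated in one dimension: samples are drawn from the colour-tracker density (the importance function) and reweighted by the shape likelihood, so the weighted sample set approximates the target density. This is a toy sketch with Gaussian densities; every parameter value is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# 1-D toy state: face position. The colour tracker supplies the wide
# importance density q; the shape-based likelihood p reweights the
# samples, as in importance sampling within Condensation.
q_mu, q_sigma = 0.0, 2.0          # colour-tracker density (importance fn)
p_mu, p_sigma = 1.0, 1.0          # shape-based observation likelihood

samples = rng.normal(q_mu, q_sigma, size=20000)
weights = gauss(samples, p_mu, p_sigma) / gauss(samples, q_mu, q_sigma)
weights /= weights.sum()

estimate = float(np.sum(weights * samples))   # weighted mean of the state
```

The weighted mean lands near the shape likelihood's mode even though no sample was drawn from it directly, which is exactly how the shape tracker benefits from the colour tracker's proposals.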

6.
The aim in this paper is to use principal geodesic analysis to model the statistical variations of sets of facial needle maps. We commence by showing how to represent the distribution of surface normals using the exponential map. Shape deformations are described using principal geodesic analysis on the exponential map. Using ideas from robust statistics, we show how this deformable model may be fitted to facial images in which there is significant self-shadowing. Moreover, we demonstrate that the resulting shape-from-shading algorithm can be used to recover accurate facial shape and albedo from real-world images. In particular, the algorithm can effectively fill in the facial surface when more than 30% of its area is subject to self-shadowing. To investigate the utility of the shape parameters delivered by the method, we conduct experiments with illumination-insensitive face recognition. We present a novel recognition strategy in which similarity is measured in the space of the principal geodesic parameters. We also use the recovered shape information to generate illumination-normalized prototype images on which recognition can be performed. Finally, we show that, from a single input image, we are able to generate the basis images employed by a number of well-known illumination-insensitive recognition algorithms, and we demonstrate that the principal geodesics provide an efficient parameterization of the space of harmonic basis images.
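Representing surface normals via the exponential map means mapping each unit normal into the tangent plane at a base direction with the log map, doing linear statistics there, and mapping back with the exp map. A minimal sketch of the sphere log/exp maps; the base point and test normal are illustrative values, not data from the paper.

```python
import numpy as np

def log_map(mu, n):
    # Log map of unit normal n at base point mu on the unit sphere:
    # returns a tangent vector whose length is the arc distance to n.
    c = np.clip(mu @ n, -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-12:
        return np.zeros(3)
    v = n - c * mu
    return theta * v / np.linalg.norm(v)

def exp_map(mu, t):
    # Inverse of log_map: maps a tangent vector back onto the sphere.
    theta = np.linalg.norm(t)
    if theta < 1e-12:
        return mu.copy()
    return np.cos(theta) * mu + np.sin(theta) * t / theta

mu = np.array([0.0, 0.0, 1.0])                  # base direction
n = np.array([0.0, np.sin(0.3), np.cos(0.3)])   # normal tilted by 0.3 rad
t = log_map(mu, n)                              # tangent-plane coordinates
back = exp_map(mu, t)                           # round trip recovers n
```

Principal geodesic analysis then amounts to ordinary PCA applied to the tangent vectors `t` collected over a whole needle map.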

7.
In this paper we show how to estimate facial surface reflectance properties (a slice of the BRDF and the albedo) in conjunction with the facial shape from a single image. The key idea underpinning our approach is to iteratively interleave the two processes of estimating reflectance properties based on the current shape estimate and updating the shape estimate based on the current estimate of the reflectance function. For frontally illuminated faces, the reflectance properties can be described by a function of one variable which we estimate by fitting a curve to the scattered and noisy reflectance samples provided by the input image and estimated shape. For non-frontal illumination, we fit a smooth surface to the scattered 2D reflectance samples. We make use of a novel statistical face shape constraint which we term ‘model-based integrability’ which we use to regularise the shape estimation. We show that the method is capable of recovering accurate shape and reflectance information from single grayscale or colour images using both synthetic and real world imagery. We use the estimated reflectance measurements to render synthetic images of the face in varying poses. To synthesise images under novel illumination, we show how to fit a parametric model of reflectance to the estimated reflectance function.

8.
Recently, technologies such as face detection, facial landmark localisation and face recognition and verification have matured enough to provide effective and efficient solutions for imagery captured under arbitrary conditions (referred to as "in-the-wild"). This is partially attributed to the fact that comprehensive "in-the-wild" benchmarks have been developed for face detection, landmark localisation and recognition/verification. A very important technology that has not yet been thoroughly evaluated is deformable face tracking "in-the-wild". Until now, performance has mainly been assessed qualitatively, by visually inspecting the result of a deformable face tracking technology on short videos. In this paper, we perform the first, to the best of our knowledge, thorough evaluation of state-of-the-art deformable face tracking pipelines using the recently introduced 300 VW benchmark. We evaluate many different architectures, focusing mainly on the task of on-line deformable face tracking. In particular, we compare the following general strategies: (a) generic face detection plus generic facial landmark localisation, (b) generic model-free tracking plus generic facial landmark localisation, and (c) hybrid approaches using state-of-the-art face detection, model-free tracking and facial landmark localisation technologies. Our evaluation reveals future avenues for further research on the topic.

9.
A Linear Reconstruction Method for Face Images under Standard Illumination (cited 2 times: 0 self-citations, 2 by others)
Based on the stable relationship between face images of different subjects under the same illumination and their counterparts under standard illumination, this paper proposes a method for reconstructing face images under standard illumination. First, to remove the influence of facial structure, a 3D face deformation is introduced to achieve pixel-level image alignment. Second, an illumination classification method based on image blocks is presented, driven by the brightness variation of the image. Finally, for the shape-aligned samples of each illumination class, a subspace-based linear reconstruction model is trained. The method effectively avoids the texture loss in reconstructed images caused by traditional preprocessing methods and the image distortion introduced by subspace methods. Experiments on the Extended Yale B database show improvements in both image fidelity and face recognition rate, and also verify the effectiveness of the proposed face alignment and illumination classification methods.

10.
Model-Based Head Motion Estimation and Facial Image Synthesis (cited 9 times: 0 self-citations, 9 by others)
This paper discusses a model-based method for head motion estimation and facial image synthesis. First, a deformable 3D facial model based on facial geometry is built; this generic model can be adapted to a specific face according to the features of different face images. To match the model to a specific face image, the face model is adjusted according to the deformable model, using a combination of automatic adjustment and human-computer interaction. After the model shape has been adjusted, facial images from three directions are used for texture mapping to generate facial images from different viewpoints. Using the best match between the synthesized facial images and the input facial image…

11.
Face recognition under uncontrolled illumination conditions is still considered an unsolved problem. In order to correct for these illumination conditions, we propose a virtual illumination grid (VIG) approach to model the unknown illumination conditions. Furthermore, we use coupled subspace models of both the facial surface and the albedo to estimate the face shape. In order to obtain a representation of the face under frontal illumination, we relight the estimated face shape. We show that the frontally illuminated facial images achieve better performance in face recognition. We have performed the challenging Experiment 4 of the FRGCv2 database, which compares uncontrolled probe images to controlled gallery images. Our illumination correction method results in considerably better recognition rates for a number of well-known face recognition methods. By fusing our global illumination correction method with a local illumination correction method, further improvements are achieved.

12.
This work addresses the matching of a 3D deformable face model to 2D images through a 2.5D Active Appearance Model (AAM). We propose a 2.5D AAM that combines a 3D metric Point Distribution Model (PDM) and a 2D appearance model whose control points are defined by a full perspective projection of the PDM. The advantage is that, assuming a calibrated camera, 3D metric shapes can be retrieved from single-view images. Two model-fitting algorithms and their computationally efficient approximations are proposed: the Simultaneous Forwards Additive (SFA) and the Normalization Forwards Additive (NFA), both based on the Lucas–Kanade framework. The SFA algorithm searches for shape and appearance parameters simultaneously, whereas the NFA projects the appearance out of the error image and searches only for the shape parameters; SFA is therefore more accurate. Robust versions of the SFA and NFA are also proposed in order to handle self-occlusion or partial occlusion of the face. Several performance evaluations of the SFA, the NFA and their efficient approximations were performed. The experiments include evaluating the convergence frequency, the fitting performance on unseen data and the tracking performance on the FGNET Talking Face sequence. All results show that the 2.5D AAM can outperform both the combined 2D + 3D models and the standard 2D methods. The robust extensions to occlusion were tested on a synthetic sequence, showing that the model can deal efficiently with large head rotations.

13.
In this paper, we propose an On-line Appearance-Based Tracker (OABT) for simultaneous tracking of the 3D head pose, lips, eyebrows, eyelids and irises in monocular video sequences. In contrast to previously proposed approaches, which deal with face and gaze tracking separately, our OABT can be used for eyelid and iris tracking as well as for tracking the 3D head pose and the facial actions of the lips and eyebrows. Furthermore, our approach learns changes in the appearance of the tracked target on-line; hence, the prior training of appearance models, which usually requires a large amount of labeled facial images, is avoided. Moreover, the proposed method is built upon a hierarchical combination of three OABTs, which are optimized using a Levenberg–Marquardt Algorithm (LMA) enhanced with line-search procedures. This, in turn, makes the proposed method robust to changes in lighting conditions, occlusions and translucent textures, as evidenced by our experiments. Finally, the proposed method achieves head and facial-action tracking in real time.

14.
A major drawback of statistical models of non-rigid, deformable objects, such as the active appearance model (AAM), is the required pseudo-dense annotation of landmark points for every training image. We propose a regression-based approach for the automatic annotation of face images at arbitrary pose and expression, and for deformable model building using only the annotated frontal images. We pose the problem of learning the pattern of manual annotation as a data-driven regression problem and explore several regression strategies to effectively predict the spatial arrangement of the landmark points for unseen face images with arbitrary expression at arbitrary poses. We show that the proposed fully sparse non-linear regression approach outperforms the other regression strategies by effectively modelling the changes in the shape of the face under varying pose while capturing the subtleties of different facial expressions, thus ensuring the high quality of the generated synthetic images. We show the generalisability of the proposed approach by automatically annotating face images from four different databases and verifying the results against ground truth obtained from manual annotations.

15.
We introduce a robust framework for learning and fusing orientation appearance models, based on both texture and depth information, for rigid object tracking. Our framework fuses data obtained from a standard visual camera with dense depth maps obtained by low-cost consumer depth cameras such as the Kinect. To combine these two completely different modalities, we propose to use features that do not depend on the data representation: angles. More specifically, our framework combines image gradient orientations, as extracted from intensity images, with the directions of surface normals computed from dense depth fields. We capture the correlations between the resulting orientation appearance models using a fusion approach motivated by the original Active Appearance Models (AAMs). To incorporate these features in a learning framework, we use a robust kernel based on the Euler representation of angles, which does not require off-line training and can be efficiently implemented online. The robustness of learning from orientation appearance models is demonstrated both theoretically and experimentally in this work. This kernel enables us to cope with gross measurement errors and missing data, as well as with typical problems such as illumination changes and occlusions. By combining the proposed models with a particle filter, the framework was used for 2D plus 3D rigid object tracking, achieving robust performance in very difficult tracking scenarios including extreme pose variations.
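The Euler representation of angles used by the kernel in item 15 replaces each orientation phi by (cos phi, sin phi), so the inner product of two orientation fields becomes the mean of cos(phi1 - phi2): a gross error flips one cosine term to -1 instead of letting its magnitude dominate the score. A minimal sketch; the field size and the 10% corruption are illustrative choices.

```python
import numpy as np

def euler_features(angles):
    # Euler representation: each angle phi -> (cos phi, sin phi),
    # i.e. the real and imaginary parts of e^{i*phi}, normalized so
    # that identical orientation fields have inner product 1.
    return np.concatenate([np.cos(angles), np.sin(angles)]) / np.sqrt(len(angles))

def orientation_similarity(a1, a2):
    # Inner product of Euler features = mean of cos(a1 - a2).
    return float(euler_features(a1) @ euler_features(a2))

rng = np.random.default_rng(2)
a = rng.uniform(0, 2 * np.pi, size=100)
b = a.copy()
b[:10] += np.pi            # 10% gross errors: orientations flipped

s_clean = orientation_similarity(a, a)   # identical fields -> 1
s_noisy = orientation_similarity(a, b)   # 90 matches, 10 flips -> 0.8
```

Each corrupted entry contributes exactly -1 to the mean, so 10% gross errors only lower the similarity from 1.0 to 0.8, which is the robustness property the abstract appeals to.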

16.
The Active Appearance Model (AAM) is an algorithm for fitting a generative model of object shape and appearance to an input image. AAMs allow accurate, real-time tracking of human faces in 2D and can be extended to track faces in 3D by constraining the fitting with a linear 3D morphable model. Unfortunately, this AAM-based 3D tracking does not provide adequate accuracy and robustness, as we show in this paper. We introduce a new constraint into AAM fitting that uses depth data from a commodity RGBD camera (Kinect). This addition significantly reduces 3D tracking errors. We also describe how to initialize the 3D morphable face model used in our tracking algorithm by computing the user's face shape parameters from a batch of tracked frames. The described face tracking algorithm is used in Microsoft's Kinect system.

17.
One of the main challenges in face recognition is the pose and illumination variation that drastically affects recognition performance, as confirmed by the results of recent large-scale face recognition evaluations. This paper presents a new technique for face recognition, based on the joint use of 3D models and 2D images, specifically conceived to be robust with respect to pose and illumination changes. A 3D model of each user is exploited in the training stage (i.e. enrollment) to generate a large number of 2D images representing virtual views of the face with varying pose and illumination. Such images are then used to learn, in a supervised manner, a set of subspaces constituting the user's template. Recognition occurs by matching 2D images with the templates; no 3D information (neither images nor face models) is required. The experiments carried out confirm the efficacy of the proposed technique.

18.
HMM-Based Single-Sample Face Recognition under Variable Illumination and Pose (cited 3 times: 1 self-citation, 2 by others)
This paper proposes an HMM-based face recognition algorithm for a single sample under variable illumination and pose. The algorithm first uses a manually registered training set to automatically align a single frontal input face image with the Candide3 model, and reconstructs a person-specific 3D face model on the basis of this alignment. Rotating the reconstructed model through various angles yields digital faces in different poses; the illumination coefficients of the digital faces are then adjusted using spherical-harmonic basis images to produce digital faces under different illuminations. The generated digital faces with varying illumination and pose, together with the original sample image, are used as training data to build an individual hidden Markov model for each user. The proposed algorithm was evaluated on existing face databases and compared with recognition methods based on illumination compensation and pose correction. The results show that the algorithm avoids the low recognition rates those methods suffer when compensation or correction is unsatisfactory for certain illuminations and poses, and adapts better to face recognition under varying illumination and pose.

19.
We introduce a new markerless 3D face tracking approach for 2D videos captured by a single consumer-grade camera. Our approach takes detected 2D facial features as input and matches them with projections of the 3D features of a deformable model to determine its pose and shape. To make the tracking and reconstruction more robust, we add a smoothness prior on the pose and deformation changes of the faces. Our major contribution lies in the formulation of the deformation prior, which we derive from a large database of facial animations showing different (dynamic) facial expressions of a fairly large number of subjects. We split these animation sequences into snippets of fixed length, which we use to predict the facial motion based on previous frames. In order to keep the deformation model compact and independent of the individual physiognomy, we represent it by deformation gradients (instead of vertex positions) and apply a principal component analysis in deformation-gradient space to extract the major modes of facial deformation. Since the facial deformation is optimized during tracking, it is particularly easy to apply the extracted modes to other physiognomies and thereby re-target the facial expressions. We demonstrate the effectiveness of our technique on a number of examples.

20.
Existing facial texture reconstruction methods often fail to reproduce fine details such as wrinkles, beards, and pupil color. To address this problem, this paper proposes a texture- and illumination-preserving 3D face reconstruction based on face normalization. First, the 2D face image is normalized, and the texture of the self-occluded facial regions is reconstructed using illumination information and symmetric texture. Then, the corresponding 3D facial texture is obtained from the normalized 2D face image according to the 2D-3D point correspondences, and combined with the reconstructed face shape and texture information to yield the final 3D face reconstruction. Experiments show that the proposed method effectively preserves the texture and illumination information of the original 2D image, and the reconstructed faces look more natural and exhibit richer facial detail.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号