Similar Literature
20 similar documents found.
1.
We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render (or synthesize) images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone. Test results show that the method performs almost without error, except on the most extreme lighting directions.

2.
Low-Dimensional Illumination Space Representation of Face Images under Arbitrary Lighting (cited 3 times: 0 self-citations, 3 by others)
This paper proposes a low-dimensional illumination space representation of face images under varying lighting conditions. The representation can both estimate the illumination parameters of an input image and synthesize virtual face images for a given lighting condition. Principal component analysis and nearest-neighbor clustering are used to locate nine basic point light sources, which together approximate almost all lighting conditions encountered in face recognition applications. The nine face basis images captured under these nine light sources form the low-dimensional face illumination space, which can represent face images under different lighting conditions; combined with the illumination-ratio (quotient) image method, it can also generate virtual face images under novel lighting. The main advantage of the proposed space is that an illumination space built from images of one face can be applied to other faces. Experiments on image reconstruction and face recognition under varying illumination demonstrate the effectiveness of the algorithm.
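A rough sketch of how a relit face could be synthesized from nine basis images together with an illumination-ratio image, in the spirit of the abstract above. The exact formulation in the paper may differ; the variable names and lighting coefficients here are assumptions for illustration only.

```python
# Illustrative relighting with nine basis images and a quotient (illumination-ratio) image.
# basis_imgs, coeffs_old, coeffs_new are hypothetical inputs, not quantities from the paper.
import numpy as np

def relight(face_img, basis_imgs, coeffs_old, coeffs_new, eps=1e-6):
    # basis_imgs: (9, H, W) reference basis images; coeffs_*: length-9 lighting weights
    ref_old = np.tensordot(coeffs_old, basis_imgs, axes=1)  # reference face under current lighting
    ref_new = np.tensordot(coeffs_new, basis_imgs, axes=1)  # reference face under target lighting
    ratio = ref_new / (ref_old + eps)                       # illumination-ratio image
    return face_img * ratio                                  # virtual face under the new lighting
```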

3.
We develop a face recognition algorithm which is insensitive to large variation in lighting direction and facial expression. Taking a pattern classification approach, we consider each pixel in an image as a coordinate in a high-dimensional space. We take advantage of the observation that the images of a particular face, under varying illumination but fixed pose, lie in a 3D linear subspace of the high-dimensional image space, provided the face is a Lambertian surface without shadowing. However, since faces are not truly Lambertian surfaces and do indeed produce self-shadowing, images will deviate from this linear subspace. Rather than explicitly modeling this deviation, we linearly project the image into a subspace in a manner which discounts those regions of the face with large deviation. Our projection method is based on Fisher's linear discriminant and produces well separated classes in a low-dimensional subspace, even under severe variation in lighting and facial expressions. The eigenface technique, another method based on linearly projecting the image space to a low-dimensional subspace, has similar computational requirements. Yet, extensive experimental results demonstrate that the proposed “Fisherface” method has error rates that are lower than those of the eigenface technique for tests on the Harvard and Yale face databases.
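A minimal sketch of the Fisherface idea (PCA followed by Fisher's linear discriminant, then nearest-neighbor matching) using scikit-learn. The component counts and the data variables X_train/y_train are illustrative assumptions, not values from the paper.

```python
# Fisherface-style pipeline: PCA to remove the null space of the within-class scatter,
# then LDA projection to at most (n_identities - 1) dimensions, then 1-NN matching.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def build_fisherface_classifier(n_identities):
    return make_pipeline(
        PCA(n_components=100),                                        # illustrative choice
        LinearDiscriminantAnalysis(n_components=min(n_identities - 1, 100)),
        KNeighborsClassifier(n_neighbors=1),                          # nearest neighbor in the discriminant subspace
    )

# Usage with hypothetical data:
# clf = build_fisherface_classifier(n_identities=len(np.unique(y_train)))
# clf.fit(X_train, y_train); predictions = clf.predict(X_test)
```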

4.
In this paper, we propose two novel methods for face recognition under arbitrary unknown lighting by using spherical harmonics illumination representation, which require only one training image per subject and no 3D shape information. Our methods are based on the result which demonstrated that the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace. We provide two methods to estimate the spherical harmonic basis images spanning this space from just one image. Our first method builds the statistical model based on a collection of 2D basis images. We demonstrate that, by using the learned statistics, we can estimate the spherical harmonic basis images from just one image taken under arbitrary illumination conditions if there is no pose variation. Compared to the first method, the second method builds the statistical models directly in 3D spaces by combining the spherical harmonic illumination representation and a 3D morphable model of human faces to recover basis images from images across both poses and illuminations. After estimating the basis images, we use the same recognition scheme for both methods: we recognize the face for which there exists a weighted combination of basis images that is the closest to the test face image. We provide a series of experiments that achieve high recognition rates, under a wide range of illumination conditions, including multiple sources of illumination. Our methods achieve comparable levels of accuracy with methods that have much more onerous training data requirements. Comparison of the two methods is also provided.

5.
The paper proposes a novel, pose-invariant face recognition system based on a deformable, generic 3D face model, that is a composite of: (1) an edge model, (2) a color region model and (3) a wireframe model for jointly describing the shape and important features of the face. The first two submodels are used for image analysis and the third mainly for face synthesis. In order to match the model to face images in arbitrary poses, the 3D model can be projected onto different 2D viewplanes based on rotation, translation and scale parameters, thereby generating multiple face-image templates (in different sizes and orientations). Face shape variations among people are taken into account by the deformation parameters of the model. Given an unknown face, its pose is estimated by model matching and the system synthesizes face images of known subjects in the same pose. The face is then classified as the subject whose synthesized image is most similar. The synthesized images are generated using a 3D face representation scheme which encodes the 3D shape and texture characteristics of the faces. This face representation is automatically derived from training face images of the subject. Experimental results show that the method is capable of determining pose and recognizing faces accurately over a wide range of poses and with naturally varying lighting conditions. Recognition rates of 92.3% have been achieved by the method with 10 training face images per person.

6.
刘树利, 胡茂林. 《微机发展》, 2006, 16(6): 213-215
For face models obtained from different viewpoints, this paper proposes a recognition method based on the face surface: a planar projective transformation warps each face image onto a common reference image so that the images are aligned, and principal component analysis (PCA) is then applied for classification. With this approach, unwanted variation caused by changes in lighting, facial expression, and pose can be removed or safely ignored, enabling reasonably accurate face recognition. Experimental results show that the method provides a better representation of the face model and achieves a lower face recognition error rate.
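A small sketch of the alignment step described above, assuming corresponding facial landmarks are available in both the input image and a reference; the function and variable names are illustrative, not from the paper.

```python
# Align a face image to a common reference via a planar projective transform (homography),
# so that PCA can be applied to the aligned images afterwards.
import cv2
import numpy as np

def align_to_reference(image, src_pts, ref_pts, out_size=(100, 100)):
    # src_pts / ref_pts: (N, 2) arrays of corresponding landmarks, N >= 4
    H, _ = cv2.findHomography(np.float32(src_pts), np.float32(ref_pts), cv2.RANSAC)
    return cv2.warpPerspective(image, H, out_size)

# The aligned images can then be vectorized and passed to PCA
# (e.g. sklearn.decomposition.PCA) followed by a nearest-neighbor classifier.
```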

7.
Acquiring linear subspaces for face recognition under variable lighting (cited 9 times: 0 self-citations, 9 by others)
Previous work has demonstrated that the image variation of many objects (human faces in particular) under variable lighting can be effectively modeled by low-dimensional linear spaces, even when there are multiple light sources and shadowing. Basis images spanning this space are usually obtained in one of three ways: a large set of images of the object under different lighting conditions is acquired, and principal component analysis (PCA) is used to estimate a subspace. Alternatively, synthetic images are rendered from a 3D model (perhaps reconstructed from images) under point sources and, again, PCA is used to estimate a subspace. Finally, images rendered from a 3D model under diffuse lighting based on spherical harmonics are directly used as basis images. In this paper, we show how to arrange physical lighting so that the acquired images of each object can be directly used as the basis vectors of a low-dimensional linear space and that this subspace is close to those acquired by the other methods. More specifically, there exist configurations of k point light source directions, with k typically ranging from 5 to 9, such that, by taking k images of an object under these single sources, the resulting subspace is an effective representation for recognition under a wide range of lighting conditions. Since the subspace is generated directly from real images, potentially complex and/or brittle intermediate steps such as 3D reconstruction can be completely avoided; nor is it necessary to acquire large numbers of training images or to physically construct complex diffuse (harmonic) light fields. We validate the use of subspaces constructed in this fashion within the context of face recognition.
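A minimal sketch of nearest-subspace recognition, as outlined above: each subject's k single-source images are used directly as basis vectors, and a test image is assigned to the subject whose subspace fits it best. Data layout and names are illustrative assumptions.

```python
# Nearest-subspace classification: distance of a test image to the span of each
# subject's basis images, computed by least squares.
import numpy as np

def subspace_residual(basis_images, test_image):
    # basis_images: (k, n_pixels) stacked images; test_image: (n_pixels,)
    B = np.asarray(basis_images, dtype=float).T           # n_pixels x k
    coeffs, *_ = np.linalg.lstsq(B, test_image, rcond=None)
    return np.linalg.norm(test_image - B @ coeffs)        # distance to span(B)

def recognize(subject_bases, test_image):
    # subject_bases: dict mapping identity -> (k, n_pixels) array of basis images
    return min(subject_bases, key=lambda s: subspace_residual(subject_bases[s], test_image))
```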

8.
张旭, 胡晰远, 陈晨, 彭思龙. 《自动化学报》, 2019, 45(10): 1857-1869
Cutting a person's face out of one photograph and splicing it into another is a common means of image tampering. If such a composite photograph is used for blackmail, it can cause serious harm to society, so image forensics techniques for detecting tampering are of great importance. Because different photographs are captured under different imaging conditions, it is difficult to make the illumination of spliced faces exactly consistent, and tampering can therefore be detected by checking illumination consistency. Previous illumination estimation methods assume parallel (orthographic) projection and analyze the consistency of lighting as projected into the photograph. In reality, the pinhole camera model is a perspective projection, which introduces errors into such detection methods. To address this, the paper proposes an algorithm for estimating the spatial illumination of objects under perspective projection: the pose of each face is brought into the camera coordinate system, the spatial illumination of each face relative to that coordinate system is estimated, and the consistency of the spatial illumination is then analyzed. In addition, the spatial illumination consistency constraint across faces allows the camera parameters to be optimized, yielding spatial information such as the equivalent focal length, the spatial positions of the faces, and re-projected perspective images under those parameters. Splicing detection is then based on both the consistency of the spatial illumination and the plausibility of this spatial information. Experimental results show that illumination-consistency analysis using the spatial illumination estimated by the proposed method is more accurate than the traditional analysis based on parallel-projection lighting, and that combining it with the spatial plausibility analysis makes the tampering detection more convincing.

9.
In this paper, we present a new method to modify the appearance of a face image by manipulating the illumination condition, when the face geometry and albedo information is unknown. This problem is particularly difficult when there is only a single image of the subject available. Recent research demonstrates that the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace using a spherical harmonic representation. Moreover, morphable models are statistical ensembles of facial properties such as shape and texture. In this paper, we integrate spherical harmonics into the morphable model framework by proposing a 3D spherical harmonic basis morphable model (SHBMM). The proposed method can represent a face under arbitrary unknown lighting and pose simply by three low-dimensional vectors, i.e., shape parameters, spherical harmonic basis parameters, and illumination coefficients, which are called the SHBMM parameters. However, when the image was taken under an extreme lighting condition, the approximation error can be large, thus making it difficult to recover albedo information. In order to address this problem, we propose a subregion-based framework that uses a Markov random field to model the statistical distribution and spatial coherence of face texture, which makes our approach not only robust to extreme lighting conditions, but also insensitive to partial occlusions. The performance of our framework is demonstrated through various experimental results, including the improved rates for face recognition under extreme lighting conditions.

10.
Face recognition across pose is a problem of fundamental importance in computer vision. We propose to address this problem by using stereo matching to judge the similarity of two 2D images of faces seen from different poses. Stereo matching allows for arbitrary, physically valid, continuous correspondences. We show that the stereo matching cost provides a very robust measure of similarity of faces that is insensitive to pose variations. To enable this, we show that, for conditions common in face recognition, the epipolar geometry of face images can be computed using either four or three feature points. We also provide a straightforward adaptation of a stereo matching algorithm to compute the similarity between faces. The proposed approach has been tested on the CMU PIE data set and demonstrates superior performance compared to existing methods in the presence of pose variation. It also shows robustness to lighting variation.

11.
Face images are difficult to interpret because they are highly variable. Sources of variability include individual appearance, 3D pose, facial expression, and lighting. We describe a compact parametrized model of facial appearance which takes into account all these sources of variability. The model represents both shape and gray-level appearance, and is created by performing a statistical analysis over a training set of face images. A robust multiresolution search algorithm is used to fit the model to faces in new images. This allows the main facial features to be located, and a set of shape and gray-level appearance parameters to be recovered. A good approximation to a given face can be reconstructed using less than 100 of these parameters. This representation can be used for tasks such as image coding, person identification, 3D pose recovery, gender recognition, and expression recognition. Experimental results are presented for a database of 690 face images obtained under widely varying conditions of 3D pose, lighting, and facial expression. The system performs well on all the tasks listed above.

12.
Exchanging Faces in Images (cited 1 time: 0 self-citations, 1 by others)

13.
Variations in illumination and pose are two major bottlenecks in automatic face recognition. This paper proposes a processing method to remove both effects: images in the training set are first gray-level normalized to reduce sensitivity to illumination intensity; pose is then estimated and the eigenface method is used to compute an eigen-subspace for each pose; finally, the concept of a Pose's Weight Value (PWV) is introduced and used to design a Weighted Minimum Distance Classifier (WMDC) that assigns different pose weights to suppress the influence of pose variation. Experimental results on the FERET and Yale B databases show that the method substantially improves the recognition rate under changes in illumination and pose. A simplified sketch of such a classifier is given after this abstract.
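The sketch below illustrates a weighted minimum-distance decision over per-pose eigen-subspaces in the spirit of the PWV/WMDC idea above. The projections, weights, and data structures are illustrative placeholders, not the paper's actual formulation.

```python
# Weighted minimum-distance classification over per-pose eigen-subspaces.
import numpy as np

def wmdc_classify(test_vec, pose_models, pose_weights):
    # pose_models: {pose: (projection_matrix_W, {identity: mean_feature})}
    # pose_weights: {pose: weight}  -- the assumed "Pose's Weight Value"
    best_id, best_score = None, np.inf
    for pose, (W, class_means) in pose_models.items():
        feat = W @ test_vec                                   # project into that pose's eigen-subspace
        for identity, mean in class_means.items():
            score = pose_weights[pose] * np.linalg.norm(feat - mean)
            if score < best_score:
                best_id, best_score = identity, score
    return best_id
```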

14.
15.
Lambertian reflectance and linear subspaces (cited 23 times: 0 self-citations, 23 by others)
We prove that the set of all Lambertian reflectance functions (the mapping from surface normals to intensities) obtained with arbitrary distant light sources lies close to a 9D linear subspace. This implies that, in general, the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace, explaining prior empirical results. We also provide a simple analytic characterization of this linear space. We obtain these results by representing lighting using spherical harmonics and describing the effects of Lambertian materials as the analog of a convolution. These results allow us to construct algorithms for object recognition based on linear methods as well as algorithms that use convex optimization to enforce nonnegative lighting functions. We also show a simple way to enforce nonnegative lighting when the images of an object lie near a 4D linear space. We apply these algorithms to perform face recognition by finding the 3D model that best matches a 2D query image.
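An illustrative computation of nine harmonic basis images from per-pixel surface normals and albedo, of the kind used to span the 9D subspace described above. The normalization constants follow the usual real spherical-harmonic convention; treat the exact scaling, and the assumption that normals and albedo are known, as simplifications.

```python
# Nine spherical-harmonic basis images of a Lambertian surface (order 0, 1, 2).
import numpy as np

def harmonic_basis_images(normals, albedo):
    # normals: (H, W, 3) unit surface normals; albedo: (H, W) per-pixel albedo
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    c0 = 1.0 / np.sqrt(4 * np.pi)
    c1 = np.sqrt(3.0 / (4 * np.pi))
    c2 = np.sqrt(15.0 / (4 * np.pi))
    basis = [
        c0 * np.ones_like(albedo),                       # l = 0
        c1 * nz, c1 * nx, c1 * ny,                       # l = 1
        0.5 * c2 / np.sqrt(3.0) * (3 * nz ** 2 - 1),     # l = 2
        c2 * nx * nz, c2 * ny * nz,
        0.5 * c2 * (nx ** 2 - ny ** 2), c2 * nx * ny,
    ]
    return np.stack([albedo * b for b in basis], axis=0)  # (9, H, W)
```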

16.
This paper proposes a novel framework of real-time face tracking and recognition by combining two eigen-based methods. The first method is a novel extension of eigenface called augmented eigenface and the second method is a sparse 3D eigentemplate tracker controlled by a particle filter. The augmented eigenface is an eigenface augmented by an associative mapping to 3D shape that is specified by a set of volumetric face models. This paper discusses how to construct the augmented eigenface and how it can be used for inference of 3D shape from partial images. The associative mapping is also generalized to subspace-to-one mappings to cover photometric image changes for a fixed shape. A novel technique, called photometric adjustment, is introduced for simple implementation of the associative mapping when an image subspace must be combined with a shape. The sparse 3D eigentemplate tracker is an extension of the 3D template tracker proposed by Oka et al. In combination with the augmented eigenface, the sparse 3D eigentemplate tracker facilitates real-time 3D tracking and recognition when a monocular image sequence is provided. In the tracking, the sparse 3D eigentemplate is updated by the augmented eigenface while the face pose is estimated by the sparse eigentracker. Since the augmented eigenface is constructed on the conventional eigenfaces, face identification and expression recognition are also accomplished efficiently during the tracking. In the experiment, an augmented eigenface was constructed from 25 faces where 24 images were taken in different lighting conditions for each face. Experimental results show that the augmented eigenface works with the 3D eigentemplate tracker for real-time tracking and recognition.

17.
Detecting faces in images: a survey (cited 14 times: 0 self-citations, 14 by others)
Images containing faces are essential to intelligent vision-based human-computer interaction, and research efforts in face processing include face recognition, face tracking, pose estimation and expression recognition. However, many reported methods assume that the faces in an image or an image sequence have been identified and localized. To build fully automated systems that analyze the information contained in face images, robust and efficient face detection algorithms are required. Given a single image, the goal of face detection is to identify all image regions which contain a face, regardless of its 3D position, orientation and lighting conditions. Such a problem is challenging because faces are non-rigid and have a high degree of variability in size, shape, color and texture. Numerous techniques have been developed to detect faces in a single image, and the purpose of this paper is to categorize and evaluate these algorithms. We also discuss relevant issues such as data collection, evaluation metrics and benchmarking. After analyzing these algorithms and identifying their limitations, we conclude with several promising directions for future research.

18.
In this paper, we presented algorithms to assess the quality of facial images affected by factors such as blurriness, lighting conditions, head pose variations, and facial expressions. We developed face recognition prediction functions for images affected by blurriness, lighting conditions, and head pose variations based upon the eigenface technique. We also developed a classifier for images affected by facial expressions to assess their quality for recognition by the eigenface technique. Our experiments using different facial image databases showed that our algorithms are capable of assessing the quality of facial images. These algorithms could be used in a module for facial image quality assessment in a face recognition system. In the future, we will integrate the different measures of image quality to produce a single measure that indicates the overall quality of a face image.

19.
Face recognition is challenging because variations can be introduced to the pattern of a face by varying pose, lighting, scale, and expression. A new face recognition approach using rank correlation of Gabor-filtered images is presented. Using this technique, Gabor filters of different sizes and orientations are applied on images before using rank correlation for matching the face representation. The representation used for each face is computed from the Gabor-filtered images and the original image. Although training requires a fairly substantial length of time, the computation time required for recognition is very short. Recognition rates ranging between 83.5% and 96% are obtained using the AT&T (formerly ORL) database using different permutations of 5 and 9 training images per subject. In addition, the effect of pose variation on the recognition system is systematically determined using images from the UMIST database.
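A small sketch of the Gabor-plus-rank-correlation matching idea above: filter both images with a bank of Gabor kernels and compare the responses with Spearman rank correlation. The filter parameters and bank size are illustrative guesses, not the paper's settings.

```python
# Gabor filter bank plus Spearman rank correlation as a face similarity measure.
import cv2
import numpy as np
from scipy.stats import spearmanr

def gabor_bank(ksize=21):
    kernels = []
    for theta in np.arange(0, np.pi, np.pi / 4):     # 4 orientations
        for lambd in (8.0, 16.0):                     # 2 wavelengths
            kernels.append(cv2.getGaborKernel((ksize, ksize), 4.0, theta, lambd, 0.5, 0))
    return kernels

def rank_similarity(img_a, img_b, kernels):
    # Average Spearman correlation of the Gabor responses over the filter bank.
    scores = []
    for k in kernels:
        ra = cv2.filter2D(img_a.astype(np.float32), cv2.CV_32F, k).ravel()
        rb = cv2.filter2D(img_b.astype(np.float32), cv2.CV_32F, k).ravel()
        scores.append(spearmanr(ra, rb).correlation)
    return float(np.mean(scores))
```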

20.
Objective: Face recognition is widely deployed, but recognition under large pose variation remains an open problem. Existing methods either extract pose-robust features or frontalize the face. The mainstream frontalization approaches are 2D regression-based generation and 3D model-based deformation: the former can produce relatively natural, realistic faces but introduces extra noise that distorts image content, while the latter preserves the original facial structure but relies on a physical model and is less natural and flexible. Combining the advantages of the 2D and 3D approaches, this paper proposes a face frontalization method based on a coarse-to-fine deformation field. Method: The deformation field is learned by a deep network in a 2D regression manner and captures semantic-level pixel correspondences between face images from different viewpoints, so that non-frontal faces can be frontalized in a 3D-like way; the method thus combines the flexibility of 2D frontalization with the fidelity of 3D frontalization. Following a progressive, step-by-step strategy, a coarse-to-fine learning framework is proposed to obtain a more accurate and robust deformation field. Results: Large-pose face recognition experiments are used to validate the method, on MultiPIE (multi pose, illumination, expressions), LFW (labeled faces in the wild), CFP (celebrities in frontal-profile in the wild)...
