Similar Articles
A total of 20 similar articles were found.
1.
Generalized face super-resolution. Cited by: 3 (self-citations: 0, by others: 3)
Existing learning-based face super-resolution (hallucination) techniques generate high-resolution images of a single facial modality (i.e., at a fixed expression, pose, and illumination) given one or a set of low-resolution face images as probe. Here, we present a generalized approach based on a hierarchical tensor (multilinear) space representation for hallucinating high-resolution face images across multiple modalities, achieving generalization to variations in expression and pose. In particular, we formulate a unified tensor that can be reduced to two parts: a global image-based tensor for modeling the mappings among different facial modalities, and a local patch-based multiresolution tensor for incorporating high-resolution image details. For realistic hallucination of unregistered low-resolution faces contained in raw images, we develop an automatic face alignment algorithm capable of pixel-wise alignment by iteratively warping the probe face to its projection in the space of training face images. Our experiments show not only performance superiority over existing benchmark face super-resolution techniques on single-modal face hallucination, but also the novelty of our approach in coping with multimodal hallucination and its robustness in automatic alignment under practical imaging conditions.

2.
Unconstrained illumination and pose variation lead to significant variation in photographs of faces and constitute a major hurdle preventing the widespread use of face recognition systems. The challenge is to generalize from a limited number of images of an individual to a broad range of conditions. Recently, advances in modeling the effects of illumination and pose have been accomplished using three-dimensional (3-D) shape information coupled with reflectance models. Notable developments in understanding the effects of illumination include the nonexistence of illumination invariants, a characterization of the set of images of objects in fixed pose under variable illumination (the illumination cone), and the introduction of spherical harmonics and low-dimensional linear subspaces for modeling illumination. To generalize to novel conditions, either multiple images must be available to reconstruct 3-D shape or, if only a single image is accessible, prior information about the 3-D shape and appearance of faces in general must be used. The 3-D Morphable Model was introduced as a generative model to predict the appearance of an individual, using a statistical prior on shape and texture that allows its parameters to be estimated from a single image. Based on these new understandings, face recognition algorithms have been developed to address the joint challenges of pose and lighting. In this paper, we review these developments and provide a brief survey of the resulting face recognition algorithms and their performance.
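To make the low-dimensional illumination subspace mentioned in this review concrete, here is a minimal sketch (not taken from the survey itself) of estimating such a basis for one face in fixed pose from images under many lightings; the data shapes, the nine-component cutoff suggested by spherical-harmonic theory, and the residual-based matching score are illustrative assumptions:

```python
# A hedged sketch: images of one face under many lightings are stacked and an
# SVD keeps the leading basis images; roughly nine components capture most
# illumination variation. The random data stands in for real images.
import numpy as np

images = np.random.rand(64, 48 * 48)        # 64 lighting conditions, vectorised images
mean = images.mean(axis=0)
_, s, vt = np.linalg.svd(images - mean, full_matrices=False)
basis = vt[:9]                               # approximate 9-D illumination subspace

def subspace_distance(novel_image):
    """Distance from a novel image to this face's illumination subspace."""
    centered = novel_image - mean
    projection = basis.T @ (basis @ centered)
    return np.linalg.norm(centered - projection)
```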

3.
Face Recognition Under Varying Illumination Using Gradientfaces. Cited by: 4 (self-citations: 0, by others: 4)
In this correspondence, we propose a novel method, called Gradientfaces, to extract illumination-insensitive features for face recognition under varying lighting. Theoretical analysis shows that Gradientfaces is an illumination-insensitive measure that is robust to different illumination conditions, including uncontrolled natural lighting. In addition, Gradientfaces is derived from the image gradient domain, so it can discover the underlying inherent structure of face images, since the gradient domain explicitly considers the relationships between neighboring pixels. Therefore, Gradientfaces has more discriminating power than illumination-insensitive measures extracted from the pixel domain. Recognition rates of 99.83% on the PIE database (68 subjects), 98.96% on Yale B (ten subjects), and 95.61% on an outdoor database of 132 subjects under uncontrolled natural lighting show that Gradientfaces is an effective method for face recognition under varying illumination. Furthermore, the experimental results on the Yale database validate that Gradientfaces is also insensitive to image noise and object artifacts (such as facial expressions).
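The core computation, the orientation of the smoothed image gradient, can be sketched as follows; the smoothing scale and the use of arctan2 are assumptions rather than the paper's exact settings:

```python
# A minimal sketch of the Gradientfaces idea (illumination-insensitive
# orientation of the smoothed image gradient), not the authors' code.
import numpy as np
from scipy.ndimage import gaussian_filter

def gradientfaces(image, sigma=0.75):
    """image: 2-D grayscale float array; sigma: assumed Gaussian smoothing scale.
    Returns a map of gradient orientations, argued to be largely
    insensitive to illumination changes."""
    smoothed = gaussian_filter(image.astype(np.float64), sigma)
    gy, gx = np.gradient(smoothed)           # gradients of the smoothed face
    return np.arctan2(gy, gx)                # orientation in (-pi, pi]
```

The resulting feature map can then be fed to any standard matcher, e.g. nearest neighbour on a correlation or Euclidean distance.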

4.
The problem of the time validity of biometric models has received only marginal attention from researchers. In this paper, we propose to manage the influence of aging on adult face verification with an A-stack age modeling technique, which uses age as a class-independent metadata quality measure together with scores from one or more baseline classifiers to obtain better face verification performance. This allows improved long-term class separation by introducing a decision boundary that changes dynamically with age progression in the scores-age space, while relying on a short-term enrollment model. The new method, based on the concepts of classifier stacking and an age-aware decision boundary, compares favorably with the conventional face verification approach, which uses an age-independent decision threshold calculated only in the score space at the time of enrollment. Our experiments on the YouTube and MORPH data show that the proposed approach improves identification accuracy over the baseline classifier.
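A hedged sketch of the stacking idea: instead of a single age-independent threshold on the verification score, a classifier is trained in the joint (score, age) space so that the decision boundary shifts with age. The logistic-regression stacker and the synthetic scores below are assumptions for illustration only:

```python
# Learn an age-aware decision boundary in the (score, age) plane rather than
# a fixed threshold on the score alone. Data and model choice are assumed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
ages = rng.uniform(20, 60, n)
genuine = rng.integers(0, 2, n)                          # 1 = same person, 0 = impostor
# Toy baseline scores whose class separation degrades as age (time since
# enrolment) grows, mimicking the aging effect described in the abstract.
scores = genuine * (2.0 - 0.02 * ages) + rng.normal(0, 0.5, n)

stack = LogisticRegression().fit(np.column_stack([scores, ages]), genuine)
print(stack.predict([[1.2, 55]]))                        # age-aware accept/reject decision
```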

5.
平强, 庄连生, 俞能海. 《电子学报》 (Acta Electronica Sinica), 2012, 40(10): 1965-1970
Traditional face recognition algorithms usually treat illumination processing and pose correction as two relatively independent stages, which makes it difficult to achieve globally optimal recognition performance. To address this problem, and exploiting the non-rigid nature of the face, this paper incorporates affine transformations and a block-based scheme into a linear reconstruction model and proposes a face recognition algorithm based on the Affine Minimum Linear Reconstruction Error (AMLRE). It handles illumination while compensating for local alignment errors caused by pose changes, yielding better global recognition performance. Experimental results on public datasets show that the proposed algorithm is robust to illumination and pose, and achieves a higher recognition rate than existing face recognition algorithms.
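For context, a minimal sketch of the underlying minimum linear reconstruction error classifier, without the affine alignment and blocking that AMLRE adds, might look like this; variable names and the least-squares solver are illustrative assumptions:

```python
# Classification by minimum linear reconstruction error: reconstruct the
# probe from each class's gallery and pick the class with smallest residual.
import numpy as np

def mlre_classify(probe, class_galleries):
    """probe: (d,) vector; class_galleries: dict {label: (d, n_i) matrix of training faces}."""
    best_label, best_err = None, np.inf
    for label, G in class_galleries.items():
        coeffs, *_ = np.linalg.lstsq(G, probe, rcond=None)   # linear reconstruction
        err = np.linalg.norm(probe - G @ coeffs)              # reconstruction error
        if err < best_err:
            best_label, best_err = label, err
    return best_label
```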

6.
In this paper, we develop a novel framework for robust recovery of three-dimensional (3-D) surfaces of faces from single images. The underlying principle is shape from recognition, i.e., the idea that pre-recognizing face parts can constrain the space of possible solutions to the image irradiance equation, thus allowing robust recovery of the 3-D structure of a specific part. Parts of faces such as the nose, lips, and eyes are recognized and localized using robust expansion matching filter templates under varying pose and illumination. Specialized backpropagation-based neural networks are then employed to recover the 3-D shape of particular face parts. A principal-component representation efficiently encodes classes of objects such as noses and lips. The specialized networks are designed and trained to map the principal component coefficients of the part images to another set of principal component coefficients that represent the corresponding 3-D surface shapes. To achieve robustness to viewing conditions, the networks are trained with a wide range of illumination and viewing directions. A method for merging recovered 3-D surface regions by minimizing the sum of squared errors in overlapping areas is also derived. Quantitative analysis of the reconstruction of the surface parts under varying illumination and pose shows relatively small errors, indicating that the method is robust and accurate. Several examples showing recovery of the complete face also illustrate the efficacy of the approach.
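A hedged sketch of the coefficient-to-coefficient mapping described above: principal components are fitted to 2-D part images and to the corresponding 3-D surfaces, and a small network maps image coefficients to shape coefficients. The dimensions, the random placeholder data, and the use of scikit-learn are assumptions, not the paper's implementation:

```python
# Map PCA coefficients of a 2-D face-part image to PCA coefficients of its
# 3-D surface with a small regression network; placeholder data throughout.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

images = np.random.rand(200, 32 * 32)        # (n_samples, n_pixels) of one face part
surfaces = np.random.rand(200, 32 * 32)      # (n_samples, n_depth_values) of its 3-D surface

pca_img, pca_3d = PCA(n_components=20), PCA(n_components=20)
c_img = pca_img.fit_transform(images)
c_3d = pca_3d.fit_transform(surfaces)

net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000).fit(c_img, c_3d)
recovered_surface = pca_3d.inverse_transform(net.predict(c_img[:1]))  # back to depth values
```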

7.
张丹. 《电讯技术》 (Telecommunication Engineering), 2012, 52(6): 1031-1034
The key to video-based face recognition is how to effectively exploit the spatio-temporal continuity between faces in video to overcome low face resolution, large changes in image scale, and variations in pose, illumination, and occlusion. This paper proposes a video face gender recognition algorithm based on manifold learning. The algorithm not only mines the intrinsic continuity information of the video through clustering and fusion learning, but also discovers the intrinsic nonlinear structure of the face data, yielding a low-dimensional, essential manifold structure. Comparisons with static algorithms on the UCSD/Honda and a self-collected database show that the proposed algorithm achieves a better recognition rate.

8.
In the field of security, faces in images captured by outdoor surveillance cameras are usually blurry, occluded, small, and in diverse poses, being affected by external factors such as camera pose and range, weather conditions, and so on. This can be described as a problem of hard face detection in natural images. To solve this problem, we propose a deep convolutional neural network named the feature hierarchy encoder–decoder network (FHEDN). It is motivated by two observations concerning contextual semantic information and the mechanism of multi-scale face detection. The proposed network is a single-stage, scale-variant architecture composed of encoder and decoder subnetworks. Based on the assumption that contextual semantic information around a face helps to detect it, we introduce a residual mechanism to fuse context prior-based information into face features and formulate a learning chain to train each encoder–decoder pair. In addition, we discuss some important implementation details, such as the distribution of the training dataset, the scale of the feature hierarchy, and the anchor box sizes, which affect the detection performance of the final network. Compared with some state-of-the-art algorithms, our method achieves promising performance on the popular benchmarks including AFW, PASCAL FACE, FDDB, and WIDER FACE. Consequently, the proposed approach can be efficiently implemented and routinely applied to detect faces with severe occlusion and arbitrary pose variations in unconstrained scenes. Our code and results are available on https://github.com/zzxcoder/EvaluationFHEDN.

9.
We propose a face recognition method that is robust against image variations due to arbitrary lighting and a large extent of pose variations, ranging from frontal to profile views. Existing appearance models defined on image planes are not applicable for such pose variations, which cause occlusions and changes of silhouette. In contrast, our method constructs an appearance model of a three-dimensional (3-D) object on its surface. Our proposed model consists of a 3-D shape and geodesic illumination bases (GIBs). GIBs can describe the irradiances of an object's surface under any illumination and generate an illumination subspace that can describe the illumination variations of an image in an arbitrary pose. Our appearance model is automatically aligned to the target image by pose optimization based on a rough pose, and the residual error of this model fitting is used as the recognition score. We tested the recognition performance of our method with an extensive database that includes 14 000 images of 200 individuals with drastic illumination changes and pose variations up to 60° sideward and 45° upward. The method achieved a first-choice success ratio of 94.2% without knowing precise poses a priori.

10.
蔡晓东, 甘凯今, 杨超, 王丽娟. 《电视技术》 (Video Engineering), 2016, 40(11): 116-120
To extract distinctive image features from the different regions of vehicle images in a more targeted way, a vehicle image comparison method based on a multi-branch convolutional neural network is proposed. First, the vehicle-face image of each vehicle to be compared is obtained from the license-plate localization result and divided into multiple image blocks according to its texture richness; next, a multi-branch convolutional neural network extracts deep features from each vehicle-face block; finally, the similarity of the deep features is computed to decide whether the compared vehicle images belong to the same vehicle model. Experiments show that the proposed method extracts effective region-wise deep features from vehicle images and achieves good comparison accuracy, and it can be used to identify vehicles with cloned license plates.
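A minimal sketch of the final comparison step: per-block deep features from the two vehicle-face images are compared with cosine similarity and fused by averaging. The feature extractor itself (the multi-branch CNN) is abstracted away, and the mean-similarity fusion rule and threshold are assumptions:

```python
# Compare two vehicle-face images block by block via cosine similarity of
# their deep features; the CNN that produces the features is assumed given.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def same_model(blocks_a, blocks_b, threshold=0.8):
    """blocks_*: lists of per-block feature vectors from the two vehicle-face images."""
    sims = [cosine(a, b) for a, b in zip(blocks_a, blocks_b)]
    return np.mean(sims) >= threshold        # assumed fusion rule: mean similarity
```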

11.
Many 2D face processing algorithms perform better on frontal or near-frontal faces. In this paper, we present a robust frontal view search method based on manifold learning, under the assumption that, with pose as the only variable, face images should lie on a smooth, low-dimensional manifold. In a 2D embedding, we find that the manifold geometry of face images with varying poses has the shape of a parabola with the frontal view at the vertex. However, background clutter and illumination variations make the frontal view deviate from the vertex. To address this problem, we propose a pairwise K-nearest neighbor protocol to extend manifold learning. In addition, we present an illumination-robust localized edge orientation histogram to represent face images in the extended manifold learning. The experimental results show that the extended algorithms have higher search accuracy, even under varying illumination.
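A hedged sketch of a localized edge orientation histogram of the kind referred to above: gradient orientations are histogrammed per cell, weighted by gradient magnitude, and concatenated. The cell size, bin count, and normalization are illustrative assumptions, not the paper's settings:

```python
# Build a per-cell, magnitude-weighted histogram of gradient orientations
# and concatenate the cells into one descriptor for the face image.
import numpy as np

def edge_orientation_histogram(image, cell=8, bins=9):
    gy, gx = np.gradient(image.astype(np.float64))
    mag, ori = np.hypot(gx, gy), np.arctan2(gy, gx)        # gradient magnitude and orientation
    h, w = image.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            hist, _ = np.histogram(ori[i:i+cell, j:j+cell],
                                   bins=bins, range=(-np.pi, np.pi),
                                   weights=mag[i:i+cell, j:j+cell])
            feats.append(hist / (hist.sum() + 1e-12))      # per-cell normalisation
    return np.concatenate(feats)
```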

12.
Forensic age progression for the purpose of ageing a missing child is a discipline currently dominated by artistic methodologies. In order to improve on these techniques, a statistically rigorous approach to the ageing of the human face is presented. The technique is based upon principal component analysis and involves the definition of an ageing direction through the model space, using an age-weighted combination of model parameters. Pose and expression compensation methods are also incorporated, allowing faces at a wide variety of pose orientations and expressions to be aged accurately. Near photo-quality images are obtained quickly, and the resultant ageing effects are realistic and plausible. As a quantitative check of the results, the root mean square error is calculated between the shape vector of the aged face and that of the target face, as well as between the aged face and faces of different identity at the target age. In general, this error is found to be smaller between the aged face and the target face, indicating that the face successfully retains its identity as it is aged. As a further test of the basic plausibility of our results, a regression analysis is performed between the shape model parameters and the age of each subject, assuming a linear relationship. The coefficient of determination is r² = 0.68, and the relationship between the variables is found to be significant at the >0.99 level under a standard F-test.
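A minimal sketch of ageing along a direction in PCA model space: shape vectors are projected onto principal components, an ageing direction is estimated by regressing the coefficients on age, and a face is aged by moving along that direction. The regression-based estimate and the placeholder data are assumptions consistent with, but not identical to, the paper's age-weighted formulation:

```python
# Estimate an ageing direction in PCA shape space and move a face along it.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

shapes = np.random.rand(300, 136)            # e.g. 68 landmarks (x, y) per face, placeholder
ages = np.random.uniform(5, 60, 300)

pca = PCA(n_components=20)
coeffs = pca.fit_transform(shapes)
reg = LinearRegression().fit(ages.reshape(-1, 1), coeffs)
ageing_direction = reg.coef_.ravel()          # change in model parameters per year

def age_face(shape, years):
    """Return the shape vector of `shape` aged forward by `years`."""
    c = pca.transform(shape.reshape(1, -1))
    return pca.inverse_transform(c + years * ageing_direction)
```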

13.
Existing face aging (FA) approaches usually concentrate on a universal aging pattern and produce restricted aging faces from a one-to-one mapping. However, diverse living environments affect individuals differently as they age. To simulate various aging effects, we propose a multimodal FA framework based on a face disentanglement technique that separates age-specific from age-irrelevant information. A Variational Autoencoder (VAE)-based encoder is designed to represent the distribution of the age-specific attributes. To capture the age-irrelevant features, a cycle-consistency loss on unpaired faces across various age spans is utilized. Extensive experimental results demonstrate that sampled age-specific codes, together with the age-irrelevant features, make multimodal FA diverse and realistic.

14.
We present a new approach to face relighting by jointly estimating the pose, reflectance functions, and lighting from as few as one image of a face. Upon such estimation, we can synthesize the face image under any prescribed new lighting condition. In contrast to commonly used face shape models or shape-dependent models, we neither recover nor assume the 3-D face shape during the estimation process. Instead, we train a pose- and pixel-dependent subspace model of the reflectance function using a face database that contains samples of pose and illumination for a large number of individuals (e.g., the CMU PIE database and the Yale database). Using this subspace model, we can estimate the pose, the reflectance functions, and the lighting condition of any given face image. Our approach lends itself to practical applications thanks to many desirable properties, including the preservation of the non-Lambertian skin reflectance properties and facial hair, as well as reproduction of various shadows on the face. Extensive experiments show that, compared to recent representative face relighting techniques, our method successfully produces better results, in terms of subjective and objective quality, without reconstructing a 3-D shape.

15.
To tackle the difficult problem of face pose estimation under variations in illumination, occlusion, identity, and expression, and drawing on the excellent recognition performance of sparse representation classification (SRC), SRC theory is analyzed in depth and applied to face pose classification. To cope with illumination, noise, and occlusion changes in pose estimation, face poses are discretized into different subspaces, each corresponding to one class; on this basis, a face pose recognition method based on dictionary learning and sparsity constraints is proposed. Experiments on the public XJTU and PIE face databases show that the method is robust to variations in illumination, noise, and occlusion.
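A minimal sketch of sparse representation classification applied to pose: the probe is sparsely coded over a dictionary of pose-labelled training faces and assigned to the pose class with the smallest reconstruction residual. The use of Lasso as the l1 solver and the parameter values are assumptions:

```python
# SRC-style pose classification: l1-regularised coding over a dictionary of
# training faces, then class-wise residual comparison.
import numpy as np
from sklearn.linear_model import Lasso

def src_pose(probe, dictionary, labels, alpha=0.01):
    """probe: (d,); dictionary: (d, n) with columns as training faces;
    labels: (n,) pose class of each column."""
    x = Lasso(alpha=alpha, max_iter=10000).fit(dictionary, probe).coef_
    residuals = {}
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)                 # keep only class-c coefficients
        residuals[c] = np.linalg.norm(probe - dictionary @ xc)
    return min(residuals, key=residuals.get)
```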

16.
Appearance-based methods have been proven useful for face recognition tasks. The main problem with appearance-based methods originates from the multimodality of face images: it is known that, in the original data space, images of different people can lie closer to each other than images of the same person taken under different imaging conditions. In this paper, we propose a novel approach based on nonlinear manifold embedding to define a linear subspace for illumination variations. This embedding-based framework uses an optimization scheme to calculate the bases of the subspace. Since the optimization problem does not rely on the physical properties of the factor, the framework can also be used for other types of factors, such as pose and expression. We obtained promising recognition results under changing illumination conditions, with error rates comparable to state-of-the-art methods.

17.
王宏勇, 王青青. 《电子科技》 (Electronic Science and Technology), 2012, 25(12): 141-143
Effective extraction of facial features is a key component of face recognition technology. Traditional two-dimensional images are easily affected by illumination, pose, and expression, whereas three-dimensional data are considered invariant to illumination and pose. This article surveys 3-D facial feature extraction from the perspectives of local and holistic features, compares a number of the methods, analyzes their effectiveness, and summarizes the advantages and difficulties of 3-D facial feature extraction methods.

18.
19.
薛峰, 丁晓青. 《电子学报》 (Acta Electronica Sinica), 2006, 34(10): 1896-1899
Constructing a three-dimensional face structure from multiple face images usually requires automatically extracting corresponding feature points across the different images, which is often very difficult. To avoid this difficulty, this paper builds a 3-D morphable model based on shape matching that estimates the pose of a face image and reconstructs the 3-D face under a best-shape-match criterion. The model deforms a generic head model with radial basis functions, uses shape context to describe the shape similarity between points, and uses shape distance to describe the overall shape similarity between the head model and the face image, thereby achieving 3-D reconstruction in the sense of best shape matching. Experiments show that the algorithm only needs a set of feature points extracted from the face image, without registration, to recover a satisfactory 3-D head structure.
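A hedged sketch of the radial-basis-function deformation step: displacements at a few matched control points are interpolated over every vertex of the generic head model. The thin-plate-spline kernel and the function names are assumptions, not the authors' implementation:

```python
# Deform a generic head mesh by RBF-interpolating the displacements of a few
# matched control vertices to all vertices of the model.
import numpy as np
from scipy.interpolate import RBFInterpolator

def deform_head(vertices, control_src, control_dst):
    """vertices: (V, 3) generic model; control_src/control_dst: (k, 3) matched points."""
    rbf = RBFInterpolator(control_src, control_dst - control_src,
                          kernel='thin_plate_spline')
    return vertices + rbf(vertices)          # add the interpolated displacement field
```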

20.
This paper presents a complete face authentication system integrating both two-dimensional (color or intensity) and three-dimensional (3-D) range data, based on a low-cost 3-D sensor, capable of real-time acquisition of 3-D and color images. Novel algorithms are proposed that exploit depth information to achieve robust face detection and localization under conditions of background clutter, occlusion, face pose alteration, and harsh illumination. The well-known embedded hidden Markov model technique for face authentication is applied to depth maps and color images. To cope with pose and illumination variations, the enrichment of face databases with synthetically generated views is proposed. The performance of the proposed authentication scheme is tested thoroughly on two distinct face databases of significant size. Experimental results demonstrate significant gains resulting from the combined use of depth and color or intensity information.
