Found 20 similar records; search took 93 ms.
1.
Liting Wang Liu Ding Xiaoqing Ding Chi Fang 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2011,15(3):417-428
Recent face recognition algorithms achieve high accuracy when the test face samples are frontal. However, when the face pose changes substantially, the performance of existing methods drops drastically. Pose-robust face recognition is therefore highly desirable, especially when each face class has only one frontal training sample. In this study, we propose a 2D face-fitting-assisted 3D face reconstruction algorithm that aims to recognize faces under different poses when each face class has only one frontal training sample. For each frontal training sample, a 3D face is reconstructed by optimizing the parameters of a 3D morphable model (3DMM). By rotating the reconstructed 3D face to different views, virtual face images at various poses are generated to enlarge the training set for face recognition. Unlike conventional 3D face reconstruction methods, the proposed algorithm uses automatic 2D face fitting to assist 3D face reconstruction. We automatically locate 88 sparse points on the frontal face with a 2D face-fitting algorithm, the Random Forest Embedded Active Shape Model, which embeds random forest learning into the Active Shape Model framework. The 2D face-fitting results are added to the 3D face reconstruction objective function as shape constraints, so the optimization energy function takes not only image intensity but also the 2D fitting results into account. The shape and texture parameters of the 3DMM are thus estimated by fitting the 3DMM to the 2D frontal face sample, a non-linear optimization problem. We evaluate the proposed method on the publicly available CMU PIE database, which includes faces viewed from 11 different poses; the results show that the proposed method is effective and its recognition of pose variants is promising.
2.
Due to storage-space limitations in real-world face recognition systems, often only one sample image per person is stored, which is the so-called single-sample problem. Moreover, real-world illumination affects recognition performance. This paper presents an illumination-robust single-sample face recognition approach that uses multi-directional orthogonal gradient phase faces to address both limitations. In the proposed approach, an illumination-insensitive orthogonal gradient phase face is obtained from two perpendicular directional gradient values of the original image. Multi-directional orthogonal gradient phase faces can then be used to extend the sample set for single-sample face recognition. Experiments and comparisons on a subset of the Yale B database, the Yale database, a subset of the PIE database, and the VALID face database show that the proposed approach is not only an outstanding method for single-sample face recognition under varying illumination, but is also effective when addressing illumination, expression, accessories, and similar variations.
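The orthogonal gradient phase face described above can be sketched minimally as the arctangent of two perpendicular directional gradients; the exact formulation in the paper may differ, so this is only an illustrative assumption of why the representation is illumination-insensitive:

```python
import numpy as np

def gradient_phase_face(img):
    """Phase (arctangent) of two orthogonal directional gradients.
    Slowly varying illumination scales both gradients by roughly the
    same factor, which cancels in the ratio inside the arctangent."""
    gy, gx = np.gradient(img.astype(np.float64))
    return np.arctan2(gy, gx)

rng = np.random.default_rng(0)
face = rng.random((8, 8))
p1 = gradient_phase_face(face)
p2 = gradient_phase_face(2.0 * face)   # globally brighter copy
```

A uniform brightness change leaves the phase face unchanged, which is the intuition behind its illumination insensitivity.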
3.
《Information Forensics and Security, IEEE Transactions on》2008,3(4):684-697
4.
The appearance of a face image is severely affected by illumination conditions, which hinder automatic face recognition. To recognize faces under varying lighting conditions, a homomorphic-filtering-based illumination normalization method is proposed in this paper. The effect of illumination is effectively reduced by a modified implementation of homomorphic filtering whose key component is a Difference-of-Gaussian (DoG) filter, and contrast is enhanced by histogram equalization. The resulting face image not only has a reduced illumination effect but also preserves the edges and details that facilitate the subsequent recognition task. Among other advantages: (1) the method needs neither prior information about 3D shape or light sources nor many training samples, so it can be applied directly when there is a single training image per person; and (2) it is simple and computationally fast, because mature, fast algorithms exist for the Fourier transform used in homomorphic filtering. The Eigenfaces method is chosen to recognize the normalized face images. Experimental results on the Yale face database B and the CMU PIE face database demonstrate the significant performance improvement the proposed method brings to face recognition on images with large illumination variations.
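A minimal sketch of the pipeline this abstract describes: log transform, frequency-domain DoG band-pass, inverse transform, then histogram equalization. The filter widths (`sigma_lo`, `sigma_hi`) are assumed values for illustration, not the paper's parameters:

```python
import numpy as np

def freq_gaussian(shape, sigma):
    # Gaussian transfer function over the FFT frequency grid.
    u = np.fft.fftfreq(shape[0])[:, None]
    v = np.fft.fftfreq(shape[1])[None, :]
    return np.exp(-(u ** 2 + v ** 2) / (2 * sigma ** 2))

def homomorphic_dog(img, sigma_lo=0.02, sigma_hi=0.25):
    """log -> FFT -> DoG band-pass -> inverse FFT.  The DoG transfer
    function is zero at DC, so slowly varying illumination (additive
    in the log domain) is suppressed while edges and details survive."""
    logf = np.log1p(img.astype(np.float64))
    H = freq_gaussian(img.shape, sigma_hi) - freq_gaussian(img.shape, sigma_lo)
    return np.real(np.fft.ifft2(np.fft.fft2(logf) * H))

def hist_eq(img, bins=256):
    # Contrast enhancement: map intensities through their empirical CDF.
    hist, edges = np.histogram(img, bins=bins)
    cdf = hist.cumsum() / img.size
    return np.interp(img.ravel(), edges[:-1], cdf).reshape(img.shape)

rng = np.random.default_rng(0)
face = rng.random((16, 16))
normalized = hist_eq(homomorphic_dog(face))
```

The band-pass shape is why the method preserves edges: mid and high frequencies pass through while the near-DC illumination component is attenuated.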
5.
Amr Almaddah Sadi Vural Yasushi Mae Kenichi Ohara Tatsuo Arai 《Machine Vision and Applications》2014,25(4):845-857
As part of the face recognition task in a robust security system, we propose a novel approach for the illumination recovery of faces with cast shadows and specularities. Given a single 2D face image, we relight the face by extracting the nine spherical harmonic bases and the face's spherical illumination coefficients using the properties of the face spherical spaces. First, an illumination training database is generated by computing the spherical-space properties from face albedo and normal values estimated from 2D training images. The training database is then divided along two axes: the illumination quality and the light direction of each image. Based on the resulting multi-level illumination-discriminative training space, we analyze the target face pixels and compare them with the appropriate training subspace using pre-generated tiles. The framework was designed with practical real-time processing speed and small image sizes in mind. In contrast to other approaches, our technique requires neither 3D face models nor restricted illumination conditions for training. Furthermore, the proposed approach uses a single face image to estimate the face albedo and face spherical spaces. We also report a series of experiments on publicly available databases showing significant improvements in face recognition rates.
6.
The open-set problem is among the problems that have significantly degraded the performance of face recognition algorithms in real-world scenarios. Open-set recognition operates under the assumption that not every probe has a match in the gallery. Most real-world face recognition systems focus on handling pose, expression, and illumination. In addition to these challenges, as the number of subjects grows, the problems are intensified by look-alike faces: pairs of subjects for whom the inter-class similarity exceeds the intra-class variation. Such look-alike faces can be intrinsic, situational, or produced by facial plastic surgery. This work introduces three real-world open-set face recognition methods that handle facial plastic surgery and look-alike faces via 3D face reconstruction and sparse representation. Since some real-world face recognition databases have only one gallery image per person, this paper proposes to overcome that limitation by building a 3D model from each gallery image and synthesizing it into several images. A 3D model is first reconstructed from each frontal gallery image; each reconstructed face is then synthesized into several possible views, and a per-person sparse dictionary is generated from the synthesized images. A likeness dictionary is also defined, and its optimization problem is solved by the proposed method. Finally, open-set face recognition is performed using three proposed representation classifications.
Promising results are achieved for face recognition across plastic surgery and look-alike faces on three databases (the plastic surgery face, look-alike face, and LFW databases) in comparison with several state-of-the-art methods. Several real-world, open-set scenarios are also used to evaluate the proposed method on these databases.
7.
Facial configuration and BMI based personalized face and upper body modeling for customer-oriented wearable product design (cited 1 time)
To realize truly customer-oriented wearable products, individual users' unique characteristics and features should be properly captured and represented. This research focuses on an efficient methodology for generating low-polygon virtual human face models, which overcome the limitations of existing high-polygon models. To capture individuals' characteristics in the conceptual design stage of wearable products, a computerized, personalized 3D face model should be generated efficiently and be able to interact with wearable products. This research formulates a computerized 3D face via a 3D feature-based transformation. The developed algorithm can concisely and efficiently create a 3D face from frontal and lateral pictures of users, and it performs well both on typical PCs and on mobile devices. The generated virtual face models can serve as communication media in a multi-device collaborative design environment. Experiments show that the similarity between the generated 3D faces and the individuals' pictures is acceptable. Finally, this paper discusses how the personalized face modeling can be used for customer-oriented wearable product design, demonstrated by a user study matching a hairstyle product.
8.
9.
B.J. Boom, L.J. Spreeuwers, et al. 《Pattern recognition》2011,44(9):1980-1989
Face recognition under uncontrolled illumination conditions is still considered an unsolved problem. To correct for these conditions, we propose a virtual illumination grid (VIG) approach to model the unknown illumination. Furthermore, we use coupled subspace models of both the facial surface and albedo to estimate the face shape. To obtain a representation of the face under frontal illumination, we relight the estimated face shape, and we show that the frontally illuminated facial images achieve better face recognition performance. We performed the challenging Experiment 4 of the FRGCv2 database, which compares uncontrolled probe images with controlled gallery images. Our illumination correction method yields considerably better recognition rates for a number of well-known face recognition methods, and fusing our global illumination correction with a local illumination correction method brings further improvements.
10.
11.
Taiping Zhang, Yuan Yuan, Zhaowei Shang, Fangnian Lang 《Pattern recognition》2009,42(2):251-258
The facial structure of a face image under varying lighting lies in a multiscale space. To detect and eliminate illumination effects, a wavelet-based face recognition method is proposed in this paper. The effect of illumination is effectively reduced by wavelet-based denoising techniques, which simultaneously generate the multiscale facial structure. The proposed method has the following advantages: (1) it can be applied directly to a single face image, without prior information about 3D shape or light sources and without many training samples; (2) owing to the multiscale nature of the wavelet transform, it preserves edges better in low-frequency illumination fields; and (3) parameter selection is computationally feasible and fast. Experiments on the Yale B and CMU PIE face databases demonstrate that the proposed method achieves satisfactory recognition rates under varying illumination conditions.
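The separation of low-frequency illumination from high-frequency facial structure can be illustrated with a one-level Haar decomposition; the paper uses more sophisticated wavelet denoising, so the `damp` factor and single Haar level here are illustrative assumptions only:

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar decomposition into approximation (LL) and
    detail (LH, HL, HH) sub-bands; image sides must be even."""
    a = (img[0::2, :] + img[1::2, :]) / 2
    d = (img[0::2, :] - img[1::2, :]) / 2
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

def wavelet_illum_normalize(img, damp=0.3):
    """Damp the low-frequency LL band (where slowly varying
    illumination lives) and keep the detail bands intact."""
    ll, lh, hl, hh = haar2d(img)
    return ihaar2d(damp * ll, lh, hl, hh)

rng = np.random.default_rng(3)
face = rng.random((8, 8))
normalized = wavelet_illum_normalize(face)
```

Because only the LL band is attenuated, the high-frequency edges mentioned in the abstract survive the normalization.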
12.
To address illumination variation in face recognition, we improve the conventional sparse representation classifier with random projections and propose an illumination-robust face recognition method based on random projections and weighted sparse representation residuals. Face images are first illumination-normalized to remove harsh lighting as far as possible; the corrected samples are then projected into multiple random subspaces, further enriching their illumination-invariant features and reducing the impact of lighting changes on recognition. On this basis, the conventional sparse representation classifier, which classifies with a single residual, is improved: repeated random projection and sparse representation yield multiple sample features and reconstruction residuals, and the energy of each sample feature determines the fusion weight of the corresponding reconstruction residual, producing a more stable and reliable weighted residual. Experiments on the Yale B and CMU PIE face databases, both with large illumination variation, show that the improved method is highly robust to illumination. Compared with conventional sparse representation, the proposed method raises the average recognition rate by 25.76% and 46.39% in two experiments on Yale B, and by about 10% on CMU PIE.
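The projection-and-fusion machinery of this abstract can be sketched as follows. The sparse coding step itself is omitted (the per-class residuals below are dummy values standing in for sparse reconstruction residuals), and weighting by feature energy is implemented as stated in the abstract:

```python
import numpy as np

def random_projections(X, k, n_proj, rng):
    """Project d-dim rows of X into k dims with several independent
    Gaussian random matrices; each projection yields one feature set."""
    d = X.shape[1]
    return [X @ (rng.standard_normal((d, k)) / np.sqrt(k))
            for _ in range(n_proj)]

def fuse_residuals(residuals, feature_energies):
    """Weight each projection's per-class residual vector by the energy
    of that projection's probe feature, then sum into one residual."""
    w = np.asarray(feature_energies, dtype=float)
    w = w / w.sum()
    return sum(wi * np.asarray(ri) for wi, ri in zip(w, residuals))

rng = np.random.default_rng(1)
probe = rng.random((1, 32))
feats = random_projections(probe, k=8, n_proj=3, rng=rng)
energies = [float(np.sum(f ** 2)) for f in feats]
residuals = [rng.random(5) for _ in feats]   # dummy residuals, 5 classes
fused = fuse_residuals(residuals, energies)
```

In the actual method, each residual vector would come from sparse coding of the projected probe over the projected training dictionary; classification then picks the class with the smallest fused residual.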
13.
14.
Under varying illumination, face recognition becomes difficult because of nonlinear interference from the intensity and direction of the lighting. Within a local facial region, illumination varies slowly while the reflectance of the skin varies quickly, so illumination can be regarded as a low-frequency signal and the intrinsic facial features as high-frequency signals. FABEMD (Fast and Adaptive Bidimensional Empirical Mode Decomposition) is a fast, adaptive BEMD method that decomposes an image into high-frequency and low-frequency images at different scales: the high-frequency images capture the detailed texture of the skin, while the low-frequency images capture the contours. However, there is no quantitative way to decide which, and how many, high-frequency components should be used to remove the illumination effect, so two measures of the information content of the high-frequency details are proposed, and the relative values of these measures are used to derive the weights for fusing high-frequency components at different scales. Experiments on the Yale B face database confirm that the proposed method achieves good recognition results.
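The weighted fusion of multi-scale high-frequency components can be sketched generically; variance is used here as an assumed stand-in for the paper's two information measures, and the components themselves are synthetic rather than FABEMD outputs:

```python
import numpy as np

def fuse_high_frequency(components, measure=np.var):
    """Fuse multi-scale high-frequency images with weights proportional
    to a relative information measure of each component."""
    info = np.array([measure(c) for c in components], dtype=float)
    w = info / info.sum()
    return sum(wi * c for wi, c in zip(w, components)), w

rng = np.random.default_rng(2)
comps = [rng.normal(0, s, (8, 8)) for s in (1.0, 0.2)]  # two detail scales
fused, weights = fuse_high_frequency(comps)
```

Components carrying more detail information receive larger weights, so the fused image emphasizes the scales where facial texture dominates illumination.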
15.
To address the difficulty of frontalizing large-pose profile faces from a single image, a face frontalization method based on a multi-pose feature fusion generative adversarial network (MFFGAN) is proposed. It exploits the correlated information among multiple profile images at different poses and adjusts the network parameters through an adversarial mechanism. The method designs a new network consisting of a generator with three modules (multi-pose feature extraction, multi-pose feature fusion, and frontal face synthesis) and a discriminator for adversarial training. The multi-pose feature extraction module uses several convolutional layers to extract multi-pose features from the profile images; the fusion module merges them into a fused feature containing multi-pose profile information; and the synthesis module injects the fused feature during pose correction, exploiting the feature dependencies among multi-pose profile images to obtain correlated information and global structure, which effectively improves the correction results. Experimental results show that, compared with existing deep-learning-based frontalization methods, the frontal images recovered by the proposed method not only have clear contours, but the recognition rate of frontal images recovered from two profiles is also raised by 1.9 percentage points on average; moreover, the more profile images are input, the higher the recognition rate of the recovered frontal image, demonstrating that the proposed method effectively fuses multi-pose features to recover frontal faces with clear contours.
16.
To address the sensitivity of 2D face recognition to pose and illumination variation, a 2D face recognition method based on 3D data and mixture of multi-scale singular value (MMSV) features is proposed. In the training stage, 3D face data and an illumination model are used to obtain a large number of 2D virtual images with different poses and illumination conditions, laying the foundation for a complete feature template; meanwhile, subset partitioning effectively alleviates the nonlinearity of face feature extraction. Finally, MMSV features are extracted from the face images, fusing global and local facial features. In the recognition stage, classification is performed by computing distances in the MMSV feature subspace. Experiments show that the extracted MMSV features carry more discriminative information and are robust to pose and illumination variation; the method achieves a recognition rate of about 98.4% on the WHU-3D database.
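A much-simplified sketch of multi-scale singular value features: compute singular values of the image at several scales and concatenate them. The block-average downsampling and the scale set are assumptions for illustration; the paper's MMSV construction (including the mixture and subset partitioning) is richer than this:

```python
import numpy as np

def multiscale_singular_values(img, scales=(1, 2)):
    """Concatenate the singular values of the image at several scales
    (each scale: block-average downsampling, then SVD)."""
    feats = []
    for s in scales:
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        small = img[:h, :w].reshape(h // s, s, w // s, s).mean(axis=(1, 3))
        feats.append(np.linalg.svd(small, compute_uv=False))
    return np.concatenate(feats)

rng = np.random.default_rng(4)
feat = multiscale_singular_values(rng.random((8, 8)))
```

Singular values summarize the global energy distribution of the image at each scale, which is one reason such features tolerate moderate pose and lighting changes.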
17.
The morphable model has been employed to efficiently describe 3D face shape and the associated albedo with a reduced set of basis vectors. The spherical harmonics (SH) model provides a compact basis that well approximates the image appearance of a Lambertian object under different illumination conditions. Recently, the SH and morphable models have been integrated for 3D face shape reconstruction. However, the reconstructed 3D shape is either inconsistent with the SH bases or obtained from landmarks only. In this work, we propose a geometrically consistent algorithm that iteratively reconstructs the 3D face shape and the associated albedo from a single face image by combining the morphable model and the SH model. The reconstructed 3D face geometry uniquely determines the SH bases, so the optimal 3D face model can be obtained by minimizing the error between the input face image and a linear combination of the associated SH bases. In this way, we preserve the consistency between the 3D geometry and the SH model, refining the 3D shape reconstruction recursively. Furthermore, we present a novel approach that recovers the illumination condition from the estimated weighting vector for the SH bases via a constrained optimization formulation independent of the 3D geometry. Experimental results show the effectiveness and accuracy of the proposed face reconstruction and illumination estimation algorithm under different face poses and multiple-light-source illumination conditions.
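The nine SH basis images for a Lambertian surface (in the spirit of the Basri-Jacobs model referenced by this line of work) can be sketched from per-pixel normals and albedo; the SH normalization constants are omitted here, so the bases are only proportional to the properly scaled ones:

```python
import numpy as np

def sh_basis_9(normals, albedo):
    """First nine spherical-harmonic basis images for a Lambertian
    surface.  `normals`: (H, W, 3) unit normals; `albedo`: (H, W).
    Normalization constants omitted for brevity."""
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    b = np.stack([
        np.ones_like(nx),             # l = 0
        nx, ny, nz,                   # l = 1
        nx * ny, nx * nz, ny * nz,    # l = 2
        nx ** 2 - ny ** 2,
        3 * nz ** 2 - 1,
    ], axis=-1)
    return albedo[..., None] * b

H, W = 4, 4
normals = np.zeros((H, W, 3))
normals[..., 2] = 1.0                 # flat surface facing the camera
bases = sh_basis_9(normals, np.ones((H, W)))
img = bases @ np.ones(9)              # image = linear combo of the bases
```

Fitting then amounts to finding the 9-vector of weights (the illumination coefficients) that best reproduces the input image as such a linear combination, which is exactly the minimization the abstract describes.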
18.
Illuminant-Dependence of Von Kries Type Quotients (cited 9 times)
An expression-invariant 3D face recognition approach is presented. Our basic assumption is that facial expressions can be modelled as isometries of the facial surface. This allows us to construct expression-invariant representations of faces using the bending-invariant canonical forms approach. The result is an efficient and accurate face recognition algorithm, robust to facial expressions, that can distinguish between identical twins (the first two authors). We demonstrate a prototype system based on the proposed algorithm and compare its performance to classical face recognition methods. The numerical methods employed by our approach do not require the facial surface explicitly: the surface gradient field, or the surface metric, is sufficient for constructing the expression-invariant representation of any given face. This allows us to perform 3D face recognition while avoiding the surface reconstruction stage.
19.
Exchanging Faces in Images (cited 1 time)
Volker Blanz Kristina Scherbaum Thomas Vetter Hans-Peter Seidel 《Computer Graphics Forum》2004,23(3):669-676
20.