Similar Documents
 20 similar documents found (search time: 29 ms)
1.
In this paper, we present a kernel-based eigentransformation framework to hallucinate the high-resolution (HR) facial image of a low-resolution (LR) input. The eigentransformation method is a linear subspace approach, which represents an image as a linear combination of training samples. Consequently, novel facial appearances not included in the training samples cannot be super-resolved properly. To solve this problem, we devise a kernel-based extension of the eigentransformation method, which takes higher-order statistics of the image data into account. To generate HR face images with higher fidelity, the HR face image reconstructed using this kernel-based eigentransformation method is treated as an initial estimate of the target HR face. The corresponding high-frequency components of this estimate are extracted to form a prior in the maximum a posteriori (MAP) formulation of the SR problem, from which the final reconstruction result is derived. We have evaluated our proposed method using different kernels and configurations, and have compared its performance with several current SR algorithms. Experimental results show that our kernel-based framework, along with a proper kernel, can produce good HR facial images in terms of both visual quality and reconstruction error.
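A minimal numerical sketch of the linear eigentransformation step that this entry builds on (synthetic vectors stand in for vectorised face images; the paper's contribution replaces this linear fit with a kernel-based one):

```python
import numpy as np

# Toy eigentransformation: express the LR input as a linear combination of
# LR training samples, then transfer the same weights to the HR samples.
# All data here are synthetic stand-ins for vectorised face images.
rng = np.random.default_rng(0)
n_train, lr_dim, hr_dim = 8, 16, 64
lr_train = rng.standard_normal((n_train, lr_dim))   # rows: LR training faces
hr_train = rng.standard_normal((n_train, hr_dim))   # corresponding HR faces

# An LR input that lies in the span of the training set (so the linear
# model can represent it exactly).
lr_input = lr_train.T @ rng.standard_normal(n_train)

# Combination weights over the LR training samples...
w, *_ = np.linalg.lstsq(lr_train.T, lr_input, rcond=None)
# ...reused on the HR training samples to hallucinate the HR face.
hr_estimate = hr_train.T @ w
```

The limitation the abstract names is visible here: an input outside the training span cannot be represented by these weights, which the kernel extension addresses by performing the fit in a higher-order feature space.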

2.
In this paper, we present a 3D face photography system based on a facial expression training dataset composed of both facial range images (3D geometry) and facial texture (2D photography). The proposed system obtains a 3D geometric representation of a face given as a 2D photograph, which undergoes a series of transformations through the estimated texture and geometry spaces. In the training phase of the system, facial landmarks are obtained by an active shape model (ASM) extracted from the 2D gray-level photographs. Principal component analysis (PCA) is then used to represent the face dataset, defining one orthonormal basis for texture and another for geometry. In the reconstruction phase, the ASM is matched to an input face image; the extracted facial landmarks and the face image are fed to the PCA basis transform, and a 3D version of the 2D input image is built. Experimental tests using a new training dataset of 70 facial expressions from ten subjects show that 3D faces are reconstructed rapidly and maintain a spatial coherence consistent with human perception, corroborating the efficiency and applicability of the proposed system.
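A minimal sketch of the PCA step described above, on synthetic data: build an orthonormal basis from a training set and recover a sample exactly from its projection coefficients.

```python
import numpy as np

# Build an orthonormal PCA basis for a (synthetic) texture training set and
# reconstruct a training face from its projection coefficients.
rng = np.random.default_rng(1)
faces = rng.standard_normal((10, 50))        # 10 training "faces", 50 pixels
mean = faces.mean(axis=0)
centered = faces - mean

# SVD of the centred data: the rows of `basis` form the orthonormal PCA basis.
_, _, basis = np.linalg.svd(centered, full_matrices=False)

coeffs = basis @ (faces[0] - mean)           # project onto the basis
reconstruction = mean + basis.T @ coeffs     # full basis -> exact recovery
```

Keeping the full basis reproduces a training sample exactly; in practice only the leading components are retained, trading reconstruction error for a compact representation.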

3.
Reconstructing 3D face models from 2D face images is usually done by using a single reference 3D face model or some gender/ethnicity-specific 3D face models. However, different persons, even those of the same gender or ethnicity, usually have significantly different faces in terms of their overall appearance, which forms the basis of person recognition via faces. Consequently, existing 3D reference model based methods have limited capability of reconstructing precise 3D face models for a large variety of persons. In this paper, we propose to explore a reservoir of diverse reference models for 3D face reconstruction from forensic mugshot face images, where facial exemplars coherent with the input determine the final shape estimation. Specifically, our 3D face reconstruction is formulated as an energy minimization problem with: 1) a shading constraint from multiple input face images, 2) distortion and self-occlusion based color consistency between different views, and 3) a depth-uncertainty based smoothness constraint on adjacent pixels. The proposed energy is minimized in a coarse-to-fine way, where the shape refinement step is done using a multi-label segmentation algorithm. Experimental results on challenging datasets demonstrate that the proposed algorithm is capable of recovering high-quality 3D face models. We also show that our reconstructed models successfully boost face recognition accuracy.

4.
In this paper, we propose a face-hallucination method based on sparse local-pixel structure. In our framework, a high-resolution (HR) face is estimated from a single low-resolution (LR) frame with the help of a facial dataset. Unlike many existing face-hallucination methods, such as the local-pixel structure to global image super-resolution (LPS-GIS) method and super-resolution through neighbor embedding, where the prior models are learned with least-squares methods, our framework shapes the prior model using sparse representation. This learned prior model is then employed to guide the reconstruction process. Experiments show that our framework is very flexible and achieves competitive or even superior performance in terms of both reconstruction error and visual quality, and it exhibits an impressive ability to generate plausible HR facial images from their sparse local structures.
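To make the "sparse representation" prior concrete, here is a dependency-light sketch of sparse coding via orthogonal matching pursuit; this particular solver is a stand-in, not the one specified by the paper.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Greedy orthogonal matching pursuit: find a sparse code for y over
    the dictionary D (columns are unit-norm atoms)."""
    residual, support = y.copy(), []
    for _ in range(n_nonzero):
        # After projection the residual is orthogonal to the chosen atoms,
        # so the argmax never reselects an atom already in the support.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        sub = D[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
D = rng.standard_normal((20, 40))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
x_true = np.zeros(40)
x_true[[3, 17]] = [1.5, -2.0]                # 2-sparse ground truth
y = D @ x_true                               # synthetic "patch" to encode
x_hat = omp(D, y, 2)
```

The sparse code `x_hat` plays the role of the learned prior: the patch is explained by very few dictionary atoms rather than by a dense least-squares combination.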

5.
Objective: Learning-based single-image super-resolution produces a high-resolution image from a low-resolution input with the aid of a training set of examples. We propose a single-image super-resolution method based on patch self-similarity and support vector regression (SVR), a model that fits nonlinear mappings well; the method requires no external training database. Method: An image pyramid is first built from the input low-resolution image, together with a set of low-/high-resolution patch pairs. For each input low-resolution patch, similar patches are retrieved from this set, and an SVR model learns the mapping between these similar low-resolution patches and the centre pixels of their corresponding high-resolution patches, from which the centre pixel of the unknown high-resolution patch is predicted. Results: To validate the proposed algorithm, seven colour high-resolution images with different structures and textures were Gaussian-blurred and downsampled by a factor of two, and the resulting low-resolution images were super-resolved. Compared with bicubic interpolation, a sparse-representation method, and an SVR-based method, the average peak signal-to-noise ratio improved by 2.37 dB, 0.70 dB, and 0.57 dB, respectively. Conclusion: The experimental results show that the proposed algorithm reconstructs images well, and performs especially well on images with highly self-similar texture structures.
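A toy sketch of the internal-example idea: build LR/HR patch pairs from the input image's own downsampled version and regress the HR centre pixel from the LR patch. Plain ridge regression stands in for the paper's support vector regression to stay dependency-free.

```python
import numpy as np

# Build internal LR/HR training pairs from a single (synthetic) image:
# each 3x3 patch of the downsampled image predicts the corresponding
# centre pixel of the original image.
rng = np.random.default_rng(3)
img = rng.random((16, 16))
lr = img[::2, ::2]                      # 2x downsampled "LR" version

pairs_X, pairs_y = [], []
for i in range(1, lr.shape[0] - 1):
    for j in range(1, lr.shape[1] - 1):
        patch = lr[i - 1:i + 2, j - 1:j + 2].ravel()   # 3x3 LR patch
        pairs_X.append(patch)
        pairs_y.append(img[2 * i, 2 * j])              # HR centre pixel
X, y = np.array(pairs_X), np.array(pairs_y)

# Ridge regression as a stand-in for the SVR model of the paper.
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
pred = X @ w                            # in-sample predictions
```

At test time the same regressor is applied to patches of the upscaled input to predict the unknown HR centre pixels.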

6.
Represented in a Morphable Model, 3D faces follow curved trajectories in face space as they age. We present a novel algorithm that computes the individual aging trajectories for given faces, based on a non-linear function that assigns an age to each face vector. This function is learned from a database of 3D scans of teenagers and adults using support vector regression. To apply the aging prediction to images of faces, we reconstruct a 3D model from the input image, apply the aging transformation on both shape and texture, and then render the face back into the same image or into images of other individuals at the appropriate ages, for example images of older children. Among other applications, our system can help to find missing children.

7.
Face super-resolution (SR) is the process of recovering a relatively clear high-resolution (HR) face image from a blurred low-resolution (LR) input through a sequence of processing steps. Compared with natural images, the same positions in different face images usually share similar structures. Exploiting this local structural consistency of face images, we propose a new graph-based neural-network regression method for face super-resolution. The input LR image is represented as a graph, and a shallow neural network is trained on the local representation of each graph node to perform super-resolution regression. Compared with methods based on a regular rectangular grid, the graph structure describes the local information of a pixel by considering not only the correlation of image coordinates but also the similarity of textures, and can thus better capture local image features. During training, the network parameters of each node are initialised from the converged parameters of neighbouring nodes, which both accelerates convergence and improves prediction accuracy. Comparative experiments with state-of-the-art learning-based SR algorithms, including deep convolutional neural networks, show that the proposed algorithm achieves higher accuracy. The proposed graph neural network (GNN) is not limited to face super-resolution; it can also be applied to other data with irregular topological structures and to different problems.

8.
In this paper, a face hallucination method based on two-dimensional joint learning is presented. Unlike existing face super-resolution algorithms that first reshape the image or image patch into a 1D vector, our study efficiently maintains the spatial structure of the high-resolution (HR) and low-resolution (LR) face images throughout the reconstruction procedure. Inspired by the 1D joint learning approach to image super-resolution, we propose a 2D joint learning algorithm that maps the original 2D LR and HR image patch spaces onto a unified feature subspace. The neighbor-embedding (NE) based super-resolution algorithm is then conducted on this unified feature subspace to estimate the reconstruction weights, with which the initial HR facial image is generated. To further refine this initial HR estimate, a global reconstruction constraint is exploited to improve the quality of the reconstruction result. Experiments on face databases and real-world face images demonstrate the effectiveness of the proposed algorithm.
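A sketch of the neighbour-embedding (NE) weight-estimation step mentioned above, using the standard LLE-style closed form on synthetic patches:

```python
import numpy as np

# Neighbour embedding: code a query LR patch over its k nearest LR training
# patches with weights summing to one, then reuse those weights on the
# corresponding HR patches. All patches here are synthetic.
rng = np.random.default_rng(4)
lr_patches = rng.standard_normal((50, 9))    # training LR patches (3x3)
hr_patches = rng.standard_normal((50, 36))   # corresponding HR patches (6x6)
query = rng.standard_normal(9)

k = 5
idx = np.argsort(np.linalg.norm(lr_patches - query, axis=1))[:k]
N = lr_patches[idx]                          # the k nearest neighbours

# Local Gram matrix of the centred neighbours; solve G w = 1 and normalise
# so the weights sum to one (the standard LLE closed form).
G = (N - query) @ (N - query).T
G += 1e-6 * np.trace(G) * np.eye(k)          # regularise for stability
w = np.linalg.solve(G, np.ones(k))
w /= w.sum()

hr_patch = w @ hr_patches[idx]               # transfer weights to HR space
```

In the paper this step operates on the unified 2D feature subspace rather than on raw patch vectors.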

9.
In this paper we show how to estimate facial surface reflectance properties (a slice of the BRDF and the albedo) in conjunction with the facial shape from a single image. The key idea underpinning our approach is to iteratively interleave the two processes of estimating reflectance properties based on the current shape estimate and updating the shape estimate based on the current estimate of the reflectance function. For frontally illuminated faces, the reflectance properties can be described by a function of one variable which we estimate by fitting a curve to the scattered and noisy reflectance samples provided by the input image and estimated shape. For non-frontal illumination, we fit a smooth surface to the scattered 2D reflectance samples. We make use of a novel statistical face shape constraint which we term ‘model-based integrability’ which we use to regularise the shape estimation. We show that the method is capable of recovering accurate shape and reflectance information from single grayscale or colour images using both synthetic and real world imagery. We use the estimated reflectance measurements to render synthetic images of the face in varying poses. To synthesise images under novel illumination, we show how to fit a parametric model of reflectance to the estimated reflectance function.

10.
In this paper we present a robust and lightweight method for the automatic fitting of deformable 3D face models on facial images. Popular fitting techniques, such as those based on statistical models of shape and appearance, require a training stage based on a set of facial images and their corresponding facial landmarks, which have to be manually labeled. Therefore, new images in which to fit the model cannot differ too much in shape and appearance (including illumination variation, facial hair, wrinkles, etc.) from those used for training. By contrast, our approach fits a generic face model in two steps: (1) the detection of facial features based on local image gradient analysis and (2) the backprojection of a deformable 3D face model through the optimization of its deformation parameters. The proposed approach retains the advantages of both learning-free and learning-based approaches. Thus, we can estimate the position, orientation, shape and actions of faces, and initialize user-specific face tracking approaches, such as Online Appearance Models (OAMs), which have been shown to be more robust than generic user tracking approaches. Experimental results show that our method outperforms other fitting alternatives under challenging illumination conditions and with a computational cost that allows its implementation in devices with low hardware specifications, such as smartphones and tablets. Our proposed approach lends itself nicely to many frameworks addressing semantic inference in face images and videos.

11.
Estimating correspondences between images using optical flow is a key component of image fusion; however, computing optical flow between a pair of facial images that include backgrounds is challenging due to large differences in illumination, texture, color and background. To improve optical flow results for image fusion, we propose a novel flow estimation method, wavelet flow, which can handle both the face and the background in the input images. The key idea is that instead of computing flow directly between the input image pair, we estimate the image flow by incorporating multi-scale image transfer and optical-flow-guided wavelet fusion. Multi-scale image transfer helps preserve the background and lighting detail of the input, while optical-flow-guided wavelet fusion produces a series of intermediate images for further optimization of fusion quality. Our approach significantly improves the performance of the optical flow algorithm and provides more natural fusion results for both faces and backgrounds. We evaluate our method on a variety of datasets and show that it consistently outperforms existing approaches.

12.
Image and Vision Computing, 2002, 20(5-6): 359-368
Support vector machines (SVMs) have shown great potential for learning classification functions that can be applied to object recognition. In this work, we extend SVMs to model the appearance of human faces which undergo non-linear change across multiple views. The approach uses inherent factors in the nature of the input images and the SVM classification algorithm to perform both multi-view face detection and pose estimation.

13.
While many works consider moving faces only as collections of frames and apply still image-based methods, recent developments indicate that excellent results can be obtained using texture-based spatiotemporal representations for describing and analyzing faces in videos. Inspired by the psychophysical findings which state that facial movements can provide valuable information to face analysis, and also by our recent success in using LBP (local binary patterns) for combining appearance and motion for dynamic texture analysis, this paper investigates the combination of facial appearance (the shape of the face) and motion (the way a person is talking and moving his/her facial features) for face analysis in videos. We propose and study an approach for spatiotemporal face and gender recognition from videos using an extended set of volume LBP features and a boosting scheme. We experiment with several publicly available video face databases and consider different benchmark methods for comparison. Our extensive experimental analysis clearly assesses the promising performance of the LBP-based spatiotemporal representations for describing and analyzing faces in videos.
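For reference, a minimal implementation of the basic (spatial) LBP code on a 3x3 neighbourhood; the volume LBP used in the paper extends this by also sampling neighbours along the temporal axis of the video.

```python
import numpy as np

def lbp8(img):
    """Basic 3x3 LBP code for each interior pixel of a grayscale image:
    one bit per neighbour, set when the neighbour >= centre pixel."""
    c = img[1:-1, 1:-1]
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],   img[2:, 2:],     img[2:, 1:-1],
                  img[2:, 0:-2],   img[1:-1, 0:-2]]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        code |= (n >= c).astype(np.uint8) << bit
    return code

img = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]], dtype=float)
# Centre pixel is 5; neighbours >= 5 are 6, 9, 8, 7 -> bits 3,4,5,6 set,
# giving the code 8 + 16 + 32 + 64 = 120.
codes = lbp8(img)
```

Histograms of these codes over image (or video-volume) regions form the texture descriptor that the boosting scheme then selects from.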

14.
Pattern Recognition, 2014, 47(2): 556-567
For face recognition, image features are first extracted and then matched to those features in a gallery set. The amount of information and the effectiveness of the features used will determine the recognition performance. In this paper, we propose a novel face recognition approach using information about face images at higher and lower resolutions so as to enhance the information content of the features that are extracted and combined at different resolutions. As the features from different resolutions should closely correlate with each other, we employ the cascaded generalized canonical correlation analysis (GCCA) to fuse the information to form a single feature vector for face recognition. To improve the performance and efficiency, we also employ “Gabor-feature hallucination”, which predicts the high-resolution (HR) Gabor features from the Gabor features of a face image directly by local linear regression. We also extend the algorithm to low-resolution (LR) face recognition, in which the medium-resolution (MR) and HR Gabor features of a LR input image are estimated directly. The LR Gabor features and the predicted MR and HR Gabor features are then fused using GCCA for LR face recognition. Our algorithm can avoid having to perform the interpolation/super-resolution of face images and having to extract HR Gabor features. Experimental results show that the proposed methods have a superior recognition rate and are more efficient than traditional methods.
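A sketch of feature hallucination by local linear regression, the mechanism behind the "Gabor-feature hallucination" step. Synthetic vectors stand in for Gabor features, and the LR-to-HR relation is made exactly linear here so the local fit is verifiable.

```python
import numpy as np

# Predict HR features directly from LR features via a linear map fitted on
# the K nearest training examples of the query.
rng = np.random.default_rng(5)
A = rng.standard_normal((12, 6))             # a fixed (unknown) linear map
lr_feats = rng.standard_normal((100, 6))     # training LR feature vectors
hr_feats = lr_feats @ A.T                    # HR features, linear in LR here

query = rng.standard_normal(6)
K = 20
idx = np.argsort(np.linalg.norm(lr_feats - query, axis=1))[:K]

# Least-squares affine map fitted on the local neighbourhood only.
X = np.hstack([lr_feats[idx], np.ones((K, 1))])
W, *_ = np.linalg.lstsq(X, hr_feats[idx], rcond=None)
hr_pred = np.append(query, 1.0) @ W          # hallucinated HR features
```

Because the synthetic relation is exactly linear, the local regression recovers it; on real Gabor features the map is only locally approximated, which is why the neighbourhood is restricted to the K nearest examples.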

15.
Aging is the main cause of change in facial appearance, and because every person's lifestyle and living conditions differ, accurately estimating age from a face image is difficult. This paper proposes an age-estimation method for face images: facial features for age estimation are extracted automatically with an active appearance model (AAM) and then classified by age group. To improve classification accuracy, an artificial immune recognition system based on the partial Hausdorff distance is proposed, and the training and classification procedures of the system are described. Experiments confirm the validity of the method.
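For reference, a minimal implementation of the directed partial Hausdorff distance that underlies the classifier (the k-th ranked nearest-neighbour distance, in Huttenlocher's sense):

```python
import numpy as np

def partial_hausdorff(A, B, k):
    """Directed partial Hausdorff distance h_k(A, B): the k-th smallest of
    the nearest-neighbour distances from each point of A to the set B.
    With k = len(A) this reduces to the classic directed Hausdorff distance."""
    d = np.array([np.min(np.linalg.norm(B - a, axis=1)) for a in A])
    return np.sort(d)[k - 1]

A = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
# Nearest-neighbour distances from A to B are [0, 0, 9]; choosing k < len(A)
# discards the worst matches, which makes the measure robust to outliers.
```

This robustness to a fraction of badly matched points is what makes the partial variant attractive for comparing noisy AAM feature sets.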

16.
We propose a personality-trait exaggeration system that emphasizes the impression a human face makes in images, based on multi-level feature learning and exaggeration. These features are called the Personality Trait Model (PTM). The abstract level of the PTM consists of social-psychology traits of face perception such as amiable, mean, and cute; the concrete level consists of shape features and texture features. A training phase learns the multi-level features of faces from different images. A statistical survey is used to label sample images with people's first impressions. From images with the same labels, we capture not only shape features but also texture features to enhance the exaggeration effect; texture features are expressed as matrices reflecting the depth of facial organs, wrinkles, and so on. In the application phase, original images are exaggerated iteratively using the PTM, and the exaggeration rate of each iteration is constrained to preserve likeness with the original face. Experimental results demonstrate that our system can effectively emphasize chosen social-psychology traits.

17.
Objective: Human face-cognition patterns have long been studied and have been successfully applied in fields such as cosmetic surgery. In computer vision and pattern recognition, however, existing methods for computing facial similarity do not take human face-cognition patterns into account, so their results are not optimal from the perspective of human cognitive habits. To overcome this, we propose a similar-face search algorithm based on face-cognition patterns. Method: Guided by these patterns, feature points are selected and feature measures computed to build shape-similarity classification models for each facial organ (eyes, nose, mouth, and face contour). A circular LBP operator is additionally used to compute the texture similarity of corresponding organs between two faces, and the two measures are combined as the basis for similar-face search. Results: The proposed method and the Face++ method, which represents the state of the art in similar-face search, were both tested on 80 frontal, neutral-expression face images taken at eye level. Our method achieved higher overall accuracy than Face++, with a clear advantage on the TOP1 and TOP2 most-similar search results, exceeding Face++ by more than 12% on both. Conclusion: The experimental results show that our search results better match human face-cognition patterns, and the method is applicable to similar-face search on frontal, neutral-expression, eye-level face images. The same cognition-based image-search idea could also be extended to commercial applications, such as image-based search for similar products in online shopping.
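A sketch of the final score-combination and ranking step described above; the per-organ scores and the shape/texture weighting `alpha` are synthetic placeholders, not the paper's values.

```python
import numpy as np

# Combine per-organ shape-similarity scores with circular-LBP texture
# similarity into one score and rank the gallery by it. All scores here
# are random placeholders.
rng = np.random.default_rng(6)
n_gallery = 8
# shape similarity per organ (eyes, nose, mouth, contour), in [0, 1]
shape_sim = rng.random((n_gallery, 4))
texture_sim = rng.random(n_gallery)          # texture similarity per face

alpha = 0.6                                  # assumed shape/texture weighting
score = alpha * shape_sim.mean(axis=1) + (1 - alpha) * texture_sim
ranking = np.argsort(-score)                 # most similar gallery face first
```

In practice the weighting between organ-shape and texture terms would be tuned on labelled similarity judgements rather than fixed by hand.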

18.
Automatically locating facial landmarks in images is an important task in computer vision. This paper proposes a novel context modeling method for facial landmark detection, which integrates context constraints together with local texture model in the cascaded AdaBoost framework. The motivation of our method lies in the basic human psychology observation that not only the local texture information but also the global context information is used for human to locate facial landmarks in faces. Therefore, in our solution, a novel type of feature, called Non-Adjacent Rectangle (NAR) Haar-like feature, is proposed to characterize the co-occurrence between facial landmarks and its surroundings, i.e., the context information, in terms of low-level features. For the locating task, traditional Haar-like features (characterizing local texture information) and NAR Haar-like features (characterizing context constraints in global sense) are combined together to form more powerful representations. Through Real AdaBoost learning, the most discriminative feature set is selected automatically and used for facial landmark detection. To verify the effectiveness of the proposed method, we evaluate our facial landmark detection algorithm on BioID and Cohn-Kanade face databases. Experimental results convincingly show that the NAR Haar-like feature is effective to model the context and our proposed algorithm impressively outperforms the published state-of-the-art methods. In addition, the generalization capability of the NAR Haar-like feature is further validated by extended applications to face detection task on FDDB face database.
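A sketch of a Non-Adjacent Rectangle (NAR) Haar-like feature: the difference in mean intensity between two rectangles that need not touch, computed in O(1) per rectangle with an integral image. The rectangle placements below are illustrative, not learned configurations from the paper.

```python
import numpy as np

def integral(img):
    """Integral image: ii[r, c] = sum of img[0:r+1, 0:c+1]."""
    return np.cumsum(np.cumsum(img, axis=0), axis=1)

def rect_sum(ii, top, left, h, w):
    """Sum of img[top:top+h, left:left+w] from the integral image."""
    total = ii[top + h - 1, left + w - 1]
    if top > 0:
        total -= ii[top - 1, left + w - 1]
    if left > 0:
        total -= ii[top + h - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

img = np.arange(36, dtype=float).reshape(6, 6)
ii = integral(img)
# Two non-adjacent 2x2 rectangles, e.g. one near an eye, one on the cheek.
a = rect_sum(ii, 0, 0, 2, 2) / 4            # mean of img[0:2, 0:2]
b = rect_sum(ii, 4, 4, 2, 2) / 4            # mean of img[4:6, 4:6]
nar_feature = a - b
```

Unlike classic Haar-like features, the two rectangles are decoupled, which is what lets the feature capture co-occurrence between a landmark and distant context.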

19.
In this paper, we present a new method to modify the appearance of a face image by manipulating the illumination condition, when the face geometry and albedo information is unknown. This problem is particularly difficult when there is only a single image of the subject available. Recent research demonstrates that the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace using a spherical harmonic representation. Moreover, morphable models are statistical ensembles of facial properties such as shape and texture. In this paper, we integrate spherical harmonics into the morphable model framework by proposing a 3D spherical harmonic basis morphable model (SHBMM). The proposed method can represent a face under arbitrary unknown lighting and pose simply by three low-dimensional vectors, i.e., shape parameters, spherical harmonic basis parameters, and illumination coefficients, which are called the SHBMM parameters. However, when the image was taken under an extreme lighting condition, the approximation error can be large, thus making it difficult to recover albedo information. In order to address this problem, we propose a subregion-based framework that uses a Markov random field to model the statistical distribution and spatial coherence of face texture, which makes our approach not only robust to extreme lighting conditions, but also insensitive to partial occlusions. The performance of our framework is demonstrated through various experimental results, including the improved rates for face recognition under extreme lighting conditions.
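A sketch of the nine-dimensional spherical-harmonic basis for a Lambertian surface that the SHBMM builds on: following the Basri-Jacobs order-2 harmonics (constant factors omitted here), an image is approximated as a linear combination of nine basis images with the illumination coefficients as weights.

```python
import numpy as np

def sh_basis(albedo, normals):
    """Order-2 spherical-harmonic basis images from per-pixel albedo and
    unit surface normals (harmonics listed up to constant factors)."""
    nx, ny, nz = normals[:, 0], normals[:, 1], normals[:, 2]
    b = np.stack([
        np.ones_like(nx),            # order 0
        nx, ny, nz,                  # order 1
        nx * ny, nx * nz, ny * nz,   # order 2 cross terms
        nx ** 2 - ny ** 2,
        3 * nz ** 2 - 1,
    ], axis=1)
    return albedo[:, None] * b

rng = np.random.default_rng(7)
normals = rng.standard_normal((100, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
albedo = rng.random(100)
B = sh_basis(albedo, normals)                # 9 basis images (per pixel)
light = rng.standard_normal(9)               # illumination coefficients
image = B @ light                            # rendered pixel intensities
```

Fitting `light` by least squares to an observed image is what lets the model relight a face without knowing the illumination in advance.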

20.
Face images are difficult to interpret because they are highly variable. Sources of variability include individual appearance, 3D pose, facial expression, and lighting. We describe a compact parametrized model of facial appearance which takes into account all these sources of variability. The model represents both shape and gray-level appearance, and is created by performing a statistical analysis over a training set of face images. A robust multiresolution search algorithm is used to fit the model to faces in new images. This allows the main facial features to be located, and a set of shape and gray-level appearance parameters to be recovered. A good approximation to a given face can be reconstructed using fewer than 100 of these parameters. This representation can be used for tasks such as image coding, person identification, 3D pose recovery, gender recognition, and expression recognition. Experimental results are presented for a database of 690 face images obtained under widely varying conditions of 3D pose, lighting, and facial expression. The system performs well on all the tasks listed above.
