Similar Documents
A total of 20 similar documents were retrieved.
1.
In this paper, we propose a novel Patch Geodesic Distance (PGD) to transform the texture map of an object through its shape data for robust 2.5D object recognition. Local geodesic paths within patches and global geodesic paths for patches are combined in a coarse-to-fine hierarchical computation of PGD for each surface point to tackle the missing data problem in 2.5D images. Shape-adjusted texture patches are encoded into local patterns for similarity measurement between two 2.5D images with different viewing angles and/or shape deformations. An extensive experimental investigation is conducted on 2.5D face images using the publicly available BU-3DFE and Bosphorus databases, covering face recognition under expression and pose changes. The performance of the proposed method is compared with that of three benchmark approaches. The experimental results demonstrate that the proposed method provides a very encouraging new solution for 2.5D object recognition.
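Geodesic distances of the kind combined in PGD can be approximated on any sampled 2.5D surface as shortest paths over a neighbourhood graph. The sketch below is only that generic approximation, not the authors' patch-based PGD; the k-nearest-neighbour graph and the function name are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def approx_geodesic_distances(points, source_idx, k=8):
    """Approximate geodesic distances from one surface point to all others
    by running Dijkstra on a k-nearest-neighbour graph of the 2.5D point set."""
    tree = cKDTree(points)
    dists, nbrs = tree.query(points, k=k + 1)         # first neighbour is the point itself
    rows = np.repeat(np.arange(len(points)), k)
    cols = nbrs[:, 1:].ravel()
    weights = dists[:, 1:].ravel()                    # Euclidean edge lengths
    graph = csr_matrix((weights, (rows, cols)), shape=(len(points),) * 2)
    return dijkstra(graph, directed=False, indices=source_idx)

# Example: distances from vertex 0 of a random 2.5D patch
pts = np.random.rand(500, 3)
d = approx_geodesic_distances(pts, source_idx=0)
```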

2.
This paper proposes a robust feature combining gray-level and edge-strength information, together with an online learning method, for adaptive multi-feature face tracking in video. The idea is to model the face and its expressions with a 3D parameterized mesh model and the head pose with a weak-perspective model, extract normalized shape-free gray-level and edge-strength textures and combine them into a robust feature, build a single-Gaussian adaptive texture model, and obtain the pose and expression parameters by iterative gradient-descent model matching. Experiments show that, under complex lighting and expressions, the method is more robust than using gray-level features alone.
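The single-Gaussian adaptive texture model mentioned in this abstract can be illustrated with a running per-element mean/variance estimate over the shape-normalized feature vector. A minimal sketch, with the update rate `alpha` chosen purely for illustration rather than taken from the paper:

```python
import numpy as np

class AdaptiveGaussianTexture:
    """Online single-Gaussian model of a shape-normalized texture vector."""
    def __init__(self, first_obs, alpha=0.05, var_floor=1e-4):
        self.alpha = alpha
        self.mean = first_obs.astype(float)
        self.var = np.full_like(self.mean, var_floor)
        self.var_floor = var_floor

    def update(self, obs):
        # exponential update of per-element mean and variance
        diff = obs - self.mean
        self.mean += self.alpha * diff
        self.var = (1 - self.alpha) * self.var + self.alpha * diff ** 2
        self.var = np.maximum(self.var, self.var_floor)

    def mahalanobis(self, obs):
        # distance used to judge how well a candidate texture fits the model
        return float(np.sum((obs - self.mean) ** 2 / self.var))
```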

3.
This paper proposes and implements an algorithm that reconstructs a 3D head model from two orthogonal images and a generic 3D head model by matching 2D image feature points to feature points predefined on the 3D model. First, the face and hair regions are segmented and the input images are color-corrected using color transfer. Facial feature points in the frontal image are located with an improved ASM (Active Shape Model), while profile feature points are located more robustly with an improved Local Maximum Curvature Tracing (LMCT) method. After matching the image feature points to the feature points predefined on the generic 3D head, radial basis functions are used to deform the generic head into a person-specific 3D head shape model. The reconstructed 3D head then serves as a bridge to automatically register the input images and perform seamless texture blending. Finally, the resulting texture is mapped onto the shape model to obtain a realistic person-specific 3D head model corresponding to the input images.
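The deformation of the generic head towards the person-specific shape relies on radial basis functions interpolating the landmark displacements. Below is a minimal, generic RBF warping sketch; the Gaussian kernel and its width `sigma` are assumptions for illustration, not the paper's exact choice.

```python
import numpy as np

def rbf_deform(vertices, src_landmarks, dst_landmarks, sigma=30.0):
    """Warp all mesh vertices with an RBF interpolant of the landmark displacements."""
    def kernel(a, b):
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    K = kernel(src_landmarks, src_landmarks)            # (m, m) kernel matrix
    displacements = dst_landmarks - src_landmarks       # (m, 3) landmark offsets
    weights = np.linalg.solve(K + 1e-8 * np.eye(len(K)), displacements)
    # evaluate the interpolant at every vertex and add the displacement
    return vertices + kernel(vertices, src_landmarks) @ weights
```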

4.
Matching 2.5D face scans to 3D models
The performance of face recognition systems that use two-dimensional images depends on factors such as lighting and subject's pose. We are developing a face recognition system that utilizes three-dimensional shape information to make the system more robust to arbitrary pose and lighting. For each subject, a 3D face model is constructed by integrating several 2.5D face scans which are captured from different views. 2.5D is a simplified 3D (x,y,z) surface representation that contains at most one depth value (z direction) for every point in the (x, y) plane. Two different modalities provided by the facial scan, namely, shape and texture, are utilized and integrated for face matching. The recognition engine consists of two components, surface matching and appearance-based matching. The surface matching component is based on a modified iterative closest point (ICP) algorithm. The candidate list from the gallery used for appearance matching is dynamically generated based on the output of the surface matching component, which reduces the complexity of the appearance-based matching stage. Three-dimensional models in the gallery are used to synthesize new appearance samples with pose and illumination variations and the synthesized face images are used in discriminant subspace analysis. The weighted sum rule is applied to combine the scores given by the two matching components. Experimental results are given for matching a database of 200 3D face models with 598 2.5D independent test scans acquired under different pose and some lighting and expression changes. These results show the feasibility of the proposed matching scheme.
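The surface-matching component builds on iterative closest point alignment. The following sketch shows one textbook rigid ICP iteration (nearest-neighbour matching plus a closed-form SVD/Kabsch update), not the modified ICP used in the paper:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst):
    """One ICP iteration: match each source point to its nearest destination
    point, then solve the best rigid transform in closed form (Kabsch)."""
    nn = cKDTree(dst)
    _, idx = nn.query(src)
    matched = dst[idx]

    src_c, dst_c = src.mean(axis=0), matched.mean(axis=0)
    H = (src - src_c).T @ (matched - dst_c)   # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return src @ R.T + t, R, t                # transformed scan plus the rigid motion
```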

5.
The paper proposes a novel, pose-invariant face recognition system based on a deformable, generic 3D face model, that is a composite of: (1) an edge model, (2) a color region model and (3) a wireframe model for jointly describing the shape and important features of the face. The first two submodels are used for image analysis and the third mainly for face synthesis. In order to match the model to face images in arbitrary poses, the 3D model can be projected onto different 2D viewplanes based on rotation, translation and scale parameters, thereby generating multiple face-image templates (in different sizes and orientations). Face shape variations among people are taken into account by the deformation parameters of the model. Given an unknown face, its pose is estimated by model matching and the system synthesizes face images of known subjects in the same pose. The face is then classified as the subject whose synthesized image is most similar. The synthesized images are generated using a 3D face representation scheme which encodes the 3D shape and texture characteristics of the faces. This face representation is automatically derived from training face images of the subject. Experimental results show that the method is capable of determining pose and recognizing faces accurately over a wide range of poses and with naturally varying lighting conditions. Recognition rates of 92.3% have been achieved by the method with 10 training face images per person.
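Generating face-image templates at different sizes and orientations amounts to projecting the 3D model under rotation, scale and translation. A minimal scaled-orthographic (weak-perspective) sketch, with the Euler-angle parameterisation chosen purely for illustration:

```python
import numpy as np

def euler_to_rotation(yaw, pitch, roll):
    """Rotation matrix from Z-Y-X Euler angles (radians)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def project_weak_perspective(vertices, yaw, pitch, roll, scale, t2d):
    """Project 3D model vertices onto a 2D viewplane (scaled orthography)."""
    rotated = vertices @ euler_to_rotation(yaw, pitch, roll).T
    return scale * rotated[:, :2] + t2d       # drop depth, then scale and translate
```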

6.
Head pose estimation under non-rigid face movement is particularly useful in applications relating to eye-gaze tracking in less constrained scenarios, where the user is allowed to move naturally during tracking. Existing vision-based head pose estimation methods often require accurate initialisation and tracking of specific facial landmarks, while methods that handle non-rigid face deformations typically necessitate a preliminary training phase prior to head pose estimation. In this paper, we propose a method to estimate the head pose in real-time from the trajectories of a set of feature points spread randomly over the face region, without requiring a training phase or model-fitting of specific facial features. Conversely, our method exploits the 3-dimensional shape of the surface of interest, recovered via shape and motion factorisation, in combination with Kalman and particle filtering to determine the contribution of each feature point to the estimation of head pose based on a variance measure. Quantitative and qualitative results reveal the capability of our method in handling non-rigid face movement without deterioration of the head pose estimation accuracy.
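The 3D surface shape recovered via shape-and-motion factorisation can be illustrated with the classic rank-3 affine factorisation of the stacked feature-point trajectories. The sketch below covers only that factorisation step and omits the metric upgrade and the Kalman/particle filtering described above:

```python
import numpy as np

def affine_structure_from_motion(tracks):
    """tracks: array of shape (2*F, P) stacking the x rows and y rows of P
    feature points over F frames. Returns an affine motion matrix (2F x 3)
    and a shape matrix (3 x P)."""
    centered = tracks - tracks.mean(axis=1, keepdims=True)   # remove per-frame translation
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    # keep the rank-3 subspace dictated by the affine camera model
    motion = U[:, :3] * np.sqrt(S[:3])
    shape = np.sqrt(S[:3])[:, None] * Vt[:3]
    return motion, shape
```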

7.
Head pose estimation is a key task for visual surveillance, HCI and face recognition applications. In this paper, a new approach is proposed for estimating 3D head pose from a monocular image. The approach assumes the full perspective projection camera model. Our approach employs general prior knowledge of face structure and the corresponding geometrical constraints provided by the location of a certain vanishing point to determine the pose of human faces. To achieve this, eye-lines, formed from the far and near eye corners, and the mouth-line of the mouth corners are assumed parallel in 3D space. The vanishing point of these parallel lines, found by the intersection of the eye-line and mouth-line in the image, can then be used to infer the 3D orientation and location of the human face. In order to deal with the variance of the facial model parameters, e.g. the ratio between the eye-line and the mouth-line, an EM framework is applied to update the parameters. We first compute the 3D pose using some initially learnt parameters (such as ratio and length) and then adapt the parameters statistically for individual persons and their facial expressions by minimizing the residual errors between the projection of the model feature points and the actual features on the image. In doing so, we assume every facial feature point can be associated with each of the feature points in the 3D model with some a posteriori probability. The expectation step of the EM algorithm provides an iterative framework for computing the a posteriori probabilities using Gaussian mixtures defined over the parameters. Robustness analyses of the algorithm on synthetic data and real images with known ground truth are included.
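The vanishing point exploited here is the image intersection of the eye-line and mouth-line, which is convenient to compute with homogeneous coordinates. A minimal sketch (the landmark argument names are illustrative):

```python
import numpy as np

def vanishing_point(eye_far, eye_near, mouth_left, mouth_right):
    """Intersect the eye-line and mouth-line (assumed parallel in 3D) to get
    their vanishing point in the image, using homogeneous coordinates."""
    def line_through(p, q):
        # the line through two image points is their homogeneous cross product
        return np.cross(np.append(p, 1.0), np.append(q, 1.0))

    eye_line = line_through(eye_far, eye_near)
    mouth_line = line_through(mouth_left, mouth_right)
    vp = np.cross(eye_line, mouth_line)       # intersection of the two lines
    if abs(vp[2]) < 1e-12:
        return None                           # lines parallel in the image: point at infinity
    return vp[:2] / vp[2]
```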

8.
Model-based face analysis is a general paradigm with applications that include face recognition, expression recognition, lip-reading, head pose estimation, and gaze estimation. A face model is first constructed from a collection of training data, either 2D images or 3D range scans. The face model is then fit to the input image(s), and the model parameters are used in the application at hand. Most existing face models can be classified as either 2D (e.g. Active Appearance Models) or 3D (e.g. Morphable Models). In this paper we compare 2D and 3D face models along three axes: (1) representational power, (2) construction, and (3) real-time fitting. For each axis in turn, we outline the differences that result from using a 2D or a 3D face model.

9.

Addressing dense 3D modeling in complex indoor environments, this paper proposes a simultaneous localization and 3D mapping method for a mobile robot equipped with an RGB-D camera. The method acquires environment information from the RGB-D camera mounted on the robot and builds a hybrid pose estimation scheme that combines a point-cloud/texture weighting model with local texture constraints, ensuring localization accuracy while reducing the failure rate. Under a keyframe selection mechanism, visual loop-closure detection is combined with the Tree-based netwORk Optimizer (TORO) algorithm to minimize loop-closure error and achieve globally consistent optimization of the 3D map. Experimental results in indoor environments verify the effectiveness and feasibility of the proposed algorithm.

10.
Recent face recognition algorithms can achieve high accuracy when the tested face samples are frontal. However, when the face pose changes greatly, the performance of existing methods drops drastically. Pose-robust face recognition is therefore highly desirable, especially when each face class has only one frontal training sample. In this study, we propose a 2D face-fitting-assisted 3D face reconstruction algorithm that aims at recognizing faces of different poses when each face class has only one frontal training sample. For each frontal training sample, a 3D face is reconstructed by optimizing the parameters of a 3D morphable model (3DMM). By rotating the reconstructed 3D face to different views, virtual face images at different poses are generated to enlarge the training set for face recognition. Unlike conventional 3D face reconstruction methods, the proposed algorithm uses automatic 2D face fitting to assist 3D face reconstruction. We automatically locate 88 sparse points on the frontal face with a 2D face-fitting algorithm, the Random Forest Embedded Active Shape Model, which embeds random forest learning into the framework of the Active Shape Model. The 2D face-fitting results are added to the 3D face reconstruction objective function as shape constraints, so the optimization objective takes not only image intensity but also the 2D fitting results into account. The shape and texture parameters of the 3DMM are thus estimated by fitting the 3DMM to the 2D frontal face sample, which is a non-linear optimization problem. We evaluate the proposed method on the publicly available CMU PIE database, which includes faces viewed from 11 different poses, and the results show that the proposed method is effective and the face recognition results under pose variation are promising.
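The fitting objective described above adds the 2D face-fitting result to the 3DMM energy as a shape constraint. The sketch below shows only such a landmark reprojection term for a linear shape model under weak perspective; the array names and the quadratic coefficient prior are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def landmark_energy(alpha, mean_shape, shape_basis, landmark_idx,
                    landmarks_2d, scale, R, t2d, lam=1.0):
    """Squared reprojection error of the fitted 3D landmarks against the
    2D face-fitting result, plus a quadratic prior on the shape coefficients."""
    shape = (mean_shape + shape_basis @ alpha).reshape(-1, 3)   # (N, 3) vertices
    pts3d = shape[landmark_idx]                                 # landmark vertices
    proj = scale * (pts3d @ R.T)[:, :2] + t2d                   # weak-perspective projection
    data_term = np.sum((proj - landmarks_2d) ** 2)
    prior_term = lam * np.sum(alpha ** 2)                       # keeps the face plausible
    return data_term + prior_term
```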

11.
Facial landmark localization automatically locates key facial feature points that are predefined according to facial physiology, such as eye corners, nose tip, mouth corners, and face contour, from input face data, and it plays a crucial role in face recognition and analysis systems. This paper surveys deep learning based automatic facial landmark localization: it explains what automatic landmark localization means, summarizes the public face datasets in common use, systematically reviews automatic localization methods for landmarks in 2D and 3D data, summarizes the state of research and applications of each method, and analyzes the current status, open problems, and development trends of deep learning based facial landmark localization. Different methods are compared on public 2D and 3D face datasets. The study shows that deep learning based localization of 2D facial landmarks has been investigated relatively thoroughly, whereas research on 3D facial landmark localization still faces challenges in model representation, processing methods, and sample size. Deep learning based 3D facial landmark localization is expected to become a major research direction.

12.
Face recognition based on fitting a 3D morphable model
This paper presents a method for face recognition across variations in pose, ranging from frontal to profile views, and across a wide range of illuminations, including cast shadows and specular reflections. To account for these variations, the algorithm simulates the process of image formation in 3D space, using computer graphics, and it estimates 3D shape and texture of faces from single images. The estimate is achieved by fitting a statistical, morphable model of 3D faces to images. The model is learned from a set of textured 3D scans of heads. We describe the construction of the morphable model, an algorithm to fit the model to images, and a framework for face identification. In this framework, faces are represented by model parameters for 3D shape and texture. We present results obtained with 4,488 images from the publicly available CMU-PIE database and 1,940 images from the FERET database.
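A morphable model of this kind represents shape and texture as linear combinations of components learned from the textured 3D scans, so a face instance is fully described by its coefficient vectors. A minimal sketch of instantiating a face from such a model (array names are placeholders, not the paper's notation):

```python
import numpy as np

def morphable_face(shape_mean, shape_pcs, shape_std,
                   tex_mean, tex_pcs, tex_std, alpha, beta):
    """Build one 3D face instance from shape coefficients alpha and texture
    coefficients beta (coefficients expressed in standard deviations)."""
    shape = shape_mean + shape_pcs @ (alpha * shape_std)   # (3N,) vertex coordinates
    texture = tex_mean + tex_pcs @ (beta * tex_std)        # (3N,) per-vertex RGB
    return shape.reshape(-1, 3), texture.reshape(-1, 3)

# A random face drawn from the model prior would use coefficients ~ N(0, 1), e.g.
# alpha = np.random.randn(num_shape_pcs); beta = np.random.randn(num_tex_pcs)
```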

13.
Multi-information fusion for facial landmark localization on multi-pose 3D faces
To address the sensitivity of facial landmark localization on 3D face models to pose changes, a multi-information-fusion landmark localization method for multi-pose 3D faces is proposed. First, feature points are detected on the 2D face texture image with the affine-invariant Affine-SIFT method and projected into 3D space via the texture-to-mesh mapping. The facial landmarks are then localized precisely by combining a local-neighborhood maximum-curvature-change rule with iterative constrained optimization. Experimental results on FRGC 2.0 and the self-built NPU3D database show that the method requires no prior estimation of pose or predefined format of the 3D data, has low algorithmic complexity, is robust to the pose of the face model, and achieves higher localization accuracy than existing landmark localization methods.

14.
Abstract: Adjusting a 3D morphable model with facial feature points is a common approach to 3D face reconstruction, but computing the model deformation tends to introduce errors and is time-consuming. This paper improves the fitting of a generic 3D morphable model with 2D facial feature points and proposes a multi-view, real-time 3D face reconstruction method for video streams. First, 2D feature points are detected and tracked with a CLNF algorithm that incorporates a three-layer convolutional network. The head pose is then estimated from the facial feature point positions, the expression coefficients of the model are updated, and the result is propagated to the PCA shape coefficients to deform the current 3D model. Finally, mesh texture is extracted with the ISOMAP algorithm and fused to form the person-specific face model. Experimental results show that the method achieves better real-time performance during face reconstruction, with improved accuracy.
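Estimating head pose from detected 2D feature points and the corresponding 3D model points is commonly posed as a perspective-n-point problem. A minimal OpenCV-based sketch of that step; the rough intrinsics and the choice of correspondences are illustrative assumptions, not the paper's CLNF pipeline:

```python
import numpy as np
import cv2

def estimate_head_pose(landmarks_2d, model_points_3d, frame_width, frame_height):
    """Solve the perspective-n-point problem for head rotation and translation."""
    focal = frame_width                        # rough focal-length guess in pixels
    camera_matrix = np.array([[focal, 0, frame_width / 2],
                              [0, focal, frame_height / 2],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros(4)                  # assume no lens distortion
    ok, rvec, tvec = cv2.solvePnP(model_points_3d.astype(np.float64),
                                  landmarks_2d.astype(np.float64),
                                  camera_matrix, dist_coeffs)
    R, _ = cv2.Rodrigues(rvec)                 # rotation vector -> 3x3 matrix
    return ok, R, tvec
```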

15.
We introduce a framework for unconstrained 3D human upper body pose estimation from multiple camera views in complex environment. Its main novelty lies in the integration of three components: single-frame pose recovery, temporal integration and model texture adaptation. Single-frame pose recovery consists of a hypothesis generation stage, in which candidate 3D poses are generated, based on probabilistic hierarchical shape matching in each camera view. In the subsequent hypothesis verification stage, the candidate 3D poses are re-projected into the other camera views and ranked according to a multi-view likelihood measure. Temporal integration consists of computing K-best trajectories combining a motion model and observations in a Viterbi-style maximum-likelihood approach. Poses that lie on the best trajectories are used to generate and adapt a texture model, which in turn enriches the shape likelihood measure used for pose recovery. The multiple trajectory hypotheses are used to generate pose predictions, augmenting the 3D pose candidates generated at the next time step.
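The temporal integration selects trajectories in a Viterbi-style maximum-likelihood fashion. The sketch below is the standard single-best Viterbi recursion over a discretised state space; the paper's K-best variant would keep the top-K scores per state instead of only the maximum:

```python
import numpy as np

def viterbi(log_trans, log_obs):
    """Most likely state sequence given a log transition matrix (S, S) and
    per-frame log observation likelihoods (T, S)."""
    T, S = log_obs.shape
    score = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    score[0] = log_obs[0]
    for t in range(1, T):
        cand = score[t - 1][:, None] + log_trans    # (prev state, next state)
        back[t] = cand.argmax(axis=0)               # best predecessor per state
        score[t] = cand.max(axis=0) + log_obs[t]
    path = [int(score[-1].argmax())]
    for t in range(T - 1, 0, -1):                   # backtrack the best path
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```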

16.
Model-based head motion estimation and facial image synthesis
This paper discusses a model-based method for head motion estimation and facial image synthesis. A deformable 3D facial model based on a geometric face model is first constructed; it can be adjusted to a person-specific face model according to the features of different face images. To match the person-specific model to a given face image, the face model is refined according to the deformable model, using a combination of automatic adjustment and human-computer interaction. After the model shape has been adjusted, facial images from three directions are used for texture mapping to generate facial images from different viewpoints. The best match between the synthesized facial images and the input facial image …

17.
Registering a 3D facial model onto a 2D image is important for constructing pixel-wise correspondences between different facial images. The registration is based on a 3 × 4 projection matrix, which is obtained from pose estimation. Conventional pose estimation approaches employ facial landmarks to determine the coefficients inside the projection matrix and are sensitive to missing or incorrect landmarks. In this paper, a landmark-free pose estimation method is presented. The method can be used to estimate the matrix when facial landmarks are not available. Experimental results show that the proposed method outperforms several landmark-free pose estimation methods and achieves competitive accuracy in terms of estimating pose parameters. The method is also demonstrated to be effective as part of a 3D-aided face recognition pipeline (UR2D), whose rank-1 identification rate is competitive with methods that use landmarks to estimate head pose.
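Once a 3 × 4 projection matrix is available, the pose parameters can be read off by decomposing it into intrinsics, rotation and translation. A standard RQ-based decomposition sketch (this is generic multi-view geometry, not the paper's landmark-free estimation itself):

```python
import numpy as np
from scipy.linalg import rq

def decompose_projection(P):
    """Split a 3x4 projection matrix P = K [R | t] into intrinsics K,
    rotation R and translation t."""
    M = P[:, :3]
    K, R = rq(M)                               # upper-triangular K times orthogonal R
    signs = np.diag(np.sign(np.diag(K)))       # force positive diagonal on K
    K, R = K @ signs, signs @ R
    t = np.linalg.solve(K, P[:, 3])            # P[:, 3] = K t
    return K / K[2, 2], R, t
```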

18.
A multi-point-model-based 3D face pose estimation method
The traditional Active Shape Model method is improved to extract facial feature points accurately. Exploiting the characteristics of facial shape, multiple facial feature points are then used as the face model, and the 3D face pose is estimated precisely by least-squares optimization. Experimental results show that the new method not only yields stable pose solutions but also achieves good pose estimation accuracy compared with similar methods.
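The least-squares pose solution from multiple facial feature points can be illustrated with the classic scaled-orthographic (POS-style) formulation: two linear systems give the scaled rotation rows, and a cross product completes the rotation. A generic sketch under that assumption, not necessarily the exact system optimised in the paper:

```python
import numpy as np

def pose_from_points(model_3d, image_2d):
    """Weak-perspective pose from several non-coplanar model points and their
    2D image positions, via two least-squares problems."""
    M = model_3d - model_3d.mean(axis=0)          # centred 3D model points (n, 3)
    x = image_2d[:, 0] - image_2d[:, 0].mean()
    y = image_2d[:, 1] - image_2d[:, 1].mean()
    i, *_ = np.linalg.lstsq(M, x, rcond=None)     # scaled first rotation row
    j, *_ = np.linalg.lstsq(M, y, rcond=None)     # scaled second rotation row
    scale = 0.5 * (np.linalg.norm(i) + np.linalg.norm(j))
    r1, r2 = i / np.linalg.norm(i), j / np.linalg.norm(j)
    r3 = np.cross(r1, r2)                         # third row completes the rotation
    return np.vstack([r1, r2, r3]), scale
```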

19.
薛峰, 丁晓青. 《计算机应用》, 2007, 27(3): 686-689
A conventional 3D morphable face model is learned from a large amount of 3D face data to build a parametric model of 3D facial shape and texture, and 3D reconstruction from a 2D face image is achieved by model optimization. In practice, however, a large number of training samples is hard to obtain, which limits the descriptive power of the morphable model and constrains its application. If the whole face is regarded as a combination of several components, the dimensionality of the description space is reduced for the same number of samples and the descriptive power of the model is improved. However, reconstructing a face image then requires resolving the 3D overlap and merging between components, and the number of model parameters grows with the number of components, placing higher demands on the optimization algorithm. To address these difficulties, a compromise between the global model and the component models is proposed: the global constraint is kept for shape while component-wise matching is performed for texture, striking an effective balance between algorithm performance and complexity.

20.
Pattern Recognition, 2014, 47(2): 525-534
In this study, we develop a central-profile-based 3D face pose estimation algorithm. The central profile is a unique curve on a 3D face surface that starts at the forehead center, goes down through the nose ridge, nose tip, and mouth center, and ends at the chin tip. The points on the central profile are co-planar and belong to a symmetry plane that separates the human face into two identical parts. The central profile is protrusive and has a certain length. Most importantly, the normal vectors of the central profile points are parallel to the symmetry plane. Based on these properties, a Hough transform is employed to determine the symmetry plane through a voting procedure. An objective function is introduced in the parameter space to quantify the vote importance of face profile points and map the central profile to the accumulator cell with the maximal value. Subsequently, a nose model matching algorithm is used to detect the nose tip on the central profile, and a pitch angle estimation algorithm is also proposed. Pose estimation experiments on a synthetic 3D face model and the FRGC v2.0 3D database demonstrate the effectiveness of the proposed algorithm: the central profile detection rate is 99.9%, and the nose tip detection rate reaches 98.16% with error no larger than 10 mm.
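The symmetry-plane search is a Hough vote over plane parameters, restricted to planes that contain the point normals (the property of the central profile noted above). The sketch below accumulates votes in a coarse (theta, phi, rho) grid; the grid resolutions, the 0.1 parallelism tolerance and the simple unweighted voting are illustrative assumptions rather than the paper's objective function:

```python
import numpy as np

def hough_symmetry_plane(points, normals, n_theta=90, n_phi=90, n_rho=100):
    """Vote for plane parameters (theta, phi, rho): plane normal
    n = (cos t sin p, sin t sin p, cos p), plane offset rho = n . x."""
    thetas = np.linspace(0, np.pi, n_theta)
    phis = np.linspace(0, np.pi, n_phi)
    rho_max = np.abs(points).sum(axis=1).max()
    acc = np.zeros((n_theta, n_phi, n_rho), dtype=np.int32)

    for p, n_vec in zip(points, normals):
        for it, t in enumerate(thetas):
            for ip, ph in enumerate(phis):
                cand = np.array([np.cos(t) * np.sin(ph),
                                 np.sin(t) * np.sin(ph),
                                 np.cos(ph)])
                if abs(cand @ n_vec) > 0.1:      # point normal must lie in the plane
                    continue
                rho = cand @ p
                ir = int((rho + rho_max) / (2 * rho_max) * (n_rho - 1))
                acc[it, ip, ir] += 1

    best = np.unravel_index(acc.argmax(), acc.shape)
    rho_best = best[2] / (n_rho - 1) * 2 * rho_max - rho_max
    return thetas[best[0]], phis[best[1]], rho_best
```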
