Similar Documents
20 similar documents found
1.
Face recognition based on 3D face modeling effectively overcomes a drawback of 2D face recognition systems, whose recognition rates are easily affected by illumination, pose, and expression. This paper adopts an effective algorithm that adaptively adjusts a generic 3D face model according to a face image, constructing a person-specific face model for use in face recognition. By comparing the feature points estimated from the face image with the projections of the generic face model onto the image plane, the generic 3D model is adjusted both globally and locally to fit the individual characteristics of the eyes, mouth, and nose. Finally, an example illustrates the application of the algorithm.

2.
This paper proposes a new 3D facial animation model that is based on a 3D face morphable model and is compatible with MPEG-4. Correspondence among the prototype 3D faces is established by uniform mesh resampling, and the 3D facial animation rules defined in MPEG-4 drive the 3D model to automatically generate realistic facial animation. Given a single face image, the 3D facial animation model automatically reconstructs a realistic 3D face and, driven by FAP parameters, automatically generates facial animation.

3.
This paper presents a virtual try-on system based on augmented reality for design personalization of facial accessory products. The system offers several novel functions that support real-time evaluation and modification of eyeglasses frames. The 3D glasses model is embedded within the video stream of the person who is wearing the glasses. Machine learning algorithms are developed for instantaneous tracking of facial features without the use of markers. The tracking result enables continuous positioning of the glasses model on the user's face as it moves during the try-on process. In addition to color and texture, the user can instantly modify the glasses shape through simple semantic parameters. These functions not only facilitate highly interactive product evaluation with human users, but also engage them in the design process. This work has thus implemented the concept of human-centric design personalization.

4.
Face images are difficult to interpret because they are highly variable. Sources of variability include individual appearance, 3D pose, facial expression, and lighting. We describe a compact parametrized model of facial appearance which takes into account all these sources of variability. The model represents both shape and gray-level appearance, and is created by performing a statistical analysis over a training set of face images. A robust multiresolution search algorithm is used to fit the model to faces in new images. This allows the main facial features to be located, and a set of shape and gray-level appearance parameters to be recovered. A good approximation to a given face can be reconstructed using fewer than 100 of these parameters. This representation can be used for tasks such as image coding, person identification, 3D pose recovery, gender recognition, and expression recognition. Experimental results are presented for a database of 690 face images obtained under widely varying conditions of 3D pose, lighting, and facial expression. The system performs well on all the tasks listed above.
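The shape-and-appearance parametrization described above can be sketched as a PCA over a training set. This is a minimal sketch: the landmark count, sample count, random data, and variance threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 200 faces, each a flattened vector of
# 68 (x, y) landmark coordinates. The paper combines shape with
# gray-level appearance; shape alone is shown here for brevity.
X = rng.normal(size=(200, 136))

# Statistical analysis: mean face plus principal components.
mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)

# Keep enough components to explain 98% of the training variance.
var = s**2 / np.sum(s**2)
k = int(np.searchsorted(np.cumsum(var), 0.98)) + 1

# A face is then approximated by only k parameters b:
face = X[0]
b = Vt[:k] @ (face - mean)   # project the face onto the subspace
approx = mean + b @ Vt[:k]   # reconstruct from the k parameters
```

Fitting to a new image then reduces to searching for the parameter vector `b` (plus pose) that best explains the image, which is what the multiresolution search does.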

5.
Constructing a 3D individualized head model from two orthogonal views
A new scheme for constructing a 3D individualized head model automatically from only a side view and the front view of the face is presented. The approach instantiates a generic 3D head model based on a set of the individual's facial features extracted by a local maximum-curvature tracking (LMCT) algorithm that we have developed. A distortion vector field that deforms the generic model to that of the individual is computed by correspondence matching and interpolation. The two input facial images are blended and texture-mapped onto the 3D head model. Arbitrary views of a person can be generated from two orthogonal images, and the method can be implemented efficiently on a low-cost, PC-based platform.

6.
3D human face model reconstruction is essential to the generation of facial animations, which are widely used in the field of virtual reality (VR). The main issues of image-based 3D facial model reconstruction by vision technologies are twofold: one is to select and match the corresponding facial features from two images with minimal interaction, and the other is to generate a realistic-looking human face model. In this paper, a new algorithm for realistic-looking face reconstruction based on stereo vision is presented. First, a pattern is printed and attached to a planar surface for camera calibration; corner generation and corner matching between the two images are performed by integrating a modified pyramid Lucas-Kanade (PLK) algorithm with a local adjustment algorithm, and the 3D coordinates of the corners are then obtained by 3D reconstruction. The individual face model is generated by deforming a general 3D model and interpolating the features. Finally, a realistic-looking human face model is obtained.

7.
Caricature is an interesting art form that expresses exaggerated views of different persons and things through drawing. The face caricature is popular and widely used in different applications. To create one, we have to properly extract the unique/specialized features of a person's face. A person's facial features depend not only on his/her natural appearance, but also on the associated expression style. Therefore, we would like to extract the neutral facial features and the personal expression style for different applications. In this paper, we represent the 3D neutral face models in the BU-3DFE database by sparse signal decomposition in the training phase. With this decomposition, the sparse training data can be used for robust linear subspace modeling of public faces. For an input 3D face model, we fit the model and decompose the 3D model geometry into a neutral face and the expression deformation separately. The neutral geometry can be further decomposed into a public face and individualized facial features. We exaggerate the facial features and the expressions by estimating the probability on the corresponding manifold. The public face, the exaggerated facial features and the exaggerated expression are combined to synthesize a 3D caricature for a 3D face model. The proposed algorithm is automatic and can effectively extract the individualized facial features from an input 3D face model to create a 3D face caricature.

8.
Head pose estimation is a key task for visual surveillance, HCI and face recognition applications. In this paper, a new approach is proposed for estimating 3D head pose from a monocular image. The approach assumes the full perspective projection camera model. Our approach employs general prior knowledge of face structure and the corresponding geometrical constraints provided by the location of a certain vanishing point to determine the pose of human faces. To achieve this, the eye-lines, formed from the far and near eye corners, and the mouth-line of the mouth corners are assumed parallel in 3D space. The vanishing point of these parallel lines, found by the intersection of the eye-line and mouth-line in the image, can then be used to infer the 3D orientation and location of the human face. In order to deal with the variance of the facial model parameters, e.g. the ratio between the eye-line and the mouth-line, an EM framework is applied to update the parameters. We first compute the 3D pose using some initially learnt parameters (such as ratio and length) and then adapt the parameters statistically for individual persons and their facial expressions by minimizing the residual errors between the projection of the model feature points and the actual features on the image. In doing so, we assume every facial feature point can be associated with each feature point in the 3D model with some a posteriori probability. The expectation step of the EM algorithm provides an iterative framework for computing the a posteriori probabilities using Gaussian mixtures defined over the parameters. A robustness analysis of the algorithm on synthetic data and some real images with known ground truth is included.
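The vanishing-point construction above, intersecting the eye-line and mouth-line in the image plane, can be sketched with homogeneous coordinates. The corner positions below are hypothetical, purely for illustration.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (cross product)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(eye_l, eye_r, mouth_l, mouth_r):
    """Intersect the eye-line and mouth-line in homogeneous
    coordinates. Since the two 3D lines are assumed parallel,
    their image intersection is the vanishing point."""
    v = np.cross(line_through(eye_l, eye_r),
                 line_through(mouth_l, mouth_r))
    # Caller must handle v[2] ~ 0 (near-frontal pose: lines
    # nearly parallel in the image, vanishing point at infinity).
    return v[:2] / v[2]

# Illustrative eye and mouth corner positions in pixels:
vp = vanishing_point((100, 120), (220, 130), (130, 250), (210, 255))
```

The recovered vanishing point constrains the 3D direction of the eye- and mouth-lines, from which the face orientation follows under the perspective model.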

9.
In this paper we propose a method that exploits 3D motion-based features between frames of 3D facial geometry sequences for dynamic facial expression recognition. An expressive sequence is modelled to contain an onset followed by an apex and an offset. Feature selection methods are applied in order to extract features for each of the onset and offset segments of the expression. These features are then used to train GentleBoost classifiers and build a Hidden Markov Model in order to model the full temporal dynamics of the expression. The proposed fully automatic system was employed on the BU-4DFE database for distinguishing between the six universal expressions: Happy, Sad, Angry, Disgust, Surprise and Fear. Comparisons with a similar 2D system based on the motion extracted from facial intensity images were also performed. The attained results suggest that the use of the 3D information does indeed improve the recognition accuracy when compared to the 2D data in a fully automatic manner.
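The temporal modelling above can be illustrated with a minimal discrete-HMM forward pass over onset/apex/offset states. All transition, emission, and initial probabilities here are invented for illustration, and the discrete observations stand in for the GentleBoost classifier outputs the paper actually feeds to its model.

```python
import numpy as np

states = ["onset", "apex", "offset"]
A = np.array([[0.7, 0.3, 0.0],   # left-to-right transitions:
              [0.0, 0.7, 0.3],   # onset -> apex -> offset
              [0.0, 0.0, 1.0]])
B = np.array([[0.8, 0.2],        # per-state emission probabilities
              [0.2, 0.8],        # over 2 observation symbols
              [0.6, 0.4]])
pi = np.array([1.0, 0.0, 0.0])   # sequences start at the onset

def sequence_likelihood(obs):
    """Forward algorithm: P(observation sequence | model)."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

p = sequence_likelihood([0, 0, 1, 1, 1, 0])
```

Classification then picks the expression whose HMM assigns the highest likelihood to the observed feature sequence.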

10.
Based on the anatomical structure of the skull and facial muscles, a region-based facial control model is proposed. The model represents a 3D face with two mesh layers, skeleton and skin; the outer skin mesh is partitioned into several functional regions, with a different sub-region deformation model applied to each. A 3D face reconstruction system was implemented on top of this model and has been successfully applied in a computer-aided eyeglasses design system.

11.
Implementation and Application of Realistic Virtual Faces
An interactive tool for face modeling and animation is implemented, with which a user can easily construct a 3D head model from frontal and profile photographs of a person, realize person-specific facial expressions and animation on the model, and synchronize lip motion with speech. Based on these techniques, a virtual-face animation component was implemented that can be used in Windows applications to provide users with a more novel and friendly interface.

12.
Synthesizing 3D Faces from Frontal and Profile Photographs
An interactive tool for face modeling and animation is implemented, with which a user can construct a 3D head model from frontal and profile photographs of a person and realize person-specific expressions and simple animation based on this model. The techniques applied in implementing the system are described in detail, including the geometric representation of the face, deformation from a generic face to a specific face, elastic meshes, muscle models, full-view texture mapping, and expression extraction.

13.
This paper discusses the development of a subset of an animation system. The intermediate goal is to produce a system that, given video input of a human face, generates an animated 3D model of the face in real time. The primary topics are model manipulation, design and fitting, audio/video manipulation and synchronization, 3D model display and performance, and image processing and feature extraction. Image processing was timed using successive RGB images, grabbing one frame at a time. Experiments show that locating facial features with a full search takes about 70 ms, while locating them with a local search takes about 5 ms. As for model display, it was successfully implemented using OpenInventor.

14.
3D Face Reconstruction Based on a 2.5D Sculpting System
3D face reconstruction is the core of the 2.5D sculpting system. A method is proposed that automatically extracts feature points from frontal and profile images to modify a generic model and thereby reconstruct a 3D face. Feature points are first extracted automatically by feature-point template matching; guided by these points, the generic model is then modified through a global transformation and radial basis interpolation to obtain the specific face. A face model library is built to obtain the feature-point templates, and a block-wise storage scheme is proposed so that different facial regions can be reconstructed more precisely. Experiments show that the method is efficient, requires little interaction, and achieves satisfactory reconstruction results.
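The radial basis interpolation step above can be sketched as follows. This is a minimal sketch: the kernel choice (r³) and the feature-point coordinates are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

# Generic-model feature points and the extracted targets they must
# move to (coordinates are illustrative, in 2D for brevity).
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = np.array([[0.1, 0.0], [1.0, 0.1], [0.0, 0.9], [1.1, 1.0]])

def rbf_warp(points, src, dst, eps=1e-9):
    """Radial basis interpolation with an r^3 kernel: solve for
    weights so that every src feature point maps exactly onto its
    dst target, then apply the same smooth warp to any point."""
    def phi(r):
        return r**3
    K = phi(np.linalg.norm(src[:, None] - src[None, :], axis=-1) + eps)
    W = np.linalg.solve(K, dst - src)          # displacement weights
    Kp = phi(np.linalg.norm(points[:, None] - src[None, :], axis=-1) + eps)
    return points + Kp @ W

warped = rbf_warp(src, src, dst)   # feature points land on targets
```

In the actual system the same warp, solved from the extracted facial feature points, would be applied to every vertex of the generic mesh.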

15.
A Face Recognition Algorithm Based on Singular Value Features and Statistical Models
Face recognition is a frontier topic in pattern recognition. Most researchers currently use one- and two-dimensional geometric features of the face to perform the recognition task. Both the extraction of geometric facial features and the effectiveness of those features face many problems, and face recognition research still remains at a relatively low level. The authors prove that the singular value feature vector of an image matrix possesses algebraic and geometric invariance as well as stability, and propose using it as an algebraic feature for recognizing faces. The face recognition algorithm in this paper builds, from the singular value feature vectors, a normal Bayes classification model on the optimal Sammon discriminant plane.
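The singular-value feature described above is straightforward to compute. The image size is hypothetical, and the check below illustrates only one of the claimed invariances (transposition); the paper's full argument covers orthogonal transforms as well.

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((112, 92))   # a hypothetical gray-level face image

# Algebraic feature: the singular values of the image matrix,
# sorted in decreasing order by np.linalg.svd.
sv = np.linalg.svd(img, compute_uv=False)

# One invariance the paper relies on: transposing the image
# leaves the singular value feature vector unchanged.
sv_t = np.linalg.svd(img.T, compute_uv=False)
assert np.allclose(sv, sv_t)
```

These vectors would then be projected onto the discriminant plane and classified with the normal Bayes model.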

16.
《Real》1996,2(2):67-79
Many researchers have studied techniques related to the analysis and synthesis of human heads under motion with face deformations. These techniques can be used for defining low-rate image compression algorithms (model-based image coding), cinema technologies, video-phones, as well as for applications of virtual reality, etc. Such techniques need real-time performance and a strong integration between the mechanisms of motion estimation and those of rendering and animation of the 3D synthetic head/face. In this paper, a complete and integrated system for tracking and synthesizing facial motions in real time with low-cost architectures is presented. Facial deformation curves represented as spatiotemporal B-splines are used for tracking in order to model the main facial features. In addition, the proposed system is capable of adapting a generic 3D wire-frame model of a head/face to the face that must be tracked; therefore, the simulations of the face deformations are produced by using a realistic patterned face.

17.
This work investigates a new and challenging problem: how to accurately recognize facial expressions captured by high-frame-rate 3D sensing as early as possible, whereas most works focus on improving the recognition rate of 2D facial expression recognition. The recognition of subtle facial expressions in their early stage is unfortunately very sensitive to noise, which cannot be ignored due to their low intensity. To overcome this problem, two novel feature enhancement methods, namely an adaptive wavelet spectral subtraction method and SVM-based linear discriminant analysis, are proposed to refine the subtle features of facial expressions, with or without an estimated noise model. Experiments on a custom-made dataset built using a high-speed 3D motion capture system corroborated that the two proposed methods outperform other feature refinement methods by enhancing the discriminability of subtle facial expression features, and consequently make correct recognitions earlier.

18.
We propose a prototype of a facial surgery simulation system for surgical planning and the prediction of facial deformation. We use a physics-based human head model. Our head model has a 3D hierarchical structure that consists of soft tissue and the skull, constructed from exact 3D CT patient data. Anatomic points measured on X-ray images from both frontal and side views are used to fit the model to the patient's head. The purposes of this research are to analyze the relationship between changes of mandibular position and facial morphology after orthognathic surgery, and to simulate the exact postoperative 3D facial shape. In the experiment, we used our model to predict the facial shape after surgery for patients with mandibular prognathism. Comparing the simulation results with the actual facial images after the surgery shows that the proposed method is practical.

19.
In this paper, we present a 3D face photography system based on a facial expression training dataset, composed of both facial range images (3D geometry) and facial texture (2D photography). The proposed system allows one to obtain a 3D geometry representation of a given face provided as a 2D photograph, which undergoes a series of transformations through the estimated texture and geometry spaces. In the training phase of the system, the facial landmarks are obtained by an active shape model (ASM) extracted from the 2D gray-level photograph. Principal components analysis (PCA) is then used to represent the face dataset, thus defining an orthonormal basis of texture and another of geometry. In the reconstruction phase, an input face image is given, to which the ASM is matched. The extracted facial landmarks and the face image are fed to the PCA basis transform, and a 3D version of the 2D input image is built. Experimental tests using a new dataset of 70 facial expressions belonging to ten subjects as the training set show rapidly reconstructed 3D faces that maintain spatial coherence consistent with human perception, thus corroborating the efficiency and applicability of the proposed system.

20.
HMM-Based Single-Sample Face Recognition under Varying Illumination and Pose
An HMM-based algorithm is proposed for single-sample face recognition under varying illumination and pose. The algorithm first uses a manually registered training set to automatically align a single frontal input face image with the Candide3 model, and reconstructs a person-specific 3D face model from this alignment. Rotating the reconstructed model through various angles yields synthetic faces under different poses; adjusting the illumination coefficients of the synthetic faces with spherical harmonic basis images then yields synthetic faces under different illumination. The synthetic faces with varying illumination and pose, together with the original sample image, are used as training data to build an individual face hidden Markov model for each user. The proposed algorithm was evaluated on existing face databases and compared with recognition methods based on illumination compensation and pose correction. The results show that the algorithm effectively avoids the low recognition rates those methods suffer when illumination or pose correction is imperfect, and adapts better to face recognition under varying illumination and pose.
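The spherical-harmonic relighting step above can be sketched as a linear combination of basis images. The basis here is random stand-in data, since generating true SH basis images requires the recovered face normals and albedo; the image size and coefficient values are also assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
h, w = 64, 64

# Stand-in for the 9 spherical-harmonic basis images that span the
# low-dimensional lighting space of a Lambertian face.
basis = rng.random((9, h * w))

def relight(coeffs):
    """Synthesize the face under new illumination as a linear
    combination of the SH basis images, clamped to non-negative
    intensities."""
    img = coeffs @ basis
    return np.clip(img, 0.0, None).reshape(h, w)

# Varying the 9 coefficients produces training images under
# different (hypothetical) lighting conditions.
face_a = relight(np.array([1.0, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]))
```

Each relit (and re-posed) rendering is then added to the per-user training set from which the individual HMM is built.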


Copyright©北京勤云科技发展有限公司  京ICP备09084417号