Similar Documents
20 similar documents found.
1.
With a better understanding of face anatomy and technical advances in computer graphics, 3D face synthesis has become one of the most active research fields for many human-machine applications, ranging from immersive telecommunication to the video games industry. In this paper we propose a method that automatically extracts features such as the eyes, mouth, eyebrows and nose from a given frontal face image. A generic 3D face model is then superimposed onto the face in accordance with the extracted facial features, fitting the input image by transforming the vertex topology of the generic model. The person-specific 3D face is finally synthesized by texturing the individualized model. Once the model is ready, six basic facial expressions are generated with the help of MPEG-4 facial animation parameters. To generate transitions between these expressions we use 3D shape morphing between the corresponding face models and blend the corresponding textures. The novelty of our method is the automatic generation of a 3D model and the synthesis of faces with different expressions from a single frontal neutral face image. The method is fully automatic, robust and fast, and can generate various views of the face by rotating the 3D model. It can be used in applications for which depth accuracy is not critical, such as games, avatars and face recognition. We have tested and evaluated our system on the standard BU-3DFE database.
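For illustration only, a minimal NumPy sketch of the kind of shape-and-texture morphing the abstract describes for expression transitions (linear blending between two expression states of a mesh that shares vertex topology); the array names and the plain linear blend are assumptions, not the paper's implementation:

```python
import numpy as np

def morph_expression(verts_a, verts_b, tex_a, tex_b, t):
    """Blend two expression states of the same individualized face model.

    verts_a, verts_b : (N, 3) vertex positions under expressions A and B
                       (identical vertex topology).
    tex_a, tex_b     : (H, W, 3) float textures in [0, 1].
    t                : blend weight in [0, 1]; 0 -> A, 1 -> B.
    """
    verts = (1.0 - t) * verts_a + t * verts_b      # 3D shape morphing
    tex = (1.0 - t) * tex_a + t * tex_b            # texture blending
    return verts, tex

# Example: an in-between frame halfway from neutral to smile.
# (Random arrays stand in for real mesh and texture data.)
neutral_v, smile_v = np.random.rand(500, 3), np.random.rand(500, 3)
neutral_t, smile_t = np.random.rand(64, 64, 3), np.random.rand(64, 64, 3)
frame_v, frame_t = morph_expression(neutral_v, smile_v, neutral_t, smile_t, 0.5)
```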

2.
陈娜 《激光与红外》2022,52(6):923-930
Reconstructing a 3D face model from a single face image is a highly challenging research direction in both computer graphics and visible-light imaging, and it is of great significance for practical applications such as face recognition, face imaging, and facial animation. To address the high complexity, heavy computation, local optima, and poor initialization of existing algorithms, this paper proposes an automatic single-image-to-3D-face reconstruction algorithm based on a deep convolutional neural network. The algorithm first extracts dense information from the 2D face image with a 3D transformation model, then builds a deep convolutional network architecture and designs an overall loss function to directly learn the mapping from 2D image pixels to 3D coordinates, thereby constructing the 3D face model automatically. Comparative and simulation experiments show that the algorithm achieves a lower normalized mean error in 3D face reconstruction and requires only a single 2D face image to reconstruct a 3D face model automatically. The generated models are robust and accurate, fully preserve expression details, reconstruct faces in different poses well, and can be rendered freely from any viewpoint in 3D space, meeting the needs of a wider range of practical applications.
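As a rough illustration of learning a direct pixel-to-3D-coordinate mapping with a convolutional network, a PyTorch sketch (not the paper's architecture or loss; the layer sizes and the plain MSE objective are arbitrary choices):

```python
import torch
import torch.nn as nn

class Pixel2Coord(nn.Module):
    """Toy fully convolutional net: RGB face image -> per-pixel 3D coordinates."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 1),            # 3 output channels: x, y, z
        )

    def forward(self, img):
        return self.net(img)

model = Pixel2Coord()
img = torch.rand(1, 3, 128, 128)            # a 2D face image (dummy data)
gt_xyz = torch.rand(1, 3, 128, 128)         # dense ground-truth 3D coordinates
pred_xyz = model(img)
loss = nn.functional.mse_loss(pred_xyz, gt_xyz)   # stand-in for the overall loss
loss.backward()
```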

3.
贾丽华  宋加涛  谢刚 《电视技术》2012,36(11):107-110,117
The nose is a prominent facial organ whose features are largely unaffected by changes in facial expression. Nose detection searches an image or image sequence for the position of the nose and its contour features, and its study is important for face detection and localization, face recognition, head pose estimation, and 3D face reconstruction. In recent years researchers have done extensive work in this area and proposed many effective algorithms. This paper surveys the relevant literature, divides existing nose detection methods into 2D-image-based and 3D-information-based approaches, and analyzes the strengths and weaknesses of the two categories.

4.
The increasing availability of 3D facial data offers the potential to overcome the difficulties inherent with 2D face recognition, including the sensitivity to illumination conditions and head pose variations. In spite of their rapid development, many 3D face recognition algorithms in the literature still suffer from the intrinsic complexity in representing and processing 3D facial data. In this paper, we propose the intrinsic 3D facial sparse representation (I3DFSR) algorithm for multi-pose 3D face recognition. In this algorithm, each 3D facial surface is first mapped homeomorphically onto a 2D lattice, where the value at each site is the depth of the corresponding vertex on the 3D surface. Each 2D lattice is then interpolated and converted into a 2D facial attribute image. Next, the sparse representation is applied to those attribute images. Finally, the identity of each query face can be obtained by using the corresponding sparse coefficients. The innovation of our approach lies in the strategy of converting irregular 3D facial surfaces into regular 2D attribute images such that the 3D face recognition problem can be solved by using the sparse representation of those attribute images. We compare the proposed algorithm to three widely used 3D face recognition algorithms in the GavabDB database, to six state-of-the-art algorithms in the FRGC2.0 database, and to three baseline algorithms in the NPU3D database. Our results show that the proposed I3DFSR algorithm can substantially improve the accuracy and efficiency of multi-pose 3D face recognition.
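To illustrate the sparse-representation classification step on vectorized 2D attribute (depth) images, a small scikit-learn sketch (an l1-regularized regression stands in for the paper's sparse coding; the dictionary shapes and regularization strength are arbitrary):

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_identify(gallery, labels, probe, alpha=0.01):
    """gallery: (d, n) columns are vectorized attribute images of known faces.
    labels:  (n,) identity label of each column.
    probe:   (d,) vectorized attribute image of the query face."""
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coder.fit(gallery, probe)                    # sparse coefficients over the gallery
    coeffs = coder.coef_
    best_id, best_res = None, np.inf
    for ident in np.unique(labels):
        mask = (labels == ident)
        recon = gallery[:, mask] @ coeffs[mask]  # reconstruct from this identity only
        res = np.linalg.norm(probe - recon)
        if res < best_res:
            best_id, best_res = ident, res
    return best_id                               # identity with the smallest residual

gallery = np.random.rand(1024, 60)               # e.g. 60 gallery attribute images
labels = np.repeat(np.arange(10), 6)             # 10 identities x 6 samples each
probe = gallery[:, 7] + 0.01 * np.random.rand(1024)
print(src_identify(gallery, labels, probe))
```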

5.
吴晓军  鞠光亮 《电子学报》2016,44(9):2141-2147
A markerless facial expression capture method is proposed. First, a uniform face mesh covering 85% of the facial features is generated from ASM (Active Shape Model) facial landmarks. Second, an expression capture method is built on this mesh: optical flow tracks the displacement of the feature points, a particle filter stabilizes the tracking results, and the feature-point displacements drive the overall mesh as the initial value for mesh tracking, with a mesh deformation algorithm driving the mesh itself. Finally, the captured expression data drive different face models, using different driving methods according to the dimensionality of each model to reproduce the expression animation. Experiments show that the algorithm captures facial expressions well, and mapping the captured expressions to both 2D cartoon faces and 3D virtual face models yields good animation results.
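A minimal OpenCV sketch of the optical-flow part of such a pipeline (tracking a few facial feature points between consecutive grayscale frames); the particle-filter smoothing and mesh deformation steps are omitted, and the frame and landmark data here are placeholders:

```python
import numpy as np
import cv2

def track_points(prev_gray, next_gray, prev_pts):
    """Track facial feature points with pyramidal Lucas-Kanade optical flow.

    prev_pts: (N, 1, 2) float32 array of landmark positions in prev_gray.
    Returns the tracked positions and a mask of successfully tracked points.
    """
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    return next_pts, ok

# Dummy frames and landmarks (replace with real video frames and ASM points).
prev_gray = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
next_gray = np.roll(prev_gray, 2, axis=1)                 # fake small motion
landmarks = np.array([[[100, 120]], [[160, 118]], [[130, 170]]], dtype=np.float32)
tracked, ok = track_points(prev_gray, next_gray, landmarks)
displacement = tracked[ok] - landmarks[ok]                # would drive the mesh update
```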

6.
於俊  汪增福 《电子学报》2013,41(1):185-192
For encoding and decoding face video against dynamically changing backgrounds, this paper proposes a hybrid 2D-3D codec system comprising: (1) 3D face motion tracking based on multiple observation cues, an online appearance model, and particle filtering; (2) 3D facial animation synthesis combining a parametric model with a muscle model; (3) hair synthesis based on hair detection and a 3D hair model; and (4) seamless stitching of the 3D-coded foreground with the 2D-coded background. At very low bit rates, objective experiments show that the system offers a good overall advantage in coding efficiency and decoding quality, and subjective experiments show that the decoded faces remain highly recognizable.

7.
A 3D scanner can accurately capture the geometry and texture of a face, but the raw scan is only a single continuous surface that does not match the actual structure of a face and cannot be used for facial animation. To address this, a face modeling method based on 3D scan data is proposed: a generic face model with complete structure is first roughly fitted to the scan data, and detail reconstruction then recovers the surface detail and skin texture of the specific face. Experiments show that the resulting 3D face models are realistic and structurally complete, and can produce continuous, natural expression animation.

8.
Real-time and reliable head pose tracking is the basis of human–computer interaction and face analysis applications. To address the accuracy and real-time limitations of current tracking methods, this paper proposes a new head pose tracking method based on stereo visual SLAM. A sparse head map is constructed from ORB feature extraction and stereo matching, and the 3D-2D correspondences between map points and 2D feature points are then obtained by projection matching. Finally, the camera pose solved by bundle adjustment is converted into the head pose, yielding continuous head pose tracking. Experimental results show that the method obtains highly precise head poses, with mean errors of all three Euler angles below 1°. The proposed method can therefore track and estimate precise head poses in real time against a smooth background.
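For intuition, a compact OpenCV sketch of the 3D-2D pose-solving step such a pipeline relies on; a single solvePnP call stands in for the bundle-adjustment refinement, and the map points, image points, and camera intrinsics below are made up:

```python
import numpy as np
import cv2

# Hypothetical 3D map points (as produced by ORB + stereo triangulation)
# and their 2D projections in the current frame (consistent with K, identity pose).
map_pts = np.array([[0, 0, 50], [10, 0, 55], [0, 10, 48],
                    [10, 10, 60], [5, 5, 52], [-5, 8, 45]], dtype=np.float64)
img_pts = np.array([[320.0, 240.0], [429.1, 240.0], [320.0, 365.0],
                    [420.0, 340.0], [377.7, 297.7], [253.3, 346.7]])
K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])   # assumed intrinsics

ok, rvec, tvec = cv2.solvePnP(map_pts, img_pts, K, None,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)                     # camera rotation matrix

# Read out Euler angles (one common yaw/pitch/roll convention) as the head pose.
yaw = np.degrees(np.arctan2(R[0, 2], R[2, 2]))
pitch = np.degrees(np.arcsin(-R[1, 2]))
roll = np.degrees(np.arctan2(R[1, 0], R[1, 1]))
print(ok, yaw, pitch, roll)
```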

9.
李晓峰  赵海  葛新  程显永 《电子学报》2010,38(5):1167-1171
Because of environmental uncertainty and the complexity of the human face, tracking facial expressions and rendering them graphically is a difficult problem. This paper proposes a relatively simple solution that differs from traditional approaches such as pattern recognition and sample-based learning: under video capture, frame images are analyzed; after comparing several edge detection methods, a facial expression modeling approach based on edge feature extraction is adopted to extract and model the facial features used for expression rendering; and curve fitting and model control are combined to generate cartoon face drawings and simulate 2D expression animation. The system generates cartoon drawings from the input data and faithfully reproduces the expression changes.
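As a pointer to what the edge-feature extraction stage looks like in practice, a short OpenCV sketch comparing two edge detectors on a frame (the thresholds and the choice of Canny and Sobel are illustrative, not the paper's selection):

```python
import numpy as np
import cv2

frame = np.random.randint(0, 256, (240, 320), dtype=np.uint8)   # stand-in video frame

# Canny: hysteresis-thresholded edges, a common baseline.
canny_edges = cv2.Canny(frame, 50, 150)

# Sobel: gradient-magnitude edges, thresholded manually for comparison.
gx = cv2.Sobel(frame, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(frame, cv2.CV_32F, 0, 1, ksize=3)
sobel_edges = (np.hypot(gx, gy) > 100).astype(np.uint8) * 255

# The chosen edge map would then feed curve fitting for the facial features.
```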

10.
This paper presents a hierarchical animation method for transferring facial expressions extracted from a performance video to different facial sketches. Without any expression example from the target faces, our approach transfers expressions to facial sketches by motion retargeting. In practice, however, image noise in each frame reduces the accuracy of feature extraction from the source faces, and the shape difference between source and target faces affects how well the animation represents the expressions. To address these difficulties, we propose a robust neighbor-expression transfer (NET) model that captures the spatial relations among sparse facial features. By learning expression behaviors from neighboring face examples, the NET model can reconstruct facial expressions from noisy signals. Based on the NET model, we present a hierarchical method to animate facial sketches: the motion vectors of the source face are adjusted from coarse to fine on the target face, and the animation is generated to replicate the source expressions. Experimental results demonstrate that the proposed method transfers expressions effectively and robustly even from noisy animation signals.

11.
3-D model-based vehicle tracking.
This paper aims at tracking vehicles from monocular intensity image sequences and presents an efficient and robust approach to three-dimensional (3-D) model-based vehicle tracking. Under the weak perspective assumption and the ground-plane constraint, the movements of model projection in the two-dimensional image plane can be decomposed into two motions: translation and rotation. They are the results of the corresponding movements of 3-D translation on the ground plane (GP) and rotation around the normal of the GP, which can be determined separately. A new metric based on point-to-line segment distance is proposed to evaluate the similarity between an image region and an instantiation of a 3-D vehicle model under a given pose. Based on this, we provide an efficient pose refinement method to refine the vehicle's pose parameters. An improved EKF is also proposed to track and to predict vehicle motion with a precise kinematics model. Experimental results with both indoor and outdoor data show that the algorithm obtains desirable performance even under severe occlusion and clutter.
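The point-to-line-segment distance that the evaluation metric builds on can be written in a few lines of NumPy (a generic implementation, not the paper's exact similarity measure):

```python
import numpy as np

def point_to_segment_dist(p, a, b):
    """Distance from 2D point p to the line segment with endpoints a and b."""
    p, a, b = map(np.asarray, (p, a, b))
    ab = b - a
    denom = np.dot(ab, ab)
    if denom == 0.0:                    # degenerate segment: a == b
        return np.linalg.norm(p - a)
    t = np.clip(np.dot(p - a, ab) / denom, 0.0, 1.0)   # projection clamped to [0, 1]
    return np.linalg.norm(p - (a + t * ab))

print(point_to_segment_dist([3, 4], [0, 0], [6, 0]))   # -> 4.0
```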

12.
Significant appearance changes of objects under different orientations could cause loss of tracking, "drifting." In this paper, we present a collaborative tracking framework to robustly track faces under large pose and expression changes and to learn their appearance models online. The collaborative tracking framework probabilistically combines measurements from an offline-trained generic face model with measurements from online-learned specific face appearance models in a dynamic Bayesian network. In this framework, generic face models provide the knowledge of the whole face class, while specific face models provide information on individual faces being tracked. Their combination, therefore, provides robust measurements for multiview face tracking. We introduce a mixture of probabilistic principal component analysis (MPPCA) model to represent the appearance of a specific face under multiple views, and we also present an online EM algorithm to incrementally update the MPPCA model using tracking results. Experimental results demonstrate that the collaborative tracking and online learning methods can handle large pose changes and are robust to distractions from the background.

13.
Face recognition is one of the most rapidly developing areas of image processing and computer vision. In this work, a new method for face recognition and identification using 3D facial surfaces is proposed. The method is invariant to facial expression and pose variations in the scene and uses 3D shape data without color or texture information. It is based on conformal mapping of the original facial surfaces onto a Riemannian manifold, followed by comparison of conformal and isometric invariants computed in this manifold. Results are presented on well-known 3D face databases that contain a significant amount of expression and pose variation.

14.
倪奎  董兰芳 《电子技术》2009,36(12):64-67
Facial animation is widely used in the games industry, teleconferencing, agents and avatars, and many other fields, and has attracted much research in recent years; animating organs such as the mouth and eyes has remained a major difficulty. This paper proposes a method that blends sample images of the mouth and eyes into a face image and generates facial animation from a single neutral face photograph. The method builds splines from feature points, interpolates the splines in polar coordinates to establish the spatial mapping, and then resamples the image by backward mapping and interpolation to obtain the blended image. Experiments show that the blended images look natural, that the method can animate the mouth, eyeballs, and other organs, and that it meets the real-time requirements of facial animation generation.
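A small SciPy sketch of the backward-mapping resampling step described above: each output pixel looks up its source location through an inverse mapping and is filled by interpolation (the constant radial-scale mapping here is only a stand-in for the paper's polar spline interpolation):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def backward_warp(image, center, scale=0.9):
    """Resample `image` by backward mapping: for every output pixel, sample the
    input at a radially scaled position around `center` (toy inverse mapping)."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = center
    src_y = cy + (ys - cy) / scale        # inverse mapping: where to read from
    src_x = cx + (xs - cx) / scale
    return map_coordinates(image, [src_y, src_x], order=1, mode='nearest')

face = np.random.rand(120, 100)           # placeholder grayscale face image
warped = backward_warp(face, center=(60, 50), scale=0.95)
```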

15.
A simultaneous facial motion tracking and expression recognition algorithm
於俊  汪增福  李睿 《电子学报》2015,43(2):371-376
For facial expression recognition in single-camera video with dynamically changing backgrounds, a simultaneous face motion tracking and expression recognition algorithm is proposed and a real-time system is built on it. The system works as follows: 3D face motion is tracked in a particle filter framework that combines an online appearance model with a cylindrical geometric model; static expression information is then extracted based on physiological knowledge; dynamic expression information is extracted via manifold learning; and finally, during tracking, the static and dynamic information are combined to recognize the expression. Experiments show that the system offers good overall performance under large pose variation and rich expressions.
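A bare-bones particle filter skeleton of the kind such trackers build on (a generic predict-weight-resample loop; the state layout, noise levels, and likelihood below are placeholders rather than the paper's online appearance and cylindrical models):

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, weights, observe_likelihood, motion_std=0.05):
    """One predict-update-resample cycle over head-pose particles.

    particles: (N, D) pose hypotheses (e.g. yaw, pitch, roll, x, y, z).
    observe_likelihood: function mapping a particle to p(observation | state).
    """
    # Predict: diffuse particles with a simple random-walk motion model.
    particles = particles + rng.normal(scale=motion_std, size=particles.shape)
    # Update: reweight particles by the observation likelihood.
    weights = weights * np.array([observe_likelihood(p) for p in particles])
    weights = weights / weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Toy example: the "true" pose is the origin; the likelihood favors nearby particles.
likelihood = lambda p: np.exp(-np.dot(p, p) / 0.1)
parts = rng.normal(scale=0.5, size=(200, 6))
wts = np.full(200, 1.0 / 200)
for _ in range(10):
    parts, wts = particle_filter_step(parts, wts, likelihood)
estimate = parts.mean(axis=0)          # pose estimate = mean of the particles
```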

16.
17.
Reported 3D face recognition techniques assume active 3D measurement for facial capture. However, active methods employ structured illumination (structured-light projection, phase shift, gray-code demodulation, etc.) or laser scanning, which is undesirable in many applications. A major problem with passive stereo is its lower 3D face resolution, and thus no passive method for 3D face recognition had been reported. In this paper, a real-time passive stereo face recognition system is presented, covering face detection, tracking, pose estimation and face recognition. We use the SRI Stereo engine, which outputs sub-pixel disparity automatically. We investigate combining 3D and 2D information for face recognition: a straightforward two-stage principal component analysis plus linear discriminant analysis is carried out on appearance and depth face images respectively, and a probe face is identified using the weighted sum of the appearance and depth linear discriminant distances. We examine the complete range of linear combinations to reveal the interplay between the two modalities, and verify that the combination yields a higher recognition rate than either appearance or depth alone. We then discuss the implementation of the algorithm on a stereo vision system. A hybrid face and facial-feature detection/tracking approach is proposed that collects near-frontal views for face recognition; it initializes automatically without user intervention and re-initializes automatically if tracking of the 3D face pose is lost. The experiments have two parts: first, the performance of the proposed algorithm is verified on the XM2VTS database; second, the algorithm is demonstrated on a real-time stereo vision system, which is able to detect, track and recognize a person walking toward a stereo camera.
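To make the fusion rule concrete, a scikit-learn sketch of scoring a probe by a weighted sum of appearance and depth distances computed in PCA+LDA subspaces (synthetic data; the weight and subspace sizes are arbitrary, and distance to each class mean stands in for the paper's linear discriminant distance):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

rng = np.random.default_rng(0)
y = np.repeat(np.arange(5), 8)                        # 5 subjects, 8 samples each
appearance = rng.normal(size=(40, 200)) + y[:, None]  # toy intensity features
depth = rng.normal(size=(40, 200)) + 0.5 * y[:, None] # toy depth features

def fit_subspace(X, y):
    model = make_pipeline(PCA(n_components=20), LDA())
    model.fit(X, y)
    Z = model.transform(X)
    means = np.array([Z[y == c].mean(axis=0) for c in np.unique(y)])
    return model, means

app_model, app_means = fit_subspace(appearance, y)
dep_model, dep_means = fit_subspace(depth, y)

def identify(probe_app, probe_dep, w=0.6):
    da = np.linalg.norm(app_model.transform([probe_app]) - app_means, axis=1)
    dd = np.linalg.norm(dep_model.transform([probe_dep]) - dep_means, axis=1)
    return np.argmin(w * da + (1 - w) * dd)           # weighted appearance+depth score

print(identify(appearance[3], depth[3]))
```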

18.
In this paper, a novel face segmentation algorithm based on a facial saliency map (FSM) is proposed for head-and-shoulder video applications. The method consists of three stages. The first stage generates the saliency map of the input video image with our facial attention model. In the second stage, a geometric model and an eye map built from chrominance components are employed to localize the face region according to the saliency map. The third stage performs adaptive boundary correction and final face contour extraction. Based on the segmentation result, an effective boundary saliency map (BSM) is then constructed and applied to tracking-based segmentation of the successive frames. Experimental evaluation on test sequences shows that the proposed method segments the face area quite effectively.

19.
We present a new approach to face relighting by jointly estimating the pose, reflectance functions, and lighting from as few as one image of a face. Upon such estimation, we can synthesize the face image under any prescribed new lighting condition. In contrast to commonly used face shape models or shape-dependent models, we neither recover nor assume the 3-D face shape during the estimation process. Instead, we train a pose- and pixel-dependent subspace model of the reflectance function using a face database that contains samples of pose and illumination for a large number of individuals (e.g., the CMU PIE database and the Yale database). Using this subspace model, we can estimate the pose, the reflectance functions, and the lighting condition of any given face image. Our approach lends itself to practical applications thanks to many desirable properties, including the preservation of the non-Lambertian skin reflectance properties and facial hair, as well as reproduction of various shadows on the face. Extensive experiments show that, compared to recent representative face relighting techniques, our method successfully produces better results, in terms of subjective and objective quality, without reconstructing a 3-D shape.

20.
We describe the components of the system used for real-time facial communication using a cloned head. We begin by describing the automatic face cloning using two orthogonal photographs of a person. The steps in this process are face model matching and texture generation. After an introduction to the MPEG-4 parameters that we are using, we proceed with the explanation of the facial feature tracking using a video camera. The technique requires an initialization step and is further divided into mouth and eye tracking. These steps are explained in detail. We then explain the speech processing techniques used for real-time phoneme extraction and the subsequent speech animation module. We conclude with the results and comments on the integration of the modules towards a complete system.
