Similar Documents
19 similar documents found (search time: 234 ms)
1.
A 3D scanner can accurately capture the geometry and texture of a human face, but the raw face scan data form only a single continuous surface that does not match the actual structure of a face and cannot be used for facial animation. To address this problem, a method for modeling faces from 3D scan data is proposed: a generic face model with a complete structure is first roughly fitted to the scan data, and detail-reconstruction techniques then recover the surface detail and skin texture of the specific face. Experiments show that the 3D face models built by this method are realistic and structurally complete, and can generate continuous, natural expression animation.

2.
吴晓军  鞠光亮 《电子学报》2016,44(9):2141-2147
A markerless facial expression capture method is proposed. First, a uniform face mesh model covering 85% of the facial features is generated from ASM (Active Shape Model) feature points. Second, an expression-capture method based on this face model is proposed: optical flow tracks the displacement of the feature points, assisted by particle filtering to stabilize the tracking; the feature-point displacements drive the overall mesh change and serve as the initial value for mesh tracking, with a mesh-deformation algorithm driving the mesh. Finally, the captured expression data drive different face models, with different driving methods chosen according to the model's dimensionality, to reproduce the expression animation. Experimental results show that the proposed algorithm captures facial expressions well, and mapping the captured expressions onto both 2D cartoon faces and 3D virtual face models yields good animation results.

3.
Multi-model ASM and its application to facial feature point localization
To improve the accuracy of ASM-based facial feature point localization under non-uniform illumination, a multi-model ASM method incorporating Log-Gabor wavelet features is proposed. Its main features are: the global shape model is initialized fairly accurately based on precise localization of the iris positions in the target image; the local texture of each feature point is described jointly by gray-level and Log-Gabor wavelet features, reducing the influence of illumination and noise; and a multi-model ASM is built, comprising a global ASM and local ASMs for salient facial regions, with the two kinds of model applied alternately to constrain the feature-point localization results under an edge-constraint strategy. Experiments show that the multi-model ASM algorithm clearly improves facial feature point localization accuracy over the traditional ASM.
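The shape-model side of any ASM variant like the one above constrains each fitted shape to stay plausible under the trained point-distribution model. A minimal sketch of that constraint step (data and dimensions are illustrative, not from the paper):

```python
import numpy as np

def constrain_shape(x, x_mean, P, eigvals, k=3.0):
    """Project a candidate shape onto the ASM subspace and clamp each
    shape parameter b_i to within +/- k standard deviations, keeping
    the fitted shape plausible under the point-distribution model."""
    b = P.T @ (x - x_mean)            # shape parameters of the candidate
    limit = k * np.sqrt(eigvals)      # +/- k*sigma bounds per mode
    b = np.clip(b, -limit, limit)
    return x_mean + P @ b             # reconstructed, constrained shape

# Toy example: 4 landmark coordinates, 2 retained modes of variation.
rng = np.random.default_rng(0)
x_mean = np.zeros(4)
P = np.linalg.qr(rng.normal(size=(4, 2)))[0]   # orthonormal mode matrix
eigvals = np.array([4.0, 1.0])                  # variances of the modes
x = np.array([10.0, -10.0, 10.0, -10.0])        # implausible candidate
x_fit = constrain_shape(x, x_mean, P, eigvals)
```

After clamping, re-projecting the constrained shape yields parameters that all lie within the ±3σ limits.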

4.
何永健  冯寿鹏  周荣坤 《电子科技》2011,24(10):119-121
To address several shortcomings of the traditional ASM method for facial feature localization, a number of improvements are proposed. The traditional ASM method struggles to obtain good matches at every feature point, and because of background uncertainty the trained model depends on the background of the training samples. To address these problems, a local texture model for complex backgrounds is first proposed for edge feature points; second, for corner points on the contour, a 2D model and a 2D feature-point matching algorithm are used; finally, ...

5.
Facial aging image synthesis based on personalized prototypes
王章野  曹玫璇  李理  彭群生 《电子学报》2009,37(Z1):118-124
This paper presents a new method for synthesizing personalized facial aging images. Based on an established database of Asian face images, a personalized prototype-matching algorithm is proposed that uses the standard deviation of local curvature along the face's outer contour: the local curvature standard deviation of the face-shape feature points is computed and used to retrieve several matching images from the database, which are combined into a texture-enhanced prototype; shape and color transformation models then generate the aged face image, achieving realistic aging effects such as added wrinkles, eye-bag formation, reduced skin luster, and graying hair. Experiments show that the method can conveniently and realistically synthesize aging images of different kinds of faces at different ages.

6.
To address the problem that face image reconstruction methods based on the 3D morphable model produce models whose shape representation is not robust when facial feature points are detected inaccurately, a reconstruction method that optimizes the 3D morphable model parameters is proposed. First, facial feature points are accurately extracted and localized by an improved position map regression network, and initial model parameters are obtained from them. Then, to improve the model's accuracy and generalization, these are fused with parameters obtained by a regression-based method to yield optimized model parameters. Finally, the 3D morphable model is optimized to obtain the final face model. Experiments on real face data show that the method achieves accurate 3D face reconstruction.

7.
Automatic generation of multi-expression facial portraits
宋红  黄小川  王树良 《电子学报》2013,41(8):1494-1499
A portrait is an art form that captures a subject's character while hiding detail and preserving personal privacy. This paper proposes an algorithm that takes a neutral-expression face as input and automatically generates portraits with different expressions. First, the Active Shape Model (ASM) extracts the key facial feature points; then, following FAP (Facial Animation Parameter) rules for different expressions obtained by statistical learning, the neutral face's feature points are deformed to generate an expression triangle mesh, and the neutral face image is mapped onto the expression mesh as texture to produce an expression face image; finally, gradient-domain information and non-photorealistic rendering techniques turn the expressive face into a portrait-style image. The generated multi-expression portraits are of good quality and can be applied in non-photorealistic graphics and digital entertainment, such as the web, newspapers, and magazines.

8.
To address the low accuracy of existing local texture features for face classification under illumination changes, a local color binary pattern texture feature extraction and representation method is proposed. The method first takes the magnitude of the color vector across multiple signal channels, reducing the quantization error caused by extracting LBP features from each channel separately; it then extracts the color angle between pixels of any two channels to reduce the influence of illumination changes. For a given set of test face images, classification is obtained by voting among SVM binary classifiers. Experiments on the Color FERET and XM2VTSDB face databases show that the method classifies face images effectively under illumination changes.
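The first step of the abstract above, computing an LBP code on the magnitude of the per-pixel color vector rather than on each channel separately, can be sketched as follows (the 8-neighbor encoding is a standard LBP; the exact details of the paper's descriptor are assumed):

```python
import numpy as np

def color_magnitude_lbp(img):
    """LBP codes computed on the per-pixel magnitude of the RGB color
    vector, ||(R, G, B)||, instead of on each channel separately."""
    mag = np.sqrt((img.astype(float) ** 2).sum(axis=2))
    h, w = mag.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = mag[1:-1, 1:-1]
    # 8 neighbours in clockwise order starting at the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = mag[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(np.uint8) << bit
    return codes

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(8, 8, 3))
codes = color_magnitude_lbp(img)   # one 8-bit code per interior pixel
```

A histogram of these codes over a face region would then serve as the texture feature fed to the SVM classifiers.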

9.
李晓峰  赵海  葛新  程显永 《电子学报》2010,38(5):1167-1171
Because of environmental uncertainty and the complexity of the human face, tracking facial expressions and depicting them by computer is a hard problem. This paper proposes a relatively simple solution, distinct from traditional approaches such as pattern recognition and sample learning. Under video capture, frame images are analyzed; after comparing several edge-detection methods, a facial-expression modeling method based on edge feature extraction is adopted to extract and model the facial features used for expression depiction. Combined with curve fitting and model control, a cartoon face is generated and 2D expression animation is simulated, producing from the input data a cartoon drawing that faithfully reflects the changing expressions.

10.
署光  姚莉秀  杨晓超  左昕  杨杰 《电子学报》2010,38(8):1798-1802
With the growth of the digital entertainment industry, techniques for generating cartoon faces from photographs will find wide application. Previous methods have focused mainly on 2D cartooning, with a rather uniform style. For 3D faces, although morphable-model methods can synthesize 3D faces with various attributes from photographs, they are computationally expensive and unsuitable for real-time applications. This paper proposes a 3D cartoon face generation method based on a sparse morphable model that improves computation speed and requires only a single frontal face photograph. First, the sparse morphable model is fitted to the photographed face to obtain a person-specific sparse face model; then a generic face model is deformed to the specific face and its texture is synthesized; finally, the 3D face is cartoonized. Experimental results show that the method can quickly and automatically synthesize vivid 3D cartoon faces.

11.
With better understanding of face anatomy and technical advances in computer graphics, 3D face synthesis has become one of the most active research fields for many human-machine applications, ranging from immersive telecommunication to the video games industry. In this paper we propose a method that automatically extracts features such as the eyes, mouth, eyebrows and nose from a given frontal face image. A generic 3D face model is then superimposed onto the face in accordance with the extracted facial features, fitting the input face image by transforming the vertex topology of the generic face model. The specific 3D face can finally be synthesized by texturing the individualized face model. Once the model is ready, six basic facial expressions are generated with the help of MPEG-4 facial animation parameters. To generate transitions between these facial expressions we use 3D shape morphing between the corresponding face models and blend the corresponding textures. The novelty of our method is the automatic generation of a 3D model and the synthesis of faces with different expressions from a frontal neutral face image. Our method has the advantage that it is fully automatic, robust, and fast, and can generate various views of the face by rotating the 3D model. It can be used in a variety of applications for which the accuracy of depth is not critical, such as games, avatars, and face recognition. We have tested and evaluated our system using a standard database, BU-3DFE.
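The transition scheme described in the abstract above, 3D shape morphing between two expression meshes of the same topology together with texture blending, reduces to a per-weight linear mix. A minimal sketch (array shapes are illustrative):

```python
import numpy as np

def morph(verts_a, verts_b, tex_a, tex_b, t):
    """Linear morph at weight t in [0, 1] between two expression meshes
    sharing the same vertex topology, blending textures with the same
    weight so shape and appearance stay in sync."""
    verts = (1.0 - t) * verts_a + t * verts_b
    tex = (1.0 - t) * tex_a.astype(float) + t * tex_b.astype(float)
    return verts, tex.astype(np.uint8)

# Toy meshes and textures for a half-way transition frame.
verts_a = np.zeros((3, 3))                       # 3 vertices, expression A
verts_b = np.ones((3, 3))                        # same vertices, expression B
tex_a = np.zeros((2, 2, 3), dtype=np.uint8)
tex_b = np.full((2, 2, 3), 200, dtype=np.uint8)
verts_mid, tex_mid = morph(verts_a, verts_b, tex_a, tex_b, 0.5)
```

Sampling t from 0 to 1 over successive frames produces the expression transition.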

12.
This paper presents a hierarchical animation method for transferring facial expressions extracted from a performance video to different facial sketches. Without any expression example obtained from the target faces, our approach can transfer expressions to facial sketches by motion retargeting. In practical applications, however, image noise in each frame reduces the accuracy of feature extraction from the source faces, and the shape difference between source and target faces affects the animation quality when representing expressions. To overcome these difficulties, we propose a robust neighbor-expression transfer (NET) model, which models the spatial relations among sparse facial features. By learning expression behaviors from neighboring face examples, the NET model can reconstruct facial expressions from noisy signals. Based on the NET model, we present a hierarchical method to animate facial sketches: the motion vectors on the source face are adjusted from coarse to fine on the target face, and the animation results are generated to replicate the source expressions. Experimental results demonstrate that the proposed method can effectively and robustly transfer expressions from noisy animation signals.

13.
Research on facial animation is as vast as the many interests and needs found in the general public and in television and film production. For Mac Guff Ligne, a company specialized in special effects and computer-generated images, the needs and constraints in this area are substantial. Morphing, often used in facial animation, consists in mixing several expression models. The advantages of morphing are numerous, but the animation workload remains long and time-consuming. Our goal is to propose a fast and reliable animation tool based on the same morphing technique with which graphic artists are familiar. Our method inverts the classical morphing process, matching a real facial animation to a 3D facial animation by automatically computing the weights of the expression models. First, we discretize the real facial animation using a number of characteristic points. We then follow the path of each point by optical or magnetic motion capture, or through the filmed images. This tracked animation is then decomposed, for each frame, over a basis of characteristic expressions (joy, anger, etc.) that can be extracted automatically from the real animation during the calibration stage; that is, we express a simplified shape of the face as a linear combination of a series of basic faces. Finally, we can reintroduce the results of this decomposition into a more complex facial morph with a totally different topology and geometry, so the user can complete and modify the resulting animation with a tool he knows well. This method, used in production at Mac Guff Ligne, has proved a solid, effective and easy-to-use working basis for facial animation.
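The per-frame decomposition described above, expressing a tracked face shape as a linear combination of basic expression shapes, is a least-squares problem; the recovered weights can then drive a morph rig with a different topology. A sketch under assumed toy data (all names and dimensions are illustrative):

```python
import numpy as np

def decompose(frame, neutral, basis):
    """Least-squares weights w such that frame ~= neutral + basis @ w,
    i.e. the tracked shape expressed as a mix of basic expressions."""
    w, *_ = np.linalg.lstsq(basis, frame - neutral, rcond=None)
    return w

# Source rig: 3 tracked 2D points (flattened), 2 basic expressions.
neutral = np.zeros(6)
basis = np.array([[1., 0., 0., 1., 0., 0.],     # "joy" displacement
                  [0., 1., 0., 0., 1., 0.]]).T  # "anger" displacement

# A tracked frame that is 0.5 joy + 0.2 anger by construction.
frame = neutral + basis @ np.array([0.5, 0.2])
w = decompose(frame, neutral, basis)

# Reintroduce the weights into a different, denser target morph rig.
target_neutral = np.zeros(8)
target_basis = np.arange(16, dtype=float).reshape(8, 2)
retargeted = target_neutral + target_basis @ w
```

The same weights thus animate any rig that provides corresponding basic-expression shapes, regardless of its geometry.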

14.
This paper describes a new and efficient method for facial expression generation on cloned synthetic head models. The system uses abstract facial muscles called action units (AUs), based on both anatomical muscles and the facial action coding system. The facial expression generation method has real-time performance, is less computationally expensive than physically based models, and has greater anatomical correspondence than rational free-form deformation or spline-based techniques. Automatic cloning of a real human head is done by adapting a generic facial and head mesh to Cyberware laser-scanned data. The conformation of the generic head to the individual data and the fitting of texture onto it are based on a fully automatic feature extraction procedure. Individual facial animation parameters are also automatically estimated during the conformation process. The entire animation system is hierarchical: emotions and visemes (the visual mouth shapes that occur during speech) are defined in terms of the AUs, and higher-level gestures are defined in terms of AUs, emotions, and visemes as well as the temporal relationships between them. The main emphasis of the paper is on the abstract muscle model, along with limited discussion of the automatic cloning process and higher-level animation control aspects.

15.
Due to the advent of the MPEG-4 standard, facial animation has received significant attention lately. A common approach to facial animation is to use a mesh model. The physics-based transformation known as the elastic body spline (EBS) has been proposed to deform the facial mesh model and generate realistic expressions, under the assumption that the whole facial image has the same elastic property. In this paper, we partition facial images into different regions and propose an iterative algorithm to find the elastic property of each facial region. By doing so, we obtain the EBS for the vertices of the facial mesh model so that facial animation can be achieved more realistically.

16.
倪奎  董兰芳 《电子技术》2009,36(12):64-67
Facial animation is widely used in the games industry, teleconferencing, agents and avatars, and many other fields, and has attracted much research in recent years; animating organs such as the mouth and eyes has long been a major difficulty. This paper proposes a method that blends sample images of mouth and eye organs into a face image and generates facial animation from a single neutral face photograph. The method builds splines from feature points and interpolates the splines in polar coordinates to realize the spatial mapping, then resamples the image by backward mapping and interpolation to obtain the blended image. Experimental results show that the blended images look natural, that motion of organs such as the mouth and eyeballs can be realized, and that the method meets the real-time requirements of facial animation generation.

17.
Although several algorithms have been proposed for facial model adaptation from image sequences, an insufficient feature set for adapting a full facial model, imperfect matching of feature points, and imprecise head-motion estimation may degrade the accuracy of model adaptation. In this paper, we propose to resolve these difficulties by integrating facial model adaptation, texture mapping, and head-pose estimation as cooperative and complementary processes. Using an analysis-by-synthesis approach, salient facial feature points and head profiles are reliably tracked and extracted to form a growing, more complete feature set for model adaptation. More robust head-motion estimation is achieved with the assistance of the textured facial model. The proposed scheme operates on image sequences acquired with a single uncalibrated camera and requires only a little manual adjustment during initialization, which proves to be a feasible approach for facial model adaptation.

18.
Speech-driven facial animation combines techniques from different disciplines such as image analysis, computer graphics, and speech analysis. Active shape models (ASMs), used in image analysis, are excellent tools for characterizing lip contour shapes and approximating their motion in image sequences. By controlling the coefficients of an ASM, such a model can also be used for animation. We design a mapping of the articulatory parameters used in phonetics into ASM coefficients that control nonrigid lip motion. The mapping is designed to minimize the approximation error when articulatory parameters measured on training lip contours are taken as input to synthesize the training lip movements. Since articulatory parameters can also be estimated from speech, the proposed technique can form an important component of a speech-driven facial animation system.
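A mapping that minimizes the approximation error on training data, as described above, can in the simplest linear case be fitted in closed form by least squares. A sketch with synthetic stand-in data (the paper's actual parameterization is not specified here):

```python
import numpy as np

# Training data: articulatory parameters A (one row per training lip
# contour) and the corresponding ASM coefficients C.  A linear map M
# minimizing the Frobenius error ||A @ M - C|| matches the stated
# training criterion in its simplest form.
rng = np.random.default_rng(2)
A = rng.normal(size=(50, 3))          # 50 contours, 3 articulatory params
M_true = rng.normal(size=(3, 4))      # ground-truth map (toy data only)
C = A @ M_true                        # 4 ASM coefficients per contour

M, *_ = np.linalg.lstsq(A, C, rcond=None)

# At synthesis time, articulatory parameters estimated from speech are
# mapped to ASM coefficients that drive the lip contour.
speech_params = np.array([[0.1, -0.2, 0.3]])
asm_coeffs = speech_params @ M
```

With noise-free toy data the recovered map coincides with the generating one; on real measurements the least-squares fit simply minimizes the residual.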

19.
We propose a novel approach for face tracking, resulting in a visual feedback loop: instead of trying to adapt a more or less realistic artificial face model to an individual, we construct from precise range data a specific texture and wireframe face model, whose realism allows the analysis and synthesis modules to visually cooperate in the image plane by directly using 2D patterns synthesized by the face model. Unlike other feedback loops found in the literature, we do not explicitly handle the complex 3D geometric data of the face model, which makes real-time manipulation possible. Our main contribution is a complete face tracking and pose estimation framework, with few assumptions about the face's rigid motion (allowing large rotations out of the image plane) and without marks or makeup on the user's face. Our framework feeds the feature-tracking procedure with synthesized facial patterns, controlled by an extended Kalman filter. Within this framework, we present original and efficient geometric and photometric modelling techniques, and a reformulation of a block-matching algorithm that matches synthesized patterns with real images and avoids background areas during matching. We also offer numerical evaluations assessing the validity of our algorithms, and new developments in the context of facial animation. Our face-tracking algorithm may be used to recover the 3D position and orientation of a real face and generate an MPEG-4 animation stream to reproduce the rigid motion of the face with a synthetic face model. It may also serve as a pre-processing step for further facial expression analysis algorithms, since it locates the facial features in the image plane and provides precise 3D information to account for the possible coupling between pose and expression in the analysed facial images.
