Similar Documents
19 similar documents found (search time: 468 ms)
1.
《信息与电脑》2021,(1):50-52
Working on the Android platform, this paper acquires a 2D portrait image and uses the face-landmark SDK provided by Face++ to automatically locate and extract the facial feature points of the input photograph. An adaptive algorithm then adjusts a model according to the extracted feature points, generating a 3D facial model that fits the 2D portrait. In contrast to ordinary adjustment algorithms, the proposed adaptive adjustment algorithm generates the corresponding 3D facial model even for 2D portraits taken at an angle.

2.
To meet the practical needs of online interpersonal communication, a convenient image-based facial animation method is proposed. Given a frontal and a half-profile image of the same face, a generic 3D face model is introduced to estimate the initial head orientation, and 2D mesh models are built for the two images. Faces at different orientations are then synthesized with image interpolation and warping, with occlusions handled sensibly; lip motion and facial expressions are produced by projecting the deformation parameters of an MPEG-4 3D model onto the corresponding 2D meshes. The method requires neither reconstructing a 3D model of the subject's face nor calibrating the camera, and can synthesize facial expressions together with small changes in orientation. Experimental results demonstrate the method's convenience, cross-platform generality, and good synthesis quality.

3.
This paper surveys dynamic facial-expression synthesis methods for face models and classifies them by their intrinsic characteristics. Although a sizable literature already exists, dynamic expression synthesis remains a very active research topic. Grouped by output type, synthesis algorithms are reviewed for the 2D image plane and for 3D facial surfaces. For the 2D image plane, the main algorithms include: synthesis driven by active expression contour models; methods based on Laplacian-operator transfer; the expression-ratio-image synthesis framework; synthesis driven by offsets of principal facial feature points; methods based on generic expression mapping functions; and recent deep-learning-based expression synthesis. For 3D facial surfaces, the main categories are: synthesis based on physical muscle models, deformation-based expression synthesis, expression synthesis by linear regression of 3D shape, synthesis based on facial motion graphs, and recent deep-learning-based 3D expression synthesis. For each category, the methodology and the principal strengths and weaknesses are discussed. This work is intended to help future researchers better identify research directions and technical entry points.

4.
Building on the geometric features of facial components and a muscle motion model, this paper proposes a method for generating 2D facial images. The method describes facial components precisely with a feature mesh model, uses the relative displacements of contour feature points on the mesh as parameters for facial-image deformation and expression synthesis, and controls the interactions between components and changes in the face outline through a facial integration model. Experiments show that the deformed facial images produced by this method are fairly accurate and the resulting expressions look natural.

5.
余烽  姜昱明 《计算机工程与设计》2007,28(18):4422-4423,4465
A virtual face synthesis method based on radial basis functions is proposed. The method obtains 3D data for facial feature points from frontal and profile images; it then deforms a generic face model globally into a person-specific model using RBF-based scattered-data interpolation and smooths the model with Bezier polishing; finally, a seamlessly stitched face texture image is mapped onto the specific face model. Experimental results show that the synthesized virtual faces are highly realistic.
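The RBF scattered-data interpolation at the heart of item 5 can be sketched as follows. This is an illustrative toy, not the paper's implementation: the Gaussian kernel, the `sigma` value, and the absence of a polynomial term are all assumptions, and the Bezier smoothing and texture-mapping stages are omitted.

```python
import numpy as np

def rbf_deform(control_pts, displacements, vertices, sigma=1.0):
    """Interpolate control-point displacements over all mesh vertices
    with Gaussian radial basis functions (scattered-data interpolation)."""
    # Kernel matrix between control points
    d = np.linalg.norm(control_pts[:, None, :] - control_pts[None, :, :], axis=-1)
    K = np.exp(-(d / sigma) ** 2)
    # Solve for RBF weights (one column of weights per coordinate axis)
    w = np.linalg.solve(K, displacements)
    # Evaluate the interpolant at every vertex and displace it
    dv = np.linalg.norm(vertices[:, None, :] - control_pts[None, :, :], axis=-1)
    return vertices + np.exp(-(dv / sigma) ** 2) @ w

# Toy example: four control points all pushed 0.1 along +z
ctrl = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.]])
disp = np.tile([0., 0., 0.1], (4, 1))
verts = np.array([[0.5, 0.5, 0.], [0., 0., 0.]])
deformed = rbf_deform(ctrl, disp, verts)
```

At a control point the interpolant is exact, so the second vertex (which coincides with a control point) receives exactly the prescribed 0.1 displacement.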

6.
Ordinary CNN-based face super-resolution methods fail to exploit facial structure, so reconstructed images often suffer from displaced facial features and blurred edges. To address this, a face super-resolution method based on a two-level cascaded neural network is proposed. A facial-prior estimation module added to the reconstruction network captures facial-landmark information in the input image and constrains the spatial consistency between the reconstructed and target images. Experiments on the CelebA and Helen datasets show that the method reconstructs the facial features of frontal faces accurately and is robust to deformed faces such as profile and occluded faces.

7.
A face-replacement algorithm based on CANDIDE-3 is proposed. The algorithm uses a CLM for facial landmark localization and puts the 2D facial landmarks in correspondence with the 3D vertices of CANDIDE to build the source face model. Head pose and expression parameters estimated from the landmarks are used to adjust the orientation and angle of the source face; a color-transfer algorithm then converts the colors of the source face to those of the target face, and an image-fusion algorithm blends the result. Experimental results show that the method performs effective face replacement in both images and video.
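Item 7 does not specify its color-transfer algorithm; a common minimal stand-in is per-channel mean/variance matching (Reinhard-style statistics transfer, here applied directly in RGB rather than a decorrelated color space, which is an assumption):

```python
import numpy as np

def color_transfer(src, tgt):
    """Shift source-image colors so their per-channel mean and standard
    deviation match the target's (simple statistics matching)."""
    mu_s, sd_s = src.mean(axis=(0, 1)), src.std(axis=(0, 1)) + 1e-8
    mu_t, sd_t = tgt.mean(axis=(0, 1)), tgt.std(axis=(0, 1)) + 1e-8
    return (src - mu_s) / sd_s * sd_t + mu_t

# Toy example on random 8x8 RGB "face crops"
rng = np.random.default_rng(0)
src = rng.random((8, 8, 3))
tgt = rng.random((8, 8, 3))
matched = color_transfer(src, tgt)
```

After the transfer, the per-channel statistics of `matched` agree with those of `tgt`.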

8.
To synthesize realistic facial animation, a real-time single-camera method for 3D virtual facial animation is proposed. A 3D face model of the user is first reconstructed from a single frontal photograph, and face images with varying pose, illumination, and expression are synthesized from it to train a user-specific local texture model. A camera then captures face video, and the user-specific local texture model tracks the facial feature points. Finally, Blendshape coefficients are estimated from the tracking results and the 3D key shapes, and the facial animation is synthesized through the Blendshape model. Experimental results show that the method synthesizes realistic 3D facial animation in real time and needs nothing but an ordinary camera, making it well suited to ordinary users.
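Estimating Blendshape coefficients from tracked landmarks, as in item 8's final step, is commonly posed as a linear least-squares problem over the delta shapes. A hedged sketch: the paper's exact solver and constraints are not given, and clipping the weights to [0, 1] is an assumption.

```python
import numpy as np

def estimate_blendshape_weights(tracked, neutral, key_shapes):
    """Least-squares estimate of coefficients c so that
    neutral + sum_i c_i * (key_shapes[i] - neutral) approximates the
    tracked landmark positions."""
    B = (key_shapes - neutral).reshape(len(key_shapes), -1).T  # delta basis, (3n, k)
    b = (tracked - neutral).ravel()
    c, *_ = np.linalg.lstsq(B, b, rcond=None)
    return np.clip(c, 0.0, 1.0)  # blendshape weights conventionally lie in [0, 1]

# Toy example: one key shape, landmarks tracked halfway toward it
neutral = np.zeros((2, 3))
key_shapes = np.array([[[1., 0., 0.], [0., 1., 0.]]])  # (k=1, n=2, 3)
tracked = 0.5 * key_shapes[0]
c = estimate_blendshape_weights(tracked, neutral, key_shapes)
```

With landmarks halfway between the neutral and the key shape, the recovered coefficient is 0.5.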

9.
刘树利  胡茂林 《微机发展》2006,16(6):213-215
For face models obtained from different viewpoints, this paper proposes a recognition method based on the facial surface. A planar projective transformation warps the face images into a common image so that they are aligned, after which principal component analysis (PCA) is used for classification. With this approach, the unwanted variation caused by changes in lighting, facial expression, and pose can be removed or ignored, enabling fairly accurate face recognition. Experimental results show that the method gives a better representation of the face model and a lower face-recognition error rate.
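The PCA classification stage of item 9 can be sketched as an eigen-decomposition of the aligned gallery followed by nearest-neighbour matching in the subspace. The projective-alignment step is omitted here, and the tiny 4-pixel "images" and all names are illustrative only.

```python
import numpy as np

def pca_fit(X, n_components):
    """Fit PCA on aligned, flattened face images (one per row)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def classify(x, gallery, labels, mean, comp):
    """Nearest-neighbour classification in the PCA subspace."""
    proj = lambda v: (v - mean) @ comp.T
    dists = np.linalg.norm(proj(gallery) - proj(x), axis=1)
    return labels[int(np.argmin(dists))]

# Toy gallery: two well-separated "subjects", two samples each
X = np.array([[0., 0., 0., 0.], [0.1, 0., 0., 0.],
              [5., 5., 5., 5.], [5.1, 5., 5., 5.]])
labels = ["subject_a", "subject_a", "subject_b", "subject_b"]
mean, comp = pca_fit(X, 1)
pred = classify(np.array([4.9, 5., 5., 5.]), X, labels, mean, comp)
```

The probe lies near the second subject's samples, so the nearest-neighbour rule returns its label.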

10.
With only a little user interaction, this paper presents a method for cloning a 3D virtual face. First, a generic face model is globally transformed according to two face images. Then, based on the displacements of the specific face's feature lines relative to the corresponding feature lines on the generic model, variational interpolation deforms the generic face mesh to fit the geometry of the specific face. Finally, multiresolution spline techniques produce a seamless facial texture mosaic, and texture mapping over the whole face model yields a highly realistic specific face that can be viewed from any direction.

11.
In the coding of facial image sequences, model-based coding has attracted wide attention because it achieves high subjective image quality at low bit rates; reliable estimation of its motion parameters, however, remains difficult. This paper analyzes the characteristics of head motion and divides it into three forms: rigid head motion, simple facial-expression motion, and complex facial-expression motion. The rigid head-motion parameters are extracted with a motion-estimation algorithm based on feature-point pairs, for which a linear implementation is proposed. The paper also points out that improving the accuracy of motion estimation depends on choosing suitable feature points and on building a 3D wireframe model consistent with the specific face. In addition, a deformation matrix is established for simple expression motion, and the motion-estimation error, evaluated with an area error function, is reported.

12.
In this paper we present a robust and lightweight method for the automatic fitting of deformable 3D face models on facial images. Popular fitting techniques such as those based on statistical models of shape and appearance require a training stage based on a set of facial images and their corresponding facial landmarks, which have to be manually labeled. Therefore, new images in which to fit the model cannot differ too much in shape and appearance (including illumination variation, facial hair, wrinkles, etc.) from those used for training. By contrast, our approach can fit a generic face model in two steps: (1) the detection of facial features based on local image gradient analysis and (2) the backprojection of a deformable 3D face model through the optimization of its deformation parameters. The proposed approach can retain the advantages of both learning-free and learning-based approaches. Thus, we can estimate the position, orientation, shape and actions of faces, and initialize user-specific face tracking approaches, such as Online Appearance Models (OAMs), which have shown to be more robust than generic user tracking approaches. Experimental results show that our method outperforms other fitting alternatives under challenging illumination conditions and with a computational cost that allows its implementation in devices with low hardware specifications, such as smartphones and tablets. Our proposed approach lends itself nicely to many frameworks addressing semantic inference in face images and videos.

13.
3-D motion estimation in model-based facial image coding
An approach to estimating the motion of the head and facial expressions in model-based facial image coding is presented. An affine nonrigid motion model is set up. The specific knowledge about facial shape and facial expression is formulated in this model in the form of parameters. A direct method of estimating the two-view motion parameters that is based on the affine method is discussed. Based on the reasonable assumption that the 3-D motion of the face is almost smooth in the time domain, several approaches to predicting the motion of the next frame are proposed. Using a 3-D model, the approach is characterized by a feedback loop connecting computer vision and computer graphics. Embedding the synthesis techniques into the analysis phase greatly improves the performance of motion estimation. Simulations with long image sequences of real-world scenes indicate that the method not only greatly reduces computational complexity but also substantially improves estimation accuracy.
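The two-view motion estimation that item 13 builds on can be illustrated with a direct linear least-squares fit. This sketch recovers a planar 6-parameter affine model from point correspondences; it stands in for, rather than reproduces, the paper's affine nonrigid 3-D formulation.

```python
import numpy as np

def fit_affine_motion(src, dst):
    """Direct linear least-squares fit of the 2D affine motion model
    dst ~= A @ src + t from feature-point correspondences."""
    n = len(src)
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src; M[0::2, 2] = 1.0   # rows for the x equation
    M[1::2, 3:5] = src; M[1::2, 5] = 1.0   # rows for the y equation
    p, *_ = np.linalg.lstsq(M, dst.ravel(), rcond=None)
    A = p[[0, 1, 3, 4]].reshape(2, 2)
    t = p[[2, 5]]
    return A, t

# Toy example: recover a known affine motion from 4 correspondences
A_true = np.array([[1.1, 0.1], [-0.05, 0.95]])
t_true = np.array([2.0, -1.0])
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
dst = src @ A_true.T + t_true
A_est, t_est = fit_affine_motion(src, dst)
```

With noise-free correspondences the fit recovers the generating parameters exactly.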

14.
《Real》2001,7(2):173-182
Three-dimensional human head modeling is useful in video-conferencing or other virtual reality applications. However, manual construction of 3D models using CAD tools is often expensive and time-consuming. Here we present a robust and efficient method for the construction of a 3D human head model from perspective images viewing from different angles. In our system, a generic head model is first used, then three images of the head are required to adjust the deformable contours on the generic model to make it closer to the target head. Our contributions are as follows. Our system uses perspective images that are more realistic than orthographic projection approximations used in earlier works. Also, for shaping and positioning face organs, we present a method for estimating the camera focal length and the 3D coordinates of facial landmarks when the camera transformation is known. We also provide an alternative for the 3D coordinates estimation using epipolar geometry when the extrinsic parameters are absent. Our experiments demonstrate that our approach produces good and realistic results.
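Estimating the 3D coordinates of facial landmarks from views with known camera transformations, as in item 14, reduces to two-view triangulation. A standard linear (DLT) sketch follows; the intrinsics and point values are made up for illustration and are not taken from the paper.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two perspective
    views with known 3x4 projection matrices."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null vector = homogeneous 3D point
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Toy example: two cameras sharing intrinsics K, second translated along x
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])
X_true = np.array([0.2, -0.1, 4.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With exact projections the landmark is recovered to numerical precision.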

15.
This study presents a facial expression recognition system which separates the non-rigid facial expression from the rigid head rotation and estimates the 3D rigid head rotation angle in real time. The extracted trajectories of the feature points contain both rigid head motion components and non-rigid facial expression motion components. A 3D virtual face model is used to obtain accurate estimation of the head rotation angle such that the non-rigid motion components can be precisely separated to enhance the facial expression recognition performance. The separation performance of the proposed system is further improved through the use of a restoration mechanism designed to recover feature points lost during large pan rotations. Having separated the rigid and non-rigid motions, hidden Markov models (HMMs) are employed to recognize a prescribed set of facial expressions defined in terms of facial action coding system (FACS) action units (AUs).
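Scoring an observation sequence against per-expression HMMs, as in item 15's recognition stage, uses the forward algorithm: the model with the highest log-likelihood wins. A minimal sketch with a discrete (quantised) observation model, which is an assumption, since the abstract does not state the observation model used.

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Scaled forward-algorithm log-likelihood of a discrete observation
    sequence under an HMM (pi: initial probs, A: transitions, B: emissions)."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    loglik = np.log(c)
    alpha = alpha / c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict, then weight by emission
        c = alpha.sum()
        loglik += np.log(c)             # accumulate the scaling factors
        alpha = alpha / c
    return loglik

# Toy example: a hypothetical "smile" model vs. a uniform baseline
pi = np.array([1.0, 0.0])
A = np.array([[0.8, 0.2], [0.2, 0.8]])
B_smile = np.array([[0.9, 0.1], [0.2, 0.8]])   # state 1 prefers symbol 1
B_flat = np.array([[0.5, 0.5], [0.5, 0.5]])
seq = [0, 0, 1, 1, 1]
smile_ll = forward_loglik(pi, A, B_smile, seq)
flat_ll = forward_loglik(pi, A, B_flat, seq)
```

The sequence that drifts toward symbol 1 scores higher under the "smile" model than under the uninformative one, which is exactly the comparison a recognizer would make across its bank of per-expression HMMs.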

16.
An approach to the analysis of dynamic facial images for the purposes of estimating and resynthesizing dynamic facial expressions is presented. The approach exploits a sophisticated generative model of the human face originally developed for realistic facial animation. The face model, which may be simulated and rendered at interactive rates on a graphics workstation, incorporates a physics-based synthetic facial tissue and a set of anatomically motivated facial muscle actuators. The estimation of dynamical facial muscle contractions from video sequences of expressive human faces is considered. An estimation technique that uses deformable contour models (snakes) to track the nonrigid motions of facial features in video images is developed. The technique estimates muscle actuator controls with sufficient accuracy to permit the face model to resynthesize transient expressions.

17.
Eye feature extraction based on 3D deformable templates
Eye feature extraction plays a very important role in many face-perception applications. To solve the problem of extracting eye features when the vertical rotation angle of the face is fairly large, a new eye-feature extraction method based on 3D deformable templates is proposed. The method employs two newly proposed techniques: a face-pose estimation strategy that estimates the vertical rotation of the face, and 3D deformable-template matching that extracts the precise eye features. Experiments show that the method adapts to changes in the vertical rotation angle of face images and achieves very good feature-extraction results.

18.
Deformation modeling for robust 3D face matching
Face recognition based on 3D surface matching is promising for overcoming some of the limitations of current 2D image-based face recognition systems. The 3D shape is generally invariant to the pose and lighting changes, but not invariant to the non-rigid facial movement, such as expressions. Collecting and storing multiple templates to account for various expressions for each subject in a large database is not practical. We propose a facial surface modeling and matching scheme to match 2.5D facial scans in the presence of both non-rigid deformations and pose changes (multiview) to a 3D face template. A hierarchical geodesic-based resampling approach is applied to extract landmarks for modeling facial surface deformations. We are able to synthesize the deformation learned from a small group of subjects (control group) onto a 3D neutral model (not in the control group), resulting in a deformed template. A user-specific (3D) deformable model is built by combining the templates with synthesized deformations. The matching distance is computed by fitting this generative deformable model to a test scan. A fully automatic and prototypic 3D face matching system has been developed. Experimental results demonstrate that the proposed deformation modeling scheme increases the 3D face matching accuracy.

19.
Automatic creation of 3D facial models
Model-based encoding of human facial features for narrowband visual communication is described. Based on an already prepared 3D human model, this coding method detects and understands a person's body motion and facial expressions. It expresses the essential information as compact codes and transmits it. At the receiving end, this code becomes the basis for modifying the 3D model of the person and thereby generating lifelike human images. The feature extraction used by the system to acquire data for regions or edges that express the eyes, nose, mouth, and outlines of the face and hair is discussed. The way in which the system creates a 3D model of the person by using the features extracted in the first part to modify a generic head model is also discussed.
