Similar Documents (20 results)
1.
We present a novel method for caricature synthesis based on mean value coordinates (MVC). Our method can be applied to any single frontal face image to learn a specified caricature face pair for frontal and 3D caricature synthesis. This technique only requires one or a small number of exemplar pairs and a natural frontal face image training set, while the system can transfer the style of the exemplar pair across individuals. Further exaggeration can be fulfilled in a controllable way. Our method is further applied to facial expression transfer, interpolation, and exaggeration, which are applications of expression editing. Additionally, we have extended our approach to 3D caricature synthesis based on the 3D version of MVC. With experiments we demonstrate that the transferred expressions are credible and the resulting caricatures can be characterized and recognized.
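Mean value coordinates express a point as a normalized weighted combination of surrounding control vertices, which is what allows an exemplar deformation to be carried over to a new face. The sketch below is a minimal 2D illustration of the MVC weights themselves (Floater's formula), not the paper's caricature pipeline; the polygon and test point are made-up inputs.

```python
import numpy as np

def mean_value_coordinates(v, poly):
    """MVC weights of point v w.r.t. a closed 2D polygon `poly` (n x 2).
    For v strictly inside the polygon, sum(w) == 1 and w @ poly == v."""
    d = poly - v                                   # vectors from v to each vertex
    r = np.linalg.norm(d, axis=1)                  # distances ||p_i - v||
    n = len(poly)
    alpha = np.empty(n)                            # signed angle spanned by edge (p_i, p_{i+1}) at v
    for i in range(n):
        j = (i + 1) % n
        cross = d[i, 0] * d[j, 1] - d[i, 1] * d[j, 0]
        alpha[i] = np.arctan2(cross, d[i] @ d[j])
    w = (np.tan(alpha / 2) + np.tan(np.roll(alpha, 1) / 2)) / r
    return w / w.sum()

# sanity check on a unit square: the weights reproduce the interior point
square = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
v = np.array([0.3, 0.6])
print(np.allclose(mean_value_coordinates(v, square) @ square, v))   # True
```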

2.
We propose a novel approach for face tracking, resulting in a visual feedback loop: instead of trying to adapt a more or less realistic artificial face model to an individual, we construct from precise range data a specific texture and wireframe face model, whose realism allows the analysis and synthesis modules to visually cooperate in the image plane, by directly using 2D patterns synthesized by the face model. Unlike other feedback loops found in the literature, we do not explicitly handle the 3D complex geometric data of the face model, to make real-time manipulations possible. Our main contribution is a complete face tracking and pose estimation framework, with few assumptions about the face rigid motion (allowing large rotations out of the image plane), and without marks or makeup on the user's face. Our framework feeds the feature-tracking procedure with synthesized facial patterns, controlled by an extended Kalman filter. Within this framework, we present original and efficient geometric and photometric modelling techniques, and a reformulation of a block-matching algorithm to make it match synthesized patterns with real images, and avoid background areas during the matching. We also offer some numerical evaluations, assessing the validity of our algorithms, and new developments in the context of facial animation. Our face-tracking algorithm may be used to recover the 3D position and orientation of a real face and generate a MPEG-4 animation stream to reproduce the rigid motion of the face with a synthetic face model. It may also serve as a pre-processing step for further facial expression analysis algorithms, since it locates the position of the facial features in the image plane, and gives precise 3D information to take into account the possible coupling between pose and expressions of the analysed facial images.
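As a rough illustration of the matching step described above, here is a minimal masked block-matching sketch: a synthesized facial patch is compared against a real frame with a sum-of-absolute-differences score computed only over foreground pixels, so background areas cannot bias the match. This is an assumed, simplified stand-in for the paper's reformulated block matcher; the SAD criterion, search window, and mask handling are my choices.

```python
import numpy as np

def masked_block_match(template, mask, image, center, search=8):
    """Find the integer displacement (dy, dx), within +/-`search` pixels of
    `center`, that best aligns a synthesized facial patch `template` with the
    real frame `image`. All arrays are float grayscale; `mask` is a boolean
    foreground mask, so background pixels are ignored in the SAD score."""
    h, w = template.shape
    cy, cx = center
    best_score, best_disp = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = cy + dy, cx + dx
            patch = image[y:y + h, x:x + w]
            if patch.shape != template.shape:      # window fell off the frame
                continue
            sad = np.abs(patch - template)[mask].sum()
            if sad < best_score:
                best_score, best_disp = sad, (dy, dx)
    return best_disp
```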

3.
Hallucinating a photo-realistic frontal face image from a low-resolution (LR) non-frontal face image is beneficial for a series of face-related applications. However, previous efforts either focus on super-resolving high-resolution (HR) face images from nearly frontal LR counterparts or on frontalizing non-frontal HR faces. It is necessary to address all these challenges jointly for real-world face images in unconstrained environments. In this paper, we develop a novel Cross-view Information Interaction and Feedback Network (CVIFNet), which simultaneously handles non-frontal LR face image super-resolution (SR) and frontalization in a unified framework and lets them interact with each other to further improve their performance. Specifically, the CVIFNet is composed of two feedback sub-networks for frontal and profile face images. Considering that reliable correspondence between frontal and non-frontal face images can be crucial and contribute to face hallucination in a different manner, we design a cross-view information interaction module (CVIM) to aggregate HR representations of different views produced by the SR and frontalization processes to generate finer face hallucination results. Besides, since 3D rendered facial priors contain rich hierarchical features, such as low-level (e.g., sharp edges and illumination) and perception-level (e.g., identity) information, we design an identity-preserving consistency loss based on 3D rendered facial priors, which ensures that the high-frequency details of the frontal face hallucination result are consistent with the profile. Extensive experiments demonstrate the effectiveness and advantages of CVIFNet.

4.
陈娜 《激光与红外》2022,52(6):923-930
Reconstructing a 3D face model from a single face photograph is a highly challenging research direction in both computer graphics and visible-light imaging, and is of great practical significance for applications such as face recognition, face imaging, and facial animation. To address the high complexity, heavy computation, local optima, and poor initialization of existing algorithms, this paper proposes an automatic single-image 3D face reconstruction algorithm based on a deep convolutional neural network. The algorithm first extracts dense information from the 2D face image based on a 3D transformation model, then constructs a deep convolutional network architecture and designs an overall loss function to directly learn the mapping from 2D image pixels to 3D coordinates, thereby building the 3D face model automatically. Comparative and simulation experiments show that the algorithm achieves a lower normalized mean error in 3D face reconstruction and can automatically reconstruct a 3D face model from only a single 2D face image. The generated 3D face models are robust and accurate, fully preserve expression details, handle faces in different poses well, and can be rendered from any viewpoint in 3D space, meeting the needs of a wider range of practical applications.
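To make the pixel-to-3D-coordinate mapping concrete, here is a minimal PyTorch-style sketch of a fully convolutional encoder-decoder that regresses a three-channel position map (the x, y, z surface coordinate at every pixel). The architecture, layer sizes, and loss are illustrative assumptions and not the network described in the paper.

```python
import torch
import torch.nn as nn

class PixelTo3D(nn.Module):
    """Toy encoder-decoder: 3-channel face crop -> 3-channel position map
    holding the (x, y, z) coordinate of the face surface at each pixel."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, img):               # img: (B, 3, H, W), H and W divisible by 4
        return self.decoder(self.encoder(img))

# a training step would minimise a per-pixel error against ground-truth maps, e.g.
#   loss = ((model(img) - gt_position_map) ** 2).mean()
```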

5.
This paper presents a hierarchical animation method for transferring facial expressions extracted from a performance video to different facial sketches. Without any expression example obtained from target faces, our approach can transfer expressions by motion retargetting to facial sketches. However, in practical applications, the image noise in each frame will reduce the feature extraction accuracy from source faces, and the shape difference between source and target faces will influence the animation quality for representing expressions. To address these difficulties, we propose a robust neighbor-expression transfer (NET) model, which aims at modeling the spatial relations among sparse facial features. By learning expression behaviors from neighbor face examples, the NET model can reconstruct facial expressions from noisy signals. Based on the NET model, we present a hierarchical method to animate facial sketches. The motion vectors on the source face are adjusted from coarse to fine on the target face. Accordingly, the animation results are generated to replicate source expressions. Experimental results demonstrate that the proposed method can effectively and robustly transfer expressions from noisy animation signals.

6.
The initial conception of a model-based analysis synthesis image coding (MBASIC) system is described and a construction method for a three-dimensional (3-D) facial model that includes synthesis methods for facial expressions is presented. The proposed MBASIC system is an image coding method that utilizes a 3-D model of the object which is to be reproduced. An input image is first analyzed and an output image using the 3-D model is then synthesized. A very low bit rate image transmission can be realized because the encoder sends only the required analysis parameters. Output images can be reconstructed without the noise corruption that reduces naturalness because the decoder synthesizes images from a similar 3-D model.

In order to construct a 3-D model of a person's face, a method is developed which uses a 3-D wire frame face model. A full-face image is then projected onto this wire frame model. For the synthesis of facial expressions, two different methods are proposed: a clip-and-paste method and a facial structure deformation method.


7.
Expression analysis and synthesis based on a model separating 2D geometry and texture information
林学  洪鹏宇 《电子学报》1998,26(11):124-127
This paper proposes a new method for synthesizing realistic expression images of a specific person. The method uses a 2D mesh to describe the geometric changes of the face under different expressions, and an analysis procedure represents the mesh sequence with a parametric model, which greatly compresses the data. When reconstructing expression images, a reference frame provides the scene texture information; combined with the geometric information given by the parametric mesh model, image warping and interpolation techniques are used to generate realistic expression images. Preliminary experiments on analyzing and synthesizing expression images of the lower half of the face demonstrate the effectiveness of the method.

8.
An automatic facial motion image synthesis scheme (driven by speech) and a real-time image synthesis design are presented. The purpose of this research is to realize an intelligent human-machine interface or intelligent communication system with talking head images. A human face is reconstructed on the display of a terminal using a 3-D surface model and texture mapping technique. Facial motion images are synthesized naturally by transformation of the lattice points on 3-D wire frames. Two driving motion methods, a text-to-image conversion scheme and a voice-to-image conversion scheme, are proposed. In the first method, the synthesized head image can appear to speak some given words and phrases naturally. In the second case, some mouth and jaw motions can be synthesized in synchronization with voice signals from a speaker. Facial expressions other than mouth shape and jaw position can be added at any moment, so it is easy to make the facial model appear angry, to smile, to appear sad, etc., by special modification rules. These schemes were implemented on a parallel image computer system. A real-time image synthesizer was able to generate facial motion images on the display at a TV video rate.

9.
In telecommunication applications such as videophony and teleconferencing, the representation and modelling of the human face and its expressions has seen important development. In this paper, we present the basic principles of image sequence coding, with the main approaches and methods leading to 3D model-based coding. Then, we introduce our 3D wire-frame model, with which we have developed some compression and triangulated surface representation methods. An original approach to simulate and reproduce facial expressions with radial basis functions is also presented.
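The last sentence refers to radial basis function interpolation of facial motion: displacements prescribed at a few control points are propagated smoothly to every vertex of the wire-frame model. The sketch below is a minimal version under assumed choices (linear kernel, tiny regularization), not the paper's exact formulation.

```python
import numpy as np

def rbf_deform(vertices, controls, displacements, phi=lambda r: r):
    """Deform mesh `vertices` (n x 3) so that control points `controls` (m x 3)
    move by `displacements` (m x 3), interpolating with a radial basis phi."""
    # solve for per-control weights W in Phi @ W = displacements
    Phi = phi(np.linalg.norm(controls[:, None] - controls[None], axis=-1))
    W = np.linalg.solve(Phi + 1e-9 * np.eye(len(controls)), displacements)
    # evaluate the interpolant at every mesh vertex and add it as a displacement
    K = phi(np.linalg.norm(vertices[:, None] - controls[None], axis=-1))
    return vertices + K @ W
```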

10.
In this paper, we present a probabilistic approach to determining whether extracted facial features from a video sequence are appropriate for creating a 3D face model. In our approach, the distance between two feature points selected from the MPEG-4 facial object is defined as a random variable for each node of a probability network. To avoid generating an unnatural or non-realistic 3D face model, automatically extracted 2D facial features from a video sequence are fed into the proposed probabilistic network before a corresponding 3D face model is built. Simulation results show that the proposed probabilistic network can be used as a quality control agent to verify the correctness of extracted facial features.

11.
Multi-pose face image synthesis
This paper proposes a method for synthesizing a frontal face image from a single rotated face image. The test face image is first represented as a shape vector and a texture vector, and the theory of linear object classes is used to synthesize the frontal shape and texture. The texture of the test image is then combined with the synthesized texture to produce the final frontal texture, and a piecewise triangle warping algorithm combines the synthesized texture and shape to generate the frontal image of the test face. Experimental results show that the method can effectively synthesize a frontal face image from a single rotated face image, and the synthesized frontal images substantially improve the recognition rate.
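A minimal sketch of the linear-object-class idea used above, assuming we have shape (or texture) vectors of prototype faces observed both in the rotated pose and frontally; the variable names and the plain least-squares fit are illustrative, not the paper's exact procedure.

```python
import numpy as np

def linear_class_frontalize(rotated_probe, rotated_protos, frontal_protos):
    """Express the rotated probe vector as a linear combination of rotated
    prototype vectors, then apply the same coefficients to the frontal
    prototypes to predict the probe's frontal shape/texture vector.
    rotated_protos, frontal_protos: (k, d) arrays; rotated_probe: (d,)."""
    coeffs, *_ = np.linalg.lstsq(rotated_protos.T, rotated_probe, rcond=None)
    return frontal_protos.T @ coeffs
```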

12.
Automatic generation of multi-expression face portraits
宋红  黄小川  王树良 《电子学报》2013,41(8):1494-1499
A portrait is an artistic form that captures a person's characteristic features while hiding details and preserving privacy. This paper proposes an algorithm that takes a neutral-expression face as input and automatically generates portraits with different expressions. The Active Shape Model (ASM) is first used to extract the key feature points of the face. According to FAP (Facial Animation Parameter) rules for different expressions obtained by statistical learning, the feature points of the neutral face are deformed to generate an expression triangle mesh, and the neutral face image is mapped onto the expression mesh as texture to produce the expression face image. Finally, gradient-domain information and non-photorealistic rendering techniques turn the expressive face into a portrait-style image. The generated multi-expression face portraits are of good quality and can be used in non-photorealistic graphics and digital entertainment applications such as the web, newspapers, and magazines.
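The texture-mapping step above (pasting the neutral image onto the deformed expression mesh) amounts to a piecewise affine warp, one triangle at a time. Below is a minimal barycentric-coordinate sketch with nearest-neighbour sampling; the function and variable names are illustrative, and real implementations typically rely on OpenCV or hardware texture mapping instead.

```python
import numpy as np

def warp_triangle(src_img, dst_img, src_tri, dst_tri):
    """Copy the pixels of one source triangle into its deformed position in the
    destination image. src_tri/dst_tri are (3, 2) arrays of (x, y) corners;
    images are indexed as img[y, x]. No anti-aliasing, nearest-neighbour only."""
    xmin, ymin = np.floor(dst_tri.min(axis=0)).astype(int)
    xmax, ymax = np.ceil(dst_tri.max(axis=0)).astype(int)
    A = np.vstack([dst_tri.T, np.ones(3)])            # barycentric system for the destination triangle
    for y in range(ymin, ymax + 1):
        for x in range(xmin, xmax + 1):
            bary = np.linalg.solve(A, np.array([x, y, 1.0]))
            if (bary >= -1e-6).all():                 # pixel lies inside the triangle
                sx, sy = bary @ src_tri               # same barycentric coords in the source
                dst_img[y, x] = src_img[int(round(sy)), int(round(sx))]
```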

13.
People instinctively recognize facial expression as a key to nonverbal communication, which has been confirmed by many different research projects. A change in intensity or magnitude of even one specific facial expression can cause different interpretations. A systematic method for generating facial expression syntheses that mimics realistic facial expressions and intensities is strongly needed in various applications. Although manually produced animation is typically of high quality, the process is slow and costly, and therefore often impractical for low-polygonal applications. In this paper, we present a simple and efficient emotional-intensity-based expression cloning process for low-polygonal-based applications, by generating a customized face, as well as by cloning facial expressions. We define intensity mappings to measure expression intensity. Once a source expression is determined by a set of suitable parameter values in a customized 3D face and its embedded muscles, expressions for any target face(s) can be easily cloned by using the same set of parameters. Through experimental study, including facial expression simulation and cloning with intensity mapping, our research reconfirms traditional psychological findings. Additionally, we discuss the method's overall usability and how it allows us to automatically adjust a customized face with embedded facial muscles while mimicking the user's facial configuration, expression, and intensity.

14.
15.
A 3D scanner can accurately capture the geometry and texture of a face, but the raw scan data form only a single continuous surface that does not match the actual structure of a face and cannot be used for facial animation. To address this, a face modeling method based on 3D scan data is proposed: a generic face model with a complete structure is first roughly fitted to the scan data, and detail reconstruction techniques then recover the surface details and skin texture of the specific face. Experiments show that the resulting 3D face models are realistic and structurally complete, and can generate continuous, natural expression animations.

16.
署光  姚莉秀  杨晓超  左昕  杨杰 《电子学报》2010,38(8):1798-1802
With the growth of the digital entertainment industry, techniques that generate cartoon faces from photographs will find wide application. Previous work has focused mainly on 2D cartooning with a rather uniform style. For 3D faces, morphable model methods can synthesize 3D faces with various attributes from photographs, but their heavy computation makes them unsuitable for real-time applications. This paper proposes a 3D cartoon face generation method based on a sparse morphable model, which is faster and requires only a single frontal face photograph. A sparse morphable model is first fitted to the photographed face to obtain a person-specific sparse face model; a generic face model is then deformed to the specific face and its texture is synthesized; finally, the 3D face is cartoonized. Experimental results show that the method can quickly and automatically synthesize vivid 3D cartoon faces.

17.
This paper introduces a novel Gabor-Fisher classifier (GFC) for face recognition. The GFC method, which is robust to changes in illumination and facial expression, applies the enhanced Fisher linear discriminant model (EFM) to an augmented Gabor feature vector derived from the Gabor wavelet representation of face images. The novelty of this paper comes from (1) the derivation of an augmented Gabor feature vector, whose dimensionality is further reduced using the EFM by considering both data compression and recognition (generalization) performance; (2) the development of a Gabor-Fisher classifier for multi-class problems; and (3) extensive performance evaluation studies. In particular, we performed comparative studies of different similarity measures applied to various classifiers. We also performed comparative experimental studies of various face recognition schemes, including our novel GFC method, the Gabor wavelet method, the eigenfaces method, the Fisherfaces method, the EFM method, the combination of Gabor and the eigenfaces method, and the combination of Gabor and the Fisherfaces method. The feasibility of the new GFC method has been successfully tested on face recognition using 600 FERET frontal face images corresponding to 200 subjects, which were acquired under variable illumination and facial expressions. The novel GFC method achieves 100% accuracy on face recognition using only 62 features.
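A stripped-down sketch of the feature side of this pipeline: build a small bank of Gabor kernels, concatenate downsampled absolute filter responses into an augmented feature vector, and feed the vectors to a Fisher-style discriminant. scikit-learn's LinearDiscriminantAnalysis stands in for the enhanced Fisher model (EFM); the kernel parameters and sampling step are assumed values, not those used in the paper.

```python
import numpy as np
from scipy.ndimage import convolve
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def gabor_kernel(sigma, theta, lam, size=15):
    """Real part of a 2D Gabor kernel with orientation theta and wavelength lam."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_feature_vector(img, scales=(4, 8), orientations=4, step=4):
    """Concatenate downsampled absolute Gabor responses into one feature vector."""
    feats = []
    for sigma in scales:
        for k in range(orientations):
            resp = convolve(img, gabor_kernel(sigma, k * np.pi / orientations, 2 * sigma))
            feats.append(np.abs(resp[::step, ::step]).ravel())
    return np.concatenate(feats)

# training sketch: X rows are Gabor vectors of face crops, y are subject labels
#   X = np.stack([gabor_feature_vector(im) for im in train_images])
#   clf = LinearDiscriminantAnalysis().fit(X, y)
```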

18.
Facial expressions carry most of the information on the human face that is essential for human-computer interaction. Development of robust algorithms for automatic recognition of facial expressions with high recognition rates has been a challenge for the last 10 years. In this paper, we propose a novel feature selection procedure which recognizes basic facial expressions with high recognition rates by utilizing three-dimensional (3D) geometric facial feature positions. The paper presents a system for classifying expressions into one of the six basic emotional categories: anger, disgust, fear, happiness, sadness, and surprise. The contribution of the paper lies in selecting features independently for each expression, achieving high recognition rates with the proposed geometric facial features chosen per expression. The feature selection procedure is entropy based and is employed independently for each of the six basic expressions. The system's performance is evaluated on the 3D facial expression database BU-3DFE. Experimental results show that the proposed method outperforms the latest methods reported in the literature.
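As an illustration of entropy-based selection over geometric features (for example, distances between 3D landmarks), the sketch below ranks candidate features by information gain. The histogram discretisation and the single joint ranking are simplifying assumptions; the paper performs the selection independently for each of the six expressions.

```python
import numpy as np

def information_gain(feature, labels, bins=10):
    """How much knowing the discretised feature reduces the label entropy."""
    def entropy(y):
        _, counts = np.unique(y, return_counts=True)
        p = counts / counts.sum()
        return -(p * np.log2(p)).sum()
    edges = np.histogram_bin_edges(feature, bins=bins)[1:-1]
    binned = np.digitize(feature, edges)
    cond = sum((binned == b).mean() * entropy(labels[binned == b])
               for b in np.unique(binned))
    return entropy(labels) - cond

def select_features(X, labels, k=10):
    """Rank all candidate geometric features by information gain and keep the top k."""
    scores = np.array([information_gain(X[:, j], labels) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k]
```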

19.
付昀  郑南宁  张婷 《电子学报》2003,31(Z1):1963-1970
Reconstruction from incomplete viewpoint information uses a small amount of viewpoint data about an object, combined with statistical methods, to predict and recover the missing viewpoints. This paper investigates techniques for recovering other viewpoints from a single 2D portrait and reconstructing its 3D appearance, and proposes a framework for continuous viewpoint extrapolation from incomplete portrait information. Through three modules (offline construction of a large-viewpoint database, model matching for a single-view input, and large viewpoint-space mapping with continuous view reconstruction), continuous multi-view images are synthesized from a single view. The technique obtains a highly realistic 3D effect from as little 2D portrait information as possible, bypasses 3D model reconstruction, and can recover texture information of unseen parts of the face. Finally, drawing on practical experience in building the AIAR portrait database, the paper discusses portrait data acquisition and classification, and presents a method and a concrete example for building a portrait database under limited capture conditions.

20.
Modeling and animating a specific 3D face is a very interesting area of computer graphics. This paper proposes a new method for building and animating a specific face model from two orthogonal photographs. The accurate positions of facial feature points are first located automatically with the snake active contour tracking technique, and the local elastic deformation method presented in the paper then adapts a generic face model to the specific face; a high-resolution texture image generated with image mosaicing techniques is used for texture mapping. The method computes local facial deformation from the displacements of the feature points and the relative positions of non-feature points with respect to the feature points, and can also handle large facial changes and motions. Combined with a muscle model, it performs facial animation in real time, and is fast and efficient. Experimental results are given at the end of the paper.
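A minimal sketch of the idea that non-feature vertices move according to the displacements of nearby feature points and their relative positions. The inverse-distance weighting, influence radius, and exponent are assumptions standing in for the paper's local elastic deformation formulation.

```python
import numpy as np

def local_elastic_deform(vertices, feature_pts, feature_disp, radius=0.05, p=2.0):
    """Displace each generic-model vertex by an inverse-distance-weighted blend
    of the displacements of feature points within `radius` of it.
    vertices: (n, 3); feature_pts, feature_disp: (m, 3)."""
    d = np.linalg.norm(vertices[:, None] - feature_pts[None], axis=-1)
    w = 1.0 / (d ** p + 1e-12)
    w[d > radius] = 0.0                       # only nearby features influence a vertex
    norm = w.sum(axis=1, keepdims=True)
    norm[norm == 0] = 1.0                     # vertices far from all features stay put
    return vertices + (w / norm) @ feature_disp
```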
