Similar Documents
20 similar documents found
1.
In this paper, we present a novel approach to synthesizing frontal and semi-frontal cartoon-like facial caricatures from a single image. The caricature is generated by warping the input face from its original feature points to the corresponding exaggerated feature points. A 3D mean face model is incorporated to facilitate the face-to-caricature mapping by inferring the depth of the 3D feature points and the spatial transformation. The 3D face is then deformed using non-negative matrix factorization and projected back to the image plane for further warping. To efficiently solve the nonlinear spatial transformation, we propose a novel initialization scheme for the Levenberg-Marquardt optimization. Based on the spatial transformation, exaggeration is applied to the most salient features by amplifying their normalized difference from the mean. Non-photorealistic rendering (NPR) based stylization completes the cartoon caricature. Experiments demonstrate that our method outperforms existing methods in terms of view angles and aesthetic visual quality.
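The exaggeration step described above, pushing salient features further along their deviation from the mean face, can be sketched as follows. This is a minimal illustration: the function name and the single uniform gain `k` are assumptions, whereas the paper scales each feature's normalized difference individually.

```python
import numpy as np

def exaggerate(points, mean_points, k=1.5):
    # Move each feature point further along its deviation from the
    # mean face; k > 1 amplifies the salient differences.
    points = np.asarray(points, dtype=float)
    mean_points = np.asarray(mean_points, dtype=float)
    return mean_points + k * (points - mean_points)

# A point 2 units right of its mean position moves to 3 units right.
out = exaggerate([[2.0, 0.0]], [[0.0, 0.0]], k=1.5)
print(out)  # [[3. 0.]]
```

Points that already coincide with the mean face are left unchanged, so only distinctive features are exaggerated.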

2.
Portrait style transfer aims to transfer the style of a reference artistic portrait onto a photograph of a person while preserving the basic semantic structure of the face. Because human vision is highly sensitive to the semantic structure of portrait faces, portrait style transfer is more challenging than style transfer on general images. Existing style transfer methods account for neither the abstractness of the caricature style nor the preservation of facial semantic structure, so when applied to portrait caricaturization they suffer from severe structural collapse and confused feature information. To address this, a dual-stream cyclic mapping network, DSCM, is proposed. First, a structure-consistency loss is introduced to preserve the integrity of the portrait's overall semantic structure. Second, a feature encoder incorporating U2-Net is designed to help the network capture more useful feature information from the input image at different scales. Finally, a style discriminator is introduced to discriminate the encoded style features, helping the network learn abstract caricature-style features closer to the target image. Qualitative and quantitative comparisons with five state-of-the-art methods show that the proposed method outperforms them all: it not only fully preserves the overall structure of the portrait and the basic semantic structure of the face, but also adequately learns the target style.

3.
Cartoon facial caricature generation based on feature discovery
Feature extraction and statistics over 100 real photographs each of adult men and women yield the distribution of average facial features. The features of a newly input face photograph are compared against this distribution to discover its relatively prominent features. Combining active shape model (ASM) feature extraction with feature line pairs, the prominent features are automatically deformed to generate a cartoon caricature of the subject. Experimental results show that the method benefits from a large volume of face data, automated feature extraction and discovery, and good deformation quality.

4.
Caricature is a popular artistic medium widely used for effective communication. Its fascination lies in its expressive depiction of a person's prominent features, usually realized through the so-called exaggeration technique. This paper proposes a new example-based automatic caricature generation system supporting the exaggeration of both the shapes of facial components and the spatial relationships among them. Given a photograph of a face, the system automatically computes feature vectors representing the shapes of the facial components as well as the spatial relationships among them. These features are exaggerated and then used to search the learning database for the corresponding caricature components and to arrange the retrieved components into the caricature. Experimental results show that our system can generate caricatures in the example style that capture the prominent features of the subjects.

5.

Generating dynamic 2D image-based facial expressions is a challenging task in facial animation. Much research has focused on performance-driven facial animation from given videos or images of a target face, while animating a single face image driven by emotion labels is a less explored problem. In this work, we treat the task of animating a single face image from emotion labels as a conditional video prediction problem, and propose a novel framework combining factored conditional restricted Boltzmann machines (FCRBM) and a reconstruction contractive auto-encoder (RCAE). A modified RCAE with an associated efficient training strategy is used to extract low-dimensional features and reconstruct face images. The FCRBM acts as the animator, predicting the facial expression sequence in the feature space given discrete emotion labels and a frontal neutral face image as input. Quantitative and qualitative evaluations on two facial expression databases, and comparison with the state of the art, show the effectiveness of the proposed framework for animating a frontal neutral face image from given emotion labels.


6.
Fu Youjia. Computer Engineering, 2021, 47(4): 197-203, 210
To address the heavy demands that current learning-based pose estimation methods place on training samples and equipment, a training-free method is proposed for estimating the face pose in a single image from facial landmark locations. A sparse generic face model is built with the Adrian Bulat facial landmark detector and Candide-3, and the facial-feature landmarks are obtained. The model's rotation range about the Z axis and the search step are determined; at each candidate Z-axis rotation angle, a modified Newton method aligns the model's facial-feature corner points with those in the image through rotation, translation, and scaling, yielding the model's rotation angles about the X and Y axes and the loss value for that Z-axis candidate. The best rotation about all three axes is chosen by the minimum loss value. Experimental results show that the method can quickly estimate self-occluded faces at large pose angles, with mean errors of 3.79°, 4.37°, and 6.04° on the public face databases Multi-PIE, BIWI, and AFLW, clearly outperforming comparable face pose estimation algorithms, and has good practical performance.
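The align-per-candidate-angle idea can be illustrated with a brute-force grid search over Euler angles under an orthographic projection. This is a simplified sketch under stated assumptions: the paper refines each Z-axis candidate with a modified Newton method and also solves for translation and scale, both omitted here, and the function names are illustrative.

```python
import numpy as np

def rot(ax, ay, az):
    # Rotation matrices about X, Y, Z (radians), composed as Rz @ Ry @ Rx.
    cx, sx = np.cos(ax), np.sin(ax)
    cy, sy = np.cos(ay), np.sin(ay)
    cz, sz = np.cos(az), np.sin(az)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def pose_search(model3d, image2d, steps=np.deg2rad(np.arange(-30, 31, 5))):
    # Exhaustively score candidate angle triples by the reprojection
    # error of the orthographically projected model landmarks.
    best_angles, best_err = None, np.inf
    for ax in steps:
        for ay in steps:
            for az in steps:
                proj = (rot(ax, ay, az) @ model3d.T).T[:, :2]
                err = np.sum((proj - image2d) ** 2)
                if err < best_err:
                    best_angles, best_err = (ax, ay, az), err
    return best_angles, best_err
```

With a non-degenerate landmark set, the pose whose projection best matches the detected 2D landmarks is recovered; the paper replaces the inner two loops with a Newton-style refinement for speed.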

7.
We present a data-driven method for automatically generating a 3D cartoon from a real 3D face. Given a sparse set of 3D real faces and their corresponding cartoon faces modeled by an artist, our method models the face in each subspace as the deformation of its nearby exemplars and learns a mapping between the deformations defined by the real faces and their cartoon counterparts. To reduce the number of exemplars needed for learning, we regress a collection of linear mappings defined locally in both the face geometry and identity spaces and develop a progressive scheme allowing users to gradually add new exemplars for training. At runtime, our method first finds the nearby exemplars of an input real face and then constructs the resulting cartoon face from the corresponding cartoon faces of those exemplars and the local deformations mapped from the real-face subspace. Our method greatly simplifies the cartoon generation process by learning artistic styles from a sparse set of exemplars. We validate its efficiency and effectiveness by applying it to faces with different facial features. Results demonstrate that our method not only preserves the artistic style of the exemplars but also keeps the unique facial geometric features of different identities.
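The core regression, a locally linear map from real-face deformations to cartoon deformations, reduces to ordinary least squares for a single neighbourhood. This is an illustrative sketch with assumed variable names; the paper fits one such map per local region of the geometry and identity spaces.

```python
import numpy as np

def fit_local_mapping(real_defs, cartoon_defs):
    # Solve cartoon_defs ≈ real_defs @ W in the least-squares sense;
    # one such linear map per local neighbourhood of exemplars.
    W, *_ = np.linalg.lstsq(real_defs, cartoon_defs, rcond=None)
    return W

def apply_mapping(W, real_def):
    # Map a new real-face deformation vector into the cartoon space.
    return real_def @ W
```

At runtime a new face's deformation is sent through the map of its nearest exemplar neighbourhood, which is what keeps the exemplar set small.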

8.
Pose-Robust Facial Expression Recognition Using View-Based 2D + 3D AAM
This paper proposes a pose-robust face tracking and facial expression recognition method using a view-based 2D + 3D active appearance model (AAM) that extends the 2D + 3D AAM to the view-based approach, where one independent face model is used for a specific view and an appropriate face model is selected for the input face image. Our extension has been conducted in many aspects. First, we use principal component analysis with missing data to construct the 2D + 3D AAM, owing to the missing data in the posed face images. Second, we develop an effective model selection method that directly uses the pose angle estimated by the 2D + 3D AAM, which makes face tracking pose-robust and feature extraction for facial expression recognition accurate. Third, we propose a double-layered generalized discriminant analysis (GDA) for facial expression recognition. Experimental results show the following: 1) face tracking by the view-based 2D + 3D AAM, which uses multiple face models with one model per view, is more robust to pose change than tracking by an integrated 2D + 3D AAM, which uses one integrated face model for all three views; 2) the double-layered GDA extracts good features for facial expression recognition; and 3) the view-based 2D + 3D AAM outperforms other existing models at pose-varying facial expression recognition.

9.
Because existing face pose estimation methods are susceptible to self-occlusion, an improved ASM algorithm is used to extract facial feature points, and geometric statistics of facial morphology are used to estimate the depth values of those points. A sparse face model is built from the main facial feature points; after the face pose is roughly estimated from the relevant feature points, the 3D spatial pose is then estimated precisely by least squares. Experimental results show that the method still produces good estimates under self-occlusion and achieves good pose estimation accuracy compared with similar methods.

10.
By analyzing the biological background and mathematical properties of Gabor wavelets and sparse representation, a facial expression recognition method based on the two is proposed. The Gabor wavelet transform extracts features from expression images, and an over-complete dictionary of the training samples' Gabor features is built. A sparse representation model then optimizes the feature vectors of the facial expression images, and multiple classifiers are fused for recognition. Experimental results show that the method effectively extracts feature information from expression images and improves the expression recognition rate.
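The Gabor feature-extraction stage can be sketched in NumPy as below. This is a minimal real-valued Gabor bank with assumed kernel parameters; the paper builds an over-complete dictionary from such features across multiple scales and orientations, which is omitted here.

```python
import numpy as np

def gabor_kernel(ksize=9, sigma=2.0, theta=0.0, lam=4.0, gamma=0.5, psi=0.0):
    # Real part of a Gabor filter: a Gaussian envelope modulating a
    # cosine carrier of wavelength lam along orientation theta.
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * xr / lam + psi)

def gabor_features(img, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    # Filter responses at several orientations, flattened into one
    # vector (valid convolution via explicit patch sums; fine for
    # small images, illustration only).
    feats = []
    for th in thetas:
        k = gabor_kernel(theta=th)
        ks = k.shape[0]
        h, w = img.shape
        resp = np.empty((h - ks + 1, w - ks + 1))
        for i in range(resp.shape[0]):
            for j in range(resp.shape[1]):
                resp[i, j] = np.sum(img[i:i + ks, j:j + ks] * k)
        feats.append(resp.ravel())
    return np.concatenate(feats)
```

The concatenated responses form the per-image feature vector that would populate the over-complete dictionary.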

11.
12.
We introduce a new markerless 3D face tracking approach for 2D videos captured by a single consumer-grade camera. Our approach takes detected 2D facial features as input and matches them with projections of the 3D features of a deformable model to determine its pose and shape. To make the tracking and reconstruction more robust, we add a smoothness prior for pose and deformation changes of the faces. Our major contribution lies in the formulation of the deformation prior, which we derive from a large database of facial animations showing different (dynamic) facial expressions of a fairly large number of subjects. We split these animation sequences into snippets of fixed length, which we use to predict the facial motion based on previous frames. To keep the deformation model compact and independent of individual physiognomy, we represent it by deformation gradients (instead of vertex positions) and apply a principal component analysis in deformation-gradient space to extract the major modes of facial deformation. Since the facial deformation is optimized during tracking, it is particularly easy to apply the expressions to other physiognomies and thereby re-target them. We demonstrate the effectiveness of our technique on a number of examples.
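The deformation-prior construction, a PCA over per-frame deformation-gradient vectors rather than vertex positions, can be sketched as follows. This is an illustrative NumPy version; computing the deformation gradients themselves from the meshes is omitted, and the function names are assumptions.

```python
import numpy as np

def pca_modes(D, n_modes):
    # D: (frames, dims) matrix of per-frame deformation-gradient
    # vectors. Right singular vectors of the centred data give the
    # major modes of facial deformation.
    mean = D.mean(axis=0)
    X = D - mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    modes = Vt[:n_modes]
    coeffs = X @ modes.T   # low-dimensional coordinates per frame
    return mean, modes, coeffs

def reconstruct(mean, modes, coeffs):
    # Map mode coefficients back to deformation-gradient space.
    return mean + coeffs @ modes
```

Because the model lives in deformation-gradient space, the same mode coefficients can be applied to a different physiognomy, which is what enables the retargeting described above.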

13.
Chen Jingying, Xu Ruyi, Liu Leyuan. Multimedia Tools and Applications, 2018, 77(22): 29871-29887

Facial expression recognition (FER) is important in vision-related applications. Deep neural networks demonstrate impressive performance in face recognition; however, this approach relies heavily on a great deal of manually labeled training data, which is not available for facial expressions in real-world applications. Hence, we propose a powerful facial feature called the deep peak–neutral difference (DPND) for FER. DPND is defined as the difference between the deep representations of the fully expressive (peak) and neutral facial expression frames. The difference tends to emphasize the facial parts that change in the transition from the neutral to the expressive face and to eliminate the face identity information retained in the deep neural network, which was trained on a large-scale face recognition dataset and fine-tuned for facial expression. Furthermore, unsupervised clustering and semi-supervised classification methods are presented to automatically acquire the neutral and peak frames from the expression sequence. The proposed facial expression feature achieved encouraging results on public databases, which suggests that it has strong potential for recognizing facial expressions in real-world applications.


14.
Partial least squares regression (PLSR) is applied to the simultaneous recognition of face identity and expression. First, facial features are extracted from each face image and the corresponding semantic features are defined. For the facial features, several facial key points are located in each image, and the Gabor wavelet coefficients at those points (Gabor features) together with the point coordinates (geometric features) serve as the image's input features. The semantic features are defined as the expression category and the identity of the face in the image. Second, kernel principal component analysis (KPCA) fuses the Gabor and geometric features so that the input features discriminate better. Finally, PLSR builds a relation model between the facial features and the semantic features, and this model is used to recognize the expression and identity of a test face image simultaneously. Comparative experiments on the JAFFE expression database and the AR face database confirm the effectiveness of the proposed method.

15.
This paper proposes a novel natural facial expression recognition method that recognizes a sequence of dynamic facial expression images using the differential active appearance model (AAM) and manifold learning as follows. First, the differential-AAM features (DAFs) are computed as the difference of the AAM parameters between an input face image and a reference (neutral expression) face image. Second, manifold learning embeds the DAFs in a smooth and continuous feature space. Third, the input facial expression is recognized in two steps: (1) computing the distances between the input image sequence and the gallery image sequences using the directed Hausdorff distance (DHD), and (2) selecting the expression by a majority vote of the k-nearest-neighbor (k-NN) sequences in the gallery. The DAFs are robust and efficient for facial expression analysis because they eliminate inter-person, camera, and illumination variations. Since the DAFs treat the neutral expression image as the reference image, the neutral expression image must be found effectively. This is done via the differential facial expression probability density model (DFEPDM), using the kernel density approximation of positively directional DAFs changing from neutral to angry (happy, surprised) and negatively directional DAFs changing from angry (happy, surprised) to neutral. A face image is then considered to be the neutral expression if it has the maximum DFEPDM over the input sequences. Experimental results show that (1) the DAFs improve facial expression recognition performance over conventional AAM features by 20%, and (2) the sequence-based k-NN classifier achieves 95% facial expression recognition accuracy on the facial expression database (FED06).
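The sequence-matching step, DHD followed by a k-NN majority vote, can be sketched as below. This is a minimal version over toy DAF sequences; the AAM feature extraction and manifold embedding from the paper are omitted, and the function names are illustrative.

```python
import numpy as np
from collections import Counter

def directed_hausdorff(A, B):
    # DHD(A, B) = max over a in A of the distance from a to its
    # nearest point in B (sequences as (n, d) feature arrays).
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return d.min(axis=1).max()

def knn_vote(query, gallery, k=3):
    # gallery: list of (sequence, expression_label) pairs.
    ranked = sorted(gallery, key=lambda g: directed_hausdorff(query, g[0]))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]
```

Note that the DHD is asymmetric, so the direction of comparison (input to gallery) matters when sequences differ in length.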

16.
As an important component of human-computer interaction systems, facial expression recognition is widely used in security surveillance, human-computer interaction, and other fields, and is a research hotspot in computer vision. Conventional convolutional neural network methods generally take a single face image or facial landmarks as the input for feature extraction and fail to consider expression information over the whole face. A deep learning facial expression recognition model based on three-channel multi-information fusion is proposed: taking as input the relative displacements of the landmark coordinates from the neutral to the peak expression, it extracts feature information from the whole expression image, and it incorporates a sparse autoencoder to improve the efficiency of edge feature extraction. The model was trained and tested on the CK+ dataset; experimental results show that it improves expression recognition accuracy compared with similar algorithms in the field.

17.
Three-dimensional (3D) cartoon facial animation goes one step beyond the challenging task of 3D caricaturing, which generates only still 3D caricatures. In this paper, a 3D cartoon facial animation system is developed for a subject given only a single frontal face image with a neutral expression. The system comprises three steps: 3D cartoon face exaggeration, texture processing, and 3D cartoon facial animation. By following the caricaturing rules of artists, instead of mathematical formulations, 3D cartoon face exaggeration is accomplished at both global and local levels. As a result, the final exaggeration is capable of depicting the characteristics of an input face while achieving artistic deformations. In the texture processing step, texture coordinates of the vertices of the cartoon face model are obtained by mapping the parameterized grid of the standard face model to a cartoon face template and aligning the input face to the face template. Finally, 3D cartoon facial animation is implemented in the MPEG-4 animation framework. To avoid the time-consuming construction of a face animation table, we propose to utilize the tables of existing models through model mapping. Experimental results demonstrate the effectiveness and efficiency of our proposed system.

18.
In 3D facial expression transfer, two key problems are preserving the target model's rich detail so that the generated expression looks real and natural, and reducing the training time required for expression transfer. A detail-preserving 3D facial expression transfer method is proposed. First, the detail features of the 3D face model are extracted to obtain the basic expression with details filtered out; then an improved parametric unsupervised regression transfers the source model's basic expression to the target model; finally, the proposed detail feature vector is used to …

19.
In 3D facial expression recognition, the local binary pattern (LBP) operator offers accurate, fine-grained feature extraction and illumination invariance compared with conventional feature extraction algorithms, but it suffers from high histogram dimensionality, weak discriminative power, and large redundancy. A CBP algorithm is proposed that extracts CBP features from multi-scale blocks over the whole image, extracting classification features more effectively. A sparse representation classifier is then used to classify and recognize the features. Experimental results show that, compared with the conventional LBP algorithm and SVM-based classification, the proposed algorithm greatly improves the facial expression recognition rate.
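A plain LBP with blockwise histograms can be sketched as below. This is a simplified stand-in for the paper's CBP variant: the CBP encoding itself differs from basic LBP, and the multi-scale aspect is reduced here to a single block grid.

```python
import numpy as np

def lbp_image(img):
    # Basic 3x3 LBP: compare each pixel's 8 neighbours with the centre
    # and pack the comparison bits, clockwise from the top-left.
    img = np.asarray(img, dtype=np.int64)
    h, w = img.shape
    c = img[1:-1, 1:-1]
    out = np.zeros_like(c)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out += (nb >= c).astype(np.int64) << bit
    return out

def block_histograms(codes, blocks=2):
    # Concatenate per-block code histograms (the blockwise idea above);
    # each block contributes a 256-bin histogram.
    h, w = codes.shape
    feats = []
    for i in range(blocks):
        for j in range(blocks):
            blk = codes[i*h//blocks:(i+1)*h//blocks,
                        j*w//blocks:(j+1)*w//blocks]
            hist, _ = np.histogram(blk, bins=256, range=(0, 256))
            feats.append(hist)
    return np.concatenate(feats)
```

The concatenated histograms form the feature vector that would then be fed to the sparse representation classifier.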

20.
Constructing a 3D individualized head model from two orthogonal views
A new scheme is presented for automatically constructing a 3D individualized head model from only a side view and a front view of the face. The approach instantiates a generic 3D head model based on a set of the individual's facial features extracted by a local maximum-curvature tracking (LMCT) algorithm that we have developed. A distortion vector field that deforms the generic model to that of the individual is computed by correspondence matching and interpolation. The two input facial images are blended and texture-mapped onto the 3D head model. Arbitrary views of a person can be generated from the two orthogonal images, and the scheme can be implemented efficiently on a low-cost, PC-based platform.
