Similar Documents
20 similar documents found (search time: 46 ms)
1.
Caricature Exaggeration and Synthesis Using Facial Features and Their Relationships   (cited 1 time: 0 self-citations, 1 by others)
In facial caricature, the exaggeration of facial feature shapes (the face contour and the five sense organs) differs from the exaggeration of the relationships between features, a distinction that existing computer-generated caricature algorithms ignore. To address this, a caricature exaggeration and synthesis method is proposed. It consists of three parts: example-based exaggeration of feature shapes, rule-based exaggeration of the relationships between features, and the synthesis of the two. The shape-exaggeration part does not require a large example library; the shape exaggeration can be captured from only a few caricature examples. The relationship-exaggeration part introduces an intuitive "T-shaped rule" to describe how facial features relate to one another during exaggeration. The synthesis part describes both the facial features and the relationships between them as ratios, so that the two kinds of exaggeration can be conveniently combined. Finally, the caricatures generated by the method were evaluated with a questionnaire survey, and the results show good caricature quality.

2.
An algorithm for exaggerating facial features in caricature and optimizing likeness is proposed. The exaggeration part compares not only the relationships of the same feature across different caricature subjects, but also the relationships among different features within the same subject, giving the exaggeration stronger contrast. A likeness measure is introduced to reflect the similarity between the caricature and the original photograph; the computed likeness is then used to further optimize the exaggeration of the facial features. Analysis and experiments show that the caricatures generated by the method achieve good artistic quality.

3.
In this paper, we present a novel approach to synthesizing frontal and semi-frontal cartoon-like facial caricatures from an image. The caricature is generated by warping the input face from the original feature points to the corresponding exaggerated feature points. A 3D mean face model is incorporated to facilitate mapping the face to a caricature by inferring the depth of the 3D feature points and the spatial transformation. The 3D face is then deformed using non-negative matrix factorization and projected back to the image plane for subsequent warping. To efficiently solve the nonlinear spatial transformation, we propose a novel initialization scheme to set up the Levenberg-Marquardt optimization. According to the spatial transformation, exaggeration is applied to the most salient features by exaggerating their normalized difference from the mean. Non-photorealistic rendering (NPR) based stylization completes the cartoon caricature. Experiments demonstrate that our method outperforms existing methods in terms of view angles and aesthetic visual quality.
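The exaggeration rule in the abstract above, amplifying a feature's normalized difference from the mean face, can be sketched as follows. The scale factor, the top-k selection, and the toy landmark coordinates are illustrative assumptions, not values from the paper:

```python
import numpy as np

def exaggerate_features(points, mean_points, scale=1.5, top_k=3):
    """Amplify the most salient features: those whose deviation
    from the mean face is largest (a sketch, not the paper's method)."""
    diff = points - mean_points                      # per-point deviation
    salience = np.linalg.norm(diff, axis=1)          # magnitude per feature point
    idx = np.argsort(salience)[-top_k:]              # top_k most salient points
    out = points.copy()
    out[idx] = mean_points[idx] + scale * diff[idx]  # push them further from the mean
    return out

# Hypothetical 2D landmarks: mean face vs. an input face.
mean_face = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0], [0.5, 0.4]])
face      = np.array([[0.0, 0.0], [1.2, 0.0], [0.5, 1.3], [0.5, 0.4]])
exag = exaggerate_features(face, mean_face, scale=2.0, top_k=2)
```

Only the two points that deviate from the mean are moved; points already at the mean stay put.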

4.
Cartoon Facial Portrait Generation Based on Feature Discovery   (cited 6 times: 0 self-citations, 6 by others)
Feature extraction and statistics were performed on 100 real photographs each of adult men and women to obtain the average distribution of facial features. The features of a newly input facial photograph are compared against this average to discover its relatively prominent features, which are then automatically deformed using a combination of active shape model (ASM) feature extraction and feature-line pairs, producing a cartoon portrait of the subject. Experimental results show the advantages of the method: a large face dataset, automated feature extraction and discovery, and good deformation quality.

5.
We propose a personality trait exaggeration system that emphasizes the impression of a human face in images, based on multi-level feature learning and exaggeration. These features are called the Personality Trait Model (PTM). The abstract level of the PTM consists of social-psychology traits of face perception such as amiable, mean, and cute. The concrete level consists of shape features and texture features. A training phase learns the multi-level features of faces from different images. A statistical survey is conducted to label the sample images with people's first impressions. From images with the same labels, we capture not only shape features but also texture features to enhance the exaggeration effect. The texture feature is expressed as a matrix reflecting the depth of facial organs, wrinkles, and so on. In the application phase, original images are exaggerated using the PTM iteratively, and the exaggeration rate of each iteration is constrained to preserve likeness with the original face. Experimental results demonstrate that our system can effectively emphasize the chosen social-psychology traits.

6.
Caricature is an interesting art that expresses exaggerated views of people and things through drawing. The face caricature is popular and widely used in different applications. To create one, we have to properly extract the unique/specialized features of a person's face. A person's facial features depend not only on his/her natural appearance but also on the associated expression style. Therefore, we would like to extract the neutral facial features and the personal expression style separately for different applications. In this paper, we represent the 3D neutral face models in the BU-3DFE database by sparse signal decomposition in the training phase. With this decomposition, the sparse training data can be used for robust linear subspace modeling of public faces. For an input 3D face model, we fit the model and decompose the 3D geometry into a neutral face and the expression deformation separately. The neutral geometry can be further decomposed into a public face and individualized facial features. We exaggerate the facial features and the expressions by estimating the probability on the corresponding manifold. The public face, the exaggerated facial features, and the exaggerated expression are combined to synthesize a 3D caricature for a 3D face model. The proposed algorithm is automatic and can effectively extract the individualized facial features from an input 3D face model to create a 3D face caricature.

7.
Interactive 3D caricature from harmonic exaggeration   (cited 1 time: 0 self-citations, 1 by others)
A common variant of caricature relies on exaggerating the characteristics of a shape that differ from a reference template, usually the distinctive traits of a human portrait. This work introduces a caricature tool that interactively emphasizes the differences between two three-dimensional meshes. They are represented in the manifold harmonic basis of the shape to be caricatured, providing intrinsic controls on the deformation and its scales. It further provides a smooth localization scheme for the deformation. This lets the user edit the caricature part by part, combining different settings and models of exaggeration, all expressed in terms of harmonic filters. This formulation also allows for interactivity, rendering the resulting 3D shape in real time.
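The per-scale harmonic filtering above can be sketched with a graph-Laplacian stand-in for the manifold harmonic basis. The toy 1-D "mesh" and the per-band gains are assumptions for illustration only:

```python
import numpy as np

# Toy "mesh": a path graph of 1-D vertex heights. The manifold harmonic
# basis is approximated by the eigenvectors of the graph Laplacian.
n = 8
L = np.diag([1.0] + [2.0] * (n - 2) + [1.0])
for i in range(n - 1):
    L[i, i + 1] = L[i + 1, i] = -1.0
evals, evecs = np.linalg.eigh(L)             # harmonic basis (columns of evecs)

template = np.zeros(n)                       # reference shape
shape = np.sin(np.linspace(0.0, np.pi, n))   # shape to caricature
diff = shape - template                      # difference to exaggerate

coeffs = evecs.T @ diff                      # spectrum of the difference
# Harmonic filter: amplify only the fine-scale (high-frequency) bands.
gain = np.where(np.arange(n) >= n // 2, 2.0, 1.0)
caricature = template + evecs @ (gain * coeffs)
```

With all gains set to 1 the original shape is recovered exactly, which is the sanity check for the basis change.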

8.
This paper presents a hierarchical multi-state pose-dependent approach for facial feature detection and tracking under varying facial expression and face pose. For effective and efficient representation of feature points, a hybrid representation that integrates Gabor wavelets and gray-level profiles is proposed. To model the spatial relations among feature points, a hierarchical statistical face shape model is proposed to characterize both the global shape of human face and the local structural details of each facial component. Furthermore, multi-state local shape models are introduced to deal with shape variations of some facial components under different facial expressions. During detection and tracking, both facial component states and feature point positions, constrained by the hierarchical face shape model, are dynamically estimated using a switching hypothesized measurements (SHM) model. Experimental results demonstrate that the proposed method accurately and robustly tracks facial features in real time under different facial expressions and face poses.

9.
10.
This paper presents a cartoon face animation system for mobile platforms. Its input is a 2D photograph of a real face plus a piece of text, and its output is an entertaining cartoon face animation on a mobile phone. First, a cartoon portrait of the subject is generated from the input photograph using a feature-discovery-based portrait generation method. Next, text-driven cartoon face animation is produced on top of the cartoon portrait. Finally, the system is ported to the mobile platform so that the animation plays on a handset. The system achieves good entertainment effects over a LAN and on a PDA.

11.
Portrait style transfer aims to transfer the style of a reference artistic portrait onto a photograph of a person while preserving the basic semantic structure of the face. Because human vision is highly sensitive to facial semantic structure, portrait style transfer is more challenging than general image style transfer. Existing style transfer methods account neither for the abstract nature of the caricature style nor for the preservation of facial semantic structure, so applying them to portrait cartoonization leads to severe structural collapse and confused feature information. To address this, a dual-stream cyclic mapping network (DSCM) is proposed. First, a structural-consistency loss is introduced to preserve the integrity of the portrait's overall semantic structure. Second, a feature encoder incorporating U2-Net is designed to help the network capture more useful feature information from the input image at different scales. Finally, a style discriminator is introduced to discriminate the encoded style features, helping the network learn abstract caricature style features closer to those of the target images. Qualitative and quantitative comparisons with five state-of-the-art methods show that the proposed method outperforms them all: it preserves both the overall structure of the portrait and the basic semantic structure of the face while fully learning the target style.

12.
13.
周仁琴  刘福新 《计算机工程》2008,34(10):277-279
This paper presents a cartoon face animation system based on facial feature analysis. Its input is a 2D photograph of a real face plus a piece of text, and its output is an entertaining cartoon face animation. A facial-feature-analysis-based method generates the cartoon portrait, and text-driven animation is then produced on top of it. The system is ported to a mobile platform to generate cartoon face animations on handsets. Experimental results show that the system produces good entertainment effects on a PDA.

14.
Facial expressions have always attracted considerable attention as a form of nonverbal communication. In visual applications such as movies, games, and animations, people tend to be more interested in exaggerated expressions than in regular ones, since exaggerated expressions deliver more vivid emotions. In this paper, we propose an automatic method for exaggerating facial expressions from motion-captured data according to a given personality type. The exaggerated facial expressions are generated using an exaggeration mapping (EM) that transforms facial motions into exaggerated motions. Because individuals do not all have identical personalities, a conceptual mapping of the individual's personality type is needed when exaggerating facial expressions. The Myers-Briggs Type Indicator, a popular method for classifying personality types, is employed to define the personality-type-based EM. The EM and the resulting facial expression simulations are validated experimentally.
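A minimal sketch of an exaggeration mapping in the spirit of the EM described above: motion displacements are scaled away from the neutral pose with personality-dependent gains. The gain values and channel layout are hypothetical, not taken from the paper:

```python
import numpy as np

def exaggeration_mapping(motion, neutral, gains):
    """Map a captured facial motion frame to an exaggerated one by scaling
    each displacement channel from its neutral pose. `gains` is a
    hypothetical per-channel gain vector derived from a personality type."""
    return neutral + gains * (motion - neutral)

neutral = np.zeros(3)                    # neutral pose (3 motion channels)
frame   = np.array([0.1, -0.2, 0.05])    # captured displacements for one frame
gains   = np.array([1.5, 2.0, 1.0])      # assumed personality-type gain profile
exag = exaggeration_mapping(frame, neutral, gains)
```

A gain of 1.0 leaves a channel unchanged; gains above 1.0 exaggerate it.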

15.
Facial feature detection is an active research topic in computer vision. Because facial appearance and shape vary with conditions, facial feature detection is difficult. To overcome the shortcomings of existing algorithms, a hierarchical probabilistic model is proposed that infers the true facial feature locations given the image measurements. The local shape variation of each facial component is modeled indirectly, and by searching for the optimal structure and parameter settings of the model, the joint relationships among facial components, facial expression, and pose are learned at a higher level. The model combines bottom-up shape constraints on each facial component with top-down constraints on the relationships among components to infer the true feature locations. Simulation experiments on benchmark databases show that the method clearly outperforms state-of-the-art facial feature detection algorithms.

16.
Simultaneously tracking multiple features of a face with rich expressions is a challenging problem. A method based on a spatio-temporal probabilistic graphical model is proposed. In the temporal domain, several mutually independent Condensation-style particle filters track each facial feature separately. Particle filtering is very effective for independent visual tracking problems, but multiple independent trackers ignore the spatial constraints of the face and the natural interdependence of facial features. In the spatial domain, the relationships among facial feature contours are learned in advance from a facial expression database, and Bayesian inference (belief propagation) is used to refine the contour positions of the facial features. Experimental results show that the algorithm can robustly track multiple facial features simultaneously even under large inter-frame motion.

17.
In this paper, we present a 3D face photography system based on a facial expression training dataset composed of both facial range images (3D geometry) and facial textures (2D photographs). The proposed system obtains a 3D geometric representation of a face given as a 2D photograph, which undergoes a series of transformations through the estimated texture and geometry spaces. In the training phase, facial landmarks are obtained by an active shape model (ASM) extracted from the 2D gray-level photographs. Principal component analysis (PCA) is then used to represent the face dataset, defining one orthonormal basis for texture and another for geometry. In the reconstruction phase, the ASM is matched to an input face image; the extracted facial landmarks and the face image are fed to the PCA basis transform, and a 3D version of the 2D input image is built. Experimental tests using a new training dataset of 70 facial expressions from ten subjects show that 3D faces are reconstructed rapidly and maintain spatial coherence consistent with human perception, corroborating the efficiency and applicability of the proposed system.
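The PCA basis construction described above can be sketched as follows. The toy data and the number of retained components are assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy training set: 10 "faces", each a flattened 6-D vector
# (a stand-in for the texture/geometry vectors used in the paper).
X = rng.normal(size=(10, 6))
mean = X.mean(axis=0)
# PCA via SVD of the centered data yields an orthonormal basis.
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
basis = Vt[:4]                               # keep 4 principal components

def project(face):
    """Express a face as its coefficient vector in the PCA basis."""
    return basis @ (face - mean)

def reconstruct(coeffs):
    """Map coefficients back to the face space."""
    return mean + basis.T @ coeffs

face = X[0]
approx = reconstruct(project(face))          # low-dimensional approximation
```

Projecting and reconstructing never increases the distance to the mean, which is the property the reconstruction phase relies on.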

18.
Three-dimensional (3D) cartoon facial animation is one step further than the challenging 3D caricaturing which generates 3D still caricatures only. In this paper, a 3D cartoon facial animation system is developed for a subject given only a single frontal face image of a neutral expression. The system is composed of three steps consisting of 3D cartoon face exaggeration, texture processing, and 3D cartoon facial animation. By following caricaturing rules of artists, instead of mathematical formulations, 3D cartoon face exaggeration is accomplished at both global and local levels. As a result, the final exaggeration is capable of depicting the characteristics of an input face while achieving artistic deformations. In the texture processing step, texture coordinates of the vertices of the cartoon face model are obtained by mapping the parameterized grid of the standard face model to a cartoon face template and aligning the input face to the face template. Finally, 3D cartoon facial animation is implemented in the MPEG-4 animation framework. In order to avoid time-consuming construction of a face animation table, we propose to utilize the tables of existing models through model mapping. Experimental results demonstrate the effectiveness and efficiency of our proposed system.

19.
A paper-cut facial portrait should reproduce vivid image details. To achieve this, a two-stage face paper-cut synthesis method based on facial feature analysis and an image warping algorithm is proposed. Paper-cut artworks by artists are collected, the facial feature regions are segmented, and the geometric features of each component are extracted to build a digital paper-cut database of facial features. Paper-cut synthesis for a target face image then proceeds in two stages. In the first stage, feature points of the target face image are located, its facial feature regions are segmented, and the geometric features of each region are extracted; similarity scores between each target feature and the corresponding components in the database are then computed from both geometric features and shape-context features. By fusing the two kinds of features, the best-matching paper-cut components are selected and assembled into an initial paper-cut portrait. In the second stage, a thin-plate-spline (TPS) warping algorithm deforms the first-stage result to obtain the final paper-cut image. A multi-subject visual evaluation shows that the paper-cut portraits produced by the method are satisfactory.
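The TPS warp used in the second stage can be sketched as follows. This is a standard thin-plate-spline fit on control-point correspondences, not the authors' exact implementation:

```python
import numpy as np

def tps_warp(src, dst, pts):
    """Fit a thin-plate-spline mapping on control points src -> dst,
    then apply it to query points pts (all arrays of shape (n, 2))."""
    def U(r2):
        # TPS kernel U(r) = r^2 log r, written in terms of r^2.
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.where(r2 > 0.0, 0.5 * r2 * np.log(r2), 0.0)

    n = len(src)
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = U(d2)
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    W = np.linalg.solve(A, b)                # spline weights + affine part

    q2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    return U(q2) @ W[:n] + np.hstack([np.ones((len(pts), 1)), pts]) @ W[n:]

# Hypothetical control points: a unit square, slightly perturbed.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src + np.array([[0.1, 0.0], [0.0, 0.1], [-0.1, 0.0], [0.0, -0.1]])
warped = tps_warp(src, dst, src)
```

A TPS interpolates its control points exactly, so warping `src` itself returns `dst`.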

20.
In this paper, we propose a set of automatic stress exaggeration methods that enlarge the differences between stressed and unstressed syllables. These methods can be used in computer-aided language learning systems to help second-language learners perceive stress patterns. They are intended to support hyper-pronunciation training, which is commonly used by teachers in classrooms: exaggeration helps learners become more aware of acoustic features and apply them effectively in their own pronunciation. Duration, pitch, and intensity are claimed to be the main acoustic features closely related to stress in English. Accordingly, four stress exaggeration methods are proposed: (i) duration-based, (ii) pitch-based, (iii) intensity-based, and (iv) a combined method that integrates the duration-based, pitch-based, and intensity-based methods. Our perceptual experiments show that stimuli resynthesised by the proposed methods significantly help learners of English as a Second Language (ESL) perceive English stress patterns.
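A minimal sketch of duration-based stress exaggeration in the spirit of method (i). The stretch factor and the length-preserving renormalization are assumptions for illustration, not the paper's procedure:

```python
import numpy as np

def exaggerate_durations(durations, stressed, factor=1.4):
    """Lengthen stressed syllables and shorten unstressed ones,
    keeping the total utterance length fixed (assumed normalization)."""
    durations = np.asarray(durations, dtype=float)
    stressed = np.asarray(stressed, dtype=bool)
    out = np.where(stressed, durations * factor, durations / factor)
    return out * durations.sum() / out.sum()   # preserve total duration

syllables = [0.20, 0.10, 0.15]       # syllable durations in seconds
stress    = [True, False, False]     # stress pattern of the word
exag = exaggerate_durations(syllables, stress)
```

The stressed/unstressed duration ratio grows while the utterance stays the same length overall.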


Copyright©北京勤云科技发展有限公司  京ICP备09084417号