Similar Articles
20 similar articles found (search time: 31 ms).
1.
Oftentimes facial animation is created separately from overall body motion. Since convincing facial animation is challenging enough in itself, artists tend to create and edit the face motion in isolation. Or, if the face animation is derived from motion capture, it is typically recorded in a mo-cap booth while the performer sits relatively still. In either case, recombining the isolated face animation with body and head motion is non-trivial and often yields an uncanny result if the body dynamics are not properly reflected on the face (e.g. the bouncing of facial tissue when running). We tackle this problem by introducing a simple and intuitive system that allows artists to add physics to facial blendshape animation. Unlike previous methods that try to add physics to face rigs, our method preserves the original facial animation as closely as possible. To this end, we present a novel simulation framework that uses the original animation as per-frame rest poses without adding spurious forces. As a result, in the absence of any external forces or rigid head motion, the facial performance exactly matches the artist-created blendshape animation. In addition, we propose the concept of blendmaterials to give artists an intuitive means to account for changing material properties due to muscle activation. The system automatically combines facial animation and head motion such that they are consistent, while preserving the original animation as closely as possible. It is easy to use and readily integrates with existing animation pipelines.
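A minimal sketch of the per-frame rest-pose idea described above (function names, parameter values, and the integrator are illustrative, not from the paper): each simulated vertex is pulled by a damped spring toward its position in the artist's animation for the current frame, plus an inertial pseudo-force from the rigid head motion, so with no head motion the input animation is reproduced exactly.

```python
import numpy as np

def simulate(anim, head_accel, stiffness=400.0, damping=8.0, dt=1.0 / 30.0):
    """anim: (F, V, 3) blendshape animation; head_accel: (F, 3) rigid head
    acceleration expressed in the face's local frame. Returns (F, V, 3)."""
    F, V, _ = anim.shape
    x = anim[0].copy()            # simulated vertex positions
    v = np.zeros_like(x)          # velocities
    out = np.empty_like(anim)
    for f in range(F):
        rest = anim[f]            # per-frame rest pose: no spurious forces
        # Spring toward the animated pose + damping + inertial pseudo-force
        # from the accelerating head frame (unit mass assumed).
        a = stiffness * (rest - x) - damping * v - head_accel[f]
        v += dt * a               # semi-implicit Euler step
        x += dt * v
        out[f] = x
    return out
```

Because the spring target is re-read from the animation every frame, no force arises when the simulated state already matches the animation, which mirrors the "no spurious forces" property claimed above.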

2.
We present a lightweight non-parametric method to generate wrinkles for 3D facial modeling and animation. What makes the method lightweight is that it can generate plausible wrinkles using a single low-cost Kinect camera and one high-quality, detailed 3D face model as the example. Our method works in two stages: (1) offline personalized wrinkled-blendshape construction: user-specific expressions are recorded using the RGB-D camera, and the wrinkles are generated through example-based synthesis of geometric details; (2) online 3D facial performance capture: the reconstructed expressions are used as blendshapes to capture facial animation in real time. Experiments on a variety of facial performance videos show that our method produces plausible results, approximating the wrinkles accurately. Furthermore, our technique is low-cost and convenient for common users.
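The online stage fits blendshape weights to each captured frame. A hedged sketch of one standard way to do this, as non-negative least squares (the paper's exact solver is not specified here; names are illustrative):

```python
import numpy as np
from scipy.optimize import nnls

def solve_weights(neutral, blendshapes, target):
    """neutral: (V, 3) neutral face; blendshapes: (K, V, 3) expression deltas
    from the neutral; target: (V, 3) captured geometry for one frame.
    Returns K non-negative blendshape weights."""
    B = blendshapes.reshape(len(blendshapes), -1).T   # (3V, K) basis matrix
    d = (target - neutral).ravel()                    # observed displacement
    w, _residual = nnls(B, d)                         # non-negative LS fit
    return w
```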

3.
Synthesizing expressive facial animation is a very challenging topic within the graphics community. In this paper, we present an expressive facial animation synthesis system enabled by automated learning from facial motion capture data. Accurate 3D motions of the markers on the face of a human subject are captured while he/she recites a predesigned corpus, with specific spoken and visual expressions. We present a novel motion capture mining technique that "learns" speech coarticulation models for diphones and triphones from the recorded data. A phoneme-independent expression eigenspace (PIEES) that encloses the dynamic expression signals is constructed by motion signal processing (phoneme-based time-warping and subtraction) and principal component analysis (PCA) reduction. New expressive facial animations are synthesized as follows: first, the learned coarticulation models are concatenated to synthesize neutral visual speech according to novel speech input; then a texture-synthesis-based approach is used to generate a novel dynamic expression signal from the PIEES model; and finally the synthesized expression signal is blended with the synthesized neutral visual speech to create the final expressive facial animation. Our experiments demonstrate that the system can effectively synthesize realistic expressive facial animation.
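A small sketch of how a PIEES-style expression eigenspace could be built with PCA, assuming the expressive and neutral recordings have already been phoneme-based time-warped into frame correspondence (function and variable names are illustrative):

```python
import numpy as np

def build_piees(expressive, neutral, n_components=10):
    """expressive, neutral: (T, M) time-warped marker trajectories of the
    same utterance. PCA of the residual yields a phoneme-independent
    expression eigenspace."""
    X = expressive - neutral               # subtraction: expression-only signal
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD-based PCA: rows of Vt are the expression eigenvectors.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:n_components]
    coords = Xc @ basis.T                  # low-dimensional expression trajectory
    return mean, basis, coords
```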

4.
黄建峰  林奕城 《软件学报》2000,11(9):1139-1150
We propose a new method for generating facial animation: a motion-capture system records the subtle motions of a real face, and the captured data are used to drive a face model to produce the animation. First, using an Oxford Metrics VICON8 system, 23 reflective markers are attached to the subject's face for motion capture. The resulting 3D motion data must be post-processed before use, so we propose a method for removing head motion and estimating the pivot of head rotation. After processing, the remaining motion data represent changes in facial expression and can therefore be applied directly to the face model.

5.
黄建峰  林奕成  欧阳明 《软件学报》2000,11(9):1141-1150
We propose a new method for generating facial animation: a motion-capture system records the subtle motions of a real face, and the captured data are used to drive a face model to produce the animation. First, using an Oxford Metrics VICON8 system, 23 reflective markers are attached to the subject's face for motion capture. The resulting 3D motion data must be post-processed before use, so we propose a method for removing head motion and estimating the pivot of head rotation. After processing, the remaining motion data represent changes in facial expression and can therefore be applied directly to the face model. The system is implemented with a 2.5D face model, which combines the advantages of 2D and 3D models: it is simple, yet looks lively and natural under small rotations. When producing the facial animation, a special interpolation formula computes the displacements of non-feature points, and the face is divided into several regions that constrain the movement of the model's 3D points, making the animation more natural. On a Pentium III 500 MHz machine with an OpenGL accelerator card, the system's update rate exceeds 30 frames per second.
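The paper's exact interpolation formula is not reproduced above; the sketch below shows a common Shepard-style (inverse-distance-weighted) scheme combined with the per-region restriction of point movement that the abstract describes (all names are illustrative assumptions):

```python
import numpy as np

def interpolate_displacements(verts, vert_region, markers, marker_region,
                              marker_disp, p=2.0):
    """verts: (V, 3) model vertices with region ids vert_region (V,);
    markers: (M, 3) marker positions with region ids marker_region (M,);
    marker_disp: (M, 3) captured marker displacements for one frame.
    Each vertex moves by an inverse-distance-weighted blend of the
    displacements of markers in its own region only."""
    disp = np.zeros_like(verts)
    for i, v in enumerate(verts):
        mask = marker_region == vert_region[i]   # confine motion per region
        if not mask.any():
            continue
        d = np.linalg.norm(markers[mask] - v, axis=1) + 1e-8
        w = 1.0 / d ** p                          # Shepard weights
        disp[i] = (w[:, None] * marker_disp[mask]).sum(0) / w.sum()
    return verts + disp
```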

6.
The computer graphics and vision communities have dedicated long-standing efforts to building computerized tools for reconstructing, tracking, and analyzing human faces based on visual input. Over the past years, rapid progress has been made, leading to novel and powerful algorithms that obtain impressive results even in the very challenging case of reconstruction from a single RGB or RGB-D camera. The range of applications is vast and steadily growing as these technologies further improve in speed, accuracy, and ease of use. Motivated by this rapid progress, this state-of-the-art report summarizes recent trends in monocular facial performance capture and discusses its applications, which range from performance-based animation to real-time facial reenactment. We focus our discussion on methods where the central task is to recover and track a three-dimensional model of the human face using optimization-based reconstruction algorithms. We provide an in-depth overview of the underlying concepts of real-world image formation, and we discuss common assumptions and simplifications that make these algorithms practical. In addition, we extensively cover the priors that are used to better constrain the under-constrained monocular reconstruction problem, and discuss the optimization techniques that are employed to recover dense, photo-geometric 3D face models from monocular 2D data. Finally, we discuss a variety of use cases for the reviewed algorithms in the context of motion capture, facial animation, as well as image and video editing.

7.
Blendshapes are the most commonly used approach to realistic facial animation in production. A blendshape model typically begins with a relatively small number of blendshape targets reflecting major muscles or expressions. However, the majority of the effort in constructing a production quality model occurs in the subsequent addition of targets needed to reproduce various subtle expressions and correct for the effects of various shapes in combination. To make this subsequent modeling process much more efficient, we present a novel editing method that removes the need for much of the iterative trial-and-error decomposition of an expression into targets. Isolated problematic frames of an animation are re-sculpted as desired and used as training for a nonparametric regression that associates these shapes with the underlying blendshape weights. Using this technique, the artist's correction to a problematic expression is automatically applied to similar expressions in an entire sequence, and indeed to all future sequences. The extent and falloff of editing is controllable and the effect is continuously propagated to all similar expressions. In addition, we present a search scheme that allows effective reuse of pre-sculpted editing examples. Our system greatly reduces time and effort required by animators to create high quality facial animations.
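A hedged sketch of the core regression step, written here as Gaussian-kernel (Nadaraya-Watson) regression from blendshape weight vectors to sculpted corrections; the kernel width plays the role of the controllable extent/falloff described above (the paper's exact nonparametric model may differ, and all names are illustrative):

```python
import numpy as np

def rbf_correction(train_w, train_delta, sigma=0.25):
    """train_w: (N, K) blendshape weight vectors of the re-sculpted frames;
    train_delta: (N, V, 3) artist corrections to those frames. Returns a
    function mapping any weight vector to a blended correction; sigma
    controls how far the edit propagates to similar expressions."""
    def correct(w):
        d2 = ((train_w - w) ** 2).sum(axis=1)       # distance to each example
        k = np.exp(-d2 / (2 * sigma ** 2))          # Gaussian falloff
        if k.sum() < 1e-6:
            return np.zeros_like(train_delta[0])    # far from all examples
        k /= k.sum()
        return np.tensordot(k, train_delta, axes=1) # blended correction
    return correct
```

Applied per frame of a sequence, the correction fades out continuously as the current weights move away from the edited examples, which matches the propagation behavior the abstract describes.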

8.
This paper describes a generic face model and its data structure, the construction of a personalized 3D face model from frontal and profile photographs together with the generic model, B-spline-based editing of the face model, and the system's OpenGL-based display and editing environment. This is a relatively simple and practical way to obtain 3D face graphics from 2D photographs, and a new face model based on the original one can be obtained fairly quickly by adjusting the key points of the facial features. The system can be widely applied in film and television, games, teaching, medicine, and many other fields.
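As an illustration of the B-spline editing step, the sketch below moves one key control point of a facial feature curve and re-evaluates the spline (a generic clamped cubic B-spline via SciPy; the function name and knot construction are assumptions, not from the paper):

```python
import numpy as np
from scipy.interpolate import BSpline

def edit_feature_curve(control_pts, moved_idx, new_pt, degree=3):
    """control_pts: (N, 2) control points of a facial feature curve (N > 3).
    Move one key point, rebuild a clamped B-spline, and sample the edited
    curve. Clamping makes the curve pass through its end points."""
    P = np.asarray(control_pts, float).copy()
    P[moved_idx] = new_pt
    n = len(P)
    # Clamped knot vector: degree+1 repeated knots at each end.
    t = np.concatenate(([0] * degree,
                        np.linspace(0, 1, n - degree + 1),
                        [1] * degree))
    spline = BSpline(t, P, degree)
    return spline(np.linspace(0, 1, 200))   # (200, 2) sampled edited curve
```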

9.
Adjusting a 3D morphable model with facial feature points is a common approach to 3D face reconstruction, but computing the model deformation tends to introduce errors and is time-consuming. We therefore improve the fitting of a generic 3D morphable model using 2D facial feature points and propose a multi-view, real-time 3D face reconstruction method for video streams. First, a CLNF algorithm with a three-layer convolutional network detects the 2D feature points and tracks their positions. The head pose is then estimated from the feature points of the facial organs, and the model's expression coefficients are updated; these in turn act on the PCA shape coefficients, deforming the current 3D model. Finally, the ISOMAP algorithm extracts mesh texture information, and texture fusion produces the person-specific face model. Experimental results show that the method achieves better real-time performance during face reconstruction, with improved accuracy.
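A minimal sketch of one step in such a pipeline: a ridge-regularized linear solve for PCA shape coefficients from detected 2D landmarks under an estimated weak-perspective head pose (the paper's actual coefficient update may differ; all names are illustrative):

```python
import numpy as np

def fit_shape_coeffs(mean3d, basis, landmarks2d, R, t, scale, lam=1e-2):
    """mean3d: (L, 3) mean positions of the landmark vertices; basis:
    (C, L, 3) PCA shape basis at those vertices; landmarks2d: (L, 2)
    detected points; R (3,3), t (2,), scale: estimated head pose under a
    weak-perspective camera. Solves for coefficients c minimizing
    ||project(mean + basis.c) - landmarks||^2 + lam ||c||^2."""
    P = scale * R[:2]                                  # 2x3 projection
    A = np.stack([(P @ b.T).T.ravel() for b in basis], axis=1)   # (2L, C)
    r = (landmarks2d - ((P @ mean3d.T).T + t)).ravel()           # residual
    c = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ r)
    return c
```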

10.
11.
孙劲光    孟凡宇 《智能系统学报》2015,10(6):912-920
To address the low recognition accuracy of traditional face recognition algorithms under unconstrained conditions, we propose a feature-weighted fusion face recognition method (DLWF+). Based on the positions of five facial organs (left eye, right eye, nose, mouth, and chin), the face image is divided into five local sampling regions. The five local regions and the whole face image are each fed into a corresponding neural network whose weights are adjusted, building the sub-networks. Softmax regression then produces six similarity vectors, which form a similarity matrix; multiplying it by a weight vector yields the final recognition result. Experiments on the ORL and WFL face databases achieve recognition accuracies of 97% and 91.63%, respectively. The results show that the algorithm effectively improves face recognition, achieving higher accuracy than traditional algorithms under both constrained and unconstrained conditions.
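The final fusion step is a weighted combination of the six similarity vectors; a minimal sketch (names are illustrative):

```python
import numpy as np

def fuse(similarities, weights):
    """similarities: (6, N) softmax similarity vectors from the five facial
    regions plus the whole face, over N gallery identities; weights: (6,)
    learned region weights. The predicted identity is the argmax of the
    weighted fusion."""
    fused = weights @ similarities        # (N,) combined similarity
    return int(np.argmax(fused))
```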

12.
We present a face-aging synthesis (rendering) method that combines wavelet transforms with texture transfer. First, a 2D discrete wavelet transform (2D DWT) is applied to an aging template to extract the high-frequency subimages that carry the aged-skin texture features, together with a high-pass-filtered low-frequency subimage. These are then substituted for and fused with the corresponding components of the target face image, and wavelet reconstruction transfers the aging texture onto the target face. At the same time, the average change in face shape from the young to the old population is extracted and applied to the target face to strengthen the aging effect. Combined with color-rendering techniques, a complete framework for photorealistic face-aging rendering is designed and implemented. In the experiments, the method is applied to Eastern and Western faces as well as artistic images, and the rendered results are realistic and expressive. Compared with methods based on PCA (principal components analysis), 3D morphing models, or ratio images, this method better resolves the difficult trade-off between realism and ease of use in face-aging rendering.
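A hedged sketch of the subband-transfer step using PyWavelets: the target keeps its low-frequency appearance while the aging template's high-frequency (wrinkle) subbands are blended in and the image is reconstructed by the inverse transform. The wavelet choice, blending weight, and the simplified handling of the low-frequency component are assumptions, not the paper's exact procedure:

```python
import numpy as np
import pywt

def transfer_aging_texture(target, aged_template, alpha=0.7, wavelet='db2'):
    """target, aged_template: aligned 2D grayscale face images of equal size.
    Blend the aged template's detail (high-frequency) subbands into the
    target and reconstruct with the inverse 2D DWT."""
    cA_t, details_t = pywt.dwt2(target, wavelet)        # (cA, (cH, cV, cD))
    cA_a, details_a = pywt.dwt2(aged_template, wavelet)
    # Keep the target's low-frequency appearance; blend in aged details.
    blended = tuple(alpha * da + (1 - alpha) * dt
                    for dt, da in zip(details_t, details_a))
    return pywt.idwt2((cA_t, blended), wavelet)
```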

13.
We present a novel approach to synthesizing accurate visible speech based on searching and concatenating optimal variable-length units in a large corpus of motion capture data. Based on a set of visual prototypes selected on a source face and a corresponding set designated for a target face, we propose a machine learning technique to automatically map the facial motions observed on the source face to the target face. In order to model the long-distance coarticulation effects in visible speech, a large-scale corpus that covers the most common syllables in English was collected, annotated and analyzed. For any input text, a search algorithm to locate the optimal sequences of concatenated units for synthesis is described. A new algorithm to adapt lip motions from a generic 3D face model to a specific 3D face model is also proposed. A complete, end-to-end visible speech animation system is implemented based on the approach. This system is currently used in more than 60 kindergarten through third-grade classrooms to teach students to read using a lifelike conversational animated agent. To evaluate the quality of the visible speech produced by the animation system, both subjective and objective evaluations were conducted. The evaluation results show that the proposed approach is accurate and powerful for visible speech synthesis.
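Unit-selection searches of this kind are commonly formulated as a dynamic-programming (Viterbi-style) minimization of target cost plus concatenation cost; the sketch below is that generic formulation, not the paper's exact algorithm (all names are illustrative):

```python
def select_units(candidates, target_cost, concat_cost):
    """candidates: list (one entry per position in the utterance) of lists of
    candidate unit ids; target_cost(pos, u) scores how well unit u fits
    position pos; concat_cost(u, v) scores the smoothness of joining u then
    v. Returns the minimum-cost unit sequence."""
    # best[pos][u] = (cumulative cost of the best path ending in u, predecessor)
    best = [{u: (target_cost(0, u), None) for u in candidates[0]}]
    for pos in range(1, len(candidates)):
        layer = {}
        for u in candidates[pos]:
            p, c = min(((p, c + concat_cost(p, u))
                        for p, (c, _) in best[pos - 1].items()),
                       key=lambda pc: pc[1])
            layer[u] = (c + target_cost(pos, u), p)
        best.append(layer)
    # Backtrack the optimal sequence from the cheapest final unit.
    u = min(best[-1], key=lambda k: best[-1][k][0])
    path = [u]
    for pos in range(len(best) - 1, 0, -1):
        u = best[pos][u][1]
        path.append(u)
    return path[::-1]
```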

14.
Realistic Face Reconstruction Based on Real Photographs
We present an algorithm for reconstructing a realistic 3D face model from photographs of a real face. The algorithm first calibrates the cameras by interactively marking feature-point pairs on two face images and entering the cameras' wide-angle parameters, then matches the remaining corresponding points between the two images to perform the 3D reconstruction. Matching is done semi-automatically: a 2D correspondence mesh built by manual editing yields the initial face shape, and a robust maximum-likelihood stereo algorithm automatically matches dense corresponding points, reconstructing a scattered cloud of 3D points representing the face. Finally, these dense 3D points are used to iteratively correct and adaptively subdivide the initial model.

15.
3D model animation plays an important role in digital design and applications and is attracting growing research attention, but faithfully reproducing ethnic dance performances through 3D digitization remains a highly challenging problem. This paper uses motion-capture technology to record dance movements and realize digital dance presentation. Specifically, motion-capture equipment first records human motion data; character modeling, skeleton binding, skinning, and weight adjustment are then performed in Maya; MotionBuilder combines the 3D model with the captured data; and the result is a virtual-human performance of real dance movements. We build a virtual stage for ethnic dance performances and, taking the dances of 13 ethnic groups as the digitized content, promote the application of this mocap-driven dance presentation method.

16.
3-D Head Model Retrieval Using a Single Face View Query
In this paper, a novel 3D head model retrieval approach is proposed, in which only a single 2D face view query is required. The proposed approach will be important for multimedia application areas such as virtual world construction and game design, in which 3D virtual characters with a given set of facial features can be rapidly constructed based on 2D view queries, instead of having to generate each model anew. To achieve this objective, we construct an adaptive mapping through which each 2D view feature vector is associated with its corresponding 3D model feature vector. Given this estimated 3D model feature vector, similarity matching can then be performed in the 3D model feature space. To avoid the explicit specification of the complex relationship between the 2D and 3D feature spaces, a neural network approach is adopted in which the required mapping is implicitly specified through a set of training examples. In addition, for efficient feature representation, principal component analysis (PCA) is adopted to achieve dimensionality reduction for facilitating both the mapping construction and the similarity matching process. Since the linear nature of the original PCA formulation may not be adequate to capture the complex characteristics of 3D models, we also consider the adoption of its nonlinear counterpart, i.e., the so-called kernel PCA approach, in this work. Experimental results show that the proposed approach is capable of successfully retrieving the set of 3D models which are similar in appearance to a given 2D face view.
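A hedged sketch of the overall retrieval pipeline under the stated design: PCA/kernel-PCA dimensionality reduction, a neural network for the implicit 2D-to-3D feature mapping, and nearest-neighbour matching in the 3D model feature space. Layer sizes, the kernel, and component counts are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA
from sklearn.neural_network import MLPRegressor

def train_mapping(views2d, models3d, n2=20, n3=20):
    """views2d: (N, D2) flattened 2D face-view features; models3d: (N, D3)
    3D head-model features for the same N subjects. Reduce both spaces
    (kernel PCA on the models, since their characteristics may be
    non-linear), then learn the 2D -> 3D mapping from training pairs."""
    pca2 = PCA(n_components=n2).fit(views2d)
    kpca3 = KernelPCA(n_components=n3, kernel='rbf').fit(models3d)
    net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000)
    net.fit(pca2.transform(views2d), kpca3.transform(models3d))
    return pca2, kpca3, net

def retrieve(query2d, pca2, kpca3, net, models3d):
    """Map a single 2D view into the 3D feature space and return the index
    of the most similar stored model (nearest neighbour)."""
    q = net.predict(pca2.transform(query2d.reshape(1, -1)))
    db = kpca3.transform(models3d)
    return int(np.argmin(np.linalg.norm(db - q, axis=1)))
```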

17.
In this paper, we present a 3D face photography system based on a facial expression training dataset composed of both facial range images (3D geometry) and facial textures (2D photographs). The proposed system obtains a 3D geometric representation of a face given as a 2D photograph, which undergoes a series of transformations through the estimated texture and geometry spaces. In the training phase of the system, the facial landmarks are obtained by an active shape model (ASM) extracted from the 2D gray-level photographs. Principal component analysis (PCA) is then used to represent the face dataset, thus defining an orthonormal basis of texture and another of geometry. In the reconstruction phase, the input is a face image to which the ASM is matched. The extracted facial landmarks and the face image are fed through the PCA basis transforms, and a 3D version of the 2D input image is built. Experimental tests using a new dataset of 70 facial expressions from ten subjects as the training set show rapidly reconstructed 3D faces that maintain spatial coherence consistent with human perception, corroborating the efficiency and applicability of the proposed system.

18.
The human face is a complex biomechanical system, and non-linearity is a remarkable feature of facial expressions. In blendshape animation, however, the facial expression space is linearized by assuming a linear relationship between blending weights and the deformed face geometry. This results in a loss of realism in facial animation. To synthesize more realistic facial animation, this relationship should be non-linear so as to allow the greatest generality and fidelity of facial expressions. Unfortunately, few existing works address how to measure this non-linear relationship. In this paper, we propose an optimization scheme that automatically explores the non-linear relationship of blendshape facial animation from captured facial expressions. Experiments show that the explored non-linear relationship is soundly consistent with the non-linearity of facial expressions and synthesizes more realistic facial animation than the linear one.

19.
We present a system that synchronizes facial animation with speech, focusing on coarticulation and on rendering subtle facial expression details. Given input text annotated with emotion tags, the system produces facial animation with the corresponding expressions, synchronized with speech. It can generate highly realistic 3D face models of different genders, ages, and expressive characteristics, and subtle expression details (such as forehead wrinkles) change dynamically with the facial expression. Based on linguistic theory, the system proposes a set of rules for handling coarticulation.

20.
For existing motion capture (MoCap) data processing methods, manual intervention is nearly inevitable, most of it arising during the data tracking process. This paper addresses the problem of tracking non-rigid 3D facial motions from sequences of raw MoCap data in the presence of noise, outliers, and long-duration missing data. We present a novel dynamic spatiotemporal framework to solve the problem automatically. First, based on a 3D facial topological structure, a sophisticated non-rigid motion interpreter (SNRMI) is put forward; together with a dynamic searching scheme, it can not only track the non-missing data to the maximum extent but also accurately recover missing data (it can recover more than five adjacent markers that are missing for a long time, about 5 seconds). To rule out wrong tracks of markers labeled in open structures (such as the mouth and eyes), a semantics-based heuristic checking method is proposed. Second, since existing methods have not taken the noise-propagation problem into account, a forward processing framework is presented to address it. Another contribution is that the proposed method tracks facial non-rigid motions automatically and in a forward manner, greatly reducing or even eliminating the need for human intervention during facial MoCap data processing. Experimental results demonstrate the effectiveness, robustness, and accuracy of our system.
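A minimal sketch of one ingredient of such a system: recovering a missing marker from the mean displacement of its topologically adjacent, visible neighbours. The paper's SNRMI interpreter is more sophisticated; the function names and the locally-rigid-neighbourhood assumption here are illustrative:

```python
import numpy as np

def recover_missing(prev_frame, cur_frame, neighbors, missing):
    """prev_frame, cur_frame: (M, 3) marker positions (NaN where a marker is
    missing in cur_frame); neighbors: dict marker index -> list of
    topologically adjacent marker indices from the facial structure;
    missing: indices to recover. Each missing marker is translated by the
    mean displacement of its visible neighbours."""
    out = cur_frame.copy()
    for m in missing:
        vis = [n for n in neighbors[m] if not np.isnan(cur_frame[n]).any()]
        if not vis:
            out[m] = prev_frame[m]      # fall back: hold the last known position
            continue
        d = (cur_frame[vis] - prev_frame[vis]).mean(axis=0)
        out[m] = prev_frame[m] + d      # locally-rigid neighbourhood assumption
    return out
```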
