Similar Literature

20 similar records found.
1.
Motion capture cannot generate cartoon-style animation directly. We emulate the rubber-like exaggerations common in traditional character animation as a means of converting motion capture data into cartoon-like movement. We achieve this using trajectory-based motion exaggeration while allowing the violation of link-length constraints. We extend this technique to obtain smooth, rubber-like motion by dividing the original links into shorter sub-links and computing the positions of joints using Bézier curve interpolation and a mass-spring simulation. This method is fast enough to be used in real time.
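The sub-link idea above (placing intermediate joints along a smooth curve between the original joints) can be illustrated with a quadratic Bézier evaluation. This is a minimal sketch under our own assumptions; the function names and the choice of a single quadratic curve are ours, not the paper's.

```python
# Minimal sketch: sample sub-link joint positions along a quadratic
# Bezier curve, one way to smooth a link divided into sub-links.
# All names here are illustrative, not from the paper.

def lerp(a, b, t):
    """Linear interpolation between two points (tuples)."""
    return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

def bezier_quad(p0, p1, p2, t):
    """De Casteljau evaluation of a quadratic Bezier at parameter t."""
    return lerp(lerp(p0, p1, t), lerp(p1, p2, t), t)

def subdivide_link(start, control, end, n_sublinks):
    """Place n_sublinks + 1 joints along the curve from start to end."""
    return [bezier_quad(start, control, end, i / n_sublinks)
            for i in range(n_sublinks + 1)]

# A link from (0,0) to (2,0), bent upward by an exaggerated control point.
joints = subdivide_link((0.0, 0.0), (1.0, 2.0), (2.0, 0.0), 4)
```

Note that the endpoints stay fixed while the interior joints bow toward the control point, which is what lets the sub-links bend rubber-like even though the original link endpoints are preserved.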

2.
Three-dimensional (3D) cartoon facial animation is one step further than the challenging 3D caricaturing, which generates only 3D still caricatures. In this paper, a 3D cartoon facial animation system is developed for a subject given only a single frontal face image of a neutral expression. The system is composed of three steps: 3D cartoon face exaggeration, texture processing, and 3D cartoon facial animation. By following the caricaturing rules of artists, instead of mathematical formulations, 3D cartoon face exaggeration is accomplished at both global and local levels. As a result, the final exaggeration is capable of depicting the characteristics of an input face while achieving artistic deformations. In the texture processing step, texture coordinates of the vertices of the cartoon face model are obtained by mapping the parameterized grid of the standard face model to a cartoon face template and aligning the input face to the face template. Finally, 3D cartoon facial animation is implemented in the MPEG-4 animation framework. In order to avoid time-consuming construction of a face animation table, we propose to utilize the tables of existing models through model mapping. Experimental results demonstrate the effectiveness and efficiency of our proposed system.

3.
This paper presents new methods for stylising video to produce cartoon motion emphasis cues and modern art. Specifically, we introduce “dynamic cues” as a class of motion emphasis cue, encompassing traditional animation techniques such as anticipation and motion exaggeration. We describe methods for automatically synthesising such cues within video premised upon the recovery of articulated figures, and the subsequent manipulation of the recovered pose trajectories. Additionally, we show how our motion emphasis framework may be applied to emulate artwork in the Futurist style, popularised by Duchamp.

4.
Skeletal animation was studied to meet a project's requirements for a 3D engine, and a method for sharing animation data is proposed. Each joint of a character is assigned a reference position, and the stored displacement of every joint is expressed relative to this starting position; animation data to be shared is stored separately. Because the displacements are relative, and relative to the character itself, when different characters use the same animation the system binds it to the appropriate positions according to each character's scale, thereby achieving animation sharing. Data is stored using vertex indexing, which greatly reduces the computational load. Finally, 3D skeletal animation was implemented and the corresponding test data is given.

5.
赵威  李毅 《计算机应用》2022,42(9):2830-2837
To generate more accurate and fluent virtual-human animation, 3D human pose data is captured with a Kinect device while a monocular 3D human pose estimation algorithm infers skeleton-point data from Kinect's color stream, optimizing the pose estimate in real time and driving a virtual character model to produce animation. First, a spatio-temporally optimized skeleton-point data processing method is proposed to improve the stability of monocular 3D pose estimation. Second, a human pose estimation method fusing Kinect with the Occlusion-Robust Pose-Maps (ORPM) algorithm is proposed to solve Kinect's occlusion problem. Finally, a virtual-human animation system based on quaternion interpolation and inverse-kinematics constraints was developed, capable of motion simulation and real-time animation generation. Compared with methods that generate animation from Kinect motion capture alone, the proposed method yields more robust pose estimation data with some resistance to occlusion; compared with animation generation based on the ORPM algorithm alone, it raises the frame rate twofold and produces more realistic, fluent animation.
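The quaternion interpolation such a system relies on is typically spherical linear interpolation (slerp) between joint rotations. The sketch below is a generic slerp implementation for unit quaternions in (w, x, y, z) order, offered for illustration only; it is not the authors' code.

```python
import math

# Generic spherical linear interpolation (slerp) between two unit
# quaternions (w, x, y, z). Illustrative sketch only, not the paper's
# implementation.

def slerp(q0, q1, t):
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                       # take the shorter arc
        q1, dot = tuple(-c for c in q1), -dot
    if dot > 0.9995:                    # nearly parallel: fall back to lerp
        out = tuple(a + (b - a) * t for a, b in zip(q0, q1))
        n = math.sqrt(sum(c * c for c in out))
        return tuple(c / n for c in out)
    theta = math.acos(dot)
    s0 = math.sin((1.0 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

identity = (1.0, 0.0, 0.0, 0.0)
# 90-degree rotation about the z axis:
quarter_turn_z = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
half = slerp(identity, quarter_turn_z, 0.5)   # 45 degrees about z
```

Interpolating halfway between the identity and a 90-degree rotation yields a 45-degree rotation with constant angular velocity along the arc, which is what keeps interpolated joint motion smooth.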

6.
We present a novel performance-driven approach to animating cartoon faces starting from pure 2D drawings. A 3D approximate facial model, automatically built from front and side view master frames of character drawings, is introduced to enable the animated cartoon faces to be viewed from angles different from that in the input video. The expression mappings are built by an artificial neural network (ANN) trained on examples of the real face in the video and the cartoon facial drawings in the facial expression graph for a specific character. The learned mapping model enables the resulting facial animation to properly convey the desired expressiveness, instead of merely reproducing the facial actions in the input video sequence. Furthermore, the lit sphere, capturing the lighting in painted artwork of faces, is utilized to color the cartoon faces in terms of the 3D approximate facial model, reinforcing the hand-drawn appearance of the resulting facial animation. We conducted a series of comparative experiments to test the effectiveness of our method by recreating facial expressions from commercial animation. The comparison results clearly demonstrate the superiority of our method not only in generating high-quality cartoon-style facial expressions, but also in speeding up the animation production of cartoon faces. Copyright © 2011 John Wiley & Sons, Ltd.

7.
We present a physics-based approach to generate 3D biped character animation that can react to dynamical environments in real time. Our approach utilizes an inverted pendulum model to online adjust the desired motion trajectory from the input motion capture data. This online adjustment produces a physically plausible motion trajectory adapted to dynamic environments, which is then used as the desired motion for the motion controllers to track in dynamics simulation. Rather than using Proportional-Derivative controllers whose parameters usually cannot be easily set, our motion tracking adopts a velocity-driven method which computes joint torques based on the desired joint angular velocities. Physically correct full-body motion of the 3D character is computed in dynamics simulation using the computed torques and dynamical model of the character. Our experiments demonstrate that tracking motion capture data with real-time response animation can be achieved easily. In addition, physically plausible motion style editing, automatic motion transition, and motion adaptation to different limb sizes can also be generated without difficulty.
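The velocity-driven tracking idea, computing torques from desired joint angular velocities so that only one gain per joint is needed (unlike PD position controllers with two coupled gains), can be sketched for a single 1-DoF joint as follows. The controller form, names, and the forward-Euler integration are our assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative velocity-driven joint controller: torque is proportional
# to the error between desired and current angular velocity. The
# single-joint model and explicit Euler integration are our assumptions.

def velocity_driven_torque(omega_desired, omega_current, gain):
    """Torque from angular-velocity error; one gain per joint."""
    return gain * (omega_desired - omega_current)

def step(omega, omega_desired, inertia, gain, dt):
    """Advance one simulation step of a single 1-DoF joint."""
    torque = velocity_driven_torque(omega_desired, omega, gain)
    return omega + (torque / inertia) * dt

omega = 0.0
for _ in range(200):   # converge toward a desired velocity of 1.0 rad/s
    omega = step(omega, 1.0, inertia=1.0, gain=5.0, dt=0.01)
```

Each step shrinks the velocity error by a constant factor, so the joint converges to the desired angular velocity without tuning separate stiffness and damping terms.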

8.
Skeleton driven animation based on implicit skinning
Skeleton-driven animation methods have been commonly used in animating 3D characters. In these methods, a skinning process that binds the character surface onto the skeleton is required. This process is usually accomplished manually and is a time-consuming task. In this paper, we propose a novel method for automatically skinning skeletal character models. Given the motion of a skeleton, our method can animate the character model automatically. In our method, each joint coordinate is parameterized by its surrounding local surface. In such a way, the character's surface is implicitly bound onto the skeleton. Character animation is achieved by minimizing an energy function that is carefully designed to prevent unnatural volume changes and to guarantee smooth deformations. Experiments demonstrate the efficiency and excellent performance of our method.

9.
A survey of new methods for computer-aided artistic animation
With the rise of digital media processing, non-photorealistic rendering, and interactive entertainment, a number of new methods for computer-aided design and production of artistic animation have emerged. By research approach and technical route, current work can be summarized into four categories: video-stream-based methods, methods combining 2.5D/3D geometric models, methods based on material reuse, and methods borrowing from 3D animation techniques. On this basis, the basic ideas and key techniques of each category are analyzed, and trends for moving further beyond the 2D world in computer-aided artistic animation research, such as incorporating 3D geometry, the temporal variation of motion, and emotion, are discussed.

10.
To address the high cost and long production time of traditional 3D character animation, this article introduces a method for producing 3D character animation efficiently with Kinect motion capture. The method uses the Kinect somatosensory camera to capture a live performer's movements, generates keyframe data for the skeletal joints, and exports a BVH motion file; importing the BVH data into C4D then drives the character model to complete the animation. Applying this method in 3D character animation teaching helps improve students' interest and learning efficiency.

11.
Objective: Errors in 2D pose estimation are the main source of error in 3D human pose estimation; mapping from a 2D pose to the optimal, most plausible 3D pose under 2D error or noise is the key to improving 3D estimation. This paper proposes a 3D pose estimation method that combines sparse representation with a deep model, integrating geometric priors of the 3D pose space with temporal information to improve estimation accuracy. Method: A 3D deformable shape model fused with sparse representation yields a reliable initial 3D pose for each single frame. A multi-channel long short-term memory (MLSTM) denoising encoder/decoder is then constructed; the single-frame 3D initial values are fed into it as a time series, so that it learns the temporal dependence of human poses between adjacent frames and applies a temporal smoothness constraint, producing the final optimized 3D pose. Results: Comparative experiments were conducted on the Human3.6M dataset. For two kinds of input, the 2D coordinates provided by the dataset and 2D coordinates estimated by a convolutional neural network, the average reconstruction error of video sequences optimized by the MLSTM denoising encoder/decoder dropped by 12.6% and 13%, respectively, compared with single-frame estimation; compared with an existing video-based sparse-model method, the average reconstruction error for video dropped by 6.4% and 9.1%. For the estimated 2D coordinates, the average reconstruction error dropped by 12.8% relative to an existing deep-model method. Conclusion: Combining the temporal-information MLSTM denoising encoder/decoder with the sparse model effectively exploits 3D pose priors and the temporal and spatial dependence of continuously changing poses across video frames, improving the accuracy of monocular-video 3D pose estimation.
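The temporal smoothness constraint mentioned above penalizes frame-to-frame jitter in a joint trajectory. A common generic form is a sum of squared second differences; the sketch below uses that form purely as an illustration and is not the paper's actual MLSTM loss.

```python
# Illustrative temporal smoothness penalty: sum of squared second
# differences over a 1-D joint-coordinate trajectory. Jittery
# trajectories score higher than smooth ones. This is a generic
# stand-in for the paper's temporal constraint, not its actual loss.

def smoothness_penalty(traj):
    return sum((traj[i + 1] - 2.0 * traj[i] + traj[i - 1]) ** 2
               for i in range(1, len(traj) - 1))

smooth = [0.0, 0.1, 0.2, 0.3, 0.4]   # constant velocity: near-zero penalty
jitter = [0.0, 0.2, 0.0, 0.2, 0.0]   # oscillating noise: large penalty
```

Because second differences vanish for constant-velocity motion, minimizing this penalty pulls noisy per-frame estimates toward temporally consistent trajectories without flattening genuine motion.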

12.
It is now possible to capture the 3D motion of the human body on consumer hardware and to puppet skeleton-based virtual characters in real time. However, many characters do not have humanoid skeletons. Characters such as spiders and caterpillars do not have boned skeletons at all, and these characters have very different shapes and motions. In general, character control under arbitrary shape and motion transformations is unsolved: how might these motions be mapped? We control characters with a method which avoids the rigging-skinning pipeline: source and target characters do not have skeletons or rigs. We use interactively-defined sparse pose correspondences to learn a mapping between arbitrary 3D point source sequences and mesh target sequences. Then, we puppet the target character in real time. We demonstrate the versatility of our method through results on diverse virtual characters with different input motion controllers. Our method provides a fast, flexible, and intuitive interface for arbitrary motion mapping which provides new ways to control characters for real-time animation.

13.
Motion capture is a technique of digitally recording the movements of real entities, usually humans. It was originally developed as an analysis tool in biomechanics research, but has grown increasingly important as a source of motion data for computer animation. In this context it has been widely used for both cinema and video games. Hand motion capture and tracking in particular has received a lot of attention because of its critical role in the design of new Human-Computer Interaction methods and gesture analysis. One of the main difficulties is the capture of human hand motion. This paper gives an overview of ongoing research, “HandPuppet3D”, being carried out in collaboration with an animation studio to employ computer vision techniques to develop a prototype desktop system and associated animation process that will allow an animator to control 3D character animation through the use of hand gestures. The eventual goal of the project is to support existing practice by providing a softer, more intuitive user interface for the animator that improves the productivity of the animation workflow and the quality of the resulting animations. To help achieve this goal, the focus has been placed on developing a prototype camera-based desktop gesture capture system that captures hand gestures and interprets them in order to generate and control the animation of 3D character models. Methods will be discussed for motion tracking and capture in 3D animation, and in particular for hand motion tracking and capture. HandPuppet3D aims to enable gesture capture, interpretation of the captured gestures, and control of the target 3D animation software. This involves development and testing of a motion analysis system built from recently developed algorithms. We review current software and research methods available in this area and describe our current work.

14.
Character animation has long been one of the important research topics in computer animation and virtual reality. In recent years, with the booming development of 3D games, animation, and film special-effects production, the demand for physical realism in character animation has become increasingly urgent; physics-based character animation synthesis has accordingly received more and more attention from researchers, spawning many new methods and techniques. The core of this research problem is human motion synthesis, which aims to drive virtual characters and generate animation that obeys the laws of physical motion. This paper reviews progress in character animation synthesis methods. First, based on a comprehensive analysis and summary of work at home and abroad, methods are divided into seven categories according to how joint torques are computed: spacetime-constraint methods, constrained dynamics optimization, low-dimensional model methods, finite state machines, data-driven methods, dynamics filtering, and probabilistic model methods; after detailing the principles and characteristics of each category, recent work within each is highlighted. Second, the advantages and disadvantages of these methods are compared. Finally, in light of practical application requirements and the shortcomings of current work, several directions for further in-depth research are proposed.

15.
One of the most common tasks in computer animation is inverse-kinematics, or determining a joint configuration required to place a particular part of an articulated character at a particular location in global space. Inverse-kinematics is required at design-time to assist artists using commercial 3D animation packages, for motion capture analysis, and for run-time applications such as games.
We present an efficient inverse-kinematics methodology based on the interpolation of example motions and positions. The technique is demonstrated on a number of inverse-kinematics positioning tasks for a human figure. In addition to simple positioning tasks, the method provides complete motion sequences that satisfy an inverse-kinematic goal. The interpolation at the heart of the algorithm allows an artist's influence to play a major role in ensuring that the system always generates plausible results. Due to the lightweight nature of the algorithm, we can position a character at extremely high frame rates, making the technique useful for time-critical run-time applications such as games.
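One way to realize interpolation-of-examples IK is to blend the joint angles of artist-made example poses, weighted by how close each example's end-effector position lies to the IK goal. The sketch below uses simple inverse-distance weighting; this is our own hypothetical formulation for illustration, and the paper's actual interpolation scheme may differ.

```python
# Hypothetical sketch of example-based IK: given example poses, each
# paired with the end-effector position it achieves, blend the example
# joint angles with inverse-distance weights relative to the IK goal.
# The weighting scheme and all names are our assumptions.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def interpolated_pose(examples, goal, eps=1e-9):
    """examples: list of (end_effector_pos, joint_angles) pairs."""
    for pos, angles in examples:
        if distance(pos, goal) < eps:      # goal hits an example exactly
            return list(angles)
    weights = [1.0 / distance(pos, goal) for pos, _ in examples]
    total = sum(weights)
    n = len(examples[0][1])
    return [sum(w * ex[1][i] for w, ex in zip(weights, examples)) / total
            for i in range(n)]

# Two example poses: arm straight (reaches (0,0)) and bent 90 degrees
# at the shoulder (reaches (2,0)); angles are in degrees.
examples = [((0.0, 0.0), (0.0, 0.0)),
            ((2.0, 0.0), (90.0, 0.0))]
pose = interpolated_pose(examples, (1.0, 0.0))   # goal midway between them
```

Because the output is always a blend of artist-supplied examples, the result stays inside the span of plausible poses, which matches the abstract's point that interpolation lets the artist's influence keep results plausible.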

16.
In this paper, we present a new impostor-based representation for 3D animated characters supporting real-time rendering of thousands of agents. We maximize rendering performance by using a collection of pre-computed impostors sampled from a discrete set of view directions. Our approach differs from previous work on view-dependent impostors in that we use per-joint rather than per-character impostors. Our characters are animated by applying the joint rotations directly to the impostors, instead of choosing a single impostor for the whole character from a set of pre-defined poses. This offers more flexibility in terms of animation clips, as our representation supports any arbitrary pose, and thus, the agent behavior is not constrained to a small collection of pre-defined clips. Because our impostors are intended to be valid for any pose, a key issue is to define a proper boundary for each impostor to minimize image artifacts while animating the agents. We pose this problem as a variational optimization problem and provide an efficient algorithm for computing a discrete solution as a pre-process. To the best of our knowledge, this is the first time a crowd rendering algorithm encompassing image-based performance, small graphics processing unit footprint, and animation independence is proposed. Copyright © 2012 John Wiley & Sons, Ltd.

17.
To address the insufficient understanding of human anatomy in current character animation production and drawing, and to exploit the advantages of virtual reality technology in education, a Forge-cloud-based simulation system for artistic human anatomy drawing is proposed. Following human structural proportions, the system builds and visualizes character models in a block-plane-plus-line mode, and simulates human motion in full accordance with the laws of animated movement by combining skeletal animation with 3D motion capture. Human-computer interaction is implemented through the Forge cloud platform and Three.js. Finally, bidirectional communication between the comic module and the Forge cloud module completes comic-character pose simulation. Tests show that the system offers high fidelity and ease of use, provides an environment for digital and mobile learning, and helps learners understand human anatomical structure in depth and correctly master methods of drawing comic characters.

18.
We present a new set of interface techniques for visualizing and editing animation directly in a single three-dimensional scene. Motion is edited using direct-manipulation tools which satisfy high-level goals such as “reach this point at this time” or “go faster at this moment”. These tools can be applied over an arbitrary temporal range and maintain arbitrary degrees of spatial and temporal continuity. We separate spatial and temporal control of position by using two curves for each animated object: the motion path which describes the 3D spatial path along which an object travels, and the motion graph, a function describing the distance traveled along this curve over time. Our direct-manipulation tools are implemented using displacement functions, a straightforward and scalable technique for satisfying motion constraints by composition of the displacement function with the motion graph or motion path. This paper will focus on applying displacement functions to positional change. However, the techniques presented are applicable to the animation of orientation, color, or any other attribute that varies over time.
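The displacement-function idea, satisfying a constraint like "go faster at this moment" by adding a locally supported offset to the motion graph, can be sketched as below. The base motion, the cosine falloff, and all names are our assumptions for illustration, not the paper's implementation.

```python
import math

# Illustrative displacement-function edit: the motion graph maps time to
# distance traveled along the motion path; a displacement function adds
# a smooth, locally supported bump so a constraint is met without
# touching the motion outside the bump's support.

def motion_graph(t):
    return 2.0 * t                       # base motion: constant speed

def displacement(t, t0, radius, amount):
    """Smooth bump centred at t0, zero outside [t0 - radius, t0 + radius]."""
    if abs(t - t0) >= radius:
        return 0.0
    return amount * 0.5 * (1.0 + math.cos(math.pi * (t - t0) / radius))

def edited_motion(t):
    # Constraint: be 0.4 units further along the path at t = 1.0.
    return motion_graph(t) + displacement(t, t0=1.0, radius=0.5, amount=0.4)
```

Because the bump has compact support, the edit is exact at the constraint time and the motion is untouched outside the chosen temporal range, which is the scalability property the abstract describes.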

19.
3D human pose estimation in motion is an active research direction in the field of computer vision, but algorithm performance is affected by the complexity of 3D spatial information, self-occlusion of the human body, mapping uncertainty, and other problems. In this paper, we propose a 3D human joint localization method based on a multi-stage regression depth network and a 2D-to-3D point mapping algorithm. First, we take a single RGB image as input and continually refine the coordinates of the human joint points by introducing heatmaps and multi-stage regression. We then feed the 2D joint points into the mapping network to obtain the coordinates of the 3D human joint points, completing the 3D human pose estimation task. The algorithm achieves an MPJPE of 40.7 on the Human3.6M dataset, and the evaluation shows that our method has clear advantages.
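The MPJPE figure quoted above is the mean per-joint position error: the Euclidean distance between each predicted joint and its ground-truth position, averaged over joints (on Human3.6M it is conventionally reported in millimetres). A minimal computation of the metric, with made-up toy data:

```python
# Mean Per-Joint Position Error (MPJPE): average Euclidean distance
# between predicted and ground-truth 3D joint positions, the standard
# Human3.6M evaluation metric. The joint data below is illustrative.

def mpjpe(pred, gt):
    assert len(pred) == len(gt)
    total = 0.0
    for p, g in zip(pred, gt):
        total += sum((a - b) ** 2 for a, b in zip(p, g)) ** 0.5
    return total / len(pred)

pred = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
gt   = [(3.0, 4.0, 0.0), (1.0, 1.0, 0.0)]
err = mpjpe(pred, gt)    # one joint off by 5.0, one exact: mean 2.5
```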

20.
We introduce the EXtract-and-COmplete Layering method (EXCOL), a novel cartoon animation processing technique to convert a traditional animated cartoon video into multiple semantically meaningful layers. Our technique is inspired by vision-based layering techniques but focuses on shape cues in both the extraction and completion steps to reflect the unique characteristics of cartoon animation. For layer extraction, we define a novel similarity measure incorporating both shape and color of automatically segmented regions within individual frames and propagate a small set of user-specified layer labels among similar regions across frames. By clustering regions with the same labels, each frame is appropriately partitioned into different layers, with each layer containing semantically meaningful content. Then, a warping-based approach is used to fill missing parts caused by occlusion within the extracted layers to achieve a complete representation. EXCOL provides a flexible way to effectively reuse traditional cartoon animations with only a small amount of user interaction. It is demonstrated that our EXCOL method is effective and robust, and the layered representation benefits a variety of applications in cartoon animation processing.

Copyright©北京勤云科技发展有限公司  京ICP备09084417号