Similar Documents
 20 similar documents found (search time: 31 ms)
1.
Automatic Conversion of Mesh Animations into Skeleton-based Animations
Recently, it has become increasingly popular to represent animations not by means of a classical skeleton-based model, but in the form of deforming mesh sequences. The reason for this new trend is that novel mesh deformation methods, as well as new surface-based scene capture techniques, offer a great level of flexibility during animation creation. Unfortunately, the resulting scene representation is less compact than skeletal ones, and there is not yet a rich toolbox available that enables easy post-processing and modification of mesh animations. To bridge this gap between the mesh-based and the skeletal paradigm, we propose a new method that automatically extracts a plausible kinematic skeleton, skeletal motion parameters, and surface skinning weights from arbitrary mesh animations. By this means, deforming mesh sequences can be fully automatically transformed into fully-rigged virtual subjects. The original input can then be quickly rendered based on the new compact bone and skin representation, and it can be easily modified using the full repertoire of already existing animation tools.
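
The bone-and-skin representation the abstract describes is typically played back with linear blend skinning. Below is a minimal NumPy sketch of that playback step only, not the paper's extraction pipeline; the function name and array shapes are illustrative assumptions.

```python
import numpy as np

def linear_blend_skinning(rest_vertices, bone_transforms, skin_weights):
    """Deform rest-pose vertices with per-bone transforms and skinning weights.

    rest_vertices   : (V, 3) rest-pose positions
    bone_transforms : (B, 4, 4) transforms of each bone relative to the rest pose
    skin_weights    : (V, B) convex weights (rows sum to 1)
    """
    V = rest_vertices.shape[0]
    homo = np.hstack([rest_vertices, np.ones((V, 1))])          # (V, 4)
    # Transform every vertex by every bone, then blend by the weights.
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, homo)  # (B, V, 4)
    blended = np.einsum('vb,bvi->vi', skin_weights, per_bone)   # (V, 4)
    return blended[:, :3]

# Tiny example: two bones, the second rotated 90 degrees about Z.
rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
identity = np.eye(4)
rot_z = np.eye(4)
rot_z[:2, :2] = [[0.0, -1.0], [1.0, 0.0]]
weights = np.array([[1.0, 0.0], [0.5, 0.5]])
print(linear_blend_skinning(rest, np.stack([identity, rot_z]), weights))
```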

2.
Over the past few decades many advances have been made in computer animation, which has largely replaced traditional hand-drawn animation. However, the traditional path model used in animation has not changed. This model, based on predefined paths, does not effectively address the needs of scene animation, which must account for unpredictable environmental influences, interactions among multiple motions, and personal behaviors. Approaches suggested for scene animation include the sensor-effector model, the rule-based model, and the predefined environment model, but none of them completely solves the problem. This paper presents a relation-based model for dealing with the important issues of scene animation. Two fundamental principles of our model are the specification of atomic units of motion (relations), and the ability to combine these atomic units to produce sophisticated behaviors. Each relation models a simple responsive behavior between two objects, and has its own sensor, response, duration, strength, and state. The dynamic states of relations can be interactively structured into hierarchical layers that produce complex scene behaviors. During the animation, structured relations are dynamically triggered by either their sensing conditions or the state of the environment. The variable state-control hierarchies facilitate on-line scene behavior animations. Several dancing examples illustrate the use of our model.
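
As an illustration of the kind of data structure the abstract implies, here is a small, hypothetical sketch of a relation with its sensor, response, duration, strength, and dynamic state, plus a layer that groups relations. All names and the scene representation (a plain dict) are assumptions, not the authors' API.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Relation:
    """One atomic responsive behavior between two scene objects."""
    subject: str
    target: str
    sensor: Callable[[dict], bool]       # reads the scene state, decides if the relation fires
    response: Callable[[dict], None]     # mutates the scene state while active
    duration: float = 1.0                # how long the response stays active (seconds)
    strength: float = 1.0                # blending weight against competing relations
    active: bool = False                 # dynamic state
    time_left: float = 0.0

    def update(self, scene: dict, dt: float) -> None:
        if not self.active and self.sensor(scene):
            self.active, self.time_left = True, self.duration
        if self.active:
            self.response(scene)
            self.time_left -= dt
            if self.time_left <= 0.0:
                self.active = False

@dataclass
class RelationLayer:
    """A group of relations; layers can be stacked to build a control hierarchy."""
    relations: List[Relation] = field(default_factory=list)

    def update(self, scene: dict, dt: float) -> None:
        for r in self.relations:
            r.update(scene, dt)
```

Layers of this kind could then be nested, and switched on and off at runtime, to mimic the hierarchical state control the abstract mentions.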

3.
Quick creation of 3D character animations is an important task in game design, simulations, forensic animation, education, training, and more. We present a framework for creating 3D animated characters using a simple sketching interface coupled with a large, unannotated motion database that is used to find the appropriate motion sequences corresponding to the input sketches. Contrary to previous work that deals with static sketches, our input sketches can be enhanced by motion and rotation curves that improve matching in the context of the existing animation sequences. Our framework uses animated sequences as the basic building blocks of the final animated scenes, and allows for various operations on them such as trimming, resampling, or connecting by means of blending and interpolation. A database of significant and unique poses, together with a two-pass search running on the GPU, allows for interactive matching even for large numbers of poses in a template database. The system provides intuitive interfaces and immediate feedback, and places very small demands on the user. A user study showed that the system can be used by novice users with no animation experience or artistic talent, as well as by users with an animation background. Both groups were able to create animated scenes consisting of complex and varied actions in less than 20 minutes.

4.
A Real-Time Reactive Animation Generation Algorithm for Virtual Humans
A control method that combines motion-capture-driven animation with dynamic simulation can produce human motion that is both realistic and responsive to externally applied forces. To reduce the time spent searching reactive motion data in previous methods and to eliminate the manual tuning required from animators, we employ parallel computation and introduce an artificial neural network that predicts the type of reaction motion from the poses of the virtual human's main joints, yielding a sub-library of reaction motions to be searched. In addition, the search-and-matching algorithm is improved to raise search efficiency. Experimental results show that the virtual human in the system can switch flexibly between the two control modes and respond to external interactions in real time.

5.
Learning-Based Crowd Animation Generation
To reduce the difficulty and complexity of generating large numbers of natural yet similar human motions for crowd animation, a learning-based crowd animation generation technique is studied. The technique first builds a motion-pose learning model based on a Gaussian process latent variable model and a latent-space dynamic model, mapping high-dimensional motion poses into a low-dimensional latent space and modeling the dynamic evolution between neighboring poses there. It then learns the probability distribution of the poses that make up an existing motion, and uses dynamic prediction in the latent space together with Hybrid Monte Carlo sampling to obtain latent trajectories that follow the given distribution. Finally, pose reconstruction yields a series of natural motions that are very similar to, yet different from, the original motion, producing the crowd animation while avoiding the inherent difficulty and complexity of traditional inverse-kinematics methods based on geometric and physical constraints.
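
A rough sketch of the embed-perturb-reconstruct pipeline described above, with deliberate simplifications: PCA stands in for the Gaussian process latent variable model, and accumulated Gaussian perturbations stand in for latent dynamics with Hybrid Monte Carlo sampling, so this only conveys the overall structure, not the paper's method. All names are assumptions.

```python
import numpy as np

def crowd_variations(motion, latent_dim=3, n_characters=5, noise=0.05, seed=0):
    """Generate similar-but-distinct copies of a motion for a crowd.

    motion: (T, D) array of poses (T frames, D joint channels).
    Pipeline: embed poses in a low-dimensional latent space, perturb the
    latent trajectory slightly per character, reconstruct poses.
    """
    rng = np.random.default_rng(seed)
    mean = motion.mean(axis=0)
    centered = motion - mean
    # Linear embedding (stand-in for the learned GPLVM mapping).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:latent_dim]                   # (latent_dim, D)
    latent = centered @ basis.T               # (T, latent_dim)
    crowd = []
    for _ in range(n_characters):
        wiggle = rng.normal(scale=noise, size=latent.shape).cumsum(axis=0)
        crowd.append((latent + wiggle) @ basis + mean)   # reconstruct poses
    return crowd                              # list of (T, D) motions

clips = crowd_variations(np.random.default_rng(1).normal(size=(120, 60)))
print(len(clips), clips[0].shape)
```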

6.
Interactive Simulation of Multi-Character Animation in Dynamic 3D Scenes
Most current character animation is synthesized in preset scenes by importing motion capture data that matches the character's skeleton model, which cannot meet the needs of skeleton models with diverse topologies or scenes that change in real time. To address this, an algorithm is proposed that retargets motion capture data to multiple arbitrary skeleton topologies. By adopting an intelligent character pathfinding model built on a real-time 3D dynamic pathfinding algorithm, combined with a voice user interface in place of a graphical user interface, realistic motion of virtual characters in dynamic 3D scenes is achieved. Experimental results show that the method can synthesize smooth, lifelike character animation that interacts with the environment in real time, improves data reusability, reduces animation production cost, and meets the needs of animation synthesis in different dynamic 3D scenes.

7.
A scene animation involving a limited environmental boundary, obstacles, other static and dynamic objects, and scene events is more interdependent, variable, and personal than a single object's animation. Traditional path specification does not directly address the special issues in this problem domain, and its use can be very costly and time-consuming. In addition to this approach, three other models have been proposed for scene animations: the sensor-effector model, the rule-based model, and the predefined environment model. These models, however, are either incomplete or limited to certain behaviours or simple environments. This paper analyses the complexities of modular specification and processing in a general scene context, based on the concept of relations, its modelling framework, and state control hierarchies. These estimated complexities outline the general specification and system processing interfaces that can be used effectively in various scene applications.

8.
We present a method for 3D snake model construction and terrestrial snake locomotion synthesis in 3D virtual environments using image sequences. The snake skeleton is extracted and partitioned into equal segments using a new iterative algorithm for solving the equipartition problem. This method is applied to 3D model construction and at the motion analysis stage. Concerning the snake motion, the snake orientation is controlled by a path planning method. An animation synthesis algorithm, based on a physical motion model and tracking data from image sequences, describes the snake's velocity and skeleton shape transitions. Moreover, the proposed motion planning algorithm allows a large number of skeleton shapes, providing a general method for aperiodic motion sequence synthesis in any motion graph. Finally, the snake locomotion is adapted to the local 3D ground, while its behavior can be easily controlled by the model parameters, yielding appropriately realistic animations.
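
The equipartition step can be illustrated by splitting a polyline skeleton into pieces of equal arc length. The direct inversion of cumulative arc length below is a simple stand-in for the paper's iterative algorithm; the function name and example curve are assumptions.

```python
import numpy as np

def equipartition(points, n_segments):
    """Split an open polyline into n_segments pieces of equal arc length.

    points: (N, 2) or (N, 3) ordered skeleton points. Returns the
    n_segments + 1 breakpoints along the polyline.
    """
    points = np.asarray(points, dtype=float)
    seg_len = np.linalg.norm(np.diff(points, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])
    targets = np.linspace(0.0, cum[-1], n_segments + 1)
    breaks = []
    for t in targets:
        i = np.searchsorted(cum, t, side='right') - 1
        i = min(i, len(seg_len) - 1)
        alpha = 0.0 if seg_len[i] == 0 else (t - cum[i]) / seg_len[i]
        breaks.append(points[i] + alpha * (points[i + 1] - points[i]))
    return np.array(breaks)

# Example: a quarter circle split into 4 equal-length pieces.
theta = np.linspace(0, np.pi / 2, 100)
print(equipartition(np.c_[np.cos(theta), np.sin(theta)], 4))
```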

9.
Three-dimensional computer animation often struggles to compete with the flexibility and expressiveness commonly found in traditional animation, particularly when rendered non-photorealistically. We present an animation tool that takes skeleton-driven 3D computer animations and generates expressive deformations to the character geometry. The technique is based upon the cartooning and animation concepts of “lines of action” and “lines of motion” and automatically infuses computer animations with some of the expressiveness displayed by traditional animation. Motion and pose-based expressive deformations are generated from the motion data and the character geometry is warped along each limb’s individual line of motion. The effect of this subtle, yet significant, warping is twofold: geometric inter-frame consistency is increased, which helps create visually smoother animated sequences, and the warped geometry provides a novel solution to the problem of implied motion in non-photorealistic imagery. Object-space and image-space versions of the algorithm have been implemented and are presented.

10.
We present a sketch-based user interface, which was designed to help novices create 3D character animations by multi-pass sketching, avoiding the ambiguities usually present in sketch input. Our system also contains sketch-based editing and reproducing tools, which allow paths and motions to be partially updated rather than wholly redrawn, and a graphical block interface that permits motion sequences to be organized and reconfigured easily. A user evaluation with participants of different skill levels suggests that novices using this sketch interface can produce animations almost as quickly as users who are experienced in 3D animation.

11.
3D animations are an effective method to learn about complex dynamic phenomena, such as mesoscale biological processes. The animators' goals are to convey a sense of the scene's overall complexity while, at the same time, visually guiding the user through a story of subsequent events embedded in the chaotic environment. Animators use a variety of visual emphasis techniques to guide the observers' attention through the story, such as highlighting or halos, or by manipulating motion parameters of the scene. In this paper, we investigate the effect of smoothing the motion of contextual scene elements to attract attention to focus elements of the story exhibiting high-frequency motion. We conducted a crowdsourced study with 108 participants observing short animations with two illustrative motion smoothing strategies: geometric smoothing through noise reduction of contextual motion trajectories, and visual smoothing through motion blur of context items. We investigated the observers' ability to follow the story as well as the effect of the techniques on speed perception in a molecular scene. Our results show that moderate motion blur significantly improves users' ability to follow the story. Geometric motion smoothing is less effective but increases the visual appeal of the animation. However, both techniques also slow down the perceived speed of the animation. We discuss the implications of these results and derive design guidelines for animators of complex dynamic visualizations.
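
Geometric motion smoothing of a context object's trajectory can be sketched, under strong simplification, as low-pass filtering of its positions; the box filter and window size below are assumptions, not the filter the study actually used.

```python
import numpy as np

def smooth_trajectory(positions, window=9):
    """Low-pass a context object's 3D trajectory with a moving average.

    positions: (T, 3) positions per frame (window should be odd). The focus
    object would be left untouched so its high-frequency motion stands out
    against the calmer context.
    """
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(positions, ((pad, pad), (0, 0)), mode='edge')
    return np.stack([np.convolve(padded[:, d], kernel, mode='valid')
                     for d in range(positions.shape[1])], axis=1)

jittery = np.cumsum(np.random.default_rng(0).normal(size=(200, 3)), axis=0)
calm = smooth_trajectory(jittery)
print(jittery.shape, calm.shape)
```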

12.
This paper presents a novel data-driven expressive speech animation synthesis system with phoneme-level controls. This system is based on a pre-recorded facial motion capture database, where an actress was directed to recite a pre-designed corpus with four facial expressions (neutral, happiness, anger and sadness). Given new phoneme-aligned expressive speech and its emotion modifiers as inputs, a constrained dynamic programming algorithm is used to search for best-matched captured motion clips from the processed facial motion database by minimizing a cost function. Users optionally specify ‘hard constraints’ (motion-node constraints for expressing phoneme utterances) and ‘soft constraints’ (emotion modifiers) to guide this search process. We also introduce a phoneme–Isomap interface for visualizing and interacting with phoneme clusters that are typically composed of thousands of facial motion capture frames. On top of this novel visualization interface, users can conveniently remove contaminated motion subsequences from a large facial motion dataset. Facial animation synthesis experiments and objective comparisons between synthesized facial motion and captured motion showed that this system is effective for producing realistic expressive speech animations.
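
The constrained search over motion clips can be pictured as a Viterbi-style dynamic program: one clip per phoneme, a data cost per clip, a transition cost between consecutive clips, and hard constraints that pin particular motion nodes. The sketch below shows that generic structure only; the cost functions and names are placeholders, not the paper's.

```python
def dp_clip_search(candidates, data_cost, trans_cost, hard=None):
    """Pick one motion clip per phoneme by dynamic programming.

    candidates : list over phonemes; candidates[t] is the list of clip ids usable at t
    data_cost  : data_cost(t, clip) -> float (e.g. emotion-modifier mismatch)
    trans_cost : trans_cost(a, b)   -> float (smoothness of concatenating a then b)
    hard       : optional {t: clip} dict of user-pinned motion-node constraints
    """
    hard = hard or {}
    T = len(candidates)
    allowed = [[hard[t]] if t in hard else candidates[t] for t in range(T)]
    best = [{c: (data_cost(0, c), None) for c in allowed[0]}]
    for t in range(1, T):
        layer = {}
        for c in allowed[t]:
            prev, cost = min(((p, best[t - 1][p][0] + trans_cost(p, c))
                              for p in allowed[t - 1]), key=lambda x: x[1])
            layer[c] = (cost + data_cost(t, c), prev)
        best.append(layer)
    # Backtrack from the cheapest final clip.
    clip = min(best[-1], key=lambda c: best[-1][c][0])
    path = [clip]
    for t in range(T - 1, 0, -1):
        clip = best[t][clip][1]
        path.append(clip)
    return path[::-1]

# Toy run: three phonemes, two candidate clips each, prefer staying on the same clip.
cands = [[0, 1]] * 3
print(dp_clip_search(cands,
                     data_cost=lambda t, c: 0.0,
                     trans_cost=lambda a, b: 0.0 if a == b else 1.0,
                     hard={1: 1}))
```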

13.
This paper presents an integrated set of methods for the automatic construction and interactive animation of solid systems that satisfy specified geometric constraints. Displacement constraints enable the user to design articulated bodies with various degrees of freedom in rotation or in translation at hinges, and to restrict the scope of the movement at will. The graph of constrained objects may contain closed loops. The animation is achieved by decoupling the free motion of each solid component from the action of the constraints. We do this with iterative tuning of displacements. The method is currently implemented in a dynamically based animation system and takes the physical parameters into account while re-establishing the constraints. In particular, first-order momenta are preserved during this process. The approach would be easy to extend to modeling systems or animation modules without a physical model, simply by allowing the user to control more parameters.

14.
15.
Motion capture (MoCap) systems are increasingly used to acquire realistic and highly detailed motion data, which is widely used to produce animations of human-like characters in applications such as simulations, video games, and animated films, and large MoCap databases have recently become available. As a kind of emerging multimedia data, 3D human motion has its own specific data form and standard format, but to the best of our knowledge, only a few approaches have explored feature representation and reuse of 3D MoCap data. This paper proposes a group of novel approaches for posture feature representation, motion sequence segmentation, key-frame extraction, and content-based motion retrieval, all of which are important for MoCap data reuse and benefit efficient animation production. To validate these approaches, we set up a MoCap database and implemented a prototype toolkit. The experiments show that the proposed algorithms achieve acceptable results.
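
Two of the ingredients, key-frame extraction and content-based retrieval, can be sketched very simply: keep a frame whenever the pose drifts far enough from the last kept one, and rank database clips by how close they come to the query's key poses. The distance measure, threshold, and scoring below are assumptions, not the paper's feature representation.

```python
import numpy as np

def extract_keyframes(motion, threshold=1.0):
    """Greedy key-frame extraction on (T, D) pose vectors (e.g. joint angles)."""
    keys = [0]
    for t in range(1, motion.shape[0]):
        if np.linalg.norm(motion[t] - motion[keys[-1]]) > threshold:
            keys.append(t)
    return keys

def retrieve(query_keys, database, top_k=3):
    """Rank database motions by mean nearest-frame distance to the query key poses."""
    def score(candidate):
        return np.mean([np.min(np.linalg.norm(candidate - q, axis=1)) for q in query_keys])
    return sorted(range(len(database)), key=lambda i: score(database[i]))[:top_k]

rng = np.random.default_rng(0)
clip = np.cumsum(rng.normal(size=(300, 20)), axis=0)
keys = extract_keyframes(clip, threshold=15.0)
print(len(keys), retrieve(clip[keys], [clip, clip + 5.0, rng.normal(size=(300, 20))]))
```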

16.
Since adding background music and sound effects even to short animations is not simple, an automatic music generation system would help improve the overall quality of computer-generated animations. This paper describes a prototype system which automatically generates background music and sound effects for existing animations. The inputs to the system are music parameters (mood types and musical motifs) and motion parameters for individual scenes of an animation. Music is generated for each scene. The key for a scene is determined by considering the mood type and its degree, and the key of the previous scene. The melody for a scene is generated from the given motifs and the chord progression for the scene, which is determined according to appropriate rules. The harmony accompaniment for a scene is selected based on the mood type. The rhythm accompaniment for a scene is selected based on the mood type and tempo. The sound effects for motions are determined according to the characteristics and intensity of the motions. Both the background music and sound effects are generated so that the transitions between scenes are smooth.

17.
18.
The aim of this work is the recovery of 3D structure and camera projection matrices for each frame of an uncalibrated image sequence. In order to achieve this, correspondences are required throughout the sequence. A significant and successful mechanism for automatically establishing these correspondences is the use of geometric constraints arising from scene rigidity. However, problems arise with such geometry-guided matching if general viewpoint and general structure are assumed whilst frames in the sequence and/or scene structure do not conform to these assumptions. Such cases are termed degenerate. In this paper we describe two important cases of degeneracy and their effects on geometry-guided matching. The cases are a motion degeneracy, where the camera does not translate between frames, and a structure degeneracy, where the viewed scene structure is planar. The effects include the loss of correspondences due to under- or over-fitting of geometric models estimated from image data, leading to the failure of the tracking method. These degeneracies are not a theoretical curiosity, but commonly occur in real sequences where models are statistically estimated from image points with measurement error. We investigate two strategies for tackling such degeneracies: the first uses a statistical model selection test to identify when degeneracies occur; the second uses multiple motion models to overcome the degeneracies. The strategies are evaluated on real sequences varying in motion, scene type, and length from 13 to 120 frames.

19.
We introduce techniques for the processing of motion and animations of non-rigid shapes. The idea is to regard animations of deformable objects as curves in shape space. Then, we use the geometric structure on shape space to transfer concepts from curve processing in ℝⁿ to the processing of motion of non-rigid shapes. Following this principle, we introduce a discrete geometric flow for curves in shape space. The flow iteratively replaces every shape with a weighted average shape of a local neighborhood and thereby globally decreases an energy whose minimizers are discrete geodesics in shape space. Based on the flow, we devise a novel smoothing filter for motions and animations of deformable shapes. By shortening the length in shape space of an animation, it systematically regularizes the deformations between consecutive frames of the animation. The scheme can be used for smoothing and noise removal, e.g., for reducing jittering artifacts in motion capture data. We introduce a reduced-order method for the computation of the flow. In addition to being efficient for the smoothing of curves, it is a novel scheme for computing geodesics in shape space. We use the scheme to construct non-linear “Bézier curves” by executing de Casteljau's algorithm in shape space.
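
The structure of the flow and of de Casteljau's construction can be sketched with plain linear averaging of vertex positions standing in for averaging in the curved shape space (which is the paper's actual contribution); shapes here are just (V, 3) vertex arrays, an assumption made to keep the sketch short.

```python
import numpy as np

def smoothing_flow(frames, steps=10, weight=0.5):
    """Discrete flow that pulls an animation toward a geodesic-like curve.

    frames: (T, V, 3) mesh vertex positions per frame. Each interior frame is
    repeatedly replaced by a weighted average of itself and its two neighbours.
    """
    frames = np.array(frames, dtype=float)
    for _ in range(steps):
        neighbours = 0.5 * (frames[:-2] + frames[2:])
        frames[1:-1] = (1.0 - weight) * frames[1:-1] + weight * neighbours
    return frames

def de_casteljau(control_frames, t):
    """Evaluate a 'Bezier curve' of shapes at parameter t by repeated averaging."""
    pts = [np.array(f, dtype=float) for f in control_frames]
    while len(pts) > 1:
        pts = [(1.0 - t) * a + t * b for a, b in zip(pts[:-1], pts[1:])]
    return pts[0]

noisy = np.cumsum(np.random.default_rng(0).normal(size=(50, 100, 3)), axis=0)
print(smoothing_flow(noisy).shape, de_casteljau(noisy[::10], 0.5).shape)
```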

20.
We present a new set of interface techniques for visualizing and editing animation directly in a single three-dimensional scene. Motion is edited using direct-manipulation tools which satisfy high-level goals such as “reach this point at this time” or “go faster at this moment”. These tools can be applied over an arbitrary temporal range and maintain arbitrary degrees of spatial and temporal continuity. We separate spatial and temporal control of position by using two curves for each animated object: the motion path, which describes the 3D spatial path along which an object travels, and the motion graph, a function describing the distance traveled along this curve over time. Our direct-manipulation tools are implemented using displacement functions, a straightforward and scalable technique for satisfying motion constraints by composition of the displacement function with the motion graph or motion path. This paper will focus on applying displacement functions to positional change. However, the techniques presented are applicable to the animation of orientation, color, or any other attribute that varies over time.
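
A displacement-function edit can be sketched as composing the motion graph with a smooth bump that is non-zero only over the chosen temporal range; the cosine falloff and parameter names below are assumptions rather than the system's actual functions.

```python
import numpy as np

def displaced(motion_graph, t_star, delta, half_width):
    """Compose a motion graph with a smooth displacement function.

    motion_graph : callable t -> distance traveled along the motion path
    t_star       : time at which the user dragged the object
    delta        : extra distance requested at t_star
    half_width   : temporal range over which the edit falls off to zero
    """
    def bump(t):
        x = np.clip((t - t_star) / half_width, -1.0, 1.0)
        return 0.5 * (1.0 + np.cos(np.pi * x))      # 1 at t_star, 0 at the range ends
    return lambda t: motion_graph(t) + delta * bump(t)

# Constant-speed motion graph, edited to be 2 units further along at t = 3.
original = lambda t: 1.5 * np.asarray(t, dtype=float)
edited = displaced(original, t_star=3.0, delta=2.0, half_width=1.5)
ts = np.linspace(0.0, 6.0, 7)
print(edited(ts) - original(ts))   # the displacement, non-zero only near t = 3
```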
