Similar Documents
20 similar documents found (search time: 31 ms)
1.
Sensing gloves are often used as input devices for virtual 3D games. We propose a new method to control characters such as humans or animals in real time using sensing gloves. Building on existing body motion data, the method maps the user's hand motion to the locomotion of 3D characters in real time. We applied it to control the locomotion of characters such as humans and dogs, producing various motions including trotting, running, hopping, and turning. Because the computational cost of our method is low, the system's response time is short enough to satisfy the real-time requirements essential for games. Using our method, users can control their characters more intuitively and precisely than with previous input devices such as mice, keyboards, or joysticks. Copyright © 2006 John Wiley & Sons, Ltd.
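A minimal sketch of the kind of glove-to-locomotion mapping this abstract describes, assuming normalized flexion readings in [0, 1]; the gait labels, threshold scheme, and speed formula are illustrative guesses, not the authors' mapping:

```python
# Hypothetical glove-to-gait mapping: mean finger flexion selects a motion
# clip and scales its playback speed. All names and constants are illustrative.
import numpy as np

GAITS = ["walk", "trot", "run", "hop"]  # hypothetical motion-clip labels

def select_locomotion(flexion: np.ndarray) -> tuple[str, float]:
    """Map mean finger flexion in [0, 1] to a gait label and playback speed."""
    level = float(np.clip(flexion.mean(), 0.0, 1.0))
    gait = GAITS[min(int(level * len(GAITS)), len(GAITS) - 1)]
    speed = 0.5 + 1.5 * level          # tighter hand closure -> faster gait
    return gait, speed

gait, speed = select_locomotion(np.array([0.2, 0.3, 0.25, 0.4, 0.35]))
print(gait, round(speed, 2))           # e.g. "trot" at a moderate speed
```

A real system would calibrate the mapping per user and per character, but the table-lookup structure keeps the per-frame cost trivially low, which matches the abstract's real-time claim.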

2.
Interpolation synthesis of articulated figure motion
Most conventional media depend on engaging and appealing characters; empty spaces and buildings would not fare well as television or movie programming, yet virtual reality usually offers up such spaces. The problem lies in the difficulty of creating computer-generated characters that display real-time, engaging interaction and realistic motion. Articulated figure motion for real-time computer graphics offers one solution. A common approach stores a set of motions and lets you choose one particular motion at a time; the article describes a process that greatly expands the range of possible motions. Mixing motions selected from a database lets you create a new motion to exact specifications. The synthesized motion retains the original motions' subtle qualities, such as the realism of motion capture or the expressive, exaggerated qualities of artistic animation. Our method also provides a new way to achieve inverse kinematics capability, for example, placing the hands or feet of an articulated figure in specific positions. It proves useful for both real-time graphics and prerendered animation production. The method, called interpolation synthesis, is based on motion capture data and provides real-time character motion for interactive entertainment or avatars in virtual worlds.
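The core of interpolation synthesis is a convex combination of time-aligned example motions. A toy version of that blend, assuming clips stored as (frames, joints) arrays of joint angles; real systems interpolate rotations properly (e.g. as quaternions) and time-warp the clips first:

```python
# Blend time-aligned example motions with user-supplied weights.
# Clip contents here are placeholders standing in for database motions.
import numpy as np

def interpolate_motions(clips: list[np.ndarray], weights: list[float]) -> np.ndarray:
    """Weighted blend of motions with identical (frames, joints) shape."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                     # normalize into a convex combination
    return sum(wi * c for wi, c in zip(w, clips))

reach_low  = np.zeros((120, 30))        # hypothetical example motions
reach_high = np.ones((120, 30))
mid_reach = interpolate_motions([reach_low, reach_high], [0.5, 0.5])
```

Solving for the weights that place a hand or foot at a given position is what turns this blend into the inverse-kinematics capability the abstract mentions.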

3.
We present a novel approach for style retargeting to non-humanoid characters that allows stylistic features extracted from one character to be added to the motion of another character with a different body morphology. We introduce the concept of groups of body parts (GBPs), for example the torso, legs, and tail, and we argue that they can be used to capture the individual style of a character's motion. By separating GBPs from a character, the user can define mappings between characters with different morphologies. We automatically extract the motion of each GBP from the source, map it to the target, and then use a constrained optimization to adjust all joints in each GBP of the target, preserving the original motion while expressing the style of the source. We show results on characters whose morphologies differ from that of the source motion from which the style is extracted. The style transfer is intuitive and provides a high level of control. For most of the examples in this paper, defining the GBPs takes around 5 min and the optimization about 7 min on average; for the most complicated examples, defining three GBPs and their mapping takes about 10 min and the optimization another 30 min.

4.
Motion retargeting refers to the process of adapting the motion of a source character to a target. This paper presents a motion retargeting model based on temporal dilated convolutions. In an unsupervised manner, the model generates realistic motions for various humanoid characters. The retargeted motions not only preserve the high-frequency detail of the input motions but also produce natural and stable trajectories despite skeleton size differences between the source and target. Extensive experiments are conducted on a 3D character motion dataset and a motion capture dataset. Both qualitative and quantitative comparisons against prior methods demonstrate the effectiveness and robustness of our method.
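For readers unfamiliar with temporal dilated convolutions, the sketch below shows the building block such a retargeting network rests on; the layer sizes, channel counts, and joint count are illustrative, not the paper's architecture:

```python
# A stack of 1D convolutions over the time axis whose dilation doubles per
# layer, so the receptive field grows exponentially with depth.
import torch
import torch.nn as nn

class DilatedMotionEncoder(nn.Module):
    def __init__(self, channels: int = 64, n_joints: int = 22):
        super().__init__()
        layers, in_ch = [], n_joints * 3           # 3 values per joint per frame
        for dilation in (1, 2, 4, 8):
            layers += [nn.Conv1d(in_ch, channels, kernel_size=3,
                                 dilation=dilation, padding=dilation),
                       nn.ReLU()]
            in_ch = channels
        self.net = nn.Sequential(*layers)

    def forward(self, motion: torch.Tensor) -> torch.Tensor:
        # motion: (batch, joints*3, frames) -> features: (batch, channels, frames)
        return self.net(motion)

feats = DilatedMotionEncoder()(torch.randn(1, 66, 240))  # 240-frame clip
```

Because padding matches the dilation, the temporal length is preserved, which is convenient when the decoder must emit one retargeted pose per input frame.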

5.
We present a real-time system for character control that relies on the classification of locomotive actions in skeletal motion capture data. Our method is both progress-dependent and style-invariant. Two deep neural networks are used to correlate body shape and implicit dynamics with locomotion types and their respective progress. In comparison to related work, our approach does not require a setup step and enables the user to act in a natural, unconstrained manner. Our method also outperforms related work in scenarios where the actor performs sharp changes in direction and highly stylized motions, while maintaining at least as good performance in other scenarios. Our motivation is to enable character control of non-bipedal characters in virtual production and live immersive experiences, where mannerisms in the actor's performance may be an issue for previous methods.

6.
Pose Controlled Physically Based Motion
In this paper we describe a new method for generating and controlling physically-based motion of complex articulated characters. Our goal is to create motion from scratch, where the animator provides a small amount of input and gets in return a highly detailed and physically plausible motion. Our method relieves the animator of the burden of enforcing physical plausibility, but at the same time provides full control over the internal DOFs of the articulated character via a familiar interface. Control over the global DOFs is also provided by supporting kinematic constraints. Unconstrained portions of the motion are generated in real time, since the character is driven by joint torques generated by simple feedback controllers. Although kinematic constraints are satisfied using an iterative search (shooting), this process is typically inexpensive, since it only adjusts a few DOFs at a few time instances. The low cost of the optimization, combined with the ability to generate unconstrained motions in real time, yields an efficient and practical tool, which is particularly attractive for high-inertia motions with a relatively small number of kinematic constraints.
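The "simple feedback controllers" that generate joint torques are typically proportional-derivative (PD) controllers. A minimal sketch of that rule, with made-up gains:

```python
# PD control: torque proportional to pose error, damped by joint velocity.
# Gains kp/kd are illustrative; real controllers tune them per joint.
def pd_torque(q_target: float, q: float, q_dot: float,
              kp: float = 300.0, kd: float = 30.0) -> float:
    """Torque steering joint angle q toward q_target while damping velocity."""
    return kp * (q_target - q) - kd * q_dot

tau = pd_torque(q_target=0.8, q=0.5, q_dot=1.2)   # 300*0.3 - 30*1.2 = 54.0
```

Evaluating one such rule per joint per timestep is cheap, which is why the unconstrained portions of the motion can be generated in real time.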

7.
Natural motion synthesis for virtual humans has been studied extensively; however, controlling the motion of virtual characters that actively respond to complex dynamic environments is still a challenging task in computer animation. In movies, television, and video games, creating realistic human motions in a dynamically varying environment is labor-intensive, costly, animator-driven work. To address this problem, we propose a novel motion-synthesis approach that applies optimal path planning to direct motion synthesis, generating realistic character motions in response to complex dynamic environments. In our framework, SIPP (Safe Interval Path Planning) search is implemented to plan a globally optimal path through complex dynamic environments. Three types of control anchors for motion synthesis, defined here for the first time, are extracted on the planned path: turning anchors, height anchors, and time anchors. Directed by these control anchors, highly interactive motions of the virtual character are synthesized with motion fields, which produce a wide variety of natural motions and offer the control agility needed to handle complex dynamic environments. Experimental results show that our framework can synthesize motions of virtual humans that adapt naturally to complex dynamic environments, guaranteeing both an optimal path and realistic motion simultaneously.
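As a concrete illustration of one of the three anchor types, the sketch below extracts turning anchors from a planned path by thresholding heading changes; the threshold value and exact criterion are assumptions, not the paper's definitions:

```python
# Mark path points where the heading turns sharply as candidate anchors.
import numpy as np

def turning_anchors(path: np.ndarray, min_turn_deg: float = 20.0) -> list[int]:
    """Return indices of path points where the heading change exceeds a threshold."""
    anchors = []
    for i in range(1, len(path) - 1):
        a, b = path[i] - path[i - 1], path[i + 1] - path[i]
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
        if np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))) > min_turn_deg:
            anchors.append(i)
    return anchors

path = np.array([[0, 0], [1, 0], [2, 0], [2, 1], [2, 2]], dtype=float)
print(turning_anchors(path))   # -> [2]: the 90-degree corner
```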

8.
Expressive facial animations are essential to enhance the realism and credibility of virtual characters. Parameter-based animation methods offer precise control over facial configurations, while performance-based animation benefits from the naturalness of captured human motion. In this paper, we propose an animation system that combines the advantages of both approaches. By analyzing a database of facial motion, we create the human appearance space. The appearance space provides a coherent and continuous parameterization of human facial movements, while encapsulating the coherence of real facial deformations. We present a method to optimally construct an analogous appearance space for a synthetic character's face. The link between the two appearance spaces makes it possible to retarget facial animation onto a synthetic face from a video source. Moreover, the topological characteristics of the appearance space allow us to detect the principal variation patterns of a face and automatically reorganize them in a low-dimensional control space. The control space acts as an interactive user interface for manipulating the facial expressions of any synthetic face. This interface makes it simple and intuitive to generate still facial configurations for keyframe animation, as well as complete temporal sequences of facial movements. The resulting animations combine the flexibility of a parameter-based system with the realism of real human motion. Copyright © 2010 John Wiley & Sons, Ltd.
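The appearance-space construction is more involved than this, but a PCA over captured facial frames conveys the idea of detecting principal variation patterns and exposing them as a low-dimensional control space (frame and marker counts are hypothetical):

```python
# PCA via SVD: principal variation patterns of captured facial frames become
# the axes of a low-dimensional control space.
import numpy as np

frames = np.random.rand(500, 3 * 68)         # stand-in: 500 frames x 68 3D markers
mean = frames.mean(axis=0)
U, S, Vt = np.linalg.svd(frames - mean, full_matrices=False)
basis = Vt[:10]                               # top 10 variation patterns

def to_control_space(face: np.ndarray) -> np.ndarray:
    return (face - mean) @ basis.T            # face -> low-dimensional coordinates

def from_control_space(coords: np.ndarray) -> np.ndarray:
    return mean + coords @ basis              # coordinates -> facial configuration

coords = to_control_space(frames[0])          # edit coords, then map back
```

Because every point in the control space maps back to a plausible combination of captured deformations, edits made through it stay within the realm of real facial motion.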

9.
Can we make virtual characters in a scene interact with their surrounding objects through simple instructions? Is it possible to synthesize such motion plausibly for a diverse set of objects and instructions? Inspired by these questions, we present the first framework to synthesize the full-body motion of virtual human characters performing specified actions with 3D objects placed within their reach. Our system takes as input textual instructions specifying the objects and the associated 'intentions' of the virtual characters, and outputs diverse sequences of full-body motions. This contrasts with existing work, where full-body action synthesis methods generally do not consider object interactions, and human-object interaction methods focus mainly on synthesizing hand or finger movements for grasping objects. We accomplish our objective by designing an intent-driven full-body motion generator, which uses a pair of decoupled conditional variational auto-regressors to learn the motion of the body parts in an autoregressive manner. We also optimize the 6-DoF pose of the objects so that they plausibly fit within the hands of the synthesized characters. We compare our proposed method with existing motion-synthesis methods and establish a new and stronger state of the art for the task of intent-driven motion synthesis.

10.
Motion Editing Based on Reconstruction of Constraint Trajectories
We propose a method for reusing motion data. Displacement mapping is used to construct pose-constraint trajectories for the model's end effectors; these trajectories preserve the characteristics of the original motion well. The constraints are then solved with an unscented Kalman filter, generating new motions similar to the original in real time. Experimental results show that the method can generate a variety of motions satisfying the requirements of different models and scenes, extending the applicability of captured motion data.
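The heart of the unscented Kalman filter step is propagating sigma points of the joint-angle state through forward kinematics and comparing the result against the constraint trajectory. A sketch of that core, using a toy two-link arm in place of a full character:

```python
# Unscented transform: sample sigma points around the joint-angle estimate,
# push them through (nonlinear) forward kinematics, and form the predicted
# end-effector position as a weighted mean. The 2-link arm is a toy model.
import numpy as np

def sigma_points(mean: np.ndarray, cov: np.ndarray, kappa: float = 1.0):
    """Standard unscented-transform sigma points and weights."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)       # matrix square root
    pts = [mean] + [mean + L[:, i] for i in range(n)] \
                 + [mean - L[:, i] for i in range(n)]
    weights = np.array([kappa / (n + kappa)] + [1.0 / (2 * (n + kappa))] * (2 * n))
    return np.array(pts), weights

def fk_toy(q: np.ndarray) -> np.ndarray:
    """Hypothetical planar 2-link arm: joint angles -> end-effector position."""
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1])])

q_mean, q_cov = np.array([0.3, 0.5]), 0.05 * np.eye(2)
pts, w = sigma_points(q_mean, q_cov)
pred = np.sum(w[:, None] * np.array([fk_toy(p) for p in pts]), axis=0)
```

The filter's update step would then correct the joint angles toward the constraint trajectory using the covariance between sigma points and their predictions; being derivative-free, the whole loop is cheap enough to run per frame.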

11.
Applying motion‐capture data to multi‐person interaction between virtual characters is challenging because one needs to preserve the interaction semantics while also satisfying the general requirements of motion retargeting, such as preventing penetration and preserving naturalness. An efficient means of representing interaction semantics is by defining the spatial relationships between the body parts of characters. However, existing methods consider only the character skeleton and thus are not suitable for capturing skin‐level spatial relationships. This paper proposes a novel method for retargeting interaction motions with respect to character skins. Specifically, we introduce the aura mesh, which is a volumetric mesh that surrounds a character's skin. The spatial relationships between two characters are computed from the overlap of the skin mesh of one character and the aura mesh of the other, and then the interaction motion retargeting is achieved by preserving the spatial relationships as much as possible while satisfying other constraints. We show the effectiveness of our method through a number of experiments.  相似文献   

12.
Synthesizing the movements of a responsive virtual character in the event of unexpected perturbations has proven a difficult challenge. To solve this problem, we devise a fully automatic method that learns a nonlinear probabilistic model of dynamic responses from very few perturbed walking sequences. This model is able to synthesize responses and recovery motions under new perturbations different from those in the training examples. When perturbations occur, we propose a physics-based method that initiates motion transitions to the most probable response example based on the dynamic states of the character. Our algorithm can be applied to any motion sequence without the need for preprocessing such as segmentation or alignment. The results show that three perturbed motion clips are sufficient to generate a variety of realistic responses, and 14 clips can create a responsive virtual character that reacts realistically to external forces in different directions applied to different body parts at different moments in time.

13.
Juggling, which uses both hands to keep several objects in the air at once, is admired by anyone who sees it. However, skillful real‐world juggling requires long, hard practice. Therefore, we propose an interesting method to enable anyone to juggle skillfully in the virtual world. In the real world, the human motion has to follow the motion of the moving objects; in the virtual world, the objects' motion can be adjusted together with the human motion. By using this freedom, we have generated a juggling avatar that can follow the user's motion. The user simply makes juggling‐like motions in front of a motion sensor. Our system then searches for juggling motions that closely match the user's motions and connects them smoothly. We then generate moving objects that both satisfy the laws of physics and are synchronized with the synthesized motion of the avatar. In this way, we can generate a variety of juggling animations by an avatar in real time. Copyright © 2016 John Wiley & Sons, Ltd.  相似文献   
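Generating objects that satisfy the laws of physics while staying synchronized with the avatar reduces to solving a ballistic arc between a throw and a catch taken from the synthesized motion. A sketch, with hypothetical hand positions and flight time:

```python
# Solve the initial velocity of a ballistic arc so the ball released at
# p_throw lands exactly at p_catch after flight_time seconds (y is up).
import numpy as np

G = np.array([0.0, -9.81, 0.0])   # gravity

def throw_velocity(p_throw, p_catch, flight_time: float) -> np.ndarray:
    """v0 such that p(t) = p_throw + v0*t + 0.5*G*t^2 reaches p_catch at t=T."""
    return (np.asarray(p_catch) - np.asarray(p_throw)) / flight_time \
           - 0.5 * G * flight_time

p0 = np.array([0.2, 1.4, 0.3])                      # hypothetical hand positions
v0 = throw_velocity(p0, [-0.2, 1.4, 0.3], flight_time=0.6)
pos = lambda t: p0 + v0 * t + 0.5 * G * t * t       # ball position for t in [0, T]
```

Since throw and catch positions and times come from the avatar's motion, the resulting arcs are physically valid and synchronized by construction.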

14.
We present a general method for transferring skeletons and skinning weights between characters with distinct mesh topologies. Our pipeline takes as inputs a source character rig (consisting of a mesh, a transformation hierarchy of joints, and skinning weights) and a target character mesh. From these inputs, we compute joint locations and orientations that embed the source skeleton in the target mesh, as well as skinning weights to bind the target geometry to the new skeleton. Our method consists of two key steps. We first compute the geometric correspondence between source and target meshes using a semi-automatic method relying on a set of markers. The resulting geometric correspondence is then used to formulate attribute transfer as an energy minimization and filtering problem. We demonstrate our approach on a variety of source and target bipedal characters, varying in mesh topology and morphology. Several examples demonstrate that the target characters behave well when animated with either forward or inverse kinematics. Via these examples, we show that our method preserves subtle artistic variations; spatial relationships between geometry and joints, as well as skinning weight details, are accurately maintained. Our proposed pipeline opens up many exciting possibilities to quickly animate novel characters by reusing existing production assets.
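The paper formulates the attribute-transfer step as energy minimization and filtering; the nearest-neighbor sketch below shows only the simplest possible version of transferring skinning weights through a vertex correspondence:

```python
# Copy each target vertex's skinning weights from its nearest source vertex.
# A baseline stand-in for the paper's energy-minimization formulation.
import numpy as np

def transfer_weights(src_verts: np.ndarray, src_weights: np.ndarray,
                     tgt_verts: np.ndarray) -> np.ndarray:
    """src_verts: (N,3), src_weights: (N,J), tgt_verts: (M,3) -> (M,J)."""
    out = np.empty((len(tgt_verts), src_weights.shape[1]))
    for i, v in enumerate(tgt_verts):
        j = np.argmin(np.linalg.norm(src_verts - v, axis=1))
        out[i] = src_weights[j]
    return out

src_verts, tgt_verts = np.random.rand(1000, 3), np.random.rand(800, 3)
tgt_weights = transfer_weights(src_verts, np.random.rand(1000, 4), tgt_verts)
```

Raw nearest-neighbor copies are noisy at seams, which is exactly why the paper follows the transfer with smoothing/filtering over the target surface.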

15.
To simplify the production pipeline of digital shadow-puppet animation, improve the interactivity of digital shadow-puppet works, and reduce the difficulty of real-time shadow-puppet performance, we abstract and classify the common shadow-puppet characters and actions and propose a general skeleton model for each character type. Based on inverse kinematics, we design a script format for describing shadow-puppet skeletal animation. The script defines a character's basic actions by controlling the position and rotation attributes of the bones, and builds complex actions by layering basic actions. Because the script defines actions in terms of the skeleton model rather than a specific character, it is general. Finally, experiments demonstrate the effectiveness of the script.
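Driving puppet limbs from bone position and rotation attributes via inverse kinematics can be done analytically for two-bone chains, which suits the flat, hinged limbs of shadow puppets; a minimal planar two-bone IK sketch of the kind such a script engine needs (link lengths are examples):

```python
# Analytic planar two-bone IK: given a reachable target, return the two
# joint angles via the law of cosines.
import math

def two_bone_ik(target_x: float, target_y: float,
                l1: float = 1.0, l2: float = 1.0) -> tuple[float, float]:
    """Return (shoulder, elbow) angles placing the chain tip at the target."""
    d2 = target_x**2 + target_y**2
    cos_elbow = (d2 - l1**2 - l2**2) / (2 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(target_y, target_x) \
             - math.atan2(l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow

shoulder, elbow = two_bone_ik(1.2, 0.8)   # angles for a reachable target
```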

16.
In many virtual-environment applications, paths have to be planned for characters to traverse from a start to a goal position in the virtual world while avoiding obstacles. Contemporary applications require a path planner that is fast (to ensure real-time interaction with the environment) and flexible (to avoid local hazards such as small and dynamic obstacles). In addition, paths need to be smooth and short to ensure natural-looking motions. Current path-planning techniques do not meet these criteria simultaneously: A* approaches generate unnatural-looking paths, potential-field-based methods are too slow, and sampling-based path-planning techniques are inflexible. We propose a new technique, the Corridor Map Method (CMM), which satisfies all the criteria. In an off-line construction phase, the CMM creates a system of collision-free corridors for the static obstacles in an environment. In the query phase, paths can be planned inside the corridors for different types of characters while avoiding dynamic obstacles. Experiments show that high-quality paths for single characters or groups of characters can be obtained in real time. Copyright © 2007 John Wiley & Sons, Ltd.
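Inside a corridor, a character can be steered by simple forces: an attraction point pulls it along the corridor's backbone while a boundary force keeps it clear of the walls. The sketch below conveys this idea only; the gains and the clearance threshold are assumptions, not the CMM's actual force model:

```python
# Steering inside a corridor: pull toward an attraction point moving along
# the corridor backbone, and push back toward the center near the boundary.
import numpy as np

def steering_force(pos: np.ndarray, attraction_pt: np.ndarray,
                   backbone_pt: np.ndarray, clearance: float,
                   k_attract: float = 1.0, k_wall: float = 2.0) -> np.ndarray:
    f = k_attract * (attraction_pt - pos)          # progress along the corridor
    to_center = backbone_pt - pos
    dist = np.linalg.norm(to_center)
    if dist > 0.7 * clearance:                     # drifting toward the boundary
        f += k_wall * to_center / dist             # push back toward the center
    return f

f = steering_force(np.array([1.0, 0.2]), np.array([2.0, 0.0]),
                   np.array([1.0, 0.0]), clearance=0.25)
```

Because the expensive corridor construction happens off-line, only cheap per-frame force evaluations like this remain in the query phase, which is where the method's real-time performance comes from.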

17.
The most important goal of character animation is to efficiently control the motions of a character. Many techniques have been proposed for human gait animation, and some control the emotional quality of gaits, such as 'tired walking' and 'brisk walking', by using parameter interpolation or motion-data mapping. Since it is very difficult to automate control over the emotion of a motion, the emotions of a character model have traditionally been crafted by animators. This paper proposes a human running model based on a one-legged planar hopper with a self-balancing mechanism. The proposed technique exploits genetic programming to optimize movement and can be easily adapted to various character models. We extend the energy-minimization technique to generate various motions in accordance with emotional specifications. Copyright © 1999 John Wiley & Sons, Ltd.

18.
The design of autonomous characters capable of planning their own motions continues to be a challenge for computer animation. We present a novel kinematic motion-planning algorithm for character animation which addresses some of the outstanding problems. The problem domain for our algorithm is as follows: given a constrained environment with designated handholds and footholds, plan a motion through this space towards some desired goal. Our algorithm is based on a stochastic search procedure which is guided by a combination of geometric constraints, posture heuristics, and distance-to-goal metrics. The method provides a single framework for the use of multiple modes of locomotion in planning motions through these constrained, unstructured environments. We illustrate our results with demonstrations of a human character using walking, swinging, climbing, and crawling in order to navigate through various obstacle courses. Copyright © 2001 John Wiley & Sons, Ltd.

19.
We present a novel method for retargeting human motion to arbitrary 3D mesh models with as little user interaction as possible. Traditional motion-retargeting systems try to preserve the original motion, while satisfying several motion constraints. Our method uses a few pose-to-pose examples provided by the user to extract the desired semantics behind the retargeting process while not limiting the transfer to being only literal. Thus, mesh models with different structures and/or motion semantics from humanoid skeletons become possible targets. Also considering the fact that most publicly available mesh models lack additional structure (e.g. skeleton), our method dispenses with the need for such a structure by means of a built-in surface-based deformation system. As deformation for animation purposes may require non-rigid behaviour, we augment existing rigid deformation approaches to provide volume-preserving and squash-and-stretch deformations. We demonstrate our approach on well-known mesh models along with several publicly available motion-capture sequences.

20.
We introduce the concept of 4D model flow for the precomputed alignment of dynamic surface appearance across 4D video sequences of different motions reconstructed from multi-view video. Precomputed 4D model flow allows the efficient parametrization of surface appearance from the captured videos, which enables efficient real-time rendering of interpolated 4D video sequences whilst accurately reproducing visual dynamics, even when using a coarse underlying geometry. We estimate the 4D model flow using an image-based approach that is guided by available geometry proxies. We propose a novel representation in surface texture space for efficient storage and online parametric interpolation of dynamic appearance. Our 4D model flow overcomes previous requirements for computationally expensive online optical-flow computation for data-driven alignment of dynamic surface appearance by precomputing the appearance alignment. This leads to an efficient rendering technique that enables the online interpolation between 4D videos in real time, from arbitrary viewpoints and with visual quality comparable to the state of the art.
