Similar Literature
20 similar documents found.
1.
In this paper, we propose a novel motion controller for the online generation of natural character locomotion that adapts to new situations such as changing user control or applying external forces. This controller continuously estimates the next footstep while walking and running, and automatically switches the stepping strategy based on situational changes. To develop the controller, we devise a new physical model called an inverted‐pendulum‐based abstract model (IPAM). The proposed abstract model represents high‐dimensional character motions, inheriting the naturalness of captured motions by estimating the appropriate footstep location, speed and switching time at every frame. The estimation is achieved by a deep-learning-based regressor that extracts important features in captured motions. To validate the proposed controller, we train the model using captured motions of a human stopping, walking, and running in a limited space. Then, the motion controller generates human‐like locomotion with continuously varying speeds, transitions between walking and running, and collision response strategies in a cluttered space in real time.
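Where the paper's learned regressor is unavailable, the inverted-pendulum intuition behind IPAM can be illustrated with the classic "capture point" of the linear inverted pendulum. The sketch below is a hypothetical stand-in for the learned footstep estimator, not the authors' model:

```python
import math

def predict_footstep(com_pos, com_vel, com_height, g=9.81):
    """Estimate the next footstep with the linear inverted pendulum's
    'capture point': the ground location where planting the foot would
    bring the centre of mass to rest above it."""
    omega = math.sqrt(g / com_height)            # pendulum eigenfrequency
    return (com_pos[0] + com_vel[0] / omega,     # capture point, x
            com_pos[1] + com_vel[1] / omega)     # capture point, y (ground plane)

# Character moving forward at 2.2 m/s with its centre of mass 0.9 m high.
print(predict_footstep(com_pos=(0.0, 0.0), com_vel=(2.2, 0.1), com_height=0.9))
```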

2.
Human Motion Style Analysis and Synthesis Based on Inverse Kinematics and Reconstruction ICA
蓝荣祎, 孙怀江. 《自动化学报》 (Acta Automatica Sinica), 2014, 40(6): 1135-1147
Using independent component analysis (ICA) to model motion style and synthesize stylized human motion is an effective and promising approach. To avoid the manual bias of existing methods in choosing the number of independent components or the subspace structure, and to improve the quality of the style components, we propose a motion style analysis method based on reconstruction ICA. Because the orthogonality constraint on the mixing matrix is dropped, the model gains more degrees of freedom to represent each independent component; moreover, by exploiting the overcompleteness of the features and the inherent sparsity of its feature selection, it can determine the number of independent components automatically. In addition, by combining inverse kinematics based on principal geodesic analysis with motion transition techniques, the method can synthesize walking motions of arbitrary length that contain multiple styles, and the synthesized result can be constrained by editing the character's pose at specific frames. Experimental results show that the method effectively extracts the style-carrying independent components of motions such as walking, jumping, and kicking, and generates natural, smooth motions in real time in response to the user's style edits.
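As a rough illustration of the reconstruction-ICA objective the abstract refers to (reconstruction error plus a sparsity penalty, with no orthogonality constraint on the mixing matrix), here is a minimal gradient-descent sketch; the optimiser and all names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def rica(X, n_components, lam=0.1, lr=0.01, iters=500, seed=0):
    """Minimise ||W.T @ W @ X - X||^2 + lam * sum(|W @ X|) by plain
    gradient descent.  Without the orthogonality constraint, W may be
    overcomplete (n_components > feature dimension)."""
    rng = np.random.default_rng(seed)
    d, n = X.shape                           # X: (features, frames), zero-mean
    W = rng.standard_normal((n_components, d)) * 0.01
    eps = 1e-6
    for _ in range(iters):
        S = W @ X                            # component activations (k x n)
        R = W.T @ S - X                      # reconstruction residual (d x n)
        smooth = np.sqrt(S**2 + eps)         # differentiable surrogate for |S|
        grad = (2 * S @ R.T + 2 * W @ R @ X.T + lam * (S / smooth) @ X.T) / n
        W -= lr * grad
    return W

# Toy demo: 30-dim pose features, 200 frames, 60 overcomplete components.
X = np.random.default_rng(1).standard_normal((30, 200))
X -= X.mean(axis=1, keepdims=True)
print(rica(X, n_components=60).shape)   # -> (60, 30): non-orthogonal basis
```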

3.
Conditional models for contextual human motion recognition
We describe algorithms for recognizing human motion in monocular video sequences, based on discriminative conditional random fields (CRFs) and maximum entropy Markov models (MEMMs). Existing approaches to this problem typically use generative structures like the hidden Markov model (HMM). Therefore, they have to make simplifying, often unrealistic assumptions on the conditional independence of observations given the motion class labels and cannot accommodate rich overlapping features of the observation or long-term contextual dependencies among observations at multiple timesteps. This makes them prone to myopic failures in recognizing many human motions, because even the transition between simple human activities naturally has temporal segments of ambiguity and overlap. The correct interpretation of these sequences requires more holistic, contextual decisions, where the estimate of an activity at a particular timestep could be constrained by longer windows of observations, prior and even posterior to that timestep. This would not be computationally feasible with an HMM, which requires the enumeration of a number of observation sequences exponential in the size of the context window. In this work we follow a different philosophy: instead of restrictively modeling the complex image generation process – the observation – we work with models that can unrestrictedly take it as an input, hence condition on it. Conditional models like the proposed CRFs seamlessly represent contextual dependencies and have computationally attractive properties: they support efficient, exact recognition using dynamic programming, and their parameters can be learned using convex optimization. We introduce conditional graphical models as complementary tools for human motion recognition and present an extensive set of experiments that show not only how these can successfully classify diverse human activities like walking, jumping, running, picking or dancing, but also how they can discriminate among subtle motion styles like normal walks and wander walks.
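The "efficient, exact recognition using dynamic programming" that the abstract credits to CRFs is Viterbi decoding over a linear chain. A minimal sketch, assuming per-frame unary scores and a label-transition matrix have already been learned:

```python
import numpy as np

def crf_viterbi(unary, transition):
    """Exact MAP decoding for a linear-chain CRF via dynamic programming.
    unary:      (T, K) per-frame label scores; in a CRF these may condition
                on rich, overlapping observation features over long windows.
    transition: (K, K) label-to-label compatibility scores.
    Returns the highest-scoring label sequence."""
    T, K = unary.shape
    score = np.zeros((T, K))
    back = np.zeros((T, K), dtype=int)
    score[0] = unary[0]
    for t in range(1, T):
        cand = score[t - 1][:, None] + transition   # (prev label, cur label)
        back[t] = cand.argmax(axis=0)               # best predecessor per label
        score[t] = cand.max(axis=0) + unary[t]
    path = [int(score[-1].argmax())]
    for t in range(T - 1, 0, -1):                   # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy demo: 3 activity labels (e.g. walk, jump, run) over 6 frames.
rng = np.random.default_rng(0)
print(crf_viterbi(rng.standard_normal((6, 3)), rng.standard_normal((3, 3))))
```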

4.
This paper presents an efficient technique for synthesizing motions by stitching, or splicing, an upper‐body motion retrieved from a motion space on top of an existing lower‐body locomotion of another motion. Compared to the standard motion splicing problem, motion space splicing imposes new challenges as both the upper and lower body motions might not be known in advance. Our technique is the first motion (space) splicing technique that propagates temporal and spatial properties of the lower‐body locomotion to the newly generated upper‐body motion and vice versa. Whereas existing techniques only adapt the upper‐body motion to fit the lower‐body motion, our technique also adapts the lower‐body locomotion based on the upper body task for a more coherent full‐body motion. In this paper, we will show that our decoupled approach is able to generate high‐fidelity full‐body motion for interactive applications such as games.

5.
In the paper, we present an online real‐time method for automatically transforming a basic locomotive motion to a desired motion of the same type, based on biomechanical results. Given an online request for a motion of a certain type with desired moving speed and turning angle, our method first extracts a basic motion of the same type from a motion graph, and then transforms it to achieve the desired moving speed and turning angle by exploiting the following biomechanical observations: contact‐driven center‐of‐mass control, anticipatory reorientation of upper body segments, moving speed adjustment, and whole‐body leaning. Exploiting these observations, we propose a simple but effective method to add physical and behavioral naturalness to the resulting locomotive motions without preprocessing. Through experiments, we show that our method enables a character to respond agilely to online user commands while efficiently generating walking, jogging, and running motions with a compact motion library. Our method can also deal with certain dynamical motions such as forward roll. Copyright © 2016 John Wiley & Sons, Ltd.
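As a toy illustration of the speed and turning-angle transformation (root trajectory only; the paper's method additionally applies the listed biomechanical adjustments to the full body), one might write:

```python
import numpy as np

def retarget_locomotion(root_pos, desired_speed, turn_angle):
    """Rescale a 2-D root trajectory to a desired per-frame speed and
    spread a desired turning angle evenly across its frames.
    Hypothetical stand-in for the paper's full-body transformation."""
    steps = np.diff(root_pos, axis=0)                  # per-frame displacements
    cur = np.linalg.norm(steps, axis=1).mean()         # current mean step length
    steps *= desired_speed / max(cur, 1e-8)            # moving-speed adjustment
    dtheta = turn_angle / len(steps)                   # turn a little each frame
    out, heading = [root_pos[0]], 0.0
    for s in steps:
        heading += dtheta
        c, si = np.cos(heading), np.sin(heading)       # rotate the displacement
        out.append(out[-1] + np.array([c * s[0] - si * s[1],
                                       si * s[0] + c * s[1]]))
    return np.array(out)

# Straight 3 m walk over 30 frames, retargeted faster with a 45-degree turn.
walk = np.column_stack([np.linspace(0.0, 3.0, 31), np.zeros(31)])
print(retarget_locomotion(walk, desired_speed=0.15, turn_angle=np.pi / 4)[-1])
```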

6.
In this paper, we introduce an approach to high‐level parameterisation of captured mesh sequences of actor performance for real‐time interactive animation control. High‐level parametric control is achieved by non‐linear blending between multiple mesh sequences exhibiting variation in a particular movement. For example, walking speed is parameterised by blending fast and slow walk sequences. A hybrid non‐linear mesh sequence blending approach is introduced to approximate the natural deformation of non‐linear interpolation techniques whilst maintaining the real‐time performance of linear mesh blending. Quantitative results show that the hybrid approach gives an accurate real‐time approximation of offline non‐linear deformation. An evaluation of the approach shows good performance not only for entire meshes but also with specific mesh areas. Results are presented for single and multi‐dimensional parametric control of walking (speed/direction), jumping (height/distance) and reaching (height) from captured mesh sequences. This approach allows continuous real‐time control of high‐level parameters such as speed and direction whilst maintaining the natural surface dynamics of captured movement. Copyright © 2012 John Wiley & Sons, Ltd.
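The real-time half of the hybrid scheme is plain linear vertex blending between time-aligned mesh frames; a minimal sketch, with the offline non-linear correction omitted:

```python
import numpy as np

def blend_mesh_frames(slow_frame, fast_frame, w):
    """Linear vertex blend between time-aligned mesh frames; the scalar
    w in [0, 1] plays the role of the high-level speed parameter
    (w = 0: slow-walk mesh, w = 1: fast-walk mesh)."""
    return (1.0 - w) * slow_frame + w * fast_frame

# Toy example: two "meshes" of 1000 vertices with xyz coordinates.
slow = np.zeros((1000, 3))
fast = np.ones((1000, 3))
print(blend_mesh_frames(slow, fast, w=0.25)[0])   # -> [0.25 0.25 0.25]
```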

7.
Human motion indexing and retrieval are important for animators due to the need to search for motions in the database which can be blended and concatenated. Most previous research on human motion indexing and retrieval computes the Euclidean distance of joint angles or joint positions. Such approaches are difficult to apply to cases in which multiple characters are closely interacting with each other, as the relationships of the characters are not encoded in the representation. In this research, we propose a topology-based approach to index the motions of two human characters in close contact. We compute and encode how the two bodies are tangled based on the concept of rational tangles. The encoded relationships, which we define as TangleList, are used to determine the similarity of the pairs of postures. Using our method, we can index and retrieve motions such as one person piggy-backing another, one person assisting another in walking, and two persons dancing or wrestling. Our method is useful for managing a motion database of multiple characters. We can also produce motion graph structures of two characters closely interacting with each other by interpolating and concatenating topologically similar postures and motion clips, which are applicable to 3D computer games and computer animation.

8.
Controlling a crowd using multi‐touch devices appeals to the computer games and animation industries, as such devices provide a high‐dimensional control signal that can effectively define the crowd formation and movement. However, existing works relying on pre‐defined control schemes require the users to learn a scheme that may not be intuitive. We propose a data‐driven gesture‐based crowd control system, in which the control scheme is learned from example gestures provided by different users. In particular, we build a database with pairwise samples of gestures and crowd motions. To effectively generalize the gesture style of different users, such as the use of different numbers of fingers, we propose a set of gesture features for representing a set of hand gesture trajectories. Similarly, to represent crowd motion trajectories of different numbers of characters over time, we propose a set of crowd motion features that are extracted from a Gaussian mixture model. Given a run‐time gesture, our system extracts the K nearest gestures from the database and interpolates the corresponding crowd motions in order to generate the run‐time control. Our system is accurate and efficient, making it suitable for real‐time applications such as real‐time strategy games and interactive animation controls.
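The run-time lookup the abstract describes (K nearest gestures, interpolated crowd motions) can be sketched as inverse-distance-weighted KNN over precomputed feature vectors; the feature dimensions below are invented for illustration:

```python
import numpy as np

def knn_crowd_control(gesture_db, motion_db, query, k=3):
    """Map a run-time gesture to a crowd motion: find the K nearest
    example gestures in feature space and blend their paired
    crowd-motion features with inverse-distance weights."""
    d = np.linalg.norm(gesture_db - query, axis=1)    # distance to each example
    idx = np.argsort(d)[:k]                           # K nearest gestures
    w = 1.0 / (d[idx] + 1e-8)
    w /= w.sum()                                      # inverse-distance weights
    return (w[:, None] * motion_db[idx]).sum(axis=0)  # interpolated control

rng = np.random.default_rng(0)
gestures = rng.standard_normal((50, 16))   # 50 example gesture feature vectors
motions = rng.standard_normal((50, 8))     # paired crowd-motion feature vectors
print(knn_crowd_control(gestures, motions, query=rng.standard_normal(16)))
```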

9.
The simulation of two‐dimensional human locomotion in a bird's eye perspective is a key technology for various domains to realistically predict walk paths. The generated trajectories, however, frequently deviate from reality due to the use of simplifying assumptions. For instance, common deterministic motion planning algorithms predominantly utilize a set of static steering parameters (e.g. maximum acceleration or velocity of the agent) to simulate the walking behaviour of a person. This procedure neglects important influence factors that have a significant impact on the spatio‐temporal characteristics of the resulting motion, such as the operator's physical conditions or the probabilistic nature of the human locomotor system. To overcome this drawback, this paper presents an approach to derive probabilistic motion models from a database of captured human motions. Although initially designed for industrial purposes, this method can be applied to a wide range of use cases while considering an arbitrary number of dependencies (input) and steering parameters (output). To underline its applicability, a probabilistic steering parameter model is implemented, which models velocity, angular velocity and acceleration as a function of the travel distance, path curvature and height of the respective person. Finally, the technical performance and advantages of this model are demonstrated in an evaluation.
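A single-Gaussian version of such a probabilistic steering model follows directly from the standard Gaussian conditioning formula; the paper's model is richer, so treat this purely as a sketch of the idea:

```python
import numpy as np

def conditional_gaussian(mu, cov, x_idx, y_idx, x):
    """Condition a joint Gaussian over (inputs, steering parameters) on
    observed inputs x:
        mu_y|x = mu_y + C_yx C_xx^-1 (x - mu_x)
        C_y|x  = C_yy - C_yx C_xx^-1 C_xy
    Returns the conditional mean and covariance of the steering parameters."""
    mu_x, mu_y = mu[x_idx], mu[y_idx]
    C_xx = cov[np.ix_(x_idx, x_idx)]
    C_yy = cov[np.ix_(y_idx, y_idx)]
    C_yx = cov[np.ix_(y_idx, x_idx)]
    gain = C_yx @ np.linalg.inv(C_xx)
    return mu_y + gain @ (x - mu_x), C_yy - gain @ C_yx.T

# Fake "capture database": columns = [path curvature, velocity, angular velocity].
data = np.random.default_rng(0).standard_normal((500, 3))
mu, cov = data.mean(axis=0), np.cov(data.T)
mean, var = conditional_gaussian(mu, cov, x_idx=[0], y_idx=[1, 2], x=np.array([0.3]))
print(mean, np.sqrt(np.diag(var)))   # conditional steering-parameter distribution
```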

10.
Dynamic human shape in video contains rich perceptual information, such as the body posture, identity, and even the emotional state of a person. Human locomotion activities, such as walking and running, have familiar spatiotemporal patterns that can easily be detected in arbitrary views. We present a framework for detecting shape outliers for human locomotion using a dynamic shape model that factorizes the body posture, the viewpoint, and the individual’s shape style. The model uses a common embedding of the kinematic manifold of the motion and factorizes the shape variability with respect to different viewpoints and shape styles in the space of the coefficients of the nonlinear mapping functions that are used to generate the shapes from the kinematic manifold representation. Given a corrupted input silhouette, an iterative procedure is used to recover the body posture, viewpoint, and shape style. We use the proposed outlier detection approach to fill in the holes in the input silhouettes, and detect carried objects, shadows, and abnormal motions.

11.
We propose an approach for modeling, measurement and tracking of rigid and articulated motion as viewed from a stationary or moving camera. We first propose an approach for learning temporal-flow models from exemplar image sequences. The temporal-flow models are represented as a set of orthogonal temporal-flow bases that are learned using principal component analysis of instantaneous flow measurements. Spatial constraints on the temporal-flow are then incorporated to model the movement of regions of rigid or articulated objects. These spatio-temporal flow models are subsequently used as the basis for simultaneous measurement and tracking of brightness motion in image sequences. Then we address the problem of estimating composite independent object and camera image motions. We employ the spatio-temporal flow models learned through observing typical movements of the object from a stationary camera to decompose image motion into independent object and camera motions. The performance of the algorithms is demonstrated on several long image sequences of rigid and articulated bodies in motion.
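Learning orthogonal temporal-flow bases "using principal component analysis of instantaneous flow measurements" reduces to PCA over flattened flow fields; a minimal sketch under that reading:

```python
import numpy as np

def temporal_flow_bases(flows, n_bases):
    """Learn orthogonal temporal-flow bases by PCA (via SVD): each row of
    `flows` is one flattened instantaneous flow field; the returned rows
    span the dominant modes of the observed motion."""
    mean = flows.mean(axis=0)
    _, _, Vt = np.linalg.svd(flows - mean, full_matrices=False)
    return mean, Vt[:n_bases]

def project(flow, mean, bases):
    """Coefficients of a new flow measurement in the learned basis."""
    return bases @ (flow - mean)

rng = np.random.default_rng(0)
flows = rng.standard_normal((120, 2 * 32 * 32))   # 120 frames of 32x32 (u, v) flow
mean, bases = temporal_flow_bases(flows, n_bases=8)
print(project(flows[0], mean, bases).shape)       # -> (8,)
```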

12.
Creating long motion sequences is a time‐consuming task even when motion capture equipment or motion editing tools are used. In this paper, we propose a system for creating a long motion sequence by combining elementary motion clips. The user is asked to first input motions on a timeline. The system then automatically generates a continuous and natural motion. Our system employs four motion synthesis methods: motion transition, motion connection, motion adaptation, and motion composition. Based on the constraints between the feet of the animated character and the ground, and the timing of the input motions, the appropriate method is determined for each pair of overlapped or sequential motions. As the user changes the arrangement of the motion clips, the system interactively changes the output motion. Alternatively, the user can make the system execute an input motion as soon as possible so that it follows the previous motion smoothly. Using our system, users can make use of existing motion clips. Because the entire process is automatic, even novices can easily use our system. A prototype system demonstrates the effectiveness of our approach.

13.
We present a novel approach for animating static images that contain objects that move in a subtle, stochastic fashion (e.g. rippling water, swaying trees, or flickering candles). To do this, our algorithm leverages example videos of similar objects, supplied by the user. Unlike previous approaches which estimate motion fields in the example video to transfer motion into the image, a process which is brittle and produces artefacts, we propose an Eulerian phase‐based approach which uses the phase information from the sample video to animate the static image. As is well known, phase variations in a signal relate naturally to the displacement of the signal via the Fourier Shift Theorem. To enable local and spatially varying motion analysis, we analyse phase changes in a complex steerable pyramid of the example video. These phase changes are then transferred to the corresponding spatial sub‐bands of the input image to animate it. We demonstrate that this simple, phase‐based approach for transferring small motion is more effective at animating still images than methods which rely on optical flow.
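The Fourier shift theorem underlying the phase-based approach is easy to verify in one dimension; the paper applies the same principle per sub-band of a complex steerable pyramid rather than globally as in this sketch:

```python
import numpy as np

def phase_shift(signal, shift):
    """Translate a periodic 1-D signal by `shift` samples purely through
    phase modification, per the Fourier shift theorem:
        F{f(n - s)}(k) = exp(-2*pi*i*k*s/N) * F{f}(k)."""
    N = len(signal)
    k = np.fft.fftfreq(N) * N                      # integer frequency indices
    phase = np.exp(-2j * np.pi * k * shift / N)    # phase-only modification
    return np.fft.ifft(np.fft.fft(signal) * phase).real

# Phase-shifting a sampled sine reproduces a circular shift exactly.
x = np.sin(np.linspace(0.0, 4.0 * np.pi, 64, endpoint=False))
print(np.allclose(phase_shift(x, 8), np.roll(x, 8)))   # -> True
```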

14.
Obtaining high-quality, realistic motions of articulated characters is both time consuming and expensive, necessitating the development of easy-to-use and effective tools for motion editing and reuse. We propose a new simple technique for generating constrained variations of different lengths from an existing captured or otherwise animated motion. Our technique is applicable to textural motions, such as walking or dancing, where the motion sequence can be decomposed into shorter motion segments without an obvious temporal ordering among them. Inspired by previous work on texture synthesis and video textures, our method essentially produces a reordering of these shorter segments. Discontinuities are eliminated by carefully choosing the transition points and applying local adaptive smoothing in their vicinity, if necessary. The user is able to control the synthesis process by specifying a small number of simple constraints.
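In the spirit of the video-textures idea the method builds on, segment reordering can be sketched as repeatedly choosing a next segment whose opening pose matches the current closing pose; the transition-cost measure and sampling scheme below are illustrative assumptions:

```python
import numpy as np

def reorder_segments(segments, n_out, seed=0):
    """Produce a new play order over motion segments: repeatedly pick a
    next segment whose opening pose is close to the closing pose of the
    current one, sampled with a soft preference so the result is a
    variation rather than a copy.  `segments` is a list of
    (frames, dofs) arrays; transition smoothing is omitted."""
    rng = np.random.default_rng(seed)
    order = [int(rng.integers(len(segments)))]
    for _ in range(n_out - 1):
        tail = segments[order[-1]][-1]               # closing pose so far
        cost = np.array([np.linalg.norm(s[0] - tail) for s in segments])
        probs = np.exp(-cost / (cost.std() + 1e-8))  # favour cheap transitions
        probs /= probs.sum()
        order.append(int(rng.choice(len(segments), p=probs)))
    return order

rng = np.random.default_rng(1)
segs = [rng.standard_normal((20, 45)) for _ in range(6)]   # 6 clips, 45 DOFs each
print(reorder_segments(segs, n_out=10))
```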

15.
We present a new approach to motion rearrangement that preserves the syntactic structures of an input motion automatically by learning a context‐free grammar from the motion data. For grammatical analysis, we reduce an input motion into a string of terminal symbols by segmenting the motion into a series of subsequences, and then associating a group of similar subsequences with the same symbol. To obtain the most repetitive and precise set of terminals, we search for an optimal segmentation such that a large number of subsequences can be clustered into groups with little error. Once the input motion has been encoded as a string, a grammar induction algorithm is employed to build up a context‐free grammar so that the grammar can reconstruct the original string accurately as well as generate novel strings sharing their syntactic structures with the original string. Given any new strings from the learned grammar, it is straightforward to synthesize motion sequences by replacing each terminal symbol with its associated motion segment, and stitching every motion segment sequentially. We demonstrate the usefulness and flexibility of our approach by learning grammars from a large diversity of human motions, and reproducing their syntactic structures in new motion sequences.
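The encoding step, reducing a motion to a string of terminal symbols by clustering similar subsequences, can be sketched with fixed-length segments and k-means (the paper instead optimises the segmentation itself and then induces the grammar):

```python
import numpy as np

def motion_to_string(motion, seg_len, n_symbols, iters=20, seed=0):
    """Encode a motion (frames x dofs) as a string: cut it into
    fixed-length subsequences, k-means-cluster them (plain Lloyd
    iterations), and give all subsequences in a cluster the same
    terminal symbol."""
    n = len(motion) // seg_len
    segs = motion[: n * seg_len].reshape(n, -1)          # one row per segment
    rng = np.random.default_rng(seed)
    centers = segs[rng.choice(n, n_symbols, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(segs[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)                        # nearest cluster
        for k in range(n_symbols):
            if (labels == k).any():
                centers[k] = segs[labels == k].mean(axis=0)
    return "".join(chr(ord("A") + int(l)) for l in labels)

motion = np.random.default_rng(2).standard_normal((300, 30))
print(motion_to_string(motion, seg_len=15, n_symbols=4))   # e.g. "CABBDACD..."
```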

16.
This study aims to improve human jumping capabilities from an engineering perspective by proposing a supportive device, as in a robot suit. Using motors, the robot expresses the time-oriented optimal torque pattern of each joint, i.e., the most efficient jumping motion and timing for achieving the highest jump, and the human then learns this most efficient jumping motion. To actualize the most efficient jumping motion, we observe human jumping motions and analytically calculate the most efficient torque patterns based on them. This work was presented in part at the 13th International Symposium on Artificial Life and Robotics, Oita, Japan, January 31–February 2, 2008

17.
In the last decade we have witnessed a rapid growth of Humanoid Robotics, which has already become an autonomous research field. Humanoid robots (or simply humanoids) are expected in all situations of humans’ everyday life, “living” and cooperating with us. They will work in services, in homes, and hospitals, and they are even expected to get involved in sports. Hence, they will have to be capable of doing diverse kinds of tasks. This forces researchers to develop an appropriate mathematical model to support the simulation, design, and control of these systems. Another important fact is that today’s, and especially tomorrow’s, humanoid robots will be more and more humanlike in their shape and behavior. A dynamic model developed for an advanced humanoid robot may therefore become a very useful tool for the dynamic analysis of human motion in different tasks (walking, running and jumping, manipulation, various sports, etc.). So, we derive a general model and talk about a human-and-humanoid simulation system. The basic idea is to start from a human/humanoid considered as a free spatial system (a “flier”). Particular problems (walking, jumping, etc.) are then considered as different contact tasks: interaction between the flier and various objects (either single bodies or separate dynamic systems).

18.
We present a method for capturing the skeletal motions of humans using a sparse set of potentially moving cameras in an uncontrolled environment. Our approach is able to track multiple people even in front of cluttered and non‐static backgrounds, and unsynchronized cameras with varying image quality and frame rate. We completely rely on optical information and do not make use of additional sensor information (e.g. depth images or inertial sensors). Our algorithm simultaneously reconstructs the skeletal pose parameters of multiple performers and the motion of each camera. This is facilitated by a new energy functional that captures the alignment of the model and the camera positions with the input videos in an analytic way. The approach can be adopted in many practical applications to replace the complex and expensive motion capture studios with few consumer‐grade cameras even in uncontrolled outdoor scenes. We demonstrate this based on challenging multi‐view video sequences that are captured with unsynchronized and moving (e.g. mobile‐phone or GoPro) cameras.

19.
Motion capture cannot generate cartoon‐style animation directly. We emulate the rubber‐like exaggerations common in traditional character animation as a means of converting motion capture data into cartoon‐like movement. We achieve this using trajectory‐based motion exaggeration while allowing the violation of link‐length constraints. We extend this technique to obtain smooth, rubber‐like motion by dividing the original links into shorter sub‐links and computing the positions of joints using Bézier curve interpolation and a mass‐spring simulation. This method is fast enough to be used in real time.
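Placing sub-joint positions along a Bézier curve, the core of the rubber-like bending described above, can be sketched as follows (the paper's mass-spring simulation on the sub-joints is omitted):

```python
import numpy as np

def rubber_link(p0, p1, ctrl, n_sub=8):
    """Divide the link from joint p0 to joint p1 into n_sub sub-links and
    place the sub-joint positions on a quadratic Bezier curve bulging
    toward the control point `ctrl`:
        B(t) = (1-t)^2 p0 + 2 t (1-t) ctrl + t^2 p1,  t in [0, 1]."""
    t = np.linspace(0.0, 1.0, n_sub + 1)[:, None]
    return (1 - t) ** 2 * p0 + 2 * t * (1 - t) * ctrl + t ** 2 * p1

# A horizontal "arm" exaggerated into an upward rubber-like arc.
arm = rubber_link(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                  ctrl=np.array([0.5, 0.4]))
print(arm.round(2))
```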

20.