Similar Articles
 20 similar articles retrieved (search time: 15 ms)
1.
In this paper, we propose a novel motion controller for the online generation of natural character locomotion that adapts to new situations such as changing user control or applying external forces. This controller continuously estimates the next footstep while walking and running, and automatically switches the stepping strategy based on situational changes. To develop the controller, we devise a new physical model called an inverted‐pendulum‐based abstract model (IPAM). The proposed abstract model represents high‐dimensional character motions, inheriting the naturalness of captured motions by estimating the appropriate footstep location, speed and switching time at every frame. The estimation is achieved by a deep‐learning‐based regressor that extracts important features from captured motions. To validate the proposed controller, we train the model using captured motions of a human stopping, walking, and running in a limited space. Then, the motion controller generates human‐like locomotion with continuously varying speeds, transitions between walking and running, and collision response strategies in a cluttered space in real time.
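The footstep estimation above builds on inverted-pendulum dynamics. As an illustration (not the paper's IPAM, whose estimates come from a learned regressor), the classic linear-inverted-pendulum "capture point" gives a closed-form next-footstep estimate from the center-of-mass state; all parameter names below are generic:

```python
import math

def next_footstep(x_com, v_com, z_com, g=9.81):
    """Estimate the next footstep location for a linear inverted
    pendulum: the 'capture point' where stepping would bring the
    center of mass to rest over the foot."""
    omega = math.sqrt(g / z_com)   # natural frequency of the pendulum
    return x_com + v_com / omega   # instantaneous capture point

# walking forward at 1.2 m/s with the CoM 0.9 m above the ground
step_x = next_footstep(x_com=0.0, v_com=1.2, z_com=0.9)
```

Faster walking pushes the capture point further ahead of the body, which is the qualitative behavior the abstract model needs to reproduce.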

2.
This paper presents an efficient technique for synthesizing motions by stitching, or splicing, an upper‐body motion retrieved from a motion space on top of an existing lower‐body locomotion of another motion. Compared to the standard motion splicing problem, motion space splicing imposes new challenges as both the upper and lower body motions might not be known in advance. Our technique is the first motion (space) splicing technique that propagates temporal and spatial properties of the lower‐body locomotion to the newly generated upper‐body motion and vice versa. Whereas existing techniques only adapt the upper‐body motion to fit the lower‐body motion, our technique also adapts the lower‐body locomotion based on the upper body task for a more coherent full‐body motion. In this paper, we will show that our decoupled approach is able to generate high‐fidelity full‐body motion for interactive applications such as games.

3.
Controlling a crowd using multi‐touch devices appeals to the computer games and animation industries, as such devices provide a high‐dimensional control signal that can effectively define the crowd formation and movement. However, existing works relying on pre‐defined control schemes require the users to learn a scheme that may not be intuitive. We propose a data‐driven gesture‐based crowd control system, in which the control scheme is learned from example gestures provided by different users. In particular, we build a database with pairwise samples of gestures and crowd motions. To effectively generalize the gesture style of different users, such as the use of different numbers of fingers, we propose a set of gesture features for representing a set of hand gesture trajectories. Similarly, to represent crowd motion trajectories of different numbers of characters over time, we propose a set of crowd motion features that are extracted from a Gaussian mixture model. Given a run‐time gesture, our system extracts the K nearest gestures from the database and interpolates the corresponding crowd motions in order to generate the run‐time control. Our system is accurate and efficient, making it suitable for real‐time applications such as real‐time strategy games and interactive animation controls.
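The retrieve-and-interpolate step described above can be sketched generically. The snippet below assumes gestures and crowd motions have already been reduced to fixed-length feature vectors and blends the K nearest neighbours with inverse-distance weights; the paper's actual features (e.g. the GMM-based crowd features) are richer:

```python
import math

def knn_interpolate(query, database, k=3):
    """Retrieve the k gesture feature vectors nearest to the query and
    blend their paired crowd-motion parameters with inverse-distance
    weights. `database` is a list of (gesture_vec, motion_vec) pairs."""
    ranked = sorted(database, key=lambda pair: math.dist(query, pair[0]))[:k]
    weights = [1.0 / (math.dist(query, g) + 1e-6) for g, _ in ranked]
    total = sum(weights)
    dim = len(ranked[0][1])
    # weighted average of the retrieved crowd-motion vectors
    return [sum(w * m[i] for w, (_, m) in zip(weights, ranked)) / total
            for i in range(dim)]

# toy database: three gesture/motion pairs in trivially small spaces
db = [((0.0, 0.0), (1.0,)), ((1.0, 0.0), (2.0,)), ((0.0, 1.0), (3.0,))]
blended = knn_interpolate((0.0, 0.0), db, k=3)
```

An exact match in the database dominates the weights, so the run-time control degrades gracefully to pure retrieval.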

4.
With the increase of innovations in vision-based hand gesture interaction systems, new techniques and algorithms are being developed by researchers. However, less attention has been paid to decomposing the hand tracking problem. There are also few publicly available databases that serve as benchmark data to standardize research in the hand tracking area. For this purpose, we develop a versatile hand gesture tracking database. This database consists of 60 video sequences containing a total of 15,554 RGB color images. The tracking sequences are captured in different situations, ranging from an easy indoor scene to extremely challenging outdoor scenes. Complete with annotated ground truth data, this database is made available on the web to assist other researchers in related fields in testing and evaluating their algorithms against standard benchmark data.

5.
This paper studies the interaction between a walking virtual human and virtual terrain. Collision detection determines the correct position of the body above the ground. Motion blending, in which several representative motions are combined with appropriate weights to produce new motion data, drives the virtual human in real time so that its response to environmental changes remains visually convincing. The blending weights of the original motions are determined by the two ground slopes along and perpendicular to the direction of movement, and a geometric analysis of the terrain allows the virtual human to perceive the surrounding terrain.
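A minimal one-dimensional version of the slope-driven blending described above, assuming just two source clips (flat walk and uphill walk) and a weight that ramps linearly with the forward slope; the paper blends several clips using both slope components, so this is only a sketch:

```python
def blend_weights(slope_forward, max_slope=30.0):
    """Blend weight between a flat-walk clip and an uphill clip,
    driven by the ground slope along the walking direction (degrees).
    Clamped so extreme slopes use the uphill clip alone."""
    w = max(0.0, min(1.0, slope_forward / max_slope))
    return (1.0 - w, w)   # (weight_flat, weight_uphill)

def blend_pose(pose_flat, pose_up, slope_forward):
    """Blend two poses (lists of joint angles) per the slope weights."""
    w0, w1 = blend_weights(slope_forward)
    return [w0 * a + w1 * b for a, b in zip(pose_flat, pose_up)]
```

On flat ground the flat clip is reproduced exactly, and intermediate slopes interpolate joint angles smoothly, which is what keeps the reaction visually plausible.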

6.
Stitching different character motions is one of the most commonly used techniques as it allows the user to make new animations that fit one's purpose from pieces of motion. However, current motion stitching methods often produce unnatural motion with foot sliding artefacts, depending on the performance of the interpolation. In this paper, we propose a novel motion stitching technique based on a recurrent motion refiner (RMR) that connects discontinuous locomotions into a single natural locomotion. Our model receives different locomotions as input, in which the root of the last pose of the previous motion and that of the first pose of the next motion are aligned. During runtime, the model slides through the sequence, editing frames window by window to output a smoothly connected animation. Our model consists of a two-layer recurrent network that comes between a simple encoder and decoder. To train this network, we created a sufficient number of paired data using a newly designed data-generation process. This process employs a K-nearest neighbour search that explores a predefined motion database to create the input corresponding to each ground truth. Once trained, the suggested model can connect locomotion sequences of various lengths into a single natural locomotion.

7.
Sensing gloves are often used as an input device for virtual 3D games. We propose a new method to control characters such as humans or animals in real‐time by using sensing gloves. Based on existing motion data of the body, a new method to map the hand motion of the user to the locomotion of 3D characters in real‐time is proposed. The method was applied to control locomotion of characters such as humans or dogs. Various motions such as trotting, running, hopping, and turning could be produced. As the computational cost of our method is low, the system's response time is short enough to satisfy the real‐time requirements that are essential for games. Using our method, users can control their characters more intuitively and precisely than with previous control devices such as mice, keyboards, or joysticks. Copyright © 2006 John Wiley & Sons, Ltd.

8.
In the paper, we present an online real‐time method for automatically transforming a basic locomotive motion to a desired motion of the same type, based on biomechanical results. Given an online request for a motion of a certain type with desired moving speed and turning angle, our method first extracts a basic motion of the same type from a motion graph, and then transforms it to achieve the desired moving speed and turning angle by exploiting the following biomechanical observations: contact‐driven center‐of‐mass control, anticipatory reorientation of upper body segments, moving speed adjustment, and whole‐body leaning. Exploiting these observations, we propose a simple but effective method to add physical and behavioral naturalness to the resulting locomotive motions without preprocessing. Through experiments, we show that our method enables a character to respond agilely to online user commands while efficiently generating walking, jogging, and running motions with a compact motion library. Our method can also deal with certain dynamical motions such as forward roll. Copyright © 2016 John Wiley & Sons, Ltd.

9.
We present a wearable input system which enables interaction through 3D handwriting recognition. Users can write text in the air as if they were using an imaginary blackboard. The handwriting gestures are captured wirelessly by motion sensors applying accelerometers and gyroscopes which are attached to the back of the hand. We propose a two-stage approach for spotting and recognition of handwriting gestures. The spotting stage uses a support vector machine to identify those data segments which contain handwriting. The recognition stage uses hidden Markov models (HMMs) to generate a text representation from the motion sensor data. Individual characters are modeled by HMMs and concatenated to word models. Our system can continuously recognize arbitrary sentences, based on a freely definable vocabulary. A statistical language model is used to enhance recognition performance and to restrict the search space. We show that continuous gesture recognition with inertial sensors is feasible for gesture vocabularies that are several orders of magnitude larger than traditional vocabularies for known systems. In a first experiment, we evaluate the spotting algorithm on a realistic data set including everyday activities. In a second experiment, we report the results from a nine-user experiment on handwritten sentence recognition. Finally, we evaluate the end-to-end system on a small but realistic data set.
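The character HMMs above are decoded with standard dynamic programming. A textbook Viterbi decoder over discrete observations is sketched below; the actual system works on continuous inertial features and concatenates character models into word models, so this is only the algorithmic core with toy symbols:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state path for an observation sequence,
    given start, transition, and emission probabilities as dicts."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            # best predecessor for state s at this time step
            prob, prev = max((V[-2][p] * trans_p[p][s] * emit_p[s][o], p)
                             for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# toy two-state model: state 'A' mostly emits 'x', state 'B' mostly 'y'
decoded = viterbi(
    ['x', 'x', 'y'], ('A', 'B'),
    start_p={'A': 0.6, 'B': 0.4},
    trans_p={'A': {'A': 0.7, 'B': 0.3}, 'B': {'A': 0.4, 'B': 0.6}},
    emit_p={'A': {'x': 0.9, 'y': 0.1}, 'B': {'x': 0.2, 'y': 0.8}},
)
```

A production decoder would work in log space to avoid underflow on long sequences; the multiplication form above is kept for readability.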

10.
11.
The design of autonomous characters capable of planning their own motions continues to be a challenge for computer animation. We present a novel kinematic motion‐planning algorithm for character animation which addresses some of the outstanding problems. The problem domain for our algorithm is as follows: given a constrained environment with designated handholds and footholds, plan a motion through this space towards some desired goal. Our algorithm is based on a stochastic search procedure which is guided by a combination of geometric constraints, posture heuristics, and distance‐to‐goal metrics. The method provides a single framework for the use of multiple modes of locomotion in planning motions through these constrained, unstructured environments. We illustrate our results with demonstrations of a human character using walking, swinging, climbing, and crawling in order to navigate through various obstacle courses. Copyright © 2001 John Wiley & Sons, Ltd.

12.
Gestures that accompany speech are an essential part of natural and efficient embodied human communication. The automatic generation of such co-speech gestures is a long-standing problem in computer animation and is considered an enabling technology for creating believable characters in film, games, and virtual social spaces, as well as for interaction with social robots. The problem is made challenging by the idiosyncratic and non-periodic nature of human co-speech gesture motion, and by the great diversity of communicative functions that gestures encompass. The field of gesture generation has seen surging interest in the last few years, owing to the emergence of more and larger datasets of human gesture motion, combined with strides in deep-learning-based generative models that benefit from the growing availability of data. This review article summarizes co-speech gesture generation research, with a particular focus on deep generative models. First, we articulate the theory describing human gesticulation and how it complements speech. Next, we briefly discuss rule-based and classical statistical gesture synthesis, before delving into deep learning approaches. We employ the choice of input modalities as an organizing principle, examining systems that generate gestures from audio, text and non-linguistic input. Concurrent with the exposition of deep learning approaches, we chronicle the evolution of the related training data sets in terms of size, diversity, motion quality, and collection method (e.g., optical motion capture or pose estimation from video). Finally, we identify key research challenges in gesture generation, including data availability and quality; producing human-like motion; grounding the gesture in the co-occurring speech in interaction with other speakers, and in the environment; performing gesture evaluation; and integration of gesture synthesis into applications. 
We highlight recent approaches to tackling the various key challenges, as well as the limitations of these approaches, and point toward areas of future development.

13.
Advanced Robotics, 2013, 27(15): 1697-1713
Humans generate bipedal walking by cooperatively manipulating their complicated and redundant musculoskeletal systems to produce adaptive behaviors in diverse environments. To elucidate the mechanisms that generate adaptive human bipedal locomotion, we conduct numerical simulations based on a musculoskeletal model and a locomotor controller constructed from anatomical and physiological findings. In particular, we focus on the adaptive mechanism using phase resetting based on the foot-contact information that modulates the walking behavior. For that purpose, we first reconstruct walking behavior from the measured kinematic data. Next, we examine the roles of phase resetting on the generation of stable locomotion by disturbing the walking model. Our results indicate that phase resetting increases the robustness of the walking behavior against perturbations, suggesting that this mechanism contributes to the generation of adaptive human bipedal locomotion.
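The phase-resetting rule studied above can be written, in its simplest single-oscillator form, as a phase variable that advances at a nominal frequency and snaps back to a nominal contact phase whenever the foot touches down; the parameter names below are illustrative, not from the paper's controller:

```python
import math

def step_phase(phase, dt, omega, foot_contact, reset_phase=0.0):
    """Advance a locomotor phase oscillator by one time step.
    On foot contact the phase is reset to the nominal contact phase,
    which re-synchronizes the controller with the actual gait and is
    what improves robustness against perturbations."""
    if foot_contact:
        return reset_phase
    return (phase + omega * dt) % (2 * math.pi)

# free-running update at a 1 Hz gait (omega = 2*pi rad/s), dt = 10 ms
p_free = step_phase(1.0, 0.01, 2 * math.pi, foot_contact=False)
# a late foot strike snaps the phase back to the contact phase
p_reset = step_phase(3.0, 0.01, 2 * math.pi, foot_contact=True)
```

Without the reset, a perturbation that delays foot contact leaves the oscillator permanently out of step with the body; the reset discards that accumulated phase error in one event.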

14.
Natural locomotion of virtual characters is very important in games and simulations. The naturalness of the total motion strongly depends on both the path the character chooses and the animation of the walking character. Therefore, much work has been done on path planning and generating walking animations. However, the combination of both fields has received less attention. Combining path planning and motion synthesis introduces several problems. In this paper, we will identify two problems and propose possible solutions. The first problem is selecting an appropriate distance metric for locomotion synthesis. When concatenating clips of locomotion, a distance metric is required to detect good transition points. We have evaluated three common distance metrics both quantitatively (in terms of footskating, path deviation and online running time) and qualitatively (user study). Based on our observations, we propose a set of guidelines when using these metrics in a motion synthesizer. The second problem is the fact that there is no single point on the body that can follow the path generated by the path planner without causing unnatural animations. This raises the question of how the character should follow the path. We will show that enforcing the pelvis to follow the path will lead to unnatural animations and that our proposed solution, which uses path abstractions, generates significantly better animations. Copyright © 2011 John Wiley & Sons, Ltd.
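One common concrete form of the first problem's metric is a weighted joint-angle distance used to score candidate transition frames. The sketch below uses hypothetical uniform weights and brute-force search; the metrics actually evaluated in the paper include other families (e.g. point-cloud-based distances):

```python
import math

def frame_distance(pose_a, pose_b, weights=None):
    """Weighted Euclidean distance between two animation frames,
    each given as a flat list of joint angles."""
    if weights is None:
        weights = [1.0] * len(pose_a)
    return math.sqrt(sum(w * (a - b) ** 2
                         for w, a, b in zip(weights, pose_a, pose_b)))

def best_transition(clip_a, clip_b):
    """Pick the (i, j) frame pair minimizing the metric: the candidate
    transition point when concatenating two locomotion clips."""
    d, i, j = min((frame_distance(fa, fb), i, j)
                  for i, fa in enumerate(clip_a)
                  for j, fb in enumerate(clip_b))
    return i, j, d

# two tiny clips of 2-joint poses; the closest pair is (frame 1, frame 0)
i, j, d = best_transition([[0.0, 0.0], [1.0, 1.0]],
                          [[1.0, 1.1], [5.0, 5.0]])
```

A low metric value does not by itself guarantee a foot-plant-consistent transition, which is exactly why the paper evaluates the metrics against footskating as well.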

15.
This paper presents an approach for view-invariant gesture recognition. The approach is based on 3D data captured by a SwissRanger SR4000 camera. This camera produces both a depth map as well as an intensity image of a scene. Since the two information types are aligned, we can use the intensity image to define a region of interest for the relevant 3D data. This data fusion improves the quality of the motion detection and hence results in better recognition. The gesture recognition is based on finding motion primitives (temporal instances) in the 3D data. Motion is detected by a 3D version of optical flow and results in velocity annotated point clouds. The 3D motion primitives are represented efficiently by introducing motion context. The motion context is transformed into a view-invariant representation using spherical harmonic basis functions, yielding a harmonic motion context representation. A probabilistic Edit Distance classifier is applied to identify which gesture best describes a string of primitives. The approach is trained on data from one viewpoint and tested on data from a very different viewpoint. The recognition rate is 94.4% which is similar to the recognition rate when training and testing on gestures from the same viewpoint, hence the approach is indeed view-invariant.

16.
Sign language (SL) is a natural language of the deaf. Chinese Sign Language (CSL) synthesis aims to translate text into virtual human animation, which makes information and services accessible to the deaf. Generally, sign language animation based on key frames is realized by concatenating sign words captured independently. That means a sign language word has the same pattern in diverse contexts, which differs from realistic sign language expression. This paper studies the effect of context on manual and non-manual gestures, and presents a method for generating stylized manual and non-manual gestures according to the context. Experimental results show that sign language animation synthesized with context by the proposed method is more accurate and intelligible than animation generated without regard to context.

17.
This paper introduces a method that can generate continuous human walking motion automatically on an arbitrary path in a three‐dimensional (3D) modelled scene. The method is based on a physical approach that solves the boundary value problem. In the motion generation stage, natural‐looking walking motion, which includes plane walking, walking upstairs and downstairs and walking on a curved path, is created by applying dynamics and kinematics. The human body is approximated as a simple rigid skeleton model, and dynamic motion is created based on the ground reaction force of the human foot. To adapt to the 3D environment, the 3D walking path is divided into steps which are tagged with the parameters needed for motion generation, and step‐by‐step motion is connected end‐to‐end. Additional features include fast calculation and a reduced need for user control. The proposed method can produce interesting human motion and can create realistic computer animation scenes. Copyright © 2000 John Wiley & Sons, Ltd.

18.
This study aims to develop a controller for use in the online simulation of two interacting characters. This controller is capable of generalizing two sets of interaction motions of the two characters based on the relationships between the characters. The controller can exhibit similar motions to a captured human motion while reacting in a natural way to the opponent character in real time. To achieve this, we propose a new type of physical model called a coupled inverted pendulum on carts that comprises two inverted‐pendulum‐on‐a‐cart models, one for each individual, which are coupled by a relationship model. The proposed framework is divided into two steps: motion analysis and motion synthesis. Motion analysis is an offline preprocessing step, which optimizes the control parameters to move the proposed model along a motion capture trajectory of two interacting humans. The optimization procedure generates a coupled pendulum trajectory which represents the relationship between two characters for each frame, and is used as a reference in the synthesis step. In the motion synthesis step, a new coupled pendulum trajectory is planned reflecting the effects of the physical interaction, and the captured reference motions are edited based on the planned trajectory produced by the coupled pendulum trajectory generator. To validate the proposed framework, we used a motion capture data set showing two people performing kickboxing. The proposed controller is able to generalize the behaviors of two humans to different situations, such as different speeds and turning rates, in a realistic way in real time.

19.
Generating a visually appealing human motion sequence using low‐dimensional control signals is a major line of study in the motion research area in computer graphics. We propose a novel approach that allows us to reconstruct full body human locomotion using a single inertial sensing device, a smartphone. Smartphones are among the most widely used devices and incorporate inertial sensors such as an accelerometer and a gyroscope. To find a mapping between a full body pose and smartphone sensor data, we perform low dimensional embedding of full body motion capture data, based on a Gaussian Process Latent Variable Model. Our system ensures temporal coherence between the reconstructed poses by using a state decomposition model for automatic phase segmentation. Finally, application of the proposed nonlinear regression algorithm finds a proper mapping between the latent space and the sensor data. Our framework effectively reconstructs plausible 3D locomotion sequences. We compare the generated animation to ground truth data obtained using a commercial motion capture system.

20.
We present in this paper a hidden Markov model‐based system for real‐time gesture recognition and performance evaluation. The system decodes performed gestures and, at the end of a recognized gesture, outputs a likelihood value that is transformed into a score. This score is used to evaluate a performance by comparison with a reference performance. For the learning procedure, a set of relational features was extracted from a high‐precision motion capture system and used to train hidden Markov models. At runtime, a low‐cost sensor (Microsoft Kinect) is used to capture a learner's movements. An intermediate model adaptation step was therefore required to allow recognition of gestures captured by this low‐cost sensor. We present one application of this gesture evaluation system in the context of learning traditional dance basics. The estimated log‐likelihood provides feedback to the learner as a score related to his or her performance. Copyright © 2016 John Wiley & Sons, Ltd.
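The abstract does not specify the likelihood-to-score mapping; one simple, illustrative choice (not the paper's) is to clamp the log-likelihood gap to the reference performance and decay it exponentially into a 0-100 score, with a hypothetical `scale` parameter controlling strictness:

```python
import math

def likelihood_to_score(log_lik, ref_log_lik, scale=10.0):
    """Map a decoded gesture's log-likelihood to a 0-100 score relative
    to a reference performance. Matching or exceeding the reference
    scores 100; the score decays exponentially as the gap grows."""
    gap = max(0.0, ref_log_lik - log_lik)   # how far below the reference
    return 100.0 * math.exp(-gap / scale)
```

A clamped mapping of this kind avoids rewarding overfitting to the model (scores above 100) while keeping the score monotone in the likelihood.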


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号