Similar Documents
1.
Motion planning is an important problem in character animation and interactive simulation. However, few planning methods have considered the domain-specific knowledge that governs an agent's behaviors, and none of them is capable of planning interactive tasks in which the agent interacts with objects in the virtual environment. This paper presents a novel method, based on Q-learning, for planning interactive tasks for intelligent characters. The approach is a three-phase framework: a data-preprocessing phase, a controller-learning phase, and a motion-synthesis phase. In the data-preprocessing phase, we abstract motion clips as high-level behaviors and construct the interactive behavior graph (IBG) to define the agent's interactive capabilities in terms of interactive features. In the controller-learning phase, a Q-learning algorithm is employed with the IBG to train the control policy in the discrete domain defined by the interactive features. In the motion-synthesis phase, the optimal motion sequence that accomplishes the interactive task is generated by following the learned policy. The experimental results demonstrate that this uniform framework can generate reasonable and realistic motion sequences for planning interactive tasks in complex environments. Copyright © 2012 John Wiley & Sons, Ltd.
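For readers unfamiliar with the learning component, the following is a minimal sketch of tabular Q-learning as it might be applied to a discrete behavior-planning problem. The abstract does not specify the IBG structure, the interactive features, or the reward design, so the environment interface (`reset`, `step`, `actions`) below is a hypothetical stand-in, not the paper's implementation.

```python
# Minimal tabular Q-learning sketch over a discrete state/action space.
# The environment object is assumed to expose the actions permitted by the IBG at
# each state; reward shaping and state encoding are not taken from the paper.
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    Q = defaultdict(float)  # Q[(state, action)] -> estimated return
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            actions = env.actions(state)              # behaviors allowed at this state
            if random.random() < epsilon:             # epsilon-greedy exploration
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            next_actions = env.actions(next_state)
            best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
            # Temporal-difference update toward the one-step bootstrapped target.
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```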

2.
In previous work, real-time fluid-character animation could hardly be achieved because of the intensive processing demands of the character's movement and the fluid simulation. This paper presents an effective approach to the real-time generation of fluid flow driven by the motion of a character in full 3D space, based on the smoothed-particle hydrodynamics (SPH) method. A novel method of guiding and constraining the fluid particles by the geometric properties of the character's motion trajectory is introduced. Furthermore, optimized algorithms for particle searching and rendering are proposed that take advantage of graphics processing unit (GPU) parallelization. Consequently, both the simulation and the rendering of 3D liquid effects with realistic character interactions can be implemented in our framework and performed in real time on a conventional PC. Copyright © 2013 John Wiley & Sons, Ltd.
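The particle-search step mentioned above is typically accelerated with a uniform spatial grid. The sketch below illustrates that common idea on the CPU; it is an assumption for illustration only, since the paper's GPU-parallel data structure is not described in the abstract.

```python
# Uniform-grid neighbor search commonly used to accelerate SPH: bucket particles by
# cell of size h (the smoothing radius), then probe the 27 surrounding cells.
from collections import defaultdict

def build_grid(positions, h):
    """Hash each particle index into a cell of size h."""
    grid = defaultdict(list)
    for i, (x, y, z) in enumerate(positions):
        grid[(int(x // h), int(y // h), int(z // h))].append(i)
    return grid

def neighbors(i, positions, grid, h):
    """Return indices of particles within distance h of particle i."""
    xi, yi, zi = positions[i]
    cx, cy, cz = int(xi // h), int(yi // h), int(zi // h)
    result = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for j in grid.get((cx + dx, cy + dy, cz + dz), []):
                    xj, yj, zj = positions[j]
                    if j != i and (xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2 <= h * h:
                        result.append(j)
    return result

# Example: neighbors of particle 0 among three particles with smoothing radius 1.0.
pts = [(0.0, 0.0, 0.0), (0.5, 0.2, 0.1), (3.0, 3.0, 3.0)]
print(neighbors(0, pts, build_grid(pts, 1.0), 1.0))
```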

3.
This paper surveys the set of techniques developed in computer graphics for animating human walking. First we focus on the evolution from purely kinematic 'knowledge-based' methods to approaches that incorporate dynamic constraints or use dynamic simulations to generate motion. Then we review the recent advances in motion editing that enable the control of complex animations by interactively blending and tuning synthetic or captured motions. Copyright © 1999 John Wiley & Sons, Ltd.

4.
At present, research on facial lip-sync animation, both in China and abroad, has been conducted mainly for English, and research on synthesizing Chinese lip-sync facial animation remains scarce. Building on existing work, this paper combines the rules governing mouth-shape changes during Hanyu Pinyin pronunciation with the timing control implied by Chinese punctuation, and proposes an initial-final (shengmu-yunmu) weighted control algorithm. By analysing weight vectors for the punctuation marks within whole sentences and paragraphs, the method can synthesize a continuously varying 3D mouth-shape animation model synchronized with speech. For the transition between two consecutive mouth-shape animations, a cosine-function interpolation method is used to synthesize a sequence of transitional mouth shapes between them, making the synthesized speech-synchronized Chinese facial lip animation smoother and more fluent.
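The abstract names cosine interpolation for the transition between two consecutive mouth shapes but does not give the exact blend function. A minimal sketch is shown below, assuming the common cosine ease w(t) = (1 - cos(pi*t)) / 2 and assuming mouth shapes are represented as flat parameter vectors (e.g. blendshape weights); both are illustrative assumptions, not the paper's stated formulation.

```python
# Cosine-based interpolation between two mouth-shape keyframes.
import math

def cosine_transition(shape_a, shape_b, num_frames):
    """Return num_frames intermediate mouth shapes easing from shape_a to shape_b."""
    frames = []
    for k in range(num_frames):
        t = k / max(num_frames - 1, 1)            # normalized time in [0, 1]
        w = (1.0 - math.cos(math.pi * t)) / 2.0   # cosine ease-in / ease-out weight
        frames.append([(1.0 - w) * a + w * b for a, b in zip(shape_a, shape_b)])
    return frames

# Example: blend a closed mouth [0.0, 0.1] toward an open "a" shape [1.0, 0.6] over 5 frames.
print(cosine_transition([0.0, 0.1], [1.0, 0.6], 5))
```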

5.
We present ZeroEGGS, a neural network framework for speech-driven gesture generation with zero-shot style control by example. This means style can be controlled with only a short example motion clip, even for motion styles unseen during training. Our model uses a variational framework to learn a style embedding, making it easy to modify style through latent-space manipulation or the blending and scaling of style embeddings. The probabilistic nature of our framework further enables the generation of a variety of outputs for a given input, addressing the stochastic nature of gesture motion. In a series of experiments, we first demonstrate the flexibility and generalizability of our model to new speakers and styles. In a user study, we then show that our model outperforms previous state-of-the-art techniques in naturalness of motion, appropriateness for speech, and style portrayal. Finally, we release a high-quality dataset of full-body gesture motion, including fingers, with speech, spanning 19 different styles. Our code and data are publicly available at https://github.com/ubisoft/ubisoft-laforge-ZeroEGGS .
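To make the latent-space manipulation concrete, the toy sketch below shows blending and scaling of style embedding vectors before they would condition a gesture generator. The vector sizes, function names, and the notion of feeding the result to a generator are assumptions for illustration; they are not ZeroEGGS's actual API (see the linked repository for the real code).

```python
# Illustrative latent-space style manipulation: blend two style embeddings and scale the result.

def blend_styles(style_a, style_b, alpha):
    """Linearly interpolate two style embeddings: alpha=0 -> style_a, alpha=1 -> style_b."""
    return [(1.0 - alpha) * a + alpha * b for a, b in zip(style_a, style_b)]

def scale_style(style, gain):
    """Exaggerate (gain > 1) or attenuate (gain < 1) a style embedding."""
    return [gain * s for s in style]

# Example: a half-and-half mix of two toy 4-dimensional style embeddings, exaggerated by 1.5x.
mixed = scale_style(blend_styles([0.2, -0.1, 0.8, 0.0], [1.0, 0.4, -0.3, 0.5], 0.5), 1.5)
print(mixed)
```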

6.
In this paper, we present an online, real-time method for automatically transforming a basic locomotive motion into a desired motion of the same type, based on biomechanical results. Given an online request for a motion of a certain type with a desired moving speed and turning angle, our method first extracts a basic motion of the same type from a motion graph and then transforms it to achieve the desired moving speed and turning angle by exploiting the following biomechanical observations: contact-driven center-of-mass control, anticipatory reorientation of upper-body segments, moving-speed adjustment, and whole-body leaning. Exploiting these observations, we propose a simple but effective method for adding physical and behavioral naturalness to the resulting locomotive motions without preprocessing. Through experiments, we show that our method enables a character to respond agilely to online user commands while efficiently generating walking, jogging, and running motions from a compact motion library. Our method can also deal with certain dynamic motions, such as a forward roll. Copyright © 2016 John Wiley & Sons, Ltd.

7.
Juggling, which uses both hands to keep several objects in the air at once, is admired by anyone who sees it. However, skillful real-world juggling requires long, hard practice. We therefore propose a method that enables anyone to juggle skillfully in the virtual world. In the real world, the juggler's motion has to follow the motion of the moving objects; in the virtual world, the objects' motion can be adjusted together with the human motion. Using this freedom, we generate a juggling avatar that can follow the user's motion. The user simply makes juggling-like motions in front of a motion sensor. Our system then searches for juggling motions that closely match the user's motions and connects them smoothly. We then generate moving objects that both satisfy the laws of physics and are synchronized with the synthesized motion of the avatar. In this way, we can generate a variety of juggling animations by an avatar in real time. Copyright © 2016 John Wiley & Sons, Ltd.
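The ballistic constraint alluded to above (objects that obey physics yet stay synchronized with the hands) can be illustrated with a small sketch: given a throw position, a catch position, and the flight time dictated by the avatar's motion, solve for the release velocity under gravity. The function names, coordinate convention, and gravity constant are illustrative assumptions, not the paper's implementation.

```python
# Solve for the release velocity v0 so that p(t) = p0 + v0*t + 0.5*g*t^2 reaches the
# catch position at the prescribed flight time (y-up, gravity in m/s^2).
G = (0.0, -9.81, 0.0)

def throw_velocity(throw_pos, catch_pos, flight_time):
    """Initial velocity that carries the ball from throw_pos to catch_pos in flight_time seconds."""
    return tuple(
        (pc - p0) / flight_time - 0.5 * g * flight_time
        for p0, pc, g in zip(throw_pos, catch_pos, G)
    )

def position_at(throw_pos, v0, t):
    """Ball position t seconds after release."""
    return tuple(p + v * t + 0.5 * g * t * t for p, v, g in zip(throw_pos, v0, G))

# Example: right hand releases at (0.2, 1.0, 0.3); left hand catches at (-0.2, 1.0, 0.3) 0.6 s later.
v0 = throw_velocity((0.2, 1.0, 0.3), (-0.2, 1.0, 0.3), 0.6)
print(v0, position_at((0.2, 1.0, 0.3), v0, 0.6))
```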

8.
Chinese character recognition: history, status and prospects
Chinese character recognition (CCR) is an important branch of pattern recognition. It has been considered an extremely difficult problem because of the very large number of categories, complicated structures, similarity between characters, and the variability of fonts and writing styles. Owing to its unique technical challenges and great social needs, the last four decades have witnessed intensive research in this field and a rapid increase in successful applications. However, higher recognition performance is continually needed to improve existing applications and to enable new ones. This paper first provides an overview of Chinese character recognition and the properties of Chinese characters. Some important methods and successful results in the history of Chinese character recognition are then summarized. As for classification methods, this article pays special attention to the syntactic-semantic approach for online Chinese character recognition, as well as the metasynthesis approach for discipline crossing. Finally, the remaining problems and possible solutions are discussed.

9.
10.
This study investigates the effects of computer simulation and animation (CSA) on students' cognitive processes in an undergraduate engineering course. The revised Bloom's taxonomy, which consists of six categories in the cognitive-process domain, was employed in this study. Five of the six categories were investigated: remember, understand, apply, analyze, and evaluate. Data were collected via a think-aloud protocol involving two groups of student participants: one group learned a worked example problem with a CSA module, and the other group learned the same problem with traditional textbook-style instruction. A new concept, the frequency index, was proposed for use in qualitative research that involves the quantitative comparison of the overall popularity of a particular mental activity performed by two groups of students. The results show that, compared with traditional textbook-style instruction, CSA significantly increases students' activities in the understand category of the revised Bloom's taxonomy during learning and significantly increases students' activities in the understand, apply, analyze, and evaluate categories during subsequent problem-solving. That learning via CSA has a profound impact on subsequent problem-solving is attributed to the intensive human–computer interactions built into the CSA learning module.

11.
Our newly developed event-based planning and control theory is applied to robotic systems. It introduces a suitable action or motion reference variable other than time, directly related to the desired and measurable system output, called the event. Here the event is the length of the path tracked by a robot. It enables the construction of an integrated planning and control system in which planning becomes a real-time, closed-loop process. The path-based integrated planning and control scheme is exemplified by a single-arm tracking problem. Time- and energy-optimal motion plans combined with nonlinear feedback control are derived in closed form. To the best of our knowledge, this closed-form solution had not been obtained before. The equivalence of path-based and time-based representations of nonlinear feedback control is shown, and an overall system stability criterion is also obtained. The application of event-based integrated planning and control gives robotic systems the capability to cope with unexpected and uncertain events in real time, without the need for replanning. The theoretical results are illustrated and verified by experiments.
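The core idea (indexing the reference by a measurable event such as traveled path length rather than by clock time, so the plan never has to be re-timed when the robot is delayed) can be sketched as follows. The reference table, interpolation, and proportional gain are illustrative assumptions; the paper's closed-form optimal plans and nonlinear feedback law are not reproduced here.

```python
# Event-based feedback sketch: the desired configuration is looked up by traveled
# path length s, not by time, so a held-up robot simply stops advancing the reference.
import bisect

# Precomputed reference: cumulative path length s -> desired configuration q_ref.
ref_s = [0.0, 0.5, 1.0, 1.5, 2.0]
ref_q = [0.0, 0.3, 0.7, 0.9, 1.0]

def reference_at(s):
    """Linearly interpolate the desired configuration at traveled path length s."""
    s = min(max(s, ref_s[0]), ref_s[-1])
    i = max(bisect.bisect_right(ref_s, s) - 1, 0)
    if i == len(ref_s) - 1:
        return ref_q[-1]
    t = (s - ref_s[i]) / (ref_s[i + 1] - ref_s[i])
    return (1 - t) * ref_q[i] + t * ref_q[i + 1]

def control(q_measured, s_measured, kp=4.0):
    """Proportional feedback toward the event-indexed reference (clock time never appears)."""
    return kp * (reference_at(s_measured) - q_measured)

print(control(q_measured=0.25, s_measured=0.6))
```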

12.
Physically based characters have not yet received wide adoption in the entertainment industry because control remains both difficult and unreliable. Even with the incorporation of motion capture for reference, which adds believability, characters fail to be convincing in their appearance when the control is not robust. To address these issues, we propose a simple Jacobian-transpose torque controller that employs virtual actuators to create a fast and reasonable tracking system for motion capture. We combine this controller with a novel approach we call the topple-free foot strategy, which conservatively applies artificial torques to the standing foot to produce a character that can perform with arbitrary robustness. The system is both easy to implement and straightforward for the animator to adjust to the desired robustness by considering the trade-off between physical realism and stability. We showcase the benefits of our system with a wide variety of example simulations, including energetic motions with multiple support-contact changes, such as capoeira, as well as an extension that couples the approach with a Simbicon-controlled walker. With this work, we aim to advance the state of the art in the practical design of physically based characters that can take unaltered reference motion (e.g., motion capture data) and directly adapt it to a simulated environment without the need for optimization or inverse dynamics.
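The Jacobian-transpose "virtual actuator" idea named above maps a virtual tracking force f applied at an end effector to joint torques via tau = J^T f. Below is a minimal sketch on a planar 2-link arm, assuming unit link lengths and a simple proportional virtual force; the paper's full controller, including the topple-free foot strategy, is not reproduced.

```python
# Jacobian-transpose control sketch: virtual spring force at the end effector
# mapped to joint torques through tau = J^T f (planar 2-link chain).
import math

def jacobian_2link(q1, q2, l1=1.0, l2=1.0):
    """Positional Jacobian (rows: x, y; columns: joint 1, joint 2)."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def end_effector(q1, q2, l1=1.0, l2=1.0):
    return (l1 * math.cos(q1) + l2 * math.cos(q1 + q2),
            l1 * math.sin(q1) + l2 * math.sin(q1 + q2))

def jt_torques(q, target, kp=50.0):
    """Virtual force pulling the end effector toward the reference point, mapped to torques."""
    x, y = end_effector(*q)
    f = (kp * (target[0] - x), kp * (target[1] - y))   # virtual actuator force
    J = jacobian_2link(*q)
    # tau = J^T f
    return (J[0][0] * f[0] + J[1][0] * f[1],
            J[0][1] * f[0] + J[1][1] * f[1])

print(jt_torques(q=(0.3, 0.6), target=(1.2, 1.1)))
```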

13.
In this paper, we present a data-driven approach to simulating realistic locomotion of virtual pedestrians. We focus on simulating low-level pedestrian motion, where a pedestrian's motion is mainly affected by other pedestrians and static obstacles nearby, and the preferred velocities of agents (direction and speed) are obtained from higher-level path-planning models. Before the simulation, collision-avoidance processes (i.e., examples) are extracted from videos to describe how pedestrians avoid collisions; these are then clustered using a hierarchical clustering algorithm with a novel distance function to find similar patterns of pedestrian collision-avoidance behaviour. During the simulation, at each time step, the perceived state of each agent is classified into one cluster using a neural network trained before the simulation. A sequence of velocity vectors, representing the agent's future motion, is then selected from the examples corresponding to the chosen cluster. The proposed CLUST model is trained on and applied to different real-world datasets to evaluate its generality and effectiveness both qualitatively and quantitatively. The simulation results demonstrate that the proposed model can generate realistic crowd behaviours at comparable computational cost.
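The runtime lookup described above (classify the perceived state into a cluster, then reuse a recorded velocity sequence from that cluster) is sketched below. The paper's novel distance function and trained neural-network classifier are not given in the abstract, so a plain Euclidean distance and a nearest-centroid classifier stand in for them; this is an assumption for illustration only.

```python
# Classify an agent's perceived state into a cluster, then select the closest stored
# example within that cluster and reuse its recorded velocity sequence.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(state, centroids):
    """Stand-in for the trained classifier: nearest cluster centroid."""
    return min(range(len(centroids)), key=lambda k: euclidean(state, centroids[k]))

def select_velocity_sequence(state, centroids, clustered_examples):
    """clustered_examples[k] is a list of (example_state, velocity_sequence) pairs."""
    k = classify(state, centroids)
    _, velocities = min(clustered_examples[k], key=lambda ex: euclidean(state, ex[0]))
    return velocities

# Toy data: 2 clusters, each holding one recorded collision-avoidance example.
centroids = [(0.0, 0.0), (5.0, 5.0)]
examples = [[((0.1, 0.2), [(1.0, 0.0), (0.9, 0.1)])],
            [((5.2, 4.8), [(0.0, 1.0), (0.1, 0.9)])]]
print(select_velocity_sequence((0.3, 0.1), centroids, examples))
```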

14.
Physics simulation offers the possibility of truly responsive and realistic animation. Despite the wide adoption of physics simulation for animating passive phenomena, such as fluids, cloth, and rag-doll characters, commercial applications still resort to kinematics-based approaches for animating actively controlled characters. However, following a renewed interest in the use of physics simulation for interactive character animation, many recent publications demonstrate tremendous improvements in robustness, visual quality, and usability. We present a structured review of over two decades of research on physics-based character animation, and point out various open research areas and possible future directions.

15.
Controlling rigid-body dynamic simulations can pose a difficult challenge when constraints exist on the bodies' goal states and on the sequence of intermediate states in the resulting animation. Manually adjusting individual rigid-body control actions (forces and torques) can become a very labour-intensive and non-trivial task, especially if the domain includes a large number of bodies or requires complicated chains of inter-body collisions to achieve the desired goal state. Furthermore, some interactive applications that rely on rigid-body models, such as video games, can offer no control guidance from a human animator at runtime. In this work, we present techniques to automatically generate intelligent control actions for rigid-body simulations. We introduce sampling-based motion-planning methods that allow us to model goal-driven behaviour through non-deterministic Tactics consisting of intelligent, sampling-based control blocks called Skills. We introduce and compare two variations of a Tactics-driven planning algorithm, namely behavioural Kinodynamic Rapidly Exploring Random Trees (BK-RRT) and Behavioural Kinodynamic Balanced Growth Trees (BK-BGT). We show how our planner can be applied to automatically compute control sequences for challenging physics-based domains and that it scales to control problems involving several hundred interacting bodies, each carrying unique goal constraints.
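For orientation, the sketch below shows the generic kinodynamic RRT growth loop that planners such as BK-RRT build upon: sample a state, find the nearest tree node, sample a control, forward-simulate it, and add the resulting state as a new node. The point-mass dynamics, bounds, and goal test are illustrative assumptions; the paper's behavioural Tactics and Skills, which bias how controls are sampled, are not reproduced here.

```python
# Minimal kinodynamic RRT sketch on a 2D point mass: state = (x, y, vx, vy), control = (ax, ay).
import math, random

def simulate(state, control, dt=0.1):
    x, y, vx, vy = state
    ax, ay = control
    return (x + vx * dt, y + vy * dt, vx + ax * dt, vy + ay * dt)

def extend(state, control, steps, dt=0.1):
    """Forward-simulate one sampled control for a short duration."""
    for _ in range(steps):
        state = simulate(state, control, dt)
    return state

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])   # distance on positions only

def kinodynamic_rrt(start, goal, goal_radius=1.0, iters=3000):
    nodes, parent = [start], {start: None}
    for _ in range(iters):
        sample = (random.uniform(-5, 5), random.uniform(-5, 5), 0.0, 0.0)
        nearest = min(nodes, key=lambda n: dist(n, sample))
        control = (random.uniform(-2, 2), random.uniform(-2, 2))   # sampled control ("skill")
        new = extend(nearest, control, steps=random.randint(1, 10))
        nodes.append(new)
        parent[new] = nearest
        if dist(new, goal) < goal_radius:
            path = [new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return list(reversed(path))
    return None

path = kinodynamic_rrt((0.0, 0.0, 0.0, 0.0), (3.0, 3.0, 0.0, 0.0))
print("path found:", path is not None, "| nodes on path:", len(path) if path else 0)
```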
