Similar Documents
20 similar documents found.
1.
2.
This paper presents a novel data-driven expressive speech animation synthesis system with phoneme-level controls. The system is based on a pre-recorded facial motion capture database, for which an actress was directed to recite a pre-designed corpus with four facial expressions (neutral, happiness, anger and sadness). Given new phoneme-aligned expressive speech and its emotion modifiers as inputs, a constrained dynamic programming algorithm searches for the best-matching captured motion clips in the processed facial motion database by minimizing a cost function. Users optionally specify 'hard constraints' (motion-node constraints for expressing phoneme utterances) and 'soft constraints' (emotion modifiers) to guide this search process. We also introduce a phoneme-Isomap interface for visualizing and interacting with phoneme clusters, which are typically composed of thousands of facial motion capture frames. On top of this visualization interface, users can conveniently remove contaminated motion subsequences from a large facial motion dataset. Facial animation synthesis experiments and objective comparisons between synthesized and captured facial motion show that this system is effective for producing realistic expressive speech animations.
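
As a rough illustration of the search described above, the sketch below implements a Viterbi-style constrained dynamic program over candidate clips per phoneme slot. The clip representation, cost terms, and weights are simplified assumptions, not the paper's actual database or cost function.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Clip:
    id: str
    emotion: str
    start_pose: tuple  # simplified pose descriptors; real clips hold mocap frames
    end_pose: tuple

def transition_cost(a, b):
    """Smoothness term: squared distance between a's final pose and b's first."""
    return sum((x - y) ** 2 for x, y in zip(a.end_pose, b.start_pose))

def search_clips(phonemes, candidates, target_emotion, hard=None,
                 w_trans=1.0, w_emo=0.5):
    """Pick one captured clip per phoneme slot, minimizing transition cost
    plus an emotion-mismatch penalty (the 'soft constraint'); `hard` pins
    a slot to a specific clip (a 'motion-node constraint')."""
    hard = hard or {}
    layers = []  # layers[i][clip] = (best cumulative cost, back-pointer)
    for i, ph in enumerate(phonemes):
        options = [hard[i]] if i in hard else candidates[ph]
        layer = {}
        for c in options:
            emo = w_emo * (c.emotion != target_emotion)
            if i == 0:
                layer[c] = (emo, None)
            else:
                prev, (pc, _) = min(
                    layers[-1].items(),
                    key=lambda kv: kv[1][0] + w_trans * transition_cost(kv[0], c))
                layer[c] = (pc + w_trans * transition_cost(prev, c) + emo, prev)
        layers.append(layer)
    best = min(layers[-1], key=lambda c: layers[-1][c][0])
    path = []
    for layer in reversed(layers):  # walk back-pointers to recover the path
        path.append(best)
        best = layer[best][1]
    return list(reversed(path))

# Usage with a toy two-phoneme database:
db = {"AA": [Clip("aa_happy", "happiness", (0.0,), (0.2,)),
             Clip("aa_sad", "sadness", (0.1,), (0.3,))],
      "B":  [Clip("b_happy", "happiness", (0.25,), (0.5,))]}
print([c.id for c in search_clips(["AA", "B"], db, "happiness")])
```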

3.
Natural locomotion of virtual characters is very important in games and simulations. The naturalness of the total motion strongly depends on both the path the character chooses and the animation of the walking character. Therefore, much work has been done on path planning and on generating walking animations. However, the combination of the two fields has received less attention. Combining path planning and motion synthesis introduces several problems. In this paper, we identify two problems and propose possible solutions. The first problem is selecting an appropriate distance metric for locomotion synthesis. When concatenating clips of locomotion, a distance metric is required to detect good transition points. We have evaluated three common distance metrics both quantitatively (in terms of footskating, path deviation and online running time) and qualitatively (user study). Based on our observations, we propose a set of guidelines for using these metrics in a motion synthesizer. The second problem is the fact that there is no single point on the body that can follow the path generated by the path planner without causing unnatural animations. This raises the question of how the character should follow the path. We show that forcing the pelvis to follow the path leads to unnatural animations, and that our proposed solution, which uses path abstractions, generates significantly better animations. Copyright © 2011 John Wiley & Sons, Ltd.
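
For a concrete sense of the first problem, the sketch below implements one common family of distance metrics: a weighted joint-position distance between frame windows around a candidate transition. The array layout, window size, and threshold are illustrative assumptions, not the specific metrics evaluated in the paper.

```python
import numpy as np

def frame_distance(window_a, window_b, weights=None):
    """Joint-position distance between two windows of frames.

    window_a, window_b: (F, J, 3) arrays of F frames x J joint positions,
    taken around a candidate transition (end of clip A, start of clip B).
    Lower values indicate smoother transitions."""
    diff = window_a - window_b                 # per-joint offsets
    per_joint = np.linalg.norm(diff, axis=2)   # (F, J) Euclidean distances
    if weights is not None:                    # e.g., emphasize feet and pelvis
        per_joint = per_joint * weights
    return float(per_joint.mean())

def good_transitions(clip_a, clip_b, window=5, threshold=0.08):
    """Scan all frame pairs; return (i, j) pairs below the threshold."""
    pairs = []
    for i in range(window, len(clip_a) - window):
        wa = clip_a[i - window:i + window]
        for j in range(window, len(clip_b) - window):
            wb = clip_b[j - window:j + window]
            if frame_distance(wa, wb) < threshold:
                pairs.append((i, j))
    return pairs
```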

4.
In this paper, we present a simple and robust mixed reality (MR) framework that allows for real-time interaction with virtual humans in mixed reality environments under consistent illumination. We look at three crucial parts of this system: interaction, animation and global illumination of virtual humans for an integrated and enhanced presence. The interaction system comprises a dialogue module, which is interfaced with a speech recognition and synthesis system. Next to speech output, the dialogue system generates face and body motions, which are in turn managed by the virtual human animation layer. Our fast animation engine can handle various types of motions, such as normal key-frame animations, or motions that are generated on-the-fly by adapting previously recorded clips. Real-time idle motions are an example of the latter category. All these different motions are generated and blended online, resulting in flexible and realistic animation. Our robust rendering method operates in accordance with the animation layer and is based on a precomputed radiance transfer (PRT) illumination model extended for virtual humans, resulting in a realistic rendition of such interactive virtual characters in mixed reality environments. Finally, we present a scenario that illustrates the interplay and application of our methods, integrated in a single framework for presence and interaction in MR.
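
The rendering side hinges on the standard PRT identity: exit radiance is a dot product between precomputed per-vertex transfer coefficients and the spherical-harmonics (SH) projection of the environment light. A minimal sketch with assumed array shapes (monochrome, diffuse-only), not the paper's extended virtual-human model:

```python
import numpy as np

def prt_shade(transfer, light_sh):
    """Runtime PRT shading: radiance per vertex is the dot product of the
    precomputed transfer vector (visibility + BRDF projected onto SH) with
    the environment light's SH coefficients.

    transfer: (V, K) per-vertex transfer coefficients, precomputed offline
    light_sh: (K,) SH projection of the current environment lighting
    returns:  (V,) scalar radiance per vertex (monochrome for brevity)"""
    return transfer @ light_sh

# Usage: relighting a 3-vertex patch under 4 SH bands (K = 16) costs one
# matrix-vector product per frame, cheap enough for interactive MR scenes.
rng = np.random.default_rng(0)
transfer = rng.uniform(0.0, 0.1, size=(3, 16))
light_sh = rng.uniform(0.0, 1.0, size=16)
print(prt_shade(transfer, light_sh))
```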

5.
To evaluate how top-down and bottom-up processes contribute to learning from animated displays, we conducted four experiments that varied either in the design of animations or the prior knowledge of the learners. Experiments 1–3 examined whether adding interactivity and signaling to an animation benefits learners in developing a mental model of a mechanical system. Although learners utilized interactive controls and signaling devices, their comprehension of the system was no better than that of learners who saw animations without these design features. Furthermore, the majority of participants developed a mental model of the system that was incorrect and inconsistent with information displayed in the animation. Experiment 4 tested effects of domain knowledge and found, surprisingly, that even some learners with high domain knowledge initially constructed the incorrect mental model. After multiple exposures to the materials, the high-knowledge learners revised their mental models to the correct one, while the low-knowledge learners maintained their erroneous models. These results suggest that learning from animations involves a complex interplay between top-down and bottom-up processes and that more emphasis should be placed on how prior knowledge is applied to interpreting animations.

6.
Especially in a constrained virtual environment, precise control of foot placement during character locomotion is crucial to avoid collisions and to ensure natural locomotion. In this paper, we present an extension of the step space: a novel technique for generating animations of a character walking over a set of desired footsteps in real time. We use an efficient greedy nearest-neighbor approach and warp the resulting animation so that it adheres to both spatial and temporal constraints. We show that our technique can generate realistic locomotion animations over an input path very efficiently, even though we impose many constraints on the animation. We also present a simple footstep planning technique that automatically plans regular stepping and sidestepping based on an input path with clearance information generated by a path planner.
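
To make the warping step concrete, the sketch below shows one simple way to satisfy the temporal constraint (uniform time resampling) and the spatial constraint (easing a root offset in over the clip). The data layout and blending scheme are assumptions, not the paper's exact warp.

```python
import numpy as np

def time_warp(frames, target_len):
    """Uniformly resample a clip (T, D) to target_len frames so a step
    lands at the planner's requested time (temporal constraint)."""
    t_src = np.linspace(0.0, 1.0, len(frames))
    t_dst = np.linspace(0.0, 1.0, target_len)
    return np.stack([np.interp(t_dst, t_src, frames[:, d])
                     for d in range(frames.shape[1])], axis=1)

def spatial_warp(frames, root_cols, target_end):
    """Distribute the offset between the clip's final root position and
    the desired footstep over the whole clip, so the step lands on target
    (spatial constraint) without a visible pop."""
    offset = target_end - frames[-1, root_cols]
    blend = np.linspace(0.0, 1.0, len(frames))[:, None]  # ease the offset in
    warped = frames.copy()
    warped[:, root_cols] += blend * offset
    return warped
```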

7.
Segmentation of animations, that is presenting them in pieces rather than as a continuous stream of information, has been shown to have a beneficial effect on cognitive load and learning for novices. Two different explanations of this segmentation effect have been proposed. Firstly, pauses are usually inserted between the segments, which may give learners extra time to perform necessary cognitive processes. Secondly, because segmentation divides animations into meaningful pieces, it provides a form of temporal cueing which may support learners in perceiving the underlying structure of the process or procedure depicted in the animation. This study investigates which of these explanations is the most plausible. Secondary education students (N = 161) studied animations on probability calculation, after having been randomly assigned to one of four conditions: non-segmented animations, animations segmented by pauses only, animations segmented by temporarily darkening the screen only, and animations segmented by both pauses and temporarily darkening the screen. The results suggest that both pauses and cues play a role in the segmentation effect, but in a different way.

8.
The purpose of this research is a quantitative analysis of dance movement patterns, which cannot be analyzed with a motion capture system alone, using simultaneous measurement of body motion and biophysical information. In a basic experiment using optical motion capture and electromyography (EMG) equipment, two versions of the same leg movement are captured by simultaneous measurement: one performed with deliberate muscle strength, the other without, in order to quantitatively analyze the characteristics of the leg movement. We also measured the motion of traditional Japanese dance using the constructed system. The leg movements of Japanese dance can be visualized by displaying a 3D CG character animation together with the motion data and EMG data. We expect that this research will help dancers and dance researchers by providing new information on dance movement that cannot be obtained with motion capture alone.

9.
We propose a data-driven method to realize high-quality detailed hair animations in interactive applications like games. By devising an error metric method to evaluate hair animation similarities, we take hair features into consideration as much as possible. We also propose a novel database construction algorithm based on Secondary Motion Graph. Our algorithm can improve the efficiency of such graphs to reduce redundant data and also achieves visually smooth connection of two animation clips while taking into consideration their future motions. The costs for the run-time process using our Secondary Motion Graph are relatively low, allowing real-time interactive operations. Copyright © 2016 John Wiley & Sons, Ltd.
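
As a rough sketch of the graph-construction idea (not the paper's algorithm or error metric), the code below deduplicates near-identical clips to shrink the database and links clips whose boundary states are close enough for a visually smooth transition:

```python
import numpy as np

def build_motion_graph(clips, connect_eps=0.05, dedup_eps=0.01):
    """clips: list of (T, D) arrays of secondary-motion (e.g., hair) states.
    Returns (kept_indices, edges). An edge (i, j) means clip j can be
    appended after clip i with a visually smooth transition."""
    def boundary_gap(a, b):  # end-of-a vs start-of-b state distance
        return float(np.linalg.norm(a[-1] - b[0]) / a.shape[1] ** 0.5)

    # prune near-duplicate clips to reduce redundant data
    kept = []
    for i, c in enumerate(clips):
        if all(c.shape != clips[k].shape or
               np.linalg.norm(c - clips[k]) / c.size ** 0.5 > dedup_eps
               for k in kept):
            kept.append(i)

    edges = [(i, j) for i in kept for j in kept
             if i != j and boundary_gap(clips[i], clips[j]) < connect_eps]
    return kept, edges
```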

10.
Hand-drawn garment animation combines an artistic drawing style with the physical laws of garment motion. Fusing the hand-drawn style with garment motion simulation, we propose an efficient method for generating garment animations with a hand-drawn look. Hand-drawn sketches specify the artistic effect of the garment animation, and, based on the principle of inverse dynamics, a nonlinear optimization method recovers the dynamic parameters of the garment motion from the user's input sketches. On this basis, we design and implement a generalized in-between interpolation method and an extrapolation method that incorporates explicit motion data, automatically generating the garment animation sequences the user expects, stylistically consistent with the input sketches. Experimental results show that the method effectively generates garment animations that integrate the physical properties of garment motion while preserving the animator's hand-drawn style, significantly improving the efficiency of producing hand-drawn garment animation.
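
The parameter-recovery step can be illustrated with a toy inverse problem: fit dynamics parameters so a simulation passes through "sketched" positions. The damped-spring proxy and the optimizer choice below are stand-ins for the paper's cloth model and nonlinear optimization:

```python
import numpy as np
from scipy.optimize import minimize

def simulate_strip(params, n_steps=60, dt=1.0 / 30.0):
    """Tiny 1-D 'cloth' proxy: a damped spring pulling a point toward rest.
    params = (stiffness k, damping c). Returns positions over time."""
    k, c = params
    x, v, xs = 1.0, 0.0, []           # start displaced by 1 unit
    for _ in range(n_steps):
        a = -k * x - c * v            # Hooke + damping
        v += a * dt                   # semi-implicit Euler
        x += v * dt
        xs.append(x)
    return np.array(xs)

def recover_params(sketch_targets, frame_ids):
    """Inverse dynamics as least squares: find (k, c) whose simulation
    passes closest to the positions 'sketched' at the given frames."""
    def loss(p):
        sim = simulate_strip(p)
        return float(np.sum((sim[frame_ids] - sketch_targets) ** 2))
    res = minimize(loss, x0=np.array([50.0, 1.0]), method="Nelder-Mead")
    return res.x

# Usage: pretend the animator's sketches pin frames 10 and 40.
true = simulate_strip((80.0, 2.0))
k, c = recover_params(true[[10, 40]], [10, 40])
print(f"recovered stiffness={k:.1f}, damping={c:.2f}")
```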

11.
As animations have become more readily available, the complexity of creating them has also increased. In this paper, we address this issue by describing an animation toolkit based on a database approach for reusing geometric animation models and their motion sequences. The aim is to provide a framework for novice animators. We use an alternative notion of a VRML scene graph to describe a geometric model, specifically intended for reuse, and represent this scene graph model as a relational database. A set of spatial, temporal, and motion operations is then used to manipulate the models and motions in an animation database. Spatial operations help in inserting/deleting geometric models in a new animation scene. Temporal and motion operations help in generating animation sequences in a variety of ways. For instance, the motion information of one geometric model can be applied to another model, or a motion sequence can be retargeted to meet additional constraints (e.g., a wiping action on a table can be retargeted with constraints that reduce the size of the table). We present the design and implementation of this toolkit along with several interesting examples of animation sequences that can be generated with it.
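
A minimal sketch of the database approach, with an assumed two-table schema rather than the toolkit's actual one: models and per-frame motion rows live in SQL, so a "motion operation" such as retargeting becomes a query plus a scaled insert.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE model  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE motion (id INTEGER PRIMARY KEY, model_id INTEGER,
                         name TEXT, frame INTEGER, x REAL, y REAL, z REAL);
""")
db.execute("INSERT INTO model VALUES (1, 'robot'), (2, 'human')")
db.executemany("INSERT INTO motion VALUES (NULL, 1, 'wipe', ?, ?, ?, ?)",
               [(f, 0.1 * f, 0.0, 0.0) for f in range(5)])

def retarget(motion_name, src_model, dst_model, scale=1.0):
    """'Motion operation': copy a motion to another model, scaling the
    translations (e.g., to wipe a smaller table)."""
    rows = db.execute(
        "SELECT name, frame, x, y, z FROM motion "
        "WHERE model_id = ? AND name = ?", (src_model, motion_name)).fetchall()
    db.executemany(
        "INSERT INTO motion VALUES (NULL, ?, ?, ?, ?, ?, ?)",
        [(dst_model, n, f, x * scale, y * scale, z * scale)
         for n, f, x, y, z in rows])

retarget("wipe", src_model=1, dst_model=2, scale=0.5)
print(db.execute("SELECT COUNT(*) FROM motion WHERE model_id = 2").fetchone())
```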

12.
One important aspect of creating computer programs is having a sound understanding of the underlying algorithms used by programs. Learning about algorithms, just like learning to program, is difficult, however. A number of prior studies have found that using animation to help teach algorithms had less beneficial effects on learning than hoped. Those results surprise many computer science instructors whose intuition leads them to believe that algorithm animations should assist instruction. This article reports on a study in which animation is utilized in more of a "homework" learning scenario than a "final exam" scenario. Our focus is on understanding how learners utilize animation and other instructional materials in trying to understand a new algorithm, and on gaining insight into how animations can fit into successful learning strategies. The study indicates that students use sophisticated combinations of instructional materials in learning scenarios. In particular, the presence of algorithm animations seems to make a complicated algorithm more accessible and less intimidating, thus leading to enhanced student interaction with the materials and facilitating learning.

13.
Graphical Models, 2012, 74(5): 265–282
We present a new agent-based system for detailed traffic animation on urban arterial networks with diverse junctions such as signalized crossings, merging and weaving areas. To control the motion of traffic for visualization and animation purposes, we utilize the popular follow-the-leader method to simulate various vehicle types and intelligent driving styles. We also introduce a continuous lane-changing model to imitate the vehicle's decision-making process and its dynamic interactions with neighboring vehicles. By applying our approach to several typical urban traffic scenarios, we demonstrate that our system can realistically visualize vehicle behavior on complex road networks and generate immersive traffic flow animations with smooth acceleration strategies and flexible lane changes.
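
A generic follow-the-leader update (not the paper's exact model) looks like the sketch below: each follower's acceleration grows with the speed difference to its leader and shrinks with the gap between them.

```python
import numpy as np

def follow_the_leader_step(pos, vel, dt=0.05, sensitivity=0.6,
                           v_max=30.0, min_gap=2.0):
    """One step of a simple follow-the-leader update on a single lane.
    pos, vel: arrays sorted so index 0 is the lead vehicle. Each follower
    accelerates toward its leader's speed, scaled by the gap; the leader
    relaxes toward the free-flow speed v_max."""
    acc = np.zeros_like(vel)
    acc[0] = 0.5 * (v_max - vel[0])
    gaps = pos[:-1] - pos[1:]                # bumper-to-bumper gaps
    acc[1:] = sensitivity * (vel[:-1] - vel[1:]) / np.maximum(gaps, min_gap)
    vel = np.clip(vel + acc * dt, 0.0, v_max)
    pos = pos + vel * dt
    return pos, vel

# Usage: four vehicles; the followers smoothly adapt to the leader.
pos = np.array([100.0, 80.0, 60.0, 40.0])
vel = np.array([25.0, 20.0, 15.0, 10.0])
for _ in range(200):
    pos, vel = follow_the_leader_step(pos, vel)
print(np.round(vel, 1))   # speeds converge toward the leader's
```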

14.
We introduce techniques for the processing of motion and animations of non-rigid shapes. The idea is to regard animations of deformable objects as curves in shape space. Then, we use the geometric structure on shape space to transfer concepts from curve processing in ℝⁿ to the processing of motion of non-rigid shapes. Following this principle, we introduce a discrete geometric flow for curves in shape space. The flow iteratively replaces every shape with a weighted average shape of a local neighborhood and thereby globally decreases an energy whose minimizers are discrete geodesics in shape space. Based on the flow, we devise a novel smoothing filter for motions and animations of deformable shapes. By shortening the length in shape space of an animation, it systematically regularizes the deformations between consecutive frames of the animation. The scheme can be used for smoothing and noise removal, e.g., for reducing jittering artifacts in motion capture data. We introduce a reduced-order method for the computation of the flow. In addition to being efficient for the smoothing of curves, it is a novel scheme for computing geodesics in shape space. We use the scheme to construct non-linear "Bézier curves" by executing de Casteljau's algorithm in shape space.
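
Both the flow and the Bézier construction reduce to repeated averaging. The sketch below runs the neighbor-averaging flow on a polyline in ℝ³ as a stand-in for shape space, and implements de Casteljau's algorithm with a pluggable average; substituting a geodesic shape average for the linear one would give the nonlinear "Bézier curves" described above.

```python
import numpy as np

def smooth_flow(curve, steps=10, weight=0.5):
    """Discrete geometric flow with fixed endpoints: each interior point
    moves toward the average of its neighbors. In shape space the 'points'
    are shapes and the average is a weighted shape average; here points
    in R^n stand in for shapes."""
    c = curve.copy()
    for _ in range(steps):
        c[1:-1] = (1 - weight) * c[1:-1] + weight * 0.5 * (c[:-2] + c[2:])
    return c

def de_casteljau(control, t, average=lambda a, b, t: (1 - t) * a + t * b):
    """De Casteljau's algorithm with a pluggable 'average' operation."""
    pts = list(control)
    while len(pts) > 1:
        pts = [average(pts[i], pts[i + 1], t) for i in range(len(pts) - 1)]
    return pts[0]

# Usage: denoise a jittery polyline, then evaluate a cubic Bezier point.
noisy = np.cumsum(np.random.default_rng(1).normal(size=(50, 3)), axis=0)
print(np.linalg.norm(np.diff(smooth_flow(noisy), axis=0), axis=1).sum())
ctrl = [np.zeros(3), np.array([1.0, 2.0, 0.0]),
        np.array([3.0, 2.0, 0.0]), np.array([4.0, 0.0, 0.0])]
print(de_casteljau(ctrl, 0.5))
```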

15.
Computers & Education, 1999, 33(4): 253–278
We conducted two experiments designed to examine whether animations of algorithms would help students learn the algorithms more effectively. Across the two studies we used two different algorithms — depth-first search and binomial heaps — and used two different subject populations — students with little or no computer science background and students who were computer science majors — and examined whether animations helped students acquire procedural and conceptual knowledge about the algorithms. The results suggest that one way animations may aid learning of procedural knowledge is by encouraging learners to predict the algorithm's behavior. However, such a learning improvement was also found when learners made predictions of an algorithm's behavior from static diagrams. This suggests that prediction, rather than animation per se, may have been the key factor in aiding learning in the present studies. These initial experiments served to highlight a number of methodological issues that need to be systematically addressed in future experiments in order to fully test the relationship between animation and prediction as well as to examine other possible benefits of animations on learning.

16.
We present an efficient algorithm for building an adaptive bounding volume hierarchy (BVH) in linear time on commodity graphics hardware using CUDA. BVHs are widely used as an acceleration data structure to quickly ray trace animated polygonal scenes. We accelerate the construction process with auxiliary grids that help us build high-quality BVHs with the surface area heuristic (SAH) in O(k·n). We partition scene triangles and build a temporary grid structure only once. We also handle non-uniformly tessellated and long/thin triangles, which we split into several triangle references with tight bounding box approximations. We make no assumptions about the type of geometry or animation motion; however, our algorithm takes advantage of coherent geometry layout and coherent frame-by-frame motion. We demonstrate the performance and quality of the resulting BVHs, which are built quickly with good spatial partitioning.
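
The SAH cost that such builders minimize is standard: traversal cost plus each child's intersection cost, weighted by the probability (surface-area ratio) that a ray hitting the parent also hits that child. A sketch with an assumed bounding-box layout:

```python
import numpy as np

def sah_cost(left_boxes, right_boxes, parent_area, c_trav=1.0, c_isect=1.0):
    """Surface area heuristic for one candidate split.
    boxes: (N, 2, 3) arrays of (min, max) corners per primitive."""
    def union_surface_area(boxes):
        lo = boxes[:, 0].min(axis=0)
        hi = boxes[:, 1].max(axis=0)
        d = hi - lo
        return 2.0 * (d[0] * d[1] + d[1] * d[2] + d[0] * d[2])

    cost = c_trav
    for side in (left_boxes, right_boxes):
        if len(side):
            cost += union_surface_area(side) / parent_area * len(side) * c_isect
    return cost

# Usage: score one candidate split of three unit boxes.
left = np.array([[[0, 0, 0], [1, 1, 1]]], dtype=float)
right = np.array([[[1, 0, 0], [2, 1, 1]], [[2, 0, 0], [3, 1, 1]]], dtype=float)
print(sah_cost(left, right, parent_area=10.0))
```

A builder evaluates this cost over binned candidate splits and keeps the cheapest; the auxiliary grids described in the abstract serve to produce such bins quickly.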

17.
Stroke surfaces: temporally coherent artistic animations from video
The contribution of this paper is a novel framework for synthesizing nonphotorealistic animations from real video sequences. We demonstrate that, through automated mid-level analysis of the video sequence as a spatiotemporal volume—a block of frames with time as the third dimension—we are able to generate animations in a wide variety of artistic styles, exhibiting a uniquely high degree of temporal coherence. In addition to rotoscoping, matting, and novel temporal effects unique to our method, we demonstrate the extension of static nonphotorealistic rendering (NPR) styles to video, including painterly, sketchy, and cartoon shading. We demonstrate how this novel coherent shading framework may be combined with our earlier motion emphasis work to produce a comprehensive "Video Paintbox" capable of rendering complete cartoon-styled animations from video clips.

18.
In this paper, we introduce Canis, a high-level domain-specific language that enables declarative specifications of data-driven chart animations. By leveraging data-enriched SVG charts, its grammar of animations can be applied to the charts created by existing chart construction tools. With Canis, designers can select marks from the charts, partition the selected marks into mark units based on data attributes, and apply animation effects to the mark units, with control over when the effects start. The Canis compiler automatically synthesizes Lottie animation JSON files [Aira], which can be rendered natively across multiple platforms. To demonstrate Canis' expressiveness, we present a wide range of chart animations. We also evaluate its scalability by showing the effectiveness of our compiler in reducing the output specification size and comparing its performance on different platforms against D3.

19.
This paper compares the effects of graphical study aids and animation on the problem-solving performance of students learning computer algorithms. Prior research has found inconsistent effects of animation on learning, and we believe this is partly attributable to animations not being designed to convey key information to learners. We performed an instructional analysis of the to-be-learned algorithms and designed the teaching materials based on that analysis. Participants studied stronger or weaker text-based information about the algorithm, and then some participants additionally studied still frames or an animation. Across 2 studies, learners who studied materials based on the instructional analysis tended to outperform other participants on both near and far transfer tasks. Animation also aided performance, particularly for participants who initially read the weaker text. These results suggest that animation might be added to curricula as a way of improving learning without needing revisions of existing texts and materials. Actual or potential applications of this research include the development of animations for learning complex systems as well as guidelines for determining when animations can aid learning.

20.
We propose a coupled hidden Markov model (CHMM) approach to video-realistic speech animation, which realizes realistic facial animations driven by speaker-independent continuous speech. Unlike hidden Markov model (HMM)-based animation approaches that use a single chain of states, we use CHMMs to explicitly model the subtle characteristics of audio-visual speech, e.g., the asynchrony, temporal dependency (synchrony), and different speech classes between the two modalities. We derive an expectation maximization (EM)-based A/V conversion algorithm for the CHMMs, which converts acoustic speech into facial animation parameters. We also present a video-realistic speech animation system. The system transforms the facial animation parameters into a mouth animation sequence, refines the animation with a performance refinement process, and finally stitches the animated mouth seamlessly onto a background facial sequence. We have compared the animation performance of the CHMM with HMMs, multi-stream HMMs and factorial HMMs, both objectively and subjectively. Results show that the CHMMs achieve superior animation performance; the ph-vi-CHMM system, which adopts different state variables (phoneme states and viseme states) in the audio and visual modalities, performs best. The proposed approach indicates that explicitly modelling audio-visual speech is promising for speech animation.
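
The coupling is the key structural idea: each chain's next state is conditioned on both chains' previous states, which is what lets the model represent audio-visual asynchrony. Below is a toy forward pass over the joint state space, with random toy parameters; it illustrates only the coupled transition structure, not the paper's EM-based A/V conversion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy coupled HMM: an audio chain with A states, a visual chain with V states.
# Coupling: P(a_t | a_{t-1}, v_{t-1}) and P(v_t | v_{t-1}, a_{t-1}).
A, V, T = 3, 4, 10
trans_a = rng.dirichlet(np.ones(A), size=(A, V))  # (A, V) -> dist over next a
trans_v = rng.dirichlet(np.ones(V), size=(A, V))  # (A, V) -> dist over next v
prior = np.full((A, V), 1.0 / (A * V))
lik_a = rng.uniform(0.5, 1.0, size=(T, A))        # toy audio likelihoods
lik_v = rng.uniform(0.5, 1.0, size=(T, V))        # toy visual likelihoods

def forward(prior, trans_a, trans_v, lik_a, lik_v):
    """Normalized forward pass over the joint (audio, visual) state space."""
    alpha = prior * lik_a[0][:, None] * lik_v[0][None, :]
    alpha /= alpha.sum()
    for t in range(1, len(lik_a)):
        # The joint transition factorizes into the two coupled transitions.
        pred = np.einsum("av,avb,avc->bc", alpha, trans_a, trans_v)
        alpha = pred * lik_a[t][:, None] * lik_v[t][None, :]
        alpha /= alpha.sum()
    return alpha

print(forward(prior, trans_a, trans_v, lik_a, lik_v))
```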
