Similar Documents
Found 20 similar documents (search time: 78 ms)
1.
Dynamically simulated characters in virtual environments (Total citations: 9; self-citations: 0; citations by others: 9)
Animated characters can play the role of teachers or guides, team mates or competitors, or simply provide a source of interesting motion in virtual environments. Characters in a compelling virtual environment must have a variety of complex and interesting behaviors and be responsive to the user's actions. The difficulty of constructing such synthetic characters currently hinders the development of these environments, particularly when realism is required. The authors present one approach to populating virtual environments: using dynamic simulation to generate the motion of characters. They explore this approach's effectiveness with two virtual environments: the border collie environment, in which the user acts as a border collie to herd robots into a corral, and the Olympic bicycle race environment, in which the user participates in a bicycle race against synthetic competitors.
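At its simplest, the dynamic-simulation idea above comes down to integrating joint equations of motion forward in time under control torques. The sketch below is ours, not the authors' simulator: a single PD-controlled joint stepped with semi-implicit Euler, with made-up gains, inertia and time step.

```python
import math

def simulate_joint(theta0, target, steps=2000, dt=0.001,
                   kp=50.0, kd=8.0, inertia=1.0):
    """Drive one joint toward a target angle with a PD servo,
    integrating with semi-implicit Euler (all constants illustrative)."""
    theta, omega = theta0, 0.0
    for _ in range(steps):
        torque = kp * (target - theta) - kd * omega  # PD control torque
        omega += (torque / inertia) * dt             # update velocity first
        theta += omega * dt                          # then position
    return theta

final = simulate_joint(0.0, math.pi / 4)
```

A full character simulator repeats this per degree of freedom with coupled multi-body dynamics, but the control-plus-integration loop is the same.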

2.
Interpolation synthesis of articulated figure motion (Total citations: 4; self-citations: 0; citations by others: 4)
Most conventional media depend on engaging and appealing characters. Empty spaces and buildings would not fare well as television or movie programming, yet virtual reality usually offers up such spaces. The problem lies in the difficulty of creating computer-generated characters that display real-time, engaging interaction and realistic motion. Articulated figure motion for real-time computer graphics offers one solution to this problem. A common approach stores a set of motions and lets you choose one particular motion at a time. The article describes a process that greatly expands the range of possible motions. Mixing motions selected from a database lets you create a new motion to exact specifications. The synthesized motion retains the original motions' subtle qualities, such as the realism of motion capture or the expressive, exaggerated qualities of artistic animation. Our method provides a new way to achieve inverse kinematics capability: for example, placing the hands or feet of an articulated figure in specific positions. It proves useful for both real-time graphics and prerendered animation production. The method, called interpolation synthesis, is based on motion capture data and provides real-time character motion for interactive entertainment or avatars in virtual worlds.
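The core of interpolation synthesis is blending time-aligned example clips by weight. A minimal sketch of that idea, assuming the clips are stored as per-frame joint-angle arrays (real systems interpolate joint rotations with quaternion slerp rather than linearly):

```python
import numpy as np

def blend_motions(clip_a, clip_b, w):
    """Linearly interpolate two time-aligned clips of joint angles.
    clip_*: arrays of shape (frames, joints); w in [0, 1]."""
    return (1.0 - w) * clip_a + w * clip_b

walk = np.zeros((4, 3))   # placeholder "walk" joint angles
run  = np.ones((4, 3))    # placeholder "run" joint angles
half = blend_motions(walk, run, 0.5)   # a motion halfway between the two
```

Searching over the blend weights is what lets such a system hit exact constraints (e.g. a requested hand position) while staying inside the span of captured examples.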

3.
Natural motion synthesis of virtual humans has been studied extensively; however, motion control of virtual characters actively responding to complex dynamic environments remains a challenging task in computer animation. Creating realistic human motions for character animation in a dynamically varying environment in movies, television and video games is labor- and cost-intensive, animator-driven work. To address this problem, in this paper we propose a novel motion-synthesis approach that applies optimal path planning to direct motion synthesis, generating realistic character motions in response to complex dynamic environments. In our framework, SIPP (Safe Interval Path Planning) search is implemented to plan a globally optimal path in complex dynamic environments. Three types of control anchors for motion synthesis are defined and extracted, for the first time, on the obtained planning path: turning anchors, height anchors and time anchors. Directed by these control anchors, highly interactive motions of the virtual character are synthesized by a motion field, which produces a wide variety of natural motions and has high control agility to handle complex dynamic environments. Experimental results show that our framework is capable of synthesizing motions of virtual humans naturally adapted to complex dynamic environments, guaranteeing both an optimal path and realistic motion simultaneously.
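The building block of the SIPP search named above is the "safe interval": the stretches of time during which a grid cell is not occupied by any moving obstacle. A small sketch of how such intervals can be computed from a cell's known occupancy times (our illustration, not the paper's implementation):

```python
def safe_intervals(blocked, horizon):
    """Split [0, horizon] into the intervals during which a cell is free.
    blocked: sorted, non-overlapping (start, end) occupied intervals."""
    intervals, t = [], 0
    for start, end in blocked:
        if start > t:                      # a free gap before this obstacle
            intervals.append((t, start))
        t = max(t, end)                    # skip past the occupied stretch
    if t < horizon:
        intervals.append((t, horizon))     # free until the planning horizon
    return intervals

ivals = safe_intervals([(2, 4), (6, 7)], 10)
```

SIPP then searches over (cell, safe-interval) pairs instead of (cell, timestep) pairs, which keeps the state space small even with many moving obstacles.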

4.
5.
By offering a natural, intuitive interface with the virtual world, auditory display can enhance a user's experience in a multimodal virtual environment and further improve the user's sense of presence. However, compared to graphical display, sound synthesis has not been well investigated because of the extremely high computational cost of simulating realistic sounds. The state of the art for sound production in virtual environments is to use recorded sound clips triggered by events in the virtual environment, similar to how recorded animation sequences generated all the character motion in virtual worlds in the early days. In this article, we describe several techniques for accelerating sound simulation, thereby enabling realistic, physically based sound synthesis for large-scale virtual environments.
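A standard starting point for the physically based sound synthesis discussed above is modal synthesis: an impact sound modeled as a sum of exponentially decaying sinusoids, one per vibration mode. A minimal sketch, with made-up mode frequencies, decays and gains:

```python
import math

def modal_impact(modes, duration=0.5, rate=44100):
    """Synthesize an impact as a sum of damped sinusoids (modal synthesis).
    modes: list of (freq_hz, decay_per_s, gain) tuples (values illustrative)."""
    n = int(duration * rate)
    samples = []
    for i in range(n):
        t = i / rate
        s = sum(g * math.exp(-d * t) * math.sin(2 * math.pi * f * t)
                for f, d, g in modes)
        samples.append(s)
    return samples

clip = modal_impact([(440.0, 6.0, 1.0), (975.0, 9.0, 0.5)])
```

The acceleration techniques the article refers to mostly amount to pruning inaudible modes and amortizing this per-sample sum across many simultaneous sound sources.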

6.
We present an interactive method that allows animated characters to navigate through cluttered environments. Our characters are equipped with a variety of motion skills to clear obstacles, narrow passages, and highly constrained environment features. Our control method incorporates a behavior model into well-known, standard path planning algorithms. Our behavior model, called deformable motion, consists of a graph of motion capture fragments. The key idea of our approach is to add flexibility to motion fragments such that we can situate them in a cluttered environment via a constraint-based formulation. We demonstrate our deformable motion for real-time interactive navigation and global path planning in highly constrained virtual environments.

7.
In this paper, we propose an approach to synthesize new dance routines by combining body part motions from a human motion database. The proposed approach aims to provide a movement source to allow robots or animation characters to perform improvised dances to music, and also to inspire choreographers with the provided movements. Based on the observation that some body parts perform more appropriately than other body parts during dance performances, a correlation analysis of music and motion is conducted to identify the expressive body parts. We then combine the body part movement sources to create a new motion, which differs from all sources in the database. The generated performances are evaluated by a user questionnaire assessment, and the results are discussed to understand what is important in generating more appealing dance routines.
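The correlation analysis mentioned above can be illustrated with a plain Pearson correlation between a music feature and per-body-part motion activity; the feature values below are hypothetical, chosen only to show the selection step:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

music_energy = [0.1, 0.4, 0.9, 0.3]            # per-beat music feature
arm_motion   = [0.2, 0.5, 1.0, 0.4]            # hypothetical arm activity
leg_motion   = [0.9, 0.1, 0.2, 0.8]            # hypothetical leg activity

# Pick the body part whose motion tracks the music most closely.
best = max([("arm", arm_motion), ("leg", leg_motion)],
           key=lambda part: pearson(music_energy, part[1]))[0]
```

The selected parts' motion sources are then recombined into a routine; the paper's actual features and selection criteria are richer than this sketch.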

8.
Three-dimensional sound's effectiveness in virtual reality (VR) environments has been widely studied. However, due to the large differences between VR and augmented reality (AR) systems in registration, calibration, perceived immersiveness, navigation, and localization, it is important to develop new approaches to seamlessly register virtual 3-D sound in AR environments and to study 3-D sound's effectiveness in an AR context. In this paper, we design two experimental AR environments to study the effectiveness of 3-D sound both quantitatively and qualitatively. Two different tracking methods are applied to retrieve the 3-D position of virtual sound sources in each experiment. We examine the impact of 3-D sound on improving depth perception and shortening task completion time. We also investigate its impact on immersive and realistic perception, identification of different spatial objects, and the subjective feeling of "human presence and collaboration". Our studies show that applying 3-D sound is an effective way to complement visual AR environments. It helps depth perception and task performance, and facilitates collaboration between users. Moreover, it enables a more realistic environment and a more immersive feeling of being inside the AR environment by both visual and auditory means. In order to make full use of the intensity cues provided by 3-D sound, a process to scale the intensity difference of 3-D sound at different depths is designed to cater to small AR environments. The user study results show that the scaled 3-D sound significantly increases the accuracy of depth judgments and shortens searching task completion time. This method provides a necessary foundation for implementing 3-D sound in small AR environments. Our user study results also show that this process does not degrade the intuitiveness and realism of an augmented audio reality environment.
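The intensity-scaling process described above can be sketched as a distance-based gain curve with an exaggerated rolloff exponent, so that the small depth differences typical of a tabletop-scale AR scene still produce audible level differences. The constants are our assumptions, not the paper's values:

```python
def scaled_gain(distance, ref=0.2, rolloff=2.5):
    """Gain for a source at `distance` meters: clamped at the reference
    distance, then attenuated with an exaggerated rolloff exponent
    (a plain inverse-square law would use rolloff = 2)."""
    return (ref / max(distance, ref)) ** rolloff

near = scaled_gain(0.3)   # 30 cm away
far  = scaled_gain(0.6)   # 60 cm away
```

Doubling the distance here changes the level by more than inverse-square attenuation would, which is the point of the scaling: depth cues are stretched to remain discriminable in a small workspace.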

9.
Audio-based virtual environments have been increasingly used to foster cognitive and learning skills. A number of studies have also highlighted that the use of technology can help learners develop affective skills such as motivation and self-esteem. This study presents the design and usability of 3D interactive environments for children with visual disabilities to help them solve problems in Chilean geography and culture. We introduce AudioChile, a virtual environment that can be navigated through 3D sound to enhance spatiality and immersion throughout the environment. 3D sound is used to orient, avoid obstacles, and identify the positions of various characters and objects within the environment. We found during the usability evaluation that sound can be fundamental for attention and motivation purposes during interaction. Learners identified and clearly discriminated environmental sounds to solve everyday problems and to support spatial orientation and laterality.

10.
A review of musical creativity in collaborative virtual environments (CVE) shows recurring interaction metaphors that tend from precise control of individual parameters to higher level gestural influence over whole systems. Musical performances in CVE also show a consistent re-emergence of a unique form of collaboration called "melding" in which individual virtuosity is subsumed to that of the group. Based on these observations, we hypothesized that CVE could be a medium for creating new forms of music, and developed the audiovisual augmented reality system (AVIARy) to explore higher level metaphors for composing spatial music in CVE. This paper describes the AVIARy system, the initial experiments with interaction metaphors, and the application of the system to develop and stage a collaborative musical performance at a sound art concert. The results from these experiments indicate that CVE can be a medium for new forms of musical creativity and distinctive forms of music.

11.
Expression could play a key role in the audio rendering of virtual reality applications. Understanding it is an ambitious issue in the scientific community, and several studies have investigated analysis techniques to detect expression in music performances. The knowledge coming from these analyses is widely applicable: embedding expression in audio interfaces can lead to attractive solutions that enrich interfaces in mixed-reality environments. Synthesized expressive sounds can be combined with real stimuli to experience augmented reality, and they can be used in multi-sensory stimulations to provide the sensation of first-person experience in virtual expressive environments. In this work we focus on the expression of violin and flute performances, with reference to the sensorial and affective domains. By means of selected audio features, we derive a set of parameters describing performers' strategies which are suitable both for tuning expressive synthesis instruments and for enhancing audio in human–computer interfaces.

12.
Can we make virtual characters in a scene interact with their surrounding objects through simple instructions? Is it possible to synthesize such motion plausibly with a diverse set of objects and instructions? Inspired by these questions, we present the first framework to synthesize the full-body motion of virtual human characters performing specified actions with 3D objects placed within their reach. Our system takes textual instructions specifying the objects and the associated 'intentions' of the virtual characters as input and outputs diverse sequences of full-body motions. This contrasts with existing works, where full-body action synthesis methods generally do not consider object interactions, and human-object interaction methods focus mainly on synthesizing hand or finger movements for grasping objects. We accomplish our objective by designing an intent-driven full-body motion generator, which uses a pair of decoupled conditional variational auto-regressors to learn the motion of the body parts in an autoregressive manner. We also optimize the 6-DoF pose of the objects such that they plausibly fit within the hands of the synthesized characters. We compare our proposed method with existing methods of motion synthesis and establish a new and stronger state-of-the-art for the task of intent-driven motion synthesis.
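The autoregressive generation loop underlying the conditional variational auto-regressors above can be sketched schematically: at each step a latent is sampled and a decoder predicts the next pose from the previous pose, the latent, and an intent code. The `decoder` below is a hypothetical stand-in for a trained network, included only to show the rollout structure:

```python
import numpy as np

rng = np.random.default_rng(0)

def decoder(prev_pose, latent, intent):
    """Hypothetical stand-in for a trained conditional decoder: predicts
    the next pose from the previous pose, a latent sample, and an intent."""
    return 0.9 * prev_pose + 0.1 * intent + 0.01 * latent

def rollout(start_pose, intent, steps=30):
    """Autoregressive rollout: each predicted pose conditions the next."""
    poses, pose = [start_pose], start_pose
    for _ in range(steps):
        z = rng.standard_normal(pose.shape)   # sample the latent code
        pose = decoder(pose, z, intent)
        poses.append(pose)
    return np.stack(poses)

seq = rollout(np.zeros(3), np.ones(3))
```

In the actual system there are two decoupled decoders for different body-part groups, and the intent code comes from the textual instruction; this sketch collapses all of that into one toy predictor.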

13.
Curvilinear features extracted from a 2D user-sketched feature map have been used successfully to constrain patch-based texture synthesis of real landscapes. This map-based user interface does not give fine control over the height profile of the generated terrain. We propose a new texture-based terrain synthesis framework controllable by a terrain sketching interface. We enhance the realism of the generated landscapes by using a novel patch merging method that reduces boundary artefacts caused by overlapping terrain patches. A more constrained synthesis process is used to produce landscapes that better match user requirements. The high computational cost of texture synthesis is reduced with a parallel implementation on graphics hardware. Our GPU-accelerated solution provides a significant speedup depending on the size of the example terrain. We show experimentally that our framework is more successful in generating realistic landscapes than current example-based terrain synthesis methods. We conclude that texture-based terrain synthesis combined with sketching provides an excellent solution to the user control and realism challenges of virtual landscape generation.
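The boundary artefacts mentioned above arise where neighbouring height patches overlap. A simple baseline for merging them is feather blending across the overlap band; the paper's method is more sophisticated, but this sketch shows the problem being addressed:

```python
import numpy as np

def merge_patches(left, right, overlap):
    """Feather-blend two equal-height terrain patches over an `overlap`
    column band to soften the seam (a simple baseline, not the paper's)."""
    h, w = left.shape
    out = np.empty((h, 2 * w - overlap))
    out[:, :w - overlap] = left[:, :w - overlap]      # left-only region
    out[:, w:] = right[:, overlap:]                   # right-only region
    alpha = np.linspace(0.0, 1.0, overlap)            # 0 -> left, 1 -> right
    out[:, w - overlap:w] = ((1 - alpha) * left[:, -overlap:]
                             + alpha * right[:, :overlap])
    return out

flat = np.zeros((2, 4))       # height 0 everywhere
high = np.ones((2, 4))        # height 1 everywhere
merged = merge_patches(flat, high, 2)
```

Linear feathering removes the hard step but can still leave visible ghosting on structured terrain, which is the artefact the paper's merging method targets.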

14.

As current virtual environments are less visually rich than real-world environments, careful consideration must be given to their design to ameliorate the lack of visual cues. One important design criterion in this respect is to make certain that adequate navigational cues are incorporated into complex virtual worlds. In this paper we show that adding 3D spatialized sound to a virtual environment can help people navigate through it. We conducted an experiment to determine if the incorporation of 3D sound (a) helps people find specific locations in the environment, and (b) influences the extent to which people acquire spatial knowledge about their environment. Our results show that the addition of 3D sound did reduce the time taken to locate objects in a complex environment. However, the addition of sound did not increase the amount of spatial knowledge users were able to acquire. In fact, the addition of 3D auditory sound cues appears to suppress the development of overall spatial knowledge of the virtual environment.
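A minimal version of the spatialized audio beacon used in such navigation studies is equal-power stereo panning driven by the source's azimuth relative to the listener. This sketch is our illustration (the coordinate convention, with +y to the listener's right at yaw 0, is an assumption), not the study's audio pipeline:

```python
import math

def pan_gains(listener_yaw, src_x, src_y):
    """Equal-power left/right gains for a source at (src_x, src_y)
    relative to a listener facing +x rotated by `listener_yaw` radians."""
    azimuth = math.atan2(src_y, src_x) - listener_yaw
    p = (math.sin(azimuth) + 1.0) / 2.0      # 0 = hard left, 1 = hard right
    return math.cos(p * math.pi / 2), math.sin(p * math.pi / 2)

front = pan_gains(0.0, 1.0, 0.0)   # source straight ahead: balanced
side  = pan_gains(0.0, 0.0, 1.0)   # source on the listener's right
```

Intensity panning alone gives only a left/right cue; full 3D spatialization adds interaural time differences and HRTF filtering, which is what makes sources localizable in elevation and front/back as well.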

15.
Animated virtual human characters are a common feature in interactive graphical applications, such as computer and video games, online virtual worlds and simulations. Due to the dynamic nature of such applications, character animation must be responsive and controllable in addition to looking as realistic and natural as possible. Though procedural and physics-based animation provide a great amount of control over motion, they still look too unnatural to be of use in all but a few specific scenarios, which is why interactive applications nowadays still rely mainly on recorded and hand-crafted motion clips. The challenge faced by animation system designers is to dynamically synthesize new, controllable motion by concatenating short motion segments into sequences of different actions or by parametrically blending clips that correspond to different variants of the same logical action. In this article, we provide an overview of research in the field of example-based motion synthesis for interactive applications. We present methods for automated creation of supporting data structures for motion synthesis and describe how they can be employed at run-time to generate motion that accurately accomplishes tasks specified by the AI or a human user.
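The concatenation approach described above is usually organized as a motion graph: nodes are clips and edges mark transitions where the end of one clip matches the start of another. A toy traversal, with clip names and transitions invented for illustration:

```python
import random

# Toy motion graph: each clip lists the clips it can transition into.
GRAPH = {
    "idle": ["idle", "walk"],
    "walk": ["walk", "run", "idle"],
    "run":  ["run", "walk"],
}

def synthesize(start, steps, rng):
    """Concatenate clips by taking a random walk over allowed transitions.
    A real system would pick edges to satisfy a task, not at random."""
    seq, node = [start], start
    for _ in range(steps):
        node = rng.choice(GRAPH[node])
        seq.append(node)
    return seq

seq = synthesize("idle", 10, random.Random(0))
```

The "supporting data structures" the survey refers to are essentially this graph plus blend parameterizations, built automatically by detecting frames where clips are similar enough to splice.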

16.
Physically based characters have not yet received wide adoption in the entertainment industry because control remains both difficult and unreliable. Even with the incorporation of motion capture for reference, which adds believability, characters fail to be convincing in their appearance when the control is not robust. To address these issues, we propose a simple Jacobian transpose torque controller that employs virtual actuators to create a fast and reasonable tracking system for motion capture. We combine this controller with a novel approach we call the topple-free foot strategy, which conservatively applies artificial torques to the standing foot to produce a character that is capable of performing with arbitrary robustness. The system is both easy to implement and straightforward for the animator to adjust to the desired robustness, by considering the trade-off between physical realism and stability. We showcase the benefit of our system with a wide variety of example simulations, including energetic motions with multiple support contact changes, such as capoeira, as well as an extension that highlights the approach coupled with a Simbicon-controlled walker. With this work, we aim to advance the state-of-the-art in the practical design of physically based characters that can employ unaltered reference motion (e.g. motion capture data) and directly adapt it to a simulated environment without the need for optimization or inverse dynamics.
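The Jacobian transpose controller named above maps a desired virtual-actuator force f at an end effector to joint torques via tau = J^T f, avoiding inverse dynamics entirely. A sketch on a toy two-link planar arm (link lengths 1, our example geometry, not the paper's character model):

```python
import numpy as np

def jacobian_transpose_torques(jacobian, force):
    """Joint torques that realize a virtual force at the end effector:
    tau = J^T f."""
    return jacobian.T @ force

def planar_jacobian(q1, q2):
    """End-effector Jacobian of a 2-link planar arm with unit link lengths
    (position x = cos q1 + cos(q1+q2), y = sin q1 + sin(q1+q2))."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-s1 - s12, -s12],
                     [ c1 + c12,  c12]])

J = planar_jacobian(0.0, np.pi / 2)
tau = jacobian_transpose_torques(J, np.array([0.0, 1.0]))  # pull hand up
```

Because the transpose (rather than the inverse) of J is used, the controller is cheap and well defined even at singular poses, at the cost of only approximately realizing the requested force.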

17.
When interacting in a virtual environment, users are confronted with a number of interaction techniques. These interaction techniques may complement each other, but in some circumstances can be used interchangeably. Because of this situation, it is difficult for the user to determine which interaction technique to use. Furthermore, the use of multimodal feedback, such as haptics and sound, has proven beneficial for some, but not all, users. This complicates the development of such a virtual environment, as designers are not sure about the implications of the addition of interaction techniques and multimodal feedback. A promising approach for solving this problem lies in the use of adaptation and personalization. By incorporating knowledge of a user's preferences and habits, the user interface should adapt to the current context of use. This could mean that only a subset of all possible interaction techniques is presented to the user. Alternatively, the interaction techniques themselves could be adapted, e.g. by changing the sensitivity or the nature of the feedback. In this paper, we propose a conceptual framework for realizing adaptive personalized interaction in virtual environments. We also discuss how to establish, verify and apply a user model, which forms the first and important step in implementing the proposed conceptual framework. This study results in general and individual user models, which are then verified to benefit users interacting in virtual environments. Furthermore, we conduct an investigation to examine how users react to a specific type of adaptation in virtual environments (i.e. switching between interaction techniques). When an adaptation is integrated in a virtual environment, users respond positively to it, as their performance significantly improves and their level of frustration decreases.
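The "present only a subset of techniques" adaptation described above reduces, at its core, to scoring each candidate technique against a user model and the current context and surfacing the best one. The technique names and the additive scoring below are hypothetical, meant only to make the selection step concrete:

```python
def pick_technique(user_model, context):
    """Choose the interaction technique with the highest combined score
    from the user model and the current context (hypothetical scoring)."""
    candidates = set(user_model) | set(context)
    scores = {t: user_model.get(t, 0.0) + context.get(t, 0.0)
              for t in candidates}
    return max(scores, key=scores.get)

prefs   = {"ray_cast": 0.8, "go_go": 0.3}   # learned user preferences
context = {"ray_cast": 0.1, "go_go": 0.4}   # suitability in current scene
best = pick_technique(prefs, context)
```

A real framework would learn these scores from logged interactions and would also adapt parameters within a technique (sensitivity, feedback modality) rather than only switching between techniques.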

18.
Environmental sound, the most pervasive category of sound in daily life, is an important source of information about the outside world. Over the past decade or so, as users' expectations for the realism of virtual scenes have kept rising, producing synchronized, realistic environmental audio has become an indispensable part of building highly immersive virtual environments. Environmental sound-source simulation, the cornerstone of realistic virtual-environment audio, has received extensive attention and exploration from researchers. Compared with traditional manually authored sound sources, through alg... [abstract truncated in source]

19.
Motion fusion of multiple characters and virtual scenes (Total citations: 1; self-citations: 0; citations by others: 1)
In motion-capture-based computer animation, most existing motion-editing methods process the motion of a single character, and the character's motion is usually planned in advance, lacking the ability to respond to changes in the external environment. To improve characters' perception of and responsiveness to the environment, as well as autonomous coordination among characters, we propose a new concept and method called "motion fusion", which merges multiple single-character motions captured in structured environments into the same unstructured virtual environment. Within an architecture of motion decision, motion coordination, motion solving and motion execution, we study in depth the key problems of motion planning, multi-character coordination, discrete motion-mode decision and continuous motion-pose generation. Experimental results show that the proposed method effectively achieves motion fusion in which characters and the scene are coordinated within the same scene. The high autonomy of the animated characters and the high reusability of motion-capture data give the method broad applicability in computer animation and games.

20.
The believable portrayal of character performances is critical in engaging the immersed player in interactive entertainment. The story, the emotion and the relationship between the player and the world they are interacting within are hugely dependent on how appropriately the world's characters look, move and behave. We are concerned here with the character's motion; with next-generation game consoles like the Xbox 360 and PlayStation 3, the graphical representation of characters will take a major step forward, which places even more emphasis on the motion of the character. The behavior of the character is driven by story and design, which are adapted to game context by the game's AI system. The motion of the characters populating the game's world, however, is evolving into an interesting blend of kinematics, dynamics, biomechanics and AI-driven motion planning. Our goal here is to present the technologies involved in creating what are essentially character automata: emotionless and largely brainless character shells that nevertheless exhibit enough "behavior" to move as directed while adapting to the environment through sensing and actuating responses. This abstracts away the complexities of low-level motion control, dynamics, collision detection, etc., and allows the game's artificial intelligence system to direct these characters at a higher level. While much research has already been conducted in this area and some great results have been published, we will present the particular issues that face game developers working on current and next-generation consoles, and how these technologies may be integrated into game production pipelines so as to facilitate the creation of character performances in games.
The challenges posed by limited memory and CPU bandwidth (though this is changing somewhat with the next generation) and the challenges of integrating these solutions with current game design approaches lead to some interesting problems, some of which the industry has solutions for and some of which remain largely unsolved.
