Similar Documents
Found 20 similar documents (search time: 437 ms)
1.
Behavioral and cognitive modeling for virtual characters is a promising field. It significantly reduces the workload on the animator, allowing characters to act autonomously in a believable fashion. It also makes interactivity between humans and virtual characters more practical than ever before. In this paper we present a novel technique where an artificial neural network is used to approximate a cognitive model. This allows us to execute the model much more quickly, making cognitively empowered characters more practical for interactive applications. Through this approach, we can animate several thousand intelligent characters in real time on a PC. We also present a novel technique whereby a virtual character, instead of using an explicit model supplied by the user, automatically learns an unknown behavioral/cognitive model through reinforcement learning. The ability to learn without an explicit model appears promising for helping behavioral and cognitive modeling become more broadly accepted and used in the computer graphics community, as it can further reduce the workload on the animator. Further, it provides solutions for problems that cannot easily be modeled explicitly. Copyright © 2004 John Wiley & Sons, Ltd.
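The core idea of this abstract, replacing an expensive cognitive model with a fast learned approximation, can be sketched as follows. The "cognitive model" here is a stand-in steering rule, and the small one-hidden-layer network is an illustrative assumption, not the paper's architecture.

```python
import numpy as np

# Hypothetical stand-in for an expensive cognitive model: maps a 2-D state
# (distance and angle to a target) to a steering command.
def cognitive_model(state):
    d, a = state
    return np.tanh(2.0 * a) * (1.0 - np.exp(-d))

rng = np.random.default_rng(0)
X = rng.uniform([0.0, -1.5], [3.0, 1.5], size=(2000, 2))   # sampled states
y = np.array([cognitive_model(s) for s in X])

# One-hidden-layer tanh network trained by batch gradient descent on
# squared error; at runtime the forward pass replaces the expensive model.
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return (h @ W2 + b2).ravel(), h

lr = 0.1
for _ in range(3000):
    pred, h = forward(X)
    err = pred - y                                  # gradient of squared error
    gW2 = h.T @ err[:, None] / len(X); gb2 = err.mean(keepdims=True)
    dh = (err[:, None] * W2.T) * (1.0 - h ** 2)     # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

pred, _ = forward(X)
mse = float(np.mean((pred - y) ** 2))   # small residual error of the surrogate
```

The surrogate trades a small approximation error for a forward pass cheap enough to run for thousands of characters per frame.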

2.
The availability of high‐performance 3D workstations has increased the range of application for interactive real‐time animation. In these applications the user can directly interact with the objects in the animation and direct the evolution of their motion, rather than simply watching a pre‐computed animation sequence. Interactive real‐time animation has fast‐growing applications in virtual reality, scientific visualization, medical training and distance learning. Traditional approaches to computer animation have been based on the animator having complete control over all aspects of the motion. In interactive animation the user can interact with any of the objects, which changes the current motion path or behaviour in real time. The objects in the animation must be capable of reacting to the user's actions, not simply replaying a canned motion sequence. This paper presents a framework for interactive animation that allows the animator to specify the reactions of objects to events generated by other objects and the user. This framework is based on the concept of relations that describe how an object reacts to the influence of a dynamic environment. Each relation specifies one motion primitive triggered by either its enabling condition or the state of the environment. A collection of the relations is structured through several hierarchical layers to produce responsive behaviours and their variations. This framework is illustrated by several room‐based dancing examples that are modelled by relations. Copyright © 2000 John Wiley & Sons, Ltd.
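A minimal sketch of the relation concept described above: each relation pairs an enabling condition on the environment state with one motion primitive, and relations are layered so that higher layers override lower ones. The names (`Relation`, `pick_primitive`) and the priority-based layering are illustrative assumptions, not the paper's exact design.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Relation:
    condition: Callable[[dict], bool]   # enabling condition on the state
    primitive: str                      # motion primitive it triggers
    priority: int = 0                   # layer: higher layers override lower

def pick_primitive(relations: List[Relation], state: dict, default: str = "idle") -> str:
    """Evaluate relations against the current environment state and return
    the primitive of the highest-priority enabled relation."""
    enabled = [r for r in relations if r.condition(state)]
    if not enabled:
        return default
    return max(enabled, key=lambda r: r.priority).primitive

relations = [
    Relation(lambda s: s["music_on"], "dance", priority=1),
    Relation(lambda s: s["user_near"], "greet", priority=2),  # reaction to the user
]

state = {"music_on": True, "user_near": True}
primitive = pick_primitive(relations, state)   # greeting overrides dancing
```

Because each relation reads the live environment state every frame, the character reacts to the user rather than replaying a canned sequence.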

3.
The animation of realistic characters necessitates the construction of complicated anatomical structures such as muscles, which allow subtle shape variation of the character's outer surface to be displayed believably. Unfortunately, despite numerous efforts, modelling muscle structures is still left to an animator, who has to painstakingly build them up piece by piece, making it a very tedious process. What is even more frustrating is that the animator has to build the same muscle structure for every new character. We propose a muscle retargeting technique that helps an animator automatically construct a muscle structure by reusing an already built and tested model (the template model). Our method defines a spatial transfer between the template model and a new model based on the skin surface and the rigging structure. To ensure that the retargeted muscle is tightly packed inside the new character, we define a novel spatial optimization based on spherical parameterization. Our method requires no manual input, meaning that an animator needs no anatomical knowledge to create realistic, accurate musculature models.

4.
Pose Controlled Physically Based Motion
In this paper we describe a new method for generating and controlling physically‐based motion of complex articulated characters. Our goal is to create motion from scratch, where the animator provides a small amount of input and gets in return a highly detailed and physically plausible motion. Our method relieves the animator from the burden of enforcing physical plausibility, but at the same time provides full control over the internal DOFs of the articulated character via a familiar interface. Control over the global DOFs is also provided by supporting kinematic constraints. Unconstrained portions of the motion are generated in real time, since the character is driven by joint torques generated by simple feedback controllers. Although kinematic constraints are satisfied using an iterative search (shooting), this process is typically inexpensive, since it only adjusts a few DOFs at a few time instances. The low expense of the optimization, combined with the ability to generate unconstrained motions in real time yields an efficient and practical tool, which is particularly attractive for high inertia motions with a relatively small number of kinematic constraints.
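The "simple feedback controllers" that generate the joint torques are standard proportional-derivative (PD) controllers. A minimal single-joint sketch, with illustrative gains and unit inertia (not the paper's values):

```python
def pd_torque(theta, omega, target, kp=40.0, kd=8.0):
    """Torque pulling joint angle `theta` toward `target`, damping velocity."""
    return kp * (target - theta) - kd * omega

def simulate(theta=0.0, omega=0.0, target=1.0, inertia=1.0, dt=0.01, steps=500):
    """Integrate the joint with semi-implicit Euler under PD control."""
    for _ in range(steps):
        tau = pd_torque(theta, omega, target)
        omega += (tau / inertia) * dt
        theta += omega * dt
    return theta

final = simulate()   # settles very close to the target angle of 1.0 rad
```

Because each joint only needs this cheap local feedback law, unconstrained portions of the motion can run in real time.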

5.
In this paper we argue for our NPAR system as an effective 2D alternative to most NPR research, which is focused on frame coherent stylised rendering of 3D models. Our approach gives a highly stylised look to images without the support of 3D models. Nevertheless, they still behave as though they are animated by drawing, which they are. First, a stylised brush tool is used to freely draw extreme poses of characters. Each character is built of 2D drawn brush strokes which are manually grouped into layers. Each layer is assigned its place in a drawing hierarchy called a hierarchical display model (HDM). Next, multiple HDMs are created for the same character, each corresponding to a specific view. A collection of HDMs essentially reintroduces some correspondence information to the 2D drawings needed for inbetweening and, in effect, eliminates the need for a true 3D model. Once the models are composed the animator starts by defining keyframes from extreme poses in time. Next, brush stroke trajectories defined by the keyframe HDMs are inbetweened automatically across intermediate frames. Finally, each HDM of each generated inbetween frame is traversed and all elements are drawn one on another from back to front. Our techniques support highly rendered styles which are particularly difficult to animate by traditional means including the 'airbrushed', scraperboard, watercolour, Gouache, 'ink-wash', pastel, and the 'crayon' styles. In addition, we describe the data path to be followed to create highly stylised animations by incorporating real footage. We believe our system offers a new fresh perspective on computer-aided animation production and associated tools.

6.
We propose a design framework to assist with user‐generated content in facial animation, without requiring any animation experience or ground truth reference. Where conventional prototyping methods rely on handcrafting by experienced animators, our approach looks to encode the role of the animator as an Evolutionary Algorithm acting on animation controls, driven by visual feedback from a user. Presented as a simple interface, users sample control combinations and select favourable results to influence later sampling. Over multiple iterations of disregarding unfavourable control values, parameters converge towards the user's ideal. We demonstrate our framework through two non‐trivial applications: creating highly nuanced expressions by evolving control values of a face rig and non‐linear motion through evolving control point positions of animation curves.
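The interface loop described above can be sketched as a simple evolutionary algorithm over a vector of rig control values. Here the user's "favourable" selections are simulated by distance to a hidden ideal expression; the selection scheme, Gaussian mutation, and elitism are illustrative assumptions, not the paper's exact operators.

```python
import random

random.seed(1)
IDEAL = [0.7, 0.2, 0.9, 0.4]          # stands in for the user's mental target

def user_prefers(candidate):
    """Proxy for the user clicking a favourable result: closer is better."""
    return -sum((c - i) ** 2 for c, i in zip(candidate, IDEAL))

def evolve(pop_size=20, generations=40, sigma=0.1):
    pop = [[random.random() for _ in IDEAL] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=user_prefers, reverse=True)
        parents = pop[: pop_size // 4]             # the user-selected samples
        children = [
            [min(1.0, max(0.0, g + random.gauss(0, sigma)))
             for g in random.choice(parents)]      # mutate around favourites
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children                   # keep selections unchanged
    return max(pop, key=user_prefers)

best = evolve()   # control values converge toward the user's ideal
```

Discarding unfavourable control values each round is all the "fitness function" the real system needs, so the user never has to articulate what they want numerically.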

7.
Serious games are a new direction in computer games: they provide vivid, interactive simulated teaching environments and have been widely applied in science education, medical rehabilitation, emergency management, military training, and other fields. A virtual character is a graphical entity in a serious game that simulates a living being; behaviourally believable virtual characters improve the user's experience of a serious game. While graphics rendering technology for serious games has gradually matured, research on behavioural modelling of virtual characters is still at an early stage. A believable virtual character must have perception, emotion, and behavioural capabilities. This paper analyses research on virtual character behaviour modelling from four aspects: game narrative and behaviour, behaviour modelling methods, behaviour learning, and evaluation of behaviour models. It analyses the characteristics of finite state machines and behaviour trees, discusses behaviour learning methods for virtual characters, identifies the key elements of reinforcement learning, and explores ways of applying deep reinforcement learning. Synthesizing existing research, it proposes a behaviour framework for virtual characters comprising four main modules: sensory input, perceptual analysis, behaviour decision, and action. It then discusses problems requiring further research from four angles: incorporating affective computing, game narrative and scene design, smartphone platforms, and multimodal interaction. Behaviour modelling of virtual characters must jointly consider game narrative, machine learning, and human-computer interaction; building virtual characters with autonomous perception, emotion, behaviour, learning ability, and multimodal interaction can greatly enhance the appeal of serious games and better realize learning through play.
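The survey contrasts finite state machines with behaviour trees as behaviour modelling methods. A minimal behaviour-tree sketch (the node types are the standard selector/sequence composites; the flee/wander example is illustrative, not from the survey):

```python
SUCCESS, FAILURE = "success", "failure"

def selector(*children):
    """Try children in order; succeed on the first child that succeeds."""
    def tick(state):
        for child in children:
            if child(state) == SUCCESS:
                return SUCCESS
        return FAILURE
    return tick

def sequence(*children):
    """Run children in order; fail on the first child that fails."""
    def tick(state):
        for child in children:
            if child(state) == FAILURE:
                return FAILURE
        return SUCCESS
    return tick

def condition(key):
    return lambda state: SUCCESS if state.get(key) else FAILURE

def action(name):
    def tick(state):
        state.setdefault("log", []).append(name)   # record the chosen action
        return SUCCESS
    return tick

# Flee if a threat is nearby, otherwise wander.
tree = selector(
    sequence(condition("threat_nearby"), action("flee")),
    action("wander"),
)

state = {"threat_nearby": True}
tree(state)   # the character chooses to flee
```

Unlike a flat finite state machine, the tree composes reusable sub-behaviours hierarchically, which is why behaviour trees scale better as character logic grows.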

8.
Traditional methods for creating dynamic objects and characters from static drawings involve careful tweaking of animation curves and/or simulation parameters. Sprite sheets offer a more drawing‐centric solution, but they do not encode timing information or the logic that determines how objects should transition between poses and cannot generalize outside the given drawings. We present an approach for creating dynamic sprites that leverages sprite sheets while addressing these limitations. In our system, artists create a drawing, deform it to specify a small number of example poses, and indicate which poses can be interpolated. To make the object move, we design a procedural simulation to navigate the pose manifold in response to external or user‐controlled forces. Powerful artistic control is achieved by allowing the artist to specify both the pose manifold and how it is navigated, while physics is leveraged to provide timing and generality. We used our method to create sprites with a range of different dynamic properties. Copyright © 2014 John Wiley & Sons, Ltd.
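The idea of a procedural simulation navigating a pose manifold can be sketched in one dimension: a manifold coordinate `u` in [0, 1] blends between two artist-drawn example poses, and a damped spring returns `u` to the rest pose when the external force stops, so physics supplies the timing. The constants and the single-coordinate manifold are illustrative assumptions.

```python
def lerp_pose(pose_a, pose_b, u):
    """Interpolate between two example poses along the manifold."""
    return [a + u * (b - a) for a, b in zip(pose_a, pose_b)]

def step(u, v, force, rest=0.0, k=30.0, damping=6.0, dt=1 / 60):
    """One simulation step: a damped spring pulls u back toward rest."""
    accel = force - k * (u - rest) - damping * v
    v += accel * dt
    u = min(1.0, max(0.0, u + v * dt))   # stay on the pose manifold
    return u, v

REST_POSE, BENT_POSE = [0.0, 0.0, 0.0], [0.3, 1.2, -0.4]   # example poses
u, v = 0.0, 0.0
for _ in range(120):                     # push for two seconds...
    u, v = step(u, v, force=25.0)
pushed = lerp_pose(REST_POSE, BENT_POSE, u)
for _ in range(600):                     # ...then release and let it settle
    u, v = step(u, v, force=0.0)
settled = lerp_pose(REST_POSE, BENT_POSE, u)
```

The artist controls *what* the poses are and which are interpolable; the spring controls *when* the sprite moves between them, which is the division of labour the abstract describes.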

9.
Objective: Building behaviourally believable virtual characters makes serious games more engaging and improves the user experience. Although graphics rendering for serious games has steadily matured, existing virtual characters mostly express behaviour through deterministic models, which can hardly reflect the diversity of behavioural expression. Method: We construct a serious-game scenario suited to assisted social-skills training, model virtual characters as agents, and give them dual-channel visual and auditory perception. Based on Maslow's theory of motivation, emotions arise from motives such as food, rest, communication, and safety, and the Big Five (OCEAN) personality model describes individual differences between characters. Emotion intensity is computed from external stimuli and internal motivational needs, and character behaviour is described with behaviour trees. A normal cloud model handles the uncertainty in behavioural expression, with concrete treatments given for three typical behaviours: walking direction, social distance, and body orientation during conversation. Results: In the implemented game prototype, user-experience tests were run on the characters' autonomous behaviour and on the uncertainty of their behavioural expression. In a scene-exploration task, the autonomous behaviour model reduced the time users spent exploring the scene and encouraged users to communicate with the virtual characters; in the behaviour-expression test, our model was rated as more natural than the deterministic model. Conclusion: The proposed behaviour model improves the user experience to a certain extent and offers a promising new route to building behaviourally believable virtual characters.
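The normal cloud model mentioned above introduces second-order randomness via three parameters: expectation Ex, entropy En, and hyper-entropy He. A minimal sketch applied to one of the paper's examples, the walking direction; the numeric values are illustrative, not the paper's.

```python
import random

def cloud_drop(ex, en, he, rng):
    """Generate one cloud drop: the spread itself is randomized (by He)
    before sampling the value, giving non-deterministic variation around Ex."""
    en_i = rng.gauss(en, he)          # second-order randomness on the spread
    return rng.gauss(ex, abs(en_i))   # the drop: a concrete walking direction

rng = random.Random(42)
EX_DIRECTION = 90.0                   # intended walking direction (degrees)
drops = [cloud_drop(EX_DIRECTION, en=8.0, he=1.5, rng=rng) for _ in range(5000)]
mean = sum(drops) / len(drops)        # drops cluster around the intention
```

Compared with a deterministic model that always walks at exactly 90 degrees, each drop deviates plausibly, which is what makes the behaviour read as natural.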

10.
11.
Detailed animation of 3D articulated body models is in principle desirable but is also a highly resource‐intensive task. Resource limitations are particularly critical in 3D visualizations of multiple characters in real‐time game sequences. We investigated to what extent observers perceptually process the level of detail in naturalistic character animations. Only if such processing occurs would it be justified to spend valuable resources on richness of detail. An experiment was designed to test the effectiveness of 3D body animation. Observers had to judge the level of overall skill exhibited by four simulated soccer teams. The simulations were based on recorded RoboCup simulation league games. Thus objective skill levels were known from the teams' placement in the tournament. The animations' level of detail was varied in four increasing steps of modelling complexity. Results showed that observers failed to notice the differences in detail. Nonetheless, clear effects of character animation on perceived skill were found. We conclude that character animation co‐determines perceptual judgements even when observers are completely unaware of these manipulations. Copyright © 2000 John Wiley & Sons, Ltd.

12.
Creation of detailed character models is a very challenging task in animation production. Sketch‐based character model creation from a 3D template provides a promising solution. However, how to quickly find correct correspondences between the user's drawn sketches and the 3D template model, how to efficiently deform the 3D template model to exactly match those sketches, and how to realize real‐time interactive modeling remain open problems. In this paper, we propose a new approach and develop a user interface to effectively tackle this problem. Our approach uses the user's drawn sketches to retrieve the most similar 3D template model from our dataset, and combines human perception and interaction with the computer's highly efficient computation to extract occluding and silhouette contours of the 3D template model and find correct correspondences quickly. We then combine skeleton‐based deformation and mesh editing to deform the 3D template model to fit the user's drawn sketches and create new, detailed 3D character models. The results presented in this paper demonstrate the effectiveness and advantages of the proposed approach and the usefulness of the developed user interface. Copyright © 2015 John Wiley & Sons, Ltd.

13.
Rig‐space physics is a finite element method (FEM) based simulation technique that aims at adding secondary motion to a character while maintaining seamless cooperation with traditional animation pipelines. We enhance rig‐space physics by introducing several techniques, including general field interaction, proportional‐derivative control, and improved material control. These allow an animator to intervene in the simulation process in various ways and create richer animation effects. Moreover, we improve the numerical stability of the simulation algorithm by prepending a conjugate gradient procedure. Copyright © 2016 John Wiley & Sons, Ltd.

14.
To simplify the production pipeline of digital shadow-puppet animation, improve the interactivity of digital shadow-puppet works, and lower the difficulty of real-time shadow-puppet performance, we abstract and classify the common characters and motions of shadow puppetry and propose a generic skeleton model for each character type. Based on inverse kinematics, we design a script language for describing shadow-puppet skeletal animation. The script defines a character's basic motions by controlling the position and rotation attributes of bones, and realizes complex motions by layering basic ones. Because the script defines motions against the skeleton model rather than a specific character, it is reusable across characters. Experiments demonstrate the effectiveness of the script.
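The script idea above, basic motions defined as bone-level attribute changes against a generic skeleton and complex motions built by layering them, can be sketched as follows. The script format and bone names are illustrative assumptions, not the paper's syntax.

```python
# Basic motions are defined against generic bones, not a specific character.
BASIC_ACTIONS = {
    "raise_arm": {"upper_arm": {"rotate": 60.0}},
    "wave":      {"forearm":   {"rotate": 25.0}},
}

def apply_script(pose, action_names):
    """Overlay basic actions on a pose: changes to the same bone channel
    are summed, so layering basic motions yields a complex motion."""
    pose = {bone: dict(channels) for bone, channels in pose.items()}
    for name in action_names:
        for bone, channels in BASIC_ACTIONS[name].items():
            for channel, delta in channels.items():
                pose.setdefault(bone, {}).setdefault(channel, 0.0)
                pose[bone][channel] += delta
    return pose

rest = {"upper_arm": {"rotate": 0.0}, "forearm": {"rotate": 0.0}}
greeting = apply_script(rest, ["raise_arm", "wave"])   # a layered complex motion
```

Any puppet whose rig exposes the same generic bones can play the same script, which is the reusability the abstract claims.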

15.
16.
Today's computer animators have access to many systems and techniques to author high-quality motion. Unfortunately, available techniques typically produce a particular motion for a specific character. In this paper we present a constraint-based approach to adapt previously created motions to new situations and characters. We combine constraint methods that compute changes to motion to meet specified needs with motion signal processing methods that modify signals yet preserve desired properties of the original motion. The combination allows the adaptation of motions to meet new goals while retaining much of the motion's original quality. © 1998 John Wiley & Sons, Ltd.
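The combination described above can be sketched on a single motion curve: to satisfy a new constraint at one frame, add a smooth displacement that peaks at the constrained frame and falls off to zero, so the fine detail of the original signal is preserved. The cosine falloff is an illustrative choice, not the paper's exact signal-processing method.

```python
import math

def adapt(motion, frame, new_value, falloff=20):
    """Meet a constraint at `frame` by adding a smooth, local displacement."""
    delta = new_value - motion[frame]
    adapted = list(motion)
    for i in range(len(motion)):
        d = abs(i - frame)
        if d <= falloff:
            w = 0.5 * (1 + math.cos(math.pi * d / falloff))  # 1 at frame, 0 at edge
            adapted[i] += w * delta
    return adapted

motion = [math.sin(0.1 * i) for i in range(100)]   # original motion signal
adapted = adapt(motion, frame=50, new_value=1.5)   # hits the new constraint
```

The constraint is met exactly at the target frame while frames outside the falloff window are untouched, which is how adapted motion keeps the original's quality.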

17.
Video-Based Human Animation
Existing motion-capture-based animation generally suffers from high cost, and the performer's motion is constrained by the capture equipment. We propose a video-based human animation technique: human motion is first captured from video, then edited and retargeted to an animated character, producing animation clips that meet the animator's requirements. We study in depth the key techniques involved, namely motion capture, motion editing, and motion retargeting, and build on them a two-camera video-based human animation system (VBHA). Results from running the system demonstrate the feasibility of capturing motion from widely available, low-cost video and, after motion editing and retargeting, generating realistic animation.

18.
Physically based characters have not yet received wide adoption in the entertainment industry because control remains both difficult and unreliable. Even with the incorporation of motion capture for reference, which adds believability, characters fail to be convincing in their appearance when the control is not robust. To address these issues, we propose a simple Jacobian transpose torque controller that employs virtual actuators to create a fast and reasonable tracking system for motion capture. We combine this controller with a novel approach we call the topple‐free foot strategy which conservatively applies artificial torques to the standing foot to produce a character that is capable of performing with arbitrary robustness. The system is both easy to implement and straightforward for the animator to adjust to the desired robustness, by considering the trade‐off between physical realism and stability. We showcase the benefit of our system with a wide variety of example simulations, including energetic motions with multiple support contact changes, such as capoeira, as well as an extension that highlights the approach coupled with a Simbicon controlled walker. With this work, we aim to advance the state‐of‐the‐art in the practical design for physically based characters that can employ unaltered reference motion (e.g. motion capture data) and directly adapt it to a simulated environment without the need for optimization or inverse dynamics.
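A Jacobian-transpose controller of the kind named above can be sketched on a planar two-link arm: a virtual force pulls the end effector toward a target, and joint torques are computed as tau = J^T f. The link lengths, gains, and decoupled unit-inertia joints are illustrative simplifications, not the paper's character model.

```python
import math

L1 = L2 = 1.0   # link lengths of the toy arm

def forward(q1, q2):
    """End-effector position of the two-link arm."""
    return (L1 * math.cos(q1) + L2 * math.cos(q1 + q2),
            L1 * math.sin(q1) + L2 * math.sin(q1 + q2))

def jacobian_T(q1, q2):
    """Transpose of the end-effector Jacobian (rows are joints)."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-L1 * s1 - L2 * s12, L1 * c1 + L2 * c12],
            [-L2 * s12, L2 * c12]]

def track(target, q=(0.3, 0.3), kp=8.0, kd=4.0, dt=0.005, steps=4000):
    q1, q2 = q
    w1 = w2 = 0.0
    for _ in range(steps):
        x, y = forward(q1, q2)
        fx, fy = kp * (target[0] - x), kp * (target[1] - y)   # virtual force
        Jt = jacobian_T(q1, q2)
        t1 = Jt[0][0] * fx + Jt[0][1] * fy - kd * w1          # tau = J^T f
        t2 = Jt[1][0] * fx + Jt[1][1] * fy - kd * w2
        w1 += t1 * dt; w2 += t2 * dt                           # unit inertia
        q1 += w1 * dt; q2 += w2 * dt
    return forward(q1, q2)

x, y = track(target=(1.2, 0.8))   # end effector settles near the target
```

The appeal, as the abstract notes, is that no inverse dynamics or optimization is needed: the transpose of the Jacobian maps a task-space force to joint torques directly.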

19.
A person's emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: 'The face is the portrait of the mind; the eyes, its informers'. This presents a significant challenge for Computer Graphics researchers who generate artificial entities that aim to replicate the movement and appearance of the human eye, which is so important in human–human interactions. This review article provides an overview of the efforts made on tackling this demanding task. As with many topics in computer graphics, a cross‐disciplinary approach is required to fully understand the workings of the eye in the transmission of information to the user. We begin with a discussion of the movement of the eyeballs, eyelids and the head from a physiological perspective and how these movements can be modelled, rendered and animated in computer graphics applications. Furthermore, we present recent research from psychology and sociology that seeks to understand higher level behaviours, such as attention and eye gaze, during the expression of emotion or during conversation. We discuss how these findings are synthesized in computer graphics and can be utilized in the domains of Human–Robot Interaction and Human–Computer Interaction for allowing humans to interact with virtual agents and other artificial entities. We conclude with a summary of guidelines for animating the eye and head from the perspective of a character animator.

20.
Autonomous virtual characters (AVCs) are becoming more prevalent both for real‐time interaction and also as digital actors in film and TV production. AVCs require believable virtual human animations, accompanied by natural attention generation, and thus the software that controls the AVCs needs to model when and how to interact with the objects and other characters that exist in the virtual environment. This paper models automatic attention behaviour using a saliency model that generates plausible targets for combined gaze and head motions. The model was compared with the default behaviour of the Second Life (SL) system in an object observation scenario while it was compared with real actors' behaviour in a conversation scenario. Results from a study run within the SL system demonstrate a promising attention model that is not just believable and realistic but also adaptable to varying task, without any prior knowledge of the virtual scene. Copyright © 2011 John Wiley & Sons, Ltd.
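A saliency-driven gaze target of the kind described above can be sketched by scoring each scene object on simple features and directing the combined gaze/head motion at the winner. The feature set and weights are illustrative assumptions, not the paper's model.

```python
# Illustrative saliency features and weights (assumed, not from the paper).
WEIGHTS = {"motion": 0.5, "proximity": 0.3, "novelty": 0.2}

def saliency(obj):
    """Weighted sum of per-object features in [0, 1]."""
    return sum(WEIGHTS[f] * obj[f] for f in WEIGHTS)

def pick_gaze_target(objects):
    """The most salient object becomes the gaze/head target."""
    return max(objects, key=saliency)["name"]

scene = [
    {"name": "vase",   "motion": 0.0, "proximity": 0.9, "novelty": 0.1},
    {"name": "avatar", "motion": 0.8, "proximity": 0.5, "novelty": 0.6},
    {"name": "door",   "motion": 0.1, "proximity": 0.2, "novelty": 0.0},
]
target = pick_gaze_target(scene)   # the moving avatar wins attention
```

Because the scores are computed from whatever objects happen to be present, the same rule adapts to new scenes with no prior knowledge, which matches the claim in the abstract.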
