Found 18 similar documents; search time: 171 ms
1.
A Comprehensive Computable Emotion Modeling Method    Total citations: 2 (self-citations: 0, citations by others: 2)
As an important manifestation of human intelligence, emotion is an indispensable element in creating believable virtual agents, and how to use emotion to improve the intelligence and believability of virtual agents has become a key open problem. Combining the physiological and cognitive influences on emotion, this paper proposes a comprehensive computable emotion modeling method. It designs a complete process framework describing how emotion changes dynamically over time and how multiple mixed emotions are handled; builds an emotion structure based on concrete descriptions of events and emotion relations, so as to produce concrete, realistic emotional behavior; and proposes an interactive learning mechanism to strengthen the virtual agent's adaptability to dynamic environments. Case studies show that the emotion model can effectively increase the believability of virtual agents during interaction.
2.
As an important manifestation of human intelligence, emotion is an indispensable element in creating rich, nuanced virtual agents. Incorporating the cognitive influences on emotion, this paper proposes an improved emotion modeling method and a new behavior modeling method, designs concrete emotional scenarios, and manages behaviors in the virtual environment through the organizational form of a behavior tree, so as to produce concrete, realistic emotional behavior for virtual characters. A three-layer "personality-affect-emotion" hierarchy is proposed, and an experimental system is implemented. Experimental results show that the emotion model and behavior-organization model can effectively reflect the intelligent emotional behavior of virtual characters in a virtual environment.
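The abstract above organizes character behavior with a behavior tree. As a minimal, generic illustration of that technique (the node types are standard, but the example tree, the "happy/smile" rule, and the blackboard keys are assumptions, not the paper's design):

```python
SUCCESS, FAILURE = "success", "failure"

class Action:
    """Leaf node: runs a callable against the shared blackboard."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self, blackboard):
        return SUCCESS if self.fn(blackboard) else FAILURE

class Sequence:
    """Succeeds only if every child succeeds, evaluated in order."""
    def __init__(self, *children):
        self.children = children
    def tick(self, blackboard):
        for child in self.children:
            if child.tick(blackboard) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Succeeds on the first child that succeeds (priority fallback)."""
    def __init__(self, *children):
        self.children = children
    def tick(self, blackboard):
        for child in self.children:
            if child.tick(blackboard) == SUCCESS:
                return SUCCESS
        return FAILURE

# Hypothetical example: play a "smile" animation when the character is
# happy, otherwise fall back to "idle".
tree = Selector(
    Sequence(Action(lambda bb: bb["emotion"] == "happy"),
             Action(lambda bb: bb.setdefault("animation", "smile") or True)),
    Action(lambda bb: bb.setdefault("animation", "idle") or True),
)
```

Ticking the tree once per frame with the character's current emotional state on the blackboard is the usual way such a tree drives animation selection.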
3.
4.
5.
6.
7.
8.
This paper introduces a multi-agent-based open virtual training system for international business. A design example is used to derive a virtual training system model based on intelligent-agent technology. According to the different application logic, the functions of the multiple intelligent agents in the system are summarized and each function is analyzed in detail, and approaches to implementing Multi-Agent technology in the virtual training system are discussed.
9.
A mixed cooperative-competitive multi-agent system consists of controlled target agents and uncontrolled external agents. The target agents cooperate with one another and compete against the external agents, coping with dynamic changes in the environment and in the external agents so as to complete a given task. For the problem of training the target agents to obtain an optimal task-completing policy, existing work proceeds along two lines: (1) focusing only on cooperation among target agents, treating the external agents as part of the environment, and training the target agents with multi-agent reinforcement learning; this approach struggles when the external agents' policies are unknown or change dynamically; (2) focusing only on the competition between target agents and external agents, modeling the competition as a two-player game and training the target agents through self-play; this approach mainly addresses a single target agent versus a single external agent and is hard to extend to systems with multiple target and multiple external agents. Combining these two lines of research, this paper proposes a self-play method based on counterfactual regret advantage. Specifically, it first designs a counterfactual-regret-advantage policy-gradient method on the basis of counterfactual regret minimization and counterfactual multi-agent policy gradients, enabling the target agents to update their policies more accurately; it then introduces imitation learning, using the external agents' historical decision trajectories as demonstration data to imitate and explicitly model their behavior, so as to cope with dynamic changes of the external agents' policies during self-play; finally, on the basis of the counterfactual-regret-advantage policy gradient and the external-agent behavior model, ...
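The method above builds on counterfactual regret minimization. As a hedged, minimal illustration of the underlying regret-matching rule only (not the paper's counterfactual-regret-advantage method), here is regret matching for one player in rock-paper-scissors against a fixed opponent mixture; the opponent distribution and iteration count are arbitrary assumptions:

```python
def regret_matching(regrets):
    """Turn cumulative positive regrets into a strategy (uniform if none)."""
    positives = [max(r, 0.0) for r in regrets]
    total = sum(positives)
    n = len(regrets)
    return [p / total for p in positives] if total > 0 else [1.0 / n] * n

def payoff(a, b):
    # Rock=0, Paper=1, Scissors=2; +1 win, -1 loss, 0 tie.
    return 0.0 if a == b else (1.0 if (a - b) % 3 == 1 else -1.0)

def train(opponent=(0.4, 0.4, 0.2), iters=5000):
    """Accumulate regrets against a static opponent; return the average strategy."""
    regrets = [0.0, 0.0, 0.0]
    strategy_sum = [0.0, 0.0, 0.0]
    for _ in range(iters):
        strat = regret_matching(regrets)
        # Expected utility of each pure action versus the opponent's mixture.
        util = [sum(opponent[b] * payoff(a, b) for b in range(3))
                for a in range(3)]
        ev = sum(strat[a] * util[a] for a in range(3))
        for a in range(3):
            regrets[a] += util[a] - ev        # accumulate regret per action
            strategy_sum[a] += strat[a]
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]  # time-averaged strategy
```

Against the rock-heavy mixture above, the average strategy concentrates on Paper, the best response; CFR-style methods extend this same regret bookkeeping to sequential games with information sets.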
10.
11.
Emotional experience can effectively raise users' interest in virtual reality systems, and emotion design for virtual humans is becoming a core technology in building virtual environments; current virtual-human emotion models are still at an early stage. This paper surveys research on emotion models and discusses their open problems. Drawing on related emotion-model research and results from cognitive science, a new approach to building virtual-human emotion models is proposed, whose goal is to improve the efficiency of virtual-human emotion design: the virtual human's emotional state is controlled by the emotion model, and the virtual human can possess perception, motivation, emotion, and personality and display appropriate autonomous emotion in the virtual environment. Emotion-design software can integrate soft-computing theory and human-computer interaction techniques, providing an efficient tool for building humanized graphical interfaces.
12.
An Emotion Computing Model for Interactive Virtual Humans Based on Markov Decision Processes    Total citations: 1 (self-citations: 0, citations by others: 1)
Emotion plays a key role in organisms' communication and adaptability, and interactive virtual humans likewise need the ability to express emotion appropriately. Because virtual humans with emotional interaction capabilities have broad application prospects in virtual reality, e-learning, entertainment, and other fields, research on adding an emotional component to virtual humans is receiving growing attention. This paper proposes an artificial-psychology emotion computing model: a Markov process describes how emotion evolves over time, and a Markov decision process links emotion, personality, and environment. The model is applied in an interactive virtual-human system. The results show that the model can construct virtual humans with different personality traits and produce fairly natural emotional responses.
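The abstract above models emotion dynamics as a Markov process. A minimal sketch of that idea, assuming a small discrete emotion set and an illustrative "calm personality" transition matrix (the states and probabilities are not the paper's values):

```python
import random

EMOTIONS = ["neutral", "happy", "sad", "angry"]

# Each row gives transition probabilities from the current emotion.
# A "calm" personality decays quickly back toward neutral.
CALM = {
    "neutral": [0.85, 0.05, 0.05, 0.05],
    "happy":   [0.60, 0.35, 0.03, 0.02],
    "sad":     [0.60, 0.05, 0.30, 0.05],
    "angry":   [0.65, 0.05, 0.05, 0.25],
}

def step(state, matrix, rng):
    """Sample the next emotional state from the current state's row."""
    return rng.choices(EMOTIONS, weights=matrix[state], k=1)[0]

def simulate(matrix, steps=10, seed=0):
    """Run the emotion chain for `steps` transitions, starting from neutral."""
    rng = random.Random(seed)
    state, trace = "neutral", []
    for _ in range(steps):
        state = step(state, matrix, rng)
        trace.append(state)
    return trace
```

Personality is encoded purely in the transition matrix: an "irritable" personality would simply raise the probabilities of entering and staying in "angry", which is how one chain yields differently tempered characters.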
13.
Ziqi Tu Dongdong Weng Bin Liang Le Luo 《Journal of the Society for Information Display》2022,30(8):609-620
In this paper, a flexible deep learning-based framework is proposed that can extract expression and identity information from monocular images and can combine the extracted identity and expression from different images to generate new face models. In this framework, two encoders are used to extract expression and identity information, and three decoders are used to visualize the information by generating face models containing only expression, only identity, and fused expression and identity. By aligning the corresponding vertices of the parts with the same semantics on the face, an error evaluation method between face models with different topologies is proposed, which can more intuitively reflect the error distribution. The experimental results show that the proposed framework has higher accuracy than face component extraction by blendshape. The framework can be used for the facial expression generation of virtual humans, which is helpful for emotion transmission and language supplementation.
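A toy structural sketch of the two-encoder / fused-decoder pipeline described above; the real framework uses trained deep networks, whereas the tiny hand-picked linear maps and 4-dimensional "images" below are purely illustrative assumptions:

```python
def matvec(W, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

# Placeholder "encoders": project a 4-dim image code to 2-dim latents.
W_IDENTITY   = [[1, 0, 0, 0], [0, 1, 0, 0]]
W_EXPRESSION = [[0, 0, 1, 0], [0, 0, 0, 1]]
# Placeholder "fused decoder": rebuild a face code from concatenated latents.
W_DECODER = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def encode(image):
    """Return (identity_code, expression_code) for one image."""
    return matvec(W_IDENTITY, image), matvec(W_EXPRESSION, image)

def decode_fused(identity_code, expression_code):
    """Generate a face carrying one image's identity and another's expression."""
    return matvec(W_DECODER, identity_code + expression_code)

face_a = [1.0, 2.0, 3.0, 4.0]   # provides identity
face_b = [5.0, 6.0, 7.0, 8.0]   # provides expression
id_a, _ = encode(face_a)
_, exp_b = encode(face_b)
new_face = decode_fused(id_a, exp_b)  # identity of A, expression of B
```

The point of the structure is the recombination step: because identity and expression live in separate latent codes, codes extracted from different source images can be mixed freely before decoding.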
14.
15.
Virtual humans: thirty years of research, what next?    Total citations: 2 (self-citations: 0, citations by others: 2)
In this paper, we present research results and future challenges in creating realistic and believable Virtual Humans. Real-time realistic representation is essential to these modeling goals, but we also need interactive and perceptive Virtual Humans to populate the Virtual Worlds. Three levels of modeling should be considered to create believable Virtual Humans: 1) realistic appearance modeling; 2) realistic, smooth, and flexible motion modeling; and 3) realistic high-level behavior modeling. First, the issues of creating virtual humans with better skeletons and realistic deformable bodies are illustrated. At the behavioral level, the challenges lie in generating, on the fly, flexible motion and complex behaviors of Virtual Humans inside their environments using a realistic perception of the environment. Interactivity and group behaviors are also important for believable Virtual Humans, posing challenges in creating believable relationships between real and virtual humans based on emotion and personality, and in simulating realistic and believable behaviors of groups and crowds. Finally, issues in generating realistically clothed and haired virtual people are presented.
16.
In recent years, virtual-human behavior animation has become a new branch of computer animation. Previous research mostly focused on animating local emotional expression of virtual humans, such as facial animation, without considering, for a given virtual scene, the causes of emotion. Emotion results from the interaction between a virtual human and its virtual environment, yet in the computer-animation field the emotions of virtual humans have not been clearly described. Based on psychological theory, this paper proposes an animation model for virtual humans' emotional behavior. First, the concepts of an emotion set and an emotion-expression set are introduced, and a mapping from emotional states to emotional expressions is established. Second, the causes of emotion are analyzed and the concept of an emotion source is introduced: an emotion arises when the intensity of an emotional stimulus exceeds that emotion's resistance intensity. Furthermore, emotional states can be described with a finite state machine, from which an emotion-transition process is derived. Finally, the emotional behavior animation of a virtual human is implemented on a PC by calling the Microsoft Direct3D API.
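The stimulus-versus-resistance rule and the finite-state-machine description above can be sketched together; the emotion names and threshold values here are illustrative assumptions, not the paper's parameters:

```python
class EmotionFSM:
    """Finite state machine whose transitions fire only when a stimulus's
    intensity exceeds the target emotion's resistance threshold."""

    def __init__(self, resistance):
        self.resistance = resistance  # emotion -> resistance intensity
        self.state = "neutral"

    def stimulate(self, emotion, intensity):
        """Apply a stimulus; transition only if it overcomes resistance."""
        if intensity > self.resistance.get(emotion, float("inf")):
            self.state = emotion
        return self.state

# Hypothetical character: easily amused, hard to frighten.
fsm = EmotionFSM({"joy": 0.4, "fear": 0.7})
```

Each accepted transition would then index into the emotion-expression mapping the abstract describes, selecting the animation to play for the new state.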
17.
As technology advances, robots and virtual agents will be introduced into home and healthcare settings to assist individuals, both young and old, with everyday living tasks. Understanding how users recognize an agent's social cues is therefore imperative, especially in social interactions. Facial expression, in particular, is one of the most common non-verbal cues used to display and communicate emotion in on-screen agents (Cassell et al., 2000). Age is important to consider because age-related differences in emotion recognition of human facial expression have been supported (Ruffman et al., 2008), with older adults showing a deficit in recognition of negative facial expressions. Previous work has shown that younger adults can effectively recognize facial emotions displayed by agents (Bartneck and Reichenbach, 2005, Courgeon et al., 2009, Courgeon et al., 2011, Breazeal, 2003); however, little research has compared in depth younger and older adults' ability to label a virtual agent's facial emotions, an important consideration because social agents will be required to interact with users of varying ages. If such age-related differences exist for recognition of virtual agent facial expressions, we aim to understand whether those differences are influenced by the intensity of the emotion, the dynamic formation of the emotion (i.e., a neutral expression developing into an expression of emotion through motion), or the type of virtual character, varying in human-likeness. Study 1 investigated the relationship between age-related differences, the implication of dynamic formation of emotion, and the role of emotion intensity in recognition of the facial expressions of a virtual agent (iCat). Study 2 examined age-related differences in recognition of emotion expressed by three types of virtual characters differing in human-likeness (non-humanoid iCat, synthetic human, and human).
Study 2 also investigated the role of configural and featural processing as a possible explanation for age-related differences in emotion recognition. First, our findings show age-related differences in the recognition of emotions expressed by a virtual agent, with older adults showing lower recognition of anger, disgust, fear, happiness, sadness, and neutral expressions. These age-related differences might be explained by older adults having difficulty discriminating similarity in the configural arrangement of facial features for certain emotions; for example, older adults often mislabeled the similar emotion of fear as surprise. Second, our results did not provide evidence that dynamic formation improves emotion recognition, but, in general, greater emotion intensity improved recognition. Lastly, we learned that emotion recognition, for older and younger adults, differed by character type, from best to worst: human, synthetic human, and then iCat. Our findings provide guidance for design, as well as the development of a framework of age-related differences in emotion recognition.
18.
Magalie Ochs David Sadek Catherine Pelachaud 《Autonomous Agents and Multi-Agent Systems》2012,24(3):410-440
Recent research has shown that virtual agents expressing empathic emotions toward users have the potential to enhance human–machine interaction. To provide empathic capabilities to a rational dialog agent, we propose a formal model of emotions based on an empirical and theoretical analysis of the users' conditions of emotion elicitation. The emotions are represented by particular mental states of the agent, composed of beliefs, uncertainties, and intentions. This semantically grounded formal representation enables a rational dialog agent to identify, from a dialogical situation, the empathic emotion that it should express. An implementation and an evaluation of an empathic rational dialog agent have enabled us to validate the proposed model of empathy.