Similar Documents
19 similar documents found.
1.
Affective Modeling Based on Emotion Psychology   (Cited: 1; self-citations: 0, other citations: 1)
杨国亮  任金霞  王志良 《计算机工程》2007,33(22):209-211,225
Based on the basic theories of emotion psychology, this paper defines a personality space, an emotion space, and a mood space; establishes mappings from personality to mood and from mood to emotion; gives update equations for mood and emotion states; and proposes an affective computing model that plausibly reflects the dynamics of human emotional change. Experiments show that the model reasonably reproduces the fluctuation of mood and emotion states of individuals with different personalities under external stimuli, providing a new mechanism for the affective decision-making of emotional robots.
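The mood-update idea described in this abstract can be sketched as a simple discrete-time rule in which mood decays toward a personality-dependent baseline and is perturbed by external stimuli. All function names and constants below are illustrative assumptions, not the paper's actual equations.

```python
# Hypothetical sketch of a mood-state update: decay toward a
# personality-dependent baseline plus a stimulus term. The decay rate,
# gain, and baseline values are invented for illustration.

def update_mood(mood, baseline, stimulus, decay=0.8, gain=0.5):
    """One discrete-time mood update step."""
    return decay * mood + (1 - decay) * baseline + gain * stimulus

# An extroverted personality might imply a mildly positive baseline mood.
mood = 0.0
for s in [0.6, 0.0, -0.4]:   # a short sequence of external stimuli
    mood = update_mood(mood, baseline=0.2, stimulus=s)
print(round(mood, 4))        # → 0.0896
```

Under this sketch, the stimulus effect fades over time while the personality baseline persists, which matches the paper's claim that different personalities show different fluctuation patterns under the same stimuli.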

2.
朱飒飒  王巍 《现代计算机》2010,(5):72-74,84
This paper proposes an artificial emotion model for a virtual human and maps the emotion vectors output by the model to the feature points of facial expression change. A face model is generated with 3DS MAX, its expression-related feature points are defined, and the model is loaded into OpenGL and controlled through Visual C++. Simulation experiments show that the artificial emotion model can drive facial expressions in real time. Combining the emotion model with expression rendering makes the computer more intelligent, friendly, and capable, enabling better interaction between humans and computers.

3.
Research on an Emotion Model for Emotional Robots   (Cited: 2; self-citations: 0, other citations: 2)
This paper builds an emotion model that imitates human emotion. A three-dimensional emotion space is constructed within the model, a Markov process is used to describe the transition of emotional states, and a personality matrix is proposed to compare and analyse the reactions of people with different personalities to the same stimulus. D-S evidence theory is applied to fuse emotional information from vision, speech, and other channels, and the emotion-transition behaviour when two factors act simultaneously is analysed. Finally, the emotion model is applied to an emotional robot system, enabling the robot to generate emotions in response to external stimuli and display the corresponding facial expressions. Experimental results demonstrate the effectiveness of the model.
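The combination of a Markov transition process with a personality matrix can be sketched as follows. The matrices and emotion labels below are invented for illustration and are not the paper's actual parameters.

```python
# Illustrative sketch: a Markov emotion model whose transition matrix is
# modulated element-wise by a "personality matrix", so different
# personalities respond differently to the same stimulus.

EMOTIONS = ["calm", "happy", "sad", "angry"]

# Base transition probabilities under some stimulus (rows sum to 1).
BASE = [
    [0.5, 0.4, 0.05, 0.05],
    [0.2, 0.7, 0.05, 0.05],
    [0.3, 0.3, 0.3, 0.1],
    [0.3, 0.2, 0.1, 0.4],
]

def transition(state, personality):
    """Weight the base row by the personality row, renormalise,
    and return the most probable next emotion index."""
    row = [b * p for b, p in zip(BASE[state], personality[state])]
    total = sum(row)
    probs = [r / total for r in row]
    return max(range(len(probs)), key=probs.__getitem__)

# An "irritable" personality amplifies transitions into anger.
irritable = [[1, 1, 1, 12]] * 4
print(EMOTIONS[transition(EMOTIONS.index("calm"), irritable)])  # → angry
```

A calm, even-tempered personality (all weights equal) would instead stay "calm" under the same base matrix, which is the contrast the paper's personality matrix is meant to capture.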

4.
Research on Harmonious Human-Machine Interaction Based on a BBN Emotion Model   (Cited: 1; self-citations: 0, other citations: 1)
This paper proposes a layered representation of personality, emotion, and mood, models affect with a Bayesian network, and reflects mood changes through the facial expressions of a virtual face, aiming to endow machines with human-like emotion and achieve more natural and harmonious human-machine interaction. The emotion model was applied to an affective virtual-human interaction system; experiments show that the model is simple, stable, and easy to implement.
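The layered personality/mood/emotion idea can be sketched as a tiny Bayesian chain in which personality influences mood and mood influences the expressed emotion. The network structure and all probabilities below are invented for illustration, not the paper's model.

```python
# A minimal sketch of a layered Bayesian model:
# P(expression | personality) = sum_m P(expression | mood=m) P(mood=m | personality)

P_MOOD_GIVEN_PERS = {"extrovert": {"good": 0.7, "bad": 0.3},
                     "introvert": {"good": 0.4, "bad": 0.6}}
P_EXPR_GIVEN_MOOD = {"good": {"smile": 0.8, "frown": 0.2},
                     "bad":  {"smile": 0.2, "frown": 0.8}}

def p_expression(personality, expression):
    """Marginalise out the hidden mood layer."""
    return sum(P_EXPR_GIVEN_MOOD[m][expression] * p
               for m, p in P_MOOD_GIVEN_PERS[personality].items())

print(round(p_expression("extrovert", "smile"), 4))  # → 0.62
```

A full BBN would add more nodes and learned conditional tables, but the inference pattern — summing over hidden layers — is the same.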

5.
Emotion modeling is a central problem in research on emotional robots. Grounded in emotion psychology, this paper simulates how the emotions of robots with different personalities evolve dynamically under external stimuli and studies the influence of personality and external stimuli on the emotion-transition process. A state-space emotion model describes the robot's emotional state, and an HMM process simulates the transitions between states. However, the HMM process yields only the probabilities of the current emotional states; to obtain a concrete emotional state, the paper proposes an emotion-transition model based on maximum-similarity matching between the state space and the probability space. The HMM process first computes the current emotion probabilities, and maximum-similarity matching then determines the concrete post-transition emotional state. By tuning the model parameters to simulate different personalities and external stimuli, the model effectively simulates the evolution of emotional states. Experimental results verify that the simulated emotional dynamics accord with the general laws of human emotional change.
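The two-stage procedure described here — an HMM forward step followed by maximum-similarity matching to a concrete state — can be sketched as follows. The state set, matrices, and the choice of cosine similarity are illustrative assumptions, not the paper's parameters.

```python
# Sketch: (1) one HMM forward update gives a probability over emotion
# states; (2) the concrete state is chosen by maximum similarity between
# that probability vector and per-state prototype vectors.

import math

STATES = ["happy", "calm", "sad"]
TRANS = [[0.6, 0.3, 0.1], [0.2, 0.6, 0.2], [0.1, 0.3, 0.6]]
EMIT = {"smile": [0.7, 0.25, 0.05], "frown": [0.05, 0.25, 0.7]}

def forward_step(belief, observation):
    """Predict via TRANS, weight by the emission probability, renormalise."""
    predicted = [sum(belief[i] * TRANS[i][j] for i in range(3)) for j in range(3)]
    unnorm = [p * e for p, e in zip(predicted, EMIT[observation])]
    z = sum(unnorm)
    return [u / z for u in unnorm]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Prototype vectors: each state represented as a "pure" distribution.
PROTOS = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

belief = forward_step([1 / 3, 1 / 3, 1 / 3], "smile")
state = max(range(3), key=lambda k: cosine(belief, PROTOS[k]))
print(STATES[state])  # → happy
```

With pure one-hot prototypes, maximum cosine similarity reduces to picking the most probable state; richer prototype vectors would make the matching step non-trivial, which is presumably where the paper's model adds value.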

6.
Based on basic emotion theory, a probability-space model of the emotional states of a home service robot is established, and an affective computing model based on a hidden Markov model is built using the properties of Markov chains. The meaning and estimation method of each parameter in the model are described in detail. Simulation experiments verify that the model can effectively simulate both the spontaneous transition of emotional states and transitions driven by external stimuli. Analysis of the experimental data shows that, whether driven by external stimuli or evolving spontaneously, the robot's emotion eventually converges to a stable state; this stable state depends on the emotion-transition probability matrix but not on the robot's initial emotional state.
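The stability property noted at the end of this abstract — convergence to a stationary distribution that is independent of the initial emotional state — can be demonstrated directly by iterating a transition matrix. The matrix below is invented for illustration.

```python
# Iterating an emotion-transition matrix drives any initial emotion
# distribution to the same stationary distribution (for a regular chain).

P = [[0.7, 0.2, 0.1],
     [0.3, 0.5, 0.2],
     [0.2, 0.3, 0.5]]

def step(dist):
    """One Markov step: dist' = dist @ P."""
    return [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]

def evolve(dist, n=200):
    for _ in range(n):
        dist = step(dist)
    return dist

a = evolve([1.0, 0.0, 0.0])   # starts in the first emotional state
b = evolve([0.0, 0.0, 1.0])   # starts in the third
print(all(abs(x - y) < 1e-9 for x, y in zip(a, b)))  # → True
```

This is the standard ergodic-chain result the abstract invokes: the stationary distribution is a property of the transition matrix alone.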

7.
Emotional Expression of a 3D Virtual Teacher Based on Body Language   (Cited: 2; self-citations: 0, other citations: 2)
赵慧勤  孙波  胡晓雁  谢彬  田燕琴 《计算机工程》2011,37(23):159-161,164
Virtual teachers generally express emotion only through facial expressions. To address this, a mapping mechanism is established between high-level psychological parameters such as emotion and personality and the low-level animation parameters of the virtual teacher's face and body geometry, and a behaviour model for emotional expression through body language is constructed. An atomic-action library is built for the different body parts used to express emotion, an emotional-expression generation strategy is designed with a Bayesian network, and rich, lifelike body-language expression is realised with scripting techniques. Experimental results on OpenSim demonstrate the practicality of the model.

8.
A Humanoid Head Robot Based on Affective Interaction   (Cited: 4; self-citations: 0, other citations: 4)
The goal of this work is to design a robot that can interact with people and assist them in daily life and common settings. To accomplish these tasks, the robot must display emotions in a friendly way and exhibit an amiable character and personality. Drawing on bionics, a humanoid head robot was developed and a behaviour decision model for it was established. The robot realises the six basic human facial expressions and supports face detection, speech emotion recognition and synthesis, and affective behaviour decision-making, enabling effective emotional interaction with people through machine vision, voice interaction, and emotional expression.

9.
This paper introduces a new approach to emotion modeling based on grey system theory. The main factors that influence the generation of a robot's emotions are first abstracted and the MAS architecture of the emotional robot system is given; the emotion model is then constructed and implemented on an existing robot platform. Experiments show that the GM(1, N)-based emotion model imitates the generation of simple human emotions well. Since emotional robots belong to the category of personal robots, the emotion model is designed to reflect individual personality.

10.
This paper describes a single-video-driven method for 3D face models. Building on the traditional muscle model, a motion-control model of the mouth is established using mechanism kinematics according to the mouth's motion characteristics. The motion curves of feature points are obtained by tracking a video image sequence and then drive the mesh vertices of the eyes, mouth, and the rest of the face, producing realistic facial expressions. Simulation results show that the method yields lifelike facial-expression animation.

11.
This study develops a face robot with a human-like appearance that makes facial expressions similar to those of a specific subject. First, an active drive points (ADPs) model is proposed for building a robotic face with fewer active degrees of freedom for bipedal humanoid robots. A robotic face design method is then proposed, giving the robot a facial appearance and expressions similar to those of a human subject. A similarity evaluation method is presented to assess the similarity of facial expressions between the robot and a specific human subject. Finally, the proposed facial model and design methods are verified and implemented on a humanoid robot platform.

12.
Recently, several different types of telepresence robots have been developed, and their ability to communicate with a person in a remote location has been analyzed with the goal of achieving rich communication that conveys the presence of an operator. These studies focused on human-sized robots, small non-humanoid robots, or small humanoid robots without human-like proportions. A small humanoid robot with human-like proportions had not been studied because no such robot existed, yet human-like proportions play a very important role in enhancing the presence of an operator through an avatar. In our previous work, a small humanoid robot named MH-2 was developed for a wearable telepresence system. The MH-2 consists of seven-degrees-of-freedom (7-DOF) arms, a 3-DOF head, and a torso scaled according to human-like proportions; wire-pulley mechanisms were employed to achieve a compact design that meets the wearability requirement. In this work, a telepresence system is introduced to evaluate the communication ability of the MH-2. In the experiment, Skype on a flat display was compared with the MH-2, using the "Indian Poker" game: it is an ideal evaluation platform because it is a psychological game that requires careful observation among the players. The evaluation covered six aspects: emotion or personality, line of sight, familiarity, presence, enjoyment, and smoothness of game progress. The experimental results showed that the MH-2 performed positively in two of the six aspects, namely line of sight and smoothness of game progress. Facial expressions, on the other hand, play an important role in conveying individual presence. From a comprehensive standpoint, the MH-2 demonstrated good capabilities for rich remote communication.

13.
A robot's face is its symbolic feature, and facial expressions are the most effective way for it to interact with people through emotional information; a robot's facial expressions therefore play an important role in human-robot emotional interaction. This paper proposes general rules for the design and realization of expressions when developing mascot-type facial robots, which are built to evoke friendly feelings in people. The number and type of control points for the six basic expressions or emotions were determined through a questionnaire. A linear affect-expression space model is provided to realize continuous and varied expressions effectively, and the effectiveness of the proposed method is shown through experiments with a simulator and an actual robot system.
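The "linear affect-expression space" idea can be sketched as a linear blend of per-emotion control-point displacements. The control points, displacement values, and emotion labels below are invented for illustration, not the paper's questionnaire-derived parameters.

```python
# Sketch: each basic emotion has a displacement vector for the face's
# control points, and a mixed affect state is realised as a linear blend.

# Per-emotion displacements for 3 control points, each (dx, dy).
BASIS = {
    "joy":     [(0.0, 0.5), (0.3, 0.4), (-0.3, 0.4)],    # mouth corners up
    "sadness": [(0.0, -0.4), (-0.2, -0.3), (0.2, -0.3)], # mouth corners down
}

def blend(weights):
    """Weighted sum of basis displacements → control-point offsets."""
    n = len(next(iter(BASIS.values())))
    out = [(0.0, 0.0)] * n
    for emotion, w in weights.items():
        out = [(x + w * dx, y + w * dy)
               for (x, y), (dx, dy) in zip(out, BASIS[emotion])]
    return out

# A mostly-happy, slightly-sad expression:
print(blend({"joy": 0.8, "sadness": 0.2}))
```

Because the mapping is linear, intermediate affect states interpolate smoothly between the basic expressions, which is what makes such a model attractive for continuous expression control.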

14.
For social robots to respond to humans in an appropriate manner, they need to use apt affect displays, revealing underlying emotional intelligence. We present an artificial emotional intelligence system for robots, with both a generative and a perceptual aspect. On the generative side, we explore the expressive capabilities of an abstract, faceless, creature-like robot with very few degrees of freedom, lacking both facial expressions and the complex humanoid design often found in emotionally expressive robots. We validate our system in a series of experiments: in one study, we find a classification advantage for animated vs. static affect expressions, and advantages in valence and arousal estimation and personal-preference ratings for both animated vs. static and physical vs. on-screen expressions. In a second experiment, we show that our parametrically generated expression variables correlate with the intended user affect perception. Combining the generative system with a perceptual component for natural-language sentiment analysis, we show in a third experiment that our automatically generated affect responses cause participants to show signs of increased engagement and enjoyment compared with arbitrarily chosen comparable motion parameters.

15.
16.
CEOs of large companies may travel frequently to convey their philosophies and policies to employees working at branches worldwide. Video technology makes it easy to deliver their lectures anywhere and at any time; however, 2-dimensional video systems lack realism. If natural, realistic lectures could be given through humanoid robots, CEOs would not need to meet employees in person, saving travel time and money. We propose a substitute robot for a remote person: a humanoid robot that can reproduce a lecturer's facial expressions and body movements, transporting the lecturer anywhere in the world instantaneously with the feeling of being at a live performance. The development involves two major tasks: facial expression recognition/reproduction and body-language reproduction. For the former, we proposed a facial expression recognition method based on a neural network model, recognising five emotions (surprise, anger, sadness, happiness, and no emotion) in real time. We also developed a facial robot to reproduce the recognised emotion on the robot's face; experiments showed that the robot could reproduce a speaker's emotions with its face. For the latter, we proposed a degradation control method to reproduce the lecturer's natural movement even when a robot rotary joint fails. As the foundational stage of this sub-system, we proposed a control method for the front-view movement model, i.e. a 2-dimensional model.

17.
Automatic perception of human affective behaviour from facial expressions, together with recognition of intentions and social goals from dialogue context, would greatly enhance natural human-robot interaction. This research concentrates on intelligent neural-network-based facial emotion recognition and Latent Semantic Analysis based topic detection for a humanoid robot. The work first incorporates the Facial Action Coding System, which describes the physical cues and anatomical knowledge of facial behaviour, to detect neutral and the six basic emotions from real-time posed facial expressions. Feedforward neural networks (NNs) implement upper- and lower-face Action Unit (AU) analysers that recognise six upper and 11 lower facial actions, including Inner and Outer Brow Raiser, Lid Tightener, Lip Corner Puller, Upper Lip Raiser, Nose Wrinkler, and Mouth Stretch. A neural-network-based facial emotion recogniser then takes the 17 derived Action Units as inputs to decode neutral and the six basic emotions from facial expressions. Moreover, so that the robot can respond appropriately to the detected affective facial behaviours, Latent Semantic Analysis is used to focus on the underlying semantic structure of the data and, going beyond surface linguistic restrictions, to identify topics embedded in users' conversations. The overall system is integrated with a modern humanoid robot platform under its Linux C++ SDKs. The work presented here shows great potential for developing personalised intelligent agents/robots with emotional and social intelligence.
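The final stage of the pipeline described here — mapping Action Unit activations to emotion scores — can be sketched with a toy classifier. The AU set, weights, and emotion labels below are hand-picked for the demo; the paper's system learns its networks from data and uses 17 AUs and seven emotion classes.

```python
# Toy illustration of an AU → emotion decoder: a linear layer followed by
# a softmax. All weights here are invented, not trained values.

import math

AUS = ["AU1_inner_brow_raiser", "AU12_lip_corner_puller", "AU4_brow_lowerer"]
EMOTIONS = ["happiness", "anger", "neutral"]

# W[e][a]: contribution of AU a to emotion e.
W = [[0.1, 2.0, -1.0],   # happiness: driven by the lip corner puller
     [0.5, -1.0, 2.0],   # anger: driven by the brow lowerer
     [0.0, 0.0, 0.0]]    # neutral: baseline

def classify(au_activations):
    """Linear scores → softmax probabilities → most likely emotion."""
    scores = [sum(w * a for w, a in zip(row, au_activations)) for row in W]
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    probs = [e / z for e in exps]
    return EMOTIONS[max(range(3), key=probs.__getitem__)]

print(classify([0.1, 0.9, 0.0]))   # strong lip-corner pull → happiness
```

A real implementation would insert a trained hidden layer between the AU activations and the emotion scores, but the decision structure is the same.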

18.
The design of humanoid robots' emotional behaviors has attracted many scholars' attention. However, users' emotional responses to humanoid robots' emotional behaviors, which differ from robots' traditional behaviors, are not yet well understood. This study investigates the effect of a humanoid robot's emotional behaviors on users' emotional responses using subjective reporting, pupillometry, and electroencephalography. Five categories of the humanoid robot's emotional behaviors, expressing joy, fear, neutrality, sadness, or anger, were designed, selected, and presented to users. Results show that users have a significant positive emotional response to the robot's joy behavior and a significant negative emotional response to its sadness behavior, as indicated by reported valence and arousal, pupil diameter, frontal-midline relative theta power, and the frontal alpha asymmetry score. The results suggest that a humanoid robot's emotional behaviors can evoke significant emotional responses in users, and that this evocation may relate to the recognition of the behaviors. In addition, the study provides a multimodal physiological method for evaluating users' emotional responses to a humanoid robot's emotional behaviors.
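One of the EEG metrics this study reports, the frontal alpha asymmetry (FAA) score, is conventionally computed as the difference of log alpha-band power between the right and left frontal electrodes. The power values below are invented; only the formula is standard.

```python
# Frontal alpha asymmetry: FAA = ln(right alpha power) - ln(left alpha power).
# Positive values are commonly read as relatively greater left-frontal
# activity (alpha power is inversely related to cortical activity).

import math

def faa(alpha_left, alpha_right):
    return math.log(alpha_right) - math.log(alpha_left)

print(round(faa(4.0, 5.0), 4))  # → 0.2231
```

Such a score, alongside pupil diameter and theta power, gives the multimodal physiological readout the abstract describes.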

19.
Conventional humanoid robot behaviors are programmed directly from the programmer's personal experience, and the resulting behaviors usually appear unnatural. A humanoid robot could instead acquire new adaptive behaviors from a human, if the robot had the criteria underlying such behaviors. The aim of this paper is to establish a method of acquiring human behavioral criteria. The advantage of acquiring behavioral criteria is that humanoid robots can then autonomously produce behaviors for similar tasks with the same behavioral criteria, without transforming data obtained from morphologically different humans every time for every task. In this paper, a manipulator robot learns a model behavior, and another robot is created to perform the model behavior in place of a person. The model robot is given certain behavioral criteria, while the learning manipulator robot does not know them and tries to infer them. In addition, to reflect the difference between human and robot bodies, the body sizes of the learning robot and the model robot are made different. Behavioral criteria are obtained by comparing the efficiencies with which the learning robot learns the model behaviors. Simulation results demonstrate that the proposed method is effective for obtaining behavioral criteria. The proposed method, the details of the simulation, and the results are presented in this paper.

