Similar Documents
A total of 20 similar documents were found (search time: 31 ms).
1.
2.
CEOs of big companies may travel frequently to convey their philosophies and policies to employees working at branches worldwide. Video technology makes it possible to deliver their lectures anywhere and anytime in the world very easily. However, 2-dimensional video systems lack realism. If natural, realistic lectures could be given through humanoid robots, CEOs would not need to meet the employees in person, saving the time and money spent on travel. We propose a substitute robot for a remote person. The substitute robot is a humanoid robot that can reproduce the lecturer's facial expressions and body movements, and that can send the lecturer anywhere in the world instantaneously with the feeling of being at a live performance. There are two major tasks in the development: facial expression recognition/reproduction and body language reproduction. For the former task, we proposed a facial expression recognition method based on a neural network model. We recognized five emotions (surprise, anger, sadness, happiness and no emotion) in real time. We also developed a facial robot to reproduce the recognized emotion on the robot's face. Through experiments, we showed that the robot could reproduce the speakers' emotions with its face. For the latter task, we proposed a degradation control method to reproduce the natural movement of the lecturer even when a robot rotary joint fails. As the fundamental stage of our research on this sub-system, we proposed a control method for the front-view movement model, i.e., a 2-dimensional model.
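As a rough, hypothetical sketch of the kind of neural-network expression classifier described above (the landmark-based feature representation, layer sizes and placeholder training data are invented assumptions, not the authors' implementation):

```python
# Sketch: five-emotion classifier over facial feature vectors (illustrative only).
import numpy as np
from sklearn.neural_network import MLPClassifier

EMOTIONS = ["surprise", "anger", "sadness", "happiness", "no emotion"]

# Assume each face has already been reduced to a fixed-length feature vector,
# e.g. normalized facial landmark coordinates (dimension chosen arbitrarily).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 68 * 2))            # placeholder training features
y_train = rng.integers(0, len(EMOTIONS), size=500)  # placeholder labels

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
clf.fit(X_train, y_train)

def recognize(features: np.ndarray) -> str:
    """Map one feature vector to an emotion label."""
    return EMOTIONS[int(clf.predict(features.reshape(1, -1))[0])]

print(recognize(rng.normal(size=68 * 2)))
```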

3.
《Advanced Robotics》2013,27(6):585-604
We are attempting to introduce a 3D, realistic human-like animated face robot to human-robot communication. The face robot can recognize human facial expressions as well as produce realistic facial expressions in real time. For the animated face robot to communicate interactively, we propose a new concept of 'active human interface', and we investigate the performance of real-time recognition of facial expressions by neural networks (NN) and the expressiveness of facial messages on the face robot. We find that the NN recognition of facial expressions and the face robot's performance in generating facial expressions are of almost the same level as those of humans. We also construct an artificial emotion model able to generate six basic emotions in accordance with the recognition of a given facial expression and the situational context. This implies a high potential for the animated face robot to undertake interactive communication with humans when these three component technologies are integrated into the face robot.
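The 'artificial emotion model' above maps a recognized expression plus situational context to one of six basic emotions. A toy sketch of one way such a state update could look; the decay factor, weighting and context-bias vector are invented for illustration and are not the paper's model:

```python
import numpy as np

BASIC_EMOTIONS = ["happiness", "sadness", "anger", "fear", "surprise", "disgust"]

class EmotionModel:
    """Toy internal-emotion state: decays over time and is driven by the
    recognized expression of the interlocutor plus a context bias."""
    def __init__(self, decay: float = 0.9):
        self.state = np.zeros(len(BASIC_EMOTIONS))
        self.decay = decay

    def update(self, recognized: str, context_bias: np.ndarray) -> str:
        self.state *= self.decay                      # gradual return to neutral
        self.state[BASIC_EMOTIONS.index(recognized)] += 1.0
        self.state += context_bias                    # situational influence
        return BASIC_EMOTIONS[int(np.argmax(self.state))]

model = EmotionModel()
print(model.update("happiness", np.zeros(len(BASIC_EMOTIONS))))
```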

4.
The ability to recognize facial emotions is a target behaviour when treating people with social impairment. When assessing this ability, the most widely used facial stimuli are photographs. Although their use has been shown to be valid, photographs are unable to capture the dynamic aspects of human expressions. This limitation can be overcome by creating virtual agents with believable expressed emotions. The main objective of the present study was to create a new set of dynamic virtual faces with high realism that could be integrated into a virtual reality (VR) cyberintervention to train people with schizophrenia in the full repertoire of social skills. A set of highly realistic virtual faces was created based on the Facial Action Coding System. Facial movement animation was also included so as to mimic the dynamism of human facial expressions. Consecutive healthy participants (n = 98) completed a facial emotion recognition task using both natural faces (photographs) and virtual agents expressing five basic emotions plus a neutral one. Repeated-measures ANOVA revealed no significant difference in participants' accuracy of recognition between the two presentation conditions. However, anger was better recognized in the VR images, and disgust was better recognized in photographs. Age, participant gender and reaction times were also explored. Implications of the use of virtual agents with realistic human expressions in cyberinterventions are discussed.

5.
Academic emotions can influence and regulate learners' cognitive activities such as attention, memory and thinking, and automatic emotion recognition is the basis of affective interaction and instructional decision-making in smart learning environments. Current emotion recognition research focuses mainly on discrete emotions, which are non-continuous on the time axis and therefore cannot precisely characterize the evolution of students' academic emotions. To address this problem, a dimensional affect dataset of secondary school students in a real online learning context was built via crowdsourcing, and a deep learning model for continuous dimensional affect prediction was designed. In the experiment, learning materials that trigger academic emotions were selected according to students' learning styles, and 32 participants were recruited for autonomous online learning; their facial images were captured in real time, yielding 157 videos of students' academic emotions. Each video was annotated on the two affect dimensions of Arousal and Valence, producing a dimensional database of 2,178 student facial expression images. A dimensional affect model based on the ConvLSTM network was built and evaluated on this dataset, obtaining a mean Concordance Correlation Coefficient (CCC) of 0.581; on the public Aff-Wild dataset, the mean CCC was 0.222. The experiments show that the proposed dimensional affect model improves the CCC metric on Aff-Wild dimensional emotion recognition by 7.6% to 43.0%.
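The Concordance Correlation Coefficient used above is a standard agreement measure for continuous valence/arousal prediction. A minimal sketch using Lin's standard definition (the general formula, not code from the paper):

```python
import numpy as np

def ccc(pred: np.ndarray, true: np.ndarray) -> float:
    """Concordance Correlation Coefficient between a predicted and a true
    valence/arousal trace (standard Lin's CCC definition)."""
    mp, mt = pred.mean(), true.mean()
    vp, vt = pred.var(), true.var()
    cov = ((pred - mp) * (true - mt)).mean()
    return 2 * cov / (vp + vt + (mp - mt) ** 2)

print(ccc(np.array([0.1, 0.4, 0.6]), np.array([0.2, 0.5, 0.5])))
```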

6.
7.
Pattern Analysis and Applications - Most research on facial expression recognition has focused on binary Action Unit (AU) detection, while graded changes in their intensity have rarely been...

8.

Emotion recognition from facial images is considered a challenging task due to the varying nature of facial expressions. Prior studies on emotion classification from facial images using deep learning models have faced the issue of performance degradation due to poor selection of layers in the convolutional neural network model. To address this issue, we propose an efficient deep learning technique using a convolutional neural network model for classifying emotions from facial images and detecting age and gender from facial expressions efficiently. Experimental results show that the proposed model outperformed baseline works by achieving an accuracy of 95.65% for emotion recognition, 98.5% for age recognition, and 99.14% for gender recognition.
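A minimal sketch of a multi-output CNN of the kind described, written in TensorFlow/Keras as one plausible framework (the 48x48 grayscale input, layer sizes and seven-class emotion head are assumptions for illustration; the paper's actual architecture is not reproduced here):

```python
# Sketch: one shared convolutional trunk with emotion / age / gender heads.
import tensorflow as tf
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(48, 48, 1))
x = layers.Conv2D(32, 3, activation="relu")(inp)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)
x = layers.Dense(128, activation="relu")(x)

emotion = layers.Dense(7, activation="softmax", name="emotion")(x)  # e.g. 7 classes
age     = layers.Dense(1, activation="relu",    name="age")(x)      # regression
gender  = layers.Dense(1, activation="sigmoid", name="gender")(x)   # binary

model = Model(inp, [emotion, age, gender])
model.compile(optimizer="adam",
              loss={"emotion": "sparse_categorical_crossentropy",
                    "age": "mse",
                    "gender": "binary_crossentropy"})
model.summary()
```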

9.
Research on an Intelligent Tutoring System Based on Emotion Recognition
To address the lack of affect in traditional intelligent tutoring systems (ITS), an ITS model based on emotion recognition technology is proposed. On top of the traditional tutoring system, the model adds an emotion recognition module built with facial expression recognition, text recognition and related techniques; it can acquire and identify students' learning emotions and apply corresponding emotional incentive strategies accordingly, realizing affect-aware teaching.

10.
In affective robot research, facial expressions with different personalities are an important basis for enhancing the realism of emotional robots. To achieve richer and more delicate expressions, human personality traits were introduced into the emotional robot; by analyzing personality theory and emotion model theory, the emotion intensities of robots with different personalities were obtained. Combining the mapping between facial expressions in the Facial Action Coding System and the robot's control points, a method for realizing the basic expressions of the emotional robot under different personalities was derived. A facial model of the emotional robot was built in SolidWorks, and in the ANSYS engineering software the facial model of the SHFR-III emotional robot was set as an elastic body; through finite element simulation, the simulation method for expressions was explored, yielding the control-region loads and simulation results needed to realize the basic expressions of SHFR-III under different personalities. Finally, according to the simulation results, expression experiments with different personalities were conducted on the SHFR-III emotional robot. The results show that finite element expression simulation can guide the SHFR-III emotional robot to realize human-like basic facial expressions with different personalities.
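A schematic of the FACS-style mapping idea above, i.e., action units driving control-point displacements; the specific AUs, control-point names and gains below are invented for illustration and are not SHFR-III's actual parameters:

```python
# Hypothetical mapping from FACS action units to robot control-point offsets.
# AU numbers follow FACS convention; control-point names and gains are invented.
AU_TO_CONTROL = {
    12: [("mouth_corner_left", 0.8), ("mouth_corner_right", 0.8)],  # lip corner puller
    4:  [("brow_inner_left", -0.6), ("brow_inner_right", -0.6)],    # brow lowerer
    1:  [("brow_inner_left", 0.5), ("brow_inner_right", 0.5)],      # inner brow raiser
}

def expression_to_offsets(active_aus: dict[int, float]) -> dict[str, float]:
    """Combine AU intensities (0..1) into per-control-point displacements."""
    offsets: dict[str, float] = {}
    for au, intensity in active_aus.items():
        for point, gain in AU_TO_CONTROL.get(au, []):
            offsets[point] = offsets.get(point, 0.0) + gain * intensity
    return offsets

# A mild smile: AU12 at half intensity.
print(expression_to_offsets({12: 0.5}))
```

Personality could then scale the intensity passed in per emotion, which matches the abstract's idea of personality-dependent emotion strength.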

11.
Analysis and Research on the Emotional Semantics of Image Content in Games
To address the neglect of emotional factors in current games, the concept of emotional semantics is introduced into game scenes. A game scene image retrieval system based on emotional semantics is proposed, and affective computing is applied to study the digital processing of emotional information; a support vector machine algorithm is used to establish the link between the content of scene images and the emotional semantics they express. The goal is to build a game system that can perceive and recognize human emotions and respond to them intelligently, thereby making human-computer interaction more harmonious.
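A minimal sketch of the SVM link between scene-image content and emotional labels; the color-histogram feature, the three labels and the placeholder data are illustrative assumptions, not the paper's feature set:

```python
# Sketch: SVM linking low-level scene-image features to emotional semantic labels.
import numpy as np
from sklearn.svm import SVC

LABELS = ["calm", "tense", "joyful"]   # hypothetical emotional semantics

def color_histogram(image: np.ndarray, bins: int = 8) -> np.ndarray:
    """Flattened per-channel histogram as a simple global image feature."""
    return np.concatenate([
        np.histogram(image[..., c], bins=bins, range=(0, 255))[0]
        for c in range(3)
    ]).astype(float)

rng = np.random.default_rng(1)
images = rng.integers(0, 256, size=(60, 32, 32, 3))   # placeholder scene images
X = np.stack([color_histogram(im) for im in images])
y = rng.integers(0, len(LABELS), size=60)             # placeholder labels

svm = SVC(kernel="rbf").fit(X, y)
print(LABELS[int(svm.predict(X[:1])[0])])
```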

12.
As technology advances, robots and virtual agents will be introduced into home and healthcare settings to assist individuals, both young and old, with everyday living tasks. Understanding how users recognize an agent's social cues is therefore imperative, especially in social interactions. Facial expression, in particular, is one of the most common non-verbal cues used to display and communicate emotion in on-screen agents (Cassell et al., 2000). Age is important to consider because age-related differences in emotion recognition of human facial expressions have been supported (Ruffman et al., 2008), with older adults showing a deficit in recognition of negative facial expressions. Previous work has shown that younger adults can effectively recognize facial emotions displayed by agents (Bartneck and Reichenbach, 2005, Courgeon et al., 2009, Courgeon et al., 2011, Breazeal, 2003); however, little research has compared in depth younger and older adults' ability to label a virtual agent's facial emotions, an important consideration because social agents will be required to interact with users of varying ages. If such age-related differences exist for recognition of virtual agent facial expressions, we aim to understand whether those differences are influenced by the intensity of the emotion, the dynamic formation of the emotion (i.e., a neutral expression developing into an expression of emotion through motion), or the type of virtual character, differing in human-likeness. Study 1 investigated the relationship between age-related differences, the implication of dynamic formation of emotion, and the role of emotion intensity in recognition of the facial expressions of a virtual agent (iCat). Study 2 examined age-related differences in recognition of emotions expressed by three types of virtual characters differing in human-likeness (non-humanoid iCat, synthetic human, and human). Study 2 also investigated the role of configural and featural processing as a possible explanation for age-related differences in emotion recognition. First, our findings show age-related differences in the recognition of emotions expressed by a virtual agent, with older adults showing lower recognition for the emotions of anger, disgust, fear, happiness, sadness, and neutral. These age-related differences might be explained by older adults having difficulty discriminating similarity in the configural arrangement of facial features for certain emotions; for example, older adults often mislabeled the similar emotion of fear as surprise. Second, our results did not provide evidence for dynamic formation improving emotion recognition; however, in general, the intensity of the emotion improved recognition. Lastly, we learned that emotion recognition, for older and younger adults, differed by character type, from best to worst: human, synthetic human, and then iCat. Our findings provide guidance for design, as well as the development of a framework of age-related differences in emotion recognition.

13.
At present, emotion is considered a critical component of human behaviour, and thus it should be embedded within the reasoning module when an intelligent system or an autonomous robot aims to emulate or anticipate human reactions. Therefore, current research in Artificial Intelligence shows an increasing interest in artificial emotion research for developing human-like systems. Based on Thayer's emotion model and Fuzzy Cognitive Maps, this paper presents a proposal for forecasting artificial emotions. It provides an innovative method for forecasting artificial emotions and designing an affective decision system. This work includes an experiment with three simulated artificial scenarios for testing the proposal. Each scenario generates different emotions according to the artificial experimental model.
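A Fuzzy Cognitive Map forecasts by iterating concept activations through a signed weight matrix. A toy sketch under stated assumptions: the concept names loosely follow Thayer's energy/tension axes, and the weight values are invented, not the paper's map:

```python
# Sketch of a Fuzzy Cognitive Map update for emotion forecasting.
import numpy as np

CONCEPTS = ["stimulus", "energy", "tension", "calm", "exuberance"]

# W[i, j]: causal influence of concept i on concept j (invented values).
W = np.array([
    [0.0,  0.7,  0.4,  0.0,  0.0],
    [0.0,  0.0,  0.0, -0.3,  0.6],
    [0.0,  0.0,  0.0, -0.6, -0.2],
    [0.0, -0.2, -0.5,  0.0,  0.0],
    [0.0,  0.3, -0.1,  0.0,  0.0],
])

def step(state: np.ndarray) -> np.ndarray:
    """One FCM iteration: propagate activations through W, squash to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(state @ W + state)))   # sigmoid with self-memory

state = np.array([1.0, 0.2, 0.2, 0.5, 0.1])   # an external stimulus arrives
for _ in range(5):                             # iterate toward a forecast
    state = step(state)
print(dict(zip(CONCEPTS, state.round(2))))
```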

14.
Machine emotion is realized by embedding agents with affective capabilities. Although there are already many research results in the field of human-computer interaction, research on affective computing for agents is still at an early stage, and pursuing it in depth has important scientific and application value for advancing the field. Representative publications were selected by searching the Scopus database, with a focus on the bidirectional flow of emotion between agents and users, and were analyzed from the perspectives of the agent's perception of user emotion and its regulation of user emotion. First, methods for recognizing user emotion are reviewed, i.e., analyzing the user's emotional state through multi-channel information such as facial expressions, speech, posture, physiological signals and text, and some machine learning methods used in emotion recognition are summarized. Second, the influence of emotionally expressive agents on users is analyzed from the perspective of user experience, and techniques for emotion generation and expression in agents are summarized, pointing out that besides facial expressions, agents can also express emotion through non-verbal behaviors such as gaze, posture, head movement and gestures. Typical agent emotion architectures are then reviewed, and the role of reinforcement learning in agent emotion design is illustrated with examples. To verify model accuracy, existing affect evaluation methods and metrics are compared. Finally, urgent open problems in agent affective computing are identified. Based on this review of existing research, affective computing for agents is a promising research direction, and we hope this paper provides a reference for further work in this area.

15.
颜文靖, 蒋柯, 傅小兰. 《智能系统学报》, 2022, 17(5): 1039-1053
Automatic facial expression recognition is a frontier field at the deep intersection of psychology and computer science. Researchers in emotion psychology, pattern recognition and affective computing have developed theories, databases and algorithms for expression recognition, greatly advancing automatic expression recognition technology. From a psychological perspective, and drawing on our previous work, this paper first reviews the theoretical views and practical progress concerning the psychological foundations of automatic expression recognition, the facial expression of emotion, the evolution of expression data, and the annotation of expression samples; it then analyzes the main problems facing automatic expression recognition; finally, based on the constructive view of predictive processing theory, it proposes that emphasizing expression "understanding" in the interaction process is expected to further improve the effectiveness of automatic expression recognition, and anticipates that this may be a future direction for research in this area.

16.
To address the problems that vision-based emotion recognition cannot capture the influence of a person's surroundings and interactions with nearby people, that a single emotion category cannot describe emotion richly enough, and that future emotions cannot be reasonably predicted, a visual emotion recognition and prediction method incorporating background context features is proposed. The method consists of Context-ER, an emotion recognition model that fuses background context features, and GRU-mapVA, an emotion prediction model based on GRU and the continuous Valence-Arousal emotion dimensions. Context-ER jointly considers facial expression, body posture and background context (the environment and interactions with surrounding people), performing multi-label classification over 26 discrete emotion categories and regression over 3 continuous emotion dimensions. GRU-mapVA projects predicted Valence-Arousal values onto an improved Valence-Arousal model according to the proposed mapping rules, making inter-class differences in emotion prediction more pronounced. Context-ER was tested on the Emotic dataset; the results show that its mean average precision for emotion recognition exceeds the best existing methods by more than 4%. GRU-mapVA was tested on three video samples, showing a large improvement in emotion prediction over existing methods.
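A sketch of the general valence-arousal mapping idea behind GRU-mapVA: project continuous (valence, arousal) predictions onto discrete regions so that class differences become explicit. The region labels, thresholds and dead zone below are illustrative assumptions, not the paper's mapping rules:

```python
# Map a point in [-1, 1]^2 valence-arousal space to a coarse emotion region.
def map_va(valence: float, arousal: float, dead_zone: float = 0.1) -> str:
    """Quadrant-style projection with a neutral dead zone around the origin."""
    if abs(valence) < dead_zone and abs(arousal) < dead_zone:
        return "neutral"
    if valence >= 0:
        return "excited/happy" if arousal >= 0 else "calm/content"
    return "angry/afraid" if arousal >= 0 else "sad/bored"

print(map_va(0.6, 0.4))    # -> excited/happy
print(map_va(-0.5, -0.3))  # -> sad/bored
```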

17.
This study used event-related potentials (ERP) to observe the N170 effect elicited at temporo-occipital electrode sites while participants identified positive, neutral and negative facial emotions, in order to explore whether reading serious literary fiction affects responses to other people's emotions. Participants in the reading group read serious literary fiction between two facial emotion recognition tests, whereas the control group did not. N170 amplitude increased from the first test to the second, but reading serious literary fiction suppressed this amplitude gain, and the suppression was larger for more positive stimulus images. Accordingly, reading does affect the recognition of others' facial emotions. The study speculates that reading may suppress face-emotion specificity in the brain, which in turn may improve the perception of facial emotions.

18.
A Humanoid Head Robot Based on Affective Interaction
The purpose of this research is to design a robot that can interact with people and assist humans in daily life and common settings. To accomplish these tasks, the robot must display emotions in a friendly way and exhibit friendly traits and personality. Based on bionics, a humanoid head robot was developed and a behavior decision model for it was established. The robot has the six basic human facial expressions, as well as capabilities for face detection, speech emotion recognition and synthesis, and affective behavior decision-making, enabling effective affective interaction with people through machine vision, speech interaction and emotional expression.

19.
A fully automated, multistage system for real-time recognition of facial expression is presented. The system uses facial motion to characterize monochrome frontal views of facial expressions and is able to operate effectively in cluttered and dynamic scenes, recognizing the six emotions universally associated with unique facial expressions, namely happiness, sadness, disgust, surprise, fear, and anger. Faces are located using a spatial ratio template tracker algorithm. Optical flow of the face is subsequently determined using a real-time implementation of a robust gradient model. The expression recognition system then averages facial velocity information over identified regions of the face and cancels out rigid head motion by taking ratios of this averaged motion. The motion signatures produced are then classified using Support Vector Machines as either nonexpressive or as one of the six basic emotions. The completed system is demonstrated in two simple affective computing applications that respond in real-time to the facial expressions of the user, thereby providing the potential for improvements in the interaction between a computer user and technology.
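A minimal sketch of the motion-averaging step described above: dense optical flow averaged over facial regions, with a ratio taken to reduce sensitivity to rigid head motion. The region boxes and the single vertical-motion feature are invented assumptions; the paper's actual regions, tracker and SVM stage are not reproduced here:

```python
import cv2
import numpy as np

REGIONS = {"brows": (10, 30, 20, 80), "mouth": (60, 90, 30, 70)}  # y0, y1, x0, x1

def region_motion(prev_gray: np.ndarray, gray: np.ndarray) -> dict[str, float]:
    """Mean vertical flow per facial region, plus a head-motion-cancelling ratio."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    out = {}
    for name, (y0, y1, x0, x1) in REGIONS.items():
        out[name] = float(np.mean(flow[y0:y1, x0:x1, 1]))
    # A ratio of region motions is less sensitive to whole-head translation
    # than the raw motions themselves.
    out["mouth_to_brows_ratio"] = out["mouth"] / (out["brows"] + 1e-6)
    return out

prev = np.zeros((100, 100), np.uint8)
curr = np.roll(prev, 1, axis=0)   # fake one-pixel vertical shift
print(region_motion(prev, curr))
```

These per-frame motion signatures would then be fed to a classifier such as an SVM, as the abstract describes.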

20.
This study develops a face robot with a human-like appearance for making facial expressions similar to those of a specific subject. First, an active drive points (ADPs) model is proposed for establishing a robotic face with fewer active degrees of freedom for bipedal humanoid robots. Then, a robotic face design method is proposed, with the robot possessing a facial appearance and expressions similar to those of a human subject. A similarity evaluation method is presented to evaluate the similarity of facial expressions between the robot and a specific human subject. Finally, the proposed facial model and the design methods are verified and implemented on a humanoid robot platform.
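One plausible shape for such a similarity evaluation, sketched under stated assumptions: compare landmark displacement fields (neutral to expression) for the human and the robot with cosine similarity. The displacement representation and the metric are illustrative choices, not the paper's method:

```python
import numpy as np

def expression_similarity(human_neutral, human_expr, robot_neutral, robot_expr):
    """Cosine similarity in [-1, 1] between facial-landmark displacement fields."""
    dh = (human_expr - human_neutral).ravel()
    dr = (robot_expr - robot_neutral).ravel()
    return float(dh @ dr / (np.linalg.norm(dh) * np.linalg.norm(dr) + 1e-9))

rng = np.random.default_rng(2)
neutral = rng.normal(size=(68, 2))                    # 68 tracked landmarks, (x, y)
smile = neutral + rng.normal(scale=0.05, size=(68, 2))
print(expression_similarity(neutral, smile, neutral, smile))  # ~1.0 by construction
```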
