Similar Literature
20 similar documents found
1.
Ontological reasoning for improving the treatment of emotions in text   (Total citations: 2; self-citations: 2; other: 0)
With the advent of affective computing, the task of adequately identifying, representing and processing the emotional connotations of text has acquired importance. Two problems facing this task are addressed in this paper: the composition of sentence emotion from word emotion, and a representation of emotion that allows easy conversion between existing computational representations. The emotion of a sentence should be derived by composing the emotions of its words, but no method has been proposed so far to model this compositionality. Of the various existing approaches for representing emotions, some are better suited to some problems and some to others, but there is no easy way of converting from one to another. This paper presents a system that addresses these two problems by reasoning with two ontologies implemented with Semantic Web technologies: one designed to represent word dependency relations within a sentence, and one designed to represent emotions. The ontology of word dependency relies on roles to represent the way emotional contributions project over word dependencies. By applying automated classification of mark-up results in terms of the emotion ontology, the system can interpret unrestricted input in terms of a restricted set of concepts for which particular rules are provided. The rules applied at the end of the process provide configuration parameters for an emotional voice synthesis system.

2.
This paper introduces the integrated system of a smart-device-based cognitive robot partner called iPhonoid-C. Interaction with a robot partner requires many elements, including verbal communication, nonverbal communication, and embodiment. A robot partner should be able to understand human sentences as well as nonverbal information such as human gestures. In the proposed system, the robot has an emotional model connecting the input information from the human with the robot's behavior. Since emotions are involved in natural human communication and have a significant impact on human actions, it is important to develop an emotional model for the robot partner to enhance human–robot interaction. In the proposed system, human sentences and gestures influence the robot's emotional state, and the robot then performs gestural and facial expressions and generates sentences according to that state. The proposed cognitive method is validated using a real robot partner.

3.
To date, most human emotion recognition systems sense emotions and their dominance individually. This paper discusses a fuzzy model for multilevel affective computing based on the dominance dimensional model of emotions; the model can detect other possible emotions simultaneously at the time of recognition. One hundred and thirty volunteers of different races, cultural backgrounds and geographical locations were selected to record their emotional states. Twenty-seven different emotions, each with a strength on a scale of 5, were surveyed. The recorded emotions were analyzed together with the other possible emotions and their levels of dominance to build the fuzzy model. This model was then integrated into a fuzzy emotion recognition system using three input devices: mouse, keyboard and touch-screen display. A support vector machine classifier detected the other possible emotions of the users along with the directly sensed emotion. The binary (non-fuzzy) system sensed emotions with an accuracy of 93 %, but could only sense a limited set of emotions. By integrating the fuzzy model, the system was able to detect more possible emotions at a time, with a slightly lower recognition accuracy of 86 %. The false positive rate of this model for four emotions was measured at 16.7 %. The resulting accuracy and false positive rate place the system among the top three most accurate human emotion recognition (affective computing) systems.
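As a rough illustration of the multilevel idea, the sketch below uses a probability-calibrated SVM and keeps every emotion whose membership exceeds a cutoff, rather than only the top prediction. The features, labels and cutoff are invented stand-ins, not the paper's data or exact fuzzy model.

```python
# Minimal sketch (assumed details): an SVM yields class probabilities,
# and a fuzzy cutoff keeps all emotions above it, not just the argmax.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical features from mouse/keyboard/touch interaction, 3 emotions.
X = rng.normal(size=(300, 6))
y = rng.integers(0, 3, size=300)

clf = SVC(probability=True).fit(X, y)

def detect_emotions(sample, cutoff=0.2):
    """Return every emotion whose fuzzy membership (probability) > cutoff."""
    memberships = clf.predict_proba(sample.reshape(1, -1))[0]
    return {label: round(p, 2) for label, p in enumerate(memberships) if p > cutoff}

print(detect_emotions(X[0]))  # possibly several co-occurring emotions
```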

4.
How we design and evaluate for emotions depends crucially on what we take emotions to be. In affective computing, affect is often taken to be another kind of information—discrete units or states internal to an individual that can be transmitted in a loss-free manner from people to computational systems and back. While affective computing explicitly challenges the primacy of rationality in cognitivist accounts of human activity, at a deeper level it often relies on and reproduces the same information-processing model of cognition. Drawing on cultural, social, and interactional critiques of cognition which have arisen in human–computer interaction (HCI), as well as anthropological and historical accounts of emotion, we explore an alternative perspective on emotion as interaction: dynamic, culturally mediated, and socially constructed and experienced. We demonstrate how this model leads to new goals for affective systems—instead of sensing and transmitting emotion, systems should support human users in understanding, interpreting, and experiencing emotion in its full complexity and ambiguity. In developing from emotion as objective, externally measurable unit to emotion as experience, evaluation, too, alters focus from externally tracking the circulation of emotional information to co-interpreting emotions as they are made in interaction.

5.
Weblogs are increasingly popular modes of communication, frequently used as media for emotional expression in the ever-changing online world. This work uses blogs as the object and data source for Chinese emotional expression analysis. First, a textual emotional expression space model is described, and based on this model a relatively fine-grained annotation scheme is proposed for manual annotation of an emotion corpus. At the document and paragraph levels, emotion category, emotion intensity, topic word and topic sentence are annotated. At the sentence level, emotion category, emotion intensity, emotional keywords and phrases, degree words, negation words, conjunctions, rhetoric, punctuation, objectivity or subjectivity, and emotion polarity are annotated. Using this corpus, we then explore the linguistic expressions that indicate emotion in Chinese and present a detailed data analysis covering mixed emotions, independent emotions, emotion transfer, and the words and rhetorical devices used for emotional expression.

6.
This paper introduces OpenPsi, a computational model for emotion generation and function that formalizes part of Dörner's PSI theory, an extensive psychological model of the human mind covering knowledge representation, perception and bounded rationality. We also borrow some technical ideas from MicroPsi, one of the concrete implementations of PSI theory by Joscha Bach. The proposed emotional model is applied to control a virtual robot living in a game world inspired by Minecraft. Simulation experiments have been performed and evaluated for three different scenarios, and the emergent emotions fit the circumstances well. The dynamics of the affective model are also analyzed using Lewis's dynamic theory of emotions: evidence of the phase transitions Lewis suggests is observed in simulations, including trigger, self-amplification and self-stabilization phases. These experimental results show that the proposed model is a promising approach to modeling both emotion emergence and emotion dynamics.
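The phase structure can be illustrated with a toy intensity equation. The sketch below is not OpenPsi itself, just a minimal dynamic with a trigger impulse, a positive-feedback (self-amplification) term and a decay (self-stabilization) term; all constants are illustrative assumptions.

```python
# Toy emotion-intensity dynamics showing Lewis-style phases:
#   trigger -> self-amplification -> self-stabilization.
def simulate(steps=200, dt=0.1, gain=1.5, decay=0.8, cap=1.0):
    x, history = 0.0, []
    for t in range(steps):
        stimulus = 0.5 if t == 10 else 0.0            # trigger phase
        # positive feedback while x is small, saturating decay as x grows
        dx = stimulus + gain * x * (cap - x) - decay * x
        x = max(0.0, x + dt * dx)
        history.append(x)
    return history

h = simulate()
print(f"peak={max(h):.2f}, settled={h[-1]:.2f}")  # rises then stabilizes
```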

7.
Recognition of emotion in speech has recently matured into one of the key disciplines in speech analysis, serving next-generation human–machine interaction and communication. However, unlike automatic speech recognition, emotion recognition from an isolated word or phrase is inadequate for conversation, because a complete emotional expression may span several sentences and may end on any word in a dialogue. In this paper, we present a segment-based approach to emotion recognition in continuous Mandarin Chinese speech. In this approach, the unit of recognition is not a phrase or a sentence but an emotional expression in dialogue. To that end, we first evaluate the performance of several classifiers in short-sentence speech emotion recognition architectures; the experiments show that the WD-KNN classifier achieves the best accuracy for 5-class emotion recognition among the five classification techniques. We then implement a continuous Mandarin Chinese speech emotion recognition system, based on WD-KNN, with an emotion radar chart that represents the intensity of each emotion component in speech. The proposed approach shows how emotions can be recognized from speech signals and, in turn, how emotional states can be visualized.
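The abstract does not spell out the WD-KNN weighting, so the sketch below shows one common weighted-KNN variant (inverse-distance voting) on synthetic features; the per-class vote totals could feed something like the emotion radar chart. Treat the weighting scheme and data as assumptions.

```python
# Minimal weighted-distance KNN sketch: nearer neighbours get larger
# (inverse-distance) votes; vote totals double as per-emotion intensities.
import numpy as np
from collections import defaultdict

def wd_knn_predict(X_train, y_train, x, k=5):
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = defaultdict(float)
    for i in nearest:
        votes[y_train[i]] += 1.0 / (dists[i] + 1e-9)  # inverse-distance weight
    return max(votes, key=votes.get), dict(votes)     # label + radar scores

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))            # stand-in acoustic features
y = rng.integers(0, 5, size=50)         # 5 emotion classes
print(wd_knn_predict(X, y, X[0]))
```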

8.
This paper presents a non-verbal and non-facial method for effective communication by a “mechanoid robot”, conveying emotions through gestures. The research focuses on human–robot interaction using a mechanoid robot that possesses no anthropomorphic facial features for conveying gestures. Another feature of this research is the use of human-like smooth motion, in contrast to the traditional trapezoidal velocity profile, for the robot's communication. For conveying gestures, the connection between robot motion and perceived emotions is established by varying the velocity and acceleration of the mechanoid structure. The selected motion parameters are changed systematically to observe the variation in perceived emotions. The perceived emotions are further investigated using three different emotional behavior models: Russell's circumplex model of affect, the Tellegen–Watson–Clark model and the PAD model. The results show that the designated motion parameters are linked with changes in perceived emotion. Moreover, the emotions perceived by the user are the same across all three models, validating the reliability of the three emotional scale models and of the emotions perceived by the user.

9.
Multi-modal affective data such as EEG and physiological signals are increasingly utilized to analyze human emotional states. However, due to the noise in the collected affective data, the performance of emotion recognition is still unsatisfactory. In fact, emotion recognition can be regarded as a channel-coding problem, which concerns reliable communication over noisy channels: using the affective data and its label, a redundant codeword can be generated to correct signal noise and recover the emotional label information. We therefore use a multi-label output codes method to improve the accuracy and robustness of multi-dimensional emotion recognition by training a redundant codeword model, following the idea of error-correcting output codes. Experimental results on the DEAP dataset show that the multi-label output codes method outperforms traditional machine learning and pattern recognition methods for the prediction of emotional multi-labels.
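The error-correcting output codes idea can be tried directly with scikit-learn's OutputCodeClassifier, which assigns each class a redundant binary codeword and decodes noisy bit predictions to the nearest codeword. The data below is synthetic, and the multiclass (rather than multi-label) setting is a simplification of the paper's setup.

```python
# ECOC sketch: code_size > 1 adds redundant bits so that individual
# bit errors can still decode to the correct emotion class.
import numpy as np
from sklearn.multiclass import OutputCodeClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 32))      # stand-in EEG/physiological features
y = rng.integers(0, 4, size=400)    # e.g. 4 valence/arousal quadrants

ecoc = OutputCodeClassifier(LogisticRegression(max_iter=1000),
                            code_size=3.0, random_state=0)  # 3x redundancy
ecoc.fit(X, y)
print(ecoc.predict(X[:5]))          # decoded emotion labels
```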

10.
This paper deals with spontaneous behavior for cooperation through interaction in a distributed autonomous robot system. Though a human gives the robots evaluation functions for the cooperative relations among them, each robot decides its behavior depending on its environment, its experience, and the behavior of the other robots. The robot acquires a model of the other robots' behavior through learning. Inspired by biological systems, the robots' behaviors are interpreted as emotional by an observer of the system. In psychology, emotions are considered to play important roles in the generation of motivation and in behavior selection. In this paper, the robots' behaviors are interpreted as follows: each robot feels frustration when its behavior decision does not fit its environment, and it then changes its behavior to change its situation actively and spontaneously. The results show the potential of emotion-driven intelligent behavior. This work was presented, in part, at the International Symposium on Artificial Life and Robotics, Oita, Japan, February 18–20, 1996.

11.
Advanced Robotics, 2013, 27(6): 767–784
The purpose of this study is to develop an interactive and emotional robot designed with passive characteristics and a delicate interaction concept, using the interaction response of 'bruises and complexion color due to emotional stimuli'. In order to overcome the cue-realism mismatch issues that cause the Uncanny Valley, an emotional interaction robot, Mung, was developed with a simple design composed of a body and two eyes. Mung can recognize human emotions from human–robot or human–human verbal communication. The robot expresses its emotions according to an emotional state modeled by a mass-spring-damper system with an elastic-hysteresis spring. The robot displays a bruise when it is in a negative emotional state, just as a human becomes bruised when physically hurt, and shows a natural complexion when its emotional wound has healed. The effectiveness of emotional expression using bruise and complexion colors was evaluated, and the feasibility of the developed robot was tested in several exhibitions and field trials.
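A mass-spring-damper emotional state is straightforward to simulate. The sketch below integrates the standard second-order equation with an explicit Euler step, omitting the paper's elastic-hysteresis spring; the constants and bruise threshold are invented.

```python
# Emotional state x driven by a mass-spring-damper: a negative stimulus
# deflects it (bruise), the spring pulls it back to neutral (complexion).
def emotion_msd(stimuli, m=1.0, c=0.6, k=2.0, dt=0.05):
    x, v, trace = 0.0, 0.0, []
    for f in stimuli:
        a = (f - c * v - k * x) / m   # Newton: m*a = force - damping - spring
        v += a * dt
        x += v * dt
        trace.append(x)               # e.g. x < -0.3 -> display bruise colour
    return trace

trace = emotion_msd([-1.0] * 40 + [0.0] * 160)  # hurt, then left alone
print(f"deepest={min(trace):.2f}, recovered={trace[-1]:.2f}")
```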

12.
The mystery surrounding emotions, how they work and how they affect our lives, has not yet been unravelled. Scientists still debate the real nature of emotions; evolutionary, physiological and cognitive accounts are just a few of the approaches used to explain affective states. Regardless of the emotional paradigm, neurologists have made progress in demonstrating that emotion is as important as, or more important than, reason in the process of making decisions and deciding actions. The significance of these findings should not be overlooked in a world that is increasingly reliant on computers to accommodate user needs. In this paper, a novel approach for recognizing and classifying positive and negative emotional changes in real time using physiological signals is presented. Based on sequential analysis and autoassociative networks, the emotion detection system outlined here is potentially capable of operating on any individual regardless of their physical state and emotional intensity, without requiring an arduous adaptation or pre-analysis phase. Applying this methodology to real-time data collected from a single subject yielded a recognition level of 71.4 %, which is comparable to the best results achieved by others through off-line analysis. It is suggested that the detection mechanism outlined in this paper has all the characteristics needed to perform emotion recognition in pervasive computing.
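One common reading of an autoassociative detector is a bottlenecked network trained to reproduce its input, with reconstruction error flagging a state change. The sketch below follows that reading on synthetic signals and should not be taken as the paper's exact architecture; a threshold on the error would flag an emotional change.

```python
# Autoassociative sketch: train an MLP to reproduce baseline signals;
# samples that reconstruct poorly suggest a change of state.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
baseline = rng.normal(0, 1, size=(200, 5))   # stand-in physiological signals
net = MLPRegressor(hidden_layer_sizes=(3,),  # bottleneck smaller than input
                   max_iter=2000, random_state=0)
net.fit(baseline, baseline)                  # autoassociative: input == target

def reconstruction_error(sample):
    pred = net.predict(sample.reshape(1, -1))[0]
    return float(np.mean((pred - sample) ** 2))

print(f"baseline error={reconstruction_error(baseline[0]):.2f}, "
      f"shifted error={reconstruction_error(baseline[0] + 4.0):.2f}")
```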

13.
Emotion sentence classification is one of the core problems in emotion analysis, aiming to automatically determine the emotion category of a sentence. Traditional emotion sentence classification methods based on the OCC cognitive model of emotion mostly rely on dictionaries and rules, and their accuracy suffers when textual information is missing. This paper proposes an emotion sentence classification method based on the OCC model and Bayesian networks. By analyzing the emotion generation rules of the OCC model, emotion appraisal variables are extracted and combined with the emoticon features contained in emotion sentences to construct a Bayesian network for emotion classification. Through probabilistic inference, sentence-level emotion classification can be achieved while reducing the impact of missing information in the sentence. Comparison against the results of the emotion sentence classification subtask of the NLPCC2014 Chinese microblog emotion analysis evaluation demonstrates the effectiveness of the proposed method.
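A hand-rolled toy version of the inference step is sketched below: OCC appraisal variables and an emoticon feature act as evidence nodes with assumed (not learned) conditional probabilities, and an unobserved node simply marginalizes out, which is how missing textual information is tolerated.

```python
# Toy Bayesian inference over assumed CPTs, not the paper's learned network.
priors = {"joy": 0.5, "anger": 0.5}
cpt = {  # P(feature is present | emotion), illustrative numbers
    "desirable_event": {"joy": 0.8, "anger": 0.2},    # OCC appraisal variable
    "positive_emoticon": {"joy": 0.9, "anger": 0.1},  # emoticon feature
}

def classify(evidence):
    """evidence: {feature: bool}; unobserved features are marginalized out."""
    scores = {}
    for emo, prior in priors.items():
        p = prior
        for feat, table in cpt.items():
            if feat in evidence:  # observed node contributes its likelihood
                p *= table[emo] if evidence[feat] else 1 - table[emo]
            # unobserved node: summing over both values gives 1, so it drops out
        scores[emo] = p
    z = sum(scores.values())
    return {e: round(s / z, 3) for e, s in scores.items()}

# Textual cues missing, only an emoticon observed:
print(classify({"positive_emoticon": True}))
```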

14.
15.
Automatic perception of human affective behaviour from facial expressions, and recognition of intentions and social goals from dialogue contexts, would greatly enhance natural human–robot interaction. This research concentrates on intelligent neural-network-based facial emotion recognition and Latent Semantic Analysis based topic detection for a humanoid robot. The work first incorporates the Facial Action Coding System, which describes physical cues and anatomical knowledge of facial behaviour, for the detection of neutral and six basic emotions from real-time posed facial expressions. Feedforward neural networks (NN) implement the upper and lower facial Action Unit (AU) analysers, respectively, recognising six upper and 11 lower facial actions, including Inner and Outer Brow Raiser, Lid Tightener, Lip Corner Puller, Upper Lip Raiser, Nose Wrinkler, Mouth Stretch, etc. A neural-network-based facial emotion recogniser then accepts the derived 17 Action Units as inputs to decode neutral and six basic emotions from facial expressions. Moreover, in order for the robot to make appropriate responses based on the detected affective facial behaviours, Latent Semantic Analysis is used to focus on the underlying semantic structure of the data and go beyond linguistic restrictions to identify topics embedded in the users' conversations. The overall development is integrated with a modern humanoid robot platform under its Linux C++ SDKs. The work presented here shows great potential for developing personalised intelligent agents/robots with emotional and social intelligence.
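The LSA component follows a standard recipe (TF-IDF followed by truncated SVD). The sketch below applies that recipe to a few invented utterances to show how conversations project into a low-dimensional topic space; it is not the paper's pipeline or data.

```python
# Standard LSA sketch: TF-IDF term-document matrix, then truncated SVD.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

utterances = [
    "the robot smiled and waved at me",
    "my flight to tokyo was delayed again",
    "the airline lost my luggage at the airport",
    "the humanoid robot recognised my facial expression",
]
X = TfidfVectorizer(stop_words="english").fit_transform(utterances)
lsa = TruncatedSVD(n_components=2, random_state=0)
topic_space = lsa.fit_transform(X)   # each row: an utterance in topic space
print(topic_space.round(2))          # related topics land close together
```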

16.
Authentic facial expression analysis   (Total citations: 1; self-citations: 0; other: 1)
There is a growing trend toward emotional intelligence in human–computer interaction paradigms. In order to react appropriately to a human, the computer needs some perception of the human's emotional state. We assert that the most informative channel for machine perception of emotions is facial expressions in video. One difficulty in evaluating automatic emotion detection is that there are currently no international databases based on authentic emotions: the existing facial expression databases contain expressions that are not naturally linked to the emotional state of the test subject. Our contributions in this work are twofold. First, we create the first authentic facial expression database, in which the test subjects show natural facial expressions arising from their actual emotional state. Second, we evaluate several promising machine learning algorithms for emotion detection, including Bayesian networks, SVMs, and decision trees.

17.
A robot's face is its symbolic feature, and its facial expressions are the most effective channel for exchanging emotional information with people. Moreover, a robot's facial expressions play an important role in human–robot emotional interaction. This paper proposes general rules for designing and realizing expressions when developing mascot-type facial robots, which are designed to evoke friendly feelings in humans. The number and type of control points for six basic expressions or emotions were determined through a questionnaire. A linear affect-expression space model is provided to realize continuous and varied expressions effectively, and the effects of the proposed method are shown through experiments using a simulator and an actual robot system.
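One natural reading of a linear affect-expression space is a weighted blend of per-emotion control-point offsets added to a neutral pose. The sketch below makes that reading concrete; the control-point count, offsets and intensities are invented for illustration.

```python
# Linear affect-expression space sketch: pose = neutral + sum(w_i * basis_i).
import numpy as np

neutral = np.zeros(8)                       # 8 hypothetical control points
basis = {                                   # per-emotion offsets from neutral
    "happiness": np.array([2, 1, 0, 0, 3, 3, 0, 0], float),
    "sadness":   np.array([-1, -1, 0, 0, -2, -2, 1, 1], float),
}

def express(affect):
    """affect: {emotion: intensity in [0, 1]} -> control-point targets."""
    pose = neutral.copy()
    for emotion, intensity in affect.items():
        pose += intensity * basis[emotion]  # linear blend of basis expressions
    return pose

print(express({"happiness": 0.7, "sadness": 0.1}))  # a mixed expression
```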

18.
This paper deals with the implementation of emotions in mobile robots performing a specified task in a group, in order to develop intelligent behavior and easier forms of communication. The overall group performance depends on individual performance, group communication, and the synchronization of cooperation. With its emotional capability, each robot can recognize a changed environment, understand a colleague robot's state, and adapt and react to the changed world. The adaptive behavior of a robot is derived in an intelligent manner from the dominant emotion. In our control architecture, emotion serves to select the control precedence among alternatives such as behavior modes, cooperation plans, and goals. Emotional interaction happens among the robots, and a robot is biased by the emotional state of a colleague robot while performing a task. Here, emotional control is used for a better understanding of a colleague's internal state, for faster communication, and for better performance by eliminating dead time. This work was presented in part at the 12th International Symposium on Artificial Life and Robotics, Oita, Japan, January 25–27, 2007.

19.
This paper presents an autonomous robot control architecture based on an artificial emotional-cognitive system for a four-wheel-driven and four-wheel-steered mobile robot. A discrete stochastic state-space model describes the behavioral and emotional transition processes of the autonomous mobile robot in a dynamic, realistic environment. The cognitive mechanism, composed of a rule base and a reinforcement self-learning algorithm, accounts for the robot's deliberative processes such as learning, reasoning and memory (rule spaces). The artificial cognitive model has a dynamic associative memory containing behavioral transition rules that can be learned for achieving multi-objective robot tasks, and the motivation module of the architecture acts as a behavioral gain generator for those tasks. According to the emotional and behavioral state transition probabilities, artificial emotions determine sequences of behaviors for long-term action planning. The reinforcement self-learning and reasoning abilities of the cognitive model, and the motivational gain effects of the proposed architecture, can be observed in the executed behavioral sequences during simulation. The posture and speed of the robot, the configurations, speeds and torques of the wheels, and all deliberative and cognitive events can be observed in the simulation plant and a virtual-reality viewer. This study forms a basis for experiments on multi-goal robot tasks and artificial-emotion- and cognitive-mechanism-based behavior generation on a real mobile robot.

20.
Detecting changing emotions in human speech by machine and humans   (Total citations: 1; self-citations: 1; other: 0)
The goals of this research were: (1) to develop a system that automatically measures changes in the emotional state of a speaker by analyzing his/her voice, (2) to validate this system with a controlled experiment, and (3) to visualize the results to the speaker in 2-D space. Natural (non-acted) speech of 77 (Dutch) speakers was collected and manually divided into meaningful speech units. Three recordings per speaker were collected, in which the speaker was in a positive, neutral or negative state, respectively. For each recording, the speakers rated 16 emotional states on a 10-point Likert scale. The Random Forest algorithm was applied to 207 speech features extracted from the recordings to qualify (classification) and quantify (regression) the changes in the speaker's emotional state. Results showed that both predicting the direction of emotional change and predicting the change of intensity, measured by mean squared error, can be done better than the baseline (the most frequent class label and the mean value of change, respectively). Moreover, changes in negative emotions turned out to be more predictable than changes in positive emotions. A controlled experiment investigated the difference between human and machine performance in judging the emotional states in one's own voice and in another's; humans performed worse than the algorithm on both the detection and regression problems, and, like the machine algorithm, were better at detecting changing negative emotions than positive ones. Finally, applying Principal Component Analysis (PCA) to our data provided a validation of dimensional emotion theories and suggests that PCA is a promising technique for visualizing a user's emotional state in the envisioned application.
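The described pipeline maps naturally onto standard components. The sketch below pairs a Random Forest classifier (direction of change) with a Random Forest regressor (intensity of change) and a PCA projection to 2-D; all data here is a synthetic stand-in for the 207 speech features, not the Dutch corpus.

```python
# Random Forest for direction (classification) and intensity (regression)
# of emotional change, plus PCA for the 2-D visualization.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
X = rng.normal(size=(77 * 3, 207))         # 207 features per speech unit
direction = rng.integers(-1, 2, size=231)  # -1 negative, 0 none, +1 positive
magnitude = rng.random(231)                # change of intensity

clf = RandomForestClassifier(random_state=0).fit(X, direction)
reg = RandomForestRegressor(random_state=0).fit(X, magnitude)
xy = PCA(n_components=2).fit_transform(X)  # 2-D emotional-state map
print(clf.predict(X[:3]), reg.predict(X[:3]).round(2), xy[0].round(2))
```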
