Similar Literature
20 similar records found.
1.

The potential importance of human affect during human-computer interaction (HCI) is becoming increasingly well recognised. However, measuring and analysing affective behaviour is problematic. Physiological indicators reveal only partial, and sometimes ambiguous, information. Video analysis and existing coding schemes are notoriously lengthy and complex, and examine only certain aspects of affect. This paper describes the development of a practical methodology to assess user affect as displayed through emotional expressions. Interaction analysis techniques were used to identify discrete affective messages ('affectemes') and their components. This paper explains the rationale for this approach and demonstrates how it can be applied in practice. Preliminary evidence for its efficacy and reliability is also presented.

2.
Affective video content representation and modeling   (total citations: 7; self-citations: 0; citations by others: 7)
This paper looks into a new direction in video content analysis: the representation and modeling of affective video content. The affective content of a given video clip can be defined as the intensity and type of feeling or emotion (both referred to as affect) that are expected to arise in the user while watching that clip. The availability of methodologies for automatically extracting this type of video content will extend the current scope of possibilities for video indexing and retrieval. For instance, we will be able to search for the funniest or the most thrilling parts of a movie, or the most exciting events of a sports program. Furthermore, as the user may want to select a movie not only based on its genre, cast, director and story content, but also on its prevailing mood, affective content analysis is also likely to contribute to enhancing the quality of personalized video delivery to the user. We propose in this paper a computational framework for affective video content representation and modeling. This framework is based on the dimensional approach to affect known from the field of psychophysiology. According to this approach, the affective video content can be represented as a set of points in the two-dimensional (2-D) emotion space characterized by the dimensions of arousal (intensity of affect) and valence (type of affect). We map the affective video content onto the 2-D emotion space using models that link the arousal and valence dimensions to low-level features extracted from the video data. This results in arousal and valence time curves that, either considered separately or combined into the so-called affect curve, are introduced as reliable representations of the expected transitions from one feeling to another along a video, as perceived by a viewer.
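As a minimal sketch of the dimensional representation described above, the following Python snippet maps hypothetical per-frame low-level features (motion activity, audio energy, brightness) to arousal and valence time curves and combines them into an affect curve. The feature choices and linear weights are illustrative placeholders, not the models proposed in the paper.

    import numpy as np

    def affect_curve(motion, energy, brightness, kernel=25):
        """Sketch: map low-level feature time series to an affect curve.

        The weights below are illustrative assumptions, not the paper's models.
        """
        # Arousal (intensity of affect): assumed to rise with motion and audio energy.
        arousal = 0.6 * motion + 0.4 * energy
        # Valence (type of affect): assumed here to follow brightness, centred at zero.
        valence = brightness - brightness.mean()
        # Smooth both curves so they vary on the time scale of perceived mood changes.
        window = np.ones(kernel) / kernel
        arousal = np.convolve(arousal, window, mode="same")
        valence = np.convolve(valence, window, mode="same")
        # The affect curve is the trajectory through the 2-D emotion space over time.
        return np.stack([arousal, valence], axis=1)   # shape: (frames, 2)

Feeding the per-frame features of a clip into affect_curve yields a sequence of (arousal, valence) points whose trajectory through the 2-D emotion space corresponds to the affect curve described in the abstract.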

3.
Automated analysis of human affective behavior has attracted increasing attention from researchers in psychology, computer science, linguistics, neuroscience, and related disciplines. However, existing methods typically handle only deliberately displayed and exaggerated expressions of prototypical emotions, despite the fact that deliberate behavior differs in visual appearance, audio profile, and timing from spontaneously occurring behavior. To address this problem, efforts to develop algorithms that can process naturally occurring human affective behavior have recently emerged. Moreover, an increasing number of efforts are reported toward multimodal fusion for human affect analysis, including audiovisual fusion, linguistic and paralinguistic fusion, and multi-cue visual fusion based on facial expressions, head movements, and body gestures. This paper introduces and surveys these recent advances. We first discuss human emotion perception from a psychological perspective. Next, we examine available approaches to solving the problem of machine understanding of human affective behavior, and discuss important issues such as the collection and availability of training and test data. We finally outline some of the scientific and engineering challenges to advancing human affect sensing technology.

4.
In the context of affective human behavior analysis, we use the term continuous input to refer to naturalistic settings where explicit or implicit input from the subject is continuously available, and where, in a human–human or human–computer interaction setting, the subject plays the role of either a producer or a recipient of the communicative behavior. As a result, the analysis and the response provided by the automatic system are also envisioned to be continuous over the course of time, within the boundaries of the digital machine's output. The term continuous affect analysis refers both to analysis that is continuous in time and to analysis that represents the affect phenomenon in a dimensional space. The former refers to acquiring and processing long, unsegmented recordings for detection of an affective state or event (e.g., nod, laughter, pain); the latter refers to prediction of an affect dimension (e.g., valence, arousal, power). In line with the Special Issue on Affect Analysis in Continuous Input, this survey paper aims to put the continuity aspect of affect under the spotlight by investigating current trends and providing guidance towards possible future directions.

5.
6.

Affect-driven feedback, engagement, and their mutual relation have been investigated extensively. Students' interactions, combined with their emotional state, can be used to make personalised instructional decisions and/or to modify the affective content of the feedback, with the aim of enticing engagement with the task. However, although it is generally assumed that providing encouraging feedback to students should help them adopt a state of flow, there are instances where those encouraging messages may prove counterproductive. In this paper, we analyse these issues in terms of a case study with 48 secondary school students using an Intelligent Tutoring System for arithmetical word problem solving. This system, which makes some common assumptions about how to relate affective state with performance, takes into account subjective (the user's affective state) and objective (previous problem performance) information to decide the difficulty level of the next exercise and the type of affective feedback to be delivered. Surprisingly, the findings revealed that feedback was more effective when no emotional content was included in the messages, leading to the conclusion that purely instructional and concise help messages matter more than the emotional reinforcement contained therein. This finding, which coincides with related work, shows that this is still an open issue. Different settings present different constraints, and there are related compounding factors that affect the obtained results, such as the messages' contents and their target, how to measure the effect of a message on engagement through affective variables while accounting for the other issues involved, and to what extent engagement can be manipulated solely in terms of affective feedback. The contribution here is that this research confirms that new approaches are needed to determine when, how and where affect-driven feedback is needed.
In particular, based on our previous experience in developing educational recommender systems, we suggest combining user-centred design methodologies with data mining methods to yield more effective feedback.

7.
A Kansei mining system for affective design   (total citations: 4; self-citations: 0; citations by others: 4)
Affective design has received much attention from both academia and industry. It aims at incorporating customers' affective needs into design elements that deliver affective satisfaction. The main challenge for affective design originates from difficulties in mapping customers' subjective impressions, namely Kansei, to perceptual design elements. This paper aims to develop explicit decision support to improve the Kansei mapping process by reusing knowledge from past sales records and product specifications. As one of the important applications of data mining, association rule mining lends itself to the discovery of useful patterns associated with the mapping of affective needs. A Kansei mining system is developed to utilize the valuable affect information latent in customers' impressions of existing affective designs. The goodness of association rules is evaluated according to how well they fulfil customers' expectations. Conjoint analysis is applied to measure the expected and achieved utilities of a Kansei mapping relationship. Based on this goodness evaluation, mapping rules are further refined to empower the system with useful inference patterns. The system architecture and implementation issues are discussed in detail. An application of Kansei mining to the affective design of mobile phones is presented.
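A toy sketch of the association-rule idea behind Kansei mapping follows, assuming a hypothetical list of past sales records where each record pairs the Kansei words customers used with the design elements of the product they bought. The support and confidence thresholds are arbitrary illustrations, and the paper's goodness evaluation via conjoint analysis is not reproduced here.

    from collections import Counter
    from itertools import product

    # Hypothetical past records: (customer Kansei impressions, product design elements).
    records = [
        ({"elegant", "light"}, {"slim_body", "metallic_finish"}),
        ({"elegant"},          {"slim_body", "glossy_finish"}),
        ({"rugged", "solid"},  {"thick_body", "rubber_grip"}),
        ({"elegant", "light"}, {"slim_body", "metallic_finish"}),
    ]

    def mine_kansei_rules(records, min_support=0.3, min_confidence=0.6):
        """Mine 'Kansei word -> design element' rules from sales records."""
        n = len(records)
        # How many records mention each Kansei word, and each (word, element) pair.
        word_count = Counter(w for kansei, _ in records for w in kansei)
        pair_count = Counter(
            (w, e) for kansei, design in records for w, e in product(kansei, design)
        )
        rules = []
        for (word, element), count in pair_count.items():
            support = count / n                      # fraction of records with the pair
            confidence = count / word_count[word]    # P(element | word)
            if support >= min_support and confidence >= min_confidence:
                rules.append((word, element, support, confidence))
        return rules

    for word, element, s, c in mine_kansei_rules(records):
        print(f"{word} -> {element}  (support={s:.2f}, confidence={c:.2f})")

Each surviving rule is a candidate mapping from an affective need to a design element; in the paper these candidates are further screened by how well they achieve customers' expected utilities.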

8.
What role can computers play in the study of strategic interpersonal behaviours, and in research on affective influences on social behaviour in particular? Despite intense recent interest in affective phenomena, the role of affect in social interaction has rarely been studied. This paper reviews past work on affective influences on interpersonal behaviour, with special emphasis on Michael Argyle’s pioneering studies in this field. We then discuss historical and contemporary theories of affective influences on social behaviour. A series of experiments using computer-mediated interaction tasks is described, investigating affective influences on interpersonal behaviours such as self-disclosure strategies and the production of persuasive arguments. It is suggested that computer-mediated interaction offers a reliable and valid technique for studying the cognitive, information-processing variables that facilitate or inhibit affective influences on interpersonal behaviour. These studies show that mild affective states produce significant differences in the way people perform in interpersonal situations, and can accentuate or attenuate (through affective priming) self-disclosure intimacy or persuasive argument quality. The implications of these studies for recent theories and affect-cognition models, and for our understanding of people’s everyday interpersonal strategies, are discussed.

9.
Affective states have become a crucial part of human–computer interaction research. Many studies have analysed the impact of technology on users' affective states as part of what is called user experience (UX). We consider the impact of antecedent affective states on interaction with a technological artefact. We induced positive and negative affective states using film clips and then analysed the impact of these affects on the subsequent interaction with a tablet PC. Results show that positive and negative affective states have different emotional activation patterns. Positive affect was more sensitive to changes in tasks and experimental setting. In addition, these activation patterns affected behaviour for only a short time. These findings are discussed against the background of research into UX dynamics, the dynamics of affect, and user-centred design research.

10.
Investigation into robot-assisted intervention for children with autism spectrum disorder (ASD) has gained momentum in recent years. Therapists involved in interventions must overcome the communication impairments generally exhibited by children with ASD by adeptly inferring the affective cues of the children to adjust the intervention accordingly. Similarly, a robot must also be able to understand the affective needs of these children—an ability that the current robot-assisted ASD intervention systems lack—to achieve effective interaction that addresses the role of affective states in human–robot interaction and intervention practice. In this paper, we present a physiology-based affect-inference mechanism for robot-assisted intervention where the robot can detect the affective states of a child with ASD as discerned by a therapist and adapt its behaviors accordingly. This paper is the first step toward developing “understanding” robots for use in future ASD intervention. Experimental results with six children with ASD from a proof-of-concept experiment (i.e., a robot-based basketball game) are presented. The robot learned the individual liking level of each child with regard to the game configuration and selected appropriate behaviors to present the task at his/her preferred liking level. Results show that the robot automatically predicted individual liking level in real time with 81.1% accuracy. This is the first time, to our knowledge, that the affective states of children with ASD have been detected via a physiology-based affect recognition technique in real time. This is also the first time that the impact of affect-sensitive closed-loop interaction between a robot and a child with ASD has been demonstrated experimentally.

11.
Affective modeling and affect recognition   (total citations: 10; self-citations: 2; citations by others: 10)
Affective computing is computing that relates to, arises from, or influences emotion; its goal is to give computers the ability to recognize, understand, express, and adapt to human emotions. Affective computing uses various sensors to capture the expressive and physiological signals caused by human emotions, and applies "affective models" to recognize these signals, thereby understanding human emotions and responding appropriately. This paper mainly discusses Professor Picard's research results on the affect recognition part of affective computing, focusing on affective models and affect recognition from facial expressions, speech, and physiological signals. This is one of the key problems in affective computing research and one of the foundations for building a harmonious human-machine environment.

12.
Affective computing is an emerging interdisciplinary research field bringing together researchers and practitioners from various fields, ranging from artificial intelligence and natural language processing to the cognitive and social sciences. With the proliferation of videos posted online (e.g., on YouTube, Facebook, Twitter) for product reviews, movie reviews, political views, and more, affective computing research has increasingly evolved from conventional unimodal analysis to more complex forms of multimodal analysis. This is the primary motivation behind our first-of-its-kind, comprehensive literature review of the diverse field of affective computing. Furthermore, existing literature surveys lack a detailed discussion of the state of the art in multimodal affect analysis frameworks, which this review aims to address. Multimodality is defined by the presence of more than one modality or channel, e.g., visual, audio, text, gestures, and eye gaze. In this paper, we focus mainly on the use of audio, visual and text information for multimodal affect analysis, since around 90% of the relevant literature appears to cover these three modalities. Following an overview of different techniques for unimodal affect analysis, we outline existing methods for fusing information from different modalities. As part of this review, we carry out an extensive study of different categories of state-of-the-art fusion techniques, followed by a critical analysis of the potential performance improvements of multimodal analysis over unimodal analysis. A comprehensive overview of these two complementary fields aims to form the building blocks for readers to better understand this challenging and exciting research field.
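To illustrate the fusion distinction surveyed above, here is a minimal sketch contrasting feature-level (early) and decision-level (late) fusion for audio, visual and text modalities. The feature dimensions, the classifier object, and the weights are hypothetical placeholders rather than any method from the surveyed literature.

    import numpy as np

    def feature_level_fusion(audio_feat, visual_feat, text_feat, classifier):
        """Early fusion: concatenate modality features into one vector, then classify.

        `classifier` is any fitted model exposing a scikit-learn-style predict().
        """
        fused = np.concatenate([audio_feat, visual_feat, text_feat])
        return classifier.predict(fused.reshape(1, -1))[0]

    def decision_level_fusion(predictions, weights=None):
        """Late fusion: combine per-modality class-probability vectors by weighted average."""
        probs = np.vstack(predictions)                 # one row of class probabilities per modality
        if weights is None:
            weights = np.ones(len(predictions)) / len(predictions)
        combined = weights @ probs                     # weighted average over modalities
        return int(np.argmax(combined))                # index of the winning affect class

Feature-level fusion lets a single model learn cross-modal interactions, while decision-level fusion keeps the modality classifiers independent and only merges their outputs; the review's critical analysis compares exactly these trade-offs.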

13.
The ability of a computer to detect and appropriately respond to changes in a user's affective state has significant implications for human-computer interaction (HCI). In this paper, we present our efforts toward audio-visual affect recognition on 11 affective states customized for HCI applications (four cognitive/motivational and seven basic affective states) of 20 non-actor subjects. A smoothing method is proposed to reduce the detrimental influence of speech on facial expression recognition. The feature selection analysis shows that subjects are prone to use brow movement in the face, and pitch and energy in prosody, to express their affect while speaking. For person-dependent recognition, we apply a voting method to combine the frame-based classification results from the audio and visual channels. The result shows a 7.5% improvement over the best unimodal performance. For the person-independent test, we apply a multistream HMM to combine the information from multiple component streams. This test shows a 6.1% improvement over the best component performance.
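A simplified sketch of the voting idea used for the person-dependent case: frame-level labels from the audio and visual channels are pooled and the majority label wins. The label names below are placeholders, and the underlying frame classifiers are not shown.

    from collections import Counter

    def vote_av_fusion(audio_frame_labels, visual_frame_labels):
        """Combine frame-based audio and visual classification results by majority vote."""
        votes = Counter(audio_frame_labels) + Counter(visual_frame_labels)
        label, _ = votes.most_common(1)[0]
        return label

    # Example: per-frame labels from each channel for one utterance (hypothetical).
    audio  = ["frustration", "frustration", "interest", "frustration"]
    visual = ["interest", "frustration", "frustration", "frustration"]
    print(vote_av_fusion(audio, visual))   # -> "frustration"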

14.
Psychologists have long explored the mechanisms by which humans recognize other humans' affective states from modalities such as voice and face display. This exploration has led to the identification of the main mechanisms, including the important role played in the recognition process by the modalities' dynamics. Constrained by human physiology, the temporal evolution of a modality appears to be well approximated by a sequence of temporal segments called onset, apex, and offset. Stemming from these findings, computer scientists have, over the past 15 years, proposed various methodologies to automate the recognition process. We note, however, two main limitations to date. The first is that much of the past research has focused on affect recognition from single modalities. The second is that even the few multimodal systems have not paid sufficient attention to the modalities' dynamics: the automatic determination of their temporal segments, their synchronization for the purpose of modality fusion, and their role in affect recognition are yet to be adequately explored. To address this issue, this paper focuses on affective face and body display, proposes a method to automatically detect their temporal segments or phases, explores whether the detection of the temporal phases can effectively support recognition of affective states, and recognizes affective states based on phase synchronization/alignment. The experimental results obtained show the following: 1) affective face and body displays are simultaneous but not strictly synchronous; 2) explicit detection of the temporal phases can improve the accuracy of affect recognition; 3) recognition from fused face and body modalities performs better than that from the face or the body modality alone; and 4) synchronized feature-level fusion achieves better performance than decision-level fusion.
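A rough sketch of temporal-phase segmentation on a one-dimensional expression-intensity signal, following the neutral/onset/apex/offset pattern described above. The thresholds are illustrative assumptions; the paper's actual detector operates on tracked face and body features and is more involved.

    import numpy as np

    def temporal_phases(intensity, low=0.2, high=0.8):
        """Label each frame as neutral, onset, apex, or offset from a 0..1 intensity track."""
        intensity = np.asarray(intensity, dtype=float)
        slope = np.gradient(intensity)
        phases = []
        for value, d in zip(intensity, slope):
            if value < low:
                phases.append("neutral")
            elif value >= high:
                phases.append("apex")       # near-maximal display
            elif d > 0:
                phases.append("onset")      # intensity rising toward the apex
            else:
                phases.append("offset")     # intensity decaying back to neutral
        return phases

    print(temporal_phases([0.1, 0.3, 0.6, 0.9, 0.9, 0.5, 0.2, 0.1]))
    # -> ['neutral', 'onset', 'onset', 'apex', 'apex', 'offset', 'offset', 'neutral']

Aligning the detected phases of the face and body tracks (e.g., matching their apex segments) is what the paper's synchronized feature-level fusion builds on.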

15.
This paper investigates the psychological plausibility of the bipolarity concept, i.e., that positive and negative kinds of information are treated differently. Sections 2 and 3 review relevant investigations of the representational and affective systems in the experimental psychology literature. Section 4 provides new data supporting the idea that, even when considering how affective changes occur, a certain level of independence exists between the positive and negative sides of affect. Together, the studies reported here strongly support the psychological plausibility of bipolarity: positive and negative kinds of information are not processed in the same way in either domain considered, whether preferences (affect) or beliefs (mental categories). © 2008 Wiley Periodicals Inc.

16.
This paper presents our research on the automatic annotation of a five-billion-word corpus of Japanese blogs with information on affect and sentiment. We first survey existing emotion blog corpora and find that no large-scale emotion corpus has been available for the Japanese language. We choose the largest blog corpus for the language and annotate it using two systems for affect analysis: ML-Ask for word- and sentence-level affect analysis and CAO for detailed analysis of emoticons. The annotated information includes affective features such as sentence subjectivity (emotive/non-emotive) and emotion classes (joy, sadness, etc.), useful in affect analysis. The annotations are also generalized on a two-dimensional model of affect to obtain information on sentence valence (positive/negative), useful in sentiment analysis. The annotations are evaluated in several ways: first, on a test set of a thousand randomly extracted sentences evaluated by over forty respondents; second, by comparing the annotation statistics with those of other existing emotion blog corpora; and finally, by applying the corpus in several tasks, such as the generation of an emotion object ontology and the retrieval of emotional and moral consequences of actions.

17.
18.
Affective computing is important in human–computer interaction. In interactive cloud computing over big data in particular, affective modeling and analysis face high complexity and uncertainty in assessing emotional status, as well as reduced computational accuracy. In this paper, an approach for evaluating affective experience in an interactive environment is presented to help strengthen such findings. Taking a person-independent model and cooperative interaction as core factors, facial expression features and states are used as affective indicators to perform synergetic dependence evaluation and to construct a participant's affective experience distribution map in an interactive big data space. The resulting model is potentially capable of analyzing the consistency between a participant's inner emotional status and external facial expressions, even in the presence of hidden emotions, during interactive computing. Experiments are conducted to evaluate the rationality of the affective experience modeling approach outlined in this paper. The satisfactory results on real-time camera input demonstrate availability and validity comparable to the best results achieved from facial expressions alone on real-world big data. The results suggest that the person-independent model with cooperative interaction and synergetic dependence evaluation can construct a participant's affective experience distribution and accurately perform real-time analysis of affective experience consistency from interactive big data. The affective experience distribution serves both as an analysis model and as a basis for affective computing, from which affective facial expression recognition and synthesis in interactive cloud computing can be further understood.

19.
The goal of target-specific sentiment analysis is to predict the sentiment of a text with respect to different target words; the key is to assign the appropriate sentiment words to a given target. When a sentence contains multiple sentiment words describing the sentiments toward multiple targets, mismatches between sentiment words and targets may occur. To address this, a hybrid neural network with a CRT mechanism is proposed for target-specific sentiment analysis. The model uses a CNN layer to extract features from word representations transformed by a BiLSTM, while the CRT component generates target-specific word representations and preserves the original contextual information from the BiLSTM layer. Experiments on three public datasets show that, compared with previous sentiment analysis models, the proposed model achieves clear improvements in accuracy and stability on target-specific sentiment analysis, demonstrating that the CRT mechanism effectively integrates the strengths of CNN and LSTM, which is of great significance for target-specific sentiment analysis tasks.
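The following is a condensed PyTorch sketch of a BiLSTM-plus-CNN pipeline of the kind the abstract describes; it is not the paper's CRT mechanism itself. A hypothetical target mask simply gates the BiLSTM outputs before the convolution, standing in for the target-specific representation step, and the dimensions and vocabulary size are placeholders.

    import torch
    import torch.nn as nn

    class TargetSentimentNet(nn.Module):
        """BiLSTM context encoding -> target gating -> CNN feature extraction -> classifier."""

        def __init__(self, vocab_size=10000, emb_dim=100, hidden=128, classes=3):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
            self.conv = nn.Conv1d(2 * hidden, 64, kernel_size=3, padding=1)
            self.fc = nn.Linear(64, classes)

        def forward(self, tokens, target_mask):
            # tokens: (batch, seq_len) word ids; target_mask: (batch, seq_len), 1.0 near the target.
            ctx, _ = self.bilstm(self.embed(tokens))               # (batch, seq_len, 2*hidden)
            # Stand-in for the target-specific representation: emphasize words near the target
            # while keeping the original BiLSTM context (residual-style combination).
            gated = ctx + ctx * target_mask.unsqueeze(-1)
            feats = torch.relu(self.conv(gated.transpose(1, 2)))   # CNN over the sequence
            pooled = feats.max(dim=2).values                       # max-pool over time
            return self.fc(pooled)                                 # sentiment class logits

    model = TargetSentimentNet()
    tokens = torch.randint(0, 10000, (2, 12))
    mask = torch.zeros(2, 12); mask[:, 4:6] = 1.0                  # hypothetical target positions
    print(model(tokens, mask).shape)                               # torch.Size([2, 3])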

20.
Supporting group decision-making when the decision makers are spread around the world is a complex process. Mechanisms of automated negotiation, such as argumentation, can be used in Ubiquitous Group Decision Support Systems (UbiGDSS) to help decision makers find a solution based on their preferences. However, the decision-making process is much more than a simple analysis of criteria and alternatives. There are many cognitive and affective issues that affect the outcome, and these issues should not be ignored; otherwise, the quality of the decision could be compromised. In this paper, we detail a UbiGDSS architecture and explore two cognitive and affective methods that are essential to the group decision-making process. We explain how agents can reason about their own expertise and other decision makers' credibility, and how agents can detect and react to tendencies throughout the decision-making process. We intend agents to achieve higher-quality and more consensual decisions. In every simulation environment that we tested, agents that analysed credibility and expertise and/or analysed tendencies always achieved higher consensus than agents that used neither of the proposed methods. Likewise, agents that used neither of the proposed methods, or only performed tendency analysis, obtained the worst average satisfaction levels in each simulation environment.
