Similar Articles
1.
Whang MC  Lim JS  Boucsein W 《Human factors》2003,45(4):623-634
Despite rapid advances in technology, computers remain incapable of responding to human emotions. An exploratory study was conducted to find out which physiological parameters might be useful for differentiating among 4 emotional states, based on 2 dimensions: pleasantness versus unpleasantness and arousal versus relaxation. The 4 emotions were induced by exposing 26 undergraduate students to different combinations of olfactory and auditory stimuli, selected in a pretest from 12 stimuli by subjective ratings of arousal and valence. Changes in electroencephalographic (EEG) activity, heart rate variability, and electrodermal measures were used to differentiate the 4 emotions. EEG activity separates pleasantness from unpleasantness only in the aroused domain, not in the relaxed domain, where electrodermal parameters are the differentiating ones. All three classes of parameters contribute to a separation between arousal and relaxation in the positive valence domain, whereas the latency of the electrodermal response is the only differentiating parameter in the negative domain. We discuss how such a psychophysiological approach may be incorporated into a systemic model of a computer responsive to affective communication from the user.
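The two-dimensional scheme used above can be sketched as a simple quadrant lookup; a minimal illustration in which the labels and zero-centred scales are assumptions, not taken from the paper:

```python
def quadrant_emotion(valence: float, arousal: float) -> str:
    """Map a (valence, arousal) pair to one of four emotion quadrants.

    Both inputs are assumed centred on zero: negative valence means
    unpleasant, negative arousal means relaxed.
    """
    if arousal >= 0:
        return "pleasant-aroused" if valence >= 0 else "unpleasant-aroused"
    return "pleasant-relaxed" if valence >= 0 else "unpleasant-relaxed"

# Example: a mildly pleasant but strongly arousing stimulus
print(quadrant_emotion(0.3, 0.8))  # pleasant-aroused
```

In practice the thresholds would be set per participant from the pretest ratings rather than fixed at zero.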

2.

Human perception is inherently multi-sensorial, involving the five traditional senses: sight, hearing, touch, taste, and smell. In contrast to traditional multimedia, based on audio and visual stimuli, mulsemedia seeks to stimulate all the human senses. One way to produce multi-sensorial content is to author videos with sensory effects. These effects are represented as metadata attached to the video content, which are processed and rendered through physical devices in the user's environment. However, creating sensory effects metadata is not a trivial activity, because authors have to carefully identify different details in a scene, such as the exact points at which each effect starts and finishes, as well as its presentation features, such as intensity and direction. It is a subjective task that requires accurate human perception and time. In this article, we aim to find out whether a crowdsourcing approach is suitable for authoring coherent sensory effects associated with video content. Our belief is that combining collective common sense, to indicate the time intervals of sensory effects, with expert fine-tuning is a viable way to generate sensory effects from the point of view of users. To carry out the experiment, we selected three videos from a public mulsemedia dataset and sent them to the crowd through a cascading microtask approach. The results showed that the crowd can indicate intervals in which users agree that sensory effects should be inserted, revealing a way of sharing authoring between the author and the crowd.


3.
Service robots have been developed to assist nurses in routine patient services. Prior research has recognized that patient emotional experiences with robots may be as important as robot task performance in terms of user acceptance and assessments of effectiveness. The objective of this study was to understand the effect of different service robot interface features on elderly users' perceptions and emotional responses in a simulated medicine delivery task. Twenty-four participants sat in a simulated patient room while a service robot delivered a bag of “medicine” to them. Repeated trials were used to present variations on three robot features: facial configuration, voice messaging, and interactivity. Participant heart rate (HR) and galvanic skin response (GSR) were collected. Participant ratings of robot humanness [perceived anthropomorphism (PA)] were collected post-trial, along with subjective ratings of arousal (bored–excited) and valence (unhappy–happy) using the self-assessment manikin (SAM) questionnaire. Results indicated that the presence of all three types of robot features promoted higher PA, arousal, and valence, compared to a control condition (a robot without any of the features). Participant physiological responses varied with events in their interaction with the robot. The three types of features also had different utility for stimulating participant arousal and valence, as well as physiological responses. In general, results indicated that adding anthropomorphic and interactive features to service robots promoted positive emotional responses [increased excitement (GSR) and happiness (HR)] in elderly users. It is expected that results from this study could be used as a basis for developing affective robot interface design guidelines that promote user emotional experiences.

4.
The work presented in this paper aims at assessing human emotions over short time periods using peripheral as well as electroencephalographic (EEG) physiological signals. Three specific areas of the valence–arousal emotional space are defined, corresponding to negatively excited, positively excited, and calm-neutral states. An acquisition protocol based on the recall of past emotional life episodes was designed to acquire data from both peripheral and EEG signals. Pattern classification is used to distinguish between the three areas of the valence–arousal space. The performance of several classifiers was evaluated on 10 participants and different feature sets: peripheral features, EEG time–frequency features, and EEG pairwise mutual information (MI) features. Comparison of the results obtained using either peripheral or EEG signals confirms the interest of using EEGs to assess valence and arousal under emotion recall conditions. The obtained accuracy for the three emotional classes is 63% using EEG time–frequency features, which is better than the results obtained in previous studies using EEG and similar classes. Fusion of the different feature sets at the decision level using a summation rule was also shown to improve accuracy, to 70%. Furthermore, the rejection of non-confident samples finally led to a classification accuracy of 80% for the three classes.
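Decision-level fusion with a summation rule, followed by rejection of non-confident samples, can be sketched as follows; a minimal illustration assuming each feature set produces per-class probability scores (the toy scores and rejection threshold are assumptions, not the paper's values):

```python
import numpy as np

def fuse_and_classify(score_sets, reject_threshold=None):
    """Decision-level fusion by a summation rule.

    score_sets: list of (n_samples, n_classes) arrays of class scores,
    one array per feature-set classifier (e.g. peripheral and EEG).
    Returns the predicted class per sample; samples whose fused top
    score falls below reject_threshold are marked -1 (rejected).
    """
    fused = np.sum(score_sets, axis=0) / len(score_sets)  # average scores
    pred = fused.argmax(axis=1)
    if reject_threshold is not None:
        pred = np.where(fused.max(axis=1) >= reject_threshold, pred, -1)
    return pred

# Two classifiers, three emotional classes, two samples
peripheral = np.array([[0.6, 0.3, 0.1], [0.4, 0.3, 0.3]])
eeg        = np.array([[0.5, 0.4, 0.1], [0.3, 0.4, 0.3]])
print(fuse_and_classify([peripheral, eeg], reject_threshold=0.5))
```

The second sample has no dominant fused class (maximum fused score 0.35), so it is rejected rather than classified.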

5.
Built environments play an essential role in our day-to-day lives, since people spend more than 85% of their time indoors. Previous studies at the conjunction of neuroscience and architecture confirmed the impact of architectural design features on human experience, which propelled researchers to study the improvement of human experience in built environments using quantitative methods such as biometric sensing. However, a notable gap in the knowledge persists, as researchers face sensors that are commonly used in the neuroscience domain, resulting in a disconnect regarding the selection of effective sensors for measuring human experience in designed spaces. This issue is magnified by the variety of sensor signal features that have been proposed and used in previous studies. This study builds on data captured during a series of user studies conducted to measure subjects' physiological responses in designed spaces using a combination of virtual environments and biometric sensing. It focuses on analyzing the collected sensor data to identify effective sensors, and their signal features, for classifying human experience. To that end, we used a feature attribution model (SHAP), which calculates the importance of each signal feature in terms of Shapley values. Results show that electroencephalography (EEG) sensors are more effective than galvanic skin response (GSR) and photoplethysmogram (PPG) sensors when capturing human experience in alternate designed spaces, achieving the highest SHAP value among the three at 3.55, compared to 0.34 for GSR and 0.21 for PPG. For EEG, signal features calculated from the back channels (occipital and parietal areas) were found to be as effective as those from the frontal channels (i.e., they have similar mean SHAP values per channel). In addition, frontal and occipital asymmetry were found to be effective in identifying human experience in designed spaces.
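Ranking sensors by the summed mean |SHAP| of their features, as above, might look like the following; a minimal sketch assuming a per-feature attribution matrix has already been produced by a SHAP explainer (the feature-to-sensor mapping and values are illustrative):

```python
import numpy as np

def sensor_importance(shap_values, feature_sensor):
    """Aggregate per-feature Shapley values into per-sensor importance.

    shap_values: (n_samples, n_features) attribution matrix, e.g. the
    output of a SHAP explainer. feature_sensor: sensor name for each
    feature column. Importance is the summed mean |SHAP| of that
    sensor's features.
    """
    mean_abs = np.abs(shap_values).mean(axis=0)
    out = {}
    for sensor, value in zip(feature_sensor, mean_abs):
        out[sensor] = out.get(sensor, 0.0) + float(value)
    return out

shap = np.array([[0.9, -0.8, 0.1, -0.05],
                 [-1.1, 0.7, -0.2, 0.05]])
print(sensor_importance(shap, ["EEG", "EEG", "GSR", "PPG"]))
```

Taking the absolute value before averaging matters: positive and negative attributions would otherwise cancel and understate a sensor's influence.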

6.
As a high-level function of the human brain, emotion strongly influences people's personality traits and mental health. Using the publicly available DEAP (Database for Emotion Analysis Using Physiological Signals) EEG emotion database, emotions were partitioned by valence and arousal levels, and five emotional states, including stress and calmness, were studied and analyzed. To exploit the combined temporal and spatial characteristics of EEG signals, an RCNN-LSTM classification model for emotional EEG signals was designed on the basis of two deep learning architectures: the convolutional neural network (CNN) and the long short-term memory (LSTM) network. A recurrent convolutional neural network (RCNN) automatically extracts abstract features from the EEG signals, eliminating manual feature selection and dimensionality reduction, and an LSTM network then classifies and recognizes the emotional EEG signals. Experimental results show that the method achieves an average recognition rate of 96.63% over the five emotion categories, demonstrating its effectiveness.

7.
Haptic technologies and applications have received enormous attention in the last decade. The incorporation of the haptic modality into multimedia applications adds excitement and enjoyment to an application. It also adds a more natural feel to multimedia applications that would otherwise be limited to vision and audition, by also engaging the user's sense of touch, giving a more intrinsic feel that is essential for ambient intelligence applications. However, the improvement in an application's Quality of Experience (QoE) from the addition of haptic feedback is still not completely understood. The research presented in this paper focuses on the effect of haptic feedback and what it potentially adds to the user's experience, as opposed to traditional visual and auditory feedback. In essence, it investigates certain issues regarding stylus-based haptic education applications and haptic-enhanced entertainment videos. To this end, we used two haptic applications: a haptic handwriting learning tool to experiment with force-feedback haptic interaction, and a tactile YouTube application for tactile haptic feedback. In both applications, our analysis shows that the addition of haptic feedback increases the QoE in the absence of fatigue or discomfort for this category of applications. This implies that the incorporation of the haptic modality (both force feedback and tactile feedback) contributed positively to the overall QoE for the users.

8.
Emotional design for multimedia learning is implemented primarily using aesthetic design elements (e.g., color, shape, and layout); however, we still have limited understanding of emotional design using content features that visually depict motivational cues. This study examines how the valence and arousal of emotional learning content, generated by background images in multimedia learning, influenced learners' retention of factual knowledge and cognitive load, using a 2 (valence: positive, negative) × 2 (arousal: moderately low, moderately high) between-subjects experiment. The results showed significant interaction effects, implying positive effects of using moderately low-arousing negative images on both recognition and cued-recall test scores. Both scores were significantly lower when moderately high-arousing negative images were used; however, they did not differ with increasing arousal of positive images. The emotional content did not influence germane and extraneous loads. Our findings emphasize the need to consider optimal levels of valence and arousal in emotional design.

9.

Emotion is considered a physiological state that appears whenever an individual observes a transformation in their environment or body. A review of the literature shows that combining the electrical activity of the brain with other physiological signals for the accurate analysis of human emotions has yet to be explored in greater depth. On the basis of physiological signals, this work proposes a model that uses machine learning approaches to calibrate music mood and human emotion. The proposed model consists of three phases: (a) prediction of the mood of a song based on its audio signals; (b) prediction of human emotion based on physiological signals from EEG, GSR, ECG, and a pulse detector; and (c) mapping between the music mood and the human emotion, classifying them in real time. Extensive experiments were conducted on different music mood datasets and human emotions for influential feature extraction, training, testing, and performance evaluation. An effort has been made to observe and measure human emotions with a certain degree of accuracy and efficiency by recording a person's bio-signals in response to music. Further, to test the applicability of the proposed work, playlists are generated based on the user's real-time emotion, determined using features from the different physiological sensors, and the mood depicted by musical excerpts. This work could prove helpful for improving mental and physical health by scientifically analyzing physiological signals.


10.
Different physiological signals have different origins and may describe different functions of the human body. This paper studied respiration (RSP) signals alone to assess their ability to detect psychological activity. A deep learning framework is proposed to extract and recognize the emotional information in respiration. Arousal–valence theory helps recognize emotions by mapping them into a two-dimensional space. The deep learning framework includes a sparse auto-encoder (SAE) to extract emotion-related features and two logistic regression classifiers, one for arousal classification and the other for valence classification. For model establishment, an international database for emotion classification, the Dataset for Emotion Analysis using Physiological signals (DEAP), is adopted. To further evaluate the proposed method on other people after model establishment, we used the emotion database established by Augsburg University in Germany. The accuracies for valence and arousal classification on DEAP are 73.06% and 80.78%, respectively, and the mean accuracy on the Augsburg dataset is 80.22%. This study demonstrates the potential of using respiration collected from wearable devices to recognize human emotions.

11.
While the most usual way to show emotions in digital contexts nowadays is via virtual characters, their use may raise false expectations (the user attributes human abilities to the virtual character). This paper proposes and explores an approach to expressing emotions that intends to minimize the user's expectations by using a non-anthropomorphic model. Emotions are represented in terms of the arousal and valence dimensions. They are visualized in a simple way through the behaviour and appearance of a series of cartoonish clouds. In particular, the arousal value is expressed through the movement of these clouds (controlled by a flocking algorithm), while the valence value is expressed through their degree of darkness. Furthermore, the paper describes a user experiment which investigated whether the arousal and valence expressed by our model are appropriately interpreted by users. The results suggest that movement and darkness are interpreted as arousal and valence, respectively, and that they are independent of each other.
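The mapping from the arousal–valence dimensions to cloud movement and darkness could be sketched as a pair of linear transfer functions; the ranges and clamping below are assumptions for illustration, not the paper's calibrated model:

```python
def cloud_parameters(valence: float, arousal: float):
    """Map normalised valence/arousal in [-1, 1] to two illustrative
    rendering parameters: flock speed grows with arousal, cloud
    darkness grows as valence becomes more negative.
    """
    v = max(-1.0, min(1.0, valence))   # clamp to the assumed range
    a = max(-1.0, min(1.0, arousal))
    speed = (a + 1.0) / 2.0            # 0 (calm) .. 1 (agitated)
    darkness = (1.0 - v) / 2.0         # 0 (bright) .. 1 (dark)
    return speed, darkness

print(cloud_parameters(-1.0, 1.0))  # darkest, fastest clouds: (1.0, 1.0)
```

Keeping the two channels as independent functions of their own dimension mirrors the experimental finding that movement and darkness were interpreted independently.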

12.
In addressing user experience issues, users' perceptions and emotions need to be treated as important considerations. This study examines the relationships between perceived usability/aesthetics and emotional valence/arousal/engagement through an experiment using 15 existing websites from various domains and questionnaire items developed to measure users' responses. According to the experimental results, both perceived usability and perceived aesthetics were positively correlated with emotional valence and negatively correlated with emotional engagement. No specific relationship was found between perceived usability/aesthetics and emotional arousal. Perceived aesthetics potentially had a greater impact on valence than perceived usability. Unlike valence, engagement could be more influenced by perceived usability than by perceived aesthetics. These findings can serve as bases for applying users' emotional responses in each dimension to product-use situations in the chain of perceptions, emotions, and behaviors.

13.
Machine learning-based classification models can predict operator emotional states in a human–machine system from nonlinear, multidimensional neurophysiological features. However, the dynamical properties of the testing physiological data as time series may cause the feature distribution and inter-class discrimination to vary across time steps. To overcome this shortcoming, we propose a novel EEG feature selection method, dynamical recursive feature elimination (D-RFE), to find the optimal, but different, feature rankings at each time instant for arousal and valence recognition. With the classification framework implemented via a model-selected least-squares support vector machine, the participant-specific classification performance was significantly improved over the conventional RFE model and several common classifiers. The optimal classification accuracy and F1-score elicited by the proposed method are 0.7896 and 0.7991 for the arousal dimension and 0.7143 and 0.7257 for the valence dimension, respectively, which is quite competitive among recently reported works on the same EEG database.
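Plain RFE, which D-RFE repeats at each time instant to obtain time-specific rankings, can be sketched as follows; a toy version that substitutes an absolute class-mean difference for the SVM weight magnitudes used in the paper:

```python
import numpy as np

def rfe_ranking(X, y, n_keep):
    """Toy recursive feature elimination: repeatedly score the
    remaining features and drop the weakest one. The 'weight' here is
    the absolute difference of class means per feature, a simple
    stand-in for a fitted model's coefficients.
    """
    remaining = list(range(X.shape[1]))
    while len(remaining) > n_keep:
        Xr = X[:, remaining]
        w = np.abs(Xr[y == 1].mean(axis=0) - Xr[y == 0].mean(axis=0))
        remaining.pop(int(w.argmin()))  # eliminate the weakest feature
    return remaining

# Feature 1 separates the classes best, feature 2 not at all
X = np.array([[0.0, 5.0, 0.1],
              [0.0, 5.1, 0.0],
              [1.0, 0.0, 0.1],
              [1.0, 0.2, 0.0]])
y = np.array([0, 0, 1, 1])
print(rfe_ranking(X, y, 2))  # [0, 1]
```

D-RFE would run this loop once per time step on that step's feature matrix, so the surviving feature set can differ from instant to instant.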

14.
Accurate classification of human emotions in designed spaces is essential for architects and engineers who aim to maximize positive emotions by configuring architectural design features. Previous studies at the conjunction of neuroscience and architecture confirmed the impact of architectural design features on human emotions. The recent development of biometric sensors has enabled researchers to identify emotions by measuring human physiological responses (e.g., the use of electroencephalography (EEG) to measure brain activity). However, a gap in the knowledge exists in terms of an accurate classification model for human emotions across design variants. This study proposed a convolutional neural network (CNN) based approach to classify human emotions. The approach considered two types of CNN architectures: a CNN ensemble and auto-encoders. The inputs to these CNN algorithms were 2D images generated by projecting the frequency band power of the EEG onto a scalp graph in accordance with the electrode placements. This transformation from time-series EEG data to 2D frequency band power images retains the spatial, time, and frequency domain features of participants' brain dynamics. The performance of the proposed approach was validated using multiple metrics, including precision, recall, F1-score, and Area Under the Curve (AUC). Results showed that the auto-encoder based approach achieved the best performance, with an AUC of 0.95.
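Projecting per-electrode band power onto a 2D scalp grid, as described above, can be sketched like this; the grid size and electrode coordinates are illustrative assumptions (real pipelines typically interpolate between electrodes rather than leave cells at zero):

```python
import numpy as np

def band_power_image(powers, positions, size=8):
    """Project per-electrode band power onto a size x size scalp grid.

    powers: dict electrode -> band power for one frequency band;
    positions: dict electrode -> (row, col) grid coordinate that
    approximates its scalp location. Empty cells stay at zero.
    """
    img = np.zeros((size, size), dtype=float)
    for name, p in powers.items():
        r, c = positions[name]
        img[r, c] = p
    return img

# Hypothetical coordinates for four electrodes (frontal and occipital)
positions = {"Fp1": (0, 2), "Fp2": (0, 5), "O1": (7, 2), "O2": (7, 5)}
img = band_power_image({"Fp1": 1.5, "Fp2": 1.2, "O1": 0.4, "O2": 0.5},
                       positions)
print(img.shape, img[0, 2])  # (8, 8) 1.5
```

Stacking one such image per frequency band per time window yields the multi-channel 2D inputs a CNN can consume while preserving spatial layout.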

15.
The popularity of computer games has exploded in recent years, yet methods of evaluating user emotional state during play experiences lag far behind. There are few methods of assessing emotional state, and even fewer methods of quantifying emotion during play. This paper presents a novel method for continuously modeling emotion using physiological data. A fuzzy logic model transformed four physiological signals into arousal and valence. A second fuzzy logic model transformed arousal and valence into five emotional states relevant to computer game play: boredom, challenge, excitement, frustration, and fun. Modeled emotions compared favorably with a manual approach, and the means were also evaluated against subjective self-reports, exhibiting the same trends as reported emotions for fun, boredom, and excitement. This approach provides a method for quantifying emotional states continuously during a play experience.
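A second-stage fuzzy mapping of the kind described above can be sketched with triangular membership functions and a min (fuzzy AND) rule; the breakpoints and the "fun" rule below are assumptions for illustration, not the paper's calibrated model:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over the open
    interval (a, c); returns 0 outside it."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fun_membership(arousal, valence):
    """Illustrative rule: 'fun' requires fairly high arousal AND
    positive valence; min is the usual fuzzy AND operator."""
    high_arousal = tri(arousal, 0.2, 0.7, 1.2)
    positive_valence = tri(valence, 0.0, 0.6, 1.2)
    return min(high_arousal, positive_valence)

print(round(fun_membership(0.7, 0.6), 2))  # 1.0
```

The full model would evaluate one such rule per emotional state (boredom, challenge, excitement, frustration, fun) on every sample, giving a continuous membership trace for each emotion over the play session.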

16.
Emotion recognition is a crucial application in human–computer interaction. It is usually conducted using facial expressions as the main modality, which might not be reliable. In this study, we proposed a multimodal approach that uses 2-channel electroencephalography (EEG) signals and an eye modality in addition to the face modality to enhance recognition performance. We also studied the use of facial images versus facial depth as the face modality, and adapted the common arousal–valence model of emotions and a convolutional neural network, which can model the spatiotemporal information in the modality data, for emotion recognition. Extensive experiments were conducted on the modality and emotion data, and the results showed that our system achieves high accuracies of 67.8% and 77.0% in valence recognition and arousal recognition, respectively. The proposed method outperformed most state-of-the-art systems that use similar but fewer modalities. Moreover, the use of facial depth outperformed the use of facial images. The proposed method of emotion recognition has significant potential for integration into various educational applications.

17.

Human–robot interaction has long relied on estimating human emotions from facial expressions, voice, and gestures, and human emotions have typically been categorized in a discretized manner; in contrast, we estimate continuous emotions from the facial images of common datasets. This study used linear regression to numerically quantify human emotions as valence and arousal, placing the raw images on the two respective coordinate axes. The face image datasets used in this experiment were the Japanese Female Facial Expression (JAFFE) dataset and the extended Cohn–Kanade (CK+) dataset. The emotions in these datasets were interpreted by the 85 participants who took part in the experiment. The best result from a series of experiments shows that the minimum root mean square error for the JAFFE dataset was 0.1661 for valence and 0.1379 for arousal. The proposed method has been compared with previous methods based on other stimuli, such as songs and sentences, and testing on common datasets shows that it achieves outstanding emotion estimation performance.


18.
The accurate quantification of the correlation between product color attributes and user emotions is the key to product color emotional design (PCED). However, traditional methods of emotion quantification are subjective, one-sided, and contingent, which reduces the reliability of the research results. To this end, this paper proposes a method for PCED based on the quantification of electroencephalogram (EEG) physiological indicators. A medical product, an infant incubator, is used as the experimental stimulus sample, and “unsafe–safe” is used as the perceptual imagery word pair to conduct EEG measurement experiments on the user's emotional state. Two types of data are obtained in the experiment: behavioral data and EEG data. Via the analysis of the two types of data, the EEG physiological characteristic indicators (the event-related potential (ERP) components) are obtained, which can explain the user's emotional cognitive mechanism. Finally, the relationship between the user's emotional state and EEG characteristics under the influence of product color attributes is explored. The conclusions are as follows. (1) The “safety” emotional value of the two-color samples is higher than that of the three-color samples, which indicates that the simpler the color matching, the higher the safety emotion attribute of the samples. (2) A study of the three attributes of hue, lightness, and chroma in the assistant colors of the two-color samples found that when the hue attributes of the samples are red, cyan, and blue, the safety emotional value is higher; moreover, the higher the lightness and chroma attributes, the higher the safety emotion. Research on the two color attribute dimensions of hue harmony and color tone harmony between the auxiliary color and the embellishment color of the three-color samples revealed that the more consistent the hue harmony, the higher the safety emotion, and the more significant the difference in color tone harmony, the higher the safety emotion. (3) The reaction time data among the behavioral data demonstrate that the participants had the longest average reaction time under neutral emotions. (4) The results of the time–frequency analysis of the EEG data reveal apparent mid-to-early ERP components between 100 and 300 ms after the appearance of the stimulus samples. The ERP component analysis results show that the P1 component can reflect emotional valence to a certain extent: the higher the amplitude of the P1 component, the more pronounced the participants' negative emotions, and the lower the safety emotional evaluation value of the two-color samples, the higher the amplitude of the P1 component. Moreover, the N2 and P3 components can reflect the degree of emotional arousal to a certain extent, and an increase in their amplitude indicates a stronger emotion in the participant; furthermore, the correlation between N2 and emotional arousal is higher. The two-color and three-color experiments revealed that the more neutral the emotional state, the lower the amplitudes of N2 and P3. This research is expected to provide a theoretical foundation and experimental data basis for PCED methods based on EEG physiological indicators.

19.
Emotional human–computer interactions are attracting increasing interest as the available technology improves. Through presenting affective stimuli and empathic communication, computer agents are able to adjust to users' emotional states, and users may, as a result, produce better task performance. Existing studies have mainly focused on the effect of only a few basic emotions, such as happiness and frustration, on human performance, and most research has explored this issue from a psychological perspective. This paper presents an emotion–performance relation model in the context of vehicle driving. This general emotion–performance model is constructed on an arousal–valence plane and is not limited to basic emotions. Fifteen paid participants took part in two driving simulation experiments that yielded 115 emotion–performance sample pairs. These samples revealed the following: (1) driving performance has an inverted U-shaped relationship with the intensities of both arousal and valence; it deteriorates at extreme arousal and valence. (2) Optimal driving performance, corresponding to an appropriate emotional state, matches the “sweet spot” phenomenon of engagement psychology. (3) Arousal and valence are not perfectly independent across the entire 2-D emotion plane. Extreme valence is likely to stimulate a high level of arousal, which, in turn, deteriorates task performance. The emotion–performance relation model proposed in the paper is useful in designing emotion-aware intelligent systems to predict and prevent task performance degradation at an early stage and throughout human–computer interactions.

20.
Emotion is a summary of the subjective cognition generated by the brain. Brain-signal decoding offers a relatively objective and effective way to study human emotions and the associated cognitive behaviors. This paper proposes an EEG emotion recognition method based on graph attention networks, called multi-path graph attention networks (MPGAT). The method builds a graph over the EEG channels, uses convolutional layers to extract the temporal and per-frequency-band features of the EEG signals, and applies graph attention networks to further capture the local features of emotional EEG signals and the intrinsic functional relationships between brain regions, thereby constructing a better EEG representation. MPGAT achieves average cross-subject emotion recognition accuracies of 86.03% and 72.71% on the SEED and SEED-IV datasets, respectively, and average cross-subject accuracies of 76.35% and 75.46% on the valence and arousal dimensions of the DREAMER dataset, matching and in part exceeding the performance of current state-of-the-art EEG emotion recognition methods. The proposed EEG processing method may provide a new technical tool for research on the cognitive science of emotion and for affective brain–computer interface systems.
