Similar Documents
20 similar documents found (search time: 531 ms)
2.
Current computational emotion recognition techniques have successfully associated emotional changes with EEG signals, so emotions can be identified and classified from EEG signals when appropriate stimuli are applied. However, automatic recognition is usually restricted to a small number of emotion classes, mainly due to signal features and noise, EEG constraints, and subject-dependent issues. In order to address these issues, this paper proposes a novel feature-based emotion recognition model for EEG-based Brain-Computer Interfaces. Unlike other approaches, our method explores a wider set of emotion types and incorporates additional features relevant for signal pre-processing and recognition tasks, based on a dimensional model of emotions: Valence and Arousal. It aims to improve the accuracy of emotion classification by combining mutual-information-based feature selection methods and kernel classifiers. Experiments on standard EEG datasets show the promise of the approach when compared with state-of-the-art computational methods.
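As a rough illustration of the pipeline this abstract describes, mutual-information feature selection feeding a kernel classifier, a minimal scikit-learn sketch might look like the following; the data shapes, the number of selected features, and all hyperparameters are placeholder assumptions rather than values from the paper.

```python
# Sketch: mutual-information feature selection + RBF kernel classifier.
# Shapes, class count and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 160))   # 200 trials x 160 EEG features (placeholder)
y = rng.integers(0, 4, size=200)      # 4 emotion classes in valence-arousal space

clf = make_pipeline(
    SelectKBest(mutual_info_classif, k=32),   # keep the 32 most informative features
    SVC(kernel="rbf", C=1.0, gamma="scale"),  # kernel classifier
)
print(cross_val_score(clf, X, y, cv=5).mean())
```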

3.
We propose an emotion-aware social network, namely the Affective Social Network. Our approach first builds a user's emotion profile in terms of personality, mood, and emotion by analysing the user's activities in the social network. It then builds an Emotional Relationship Matrix (ERM), which represents the depth of emotional relationships based on the emotion profiles. With this model, more elaborate services based on the user's current emotional state can be provided. By considering emotional aspects in a social network, we can effectively determine which users or media contents will be most effective for inducing users' emotional states. From the experiments, we verified that messages containing emotional words correlate significantly with users' relationships in a social network.

4.
With the advent of the ubiquitous era, many studies have been devoted to various situation-aware services in the semantic web environment. One of the most challenging studies involves implementing a situation-aware personalized music recommendation service which considers the user's situation and preferences. Situation-aware music recommendation requires multidisciplinary efforts, including low-level feature extraction and analysis, music mood classification, and human emotion prediction. In this paper, we propose a new scheme for a situation-aware, user-adaptive music recommendation service in the semantic web environment. To do this, we first discuss utilizing knowledge for analyzing and retrieving music content semantically, and a user-adaptive music recommendation scheme based on semantic web technologies that facilitates the development of domain knowledge and a rule set. Based on this discussion, we describe our Context-based Music Recommendation (COMUS) ontology for modeling the user's musical preferences and contexts, and for supporting reasoning about the user's desired emotions and preferences. Basically, COMUS defines an upper music ontology that captures concepts for the general properties of music, such as titles, artists, and genres. In addition, for extensibility, it allows domain-specific ontologies, such as music features, moods, and situations, to be added in a hierarchical manner. Using this context ontology, high-level (implicit) knowledge, such as situations, can be inferred from low-level (explicit) knowledge through logical reasoning rules. As an innovation, our ontology can express detailed and complicated relations among music clips, moods, and situations, which enables users to find appropriate music. We present some of the experiments we performed as a case study for music recommendation.

5.
The interaction with software systems is often affected by many types of hurdles that induce users to make errors and mistakes, and that break the continuity of their reasoning while carrying out a working task with the computer. As a consequence, negative emotional states, such as frustration, dissatisfaction, and anxiety, may arise. In this paper, we illustrate how the Software Shaping Workshop (SSW) methodology can represent a solution to the problem of developing interactive systems that are correctly perceived and interpreted by end-users, thus becoming more acceptable and favouring positive emotional states. In the methodology, a key role is played by domain-expert users, that is, experts in a specific domain who are not necessarily experts in computer science. Domain-expert users' skills and background, including their knowledge of the domain and of users' needs and habits, are exploited to create context- and emotion-aware visual interactive systems. Examples of these systems are illustrated by referring to a case study in the automation field.

6.
Tang Zhichuan, Li Xintao, Xia Dan, Hu Yidan, Zhang Lingtao, Ding Jun. Multimedia Tools and Applications, 2022, 81(5): 7085-7101

Self-assessment methods are widely used in art therapy evaluation, but emotion recognition methods based on physiological signal features are more objective and accurate. In this study, we propose an electroencephalogram (EEG)-based art therapy evaluation method that evaluates the therapeutic effect from the emotional changes before and after art therapy. Twelve participants were recruited for a two-step experiment (an emotion stimulation step and a drawing therapy step), and their EEG signals and self-assessment scores were collected. The self-assessment model (SAM) was used to obtain and label the actual emotional states; a long short-term memory (LSTM) network was used to extract deep temporal features of the EEG to recognize emotions. Further, the classification performances for different sequence lengths, time-window lengths, and frequency combinations were compared and analyzed. The results showed that emotion recognition models with LSTM deep temporal features achieved better classification performance than state-of-the-art methods with non-temporal features; the classification accuracies in high-frequency bands (α, β, and γ) were higher than those in low-frequency bands (δ and θ); and the highest emotion classification accuracy (93.24%) was obtained with a 10-s sequence length, a 2-s time-window length, and a 5-band frequency combination. Our proposed method recognizes emotions effectively and accurately, and offers an objective approach to assist therapists or patients in evaluating the effect of art therapy.
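A minimal sketch of the core classifier described here, an LSTM over windowed EEG features, might look as follows in PyTorch; the 5-step sequence mirrors the 10-s/2-s windowing mentioned above, while the feature and layer sizes are assumptions.

```python
# Sketch: LSTM classifier over a sequence of windowed EEG features.
import torch
import torch.nn as nn

class EEGEmotionLSTM(nn.Module):
    def __init__(self, n_features=160, hidden=64, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, time, features)
        _, (h, _) = self.lstm(x)       # h: (1, batch, hidden), last hidden state
        return self.head(h[-1])        # logits per emotion class

model = EEGEmotionLSTM()
x = torch.randn(8, 5, 160)             # 8 trials, 5 windows of band features
logits = model(x)                      # (8, 3)
```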


7.
Context: Learning can be regarded as knowledge construction in which prior knowledge and experience serve as the basis for learners to expand their knowledge base. Such a process of knowledge construction has to take place continuously in order to enhance the learners' competence in a competitive working environment. As information consumers, individual users demand personalised information provision which meets their own specific purposes, goals, and expectations. Objectives: The current methods in requirements engineering are capable of modelling the common user's behaviour in the domain of knowledge construction. The users' requirements can be represented as a case in the defined structure, which can be reasoned over to enable requirements analysis. Such analysis needs to be enhanced so that personalised information provision can be tackled and modelled. However, there is a lack of suitable modelling methods to achieve this end. This paper presents a new ontological method for capturing individual users' requirements and transforming them into personalised information provision specifications. Hence the right information can be provided to the right user for the right purpose. Method: An experiment was conducted based on a qualitative method. A medium-sized group of users participated to validate the method and its techniques, i.e. articulation, mapping, configuration, and learning content. The results were used as feedback for improvement. Result: The research work has produced an ontology model with a set of techniques which support the functions of profiling users' requirements, reasoning over requirements patterns, generating workflows from norms, and formulating information provision specifications. Conclusion: The current requirements engineering approaches provide the methodical capability for developing solutions. Our research outcome, i.e. the ontology model with its techniques, can further enhance these RE approaches for modelling individual users' needs and discovering users' requirements.

8.
We describe a novel design, implementation, and evaluation of a speech interface, as part of a platform for the development of serious games. The speech interface consists of a speech recognition component and an emotion recognition from speech component. It relies on a platform designed and implemented to support the development of serious games for cognitive-based treatment of patients with mental disorders. The implementation of the speech interface is based on the Olympus/RavenClaw framework. This framework has been extended for the needs of the specific serious games and the respective application domain by integrating new components, such as emotion recognition from speech. The evaluation of the speech interface utilized a purposely collected domain-specific dataset. The speech recognition experiments show that emotional speech moderately affects the performance of the speech interface. Furthermore, the emotion detectors demonstrated satisfactory performance for the emotion states of interest, Anger and Boredom, and contributed towards successful modelling of the patient's emotional status. A recent evaluation of the serious games showed that the patients started to show new coping styles with negative emotions in normal stressful life situations.

9.
Emotion recognition for music objects is a promising and important research issue in the field of music information retrieval. Usually, music emotion recognition can be considered a training/classification problem. However, even given a benchmark (training data with ground truth) and effective classification algorithms, music emotion recognition remains challenging. Most previous work focuses only on acoustic music content without considering individual differences (i.e., personalization issues). In addition, the assessment of emotions is usually self-reported (e.g., emotion tags), which may introduce inaccuracy and inconsistency. Electroencephalography (EEG) is a non-invasive brain-machine interface which allows external machines to sense neurophysiological signals from the brain without surgery. Such unintrusive EEG signals, captured from the central nervous system, have been utilized for exploring emotions. This paper proposes an evidence-based and personalized model for music emotion recognition. In the training phase, for model construction and personalized adaptation, we construct two predictive, generic models based on IADS (the International Affective Digitized Sound system, a set of acoustic emotional stimuli for experimental investigations of emotion and attention): ANN1 ("EEG recordings of a standardized group vs. emotions") and ANN2 ("music audio content vs. emotions"). Both models are trained with artificial neural networks. We then collect a subject's EEG recordings while listening to the selected IADS samples, and apply ANN1 to determine the subject's emotion vector. With the generic model and the corresponding individual differences, we construct the personalized model H by projective transformation. In the testing phase, given a music object, the processing steps are: (1) extract features from the music audio content; (2) apply ANN2 to calculate the vector in the arousal-valence emotion space; and (3) apply the transformation matrix H to determine the personalized emotion vector. Moreover, for a music object of moderate length, we apply a sliding window to obtain a sequence of personalized emotion vectors, which are fitted and organized as an emotion trail revealing the dynamics of the affective content. Experimental results suggest the proposed approach is effective.
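The final personalization step, applying the projective transformation H to a generic arousal-valence prediction, reduces to a small homogeneous-coordinates computation; the matrix values below are purely illustrative.

```python
# Sketch: personalizing a generic arousal-valence vector via a projective
# transform H. H here is an assumed example; in the paper it is estimated
# from the subject's EEG-derived emotion vectors.
import numpy as np

def personalize(av, H):
    """Apply a 3x3 projective transform to a 2-D arousal-valence vector."""
    p = H @ np.array([av[0], av[1], 1.0])  # homogeneous coordinates
    return p[:2] / p[2]                    # back to 2-D

H = np.array([[1.1, 0.0, 0.05],
              [0.0, 0.9, -0.02],
              [0.0, 0.0, 1.0]])            # assumed example transform
generic_av = np.array([0.3, -0.6])         # generic model output for a clip
print(personalize(generic_av, H))          # personalized emotion vector
```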

10.
As a new cyber-physical application, emotion recognition has been shown to make human-in-the-loop cyber-physical systems (HilCPS) more efficient and sustainable; it is therefore of great significance for HilCPS. Electroencephalogram (EEG) signals contain abundant and useful information, and can objectively reflect human emotional states. At present, the main approach is to recognize emotion from EEG signals using machine learning. This approach depends on the quantity and quality of the samples as well as on the capability of the classification model. However, the quantity of EEG samples is often insufficient, their quality is often irregular, and EEG samples possess strong nonlinearity. Therefore, this paper proposes an EEG emotion recognition method based on transfer learning (TL) and an echo state network (ESN) for HilCPS. First, a selection algorithm for EEG samples based on the average Fréchet distance is proposed to improve sample quality. Second, a feature transfer algorithm for EEG samples based on transfer component analysis is proposed to expand the sample quantity. Third, to address the nonlinearity of EEG samples, a classification model based on an ESN is constructed to accurately classify emotional states. Finally, experimental results show that, compared with traditional methods, the proposed method can expand the quantity of high-quality EEG samples and effectively improve the accuracy of emotion recognition.
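A compact sketch of the echo state network component might look as follows; the reservoir size, spectral radius, and ridge readout are conventional ESN choices assumed here, not the paper's settings.

```python
# Sketch: echo state network for classifying EEG feature sequences.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 32, 300
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius ~0.9

def reservoir_state(seq):
    """Run a sequence (T, n_in) through the reservoir; return final state."""
    x = np.zeros(n_res)
    for u in seq:
        x = np.tanh(W_in @ u + W @ x)
    return x

def train_readout(states, onehot, lam=1e-2):
    """Ridge-regression readout from final states to one-hot emotion labels."""
    S = np.asarray(states)
    return np.linalg.solve(S.T @ S + lam * np.eye(n_res), S.T @ onehot)

states = [reservoir_state(rng.standard_normal((50, n_in))) for _ in range(20)]
onehot = np.eye(4)[rng.integers(0, 4, 20)]         # toy emotion labels
W_out = train_readout(states, onehot)              # (n_res, 4) readout weights
```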

11.
An exergame platform can be more appealing to users if they can interact with it emotionally. Therefore, in this paper, we propose an automatic emotion recognition system from speech to be embedded in an exergame platform. While playing and exercising, the user expresses his or her feelings by uttering phrases. The speech is recorded by an omnidirectional microphone and transmitted to an emotion recognition server in a cloud environment, where the emotion (e.g. happy, sad, or neutral) is recognized. For the recognition, we use MPEG-7 low-level audio features and a Gaussian mixture model based classifier. A tactile vibration is generated based on the emotion and fed back to the user for a realistic feeling, so the user receives instantaneous vibrational feedback reflecting his or her satisfaction. The recognized emotion can also be used as a measure of the user's satisfaction with the framework on the fly, without the need for a survey session. The experimental study and performance comparison show that the proposed framework has positive effects on the perception of physical activities.
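The Gaussian-mixture classification step could be sketched as below: one GMM per emotion, with classification by maximum log-likelihood; the toy feature matrices stand in for the MPEG-7 low-level audio features.

```python
# Sketch: per-emotion GMMs, classification by maximum average log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_gmms(features_by_emotion, n_components=8):
    """features_by_emotion: dict label -> (n_frames, n_features) array."""
    return {lab: GaussianMixture(n_components, covariance_type="diag",
                                 random_state=0).fit(F)
            for lab, F in features_by_emotion.items()}

def classify(gmms, F):
    """Pick the emotion whose GMM gives the utterance the highest score."""
    return max(gmms, key=lambda lab: gmms[lab].score(F))

rng = np.random.default_rng(1)
train = {lab: rng.standard_normal((500, 13)) + i       # toy audio features
         for i, lab in enumerate(["happy", "sad", "neutral"])}
gmms = train_gmms(train)
print(classify(gmms, rng.standard_normal((120, 13)) + 1))  # likely "sad"
```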

12.
刘嘉敏, 苏远歧, 魏平, 刘跃虎. Acta Automatica Sinica (自动化学报), 2020, 46(10): 2137-2147
Emotion recognition based on the interactive cooperation of video and EEG signals is an important and challenging problem in human-computer interaction. This paper proposes a video-EEG cooperative emotion recognition model based on a long short-term memory (LSTM) network and an attention mechanism. The model takes as input the facial video and EEG signals collected while participants watch emotion-inducing videos, and outputs the participants' recognized emotions. At each time step, the model extracts convolutional neural network (CNN) based facial video features together with the corresponding EEG features, fuses them with the LSTM, and predicts the key emotional signal frame at the next time step, until the emotion recognition result is computed at the final time step. In this process, a spatial frequency-band attention mechanism weights the importance of the EEG α, β, and θ bands, so that key spatial information in the EEG signals is exploited more effectively; a temporal attention mechanism predicts the key signal frame at the next time step, so that key temporal information in the emotional data is exploited more effectively. The proposed method and model were tested on two benchmark datasets, MAHNOB-HCI and DEAP, and achieved good recognition results. The experimental results show that this work provides an effective solution to emotion recognition based on the interactive cooperation of video and EEG signals.
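The frequency-band attention idea could be sketched as a small PyTorch module that learns one importance weight per band; the dimensions are assumptions, and the full model additionally fuses CNN video features through an LSTM.

```python
# Sketch: learnable attention over EEG frequency-band features.
import torch
import torch.nn as nn

class BandAttention(nn.Module):
    def __init__(self, feat=32):
        super().__init__()
        self.score = nn.Linear(feat, 1)          # one importance score per band

    def forward(self, x):                        # x: (batch, n_bands, feat)
        w = torch.softmax(self.score(x), dim=1)  # band weights sum to 1
        return (w * x).sum(dim=1), w.squeeze(-1) # weighted feature + weights

att = BandAttention()
bands = torch.randn(4, 3, 32)                    # alpha, beta, theta features
fused, weights = att(bands)                      # weights: (4, 3)
```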

13.
王湖斐. Chinese Journal of Sensors and Actuators (传感技术学报), 2020, 33(1): 63-67, 90
This paper studies the differences in emotional EEG signals across brain regions. The brain was divided into five regions according to the international 10-20 electrode placement system. Video stimuli were used to induce positive, neutral, and negative emotions in the subjects while their EEG signals were recorded, and the wavelet coherence index of each brain region was used as the feature for studying the differences and performing pattern recognition. The results show that the wavelet coherence indices of the δ band in the frontal and parietal lobes differ significantly across emotional states (p < 0.05); taking the neutral-emotion wavelet coherence index as the baseline, the index increases for negative emotions and decreases for positive emotions. The experimental results verify that the wavelet coherence indices of the frontal and parietal lobes discriminate well among the three emotion classes, with a recognition rate of up to 96.67% for the parietal lobe, further demonstrating that the frontal and parietal lobes are activated during emotion processing and that the degree of activation differs across emotional states.
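As a loose stand-in for the wavelet coherence index used here, the sketch below computes SciPy's spectral coherence between two channels and averages it over the δ band; the sampling rate, band edges, and random placeholder signals are assumptions, and true wavelet coherence would require a dedicated implementation.

```python
# Sketch: delta-band spectral coherence between two EEG channels
# (an approximation in spirit of the paper's wavelet coherence index).
import numpy as np
from scipy.signal import coherence

fs = 250.0                                     # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
frontal = rng.standard_normal(5000)            # placeholder frontal channel
parietal = rng.standard_normal(5000)           # placeholder parietal channel

f, Cxy = coherence(frontal, parietal, fs=fs, nperseg=512)
delta = (f >= 1) & (f <= 4)
print(Cxy[delta].mean())                       # delta-band coherence index
```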

14.
Multi-modal affective data, such as EEG and physiological signals, are increasingly utilized to analyze human emotional states. However, due to the noise in the collected affective data, the performance of emotion recognition is still not satisfactory. In fact, the issue of emotion recognition can be regarded as channel coding, which focuses on reliable communication through noisy channels. Using affective data and their labels, a redundant codeword can be generated to correct signal noise and recover the emotional label information. Therefore, we utilize a multi-label output codes method to improve the accuracy and robustness of multi-dimensional emotion recognition by training a redundant codeword model, following the idea of error-correcting output codes. The experimental results on the DEAP dataset show that the multi-label output codes method outperforms traditional machine learning and pattern recognition methods for the prediction of emotional multi-labels.
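The error-correcting output codes idea is available off the shelf in scikit-learn; the toy single-label sketch below only illustrates the coding mechanism, whereas the paper applies it to multi-label emotion dimensions on DEAP.

```python
# Sketch: error-correcting output codes with a redundant codeword.
import numpy as np
from sklearn.multiclass import OutputCodeClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 64))        # EEG/physiological features (toy)
y = rng.integers(0, 4, size=300)          # quadrants of valence-arousal space

ecoc = OutputCodeClassifier(
    LogisticRegression(max_iter=1000),
    code_size=2.0,                         # redundancy: codeword 2x #classes
    random_state=0,
).fit(X, y)
print(ecoc.predict(X[:5]))
```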

15.
As users may have different needs in different situations and contexts, it is increasingly important to consider user context data when filtering information. In the field of web personalization and recommender systems, most studies have focused on modelling user profiles and on the personalization process in order to provide personalized services to the user, but not on contextualized services. Rather limited attention has been paid to investigating how to discover, model, exploit, and integrate context information in personalization systems in a generic way. In this paper, we aim to provide a novel model to build, exploit, and integrate context information within a web personalization system. A context-aware personalization system (CAPS) is developed which is able to model and build contextual and personalized ontological user profiles based on the user's interests and context information. These profiles are then exploited to infer and provide contextual recommendations to users. The methods and system developed are evaluated through a user study, which shows that considering context information in web personalization systems can provide more effective personalization services and better recommendations to users.

16.
The accurate quantification of the correlation between product color attributes and user emotions is the key to product color emotional design (PCED). However, traditional methods of emotion quantification are subjective, one-sided, and contingent, which reduces the reliability of the research results. To this end, this paper proposes a method for PCED based on the quantification of electroencephalogram (EEG) physiological indicators. A medical product, namely an infant incubator, is used as the experimental stimulus sample set, and "unsafe-safe" is used as the perceptual imagery word pair to conduct EEG measurement experiments on the user's emotional state. Two types of data are obtained in the experiment: behavioral data and EEG data. Via the analysis of the two types of data, the EEG physiological characteristic indicators (the event-related potential (ERP) components) are obtained, which can explain the user's emotional cognitive mechanism. Finally, the relationship between the user's emotional state and EEG characteristics under the influence of product color attributes is explored. The conclusions are as follows. (1) The "safety" emotional value of the two-color samples is higher than that of the three-color samples, which indicates that the simpler the color matching, the higher the safety emotion attribute of the sample. (2) The study of the three attributes of hue, lightness, and chroma in the assistant colors of the two-color samples found that the safety emotional value is higher when the hue is red, cyan, or blue; moreover, the higher the lightness and chroma attributes, the higher the safety emotion. Research on the two color-attribute dimensions of hue harmony and tone harmony between the auxiliary color and the embellishment color of the three-color samples revealed that the more consistent the hue harmony, the higher the safety emotion, and the more significant the difference in tone harmony, the higher the safety emotion. (3) The reaction time data in the behavioral data demonstrate that the participants had the longest average reaction time under neutral emotions. (4) The time-frequency analysis of the EEG data reveals apparent mid-to-early ERP components between 100 and 300 ms after the appearance of the stimulus samples. The ERP component analysis shows that the P1 component can reflect emotional valence to a certain extent: the higher the amplitude of the P1 component, the more pronounced the participants' negative emotions, and the lower the safety emotional evaluation value of the two-color samples, the higher the P1 amplitude. Moreover, the N2 and P3 components can reflect the degree of emotional arousal to a certain extent, and an increase in their amplitude indicates stronger emotion; the correlation between N2 and emotional arousal is higher. The two-color and three-color experiments revealed that the more neutral the emotional state, the lower the amplitudes of N2 and P3. This research is expected to provide a theoretical foundation and an experimental data basis for PCED methods based on EEG physiological indicators.
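The ERP components discussed here (P1, N2, P3) are typically quantified as mean amplitudes within component-specific time windows; the sketch below uses common window conventions, which are assumptions rather than the exact windows of this study.

```python
# Sketch: mean ERP amplitude within per-component time windows.
import numpy as np

fs = 500.0                       # assumed sampling rate (Hz)
t = np.arange(-0.2, 0.8, 1 / fs) # epoch: -200 ms to 800 ms around stimulus
epochs = np.random.default_rng(0).standard_normal((40, t.size))  # 40 trials

def mean_amplitude(epochs, t, t0, t1):
    """Average amplitude across trials inside the window [t0, t1] seconds."""
    win = (t >= t0) & (t <= t1)
    return epochs[:, win].mean()

print("P1:", mean_amplitude(epochs, t, 0.08, 0.13))   # ~80-130 ms (assumed)
print("N2:", mean_amplitude(epochs, t, 0.20, 0.30))   # ~200-300 ms (assumed)
print("P3:", mean_amplitude(epochs, t, 0.30, 0.50))   # ~300-500 ms (assumed)
```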

17.
Many experts predict that the next huge step forward in Web information technology will be achieved by adding semantics to Web data, and will possibly consist of (some form of) the Semantic Web. In this paper, we present a novel approach to Semantic Web search, called Serene, which allows for a semantic processing of Web search queries, and for evaluating complex Web search queries that involve reasoning over the Web. More specifically, we first add ontological structure and semantics to Web pages, which then allows for both attaching a meaning to Web search queries and Web pages, and for formulating and processing ontology-based complex Web search queries (i.e., conjunctive queries) that involve reasoning over the Web. Here, we assume the existence of an underlying ontology (in a lightweight ontology language) relative to which Web pages are annotated and Web search queries are formulated. Depending on whether we use a general or a specialized ontology, we thus obtain a general or a vertical Semantic Web search interface, respectively. That is, we are actually mapping the Web into an ontological knowledge base, which then allows for Semantic Web search relative to the underlying ontology. The latter is then realized by reduction to standard Web search on standard Web pages and logically completed ontological annotations. That is, standard Web search engines are used as the main inference motor for ontology-based Semantic Web search. We develop the formal model behind this approach and also provide an implementation in desktop search. Furthermore, we report on extensive experiments, including an implemented Semantic Web search on the Internet Movie Database.

18.
Electroencephalography (EEG) reflects the activities of the human brain and can represent different emotional states, providing objective scientific evidence for daily-life emotional health monitoring. However, traditional multi-channel EEG sensing contains irrelevant or even interfering features, channels, or samples, leading to redundant data and hardware complexity. This paper proposes a feature-channel-sample hybrid selection method to improve the channel selection, feature extraction, and classification scheme for daily-life EEG emotion recognition. The features and channels are selected in pairs with sparsity-constrained differential evolution, where the feature-channel pairs are optimized synchronously in the global search. Furthermore, a distance evaluation is carried out to remove abnormal samples and improve the emotion recognition accuracy. Therefore, efficient feature vectors for valence-arousal classification can be obtained from a small number of sparsely distributed channels. The experiments are based on the widely used emotion recognition database DEAP and generate a feature-channel-sample hybrid selection scheme with optimized parameter settings. The proposed method can reduce the EEG channels sharply while maintaining a relatively high accuracy compared with related work. Furthermore, by applying this optimal scheme in practice, real-scene daily-life EEG emotion recognition experiments are carried out on a sparsity-constrained web-enabled system, and a 10-fold cross-validation confirms the performance. In conclusion, this paper provides a practical and efficient hardware configuration and an optimal feature-channel-sample selection scheme for daily-life EEG emotion recognition.
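A sketch of sparsity-constrained selection with differential evolution is given below: continuous weights are thresholded into a channel mask, and the objective trades classification error against the number of channels kept; the classifier, penalty weight, and toy data are assumptions.

```python
# Sketch: channel selection via differential evolution with a sparsity penalty.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_channels = 32
X = rng.standard_normal((200, n_channels))  # one feature per channel (toy)
y = rng.integers(0, 2, size=200)            # valence high/low

def objective(w, lam=0.02):
    mask = w > 0.5                           # threshold weights into a mask
    if not mask.any():
        return 1.0                           # empty selection is worst case
    acc = cross_val_score(LogisticRegression(max_iter=500),
                          X[:, mask], y, cv=3).mean()
    return (1 - acc) + lam * mask.sum()      # error + sparsity penalty

res = differential_evolution(objective, [(0, 1)] * n_channels,
                             maxiter=10, popsize=6, seed=0, polish=False)
print("channels kept:", np.flatnonzero(res.x > 0.5))
```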

19.
Emotion is a summary of the subjective cognition produced by the brain. Brain-signal decoding techniques can study human emotions and the related cognitive behaviors effectively and in a relatively objective way. This paper proposes an EEG emotion recognition method based on graph attention networks (multi-path graph attention networks, MPGAT). The method builds a graph over the EEG channels, uses convolutional layers to extract the temporal features and per-frequency-band features of the EEG signals, and uses graph attention networks to further capture the local features of emotional EEG signals and the intrinsic functional relationships among brain regions, thereby constructing better EEG representations. MPGAT achieves average cross-subject emotion recognition accuracies of 86.03% and 72.71% on the SEED and SEED-IV datasets, respectively, and average cross-subject accuracies of 76.35% and 75.46% on the valence and arousal dimensions of the DREAMER dataset, matching and partly exceeding the performance of current state-of-the-art EEG emotion recognition methods. The proposed EEG signal processing method is expected to provide a new technical means for research in emotion cognitive science and for emotion brain-computer interface systems.
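The graph-attention building block that MPGAT stacks over EEG channels could be sketched from scratch as a single-head layer; this is an illustration of the mechanism, not the authors' implementation, and the 62-channel fully connected graph is a toy assumption.

```python
# Sketch: single-head graph attention over EEG channel nodes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, h, adj):               # h: (N, in_dim); adj: (N, N) 0/1
        z = self.W(h)                        # (N, out_dim)
        n = z.size(0)
        pairs = torch.cat([z.repeat_interleave(n, 0),
                           z.repeat(n, 1)], dim=1)      # all node pairs
        e = F.leaky_relu(self.a(pairs)).view(n, n)      # raw attention scores
        e = e.masked_fill(adj == 0, float("-inf"))      # keep graph edges only
        alpha = torch.softmax(e, dim=1)                 # per-node normalization
        return alpha @ z                                # aggregated features

layer = GraphAttentionLayer(16, 8)
h = torch.randn(62, 16)                       # 62 EEG channels (SEED montage)
adj = torch.ones(62, 62)                      # fully connected toy graph
out = layer(h, adj)                           # (62, 8)
```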

20.
Context-Independent Multilingual Emotion Recognition from Speech Signals
This paper presents and discusses an analysis of multilingual emotion recognition from speech with database-specific emotional features. Recognition was performed on the English, Slovenian, Spanish, and French InterFace emotional speech databases. The InterFace databases include several neutral speaking styles and six emotions: disgust, surprise, joy, fear, anger, and sadness. Speech features for emotion recognition were determined in two steps: in the first step, low-level features were defined, and in the second, high-level features were calculated from the low-level features. The low-level features comprise pitch, the derivative of pitch, energy, the derivative of energy, and the duration of speech segments. The high-level features are statistical representations of the low-level features. Database-specific emotional features were selected from the high-level features that contain the most information about emotions in speech. Speaker-dependent and monolingual emotion recognisers were defined, as well as multilingual recognisers. Emotion recognition was performed using artificial neural networks. The achieved recognition accuracy was highest for speaker-dependent emotion recognition, lower for monolingual emotion recognition, and lowest for multilingual recognition. The database-specific emotional features are most suitable for multilingual emotion recognition: among speaker-dependent, monolingual, and multilingual emotion recognition, the difference between recognition with all high-level features and recognition with database-specific emotional features is smallest for multilingual emotion recognition (3.84%).
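The two-step feature scheme, low-level contours summarized into high-level statistics that feed a neural network, could be sketched as follows; the particular statistics and the toy pitch/energy data are assumptions.

```python
# Sketch: high-level statistical features from low-level pitch/energy
# contours, classified with a small neural network.
import numpy as np
from sklearn.neural_network import MLPClassifier

def high_level_features(pitch, energy):
    """Statistical summary of low-level contours, per utterance."""
    feats = []
    for c in (pitch, np.diff(pitch), energy, np.diff(energy)):
        feats += [c.mean(), c.std(), c.min(), c.max()]
    return np.array(feats)                   # 16 high-level features

rng = np.random.default_rng(0)
X = np.stack([high_level_features(rng.standard_normal(100) + i % 6,
                                  rng.standard_normal(100))
              for i in range(120)])          # toy utterances
y = np.arange(120) % 6                       # six emotions
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                    random_state=0).fit(X, y)
print(clf.score(X, y))
```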

