Similar Literature
20 similar records found
1.
Human emotion recognition using brain signals is an active research topic in the field of affective computing. Music is considered a powerful tool for arousing emotions in human beings. This study recognized happy, sad, love and anger emotions in response to audio music tracks from the electronic, rap, metal, rock and hip-hop genres. Participants were asked to listen to a 1-minute audio track for each genre in a noise-free environment. The main objectives of this study were to determine the effect of different genres of music on human emotions and to identify the age group most responsive to music. Thirty men and women from three age groups (15–25 years, 26–35 years and 36–50 years) underwent the experiment, which also included a self-reported emotional state after listening to each type of music. Features from three different domains, i.e., time, frequency and wavelet, were extracted from the recorded EEG signals and were then used by the classifier to recognize human emotions. The results make evident that a multilayer perceptron (MLP) gives the best accuracy in recognizing human emotions from audio music tracks using hybrid features of brain signals. It was also observed that the rock and rap genres generated happy and sad emotions, respectively, in the subjects under study. The brain signals of the 26–35-year age group gave the best emotion recognition accuracy in accordance with the self-reported emotions.
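As a rough illustration of the hybrid-feature idea above, the sketch below computes time-domain statistics, Welch band powers and wavelet-level energies per channel and feeds them to an MLP; the sampling rate, band edges, wavelet and layer sizes are assumptions, not the study's settings.

```python
# A minimal sketch of a hybrid time/frequency/wavelet EEG feature pipeline;
# channel count, fs, bands and the db4 wavelet are assumed, not the paper's.
import numpy as np
import pywt                                   # PyWavelets
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier

def hybrid_features(epoch, fs=128):
    """epoch: (n_channels, n_samples) EEG window -> 1-D feature vector."""
    feats = []
    for ch in epoch:
        # Time domain: simple statistical descriptors.
        feats += [ch.mean(), ch.std(), np.ptp(ch)]
        # Frequency domain: band power from Welch's PSD.
        f, psd = welch(ch, fs=fs, nperseg=fs)
        for lo, hi in [(4, 8), (8, 13), (13, 30), (30, 45)]:  # theta..gamma
            feats.append(psd[(f >= lo) & (f < hi)].sum())
        # Wavelet domain: energy of each decomposition level.
        coeffs = pywt.wavedec(ch, "db4", level=4)
        feats += [float(np.sum(c ** 2)) for c in coeffs]
    return np.asarray(feats)

# X_raw: (n_trials, n_channels, n_samples); y: emotion labels (happy/sad/...).
# X = np.stack([hybrid_features(e) for e in X_raw])
# clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)
```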

2.
Music often plays an important role in people's daily lives. Because it has the power to affect human emotion, music has gained a place in work environments and in sports training as a way to enhance the performance of particular tasks. Studies have shown that office workers perform certain jobs better and joggers run longer distances when listening to music. However, a personalized music system that can automatically recommend songs according to the user's physiological response has been absent. This study therefore aims to establish an intelligent music selection system for individual users to enhance their learning performance. We first created an emotional music database using data-analytics classification. During testing, innovative wearable sensing devices were used to detect heart rate variability (HRV), which subsequently guided music selection. User emotions were then analyzed and appropriate songs were selected by the proposed application software (app). Machine learning was used to record user preferences, ensuring accurate and precise classification. Significant results from experimental validation indicate that the system generates high satisfaction, does not increase mental workload, and improves users' performance. Given the trend toward the Internet of Things (IoT) and the continuing development of wearable devices, the proposed system could stimulate innovative applications for the smart factory, home, and health care.
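A minimal sketch of how HRV metrics derived from R-R intervals might steer song selection; the RMSSD threshold and the two mood buckets are invented for illustration, not the paper's selection logic.

```python
# Illustrative only: HRV metrics from RR intervals driving song selection.
import numpy as np

def hrv_metrics(rr_ms):
    """rr_ms: successive R-R intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)                       # overall variability
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # short-term (vagal) variability
    return sdnn, rmssd

def pick_playlist(rr_ms, calm_songs, upbeat_songs, rmssd_threshold=30.0):
    # Low RMSSD is often read as higher stress/arousal -> play calming music.
    _, rmssd = hrv_metrics(rr_ms)
    return calm_songs if rmssd < rmssd_threshold else upbeat_songs
```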

3.
In this paper, we propose a new genetic-programming approach to music emotion classification. Our approach is based on Thayer's arousal-valence plane, one of the representative models of human emotion, which holds that human emotion is determined by psychological arousal and valence. We map music pieces onto the arousal-valence plane and classify music emotion in that space. We extract 85 acoustic features from music signals, rank them by information gain, and choose the top k features in the feature selection process. To map music pieces from the feature space onto the arousal-valence space, we apply genetic programming, designed to find an optimal formula that maps given music pieces to the arousal-valence space so that music emotions are effectively classified. k-NN and SVM, which are widely used in classification, are used to classify music emotions in the arousal-valence space. To verify our method, we compare it with six existing methods on the same music data set; the experiment confirms that the proposed method is superior.
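The sketch below covers only the feature-ranking and classification stages (information gain approximated here by mutual information, followed by an SVM); the genetic-programming mapping itself is omitted, and k is a placeholder.

```python
# Feature ranking by mutual information, then classification; not the paper's
# GP stage. Feature matrix layout and k are assumptions.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC

def select_top_k(X, y, k=20):
    """Rank acoustic features by information gain (mutual information)."""
    scores = mutual_info_classif(X, y, random_state=0)
    top = np.argsort(scores)[::-1][:k]
    return X[:, top], top

# X: (n_songs, 85) acoustic features; y: emotion labels on the A-V plane.
# X_k, idx = select_top_k(X, y, k=20)
# clf = SVC(kernel="rbf").fit(X_k, y)   # k-NN is an equally valid choice here
```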

4.
Music mood classification is one of the most interesting research areas in music information retrieval, with many real-world applications. Many experiments have been performed on mood classification or emotion recognition of Western music; however, research on mood classification of Indian music is still at an initial stage due to the scarcity of digitized resources. In the present work, a mood taxonomy is proposed for Hindi and Western songs; both audio and lyrics were annotated using the proposed taxonomy. Differences in mood between audio and lyrics were observed during annotation for the Hindi songs only. Detailed studies on mood classification of Hindi and Western music are presented to meet the requirements of a recommendation system. LibSVM and feed-forward neural networks were used to develop mood classification systems based on audio, lyrics, and a combination of the two. The multimodal mood classification systems using feed-forward neural networks obtained maximum F-measures of 0.751 for Hindi and 0.835 for Western songs.
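A minimal feature-level fusion sketch for combining audio and lyric representations in one feed-forward classifier; the precomputed audio features and vectorized lyrics are placeholders, not the paper's exact pipeline.

```python
# Feature-level (concatenation) fusion of audio and lyric vectors, then a
# feed-forward network; layer sizes are assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import f1_score

def train_multimodal(X_audio, X_lyrics, y):
    X = np.hstack([X_audio, X_lyrics])     # concatenate modality features
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    return clf.fit(X, y)

# f1_score(y_test, clf.predict(X_test), average="macro") corresponds to the
# F-measure figures reported above.
```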

5.
Emotion recognition of music objects is a promising and important research issue in the field of music information retrieval. Usually, music emotion recognition can be treated as a training/classification problem. However, even given a benchmark (training data with ground truth) and effective classification algorithms, music emotion recognition remains challenging. Most previous work focuses only on acoustic music content without considering individual differences (i.e., personalization issues). In addition, assessment of emotions is usually self-reported (e.g., emotion tags), which might introduce inaccuracy and inconsistency. Electroencephalography (EEG) is a non-invasive brain-machine interface that allows external machines to sense neurophysiological signals from the brain without surgery. Such unintrusive EEG signals, captured from the central nervous system, have been utilized for exploring emotions. This paper proposes an evidence-based and personalized model for music emotion recognition. In the training phase, for model construction and personalized adaptation, based on the IADS (International Affective Digitized Sound system, a set of acoustic emotional stimuli for experimental investigations of emotion and attention), we construct two predictive, generic models: \(AN\!N_1\) ("EEG recordings of a standardized group vs. emotions") and \(AN\!N_2\) ("music audio content vs. emotion"). Both models are trained as artificial neural networks. We then collect a subject's EEG recordings while listening to the selected IADS samples and apply \(AN\!N_1\) to determine the subject's emotion vector. With the generic model and the corresponding individual differences, we construct the personalized model H by projective transformation. In the testing phase, given a music object, the processing steps are: (1) extract features from the music audio content, (2) apply \(AN\!N_2\) to calculate the vector in the arousal-valence emotion space, and (3) apply the transformation matrix H to determine the personalized emotion vector. Moreover, for a music object of moderate length, we apply a sliding window to obtain a sequence of personalized emotion vectors; the predicted vectors are fitted and organized into an emotion trail that reveals the dynamics of the music object's affective content. Experimental results suggest the proposed approach is effective.
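A sketch of the personalization step: fitting a projective transform H from corresponding arousal-valence points with a standard DLT-style least-squares solve, then applying it to new predictions. Only the projective-transformation idea comes from the abstract; the data layout is an assumption.

```python
# Estimating a 3x3 projective transform H that maps generic arousal-valence
# predictions onto a subject's own responses (DLT via SVD null space).
import numpy as np

def fit_projective(src, dst):
    """src, dst: (n, 2) arrays of corresponding A-V points, n >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 3)           # H, determined up to scale

def apply_projective(H, pts):
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]           # back from homogeneous coordinates
```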

6.
With the advent of the ubiquitous era, many studies have been devoted to various situation-aware services in the semantic web environment. One of the most challenging studies involves implementing a situation-aware personalized music recommendation service which considers the user’s situation and preferences. Situation-aware music recommendation requires multidisciplinary efforts including low-level feature extraction and analysis, music mood classification and human emotion prediction. In this paper, we propose a new scheme for a situation-aware/user-adaptive music recommendation service in the semantic web environment. To do this, we first discuss utilizing knowledge for analyzing and retrieving music contents semantically, and a user adaptive music recommendation scheme based on semantic web technologies that facilitates the development of domain knowledge and a rule set. Based on this discussion, we describe our Context-based Music Recommendation (COMUS) ontology for modeling the user’s musical preferences and contexts, and supporting reasoning about the user’s desired emotions and preferences. Basically, COMUS defines an upper music ontology that captures concepts on the general properties of music such as titles, artists and genres. In addition, it provides functionality for adding domain-specific ontologies, such as music features, moods and situations, in a hierarchical manner, for extensibility. Using this context ontology, we believe that logical reasoning rules can be inferred based on high-level (implicit) knowledge such as situations from low-level (explicit) knowledge. As an innovation, our ontology can express detailed and complicated relations among music clips, moods and situations, which enables users to find appropriate music. We present some of the experiments we performed as a case-study for music recommendation.  相似文献   
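As a toy illustration of the ontology idea, the snippet below models mood/situation relations as RDF triples and retrieves matching clips with SPARQL; the namespace and property names are invented, not the actual COMUS vocabulary.

```python
# Toy RDF modeling of clip/mood/situation relations; vocabulary is made up.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/comus#")
g = Graph()
g.add((EX.Track42, RDF.type, EX.MusicClip))
g.add((EX.Track42, EX.hasMood, EX.Calm))
g.add((EX.Studying, EX.desiredMood, EX.Calm))

# Query clips whose mood matches the current situation's desired mood.
q = """SELECT ?clip WHERE {
         ?situation <http://example.org/comus#desiredMood> ?m .
         ?clip <http://example.org/comus#hasMood> ?m . }"""
for row in g.query(q):
    print(row.clip)   # -> http://example.org/comus#Track42
```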

7.
Dance has been used universally as a form of human expression for thousands of years. This common human behaviour and communication method has not been explored much in the context of computer-based technology, even within the field of virtual human research. This paper presents an experimental study investigating the impact of watching dancing virtual characters on human emotions. The study analysed the responses of 55 participants, a mix of dancers and non-dancers, who watched a dancing virtual character perform three different dances representing anger, sadness and happiness in different display orders. The participants' reported changes in their emotions and their feelings of anger, sadness and happiness depended significantly on which dancing character's emotion they watched, and the emotional change did not rely on correct recognition of the depicted emotion. For experimental control, our characters were faceless and danced without music. Our results suggest that just by watching a dancing virtual character, some of the benefits associated with dancing could be accessed in circumstances where it is not desirable or feasible to dance, justifying further research to develop a personalised character with a face and music that adapts according to the human's emotions and preferences.

8.
As research in music technology deepens, music emotion recognition has been widely practiced and applied in music recommendation, music psychotherapy, and sound-and-light scene construction. To simulate how humans perceive the emotion expressed by music, and to address the long-range-dependency and low training efficiency problems of long short-term memory networks in music emotion recognition, a new network model, CBSA (CNN-BiLSTM-self-attention), is proposed for long-range regression training of music emotion recognition. The model uses a two-dimensional convolutional neural network to capture local key features of music emotion, a bidirectional long short-term memory network to extract serialized music-emotion information from those local key features, and a self-attention model to dynamically reweight the serialized information and highlight global key points of music emotion. Experimental results show that the CBSA model shortens the training time needed to learn the patterns in music-emotion data and effectively improves music emotion recognition accuracy.
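A minimal PyTorch sketch of a CBSA-style stack (2-D CNN, BiLSTM, self-attention, regression head); the mel-spectrogram input shape, all layer sizes and the single-head attention are assumptions.

```python
# CNN + BiLSTM + self-attention regressor in the spirit of CBSA; sizes assumed.
import torch
import torch.nn as nn

class CBSA(nn.Module):
    def __init__(self, n_mels=64, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                 # local key features
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)))
        self.lstm = nn.LSTM(16 * (n_mels // 2), hidden,
                            batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, num_heads=1,
                                          batch_first=True)
        self.head = nn.Linear(2 * hidden, 2)      # arousal, valence

    def forward(self, x):                         # x: (batch, 1, n_mels, time)
        h = self.cnn(x)                           # (batch, 16, n_mels//2, time)
        h = h.flatten(1, 2).transpose(1, 2)       # (batch, time, features)
        h, _ = self.lstm(h)                       # serialized emotion info
        h, _ = self.attn(h, h, h)                 # global reweighting
        return self.head(h.mean(dim=1))           # pool over time -> (batch, 2)

# out = CBSA()(torch.randn(8, 1, 64, 128))        # -> torch.Size([8, 2])
```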

9.
With the growth of digital music, the development of music recommendation helps users pick desirable music pieces from a huge repository. Existing music recommendation approaches are based on a user's preference for music. However, recommending music pieces according to emotions may sometimes better meet users' requirements. In this paper, we propose a novel framework for emotion-based music recommendation. The core of the framework is the construction of a music emotion model by affinity discovery from film music, which plays an important role in conveying emotions in film. We investigate music feature extraction and propose the Music Affinity Graph and Music Affinity Graph-Plus algorithms for constructing the music emotion model. Experimental results show the proposed emotion-based music recommendation achieves 85% accuracy on average.
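A toy sketch of the affinity-graph idea: music pieces and emotions as graph nodes whose edge weights (invented here) encode affinity mined from film music, with recommendation reduced to ranking a node's neighbors.

```python
# Toy affinity graph; edge weights stand in for affinities mined from film music.
import networkx as nx

G = nx.Graph()
G.add_edge("sad", "piece_A", weight=0.9)
G.add_edge("sad", "piece_B", weight=0.4)
G.add_edge("joy", "piece_B", weight=0.8)

def recommend(emotion, top_n=5):
    scored = [(p, d["weight"]) for _, p, d in G.edges(emotion, data=True)]
    return sorted(scored, key=lambda s: -s[1])[:top_n]

# recommend("sad") -> [("piece_A", 0.9), ("piece_B", 0.4)]
```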

10.
Predicting emotion tags for music is important for music emotion analysis. This paper proposes a song emotion-tag prediction algorithm based on an emotion vector space model. First, emotion feature words are extracted from lyrics to build the emotion vector space model. An SVM classifier is then trained on songs with known emotion tags, and classification identifies the set of songs whose main emotion class matches that of the song to be predicted. Finally, lyric-based emotion similarity is computed to find the k nearest songs, whose tags are recommended to the target song. Experiments show that the proposed emotion vector space model, together with a feature-dimension-reduction method based on "emotion word-emotion tag" co-occurrence, improves song emotion classification accuracy over the traditional text feature vector model. Moreover, tag prediction built on top of classification effectively prevents "main-class emotion drift" in music and achieves better tag-prediction accuracy than the nearest-neighbor method.
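A sketch of the two-stage scheme: an SVM first picks the main emotion class, then tags are borrowed from the k most similar songs within that class; the lyric vectorization is a placeholder for the paper's emotion vector space model.

```python
# Stage 1: SVM chooses the main emotion class; stage 2: k-NN within the class.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import cosine_similarity

def predict_tags(x_new, X, y_class, tags, clf, k=5):
    """x_new: lyric emotion vector; X, y_class, tags: training data (arrays)."""
    main = clf.predict(x_new.reshape(1, -1))[0]   # stage 1: main emotion class
    idx = np.where(y_class == main)[0]            # restrict to that class
    sims = cosine_similarity(x_new.reshape(1, -1), X[idx])[0]
    nearest = idx[np.argsort(sims)[::-1][:k]]     # stage 2: k nearest songs
    return [tags[i] for i in nearest]

# clf = SVC().fit(X, y_class)   # X: emotion-word feature vectors from lyrics
```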

11.
The mystery surrounding emotions, how they work and how they affect our lives, has not yet been unravelled. Scientists still debate the real nature of emotions; evolutionary, physiological and cognitive accounts are just a few of the approaches used to explain affective states. Regardless of the emotional paradigm, neurologists have made progress in demonstrating that emotion is as important as, or more important than, reason in the process of making decisions and deciding actions. The significance of these findings should not be overlooked in a world increasingly reliant on computers to accommodate user needs. In this paper, a novel approach for recognizing and classifying positive and negative emotional changes in real time using physiological signals is presented. Based on sequential analysis and autoassociative networks, the emotion detection system outlined here is potentially capable of operating on any individual regardless of physical state and emotional intensity, without requiring an arduous adaptation or pre-analysis phase. Applying this methodology to real-time data collected from a single subject yielded a recognition level of 71.4%, which is comparable to the best results achieved by others through off-line analysis. It is suggested that the detection mechanism outlined in this paper has all the characteristics needed to perform emotion recognition in pervasive computing.
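A minimal sketch of the autoassociative idea: a network trained to reconstruct baseline physiological samples flags emotional change through reconstruction error; the network size and threshold are assumptions.

```python
# Autoassociator: learn to reproduce baseline samples; large reconstruction
# error on a new sample signals a deviation (emotional change).
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_autoassociator(X_baseline):
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    return net.fit(X_baseline, X_baseline)      # learn identity on baseline

def emotional_change(net, x, threshold=0.5):
    err = np.mean((net.predict(x.reshape(1, -1)) - x) ** 2)
    return err > threshold                      # True = state has shifted
```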

12.
Different physiological signals have different origins and may describe different functions of the human body. This paper studied respiration (RSP) signals alone to determine their ability to detect psychological activity. A deep learning framework is proposed to extract and recognize the emotional information in respiration. Arousal-valence theory helps recognize emotions by mapping them into a two-dimensional space. The framework includes a sparse auto-encoder (SAE) to extract emotion-related features and two logistic regression classifiers, one for arousal and one for valence. For model establishment, the international Dataset for Emotion Analysis using Physiological signals (DEAP) is adopted. To further evaluate the proposed method on other people after model establishment, we used the affect database established by Augsburg University in Germany. The accuracies for valence and arousal classification on DEAP are 73.06% and 80.78%, respectively, and the mean accuracy on the Augsburg dataset is 80.22%. This study demonstrates the potential of using respiration collected from wearable devices to recognize human emotions.
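A compact PyTorch sketch of the pipeline: a sparse auto-encoder (L1 penalty on the code) learns respiration features, and logistic regression then classifies arousal and valence separately; the window length, code size and penalty weight are assumptions.

```python
# Sparse auto-encoder features + two logistic regressions (arousal, valence).
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

enc = nn.Sequential(nn.Linear(256, 32), nn.Sigmoid())
dec = nn.Linear(32, 256)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

def train_step(x, sparsity=1e-3):               # x: (batch, 256) RSP windows
    code = enc(x)
    loss = nn.functional.mse_loss(dec(code), x) + sparsity * code.abs().mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# After training: features = enc(X).detach().numpy()
# arousal_clf = LogisticRegression().fit(features, y_arousal)
# valence_clf = LogisticRegression().fit(features, y_valence)
```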

13.
In the current Internet environment, a great deal of multimedia information is available on online computer systems. Among multimedia types, video sequences have the most valuable and meaningful influence on human emotions. Moreover, one person's emotional response to the same video can differ from another's, depending on the person's mental state. In this research, we propose a new real-time scheme for emotion retrieval in video using image sequence features. The image sequence features consist of color information, key frame extraction, video sound, and optical flow. The video features are combined with weights for emotion retrieval. The experimental results show that the new approach to real-time emotion retrieval in video outperforms previous studies. The proposed scheme can be applied to many multimedia fields: movies, computer games, video conferencing, and so on.
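A toy sketch of the weighted combination of the four cue types named above; the per-cue scores and the weights are placeholders, not the paper's values.

```python
# Weighted fusion of per-cue emotion scores; weights are invented.
WEIGHTS = {"color": 0.3, "keyframe": 0.3, "sound": 0.2, "optical_flow": 0.2}

def emotion_score(features):
    """features: dict of per-cue emotion scores in [0, 1] -> fused score."""
    return sum(WEIGHTS[name] * features[name] for name in WEIGHTS)

# emotion_score({"color": .8, "keyframe": .6, "sound": .4, "optical_flow": .5})
```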

14.
15.
石琳  李志刚  王志良  赵巍 《计算机应用》2010,30(5):1367-1370
To give agents in an intelligent virtual environment emotional capability, improving their believability and the naturalness of human-computer interaction while meeting the environment's real-time requirements, an emotion generator model is proposed, grounded in the basic-emotion and cognitive-appraisal theories of psychology and built on simulated inference rules. The model first formulates emotion-triggering condition rules as fuzzy IF-THEN rules and infers an "emotion factor" from them; it then builds a nonlinear function, constrained by the emotion factor, personality, and the previous emotional state, to generate the current emotion and compute its intensity. Simulation results show that the model conforms well to how basic human emotional states arise, shift, and decay, reflects the fuzziness and nonlinearity of human emotion to a degree, and is easy to implement on a machine.
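A toy sketch of the emotion-generator idea: fuzzy IF-THEN rules produce an "emotion factor", and a nonlinear update mixes it with personality and the previous state; all membership functions and constants are invented.

```python
# Fuzzy rules -> emotion factor -> nonlinear state update; constants invented.
import math

def fuzzy_emotion_factor(threat, reward):
    """Crude rules: IF threat high THEN fear; IF reward high THEN joy."""
    fear = min(1.0, max(0.0, (threat - 0.3) / 0.7))   # ramp membership
    joy = min(1.0, max(0.0, (reward - 0.3) / 0.7))
    return {"fear": fear, "joy": joy}

def update_emotion(prev, factor, personality=0.5, decay=0.8):
    # Nonlinear squash of decayed previous state plus stimulus,
    # scaled by a personality coefficient.
    return {e: math.tanh(decay * prev.get(e, 0.0) + personality * factor[e])
            for e in factor}

# state = update_emotion({"fear": 0.2, "joy": 0.1},
#                        fuzzy_emotion_factor(threat=0.9, reward=0.2))
```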

16.
The most important goal of character animation is to efficiently control the motions of a character. Many techniques have been proposed for human gait animation. Some control the emotions of gaits, such as 'tired walking' and 'brisk walking', by parameter interpolation or motion data mapping. Since it is very difficult to automate control over the emotion of a motion, the emotions of a character model have traditionally been crafted by creative animators. This paper proposes a human running model based on a one-legged planar hopper with a self-balancing mechanism. The proposed technique exploits genetic programming to optimize movement and can easily be adapted to various character models. We extend the energy minimization technique to generate various motions in accordance with emotional specifications. Copyright © 1999 John Wiley & Sons, Ltd.

17.

This study proposes a system that can recognize human emotional states from bio-signals. The technology is provided to improve the interaction between humans and computers, achieving an effective human–machine interface capable of intelligent interaction. The proposed method can recognize six emotional states: joy, happiness, fear, anger, despair, and sadness. This set of emotional states is widely used for emotion recognition. The results show that the proposed method can distinguish one emotion from all other possible emotional states. The method is composed of two steps: 1) multi-modal bio-signal evaluation and 2) emotion recognition using an artificial neural network. In the first step, we present a method to analyze human sensitivity using physiological signals such as the electroencephalogram, electrocardiogram, photoplethysmogram, respiration, and galvanic skin response. The experimental analysis shows that the proposed method has good accuracy and could be applied to many human–computer interaction devices for emotion detection.


18.
Tang Zhichuan, Li Xintao, Xia Dan, Hu Yidan, Zhang Lingtao, Ding Jun. Multimedia Tools and Applications (2022) 81(5): 7085-7101

Self-assessment methods are widely used in art therapy evaluation, but emotion recognition methods based on physiological-signal features are more objective and accurate. In this study, we proposed an electroencephalogram (EEG)-based art therapy evaluation method that assesses the therapeutic effect from the emotional changes before and after art therapy. Twelve participants were recruited in a two-step experiment (an emotion stimulation step and a drawing therapy step), and their EEG signals and self-assessment scores were collected. The self-assessment manikin (SAM) was used to obtain and label the actual emotional states; a long short-term memory (LSTM) network was used to extract deep temporal EEG features for emotion recognition. Further, classification performance was compared and analyzed across sequence lengths, time-window lengths and frequency combinations. The results showed that emotion recognition models with LSTM deep temporal features achieved better classification performance than state-of-the-art methods with non-temporal features; classification accuracies in the high-frequency bands (α, β, and γ) were higher than those in the low-frequency bands (δ and θ); and the highest emotion classification accuracy (93.24%) was obtained with a 10-s sequence length, a 2-s time window, and the 5-band frequency combination. Our proposed method recognizes emotions effectively and accurately, and offers an objective way to help therapists or patients evaluate the effect of art therapy.


19.
In human–computer interaction (HCI), electroencephalogram (EEG) signals can serve as an additional input to the computer. Integrating real-time EEG-based human emotion recognition algorithms into human–computer interfaces can make the user experience more complete, more engaging, and less (or more) emotionally stressful, depending on the target of the application. Currently, the most accurate EEG-based emotion recognition algorithms are subject-dependent, and a training session is needed for the user each time, right before running the application. In this paper, we propose a novel real-time subject-dependent algorithm built on the most stable features, which gives better accuracy than other available algorithms when it is crucial to have only one training session for the user with no subsequent re-training. The proposed algorithm is tested on an affective EEG database containing five subjects. For each subject, four emotions (pleasant, happy, frightened and angry) are induced, and the affective EEG is recorded in two sessions per day over eight consecutive days. Testing results show that the algorithm can be used in real-time emotion recognition applications without re-training and with adequate accuracy. The proposed algorithm is integrated with the real-time applications "Emotional Avatar" and "Twin Girls" to monitor the user's emotions in real time.
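One plausible reading of "most stable features" is sketched below: keep the features that vary least across a subject's repeated recording sessions. This is an illustration, not the paper's exact stability criterion.

```python
# Rank features by relative variation across sessions; lowest = most stable.
import numpy as np

def stable_features(sessions, k=10):
    """sessions: (n_sessions, n_features) per-session feature means."""
    spread = np.std(sessions, axis=0) / (np.abs(np.mean(sessions, axis=0)) + 1e-9)
    return np.argsort(spread)[:k]          # indices of the k most stable features
```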

20.
Extracting and understanding emotion is highly important for interaction between humans and machine communication systems. Human emotion is most expressively conveyed through facial expressions. This paper proposes a multiple-emotion recognition system that can recognize combinations of up to three different emotions using an active appearance model (AAM), the proposed classification standard, and a k-nearest neighbor (k-NN) classifier in mobile environments. The AAM captures the expression variations, calculated by the proposed classification standard, as human expressions change in real time. The proposed k-NN can classify the basic emotions (normal, happy, sad, angry, surprise) as well as more ambiguous emotions formed by combining the basic emotions in real time, and each recognized emotion carries a strength. Whereas most previous emotion recognition methods recognize variants of a single emotion, this paper recognizes various emotions as combinations of the five basic emotions. For ease of understanding, the recognized result is presented in three ways on a mobile camera screen. The experiment achieved an average recognition rate of 85%, and 40% of the results showed optimized emotions. The implemented system also demonstrates an augmented reality application, displaying a combination of real face video and virtual animation with the user's avatar.
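A sketch of how a k-NN over AAM parameters might yield a mixture of up to three emotions, each with a strength taken as its share of neighbor votes; the feature source, k and the strength definition are assumptions.

```python
# k-NN emotion mixture: up to three emotions with vote-share strengths.
import numpy as np
from collections import Counter
from sklearn.neighbors import NearestNeighbors

EMOTIONS = ["normal", "happy", "sad", "angry", "surprise"]

def emotion_mixture(x, X_train, y_train, k=9, top=3):
    """x: AAM parameter vector; X_train/y_train: labeled training samples."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    _, idx = nn.kneighbors(x.reshape(1, -1))
    votes = Counter(y_train[i] for i in idx[0])
    # Up to `top` emotions, each with strength = share of neighbor votes.
    return [(emo, n / k) for emo, n in votes.most_common(top)]

# emotion_mixture(aam_params, X, y) -> e.g. [("happy", 0.56), ("surprise", 0.33)]
```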
