Similar Articles
20 similar articles found.
1.
This paper studies emotion recognition using two input signals: facial images and EEG. The input signals were collected by stimulating subjects' emotions with film clips corresponding to different emotions. Facial expression recognition classifies eight basic expressions, while EEG recognition distinguishes three levels of emotional intensity. Decision-level information fusion then produces the final emotion classification. The final recognition accuracy reaches 89.5%, higher than either single modality alone (facial expression: 81.35%; EEG: 71.53%).
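A minimal sketch of the decision-level fusion idea described above, assuming each modality's classifier outputs class probabilities; the weights and the three-class setup are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fuse_decisions(face_probs, eeg_probs, w_face=0.6, w_eeg=0.4):
    """Weighted sum of per-modality class probabilities (hypothetical weights)."""
    fused = w_face * np.asarray(face_probs) + w_eeg * np.asarray(eeg_probs)
    return int(np.argmax(fused))

# Example: 3 emotion classes; face model favors class 0, EEG favors class 1.
face_probs = [0.5, 0.3, 0.2]
eeg_probs = [0.2, 0.6, 0.2]
print(fuse_decisions(face_probs, eeg_probs))  # -> 1 with these weights
```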

2.
Existing smartphone-based emotion recognition studies rely on relatively homogeneous data that cannot fully reflect users' behavior patterns, and therefore cannot reflect their true emotions. To address this, this work uses smartphones to collect fine-grained, multi-dimensional sensing data covering users' daily behavior. Using a multi-dimensional feature-fusion method and six classifiers, including support vector machine (SVM) and random forest, emotions of 12 volunteers were recognized under both a discrete emotion model and a circumplex emotion model, on pooled data and on per-person data, with comparative experiments. The results show that the fused multi-dimensional features recognize emotion well: accuracy on personal data reaches up to 79.78%, and the circumplex emotion model clearly outperforms the discrete model.
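A hedged sketch of feature-level fusion with two of the paper's six classifiers (SVM and random forest), using synthetic stand-ins for the smartphone sensing feature groups; all shapes, group names, and labels are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
app_usage = rng.random((120, 10))      # hypothetical feature groups
motion = rng.random((120, 6))
communication = rng.random((120, 4))
X = np.hstack([app_usage, motion, communication])  # multi-dimensional fusion
y = rng.integers(0, 2, 120)                        # e.g. positive/negative mood

for name, clf in [("SVM", make_pipeline(StandardScaler(), SVC())),
                  ("RF", RandomForestClassifier(n_estimators=100))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```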

3.
Real-time recognition of and early warning about drivers' emotional states plays an important role in keeping road traffic safety systems running normally. This study used a portable EEG device to collect two-channel prefrontal EEG from 16 subjects, extracted features in both the time and frequency domains, and classified positive versus negative emotions with ensemble learning. The results show that frequency-domain features and their asymmetry indices are key to the classification; a gradient boosting decision tree (GBDT) classifier performed best, reaching 92.4% accuracy. The study offers a new method for recognizing drivers' positive and negative emotional states and lays a foundation for subsequent real-time recognition.
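A sketch of frequency-domain band-power features with an asymmetry index and a GBDT classifier, in the spirit of the pipeline above; the sampling rate, band edges, channel pairing, and random data are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import GradientBoostingClassifier

FS = 256
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, fs=FS):
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    return [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS.values()]

def features(left, right):
    pl, pr = band_powers(left), band_powers(right)
    asym = [np.log(r) - np.log(l) for l, r in zip(pl, pr)]  # asymmetry index
    return pl + pr + asym

rng = np.random.default_rng(0)
X = np.array([features(rng.standard_normal(FS * 4), rng.standard_normal(FS * 4))
              for _ in range(60)])
y = rng.integers(0, 2, 60)  # positive vs. negative emotion labels
clf = GradientBoostingClassifier().fit(X, y)
print(clf.score(X, y))
```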

4.
Emotion recognition identifies a person's emotional state from facial expressions, behavior, physiological signals, or other information, and its results have great application value in medical assistance, education, and traffic safety. Because EEG signals are objective and authentic, EEG-based emotion recognition has attracted wide attention from researchers worldwide. This survey reviews, organizes, analyzes, and summarizes a large body of EEG emotion recognition literature. It first covers the definitions of emotion and emotion recognition, emotion classification models, ...

5.
Emotion recognition classifies a person's emotional state from facial expressions, speech prosody, EEG, and other physiological signals, and is widely applied in medicine, transportation, and education. EEG signals, being authentic and reliable, have drawn growing attention in this field. This paper surveys recent progress in EEG-based emotion recognition, focusing on studies based on deep learning and transfer learning. It introduces the theoretical foundations, common public datasets, signal acquisition and preprocessing, and feature extraction and selection, with emphasis on applications of deep learning and transfer learning, and closes with the challenges and prospects currently facing the field.

6.
Prospects of information fusion techniques in emotion recognition
This paper briefly introduces several current emotion recognition methods based on different data sources and the fundamentals of information fusion, providing theoretical background for engineers. It categorizes the state of the art in emotion recognition with multi-source information fusion, explains and analyzes the open problems of fusion-based affect recognition, and outlines its application prospects in the field.

7.
Emotion recognition is closely connected with many areas of daily life. However, a single algorithm rarely achieves high recognition accuracy, so this paper proposes an EEG emotion recognition model based on a fusion of support vector machine (SVM) and K-nearest neighbors (KNN), called SVM-KNN. During emotion classification, the method first computes the distance between the sample to be recognized and the optimal separating ...
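The abstract is truncated, but SVM-KNN hybrids commonly use the SVM decision for samples far from the separating hyperplane and fall back to KNN for ambiguous samples near it. A hedged sketch under that assumption follows; the margin threshold and fallback rule are illustrative, not the paper's exact scheme.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def svm_knn_predict(X_train, y_train, X_test, margin=0.5, k=5):
    svm = SVC(kernel="rbf").fit(X_train, y_train)
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    dist = np.abs(svm.decision_function(X_test))  # distance-like margin score
    pred = svm.predict(X_test)
    near = dist < margin                          # ambiguous region
    if near.any():
        pred[near] = knn.predict(X_test[near])    # KNN resolves unclear cases
    return pred

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8)); y = rng.integers(0, 2, 100)
print(svm_knn_predict(X[:80], y[:80], X[80:]))
```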

8.
Emotion recognition has broad application prospects in artificial intelligence. Facial-expression-based emotion recognition is already relatively mature, but recognizing emotion from human body movements has been studied far less. This work extracts body-movement features from image sequences in 3D space with the VLBP and LBP-TOP operators, analyzes the characteristics of seven natural emotions (anger, boredom, disgust, fear, happiness, puzzlement, and sadness), and classifies them with a parameter-optimized support vector machine, reaching up to 77.0% accuracy. The experiments show that VLBP and LBP-TOP are robust and can effectively recognize emotion from body movements.
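A minimal LBP-TOP sketch: LBP histograms from the XY, XT, and YT planes of a video volume are concatenated into one descriptor. Sampling only the center slice of each plane is a simplification of the full operator, which aggregates over all pixels.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_hist(img, P=8, R=1):
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def lbp_top(volume):  # volume: (T, H, W) grayscale image sequence
    t, h, w = volume.shape
    xy = volume[t // 2]        # spatial appearance plane
    xt = volume[:, h // 2, :]  # temporal plane along x
    yt = volume[:, :, w // 2]  # temporal plane along y
    return np.concatenate([lbp_hist(p) for p in (xy, xt, yt)])

volume = np.random.randint(0, 256, (16, 64, 64), dtype=np.uint8)
print(lbp_top(volume).shape)  # (30,) -> 3 planes x (P + 2) bins
```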

9.
In recent years, emotion recognition research has moved beyond face and speech toward physiological signals such as EEG, but incomplete feature extraction and ill-suited classification models still limit recognition performance. This paper therefore proposes a hybrid model (DE-CNN-GRU) combining differential entropy (DE), a convolutional neural network (CNN), and a gated recurrent unit (GRU) for EEG-based emotion recognition. The preprocessed EEG is split into five frequency bands, DE features are extracted from each band as initial features and fed into the CNN-GRU model for deep feature extraction, and Softmax performs the classification. Validated on the SEED dataset, the hybrid model's average accuracy exceeds the CNN-only and GRU-only baselines by 5.57% and 13.82%, respectively.
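A sketch of the per-band differential entropy features used as the model's input. Under a Gaussian assumption DE reduces to 0.5·ln(2πe·σ²); the five-band split is the common convention, while the sampling rate and filter order are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 200
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

def de_features(signal, fs=FS):
    feats = []
    for lo, hi in BANDS.values():
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = filtfilt(b, a, signal)              # band-pass filtered EEG
        feats.append(0.5 * np.log(2 * np.pi * np.e * np.var(band)))  # DE
    return np.array(feats)

print(de_features(np.random.randn(FS * 4)))  # one DE value per band
```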

10.
Emotion detection aims to automatically identify whether a text expresses emotion, and is a basic task in sentiment analysis research. This paper proposes a syntax-aware method for detecting emotion in microblog text, whose distinguishing feature is its full use of the text's syntactic information. Concretely, part-of-speech (POS) tag sequences and constituency parse trees represent the syntax, from which POS sequence patterns, rewrite rules, and binary syntactic labels are extracted as features for text representation; a maximum entropy classifier then performs the detection. Experimental results show the method achieves good detection performance.
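A hedged sketch of one ingredient above: POS-tag n-gram patterns as features with a maximum entropy model (multinomial logistic regression). The tiny tagged corpus and the tag set are invented for illustration; the paper's parse-tree features are omitted.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each document is represented by its POS-tag sequence (space-separated).
pos_docs = ["PN VV AD JJ", "NN VV PN", "AD AD JJ NN", "PN VV NN JJ"]
labels = [1, 0, 1, 0]  # 1 = emotional, 0 = neutral (hypothetical labels)

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),  # POS unigram/bigram patterns
    LogisticRegression(max_iter=1000),    # maximum entropy classifier
).fit(pos_docs, labels)
print(model.predict(["PN VV AD JJ"]))
```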

11.
Speech emotion information is nonlinear, redundant, and high-dimensional, and the data contain heavy noise; traditional recognition models struggle to remove the redundancy and noise, so recognition accuracy is very low. To improve it, this paper exploits wavelet denoising and the nonlinear processing ability of neural networks, proposing an intelligent speech emotion recognition model based on a process neural network. Wavelet analysis denoises the emotional speech signal, principal component analysis removes redundant information from the emotion features, and the process neural network classifies the emotions. Simulation results show the model's recognition rate is 13% higher than K-nearest neighbors and 8.75% higher than a support vector machine, making it an effective tool for intelligent speech emotion recognition.
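A sketch of the preprocessing chain described above: wavelet soft-threshold denoising followed by PCA to drop redundant dimensions. The wavelet choice, universal-threshold rule, and dimensions are standard defaults assumed here, not the paper's settings.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

def wavelet_denoise(x, wavelet="db4", level=3):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))           # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

rng = np.random.default_rng(0)
signals = np.array([wavelet_denoise(rng.standard_normal(512)) for _ in range(40)])
reduced = PCA(n_components=10).fit_transform(signals)   # remove redundancy
print(reduced.shape)  # (40, 10)
```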

12.
Continuous emotion recognition from multimodal physiological data has important uses in many fields, but the scarcity of subject data and the subjectivity of emotion mean that training emotion recognition models still requires more physiological data and depends on data from the same subjects. This paper proposes several continuous emotion recognition methods based on facial images and EEG. For the facial image modality, to counter the overfitting caused by small face datasets, this paper proposes a multi-task convolutional neural network model trained with transfer learning ...

13.
14.
In recent years, emotion recognition from multimodal data has become an important research direction in natural human-computer interaction and artificial intelligence. Work on the visual modality usually focuses on facial features and rarely considers action features or multimodal features fused with them. Although action and emotion are closely linked, extracting action information that is effective for emotion recognition from the visual modality is difficult. Starting from the relation between action and emotion, this work introduces visual action data into the classic MELD multimodal emotion recognition dataset, extracts body-action features with an ST-GCN model, and uses them for unimodal emotion recognition with an LSTM model. Adding the body-action features to MELD's text and audio features further improves the accuracy of LSTM-based multimodal fusion, and combining text and body-action features also improves the text-only accuracy of a contextual memory model. The experiments show that although body-action features alone cannot surpass traditional text and audio features for unimodal recognition, they play an important role in multimodal recognition. The unimodal and multimodal experiments verify that human movement carries emotional information and that body-action features hold significant potential for multimodal emotion recognition.
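A minimal PyTorch sketch of the fusion step: per-utterance text, audio, and body-action vectors are concatenated and fed to an LSTM over the dialogue. All dimensions are illustrative, and the paper's ST-GCN extractor is replaced by a random placeholder tensor.

```python
import torch
import torch.nn as nn

class FusionLSTM(nn.Module):
    def __init__(self, text_dim=100, audio_dim=40, action_dim=64,
                 hidden=128, n_classes=7):
        super().__init__()
        self.lstm = nn.LSTM(text_dim + audio_dim + action_dim,
                            hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, text, audio, action):  # each: (batch, utterances, dim)
        x = torch.cat([text, audio, action], dim=-1)  # feature-level fusion
        out, _ = self.lstm(x)
        return self.head(out)  # per-utterance emotion logits

model = FusionLSTM()
logits = model(torch.randn(2, 5, 100), torch.randn(2, 5, 40),
               torch.randn(2, 5, 64))
print(logits.shape)  # (2, 5, 7)
```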

15.
The accuracy of speech emotion recognition largely depends on how distinct the features of different emotions are. Starting from the time-frequency characteristics of speech and drawing on the human auditory selective attention mechanism, this paper proposes a speech emotion recognition algorithm based on spectrogram features. The algorithm first mimics the ear's auditory selective attention by segmenting the emotional spectrogram in time and frequency to form a speech emotion saliency map. Based on the saliency map, Hu invariant moments, texture features, and partial spectrogram features are then used as the main features for emotion recognition. Finally, a support vector machine classifies the emotions. Recognition experiments on a speech emotion database show that the algorithm achieves high recognition rates and robustness, most notably for the practically important emotion of irritation. In addition, principal-vector analysis across emotion features shows that the selected features are highly discriminative and practical.
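A sketch of the spectrogram-based features: Hu invariant moments plus a crude texture statistic computed from a normalized log spectrogram, fed to an SVM. The saliency-map stage of the paper is omitted for brevity, and the data are synthetic.

```python
import numpy as np
from scipy.signal import spectrogram
from skimage.measure import moments_central, moments_hu, moments_normalized
from sklearn.svm import SVC

def spectro_features(signal, fs=16000):
    _, _, sxx = spectrogram(signal, fs=fs)
    img = np.log(sxx + 1e-10)
    img = (img - img.min()) / (np.ptp(img) + 1e-10)   # normalize to [0, 1]
    hu = moments_hu(moments_normalized(moments_central(img)))
    texture = [img.std(), np.abs(np.diff(img, axis=1)).mean()]  # crude texture
    return np.concatenate([hu, texture])

rng = np.random.default_rng(0)
X = np.array([spectro_features(rng.standard_normal(16000)) for _ in range(30)])
y = rng.integers(0, 2, 30)
print(SVC().fit(X, y).score(X, y))
```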

16.
Emotion recognition has become an important component of human–computer interaction systems. Research on emotion recognition based on electroencephalogram (EEG) signals is mostly conducted by analyzing the signals of all channels. Although some progress has been achieved, several challenges remain, such as high dimensionality, correlation between different features, and feature redundancy in realistic experimental settings. These challenges have hindered the application of emotion recognition to portable human–computer interaction systems (or devices). This paper explores how to find the most effective EEG features and channels for emotion recognition, so as to collect as little data as possible. First, discriminative features of EEG signals from different dimensionalities are extracted for emotion classification, including the first difference, multiscale permutation entropy, Higuchi fractal dimension, and discrete wavelet transform. Second, the relief algorithm and a floating generalized sequential backward selection algorithm are integrated into a novel channel selection method. Then, a support vector machine is employed to classify the emotions, verifying the performance of the channel selection method and the extracted features. Finally, experimental results demonstrate that the optimal channel set, mostly located over the frontal region, is highly similar across the self-collected and public data sets, and the average classification accuracy reaches 91.31% with the selected 10-channel EEG signals. The findings are valuable for practical EEG-based emotion recognition systems.
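A hedged sketch of greedy sequential backward channel selection with an SVM accuracy criterion, a simplification of the paper's relief + floating generalized SBS combination (the relief weighting and floating step are omitted). Data shapes and the stopping size are illustrative.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def backward_select(X_channels, y, keep=10):
    """X_channels: (n_samples, n_channels, n_features_per_channel)."""
    chans = list(range(X_channels.shape[1]))
    while len(chans) > keep:
        scores = []
        for c in chans:  # try removing each remaining channel
            trial = [ch for ch in chans if ch != c]
            X = X_channels[:, trial, :].reshape(len(y), -1)
            scores.append(cross_val_score(SVC(), X, y, cv=3).mean())
        chans.remove(chans[int(np.argmax(scores))])  # drop least useful channel
    return chans

rng = np.random.default_rng(0)
print(backward_select(rng.standard_normal((60, 16, 4)), rng.integers(0, 2, 60)))
```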

17.
To effectively improve the performance of spoken emotion recognition, nonlinear dimensionality reduction is needed for speech data lying on a nonlinear manifold embedded in a high-dimensional acoustic space. In this paper, a new supervised manifold learning algorithm for nonlinear dimensionality reduction, called the modified supervised locally linear embedding algorithm (MSLLE), is proposed for spoken emotion recognition. MSLLE aims to enlarge the interclass distance while shrinking the intraclass distance, in an effort to promote the discriminating power and generalization ability of low-dimensional embedded data representations. To compare the performance of MSLLE, three unsupervised dimensionality reduction methods, i.e., principal component analysis (PCA), locally linear embedding (LLE), and isometric mapping (Isomap), as well as five supervised dimensionality reduction methods, i.e., linear discriminant analysis (LDA), supervised locally linear embedding (SLLE), local Fisher discriminant analysis (LFDA), neighborhood component analysis (NCA), and maximally collapsing metric learning (MCML), are used for dimensionality reduction on spoken emotion recognition tasks. Experimental results on two emotional speech databases, i.e., the spontaneous Chinese database and the acted Berlin database, confirm the validity and promising performance of the proposed method.
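A sketch of the supervised-distance idea behind SLLE-style methods: pairwise distances between samples of different classes are inflated so that the neighborhoods used by the embedding stay within a class. The alpha blend is a common SLLE convention assumed here; MSLLE's full modified reconstruction step is omitted.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def supervised_distances(X, y, alpha=0.5):
    d = squareform(pdist(X))
    diff_class = (y[:, None] != y[None, :]).astype(float)
    return d + alpha * d.max() * diff_class  # enlarge interclass distance

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5)); y = rng.integers(0, 3, 20)
d_sup = supervised_distances(X, y)
neighbors = np.argsort(d_sup, axis=1)[:, 1:6]  # k=5 class-biased neighbors
print(neighbors[0])
```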

18.
A growing body of research suggests that affective computing has many valuable applications in enterprise systems research and e-businesses. This paper explores affective computing techniques for a vital sub-area in enterprise systems—consumer satisfaction measurement. We propose a linguistic-based emotion analysis and recognition method for measuring consumer satisfaction. Using an annotated emotion corpus (Ren-CECps), we first present a general evaluation of customer satisfaction by comparing the linguistic characteristics of emotional expressions of positive and negative attitudes. The associations in four negative emotions are further investigated. After that, we build a fine-grained emotion recognition system based on machine learning algorithms for measuring customer satisfaction; it can detect and recognize multiple emotions using customers’ words or comments. The results indicate that blended emotion recognition is able to gain rich feedback data from customers, which can provide more appropriate follow-up for customer relationship management.  相似文献   

19.
This paper reviews the latest progress and future directions in speech emotion recognition, with particular attention to practical, application-oriented work. The main contents include: a review of the history of affective computing and a discussion of its practical applications; a summary of general methods for speech emotion recognition, covering emotion modeling, the construction of emotional databases, emotional feature extraction, and recognition algorithms; a focused analysis of practical speech emotion recognition methods driven by the needs of specific application domains; and an analysis of the difficulties faced in practice, summarizing, for practical emotions such as irritation, how to build practical emotional speech corpora, analyze features, and model emotion. Finally, future directions of practical speech emotion recognition research are discussed, along with likely problems and possible solutions.

20.
Context-Independent Multilingual Emotion Recognition from Speech Signals
This paper presents and discusses an analysis of multilingual emotion recognition from speech with database-specific emotional features. Recognition was performed on the English, Slovenian, Spanish, and French InterFace emotional speech databases. The InterFace databases include several neutral speaking styles and six emotions: disgust, surprise, joy, fear, anger, and sadness. Speech features for emotion recognition were determined in two steps: in the first step, low-level features were defined, and in the second, high-level features were calculated from the low-level features. The low-level features comprise pitch, the derivative of pitch, energy, the derivative of energy, and the duration of speech segments. The high-level features are statistical summaries of the low-level features. Database-specific emotional features were selected from the high-level features that carry the most information about emotion in speech. Speaker-dependent and monolingual emotion recognisers were defined, as well as multilingual recognisers. Emotion recognition was performed using artificial neural networks. The achieved recognition accuracy was highest for speaker-dependent emotion recognition, lower for monolingual emotion recognition, and lowest for multilingual recognition. The database-specific emotional features are most convenient for multilingual emotion recognition: among the speaker-dependent, monolingual, and multilingual settings, the gap between recognition with all high-level features and recognition with database-specific features is smallest for multilingual recognition, at 3.84%.
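A sketch of the two-step feature scheme described above: a low-level contour (here, frame energy) and its derivative are summarized by high-level statistics. The frame sizes are illustrative, and pitch extraction is omitted since the paper does not specify its method.

```python
import numpy as np

def frame_energy(x, win=400, hop=160):
    """Low-level feature: short-time energy contour."""
    return np.array([np.sum(x[i:i + win] ** 2)
                     for i in range(0, len(x) - win, hop)])

def high_level(contour):
    d = np.diff(contour)  # derivative of the low-level contour
    return [contour.mean(), contour.std(), contour.min(), contour.max(),
            d.mean(), d.std()]

rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)
energy = frame_energy(speech)
print(high_level(energy))  # statistical summary of a low-level feature
```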
