16 similar documents found (search time: 109 ms)
6. Speech emotion recognition uses computers to establish the relationship between speech as an information carrier and measures of emotion, giving computers the ability to recognize and understand human emotion; it plays an important role in human–computer interaction and is an important development direction in artificial intelligence. Starting from the history of speech emotion recognition at home and abroad and the series of conferences, journals, and competitions in the field, this paper reviews the state of research from six aspects. First, emotion representation is described in terms of discrete and dimensional models; second, existing emotion databases are catalogued and summarized; next, representative developments of roughly the past 20 years are reviewed, covering both emotion recognition based on hand-crafted acoustic features and end-to-end speech emotion recognition; on this basis, recent recognition performance is summarized, especially work published in the last two years at major speech conferences and in major journals; applications in driving, intelligent interaction, healthcare, and security are then introduced; finally, the remaining challenges and future directions of the field are summarized. This paper aims to provide an in-depth analysis and summary of work on speech emotion recognition and a valuable reference for researchers in the area.
8. An emotional speech recognition method based on LS-SVM is proposed. Fundamental frequency, energy, speaking rate, and other parameters of the experimental speech signals are first extracted as emotional features; LS-SVM models are then built for the corresponding emotional speech signals and used for recognition. Experimental results show that LS-SVM achieves a high recognition rate on basic emotions.
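What makes LS-SVM attractive here is that it replaces the standard SVM's quadratic program with a single linear system. A minimal sketch of the classifier described above, with synthetic stand-ins for the pitch/energy/rate features (all data values and hyperparameters are illustrative assumptions, not the paper's):

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Gaussian (RBF) kernel matrix from pairwise squared distances.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvm_fit(X, y, C=10.0, gamma=0.5):
    # LS-SVM trains by solving one linear system instead of a QP:
    #   [ 0    1^T    ] [b    ]   [0]
    #   [ 1  K + I/C  ] [alpha] = [y]
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, gamma) + np.eye(n) / C
    rhs = np.concatenate(([0.0], y.astype(float)))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]  # alpha, bias

def lssvm_predict(X_train, alpha, b, X_new, gamma=0.5):
    return np.sign(rbf_kernel(X_new, X_train, gamma) @ alpha + b)

# Toy stand-ins for [pitch (Hz), energy, speaking rate] feature vectors.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([220.0, 0.8, 5.0], [20.0, 0.1, 0.5], (30, 3)),   # "angry"
               rng.normal([120.0, 0.3, 3.0], [20.0, 0.1, 0.5], (30, 3))])  # "neutral"
y = np.array([1] * 30 + [-1] * 30)
X = (X - X.mean(0)) / X.std(0)  # standardize before applying the RBF kernel
alpha, b = lssvm_fit(X, y)
acc = (lssvm_predict(X, alpha, b, X) == y).mean()
```

Because the whole fit is one `np.linalg.solve`, training is fast, at the cost of losing the sparsity of a standard SVM solution (every sample gets a nonzero coefficient).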
9. Practical research shows that there are many speech emotion recognition methods. A GMM-based speech emotion recognition method is introduced, including its advantages and its remaining problems and shortcomings; these issues are examined and some remedies are given.
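In the GMM approach referred to above, one Gaussian mixture model is trained per emotion class, and an utterance is assigned to the class whose model gives it the highest likelihood. A minimal sketch using synthetic 2-D features as stand-ins for real acoustic frames (the class names and all numbers are illustrative assumptions):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Toy frame-level feature vectors (stand-ins for e.g. MFCC frames),
# one array of frames per emotion class.
train = {
    "angry":   rng.normal([2.0, 2.0], 0.5, size=(200, 2)),
    "neutral": rng.normal([-2.0, -2.0], 0.5, size=(200, 2)),
}

# One GMM per emotion, trained only on that emotion's frames.
models = {emo: GaussianMixture(n_components=2, random_state=0).fit(F)
          for emo, F in train.items()}

def classify(frames):
    # Utterance label = emotion whose GMM gives the highest
    # average log-likelihood over the utterance's frames.
    return max(models, key=lambda emo: models[emo].score(frames))

test_utt = rng.normal([2.0, 2.0], 0.5, size=(50, 2))  # drawn from "angry"
pred = classify(test_utt)
```

This per-class generative setup is also where the method's noted weaknesses show up: each GMM models only its own class, so confusable emotions with overlapping feature distributions degrade accuracy.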
11. The origins and main research content of speech emotion recognition are introduced, and the state of research at home and abroad is summarized; the extraction of speech emotion features and the modeling algorithms for emotion classifiers are analyzed in detail; finally, future directions for emotion recognition are discussed.
14. Speech emotion recognition (SER) in noisy environments is a vital issue in artificial intelligence (AI). In this paper, reconstruction of the speech samples removes the added noise. Acoustic features extracted from the reconstructed samples are selected to build an optimal feature subset with better emotion discriminability. A multiple-kernel (MK) support vector machine (SVM) classifier, solved by semi-definite programming (SDP), is adopted in the SER procedure. The proposed method is demonstrated on the Berlin Database of Emotional Speech. Recognition accuracies of the original, noisy, and reconstructed samples, classified by both single-kernel (SK) and MK classifiers, are compared and analyzed. The experimental results show that the proposed method is effective and robust in the presence of noise.
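The SDP step that learns the kernel weights is the heavy part of the MK method; the combination itself is simple. The sketch below builds a fixed-weight convex combination of two base kernels and feeds it to an SVM with a precomputed kernel — the weights (0.6/0.4), the base-kernel choices, and the synthetic data are illustrative assumptions, whereas the paper learns the weights by SDP:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(1.0, 0.6, (40, 4)),    # toy class-1 feature vectors
               rng.normal(-1.0, 0.6, (40, 4))])  # toy class-0 feature vectors
y = np.array([1] * 40 + [0] * 40)

def combined_kernel(A, B, w=(0.6, 0.4)):
    # Convex combination of base kernels. A convex combination of valid
    # kernels is itself a valid kernel, so SVC can use it directly.
    return (w[0] * rbf_kernel(A, B, gamma=0.5)
            + w[1] * polynomial_kernel(A, B, degree=2))

clf = SVC(kernel="precomputed").fit(combined_kernel(X, X), y)
pred = clf.predict(combined_kernel(X, X))  # rows: new samples, cols: training samples
acc = (pred == y).mean()
```

With `kernel="precomputed"`, prediction needs the kernel between the new samples and the original training samples, which is why `combined_kernel` takes two arguments.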
15. Emotion recognition is a hot research topic in modern intelligent systems. The technique is pervasively used in autonomous vehicles, remote medical services, and human–computer interaction (HCI). Traditional speech emotion recognition algorithms generalize poorly because they assume that training and testing data come from the same domain and share the same distribution. In practice, however, speech data are acquired from different devices and recording environments, so the data may differ significantly in language, emotion types, and labels. To solve this problem, we propose a bimodal fusion algorithm for speech emotion recognition in which facial expression and speech information are optimally fused. We first combine a CNN and an RNN to achieve facial emotion recognition. We then use MFCC features to convert the speech signal into images, so that an LSTM and a CNN can recognize speech emotion. Finally, we use a weighted decision fusion method to fuse the facial-expression and speech results. Comprehensive experimental results demonstrate that, compared with uni-modal emotion recognition, bimodal feature-based emotion recognition achieves better performance.
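The weighted decision fusion step described above can be sketched in a few lines: each branch outputs a per-class probability vector, and the fused posterior is their weighted sum. The weight 0.6 and the toy probability vectors below are assumptions for illustration; the paper tunes the weights:

```python
import numpy as np

def weighted_decision_fusion(p_face, p_speech, w_face=0.6):
    # Late fusion: weighted sum of the per-class probability vectors
    # from the facial branch and the speech branch.
    p = w_face * p_face + (1.0 - w_face) * p_speech
    return int(np.argmax(p))

# Toy per-class posteriors over [neutral, happy, angry] from each branch.
p_face = np.array([0.2, 0.7, 0.1])    # CNN+RNN facial branch (assumed values)
p_speech = np.array([0.3, 0.4, 0.3])  # MFCC+LSTM/CNN speech branch (assumed)
label = weighted_decision_fusion(p_face, p_speech)  # index of fused class
```

Fusing at the decision level like this keeps the two branches fully independent, so either can be retrained or replaced without touching the other.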
16. To improve emotion recognition accuracy and overcome the limitations of using speech features or surface electromyography (sEMG) features alone, an automatic emotion recognition model integrating speech and sEMG features is proposed. The speech and sEMG signals are first preprocessed and the relevant features extracted from each; support vector machines are then trained separately on the speech and sEMG features to build per-modality emotion classifiers and obtain per-modality recognition results; finally, these results are fed into another support vector machine that determines the weight coefficients of the two feature types and produces the final recognition result. Simulations on two standard emotional speech databases show that, compared with other emotion recognition models, the proposed model substantially improves recognition accuracy and provides a new research tool for human–computer-interaction emotion recognition systems.
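The two-stage scheme above resembles stacking: per-modality SVMs produce intermediate outputs, and a second SVM learns how to weigh them. A sketch under assumed synthetic features — the real model uses extracted speech and sEMG features, and sklearn's `SVC` here merely stands in for the SVMs the paper trains:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n = 60
y = np.array([0] * (n // 2) + [1] * (n // 2))

# Toy stand-ins for the two modalities' feature vectors: class 1 is
# shifted relative to class 0 by an assumed amount in each modality.
X_speech = rng.normal(0, 1, (n, 5)) + y[:, None] * 1.5  # speech features
X_emg = rng.normal(0, 1, (n, 3)) + y[:, None] * 1.0     # sEMG features

# Stage 1: one SVM per modality, producing per-class probabilities.
svm_speech = SVC(probability=True, random_state=0).fit(X_speech, y)
svm_emg = SVC(probability=True, random_state=0).fit(X_emg, y)

# Stage 2: a second SVM learns how to weigh the two modalities' outputs.
meta_in = np.hstack([svm_speech.predict_proba(X_speech),
                     svm_emg.predict_proba(X_emg)])
fusion = SVC().fit(meta_in, y)
acc = (fusion.predict(meta_in) == y).mean()
```

Letting a trained model set the fusion weights, rather than fixing them by hand, is what allows the combined system to favor whichever modality is more reliable for a given emotion.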