20 similar references found.
1.
This paper first introduces the components of a speech emotion recognition system and then reviews the state of research on emotion features and recognition algorithms: it analyzes the principal speech emotion features, describes representative recognition algorithms and hybrid models, and compares them. Finally, likely development trends for speech emotion recognition technology are pointed out.
2.
Speech emotion recognition (SER) systems identify emotions from the human voice in areas such as smart healthcare, vehicle driving, call centers, automatic translation systems, and human-machine interaction. In the classical SER process, discriminative acoustic feature extraction is the most important and challenging step, because discriminative features influence classifier performance and decrease computational time. Nonetheless, current handcrafted acoustic features suffer from limited capability and accuracy when constructing an SER system for real-time implementation. Therefore, to overcome the limitations of handcrafted features, a variety of deep learning techniques has been proposed in recent years for automatic feature extraction in emotion prediction from speech signals. However, to the best of our knowledge, no in-depth review is available that critically appraises and summarizes the existing deep learning techniques, with their strengths and weaknesses, for SER. Hence, this study aims to present a comprehensive review of deep learning techniques for SER, together with their uniqueness, benefits and limitations. Moreover, this review presents speech processing techniques, performance measures and publicly available emotional speech databases, and discusses the significance of the findings of the primary studies. Finally, it presents open research issues and challenges that need significant research effort and enhancement in the field of SER systems.
3.
Recent years have witnessed great progress in speech emotion recognition using deep convolutional neural networks (DCNNs). To improve performance further, a novel feature fusion method is proposed. As the convolutional layers go deeper, the features of traditional DCNNs gradually become more abstract, which may not be the best representation for speech emotion recognition; on the other hand, shallow features carry only global information, without the detail extracted by deeper convolutional layers. Based on these observations, we design a deep and shallow feature fusion convolutional network that combines features from different levels of the network for speech emotion recognition, allowing both deep and shallow features to be fully exploited. Experiments on the popular Berlin dataset show that the proposed network further improves the speech emotion recognition rate, demonstrating its effectiveness.
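The abstract does not give the architecture in detail; a minimal PyTorch sketch of the idea, with layer sizes and the pooling/concatenation scheme chosen here as assumptions, might look like this:

```python
import torch
import torch.nn as nn

class DeepShallowFusionNet(nn.Module):
    """Concatenates a pooled shallow conv feature with a pooled deep
    conv feature before classification (all layer sizes are assumptions)."""
    def __init__(self, n_classes=7):
        super().__init__()
        self.shallow = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.deep = nn.Sequential(
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.pool = nn.AdaptiveAvgPool2d(1)      # global average pooling
        self.classifier = nn.Linear(16 + 64, n_classes)

    def forward(self, spec):                     # spec: (B, 1, freq, time)
        s = self.shallow(spec)                   # shallow, less abstract feature
        d = self.deep(s)                         # deeper, more abstract feature
        fused = torch.cat([self.pool(s).flatten(1),
                           self.pool(d).flatten(1)], dim=1)
        return self.classifier(fused)
```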
4.
The fuzzy cognitive map (FCM), a graph-based analysis method, has already been applied to data classification. To improve its classification accuracy in speech emotion recognition, fused-FCM methods are proposed, covering both feature-level and decision-level fusion. Both schemes are analyzed in detail, and the traditional FCM's numeric output is converted into a probabilistic output, providing preliminary recognition results on a common scale for different features. On this basis, an adaptive-weight decision-level fusion method is proposed that fully accounts for the differences in the classifier's recognition accuracy across features. Experiments show that the proposed fused-FCM methods achieve better classification performance than any single feature or single classifier, while greatly reducing the confusion between emotions.
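As a hedged numpy sketch of the two ideas the abstract names: softmax is used here as one plausible way to turn numeric FCM outputs into probabilities, and per-feature validation accuracy as the adaptive weight; both choices are assumptions, not the paper's exact formulas.

```python
import numpy as np

def fcm_to_probs(fcm_scores):
    """Map one FCM's numeric per-class outputs to probabilities
    (softmax is an assumed choice of mapping)."""
    e = np.exp(fcm_scores - fcm_scores.max())
    return e / e.sum()

def adaptive_decision_fusion(score_sets, accuracies):
    """Fuse per-feature FCM outputs at the decision level, weighting each
    feature by its (validation) recognition accuracy."""
    w = np.asarray(accuracies, dtype=float)
    w /= w.sum()
    probs = np.stack([fcm_to_probs(s) for s in score_sets])  # (n_feats, n_classes)
    return int(np.argmax(w @ probs))  # index of the winning emotion class
```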
5.
Multimedia Tools and Applications - Automatic emotion recognition from speech signals is one of the important research areas. Many speech emotion recognition methods have been proposed, among which...
6.
This paper reviews the research framework of speech emotion recognition, which comprises four parts: emotion description models, emotional speech databases, feature extraction and dimensionality reduction, and emotion classification and regression algorithms. It summarizes discrete emotion models, dimensional emotion models and the one-way mapping between the two; draws out criteria for selecting an emotional speech database; refines the taxonomy of speech emotion features and lists common feature extraction tools; and finally distills the characteristics of common algorithms for feature extraction and for emotion classification and regression, summarizes progress in deep learning, raises open problems for the field, and forecasts its development trends.
7.
Multimedia Tools and Applications - Research in emotion recognition seeks to develop insights into the variances of features of emotion in one common domain. However, automatic emotion recognition...
8.
To cope with the real-time and uncertain nature of speech signals, a method based on evidence-credibility information entropy and dynamic prior weights is proposed to improve the basic probability assignment function of classical Dempster-Shafer (D-S) evidence theory. Because emotion features recognize different emotional states with different accuracy, the speech emotion features are grouped into classes. The recognition results of each feature class are then fused at the decision level using the improved D-S evidence theory, realizing speech emotion recognition based on multiple classes of emotion features and achieving fine-grained recognition. Worked examples verify the fast convergence and interference resistance of the improved algorithm, and comparative experiments demonstrate the effectiveness and stability of the class-wise-feature approach.
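The improved basic probability assignment is specific to the paper, but the decision-level fusion step rests on Dempster's rule of combination. A minimal sketch for masses restricted to singleton emotion hypotheses (a simplifying assumption) is:

```python
import numpy as np

def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions over the same singleton
    hypotheses (no compound sets); K measures the conflicting mass."""
    joint = m1 * m2                 # agreement on each hypothesis
    K = 1.0 - joint.sum()           # total conflicting mass
    if np.isclose(K, 1.0):
        raise ValueError("totally conflicting evidence")
    return joint / (1.0 - K)

# fuse evidence from several feature classes, then decide
masses = [np.array([0.6, 0.3, 0.1]),   # e.g. prosodic feature class
          np.array([0.5, 0.2, 0.3])]   # e.g. spectral feature class
fused = masses[0]
for m in masses[1:]:
    fused = dempster_combine(fused, m)
print(fused.argmax())               # index of the recognized emotion
```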
9.
To strengthen the fusion of different emotion features and the robustness of speech emotion recognition models, a neural network architecture, DBM-LSTM, is proposed. A deep restricted Boltzmann machine fuses the different emotion features through its feature-reconstruction mechanism, and long short-term memory units model the short-time features over longer time spans, improving the robustness of the recognition model. Classification experiments on the Berlin emotional speech database show that, compared with traditional recognition models, DBM-LSTM is better suited to multi-feature speech emotion recognition, improving the best recognition result by 11%.
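The exact architecture is not spelled out in the abstract; a rough sketch that pairs a single restricted Boltzmann machine (scikit-learn's BernoulliRBM, standing in for the deep RBM) with a PyTorch LSTM classifier, under those stand-in assumptions, could be:

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.neural_network import BernoulliRBM

# 1) fuse concatenated per-frame emotion features by RBM reconstruction
frames = np.random.rand(500, 120)            # (n_frames, concatenated feats in [0,1])
rbm = BernoulliRBM(n_components=64, n_iter=20).fit(frames)
fused = rbm.transform(frames)                # (n_frames, 64) fused representation

# 2) model the short-time features over long spans with an LSTM
lstm = nn.LSTM(input_size=64, hidden_size=128, batch_first=True)
head = nn.Linear(128, 7)                     # 7 Berlin emotion classes
x = torch.tensor(fused, dtype=torch.float32).unsqueeze(0)  # (1, T, 64)
_, (h_n, _) = lstm(x)
logits = head(h_n[-1])                       # utterance-level emotion scores
```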
10.
Speech emotion datasets are small and high-dimensional, and traditional recurrent neural networks (RNNs) suffer from vanishing long-range dependencies while convolutional neural networks (CNNs) focus on local information, so the latent relations among frames of the input sequence are not fully exploited. To address this, a neural network combining multi-head attention (MHA) and a support vector machine (SVM), MHA-SVM, is proposed for speech emotion recognition (SER). The raw audio is first fed into the MHA network to train its parameters and obtain the MHA classification results; the raw audio is then passed through the pretrained MHA to extract features; finally, the features produced after the fully connected layer are classified by an SVM to give the MHA-SVM result. After a thorough evaluation of the effect of the number of heads and layers in the MHA module, MHA-SVM reaches a recognition accuracy of up to 69.6% on the IEMOCAP dataset. The results indicate that, compared with RNN- and CNN-based models, an end-to-end model based on the MHA mechanism is better suited to SER tasks.
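The two-stage pipeline can be sketched as follows; the encoder sizes, pooling, and the use of frame sequences rather than raw samples are assumptions, since the abstract leaves them unspecified:

```python
import torch
import torch.nn as nn
from sklearn.svm import SVC

class MHAEncoder(nn.Module):
    """Self-attention encoder; sizes and mean-pooling are assumptions."""
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.proj = nn.Linear(1, d_model)            # frames -> d_model
        self.mha = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.fc = nn.Linear(d_model, d_model)        # "fully connected layer"

    def forward(self, x):                            # x: (B, T, 1)
        h = self.proj(x)
        h, _ = self.mha(h, h, h)                     # self-attention over frames
        return self.fc(h.mean(dim=1))                # pooled utterance feature

# stage 1 (not shown): train MHAEncoder plus a softmax head end-to-end
# stage 2: freeze the pretrained encoder, extract features, fit the SVM
enc = MHAEncoder().eval()
with torch.no_grad():
    feats = enc(torch.randn(32, 200, 1)).numpy()     # 32 dummy utterances
labels = [i % 4 for i in range(32)]                  # 4 IEMOCAP classes (dummy)
svm = SVC(kernel="rbf").fit(feats, labels)
```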
11.
Recognition of emotion in speech has recently matured into one of the key disciplines in speech analysis, serving next-generation human-machine interaction and communication. However, unlike automatic speech recognition, emotion recognition from an isolated word or phrase is inappropriate for conversation, because a complete emotional expression may stretch across several sentences and may end on any word in a dialogue. In this paper, we present a segment-based emotion recognition approach for continuous Mandarin Chinese speech in which the unit of recognition is not a phrase or a sentence but an emotional expression in dialogue. To that end, we first evaluate the performance of several classifiers in short-sentence speech emotion recognition architectures; the experiments show that the WD-KNN classifier achieves the best accuracy among the five classification techniques for 5-class emotion recognition. We then implement a continuous Mandarin Chinese speech emotion recognition system, built on WD-KNN, with an emotion radar chart that represents the intensity of each emotion component in speech. The approach shows how emotions can be recognized from speech signals and, in turn, how emotional states can be visualized.
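WD-KNN's exact weighting scheme is not given in the abstract; as a stand-in, a plain distance-weighted KNN (an assumption, not the paper's classifier) can be run on segment-level features, with the class vote shares serving as the radar-chart intensities:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# X: segment-level acoustic feature vectors, y: 5 emotion labels (dummy data)
X, y = np.random.rand(100, 24), np.random.randint(0, 5, 100)
knn = KNeighborsClassifier(n_neighbors=7, weights="distance").fit(X, y)

# per-class vote shares double as intensities for an emotion radar chart
intensities = knn.predict_proba(np.random.rand(1, 24))[0]
print(dict(zip(knn.classes_, intensities.round(2))))
```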
12.
Emotion recognition is important for human-computer interaction. To improve recognition accuracy, speech and text features are fused: acoustic and prosodic features are used on the speech side, while bag-of-words (BoW) features based on an emotion lexicon and an N-gram model are used on the text side. Speech and text features are fused both at the feature level and at the decision level, and the two schemes are compared on four-class emotion recognition on IEMOCAP. Experiments show that fusing speech and text features outperforms single features, and that decision-level fusion outperforms feature-level fusion. With a convolutional neural network (CNN) classifier, decision-level fusion of speech and text features reaches an unweighted average recall (UAR) of 68.98%, surpassing the previous best result on the IEMOCAP dataset.
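The decision-level fusion the abstract favors can be sketched as a simple average of the two modalities' class posteriors, with UAR computed as macro-averaged recall; the equal weight and the dummy posteriors are placeholders:

```python
import numpy as np
from sklearn.metrics import recall_score

def fuse_decisions(p_speech, p_text, w=0.5):
    """Average the speech and text classifiers' posteriors per utterance."""
    return np.argmax(w * p_speech + (1 - w) * p_text, axis=1)

# dummy posteriors over the 4 IEMOCAP classes for 6 utterances
p_speech = np.random.dirichlet(np.ones(4), size=6)
p_text = np.random.dirichlet(np.ones(4), size=6)
y_true = np.array([0, 1, 2, 3, 1, 0])

y_pred = fuse_decisions(p_speech, p_text)
uar = recall_score(y_true, y_pred, average="macro")   # unweighted average recall
print(f"UAR = {uar:.4f}")
```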
13.
Because the expression of human emotion is shaped by culture and society, emotional speech features differ widely across languages, so a single-language speech emotion recognition model generalizes poorly. To address this problem, a multilingual speech emotion recognition method based on multi-task attention is proposed. By introducing language identification as an auxiliary task, the model learns each language's own emotional characteristics while learning the emotion features shared across languages, improving the cross-language generalization of the multilingual recognition model. Experiments on dimensional emotion corpora in two languages show that the proposed method improves the mean relative UAR over the baselines by 3.66%-5.58% on the Valence task and by 1.27%-6.51% on the Arousal task; on discrete emotion corpora in four languages, it improves the mean relative UAR by 13.43%-15.75%. The method therefore effectively extracts language-related emotion features and improves multilingual emotion recognition.
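A minimal multi-task sketch with a shared encoder, an emotion head and a language-ID auxiliary head; the sizes, the loss weight of 0.3 and the omission of the paper's attention mechanism are all assumptions:

```python
import torch
import torch.nn as nn

class MultiTaskSER(nn.Module):
    def __init__(self, feat_dim=40, n_emotions=4, n_langs=4):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, 128, batch_first=True)  # shared encoder
        self.emotion_head = nn.Linear(128, n_emotions)          # main task
        self.lang_head = nn.Linear(128, n_langs)                # auxiliary task

    def forward(self, x):                       # x: (B, T, feat_dim)
        _, h = self.encoder(x)
        h = h[-1]                               # (B, 128) utterance embedding
        return self.emotion_head(h), self.lang_head(h)

model = MultiTaskSER()
ce = nn.CrossEntropyLoss()
emo_logits, lang_logits = model(torch.randn(8, 100, 40))
emo_y, lang_y = torch.randint(0, 4, (8,)), torch.randint(0, 4, (8,))
loss = ce(emo_logits, emo_y) + 0.3 * ce(lang_logits, lang_y)  # 0.3 is assumed
loss.backward()
```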
14.
Multimedia Tools and Applications
15.
International Journal of Speech Technology - Speech emotion recognition is one of the fastest growing areas of interest in the field of affective computing. Emotion detection aids...
16.
Recognizing speakers in emotional conditions remains a challenging issue, since speaker states such as emotion affect the acoustic parameters used in typical speaker recognition systems. Thus, it is believed that knowledge of the current speaker emotion can improve speaker recognition in real-life conditions. Conversely, speech emotion recognition still has to overcome several barriers before it can be employed in realistic situations, as is already the case with speech and speaker recognition. One of these barriers is the lack of suitable training data, both in quantity and quality, especially data that allow recognizers to generalize across application scenarios (the 'cross-corpus' setting). In previous work, we have shown that, in principle, the use of synthesized emotional speech for model training can be beneficial for recognizing human emotions from speech. In this study, we aim to consolidate these first results in a large-scale cross-corpus evaluation on eight of the most frequently used human emotional speech corpora, namely ABC, AVIC, DES, EMO-DB, eNTERFACE, SAL, SUSAS and VAM, covering natural, induced and acted emotion as well as a variety of application scenarios and acoustic conditions. Synthesized speech is evaluated standalone as well as in joint training with human speech. Our results show that using synthesized emotional speech in acoustic model training can significantly improve the recognition of arousal from human speech in the challenging cross-corpus setting.
17.
The accuracy of speech emotion recognition depends largely on how well the features discriminate between emotions. Starting from the time-frequency properties of speech and drawing on the human auditory selective-attention mechanism, a speech emotion recognition algorithm based on spectrogram features is proposed. The algorithm first emulates auditory selective attention, segmenting the emotional spectrogram in time and frequency to form a speech emotion saliency map. From the saliency map, Hu invariant moments, texture features and selected spectrogram features are extracted as the main recognition features. Emotions are finally classified with a support vector machine (SVM). Recognition experiments on an emotional speech database show that the algorithm achieves high recognition rates and robustness, most notably for the practically important emotion of irritation. In addition, principal vector analysis across the emotion features shows that the selected features differ substantially from one another and are highly practical.
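The Hu-invariant-moment part of the feature set can be sketched with OpenCV on a spectrogram image; the saliency-map segmentation step is omitted here, and the log scaling of the moments is a conventional choice, not the paper's:

```python
import cv2
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import SVC

def hu_features(wave, fs=16000):
    """Hu invariant moments of a log-spectrogram rendered as an image."""
    _, _, S = spectrogram(wave, fs=fs)
    img = cv2.normalize(np.log1p(S), None, 0, 255,
                        cv2.NORM_MINMAX).astype(np.uint8)
    hu = cv2.HuMoments(cv2.moments(img)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)  # usual log scaling

# dummy training: one feature vector per utterance, then an SVM
X = np.array([hu_features(np.random.randn(16000)) for _ in range(20)])
y = np.random.randint(0, 4, 20)
clf = SVC(kernel="rbf").fit(X, y)
```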
18.
We propose an improved version of the brain emotional learning (BEL) model, trained via learning automata (LA), for speech emotion recognition. Inspired by the limbic system of the mammalian brain, the original BEL model is composed of two neural network components, namely the amygdala and the orbitofrontal cortex. In the modified model, named brain emotional learning based on learning automata (BELBLA), we employ stochastic LA in place of error back-propagation to train the BEL model, reducing the high computational complexity of the traditional gradient method and thereby enhancing performance. For the speech emotion recognition task, we extract the usual features, such as energy, pitch, formants, amplitude, zero-crossing rate and MFCCs, from average short-term signals of the emotional Berlin dataset. The experimental results show that BELBLA outperforms competitors such as hidden Markov models, Gaussian mixture models, k-nearest neighbors, support vector machines and artificial neural networks for this application.
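The learning-automata training is paper-specific, but the underlying BEL cell follows the textbook Moren/Balkenius structure. A sketch with the standard reward-based updates (an assumption; not the paper's LA scheme) is:

```python
import numpy as np

class BEL:
    """Minimal brain-emotional-learning cell (Moren/Balkenius-style)."""
    def __init__(self, n_inputs, alpha=0.1, beta=0.05):
        self.v = np.zeros(n_inputs)   # amygdala weights
        self.w = np.zeros(n_inputs)   # orbitofrontal-cortex weights
        self.alpha, self.beta = alpha, beta

    def forward(self, s):
        a = self.v @ s                # excitatory amygdala response
        o = self.w @ s                # inhibitory OFC response
        return a - o                  # model output E

    def update(self, s, reward):
        a = self.v @ s
        e = self.forward(s)
        self.v += self.alpha * s * max(0.0, reward - a)   # amygdala never unlearns
        self.w += self.beta * s * (e - reward)            # OFC corrects the error
```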
19.
Speech emotion computing has attracted wide attention at home and abroad, particularly in the area of emotional feature extraction. Empirical mode decomposition (EMD) is applied to emotional speech to obtain its first four intrinsic mode functions (IMFs), and the Hilbert transform of each of these IMFs yields its instantaneous frequency and instantaneous amplitude. Statistical features of these quantities are extracted and combined with the acoustic features of the emotional speech to form the emotion feature vector, which is then normalized. A support vector machine (SVM) is used to recognize four emotions: anger, happiness, sadness and calm. Experimental results show that the method recognizes these emotions well.
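A sketch of the feature extraction using the PyEMD and SciPy libraries; the particular statistics chosen here (mean and standard deviation) are an assumption, as the abstract does not list them:

```python
import numpy as np
from PyEMD import EMD                 # pip install EMD-signal
from scipy.signal import hilbert

def imf_stats(sig, fs=16000, n_imfs=4):
    """Statistics of instantaneous frequency/amplitude of the first 4 IMFs."""
    imfs = EMD().emd(sig)[:n_imfs]
    feats = []
    for imf in imfs:
        analytic = hilbert(imf)
        amp = np.abs(analytic)                               # instantaneous amplitude
        freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
        feats += [amp.mean(), amp.std(), freq.mean(), freq.std()]
    return np.array(feats)            # to be concatenated with acoustic features

feat_vec = imf_stats(np.random.randn(16000))
feat_vec = (feat_vec - feat_vec.mean()) / (feat_vec.std() + 1e-12)  # normalization
```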
20.
Emotion recognition in speech signals is currently a very active research topic that has attracted much attention within the engineering application area. This paper presents a new approach to robust emotion recognition in speech signals in noisy environments. Using a weighted sparse representation model based on maximum likelihood estimation, an enhanced sparse representation classifier is proposed for robust emotion recognition in noisy speech. The effectiveness and robustness of the proposed method are investigated on clean and noisy emotional speech. The method is compared with six typical classifiers: linear discriminant classifier, K-nearest neighbor, C4.5 decision tree, radial basis function neural networks, support vector machines and the sparse representation classifier. Experimental results on two publicly available emotional speech databases, the Berlin database and the Polish database, demonstrate the promising performance of the proposed method on robust emotion recognition in noisy speech, outperforming the other methods.
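A plain (unweighted) sparse representation classifier can be sketched with scikit-learn's Lasso; the paper's maximum-likelihood weighting for noise robustness is omitted, and the L1 penalty value is an assumption:

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_predict(D, labels, y, alpha=0.01):
    """Sparse representation classification: code the test vector y over
    dictionary D (columns = training samples), then pick the class whose
    atoms reconstruct y with the smallest residual."""
    D = D / np.linalg.norm(D, axis=0)            # unit-norm atoms
    code = Lasso(alpha=alpha, fit_intercept=False,
                 max_iter=5000).fit(D, y).coef_  # sparse coefficients
    residuals = {}
    for c in np.unique(labels):
        mask = (labels == c)
        residuals[c] = np.linalg.norm(y - D[:, mask] @ code[mask])
    return min(residuals, key=residuals.get)     # minimum-residual class
```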