Similar Articles
20 similar articles found (search time: 15 ms)
1.
To obtain richer emotional information and effectively recognize the emotional state of long utterances, a speech emotion recognition method based on D-S (Dempster-Shafer) evidence theory with multi-granularity segment fusion is proposed. Speech samples are divided using two segmentation schemes, each segment is classified with an SVM, and D-S evidence theory then performs decision-level fusion of the per-segment results, yielding an utterance-level recognition result for each segmentation scheme; these two results are fused once more to obtain the final decision. Experimental results show that the method has good overall recognition performance and effectively improves the speech emotion recognition rate.
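Dempster's rule of combination, which the segment-level decision fusion above relies on, can be sketched for the simple case where each SVM's output is a mass function over singleton emotion labels only; the labels and mass values below are illustrative assumptions, not data from the paper:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the same singleton emotion labels
    using Dempster's rule: multiply agreeing masses, renormalize by the
    non-conflicting mass."""
    labels = m1.keys()
    # Conflict mass: belief assigned to incompatible (different) labels
    conflict = sum(m1[a] * m2[b] for a in labels for b in labels if a != b)
    if conflict >= 1.0:
        raise ValueError("total conflict; masses cannot be combined")
    norm = 1.0 - conflict
    return {a: m1[a] * m2[a] / norm for a in labels}

# Two segment-level SVM outputs expressed as masses over singleton emotions
seg1 = {"angry": 0.6, "happy": 0.3, "sad": 0.1}
seg2 = {"angry": 0.5, "happy": 0.4, "sad": 0.1}
fused = dempster_combine(seg1, seg2)
```

Because both segments lean toward "angry", combination sharpens that belief above either input's mass.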

2.
SVM-Based Speech Emotion Recognition Algorithm   Cited by: 1 (self-citations: 0, others: 1)
To effectively improve the recognition accuracy of speech emotion recognition systems, an SVM-based speech emotion recognition algorithm is proposed. The algorithm extracts parameters such as energy, pitch frequency, and formants of the speech signal as emotional features, and applies an SVM (Support Vector Machine) to model and recognize the emotional signals. In emotion recognition experiments under a simulation environment, compared with the artificial-neural-network ACON (All Cl...
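The modeling step described above can be sketched with scikit-learn; the utterance-level feature values below (mean energy, mean F0, first two formants) and the two-emotion setup are synthetic stand-ins, not data from the paper:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical per-utterance features: [mean energy, mean F0 (Hz), F1, F2]
X_angry = rng.normal([0.8, 220, 700, 1800], [0.1, 20, 50, 80], (40, 4))
X_sad   = rng.normal([0.3, 140, 600, 1600], [0.1, 20, 50, 80], (40, 4))
X = np.vstack([X_angry, X_sad])
y = np.array([1] * 40 + [0] * 40)          # 1 = angry, 0 = sad

# Scale features (the raw ranges differ wildly), then fit an RBF-kernel SVM
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
pred = clf.predict([[0.85, 230, 710, 1820]])  # energetic, high-pitch sample
```

A high-energy, high-pitch test utterance falls on the "angry" side of the learned boundary.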

3.
To address the problem that the F-score feature selection algorithm cannot reveal mutual information between features and therefore cannot reduce dimensionality effectively, a decorrelation step is applied to improve F-score. Using the German emotional speech database EMO-DB, emotional speech features are extracted, and the feature subset with the best classification performance is selected according to the classification accuracy of a support vector machine (SVM). Compared with the original F-score feature selection algorithm, the improved algorithm achieves a substantially larger reduction of the candidate feature set, selects an effective feature subset, and obtains satisfactory speech emotion recognition results.
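The F-score criterion the entry starts from can be sketched as below, using the common per-feature definition (between-class scatter over within-class scatter for a binary labelling); the two-feature synthetic data are illustrative:

```python
import numpy as np

def f_score(X, y):
    """Per-feature F-score for binary labels y in {0, 1}: squared distance
    of each class mean from the global mean, over the sum of class variances."""
    Xp, Xn = X[y == 1], X[y == 0]
    num = (Xp.mean(0) - X.mean(0)) ** 2 + (Xn.mean(0) - X.mean(0)) ** 2
    den = Xp.var(0, ddof=1) + Xn.var(0, ddof=1)
    return num / den

rng = np.random.default_rng(1)
y = np.array([0] * 50 + [1] * 50)
# Feature 0 separates the classes; feature 1 is pure noise
informative = np.concatenate([rng.normal(0, 1, 50), rng.normal(3, 1, 50)])
noise = rng.normal(0, 1, 100)
X = np.column_stack([informative, noise])
scores = f_score(X, y)
```

The decorrelation step proposed in the entry would then discard candidates that are highly correlated with features already selected, since F-score alone cannot see such redundancy.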

4.
To improve emotion recognition accuracy and overcome the low recognition rates of single-modality features and traditional feature fusion methods, a multi-feature fusion emotion recognition method based on kernel canonical correlation analysis (MF-KCCA) is proposed. Speech prosodic features and fractional Fourier-domain facial expression features are extracted separately; exploiting the complementarity of the two feature types, KCCA fuses them and reduces the dimensionality of the feature vectors, and a nearest-neighbor classifier performs emotion classification and recognition. Simulation experiments on the Ryerson University (Canada) database show that MF-KCCA effectively improves the speech emotion recognition rate.

5.
A Hierarchical Support Vector Machine Method for Language Identification   Cited by: 2 (self-citations: 0, others: 2)
Support vector machines with generalized linear discriminant sequence kernels have been widely applied in language identification. Building on this, a hierarchical SVM method is proposed: the training speech is split into sets of segments of different durations, and the model trained on long segments is used to select data from the short-segment sets. Borrowing the idea of co-training, complementary feature parameters are used to train SVM models, and the recognition results of systems with different durations and features are fused, effectively improving system performance. Results on the 30-second condition of the NIST 2003 language recognition evaluation show that the proposed method effectively improves language identification performance, reducing the equal error rate (EER) from 6.3% to 4.5%.

6.
To further improve the accuracy of interaction in preschool-education dialogue robots, a recognition technique fusing facial-expression emotion and speech emotion is proposed, following the idea of multimodal fusion. To handle abnormal video frames in facial-expression input, a convolutional neural network detects faces, facial-expression features are extracted with Gabor wavelet transforms, and facial emotion is recognized with a residual network. To improve recognition accuracy and help the robot better understand children's emotions, MFCCs are extracted from continuous speech and continuous speech emotion is likewise recognized with a residual network; a multiple linear regression algorithm then fuses the facial and speech emotion recognition results. Validation on the AVEC2019 dataset shows that both facial emotion recognition and continuous speech emotion recognition achieve high accuracy, and that compared with traditional single-modality recognition, multimodal fusion achieves the highest concordance correlation coefficient, 0.77. It follows that multimodal emotion recognition can raise the level of emotion recognition in the interaction of preschool-education dialogue robots and improve their intelligence.

7.
To make effective use of local features of emotional words in speech, a speech emotion recognition method is proposed that fuses local emotional-word features with global utterance-level features. Relying on the acoustic feature library of a speech emotion lexicon, the method extracts local features such as whether an utterance contains emotional words and the density of emotional words, fuses them with global acoustic features, and then models and recognizes speech emotion with machine learning algorithms. Comparative experiments show that fusing local emotional-word features with global features achieves better results, and that introducing local features effectively improves speech emotion recognition accuracy.

8.
To effectively improve the recognition rate of speech emotion recognition systems, an improved support vector machine (SVM) algorithm is studied. The algorithm first uses a genetic algorithm to optimize the SVM penalty factor and the kernel-function parameter, and then uses the optimized parameters to model and recognize speech emotion. In experiments on the Berlin dataset with seven emotions and with the five commonly used emotions, recognition rates of 91.03% and 96.59% were obtained; on a Chinese emotional dataset, the recognition rate reached 97.67%. The experimental results show that the algorithm recognizes speech emotion effectively.
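The parameter-optimization step can be illustrated with scikit-learn; the paper tunes the penalty factor C and the kernel parameter with a genetic algorithm, while the sketch below substitutes an exhaustive grid search as a simple stand-in, on the Iris dataset rather than emotional speech:

```python
from sklearn.datasets import load_iris        # stand-in data, not speech
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# The paper optimizes C and the RBF kernel parameter gamma with a genetic
# algorithm; grid search over the same two hyperparameters plays that role here.
search = GridSearchCV(
    SVC(kernel="rbf"),
    {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1]},
    cv=3,
)
search.fit(X, y)
best = search.best_params_   # the (C, gamma) pair with best CV accuracy
```

The optimized pair would then be used to train the final emotion model, as in the entry's second step.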

9.
In HMM-based Chinese speech recognition, the recognition rate of easily confused speech remains low. After analyzing the inherent limitations of HMMs, this paper proposes a second-pass recognition method that applies an SVM on top of an HMM system to improve the recognition of confusable speech. A confidence-estimation stage is introduced to improve system performance and efficiency, and information obtained from Viterbi decoding is exploited to construct new classification features, which solves the difficulty standard SVMs have with variable-length data. Confidence estimation, classification-feature extraction, and SVM classifier construction within this two-stage structure are discussed in detail. Speech recognition experiments show that, compared with a hybrid HMM/SVM model, the proposed method markedly improves the recognition rate with little impact on recognition speed, indicating that the proposed HMM/SVM two-stage structure with confidence estimation is feasible for recognizing confusable speech.
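The cascade idea (trust the HMM output when its confidence is high, otherwise re-classify with the SVM second pass) can be sketched as below; the function names, the syllable labels, and the 0.9 threshold are illustrative assumptions, not the paper's settings:

```python
def two_stage_decode(hmm_label, hmm_confidence, svm_rescore, threshold=0.9):
    """Sketch of the HMM/SVM cascade: accept the first-pass HMM hypothesis
    when its confidence clears the threshold; otherwise hand the confusable
    case to an SVM second pass built on Viterbi-derived features."""
    if hmm_confidence >= threshold:
        return hmm_label        # confident first pass: no second stage needed
    return svm_rescore()        # low confidence: SVM re-classifies

# A confident first pass keeps the HMM label; a doubtful one defers to the SVM.
kept = two_stage_decode("zhi", 0.95, lambda: "chi")
deferred = two_stage_decode("zhi", 0.40, lambda: "chi")
```

Gating the second pass on confidence is what keeps the impact on recognition speed small: the SVM only runs on the minority of low-confidence, confusable cases.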

10.
Emotion recognition is important for human-computer interaction. To improve emotion recognition accuracy, speech and text features are fused. The speech features comprise acoustic and prosodic features; the text features comprise emotion-lexicon-based bag-of-words (BoW) features and an N-gram model. Speech and text features are fused at the feature level and at the decision level, and their performance on four-class emotion recognition on IEMOCAP is compared. Experiments show that fusing speech and text features outperforms either single feature type, and that decision-level fusion outperforms feature-level fusion. With a convolutional neural network (CNN) classifier, decision-level fusion of speech and text features achieves an unweighted average recall (UAR) of 68.98%, surpassing the previous best result on the IEMOCAP dataset.

11.
Feature fusion plays an important role in speech emotion recognition, improving classification accuracy by combining the most popular acoustic features for speech emotion recognition, such as energy, pitch, and mel-frequency cepstral coefficients. However, system performance is not optimal because of computational complexity, which arises from the high-dimensional, correlated feature set produced by feature fusion. In this paper, a two-stage feature selection method is proposed. In the first stage, appropriate features are selected and fused together for speech emotion recognition. In the second stage, optimal feature-subset selection techniques (sequential forward selection (SFS) and sequential floating forward selection (SFFS)) are used to eliminate the curse-of-dimensionality problem caused by the high-dimensional feature vector after fusion. Finally, the emotions are classified using several classifiers: Linear Discriminant Analysis (LDA), Regularized Discriminant Analysis (RDA), Support Vector Machine (SVM), and K Nearest Neighbor (KNN). The performance of the overall emotion recognition system is validated on the Berlin and Spanish databases in terms of classification rate. An optimal uncorrelated feature set is obtained using SFS and SFFS individually. Results reveal that SFFS is the better choice for feature-subset selection because SFS suffers from the nesting problem, i.e., it is difficult to discard a feature once it has been retained in the set. SFFS eliminates this nesting problem by not fixing the set at any stage but letting it float up and down during selection based on the objective function. Experimental results showed that the efficiency of the classifier improves by 15-20% with the two-stage feature selection method compared with the classifier's performance with feature fusion alone.
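Plain sequential forward selection, the first of the two subset-selection techniques named above, is available directly in scikit-learn; the floating variant (SFFS) is not in scikit-learn, though third-party packages such as mlxtend provide it. A minimal SFS sketch on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for a fused, partly redundant acoustic feature set
X, y = make_classification(n_samples=200, n_features=10, n_informative=3,
                           n_redundant=2, random_state=0)

# Greedily add one feature at a time, keeping whichever addition most
# improves cross-validated accuracy of the wrapped classifier (here KNN)
sfs = SequentialFeatureSelector(KNeighborsClassifier(n_neighbors=5),
                                n_features_to_select=3, direction="forward")
sfs.fit(X, y)
mask = sfs.get_support()   # boolean mask over the 10 input features
```

SFFS extends this loop with a conditional backward step after each addition, which is exactly how it escapes the nesting problem described in the abstract.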

12.
Current computational techniques for emotion recognition have been successful in associating emotional changes with EEG signals, so emotions can be identified and classified from EEG signals if appropriate stimuli are applied. However, automatic recognition is usually restricted to a small number of emotion classes, mainly due to the signal's features and noise, EEG constraints, and subject-dependent issues. To address these issues, this paper proposes a novel feature-based emotion recognition model for EEG-based Brain–Computer Interfaces. Unlike other approaches, our method explores a wider set of emotion types and incorporates additional features relevant to signal pre-processing and recognition classification tasks, based on a dimensional model of emotions: Valence and Arousal. It aims to improve the accuracy of the emotion classification task by combining mutual-information-based feature selection methods and kernel classifiers. Experiments using our approach for emotion classification, which combines efficient feature selection methods and efficient kernel-based classifiers on standard EEG datasets, show the promise of the approach compared with state-of-the-art computational methods.

13.
As a popular area of human-computer interaction, emotion recognition technology has been applied in medicine, education, safe driving, e-commerce, and other fields. Emotions are expressed mainly through facial expressions, voice, and speech, and the facial muscles, tone, and intonation involved differ across emotions, so determining emotion from a single-modality feature carries high uncertainty. Since emotional expression is perceived mainly through vision and hearing, this paper proposes a multimodal expression recognition algorithm based on the audio-visual perception system. Emotional features are extracted from the speech and image modalities separately, and several classifiers are designed for single-feature emotion classification experiments, yielding multiple single-feature expression recognition models. In the multimodal speech-and-image experiments, a late-fusion strategy is proposed; considering the weak dependence among the models, weighted voting is used for model fusion, producing a fused expression recognition model built from multiple single-feature models. Experiments on the AFEW dataset compare the fused model with the single-feature models and verify that multimodal emotion recognition based on the audio-visual perception system outperforms single-modality recognition.
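The weighted-voting late fusion described above can be sketched as follows; the label set, the per-model posteriors, and the weights (e.g. each model's validation accuracy) are illustrative assumptions:

```python
import numpy as np

LABELS = ["angry", "happy", "neutral", "sad"]

def weighted_vote(posteriors, weights):
    """Late fusion: weight each unimodal model's class posterior by its
    (assumed) reliability, sum, and pick the argmax of the fused vector."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                              # normalize weights to sum to 1
    fused = sum(wi * np.asarray(p) for wi, p in zip(w, posteriors))
    return LABELS[int(np.argmax(fused))], fused

audio_post = [0.10, 0.60, 0.20, 0.10]   # speech-model posterior (assumed)
image_post = [0.30, 0.35, 0.25, 0.10]   # facial-expression posterior (assumed)
label, fused = weighted_vote([audio_post, image_post], weights=[0.4, 0.6])
```

Because fusion happens on the models' outputs rather than their features, each unimodal model can be trained and tuned independently, which suits the weak inter-model dependence noted in the abstract.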

14.
The relationship between the dimensional space model of emotion and acoustic speech features, together with methods for automatic speech emotion recognition, is studied. The dimensional space model of basic emotions is introduced, emotional features corresponding to arousal and valence are extracted, and global statistical features are used to reduce the influence of textual differences on the emotional features. Recognition of emotional states such as anger, happiness, sadness, and calm is investigated: Gaussian mixture models are used to model the four basic emotions, and the optimal mixture order is set experimentally, fitting the probability distributions of the four emotions in the feature space well. Experimental results show that the selected speech features are suitable for recognizing the basic emotion categories, that Gaussian mixture models are effective for emotion modeling, and that in the two-dimensional emotion space the emotional features along the valence dimension play an important role in speech emotion recognition.
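The per-emotion GMM modeling with maximum-likelihood classification can be sketched with scikit-learn; the 2-D features, the two-emotion setup, and the mixture order below are illustrative stand-ins, not the paper's settings:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Hypothetical 2-D global-statistics features for two emotion classes
train = {
    "happy": rng.normal([2.0, 2.0], 0.5, (100, 2)),
    "sad":   rng.normal([-2.0, -2.0], 0.5, (100, 2)),
}

# One GMM per emotion; the mixture order is the knob the paper tunes
models = {emo: GaussianMixture(n_components=2, random_state=0).fit(X)
          for emo, X in train.items()}

def classify(x):
    """Assign the emotion whose GMM gives the highest log-likelihood."""
    return max(models, key=lambda emo: models[emo].score([x]))

pred = classify([1.8, 2.1])
```

Raising the mixture order lets each class density follow a more irregular distribution in feature space, at the cost of needing more training data per emotion.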

15.
Functional paralanguage includes considerable emotion information, and it is insensitive to speaker changes. To improve emotion recognition accuracy under the speaker-independent condition, a fusion method combining the functional paralanguage features with the accompanying paralanguage features is proposed for speaker-independent speech emotion recognition. With this method, functional paralanguages such as laughter, crying, and sighing are used to assist speech emotion recognition. The contributions of this work are threefold. First, an emotional speech database including six kinds of functional paralanguage and six typical emotions was recorded by our research group. Second, functional paralanguage is put forward to recognize speech emotions in combination with the accompanying paralanguage features. Third, a fusion algorithm based on confidences and probabilities is proposed to combine the functional paralanguage features with the accompanying paralanguage features for speech emotion recognition. We evaluate the usefulness of the functional paralanguage features and the fusion algorithm in terms of precision, recall, and F1-measure on the emotional speech database recorded by our research group. The overall recognition accuracy achieved for six emotions is over 67% in the speaker-independent condition using the functional paralanguage features.

16.
The recognition of the emotional state of speakers is a multi-disciplinary research area that has received great interest in recent years. One of the most important goals is to improve voice-based human–machine interaction. Several works in this domain use the prosodic features or the spectrum characteristics of the speech signal, with neural networks, Gaussian mixtures, and other standard classifiers. Usually, there is no acoustic interpretation of the types of errors in the results. In this paper, the spectral characteristics of emotional signals are used to group emotions based on acoustic rather than psychological considerations. Standard classifiers based on Gaussian Mixture Models, Hidden Markov Models, and Multilayer Perceptrons are tested. These classifiers have been evaluated with different configurations and input features in order to design a new hierarchical method for emotion classification. The proposed multiple-feature hierarchical method for seven emotions, based on spectral and prosodic information, improves performance over the standard classifiers and fixed features.

17.
陈师哲, 王帅, 金琴. 《软件学报》, 2018, 29(4): 1060-1070
Automatic emotion recognition is a very challenging problem with broad application value. This paper investigates multimodal emotion recognition in multi-cultural scenarios. Different emotional features are extracted from the speech-acoustic and facial-expression modalities, including traditional hand-crafted features and deep-learning-based features, and the modalities are combined through multimodal fusion; the emotion recognition performance of individual unimodal features and of multimodal feature fusion is compared. Experiments on the CHEAVD Chinese multimodal emotion dataset and the AFEW English multimodal emotion dataset, through cross-cultural emotion recognition studies, verify the important influence of cultural factors on emotion recognition. Three training strategies are proposed to improve recognition performance in multi-cultural scenarios: selecting models per culture, joint multi-cultural training, and joint multi-cultural training based on a shared emotion space. The last strategy, by separating cultural influence from emotional features, achieves the best results in both speech and multimodal emotion recognition.

18.
Automatic emotion recognition from speech signals is one of the important research areas, which adds value to machine intelligence. Pitch, duration, energy and Mel-frequency cepstral coefficients (MFCC) are the widely used features in the field of speech emotion recognition. A single classifier or a combination of classifiers is used to recognize emotions from the input features. The present work investigates the performance of the features of Autoregressive (AR) parameters, which include gain and reflection coefficients, in addition to the traditional linear prediction coefficients (LPC), to recognize emotions from speech signals. The classification performance of the features of AR parameters is studied using discriminant, k-nearest neighbor (KNN), Gaussian mixture model (GMM), back propagation artificial neural network (ANN) and support vector machine (SVM) classifiers and we find that the features of reflection coefficients recognize emotions better than the LPC. To improve the emotion recognition accuracy, we propose a class-specific multiple classifiers scheme, which is designed by multiple parallel classifiers, each of which is optimized to a class. Each classifier for an emotional class is built by a feature identified from a pool of features and a classifier identified from a pool of classifiers that optimize the recognition of the particular emotion. The outputs of the classifiers are combined by a decision level fusion technique. The experimental results show that the proposed scheme improves the emotion recognition accuracy. Further improvement in recognition accuracy is obtained when the scheme is built by including MFCC features in the pool of features.  相似文献   

19.
To improve the accuracy of emotion recognition fusing speech and text, a multimodal emotion recognition method based on a Transformer-ESIM (Transformer-enhanced sequential inference model) attention mechanism is proposed. Traditional recurrent neural networks suffer from long-term dependency problems when extracting sequential features from speech and text, and their inherently sequential nature cannot capture long-distance features; therefore a Tra...

20.
Most existing automatic emotion annotation methods extract a single recognition feature from either the acoustic layer or the linguistic layer alone; because the Yi language has many branch dialects and high complexity, automatic annotation using single-layer emotional features achieves low accuracy. Exploiting characteristics of Yi such as its rich emotional affixes, a two-layer feature fusion method is proposed: emotional features are extracted from the acoustic layer and the linguistic layer separately, the feature sequences are aligned by generating sequences and inserting units on demand, and the corresponding feature fusion and automatic annotation algorithms complete the automatic emotion annotation process. Using Yi speech and text data from a poverty-alleviation log database as samples, comparative experiments were run with three different classifiers. The results show that the choice of classifier has no obvious effect on the annotation results, while two-layer feature fusion markedly improves annotation accuracy, from 48.1% (acoustic layer) and 34.4% (linguistic layer) to 64.2% after fusion.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号