Similar Documents
 20 similar documents found (search time: 483 ms)
1.
In recent years, affective computing has been a hot research topic in academia. Speech emotion recognition is an important branch of affective computing, touching on artificial intelligence, pattern recognition, machine learning, and other fields. To address the complexity of feature mining in speech emotion recognition, this work uses association-rule mining to study the relationships between prosodic speech features and the emotions they convey. The main contributions are: (1) the concept of an emotional frequent itemset, tailored to the characteristics of speech emotion; (2) an association-rule-based prosodic feature extraction algorithm for speech emotion (PFEA_AR); (3) experiments on a Mandarin emotional speech dataset, achieving an 85% recognition rate, 10 percentage points higher than Fisher-criterion discriminant analysis. The results show that the features extracted by the association-rule algorithm reduce dimensionality while improving emotion-classification accuracy, validating their effectiveness.
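The abstract does not spell out PFEA_AR itself; as a rough illustration of the underlying idea, frequent (prosodic-feature, emotion) itemsets over discretized utterances could be counted as follows. The feature names, discretization, and support threshold are all hypothetical, not taken from the paper:

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Count itemsets of size 1-2 and keep those whose support meets min_support."""
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        for item in items:
            counts[(item,)] += 1
        for pair in combinations(items, 2):
            counts[pair] += 1
    n = len(transactions)
    return {s: c / n for s, c in counts.items() if c / n >= min_support}

# Each utterance: discretized prosodic features plus its emotion label.
transactions = [
    {"pitch=high", "energy=high", "emotion=angry"},
    {"pitch=high", "energy=high", "emotion=angry"},
    {"pitch=low",  "energy=low",  "emotion=sad"},
    {"pitch=high", "energy=low",  "emotion=happy"},
]
freq = frequent_itemsets(transactions, min_support=0.5)
```

Pairs such as ("emotion=angry", "pitch=high") that co-occur often enough survive the support cut and can then seed association rules.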

2.
Feature extraction is the key stage of an emotional speech recognition system and determines its overall performance. Traditional feature extraction assumes the speech signal is linear and short-time stationary, and is not adaptive. To address this, features are extracted nonlinearly with the ensemble empirical mode decomposition (EEMD) algorithm. EEMD decomposes the emotional speech signal into a set of intrinsic mode functions (IMFs); the correlation-coefficient method screens out the effective components, and the IMF energy (IMFE) features are computed from this set. Using the Berlin emotional speech database as the experimental data, the IMFE features, prosodic features, Mel-frequency cepstral coefficients, and their fusion were each fed to a support vector machine, and the recognition results were compared to validate the IMFE features. The results show that fusing the IMFE features with the acoustic features yields an average recognition rate of 91.67% and effectively distinguishes the emotional states.
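A minimal sketch of the IMFE step described above, assuming the IMFs have already been produced by an EEMD decomposition (here they are stood in for by synthetic components) and that the correlation threshold is a free parameter, not a value from the paper:

```python
import numpy as np

def imf_energy_features(signal, imfs, corr_threshold=0.1):
    """Keep IMFs correlated with the original signal; return their log-energies."""
    selected = []
    for imf in imfs:
        r = np.corrcoef(signal, imf)[0, 1]
        if abs(r) >= corr_threshold:
            selected.append(imf)
    return np.array([np.log(np.sum(imf ** 2) + 1e-12) for imf in selected])

# Stand-in for an EEMD decomposition: two oscillatory modes plus weak noise.
t = np.linspace(0, 1, 800)
imfs = [np.sin(2 * np.pi * 50 * t),
        0.5 * np.sin(2 * np.pi * 5 * t),
        0.01 * np.random.default_rng(0).standard_normal(800)]
signal = sum(imfs)
features = imf_energy_features(signal, imfs)
```

The weak-noise component falls below the correlation threshold, so the feature vector retains only the two meaningful modes.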

3.
To address the sensitivity of speaker recognition to emotional variation, a method that jointly learns segment-level and frame-level features is proposed. A long short-term memory network is used for the speaker recognition task, and its temporal outputs are taken as segment-level emotional speaker features, preserving the original information of the speech-frame features while strengthening the expression of emotion. A fully connected network then further learns the speaker information of each feature frame within the segment-level features, enhancing the frame-level representation; finally, the segment-level and frame-level features are concatenated into the final speaker feature to strengthen its representational power. Experiments on the Mandarin Affective Speech Corpus (MASC) validate the proposed method and explore how the number of frames in a segment-level feature and different emotional states affect emotional speaker recognition.

4.
To make effective use of the local features of emotion words in speech, a speech emotion recognition method is proposed that fuses emotion-word local features with utterance-level global features. Relying on the acoustic feature library of a speech emotion lexicon, it extracts local features such as whether an utterance contains emotion words and the emotion-word density, fuses them with global acoustic features, and then models and recognizes speech emotion with machine-learning algorithms. Comparative experiments show that fusing the local and global features yields better results, and that introducing the local features effectively improves recognition accuracy.
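The two local features named above (emotion-word presence and emotion-word density) can be illustrated with a toy lexicon lookup; the lexicon and tokens here are hypothetical and stand in for the paper's emotion lexicon:

```python
def emotion_word_features(tokens, lexicon):
    """Return (contains_emotion_word, emotion_word_density) for a token list."""
    hits = sum(1 for tok in tokens if tok in lexicon)
    density = hits / len(tokens) if tokens else 0.0
    return (hits > 0, density)

# Hypothetical emotion lexicon and utterance.
lexicon = {"happy", "angry", "sad", "wonderful", "terrible"}
tokens = "what a wonderful and happy day".split()
present, density = emotion_word_features(tokens, lexicon)
```

These scalars would then be appended to the global acoustic feature vector before classification.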

5.
Speech feature extraction and distance measures are an important research topic in speech recognition. This paper compares feature parameters such as short-time Fourier analysis, the discrete cosine transform, LPC coefficients, and cepstral coefficients, using the weighted likelihood ratio (WLR) distance, the COSH distance, the CEP (cepstral) distance, and the Euclidean distance between LPC coefficients within LPC analysis. Experiments on the ten Mandarin digits show that the WLR and CEP distances in LPC analysis give the best recognition results.
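The CEP distance compared above is the Euclidean distance between LPC-derived cepstra. A sketch using the standard LPC-to-cepstrum recursion follows; the coefficient values are purely illustrative, not taken from the paper's digit experiments:

```python
import math

def lpc_to_cepstrum(a, n_cep):
    """Cepstrum of 1/A(z) for A(z) = 1 + sum_k a[k-1] z^-k, via the recursion
    c_n = -a_n - sum_{k=1}^{n-1} (k/n) c_k a_{n-k}."""
    p = len(a)
    c = [0.0] * (n_cep + 1)          # c[0] slot unused
    for n in range(1, n_cep + 1):
        acc = -a[n - 1] if n <= p else 0.0
        for k in range(1, n):
            if n - k <= p:
                acc -= (k / n) * c[k] * a[n - k - 1]
        c[n] = acc
    return c[1:]

def cep_distance(c1, c2):
    """Euclidean distance between two cepstral vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(c1, c2)))

c_a = lpc_to_cepstrum([-0.9, 0.2], 8)   # illustrative LPC coefficient sets
c_b = lpc_to_cepstrum([-0.7, 0.1], 8)
d = cep_distance(c_a, c_b)
```

For a single pole at 0.9 the recursion reproduces the closed form c_n = 0.9^n / n, which is a quick sanity check.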

6.
Prosodic features are among the main carriers of emotional information in the speech signal. To support research on emotional speech synthesis, this paper extracts and analyzes the prosodic features of Mandarin emotional speech, builds a prosody prediction model for emotional speech with a generalized regression neural network, and predicts prosodic features from the contextual information of the test-set texts. The experimental results show that the prosodic features of emotional speech are predicted well.

7.
Speech emotion recognition using both global and temporal-structure features   Cited by: 6 (self-citations: 1, other citations: 6)
Building on emotion analysis with global features, this paper proposes the temporal structure of the vowels in an emotional utterance as a new feature and, since different utterances contain different numbers of vowels, proposes three normalization schemes: zero padding, segment-mean padding, and preceding-mean padding. Speech emotion features were analyzed on 1000 utterances carrying four emotions (joy, anger, surprise, sadness) collected from 10 speakers. The results show that combining global and temporal features, padding the temporal features with the preceding mean, and recognizing emotion with a modified quadratic discriminant function (MQDF) achieves an average emotion recognition rate of 94%.

8.
To overcome the limitations of linear parameters in characterizing different emotion types, multifractal theory is introduced into speech emotion recognition. By analyzing the multifractal properties of speech under different emotional states, the multifractal spectrum parameters and the generalized Hurst exponent are extracted as new emotional feature parameters and combined with traditional acoustic features for recognition with a support vector machine (SVM). The results show that introducing the nonlinear parameters effectively improves the accuracy and stability of the recognition system compared with methods using only traditional linear features, offering a new direction for speech emotion recognition.

9.
To strengthen the fusion of different emotional features and the robustness of the speech emotion recognition model, a neural network architecture, DBM-LSTM, is proposed. The feature-reconstruction principle of a deep restricted Boltzmann machine fuses the different emotional features, and long short-term memory units model the short-time features over longer spans, improving robustness. Classification experiments on the Berlin emotional speech database show that, compared with traditional recognition models, DBM-LSTM is better suited to multi-feature speech emotion recognition, improving the best recognition result by 11%.

10.
Sudden vocal events such as laughter, crying, and sighing (termed functional paralanguage) carry rich emotional information, yet utterances containing them tend to have low overall emotion recognition rates because the bursts interfere with recognition. To address this, a speech emotion recognition method that fuses functional paralanguage is proposed. The method first automatically detects functional paralanguage in the utterance to be recognized and, based on the detection result, separates it out, yielding two relatively clean signals: a functional-paralanguage signal and a conventional speech signal. The emotional information of the two signals is then fused with an adaptive weighting scheme, improving both the recognition rate for the utterance and the robustness of the system. Experiments on an emotional corpus containing six kinds of functional paralanguage and six typical emotions show a speaker-independent average recognition rate of 67.41%, which is 4.2, 2.8, and 2.4 percentage points higher than linear weighted fusion, Dempster-Shafer (DS) evidence theory, and Bayesian fusion respectively, and 8.08 percentage points higher than before fusion; the method is thus accurate and robust for speaker-independent speech emotion recognition.

11.
Recognition of emotion in speech has recently matured into one of the key disciplines in speech analysis, serving next-generation human-machine interaction and communication. However, unlike automatic speech recognition, emotion recognition from an isolated word or phrase is ill-suited to conversation, because a complete emotional expression may stretch across several sentences and may end on any word in a dialogue. In this paper, we present a segment-based emotion recognition approach to continuous Mandarin Chinese speech, in which the unit of recognition is not a phrase or a sentence but an emotional expression in dialogue. To that end, we first evaluate the performance of several classifiers in short-sentence speech emotion recognition architectures; the experiments show that the WD-KNN classifier achieves the best accuracy for 5-class emotion recognition among the five classification techniques. We then implement a continuous Mandarin Chinese speech emotion recognition system based on WD-KNN with an emotion radar chart, which represents the intensity of each emotion component in speech. The proposed approach shows how emotions can be recognized from speech signals and, in turn, how emotional states can be visualized.

12.
葛磊  强彦  赵涓涓 《软件学报》2016,27(S2):130-136
Speech emotion recognition is an important research topic in human-computer interaction, and a speech emotion recognition system for autism intervention therapy can aid the rehabilitation of autistic children. However, the emotional features in speech signals are numerous and heterogeneous, and feature extraction is itself a challenging task that limits overall system performance. To address this, a speech emotion feature extraction algorithm is proposed that learns the emotional features in the speech signal automatically with an unsupervised autoencoder network. A three-layer autoencoder network extracts the speech emotion features, and the high-level features learned by the multi-layer encoder network are fed to an extreme learning machine classifier. The recognition rate is 84.14%, an improvement over traditional recognition methods based on hand-defined features.

13.
To improve the accuracy of emotion recognition fusing speech and text, a multimodal emotion recognition method based on a Transformer-ESIM (Transformer-enhanced sequential inference model) attention mechanism is proposed. Traditional recurrent neural networks suffer from long-term dependency problems when extracting features from speech and text sequences, and their inherently sequential nature cannot capture long-range features, so the method adopts a Tra...

14.
To remove the communication barrier between people with speech impairments and healthy speakers, a neural-network-based method for converting sign language to emotional speech is proposed. First, a gesture corpus, a facial-expression corpus, and an emotional speech corpus were built. Deep convolutional neural networks then perform gesture recognition and facial-expression recognition, and, with Mandarin initials and finals as synthesis units, a speaker-adaptive deep neural network acoustic model and a speaker-adaptive hybrid long short-term memory network acoustic model for emotional speech are trained. Finally, the context-dependent labels of the gesture semantics and the emotion labels corresponding to the facial expressions are fed into the emotional speech synthesis model to synthesize the corresponding emotional speech. Experiments show gesture and facial-expression recognition rates of 95.86% and 92.42% respectively, and an EMOS score of 4.15 for the synthesized emotional speech, which conveys emotion well and can support normal communication between people with speech impairments and healthy speakers.

15.
The speech signal consists of linguistic information and also paralinguistic information such as emotion. Modern automatic speech recognition systems have achieved high performance on neutral-style speech, but they cannot maintain their high recognition rate on spontaneous speech, so emotion recognition is an important step toward emotional speech recognition. The accuracy of an emotion recognition system depends on factors such as the type and number of emotional states, the selected features, and the type of classifier. In this paper, a modular neural-support vector machine (SVM) classifier is proposed, and its performance in emotion recognition is compared to Gaussian mixture model, multi-layer perceptron neural network, and C5.0-based classifiers. The most efficient features are also selected using the analysis of variations method. The proposed modular scheme is arrived at through a comparative study of different features and characteristics of the individual emotional states, with the aim of improving recognition performance. Empirical results show that even after discarding 22% of the features, the average emotion recognition accuracy improves by 2.2%; the proposed modular neural-SVM classifier also improves recognition accuracy by at least 8% compared to the simulated monolithic classifiers.
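The abstract does not detail its "analysis of variations" selection step; one common reading, a per-feature one-way analysis-of-variance ranking, can be sketched as follows on synthetic data (the top-k cut and the data are assumptions, not the paper's procedure):

```python
import numpy as np

def anova_f(x, y):
    """One-way ANOVA F statistic of a single feature across the classes in y."""
    classes = np.unique(y)
    grand = x.mean()
    ssb = sum(len(x[y == c]) * (x[y == c].mean() - grand) ** 2 for c in classes)
    ssw = sum(((x[y == c] - x[y == c].mean()) ** 2).sum() for c in classes)
    return (ssb / (len(classes) - 1)) / (ssw / (len(x) - len(classes)))

def rank_features(X, y, keep):
    """Rank features by F value and keep the `keep` highest-scoring indices."""
    f_vals = np.array([anova_f(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(f_vals)[::-1][:keep]

rng = np.random.default_rng(1)
y = np.repeat([0, 1, 2], 30)
X = rng.standard_normal((90, 4))
X[:, 2] += y            # only feature 2 carries class information
top = rank_features(X, y, keep=2)
```

Features with class-dependent means get large F values and survive the cut, which is how discarding low-ranked features can leave accuracy intact.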

16.
To address the low recognition rate and poor reliability of bimodal emotion recognition frameworks, feature-level fusion of the two most important modalities for emotion recognition, speech and facial expression, is studied. A prior-knowledge-based feature extraction method and a VGGNet-19 network extract features from the preprocessed audio and video signals respectively; the features are fused by direct concatenation and reduced with PCA, and a BLSTM network is used for modeling to complete the emotion recognition. The framework was tested on the AViD-Corpus and SEMAINE databases and compared with a traditional feature-level fusion framework and with frameworks based on VGGNet-19 or BLSTM alone. The results show a lower root-mean-square error (RMSE) and a higher Pearson correlation coefficient (PCC), validating the proposed method.

17.
Extraction of emotional features is an important aspect of speech emotion recognition. Owing to the limitations of traditional signal-processing methods, the conventional acoustic features, frequency-domain features in particular, are inaccurate and do not characterize the emotional properties of speech well, so recognition rates are low. Here, emotional speech is processed with the Hilbert-Huang transform (HHT) to obtain its Hilbert marginal energy spectrum. Based on a Mel-scale comparison of the marginal energy spectra of different emotions, a new set of emotional features is proposed: Mel-frequency marginal energy coefficients (MFEC), Mel-frequency subband spectral centroid (MSSC), and Mel-frequency subband spectral flatness (MSSF). Five emotions (sadness, happiness, boredom, anger, and neutral) are recognized with a support vector machine (SVM). The results show that the new emotional features extracted by this method give good recognition performance.
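Under the usual definitions of subband spectral centroid and flatness (the paper's exact MSSC/MSSF formulation may differ), the computation for one Mel subband of the marginal energy spectrum can be sketched as:

```python
import numpy as np

def subband_centroid_flatness(band_energy, band_freqs):
    """Spectral centroid and flatness of one subband of a marginal spectrum."""
    p = band_energy + 1e-12                              # avoid log(0)
    centroid = np.sum(band_freqs * p) / np.sum(p)        # energy-weighted frequency
    flatness = np.exp(np.mean(np.log(p))) / np.mean(p)   # geometric / arithmetic mean
    return centroid, flatness

band_freqs = np.linspace(100.0, 200.0, 64)   # one hypothetical Mel subband
flat_band = np.ones(64)                      # spectrally flat energy in that band
c, f = subband_centroid_flatness(flat_band, band_freqs)
```

A flat band yields flatness close to 1 and a centroid at the middle of the band; tonal, peaky bands push flatness toward 0, which is what makes these features discriminative.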

18.
《Advanced Robotics》2013,27(1-2):47-67
Depending on the emotion of speech, the meaning of the utterance or the intention of the speaker differs; therefore, speech emotion recognition, as well as automatic speech recognition, is necessary for precise communication between humans and robots in human-robot interaction. In this paper, a novel feature extraction method based on separating phoneme classes is proposed for speech emotion recognition. In feature extraction, the signal variation caused by different sentences usually overrides the emotion variation and lowers the performance of emotion recognition. Because the proposed method extracts features from the parts of speech that correspond to limited ranges of the spectral center of gravity (CoG) and formant frequencies, the effects of phoneme variation on the features are reduced. Based on the range of CoG, obstruent sounds are discriminated from sonorant sounds; the sonorants are further categorized into four classes by the resonance characteristics revealed by the formant frequencies. The results show that on corpora from 30 different speakers, the proposed method improves emotion recognition accuracy over other methods at the 99% significance level. Furthermore, the method was applied to extract several features, including prosodic and phonetic features, and was implemented on 'Mung' robots as an emotion recognizer for users.

19.
Functional paralanguage carries considerable emotion information and is insensitive to speaker changes. To improve emotion recognition accuracy under speaker-independent conditions, a fusion method combining functional-paralanguage features with the accompanying paralanguage features is proposed for speaker-independent speech emotion recognition; with this method, functional paralanguage such as laughter, crying, and sighing is used to assist speech emotion recognition. The contributions of this work are threefold. First, an emotional speech database including six kinds of functional paralanguage and six typical emotions was recorded by our research group. Second, functional paralanguage is used to recognize speech emotion in combination with the accompanying paralanguage features. Third, a fusion algorithm based on confidences and probabilities is proposed to combine the two kinds of features for speech emotion recognition. The usefulness of the functional-paralanguage features and the fusion algorithm is evaluated in terms of precision, recall, and F1-measure on the recorded database; the overall recognition accuracy for six emotions exceeds 67% in the speaker-independent condition using the functional-paralanguage features.
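The confidence-and-probability fusion algorithm itself is not reproduced in the abstract; one simple reading, with fusion weights derived from per-stream confidences (an assumption here, not the paper's formula), looks like:

```python
def fuse_posteriors(p_func, p_speech, conf_func, conf_speech):
    """Fuse two posterior distributions with confidence-derived weights."""
    w = conf_func / (conf_func + conf_speech)
    fused = [w * a + (1 - w) * b for a, b in zip(p_func, p_speech)]
    total = sum(fused)
    return [x / total for x in fused]           # renormalize to a distribution

# Hypothetical posteriors over (happy, angry, sad); the functional-paralanguage
# stream detected laughter and is the more confident of the two.
p_func = [0.7, 0.2, 0.1]
p_speech = [0.3, 0.4, 0.3]
fused = fuse_posteriors(p_func, p_speech, conf_func=0.9, conf_speech=0.5)
```

The more confident stream pulls the fused decision toward its own argmax, which is the intended behavior when a burst like laughter is reliably detected.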

20.
The F-score feature selection algorithm cannot reveal mutual information between features and therefore cannot reduce dimensionality effectively. To address this, F-score is improved with a decorrelation step. On the German emotional speech database EMO-DB, after extracting speech emotion features, the feature subset with the best classification performance is selected according to the classification accuracy of a support vector machine (SVM). Compared with the original F-score algorithm, the improved algorithm achieves a substantially larger dimensionality reduction, selects an effective feature subset, and obtains good speech emotion recognition results.
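The two-class F-score and a greedy decorrelation pass of the kind described above can be sketched as follows; the correlation threshold, the scores, and the synthetic data are illustrative assumptions:

```python
import numpy as np

def f_score(x_pos, x_neg):
    """Two-class F-score of a single feature: between-class separation over
    within-class variance."""
    m = np.r_[x_pos, x_neg].mean()
    num = (x_pos.mean() - m) ** 2 + (x_neg.mean() - m) ** 2
    den = x_pos.var(ddof=1) + x_neg.var(ddof=1)
    return num / den

def select_decorrelated(X, scores, max_corr=0.8):
    """Greedily keep high-scoring features whose correlation with every
    already-kept feature stays below max_corr."""
    kept = []
    for j in np.argsort(scores)[::-1]:
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < max_corr for k in kept):
            kept.append(j)
    return kept

rng = np.random.default_rng(2)
x0 = rng.standard_normal(100)
# Feature 1 is a near-duplicate of feature 0; feature 2 is independent.
X = np.c_[x0, x0 + 0.01 * rng.standard_normal(100), rng.standard_normal(100)]
kept = select_decorrelated(X, scores=np.array([3.0, 2.0, 1.0]))
```

The redundant copy is dropped even though its F-score is high, which is exactly the dimensionality reduction plain F-score ranking cannot achieve.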


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号