Similar Literature
20 similar records found (search time: 140 ms)
1.
Research on Extracting Emotional Features from Speech Signals (total citations: 5; self-citations: 0; by others: 5)
This paper analyzes the temporal structure, amplitude structure, and fundamental-frequency structure of speech signals carrying four emotions: joy, anger, surprise, and sadness. By comparing them with neutral, emotion-free speech, the distribution patterns of emotional features across the different emotions are summarized, providing theoretical data of practical value for emotional signal processing and recognition.

2.
To address the sharp performance drop of speaker recognition systems when the test speech carries emotional variation, and the lack of sufficient emotional speech for training speaker models, this paper proposes a speaker recognition method based on pitch-driven clustering of emotional speech, which makes effective use of the small amount of emotional speech available to the system. The method sets different pitch thresholds for male and female speakers, clusters the cepstral features according to these thresholds, and builds a model for each speaker in each pitch interval. During feature matching, the score of the pitch-interval model with the maximum likelihood is taken as that speaker's score. Tests on a Chinese emotional speech corpus show that the method achieves a higher recognition rate than both the traditional Gaussian-mixture-model speaker recognition approach trained on neutral speech and the structured training approach.

3.
Analysis of Emotional Features Contained in Speech Signals (total citations: 2; self-citations: 0; by others: 2)
This paper analyzes the temporal structure, amplitude structure, fundamental-frequency structure, and formant structure of speech signals carrying four emotions: joy, anger, surprise, and sadness. By comparing them with neutral, emotion-free speech, the distribution patterns of emotional features across the different emotions are summarized, providing theoretical data of practical value for emotional signal processing and recognition.
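The kind of comparison described in this abstract, contrasting the pitch structure of emotional speech against a neutral baseline, can be sketched as follows. The F0 contours and the observed direction of change are illustrative assumptions, not data from the paper:

```python
# Sketch: compare simple F0-structure statistics of an "emotional" contour
# against a neutral baseline. The contours below are made-up illustrative
# values (F0 in Hz per frame, 0 = unvoiced), not data from the paper.

def f0_stats(f0_frames):
    """Return (mean, range) of an F0 contour, ignoring unvoiced (0 Hz) frames."""
    voiced = [f for f in f0_frames if f > 0]
    return sum(voiced) / len(voiced), max(voiced) - min(voiced)

neutral = [120, 125, 0, 118, 122, 130, 0, 119]   # flat, narrow contour
angry   = [180, 240, 0, 160, 260, 300, 0, 150]   # raised, wide contour

n_mean, n_range = f0_stats(neutral)
a_mean, a_range = f0_stats(angry)

# Anger typically raises mean F0 and widens its dynamic range
# relative to neutral speech.
print(a_mean > n_mean, a_range > n_range)  # → True True
```

The same per-utterance statistics could be tabulated per emotion to obtain the kind of distribution patterns the abstract refers to.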

4.
A Statistical Analysis of Pitch-Parameter Variation in Emotional Speech across Multiple Languages (total citations: 2; self-citations: 0; by others: 2)
田岚  姜晓庆  侯正信 《控制与决策》2005,20(11):1311-1313
To study the variation of pitch parameters, the prosodic features that most strongly reflect emotional information in speech, seven typical emotional states and fixed sentence patterns were selected, and statistics on mean pitch, pitch dynamic range, pitch jitter, and related parameters were computed for multilingual speech samples (Chinese, English, and Japanese) from the same speaker. The statistics show that the pitch structure of emotional speech changes markedly with emotional state, and that this change is fairly consistent across languages.
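The three pitch statistics named in this abstract can be computed from a frame-level F0 contour roughly as below. Jitter is approximated here as the mean absolute frame-to-frame F0 difference normalized by mean F0; the exact definition used in the paper may differ:

```python
def pitch_params(f0):
    """Return (mean F0, F0 dynamic range, jitter) over the voiced frames
    of a contour; 0 Hz frames are treated as unvoiced and skipped."""
    voiced = [f for f in f0 if f > 0]
    mean = sum(voiced) / len(voiced)
    rng = max(voiced) - min(voiced)
    # Approximate jitter: mean absolute successive difference / mean F0.
    diffs = [abs(b - a) for a, b in zip(voiced, voiced[1:])]
    jitter = (sum(diffs) / len(diffs)) / mean
    return mean, rng, jitter

# Illustrative contour, not data from the paper.
mean_f0, f0_range, f0_jitter = pitch_params([100, 110, 0, 120, 130])
print(mean_f0, f0_range)  # → 115.0 30
```

Computing these per emotional state and per language would yield the cross-language comparison the abstract describes.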

5.
Speech emotion recognition is a challenging research topic in speech processing with broad application prospects. This work explores one of its key problems: generating effective feature representations for emotion recognition. Emotional feature representations are generated from four perspectives: (1) low-level acoustic features, including energy-, pitch-, voice-quality-, and spectrum-related features, together with statistical features derived from these low-level features; (2) features obtained by converting cepstral acoustic features into distances to emotion-specific Gaussian mixture models; (3) features obtained by transforming acoustic features against an acoustic dictionary; (4) acoustic features transformed into Gaussian supervectors. Experiments compare the standalone performance of each feature type for emotion recognition, explore fusing different features, and compare the acoustic features across emotional datasets in several languages (the IEMOCAP English corpus, the CASIA Chinese corpus, and the Berlin German corpus). On IEMOCAP, the system reaches a recognition accuracy of 71.9%, surpassing the best previously reported result on that dataset.

6.
Speech emotion recognition has become an important component of next-generation human-computer interaction, and extracting emotion-related features from the speech signal is one of its central challenges. To address the limited accuracy of single features, this paper proposes a combined feature-level and decision-level fusion method over acoustic and semantic features. First, acoustic features are extracted: (1) a low-level hand-crafted feature set, including spectrum-, voice-quality-, energy-, and pitch-related features plus high-level statistics over them; (2) deep features extracted by a DNN from spectral features; (3) deep features extracted by a CNN from filter-bank features. Semantic features are extracted by a speech recognition module based on the Listen-Attend-Spell (LAS) model. The three classes of acoustic features are then fused with the semantic features at the feature level, with a Huffman-tree construction used to determine the fusion order. Finally, decision-level fusion is applied over the recognition results of the fused features and the four original feature types; the method reaches 76.2% classification accuracy on the IEMOCAP dataset.

7.
Multi-Feature Speech Emotion Recognition Based on Decision Trees (total citations: 1; self-citations: 1; by others: 0)
Data mining has become an important direction in computer science in recent years. The goal of this work is to analyze various speech emotion features in depth, determine how much each feature contributes to speech emotion recognition, and find a suitable data-mining model to exploit the effective features. The methods and techniques used by previous researchers for speech emotion analysis are reviewed; building on and extending them, a speech emotion corpus was constructed and the relevant speech-signal features were successfully extracted. The roles of commonly used emotional features such as pitch, amplitude energy, and formants in speech emotion recognition were then studied. Combining the decision-tree classifier common in data mining with multiple speech features, a speech emotion recognition model was built; extensive experiments on emotional speech data gave satisfactory recognition results.

8.
Human vocal emotion is an abstract, dynamic process that is difficult to describe with static information, and the rise of artificial intelligence has brought new opportunities for speech emotion recognition. Starting from the concept of speech emotion recognition and its historical development at home and abroad, this survey summarizes recent research from five aspects. It introduces speech emotion features and the significance of the various feature parameters for recognition; it then details the classification and characteristics of speech emotion databases, the classification, strengths, and weaknesses of recognition algorithms, the applications of speech emotion recognition, and the challenges the field currently faces. Based on the state of the art, it closes with an outlook on future research and development.

9.
To make effective use of the local features of emotion words in speech, this paper proposes a speech emotion recognition method that fuses emotion-word local features with utterance-level global features. Relying on the acoustic feature library of a speech emotion lexicon, local features such as whether an utterance contains emotion words and the density of emotion words are extracted and fused with the global acoustic features; machine learning algorithms then model and recognize the speech emotion. Comparative experiments show that fusing local emotion-word features with global features yields better results, and that introducing the local features effectively improves recognition accuracy.
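The local features named in this abstract (presence of emotion words and emotion-word density) can be sketched as below. The lexicon, the transcript, and the global feature values are illustrative assumptions, not from the paper:

```python
def emotion_word_features(tokens, lexicon):
    """Local features: [contains an emotion word (0/1), emotion-word density]."""
    hits = sum(1 for t in tokens if t in lexicon)
    return [1.0 if hits > 0 else 0.0, hits / len(tokens)]

# Illustrative emotion lexicon and transcript (not from the paper).
LEXICON = {"happy", "sad", "angry", "wonderful"}
local = emotion_word_features("what a happy and wonderful day".split(), LEXICON)

# Hypothetical utterance-level global acoustics: mean F0, F0 range, mean energy.
global_feats = [215.0, 150.0, 0.6]

# Feature-level fusion: concatenate local and global features
# before handing them to a classifier.
fused = local + global_feats
print(local[0])  # → 1.0
```

A classifier trained on `fused` rather than `global_feats` alone is the comparison the abstract reports.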

10.
Research on Emotion Recognition in Speech Signals (total citations: 25; self-citations: 0; by others: 25)
赵力  钱向民  邹采荣  吴镇扬 《软件学报》2001,12(7):1050-1055
This paper proposes a method for recognizing emotional features in speech signals. A total of 300 utterances carrying joy, anger, surprise, and sadness were collected from five speakers, and ten emotional features were extracted from this material. Three emotion recognition methods based on principal component analysis are proposed; using these methods, recognition performance roughly comparable to normal human performance was achieved.

11.
Context-Independent Multilingual Emotion Recognition from Speech Signals (total citations: 3; self-citations: 0; by others: 3)
This paper presents and discusses an analysis of multilingual emotion recognition from speech using database-specific emotional features. Recognition was performed on the English, Slovenian, Spanish, and French InterFace emotional speech databases. The InterFace databases include several neutral speaking styles and six emotions: disgust, surprise, joy, fear, anger, and sadness. Speech features for emotion recognition were determined in two steps: in the first, low-level features were defined, and in the second, high-level features were calculated from the low-level ones. The low-level features comprise pitch, the derivative of pitch, energy, the derivative of energy, and the duration of speech segments; the high-level features are statistical representations of the low-level features. Database-specific emotional features were selected from the high-level features that carry the most information about emotion in speech. Speaker-dependent and monolingual emotion recognisers were defined, as well as multilingual recognisers, and emotion recognition was performed using artificial neural networks. Recognition accuracy was highest for speaker-dependent recognition, lower for monolingual recognition, and lowest for multilingual recognition. The database-specific emotional features are most suitable for multilingual emotion recognition: among the three settings, the gap between recognition with all high-level features and recognition with database-specific features is smallest for multilingual recognition, at 3.84%.
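The two-step feature scheme described in this abstract, low-level contours summarized into high-level statistical features, can be sketched as follows. The contour values and the particular statistics (mean, standard deviation, min, max) are illustrative; the paper's exact functionals may differ:

```python
def deltas(x):
    """First-order differences: a crude derivative of a contour."""
    return [b - a for a, b in zip(x, x[1:])]

def high_level(x):
    """Statistical representation of one low-level contour."""
    n = len(x)
    mean = sum(x) / n
    std = (sum((v - mean) ** 2 for v in x) / n) ** 0.5
    return [mean, std, min(x), max(x)]

# Illustrative low-level contours (not from the paper).
pitch = [110.0, 120.0, 130.0, 140.0]
energy = [0.2, 0.4, 0.3, 0.1]
low_level = {"pitch": pitch, "d_pitch": deltas(pitch),
             "energy": energy, "d_energy": deltas(energy)}

# High-level feature vector: statistics of every low-level stream.
features = [v for contour in low_level.values() for v in high_level(contour)]
print(len(features))  # → 16 (4 streams x 4 statistics)
```

Selecting the most emotion-informative entries of such a vector per database is what the abstract calls database-specific emotional features.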

12.
Mandarin Speech Emotion Recognition Based on PCA and SVM (total citations: 1; self-citations: 0; by others: 1)
蒋海华  胡斌 《计算机科学》2015,42(11):270-273
Selecting and extracting emotional features is a key step in speech emotion recognition, and no highly effective speech emotion features have yet been established. Therefore, on a Mandarin emotional corpus covering six emotions, effective emotional features were selected according to the characteristics that distinguish Mandarin from Western languages, including Mel-frequency cepstral coefficients, pitch, short-time energy, short-time average zero-crossing rate, and the first formant; various statistics of these were extracted and computed. Principal component analysis (PCA) was then applied for feature reduction, and a speech emotion recognition system based on a support vector machine (SVM) performed the classification. Experimental results show a higher average emotion recognition rate than several other notable studies, indicating that the feature selection, extraction, and modeling are reasonable and effective.
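The PCA step of such a pipeline (reducing the statistical feature vectors before the SVM classifier) can be sketched in pure Python with power iteration extracting the first principal axis. The toy feature matrix is illustrative, and a real system would keep several components and feed the projections to an SVM:

```python
def first_principal_axis(X, iters=200):
    """First principal axis of data matrix X (rows = samples),
    found by power iteration on the sample covariance matrix."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    Xc = [[row[j] - means[j] for j in range(d)] for row in X]
    cov = [[sum(r[a] * r[b] for r in Xc) / (n - 1) for b in range(d)]
           for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy, perfectly correlated 2-D features (illustrative only).
X = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]
axis = first_principal_axis(X)
# Project each sample onto the principal axis (the reduced feature).
projected = [sum(x * a for x, a in zip(row, axis)) for row in X]
```

For these perfectly correlated points the axis converges to (1/√2, 1/√2), so the projection preserves all the variance in one dimension.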

13.
An SVM-Based Speech Emotion Recognition Algorithm (total citations: 1; self-citations: 0; by others: 1)
To effectively improve the recognition accuracy of speech emotion recognition systems, an SVM-based speech emotion recognition algorithm is proposed. The algorithm extracts the energy, pitch, and formant parameters of the speech signal as emotional features, and uses a support vector machine (SVM) to model and recognize the emotional signal. In simulated emotion recognition experiments, the proposed algorithm, compared with the artificial-neural-network ACON (All Cl...

14.
Prosody conversion from neutral speech to emotional speech (total citations: 1; self-citations: 0; by others: 1)
Emotion is an important element in expressive speech synthesis. Unlike traditional discrete emotion simulations, this paper attempts to synthesize emotional speech using "strong", "medium", and "weak" classifications. Three models are tested: a linear modification model (LMM), a Gaussian mixture model (GMM), and a classification and regression tree (CART) model. The LMM directly modifies sentence F0 contours and syllabic durations according to acoustic distributions of emotional speech, such as F0 topline, F0 baseline, durations, and intensities. Further analysis shows that emotional speech is also related to stress and linguistic information. Unlike the linear modification method, the GMM and CART models try to map the subtle prosody distributions between neutral and emotional speech; the GMM uses only the acoustic features, while the CART model integrates linguistic features into the mapping. A pitch target model optimized to describe Mandarin F0 contours is also introduced. For all conversion methods, a deviation of perceived expressiveness (DPE) measure is created to evaluate the expressiveness of the output speech. The results show that the LMM performs worst of the three methods; the GMM method is more suitable for a small training set, while the CART method gives better emotional speech output when trained on a large context-balanced corpus. The methods discussed in this paper indicate ways to generate emotional speech in speech synthesis, and the objective and subjective evaluation processes are also analyzed. These results support the use of neutral-semantic-content text in databases for emotional speech synthesis.
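The LMM idea in this abstract, rescaling a neutral F0 contour toward the topline/baseline statistics of emotional speech, can be sketched as a per-frame linear mapping. The numeric ranges are illustrative assumptions, not values from the paper:

```python
def convert_f0(f0, src_base, src_top, tgt_base, tgt_top):
    """Linearly map each F0 value from the neutral range
    [src_base, src_top] to the emotional range [tgt_base, tgt_top]."""
    scale = (tgt_top - tgt_base) / (src_top - src_base)
    return [tgt_base + (f - src_base) * scale for f in f0]

# Illustrative neutral contour spanning 100-200 Hz, mapped to a
# hypothetical "angry" range of 150-300 Hz (raised and widened).
neutral_f0 = [100.0, 150.0, 200.0]
angry_f0 = convert_f0(neutral_f0, 100.0, 200.0, 150.0, 300.0)
print(angry_f0)  # → [150.0, 225.0, 300.0]
```

The GMM and CART models in the paper replace this single global linear map with learned, context-dependent mappings.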

15.
To better characterize emotional states in speech, intrinsic time-scale decomposition (ITD) is applied to speech emotion feature extraction. The first several proper rotation (PR) components are obtained from the speech signal, and the instantaneous parameters and correlation dimension of the PR components are extracted as new emotional feature parameters; combined with traditional features, a support vector machine (SVM) is used for speech emotion recognition experiments. The results show that introducing the PR feature parameters yields a marked improvement in recognition rate over the traditional-feature scheme.

16.
To characterize speech emotional states more completely and make up for the inadequacy of linear emotional feature parameters in describing different emotion types, phase-space reconstruction theory is introduced into speech emotion recognition. By analyzing the chaotic characteristics under different emotional states, Kolmogorov entropy and correlation dimension are extracted as new emotional feature parameters and combined with traditional speech features in SVM-based emotion recognition. Experimental results show that introducing the chaotic parameters improves accuracy over schemes using only the traditional physical features, offering a new avenue for speech emotion recognition research.

17.
To address the incomplete representation of speech emotion by any single speech feature, LSF parameters, which have good quantization and interpolation properties, are fused with MFCC parameters, which reflect the characteristics of human hearing, yielding a new feature: line-spectral-weighted MFCC (WMFCC). A Gaussian mixture model is further used to build a model space over this parameter, giving GW-MFCC model-space parameters that capture higher-dimensional detail and further improve emotion recognition performance. Validated on the Berlin emotional corpus, the new parameters improve the recognition rate by 5.7% and 6.9% over traditional MFCC and LSF, respectively. The results show that the proposed WMFCC and GW-MFCC parameters effectively represent speech emotion information and raise the speech emotion recognition rate.

18.
Emotion recognition from speech has emerged as an important research area in the recent past, and a review of existing work on emotional speech processing is useful for carrying out further research. This paper surveys the recent literature on speech emotion recognition, covering issues related to emotional speech corpora, the different types of speech features, and the models used to recognize emotion from speech. Thirty-two representative speech databases are reviewed from the point of view of their language, number of speakers, number of emotions, and purpose of collection, and the issues surrounding emotional speech databases used in emotion recognition are briefly discussed. The literature on the different features used for emotion recognition from speech is presented, and the importance of choosing appropriate classification models is discussed throughout the review. The important issues for further emotion recognition research, both in general and specifically in the Indian context, are highlighted wherever necessary.

19.
Speech Emotion Recognition Based on Neural Networks (total citations: 4; self-citations: 1; by others: 3)
The goal of this work is to analyze various speech emotion features in depth, identify those that contribute most to emotion recognition, and find suitable models to exploit the effective features. The methods and techniques used by previous researchers for speech emotion analysis are reviewed; building on and extending them, a speech emotion corpus was constructed and the relevant speech-signal features were successfully extracted. The roles of commonly used emotional features such as pitch, amplitude energy, and formants in speech emotion recognition were studied, with emphasis on MFCC and ΔMFCC; experiments show that feature selection improves the system's recognition performance to some degree. The processed spectral feature parameters were combined with a BP artificial neural network model to form a complete speech emotion recognition system, which achieved satisfactory recognition results.

20.
The speech signal carries linguistic information and also paralinguistic information such as emotion. Modern automatic speech recognition systems achieve high performance on neutral-style speech, but they cannot maintain their high recognition rate on spontaneous speech, so emotion recognition is an important step toward emotional speech recognition. The accuracy of an emotion recognition system depends on factors such as the type and number of emotional states, the selected features, and the type of classifier. In this paper, a modular neural-support vector machine (SVM) classifier is proposed, and its performance in emotion recognition is compared to Gaussian mixture model, multi-layer perceptron neural network, and C5.0-based classifiers. The most efficient features are selected using the analysis of variations method. The proposed modular scheme is derived from a comparative study of the features and characteristics of each individual emotional state, with the aim of improving recognition performance. Empirical results show that even after discarding 22% of the features, average emotion recognition accuracy improves by 2.2%; moreover, the proposed modular neural-SVM classifier improves recognition accuracy by at least 8% compared to the simulated monolithic classifiers.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号