Similar Articles
19 similar articles found
1.
Research on Speech Emotion Recognition Based on NAQ   (cited 1 time: 0 self-citations, 1 by others)
This paper studies the estimation of the glottal excitation with an iterative adaptive inverse filter and uses the normalized amplitude quotient (NAQ), a time-domain parameter of the glottal excitation, as the feature. For continuous speech in six different emotions, the F-ratio criterion is first applied to assess the feature's power to discriminate between emotions, and Gaussian mixture models are then used to model and recognize speech emotion. Using speech from the eNTERFACE'05 emotional speech database, recognition results based on sentence-level NAQ values, on vowel-segment NAQ values, and on human perception are compared. Experiments show that the vowel-segment NAQ value is a discriminative feature for speech emotion.
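As a rough illustration of the feature itself: the NAQ of one glottal cycle is the AC flow amplitude divided by the negative peak of the flow derivative, normalized by the pitch period. A minimal sketch, assuming the glottal flow of a single cycle has already been estimated (e.g. by IAIF, which is not implemented here):

```python
import numpy as np

def naq(glottal_flow, f0, fs):
    """Normalized amplitude quotient of one glottal cycle.

    glottal_flow: samples of the estimated glottal flow for the cycle,
    f0: fundamental frequency in Hz, fs: sampling rate in Hz.
    """
    d = np.diff(glottal_flow) * fs                   # flow derivative (units/s)
    f_ac = glottal_flow.max() - glottal_flow.min()   # AC flow amplitude
    d_peak = -d.min()                                # negative peak of derivative
    T = 1.0 / f0                                     # pitch period in seconds
    return f_ac / (d_peak * T)                       # dimensionless NAQ
```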

2.
To automatically learn emotion-related features point-to-point across the spatial and temporal dimensions of electroencephalogram (EEG) signals and to improve the accuracy of EEG emotion recognition, an EEG emotion feature learning and classification algorithm based on a convolutional neural network (CNN) is proposed, using the time-domain features, frequency-domain features, and their combinations from the EEG signals in the DEAP dataset. Shallow machine learning models, including ensemble decision trees, support vector machines, linear discriminant analysis, and Bayesian linear discriminant analysis, as well as the deep CNN model, are used for classification experiments on DEAP along the valence and arousal dimensions. The results show that, on the combined time- and frequency-domain features, the deep CNN model achieves the best two-class recognition performance to date in both dimensions, improving on the best traditional classifier, the ensemble decision tree, by 3.58% on valence and by 3.29% on arousal.
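A minimal PyTorch sketch of a CNN of the kind described, operating on a (channels × features) EEG map with a two-class output for one dimension (valence or arousal); the layer sizes are illustrative, not the paper's:

```python
import torch
import torch.nn as nn

class EEGEmotionCNN(nn.Module):
    """Small 2-D CNN over a (channels x features) EEG map."""
    def __init__(self, n_channels=32, n_features=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # two pooling stages shrink each spatial axis by 4
        self.fc = nn.Linear(32 * (n_channels // 4) * (n_features // 4), 2)

    def forward(self, x):              # x: (batch, 1, channels, features)
        h = self.conv(x)
        return self.fc(h.flatten(1))

logits = EEGEmotionCNN()(torch.randn(8, 1, 32, 128))   # dummy batch
```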

3.
This work explores how pitch features vary across emotional states. By analyzing speech signals carrying anger, happiness, and sadness, it summarizes the variation patterns of the fundamental frequency in emotional speech, defines 12-dimensional basic and extended pitch features for emotion recognition, and applies Gaussian mixture models to recognize the emotions. Recognition experiments yield good results.
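As a sketch of this pipeline, the snippet below computes illustrative pitch statistics (the paper's exact 12 dimensions are not specified in the abstract) and classifies with one scikit-learn GaussianMixture per emotion by maximum log-likelihood:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def pitch_features(f0):
    """Illustrative 12-D statistics of a voiced-frame F0 contour (Hz)."""
    d = np.diff(f0)
    return np.array([f0.mean(), f0.std(), f0.max(), f0.min(),
                     f0.max() - f0.min(), np.median(f0),
                     np.percentile(f0, 25), np.percentile(f0, 75),
                     d.mean(), d.std(), d.max(), d.min()])

def train(features_by_emotion, n_components=4):
    """One GMM per emotion, fit on that emotion's feature vectors."""
    return {emo: GaussianMixture(n_components).fit(X)
            for emo, X in features_by_emotion.items()}

def classify(models, x):
    """Pick the emotion whose GMM gives the highest log-likelihood."""
    return max(models, key=lambda emo: models[emo].score(x[None, :]))
```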

4.
Exploiting the strong speech-signal modeling capability of hidden Markov models and the good voice-conversion performance of Gaussian mixture models, a method combining the two to convert line spectral frequencies is proposed, with a theoretical derivation and an algorithm flow; prosodic features are also converted via Gaussian modeling. Simulation experiments on two recorded speech passages show that the converted speech has good naturalness and intelligibility, and ABX test results indicate that the speech produced by the proposed algorithm is perceptually closer to the target speaker's voice with a probability of 90.2%.
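The GMM half of such a system is commonly realized as joint-density GMM regression: fit a GMM on stacked (source, target) LSF frames and map a source frame to its conditional-mean target frame. A minimal sketch under that assumption; the HMM state conditioning from the paper is omitted:

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def fit_joint_gmm(X_src, Y_tgt, n_components=8):
    """Fit a GMM on stacked (source, target) LSF frames."""
    return GaussianMixture(n_components, covariance_type='full').fit(
        np.hstack([X_src, Y_tgt]))

def convert(gmm, x):
    """Map one source frame to its conditional-mean target frame."""
    d = x.size
    mu_x, mu_y = gmm.means_[:, :d], gmm.means_[:, d:]
    Sxx, Syx = gmm.covariances_[:, :d, :d], gmm.covariances_[:, d:, :d]
    w = np.array([gmm.weights_[k] * multivariate_normal.pdf(x, mu_x[k], Sxx[k])
                  for k in range(gmm.n_components)])
    w /= w.sum()                       # posterior of each mixture given x
    y = np.zeros(d)
    for k in range(gmm.n_components):
        y += w[k] * (mu_y[k] + Syx[k] @ np.linalg.solve(Sxx[k], x - mu_x[k]))
    return y
```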

5.
To strengthen the fusion of different emotion features and the robustness of speech emotion recognition models, a neural network architecture, DBM-LSTM, is proposed for speech emotion recognition. The feature-reconstruction principle of a deep restricted Boltzmann machine is used to fuse different emotion features, and long short-term memory units model the short-time features over long time spans, improving the robustness of the recognition model. Classification experiments on the Berlin emotional speech database show that, compared with traditional recognition models, the DBM-LSTM architecture is better suited to multi-feature speech emotion recognition, improving the best recognition result by 11%.
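A minimal PyTorch sketch of the LSTM half: frame-level features are projected (a plain linear layer stands in for the DBM fusion front end, which is not reproduced here), run through an LSTM, and classified from the final time step:

```python
import torch
import torch.nn as nn

class EmotionLSTM(nn.Module):
    """Long-time modeling of short-time frame features with an LSTM."""
    def __init__(self, feat_dim=39, hidden=128, n_emotions=7):
        super().__init__()
        self.proj = nn.Linear(feat_dim, hidden)   # stand-in for DBM fusion
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_emotions)

    def forward(self, x):             # x: (batch, frames, feat_dim)
        h, _ = self.lstm(torch.relu(self.proj(x)))
        return self.out(h[:, -1])     # classify from the last time step

logits = EmotionLSTM()(torch.randn(4, 300, 39))   # dummy utterance batch
```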

6.
Existing GMM-based speaker clustering methods mostly rely on the maximum a posteriori (MAP) criterion to adapt a class-specific Gaussian mixture model from a universal background model; with little adaptation data, however, the model is not accurate enough. To address this, two factor-analysis modeling approaches based on eigenvoice (EV) space and total variability (TV) space analysis are investigated; by modeling the variability space they reduce the number of parameters that must be estimated for the class GMMs. Results on the telephone speech data of the NIST 2008 Speaker Recognition Evaluation show that, relative to the MAP-based baseline, both the EV- and TV-space modeling methods reduce the clustering error rate substantially, and TV-space modeling achieves a lower clustering error rate than EV-space modeling.
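As a rough illustration of the eigenvoice idea, a low-dimensional variability space can be learned by PCA over stacked GMM mean supervectors, so that a new cluster's model is constrained to the mean plus a few factors. The sketch below uses random stand-in supervectors and does not reproduce the paper's EV/TV estimation:

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in: 200 GMM mean supervectors (stacked adapted means).
supervectors = np.random.randn(200, 1024)

pca = PCA(n_components=50).fit(supervectors)
V = pca.components_                     # (50, 1024) eigenvoice basis
w = pca.transform(supervectors[:1])     # low-dimensional factors of one cluster
constrained = pca.mean_ + w @ V         # supervector restricted to the EV space
```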

7.
A new feature for continuous speech emotion recognition is proposed: the normalized amplitude quotient (NAQ), a time-domain parameter of the glottal excitation in vowel segments. The method first estimates the glottal flow with iterative adaptive inverse filtering (IAIF), then uses the NAQ to describe the glottal opening and closing characteristics. Using speech in six emotions from the eNTERFACE'05 audio-visual emotional speech database, histograms and box plots are used to analyze the distribution of vowel-segment NAQ values and their power to discriminate between emotions; with the mean, variance, maximum, and minimum of each utterance's vowel-segment NAQ values as features, speech emotion recognition experiments are run with Gaussian mixture models (GMM) and the k-nearest-neighbor method. The results show that the NAQ feature has strong discriminative power for speech emotion.
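Given per-segment NAQ values (see the sketch under item 1), the utterance-level feature and the two classifier routes look roughly like this; the hyperparameters are illustrative:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import KNeighborsClassifier

def naq_utterance_feature(naq_values):
    """Mean, variance, max and min of the per-vowel-segment NAQ values
    of one utterance, as described in the abstract."""
    v = np.asarray(naq_values, dtype=float)
    return np.array([v.mean(), v.var(), v.max(), v.min()])

# k-NN route: one labelled 4-D vector per training utterance.
# knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

# GMM route: one model per emotion, decide by maximum log-likelihood.
# models = {e: GaussianMixture(4).fit(X_e) for e, X_e in by_emotion.items()}
# pred = max(models, key=lambda e: models[e].score(x[None, :]))
```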

8.
To address the incomplete representation of speech emotion by any single feature, the LSF parameters, which quantize and interpolate well, are fused with the MFCC parameters, which reflect human auditory characteristics, yielding a new line-spectral-weighted MFCC feature (WMFCC). A Gaussian mixture model is then used to build a model space over this parameter, producing GW-MFCC model-space parameters that capture higher-dimensional detail and further improve recognition performance. Validation on the Berlin emotional speech corpus shows recognition-rate gains of 5.7% and 6.9% over traditional MFCC and LSF, respectively. The results indicate that the proposed WMFCC and GW-MFCC parameters effectively represent speech emotion information and raise the speech emotion recognition rate.
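The abstract does not give the exact line-spectral weighting, so the sketch below only computes the two ingredient features: MFCC via librosa, and LSF from LPC coefficients by rooting the symmetric and antisymmetric polynomials, here simply concatenated per frame:

```python
import numpy as np
import librosa

def lsf(a):
    """Line spectral frequencies from LPC coefficients a = [1, a1, ..., ap]:
    angles of the unit-circle roots of P(z) and Q(z)."""
    p = len(a) - 1
    P = np.concatenate([a, [0.0]]) + np.concatenate([[0.0], a[::-1]])
    Q = np.concatenate([a, [0.0]]) - np.concatenate([[0.0], a[::-1]])
    ang = np.angle(np.concatenate([np.roots(P), np.roots(Q)]))
    return np.sort(ang[(ang > 1e-6) & (ang < np.pi - 1e-6)])[:p]

y = np.random.randn(16000)                          # stand-in speech signal
mfcc = librosa.feature.mfcc(y=y, sr=16000, n_mfcc=13)
a = librosa.lpc(y[:2048], order=12)                 # LPC of one frame
feat = np.concatenate([mfcc[:, 0], lsf(a)])         # fused frame feature
```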

9.
Emotion modeling is an important task in affective computing. Emotion modeling based on PAD theory is proposed for the visual recognition of facial emotion. Following Mehrabian's three-dimensional PAD emotion theory, an EBM (emotional block model) is built and an attempt is made to recognize atypical emotions. Experiments on both atypical and typical emotion recognition are carried out on the Cohn-Kanade dataset using Gabor features at 88 facial landmarks and the SVM algorithm, and typical-emotion recognition is compared with the basic emotion model. The results show that the PAD-based emotion model is reliable for recognizing both atypical and typical emotions, and that recognition rates are higher in emotion subspaces with high convergence than in those with low convergence.
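A minimal sketch of the feature step: Gabor magnitude responses sampled at landmark points and fed to an SVM. The eight orientations and kernel parameters are assumptions, and landmark detection is taken as given:

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def gabor_at_points(img, points, thetas=tuple(np.arange(0, np.pi, np.pi / 8))):
    """Gabor magnitude responses sampled at landmark (x, y) points."""
    feats = []
    for theta in thetas:
        kern = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5, 0)
        resp = cv2.filter2D(img.astype(np.float32), -1, kern)
        feats.extend(abs(resp[y, x]) for (x, y) in points)
    return np.array(feats)

img = np.random.randint(0, 255, (96, 96), dtype=np.uint8)   # stand-in face
pts = [(30, 40), (66, 40), (48, 70)]                        # stand-in landmarks
x_vec = gabor_at_points(img, pts)
# SVC(kernel='rbf').fit(X, labels) on vectors built from the 88 landmarks
```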

10.
Research on Chinese Spoken Command-Word Recognition in Noisy Environments Based on VQ/CDHMM   (cited 2 times: 0 self-citations, 2 by others)
Huang Ling, Pan Mengxian. Computer Engineering and Applications, 2003, 39(28): 106-108, 161
This paper studies a speech recognition method based on an improved VQ/HMM model and designs and implements a Chinese command-word recognition system on top of it. It investigates robust feature parameters and proposes new high-dimensional dynamic parameters based on MFCC and LPCC. Recognition experiments on clean speech and on speech at various signal-to-noise ratios analyze and compare how the feature type, the number of training states, and the Gaussian mixture order affect recognition performance. The conclusions are: under additive white noise, the high-dimensional dynamic parameters clearly improve the system's robustness; for short two-character Chinese command words, 4 states and a mixture order of 3 give good results; and fusing the complementary strengths of different feature parameters is a good way to improve system performance.
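Standard first- and second-order dynamic (delta) features over MFCC, of the kind such high-dimensional dynamic parameters build on, can be sketched with librosa; the paper's exact parameter construction is not reproduced:

```python
import numpy as np
import librosa

y = np.random.randn(16000)                  # stand-in for a spoken command word
mfcc = librosa.feature.mfcc(y=y, sr=16000, n_mfcc=13)
d1 = librosa.feature.delta(mfcc)            # first-order dynamics
d2 = librosa.feature.delta(mfcc, order=2)   # second-order dynamics
frames = np.vstack([mfcc, d1, d2])          # (39, n_frames) dynamic feature
```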

11.
Automatic emotion recognition from speech signals is one of the important research areas, which adds value to machine intelligence. Pitch, duration, energy and Mel-frequency cepstral coefficients (MFCC) are the widely used features in the field of speech emotion recognition. A single classifier or a combination of classifiers is used to recognize emotions from the input features. The present work investigates the performance of the features of Autoregressive (AR) parameters, which include gain and reflection coefficients, in addition to the traditional linear prediction coefficients (LPC), to recognize emotions from speech signals. The classification performance of the features of AR parameters is studied using discriminant, k-nearest neighbor (KNN), Gaussian mixture model (GMM), back propagation artificial neural network (ANN) and support vector machine (SVM) classifiers and we find that the features of reflection coefficients recognize emotions better than the LPC. To improve the emotion recognition accuracy, we propose a class-specific multiple classifiers scheme, which is designed by multiple parallel classifiers, each of which is optimized to a class. Each classifier for an emotional class is built by a feature identified from a pool of features and a classifier identified from a pool of classifiers that optimize the recognition of the particular emotion. The outputs of the classifiers are combined by a decision level fusion technique. The experimental results show that the proposed scheme improves the emotion recognition accuracy. Further improvement in recognition accuracy is obtained when the scheme is built by including MFCC features in the pool of features.
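Reflection coefficients fall out of the same Levinson-Durbin recursion that yields the LPC and the prediction-error gain, so they cost nothing extra to extract. A minimal NumPy sketch (not the authors' code):

```python
import numpy as np

def levinson(r, order):
    """Levinson-Durbin recursion: LPC coefficients a (a[0] = 1),
    reflection coefficients k, and prediction-error gain."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    k = np.zeros(order)
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + a[1:i] @ r[i - 1:0:-1]
        k[i - 1] = -acc / err
        a[1:i + 1] += k[i - 1] * a[i - 1::-1][:i]   # order-update of a
        err *= 1.0 - k[i - 1] ** 2
    return a, k, err

x = np.random.randn(400)                        # one analysis frame
r = np.correlate(x, x, 'full')[len(x) - 1:]     # autocorrelation, lags >= 0
lpc, refl, gain = levinson(r, order=10)         # refl, gain: the AR features
```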

12.
To improve speech emotion recognition accuracy, a two-stage feature selection method is applied to a multidimensional feature set built from basic acoustic features, jointly considering the intrinsic relationship between feature parameters and emotion classes so as to build an optimized feature subset with effective emotion separability. In the recognition stage, a binary-tree multi-classifier is designed to balance overall system performance against complexity; the SVM model is improved with a kernel fusion method, and a multi-kernel SVM is used to recognize the most easily confused emotions. The algorithm is validated on samples of five emotional states from the Berlin emotional speech database; the results show that combining two-stage feature selection with kernel fusion effectively improves recognition accuracy while retaining some robustness to noise.
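Kernel fusion for an SVM can be sketched as a convex combination of base kernels passed to a precomputed-kernel SVC; the single filter-style selection step below is a stand-in for the paper's two-stage criterion, and the kernel choices and weights are assumptions:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

# Toy data standing in for the acoustic feature set.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 40)), rng.integers(0, 2, 100)

# Selection stand-in (the paper's two-stage criterion is not reproduced).
X_sel = SelectKBest(f_classif, k=20).fit_transform(X, y)

def fused_kernel(A, B, w=0.5, gamma=0.1, degree=2):
    """Convex combination of an RBF and a polynomial kernel."""
    return (w * rbf_kernel(A, B, gamma=gamma)
            + (1 - w) * polynomial_kernel(A, B, degree=degree))

clf = SVC(kernel='precomputed').fit(fused_kernel(X_sel, X_sel), y)
pred = clf.predict(fused_kernel(X_sel, X_sel))   # train-set sanity check
```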

13.
In this work, spectral features extracted from sub-syllabic regions and pitch synchronous analysis are proposed for speech emotion recognition. Linear prediction cepstral coefficients, mel frequency cepstral coefficients and the features extracted from high amplitude regions of the spectrum are used to represent emotion specific spectral information. These features are extracted from the consonant, vowel and transition regions of each syllable to study the contribution of these regions toward recognition of emotions. Consonant, vowel and transition regions are determined using vowel onset points. Spectral features extracted from each pitch cycle are also used to recognize emotions present in speech. The emotions used in this study are: anger, fear, happy, neutral and sad. The emotion recognition performance using sub-syllabic speech segments is compared with the results of the conventional block processing approach, where the entire speech signal is processed frame by frame. The proposed emotion specific features are evaluated using a simulated emotion speech corpus, IITKGP-SESC (Indian Institute of Technology, KharaGPur-Simulated Emotion Speech Corpus). The emotion recognition results obtained using IITKGP-SESC are compared with the results of the Berlin emotion speech corpus. Emotion recognition systems are developed using Gaussian mixture models and auto-associative neural networks. The purpose of this study is to explore sub-syllabic regions to identify the emotions embedded in a speech signal, and if possible, to avoid processing the entire speech signal for emotion recognition without serious compromise in the performance.
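Pitch synchronous analysis of this kind can be sketched as one spectral vector per glottal cycle, with epoch (and vowel onset point) detection taken as given; the parameters are illustrative:

```python
import numpy as np
import librosa

def pitch_synchronous_mfcc(y, sr, epochs, n_mfcc=13, n_fft=512):
    """One MFCC vector per pitch cycle; each cycle spans two consecutive
    glottal epochs (epoch/VOP detection itself is assumed given)."""
    feats = []
    for start, end in zip(epochs[:-1], epochs[1:]):
        cycle = y[start:end]
        cycle = np.pad(cycle, (0, max(0, n_fft - len(cycle))))
        m = librosa.feature.mfcc(y=cycle, sr=sr, n_mfcc=n_mfcc,
                                 n_fft=n_fft, hop_length=n_fft, center=False)
        feats.append(m[:, 0])
    return np.array(feats)                    # (n_cycles, n_mfcc)
```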

14.
To handle the real-time and uncertain nature of speech signals, methods based on evidence-trust information entropy and dynamic prior weights are proposed to improve the basic probability assignment function of traditional Dempster-Shafer (D-S) evidence theory. Because different emotion features recognize different emotional states with different effectiveness, the speech emotion features are grouped into classes. Using the recognition results of each feature class, the improved D-S evidence theory performs decision-level data fusion, realizing speech emotion recognition based on multiple classes of emotion features and achieving fine-grained recognition. Worked examples verify the fast convergence and interference resistance of the improved algorithm, and comparative experiments demonstrate the effectiveness and stability of the class-based feature approach.
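Decision-level fusion by Dempster's rule over singleton emotion hypotheses can be sketched as below; the paper's entropy-based trust measure and dynamic prior weights are not reproduced:

```python
def dempster_combine(m1, m2):
    """Dempster's rule for basic probability assignments over singleton
    emotion hypotheses (dict label -> mass); conflict is renormalized."""
    fused = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            if a == b:
                fused[a] = fused.get(a, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: masses cannot be combined")
    return {h: v / (1.0 - conflict) for h, v in fused.items()}

m_prosody = {'angry': 0.6, 'happy': 0.3, 'sad': 0.1}    # one feature class
m_spectral = {'angry': 0.5, 'happy': 0.2, 'sad': 0.3}   # another feature class
print(dempster_combine(m_prosody, m_spectral))          # decision-level fusion
```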

15.
Speech emotion recognition has emerged as a research topic in recent years. Feature extraction directly affects the final recognition performance, and dimensionality reduction can extract the feature parameters that best distinguish different emotions. This paper highlights the importance of feature parameters in speech emotion recognition, introduces the basic components of a speech emotion recognition system, surveys the state of research on feature parameters, describes the dimensionality-reduction methods commonly applied to emotion recognition, and analyzes and compares them. Possible development trends in speech emotion recognition are also discussed.

16.
Context-Independent Multilingual Emotion Recognition from Speech Signals   (cited 3 times: 0 self-citations, 3 by others)
This paper presents and discusses an analysis of multilingual emotion recognition from speech with database-specific emotional features. Recognition was performed on the English, Slovenian, Spanish, and French InterFace emotional speech databases. The InterFace databases included several neutral speaking styles and six emotions: disgust, surprise, joy, fear, anger and sadness. Speech features for emotion recognition were determined in two steps. In the first step, low-level features were defined, and in the second, high-level features were calculated from the low-level features. Low-level features are composed of pitch, the derivative of pitch, energy, the derivative of energy, and the duration of speech segments. High-level features are statistical representations of the low-level features. Database-specific emotional features were selected from the high-level features that contain the most information about emotions in speech. Speaker-dependent and monolingual emotion recognisers were defined, as well as multilingual recognisers. Emotion recognition was performed using artificial neural networks. The achieved recognition accuracy was highest for speaker-dependent emotion recognition, lower for monolingual emotion recognition and lowest for multilingual recognition. The database-specific emotional features are most convenient for use in multilingual emotion recognition. Among speaker-dependent, monolingual, and multilingual emotion recognition, the difference between emotion recognition with all high-level features and emotion recognition with database-specific emotional features is smallest for multilingual emotion recognition, at 3.84%.
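The two-step feature scheme can be sketched directly: high-level features are statistics of a low-level contour such as pitch or energy. The particular statistics below are illustrative, not the paper's exact set:

```python
import numpy as np

def high_level(contour):
    """High-level statistics of one low-level contour (e.g. pitch or
    energy per voiced frame) and of its frame-to-frame derivative."""
    c = np.asarray(contour, dtype=float)
    d = np.diff(c)
    stats = lambda v: [v.mean(), v.std(), v.min(), v.max(), np.median(v)]
    return np.array(stats(c) + stats(d))

# Concatenating the statistics of the pitch, energy and duration contours
# gives the utterance-level vector fed to the neural network.
```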

17.
In this work, source, system, and prosodic features of speech are explored for characterizing and classifying the underlying emotions. Different speech features contribute in different ways to express the emotions, due to their complementary nature. Linear prediction residual samples chosen around glottal closure regions, and glottal pulse parameters, are used to represent excitation source information. Linear prediction cepstral coefficients extracted through simple block processing and pitch synchronous analysis represent the vocal tract information. Global and local prosodic features extracted from gross statistics and temporal dynamics of the sequence of duration, pitch, and energy values represent the prosodic information. Emotion recognition models are developed using the above-mentioned features separately and in combination. The simulated Telugu emotion database (IITKGP-SESC) is used to evaluate the proposed features. The emotion recognition results obtained using IITKGP-SESC are compared with the results of the internationally known Berlin emotion speech database (Emo-DB). Autoassociative neural networks, Gaussian mixture models, and support vector machines are used to develop emotion recognition systems with source, system, and prosodic features, respectively. A weighted combination of evidence has been used while combining the performance of systems developed using different features. From the results, it is observed that each of the proposed speech features contributes toward emotion recognition. The combination of features improved the emotion recognition performance, indicating the complementary nature of the features.
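The weighted combination of evidence can be sketched at the score level as a convex combination of per-system class scores; the weights and scores below are made-up placeholders one would tune on held-out data:

```python
import numpy as np

def weighted_fusion(scores, weights):
    """Weighted combination of per-system class scores
    (rows: subsystems, columns: emotion classes)."""
    S = np.asarray(scores, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                        # normalize the evidence weights
    return w @ S                           # fused score per class

# source, system (vocal tract) and prosody subsystem scores, 5 emotions
scores = [[0.1, 0.5, 0.2, 0.1, 0.1],
          [0.2, 0.4, 0.2, 0.1, 0.1],
          [0.1, 0.3, 0.3, 0.2, 0.1]]
fused = weighted_fusion(scores, [0.3, 0.4, 0.3])
print(fused.argmax())                      # index of the predicted emotion
```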

18.
The accuracy of speech emotion recognition depends largely on how much the features differ across emotions. Starting from the time-frequency characteristics of speech and drawing on the human auditory selective-attention mechanism, a spectrogram-feature-based speech emotion recognition algorithm is proposed. The algorithm first simulates auditory selective attention, segmenting the emotional spectrogram in both the time and frequency domains to form a speech emotion saliency map. Based on the saliency map, Hu invariant moments, texture features, and selected spectrogram features are then used as the main features for emotion recognition. Finally, emotions are recognized with a support vector machine. Recognition experiments on a speech emotion database show that the proposed algorithm achieves a high recognition rate and good robustness, most notably for the practically important emotion of irritation. In addition, principal-vector analysis across the emotion features shows large differences among the selected features and good practicality.
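Hu invariant moments of a (saliency-masked) spectrogram patch are available directly in OpenCV; the auditory-attention saliency computation itself is not reproduced, and the random patch is a stand-in:

```python
import cv2
import numpy as np

spec = np.random.rand(64, 64).astype(np.float32)    # stand-in spectrogram patch
hu = cv2.HuMoments(cv2.moments(spec)).ravel()       # 7 invariant moments
# Log-scale to tame the dynamic range, as is common with Hu moments:
hu_log = -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)
```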

19.
In this paper we propose a feature normalization method for speaker-independent speech emotion recognition. The performance of a speech emotion classifier largely depends on the training data, and a large number of unknown speakers may pose a great challenge. To address this problem, first, we extract and analyse 481 basic acoustic features. Second, we use principal component analysis and linear discriminant analysis jointly to construct the speaker-sensitive feature space. Third, we classify the emotional utterances into pseudo-speaker groups in the speaker-sensitive feature space by using fuzzy k-means clustering. Finally, we normalize the original basic acoustic features of each utterance based on its group information. To verify our normalization algorithm, we adopt a Gaussian mixture model based classifier for the recognition test. The experimental results show that our normalization algorithm is effective on our locally collected database, as well as on the eNTERFACE'05 Audio-Visual Emotion Database. The emotional features achieved using our method are robust to speaker change, and an improved recognition rate is observed.
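A rough scikit-learn sketch of the normalization pipeline, substituting hard k-means for the paper's fuzzy k-means and using PCA followed by LDA on speaker labels to build the speaker-sensitive space:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.cluster import KMeans

def group_normalize(X, speaker_ids, n_groups=8):
    """Project features into a speaker-sensitive space, cluster
    utterances into pseudo-speaker groups, and z-normalize the
    original features within each group."""
    Z = PCA(n_components=0.95).fit_transform(X)          # keep 95% variance
    Z = LinearDiscriminantAnalysis().fit_transform(Z, speaker_ids)
    groups = KMeans(n_clusters=n_groups, n_init=10).fit_predict(Z)
    Xn = np.empty_like(X, dtype=float)
    for g in range(n_groups):
        sel = groups == g
        mu, sd = X[sel].mean(0), X[sel].std(0) + 1e-8
        Xn[sel] = (X[sel] - mu) / sd                     # per-group z-score
    return Xn
```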
