Similar Documents
Found 19 similar documents (search time: 265 ms)
1.
Speaker recognition algorithm based on HHT cepstral coefficients   Cited by: 1 (self-citations: 0, external citations: 1)
To address the problems that LPCC reflects only the static characteristics of speech and cannot highlight its low-frequency local features, a speaker recognition algorithm using HHT cepstral coefficients as features is proposed. The empirical mode decomposition (EMD) in HHT describes the low-frequency local features of speech more accurately, while the Hilbert transform captures the dynamic characteristics of speech, remedying the shortcomings of LPCC. The speech signal is decomposed by EMD into a series of intrinsic mode function (IMF) components; the Hilbert transform is applied to obtain the Hilbert marginal spectrum, the log power spectrum of the total marginal spectrum is computed, and a DCT yields 13-dimensional cepstral coefficients, which are fed into a Gaussian mixture model for speaker recognition. Simulation results show that, compared with LPCC, the recognition rate of the HHT-cepstral-coefficient algorithm improves by 12.59%, while feature extraction time increases by 19.27 s.
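The pipeline described above maps naturally onto a short script. Below is a minimal sketch, assuming the PyEMD package (pip install EMD-signal) for empirical mode decomposition; apart from the 13 cepstral coefficients taken from the abstract, all parameters (frame length, bin count, sampling rate) are illustrative.

```python
import numpy as np
from scipy.signal import hilbert
from scipy.fft import dct
from PyEMD import EMD

def hht_cepstral_coeffs(frame, fs, n_bins=256, n_ceps=13):
    imfs = EMD().emd(frame)                        # decompose into IMF components
    marginal = np.zeros(n_bins)
    for imf in imfs:
        analytic = hilbert(imf)                    # analytic signal of each IMF
        amp = np.abs(analytic)                     # instantaneous amplitude
        phase = np.unwrap(np.angle(analytic))
        inst_f = np.diff(phase) * fs / (2 * np.pi) # instantaneous frequency
        inst_f = np.clip(inst_f, 0, fs / 2)
        bins = (inst_f / (fs / 2) * (n_bins - 1)).astype(int)
        np.add.at(marginal, bins, amp[:-1])        # accumulate Hilbert marginal spectrum
    log_power = np.log(marginal ** 2 + 1e-12)      # log power of total marginal spectrum
    return dct(log_power, type=2, norm='ortho')[:n_ceps]

fs = 8000
frame = np.random.randn(400)                       # stand-in for a windowed speech frame
print(hht_cepstral_coeffs(frame, fs))              # 13-dimensional HHT cepstral vector
```

The per-frame vectors would then be pooled per utterance and scored against speaker GMMs.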

2.
Linear prediction analysis in connected-word speech recognition   Cited by: 1 (self-citations: 0, external citations: 1)
Feature extraction is critical to the performance of a speech recognition system, and linear prediction analysis is currently the most widely used feature extraction method. Since conventional linear prediction coefficients no longer meet the feature extraction requirements of connected-word and continuous speech recognition systems, three major LPC-derived parameter sets are investigated: linear prediction reflection coefficients, line spectrum pair (LSP) coefficients, and linear prediction cepstral coefficients (LPCC), together with their application in a connected-word speech recognition system, verified by computer simulation. The simulation results show that, with the same speech corpus and signal-to-noise ratio, LPCC achieves the highest recognition rate, demonstrating that LPCC outperforms LSP and reflection coefficients in capturing both semantic and speaker information.
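For context on the comparison above, the following sketch shows how LPCC is derived from LPC with the standard recursion c_n = a_n + sum_{k=1}^{n-1} (k/n) c_k a_{n-k}; the analysis orders are illustrative, not from the paper.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc(frame, order):
    """LPC by the autocorrelation method: solve the Toeplitz system R a = r."""
    r = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    return solve_toeplitz(r[:order], r[1:order + 1])   # coefficients a_1..a_p

def lpcc(a, n_ceps):
    """Standard LPC-to-cepstrum recursion."""
    p = len(a)
    c = np.zeros(n_ceps)
    for n in range(1, n_ceps + 1):
        acc = a[n - 1] if n <= p else 0.0
        for k in range(max(1, n - p), n):
            acc += (k / n) * c[k - 1] * a[n - k - 1]
        c[n - 1] = acc
    return c

frame = np.hamming(240) * np.random.randn(240)         # stand-in for a windowed frame
print(lpcc(lpc(frame, order=12), n_ceps=16))
```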

3.
徐金甫, 韦岗. 《计算机工程》, 2000, 26(5): 58-59, 89.
A noise-robust speech feature is proposed. First, the difference sequence of the one-sided autocorrelation of the speech signal is computed; the linear prediction coefficients of this difference sequence are then calculated, from which the cepstral coefficients are derived. Experiments show that, compared with conventional LPCC and with the LPCC of the one-sided autocorrelation sequence itself, using the LPCC of the differenced one-sided autocorrelation sequence as the feature vector improves the recognition rate of a speech recognition system on noisy speech.
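A compact sketch of this feature follows, under the reading that LPC cepstra are computed from the differenced one-sided autocorrelation sequence rather than from the waveform itself; the orders are illustrative.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpcc_of(seq, order, n_ceps):
    """LPC cepstra of an arbitrary input sequence (autocorrelation method)."""
    r = np.correlate(seq, seq, mode='full')[len(seq) - 1:]
    a = solve_toeplitz(r[:order], r[1:order + 1])
    c = np.zeros(n_ceps)
    for n in range(1, n_ceps + 1):
        acc = a[n - 1] if n <= order else 0.0
        for k in range(max(1, n - order), n):
            acc += (k / n) * c[k - 1] * a[n - k - 1]
        c[n - 1] = acc
    return c

def robust_feature(frame, order=12, n_ceps=16):
    r1 = np.correlate(frame, frame, mode='full')[len(frame) - 1:]  # one-sided autocorrelation
    d = np.diff(r1)                       # its difference sequence
    return lpcc_of(d, order, n_ceps)      # LPCC of the differenced sequence
```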

4.
Speech signals produced under different emotions are markedly non-stationary; conventional MFCC reflects only the static features of the signal, whereas empirical mode decomposition (EMD) can finely characterize its non-stationary behavior. To extract non-stationary features of emotional speech, the emotional speech signal is decomposed by EMD into a series of intrinsic mode function components; these are passed through Mel filters, the log energies are taken, and a discrete cosine transform yields an improved MFCC used as the new feature for emotion recognition. A support vector machine is used to recognize four speech emotions: happiness, anger, boredom and fear. Simulation results show the improved MFCC reaches a recognition rate of 77.17%, with a gain of up to 3.26% under different signal-to-noise ratios.
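A minimal sketch of this improved MFCC follows, assuming PyEMD and librosa are available. The abstract leaves open exactly how the IMF components are combined; summing the per-IMF power spectra before Mel filtering is one plausible reading, and all parameters are illustrative.

```python
import numpy as np
from scipy.fft import dct
from PyEMD import EMD
import librosa

def emd_mfcc(frame, sr, n_fft=512, n_mels=26, n_ceps=13):
    imfs = EMD().emd(frame)                            # IMF components of the frame
    power = np.zeros(n_fft // 2 + 1)
    for imf in imfs:
        power += np.abs(np.fft.rfft(imf, n=n_fft)) ** 2   # per-IMF power spectrum
    mel_fb = librosa.filters.mel(sr=sr, n_fft=n_fft, n_mels=n_mels)
    log_mel = np.log(mel_fb @ power + 1e-12)           # log Mel filterbank energies
    return dct(log_mel, type=2, norm='ortho')[:n_ceps] # improved MFCC vector
```

Frame-level vectors are then pooled per utterance and classified with an SVM.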

5.
To address the sharp degradation of speech recognition performance in reverberant environments, a method is proposed that recognizes reverberant speech with a GMM preceded by complex-cepstrum peak filtering. A Gaussian mixture model is trained on the MFCC features of clean speech; before recognizing reverberant speech, a complex-cepstrum peak filter is applied to reduce reverberation-induced distortion and thereby raise the recognition rate. Experiments verify that the method avoids the difficulty of accurately estimating the room impulse response under real conditions, reduces computational complexity, and improves the system recognition rate in reverberant environments by at least 4%.
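The abstract does not give the peak filter's exact form. The sketch below computes the textbook complex cepstrum and suppresses large nonzero-quefrency peaks, which is only a plausible stand-in for the paper's filter; the threshold is purely illustrative.

```python
import numpy as np

def complex_cepstrum(x, n_fft=1024):
    """Textbook complex cepstrum: IFFT of log magnitude plus unwrapped phase."""
    X = np.fft.fft(x, n=n_fft)
    log_X = np.log(np.abs(X) + 1e-12) + 1j * np.unwrap(np.angle(X))
    return np.real(np.fft.ifft(log_X))

def peak_filter(ceps, thresh=0.5):
    """Suppress strong peaks at nonzero quefrency, where echo energy tends to
    concentrate; thresh is an assumed, illustrative value."""
    out = ceps.copy()
    peaks = np.abs(out) > thresh
    peaks[0] = False                       # leave the quefrency-0 term untouched
    out[peaks] = 0.0
    return out

def restore(ceps, n):
    """Map the filtered cepstrum back to a time-domain frame."""
    return np.real(np.fft.ifft(np.exp(np.fft.fft(ceps))))[:n]
```

MFCCs of the restored frames are then scored against the clean-trained GMM.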

6.
To counter the wide spread of vulgar audio and video content on the Internet, a vulgar-speech recognition method based on shifted delta cepstral (SDC) features is proposed. The input speech signal is divided into frames and SDC features are extracted; a Gaussian mixture model performs coarse classification, and frames coarsely labeled as vulgar are then confirmed by a support vector machine classifier. Experimental results show the method achieves a high correct-recognition rate and a low false-recognition rate, and can be used to filter vulgar speech and video content on the Internet.
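SDC stacks k shifted delta vectors per frame: SDC(t) = [c(t+iP+d) - c(t+iP-d)] for i = 0..k-1. A minimal sketch, with the common N-d-P-k = N-1-3-7 configuration as an assumed default:

```python
import numpy as np

def sdc(ceps, d=1, P=3, k=7):
    """Shifted delta cepstra: stack k delta vectors spaced P frames apart."""
    T, D = ceps.shape
    pad = np.pad(ceps, ((d, d + (k - 1) * P), (0, 0)), mode='edge')
    blocks = []
    for i in range(k):
        # c(t + iP + d) - c(t + iP - d), vectorized over all frames t
        delta = pad[2 * d + i * P: 2 * d + i * P + T] - pad[i * P: i * P + T]
        blocks.append(delta)
    return np.hstack(blocks)               # (T, k*D) SDC feature matrix
```

The GMM coarse classifier and SVM confirmation stage can then be trained on these vectors, e.g. with sklearn's GaussianMixture and SVC.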

7.
周萍, 唐李珍. 《计算机工程》, 2011, 37(2): 169-171.
For speaker recognition systems with short training utterances, a recognition algorithm based on decision-level fusion is proposed. During recognition, the speech signal is processed by empirical mode decomposition; feature sequences are extracted from the resulting intrinsic mode function components and matched separately, and a decision-level fusion algorithm combines these matching results with the result of conventional stand-alone recognition to produce the final output. Decomposing the signal allows the test utterance to be recognized repeatedly, while decision-level fusion refines the outcome, so the recognition rate is preserved when training utterances are short. Experimental results show the algorithm outperforms conventional methods in short-training-utterance recognition systems.

8.
Speaker recognition systems now exceed 90% accuracy under ideal conditions, but accuracy drops rapidly in real communication environments. This paper studies robust speaker recognition under channel mismatch. A GMM-based speaker recognition system is first built; then, based on measurements and analysis of real communication channels, two improvements are proposed. First, a generic channel model is built from measured data, and clean speech filtered through this model is used as training speech for the speaker models. Second, by comparing the measured channel, an ideal low-pass channel and the characteristics of Mel-frequency cepstral coefficients (MFCC), the first and second feature dimensions are judiciously discarded. Experimental results show that, after processing, the recognition rate in communication environments rises by about 20%, an improvement of 9%-12% over the conventional cepstral mean subtraction (CMS) method.
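For illustration, a sketch of the feature-side compensation: dropping the channel-sensitive first two cepstral dimensions, shown here together with the CMS baseline the abstract compares against. Treating the two steps as composable is an assumption of this sketch.

```python
import numpy as np

def channel_compensate(mfcc):
    """mfcc: (T, D) per-utterance feature matrix."""
    trimmed = mfcc[:, 2:]                    # discard the 1st and 2nd dimensions
    return trimmed - trimmed.mean(axis=0)    # cepstral mean subtraction (CMS)
```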

9.
Feature extraction is the key stage of an emotional speech recognition system and determines overall recognition performance. Conventional feature extraction assumes the speech signal is linear and short-time stationary and is not adaptive. Features are therefore extracted nonlinearly via ensemble empirical mode decomposition (EEMD). The emotional speech signal is decomposed by EEMD into a set of intrinsic mode functions (IMF); the correlation-coefficient method screens out the set of effective components, from which the IMF energy features (IMFE) are computed. Using the Berlin (German) emotional speech database as the data source, the IMFE features, prosodic features, Mel-frequency cepstral coefficients and their fusion are each fed into a support vector machine, and the recognition results of the different features are compared to validate the IMFE features. Experimental results show that the fusion of IMFE with the acoustic features reaches an average recognition rate of 91.67% and effectively distinguishes different emotional states.
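A minimal sketch of the IMFE extraction, assuming PyEMD; the correlation threshold is illustrative, and in practice the number of retained IMFs must be fixed (e.g. by truncation or padding) so that feature vectors have equal length.

```python
import numpy as np
from PyEMD import EEMD

def imfe_features(signal, corr_thresh=0.3):
    imfs = EEMD().eemd(signal)                    # ensemble EMD -> IMF set
    keep = [imf for imf in imfs
            if abs(np.corrcoef(imf, signal)[0, 1]) > corr_thresh]  # screening
    energies = [np.sum(imf ** 2) for imf in keep]
    return np.log(np.asarray(energies) + 1e-12)   # IMF energy (IMFE) vector
```

The resulting vectors, alone or fused with prosodic/MFCC features, go to an SVM classifier.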

10.
To extract feature information of side-channel signals from strong background noise, a signal feature extraction method combining empirical mode decomposition (EMD) with the singular value difference spectrum is proposed. The raw side-channel signal is first decomposed by EMD, and the correlation coefficient between each intrinsic mode function (IMF) and the original signal is computed to find the most similar component. Singular value decomposition is then applied to that component to obtain the corresponding singular value difference spectrum; finally, the signal is reconstructed and denoised according to the difference spectrum, and the component's feature information is extracted. Experimental results show the method is effective for side-channel signal feature extraction and successfully improves the signal-to-noise ratio and the attack success rate.
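A sketch of this pipeline: Hankel embedding, SVD, rank selection at the largest drop in the singular value difference spectrum, and anti-diagonal averaging back to a signal. The embedding dimension is an assumed, conventional choice.

```python
import numpy as np
from PyEMD import EMD

def svd_diff_denoise(x):
    imfs = EMD().emd(x)
    corr = [abs(np.corrcoef(imf, x)[0, 1]) for imf in imfs]
    comp = imfs[int(np.argmax(corr))]       # most signal-like IMF component
    L = len(comp) // 2                      # Hankel embedding dimension (assumed)
    H = np.lib.stride_tricks.sliding_window_view(comp, L)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    r = int(np.argmax(-np.diff(s))) + 1     # largest drop in the difference spectrum
    Hr = (U[:, :r] * s[:r]) @ Vt[:r]        # rank-r reconstruction
    out = np.zeros(len(comp))
    cnt = np.zeros(len(comp))
    for i in range(Hr.shape[0]):            # average anti-diagonals back to 1-D
        out[i:i + L] += Hr[i]
        cnt[i:i + L] += 1
    return out / cnt
```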

11.
Recently, several algorithms have been proposed to enhance noisy speech by estimating a binary mask that can be used to select those time-frequency regions of a noisy speech signal that contain more speech energy than noise energy. This binary mask encodes the uncertainty associated with enhanced speech in the linear spectral domain. The use of the cepstral transformation smears the information from the noise dominant time-frequency regions across all the cepstral features. We propose a supervised approach using regression trees to learn the nonlinear transformation of the uncertainty from the linear spectral domain to the cepstral domain. This uncertainty is used by a decoder that exploits the variance associated with the enhanced cepstral features to improve robust speech recognition. Systematic evaluations on a subset of the Aurora4 task using the estimated uncertainty show substantial improvement over the baseline performance across various noise conditions.
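For orientation, the binary mask this work builds on keeps time-frequency cells whose local SNR exceeds a criterion. The sketch below shows only that starting point, with the local-SNR formulation as an assumption; the paper's regression-tree uncertainty mapping is not reproduced here.

```python
import numpy as np

def binary_mask(speech_stft, noise_stft, lc_db=0.0):
    """1 where a time-frequency cell is speech-dominant, 0 where noise-dominant."""
    snr = 10 * np.log10((np.abs(speech_stft) ** 2) /
                        (np.abs(noise_stft) ** 2 + 1e-12) + 1e-12)
    return (snr > lc_db).astype(float)
```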

12.
To improve speaker recognition rates in noise, and given that the cepstral dimensions differ in discriminative power, the individual dimensions of the GMM (Gaussian mixture model) are weighted directly during recognition, yielding a directly cepstrum-weighted GMM; a new way of measuring the discriminative power of each feature dimension under noise is also studied. Combining this method with MMSE (minimum mean square error) enhancement, experiments on white noise and subway noise yield the optimal weighting window functions for the baseline system and the MMSE-enhanced system under different noise conditions. The results show that direct cepstral weighting of the GMM significantly improves recognition accuracy.
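One way to realize direct per-dimension weighting is inside the Gaussian exponent of a diagonal-covariance GMM, as sketched below. Treating this as a scoring heuristic (without re-deriving the density normalization) is an assumption of the sketch.

```python
import numpy as np

def weighted_gmm_loglik(X, means, variances, priors, dim_w):
    """X: (T, D) frames; means/variances: (M, D); priors: (M,); dim_w: (D,)."""
    T, M = X.shape[0], len(priors)
    log_p = np.empty((T, M))
    for m in range(M):
        diff2 = (X - means[m]) ** 2 / variances[m]      # per-dimension terms
        expo = -0.5 * np.sum(dim_w * diff2, axis=1)     # weight each dimension
        log_det = 0.5 * np.sum(np.log(2 * np.pi * variances[m]))
        log_p[:, m] = np.log(priors[m]) - log_det + expo
    mx = log_p.max(axis=1, keepdims=True)               # log-sum-exp over mixtures
    return np.sum(mx.squeeze(1) + np.log(np.exp(log_p - mx).sum(axis=1)))
```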

13.
To address the drop in recognition rate of Mel-frequency cepstral coefficient (MFCC) features in noisy environments, an improved feature extraction method based on cochlear filter cepstral coefficients (CFCC) is proposed. CFCC features with auditory characteristics are extracted; an improved linear discriminant analysis (LDA) algorithm then applies a linear transformation to obtain more discriminative features and the diagonalized covariance matrices required by hidden Markov models (HMM); finally, mean and variance normalization yields the final features. Experimental results show the method effectively improves the recognition rate and robustness of speech recognition systems in noisy environments.

14.
Speech endpoint detection in strong noise based on adaptive cepstral distance   Cited by: 4 (self-citations: 0, external citations: 4)
赵新燕, 王炼红, 彭林哲. 《计算机科学》, 2015, 42(9): 83-85, 117.
Under noise interference, the detection accuracy of conventional speech endpoint detection methods drops markedly. To distinguish speech from non-speech effectively in strong background noise, the cepstral-distance endpoint detection method is studied, and an adaptive cepstral-distance endpoint detection method for strong noise is proposed. The method introduces a cepstral distance multiplier and a threshold increment coefficient, applies different multipliers for different signal-to-noise ratios, and performs detection with an adaptive decision threshold. MATLAB simulation results show that, under different background noises and SNRs, the method achieves a high correct detection rate, clearly outperforms conventional endpoint detection methods, and is suitable for endpoint detection in strong background noise.
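A sketch of a cepstral-distance detector of this kind follows, using the standard truncated cepstral distance in dB. How the multiplier and increment coefficient enter the threshold is not specified in the abstract, so their roles below are plausible interpretations, not the paper's exact rule.

```python
import numpy as np

def cepstral_distance(c1, c2):
    """Truncated cepstral distance in dB: 4.34 * sqrt(dc0^2 + 2*sum(dck^2))."""
    d2 = (c1[0] - c2[0]) ** 2 + 2 * np.sum((c1[1:] - c2[1:]) ** 2)
    return (10.0 / np.log(10)) * np.sqrt(d2)

def adaptive_vad(ceps_frames, n_noise=10, multiplier=1.5, increment=0.1):
    bg = ceps_frames[:n_noise].mean(axis=0)  # background cepstrum, leading frames
    dists = np.array([cepstral_distance(c, bg) for c in ceps_frames])
    thresh = multiplier * dists[:n_noise].mean()
    flags = []
    for d in dists:
        flags.append(d > thresh)
        if not flags[-1]:                    # track slow noise drift in silence
            thresh = (1 - increment) * thresh + increment * multiplier * d
    return np.array(flags)                   # True = speech frame
```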

15.
In this paper we introduce a robust feature extractor, dubbed robust compressive gammachirp filterbank cepstral coefficients (RCGCC), based on an asymmetric and level-dependent compressive gammachirp filterbank and a sigmoid shape weighting rule for the enhancement of speech spectra in the auditory domain. The goal of this work is to improve the robustness of speech recognition systems in additive noise and real-time reverberant environments. As a post-processing scheme we employ a short-time feature normalization technique called short-time cepstral mean and scale normalization (STCMSN), which, by adjusting the scale and mean of cepstral features, reduces the difference of cepstra between the training and test environments. For performance evaluation, in the context of speech recognition, of the proposed feature extractor we use the standard noisy AURORA-2 connected digit corpus, the meeting recorder digits (MRDs) subset of the AURORA-5 corpus, and the AURORA-4 LVCSR corpus, which represent additive noise, reverberant acoustic conditions and additive noise as well as different microphone channel conditions, respectively. The ETSI advanced front-end (ETSI-AFE), the recently proposed power normalized cepstral coefficients (PNCC), conventional MFCC and PLP features are used for comparison purposes. Experimental speech recognition results demonstrate that the proposed method is robust against both additive and reverberant environments. The proposed method provides comparable results to those of the ETSI-AFE and PNCC on the AURORA-2 as well as AURORA-4 corpora and provides considerable improvements with respect to the other feature extractors on the AURORA-5 corpus.
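A minimal sketch of short-time mean and scale normalization of cepstral features follows. The exact scale statistic used by STCMSN is not given in the abstract; the local dynamic range used here is one plausible choice, and the window length is illustrative.

```python
import numpy as np

def stcmsn(feats, win=150):
    """Per-dimension sliding mean subtraction and scale division, feats: (T, D)."""
    T, _ = feats.shape
    out = np.empty_like(feats, dtype=float)
    for t in range(T):
        lo, hi = max(0, t - win // 2), min(T, t + win // 2 + 1)
        seg = feats[lo:hi]
        mean = seg.mean(axis=0)
        scale = seg.max(axis=0) - seg.min(axis=0) + 1e-12  # local range as scale
        out[t] = (feats[t] - mean) / scale
    return out
```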

16.
In this paper, we propose a novel front-end speech parameterization technique for automatic speech recognition (ASR) that is less sensitive towards ambient noise and pitch variations. First, using variational mode decomposition (VMD), we break up the short-time magnitude spectrum obtained by discrete Fourier transform into several components. In order to suppress the ill-effects of noise and pitch variations, the spectrum is then sufficiently smoothed. The desired spectral smoothing is achieved by discarding the higher-order variational mode functions and reconstructing the spectrum using the first two modes only. As a result, the smoothed spectrum closely resembles the spectral envelope. Next, the Mel-frequency cepstral coefficients (MFCC) are extracted using the VMD-based smoothed spectra. The proposed front-end acoustic features are observed to be more robust towards ambient noise and pitch variations than the conventional MFCC features as demonstrated by the experimental evaluations presented in this study. For this purpose, we developed an ASR system using speech data from adult speakers collected under relatively clean recording conditions. State-of-the-art acoustic modeling techniques based on deep neural networks (DNN) and long short-term memory recurrent neural networks (LSTM-RNN) were employed. The ASR systems were then evaluated under noisy test conditions for assessing the noise robustness of the proposed features. To assess robustness towards pitch variations, experimental evaluations were performed on another test set consisting of speech data from child speakers. Transcribing children's speech helps in simulating an ASR task where pitch differences between training and test data are significantly large. The signal domain analyses as well as the experimental evaluations presented in this paper support our claims.
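A sketch of the spectral smoothing step follows, assuming the vmdpy package for VMD. Retaining the two lowest-frequency modes follows the abstract; the VMD hyperparameters and FFT size are illustrative, and conventional MFCC extraction then proceeds on the smoothed spectrum.

```python
import numpy as np
from vmdpy import VMD

def vmd_smoothed_spectrum(frame, n_fft=512, K=5):
    mag = np.abs(np.fft.rfft(frame, n=n_fft))[:-1]   # even-length magnitude spectrum
    # alpha: bandwidth penalty; tau=0: no exact-reconstruction constraint.
    u, _, omega = VMD(mag, alpha=2000, tau=0.0, K=K, DC=0, init=1, tol=1e-7)
    order = np.argsort(omega[-1])                    # sort modes by centre frequency
    smooth = u[order[0]] + u[order[1]]               # keep the first two modes only
    return np.maximum(smooth, 1e-12)                 # approximates the spectral envelope
```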

17.
In this work, we have developed a speech mode classification model for improving the performance of a phone recognition system (PRS). We explore vocal tract system, excitation source and prosodic features for the development of a speech mode classification (SMC) model. These features are extracted from voiced regions of a speech signal. In this study, conversation, extempore, and read speech are considered as three different modes of speech. The vocal tract component of speech is extracted using Mel-frequency cepstral coefficients (MFCCs). The excitation source features are captured through Mel power differences of spectrum in sub-bands (MPDSS) and residual Mel-frequency cepstral coefficients (RMFCCs) of the speech signal. The prosody information is extracted from pitch and intensity. Speech mode classification models are developed using the above features independently and in fusion. Experiments were carried out on a Bengali speech corpus to analyze the accuracy of the speech mode classification model using artificial neural networks (ANN), naive Bayes, support vector machines (SVMs) and k-nearest neighbor (KNN) classifiers. We propose four classification models, which are combined using a maximum voting approach for optimal performance. From the results, it is observed that the speech mode classification model developed using the fusion of vocal tract system, excitation source and prosodic features yields the best performance of 98%. Finally, the proposed speech mode classifier is integrated into the PRS, and the accuracy of the phone recognition system improves by 11.08%.

18.
The most widely used speech representation is based on the mel-frequency cepstral coefficients, which incorporates biologically inspired characteristics into artificial recognizers. However, the recognition performance with these features can still be enhanced, especially in adverse conditions. Recent advances have been made with the introduction of wavelet based representations for different kinds of signals, which have been shown to improve classification performance. However, the problem of finding an adequate wavelet based representation for a particular problem is still an important challenge. In this work we propose a genetic algorithm to evolve a speech representation, based on a non-orthogonal wavelet decomposition, for phoneme classification. The results, obtained for a set of Spanish phonemes, show that the proposed genetic algorithm is able to find a representation that improves speech recognition results. Moreover, the optimized representation was evaluated in noise conditions.

19.
The aim of this investigation is to determine to what extent automatic speech recognition may be enhanced if, in addition to the linear compensation accomplished by mean and variance normalisation, a non-linear mismatch reduction technique is applied to the cepstral and energy features, respectively. An additional goal is to determine whether the degree of mismatch between the feature distributions of the training and test data that is associated with acoustic mismatch, differs for the cepstral and energy features. Towards these aims, two non-linear mismatch reduction techniques – time domain noise reduction and histogram normalisation – were evaluated on the Aurora2 digit recognition task as well as on a continuous speech recognition task with noisy test conditions similar to those in the Aurora2 experiments. The experimental results show that recognition performance is enhanced by the application of both non-linear mismatch reduction techniques. The best results are obtained when the two techniques are applied simultaneously. The results also reveal that the mismatch in the energy features is quantitatively and qualitatively much larger than the corresponding mismatch associated with the cepstral coefficients. The most substantial gains in average recognition rate are therefore accomplished by reducing training-test mismatch for the energy features.
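Histogram normalisation of a feature dimension is typically realised as a quantile mapping onto a reference distribution. A minimal sketch, assuming a standard Gaussian reference, which is a common choice rather than necessarily the paper's:

```python
import numpy as np
from scipy.stats import norm

def histogram_normalise(feat_col, ref_mean=0.0, ref_std=1.0):
    """Rank each value, convert ranks to probabilities, invert the reference CDF."""
    n = len(feat_col)
    ranks = np.argsort(np.argsort(feat_col))   # 0..n-1 rank of each frame
    probs = (ranks + 0.5) / n                  # avoid probabilities of exactly 0 or 1
    return ref_mean + ref_std * norm.ppf(probs)

# Applied independently to each cepstral / energy dimension F[:, j] of an utterance:
# normalised = np.column_stack([histogram_normalise(F[:, j]) for j in range(F.shape[1])])
```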
