Similar Documents
20 similar documents found.
1.
The authors deal with the problem of automatic speech recognition in the presence of additive white noise. The effect of noise is modelled as an additive term to the power spectrum of the original clean speech. The cepstral coefficients of the noisy speech are then derived from this model. The reference cepstral vectors trained from clean speech are adapted to their appropriate noisy version to best fit the testing speech cepstral vector. The LPC coefficients, LPC-derived cepstral coefficients, and the distance between test and reference are all regarded as functions of the noise ratio (the spectral power ratio of noise to noisy speech). A gradient-based algorithm is proposed to find the optimal noise ratio as well as the minimum distance between the test cepstral vector and the noise-adapted reference. A recursive algorithm based on the Levinson-Durbin recursion is proposed to simultaneously calculate the LPC coefficients and their derivatives with respect to the noise ratio. The stability of the proposed adaptation algorithm is also addressed. Experiments on multispeaker (50 male and 50 female) isolated Mandarin digit recognition demonstrate remarkable performance improvements over the non-compensated method in noisy environments. The results are also compared with the projection-based approach, and experiments show that the proposed method is superior to it in severely noisy environments.

2.
We propose a novel feature processing technique which can provide a cepstral liftering effect in the log-spectral domain. Cepstral liftering aims at equalizing the variance of cepstral coefficients for distance-based speech recognizers and, as a result, provides robustness against additive noise and speaker variability. However, in the popular hidden Markov model based framework, cepstral liftering has no effect on recognition performance. We derive a filtering method in the log-spectral domain corresponding to cepstral liftering. The proposed method performs high-pass filtering based on the decorrelation of filter-bank energies. We show that in noisy speech recognition, the proposed method reduces the error rate by 52.7% relative to conventional features.
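The liftering idea this paper starts from can be sketched as follows; this is the standard sinusoidal lifter, shown only to illustrate how liftering rescales coefficient variances, not the authors' log-spectral filter:

```python
import numpy as np

def lifter(cepstra, L=22):
    """Apply the standard sinusoidal cepstral lifter.

    Coefficient n is scaled by 1 + (L/2)*sin(pi*n/L), boosting higher-order
    cepstra (which have small variance) so that coefficient variances become
    more nearly equal for distance-based matching.
    """
    cepstra = np.asarray(cepstra, dtype=float)
    n = np.arange(cepstra.shape[-1])
    weights = 1.0 + (L / 2.0) * np.sin(np.pi * n / L)
    return cepstra * weights
```

Coefficient 0 is left unchanged (its weight is exactly 1), while mid-order coefficients are amplified the most.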

3.
Building on missing-data and acoustic back-off techniques, this paper proposes a fuzzy-rule-based robust speech recognition method. Fuzzy rules relating the reliability of each feature component to its probability distribution are first established from prior knowledge or assumptions; during recognition, the output probability of an observation vector is obtained from a rule-based fuzzy logic system. A concrete implementation for cepstrum-based recognition systems is given. Experimental results show that the proposed method performs significantly better than both the missing-data and acoustic back-off techniques.

4.
We propose a new bandpass filter (BPF)‐based online channel normalization method to dynamically suppress channel distortion when the speech and channel noise components are unknown. In this method, an adaptive modulation frequency filter is used to perform channel normalization, whereas conventional modulation filtering methods apply the same filter form to each utterance. In this paper, we only normalize the two mel frequency cepstral coefficients (C0 and C1) with large dynamic ranges; the computational complexity is thus decreased, and channel normalization accuracy is improved. Additionally, to update the filter weights dynamically, we normalize the learning rates using the dimensional power of each frame. Our speech recognition experiments using the proposed BPF‐based blind channel normalization method show that this approach effectively removes channel distortion and results in only a minor decline in accuracy when online channel normalization processing is used instead of batch processing.

5.
A new class‐based histogram equalization method is proposed for robust speech recognition. The proposed method aims at not only compensating the acoustic mismatch between training and test environments, but also at reducing the discrepancy between the phonetic distributions of training and test speech data. The algorithm utilizes multiple class‐specific reference and test cumulative distribution functions, classifies the noisy test features into their corresponding classes, and equalizes the features by using their corresponding class‐specific reference and test distributions. Experiments on the Aurora 2 database proved the effectiveness of the proposed method by reducing relative errors by 18.74%, 17.52%, and 23.45% over the conventional histogram equalization method and by 59.43%, 66.00%, and 50.50% over mel‐cepstral‐based features for test sets A, B, and C, respectively.
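Plain (single-class) histogram equalization, the baseline this method extends, maps each test feature value through its empirical CDF onto the reference distribution's quantiles. A minimal sketch, assuming 1-D feature sequences (the class-based variant would first classify each frame and then apply this per class):

```python
import numpy as np

def histogram_equalize(test_feat, ref_feat):
    """Map test values onto the reference distribution via rank matching.

    Each test value is replaced by the reference quantile at its own
    empirical CDF position, so the equalized features follow the
    reference distribution while preserving rank order.
    """
    test_feat = np.asarray(test_feat, dtype=float)
    ref_feat = np.asarray(ref_feat, dtype=float)
    # Empirical CDF value of each test sample, in (0, 1).
    ranks = (np.argsort(np.argsort(test_feat)) + 0.5) / len(test_feat)
    return np.quantile(np.sort(ref_feat), ranks)
```

Because the mapping is monotone, the ordering of the test features is unchanged; only their marginal distribution is reshaped.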

6.
Despite these successes, there are still significant limitations to speech recognition performance, particularly for conversational speech and/or for speech with significant acoustic degradation from noise or reverberation. For this reason, the authors have proposed methods that incorporate different (and larger) analysis windows, which are described in this article. Note in passing that we and many others have already taken advantage of processing techniques that incorporate information over long time ranges, for instance for normalization (by cepstral mean subtraction, as in B. Atal (1974), or relative spectral analysis (RASTA), as in H. Hermansky and N. Morgan (1994)). They have also proposed features based on speech sound class posterior probabilities, which have good properties for both classification and stream combination.
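Cepstral mean subtraction, mentioned above as a long-time-range normalization, is simple to state in code; this is a generic sketch, not tied to any particular system:

```python
import numpy as np

def cms(cepstra):
    """Cepstral mean subtraction over an utterance.

    Subtracts the per-dimension mean of the (frames x coefficients)
    cepstral matrix, cancelling stationary convolutional (channel)
    effects, which appear as an additive constant in the cepstral domain.
    """
    cepstra = np.asarray(cepstra, dtype=float)
    return cepstra - cepstra.mean(axis=0, keepdims=True)
```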

7.
A high-performance Mandarin connected-digit speech recognition algorithm incorporating prosodic information is proposed. The algorithm is based on CHMMs (continuous hidden Markov models) with MFCCs (mel-frequency cepstral coefficients) as the main acoustic features, and uses prosodic information to segment connected digits precisely and to distinguish easily confused digits effectively. A two-stage recognition framework raises the recognition rate: the first stage segments the connected digits, recognizes the digit speech on that basis, and outputs candidate results; the second stage identifies confusable digit pairs among the candidates and uses prosodic information to select the correct result. Experiments show a large improvement in the final Mandarin connected-digit recognition rate.

8.
Signal Processing, 1986, 10(3): 279–290
As a step towards phoneme identification, a method of clustering speech spectra and spectral changes is discussed. In this technique, two kinds of acoustic features are defined in each frame of analysis. The first feature, called the Level 1 feature, describes the spectral contour of a frame, represented by LPC cepstral coefficients. The second feature, called the Level 2 feature, describes the spectral change within a frame, defined by the difference between the LPC cepstral coefficients derived from the first half and the second half of the frame. A phonemic feature of each frame is defined as a triplet of phonemic names. The Level 1 and Level 2 acoustic features are calculated from 800 V, VV, CV, VCV (vowel, vowel-vowel, consonant-vowel, vowel-consonant-vowel) syllables uttered by one male speaker and clustered with a vector quantizer design algorithm. This VQ design method is based on the one by Linde, Buzo and Gray (1980), slightly modified to consider the frame labels belonging to each cluster. As a result, each frame is characterized by the cluster numbers, or centroid numbers, of Level 1 and Level 2. The relation between the cluster numbers and the phonemic feature was investigated. It was found that the number of different phonemic labels corresponding to each cluster was less than five. Of the resulting 5503 clusters (the existing combinations of Level 1 and Level 2 codes, i.e., centroid numbers), 4428 had only one kind of label.

9.
To improve the accuracy and robustness of marine-mammal sound recognition, a method is proposed that fuses mel-frequency cepstral coefficients (MFCC), linear-frequency cepstral coefficients (LFCC), and time-domain features as the feature set. Fusing cepstral coefficients on different frequency scales strengthens the representation of different frequency bands, while the fused time-domain features describe the sounds more completely. After preprocessing tailored to the ocean environment, feature extraction, and fusion, the sound samples are classified with a support vector machine. Whereas traditional algorithms target only one or a few mammal species, this method was tested on a corpus containing sounds of 61 marine-mammal species. The results show that the algorithm improves the recognition rate by 5.5% over conventional MFCC features and performs better in low-SNR ocean environments.
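The fusion step amounts to concatenating statistics of the different feature streams into a single vector before SVM classification. A minimal sketch with hypothetical shapes (the actual ocean-noise preprocessing and the SVM stage are omitted):

```python
import numpy as np

def fuse_features(mfcc, lfcc, time_feats):
    """Fuse MFCC, LFCC and time-domain features into one vector.

    mfcc and lfcc are assumed to be (frames x coefficients) matrices;
    they are summarized by their per-coefficient means and concatenated
    with a 1-D vector of time-domain statistics (e.g. zero-crossing
    rate, short-time energy).
    """
    return np.concatenate([
        np.mean(mfcc, axis=0),
        np.mean(lfcc, axis=0),
        np.asarray(time_feats, dtype=float),
    ])
```

The fused vector would then be fed to an SVM (e.g. an RBF-kernel classifier) trained on labelled species.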

10.
This communication presents a new method for automatic speech recognition in reverberant environments. Our approach consists in the selection of the best acoustic model out of a library of models trained on artificially reverberated speech databases corresponding to various reverberant conditions. Given a speech utterance recorded within a reverberant room, a Maximum Likelihood estimate of the fullband room reverberation time is computed using a statistical model for short-term log-energy sequences of anechoic speech. The estimated reverberation time is then used to select the best acoustic model, i.e., the model trained on the speech database most closely matching the estimated reverberation time, which serves to recognize the reverberated speech utterance. The proposed model selection approach is shown to significantly improve recognition accuracy for a connected digit task in both simulated and real reverberant environments, outperforming standard channel normalization techniques.
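The selection step itself is a nearest-neighbour lookup over the library's reverberation times; a minimal sketch (the RT60 values below are illustrative assumptions, not the paper's, and the likelihood-based RT60 estimator is omitted):

```python
def select_model(estimated_rt60, model_rt60s):
    """Pick the library model whose training reverberation time (seconds)
    is closest to the estimated one.

    model_rt60s is a list of the reverberation times that the artificially
    reverberated training databases were generated with.
    """
    return min(model_rt60s, key=lambda rt: abs(rt - estimated_rt60))
```

The returned RT60 would key into the corresponding acoustic model, which then decodes the utterance.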

11.
In this paper, we propose a robust distant-talking speech recognition method combining a cepstral-domain denoising autoencoder (DAE) and a temporal structure normalization (TSN) filter. Because the DAE has a deep structure and nonlinear processing steps, it is flexible enough to model the highly nonlinear mapping between input and output spaces. We train a DAE to map reverberant and noisy speech features to the underlying clean speech features in the cepstral domain. After applying the DAE to suppress reverberation, we apply a post-processing step based on the TSN filter, which reduces the residual noise and reverberation effects by normalizing the modulation spectra to reference spectra of clean speech. The proposed method was evaluated using speech in simulated and real reverberant environments. By combining the cepstral-domain DAE and TSN, the average word error rate (WER) was reduced from 25.2% for the baseline system to 21.2% in simulated environments and from 47.5% to 41.3% in real environments.

12.
Emotion recognition is one of the latest challenges in human-robot interaction. This paper describes the realization of emotional interaction for a Thinking Robot, focusing on speech emotion recognition. In general, speaker-independent systems show a lower accuracy rate compared with speaker-dependent systems, as emotional feature values depend on the speaker and their gender. However, speaker-independent systems are required for commercial applications. In this paper, a novel speaker-independent feature, the ratio of a spectral flatness measure to a spectral center (RSS), with small variation across speakers, is proposed for constructing a speaker-independent system. Gender and emotion are hierarchically classified by using the proposed feature (RSS), pitch, energy, and the mel frequency cepstral coefficients. An average recognition rate of 57.2% (±5.7% at a 90% confidence interval) is achieved with the proposed system in the speaker-independent mode.

13.
Wireless Personal Communications - In this paper, we propose novel sub-band spectral centroid weighted wavelet packet cepstral coefficients (W-WPCC) for robust speech emotion recognition. Wavelet...

14.
Spectral features play an important role in speech emotion recognition, but existing spectral features still fail to fully express the emotional information in the spectrogram. To study the relation between speech emotion and the spectrogram, a Gabor blocked local binary pattern (GBLBP) feature for speech emotion recognition is proposed. First, the log-energy spectrum of the emotional speech is computed. Then, multi-scale, multi-orientation Gabor wavelets are applied to the log-energy spectrum to obtain Gabor spectrograms. Next, each Gabor spectrogram is divided into blocks, and local binary patterns are used to extract the local energy distribution of each block. Finally, all extracted features are concatenated to form the GBLBP feature. Experiments on the Berlin database show that the weighted average recall of the GBLBP feature is 9% higher than that of MFCCs; its recognition performance is significantly better than that of many spectral features, and it fuses well with existing acoustic features.

15.
This paper introduces a cepstral approach for the automatic detection of landmines and underground utilities from acoustic and ground penetrating radar (GPR) images. This approach is based on treating the problem as a pattern recognition problem. Cepstral features are extracted from a group of images, which are transformed first to 1-D signals by lexicographic ordering. Mel-frequency cepstral coefficients (MFCCs) and polynomial shape coefficients are extracted from these 1-D signals to form a database of features, which can be used to train a neural network with these features. The target detection can be performed by extracting features from any new image with the same method used in the training phase. These features are tested with the neural network to decide whether a target exists or not. The different domains are tested and compared for efficient feature extraction from the lexicographically ordered 1-D signals. Experimental results show the success of the proposed cepstral approach for landmine detection from both acoustic and GPR images at low as well as high signal to noise ratios (SNRs). Results also show that the discrete cosine transform (DCT) is the most appropriate domain for feature extraction.
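Lexicographic ordering, the first step of the pipeline above, simply flattens the 2-D image row by row into a 1-D signal before cepstral features are extracted:

```python
import numpy as np

def lexicographic_order(image):
    """Flatten a 2-D image row by row (lexicographic ordering).

    The resulting 1-D signal can then be treated like a speech waveform
    for MFCC-style cepstral feature extraction.
    """
    return np.asarray(image).reshape(-1)
```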

16.
Automatic speech recognition under adverse noise conditions has been a challenging problem. Under noise conditions where the stationarity assumption is valid, effective techniques have been established that provide excellent recognition accuracies. When this assumption cannot hold, recognition performance declines rapidly. Missing data (MD) theory is a promising method for robust automatic speech recognition (ASR) under any noise condition. Unfortunately, the choice of feature used in the recognition process is commonly limited to spectral-based representations. The combination-of-recognizers approach to MD ASR allows the use of cepstral-based features within the MD framework through a fusion-of-features mechanism in the pattern recognition stage. It was found that under two types of non-stationary noise conditions the combined effect of the fusion process increased recognition accuracies substantially over traditional MD and cepstral-based recognizers.

17.
This paper presents the results on whispered speech recognition using gammatone filterbank cepstral coefficients for speaker dependent mode. The isolated words used for this experiment are taken from the Whi-Spe database. Whispered speech recognition is based on dynamic time warping and hidden Markov models methods. The experiments are focused on the following modes: normal speech, whispered speech and their combinations (normal/whispered and whispered/normal). The results demonstrated an important improvement in recognition after application of cepstral mean subtraction, especially in mixed train/test scenarios.
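Dynamic time warping, one of the two matching methods used above, can be sketched for 1-D sequences as follows (the recognizer matches frame sequences of gammatone cepstral vectors, but the recursion is the same with a vector frame distance):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences.

    Fills a cumulative cost matrix D where each cell adds the local
    frame cost to the cheapest of the three allowed predecessor moves
    (insertion, deletion, match).
    """
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Identical sequences score zero, and a sequence warped by repeating a frame still scores zero, which is exactly the tempo-invariance that makes DTW suitable for isolated-word matching.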

18.
The main objective of this paper is to provide a comparative study between different cepstral features for the application of human recognition using heart sounds. In the past 10 years, heart sound, which is known as phonocardiogram, has been adopted for human biometric authentication tasks. Most of the previously proposed systems have adopted mel-frequency and linear frequency cepstral coefficients as features for heart sounds. In this paper, two more cepstral features are proposed. The first one is based on wavelet packet decomposition where a new filter bank structure is designed to select the appropriate bases for extracting discriminant features from heart sounds. The other is based on nonlinear modification for mel-scaled cepstral features. The four cepstral features are tested and compared on two databases: One consists of 21 subjects, and the other consists of 206 subjects. Based on the achieved results over the two databases, the two proposed cepstral features achieved higher correct recognition rates and lower error rates in identification and verification modes, respectively.

19.
Because there are many parameters in the cochlear implant (CI) device that can be optimized for individual patients, it is important to estimate a parameter's effect before patient evaluation. In this paper, Mel-frequency cepstrum coefficients (MFCCs) were used to estimate the acoustic vowel space for vowel stimuli processed by the CI simulations. The acoustic space was then compared to vowel recognition performance by normal-hearing subjects listening to the same processed speech. Five CI speech processor parameters were simulated to produce different degree of spectral resolution, spectral smearing, spectral warping, spectral shifting, and amplitude distortion. The acoustic vowel space was highly correlated with normal hearing subjects' vowel recognition performance for parameters that affected the spectral channels and spectral smearing. However, the acoustic vowel space was not significantly correlated with perceptual performance for parameters that affected the degree of spectral warping, spectral shifting, and amplitude distortion. In particular, while spectral warping and shifting did not significantly reshape the acoustic space, vowel recognition performance was significantly affected by these parameters. The results from the acoustic analysis suggest that the CI device can preserve phonetic distinctions under conditions of spectral warping and shifting. Auditory training may help CI patients better perceive these speech cues transmitted by their speech processors.

20.
Based on the statistical distribution of cepstral coefficient vectors in feature space, this paper proposes a new equal-variance weighted cepstral distortion measure. Its weighting function captures the fine structure of the distribution of speech cepstral vectors in feature space and thus discriminates effectively between different speakers. Experiments show that, compared with the conventional Euclidean distance and the inverse-variance weighted distance, the proposed distortion measure markedly improves the correct identification rate of vector-quantization-based speaker recognition.
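A variance-weighted cepstral distance of the general kind the paper refines can be sketched as follows; this shows the common inverse-variance weighting used as a baseline here, not the paper's exact equal-variance weighting function:

```python
import numpy as np

def weighted_cepstral_distance(x, y, variances):
    """Inverse-variance weighted squared cepstral distance.

    Each dimension's squared difference is divided by that dimension's
    variance across the training corpus, so low-variance (more reliable)
    coefficients contribute more to the distance.
    """
    x, y, v = (np.asarray(a, dtype=float) for a in (x, y, variances))
    return float(np.sum((x - y) ** 2 / v))
```

With unit variances this reduces to the plain squared Euclidean distance, which is the first baseline the paper compares against.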
