Similar Documents
20 similar documents found (search time: 31 ms)
1.
This paper investigates a noise robust technique for automatic speech recognition which exploits hidden Markov modeling of stereo speech features from clean and noisy channels. The HMM trained this way, referred to as stereo HMM, has in each state a Gaussian mixture model (GMM) with a joint distribution of both clean and noisy speech features. Given the noisy speech input, the stereo HMM gives rise to a two-pass compensation and decoding process, where MMSE denoising based on N-best hypotheses is performed first, followed by decoding of the denoised speech in a reduced search space on the lattice. Compared to feature space GMM-based denoising approaches, the stereo HMM is advantageous as it provides finer-grained noise compensation and makes use of the whole noisy feature sequence for the prediction of each individual clean feature. Experiments on large vocabulary spontaneous speech from speech-to-speech translation applications show that the proposed technique outperforms its feature space counterpart in noisy conditions while still maintaining decent performance in clean conditions.
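The per-state prediction at the core of the stereo HMM is conditional-Gaussian regression over a joint clean/noisy GMM. Below is a minimal frame-level sketch of that MMSE estimate (essentially the feature-space counterpart the abstract compares against); the state conditioning and N-best lattice machinery are omitted, and all names and shapes are illustrative.

```python
import numpy as np

def mmse_clean_estimate(y, weights, mu_x, mu_y, cov_xy, cov_yy):
    """MMSE estimate of the clean feature x given noisy feature y
    under a joint GMM over stacked [x; y] vectors (sketch).

    y        : (D,)   noisy feature vector
    weights  : (K,)   mixture weights
    mu_x     : (K, D) clean-part means
    mu_y     : (K, D) noisy-part means
    cov_xy   : (K, D, D) cross-covariances between x and y
    cov_yy   : (K, D, D) noisy-part covariances
    """
    K, D = mu_y.shape
    log_post = np.empty(K)
    for k in range(K):
        diff = y - mu_y[k]
        _, logdet = np.linalg.slogdet(cov_yy[k])
        # Component log-posterior up to a constant shared by all components.
        log_post[k] = (np.log(weights[k]) - 0.5 * logdet
                       - 0.5 * diff @ np.linalg.solve(cov_yy[k], diff))
    log_post -= log_post.max()
    post = np.exp(log_post)
    post /= post.sum()

    x_hat = np.zeros(D)
    for k in range(K):
        # Conditional-Gaussian regression of x on y within component k.
        cond_mean = mu_x[k] + cov_xy[k] @ np.linalg.solve(cov_yy[k], y - mu_y[k])
        x_hat += post[k] * cond_mean
    return x_hat
```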

2.
Speech coding algorithms that have been developed for clean speech are often used in a noisy environment. We describe maximum a posteriori (MAP) and minimum mean square error (MMSE) techniques to estimate the clean-speech short-term predictor (STP) parameters from noisy speech. The MAP and MMSE estimates are obtained using a likelihood function computed by means of the DFT or Kalman filtering and empirical probability distributions based on multidimensional histograms. The method is assessed in terms of the resulting root mean spectral distortion between the "clean" speech STP parameters and the STP parameters computed with the proposed method from noisy speech. The estimated parameters are also applied to obtain clean speech estimates by means of a Kalman filter. The quality of the estimated speech as compared to the "clean" speech is assessed by means of subjective tests, signal-to-noise ratio improvement, and the perceptual speech quality measurement method.
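A minimal sketch of the Kalman-filter clean-speech estimation step, assuming the STP (AR) parameters and noise variances for the current frame have already been produced by the MAP/MMSE stage; the companion-form state vector holds the last p clean samples. Parameter names are illustrative.

```python
import numpy as np

def kalman_denoise(y, a, q, r):
    """Filter-only clean-speech estimate under an AR(p) signal model:
    x_t = sum_i a[i] * x_{t-i} + w_t,   y_t = x_t + v_t.

    y : (N,) noisy samples    a : (p,) AR coefficients (STP parameters)
    q : AR driving-noise variance    r : observation-noise variance
    """
    p = len(a)
    # Companion-form transition matrix: first row holds the AR coefficients.
    F = np.zeros((p, p))
    F[0, :] = a
    F[1:, :-1] = np.eye(p - 1)
    H = np.zeros((1, p)); H[0, 0] = 1.0     # we observe the newest sample
    Q = np.zeros((p, p)); Q[0, 0] = q
    x = np.zeros(p)
    P = np.eye(p)
    out = np.empty_like(y, dtype=float)
    for t, yt in enumerate(y):
        # Predict.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new noisy sample.
        S = H @ P @ H.T + r
        K = (P @ H.T) / S
        x = x + (K * (yt - H @ x)).ravel()
        P = (np.eye(p) - K @ H) @ P
        out[t] = x[0]
    return out
```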

3.
He Zhiyong, Zhu Zhongkui. Journal of Computer Applications, 2011, 31(12): 3441-3445
Speech enhancement aims to extract clean speech from a noisy signal. In some environments clean speech is corrupted by impulsive noise, whose temporal distribution makes enhancement difficult, so traditional methods struggle to achieve satisfactory results under impulsive noise. To enhance speech under stationary impulsive noise, a new method is proposed. The method identifies the frequency band in which the ratio of impulse-noise sample energy to noisy-signal sample energy is largest, and uses the energy distribution in that band to decide, frame by frame, whether the speech signal is contaminated by impulsive noise. Furthermore, the method applies Kalman filtering for denoising only in the frames contaminated by impulsive noise, and improves the autoregressive (AR) model parameter estimation of the traditional algorithm. In experiments, speech was corrupted with white and colored impulsive noise and enhanced at low input SNR; the results show that the proposed algorithm significantly improves SNR and suppresses impulsive noise.
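A hedged sketch of the frame-level detection idea: filter the noisy signal to the band where impulse energy dominates and flag frames whose band energy is abnormally high, so that only those frames are passed to the Kalman filter. The band limits, frame length, and threshold factor are illustrative, not the paper's values.

```python
import numpy as np
from scipy.signal import butter, lfilter

def flag_impulse_frames(x, fs, band, frame_len=256, k=4.0):
    """Frame-by-frame impulse-noise detection sketch: measure energy in the
    band where impulses dominate and flag frames whose band energy is far
    above the typical (median) level."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    xb = lfilter(b, a, x)                     # keep only the impulse-dominant band
    n_frames = len(xb) // frame_len
    e = np.array([np.sum(xb[i * frame_len:(i + 1) * frame_len] ** 2)
                  for i in range(n_frames)])
    return e > k * np.median(e)               # True -> frame goes to the Kalman filter
```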

4.
Qinghua, Jie, Yue. Neurocomputing, 2007, 70(16-18): 3063
In this paper, we propose to use the variational Bayesian (VB) method to learn the clean speech signal directly from the noisy observation. It models the probability distribution of the clean signal using a Gaussian mixture model (GMM) and minimizes the misfit between the true probability distributions of hidden variables and model parameters and their approximate distributions. Experimental results demonstrate that the proposed algorithm performs better than several other methods.

5.
In this paper, we propose a multi-environment model adaptation method based on vector Taylor series (VTS) for robust speech recognition. In the training phase, the clean speech is contaminated with noise at different signal-to-noise ratio (SNR) levels to produce several types of noisy training speech and each type is used to obtain a noisy hidden Markov model (HMM) set. In the recognition phase, the HMM set which best matches the testing environment is selected, and further adjusted to reduce the environmental mismatch by the VTS-based model adaptation method. In the proposed method, the VTS approximation based on noisy training speech is given and the testing noise parameters are estimated from the noisy testing speech using the expectation-maximization (EM) algorithm. The experimental results indicate that the proposed multi-environment model adaptation method can significantly improve the performance of speech recognizers and outperforms the traditional model adaptation method and the linear regression-based multi-environment method.
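A minimal sketch of the VTS approximation in the log-Mel (static feature) domain, assuming purely additive noise and ignoring channel terms: the clean-model mean is shifted by the mismatch function at the expansion point, and the Jacobian is returned for covariance compensation.

```python
import numpy as np

def vts_adapt_mean(mu_x, mu_n):
    """Zeroth/first-order VTS compensation of a log-Mel clean mean.
    Additive-noise model: y = x + log(1 + exp(n - x)).
    Returns the compensated mean and the Jacobian G = dy/dx, used to
    transform covariances: Sigma_y ~= G Sigma_x G^T + (I-G) Sigma_n (I-G)^T.
    """
    g = np.log1p(np.exp(mu_n - mu_x))                 # mismatch function at the expansion point
    mu_y = mu_x + g
    G = np.diag(1.0 / (1.0 + np.exp(mu_n - mu_x)))    # Jacobian wrt the clean speech
    return mu_y, G
```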

6.
This work explores the use of speech enhancement for degraded speech, which may be useful for a text-dependent speaker verification system. The degradation may be due to noise or background speech. The text-dependent speaker verification is based on the dynamic time warping (DTW) method, so end point detection is necessary. End point detection is easy if the speech is clean; however, degradation tends to cause errors in the estimation of the end points, and this error propagates into the overall accuracy of the speaker verification system. Temporal and spectral enhancement is performed on the degraded speech so that, ideally, the enhanced speech resembles clean speech. Results show that the temporal and spectral processing methods contribute to the task by removing the degradation, and improved accuracy is obtained for the text-dependent DTW-based speaker verification system.
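For reference, below is the plain DTW alignment cost between two endpoint-trimmed feature sequences, as a verification system of this kind would compute it; this is the standard recursion, not the paper's exact implementation.

```python
import numpy as np

def dtw_distance(A, B):
    """DTW between feature sequences A (n, d) and B (m, d) with Euclidean
    local cost and the usual match/insert/delete recursion.  End point
    detection determines where A and B start and stop, which is why
    end-point errors propagate directly into this alignment cost."""
    n, m = len(A), len(B)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(A[i - 1] - B[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)   # length-normalized alignment cost
```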

7.
An analysis-based non-linear feature extraction approach is proposed, inspired by a model of how speech amplitude spectra are affected by additive noise. Acoustic features are extracted based on the noise-robust parts of speech spectra without losing discriminative information. Two non-linear processing methods, harmonic demodulation and spectral peak-to-valley ratio locking, are designed to minimize the mismatch between clean and noisy speech features. A previously studied method, peak isolation [IEEE Transactions on Speech and Audio Processing 5 (1997) 451], is also discussed with this model. These methods do not require noise estimation and are effective in dealing with both stationary and non-stationary noise. In the presence of additive noise, ASR experiments show that using these techniques in the computation of MFCCs greatly improves recognition performance. For the TI46 isolated digits database, the average recognition rate across several SNRs improves from 60% (using unmodified MFCCs) to 95% (using the proposed techniques) with additive speech-shaped noise. For the Aurora 2 connected digit-string database, the average recognition rate across different noise types, including non-stationary noise backgrounds, and SNRs improves from 58% to 80%.

8.
Distant speech capture in lecture halls and auditoriums offers unique challenges in algorithm development for automatic speech recognition. In this study, a new adaptation strategy for distant noisy speech is created by means of phoneme classes. Unlike previous approaches which adapt the acoustic model to the features, the proposed phoneme-class based feature adaptation (PCBFA) strategy adapts the distant data features to the present acoustic model, which was previously trained on close-microphone speech. The essence of PCBFA is to create a transformation which makes the distributions of phoneme classes of distant noisy speech similar to those of a close-talk microphone acoustic model in a multidimensional MFCC space. To achieve this, phoneme classes of distant noisy speech are recognized via artificial neural networks. PCBFA adapts features rather than acoustic models. The main idea behind PCBFA is illustrated via a conventional Gaussian mixture model-hidden Markov model (GMM-HMM), although it can be extended to new structures in automatic speech recognition (ASR). The adapted features, together with the new and improved acoustic models produced by PCBFA, are shown to outperform those created only by acoustic model adaptation for ASR and keyword spotting. PCBFA offers a powerful new approach to acoustic modeling of distant speech.

9.
The new model reduces the impact of local spectral and temporal variability by estimating a finite set of spectral and temporal warping factors which are applied to speech at the frame level. Optimum warping factors are obtained while decoding in a locally constrained search. The model involves augmenting the states of a standard hidden Markov model (HMM), providing an additional degree of freedom. It is argued in this paper that this represents an efficient and effective method for compensating local variability in speech which may have potential application to a broader array of speech transformations. The technique is presented in the context of existing methods for frequency warping-based speaker normalization for ASR. The new model is evaluated in clean and noisy task domains using subsets of the Aurora 2, the Spanish Speech-Dat-Car, and the TIDIGITS corpora. In addition, some experiments are performed on a Spanish language corpus collected from a population of speakers with a range of speech disorders. It has been found that, under clean or not severely degraded conditions, the new model provides improvements over the standard HMM baseline. It is argued that the framework of local warping is an effective general approach to providing more flexible models of speaker variability.

10.
Li Gang, Hu Ruimin, Zhang Rui, Wang Xiaochen. Multimedia Tools and Applications, 2020, 79(27-28): 19471-19491

Environmental noise degrades speech intelligibility when listening on the phone. Even though the phone receives a clean signal, it is still difficult for the listener to extract information. Intelligibility enhancement (IENH) is a type of perceptual enhancement technique for clean speech rendered in noisy environments. This study focuses on IENH by normal-to-Lombard speech conversion, inspired by the Lombard reflex. In this conversion process, the key point is to map the spectral tilt from normal speech (normal style) to Lombard speech (Lombard style). For mapping the spectral tilt, we propose a mapping model combining linear-prediction-based mapping networks and tilt modification. Compared with previous studies, we use deep neural networks (DNNs) instead of Gaussian-based models for higher-dimensional mapping, and add a tilt modification module to further reduce the mapping errors of formant magnitudes. In this paper, we use the AVS-M codec and two datasets as the benchmark platform. The evaluation shows that our method obtains better results than the reference methods in both objective and subjective experiments.


11.
Pitch is a crucial parameter of speech and music signals. However, severe noise, missing harmonics, and unsuitable physical vibration make accurate pitch determination a great challenge. In this paper, we propose a method for pitch estimation of speech and music sounds. Our method is based on the fast Fourier transform (FFT) of the multi-scale product (MP) provided by a feature auditory model of the sound signals. The auditory model simulates the spectral behaviour of the cochlea by a gammachirp filter-bank, and the outer/middle-ear filtering by a low-pass filter. For the two output channels, the FFT of the MP is computed over frames. The MP is formed as the product of the wavelet transform coefficients of the speech or music signal at three scales. The experimental results show that our method estimates the pitch with high accuracy. Moreover, our proposed method outperforms several other pitch detection algorithms in clean and noisy environments.
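A hedged sketch of the multi-scale product idea, assuming the PyWavelets package and a stationary wavelet transform as the wavelet stage (the paper's gammachirp auditory front-end and exact wavelet choice are not reproduced here): multiply detail coefficients at three scales, then read the pitch off the FFT peak.

```python
import numpy as np
import pywt

def pitch_from_msp(frame, fs, fmin=60.0, fmax=500.0):
    """Pitch estimate from the FFT of the multi-scale product (MP): the
    product of stationary wavelet detail coefficients at three scales,
    whose sharpened periodicity shows up as a spectral peak at f0."""
    n = len(frame) - len(frame) % 8           # swt needs a length multiple of 2^level
    coeffs = pywt.swt(frame[:n], "db2", level=3)
    mp = np.prod([d for (_, d) in coeffs], axis=0)   # multi-scale product
    spec = np.abs(np.fft.rfft(mp * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    band = (freqs >= fmin) & (freqs <= fmax)  # restrict to a plausible pitch range
    return freqs[band][np.argmax(spec[band])]
```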

12.
In this work, a first approach to a robust phoneme recognition task by means of a biologically inspired feature extraction method is presented. The proposed technique provides an approximation to the speech signal representation at the auditory cortical level. It is based on an optimal dictionary of atoms, estimated from auditory spectrograms, and the Matching Pursuit algorithm to approximate the cortical activations. This provides a sparse coding with intrinsic noise robustness, which can therefore be exploited when using the system in adverse environments. The recognition task consisted of the classification of a set of 5 easily confused English phonemes, in both clean and noisy conditions. Multilayer perceptrons were trained as classifiers, and the performance was compared to other classic and robust parameterizations: the auditory spectrogram, probabilistic optimum filtering on Mel frequency cepstral coefficients, and perceptual linear prediction coefficients. Results showed a significant improvement in the recognition rate of clean and noisy phonemes by the cortical representation over these other parameterizations.
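A minimal sketch of the Matching Pursuit step used to approximate the cortical activations, assuming a precomputed dictionary D with unit-norm columns (the paper learns this dictionary from auditory spectrograms).

```python
import numpy as np

def matching_pursuit(x, D, n_atoms=20):
    """Greedy Matching Pursuit: approximate x as a sparse combination of
    dictionary atoms (columns of D, assumed unit-norm).  Returns the sparse
    activation vector used here as a noise-robust cortical-like code."""
    r = x.copy()
    a = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ r                  # correlation of the residual with every atom
        k = np.argmax(np.abs(corr))     # best-matching atom
        a[k] += corr[k]
        r -= corr[k] * D[:, k]          # peel its contribution off the residual
    return a
```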

13.
The receptive field of a convolutional neural network is determined by the kernel size. Traditional convolution uses fixed-size kernels, which limits the model's feature-perception capability. Moreover, CNNs use a parameter-sharing mechanism, applying the same feature extraction to all sample points in a spatial region; however, in a noisy spectrogram the distributions of the noise and the clean speech differ, especially in complex noise environments, making it difficult for conventional convolution to extract and filter speech features with high quality. To address these problems, a multi-scale region-adaptive convolution module is proposed: multi-scale information improves the model's feature-perception capability, and regional convolution weights are assigned adaptively according to the feature values at the corresponding sample points, realizing region-adaptive convolution and improving the model's ability to filter noise. Experiments on the public TIMIT dataset show that the proposed algorithm achieves better results on speech quality and intelligibility metrics.
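One plausible reading of the module in code, sketched with PyTorch: parallel convolutions at several kernel sizes supply multi-scale context, and a 1x1 gating branch predicts per-position weights from the input so the mixture of scales adapts to the local spectrogram region. The layer sizes and gating scheme are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class MultiScaleRegionAdaptiveConv(nn.Module):
    """Multi-scale convolution with per-position (region-adaptive) mixing."""
    def __init__(self, in_ch, out_ch, scales=(3, 5, 7)):
        super().__init__()
        # One conv branch per kernel size; same-padding keeps shapes aligned.
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in scales)
        # 1x1 gate predicts one weight per scale at every spatial position.
        self.gate = nn.Conv2d(in_ch, len(scales), kernel_size=1)

    def forward(self, x):
        w = torch.softmax(self.gate(x), dim=1)           # (B, S, H, W) region weights
        feats = [branch(x) for branch in self.branches]  # S tensors of (B, C, H, W)
        return sum(w[:, i:i + 1] * f for i, f in enumerate(feats))
```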

14.
Audio-visual speech modeling for continuous speech recognition
This paper describes a speech recognition system that uses both acoustic and visual speech information to improve recognition performance in noisy environments. The system consists of three components: a visual module, an acoustic module, and a sensor fusion module. The visual module locates and tracks the lip movements of a given speaker and extracts relevant speech features. This task is performed with an appearance-based lip model that is learned from example images. Visual speech features are represented by contour information of the lips and grey-level information of the mouth area. The acoustic module extracts noise-robust features from the audio signal. Finally, the sensor fusion module is responsible for the joint temporal modeling of the acoustic and visual feature streams and is realized using multistream hidden Markov models (HMMs). The multistream method allows the definition of different temporal topologies and levels of stream integration and hence enables the modeling of temporal dependencies more accurately than traditional approaches. We present two different methods to learn the asynchrony between the two modalities and how to incorporate them in the multistream models. The superior performance of the proposed system is demonstrated on a large multispeaker database of continuously spoken digits. On a recognition task at 15 dB acoustic signal-to-noise ratio (SNR), acoustic perceptual linear prediction (PLP) features lead to a 56% error rate, noise-robust RASTA-PLP (relative spectra) acoustic features to a 7.2% error rate, and combined noise-robust acoustic and visual features to a 2.5% error rate.

15.
We are addressing the novel problem of jointly evaluating multiple speech patterns for automatic speech recognition and training. We propose solutions based on both the non-parametric dynamic time warping (DTW) algorithm, and the parametric hidden Markov model (HMM). We show that a hybrid approach is quite effective for the application of noisy speech recognition. We extend the concept to HMM training wherein some patterns may be noisy or distorted. Utilizing the concept of "virtual pattern" developed for joint evaluation, we propose selective iterative training of HMMs. Evaluating these algorithms for burst/transient noisy speech and isolated word recognition, significant improvement in recognition accuracy is obtained using the new algorithms over those which do not utilize the joint evaluation strategy.

16.
A multi-stream subband speech recognition method for noisy conditions is proposed. Traditional subband feature methods improve recognition performance in noise, but usually degrade performance on clean speech. The new method extracts perceptual linear prediction (PLP) features and subband features, recognizes them separately, and then combines the two at the recognition-probability level. Recognition experiments on the E-Set under NoiseX92 white noise show that the new method not only has better noise robustness but also improves recognition performance on clean speech.
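The probability-level combination amounts to a weighted fusion of the two recognizers' per-class scores. A minimal sketch, where the stream weight w is an illustrative free parameter rather than the paper's setting:

```python
import numpy as np

def combine_streams(logp_plp, logp_subband, w=0.5):
    """Probability-level combination of two recognizers' per-class scores:
    a weighted sum of log-probabilities (equivalently, a product of powers
    of the stream likelihoods).  Returns the index of the winning class."""
    logp = w * logp_plp + (1.0 - w) * logp_subband
    return int(np.argmax(logp))
```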

17.
Feature extraction methods for sound events have been traditionally based on parametric representations specifically developed for speech signals, such as the well-known Mel Frequency Cepstrum Coefficients (MFCC). However, the discrimination capabilities of these features for Acoustic Event Classification (AEC) tasks could be enhanced by taking into account the spectro-temporal structure of acoustic event signals. In this paper, a new front-end for AEC which incorporates this specific information is proposed. It consists of two different stages: short-time feature extraction and temporal feature integration. The first module aims at providing a better spectral representation of the different acoustic events on a frame-by-frame basis, by means of the automatic selection of the optimal set of frequency bands from which cepstral-like features are extracted. The second stage is designed for capturing the most relevant temporal information in the short-time features, through the application of Non-Negative Matrix Factorization (NMF) on their periodograms computed over long audio segments. The whole front-end has been evaluated in clean and noisy conditions. Experiments show that the removal of certain frequency bands (which are mainly located in the medium region of the spectrum for clean conditions and in low frequencies for noisy environments) in the short-time feature computation, in conjunction with the NMF technique for temporal feature integration, significantly improves the performance of a Support Vector Machine (SVM) based AEC system with respect to the use of conventional MFCCs.
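A minimal NMF sketch with Lee-Seung multiplicative updates, as it might be applied to the periodogram matrix in the temporal-integration stage; the rank, iteration count, and random initialization are illustrative.

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9):
    """Euclidean-distance NMF, V ~= W @ H, via multiplicative updates.
    Here V would hold the periodograms of short-time features computed over
    long audio segments (features x time); the basis W captures dominant
    temporal modulation patterns used as integrated features."""
    F, T = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((F, rank)) + eps
    H = rng.random((rank, T)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis
    return W, H
```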

18.
Emotion recognition in speech signals is currently a very active research topic and has attracted much attention within the engineering application area. This paper presents a new approach of robust emotion recognition in speech signals in noisy environment. By using a weighted sparse representation model based on the maximum likelihood estimation, an enhanced sparse representation classifier is proposed for robust emotion recognition in noisy speech. The effectiveness and robustness of the proposed method is investigated on clean and noisy emotional speech. The proposed method is compared with six typical classifiers, including linear discriminant classifier, K-nearest neighbor, C4.5 decision tree, radial basis function neural networks, support vector machines as well as sparse representation classifier. Experimental results on two publicly available emotional speech databases, that is, the Berlin database and the Polish database, demonstrate the promising performance of the proposed method on the task of robust emotion recognition in noisy speech, outperforming the other used methods.
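A hedged sketch of the underlying sparse representation classifier (without the paper's maximum-likelihood sample weighting), using orthogonal matching pursuit from scikit-learn as the sparse coder; dictionary layout and sparsity level are assumptions.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(x, D, labels, n_nonzero=30):
    """Sparse representation classifier (SRC) sketch: code the test vector x
    over the dictionary of all training examples (columns of D), then pick
    the class whose atoms reconstruct x with the smallest residual."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(D, x)                     # solves x ~= D @ a with a sparse
    a = omp.coef_
    residuals = {}
    for c in np.unique(labels):
        mask = (labels == c)          # keep only class-c atoms and coefficients
        residuals[c] = np.linalg.norm(x - D[:, mask] @ a[mask])
    return min(residuals, key=residuals.get)
```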

19.

Speech signals are affected by background noise, which is detrimental to both intelligibility and speech quality. Most speech processing algorithms operate on the spectral magnitude and leave the spectral phase unexplored and unstructured. The proposed single-channel speech enhancement model, called Adaptive Recurrent Nonnegative Matrix Factorization (AR-NMF), is designed around a phase compensation strategy with deep learning. The two major phases are training and testing. During training, the noisy speech signal is decomposed by Hurst exponent-based Empirical Mode Decomposition (HEMD) and converted into the frequency domain using the Short Time Fourier Transform. The new AR-NMF is then used for denoising, where the tuning factor is generated by an optimized RNN. Here, the hidden neurons are optimized by the proposed Adaptive Attack Power-based Sail Fish Optimization (AAP-SFO), minimizing the Mean Absolute Error between the actual and predicted values. Finally, the phase-compensated speech signal is passed to the ISTFT, which yields the final denoised clean speech signal. From the analysis, the CSED of AAP-SFO-AR-NMF for street noise is 58.24%, 57.34%, 56.72%, and 77.37% better than RNMF, esHRNR, esTSNR, and Vuvuzela, respectively. The proposed deep enhancement method is extensively evaluated in diverse adverse noisy environments, demonstrating its superiority.


20.
To address the instability of burst-spectrum features, this paper proposes an energy-change-rate-based detection method for Mandarin stops (plosives). The method first extracts, from the Seneff auditory spectrum, a set of parameters describing the energy change rate of a segment; it then applies the Fisherface method for feature transformation, and the transformed features are classified by a K-nearest-neighbor (KNN) classifier to detect stops. Finally, leave-one-out cross-validation is used to assess the model. Experimental results show a stop detection accuracy of 96.39% on clean speech and 88.07% at 10 dB SNR, demonstrating good stability and generalization.
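A minimal sketch of the detection back-end with scikit-learn, using LDA as a stand-in for the Fisherface transform and leave-one-out cross-validation as described; the feature matrix here is a random placeholder, not real energy-change-rate data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline

# Placeholder data: one row of energy-change-rate parameters per segment,
# with binary labels (stop / non-stop).
X = np.random.randn(200, 24)
y = np.random.randint(0, 2, 200)

# Fisherface-style LDA projection followed by a KNN classifier,
# scored with leave-one-out cross-validation.
model = make_pipeline(LinearDiscriminantAnalysis(),
                      KNeighborsClassifier(n_neighbors=5))
acc = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.4f}")
```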
