Similar Documents
20 similar documents found (search time: 31 ms)
1.
At present, automatic speech recognition systems often suffer a mismatch between training and test environments caused by complex environmental factors, which sharply degrades recognition performance and greatly limits the application range of speech recognition technology. In recent years, many robust speech recognition techniques have been proposed; they share the same goal of improving system robustness and thereby the recognition rate. Among them, feature-based normalization techniques are simple and effective and are often the first choice for robust speech recognition: they compensate for the environmental mismatch by normalizing the statistical moments, cumulative density function, or power spectrum of the feature vectors. This paper surveys the current mainstream normalization methods, including cepstral moment normalization, histogram equalization, and modulation spectrum normalization.
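Of the normalization family surveyed in this abstract, cepstral mean and variance normalization is the simplest member: each feature dimension is shifted and scaled to zero mean and unit variance over the utterance. A minimal sketch in plain Python (the function name and toy feature values are illustrative, not taken from the surveyed papers):

```python
import math

def cmvn(frames):
    """Normalize each feature dimension to zero mean and unit variance
    over an utterance. `frames` is a list of equal-length feature vectors."""
    dims = len(frames[0])
    out = [list(f) for f in frames]
    for d in range(dims):
        col = [f[d] for f in frames]
        mean = sum(col) / len(col)
        var = sum((x - mean) ** 2 for x in col) / len(col)
        std = math.sqrt(var) or 1.0   # guard against a constant dimension
        for t, f in enumerate(out):
            f[d] = (frames[t][d] - mean) / std
    return out

feats = [[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]]
norm = cmvn(feats)
# each dimension of `norm` now has zero mean and unit variance
```

In practice the statistics are often estimated per speaker or with a sliding window rather than per utterance; the per-utterance form above is just the shortest self-contained variant.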

2.
The application range of communication robots could be widely expanded by the use of automatic speech recognition (ASR) systems with improved robustness to noise and to speakers of different ages. In past research, several modules have been proposed and evaluated for improving the robustness of ASR systems in noisy environments. However, their performance may degrade when applied to robots, owing to distant speech and the robot's own noise. In this paper, we implemented the individual modules in a humanoid robot and evaluated ASR performance in a real-world noisy environment for adults' and children's speech. The performance of each module was verified by adding different levels of real environmental noise recorded in a cafeteria. Experimental results indicated that our ASR system could achieve over 80% word accuracy in 70-dBA noise. Further evaluation on adult speech recorded in a real noisy environment yielded 73% word accuracy.

3.
Audio-visual speech recognition (AVSR), which uses both acoustic and visual speech signals, has received attention because of its robustness in noisy environments. In this paper, we present a late-integration AVSR system whose robustness under various noise conditions is improved by enhancing each of its three components. First, we improve the visual subsystem by using a stochastic optimization method for the hidden Markov models that serve as the speech recognizer. Second, we propose a new method of modeling the dynamic characteristics of speech to improve the robustness of the acoustic subsystem. Third, the acoustic and visual subsystems are integrated by neural networks to produce robust final recognition results. We demonstrate the performance of the proposed methods in speaker-independent isolated-word recognition experiments. The results show that the proposed system is more robust than the conventional system under various noise conditions, without a priori knowledge of the noise contained in the speech.

4.
Histogram equalization (HEQ) is one of the most efficient and effective techniques for reducing the mismatch between training and test acoustic conditions. However, most current HEQ methods operate dimension-wise, without accounting for the contextual relationships between consecutive speech frames. In this paper, we present several novel HEQ approaches that exploit spatial-temporal feature distribution characteristics for speech feature normalization. Automatic speech recognition (ASR) experiments were carried out on the Aurora-2 standard noise-robust ASR task, and the presented approaches were thoroughly tested against other popular HEQ methods. The experimental results show that, for clean-condition training, our approaches yield a significant word error rate reduction over the baseline system and are competitive with the other HEQ methods compared in this paper.
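Dimension-wise HEQ, the baseline this paper builds on, maps each feature value through the empirical CDF of the utterance and then through the inverse CDF of a reference distribution, commonly a standard Gaussian. A minimal sketch under those assumptions (function names are illustrative; real systems use a parametric or table-based inverse CDF rather than bisection):

```python
import math

def gaussian_inv_cdf(p):
    """Inverse CDF of N(0,1), computed by bisection on the error function."""
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def heq_to_gaussian(values):
    """Dimension-wise histogram equalization: each value is replaced by the
    standard-Gaussian quantile matching its empirical CDF position."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    out = [0.0] * n
    for rank, i in enumerate(order):
        p = (rank + 0.5) / n          # empirical CDF value in (0, 1)
        out[i] = gaussian_inv_cdf(p)  # reference quantile
    return out

equalized = heq_to_gaussian([4.1, 0.2, 9.7, 3.3, 5.5])
# the rank order of the inputs is preserved; the median maps to ~0
```

The transform is monotone, so it reshapes the distribution while preserving the ordering of feature values within each dimension.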

5.
To improve the robustness of speech recognition systems to environmental noise, a speech feature extraction method is proposed that builds on the fast lifting wavelet transform and combines filtering in the perceptual frequency domain with cepstral mean normalization. Simulation experiments show that noise robustness is significantly improved compared with traditional methods; at comparable signal-to-noise ratios, the method is also computationally simpler, easier to program, and faster than traditional wavelet methods.

6.
In this paper, we propose a novel front-end speech parameterization technique for automatic speech recognition (ASR) that is less sensitive to ambient noise and pitch variations. First, using variational mode decomposition (VMD), we break the short-time magnitude spectrum obtained by the discrete Fourier transform into several components. To suppress the ill effects of noise and pitch variations, the spectrum is then smoothed: the higher-order variational mode functions are discarded and the spectrum is reconstructed from the first two modes only. As a result, the smoothed spectrum closely resembles the spectral envelope. Next, Mel-frequency cepstral coefficients (MFCC) are extracted from the VMD-smoothed spectra. The proposed front-end acoustic features are observed to be more robust to ambient noise and pitch variations than conventional MFCC features, as demonstrated by the experimental evaluations presented in this study. For this purpose, we developed an ASR system using speech data from adult speakers collected under relatively clean recording conditions, employing state-of-the-art acoustic models based on deep neural networks (DNN) and long short-term memory recurrent neural networks (LSTM-RNN). The ASR systems were evaluated under noisy test conditions to assess the noise robustness of the proposed features. To assess robustness to pitch variations, evaluations were also performed on a test set of speech from child speakers; transcribing children's speech simulates an ASR task in which the pitch difference between training and test data is significantly large. The signal-domain analyses as well as the experimental evaluations presented in this paper support our claims.

7.
Automatic speech recognition (ASR) has reached very high levels of performance in controlled situations. However, performance degrades significantly when environmental noise is present during recognition. The major challenge today is achieving good robustness to adverse conditions, so that automatic speech recognizers can be used in real situations. Missing-data theory is an attractive and promising approach. Unlike other denoising methods, missing-data recognition does not match the whole observation against the acoustic models; instead, it treats part of the signal as missing, i.e., corrupted by noise. While recognition with missing data can be handled efficiently by methods such as data imputation or marginalization, accurately identifying the missing parts (also called masks) remains very challenging. This paper reviews the main approaches proposed to address this problem. The objective is to survey the mask estimation methods proposed so far and to open the domain to related research that could be adapted to this difficult challenge. To restrict the range of methods, only single-microphone techniques are considered.
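A common baseline in the mask-estimation literature is the SNR-criterion binary mask: a time-frequency cell is labeled reliable when its local estimated SNR exceeds a threshold, and missing otherwise. A minimal sketch (the 0 dB threshold, names, and toy inputs are illustrative, not from any specific reviewed method):

```python
import math

def binary_mask(noisy_power, noise_power, snr_threshold_db=0.0):
    """Mark a time-frequency cell reliable (1) when its local SNR estimate
    exceeds the threshold, missing (0) otherwise. Inputs are 2-D lists of
    power values indexed [frame][frequency_bin]."""
    mask = []
    for frame_noisy, frame_noise in zip(noisy_power, noise_power):
        row = []
        for p_y, p_n in zip(frame_noisy, frame_noise):
            # spectral-subtraction estimate of the speech power, floored
            speech_est = max(p_y - p_n, 1e-12)
            snr_db = 10 * math.log10(speech_est / max(p_n, 1e-12))
            row.append(1 if snr_db > snr_threshold_db else 0)
        mask.append(row)
    return mask

m = binary_mask([[10.0, 1.0], [0.5, 8.0]], [[1.0, 1.0], [1.0, 1.0]])
# → [[1, 0], [0, 1]]
```

The hard part the review addresses is precisely that the true noise power per cell is unknown; everything downstream (marginalization, imputation) depends on how well this mask is estimated.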

8.
Despite the significant progress of automatic speech recognition (ASR) over the past three decades, it has not reached human-level performance, particularly in adverse conditions. To improve ASR performance, various approaches have been studied that differ in feature extraction method, classification method, and training algorithm. Different approaches often exploit complementary information, so combining them can be a better option. In this paper, we propose a novel approach that draws on the best characteristics of conventional, hybrid, and segmental HMMs by integrating them with the ROVER system combination technique. In the proposed framework, three different recognizers are created and combined, each with its own feature set and classification technique. For the complete system, three separate acoustic models are used with three different feature sets and two language models. Experimental results show that the word error rate (WER) can be reduced by about 4% with the proposed technique compared to conventional methods. The modules were implemented and tested for Hindi-language ASR, in typical field conditions as well as in noisy environments.

9.
张毅  汪培培  罗元 《信息与控制》2016,45(3):355-360
To address the sharp drop in recognition rate that speech recognition systems suffer under noise, and after analyzing the shortcomings of traditional robust feature extraction methods in spectral estimation of the speech signal, a feature extraction algorithm is proposed that maintains good robustness and recognition performance across different signal-to-noise ratios. The algorithm combines the multiple signal classification (MUSIC) method with the minimum-norm method (MNM) for spectral estimation. Verification experiments on a mobile robot platform show that the algorithm effectively raises the recognition rate and strengthens the robustness of speech recognition.

10.
In a distributed speech recognition (DSR) framework, speech features are quantized and compressed at the client and recognized at the server. Recognition accuracy, however, is degraded by environmental noise at the input, quantization distortion, and transmission errors. In this paper, histogram-based quantization (HQ) is proposed, in which the partition cells for quantization are dynamically defined by the histogram, or order statistics, of a segment of the most recent values of the parameter to be quantized. This scheme is shown to solve many of the problems related to DSR to a good degree. A joint uncertainty decoding (JUD) approach is further developed to account for the uncertainty caused by both environmental noise and quantization errors, and a three-stage error concealment (EC) framework is developed to handle transmission errors. The proposed HQ is also shown to be an attractive feature transformation approach for robust speech recognition outside of a DSR environment. All claims have been verified by experiments in the Aurora 2 testing environment, with significant performance improvements over conventional approaches for both robust and distributed speech recognition.
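The core idea of HQ — partition boundaries taken from the order statistics of a window of recent parameter values, so the quantizer tracks the current feature distribution — can be sketched as follows (the quantile placement and names are illustrative, not the paper's exact design):

```python
def hq_quantize(history, x, num_levels=4):
    """Histogram-based quantization: partition boundaries are the order
    statistics (quantiles) of a segment of recent parameter values, so the
    quantizer adapts to the distribution currently being observed."""
    s = sorted(history)
    n = len(s)
    # boundaries at the 1/num_levels, 2/num_levels, ... quantiles
    bounds = [s[(k * n) // num_levels] for k in range(1, num_levels)]
    level = 0
    for b in bounds:
        if x >= b:
            level += 1
    return level

hist = [float(v) for v in range(100)]   # recent values 0..99
codes = [hq_quantize(hist, v) for v in (3.0, 30.0, 60.0, 95.0)]
# → [0, 1, 2, 3]
```

Because each cell covers an equal share of the observed distribution, every codeword is used roughly equally often, which is what makes the scheme robust to environment-induced shifts of the feature statistics.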

11.
A Noisy-Speech Recognition Method Based on Modulation Spectrum Features
Extracting speech feature parameters is one of the key steps in speech recognition. To improve the performance of the overall recognition system, the chosen speech parameters should be robust. Building on time-frequency analysis theory, this paper designs a feature parameter based on the speech modulation spectrum. The parameter exploits the time-frequency concentration of the modulation spectrum and applies suitable filtering and normalization to it to reduce its sensitivity to additive noise, channel distortion, and other interference. Experimental results show that the parameter contributes noticeably to the noise robustness of the speech recognition system.

12.
A Spectral Conversion Approach to Single-Channel Speech Enhancement
In this paper, a novel method for single-channel speech enhancement is proposed, based on a spectral conversion feature denoising approach. Spectral conversion has been applied previously in the context of voice conversion, and has been shown to successfully transform spectral features with particular statistical properties into spectral features that best fit (under the constraint of a piecewise linear transformation) different target statistics. This spectral transformation is applied as an initialization step to two well-known single-channel enhancement methods, namely the iterative Wiener filter (IWF) and a particular iterative implementation of the Kalman filter. In both cases, spectral conversion is shown to provide a significant improvement over initializations using the spectral features taken directly from the noisy speech. In essence, the proposed approach allows these two algorithms to be applied in a user-centric manner, when "clean" speech training data are available from a particular speaker. The extra spectral conversion step offers significant output signal-to-noise ratio (SNR) improvements over the conventional initializations, reaching 2 dB for the IWF and 6 dB for the Kalman filtering algorithm, at low input SNRs for white and colored noise, respectively.
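The Wiener filter underlying the IWF applies, in each frequency bin, the gain H = S/(S + N), equivalently ξ/(1 + ξ) with a priori SNR ξ = S/N; the iteration then re-estimates S from the filtered output. A minimal single-pass sketch (names and toy spectra are illustrative):

```python
def wiener_gain(speech_psd, noise_psd):
    """Per-bin Wiener filter gain H = S / (S + N), equivalently
    xi / (1 + xi) with a priori SNR xi = S / N."""
    return [s / (s + n) if (s + n) > 0 else 0.0
            for s, n in zip(speech_psd, noise_psd)]

def enhance_frame(noisy_spectrum, speech_psd, noise_psd):
    """Apply the per-bin gains to one frame of noisy magnitude values."""
    return [g * y for g, y in zip(wiener_gain(speech_psd, noise_psd),
                                  noisy_spectrum)]

gains = wiener_gain([9.0, 1.0], [1.0, 1.0])
# → [0.9, 0.5]: high-SNR bins pass almost unchanged, 0 dB bins are halved
```

The paper's contribution sits upstream of this step: spectral conversion supplies a better initial estimate of the clean-speech PSD than the noisy spectrum itself, which is what the iteration otherwise has to start from.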

13.
Audio-visual speech recognition (AVSR) has shown impressive improvements over audio-only speech recognition in the presence of acoustic noise. However, region-of-interest detection and feature extraction can limit recognition performance, because the visual speech information is typically obtained from planar video data. In this paper, we depart from traditional visual speech information and propose an AVSR system that integrates 3D lip information, using the Microsoft Kinect multi-sensory device for data collection. Different feature extraction and selection algorithms were applied to the planar images and the 3D lip information, fusing the two into a joint visual/3D-lip feature. For recognition, fusion methods were investigated and the audio-visual speech information was integrated into a state-synchronous two-stream hidden Markov model. The experimental results demonstrate that our AVSR system integrating 3D lip information improves the recognition performance of traditional ASR and AVSR systems in acoustically noisy environments.

14.
刘金刚  周翊  马永保  刘宏清 《计算机应用》2016,36(12):3369-3373
To address the poor robustness of speech recognition systems in noisy environments, a switched speech power spectrum estimation algorithm is proposed. Assuming that the speech magnitude spectrum follows a Chi distribution, an improved minimum mean square error (MMSE) speech power spectrum estimator is derived, and is combined with the speech presence probability (SPP) to obtain an improved SPP-based MMSE estimator. This improved MMSE estimator is then coupled with the conventional Wiener filter: when noise interference is strong, the improved MMSE estimator is used to estimate the clean-speech power spectrum; when noise interference is weak, the conventional Wiener filter is used instead to reduce computation. The result is the switched power spectrum estimation algorithm used by the recognition system. Experimental results show that, compared with the conventional MMSE estimator under a Rayleigh distribution, the proposed algorithm raises the recognition rate by about 8 percentage points on average across various noise types, and reduces the power consumption of the recognition system while removing noise interference and improving robustness.

15.
In this paper, we present a training-based approach to speech enhancement that exploits the spectral statistical characteristics of clean speech and noise in a specific environment. In contrast to many state-of-the-art approaches, we do not model the probability density function (pdf) of the clean speech and the noise spectra. Instead, subband-individual weighting rules for noisy speech spectral amplitudes are separately trained for speech presence and speech absence from noise recordings in the environment of interest. Weighting rules for a variety of cost functions are given; they are parameterized and stored as a table look-up. The speech enhancement system simply works by computing the weighting rules from the table look-up, indexed by the a posteriori signal-to-noise ratio (SNR) and the a priori SNR for each subband computed on a Bark scale. Optimized for an automotive environment, our approach outperforms known environment-independent speech enhancement techniques, namely the a priori SNR-driven Wiener filter and the minimum mean square error (MMSE) log-spectral amplitude estimator, in terms of both speech distortion and noise attenuation.

16.
In this paper, we propose a Bayesian minimum mean squared error approach for the joint estimation of the short-term predictor parameters of speech and noise from the noisy observation. We use trained codebooks of speech and noise linear predictive coefficients to model the a priori information required by the Bayesian scheme. In contrast to current Bayesian estimation approaches that treat the excitation variances as part of the a priori information, the proposed method computes them online for each short-time segment, based on the observation at hand. Consequently, the method performs well in nonstationary noise conditions. The resulting estimates of the speech and noise spectra can be used in a Wiener filter or any state-of-the-art speech enhancement system. We develop both memoryless (using information from the current frame alone) and memory-based (using information from the current and previous frames) estimators. Estimation of functions of the short-term predictor parameters is also addressed, in particular one that leads to the minimum mean squared error estimate of the clean speech signal. Experiments indicate that the scheme proposed in this paper performs significantly better than competing methods.

17.
Speech is an important medium for conveying and exchanging information. Its acoustic signal carries rich speaker information, semantic information, and emotional information, giving rise to three distinct directions in speech processing: speaker recognition (SR), automatic speech recognition (ASR), and speech emotion recognition (SER), each of which applies its own techniques and specific methods for information extraction and model design. This paper first reviews the early development of the three tasks in China and abroad, dividing their history into four stages, and summarizes the common acoustic features used for feature extraction across the three tasks, noting the emphasis of each feature class. Then, in light of the broad adoption of deep learning in recent years, the paper analyzes the application of popular deep learning models to acoustic modeling, summarizing supervised and unsupervised acoustic feature extraction approaches and technical routes for the three tasks, as well as multi-channel models with attention mechanisms for speech feature extraction. To perform speech recognition, speaker recognition, and emotion recognition jointly, a multi-task Tandem model is proposed for the personalized characteristics of the acoustic signal; in addition, a multi-channel collaborative network model is proposed, a design that improves the accuracy of multi-task feature extraction.

18.
To improve the ability of neural networks to process speech waveforms directly in the time domain, an end-to-end speech enhancement method based on RefineNet is proposed. A time-frequency analysis network is built to emulate the short-time Fourier transform used in speech signal processing, and a RefineNet network learns the feature mapping from noisy to clean speech. During training, a multi-objective joint optimization strategy folds the enhancement metrics short-time objective intelligibility (STOI) and source-to-distortion ratio (SDR) into the loss function. In comparisons with representative traditional methods and end-to-end deep learning methods, the proposed algorithm achieves the best enhancement on all objective metrics and shows better noise immunity under unseen noise and low-SNR conditions.

20.
To improve speech endpoint detection, wavelet analysis and neural networks are combined into a wavelet neural network endpoint detection algorithm (WA-PCA-RBF). Wavelet analysis extracts feature vectors from the speech signal; principal component analysis selects features and eliminates redundant ones; and the selected feature vectors are fed into an RBF neural network whose parameters are optimized by a genetic algorithm to build the endpoint detection model. Results show that, compared with traditional endpoint detection algorithms, WA-PCA-RBF raises endpoint detection accuracy and offers better adaptability and robustness, meeting the needs of practical systems.
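For contrast with the WA-PCA-RBF pipeline, the traditional endpoint detectors it is compared against can be as simple as thresholding short-time frame energy. A minimal sketch of that baseline (frame length, threshold, and toy signal are illustrative):

```python
def detect_endpoints(signal, frame_len=4, threshold=0.5):
    """Return (start_frame, end_frame) of the first region whose
    short-time energy exceeds the threshold; (-1, -1) if none."""
    energies = []
    for i in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[i:i + frame_len]
        energies.append(sum(x * x for x in frame) / frame_len)
    active = [i for i, e in enumerate(energies) if e > threshold]
    if not active:
        return (-1, -1)
    return (active[0], active[-1])

sig = [0.0] * 8 + [1.0] * 8 + [0.0] * 8   # silence, speech, silence
start, end = detect_endpoints(sig)
# → (2, 3): the two high-energy frames in the middle
```

Such an energy threshold fails at low SNR, which is exactly the weakness the wavelet-feature/PCA/RBF classifier in the abstract is designed to overcome.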


Copyright©北京勤云科技发展有限公司  京ICP备09084417号