Similar Documents
20 similar documents found (search time: 31 ms)
1.
We consider the problem of acoustic modeling of noisy speech data, where the uncertainty over the data is given by a Gaussian distribution. While this uncertainty has been exploited at the decoding stage via uncertainty decoding, its use at the training stage remains limited to static model adaptation. We introduce a new expectation–maximization (EM) based technique, which we call uncertainty training, that allows us to train Gaussian mixture models (GMMs) or hidden Markov models (HMMs) directly from noisy data with dynamic uncertainty. We evaluate the potential of this technique on a GMM-based speaker recognition task with speech data corrupted by real-world domestic background noise, using a state-of-the-art signal enhancement technique and various uncertainty estimation techniques as a front-end. Compared to conventional training, the proposed training algorithm yields a 3–4% absolute improvement in speaker recognition accuracy when training from matched, unmatched, or multi-condition noisy data. With minor modifications, the algorithm is also applicable to maximum a posteriori (MAP) or maximum likelihood linear regression (MLLR) acoustic model adaptation from noisy data, and to data other than audio.
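The E-step under observation uncertainty can be sketched as follows. This is a minimal, hypothetical illustration (not the authors' implementation), assuming a diagonal-covariance GMM in which each frame's Gaussian uncertainty variance simply adds to the model variance when computing component responsibilities:

```python
import numpy as np

def uncertain_posteriors(x, unc_var, means, variances, weights):
    """E-step responsibilities for a diagonal-covariance GMM when each
    observation x[t] carries its own uncertainty variance unc_var[t]:
    the model variance and the per-frame data uncertainty add."""
    T, D = x.shape
    K = means.shape[0]
    log_resp = np.zeros((T, K))
    for k in range(K):
        var = variances[k] + unc_var                # (T, D): model + data uncertainty
        diff2 = (x - means[k]) ** 2
        log_resp[:, k] = (np.log(weights[k])
                          - 0.5 * np.sum(np.log(2 * np.pi * var) + diff2 / var, axis=1))
    log_resp -= log_resp.max(axis=1, keepdims=True)  # stabilize before exponentiating
    resp = np.exp(log_resp)
    return resp / resp.sum(axis=1, keepdims=True)
```

The M-step would then reweight sufficient statistics with these responsibilities as in standard EM; frames with large uncertainty contribute flatter, less committal posteriors.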

2.
Research on noise-robust speech recognition has mainly focused on relatively stationary noise, which may differ from the noise conditions in most living environments. In this paper, we introduce a recognition system that can recognize speech in the presence of multiple rapidly time-varying noise sources, as found in a typical family living room. To deal with such severe noise conditions, our recognition system exploits all available information about speech and noise: spatial (directional), spectral, and temporal. This is realized with a model-based speech enhancement pre-processor consisting of two complementary elements: a multi-channel speech–noise separation method that exploits spatial and spectral information, followed by a single-channel enhancement algorithm that uses long-term temporal characteristics of speech obtained from clean speech examples. Moreover, to compensate for any mismatch that may remain between the enhanced speech and the acoustic model, our system employs an adaptation technique that combines conventional maximum likelihood linear regression with dynamic adaptive compensation of the variances of the Gaussians in the acoustic model. The proposed system approaches human performance levels, greatly improving the audible quality of the speech and substantially improving keyword recognition accuracy.

3.
This paper proposes an efficient speech data selection technique that can identify the data that will be well recognized. Conventional confidence measures can also identify well-recognized speech data, but they require considerable computation time because full speech recognition must be run to estimate the confidence scores. Speech data with low confidence should not go through the time-consuming recognition process, since they will yield erroneous spoken documents that will eventually be rejected. The proposed technique selects the speech data that will be acceptable for speech recognition applications: it rapidly picks out speech data with high prior confidence, based on acoustic likelihood values computed using only speech and monophone models. Experiments show that the proposed confidence estimation technique is over 50 times faster than a conventional posterior confidence measure while providing equivalent data selection performance for speech recognition and spoken document retrieval.
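A toy sketch of the selection idea, under our own simplifying assumption (not necessarily the paper's exact scoring) that prior confidence is the average per-frame margin between the best monophone log-likelihood and a generic speech-model log-likelihood:

```python
import numpy as np

def prior_confidence(mono_ll, speech_ll):
    """Hypothetical prior-confidence score for one utterance.
    mono_ll:  (T, P) frame log-likelihoods under P monophone models.
    speech_ll: (T,)  frame log-likelihoods under one generic speech GMM.
    Returns the mean per-frame margin of the best monophone over the
    generic model; no decoding pass is required."""
    return float(np.mean(mono_ll.max(axis=1) - speech_ll))

def select_utterances(utterances, threshold):
    """Keep only utterances whose precomputed score clears the threshold."""
    return [u for u in utterances if u["conf"] >= threshold]
```

Because only frame likelihoods are needed, the score is available before (and much faster than) a full recognition pass, which is the source of the reported speed-up.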

4.
Distant speech capture in lecture halls and auditoriums poses unique challenges for automatic speech recognition algorithms. In this study, a new adaptation strategy for distant noisy speech is created by means of phoneme classes. Unlike previous approaches, which adapt the acoustic model to the features, the proposed phoneme-class based feature adaptation (PCBFA) strategy adapts the distant-data features to an existing acoustic model trained on close-talking microphone speech. The essence of PCBFA is a transformation that makes the distributions of the phoneme classes of distant noisy speech similar to those of the close-talking microphone acoustic model in the multidimensional MFCC space. To achieve this, the phoneme classes of distant noisy speech are recognized with artificial neural networks. PCBFA thus adapts features rather than acoustic models. The main idea is illustrated with a conventional Gaussian mixture model–hidden Markov model (GMM–HMM) system, although it can be extended to newer automatic speech recognition (ASR) architectures. The adapted features, together with the improved acoustic models produced by PCBFA, are shown to outperform those created by acoustic model adaptation alone for ASR and keyword spotting. PCBFA offers a powerful new perspective on acoustic modeling of distant speech.

5.
The performance of automatic speech recognition degrades severely in the presence of noise or reverberation. Much research has addressed noise robustness; in contrast, the recognition of reverberant speech has received far less attention and remains very challenging. In this paper, we use a dereverberation method to reduce reverberation prior to recognition. Such a preprocessor may remove most reverberation effects, but it often introduces distortion, causing a dynamic mismatch between the speech features and the acoustic model used for recognition. Model adaptation could reduce this mismatch, but conventional adaptation techniques assume a static mismatch and may therefore not cope well with the dynamic mismatch arising from dereverberation. This paper proposes a novel adaptation scheme capable of handling both static and dynamic mismatches. We introduce a parametric model for variance adaptation that includes static and dynamic components, realizing an appropriate interconnection between the dereverberation front-end and the speech recognizer. The model parameters are optimized by adaptive training implemented with the expectation–maximization algorithm. An experiment on reverberant speech with a reverberation time of 0.5 s showed that the proposed method achieves an 80% relative error rate reduction compared with recognizing dereverberated speech directly (word error rate of 31%); the final error rate of 5.4% was obtained by combining the proposed variance compensation with MLLR adaptation.

6.
A speech signal processing system using a multi-parameter-model bidirectional Kalman filter is proposed in this paper. A conventional unidirectional Kalman filter estimates the current state of the speech signal by processing a time-varying autoregressive model of the signal from past states only. A bidirectional Kalman filter uses both past and future measurements to estimate the current state, minimizing the mean squared error with efficient recursive computations. The matrices in the state and measurement equations of the bidirectional Kalman filter are kept constant throughout the process. With a multi-parameter model, the proposed bidirectional Kalman filter relates more measurements from future and past states to the current state. The proposed filter has been integrated into a speech recognition system and its performance compared with other conventional speech processing algorithms. Compared with the single-parameter-model bidirectional Kalman filter, the multi-parameter version improves state prediction accuracy, reduces the speech information lost during filtering, and achieves a better word error rate in high-SNR conditions (clean, 20, 15, and 10 dB).
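The two-sided estimation described above can be illustrated with a standard forward Kalman pass followed by a Rauch–Tung–Striebel (RTS) backward pass, which is the textbook way to let future measurements refine each state estimate. This is a generic scalar sketch, not the paper's multi-parameter formulation:

```python
import numpy as np

def kalman_smoother(y, a, q, r, x0=0.0, p0=1.0):
    """Forward Kalman filter + RTS backward pass for the scalar model
    x_t = a*x_{t-1} + w (w ~ N(0, q)),  y_t = x_t + v (v ~ N(0, r)).
    Returns filtered and smoothed means/variances."""
    T = len(y)
    xf = np.zeros(T); pf = np.zeros(T)   # filtered mean / variance
    xp = np.zeros(T); pp = np.zeros(T)   # one-step predicted mean / variance
    x, p = x0, p0
    for t in range(T):
        xp[t], pp[t] = a * x, a * a * p + q          # predict from the past
        k = pp[t] / (pp[t] + r)                      # Kalman gain
        x = xp[t] + k * (y[t] - xp[t])               # update with measurement
        p = (1.0 - k) * pp[t]
        xf[t], pf[t] = x, p
    xs = xf.copy(); ps = pf.copy()
    for t in range(T - 2, -1, -1):                   # backward: fold in the future
        g = pf[t] * a / pp[t + 1]
        xs[t] = xf[t] + g * (xs[t + 1] - xp[t + 1])
        ps[t] = pf[t] + g * g * (ps[t + 1] - pp[t + 1])
    return xf, pf, xs, ps
```

The smoothed variance is never larger than the filtered variance, which is the quantitative sense in which the two-sided estimate is better than the forward-only one.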

7.
In this paper, we present a novel technique for constructing phonetic decision trees (PDTs) for acoustic modeling in conversational speech recognition. We use random forests (RFs) to train a set of PDTs for each phone state unit and accordingly obtain multiple acoustic models. We investigate several methods for combining acoustic scores from the multiple models, including maximum-likelihood estimation of the weights of the different acoustic models from training data, as well as using confidence scores or relative entropy to obtain the weights dynamically from online data. Since computing acoustic scores from multiple models slows down the decoding search, we propose clustering methods to compact the RF-generated acoustic models. The conventional concept of PDT-based state tying is extended to RF-based state tying: for each RF-tied state, the Gaussian density functions (GDFs) from the multiple acoustic models are clustered into classes, and a prototype is computed for each class to represent the original GDFs. In this way, the number of GDFs in each RF-tied state is greatly decreased, which significantly reduces the time needed to compute acoustic scores. Experimental results on a telemedicine automatic captioning task demonstrate that the proposed RF-PDT technique leads to significant improvements in word recognition accuracy.

8.
An end-to-end monaural speech separation algorithm based on deep acoustic features is proposed. Conventional acoustic feature extraction requires operations such as the Fourier transform and the discrete cosine transform, which cause a loss of speech energy and long latency. To alleviate these problems, the raw speech waveform is fed directly into a deep neural network, which learns deeper acoustic features of the speech signal and performs end-to-end separation. Objective evaluations show that the proposed algorithm not only effectively improves separation performance but also reduces the latency of the separation system.

9.
Speech production errors characteristic of dysarthria are chiefly responsible for the low accuracy of automatic speech recognition (ASR) for people diagnosed with the condition. A person with dysarthria produces speech in a markedly reduced acoustic working space, so typical measures of speech acoustics take values in ranges very different from those characterizing unimpaired speech. It is therefore unlikely that models trained on unimpaired speech can overcome this mismatch through any of the currently well-studied adaptation algorithms, which make no attempt to address such a large mismatch in population characteristics. In this work, we propose an interpolation-based technique for obtaining a prior acoustic model from one trained on unimpaired speech, before adapting it to the dysarthric talker. The method computes a 'background' model of the dysarthric talker's general speech characteristics and uses it to obtain a prior model better suited for adaptation than the speaker-independent model trained on unimpaired speech. The approach is tested on a corpus of dysarthric speech acquired by our research group, covering sixteen talkers with varying levels of dysarthria severity (as quantified by their intelligibility). Used in conjunction with the well-known maximum a posteriori (MAP) adaptation algorithm, the interpolation technique yields improvements of up to 8% absolute (up to 40% relative) over the standard MAP-adapted baseline.
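A minimal sketch of the interpolation idea, followed by a standard MAP mean update. The function names, the interpolation weight `lam`, and the relevance factor `tau` are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def interpolate_prior(si_means, bg_means, lam):
    """Hypothetical prior construction: interpolate between
    speaker-independent (unimpaired) model means and a 'background'
    model of the dysarthric talker's general characteristics.
    lam=1 recovers the SI model; lam=0 the background model."""
    return lam * si_means + (1.0 - lam) * bg_means

def map_mean_update(prior_mean, data_mean, n, tau):
    """Textbook MAP mean update: with few adaptation frames (small n)
    the estimate stays near the prior; with many, it moves to the data."""
    return (tau * prior_mean + n * data_mean) / (tau + n)
```

The point of the interpolated prior is that MAP adaptation then starts from means already shifted toward the talker's reduced acoustic space, instead of from the mismatched speaker-independent model.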

10.
The robustness of speech recognition systems in real-world environments is the key issue for the practical deployment of speech recognition technology. The core problem of robustness research is how to resolve the mismatch between the speech features and models of the deployment environment and a recognizer trained on clean speech, which involves key techniques such as noise compensation, channel adaptation, and speaker adaptation. This article surveys the main methods, principles, and current state of research on robust speech recognition, analyzes adaptation techniques for acoustic models and language models in practical environments, and outlines promising research directions for the near-term practical development of speech recognition.

11.
Feature statistics normalization in the cepstral domain is one of the best-performing approaches to robust automatic speech and speaker recognition in noisy acoustic scenarios: feature coefficients are normalized with suitable linear or nonlinear transformations so that the noisy-speech statistics match the clean-speech statistics. Histogram equalization (HEQ) belongs to this category of algorithms, has proved effective for the purpose, and is therefore taken here as the reference. In this paper, the availability of multiple acoustic channels is used to enhance the statistics-modeling capabilities of the HEQ algorithm, exploiting the multiple noisy speech observations to maximize the effectiveness of the cepstral normalization process. Computer simulations based on the Aurora 2 database in speech and speaker recognition scenarios show that a significant recognition improvement over the single-channel counterpart and other multi-channel techniques can be achieved, confirming the effectiveness of the idea. The proposed algorithmic configuration has also been combined with a kernel estimation technique to further improve speech recognition performance.
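Single-channel HEQ can be sketched as quantile mapping: each noisy coefficient is mapped to the clean-reference value at the same empirical CDF position. This is the generic textbook version of the reference algorithm, not the multi-channel extension proposed here:

```python
import numpy as np

def histogram_equalize(noisy, reference):
    """One-dimensional HEQ by quantile mapping: each noisy value is
    replaced by the reference value at the same empirical CDF position,
    so the transformed stream matches the reference (clean) statistics."""
    ranks = np.argsort(np.argsort(noisy))               # rank of each sample
    cdf = (ranks + 0.5) / len(noisy)                    # empirical CDF positions
    ref_sorted = np.sort(reference)
    ref_cdf = (np.arange(len(reference)) + 0.5) / len(reference)
    return np.interp(cdf, ref_cdf, ref_sorted)          # monotone mapping
```

In practice the transformation is applied independently per cepstral coefficient; the multi-channel idea in the abstract amounts to estimating the noisy-side statistics from several channel observations instead of one.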

12.
Based on the log-normal assumption, parallel model combination (PMC) provides an effective method for adapting the cepstral means and variances of speech models for noisy speech recognition. In addition, the log-add method has been derived to adapt the means while ignoring the cepstral variance during PMC; this method is efficient for speech recognition in high signal-to-noise ratio (SNR) environments. In this paper, a new interpretation of the log-add method is proposed, leading to a modified scheme for performing the adaptation procedure in PMC. The modified method is shown to be effective in improving recognition accuracy at low SNR. Based on this modified PMC method, we derive a direct adaptation procedure for the variances of speech models in the cepstral domain. The proposed method is fast because the transformation of the covariance matrix is no longer required. Three recognition tasks are conducted to evaluate the proposed method. Experimental results show that the proposed technique not only requires lower computational cost but also outperforms the original PMC technique in noisy environments.
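The log-add mean combination can be sketched in the log-spectral domain. In full PMC the cepstral means are first mapped to the log-spectral domain with an inverse DCT and mapped back afterwards; that step is omitted here, so this shows only the core of the approximation (the gain `g` is an illustrative parameter):

```python
import numpy as np

def log_add_mean(mu_speech, mu_noise, g=1.0):
    """Log-add combination of a speech mean and a noise mean in the
    log-spectral domain: the corrupted mean is
    log(g * exp(mu_speech) + exp(mu_noise)).
    np.logaddexp computes log(exp(a) + exp(b)) stably."""
    return np.logaddexp(np.log(g) + mu_speech, mu_noise)
```

When the speech mean dominates (high SNR) the result is essentially the clean mean, which is why the plain log-add method works well at high SNR; the abstract's modification targets the low-SNR regime.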

13.
Building on a detailed analysis of maximum likelihood linear regression (MLLR) model adaptation with adaptive regression trees for speech recognition, this paper proposes a target-driven multi-layer MLLR adaptation (TMLLR) algorithm. Following a target-driven principle, the algorithm introduces a feedback mechanism that dynamically determines the transformation classes of the MLLR transforms according to the increase in the likelihood of the objective function, greatly improving the recognition rate. Moreover, thanks to its special multi-layer structure, many redundant intermediate computations are eliminated, so the algorithm achieves high adaptation accuracy together with fast adaptation. In supervised adaptation experiments, the system adapted with this algorithm reduced the error rate by 10% relative to a system adapted with regression-tree-based MLLR, and adapted nearly twice as fast.
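The basic MLLR mean transform that such multi-layer and regression-tree schemes build on can be sketched as follows (generic form with illustrative matrices; the class-selection feedback loop described above is not shown):

```python
import numpy as np

def apply_mllr_means(means, A, b):
    """Apply one MLLR mean transform mu_hat = A @ mu + b to every
    Gaussian mean assigned to a regression class.
    means: (N, D) stacked means; A: (D, D); b: (D,)."""
    return means @ A.T + b
```

In regression-tree MLLR, each tree node holds one such (A, b) pair estimated by maximum likelihood from its adaptation data; the target-driven variant decides dynamically which classes get their own transform based on the likelihood gain.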

14.
An unsupervised adaptation algorithm based on lattice maximum likelihood linear regression (Lattice-MLLR) is introduced and improved. Lattice-MLLR estimates the MLLR transform parameters from the lattice produced by decoding; because the potential error rate of the lattice is far lower than that of the one-best recognition result, the parameter estimates are more accurate. A major drawback of Lattice-MLLR is its extremely high computational cost, which makes it hard to use in practice. To address this, two improvements are proposed: (1) pruning the lattice using posterior probabilities; (2) using word time information to restrict the range over which state statistics are computed. Experiments show that Lattice-MLLR reduces the error rate by 3.5% relative to conventional MLLR, and the improvements cut the computation of Lattice-MLLR by more than 87.9%.

15.
陈聪  贺杰  陈佳 《控制工程》2021,28(3):585-591
To improve the accuracy of conventional automatic speech recognition (ASR) systems, an end-to-end ASR design based on a hidden Markov model with a hybrid connectionist temporal classification (CTC)/attention mechanism is proposed. First, to address the difficulty of recognizing observable time-varying sequences with strong continuity and large vocabularies, the recognition process is modeled with a hidden Markov model, parameterizing the recognition model; second, the CTC objective function is used as an auxiliary task, and in multi-…

16.
Mandarin Chinese is well known to be strongly influenced by numerous regional accents, yet Mandarin speech data with different accents are scarce. An important goal of Mandarin speech recognition is therefore to model the acoustic variation caused by accents appropriately. This article presents a study of a range of deep neural network (DNN) based acoustic modeling techniques that use accent information either implicitly or explicitly. Multi-accent modeling approaches, including mixed-condition training, multi-accent decision-tree state tying, DNN tandem systems, and multi-level adaptive network (MLAN) tandem HMM modeling, are combined and compared. An improved MLAN tandem HMM system that explicitly exploits accent information is proposed and applied to a data-scarce accented Mandarin recognition task covering four regional accents. After sequence-discriminative training and adaptation, the system significantly outperforms the baseline accent-independent DNN tandem system, with absolute character error rate reductions of 0.8% to 1.5% (6% to 9% relative).

17.
We describe the automatic determination of a large and complicated acoustic model for speech recognition using variational Bayesian estimation and clustering (VBEC). We propose an efficient method for decision tree clustering based on a Gaussian mixture model (GMM), and an efficient model search algorithm for finding an appropriate acoustic model topology within the VBEC framework. GMM-based decision tree clustering of triphone HMM states features a novel approach that reduces the otherwise prohibitive number of computations to a practical level by utilizing the statistics of monophone hidden Markov model states. The model search algorithm likewise reduces the search space by exploiting the characteristics of the acoustic model. The experimental results confirm that VBEC automatically and rapidly yields an optimal model topology with the highest performance.

18.
To address the heavy computational load of the HMM-based MAP and MMSE speech enhancement algorithms, and the inability of the former to handle non-stationary noise, a speech enhancement algorithm combining speech separation with HMMs is proposed, drawing on speech separation methods. The algorithm uses multi-state, multi-mixture HMMs suited to non-stationary noise, decodes the mixed state of the noisy speech under the speech and noise models, and estimates the speech using the max-model theory from speech separation, thereby avoiding iterative procedures and computationally heavy formulas and reducing the computational complexity. Experiments show that the algorithm effectively removes both stationary and non-stationary noise, clearly improves the PESQ perceptual quality score, and keeps the running time well under control.

19.
Current automatic speech recognition (ASR) works in off-line mode and needs prior knowledge of the stationary or quasi-stationary test conditions to achieve the expected word recognition accuracy. These requirements limit the application of ASR in real-world settings, where test conditions are highly non-stationary and not known a priori. This paper presents an innovative frame-dynamic rapid adaptation and noise compensation technique for tracking highly non-stationary noises, and its application to on-line ASR. The proposed algorithm is a soft computing model using Bayesian on-line inference for spectral change point detection (BOSCPD) in unknown non-stationary noises. BOSCPD is tested against the MCRA noise tracking technique for on-line rapid learning of environmental changes in different non-stationary noise scenarios. The test results show that BOSCPD significantly reduces the delay in spectral change point detection compared with the baseline MCRA and its derivatives. The BOSCPD soft computing model is then tested for joint additive and channel distortion compensation (JAC) based on-line ASR in unknown test conditions, using non-stationary noisy speech samples from the Aurora 2 speech database. The simulation results for on-line ASR show a significant improvement in recognition accuracy compared with the baseline Aurora 2 distributed speech recognition (DSR) system in batch mode.

20.
To address the low accuracy of single-modality emotion recognition, a speech-text bimodal emotion recognition model based on Bi-LSTM-CNN is proposed. The algorithm uses a bi-directional long short-term memory network (Bi-LSTM) with word embeddings and a convolutional neural networ…


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号