Similar Articles
20 similar articles found
1.
Building on a detailed analysis of regression-tree-based maximum likelihood linear regression (MLLR) model adaptation for speech recognition, this paper proposes a target-driven multi-layer MLLR adaptation (TMLLR) algorithm. Following a target-driven principle, the algorithm introduces a feedback mechanism and dynamically determines the regression classes of the MLLR transforms according to the increase in the likelihood of the objective function, which substantially improves recognition accuracy. Thanks to its special multi-layer structure, it also eliminates much redundant intermediate computation, so the algorithm adapts quickly while maintaining high adaptation accuracy. In supervised adaptation experiments, the system adapted with this algorithm achieved a 10% lower error rate than a system adapted with regression-tree-based MLLR, and adaptation was nearly twice as fast.
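As a point of reference for the MLLR machinery that TMLLR extends, the sketch below shows the standard closed-form estimation of a single mean transform for one regression class, assuming diagonal covariances; the function name and array layouts are illustrative, not taken from the paper.

```python
import numpy as np

def estimate_mllr_mean_transform(gammas, obs, means, variances):
    """Estimate one MLLR mean transform W (d x (d+1)) for a regression class.

    gammas:    (M, T) Gaussian occupation probabilities gamma_m(t)
    obs:       (T, d) adaptation feature vectors
    means:     (M, d) Gaussian means of the class
    variances: (M, d) diagonal covariances
    Adapted means are then mu' = W @ [1, mu].
    """
    M, T = gammas.shape
    d = obs.shape[1]
    xi = np.hstack([np.ones((M, 1)), means])      # extended means [1, mu_m]
    occ = gammas.sum(axis=1)                      # total occupancy per Gaussian
    W = np.zeros((d, d + 1))
    # Row-wise closed form, valid for diagonal covariances.
    for i in range(d):
        G = np.zeros((d + 1, d + 1))              # left-hand accumulator
        k = np.zeros(d + 1)                       # right-hand accumulator
        for m in range(M):
            G += (occ[m] / variances[m, i]) * np.outer(xi[m], xi[m])
            k += (gammas[m] @ obs[:, i]) / variances[m, i] * xi[m]
        W[i] = np.linalg.solve(G, k)
    return W

# adapted_means = (W @ np.hstack([np.ones((len(means), 1)), means]).T).T
```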

2.
To mitigate the impact of vocal effort on speaker recognition performance, and assuming that a small amount of whispered and shouted speech is available in the training data, this paper proposes combining maximum a posteriori (MAP) adaptation with constrained maximum likelihood linear regression (CMLLR) to update the speaker models and project the speaker features. MAP adaptation updates the speaker models trained on normal speech, while the CMLLR feature-space projection transforms the features of whispered and shouted test speech, reducing the mismatch between training and test utterances. Experimental results show that the MAP+CMLLR method clearly lowers the equal error rate (EER) of the speaker recognition system: compared with the baseline system, MAP adaptation alone, MLLR model projection, and CMLLR feature-space projection, the average EER drops by 75.3%, 3.5%, 72%, and 70.9%, respectively. The results indicate that the proposed method weakens the influence of vocal effort on speaker discriminability and makes the speaker recognition system more robust to changes in vocal effort.
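A minimal sketch of the CMLLR feature-space projection used here to map whispered and shouted test features back toward the training condition; the transform (A, b) is assumed to have been estimated elsewhere, and the interface is illustrative rather than the authors' implementation.

```python
import numpy as np

def apply_cmllr(features, A, b):
    """Project features with a constrained MLLR (feature-space) transform.

    features: (T, d) test-utterance features (e.g. whispered or shouted speech)
    A, b:     (d, d) matrix and (d,) bias estimated on enrollment data
    Returns the transformed features and the per-frame log-Jacobian that must be
    added to the acoustic log-likelihood so scores remain comparable.
    """
    projected = features @ A.T + b
    log_jacobian = np.linalg.slogdet(A)[1]   # log |det A|, added once per frame
    return projected, log_jacobian
```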

3.
Adaptation is an effective way to improve the accuracy of speaker-independent speech recognition systems; the two most widely used approaches are maximum a posteriori (MAP) adaptation and maximum likelihood linear regression (MLLR) adaptation. This paper analyzes the characteristics of each and applies MAP adaptation to a hidden Markov model (HMM) based spoken-command (password) recognition system. Experimental results show that, with a single adaptation pass per word, the method raises the recognition rate from 40% to above 90%; on this basis a practical medium-vocabulary spoken-command recognition system was built.
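The relevance-MAP mean update at the heart of this kind of adaptation can be written in a few lines; the sketch below assumes precomputed occupation probabilities and uses an illustrative relevance factor, and is not the paper's exact formulation.

```python
import numpy as np

def map_update_means(means, gammas, obs, tau=10.0):
    """Relevance-MAP update of GMM/HMM means from a small amount of adaptation data.

    means:  (M, d) prior (speaker-independent) means
    gammas: (M, T) occupation probabilities of the adaptation frames
    obs:    (T, d) adaptation feature vectors
    tau:    relevance factor; larger tau keeps the update closer to the prior
    """
    occ = gammas.sum(axis=1, keepdims=True)            # (M, 1) soft counts
    first_order = gammas @ obs                         # (M, d) sum_t gamma_m(t) o(t)
    return (tau * means + first_order) / (tau + occ)   # interpolate prior and data
```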

4.
This paper studies speech recognition over telephone channels based on data simulation and HMM (hidden Markov model) adaptation. The simulated data mimic how clean speech behaves under different telephone-channel conditions. Baseline HMM systems are trained on clean speech and on simulated speech, respectively. Recognition experiments evaluate each baseline before and after unsupervised adaptation with the MLLR (maximum likelihood linear regression) algorithm. The experiments show that simulated speech generated from clean speech effectively reduces the acoustic mismatch between training and test data and substantially improves telephone speech recognition accuracy. Adaptation results on the baseline models show that adapting with simulated data outperforms adapting with clean speech by up to 9.8%, demonstrating further improvement in telephone speech recognition performance and in system robustness.
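One simple way to simulate telephone-channel speech from clean recordings is to band-limit it to the nominal telephone passband and add mild channel noise, as sketched below; the cut-off frequencies, filter order and SNR are illustrative assumptions, since the abstract does not specify the simulation details.

```python
import numpy as np
from scipy.signal import butter, lfilter

def simulate_telephone_channel(clean, fs=16000, low=300.0, high=3400.0, snr_db=25.0):
    """Turn clean wideband speech into rough telephone-like training data.

    Band-limits the signal to the nominal 300-3400 Hz telephone passband and adds
    mild channel noise at the requested SNR. All parameters are illustrative.
    """
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    narrow = lfilter(b, a, clean)
    noise = np.random.randn(len(narrow))
    noise *= np.sqrt(narrow.var() / (10 ** (snr_db / 10.0))) / (noise.std() + 1e-12)
    return narrow + noise
```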

5.
6.
To cope with the shortage of training data encountered in practice, speaker modeling adopts the Gaussian mixture model-universal background model (GMM-UBM) framework, and the paper studies how to raise the recognition rate of a speaker recognition system from two angles: model adaptation and parameter estimation. On the adaptation side, it improves on the conventional approach of deriving speaker models with maximum a posteriori (MAP) estimation by bringing maximum likelihood linear regression (MLLR) and eigenvoice (EV) adaptation, both originally from speech recognition, into speaker-model adaptation and comparing them against the MAP method.
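Of the adaptation schemes compared above, eigenvoice (EV) adaptation is the one that constrains the speaker model to a low-dimensional subspace; the sketch below estimates the eigenvoice weights from sufficient statistics under diagonal covariances. Shapes and names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def eigenvoice_weights(V, means0, variances, gammas, obs):
    """Estimate eigenvoice weights w so that the adapted means are means0 + V @ w.

    V:         (M, d, K) per-Gaussian eigenvoice directions (K eigenvoices)
    means0:    (M, d) speaker-independent means
    variances: (M, d) diagonal covariances
    gammas:    (M, T) occupation probabilities on the adaptation data
    obs:       (T, d) adaptation feature vectors
    """
    M, d, K = V.shape
    occ = gammas.sum(axis=1)                      # zeroth-order statistics
    first = gammas @ obs                          # first-order statistics, (M, d)
    A = np.zeros((K, K))
    rhs = np.zeros(K)
    for m in range(M):
        prec = 1.0 / variances[m]                 # diagonal precision
        A += occ[m] * V[m].T @ (prec[:, None] * V[m])
        rhs += V[m].T @ (prec * (first[m] - occ[m] * means0[m]))
    w = np.linalg.solve(A, rhs)
    adapted_means = means0 + np.einsum("mdk,k->md", V, w)
    return w, adapted_means
```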

7.
In this paper, we propose an application of kernel methods for fast speaker adaptation based on kernelizing the eigenspace-based maximum-likelihood linear regression adaptation method. We call our new method "kernel eigenspace-based maximum-likelihood linear regression adaptation" (KEMLLR). In KEMLLR, speaker-dependent (SD) models are estimated from a common speaker-independent (SI) model using MLLR adaptation, and the MLLR transformation matrices are mapped to a kernel-induced high-dimensional feature space, wherein kernel principal component analysis is used to derive a set of eigenmatrices. In addition, a composite kernel is used to preserve row information in the transformation matrices. A new speaker's MLLR transformation matrix is then represented as a linear combination of the leading kernel eigenmatrices, which, though it exists only in the feature space, still allows the speaker's mean vectors to be found explicitly. As a result, at the end of KEMLLR adaptation, a regular hidden Markov model (HMM) is obtained for the new speaker and subsequent speech recognition is as fast as normal HMM decoding. KEMLLR adaptation was tested and compared with other adaptation methods on the Resource Management and Wall Street Journal tasks using 5 or 10 s of adaptation speech. In both cases, KEMLLR adaptation gives the greatest improvement over the SI model, with an 11%-20% word error rate reduction.
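The core step that distinguishes KEMLLR from eigenspace MLLR is performing PCA on the speaker transforms in a kernel-induced feature space. The sketch below applies plain kernel PCA with an RBF kernel to vectorized MLLR matrices; the paper uses a composite kernel that preserves row information, so this is only a simplified illustration with assumed parameters.

```python
import numpy as np

def kernel_pca_on_transforms(transforms, n_components=5, gamma=1e-3):
    """Kernel PCA over a set of per-speaker MLLR transform matrices.

    transforms: list of (d, d+1) MLLR matrices from training speakers
    Returns the top kernel eigenvalues/eigenvectors; a new speaker's transform can
    then be expressed as a combination of the leading kernel eigen-directions.
    The RBF kernel and gamma are illustrative assumptions.
    """
    X = np.stack([W.ravel() for W in transforms])        # one row per speaker
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    K = np.exp(-gamma * sq)                              # RBF kernel matrix
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                       # center in feature space
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]
    return vals[order], vecs[:, order]
```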

8.
The goal of this article is the application of genetic algorithms (GAs) to the automatic speech recognition (ASR) domain at the acoustic-sequence classification level. Speech recognition has been cast as a pattern classification problem where we would like to classify an input acoustic signal into one of the possible phonemes. The supervised classification has also been formulated as a function optimization problem. Thus, we have attempted to recognize Standard Arabic (SA) phonemes of continuous, naturally spoken speech by using GAs, which have several advantages in resolving complicated optimization problems. In SA, there are 40 sounds. We have analyzed a corpus that contains several sentences composed of all the SA phoneme types in the initial, medial, and final positions, recorded by several male speakers. Furthermore, acoustic segment classification with GAs has been explored. Among a set of classifiers such as the Bayesian, likelihood, and distance classifiers, we have used the distance classifier, which is based on a classification measure criterion. Therefore, we have used the Manhattan-distance decision rule as the fitness function for our GA evaluations. The corpus phonemes were extracted and classified successfully with an overall accuracy of 90.20%.
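To make the GA-plus-distance-classifier recipe concrete, here is a minimal sketch that evolves one phoneme prototype with a negative mean Manhattan-distance fitness and then classifies frames by nearest prototype; population size, selection and mutation settings are illustrative, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve_prototype(frames, pop_size=40, generations=100, sigma=0.1):
    """Evolve one phoneme prototype with a plain GA.

    frames: (N, d) MFCC vectors labelled with one phoneme class.
    Fitness is the negative mean Manhattan (L1) distance to the training frames,
    mirroring the distance-classifier criterion described in the abstract.
    """
    d = frames.shape[1]
    pop = frames[rng.integers(0, len(frames), pop_size)].copy()   # seed from data
    for _ in range(generations):
        fitness = -np.abs(frames[None, :, :] - pop[:, None, :]).sum(-1).mean(-1)
        parents = pop[np.argsort(fitness)[::-1][: pop_size // 2]]  # truncation selection
        # Crossover: average random parent pairs, then apply Gaussian mutation.
        pairs = parents[rng.integers(0, len(parents), (pop_size, 2))]
        pop = pairs.mean(axis=1) + rng.normal(0.0, sigma, (pop_size, d))
    fitness = -np.abs(frames[None, :, :] - pop[:, None, :]).sum(-1).mean(-1)
    return pop[fitness.argmax()]

def classify(frame, prototypes):
    """Assign a frame to the phoneme whose prototype is nearest in L1 distance."""
    keys = list(prototypes)
    dists = [np.abs(frame - prototypes[k]).sum() for k in keys]
    return keys[int(np.argmin(dists))]
```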

9.
Speaker adaptation is recognized as an essential part of today's large-vocabulary automatic speech recognition systems. A family of techniques that has been extensively applied for limited adaptation data is transformation-based adaptation. In transformation-based adaptation we partition our parameter space into a set of classes, estimate a transform (usually linear) for each class and apply the same transform to all the components of the class. It is known, however, that additional gains can be made if we do not constrain the components of each class to use the same transform. In this paper two speaker adaptation algorithms are described. First, instead of estimating one linear transform for each class (as maximum likelihood linear regression (MLLR) does, for example) we estimate multiple linear transforms per class of models and a transform weights vector which is specific to each component (Gaussians in our case). This in effect means that each component receives its own transform without having to estimate each one of them independently. This scheme, termed maximum likelihood stochastic transformation (MLST), achieves a good trade-off between robustness and acoustic resolution. MLST is evaluated on the Wall Street Journal (WSJ) corpus for non-native speakers and it is shown that in the case of 40 adaptation sentences the algorithm outperforms MLLR by more than 13%. In the second half of this paper, we introduce a variant of MLST designed to operate under sparsity of data. Since the majority of the adaptation parameters are the transformations, we estimate them on the training speakers and adapt to a new speaker by estimating the transform weights only. First we cluster the speakers into a number of sets and estimate the transformations on each cluster. The new speaker will use transformations from all clusters to perform adaptation. This method, termed basis transformation, can be seen as a speaker similarity scheme. Experimental results on the WSJ show that when basis transformation is cascaded with MLLR, marginal gains can be obtained over MLLR alone for adaptation of native speakers.
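The component-specific weighting of several class transforms that defines MLST can be illustrated in a few lines; the sketch below adapts one Gaussian mean as a convex combination of the class's transforms, with the weight estimation itself assumed to have been done separately.

```python
import numpy as np

def mlst_adapt_mean(mu, transforms, weights):
    """Adapt one Gaussian mean as a weighted mixture of class transforms (MLST idea).

    mu:         (d,) speaker-independent mean of this Gaussian
    transforms: list of K (A, b) pairs shared by the regression class
    weights:    (K,) component-specific transform weights (summing to 1)
    """
    candidates = np.stack([A @ mu + b for A, b in transforms])   # (K, d)
    return weights @ candidates                                  # convex combination
```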

10.
Adaptation techniques have attracted growing attention in recent years; widely used methods include MAP and MLLR, which can adjust the codebook with only a small amount of speaker-specific data and quickly improve recognition performance, but they require the original codebook to be highly speaker independent. This paper presents a speaker adaptive training (SAT) algorithm combined with MLLR adaptation. The method treats each speaker's codebook as a linear transformation of a speaker-independent codebook; the speaker-independent codebook trained on this basis removes speaker-dependent information more effectively, so during speaker adaptation it can be adjusted with speaker-specific data to approximate the speaker's characteristics more closely and achieve better performance.

11.
A speaker adaptation method for stochastic segment model (SSM) systems is proposed. Taking the characteristics of the stochastic segment model into account, maximum likelihood linear regression is introduced into the SSM system. Mandarin continuous speech recognition experiments on the "863 test" set show that, at different decoding speeds, the character error rate drops clearly after speaker adaptation. The results indicate that MLLR is also effective in stochastic segment model systems.

12.
Research on a combined incremental adaptation method based on MAP and MLLR
From the perspective of speaker adaptation, this paper discusses two classical speaker adaptation methods, maximum a posteriori (MAP) estimation and maximum likelihood linear regression (MLLR), and, by introducing a simplified MLLR module into an incremental MAP scheme, proposes a fast combined incremental adaptation method and strategy suited to robust speech recognition.

13.
A vocal-effort-robust speech recognition algorithm based on articulatory features
晁浩  宋成  彭维平 《计算机应用》2015,35(1):257-261
To address vocal effort (VE) related robustness in speech recognition, a recognition algorithm based on a multi-model framework is proposed. First, the acoustic properties of speech under different vocal-effort modes and the impact of vocal-effort changes on recognition accuracy are analyzed; next, a Gaussian mixture model (GMM) based vocal-effort mode detector is proposed; finally, according to the detection result, a dedicated acoustic model is trained for whispered speech recognition, while articulatory features are combined with conventional spectral features for the other four vocal-effort modes. Isolated-word recognition experiments show clear accuracy gains: averaged over the five vocal-effort modes, the proposed method reduces the character error rate by 26.69% relative to the baseline system, by 14.51% relative to training the acoustic model on pooled multi-mode data, and by 15.30% relative to maximum likelihood linear regression (MLLR) adaptation. The results indicate that articulatory features are more robust to vocal-effort changes than conventional spectral features, and that the multi-model framework is an effective way to handle vocal-effort-related robustness in speech recognition.
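The GMM-based vocal-effort detection stage described above can be sketched as scoring an utterance against one GMM per mode and dispatching to the matching acoustic model; the diagonal-covariance scoring below and the mode names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def gmm_loglik(frames, weights, means, variances):
    """Average per-frame log-likelihood of a diagonal-covariance GMM."""
    # frames: (T, d); weights: (M,); means/variances: (M, d)
    diff = frames[:, None, :] - means[None, :, :]                    # (T, M, d)
    log_comp = -0.5 * ((diff ** 2 / variances).sum(-1)
                       + np.log(2 * np.pi * variances).sum(-1))      # (T, M)
    log_comp += np.log(weights)
    m = log_comp.max(axis=1, keepdims=True)
    return float((m.squeeze(1) + np.log(np.exp(log_comp - m).sum(1))).mean())

def detect_vocal_effort(frames, mode_gmms):
    """Pick the vocal-effort mode (e.g. whisper, soft, normal, loud, shout) whose
    GMM scores the utterance best; a mode-specific recognizer is then dispatched."""
    scores = {mode: gmm_loglik(frames, *params) for mode, params in mode_gmms.items()}
    return max(scores, key=scores.get)
```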

14.
The purpose of this paper is the application of genetic algorithms (GAs) to the supervised classification level, in order to recognize Standard Arabic (SA) fricative consonants of continuous, naturally spoken speech. We have used GAs because of their advantages in resolving complicated optimization problems where analytic methods fail. For that, we have analyzed a corpus that contains several sentences composed of the thirteen types of fricative consonants in the initial, medial and final positions, recorded by several male Jordanian speakers. Nearly all the world's languages contain at least one fricative sound. The SA language occupies a rather exceptional position in that nearly half of its consonants are fricatives and nearly half of its fricative inventory is situated far back in the uvular, pharyngeal and glottal areas. We have used the Mel-frequency cepstral analysis method to extract vocal tract coefficients from the speech signal. Among a set of classifiers like the Bayesian, likelihood and distance classifiers, we have used the distance one, which is based on a classification measure criterion. So, we formulate the supervised classification as a function optimization problem and use the Mahalanobis-distance decision rule as the fitness function for the GA evaluation. We report promising results with a classification recognition accuracy of 82%.

15.
We consider the problem of acoustic modeling of noisy speech data, where the uncertainty over the data is given by a Gaussian distribution. While this uncertainty has been exploited at the decoding stage via uncertainty decoding, its usage at the training stage remains limited to static model adaptation. We introduce a new expectation maximization (EM) based technique, which we call uncertainty training, that allows us to train Gaussian mixture models (GMMs) or hidden Markov models (HMMs) directly from noisy data with dynamic uncertainty. We evaluate the potential of this technique for a GMM-based speaker recognition task on speech data corrupted by real-world domestic background noise, using a state-of-the-art signal enhancement technique and various uncertainty estimation techniques as a front-end. Compared to conventional training, the proposed training algorithm results in a 3–4% absolute improvement in speaker recognition accuracy by training from either matched, unmatched or multi-condition noisy data. This algorithm is also applicable with minor modifications to maximum a posteriori (MAP) or maximum likelihood linear regression (MLLR) acoustic model adaptation from noisy data, and to data other than audio.
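The key change uncertainty training makes to EM is that component posteriors are computed under the model covariance inflated by the per-frame observation uncertainty. A sketch of that E-step for a diagonal-covariance GMM follows; the corresponding M-step (which also uses the uncertainty) is omitted, and the interface is illustrative.

```python
import numpy as np

def uncertainty_e_step(obs, obs_var, weights, means, variances):
    """One E-step of uncertainty training for a diagonal-covariance GMM.

    obs:     (T, d) enhanced feature means
    obs_var: (T, d) per-frame uncertainty (variance) from the enhancement front-end
    Posteriors are evaluated under N(x; mu_m, Sigma_m + Sigma_t), i.e. the model
    covariance is inflated by the observation uncertainty.
    """
    total_var = variances[None, :, :] + obs_var[:, None, :]          # (T, M, d)
    diff = obs[:, None, :] - means[None, :, :]
    log_p = -0.5 * ((diff ** 2 / total_var) + np.log(2 * np.pi * total_var)).sum(-1)
    log_p += np.log(weights)
    log_p -= log_p.max(axis=1, keepdims=True)
    post = np.exp(log_p)
    return post / post.sum(axis=1, keepdims=True)                    # (T, M) responsibilities
```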

16.
We present a new discriminative linear regression adaptation algorithm for hidden Markov model (HMM) based speech recognition. The cluster-dependent regression matrices are estimated from speaker-specific adaptation data by maximizing the aggregate a posteriori probability, which can be expressed in the form of a classification error function adopting the logarithm of the posterior distribution as the discriminant function. Accordingly, aggregate a posteriori linear regression (AAPLR) is developed for discriminative adaptation where the classification errors on the adaptation data are minimized. Because the prior distribution of the regression matrix is involved, AAPLR is endowed with Bayesian learning capability. We demonstrate that the difference between AAPLR discriminative adaptation and maximum a posteriori linear regression (MAPLR) adaptation is due to the treatment of the evidence. Different from minimum classification error linear regression (MCELR), AAPLR has a closed-form solution, enabling rapid adaptation. Experimental results reveal that AAPLR speaker adaptation does improve speech recognition performance with moderate computational cost compared to maximum likelihood linear regression (MLLR), MAPLR, MCELR and conditional maximum likelihood linear regression (CMLLR). These results are verified for supervised as well as unsupervised adaptation with different amounts of adaptation data.

17.
In recent years, the use of morphological decomposition strategies for Arabic Automatic Speech Recognition (ASR) has become increasingly popular. Systems trained on morphologically decomposed data are often used in combination with standard word-based approaches, and they have been found to yield consistent performance improvements. The present article contributes to this ongoing research endeavour by exploring the use of the ‘Morphological Analysis and Disambiguation for Arabic’ (MADA) tools for this purpose. System integration issues concerning language modelling and dictionary construction, as well as the estimation of pronunciation probabilities, are discussed. In particular, a novel solution for morpheme-to-word conversion is presented which makes use of an N-gram Statistical Machine Translation (SMT) approach. System performance is investigated within a multi-pass adaptation/combination framework. All the systems described in this paper are evaluated on an Arabic large vocabulary speech recognition task which includes both Broadcast News and Broadcast Conversation test data. It is shown that the use of MADA-based systems, in combination with word-based systems, can reduce the word error rate by up to 8.1% relative.

18.
This paper presents a fuzzy control mechanism for conventional maximum likelihood linear regression (MLLR) speaker adaptation, called FLC-MLLR, by which the effect of MLLR adaptation is regulated according to the availability of adaptation data in such a way that the advantage of MLLR adaptation could be fully exploited when the training data are sufficient, or the consequence of poor MLLR adaptation would be restrained otherwise. The robustness of MLLR adaptation against data scarcity is thus ensured. The proposed mechanism is conceptually simple and computationally inexpensive and effective; the experiments in recognition rate show that FLC-MLLR outperforms standard MLLR especially when encountering data insufficiency and performs better than MAPLR at much less computing cost.
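The regulating idea can be caricatured as a smooth confidence weight that grows with the amount of adaptation data and blends the MLLR-adapted mean with the original SI mean; the sigmoid membership below is a stand-in for the paper's fuzzy logic controller, with made-up parameters.

```python
import numpy as np

def regulated_mllr_mean(mu, A, b, n_frames, half_point=500.0, slope=0.01):
    """Blend the MLLR-adapted mean back toward the SI mean when data are scarce.

    The fuzzy-style confidence rises smoothly with the amount of adaptation data;
    half_point and slope are illustrative membership parameters, not the paper's.
    """
    confidence = 1.0 / (1.0 + np.exp(-slope * (n_frames - half_point)))  # in (0, 1)
    return (1.0 - confidence) * mu + confidence * (A @ mu + b)
```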

19.
We introduce a strategy for modeling speaker variability in speaker adaptation based on maximum likelihood linear regression (MLLR). The approach uses a speaker-clustering procedure that models speaker variability by partitioning a large corpus of speakers in the eigenspace of their MLLR transformations and learning cluster-specific regression class tree structures. We present experiments showing that choosing the appropriate regression class tree structure for speakers leads to a significant reduction in overall word error rates in automatic speech recognition systems. To realize these gains in unsupervised adaptation, we describe an algorithm that produces a linear combination of MLLR transformations from cluster-specific trees using weights estimated by maximizing the likelihood of a speaker’s adaptation data. This algorithm produces small improvements in overall recognition performance across a range of tasks for both English and Mandarin. More significantly, distributional analysis shows that it reduces the number of speakers with performance loss due to adaptation across a range of adaptation data sizes and word error rates.

20.
One of the key issues for adaptation algorithms is to modify a large number of parameters with only a small amount of adaptation data. Speaker adaptation techniques try to obtain near speaker-dependent (SD) performance with only small amounts of speaker-specific data, and are often based on initial speaker-independent (SI) recognition systems. Some of these speaker adaptation techniques may also be applied to the task of adaptation to a new acoustic environment. In this case an SI recognition system trained in, typically, a clean acoustic environment is adapted to operate in a new, noise-corrupted, acoustic environment. This paper examines the maximum likelihood linear regression (MLLR) adaptation technique. MLLR estimates linear transformations for groups of model parameters to maximize the likelihood of the adaptation data. Previously, MLLR has been applied to the mean parameters in mixture-Gaussian HMM systems. In this paper MLLR is extended to also update the Gaussian variances, and re-estimation formulae are derived for these variance transforms. MLLR with variance compensation is evaluated on several large vocabulary recognition tasks. The use of mean and variance MLLR adaptation was found to give an additional 2% to 7% decrease in word error rate over mean-only MLLR adaptation.
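For the variance update discussed above, one common parameterization writes the adapted covariance through the Cholesky factor of the original covariance and a shared transform H estimated on the adaptation data; the sketch below only applies such a transform and does not derive the re-estimation formulae.

```python
import numpy as np

def apply_variance_transform(cov, H):
    """Apply a shared MLLR-style variance transform to one Gaussian covariance.

    Writes the adapted covariance as L @ H @ L.T, where L is the Cholesky factor
    of the original covariance and H is a shared positive-definite transform
    estimated on the adaptation data; the result stays symmetric positive definite.
    """
    L = np.linalg.cholesky(cov)
    return L @ H @ L.T
```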
