Similar Literature
19 similar documents were found.
1.
A speaker normalization training method based on statistical modeling is proposed. It combines state-dependent direct mean-shift normalization training with MAP/WNR model adaptation in a unified robustness framework, providing a better initial model for the adaptation step and striking a good balance between faster adaptation and sufficient model smoothness. Experiments show that the method effectively improves the robustness of speech recognition in supervised mode.
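For concreteness, below is a minimal sketch of the MAP mean update that adaptation schemes of this kind build on, assuming diagonal-covariance Gaussians and precomputed occupancy statistics; the function and variable names are illustrative, not the paper's.

    import numpy as np

    def map_update_means(prior_means, gamma, gamma_x, tau=16.0):
        # MAP re-estimation of Gaussian means: interpolate between the ML
        # estimate from the adaptation data and the prior mean, weighted by
        # the per-Gaussian occupancy gamma against the relevance factor tau.
        #   prior_means: (M, D); gamma: (M,); gamma_x: (M, D) first-order stats
        alpha = (gamma / (gamma + tau))[:, None]
        ml_means = gamma_x / np.maximum(gamma, 1e-8)[:, None]
        return alpha * ml_means + (1.0 - alpha) * prior_means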

2.
马瑞堂, 李成荣. 《计算机应用》, 2007, 27(B06): 130-132.
This paper reviews the characteristics of children's speech and available children's speech corpora, compares the performance of models trained separately on male, female, and mixed speech, and applies vocal tract length normalization (VTLN) as a speaker-adaptation technique to children's speech recognition. Building on conventional VTLN, a dynamic ratio-threshold adjustment scheme is proposed that further improves the recognition rate.
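VTLN-style adaptation is commonly realized as a maximum-likelihood grid search over warping factors; the sketch below assumes a caller-supplied score_fn that rescores an utterance under a given warp (the paper's ratio-threshold adjustment would live inside that warping step). All names are illustrative.

    import numpy as np

    def pick_warp_factor(score_fn, utterance, alphas=np.arange(0.88, 1.13, 0.02)):
        # Maximum-likelihood grid search over candidate warp factors.
        # score_fn(utterance, alpha) must return the model log-likelihood
        # of the utterance with its filterbank warped by alpha.
        scores = [score_fn(utterance, alpha) for alpha in alphas]
        return alphas[int(np.argmax(scores))]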

3.
A speaker normalization method based on piecewise-linear spectral warping functions
Conventional vocal tract length normalization, which rests on the lossless concatenated-tube model of the vocal tract, determines the spectral warping function from a single vocal-tract factor and therefore cannot capture the finer spectral differences between speakers. To address this shortcoming, a detailed piecewise-linear spectral warping function is proposed to describe speaker differences; with an appropriate segmentation of the spectrum, it aligns spectra well. Moreover, because the warping function is model-independent, the method proves to be a fast speaker-robustness technique especially suited to unsupervised operation.
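A piecewise-linear warping function is naturally expressed as interpolation between matched breakpoints; a minimal sketch, with breakpoint values made up for illustration:

    import numpy as np

    def piecewise_linear_warp(freqs, knots_in, knots_out):
        # Warp frequencies with a piecewise-linear function defined by
        # matched breakpoints; with one interior knot this reduces to the
        # classical two-segment VTLN warp.
        return np.interp(freqs, knots_in, knots_out)

    # Example: compress the band below 4 kHz, keep DC and Nyquist fixed.
    f = np.linspace(0.0, 8000.0, 5)
    print(piecewise_linear_warp(f, [0.0, 4000.0, 8000.0], [0.0, 3600.0, 8000.0]))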

4.
A speaker verification system based on Gaussian mixture models
杨澄宇, 赵文, 杨鉴. 《计算机应用》, 2001, 21(4): 7-8, 11.
Because the low and upper frequency bands of the speech spectrum carry most of a speaker's individual information, this paper proposes an improved LPC cepstrum algorithm for text-independent speaker recognition. The algorithm weights the individual bands of the speech spectrum to emphasize speaker-specific information, making speakers easier to distinguish.
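As an illustration of band weighting before cepstrum computation, the sketch below boosts the low and upper bands of a one-sided power spectrum; the band edges and weight are placeholder values, not the paper's.

    import numpy as np

    def band_weighted_spectrum(power_spec, sample_rate,
                               low_hz=1000.0, high_hz=3000.0, weight=2.0):
        # Boost the bands below low_hz and above high_hz, where speaker
        # individuality is assumed to concentrate, before computing cepstra.
        freqs = np.linspace(0.0, sample_rate / 2.0, len(power_spec))
        gains = np.where((freqs < low_hz) | (freqs > high_hz), weight, 1.0)
        return power_spec * gains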

5.
GMM-based Chinese speaker recognition in noisy environments
Improving speaker recognition accuracy in noisy environments is a central problem in speaker recognition research. To obtain satisfactory performance in both quiet and noisy conditions, a system is studied that cascades a speech enhancer with a speaker recognizer. In this system, a Wiener-filter enhancement algorithm strengthens the noise robustness of the front-end preprocessing and raises the signal-to-noise ratio of the input signal. Test results show that the system is highly noise-robust.
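A minimal sketch of the Wiener gain such a front end applies per frequency bin, assuming noisy- and noise-power estimates are available (e.g., from leading non-speech frames); the SNR estimator here is the simple ML one, not necessarily the paper's.

    import numpy as np

    def wiener_gain(noisy_power, noise_power, snr_floor=1e-3):
        # Per-bin Wiener gain G = SNR / (1 + SNR), with the simple ML
        # a priori SNR estimate max(|Y|^2 / |N|^2 - 1, floor).
        snr = np.maximum(noisy_power / np.maximum(noise_power, 1e-12) - 1.0,
                         snr_floor)
        return snr / (1.0 + snr)

    # Applied per STFT frame: S_hat = wiener_gain(P_noisy, P_noise) * Y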

6.
Using the Windows platform and MATLAB, a GMM-based text-independent speaker recognition system is designed, comprising four parts: endpoint detection, feature extraction, parameter training, and speaker recognition. The system is stable and easy to use, and is suitable for spectral analysis and real-time processing of speech signals.
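The same pipeline is easy to prototype in Python (the paper itself uses MATLAB); a sketch of the training and identification steps with scikit-learn GMMs, omitting endpoint detection and feature extraction:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def train_speaker_models(features_per_speaker, n_components=16):
        # Fit one diagonal-covariance GMM per enrolled speaker;
        # features_per_speaker maps a name to a (T, D) feature array (e.g. MFCCs).
        return {name: GaussianMixture(n_components, covariance_type="diag").fit(feats)
                for name, feats in features_per_speaker.items()}

    def identify(models, test_feats):
        # Text-independent decision rule: pick the speaker whose GMM gives
        # the test frames the highest average log-likelihood.
        return max(models, key=lambda name: models[name].score(test_feats))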

7.
Improving speaker recognition accuracy in noisy environments is a central problem in speaker recognition research. To obtain satisfactory performance in both quiet and noisy conditions, a system is studied that cascades a speech enhancer with a speaker recognizer. In this system, a Wiener-filter enhancement algorithm strengthens the noise robustness of the front-end preprocessing and raises the signal-to-noise ratio of the input signal. Test results show that the system is highly noise-robust.

8.
A study of i-vector-based speaker normalization in speech recognition
The i-vector is an important feature reflecting acoustic differences between speakers and has proven effective in current speaker recognition and verification. Here, i-vectors are applied to speaker-level acoustic feature normalization for speech recognition: i-vectors are extracted from the training data and clustered without supervision using the LBG algorithm; a maximum-likelihood linear transform is then trained for each cluster, and speaker-adaptive training is used to achieve speaker normalization. The transformed features are used for both training and recognition, and experiments show that the method improves speech recognition performance.
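A compact sketch of LBG codebook training, the split-then-refine procedure used here for unsupervised i-vector clustering; the split perturbation and iteration counts are illustrative defaults.

    import numpy as np

    def lbg(vectors, n_codewords=8, eps=0.01, n_iter=20):
        # LBG codebook training: start from the global centroid, repeatedly
        # split every codeword by a +/- eps perturbation, then refine the
        # doubled codebook with k-means-style iterations.
        codebook = vectors.mean(axis=0, keepdims=True)
        while codebook.shape[0] < n_codewords:
            codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
            for _ in range(n_iter):
                dists = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
                labels = dists.argmin(axis=1)
                for k in range(codebook.shape[0]):
                    if np.any(labels == k):
                        codebook[k] = vectors[labels == k].mean(axis=0)
        return codebook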

9.
On model-distance normalization in text-independent speaker verification
董远, 陆亮, 赵贤宇, 赵建. 《自动化学报》, 2009, 35(5): 556-560.
In automatic speaker verification, model-distance normalization is one of the most useful score normalization techniques. Compared with other mainstream score normalization techniques, its main advantage is that it requires no extra speech data or speaker cohort. It still has drawbacks of its own, however: in conventional model-distance normalization, the KL divergence between models is computed by the Monte Carlo method, whose time complexity is very high. This paper examines the principle of model-distance normalization from a new angle and proposes a simplified method that measures the distance between two speaker models by an upper bound on the KL divergence. On the 2006 NIST speaker evaluation dataset, the proposed simplified method achieves results close to those of the conventional approach while greatly reducing the time complexity.
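For two GMMs with components aligned one-to-one (as when both are MAP-adapted from a common background model), a standard convexity argument yields a cheap upper bound on the KL divergence; a sketch assuming diagonal covariances (the paper's exact bound may differ in detail):

    import numpy as np

    def gmm_kl_upper_bound(weights, means_f, means_g, vars_f, vars_g):
        # Convexity-based upper bound on KL(f || g) for component-aligned
        # GMMs with shared weights:
        #   KL(f || g) <= sum_m w_m * KL(N_m^f || N_m^g)
        # All Gaussian parameters are (M, D) arrays; weights is (M,).
        per_dim = 0.5 * (np.log(vars_g / vars_f)
                         + (vars_f + (means_f - means_g) ** 2) / vars_g
                         - 1.0)
        return float(np.sum(weights * per_dim.sum(axis=1)))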

10.
This paper proposes a confidence measure computed from word-lattice information to estimate the reliability of recognition results used for adaptation. Unreliable utterances are removed from the adaptation training set, narrowing the performance gap between unsupervised and supervised adaptation and improving the performance of unsupervised adaptation.
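A sketch of the selection step, assuming per-word lattice posteriors have already been computed by the recognizer; the data layout and threshold are hypothetical.

    def select_reliable(adaptation_hyps, threshold=0.9):
        # Keep only utterances whose lattice-derived word posteriors all
        # clear the threshold, for use as unsupervised-adaptation data.
        # adaptation_hyps: iterable of (features, transcript, word_posteriors).
        kept = []
        for feats, transcript, word_posteriors in adaptation_hyps:
            if word_posteriors and min(word_posteriors) >= threshold:
                kept.append((feats, transcript))
        return kept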

11.
12.
The input variability caused by speaker changes is one of the most important factors affecting the performance of speech recognition systems. One solution is adaptation or normalization of the input, so that the parameters of the input representation are mapped to those of a single speaker and the input pattern is normalized against speaker changes before recognition. This paper proposes three such methods, each compensating some of the speaker-change effects that influence the recognition process. In all three, a feed-forward neural network is first trained to map the input to codes representing the phonetic classes and the speakers. Among the 71 speakers used in training, the one showing the highest phone recognition accuracy is then selected as the reference speaker, and the representation parameters of the other speakers are converted toward the corresponding speech of that reference. In the first method, error back-propagation finds the optimal point of every decision region associated with each phone of each speaker in the input space; the distances between these points and the corresponding points of the reference speaker are used to offset the speaker-change effects and adapt the input signal to the reference speaker. In the second method, again using error back-propagation with the reference speaker's data as the desired output, all speech frames in both the training and test sets are corrected to coincide with the corresponding speech of the reference speaker. In the third method, a second feed-forward network is applied inversely, mapping the phonetic classes and speaker information back to the input representation: the phonetic output of the direct network, together with the reference speaker's data, is fed to the inverse network, which yields an estimate of the input representation adapted to the reference speaker. In all three methods, the final recognition model is trained on the adapted training data and tested on the adapted test data. Implementing these methods and combining the final network's results with the unadapted network according to the highest confidence level yields increases of 2.1%, 2.6%, and 3% in phone recognition accuracy on clean speech for the three methods, respectively.
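At its core, the first method amounts to shifting each frame by a per-phone offset toward the reference speaker; a minimal sketch with precomputed centroids standing in for the optimal decision-region points (all names are illustrative, not the paper's):

    import numpy as np

    def phone_offset_adapt(frames, phone_ids, speaker_centroids, reference_centroids):
        # Shift every frame by the offset between the reference speaker's
        # centroid and the current speaker's centroid for that frame's phone.
        #   frames: (T, D); phone_ids: (T,) int; centroids: (n_phones, D)
        offsets = reference_centroids - speaker_centroids
        return frames + offsets[phone_ids]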

13.
In spite of recent advances in automatic speech recognition, the performance of state-of-the-art speech recognisers fluctuates depending on the speaker. Speaker normalisation aims at the reduction of differences between the acoustic space of a new speaker and the training acoustic space of a given speech recogniser, improving performance. Normalisation is based on an acoustic feature transformation, to be estimated from a small amount of speech signal. This paper introduces a mixture of recurrent neural networks as an effective regression technique to approach the problem. A suitable Viterbi-based time alignment procedure is proposed for generating the adaptation set. The mixture is compared with linear regression and single-model connectionist approaches. Speaker-dependent and speaker-independent continuous speech recognition experiments with a large vocabulary, using Hidden Markov Models, are presented. Results show that the mixture improves recognition performance, yielding a 21% relative reduction of the word error rate, i.e. comparable with that obtained with model-adaptation approaches.

14.
The bilinear transformation (BT) is used for vocal tract length normalization (VTLN) in speech recognition systems. We prove two properties of the bilinear mapping that motivated the band-diagonal transform proposed in M. Afify and O. Siohan, "Constrained maximum likelihood linear regression for speaker adaptation," in Proc. ICSLP, Beijing, China, Oct. 2000. This is in contrast to what is stated in M. Pitz and H. Ney, "Vocal tract length normalization equals linear transformation in cepstral space," IEEE Transactions on Speech and Audio Processing, vol. 13, no. 5, pp. 930-944, September 2005, namely that the transform of Afify and Siohan was motivated by empirical observations.
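For reference, the first-order all-pass (bilinear) mapping underlying BT-based VTLN, and the frequency warping it induces on the unit circle, can be written as:

    \hat{z}^{-1} = \frac{z^{-1} - \alpha}{1 - \alpha z^{-1}}, \qquad
    \tilde{\omega} = \omega + 2\arctan\!\left(\frac{\alpha \sin\omega}{1 - \alpha \cos\omega}\right), \qquad |\alpha| < 1,

where the sign of the warping parameter alpha controls the direction of the warp.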

15.
Experimental analysis of the probability distribution of feature parameters shows that under noise they typically become bimodal. Based on this observation, this paper proposes a new feature normalization method built on a two-component Gaussian mixture model (GMM) to improve the robustness of speech recognition systems. The method uses the finer double-Gaussian model to represent the cumulative distribution function (CDF) of the features and, from the estimated CDF, transforms the features so that their distributions in both training and recognition are regularized to a standard Gaussian, raising recognition accuracy. Experiments on the Aurora 2 and Aurora 3 databases show that the proposed method clearly outperforms conventional cepstral mean normalization (CMN) and cepstral mean and variance normalization (CMVN), and performs on par with the non-parametric histogram-equalization feature normalization method.
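A sketch of the CDF-matching idea with a two-component GMM per feature dimension, using scikit-learn and SciPy; this illustrates the approach, not the paper's exact estimator.

    import numpy as np
    from scipy.stats import norm
    from sklearn.mixture import GaussianMixture

    def double_gaussian_normalize(x):
        # Map a 1-D feature stream to a standard normal by CDF matching:
        # fit a 2-component GMM, evaluate its CDF at every sample, then
        # apply the inverse standard-normal CDF.
        gmm = GaussianMixture(n_components=2).fit(x.reshape(-1, 1))
        w = gmm.weights_
        mu = gmm.means_.ravel()
        sd = np.sqrt(gmm.covariances_.ravel())
        cdf = w[0] * norm.cdf(x, mu[0], sd[0]) + w[1] * norm.cdf(x, mu[1], sd[1])
        return norm.ppf(np.clip(cdf, 1e-6, 1.0 - 1e-6))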

16.
Vocal tract length normalization (VTLN) for standard filterbank-based Mel frequency cepstral coefficient (MFCC) features is usually implemented by warping the center frequencies of the Mel filterbank, and the warping factor is estimated using the maximum likelihood score (MLS) criterion. A linear transform (LT) equivalent for frequency warping (FW) would enable more efficient MLS estimation. We recently proposed a novel LT to perform FW for VTLN and model adaptation with standard MFCC features. In this paper, we present the mathematical derivation of the LT and give a compact formula to calculate it for any FW function. We also show that our LT is closely related to different LTs previously proposed for FW with cepstral features, and these LTs for FW are all shown to be numerically almost identical for the sine-log all-pass transform (SLAPT) warping functions. Our formula for the transformation matrix is, however, computationally simpler and, unlike other previous LT approaches to VTLN with MFCC features, no modification of the standard MFCC feature extraction scheme is required. In VTLN and speaker adaptive modeling (SAM) experiments with the DARPA resource management (RM1) database, the performance of the new LT was comparable to that of regular VTLN implemented by warping the Mel filterbank, when the MLS criterion was used for FW estimation. This demonstrates that the approximations involved do not lead to any performance degradation. Performance comparable to front end VTLN was also obtained with LT adaptation of HMM means in the back end, combined with mean bias and variance adaptation according to the maximum likelihood linear regression (MLLR) framework. The FW methods performed significantly better than standard MLLR for very limited adaptation data (1 utterance), and were equally effective with unsupervised parameter estimation. We also performed speaker adaptive training (SAT) with feature space LT denoted CLTFW. Global CLTFW SAT gave results comparable to SAM and VTLN. By estimating multiple CLTFW transforms using a regression tree, and including an additive bias, we obtained significantly improved results compared to VTLN, with increasing adaptation data.
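Absent the paper's closed-form matrix, a linear transform between unwarped and warped cepstra can also be estimated numerically from paired frames; the ridge-regression sketch below is a stand-in for, not a reproduction of, the derived LT.

    import numpy as np

    def estimate_cepstral_lt(cep, cep_warped, lam=1e-3):
        # Ridge regression of a matrix A with cep_warped ~= cep @ A.T over
        # paired frames, i.e. a per-frame linear transform c' = A c.
        #   cep, cep_warped: (T, D) cepstral feature matrices
        d = cep.shape[1]
        gram = cep.T @ cep + lam * np.eye(d)
        return np.linalg.solve(gram, cep.T @ cep_warped).T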

17.
刘鹏, 王怀杰. 《数字社区&智能家居》, 2007, (12): 1399-1400, 1404.
Speech recognition in noisy environments has long been a difficult problem. This paper applies spectral subtraction for denoising in an isolated-word (digits 0-9) recognition task, improving the system's recognition rate.
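A minimal sketch of magnitude-domain spectral subtraction with a spectral floor, applied per STFT frame; the over-subtraction factor and floor are illustrative defaults, not the paper's settings.

    import numpy as np

    def spectral_subtraction(noisy_mag, noise_mag,
                             over_subtraction=1.0, spectral_floor=0.02):
        # |S| = max(|Y| - a*|N|, b*|Y|): subtract the noise magnitude
        # estimate and clamp to a fraction of the noisy magnitude to
        # limit musical-noise artifacts.
        return np.maximum(noisy_mag - over_subtraction * noise_mag,
                          spectral_floor * noisy_mag)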

18.
Speech recognition in noisy environments has long been a difficult problem. This paper applies spectral subtraction for denoising in an isolated-word (digits 0-9) recognition task, improving the system's recognition rate.

19.
孙林慧, 叶蕾, 杨震. 《计算机仿真》, 2005, 22(5): 231-234.
Test-utterance duration is one of the main factors affecting speaker recognition. This paper studies the relationship between test duration and speaker identification rate in distributed speech recognition. Using text-independent training templates, a baseline speaker identification system is first tested with clean and noisy speech; the results show that the identification rate rises with test duration, and an optimal test duration for noisy speech is obtained under laboratory conditions. To reduce this optimal duration, an improved speaker identification system first classifies the speaker's gender and then identifies the speaker, which not only shortens the required test duration but also improves the system's noise robustness. The simulation results are then analyzed.

