Similar Documents
20 similar documents found (search time: 233 ms)
1.
Existing GMM-based speaker clustering methods mostly derive each cluster's Gaussian mixture model from a universal background model by maximum a posteriori (MAP) adaptation; with limited adaptation data, however, the resulting models are insufficiently accurate. To address this, this paper explores two factor-analysis modeling approaches based on eigenvoice (EV) space and total variability (TV) space analysis, which model the variability subspace and thereby reduce the number of parameters that must be estimated for each cluster's GMM. On the telephone speech data of the NIST 2008 speaker recognition evaluation, both the EV- and TV-based approaches substantially reduce the clustering error rate relative to a MAP-based baseline, and TV-space modeling achieves a lower clustering error rate than EV-space modeling.

2.
邬龙, 黎塔, 王丽, 颜永红. 《软件学报》 (Journal of Software), 2019, 30(S2): 25-34
To further exploit near-field (close-talk) speech data for improving far-field speech recognition, a far-field recognition algorithm combining knowledge distillation with a generative adversarial network (GAN) is proposed. A multi-task learning framework enhances the far-field speech features while performing acoustic modeling. To strengthen acoustic modeling, the near-field acoustic model (teacher) guides the training of the far-field acoustic model (student): minimizing the relative entropy (KL divergence) drives the student's posterior distribution toward the teacher's. To improve feature enhancement, a discriminator network is added for adversarial training, so that the distribution of the enhanced features moves closer to that of near-field features. On the AMI corpus, in the single-channel condition the average word error rate (WER) drops relative to the baseline by 5.6% without speaker overlap and 4.7% with speaker overlap; in the multi-channel condition, by 6.2% and 4.1%, respectively. On TIMIT, the algorithm yields a 7.2% relative reduction in average WER. The enhanced features are also visualized to illustrate the GAN's contribution to speech enhancement, further confirming the method's effectiveness.
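The teacher-student training described in the abstract above (minimizing relative entropy so the student's posteriors approach the teacher's) can be sketched as follows. This is a toy illustration, not the authors' implementation; the function name and the example posteriors are assumptions.

```python
import numpy as np

def kl_distillation_loss(teacher_post, student_post, eps=1e-12):
    """Frame-level KL divergence D(teacher || student), averaged over frames.

    Minimizing this drives the student (far-field) model's posterior
    distribution toward the teacher (close-talk) model's, the core idea
    of teacher-student (knowledge distillation) acoustic-model training.
    """
    t = np.clip(teacher_post, eps, 1.0)
    s = np.clip(student_post, eps, 1.0)
    return float(np.mean(np.sum(t * (np.log(t) - np.log(s)), axis=-1)))

# Toy example: 2 frames, 3 output classes.
teacher = np.array([[0.7, 0.2, 0.1],
                    [0.1, 0.8, 0.1]])
student = np.array([[0.5, 0.3, 0.2],
                    [0.2, 0.6, 0.2]])

loss = kl_distillation_loss(teacher, student)      # > 0: distributions differ
perfect = kl_distillation_loss(teacher, teacher)   # 0: identical distributions
```

In a real system this loss would be combined with the cross-entropy and adversarial terms of the multi-task objective.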

3.
In continuous speech recognition, complex environments (variability of speakers and of ambient noise) cause a mismatch between training and test data that degrades recognition accuracy. A speech recognition algorithm based on an adaptive deep neural network is proposed. It combines an improved regularized adaptation criterion with feature-space adaptation of the DNN to improve data match; it fuses speaker identity vectors (i-vectors) with noise-aware training to counter the effects of speaker and noise variability; and it modifies the classification function of the conventional DNN output layer to enforce intra-class compactness and inter-class separation. Tests on the TIMIT English corpus and a Microsoft Chinese speech corpus with various added background noises show word error rate reductions of 5.151% and 3.113% relative to the popular GMM-HMM and conventional DNN acoustic models, respectively, improving the model's generalization and robustness to some extent.

4.
When the target speaker's speech at test time differs substantially from the speakers in the training data, a recognition system's accuracy drops. To address this, a speaker adaptation (SA) method based on a deep neural network (DNN) is proposed. It performs adaptation in the feature space: a speaker identity vector (i-vector) is added as auxiliary input to the DNN acoustic model to remove speaker-specific variation from the features, reducing the influence of speaker differences while preserving linguistic information. Experiments on the open TED-LIUM corpus show that with fbank and fMLLR features, the method improves the word error rate (WER) by 7.7% and 6.7% relative to the baseline DNN acoustic model.
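The auxiliary-input idea above — attaching a fixed per-speaker i-vector to every acoustic frame before the DNN — can be sketched as below. The dimensions (40-dim fbank, 100-dim i-vector) are typical but assumed, not taken from the paper.

```python
import numpy as np

def append_ivector(frames, ivector):
    """Tile a fixed per-speaker i-vector onto every acoustic frame.

    frames:  (T, D) fbank/fMLLR features for one utterance
    ivector: (K,)   speaker identity vector for that utterance's speaker
    returns: (T, D + K) network input carrying speaker information
    """
    T = frames.shape[0]
    tiled = np.tile(ivector, (T, 1))          # same i-vector on every frame
    return np.concatenate([frames, tiled], axis=1)

frames = np.random.randn(200, 40)   # 200 frames of 40-dim fbank (toy data)
ivec = np.random.randn(100)         # 100-dim i-vector (assumed size)
net_input = append_ivector(frames, ivec)
```

The DNN then learns to use the constant i-vector columns to normalize away speaker-dependent variation in the acoustic columns.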

5.
A speaker adaptation method for stochastic segment model systems is proposed. Given the characteristics of the stochastic segment model, maximum likelihood linear regression (MLLR) is introduced into the segment-model system. Mandarin continuous speech recognition experiments on the "863 test" set show that, at various decoding speeds, the character error rate drops markedly after speaker adaptation. The results indicate that MLLR is equally effective in stochastic segment model systems.
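MLLR, as used above, adapts the Gaussian means of a model with one shared affine transform, mu' = A mu + b. The sketch below estimates such a transform by least squares from (mean, adapted-mean) pairs; real MLLR maximizes likelihood over adaptation data, so this is a simplified analogue under assumed toy data, not the paper's implementation.

```python
import numpy as np

def estimate_mllr_transform(means, targets):
    """Estimate a shared affine transform (A, b) so that A @ mu + b
    approximates the adapted means, via least squares.

    means, targets: (N, D) stacked Gaussian means and their adapted versions.
    """
    N, D = means.shape
    X = np.hstack([means, np.ones((N, 1))])          # extended means [mu; 1]
    W, *_ = np.linalg.lstsq(X, targets, rcond=None)  # (D+1, D) solution
    A, b = W[:D].T, W[D]
    return A, b

# Toy data generated from a known transform, to check recovery.
rng = np.random.default_rng(0)
mu = rng.standard_normal((50, 4))
A_true = np.eye(4) * 1.1
b_true = np.array([0.5, -0.2, 0.0, 0.3])
adapted = mu @ A_true.T + b_true

A, b = estimate_mllr_transform(mu, adapted)
```

Applying `A @ mu + b` to every Gaussian mean of the segment model shifts the whole model toward the test speaker.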

6.
Conventional speaker recognition systems identify speakers from short-time spectral information, which performs well under certain conditions. However, since some speaker characteristics are hidden in longer speech segments, adding long-term information may further improve performance. In this paper, phone duration information is added to the conventional model to raise the speaker identification rate. Spectral information is obtained by short-time analysis, but extracting phone durations is a long-term analysis requiring more speech data. The effectiveness of phone duration information for speaker identification is studied on a large amount of speech data, and two methods are proposed to cope with the problems caused by small data amounts. Experimental results show that when the speaker's voice model is properly built, phone duration information improves the speaker identification rate even when little speech data is available.

7.
In i-vector (total variability factor) based speaker verification, speaker-discriminative information must be further extracted from each utterance's i-vector representation to improve system performance. Borrowing the idea of anchor models, this paper proposes a modeling method based on deep belief networks. The method analyzes and models, layer by layer, the complex variability contained in the i-vector, mining the speaker-related information through nonlinear transformations. On the NIST SRE 2008 core telephone-train/telephone-test condition, the equal error rates are 4.96% for male and 6.18% for female speech; fusing with a system based on linear discriminant analysis lowers them further to 4.74% and 5.35%.

8.
To improve the efficiency of speaker identification, a method based on speaker model clustering is proposed. Similar speaker models are clustered using an approximate KL distance, and a center and a representative are determined for each cluster, forming a hierarchical identification model. At test time, a cluster is first selected by computing the distance between the test vector and the cluster centers or representatives; the target speaker is then determined by the log-likelihood between the test vector and the speaker models inside the selected cluster, which greatly reduces computation. Experimental results show that, under identical conditions, identification based on speaker model clustering is four times faster than with conventional GMMs, while accuracy drops by only 0.95%. Compared with conventional GMMs, the method thus greatly accelerates identification while essentially preserving accuracy.
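The two-stage search above (first pick the nearest cluster, then score only that cluster's speaker models) is sketched below with single diagonal Gaussians standing in for full GMMs. The closed-form symmetric KL distance and the toy data are assumptions for illustration.

```python
import numpy as np

def sym_kl_gauss(m1, v1, m2, v2):
    """Closed-form symmetric KL divergence between two diagonal Gaussians,
    a common approximation for the distance between speaker models."""
    kl12 = 0.5 * np.sum(np.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1)
    kl21 = 0.5 * np.sum(np.log(v1 / v2) + (v2 + (m1 - m2) ** 2) / v1 - 1)
    return kl12 + kl21

def two_stage_search(test_m, test_v, centers, clusters):
    """Stage 1: pick the closest cluster center; stage 2: score only the
    speaker models inside that cluster -- the pruning behind the speed-up."""
    ci = min(range(len(centers)),
             key=lambda i: sym_kl_gauss(test_m, test_v, *centers[i]))
    return min(clusters[ci],
               key=lambda s: sym_kl_gauss(test_m, test_v, s[1], s[2]))[0]

# Toy single-Gaussian "speaker models": (name, mean, variance).
spk = {"A": (np.array([0.0, 0.0]), np.ones(2)),
       "B": (np.array([0.2, 0.1]), np.ones(2)),
       "C": (np.array([5.0, 5.0]), np.ones(2))}
centers = [(np.array([0.1, 0.05]), np.ones(2)),   # center of cluster {A, B}
           (np.array([5.0, 5.0]), np.ones(2))]    # center of cluster {C}
clusters = [[("A", *spk["A"]), ("B", *spk["B"])], [("C", *spk["C"])]]

best = two_stage_search(np.array([0.19, 0.12]), np.ones(2), centers, clusters)
```

With many speakers per cluster, only one cluster's members are ever scored, which is where the reported speed-up comes from.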

9.
Emotional variability is a difficult problem for speaker recognition. To address it, a neutral-to-emotional model conversion algorithm based on polynomial fitting is proposed. The algorithm establishes a functional relationship between neutral and emotional models, so that speaker models for various emotion types can be trained from the speaker's neutral speech alone. Experiments on a Mandarin emotional speech corpus show that the method lowers the recognition algorithm's equal error rate from 16.06% to 10.31%, improving system performance.
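The model-conversion idea above can be sketched with a per-parameter polynomial fit: learn, from development speakers who have both neutral and emotional models, a polynomial mapping neutral model parameters to emotional ones, then apply it to a new speaker's neutral model. The quadratic "hidden law" and the data here are assumptions for illustration.

```python
import numpy as np

# Development speakers for whom both neutral and emotional models exist.
rng = np.random.default_rng(1)
neutral_means = rng.uniform(-1, 1, size=200)
emotion_means = 0.3 * neutral_means ** 2 + 1.2 * neutral_means + 0.1  # toy law

# Fit a 2nd-order polynomial mapping neutral model parameters to
# emotional ones, as in the neutral-to-emotion conversion idea above.
coeffs = np.polyfit(neutral_means, emotion_means, deg=2)

# Convert a new speaker's neutral parameters without any emotional speech.
new_neutral = np.array([-0.5, 0.0, 0.5])
predicted_emotion = np.polyval(coeffs, new_neutral)
```

In practice one such mapping would be fitted per emotion type and per model parameter stream (e.g. GMM means).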

11.
In this paper, the problem of identifying in-set versus out-of-set speakers using extremely limited enrollment data is addressed. The recognition objective is to form a binary decision regarding an input speaker as being a legitimate member of a set of enrolled speakers or not. Here, the emphasis is on low enrollment (about 5 sec of speech for each enrolled speaker) and test data durations (2-8 sec), in a text-independent scenario. In order to overcome the limited enrollment, data from speakers that are acoustically close to a given in-set speaker are used to form an informative prior (base model) for speaker adaptation. Score normalization for in-set systems is addressed, and the difficulty of using conventional score normalization schemes for in-set speaker recognition is highlighted. Distribution scaling based score normalization techniques are developed specifically for the in-set/out-of-set problem and compared against existing score normalization schemes used in open-set speaker recognition. Experiments are performed using the following three separate corpora: (1) noise-free TIMIT; (2) noisy in-vehicle CU-move; and (3) the NIST-SRE-2006 database. Experimental results show a consistent increase in system performance for the proposed techniques.

12.
Speech babble is one of the most challenging types of noise interference for all speech systems. Here, a systematic approach to modeling its underlying structure is proposed to further the existing knowledge of speech processing in noisy environments. This paper establishes a working foundation for the analysis and modeling of babble speech. We first address the underlying model for multiple-speaker babble, considering the number of conversations versus the number of speakers contributing to the babble. Next, based on this model, we develop an algorithm to detect the range of the number of speakers within an unknown babble sequence. Evaluation is performed using 110 h of data from the Switchboard corpus. The number of simultaneous conversations ranges from one to nine, i.e., one to 18 subjects speaking. A speaker conversation stream detection rate in excess of 80% is achieved with a speaker window size of ±1 speakers. Finally, the problem of in-set/out-of-set speaker recognition is considered in the context of interfering babble noise. Results are shown for test durations from 2-8 s, with babble speaker groups ranging from two to nine subjects. It is shown that by choosing the correct number of speakers in the background babble, an overall average performance gain of 6.44% in equal error rate can be obtained. This study is effectively the first effort to develop an overall model for speech babble, and with it, contributions are made to speech system robustness in noise.

13.
The objective of voice conversion algorithms is to modify the speech of a particular source speaker so that it sounds as if spoken by a different target speaker. Current conversion algorithms employ a training procedure during which the same utterances, spoken by both the source and target speakers, are needed for deriving the desired conversion parameters. Such a parallel corpus is often difficult or impossible to collect. Here, we propose an algorithm that relaxes this constraint: the training corpus need not contain the same utterances from both speakers. The proposed algorithm is based on speaker adaptation techniques, adapting the conversion parameters derived for a particular pair of speakers to a different pair for which only a nonparallel corpus is available. We show that adaptation reduces the error obtained when simply applying the conversion parameters of one pair of speakers to another by a factor that can reach 30%. A speaker identification measure is also employed that more insightfully portrays the importance of adaptation, while listening tests confirm the success of our method. Both the objective and subjective tests demonstrate that the proposed algorithm achieves results comparable to the ideal case in which a parallel corpus is available.

14.
This paper presents a study of speaker identification for security systems based on the energy of speaker utterances. The proposed system consists of a combination of signal pre-processing, feature extraction using the wavelet packet transform (WPT), and speaker identification using an artificial neural network. In the pre-processing stage, the amplitudes of utterances of the same sentence are normalized to prevent estimation errors caused by changes in the speakers' volume. In the feature extraction stage, three conventional methods are considered in the experiments and compared with the irregular decomposition method of the proposed system. To verify the identification performance of the proposed system, a general regression neural network (GRNN) was used for comparison in the experimental investigation. The experimental results demonstrate the effectiveness of the proposed speaker identification system, compared against the discrete wavelet transform (DWT), conventional WPT, and Mel-scale WPT.
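The amplitude-normalization pre-processing step described above is simple to sketch: peak-normalize each utterance so that loud and quiet renditions of the same sentence become comparable before feature extraction. The toy signals below are assumptions for illustration.

```python
import numpy as np

def normalize_amplitude(signal):
    """Peak-normalize an utterance so differences in speaking volume do
    not distort subsequent feature extraction (the pre-processing step
    described in the abstract above)."""
    peak = np.max(np.abs(signal))
    return signal / peak if peak > 0 else signal

# Two renditions of the "same sentence" at different volumes.
loud = np.array([0.0, 2.0, -4.0, 1.0])
quiet = np.array([0.0, 0.5, -1.0, 0.25])
```

After normalization the two signals coincide, so any later energy-based features no longer depend on recording volume.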

15.
In classification tasks, the error rate is proportional to the commonality among classes. In the conventional GMM-based modeling technique, since the model parameters of a class are estimated without considering the other classes in the system, features that are common across various classes may be captured along with the unique ones. This paper proposes to use the unique characteristics of a class at the feature level and at the phoneme level, separately, to improve classification accuracy. At the feature level, the performance of a classifier is analyzed by capturing unique features during modeling and removing common feature vectors during classification. Experiments were conducted on a speaker identification task, using speech data of 40 female speakers from the NTIMIT corpus, and on a language identification task, using speech data of two languages (English and French) from the OGI_MLTS corpus. At the phoneme level, classifier performance is analyzed on a speaker identification task by identifying the subset of phonemes that are unique to a speaker with respect to his or her acoustically closest speaker. In both cases (feature level and phoneme level), considerable improvement in classification accuracy is observed over conventional GMM-based classifiers on the above tasks. Among the three experimental setups, the speaker identification task using unique phonemes shows as much as a 9.56% performance improvement over the conventional GMM-based classifier.

16.
This paper proposes a new method for speaker feature extraction based on formants, wavelet entropy, and neural networks, denoted FWENN. In the first stage, five formants and seven Shannon wavelet-packet entropy coefficients are extracted from the speakers' signals as the speaker feature vector. In the second stage, these 12 feature coefficients are used as inputs to feed-forward neural networks. A probabilistic neural network is also proposed for comparison. In contrast to conventional speaker recognition methods that extract features from sentences (or words), the proposed method extracts the features from vowels. Using vowels makes it possible to recognize speakers when only partially recorded words are available, which may be useful for deaf-mute persons or when the recordings are damaged. Experimental results show that the proposed method succeeds in speaker verification and identification tasks with a high classification rate. This is accomplished with a minimum amount of information, using only 12 feature coefficients (i.e., the vector length) and a single vowel signal, which is the major contribution of this work. The results compare favorably with well-known classical algorithms for speaker recognition.

17.
As a practical method, the least squares support vector machine (LS-SVM) is applicable to nonlinearly separable problems such as speaker identification. A single LS-SVM, however, can only perform binary classification, so multiple LS-SVMs and corresponding algorithms are needed to classify the many speakers in an identification database. Compared with the one-against-all LS-SVM, the pairwise LS-SVM has the advantage of being easy to extend to new cases, but the conventional pairwise scheme requires too many judgments to make a multi-class decision. To improve the pairwise LS-SVM and make it applicable to multi-speaker identification, we propose the notion of a classification weight for the pairwise LS-SVM and a corresponding algorithm, named pairwise LS-SVM based on classification weight (the m-ωLS-SVM method), which can be used in multi-speaker identification systems. Experimental results show that, compared with the conventional pairwise LS-SVM, the m-ωLS-SVM system identifies speakers faster while maintaining identification accuracy (or vice versa), with only a small increase in training time.
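To see why pruning pairwise judgments matters, the sketch below contrasts full one-vs-one voting (m·(m-1)/2 binary decisions) with a weighted elimination scheme in which a losing speaker's weight drops to zero and it is never compared again (m-1 decisions). This illustrates the judgment-count reduction only; it is not the paper's m-ωLS-SVM algorithm, and the stand-in decision function replaces trained LS-SVMs.

```python
def tournament_identify(speakers, pairwise_decide):
    """Identify a speaker with m-1 pairwise decisions instead of the
    m*(m-1)/2 needed by full one-vs-one voting: the loser of each
    comparison is weighted out and never compared again.

    pairwise_decide(a, b) returns the winner of the binary classifier
    for the pair (a, b); here it is a stand-in for trained LS-SVMs.
    """
    comparisons = 0
    champion = speakers[0]
    for challenger in speakers[1:]:
        champion = pairwise_decide(champion, challenger)
        comparisons += 1
    return champion, comparisons

# Toy stand-in: the "true" speaker always wins its pairwise contests.
true_speaker = "spk3"
decide = lambda a, b: a if a == true_speaker else (b if b == true_speaker else a)

winner, n = tournament_identify(["spk0", "spk1", "spk2", "spk3", "spk4"], decide)
```

For 5 speakers this needs 4 decisions instead of the 10 a full pairwise vote would take; the gap widens quadratically with the database size.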

18.
This correspondence introduces a new text-independent speaker verification method derived from the basic pattern recognition idea that the discriminating ability of a classifier can be improved by removing the information common to all classes. By finding the speech characteristics common to a group of speakers, a global speaker model can be established. Subtracting the score acquired from this model normalizes the conventional likelihood score, resulting in a more compact score distribution and lower equal error rates. Several experiments demonstrate the effectiveness of the proposed method.
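The normalization above is a log-likelihood ratio: the global model's score is subtracted from the claimed speaker model's score. A minimal sketch, using toy diagonal-Gaussian models (the model parameters are assumptions, not from the paper):

```python
import numpy as np

def normalized_score(ll_speaker, ll_global):
    """Subtract the global (common) model's log-likelihood from the
    claimed speaker model's, removing the component shared by all
    speakers and compacting the score distribution."""
    return ll_speaker - ll_global

def gauss_ll(x, mean, var):
    """Log-likelihood of x under a diagonal Gaussian (stand-in model)."""
    return float(-0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var))

x = np.array([0.1, -0.2])                                   # one test vector
ll_target = gauss_ll(x, np.array([0.0, 0.0]), np.ones(2))   # claimed speaker
ll_global = gauss_ll(x, np.array([0.5, 0.5]), 2 * np.ones(2))  # global model
score = normalized_score(ll_target, ll_global)
```

The verification decision then thresholds `score` rather than the raw likelihood.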

19.
Speaker verification based on model distance and support vector machines
For SVM-based speaker verification, this paper proposes using the distances and angles among the background model, the speaker model, and the test-utterance model as the SVM feature vector, concatenated with the parameters of a generalized linear discriminant sequence kernel. This achieves a higher recognition rate than the baseline Gaussian mixture model algorithm. On the NIST 2004 evaluation corpus, the equal error rate of the proposed system is 16% lower than that of the baseline GMM-UBM system, marking measurable progress in speaker recognition.

20.
The issue of input variability resulting from speaker changes is one of the most crucial factors influencing the effectiveness of speech recognition systems. A solution is adaptation or normalization of the input, such that all parameters of the input representation are adapted to those of a single speaker and the input pattern is normalized against speaker changes before recognition. This paper proposes three such methods, in which some of the effects of speaker changes on the recognition process are compensated. In all three methods, a feed-forward neural network is first trained to map the input into codes representing the phonetic classes and speakers. Then, among the 71 speakers used in training, the one showing the highest phone recognition accuracy is selected as the reference speaker, and the representation parameters of the other speakers are converted to those of the corresponding speech uttered by the reference speaker. In the first method, the error back-propagation algorithm is used to find, in the input space, the optimal point of every decision region relating to each phone of each speaker; the distances between these points and the corresponding points of the reference speaker are employed to offset the speaker-change effects and adapt the input signal to the reference speaker. In the second method, using error back-propagation with the reference speaker's data as the desired output, all speech signal frames (both the training and the test sets) are corrected so that they coincide with the corresponding speech of the reference speaker. In the third method, another feed-forward neural network is applied inversely, mapping the phonetic classes and speaker information back to the input representation: the phonetic output retrieved from the direct network, together with the reference speaker's data, is given to the inverse network, which yields an estimate of the input representation adapted to the reference speaker. In all three methods, the final speech recognition model is trained on the adapted training data and tested on the adapted test data. Implementing these methods and combining the final network results with the unadapted network based on the highest confidence level yields increases of 2.1%, 2.6%, and 3% in phone recognition accuracy on clean speech for the three methods, respectively.
