Similar Documents
 18 similar documents found (search time: 187 ms)
1.
This paper proposes a method that applies a regression model based on Deep Neural Network (DNN) feature mapping to an identity vector (i-vector)/Probabilistic Linear Discriminant Analysis (PLDA) speaker recognition system. The DNN fits the nonlinear mapping between i-vectors extracted from noisy speech and those extracted from clean speech, yielding an approximate representation of the clean-speech i-vector and thereby reducing the impact of noise on system performance. Experiments on the TIMIT corpus verify the feasibility and effectiveness of the method.
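A minimal sketch of the kind of DNN regression this abstract describes, mapping a noisy-speech i-vector to an estimate of the paired clean-speech i-vector. The network shape, dimensions, and loss below are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

IVEC_DIM = 400  # assumed i-vector dimensionality

class IVectorDenoiser(nn.Module):
    """Regression DNN: noisy-speech i-vector -> estimate of clean-speech i-vector."""
    def __init__(self, dim=IVEC_DIM, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, noisy_ivec):
        return self.net(noisy_ivec)

# Training fits the nonlinear mapping between paired noisy/clean i-vectors.
model = IVectorDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

noisy = torch.randn(32, IVEC_DIM)   # placeholder batch of noisy-speech i-vectors
clean = torch.randn(32, IVEC_DIM)   # paired clean-speech i-vectors
loss = loss_fn(model(noisy), clean)
loss.backward()
optimizer.step()
```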

2.
In i-vector/PLDA speaker verification, utterances of arbitrary duration pose a problem: mapping length-normalized i-vectors into the PLDA model introduces uncertain distortion and scaling that hurts recognition accuracy. This paper instead normalizes the column vectors of the total variability matrix T, replacing length normalization of the i-vectors in the PLDA model and avoiding the undesirable distortion that i-vector length normalization carries over into PLDA. Experimental results show that the method matches the effect of length normalization and in some cases outperforms it.
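A small numpy sketch contrasting the two normalizations discussed above; the matrix shapes are illustrative assumptions only.

```python
import numpy as np

def length_normalize(ivec):
    """Conventional length normalization applied to an i-vector."""
    return ivec / np.linalg.norm(ivec)

def column_normalize(T):
    """Normalize each column of the total variability matrix T instead,
    so i-vectors passed to PLDA need no per-vector length normalization."""
    return T / np.linalg.norm(T, axis=0, keepdims=True)

# Illustrative shapes: supervector dimension 1000, i-vector dimension 400.
T = np.random.randn(1000, 400)
T_norm = column_normalize(T)
```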

3.
In speaker recognition, a model combining a Deep Neural Network (DNN), identity vectors (i-vectors) and Probabilistic Linear Discriminant Analysis (PLDA) has proved highly effective. To further improve the channel-compensation performance of the PLDA model, a Denoising Autoencoder (DAE), a Restricted Boltzmann Machine (RBM), and their combination (DAE-RBM) are applied on the channel-compensation PLDA side to reduce the influence of channel information in the speaker i-vector space. Experiments show that, compared with the standard PLDA system, the DAE-PLDA and RBM-PLDA systems significantly reduce both the equal error rate (EER) and the detection cost function (DCF), and the DAE-RBM-PLDA system, which combines the strengths of both, improves recognition performance further.

4.
With the advance of deepfake technology, synthetic speech detection faces growing challenges. This paper proposes a synthetic speech detection method that incorporates auxiliary learning into an end-to-end model. After alignment, the audio data are fed directly into the improved end-to-end model without extracting any hand-crafted features; the main task is binary classification of genuine versus synthetic speech, while discrimination among different synthesis types serves as an auxiliary task that supplies prior assumptions for the main detection task, and the weighting of the main and auxiliary losses is optimized. Experiments on the public ASVspoof2019 and ASVspoof2015 datasets show that the improved model effectively lowers the equal error rate compared with models using hand-crafted features, outperforms the original end-to-end model, and generalizes better to unseen attack types.
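A hedged sketch of the weighted main/auxiliary multi-task setup described above; the layer sizes, number of synthesis types, and loss weight are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class SpoofDetector(nn.Module):
    """Shared encoder with a main head (genuine vs. synthetic) and an
    auxiliary head (which synthesis type), as in auxiliary multi-task learning."""
    def __init__(self, feat_dim=64, n_spoof_types=6):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU())
        self.main_head = nn.Linear(256, 2)             # genuine vs. synthetic
        self.aux_head = nn.Linear(256, n_spoof_types)  # synthesis-type classifier

    def forward(self, x):
        h = self.encoder(x)
        return self.main_head(h), self.aux_head(h)

def multitask_loss(main_logits, aux_logits, y_main, y_aux, aux_weight=0.3):
    # Weighted sum of main and auxiliary cross-entropy losses; this weight
    # is the quantity the abstract says was optimized.
    ce = nn.CrossEntropyLoss()
    return ce(main_logits, y_main) + aux_weight * ce(aux_logits, y_aux)
```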

5.
Confidence measures quantify how well speech data match a model; they can expose recognition errors in voice command systems and improve their reliability. In recent years, methods based on identity vectors (i-vectors) and Probabilistic Linear Discriminant Analysis (PLDA) have achieved notable results in speaker recognition. This paper applies the i-vector/PLDA model as a confidence analysis method for command-word recognition results; it requires no acoustic model or language model, and experiments show good performance. Building on this, to compensate for the i-vector's weakness in capturing temporal information, the system is fused with DTW, which effectively improves its ability to discriminate the temporal structure of the audio.

6.
Traditional speech recognition systems are data-driven and rely on a language model to choose the optimal decoding path, so in some scenarios the decoded output has the right pronunciation but the wrong characters. To address this, a prosody-assisted end-to-end speech recognition method is proposed that uses prosodic information in the speech to raise the language-model probability of the correct character sequence. On top of an attention-based encoder-decoder framework, prosodic features such as articulation intervals and articulation energy are first extracted from the attention coefficient distribution; these prosodic features are then combined with the decoder, markedly improving recognition accuracy for identical or similar pronunciations and semantically ambiguous cases. Experimental results show that, on 1,000 h and 10,000 h recognition tasks, the method achieves relative accuracy improvements of 5.2% and 5.0% over the end-to-end baseline, further improving the intelligibility of the recognition output.

7.
In continuous speech recognition, complex environments (variability of speakers and ambient noise) create a mismatch between training and test data that degrades recognition accuracy. To address this, a speech recognition algorithm based on an adaptive deep neural network is proposed. An improved regularized adaptation criterion combined with feature-space adaptation of the DNN improves the data match; speaker identity vectors (i-vectors) are fused with noise-aware training to counter variation in speakers and ambient noise, and the classification function of the traditional DNN output layer is modified to enforce intra-class compactness and inter-class separation. Tests with various background noises added to the TIMIT English corpus and a Microsoft Chinese speech corpus show that, compared with the popular GMM-HMM and conventional DNN acoustic models, the proposed algorithm lowers the word error rate by 5.151% and 3.113% respectively, improving the model's generalization and robustness to some extent.
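A minimal sketch of the input augmentation implied by fusing i-vectors with noise-aware training: per-frame acoustic features are concatenated with an utterance-level i-vector and a noise estimate before entering the DNN. The dimensions and the crude mean-based noise estimate are assumptions, not the paper's recipe.

```python
import numpy as np

def noise_aware_input(frames, ivector):
    """frames: (T, D) per-frame features; ivector: (K,) speaker i-vector.
    Returns (T, D + D + K): features + utterance noise estimate + i-vector."""
    noise_est = frames.mean(axis=0)          # crude stationary-noise estimate
    T = frames.shape[0]
    return np.hstack([
        frames,
        np.tile(noise_est, (T, 1)),          # noise-aware input
        np.tile(ivector, (T, 1)),            # speaker-aware input
    ])

frames = np.random.randn(300, 40)   # e.g. 300 frames of 40-dim filterbanks
ivec = np.random.randn(100)         # assumed 100-dim i-vector
augmented = noise_aware_input(frames, ivec)   # shape (300, 180)
```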

8.
For the multi-accent problem in Mandarin speech recognition, a hybrid end-to-end model combining connectionist temporal classification (CTC) and multi-head attention is proposed, trained with a multi-objective criterion and decoded jointly. Experimental analysis finds that as the CTC weight in the hybrid architecture decreases and the encoder deepens, the hybrid model learns accented data better; training an encoder-decoder network up to 48 layers deep yields a model that outperforms all previous end-to-end models, reaching a 5.6% character error rate and 26.2% sentence error rate on the open-source 200 h accented dataset released by Datatang (数据堂). The experiments show that the proposed end-to-end model exceeds the recognition rate of ordinary end-to-end models and offers a measure of progress on accented Mandarin recognition.
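A sketch of the multi-objective criterion used by CTC/attention hybrids, where a weight λ interpolates the two losses; as the abstract notes, lowering the CTC weight shifts emphasis to the attention branch. The function names are placeholders, not this paper's code.

```python
def hybrid_ctc_attention_loss(ctc_loss, attention_loss, ctc_weight=0.3):
    """Multi-objective training: L = λ * L_ctc + (1 - λ) * L_attention."""
    return ctc_weight * ctc_loss + (1.0 - ctc_weight) * attention_loss

def joint_score(ctc_logp, att_logp, ctc_weight=0.3):
    """Joint decoding rescores attention hypotheses with CTC in the same spirit."""
    return ctc_weight * ctc_logp + (1.0 - ctc_weight) * att_logp
```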

9.
To strengthen the robustness of end-to-end speech recognition models and the effectiveness of feature extraction, the bottleneck feature extraction network is studied and an end-to-end speech recognition model based on jointly optimized orthogonal projection and estimation is proposed. The bottleneck feature extraction network is trained with a connectionist temporal classification loss, removing the dependence on prior linguistic and alignment knowledge, and an attention mechanism is added at the decoding output to fuse the two different end-to-end models. Experiments on the Chinese AISHELL-1 dataset show that, compared with traditional recognition models, the improved end-to-end model is better suited to recognizing noisy speech.

10.
End-to-end neural networks can automatically learn the transformation from raw data to features for a specific task, resolving the mismatch between hand-designed features and the task. Previous end-to-end speech recognition networks used a single-layer temporal convolutional network as the feature extractor and recurrent neural networks plus fully connected feed-forward deep neural networks as the acoustic model, an approach limited in both effectiveness and efficiency. Considering both the effectiveness of the feature extraction module and the training efficiency of the acoustic model, an end-to-end speech recognition model is proposed that combines a multi-time-frequency-resolution convolutional network with a feed-forward neural network containing memory modules. Experimental results show that, on a real recorded dataset, the proposed method lowers the character error rate by 10% and cuts training time by 80% relative to the traditional approach.

11.
In the i-vector/probabilistic linear discriminant analysis (PLDA) technique, the PLDA backend classifier is modelled on i-vectors. PLDA defines an i-vector subspace that compensates for the unwanted variability and helps to discriminate among speaker-phrase pairs. The channel or session variability manifested in i-vectors is known to be nonlinear in nature. PLDA training, however, assumes the variability to be linearly separable, thereby causing loss of important discriminating information. Besides, the i-vector estimation itself is known to be poor in the case of short utterances. This paper attempts to address these issues using a simple hierarchy-based system. A modified fuzzy-clustering technique is employed to divide the feature space into more characteristic feature subspaces using vocal source features. Thereafter, a separate i-vector/PLDA model is trained for each of the subspaces. The sparser alignment owing to the subspace-specific universal background model and the relatively reduced dimensions of variability in individual subspaces help to train more effective i-vector/PLDA models. Also, vocal source features are complementary to mel frequency cepstral coefficients, which are transformed into i-vectors using a mixture model technique. As a consequence, vocal source features and i-vectors tend to have complementary information. Thus using vocal source features for classification in a hierarchy tree may help to differentiate some of the speaker-phrase classes, which otherwise are not easily discriminable based on i-vectors. The proposed technique has been validated on Part 1 of the RSR2015 database, and it shows a relative equal error rate reduction of up to 37.41% with respect to the baseline i-vector/PLDA system.

12.
13.
The availability of multiple utterances (and hence, i-vectors) for speaker enrollment brings up several alternatives for their utilization with probabilistic linear discriminant analysis (PLDA). This paper provides an overview of their effective utilization from a practical viewpoint. We derive expressions for the evaluation of the likelihood ratio for the multi-enrollment case, with details on the computation of the required matrix inversions and determinants. The performance of five different scoring methods and the effect of i-vector length normalization are compared experimentally. We conclude that length normalization is a useful technique for all but one of the scoring methods considered, and that averaging i-vectors is the most effective of the methods compared. We also study the application of multicondition training to the PLDA model. Our experiments indicate that multicondition training is more effective in estimating PLDA hyperparameters than it is for likelihood computation. Finally, we look at the effect of the configuration of the enrollment data on PLDA scoring, studying the properties of conditional dependence and the number of enrollment utterances per target speaker. Our experiments indicate that these properties affect the performance of the PLDA model. These results further support the conclusion that i-vector averaging is a simple and effective way to process multiple enrollment utterances.
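A small numpy sketch of the strategy this abstract reports as most effective (length-normalizing and averaging the enrollment i-vectors before PLDA scoring); the normalize-average-renormalize order shown here is one reasonable choice, not necessarily the authors' exact recipe.

```python
import numpy as np

def length_norm(v):
    return v / np.linalg.norm(v)

def enroll_speaker(ivectors):
    """ivectors: list of (D,) i-vectors from multiple enrollment utterances.
    Length-normalize each, average, then renormalize for PLDA scoring."""
    normed = [length_norm(v) for v in ivectors]
    avg = np.mean(normed, axis=0)
    return length_norm(avg)

enrollment = [np.random.randn(400) for _ in range(3)]  # 3 enrollment utterances
speaker_model = enroll_speaker(enrollment)
```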

14.
In recent years, methods based on total variability factors have become the mainstream approach to speaker recognition. Among them, Probabilistic Linear Discriminant Analysis (PLDA) has attracted wide attention for its excellent performance. However, when estimating the PLDA model, the traditional factor analysis procedure updates only the model subspace, so the model mean is not well coupled with the updated subspace. A joint estimation method is proposed that estimates the model mean and the model subspace simultaneously, yielding stricter expectation-maximization update formulas; on the NIST Speaker Recognition Evaluation 2010 extended test set and the 2012 core test set, the equal error rate shows a degree of improvement.

15.
Thanks to its convenience, low cost and accuracy, speaker recognition has become an important means of identity authentication in daily life and work. In real application scenarios, however, speaker recognition systems face great challenges in accuracy, robustness, transferability and real-time operation. In recent years, deep learning has performed excellently in feature representation and pattern classification, offering a new direction for the further development of speaker recognition. In contrast to traditional speaker recognition techniques (such as GMM-UBM, GMM-SVM, JFA and i-vector), this survey focuses on speaker recognition under the deep learning framework. According to the role deep learning plays, it divides current research into three categories: deep learning-based feature representation, deep learning-based back-end modelling, and end-to-end joint optimization; it analyses and summarizes the characteristics and network structures of the typical algorithms and compares their performance. Finally, it summarizes the characteristics and advantages of deep learning as applied to speaker recognition, analyses the problems and challenges facing current research, and looks ahead to the prospects of speaker recognition under the deep learning framework, in the hope of promoting the further development of speaker recognition technology.

16.
In speaker verification systems based on total variability factors (i-Vectors), speaker-related discriminative information must be further extracted from the i-Vector representation of an utterance to improve system performance. Drawing on the idea of anchor models, this paper proposes a modelling method based on deep belief networks. The method analyses and models, layer by layer, the complex variability contained in the i-Vector and mines the speaker-related information through nonlinear transformations. On the NIST SRE 2008 core telephone-train/telephone-test condition, the equal error rates for male and female speakers are 4.96% and 6.18% respectively. Further fusion with a system based on linear discriminant analysis reduces the equal error rates to 4.74% and 5.35%.

17.
孙念, 张毅, 林海波, 黄超. 《计算机应用》 (Journal of Computer Applications), 2018, 38(10): 2839-2843
When the test utterance is long enough, a single feature carries sufficient information and discriminability for speaker recognition, but when the test utterance is very short, the speech signal lacks adequate speaker information and recognition performance drops sharply. To address the shortage of speaker information under short-utterance conditions, a short-utterance speaker recognition algorithm based on multi-feature i-vectors is proposed. The algorithm first extracts different acoustic feature vectors and concatenates them into one high-dimensional feature vector, then uses Principal Component Analysis (PCA) to remove correlation among the high-dimensional features and orthogonalize them, and finally applies Linear Discriminant Analysis (LDA) to select the most discriminative features while reducing the dimensionality to some extent, achieving better speaker recognition performance. Experiments on the TIMIT corpus show that, for short utterances of the same duration (2 s), the proposed algorithm achieves relative equal error rate (EER) reductions of 72.16%, 69.47% and 73.62% compared with i-vector systems using single Mel-Frequency Cepstral Coefficient (MFCC), Linear Prediction Cepstral Coefficient (LPCC) and Perceptual Log Area Ratio (PLAR) features, respectively. Under short utterances of varying durations, the proposed algorithm reduces both the EER and the detection cost function (DCF) by roughly 50% compared with single-feature i-vector systems. The results of these two experiments demonstrate that the proposed algorithm can adequately extract speaker-specific information in short-utterance speaker recognition and effectively improves recognition performance.
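A hedged scikit-learn sketch of the pipeline described above: concatenate the per-utterance i-vectors from different acoustic feature streams, decorrelate with PCA, then select discriminative dimensions with LDA. The dimensions, component counts and label layout are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Assume per-utterance i-vectors from three feature streams (MFCC, LPCC, PLAR),
# each 400-dimensional, for 500 training utterances over 50 speakers.
N, n_speakers = 500, 50
mfcc_ivec = np.random.randn(N, 400)
lpcc_ivec = np.random.randn(N, 400)
plar_ivec = np.random.randn(N, 400)
y = np.repeat(np.arange(n_speakers), N // n_speakers)   # speaker labels

X = np.hstack([mfcc_ivec, lpcc_ivec, plar_ivec])    # high-dimensional fused vector

pca = PCA(n_components=300, whiten=True)            # decorrelate / orthogonalize
X_pca = pca.fit_transform(X)

lda = LinearDiscriminantAnalysis(n_components=n_speakers - 1)  # most discriminative dims
X_lda = lda.fit_transform(X_pca, y)
```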

18.
This paper presents a simplified and supervised i-vector modeling approach with applications to robust and efficient language identification and speaker verification. First, by concatenating the label vector and the linear regression matrix at the end of the mean supervector and the i-vector factor loading matrix, respectively, the traditional i-vectors are extended to label-regularized supervised i-vectors. These supervised i-vectors are optimized to not only reconstruct the mean supervectors well but also minimize the mean square error between the original and the reconstructed label vectors, making the supervised i-vectors more discriminative in terms of the label information. Second, factor analysis (FA) is performed on the pre-normalized centered GMM first-order statistics supervector to ensure each Gaussian component's statistics sub-vector is treated equally in the FA, which reduces the computational cost by a factor of 25 in the simplified i-vector framework. Third, since the entire matrix inversion term in the simplified i-vector extraction depends only on a single variable (the total frame number), we make a global table of the resulting matrices against the frame numbers' log values. Using this lookup table, each utterance's simplified i-vector extraction is further sped up by a factor of 4 and suffers only a small quantization error. Finally, the simplified version of the supervised i-vector modeling is proposed to enhance both robustness and efficiency. The proposed methods are evaluated on the DARPA RATS dev2 task, the NIST LRE 2007 general task and the NIST SRE 2010 female condition 5 task for noisy-channel language identification, clean-channel language identification and clean-channel speaker verification, respectively. For language identification on DARPA RATS, the simplified supervised i-vector modeling achieved 2%, 16%, and 7% relative equal error rate (EER) reductions on three different feature sets and sped up by a factor of more than 100 over the baseline i-vector method for the 120 s task. Similar results were observed on the NIST LRE 2007 30 s task, with a 7% relative average cost reduction. Results also show that the use of Gammatone frequency cepstral coefficients, Mel-frequency cepstral coefficients and spectro-temporal Gabor features in conjunction with shifted-delta-cepstral features improves the overall language identification performance significantly. For speaker verification, the proposed supervised i-vector approach outperforms the i-vector baseline by a relative 12% and 7% in terms of EER and normalized old minDCF values, respectively.
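A simplified sketch of the lookup-table idea described in this abstract: precompute the i-vector extraction matrix inverse on a grid of log frame counts and quantize each utterance's frame count to the nearest grid point. The grid resolution and the assumed form of the precision matrix follow the simplified i-vector framework only loosely and are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def build_inverse_table(TtST, log_n_grid):
    """Precompute (I + n * T' Σ^{-1} T)^{-1} for a grid of log frame counts;
    in the simplified framework the inverse depends only on the total frame count n."""
    dim = TtST.shape[0]
    return {ln: np.linalg.inv(np.eye(dim) + np.exp(ln) * TtST) for ln in log_n_grid}

def lookup_inverse(table, n_frames):
    """Quantize log(n_frames) to the nearest grid point (small quantization error)."""
    ln = np.log(n_frames)
    key = min(table, key=lambda g: abs(g - ln))
    return table[key]

dim = 100                                 # assumed (simplified) i-vector dimension
TtST = np.random.randn(dim, dim)
TtST = TtST @ TtST.T                      # stand-in for T' Σ^{-1} T (positive semidefinite)
grid = np.round(np.arange(3.0, 9.0, 0.05), 2)   # log frame counts ≈ e^3 .. e^9
table = build_inverse_table(TtST, grid)
inv = lookup_inverse(table, n_frames=1200)
```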
