Similar Documents (20 results)
1.
In this paper, a new text-independent speaker verification method, GSMSV, is proposed based on likelihood score normalization. In this method, a global speaker model is established to represent the universal characteristics of speech and to normalize the likelihood score. Statistical analysis demonstrates that this normalization removes factors common to all speech and brings the differences between speakers into prominence. As a result, the equal error rate is decreased significantly, the verification procedure is accelerated, and the system's adaptability to speaking speed is improved.
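A minimal sketch of the normalization idea described above, assuming a log-likelihood-ratio form against a global model; the specific GSMSV formulation and GMM configurations are not given in the abstract and are placeholders here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def normalized_score(features, speaker_gmm, global_gmm):
    """Average log-likelihood of a test utterance under the speaker model,
    normalized by a global model trained on pooled speech.
    `features` is an (n_frames, n_dims) array of acoustic features."""
    # score_samples returns the per-frame log-likelihood under each GMM
    speaker_ll = speaker_gmm.score_samples(features).mean()
    global_ll = global_gmm.score_samples(features).mean()
    # Subtracting the global-model score removes factors common to all speech
    return speaker_ll - global_ll

# Hypothetical usage (model sizes are assumptions):
# global_gmm = GaussianMixture(n_components=256).fit(pooled_features)
# speaker_gmm = GaussianMixture(n_components=64).fit(speaker_features)
# accept = normalized_score(test_features, speaker_gmm, global_gmm) > threshold
```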

2.
高新建  屈丹  李弼程 《计算机应用》2007,27(10):2602-2604
In speaker verification, the score distributions of target speakers and impostors are bimodal, and the score distributions of different target speaker models are inconsistent, which makes it difficult to set a single threshold for all speakers and degrades system performance. Score normalization adjusts the threshold by adjusting the impostor score distribution. This paper briefly introduces the two most commonly used normalization methods, zero normalization (Z-Norm) and test normalization (T-Norm), and then focuses on a D-Norm method based on the KL distance. Combining the advantages of Z-Norm and D-Norm, a new method, ZD-Norm, is proposed, and the performance of the four normalization methods is compared. Experiments show that ZD-Norm improves the performance of the speaker verification system more effectively than Z-Norm and D-Norm.
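The Z-Norm and T-Norm methods mentioned above are standard; a minimal sketch follows (D-Norm and ZD-Norm are not spelled out in the abstract, so they are omitted).

```python
import numpy as np

def z_norm(raw_score, impostor_scores_for_model):
    """Z-Norm: normalize with impostor-score statistics estimated offline
    for the claimed speaker model."""
    mu = np.mean(impostor_scores_for_model)
    sigma = np.std(impostor_scores_for_model)
    return (raw_score - mu) / sigma

def t_norm(raw_score, cohort_scores_for_test_utterance):
    """T-Norm: normalize with the scores of the same test utterance
    against a cohort of impostor models, computed at test time."""
    mu = np.mean(cohort_scores_for_test_utterance)
    sigma = np.std(cohort_scores_for_test_utterance)
    return (raw_score - mu) / sigma
```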

3.
As the number of speaker models grows, the identification speed of a speaker recognition system drops and can no longer meet real-time requirements. To address this problem, a fast speaker identification method based on a hierarchical identification model is proposed. A variational approximation of the KL divergence is used as the similarity measure between models, and a speaker-model clustering method is designed on top of it. Results show that the method produces valid speaker-model clusters and greatly increases identification speed with only a small loss in identification rate.
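A sketch of the variational KL approximation for diagonal-covariance GMMs, following the standard variational form; the exact variant used in the paper is not specified in the abstract.

```python
import numpy as np

def kl_diag_gauss(m1, v1, m2, v2):
    """KL divergence between two diagonal-covariance Gaussians."""
    return 0.5 * np.sum(np.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

def variational_kl(gmm_f, gmm_g):
    """Variational approximation of KL(f || g) for two diagonal GMMs.
    Each GMM is a dict with arrays: weights (K,), means (K, D), vars (K, D)."""
    wf, mf, vf = gmm_f["weights"], gmm_f["means"], gmm_f["vars"]
    wg, mg, vg = gmm_g["weights"], gmm_g["means"], gmm_g["vars"]
    total = 0.0
    for a in range(len(wf)):
        # Self-similarity term over the components of f
        num = sum(wf[ap] * np.exp(-kl_diag_gauss(mf[a], vf[a], mf[ap], vf[ap]))
                  for ap in range(len(wf)))
        # Cross term over the components of g
        den = sum(wg[b] * np.exp(-kl_diag_gauss(mf[a], vf[a], mg[b], vg[b]))
                  for b in range(len(wg)))
        total += wf[a] * np.log(num / den)
    return total
```

A symmetric distance such as `variational_kl(f, g) + variational_kl(g, f)` could then feed the speaker-model clustering step.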

4.
In speaker detection it is important to build an alternative model against which to compare scores from the ‘target’ speaker model. Two strategies for building such a model are to build a single global model by sampling from a pool of training data, the Universal Background Model (UBM), or to build a cohort of models from individuals selected from the training data for the target speaker. The main contribution of this paper is to show that these approaches can be unified by using a Support Vector Machine (SVM) to learn a decision rule in the score space made up of the output scores of the client, cohort, and UBM models.
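A minimal sketch of learning a decision rule in a score space, assuming that space is simply the three-dimensional vector (client score, mean cohort score, UBM score); the paper's exact feature construction and kernel are not given in the abstract, and the data below are toy values.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy score vectors: [score vs. claimed model, mean cohort score, UBM score]
genuine = rng.normal([2.0, 0.5, 0.0], 0.5, size=(200, 3))   # label 1
impostor = rng.normal([0.5, 0.4, 0.0], 0.5, size=(200, 3))  # label 0
X = np.vstack([genuine, impostor])
y = np.concatenate([np.ones(200), np.zeros(200)])

# Learn an accept/reject rule directly in the score space
svm = SVC(kernel="rbf").fit(X, y)
decision = svm.decision_function(np.array([[1.8, 0.6, 0.1]]))
print("accept" if decision[0] > 0 else "reject")
```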

5.

In voice recognition, the two main problems are speech recognition (what was said) and speaker recognition (who was speaking). The usual method for speaker recognition is to postulate a model in which the speaker identity corresponds to the model parameters, whose estimation can be time-consuming when the number of candidate speakers is large. In this paper, we model the speaker as a high-dimensional point cloud of entropy-based features extracted from the speech signal. The method allows indexing and can therefore manage large databases. We experimentally assessed the quality of the identification on a publicly available database formed by extracting audio from a collection of YouTube videos of 1,000 different speakers. With 20-second audio excerpts, we were able to identify a speaker with 97% accuracy when the recording environment is not controlled, and with 99% accuracy for controlled recording environments.


6.
To improve the efficiency of a speaker identification system, a speaker identification method based on speaker-model clustering is proposed. Similar speaker models are clustered using an approximate KL distance, and a cluster center and a cluster representative are determined for each cluster, forming a hierarchical identification model. At test time, a cluster is first selected by computing the distance between the test data and the cluster centers or representatives, and the target speaker is then determined by computing the log-likelihood of the test data against the speaker models within the selected cluster, which greatly reduces the amount of computation. Experimental results show that, under the same conditions, identification based on speaker-model clustering is four times faster than a conventional GMM system while the identification accuracy drops by only 0.95%. Compared with the conventional GMM approach, the method therefore greatly increases identification speed while maintaining accuracy.
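A sketch of the two-stage search described above, assuming precomputed clusters of GMM speaker models with a per-cluster center model; the structure and the distance used for cluster selection are placeholders, not the paper's exact design.

```python
def identify(features, clusters):
    """Two-stage identification: pick the closest cluster by its center model,
    then score only the speaker GMMs inside that cluster.
    `clusters` is a list of dicts: {"center": gmm, "speakers": {name: gmm}},
    where each gmm exposes a score_samples(features) method (e.g. sklearn)."""
    # Stage 1: cluster selection via the center model's average log-likelihood
    best_cluster = max(clusters,
                       key=lambda c: c["center"].score_samples(features).mean())
    # Stage 2: full log-likelihood scoring only within the selected cluster
    scores = {name: gmm.score_samples(features).mean()
              for name, gmm in best_cluster["speakers"].items()}
    return max(scores, key=scores.get)
```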

7.
In the speaker space, speech features vary across utterances and over time. This variation is mainly caused by changes in the phonetic information and the speaker information contained in the speech data; if these two kinds of information can be separated from each other, robust speaker recognition becomes possible. Assuming that the subspace with large variation corresponds to the "phonetic space" and the subspace with small variation to the "speaker space", phonetic and speaker information are separated by a subspace method, and speaker identification and speaker verification methods are proposed on this basis. Comparative experiments against conventional methods show that robust speaker models can be built from a small amount of training data.

8.
This paper presents a method of rapidly determining speaker identity from a small sample of speech, using a tree-based vector quantiser trained to maximise mutual information (MMI). The method is text-independent, and new speakers may be enrolled rapidly. Unlike most conventional hidden Markov model approaches, this method is computationally inexpensive enough to run on a modest integer microprocessor, yet is robust even with only a small amount of test data. Speaker identification is therefore rapid both in computational cost and in the small amount of test speech needed to identify the speaker. The paper presents theoretical and experimental results showing that perfect identification accuracy can be achieved on a 15-speaker corpus using little more than 1 s of text-independent test speech. Also presented is a demonstration of how this method may be used to segment audio data by speaker.

9.
In text-independent speaker identification, to improve system robustness under telephone-speech conditions, the score normalization techniques commonly used in speaker verification are applied to speaker identification: the scores of the test utterance against the different speaker models are normalized separately, and the closest speaker model is selected as the system output, which effectively improves system performance. Speaker identification experiments on the NIST'03 1spk database demonstrate the effectiveness of score normalization for speaker identification.

10.
Recently, we proposed an improvement to conventional eigenvoice (EV) speaker adaptation using kernel methods. In our kernel eigenvoice (KEV) speaker adaptation, speaker supervectors are mapped to a kernel-induced high-dimensional feature space, where eigenvoices are computed using kernel principal component analysis. A new speaker model is then constructed as a linear combination of the leading eigenvoices in the kernel-induced feature space. KEV adaptation was shown to outperform EV, MAP, and MLLR adaptation in a TIDIGITS task with less than 10 s of adaptation speech. Nonetheless, due to many kernel evaluations, both adaptation and subsequent recognition in KEV adaptation are considerably slower than in conventional EV adaptation. In this paper, we solve the efficiency problem and eliminate all kernel evaluations involving adaptation or testing observations by finding an approximate pre-image of the implicit adapted model found by KEV adaptation in the feature space; we call our new method embedded kernel eigenvoice (eKEV) adaptation. eKEV adaptation is faster than KEV adaptation, and subsequent recognition runs as fast as normal HMM decoding. eKEV adaptation makes use of a multidimensional scaling technique so that the resulting adapted model lies in the span of a subset of carefully chosen training speakers. It is related to the reference speaker weighting (RSW) adaptation method, which is based on speaker clustering. Our experimental results on Wall Street Journal show that eKEV adaptation continues to outperform EV, MAP, MLLR, and the original RSW method. However, by adopting the way we choose the subset of reference speakers for eKEV adaptation, we may also improve RSW adaptation so that it performs as well as our eKEV adaptation.

11.
To mitigate the impact of vocal effort on the performance of speaker recognition systems, and assuming that a small amount of whispered and shouted speech is available in the training data, a method combining maximum a posteriori (MAP) adaptation and constrained maximum likelihood linear regression (CMLLR) is proposed to update the speaker models and transform the speaker features. MAP adaptation is used to update speaker models trained on normal speech, while CMLLR feature-space transformation is used to project the features of whispered and shouted test speech, alleviating the mismatch between training and test speech. Experimental results show that the MAP+CMLLR method markedly lowers the equal error rate (EER) of the speaker recognition system: compared with the baseline system, MAP adaptation alone, MLLR model projection, and CMLLR feature-space projection, the average EER is reduced by 75.3%, 3.5%, 72%, and 70.9%, respectively. The results indicate that the proposed method weakens the influence of vocal effort on speaker discriminability and makes the speaker recognition system more robust to variations in vocal effort.

12.
This paper addresses the problem of real-time speaker segmentation and speaker tracking in audio content analysis, where no prior knowledge of the number of speakers or their identities is available. Speaker segmentation detects the speaker change boundaries in a speech stream; it is performed by a two-step algorithm comprising potential change detection and refinement. Speaker tracking is then performed on the segmentation results by identifying the speaker of each segment. In our approach, incremental speaker model updating and segmental clustering are proposed, which make unsupervised speaker segmentation and tracking feasible in real-time processing. A Bayesian fusion method is also proposed to fuse multiple audio features for a more reliable result, and different noise levels are used to compensate for background mismatch. Experiments show that the proposed algorithm can recall 89% of speaker change boundaries with 15% false alarms, and that 76% of speakers can be identified without supervision with 20% false alarms. Compared with previous work, the algorithm also has low computational complexity and runs in 15% of real time with a very limited analysis delay.

13.
Content-based multimedia indexing, retrieval, and processing, as well as multimedia databases, demand the structuring of the media content (image, audio, video, text, etc.), one significant goal being to associate the identity of the content with the individual segments of the signals. In this paper, we specifically address the problem of speaker clustering, the task of assigning every speech utterance in an audio stream to its speaker. We offer a complete treatment of the idea of partially supervised speaker clustering, which refers to the use of our prior knowledge of speakers in general to assist the unsupervised speaker clustering process. By means of an independent training data set, we encode the prior knowledge at the various stages of the speaker clustering pipeline via 1) learning a speaker-discriminative acoustic feature transformation, 2) learning a universal speaker prior model, and 3) learning a discriminative speaker subspace, or equivalently, a speaker-discriminative distance metric. We study the directional scattering property of the Gaussian mixture model (GMM) mean supervector representation of utterances in the high-dimensional space, and advocate exploiting this property by using the cosine distance metric instead of the Euclidean distance metric for speaker clustering in the GMM mean supervector space. We propose to perform discriminant analysis based on the cosine distance metric, which leads to a novel distance metric learning algorithm, linear spherical discriminant analysis (LSDA). We show that the proposed LSDA formulation can be systematically solved within the elegant graph embedding general dimensionality reduction framework. Our speaker clustering experiments on the GALE database clearly indicate that 1) our speaker clustering methods based on the GMM mean supervector representation and vector-based distance metrics outperform traditional speaker clustering methods based on the "bag of acoustic features" representation and statistical model-based distance metrics, 2) our advocated use of the cosine distance metric yields consistent increases in speaker clustering performance compared to the commonly used Euclidean distance metric, 3) our partially supervised speaker clustering concept and strategies significantly improve the speaker clustering performance over the baselines, and 4) our proposed LSDA algorithm further leads to state-of-the-art speaker clustering performance.
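The advocated cosine distance in the GMM mean supervector space is straightforward to sketch; the LSDA metric learning itself goes beyond the abstract's detail, so the example below only shows cosine-distance-based agglomerative clustering of supervectors, with assumed shapes and toy data.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def cosine_distance_matrix(supervectors):
    """Pairwise cosine distances between GMM mean supervectors (N, D)."""
    normed = supervectors / np.linalg.norm(supervectors, axis=1, keepdims=True)
    return 1.0 - normed @ normed.T

# One GMM mean supervector per utterance (toy data; real supervectors are
# built by stacking the adapted GMM component means of each utterance)
supervectors = np.random.default_rng(0).normal(size=(50, 512))
dist = cosine_distance_matrix(supervectors)
# "metric" is named "affinity" in scikit-learn versions before 1.2
labels = AgglomerativeClustering(n_clusters=5, metric="precomputed",
                                 linkage="average").fit_predict(dist)
```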

14.
15.
In this paper, we propose a sub-vector based speaker characterization method for biometric speaker verification, in which speakers are represented by uniform segmentation of their maximum likelihood linear regression (MLLR) super-vectors; the resulting segments are called m-vectors. The MLLR transformation is estimated with respect to a universal background model (UBM) without any speech/phonetic information. We introduce two strategies for segmenting the MLLR super-vector: a disjoint window technique and an overlapped window technique. During the test phase, the m-vectors of the test utterance are scored against the claimed speaker. Before scoring, the m-vectors are post-processed to compensate for session variability. In addition, we propose a clustering algorithm for class-wise MLLR transformation, where the Gaussian components of the UBM are clustered into different groups using the concepts of expectation maximization (EM) and maximum likelihood (ML). In this case, an MLLR transformation is estimated with respect to each class using the sufficient statistics accumulated from the Gaussian components belonging to that class, and these transformations are then used for the m-vector system. The proposed method needs only a single alignment of the data with respect to the UBM for multiple MLLR transformations. We first show that the proposed multi-class m-vector system gives promising speaker verification performance compared to a conventional i-vector based speaker verification system. Secondly, the proposed EM-based clustering technique is robust to random initialization, in contrast to the conventional K-means algorithm, and yields system performance equal to or better than the best obtained with K-means. Finally, we show that fusing the m-vector with the i-vector further improves speaker verification performance in both the score and the feature domains. Experimental results are reported on various tasks of the NIST 2008 speaker recognition evaluation (SRE) core condition.
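A small sketch of the two segmentation strategies (disjoint and overlapped windows) for cutting an MLLR super-vector into m-vectors; the window length and shift below are illustrative values, not the paper's settings.

```python
import numpy as np

def m_vectors(supervector, win_len, shift=None):
    """Cut a flattened MLLR super-vector into fixed-length m-vectors.
    shift == win_len gives disjoint windows; shift < win_len gives
    overlapped windows."""
    shift = win_len if shift is None else shift
    return [supervector[i:i + win_len]
            for i in range(0, len(supervector) - win_len + 1, shift)]

mllr_supervector = np.random.default_rng(0).normal(size=(1600,))
disjoint = m_vectors(mllr_supervector, win_len=400)               # 4 m-vectors
overlapped = m_vectors(mllr_supervector, win_len=400, shift=200)  # 7 m-vectors
```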

16.
In recent years, computational paralinguistics has emerged as a new topic within speech technology. It concerns extracting non-linguistic information from speech, such as emotions, the level of conflict, or whether the speaker is drunk. It was recently shown that many methods applied here can be assisted by speaker clustering; for example, the features extracted from the utterances can be normalized speaker-wise instead of with a global method. In this paper, we propose a speaker clustering algorithm based on standard clustering approaches such as K-means and on feature selection. By applying this speaker clustering technique in two paralinguistic tasks, we were able to significantly improve the accuracy scores of several machine learning methods, and we also gained insight into which features can be used efficiently to separate the different speakers.
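A sketch of the cluster-wise (speaker-wise) normalization idea: cluster utterance-level feature vectors with K-means, then z-normalize each utterance with the statistics of its own cluster instead of the global statistics. The feature set and the paper's feature-selection step are omitted; this is an illustration under those assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_wise_normalize(features, n_clusters=4, random_state=0):
    """Normalize each utterance's feature vector with the mean/std of the
    cluster it falls into, rather than with global statistics.
    `features` is an (n_utterances, n_features) array."""
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=random_state).fit_predict(features)
    normalized = np.empty_like(features, dtype=float)
    for c in range(n_clusters):
        idx = labels == c
        mu = features[idx].mean(axis=0)
        sigma = features[idx].std(axis=0) + 1e-8  # avoid division by zero
        normalized[idx] = (features[idx] - mu) / sigma
    return normalized, labels
```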

17.
The need for computer modeling tools capable of precisely simulating multi-field interactions is increasing. Accurate modeling of an electrostatically actuated Micro-Electro-Mechanical-Systems (MEMS) speaker results in a system of coupled partial differential equations (PDEs) describing the interactions between the electrostatic, mechanical, and acoustic fields. A finite element (FE) method is applied to solve the PDEs efficiently and accurately. In the first part of this paper, we present the driving technology of an electrostatically actuated MEMS speaker, where the electrostatic-mechanical coupling is realized with reduced-order electromechanical transducer elements. The electrostatic attracting force is derived from the capacitance-to-gap relation of our device. In a second investigation, we focus on the generated sound, including open-domain characteristics and optimization of the propagation region. The sound pressure level is computed with the Kirchhoff-Helmholtz integral as well as with FEM using CFS++. We use the Kirchhoff-Helmholtz model to characterize the interactions of multiple speaker cells in arrays, and the FE tool for single-speaker-cell investigations. In the acoustic FE model, the focus is on mesh generation and optimization of the propagation region using non-conforming grids (Mortar FEM), and on the boundary region to model open-domain characteristics. We apply a recently developed perfectly matched layer technique, which allows us to truncate the acoustic propagation domain while preserving open-domain characteristics. Finally, we present an optimization method that takes advantage of stress-induced self-raising, realized with several merged layers with different intrinsic pre-stress. The buckling back-plate concept can be compared to bimetal characteristics.

18.
A speaker feature extraction method based on Duffing stochastic resonance
潘平  何朝霞 《计算机工程与应用》2012,48(35):123-125,142
The extraction of speaker feature parameters directly affects the construction of the recognition model. The MFCC and LPC extraction methods mainly capture local low-frequency information and the global AR signal, respectively. A speaker spectral feature extraction method based on Duffing stochastic resonance is proposed. Simulation results show that the method can distinguish small spectral differences between speakers and effectively extract the basic spectral characteristics of a speaker, thereby providing a finer-grained basis for speaker recognition models.

19.
To find the key speakers in news stories and thereby improve the efficiency of multimedia retrieval, a key-speaker detection method built on speaker indexing is proposed. Based on the characteristics of speakers in news stories, a speaker keyness score is defined from four factors: speaker frequency, total speaking duration, average duration per speaking turn, and a speaker position factor. The keyness score is used to judge the importance of a speaker, and the speaker with the largest keyness in each news story is taken as that story's key speaker. Experimental results show that the algorithm finds the great majority of the key speakers in the stories, verifying its effectiveness and feasibility.
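A hedged sketch of combining the four factors listed above into a single keyness score; the weights, the scaling of each term, and the definition of the position factor are assumptions for illustration, not the paper's.

```python
def keyness(turns, story_duration, weights=(0.3, 0.3, 0.2, 0.2)):
    """Combine four cues into a keyness score for one speaker.
    `turns` is a list of (start, end) speaking intervals within the story.
    The combination below is illustrative; in practice each term would be
    normalized across the speakers of the story."""
    n_turns = len(turns)
    total_time = sum(end - start for start, end in turns)
    avg_turn = total_time / n_turns if n_turns else 0.0
    # Assumed position factor: speakers appearing early in the story count more
    position = 1.0 - min(s for s, _ in turns) / story_duration if turns else 0.0
    w1, w2, w3, w4 = weights
    return (w1 * n_turns + w2 * total_time / story_duration
            + w3 * avg_turn / story_duration + w4 * position)

# The speaker with the largest keyness in a story is taken as its key speaker:
# key_speaker = max(speaker_turns, key=lambda s: keyness(speaker_turns[s], dur))
```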

20.
A method is proposed that applies a regression model based on deep neural network (DNN) feature mapping to an identity vector (i-vector)/probabilistic linear discriminant analysis (PLDA) speaker recognition system. The DNN fits the nonlinear mapping between the i-vectors of noisy speech and those of clean speech, producing an approximate representation of the clean-speech i-vector and thus reducing the impact of noise on system performance. Experiments on the TIMIT dataset verify the feasibility and effectiveness of the method.
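A minimal sketch of the regression idea, with a small multi-layer perceptron mapping noisy i-vectors to clean i-vectors; the i-vector dimension, network size, and parallel toy data are assumptions, and an MLPRegressor stands in for the paper's DNN.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
dim = 200                                    # assumed i-vector dimensionality
clean = rng.normal(size=(1000, dim))         # clean-speech i-vectors (toy data)
noisy = clean + rng.normal(scale=0.3, size=clean.shape)  # parallel noisy i-vectors

# Fit the nonlinear noisy -> clean mapping
dnn = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=200, random_state=0)
dnn.fit(noisy, clean)

# At test time, a noisy i-vector is mapped to an approximate clean i-vector
# before PLDA scoring
denoised = dnn.predict(noisy[:1])
```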
