Similar Documents
20 similar documents retrieved.
1.
杨毅  宋辉  刘加 《电子与信息学报》2011,33(5):1234-1237
For the NIST (National Institute of Standards and Technology) evaluation, this paper builds a multi-distance microphone speech processing system for speaker diarization and localization. For the speaker diarization problem posed in the NIST Rich Transcription evaluation, an improved diarization method combining time-delay estimation and clustering is proposed, which reduces the complexity of diarization and raises its accuracy while maintaining stability. A new algorithm that builds matrix equations from the time delays between adjacent array elements is also proposed, from which the direction angles of multiple speakers can be obtained. The algorithms are verified on real speech data recorded in a standard meeting environment: the accuracy of the diarization algorithm approaches that of current mainstream diarization systems, and the error of the estimated direction angle stays within 3 degrees. The results show that, under suitable conditions, a multi-distance microphone system can serve as a speech input device for multi-party meeting environments.
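The time-delay estimation step between a pair of adjacent microphones can be illustrated with a generalized cross-correlation (GCC-PHAT) sketch; the paper's own matrix-equation formulation for multiple speakers is not reproduced here, and the function below is a hypothetical minimal example.

```python
import numpy as np

def gcc_phat_tdoa(x, y, fs, max_tau=None):
    """Estimate the time delay between two microphone frames with GCC-PHAT."""
    n = len(x) + len(y)
    X = np.fft.rfft(x, n=n)
    Y = np.fft.rfft(y, n=n)
    R = X * np.conj(Y)
    R /= np.abs(R) + 1e-12                  # PHAT weighting
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs                       # delay in seconds

# usage: tau = gcc_phat_tdoa(mic1_frame, mic2_frame, fs=16000, max_tau=0.001)
```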

2.
朱唯鑫  郭武 《信号处理》2016,32(7):859-865
This paper is the first to propose a length-normalized maximum a posteriori (MAP) estimation method and applies it to the cross likelihood ratio (CLR) and T-Test distance measures used in speaker diarization. Conventional MAP computes sufficient statistics against a universal background model (UBM) and then shifts the model parameters adaptively, so the amount of shift is positively correlated with the length of the speech segment. When measuring the similarity of two segments of different lengths, conventional MAP therefore yields inaccurate speaker models and degrades the distance measure. In the proposed method, the relevance factor in MAP is normalized by the length of the speech before the model parameters are updated, so that the resulting parameters are independent of segment length and better reflect speaker identity. On a diarization task over Chinese multi-speaker TV talk-show data, length-normalized MAP clearly outperforms the conventional method under both measures: the diarization error rate drops relatively by 35% under the CLR measure and by 10.7% under the T-Test measure.
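As a reference for the CLR distance that the paper builds on, a minimal sketch follows; it assumes MAP-adapted GMMs for the two segments and a UBM are already available (the model objects and helper names here are illustrative, not the authors' code).

```python
from sklearn.mixture import GaussianMixture

def cross_likelihood_ratio(feats_x, feats_y, gmm_x, gmm_y, ubm):
    """CLR between two segments given their adapted GMMs and a UBM.

    gmm_x, gmm_y, ubm: fitted sklearn GaussianMixture models;
    score() returns the average per-frame log-likelihood, so the ratio
    is already normalized by segment length.
    """
    return (gmm_y.score(feats_x) - ubm.score(feats_x)) \
         + (gmm_x.score(feats_y) - ubm.score(feats_y))

# A larger CLR means the two segments are more likely from the same speaker,
# so they would be merged first during agglomerative clustering.
```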

3.
A novel text-independent speaker identification system is proposed. The 12th-order perceptual linear predictive cepstral coefficients and their delta coefficients over a span of five frames are extracted from speech segmented by pitch-synchronous analysis. The Fisher ratios of the original coefficients are then calculated, and the coefficients with the largest Fisher ratios are selected to form 13-dimensional speaker feature vectors. Gaussian mixture models are used to model the speakers. Experimental results show that the identification accuracy of the proposed system is clearly better than that of systems based on conventional coefficients such as linear predictive cepstral coefficients and Mel-frequency cepstral coefficients.
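Fisher-ratio-based dimension selection can be sketched as follows: for each feature dimension, the ratio of between-speaker variance to within-speaker variance is computed and the top-ranked dimensions are kept. This is a generic sketch, not the authors' exact implementation.

```python
import numpy as np

def fisher_ratio_selection(features, labels, k=13):
    """features: (n_frames, n_dims) frame-level features; labels: speaker id per frame."""
    speakers = np.unique(labels)
    global_mean = features.mean(axis=0)
    between = np.zeros(features.shape[1])
    within = np.zeros(features.shape[1])
    for s in speakers:
        cls = features[labels == s]
        between += len(cls) * (cls.mean(axis=0) - global_mean) ** 2
        within += ((cls - cls.mean(axis=0)) ** 2).sum(axis=0)
    ratio = between / (within + 1e-12)
    return np.argsort(ratio)[::-1][:k]      # indices of the k most discriminative dimensions
```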

4.
Based on the statistical distribution of cepstral coefficient vectors in feature space, this paper proposes a new equal-variance weighted cepstral distortion measure. Its weighting function captures the fine structure of the distribution of speech cepstral vectors in feature space and thus discriminates the characteristics of different speakers effectively. Experiments show that, compared with the conventional Euclidean distance and the inverse-variance weighted distance, the proposed distortion measure significantly improves the correct identification rate of vector-quantization-based speaker recognition.
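A weighted cepstral distortion of this family has the generic form below; the paper's specific equal-variance weighting function is not given in the abstract, so the sketch exposes the weights as a parameter and shows only the inverse-variance baseline that the paper compares against.

```python
import numpy as np

def weighted_cepstral_distance(c1, c2, weights):
    """Weighted squared cepstral distortion: d(c1, c2) = sum_d w_d * (c1_d - c2_d)^2."""
    return np.sum(weights * (c1 - c2) ** 2)

# Inverse-variance weighting (the conventional baseline); the paper's
# equal-variance weighting function would be substituted here.
def inverse_variance_weights(training_cepstra):
    return 1.0 / (training_cepstra.var(axis=0) + 1e-12)
```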

5.
This paper proposes using a previously well-trained deep neural network (DNN) to enhance the i-vector representation used for speaker diarization. In effect, we replace the Gaussian mixture model typically used to train a universal background model (UBM) with a DNN that has been trained on a different large-scale dataset. To train the T-matrix, we use a supervised UBM obtained from the DNN: filterbank input features provide the posterior information and MFCC features are then used to train the UBM, instead of a traditional unsupervised UBM derived from a single feature set. Next, we jointly use the DNN posteriors and MFCC features to compute the zeroth- and first-order Baum–Welch statistics for training the extractor from which the i-vector is obtained. The system is shown to achieve a significant improvement on the NIST 2008 speaker recognition evaluation telephone data task compared with state-of-the-art approaches.
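The zeroth- and first-order Baum–Welch statistics computed from DNN posteriors can be sketched as below; `posteriors` stands in for per-frame senone posteriors produced by the DNN, and the array shapes are illustrative assumptions.

```python
import numpy as np

def baum_welch_stats(mfcc, posteriors):
    """mfcc: (T, D) acoustic features; posteriors: (T, C) DNN senone posteriors.

    Returns N (C,) zeroth-order and F (C, D) first-order statistics, which feed
    total-variability (T-matrix) training and i-vector extraction.
    """
    N = posteriors.sum(axis=0)    # soft frame counts per class
    F = posteriors.T @ mfcc       # posterior-weighted feature sums
    return N, F
```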

6.
Eigenphone-based speaker adaptation works well when sufficient adaptation data is available, but it overfits severely when the adaptation data is scarce. To overcome this, the paper proposes a speaker adaptation algorithm based on an eigenphone speaker subspace. It first reviews the principle of eigenphone speaker adaptation in a hidden Markov model-Gaussian mixture model (HMM-GMM) speech recognition system. A speaker subspace is then introduced to model the correlation between the eigenphone matrices of different speakers, and a new eigenphone speaker-subspace adaptation algorithm is obtained by estimating a speaker-dependent coordinate vector. Finally, the proposed algorithm is compared with conventional speaker-subspace adaptation. Mandarin continuous speech recognition experiments on the Microsoft corpus show that, compared with eigenphone speaker adaptation, the proposed algorithm improves performance substantially when very little adaptation data is available and largely overcomes overfitting; compared with eigenvoice adaptation, it attains lower space complexity at a small cost in performance and is thus more practical.

7.
We propose a novel feature-processing technique that provides a cepstral liftering effect in the log-spectral domain. Cepstral liftering aims at equalizing the variance of cepstral coefficients for distance-based speech recognizers and, as a result, provides robustness against additive noise and speaker variability. In the popular hidden Markov model based framework, however, cepstral liftering has no effect on recognition performance. We derive a filtering method in the log-spectral domain corresponding to cepstral liftering. The proposed method performs high-pass filtering based on the decorrelation of filter-bank energies. We show that in noisy speech recognition the proposed method reduces the error rate by 52.7% relative to the conventional feature.
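For reference, standard sinusoidal cepstral liftering (the cepstral-domain operation whose log-spectral equivalent the paper derives) can be sketched as follows; the lifter length L = 22 is just a common choice, not taken from the paper.

```python
import numpy as np

def lifter_cepstra(cepstra, L=22):
    """Apply sinusoidal liftering: c'_n = (1 + (L/2) * sin(pi * n / L)) * c_n.

    cepstra: (n_frames, n_ceps). Liftering rescales the higher-order
    coefficients so that all coefficients have comparable variance.
    """
    n = np.arange(cepstra.shape[1])
    lift = 1.0 + (L / 2.0) * np.sin(np.pi * n / L)
    return cepstra * lift
```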

8.
To address the difficulty of mixing data from different sources in neural-network acoustic modeling, this paper proposes a neural speech synthesis method based on perceptual quantization coding. A perceptual quantization coding model is designed to learn representations of the differences in timbre, language, and emotion across large amounts of speech, and a unified neural acoustic model is built by mixing data from many speakers during training. Through data sharing and transfer learning within this unified model, the amount of data required to build a synthesis system is greatly reduced, and the timbre, language, emotion, and other attributes of the synthesized speech can be controlled effectively. The quality and flexibility of neural speech synthesis are thereby improved: a synthesis system built from one hour of data reaches a naturalness of 4.0 MOS, matching or exceeding the level of an average speaker.

9.
A new method for sub-component analysis and effectiveness evaluation of speech feature parameters (cited by 2: 0 self, 2 others)
Speech signals carry two major kinds of information, linguistic content and speaker individuality, and their effective extraction and enhancement are very important for speech recognition and speaker recognition. This paper proposes a 4S method for sub-component analysis and effectiveness evaluation of the linguistic and speaker-individual components in speech feature parameters; it analyzes the proportions of the two components and uses quantitative indices to judge how effective a feature parameter is for speech recognition and speaker recognition. Applying the 4S analysis to the commonly used features LPC, LPCC, and MFCC shows that all of them carry more linguistic information than speaker information, with linguistic-to-individual ratio factors (LIR) of 1.30, 1.44, and 1.61 respectively, and that the effectiveness of the three features for both speech recognition and speaker recognition increases in the same order.

10.
Dysarthria is a degenerative disorder of the central nervous system that affects the control of articulation and pitch; it therefore affects the uniqueness of the sound produced by the speaker, which makes dysarthric speaker recognition a challenging task. In this paper, a feature-extraction method based on deep belief networks is presented for identifying a speaker suffering from dysarthria. The effectiveness of the proposed method is demonstrated and compared with the well-known Mel-frequency cepstral coefficient features. For classification, a multi-layer perceptron neural network with two structures is proposed. Our evaluations on the Universal Access speech database produced promising results and outperformed other baseline methods. In addition, speaker identification under both text-dependent and text-independent conditions is explored. The highest accuracy achieved by the proposed system is 97.3%.
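The MLP classification stage can be sketched with scikit-learn; the feature matrix below is a random placeholder standing in for the DBN (or MFCC) features, and the hidden-layer sizes are illustrative, not the paper's two structures.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# X: (n_utterances, n_features) feature vectors; y: speaker labels (placeholder data).
X, y = np.random.randn(200, 40), np.random.randint(0, 10, 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("speaker-ID accuracy:", clf.score(X_te, y_te))
```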

11.
Robust speech recognition techniques: current status and prospects (cited by 12: 0 self, 12 others)
姚文冰  姚天任  韩涛 《信号处理》2001,17(6):484-493
After briefly describing the background from which robust speech recognition emerged, this paper surveys the main techniques, current research status, and future directions of robust speech recognition at home and abroad. It first outlines the interference sources that degrade speech quality and affect the robustness of speech recognition systems. It then reviews speech enhancement, robust speech feature extraction, feature- and model-based compensation, microphone arrays, auditory processing modeled on the human ear, and audio-visual bimodal speech recognition, together with their state of development. Finally, future directions of robust speech recognition are discussed.

12.
Building on missing-data and acoustic back-off techniques, this paper proposes a fuzzy-rule-based robust speech recognition method. Fuzzy rules relating the reliability of each feature component to its probability distribution are first established from prior knowledge or assumptions; during recognition, the output probability of an observation vector is obtained from a rule-based fuzzy logic system, and a concrete implementation is given for a cepstral recognition system. Experimental results show that the proposed method clearly outperforms both the missing-data and the acoustic back-off techniques.

13.
Speaker adaptation techniques are generally used to reduce speaker differences in speech recognition. In this work, we focus on features suited to linear-regression-based speaker adaptation. They are obtained by a feature transformation based on independent component analysis (ICA), and the feature transformation matrices are estimated from the training data and the adaptation data. Since the adaptation data is not sufficient to reliably estimate the ICA-based feature transformation matrix, the matrix estimated from a new speaker's utterance must be adjusted. To cope with this problem, we propose a smoothing method: a linear interpolation between the speaker-independent (SI) and speaker-dependent (SD) feature transformation matrices. Our experiments show that the proposed method is more effective in the mismatched case, where adaptation performance improves because the smoothed feature transformation matrix makes speaker adaptation with noisy speech more robust.
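The interpolation smoothing itself reduces to a weighted combination of the two matrices; the sketch below assumes a fixed interpolation weight alpha, whereas the paper may choose it differently.

```python
import numpy as np

def smooth_transform(W_si, W_sd, alpha=0.7):
    """Linear interpolation between the speaker-independent matrix W_si
    (reliably estimated on training data) and the speaker-dependent matrix
    W_sd (estimated on scarce adaptation data)."""
    return alpha * W_si + (1.0 - alpha) * W_sd

# Adapted features: y = W_smooth @ x for each feature vector x.
```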

14.
Currently, many speaker recognition applications must handle speech corrupted by environmental additive noise without a priori knowledge of the noise characteristics. Some previous work in speaker recognition has used the missing feature (MF) approach to compensate for noise. In most of those applications, the spectral reliability decision is made with the signal-to-noise ratio (SNR) criterion, which attempts to measure the relative signal and noise energy at each frequency directly. An alternative approach to spectral data reliability has been used with some success in MF-based speech recognition. Here, we compare this new criterion with the SNR criterion for MF mask estimation in speaker recognition. The new reliability decision is based on extracting and analyzing several spectro-temporal features across the entire speech frame (but not across time) that highlight the differences between spectral regions dominated by speech and by noise. We call it the feature classification (FC) criterion: it uses several spectral features to establish spectrogram reliability, unlike the SNR criterion, which relies on a single feature, the SNR. We evaluated the proposal through speaker verification experiments on the Ahumada speech database corrupted by different types of noise at various SNR levels. The experiments demonstrate that the FC criterion achieves considerably better recognition accuracy than the SNR criterion on the speaker verification tasks tested.
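The baseline SNR-criterion mask can be sketched as below: the noise spectrum is estimated from leading frames assumed to contain no speech, and a time-frequency cell is marked reliable when its local SNR exceeds a threshold (the frame count and threshold are illustrative, not the paper's settings).

```python
import numpy as np

def snr_reliability_mask(power_spec, n_noise_frames=10, snr_threshold_db=0.0):
    """power_spec: (n_frames, n_bins) power spectrogram.

    Returns a boolean mask of the same shape: True = reliable (speech-dominated),
    False = unreliable (noise-dominated), for missing-feature processing.
    """
    noise_est = power_spec[:n_noise_frames].mean(axis=0) + 1e-12
    local_snr_db = 10.0 * np.log10(power_spec / noise_est + 1e-12)
    return local_snr_db > snr_threshold_db
```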

15.
高畅  李海峰  马琳 《信号处理》2012,28(6):851-858
Compressed sensing performs compressive measurement based on the sparsity of a signal, raising signal acquisition from sampling the signal to sensing its information, and is a revolution in signal processing. This paper proposes a sparse representation of speech signals based on an Uncertainty Basis Dictionary (UBD), applies compressed sensing to compress this sparse representation, and proposes an algorithm that reconstructs the speech signal by solving a linear programming problem. Through speech recognition, speaker recognition, and emotion recognition experiments, it examines from a content-analysis perspective whether this compressed-sensing-based information acquisition preserves the main content of the speech signal. The results show that the accuracies of speech recognition, speaker recognition, and emotion recognition are essentially consistent with those obtained by current methods in these fields, indicating that the compressed-sensing-based approach captures the semantic, speaker, and emotional information of speech well.
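Reconstruction by linear programming typically means basis pursuit: minimize ||x||_1 subject to Ax = b. A minimal sketch with scipy follows; the sensing matrix and measurements are random placeholders, not the paper's UBD.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 s.t. A x = b as a linear program.

    Split variables z = [x; u] with -u <= x <= u and minimize sum(u)."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])
    A_eq = np.hstack([A, np.zeros((m, n))])
    A_ub = np.vstack([np.hstack([np.eye(n), -np.eye(n)]),     #  x - u <= 0
                      np.hstack([-np.eye(n), -np.eye(n)])])   # -x - u <= 0
    b_ub = np.zeros(2 * n)
    bounds = [(None, None)] * n + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b, bounds=bounds)
    return res.x[:n]

# Toy usage: recover a sparse vector from fewer measurements than dimensions.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 80))
x_true = np.zeros(80); x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
x_hat = basis_pursuit(A, A @ x_true)
```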

16.
This paper presents results on whispered speech recognition using gammatone filterbank cepstral coefficients in speaker-dependent mode. The isolated words used in the experiments are taken from the Whi-Spe database. Whispered speech recognition is based on dynamic time warping and hidden Markov model methods. The experiments focus on the following modes: normal speech, whispered speech, and their combinations (normal/whispered and whispered/normal). The results demonstrate an important improvement in recognition after applying cepstral mean subtraction, especially in mixed train/test scenarios.
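Cepstral mean subtraction is a simple per-utterance normalization; a minimal sketch:

```python
import numpy as np

def cepstral_mean_subtraction(cepstra):
    """cepstra: (n_frames, n_ceps) for one utterance.

    Subtracting the per-utterance mean removes stationary convolutional
    effects (channel, recording condition), which helps when training and
    test conditions differ, e.g. normal vs. whispered speech."""
    return cepstra - cepstra.mean(axis=0, keepdims=True)
```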

17.
Speaker recognition based on robust auditory features (cited by 3: 0 self, 3 others)
林琳  陈虹  陈建 《电子学报》2013,41(3):619-624
To improve the performance of speaker recognition systems in noisy environments, this paper proposes a robust auditory feature extraction algorithm and applies it to a speaker recognition system. An adaptive compressive Gammachirp filterbank is used to simulate the auditory characteristics of the human cochlea and to perform frequency-domain subband filtering of the input speech; the resulting log subband energies serve as the auditory feature parameters. The discrete cosine transform and kernel principal component analysis are then applied to transform the extracted features, reducing their dimensionality and improving their noise robustness and speaker discriminability. Experimental results show that, when used in the speaker recognition system, the new auditory features outperform both Mel-frequency cepstral coefficients and Gammatone-based auditory features in robustness and recognition performance.
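The DCT decorrelation and dimension-reduction step over log subband energies can be sketched as below; the filterbank outputs are a placeholder and the number of retained coefficients is illustrative.

```python
import numpy as np
from scipy.fft import dct

def dct_features(log_subband_energy, n_keep=13):
    """log_subband_energy: (n_frames, n_bands) log energies from a filterbank.

    A type-II DCT along the band axis decorrelates the bands; keeping the
    first n_keep coefficients reduces dimensionality, analogous to the
    cepstral step in MFCC extraction.
    """
    coeffs = dct(log_subband_energy, type=2, axis=1, norm='ortho')
    return coeffs[:, :n_keep]
```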

18.
薛峰  俞一彪 《信号处理》2010,26(1):127-131
When the confidence analysis of missing-data theory is applied to speaker recognition, filterbank speech features are used; although this improves the robustness of the system, the overall misidentification rate remains high. To reduce it further, this paper builds on the confidences of the filterbank feature components and proposes CBTM, a method for computing the confidence of each dimension of the cepstral-domain MFCC features. Through a confidence transformation matrix, the method estimates the confidence of each MFCC dimension after Mel spectral subtraction, and then weights the variances of the GMM so that low-confidence feature components contribute less to the output probability, thereby improving robustness. In speaker identification experiments on the SUDA2002 corpus, the method achieves a lower misidentification rate than conventional methods for the white, pink, and factory1 noises of the NOISEX-92 database, demonstrating its effectiveness.
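One way to realize variance weighting in a diagonal-covariance GMM is sketched below: each dimension's variance is inflated in inverse proportion to its confidence, so low-confidence dimensions contribute less to the log-likelihood. The exact weighting rule used by CBTM is not given in the abstract, so this particular form is an assumption.

```python
import numpy as np
from scipy.special import logsumexp

def weighted_gmm_loglik(x, weights, means, variances, confidence):
    """x: (D,) feature vector; weights: (M,); means, variances: (M, D);
    confidence: (D,) values in (0, 1], 1 = fully reliable.

    Dividing each variance by its confidence enlarges it for unreliable
    dimensions, shrinking their influence on the output probability.
    """
    var_w = variances / confidence            # (M, D) inflated variances
    log_comp = (np.log(weights)
                - 0.5 * np.sum(np.log(2 * np.pi * var_w)
                               + (x - means) ** 2 / var_w, axis=1))
    return logsumexp(log_comp)
```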

19.
Endpoint detection of noisy speech based on cepstral features (cited by 44: 0 self, 44 others)
胡光锐  韦晓东 《电子学报》2000,28(10):95-97
One cause of recognition errors in speech recognition systems is inaccurate endpoint detection. At high signal-to-noise ratios, determining the endpoints of speech correctly is not difficult; however, most practical speech recognition systems must operate at low signal-to-noise ratios, where conventional endpoint detection methods, such as energy-based detection, do not work effectively. This paper uses cepstral features to detect speech endpoints and proposes two algorithms for noisy speech: the first uses cepstral distance instead of short-time energy as the decision threshold, and the second improves hidden Markov model (HMM) based speech detection to adapt to changes in the noise. Experimental results show that the methods achieve highly accurate endpoint detection for noisy speech.
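The cepstral-distance decision of the first algorithm can be sketched as below: a background cepstrum is estimated from leading frames assumed to be non-speech, and each frame whose cepstral distance from the background exceeds a threshold is marked as speech. Frame lengths, cepstral order, and threshold are illustrative choices.

```python
import numpy as np

def frame_cepstrum(frame, n_ceps=12):
    """Real cepstrum of one frame from the log magnitude spectrum."""
    spec = np.abs(np.fft.rfft(frame * np.hamming(len(frame)))) + 1e-12
    return np.fft.irfft(np.log(spec))[:n_ceps]

def cepstral_distance_vad(signal, frame_len=256, hop=128,
                          n_noise_frames=10, threshold=1.0):
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len, hop)]
    ceps = np.array([frame_cepstrum(f) for f in frames])
    background = ceps[:n_noise_frames].mean(axis=0)   # leading frames = noise
    dist = np.linalg.norm(ceps - background, axis=1)  # cepstral distance per frame
    return dist > threshold                           # True = speech frame
```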

20.
Current voice cloning methods based on pre-trained speaker encoders can synthesize speech with high timbre similarity for speakers seen during training, but for speakers unseen during training the cloned speech still differs clearly in timbre from the real speaker. To address this, the paper proposes a timbre-consistent speaker feature extraction method. It uses the state-of-the-art speaker recognition model TitaNet as the backbone of the speaker encoder and, based on the prior knowledge that a speaker's timbre remains constant within a speech segment, introduces a timbre-consistency constraint loss for training the speaker encoder, so as to extract more accurate speaker timbre features and increase the robustness and generalization of the speaker representation. The extracted features are then fed to the end-to-end speech synthesis model VITS for voice cloning. Experimental results show that the proposed method outperforms the baseline system on two public speech datasets and improves the timbre similarity of cloned speech for unseen speakers.
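One plausible form of such a consistency constraint is sketched below: two chunks cut from the same utterance are encoded, and the loss penalizes low cosine similarity between their embeddings. The exact loss used in the paper is not specified in the abstract, so this formulation is an assumption.

```python
import torch
import torch.nn.functional as F

def timbre_consistency_loss(encoder, chunk_a, chunk_b):
    """chunk_a, chunk_b: two feature segments (batch, time, dim) cut from the
    same utterances; encoder: a speaker encoder (e.g. a TitaNet-style network)
    mapping a segment to an embedding of shape (batch, emb_dim).

    Since timbre should not change within an utterance, the two embeddings
    are pushed toward cosine similarity 1.
    """
    emb_a = encoder(chunk_a)
    emb_b = encoder(chunk_b)
    return (1.0 - F.cosine_similarity(emb_a, emb_b, dim=-1)).mean()
```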
