Similar Literature
 19 similar documents found (search time: 250 ms)
1.
Probability measures and distance measures are the two most fundamental measures in pattern recognition, and vector quantization (VQ) is a typical distance-measure-based recognition algorithm. Building on VQ and on the theory of quantum norm distance measures, this paper explores a speaker recognition method based on quantum norm distance. To handle the time-varying, stochastic, and high-dimensional nature of speech, the method treats each speech frame as a quantum state and, following quantum measurement theory, measures norm distances between quantum states so that the states can be classified and clustered effectively. The study shows that the method effectively reduces the complexity of speech signal processing. Simulations on a classical computer show that it slightly outperforms VQ in running time and clearly outperforms VQ in recognition rate, offering a new avenue for theoretical research on speaker recognition.
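The VQ baseline that this abstract compares against can be sketched as follows: train one codebook per enrolled speaker with plain k-means, then identify a test utterance by the codebook with the lowest average distortion. This is a minimal toy illustration, not any paper's implementation; the codebook size, iteration count, and all function names are invented here.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train_codebook(frames, k=4, iters=20, seed=0):
    """Train a VQ codebook with plain k-means."""
    rng = random.Random(seed)
    centroids = rng.sample(frames, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for f in frames:
            i = min(range(k), key=lambda j: dist2(f, centroids[j]))
            clusters[i].append(f)
        for j, c in enumerate(clusters):
            if c:  # keep the old centroid if a cluster emptied out
                centroids[j] = [sum(col) / len(c) for col in zip(*c)]
    return centroids

def avg_distortion(frames, codebook):
    """Mean distance of each frame to its nearest codeword."""
    return sum(min(dist2(f, c) for c in codebook) for f in frames) / len(frames)

def identify(test_frames, codebooks):
    """Pick the enrolled speaker whose codebook distorts the test least."""
    return min(codebooks, key=lambda s: avg_distortion(test_frames, codebooks[s]))
```

With two synthetic "speakers" clustered around different means, `identify` picks the codebook nearest to the test frames.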

2.
A Speaker Recognition Method Based on Quantum Norm Distance   Total citations: 1 (self-citations: 0, other citations: 1)

3.
郑普亮  许刚 《计算机工程》2006,32(9):193-195
This paper discusses the application of time-frequency distributions and distance measures to speaker verification. In the experiments, the time-frequency distribution of the speech signal serves as the speaker model, and the verification decision is based on a distance measure between distributions. Several kernel functions and distance measures are compared, and a modified Nelder-Mead algorithm is used to optimize the kernel parameters, which markedly improves verification accuracy; this was confirmed experimentally.

4.
This paper defines speaker distribution features based on a common codebook and proposes a speaker recognition algorithm based on distribution-feature statistics. A common codebook is built from the training speech of all reference speakers to partition the speech feature space, and each reference speaker is modeled by the statistics of how their training speech is distributed over the common codewords. During recognition, a pairwise sequence alignment method matches the distribution-feature statistics of the test speech against the reference speaker models to identify the speaker. Experiments show that the method maintains the recognition rate while further improving the speed of VQ-based speaker recognition.

5.
A Speaker Clustering Algorithm for Speech Recognition   Total citations: 1 (self-citations: 1, other citations: 1)
This paper describes a speaker clustering algorithm used in robust speech recognition, covering its role in recognition and how it is applied: the features and distance measures commonly used for clustering, and the concrete steps of the clustering procedure. We evaluated the algorithm in two ways: by directly computing the utterance clustering accuracy, and by measuring its contribution to speaker adaptation, i.e., comparing system performance with and without the algorithm. Experiments show that with the GLR distance as the distance measure, the algorithm reaches an utterance clustering accuracy of 85.69%; in recognition experiments, the clustering provides more adequate data for speaker adaptation and improves its effect, bringing the system's error rate close to that obtained when adapting with known speaker identities.

6.
王波  徐毅琼  李弼程 《计算机工程与设计》2007,28(10):2401-2402,2416
This paper proposes a speaker segmentation algorithm for conversational speech that uses segment-level speech features to segment the test data by speaker. In the implementation, a distance measure for segment-level features of the covariance model is derived from Chebyshev's sum inequality. A suitable segment length for the segment-level features was chosen experimentally. Results show that the segment-level method effectively segments multi-speaker conversational speech by speaker, improving both the accuracy and the speed of the speaker recognition system.

7.
Speaker Recognition Based on FMFCC and HMM   Total citations: 2 (self-citations: 0, other citations: 2)
张永亮  张先庭  鲁宇明 《计算机仿真》2010,27(5):352-354,358
Mel-frequency cepstral coefficients (MFCC) are widely used features in speaker recognition, but speech is a non-stationary signal and MFCC do not capture its time-frequency characteristics well. To address this shortcoming and raise the recognition rate, the fractional Fourier transform (FRFT), a newer time-frequency analysis tool, is combined with MFCC: the MFCC are generalized to fractional form, yielding fractional Mel-frequency cepstral coefficients (FMFCC) to characterize the speech signal. A separability measure verifies the effectiveness of the features. An FMFCC feature library was built for 20 different speakers, and hidden Markov models (HMM) were used for simulated speaker recognition. Simulations show that at a suitable transform order, the average speaker recognition rate exceeds 93%.

8.
Gender-Based Classified-CHMM Speech Recognition   Total citations: 2 (self-citations: 0, other citations: 2)
This paper investigates speech recognition and proposes a method that classifies continuous hidden Markov models (CHMM) by speaker gender before recognition. First, Mel-frequency cepstral coefficients (MFCC) are computed from the speech signal and a CHMM identifies the speaker's gender; the gender-specific CHMM is then used for speech recognition. Experiments verify the effectiveness of the method.

9.
To address the limited speaker recognition performance obtainable with MFCC alone, a feature extraction method combining time-frequency features with MFCC is proposed. The time-frequency distribution of the speech signal is computed first, then mapped back to the frequency domain, from which MFCC and ΔMFCC are extracted as feature parameters; a support vector machine is then used for speaker recognition. Simulations compare the recognition performance of MFCC and MFCC+ΔMFCC on the raw signal and on various time-frequency distributions; the results show that with the Choi-Williams distribution (CWD), the recognition rate of MFCC and ΔMFCC rises to 95.7%.

10.
Speaker Recognition Based on Support Vector Machines and Wavelet Analysis   Total citations: 2 (self-citations: 0, other citations: 2)
To address the speaker recognition problem, a recognition method and framework based on support vector machines and wavelet analysis is proposed: wavelet analysis is applied in signal preprocessing, where its singularity detection separates speech from noise for speech enhancement; the system is then trained and tested on samples, with an SVM performing the speaker classification.

11.
Speech and speaker recognition are important tasks for computer systems. In this paper, an expert speaker recognition system based on optimum wavelet packet entropy is proposed, using real speech/voice signals. The study combines a new feature extraction approach and a classification approach built on optimum wavelet packet entropy parameter values, which are obtained from real English speech/voice waveforms measured with a speech experimental setup. A genetic-wavelet packet-neural network (GWPNN) model is developed. GWPNN has three layers: a genetic algorithm, a wavelet packet layer, and a multi-layer perceptron. The genetic algorithm layer selects the feature extraction method and finds the optimum wavelet entropy parameter values. One of four feature extraction methods is selected by the genetic algorithm: wavelet packet decomposition alone, or wavelet packet decomposition combined with the short-time Fourier transform, the Born-Jordan time-frequency representation, or the Choi-Williams time-frequency representation. The wavelet packet layer performs optimum feature extraction in the time-frequency domain and consists of wavelet packet decomposition and wavelet packet entropies. The multi-layer perceptron, a feed-forward neural network, evaluates the fitness function of the genetic algorithm and classifies speakers. The performance of the developed system was evaluated on noisy English speech/voice signals. The test results showed that the system was effective in detecting real speech signals, with a correct classification rate of about 85% for speaker classification.
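The wavelet packet entropy feature underlying this abstract can be illustrated with a bare-bones sketch: build a full Haar wavelet packet tree and take the Shannon entropy of the normalized subband energies. This is a toy under stated assumptions (Haar wavelet, fixed depth, invented names), not the GWPNN system itself.

```python
import math

def haar_step(x):
    """One Haar analysis step: (approximation, detail) at half length."""
    s = 0.5 ** 0.5
    a = [s * (x[i] + x[i + 1]) for i in range(0, len(x) - 1, 2)]
    d = [s * (x[i] - x[i + 1]) for i in range(0, len(x) - 1, 2)]
    return a, d

def wp_leaves(x, depth):
    """Full wavelet packet tree: split every node down to `depth`."""
    nodes = [x]
    for _ in range(depth):
        nxt = []
        for n in nodes:
            a, d = haar_step(n)
            nxt.extend([a, d])
        nodes = nxt
    return nodes

def wp_entropy(x, depth=2):
    """Shannon entropy of the normalized subband energies."""
    energies = [sum(v * v for v in leaf) for leaf in wp_leaves(x, depth)]
    total = sum(energies) or 1.0
    p = [e / total for e in energies]
    return -sum(pi * math.log(pi) for pi in p if pi > 0)
```

A constant (DC) signal concentrates all energy in one subband, giving entropy near zero; a signal with energy spread across subbands gives a larger entropy, which is what makes the value usable as a feature.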

12.
The key to voiceprint recognition is extracting feature parameters from the speech signal that can characterize the speaker. Based on the GMM-UBM model, a text-independent voiceprint recognition system was implemented in Matlab, and the mainstream static feature parameters MFCC, LPCC, and LPC, as well as MFCC combined with dynamic parameters, were compared for both speaker verification and speaker identification. With different feature orders, different numbers of Gaussian mixtures, and training and test speech of different durations, theoretical and actual recognition performance, recognition time, and the share of total time spent on recognition were analyzed. The results show that under the GMM-UBM approach, MFCC gives the best recognition performance among the three static features in most cases, while also taking the longest recognition time, and that the recognition rate does not rise monotonically with feature order. Combining static parameters with dynamic parameters of a well-chosen order improves recognition, but increasing the order of the dynamic parameters does not necessarily improve system performance.
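The GMM-UBM decision referenced in this abstract scores a test utterance by its average log-likelihood under the speaker model minus that under the universal background model. A minimal sketch with fixed (untrained) diagonal-covariance models follows; all parameters, thresholds, and names are illustrative assumptions, not the system described above.

```python
import math

def gauss_logpdf(x, mean, var):
    """Log density of a diagonal-covariance Gaussian."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

def gmm_loglik(frames, weights, means, vars_):
    """Average per-frame log-likelihood under a diagonal GMM."""
    total = 0.0
    for x in frames:
        comps = [math.log(w) + gauss_logpdf(x, m, v)
                 for w, m, v in zip(weights, means, vars_)]
        mx = max(comps)  # log-sum-exp for numerical stability
        total += mx + math.log(sum(math.exp(c - mx) for c in comps))
    return total / len(frames)

def verify(frames, spk, ubm, threshold=0.0):
    """GMM-UBM verification: accept if the log-likelihood ratio is high."""
    score = gmm_loglik(frames, *spk) - gmm_loglik(frames, *ubm)
    return score > threshold
```

Frames that match the speaker model score above the UBM and are accepted; mismatched frames are rejected.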

13.
Speaker recognition is a major research challenge across languages. For an automatic speaker recognition system trained on normal speech, shouting introduces a mismatch between enrollment and test conditions and degrades identification performance, since shouting involves extreme vocal effort. In addition, classification is time-consuming, and accuracy and root-mean-square error rate need improvement. The objective of this work is to develop an efficient speaker recognition system. An improved Wiener filter algorithm is applied for better noise reduction, and Mel-frequency cepstral coefficient features are extracted from the denoised signals to obtain the essential feature vectors. Input samples are then formed from these features after dimensionality reduction with probabilistic principal component analysis. Finally, a recurrent neural network with bidirectional long short-term memory performs the classification to improve prediction accuracy. To check its effectiveness, the proposed approach is compared with existing methods in terms of accuracy, sensitivity, and error rate; it achieves an accuracy of 95.77%.

14.
Building on research into underdetermined blind speech separation using target source direction information with nonlinear time-frequency masking, and into BP-network speaker recognition, this paper designs a practical scheme for real-world multi-speaker conversation scenarios that extracts the speech of an arbitrary target speaker in an arbitrary direction. The scheme has two stages, target-speech search and extraction: the search stage uses BP-network speaker recognition, and the extraction stage uses an improved underdetermined blind separation method that clusters source direction information with a potential function and applies nonlinear time-frequency masking. Experiments confirm the scheme is feasible and can effectively extract the target speaker's speech from a mixed speech stream in any direction, with an average SNR gain of 8.68 dB, a similarity coefficient of 85%, a recognition rate of 61%, and a running time of 20.6 s.

15.
A non-parametric model is built to characterize the speaker's feature distribution, and the Earth Mover's Distance (EMD) is used to measure similarity between distributions. The method makes effective use of limited data to represent speaker identity and computes the distance between the speaker's feature distribution and the test speech distribution directly; unlike traditional vector quantization and Gaussian mixture models, it does not need to compute the total average distortion or minimum similarity over all speech frames, keeping computation simple and, above all, reducing the system's dependence on the amount of data. In addition, adaptive histogram equalization corrects the raw speech features, so that features obtained in noisy environments better match the true distribution after correction, improving noise robustness. Experiments show that the proposed method performs strongly in short-utterance speaker recognition under noisy conditions.
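For 1-D histograms on common bins, the Earth Mover's Distance used in this abstract reduces to the L1 distance between cumulative distribution functions. A tiny sketch under that assumption (unit bin spacing, invented function name; not the paper's code):

```python
def emd_1d(p, q):
    """Earth Mover's Distance between two 1-D histograms on the same bins.

    With unit bin spacing, 1-D EMD equals the summed absolute difference
    of the normalized cumulative distribution functions.
    """
    assert len(p) == len(q)
    sp, sq = sum(p), sum(q)
    cdf_p = cdf_q = 0.0
    total = 0.0
    for pi, qi in zip(p, q):
        cdf_p += pi / sp
        cdf_q += qi / sq
        total += abs(cdf_p - cdf_q)
    return total
```

Moving one unit of mass across two bins costs 2, and identical histograms cost 0, matching the intuitive "work to move earth" reading of the metric.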

16.
A novel approach for joint speaker identification and speech recognition is presented in this article. Unsupervised speaker tracking and automatic adaptation of the human-computer interface are achieved by the interaction of speaker identification, speech recognition and speaker adaptation for a limited number of recurring users. Together with a technique for efficient information retrieval, a compact model of speech and speaker characteristics is presented. Applying speaker-specific profiles allows speech recognition to take individual speech characteristics into consideration and achieve higher recognition rates. Speaker profiles are initialized and continuously adapted by a balanced strategy of short-term and long-term speaker adaptation combined with robust speaker identification. Different users can be tracked by the resulting self-learning speech-controlled system, and only a very short enrollment of each speaker is required. Subsequent utterances are used for unsupervised adaptation, resulting in continuously improved speech recognition rates. Additionally, the detection of unknown speakers is examined with the objective of avoiding the need to train new speaker profiles explicitly. The speech-controlled system presented here is suitable for in-car applications, e.g. speech-controlled navigation, hands-free telephony or infotainment systems, on embedded devices. Results are presented for a subset of the SPEECON database and validate the benefit of the speaker adaptation scheme and the unified modeling in terms of speaker identification and speech recognition rates.

17.
Pre-processing is one of the vital steps in developing a robust and efficient recognition system. Better pre-processing not only aids better data selection but also significantly reduces computational complexity, and an efficient frame selection technique can improve the overall performance of the system. Pre-quantization (PQ) is the technique of selecting fewer frames in the pre-processing stage to reduce the computational burden in the post-processing stages of speaker identification (SI). In this paper, we develop PQ techniques based on spectral entropy and spectral shape to pick suitable frames containing speaker-specific information, which varies from frame to frame depending on the spoken text and environmental conditions. The attempt is to exploit the statistical properties of the distributions of speech frames at the pre-processing stage of speaker recognition. Our aim is not only to reduce the frame rate but also to keep identification accuracy reasonably high. We have also analyzed the robustness of the proposed techniques on noisy utterances. To establish the efficacy of our proposed methods, we used two different databases: POLYCOST (telephone speech) and YOHO (microphone speech).
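A spectral-entropy pre-quantization pass of the kind this abstract describes can be sketched as: rank frames by the Shannon entropy of their normalized magnitude spectra and keep the low-entropy (spectrally peaky, voiced-like) ones. This is an illustrative toy, not the authors' method; the naive DFT, ranking rule, and keep ratio are assumptions.

```python
import cmath
import math

def spectral_entropy(frame):
    """Shannon entropy of the normalized magnitude spectrum of one frame."""
    n = len(frame)
    spec = [abs(sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]
    total = sum(spec) or 1.0
    p = [s / total for s in spec]
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def prequantize(frames, keep_ratio=0.5):
    """Keep the fraction of frames with the lowest spectral entropy,
    discarding flat/noise-like frames; original frame order is preserved."""
    ranked = sorted(range(len(frames)), key=lambda i: spectral_entropy(frames[i]))
    kept = sorted(ranked[: max(1, int(len(frames) * keep_ratio))])
    return [frames[i] for i in kept]
```

A pure tone concentrates its spectrum in one bin (low entropy) while an irregular frame spreads energy across bins (high entropy), so the tone-like frame survives the selection.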

18.
Robustness is one of the most important topics for automatic speech recognition (ASR) in practical applications, and monaural speech separation based on computational auditory scene analysis (CASA) offers a solution to this problem. In this paper, a novel system is presented to separate the monaural speech of two talkers. Gaussian mixture models (GMMs) and vector quantizers (VQs) are used to learn the grouping cues on isolated clean data for each speaker. Given an utterance, speaker identification is first performed to identify the two speakers present in the utterance; the factorial-max vector quantization model (MAXVQ) is then used to infer the mask signals, and finally the utterance of the target speaker is resynthesized in the CASA framework. Recognition results on the 2006 speech separation challenge corpus show that the proposed system can significantly improve the robustness of ASR.

19.
Speaker variability is known to have an adverse impact on speech systems that process linguistic content, such as speech and language recognition. However, speech production changes due to stress and emotion have a similarly detrimental effect on speaker recognition, as they introduce a mismatch with speaker models typically trained on modal speech. The focus of this study is the analysis of stress-induced variation in speech and the design of an automatic stress level assessment scheme that could be used to direct stress-dependent acoustic models or normalization strategies. Current stress detection methods typically make a binary decision on whether or not the speaker is under stress; in reality, the amount of stress in individuals varies and can change gradually. Using speech and biometric data collected in a real-world, variable-stress-level law enforcement training scenario, this study considers two methods for stress level assessment. The first uses a nearest neighbor clustering scheme at the vowel-token and sentence levels to classify speech data into three levels of stress. The second employs Euclidean distance metrics within the multi-dimensional feature space to provide real-time stress level tracking. Evaluations on audio data confirmed by biometric readings show both methods to be effective in assessing stress level within a speaker (average accuracy of 55.6% in a 3-way classification task). In addition, the impact of high-level stress on in-set speaker recognition is evaluated and shown to reduce accuracy from 91.7% (low/mid stress) to 21.4% (high-level stress).
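Both assessment schemes in this abstract reduce to distance computations in feature space; a nearest-centroid version of the three-level classifier can be sketched as follows (hypothetical two-dimensional features and invented labels, not the study's actual features or scheme):

```python
def euclid(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def centroids(train):
    """Mean feature vector per stress level from labelled training data."""
    return {level: [sum(col) / len(vecs) for col in zip(*vecs)]
            for level, vecs in train.items()}

def stress_level(x, cents):
    """Assign the stress level whose centroid is nearest in feature space."""
    return min(cents, key=lambda lv: euclid(x, cents[lv]))
```

Tracking the assigned level frame by frame would give the kind of real-time stress trajectory the second method describes.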


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号