Similar Articles
20 similar articles found (search time: 218 ms)
1.
Traditional speaker recognition systems identify speakers from short-time spectral information. This approach performs well under certain conditions, but because some speaker characteristics are hidden in longer speech segments, adding long-term information may further improve performance. In this paper, phoneme duration information is added to the traditional model to raise the speaker identification rate. Spectral information is obtained by short-time analysis, whereas extracting phoneme durations is a long-term analysis that requires more speech data. The effectiveness of phoneme duration information for speaker identification is investigated on a large amount of speech data, and two methods are proposed to address the problems caused by small data volumes. Experimental results show that, when the speaker's acoustic model is properly built, phoneme duration information effectively improves the speaker identification rate even when little speech data is available.
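The idea of fusing the two information streams can be sketched as follows. This is a toy illustration, not the paper's exact models: per-speaker phoneme durations are modeled as Gaussians, and their log-likelihood is added, with an interpolation weight, to a short-time spectral score. All names and numbers here are hypothetical.

```python
import math

def gauss_loglik(x, mean, var):
    # log-likelihood of x under a 1-D Gaussian
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def duration_score(durations, model):
    # durations: list of (phoneme, duration_ms) observed in the utterance
    return sum(gauss_loglik(d, *model[p]) for p, d in durations)

# hypothetical per-speaker duration models: phoneme -> (mean_ms, variance)
spk_a = {"a": (90.0, 100.0), "n": (60.0, 64.0)}
spk_b = {"a": (120.0, 100.0), "n": (80.0, 64.0)}

utt = [("a", 92.0), ("n", 58.0), ("a", 88.0)]
score_a = duration_score(utt, spk_a)
score_b = duration_score(utt, spk_b)

# fuse with a spectral log-likelihood; lam balances the two streams
lam = 0.3
spectral = {"A": -10.0, "B": -10.5}  # illustrative spectral scores
total_a = spectral["A"] + lam * score_a
total_b = spectral["B"] + lam * score_b
```

Here the utterance's durations fit speaker A's model better, so the duration stream reinforces the spectral decision.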

2.
A speaker voice conversion method based on phoneme-tied codebook mapping   Cited: 1 (self-citations: 0, by others: 1)
The framework of a speaker voice conversion system is introduced, and the traditional codebook-mapping approach to voice conversion is discussed. Because traditional codebook mapping converts the spectrum as a weighted sum over all codebook entries, the converted speech suffers from excessive spectral smoothing, which degrades its quality. To overcome this problem, this paper proposes a phoneme-tied weighted codebook combination for spectral conversion, together with a decision tree for prosody conversion. Experiments show that even with a small amount of data, the method performs voice conversion well and yields relatively high speech quality.
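The core mapping step can be sketched as below. This is a minimal illustration, not the paper's system: each source codeword has a paired target codeword, a frame's conversion is a distance-based weighted sum of target codewords, and phoneme tying restricts the sum to the current phoneme's entries instead of the whole codebook. The codebooks and the 2-D "spectral" vectors are toy data.

```python
import math

def weights(frame, codewords):
    # softmax over negative squared distances to the source codewords
    d = [-sum((f - c) ** 2 for f, c in zip(frame, cw)) for cw in codewords]
    m = max(d)
    e = [math.exp(x - m) for x in d]
    s = sum(e)
    return [x / s for x in e]

def convert(frame, src_cb, tgt_cb):
    # weighted sum of paired target codewords
    w = weights(frame, src_cb)
    dim = len(tgt_cb[0])
    return [sum(w[i] * tgt_cb[i][k] for i in range(len(tgt_cb)))
            for k in range(dim)]

# codebooks tied to phonemes (hypothetical 2-D spectral vectors)
src = {"a": [[0.0, 0.0], [1.0, 0.0]], "i": [[5.0, 5.0], [6.0, 5.0]]}
tgt = {"a": [[0.5, 0.5], [1.5, 0.5]], "i": [[4.0, 4.0], [5.0, 4.0]]}

frame = [0.9, 0.1]  # a frame known to belong to phoneme "a"
# tied: only the "a" codebook contributes, avoiding smoothing toward "i"
tied = convert(frame, src["a"], tgt["a"])
```

Without the tying, the sum would run over the pooled codebooks of all phonemes, which is what produces the over-smoothed spectra the paper criticizes.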

3.
Voice conversion based on a genetic radial basis function neural network   Cited: 4 (self-citations: 1, by others: 4)
Voice conversion transforms one speaker's speech pattern into that of another speaker with different characteristics, so that the converted speech retains the source speaker's original linguistic content while carrying the target speaker's voice characteristics. This paper studies an RBF neural network trained by a genetic algorithm to capture the spectral-envelope mapping between speakers and thereby convert voice characteristics from one speaker to another. Converted speech quality was evaluated both objectively and subjectively on six Mandarin monophthong phonemes; the results show that the neural-network approach achieves the expected converted-speech performance. The results also show that, compared with K-means training, training the neural network with a genetic algorithm strengthens its global search ability and reduces the average spectral distortion between converted and target speech by about 10%.

4.
An endpoint detection method for Mandarin spoken words based on the mel-scale spectrum and phoneme segmentation   Cited: 3 (self-citations: 0, by others: 3)
Spectral analysis of the acoustic speech signal is used to locate segmentation points between frames of continuous speech, and combining this with phoneme segmentation successfully improves segmentation accuracy. Experiments show that the mel-scale spectrum method is better suited to speech segmentation than traditional endpoint detection methods that rely on simple decision features such as short-time energy and zero-crossing rate.

5.
An approach to voice conversion combining the STRAIGHT model with deep belief networks (DBNs) is proposed. First, spectral parameters of the source and target speakers are extracted with the STRAIGHT model, and two DBNs are trained on them to obtain speaker-specific characteristics in a high-order feature space. An artificial neural network (ANN) then connects the two high-order spaces and performs the feature conversion. Finally, a DBN trained on the target speaker's data inverts the converted features back into spectral parameters, and the STRAIGHT model synthesizes speech carrying the target speaker's characteristics. Experimental results show that this approach converts voices better than the traditional GMM-based approach, with converted speech closer to the target in both quality and similarity.
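The three-stage pipeline (encode to a high-order space, convert, decode) can be shown structurally as follows. This is only a shape sketch: trivial invertible linear maps stand in for the trained DBNs, a scalar scaling stands in for the ANN, and STRAIGHT analysis/synthesis is assumed to happen outside the snippet. None of this reflects the actual trained models.

```python
def dbn_encode(x, w, b):
    # stands in for a source/target DBN mapping to the high-order space
    return [w * v + b for v in x]

def dbn_decode(h, w, b):
    # inverse of the target DBN encoding (exact for this linear stand-in)
    return [(v - b) / w for v in h]

def ann_convert(h, scale):
    # stands in for the ANN linking source and target high-order spaces
    return [scale * v for v in h]

src_params = [1.0, 2.0, 3.0]        # toy spectral parameters from STRAIGHT

h_src = dbn_encode(src_params, w=2.0, b=0.5)   # source high-order features
h_tgt = ann_convert(h_src, scale=1.1)          # map into target space
tgt_params = dbn_decode(h_tgt, w=2.0, b=0.5)   # back to spectral parameters
# tgt_params would then drive STRAIGHT synthesis of the converted voice
```

The design point the abstract makes is that conversion happens in the learned high-order space rather than directly on spectra, which is why separate encoders and a decoder are needed.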

6.
An improved cross-lingual model adaptation method for speech synthesis   Cited: 1 (self-citations: 0, by others: 1)
In statistical parametric speech synthesis, cross-lingual model adaptation is mainly used when the target speaker's language differs from that of the source model: a small amount of the target speaker's speech is used to quickly build, in the source-model language, a synthesis system carrying the target speaker's timbre. This paper improves the traditional cross-lingual adaptation method based on phoneme mapping and triphone models in two ways: a data-selection step is combined with phoneme mapping to make the mapping more reliable, and cross-lingual mapping of prosodic information is introduced to compensate for the triphone model's weakness in representing prosody. Experiments on a Chinese-English cross-lingual adaptation system show that the improved system clearly outperforms the traditional method in both the naturalness and the similarity of the synthesized speech.

7.
This paper proposes a Chinese dialect identification system that fuses acoustic, phonotactic, and prosodic features, and describes experimental methods for converting linguistic information into these features. On this basis, and in view of the characteristics of Chinese dialect identification, a probabilistic multi-feature identification scheme is proposed. Experimental results show that prosodic features identify short utterances well, while phonotactic features are more effective for long utterances. For three Chinese dialects, fusing the three kinds of features achieves an identification rate of 95%.

8.
In traditional speaker recognition, phase information is often ignored on the assumption that the human ear is insensitive to it. To examine its effect on speaker recognition, a method for extracting phase feature parameters is proposed. Under both clean and noisy speech conditions, the effect of phase information on speaker identification performance is studied with Gaussian mixture models by combining the phase features with cochlear filter cepstral coefficients (CFCC). Experimental results show that phase information also plays an important role in speaker recognition: applying it to a speaker identification system clearly improves the recognition rate and robustness.

9.
The representation and extraction of speaker-specific information in speech is studied in depth, and a voice conversion method based on deep belief nets (DBNs) is proposed. Spectral parameters extracted from the source and target speakers are used to train separate DBNs, yielding high-order representations of each speaker's characteristics; an artificial neural network (ANN) connects the two high-order spaces and performs the feature conversion; a DBN trained on the target speaker's data then inverts the converted features into spectral parameters, from which the converted speech is synthesized. Experimental results show that, compared with the traditional GMM-based method, this method performs better, with converted speech closer to the target in quality and similarity.

10.
In the speaker space, speech features vary across utterances and over time. This variation arises mainly from changes in the phonetic information and the speaker information contained in the speech data; separating the two would enable robust speaker recognition. Assuming a "phonetic space" with large speaker variability and a "speaker space" with small speaker variability, phonetic and speaker information are separated by a subspace method, and speaker identification and speaker verification methods are proposed on that basis. Comparative experiments against conventional methods show that robust speaker models can be built from a small amount of training data.

11.
Parameters carrying high-level information, such as phoneme-level features, are entirely unaffected by the channel and are therefore considered a useful complement to low-level systems based on acoustic parameters; high-level information, however, suffers from data sparsity. A recognition method based on phoneme-feature supervectors is built and its performance analyzed using the BUT phoneme recognizer, and data pruning and KPCA mapping are then tried to improve it. The results show that pruning does not effectively improve recognition performance, but the algorithm incorporating KPCA mapping improves significantly. After further fusion with a mainstream GMM-UBM system, the EER drops from 8.4% to 6.7% relative to the GMM-UBM system alone.

12.
The problem of using a small amount of speech data to adapt a set of Gaussian HMMs (hidden Markov models) that have been trained on one speaker to recognize the speech of another is considered. The authors experimented with a phoneme-dependent spectral mapping for adapting the mean vectors of the multivariate Gaussian distributions (a method analogous to the confusion-matrix method that has been used to adapt discrete HMMs), and a heuristic for estimating covariance matrices from small amounts of data. The best results were obtained by training the mean vectors individually from the adaptation data and using the heuristic to estimate distinct covariance matrices for each phoneme.
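The adaptation idea can be sketched in a few lines. This is a minimal illustration under stated assumptions: each phoneme's Gaussian mean is re-estimated from the (small) adaptation set, and the covariance heuristic is approximated here by shrinking the per-phoneme sample variance toward a global variance. The shrinkage weight, the 1-D features, and the data are hypothetical, not the authors' exact heuristic.

```python
def adapt(frames_by_phoneme, global_var, alpha=0.5):
    # frames_by_phoneme: phoneme -> list of adaptation-data feature values
    models = {}
    for ph, xs in frames_by_phoneme.items():
        n = len(xs)
        mean = sum(xs) / n                       # mean trained per phoneme
        sample_var = sum((x - mean) ** 2 for x in xs) / n
        # shrinkage heuristic: few samples make sample_var unreliable,
        # so interpolate it with a global variance estimate
        var = alpha * sample_var + (1 - alpha) * global_var
        models[ph] = (mean, var)
    return models

data = {"a": [1.0, 1.2, 0.8], "t": [3.0, 3.4]}   # toy adaptation data
models = adapt(data, global_var=1.0)
```

The shrinkage keeps the variance away from the degenerate near-zero estimates that a handful of adaptation frames would otherwise produce.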

13.
To mitigate the effect of vocal effort on speaker recognition performance, given training data containing a small amount of whispered and shouted speech, a method combining maximum a posteriori (MAP) adaptation with constrained maximum likelihood linear regression (CMLLR) is proposed to update speaker models and project speaker features. MAP adaptation updates the speaker models trained on normal speech, while CMLLR feature-space projection transforms the features of whispered and shouted test speech, reducing the mismatch between training and test speech. Experimental results show that the MAP+CMLLR method clearly lowers the equal error rate (EER) of the speaker recognition system: compared with the baseline system, MAP adaptation alone, maximum likelihood linear regression (MLLR) model projection, and CMLLR feature-space projection alone, its average EER is reduced by 75.3%, 3.5%, 72%, and 70.9%, respectively. The results indicate that the proposed method weakens the influence of vocal effort on speaker discriminability and makes the speaker recognition system more robust to vocal-effort variation.
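The two ingredients can be sketched separately. This toy snippet shows the standard MAP mean update mu = (tau * mu0 + n * xbar) / (tau + n), and a CMLLR-style affine feature transform x' = A x + b; here A and b are illustrative scalars rather than parameters estimated by the actual EM procedure, and the features are 1-D.

```python
def map_mean(mu0, xs, tau=10.0):
    # MAP update: interpolate the prior mean with the data mean,
    # weighted by the relevance factor tau and the data count n
    n = len(xs)
    xbar = sum(xs) / n
    return (tau * mu0 + n * xbar) / (tau + n)

def cmllr_transform(frames, a, b):
    # CMLLR-style affine projection of test features
    return [a * x + b for x in frames]

prior_mean = 0.0                      # mean from the normal-speech model
adaptation = [1.0, 1.2, 0.8, 1.0]     # small adaptation set
mu = map_mean(prior_mean, adaptation)  # pulled part-way toward the data

shout_frames = [2.0, 2.2]             # shouted test features (toy values)
projected = cmllr_transform(shout_frames, a=0.5, b=0.1)
```

With few adaptation samples the MAP estimate stays close to the prior, which is exactly the behavior that makes it safer than a plain maximum-likelihood re-estimate on scarce whispered or shouted data.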

14.
This paper describes the work done in improving the performance of a Tamil speech recognition system using Time Scale Modification (TSM) and Vocal Tract Length Normalization (VTLN) techniques. The speech recognition system for Tamil was developed using a new approach of text-independent speech segmentation, with a phoneme-based language model for recognition. Speech recognition performance degrades with variations in speaking rate and vocal tract shape among speakers. To improve performance, both TSM and VTLN normalization techniques were used in this work: TSM was implemented using the phase vocoder approach, and VTLN using a speaker-specific bark/mel scale in the bark/mel domain. The performance of the Tamil speech recognition system was improved by applying both TSM and VTLN normalization.

15.
The fine spectral structure related to pitch information is conveyed in Mel cepstral features, with variations in pitch causing variations in the features. For speaker recognition systems, this phenomenon, known as "pitch mismatch" between training and testing, can increase error rates. Likewise, pitch-related variability may potentially increase error rates in speech recognition systems for languages such as English in which pitch does not carry phonetic information. In addition, for both speech recognition and speaker recognition systems, the parsing of the raw speech signal into frames is traditionally performed using a constant frame size and a constant frame offset, without aligning the frames to the natural pitch cycles. As a result, the power spectral estimation done as part of the Mel cepstral computation may include artifacts. Pitch-synchronous methods have addressed this problem in the past, at the expense of added complexity from a variable frame size and/or offset. This paper introduces Pseudo Pitch Synchronous (PPS) signal processing procedures that attempt to align each individual frame to its natural cycle and avoid truncation of pitch cycles while still using a constant frame size and frame offset. Text-independent speaker recognition experiments performed on NIST speaker recognition tasks demonstrate a performance improvement when the scores produced by systems using PPS are fused with traditional speaker recognition scores. In addition, a better distribution of errors across trials may be obtained for similar error rates, and some insight regarding the role of the fundamental frequency in speaker recognition is revealed. Speech recognition experiments run on the Aurora-2 noisy digits task also show improved robustness and better accuracy for extremely low signal-to-noise ratio (SNR) data.
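One plausible reading of the PPS framing idea can be sketched as follows: the frame grid keeps a constant size and offset, but each frame's start is nudged to the nearest pitch mark so the analysis window begins on a pitch cycle. The snapping rule, pitch marks, and sizes below are illustrative assumptions, not the paper's exact procedure.

```python
def pps_frames(n_samples, frame_size, frame_shift, pitch_marks):
    # nominal grid: constant frame size and constant frame shift
    starts = []
    for nominal in range(0, n_samples - frame_size + 1, frame_shift):
        # snap the start to the nearest pitch mark so the window begins
        # on a pitch cycle instead of cutting one mid-cycle
        start = min(pitch_marks, key=lambda m: abs(m - nominal))
        start = max(0, min(start, n_samples - frame_size))  # stay in range
        starts.append(start)
    return starts

# hypothetical pitch-mark positions (sample indices) for an 800-sample chunk
marks = [0, 95, 198, 305, 401, 490, 600, 702, 795]
starts = pps_frames(n_samples=800, frame_size=200,
                    frame_shift=100, pitch_marks=marks)
```

Each frame is still 200 samples long and the grid still advances by 100 samples, so downstream Mel cepstral code sees the usual constant-size frames; only the alignment changes.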

16.
This paper describes an approach for automatic scoring of pronunciation quality for non-native speech, applicable regardless of the foreign-language student's mother tongue. Sentences and words are considered as scoring units. Additionally, mispronunciation and phoneme-confusion statistics for the target-language phoneme set are derived from human annotations and word-level scoring results using a Markov chain model of mispronunciation detection. The proposed methods can be employed to build part of the scoring module of a system for computer-assisted pronunciation training (CAPT). Methods from pattern and speech recognition are applied to develop appropriate feature sets for sentence- and word-level scoring. Besides features well known from and proven in previous research, e.g. phoneme accuracy, posterior score, duration score and recognition accuracy, new features such as high-level phoneme confidence measures are identified. The proposed method is evaluated with native English speech, non-native English speech from German, French, Japanese, Indonesian and Chinese adults, and non-native speech from German school children. The speech data are annotated with tags for mispronounced words and sentence-level ratings by native English teachers. Experimental results show that the reliability of automatic sentence-level scoring by the system is almost as high as that of the average human evaluator. Furthermore, good performance in detecting mispronounced words is achieved. A validation experiment also verified that the system gives the highest pronunciation-quality scores to 90% of native speakers' utterances. Automatic error diagnosis based on an automatically derived phoneme mispronunciation statistic showed reasonable results for five non-native speaker groups. The statistics can be exploited to give non-native speakers feedback on mispronounced phonemes.

17.
陈迪  龚卫国  杨利平 《计算机应用》2007,27(5):1217-1219
A variable-window-length MFCC extraction method based on the pitch period is proposed to improve speaker recognition performance. The basic idea is to split the speech in each analysis window into the portion within an integer multiple of the current pitch period and the remainder, keeping the former and discarding the latter, so as to reduce spectral distortion between training and test speech. Text-independent speaker verification experiments confirm that the method effectively improves the speaker verification rate and improves the stability of short utterances.

18.
许友亮  张连海  屈丹  牛铜 《计算机工程》2012,38(11):160-162,166
A phonological-attribute detection method based on long-term information is proposed, implemented with a two-layer (low and high) time-delay neural network (TDNN): the low-layer TDNN detects attributes from short-time features, and the high-layer TDNN fuses information over a longer span on top of the low-layer results. Experimental results show that introducing long-term features raises the attribute detection rate by about 3%; when the attribute posteriors are used as observation features in a phoneme recognition system, the long-term features improve recognition results by about 1.7%.

19.
To address the problem that speaker recognition performance is easily affected by emotional factors, a method that jointly learns segment-level and frame-level features is proposed. A long short-term memory (LSTM) network is used for the speaker recognition task, and its sequential outputs are taken as segment-level emotional speaker features, preserving the original frame-level information while strengthening the expression of emotional information. A fully connected network then further learns the speaker information of each frame within the segment-level features to strengthen the frame-level speaker representation, and finally the segment-level and frame-level features are concatenated to form the final speaker features, enhancing their representational power. Experiments on the Mandarin Affective Speech Corpus (MASC) verify the effectiveness of the proposed method and examine how the number of frames in the segment-level features and different emotional states affect emotional speaker recognition.

20.
A novel approach for joint speaker identification and speech recognition is presented in this article. Unsupervised speaker tracking and automatic adaptation of the human-computer interface are achieved by the interaction of speaker identification, speech recognition and speaker adaptation for a limited number of recurring users. Together with a technique for efficient information retrieval, a compact modeling of speech and speaker characteristics is presented. Applying speaker-specific profiles allows speech recognition to take individual speech characteristics into consideration and achieve higher recognition rates. Speaker profiles are initialized and continuously adapted by a balanced strategy of short-term and long-term speaker adaptation combined with robust speaker identification. Different users can be tracked by the resulting self-learning speech-controlled system. Only a very short enrollment of each speaker is required; subsequent utterances are used for unsupervised adaptation, resulting in continuously improved speech recognition rates. Additionally, the detection of unknown speakers is examined with the aim of avoiding the need to train new speaker profiles explicitly. The speech-controlled system presented here is suitable for in-car applications, e.g. speech-controlled navigation, hands-free telephony or infotainment systems, on embedded devices. Results are presented for a subset of the SPEECON database and validate the benefit of the speaker adaptation scheme and the unified modeling in terms of speaker identification and speech recognition rates.
