Similar Documents
 Found 17 similar documents (search time: 875 ms)
1.
Progress in Speech Information Processing   Total citations: 1 (self: 0, by others: 1)
This paper reviews progress in speech information processing, with particular attention to the current state of Chinese speech processing. The field covers speech recognition, speaker recognition, speech synthesis, and speech perception computing. Recognition of accented and casually articulated speech strongly supports applications such as language learning and spoken-proficiency assessment; improving recognition accuracy under cross-channel conditions, environmental noise, multiple speakers, short utterances, and time-varying speech is a research focus in speaker recognition; speech synthesis work centers on multilingual synthesis, emotional speech synthesis, and visual speech synthesis; speech perception computing includes studies on speech audiometry, noise-suppression algorithms, hearing-aid frequency-response compensation, and speech-enhancement algorithms. Effectively combining speech processing technology with language and networks promotes more natural human-machine spoken interaction.

2.
A Dictionary Adaptation Technique for Accented-Mandarin Speech Recognition   Total citations: 2 (self: 1, by others: 1)
Speech recognition systems trained on standard Mandarin suffer a large drop in accuracy when recognizing Mandarin spoken with a regional accent. To address this, the paper introduces a "dictionary adaptation" technique. An automatic annotation algorithm is first proposed; on that basis, speech data are analyzed to derive statistical pronunciation patterns of accented Mandarin. These patterns are then encoded into the standard Mandarin dictionary to construct a new dictionary reflecting the accent's pronunciation characteristics. Finally, the new dictionary is integrated into the search framework to recognize Mandarin spoken with that accent, yielding a significant improvement in recognition accuracy.

3.
Since pronunciation differences among Uyghur speakers degrade the performance of Uyghur speech recognition systems to some extent, this paper studies speaker adaptation techniques. The widely used MLLR and MAP methods, as well as their combination, are applied to acoustic model training for Uyghur continuous speech recognition, and the acoustic models adapted by each of the three methods are evaluated on a test set. The results show that MLLR, MAP, and MAP+MLLR adaptation reduce the word error rate of the baseline recognition system by 0.6%, 2.34%, and 2.57%, respectively.
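MAP adaptation, used in entries 3 and 4, interpolates between the prior (speaker-independent) model parameters and sufficient statistics of the adaptation data. A minimal NumPy sketch of MAP mean adaptation for a set of Gaussians (the relevance factor `tau`, function name, and array shapes are illustrative assumptions, not taken from the cited papers):

```python
import numpy as np

def map_adapt_means(prior_means, frames, posteriors, tau=10.0):
    """MAP adaptation of Gaussian means: interpolate between prior
    (speaker-independent) means and the adaptation-data statistics,
    weighted by the relevance factor tau."""
    # posteriors: (T, K) responsibilities of K Gaussians for T frames
    # frames: (T, D) feature vectors; prior_means: (K, D)
    n_k = posteriors.sum(axis=0)            # soft occupation counts per Gaussian
    first_order = posteriors.T @ frames     # (K, D) posterior-weighted frame sums
    return (tau * prior_means + first_order) / (tau + n_k)[:, None]
```

With a large `tau` the adapted means stay close to the prior; with more adaptation data the data term dominates, which is one way to read the asymptotic behavior of MAP noted in entry 4.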

4.
Adaptation techniques can adjust acoustic model parameters with relatively little data to achieve better recognition, and they are mostly applied to accented speech. Here, MLLR (Maximum Likelihood Linear Regression) and MAP (Maximum A Posteriori) adaptation are applied in a far-field noisy, reverberant environment to analyze their recognition performance. The results show that under simulated conditions with a wall reflection coefficient of 0.6, MAP adapts best across all noise conditions: at signal-to-noise ratios (SNR) of 5 dB, 10 dB, and 15 dB, MAP reduced the word error rate (WER) of far-field continuous speech by an average of 1.51%, 12.82%, and 2.95%, respectively. Under real conditions, MAP reduced WER by up to 37.13%. MAP's good asymptotic behavior was further verified: with 1,000 adaptation utterances, MAP acoustic model adaptation reduced the WER of far-field noisy, reverberant continuous speech by an average of 12.5% relative to the unadapted model.

5.
Motivated by the fact that ethnic-minority speakers in Yunnan typically speak Mandarin with a noticeable ethnic accent, this paper introduces a speech database of ethnic-minority-accented Mandarin from Yunnan, built for research on non-native continuous Mandarin speech recognition. Using this database, studies were conducted on pronunciation variation patterns, speaker adaptation, and non-native accent identification, forming an important complement to research on user diversity in Chinese speech recognition.

6.
In Mongolian, the uncertain position of short vowels in non-initial syllables gives rise to multiple pronunciations per word, morphophonemic alternation, coarticulation, and colloquial fluent speech, all of which degrade acoustic model adaptation. Using a small adaptation data set and combining MLLR and MAP modeling, this paper studies the adaptation of a baseline Mongolian acoustic model from two aspects, the choice of the tau value and the training procedure of the adapted acoustic model, and proposes an MLLR-MAP method suited to building adapted acoustic models for Mongolian speech recognition. Modeling experiments on the Sphinx speech recognition platform compare the MAP, MLLR, MAP-MLLR, and MLLR-MAP methods using acoustic-model and system recognition rates as evaluation metrics. The results show improvements in overall acoustic-model correctness, error rate, and accuracy, clearly outperforming the baseline model.
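MLLR, the other half of the MLLR-MAP combination above, estimates an affine transform of the Gaussian means from the adaptation data rather than interpolating each mean separately. A toy single-regression-class sketch with identity covariances (a deliberate simplification of the full algorithm; all names and shapes are illustrative):

```python
import numpy as np

def mllr_mean_transform(means, frames, posteriors):
    """Estimate one global MLLR transform W (D x D+1) so that adapted
    means are W @ [1; mu]. Weighted least squares, assuming identity
    covariances and a single regression class."""
    K, D = means.shape
    xi = np.hstack([np.ones((K, 1)), means])   # extended means (K, D+1)
    n_k = posteriors.sum(axis=0)               # soft counts per Gaussian
    first = posteriors.T @ frames              # (K, D) weighted frame sums
    # Normal equations: W @ (xi^T diag(n_k) xi) = first^T @ xi
    G = xi.T @ (n_k[:, None] * xi)
    Z = first.T @ xi
    return Z @ np.linalg.pinv(G)

def apply_mllr(means, W):
    """Apply the estimated transform to all means at once."""
    xi = np.hstack([np.ones((means.shape[0], 1)), means])
    return xi @ W.T
```

Because one transform is shared by many Gaussians, MLLR works with less adaptation data than MAP, while MAP eventually wins as data grows; chaining them (MLLR then MAP) is the intuition behind the MLLR-MAP ordering studied above.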

7.
This paper studies telephone-channel speech recognition based on data simulation and HMM (hidden Markov model) adaptation. The simulated data mimic the behavior of clean speech under various telephone channel conditions. Baseline HMMs are trained on clean speech and on simulated speech, respectively. Recognition experiments evaluate each baseline HMM before and after unsupervised adaptation with the MLLR (maximum likelihood linear regression) algorithm. The experiments show that simulated speech generated from clean speech effectively reduces the acoustic mismatch between training and test speech and substantially improves telephone speech recognition accuracy. The adaptation results show that adapting from simulated data outperforms adapting from clean speech by up to 9.8%, indicating further improvement in telephone speech recognition performance and system robustness.

8.
It is well known that Mandarin Chinese is strongly influenced by many regional accents, yet accented Mandarin speech data are scarce. An important goal for Mandarin speech recognition is therefore to model accent-induced acoustic variation appropriately. This article studies a series of deep neural network (DNN) based acoustic modeling techniques that use accent information either implicitly or explicitly. Multi-accent modeling approaches, including mixed-condition training, multi-accent decision-tree state tying, DNN tandem systems, and multi-level adaptive network (MLAN) tandem HMM modeling, are combined and compared. An improved MLAN tandem HMM system that explicitly exploits accent information is proposed and applied to a data-scarce accented Mandarin recognition task covering four regional accents. After sequence-discriminative training and adaptation, the system significantly outperforms the baseline accent-independent DNN tandem system, with absolute character error rate reductions of 0.8% to 1.5% (6% to 9% relative).

9.
To cope with the scarcity of training data in practice, speaker models are built with the Gaussian mixture model-universal background model (GMM-UBM) framework. This work studies how to improve the recognition rate of a speaker recognition system from two aspects: model adaptation and parameter estimation. For model adaptation, the conventional approach of deriving speaker models by maximum a posteriori (MAP) estimation is improved by carrying over maximum likelihood linear regression (MLLR) and eigenvoice (EV) adaptation from speech recognition to speaker-model adaptation, and comparing them with the MAP method.
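In the GMM-UBM framework, a test utterance is scored by the log-likelihood ratio between the adapted speaker model and the universal background model. A diagonal-covariance sketch of that scoring step (the parameter packing as `(weights, means, variances)` tuples is an illustrative convention):

```python
import numpy as np

def gmm_loglik(frames, weights, means, variances):
    """Average per-frame log-likelihood of a diagonal-covariance GMM.
    frames: (T, D); weights: (K,); means, variances: (K, D)."""
    diff = frames[:, None, :] - means[None, :, :]            # (T, K, D)
    log_comp = (-0.5 * (diff ** 2 / variances).sum(-1)
                - 0.5 * np.log(2 * np.pi * variances).sum(-1)
                + np.log(weights))                           # (T, K)
    m = log_comp.max(axis=1, keepdims=True)                  # stable logsumexp
    return float(np.mean(m[:, 0] + np.log(np.exp(log_comp - m).sum(axis=1))))

def gmm_ubm_score(frames, speaker_gmm, ubm):
    """GMM-UBM verification score: per-frame log-likelihood ratio."""
    return gmm_loglik(frames, *speaker_gmm) - gmm_loglik(frames, *ubm)
```

A positive score means the utterance fits the speaker model better than the background model; the decision threshold is tuned on development data.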

10.
Speech is one mode of human-computer interaction, and speech recognition technology is an important part of artificial intelligence. In recent years, neural network techniques have advanced rapidly in speech recognition and have become the mainstream acoustic modeling technology. However, differences between target speakers at test time and the training data cause model mismatch. Speaker adaptation (SA) methods address this speaker-induced mismatch and have become an active research direction in speech recognition. Compared with speaker adaptation in traditional recognition models, adaptation in neural-network-based systems must cope with a huge number of model parameters and relatively little adaptation data, which makes it a difficult research problem. This paper first reviews the development of speaker adaptation methods and the problems encountered in neural-network-based speaker adaptation, then divides the methods into feature-domain and model-domain approaches and describes their principles and refinements, and finally points out remaining problems and future directions for speaker adaptation in speech recognition.

11.
This paper addresses accent issues in large vocabulary continuous speech recognition. Cross-accent experiments show that the accent problem is very dominant in speech recognition. Analysis based on multivariate statistical tools (principal component analysis and independent component analysis) confirms that accent is one of the key factors in speaker variability. Considering different applications, we proposed two methods for accent adaptation. When a certain amount of adaptation data was available, pronunciation dictionary modeling was adopted to reduce recognition errors caused by pronunciation mistakes. When a large corpus was collected for each accent type, accent-dependent models were trained and a Gaussian mixture model-based accent identification system was developed for model selection. We report experimental results for the two schemes and verify their efficiency in each situation.

12.
Research on noise robust speech recognition has mainly focused on dealing with relatively stationary noise that may differ from the noise conditions in most living environments. In this paper, we introduce a recognition system that can recognize speech in the presence of multiple rapidly time-varying noise sources as found in a typical family living room. To deal with such severe noise conditions, our recognition system exploits all available information about speech and noise; that is spatial (directional), spectral and temporal information. This is realized with a model-based speech enhancement pre-processor, which consists of two complementary elements, a multi-channel speech–noise separation method that exploits spatial and spectral information, followed by a single channel enhancement algorithm that uses the long-term temporal characteristics of speech obtained from clean speech examples. Moreover, to compensate for any mismatch that may remain between the enhanced speech and the acoustic model, our system employs an adaptation technique that combines conventional maximum likelihood linear regression with the dynamic adaptive compensation of the variance of the Gaussians of the acoustic model. Our proposed system approaches human performance levels by greatly improving the audible quality of speech and substantially improving the keyword recognition accuracy.

13.
This paper presents a new method of constructing phonetic decision trees (PDTs) for acoustic model state tying based on implicitly induced prior knowledge. Our hypothesis is that knowledge on pronunciation variation in spontaneous, conversational speech contained in a relatively large corpus can be used for building domain-specific or speaker-dependent PDTs. In view of tree-structure adaptation, this method leads to transformation of tree topology in contrast to keeping fixed tree structure as in traditional methods of speaker adaptation. A Bayesian learning framework is proposed to incorporate prior knowledge on decision rules in a greedy search of new decision trees, where the prior is generated by a decision tree growing process on a large data set. Experimental results on the telemedicine automatic captioning task demonstrate that the proposed approach results in consistent improvement in model quality and recognition accuracy.

14.
This article presents an approach for the automatic recognition of non-native speech. Some non-native speakers tend to pronounce phonemes as they would in their native language. Model adaptation can improve the recognition rate for non-native speakers, but has difficulties dealing with pronunciation errors like phoneme insertions or substitutions. For these pronunciation mismatches, pronunciation modeling can make the recognition system more robust. Our approach is based on acoustic model transformation and pronunciation modeling for multiple non-native accents. For acoustic model transformation, two approaches are evaluated: MAP and model re-estimation. For pronunciation modeling, confusion rules (alternate pronunciations) are automatically extracted from a small non-native speech corpus. This paper presents a novel approach to introduce confusion rules in the recognition system which are automatically learned through pronunciation modelling. The modified HMM of a foreign spoken language phoneme includes its canonical pronunciation along with all the alternate non-native pronunciations, so that spoken language phonemes pronounced correctly by a non-native speaker could be recognized. We evaluate our approaches on the European project HIWIRE non-native corpus which contains English sentences pronounced by French, Italian, Greek and Spanish speakers. Two cases are studied: the native language of the test speaker is either known or unknown. Our approach gives better recognition results than the classical acoustic adaptation of HMM when the foreign origin of the speaker is known. We obtain 22% WER reduction compared to the reference system.
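Confusion rules of the kind described above can be obtained by counting phoneme-level substitutions in alignments of canonical against actually realized pronunciations and keeping the frequent ones. A small sketch (the pair-based input format and the probability threshold are illustrative assumptions; the paper's extraction procedure may differ in detail):

```python
from collections import Counter, defaultdict

def extract_confusion_rules(alignments, min_prob=0.1):
    """From (canonical, realized) phoneme pairs produced by forced
    alignment, keep substitution rules whose relative frequency is at
    least min_prob. Returns {canonical: [(realized, prob), ...]}."""
    counts = defaultdict(Counter)
    for canonical, realized in alignments:
        counts[canonical][realized] += 1
    rules = {}
    for canonical, realized_counts in counts.items():
        total = sum(realized_counts.values())
        kept = [(r, c / total) for r, c in realized_counts.items()
                if r != canonical and c / total >= min_prob]   # drop identity pairs
        if kept:
            rules[canonical] = sorted(kept, key=lambda x: -x[1])
    return rules
```

The surviving alternates can then be added to the lexicon (or, as in the paper, merged into the phoneme HMMs) alongside the canonical pronunciation.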

15.
Robustness in practical environments is the key problem for putting speech recognition technology to practical use. The core issue in robustness research is resolving the mismatch between speech features and models in real environments and recognition systems trained on clean speech, which involves key techniques such as noise compensation, channel adaptation, and speaker adaptation. This article surveys the main methods, principles, and current state of research on robust speech recognition, analyzes acoustic-model and language-model adaptation for real-environment recognition, and outlines near-term research directions for practical speech recognition technology.

16.
With the development of computer technology, artificial intelligence products have come into wide use across many fields, and interacting with them in regional dialects has become an important research direction in human-computer interaction. Chongqing, in southwest China, is nationally designated as an international metropolis where cultures from around the world converge. Chongqing dialect, the main language of the region and a carrier of its local culture, makes research on its speech recognition valuable for localizing AI products. Taking Chongqing dialect as the research object, this paper builds a small corpus of Chongqing dialect and Chongqing-accented Mandarin and constructs an HMM-based speech recognition system, using acoustic models trained on Chongqing dialect and on Chongqing-accented Mandarin to recognize both kinds of speech. Experiments show that each acoustic model recognizes its matching speech with 100% accuracy; the Chongqing dialect model recognizes Chongqing-accented Mandarin with 78.89% accuracy, and the Chongqing-accented Mandarin model recognizes Chongqing dialect with 91.67% accuracy.

17.
We consider the problem of acoustic modeling of noisy speech data, where the uncertainty over the data is given by a Gaussian distribution. While this uncertainty has been exploited at the decoding stage via uncertainty decoding, its usage at the training stage remains limited to static model adaptation. We introduce a new expectation maximization (EM) based technique, which we call uncertainty training, that allows us to train Gaussian mixture models (GMMs) or hidden Markov models (HMMs) directly from noisy data with dynamic uncertainty. We evaluate the potential of this technique for a GMM-based speaker recognition task on speech data corrupted by real-world domestic background noise, using a state-of-the-art signal enhancement technique and various uncertainty estimation techniques as a front-end. Compared to conventional training, the proposed training algorithm results in 3–4% absolute improvement in speaker recognition accuracy by training from either matched, unmatched or multi-condition noisy data. This algorithm is also applicable with minor modifications to maximum a posteriori (MAP) or maximum likelihood linear regression (MLLR) acoustic model adaptation from noisy data and to other data than audio.
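A core ingredient of evaluating Gaussian-uncertain observations is that each frame's uncertainty variance is added to the component variance before computing component posteriors. A diagonal-covariance sketch of such an E-step (a simplified illustration of the idea only, not the paper's full uncertainty-training algorithm, whose M-step also uses the uncertainty):

```python
import numpy as np

def uncertain_responsibilities(frames, frame_vars, weights, means, variances):
    """Component posteriors for frames that carry their own Gaussian
    uncertainty: inflate each component's variance by the frame's
    uncertainty before evaluating the likelihood (diagonal covariances).
    frames, frame_vars: (T, D); weights: (K,); means, variances: (K, D)."""
    total_var = variances[None, :, :] + frame_vars[:, None, :]   # (T, K, D)
    diff = frames[:, None, :] - means[None, :, :]
    log_comp = (-0.5 * (diff ** 2 / total_var).sum(-1)
                - 0.5 * np.log(2 * np.pi * total_var).sum(-1)
                + np.log(weights))                               # (T, K)
    log_comp -= log_comp.max(axis=1, keepdims=True)              # stable softmax
    post = np.exp(log_comp)
    return post / post.sum(axis=1, keepdims=True)
```

With zero frame uncertainty this reduces to the ordinary GMM E-step, which is the sanity check to run first.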

