Similar Documents
19 similar documents found
1.
Confidence scoring measures how well speech data match a model; it can detect recognition errors in a voice-command system and thus improve its reliability. In recent years, methods based on identity vectors (i-vectors) and Probabilistic Linear Discriminant Analysis (PLDA) have achieved notable results in speaker recognition. This paper applies the i-vector/PLDA model as a confidence-analysis method for command-word recognition results; it requires no acoustic model or language model, and experiments show good performance. Building on this, and to address the i-vector's weakness in capturing temporal information, the system is fused with DTW, which effectively improves its ability to discriminate the temporal structure of the audio.
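The fusion of a PLDA confidence score with DTW might look roughly like the sketch below. It is a minimal illustration that assumes i-vector extraction and PLDA scoring are done elsewhere; `dtw_distance`, `fused_confidence`, and the weight `alpha` are illustrative names and values, not the paper's.

```python
import numpy as np

def dtw_distance(x, y):
    """Dynamic time warping distance between two feature sequences
    (frames x dims), using Euclidean frame distance, length-normalized."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m] / (n + m)

def fused_confidence(plda_score, dtw_dist, alpha=0.7):
    """Linear fusion of a PLDA log-likelihood-ratio score (higher = better match)
    with a negated DTW distance (lower distance = better match).
    alpha is a tuning weight, not a value from the paper."""
    return alpha * plda_score - (1.0 - alpha) * dtw_dist
```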

2.
Confidence scoring is an important post-processing module in modern speech recognition systems; based on the recognition result and related information, it can detect recognition errors and reject out-of-vocabulary words. This paper proposes two improvements to the confidence measure of a restricted command-word recognition system: phone-dependent confidence-score normalization based on Gaussian mixture models, and fusion of conventional confidence features with duration features. Experiments on Chinese and English test sets show that both improvements yield significant gains over the baseline confidence system, and that the gains are additive.
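One plausible reading of phone-dependent score normalization with a Gaussian model, followed by fusion with duration features, is sketched below; `PhoneScoreNormalizer`, `word_features`, and the chosen statistics are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

class PhoneScoreNormalizer:
    """Fit a Gaussian (mean, std) of confidence scores per phone on held-out
    data, then z-score new phone confidences against their own phone model."""
    def __init__(self):
        self.stats = {}

    def fit(self, phones, scores):
        for p in set(phones):
            s = np.array([sc for ph, sc in zip(phones, scores) if ph == p])
            self.stats[p] = (s.mean(), s.std() + 1e-6)

    def normalize(self, phone, score):
        mu, sigma = self.stats.get(phone, (0.0, 1.0))
        return (score - mu) / sigma

def word_features(norm_scores, durations):
    """Fuse normalized phone scores with duration features into one
    word-level feature vector for the final accept/reject classifier."""
    return np.array([np.mean(norm_scores), np.min(norm_scores),
                     np.mean(durations), np.std(durations)])
```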

3.
Confidence Estimation for Text Orientation Recognition
Because a confidence model can effectively judge how well observed data match a text-orientation template, it can be used in text orientation recognition to locate recognition results reliably and thereby improve the recognition rate and robustness of the system. This paper discusses the basic principles of confidence in text orientation recognition and introduces the application of hypothesis testing and interval estimation to the task.
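As one concrete instance of the interval-estimation idea mentioned above, a Wilson score interval on a match rate could drive the accept decision; the functions and the 0.6 threshold below are hypothetical, not taken from the paper.

```python
import math

def wilson_interval(successes, trials, z=1.96):
    """95% Wilson score interval for a match rate."""
    if trials == 0:
        return (0.0, 1.0)
    p = successes / trials
    denom = 1 + z * z / trials
    center = (p + z * z / (2 * trials)) / denom
    margin = z * math.sqrt(p * (1 - p) / trials + z * z / (4 * trials * trials)) / denom
    return (center - margin, center + margin)

def accept(successes, trials, threshold=0.6):
    # Accept the hypothesis only if the whole interval clears the threshold.
    lower, _ = wilson_interval(successes, trials)
    return lower >= threshold
```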

4.
To obtain more robust recognition performance, a typical speech recognition system adds a confidence-decision module at the back end to detect recognition errors and reject out-of-vocabulary (OOV) words. For command-word recognition systems, the traditional filler-model confidence method is limited by its model structure and performs relatively poorly, especially at detecting OOV words. This paper therefore adopts a confidence method based on a syllable loop and prunes its decoding network to meet practical efficiency requirements. Experiments on a Chinese command-word test set show that, compared with the filler-model method, this approach considerably improves both recognition accuracy and recognition efficiency.
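A syllable-loop confidence of this kind is often computed as a frame-normalized log-likelihood ratio between the best command-word hypothesis and a free syllable-loop (background) pass; the sketch below assumes those two scores are already available, and the threshold is illustrative rather than the paper's value.

```python
def loop_normalized_confidence(command_loglik, loop_loglik, num_frames):
    """Frame-normalized log-likelihood ratio between the best command-word
    hypothesis and a free syllable-loop decoding pass; higher values mean
    the audio is better explained by an in-vocabulary command."""
    return (command_loglik - loop_loglik) / max(num_frames, 1)

def is_in_vocabulary(command_loglik, loop_loglik, num_frames, threshold=-0.5):
    # The threshold would be tuned on a development set.
    return loop_normalized_confidence(command_loglik, loop_loglik, num_frames) >= threshold
```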

5.
Rejection is key to building a practical speech recognition system. This paper proposes a novel confidence-based rejection algorithm for speaker-independent speech recognition that considers both an alternative-hypothesis model and multiple candidates, and is suitable for rejecting incorrect recognition results as well as out-of-vocabulary (OOV) speech. Experiments on a speaker-independent English command-word recognition system were conducted to evaluate the algorithm's performance. The results show that it effectively rejects unreliable speech and improves the overall performance of the recognizer.
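Using the multiple-candidate information for rejection could, in its simplest form, look like the following sketch: approximate the top hypothesis's posterior from the N-best log scores and reject when it does not clearly dominate. The softmax approximation and the 0.8 threshold are assumptions, not the paper's exact formulation.

```python
import numpy as np

def nbest_posterior(scores):
    """Approximate posterior of the top hypothesis from N-best log scores
    via a softmax over the candidate list."""
    scores = np.asarray(scores, dtype=float)
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return probs[np.argmax(scores)]

def reject(scores, threshold=0.8):
    # Reject the result (likely an error or OOV) when the top hypothesis
    # does not clearly dominate its competitors.
    return nbest_posterior(scores) < threshold
```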

6.
In HMM-based Mandarin speech recognition, the recognition rate for easily confusable speech remains low. After analyzing the inherent limitations of HMMs, this paper proposes using an SVM to perform a second recognition pass on top of the HMM system to improve the recognition of confusable speech. A confidence-estimation step is introduced to improve system performance and efficiency, and new classification features are constructed from the information produced by Viterbi decoding, which solves the difficulty standard SVMs have with variable-length data. The paper discusses confidence estimation, classification-feature extraction, and SVM classifier construction in this two-stage architecture in detail. Speech recognition experiments show that, compared with a plain HMM/SVM hybrid model, the proposed method clearly improves the recognition rate with little impact on recognition speed, indicating that the HMM/SVM two-stage structure with a confidence-estimation step is feasible for recognizing confusable speech.
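A minimal sketch of the two-stage idea follows, assuming fixed-length features have already been derived from Viterbi decoding; the confidence gate, `train_pair_svm`, and the 0.7 threshold are illustrative rather than the paper's actual design.

```python
import numpy as np
from sklearn.svm import SVC

def train_pair_svm(features, labels):
    """Train a second-pass SVM on fixed-length features derived from
    first-pass Viterbi decoding (e.g., per-state average log-likelihoods),
    which turns variable-length audio into a fixed vector."""
    svm = SVC(kernel="rbf", probability=True)
    svm.fit(np.asarray(features), np.asarray(labels))
    return svm

def second_pass_decision(svm, hmm_label, confidence, viterbi_features,
                         confusable_words, threshold=0.7):
    """Only invoke the SVM when the HMM result is a known confusable word
    AND its confidence is below threshold; otherwise keep the HMM output."""
    if confidence >= threshold or hmm_label not in confusable_words:
        return hmm_label
    return svm.predict(np.asarray(viterbi_features).reshape(1, -1))[0]
```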

7.
Over the past decade, end-to-end speech recognition frameworks have developed rapidly. Unlike the traditional HMM-based framework, end-to-end speech recognition has many new characteristics and can reach the same or better performance, so it has attracted increasing attention and has become a second mainstream framework alongside traditional speech recognition. To address the problem that end-to-end speech recognition cannot provide the accurate keyword start/end times and reliable confidence scores required for keyword search, this paper pro...

8.
To address the difficulty of maintaining continuous interaction in voice-interaction systems caused by the complexity of speech signals and interference from external noise, this paper adopts a speech-recognition-based human-computer interaction system composed of four modules: speech recognition, single-turn interaction, multi-turn interaction, and speech synthesis. In the speech recognition module, MFCC features are extracted from the speech signal and a deep-learning algorithm is used to build the acoustic model. In the multi-turn interaction module, a GPT-2 model implements long dialogues. The results show that the system can accurately extract the required features from speech and perform effective recognition; the DNN-HMM model achieves a WER of 4.11 with short recognition time, and the system finally synthesizes clear, natural speech. These results demonstrate the feasibility of the voice-interaction system.
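MFCC extraction for such a front end is commonly done with librosa; the snippet below is a generic sketch, and the sampling rate and number of coefficients are assumptions rather than the paper's settings.

```python
import librosa

def extract_mfcc(wav_path, sr=16000, n_mfcc=13):
    """Load a waveform and compute its MFCC matrix (n_mfcc x frames),
    the front-end features fed to the acoustic model."""
    y, sr = librosa.load(wav_path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
```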

9.
For speech recognition under low-resource conditions, this paper proposes a semi-supervised selection strategy for large amounts of unlabeled data, applied to both acoustic-model and language-model training. A seed model is first trained on a small amount of data and used to decode the unlabeled data. From the one-best decoding results, highly reliable utterances are selected using a combination of confidence and perplexity to train the acoustic and language models. The decoded lattices are then converted into multi-candidate text for further language-model training. On a Japanese recognition task, the proposed method improves the recognition rate substantially compared with selection based on confidence alone.
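The selection step could be expressed roughly as follows, assuming the decoder already attaches a confidence score and a language-model perplexity to each automatic transcript; the thresholds are placeholders, not the paper's values.

```python
def select_utterances(hypotheses, conf_threshold=0.9, ppl_threshold=150.0):
    """Keep only automatically transcribed utterances whose decoder
    confidence is high and whose language-model perplexity is low."""
    selected = []
    for hyp in hypotheses:  # each hyp: {"text": ..., "confidence": ..., "perplexity": ...}
        if hyp["confidence"] >= conf_threshold and hyp["perplexity"] <= ppl_threshold:
            selected.append(hyp["text"])
    return selected
```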

10.
In recent years, convolutional neural networks have been widely used in image, text, and speech classification, but most existing work ignores speech emotion recognition performance in specific venues. To address this, a CNN-based speech emotion recognition model for railway stations is proposed. The model first extracts Mel-frequency cepstral coefficient (MFCC) features from each utterance, then feeds the feature matrices to a convolutional neural network for training, and finally outputs the class of each utterance. In addition, a confidence setting is added to the output layer: a prediction is considered trustworthy only when the probability of an utterance belonging to a class exceeds 90%. Experimental results show that the model is more accurate than recurrent neural networks (RNN) and multilayer perceptrons (MLP). The proposed method offers a reference for applying deep learning to speech emotion recognition and to early warning of dangerous situations in places such as railway stations.
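The 90% confidence gate at the output layer might be implemented along these lines; the softmax-over-logits formulation is an assumption about how the network's outputs are exposed.

```python
import numpy as np

def classify_with_confidence(logits, labels, threshold=0.9):
    """Turn network output logits into a label, but only report it when the
    softmax probability of the top class exceeds the 90% threshold;
    otherwise mark the result as unreliable."""
    probs = np.exp(logits - np.max(logits))
    probs /= probs.sum()
    best = int(np.argmax(probs))
    if probs[best] >= threshold:
        return labels[best], float(probs[best])
    return None, float(probs[best])  # below threshold: treat as untrusted
```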

11.
As speech recognition systems continue to move from the laboratory to real applications, rejection becomes increasingly important. To solve the accept/reject decision for recognition candidates in an HMM-based speech recognition system, this paper proposes acoustic confidence criteria related to states and state duration. For both the average observation prior probability of the feature vectors given a state and the posterior probability of a state given the feature vectors, a uniform rejection threshold is easy to set and no dedicated training is required. The state-duration criterion, built on duration distribution probabilities and confidence-interval theory, not only allows a rejection threshold to be set but also yields a state-duration confidence for each recognition candidate. Experiments show that these rejection criteria effectively reject misrecognized candidates and out-of-vocabulary (OOV, non-keyword) speech, thereby improving the recognition rate at a low rejection rate.

12.
This paper proposes an efficient speech data selection technique that can identify those data that will be well recognized. Conventional confidence measure techniques can also identify well-recognized speech data. However, those techniques require a lot of computation time for speech recognition processing to estimate confidence scores. Speech data with low confidence should not go through the time-consuming recognition process since they will yield erroneous spoken documents that will eventually be rejected. The proposed technique can select the speech data that will be acceptable for speech recognition applications. It rapidly selects speech data with high prior confidence based on acoustic likelihood values and using only speech and monophone models. Experiments show that the proposed confidence estimation technique is over 50 times faster than the conventional posterior confidence measure while providing equivalent data selection performance for speech recognition and spoken document retrieval.
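A rough sketch of a prior-confidence estimate in this spirit is given below, assuming per-frame log-likelihoods under a set of monophone models and under a generic speech model have been computed; the normalization and threshold are illustrative, not the paper's exact measure.

```python
import numpy as np

def prior_confidence(frame_loglik_monophones, frame_loglik_speech):
    """Per frame, take the best monophone log-likelihood and normalize it by
    a generic speech-model log-likelihood, then average over the utterance.
    No word-level decoding or language model is involved, which is what
    makes the estimate fast."""
    best_phone = np.max(frame_loglik_monophones, axis=1)   # shape: (frames,)
    return float(np.mean(best_phone - frame_loglik_speech))

def select_for_recognition(utterances, threshold=0.0):
    # utterances: list of (id, frame_loglik_monophones, frame_loglik_speech)
    return [uid for uid, mono, speech in utterances
            if prior_confidence(mono, speech) >= threshold]
```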

13.
In this paper we introduce a set of related confidence measures for large vocabulary continuous speech recognition (LVCSR) based on local phone posterior probability estimates output by an acceptor HMM acoustic model. In addition to their computational efficiency, these confidence measures are attractive as they may be applied at the state-, phone-, word- or utterance-levels, potentially enabling discrimination between different causes of low confidence recognizer output, such as unclear acoustics or mismatched pronunciation models. We have evaluated these confidence measures for utterance verification using a number of different metrics. Experiments reveal several trends in “profitability of rejection”, as measured by the unconditional error rate of a hypothesis test. These trends suggest that crude pronunciation models can mask the relatively subtle reductions in confidence caused by out-of-vocabulary (OOV) words and disfluencies, but not the gross model mismatches elicited by non-speech sounds. The observation that a purely acoustic confidence measure can provide improved performance over a measure based upon both acoustic and language model information for data drawn from the Broadcast News corpus, but not for data drawn from the North American Business News corpus suggests that the quality of model fit offered by a trigram language model is reduced for Broadcast News data. We also argue that acoustic confidence measures may be used to inform the search for improved pronunciation models.
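A word-level confidence built from local phone posteriors can be sketched as a duration-weighted average of per-phone mean log posteriors; the functions below are illustrative and assume frame-level posterior matrices and phone alignments are available.

```python
import numpy as np

def phone_confidence(frame_posteriors, phone_index):
    """Average log posterior of the hypothesized phone over its aligned
    frames; frame_posteriors is a (frames x phones) matrix of local
    posterior estimates from the acoustic model."""
    eps = 1e-10
    return float(np.mean(np.log(frame_posteriors[:, phone_index] + eps)))

def word_confidence(phone_segments):
    """Word-level confidence as the duration-weighted average of its phone
    confidences; phone_segments is a list of (frame_posteriors, phone_index)."""
    scores, frames = [], []
    for post, idx in phone_segments:
        scores.append(phone_confidence(post, idx) * len(post))
        frames.append(len(post))
    return sum(scores) / max(sum(frames), 1)
```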

14.
Confidence measures enable us to assess the output of a speech recognition system. The confidence measure provides us with an estimate of the probability that a word in the recognizer output is either correct or incorrect. In this paper we discuss ways in which to quantify the performance of confidence measures in terms of their discrimination power and bias. In particular, we analyze two different performance metrics: the classification equal error rate and the normalized mutual information metric. We then report experimental results of using these metrics to compare four different confidence measure estimation schemes. We also discuss the relationship between these metrics and the operating point of the speech recognition system and develop an approach to the robust estimation of normalized mutual information.
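Both kinds of metric can be computed straightforwardly once word-level confidences and correctness labels are available; the sketch below fixes a single threshold for simplicity, whereas the paper analyzes the metrics across operating points.

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def confidence_metrics(confidences, correct, threshold=0.5):
    """Two simple ways to score a confidence measure against word-level
    correctness labels: (1) classification error rate at a fixed threshold
    and (2) normalized mutual information between the thresholded decision
    and the correct/incorrect label. The threshold here is illustrative."""
    decisions = (np.asarray(confidences) >= threshold).astype(int)
    correct = np.asarray(correct).astype(int)
    error_rate = float(np.mean(decisions != correct))
    nmi = normalized_mutual_info_score(correct, decisions)
    return error_rate, nmi
```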

15.
Confidence measures are computed to estimate the certainty that target acoustic units are spoken in specific speech segments. They are applied in tasks such as keyword verification or utterance verification. Because many of the confidence measures use the same set of models and features as in recognition, the resulting scores may not provide an independent measure of reliability. In this paper, we propose two articulatory feature (AF) based phoneme confidence measures that estimate the acoustic reliability based on the match in AF properties. While acoustic-based features, such as Mel-frequency cepstral coefficients (MFCC), are widely used in speech processing, some recent works have focused on linguistically based features, such as articulatory features, which relate directly to the human articulatory process and may better capture speech characteristics. Articulatory features can either replace or complement acoustic-based features in speech processing. The proposed AF-based measures were evaluated, in comparison and in combination, with HMM-based scores on phoneme and keyword verification tasks using children's speech collected for a computer-based English pronunciation learning project. To fully evaluate their usefulness, the proposed measures and combinations were evaluated on both native and non-native data, and under field-test conditions that mismatch the training condition. The experimental results show that, under these different environments, combinations of the AF scores with the HMM-based scores outperform HMM-based scores alone on phoneme and keyword verification.

16.
The automatic recognition of natural or close-to-natural speech is linked to the problem of detecting “new” or “unknown” words: words or nonverbal acoustic events that do not belong to the speech recognition system's vocabulary. In this paper we consider a new method for estimating a confidence score for words at the output of the recognition system based on a likelihood score of the signal frame. The method and confidence measure could be used, for example, for out-of-vocabulary (OOV) word detection and rejection. The text was submitted by the authors in English. Minh T. Nguyen received the MS degree in computer sciences from Moscow State University in 2004. In 2004 he began postgraduate studies at the Computing Centre, Russian Academy of Sciences. He works on the development and evaluation of confidence measures for speech recognition. Vladimir J. Chuchupal received the MS degree in mathematics from Moscow State Pedagogical Institute in 1976. He received his Candidate of Sciences degree from the Computing Centre, USSR Academy of Sciences, in 1984, with dissertation work on noisy speech enhancement methods. Since 1984 he has been with the Speech Recognition subdivision of the Computing Centre, Russian Academy of Sciences, where he works on speech recognition problems. Currently he is the head of the subdivision.

17.
Speech Recognition Algorithms for the Palm PC and Their Implementation
The palm-size PC is a new, compact type of personal digital assistant (PDA). Because it has no soft keyboard or handwriting recognition as its main input method, providing functions such as voice navigation and voice dialing on this platform would greatly improve the human-computer interface. Targeting this application, and taking into account the device's slow processor and limited memory, this paper discusses the design and implementation of a newly developed speech recognition engine for the palm-size PC, including an endpoint detection algorithm based on time-domain energy, a neural-network-based multi-confidence decision scheme for handling out-of-vocabulary words, feature selection, and fixed-point...
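The time-domain energy endpoint detector mentioned above could be sketched as follows; the frame length, hop size, and energy ratio are illustrative values, not those used on the Palm PC.

```python
import numpy as np

def energy_endpoints(signal, frame_len=256, hop=128, ratio=0.1):
    """Crude time-domain energy endpoint detector: frame the signal, compute
    short-time energy, and keep the span of frames whose energy exceeds a
    fraction of the peak energy."""
    signal = np.asarray(signal, dtype=float)
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    if not frames:
        return 0, len(signal)
    energy = np.array([np.sum(f ** 2) for f in frames])
    threshold = ratio * energy.max()
    active = np.where(energy > threshold)[0]
    if active.size == 0:
        return 0, 0
    return int(active[0] * hop), int(active[-1] * hop + frame_len)
```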

18.
This paper studies the problem of speech recognition accuracy. Speech is a non-stationary signal containing a large amount of noise; most current recognition algorithms are based on linear theory, so they cannot correctly model the nonlinear behaviour of speech signals and their recognition accuracy is low. A hybrid noise-robust recognition model (HMM-SVM) is built by combining a hidden Markov model (HMM) with an SVM. The HMM models the temporal structure of the speech signal and produces output probabilities for the utterance to be recognized; these output probabilities are then fed to the SVM for learning to obtain classification information, and the final recognition decision is made from the HMM-SVM result. Simulation results show that HMM-SVM improves recognition accuracy and clearly improves system performance, especially in low-SNR environments.

19.
Speech is a common and effective way for humans to communicate with modern smart devices such as smartphones and smart home appliances. With remarkable progress in computing and networking, speech recognition systems have been widely deployed: they interpret spoken user commands as digital instructions or signals that smart devices can understand, enabling remote interaction with these devices. In recent years, advances in deep learning have driven the development of speech recognition, steadily improving its accuracy and usability. However, deep learning itself still has unresolved security problems, such as adversarial examples. An adversarial example is a prediction-time input to which small perturbations have been added so that the model outputs a wrong target class with high confidence. Current research on adversarial attacks and defenses focuses mainly on computer vision and neglects the security of speech recognition models, yet today's state-of-the-art speech recognition systems, which also rely on deep learning, face serious security threats from adversarial attacks. Given that speech recognition models are likewise at risk, this paper provides a systematic survey of adversarial attacks and defenses for speech recognition systems. We outline the basic principles of different types of audio adversarial attacks and comprehensively compare and discuss state-of-the-art methods for generating audio adversarial examples. To help build more secure speech recognition systems, we also discuss existing defense strategies against audio adversarial examples and look ahead to future research directions in this area.
