Similar Documents (20 results)
1.
Collecting training corpora for domain-specific language models requires considerable human and material effort, and when the collected corpus is insufficient the result is data sparseness. Two remedies exist: (1) applying data smoothing algorithms to lower the model's perplexity, and (2) expanding the training corpus. This paper explores a semi-automatic method for expanding a language model's training corpus: mutual information is computed to partition a large open-domain corpus into word classes, producing a large word-class table; the entries of this table are then…

2.
An automatic speech recognition system consists of an acoustic model and a language model, but the traditional N-gram language model ignores the semantic similarity between word entries and has an excessive number of parameters, which limits further reduction of the character error rate. To address these problems, a new recognition system is proposed that uses Chinese syllables (pinyin) as an intermediate representation: a deep feed-forward sequential memory network (DFSMN) serves as the acoustic model and converts speech into syllables, and the syllable-to-character step is treated as a translation task handled by a Transformer language model. A simple method for reducing the Transformer's computational cost is also proposed: a Hadamard matrix is introduced to filter the attention weights, discarding values below a threshold so that decoding runs faster. Experiments on the Aishell-1 and Thchs30 datasets show that, compared with DFSMN plus a 3-gram model, the best system based on DFSMN and the improved Transformer reduces the character error rate by 3.2% relative, reaching 11.8%; compared with a BLSTM-based recognizer, the character error rate drops by 7.1% relative.
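The abstract only states that attention weights are filtered and values below a threshold are dropped; the exact Hadamard-matrix construction is not given. The following is a minimal NumPy sketch of threshold-based sparsification of scaled dot-product attention, where the `threshold` value and the element-wise masking are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def sparse_attention(Q, K, V, threshold=0.05):
    """Scaled dot-product attention with threshold-based sparsification.

    Weights below `threshold` are zeroed (an illustrative stand-in for the
    paper's filtering step) and the remaining weights are renormalized.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                         # raw attention scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)          # softmax
    weights = np.where(weights < threshold, 0.0, weights)   # drop small weights
    weights /= weights.sum(axis=-1, keepdims=True)          # renormalize survivors
    return weights @ V

# toy usage: 5 query positions, 5 key/value positions, dimension 8
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(5, 8)) for _ in range(3))
print(sparse_attention(Q, K, V).shape)   # (5, 8)
```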

3.
A similarity-based word clustering algorithm and a variable-length language model
Class-based statistical language models are an important way to counter data sparseness in statistical models. Traditional statistical clustering methods follow a greedy principle and usually take the corpus likelihood or perplexity as the evaluation criterion; their main drawbacks are slow clustering, strong sensitivity to initialization, and a tendency to fall into local optima. This paper defines a word similarity measure based on mutual information and, building on it, proposes a bottom-up hierarchical clustering algorithm. Experiments show clear improvements over traditional greedy statistical clustering in both computational complexity and clustering quality. To improve predictive power, a new method for generating a class-based variable-length language model (vari-gram) is also proposed.
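The paper's precise similarity definition and merge criterion are not given in the abstract; the sketch below illustrates the general idea under stated assumptions: pointwise mutual information of each word with its right-hand contexts serves as a feature vector, cosine similarity compares words, and one bottom-up step merges the most similar pair. The toy corpus and the cosine choice are assumptions for illustration.

```python
import math
from collections import Counter, defaultdict

def pmi_vectors(corpus):
    """Pointwise mutual information of each word with its right neighbours."""
    uni, bi = Counter(), Counter()
    for sent in corpus:
        uni.update(sent)
        bi.update(zip(sent, sent[1:]))
    n_uni, n_bi = sum(uni.values()), sum(bi.values())
    vecs = defaultdict(dict)
    for (w, c), n in bi.items():
        vecs[w][c] = math.log((n / n_bi) / ((uni[w] / n_uni) * (uni[c] / n_uni)))
    return vecs

def cosine(u, v):
    num = sum(u[k] * v[k] for k in set(u) & set(v))
    den = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

corpus = [["我", "喜欢", "音乐"], ["我", "喜欢", "电影"], ["他", "喜欢", "音乐"]]
vecs = pmi_vectors(corpus)
# one bottom-up step: find the most similar pair of words to merge into a class
pairs = [(cosine(vecs[a], vecs[b]), a, b) for a in vecs for b in vecs if a < b]
print(max(pairs))   # "我" and "他" share the context "喜欢", so they merge first
```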

4.
The performance of a large-vocabulary continuous speech recognition system depends heavily on the quality of its speech corpus, and the central step in corpus design is text selection. Traditional selection methods usually consider a single factor, which prevents the recognizer from making full use of linguistic information. The selection method used for this corpus jointly considers several factors, including triphone coverage, triphone coverage efficiency, triphone sparsity, and the distribution of common words, and the selection is fully automatic, making better use of the raw text and yielding a more informative result. The automatically selected set covers 94.1% of the triphones and 75.4% of the most common words, and its coverage efficiency and sparsity are also substantially better than those of traditional methods.
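As one piece of the multi-factor selection described, coverage-driven selection is commonly done greedily; the sketch below repeatedly picks the sentence that adds the most uncovered triphones. It is a simplified illustration only: the coverage-efficiency, sparsity, and common-word criteria mentioned in the abstract are not modelled, and the phone sequences are toy data.

```python
def triphones(phones):
    """Return the set of triphones (three consecutive phones) in one sentence."""
    return {tuple(phones[i:i + 3]) for i in range(len(phones) - 2)}

def greedy_select(sentences, budget):
    """Greedily pick `budget` sentences maximizing new triphone coverage."""
    covered, chosen = set(), []
    pool = {i: triphones(s) for i, s in enumerate(sentences)}
    for _ in range(budget):
        best = max(pool, key=lambda i: len(pool[i] - covered))
        chosen.append(best)
        covered |= pool.pop(best)
    return chosen, covered

# toy phone sequences (each string stands for one phone)
sents = [["b", "a", "n", "g"], ["a", "n", "g", "b"], ["x", "y", "z", "w"]]
idx, cov = greedy_select(sents, budget=2)
print(idx, len(cov))
```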

5.
Because data sparseness in Uyghur N-gram models degrades recognition performance, this study collects unigram-to-trigram statistics from a corpus of government documents and reports and constrains the models with additive, linear-interpolation, Witten-Bell, and Kneser-Ney smoothing. The results show that, in these experiments, Kneser-Ney smoothing greatly reduces the perplexity of the Uyghur N-gram model.
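Of the four smoothing methods compared, only the simplest is sketched here as a reference point: an add-k (additive) smoothed bigram model and the per-word perplexity used to compare smoothing methods. The toy training and test sentences are invented placeholders, not the paper's corpus, and the paper's Kneser-Ney implementation is not reproduced.

```python
import math
from collections import Counter

def bigram_add_k(train, k=0.5):
    """Add-k smoothed bigram model: p(w|h) = (c(h,w) + k) / (c(h) + k*V)."""
    uni = Counter(w for s in train for w in s)
    bi = Counter(p for s in train for p in zip(s, s[1:]))
    vocab = len(uni)
    return lambda h, w: (bi[(h, w)] + k) / (uni[h] + k * vocab)

def perplexity(prob, test):
    """Per-word perplexity of a bigram model on held-out sentences."""
    logp, n = 0.0, 0
    for sent in test:
        for h, w in zip(sent, sent[1:]):
            logp += math.log(prob(h, w))
            n += 1
    return math.exp(-logp / n)

train = [["w1", "w2", "w3", "w4"], ["w2", "w3", "w5"]]   # placeholder sentences
test = [["w2", "w3", "w4"]]
print(perplexity(bigram_add_k(train), test))
```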

6.
Recurrent neural network language models overcome the data sparseness of statistical language models and impose stronger long-distance constraints, making them an important modeling approach. During speech decoding, however, they cause the word lattice to be expanded too many times, blowing up the search space and making them hard to use directly. This paper proposes an N-best rescoring algorithm: the recurrent-network language model score is introduced through the N-best list to re-rank the recognition hypotheses, and a cache model is added to optimize the decoding process and obtain the best result. Experiments show that the method effectively reduces the word error rate of the recognition system.
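A minimal sketch of N-best rescoring follows: each hypothesis's first-pass score is combined with an external language model score and the list is re-ranked. The RNN language model is stubbed out as a placeholder function and the interpolation weight is an assumption; the cache model mentioned in the abstract is omitted.

```python
def rescore_nbest(nbest, lm_logprob, lm_weight=0.7):
    """Re-rank N-best hypotheses with an external language model.

    `nbest` is a list of (hypothesis_words, first_pass_score) pairs and
    `lm_logprob` maps a word sequence to its LM log-probability.
    """
    rescored = [(hyp, score + lm_weight * lm_logprob(hyp)) for hyp, score in nbest]
    return max(rescored, key=lambda x: x[1])[0]

# stand-in LM favouring shorter hypotheses (a real system would call an RNN LM here)
fake_lm = lambda words: -0.5 * len(words)
nbest = [(["今天", "天气", "很", "好"], -12.3), (["今天", "天气", "很好"], -12.5)]
print(rescore_nbest(nbest, fake_lm))
```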

7.
陈浪舟  黄泰翼 《软件学报》2000,11(7):971-978
Statistical language models play an important role in speech recognition, and for a domain-specific recognizer a topic-dependent language model performs far better than a domain-independent one. Traditional approaches to building domain-dependent models face two problems: domain-specific text is not as plentiful as general text, and a given article is often related to several topics, a fact that is not adequately accounted for during training. To address both problems, a new way of organizing domain-dependent training data is proposed, the fuzzy training set, and the domain-dependent language model is built on top of it. To further strengthen the model's predictive power, self-organizing learning is introduced into the training process, with good results.
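The core idea of a fuzzy training set, as described, is that one document can contribute to several topic-specific models. The sketch below shows one plausible reading under stated assumptions: each document carries topic-membership weights, and its n-gram counts are added to every topic's count table in proportion to those weights. The membership values and the paper's self-organizing training loop are not reproduced.

```python
from collections import Counter, defaultdict

def build_fuzzy_counts(documents):
    """Accumulate per-topic bigram counts weighted by topic membership.

    `documents` is a list of (words, memberships) pairs, where memberships
    maps a topic name to a weight in [0, 1] (weights need not sum to 1).
    """
    topic_counts = defaultdict(Counter)
    for words, memberships in documents:
        bigrams = list(zip(words, words[1:]))
        for topic, weight in memberships.items():
            for bg in bigrams:
                topic_counts[topic][bg] += weight
    return topic_counts

docs = [
    (["股市", "大幅", "上涨"], {"finance": 0.9, "politics": 0.2}),
    (["议会", "通过", "预算"], {"politics": 0.8, "finance": 0.4}),
]
counts = build_fuzzy_counts(docs)
print(counts["finance"].most_common(2))
```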

8.
In Mandarin large-vocabulary continuous speech recognition, decoding with the maximum a posteriori decision rule yields the hypothesis with the lowest sentence error rate, whereas recognition output is usually evaluated by character error rate. To minimize the character error rate instead, and taking the characteristics of Chinese fully into account, a Chinese-character confusion network algorithm is proposed that efficiently converts the word lattices produced by a Mandarin LVCSR system into character confusion networks. The minimum Bayes risk decision rule and the decoding procedure over the character confusion network are discussed in detail. Experiments on the 2005 HTRDP (863) evaluation set show that the character confusion network approach effectively lowers the character error rate of Mandarin LVCSR output.
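The decoding step over a confusion network can be illustrated compactly: each bin holds candidate characters with posterior probabilities, and choosing the highest-posterior character in every bin is the standard consensus decision that minimizes expected character error. This sketch assumes the network is already built (the lattice-to-network conversion described in the paper is not shown), and the posterior values are invented.

```python
def consensus_decode(confusion_network):
    """Pick the highest-posterior character in every confusion bin.

    `confusion_network` is a list of bins; each bin is a dict mapping a
    candidate character (or "" for a skip/epsilon arc) to its posterior.
    """
    result = []
    for bin_posteriors in confusion_network:
        best = max(bin_posteriors, key=bin_posteriors.get)
        if best:                      # drop epsilon (deletion) choices
            result.append(best)
    return "".join(result)

cn = [
    {"北": 0.9, "背": 0.1},
    {"京": 0.7, "经": 0.3},
    {"": 0.6, "的": 0.4},   # epsilon wins: this position is likely an insertion
]
print(consensus_decode(cn))   # -> "北京"
```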

9.
The quality of the vocabulary directly affects the performance of a Chinese language model, yet Chinese dictionary compilation is currently decoupled from language modeling: existing models are limited by vocabulary size and cannot reach their best performance, and the lack of domain-specific vocabularies makes it hard to build language models for particular domains. This paper aims to improve existing Chinese language models by constructing an optimized vocabulary and letting the system adapt automatically to the domain of the training corpus. Automatic vocabulary generation is first combined with Chinese language modeling in a unified iterative framework that produces an optimized vocabulary and a high-performance language model at the same time. Within this framework, the notion of the word-formation strength of a Chinese character is introduced to describe lexical information and is combined with statistical features in a multi-feature vocabulary generation algorithm. Finally, two heuristics are proposed that tune the system's parameters automatically according to the characteristics of the training corpus, so that the system adapts to its domain. Experiments show that the method yields a high-quality vocabulary together with a high-performance language model and adapts effectively to the domain of the training data.
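The abstract does not define word-formation strength precisely, so the sketch below uses one plausible proxy purely for illustration: the fraction of a character's occurrences that fall inside multi-character words of a segmented corpus. The corpus and the definition are assumptions, not the paper's.

```python
from collections import Counter

def formation_strength(segmented_corpus):
    """Fraction of each character's occurrences inside multi-character words.

    `segmented_corpus` is a list of sentences, each a list of word strings.
    """
    total, in_word = Counter(), Counter()
    for sentence in segmented_corpus:
        for word in sentence:
            for ch in word:
                total[ch] += 1
                if len(word) > 1:
                    in_word[ch] += 1
    return {ch: in_word[ch] / total[ch] for ch in total}

corpus = [["语言", "模型", "的", "性能"], ["词", "表", "影响", "模型"]]
print(formation_strength(corpus)["模"])   # 1.0: "模" only occurs inside words here
```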

10.
End-to-end speech recognition systems based on the Transformer have become widespread, but the multi-head self-attention mechanism is insensitive to positional information in the input sequence, and its flexible alignment generalizes poorly on noisy speech. To address these issues, a temporal convolutional network (TCN) is first introduced to strengthen the model's ability to capture positional information, and connectionist temporal classification (CTC) is then fused on top of it, yielding the TCN-Transformer-CTC model. Without any language model, experiments on the open-source Mandarin corpus AISHELL-1 show that TCN-Transformer-CTC reduces the character error rate by 10.91% relative compared with the Transformer, reaching a final character error rate of 5.31%, which confirms the competitiveness of the proposed model.

11.
12.
A Dialectal Chinese Speech Recognition Framework
A framework for dialectal Chinese speech recognition is proposed and studied, in which a relatively small dialectal Chinese (or in other words Chinese influenced by the native dialect) speech corpus and dialect-related knowledge are adopted to transform a standard Chinese (or Putonghua, abbreviated as PTH) speech recognizer into a dialectal Chinese speech recognizer. Two kinds of knowledge sources are explored: one is expert knowledge and the other is a small dialectal Chinese corpus. These knowledge sources provide information at four levels: phonetic level, lexicon level, language level, and acoustic decoder level. This paper takes Wu dialectal Chinese (WDC) as an example target language. The goal is to establish a WDC speech recognizer from an existing PTH speech recognizer based on the Initial-Final structure of the Chinese language and a study of how dialectal Chinese speakers speak Putonghua. The authors propose to use context-independent PTH-IF mappings (where IF means either a Chinese Initial or a Chinese Final), context-independent WDC-IF mappings, and syllable-dependent WDC-IF mappings (obtained from either experts or data), and combine them with the supervised maximum likelihood linear regression (MLLR) acoustic model adaptation method. To reduce the size of the multi-pronunciation lexicon introduced by the IF mappings, which might also enlarge the lexicon confusion and hence lead to the performance degradation, a Multi-Pronunciation Expansion (MPE) method based on the accumulated uni-gram probability (AUP) is proposed. In addition, some commonly used WDC words are selected and added to the lexicon. Compared with the original PTH speech recognizer, the resulting WDC speech recognizer achieves 10-18% absolute Character Error Rate (CER) reduction when recognizing WDC, with only a 0.62% CER increase when recognizing PTH. The proposed framework and methods are expected to work not only for Wu dialectal Chinese but also for other dialectal Chinese languages and even other languages.

13.
Language model (LM) data augmentation based on maximum likelihood estimation (MLE) suffers from exposure bias and cannot generate sampled data with long-span semantic information. This paper proposes an adversarial training strategy for LM data augmentation: an auxiliary convolutional neural network discriminator judges whether generated data are real, guiding a recurrent neural network generator to learn the distribution of the real data. LM data augmentation is essentially a discrete-sequence generation problem, and when the generator's output is discrete, the discriminator's error cannot be propagated back to the generator. To solve this, discrete sequence generation is cast as a reinforcement learning problem in which the discriminator's output serves as the reward for optimizing the generator; and since the discriminator can only score complete sequences, a Monte Carlo search is used to evaluate intermediate states of a partially generated sequence. N-best rescoring experiments for speech recognition show that, with limited text data, the proposed method keeps lowering the character error rate (CER) as the amount of training data grows and consistently outperforms MLE-based augmentation. With 6M words of training data, the method reduces the CER by 5.0% relative on THCHS30 and by 7.1% relative on AISHELL compared with the baseline system.
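The reinforcement-learning view described can be illustrated compactly: a partial sequence is completed several times by a rollout policy, the discriminator scores the completions, and the average score becomes the reward that weights the policy-gradient update for the chosen token. In the sketch below the generator, rollout policy, and discriminator are stand-in stubs, not the paper's RNN and CNN models, and the vocabulary is invented.

```python
import random

def mc_reward(prefix, rollout_policy, discriminator, max_len=10, n_rollouts=8):
    """Estimate the reward of a partial sequence by Monte Carlo rollouts:
    complete `prefix` several times and average the discriminator's score
    (probability that the completed sequence looks real)."""
    scores = []
    for _ in range(n_rollouts):
        seq = list(prefix)
        while len(seq) < max_len:
            seq.append(rollout_policy(seq))
        scores.append(discriminator(seq))
    return sum(scores) / n_rollouts

# stand-in components (a real system would use an RNN generator and CNN discriminator)
vocab = ["今天", "天气", "很", "好", "坏"]
rollout_policy = lambda seq: random.choice(vocab)
discriminator = lambda seq: 1.0 if "好" in seq else 0.2   # "real-looking" if it contains 好

prefix = ["今天", "天气"]
for word in vocab:
    # per-token reward used to weight the policy-gradient update for choosing `word`
    print(word, round(mc_reward(prefix + [word], rollout_policy, discriminator), 2))
```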

14.
This paper describes the use of a neural network language model for large vocabulary continuous speech recognition. The underlying idea of this approach is to attack the data sparseness problem by performing the language model probability estimation in a continuous space. Highly efficient learning algorithms are described that enable the use of training corpora of several hundred million words. It is also shown that this approach can be incorporated into a large vocabulary continuous speech recognizer using a lattice rescoring framework at a very low additional processing time. The neural network language model was thoroughly evaluated in a state-of-the-art large vocabulary continuous speech recognizer for several international benchmark tasks, in particular the NIST evaluations on broadcast news and conversational speech recognition. The new approach is compared to four-gram back-off language models trained with modified Kneser–Ney smoothing which has often been reported to be the best known smoothing method. Usually the neural network language model is interpolated with the back-off language model. In that way, consistent word error rate reductions for all considered tasks and languages were achieved, ranging from 0.4% to almost 1% absolute.
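The interpolation mentioned in the abstract is the standard linear combination of the two models' per-word probabilities, used when rescoring lattices or N-best lists. The sketch below shows that combination with both component models stubbed out as constant-probability placeholders; the weight `lam` is an assumption.

```python
import math

def interpolated_logprob(words, p_nn, p_backoff, lam=0.5):
    """Sentence log-probability under a linear interpolation of two LMs:
    p(w|h) = lam * p_nn(w|h) + (1 - lam) * p_backoff(w|h)."""
    logp = 0.0
    for i, w in enumerate(words):
        h = tuple(words[:i])
        logp += math.log(lam * p_nn(w, h) + (1 - lam) * p_backoff(w, h))
    return logp

# stand-in component models returning constant probabilities
p_nn = lambda w, h: 0.02
p_backoff = lambda w, h: 0.01
print(interpolated_logprob(["the", "meeting", "is", "adjourned"], p_nn, p_backoff))
```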

15.
16.
Natural languages are known for their expressive richness. Many sentences can be used to represent the same underlying meaning. Only modelling the observed surface word sequence can result in poor context coverage and generalization, for example, when using n-gram language models (LMs). This paper proposes a novel form of language model, the paraphrastic LM, that addresses these issues. A phrase level paraphrase model statistically learned from standard text data with no semantic annotation is used to generate multiple paraphrase variants. LM probabilities are then estimated by maximizing their marginal probability. Multi-level language models estimated at both the word level and the phrase level are combined. An efficient weighted finite state transducer (WFST) based paraphrase generation approach is also presented. Significant error rate reductions of 0.5–0.6% absolute were obtained over the baseline n-gram LMs on two state-of-the-art recognition tasks for English conversational telephone speech and Mandarin Chinese broadcast speech using a paraphrastic multi-level LM modelling both word and phrase sequences. When it is further combined with word and phrase level feed-forward neural network LMs, significant error rate reductions of 0.9% absolute (9% relative) and 0.5% absolute (5% relative) were obtained over the baseline n-gram and neural network LMs respectively.

17.
Language modeling is the problem of predicting words based on histories containing words already hypothesized. Two key aspects of language modeling are effective history equivalence classification and robust probability estimation. The solution of these aspects is hindered by the data sparseness problem. Application of random forests (RFs) to language modeling deals with the two aspects simultaneously. We develop a new smoothing technique based on randomly grown decision trees (DTs) and apply the resulting RF language models to automatic speech recognition. This new method is complementary to many existing ones dealing with the data sparseness problem. We study our RF approach in the context of n-gram type language modeling in which n − 1 words are present in a history. Unlike regular n-gram language models, RF language models have the potential to generalize well to unseen data, even when histories are longer than four words. We show that our RF language models are superior to the best known smoothing technique, the interpolated Kneser–Ney smoothing, in reducing both the perplexity (PPL) and word error rate (WER) in large vocabulary state-of-the-art speech recognition systems. In particular, we will show statistically significant improvements in a contemporary conversational telephony speech recognition system by applying the RF approach only to one of its many language models.
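The forest idea can be conveyed with a heavily simplified toy: each "tree" assigns histories to equivalence classes, class-conditional word probabilities are estimated from counts, and the forest probability is the average over trees. The sketch below replaces the randomly grown decision trees of the paper with random partitions of the previous word and uses add-one smoothing inside each class; all of these simplifications are assumptions for illustration only.

```python
import random
from collections import Counter, defaultdict

def train_rf_lm(corpus, n_trees=10, n_classes=4, seed=0):
    """Toy 'random forest' bigram LM: each tree randomly partitions the
    previous word into equivalence classes and estimates p(w | class);
    the forest probability is the average over trees."""
    rng = random.Random(seed)
    vocab = sorted({w for s in corpus for w in s})
    trees = []
    for _ in range(n_trees):
        cls = {w: rng.randrange(n_classes) for w in vocab}      # random history partition
        counts, totals = defaultdict(Counter), Counter()
        for s in corpus:
            for h, w in zip(s, s[1:]):
                counts[cls[h]][w] += 1
                totals[cls[h]] += 1
        trees.append((cls, counts, totals))

    def prob(h, w):
        ps = []
        for cls, counts, totals in trees:
            c = cls.get(h, 0)
            ps.append((counts[c][w] + 1) / (totals[c] + len(vocab)))  # add-one per class
        return sum(ps) / len(ps)
    return prob

corpus = [["a", "cat", "sat"], ["a", "dog", "sat"], ["the", "cat", "ran"]]
p = train_rf_lm(corpus)
print(p("a", "cat"), p("the", "dog"))
```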

18.
Semantic analysis and structured language models
李明琴  李涓子  王作英  陆大? 《软件学报》2005,16(9):1523-1533
An integrated semantic analysis system is proposed and a structured language model is built on top of it. The system automatically analyzes the sense of each word in a sentence and the semantic dependency relations between words, achieving 90.85% word-sense tagging accuracy and 75.84% semantic dependency structure accuracy. To capture structural information and long-distance dependencies, two language models based on semantic structure are studied and analyzed. Finally, both models are evaluated on a Chinese speech recognition task: compared with a trigram language model, the best-performing semantic-structure model, the head-word trigram model, lowers the absolute character error rate by 0.8%, an 8% relative reduction.

19.
20.
This paper explores the interaction between a language model’s perplexity and its effect on the word error rate of a speech recognition system. Much recent research has indicated that these two measures are not as well correlated as was once thought, and many examples exist of models which have a much lower perplexity than the equivalent N-gram model, yet lead to no improvement in recognition accuracy. This paper investigates the reasons for this apparent discrepancy. Perplexity’s calculation is based solely on the probabilities of words contained within the test text; it disregards the probabilities of alternative words which will be competing with the correct word within the decoder. It is shown that by considering the probabilities of the alternative words it is possible to derive measures of language model quality which are better correlated with word error rate than perplexity is. Furthermore, optimizing language model parameters with respect to these new measures leads to a significant reduction in the word error rate.
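The contrast the abstract draws can be made concrete: perplexity looks only at the probability assigned to each reference word, while a decoder-aware measure also asks how many competing words the model ranks above it. The sketch below computes both; the rank-based quantity is an illustrative stand-in, not the paper's exact metric, and the vocabulary, test sentence, and probability table are invented.

```python
import math

def perplexity_and_mean_rank(test_words, vocab, prob):
    """Perplexity uses only p(reference word | history); the mean rank of the
    reference word among all vocabulary candidates also reflects how strong
    the competing alternatives are at each decoding step."""
    logp, ranks = 0.0, []
    for i, w in enumerate(test_words):
        h = tuple(test_words[:i])
        p_ref = prob(w, h)
        logp += math.log(p_ref)
        ranks.append(1 + sum(prob(v, h) > p_ref for v in vocab))
    ppl = math.exp(-logp / len(test_words))
    return ppl, sum(ranks) / len(ranks)

vocab = ["we", "meet", "met", "at", "ate", "noon"]
table = {"we": 0.3, "meet": 0.2, "met": 0.18, "at": 0.15, "ate": 0.12, "noon": 0.05}
prob = lambda w, h: table[w]   # toy history-independent model
print(perplexity_and_mean_rank(["we", "meet", "at", "noon"], vocab, prob))
```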
