Similar Literature
18 similar records found
1.
Research on Uyghur Continuous Speech Recognition Technology   (Cited by: 1; self-citations: 0, other citations: 1)
This paper mainly describes techniques for continuous speech recognition of Uyghur, covering the acoustic model and the language model. For the acoustic model, it introduces acoustic modeling of Uyghur continuous speech recognition based on the hidden Markov model (HMM). For the language model, it compares the strengths and weaknesses of grammar-based and statistics-based approaches.

2.
Taking Uyghur as an example, this paper studies continuous speech recognition methods for minority languages that lack natural corpora. HTK is used to generate seed models from a small amount of manually labelled data, which then bootstrap acoustic-model construction on a large body of speech; the palmkit toolkit generates a statistical language model, and continuous speech recognition is carried out with the Julius decoder. In the experiments, monophone acoustic models were built from 6,400 short utterances of free speech by 64 native Uyghur speakers, and a class-based 3-gram language model was generated from 100 MB of text and a 60,000-word lexicon. Test results show a recognition rate of 72.5%, 4.2 percentage points higher than using HTK alone.
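The bootstrapping pipeline this abstract outlines can be pictured as a thin wrapper around standard HTK and Julius recipes. The sketch below is a minimal illustration under assumed file names (config, train.scp, phones.mlf, main.jconf); it is not the authors' actual scripts, and the usual HTK step of cloning the prototype into per-phone model definitions is omitted for brevity.

```python
# Minimal sketch of HTK flat-start training plus Julius decoding,
# assuming standard recipe file names; not the paper's actual scripts.
import os
import subprocess

def run(cmd):
    """Run one toolkit command, failing loudly on error."""
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

for d in ("hmm0", "hmm1", "hmm2", "hmm3"):
    os.makedirs(d, exist_ok=True)

# 1. Flat-start: global mean/variance from the small hand-labelled seed data.
run(["HCompV", "-C", "config", "-f", "0.01", "-m",
     "-S", "train.scp", "-M", "hmm0", "proto"])

# (Cloning proto into per-phone hmmdefs/macros is omitted here.)

# 2. Embedded Baum-Welch re-estimation over the large speech corpus,
#    iterated as in standard HTK recipes to grow the acoustic model.
src = "hmm0"
for i in range(1, 4):
    dst = "hmm%d" % i
    run(["HERest", "-C", "config", "-I", "phones.mlf",
         "-t", "250.0", "150.0", "1000.0", "-S", "train.scp",
         "-H", src + "/macros", "-H", src + "/hmmdefs",
         "-M", dst, "monophones"])
    src = dst

# 3. Decode with Julius using the palmkit-built class 3-gram
#    (the jconf file name is an assumption).
run(["julius", "-C", "main.jconf"])
```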

3.
Current speech recognition technology is framed mainly as pattern recognition and demands closely matched data; its handling of dialects, accents, and colloquial speech still faces major bottlenecks, and even standard accents require a high degree of cooperation from the user. This paper surveys the state of research in speech signal processing, introduces several common techniques, and analyzes and discusses the applications and development prospects of speech signal processing technology.

4.
In view of the current state of confidentiality supervision technology, this paper analyzes the main scope and the limitations of existing supervision of classified information, and argues for the necessity and importance of supervising classified information in speech. It reviews the basic principles of the core enabling technology, speech recognition, and discusses concrete methods and technical routes, selecting large-vocabulary continuous speech recognition as the underlying supporting technology. On this basis it proposes a confidence-based matching algorithm for speech confidentiality supervision: homophone expansion is used to raise the recall of supervised data, and class-confidence scoring is used to raise the precision of the recalled data, so that recall is improved while precision is maintained.
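The two-step matching algorithm described here can be sketched as a toy pipeline: homophone expansion widens the match set to raise recall, then a confidence threshold prunes false alarms to keep precision. The homophone table, the stand-in confidence scorer, and the threshold below are all illustrative assumptions, not the paper's data or model.

```python
# Toy sketch of recall-then-precision keyword supervision.
HOMOPHONES = {"机密": ["机密", "级密"]}   # hypothetical homophone table

def expand_keywords(keywords):
    """Homophone expansion: widen each keyword to same-pronunciation variants."""
    expanded = set()
    for w in keywords:
        expanded.add(w)
        expanded.update(HOMOPHONES.get(w, []))
    return expanded

def supervise(transcripts, keywords, confidence, threshold=0.7):
    """Return transcripts that hit an expanded keyword with high class confidence."""
    targets = expand_keywords(keywords)
    hits = [t for t in transcripts if any(w in t for w in targets)]   # recall step
    return [t for t in hits if confidence(t) >= threshold]           # precision step

found = supervise(
    ["会议涉及机密内容", "今天天气不错"],
    keywords={"机密"},
    confidence=lambda t: 0.9,      # stand-in for the class-confidence model
)
print(found)                       # -> ['会议涉及机密内容']
```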

5.
Speech recognition makes sound "readable", allowing computers to "understand" human language and respond; it is one of the key technologies through which artificial intelligence achieves human-computer interaction. This paper reviews the history of speech recognition, explains its basic principles and framework, analyzes current research hotspots and difficulties, and concludes with a summary and an outlook on future research.

6.
Isolated-word speech recognition, which uses template matching, is one of the core techniques of speech recognition. First, the user speaks each word in the vocabulary once, and its feature-vector sequence is stored in a template library. Then the feature vectors of the input speech are compared for similarity against every template in the library, and the most similar template is output as the recognition result. This paper surveys the state of research in isolated-word speech recognition, introduces several common techniques, and discusses their applications and development prospects.
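The template-matching loop described in this abstract is easy to make concrete. Below is a minimal matcher using dynamic time warping (DTW) as one common choice of similarity measure; the abstract does not name a specific measure, so DTW is an assumption, and feature extraction (e.g. MFCCs) is presumed to happen elsewhere.

```python
# Minimal DTW-based isolated-word template matcher (similarity measure assumed).
import numpy as np

def dtw_distance(a, b):
    """DTW cost between two (T, D) feature sequences; smaller = more similar."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recognize(features, templates):
    """Return the vocabulary word whose stored template matches best."""
    return min(templates, key=lambda w: dtw_distance(features, templates[w]))
```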

7.
This paper surveys recent progress on search-space representations and related search methods in large-vocabulary continuous speech recognition, analyzes how the search-space representation and search methods affect recognition performance, and discusses open problems and future directions in this field.

8.
That a computer can understand human speech means the era of artificial intelligence is approaching. With IBM's support, Beijing Zhongzi Hanwang Technology developed, on top of ViaVoice, a new speech recognition and handwriting input system: the Hanwang Dictation System. It integrates IBM's core speech recognition technology with Hanwang's handwriting recognition input system into a complementary non-keyboard input system, a simple and convenient text-entry tool that anyone can use. This not only raises Chinese character input speed but also lets speakers express themselves more naturally and fluently. The Hanwang Dictation System offers Chinese speech dictation input, voice commands, editing, and printing; reflecting the characteristics of Chinese itself, such as its many homophones, its tones, its words…

9.
With the rapid development of multimedia information and communication technology, multilingual speech data on the Internet is growing rapidly. For speech recognition, the core technology of speech analysis and processing, how to extend processing capability quickly from a few high-resource major languages such as Chinese and English to many more low-resource languages is a bottleneck the technology urgently needs to break through. This paper attempts to summarize the latest progress in acoustic modeling and to discuss the difficulties that traditional speech recognition technology may face in crossing from monolingual to multilingual recognition, and…

10.
Speech recognition is an integrated technology involving many disciplines, and it has found wide practical application in industry, the military, and medicine, in product inspection, and in human-machine voice communication. It has long been a research hotspot, but existing systems run slowly, cost too much, and are inconvenient to use; these drawbacks limit recognition speed, hardware implementation, and deployment. Speech recognition for intelligent robots is especially difficult in noisy environments, and research on industrial intelligent robot technology for recognition is attracting growing attention.

11.
We compared the performance of an automatic speech recognition system using n-gram language models, HMM acoustic models, as well as combinations of the two, with the word recognition performance of human subjects who either had access to only acoustic information, had information only about local linguistic context, or had access to a combination of both. All speech recordings used were taken from Japanese narration and spontaneous speech corpora. Humans have difficulty recognizing isolated words taken out of context, especially when taken from spontaneous speech, partly due to word-boundary coarticulation. Human recognition performance improves dramatically when one or two preceding words are added. Short words in Japanese mainly consist of post-positional particles (i.e. wa, ga, wo, ni, etc.), which are function words located just after content words such as nouns and verbs. So the predictability of short words is very high within the context of the one or two preceding words, and thus recognition of short words is drastically improved. Providing even more context further improves human prediction performance under text-only conditions (without acoustic signals). It also improves speech recognition, but the improvement is relatively small. Recognition experiments using an automatic speech recognizer were conducted under conditions almost identical to the experiments with humans. The performance of the acoustic models without any language model, or with only a unigram language model, was greatly inferior to human recognition performance with no context. In contrast, prediction performance using a trigram language model was superior or comparable to human performance when given a preceding and a succeeding word. These results suggest that we must improve our acoustic models rather than our language models to make automatic speech recognizers comparable to humans in recognition performance under conditions where the recognizer has limited linguistic context.
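For readers unfamiliar with the language-model side of this comparison, a count-based trigram predictor of the kind used here can be sketched in a few lines. The toy corpus and the plain maximum-likelihood estimates (no smoothing) are illustrative assumptions, not the study's actual model.

```python
# Toy trigram predictor: more left context concentrates probability mass.
from collections import Counter, defaultdict

def train_trigrams(sentences):
    """Count trigrams over whitespace-tokenized sentences."""
    tri = defaultdict(Counter)
    for s in sentences:
        tokens = ["<s>", "<s>"] + s.split() + ["</s>"]
        for u, v, w in zip(tokens, tokens[1:], tokens[2:]):
            tri[(u, v)][w] += 1
    return tri

def predict(tri, u, v):
    """Most probable next word given the two preceding words, if any."""
    counts = tri.get((u, v))
    return counts.most_common(1)[0][0] if counts else None

lm = train_trigrams(["the cat sat", "the cat sat down"])
print(predict(lm, "the", "cat"))   # -> 'sat'
```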

12.
The two- or three-layered neural networks (2LNN, 3LNN), which originated from stereovision neural networks, are applied to speech recognition. To accommodate sequential data flow, we consider a window through which the new acoustic data enter and from which the final neural activities are output. Inside the window, a recurrent neural network develops neural activity toward a stable point. The process is called winner-take-all (WTA) with cooperation and competition. The resulting neural activities clearly showed recognition of the continuous speech of a word. The string of phonemes obtained is compared with reference words by using a dynamic programming method. The resulting recognition rate was 96.7% for 100 words spoken by nine male speakers, compared with 97.9% by a hidden Markov model (HMM) with three states and a single Gaussian distribution. These results, which are close to those of the HMM, seem important because the architecture of the neural network is very simple, and the number of parameters in the neural-net equations is small and fixed. This work was presented in part at the Fifth International Symposium on Artificial Life and Robotics, Oita, Japan, January 26–28, 2000.
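The cooperation-and-competition dynamics can be illustrated schematically: each unit excites itself and inhibits its rivals until one hypothesis dominates. The gains, saturation nonlinearity, and iteration count below are illustrative assumptions, not the paper's equations.

```python
# Schematic winner-take-all dynamics with self-excitation and lateral inhibition.
import numpy as np

def wta(evidence, self_excite=1.2, inhibit=0.6, steps=200, dt=0.1):
    """evidence: (N,) acoustic evidence for N units; returns final activities."""
    x = np.zeros_like(evidence, dtype=float)
    for _ in range(steps):
        # cooperation: self-excitation; competition: inhibition from the rest
        recurrent = self_excite * x - inhibit * (x.sum() - x)
        x += dt * (-x + np.tanh(np.maximum(0.0, evidence + recurrent)))
    return x

evidence = np.array([0.2, 0.9, 0.4])   # toy evidence for three phoneme units
print(np.argmax(wta(evidence)))        # -> 1, the unit with the strongest evidence
```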

13.
马仕瑛. Computer Era, 2020(5): 27-29, 37
To help more people learn about and use minority-language speech products, effectively overcome the language barriers between China's minority regions and other areas, and promote exchange among ethnic groups, this paper takes domestic speech products for Uyghur, Mongolian, and Tibetan based on speech recognition technology as its research object. Surveying their development and application, it finds that current products concentrate on three categories: voice input methods, speech translation software, and transcription products. On this basis, it analyzes the impact of these products and looks ahead to the development prospects of related speech products.

14.
Recognition of emotion in speech has recently matured into one of the key disciplines in speech analysis, serving next-generation human-machine interaction and communication. However, unlike automatic speech recognition, emotion recognition from an isolated word or phrase is inadequate for conversation, because a complete emotional expression may span several sentences and may end on any word in a dialogue. In this paper, we present a segment-based emotion recognition approach for continuous Mandarin Chinese speech. In this approach, the unit of recognition is not a phrase or a sentence but an emotional expression in dialogue. To that end, the following procedures are presented: First, we evaluate the performance of several classifiers in short-sentence speech emotion recognition architectures. The experimental results show that the WD-KNN classifier achieves the best accuracy for 5-class emotion recognition among the five classification techniques. We then implemented a continuous Mandarin Chinese speech emotion recognition system, based on WD-KNN, with an emotion radar chart; the system can represent the intensity of each emotion component in speech. The approach shows how emotions can be recognized from speech signals and, in turn, how emotional states can be visualized.
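As a rough illustration of the classifier the abstract names, below is one common weighted-distance KNN formulation: for each emotion class, sum weighted distances to its k nearest training samples and choose the class with the smallest total. The 1/rank weighting is an assumption; the paper's exact scheme may differ.

```python
# Hedged sketch of a weighted-distance KNN (WD-KNN-style) emotion classifier.
import numpy as np

def wd_knn(x, train, k=5):
    """x: (D,) feature vector; train: dict label -> (N, D) training features."""
    weights = 1.0 / np.arange(1, k + 1)          # nearer neighbours count more
    scores = {}
    for label, X in train.items():
        d = np.sort(np.linalg.norm(X - x, axis=1))[:k]
        scores[label] = np.dot(weights[:len(d)], d)
    return min(scores, key=scores.get)           # smallest weighted distance wins

train = {"angry": np.random.randn(20, 12),
         "happy": np.random.randn(20, 12) + 1.0}
print(wd_knn(np.zeros(12), train))               # predicted emotion label
```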

15.
Monaural speech separation and recognition challenge   (Cited by: 2; self-citations: 1, other citations: 1)
Robust speech recognition in everyday conditions requires the solution to a number of challenging problems, not least the ability to handle multiple sound sources. The specific case of speech recognition in the presence of a competing talker has been studied for several decades, resulting in a number of quite distinct algorithmic solutions whose focus ranges from modeling both target and competing speech to speech separation using auditory grouping principles. The purpose of the monaural speech separation and recognition challenge was to permit a large-scale comparison of techniques for the competing talker problem. The task was to identify keywords in sentences spoken by a target talker when mixed into a single channel with a background talker speaking similar sentences. Ten independent sets of results were contributed, alongside a baseline recognition system. Performance was evaluated using common training and test data and common metrics. Listeners’ performance in the same task was also measured. This paper describes the challenge problem, compares the performance of the contributed algorithms, and discusses the factors which distinguish the systems. One highlight of the comparison was the finding that several systems achieved near-human performance in some conditions, and one out-performed listeners overall.

16.
Research and Development of Speech Recognition Technology   (Cited by: 1; self-citations: 0, other citations: 1)
This paper reviews the development history of speech recognition technology, describes the basic principles of a speech recognition system, introduces several basic recognition methods, and discusses the problems facing the technology and its development prospects.

17.
A new sparse decomposition algorithm for speech signals based on matching pursuit (MP) is proposed. By partitioning the over-complete atom dictionary used in sparse decomposition, the inner-product computation is converted into a cross-correlation, and, exploiting the fact that both the speech signal and the atoms are real-valued, the cross-correlation is computed quickly with the fast Hartley transform (FHT). Compared with an FFT-based implementation of MP sparse decomposition, this halves the storage requirement and speeds up the decomposition by about 24.8%. The improved algorithm is then applied to feature extraction: the extracted features, combined with the signal's Mel-frequency cepstral coefficients, form the feature vector, which is classified with a support vector machine (SVM). Experiments verify the effectiveness of the method.
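The structure of the matching-pursuit loop being accelerated is shown below in plain NumPy. It computes the atom correlations directly, so it illustrates the algorithm but not the FHT speed-up itself; the dictionary size and iteration count are illustrative assumptions.

```python
# Minimal matching pursuit over a finite dictionary of unit-norm atoms.
import numpy as np

def matching_pursuit(signal, atoms, n_iter=10):
    """atoms: (K, N) rows of unit-norm atoms; returns coefficients + residual."""
    residual = signal.astype(float).copy()
    coef = np.zeros(len(atoms))
    for _ in range(n_iter):
        inner = atoms @ residual            # inner products with every atom
        k = np.argmax(np.abs(inner))        # pick the best-matching atom
        coef[k] += inner[k]
        residual -= inner[k] * atoms[k]     # peel off its contribution
    return coef, residual

rng = np.random.default_rng(0)
atoms = rng.standard_normal((64, 256))
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)
signal = 3.0 * atoms[7] + 0.1 * rng.standard_normal(256)
coef, _ = matching_pursuit(signal, atoms, n_iter=5)
print(np.argmax(np.abs(coef)))              # -> 7, the planted atom
```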

18.
柏财通  崔翛龙  郑会吉  李爱 《计算机应用》2022,42(10):3217-3223
To address the rising cost of labeling neural-network training data and the performance loss that noise inflicts on speech recognition systems, a training algorithm for a robust speech recognition model based on self-supervised knowledge transfer is proposed. First, three hand-crafted features are extracted from the raw speech samples in a preprocessing stage. Then, during training, the high-level features produced by the feature-extraction network are passed through three shallow networks that fit the hand-crafted features extracted in preprocessing; at the same time, the feature-extraction front end and the speech recognition back end are cross-trained with their loss functions merged. Finally, through gradient back-propagation the feature-extraction network learns to extract high-level features that better support noise-robust recognition, achieving knowledge transfer from hand-crafted features along with denoising, and making efficient use of the training data. In a military equipment-control scenario, tests on noise-corrupted versions of three open-source Chinese speech recognition datasets, THCHS-30, AISHELL-1, and ST-CMDS, as well as a dataset of equipment-control commands, show that the word error rate of the proposed training algorithm can be reduced to 0.12; the method not only trains a robust recognition model but also, through self-supervised knowledge transfer, improves the utilization of training samples and can accomplish equipment-control tasks.
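The merged objective described here, an ASR loss plus three shallow heads fitting hand-crafted features, can be sketched as follows in PyTorch. The layer sizes, feature dimensions, GRU encoder, and MSE fitting loss are assumptions for illustration, not the paper's configuration.

```python
# Schematic joint objective: ASR loss + weighted feature-fitting losses.
import torch
import torch.nn as nn

class RobustASR(nn.Module):
    def __init__(self, feat_dim=80, hid=256, vocab=4000, art_dims=(13, 80, 1)):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hid, num_layers=3, batch_first=True)
        self.asr_head = nn.Linear(hid, vocab)            # recognition back end
        self.art_heads = nn.ModuleList(
            nn.Linear(hid, d) for d in art_dims)         # shallow fitting nets

    def forward(self, x):
        h, _ = self.encoder(x)                           # high-level features
        return self.asr_head(h), [head(h) for head in self.art_heads]

def joint_loss(asr_loss, art_preds, art_targets, alpha=0.1):
    """Merged loss: ASR loss plus weighted sum of feature-fitting losses."""
    aux = sum(nn.functional.mse_loss(p, t) for p, t in zip(art_preds, art_targets))
    return asr_loss + alpha * aux

model = RobustASR()
x = torch.randn(2, 50, 80)      # (batch, frames, filterbank dims), toy input
logits, art = model(x)
```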

