Similar Documents
18 similar documents found (search time: 296 ms)
1.
Two single-stream, monophone dynamic Bayesian network (DBN) models are constructed for continuous speech recognition from audio and from video features, and phoneme-level time segmentation is obtained from the explicit relationship between each word and its constituent phonemes. Experimental results show that, for audio-based recognition at low signal-to-noise ratios (0-15 dB), the DBN model's recognition rate exceeds the HMM's by 12.79% on average, while on clean speech the DBN phoneme time segmentation closely matches that of a triphone HMM. For video-based recognition, the DBN model's recognition rate is 2.47% higher than the HMM's. Finally, the asynchrony between the audio and video phoneme segmentations is analyzed, laying the groundwork for multi-stream DBN audio-visual continuous speech recognition and for determining the audio-video asynchrony.
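
As a rough illustration of the phoneme time segmentation described above, the sketch below runs Viterbi forced alignment of frames against a word's phoneme chain. The self-loop probability, the `log_obs` emission matrix, and the function name are assumptions for illustration, not the paper's DBN inference; in a real system `log_obs` would come from GMM or DBN emission models.

```python
import numpy as np

def align_phonemes(log_obs, self_loop=0.9):
    """Viterbi forced alignment of frames to a left-to-right phoneme chain.

    log_obs: (T, P) log-likelihood of each frame under each of the P
    phonemes of the word, in order. Returns the frame index at which
    each phoneme starts.
    """
    T, P = log_obs.shape
    stay, move = np.log(self_loop), np.log(1.0 - self_loop)
    delta = np.full((T, P), -np.inf)
    back = np.zeros((T, P), dtype=int)
    delta[0, 0] = log_obs[0, 0]
    for t in range(1, T):
        for p in range(P):
            cands = [delta[t - 1, p] + stay]              # stay in phoneme p
            if p > 0:
                cands.append(delta[t - 1, p - 1] + move)  # advance from p-1
            back[t, p] = int(np.argmax(cands))            # 1 means "advanced"
            delta[t, p] = max(cands) + log_obs[t, p]
    # Trace back the phoneme index assigned to every frame.
    path = np.zeros(T, dtype=int)
    path[-1] = P - 1
    for t in range(T - 1, 0, -1):
        path[t - 1] = path[t] - back[t, path[t]]
    return [int(np.argmax(path == p)) for p in range(P)]  # start frames
```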

2.
To realize text/speech-driven talking-head animation, this paper proposes a lip-contour feature extraction method based on a Bayesian tangent shape model and a lipreading system based on a dynamic Bayesian network (DBN). A time-segmented viseme sequence is derived from the relationship between each word and its constituent visemes. For comparison, the phoneme recognition outputs of a phoneme DBN model and of an HMM are mapped onto viseme sequences. As evaluation criteria, an absolute viseme segmentation accuracy and two relative accuracies, one based on image features and one on lip geometry, are proposed. Experiments show that the DBN model outperforms the HMM, and that the viseme-based DBN model provides the best mouth shapes for talking-head animation.
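
The mapping from recognized phonemes to a viseme sequence can be illustrated with a small sketch. The table entries and viseme names below are hypothetical placeholders (real phoneme-viseme tables vary); consecutive segments that share a viseme are merged into one mouth shape.

```python
# Hypothetical phoneme-to-viseme table; real tables (e.g. MPEG-4) differ.
PHONEME_TO_VISEME = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    "f": "labiodental", "v": "labiodental",
    "aa": "open", "ae": "open", "iy": "spread", "uw": "rounded",
}

def phonemes_to_visemes(segments):
    """Map time-stamped phoneme segments [(phone, start, end), ...] to a
    viseme sequence, merging consecutive segments that share a viseme."""
    out = []
    for phone, start, end in segments:
        vis = PHONEME_TO_VISEME.get(phone, "neutral")
        if out and out[-1][0] == vis:            # same mouth shape:
            out[-1] = (vis, out[-1][1], end)     # extend the segment
        else:
            out.append((vis, start, end))
    return out

print(phonemes_to_visemes([("b", 0.00, 0.08), ("m", 0.08, 0.15),
                           ("aa", 0.15, 0.30)]))
# [('bilabial', 0.0, 0.15), ('open', 0.15, 0.3)]
```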

3.
Large-vocabulary continuous speech recognition based on triphone dynamic Bayesian network models
To address coarticulation in continuous speech, context-dependent triphone units are introduced into the word-phoneme DBN (WP-DBN) and word-phoneme-state DBN (WPS-DBN) models, yielding two novel single-stream DBN models: the word-triphone DBN (WT-DBN) and the word-triphone-state DBN (WTS-DBN). The WTS-DBN is a triphone model whose recognition unit is the triphone; it explicitly emulates a hidden Markov model (HMM) with tied triphone states. Large-vocabulary recognition experiments show that, on clean speech, the WTS-DBN model improves the recognition rate over the HMM, WT-DBN, WP-DBN, and WPS-DBN models by 20.53%, 40.77%, 42.72%, and 7.52%, respectively.
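
A minimal sketch of the triphone expansion underlying such word-triphone models: each phoneme is rewritten with its left and right context. The `sil` padding at word boundaries and the `l-p+r` notation are common conventions assumed here, not details taken from the paper.

```python
def word_to_triphones(phones):
    """Expand a word's phoneme sequence into within-word triphone units
    in the usual left-context - phone + right-context notation; 'sil'
    pads the word boundaries (an assumed convention)."""
    padded = ["sil"] + list(phones) + ["sil"]
    return [f"{padded[i - 1]}-{padded[i]}+{padded[i + 1]}"
            for i in range(1, len(padded) - 1)]

print(word_to_triphones(["s", "ih", "k", "s"]))
# ['sil-s+ih', 's-ih+k', 'ih-k+s', 'k-s+sil']
```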

4.
In recent years, DBN-based speech recognition has attracted growing attention because dynamic Bayesian networks (DBNs) are more interpretable, decomposable, and extensible than traditional hidden Markov models (HMMs). However, research on DBN-based recognition has focused mainly on isolated speech; the frameworks and decoding algorithms for continuous speech are far less mature and flexible than their HMM counterparts. To improve the flexibility and scalability of DBN-based continuous recognition, the token-passing model, which solved these problems well in HMM-based continuous recognition, is modified to suit DBNs. On this basis, a basic framework for DBN-based continuous speech recognition is proposed, together with a new decoding algorithm under this framework that is independent of the upper-level language model. A toolkit, DTK, developed by the authors on this framework and usable for continuous speech recognition and other temporal systems, is also introduced.
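
The token-passing scheme the paper adapts can be sketched compactly. The version below is the classic HMM formulation with uniform transition scores and a hypothetical `log_obs_fn` emission hook; it is not the authors' DBN variant or their DTK toolkit.

```python
import math

def token_passing(models, log_obs_fn, T):
    """Minimal token-passing decoder over left-to-right word models.

    models: {word: n_states}; log_obs_fn(t, word, state) -> frame
    log-likelihood (assumed hook). A token is (score, word history)."""
    NEG = -math.inf
    # tokens[word][state] holds the best token currently in that state.
    tokens = {w: [(NEG, [])] * n for w, n in models.items()}
    exit_token = (0.0, [])               # best token leaving any word so far
    for t in range(T):
        new_exit = (NEG, [])
        for w, n in models.items():
            prev, cur = tokens[w], []
            for s in range(n):
                # Best of: stay in s, advance from s-1, or (for s == 0)
                # enter the word with the current best exit token.
                best = prev[s]
                if s > 0 and prev[s - 1][0] > best[0]:
                    best = prev[s - 1]
                if s == 0 and exit_token[0] > best[0]:
                    best = (exit_token[0], exit_token[1] + [w])
                cur.append((best[0] + log_obs_fn(t, w, s), best[1]))
            tokens[w] = cur
            if cur[-1][0] > new_exit[0]:  # token leaving the final state
                new_exit = cur[-1]
        exit_token = new_exit
    return exit_token                     # (log score, word sequence)
```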

5.
A new asynchronous whole-word/articulatory-feature speech recognition model based on a dynamic Bayesian network (DBN), AWA-DBN, is constructed, in which each word is described by the motion of its articulatory features; the conditional probability distributions of the articulatory-feature nodes and the asynchrony-checking nodes are defined. Recognition experiments on the standard Aurora 5.0 digit corpus show that, compared with the whole-word-state DBN (WS-DBN, each word consisting of a fixed number of whole-word states) and the whole-word-phoneme DBN (WP-DBN, each word consisting of its corresponding phoneme sequence), WS-DBN achieves the highest recognition rate but is only suitable for small-vocabulary isolated-word recognition, whereas AWA-DBN and WP-DBN can model large-vocabulary continuous speech, with AWA-DBN offering higher recognition accuracy and robustness than WP-DBN.

6.
To address coarticulation in continuous speech, a single-stream, within-word context-dependent triphone DBN (SS-DBN-TRI) model and a cross-word extended single-stream context-dependent triphone DBN (SS-DBN-TRI-CON) model are proposed. SS-DBN-TRI improves on the single-stream DBN (SS-DBN) model proposed by Bilmes by replacing the monophone nodes with within-word context-dependent triphone nodes: each word is composed of its corresponding triphone units, and the triphone units are tied to the observation vectors. SS-DBN-TRI-CON, built on SS-DBN, adds nodes for the preceding and following phonemes of the current phoneme to form a new cross-word triphone variable node; this new triphone node is tied to the observation vectors and modeled with Gaussian mixture models. Experiments on a continuous digit speech database show that SS-DBN-TRI-CON achieves the best recognition performance.

7.
An audio-visual two-stream speech recognition model based on articulatory features
An audio-visual two-stream dynamic Bayesian network (DBN) speech recognition model based on articulatory features is constructed; the conditional probability relations of the nodes and the asynchrony constraints between articulatory features are defined. Recognition experiments on an audio-visual connected-digit speech database compare the model with audio-only and video-only single-stream DBN models under different signal-to-noise ratios. The results show that at low SNR the articulatory-feature audio-visual two-stream model performs best, and that its recognition rate degrades gently as noise increases, indicating strong robustness to noise and suitability for speech recognition in low-SNR environments.
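
Multi-stream DBN/HMM systems commonly fuse the streams with weighted log-likelihoods; the sketch below shows that device. The SNR-to-weight mapping is purely illustrative (the paper does not specify one), and the exponents sum to 1 so the streams stay comparable.

```python
import numpy as np

def fused_log_likelihood(log_audio, log_video, snr_db):
    """Weighted two-stream score fusion: per-frame audio and video
    log-likelihood arrays are combined with stream exponents. The
    SNR-to-weight rule below is an illustrative guess."""
    w_audio = float(np.clip(snr_db / 20.0, 0.1, 0.9))  # trust audio when clean
    return w_audio * log_audio + (1.0 - w_audio) * log_video
```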

8.
An audio-visual two-stream dynamic Bayesian network (DBN) speech recognition model based on articulatory features, AFAV_DBN, is constructed, and the conditional probability relations of its nodes are defined so that the articulatory-feature states can change asynchronously. Recognition experiments on an audio-visual speech database show that, by adjusting the asynchrony constraints between articulatory features, the AFAV_DBN model achieves higher recognition rates than state-based synchronous and asynchronous DBN models and the audio-only single-stream model, and with respect to noise it also…

9.
Speech signals produce phoneme features of varying duration as they propagate, and these features affect recognition accuracy. To address this, a Multi-core Convolution Fusion Network (MCFN) is proposed to normalize phoneme features of different lengths, and the normalized features are used to train the recognition model. In addition, a Subspace Gaussian Mixture Model (SGMM) is used to incorporate generic-speaker speech information into the model, reducing the impact of corpus sparsity. Evaluation on the Thchs30 and ST-CMDS datasets shows that the MCFN-based BLSTM-CTC speech recognition model achieves a lower word error rate (WER) than conventional recognition models.
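
A plausible reading of the multi-kernel convolution fusion idea is parallel 1-D convolutions with different receptive fields whose outputs are fused into a fixed-width feature; the PyTorch sketch below follows that reading, with all layer sizes and kernel widths assumed rather than taken from the paper.

```python
import torch
import torch.nn as nn

class MultiKernelConvFusion(nn.Module):
    """Sketch in the spirit of MCFN: parallel 1-D convolutions with
    different kernel widths capture phoneme cues of different durations;
    their outputs are concatenated and projected to a fixed-size feature."""

    def __init__(self, in_dim=39, out_dim=256, kernels=(3, 5, 9)):
        super().__init__()
        branch_dim = out_dim // len(kernels)
        self.branches = nn.ModuleList(
            nn.Conv1d(in_dim, branch_dim, k, padding=k // 2)
            for k in kernels)
        self.proj = nn.Conv1d(branch_dim * len(kernels), out_dim, 1)

    def forward(self, x):             # x: (batch, time, in_dim)
        x = x.transpose(1, 2)         # -> (batch, in_dim, time)
        y = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        return self.proj(y).transpose(1, 2)   # (batch, time, out_dim)

feats = torch.randn(4, 200, 39)       # 4 utterances, 200 frames each
print(MultiKernelConvFusion()(feats).shape)   # torch.Size([4, 200, 256])
```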

10.
To explore the role of intelligent speech technology in English pronunciation learning, automatic phoneme-level mispronunciation detection was studied for English sentences read aloud by Chinese speakers. First, 900 read English sentences were recorded from 45 speakers, and two experts annotated the phoneme-level pronunciation errors in detail. A phoneme mispronunciation detection system for read sentences was then built on speech recognition technology. Targeting the two most common problems in Chinese learners' English pronunciation, substitutions and deletions, phoneme-specific detection thresholds and a constrained phoneme-alignment recognition network were proposed, respectively, to optimize the detection system and significantly improve its performance. The final system reaches 49% recall and 52% precision, close to the inter-expert performance of 59% precision at 69% recall.
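
The phoneme-specific threshold idea can be sketched as follows. The goodness-of-pronunciation (GOP) style scores, the threshold values, and the function name are placeholders; the paper's actual scoring and threshold tuning are not reproduced here.

```python
def detect_mispronunciations(gop_scores, thresholds, default=-2.0):
    """Flag phonemes whose pronunciation score falls below a
    phoneme-specific threshold, mirroring the idea of per-phoneme
    error-detection thresholds.

    gop_scores: [(phone, score), ...]; thresholds: {phone: threshold}."""
    return [(phone, score < thresholds.get(phone, default))
            for phone, score in gop_scores]

scores = [("th", -3.1), ("ih", -0.4), ("s", -1.0)]
thr = {"th": -2.5, "ih": -1.5}   # would be tuned per phoneme on held-out data
print(detect_mispronunciations(scores, thr))
# [('th', True), ('ih', False), ('s', False)]
```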

11.
A hybrid phoneme recognition method combining an improved CP network with HMMs
A hybrid phoneme recognition method is proposed that combines an improved counter-propagation (CP) neural network with hidden Markov models (HMMs). Its distinguishing feature is that an improved CP network, with supervised learning vector quantization (LVQ) and dynamic node allocation, generates the codebook of a discrete-HMM phoneme recognition system. The codebook in the resulting hybrid system is thus in effect a high-performance classifier with strong discriminative power, trained by the supervised LVQ algorithm, which means that, before the speech signal is modeled by the HMMs, the observation sequences produced by the codebook…
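
The supervised LVQ component can be illustrated with a bare LVQ1 loop; the dynamic node allocation of the improved CP network is omitted, and all data, sizes, and learning-rate values below are toy assumptions.

```python
import numpy as np

def lvq1(frames, labels, codebook, code_labels, lr=0.05, epochs=10):
    """Supervised LVQ1 update of a VQ codebook: each codeword is pulled
    toward frames of its own phoneme class and pushed away from frames
    of other classes."""
    codebook = codebook.copy()
    for _ in range(epochs):
        for x, y in zip(frames, labels):
            i = np.argmin(np.linalg.norm(codebook - x, axis=1))  # winner
            sign = 1.0 if code_labels[i] == y else -1.0
            codebook[i] += sign * lr * (x - codebook[i])
    return codebook

rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 12))        # toy cepstral frames
labels = rng.integers(0, 4, size=200)      # toy phoneme classes
codebook = rng.normal(size=(16, 12))
code_labels = rng.integers(0, 4, size=16)
trained = lvq1(frames, labels, codebook, code_labels)
# Quantize frames into discrete HMM observation symbols.
symbols = np.argmin(np.linalg.norm(
    frames[:, None, :] - trained[None], axis=2), axis=1)
```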

12.
Mandarin speech recognition has achieved considerable results, but China's regional variation, where accents differ every few miles, leaves recognition systems with low accuracy and poor performance on dialects. To address this shortcoming, an HTK-based isolated-word recognition system for the Hengyang dialect was built. Using the HTK 3.4.1 toolkit with phonemes as the basic recognition unit, the system extracts 39-dimensional Mel-frequency cepstral coefficient (MFCC) features, builds hidden Markov models (HMMs), and uses the Viterbi algorithm for model training and matching to recognize isolated Hengyang-dialect words. Comparative experiments evaluate the system under different phoneme models and different numbers of Gaussian mixtures. The results show that combining 39-dimensional MFCCs and 5 Gaussian mixtures with the HMM greatly improves system performance.
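
The 39-dimensional feature is the standard 13 MFCCs plus deltas and accelerations. The sketch below approximates that HTK-style setup with librosa rather than with HCopy; the 25 ms window and 10 ms hop are the usual values, assumed here.

```python
import librosa
import numpy as np

def mfcc_39(wav_path):
    """Compute 39-dimensional MFCC features: 13 static coefficients plus
    delta and delta-delta, approximated with librosa."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                n_fft=400, hop_length=160)  # 25 ms / 10 ms
    delta = librosa.feature.delta(mfcc)
    delta2 = librosa.feature.delta(mfcc, order=2)
    return np.vstack([mfcc, delta, delta2]).T   # (frames, 39)
```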

13.
We present a new framework for joint analysis of throat and acoustic microphone (TAM) recordings to improve throat microphone only speech recognition. The proposed analysis framework aims to learn joint sub-phone patterns of throat and acoustic microphone recordings through a parallel branch HMM structure. The joint sub-phone patterns define temporally correlated neighborhoods, in which a linear prediction filter estimates a spectrally rich acoustic feature vector from throat feature vectors. Multimodal speech recognition with throat and throat-driven acoustic features significantly improves throat-only speech recognition performance. Experimental evaluations on a parallel TAM database yield benchmark phoneme recognition rates for throat-only and multimodal TAM speech recognition systems as 46.81% and 60.69%, respectively. The proposed throat-driven multimodal speech recognition system improves phoneme recognition rate to 52.58%, a significant relative improvement with respect to the throat-only speech recognition benchmark system.
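
The per-class linear prediction of acoustic features from throat features might be sketched as below, assuming the sub-phone class labels have already been produced by the parallel-branch HMM; the class-conditional `LinearRegression` stand-in and all names are assumptions, not the paper's filter design.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

class ThroatToAcousticMapper:
    """Within each joint sub-phone class, learn a linear prediction of
    acoustic feature vectors from throat feature vectors."""

    def fit(self, throat, acoustic, classes):
        self.models = {}
        for c in np.unique(classes):
            m = classes == c
            self.models[c] = LinearRegression().fit(throat[m], acoustic[m])
        return self

    def predict(self, throat, classes):
        dim = next(iter(self.models.values())).coef_.shape[0]
        out = np.zeros((len(throat), dim))
        for c, model in self.models.items():
            m = classes == c
            if m.any():
                out[m] = model.predict(throat[m])
        return out

rng = np.random.default_rng(1)
throat = rng.normal(size=(300, 13))          # toy throat-microphone features
acoustic = throat @ rng.normal(size=(13, 13))  # toy acoustic targets
classes = rng.integers(0, 5, size=300)       # sub-phone class labels
mapper = ThroatToAcousticMapper().fit(throat, acoustic, classes)
est = mapper.predict(throat, classes)        # throat-driven acoustic features
```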

14.
15.
A viseme is an audio-visual model of mouth shape commonly used in speech-driven talking-head animation. This paper builds viseme hidden Markov models (HMMs) for the speech recognition system that drives the talking head, called the front-mapping system. To obtain more accurate models and raise the recognition rate, triseme models that account for the mouth-shape context of articulation are introduced. However, trisemes sharply increase the number of models, leaving the training data severely insufficient. Decision-tree state tying is used to alleviate this problem, together with a method for designing the tree's visual questions based on mouth-shape similarity. For comparison with the viseme system, a recognition system with phonemes as the basic HMM units is also built. As an evaluation criterion, a weighted recognition rate for objectively assessing talking-head animation is used. Experiments show that the viseme-based front-mapping system provides more reasonable mouth shapes for the talking head.

16.
Building a large vocabulary continuous speech recognition (LVCSR) system requires many hours of segmented and labelled speech data. Arabic, like many other low-resourced languages, lacks such data, but automatic segmentation has proved a good alternative for making these resources available. In this paper, we suggest combining hidden Markov models (HMMs) and support vector machines (SVMs) to segment and label the speech waveform into phoneme units: the HMMs generate the sequence of phonemes and their frontiers, and the SVM refines the frontiers and corrects the labels. The resulting segmented and labelled units may serve as a training set for speech recognition applications. The HMM/SVM segmentation algorithm is assessed using both the hit rate and the word error rate (WER); the resulting scores are compared with those of manual segmentation and of the well-known embedded learning algorithm. The results show that the speech recognizer built on the HMM/SVM segmentation outperforms the one built on embedded-learning segmentation by about 0.05% WER, even in a noisy background.
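
The HMM-proposes / SVM-refines division of labor can be sketched as a windowed search around each HMM frontier: the SVM scores frames near the proposed boundary and the frontier moves to the highest-scoring one. The frame features, window size, and training data below are placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.svm import SVC

def refine_boundary(frames, hmm_boundary, clf, window=5):
    """Refine one HMM-proposed phoneme frontier with a trained SVM.

    frames: (T, d) per-frame features; hmm_boundary: HMM frontier index;
    clf: binary SVM scoring boundary vs. non-boundary frames."""
    lo = max(0, hmm_boundary - window)
    hi = min(len(frames), hmm_boundary + window + 1)
    scores = clf.decision_function(frames[lo:hi])   # boundary confidence
    return lo + int(np.argmax(scores))

# Training sketch with placeholder labels: frames near manually marked
# boundaries would be the positives.
X = np.random.randn(500, 13)
y = np.arange(500) % 2
clf = SVC(kernel="rbf").fit(X, y)
print(refine_boundary(np.random.randn(100, 13), 42, clf))
```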

17.
This paper presents a real-time speech-driven talking-face system with low computational complexity and a smooth visual appearance. A novel embedded confusable-set scheme generates an efficient phoneme-viseme mapping table: exploiting the fact that visemes are visually ambiguous, viseme similarity is first estimated with a histogram distance, and phonemes are then grouped using the Houtgast similarity approach. The resulting table simplifies the mapping problem and improves viseme classification accuracy. The implemented real-time system comprises: 1) speech signal processing, with SNR-aware speech enhancement for noise reduction and ICA-based feature extraction for robust acoustic feature vectors; 2) recognition-network processing, in which an HMM and a multi-class SVM (MCSVM) are combined for phoneme recognition and viseme classification, since the HMM handles sequential input well while the MCSVM classifies with good generalization, especially from limited samples; the phoneme-viseme mapping table lets the MCSVM assign the HMM's output observation sequence to its viseme class; 3) visual processing, which arranges the viseme lip-shape images in time sequence and renders them more naturally with dynamic alpha blending under different alpha settings. In the experiments, the speech signal processing on noisy speech, compared with clean speech, gained improvements of 1.1% (16.7% to 15.6%) in PER and 4.8% (30.4% to 35.2%) in WER, and the viseme classification error rate fell from 19.22% to 9.37%. Finally, GSM communication between a mobile phone and a PC was simulated to rate the visual quality and the speech-driven impression with mean opinion scores. The method thus reduces the number of visemes and lip-shape images through confusable sets and enables real-time operation.

18.
For non-stationary and quasi-stationary signals, the wavelet transform has proved to be an effective tool for time-frequency analysis, and in recent years it has been used for feature extraction in speech recognition. Here a new filter structure using admissible wavelet packet analysis is proposed for Hindi phoneme recognition. These filters have frequency-band spacing similar to the auditory Equivalent Rectangular Bandwidth (ERB) scale, whose central frequencies are equally distributed along the frequency response of the human cochlea. The phoneme recognition performance of the proposed feature is compared with standard baseline features and with 24-band admissible wavelet packet features using a hidden Markov model (HMM) classifier. The proposed feature performs better than conventional features for Hindi consonant recognition. To evaluate its robustness in noisy environments, the NOISEX-92 database is used.
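
A simplified version of wavelet-packet sub-band features can be sketched with PyWavelets; note that this sketch keeps the complete level-4 tree (16 uniform bands) instead of the ERB-spaced admissible subset the paper proposes.

```python
import numpy as np
import pywt

def wavelet_packet_features(frame, wavelet="db4", level=4):
    """Sub-band log-energy features from a wavelet packet decomposition
    of one speech frame, using the full level-4 tree for simplicity."""
    wp = pywt.WaveletPacket(data=frame, wavelet=wavelet,
                            mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="freq")   # sub-bands low -> high
    energies = np.array([np.sum(np.square(n.data)) for n in nodes])
    return np.log(energies + 1e-10)

frame = np.random.randn(400)                    # one 25 ms frame at 16 kHz
print(wavelet_packet_features(frame).shape)     # (16,)
```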

