Found 19 similar documents; search took 140 ms
1.
To achieve audio-visual speech recognition together with accurate phoneme segmentation of the audio and video streams, this paper proposes a new multi-stream asynchronous triphone dynamic Bayesian network (MM-ADBN-TRI) model. The model describes the asynchrony between the audio and video streams at the word level; both streams adopt a word-triphone-state-observation hierarchical structure, with the triphone as the recognition unit, capturing coarticulation in continuous speech. Experimental results show that the model performs well in audio-visual speech recognition, in phoneme segmentation of the audio and video streams, and in determining the asynchrony between the two streams.
2.
The pronunciation dictionary is a key component of a speech recognition system; an insufficient vocabulary leads to a high out-of-vocabulary (OOV) rate and degrades recognition performance. This paper proposes a new method for automatic dictionary expansion that recovers OOV words from word pronunciations instead of requiring large amounts of text data to acquire new words. First, new word-pronunciation pairs are obtained from the complement of the dictionary's finite-state transducer (FST) representation together with P2G conversion. A two-step verification strategy, pronunciation verification followed by word verification, then filters out erroneous entries. Finally, the generated words are added to the language model by linear interpolation. The method was tested on English and Czech continuous speech recognition tasks. Experiments show that dictionary expansion effectively reduces the system OOV rate: the English large-vocabulary continuous speech recognition (LVCSR) system improves by about 9% in continuous speech recognition and about 9.7% in keyword search relative to the baseline, and the Czech system improves by 2.3% and 10.0%, respectively.
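The final step above, folding recovered words into the language model by linear interpolation, can be sketched for the unigram case; the function name and the toy probabilities below are illustrative, not taken from the paper:

```python
def interpolate_lm(p_base, p_new, lam=0.5):
    """Linearly interpolate two unigram language models given as
    word -> probability dicts: P(w) = lam*P_base(w) + (1-lam)*P_new(w).
    Words missing from one model contribute zero probability there."""
    vocab = set(p_base) | set(p_new)
    return {w: lam * p_base.get(w, 0.0) + (1 - lam) * p_new.get(w, 0.0)
            for w in vocab}

# A recovered OOV word ("budget") enters the merged model with
# probability mass controlled by the interpolation weight.
base = {"the": 0.6, "cat": 0.4}
new  = {"the": 0.5, "budget": 0.5}   # contains a recovered OOV word
merged = interpolate_lm(base, new, lam=0.8)
```

Because both inputs are proper distributions, the interpolated model sums to one as well; in practice the weight would be tuned on held-out data.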
3.
4.
To address the problem of recognizing embedded English in Mandarin continuous speech recognition, a mixed Chinese-English model is built. Based on an analysis of existing speech recognition systems and on phonetic prior knowledge, an acoustic model mixing Mandarin main vowels with English phoneme sequences is proposed. The acoustic model is first trained under the maximum likelihood criterion and then discriminatively trained with the minimum phone error criterion to obtain the final model. Results on the test set show that...
5.
Because existing weighted finite-state transducer (WFST) decoding networks carry no exact word-end markers, current lattice generation algorithms either lack precise word-end time stamps or produce only state- or phone-level lattices, which cannot be used for keyword search. This paper proposes a word lattice generation algorithm for a static WFST decoder. It first analyzes, in theory, the convertibility between phone lattices and word lattices under WFST decoding; it then proposes a dynamic phone-matching method over the dictionary to solve word-end time alignment in the WFST network; finally, it generates the word lattice by token-passing traversal. To reduce computation, a pruning algorithm is introduced into the token-passing process, so that converting the phone lattice to a word lattice takes less than 3% of the decoding time. The resulting lattices can be used not only for language-model rescoring but also, because they contain exact word-end time stamps, directly in keyword search systems. Experimental results show that the algorithm is computationally efficient; compared with lattices from an existing dynamic decoder, its lattices contain more decoding information and achieve better performance in both LVCSR rescoring and keyword search.
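The token-passing traversal with pruning described above can be illustrated on a toy weighted graph standing in for the decoding network; the graph, arc costs, and beam value here are invented for illustration only:

```python
import heapq

def token_pass(arcs, start, end, beam=5.0):
    """Token-passing search over a small weighted graph (a stand-in for a
    WFST decoding network).  arcs: node -> list of (next_node, label, cost).
    A token is pruned when its cost exceeds the best cost seen at its
    target node by more than `beam`."""
    best = {start: 0.0}
    queue = [(0.0, start, [])]          # token = (cost, node, label path)
    results = []
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == end:
            results.append((cost, path))
            continue
        for nxt, label, c in arcs.get(node, []):
            nc = cost + c
            if nc > best.get(nxt, float("inf")) + beam:
                continue                # prune: too far behind the best token
            best[nxt] = min(best.get(nxt, float("inf")), nc)
            heapq.heappush(queue, (nc, nxt, path + [label]))
    return sorted(results)

arcs = {0: [(1, "a", 1.0), (1, "b", 3.0)],
        1: [(2, "c", 1.0)]}
paths = token_pass(arcs, 0, 2)          # all surviving paths, best first
```

With the default beam both paths survive; tightening the beam to 1.0 prunes the costlier token at node 1, leaving a single path, which is the trade-off between lattice richness and computation that the pruning step controls.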
6.
Combining the phonetic features and semantic information of Uyghur, and building on a large telephone speech corpus, a continuous phoneme recognition platform for Uyghur was implemented with the Hidden Markov Model Toolkit (HTK). First, a fairly large telephone speech corpus was recorded and annotated according to specified technical criteria. With the phoneme as the recognition unit, an HMM (Hidden Markov Model) acoustic model was trained for each phoneme; the input speech was then recognized, and results were obtained under different numbers of Gaussian mixtures. Finally, recognition rates for the 32 phonemes were collected and analyzed, laying the groundwork for further improving the recognition rate.
7.
8.
Combining acoustic models based on different units in Mandarin continuous speech recognition (cited by 1: 0 self-citations, 1 other)
This paper studies the combination of acoustic models trained on different acoustic units. In Mandarin continuous speech recognition, popular units include context-dependent initial/final (shengmu/yunmu) units and phoneme units. Experiments show that some Mandarin syllables are recognized more accurately with initial/final models and others with phoneme models. A method for combining the two acoustic models is proposed: both models are used during recognition, while the model responsible for low accuracy on a given syllable is avoided. Experiments show that with this method, the syllable error rate falls by 9.60% and 6.10% relative to the phoneme models and the initial/final models, respectively.
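The selection idea, keeping the model known to recognize a given syllable better and avoiding the one causing low accuracy, can be sketched roughly as below; the syllables, scores, and preference table are hypothetical, and a real system would make this choice inside the decoder rather than after it:

```python
def combine_by_unit(if_scores, ph_scores, prefer):
    """For each syllable keep the score from the model that held-out
    experiments showed to recognize it better.  if_scores/ph_scores:
    syllable -> acoustic score (higher is better); prefer: syllable ->
    "initial_final" or "phoneme"."""
    return {syl: (if_scores[syl] if prefer.get(syl) == "initial_final"
                  else ph_scores[syl])
            for syl in if_scores}

ifm = {"ba": -10.0, "shi": -12.0}    # initial/final model scores
phm = {"ba": -11.5, "shi": -11.0}    # phoneme model scores
prefer = {"ba": "initial_final", "shi": "phoneme"}
picked = combine_by_unit(ifm, phm, prefer)
```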
9.
Research on a speech recognition system based on a telephone private branch exchange (cited by 3: 0 self-citations, 3 others)
This paper develops a voice-controlled command switching system for a telephone private branch exchange. The system performs speaker-independent automatic recognition of continuous command speech over small-to-medium vocabularies. Statistics on the command sentences in use were collected and a corresponding recognition grammar network was generated. The recognizer is trained with embedded re-estimation of composite models built from subword models, and decoding uses an improved token-passing Viterbi algorithm to raise recognition performance. The paper compares the effect of different speech feature parameters and of the number of hidden Markov model states on telephone speech recognition accuracy. A rejection module for the recognition system was also developed; without rejection...
10.
To overcome the slow processing, heavy computation, and awkward operation of traditional helium speech processing techniques, a machine-learning approach to helium speech recognition is proposed: a deep network learns high-dimensional information and extracts multiple features, avoiding overfitting while achieving a low word error rate (WER) and fast convergence. Databases of isolated helium speech words and continuous helium speech were built; after preprocessing, the extracted features include formant features, pitch-period features, and filter-bank (FBank) features. These features are fed to an acoustic model composed of a deep convolutional neural network (DCNN) and connectionist temporal classification (CTC) to map speech to Pinyin, and a Transformer language model then produces the Chinese character output. Compared with a model using only FBank features, the isolated-word model using formant, pitch-period, and FBank features reduces WER by 7.91%, and the continuous-speech model by 14.95%. The best WER is 1.53% for isolated helium speech words and 36.89% for continuous helium speech, showing that the proposed method recognizes helium speech effectively.
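A DCNN+CTC acoustic model emits frame-wise label posteriors, so turning them into a Pinyin token sequence requires CTC decoding. A minimal greedy-decoding sketch (collapse consecutive repeats, then drop blanks), with invented frame ids:

```python
def ctc_greedy_decode(frame_ids, blank=0):
    """Greedy CTC decoding: given the best label id per frame, collapse
    consecutive repeats, then remove blank symbols.  This is the standard
    CTC post-processing rule, not the paper's specific decoder."""
    out, prev = [], None
    for i in frame_ids:
        if i != prev and i != blank:
            out.append(i)
        prev = i
    return out

# frame-wise argmax ids from a toy acoustic model; 0 is the CTC blank
seq = ctc_greedy_decode([0, 3, 3, 0, 3, 5, 5, 0])
```

Note how the blank between the two runs of 3 preserves the repeated token: the result is `[3, 3, 5]`, while adjacent identical labels without a blank would be merged.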
11.
Audio-visual multimodal modeling has been shown to perform well in speech separation tasks. This paper proposes a speech separation model that improves an existing time-domain audio-visual joint speech separation algorithm by strengthening the coupling between the audio and video streams. To address the loose coupling of existing audio-visual separation models, the authors propose an end-to-end separation model that fuses speech features with additionally input visual features multiple times in the time domain and adds longitudinal weight sharing. Experiments on the GRID dataset show gains of 1.2 dB and 0.4 dB over the audio-only time-domain convolutional separation network (Conv-TasNet) and the audio-visual Conv-TasNet, respectively.
12.
13.
Audio-visual speech recognition (AVSR) using acoustic and visual signals of speech has received attention recently because of its robustness in noisy environments. An important issue in a decision-fusion-based AVSR system is determining an appropriate integration weight for the speech modalities, so that they are integrated to ensure better performance under various SNR conditions. Generally, the integration weight is calculated from the relative reliability of the two modalities. This paper investigates the effect of the reliability measure on integration weight estimation and proposes a genetic algorithm (GA) based reliability measure that uses an optimum number of best recognition hypotheses, rather than the N best recognition hypotheses, to determine an appropriate integration weight. Further improvement in recognition accuracy is achieved by optimizing the measured integration weight with the genetic algorithm. The performance of the proposed integration weight estimation scheme is demonstrated for isolated word recognition (incorporating commonly used functions in mobile phones) via a multi-speaker database experiment. The results show that the proposed schemes improve robust recognition accuracy over the conventional unimodal systems and a couple of related existing bimodal systems, namely, the baseline reliability-ratio-based system and the N-best recognition hypotheses reliability-ratio-based system, under various SNR conditions.
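The baseline reliability-ratio weight that the paper improves on can be sketched as follows, using a simple dispersion measure over N-best log-likelihoods; the GA optimization step is omitted and the scores are invented:

```python
def reliability(nbest):
    """Dispersion-based reliability: mean gap between the top N-best
    log-likelihood and the remaining entries.  A nearly flat N-best list
    (typical of a noisy stream) yields a small value."""
    s = sorted(nbest, reverse=True)
    return sum(s[0] - x for x in s[1:]) / (len(s) - 1)

def integration_weight(audio_nbest, visual_nbest):
    """Relative-reliability weight for the audio stream:
    lambda = R_a / (R_a + R_v); the visual stream gets 1 - lambda."""
    ra, rv = reliability(audio_nbest), reliability(visual_nbest)
    return ra / (ra + rv)

audio  = [-10.0, -14.0, -18.0]   # well separated: reliable at this SNR
visual = [-10.0, -10.5, -11.0]   # nearly flat: unreliable
lam = integration_weight(audio, visual)
```

Here the well-separated audio N-best list earns most of the weight (lambda ≈ 0.89); as noise flattens the audio scores, the weight shifts toward the visual stream.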
14.
IEEE Signal Processing Magazine, 2006, 23(2): 69-78
This paper describes an indexing system that automatically creates metadata for multimedia broadcast news content by integrating audio, speech, and visual information. The automatic multimedia content indexing system includes acoustic segmentation (AS), automatic speech recognition (ASR), topic segmentation (TS), and video indexing features. The new spectral-based features and smoothing method in the AS module improved speech detection performance on the audio stream of the input news content. In the speech recognition module, automatic selection of acoustic models achieved both a low WER, as with parallel recognition using multiple acoustic models, and fast recognition, as with a single acoustic model. The TS method using word concept vectors achieved more accurate results than the conventional method using local word frequency vectors. The information integration module integrates results from the AS module, TS module, and SC module. Story boundary detection accuracy was improved by combining the TS results with the AS results and the SC results, compared with the TS results alone.
15.
In recent years, affective computing has become key to breakthroughs in human-computer interaction, and emotion recognition, an important part of affective computing, has received wide attention. This paper implements a facial expression recognition system based on ResNet18 and a speech emotion recognition model based on the HGFM architecture, tuning parameters to obtain well-performing models. On this basis, a multimodal emotion recognition system covering video and audio signals is built with two multimodal fusion strategies, feature-level fusion and decision-level fusion, demonstrating the performance advantage of multimodal emotion recognition. Under both fusion strategies, the audio-visual emotion recognition model improves accuracy over the video-only and audio-only modalities, confirming that a multimodal model usually outperforms the best unimodal model. The implemented models achieve good emotion recognition performance: the fused audio-visual bimodal model reaches an accuracy of 76.84%, a 3.50% improvement over the best existing model, giving it a performance advantage over existing audio-visual emotion recognition models.
16.
In daily life, visual events are usually accompanied by sound, which suggests a latent correspondence between the video stream and the audio, referred to here as the joint representation of audio-visual synchrony. This representation is learned by fusing the video stream with the audio and training the designed neural network to predict whether the two are temporally synchronized. Unlike traditional audio-visual fusion methods, this work introduces an attention mechanism: the Pearson correlation coefficient between video and audio features weights the video stream in both the temporal and spatial dimensions, tying the video stream more closely to the audio. Based on the learned joint representation, the class activation map method is further used to localize the sound source in the video. Experimental results show that the proposed attention-based synchrony detection model better judges whether a given video's audio and visual streams are synchronized, i.e., it learns a better joint representation of audio-visual synchrony, and can thus localize video sound sources effectively.
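The Pearson-correlation weighting can be illustrated with a toy per-channel version; a real model weights feature maps over temporal and spatial dimensions jointly, so treat this only as a sketch with invented features:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def attention_weights(video_feats, audio_feat):
    """Weight each video feature channel by its (clipped, non-negative)
    Pearson correlation with the audio feature over time, then normalize;
    channels that track the audio dominate the fused representation."""
    raw = [max(0.0, pearson(v, audio_feat)) for v in video_feats]
    total = sum(raw) or 1.0
    return [r / total for r in raw]

audio = [0.0, 1.0, 2.0, 3.0]
video = [[0.0, 1.0, 2.0, 3.0],    # tracks the audio: gets high weight
         [3.0, 1.0, 2.0, 0.0]]    # anti-correlated: clipped to zero
w = attention_weights(video, audio)
```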
17.
Pardeep Sangwan, Deepti Deshwal, Divya Kumar, Saurabh Bhardwaj. International Journal of Communication Systems, 2023, 36(12): e4418
The representation of good audio features is the first and foremost requirement for improving the identification performance of any system. Most representation learning approaches are based on connectionist systems that learn and extract latent features from the speech data. This research work presents a hybrid feature extraction approach that integrates Mel-Frequency Cepstral Coefficients (MFCC) features with Shifted Delta Cepstral (SDC) coefficient features, which are further passed to a Deep Belief Network (DBN) to yield new feature representations of the speech signals. The DBN is used for unsupervised feature learning on the extracted MFCC-SDC acoustic features. A 3-layer Back Propagation Neural Network (BPNN) classifier, initialized from the learning outcomes of the DBN's hidden layers, identifies the language of the uttered speech. The efficiency of the proposed approach is evaluated by simulating several experimental algorithms on a user-defined database of isolated words in four languages, namely, Tamil, Malayalam, Hindi, and English, in MATLAB. The results obtained for the proposed hybrid MFCC-SDC-DBN approach are promising. The approach is also compared with the baseline MFCC-SDC feature extraction approach using traditional acoustic features and a BPNN classifier. The accuracy obtained with the proposed approach is 98.1%, whereas that of the baseline approach is 82%, an overall improvement of 16.1%.
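The Shifted Delta Cepstral stacking that the MFCC-SDC front end relies on can be sketched as follows, using the usual N-d-P-k parameterization (N implied by the cepstral dimension); the toy one-dimensional cepstra are illustrative:

```python
def sdc(cepstra, d=1, P=3, k=2):
    """Shifted Delta Cepstral features: for frame t, stack k delta vectors
    c[t + i*P + d] - c[t + i*P - d], i = 0..k-1.  Frames without enough
    context on either side are simply skipped in this sketch."""
    out = []
    T = len(cepstra)
    for t in range(T):
        blocks = []
        for i in range(k):
            lo, hi = t + i * P - d, t + i * P + d
            if lo < 0 or hi >= T:
                break
            blocks.extend(x2 - x1 for x1, x2 in zip(cepstra[lo], cepstra[hi]))
        if len(blocks) == k * len(cepstra[0]):
            out.append(blocks)
    return out

# toy one-dimensional "MFCC" track, one coefficient per frame
c = [[0.0], [1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]
feats = sdc(c, d=1, P=3, k=2)
```

On this linearly increasing track every shifted delta equals 2.0, and only the two frames with full context on both sides survive; with real MFCC vectors each output frame concatenates k deltas of the full cepstral dimension, giving the wide temporal span that makes SDC useful for language identification.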
18.
Makhoul J., Kubala F., Leek T., Daben Liu, Long Nguyen, Schwartz R., Srivastava A. Proceedings of the IEEE, 2000, 88(8): 1338-1353
With the advent of essentially unlimited data storage capabilities and with the proliferation of the use of the Internet, it becomes reasonable to imagine a world in which it would be possible to access any of the stored information at will with a few keystrokes or voice commands. Since much of this data will be in the form of speech from various sources, it becomes important to develop the technologies necessary for indexing and browsing such audio data. This paper describes some of the requisite speech and language technologies and introduces an effort aimed at integrating them into a system, called Rough 'n' Ready, which indexes speech data, creates a structural summarization, and provides tools for browsing the stored data. The technologies highlighted in the paper include speaker-independent continuous speech recognition, speaker segmentation and identification, name spotting, topic classification, story segmentation, and information retrieval. The system automatically segments the continuous audio input stream by speaker, clusters audio segments from the same speaker, identifies speakers known to the system, and transcribes the spoken words. It also segments the input stream into stories based on their topic content and locates the names of persons, places, and organizations. These structural features are stored in a database and are used to construct highly selective search queries for retrieving specific content from large audio archives.
19.
Traditional example-based audio retrieval algorithms use sequential indexing, so retrieval requires traversing the entire database, causing unacceptable waiting times. In contrast to such sequential indexing, this paper proposes an audio retrieval algorithm based on an inverted index. The method first uses a super-vector composed of multiple audio features together with a multi-level audio segmentation method to cut the continuous audio stream into short segments with small feature fluctuation. A pre-trained audio dictionary then converts the short-segment sequence into a sequence of audio words that represent the audio content, over which an inverted index is built. At retrieval time, the user's query is converted into audio words, and the inverted index locates candidate passages directly without traversing the database; candidates are ranked by content similarity to the query, and the ranked list is returned as the result. Simulation experiments evaluate match ranking, the proportion of same-class results, localization accuracy, and retrieval time; the results show that the algorithm achieves 92.58% retrieval accuracy with an average retrieval time of 1.101 s.
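The inverted-index retrieval scheme, mapping audio words to passages so that candidates are located without a database scan, can be sketched as below; the audio-word ids and the similarity score (fraction of query words present) are simplified stand-ins for the paper's features and similarity measure:

```python
from collections import defaultdict

def build_inverted_index(segments):
    """segments: list of audio-word sequences, one per stored passage.
    Returns audio_word -> set of passage ids, so a query can locate its
    candidates directly instead of scanning the whole database."""
    index = defaultdict(set)
    for pid, words in enumerate(segments):
        for w in words:
            index[w].add(pid)
    return index

def retrieve(index, segments, query):
    """Collect candidate passages from the inverted index, then rank them
    by a simple content-similarity score: the fraction of query audio
    words present in the passage."""
    candidates = set().union(*(index.get(w, set()) for w in query))
    scored = [(sum(w in segments[pid] for w in query) / len(query), pid)
              for pid in candidates]
    return [pid for score, pid in sorted(scored, reverse=True)]

db = [["w3", "w7", "w1"], ["w2", "w2", "w9"], ["w7", "w1", "w1"]]
idx = build_inverted_index(db)
ranked = retrieve(idx, db, ["w7", "w1", "w3"])
```

Passage 1 shares no audio words with the query, so it is never touched; passage 0 matches all three query words and outranks passage 2, which matches two.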