Similar Documents
20 similar documents found (search time: 15 ms)
1.
Distant speech capture in lecture halls and auditoriums offers unique challenges in algorithm development for automatic speech recognition. In this study, a new adaptation strategy for distant noisy speech is created by means of phoneme classes. Unlike previous approaches, which adapt the acoustic model to the features, the proposed phoneme-class based feature adaptation (PCBFA) strategy adapts the distant-data features to the existing acoustic model, previously trained on close-microphone speech. The essence of PCBFA is a transformation that makes the distributions of phoneme classes of distant noisy speech similar to those of a close-talk microphone acoustic model in a multidimensional MFCC space. To achieve this, the phoneme classes of distant noisy speech are recognized via artificial neural networks. PCBFA adapts features rather than acoustic models. The main idea behind PCBFA is illustrated with a conventional Gaussian mixture model-hidden Markov model (GMM-HMM), although it can be extended to newer architectures in automatic speech recognition (ASR). The adapted features, together with the new and improved acoustic models produced by PCBFA, are shown to outperform those created by acoustic-model adaptation alone for ASR and keyword spotting. PCBFA offers a powerful new perspective on acoustic modeling of distant speech.
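A minimal numpy sketch of the idea behind PCBFA: shift and scale distant-speech MFCC frames so that each phoneme class's statistics match those of a close-talk reference in MFCC space. The function names, the diagonal-covariance simplification, and the soft posterior weighting are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def fit_class_stats(feats, labels, n_classes):
    """Per-class mean and std of MFCC frames (feats: [T, D], labels: [T])."""
    stats = []
    for c in range(n_classes):
        x = feats[labels == c]
        stats.append((x.mean(axis=0), x.std(axis=0) + 1e-8))
    return stats

def pcbfa_transform(distant_feats, class_posteriors, distant_stats, close_stats):
    """Map distant frames toward the close-talk acoustic space.

    class_posteriors: [T, C] soft phoneme-class assignments, e.g. from a
    neural network classifier as in the abstract.
    """
    adapted = np.zeros_like(distant_feats)
    for c, ((mu_d, sd_d), (mu_c, sd_c)) in enumerate(zip(distant_stats, close_stats)):
        # Whiten under the distant class stats, re-color under close-talk stats.
        z = (distant_feats - mu_d) / sd_d
        adapted += class_posteriors[:, c:c + 1] * (z * sd_c + mu_c)
    return adapted

rng = np.random.default_rng(0)
feats_d = rng.normal(loc=1.0, size=(500, 13))   # stand-in distant MFCCs
feats_c = rng.normal(loc=0.0, size=(500, 13))   # stand-in close-talk MFCCs
labels = rng.integers(0, 3, size=500)
post = np.eye(3)[labels]                        # hard assignments as posteriors
adapted = pcbfa_transform(feats_d, post,
                          fit_class_stats(feats_d, labels, 3),
                          fit_class_stats(feats_c, labels, 3))
print(adapted.mean())                           # pulled toward the close-talk space
```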

2.
To reduce the phoneme recognition error rate of acoustic features in speech recognition systems and improve system performance, a feature extraction method combining a subspace Gaussian mixture model (SGMM) with a deep neural network (DNN) is proposed. The parameter scale of the SGMM is analyzed, and after its computational complexity is reduced, the SGMM is cascaded with a DNN to further improve the phoneme recognition rate. Speech data after nonlinear feature transformation are fed into the model to find the best configuration of the DNN structure, and a more reliable network model is built for learning, training, and feature extraction; system performance is judged by comparing phoneme error rates. Simulation results show that the features extracted by this system clearly outperform those of traditional acoustic models.
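A hedged stand-in for the GMM-to-DNN cascade using plain scikit-learn parts: a GaussianMixture supplies frame posteriors (substituting for the subspace GMM, which sklearn does not provide) that are concatenated with the raw features and fed to a small neural network. The data shapes and the substitution of GaussianMixture for an SGMM are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 13))          # stand-in MFCC frames
y = rng.integers(0, 5, size=2000)        # stand-in phoneme labels

gmm = GaussianMixture(n_components=8, covariance_type="diag",
                      random_state=0).fit(X)
post = gmm.predict_proba(X)              # [T, 8] mixture posteriors

tandem = np.hstack([X, post])            # cascade: GMM output feeds the DNN
dnn = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=200, random_state=0)
dnn.fit(tandem, y)
print("frame accuracy on training data:", dnn.score(tandem, y))
```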

3.
4.
In phoneme recognition experiments, it was found that approximately 75% of misclassified frames were assigned labels within the same broad phonetic group (BPG). While the phoneme can be described as the smallest distinguishable unit of speech, phonemes within a BPG share very similar characteristics and are easily confused, whereas different BPGs, such as vowels and stops, possess very different spectral and temporal characteristics. To accommodate the full range of phonemes, acoustic models of speech recognition systems calculate input features from all frequencies over a large temporal context window. A new phoneme classifier is proposed consisting of a modular arrangement of experts, with one expert assigned to each BPG and focused on discriminating between phonemes within that BPG. Because each BPG has a different temporal and spectral structure, novel feature sets are extracted using mutual information to select a relevant time-frequency (TF) feature set for each expert. To construct a phone recognition system, the output of each expert is combined with a baseline classifier under the guidance of a separate BPG detector. In phoneme recognition experiments on the TIMIT continuous speech corpus, the proposed architecture afforded significant relative error rate reductions of up to 5%.
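A minimal sketch of the mutual-information feature selection described above: for one BPG expert, rank the time-frequency cells of a context window by mutual information with the within-BPG phoneme label and keep the top-k. The data shapes and names are assumptions.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
n_frames, n_bands, context = 3000, 23, 9            # filterbank x context window
X = rng.normal(size=(n_frames, n_bands * context))  # flattened TF patches
y = rng.integers(0, 4, size=n_frames)               # phoneme labels inside one BPG

mi = mutual_info_classif(X, y, random_state=0)      # one score per TF cell
top_k = np.argsort(mi)[::-1][:64]                   # expert-specific TF mask
X_expert = X[:, top_k]                              # features for this expert
print("selected", len(top_k), "of", X.shape[1], "TF cells")
```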

5.
The shapes of speakers' vocal organs change under different emotional states, which causes the emotional acoustic space of short-time features to deviate from the neutral acoustic space and thereby degrades speaker recognition performance. Features deviating greatly from the neutral acoustic space are considered mismatched features, and they negatively affect speaker recognition systems. Emotion variation produces different feature deformations for different phonemes, so it is reasonable to build a finer model to detect mismatched features under each phoneme. However, given the difficulty of phoneme recognition, three types of acoustic class recognition (phoneme classes, a Gaussian mixture model (GMM) tokenizer, and a probabilistic GMM tokenizer) are proposed to replace it. We propose feature pruning and feature regulation methods that process the mismatched features to improve speaker recognition performance. For feature regulation, a strategy of maximizing the between-class distance and minimizing the within-class distance is adopted to train the transformation matrix that regulates the mismatched features. Experiments on the Mandarin affective speech corpus (MASC) show that our feature pruning and feature regulation methods increase the identification rate (IR) by 3.64% and 6.77%, respectively, compared with the baseline GMM-UBM (universal background model) algorithm, and corresponding IR increases of 2.09% and 3.32% are obtained when the methods are applied to the state-of-the-art i-vector algorithm.
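The feature-regulation criterion above, maximize between-class distance while minimizing within-class distance, is exactly what Fisher linear discriminant analysis optimizes, so a hedged sketch can use scikit-learn's LDA as the transformation matrix. Treating LDA as the regulation transform is an assumption; the paper's exact training strategy may differ.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 13))        # mismatched emotional-speech features
y = rng.integers(0, 6, size=1000)      # acoustic-class labels (e.g. GMM tokens)

lda = LinearDiscriminantAnalysis(n_components=5).fit(X, y)
X_regulated = lda.transform(X)         # features after regulation
print(X_regulated.shape)               # (1000, 5)
```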

6.
This paper presents a statistical analysis of formant frequencies, duration, and intensity for each phoneme in manually annotated continuous Uyghur speech at different speaking rates, together with an acoustic analysis of stops and affricates in consonant-vowel structures. By fusing acoustic features such as Mel-frequency cepstral coefficients (MFCCs) and formant frequencies and by modifying the number of model states, the acoustic model for Uyghur phoneme recognition is improved, and the influence of different acoustic features on phoneme recognition is verified. Compared with the baseline system, the improved acoustic model achieves a measurable gain in recognition rate. In addition, phonetic knowledge is used to analyze the sources of easily confused Uyghur phonemes, providing a reference for further improvement of the phoneme recognition acoustic model.
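A hedged sketch of the MFCC-formant fusion described above: formants are estimated from LPC roots per frame and stacked with the MFCCs. The synthetic test signal, frame settings, and the simple root-picking heuristic are assumptions, not the paper's procedure.

```python
import numpy as np
import librosa

def frame_formants(frame, sr, order=12, n_formants=3):
    a = librosa.lpc(frame, order=order)
    roots = [r for r in np.roots(a) if np.imag(r) > 0]
    freqs = sorted(np.angle(r) * sr / (2 * np.pi) for r in roots)
    freqs = [f for f in freqs if f > 90.0]        # drop near-DC roots
    freqs += [0.0] * n_formants                   # pad if too few remain
    return np.array(freqs[:n_formants])

sr = 16000
t = np.arange(sr) / sr                            # 1 s of stand-in "speech"
y = (np.sin(2 * np.pi * 220 * t)
     + 0.05 * np.random.default_rng(0).normal(size=sr)).astype(np.float32)

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, n_fft=400, hop_length=160)
frames = librosa.util.frame(y, frame_length=400, hop_length=160)
n = min(frames.shape[1], mfcc.shape[1])
formants = np.stack([frame_formants(frames[:, i], sr) for i in range(n)], axis=1)
fused = np.vstack([mfcc[:, :n], formants])        # MFCC + formant fusion
print(fused.shape)                                # (16, n_frames)
```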

7.
To address speech recognition in multi-noise environments, a hierarchical speech recognition model is proposed that treats environmental noise as context for recognition. The model has two layers: a noisy-speech classification model and noise-specific acoustic models. The classification layer reduces the mismatch between training and test data, removes the noise-stability limitation faced by feature-space approaches, and overcomes the low accuracy of traditional multi-condition training in certain noise environments; deep neural network (DNN) acoustic modeling further strengthens the models' ability to distinguish noise, improving the noise robustness of recognition in the model space. Compared with a multi-condition training baseline, the proposed hierarchical model reduces the word error rate (WER) by 20.3% relative, showing that the hierarchical model helps strengthen the noise robustness of speech recognition.
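A toy sketch of the two-layer idea: a noisy-speech classifier first picks the noise environment, then that environment's dedicated model decodes. The sklearn models and feature shapes are placeholders for the paper's DNN acoustic models.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_envs = 3
X = rng.normal(size=(3000, 40))                 # frame features
env = rng.integers(0, n_envs, size=3000)        # noise-environment labels
phone = rng.integers(0, 10, size=3000)          # phone/senone targets

noise_clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=150,
                          random_state=0).fit(X, env)
acoustic_models = {
    e: MLPClassifier(hidden_layer_sizes=(64,), max_iter=150,
                     random_state=e).fit(X[env == e], phone[env == e])
    for e in range(n_envs)}

def recognize(frames):
    """Layer 1 picks the environment; layer 2 applies its acoustic model."""
    e = int(np.bincount(noise_clf.predict(frames)).argmax())  # utterance vote
    return e, acoustic_models[e].predict(frames)

env_hat, phones_hat = recognize(X[:50])
print("detected environment:", env_hat)
```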

8.
9.
秦楚雄  张连海 《计算机应用》2016,36(9):2609-2615
To address the insufficient training of convolutional neural network (CNN) acoustic-model parameters in low-resource speech recognition tasks, a method is proposed that uses multiple feature streams to improve low-resource CNN acoustic models. First, to make fuller use of the limited training data during low-resource acoustic modeling, several different types of acoustic features are extracted. Second, a convolutional sub-network is built for each feature type, forming a parallel structure in which the multi-feature data are normalized in terms of probability distribution. A fully connected layer added on top of the parallel convolutional sub-networks then fuses them, yielding a new CNN acoustic model on which a low-resource speech recognition system is built. Experimental results show that the parallel convolutional sub-networks map the different feature spaces into more similar representations, and the method improves the recognition rate by 3.27% and 2.08% relative to traditional multi-feature concatenation and single-feature CNN modeling, respectively; when multilingual training is introduced, the method still applies, with relative gains of 5.73% and 4.57%.
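A compact PyTorch sketch of the parallel multi-stream structure described above: one convolutional sub-network per feature type, fused by fully connected layers. The layer sizes and the two example streams are assumptions; the paper's configuration is not reproduced here.

```python
import torch
import torch.nn as nn

class StreamCNN(nn.Module):
    """One convolutional sub-network for a single feature stream."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten())

    def forward(self, x):
        return self.net(x)                         # [B, 32*4*4]

class MultiStreamAM(nn.Module):
    def __init__(self, n_streams=2, n_targets=1000):
        super().__init__()
        self.streams = nn.ModuleList(StreamCNN() for _ in range(n_streams))
        self.fuse = nn.Sequential(                 # fully connected fusion
            nn.Linear(n_streams * 32 * 4 * 4, 256), nn.ReLU(),
            nn.Linear(256, n_targets))

    def forward(self, xs):                         # xs: list of [B, 1, T, F] maps
        z = torch.cat([s(x) for s, x in zip(self.streams, xs)], dim=1)
        return self.fuse(z)

model = MultiStreamAM()
mfcc = torch.randn(8, 1, 11, 40)    # stream 1: e.g. MFCC context windows
fbank = torch.randn(8, 1, 11, 40)   # stream 2: e.g. filterbank windows
print(model([mfcc, fbank]).shape)   # torch.Size([8, 1000])
```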

10.
For natural verbal communication between humans and machines, automatic speech recognition that works reasonably well on recordings captured with mid- or far-field microphones is essential. While much research and development has addressed one of the two distortions frequently encountered in mid- and far-field sound pickup, namely noise or reverberation, less effort has gone into jointly combating both kinds of distortion. In our view, however, this is essential to further reduce the degradation caused by moving the microphone away from the speaker's mouth, because in real environments both kinds of distortion are present. In this paper, we propose a first step in this direction by integrating an estimate of the reverberation energy, derived by an auxiliary model based on multistep linear prediction, into a framework that, so far, tracks and removes nonstationary additive distortion with particle filters in a low-dimensional logarithmic power frequency domain. On actual recordings with different speaker-to-microphone distances, we observe that combating either nonstationary noise or reverberation alone, in the feature space and on a single channel, already improves speech recognition performance before and after acoustic model adaptation. Furthermore, a simple concatenation of techniques addressing either additive noise or reverberation can further improve accuracy in some cases. Last but not least, we demonstrate that the joint estimation and removal of both kinds of distortion, as proposed in this publication, further improves the accuracy of the text output.
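A numpy sketch of the auxiliary multistep linear prediction used above to estimate reverberation energy: each sample is predicted only from samples at least `delay` steps in the past, so the prediction captures late reverberation while the residual approximates the direct sound. The order, delay, and least-squares solver are assumptions.

```python
import numpy as np

def multistep_lp(x, order=20, delay=80):
    """Return (late_reverb_estimate, residual) for a 1-D signal x."""
    n = len(x)
    # Delayed regression matrix: each row holds `order` samples that end
    # at least `delay` samples before the sample being predicted.
    A = np.stack([x[t - delay - order + 1: t - delay + 1]
                  for t in range(delay + order - 1, n - 1)])
    b = x[delay + order: n]
    w = np.linalg.lstsq(A, b, rcond=None)[0]       # prediction weights
    reverb = np.zeros_like(x)
    reverb[delay + order:] = A @ w                 # late-reverb estimate
    return reverb, x - reverb

rng = np.random.default_rng(0)
clean = rng.normal(size=4000)
rir = np.exp(-np.arange(400) / 120.0) * rng.normal(size=400)  # toy room response
rev = np.convolve(clean, rir)[:4000]
reverb_est, direct_est = multistep_lp(rev)
print("estimated late-reverb energy:", float(np.sum(reverb_est ** 2)))
```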

11.
To address the low accuracy of single-modality emotion recognition, a bimodal speech-text emotion recognition model based on Bi-LSTM-CNN is proposed. The algorithm adopts a bi-directional long short-term memory network (Bi-LSTM) with word embeddings and a convolutional neural network (CNN)…

12.
Attention-based sequence-to-sequence acoustic models are currently a research focus in speech recognition. To address their long training time and poor robustness, an attention-based acoustic model combined with bottleneck features is proposed. The model consists of a bottleneck feature extraction network based on a deep belief network (DBN) and an attention-based sequence-to-sequence model: the DBN introduces prior information from a traditional acoustic model to speed up convergence while improving the robustness and discriminability of the bottleneck features, and the attention model uses the temporal information of the speech feature sequence to compute the posterior probabilities of the phoneme sequence. On top of the baseline system, training time is reduced by decreasing the number of recurrent layers in the attention model, and recognition accuracy is optimized by changing the numbers of input-layer and bottleneck-layer units in the extraction network. Experiments on the TIMIT database show that the model reduces the phone error rate on the test set to 17.80%, shortens the average training epoch by 52%, and reduces the number of training iterations from 139 to 89.
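A hedged PyTorch sketch of the bottleneck feature extraction network: a stack of fully connected layers with a narrow bottleneck whose activations become the features fed to the attention model. Using a plain feed-forward stack in place of a layer-wise pre-trained DBN, and all layer sizes, are assumptions.

```python
import torch
import torch.nn as nn

class BottleneckNet(nn.Module):
    def __init__(self, in_dim=440, bottleneck=42, n_targets=1944):
        super().__init__()
        self.front = nn.Sequential(
            nn.Linear(in_dim, 1024), nn.Sigmoid(),
            nn.Linear(1024, 1024), nn.Sigmoid(),
            nn.Linear(1024, bottleneck))           # bottleneck layer
        self.back = nn.Sequential(
            nn.Sigmoid(), nn.Linear(bottleneck, n_targets))

    def forward(self, x):
        bn = self.front(x)                         # bottleneck features
        return self.back(bn), bn                   # logits for training, features

net = BottleneckNet()
frames = torch.randn(16, 440)                      # spliced input frames
logits, bn_feats = net(frames)
print(bn_feats.shape)                              # torch.Size([16, 42])
```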

13.
《Advanced Robotics》2013,27(1-2):47-67
Depending on the emotion in speech, the meaning of the speech or the intention of the speaker differs. Therefore, speech emotion recognition, as well as automatic speech recognition, is necessary for precise communication between humans and robots in human-robot interaction. In this paper, a novel feature extraction method is proposed for speech emotion recognition using separation of phoneme classes. In feature extraction, the signal variation caused by different sentences usually overrides the emotion variation and lowers the performance of emotion recognition. Because the proposed method extracts features from the parts of speech that correspond to limited ranges of the spectral center of gravity (CoG) and formant frequencies, the effects of phoneme variation on the features are reduced. Based on the range of the CoG, obstruent sounds are discriminated from sonorant sounds; the sonorant sounds are then categorized into four classes by the resonance characteristics revealed by formant frequency. Results on corpora from 30 different speakers show that the proposed method improves emotion recognition accuracy over other methods at the 99% significance level. Furthermore, the proposed method was applied to extract several features, including prosodic and phonetic features, and was implemented on 'Mung' robots as an emotion recognizer for users.
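A numpy sketch of the CoG gate described above: frames whose spectral center of gravity falls above a threshold are treated as obstruent-like, the rest as sonorant-like and passed on to formant-based sub-classes. The threshold and frame settings are assumptions.

```python
import numpy as np

def spectral_cog(frame, sr):
    """Power-weighted mean frequency of one frame, in Hz."""
    spec = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return float(np.sum(freqs * spec) / (np.sum(spec) + 1e-12))

def split_by_cog(frames, sr, threshold_hz=3000.0):
    """True = obstruent-like frame, False = sonorant-like frame."""
    return np.array([spectral_cog(f, sr) > threshold_hz for f in frames])

rng = np.random.default_rng(0)
frames = rng.normal(size=(10, 400))   # stand-in 25 ms frames at 16 kHz
print(split_by_cog(frames, sr=16000))
```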

14.
A vocal-effort-related robust speech recognition algorithm based on articulatory features
晁浩  宋成  彭维平 《计算机应用》2015,35(1):257-261
To address the robustness of speech recognition under vocal effort (VE) variation, a recognition algorithm based on a multi-model framework is proposed. First, the acoustic characteristics of speech under different VE modes and the impact of VE variation on recognition accuracy are analyzed. Then, a Gaussian mixture model (GMM) based VE mode detection method is proposed. Finally, according to the detection result, a dedicated acoustic model is trained for whispered speech recognition, while articulatory features are used together with traditional spectral features for the other four VE modes. Isolated-word recognition experiments show clear accuracy gains with the proposed method: compared with the baseline system, the average word error rate over the five VE modes drops by 26.69%; compared with mixed-corpus acoustic model training it drops by 14.51%; and compared with maximum likelihood linear regression (MLLR) adaptation it drops by 15.30%. The results indicate that articulatory features are more robust to VE variation than traditional spectral features, and that the multi-model framework is an effective solution to VE-related robustness in speech recognition.
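A hedged sketch of the GMM-based VE detection step: one GaussianMixture per VE mode scores an utterance, and the winning mode routes the utterance to its dedicated recognizer. The five-mode setup follows the abstract; feature shapes and everything else are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

modes = ["whisper", "soft", "normal", "loud", "shout"]
rng = np.random.default_rng(0)
train = {m: rng.normal(loc=i, size=(500, 13)) for i, m in enumerate(modes)}

detectors = {m: GaussianMixture(n_components=4, covariance_type="diag",
                                random_state=0).fit(X)
             for m, X in train.items()}

def detect_ve(utt_frames):
    """Average per-frame log-likelihood under each mode's GMM; pick the best."""
    scores = {m: g.score(utt_frames) for m, g in detectors.items()}
    return max(scores, key=scores.get)

utt = rng.normal(loc=0, size=(120, 13))       # frames of a test utterance
print("detected VE mode:", detect_ve(utt))    # routes to the mode-specific model
```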

15.
This paper addresses learning and recognition of human behavior models from multimodal observation in a smart home environment. The proposed approach is part of a framework for acquiring a high-level contextual model of human behavior in an augmented environment. A 3-D video tracking system creates and tracks entities (persons) in the scene. A speech activity detector analyzes audio streams coming from headset microphones and determines for each entity whether it is speaking, and an ambient sound detector detects noises in the environment. An individual role detector derives basic activities such as "walking" or "interacting with table" from the entity properties extracted by the 3-D tracker. From the derived multimodal observations, different situations such as "aperitif" or "presentation" are learned and detected using statistical models (HMMs). The objective of the proposed general framework is two-fold: the automatic offline analysis of human behavior recordings and the online detection of learned human behavior models. To evaluate the proposed approach, several multimodal recordings showing different situations have been conducted. The obtained results, in particular for offline analysis, are very good, showing that multimodality as well as multiperson observation generation is beneficial for situation recognition.
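A hedged sketch of the situation-model step: one Gaussian HMM per situation is trained on multimodal observation vectors, and a new sequence is labeled with the best-scoring model. hmmlearn is assumed as the HMM library, and the feature contents are placeholders for the paper's multimodal observations (speech activity, roles, ambient sound).

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
situations = ["aperitif", "presentation"]
train = {s: rng.normal(loc=i, size=(300, 4)) for i, s in enumerate(situations)}

models = {}
for s, X in train.items():
    m = hmm.GaussianHMM(n_components=3, covariance_type="diag", random_state=0)
    m.fit(X)                      # one long observation sequence per situation
    models[s] = m

test_seq = rng.normal(loc=1, size=(60, 4))
scores = {s: m.score(test_seq) for s, m in models.items()}
print("detected situation:", max(scores, key=scores.get))
```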

16.
Recognition rate and adaptability to the environment are two key properties of a speech recognition system. Most common methods for raising the recognition rate do so by improving the acoustic model, which often complicates the model and makes training difficult; moreover, when the positions of the speaker and the microphone are not fixed, such methods often recognize poorly. This paper proposes a speech recognition method in which multiple microphones each recognize the same utterance separately and the recognition results are combined using data fusion. Preliminary experimental results show that the method not only improves the system's adaptability to the environment, but also, for a single voice…
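A toy sketch of the fusion idea: each microphone channel produces its own hypothesis with a confidence, and a simple confidence-weighted vote picks the final output. The voting rule is an assumption; the paper's data-fusion method may differ.

```python
from collections import defaultdict

def fuse_hypotheses(channel_results):
    """channel_results: list of (word, confidence) from each microphone."""
    votes = defaultdict(float)
    for word, conf in channel_results:
        votes[word] += conf                 # confidence-weighted voting
    return max(votes, key=votes.get)

mics = [("seven", 0.81), ("seven", 0.62), ("eleven", 0.55), ("seven", 0.40)]
print(fuse_hypotheses(mics))                # 'seven'
```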

17.
Speech recognition is an important mode of human-computer interaction. To address the poor performance of traditional speech recognition systems on noisy speech and their inappropriate feature selection, a deep autoencoder recurrent neural network model based on transfer learning is proposed. The model consists of an encoder, a decoder, and an acoustic model: the acoustic model is built from stacked bidirectional recurrent neural networks to improve recognition performance, while the encoder and decoder consist of fully connected layers for feature extraction. The encoder's structure and parameters are transferred into the acoustic model for joint training. Experiments on the noisy Google Commands dataset show that the model effectively improves the recognition of noisy speech and exhibits good robustness and generalization.
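A hedged PyTorch sketch of the transfer step described above: an autoencoder is pre-trained on noisy features, then its encoder is carried into the recognition model and trained jointly with a stacked bidirectional RNN acoustic model. The layer sizes and the GRU choice are assumptions.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(40, 128), nn.ReLU(), nn.Linear(128, 64))
decoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 40))
autoencoder = nn.Sequential(encoder, decoder)   # pre-train with MSE on features

class TransferAM(nn.Module):
    def __init__(self, encoder, n_classes=30):
        super().__init__()
        self.encoder = encoder                  # transferred structure + weights
        self.rnn = nn.GRU(64, 96, num_layers=2, bidirectional=True,
                          batch_first=True)     # stacked bidirectional RNN
        self.out = nn.Linear(2 * 96, n_classes)

    def forward(self, x):                       # x: [B, T, 40]
        z = self.encoder(x)                     # Linear applies to last dim
        h, _ = self.rnn(z)
        return self.out(h)                      # per-frame class scores

am = TransferAM(encoder)                        # joint training fine-tunes both
print(am(torch.randn(2, 50, 40)).shape)         # torch.Size([2, 50, 30])
```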

18.
Because high-level information such as phoneme-level features is entirely unaffected by the channel, it is considered a useful complement to low-level systems based on acoustic parameters; its drawback is data sparsity. A recognition method based on phoneme-feature supervectors is built, its performance is analyzed using the BUT phoneme-level speech recognizer, and data pruning and KPCA mapping are then tried to improve it. The results show that pruning does not effectively improve recognition performance, but the recognition algorithm incorporating KPCA mapping improves significantly. After further fusion with a mainstream GMM-UBM system, the EER drops from 8.4% to 6.7% relative to the GMM-UBM system alone.
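A hedged sketch of the KPCA mapping applied to phoneme-feature supervectors before scoring: scikit-learn's KernelPCA stands in for the paper's mapping, and the supervectors here are random placeholders for statistics produced by a phoneme recognizer such as BUT's.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
supervectors = rng.normal(size=(200, 1024))    # one supervector per utterance

kpca = KernelPCA(n_components=100, kernel="rbf", gamma=1e-3)
mapped = kpca.fit_transform(supervectors)      # low-dim, nonlinearly mapped
print(mapped.shape)                            # (200, 100)
```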

19.
When performing speaker diarization on recordings from meetings, multiple microphones of different qualities are usually available and distributed around the meeting room. Although several approaches have been proposed in recent years to take advantage of multiple microphones, they are either too computationally expensive and not easily scalable or they cannot outperform the simpler case of using the best single microphone. In this paper, the use of classic acoustic beamforming techniques is proposed together with several novel algorithms to create a complete frontend for speaker diarization in the meeting room domain. New techniques we are presenting include blind reference-channel selection, two-step time delay of arrival (TDOA) Viterbi postprocessing, and a dynamic output signal weighting algorithm, together with using such TDOA values in the diarization to complement the acoustic information. Tests on speaker diarization show a 25% relative improvement on the test set compared to using a single most centrally located microphone. Additional experimental results show improvements using these techniques in a speech recognition task.
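A numpy sketch of the TDOA estimation that underlies such a beamforming frontend: the generalized cross correlation with phase transform (GCC-PHAT) between a reference channel and another microphone yields the delay used both for delay-and-sum and as a diarization feature. The window sizes are assumptions, and the paper's two-step Viterbi smoothing is omitted.

```python
import numpy as np

def gcc_phat(ref, sig, max_delay):
    """Return the delay (in samples) of sig relative to ref via GCC-PHAT."""
    n = len(ref) + len(sig)
    R = np.fft.rfft(sig, n) * np.conj(np.fft.rfft(ref, n))
    cc = np.fft.irfft(R / (np.abs(R) + 1e-12), n)          # phase transform
    cc = np.concatenate((cc[-max_delay:], cc[:max_delay + 1]))
    return int(np.argmax(np.abs(cc))) - max_delay          # >0 means sig lags

rng = np.random.default_rng(0)
ref = rng.normal(size=4096)
delayed = np.roll(ref, 23) + 0.1 * rng.normal(size=4096)   # simulated far mic
print(gcc_phat(ref, delayed, max_delay=50))                # ~23
```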

20.
This paper presents a real-time speech-driven talking face system that provides low computational complexity and a visually smooth result. A novel embedded confusable-set scheme is proposed to generate an efficient phoneme-viseme mapping table, constructed by grouping phonemes with the Houtgast similarity approach based on viseme similarity estimated with histogram distance, following the notion that visemes are visually ambiguous. The generated mapping table simplifies the mapping problem and improves viseme classification accuracy. The implemented real-time speech-driven talking face system includes: 1) speech signal processing, with SNR-aware speech enhancement for noise reduction and ICA-based feature extraction for robust acoustic feature vectors; 2) recognition network processing, in which an HMM and an MCSVM are combined for phoneme recognition and viseme classification: the HMM is good at dealing with sequential inputs, while the MCSVM shows superior classification performance with good generalization, especially for limited samples, and the phoneme-viseme mapping table lets the MCSVM classify which viseme class the HMM's observation sequence belongs to; 3) visual processing, which arranges viseme lip-shape images in time sequence and increases realism through dynamic alpha blending with different alpha settings. In the experiments, the speech signal processing on noisy speech, compared with clean speech, gained improvements of 1.1% (16.7% to 15.6%) in PER and 4.8% (30.4% to 35.2%) in WER, respectively. For viseme classification, the error rate decreased from 19.22% to 9.37%. Finally, a GSM communication between a mobile phone and a PC was simulated to rate visual quality and the speech-driven impression using mean opinion scores. The method thus reduces the number of visemes and lip-shape images via confusable sets and enables real-time operation.
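A hedged sketch of the confusable-set construction: pairwise histogram distances between viseme feature histograms feed an agglomerative grouping, and each viseme is then mapped to its group. Using SciPy hierarchical clustering as the grouping step is an assumption standing in for the Houtgast similarity procedure.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
visemes = ["A", "E", "I", "O", "U", "M", "F", "S"]
hists = rng.random((len(visemes), 32))
hists /= hists.sum(axis=1, keepdims=True)      # lip-shape image histograms

# L1 histogram distance between every viseme pair, then merge confusables.
d = pdist(hists, metric="cityblock")
groups = fcluster(linkage(d, method="average"), t=4, criterion="maxclust")

mapping = {v: int(g) for v, g in zip(visemes, groups)}
print("viseme group table:", mapping)          # fewer classes to distinguish
```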
