Similar Documents
A total of 20 similar documents were found.
1.
A New Hybrid Speech Recognition Method Based on HMM and RBF
A new speech recognition method combining hidden Markov models (HMM) and radial basis function (RBF) neural networks is proposed. The method first uses the HMM to generate the best speech state sequence, then applies function approximation to time-normalize that state sequence, and finally performs classification with an RBF neural network. Theoretical analysis and experimental results show that the system recognizes better than a plain HMM, and the improvement is especially pronounced for easily confused words.
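A minimal sketch of the final classification stage described here, assuming HMM decoding and time normalization have already produced a fixed-length feature vector; the function name and the Gaussian kernel form are illustrative assumptions, not the paper's implementation (Python/NumPy):

import numpy as np

def rbf_classify(x, centers, widths, weights):
    """Score one time-normalized feature vector with a trained RBF network.
    x: (d,) feature vector derived from the HMM's best state sequence;
    centers: (k, d) RBF centers; widths: (k,); weights: (k, n_classes)."""
    dist2 = np.sum((centers - x) ** 2, axis=1)       # squared distances to centers
    hidden = np.exp(-dist2 / (2.0 * widths ** 2))    # Gaussian radial basis activations
    scores = hidden @ weights                        # linear output layer
    return int(np.argmax(scores))                    # predicted word class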

2.
The two- or three-layered neural networks (2LNN, 3LNN) which originated from stereovision neural networks are applied to speech recognition. To accommodate sequential data flow, we consider a window through which the new acoustic data enter and from which the final neural activities are output. Inside the window, a recurrent neural network develops neural activity toward a stable point. The process is called winner-take-all (WTA) with cooperation and competition. The resulting neural activities clearly showed recognition of continuous speech of a word. The string of phonemes obtained is compared with reference words by using a dynamic programming method. The resulting recognition rate was 96.7% for 100 words spoken by nine male speakers, compared with 97.9% by a hidden Markov model (HMM) with three states and a single gaussian distribution. These results, which are close to those of HMM, seem important because the architecture of the neural network is very simple, and the number of parameters in the neural net equations is small and fixed. This work was presented in part at the Fifth International Symposium on Artificial Life and Robotics, Oita, Japan, January 26–28, 2000.

3.
In this paper we investigated Artificial Neural Networks (ANN) based Automatic Speech Recognition (ASR) by using limited Arabic vocabulary corpora. These limited Arabic vocabulary subsets are digits and vowels carried by specific carrier words. In addition to this, Hidden Markov Model (HMM) based ASR systems are designed and compared to two ANN based systems, namely Multilayer Perceptron (MLP) and recurrent architectures, by using the same corpora. All systems are isolated word speech recognizers. The ANN based recognition system achieved 99.5% correct digit recognition. On the other hand, the HMM based recognition system achieved 98.1% correct digit recognition. With vowels carried by carrier words, the MLP and recurrent ANN based recognition systems achieved 92.13% and 98.06% correct vowel recognition, respectively, whereas the HMM based recognition system achieved 91.6% correct vowel recognition.

4.
Research on Key Technologies of Speech Recognition
Acoustic modeling with hidden Markov models (HMM) is one of the main reasons for the breakthrough progress in large-vocabulary continuous speech recognition, yet some of the unrealistic modeling assumptions HMMs rely on, together with their non-discriminative training algorithms, are becoming a bottleneck for the future development of speech recognition systems. Neural networks can store knowledge and maintain long-term memory in their weights, but their memory of the instantaneous response to an input pattern is relatively poor. A hybrid HMM/ANN model is adopted to reform some of the less reasonable modeling assumptions and training algorithms of the HMM: the hybrid model replaces the Gaussian mixture (GM) with a non-parametric neural-network probability model to compute the observation probabilities needed by the HMM states. In addition, the structure of the neural network is optimized, with very good results.
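As a concrete illustration of the hybrid idea, the following hedged sketch shows the scaled-likelihood conversion commonly used in HMM/ANN hybrids, in which the network's state posteriors are divided by state priors before HMM decoding; the paper's exact formulation may differ (Python/NumPy):

import numpy as np

def scaled_likelihoods(posteriors, state_priors, eps=1e-10):
    """Convert MLP state posteriors p(state|o_t) into scaled observation
    likelihoods proportional to p(o_t|state), for use in HMM decoding
    (a common hybrid HMM/ANN formulation, not necessarily this paper's).
    posteriors: (T, n_states); state_priors: (n_states,)."""
    return posteriors / (state_priors + eps)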

5.
To deal with the overlap between adjacent frames of the speech signal and improve its adaptability, this paper proposes an improved speech recognition system based on a hidden Markov model (HMM) and a genetic-algorithm neural network. The improved method first trains a wavelet neural network on Mel-frequency cepstral coefficients (MFCC), then uses the HMM for temporal modeling of the speech signal and computes the output-probability score of the speech against the HMM; this score is fed to the genetic neural network, which yields the classification result. Experimental results show that the improved system is more robust to noise than a plain HMM and improves the performance of the speech recognition system.

6.
In this paper, we present an on-line learning neural network model, Dynamic Recognition Neural Network (DRNN), for real-time speech recognition. The property of accumulative learning of the DRNN makes it very suitable for real-time speech recognition with on-line learning. A comparison between the DRNN and Hidden Markov Model (HMM) shows that the computational complexity of the former is lower than that of the latter in both training and recognition. Encouraging results are obtained when the DRNN is tested on a BUPT digit database (Mandarin) and on the on-line learning of twenty isolated English computer command words.

7.
Global optimization of a neural network-hidden Markov model hybrid
The integration of multilayered and recurrent artificial neural networks (ANNs) with hidden Markov models (HMMs) is addressed. ANNs are suitable for approximating functions that compute new acoustic parameters, whereas HMMs have been proven successful at modeling the temporal structure of the speech signal. In the approach described, the ANN outputs constitute the sequence of observation vectors for the HMM. An algorithm is proposed for global optimization of all the parameters. Results on speaker-independent recognition experiments using this integrated ANN-HMM system on the TIMIT continuous speech database are reported.

8.
Convolutional neural networks (CNN) are among the popular models for speech recognition; their convolutional structure provides shift invariance in both the time and frequency domains of the speech signal. However, a CNN alone models the speech signal insufficiently. To address this, the connectionist temporal classification (CTC) criterion is applied to the CNN structure to build an end-to-end convolutional neural network (CTC-CNN) model. Residual blocks are further introduced to form a new end-to-end deep convolutional neural network (CTC-DCNN) model, which is optimized with the maxout activation function. Experiments on the TIMIT and Thchs-30 corpora show that, for Chinese and English recognition, the model improves accuracy by about 4.7% and 6.3%, respectively, over existing convolutional neural network models.
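A hedged PyTorch sketch of the ingredients named in this abstract (a convolution with maxout activation, a residual block, and the CTC criterion); layer sizes and their arrangement are assumptions, not the actual CTC-DCNN configuration:

import torch
import torch.nn as nn

class MaxoutConv(nn.Module):
    """1-D convolution followed by a maxout activation over two channel pieces."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = nn.Conv1d(in_ch, out_ch * 2, k, padding=k // 2)
    def forward(self, x):
        a, b = self.conv(x).chunk(2, dim=1)   # split into two pieces
        return torch.max(a, b)                # elementwise maxout

class ResidualBlock(nn.Module):
    """Residual block of two maxout convolutions (hypothetical configuration)."""
    def __init__(self, ch):
        super().__init__()
        self.c1 = MaxoutConv(ch, ch)
        self.c2 = MaxoutConv(ch, ch)
    def forward(self, x):
        return x + self.c2(self.c1(x))        # identity shortcut

# CTC training criterion: expects log_probs of shape (T, N, C),
# target label sequences, and per-utterance input/target lengths.
ctc_loss = nn.CTCLoss(blank=0)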

9.
Digit Speech Recognition Based on a Hybrid HMM/RBF Model
王朋, 陈树中. 《计算机工程》, 2002, 28(12): 136-138
A method combining a discrete hidden Markov model (HMM) and a radial basis function (RBF) neural network is proposed for Mandarin digit speech recognition (MDSR). Together with a series of further improvements, it raises the recognition rate for Mandarin digits to 99.7%.

10.
To address the sharp performance drop that most speech recognition systems suffer in noisy environments, a new feature extraction method for speech recognition is proposed. Built on an auditory model, the method obtains frequency information from the combined upward zero-crossing rates of the speech signal and its difference signal, and intensity information from peak detection with nonlinear amplitude weighting; the two are combined into the output speech features, which are then trained and recognized with a BP neural network and an HMM, respectively. Simulations of speaker-independent recognition of 50 words at different signal-to-noise ratios are carried out, and the results show that zero-crossing features combined with difference information and peak amplitude are strongly noise-robust.
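A minimal sketch of the per-frame features described above: upward zero-crossing counts of the signal and of its difference signal for frequency information, plus a nonlinearly weighted peak amplitude for intensity; the logarithmic weighting is an assumption (Python/NumPy):

import numpy as np

def frame_features(frame):
    """Per-frame feature sketch: rising zero-crossing counts of the frame and
    its first difference (coarse frequency cues) and a nonlinearly weighted
    peak amplitude (intensity cue)."""
    frame = np.asarray(frame, dtype=float)
    diff = np.diff(frame)
    up_zc = np.sum((frame[:-1] < 0) & (frame[1:] >= 0))        # rising crossings of the signal
    up_zc_diff = np.sum((diff[:-1] < 0) & (diff[1:] >= 0))     # rising crossings of its difference
    peak = np.log1p(np.max(np.abs(frame)))                     # weighted peak amplitude (assumed log weighting)
    return np.array([up_zc, up_zc_diff, peak], dtype=float)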

11.
A Speech Recognition System Based on HMM and a Genetic Neural Network
This paper proposes a hybrid speech recognition method based on a hidden Markov model (HMM) and a back-propagation network optimized by a genetic algorithm (GA-BP). The method first uses the HMM for temporal modeling of the speech signal and computes output-probability scores of the speech against the HMM; these scores are fed to the optimized back-propagation network to obtain classification information, and the final decision is made by the hybrid model's recognition algorithm. Training and testing on existing sample data are carried out in Matlab. Simulation results show that, because the design fully exploits the HMM's strength in temporal modeling and the GA-BP network's strength in classification, the hybrid model is more noise-robust than a plain HMM, avoids the neural network's local-optimum problem, greatly increases recognition speed, and clearly improves the performance of the speech recognition system.
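The genetic optimization of the back-propagation network can be pictured with the toy sketch below, where a fitness function (e.g. validation accuracy of the network on the HMM probability scores) drives selection and mutation of flattened weight vectors; the population size, mutation scheme and fitness definition are illustrative assumptions (Python/NumPy):

import numpy as np

def evolve_weights(fitness, dim, pop=30, gens=50, sigma=0.1, seed=0):
    """Minimal GA sketch for seeding BP-network weights: 'fitness' scores a
    flat weight vector; selection keeps the best half of the population and
    offspring are mutated copies of the survivors."""
    rng = np.random.default_rng(seed)
    popu = rng.normal(0.0, 1.0, (pop, dim))
    for _ in range(gens):
        scores = np.array([fitness(w) for w in popu])
        elite = popu[np.argsort(scores)[-pop // 2:]]            # keep the best half
        children = elite + rng.normal(0.0, sigma, elite.shape)  # Gaussian mutation
        popu = np.vstack([elite, children])
    return popu[np.argmax([fitness(w) for w in popu])]          # best weight vector found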

12.
This paper presents an artificial neural network (ANN) for speaker-independent isolated word speech recognition. The network consists of three subnets in concatenation. The static information within one frame of speech signal is processed in the probabilistic mapping subnet, which converts an input vector of acoustic features into a probability vector whose components are estimated probabilities of the feature vector belonging to the phonetic classes that constitute the words in the vocabulary. The dynamics capturing subnet computes the first-order cross correlation between the components of the probability vectors to serve as the discriminative feature derived from the interframe temporal information of the speech signal. These dynamic features are passed for decision-making to the classification subnet, which is a multilayer perceptron (MLP). The architectures of these three subnets are described, and the associated adaptive learning algorithms are derived. The recognition results for a subset of the DARPA TIMIT speech database are reported. The correct recognition rate of the proposed ANN system is 95.5%, whereas that of the best continuous hidden Markov model (HMM)-based system is only 91.0%.
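A small sketch of the dynamic-feature computation described for the dynamics capturing subnet, i.e. the first-order cross correlation between components of consecutive probability vectors; the exact normalization used in the paper is assumed (Python/NumPy):

import numpy as np

def dynamic_features(prob_vectors):
    """First-order cross correlation between components of consecutive
    frame-level probability vectors, flattened into one dynamic feature
    vector. prob_vectors: (T, n_classes) estimated phonetic-class probabilities."""
    p = np.asarray(prob_vectors, dtype=float)
    # outer products of frame t and frame t+1, averaged over time
    corr = np.einsum('ti,tj->ij', p[:-1], p[1:]) / max(len(p) - 1, 1)
    return corr.ravel()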

13.
A simplified neural network structure is studied for small-vocabulary hybrid speech recognition systems that combine a hidden Markov model (HMM) with a multilayer perceptron (MLP). The state observation probabilities are factored by exploiting the regular two-dimensional array formed by the HMM states of a small-vocabulary hybrid system. Based on this use of the HMM's two-dimensional structure, the state observation probabilities are estimated with a simplified network composed of several simple MLPs. Both theoretical analysis and speech recognition experiments show that this simplified structure outperforms the simplified neural network structure proposed by Franco et al.

14.
In speech recognition research, because of the variety of languages, corresponding speech recognition systems need to be constructed for different languages. This is especially true for dialect speech recognition, which involves many special words and features of spoken language and for which speech data are very scarce; constructing a dialect speech recognition system is therefore difficult. This paper constructs a speech recognition system for Sichuan dialect by combining a hidden Markov model (HMM) and a deep long short-term memory (LSTM) network. Using the HMM-LSTM architecture, we created a Sichuan dialect dataset and implemented a speech recognition system for this dataset. Compared with the deep neural network (DNN), the LSTM network can overcome the problem that the DNN only captures the context of a fixed number of information items. Moreover, to identify polyphone and special pronunciation vocabularies in Sichuan dialect accurately, we collect all the characters in the dataset and their common phoneme sequences to form a lexicon. Finally, this system yields an 11.34% character error rate on the Sichuan dialect evaluation dataset. As far as we know, this is the best performance reported for this corpus to date.

15.
Artificial neural networks have been shown to perform well in automatic speech recognition (ASR) tasks, although their complexity and excessive computational costs have limited their use. Recently, a recurrent neural network with simplified training, the echo state network (ESN), was introduced by Jaeger and shown to outperform conventional methods in time series prediction experiments. We created the predictive ESN classifier by combining the ESN with a state machine framework. In small-vocabulary ASR experiments, we compared the noise-robust performance of the predictive ESN classifier with a hidden Markov model (HMM) as a function of model size and signal-to-noise ratio (SNR). The predictive ESN classifier outperformed an HMM by 8 dB SNR, and both models achieved maximum noise-robust accuracy for architectures with more states and fewer kernels per state. Using ten trials of random sets of training/validation/test speakers, accuracy for the predictive ESN classifier, averaged between 0 and 20 dB SNR, was 81±3%, compared to 61±2% for an HMM. The closed-form regression training for the ESN significantly reduced the computational cost of the network, and the reservoir of the ESN created a high-dimensional representation of the input with memory, which led to increased noise-robust classification.
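A hedged sketch of the two ESN ingredients highlighted in the abstract, the fixed random reservoir and the closed-form (ridge regression) readout training; hyperparameters and the state-machine combination are not shown (Python/NumPy):

import numpy as np

def esn_states(inputs, W_in, W, leak=1.0):
    """Run an echo state network reservoir over an input sequence.
    inputs: (T, d); W_in: (n, d) input weights; W: (n, n) fixed random reservoir."""
    x = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)   # leaky reservoir update
        states.append(x.copy())
    return np.array(states)                                     # (T, n) reservoir states

def train_readout(states, targets, ridge=1e-6):
    """Closed-form ridge regression for the readout (the 'simplified training'
    the abstract refers to): W_out = (S^T S + rI)^-1 S^T Y."""
    S, Y = np.asarray(states), np.asarray(targets)
    return np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ Y)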

16.
李江和, 王玫. 《计算机工程》, 2022, 48(11): 77-82
To improve their modeling of noisy speech, traditional deep-learning speech enhancement methods usually take non-causal network inputs, which introduces a fixed latency and hurts the real-time behavior of the enhancement system. A gated recurrent neural network for causal speech enhancement, CGRU, is proposed to remove this fixed latency in real-time systems and improve enhancement performance. To better model the correlation in the noisy speech signal, the network unit fuses the previous time step's input and output when computing the current output. In addition, a linear gating mechanism controls information flow to mitigate overfitting during training. Since causal speech enhancement has strict real-time requirements, CGRU adopts a single-gate design to reduce structural complexity and improve real-time performance. Experiments show that CGRU outperforms traditional structures such as GRU, SRNN and SRU in perceptual quality, objective intelligibility and segmental SNR of the enhanced speech; at 0 dB SNR, CGRU reaches an average perceptual quality of 2.4 and an average objective intelligibility of 0.786.
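Because the abstract gives only a qualitative description, the following is a loose sketch of a single-gate causal recurrent step that fuses the previous input and output into the current update; the equations are assumptions and are not taken from the paper (Python/NumPy):

import numpy as np

def cgru_step(x_t, x_prev, h_prev, Wx, Wp, Wh, Wg, bg):
    """One step of a single-gate causal recurrent cell in the spirit of the
    CGRU described above: the candidate fuses the current input with the
    previous input and output, and a single gate controls the update.
    All equations here are illustrative assumptions."""
    candidate = np.tanh(Wx @ x_t + Wp @ x_prev + Wh @ h_prev)   # fuse current/previous information
    gate = 1.0 / (1.0 + np.exp(-(Wg @ x_t + bg)))               # single gate on the current input
    return gate * candidate + (1.0 - gate) * h_prev             # causal state update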

17.
In this paper, artificial neural networks were used to accomplish isolated speech recognition. The topic was investigated in two steps, consisting of the pre-processing part with Digital Signal Processing (DSP) techniques and the post-processing part with Artificial Neural Networks (ANN). These two parts were briefly explained, and speech recognizers using different ANN architectures were implemented in Matlab. Three different neural network models were designed: multilayer back-propagation, Elman, and probabilistic neural networks. Performance comparisons with similar studies found in the related literature indicated that our proposed ANN structures yield satisfactory results.

18.
Large vocabulary recognition of on-line handwritten cursive words
This paper presents a writer-independent system for large vocabulary recognition of on-line handwritten cursive words. The system first uses a filtering module, based on simple letter features, to quickly reduce a large reference dictionary (lexicon) to a more manageable size; the reduced lexicon is subsequently fed to a recognition module. The recognition module uses a temporal representation of the input, instead of a static two-dimensional image, thereby preserving the sequential nature of the data and enabling the use of a Time-Delay Neural Network (TDNN); such networks have been previously successful in the continuous speech recognition domain. Explicit segmentation of the input words into characters is avoided by sequentially presenting the input word representation to the neural network-based recognizer. The outputs of the recognition module are collected and converted into a string of characters that is matched against the reduced lexicon using an extended Damerau-Levenshtein function. Trained on 2,443 unconstrained word images (11 k characters) from 55 writers and using a 21 k lexicon, we reached 97.9% and 82.4% top-5 word recognition rates on a writer-dependent and a writer-independent test, respectively.
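For reference, the matching step can be sketched with the standard restricted Damerau-Levenshtein distance (insertions, deletions, substitutions and adjacent transpositions); the paper's extended variant adds refinements not shown here (Python):

def damerau_levenshtein(a, b):
    """Restricted Damerau-Levenshtein distance between two character strings,
    as used to match the recognizer output against lexicon entries."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # adjacent transposition
    return d[-1][-1]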

19.
An unsupervised competitive neural network for efficient clustering of Gaussian probability density function (GPDF) data of continuous density hidden Markov models (CDHMMs) is proposed in this paper. The proposed unsupervised competitive neural network, called the divergence-based centroid neural network (DCNN), employs the divergence measure as its distance measure and utilizes the statistical characteristics of observation densities in the HMM for speech recognition problems. While the conventional clustering algorithms used for the vector quantization (VQ) codebook design utilize only the mean values of the observation densities in the HMM, the proposed DCNN utilizes both the mean and the covariance values. When compared with other conventional unsupervised neural networks, the DCNN successfully allocates more code vectors to the regions where GPDF data are densely distributed while it allocates fewer code vectors to the regions where GPDF data are sparsely distributed. When applied to Korean monophone recognition problems as a tool to reduce the size of the codebook, the DCNN reduced the number of GPDFs used for code vectors by 65.3% while preserving recognition accuracy. Experimental results with a divergence-based k-means algorithm and a divergence-based self-organizing map algorithm are also presented in this paper for a performance comparison.
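As an illustration of the divergence measure the DCNN uses in place of a plain Euclidean distance, the sketch below computes the symmetric (Jeffreys) divergence between two diagonal-covariance Gaussians, so that both the means and the covariances of the GPDFs enter the distance; treating the covariances as diagonal is an assumption (Python/NumPy):

import numpy as np

def sym_divergence(mu1, var1, mu2, var2):
    """Symmetric (Jeffreys) divergence between two diagonal-covariance
    Gaussians given as mean vectors and per-dimension variance vectors."""
    mu1, var1 = np.asarray(mu1, float), np.asarray(var1, float)
    mu2, var2 = np.asarray(mu2, float), np.asarray(var2, float)
    dm2 = (mu1 - mu2) ** 2
    return 0.5 * np.sum(var1 / var2 + var2 / var1 - 2.0
                        + dm2 * (1.0 / var1 + 1.0 / var2))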

20.
This paper proposes a new method for speaker feature extraction based on Formants, Wavelet Entropy and Neural Networks, denoted as FWENN. In the first stage, five formants and seven wavelet packet Shannon entropy coefficients are extracted from the speakers’ signals as the speaker feature vector. In the second stage, these 12 feature extraction coefficients are used as inputs to feed-forward neural networks. A probabilistic neural network is also proposed for comparison. In contrast to conventional speaker recognition methods that extract features from sentences (or words), the proposed method extracts the features from vowels. Advantages of using vowels include the ability to recognize speakers when only partially-recorded words are available. This may be useful for deaf-mute persons or when the recordings are damaged. Experimental results show that the proposed method succeeds in the speaker verification and identification tasks with a high classification rate. This is accomplished with a minimum amount of information, using only 12 coefficient features (i.e. vector length) and only one vowel signal, which is the major contribution of this work. The results are further compared to well-known classical algorithms for speaker recognition and are found to be superior.
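A hedged sketch of assembling the 12-dimensional FWENN feature vector, with Shannon entropy computed from the normalized squared coefficients of each wavelet packet subband; formant tracking and the wavelet packet decomposition themselves are assumed to be computed elsewhere (Python/NumPy):

import numpy as np

def shannon_entropy(coeffs, eps=1e-12):
    """Shannon entropy of one wavelet-packet subband, using the normalized
    squared coefficients as a probability mass (a common convention; the
    paper's exact definition may differ)."""
    e = np.asarray(coeffs, dtype=float) ** 2
    p = e / (np.sum(e) + eps)
    return float(-np.sum(p * np.log2(p + eps)))

def fwenn_features(formants, subbands):
    """Assemble the 12-dimensional FWENN input: five formant frequencies plus
    seven subband entropies. 'formants' and 'subbands' are hypothetical
    inputs produced by separate formant and wavelet packet analysis steps."""
    return np.array(list(formants) + [shannon_entropy(c) for c in subbands])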
