Speech Recognition Model Based on Recurrent Neural Networks
Cite this article: ZHU Xiao-Yan, WANG Yu, XU Wei. Speech Recognition Model Based on Recurrent Neural Networks [J]. Chinese Journal of Computers, 2001, 24(2): 213-218.
Authors: ZHU Xiao-Yan, WANG Yu, XU Wei
Affiliation: Department of Computer Science and Technology, Tsinghua University; State Key Laboratory of Intelligent Technology and Systems, Tsinghua University
Funding: Supported by the National Natural Science Foundation of China (69982005) and the National Key Basic Research and Development Program of China (G199803050703)
Abstract: Speech recognition based on the hidden Markov model (HMM) has advanced considerably in recent years. The HMM nevertheless has inherent limitations, and how to overcome the problems introduced by its first-order assumption and its independence assumption (both written out in the note after this record) has long been a topic of active research; introducing neural networks into speech recognition is one way to do so. This paper applies recurrent neural networks to Chinese speech recognition, modifies the original network architecture, and proposes a corresponding training method. Experimental results show that the model handles continuous signals well and performs on a par with the conventional HMM, and that the new training strategy improves the model's classification performance markedly while also speeding up training.

Keywords: speech recognition; hidden Markov model; recurrent neural networks; learning algorithm
Revised: December 21, 1999
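For reference, the two HMM assumptions the abstract refers to are, in standard notation (with $s_t$ the hidden state and $o_t$ the acoustic observation at frame $t$; the notation is ours, not the paper's):

```latex
% First-order (Markov) assumption on the state sequence:
P(s_t \mid s_{t-1}, s_{t-2}, \dots, s_1) = P(s_t \mid s_{t-1})
% Observation-independence assumption:
P(o_t \mid s_1, \dots, s_T,\; o_1, \dots, o_{t-1}) = P(o_t \mid s_t)
```

The recurrent feedback described in the English abstract below lets the per-frame output depend on the whole acoustic history rather than on the current state alone, which is exactly what these two assumptions rule out for an HMM.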

Speech Recognition Model Based on Recurrent Neural Networks
ZHU Xiao-Yan, WANG Yu, XU Wei. Speech Recognition Model Based on Recurrent Neural Networks [J]. Chinese Journal of Computers, 2001, 24(2): 213-218.
Authors: ZHU Xiao-Yan, WANG Yu, XU Wei
Abstract: To overcome some weaknesses of the hidden Markov model (HMM) in speech recognition, HMM/NN hybrid systems have been explored by many researchers in recent years. In previous HMM/NN hybrid systems the neural networks adopted were mostly multilayer perceptrons (MLPs). In our system, a recurrent neural network (RNN) is used in place of the MLP as the syllable probability estimator. An RNN is an MLP augmented with feedback connections that carry the outputs of some neurons back to other neurons or to themselves. This feedback gives the network the ability to process the context information of a time sequence efficiently, which is especially useful for speech recognition. In this paper, the architecture of the RNN is modified and a corresponding training scheme is presented. The following techniques have been adopted in our system.
1. A network with a single layer is adopted, but the content of its feedback differs from the networks used by previous researchers: the external output is included in the feedback, not just the internal state.
2. The training algorithm is back-propagation through time (BPTT). In the common BPTT algorithm the initial feedback values are set arbitrarily from experience, so they are not specific to the problem being solved; it is therefore preferable for the initial feedback values to be trained as well. In our training algorithm this is achieved by adding an additional layer to the unfolded network.
3. Training the network requires proper target values. To obtain them we make use of HMMs that have already been trained to recognize the same syllables. This avoids the difficulty and inaccuracy of hand-set teacher signals and gives a smooth transition between adjacent states.
4. To make the network learn faster and generalize better, the network is trained in stages: first on short fragments of the speech sequences, then, once the error on these short pieces is small enough, on longer fragments, and finally on whole sequences.
Experimental results show that this method accelerates training and also improves recognition performance. A minimal code sketch of these ideas is given after the keywords below.
Keywords: speech recognition; hidden Markov model; recurrent neural networks
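The sketch below is not the authors' code; it is a minimal illustration, written with PyTorch (which the paper does not use), of the ideas in the abstract: a single recurrent layer whose feedback includes the external softmax output as well as the hidden state, initial feedback values that are trained as parameters (the paper realises this with an extra layer on the unfolded network), and a staged schedule that trains on short fragments before whole sequences. The class and function names, fragment lengths, optimizer, and synthetic data are illustrative assumptions, and the HMM-derived soft targets of point 3 are replaced here by hard per-frame labels.

```python
# Minimal sketch of an output-feedback RNN syllable estimator with trainable
# initial feedback and staged (short-to-long) training. Illustrative only.
import torch
import torch.nn as nn

class OutputFeedbackRNN(nn.Module):
    def __init__(self, n_features, n_hidden, n_classes):
        super().__init__()
        # One recurrent layer: its input is [acoustic frame, previous hidden
        # state, previous external output], as described in the abstract.
        self.hidden = nn.Linear(n_features + n_hidden + n_classes, n_hidden)
        self.output = nn.Linear(n_hidden, n_classes)
        # Initial feedback values are parameters, so BPTT learns them instead
        # of their being set arbitrarily from experience.
        self.h0 = nn.Parameter(torch.zeros(n_hidden))
        self.y0 = nn.Parameter(torch.zeros(n_classes))

    def forward(self, frames):               # frames: (T, n_features)
        h, y = self.h0, torch.softmax(self.y0, dim=-1)
        logits = []
        for x in frames:                      # unfold over time
            h = torch.tanh(self.hidden(torch.cat([x, h, y])))
            z = self.output(h)
            y = torch.softmax(z, dim=-1)      # external output fed back
            logits.append(z)
        return torch.stack(logits)            # per-frame class scores

def staged_training(model, frames, targets, stages=(5, 20, None), epochs=50):
    """Train on short fragments first, then longer ones, then whole sequences.

    targets here are hard per-frame labels; the paper instead derives soft
    targets from HMMs already trained on the same syllables (omitted here).
    """
    opt = torch.optim.SGD(model.parameters(), lr=0.05)
    loss_fn = nn.CrossEntropyLoss()
    for length in stages:
        frag = len(frames) if length is None else min(length, len(frames))
        for _ in range(epochs):
            for start in range(0, len(frames) - frag + 1, frag):
                seg = slice(start, start + frag)
                opt.zero_grad()
                loss = loss_fn(model(frames[seg]), targets[seg])
                loss.backward()               # BPTT over the fragment
                opt.step()

# Toy usage with random data standing in for acoustic features and labels.
torch.manual_seed(0)
model = OutputFeedbackRNN(n_features=12, n_hidden=32, n_classes=8)
frames = torch.randn(100, 12)
targets = torch.randint(0, 8, (100,))
staged_training(model, frames, targets)
```

The departure from a plain state-feedback recurrent network is that the previous softmax output is concatenated into the recurrent input and that the initial feedback values are learned rather than fixed; here autograd handles the unfolding, whereas the paper adds an explicit layer to the unfolded network for the initial feedback.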