A fast neural net training algorithm and its application to speech classification
Authors: Thea Ghiselli-Crippa, Amro El-Jaroudi
Affiliation: University of Pittsburgh, USA

Abstract: This paper describes a fast training algorithm for feedforward neural networks, applied to a two-layer network that classifies segments of speech as voiced, unvoiced, or silence. The classification method is based on five features computed for each speech segment and used as input to the network. The network weights are trained with a new fast algorithm that minimizes the total least-squares error between the actual output of the network and the corresponding desired output. The iterative training algorithm uses a quasi-Newton error-minimization method and employs a positive-definite approximation of the Hessian matrix to converge quickly to a locally optimal set of weights. Convergence is fast, with a local minimum typically reached within ten iterations; in terms of convergence speed, the algorithm compares favorably with other training techniques. When used for voiced-unvoiced-silence classification of speech frames, the network's performance compares favorably with current approaches. Moreover, the method has the advantage of requiring no assumption of a particular probability distribution for the input features.
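To make the training setup concrete, the following is a minimal sketch of quasi-Newton minimization of a squared-error objective for a two-layer network with five inputs and three output classes, as the abstract describes. It is not the authors' algorithm: the hidden-layer size, sigmoid activations, synthetic data, and the use of SciPy's BFGS optimizer (which maintains a positive-definite Hessian approximation) are all assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Dimensions mirroring the paper's setup: 5 input features per speech
# segment and 3 output classes (voiced / unvoiced / silence).
# The hidden-layer size is an assumption, not taken from the paper.
N_IN, N_HID, N_OUT = 5, 8, 3

rng = np.random.default_rng(0)
X = rng.standard_normal((200, N_IN))             # placeholder feature vectors
T = np.eye(N_OUT)[rng.integers(0, N_OUT, 200)]   # placeholder one-hot targets

def unpack(w):
    """Split the flat weight vector into the two layers' weight matrices."""
    i = N_IN * N_HID
    W1 = w[:i].reshape(N_IN, N_HID)
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT)
    return W1, W2

def forward(w, X):
    """Two-layer network with sigmoid hidden and output units (an assumption)."""
    W1, W2 = unpack(w)
    H = 1.0 / (1.0 + np.exp(-X @ W1))
    return 1.0 / (1.0 + np.exp(-H @ W2))

def sse(w):
    """Total squared error between actual and desired network outputs."""
    return np.sum((forward(w, X) - T) ** 2)

w0 = 0.1 * rng.standard_normal(N_IN * N_HID + N_HID * N_OUT)

# BFGS keeps a positive-definite approximation of the Hessian, in the spirit
# of the quasi-Newton scheme described above; the paper's exact update rule
# is not reproduced here.
res = minimize(sse, w0, method="BFGS", options={"maxiter": 50})

pred = forward(res.x, X).argmax(axis=1)
print("final SSE:", res.fun, "iterations:", res.nit)
print("training accuracy:", (pred == T.argmax(axis=1)).mean())
```

In this sketch the gradient is approximated numerically by SciPy; a faithful implementation would supply analytic gradients from backpropagation to keep the per-iteration cost low.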
Keywords: Neural networks; classification; learning algorithms
This article is indexed in ScienceDirect and other databases.