Adaptative time constants improve the prediction capability of recurrent neural networks
Authors:Jean-Philippe Draye  Davor Pavisic  Guy Cheron  Gaëtan Libert
Affiliation:(1) Faculté Polytechnique de Mons, Parallel Information Processing Laboratory, rue de Houdain 9, B-7000 Mons, Belgium;(2) Laboratory of Biomechanics, University of Brussels, Av. Paul Héger 28, B-1050 Brussels, Belgium;(3) Universidad Del Valle, P.O. Box 4742, Cochabamba, Bolivia
Abstract:Classical statistical prediction techniques reach their limits on data sets with nonlinearities; neural models can overcome these limitations. In this paper, we present a recurrent neural model in which an adaptative time constant is associated with each neuron-like unit, together with a learning algorithm to train these dynamic recurrent networks. We test the network by training it to predict the Mackey-Glass chaotic signal. To evaluate the quality of the prediction, we compute the power spectra of the two signals and the associated fractional error. Results show that introducing an adaptative time constant for each neuron of a recurrent network improves both the quality of the prediction and the dynamical features of the neural model. Such dynamic recurrent neural networks outperform time-delay neural networks.
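The core idea of the abstract, a recurrent unit whose time constant is a trainable parameter alongside the weights, can be sketched with standard leaky-integrator (continuous-time RNN) dynamics. The equation, step size, and parameter values below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(y, W, I, T, dt=0.1):
    """One Euler step of the leaky-integrator dynamics
        T_i * dy_i/dt = -y_i + sigmoid(sum_j W_ij * y_j + I_i),
    where each unit i carries its own time constant T[i].
    In the adaptive-time-constant scheme, T would be updated by
    gradient descent just like W."""
    return y + (dt / T) * (-y + sigmoid(W @ y + I))

rng = np.random.default_rng(0)
n = 5
W = rng.normal(scale=0.5, size=(n, n))   # recurrent weights
T = rng.uniform(0.5, 2.0, size=n)        # per-unit time constants (trainable)
I = rng.normal(size=n)                   # constant external input
y = np.zeros(n)                          # unit activations

for _ in range(100):                     # relax toward a fixed point
    y = step(y, W, I, T)
```

Because the update is a convex combination of the old state and a sigmoid output (for dt <= min(T)), the activations stay in [0, 1]; units with large T[i] integrate their input slowly, which is what lets the trained network match the time scales present in the signal.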
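The benchmark and the evaluation metric can also be sketched. The Mackey-Glass delay-differential equation with the standard parameters (a = 0.2, b = 0.1, exponent 10, delay tau = 17) is integrated with a simple Euler scheme; the fractional-error formula over the power spectra below is an illustrative stand-in, since the abstract does not give the exact definition:

```python
import numpy as np

def mackey_glass(n_steps, tau=17, a=0.2, b=0.1, dt=1.0, x0=1.2):
    """Generate the Mackey-Glass chaotic series by Euler integration of
        dx/dt = a*x(t - tau) / (1 + x(t - tau)**10) - b*x(t),
    using a constant history x(t) = x0 for t <= 0."""
    delay = int(tau / dt)
    x = np.full(n_steps + delay, x0)
    for t in range(delay, n_steps + delay - 1):
        x_tau = x[t - delay]
        x[t + 1] = x[t] + dt * (a * x_tau / (1.0 + x_tau**10) - b * x[t])
    return x[delay:]

def fractional_spectral_error(target, prediction):
    """One plausible 'fractional error' between power spectra:
    sum(|P_target - P_pred|) / sum(P_target). Assumed metric,
    not necessarily the paper's exact formula."""
    P_t = np.abs(np.fft.rfft(target)) ** 2
    P_p = np.abs(np.fft.rfft(prediction)) ** 2
    return np.abs(P_t - P_p).sum() / P_t.sum()

signal = mackey_glass(2000)
```

Comparing spectra rather than point-wise errors rewards a predictor for reproducing the dynamical (frequency) content of the chaotic signal, not just its short-term values.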
Indexed in SpringerLink and other databases.