Extracting finite-state representations from recurrent neural networks trained on chaotic symbolic sequences
Authors: Tino P, Koteles M
Affiliation: Dept. of Comput. Sci. and Eng., Slovak Tech. Univ., Bratislava.
Abstract: Concerns neural-based modeling of symbolic chaotic time series. We investigate the knowledge induction process associated with training recurrent neural networks (RNNs) on single long chaotic symbolic sequences. Even though training an RNN to predict the next symbol leaves standard performance measures, such as the mean square error on the network output, virtually unchanged, the nets extract a lot of knowledge. We monitor the knowledge extraction process by treating the nets as stochastic sources and letting them generate sequences, which are then confronted with the training sequence via information-theoretic entropy and cross-entropy measures. We also study the possibility of reformulating the knowledge gained by the RNNs in the compact, easy-to-analyze form of finite-state stochastic machines. The experiments are performed on two sequences of different complexities, as measured by the size and state-transition structure of the induced Crutchfield epsilon-machines (1991, 1994). The extracted machines can achieve comparable or even better entropy and cross-entropy performance. They reflect the complexity of the training sequence in their dynamical state representations, which can be recast in finite-state terms. The findings are confirmed by a much more detailed analysis of the model-generated sequences. We also introduce a visual representation of the allowed block structure in the studied sequences that offers illustrative insight into both the RNN training and the finite-state stochastic machine extraction processes.
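The abstract's comparison of generated and training sequences rests on block (n-gram) entropy and cross-entropy estimates. The sketch below illustrates one way such measures can be computed over symbol sequences; the function names and the eps smoothing constant are illustrative assumptions, not code from the paper.

    from collections import Counter
    import math

    def block_distribution(seq, n):
        """Empirical distribution of length-n blocks (n-grams) in a symbol sequence."""
        counts = Counter(seq[i:i + n] for i in range(len(seq) - n + 1))
        total = sum(counts.values())
        return {block: c / total for block, c in counts.items()}

    def block_entropy(seq, n):
        """Shannon entropy (bits) of the length-n block distribution."""
        dist = block_distribution(seq, n)
        return -sum(p * math.log2(p) for p in dist.values())

    def block_cross_entropy(train_seq, model_seq, n, eps=1e-12):
        """Cross-entropy (bits) of the training blocks under the model's
        block distribution; eps (an assumed smoothing choice) handles
        blocks the model never generated."""
        p = block_distribution(train_seq, n)
        q = block_distribution(model_seq, n)
        return -sum(p_b * math.log2(q.get(b, eps)) for b, p_b in p.items())

    # Example: confront a model-generated binary sequence with the training one.
    train = "0110101101101011011010110110"
    model = "0110110101101101011011010110"
    for n in (1, 2, 3):
        print(n, block_entropy(train, n), block_cross_entropy(train, model, n))

A model whose block cross-entropy approaches the training sequence's own block entropy has, in this sense, captured the allowed block structure the abstract refers to.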