Extraction, Insertion and Refinement of Symbolic Rules in Dynamically Driven Recurrent Neural Networks
Authors: C. Lee Giles, Christian W. Omlin
Affiliation: 1. NEC Research Institute, 4 Independence Way, Princeton, NJ 08540, USA. E-mail: giles@research.nj.nec.com; 2. Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742, USA; 3. NEC Research Institute, 4 Independence Way, Princeton, NJ 08540, USA; 4. Computer Science Department, Rensselaer Polytechnic Institute, Troy, NY 12180, USA. E-mail: omlin@cs.rpi.edu
Abstract: Recurrent neural networks readily process, learn and generate temporal sequences, and they have been shown to have impressive computational power. Trained on symbolic string examples encoded as temporal sequences, recurrent neural networks can behave like sequential finite-state recognizers. We discuss methods for extracting, inserting and refining symbolic grammatical rules in recurrent networks. The paper addresses several issues: how rules are inserted into recurrent networks, how they affect training and generalization, and how those rules can be checked and corrected. The ability to exchange information between a symbolic representation (grammatical rules) and a connectionist representation (trained weights) has interesting implications. After partially known rules are inserted, recurrent networks can be trained to preserve inserted rules that were correct and to correct, through training, inserted rules that were 'incorrect', i.e. inconsistent with the training data.
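The paper's own algorithms are not reproduced on this index page. As a rough illustration of the insertion direction, the sketch below programs deterministic finite-state automaton (DFA) transitions directly into the second-order weights of a small recurrent network, one recurrent unit per DFA state, in the spirit of this line of work. It is a minimal sketch, not the authors' implementation: the weight magnitude H, the function names, and the one-unit-per-state encoding are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def insert_dfa(delta, n_symbols, H=6.0):
    """Program a second-order recurrent network to mimic a DFA.

    delta: dict mapping (state, symbol) -> next state
    H: large weight magnitude; bigger H gives sharper, more DFA-like dynamics
    """
    n_states = 1 + max(max(i for i, _ in delta), max(delta.values()))
    # One recurrent unit per DFA state; W[j, i, k] couples unit i and
    # input symbol k to unit j.  Unprogrammed transitions get -H.
    W = -H * np.ones((n_states, n_states, n_symbols))
    for (i, k), j in delta.items():
        W[j, i, k] = +H                      # transition delta(q_i, a_k) = q_j
    b = -(H / 2.0) * np.ones(n_states)       # bias keeps inactive units near 0
    return W, b

def run(W, b, string, start=0):
    """Drive the network with a one-hot encoded symbol string."""
    n_states, _, n_symbols = W.shape
    S = np.zeros(n_states)
    S[start] = 1.0                           # begin in the DFA start state
    for k in string:
        I = np.zeros(n_symbols)
        I[k] = 1.0
        # Second-order update: S_j <- sigmoid(sum_{i,k} W[j,i,k] S_i I_k + b_j)
        S = sigmoid(np.einsum('jik,i,k->j', W, S, I) + b)
    return S

# Example: DFA over {0, 1} accepting strings with an even number of 1s.
transitions = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
W, b = insert_dfa(transitions, n_symbols=2)
print(run(W, b, [1, 0, 1]).round(2))         # unit 0 high: two 1s seen (even parity)
```

Because the programmed fixed points sit near the corners of the unit hypercube, such a network behaves like the DFA before any training, yet its weights remain continuous and can still be refined by gradient descent (e.g. real-time recurrent learning, one of the keywords below) when the inserted rules disagree with the training data.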
Keywords: Clustering; deterministic finite-state automata; hidden-state problem; hints; model selection; prior knowledge; real-time recurrent learning; recurrent neural networks; regular grammars; rule extraction; rule insertion; rule revision; system identification
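For the extraction direction mentioned in the abstract, a standard approach in this literature quantizes the network's continuous state space into a finite set of discrete states and fills in a DFA transition table by search over the input alphabet. Again a hedged sketch rather than the authors' code: it reuses `sigmoid` and the programmed weights from the block above, and the bin count q and the breadth-first search are illustrative choices.

```python
from collections import deque
import numpy as np

def extract_dfa(W, b, n_symbols, q=2):
    """Extract a DFA by partitioning the network's state space.

    Each hidden unit's activation in [0, 1] is split into q equal bins;
    a tuple of bin indices is one discrete state, and the transition
    table is built by breadth-first search over input symbols.
    """
    n_units = W.shape[0]

    def quantize(S):
        return tuple(np.minimum((S * q).astype(int), q - 1))

    S0 = np.zeros(n_units)
    S0[0] = 1.0                              # same start state as in run()
    reps = {quantize(S0): S0}                # discrete state -> representative vector
    delta = {}
    frontier = deque([quantize(S0)])
    while frontier:
        d = frontier.popleft()
        for k in range(n_symbols):
            I = np.zeros(n_symbols)
            I[k] = 1.0
            S_next = sigmoid(np.einsum('jik,i,k->j', W, reps[d], I) + b)
            d_next = quantize(S_next)
            delta[(d, k)] = d_next
            if d_next not in reps:           # first visit: record and explore
                reps[d_next] = S_next
                frontier.append(d_next)
    return delta

# On the parity network above, q = 2 recovers a two-state machine
# isomorphic to the inserted DFA.
delta_hat = extract_dfa(W, b, n_symbols=2)
```

In published versions of this technique the extracted automaton is typically minimized afterwards with a standard DFA minimization algorithm, which makes the recovered rules independent of how finely the state space was partitioned.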