Continuous state/action reinforcement learning: A growing self-organizing map approach
Authors: Hesam Montazeri, Sajjad Moradi, Reza Safabakhsh
Affiliations: a Department of Computer Engineering and Information Technology, Amirkabir University of Technology, Tehran 15914, Iran
b Department of Computer Science and Engineering, University of Texas at Arlington, TX, USA
Abstract: This paper proposes an algorithm for handling continuous state/action spaces in the reinforcement learning (RL) problem. Extensive studies have addressed continuous-state RL problems, but RL problems with continuous action spaces remain comparatively underexplored. Because RL problems are non-stationary, very large, and continuous in nature, the proposed algorithm uses two growing self-organizing maps (GSOMs) to approximate the state and action spaces through the addition and deletion of neurons. The GSOM has been demonstrated to outperform the standard SOM in topology preservation, quantization-error reduction, and approximation of non-stationary distributions. The proposed algorithm simultaneously seeks the best representation of the state space, an accurate estimation of the Q-values, and an appropriate representation of the highly rewarded regions of the action space. Experimental results on delayed-reward, non-stationary, and large-scale problems demonstrate the very satisfactory performance of the proposed algorithm.
Keywords: Reinforcement learning; Growing self-organizing maps; Continuous state/action spaces
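The mechanism the abstract describes — two growing maps that discretize the continuous state and action spaces while a tabular Q-function is learned over their units — can be sketched roughly as follows. This is a hypothetical minimal illustration, not the paper's algorithm: the class name `GrowingMap`, the error-threshold growth rule, the toy immediate-reward task `r = -|s - a|`, and all parameter values are assumptions. The actual GSOM additionally maintains a neighborhood topology and deletes neurons, and the paper's method handles delayed rewards.

```python
import numpy as np

rng = np.random.default_rng(0)

class GrowingMap:
    """A 1-D growing map: a new prototype is inserted whenever the
    quantization error of the winning unit exceeds a threshold.
    Simplified stand-in for a GSOM (no topology, no neuron deletion)."""

    def __init__(self, grow_threshold=0.3, lr=0.1):
        self.units = [0.0]                  # start with a single prototype
        self.grow_threshold = grow_threshold
        self.lr = lr

    def bmu(self, x):
        """Return (index, distance) of the best-matching unit for x."""
        i = min(range(len(self.units)), key=lambda k: abs(x - self.units[k]))
        return i, abs(x - self.units[i])

    def adapt(self, x):
        """Grow a new unit at x if it is poorly represented; otherwise
        move the winner toward x. Returns the unit index assigned to x."""
        i, err = self.bmu(x)
        if err > self.grow_threshold:
            self.units.append(float(x))     # insert a new neuron at x
            return len(self.units) - 1
        self.units[i] += self.lr * (x - self.units[i])
        return i

def train(steps=3000, alpha=0.2, eps=0.2):
    """Toy immediate-reward task: state s ~ U(-1, 1), reward -|s - a|,
    so the optimal action equals the state. One map quantizes states,
    one quantizes actions; Q is tabulated over their unit indices."""
    smap, amap = GrowingMap(), GrowingMap()
    Q = {}                                  # (state unit, action unit) -> value
    for _ in range(steps):
        s = rng.uniform(-1.0, 1.0)
        si = smap.adapt(s)
        visited = [j for j in range(len(amap.units)) if (si, j) in Q]
        if visited and rng.random() > eps:
            a = amap.units[max(visited, key=lambda j: Q[(si, j)])]
        else:
            a = rng.uniform(-1.0, 1.0)      # explore the continuous action space
        ai = amap.adapt(a)
        r = -abs(s - a)
        q = Q.get((si, ai), 0.0)
        Q[(si, ai)] = q + alpha * (r - q)   # immediate-reward Q update
    return smap, amap, Q

def greedy_action(smap, amap, Q, s):
    """Pick the highest-valued action prototype for state s."""
    si, _ = smap.bmu(s)
    j = max(range(len(amap.units)), key=lambda k: Q.get((si, k), float("-inf")))
    return amap.units[j]
```

After training, the greedy action for a state should land within roughly the maps' quantization resolution of that state, illustrating how growth lets the representation refine itself where the data actually lies.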
This article is indexed in ScienceDirect and other databases.