Multiagent reinforcement learning applied to a chase problem in a continuous world
Authors: Hiroki Tamakoshi, Shin Ishii
Affiliations: (1) Nara Institute of Science and Technology, Takayama 8916-5, Ikoma, Nara, Japan; (2) Japan Science and Technology Corporation, CREST, Japan
Abstract: Reinforcement learning (RL) is one method of solving problems defined in multiagent systems. In the real world, the state is continuous and agents take continuous actions. Since conventional RL schemes are typically defined for discrete worlds, difficulties arise, such as how to represent the RL evaluation function. In this article, we extend an RL algorithm so that it is applicable to continuous-world problems. This extension is achieved by combining the RL algorithm with a function approximator. We employ Q-learning as the RL algorithm and a neural network model called the normalized Gaussian network as the function approximator. The extended RL method is applied to a chase problem in a continuous world. The experimental results show that our RL scheme was successful. This work was presented in part at the Fifth International Symposium on Artificial Life and Robotics, Oita, Japan, January 26–28, 2000.
Keywords: Multiagent, Reinforcement learning, Continuous system, Function approximation, Normalized Gaussian network
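
The sketch below illustrates the kind of combination the abstract describes: Q-learning whose Q-function is represented by a normalized Gaussian network over a continuous state space. It is a minimal illustration only, not the authors' implementation; the 2-D chase task, the grid of basis centres, the discretised action set (the paper itself treats continuous actions), and all hyperparameters are assumptions chosen for brevity.

    # Minimal sketch: Q-learning with a normalized Gaussian network (NGnet)
    # Q-function approximator on a toy continuous chase task.
    # All task details and hyperparameters are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    # Normalized Gaussian network: Q(s, a) = sum_i w[i, a] * b_i(s),
    # with b_i(s) = G_i(s) / sum_j G_j(s) and G_i an isotropic Gaussian.
    centers = np.array([[x, y] for x in np.linspace(-1, 1, 5)
                               for y in np.linspace(-1, 1, 5)])  # 25 basis centres
    sigma = 0.4                      # assumed basis width
    n_actions = 4                    # discretised actions: up/down/left/right
    weights = np.zeros((len(centers), n_actions))

    def basis(state):
        """Normalized Gaussian activations for a 2-D state."""
        g = np.exp(-np.sum((centers - state) ** 2, axis=1) / (2 * sigma ** 2))
        return g / (g.sum() + 1e-12)

    def q_values(state):
        return weights.T @ basis(state)

    alpha, gamma, epsilon = 0.1, 0.95, 0.1   # assumed learning parameters

    def update(state, action, reward, next_state, done):
        """One Q-learning step on the NGnet weights for the taken action."""
        b = basis(state)
        target = reward if done else reward + gamma * np.max(q_values(next_state))
        td_error = target - weights[:, action] @ b
        weights[:, action] += alpha * td_error * b

    # Toy continuous "chase": the learning agent pursues a fixed target point.
    moves = np.array([[0, 0.1], [0, -0.1], [-0.1, 0], [0.1, 0]])
    target = np.array([0.5, 0.5])

    for episode in range(300):
        state = rng.uniform(-1, 1, size=2)
        for step in range(50):
            if rng.random() < epsilon:
                action = int(rng.integers(n_actions))        # explore
            else:
                action = int(np.argmax(q_values(state)))     # exploit
            next_state = np.clip(state + moves[action], -1, 1)
            done = np.linalg.norm(next_state - target) < 0.15  # target "caught"
            reward = 1.0 if done else -0.01                    # small step cost
            update(state, action, reward, next_state, done)
            state = next_state
            if done:
                break

    print("Q-values near the target:", q_values(np.array([0.45, 0.45])))

The NGnet's normalized basis functions give a smooth interpolation of the Q-function between centres, which is what lets a tabular-style update generalize over the continuous state space in this sketch.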
This article is indexed in SpringerLink and other databases.