Multi-Robot Cooperation Based on Action Selection Level
Cite this article: CHU Hai-tao, HONG Bing-rong. Multi-robot cooperation based on action selection level [J]. Journal of Software, 2002, 13(9): 1773-1778.
Authors: CHU Hai-tao, HONG Bing-rong
Affiliation: Department of Computer Science and Engineering, Harbin Institute of Technology, Harbin, Heilongjiang 150001, China
Funding: Supported by the National Natural Science Foundation of China under Grant No. 69985002.
Abstract: In a multi-robot environment, overlap among the actions selected by individual robots degrades cooperation. This paper proposes a method for determining the action selection level, on the basis of which the acquisition of cooperative multi-robot behaviors can be well controlled. First, eight levels of action selection priority are defined and mapped to eight corresponding action subspaces. Then, using the local potential field method, the action selection priority of each robot is computed, so that each robot obtains the action subspace it needs to search. Within its action subspace, each robot uses reinforcement learning to choose an appropriate action. Finally, the method is applied to local cooperation training of robots in robot soccer. The effectiveness of the approach is confirmed in both simulation and real matches.

Keywords: multi-agent; reinforcement learning; cooperation; local potential field; action selection level
Received: 10 September 2001
Revised: 9 May 2002

Multi-Robot Cooperation Based on Action Selection Level
CHU Hai-tao, HONG Bing-rong. Multi-robot cooperation based on action selection level [J]. Journal of Software, 2002, 13(9): 1773-1778.
Authors: CHU Hai-tao, HONG Bing-rong
Abstract: In a multi-robot environment, the overlap of actions selected by individual robots makes the acquisition of cooperative behaviors less efficient. In this paper, an approach is proposed to determine the action selection priority level, based on which cooperative behaviors can be readily controlled. First, eight levels are defined for the action selection priority, which are correspondingly mapped to eight action subspaces. Second, using the local potential field method, the action selection priority level of each robot is calculated, and its action subspace is thus obtained. Then, reinforcement learning (RL) is used to choose a proper action for each robot within its action subspace. Finally, the proposed method was implemented in a robot soccer game, and its high efficiency was verified by the results of both computer simulation and real experiments.
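The pipeline summarized in the abstract (a local potential field determines a priority level, the level selects one of eight action subspaces, and RL picks an action inside that subspace) can be sketched roughly as follows. This is a minimal illustration only: the potential function, thresholds, action names, and epsilon-greedy Q-table are all hypothetical stand-ins, not the authors' implementation.

```python
import math
import random

NUM_LEVELS = 8  # eight action selection priority levels, as in the paper

# Hypothetical action subspaces, one per priority level.
ACTION_SUBSPACES = {lvl: [f"a{lvl}_{i}" for i in range(3)] for lvl in range(NUM_LEVELS)}

def local_potential(robot, ball, others):
    """Toy local potential field: attraction toward the ball, repulsion from nearby robots."""
    attract = -math.dist(robot, ball)
    repel = sum(1.0 / (math.dist(robot, o) + 1e-6) for o in others)
    return attract - repel

def priority_level(potential, lo=-20.0, hi=0.0):
    """Bucket a potential value into one of the eight levels (uniform thresholds, assumed)."""
    t = min(max((potential - lo) / (hi - lo), 0.0), 1.0)
    return min(int(t * NUM_LEVELS), NUM_LEVELS - 1)

def choose_action(q_table, level, epsilon=0.1):
    """Epsilon-greedy RL action choice, restricted to the level's action subspace."""
    subspace = ACTION_SUBSPACES[level]
    if random.random() < epsilon:
        return random.choice(subspace)
    return max(subspace, key=lambda a: q_table.get((level, a), 0.0))

# One decision step for a single robot (positions are illustrative):
robot, ball = (1.0, 2.0), (3.0, 4.0)
others = [(0.5, 2.5), (4.0, 1.0)]
lvl = priority_level(local_potential(robot, ball, others))
action = choose_action({}, lvl, epsilon=0.0)
```

Restricting each robot's RL search to its own subspace is what prevents the action overlap the abstract describes: robots with different priority levels cannot pick from the same pool of actions.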
Keywords: multi-agent; reinforcement learning; cooperation; local potential field; action selection level
This article is indexed by CNKI, VIP, Wanfang Data, and other databases.