Mobile Robot Path Planning Based on the RDC-Q Learning Algorithm
Cite this article: WANG Zi-qiang, WU Ji-gang. Mobile Robot Path Planning Based on RDC-Q Learning Algorithm[J]. Computer Engineering, 2014(6): 211-214.
Authors: WANG Zi-qiang  WU Ji-gang
Affiliation: College of Computer Science and Software, Tianjin Polytechnic University, Tianjin 300387, China
Abstract: The traditional Q-learning algorithm defines the robot's reward function only loosely, which leads to low learning efficiency. To solve this problem, a Reward Detailed Classification Q (RDC-Q) learning algorithm is presented. Combining the readings of all the robot's sensors, it divides the robot's states into 20 reward states and 15 punishment states according to the robot's distance from obstacles, and classifies the reward the robot receives at each time step by the safety level of its state. The robot is thereby driven toward states with higher safety levels, helping it learn faster and better. Simulation experiments in an obstacle-dense environment show that the algorithm converges noticeably faster than the traditional-reward Q-learning algorithm.

Keywords: path planning  mobile robot  reinforcement learning  Q-learning algorithm  reward function  learning efficiency

Mobile Robot Path Planning Based on RDC-Q Learning Algorithm
WANG Zi-qiang, WU Ji-gang. Mobile Robot Path Planning Based on RDC-Q Learning Algorithm[J]. Computer Engineering, 2014(6): 211-214.
Authors:WANG Zi-qiang  WU Ji-gang
Affiliation:(College of Computer Science and Software, Tianjin Polytechnic University, Tianjin 300387, China)
Abstract: The reward function in the traditional Q-learning algorithm is defined loosely, which results in low learning efficiency. To solve this problem, a Reward Detailed Classification Q (RDC-Q) learning algorithm is proposed. It synthesizes the values of all the robot's sensors and divides the robot's states into 20 reward states and 15 punishment states according to the distance between the robot and obstacles. The reward value the robot receives at each time step is classified by its state's security level, which drives the robot toward states with higher security levels, so the robot learns more quickly and effectively. Simulation of the new algorithm in an environment with dense obstacles shows that it converges markedly faster than traditional-reward Q methods.
Keywords:path planning  mobile robot  reinforcement learning  Q-learning algorithm  reward function  learning efficiency
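The reward-classification idea described in the abstract can be sketched in code. The following is a minimal illustration, assuming a toy 5×5 grid world with a single obstacle and a coarse four-level safety classification standing in for the paper's 20 reward / 15 punishment state classes; all thresholds, reward values, and hyperparameters here are hypothetical, not taken from the paper:

```python
import random

random.seed(0)

SIZE, OBST, GOAL = 5, (2, 2), (4, 4)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def safety_level(pos):
    # Chebyshev distance to the single obstacle; the four levels are
    # illustrative stand-ins for the paper's finer state classification.
    d = max(abs(pos[0] - OBST[0]), abs(pos[1] - OBST[1]))
    if d == 0:
        return -2   # collision: strongest punishment class
    if d == 1:
        return -1   # adjacent to obstacle: mild punishment class
    if d == 2:
        return 1    # safe: mild reward class
    return 2        # very safe: strongest reward class

def step(pos, a):
    """Move within the grid; reward is graded by the safety class of the new state."""
    nxt = (min(SIZE - 1, max(0, pos[0] + a[0])),
           min(SIZE - 1, max(0, pos[1] + a[1])))
    if nxt == GOAL:
        return nxt, 10.0, True
    if nxt == OBST:
        return pos, -5.0, False          # bounce off with a strong penalty
    return nxt, 0.2 * safety_level(nxt), False

Q = {}
def q(s, a):
    return Q.get((s, a), 0.0)

ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2
for _ in range(2000):                    # epsilon-greedy Q-learning episodes
    s = (0, 0)
    for _ in range(50):
        a = random.choice(ACTIONS) if random.random() < EPS else \
            max(ACTIONS, key=lambda x: q(s, x))
        s2, r, done = step(s, a)
        target = r + GAMMA * (0.0 if done else max(q(s2, x) for x in ACTIONS))
        Q[(s, a)] = q(s, a) + ALPHA * (target - q(s, a))
        s = s2
        if done:
            break

# Greedy rollout of the learned policy from the start cell.
s, path = (0, 0), [(0, 0)]
while s != GOAL and len(path) < 20:
    s, _, _ = step(s, max(ACTIONS, key=lambda x: q(s, x)))
    path.append(s)
```

Because every intermediate state contributes a graded safety reward rather than a flat zero, value differences between safe and unsafe regions appear from the very first episodes, which is the mechanism the paper credits for faster convergence.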
This article is indexed in CNKI, VIP (维普), and other databases.