Neural networks based reinforcement learning for mobile robots obstacle avoidance
Affiliation: 1. Department of Automotive and Transport Engineering, Faculty of Mechanical Engineering, University Transilvania of Brasov, Str. Universităţii nr. 1, Brasov 500036, Romania; 2. Department of Automotive and Transport Engineering, Faculty of Mechanical Engineering, University Transilvania of Brasov, Brasov 500036, Romania.
Abstract: This study proposes a new approach to the problem of autonomous robot movement in environments that contain both static and dynamic obstacles. The purpose of this research is to provide mobile robots with a collision-free trajectory in an uncertain workspace containing both stationary and moving entities. The developed solution uses Q-learning and a neural network planner to solve path planning problems. The presented algorithm proves effective in navigation scenarios where global information is available. The speed of the robot can be set prior to the computation of the trajectory, which is a significant advantage in time-constrained applications. The solution is deployed both in Virtual Reality (VR), for easier visualization and safer testing, and on a real mobile robot for experimental validation. The algorithm is compared with Powerbot's proprietary ARNL navigation algorithm. Results show that the proposed solution achieves a good convergence rate at a satisfactory speed.
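The abstract gives no implementation details, so the following is only a minimal illustrative sketch of the Q-learning component: a tabular agent learning a collision-free path to a goal on a small grid with static obstacles. The grid layout, reward values, and hyperparameters are assumptions made for illustration and are not taken from the paper, which additionally employs a neural network planner and handles dynamic obstacles.

```python
# Minimal tabular Q-learning sketch for grid-based obstacle avoidance.
# Illustrative only; grid, rewards, and hyperparameters are assumed,
# not taken from the paper.
import random

# Hypothetical 5x5 grid: 0 = free cell, 1 = static obstacle.
GRID = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
START, GOAL = (0, 0), (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

# Assumed hyperparameters.
ALPHA, GAMMA, EPSILON, EPISODES = 0.1, 0.9, 0.2, 2000

Q = {}  # state-action value table: Q[(row, col)] -> list of action values


def step(state, action):
    """Apply an action; return the next state and the reward."""
    r, c = state[0] + action[0], state[1] + action[1]
    if not (0 <= r < 5 and 0 <= c < 5) or GRID[r][c] == 1:
        return state, -10.0   # collision or leaving the grid is penalized
    if (r, c) == GOAL:
        return (r, c), 100.0  # reaching the goal is rewarded
    return (r, c), -1.0       # small step cost encourages short paths


for _ in range(EPISODES):
    state = START
    while state != GOAL:
        q = Q.setdefault(state, [0.0] * len(ACTIONS))
        # Epsilon-greedy action selection.
        if random.random() < EPSILON:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q[i])
        nxt, reward = step(state, ACTIONS[a])
        q_next = max(Q.setdefault(nxt, [0.0] * len(ACTIONS)))
        # Standard Q-learning update rule.
        q[a] += ALPHA * (reward + GAMMA * q_next - q[a])
        state = nxt
```

After training, greedily following the highest-valued action from each cell yields a collision-free route from START to GOAL; in the paper's setting this tabular table would be replaced or augmented by the neural network planner described in the abstract.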
Keywords: