Indoor navigation of a non-holonomic mobile robot using a visual memory |
| |
Authors: | Jonathan Courbon Youcef Mezouar Philippe Martinet |
| |
Affiliation: | (1) LASMEA UBP Clermont II, CNRS—UMR6602, 24 Avenue des Landais, 63177 Aubiere, France |
| |
Abstract: | When navigating in an unknown environment for the first time, a natural behavior consists in memorizing some key views along
the traversed path, in order to use these references as checkpoints for a future navigation mission. The navigation framework
for wheeled mobile robots presented in this paper is based on this assumption. During a human-guided learning step, the robot
performs paths which are sampled and stored as a set of ordered key images, acquired by an embedded camera. The set of these
obtained visual paths is topologically organized and provides a visual memory of the environment. Given an image of one of
the visual paths as a target, the robot navigation mission is defined as a concatenation of visual path subsets, called visual
route. When running autonomously, the robot is controlled by a visual servoing law adapted to its nonholonomic constraint.
Based on the regulation of successive homographies, this control guides the robot along the reference visual route without
explicitly planning any trajectory. The proposed framework has been designed for the entire class of central catadioptric
cameras (including conventional cameras). It has been validated on two architectures. In the first, the algorithms have
been implemented on dedicated hardware and the robot is equipped with a standard perspective camera. In the second,
they have been implemented on a standard PC and an omnidirectional camera is used.
|
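The visual route described in the abstract is a concatenation of visual-path subsets drawn from a topologically organized set of key images. As a minimal sketch (not the authors' implementation), the visual memory can be modeled as a directed graph whose nodes are key images and whose edges connect consecutive key images of the learned paths; a visual route to a target image is then found by graph search. All names and the toy memory below are illustrative assumptions.

```python
from collections import deque

def visual_route(memory, start, target):
    """Return the ordered list of key images linking start to target,
    found by breadth-first search over the visual-memory graph."""
    queue = deque([start])
    parent = {start: None}
    while queue:
        node = queue.popleft()
        if node == target:
            # Backtrack from target to start to recover the route.
            route = []
            while node is not None:
                route.append(node)
                node = parent[node]
            return route[::-1]
        for nxt in memory.get(node, ()):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None  # target not reachable in the visual memory

# Toy visual memory: two learned visual paths sharing key image "I2".
memory = {
    "I0": ["I1"],
    "I1": ["I2"],
    "I2": ["I3", "I4"],  # junction between the two learned paths
    "I3": [],
    "I4": ["I5"],
    "I5": [],
}

print(visual_route(memory, "I0", "I5"))  # ['I0', 'I1', 'I2', 'I4', 'I5']
```

During autonomous navigation, the robot would then servo toward each key image of the route in turn, using the homography-based control law mentioned in the abstract.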
| |
Keywords: | Visual navigation Mobile robot Central camera Visual-based control |
Indexed by SpringerLink and other databases.
|