An accurate and robust visual-compass algorithm for robot-mounted omnidirectional cameras |
| |
Authors: | Gian Luca Mariottini, Stefano Scheggi, Fabio Morbidi, Domenico Prattichizzo |
| |
Affiliation: | 1. Department of Computer Science and Engineering, University of Texas at Arlington, Engineering Research Building, 500 UTA Boulevard, Arlington, TX 76019, USA; 2. Department of Information Engineering, University of Siena, Via Roma 56, I-53100 Siena, Italy |
| |
Abstract: | Due to their wide field of view, omnidirectional cameras are becoming ubiquitous in many mobile robotic applications. A challenging problem is to use these sensors, mounted on mobile robotic platforms, as visual compasses (VCs) that estimate the rotational motion of the camera/robot from the omnidirectional video stream. Existing VC algorithms suffer from practical limitations, since they require precise knowledge of either the camera-calibration parameters or the 3-D geometry of the observed scene. In this paper we present a novel multiple-view geometry constraint for paracatadioptric views of lines in 3-D, which we use to design a VC algorithm that requires knowledge of neither the camera-calibration parameters nor the 3-D scene geometry. In addition, our algorithm runs in real time, since it relies on a closed-form estimate of the camera/robot rotation, and it can address the image-feature correspondence problem. Extensive simulations and experiments with real robots demonstrate the accuracy and robustness of the proposed method. |
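The abstract's notion of a closed-form rotation estimate can be illustrated with a minimal sketch that is not the authors' line-based method: assuming a pure rotation about the vertical axis and a set of matched feature bearings (azimuth angles) in two consecutive panoramic frames, the yaw is recovered in closed form as the circular mean of the per-feature bearing differences. The function name `estimate_yaw` and the synthetic data are hypothetical, for illustration only.

```python
import numpy as np

def estimate_yaw(theta_prev, theta_curr):
    """Closed-form yaw estimate (radians) from matched feature bearings.

    theta_prev, theta_curr: azimuths of the same features in two
    consecutive panoramic frames. Under a pure rotation about the
    vertical axis, theta_curr ~= theta_prev + yaw (mod 2*pi), so the
    circular mean of the differences is a least-squares-style estimate.
    """
    d = np.asarray(theta_curr) - np.asarray(theta_prev)
    # Circular mean: average the unit vectors (cos d, sin d) so that
    # wrap-around at +/- pi does not bias the estimate.
    return np.arctan2(np.mean(np.sin(d)), np.mean(np.cos(d)))

# Synthetic check: 50 bearings rotated by 0.3 rad plus small noise.
rng = np.random.default_rng(0)
prev = rng.uniform(-np.pi, np.pi, 50)
curr = (prev + 0.3 + 0.01 * rng.standard_normal(50) + np.pi) % (2 * np.pi) - np.pi
yaw = estimate_yaw(prev, curr)
```

Because the estimate is a single closed-form expression rather than an iterative optimization, it runs in constant time per frame, which mirrors the real-time property claimed in the abstract.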
| |
Keywords: | |
This article is indexed in ScienceDirect and other databases.
|