Visual–inertial navigation for pinpoint planetary landing using scale-based landmark matching
Affiliation: 1. ONERA, 2 avenue Édouard Belin, 31000 Toulouse, France; 2. ONERA, Chemin de la Hunière, 91120 Palaiseau, France; 3. European Space Agency, Keplerlaan 1, 2201 AZ Noordwijk, The Netherlands; 4. Airbus Defence and Space, 61 route de Verneuil, 78130 Les Mureaux, France
Abstract: Landing an autonomous spacecraft within 100 m of a mapped target is a navigation challenge in planetary exploration. Vision-based approaches attempt to pair 2D features detected in camera images with 3D mapped landmarks to reach the required precision. This paper presents a vision-aided inertial navigation system for pinpoint planetary landing called LION (Landing Inertial and Optical Navigation), which can operate over any type of terrain, regardless of topography. LION uses measurements from a novel image-to-map matcher to update, through a tightly coupled data-fusion scheme, the state of an extended Kalman filter propagated with inertial data. The image processing uses the state and covariance predictions from the filter to determine the regions and extraction scales in which to search for non-ambiguous landmarks in the image. The image-scale management process operates per landmark and greatly improves the repeatability rate between the map and descent images. A lunar-representative optical test bench called Visilab was also designed to test LION. The observability of absolute navigation performance in Visilab is evaluated with a model developed specifically for this purpose. Finally, the system performance is evaluated at a number of altitudes, along with its robustness to off-nadir camera angles, illumination changes, a different map-generation process, and non-planar topography. The error converges to a mean of 4 m and a 3-RMS dispersion of 47 m at 3 km of altitude on the scaled test setup.
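The covariance-guided search described in the abstract — using the filter's state and covariance predictions to bound where (and at what scale) a landmark should be extracted in the descent image — can be sketched roughly as follows. This is a minimal, hypothetical illustration under a nadir-pointing pinhole-camera assumption; the function name, geometry, and numbers are assumptions for illustration, not LION's actual algorithm:

```python
import numpy as np

def predicted_search_window(p_lm, cam_pos, f, P_pos, n_sigma=3.0):
    """Project a 3D mapped landmark into a nadir-pointing pinhole camera
    and derive a 3-sigma pixel search window from the predicted position
    covariance, plus a predicted ground sample distance for scale selection.

    p_lm    : landmark position in the world frame (m)
    cam_pos : predicted camera position in the world frame (m)
    f       : focal length in pixels
    P_pos   : 3x3 predicted position covariance (m^2)
    """
    d = p_lm - cam_pos                       # landmark in camera axes (z = range)
    u = f * d[0] / d[2]                      # projected pixel coordinates
    v = f * d[1] / d[2]
    # Jacobian of (u, v) w.r.t. camera position (note d = p_lm - cam_pos)
    J = -np.array([[f / d[2], 0.0, -f * d[0] / d[2] ** 2],
                   [0.0, f / d[2], -f * d[1] / d[2] ** 2]])
    S = J @ P_pos @ J.T                      # 2x2 pixel covariance
    half_window = n_sigma * np.sqrt(np.diag(S))   # half-size of search box (px)
    image_gsd = d[2] / f                     # predicted m/px, drives extraction scale
    return (u, v), half_window, image_gsd
```

Restricting feature extraction to such a window, at a scale matched to the predicted ground sample distance, is what would reduce ambiguous matches between the onboard map and the descent image.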
Keywords: Navigation; Vision; Inertial; Landing; Precision; Moon
This article is indexed in ScienceDirect and other databases.