Robot vision system for keyframe global map establishment and robot localization based on graphic content matching
Cite this article: CAO Tian-yang, CAI Hao-yuan, FANG Dong-ming, LIU Chang. Robot vision system for keyframe global map establishment and robot localization based on graphic content matching[J]. Optics and Precision Engineering, 2017, 25(8): 2221-2232.
Authors: CAO Tian-yang  CAI Hao-yuan  FANG Dong-ming  LIU Chang
Affiliation: 1. State Key Laboratory of Transducer Technology, Institute of Electronics, Chinese Academy of Sciences, Beijing 100190, China; 2. University of Chinese Academy of Sciences, Beijing 100190, China
Fund project: Supported by the National Natural Science Foundation of China
Abstract: To overcome the kidnapping problem and the interference of similar objects in indoor robot localization, a vision system with a graphic content matching capability was designed, enabling the robot to extract a keyframe sequence, build an indoor global map, and localize itself autonomously. Since the main disturbance affecting graphic content matching is the image distortion caused by changes in the robot's viewing angle and position, a graphic content matching method was designed based on modeling and feature analysis of the image distortion of indoor objects. The method has two core steps: extraction of the overlapping region between two images, and reconstruction of that region by sub-block decomposition and matching. After the distortions of the two frames to be matched are adjusted to be consistent, their contents are matched and their similarity is computed accurately. The method exploits the distinct scenery and layout of each room to suppress the influence of similar objects, and it extracts, from the video recorded while the robot learns the environment, a sequence of keyframes that are widely spaced yet mutually overlapping and connected, from which a global navigation map of the entire building interior is built. During operation, the content of the real-time visual image is matched against the keyframe sequence of the map, and the keyframe most similar to the current view is retrieved to localize the robot. Tests in an experimental area consisting of 3 rooms and 2 corridors show that the robot effectively eliminates the interference of similar objects and, even when kidnapping occurs, still localizes itself accurately by matching against the global map, with a matching accuracy of at least 93% and a localization error (RMSE) below 0.5 m.

Keywords: robot vision  robot self-localization  keyframe global map  graphic content matching  image distortion
Received: 2017-01-03
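
As a rough illustration of the matching step summarized in the abstract, the sketch below aligns two frames on their overlapping region and then scores similarity sub-block by sub-block. It is a minimal sketch only: the ORB features, RANSAC homography, 32-pixel blocks, and per-block normalized cross-correlation are assumptions standing in for the paper's distortion model, not the authors' actual design.

```python
# Minimal sketch (assumed design, not the paper's): align two frames on their
# overlap, then score the overlap block by block so residual distortion in one
# block does not dominate the overall similarity.
import cv2
import numpy as np

def content_similarity(img_a, img_b, block=32, min_matches=12):
    """Return a similarity score in [0, 1] for two grayscale frames."""
    orb = cv2.ORB_create(1000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0.0

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    if len(matches) < min_matches:
        return 0.0

    # Homography maps frame A into frame B's viewpoint, so both frames carry
    # the same distortion inside the overlapping region.
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return 0.0

    h, w = img_b.shape[:2]
    warped = cv2.warpPerspective(img_a, H, (w, h))
    overlap = warped > 0  # crude mask of the region covered by the warped frame

    # Sub-block decomposition: per-block normalized cross-correlation.
    scores = []
    for y in range(0, h - block, block):
        for x in range(0, w - block, block):
            if overlap[y:y + block, x:x + block].mean() < 0.9:
                continue  # skip blocks mostly outside the overlap
            a = warped[y:y + block, x:x + block].astype(np.float32)
            b = img_b[y:y + block, x:x + block].astype(np.float32)
            a -= a.mean()
            b -= b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-6
            scores.append(float((a * b).sum() / denom))
    return float(np.mean(scores)) if scores else 0.0
```

Scoring per block rather than over the whole frame mirrors the abstract's sub-block decomposition idea: locally consistent content decides the score, while blocks that fall outside the overlap are ignored.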

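The map-building and localization loop described in the abstract can be sketched in the same spirit: a frame becomes a keyframe only if it is far enough from the previous keyframe yet still overlaps it, and localization simply returns the pose of the best-matching keyframe, which is what allows recovery after kidnapping. The Keyframe structure, the stored capture poses, and the thresholds below are hypothetical; `similarity` stands for any frame-matching function such as the one sketched above.

```python
# Minimal sketch (assumed structure, not the paper's): a sparse but connected
# keyframe map, and localization by exhaustive matching against it.
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

import numpy as np

@dataclass
class Keyframe:
    image: np.ndarray            # frame captured while the robot learned the route
    pose: Tuple[float, float]    # (x, y) position where the frame was taken

def build_map(frames: List[np.ndarray], poses: List[Tuple[float, float]],
              similarity: Callable[[np.ndarray, np.ndarray], float],
              min_gap: float = 1.5, min_overlap: float = 0.4) -> List[Keyframe]:
    """Keep frames that are widely spaced yet still overlap the previous
    keyframe, so the global map stays sparse but connected."""
    keyframes = [Keyframe(frames[0], poses[0])]
    for img, pose in zip(frames[1:], poses[1:]):
        last = keyframes[-1]
        gap = float(np.hypot(pose[0] - last.pose[0], pose[1] - last.pose[1]))
        if gap >= min_gap and similarity(img, last.image) >= min_overlap:
            keyframes.append(Keyframe(img, pose))
    return keyframes

def localize(live_frame: np.ndarray, keyframes: List[Keyframe],
             similarity: Callable[[np.ndarray, np.ndarray], float],
             accept: float = 0.5) -> Optional[Tuple[float, float]]:
    """Return the pose of the best-matching keyframe; no motion prior is used,
    so a kidnapped robot is relocalized the same way as a tracked one."""
    scores = [similarity(live_frame, kf.image) for kf in keyframes]
    best = int(np.argmax(scores))
    return keyframes[best].pose if scores[best] >= accept else None
```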