Similar Documents
19 similar documents found.
1.
Research on Cylindrical Unwrapping of Hyperboloid Catadioptric Panoramic Images (cited: 1; self-citations: 0; other citations: 1)
Catadioptric panoramic vision navigation is an emerging technology in autonomous mobile robot research, but the raw images produced by this vision sensor are severely distorted and unsuited to tasks such as target localization and tracking. To address this, a panoramic-image center-location method is proposed, and bilinear interpolation is applied to image sampling to achieve cylindrical unwrapping of hyperboloid catadioptric panoramic images. Experimental results show that the unwrapped images offer measurably better human-machine interaction.
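The bilinear sampling used in such cylindrical unwrapping can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the mirror geometry is reduced to a simple annulus-to-rectangle mapping around an assumed image center.

```python
import numpy as np

def unwrap_panorama(img, center, r_min, r_max, out_w, out_h):
    """Unwrap an annular catadioptric panorama to a cylindrical image.

    Each output pixel (u, v) maps to a polar coordinate (theta, r)
    around the mirror center; the source image is sampled with
    bilinear interpolation at the resulting sub-pixel location.
    """
    cx, cy = center
    out = np.zeros((out_h, out_w), dtype=np.float64)
    for v in range(out_h):
        # rows of the output correspond to radii on the mirror
        r = r_min + (r_max - r_min) * v / (out_h - 1)
        for u in range(out_w):
            # columns correspond to the azimuth angle
            theta = 2.0 * np.pi * u / out_w
            x = cx + r * np.cos(theta)
            y = cy + r * np.sin(theta)
            x0, y0 = int(np.floor(x)), int(np.floor(y))
            dx, dy = x - x0, y - y0
            if 0 <= x0 < img.shape[1] - 1 and 0 <= y0 < img.shape[0] - 1:
                # bilinear blend of the four surrounding source pixels
                out[v, u] = ((1 - dx) * (1 - dy) * img[y0, x0]
                             + dx * (1 - dy) * img[y0, x0 + 1]
                             + (1 - dx) * dy * img[y0 + 1, x0]
                             + dx * dy * img[y0 + 1, x0 + 1])
    return out
```

A constant-intensity panorama should unwrap to a constant cylindrical image, which makes a convenient sanity check for the interpolation weights.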

2.
This paper studies simultaneous localization and mapping (SLAM) for robots with panoramic vision. Conventional vision has a narrow field of view and limited ability to continuously track and localize landmarks; to address this, a panoramic-vision robot SLAM method based on an improved extended Kalman filter (EKF) is proposed. Panoramic vision captures information about the robot's surroundings, environment features are extracted from it to locate landmarks, and the EKF then updates the robot pose and the map concurrently. Simulation and physical-robot experiments verify the accuracy and effectiveness of the algorithm, and panoramic vision achieves higher localization accuracy than conventional vision.
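The EKF correction step at the heart of such a SLAM method can be sketched generically. The symbols below are the standard filter quantities, not the paper's exact formulation; the measurement function and Jacobian are supplied by the caller.

```python
import numpy as np

def ekf_update(mu, Sigma, z, h, H, R):
    """One EKF measurement update.

    mu, Sigma -- prior state mean and covariance
    z         -- observation vector
    h         -- measurement function, h(mu) predicts the observation
    H         -- Jacobian of h at mu
    R         -- measurement noise covariance
    """
    y = z - h(mu)                        # innovation
    S = H @ Sigma @ H.T + R              # innovation covariance
    K = Sigma @ H.T @ np.linalg.inv(S)   # Kalman gain
    mu_new = mu + K @ y
    Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma
    return mu_new, Sigma_new
```

With a direct observation of a 2-D position and unit covariances, the update splits the difference between prior and measurement, which is an easy case to verify by hand.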

3.
朱齐丹, 李科, 雷艳敏, 孟祥杰. 《机器人》 2011, 33(5): 606-613
A method for guiding a robot home using a panoramic vision system is proposed. A panoramic image of the starting (Home) position is captured with the panoramic vision device, and the SURF (Speeded-Up Robust Feature) algorithm extracts feature points from it as natural landmarks. During homing, the panoramic image acquired at the current position is matched against the Home image to establish correspondences between the natural landmark points. ...

4.
Research on a Localization Method for Soccer Robots Based on Panoramic and Forward Vision (cited: 2; self-citations: 0; other citations: 2)
To meet the demand in robot soccer matches for fast, accurate target-position information, a robot vision system combining panoramic and forward vision was designed. For each single vision sensor, localization uses image-coordinate transformation with arctangent computation, piecewise proportional mapping, and the pinhole camera model, with blob area as the criterion for selecting the localization mode, yielding high-precision results. Experimental results confirm the soundness of the vision system design and the effectiveness of the localization method.

5.
In unknown environments it is difficult for a robot to quickly acquire information about its surroundings, build a real-time environment map, and operate autonomously. A vision-based navigation method is therefore proposed: a panoramic camera serves as the robot's vision sensor to collect environment information; the color map is segmented by fuzzy clustering in HSI space to obtain a binary environment image; the image is rasterized into a grid map, and Dijkstra's algorithm with 8-direction connectivity performs global path planning to compute the optimal path, enabling fast, autonomous motion of the mobile robot. Simulation experiments show the method is effective and feasible.
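The 8-connected Dijkstra planning step on a binary occupancy grid might look like the sketch below. The grid encoding (0 = free, 1 = occupied) and diagonal cost of sqrt(2) are assumptions, not details from the abstract.

```python
import heapq

def dijkstra_8(grid, start, goal):
    """Shortest path on a binary occupancy grid with 8-connectivity.

    grid[r][c] == 0 means free; straight moves cost 1, diagonals sqrt(2).
    Returns (path cost, list of cells) or (inf, []) if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    moves = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            # reconstruct the path by walking the predecessor map
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return d, path[::-1]
        if d > dist.get(node, float('inf')):
            continue  # stale queue entry
        r, c = node
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + (2 ** 0.5 if dr and dc else 1.0)
                if nd < dist.get((nr, nc), float('inf')):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float('inf'), []
```

On an empty 3x3 grid the diagonal route from corner to corner costs 2*sqrt(2), which checks the diagonal-cost handling.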

6.
For outdoor robot navigation under variable illumination and electromagnetic interference, a self-localization system based on panoramic near-infrared vision and coded landmarks is proposed. Under near-infrared illumination, panoramic vision recognizes landmarks encoded in a barcode format, and an extended Kalman filter (EKF) fuses the vision data with odometry to achieve robot self-localization. Experiments show that the method eliminates the effect of illumination changes on localization results during large-scale outdoor navigation.

7.
As the intelligence required of soccer robots has risen, the robot soccer committee removed the colored corner posts and goals from the field, invalidating earlier self-localization methods based on goal and post landmarks. This paper proposes a soccer-robot self-localization method based on visual image-feature matching that fuses information from odometry, a compass, and a panoramic camera. First, the robot obtains a candidate pose from the odometry and compass. Then the vision system uses this candidate pose as a transformation factor to rotate and translate the scene image captured in real time. Finally, the white lines in the transformed image are compared with those in a reference image, and the transformation factor that maximizes the image match is taken as the self-localization result. Experiments show the method achieves high localization accuracy and meets the strict real-time requirements of competition.
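The match-scoring step of such an approach can be sketched with a translation-only search over white-line masks. This is a simplified, hypothetical illustration (rotation is omitted for brevity, and the binary-mask representation is an assumption):

```python
import numpy as np

def best_shift(observed, reference, max_shift):
    """Slide the observed white-line mask over the reference mask and
    return the translation (dy, dx) with the highest pixel overlap."""
    best, best_score = (0, 0), -1
    h, w = reference.shape
    ys, xs = np.nonzero(observed)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.zeros_like(reference)
            ys2, xs2 = ys + dy, xs + dx
            ok = (ys2 >= 0) & (ys2 < h) & (xs2 >= 0) & (xs2 < w)
            shifted[ys2[ok], xs2[ok]] = 1
            score = int((shifted & reference).sum())  # overlap count
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best, best_score
```

Shifting a horizontal line two rows down to meet the reference line exercises the search: the full-overlap translation should win.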

8.
A Robust and Efficient Self-Localization Method for Soccer Robots (cited: 1; self-citations: 0; other citations: 1)
To solve the soccer-robot self-localization problem in the Middle Size League competition environment, this paper proposes a new self-localization method based on omnidirectional vision. The method first extracts the points corresponding to the field's white lines from the panoramic image and uses them as the robot's observations; it then computes the matching error between the observations and a static map and, depending on the size of this error, decides whether to reselect the principal pose; finally, a gradient-based optimization refines the principal pose, which is reported as the localization result. Simulation results demonstrate the method's effectiveness.

9.
Research on Simultaneous Localization and Mapping for Mobile Robots Based on Panoramic Vision (cited: 8; self-citations: 0; other citations: 8)
A method for simultaneous localization and mapping of a mobile robot based on panoramic vision (Omni-vSLAM) is proposed. Color regions are extracted as visual landmarks; an observation model of the system is established from an analysis of omnidirectional imaging geometry and localization uncertainty, landmark positions are estimated, and an extended Kalman filter (EKF) then updates the robot position and map information concurrently. Experimental results show that, while building the environment map, the method effectively corrects the accumulated localization error caused by odometry.

10.
A New Omnidirectional Vision System for Soccer Robots (cited: 3; self-citations: 0; other citations: 3)
The omnidirectional vision system is one of the most important sensors of a RoboCup Middle Size League soccer robot. To achieve object recognition and self-localization for soccer robots, the design and implementation of a new omnidirectional vision system is presented. In hardware, a new omnidirectional mirror is designed that combines a horizontally constant-resolution surface with a vertically constant-resolution surface, capturing near-ideal panoramic images; in software, a novel self-localization algorithm based on field marking-line information, tailored to the mirror's imaging characteristics, yields accurate self-localization values. Experimental results show that the omnidirectional vision system can be applied effectively in robot soccer matches.

11.
Self-localization is a major research task for autonomous mobile robots in the RoboCup Middle Size League. This paper proposes a self-localization algorithm for Middle Size League robots based on white points on the field marking lines. A digital compass determines the robot's heading in the world coordinate frame, and the points on the white marking lines determine its coordinates; the algorithm further fuses the vision-based localization result with odometry, making the result more robust.

12.
A vision-based navigation system is presented for determining a mobile robot's position and orientation using panoramic imagery. Omni-directional sensors are useful in obtaining a 360° field of view, permitting various objects in the vicinity of a robot to be imaged simultaneously. Recognizing landmarks in a panoramic image from an a priori model of distinct features in an environment allows a robot's location information to be updated. A system is shown for tracking vertex and line features for omni-directional cameras constructed with catadioptric (containing both mirrors and lenses) optics. With the aid of the panoramic Hough transform, line features can be tracked without restricting the mirror geometry to satisfy the single-viewpoint criterion. This allows rectangular scene features to be used as landmarks. Two paradigms for localization are explored, with experiments conducted on synthetic and real images. A working implementation on a mobile robot is also shown.

13.
Fast and Accurate Self-Localization of Mobile Robots Based on Multi-Sensor Information Fusion (cited: 3; self-citations: 1; other citations: 2)
By analyzing the sensing models of omnidirectional vision, an electronic compass, odometry, and other sensors, a global self-localization algorithm for a mobile robot in a given environment model was designed and implemented. The algorithm uses Monte Carlo particle filtering to fuse observations acquired by multiple sensors at different viewpoints. Compared with traditional single-sensor localization, it combines the incomplete measurements from multiple homogeneous or heterogeneous sensors with information from an associated database, reducing the ...
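One Monte Carlo localization step can be sketched in one dimension: reweight the particles by the likelihood of an observation, then resample in proportion to the weights. This is a generic sketch of the particle-filter idea, not the paper's algorithm; the Gaussian range-likelihood is an assumption.

```python
import numpy as np

def mcl_step(particles, weights, z, expected, sigma, rng):
    """One Monte Carlo localization correction/resampling step.

    particles -- array of candidate states (here 1-D positions)
    weights   -- current importance weights
    z         -- observed measurement
    expected  -- function mapping states to predicted measurements
    sigma     -- measurement noise standard deviation
    rng       -- numpy random Generator used for resampling
    """
    # Gaussian likelihood of the observation under each particle
    likelihood = np.exp(-0.5 * ((expected(particles) - z) / sigma) ** 2)
    weights = weights * likelihood
    weights = weights / weights.sum()
    # resample: particles consistent with z survive, others die out
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

With half the particles at the true position and half far away, one step should concentrate the whole population on the consistent hypothesis.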

14.
Research on Robot Self-Localization Based on Panoramic Vision and Odometry (cited: 8; self-citations: 1; other citations: 8)
By analyzing the uncertainty in the sensing models of panoramic vision and odometry, a landmark-observation-based self-localization algorithm for mobile robots is proposed. The algorithm uses a Kalman filter to fuse observations acquired by multiple sensors at different viewpoints. Compared with traditional single-sensor localization, it exploits the complementary characteristics of vision and odometry to improve localization accuracy. Experimental results confirm the method's effectiveness.

15.
Mobile Robot Self-Localization without Explicit Landmarks (cited: 3; self-citations: 0; other citations: 3)
Localization is the process of determining the robot's location within its environment. More precisely, it is a procedure which takes as input a geometric map, a current estimate of the robot's pose, and sensor readings, and produces as output an improved estimate of the robot's current pose (position and orientation). We describe a combinatorially precise algorithm which performs mobile robot localization using a geometric model of the world and a point-and-shoot ranging device. We also describe a rasterized version of this algorithm which we have implemented on a real mobile robot equipped with a laser rangefinder we designed. Both versions of the algorithm allow for uncertainty in the data returned by the range sensor. We also present experimental results for the rasterized algorithm, obtained using our mobile robots at Cornell. Received November 15, 1996; revised January 13, 1998.

16.
As the autonomy of personal service robotic systems increases, so does their need to interact with their environment. The most basic interaction a robotic agent may have with its environment is to sense and navigate through it. For many applications it is not usually practical to provide robots in advance with valid geometric models of their environment. The robot will need to create these models by moving around and sensing the environment, while minimizing the complexity of the required sensing hardware. Here, an information-based iterative algorithm is proposed to plan the robot's visual exploration strategy, enabling it to most efficiently build a graph model of its environment. The algorithm is based on determining the information present in sub-regions of a 2-D panoramic image of the environment from the robot's current location using a single camera fixed on the mobile robot. Using a metric based on Shannon's information theory, the algorithm determines potential locations of nodes from which to further image the environment. Using a feature tracking process, the algorithm helps navigate the robot to each new node, where the imaging process is repeated. A Mellin transform and tracking process is used to guide the robot back to a previous node. This cycle of imaging, evaluation, branching and retracing continues until the robot has mapped the environment to a pre-specified level of detail. The set of nodes and the images taken at each node are combined into a graph to model the environment. By tracing its path from node to node, a service robot can navigate around its environment. This method is particularly well suited for flat-floored environments. Experimental results show the effectiveness of this algorithm.
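The Shannon-information scoring of panoramic sub-regions can be sketched as follows. This is a hypothetical illustration of the metric only (histogram entropy of grey-level strips), not the paper's full exploration planner.

```python
import numpy as np

def subregion_entropy(img, n_strips):
    """Shannon entropy (bits) of vertical strips of a grey-level
    panoramic image; higher entropy marks directions that carry
    more visual information and are worth imaging further."""
    strips = np.array_split(img, n_strips, axis=1)
    entropies = []
    for s in strips:
        # empirical grey-level distribution of the strip
        hist, _ = np.histogram(s, bins=256, range=(0, 256))
        p = hist / hist.sum()
        p = p[p > 0]  # ignore empty bins (0 * log 0 := 0)
        entropies.append(float(-(p * np.log2(p)).sum()))
    return entropies
```

A uniform strip has entropy 0, while a strip split evenly between two grey levels has entropy exactly 1 bit, which pins down the units.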

17.
In human–robot communication it is often important to relate robot sensor readings to concepts used by humans. We suggest the use of a virtual sensor (one or several physical sensors with a dedicated signal processing unit for the recognition of real world concepts) and a method with which the virtual sensor can learn from a set of generic features. The virtual sensor robustly establishes the link between sensor data and a particular human concept. In this work, we present a virtual sensor for building detection that uses vision and machine learning to classify the image content in a particular direction as representing buildings or non-buildings. The virtual sensor is trained on a diverse set of image data, using features extracted from grey level images. The features are based on edge orientation, the configurations of these edges, and on grey level clustering. To combine these features, the AdaBoost algorithm is applied. Our experiments with an outdoor mobile robot show that the method is able to separate buildings from nature with a high classification rate, and to extrapolate well to images collected under different conditions. Finally, the virtual sensor is applied on the mobile robot, combining its classifications of sub-images from a panoramic view with spatial information (in the form of location and orientation of the robot) in order to communicate the likely locations of buildings to a remote human operator.
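The AdaBoost combination step can be sketched with threshold decision stumps. This is a generic illustration of the boosting idea on toy 1-D features, not the paper's edge/clustering feature set; labels are assumed to be in {-1, +1}.

```python
import numpy as np

def train_adaboost(X, y, n_rounds):
    """Tiny AdaBoost: each round fits the best threshold stump on the
    weighted data, then reweights samples toward the mistakes."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    stumps = []
    for _ in range(n_rounds):
        best = None
        for f in range(X.shape[1]):
            for t in np.unique(X[:, f]):
                for s in (1, -1):  # stump polarity
                    pred = np.where(X[:, f] >= t, s, -s)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, t, s, pred)
        err, f, t, s, pred = best
        err = max(err, 1e-12)  # avoid log(0) on a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)
        w = w * np.exp(-alpha * y * pred)
        w = w / w.sum()
        stumps.append((alpha, f, t, s))
    return stumps

def predict_adaboost(stumps, X):
    """Sign of the alpha-weighted vote of all stumps."""
    score = np.zeros(len(X))
    for alpha, f, t, s in stumps:
        score += alpha * np.where(X[:, f] >= t, s, -s)
    return np.where(score >= 0, 1, -1)
```

A linearly separable toy set should be classified perfectly after a single round, since one stump suffices.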

18.
Advanced Robotics, 2013, 27(7): 749-762
This paper proposes a method of robot navigation in outdoor environments based upon panoramic view and Global Positioning System (GPS) information. Our system is equipped with a GPS navigator and a camera. The route scene can be described by three-dimensional objects extracted as landmarks from panoramic representations. For an environment having limited routes, a two-dimensional map can be made based upon route scenes, assuming that the topological relation of routes at intersections is known. By using GPS information, the global position of a mobile robot can be known, and a coarse-to-fine method is used to generate an outdoor environment map and locate a mobile robot. First, a robot finds its approximate position based on the GPS information. Then, it identifies its location from the image information. Experimental results in outdoor environments are given.

19.
CCD Camera Calibration (cited: 3; self-citations: 0; other citations: 3)
In a monocular-vision autonomous navigation system for an agricultural wheeled mobile robot, CCD camera calibration is the prerequisite and key to correct, safe navigation. Calibration establishes the correspondence between the 3-D world coordinates of a ground point and its 2-D coordinates in the computer image, and from this relation the robot computes its pose for autonomous navigation. Accordingly, based on the pinhole imaging model of the CCD camera, a system of equations relating known points on a planar template in the world coordinate frame to their corresponding pixel values in image space is established, and the camera's intrinsic and extrinsic parameters are fitted in the Matlab environment. Experimental results show that the method correctly completes CCD camera calibration.
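The least-squares fit from planar template points to pixels resembles a planar homography estimate; a Direct Linear Transform (DLT) sketch is given below. This is a generic stand-in for the calibration fit described above (the authors use Matlab, and a full calibration would further decompose the homography into intrinsic and extrinsic parameters).

```python
import numpy as np

def fit_homography(world_pts, img_pts):
    """Estimate the 3x3 homography mapping planar world points (X, Y)
    to pixels (u, v) by the DLT: build the linear system A h = 0 and
    take the null vector via SVD. Needs at least 4 correspondences."""
    A = []
    for (X, Y), (u, v) in zip(world_pts, img_pts):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)       # null vector = flattened homography
    return H / H[2, 2]             # fix the arbitrary scale

def project(H, pt):
    """Apply a homography to a 2-D point (homogeneous division)."""
    x = H @ np.array([pt[0], pt[1], 1.0])
    return x[0] / x[2], x[1] / x[2]
```

Generating correspondences from a known homography and recovering it exactly is a standard round-trip check for the DLT.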
