Similar Literature (20 records)
1.
A robust topological navigation strategy for an omnidirectional mobile robot using an omnidirectional camera is described. The navigation system is composed of on-line and off-line stages. During the off-line learning stage, the robot follows paths based on a motion model of its omnidirectional drive structure and records a set of ordered key images from the omnidirectional camera. From this sequence a topological map is built using a probabilistic technique and a loop-closure detection algorithm, which handles the perceptual-aliasing problem in the mapping process. Each topological node provides a set of omnidirectional images characterized by geometric affine- and scale-invariant keypoints extracted with a GPU implementation. Given a topological node as a target, the robot's navigation mission is a concatenation of topological node subsets. In the on-line navigation stage, the robot hierarchically localizes itself to the most likely node through a robust probabilistic global localization algorithm, and estimates its relative pose within the node with an efficient solution to the classical five-point relative pose estimation problem. The robot is then controlled by a vision-based control law adapted to omnidirectional cameras to follow the visual path. Experimental results obtained with a real robot in an indoor environment show the performance of the proposed method.
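A minimal sketch of the five-point relative pose step mentioned above, using OpenCV's essential-matrix solver rather than the authors' own solution. The matched keypoint arrays `kp_cur`/`kp_key` and a pinhole intrinsic matrix `K` are assumptions; a real catadioptric camera would first require lifting the points to a central projection model.

```python
# Sketch only: relative pose between the current view and a key image of the
# target topological node, from matched keypoints (N >= 5).
import cv2
import numpy as np

def relative_pose(kp_cur, kp_key, K):
    """kp_cur, kp_key: Nx2 float arrays of matched image points."""
    E, inliers = cv2.findEssentialMat(kp_cur, kp_key, K,
                                      method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    # Decompose E and keep the (R, t) pair that puts points in front of both cameras.
    _, R, t, _ = cv2.recoverPose(E, kp_cur, kp_key, K, mask=inliers)
    return R, t  # translation is recovered only up to scale
```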

2.
Self-localization is the basis of autonomous abilities such as motion planning and decision-making for mobile robots, and omnidirectional vision is one of the most important sensors for RoboCup Middle Size League (MSL) soccer robots. Given the highly dynamic nature of RoboCup competition and the deficiencies of current self-localization methods, a robust and real-time self-localization algorithm based on omnidirectional vision is proposed for MSL soccer robots. Monte Carlo localization and matching-optimization localization, the two most popular approaches used in MSL, are combined in the algorithm; the advantages of both are retained while their disadvantages are avoided. A camera-parameter auto-adjusting method based on image entropy is also integrated to adapt the output of the omnidirectional vision system to dynamic lighting conditions. Experimental results show that global localization is realized effectively, highly accurate localization is achieved in real time, and self-localization is robust to a highly dynamic environment with occlusions and changing lighting conditions.
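A minimal sketch of the image-entropy criterion used above for camera-parameter auto-adjustment. The hill-climbing update and step size are assumptions for illustration, not the paper's controller.

```python
import numpy as np

def image_entropy(gray):
    """Shannon entropy of an 8-bit grayscale image."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def adjust_exposure(entropy_prev, entropy_cur, exposure, step):
    """One hill-climbing step (illustrative): keep moving in the direction that
    raised the entropy, otherwise back off."""
    return exposure + step if entropy_cur >= entropy_prev else exposure - step
```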

3.
To meet the mobility and localization needs of robots in indoor environments, an autonomous exploration method for mobile robots based on visual FastSLAM is proposed. The method jointly considers information gain and path length, selects exploration targets from frontiers and plans paths accordingly, maximizing the robot's exploration efficiency and ensuring the exploration task is fully completed. Building on FastSLAM 2.0, vision is used as the observation modality, panoramic scanning and landmark tracking are effectively fused to improve observation efficiency, and landmark visual features are introduced to strengthen data-association estimation, completing localization and map building. Experiments show that the method correctly selects optimal exploration targets and plans reasonable paths, completes exploration tasks, and achieves high localization and mapping accuracy with good robustness.
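A minimal sketch of one way to trade information gain against path length when ranking frontier candidates, in the spirit of the exploration strategy above; the utility form, the weight `lam`, and the `frontiers` tuples are assumptions for illustration.

```python
# Sketch only: each frontier is a hypothetical (cell, expected_info_gain, path_length) tuple.
def select_exploration_target(frontiers, lam=0.2):
    """Return the frontier maximizing gain - lam * path_length."""
    def utility(f):
        cell, info_gain, path_len = f
        return info_gain - lam * path_len
    return max(frontiers, key=utility)

# Example with three candidates: the second wins (18 - 0.2*15 = 15).
best = select_exploration_target([((3, 5), 12.0, 4.2),
                                  ((9, 1), 18.0, 15.0),
                                  ((7, 7), 10.0, 2.0)])
```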

4.
5.
In this article, we propose a new approach to the map-building task: the implementation of the Spatial Semantic Hierarchy (SSH), proposed by B. Kuipers, on a real robot fitted with an omnidirectional camera. Kuipers's original formulation of the SSH was slightly modified in order to manage more efficiently the knowledge the real robot collects while moving in the environment. The sensory data experienced by the robot are transformed by the different levels of the SSH to obtain a compact representation of the environment. This knowledge is stored in the form of a topological map and, eventually, of a metrical map. The aim of this article is to show that a catadioptric omnidirectional camera is a good sensor for the SSH and couples nicely with several of its elements. The panoramic view and rotational invariance of our omnidirectional camera make the identification and labelling of places a simple matter. A deeper insight is that the tracking and identification of events in an omnidirectional image, such as occlusions and alignments, can be used to segment continuous sensory image data into the discrete topological and metric elements of a map. The proposed combination of the SSH and omnidirectional vision provides a powerful general framework for robot mapping and offers new insights into the concept of “place.” Some preliminary experiments performed with a real robot in an unmodified office environment are presented.

6.
This paper describes a method for spatial representation, place recognition and qualitative self-localization in dynamic indoor environments, based on omnidirectional images. This is a difficult problem because of the perceptual ambiguity of the acquired images and their weak robustness to noise and to geometric and photometric variations of real-world scenes. The spatial representation is built from invariant signatures using invariance theory, where we adapt Haar invariant integrals to the particular geometry and image transformations of catadioptric omnidirectional sensors. It follows that combining simple image features in a process of integration over visual transformations and robot motion can build discriminant percepts about the robot's spatial locations. We further analyze the invariance properties of the signatures and the apparent relation between their similarity measures and metric distances. These invariance properties can be exploited in a hierarchical process, from global room recognition to local and coarse robot localization.
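A minimal sketch of a Haar-integral-style signature under the assumption that the panoramic image is unwrapped so a sensor rotation becomes a cyclic column shift; the relational kernel and the pixel offsets are illustrative, not the descriptors used in the paper.

```python
import numpy as np

def haar_signature(panorama, offsets=((0, 5), (0, 20), (3, 10))):
    """panorama: HxW grayscale array, columns = viewing angle.
    Averaging the kernel I(r, c) * I(r + dr, c + dc) over all cyclic column
    shifts (i.e. over the rotation group) gives a rotation-invariant value;
    the wrap along rows is a simplification of this sketch."""
    sig = []
    pano = panorama.astype(float)
    for dr, dc in offsets:
        shifted = np.roll(np.roll(pano, -dc, axis=1), -dr, axis=0)
        sig.append(np.mean(pano * shifted))
    return np.array(sig)
```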

7.
汤一平  姜荣剑  林璐璐 《计算机科学》2015,42(3):284-288, 315
To address the high computational cost, poor real-time performance, and limited detection range of existing mobile-robot vision systems, an obstacle detection method for mobile robots based on an active omnidirectional vision sensor (AODVS) is proposed. First, a single-viewpoint omnidirectional vision sensor (ODVS) is integrated with a planar laser generator composed of four red line lasers arranged in one plane, so that obstacles around the mobile robot are detected by active panoramic vision. Second, the robot's panoramic perception module parses the distance and bearing of surrounding obstacles from the laser stripes projected onto them, using visual processing. Finally, based on this information, an omnidirectional obstacle-avoidance strategy is applied to achieve fast avoidance. Experimental results show that the AODVS-based obstacle detection method achieves fast and efficient avoidance while reducing the computational demands on the mobile robot.

8.
The simultaneous localization and mapping (SLAM) problem for a robot with omnidirectional vision is studied. To overcome the narrow field of view of conventional vision and its poor ability to continuously track and localize landmarks, an omnidirectional-vision SLAM method based on an improved extended Kalman filter (EKF) algorithm is proposed: omnidirectional vision captures the environment around the robot, environmental features are extracted from this information to localize landmarks, and the EKF then updates the robot pose and the map simultaneously. Simulation and physical-robot experiments verify the accuracy and effectiveness of the algorithm, and omnidirectional vision yields higher localization accuracy than conventional vision.
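A minimal sketch of the standard EKF correction step that underlies such a SLAM system (the paper's specific improvement is not reproduced); the observation model `h`, its Jacobian `H_jac`, and the noise covariance `R` are assumed to come from the omnidirectional sensor model.

```python
import numpy as np

def ekf_update(x, P, z, h, H_jac, R):
    """x: joint robot/landmark state mean, P: covariance, z: [range, bearing]."""
    H = H_jac(x)                                  # Jacobian of h at current estimate
    y = z - h(x)                                  # innovation
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi   # wrap bearing to [-pi, pi)
    S = H @ P @ H.T + R                           # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```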

9.
Mobile robot localization, which allows a robot to identify its position, is one of the main challenges in the field of robotics. In this work, we evaluate established feature extraction and machine learning techniques on omnidirectional images, focusing on topological mapping and localization tasks. The main contributions are a novel method for localization via classification with a reject option using omnidirectional images, as well as two novel omnidirectional image data sets. The localization system was analyzed in both virtual and real environments. Based on the experiments performed, the Minimal Learning Machine with Nearest Neighbors classifier combined with Local Binary Patterns feature extraction proved to be the best combination for mobile robot localization, with an accuracy of 96.7% and an F-score of 96.6%.
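A minimal sketch of the LBP-plus-classification-with-reject pipeline described above, with an ordinary k-nearest-neighbors classifier standing in for the paper's Minimal Learning Machine; the LBP parameters and the rejection threshold are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neighbors import KNeighborsClassifier

P, R = 8, 1  # LBP neighborhood (assumed)

def lbp_histogram(gray):
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def train(images, labels):
    clf = KNeighborsClassifier(n_neighbors=5)
    clf.fit([lbp_histogram(im) for im in images], labels)
    return clf

def localize(clf, image, reject_below=0.6):
    """Return the predicted place label, or None if confidence is too low."""
    proba = clf.predict_proba([lbp_histogram(image)])[0]
    return clf.classes_[proba.argmax()] if proba.max() >= reject_below else None
```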

10.
An approach to passive stereo-vision-based real-time localization is developed in this context, as it can adapt to complex factors such as illumination, background, viewing angle, and occlusion. After calibration of the visual coordinates, the proposed system can flexibly calibrate the robots and the vision systems jointly by controlling the terminal positions of the robots. A fast phase-only correlation algorithm is proposed to considerably improve the matching efficiency of the image pairs. Pose normalization and regional three-dimensional vector feature detection make the proposed system robust to the surface patterns of the cars and the battery shapes, allowing for reliable localization and detection. The proposed real-time system has been verified in a practical application.
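A minimal sketch of phase-only correlation between two same-size patches, as named above; it recovers only an integer translation, and subpixel refinement and the paper's speed-ups are omitted.

```python
import numpy as np

def phase_only_correlation(img_a, img_b, eps=1e-9):
    """img_a, img_b: same-size grayscale patches (float arrays)."""
    Fa = np.fft.fft2(img_a)
    Fb = np.fft.fft2(img_b)
    cross = Fa * np.conj(Fb)
    poc = np.fft.ifft2(cross / (np.abs(cross) + eps)).real
    peak = np.unravel_index(np.argmax(poc), poc.shape)
    # Shifts beyond half the patch size wrap around to negative values.
    dy = peak[0] if peak[0] <= img_a.shape[0] // 2 else peak[0] - img_a.shape[0]
    dx = peak[1] if peak[1] <= img_a.shape[1] // 2 else peak[1] - img_a.shape[1]
    return dy, dx, poc.max()
```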

11.
This paper proposes a panoramic image matching algorithm based on gradient histograms and combines it with Monte Carlo localization to build an omnidirectional-vision localization method for mobile robots. Based on an analysis of the proposed matching algorithm, an observation model of the system is established and the computation of the importance weights in the particle filter is derived. The method resists the interference of similar-looking scenes on the localization result and allows the robot to recover quickly from "kidnapping". Experimental results show that the method is correct and effective.
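A minimal sketch of how the importance weights of a particle filter could be set from gradient-histogram similarity, in the spirit of the observation model above; the Bhattacharyya similarity, the exponential weighting, and the `hist_for_pose` lookup are assumptions, not the paper's derivation.

```python
import numpy as np

def bhattacharyya(h1, h2):
    """Similarity of two histograms normalized to sum to 1."""
    return np.sum(np.sqrt(h1 * h2))

def update_weights(particles, observed_hist, hist_for_pose, sigma=0.2):
    """particles: list of (pose, weight); hist_for_pose(pose) -> reference histogram."""
    weights = np.array([np.exp(-(1.0 - bhattacharyya(observed_hist,
                                                     hist_for_pose(pose))) / sigma)
                        for pose, _ in particles])
    weights /= weights.sum()
    return [(pose, w) for (pose, _), w in zip(particles, weights)]
```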

12.
In moving-object detection with a dynamic background, the target and the background move independently, so the background change caused by the mobile robot's own motion must be taken into account when extracting foreground moving objects. The affine transformation is widely used to estimate the background transformation between images. However, when an omnidirectional vision sensor (ODVS) is used on a mobile robot, the distortion of the omnidirectional image makes the background motion inconsistent across the image, so it cannot be described by a single affine transformation. The image is therefore divided into grid windows, an affine transformation is estimated for each window separately, and the moving-object regions are obtained from the background-compensated frame difference. Finally, based on the imaging characteristics of the ODVS, the distance and bearing of moving obstacles are recovered visually. Experimental results show that the proposed method accurately detects moving obstacles within 360° around the mobile robot and localizes them precisely, effectively improving the robot's real-time obstacle-avoidance capability.
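A minimal sketch of the per-window background compensation described above: corners tracked with Lucas-Kanade give one affine transform per grid cell, the previous frame's cell is warped with it, and the residual frame difference marks independently moving objects. Grid size, feature counts, and the threshold are assumptions, not the paper's settings.

```python
import cv2
import numpy as np

def compensated_frame_diff(prev_gray, cur_gray, grid=(8, 8), thresh=25):
    h, w = prev_gray.shape
    motion_mask = np.zeros_like(prev_gray)
    gh, gw = h // grid[0], w // grid[1]
    for gy in range(grid[0]):
        for gx in range(grid[1]):
            ys, xs = gy * gh, gx * gw
            prev_win = prev_gray[ys:ys + gh, xs:xs + gw]
            cur_win = cur_gray[ys:ys + gh, xs:xs + gw]
            pts = cv2.goodFeaturesToTrack(prev_win, 50, 0.01, 5)
            if pts is None or len(pts) < 3:
                continue
            nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_win, cur_win, pts, None)
            good = status.ravel() == 1
            if good.sum() < 3:
                continue
            M, _ = cv2.estimateAffine2D(pts[good], nxt[good])  # per-window affine
            if M is None:
                continue
            warped = cv2.warpAffine(prev_win, M, (gw, gh))     # compensate background
            diff = cv2.absdiff(cur_win, warped)
            motion_mask[ys:ys + gh, xs:xs + gw] = (diff > thresh).astype(np.uint8) * 255
    return motion_mask
```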

13.
王珂  王伟  庄严  孙传昱 《自动化学报》2008,34(11):1369-1378
Self-localization of a mobile robot based on omnidirectional vision is studied for large-scale indoor environments. A hierarchical geometric-topological 3D map is proposed to manage wide-area environmental features; 3D local environmental features and global topological attributes at different levels are defined, and the use of the hierarchical map is described. An imaging model of the omnidirectional vision sensor and its uncertainty propagation are constructed so that the probabilistic elements of the map can be used effectively in the system. Curve edge features corresponding to environmental elements are extracted by a random-point prediction-and-search method. A hierarchical estimation method with feedback fuses, at a fusion center, the state estimates produced by multiple observed features. An interactive self-localization system for the mobile robot is implemented with a layered logical architecture. Experiments analyze the convergence and accuracy of the localization system under different initial poses and observations in real environments, and online environment perception and self-localization during motion are accomplished under occlusion by dynamic obstacles. The experimental results demonstrate the reliability and practicality of the method.

14.
This paper describes ongoing research on vision-based mobile robot navigation for wheelchairs. After a guided tour through a natural environment while taking images at regular time intervals, natural landmarks are extracted to automatically build a topological map. Later on, this map can be used for place recognition and navigation. We use visual servoing on the landmarks to steer the robot. In this paper, we investigate ways to improve the performance by incorporating inertial sensors. © 2004 Wiley Periodicals, Inc.

15.
To address the shortcomings of traditional humanoid robots in control-system real-time performance and visual recognition, a humanoid robot control system with visual recognition was designed around an S3C6410 as the main control chip, and good object recognition results were obtained by improving and simplifying the video recognition algorithm. Experiments show that a humanoid robot built on this control system has good real-time performance and accurate object recognition, and can quickly find the target by adjusting its motion path.

16.
An improved color object recognition algorithm for humanoid soccer robots
To meet the vision requirements of humanoid soccer robots, a color object recognition algorithm is proposed that combines region growing with adaptive threshold updating based on shape discrimination. In HSI space, the image is divided into high- and low-saturation regions using the S component; in the high-saturation region, objects are recognized by region growing on the H component. The threshold is updated adaptively through shape discrimination of the detected object, and the new threshold replaces the original region-growing threshold, so that color objects are recognized stably and accurately. Successful application in a humanoid soccer robot system shows that the algorithm recognizes color objects stably under different lighting conditions, adapts well to illumination changes, and achieves good recognition results.
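A minimal sketch of region growing on the hue channel within the high-saturation part of the image, in the spirit of the algorithm above; OpenCV's HSV space stands in for the HSI space named in the abstract, the shape-based threshold update is not shown, and the seed position and tolerances are assumptions.

```python
from collections import deque
import cv2
import numpy as np

def grow_color_region(bgr, seed, s_min=80, h_tol=10):
    """seed: (row, col) inside the colored target; returns a binary mask."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h_ch, s_ch = hsv[:, :, 0].astype(int), hsv[:, :, 1]
    mask = np.zeros(h_ch.shape, np.uint8)
    seed_h = h_ch[seed]
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if mask[y, x] or s_ch[y, x] < s_min:
            continue                          # visited or low-saturation pixel
        if abs(h_ch[y, x] - seed_h) > h_tol:
            continue                          # hue too far from the seed hue
        mask[y, x] = 255
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1] and not mask[ny, nx]:
                queue.append((ny, nx))
    return mask
```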

17.
Research on a scene-recognition-based localization method for mobile robots
A scene-recognition-based localization method for mobile robots is proposed. For a series of scene images of the working environment captured by a CCD camera, multi-channel Gabor filters extract global texture features, and an SVM classifier then recognizes the scene images, achieving logical localization of the robot. The algorithm was tested on the mobile robot CASIA-I. Experimental results show that the method achieves a localization accuracy of 91.11%, is robust to illumination, contrast, and other factors, and meets the requirements of real-time localization.
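A minimal sketch of the Gabor-texture-plus-SVM pipeline described above; the filter frequencies, orientations, and SVM hyperparameters are assumptions, not the paper's settings.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def gabor_features(gray, freqs=(0.1, 0.2, 0.3), n_orient=4):
    """Global texture statistics over a small multi-channel Gabor bank."""
    feats = []
    for f in freqs:
        for k in range(n_orient):
            real, imag = gabor(gray, frequency=f, theta=k * np.pi / n_orient)
            mag = np.hypot(real, imag)
            feats += [mag.mean(), mag.std()]
    return np.array(feats)

def train_scene_classifier(images, labels):
    X = np.array([gabor_features(im) for im in images])
    return SVC(kernel="rbf", C=10.0).fit(X, labels)
```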

18.
In this work, several robust vision modules are developed and implemented for fully automated micromanipulation: autofocusing, object and end-effector detection, real-time tracking, and optical system calibration. An image-based visual servoing architecture and a path planning algorithm are also proposed based on the developed vision modules. Experimental results are provided to assess the performance of the proposed visual servoing approach in positioning and trajectory-tracking tasks. The proposed path planning algorithm, in conjunction with visual servoing, yields successful micromanipulation tasks.

19.
The localization problem for an autonomous robot moving in a known environment is a well-studied problem that has seen many elegant solutions. Robot localization in a dynamic environment populated by several moving obstacles, however, is still a challenge for research. In this paper, we use an omnidirectional camera mounted on a mobile robot to perform a sort of scan matching. The omnidirectional vision system finds the distances of the closest color transitions in the environment, mimicking the way laser rangefinders detect the closest obstacles. The similarity of our sensor to classical rangefinders allows the use of practically unmodified Monte Carlo algorithms, with the additional advantage of being able to easily detect occlusions caused by moving obstacles. The proposed system was initially implemented in the RoboCup Middle-Size domain, but the experiments we present in this paper prove it to be valid in a general indoor environment with natural color transitions. We present localization experiments both in the RoboCup environment and in an unmodified office environment. In addition, we assessed the robustness of the system to sensor occlusions caused by other moving robots. The localization system runs in real time on low-cost hardware.

20.
This paper presents a robust place recognition algorithm for mobile robots that can be used for planning and navigation tasks. The proposed framework combines nonlinear dimensionality reduction, nonlinear regression under noise, and Bayesian learning to create consistent probabilistic representations of places from images. These generative models are incrementally learnt from very small training sets and used for multi-class place recognition. Recognition can be performed in near real time and accounts for complexities such as changes in illumination, occlusions, blurring and moving objects. The algorithm was tested with a mobile robot in indoor and outdoor environments with sequences of 1579 and 3820 images, respectively. This framework has several potential applications such as map building, autonomous navigation, search-and-rescue tasks and context recognition.
