5.
In an unknown environment, it is difficult for a robot to quickly acquire information about its surroundings, build a real-time environment map, and operate autonomously. To address this, a vision-based navigation method is proposed. A panoramic camera serves as the robot's visual sensor system to collect environment information; the colour map is segmented by fuzzy-clustering image segmentation in HSI colour space to obtain a binary image of the environment; the image is then rasterized to build a grid map of the environment, and Dijkstra's algorithm with 8-direction connectivity performs global path planning to compute the optimal path, enabling fast, autonomous motion of the mobile robot. Simulation experiments show that the method is effective and feasible.
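The 8-connected Dijkstra planning step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the grid, start, and goal values are invented, and diagonal steps are costed at √2:

```python
import heapq

def dijkstra_8(grid, start, goal):
    """Shortest path on a binary occupancy grid (0 = free, 1 = obstacle)
    using Dijkstra with 8-direction connectivity."""
    rows, cols = len(grid), len(grid[0])
    moves = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                step = 2 ** 0.5 if dr and dc else 1.0  # diagonal costs sqrt(2)
                nd = d + step
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Reconstruct the optimal path from goal back to start.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(dijkstra_8(grid, (0, 0), (2, 0)))
# → [(0, 0), (0, 1), (1, 2), (2, 1), (2, 0)]
```

In practice the binary image from the HSI segmentation step would supply the occupancy grid directly, one cell per (block of) pixels.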
7.
As the required intelligence level of soccer robots has risen, the robot soccer committee removed the corner posts and coloured goals from the playing field, invalidating earlier self-localization methods based on goal and post landmarks. This paper proposes a self-localization method for soccer robots based on visual image feature matching that uses information from multiple sensors: an odometer, a compass, and a panoramic camera. First, the robot obtains a candidate pose from the odometer and compass. Then, the vision processing system uses this candidate pose as a transformation factor to rotate and translate the scene image captured in real time. Finally, the white lines in the transformed image are compared with those in a reference image, and the transformation factor that maximizes the image match is taken as the self-localization result. Experimental results show that the method achieves high localization accuracy and meets the strict real-time requirements of competition.
8.
A robust and efficient self-localization method for soccer robots
To solve the self-localization problem for soccer robots in Middle Size League competition environments, this paper proposes a new self-localization method based on omnidirectional vision. The method first extracts the points corresponding to the white field lines from the panoramic image and uses them as the robot's observations; it then computes the matching error between the observations and a static map, and decides from the size of this error whether to re-select the principal pose; finally, a gradient optimization algorithm refines the principal pose, which is reported as the localization result. Simulation results demonstrate the effectiveness of the method.
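The matching-error idea above can be sketched in a few lines. This is an invented toy, not the paper's algorithm: the static map is reduced to horizontal lines at known y-coordinates, candidate poses are scored by the mean squared distance of transformed observation points to the nearest map line, and the best-scoring pose is kept (the paper refines it further with gradient optimization):

```python
import math

def match_error(pose, points, map_lines_y):
    """Mean squared distance between observed white-line points,
    transformed into world coordinates by a candidate pose (x, y, theta),
    and the nearest static map line (here: horizontal lines y = const)."""
    x, y, th = pose
    err = 0.0
    for px, py in points:
        # Rigid transform from robot frame to world frame.
        wy = y + px * math.sin(th) + py * math.cos(th)
        err += min((wy - ly) ** 2 for ly in map_lines_y)
    return err / len(points)

# Robot-frame observations of one white line, and three pose hypotheses.
points = [(1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
candidates = [(0, 0.0, 0.0), (0, 0.4, 0.0), (0, 1.0, 0.0)]
best = min(candidates,
           key=lambda p: match_error(p, points, map_lines_y=[1.0, 3.0]))
print(best)
# → (0, 1.0, 0.0): only this pose places the points exactly on a map line
```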
10.
The omnidirectional vision system is one of the most important sensors of a RoboCup Middle Size League soccer robot. To achieve object recognition and self-localization, a new design and implementation of an omnidirectional vision system for soccer robots is presented. On the hardware side, a novel panoramic mirror combining a horizontally isometric and a vertically isometric mirror surface is designed, which can capture high-quality panoramic images. On the software side, a novel robot self-localization algorithm based on field marking-line information is implemented according to the imaging characteristics of this mirror, yielding accurate self-localization values. Experimental results show that the omnidirectional vision system can be applied effectively in robot soccer matches.
11.
Self-localization is a major research task for autonomous mobile robots in RoboCup Middle Size League competition. This paper proposes a self-localization algorithm for Middle Size League robots based on white points on the field marking lines. The method uses a digital compass to determine the robot's heading in the world coordinate frame, and the points on the white marking lines to determine its coordinates; it further fuses the visual localization result with odometry information, making the localization result more robust.
12.
A vision-based navigation system is presented for determining a mobile robot's position and orientation using panoramic imagery. Omni-directional sensors are useful in obtaining a 360° field of view, permitting various objects in the vicinity of a robot to be imaged simultaneously. Recognizing landmarks in a panoramic image from an a priori model of distinct features in an environment allows a robot's location information to be updated. A system is shown for tracking vertex and line features for omni-directional cameras constructed with catadioptric (containing both mirrors and lenses) optics. With the aid of the panoramic Hough transform, line features can be tracked without restricting the mirror geometry to satisfy the single-viewpoint criterion. This allows rectangular scene features to be used as landmarks. Two paradigms for localization are explored, with experiments conducted on synthetic and real images. A working implementation on a mobile robot is also shown.
13.
Fast and accurate self-localization of mobile robots based on multi-sensor information fusion
By analysing the perception models of sensors such as omnidirectional vision, an electronic compass, and odometry, a global self-localization algorithm for mobile robots under a given environment model is designed and implemented. The algorithm uses Monte Carlo particle filtering to fuse the observations acquired by multiple sensors at different observation points to localize the robot. Compared with traditional single-sensor localization methods, it integrates the incomplete measurements provided by multiple homogeneous or heterogeneous sensors with information from associated databases, reducing the single...
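The Monte Carlo particle filter at the core of such a method can be sketched as a predict-weight-resample loop. This is a deliberately simplified toy, not the paper's algorithm: the world is reduced to one range measurement against a wall at x = 5, and all numeric parameters (noise levels, particle count, motion step) are invented:

```python
import math, random

random.seed(0)

def mcl_step(particles, odom, expected, z, motion_noise=0.05, sigma=0.2):
    """One predict-weight-resample cycle of Monte Carlo localization.
    particles: list of (x, y) pose hypotheses; odom: (dx, dy) odometry;
    expected(p): measurement predicted at pose p; z: the actual reading."""
    # 1. Predict: apply the odometry increment with additive noise.
    moved = [(x + odom[0] + random.gauss(0, motion_noise),
              y + odom[1] + random.gauss(0, motion_noise))
             for x, y in particles]
    # 2. Weight: Gaussian likelihood of the real observation.
    w = [math.exp(-((expected(p) - z) ** 2) / (2 * sigma ** 2))
         for p in moved]
    total = sum(w) or 1.0
    # 3. Resample particles in proportion to their weights.
    return random.choices(moved, weights=[wi / total for wi in w],
                          k=len(moved))

# Toy world: a single sensor measuring distance to a wall at x = 5.
expected = lambda p: 5.0 - p[0]
particles = [(random.uniform(0, 4), random.uniform(0, 4)) for _ in range(500)]
true_x = 1.0
for _ in range(10):
    true_x += 0.3                    # the robot really moves 0.3 m in x
    particles = mcl_step(particles, (0.3, 0.0), expected, z=5.0 - true_x)
est_x = sum(p[0] for p in particles) / len(particles)
print(round(est_x, 1))  # close to the true final x of 4.0
```

In the paper's setting, `expected` would predict the omnidirectional-vision and compass observations from the environment model, and the weighting step would multiply the likelihoods of the individual sensors, which is where the fusion happens.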
15.
Mobile Robot Self-Localization without Explicit Landmarks
Localization is the process of determining the robot's location within its environment. More precisely, it is a procedure which takes as input a geometric map, a current estimate of the robot's pose, and sensor readings, and produces as output an improved estimate of the robot's current pose (position and orientation). We describe a combinatorially precise algorithm which performs mobile robot localization using a geometric model of the world and a point-and-shoot ranging device. We also describe a rasterized version of this algorithm which we have implemented on a real mobile robot equipped with a laser rangefinder we designed. Both versions of the algorithm allow for uncertainty in the data returned by the range sensor. We also present experimental results for the rasterized algorithm, obtained using our mobile robots at Cornell.
Received November 15, 1996; revised January 13, 1998.
16.
As the autonomy of personal service robotic systems increases, so does their need to interact with their environment. The most basic interaction a robotic agent may have with its environment is to sense and navigate through it. For many applications it is not usually practical to provide robots in advance with valid geometric models of their environment. The robot will need to create these models by moving around and sensing the environment, while minimizing the complexity of the required sensing hardware. Here, an information-based iterative algorithm is proposed to plan the robot's visual exploration strategy, enabling it to most efficiently build a graph model of its environment. The algorithm is based on determining the information present in sub-regions of a 2-D panoramic image of the environment from the robot's current location, using a single camera fixed on the mobile robot. Using a metric based on Shannon's information theory, the algorithm determines potential locations of nodes from which to further image the environment. Using a feature-tracking process, the algorithm helps navigate the robot to each new node, where the imaging process is repeated. A Mellin transform and tracking process is used to guide the robot back to a previous node. This cycle of imaging, evaluation, branching, and retracing continues until the robot has mapped the environment to a pre-specified level of detail. The set of nodes and the images taken at each node are combined into a graph to model the environment. By tracing its path from node to node, a service robot can navigate around its environment. This method is particularly well suited for flat-floored environments. Experimental results show the effectiveness of this algorithm.
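The Shannon-information criterion above amounts to scoring each sub-region by the entropy of its pixel distribution and preferring the richest regions as candidate node directions. A minimal sketch, with invented grey-level data (the paper's actual metric and region extraction are more involved):

```python
import math

def entropy(region):
    """Shannon entropy (in bits) of a grey-level sub-region:
    a proxy for how much visual information the region contains."""
    counts = {}
    for v in region:
        counts[v] = counts.get(v, 0) + 1
    n = len(region)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A featureless wall vs. a textured doorway: the richer region scores
# higher, so it is the better candidate from which to image further.
flat = [128] * 64             # 64 identical grey values
textured = list(range(64))    # 64 distinct grey values
print(entropy(flat), entropy(textured))
# → 0.0 6.0  (uniform over 64 values gives log2(64) = 6 bits)
```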
17.
《Robotics and Autonomous Systems》2007,55(5):383-390
In human–robot communication it is often important to relate robot sensor readings to concepts used by humans. We suggest the use of a virtual sensor (one or several physical sensors with a dedicated signal processing unit for the recognition of real world concepts) and a method with which the virtual sensor can learn from a set of generic features. The virtual sensor robustly establishes the link between sensor data and a particular human concept. In this work, we present a virtual sensor for building detection that uses vision and machine learning to classify the image content in a particular direction as representing buildings or non-buildings. The virtual sensor is trained on a diverse set of image data, using features extracted from grey level images. The features are based on edge orientation, the configurations of these edges, and on grey level clustering. To combine these features, the AdaBoost algorithm is applied. Our experiments with an outdoor mobile robot show that the method is able to separate buildings from nature with a high classification rate, and to extrapolate well to images collected under different conditions. Finally, the virtual sensor is applied on the mobile robot, combining its classifications of sub-images from a panoramic view with spatial information (in the form of location and orientation of the robot) in order to communicate the likely locations of buildings to a remote human operator.
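The AdaBoost combination step can be illustrated with threshold stumps on a single scalar feature. This is a toy sketch, not the paper's feature set: the data, labels, and the one-dimensional "building-ness" feature are invented, but the weighting and re-weighting scheme is standard discrete AdaBoost:

```python
import math

def train_adaboost(X, y, rounds=10):
    """Discrete AdaBoost over threshold stumps on 1-D features.
    X: list of floats; y: list of +1/-1 labels.
    Returns a list of (alpha, threshold, polarity) weak classifiers."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        # Pick the stump with the lowest weighted error.
        best = None
        for thr in sorted(set(X)):
            for pol in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if (pol if xi >= thr else -pol) != yi)
                if best is None or err < best[0]:
                    best = (err, thr, pol)
        err, thr, pol = best
        err = min(max(err, 1e-10), 1 - 1e-10)       # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)     # stump's vote weight
        ensemble.append((alpha, thr, pol))
        # Re-weight: boost the samples this stump misclassified.
        w = [wi * math.exp(-alpha * yi * (pol if xi >= thr else -pol))
             for xi, yi, wi in zip(X, y, w)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    """Sign of the weighted vote of all weak classifiers."""
    score = sum(a * (p if x >= t else -p) for a, t, p in ensemble)
    return 1 if score >= 0 else -1

# Toy data: label "building" (+1) when the feature exceeds roughly 0.5.
X = [0.1, 0.2, 0.4, 0.6, 0.8, 0.9]
y = [-1, -1, -1, 1, 1, 1]
model = train_adaboost(X, y)
print([predict(model, x) for x in X])
# → [-1, -1, -1, 1, 1, 1]
```

In the paper's setting each weak classifier would instead be built on one of the edge-orientation, edge-configuration, or grey-level-clustering features, and AdaBoost would choose and weight among them.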
18.
《Advanced Robotics》2013,27(7):749-762
This paper proposes a method of robot navigation in outdoor environments based upon panoramic view and Global Positioning System (GPS) information. Our system is equipped with a GPS navigator and a camera. The route scene can be described by three-dimensional objects extracted as landmarks from panoramic representations. For an environment having limited routes, a two-dimensional map can be made based upon route scenes, assuming that the topological relation of routes at intersections is known. By using GPS information, the global position of a mobile robot can be known, and a coarse-to-fine method is used to generate an outdoor environment map and locate a mobile robot. First, a robot finds its approximate position based on the GPS information. Then, it identifies its location from the image information. Experimental results in outdoor environments are given.
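The coarse-to-fine scheme can be sketched as a two-stage filter: GPS prunes the map to nearby candidate nodes, then image matching selects among them. Everything here is invented for illustration (the node structure, the 10 m GPS radius, and the string-equality stand-in for real image similarity):

```python
def localize(gps_fix, nodes, image_sim, gps_radius=10.0):
    """Coarse-to-fine localization: GPS narrows the search to nearby
    map nodes, then image matching picks the best candidate."""
    gx, gy = gps_fix
    # Coarse stage: keep only nodes within the GPS uncertainty radius.
    near = [n for n in nodes
            if (n["x"] - gx) ** 2 + (n["y"] - gy) ** 2 <= gps_radius ** 2]
    # Fine stage: choose the node whose stored view matches best.
    return max(near, key=image_sim)

nodes = [{"x": 0, "y": 0, "view": "gate"},
         {"x": 5, "y": 5, "view": "tree"},
         {"x": 80, "y": 0, "view": "lab"}]   # pruned by the coarse stage
current_view = "tree"
node = localize((3, 3), nodes,
                lambda n: 1.0 if n["view"] == current_view else 0.0)
print(node["view"])
# → tree
```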