Similar Literature
19 similar documents found (search time: 187 ms)
1.
A motion-vision localization method based on target recognition is proposed for indoor mobile robot localization. The method determines the robot's pose relative to a target it is approaching. Two points on the target, at the same height above the ground as the camera's projection center, are selected as feature points. One image containing both feature points is captured before the robot moves and another after; from the feature points' coordinates in these two images, the method computes the robot's pose before and after the motion. Because only two feature points and two images are needed, the method is simple and offers good real-time performance. Experimental results verify its effectiveness.
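Under a pinhole model, feature points chosen at camera height project near the image horizon, so each image column directly encodes a horizontal bearing. A minimal sketch of that geometry (the intrinsics `fx`, `cx`, the helper name, and all numbers are illustrative, not from the paper):

```python
import numpy as np

def bearing_from_pixel(u, cx, fx):
    """Horizontal bearing (rad) of a feature point that lies at camera
    height: its image column u alone fixes its bearing relative to the
    optical axis, given principal point cx and focal length fx (pixels)."""
    return np.arctan2(u - cx, fx)

# Bearings to the two feature points in one image; the angle they subtend
# constrains the camera pose relative to the target.
b1 = bearing_from_pixel(420.0, cx=320.0, fx=500.0)
b2 = bearing_from_pixel(220.0, cx=320.0, fx=500.0)
subtended = b1 - b2
```

Repeating the measurement in the second image gives the constraints from which the before/after poses are recovered.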

2.
杜姗姗  周祥 《计算机应用》2015,35(9):2678-2681
Tool calibration determines the transformation matrix of the tool frame relative to the robot end-effector frame. Because the traditional solution relies on manually taught point constraints, an automatic tool-calibration method based on camera space is proposed. A feature such as a ring marker is added to the end-effector tool, and a camera establishes the mapping between the robot's 3D space and the camera's 2D space. Automatic 3D visual positioning imposes a point constraint on the ring marker's center, without the tedious process of camera calibration. The tool center point (TCP) is then solved from the robot's forward kinematics and the camera-space point constraint. Across repeated experiments the calibration error was below 0.05 mm and the absolute positioning error below 0.1 mm, verifying that camera-space tool calibration is highly repeatable and reliable.
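The forward-kinematics-plus-point-constraint TCP solve can be posed as a linear least-squares problem: N flange poses (R_i, o_i) all bring the unknown tool offset t to the same fixed point p. The function and variable names below are mine, a sketch of that formulation rather than the paper's exact algorithm:

```python
import numpy as np

def calibrate_tcp(rotations, translations):
    """Solve the tool-center-point offset t from N flange poses (R_i, o_i)
    that all place the unknown TCP at the same fixed point p:
        R_i @ t + o_i = p   for all i.
    Stacked as [R_i | -I] [t; p] = -o_i and solved by least squares."""
    n = len(rotations)
    A = np.zeros((3 * n, 6))
    b = np.zeros(3 * n)
    for i, (R, o) in enumerate(zip(rotations, translations)):
        A[3 * i:3 * i + 3, :3] = R
        A[3 * i:3 * i + 3, 3:] = -np.eye(3)
        b[3 * i:3 * i + 3] = -o
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]  # tool offset in flange frame, fixed point in base frame
```

Three or more poses with sufficiently different orientations make the stacked system full rank.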

3.
A novel motion-vision localization method is proposed. Two feature points are selected on the target the robot approaches, and their image coordinates determine the robot's pose relative to the target before and after its motion. A search algorithm that locates the feature points' image coordinates more accurately is also presented. Experimental results show that the method achieves high localization accuracy and strong robustness.

4.
For a camera looking down at the ground at an oblique angle, a transformation from ground points to image points is derived from the perspective-projection imaging model, along with the inverse transformation from image points back to the ground. Knowing the relative position of two points in the world frame and their locations in the camera image is enough to determine the transformation from the image to the ground frame; the two transformations are inverses of each other and can play an important role in robot localization and other applications.
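The pair of mutually inverse ground/image transformations behaves like a planar homography and its matrix inverse. A small sketch with a hypothetical H (in the general, unconstrained case H would be estimated from at least four correspondences; the derivation above needs only two points because the camera geometry is constrained):

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2D points through a 3x3 planar homography H (ground->image or
    image->ground); the inverse mapping is apply_homography(inv(H), .)."""
    pts = np.asarray(pts, float)
    hom = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous
    out = hom @ H.T
    return out[:, :2] / out[:, 2:3]                 # back to Euclidean

# Hypothetical example: a homography and its inverse round-trip a point.
H = np.array([[1.2, 0.1, 30.0],
              [0.0, 0.9, 40.0],
              [0.0, 0.002, 1.0]])
ground = np.array([[1.0, 2.0]])
img = apply_homography(H, ground)
back = apply_homography(np.linalg.inv(H), img)
```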

5.
To address the limited positioning accuracy of a robot system's tool end, a pose-estimation method that corrects the robot system with QR-code markers is proposed. From images of ground-mounted correction QR codes acquired by the robot system, the transformation matrix between the robot's actual position and its taught position is computed. This matrix generates a new work-object coordinate frame, and the robot system is moved in the new frame to the corrected target position. The method corrects the robot system's pose deviation and achieves high-accuracy positioning of the tool end, with accuracy reaching 0.5 mm.
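The correction step amounts to composing the taught target with the marker's actual-vs-taught transform. A simplified homogeneous-transform sketch (the function name, the frame conventions, and the translation-only example are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def corrected_target(T_actual, T_taught, target_taught):
    """Map a taught target point through the correction transform derived
    from the marker's pose at run time (T_actual) vs. teach time (T_taught):
        T_corr = T_actual @ inv(T_taught)."""
    T_corr = T_actual @ np.linalg.inv(T_taught)
    return (T_corr @ np.append(np.asarray(target_taught, float), 1.0))[:3]
```

With T_corr applied as a new work-object frame, every taught point shifts by the observed pose deviation.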

6.
付睿云 《软件》2022,(11):94-97
To overcome the limitations of conventional industrial-robot pre-teaching and monocular vision guidance, a robot localization system based on binocular stereo vision is designed to obtain a target object's 3D position in space. The design is built on an ABB robot as the execution system; the two cameras are calibrated by touching a calibration board with the TCP. SIFT blob features of the workpiece are extracted and, using the OpenCV 3.4.1 library, the left and right camera images are stereo-matched on these feature points to obtain the object's 3D coordinates, which guide the robot to grasp it.
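For a rectified pair, the stereo-matching step reduces to disparity-based triangulation. A minimal numpy stand-in (a real pipeline would triangulate with the calibrated projection matrices, e.g. via OpenCV; all parameter values here are made up, and square pixels are assumed so one focal length serves both axes):

```python
import numpy as np

def triangulate_rectified(uv_left, uv_right, fx, cx, cy, baseline):
    """Depth from a rectified stereo pair: disparity d = uL - uR gives
    Z = fx * B / d, then X, Y by back-projecting the left pixel
    (pinhole model, square pixels)."""
    (uL, vL), (uR, _) = uv_left, uv_right
    d = uL - uR
    Z = fx * baseline / d
    X = (uL - cx) * Z / fx
    Y = (vL - cy) * Z / fx
    return np.array([X, Y, Z])
```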

7.
A vision-system calibration method based on a single target point in the field of view is designed. An arbitrary point in the field of view is chosen as the target, and the robot performs relative motions about it to obtain multiple feature points. Geometric constraints between corresponding points in the image sequence and the transformation matrices between the coordinate frames are established, the transformation relations are determined, and the camera's intrinsic and extrinsic parameters are solved. The calibration needs only one scene point, makes the robot's motion control easy to operate, and is simple to implement. Experimental results verify the method's effectiveness.

8.
A monocular-vision measurement method based on AKAZE feature detection and the PnP algorithm solves for the camera's relative pose, quickly and accurately determining the pose relation between two objects in space. A template image of the cooperative target is captured, and the pixel coordinates of four feature points attached to the target are extracted. AKAZE keypoints match the template image against the query image and yield a mapping matrix; through this matrix the four feature points' pixel coordinates in the query image are obtained. Combined with the cooperative target's dimensions, the PnP problem over the four coplanar feature points is solved to recover the camera's position relative to the target. Experimental analysis shows that the camera pose computed from live images is close to ground truth, verifying the method's effectiveness.
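The "mapping matrix" between template and query image is a planar homography. A self-contained direct-linear-transform estimator sketch of that step (in the paper's pipeline OpenCV feature matching would supply the correspondences, and the pose would then come from solving PnP on the four transferred points):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 mapping matrix H (dst ~ H @ src, homogeneous) from
    >= 4 point correspondences with the direct linear transform: each pair
    contributes two rows of A, and H is the null vector of A via SVD."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```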

9.
Natural-landmark extraction and matching are the foundation of vSLAM. This paper proposes a method for natural-landmark extraction, local feature description, and fast matching based on the 3D information of feature points. Binocular vision captures environment images; feature points are extracted from the left and right images and matched, and each matched point's 3D position in the left-camera frame is established. A field-of-view constraint rule is proposed to filter the feature points, after which natural landmarks are extracted with an improved MeanShift clustering algorithm. A landmark descriptor is proposed that allows fast matching between two clusters. The method effectively extracts natural landmarks in unstructured environments and places only modest demands on the accuracy of the robot's pose estimate.

10.
For real-time, precise localization of a ground unmanned vehicle by a UAV in an air-ground collaborative robot system, a red double-circle positioning marker and a method for recognizing and localizing it are proposed. Color segmentation combined with contour extraction reduces the number of extracted contours and suppresses background interference to cut false detections. A fast circular-contour detection algorithm quickly identifies the target contour and accurately locates the target's pixel coordinates and orientation. Based on the pinhole camera model, the target's 3D coordinates and yaw angle in the body frame are estimated from those pixel coordinates and orientation. Experiments show that at a UAV-vehicle relative height of 1.5 m, the localization errors along the x and y axes are 3.9 mm and 3.6 mm, with an average processing time of 11.6 ms per frame, outperforming the 13.3 mm, 14.3 mm and 56.3 ms of a kernelized-correlation-filter based method. Combined with UAV control, the method enables cooperative tracking and autonomous landing, improving the efficiency of air-ground collaborative robots and offering clear engineering value.
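With the pinhole model and a known relative height, the pixel-to-body-frame step is a one-line back-projection. A sketch assuming, for simplicity, a straight-down camera aligned with the body frame (the paper also estimates yaw from the marker orientation, omitted here; K and all numbers are illustrative):

```python
import numpy as np

def pixel_to_body(u, v, K, height):
    """Back-project a ground-target pixel (u, v) to body-frame XY at a known
    relative height, using the intrinsic matrix K of a downward camera."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    X = (u - cx) * height / fx
    Y = (v - cy) * height / fy
    return np.array([X, Y, height])
```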

11.
An autonomous mobile robot must be able to navigate in an unknown environment, and the simultaneous localization and mapping (SLAM) problem is central to this ability. Vision sensors are attractive for an autonomous mobile robot because they are information-rich and impose few restrictions on applications. However, many vision-based SLAM methods using a general pin-hole camera suffer from variation in illumination and from occlusion, because they mostly extract corner points for the feature map. Moreover, the narrow field of view of a pin-hole camera makes them inadequate for high-speed camera motion. To solve these problems, this paper presents a new SLAM method that uses vertical lines extracted from an omni-directional camera image and horizontal lines from range-sensor data. Thanks to the omni-directional camera's large field of view, features remain in the image long enough to estimate the robot pose and the feature positions more accurately. Furthermore, since the proposed SLAM uses lines rather than corner points as features, it reduces the effect of illumination changes and partial occlusion. It exploits not only the lines at wall corners but also the many other vertical lines at doors, columns, and information panels on the wall, which a range sensor cannot extract. Finally, since the horizontal lines are used to estimate the positions of the vertical-line features, no camera calibration is required. Experimental work based on MORIS, our mobile robot test bed, moving at a human's pace in a real indoor environment verifies the efficacy of this approach.

12.
尹磊    彭建盛    江国来    欧勇盛 《集成技术》2019,8(2):11-22
Lidar and vision are currently the two main localization and navigation technologies for service robots, but existing low-cost lidars offer limited localization accuracy and cannot perform large-scale loop-closure detection, while feature maps built from vision alone are unsuitable for navigation. Taking an indoor robot equipped with a low-cost lidar and a vision sensor as the research platform, this paper proposes a combined laser-visual localization and mapping method: laser point-cloud data and image feature-point data are fused, and the robot pose is optimized with a sparse pose adjustment method. Loop closures are detected with a visual-feature bag-of-words model, further refining the lidar-based occupancy grid map. Experiments in real scenes show that, compared with laser-only or vision-only mapping, the multi-sensor fusion method localizes more accurately and effectively solves the loop-closure detection problem.
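Bag-of-words loop detection boils down to comparing visual-word histograms of the current frame against past frames. A minimal cosine-similarity sketch (the paper's actual vocabulary and scoring are not specified here; the threshold idea in the comment is a common convention, not the authors'):

```python
import numpy as np

def bow_similarity(h1, h2):
    """Cosine similarity of two bag-of-visual-words histograms; frame pairs
    whose score exceeds a chosen threshold become loop-closure candidates."""
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    return float(h1 @ h2 / (np.linalg.norm(h1) * np.linalg.norm(h2) + 1e-12))
```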

13.
Pathfinding is increasingly common in autonomous vehicle navigation, robot localization, and other computer vision applications. In this paper, a novel approach to mapping and localization is presented that extracts visual landmarks from a robot dataset acquired with a Kinect sensor. The visual landmarks are detected and recognized using the improved scale-invariant feature transform (I-SIFT) method. The methodology is based on detecting stable and invariant landmarks in consecutive RGB-D (red-green-blue plus depth) frames of the robot dataset. These landmarks are then used to determine the robot path, and a map is constructed from the visual landmarks. A number of experiments were performed on various datasets in an indoor environment. The proposed method performs efficient landmark detection in various environments, including changes in rotation and illumination. The experimental results show that the proposed method can solve the simultaneous localization and mapping (SLAM) problem using stable visual landmarks, with less computation time.

14.
15.
Many extensive studies have recently been conducted on robot control via self-positioning estimation techniques. In the simultaneous localization and mapping (SLAM) method, one approach to self-positioning estimation, robots generally use both autonomous position information from internal sensors and observed information on external landmarks. SLAM can yield higher-accuracy positioning estimates depending on the number of landmarks; however, this technique involves a degree of uncertainty and has a high computational cost, because it relies on image processing to detect and recognize landmarks. To overcome this problem, we propose a method called the generalized measuring-worm (GMW) algorithm for map creation and position estimation, which uses multiple cooperating robots that serve as moving landmarks for each other. This approach avoids the uncertainty and computational cost, because a robot must find only a simple two-dimensional marker rather than feature-point landmarks. In the GMW method, the robots are given a two-dimensional marker of known shape and size and use a front-mounted camera to determine the marker's distance and direction. The robots use this information to estimate each other's positions and to calibrate their movement. To evaluate the proposed method experimentally, we fabricated two real robots and observed their behavior in an indoor environment. The experimental results revealed that the distance measurement and control error could be reduced to less than 3%.
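Because the GMW marker's shape and size are known, a single camera observation yields range and bearing directly. A pinhole sketch under a fronto-parallel assumption (the function name and all numbers are illustrative, not the authors' implementation):

```python
import numpy as np

def marker_range_bearing(pixel_width, u_center, real_width, fx, cx):
    """Range and bearing to a flat marker of known physical width from its
    apparent pixel width and image position: range = fx * W / w_pixels,
    bearing from the marker center's column offset (pinhole model,
    marker roughly facing the camera)."""
    rng = fx * real_width / pixel_width
    bearing = np.arctan2(u_center - cx, fx)
    return rng, bearing
```

This is the kind of measurement the cooperating robots exchange in place of feature-point landmarks.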

16.
《Advanced Robotics》2013,27(11):1595-1613
For successful simultaneous localization and mapping (SLAM), perception of the environment is important. This paper proposes a scheme to autonomously detect visual features that can be used as natural landmarks for indoor SLAM. First, features are roughly selected from the camera image through entropy maps that measure the level of randomness of pixel information. Then, the saliency of each pixel is computed by measuring the level of similarity between the selected features and the given image. In the saliency map, it is possible to distinguish the salient features from the background. The robot estimates its pose by using the detected features and builds a grid map of the unknown environment by using a range sensor. The feature positions are stored in the grid map. Experimental results show that the feature detection method proposed in this paper can autonomously detect features in unknown environments reasonably well.
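The entropy-map pre-selection can be sketched as local Shannon entropy over sliding windows. This is a rough stand-in for the paper's formulation; the window size and the 8-bit grayscale assumption are mine:

```python
import numpy as np

def entropy_map(img, win=5):
    """Local Shannon entropy of an integer grayscale image over win x win
    windows; high entropy marks high pixel randomness, i.e. feature
    candidates, while flat regions score zero."""
    h, w = img.shape
    r = win // 2
    out = np.zeros((h, w))
    for i in range(r, h - r):
        for j in range(r, w - r):
            patch = img[i - r:i + r + 1, j - r:j + r + 1].ravel()
            counts = np.bincount(patch, minlength=256).astype(float)
            p = counts[counts > 0] / patch.size
            out[i, j] = -np.sum(p * np.log2(p))
    return out
```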

17.
Dong  Xiaoming  Ai  Liefu  Jiang  Rong 《Multimedia Tools and Applications》2019,78(21):29747-29763

Robot motion estimation is fundamental to most robot applications, such as robot navigation, an indispensable part of the future Internet of Things. Indoor robot motion estimation is hard to resolve because GPS (Global Positioning System) is unavailable. Vision sensors provide far richer image-sequence information than traditional sensors but are sensitive to lighting changes. To improve the robustness of indoor robot motion estimation, an enhanced particle filter framework is constructed: first, motion estimation is implemented based on distinctive indoor feature points; second, a particle filter is employed, with least-squares curve fitting inserted into the particle resampling process to solve the particle-depletion problem. Experiments on real robots show that the proposed method greatly reduces estimation errors and provides an effective solution for indoor robot localization and motion estimation.

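The resampling-with-curve-fitting idea can be sketched as systematic resampling whose weights are first smoothed by a least-squares polynomial fit. This is a loose illustration of inserting `np.polyfit` into the resampling step, not the authors' exact scheme:

```python
import numpy as np

def systematic_resample(particles, weights, rng=None):
    """Systematic resampling step of a particle filter, with the weight
    curve first smoothed by a low-order least-squares fit (a hypothetical
    stand-in for the paper's anti-depletion curve fitting)."""
    if rng is None:
        rng = np.random.default_rng(0)
    w = np.asarray(weights, float)
    idx = np.arange(len(w))
    # Least-squares smoothing of the weight curve, clipped to stay positive.
    w_fit = np.clip(np.polyval(np.polyfit(idx, w, 2), idx), 1e-12, None)
    w_fit /= w_fit.sum()
    # Standard systematic resampling against the smoothed cumulative weights.
    positions = (rng.random() + idx) / len(w)
    return np.asarray(particles)[np.searchsorted(np.cumsum(w_fit), positions)]
```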

18.
庄严  王伟  王珂  徐晓东 《自动化学报》2005,31(6):925-933
This paper studies simultaneous localization and mapping for an autonomous mobile robot in a partially structured indoor environment. Reflecting the different models of the laser and vision sensors, weighted least-squares fitting and suppression of non-local maxima are used, respectively, to extract 2D horizontal environment features and vertical object edges. To let the mobile robot navigate autonomously indoors without the support of a prior map, a concrete method is proposed that performs extended-Kalman-filter localization while simultaneously building a 2D geometric map with uncertainty descriptions. Analysis and discussion of experimental results and data obtained on the SmartROB-2 mobile robot platform demonstrate the method's effectiveness and practicality.

19.
In this work, we examine the classic problem of robot navigation via visual simultaneous localization and mapping (SLAM), introducing dual optical and thermal (cross-spectral) sensing with sensor handover from one to the other. Our approach uses a novel combination of two primary sensors: co-registered optical and thermal cameras. Mobile robot navigation is driven by the two simultaneous camera images of the environment, over which feature points are extracted and matched between successive frames. A bearing-only visual SLAM approach then uses successive feature-point observations to identify and track environment landmarks with an extended Kalman filter (EKF). Six-degree-of-freedom mobile robot and environment landmark positions are managed by the EKF using optical, thermal, and combined optical/thermal features, in addition to handover from one sensor to the other. Sensor handover primarily targets continuous SLAM operation under varying illumination conditions (e.g., changing from night to day). The final methodology is tested in outdoor environments with variation in light conditions and robot trajectories, producing results which illustrate that the additional thermal sensor improves the accuracy of landmark detection and that sensor handover is viable for solving the SLAM problem with this sensor combination.
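The EKF-based entries above share the same measurement-update core. A generic sketch of that step (symbols follow the standard EKF formulation, not any one paper's notation):

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """Generic EKF measurement update: innovation, innovation covariance,
    Kalman gain, then corrected state and covariance. h is the predicted
    measurement at the current state and H its Jacobian."""
    y = z - h                           # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

In bearing-only SLAM, z would be the measured bearing to a landmark and H the Jacobian of the bearing model with respect to the robot and landmark states.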
