Similar Documents
20 similar documents found (search time: 250 ms)
1.
Targeting the structural characteristics of indoor environments, an RGB-D visual odometry algorithm using plane and line-segment features is proposed. First, the 3D point cloud is clustered according to the normal vectors of the RGB-D scan points, and the random sample consensus (RANSAC) algorithm fits a plane to each 3D point cluster, extracting the planar features of the environment. An edge-point detection algorithm then segments the environment's edge-point sets, from which line-segment features are extracted. Next, a feature-matching algorithm based on geometric constraints between planes and line segments establishes the feature correspondences. When the matched plane and line-segment features provide sufficient pose constraints, the RGB-D camera pose is solved directly from the correspondences; otherwise, the pose is estimated from the endpoints of the matched line segments together with their point sets. Experiments on the public TUM datasets demonstrate that choosing planes and line segments as environmental features improves the accuracy of visual odometry estimation and environment mapping. In particular, on the fr3/cabinet sequence, the proposed algorithm achieves root-mean-square errors of 2.046°/s in rotation and 0.034 m/s in translation, significantly outperforming other classical visual odometry algorithms. Finally, the system is applied to indoor mapping with a real mobile robot; it builds an accurate environment map and runs at 3 frames/s, meeting real-time requirements.
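To illustrate the plane-extraction step described above, here is a minimal NumPy sketch of RANSAC plane fitting on a 3-D point set. The function name, iteration count, and distance tolerance are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.02, seed=0):
    """Fit a plane n.x + d = 0 to an (N, 3) point set with RANSAC.
    Returns the unit normal n, offset d, and the inlier mask."""
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(points), dtype=bool)
    best = None
    for _ in range(iters):
        tri = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(tri[1] - tri[0], tri[2] - tri[0])   # normal of sampled triangle
        norm = np.linalg.norm(n)
        if norm < 1e-12:                                 # collinear sample, skip
            continue
        n /= norm
        d = -n @ tri[0]
        mask = np.abs(points @ n + d) < tol              # point-to-plane distance test
        if mask.sum() > best_mask.sum():
            best_mask, best = mask, (n, d)
    return best[0], best[1], best_mask
```

In the full pipeline this would run once per normal-vector cluster rather than on the whole cloud.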

2.
Visual-Inertial Odometry Based on a Fast Invariant Kalman Filter   Cited by: 1 (self-citations: 0, by others: 1)
黄伟杰, 张国山. Control and Decision, 2019, 34(12): 2585-2593
To address the camera localization problem, a visual-inertial odometry system based on a depth camera and an inertial sensor is designed, comprising a localization module and a relocalization module. The localization module uses an invariant Kalman filter to fuse the estimates of multi-level iterative closest point (ICP) with the inertial sensor measurements to obtain an accurate camera pose, where the ICP estimation error is quantified with the Fisher information matrix. Since massive point clouds serve as input, GPU parallel computing is employed to carry out the ICP estimation and error quantification quickly. When localization fails, a constant-velocity model is built from the inertial sensor data, and the random-fern relocalization method is improved on this basis to relocalize the visual-inertial odometry. Experimental results show that the designed visual-inertial odometry tracks the camera accurately and relocalizes effectively.

3.
To address the difficulty of simultaneously achieving real-time performance, robustness, and accuracy in existing visual odometry, methods to enhance the practicality of visual odometry are proposed. Feature points are extracted and matched using GPU-based oriented FAST (features from accelerated segment test) and rotation-aware BRIEF (binary robust independent elementary features) descriptors, accelerated with a k-nearest-neighbour search. Feature points with large depth errors are discarded according to the Kinect's effective depth range. When solving the inter-frame camera motion, an efficient perspective-n-point (PnP) solver first provides an estimate of the motion parameters, which then serves as the initial value for Levenberg-Marquardt iteration to refine them. Random sample consensus is used throughout to reject outlier features. Experiments show that these measures speed up the computation of the camera trajectory and yield more accurate, more robust trajectories in indoor environments, making the method suitable for indoor robot navigation and localization.

4.
To obtain accurate pose information for obstacle avoidance on a mobile robot, a monocular visual odometry algorithm for real-time pose estimation is proposed. The algorithm extracts SURF (speeded-up robust features) points from the road surface in consecutive frames captured by a monocular camera, applies the epipolar geometry constraint to resolve the difficulty of matching ground feature points, and recovers the robot's pose change by computing the planar homography matrix. Experimental results show that the algorithm achieves high accuracy and real-time performance.
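The homography computation at the core of this method can be sketched with the standard direct linear transform (DLT). This is a generic textbook estimator, not the paper's code; `homography_dlt` is a name introduced here for illustration.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ [x, y, 1] via the direct
    linear transform; needs at least four non-degenerate point pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)               # null vector = stacked entries of H
    return H / H[2, 2]                     # fix scale (and sign)
```

For ground-plane points, the recovered H between two frames can then be decomposed into the camera's rotation and translation.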

5.
A high-accuracy implementation of visual odometry is studied. First, the SURF algorithm, with its low computational cost and high accuracy, is used to extract feature points, which are matched with a nearest-neighbour vector matching method. The RANSAC algorithm then solves for the transformation matrix between the camera coordinate frames of consecutive frames. A calibration method for the visual odometry is proposed, through which the displacement in the robot's starting coordinate frame is obtained, ensuring accurate localization even when the robot's own attitude changes. A circular-trajectory experiment verifies the feasibility of the visual odometry algorithm.

6.
刘辉, 张雪波, 李如意, 苑晶. Control and Decision, 2024, 39(6): 1787-1800
Laser simultaneous localization and mapping (SLAM) algorithms depend on the structural features of the environment for pose estimation and map building; in scenes lacking such features, their pose-estimation accuracy and robustness degrade or the algorithms fail outright. Exploiting the fact that an inertial measurement unit (IMU) is unaffected by the environment while a camera relies on visual texture, a stereo-vision-aided laser-inertial SLAM algorithm is proposed to resolve the degeneration of pure laser SLAM in structure-poor environments. A stereo visual-inertial odometry algorithm supplies a visual prior pose to the laser scan-matching module, and joint pose estimation then accounts for both visual constraints and laser structural constraints. In addition, a combined strategy of complementary filtering and factor-graph optimization aligns the laser odometry frame with the inertial frame, and the factor graph fuses laser poses with IMU data to constrain the IMU bias, providing a fallback relative pose prediction for laser scan matching when visual odometry fails. To further improve global trajectory accuracy, a hybrid loop-closure detection strategy is proposed that fuses iterative closest point (ICP) matching with image-feature matching, and 6-DOF pose-graph optimization markedly reduces odometry drift while building the environment map. The method is validated on public and self-collected datasets and compared with mainstream open-source SLAM algorithms. Experimental results show that it runs stably in structure-poor environments and achieves higher pose-estimation accuracy and robustness than the compared algorithms.

7.
Scan-matching algorithms are widely used for feature matching on data from vision, sonar, and laser sensors, and the iterative closest point (ICP) algorithm is the most common of them; however, ICP suffers from large matching errors and poor correction of angular error. To address these problems in ICP-based registration of laser sensor data, a genetic iterative closest point (GICP) scan-matching algorithm is proposed. A genetic algorithm searches for the optimal match between the current scan and the reference scan, correcting the error in the initial odometry reading and the robot's pose. Experimental results show that the proposed algorithm effectively solves arbitrary registration problems in scan matching and improves the robot's localization accuracy.
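A toy version of a genetic search over 2-D scan poses might look like the following sketch. The population size, mutation noise, and mean nearest-neighbour cost are illustrative assumptions, not the GICP parameters from the paper.

```python
import numpy as np

def apply_pose(points, pose):
    """Apply a 2-D rigid transform (tx, ty, theta) to an (N, 2) point set."""
    tx, ty, th = pose
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    return points @ R.T + np.array([tx, ty])

def match_cost(pose, scan, ref):
    """Mean nearest-neighbour distance from the transformed scan to the reference."""
    moved = apply_pose(scan, pose)
    d = np.linalg.norm(moved[:, None, :] - ref[None, :, :], axis=2)
    return d.min(axis=1).mean()

def genetic_scan_match(scan, ref, pop=50, gens=80, seed=0):
    """Search for the pose aligning scan to ref with a tiny genetic algorithm."""
    rng = np.random.default_rng(seed)
    P = rng.normal(0.0, 0.3, (pop, 3))           # candidate poses around odometry guess 0
    n_elite = pop // 5
    for _ in range(gens):
        cost = np.array([match_cost(p, scan, ref) for p in P])
        elite = P[np.argsort(cost)[:n_elite]]    # selection: keep the fittest poses
        parents = elite[rng.integers(0, n_elite, pop - n_elite)]
        children = parents + rng.normal(0.0, 0.03, parents.shape)  # mutation
        P = np.vstack([elite, children])         # elitism: best poses survive unchanged
    cost = np.array([match_cost(p, scan, ref) for p in P])
    return P[np.argmin(cost)]
```

Unlike gradient-based ICP refinement, the population search does not need a good initial guess to escape local minima.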

8.

In order to overcome the defects where the surface of the object lacks sufficient texture features and the algorithm cannot meet the real-time requirements of augmented reality, a markerless augmented reality tracking registration method based on multimodal template matching and point clouds is proposed. The method first adapts the linear parallel multi-modal LineMod template matching method with scale invariance to identify the texture-less target and obtain the reference image as the key frame that is most similar to the current perspective. Then, we can obtain the initial pose of the camera and solve the problem of re-initialization because of tracking registration interruption. A point cloud-based method is used to calculate the precise pose of the camera in real time. In order to solve the problem that the traditional iterative closest point (ICP) algorithm cannot meet the real-time requirements of the system, Kd-tree (k-dimensional tree) is used under the graphics processing unit (GPU) to replace the part of finding the nearest points in the original ICP algorithm to improve the speed of tracking registration. At the same time, the random sample consensus (RANSAC) algorithm is used to remove the error point pairs to improve the accuracy of the algorithm. The results show that the proposed tracking registration method has good real-time performance and robustness.
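The closest-point association and SVD pose update at the heart of ICP, which the method above accelerates with a GPU-side Kd-tree, can be sketched as follows. This brute-force NumPy version is for illustration only and omits the RANSAC outlier rejection the paper adds.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares R, t with R @ src_i + t matching dst_i (Kabsch / SVD step)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))  # cross-covariance
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                             # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Plain point-to-point ICP with brute-force nearest neighbours."""
    cur = src.copy()
    R_tot, t_tot = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # Closest-point association (a Kd-tree or GPU search replaces this at scale).
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        R, t = best_rigid_transform(cur, dst[d.argmin(axis=1)])
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```

The O(N²) distance matrix here is exactly the bottleneck that a Kd-tree (O(N log N)) or GPU parallel search removes.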


9.
Conventional particle filtering-based visual ego-motion estimation or visual odometry often suffers from large local linearization errors in the case of abrupt camera motion. The main contribution of this paper is to present a novel particle filtering-based visual ego-motion estimation algorithm that is especially robust to the abrupt camera motion. The robustness to the abrupt camera motion is achieved by multi-layered importance sampling via particle swarm optimization (PSO), which iteratively moves particles to higher likelihood region without local linearization of the measurement equation. Furthermore, we make the proposed visual ego-motion estimation algorithm in real-time by reformulating the conventional vector space PSO algorithm in consideration of the geometry of the special Euclidean group SE(3), which is a Lie group representing the space of 3-D camera poses. The performance of our proposed algorithm is experimentally evaluated and compared with the local linearization and unscented particle filter-based visual ego-motion estimation algorithms on both simulated and real data sets.
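Stripped of the SE(3) geometry, the underlying vector-space PSO that the paper reformulates can be sketched as below. All parameter values are generic defaults, not those of the paper; note that particles move toward high-likelihood regions without any linearization of the objective.

```python
import numpy as np

def pso(f, dim, n=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise f over R^dim with a plain vector-space particle swarm."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n, dim))   # particle positions
    v = np.zeros((n, dim))                 # particle velocities
    pbest, pcost = x.copy(), np.array([f(p) for p in x])
    g = pbest[pcost.argmin()]              # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        # Pull each particle toward its own best and the swarm's best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        cost = np.array([f(p) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        g = pbest[pcost.argmin()]
    return g
```

The paper's contribution is to replace these Euclidean position/velocity updates with operations respecting the Lie-group structure of SE(3) camera poses.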

10.
袁梦, 李艾华, 崔智高, 姜柯, 郑勇. Robot, 2018, 40(1): 56-63
To address the poor robustness of current mainstream monocular visual odometry when a mobile robot undergoes near-pure rotational motion, the cause of the low localization robustness is analysed theoretically, and a monocular visual odometry algorithm based on improved 3D iterative closest point (ICP) matching is proposed. The algorithm first initializes the depth values of the edge feature points in the image, then uses the improved 3D ICP algorithm to iteratively solve for the 6-DOF pose between the 3D point sets corresponding to two frames, and finally updates the depth values with an extended Kalman depth filter incorporating the geometric constraints of the edge features. The improved ICP algorithm uses inverse-depth uncertainty weighting together with edge-gradient search and matching to raise the real-time performance and accuracy of traditional ICP iteration. Using wheel odometry readings as the iteration's initial value further improves localization accuracy and robustness against near-pure rotation. The algorithm is validated on three public datasets: without sacrificing localization accuracy, it effectively improves robustness for near-pure rotational motion and large-scale scenes. A monocular mobile robot using this algorithm can, to a degree, correct odometry drift.

11.
Traditional ICP (iterative closest point) tends to fall into local optima and produces large matching errors. To address this, a new dual-constraint method combining a Euclidean-distance threshold and an angle threshold is proposed, and on this basis an indoor mobile-robot RGB-D SLAM (simultaneous localization and mapping) system based on Kinect is built. First, Kinect captures the colour and depth information of the indoor environment; image features are extracted and matched and, combined with the camera intrinsics and per-pixel depth values, 3D point-cloud correspondences are established. The RANSAC (random sample consensus) algorithm then removes outliers to complete the coarse point-cloud alignment, and the improved registration algorithm completes the fine alignment. Finally, weights are introduced into keyframe selection, and the g2o (general graph optimization) algorithm optimizes the robot poses. Experiments confirm the effectiveness and feasibility of the method, which improves the accuracy of the 3D point-cloud map and estimates the robot's trajectory.
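The dual distance-and-angle rejection test can be sketched in a few lines. The thresholds and the use of precomputed surface normals are our illustrative assumptions, not the paper's exact values.

```python
import numpy as np

def filter_pairs(src, dst, n_src, n_dst, d_max=0.05, ang_max_deg=20.0):
    """Keep only point pairs passing BOTH a Euclidean-distance threshold and
    a surface-normal angle threshold (the dual rejection test)."""
    dist = np.linalg.norm(src - dst, axis=1)
    cosang = np.clip((n_src * n_dst).sum(axis=1), -1.0, 1.0)
    ang = np.degrees(np.arccos(cosang))         # angle between paired normals
    return (dist < d_max) & (ang < ang_max_deg)
```

Applying such a mask before each ICP pose update discards geometrically implausible correspondences, which is what reduces the risk of local optima.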

12.
Advanced Robotics, 2013, 27(3-4): 327-348
We present a mobile robot localization method using only a stereo camera. Vision-based localization in outdoor environments is a challenging issue because of extreme changes in illumination. To cope with varying illumination conditions, we use two-dimensional occupancy grid maps generated from three-dimensional point clouds obtained by a stereo camera. Furthermore, we incorporate salient line segments extracted from the ground into the grid maps. The grid maps are not significantly affected by illumination conditions because occupancy information and salient line segments can be robustly obtained. On the grid maps, a robot's poses are estimated using a particle filter that combines visual odometry and map matching. We use edge-point-based stereo simultaneous localization and mapping to obtain simultaneously occupancy information and robot ego-motion estimation. We tested our method under various illumination and weather conditions, including sunny and rainy days. The experimental results showed the effectiveness and robustness of the proposed method. Our method enables localization under extremely poor illumination conditions, which are challenging for even existing state-of-the-art methods.
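Projecting a 3-D point cloud into a 2-D occupancy grid, as the method above does from stereo data, can be sketched as below. The cell resolution and grid size are assumed values for illustration.

```python
import numpy as np

def occupancy_grid(points, res=0.1, size=50):
    """Mark grid cells (row = y, col = x) hit by the x-y projection of 3-D
    points. `res` is the cell size in metres; the grid is centred on the origin."""
    grid = np.zeros((size, size), dtype=np.uint8)
    ij = np.floor(points[:, :2] / res).astype(int) + size // 2
    ok = (ij >= 0).all(axis=1) & (ij < size).all(axis=1)  # drop out-of-range points
    grid[ij[ok, 1], ij[ok, 0]] = 1
    return grid
```

A binary hit grid like this is the simplest case; a full system would instead accumulate per-cell log-odds and add the salient ground line segments as extra evidence.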

13.
To meet the visual navigation and localization needs of mobile robots, an improved stereo-camera visual odometry scheme is proposed. To deal with redundant feature information, the ORB (oriented FAST and rotated BRIEF) algorithm is improved by introducing multi-threshold FAST image segmentation; to minimize mismatches, fast approximate nearest-neighbour search and random sample consensus are applied. Instead of the usual grey-level-based stereo matching, a new descriptor-based binocular disparity algorithm is adopted to recover the depth of the feature points. To obtain relatively accurate pose coordinates, a specific least-squares problem is constructed to provide the initial value, and the camera motion is estimated from the 3D coordinates of the corresponding feature points. Experimental results on datasets show that the proposed stereo visual odometry achieves comparatively good accuracy and high real-time performance.
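Once disparity is available, recovering feature-point depth uses the standard stereo relation Z = fx · b / d. The sketch below is generic (the paper's contribution is the descriptor-based disparity computation itself); the focal length and baseline values in the test are hypothetical.

```python
import numpy as np

def depth_from_disparity(disparity, fx, baseline):
    """Metric depth from stereo disparity: Z = fx * b / d.
    Non-positive disparities are marked invalid (infinite depth)."""
    d = np.asarray(disparity, dtype=float)
    z = np.full_like(d, np.inf)
    valid = d > 0
    z[valid] = fx * baseline / d[valid]
    return z
```

The resulting 3-D points are what feed the least-squares pose estimation described above.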

14.
This paper presents a novel approach to the real-time SLAM problem that works in unstructured indoor environments with a single forward-viewing camera. Most existing visual SLAM methods extract features from the environment, associate them in different images and produce a feature map as a result. However, we estimate the distances between the robot and the obstacles by applying a visual sonar ranging technique to the image, then associate this range data through the Iterative Closest Point (ICP) algorithm and finally produce a grid map. Moreover, we construct a pseudo-dense scan (PDS), which is essentially a temporal accumulation of data, emulating dense omni-directional sensing of the visual sonar readings based on odometry readings in order to overcome the sparseness of the visual sonar, and then associate this scan with the previous one. In addition, we further correct the slight trajectory error incurred in the PDS construction step to obtain a much more refined map using Sequential Quadratic Programming (SQP), which is a well-known optimization scheme. Experimental results show that our method can obtain an accurate grid map using a single camera alone, without the need for more expensive sensors.

15.
This paper presents a visual odometry method that estimates the location and orientation of a robot. The visual odometry approach is based on the Fourier transform, which extracts the translation between regions of consecutive images captured using a ground-facing camera. The proposed method is especially suited if no distinct visual features are present on the ground. This approach is resistant to wheel slippage because it is independent of the kinematics of the vehicle. The method has been tested on different experimental platforms and evaluated against the ground truth, including a successful loop-closing test, to demonstrate its general use and performance.
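The Fourier-based translation estimate is essentially phase correlation. A minimal sketch for integer-pixel shifts follows; the paper's method (with real camera imagery and sub-pixel refinement) is more elaborate.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer (dy, dx) with b close to np.roll(a, (dy, dx),
    axis=(0, 1)), from the peak of the normalised cross-power spectrum."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(Fa) * Fb
    cross /= np.abs(cross) + 1e-12               # keep phase only
    corr = np.abs(np.fft.ifft2(cross))           # impulse at the shift
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    if dy > a.shape[0] // 2:                     # wrap to negative shifts
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

Because only the spectrum's phase is kept, the peak stays sharp even on low-texture ground where feature detectors fail, which is the regime the paper targets.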

16.
于雅楠, 卫红, 陈静. Acta Automatica Sinica, 2021, 47(6): 1460-1466
To address inter-frame matching failures and tracking loss caused by large camera rotations in mobile-robot visual simultaneous localization and mapping, a detail-enhanced visual odometry optimization algorithm based on local image entropy is proposed. An image pyramid is built and the image is divided into blocks for uniform feature extraction; the information content of each block is judged from its entropy, and blocks with low contrast and small gradient variation are discarded, reducing the feature-point computation. The brightness of the remaining blocks is adaptively adjusted to enhance local image detail, so that as many informative local feature points as possible are extracted to associate adjacent frames and keyframes. Pose-graph optimization is then applied to reduce accumulated pose error both locally and globally, further improving system performance. Validation on the TUM datasets shows that, because the extracted features better reflect object texture and shape, the motion-tracking success rate of the proposed algorithm improves to over 60%, and the measured trajectory, translation, and rotation errors all decrease. Compared with the current ORB-SLAM2 system, the proposed algorithm not only improves the visual localization accuracy of mobile robots but also meets the needs of real-time SLAM applications.
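The block-entropy test used above to discard low-information image blocks can be sketched as a histogram entropy. The bin count and intensity range are illustrative assumptions.

```python
import numpy as np

def block_entropy(block, bins=32):
    """Shannon entropy (in bits) of an 8-bit image block's intensity histogram;
    low-entropy blocks carry little texture and can be discarded."""
    hist, _ = np.histogram(block, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # empty bins contribute 0 (0*log 0 := 0)
    return float(-(p * np.log2(p)).sum())
```

A flat block scores 0 bits, while a block whose intensities spread evenly over all bins scores log2(bins) bits, so a simple threshold separates textured from textureless regions.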

17.

This paper proposes real-time object depth estimation, using only a monocular camera on an onboard computer with a low-cost GPU. Our algorithm estimates scene depth from a sparse feature-based visual odometry algorithm and detects/tracks objects' bounding boxes by utilizing an existing object detection algorithm in parallel. Both algorithms share their results, i.e., features, motion, and bounding boxes, to handle static and dynamic objects in the scene. We validate the scene depth accuracy of sparse features with KITTI and its ground-truth depth map made from LiDAR observations quantitatively, and the depth of detected objects with the Hyundai driving datasets and satellite maps qualitatively. We compare the depth map of our algorithm with the result of (un-) supervised monocular depth estimation algorithms. The validation shows that our performance is comparable to that of monocular depth estimation algorithms which train depth indirectly (or directly) from stereo image pairs (or depth images), and better than that of algorithms trained with monocular images only, in terms of the error and the accuracy. Also, we confirm that our computational load is much lighter than the learning-based methods, while showing comparable performance.


18.
For visual-inertial odometry (VIO) implemented with filtering methods, to propagate the uncertainty of rotational motion more accurately, reduce the system's linearization error, and improve pose-estimation accuracy, a VIO algorithm represented on a high-dimensional matrix Lie group and implemented in a cubature Kalman filter framework is designed. The algorithm constructs the state variable as a high-dimensional Lie-group matrix and defines the 'addition' operation for Lie-group variables during cubature-point sampling, extending the notions of cubature points, state mean, and covariance from Euclidean space to the manifold. The cubature transform propagates the state mean and covariance, avoiding the complex Jacobian computation of rotational motion and reducing the model's linearization error. Finally, validation on the EuRoC MAV dataset shows that the proposed algorithm effectively improves pose-estimation accuracy.
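The cubature transform that replaces Jacobian-based linearization is shown below in plain Euclidean space (the paper's version operates on the matrix Lie group via its redefined 'addition'). This is a generic sketch of the third-degree spherical-radial rule, not the paper's filter.

```python
import numpy as np

def cubature_transform(f, mean, cov):
    """Propagate a Gaussian (mean, cov) through a nonlinear function f using
    the 2n cubature points of the third-degree spherical-radial rule."""
    n = len(mean)
    L = np.linalg.cholesky(cov)                   # cov = L @ L.T
    pts = np.concatenate([mean + np.sqrt(n) * L.T,
                          mean - np.sqrt(n) * L.T])
    ys = np.array([f(p) for p in pts])            # propagate each cubature point
    y_mean = ys.mean(axis=0)                      # equal weights 1/(2n)
    dy = ys - y_mean
    return y_mean, dy.T @ dy / (2 * n)
```

For a linear f the transform is exact, and for nonlinear f no Jacobian is ever formed, which is precisely why it suits rotational states whose Jacobians are cumbersome.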

19.
龚赵慧, 张霄力, 彭侠夫, 李鑫. Robot, 2020, 42(5): 595-605
To overcome the lack of scale information in semi-direct monocular visual odometry and its poor robustness during fast motion, a semi-direct monocular visual odometry fusing inertial measurements is designed; the IMU (inertial measurement unit) information compensates for the odometry's deficiencies, effectively improving tracking accuracy and system robustness. Inertial and visual information are jointly used for initialization, recovering the environment's scale fairly accurately. To improve the robustness of motion tracking, an IMU-weighted motion prior model is proposed: the IMU state estimate is obtained by preintegration, the weighting coefficient is adjusted according to the IMU prior error, and the weighted IMU prior provides an accurate initial value for the front end. The back end builds a tightly coupled graph-optimization model that jointly optimizes inertial, visual, and 3D map-point information, while strong covisibility constraints in the sliding window eliminate local accumulated error and improve optimization efficiency and accuracy. Experimental results show that the proposed prior model outperforms both the constant-velocity model and the plain IMU prior model, with a single-frame prior error under 1 cm. After the back-end improvement, computational efficiency rises by a factor of 1.52, and trajectory accuracy and optimization stability also improve. Tested on the EuRoC dataset, the localization performance exceeds that of the OKVIS algorithm, and the trajectory root-mean-square error falls to 1/3 that of the original visual odometry.

20.

For dense 3D modelling in complex indoor environments, a simultaneous localization and 3D mapping method for mobile robots based on an RGB-D camera is proposed. The method acquires environmental information with an RGB-D camera mounted on the mobile robot and, based on a point-cloud and texture weighting model, builds a hybrid pose-estimation scheme incorporating local texture constraints, maintaining localization accuracy while reducing the failure rate. Under a keyframe selection mechanism and combined with visual loop-closure detection, the tree-based network optimizer (TORO) algorithm minimizes loop-closure error, achieving globally consistent optimization of the 3D map. Experimental results in indoor environments verify the effectiveness and feasibility of the proposed algorithm.

