Similar Documents
1.
In this paper, we present a novel algorithm for odometry estimation based on ceiling vision. The main contribution of this algorithm is the introduction of principal direction detection, which greatly reduces the error-accumulation problem present in most visual odometry estimation approaches. The principal direction is defined based on the fact that our ceiling is filled with artificial vertical and horizontal lines, which can be used as a reference for the robot's current heading direction. The proposed approach operates in real time and performs well even under camera disturbance. A moving low-cost RGB-D camera (Kinect), mounted on a robot, is used to continuously acquire point clouds. Iterative closest point (ICP) is the common way to estimate the current camera position by registering the currently captured point cloud to the previous one. However, its performance suffers from the data-association problem, or it requires pre-alignment information. The performance of the proposed principal direction detection approach does not rely on data-association knowledge. Using this method, two point clouds are properly pre-aligned, so ICP can then fine-tune the transformation parameters and minimize the registration error. Experimental results demonstrate the performance and stability of the proposed system under disturbance in real time. Several indoor tests show that the proposed visual odometry estimation method can significantly improve the accuracy of simultaneous localization and mapping (SLAM).
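The principal-direction idea lends itself to a compact sketch: estimate the dominant ceiling-line orientation with a Hough transform, use the change in that orientation to pre-rotate the new point cloud, and let ICP refine the result. The following is a minimal illustration in Python with OpenCV and Open3D, not the authors' implementation; the thresholds and the assumption that the heading change is a pure yaw are mine.

```python
import cv2
import numpy as np
import open3d as o3d

def principal_direction(gray_ceiling):
    """Estimate the dominant line orientation (rad) of a ceiling image via Hough lines."""
    edges = cv2.Canny(gray_ceiling, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=120)
    if lines is None:
        return 0.0
    # Fold all line angles into [0, pi/2) so perpendicular ceiling lines vote together.
    angles = np.mod(lines[:, 0, 1], np.pi / 2)
    return float(np.median(angles))

def prealigned_icp(src_pcd, dst_pcd, yaw_prev, yaw_curr, max_dist=0.05):
    """Pre-rotate the source cloud by the heading change, then let ICP fine-tune."""
    dyaw = yaw_curr - yaw_prev
    init = np.eye(4)
    init[:3, :3] = o3d.geometry.get_rotation_matrix_from_xyz((0.0, 0.0, dyaw))
    result = o3d.pipelines.registration.registration_icp(
        src_pcd, dst_pcd, max_dist, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```

Because the pre-alignment already supplies the heading, ICP only has to correct a small residual motion, which is what makes the data association tractable.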

2.
张晨阳  黄腾  吴壮壮 《计算机工程》2022,48(1):236-244+252
Traditional RGB-D visual simultaneous localization and mapping (SLAM) algorithms produce erroneous data associations when identifying dynamic features in dynamic scenes, which degrades the accuracy of pose estimation. An RGB-D SLAM algorithm suited to dynamic scenes is proposed. A new cross-platform deep-learning framework is used to detect dynamic semantic features in the scene and to segment the corresponding dynamic semantic feature regions. The depth values of point features are then clustered by combining K-means clustering of the depth image with the dynamic semantic feature regions; dynamic feature points are removed according to the clustering result, and the RGB-D camera pose is computed from the remaining feature points. Experimental results show that, compared with algorithms such as ORB-SLAM2, OFD-SLAM and MR-SLAM, the proposed algorithm reduces tracking error in dynamic scenes and improves the accuracy and robustness of camera pose estimation, achieving a root-mean-square error of the absolute camera trajectory of about 0.019 m on the TUM dynamic dataset.
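A minimal sketch of the depth-clustering step described above, assuming the segmentation network already provides a per-pixel dynamic mask; the function names, cluster count and the 0.5 rejection ratio are illustrative, not the authors' values.

```python
import numpy as np
from sklearn.cluster import KMeans

def filter_dynamic_features(keypoints_uv, depth_map, dynamic_mask, k=3):
    """Cluster feature depths and drop clusters dominated by semantically dynamic pixels.

    keypoints_uv : (N, 2) integer pixel coordinates of extracted features
    depth_map    : (H, W) depth image in metres
    dynamic_mask : (H, W) boolean mask from the segmentation network
    """
    u, v = keypoints_uv[:, 0], keypoints_uv[:, 1]
    depths = depth_map[v, u]
    valid = depths > 0
    labels = np.full(len(depths), -1)
    labels[valid] = KMeans(n_clusters=k, n_init=10).fit_predict(depths[valid, None])

    keep = np.ones(len(depths), dtype=bool)
    for c in range(k):
        in_cluster = labels == c
        if in_cluster.any():
            # Reject the whole cluster if most of its members lie on dynamic pixels.
            dyn_ratio = dynamic_mask[v[in_cluster], u[in_cluster]].mean()
            keep[in_cluster] = dyn_ratio < 0.5
    return keep
```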

3.
Tian  Yuan  Zhou  Xiaolei  Wang  Xuefan  Wang  Zhifeng  Yao  Huang 《Multimedia Tools and Applications》2021,80(14):21041-21058

Spatial position consistency and occlusion consistency are two important problems in augmented reality systems. In this paper, we propose a novel method that addresses the registration problem and the occlusion problem simultaneously using an RGB-D camera. First, to solve the image-alignment errors caused by the imaging mode of the RGB-D camera, we develop a depth-map inpainting method that combines the FMM with RGB-D information. Second, we establish an automatic method, based on the depth histogram, to identify the close-range mode and thereby solve the registration failures caused by hardware limitations; in the close-range mode, a registration method combining fast ICP and ORB is adopted to compute the camera pose. Third, we develop an occlusion handling method based on a geometric analysis of the scene. Several experiments were performed to validate the performance of the proposed method. The experimental results indicate that our method obtains stable and accurate registration and occlusion-handling results in both the close-range and non-close-range modes. Moreover, the mutual occlusion problem is handled effectively, and the proposed method satisfies the real-time requirements of augmented reality systems.
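The depth-inpainting stage builds on the fast marching method (FMM). OpenCV exposes a plain FMM inpainter (Telea's algorithm), so a simplified stand-in, which ignores the RGB guidance the authors add on top, might look like the following.

```python
import cv2
import numpy as np

def inpaint_depth_fmm(depth_u16, radius=5):
    """Fill depth holes with OpenCV's fast-marching (Telea) inpainting.

    Simplified stand-in: the paper combines FMM with RGB-D guidance; here the
    depth map is scaled to 8 bits purely so cv2.inpaint accepts it.
    """
    hole_mask = (depth_u16 == 0).astype(np.uint8)
    scale = 255.0 / max(depth_u16.max(), 1)
    depth_u8 = (depth_u16 * scale).astype(np.uint8)
    filled_u8 = cv2.inpaint(depth_u8, hole_mask, radius, cv2.INPAINT_TELEA)
    # Keep the original valid measurements; only holes take the inpainted values.
    filled = depth_u16.astype(np.float32)
    filled[hole_mask == 1] = filled_u8[hole_mask == 1] / scale
    return filled
```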


4.
To exploit the multiple planar features present in typical indoor environments, a real-time indoor RGB-D simultaneous localization and mapping (SLAM) system is proposed that uses planar features to optimize camera poses and mapping. In the front end, the system jointly estimates the camera pose with an iterative closest point (ICP) algorithm and a direct method; in the back end, it extracts planar features from keyframes, builds several plane-based constraints, optimizes the keyframe poses and plane parameters, and incrementally constructs a planar structural model of the environment. Experiments on several public data sequences show that, in plane-rich environments, the planar constraints reduce the accumulated error of pose estimation, and the system can build a planar model of the scene while consuming only a small amount of storage. Experiments in real environments verify the system's feasibility and practical value for indoor augmented reality.
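As an illustration of the per-keyframe plane-extraction step, the dominant planes of a point cloud can be segmented iteratively with RANSAC using Open3D; the constraint graph built on top of these planes in the paper is not reproduced here, and the thresholds are illustrative.

```python
import open3d as o3d

def extract_planes(pcd, max_planes=4, dist_thresh=0.02, min_inliers=2000):
    """Iteratively segment the dominant planes of a keyframe point cloud with RANSAC."""
    planes, remaining = [], pcd
    for _ in range(max_planes):
        if len(remaining.points) < min_inliers:
            break
        model, inliers = remaining.segment_plane(
            distance_threshold=dist_thresh, ransac_n=3, num_iterations=500)
        if len(inliers) < min_inliers:
            break
        planes.append(model)  # (a, b, c, d) with ax + by + cz + d = 0
        remaining = remaining.select_by_index(inliers, invert=True)
    return planes, remaining
```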

5.
赵宏  刘向东  杨永娟 《计算机应用》2020,40(12):3637-3643
Simultaneous localization and mapping (SLAM) is a key technology for autonomous robot navigation in unknown environments. To address the poor real-time performance and low accuracy of commonly used RGB-D SLAM systems, a new RGB-D SLAM system is proposed. First, image feature points are detected with the ORB algorithm, distributed uniformly using a quadtree-based strategy, and matched with a bag-of-words (BoW) model. Then, in the initial camera-pose estimation stage, PnP is combined with nonlinear optimization to provide the back end with an initial value closer to the optimum; in the back-end optimization, bundle adjustment (BA) iteratively refines this initial value to obtain the optimal camera pose. Finally, using the correspondence between camera poses and per-frame point clouds, all point-cloud data are registered into a single coordinate system to obtain a dense point-cloud map of the scene, which is recursively compressed with an octree into a 3D map suitable for robot navigation. On the TUM RGB-D dataset, the proposed RGB-D SLAM system is compared with RGB-D SLAMv2 and ORB-SLAM2; the experimental results show that it achieves a better overall balance of real-time performance and accuracy.
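The "PnP followed by nonlinear refinement" initialisation can be sketched with OpenCV as below; the RANSAC threshold and iteration count are illustrative, and the bundle-adjustment back end is omitted.

```python
import cv2
import numpy as np

def initial_pose_pnp(pts3d, pts2d, K):
    """Estimate an initial camera pose from 3D-2D matches: RANSAC PnP + LM refinement.

    pts3d : (N, 3) map points; pts2d : (N, 2) matched pixel observations; K : 3x3 intrinsics.
    """
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float32), pts2d.astype(np.float32), K, None,
        iterationsCount=100, reprojectionError=3.0)
    if not ok:
        return None
    # Non-linear (Levenberg-Marquardt) refinement on the inlier set.
    rvec, tvec = cv2.solvePnPRefineLM(
        pts3d[inliers[:, 0]].astype(np.float32),
        pts2d[inliers[:, 0]].astype(np.float32), K, None, rvec, tvec)
    return rvec, tvec
```

The refined pose then serves as the initial value that BA improves further in the back end.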

7.
To address the tendency of traditional ICP (iterative closest point) to fall into local optima and produce large matching errors, a new double constraint on Euclidean-distance and angle thresholds is proposed, and an indoor mobile-robot RGB-D SLAM (simultaneous localization and mapping) system based on a Kinect is built on top of it. First, the Kinect captures colour and depth information of the indoor environment; image features are extracted and matched, and 3D point-cloud correspondences are established from the camera intrinsics and per-pixel depth values. The RANSAC (random sample consensus) algorithm then rejects outliers to complete the coarse point-cloud alignment, and the improved registration algorithm performs the fine alignment. Finally, weights are introduced into keyframe selection, and the robot poses are optimized with the g2o (general graph optimization) framework. Experiments demonstrate the effectiveness and feasibility of the method, which improves the accuracy of the 3D point-cloud map and estimates the robot's trajectory.
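The double constraint on Euclidean distance and normal angle amounts to a correspondence-rejection step inside each ICP iteration; a NumPy sketch with illustrative thresholds follows.

```python
import numpy as np

def filter_correspondences(src_pts, dst_pts, src_normals, dst_normals,
                           max_dist=0.05, max_angle_deg=20.0):
    """Keep only point pairs that satisfy both a Euclidean-distance and a
    normal-angle threshold, as a double constraint inside each ICP iteration."""
    dist_ok = np.linalg.norm(src_pts - dst_pts, axis=1) < max_dist
    cos_angle = np.sum(src_normals * dst_normals, axis=1)  # unit normals assumed
    angle_ok = cos_angle > np.cos(np.deg2rad(max_angle_deg))
    return dist_ok & angle_ok
```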

8.

To address dense 3D modelling in complex indoor environments, a simultaneous localization and 3D mapping method for mobile robots based on an RGB-D camera is proposed. The method acquires environment information with an RGB-D camera mounted on the mobile robot and builds a hybrid pose-estimation scheme that combines a point-cloud and texture weighting model with local texture constraints, ensuring localization accuracy while reducing the failure rate. Under a keyframe selection mechanism, and combined with visual loop-closure detection, the tree-based network optimization (TORO) algorithm minimizes the loop-closure error to achieve globally consistent optimization of the 3D map. Experimental results in indoor environments verify the effectiveness and feasibility of the proposed algorithm.


9.
Although the introduction of commercial RGB-D sensors has enabled significant progress in visual navigation methods for mobile robots, the structured-light-based sensors, like Microsoft Kinect and Asus Xtion Pro Live, have some important limitations with respect to their range, field of view, and depth measurement accuracy. The recent introduction of the second-generation Kinect, which is based on the time-of-flight measurement principle, brought to robotics and computer vision researchers a sensor that overcomes some of these limitations. However, as the new Kinect is, just like the older one, intended for computer games and human motion capture rather than for navigation, it is unclear how much navigation methods, such as visual odometry and SLAM, can benefit from the improved parameters. While there are many publicly available RGB-D data sets, only a few of them provide the ground-truth information necessary for evaluating navigation methods, and to the best of our knowledge, none of them contains sequences registered with the new version of Kinect. Therefore, this paper describes a new RGB-D data set, which is a first attempt to systematically evaluate indoor navigation algorithms on data from two different sensors in the same environment and along the same trajectories. This data set contains synchronized RGB-D frames from both sensors and the appropriate ground truth from an external motion capture system based on distributed cameras. We describe the data registration procedure in detail and then evaluate our RGB-D visual odometry algorithm on the obtained sequences, investigating how the specific properties and limitations of both sensors influence the performance of this navigation method.

10.
RGB-D cameras like PrimeSense and Microsoft Kinect are popular sensors in simultaneous localization and mapping research on mobile robots because they can provide both vision and depth information. Most state-of-the-art RGB-D SLAM systems employ the Iterative Closest Point (ICP) algorithm to align point features, whose spatial positions are computed from the corresponding depth data of the sensors. However, the depth measurements of features are often disturbed by noise because visual features tend to lie at the margins of real objects. In order to reduce the estimation error, we propose a method that extracts and selects the features with reliable depth values, i.e., planar point features. The planar features benefit the accuracy and robustness of traditional ICP, while holding a reasonable computation cost for real-time applications. An efficient RGB-D SLAM system based on planar features is also demonstrated, with trajectory and map results from open datasets and a physical robot in real-world experiments.
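One way to realise the "planar point feature" selection is to test how planar the local depth neighbourhood of each feature is, for example via the smallest eigenvalue of its 3D covariance. The sketch below assumes the neighbourhood has already been back-projected to 3D; the threshold is illustrative and not taken from the paper.

```python
import numpy as np

def is_planar_feature(points_3d_patch, planarity_thresh=1e-4):
    """Accept a feature if the 3D points back-projected from its depth
    neighbourhood are well explained by a plane (small smallest PCA eigenvalue)."""
    if len(points_3d_patch) < 10:
        return False
    centred = points_3d_patch - points_3d_patch.mean(axis=0)
    cov = centred.T @ centred / len(centred)
    eigvals = np.linalg.eigvalsh(cov)  # ascending order
    return eigvals[0] < planarity_thresh
```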

11.
Wei  Hongyu  Zhang  Tao  Zhang  Liang 《Multimedia Tools and Applications》2021,80(21-23):31729-31751

As a research hotspot in robotics, simultaneous localization and mapping (SLAM) has made great progress in recent years, but few SLAM algorithms take dynamic or movable targets in the scene into account. In this paper, a robust new RGB-D SLAM method with dynamic-area detection for dynamic environments, named GMSK-SLAM, is proposed. Most existing work simply eliminates the whole dynamic targets. Although rejecting dynamic objects can increase the accuracy of robot positioning to a certain extent, this reduces the number of available feature points in the image, and the lack of sufficient feature points seriously affects the subsequent precision of positioning and mapping for feature-based SLAM. The proposed GMSK-SLAM method combines the Grid-based Motion Statistics (GMS) feature-matching method with the K-means clustering algorithm to distinguish dynamic areas in the images while retaining static information from dynamic environments, which effectively increases the number of reliable feature points and keeps more environment features. This method achieves a large improvement in localization accuracy in dynamic environments. Finally, extensive experiments were conducted on the public TUM RGB-D dataset. Compared with ORB-SLAM2 and RGB-D SLAM, our system achieves 97.3% and 90.2% improvements, respectively, in dynamic-environment localization as measured by root-mean-square error. The empirical results show that the proposed algorithm effectively eliminates the influence of dynamic objects and achieves comparable or better performance than state-of-the-art methods.
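The dynamic-area idea (cluster the motion of GMS-filtered matches and reject clusters inconsistent with the dominant, camera-induced motion) can be approximated as follows. This is a rough stand-in, not the authors' code; the cluster count and pixel tolerance are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def find_dynamic_matches(pts_prev, pts_curr, k=3, tol=2.0):
    """Cluster match displacements; clusters far from the dominant motion are
    treated as dynamic-area candidates."""
    disp = pts_curr - pts_prev                      # (N, 2) pixel displacements
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(disp)
    counts = np.bincount(labels, minlength=k)
    dominant = counts.argmax()                      # assume the background dominates
    centres = np.array([disp[labels == c].mean(axis=0) for c in range(k)])
    dyn_clusters = [c for c in range(k)
                    if np.linalg.norm(centres[c] - centres[dominant]) > tol]
    return np.isin(labels, dyn_clusters)            # True where a match looks dynamic
```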


12.
Indoor scene reconstruction based on an RGB-D depth camera
Objective: Reconstructing coloured 3D scene models with real texture is an important research topic in computer vision. Because indoor scenes are complex and the sampled image sequences are long with irregular motion, existing 3D reconstruction algorithms suffer from limited reconstruction scale and poor reconstruction of local detail. Method: Building on the RGBD-SLAM algorithm, two improvements are proposed. First, plane information from the depth map is added to the frame-to-frame registration algorithm, improving its robustness and accuracy. Second, during truncated signed distance function (TSDF) volume reconstruction, an exponential weight function is proposed that reduces the influence of camera depth distortion on the reconstruction better than the usual weight function. Results: The method yields better camera pose estimates than RGBD-SLAM, reducing the average absolute path error by 1.3 cm and producing better reconstructions. Conclusion: The method effectively improves the accuracy of camera pose estimation and can be applied to indoor scene reconstruction.
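The exponential TSDF weight is not spelled out above; a plausible form, consistent with down-weighting far-range (more distorted) depth measurements during fusion, is

$$ w(d)=\exp\bigl(-\lambda\,(d-d_{\min})\bigr), \qquad D_k(\mathbf{x})=\frac{W_{k-1}(\mathbf{x})\,D_{k-1}(\mathbf{x})+w(d)\,\phi_k(\mathbf{x})}{W_{k-1}(\mathbf{x})+w(d)}, \qquad W_k(\mathbf{x})=W_{k-1}(\mathbf{x})+w(d), $$

where d is the depth measured at the voxel's projection, φ_k the newly computed truncated signed distance, and λ controls how quickly distant measurements are discounted; the paper's exact function may differ.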

13.
Algorithm frameworks based on feature-point matching are mature and widely used in simultaneous localization and mapping (SLAM). However, in complex and changeable indoor environments, feature-point-based SLAM currently has two major problems: degraded pose-estimation accuracy caused by dynamic objects interfering with the SLAM system, and tracking loss caused by the lack of feature points in weak-texture scenes. To address these problems, we present a robust, real-time RGB-D SLAM algorithm based on ORB-SLAM3. For interference caused by indoor moving objects, we add the improved lightweight object-detection network YOLOv4-tiny to detect dynamic regions, and the dynamic features in those regions are then eliminated in the tracking stage. For indoor weak-texture scenes, the system extracts surface features alongside point features and fuses the two to track the camera pose. Experiments on the public TUM RGB-D data sets show that, compared with the ORB-SLAM3 algorithm in highly dynamic scenes, the root-mean-square error (RMSE) of the absolute path error of the proposed algorithm improves by an average of 94.08%, and the camera pose is tracked without loss over time. The algorithm takes an average of 34 ms to track each frame using only a CPU, which is sufficiently real-time and practical. Compared with other similar algorithms, it exhibits excellent real-time performance and accuracy. We also used a Kinect camera to evaluate the algorithm in a complex indoor environment, where it likewise showed high robustness and real-time performance. In summary, the algorithm not only copes with the interference caused by dynamic objects but also runs stably in open indoor weak-texture scenes.
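The "drop features inside detected dynamic regions" step reduces to a simple box test once YOLOv4-tiny has produced its detections; the sketch below assumes an (x1, y1, x2, y2) box format and is illustrative only.

```python
import numpy as np

def remove_features_in_boxes(keypoints_uv, boxes):
    """Discard feature points that fall inside any detected dynamic bounding box.

    keypoints_uv : (N, 2) pixel coordinates; boxes : list of (x1, y1, x2, y2).
    """
    keep = np.ones(len(keypoints_uv), dtype=bool)
    for x1, y1, x2, y2 in boxes:
        inside = ((keypoints_uv[:, 0] >= x1) & (keypoints_uv[:, 0] <= x2) &
                  (keypoints_uv[:, 1] >= y1) & (keypoints_uv[:, 1] <= y2))
        keep &= ~inside
    return keep
```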

14.
The objective of this article is to provide a generalized framework for a novel method that combines and fuses different types of measurements for pose estimation. The proposed method jointly minimizes the different metric errors as a single n-dimensional measurement vector without requiring a scaling factor to tune their relative importance. This paper is an extended version of previous works that introduced the point-to-hyperplane Iterative Closest Point (ICP) approach, in which an increased convergence domain and faster alignment were demonstrated by considering a four-dimensional measurement vector (3D Euclidean points + intensity). The method retains the advantages of the classic point-to-plane ICP method but extends it to higher dimensions. For demonstration purposes, this paper focuses on an RGB-D sensor that provides colour and depth measurements simultaneously, from which an optimal error in higher dimensions is minimized. Results on both simulated and real environments are provided, and the performance of the proposed method is demonstrated on real-time visual SLAM.
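In notation suggested by the abstract (not taken verbatim from the paper), the point-to-hyperplane residual for a 4D measurement combining geometry and intensity is

$$ e_i=\mathbf{n}_i^{\top}\!\left(\begin{bmatrix}T\,\mathbf{p}_i\\ I(\mathbf{p}_i)\end{bmatrix}-\mathbf{q}_i\right),\qquad \mathbf{n}_i,\mathbf{q}_i\in\mathbb{R}^4, $$

where T is the rigid transform acting on the 3D part of the source point p_i, I(·) its intensity, q_i the corresponding target point augmented with intensity, and n_i the unit normal of the hyperplane fitted locally to the target's 4D points. Because the error is projected onto a single normal direction, no scale factor is needed to balance the metric and photometric components.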

15.
We tackle the task of dense 3D reconstruction from RGB-D data. Contrary to the majority of existing methods, we focus not only on trajectory estimation accuracy, but also on reconstruction precision. The key technique is SDF-2-SDF registration, which is a correspondence-free, symmetric, dense energy minimization method, performed via the direct voxel-wise difference between a pair of signed distance fields. It has a wider convergence basin than traditional point cloud registration and cloud-to-volume alignment techniques. Furthermore, its formulation allows for straightforward incorporation of photometric and additional geometric constraints. We employ SDF-2-SDF registration in two applications. First, we perform small-to-medium scale object reconstruction entirely on the CPU. To this end, the camera is tracked frame-to-frame in real time. Then, the initial pose estimates are refined globally in a lightweight optimization framework, which does not involve a pose graph. We combine these procedures into our second, fully real-time application for larger-scale object reconstruction and SLAM. It is implemented as a hybrid system, whereby tracking is done on the GPU, while refinement runs concurrently over batches on the CPU. To bound memory and runtime footprints, registration is done over a fixed number of limited-extent volumes, anchored at geometry-rich locations. Extensive qualitative and quantitative evaluation of both trajectory accuracy and model fidelity on several public RGB-D datasets, acquired with various quality sensors, demonstrates higher precision than related techniques.
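In its basic geometric form (before the photometric and additional geometric terms mentioned above are added), the SDF-2-SDF energy is a direct voxel-wise difference between the two signed distance fields:

$$ E_{\mathrm{geo}}(T)=\tfrac{1}{2}\sum_{\mathbf{v}\in\Omega}\bigl(\phi_{\mathrm{ref}}(\mathbf{v})-\phi_{\mathrm{cur}}(T\mathbf{v})\bigr)^{2}, $$

where φ_ref and φ_cur are the reference and current (truncated) SDFs, Ω the shared voxel domain, and T the pose being optimised. No explicit point correspondences are required, which is what widens the convergence basin.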

16.
艾青林  王威  刘刚江 《机器人》2022,44(4):431-442
To address the low localization accuracy and poor mapping quality of existing RGB-D SLAM (simultaneous localization and mapping) systems in dynamic indoor environments, an RGB-D SLAM algorithm based on grid segmentation and coupled dual maps is proposed. Based on homography motion compensation and a bidirectional-compensation optical flow method, grid-level motion segmentation is achieved from geometric connectivity and depth-image clustering results while keeping the algorithm fast. The camera pose is estimated by minimizing the reprojection error of feature points in the static regions. Combining the camera poses, RGB-D images, and grid-level motion-segmentation images, a sparse point-cloud map and a static octree map of the scene are built simultaneously and coupled; on keyframes, static map points are selected using grid segmentation and octree-map ray traversal, and the sparse point-cloud map is updated to maintain localization accuracy. Experimental results on public datasets and in real dynamic scenes show that the algorithm effectively improves camera pose estimation accuracy in dynamic indoor scenes and achieves real-time construction and updating of the static octree map. Moreover, the algorithm runs in real time on a standard CPU platform without additional computing resources such as a GPU.

17.
Visual Simultaneous Localization and Mapping (visual SLAM) has attracted more and more researchers in recent decades, and many state-of-the-art algorithms have been proposed with rather satisfactory performance in static scenarios. However, in dynamic scenarios, the performance of current visual SLAM algorithms degrades significantly due to the disturbance of dynamic objects. To address this problem, we propose a novel method which uses optical flow to distinguish and eliminate the dynamic feature points from the extracted ones, using RGB images as the only input. The static feature points are fed into the visual SLAM system for camera pose estimation. We integrate our method with the original ORB-SLAM system and validate the proposed method with the challenging dynamic sequences from the TUM dataset and our recorded office dataset. The whole system can work in real time. Qualitative and quantitative evaluations demonstrate that our method significantly improves the performance of ORB-SLAM in dynamic scenarios.
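A coarse stand-in for the optical-flow test described above: track the feature points with pyramidal Lucas-Kanade flow and flag those whose displacement deviates strongly from the median scene motion. The median-flow criterion and threshold are mine, not necessarily the paper's.

```python
import cv2
import numpy as np

def dynamic_feature_mask(prev_gray, curr_gray, prev_pts, dev_thresh=3.0):
    """Track features with LK optical flow and flag outliers against the median flow.

    prev_pts : (N, 1, 2) float32 pixel coordinates of features in the previous frame.
    Returns (tracked_ok, is_dynamic) boolean arrays of length N.
    """
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, None)
    tracked = status[:, 0] == 1
    flow = (curr_pts - prev_pts)[:, 0, :]
    median_flow = np.median(flow[tracked], axis=0)
    deviation = np.linalg.norm(flow - median_flow, axis=1)
    is_dynamic = tracked & (deviation > dev_thresh)
    return tracked, is_dynamic
```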

18.
To address the missing depth regions in images captured by the Kinect camera in RGB-D visual odometry, a fused optimization algorithm based on PnP (perspective-n-point) and ICP (iterative closest point) is proposed. When the traditional ICP algorithm iterates the camera pose, missing depth often causes feature points to be dropped, so the algorithm fails to converge or produces excessive error. The proposed algorithm checks the depth values of the feature points, builds a BA optimization model, and uses the g2o solver to jointly optimize the feature points and the camera pose. Experiments demonstrate the effectiveness of the method: it improves the accuracy of camera pose estimation and the convergence success rate of the algorithm, thereby improving the accuracy and robustness of RGB-D visual odometry.

19.
Simultaneous localization and mapping (SLAM) is a research hotspot in robotics and is considered key to achieving autonomous robot motion. Traditional RGB-D camera-based SLAM algorithms (RGB-D SLAM) compute the camera pose with SIFT (scale-invariant feature transform) descriptors and use the GPU-accelerated siftGPU algorithm to overcome the slow SIFT extraction, but most embedded devices lack sufficient GPU computing power, which limits their applicability. In addition, conventional algorithms are inefficient at loop-closure detection and have poor real-time performance. To address these problems, a SLAM algorithm combining ORB (oriented FAST and rotated BRIEF) features with a visual dictionary is proposed. In the front end, ORB features are first extracted from adjacent images; k-nearest-neighbor (kNN) matching then finds the nearest and second-nearest matches, a ratio test and a cross check remove mismatches, and an improved PROSAC-PnP (progressive sample consensus based perspective-n-point) algorithm computes the camera pose, yielding a high-precision estimate. In the back end, a loop-closure detection algorithm based on a visual dictionary is proposed to eliminate the accumulated error of robot motion. Loop-closure detection adds inter-frame constraints, and the general graph optimization (g2o) tool optimizes the pose graph to obtain globally consistent camera poses and point clouds. Tests and comparisons on the standard fr1 dataset show that the algorithm is highly robust.
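The front-end matching chain described above (kNN matching, ratio test, cross check) maps directly onto OpenCV; a sketch with an illustrative ratio threshold, leaving out the PROSAC-PnP stage, is shown below.

```python
import cv2

def match_orb(desc1, desc2, ratio=0.75):
    """kNN-match ORB descriptors, apply the ratio test, then keep only
    matches that survive a symmetric (cross) check."""
    bf = cv2.BFMatcher(cv2.NORM_HAMMING)

    def ratio_filter(knn):
        good = {}
        for pair in knn:
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                good[pair[0].queryIdx] = pair[0]
        return good

    fwd = ratio_filter(bf.knnMatch(desc1, desc2, k=2))
    bwd = ratio_filter(bf.knnMatch(desc2, desc1, k=2))
    # Cross check: keep (i -> j) only if the reverse match (j -> i) agrees.
    return [m for m in fwd.values()
            if m.trainIdx in bwd and bwd[m.trainIdx].trainIdx == m.queryIdx]
```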

20.
牛珉玉  黄宜庆 《机器人》2022,44(3):333-342
To address the degraded localization and mapping accuracy of visual SLAM (simultaneous localization and mapping) algorithms in dynamic environments, an RGB-D SLAM algorithm based on dynamic coupling and spatial data association is proposed. First, a semantic network produces preprocessed semantic segmentation images, and an edge-detection algorithm together with adjacent-semantics checks yields complete dynamic semantic objects. Second, a dense direct-method module provides an initial estimate of the camera pose; in computing the dynamic-coupling score, a spatial plane-consistency criterion and depth-information screening are used in addition to conventional dynamic-region culling. Then, the spatial data-association algorithm and the camera pose are used to update the map-point set in real time, and the camera pose is refined by minimizing the reprojection error together with a loop-closure optimization thread. Finally, the camera poses and the map-point set are used to build a dense octree map, extending dynamic-region culling from the image plane to 3D space and completing static map construction in dynamic environments. On high-dynamic sequences of the TUM dataset, the localization error of the proposed algorithm is about 90% smaller than that of the ORB-SLAM algorithm, effectively improving the localization accuracy and camera pose estimation accuracy of RGB-D SLAM.
