Similar Articles
20 similar articles found.
1.
Monocular visual odometry is the process of computing the egomotion of a vehicle purely from images of a single camera. This process involves extracting salient points from consecutive image pairs, matching them, and computing the motion using standard algorithms. This paper analyzes one of the most important steps toward accurate motion computation, which is outlier removal. The random sample consensus (RANSAC) has been established as the standard method for model estimation in the presence of outliers. RANSAC is an iterative method, and the number of iterations necessary to find a correct solution is exponential in the minimum number of data points needed to estimate the model. It is therefore of utmost importance to find the minimal parameterization of the model to estimate. For unconstrained motion [six degrees of freedom (DoF)] of a calibrated camera, this would be five correspondences. In the case of planar motion, the motion model complexity is reduced (three DoF) and can be parameterized with two points. In this paper we show that when the camera is installed on a nonholonomic wheeled vehicle, the model complexity reduces to two DoF and therefore the motion can be parameterized with a single‐point correspondence. Using a single‐feature correspondence for motion estimation is the lowest model parameterization possible and results in the most efficient algorithm for removing outliers, which we call 1‐point RANSAC. To support our method, we run many experiments on both synthetic and real data and compare the performance with state‐of‐the‐art approaches and with different vehicles, both indoors and outdoors. © 2011 Wiley Periodicals, Inc.
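The practical payoff of the minimal parameterization is easy to verify numerically. The sketch below (our illustration, not the paper's code) evaluates the standard RANSAC iteration bound for the three minimal sample sizes discussed above; the 50% inlier ratio and 99% confidence are assumptions chosen for illustration.

```python
# Standard RANSAC iteration bound: N = log(1 - p) / log(1 - w**s), where p is
# the desired probability of drawing at least one all-inlier sample, w the
# inlier ratio, and s the minimal sample size.
import math

def ransac_iterations(p=0.99, w=0.5, s=5):
    """Minimum number of RANSAC iterations for confidence p."""
    return math.ceil(math.log(1 - p) / math.log(1 - w ** s))

for s in (5, 2, 1):   # 5-point (6 DoF), 2-point (planar), 1-point (this paper)
    print(f"s = {s}: {ransac_iterations(s=s)} iterations")
# prints 146, 17 and 7 iterations: the cost drops exponentially with s
```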

2.
An important off‐road driving rule is to keep the vehicle wheels in existing ruts when possible. Rut following has the following benefits: (1) it prevents the ruts from serving as obstacles that can lead to undesirable vehicle vibrations and even vehicle instability at high speeds; (2) it improves vehicle safety on turns by utilizing the extra lateral force provided by the ruts to reduce lateral slippage and guide the vehicle through its path; (3) it improves the vehicle energy efficiency by reducing the energy wasted on compacting the ground; and (4) it increases vehicle traction when traversing soft terrains such as mud, sand, and snow. This paper first presents a set of field experiments that illustrate the improved energy efficiency and traction obtained by rut following. Then, a laser‐based rut detection and following system is proposed so that autonomous ground vehicles can benefit from the application of this off‐road driving rule. The proposed system utilizes a path planning algorithm to aid in the rut detection process and an extended Kalman filter to recursively estimate the parameters of a local model of the rut being followed. Experimental evaluation on a Pioneer 3‐AT robot shows that the proposed system is capable of detecting and following ruts in a variety of scenarios. © 2010 Wiley Periodicals, Inc.
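As a rough illustration of the recursive estimation step, the sketch below implements a deliberately simplified filter under our own assumptions: the local rut is reduced to a lateral offset d and a relative heading psi, and the laser provides an offset measurement at a fixed lookahead distance L. With this toy linear measurement model the EKF update reduces to the ordinary Kalman form; the paper's actual rut and motion models are richer.

```python
# Minimal sketch (our construction, not the paper's model): recursively
# estimate a local rut model x = [d, psi] (lateral offset d, relative heading
# psi) from laser offset measurements taken at a lookahead distance L.
import numpy as np

L = 2.0                        # assumed lookahead distance of the laser [m]
x = np.zeros(2)                # state: [d (m), psi (rad)]
P = np.diag([1.0, 0.1])        # state covariance
Q = np.diag([1e-3, 1e-4])      # process noise (random-walk prediction)
R = np.array([[0.05 ** 2]])    # laser offset measurement noise

def step(x, P, z):
    P = P + Q                              # predict: random walk
    H = np.array([[1.0, L]])               # z = d + L * psi (small angles)
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in np.array([[0.21], [0.19], [0.20]]):   # made-up offsets [m]
    x, P = step(x, P, z)
print(x)   # the observable combination d + L*psi is pulled toward ~0.2
```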

3.
Autonomous navigation of microaerial vehicles in environments that are simultaneously GPS‐denied and visually degraded, especially in dark, texture‐less, and dust‐ or smoke‐filled settings, is particularly hard. However, a potential solution arises if such aerial robots are equipped with long wave infrared thermal vision systems, which are unaffected by darkness and can penetrate many types of obscurants. In response, this study proposes a keyframe‐based thermal–inertial odometry estimation framework tailored to the exact data and concepts of operation of thermal cameras. The front‐end component of the proposed solution utilizes full radiometric data to establish reliable correspondences between thermal images, as opposed to operating on rescaled data as previous efforts have done. In parallel, by taking advantage of a keyframe‐based optimization back‐end, the proposed method is suitable for handling the periods of data interruption that are common with thermal cameras, while it also ensures the joint optimization of reprojection errors of 3D landmarks and inertial measurement errors. The developed framework was verified with respect to its resilience, performance, and ability to enable autonomous navigation in an extensive set of experimental studies, including multiple field deployments in severely degraded, dark, and obscurant‐filled underground mines.

4.
Stereo visual odometry based on the least median of squares
A stereo visual odometry method based on the least median of squares (LMedS) is proposed. Scale-invariant SIFT feature points in the images serve as landmarks, and a KD-tree-based nearest-neighbor search algorithm matches feature points between the left and right images and tracks them across consecutive frames. After triangulating the feature points into 3D, the robot's travel distance and heading are estimated with LMedS. Experiments show that the method is highly robust in inter-image matching, 3D landmark tracking, and robot motion estimation.
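The following self-contained sketch (our reconstruction of the general technique, not the authors' code) shows the LMedS idea applied to rigid 3D motion between two sets of triangulated landmarks: sample minimal three-point sets, fit (R, t) with the Kabsch/SVD method, and keep the hypothesis with the smallest median squared residual. Unlike RANSAC, no inlier threshold is needed.

```python
import numpy as np

def kabsch(A, B):
    """Least-squares rigid transform with B ~= R @ A + t, for 3xN point sets."""
    ca, cb = A.mean(axis=1, keepdims=True), B.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((B - cb) @ (A - ca).T)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # guard against reflections
    R = U @ D @ Vt
    return R, (cb - R @ ca).ravel()

def lmeds_motion(A, B, trials=200, seed=0):
    """A, B: 3xN matched 3D landmarks before/after the motion."""
    rng = np.random.default_rng(seed)
    best, best_med = None, np.inf
    for _ in range(trials):
        idx = rng.choice(A.shape[1], size=3, replace=False)  # minimal sample
        R, t = kabsch(A[:, idx], B[:, idx])
        res = np.sum((B - (R @ A + t[:, None])) ** 2, axis=0)
        med = np.median(res)           # LMedS score: median, not inlier count
        if med < best_med:
            best, best_med = (R, t), med
    return best
```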

5.
This paper extends recent progress in using single-beacon one-way-travel-time (OWTT) range measurements to constrain the XY position of autonomous underwater vehicles (AUVs). Traditional navigation algorithms have used OWTT measurements to constrain an inertial navigation system aided by a Doppler velocity log (DVL). These methodologies limit AUV applications to settings where DVL bottom-lock is available and require expensive strap-down sensors such as the DVL. As a result, deep-water, mid-water-column research has mostly been left untouched, and the cost of such sensors restricts the possibility of using multiple AUVs to explore a given area. This work presents a solution for accurate navigation and localization using a vehicle's odometry determined by its dynamic-model velocity and constrained by OWTT range measurements from a topside source beacon as well as from other AUVs operating in proximity. We present a comparison of two navigation algorithms: an extended Kalman filter (EKF) and a particle filter (PF). Both algorithms also incorporate a water-velocity bias estimator that further enhances the navigation accuracy and localization. Closed-loop online field results in local waters, as well as a real-time implementation over two days of field trials in Monterey Bay, California, during the Keck Institute for Space Studies oceanographic research project, demonstrate the accuracy of this methodology, with a root-mean-square error on the order of tens of meters relative to GPS position over a distance traveled of multiple kilometers.
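The core of how a single range constrains an XY estimate can be shown in a few lines. This is a minimal sketch under our own assumptions (made-up numbers, a static beacon, a plain EKF correction), not the paper's implementation: the measurement model is h(x) = ||x - b|| and its Jacobian is the unit vector from the beacon to the vehicle.

```python
import numpy as np

beacon = np.array([0.0, 0.0])          # known source position [m]
x = np.array([950.0, 480.0])           # dead-reckoned XY estimate [m]
P = np.diag([400.0, 400.0])            # grown odometry covariance [m^2]
R = 25.0                               # range measurement variance [m^2]

z = 1100.0                             # measured OWTT range [m]
d = x - beacon
r_pred = np.linalg.norm(d)             # h(x): predicted range
H = (d / r_pred).reshape(1, 2)         # Jacobian of h at x
S = H @ P @ H.T + R                    # innovation covariance (1x1)
K = (P @ H.T) / S                      # Kalman gain (2x1)
x = x + (K * (z - r_pred)).ravel()     # corrected position
P = (np.eye(2) - K @ H) @ P
print(x, np.linalg.norm(x - beacon))   # updated range moves toward 1100 m
```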

6.
《传感器与微系统》2019,(6):133-136
To improve the localization accuracy of monocular visual odometry, a feature-point matching method tailored to monocular VO is proposed. Feature points extracted by the FAST detector are triangulated with a Delaunay triangulation to capture their spatial relationships. Candidate matches are obtained with the Lucas-Kanade (LK) optical-flow algorithm, and the spatial relationships are used to flag suspected mismatches. SIFT descriptors are then computed for the suspect points, and mismatches are rejected according to descriptor similarity. Finally, the fundamental matrix is estimated from the surviving matches, and the pose is recovered with the help of scale information. Experimental results show that the algorithm raises the ratio of correct matches and improves the accuracy of monocular visual odometry.
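A hypothetical front-end sketch of the first two steps (file names, thresholds, and the downstream consistency test are our assumptions, not the paper's code): FAST keypoints, LK optical-flow candidates, and a Delaunay triangulation over the tracked points, whose edges could then be compared across frames to flag geometrically inconsistent matches.

```python
import cv2
import numpy as np
from scipy.spatial import Delaunay

prev_gray = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)  # hypothetical files
next_gray = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

fast = cv2.FastFeatureDetector_create(threshold=25)
kps = fast.detect(prev_gray, None)
pts = np.float32([kp.pt for kp in kps]).reshape(-1, 1, 2)

# Candidate matches via pyramidal Lucas-Kanade tracking.
nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
good_prev = pts[status.ravel() == 1].reshape(-1, 2)
good_next = nxt[status.ravel() == 1].reshape(-1, 2)

# Positional structure of the tracked points: Delaunay edges in frame 0.
tri = Delaunay(good_prev)
print(f"{len(good_prev)} tracked points, {len(tri.simplices)} triangles")
```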

7.
We present a system that estimates the motion of a stereo head, or a single moving camera, based on video input. The system operates in real time with low delay, and the motion estimates are used for navigational purposes. The front end of the system is a feature tracker. Point features are matched between pairs of frames and linked into image trajectories at video rate. Robust estimates of the camera motion are then produced from the feature tracks using a geometric hypothesize‐and‐test architecture. This generates motion estimates from visual input alone. No prior knowledge of the scene or the motion is necessary. The visual estimates can also be used in conjunction with information from other sources, such as a global positioning system, inertia sensors, wheel encoders, etc. The pose estimation method has been applied successfully to video from aerial, automotive, and handheld platforms. We focus on results obtained with a stereo head mounted on an autonomous ground vehicle. We give examples of camera trajectories estimated in real time purely from images over previously unseen distances (600 m) and periods of time. © 2006 Wiley Periodicals, Inc.

8.
With the rapid development of mobile robots, visual odometry has become one of the main approaches to navigation and localization with visual sensors. This article describes an implementation of monocular visual odometry based on a single camera: feature points are extracted with the SURF algorithm and tracked with LK optical flow, which is far more computationally efficient than traditional descriptor matching; the translation and rotation are then computed from the tracked features to achieve odometry-style localization. The mathematical principles of visual odometry, the SURF and LK algorithms with their derivations, and the scale ambiguity inherent to monocular visual odometry are presented in detail. Finally, comparative experiments establish the advantages of the LK algorithm and guide the choice of optimal parameters.
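The scale ambiguity the article discusses can be seen directly in standard tooling. In the sketch below (our illustration; the intrinsics K are assumed values), the essential matrix yields the rotation and only the direction of translation: cv2.recoverPose returns a unit-norm t, so the absolute scale must come from another source.

```python
import cv2
import numpy as np

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])  # assumed intrinsics

def relative_pose(pts0, pts1):
    """pts0, pts1: Nx2 float32 pixel coordinates of tracked features."""
    E, inl = cv2.findEssentialMat(pts0, pts1, K, method=cv2.RANSAC,
                                  prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts0, pts1, K, mask=inl)
    return R, t   # np.linalg.norm(t) == 1: the scale is not recoverable here
```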

9.
Objective: Visual odometry (VO) achieves respectable self-localization accuracy with nothing more than an ordinary camera and has become a research hotspot in computer vision and robotics. However, most current research and applications assume a static scene, i.e., the camera motion is the only motion model present, and cannot handle multiple motion models. This paper therefore proposes a multi-motion visual odometry method based on split-and-merge motion segmentation that recovers the motion states of multiple moving objects in addition to the camera motion. Method: On top of the traditional visual odometry framework, a multi-model fitting method segments the multiple motion models in a dynamic scene, and RANSAC (random sample consensus) estimates a motion-parameter instance for each model. The camera motion and the motion of each moving object are then transformed into a common coordinate frame, yielding the camera's visual odometry result together with the pose of every moving object at every time step. Finally, a local-window bundle adjustment directly refines the camera pose and the computed poses of the camera relative to each moving object; the inliers of the camera motion model and the per-frame camera-to-object motion parameters are used to optimize the trajectories of the multiple motion models. Results: The proposed frame-to-frame motion segmentation is accurate and robust, with segmentation accuracy close to 100% on consecutive frames, which in turn guarantees the accuracy of the subsequent per-model parameter estimation. The method effectively estimates not only the camera pose but also the poses of the salient moving objects in the scene; over each path segment, the mean position error of both the camera self-localization and the moving-object localization stays below 6%. Conclusion: The method simultaneously segments the camera's own motion model and the motion models of differently moving dynamic objects in a dynamic scene, estimates the absolute trajectories of the camera and of each dynamic object at the same time, and thereby realizes a multi-motion visual odometry pipeline.
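As a greedy stand-in for the split-and-merge segmentation (the paper's actual algorithm is more elaborate), the sketch below illustrates sequential multi-model fitting: repeatedly fit a fundamental matrix with RANSAC, peel off its inliers as the support of one motion model, and re-run on the remainder.

```python
import cv2
import numpy as np

def segment_motions(pts0, pts1, min_support=15, max_models=5):
    """pts0, pts1: Nx2 correspondences; returns a list of inlier index sets."""
    remaining = np.arange(len(pts0))
    models = []
    for _ in range(max_models):
        if len(remaining) < min_support:
            break
        F, mask = cv2.findFundamentalMat(pts0[remaining], pts1[remaining],
                                         cv2.FM_RANSAC, 1.0, 0.999)
        if F is None:
            break
        inl = remaining[mask.ravel() == 1]
        if len(inl) < min_support:
            break
        models.append(inl)                     # one motion model's support
        remaining = remaining[mask.ravel() == 0]
    return models
```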

10.
Classifier combination methods have proved to be an effective tool for increasing the performance of classification techniques in any pattern recognition application. Despite a significant number of publications describing successful classifier combination implementations, the theoretical basis is still not mature enough, and the achieved improvements are inconsistent. In this paper, we propose a novel statistical validation technique, a correlation-based classifier combination technique, for combining classifiers in any pattern recognition problem. This validation has a significant influence on the performance of combinations, and its utilization is necessary for a complete theoretical understanding of combination algorithms. The analysis presented is statistical in nature but promises to lead to a class of algorithms for rank-based decision combination. The theoretical and practical potential of the approach is illustrated by applying it to two standard pattern recognition datasets, the handwritten digit recognition and letter image recognition datasets from the UCI Machine Learning Repository (http://www.ics.uci.edu/_mlearn). An empirical evaluation using eight well-known distinct classifiers confirms the validity of our approach compared with other multiple-classifier combination algorithms. Finally, we also suggest a methodology for determining the best mix of individual classifiers.
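For concreteness, here is a minimal example of rank-based decision combination using a plain Borda count; the paper's correlation-based technique is different, but this shows the rank-to-score mechanics that such combinations build on.

```python
import numpy as np

def borda_combine(rankings, n_classes):
    """rankings: list of arrays, each a classifier's class indices ordered
    from most to least preferred; returns the combined winning class."""
    scores = np.zeros(n_classes)
    for ranking in rankings:
        for place, cls in enumerate(ranking):
            scores[cls] += n_classes - 1 - place   # top rank earns most points
    return int(np.argmax(scores)), scores

# Three classifiers ranking 4 classes for one test sample (made-up data):
rankings = [np.array([2, 0, 1, 3]), np.array([0, 2, 3, 1]), np.array([2, 1, 0, 3])]
winner, scores = borda_combine(rankings, 4)
print(winner, scores)   # class 2 wins with the highest summed rank score
```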

11.
Existing deep-learning visual odometry (VO) methods generally struggle to adapt to new environments when the training samples differ from the deployment scenes. An online-updating monocular visual odometry algorithm, OUMVO, is therefore proposed. Its distinguishing feature is that, during deployment, the pose-estimation network is optimized online with the image sequence captured in real time, improving the network's generalization and its applicability to new environments. The method is self-supervised, requiring no extra ground-truth annotation, and uses a Transformer to model the image stream as a sequence, fully exploiting the visual information within a local window to raise pose-estimation accuracy. This avoids the limitation of traditional methods that estimate pose from only two adjacent frames, and also overcomes the inability of RNN-based sequence models to compute in parallel. In addition, a geometric consistency constraint in image space resolves the scale-drift problem of traditional monocular visual odometry. Quantitative and qualitative experiments on the KITTI dataset show that OUMVO surpasses existing state-of-the-art monocular visual odometry methods in both pose-estimation accuracy and adaptability to new environments.

12.
To obtain accurate pose information for mobile-robot obstacle avoidance, a monocular visual odometry algorithm that acquires the robot's pose in real time is proposed. The algorithm extracts SURF (Speeded Up Robust Features) points on the road surface from consecutive frames captured by a single camera, uses the epipolar geometry constraint to cope with the difficulty of matching road-surface features, and recovers the robot's pose change by computing the planar homography matrix. Experimental results show that the algorithm achieves high accuracy and real-time performance.
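A sketch of the homography step with assumed intrinsics K (hypothetical values; the matched road-surface points are inputs): cv2.decomposeHomographyMat returns up to four (R, t, n) candidates, which must then be disambiguated, for example with the epipolar and visibility constraints the abstract alludes to.

```python
import cv2
import numpy as np

K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])  # assumed intrinsics

def pose_from_ground_plane(pts0, pts1):
    """pts0, pts1: Nx2 float32 pixel coords of road-surface features."""
    H, mask = cv2.findHomography(pts0, pts1, cv2.RANSAC, 3.0)
    n_sol, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    # Up to four candidate decompositions; pick the physical one with
    # visibility / epipolar checks before integrating the motion.
    return list(zip(Rs, ts, normals))
```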

13.
《计算机工程与科学》2017,(10):1862-1869
Indoor visual odometry is strongly affected by illumination, while modern mobile devices commonly carry cameras on both sides. Exploiting this, a visual odometry algorithm that uses both cameras simultaneously is proposed. By evaluating the state of each single-side odometry and restarting it when necessary, the system keeps producing stable output when one side fails, improving the robustness of the visual odometry. When both sides run normally, the outputs of the two odometries are fused with a Kalman filter, improving accuracy. Experiments in a variety of real indoor environments and at various movement speeds show that the algorithm effectively maintains odometry output when one side fails and clearly improves output accuracy when both sides are available.
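The benefit of covariance-weighted fusion is easy to demonstrate in isolation. The toy sketch below (our formulation with made-up numbers; the paper's filter is temporal) combines two independent estimates of the same pose by the static Kalman/information rule, showing why the better-conditioned side dominates the fused output.

```python
import numpy as np

def fuse(x1, P1, x2, P2):
    """Covariance-weighted fusion of two independent estimates."""
    P = np.linalg.inv(np.linalg.inv(P1) + np.linalg.inv(P2))
    x = P @ (np.linalg.inv(P1) @ x1 + np.linalg.inv(P2) @ x2)
    return x, P

x1, P1 = np.array([1.00, 2.00]), np.diag([0.04, 0.04])   # confident side
x2, P2 = np.array([1.30, 1.80]), np.diag([0.36, 0.36])   # noisy side
x, P = fuse(x1, P1, x2, P2)
print(x)   # [1.03 1.98]: pulled only slightly toward the noisy side
```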

14.
To estimate the 3D path of a mobile service robot in an unknown environment, a Kinect-based method for estimating the robot's trajectory in real time is designed. The method uses a Kinect to capture color and depth information over consecutive frames as the robot moves. First, SURF feature points are extracted from the target and reference frames and matched. Then, combining the depth information, the robot's initial six-degree-of-freedom (DOF) pose is computed with the classical P3P method inside an improved random sample consensus (RANSAC) scheme. Finally, a nonlinear least-squares algorithm minimizes the bidirectional reprojection error over the inliers of the initial pose to refine it, from which the robot's trajectory is obtained. Odometry accuracy is also compared across different combinations of feature detectors and descriptors. Experimental results show that the proposed method reduces the odometry error to 3.1% while meeting real-time requirements, providing valuable prior information for simultaneous localization and mapping.
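A compact sketch of the central pose step under our own assumptions (typical Kinect intrinsics, hypothetical input arrays, OpenCV >= 4.1 for the refinement call; not the authors' pipeline): P3P inside RANSAC gives the initial 6-DOF pose, and a Levenberg-Marquardt refinement on the inliers tightens it.

```python
import cv2
import numpy as np

K = np.array([[525.0, 0, 319.5], [0, 525.0, 239.5], [0, 0, 1]])  # typical Kinect

def estimate_pose(obj_pts, img_pts):
    """obj_pts: Nx3 float32 points from depth; img_pts: Nx2 matched pixels."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        obj_pts, img_pts, K, None,
        flags=cv2.SOLVEPNP_P3P, reprojectionError=3.0, confidence=0.999)
    if not ok:
        return None
    # Nonlinear least-squares refinement on the inlier set.
    rvec, tvec = cv2.solvePnPRefineLM(obj_pts[inliers.ravel()],
                                      img_pts[inliers.ravel()], K, None,
                                      rvec, tvec)
    return cv2.Rodrigues(rvec)[0], tvec
```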

15.
Visual Odometry (VO) is one of the fundamental building blocks of modern autonomous robot navigation and mapping. While most state-of-the-art techniques use geometrical methods for camera ego-motion estimation from optical flow vectors, learning approaches have been proposed in recent years to solve this problem. These approaches are still emerging and there is much left to explore. This work follows this track, applying kernel machines to monocular visual ego-motion estimation. Unlike geometrical methods, learning-based approaches to monocular visual odometry allow issues like scale estimation and camera calibration to be overcome, assuming the availability of training data. While some previous works have proposed learning paradigms for VO, to our knowledge no extensive evaluation of kernel-based methods applied to visual odometry has been conducted. To fill this gap, in this work we consider publicly available datasets and perform several experiments in order to set a comparison baseline with traditional techniques. Experimental results show good performance from the learning algorithms, establishing them as a solid alternative to geometrical techniques, which are computationally intensive and complex to implement.
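To make the learning paradigm concrete, the sketch below trains a kernel ridge regressor from a stand-in optical-flow feature vector to a frame-to-frame ego-motion target on synthetic data. Everything here (features, targets, hyperparameters) is assumed for illustration; the paper evaluates real kernel machines on public datasets.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))                 # stand-in: binned flow vectors
w = rng.normal(size=(40, 3))
y = np.tanh(X @ w) + 0.05 * rng.normal(size=(500, 3))   # [dx, dz, dyaw]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.02).fit(X_tr, y_tr)
rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
print(f"held-out RMSE: {rmse:.3f}")            # no geometric model involved
```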

16.
A survey of visual odometry techniques
Visual odometry estimates motion from visual information in an odometry-like, incremental fashion. As a new navigation and localization technique, it has been applied successfully to autonomous mobile robots. This survey first introduces the two common kinds of visual odometry, monocular and stereo; it then discusses the state of research in detail from three angles: robustness, real-time performance, and accuracy; finally, it looks ahead at development trends for visual odometry.

17.
Visual odometry estimates a mobile robot's pose by analyzing the image stream captured by a camera. To analyze the state of the art in depth, this survey reviews visual odometry techniques and the latest research results in the context of several advanced visual odometry systems. It first outlines the concept and history of visual odometry and introduces the mathematical formulation and taxonomy of the problem; it then elaborates on the key techniques, including the feature module, frame-to-frame pose estimation, and drift reduction; it also covers developments in deep-learning-based visual odometry. Finally, it summarizes the open problems of visual odometry and looks at future trends.

18.
Photometric and viewpoint changes degrade the stability of feature extraction in feature-based visual odometry (VO). To counter this, a monocular VO method built on deep-learned feature points is proposed. A DSP feature-point detector is obtained by training a self-supervised deep learning network. The training images are first photometrically adjusted with a nonlinear per-pixel brightness adjustment; redundant DSP feature points are then removed with non-maximum suppression, and the nearest-neighbor matcher is upgraded to a bidirectional nearest-neighbor matcher to solve the feature-matching problem; finally, a reprojection-error-minimization problem is set up to solve for the optimized pose and 3D point parameters. Validation on the HPatches and Visual Odometry datasets shows that the DSP detector makes feature matching more robust to photometric and viewpoint changes; without back-end optimization, the method's localization root-mean-square error is clearly lower than that of the ORB method while remaining real-time, offering a new approach for feature-based VO.
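The bidirectional nearest-neighbor rule is simple to state in code. The sketch below is our own minimal implementation (synthetic descriptors): a pair is accepted only if each descriptor is the other's nearest neighbor, which discards most one-way mismatches.

```python
import numpy as np

def mutual_nn_matches(desc0, desc1):
    """desc0: MxD, desc1: NxD descriptor arrays; returns (i, j) index pairs."""
    d = np.linalg.norm(desc0[:, None, :] - desc1[None, :, :], axis=2)
    fwd = d.argmin(axis=1)        # best match in image 1 for each of image 0
    bwd = d.argmin(axis=0)        # best match in image 0 for each of image 1
    i = np.arange(len(desc0))
    keep = bwd[fwd[i]] == i       # mutual consistency check
    return np.stack([i[keep], fwd[i[keep]]], axis=1)

rng = np.random.default_rng(1)
desc0 = rng.normal(size=(100, 64)).astype(np.float32)
desc1 = desc0[rng.permutation(100)] + 0.01 * rng.normal(size=(100, 64))
print(len(mutual_nn_matches(desc0, desc1)), "mutual matches")  # close to 100
```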

19.
20.
A high-accuracy visual odometry implementation is studied. Feature points are first extracted with the SURF algorithm, which is both computationally light and accurate, and matched with nearest-neighbor vector matching; the transformation matrix between the camera coordinate frames of successive frames is then obtained with RANSAC. A visual odometry calibration method is proposed, through which the displacement is expressed in the robot's starting coordinate frame, keeping the localization accurate even while the robot's own attitude changes. A circular-trajectory experiment verifies the feasibility of the visual odometry algorithm.
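A minimal sketch of the bookkeeping such a calibration enables (our own formulation; the extrinsic T_rc and the interface are assumptions): each frame-to-frame estimate (R, t) in the camera frame is conjugated through a fixed camera-to-robot extrinsic so the trajectory accumulates in the robot's starting coordinate frame.

```python
import numpy as np

def to_T(R, t):
    """Pack a rotation matrix and translation vector into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def integrate(relative_motions, T_rc):
    """relative_motions: iterable of camera-frame (R, t) RANSAC estimates;
    T_rc: camera-to-robot extrinsic from calibration (assumed known).
    Returns robot positions expressed in the starting coordinate frame."""
    T_wr = np.eye(4)               # robot pose, identity at the start
    positions = []
    for R, t in relative_motions:
        T_step = T_rc @ to_T(R, t) @ np.linalg.inv(T_rc)  # step in robot frame
        T_wr = T_wr @ T_step
        positions.append(T_wr[:3, 3].copy())
    return positions
```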
