Similar Documents
20 similar documents found (search time: 31 ms)
1.
There are about 253 million people with visual impairment worldwide. Many of them use a white cane and/or a guide dog as their mobility tool for daily travel. Despite decades of effort, an electronic navigation aid that can replace the white cane remains a work in progress. In this paper, we propose an RGB-D camera-based visual positioning system (VPS) for real-time localization of a robotic navigation aid (RNA) in an architectural floor plan for assistive navigation. The core of the system is the combination of a new 6-DOF depth-enhanced visual-inertial odometry (DVIO) method and a particle filter localization (PFL) method. DVIO estimates the RNA's pose by using the data from an RGB-D camera and an inertial measurement unit (IMU). It extracts the floor plane from the camera's depth data and tightly couples the floor plane, the visual features (with and without depth data), and the IMU's inertial data in a graph optimization framework to estimate the device's 6-DOF pose. Thanks to the use of the floor plane and depth data from the RGB-D camera, DVIO achieves better pose estimation accuracy than conventional VIO methods. To reduce the accumulated pose error of DVIO for navigation in a large indoor space, we developed the PFL method to locate the RNA in the floor plan. PFL leverages geometric information from the architectural CAD drawing of an indoor space to further reduce the error of the DVIO-estimated pose. Based on the VPS, an assistive navigation system is developed for the RNA prototype to assist a visually impaired person in navigating a large indoor space. Experimental results demonstrate that: 1) the DVIO method achieves better pose estimation accuracy than the state-of-the-art VIO method and performs real-time pose estimation (18 Hz pose update rate) on a UP Board computer; 2) PFL reduces the DVIO-accrued pose error by 82.5% on average and allows for accurate wayfinding (endpoint position error ≤ 45 cm) in large indoor spaces.
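The particle-filter localization (PFL) idea above can be illustrated with a minimal 1-D sketch: particles are propagated with noisy odometry (standing in for DVIO drift) and weighted by how well a range measurement to a wall at a known floor-plan position explains each particle. The 1-D world, noise levels, and all names are illustrative, not from the paper.

```python
import math
import random

def pf_localize(odometry, ranges, wall, n=500, odo_sigma=0.05, meas_sigma=0.1):
    """1-D particle-filter localization along a corridor.

    odometry: per-step displacement estimates (e.g. from a VIO front end)
    ranges:   per-step measured distance to a known wall at position `wall`
    Returns the filter's position estimate after the final step.
    """
    random.seed(0)
    particles = [random.uniform(0.0, 1.0) for _ in range(n)]  # crude prior
    for dx, r in zip(odometry, ranges):
        # Predict: apply odometry with additive noise (models accrued drift).
        particles = [p + dx + random.gauss(0.0, odo_sigma) for p in particles]
        # Update: weight by Gaussian likelihood of the range to the wall.
        weights = [math.exp(-0.5 * ((wall - p - r) / meas_sigma) ** 2)
                   for p in particles]
        total = sum(weights) or 1e-12
        weights = [w / total for w in weights]
        # Resample (systematic resampling would be better; this is the simple form).
        particles = random.choices(particles, weights=weights, k=n)
    return sum(particles) / n

# A robot starts near 0 and moves 1 m in ten 0.1 m steps toward a wall at 5 m.
odo = [0.1] * 10
truth = [0.1 * (i + 1) for i in range(10)]
rng = [5.0 - x for x in truth]  # noise-free ranges, for the sketch only
est = pf_localize(odo, rng, wall=5.0)
```

The floor-plan constraint enters through the known wall position: even though odometry alone would drift, the range likelihood keeps the particle cloud anchored to the map.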

2.
刘辉  张雪波  李如意  苑晶 《控制与决策》2024,39(6):1787-1800
Laser SLAM (simultaneous localization and mapping) algorithms depend on structural features of the environment for pose estimation and mapping; in scenes lacking such features, their pose estimation accuracy and robustness degrade, and the algorithms may fail entirely. Exploiting the complementary properties that an inertial measurement unit (IMU) is unconstrained by the environment while a camera depends on visual texture, this paper proposes a stereo-vision-aided LiDAR-inertial SLAM algorithm to address the degeneracy of pure laser SLAM in structure-poor environments: a stereo visual-inertial odometry module provides a visual prior pose for the laser scan-matching module, which then jointly estimates the pose under both visual constraints and laser structural-feature constraints. In addition, a combined strategy of complementary filtering and factor-graph optimization is proposed to align the laser odometry frame with the inertial frame; laser poses are fused with IMU data in a factor graph to constrain the IMU biases, providing a fallback relative-pose prediction for laser scan matching when visual odometry fails. To further improve global trajectory accuracy, a hybrid loop-closure detection strategy fusing iterative closest point (ICP) matching with image-feature matching is proposed, and 6-DOF pose-graph optimization significantly reduces the odometry drift err…

3.
Advanced Robotics, 2013, 27(11-12): 1493-1514
In this paper, a fully autonomous quadrotor in a heterogeneous air–ground multi-robot system is realized using only minimal on-board sensors: a monocular camera and inertial measurement units (IMUs). Efficient pose and motion estimation is proposed and optimized. A continuous-discrete extended Kalman filter is applied, in which the high-frequency IMU data drive the prediction, while the estimates are corrected by the accurate and steady vision data. A high-frequency fusion at 100 Hz is achieved. Moreover, time-delay analysis and data synchronization are conducted to further improve the pose/motion estimation of the quadrotor. The complete on-board implementation of sensor data processing and control algorithms reduces the influence of data-transfer time delay, enables autonomous task accomplishment, and extends the workspace. Higher pose estimation accuracy and smaller control errors compared to standard approaches are achieved in real-time hovering and tracking experiments.
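A minimal sketch of the prediction-correction split described above: high-rate IMU accelerations drive a constant-acceleration prediction, and a lower-rate vision position fix corrects the state. The paper uses a full continuous-discrete EKF on the quadrotor state; here the idea is reduced to a 1-D linear Kalman filter with illustrative parameters.

```python
def fuse(imu_accel, vision_pos, imu_dt=0.01, vision_every=10, q=1e-3, r=1e-4):
    """1-D loosely coupled fusion: IMU acceleration drives the prediction at
    high rate; a vision position fix corrects the estimate at a lower rate
    (mimicking a 100 Hz IMU / 10 Hz camera split)."""
    x, v = 0.0, 0.0                    # state: position, velocity
    P = [[1.0, 0.0], [0.0, 1.0]]       # state covariance
    vis_iter = iter(vision_pos)
    for k, a in enumerate(imu_accel):
        # Predict with the IMU measurement (constant-acceleration model).
        x += v * imu_dt + 0.5 * a * imu_dt ** 2
        v += a * imu_dt
        dt = imu_dt
        # Propagate covariance: P = F P F^T + Q, with F = [[1, dt], [0, 1]].
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # Correct with a vision position fix at the lower camera rate.
        if (k + 1) % vision_every == 0:
            z = next(vis_iter)
            S = P[0][0] + r
            K0, K1 = P[0][0] / S, P[1][0] / S
            innov = z - x
            x += K0 * innov
            v += K1 * innov
            P = [[(1 - K0) * P[0][0], (1 - K0) * P[0][1]],
                 [P[1][0] - K1 * P[0][0], P[1][1] - K1 * P[0][1]]]
    return x, v

imu = [1.0] * 100                                       # 1 m/s^2 for 1 s at 100 Hz
vis = [0.5 * (0.1 * (i + 1)) ** 2 for i in range(10)]   # exact 10 Hz position fixes
x, v = fuse(imu, vis)
```

With noise-free inputs the estimate converges to the true final position (0.5 m) and velocity (1.0 m/s); the same structure extends to the 6-DOF case by enlarging the state.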

4.
Safety is undoubtedly the most fundamental requirement for any aerial robotic application. It is essential to equip aerial robots with omnidirectional perception coverage to ensure safe navigation in complex environments. In this paper, we present a light-weight and low-cost omnidirectional perception system, which consists of two ultra-wide field-of-view (FOV) fisheye cameras and a low-cost inertial measurement unit (IMU). The goal of the system is to achieve spherical omnidirectional sensing coverage with the minimum sensor suite. The two fisheye cameras are mounted rigidly facing upward and downward and provide omnidirectional perception coverage: 360° FOV horizontally, 50° FOV vertically for stereo, and the whole sphere for monocular. We present a novel optimization-based dual-fisheye visual-inertial state estimator to provide highly accurate state estimation. Real-time omnidirectional three-dimensional (3D) mapping is combined with stereo-based depth perception for the horizontal direction and monocular depth perception for the upward and downward directions. The omnidirectional perception system is integrated with online trajectory planners to achieve closed-loop, fully autonomous navigation. All computations are done onboard on a heterogeneous computing suite. Extensive experimental results are presented to validate the individual modules as well as the overall system in both indoor and outdoor environments.

5.
To address the problem of accurately obtaining pose information for obstacle avoidance of a mobile robot, a monocular visual odometry algorithm for real-time pose acquisition is proposed. The algorithm extracts SURF (Speeded Up Robust Features) feature points of the ground plane from consecutive frames captured by a monocular camera, applies the epipolar geometry constraint to resolve the difficulty of matching ground feature points, and obtains the robot's pose change by computing the planar homography matrix. Experimental results show that the algorithm achieves good accuracy and real-time performance.

6.
张伟  马珺 《计算机仿真》2020,37(2):92-96,207
Accurately obtaining the pose of a UAV is the prerequisite for autonomous navigation and landing. Given the limitations of GPS/INS navigation systems and the errors of IMU inertial navigation, a vision-based method is proposed. An "H"-shaped landing marker is designed; real-time imagery from the onboard camera is processed, and a vision-based UAV pose-estimation model is derived from the mapping between the world coordinate frame and the image pixel frame, from which the current pose estimate is solved. The method improves the UAV's landing safety. Simulation results verify the effectiveness of the algorithm and the accuracy of the pose information.

7.
Research on a Visual/Inertial Integrated Navigation Algorithm for UAVs
Current visual-inertial integrated navigation systems mostly use tightly/loosely coupled optimization or tightly/loosely coupled filtering; an error-state Kalman filter can raise the lower-rate visual pose information to the same rate as the inertial information. This paper proposes a visual-inertial integrated navigation algorithm based on adaptive Kalman filtering. First, accounting for modeling and sensor measurement errors, an adaptive fading Kalman filter is used for the navigation solution: a forgetting factor is computed online to adjust the weight of historical data, suppressing modeling error and improving the performance of the integrated navigation system. Then, to address the lag of visual pose information behind inertial information caused by the visual SLAM solution process, a delay-compensation method is proposed. Simulations show that the delay-compensated adaptive fading Kalman filter effectively suppresses modeling error, reduces the impact of the visual pose lag, and improves the navigation accuracy of the UAV: attitude, velocity, and position accuracies reach within 5°, 0.5 m/s, and 0.4 m, respectively.
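The adaptive fading idea above can be sketched in one dimension: a forgetting factor computed from the innovation inflates the predicted covariance, down-weighting historical data when the model appears wrong. The weighting rule below is a common simplified variant, not the paper's exact formula.

```python
def fading_kf(measurements, q=0.01, r=0.25):
    """1-D fading-memory Kalman filter under a random-walk state model.

    The forgetting factor lam >= 1 inflates the predicted covariance when
    the squared innovation exceeds its expected spread, so stale history
    is forgotten faster after a model mismatch (a simplified form of the
    adaptive fading filter described above)."""
    x, P = measurements[0], 1.0
    for z in measurements[1:]:
        innov = z - x                        # predicted state is just x
        S = P + q + r                        # expected innovation variance
        lam = max(1.0, innov * innov / S)    # crude forgetting factor
        P = lam * P + q                      # fade old covariance, add process noise
        K = P / (P + r)
        x = x + K * innov
        P = (1 - K) * P
    return x

x_est = fading_kf([0.0] + [2.0] * 20)  # a step change the filter must track
```

After the step, the first large innovation drives lam well above 1, so the filter trusts the new measurements quickly instead of averaging over the outdated prior.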

8.
In this paper, we address the problem of ego-motion estimation by fusing visual and inertial information. The hardware consists of an inertial measurement unit (IMU) and a monocular camera. The camera provides visual observations in the form of features on a horizontal plane. By incorporating the geometric constraint of features lying on the plane into the visual and inertial data, we propose a novel closed-form measurement model for this system. Our first contribution in this paper is an observability analysis of the proposed planar-based visual-inertial navigation system (VINS). In particular, we prove that the system has only three unobservable states, corresponding to global translations parallel to the plane and rotation around the gravity vector. Hence, compared to a general VINS, an advantage of using features on the horizontal plane is that the vertical translation along the normal of the plane becomes observable. As the second contribution, we present a state-space formulation for pose estimation in the analyzed system and solve it via a modified unscented Kalman filter (UKF). Finally, the findings of the theoretical analysis and the 6-DoF motion estimation are validated by simulations as well as experimental data.

9.
To address the poor robustness and difficult scale recovery of monocular visual-inertial SLAM, a stereo visual-inertial SLAM algorithm based on dynamic marginalization (DM-SVI-SLAM) is proposed. The front end tracks features with optical flow and computes inter-frame IMU constraints by preintegration; the back end builds a bundle-adjustment cost function within a sliding window that fuses monocular/stereo matching errors, IMU residuals, and prior errors, and uses a dynamic marginalization strategy and the Dog-Leg algorithm to improve computational efficiency…

10.
To address the low pose-estimation accuracy and poor robustness of visual SLAM (simultaneous localization and mapping) in dynamic scenes, where moving feature points corrupt the estimate, an RGB-D visual SLAM algorithm based on dynamic-region removal is proposed. First, semantic information is used to identify feature points belonging to movable objects, and multi-view geometry together with the camera's depth data is used to check whether those points are currently static. Then the camera pose estimate is refined using feature points from static objects together with the static feature points derived from movable objects, allowing the system to run accurately and robustly in dynamic scenes. Finally, the method is validated on dynamic indoor sequences of the TUM dataset. Experiments show that in dynamic indoor environments the proposed algorithm effectively improves camera pose-estimation accuracy, enables map updating in dynamic environments, and improves both system robustness and mapping accuracy.
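The multi-view static/dynamic check can be caricatured in one dimension: under a known camera motion, a static point's observed displacement must match the predicted one, and points with large residuals are flagged as dynamic and excluded from pose estimation. The threshold and the 1-D world are illustrative assumptions; the actual method reprojects 3D points using the camera's depth data.

```python
def split_static_dynamic(points, cam_shift, thresh=0.05):
    """Toy multi-view consistency check.

    points:    {point_id: (coord_in_frame0, coord_in_frame1)}
    cam_shift: known apparent shift a static point should undergo
    Points whose residual exceeds thresh are flagged as dynamic and
    would be excluded from pose estimation.
    """
    static, dynamic = [], []
    for pid, (x0, x1) in points.items():
        residual = abs((x1 - x0) - cam_shift)
        (static if residual <= thresh else dynamic).append(pid)
    return static, dynamic

obs = {"a": (0.0, 0.10), "b": (1.0, 1.12), "c": (2.0, 2.60)}
static, dynamic = split_static_dynamic(obs, cam_shift=0.1)
```

Here "c" moves far more than the camera motion predicts, so it is treated as belonging to a moving object.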

11.
This paper presents the design and development of autonomous attitude stabilization, navigation in unstructured, GPS-denied environments, aggressive landing on inclined surfaces, and aerial gripping using onboard sensors on a low-cost, custom-built quadrotor. The development of a multi-functional micro air vehicle (MAV) that utilizes inexpensive off-the-shelf components presents multiple challenges due to noise and sensor accuracy, and there are control challenges involved with achieving various capabilities beyond navigation. This paper addresses these issues by developing a complete system from the ground up, addressing the attitude stabilization problem using extensive filtering and an attitude estimation filter recently developed in the literature. Navigation in both indoor and outdoor environments is achieved using a visual Simultaneous Localization and Mapping (SLAM) algorithm that relies on an onboard monocular camera. The system utilizes nested controllers for attitude stabilization, vision-based navigation, and guidance, with the navigation controller implemented using a nonlinear controller based on the sigmoid function. The efficacy of the approach is demonstrated by maintaining a stable hover even in the presence of wind gusts and when manually hitting and pulling on the quadrotor. Precision landing on inclined surfaces is demonstrated as an example of an aggressive maneuver, and is performed using only onboard sensing. Aerial gripping is accomplished with the addition of a secondary camera, capable of detecting infrared light sources, which is used to estimate the 3D location of an object, while an under-actuated and passively compliant manipulator is designed for effective gripping under uncertainty. The quadrotor is therefore able to autonomously navigate inside and outside, in the presence of disturbances, and perform tasks such as aggressively landing on inclined surfaces and locating and grasping an object, using only inexpensive, onboard sensors. 

12.
We present a complete solution for the visual navigation of a small-scale, low-cost quadrocopter in unknown environments. Our approach relies solely on a monocular camera as the main sensor, and therefore does not need external tracking aids such as GPS or visual markers. Costly computations are carried out on an external laptop that communicates over wireless LAN with the quadrocopter. Our approach consists of three components: a monocular SLAM system, an extended Kalman filter for data fusion, and a PID controller. In this paper, we (1) propose a simple, yet effective method to compensate for large delays in the control loop using an accurate model of the quadrocopter’s flight dynamics, and (2) present a novel, closed-form method to estimate the scale of a monocular SLAM system from additional metric sensors. We extensively evaluated our system in terms of pose estimation accuracy, flight accuracy, and flight agility using an external motion capture system. Furthermore, we compared the convergence and accuracy of our scale estimation method for an ultrasound altimeter and an air pressure sensor with filtering-based approaches. The complete system is available as open-source in ROS. This software can be used directly with a low-cost, off-the-shelf Parrot AR.Drone quadrocopter, and hence serves as an ideal basis for follow-up research projects.
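A closed-form scale estimate of the kind mentioned above can be sketched as a least-squares fit between scale-free monocular SLAM distances and matching metric distances (e.g. from the ultrasound altimeter). This simple estimator is a stand-in for the paper's maximum-likelihood derivation, which also models noise in both sensors.

```python
def estimate_scale(slam_dist, metric_dist):
    """Least-squares scale factor lam minimizing sum((lam * x - d)^2),
    where x are (scale-free) monocular SLAM distances and d the matching
    metric distances. Closed form: lam = sum(x*d) / sum(x*x)."""
    num = sum(x * d for x, d in zip(slam_dist, metric_dist))
    den = sum(x * x for x in slam_dist)
    return num / den

x = [0.5, 1.0, 1.5, 2.0]      # SLAM units (arbitrary scale)
d = [1.25, 2.5, 3.75, 5.0]    # meters; true scale here is 2.5
lam = estimate_scale(x, d)
```

Once lam is known, every SLAM position can be multiplied by it to obtain metric coordinates for the controller.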

13.
This paper presents a hierarchical simultaneous localization and mapping (SLAM) system for a small unmanned aerial vehicle (UAV) using the output of an inertial measurement unit (IMU) and the bearing-only observations from an onboard monocular camera. A homography-based approach is used to calculate the motion of the vehicle in 6 degrees of freedom by image feature matching. This visual measurement is fused with the inertial outputs by an indirect extended Kalman filter (EKF) for attitude and velocity estimation. Then, another EKF is employed to estimate the position of the vehicle and the locations of the features in the map. Both simulations and experiments are carried out to test the performance of the proposed system. The comparison with a referential global positioning system/inertial navigation system (GPS/INS) navigation solution indicates that the proposed SLAM can provide reliable and stable state estimation for small UAVs in GPS-denied environments.

14.
There are two main trends in the development of unmanned aerial vehicle (UAV) technologies: miniaturization and intellectualization, in which realizing object-tracking capabilities for a nano-scale UAV is one of the most challenging problems. In this paper, we present a visual object tracking and servoing control system utilizing a tailor-made 38 g nano-scale quadrotor. A lightweight visual module is integrated to enable object-tracking capabilities, and a micro positioning deck is mounted to provide accurate pose estimation. In order to be robust against object appearance variations, a novel object tracking algorithm, denoted RMCTer, is proposed, which integrates a powerful short-term tracking module and an efficient long-term processing module. In particular, the long-term processing module can provide additional object information and modify the short-term tracking model in a timely manner. Furthermore, a position-based visual servoing control method is proposed for the quadrotor, where an adaptive tracking controller is designed by leveraging backstepping and adaptive techniques. Stable and accurate object tracking is achieved even under disturbances. Experimental results are presented to demonstrate the high accuracy and stability of the whole tracking system.

15.
龚赵慧  张霄力  彭侠夫  李鑫 《机器人》2020,42(5):595-605
To address the lack of scale information and the poor robustness under fast motion of semi-direct monocular visual odometry, a semi-direct monocular visual odometry fusing inertial measurements is designed; IMU (inertial measurement unit) information compensates for the shortcomings of the visual odometry and effectively improves tracking accuracy and system robustness. Visual and inertial information are jointly used for initialization, recovering the environment's scale fairly accurately. To improve the robustness of motion tracking, an IMU-weighted motion-prior model is proposed: the IMU state estimate is obtained by preintegration, the weight coefficient is adjusted according to the IMU prior error, and the weighted IMU prior provides an accurate initial value for the front end. The back end builds a tightly coupled graph-optimization model that jointly optimizes inertial, visual, and 3D map-point information, and uses strong co-visibility relations as constraints within a sliding window, eliminating local accumulated error while improving optimization efficiency and accuracy. Experiments show that the proposed prior model outperforms both the constant-velocity model and the plain IMU prior, with single-frame prior error below 1 cm. With the improved back-end optimization, computational efficiency increases by a factor of 1.52, and trajectory accuracy and optimization stability also improve. On the EuRoC dataset, localization outperforms the OKVIS algorithm, and the trajectory RMSE is reduced to 1/3 of that of the original visual odometry.
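The IMU-weighted motion prior can be sketched as a convex blend of the constant-velocity prediction and the IMU preintegration prediction, with the weight shrinking as the IMU prior error grows. The weighting function and `err_scale` below are illustrative assumptions, not the paper's formula, and the pose is reduced to a scalar for clarity.

```python
def weighted_prior(prev_pose, const_vel_delta, imu_delta, imu_err, err_scale=0.1):
    """Blend a constant-velocity motion prediction with an IMU
    preintegration prediction, trusting the IMU less as its
    preintegration error grows (w -> 1 when the IMU error is small)."""
    w = 1.0 / (1.0 + imu_err / err_scale)
    return prev_pose + w * imu_delta + (1.0 - w) * const_vel_delta

trusted = weighted_prior(0.0, 0.2, 0.1, imu_err=0.0)     # IMU fully trusted
distrusted = weighted_prior(0.0, 0.2, 0.1, imu_err=1e9)  # IMU effectively ignored
```

The blended pose then seeds the front end's image alignment, which is why a good prior directly improves tracking robustness under fast motion.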

16.
We describe a novel quadrotor Micro Air Vehicle (MAV) system that is designed to use computer vision algorithms within the flight control loop. The main contribution is a MAV system that is able to run both the vision-based flight control and the stereo-vision-based obstacle detection in parallel on an embedded computer onboard the MAV. The system design features the integration of a powerful onboard computer and the synchronization of IMU-vision measurements by hardware timestamping, which allows tight integration of IMU measurements into the computer vision pipeline. We evaluate the accuracy of marker-based visual pose estimation for flight control and demonstrate marker-based autonomous flight including obstacle detection using stereo vision. We also show the benefits of our IMU-vision synchronization for egomotion estimation in additional experiments where we use the synchronized measurements for pose estimation using the 2pt+gravity formulation of the PnP problem.

17.
This paper proposes a real-time system for pose estimation of an unmanned aerial vehicle (UAV) using parallel image processing and a fiducial marker. The system exploits the capabilities of a high-performance CPU/GPU embedded system in order to provide on-board high-frequency pose estimation enabling autonomous takeoff and landing. The system is evaluated extensively with lab and field tests using a custom quadrotor. The autonomous landing is successfully demonstrated, through experimental tests, using the proposed algorithm. The results show that the system is able to provide precise pose estimation with a framerate of at least 30 fps at an image resolution of 640×480 pixels. The main advantage of the proposed approach is the use of the GPU for image filtering and marker detection. The GPU provides an upper bound on the required computation time regardless of the complexity of the image, thereby allowing for robust marker detection even in cluttered environments.

18.
Objective: Visual localization aims to localize a moving object and estimate its pose from easily acquired RGB images. Occlusions and weakly textured regions, common in indoor scenes, easily cause erroneous estimates of the target's keypoints and severely degrade localization accuracy. To address this problem, this paper proposes an indoor localization system fusing passive and active sensing, combining the advantages of fixed-view and moving-view schemes to accurately localize moving targets in indoor scenes. Method: A plane-prior-based object pose-estimation method is proposed: on top of a keypoint-detection monocular localization framework, a planar constraint is used for 3-DOF pose optimization, improving the localization stability of targets moving on an indoor plane under a fixed view. A data-fusion localization system based on the unscented Kalman filter is designed to fuse the passive localization results from the fixed view with the active localization results from the moving view, improving the reliability of the moving target's pose estimate. Results: The proposed passive-active fused indoor visual localization system achieves an average localization accuracy of 2-3 cm on the iGibson simulation dataset, with 99% of errors within 10 cm; in real scenes the average accuracy is 3-4 cm, with over 90% of errors within 10 cm, achieving cm-level localization accuracy. Conclusion: The proposed indoor visual localization system combines the advantages of passive and active localization methods, achieves high-accuracy indoor target localization at low hardware cost, and, under occlusion and target…

19.
In indoor monocular visual-navigation tasks, scene depth information is important, but monocular depth estimation is an ill-posed problem with low accuracy. 2D LiDAR, meanwhile, is widely used in indoor navigation tasks and is inexpensive. This paper therefore proposes an indoor monocular depth-estimation algorithm that fuses 2D LiDAR to improve depth-estimation accuracy. A 2D-LiDAR feature-extraction branch is added to an encoder-decoder architecture, skip connections add detail to the monocular depth estimates, and a channel-attention mechanism is proposed to fuse the 2D LiDAR features with the RGB image features. The algorithm is validated on the public NYUDv2 dataset, and a depth dataset with accompanying 2D LiDAR data was produced for the algorithm's target application scenario. Experiments show that the proposed algorithm outperforms existing monocular depth estimation on both the public and the self-built datasets.

20.