Similar Documents
20 similar documents found (search time: 31 ms)
1.
There are about 253 million people with visual impairment worldwide. Many of them use a white cane and/or a guide dog as their mobility tool for daily travel. Despite decades of effort, an electronic navigation aid that can replace the white cane is still a work in progress. In this paper, we propose an RGB-D camera based visual positioning system (VPS) for real-time localization of a robotic navigation aid (RNA) in an architectural floor plan for assistive navigation. The core of the system is the combination of a new 6-DOF depth-enhanced visual-inertial odometry (DVIO) method and a particle filter localization (PFL) method. DVIO estimates the RNA's pose by using the data from an RGB-D camera and an inertial measurement unit (IMU). It extracts the floor plane from the camera's depth data and tightly couples the floor plane, the visual features (with and without depth data), and the IMU's inertial data in a graph optimization framework to estimate the device's 6-DOF pose. Owing to the use of the floor plane and the depth data from the RGB-D camera, DVIO achieves better pose estimation accuracy than the conventional VIO method. To reduce the accumulated pose error of DVIO for navigation in a large indoor space, we developed the PFL method to locate the RNA in the floor plan. PFL leverages geometric information from the architectural CAD drawing of an indoor space to further reduce the error of the DVIO-estimated pose. Based on the VPS, an assistive navigation system is developed for the RNA prototype to assist a visually impaired person in navigating a large indoor space. Experimental results demonstrate that: 1) the DVIO method achieves better pose estimation accuracy than the state-of-the-art VIO method and performs real-time pose estimation (18 Hz pose update rate) on a UP Board computer; 2) PFL reduces the DVIO-accrued pose error by 82.5% on average and allows for accurate wayfinding (endpoint position error ≤ 45 cm) in large indoor spaces.
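The particle-filter localization step can be illustrated with a minimal sketch. This is not the authors' implementation: the one-wall "floor plan", the odometry increment, and all noise parameters below are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

WALL_X = 10.0  # hypothetical wall from the CAD floor plan: the line x = 10

def predict(particles, d_odom, sigma=0.05):
    """Propagate particles by the odometry (e.g. DVIO) increment plus noise."""
    return particles + d_odom + rng.normal(0.0, sigma, particles.shape)

def update_weights(particles, z_range, sigma=0.2):
    """Weight each particle by how well the range the map predicts for it
    matches the measured range to the wall."""
    expected = WALL_X - particles[:, 0]
    w = np.exp(-0.5 * ((expected - z_range) / sigma) ** 2)
    return w / w.sum()

def resample(particles, weights):
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# 500 particles around a drifted pose estimate; the true range to the
# wall is 7 m, i.e. the true position is x = 3.
particles = rng.normal([2.5, 0.0], 0.5, (500, 2))
particles = predict(particles, d_odom=np.array([0.5, 0.0]))
weights = update_weights(particles, z_range=7.0)
particles = resample(particles, weights)
print(round(float(particles[:, 0].mean()), 1))  # concentrates near x = 3
```

Each cycle is the classic predict / weight / resample loop; the floor-plan geometry enters only through the range the map predicts for each particle.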

2.
An image-based visual servo control is presented for an unmanned aerial vehicle (UAV) capable of stationary or quasi-stationary flight with the camera mounted onboard the vehicle. The target considered consists of a finite set of stationary and disjoint points lying in a plane. Control of the position and orientation dynamics is decoupled using a visual error based on spherical centroid data, along with estimations of the linear velocity and the gravitational inertial direction extracted from image features and an embedded inertial measurement unit. The visual error used compensates for poor conditioning of the image Jacobian matrix by introducing a nonhomogeneous gain term adapted to the visual sensitivity of the error measurements. A nonlinear controller that ensures exponential convergence of the considered system is derived for the full dynamics of the system using control Lyapunov function design techniques. Experimental results on a quadrotor UAV, developed by the French Atomic Energy Commission, demonstrate the robustness and performance of the proposed control strategy.

3.
曹毓, 张小虎, 冯莹. 《传感技术学报》 2015, 28(9): 1354-1360
In visual odometry applications, obtaining accurate camera attitude and height data in real time helps improve visual positioning accuracy, yet existing solutions are either too expensive or insufficiently accurate. A real-time method for measuring the camera's extrinsic parameters based on laser scanning of the road surface is therefore proposed. Two 2-D laser scanners are mounted orthogonally to each other and scan downward; the RANSAC algorithm estimates a line equation for each of the two road-surface scan lines, the road-plane equation is computed from the two lines, and the camera's attitude and height relative to the road surface are obtained with that plane as the reference. Indoor experiments show a maximum static attitude error of about 0.1° and a maximum height error of 6 mm. Outdoor dynamic experiments show that, unlike conventional inertial measurement, the measured extrinsic parameters are unaffected by vehicle acceleration and deceleration, and the dynamic attitude accuracy is clearly better than that of an inertial measurement system with 1° accuracy. Because the attitude and height data are referenced to the road plane, the method is particularly suitable for improving the positioning accuracy of monocular visual odometry.
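The plane-recovery step described above can be sketched as follows, with synthetic scan data standing in for the two laser scanners; the function names, tolerances, and noise levels are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def ransac_line(points, iters=100, tol=0.01):
    """Fit a 3-D line (point a, unit direction d) to points with RANSAC."""
    best_inliers, best = None, None
    for _ in range(iters):
        a, b = points[rng.choice(len(points), 2, replace=False)]
        d = (b - a) / np.linalg.norm(b - a)
        r = points - a
        # perpendicular distance of each point to the candidate line
        dist = np.linalg.norm(r - np.outer(r @ d, d), axis=1)
        inliers = points[dist < tol]
        if best_inliers is None or len(inliers) > len(best_inliers):
            best_inliers, best = inliers, (a, d)
    return best

# Synthetic road plane 1.5 m below the sensor, two orthogonal scan lines.
t = np.linspace(-2.0, 2.0, 50)[:, None]
line1 = np.hstack([t, np.zeros_like(t), np.full_like(t, -1.5)])
line2 = np.hstack([np.zeros_like(t), t, np.full_like(t, -1.5)])
noise = lambda shape: rng.normal(0.0, 0.003, shape)
p1, d1 = ransac_line(line1 + noise(line1.shape))
p2, d2 = ransac_line(line2 + noise(line2.shape))

n = np.cross(d1, d2)                    # road-plane normal
n /= np.linalg.norm(n)
height = abs(np.dot(n, (p1 + p2) / 2))  # sensor height above the road
print(round(float(height), 2))          # close to 1.5
```

The cross product of the two fitted line directions gives the road-plane normal, from which attitude (normal direction) and height (point-to-plane distance) follow directly.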

4.
Visual-inertial navigation is a passive navigation and positioning approach that uses visual and inertial sensors to localize a vehicle and perceive its surroundings, enabling 6-DOF pose estimation in global positioning system (GPS)-denied environments. Cameras and low-precision inertial sensors are small and inexpensive, and thanks to their complementary characteristics in navigation tasks, visual-inertial navigation systems (VINS) have attracted great attention, playing an important role in mobile virtual reality (VR) and augmented reality (AR) applications and in the autonomous navigation of unmanned systems, with significant theoretical research value and practical demand. This paper introduces visual-inertial navigation systems and summarizes research progress on their key technologies: initialization, visual front-end processing, state estimation, map construction and maintenance, and information fusion. It reviews hot topics such as visual-inertial navigation algorithms for non-ideal environments and learning-based methods, summarizes evaluation methodologies and standard benchmark datasets, discusses the main problems the technology faces in practical applications, and offers an outlook on future trends in the field in light of those problems.

5.
This paper presents a hierarchical simultaneous localization and mapping (SLAM) system for a small unmanned aerial vehicle (UAV) using the output of an inertial measurement unit (IMU) and the bearing-only observations from an onboard monocular camera. A homography based approach is used to calculate the motion of the vehicle in 6 degrees of freedom by image feature matching. This visual measurement is fused with the inertial outputs by an indirect extended Kalman filter (EKF) for attitude and velocity estimation. Then, another EKF is employed to estimate the position of the vehicle and the locations of the features in the map. Both simulations and experiments are carried out to test the performance of the proposed system. The result of the comparison with the referential global positioning system/inertial navigation system (GPS/INS) navigation indicates that the proposed SLAM can provide reliable and stable state estimation for small UAVs in GPS-denied environments.
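A minimal single-state sketch of the fusion idea (an EKF that integrates inertial data in the prediction step and corrects with a slower visual measurement); the dynamics, rates, and noise values are illustrative assumptions, not the paper's filter.

```python
import numpy as np

dt = 0.01
x = np.array([0.0])    # velocity estimate (state)
P = np.array([[1.0]])  # state covariance
Q = np.array([[1e-3]]) # process (IMU) noise
R = np.array([[4e-2]]) # measurement (vision) noise
H = np.array([[1.0]])

def ekf_step(x, P, accel, z_vision=None):
    # Predict: integrate the inertial measurement.
    x = x + accel * dt
    P = P + Q
    if z_vision is not None:  # update when a visual measurement arrives
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (z_vision - H @ x)
        P = (np.eye(1) - K @ H) @ P
    return x, P

# True velocity 1 m/s, biased accelerometer, vision arriving at 10 Hz.
for k in range(1000):
    accel = 0.05  # pure bias: integrating the IMU alone would drift
    z = np.array([1.0]) if k % 10 == 0 else None
    x, P = ekf_step(x, P, accel, z)
print(round(float(x[0]), 1))  # stays near the true 1.0
```

The periodic visual updates bound the drift that the biased inertial prediction would otherwise accumulate, which is the essence of drift-free attitude/velocity estimation in such systems.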

6.
This paper studies vision-aided inertial navigation of small-scale unmanned aerial vehicles (UAVs) in GPS-denied environments. The objectives of the navigation system are to firstly online estimate and compensate the unknown inertial measurement biases, secondly provide drift-free velocity and attitude estimates which are crucial for UAV stabilization control, and thirdly give relatively accurate position estimation such that the UAV is able to perform at least a short-term navigation when the GPS signal is not available. For the vision system, we do not presume maps or landmarks of the environment. The vision system should be able to work robustly even given low-resolution images (e.g., 160 × 120 pixels) of near homogeneous visual features. To achieve these objectives, we propose a novel homography-based vision-aided navigation system that adopts four common sensors: a low-cost inertial measurement unit, a downward-looking monocular camera, a barometer, and a compass. The measurements of the sensors are fused by an extended Kalman filter. Based on both analytical and numerical observability analyses of the navigation system, we theoretically verify that the proposed navigation system is able to achieve the navigation objectives. We also show comprehensive simulation and real flight experimental results to verify the effectiveness and robustness of the proposed navigation system.

7.
The fusion of inertial and visual data is widely used to improve an object's pose estimation. However, this type of fusion is rarely used to estimate further unknowns in the visual framework. In this paper we present and compare two different approaches to estimate the unknown scale parameter in a monocular SLAM framework. Directly linked to the scale is the estimation of the object's absolute velocity and position in 3D. The first approach is a spline fitting task adapted from Jung and Taylor and the second is an extended Kalman filter. Both methods have been simulated offline on arbitrary camera paths to analyze their behavior and the quality of the resulting scale estimation. We then embedded an online multi-rate extended Kalman filter in the Parallel Tracking and Mapping (PTAM) algorithm of Klein and Murray together with an inertial sensor. In this inertial/monocular SLAM framework, we show a real-time, robust and fast converging scale estimation. Our approach depends neither on known patterns in the vision part nor on a complex temporal synchronization between the visual and inertial sensor.
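Underlying both estimators is the observation that a metric trajectory (from integrating the IMU) and the same trajectory in arbitrary SLAM units differ by a single scale factor, which a least-squares fit recovers. The closed-form sketch below, on synthetic data, illustrates that idea only; it is neither the spline-fitting method nor the EKF compared in the paper, and the trajectory and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Short metric trajectory from integrating the IMU (drift negligible
# over a short horizon); an assumed synthetic path.
t = np.linspace(0.0, 1.0, 100)
p_metric = np.stack([np.sin(t), t**2, 0.5 * t], axis=1)

# Monocular SLAM reports the same path in arbitrary units: here one
# SLAM unit happens to equal 4 m, so positions come out scaled by 0.25.
p_slam = 0.25 * p_metric + rng.normal(0.0, 1e-3, p_metric.shape)

# Least-squares scale (metres per SLAM unit) minimising
# || lam * p_slam - p_metric ||^2, which has a closed form:
lam = np.sum(p_slam * p_metric) / np.sum(p_slam * p_slam)
print(round(float(lam), 1))  # close to 4.0
```

In practice the EKF variant estimates this scale recursively as new inertial and visual data arrive, rather than in one batch as here.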

8.
In this paper, we present a multi-sensor fusion based monocular visual navigation system for a quadrotor with limited payload, power and computational resources. Our system is equipped with an inertial measurement unit (IMU), a sonar and a monocular down-looking camera. It is able to work well in GPS-denied and markerless environments. Different from most keyframe-based visual navigation systems, our system uses the information from both keyframes and keypoints in each frame. The GPU-based speeded up robust feature (SURF) is employed for feature detection and feature matching. Based on the flight characteristics of the quadrotor, we propose a refined preliminary motion estimation algorithm that incorporates IMU data. A multi-level judgment rule is then presented which is beneficial in hovering conditions and reduces error accumulation effectively. By using the sonar sensor, the metric scale estimation problem is solved. We also present a novel IMU+3P (IMU with three point correspondences) algorithm for accurate pose estimation. This algorithm transforms the 6-DOF pose estimation problem into a 4-DOF problem and obtains more accurate results with less computation time. We performed experiments with the monocular visual navigation system in real indoor and outdoor environments. The results demonstrate that the system runs in real time and delivers robust and accurate navigation results for the quadrotor.

9.
Conventional particle filtering-based visual ego-motion estimation, or visual odometry, often suffers from large local linearization errors in the case of abrupt camera motion. The main contribution of this paper is a novel particle filtering-based visual ego-motion estimation algorithm that is especially robust to abrupt camera motion. The robustness to abrupt camera motion is achieved by multi-layered importance sampling via particle swarm optimization (PSO), which iteratively moves particles to higher-likelihood regions without local linearization of the measurement equation. Furthermore, we make the proposed visual ego-motion estimation algorithm run in real time by reformulating the conventional vector-space PSO algorithm in consideration of the geometry of the special Euclidean group SE(3), which is a Lie group representing the space of 3-D camera poses. The performance of our proposed algorithm is experimentally evaluated and compared with local linearization and unscented particle filter-based visual ego-motion estimation algorithms on both simulated and real data sets.

10.
In this paper, we consider the problem of motion and shape estimation using a camera and a laser range finder. The object considered is a plane which is undergoing a Riccati motion. The camera observes features on the moving plane perspectively. The range-finder camera is capable of obtaining the range of the plane along a given "laser plane", which can either be kept fixed or can be altered in time. Finally, we assume that the identification is carried out as soon as the visual and range data are available, or after a suitable temporal integration. In each of these various cases, we derive to what extent the motion and shape parameters are identifiable and characterize the results as an orbit of a suitable group. The paper does not emphasize any specific choice of algorithm.

11.
To address the difficult visual servo control problem of making a multi-DOF manipulator rapidly approach a target of arbitrary quadrilateral shape, a decoupled robot visual servo control method combining line features and inner-region features is proposed. An inner-region feature of the target is constructed to guide the camera's translational velocity, the target's line features are used to derive the camera's rotational angular velocity, and partial decoupling of the translation and rotation control is achieved by introducing a vector compensation of the inner-region feature and a position compensation of the centroid coordinates. Finally, a stability analysis of the robot visual servo control system is given. Simulation results show that the proposed method drives the camera to the desired pose with fast and smooth motion, and copes well with the uncertainty caused by depth estimation when the camera's optical axis is approximately perpendicular to the target plane.

12.
During the Mars Exploration Rover (MER) landings, the Descent Image Motion Estimation System (DIMES) was used for horizontal velocity estimation. The DIMES algorithm combined measurements from a descent camera, a radar altimeter, and an inertial measurement unit. To deal with large changes in scale and orientation between descent images, the algorithm used altitude and attitude measurements to rectify images to a level ground plane. Feature selection and tracking were employed in the rectified images to compute the horizontal motion between images. Differences of consecutive motion estimates were then compared to inertial measurements to verify correct feature tracking. DIMES combined sensor data from multiple sources in a novel way to create a low-cost, robust, and computationally efficient velocity estimation solution, and DIMES was the first robotics vision system used to control a spacecraft during planetary landing. This paper presents the design and implementation of the DIMES algorithm, the assessment of the algorithm performance using a high fidelity Monte Carlo simulation, validation of performance using field test data and the detailed results from the two landings on Mars. DIMES was used successfully during both MER landings. In the case of Spirit, had DIMES not been used onboard, the total velocity would have been at the limits of the airbag capability. Fortunately, DIMES computed the actual steady state horizontal velocity and it was used by the thruster firing logic to reduce the total velocity prior to landing. For Opportunity, DIMES computed the correct velocity, and the velocity was small enough that the lander performed no action to remove it.

13.
Vision-aided inertial navigation systems (V-INSs) can provide precise state estimates for the 3-D motion of a vehicle when no external references (e.g., GPS) are available. This is achieved by combining inertial measurements from an inertial measurement unit (IMU) with visual observations from a camera under the assumption that the rigid transformation between the two sensors is known. Errors in the IMU-camera extrinsic calibration process cause biases that reduce the estimation accuracy and can even lead to divergence of any estimator processing the measurements from both sensors. In this paper, we present an extended Kalman filter for precisely determining the unknown transformation between a camera and an IMU. Contrary to previous approaches, we explicitly account for the time correlation of the IMU measurements and provide a figure of merit (covariance) for the estimated transformation. The proposed method does not require any special hardware (such as a spin table or a 3-D laser scanner) except a calibration target. Furthermore, we employ the observability rank criterion based on Lie derivatives and prove that the nonlinear system describing the IMU-camera calibration process is observable. Simulation and experimental results are presented that validate the proposed method and quantify its accuracy.

14.
Uncalibrated obstacle detection using normal flow (cited by 2: 0 self-citations, 2 external)
This paper addresses the problem of obstacle detection for mobile robots. The visual information provided by a single on-board camera is used as input. We assume that the robot is moving on a planar pavement, and any point lying outside this plane is treated as an obstacle. We address the problem of obstacle detection by exploiting the geometric arrangement between the robot, the camera, and the scene. During an initialization stage, we estimate an inverse perspective transformation that maps the image plane onto the horizontal plane. During normal operation, the normal flow is computed and inversely projected onto the horizontal plane. This simplifies the resultant flow pattern, and fast tests can be used to detect obstacles. A salient feature of our method is that only the normal flow information, or first order time-and-space image derivatives, is used, and thus we cope with the aperture problem. Another important issue is that, contrasting with other methods, the vehicle motion and intrinsic and extrinsic parameters of the camera need not be known or calibrated. Both translational and rotational motion can be dealt with. We present motion estimation results on synthetic and real-image data. A real-time version, implemented on a mobile robot, is described.
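The inverse perspective transformation can be illustrated by intersecting a pixel's viewing ray with the ground plane. Note that the paper's method explicitly avoids requiring calibration (it estimates the transformation during initialization), so the intrinsics, pitch, and height below are assumed values used only to show the geometry.

```python
import numpy as np

# Assumed calibration, used here only to illustrate the geometry.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
h = 1.0                   # camera height above the pavement (m)
pitch = np.deg2rad(30.0)  # camera tilted down by 30 degrees

# Camera axes (x right, y down, z forward) expressed in a world frame
# with x right, y forward, z up; the columns are the camera axes.
R_wc = np.array([[1.0, 0.0, 0.0],
                 [0.0, -np.sin(pitch), np.cos(pitch)],
                 [0.0, -np.cos(pitch), -np.sin(pitch)]])

def image_to_ground(u, v):
    """Map pixel (u, v) to the ground plane z = 0 by intersecting its
    viewing ray with the plane (the inverse perspective transformation)."""
    ray = R_wc @ (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    if ray[2] >= 0:
        return None       # ray points at or above the horizon
    s = h / -ray[2]
    return np.array([0.0, 0.0, h]) + s * ray

g = image_to_ground(320, 240)  # the principal point
print(np.round(g, 2))          # a point about 1.73 m ahead on the pavement
```

Points on the pavement map consistently under this transformation, while points above the plane (obstacles) land in the wrong place, which is what makes the fast plane-based tests possible.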

15.
In this paper, we present a novel algorithm for odometry estimation based on ceiling vision. The main contribution of this algorithm is the introduction of principal direction detection, which can greatly reduce the error accumulation problem in most visual odometry estimation approaches. The principal direction is defined based on the fact that the ceiling is filled with artificial vertical and horizontal lines, which can be used as a reference for the robot's current heading direction. The proposed approach can be operated in real time and performs well even under camera disturbance. A moving low-cost RGB-D camera (Kinect), mounted on a robot, is used to continuously acquire point clouds. Iterative closest point (ICP) is the common way to estimate the current camera position by registering the currently captured point cloud to the previous one. However, its performance suffers from the data association problem or requires pre-alignment information. The performance of the proposed principal direction detection approach does not rely on data association knowledge. Using this method, two point clouds are properly pre-aligned. Hence, we can use ICP to fine-tune the transformation parameters and minimize registration error. Experimental results demonstrate the performance and stability of the proposed system under disturbance in real time. Several indoor tests are carried out to show that the proposed visual odometry estimation method can help to significantly improve the accuracy of simultaneous localization and mapping (SLAM).
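Principal direction detection can be sketched as finding the dominant line orientation (folded into [0, 90) because a ceiling grid's lines repeat every 90 degrees) and using the inter-frame difference as a pre-alignment rotation before ICP. The simulated orientation samples and histogram parameters below are assumptions, not the paper's exact detector.

```python
import numpy as np

rng = np.random.default_rng(3)

def principal_direction(angles_deg):
    """Dominant line orientation, folded into [0, 90) since the ceiling
    grid's vertical and horizontal lines repeat every 90 degrees."""
    folded = np.asarray(angles_deg) % 90.0
    hist, edges = np.histogram(folded, bins=90, range=(0.0, 90.0))
    k = int(np.argmax(hist))
    sel = folded[(folded >= edges[k]) & (folded < edges[k + 1])]
    return float(sel.mean())

# Simulated line orientations extracted from two ceiling frames: the
# grid shows up near 12.5 deg in frame 1 and near 20.5 deg in frame 2.
frame1 = 12.5 + rng.normal(0.0, 0.3, 40)
frame2 = 20.5 + rng.normal(0.0, 0.3, 40)

dtheta = principal_direction(frame2) - principal_direction(frame1)
print(round(dtheta))  # about 8: rotate frame 2's cloud by -8 deg before ICP
```

Pre-rotating the second cloud by this angle gives ICP a good initial guess, which is exactly the role the principal direction plays in the proposed pipeline.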

16.
This paper presents a vision-based localization and mapping algorithm developed for an unmanned aerial vehicle (UAV) that can operate in a riverine environment. Our algorithm estimates the three-dimensional positions of point features along a river and the pose of the UAV. By detecting features surrounding a river and the corresponding reflections on the water's surface, we can exploit multiple-view geometry to enhance the observability of the estimation system. We use a robot-centric mapping framework to further improve the observability of the estimation system while reducing the computational burden. We analyze the performance of the proposed algorithm with numerical simulations and demonstrate its effectiveness through experiments with data from Crystal Lake Park in Urbana, Illinois. We also draw a comparison to existing approaches. Our experimental platform is equipped with a lightweight monocular camera, an inertial measurement unit, a magnetometer, an altimeter, and an onboard computer. To our knowledge, this is the first result that exploits the reflections of features in a riverine environment for localization and mapping.

17.
This paper presents the design of a stable non-linear control system for the remote visual tracking of cellular robots. The robots are controlled through visual feedback based on the processing of the image captured by a fixed video camera observing the workspace. The control algorithm is based only on measurements on the image plane of the camera (direct visual control), thus avoiding the problems related to camera calibration. In addition, the camera plane may have any (unknown) orientation with respect to the robot workspace. The controller uses an on-line estimation of the image Jacobians. Considering the Jacobians' estimation errors, the control system is capable of tracking a reference point moving on the image plane (defining the reference trajectory) with an ultimately bounded error. An obstacle avoidance strategy is also developed in the same context, based on the visual impedance concept. Experimental results show the performance of the overall control system.
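On-line image Jacobian estimation is commonly done with a Broyden rank-1 update from each observed robot move and the resulting image-feature displacement. The sketch below shows that general scheme on a hypothetical linear mapping; it is not the paper's exact estimator, and the true Jacobian, step sizes, and gain are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def broyden_update(J, dq, ds, alpha=0.5):
    """Rank-1 (Broyden) update of the estimated image Jacobian from the
    last robot move dq and the observed image-feature displacement ds."""
    dq = dq.reshape(-1, 1)
    ds = ds.reshape(-1, 1)
    return J + alpha * (ds - J @ dq) @ dq.T / float(dq.T @ dq)

# Hypothetical true mapping from robot motion to image-feature motion,
# assumed linear here so convergence is easy to see.
J_true = np.array([[2.0, 0.5],
                   [-0.3, 1.5]])

J = np.eye(2)  # crude initial guess of the image Jacobian
for _ in range(200):
    dq = rng.normal(0.0, 0.1, 2)  # small exploratory robot motion
    ds = J_true @ dq              # displacement the camera observes
    J = broyden_update(J, dq, ds)
print(np.round(J, 1))             # converges close to J_true
```

Because each update only needs the commanded motion and the observed image displacement, no camera calibration or known camera orientation is required, matching the uncalibrated setting the abstract describes.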

18.
Autonomous underwater vehicles (AUVs) can perform flexible operations in complex underwater environments due to their autonomy. Localization is one of the key components of autonomous navigation. Since the inertial navigation system of an AUV suffers from drift, observing fixed objects in an inertial reference system can enhance the localization performance. In this paper, we propose a method of localizing AUVs by exploiting visual measurements of underwater structures and artificial landmarks. In a framework of particle filtering, a camera measurement model that emulates the camera's observation of underwater structures is designed. The particle weight is then updated based on the extracted visual information of the underwater structures. Detected artificial landmarks are also used in the particle weight update. The proposed method is validated by experiments performed in a structured basin environment.

19.
The possibility of fusing navigation data obtained by two separate navigation systems (a strap-down inertial one and a dynamic vision-based one) is considered in this paper. The attention is primarily focused on principles of validation of the separate estimates before their use in a combined algorithm. The inertial navigation system (INS), based on sensors of medium quality, has been analyzed on one side, while the visual navigation method is based on the analysis of a sequence of images of ground landmarks produced by an on-board TV camera. The accuracy of the INS estimates is improved continuously by optimal estimation of the flying object's angular orientation, while the visual navigation system offers discrete corrections during the intervals when landmarks are present inside the camera's field of view. The concept is illustrated by dynamic simulation of a realistic flight scenario. © 2004 Wiley Periodicals, Inc.

20.
刘辉, 张雪波, 李如意, 苑晶. 《控制与决策》 2024, 39(6): 1787-1800
Laser simultaneous localization and mapping (SLAM) algorithms rely on structural features of the environment for pose estimation and map building; in scenes lacking such features, their pose estimation accuracy and robustness degrade or they fail outright. Exploiting the fact that an inertial measurement unit (IMU) is unaffected by the environment while cameras depend on visual texture, a stereo-vision-aided laser-inertial SLAM algorithm is proposed to address the degeneration of laser-only SLAM algorithms when structural features are scarce. A stereo visual-inertial odometry algorithm provides a visual pose prior for the laser scan-matching module, and joint pose estimation then further accounts for both visual constraints and laser structural-feature constraints. In addition, a combined strategy of complementary filtering and factor-graph optimization is proposed to align the laser odometry reference frame with the inertial reference frame, and laser poses are fused with IMU data in a factor graph to constrain the IMU biases, providing a fallback relative-pose prediction for laser scan matching when visual odometry fails. To further improve global trajectory accuracy, a hybrid loop-closure detection strategy fusing the iterative closest point (ICP) matching algorithm with image-feature matching is proposed, and 6-DOF pose-graph optimization significantly reduces odometry drift...
