Similar Literature
20 similar documents found (search took 46 ms).
1.
We provide a sensor fusion framework for solving the problem of joint ego-motion and road geometry estimation. More specifically, we employ a sensor fusion framework to make systematic use of the measurements from a forward-looking radar and camera, a steering wheel angle sensor, wheel speed sensors and inertial sensors to compute good estimates of the road geometry and the motion of the ego vehicle on this road. In order to solve this problem we derive dynamical models for the ego vehicle, the road and the leading vehicles. The main difference from existing approaches is that we make use of a new dynamic model for the road. An extended Kalman filter is used to fuse the data and to filter the camera measurements in order to improve the road geometry estimate. The proposed solution has been tested and compared to existing algorithms for this problem, using measurements from authentic traffic environments on public roads in Sweden. The results clearly indicate that the proposed method provides better estimates.

2.
Vision-aided inertial navigation systems (V-INSs) can provide precise state estimates for the 3-D motion of a vehicle when no external references (e.g., GPS) are available. This is achieved by combining inertial measurements from an inertial measurement unit (IMU) with visual observations from a camera under the assumption that the rigid transformation between the two sensors is known. Errors in the IMU-camera extrinsic calibration process cause biases that reduce the estimation accuracy and can even lead to divergence of any estimator processing the measurements from both sensors. In this paper, we present an extended Kalman filter for precisely determining the unknown transformation between a camera and an IMU. Contrary to previous approaches, we explicitly account for the time correlation of the IMU measurements and provide a figure of merit (covariance) for the estimated transformation. The proposed method does not require any special hardware (such as a spin table or a 3-D laser scanner) except a calibration target. Furthermore, we employ the observability rank criterion based on Lie derivatives and prove that the nonlinear system describing the IMU-camera calibration process is observable. Simulation and experimental results are presented that validate the proposed method and quantify its accuracy.

3.
The combination of a camera and an Inertial Measurement Unit (IMU) has received much attention for state estimation of Micro Aerial Vehicles (MAVs). In contrast to many map-based solutions, this paper focuses on optic flow (OF) based approaches, which are much more computationally efficient. The robustness of a popular OF algorithm is improved by using a transformed binary image derived from the intensity image. Aided by the on-board IMU, a homography model is developed from which the speed up to an unknown scale factor (the ratio of speed to distance) is obtained directly from the homography matrix, without performing a Singular Value Decomposition (SVD) afterwards. The RANSAC algorithm is employed for outlier detection. Real images and IMU data recorded from our quadrotor platform show the superiority of the proposed method over traditional approaches that decompose the homography matrix for motion estimation, especially in poorly textured scenes. Visual outputs are then fused with the inertial measurements using an Extended Kalman Filter (EKF) to estimate metric speed, distance to the scene and also acceleration biases. Flight experiments prove that the visual-inertial fusion approach is adequate for the closed-loop control of an MAV.
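As a rough illustration of the idea in this abstract (not the authors' implementation; the function name and synthetic numbers are ours), with the attitude R supplied by the IMU and the plane normal n known (e.g., the gravity direction for a downward camera), the scaled translation t/d can be read directly off the plane homography H = R + (t/d)n^T, with no SVD-based decomposition, since (H - R)n = (t/d)(n^T n) = t/d:

```python
import numpy as np

def scaled_translation_from_homography(H, R, n):
    """Recover t/d (translation over scene-plane distance) from a homography.

    Assumes a Euclidean-normalized homography H, a rotation R supplied by
    the IMU, and a known unit plane normal n. Illustrative sketch only.
    """
    return (H - R) @ n

# Synthetic check: build H from a known motion, then recover t/d.
R = np.eye(3)                          # level flight: no rotation between frames
t_over_d = np.array([0.3, -0.1, 0.05]) # true scaled translation
n = np.array([0.0, 0.0, 1.0])          # flat ground below a downward camera
H = R + np.outer(t_over_d, n)
recovered = scaled_translation_from_homography(H, R, n)
```

The appeal of this route is that it avoids the sign and solution ambiguities of a full homography decomposition by reusing attitude information the IMU already provides.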

4.
Advanced Robotics, 2013, 27(11-12): 1493-1514
In this paper, a fully autonomous quadrotor in a heterogeneous air–ground multi-robot system is realized using only minimal on-board sensors: a monocular camera and inertial measurement units (IMUs). Efficient pose and motion estimation is proposed and optimized. A continuous-discrete extended Kalman filter is applied, in which the high-frequency IMU data drive the prediction, while the estimates are corrected by the accurate and steady vision data. A high-frequency fusion at 100 Hz is achieved. Moreover, time-delay analysis and data synchronization are conducted to further improve the pose/motion estimation of the quadrotor. The complete on-board implementation of sensor data processing and control algorithms reduces the influence of data-transfer time delay, enables autonomous task accomplishment and extends the workspace. Higher pose estimation accuracy and smaller control errors compared to standard approaches are achieved in real-time hovering and tracking experiments.

5.
Pose estimation using line-based dynamic vision and inertial sensors
An observer problem from a computer vision application is studied. Rigid body pose estimation using inertial sensors and a monocular camera is considered, and it is shown how rotation estimation can be decoupled from position estimation. Orientation estimation is formulated as an observer problem with implicit output where the states evolve on SO(3). A careful observability study reveals interesting group-theoretic structures tied to the underlying system structure. A locally convergent observer where the states evolve on SO(3) is proposed, and numerical estimates of the domain of attraction are given. Further, it is shown that, given convergent orientation estimates, position estimation can be formulated as a linear implicit output problem. From an applications perspective, it is outlined how delayed low-bandwidth visual observations and high-bandwidth rate gyro measurements can provide high-bandwidth estimates. This is consistent with real-time constraints due to the complementary characteristics of the sensors, which are fused in a multirate way.

6.
A method is presented for estimating the position of the rotational and prismatic joints of a novel SCARA-type fault-tolerant redundant manipulator using inertial sensors (accelerometers and gyroscopes). The estimation is based on the integration of the different sensors by means of the modified AUKF algorithm. The results of this integration scheme are compared with the CMRGD and DCMR methods, which estimate the positions of rotational joints from inertial sensors. The proposed method shows a clear advantage over existing methods for estimating the joint angles, and additionally allows the positions of the prismatic joints to be calculated.

7.
This paper studies vision-aided inertial navigation of small-scale unmanned aerial vehicles (UAVs) in GPS-denied environments. The objectives of the navigation system are (i) to estimate and compensate online for the unknown inertial measurement biases, (ii) to provide drift-free velocity and attitude estimates, which are crucial for UAV stabilization control, and (iii) to give relatively accurate position estimates so that the UAV can perform at least short-term navigation when the GPS signal is unavailable. For the vision system, we do not presume maps or landmarks of the environment. The vision system should be able to work robustly even given low-resolution images (e.g., 160 × 120 pixels) of nearly homogeneous visual features. To achieve these objectives, we propose a novel homography-based vision-aided navigation system that adopts four common sensors: a low-cost inertial measurement unit, a downward-looking monocular camera, a barometer, and a compass. The measurements of the sensors are fused by an extended Kalman filter. Based on both analytical and numerical observability analyses of the navigation system, we theoretically verify that the proposed navigation system is able to achieve the navigation objectives. We also show comprehensive simulation and real flight experimental results to verify the effectiveness and robustness of the proposed navigation system.

8.
Because an inertial measurement unit (IMU) suffers from inherent measurement drift, it is difficult to obtain an accurate indoor pedestrian trajectory. A multi-sensor information fusion scheme is proposed that combines an IMU with a video camera, and Kalman filtering is combined with a zero-velocity detection algorithm for parameter optimization to improve the accuracy of the pedestrian trajectory. Simulation results show that the algorithm effectively reduces the trajectory error.
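A minimal sketch of the zero-velocity detection step mentioned in this abstract, assuming a body-mounted IMU; the function, window-free thresholding and numeric values are illustrative assumptions of ours, not the paper's tuned detector:

```python
import numpy as np

G = 9.81  # gravity magnitude, m/s^2

def zero_velocity_mask(accel, gyro, acc_tol=0.5, gyro_tol=0.2):
    """Flag samples where the pedestrian's sensor is likely stationary.

    A sample is a zero-velocity candidate when the accelerometer magnitude
    is close to gravity and the angular rate is small; at such samples the
    Kalman filter can apply a zero-velocity update to cancel drift.
    Thresholds are illustrative, not values from the paper.
    """
    acc_norm = np.linalg.norm(accel, axis=1)
    gyro_norm = np.linalg.norm(gyro, axis=1)
    return (np.abs(acc_norm - G) < acc_tol) & (gyro_norm < gyro_tol)

# Synthetic data: 3 stationary samples followed by 2 moving samples.
accel = np.array([[0.0, 0.0, 9.8], [0.1, 0.0, 9.7], [0.0, 0.1, 9.9],
                  [2.0, 0.0, 11.0], [1.5, 0.5, 7.0]])
gyro = np.array([[0.0, 0.0, 0.01], [0.05, 0.0, 0.0], [0.0, 0.02, 0.0],
                 [1.0, 0.2, 0.0], [0.8, 0.1, 0.3]])
mask = zero_velocity_mask(accel, gyro)
```

In practice such detectors operate on a sliding window rather than single samples, but the thresholding idea is the same.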

9.
This paper explores the combination of inertial sensor data with vision. Visual and inertial sensing are two sensory modalities that can be exploited to give robust solutions for image segmentation and for recovery of 3D structure from images, increasing the capabilities of autonomous robots and enlarging the application potential of vision systems. In biological systems, the information provided by the vestibular system is fused at a very early processing stage with vision, playing a key role in the execution of visual movements such as gaze holding and tracking, while the visual cues aid spatial orientation and body equilibrium. In this paper, we set out a framework for using inertial sensor data in vision systems and describe some results obtained. The unit-sphere projection camera model is used, providing a simple model for inertial data integration. Using the vertical reference provided by the inertial sensors, the image horizon line can be determined. Using just one vanishing point and the vertical, we can recover the camera's focal distance and provide an external bearing for the system's navigation frame of reference. Knowing the geometry of a stereo rig and its pose from the inertial sensors, the collineations of level planes can be recovered, providing enough restrictions to segment and reconstruct vertical features and leveled planar patches.

10.
We aim at developing autonomous miniature hovering flying robots capable of navigating in unstructured GPS-denied environments. A major challenge is the miniaturization of the embedded sensors and processors that allow such platforms to fly by themselves. In this paper, we propose a novel ego-motion estimation algorithm for hovering robots equipped with inertial and optic-flow sensors that runs in real time on a microcontroller and enables autonomous flight. Unlike many vision-based methods, this algorithm does not rely on feature tracking, structure estimation, additional distance sensors or assumptions about the environment. In this method, we introduce the translational optic-flow direction constraint, which uses the optic-flow direction but not its scale to correct for inertial sensor drift during changes of direction. This solution requires comparatively simple electronics and sensors and works in environments of any geometry. Here we describe the implementation and performance of the method on a hovering robot equipped with eight 0.65 g optic-flow sensors, and show that it can be used for closed-loop control of various motions.
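One way to picture a direction-only optic-flow correction (a simplified stand-in for the paper's translational optic-flow direction constraint, with our own function and numbers, not the authors' formulation): the component of the inertially predicted velocity orthogonal to the measured flow direction carries the drift-correction information, while the flow magnitude (and hence the unknown scale) is never used.

```python
import numpy as np

def direction_innovation(v_pred, flow_dir):
    """Drift-correction signal from the optic-flow direction alone.

    v_pred is the velocity predicted by inertial integration; flow_dir is
    the measured direction of translational optic flow. The residual below
    is the part of v_pred that disagrees with the measured direction and
    would be driven to zero by a filter update. Purely illustrative.
    """
    u = flow_dir / np.linalg.norm(flow_dir)
    # Remove the component of v_pred along the measured direction; the
    # remainder is insensitive to the unknown flow scale.
    return v_pred - (v_pred @ u) * u

# Prediction drifts slightly off the true (x-axis) motion direction.
inn = direction_innovation(np.array([1.0, 0.2, 0.0]),
                           np.array([1.0, 0.0, 0.0]))
```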

11.
Research on a robot attitude detection system based on MEMS inertial sensors
A robot attitude detection system based on MEMS inertial sensors is proposed, and the principle, composition and data acquisition of the detection system are studied. The factors affecting the gyroscope and accelerometer are analyzed, and the acquired data are filtered in hardware. Data fusion is then performed by Kalman filtering, which makes full use of the inertial sensor information and effectively improves the accuracy of the attitude detection system. Simulation experiments show that the Kalman filtering method is effective in improving detection accuracy; good results were also obtained in practical experiments, and the system has been applied to actual robot attitude detection.

12.
Research on inertial sensor filtering for a two-wheeled self-balancing robot
To address the random drift errors of inertial sensors in attitude detection for a two-wheeled robot, a simple and practical filtering algorithm based on Kalman filtering is designed to fuse the information from an inclinometer and a gyroscope. After the sensor errors are compensated, an optimal estimate of the robot's attitude signal is obtained and applied to the two-wheeled self-balancing robot system. Experimental results show that obtaining the optimal attitude estimate through Kalman information fusion is effective and feasible, and helps the robot achieve self-balancing control.
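The inclinometer/gyroscope fusion described in this abstract is, in essence, a small Kalman filter with the tilt angle and the gyro bias as states: the gyro drives the prediction and the inclinometer corrects it. A self-contained sketch under our own assumptions (noise levels, thresholds and synthetic data are ours, not the paper's):

```python
import numpy as np

def tilt_kalman(gyro_rates, incl_angles, dt=0.01, q=1e-4, r=0.05):
    """Minimal 1-D Kalman filter fusing a rate gyro with an inclinometer.

    State x = [tilt angle, gyro bias]. The gyro rate drives the prediction
    and the inclinometer angle corrects it; the bias state absorbs the
    gyro's slow drift. q, r are illustrative noise levels.
    """
    x = np.zeros(2)                        # [angle, bias]
    P = np.eye(2)
    F = np.array([[1.0, -dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = q * np.eye(2)
    est = []
    for w, z in zip(gyro_rates, incl_angles):
        # Predict: integrate the bias-corrected gyro rate.
        x = F @ x + np.array([dt * w, 0.0])
        P = F @ P @ F.T + Q
        # Update with the inclinometer angle measurement.
        y = z - H @ x
        S = H @ P @ H.T + r
        K = (P @ H.T) / S
        x = x + (K * y).ravel()
        P = (np.eye(2) - K @ H) @ P
        est.append(x[0])
    return np.array(est)

# Synthetic run: constant true tilt of 0.3 rad, biased gyro, noisy inclinometer.
rng = np.random.default_rng(0)
n = 500
gyro = 0.05 + 0.01 * rng.standard_normal(n)   # true rate 0, bias 0.05 rad/s
incl = 0.3 + 0.05 * rng.standard_normal(n)    # noisy inclinometer readings
angles = tilt_kalman(gyro, incl)
```

The bias state is what distinguishes this from a plain complementary filter: without it, the 0.05 rad/s gyro drift would accumulate directly into the angle estimate.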

13.
In this paper, we address the problem of ego-motion estimation by fusing visual and inertial information. The hardware consists of an inertial measurement unit (IMU) and a monocular camera. The camera provides visual observations in the form of features on a horizontal plane. Exploiting the geometric constraint that the features lie on the plane, we propose a novel closed-form measurement model for this system. Our first contribution is an observability analysis of the proposed planar-based visual inertial navigation system (VINS). In particular, we prove that the system has only three unobservable states, corresponding to global translations parallel to the plane and rotation around the gravity vector. Hence, compared to a general VINS, an advantage of using features on the horizontal plane is that the vertical translation along the normal of the plane becomes observable. As the second contribution, we present a state-space formulation for pose estimation in the analyzed system and solve it via a modified unscented Kalman filter (UKF). Finally, the findings of the theoretical analysis and the 6-DoF motion estimation are validated by simulations as well as experimental data.

14.
A hybrid tracking and localization algorithm for outdoor augmented reality systems
A single sensor cannot solve the tracking and localization problem in outdoor augmented reality systems. To improve the accuracy and robustness of vision-based tracking and localization, a hybrid algorithm combining an inertial tracker with visual measurements is proposed. Within an extended Kalman filter framework, the algorithm estimates the camera's motion trajectory by fusing information from the visual and inertial sensors, and uses the visual measurements to correct the inertial sensors' zero-point bias in real time. The SCAAT method is adopted to handle the sampling asynchrony between the inertial sensor and the visual measurements. Experimental results show that the algorithm effectively improves the accuracy and stability of motion estimation.

15.
Human motion tracking has many applications in biomedical and industrial services. Low-cost inertial/magnetic sensors are widely used in human motion capture systems to obtain the orientation of the human body segments. In this paper, we present a quaternion-based unscented Kalman filter algorithm to fuse inertial/magnetic sensor measurements for tracking human arm movements. In order to obtain a better estimate of the orientation of the forearm and the upper arm, a constraint equation was developed based on the relative velocity of the elbow joint with respect to the inertial sensors attached to the forearm and the upper arm. Also, to compensate for fast body motions, we adapt the measurement covariance matrix so that the filter relies on the gyroscopes when large accelerations are involved. The proposed algorithm was evaluated experimentally against an optical tracking system as the ground-truth reference. The results show the effectiveness and good performance of the proposed algorithm.
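The adaptive measurement-covariance idea (trust the gyroscopes, not the accelerometer, when accelerations are large) can be sketched as follows; the scaling law, constants and function name are illustrative assumptions of ours, not the authors' tuning:

```python
import numpy as np

G = 9.81  # gravity magnitude, m/s^2

def adaptive_accel_covariance(accel, base_var=1e-2, gain=10.0):
    """Inflate the accelerometer measurement covariance during fast motion.

    When the measured specific force departs from gravity, the accelerometer
    no longer observes the vertical direction reliably, so the filter should
    weight it less and lean on the gyroscope prediction. This sketch scales
    a diagonal covariance by the deviation of ||a|| from g.
    """
    deviation = abs(np.linalg.norm(accel) - G)
    return (base_var * (1.0 + gain * deviation)) * np.eye(3)

R_rest = adaptive_accel_covariance(np.array([0.0, 0.0, 9.81]))  # at rest
R_fast = adaptive_accel_covariance(np.array([5.0, 0.0, 9.81]))  # fast motion
```

A larger measurement covariance shrinks the Kalman gain on the accelerometer channel, which is exactly the "use the gyros during large accelerations" behaviour the abstract describes.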

16.
This note presents a simple approach to the observability analysis of rotation estimation using line-based dynamic vision and inertial sensors. The problem was originally raised and formulated in Rehbinder and Ghosh [2003. Pose estimation using line-based dynamic vision and inertial sensors. IEEE Transactions on Automatic Control, 48(2), 186-199], where the unobservable subgroup was derived using complex matrix manipulations. By solving linear quaternion equations and using set operations, we not only obtain the same result but also naturally extend it to the case of linearly dependent lines. The development in this note is more straightforward and gives rise to a clearer picture of the problem.

17.
The growth of civil and military use has recently promoted the development of unmanned miniature aerial vehicles dedicated to surveillance tasks. These flying vehicles are often capable of carrying only a few dozen grammes of payload. To achieve autonomy for this kind of aircraft, novel sensors are required that can cope with strictly limited onboard processing power. One of the key aspects of autonomous behaviour is target tracking. Our visual tracking approach differs from other methods by not using expensive cameras but a Wii remote camera, i.e., commodity consumer hardware. The system works without stationary sensors, and all processing is done with an onboard microcontroller. The only assumptions are a good roll and pitch attitude estimate, provided by an inertial measurement unit, and a stationary pattern of four infrared spots on the target or the landing spot. This paper details experiments on hovering above a landing place, but tracking a slowly moving target is also possible.

18.
A tightly-coupled stereo vision-aided inertial navigation system is proposed in this work as a synergistic incorporation of vision with other sensors. In order to avoid the loss of information possibly resulting from visual preprocessing, a set of feature-based motion sensors and an inertial measurement unit are directly fused together to estimate the vehicle state. Two alternative feature-based observation models are considered within the proposed fusion architecture. The first model uses the trifocal tensor to propagate feature points by homography, so as to express geometric constraints among three consecutive scenes. The second is derived by applying a rigid-body motion model to three-dimensional (3D) reconstructed feature points. A kinematic model accounts for the vehicle motion, and a Sigma-Point Kalman filter is used to achieve robust state estimation in the presence of non-linearities. The proposed formulation is derived for a general platform-independent 3D problem, and it is tested and demonstrated on a real dynamic indoor data-set alongside a simulation experiment. Results show improved estimates compared to a classical visual odometry approach and to a loosely-coupled stereo vision-aided inertial navigation system, even in GPS (Global Positioning System)-denied conditions and when magnetometer measurements are not reliable.

19.
Visual-inertial odometry based on fast invariant Kalman filtering
黄伟杰, 张国山. 控制与决策 (Control and Decision), 2019, 34(12): 2585-2593
To address the camera localization problem, a visual-inertial odometry system based on a depth camera and an inertial sensor is designed, consisting of a localization part and a relocalization part. The localization part uses an invariant Kalman filter to fuse the pose estimates of multi-level iterative closest point (ICP) with the inertial sensor measurements to obtain an accurate camera pose, where the ICP estimation error is quantified using the Fisher information matrix. Since massive point clouds are used as input, GPU parallel computing is adopted so that the ICP estimation and error quantification can be carried out quickly. When localization fails, a constant-velocity model is built from the inertial sensor data, and the random-ferns relocalization method is improved on the basis of this model to realize relocalization of the visual-inertial odometry. Experimental results show that the designed visual-inertial odometry tracks the camera accurately and relocalizes effectively.

20.
This paper presents a hierarchical simultaneous localization and mapping (SLAM) system for a small unmanned aerial vehicle (UAV) using the output of an inertial measurement unit (IMU) and the bearing-only observations from an onboard monocular camera. A homography-based approach is used to calculate the motion of the vehicle in 6 degrees of freedom by image feature matching. This visual measurement is fused with the inertial outputs by an indirect extended Kalman filter (EKF) for attitude and velocity estimation. Then, another EKF is employed to estimate the position of the vehicle and the locations of the features in the map. Both simulations and experiments are carried out to test the performance of the proposed system. The comparison with a referential global positioning system/inertial navigation system (GPS/INS) navigation solution indicates that the proposed SLAM can provide reliable and stable state estimation for small UAVs in GPS-denied environments.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号