Similar Documents
20 similar documents found (search time: 46 ms)
1.
We present a quadrotor Micro Aerial Vehicle (MAV) equipped with four cameras, arranged in two stereo configurations. The MAV is able to perform stereo matching for each camera pair on-board and in real time, using an efficient sparse stereo method. For the forward-facing camera pair, the stereo matching results feed a reduced stereo SLAM system. The downward-facing camera pair is used for ground plane detection and tracking. Hence, we obtain a full 6DoF pose estimate from each camera pair, which we fuse with inertial measurements in an extended Kalman filter. Special care is taken to compensate for various drift errors. In an evaluation we show that using two camera pairs instead of one significantly increases pose estimation accuracy and robustness.
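The paper's extended Kalman filter is not reproduced in the abstract, but the core idea of combining two independent pose estimates can be sketched with the information-form fusion of two Gaussians. The numbers below are hypothetical stand-ins for the forward and downward camera-pair estimates, and only the position part of the 6DoF pose is shown.

```python
import numpy as np

def fuse_gaussian(x1, P1, x2, P2):
    """Fuse two independent Gaussian estimates of the same state
    (the information-filter form of a Kalman update)."""
    I1 = np.linalg.inv(P1)
    I2 = np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)          # fused covariance
    x = P @ (I1 @ x1 + I2 @ x2)         # covariance-weighted mean
    return x, P

# Hypothetical position estimates from the two camera pairs.
x_fwd  = np.array([1.00, 2.00, 0.50]); P_fwd  = np.diag([0.04, 0.04, 0.09])
x_down = np.array([1.10, 1.95, 0.52]); P_down = np.diag([0.09, 0.09, 0.01])
x, P = fuse_gaussian(x_fwd, P_fwd, x_down, P_down)
# The fused covariance is smaller than either input along every axis,
# which is why the two-pair configuration improves accuracy.
```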

2.
We describe a novel quadrotor Micro Air Vehicle (MAV) system that is designed to use computer vision algorithms within the flight control loop. The main contribution is a MAV system that is able to run both the vision-based flight control and stereo-vision-based obstacle detection in parallel on an embedded computer onboard the MAV. The system design features the integration of a powerful onboard computer and the synchronization of IMU and vision measurements by hardware timestamping, which allows tight integration of IMU measurements into the computer vision pipeline. We evaluate the accuracy of marker-based visual pose estimation for flight control and demonstrate marker-based autonomous flight including obstacle detection using stereo vision. We also show the benefits of our IMU-vision synchronization for egomotion estimation in additional experiments, where we use the synchronized measurements for pose estimation using the 2pt+gravity formulation of the PnP problem.

3.
The use of unmanned aerial vehicles (UAVs) in the military, scientific, and civilian sectors has increased drastically in recent years. This study presents algorithms for the visual-servo control of a UAV, in which a quadrotor helicopter is stabilized with visual information through the control loop. Unlike previous studies that use a pose-estimation approach, which is time-consuming and subject to various errors, visual-servo control is more reliable and fast. The method requires a camera on board the vehicle, which is already available on various UAV systems. The UAV with a camera behaves like an eye-in-hand visual servoing system. In this study the controller was designed using two different approaches: the image-based visual servo control method and the hybrid visual servo control method. Various simulations were developed in Matlab, in which the quadrotor aerial vehicle is visual-servo controlled. To show the effectiveness of the algorithms, experiments were performed on a model quadrotor UAV, which suggest successful performance.

4.
In this paper, we present a multi-sensor-fusion-based monocular visual navigation system for a quadrotor with limited payload, power and computational resources. Our system is equipped with an inertial measurement unit (IMU), a sonar and a monocular down-looking camera. It is able to work well in GPS-denied and markerless environments. Unlike most keyframe-based visual navigation systems, our system uses the information from both keyframes and keypoints in each frame. The GPU-based speeded-up robust feature (SURF) is employed for feature detection and matching. Based on the flight characteristics of the quadrotor, we propose a refined preliminary motion estimation algorithm that incorporates IMU data. A multi-level judgment rule is then presented which benefits hovering conditions and effectively reduces error accumulation. By using the sonar sensor, the metric scale estimation problem is solved. We also present the novel IMU+3P (IMU with three point correspondences) algorithm for accurate pose estimation. This algorithm transforms the 6-DOF pose estimation problem into a 4-DOF problem and obtains more accurate results with less computation time. We performed experiments with the monocular visual navigation system in real indoor and outdoor environments. The results demonstrate that the system runs in real time and provides robust and accurate navigation for the quadrotor.
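The abstract does not detail the sonar-based scale recovery, but the underlying principle is simple: a monocular map is only defined up to scale, and one metric measurement of a quantity also visible in the map fixes that scale. A minimal sketch, with hypothetical numbers:

```python
def metric_scale(sonar_altitude_m, visual_altitude_units):
    """Metric scale factor for a monocular map: the ratio of a metric
    measurement (sonar altitude) to the same quantity in map units."""
    return sonar_altitude_m / visual_altitude_units

# Sonar reads 1.2 m while the visual map says the camera is 0.4 units
# above the ground plane, so one map unit is 3 m.
scale = metric_scale(1.2, 0.4)
p_metric = [scale * c for c in (0.1, 0.2, 0.4)]  # map point -> metres
```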

5.
Graphical Models, 2008, 70(4):57-75
This paper studies inside-looking-out camera pose estimation for the virtual studio. The camera pose estimation process, i.e. the process of estimating a camera's extrinsic parameters, is based on closed-form geometrical approaches that benefit from simple corner detection of 3D cubic-like virtual studio landmarks. We first examine the parameters that affect the camera pose estimation process for the virtual studio. Our studies include all characteristic landmark parameters, such as landmark lengths, landmark corner angles and their installation position errors, and some camera parameters, such as lens focal length and CCD resolution. Through computer simulation we investigate and analyze the effect of all these parameters on the camera extrinsic parameters, including the camera rotation and position matrices. Based on this work, we found that the camera translation vector is affected by noise in these parameters more than the other camera extrinsic parameters. Therefore, we present a novel iterative geometrical noise cancellation method for the closed-form camera pose estimation process. It is based on collinearity theory and reduces the estimation error of the camera translation vector, which plays a major role in camera extrinsic parameter estimation errors. To validate our method, we test it in a complete virtual studio simulation. Our simulation results are of the same order as those of some commercial systems, such as the BBC and InterSense IS-1200 VisTracker.

6.
Unmanned miniature air vehicles (MAVs) have recently become a focus of much research, due to their potential utility in a number of information gathering applications. MAVs currently carry inertial sensor packages that allow them to perform basic flight maneuvers reliably in a completely autonomous manner. However, MAV navigation requires knowledge of location that is currently available only through GPS sensors, which depend on an external infrastructure and are thus prone to reliability issues. Vision-based methods such as Visual Odometry (VO) have been developed that are capable of estimating MAV pose purely from vision, and thus have the potential to provide an autonomous alternative to GPS for MAV navigation. Because VO estimates pose by combining relative pose estimates, constraining relative pose error is the key element of any Visual Odometry system. In this paper, we present a system that fuses measurements from an MAV inertial navigation system (INS) with a novel VO framework based on direct image registration. We use the inertial sensors in the measurement step of the Extended Kalman Filter to determine the direction of gravity, and hence provide error-bounded measurements of certain portions of the aircraft pose. Because of the relative nature of VO measurements, we use VO in the EKF prediction step. To allow VO to be used as a prediction, we develop a novel linear approximation to the direct image registration procedure that allows us to propagate the covariance matrix at each time step. We present offline results obtained from our pose estimation system using actual MAV flight data. We show that fusion of VO and INS measurements greatly improves the accuracy of pose estimation and reduces the drift compared to unaided VO during medium-length (tens of seconds) periods of GPS dropout.

7.
Localisation and mapping with an omnidirectional camera becomes more difficult as the landmark appearances change dramatically in the omnidirectional image. With conventional techniques, it is difficult to match the features of the landmark with the template. We present a novel robot simultaneous localisation and mapping (SLAM) algorithm with an omnidirectional camera, which uses incremental landmark appearance learning to provide a posterior probability distribution for estimating the robot pose under a particle filtering framework. The major contribution of our work is to represent the posterior estimation of the robot pose by incremental probabilistic principal component analysis, which can be naturally incorporated into the particle filtering algorithm for robot SLAM. Moreover, the innovative method of this article allows the adoption of the severely distorted landmark appearances viewed with an omnidirectional camera for robot SLAM. The experimental results demonstrate that the localisation error is less than 1 cm in an indoor environment using five landmarks, and the location of the landmark appearances can be estimated within 5 pixels deviation from the ground truth in the omnidirectional image at a fairly fast speed.

8.
The combination of a camera and an Inertial Measurement Unit (IMU) has received much attention for state estimation of Micro Aerial Vehicles (MAVs). In contrast to many map-based solutions, this paper focuses on optic flow (OF) based approaches, which are much more computationally efficient. The robustness of a popular OF algorithm is improved using a binary image transformed from the intensity image. Aided by the on-board IMU, a homography model is developed in which it is proposed to directly obtain the speed up to an unknown scale factor (the ratio of speed to distance) from the homography matrix without performing Singular Value Decomposition (SVD) afterwards. The RANSAC algorithm is employed for outlier detection. Real images and IMU data recorded from our quadrotor platform show the superiority of the proposed method over traditional approaches that decompose the homography matrix for motion estimation, especially over poorly-textured scenes. Visual outputs are then fused with the inertial measurements using an Extended Kalman Filter (EKF) to estimate metric speed, distance to the scene and also acceleration biases. Flight experiments prove the visual inertial fusion approach is adequate for the closed-loop control of a MAV.
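The paper's exact derivation is not given in the abstract, but the SVD-free extraction it alludes to can be illustrated with the standard planar homography model H = R + (t/d)·nᵀ. If the rotation R and the unit ground-plane normal n are known from the IMU, then (H − R)·n = t/d directly, with no decomposition. The numbers below are hypothetical:

```python
import numpy as np

# Hypothetical ground truth: rotation R from the IMU, ground-plane
# normal n (from the gravity direction), and v = t/d to be recovered.
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
n = np.array([0.0, 0.0, 1.0])            # unit plane normal
v_true = np.array([0.3, -0.1, 0.05])     # translation over distance, t/d

H = R + np.outer(v_true, n)              # planar homography: H = R + (t/d) n^T

# Since n is a unit vector, (H - R) n = (t/d)(n . n) = t/d:
v_est = (H - R) @ n
```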

9.
This paper considers vision-based estimation and pose control with a panoramic camera via a passivity approach. First, a hyperbolic projection of a panoramic camera is presented. Next, using standard body-attached coordinate frames (the world frame, mirror frame, camera frame and object frame), we represent the body velocity of the relative rigid body motion (position and orientation). After that, we propose a visual motion observer to estimate the relative rigid body motion from the measured camera data. We show that the estimation error system with a panoramic camera has the passivity which allows us to prove stability in the sense of Lyapunov. The visual motion error system, which consists of the estimation error system and the pose control error system, preserves the passivity. After that, stability and L2-gain performance analysis for the closed-loop system are discussed via the Lyapunov method and dissipative systems theory, respectively. Finally, simulation and experimental results are shown in order to confirm the proposed method.

10.
Vision-based Target Geo-location using a Fixed-wing Miniature Air Vehicle
This paper presents a method for determining the GPS location of a ground-based object when imaged from a fixed-wing miniature air vehicle (MAV). Using the pixel location of the target in an image, measurements of MAV position and attitude, and camera pose angles, the target is localized in world coordinates. The main contribution of this paper is to present four techniques for reducing the localization error. In particular, we discuss RLS filtering, bias estimation, flight path selection, and wind estimation. The localization method has been implemented and flight tested on BYU's MAV testbed and experimental results are presented demonstrating the localization of a target to within 3 m of its known GPS location.
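The geometric core of this kind of geolocation can be sketched as a ray-plane intersection: rotate the ray through the target pixel into the world frame using the MAV attitude and camera pose angles, then intersect it with a flat-earth ground plane. The filtering and bias-correction techniques the paper contributes are not shown; the numbers are hypothetical.

```python
import numpy as np

def geolocate(cam_pos, ray_dir_world, ground_z=0.0):
    """Intersect a camera ray with the flat-earth plane z = ground_z.
    cam_pos: camera position in the world frame (from MAV state).
    ray_dir_world: unit ray through the target pixel, already rotated
    into the world frame via the MAV attitude and camera pose angles."""
    t = (ground_z - cam_pos[2]) / ray_dir_world[2]
    assert t > 0, "ray must point towards the ground"
    return cam_pos + t * ray_dir_world

# MAV at 100 m altitude, ray pointing down and slightly forward.
d = np.array([0.1, 0.0, -1.0])
target = geolocate(np.array([0.0, 0.0, 100.0]), d / np.linalg.norm(d))
# target lands 10 m ahead of the MAV's ground point: [10, 0, 0]
```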

11.
Advanced Robotics, 2013, 27(11-12):1493-1514
In this paper, a fully autonomous quadrotor in a heterogeneous air–ground multi-robot system is established using only minimal on-board sensors: a monocular camera and inertial measurement units (IMUs). Efficient pose and motion estimation is proposed and optimized. A continuous-discrete extended Kalman filter is applied, in which the high-frequency IMU data drive the prediction, while the estimates are corrected by the accurate and steady vision data. A high-frequency fusion at 100 Hz is achieved. Moreover, time delay analysis and data synchronization are conducted to further improve the pose/motion estimation of the quadrotor. The complete on-board implementation of sensor data processing and control algorithms reduces the influence of data transfer time delay, enables autonomous task accomplishment and extends the workspace. Higher pose estimation accuracy and smaller control errors compared to standard approaches are achieved in real-time hovering and tracking experiments.

12.
This paper proposes a real-time system for pose estimation of an unmanned aerial vehicle (UAV) using parallel image processing and a fiducial marker. The system exploits the capabilities of a high-performance CPU/GPU embedded system in order to provide on-board high-frequency pose estimation enabling autonomous takeoff and landing. The system is evaluated extensively with lab and field tests using a custom quadrotor. The autonomous landing is successfully demonstrated, through experimental tests, using the proposed algorithm. The results show that the system is able to provide precise pose estimation with a framerate of at least 30 fps and an image resolution of 640×480 pixels. The main advantage of the proposed approach is in the use of the GPU for image filtering and marker detection. The GPU provides an upper bound on the required computation time regardless of the complexity of the image, thereby allowing for robust marker detection even in cluttered environments.

13.
This paper is centered around landmark detection, tracking, and matching for visual simultaneous localization and mapping using a monocular vision system with active gaze control. We present a system that specializes in creating and maintaining a sparse set of landmarks based on a biologically motivated feature-selection strategy. A visual attention system detects salient features that are highly discriminative and ideal candidates for visual landmarks that are easy to redetect. Features are tracked over several frames to determine stable landmarks and to estimate their 3-D position in the environment. Matching of current landmarks to database entries enables loop closing. Active gaze control allows us to overcome some of the limitations of using a monocular vision system with a relatively small field of view. It supports 1) the tracking of landmarks that enable a better pose estimation, 2) the exploration of regions without landmarks to obtain a better distribution of landmarks in the environment, and 3) the active redetection of landmarks to enable loop closing in situations in which a fixed camera fails to close the loop. Several real-world experiments show that accurate pose estimation is obtained with the presented system and that active camera control outperforms the passive approach.

14.
In this paper, we address the problem of globally localizing and tracking the pose of a camera-equipped micro aerial vehicle (MAV) flying in urban streets at low altitudes without GPS. An image-based global positioning system is introduced to localize the MAV with respect to the surrounding buildings. We propose a novel air-ground image-matching algorithm to search the airborne image of the MAV within a ground-level, geotagged image database. Based on the detected matching image features, we infer the global position of the MAV by back-projecting the corresponding image points onto a cadastral three-dimensional city model. Furthermore, we describe an algorithm to track the position of the flying vehicle over several frames and to correct the accumulated drift of the visual odometry whenever a good match is detected between the airborne and the ground-level images. The proposed approach is tested on a 2 km trajectory with a small quadrocopter flying in the streets of Zurich. Our vision-based global localization can robustly handle extreme changes in viewpoint, illumination, perceptual aliasing, and over-season variations, thus outperforming conventional visual place-recognition approaches. The dataset is made publicly available to the research community. To the best of our knowledge, this is the first work that studies and demonstrates global localization and position tracking of a drone in urban streets with a single onboard camera.

15.
In this paper, we introduce a method to estimate the object’s pose from multiple cameras. We focus on direct estimation of the 3D object pose from 2D image sequences. Scale-Invariant Feature Transform (SIFT) is used to extract corresponding feature points from adjacent images in the video sequence. We first demonstrate that centralized pose estimation from the collection of corresponding feature points in the 2D images from all cameras can be obtained as a solution to a generalized Sylvester’s equation. We subsequently derive a distributed solution to pose estimation from multiple cameras and show that it is equivalent to the solution of the centralized pose estimation based on Sylvester’s equation. Specifically, we rely on collaboration among the multiple cameras to provide an iterative refinement of the independent solution to pose estimation obtained for each camera based on Sylvester’s equation. The proposed approach to pose estimation from multiple cameras relies on all of the information available from all cameras to obtain an estimate at each camera even when the image features are not visible to some of the cameras. The resulting pose estimation technique is therefore robust to occlusion and sensor errors from specific camera views. Moreover, the proposed approach does not require matching feature points among images from different camera views nor does it demand reconstruction of 3D points. Furthermore, the computational complexity of the proposed solution grows linearly with the number of cameras. Finally, computer simulation experiments demonstrate the accuracy and speed of our approach to pose estimation from multiple cameras.
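The paper's generalized Sylvester formulation is not given in the abstract, so as background only, here is how a standard Sylvester equation AX + XB = Q can be solved numerically by vectorisation. The matrices are arbitrary illustrative values, not the paper's pose quantities.

```python
import numpy as np

def solve_sylvester(A, B, Q):
    """Solve A X + X B = Q via the Kronecker identity
    vec(A X + X B) = (I kron A + B^T kron I) vec(X),
    where vec() stacks columns (Fortran order)."""
    n, m = Q.shape
    K = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
    x = np.linalg.solve(K, Q.reshape(-1, order="F"))
    return x.reshape(n, m, order="F")

A = np.array([[2.0, 0.0], [0.0, 3.0]])
B = np.array([[1.0, 1.0], [0.0, 1.0]])
Q = np.array([[1.0, 2.0], [3.0, 4.0]])
X = solve_sylvester(A, B, Q)   # satisfies A @ X + X @ B == Q
```

A unique solution exists whenever A and −B share no eigenvalue, which holds for these values.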

16.
This paper presents the design and development of autonomous attitude stabilization, navigation in unstructured, GPS-denied environments, aggressive landing on inclined surfaces, and aerial gripping using onboard sensors on a low-cost, custom-built quadrotor. The development of a multi-functional micro air vehicle (MAV) that utilizes inexpensive off-the-shelf components presents multiple challenges due to noise and sensor accuracy, and there are control challenges involved with achieving various capabilities beyond navigation. This paper addresses these issues by developing a complete system from the ground up, addressing the attitude stabilization problem using extensive filtering and an attitude estimation filter recently developed in the literature. Navigation in both indoor and outdoor environments is achieved using a visual Simultaneous Localization and Mapping (SLAM) algorithm that relies on an onboard monocular camera. The system utilizes nested controllers for attitude stabilization, vision-based navigation, and guidance, with the navigation controller implemented using a nonlinear controller based on the sigmoid function. The efficacy of the approach is demonstrated by maintaining a stable hover even in the presence of wind gusts and when manually hitting and pulling on the quadrotor. Precision landing on inclined surfaces is demonstrated as an example of an aggressive maneuver, and is performed using only onboard sensing. Aerial gripping is accomplished with the addition of a secondary camera, capable of detecting infrared light sources, which is used to estimate the 3D location of an object, while an under-actuated and passively compliant manipulator is designed for effective gripping under uncertainty. The quadrotor is therefore able to autonomously navigate inside and outside, in the presence of disturbances, and perform tasks such as aggressively landing on inclined surfaces and locating and grasping an object, using only inexpensive, onboard sensors.

17.
In the autonomous unmanned helicopter landing problem, the position of the unmanned helicopter relative to the landmark is very important. A camera carried on the unmanned helicopter can capture an image of the landmark. In earlier research, it was reported that the camera position could be estimated from features extracted from the landmark image. However, the landmark image must be complete, or have only slight deficiencies, for this estimation process to be possible. In this article, we report an innovative design for estimating the camera position from an incomplete single image of the landmark. An adaptive neuro-fuzzy inference system (ANFIS) is used to construct the mapping relation between the features of complete and incomplete landmark images. We verify that the proposed method can estimate the camera position from a landmark image more than half of which is defective.

18.
In this work a method is presented to track and estimate the pose of articulated objects using the motion of a sparse set of moving features. This is achieved by using a bottom-up generative approach based on the Pictorial Structures representation [1]. However, unlike previous approaches that rely on appearance, our method is entirely dependent on motion. Initial low-level part detection is based on how a region moves as opposed to its appearance. This work is best described as Pictorial Structures using motion. A standard feature tracker is used to automatically extract a sparse set of features. These features typically contain many tracking errors; however, the presented approach is able to overcome both this and their sparsity. The proposed method is applied to two problems: 2D pose estimation of articulated objects walking side-on to the camera and 3D pose estimation of humans walking and jogging at arbitrary orientations to the camera. In each domain, quantitative results are reported that improve on the state of the art. The motivation of this work is to illustrate the information present in low-level motion that can be exploited for the task of pose estimation.

19.
In this paper, we compare three different marker based approaches for six degrees of freedom (6DOF) pose estimation, which can be used for position and attitude control of micro aerial vehicles (MAV). All methods are able to achieve real time pose estimation onboard without assistance of any external metric sensor. Since these methods can be used in various working environments, we compare their performance by carrying out experiments across two different platforms: an AscTec Hummingbird and a Pixhawk quadrocopter. We evaluate each method’s accuracy by using an external tracking system and compare the methods with respect to their operating ranges and processing time. We also compare each method’s performance during autonomous takeoff, hovering and landing of a quadrocopter. Finally we show how the methods perform in an outdoor environment. The paper is an extended version of the one with the same title published at the ICUAS Conference 2013.

20.
We introduce a multi-target tracking algorithm that operates on prerecorded video as typically found in post-incident surveillance camera investigation. Apart from being robust to visual challenges such as occlusion and variation in camera view, our algorithm is also robust to temporal challenges, in particular unknown variation in frame rate. The complication with variation in frame rate is that it invalidates motion estimation. As such, tracking algorithms based on motion models will show decreased performance. On the other hand, appearance based detection in individual frames suffers from a plethora of false detections. Our tracking algorithm, albeit relying on appearance based detection, deals robustly with the caveats of both approaches. The solution rests on the fact that for prerecorded video we can make fully informed choices; not only based on preceding, but also based on following frames. We start off from an appearance based object detection algorithm able to detect in each frame all target objects. From this we build a graph structure. The detections form the graph’s nodes, and the edges are formed by connecting each detection in a frame to all detections in the following frame. Thus, each path through the graph represents some particular selection of successive detections. Tracking is then reformulated as a heuristic search for optimal paths, where optimal means finding all detections belonging to a single object and excluding any other detection. We show that this approach, without an explicit motion model, is robust to both the visual and temporal challenges.
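The detection-graph idea above can be sketched with a small dynamic program: one detection per frame is chosen so that the summed frame-to-frame transition cost is minimal. This Viterbi-style search stands in for the paper's heuristic path search, and the scalar "appearance descriptors" below are purely illustrative.

```python
def best_track(frames, cost):
    """Select one detection per frame minimising the summed transition
    cost along the detection graph (dynamic programming)."""
    # frames: list of per-frame detection lists; best holds, for each
    # detection in the current frame, the cheapest path ending there.
    best = [(0.0, [d]) for d in frames[0]]
    for frame in frames[1:]:
        best = [min(((c + cost(path[-1], d), path + [d]) for c, path in best),
                    key=lambda t: t[0]) for d in frame]
    return min(best, key=lambda t: t[0])[1]

# Three frames; 5.0 and 9.x are distractor detections of other objects.
frames = [[1.0, 9.0], [1.2, 8.8, 5.0], [0.9, 9.2]]
track = best_track(frames, cost=lambda a, b: abs(a - b))
# picks the detections most consistent in appearance: [1.0, 1.2, 0.9]
```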
