Similar Literature
20 similar documents retrieved (search time: 31 ms)
1.
We present an open‐source system for Micro‐Aerial Vehicle (MAV) autonomous navigation from vision‐based sensing. Our system focuses on dense mapping, safe local planning, and global trajectory generation, especially when using narrow field‐of‐view sensors in very cluttered environments. In addition, details about other necessary parts of the system and special considerations for applications in real‐world scenarios are presented. We focus our experiments on evaluating global planning, path smoothing, and local planning methods on real maps made on MAVs in realistic search‐and‐rescue and industrial inspection scenarios. We also perform thousands of simulations in cluttered synthetic environments, and finally validate the complete system in real‐world experiments.
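The path-smoothing step mentioned above can be illustrated with a minimal shortcut-smoothing sketch on a 2D occupancy grid; the paper's system works in 3D with its own planners, and the grid, step size, and example map below are invented:

```python
# Hypothetical sketch of shortcut-based path smoothing on a 2D occupancy
# grid; not the authors' implementation.
import numpy as np

def line_is_free(grid, p, q, step=0.25):
    """Check collision by sampling points along the segment p-q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    n = max(2, int(np.linalg.norm(q - p) / step))
    for t in np.linspace(0.0, 1.0, n):
        r, c = np.round(p + t * (q - p)).astype(int)
        if grid[r, c]:          # occupied cell
            return False
    return True

def shortcut_smooth(grid, path):
    """Greedily replace waypoint chains with straight segments."""
    smoothed, i = [path[0]], 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not line_is_free(grid, path[i], path[j]):
            j -= 1
        smoothed.append(path[j])
        i = j
    return smoothed

grid = np.zeros((20, 20), dtype=bool)
grid[5:15, 10] = True                               # a wall segment
path = [(2, 2), (2, 8), (2, 14), (2, 18), (18, 18)]  # coarse planner output
print(shortcut_smooth(grid, path))                   # redundant waypoints removed
```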

2.
For marine industrial inspection, archaeology, and geological formation study, the ability to map unknown underwater enclosed and confined spaces is desirable and well suited for robotic vehicles. To date, there are few solutions thoroughly tested in the field designed to perform this specific task, none of which operate autonomously. With a small, low‐cost biomimetic platform known as the U‐CAT, we developed a mapping‐mission software architecture in which the vehicle executes three key sensor‐based reactive stages: entering, exploring, and exiting. Encapsulated in the exploring stage are several state‐defined navigation strategies, called patterns, which were designed and initially tested in simulation. The results of the simulation work informed the selection of two patterns that were executed in field trials at a submerged building in Rummu Quarry Lake, Estonia, as part of several full mapping missions. Over the course of these trials, the vehicle was capable of observing the majority (78–97%) of 49.9 explorable square meters within 7 minutes. Based on these results, we demonstrate the capability of a low‐cost and resource‐constrained vehicle to perform confined‐space mapping under sensor uncertainty. Further, the observations made by the vehicle are shown to be suitable for target site reconstruction and analysis in postprocessing, which is the intended outcome of this type of mission in practical applications.
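The three reactive stages lend themselves to a small state machine. The sketch below is a hypothetical reconstruction of that structure, assuming a simple time budget for the exploring stage (the 420 s echoes the 7-minute missions above; the sensor flag and transition rules are invented, not the U-CAT code):

```python
# Hypothetical sketch of the entering -> exploring -> exiting mission stages.
from enum import Enum, auto

class Stage(Enum):
    ENTERING = auto()
    EXPLORING = auto()
    EXITING = auto()
    DONE = auto()

def step(stage, sensors, elapsed_s, budget_s=420.0):
    """Advance the mission state machine by one control tick."""
    if stage is Stage.ENTERING and sensors["inside_structure"]:
        return Stage.EXPLORING
    if stage is Stage.EXPLORING and elapsed_s >= budget_s:
        return Stage.EXITING          # time budget spent, head out
    if stage is Stage.EXITING and not sensors["inside_structure"]:
        return Stage.DONE
    return stage

stage = Stage.ENTERING
for t, inside in [(0, False), (30, True), (200, True), (430, True), (460, False)]:
    stage = step(stage, {"inside_structure": inside}, t)
    print(t, stage.name)
```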

3.

In-water visual ship hull inspection using unmanned underwater vehicles needs to be performed at very close range to the target surface because of the visibility limitations in underwater environments mainly due to light attenuation, scattering, and water turbidity. These environmental challenges result in ineffective photometric and geometric information in hull surface images and, therefore, the performance of conventional three-dimensional (3D) reconstruction techniques is often unsatisfactory. This paper addresses a visual mapping method for 3D reconstruction of underwater ship hull surface using a monocular camera as a primary mapping sensor. The main idea of the proposed approach is to model the moderately curved hull surface as a combination of piecewise-planar panels, and to generate a global map by aligning the local images in a two-dimensional reference frame and correcting them appropriately to reflect the information of perspective projections of the 3D panels. The estimated 3D panels associated with the local images are used to extract the loop-closure relative measurements in the framework of simultaneous localization and mapping (SLAM) for precise camera trajectory estimation and 3D reconstruction results. The validity and practical feasibility of the proposed method are demonstrated using a dataset obtained in a field experiment with a full-scale ship in a real sea environment.
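The piecewise-planar panel assumption is what makes 2D image alignment work: points on a planar panel seen from two views are related by a 3x3 homography. Below is a minimal pure-numpy DLT sketch of that relation, with made-up point correspondences; it is not the paper's pipeline:

```python
# Minimal DLT homography sketch: a locally planar hull panel lets two image
# patches be related by a 3x3 homography. Pure numpy; panel fitting and SLAM
# loop closures from the paper are not reproduced.
import numpy as np

def homography_dlt(src, dst):
    """Estimate H with dst ~ H @ src from >= 4 point pairs (Nx2 arrays)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
dst = np.array([[10, 12], [52, 14], [50, 55], [8, 51]], float)  # made-up matches
H = homography_dlt(src, dst)
p = H @ np.array([1.0, 1.0, 1.0])
print(p[:2] / p[2])        # ~ (50, 55): maps a panel corner between views
```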


4.
This study addresses the development of algorithms for multiple target detection and tracking in the framework of sensor fusion and its application to autonomous navigation and collision avoidance systems for the unmanned surface vehicle (USV) Aragon. To provide autonomous navigation capabilities, various perception sensors such as radar, lidar, and cameras have been mounted on the USV platform, and automatic ship detection algorithms are applied to the sensor measurements. The relative position information between the USV and nearby objects is obtained to estimate the motion of the target objects in a sensor‐level tracking filter. The estimated motion information from the individual tracking filters is then combined in a central‐level fusion tracker to achieve persistent and reliable target tracking performance. For automatic ship collision avoidance, the combined track data are used as obstacle information, and appropriate collision avoidance maneuvers are designed and executed in accordance with the International Regulations for Preventing Collisions at Sea (COLREGs). In this paper, the development processes of the vehicle platform and the autonomous navigation algorithms are described, and the results of field experiments are presented and discussed.
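The central-level fusion step can be sketched as information-weighted track-to-track fusion of two sensor-level Gaussian estimates. The numbers below are illustrative, and the actual fusion tracker is more elaborate (it must also handle association and correlated errors):

```python
# Hedged sketch of central-level track fusion: combine two independent
# sensor-level position estimates by inverse-covariance weighting.
import numpy as np

def fuse_tracks(x1, P1, x2, P2):
    """Information-weighted fusion of two independent Gaussian tracks."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)
    x = P @ (I1 @ x1 + I2 @ x2)
    return x, P

# Radar track: coarse; lidar track: tighter covariance (illustrative numbers).
x_radar, P_radar = np.array([105.0, 42.0]), np.diag([25.0, 25.0])
x_lidar, P_lidar = np.array([101.0, 40.5]), np.diag([4.0, 4.0])
x, P = fuse_tracks(x_radar, P_radar, x_lidar, P_lidar)
print(x)   # pulled strongly toward the lidar estimate
print(P)   # fused covariance smaller than either input
```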

5.
Visual servoing approaches navigate a robot to the desired pose with respect to a given object using image measurements. As a result, these approaches have several applications in manipulation, navigation and inspection. However, existing visual servoing approaches are instance specific, that is, they control camera motion between two views of the same object. In this paper, we present a framework for visual servoing to a novel object instance. We further employ our framework for the autonomous inspection of vehicles using Micro Aerial Vehicles (MAVs), which is vital for day‐to‐day maintenance, damage assessment, and merchandising a vehicle. This visual inspection task comprises the MAV visiting the essential parts of the vehicle, for example, wheels, lights, and so forth, to get a closer look at any damage incurred. Existing methods for autonomous inspection could not be extended to vehicles for the following reasons: First, several existing methods require a 3D model of the structure, which is not available for every vehicle. Second, existing methods require an expensive depth sensor for localization and path planning. Third, current approaches do not account for the semantic understanding of the vehicle, which is essential for identifying parts. Our instance‐invariant visual servoing framework is capable of autonomously navigating to every essential part of a vehicle for inspection and can be initialized from any random pose. To the best of our knowledge, this is the first approach demonstrating fully autonomous visual inspection of vehicles using MAVs. We have validated the efficacy of our approach through a series of experiments in simulation and outdoor scenarios.
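For reference, the classical image-based visual servoing (IBVS) control law that underlies such approaches drives the camera with v = -lambda * pinv(L) * (s - s*). The sketch below implements the textbook point-feature version, not the paper's instance-invariant framework; gains, depths, and feature positions are invented:

```python
# Textbook IBVS sketch for point features in normalized image coordinates.
import numpy as np

def interaction_matrix(pts, Z):
    """Stack the 2x6 interaction matrix of each normalized image point."""
    rows = []
    for (x, y), z in zip(pts, Z):
        rows.append([-1 / z, 0, x / z, x * y, -(1 + x * x), y])
        rows.append([0, -1 / z, y / z, 1 + y * y, -x * y, -x])
    return np.asarray(rows)

def ibvs_velocity(s, s_star, Z, lam=0.5):
    L = interaction_matrix(s, Z)
    e = (np.asarray(s) - np.asarray(s_star)).ravel()
    return -lam * np.linalg.pinv(L) @ e        # 6-vector (v, omega)

s      = [(0.10, 0.05), (-0.12, 0.07), (0.08, -0.11), (-0.09, -0.06)]
s_star = [(0.05, 0.05), (-0.05, 0.05), (0.05, -0.05), (-0.05, -0.05)]
print(ibvs_velocity(s, s_star, Z=[2.0] * 4))   # camera twist command
```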

6.
This paper addresses the problem of autonomous navigation of a micro air vehicle (MAV) in GPS‐denied environments. We present experimental validation and analysis for our system that enables a quadrotor helicopter, equipped with a laser range finder sensor, to autonomously explore and map unstructured and unknown environments. The key challenge for enabling GPS‐denied flight of a MAV is that the system must be able to estimate its position and velocity by sensing unknown environmental structure with sufficient accuracy and low enough latency to stably control the vehicle. Our solution overcomes this challenge in the face of MAV payload limitations imposed on sensing, computational, and communication resources. We first analyze the requirements to achieve fully autonomous quadrotor helicopter flight in GPS‐denied areas, highlighting the differences between ground and air robots that make it difficult to use algorithms developed for ground robots. We report on experiments that validate our solutions to key challenges, namely a multilevel sensing and control hierarchy that incorporates a high‐speed laser scan‐matching algorithm, data fusion filter, high‐level simultaneous localization and mapping, and a goal‐directed exploration module. These experiments illustrate the quadrotor helicopter's ability to accurately and autonomously navigate in a number of large‐scale unknown environments, both indoors and in the urban canyon. The system was further validated in the field by our winning entry in the 2009 International Aerial Robotics Competition, which required the quadrotor to autonomously enter a hazardous unknown environment through a window, explore the indoor structure without GPS, and search for a visual target. © 2011 Wiley Periodicals, Inc.
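The laser scan matching in the sensing hierarchy can be illustrated with a brute-force correlative matcher: search over candidate offsets and headings, scoring each by how many scan points land near the reference scan. This toy version (thresholds and search ranges invented) is far slower than any real-time implementation:

```python
# Hypothetical mini scan matcher; a loose stand-in for high-speed methods.
import numpy as np

def match_scan(ref, scan, search=0.5, step=0.1,
               angles=np.radians(range(-10, 11, 2))):
    ref, scan = np.asarray(ref), np.asarray(scan)
    best, best_score = (0.0, 0.0, 0.0), -1
    for th in angles:
        c, s = np.cos(th), np.sin(th)
        rot = scan @ np.array([[c, -s], [s, c]]).T
        for dx in np.arange(-search, search + 1e-9, step):
            for dy in np.arange(-search, search + 1e-9, step):
                moved = rot + (dx, dy)
                d = np.linalg.norm(moved[:, None, :] - ref[None, :, :], axis=2)
                score = (d.min(axis=1) < 0.15).sum()   # inlier count
                if score > best_score:
                    best_score, best = score, (dx, dy, th)
    return best

ref = np.array([[x, 2.0] for x in np.linspace(0, 3, 16)])   # a wall
scan = ref - (0.3, 0.1)                                     # shifted view
print(match_scan(ref, scan))   # recovers roughly (0.3, 0.1, 0.0)
```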

7.
This paper presents a vision‐based localization and mapping algorithm developed for an unmanned aerial vehicle (UAV) that can operate in a riverine environment. Our algorithm estimates the three‐dimensional positions of point features along a river and the pose of the UAV. By detecting features surrounding a river and the corresponding reflections on the water's surface, we can exploit multiple‐view geometry to enhance the observability of the estimation system. We use a robot‐centric mapping framework to further improve the observability of the estimation system while reducing the computational burden. We analyze the performance of the proposed algorithm with numerical simulations and demonstrate its effectiveness through experiments with data from Crystal Lake Park in Urbana, Illinois. We also draw a comparison to existing approaches. Our experimental platform is equipped with a lightweight monocular camera, an inertial measurement unit, a magnetometer, an altimeter, and an onboard computer. To our knowledge, this is the first result that exploits the reflections of features in a riverine environment for localization and mapping.

8.
This paper extends the progress of single beacon one‐way‐travel‐time (OWTT) range measurements for constraining XY position for autonomous underwater vehicles (AUVs). Traditional navigation algorithms have used OWTT measurements to constrain an inertial navigation system aided by a Doppler Velocity Log (DVL). These methodologies limit AUV applications to where DVL bottom‐lock is available, and they require expensive strap‐down sensors such as the DVL. Thus, deep‐water and mid‐water‐column research has mostly been left untouched, and the need for expensive strap‐down sensors restricts the possibility of using multiple AUVs to explore a given area. This work presents a solution for accurate navigation and localization using a vehicle's odometry determined by its dynamic‐model velocity and constrained by OWTT range measurements from a topside source beacon as well as from other AUVs operating in proximity. We present a comparison of two navigation algorithms: an Extended Kalman Filter (EKF) and a Particle Filter (PF). Both of these algorithms also incorporate a water‐velocity bias estimator that further enhances the navigation accuracy and localization. Closed‐loop online field results in local waters, as well as a real‐time implementation over two days of field trials in Monterey Bay, California, during the Keck Institute for Space Studies oceanographic research project, demonstrate the accuracy of this methodology, with a root mean square error on the order of tens of meters relative to GPS position over distances traveled of multiple kilometers.
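A single OWTT range update in an EKF looks roughly like the sketch below: the measured range (sound speed times travel time) constrains XY position through the linearized range Jacobian. State layout, noise values, and geometry are illustrative; the paper's filters additionally estimate a water-velocity bias:

```python
# Illustrative EKF update from one OWTT range measurement to a beacon.
import numpy as np

def ekf_range_update(x, P, beacon_xy, z_range, sigma_r=5.0):
    """x = [px, py, vx, vy]; z_range = sound speed * one-way travel time."""
    dx, dy = x[0] - beacon_xy[0], x[1] - beacon_xy[1]
    r_hat = np.hypot(dx, dy)
    H = np.array([[dx / r_hat, dy / r_hat, 0.0, 0.0]])   # d(range)/d(state)
    S = H @ P @ H.T + sigma_r**2                          # innovation variance
    K = P @ H.T / S                                       # Kalman gain
    x = x + (K * (z_range - r_hat)).ravel()
    P = (np.eye(4) - K @ H) @ P
    return x, P

x = np.array([1000.0, -500.0, 1.0, 0.2])
P = np.diag([400.0, 400.0, 1.0, 1.0])
x, P = ekf_range_update(x, P, beacon_xy=(0.0, 0.0), z_range=1150.0)
print(x[:2], np.diag(P)[:2])   # position pulled toward the measured range
```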

9.
We present a system for automatically building three‐dimensional (3‐D) maps of underwater terrain fusing visual data from a single camera with range data from multibeam sonar. The six‐degree‐of‐freedom location of the camera relative to the navigation frame is derived as part of the mapping process, as are the attitude offsets of the multibeam head and the onboard velocity sensor. The system uses pose graph optimization and the square root information smoothing and mapping framework to simultaneously solve for the robot's trajectory, the map, and the camera location in the robot's frame. Matched visual features are treated within the pose graph as images of 3‐D landmarks, while multibeam bathymetry submap matches are used to impose relative pose constraints linking robot poses from distinct tracklines of the dive trajectory. The navigation and mapping system presented works under a variety of deployment scenarios on robots with diverse sensor suites. The results of using the system to map the structure and the appearance of a section of coral reef are presented using data acquired by the Seabed autonomous underwater vehicle.
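Pose graph optimization reduces, at its core, to least squares over relative-pose constraints. The toy 1D example below shows how a loop-closure constraint corrects drifted odometry; real systems such as the square root smoothing and mapping framework used here solve large sparse nonlinear versions:

```python
# Toy pose graph: three 1D poses, two odometry constraints, one loop closure,
# solved as linear least squares. Numbers are invented.
import numpy as np

# Constraints: x1 - x0 = 1.0, x2 - x1 = 1.0 (odometry), x2 - x0 = 1.8 (loop).
# Fix the gauge with x0 = 0 and solve for x1, x2.
A = np.array([[ 1.0, 0.0],    # x1 - x0
              [-1.0, 1.0],    # x2 - x1
              [ 0.0, 1.0]])   # x2 - x0
b = np.array([1.0, 1.0, 1.8])
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)   # the loop closure pulls x2 below the pure-odometry value 2.0
```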

10.
A key challenge in autonomous mobile manipulation is the ability to determine, in real time, how to safely execute complex tasks when placed in an unknown or changing world. Addressing this issue for Intervention Autonomous Underwater Vehicles (I‐AUVs), operating in potentially unstructured environments, is becoming essential. Our research focuses on using motion planning to increase the autonomy of I‐AUVs, and on addressing three major challenges: (a) producing consistent deterministic trajectories, (b) addressing the high dimensionality of the system and its impact on real‐time response, and (c) coordinating the motion between the floating vehicle and the arm. The latter challenge is of high importance for achieving the accuracy required for manipulation, especially considering the floating nature of the AUV and the control challenges that come with it. In this study, for the first time, we demonstrate experimental results performing manipulation in an unknown environment. The Multirepresentation, Multiheuristic A* (MR‐MHA*) search‐based planner, previously tested only in simulation and in an a priori known environment, is now extended to control the Girona500 I‐AUV performing a valve‐turning intervention in a water tank. To this aim, the AUV was upgraded with an in‐house‐developed laser scanner to gather three‐dimensional (3D) point clouds for building, in real time, an occupancy grid map (octomap) of the environment. The MR‐MHA* motion planner used this octomap to plan, in real time, collision‐free trajectories. To achieve the accuracy required to complete the task, a vision‐based navigation method was employed. In addition, to reinforce safety while accounting for localization uncertainty, a cost function was introduced to maintain a minimum clearance in the planning. Moreover, a visual‐servoing method was implemented to complete the last step of the manipulation with the desired accuracy. Lastly, we further analyzed the performance of the approach from both loose‐coupling and clearance perspectives. Our results show the success and efficiency of the approach in achieving the desired behavior, as well as its ability to adapt to unknown environments.
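The clearance cost can be sketched on a 2D grid: penalize cells within a safety distance of obstacles so the planner prefers paths with margin. Plain Dijkstra stands in for MR-MHA* below, and all parameters are invented:

```python
# Sketch of clearance-aware grid planning; a stand-in for MR-MHA* on octomaps.
import heapq
import numpy as np
from scipy.ndimage import distance_transform_edt

def plan_with_clearance(occ, start, goal, d_safe=3.0, w=5.0):
    dist = distance_transform_edt(~occ)              # distance to obstacles
    penalty = w * np.maximum(0.0, d_safe - dist)     # extra cost near walls
    pq, seen = [(0.0, start, None)], {}
    while pq:
        cost, cell, parent = heapq.heappop(pq)
        if cell in seen:
            continue
        seen[cell] = parent
        if cell == goal:
            break
        r, col = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, col + dc
            if 0 <= nr < occ.shape[0] and 0 <= nc < occ.shape[1] and not occ[nr, nc]:
                heapq.heappush(pq, (cost + 1.0 + penalty[nr, nc], (nr, nc), cell))
    if goal not in seen:
        return None
    path, cell = [], goal
    while cell is not None:
        path.append(cell)
        cell = seen[cell]
    return path[::-1]

occ = np.zeros((12, 12), dtype=bool)
occ[0:9, 6] = True                       # wall with a gap near the bottom
print(plan_with_clearance(occ, (0, 0), (0, 11)))   # detours with margin
```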

11.
This paper describes camera position control with an aerial manipulator for the visual testing of bridges during inspection. Our developed unmanned aerial vehicle (UAV) has a three‐degree‐of‐freedom (3‐DoF) manipulator on its top to execute the visual or hammering tests of an inspection. This paper focuses on the visual test. A camera is mounted at the end of the manipulator to acquire images of narrow spaces of the bridge, such as bearings, which a conventional UAV without such a camera‐equipped manipulator cannot inspect adequately. For the visual test, it is desirable that the camera be above the body with sufficient distance between the camera and the body. Naturally, the camera position in the inertial coordinate system is affected by the movement of the body. Therefore, we implement on the UAV a camera position control that compensates for the body movement. Experimental results show that the proposed control reduces the position error of the camera compared with that of the body: the mean position error of the camera is 0.039 m, which is 51.4% of that of the body. To the best of our knowledge, this study is the first to acquire images of a bridge bearing with a camera mounted at the end effector of an aerial manipulator fixed on a UAV.
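The compensation idea is geometric: command the arm so that body pose changes cancel out at the camera. A 2D (horizontal-plane) toy version, with an invented arm interface, is sketched below:

```python
# Hedged sketch of camera-position compensation: hold an inertial-frame
# camera setpoint while the UAV body drifts. 2D toy model, invented arm API.
import numpy as np

def arm_command(cam_setpoint, body_pos, body_yaw):
    """Arm-tip offset, in the body frame, that keeps the camera at
    cam_setpoint despite body motion."""
    c, s = np.cos(body_yaw), np.sin(body_yaw)
    R_wb = np.array([[c, -s], [s, c]])           # body -> world rotation
    return R_wb.T @ (np.asarray(cam_setpoint) - np.asarray(body_pos))

setpoint = (1.0, 0.5)                            # desired camera position
for pos, yaw in [((0.0, 0.0), 0.0), ((0.05, -0.03), 0.1)]:   # body drift
    print(arm_command(setpoint, pos, yaw))       # compensating arm offsets
```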

12.
A new approach to autonomous land vehicle (ALV) navigation by person following is proposed. This approach is based on sequential pattern recognition and computer vision techniques, and maintaining smooth indoor navigation is the main goal. The ALV is guided automatically to follow a person who walks in front of the vehicle. The vehicle can be used as an autonomous handcart, go‐cart, buffet car, golf cart, weeder, etc., in various applications. Sequential pattern recognition is used to design a classifier for deciding whether the person in front of the vehicle is walking straight ahead or is too far to the right or left of the vehicle. Multiple images in a sequence are used as input to the system. Computer vision techniques are used to detect and locate the person in front of the vehicle. By sequential pattern recognition, the relation between the location of the person and that of the vehicle is categorized into three classes. Corresponding adjustments of the direction of the vehicle are computed to achieve smooth navigation. The approach is implemented on a real ALV, and successful, smooth navigation sessions confirm its feasibility. ©1999 John Wiley & Sons, Inc.
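The three-class decision can be sketched as a simple thresholded classifier on the person's image position; in the paper this decision is made sequentially over multiple frames, and the thresholds and gains below are invented:

```python
# Sketch of the three-class person-following decision: classify the person's
# image position as too_left / straight / too_right and emit a steering
# correction (positive = turn left). Thresholds and gains are invented.
def follow_decision(person_x, image_width, band=0.15):
    """Normalized offset in [-1, 1]; negative means person is to the left."""
    offset = (person_x - image_width / 2) / (image_width / 2)
    if offset < -band:
        return "too_left", 0.2 * -offset      # turn left toward the person
    if offset > band:
        return "too_right", -0.2 * offset     # turn right toward the person
    return "straight", 0.0

for x in (80, 320, 560):
    print(x, follow_decision(x, image_width=640))
```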

13.
To participate in the Outback Medical Express UAV Challenge 2016, a vehicle was designed and tested that can autonomously hover precisely, take off and land vertically, fly fast forward efficiently, and use computer vision to locate a person and a suitable landing location. The vehicle is a novel hybrid tail‐sitter combining a delta‐shaped biplane fixed wing and a conventional helicopter rotor. The rotor and wing are mounted perpendicularly to each other, and the entire vehicle pitches down to transition from hover to fast forward flight, where the rotor serves as propulsion. To deliver sufficient thrust in hover while still being efficient in fast forward flight, a custom rotor system was designed. The theoretical design was validated with energy measurements, wind tunnel tests, and application in real‐world missions. A rotor head and a corresponding control algorithm were developed to allow transitioning flight with the nonconventional rotor dynamics caused by the fuselage‐rotor interaction. Dedicated electronics were designed that meet the vehicle's needs and comply with regulations to allow safe flight beyond visual line of sight. Vision‐based search and guidance algorithms running on a stereo‐vision fish‐eye camera were developed and tested to locate a person in cluttered terrain never seen before. Flight tests and competition participation illustrate the applicability of the DelftaCopter concept.
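The hover-versus-cruise rotor trade can be sanity-checked with ideal momentum theory, P = T^1.5 / sqrt(2*rho*A): for fixed thrust, required hover power falls with disk area. The mass and diameters below are invented, not DelftaCopter's figures:

```python
# Momentum-theory sketch of ideal hover power versus rotor diameter.
import math

def ideal_hover_power(mass_kg, rotor_diameter_m, rho=1.225, g=9.81):
    T = mass_kg * g                              # hover thrust = weight
    A = math.pi * (rotor_diameter_m / 2) ** 2    # rotor disk area
    return T ** 1.5 / math.sqrt(2 * rho * A)

for d in (0.3, 0.6, 0.9):                        # hypothetical diameters
    print(d, round(ideal_hover_power(4.0, d), 1), "W")
```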

14.
While artificial vision prostheses are quickly becoming a reality, actual testing time with visual prosthesis carriers is at a premium. Moreover, it is helpful to have a more realistic functional approximation of a blind subject. Instead of a normal subject with a healthy retina looking at a low-resolution (pixelated) image on a computer monitor or head-mounted display, a more realistic approximation is achieved by employing a subject-independent mobile robotic platform that uses a pixelated view as its sole visual input for navigation purposes. We introduce CYCLOPS: an all-wheel-drive, remotely controllable, mobile robotic platform that serves as a testbed for real-time image processing and autonomous navigation systems for the purpose of enhancing the visual experience afforded by visual prosthesis carriers. Complete with wireless Internet connectivity and a fully articulated digital camera with a wireless video link, CYCLOPS supports both interactive tele-commanding via joystick and autonomous self-commanding. Due to its onboard computing capabilities and extended battery life, CYCLOPS can perform complex and numerically intensive calculations, such as image processing and autonomous navigation algorithms, in addition to interfacing with additional sensors. Its Internet connectivity renders CYCLOPS a worldwide-accessible testbed for researchers in the field of artificial vision systems. CYCLOPS enables subject-independent evaluation and validation of image processing and autonomous navigation systems with respect to their utility and efficiency in supporting and enhancing visual prostheses, while potentially reducing the need for valuable testing time with actual visual prosthesis carriers to a necessary minimum.
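The pixelated view itself is easy to reproduce: block-average a camera frame down to a prosthesis-like resolution. A minimal numpy sketch, with an invented target resolution, follows:

```python
# Sketch of the "pixelated view" input: block-average a grayscale frame down
# to a low, prosthesis-like resolution. Parameters are illustrative.
import numpy as np

def pixelate(img, out_h=32, out_w=32):
    h, w = img.shape[:2]
    bh, bw = h // out_h, w // out_w
    img = img[:bh * out_h, :bw * out_w]          # crop to a block multiple
    return img.reshape(out_h, bh, out_w, bw).mean(axis=(1, 3))

frame = np.random.rand(480, 640)                 # stand-in camera frame
low_res = pixelate(frame)
print(low_res.shape)                             # (32, 32)
```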

15.
A scalar magnetometer payload has been developed and integrated into a two‐man portable autonomous underwater vehicle (AUV) for geophysical and archeological surveys. The compact system collects data from a Geometrics microfabricated atomic magnetometer, a total‐field atomic magnetometer. Data from the sensor is both stored for post‐processing and made available to an onboard autonomy engine for real‐time sense and react behaviors. This system has been characterized both in controlled laboratory conditions and at sea to determine its performance limits. Methodologies for processing the magnetometer data to correct for interference and error introduced by the AUV platform were developed to improve sensing performance. When conducting seabed surveys, detection and characterization of targets of interest are performed in real‐time aboard the AUV. This system is used to drive both single‐ and multiple‐vehicle autonomous target reacquisition behaviors. The combination of on‐board target detection and autonomous reacquire capability is found to increase the effective survey coverage rate of the AUV‐based magnetic sensing system.
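Real-time target detection on a total-field signal can be sketched as background subtraction plus thresholding; the running-median window, threshold, and injected anomaly below are all invented, not the payload's actual processing:

```python
# Hypothetical sketch of scalar-magnetometer target detection: subtract a
# running-median background and flag excursions above a threshold.
import numpy as np

def detect_anomalies(field_nT, window=101, thresh_nT=5.0):
    pad = window // 2
    padded = np.pad(field_nT, pad, mode="edge")
    background = np.array([np.median(padded[i:i + window])
                           for i in range(len(field_nT))])
    residual = field_nT - background
    return np.flatnonzero(np.abs(residual) > thresh_nT), residual

t = np.linspace(0, 100, 1000)
field = 48000 + 0.5 * t + np.random.normal(0, 0.5, t.size)   # ambient + drift
field[470:530] += 12.0 * np.exp(-0.5 * ((t[470:530] - 50) / 0.5) ** 2)
hits, _ = detect_anomalies(field)
print(hits.min(), hits.max())    # indices bracketing the injected target
```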

16.
This paper presents the development of feature-following control and distributed navigation algorithms for visual surveillance using a small unmanned aerial vehicle equipped with a low-cost imaging sensor unit. An efficient map-based feature generation and following control algorithm is developed to make an onboard imaging sensor track a target. An efficient navigation system is also designed for real-time position and velocity estimates of the unmanned aircraft, which are used as inputs to the path-following controller. The performance of the proposed autonomous path-following capability with a stabilized gimbaled camera onboard a small unmanned aerial robot is demonstrated through flight tests with application to target tracking for real-time visual surveillance.

17.
Distributed as an open‐source library since 2013, real‐time appearance‐based mapping (RTAB‐Map) started as an appearance‐based loop closure detection approach with memory management to deal with large‐scale and long‐term online operation. It then grew to implement simultaneous localization and mapping (SLAM) on various robots and mobile platforms. As each application brings its own set of constraints on sensors, processing capabilities, and locomotion, it raises the question of which SLAM approach is the most appropriate to use in terms of cost, accuracy, computation power, and ease of integration. Since most SLAM approaches are either visual‐ or lidar‐based, comparison is difficult. Therefore, we decided to extend RTAB‐Map to support both visual and lidar SLAM, providing in one package a tool allowing users to implement and compare a variety of 3D and 2D solutions for a wide range of applications with different robots and sensors. This paper presents this extended version of RTAB‐Map and its use in comparing, both quantitatively and qualitatively, a large selection of popular real‐world datasets (e.g., KITTI, EuRoC, TUM RGB‐D, MIT Stata Center on the PR2 robot), outlining the strengths and limitations of visual and lidar SLAM configurations from a practical perspective for autonomous navigation applications.

18.
The challenge for unmanned aerial vehicles to sense and avoid obstacles becomes even harder if narrow passages have to be crossed. An approach to a mission scenario that tackles the problem of such narrow passages is presented here. The task is to fly an unmanned helicopter autonomously through a course with gates that are only slightly larger than the vehicle itself. A camera is installed on the vehicle to detect the gates. Using vehicle localization data from a navigation solution, the camera alignment and global gate positions are estimated simultaneously. The presented algorithm calculates the desired target waypoints to fly through the gates. Furthermore, the paper presents a mission execution plan that instructs the vehicle to search for a gate, to fly through it after successful detection, and then to search for the next one. All algorithms are designed to run onboard the vehicle so that no interaction with the ground control station is necessary, making the vehicle completely autonomous. To develop and optimize the algorithms, and to prove the correctness and accuracy of vision-based gate detection under real operational conditions, gate positions were first searched for in images taken from manual helicopter flights. Afterwards, the integration of visual sensing and mission control was demonstrated. The paper presents results from fully autonomous flight in which the helicopter searches for and flies through a gate without operator action.
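Turning a detected gate into flyable waypoints can be sketched by offsetting the gate center along its normal on both sides, giving an approach point, the gate crossing, and an exit point. The offsets and example gate below are invented:

```python
# Sketch of gate-to-waypoint conversion: offset the gate center along its
# normal to get approach and exit points. Distances are illustrative.
import numpy as np

def gate_waypoints(center, normal, approach_m=4.0, exit_m=3.0):
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    c = np.asarray(center, float)
    return c - approach_m * n, c, c + exit_m * n   # approach, gate, exit

wps = gate_waypoints(center=(12.0, 3.0, 1.5), normal=(1.0, 0.2, 0.0))
for w in wps:
    print(np.round(w, 2))
```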

19.
An autonomous underwater navigation system was designed around the 586-engine microprocessor as its control core, using a model-aided autonomous navigation algorithm. The system can collect data from multiple external sensors in real time and solve the navigation computations online, while meeting the requirements of low cost, low power consumption, and long endurance. The complete system hardware platform was built, and the real-time navigation system was verified, to a given accuracy, through simple land-based vehicle tests, demonstrating the system's real-time performance and feasibility.

20.
Joint simultaneous localization and mapping (SLAM) constitutes the basis for cooperative action in multi‐robot teams. We designed a stereo vision‐based 6D SLAM system combining local and global methods to benefit from their particular advantages: (1) decoupled local reference filters on each robot for real‐time, long‐term stable state estimation required for stabilization, control, and fast obstacle avoidance; (2) online graph optimization with a novel graph topology and intra‐ as well as inter‐robot loop closures through an improved submap matching method to provide global multi‐robot pose and map estimates; (3) distribution of the processing of high‐frequency and high‐bandwidth measurements, enabling the exchange of aggregated and thus compacted map data. As a result, we gain robustness with respect to communication losses between robots. We evaluated our improved map matcher on simulated and real‐world datasets and present our full system in five real‐world multi‐robot experiments in areas of up to 3,000 m² (bounding box), including visual robot detections and submap matches as loop‐closure constraints. Further, we demonstrate its application to autonomous multi‐robot exploration in a challenging rough‐terrain environment at a Moon‐analogue site located on a volcano.

