Found 20 similar documents (search time: 0 ms)
1.
Learning Occupancy Grid Maps with Forward Sensor Models (total citations: 5; self-citations: 0; by others: 5)
Sebastian Thrun, Autonomous Robots, 2003, 15(2):111-127
This article describes a new algorithm for acquiring occupancy grid maps with mobile robots. Existing occupancy grid mapping algorithms decompose the high-dimensional mapping problem into a collection of one-dimensional problems, where the occupancy of each grid cell is estimated independently. This induces conflicts that may lead to inconsistent maps, even for noise-free sensors. This article shows how to solve the mapping problem in the original, high-dimensional space, thereby maintaining all dependencies between neighboring cells. As a result, maps generated by our approach are often more accurate than those generated using traditional techniques. Our approach relies on a statistical formulation of the mapping problem using forward models. It employs the expectation-maximization algorithm to search for maps that maximize the likelihood of the sensor measurements.
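A minimal sketch of the standard per-cell log-odds update that this article argues against; the probabilities and measurement sequence are illustrative, not part of Thrun's forward-model EM algorithm:

```python
import math

def logodds(p):
    return math.log(p / (1.0 - p))

# Illustrative inverse-sensor-model probabilities.
P_OCC, P_FREE, P_PRIOR = 0.7, 0.3, 0.5

def update_cell(l, hit):
    """Independent per-cell log-odds update: each measurement shifts the
    cell's log-odds, ignoring correlations with neighboring cells."""
    return l + logodds(P_OCC if hit else P_FREE) - logodds(P_PRIOR)

def occupancy(l):
    """Recover the occupancy probability from log-odds."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

l = 0.0                               # log-odds of the prior p = 0.5
for hit in [True, True, False, True]:
    l = update_cell(l, hit)
print(round(occupancy(l), 3))         # → 0.845
```

Because each cell is updated in isolation, two beams that disagree about a cell simply average out in log-odds space; the paper's forward-model formulation instead scores whole maps against the raw measurements.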
2.
Autonomous navigation of microaerial vehicles in environments that are simultaneously GPS‐denied and visually degraded, and especially in the dark, texture‐less and dust‐ or smoke‐filled settings, is rendered particularly hard. However, a potential solution arises if such aerial robots are equipped with long wave infrared thermal vision systems that are unaffected by darkness and can penetrate many types of obscurants. In response to this fact, this study proposes a keyframe‐based thermal–inertial odometry estimation framework tailored to the exact data and concepts of operation of thermal cameras. The front‐end component of the proposed solution utilizes full radiometric data to establish reliable correspondences between thermal images, as opposed to operating on rescaled data as previous efforts have presented. In parallel, taking advantage of a keyframe‐based optimization back‐end the proposed method is suitable for handling periods of data interruption which are commonly present in thermal cameras, while it also ensures the joint optimization of reprojection errors of 3D landmarks and inertial measurement errors. The developed framework was verified with respect to its resilience, performance, and ability to enable autonomous navigation in an extensive set of experimental studies including multiple field deployments in severely degraded, dark, and obscurants‐filled underground mines.
3.
Detecting Loop Closure with Scene Sequences (total citations: 1; self-citations: 0; by others: 1)
This paper is concerned with “loop closing” for mobile robots. Loop closing is the problem of correctly asserting that a robot has returned to a previously visited area. It is a particularly hard but important component of the Simultaneous Localization and Mapping (SLAM) problem, in which a mobile robot explores an a priori unknown environment, performing on-the-fly mapping while the map is used to localize the vehicle. Many SLAM implementations look to internal map and vehicle estimates (p.d.f.s) to make decisions about whether a vehicle is revisiting a previously mapped area or is exploring a new region of the workspace. We suggest that one of the reasons loop closing is hard in SLAM is precisely because these internal estimates can, despite best efforts, be in gross error. The “loop closer” we propose, analyze, and demonstrate makes no recourse to the metric estimates of the SLAM system it supports and aids; it is entirely independent. At regular intervals the vehicle captures the appearance of the local scene (with camera and laser). We encode the similarity between all possible pairings of scenes in a “similarity matrix”. We then pose the loop-closing problem as the task of extracting statistically significant sequences of similar scenes from this matrix. We show how suitable analysis (introspection) and decomposition (remediation) of the similarity matrix allows for the reliable detection of loops despite the presence of repetitive and visually ambiguous scenes. We demonstrate the technique supporting a SLAM system driven by scan-matching laser data in a variety of settings. Some of the outdoor settings are beyond the capability of the SLAM system itself, in which case GPS was used to provide a ground truth. We further show how the techniques can equally be applied to detect loop closure using spatial images taken with a scanning laser. We conclude with an extension of the loop-closing technique to a multi-robot mapping problem in which the outputs of several uncoordinated, SLAM-enabled robots are fused without requiring inter-vehicle observations or a priori frame alignment.
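The similarity-matrix idea can be sketched as follows; the cosine-similarity descriptors, thresholds, and synthetic scenes are illustrative assumptions, not the paper's actual scene encoding or statistical-significance test:

```python
import numpy as np

def similarity_matrix(descs):
    """Cosine similarity between every pair of scene descriptors."""
    d = descs / np.linalg.norm(descs, axis=1, keepdims=True)
    return d @ d.T

def loop_candidates(M, thresh=0.9, min_len=3):
    """Find off-diagonal runs of high similarity: consecutive scene
    pairs (i, j), (i+1, j+1), ... that all look alike, which is the
    signature of the vehicle retraversing a loop."""
    n = len(M)
    loops = []
    for offset in range(min_len, n):      # stay off the main diagonal
        run = 0
        for i in range(n - offset):
            run = run + 1 if M[i, i + offset] > thresh else 0
            if run == min_len:
                loops.append((i - min_len + 1, i - min_len + 1 + offset))
    return loops

# Toy sequence of 8 scenes where scenes 5-7 revisit scenes 0-2.
e = np.eye(4)
descs = np.vstack([e[0], e[1], e[2], e[3], e[0] + e[3], e[0], e[1], e[2]])
print(loop_candidates(similarity_matrix(descs)))  # → [(0, 5)]
```

Requiring a *sequence* of similar scenes, rather than a single high-similarity pair, is what suppresses false positives from individually ambiguous or repetitive scenes.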
4.
Robotic collaboration promises increased robustness and efficiency of missions with great potential in applications, such as search‐and‐rescue and agriculture. Multiagent collaborative simultaneous localization and mapping (SLAM) is right at the core of enabling collaboration, such that each agent can colocalize in and build a map of the workspace. The key challenges at the heart of this problem, however, lie with robust communication, efficient data management, and effective sharing of information among the agents. To this end, here we present CCM‐SLAM, a centralized collaborative SLAM framework for robotic agents, each equipped with a monocular camera, a communication unit, and a small processing board. With each agent able to run visual odometry onboard, CCM‐SLAM ensures their autonomy as individuals, while a central server with potentially bigger computational capacity enables their collaboration by collecting all their experiences, merging and optimizing their maps, or disseminating information back to them, where appropriate. An in‐depth analysis on benchmarking datasets addresses the scalability and the robustness of CCM‐SLAM to information loss and communication delays commonly occurring during real missions. This reveals that in the worst case of communication loss, collaboration is affected, but not the autonomy of the agents. Finally, the practicality of the proposed framework is demonstrated with real flights of three small aircraft equipped with different sensors and computational capabilities onboard and a standard laptop as the server, collaboratively estimating their poses and the scene on the fly.
5.
This study presents the computer vision modules of a multi‐unmanned aerial vehicle (UAV) system, which scored gold, silver, and bronze medals at the Mohamed Bin Zayed International Robotics Challenge 2017. This autonomous system, which ran completely on board and in real time, had to address two complex tasks in challenging outdoor conditions. In the first task, an autonomous UAV had to find, track, and land on a human‐driven car moving at 15 km/hr on a figure‐eight‐shaped track. During the second task, a group of three UAVs had to find small colored objects in a wide area, pick them up, and deliver them into a specified drop‐off zone. The computer vision modules presented here achieved computationally efficient detection, accurate localization, robust velocity estimation, and reliable future position prediction of both the colored objects and the car. These properties had to be achieved in adverse outdoor environments with changing light conditions. Lighting varied from intense direct sunlight, with sharp shadows cast over the objects by the UAV itself, to reduced visibility caused by overcast skies and by dust and sand in the air. The results presented in this paper demonstrate good performance of the modules both during testing, which took place in the harsh desert environment of the central United Arab Emirates, and during the contest, which took place at a racing complex in the urban, near‐sea location of Abu Dhabi. The stability and reliability of these modules contributed to the overall result of the contest, where our multi‐UAV system outperformed teams from the world's leading robotics laboratories in two challenging scenarios.
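A toy illustration of color-threshold detection with centroid localization, the simplest form of the "find small colored objects" step; the threshold bounds and the toy image are invented for illustration and are far simpler than the modules described above:

```python
import numpy as np

def detect_colored_object(img, lo, hi):
    """Return the pixel centroid (x, y) of all pixels whose RGB values
    fall inside [lo, hi] per channel, or None if nothing matches."""
    mask = np.all((img >= lo) & (img <= hi), axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# Toy 6x6 image: black background, a 2x2 red blob at rows 2-3, cols 3-4.
img = np.zeros((6, 6, 3), dtype=np.uint8)
img[2:4, 3:5] = (200, 30, 30)
print(detect_colored_object(img, lo=(150, 0, 0), hi=(255, 80, 80)))
# → (3.5, 2.5)
```

A fielded system would add per-frame threshold adaptation and temporal filtering to survive the lighting changes the abstract describes.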
6.
We present a robot, InductoBeast, that greets a new office building by learning the floorplan automatically, with minimal human intervention and a priori knowledge. Our robot architecture is unique because it combines aspects of both abductive and inductive mapping methods to solve this problem. We present experimental results spanning three office environments, mapped and navigated during normal business hours. We hope these results help to establish a performance benchmark against which robust and adaptive mapping robots of the future may be measured.
7.
Jesús Pestana, Michael Maurer, Daniel Muschick, Manuel Hofer, Friedrich Fraundorfer, Journal of Field Robotics, 2019, 36(4):734-762
Achieving the autonomous deployment of aerial robots in unknown outdoor environments using only onboard computation is a challenging task. In this study, we have developed a solution to demonstrate the feasibility of autonomously deploying drones in unknown outdoor environments, with the main capability of providing an obstacle map of the area of interest in a short period of time. We focus on use cases where no obstacle maps are available beforehand, for instance, in search and rescue scenarios, and on increasing the autonomy of drones in such situations. Our vision‐based mapping approach consists of two separate steps. First, the drone performs an overview flight at a safe altitude, acquiring overlapping nadir images while creating a high‐quality sparse map of the environment by using a state‐of‐the‐art photogrammetry method. Second, this map is georeferenced, densified by fitting a mesh model, and converted into an Octomap obstacle map, which can be continuously updated while performing a task of interest near the ground or in the vicinity of objects. The generation of the overview obstacle map is performed in almost real time on the onboard computer of the drone, leaving enough time for the drone to execute other tasks inside the area of interest during the same flight. We evaluate quantitatively the accuracy of the acquired map and the characteristics of the planned trajectories. We further demonstrate experimentally the safe navigation of the drone in an area mapped with our proposed approach.
8.
Christophe De Wagter, Rick Ruijsink, Ewoud J. J. Smeur, Kevin G. van Hecke, Freek van Tienen, Erik van der Horst, Bart D. W. Remes, Journal of Field Robotics, 2018, 35(6):937-960
To participate in the Outback Medical Express UAV Challenge 2016, a vehicle was designed and tested that can autonomously hover precisely, take off and land vertically, fly fast forward efficiently, and use computer vision to locate a person and a suitable landing location. The vehicle is a novel hybrid tail‐sitter combining a delta‐shaped biplane fixed‐wing and a conventional helicopter rotor. The rotor and wing are mounted perpendicularly to each other, and the entire vehicle pitches down to transition from hover to fast forward flight, where the rotor serves as propulsion. To deliver sufficient thrust in hover while still being efficient in fast forward flight, a custom rotor system was designed. The theoretical design was validated with energy measurements, wind tunnel tests, and application in real‐world missions. A rotor head and corresponding control algorithm were developed to allow transitioning flight with the nonconventional rotor dynamics that are caused by the fuselage-rotor interaction. Dedicated electronics were designed that meet vehicle needs and comply with regulations to allow safe flight beyond visual line of sight. Vision‐based search and guidance algorithms running on a stereo‐vision fish‐eye camera were developed and tested to locate a person in cluttered terrain never seen before. Flight tests and competition participation illustrate the applicability of the DelftaCopter concept.
9.
Micro aerial vehicles (MAVs), especially quadrotors, have been widely used in field applications, such as disaster response, field surveillance, and search‐and‐rescue. For accomplishing such missions in challenging environments, the capability of navigating with full autonomy while avoiding unexpected obstacles is the most crucial requirement. In this paper, we present a framework for the online generation of safe and dynamically feasible trajectories directly on the point cloud, which is the lowest‐level representation of range measurements and is applicable to different sensor types. We develop a quadrotor platform equipped with a three‐dimensional (3D) light detection and ranging (LiDAR) sensor and an inertial measurement unit (IMU) for simultaneously estimating states of the vehicle and building point cloud maps of the environment. Based on the incrementally registered point clouds, we generate and refine online a flight corridor, which represents the free space that the trajectory of the quadrotor should lie in. We represent the trajectory as piecewise Bézier curves using the Bernstein polynomial basis and formulate the trajectory generation problem as a convex program. By using Bézier curves, we can constrain the position and kinodynamics of the trajectory entirely within the flight corridor and the given physical limits. The proposed approach runs onboard in real time and is integrated into an autonomous quadrotor platform. We demonstrate fully autonomous quadrotor flights in unknown, complex environments to validate the proposed method.
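The corridor-constraint argument rests on the convex hull property of Bézier curves: a curve written in the Bernstein basis stays inside the convex hull of its control points, so bounding the control points bounds the whole trajectory. A minimal 1-D sketch (not the authors' convex program):

```python
from math import comb

def bezier(ctrl, t):
    """Evaluate a Bézier curve at t in [0, 1] via the Bernstein basis
    B_{i,n}(t) = C(n, i) * t**i * (1 - t)**(n - i)."""
    n = len(ctrl) - 1
    return sum(c * comb(n, i) * t**i * (1 - t)**(n - i)
               for i, c in enumerate(ctrl))

# Convex hull property: all control points lie in the "corridor" [0, 2],
# therefore every sampled point on the curve does too.
ctrl = [0.0, 2.0, 0.5, 1.5]
samples = [bezier(ctrl, k / 100) for k in range(101)]
assert min(ctrl) <= min(samples) and max(samples) <= max(ctrl)
print(round(bezier(ctrl, 0.5), 3))  # → 1.125
```

This is why the paper can express "trajectory inside the corridor" as linear constraints on control points, keeping the whole problem convex.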
10.
Marius Beul, Matthias Nieuwenhuisen, Jan Quenzel, Radu Alexandru Rosu, Jannis Horn, Dmytro Pavlichenko, Sebastian Houben, Sven Behnke, Journal of Field Robotics, 2019, 36(1):204-229
The Mohamed Bin Zayed International Robotics Challenge (MBZIRC) 2017 defined ambitious new benchmarks to advance the state of the art in autonomous operation of ground‐based and flying robots. This study covers our approaches to the two challenges that involved micro aerial vehicles (MAVs). Challenge 1 required reliable target perception, fast trajectory planning, and stable control of an MAV to land on a moving vehicle. Challenge 3 demanded a team of MAVs perform a search and transportation task, coined "Treasure Hunt," which required mission planning and multirobot coordination as well as adaptive control to account for the additional object weight. We describe our base MAV setup and the challenge‐specific extensions, cover the camera‐based perception, explain control and trajectory planning in detail, and elaborate on mission planning and team coordination. We evaluated our systems in simulation as well as with real‐robot experiments during the competition in Abu Dhabi. With our system, we, as part of the larger team NimbRo, won the MBZIRC Grand Challenge and placed third in both subchallenges involving flying robots.
11.
In this study, we use unmanned aerial vehicles equipped with multispectral cameras to search for bodies in maritime rescue operations. A series of flights were performed in open‐water scenarios in the northwest of Spain, using a certified aquatic rescue dummy in dangerous areas and real people when the weather conditions allowed it. The multispectral images were aligned and used to train a convolutional neural network for body detection. An exhaustive evaluation was performed to assess the best combination of spectral channels for this task. Three approaches based on a MobileNet topology were evaluated, using (a) the full image, (b) a sliding window, and (c) a precise localization method. The first method classifies an input image as containing a body or not, the second uses a sliding window to yield a class for each subimage, and the third uses transposed convolutions to return a binary output in which the body pixels are marked. In all cases, the MobileNet architecture was modified by adding custom layers and by preprocessing the input to align the multispectral camera channels. Evaluation shows that the proposed methods yield reliable results, obtaining the best classification performance when combining green, red‐edge, and near‐infrared channels. We conclude that the precise localization approach is the most suitable method, obtaining accuracy similar to that of the sliding window while achieving spatial localization close to 1 m. The presented system is about to be implemented for real maritime rescue operations carried out by Babcock Mission Critical Services Spain.
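The sliding-window variant can be sketched as follows; the brightness-based stand-in classifier replaces the trained MobileNet purely for illustration:

```python
import numpy as np

def sliding_window(img, win, stride, classify):
    """Run a patch classifier over every window position; return the
    top-left (x, y) of each window classified as containing a body."""
    h, w = img.shape[:2]
    hits = []
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            if classify(img[y:y + win, x:x + win]):
                hits.append((x, y))
    return hits

# Stand-in for the trained CNN: "body" = bright patch (illustrative only).
classify = lambda patch: patch.mean() > 0.5

img = np.zeros((8, 8))
img[4:8, 0:4] = 1.0                  # bright region, lower-left corner
print(sliding_window(img, win=4, stride=4, classify=classify))
# → [(0, 4)]
```

The trade-off the abstract describes follows directly: spatial resolution is limited to the window stride, which is why the transposed-convolution method localizes more precisely at similar accuracy.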
12.
Danylo Malyuta, Christian Brommer, Daniel Hentzen, Thomas Stastny, Roland Siegwart, Roland Brockers, Journal of Field Robotics, 2020, 37(1):137-157
Recent applications of unmanned aerial systems (UAS) to precision agriculture have shown increased ease and efficiency in data collection at precise remote locations. However, further enhancement of the field requires operation over long periods of time, for example, days or weeks. This has so far been impractical due to the limited flight times of such platforms and the requirement of humans in the loop for operation. To overcome these limitations, we propose a fully autonomous rotorcraft UAS that is capable of performing repeated flights for long‐term observation missions without any human intervention. We address two key technologies that are critical for such a system: full platform autonomy to enable mission execution independently of human operators, and vision‐based precision landing on a recharging station for automated energy replenishment. High‐level autonomous decision making is implemented as a hierarchy of master and slave state machines. Vision‐based precision landing is enabled by estimating the landing pad's pose using a bundle of AprilTag fiducials configured for detection from a wide range of altitudes. We provide an extensive evaluation of the landing pad pose estimation accuracy as a function of the bundle's geometry. The functionality of the complete system is demonstrated through two indoor experiments with durations of 11 and 10.6 hr, and one outdoor experiment with a duration of 4 hr. The UAS executed 16, 48, and 22 flights, respectively, during these experiments. In the outdoor experiment, the ratio between time spent flying to collect data and time spent charging was 1:10, which is similar to past work in this domain. All flights were fully autonomous with no human in the loop. To the best of our knowledge, this is the first research publication about the long‐term outdoor operation of a quadrotor system with no human interaction.
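The master/slave state-machine hierarchy can be sketched with a simple transition-table machine; the states and events below are illustrative assumptions, not the authors' actual design:

```python
class StateMachine:
    """Table-driven finite state machine: unknown (state, event)
    pairs leave the state unchanged."""
    def __init__(self, transitions, start):
        self.transitions = transitions   # {(state, event): next_state}
        self.state = start

    def handle(self, event):
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

# Master machine for long-term operation; a slave machine (not shown)
# would run the flight itself while the master sits in FLYING.
master = StateMachine({
    ("IDLE", "mission_start"): "FLYING",
    ("FLYING", "battery_low"): "LANDING",
    ("LANDING", "landed"): "CHARGING",
    ("CHARGING", "charged"): "IDLE",
}, start="IDLE")

for ev in ["mission_start", "battery_low", "landed", "charged"]:
    master.handle(ev)
print(master.state)  # back to IDLE after one full observe-recharge cycle
```

An explicit transition table makes the repeated fly/land/recharge cycle auditable, which matters when no human supervises the system for days.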
13.
Autonomous soaring has the potential to greatly improve both the range and endurance of small robotic aircraft. This paper describes an autonomous soaring system that generates a dynamic map of lift sources (thermals) in the environment and uses this map for online flight planning and decision making. Components of the autonomy algorithm include thermal mapping, explore/exploit decision making, navigation, optimal airspeed computation, thermal centering control, and energy state estimation. A finite state machine manages the aircraft behavior during flight and determines when changing behavior is appropriate. A complete system to enable autonomous soaring is described with special attention paid to practical considerations encountered during flight testing. A companion paper describes the hardware implementation of this system and the results of a flight test campaign conducted at Aberdeen Proving Ground in September 2015.
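Energy state estimation in soaring typically tracks specific total energy, so that climbs and airspeed changes are accounted for together; a minimal sketch under the assumption of a simple point-mass model (not the paper's estimator):

```python
G = 9.81  # gravitational acceleration, m/s^2

def specific_energy(altitude_m, airspeed_ms):
    """Specific total energy (per unit mass): potential + kinetic,
    e = h + v^2 / (2 g), expressed in meters of equivalent altitude."""
    return altitude_m + airspeed_ms ** 2 / (2 * G)

def energy_rate(e_now, e_prev, dt):
    """Energy-rate proxy for net lift; unlike raw climb rate, it is
    not fooled by trading airspeed for altitude."""
    return (e_now - e_prev) / dt

e0 = specific_energy(300.0, 15.0)  # 300 m altitude, 15 m/s airspeed
e1 = specific_energy(302.0, 15.0)  # 2 m higher at the same speed, 1 s later
print(round(energy_rate(e1, e0, 1.0), 2))  # → 2.0
```

A positive energy rate while circling is the usual cue that the aircraft is inside a thermal worth exploiting.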
14.
José Martínez-Carranza, Richard Bostock, Simon Willcox, Ian Cowling, Walterio Mayol-Cuevas, Advanced Robotics, 2016, 30(2):119-130
This paper develops and evaluates methods for performing auto-retrieval of a micro aerial vehicle (MAV) using fast 6D relocalisation from visual features. Auto-retrieval involves a combination of guided operation, in which a human pilot directs the vehicle through obstacles, and autonomous operation, in which the vehicle navigates on its return or during re-exploration. This approach is useful in tasks such as industrial inspection and monitoring, and in particular for operating indoors in GPS-denied environments. Our relocalisation methodology contrasts two sources of information, depth data and feature co-visibility, but in a novel manner that validates matches before a RANSAC procedure. The result is the ability to perform 6D relocalisation at an average of 50 Hz on individual maps containing 120 K features. The use of feature co-visibility reduces the memory footprint and removes the need to employ depth data as used in previous work. This paper concludes with an example of an industrial application involving visual monitoring from a MAV aided by autonomous navigation.
15.
Marco Baglietto, Antonio Sgorbissa, Damiano Verda, Renato Zaccaria, Robotics and Autonomous Systems, 2011, 59(12):1060-1069
This article focuses on human navigation, by proposing a system for mapping and self-localization based on wearable sensors, i.e., a laser scanner and a 6 Degree-of-Freedom Inertial Measurement Unit (6DOF IMU) fixed on a helmet worn by the user. The sensor data are fed to a Simultaneous Localization And Mapping (SLAM) algorithm based on particle filtering, an approach commonly used for mapping and self-localization in mobile robotics. Given the specific scenario considered, some operational hypotheses are introduced in order to reduce the effect of a well-known problem in IMU-based localization, i.e., position drift. Experimental results show that the proposed solution leads to improvements in the quality of the generated map with respect to existing approaches.
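The particle-filtering backbone of such a SLAM system can be sketched in one dimension; the motion/measurement models, noise levels, and particle count are illustrative assumptions, not the article's configuration:

```python
import math
import random

def particle_filter_step(particles, motion, measurement, noise=0.5):
    """One localization step: propagate with process noise, weight by a
    Gaussian measurement likelihood, then resample by weight."""
    # 1. Motion update with process noise.
    moved = [p + motion + random.gauss(0, 0.1) for p in particles]
    # 2. Weight by measurement likelihood.
    w = [math.exp(-((p - measurement) ** 2) / (2 * noise ** 2))
         for p in moved]
    total = sum(w)
    w = [x / total for x in w]
    # 3. Resample proportionally to weight.
    return random.choices(moved, weights=w, k=len(moved))

random.seed(0)
particles = [random.uniform(0, 10) for _ in range(500)]
for z in [2.0, 3.0, 4.0]:          # range-like measurements, unit motions
    particles = particle_filter_step(particles, motion=1.0, measurement=z)
est = sum(particles) / len(particles)
print(round(est, 1))               # concentrates near the true position, ~4
```

In the full SLAM setting each particle also carries a map hypothesis; the drift-reduction hypotheses the article introduces act on the motion-update stage of exactly this loop.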
16.
John Peterson, Weilin Li, Brian Cesar-Tondreau, John Bird, Kevin Kochersberger, Wojciech Czaja, Morgan McLean, Journal of Field Robotics, 2019, 36(4):818-845
This paper discusses the results of a field experiment conducted at Savannah River National Laboratory to test the performance of several algorithms for the localization of radioactive materials. In this multirobot system, both an unmanned aerial vehicle, a custom hexacopter, and an unmanned ground vehicle (UGV), the ClearPath Jackal, equipped with γ‐ray spectrometers, were used to collect data from two radioactive source configurations. Both the Fourier scattering transform and the Laplacian eigenmap algorithms for source detection were tested on the collected data sets. These algorithms transform raw spectral measurements into alternate spaces to allow clustering to detect trends within the data which indicate the presence of radioactive sources. This study also presents a point source model and accompanying information‐theoretic active exploration algorithm. Field testing validated the ability of this model to fuse aerial and ground collected radiation measurements, and the exploration algorithm’s ability to select informative actions to reduce model uncertainty, allowing the UGV to locate radioactive material online.
17.
Autonomous soaring has the potential to greatly improve both the range and endurance of small robotic aircraft. This paper describes the results of a test flight campaign to demonstrate an autonomous soaring system that generates a dynamic map of lift sources (thermals) in the environment and uses this map for online flight planning and decision making. The aircraft is based on a commercially available radio‐controlled glider; it is equipped with an autopilot module for low‐level flight control and an onboard computer that hosts all autonomy algorithms. Components of the autonomy algorithm include thermal mapping, explore/exploit decision making, navigation, optimal airspeed computation, thermal centering control, and energy state estimation. A finite state machine manages flight behaviors and the switching between them. Flight tests at Aberdeen Proving Ground resulted in 7.8 h of flight time with the autonomous soaring system engaged, with three hours spent climbing in thermals. Postflight computation of energy state and frequent observations of groups of birds thermalling with our aircraft indicate that it was effectively exploiting available energy.
18.
This paper presents a complete solution for the control, navigation, localization, and mapping of an indoor quadrotor UAV. The onboard system includes three main sensors: an inertial measurement unit, a downward-looking camera, and a scanning laser rangefinder. By processing and fusing the measurements from these sensors, the UAV can reliably estimate its flight velocity and real-time position, and fly collision-free along indoor walls. From the data collected during a complete flight experiment, the UAV's flight path and the indoor environment can also be well estimated. The autonomous navigation functionality of this system requires no remote sensing information or offline computation. The performance and reliability of this indoor navigation solution have been validated in real flight experiments.
19.
Micro unmanned aerial vehicles, with their broad application prospects, have become a research focus for scholars worldwide, and autonomous indoor navigation and guidance without reliance on satellite navigation systems is one of the key research topics. Drawing on recent domestic and international progress in autonomous navigation and guidance for indoor UAVs, this paper discusses the key technical problems of achieving autonomous navigation and guidance using only onboard sensors, and analyzes in detail the implementation status and difficulties of the key technologies of UAV pose estimation, dynamic obstacle avoidance, and simultaneous localization and mapping. Finally, an outlook on autonomous navigation and guidance technology for indoor UAVs is given.
20.
Vision‐based aircraft detection technology may provide a credible sensing option for automated detect and avoid in small‐to‐medium size fixed‐wing unmanned aircraft systems (UAS). Reliable vision‐based aircraft detection has previously been demonstrated in sky‐region sensing environments. This paper describes a novel vision‐based system for detecting aircraft below the horizon in the presence of ground clutter. We examine the performance of our system on a data set of 63 near collision encounters we collected between a camera‐equipped manned aircraft and a below‐horizon target. In these 63 encounters, our system successfully detects all aircraft, at an average detection range of 1890 m (with a standard error of 43 m and no false alarms in 1.1 h). Furthermore, our system does not require access to inertial sensor data (which significantly reduces system cost) and operates at over 12 frames per second.