Similar Articles
 Found 20 similar articles (search time: 15 ms)
1.
    
One of the main challenges for autonomous aerial robots is to land safely on a target position on varied surface structures in real-world applications. Most current aerial robots (especially multirotors) use only rigid landing gears, which limit adaptability to the environment and can cause damage to the sensitive cameras and other electronics onboard. This paper presents a bioinspired landing system for autonomous aerial robots, built on the inspire–abstract–implement design paradigm and an additive manufacturing process for soft thermoplastic materials. This novel landing system consists of 3D printable Sarrus shock absorbers and soft landing pads, which are integrated with a one-degree-of-freedom actuation mechanism. Both designs of the Sarrus shock absorber and the soft landing pad are analyzed via finite element analysis and characterized with dynamic mechanical measurements. The landing system with 3D printed soft components is characterized by completing a total of 60 landing tests on flat, convex, and concave steel structures and a grassy field at different speeds between 1 and 2 m/s. The adaptability and shock absorption capacity of the proposed landing system are then evaluated and benchmarked against rigid legs. The results reveal that the system is able to adapt to varied surface structures and reduce the impact force by up to 540 N. The bioinspired landing strategy presented in this paper opens a promising avenue in Aerial Biorobotics, where a cross-disciplinary approach in vehicle control and navigation is combined with soft technologies, enabled by adaptive morphology.

2.
    
Autonomous soaring has the potential to greatly improve both the range and endurance of small robotic aircraft. This paper describes the results of a test flight campaign to demonstrate an autonomous soaring system that generates a dynamic map of lift sources (thermals) in the environment and uses this map for on-line flight planning and decision making. The aircraft is based on a commercially available radio-controlled glider; it is equipped with an autopilot module for low-level flight control and an on-board computer that hosts all autonomy algorithms. Components of the autonomy algorithm include thermal mapping, explore/exploit decision making, navigation, optimal airspeed computation, thermal centering control, and energy state estimation. A finite state machine manages flight behaviors and switching between behaviors. Flight tests at Aberdeen Proving Ground resulted in 7.8 h of flight time with the autonomous soaring system engaged, with three hours spent climbing in thermals. Postflight computation of energy state and frequent observations of groups of birds thermalling with our aircraft indicate that it was effectively exploiting available energy.
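The explore/exploit switching managed by the finite state machine could be sketched as follows. The state names, thresholds, and transition rules below are illustrative assumptions, not the authors' actual design:

```python
# Minimal sketch of a soaring behavior state machine (assumed states/thresholds).

class SoaringFSM:
    """Switches between 'explore' (search for lift) and 'exploit' (climb in a thermal)."""

    def __init__(self, climb_threshold=0.5, exit_altitude=1200.0):
        self.state = "explore"
        self.climb_threshold = climb_threshold  # m/s of net climb that triggers exploitation
        self.exit_altitude = exit_altitude      # m; leave the thermal once high enough

    def step(self, climb_rate, altitude):
        if self.state == "explore" and climb_rate > self.climb_threshold:
            self.state = "exploit"              # found lift: start thermal centering
        elif self.state == "exploit" and (climb_rate <= 0.0 or altitude >= self.exit_altitude):
            self.state = "explore"              # lift lost or ceiling reached: resume search
        return self.state
```

In a real system the transition inputs would come from the energy state estimator rather than raw climb rate.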

3.
    
Achieving the autonomous deployment of aerial robots in unknown outdoor environments using only onboard computation is a challenging task. In this study, we have developed a solution to demonstrate the feasibility of autonomously deploying drones in unknown outdoor environments, with the main capability of providing an obstacle map of the area of interest in a short period of time. We focus on use cases where no obstacle maps are available beforehand, for instance, in search and rescue scenarios, and on increasing the autonomy of drones in such situations. Our vision-based mapping approach consists of two separate steps. First, the drone performs an overview flight at a safe altitude acquiring overlapping nadir images, while creating a high-quality sparse map of the environment by using a state-of-the-art photogrammetry method. Second, this map is georeferenced, densified by fitting a mesh model, and converted into an Octomap obstacle map, which can be continuously updated while performing a task of interest near the ground or in the vicinity of objects. The overview obstacle map is generated in almost real time on the onboard computer of the drone, leaving enough time for the drone to execute other tasks inside the area of interest during the same flight. We evaluate quantitatively the accuracy of the acquired map and the characteristics of the planned trajectories. We further demonstrate experimentally the safe navigation of the drone in an area mapped with our proposed approach.
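Converting a georeferenced point cloud into an obstacle map like Octomap amounts to inserting points into a spatial occupancy structure. Below is a minimal stand-in using a uniform voxel set; Octomap itself uses an adaptive octree, so the flat-set layout and `resolution` parameter here are simplifying assumptions:

```python
import math

def voxelize(points, resolution=0.5):
    """Insert 3D points into a uniform voxel occupancy set -- a simplified
    stand-in for an Octomap-style obstacle map (assumed interface)."""
    occupied = set()
    for x, y, z in points:
        key = (math.floor(x / resolution),
               math.floor(y / resolution),
               math.floor(z / resolution))
        occupied.add(key)
    return occupied

def is_occupied(occupied, p, resolution=0.5):
    """Query whether the voxel containing point p is occupied."""
    return (math.floor(p[0] / resolution),
            math.floor(p[1] / resolution),
            math.floor(p[2] / resolution)) in occupied
```

A planner would query `is_occupied` along candidate trajectories to reject colliding ones.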

4.
    
Autonomous flight of unmanned full-size rotorcraft has the potential to enable many new applications. However, the dynamics of these aircraft, prevailing wind conditions, the need to operate over a variety of speeds, and stringent safety requirements make it difficult to generate safe plans for these systems. Prior work has shown results for only parts of the problem. Here we present the first comprehensive approach to planning safe trajectories for autonomous helicopters from takeoff to landing. Our approach is based on two key insights. First, we compose an approximate solution by cascading various modules that can efficiently solve different relaxations of the planning problem. Our framework invokes a long-term route optimizer, which feeds a receding-horizon planner, which in turn feeds a high-fidelity safety executive. Second, to deal with the diverse planning scenarios that may arise, we hedge our bets with an ensemble of planners. We use a data-driven approach that maps a planning context to a diverse list of planning algorithms that maximize the likelihood of success. Our approach was extensively evaluated in simulation and in real-world flight tests on three different helicopter systems, for a total of more than 109 autonomous hours and 590 pilot-in-the-loop hours. We provide an in-depth analysis and discuss the various tradeoffs of decoupling the problem, using approximations, and leveraging statistical techniques. We summarize the insights with the hope that they generalize to other platforms and applications.
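A data-driven mapping from planning context to a ranked list of planners might look roughly like the following nearest-neighbor sketch. The feature representation, `k`, and success-rate scoring are assumptions for illustration; the paper's actual learned model is not reproduced here:

```python
import math
from collections import defaultdict

def rank_planners(context, history, k=3):
    """Rank planners by success rate among the k past planning contexts
    nearest to the current one (illustrative sketch, assumed representation).
    `history` holds (context_vector, planner_name, succeeded) records."""
    nearest = sorted(history, key=lambda r: math.dist(context, r[0]))[:k]
    wins = defaultdict(lambda: [0, 0])          # planner -> [successes, trials]
    for _, planner, success in nearest:
        wins[planner][0] += int(success)
        wins[planner][1] += 1
    return sorted(wins, key=lambda p: wins[p][0] / wins[p][1], reverse=True)
```

The ensemble would then try planners in this order until one produces a feasible trajectory.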

5.
    
For marine industrial inspection, archaeology, and geological formation study, the ability to map unknown underwater enclosed and confined spaces is desirable and well suited for robotic vehicles. To date, there are few solutions thoroughly tested in the field designed to perform this specific task, none of which operate autonomously. With a small, low-cost biomimetic platform known as the U-CAT, we developed a mapping-mission software architecture in which the vehicle executes three key sensor-based reactive stages: entering, exploring, and exiting. Encapsulated in the exploring stage are several state-defined navigation strategies, called patterns, which were designed and initially tested in simulation. The results of the simulation work informed the selection of two patterns that were executed in field trials at a submerged building in Rummu Quarry Lake, Estonia, as part of several full mapping missions. Over the course of these trials, the vehicle was capable of observing the majority (78–97%) of 49.9 explorable square meters within 7 minutes. Based on these results, we demonstrate the capability of a low-cost and resource-constrained vehicle to perform confined space mapping under sensor uncertainty. Further, the observations made by the vehicle are shown to be suitable for target site reconstruction and analysis in postprocessing, which is the intended outcome of this type of mission in practical applications.

6.
    
Aerial cinematography is revolutionizing industries that require live and dynamic camera viewpoints such as entertainment, sports, and security. However, safely piloting a drone while filming a moving target in the presence of obstacles is immensely taxing, often requiring multiple expert human operators. Hence, there is a demand for an autonomous cinematographer that can reason about both geometry and scene context in real time. Existing approaches do not address all aspects of this problem; they either require high-precision motion-capture systems or global positioning system tags to localize targets, rely on prior maps of the environment, plan for short time horizons, or only follow fixed artistic guidelines specified before the flight. In this study, we address the problem in its entirety and propose a complete system for real-time aerial cinematography that for the first time combines: (a) vision-based target estimation; (b) 3D signed-distance mapping for occlusion estimation; (c) efficient trajectory optimization for long time-horizon camera motion; and (d) learning-based artistic shot selection. We extensively evaluate our system both in simulation and in field experiments by filming dynamic targets moving through unstructured environments. Our results indicate that our system can operate reliably in the real world without restrictive assumptions. We also provide in-depth analysis and discussions for each module, with the hope that our design tradeoffs can generalize to other related applications. Videos of the complete system can be found at https://youtu.be/ookhHnqmlaU.
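Signed-distance mapping makes occlusion checks cheap: one can sphere-trace a ray from the camera toward the target and stop when the distance field reports a nearby surface. The sketch below is hedged: the `sdf` callback, step size, and `eps` tolerance are illustrative, not the system's actual interface:

```python
import math

def occluded(sdf, camera, target, step=0.1, eps=0.05):
    """Ray-march from camera toward target through a signed-distance function;
    return True if an obstacle surface is hit before reaching the target.
    `sdf` maps a 3D point to its distance from the nearest obstacle surface."""
    delta = [t - c for c, t in zip(camera, target)]
    dist = math.sqrt(sum(d * d for d in delta))
    direction = [d / dist for d in delta]
    t = 0.0
    while t < dist:
        p = [c + t * d for c, d in zip(camera, direction)]
        d_obs = sdf(p)
        if d_obs < eps:            # surface within eps: line of sight blocked
            return True
        t += max(d_obs, step)      # sphere tracing: safe to advance by the SDF value
    return False
```

A camera-motion optimizer can penalize viewpoints for which `occluded(...)` is true for the tracked target.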

7.
    
We present an open-source system for Micro-Aerial Vehicle (MAV) autonomous navigation from vision-based sensing. Our system focuses on dense mapping, safe local planning, and global trajectory generation, especially when using narrow field-of-view sensors in very cluttered environments. In addition, details about other necessary parts of the system and special considerations for applications in real-world scenarios are presented. We focus our experiments on evaluating global planning, path smoothing, and local planning methods on real maps made on MAVs in realistic search-and-rescue and industrial inspection scenarios. We also perform thousands of simulations in cluttered synthetic environments, and finally validate the complete system in real-world experiments.

8.
    
The problem studied herein is motivated by the practical needs of our participation in the Mohamed Bin Zayed International Robotics Challenge (MBZIRC) 2017, in which a team of unmanned aerial vehicles (UAVs) is requested to collect objects in a given area as quickly as possible and score according to the rewards associated with the objects. The mission time is limited, and the most time-consuming operation is the collection of the objects themselves. Therefore, we address the problem of quickly identifying the most valuable objects, formulated as surveillance planning with curvature-constrained trajectories. The problem is cast as a multivehicle variant of the Dubins traveling salesman problem with neighborhoods (DTSPN). Based on an evaluation of existing approaches to the DTSPN, we propose to use unsupervised learning to find satisfactory solutions with low computational requirements. Moreover, the flexibility of unsupervised learning allows a trajectory parametrization that better fits the motion constraints of the utilized hexacopters, which, unlike the Dubins vehicle, are not limited by a minimum turning radius. We propose to use Bézier curves to exploit the maximal vehicle velocity and acceleration limits. In addition, we generalize the proposed approach to 3D surveillance planning. We report on the evaluation results of the developed algorithms and the experimental verification of the planned trajectories using the real UAVs from our participation in MBZIRC 2017.

9.
    
We study the problem of planning a tour for an energy-limited Unmanned Aerial Vehicle (UAV) to visit a set of sites in the least amount of time. We envision scenarios where the UAV can be recharged at a site or along an edge, either by landing on stationary recharging stations or on Unmanned Ground Vehicles (UGVs) acting as mobile recharging stations. This leads to a new variant of the Traveling Salesperson Problem (TSP) with mobile recharging stations. We present an algorithm that finds not only the order in which to visit the sites but also when and where to land on the charging stations to recharge. Our algorithm plans tours for the UGVs as well as determines the best locations to place stationary charging stations. We study three variants for charging: multiple stationary charging stations, a single mobile charging station, and multiple mobile charging stations. As the problems we study are nondeterministic polynomial time (NP)-hard, we present a practical solution using a Generalized TSP formulation that finds the optimal solution minimizing the total time, subject to the discretization of battery levels. If the UGVs are slower than the UAVs, then the algorithm also finds the minimum number of UGVs required to support the UAV mission such that the UAV is never required to wait for a UGV. Our simulation results show that the running time is acceptable for reasonably sized instances in practice. We evaluate the performance of our algorithm through simulations and proof-of-concept field experiments with a fully autonomous system of one UAV and one UGV.
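The coupling between visit order and recharging decisions can be illustrated with a small brute-force planner. This is a toy, not the paper's Generalized TSP reduction: the distance-as-energy model, full-recharge assumption, and fixed recharge time are all simplifications:

```python
import itertools
import math

def best_tour(depot, sites, chargers, capacity, speed=1.0, recharge_time=2.0):
    """Exhaustively search visit orders for a battery-limited UAV that may
    fully recharge at charger sites; return (best_time, best_order).
    Battery cost per leg = Euclidean distance flown (illustrative model)."""
    best = (math.inf, None)
    for order in itertools.permutations(sites):
        time, charge, pos, feasible = 0.0, capacity, depot, True
        for s in order:
            leg = math.dist(pos, s)
            if leg > charge:          # cannot reach the next site on remaining charge
                feasible = False
                break
            charge -= leg
            time += leg / speed
            if s in chargers:         # land and fully recharge
                charge = capacity
                time += recharge_time
            pos = s
        if feasible and time < best[0]:
            best = (time, order)
    return best
```

The paper instead discretizes battery levels and solves the resulting Generalized TSP, which scales far beyond what permutation enumeration can handle.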

10.
    
This study presents a novel octree-based three-dimensional (3D) exploration and coverage method for autonomous underwater vehicles (AUVs). Robotic exploration can be defined as the task of obtaining a full map of an unknown environment with a robotic system, achieving full coverage of the area of interest with data from a particular sensor or set of sensors. While most robotic exploration algorithms consider only occupancy data, typically acquired by a range sensor, our approach also takes into account optical coverage, so that the environment is discovered with occupancy and optical data of all discovered surfaces in a single exploration mission. In the context of underwater robotics, this capability is of particular interest, since it allows one to obtain better data while reducing operational costs and time. This study expands our previous work in 3D underwater exploration, which was demonstrated through simulation, by presenting improvements in the view planning (VP) algorithm and field validation. Our proposal combines VP with frontier-based (FB) methods, and remains light on computation even for 3D environments thanks to the use of the octree data structure. Finally, this study also presents extensive field evaluation and validation using the Girona 500 AUV. In this regard, the algorithm has been tested in different scenarios, such as a harbor structure, a breakwater structure, and an underwater boulder.
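The frontier-based component of such exploration can be sketched at the voxel level: a frontier is a known-free cell bordering unexplored space. The 6-connected neighborhood and set-based map representation below are assumptions for illustration, not the paper's octree implementation:

```python
def frontiers(free, occupied):
    """Return the frontier voxels: known-free voxels with at least one unknown
    (never observed) 6-neighbor. `free` and `occupied` are sets of integer
    (x, y, z) voxel keys; anything in neither set is unknown."""
    known = free | occupied
    neighbors = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    result = set()
    for v in free:
        for n in neighbors:
            if (v[0] + n[0], v[1] + n[1], v[2] + n[2]) not in known:
                result.add(v)     # borders unknown space: worth visiting
                break
    return result
```

The exploration planner would then generate candidate views toward these frontier cells.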

11.
    
This study presents the computer vision modules of a multi-unmanned aerial vehicle (UAV) system, which scored gold, silver, and bronze medals at the Mohamed Bin Zayed International Robotics Challenge 2017. This autonomous system, which ran completely on board and in real time, had to address two complex tasks in challenging outdoor conditions. In the first task, an autonomous UAV had to find, track, and land on a human-driven car moving at 15 km/hr on a figure-eight-shaped track. During the second task, a group of three UAVs had to find small colored objects in a wide area, pick them up, and deliver them into a specified drop-off zone. The computer vision modules presented here achieved computationally efficient detection, accurate localization, robust velocity estimation, and reliable future position prediction of both the colored objects and the car. These properties had to be achieved in adverse outdoor environments with changing light conditions. Lighting varied from intense direct sunlight, with sharp shadows cast over the objects by the UAV itself, to reduced visibility caused by overcast conditions and by dust and sand in the air. The results presented in this paper demonstrate good performance of the modules both during testing, which took place in the harsh desert environment of the central area of the United Arab Emirates, and during the contest, which took place at a racing complex in the urban, near-sea location of Abu Dhabi. The stability and reliability of these modules contributed to the overall result of the contest, where our multi-UAV system outperformed teams from the world's leading robotic laboratories in two challenging scenarios.

12.
We present a robot, InductoBeast, that greets a new office building by learning the floorplan automatically, with minimal human intervention and a priori knowledge. Our robot architecture is unique because it combines aspects of both abductive and inductive mapping methods to solve this problem. We present experimental results spanning three office environments, mapped and navigated during normal business hours. We hope these results help to establish a performance benchmark against which robust and adaptive mapping robots of the future may be measured.

13.
    
This paper addresses the perception, control, and trajectory planning for an aerial platform to identify and land on a moving car at 15 km/hr. The hexacopter unmanned aerial vehicle (UAV), equipped with onboard sensors and a computer, detects the car using a monocular camera and predicts the car's future movement using a nonlinear motion model. While following the car, the UAV lands on its roof and attaches itself using magnetic legs. The proposed system is fully autonomous from takeoff to landing. Numerous field tests were conducted throughout the year-long development and preparations for the Mohamed Bin Zayed International Robotics Challenge (MBZIRC) 2017 competition, for which the system was designed. We propose a novel control system in which a model predictive controller is used in real time to generate a reference trajectory for the UAV, which is then tracked by a nonlinear feedback controller. This combination allows the UAV to track predictions of the car's motion with minimal position error. The evaluation presents three successful autonomous landings during MBZIRC 2017, where our system achieved the fastest landing among all competing teams.

14.
    
We introduce a prototype flying platform for planetary exploration: autonomous robot design for extraterrestrial applications (ARDEA). Communication with unmanned missions beyond Earth orbit suffers from time delay; thus, a key criterion for robotic exploration is a robot's ability to perform tasks without human intervention. For autonomous operation, all computations should be done on-board, and Global Navigation Satellite System (GNSS) signals should not be relied on for navigation purposes. Given these objectives, ARDEA is equipped with two pairs of wide-angle stereo cameras and an inertial measurement unit (IMU) for robust visual-inertial navigation and time-efficient, omni-directional 3D mapping. The four cameras cover a 240° vertical field of view, enabling the system to operate in confined environments such as caves formed by lava tubes. The captured images are split into several pinhole cameras, which are used for simultaneously running visual odometries. The stereo output is used for simultaneous localization and mapping, 3D map generation, and collision-free motion planning. To operate the vehicle efficiently for a variety of missions, ARDEA's capabilities have been modularized into skills which can be assembled to fulfill a mission's objectives. These skills are defined generically, so that they are independent of the robot configuration, making the approach suitable for different heterogeneous robotic teams. The diverse skill set also makes the micro aerial vehicle (MAV) useful for any task where autonomous exploration is needed, for example, terrestrial search and rescue missions where visual navigation in GNSS-denied indoor environments, such as partially collapsed buildings or tunnels, is crucial. We have demonstrated the robustness of our system in indoor and outdoor field tests.

15.
    
Safety is undoubtedly the most fundamental requirement for any aerial robotic application. It is essential to equip aerial robots with omnidirectional perception coverage to ensure safe navigation in complex environments. In this paper, we present a lightweight and low-cost omnidirectional perception system, which consists of two ultrawide field-of-view (FOV) fisheye cameras and a low-cost inertial measurement unit (IMU). The goal of the system is to achieve spherical omnidirectional sensing coverage with a minimal sensor suite. The two fisheye cameras are mounted rigidly, facing upward and downward, and provide omnidirectional perception coverage: 360° FOV horizontally, 50° FOV vertically for stereo, and the whole sphere for monocular. We present a novel optimization-based dual-fisheye visual-inertial state estimator that provides highly accurate state estimation. Real-time omnidirectional three-dimensional (3D) mapping is combined with stereo-based depth perception for the horizontal direction and monocular depth perception for the upward and downward directions. The omnidirectional perception system is integrated with online trajectory planners to achieve closed-loop, fully autonomous navigation. All computations are done onboard on a heterogeneous computing suite. Extensive experimental results are presented to validate the individual modules as well as the overall system in both indoor and outdoor environments.

16.
  Cited by: 2 (self-citations: 0, other citations: 2)
This paper addresses the problem of autonomous cooperative localization, grasping, and delivery of colored ferrous objects by a team of unmanned aerial vehicles (UAVs). In the proposed scenario, a team of UAVs is required to maximize the reward by collecting colored objects and delivering them to a predefined location. This task consists of several subtasks such as cooperative coverage path planning, object detection and state estimation, UAV self-localization, precise motion control, trajectory tracking, aerial grasping and dropping, and decentralized team coordination. A failure recovery and synchronization job manager is used to integrate all the presented subtasks together and to decrease vulnerability to individual subtask failures in real-world conditions. The whole system was developed for the Mohamed Bin Zayed International Robotics Challenge (MBZIRC) 2017, where it achieved the highest score and won Challenge No. 3—Treasure Hunt. This paper not only contains results from the MBZIRC 2017 competition but also evaluates the system performance in simulations and field tests conducted throughout the year-long development and preparations for the competition.

17.
  Cited by: 1 (self-citations: 0, other citations: 1)
Micro aerial vehicles (MAVs), especially quadrotors, have been widely used in field applications, such as disaster response, field surveillance, and search-and-rescue. For accomplishing such missions in challenging environments, the capability of navigating with full autonomy while avoiding unexpected obstacles is the most crucial requirement. In this paper, we present a framework for the online generation of safe and dynamically feasible trajectories directly on the point cloud, which is the lowest-level representation of range measurements and is applicable to different sensor types. We develop a quadrotor platform equipped with a three-dimensional (3D) light detection and ranging (LiDAR) sensor and an inertial measurement unit (IMU) for simultaneously estimating the state of the vehicle and building point cloud maps of the environment. Based on the incrementally registered point clouds, we generate and refine online a flight corridor, which represents the free space that the trajectory of the quadrotor should lie in. We represent the trajectory as piecewise Bézier curves using the Bernstein polynomial basis and formulate the trajectory generation problem as a convex program. By using Bézier curves, we can constrain the position and kinodynamics of the trajectory entirely within the flight corridor and the given physical limits. The proposed approach runs onboard in real time and is integrated into an autonomous quadrotor platform. We demonstrate fully autonomous quadrotor flights in unknown, complex environments to validate the proposed method.
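Two properties make Bézier curves in the Bernstein basis convenient for corridor-constrained trajectory generation: the convex-hull property lets corridor containment be imposed on control points alone, and the derivative's control points bound velocity. A minimal sketch, assuming a unit time parametrization and axis-aligned corridor boxes (simplifications of the paper's formulation):

```python
from math import comb

def bezier(ctrl, t):
    """Evaluate a Bézier curve in the Bernstein basis at t in [0, 1]."""
    n = len(ctrl) - 1
    return tuple(
        sum(comb(n, i) * (1 - t) ** (n - i) * t ** i * p[k] for i, p in enumerate(ctrl))
        for k in range(len(ctrl[0]))
    )

def inside_box(ctrl, lo, hi):
    """Convex-hull property: if every control point lies in the axis-aligned box
    [lo, hi], the entire curve segment does -- so the corridor constraint can be
    checked (or imposed, in the convex program) on control points only."""
    return all(lo[k] <= p[k] <= hi[k] for p in ctrl for k in range(len(lo)))

def max_speed_bound(ctrl, duration=1.0):
    """The derivative of a Bézier curve has control points n*(P[i+1]-P[i]);
    their largest component bounds the curve's per-axis velocity."""
    n = len(ctrl) - 1
    return max(
        max(abs(n * (b[k] - a[k])) / duration for k in range(len(a)))
        for a, b in zip(ctrl, ctrl[1:])
    )
```

In the actual method these conditions appear as linear constraints on the control points inside a convex program, rather than as post-hoc checks.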

18.
    
This paper presents coupled and decoupled multi-autonomous underwater vehicle (AUV) motion planning approaches for maximizing information gain. The work is motivated by applications in which multiple AUVs are tasked with obtaining video footage for the photogrammetric reconstruction of underwater archeological sites. Each AUV is equipped with a video camera and side-scan sonar. The side-scan sonar is used to initially collect low-resolution data to construct an information map of the site. Coupled and decoupled motion planning approaches with respect to this map are presented. Both planning methods seek to generate multi-AUV trajectories that capture close-up video footage of a site from a variety of different viewpoints, building on prior work in single-AUV rapidly exploring random tree (RRT) motion planning. The coupled and decoupled planners are compared in simulation. In addition, the multiple AUV trajectories constructed by each planner were executed at archeological sites located off the coast of Malta, albeit by a single AUV due to limited resources; specifically, each AUV trajectory for a plan was executed in sequence instead of simultaneously. Both planners also introduce modifications to a baseline RRT algorithm. The results of the paper present a number of trade-offs between the two planning approaches and demonstrate a large improvement in map coverage efficiency and runtime.

19.
    
Rovers operating on Mars require increasingly autonomous features to fulfill their challenging mission requirements. However, the inherent constraints of space systems render the implementation of complex algorithms an expensive and difficult task. In this paper, we propose an architecture for autonomous navigation. Efficient implementations of autonomous features are built on top of the ExoMars path-following navigation approach to enhance the safety and traversing capabilities of the rover. These features allow the rover to detect and avoid hazards and perform significantly longer traverses planned by operators on the ground. The navigation approach has been implemented and tested during field test campaigns on a planetary analogue terrain. The experiments evaluated the proposed architecture by autonomously completing several traverses of variable lengths while avoiding hazards. The approach relies only on the optical Localization Cameras stereo bench, a sensor found on all current rovers, and potentially allows for computationally inexpensive long-range autonomous navigation in terrains of medium difficulty.

20.
    
This paper discusses the results of a field experiment conducted at Savannah River National Laboratory to test the performance of several algorithms for the localization of radioactive materials. In this multirobot system, an unmanned aerial vehicle, a custom hexacopter, and an unmanned ground vehicle (UGV), the ClearPath Jackal, both equipped with γ-ray spectrometers, were used to collect data from two radioactive source configurations. Both the Fourier scattering transform and the Laplacian eigenmap algorithms for source detection were tested on the collected data sets. These algorithms transform raw spectral measurements into alternate spaces that allow clustering to detect trends within the data which indicate the presence of radioactive sources. This study also presents a point source model and an accompanying information-theoretic active exploration algorithm. Field testing validated the ability of this model to fuse aerial and ground-collected radiation measurements, and the exploration algorithm's ability to select informative actions to reduce model uncertainty, allowing the UGV to locate radioactive material online.
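Generically, a Laplacian eigenmap embeds raw spectral measurements via the eigenvectors of a graph Laplacian built from pairwise affinities; clusters in the embedded space can then indicate sources. A sketch under assumed Gaussian affinities (the `sigma` kernel width, the unnormalized Laplacian, and the embedding dimension are illustrative choices, not the authors' exact pipeline):

```python
import numpy as np

def laplacian_eigenmap(X, sigma=1.0, dim=2):
    """Embed the rows of X (e.g., spectra) into `dim` dimensions using the
    smallest nontrivial eigenvectors of the graph Laplacian of a Gaussian
    affinity matrix."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.exp(-sq / (2 * sigma ** 2))                    # Gaussian affinities
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))
    L = D - W                                             # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)                        # eigenvalues in ascending order
    return vecs[:, 1:dim + 1]                             # skip the constant eigenvector
```

Points that are close in the embedding belong to tightly connected groups of measurements, which is what the clustering step exploits.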


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号