20 similar documents found
1.
Computer Vision on Mars (total citations: 2, self-citations: 0, citations by others: 2)
Larry Matthies Mark Maimone Andrew Johnson Yang Cheng Reg Willson Carlos Villalpando Steve Goldberg Andres Huertas Andrew Stein Anelia Angelova 《International Journal of Computer Vision》2007,75(1):67-92
Increasing the level of spacecraft autonomy is essential for broadening the reach of solar system exploration. Computer vision has played and will continue to play an important role in increasing the autonomy of both spacecraft and Earth-based robotic vehicles. This article addresses progress on computer vision for planetary rovers and landers and has four main parts. First, we review major milestones in the development of computer vision for robotic vehicles over the last four decades; since research on applications for Earth and space has often been closely intertwined, the review includes elements of both. Second, we summarize the design and performance of the computer vision algorithms used on Mars in the NASA/JPL Mars Exploration Rover (MER) mission, which was a major step forward in the use of computer vision in space. These algorithms performed stereo vision and visual odometry for rover navigation, and feature tracking for horizontal velocity estimation of the landers. Third, we summarize ongoing research to improve vision systems for planetary rovers, which includes various aspects of noise reduction, FPGA implementation, and vision-based slip perception. Finally, we briefly survey other opportunities for computer vision to impact rovers, landers, and orbiters in future solar system exploration missions.
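As a concrete illustration of the stereo-vision building block mentioned in this abstract, the following Python sketch recovers a depth map from a rectified stereo pair using OpenCV block matching and the standard disparity-to-depth relation. It is not the MER flight software; the focal length, baseline, and image file names are placeholder assumptions.

```python
# Illustrative stereo-ranging sketch (not the MER flight software). Assumes a
# rectified stereo pair; the calibration values and file names are placeholders.
import numpy as np
import cv2

FOCAL_PX = 700.0     # focal length in pixels (assumed)
BASELINE_M = 0.30    # stereo baseline in metres (assumed)

def stereo_depth(left_gray, right_gray):
    """Dense depth map from the standard relation Z = f * B / d."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.full(disparity.shape, np.inf, dtype=np.float32)
    valid = disparity > 0
    depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
    return depth, valid

if __name__ == "__main__":
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # hypothetical files
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
    depth, valid = stereo_depth(left, right)
    print("median range over valid pixels [m]:", float(np.median(depth[valid])))
```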
2.
A Stereo-Vision-Based Navigation Algorithm for Mobile Robots (total citations: 1, self-citations: 0, citations by others: 1)
A mobile robot's stereo vision system not only provides 3D terrain maps for obstacle avoidance and path planning; its output can also be used for visual navigation. Building on a mobile robot stereo vision system, this paper studies a visual measurement algorithm based on stereo image pairs captured at two successive positions for continuous robot navigation, and discusses the factors affecting navigation accuracy together with methods for improvement. It also studies a terrain matching algorithm based on local and global 3D terrain maps for periodically correcting position errors; the algorithm is simple to implement, and its localization accuracy depends on the accuracy of the terrain maps. Experimental results demonstrate the effectiveness of both methods, which together cover short-range as well as medium- and long-range navigation tasks.
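The core of a two-position stereo measurement like the one described above is estimating the rigid motion between two sets of triangulated 3D points. A minimal sketch of that step (a standard SVD-based least-squares fit, not the paper's specific algorithm) is given below; the synthetic points and motion are illustrative.

```python
# Minimal sketch of frame-to-frame motion from 3D points triangulated by a
# stereo system at two positions (generic Kabsch/Umeyama fit, not the paper's
# specific algorithm).
import numpy as np

def rigid_transform(P, Q):
    """Least-squares R, t such that R @ p_i + t ~= q_i."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

# Synthetic example: points seen at position 1 and re-observed at position 2.
rng = np.random.default_rng(0)
P = rng.uniform(-5.0, 5.0, size=(30, 3))
yaw = np.deg2rad(10.0)
R_true = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                   [np.sin(yaw),  np.cos(yaw), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, 0.0, 0.1])
Q = P @ R_true.T + t_true
R_est, t_est = rigid_transform(P, Q)
print(np.allclose(R_est, R_true), np.round(t_est, 3))   # True [0.5 0.  0.1]
```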
3.
Philipp Lutz Marcus G. Müller Moritz Maier Samantha Stoneman Teodor Tomić Ingo von Bargen Martin J. Schuster Florian Steidle Armin Wedler Wolfgang Stürzl Rudolph Triebel 《Journal of Field Robotics》2020,37(4):515-551
We introduce a prototype flying platform for planetary exploration: the autonomous robot design for extraterrestrial applications (ARDEA). Communication with unmanned missions beyond Earth orbit suffers from time delay, so a key criterion for robotic exploration is a robot's ability to perform tasks without human intervention. For autonomous operation, all computations should be done on-board, and a Global Navigation Satellite System (GNSS) should not be relied on for navigation. Given these objectives, ARDEA is equipped with two pairs of wide-angle stereo cameras and an inertial measurement unit (IMU) for robust visual-inertial navigation and time-efficient, omni-directional 3D mapping. The four cameras cover a wide vertical field of view, enabling the system to operate in confined environments such as caves formed by lava tubes. The captured images are split into several pinhole camera views, which are used to run visual odometry instances simultaneously. The stereo output is used for simultaneous localization and mapping, 3D map generation, and collision-free motion planning. To operate the vehicle efficiently across a variety of missions, ARDEA's capabilities have been modularized into skills that can be assembled to fulfill a mission's objectives. These skills are defined generically, so that they are independent of the robot configuration, making the approach suitable for heterogeneous robotic teams. The diverse skill set also makes the micro aerial vehicle (MAV) useful for any task where autonomous exploration is needed, for example terrestrial search and rescue missions in which visual navigation in GNSS-denied indoor environments, such as partially collapsed man-made structures like buildings or tunnels, is crucial. We have demonstrated the robustness of our system in indoor and outdoor field tests.
4.
Marco Legittimo Simone Felicioni Fabio Bagni Andrea Tagliavini Alberto Dionigi Francesco Gatti Micaela Verucchi Gabriele Costante Marko Bertogna 《Journal of Field Robotics》2023,40(3):626-654
In recent decades, ego-motion estimation, or visual odometry (VO), has received a considerable amount of attention from the robotics research community, mainly due to its central importance in achieving robust localization and, as a consequence, autonomy. Different solutions have been explored, leading to a wide variety of approaches, mostly grounded in geometric methodologies and, more recently, in data-driven paradigms. To guide researchers and practitioners in choosing the best VO method, different benchmark studies have been published. However, the majority of them compare only a small subset of the most popular approaches, usually on specific data sets or configurations. In contrast, in this work we aim to provide a complete and thorough study of the most popular and best-performing geometric and data-driven solutions for VO. In our investigation, we consider several scenarios and environments, comparing the estimation accuracies and the role of the hyper-parameters of the selected approaches, and analyzing the computational resources they require. Experiments and tests are performed on different data sets (both publicly available and self-collected) and two different computational boards. The experimental results show the pros and cons of the tested approaches from different perspectives. The geometric simultaneous localization and mapping methods are confirmed to be the best performing, while data-driven approaches show robustness with respect to the nonideal conditions present in more challenging scenarios.
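For context, benchmark studies of this kind typically report trajectory-error metrics. The sketch below computes the translational relative pose error (RPE) between an estimated and a ground-truth trajectory; it is a generic metric implementation and not necessarily the exact evaluation protocol used in this study.

```python
# Generic VO evaluation sketch: translational relative pose error (RPE) between
# an estimated trajectory and ground truth, both given as lists of 4x4
# camera-to-world poses. Illustrative; the paper's protocol may differ.
import numpy as np

def rpe_translation_rmse(est_poses, gt_poses):
    errors = []
    for i in range(len(est_poses) - 1):
        d_est = np.linalg.inv(est_poses[i]) @ est_poses[i + 1]   # estimated step
        d_gt = np.linalg.inv(gt_poses[i]) @ gt_poses[i + 1]      # true step
        err = np.linalg.inv(d_gt) @ d_est                        # residual motion
        errors.append(np.linalg.norm(err[:3, 3]))                # translation part
    return float(np.sqrt(np.mean(np.square(errors))))            # RMSE in metres
```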
5.
6.
In practical applications, when an image contains many dynamic features moving in the same direction, these features can severely corrupt the estimates of a visual odometry system. To address this problem, this paper proposes a stereo visual odometry algorithm that decouples the estimation of camera rotation and translation according to the positions of image feature points. The algorithm uses the stereo vision system to divide feature points into "far points" and "near points". Within a random sample consensus (RANSAC) framework, the far points are used to estimate the attitude of the vision system; then, with the attitude known, the near points are used to estimate the camera translation, yielding a decoupled rotation-translation computation. This treatment uses the attitude constraint to reduce the influence of nearby moving objects on the visual odometry. Experiments show that, in real road environments, the proposed decoupled rotation-translation estimation algorithm rejects dynamic features more effectively than traditional algorithms that estimate rotation and translation simultaneously; the proposed algorithm is more resistant to interference from dynamic features, more robust, and more accurate.
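The decoupling idea described in this abstract can be illustrated with a toy sketch: distant points are nearly insensitive to translation, so they can fix the rotation, after which nearby points fix the translation. The code below is a simplified, RANSAC-free illustration of that principle under ideal matches, not the published algorithm.

```python
# Toy illustration of rotation/translation decoupling (idealised matches,
# no RANSAC; not the published algorithm).
import numpy as np

def rotation_from_far_points(bearings_prev, bearings_curr):
    """Rotation that best maps previous-frame unit bearing vectors of distant
    points onto current-frame bearings; for far points the translation is
    negligible, so this is approximately a pure-rotation (Kabsch) fit."""
    H = bearings_prev.T @ bearings_curr
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T

def translation_from_near_points(R, pts_prev, pts_curr):
    """With the rotation fixed by the far points, q_i ~= R p_i + t for nearby
    3D points, so the translation is the mean residual."""
    return (pts_curr - pts_prev @ R.T).mean(axis=0)
```

In the published method these steps run inside a RANSAC loop so that dynamic features are rejected as outliers; that machinery is omitted here.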
7.
In spite of the good performance of space exploration missions, open issues still remain to be solved. In autonomous or composite semi-autonomous exploration of planetary land surfaces, rover localization is one such issue. The rovers of these missions (e.g., MER and MSL) navigate relative to their landing spot, ignoring their exact position in the coordinate system defined for the celestial body they explore. However, future advanced missions, like Mars Sample Return, will require the localization of rovers in a global frame rather than the arbitrarily defined landing frame. In this paper we attempt to retrieve the rover's absolute location by identifying matching Regions of Interest (ROIs) between orbital and ground images. In particular, we propose a system comprising two parts, an offline one and an onboard one, which functions as follows: in advance of the mission, a Global ROI Network (GN) is built offline by investigating the satellite images near the predicted touchdown ellipse, while during the mission a Local ROI Network (LN) is constructed from the images acquired by the vision system of the rover along its traverse. The latter procedure relies on accurate VO-based relative rover localization. The LN is then paired with the GN through a modified 2D DARCES algorithm. The system has been assessed on real data collected by ESA in the Atacama desert. The results demonstrate the system's potential to perform absolute localization, on condition that the area includes discriminative ROIs. The main contribution of this work is the enablement of global localization on contemporary rovers without requiring any additional hardware, such as long-range LIDARs.
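A rough flavour of matching a small local ROI network against a global one can be given with a generic 2D registration sketch: hypothesise pairwise correspondences, derive a rigid transform, and score inliers. This is only an illustration of the idea; it is not the modified 2D DARCES algorithm used in the paper.

```python
# Toy 2D registration sketch in the spirit of the network matching described
# above; this is NOT the modified 2D DARCES algorithm from the paper.
import numpy as np
from itertools import permutations

def align_networks(local_pts, global_pts, tol=2.0):
    """Find a rigid 2D transform mapping local ROI centres onto global ones
    that explains the most points to within `tol` (map units)."""
    best_count, best_transform = 0, None
    for a, b in permutations(range(len(local_pts)), 2):
        for c, d in permutations(range(len(global_pts)), 2):
            # Hypothesis: local a <-> global c and local b <-> global d.
            v_l = local_pts[b] - local_pts[a]
            v_g = global_pts[d] - global_pts[c]
            if abs(np.linalg.norm(v_l) - np.linalg.norm(v_g)) > tol:
                continue                          # incompatible pair distance
            ang = np.arctan2(v_g[1], v_g[0]) - np.arctan2(v_l[1], v_l[0])
            R = np.array([[np.cos(ang), -np.sin(ang)],
                          [np.sin(ang),  np.cos(ang)]])
            t = global_pts[c] - R @ local_pts[a]
            mapped = local_pts @ R.T + t
            d2 = np.linalg.norm(mapped[:, None, :] - global_pts[None, :, :], axis=2)
            count = int(np.sum(d2.min(axis=1) < tol))
            if count > best_count:
                best_count, best_transform = count, (R, t)
    return best_count, best_transform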
8.
9.
10.
Dimitrios Geromichalos Martin Azkarate Emmanouil Tsardoulias Levin Gerdes Loukas Petrou Carlos Perez Del Pulgar 《Journal of Field Robotics》2020,37(5):830-847
This paper describes a novel approach to simultaneous localization and mapping (SLAM) applied to the autonomous planetary rover exploration scenario, reducing both the relative and absolute localization errors using two well-proven techniques: particle filters and scan matching. Continuous relative localization is improved by matching high-resolution sensor scans to the local map created online. Additionally, to avoid issues with drifting localization, absolute localization is globally corrected at discrete times, according to predefined event criteria, by matching the current local map to the orbiter's global map. The resolutions of the local and global maps can be chosen appropriately for computation and accuracy purposes. Further, the online-generated local map, in the form of a structured elevation grid map, can also be used to evaluate the traversability of the surrounding environment and allow for continuous navigation. The objective of this study is to support long-range, low-supervision planetary exploration. The implemented SLAM technique has been validated with a data set acquired during a field test campaign performed at the Teide volcano on the island of Tenerife, representative of a Mars/Moon exploration scenario.
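The particle-filter part of such an approach can be sketched compactly: each particle is a candidate rover position, weighted by how well the current elevation scan agrees with the global map at that position, then resampled. The grid size, noise model, and likelihood below are illustrative assumptions, not the paper's implementation.

```python
# Simplified particle-filter sketch: weight candidate rover positions by how
# well a local elevation scan matches an (assumed) global elevation map.
import numpy as np

rng = np.random.default_rng(42)
global_map = rng.normal(0.0, 0.5, size=(200, 200))   # elevation grid [m], assumed

def scan_likelihood(particle_xy, scan_cells, scan_heights, sigma=0.1):
    """Gaussian likelihood of a local scan given a particle position (in cells)."""
    x, y = np.round(particle_xy).astype(int)
    cells = np.clip(scan_cells + np.array([x, y]), 0, np.array(global_map.shape) - 1)
    predicted = global_map[cells[:, 0], cells[:, 1]]
    residual = scan_heights - predicted
    return np.exp(-0.5 * np.sum(residual ** 2) / sigma ** 2)

def update_particles(particles, weights, scan_cells, scan_heights):
    weights = weights * np.array(
        [scan_likelihood(p, scan_cells, scan_heights) for p in particles])
    weights /= weights.sum() + 1e-12
    # Systematic resampling keeps the particle set focused on likely poses.
    positions = (np.arange(len(weights)) + rng.random()) / len(weights)
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions),
                     len(weights) - 1)
    return particles[idx], np.full(len(weights), 1.0 / len(weights))

# Example: particles spread over the map, scan taken near cell (120, 80).
particles = rng.uniform(0, 199, size=(500, 2))
weights = np.full(500, 1.0 / 500)
scan_cells = rng.integers(-5, 6, size=(40, 2))
cells = np.clip(scan_cells + np.array([120, 80]), 0, 199)
scan_heights = global_map[cells[:, 0], cells[:, 1]]
particles, weights = update_particles(particles, weights, scan_cells, scan_heights)
```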
11.
The Telerobotics Program of the National Aeronautics and Space Administration (NASA) Office of Space Science is developing innovative telerobotics technologies to enable or support a wide range of space missions over the next decade and beyond. These technologies fall into four core application areas: landers, surface vehicles (rovers), and aerovehicles for solar system exploration and science; rovers for commercially supported lunar activities; free-flying and platform-attached robots for in-orbit servicing and assembly; and robots supporting in-orbit biotechnology and microgravity experiments. Such advanced robots will enable missions to explore Mars, Venus, and Saturn's moon Titan, as well as probes to sample comets and asteroids. They may also play an important role in commercially funded exploration of large regions on Earth's Moon, as well as the eventual development of a human-supporting Lunar Outpost. In addition, in-orbit servicing of satellites and maintenance of large platforms like the International Space Station will require extensive robotics capabilities.
12.
Outdoor Visual Position Estimation for Planetary Rovers (total citations: 2, self-citations: 0, citations by others: 2)
This paper describes (1) a novel, effective algorithm for outdoor visual position estimation; (2) the implementation of this algorithm in the Viper system; and (3) the extensive tests that have demonstrated the superior accuracy and speed of the algorithm. The Viper system (Visual Position Estimator for Rovers) is geared towards robotic space missions, and the central purpose of the system is to increase the situational awareness of a rover operator by presenting accurate position estimates. The system has been extensively tested with terrestrial and lunar imagery, in terrains ranging from moderate—the rounded hills of Pittsburgh and the high deserts of Chile—to rugged—the dramatic relief of the Apollo 17 landing site—to extreme—the jagged peaks of the Rockies. Results have consistently demonstrated that the visual estimation algorithm estimates position with an accuracy and reliability that greatly surpass previous work.
13.
Future lunar/planetary exploration missions will demand mobile robots with the capability of reaching more challenging science targets and driving farther per day than the current Mars rovers. Among other improvements, reliable slippage estimation and compensation strategies will play a key role in enabling safer and more efficient navigation. This paper reviews and discusses this body of research in the context of planetary exploration rovers. Previously published state-of-the-art methods that have been validated through field testing are included as exemplary results. Limitations of the current techniques and recommendations for future developments and planetary missions close the survey.
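As background for the quantity being surveyed, a common way to express longitudinal slip is to compare the distance the wheels report (wheel odometry) against the distance actually travelled, for example from visual odometry. The sketch below uses the conventional definition slip = 1 - d_actual / d_wheel; it is a generic illustration, not a method taken from the survey.

```python
# Simple longitudinal slip sketch: compare wheel-odometry distance with the
# distance actually travelled (e.g., from visual odometry). The definition
# slip = 1 - d_actual / d_wheel follows the common convention (assumed here).
import numpy as np

def slip_ratio(wheel_distance_m, vo_distance_m, eps=1e-6):
    """Slip in [0, 1): 0 means no slip, values near 1 mean the wheels spin
    in place while the rover barely moves."""
    if wheel_distance_m < eps:
        return 0.0
    return float(np.clip(1.0 - vo_distance_m / wheel_distance_m, 0.0, 1.0))

# Example: wheels report 1.00 m of travel, visual odometry measures 0.82 m.
print(slip_ratio(1.00, 0.82))   # -> 0.18
```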
14.
15.
José Martínez-Carranza Richard Bostock Simon Willcox Ian Cowling Walterio Mayol-Cuevas 《Advanced Robotics》2016,30(2):119-130
This paper develops and evaluates methods for performing auto-retrieval of a micro aerial vehicle (MAV) using fast 6D relocalisation from visual features. Auto-retrieval involves a combination of guided operation, to direct the vehicle through obstacles using a human pilot, and autonomous operation, to navigate the vehicle on its return or during re-exploration. This approach is useful in tasks such as industrial inspection and monitoring, and in particular for operating indoors in GPS-denied environments. Our relocalisation methodology contrasts two sources of information, depth data and feature co-visibility, in a novel manner that validates matches before a RANSAC procedure. The result is the ability to perform 6D relocalisation at an average of 50 Hz on individual maps containing 120 K features. The use of feature co-visibility reduces the memory footprint and removes the need to employ depth data as used in previous work. The paper concludes with an example of an industrial application involving visual monitoring from a MAV aided by autonomous navigation.
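The final pose-recovery step of feature-based relocalisation is commonly solved with PnP inside RANSAC once 2D-3D matches are available. The sketch below shows that step with OpenCV; the paper's co-visibility-based match validation is not reproduced, and the intrinsics and matches are assumed given.

```python
# Sketch of the final relocalisation step: 6D camera pose from 2D-3D matches
# via PnP + RANSAC. Match validation as described in the paper is omitted;
# intrinsics K and the matches themselves are assumed to be provided.
import numpy as np
import cv2

def relocalise(map_points_3d, image_points_2d, K):
    """map_points_3d: Nx3 world points; image_points_2d: Nx2 pixels; K: 3x3."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        map_points_3d.astype(np.float32),
        image_points_2d.astype(np.float32),
        K.astype(np.float32), None,
        reprojectionError=3.0, iterationsCount=100)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)          # rotation: world -> camera
    return R, tvec.reshape(3), inliers  # pose of the map in the camera frame
```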
16.
17.
Stereo Visual Odometry Algorithm and Experimental Study for Lunar Rover Traverse Exploration (total citations: 2, self-citations: 0, citations by others: 2)
Measuring the distance traveled by a lunar rover on the lunar surface is an important prerequisite for safe and effective exploration. Localization based on visual odometry is an effective way to cope with slippage on the lunar surface and to improve the accuracy of dead-reckoned mileage, and it is therefore of great significance for high-precision lunar rover localization. This paper presents an in-depth study of the design and implementation of a stereo visual odometry algorithm, and experimentally validates visual odometry localization methods based on different feature extraction algorithms. Comparison against high-precision total station data verifies the measurement accuracy and effectiveness of the algorithm.
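Comparing visual odometry front-ends built on different feature extraction algorithms, as this abstract describes, usually starts from detection and matching statistics. The sketch below contrasts two common detectors (ORB and SIFT) on a pair of frames; the specific detectors evaluated in the paper are not stated here, and the image file names are placeholders.

```python
# Illustrative comparison of feature detectors for a VO front-end (ORB vs SIFT).
# The detectors actually evaluated in the paper are not specified here.
import cv2

def match_count(detector, img1, img2, norm):
    kp1, des1 = detector.detectAndCompute(img1, None)
    kp2, des2 = detector.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return len(kp1), len(kp2), 0
    matches = cv2.BFMatcher(norm, crossCheck=True).match(des1, des2)
    return len(kp1), len(kp2), len(matches)

if __name__ == "__main__":
    a = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frames
    b = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)
    print("ORB :", match_count(cv2.ORB_create(2000), a, b, cv2.NORM_HAMMING))
    print("SIFT:", match_count(cv2.SIFT_create(), a, b, cv2.NORM_L2))
```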
18.
To improve the accuracy of stereo visual odometry in complex environments, a stereo visual odometry method that considers multiple pose-estimation constraints is proposed. First, separate mathematical models are established for matched points with known depth and points with unknown depth, and the unknown-depth points are incorporated into a 2D-2D pose estimation model so that image information is fully exploited. Second, the 3D-2D pose estimation model is improved on the basis of keyframe map points, and the keyframe map points are updated using the map points of the current frame, which increases the number of matched point pairs and improves pose-estimation accuracy. Finally, based on the improved 2D-2D and 3D-2D models, a pose estimation model with multiple pose-estimation constraints is established and combined with local bundle adjustment to locally optimize the pose estimates, achieving high localization accuracy with small accumulated error. Experiments on data sets and online experiments in real scenes show that the proposed method meets real-time localization requirements and effectively improves autonomous localization accuracy.
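The two pose models mentioned above can be illustrated separately with OpenCV: a 2D-2D estimate from the essential matrix for points without depth, and a 3D-2D estimate from PnP for map points with known depth. The sketch below shows only these building blocks; how the paper combines the constraints and applies local bundle adjustment is not reproduced.

```python
# Building-block sketch of the two pose models mentioned above (OpenCV-based,
# illustrative; the paper's constraint fusion and bundle adjustment are omitted).
import numpy as np
import cv2

def pose_2d2d(px_prev, px_curr, K):
    """Rotation and unit-scale translation from matched pixels (unknown depth)."""
    p0 = px_prev.astype(np.float32)
    p1 = px_curr.astype(np.float32)
    E, mask = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=mask)
    return R, t                      # t is only recovered up to scale

def pose_3d2d(map_pts_3d, px_curr, K):
    """Metric pose from map points with known depth (PnP inside RANSAC)."""
    ok, rvec, tvec, _ = cv2.solvePnPRansac(
        map_pts_3d.astype(np.float32), px_curr.astype(np.float32),
        K.astype(np.float32), None)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec.reshape(3)
```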
19.
In this paper we present a novel technique to analyze stereo images generated from a scanning electron microscope (SEM). The two main features of this technique are that it uses a binary linear programming approach to set up and solve the correspondence problem, and that it uses constraints based on the physics of SEM image formation. Binary linear programming is a powerful tool with which to tackle constrained optimization problems, especially in cases that involve matching between one data set and another. We have also analyzed the process of SEM image formation and present constraints that are useful in solving the stereo correspondence problem. The technique has been tested on many images; results for a few wafers are included here.
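Binary linear programming for correspondence can be illustrated with a generic one-to-one matching formulation: one binary variable per candidate pair, each feature used at most once, total similarity maximised. The sketch below uses SciPy's MILP solver and omits the SEM-specific image-formation constraints described in the paper.

```python
# Generic binary-linear-programming matching sketch; the SEM-specific
# constraints from the paper are not modelled here.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def match_by_blp(score):
    """score[i, j]: similarity of left feature i and right feature j (>= 0).
    Returns (i, j) matches maximising total similarity, each feature used
    at most once."""
    n, m = score.shape
    c = -score.ravel()                              # milp minimises, so negate
    A_rows = np.kron(np.eye(n), np.ones((1, m)))    # each left feature <= 1 match
    A_cols = np.kron(np.ones((1, n)), np.eye(m))    # each right feature <= 1 match
    A = np.vstack([A_rows, A_cols])
    res = milp(c,
               constraints=LinearConstraint(A, lb=0, ub=1),
               integrality=np.ones(n * m),          # every variable is integer
               bounds=Bounds(0, 1))                 # ...and restricted to {0, 1}
    x = np.round(res.x.reshape(n, m))
    return [(i, j) for i in range(n) for j in range(m) if x[i, j] > 0.5]

# Example: three left features, three right features.
score = np.array([[0.9, 0.1, 0.0],
                  [0.2, 0.8, 0.1],
                  [0.0, 0.3, 0.7]])
print(match_by_blp(score))   # expected: [(0, 0), (1, 1), (2, 2)]
```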
20.
Levin Gerdes Martin Azkarate José Ricardo Sánchez-Ibáñez Luc Joudrier Carlos Jesús Perez-del-Pulgar 《Journal of Field Robotics》2020,37(7):1153-1170
Rovers operating on Mars require more and more autonomous features to fulfill their challenging mission requirements. However, the inherent constraints of space systems render the implementation of complex algorithms an expensive and difficult task. In this paper, we propose an architecture for autonomous navigation. Efficient implementations of autonomous features are built on top of the ExoMars path following navigation approach to enhance the safety and traversing capabilities of the rover. These features allow the rover to detect and avoid hazards and perform significantly longer traverses planned by operators on the ground. The efficient navigation approach has been implemented and tested during field test campaigns on a planetary analogue terrain. The experiments evaluated the proposed architecture by autonomously completing several traverses of variable lengths while avoiding hazards. The approach relies only on the optical Localization Cameras stereo bench, a sensor that is found in all current rovers, and potentially allows for computationally inexpensive long-range autonomous navigation in terrains of medium difficulty.
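For readers unfamiliar with path-following guidance, the sketch below shows a generic pure-pursuit curvature command, included only to illustrate the kind of low-level loop such an architecture builds upon; it is not the ExoMars path following algorithm.

```python
# Generic path-following sketch (pure pursuit); illustrative only, not the
# ExoMars path following approach referenced above.
import numpy as np

def pure_pursuit_curvature(pose_xyyaw, path_xy, lookahead=1.0):
    """Steering curvature that drives the rover toward the path point roughly
    `lookahead` metres away from the current pose."""
    x, y, yaw = pose_xyyaw
    d = np.linalg.norm(path_xy - np.array([x, y]), axis=1)
    target = path_xy[np.argmin(np.abs(d - lookahead))]   # point nearest to lookahead
    dx, dy = target[0] - x, target[1] - y                # target in world frame
    lx = np.cos(-yaw) * dx - np.sin(-yaw) * dy           # target in rover frame
    ly = np.sin(-yaw) * dx + np.cos(-yaw) * dy
    L = max(np.hypot(lx, ly), 1e-6)
    return 2.0 * ly / L**2                               # pure-pursuit curvature

# Example: straight path along x, rover offset 0.5 m to the side of it.
path = np.column_stack([np.linspace(0, 10, 101), np.zeros(101)])
print(pure_pursuit_curvature((0.0, 0.5, 0.0), path))     # negative: steer back to path
```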