Similar Documents
11 similar documents found
1.
Computer Vision on Mars
Increasing the level of spacecraft autonomy is essential for broadening the reach of solar system exploration. Computer vision has played and will continue to play an important role in increasing the autonomy of both spacecraft and Earth-based robotic vehicles. This article addresses progress on computer vision for planetary rovers and landers and has four main parts. First, we review major milestones in the development of computer vision for robotic vehicles over the last four decades. Since research on applications for Earth and space has often been closely intertwined, the review includes elements of both. Second, we summarize the design and performance of the computer vision algorithms used on Mars in the NASA/JPL Mars Exploration Rover (MER) mission, which was a major step forward in the use of computer vision in space. These algorithms performed stereo vision and visual odometry for rover navigation and feature tracking for horizontal velocity estimation for the landers. Third, we summarize ongoing research to improve vision systems for planetary rovers, including noise reduction, FPGA implementation, and vision-based slip perception. Finally, we briefly survey other opportunities for computer vision to impact rovers, landers, and orbiters in future solar system exploration missions.
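The stereo ranging mentioned in this abstract reduces, per feature, to the pinhole relation Z = f·B/d. A minimal sketch follows; the focal length and baseline are illustrative values, not MER flight parameters.

```python
import numpy as np

# Illustrative stereo parameters (not actual MER values).
FOCAL_PX = 700.0     # focal length in pixels
BASELINE_M = 0.3     # stereo baseline in meters

def depth_from_disparity(disparity_px: np.ndarray) -> np.ndarray:
    """Pinhole stereo: Z = f * B / d. Non-positive disparity maps to inf."""
    d = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        return np.where(d > 0, FOCAL_PX * BASELINE_M / d, np.inf)

# A feature at 21 px disparity lies at 10 m range, one at 70 px at 3 m.
print(depth_from_disparity(np.array([21.0, 70.0])))  # [10.  3.]
```

Visual odometry then tracks such triangulated features across frames to estimate the rover's motion.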

2.
This study presents the electromechanical design, the control approach, and the results of a field-test campaign with the hybrid wheeled-leg rover SherpaTT. The rover is in the 150 kg class and features an actively articulated suspension system comprising four legs with actively driven and steered wheels at each leg's end. Each leg has five active degrees of freedom, for a total of 20 in the complete locomotion system. The control approach is based on force measurements at each wheel mounting point and roll-pitch measurements of the rover's main body, allowing active adaptation to sloping terrain, active shifting of the center of gravity within the rover's support polygon, active roll-pitch influencing, and body-ground clearance control. Exteroceptive sensors such as cameras or laser range finders are not required for ground adaptation. A purely reactive approach is used, rendering a planning algorithm for stability control or force distribution unnecessary and thus simplifying the control effort. The control approach was tested during a four-week field deployment in the desert of Utah. The results presented in this paper substantiate the feasibility of the chosen approach: the main power requirement for locomotion comes from the drive system; active adaptation plays only a minor role in power consumption. Active force distribution between the wheels succeeds in different footprints and terrain types and is not influenced by controlling the body's roll-pitch angle in parallel with the force control. The system's slope-climbing capability was successfully tested on slopes of up to 28° inclination covered with loose soil and duricrust. The main contribution of this study is the experimental validation of the actively articulated suspension of SherpaTT in conjunction with a reactive control approach; hardware and software design as well as experimentation are therefore part of this study.
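Shifting the center of gravity within the support polygon, as described above, presupposes a test of whether the projected CoG lies inside the polygon spanned by the wheel contact points. A minimal ray-casting sketch (coordinates and footprint are illustrative, not SherpaTT geometry):

```python
import numpy as np

def cog_inside_support_polygon(cog_xy, feet_xy) -> bool:
    """Ray-casting point-in-polygon test for the projected CoG.
    feet_xy: wheel contact points (N x 2), given in polygon order."""
    x, y = cog_xy
    pts = np.asarray(feet_xy, dtype=float)
    inside = False
    n = len(pts)
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge straddles the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Square footprint of four wheels, 1 m half-width (illustrative).
feet = [(-1, -1), (1, -1), (1, 1), (-1, 1)]
print(cog_inside_support_polygon((0.2, 0.1), feet))   # True
print(cog_inside_support_polygon((1.5, 0.0), feet))   # False
```

A reactive controller would command leg joints to keep this test true with margin, rather than planning ahead.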

3.
In recent decades, ego-motion estimation, or visual odometry (VO), has received considerable attention from the robotics research community, mainly due to its central importance in achieving robust localization and, as a consequence, autonomy. Different solutions have been explored, leading to a wide variety of approaches, mostly grounded in geometric methodologies and, more recently, in data-driven paradigms. To guide researchers and practitioners in choosing the best VO method, several benchmark studies have been published. However, most of them compare only a small subset of the most popular approaches, usually on specific data sets or configurations. In contrast, this work aims to provide a complete and thorough study of the most popular and best-performing geometric and data-driven solutions for VO. In our investigation, we considered several scenarios and environments, comparing the estimation accuracy and the role of the hyperparameters of the selected approaches, and analyzing the computational resources they require. Experiments were performed on different data sets (both publicly available and self-collected) and on two different computational boards. The experimental results show the pros and cons of the tested approaches from different perspectives. The geometric simultaneous localization and mapping methods are confirmed to be the best performing, while data-driven approaches show robustness to the nonideal conditions present in more challenging scenarios.
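Comparing estimation accuracy across VO methods, as this benchmark does, typically uses a translational error metric such as the absolute trajectory error (ATE). A minimal sketch, assuming the trajectories are already aligned in a common frame:

```python
import numpy as np

def ate_rmse(est_xyz: np.ndarray, gt_xyz: np.ndarray) -> float:
    """Absolute trajectory error: RMSE over per-pose translational
    residuals, assuming both trajectories share one reference frame."""
    err = est_xyz - gt_xyz
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))

gt = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], dtype=float)
est = gt + np.array([[0, 0.3, 0], [0, 0.3, 0], [0, 0.3, 0]])  # 0.3 m lateral drift
print(round(ate_rmse(est, gt), 6))  # 0.3
```

Full benchmarks additionally align the trajectories first (e.g., by a least-squares rigid fit) and report relative pose error over fixed distances.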

4.
The Mars Science Laboratory (MSL) Curiosity rover landed in Gale crater in August of 2012 on its mission to explore Mt. Sharp as the first planetary rover to collect and analyze rock and regolith samples. On this new mission, sampling operations were conceived to be executed serially and in situ, on a “sample chain” along which sample would be collected, then processed, then delivered to sample analysis instruments, analyzed there, and then discarded so the chain could be repeated. This paper describes the evolution of this relatively simple chain into a richer sampling network, responding to science and engineering desires that came into focus only as the mission matured, scientific discoveries were made, and anomalies were encountered. The rover flight and ground system architectures retained significant heritage from past missions, while extending capabilities in anticipation of the need for adaptation. As evolution occurred, the architecture permitted nimble extension of sampling behavior without time‐consuming flight software updates or significant impact to daily operations. This paper presents the major components of this architecture and discusses some of the results of successful adaptation across thousands of Sols of Mars operations.
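The "chain evolving into a network" idea can be caricatured as a transition table that is data, not code: adding a branch extends the table without rewriting control logic. The state names below are purely illustrative, not actual MSL flight-software states.

```python
# Toy sketch: a serial sample chain generalized into a network.
# Transitions are data, so new sampling behaviors can be added
# without changing the control loop (names are hypothetical).
TRANSITIONS = {
    "IDLE": ["ACQUIRE"],
    "ACQUIRE": ["PROCESS"],
    "PROCESS": ["DELIVER", "DISCARD"],   # network: processing may branch
    "DELIVER": ["ANALYZE"],
    "ANALYZE": ["DISCARD"],
    "DISCARD": ["IDLE"],
}

def run(plan):
    """Execute a planned sequence, rejecting illegal transitions."""
    state = "IDLE"
    for nxt in plan:
        if nxt not in TRANSITIONS[state]:
            raise ValueError(f"illegal transition {state} -> {nxt}")
        state = nxt
    return state

print(run(["ACQUIRE", "PROCESS", "DELIVER", "ANALYZE", "DISCARD"]))  # DISCARD
```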

5.
In spite of the good performance of space exploration missions, open issues remain. In autonomous or semi-autonomous exploration of planetary surfaces, rover localization is one such issue. The rovers of these missions (e.g., MER and MSL) navigate relative to their landing spot, ignoring their exact position in the coordinate system defined for the celestial body they explore. However, future advanced missions, such as Mars Sample Return, will require localization of rovers in a global frame rather than the arbitrarily defined landing frame. In this paper we attempt to retrieve the rover's absolute location by identifying matching Regions of Interest (ROIs) between orbital and ground images. In particular, we propose a system comprising two parts, one offline and one onboard, which functions as follows: in advance of the mission, a Global ROI Network (GN) is built offline by analyzing the satellite images near the predicted touchdown ellipse, while during the mission a Local ROI Network (LN) is constructed from the images acquired by the rover's vision system along its traverse. The latter procedure relies on accurate VO-based relative rover localization. The LN is then paired with the GN through a modified 2D DARCES algorithm. The system has been assessed on real data collected by ESA in the Atacama desert. The results demonstrate the system's potential to perform absolute localization, on condition that the area includes discriminative ROIs. The main contribution of this work is enabling global localization on contemporary rovers without requiring any additional hardware, such as long-range LIDARs.
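Once ROI correspondences between the local and global networks are hypothesized, registering the two amounts to a 2D rigid fit. The sketch below shows only that least-squares step (a Kabsch/SVD fit), as a simplified stand-in; the paper's actual pairing uses a modified 2D DARCES algorithm to find the correspondences in the first place.

```python
import numpy as np

def fit_rigid_2d(local_pts, global_pts):
    """Least-squares 2D rotation + translation mapping matched local
    ROI coordinates onto global (orbital-map) coordinates."""
    P = np.asarray(local_pts, float)
    Q = np.asarray(global_pts, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # enforce a proper rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t

theta = np.pi / 6                             # synthetic 30-degree offset
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
local = np.array([[0, 0], [2, 0], [1, 3], [4, 1]], float)
glob = local @ R_true.T + np.array([5.0, -2.0])
R, t = fit_rigid_2d(local, glob)
print(np.allclose(R, R_true), np.allclose(t, [5.0, -2.0]))  # True True
```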

6.
Future lunar/planetary exploration missions will demand mobile robots with the capability of reaching more challenging science targets and driving farther per day than the current Mars rovers. Among other improvements, reliable slippage estimation and compensation strategies will play a key role in enabling a safer and more efficient navigation. This paper reviews and discusses this body of research in the context of planetary exploration rovers. Previously published state‐of‐the‐art methods that have been validated through field testing are included as exemplary results. Limitations of the current techniques and recommendations for future developments and planetary missions close the survey.
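The quantity such methods estimate is usually the longitudinal slip ratio, comparing commanded wheel surface speed with the body's actual forward speed. A minimal sketch with illustrative numbers:

```python
def slip_ratio(wheel_radius_m: float, wheel_omega: float, body_speed: float) -> float:
    """Longitudinal slip during driving: s = (r*omega - v) / (r*omega).
    s = 0 means pure rolling; s -> 1 means the wheel spins in place."""
    v_wheel = wheel_radius_m * wheel_omega
    if v_wheel == 0.0:
        return 0.0
    return (v_wheel - body_speed) / v_wheel

# Wheel surface moves at 0.10 m/s but the rover only advances 0.07 m/s.
print(round(slip_ratio(0.25, 0.4, 0.07), 6))  # 0.3
```

In practice the body speed v comes from visual odometry while r*omega comes from wheel encoders, which is why slip estimation and VO are so tightly linked in the rover literature.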

7.
Rigid-body pose estimation, a key research direction in computer vision, aims to determine the multiple degrees of freedom of 3D objects in a scene, including positional translation and orientational rotation, and is increasingly applied in industrial manipulator control, on-orbit servicing, autonomous driving, and augmented reality. This paper surveys the pipeline, method taxonomy, and open problems of rigid-body pose estimation from a single image. Methods that estimate multi-DoF pose from a single image of a rigid object are summarized, classified, and compared, with emphasis on the general estimation pipeline, the evolution and categorization of methods, commonly used datasets and evaluation criteria, and the current state of the art and its outlook. At present, multi-DoF rigid-body pose estimation methods perform well mainly in single, specific application scenarios; no method generalizes across composite scenes, and existing methods degrade markedly in accuracy and efficiency under varied lighting, cluttered and occluded scenes, rotationally symmetric objects, and high inter-class similarity. Considering these open problems and the impetus of deep learning, development trends are forecast along six directions: scene-level multi-object reasoning, self-supervised learning, front-end detection networks, lightweight and efficient network design, multi-information-fusion pose estimation frameworks, and image representation spaces.
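The quantity all these methods recover is a pose (R, t) that, with camera intrinsics, maps object points into the image. A minimal pinhole projection sketch; the intrinsics and pose values are illustrative, not from any surveyed method:

```python
import numpy as np

def project_points(points_3d, R, t, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Project object-frame 3D points into the image under pose (R, t)
    with illustrative pinhole intrinsics (fx, fy, cx, cy)."""
    P = np.asarray(points_3d, float) @ R.T + t   # object frame -> camera frame
    z = P[:, 2]
    u = fx * P[:, 0] / z + cx
    v = fy * P[:, 1] / z + cy
    return np.stack([u, v], axis=1)

R = np.eye(3)                       # identity orientation
t = np.array([0.0, 0.0, 5.0])       # object 5 m in front of the camera
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(project_points(pts, R, t))
# [[320. 240.]
#  [420. 240.]]
```

Pose estimators invert this mapping: given detected 2D points and their 3D counterparts, they solve for the (R, t) that best reproduces the observations.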

8.
Many tasks in space activities, such as fly-around observation and approach-and-docking with a target, depend on relative pose estimation as a key technology, and relative pose estimation for non-cooperative targets is a particular focus and difficulty. To address it, this paper proposes a tightly coupled method for estimating the relative pose of a non-cooperative space target by fusing a monocular camera and a laser rangefinder. The monocular camera acquires an image sequence of the target; during initialization, the laser rangefinder resolves the monocular scale ambiguity, and a world coordinate frame at true scale is constructed. During subsequent continuous pose estimation of the non-cooperative target, camera and rangefinder data are fused in a tightly coupled form to refine the estimated pose and suppress estimation drift. Finally, image sequences of a non-cooperative space target generated with Blender are used in simulation to verify that the proposed algorithm robustly obtains relative poses of high accuracy with good real-time performance.
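The scale-initialization idea can be sketched in isolation: one metric range reading to a tracked feature fixes the unknown scale of a monocular reconstruction. This is only the initialization step, not the paper's tightly coupled estimator, and all values are illustrative.

```python
import numpy as np

def metric_scale(unit_translation, vo_depths, laser_range_m, tracked_idx=0):
    """Resolve monocular scale ambiguity from one laser range reading:
    scale = metric range / up-to-scale depth of the ranged feature."""
    s = laser_range_m / vo_depths[tracked_idx]
    return s * np.asarray(unit_translation, float), s

t_unit = np.array([0.0, 0.0, 1.0])     # up-to-scale camera translation
depths = np.array([2.0, 4.0])          # up-to-scale feature depths
t_metric, s = metric_scale(t_unit, depths, laser_range_m=10.0)
print(s, t_metric)  # 5.0 [0. 0. 5.]
```

The tightly coupled part of the method then keeps fusing range residuals with visual ones inside one optimization, so the scale cannot drift after initialization.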

9.
In this paper we apply Clifford geometric algebra to problems of visually guided robotics. In particular, using the algebra of motors we model the 3D rigid motion of points, lines, and planes, which is useful for computer vision and robotics. The effectiveness of the Clifford algebra representation is illustrated by the example of hand-eye calibration: we show that the hand-eye calibration problem is equivalent to estimating the motion of lines. We develop a new linear algorithm that estimates translation and rotation simultaneously as components of rigid motion.
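A flavor of the representation can be given with ordinary quaternions, which form the rotation part of the motor algebra: rigid motion of a point uses the same "sandwich" product p' = q p q* that motors generalize to lines and planes. This sketch is a simplification, not the paper's motor-based line algorithm.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rigid_motion(point, axis, angle, translation):
    """Rotate `point` about unit `axis` by `angle` via the sandwich
    product q p q*, then translate."""
    q = np.concatenate([[np.cos(angle / 2)],
                        np.sin(angle / 2) * np.asarray(axis, float)])
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    p = np.concatenate([[0.0], np.asarray(point, float)])
    rotated = quat_mul(quat_mul(q, p), q_conj)[1:]
    return rotated + np.asarray(translation, float)

# Quarter turn about z maps (1, 0, 0) to (0, 1, 0); then shift by (0, 0, 2).
out = rigid_motion([1, 0, 0], [0, 0, 1], np.pi / 2, [0, 0, 2])
print(np.round(out, 6))  # [0. 1. 2.]
```

Motors extend this by packing rotation and translation into a single algebraic element, which is what makes the simultaneous linear estimation in the paper possible.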

10.
A central task of computer vision is to automatically recognize objects in real-world scenes. The parameters defining image and object spaces can vary with lighting conditions, camera calibration, and viewing position. It is therefore desirable to look for geometric properties of an object that remain invariant under such changes in the observation parameters. The study of such geometric invariance is a field of active research. This paper presents the theory and computation of projective invariants formed from points and lines using the geometric algebra framework. The work shows that geometric algebra is a very elegant language for expressing projective invariants using n views. The paper compares projective invariants involving two and three cameras using simulated and real images. Illustrations of the application of such projective invariants in visually guided grasping, camera self-localization, and reconstruction of shape and motion complement the experimental part.
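The simplest projective invariant of this kind is the cross-ratio of four collinear points, which survives any projective transformation of the line. A minimal numeric check (the homography matrix is arbitrary, chosen only for illustration):

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear points given by scalar coordinates:
    ((c-a)/(c-b)) / ((d-a)/(d-b)), the classic projective invariant."""
    return ((c - a) / (c - b)) / ((d - a) / (d - b))

def projective_map(x, m):
    """1D homography: x -> (m00*x + m01) / (m10*x + m11)."""
    return (m[0, 0] * x + m[0, 1]) / (m[1, 0] * x + m[1, 1])

pts = np.array([0.0, 1.0, 3.0, 7.0])
H = np.array([[2.0, 1.0], [0.5, 3.0]])        # arbitrary 1D homography
mapped = projective_map(pts, H)
print(np.isclose(cross_ratio(*pts), cross_ratio(*mapped)))  # True
```

Invariants over two and three views, as studied in the paper, generalize this idea using the fundamental matrix and the trifocal tensor.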

11.
This paper presents a homotopy-based algorithm for the recovery of depth cues in the spatial domain. The algorithm specifically deals with defocus blur and spatial shifts, that is, 2D motion, stereo disparities, and/or zooming disparities. These cues are estimated from two images of the same scene acquired by a camera evolving in time and/or space. We show that they can be computed simultaneously by solving a system of equations with a homotopy method. The proposed algorithm is tested on synthetic and real images. The results confirm that the homotopy method yields a dense and accurate estimation of depth cues. The approach has been integrated into an application for relief estimation from remotely sensed images.
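The core numerical idea, homotopy continuation, can be shown on a single nonlinear equation: deform an easy problem into the target one while tracking the root with Newton corrections. This is a generic sketch of the technique, not the paper's depth-cue system.

```python
def homotopy_solve(f, df, x0, steps=50, newton_iters=5):
    """Newton homotopy H(x, t) = f(x) - (1 - t) * f(x0):
    at t = 0 the root is x0 by construction; track it as t goes 0 -> 1,
    where H(x, 1) = f(x) and the tracked x solves the target equation."""
    f0 = f(x0)
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(newton_iters):           # Newton correction at this t
            h = f(x) - (1 - t) * f0
            x = x - h / df(x)
    return x

# Toy target equation: x**3 - 2x - 5 = 0, real root near 2.0946.
f = lambda x: x**3 - 2 * x - 5
df = lambda x: 3 * x**2 - 2
root = homotopy_solve(f, df, x0=1.0)
print(round(root, 4))  # 2.0946
```

The paper applies the same continuation principle to a coupled system in which blur and shift unknowns appear together, so all cues are recovered simultaneously.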


Copyright©北京勤云科技发展有限公司  京ICP备09084417号