Similar Documents
20 similar records retrieved (search time: 78 ms)
1.
Mars Rover Autonomous Navigation   (cited 5 times: 0 self-citations, 5 by others)
M. Maurette 《Autonomous Robots》2003,14(2-3):199-208
Autonomous navigation of a rover on the Martian surface can significantly increase the daily traverse, particularly when driving away from the lander into unknown areas. The autonomous navigation process developed at CNES is based on stereo camera perception, used to build a model of the environment and to generate trajectories. Merging of multiple perceptions with propagation of the locomotion and localization errors has been implemented. The algorithms developed for Mars exploration programs, the vision hardware, the validation tools, the experimental platforms, and the evaluation results are presented. Portability and the computing resources required for implementation on a Mars rover are also addressed. The results show that this autonomy requires only a small amount of energy and computing time while fully using the rover's capabilities, allowing a much longer daily traverse than purely ground-planned strategies.
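As an illustration of the error-propagation idea mentioned in the abstract, the sketch below propagates a planar rover pose and its covariance through successive odometry steps; the unicycle motion model, Jacobians, and noise levels are assumptions for illustration and are not taken from the CNES system.

```python
import numpy as np

def propagate_pose_covariance(pose, cov, v, w, dt, sigma_v=0.05, sigma_w=0.02):
    """Propagate a planar pose (x, y, theta) and its covariance through one
    odometry step of linear velocity v and angular velocity w (assumed model)."""
    x, y, theta = pose
    # Nominal unicycle motion update
    new_pose = np.array([x + v * dt * np.cos(theta),
                         y + v * dt * np.sin(theta),
                         theta + w * dt])
    # Jacobian of the motion model w.r.t. the state
    F = np.array([[1.0, 0.0, -v * dt * np.sin(theta)],
                  [0.0, 1.0,  v * dt * np.cos(theta)],
                  [0.0, 0.0,  1.0]])
    # Jacobian w.r.t. the controls, mapping control noise into the state
    G = np.array([[dt * np.cos(theta), 0.0],
                  [dt * np.sin(theta), 0.0],
                  [0.0,                dt]])
    Q = np.diag([sigma_v**2, sigma_w**2])  # assumed locomotion noise
    new_cov = F @ cov @ F.T + G @ Q @ G.T
    return new_pose, new_cov

# Example: uncertainty growth over a short traverse
pose, cov = np.zeros(3), np.diag([1e-4, 1e-4, 1e-6])
for _ in range(100):
    pose, cov = propagate_pose_covariance(pose, cov, v=0.05, w=0.0, dt=1.0)
print(pose, np.sqrt(np.diag(cov)))  # final pose and 1-sigma uncertainty
```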

2.
Current rover localization techniques such as visual odometry have proven to be very effective on short‐ to medium‐length traverses (e.g., up to a few kilometers). This paper deals with the problem of long‐range rover localization (e.g., 10 km and up) by developing an algorithm named MOGA (Multi‐frame Odometry‐compensated Global Alignment). This algorithm is designed to globally localize a rover by matching features detected from a three‐dimensional (3D) orbital elevation map to features from rover‐based, 3D LIDAR scans. The accuracy and efficiency of MOGA are enhanced with visual odometry and inclinometer/sun‐sensor orientation measurements. The methodology was tested with real data, including 37 LIDAR scans of terrain from a Mars–Moon analog site on Devon Island, Nunavut. When a scan contained a sufficient number of good topographic features, localization produced position errors of no more than 100 m; most were less than 50 m, and some were as low as a few meters. The results were compared with, and shown to outperform, VIPER, a competing global localization algorithm given the same initial conditions as MOGA. On a 10‐km traverse, MOGA's localization estimates were shown to significantly outperform visual odometry estimates. This paper shows how the developed algorithm can be used to accurately and autonomously localize a rover over long‐range traverses. © 2010 Wiley Periodicals, Inc.
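The core of such a global alignment step is estimating a rigid transform between matched rover-frame and orbital-map features. The sketch below shows a standard least-squares (Kabsch/SVD) solution for the 2D case; it is an illustrative stand-in, not the MOGA implementation, which additionally handles feature matching, odometry compensation, and orientation priors.

```python
import numpy as np

def rigid_align_2d(rover_pts, orbital_pts):
    """Least-squares 2D rigid transform (R, t) aligning matched rover-frame
    features to orbital-map features."""
    mu_r = rover_pts.mean(axis=0)
    mu_o = orbital_pts.mean(axis=0)
    H = (rover_pts - mu_r).T @ (orbital_pts - mu_o)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_o - R @ mu_r
    return R, t

# Example with synthetic matches: rover features are a rotated/translated copy of map features
rng = np.random.default_rng(0)
orbital = rng.uniform(0, 1000, size=(6, 2))
theta = np.deg2rad(25.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
rover = (orbital - np.array([400.0, 250.0])) @ R_true   # features as seen by the rover
R, t = rigid_align_2d(rover, orbital)
print(R, t)   # recovers the rover-to-map rotation and translation
```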

3.
Computer Vision on Mars   (cited 2 times: 0 self-citations, 2 by others)
Increasing the level of spacecraft autonomy is essential for broadening the reach of solar system exploration. Computer vision has played, and will continue to play, an important role in increasing the autonomy of both spacecraft and Earth-based robotic vehicles. This article addresses progress on computer vision for planetary rovers and landers and has four main parts. First, we review major milestones in the development of computer vision for robotic vehicles over the last four decades. Since research on applications for Earth and space has often been closely intertwined, the review includes elements of both. Second, we summarize the design and performance of the computer vision algorithms used on Mars in the NASA/JPL Mars Exploration Rover (MER) mission, which was a major step forward in the use of computer vision in space. These algorithms performed stereo vision and visual odometry for rover navigation and feature tracking for horizontal velocity estimation for the landers. Third, we summarize ongoing research to improve vision systems for planetary rovers, which includes various aspects of noise reduction, FPGA implementation, and vision-based slip perception. Finally, we briefly survey other opportunities for computer vision to impact rovers, landers, and orbiters in future solar system exploration missions.

4.
Future exploration rovers will be equipped with substantial onboard autonomy. SLAM is a fundamental part of this autonomy and is closely connected with robot perception, planning, and control. The community has made great progress in the past decade by enabling real-world solutions and is addressing important challenges in scalability, resource awareness, and domain adaptation. A novel adaptive SLAM system is proposed to meet rover navigation and computational demands. It starts from a three-dimensional odometry dead-reckoning solution and builds up to a full graph optimization that takes rover traction performance into account. A complete kinematic model of the rover locomotion system improves the wheel odometry solution. In addition, an odometry error model is inferred using Gaussian processes (GPs) to predict the nonsystematic errors induced by poor traction of the rover on the terrain. The nonparametric GP regression serves to adapt the localization and mapping to the current navigation demands (domain adaptation). The method brings scalability and adaptiveness to modern SLAM. An adaptive strategy is therefore developed to adjust the image frame rate (active perception) and to influence the optimization backend by including highly informative keyframes in the graph (adaptive information gain). The work is experimentally verified on a representative planetary rover in a realistic field test scenario. The results show a modern SLAM system that adapts to the predicted error, maintaining accuracy with fewer nodes and taking the best of both wheel and visual methods in a consistent graph-based smoothing approach.
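A minimal sketch of the Gaussian-process odometry error model idea, using scikit-learn; the input features (speed, turn rate, slope), kernel, and training values are assumptions for illustration, not the paper's actual regressors.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Assumed training data: per-step inputs (commanded speed, turn rate, terrain slope)
# and the observed wheel-odometry drift for that step (e.g., measured against visual
# odometry or ground truth during a calibration run).
X_train = np.array([[0.05, 0.00, 2.0],
                    [0.05, 0.10, 5.0],
                    [0.10, 0.00, 8.0],
                    [0.10, 0.05, 12.0],
                    [0.03, 0.00, 15.0]])
y_train = np.array([0.002, 0.004, 0.015, 0.030, 0.045])  # meters of drift per step (illustrative)

kernel = RBF(length_scale=[0.05, 0.05, 5.0]) + WhiteKernel(noise_level=1e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

# Predict the nonsystematic error (and its uncertainty) for the current navigation state;
# a large predicted error could trigger a higher camera frame rate or an extra keyframe.
mean, std = gp.predict(np.array([[0.08, 0.02, 10.0]]), return_std=True)
print(f"predicted odometry error: {mean[0]:.3f} m +/- {std[0]:.3f} m")
```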

5.
Objective: Visual localization aims to estimate the position and pose of moving objects from readily available RGB images. Occlusions and weakly textured regions, which are common in indoor scenes, easily cause erroneous keypoint estimates and severely degrade localization accuracy. To address this problem, this paper proposes an indoor localization system that fuses active and passive measurements, combining the advantages of fixed-viewpoint and moving-viewpoint schemes to accurately localize moving targets in indoor scenes. Methods: A plane-prior-based object pose estimation method is proposed: on top of a keypoint-detection monocular localization framework, planar constraints are used to optimize a 3-DoF pose, improving the localization stability of targets moving on indoor planes under a fixed viewpoint. A data-fusion localization system based on the unscented Kalman filter is designed, fusing the passive localization results from the fixed viewpoint with the active localization results from the moving viewpoint to improve the reliability of the moving target's pose estimates. Results: On the iGibson simulation dataset, the proposed active-passive fused indoor visual localization system achieves an average localization accuracy of 2-3 cm, with 99% of estimates within 10 cm of error; in real scenes the average accuracy is 3-4 cm, with more than 90% of estimates within 10 cm, i.e., centimeter-level accuracy. Conclusion: The proposed indoor visual localization system combines the advantages of passive and active localization, achieves high-accuracy target localization in indoor scenes at low equipment cost, and, under occlusion, target...
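To make the planar prior concrete, the sketch below projects a 6-DoF pose estimate onto a horizontal plane, keeping only x, y, and yaw; the function and the simple projection are illustrative assumptions rather than the paper's 3-DoF optimization.

```python
import numpy as np

def project_pose_to_plane(T_world_obj, plane_height=0.0):
    """Enforce a planar prior on an estimated 6-DoF object pose (4x4 matrix):
    the object is assumed to move on a horizontal plane, so only x, y, and yaw
    are kept while z, roll, and pitch are fixed. Illustrative sketch only."""
    x, y = T_world_obj[0, 3], T_world_obj[1, 3]
    # Yaw extracted from the rotation's projection onto the ground plane
    yaw = np.arctan2(T_world_obj[1, 0], T_world_obj[0, 0])
    c, s = np.cos(yaw), np.sin(yaw)
    T_plane = np.array([[c, -s, 0.0, x],
                        [s,  c, 0.0, y],
                        [0.0, 0.0, 1.0, plane_height],
                        [0.0, 0.0, 0.0, 1.0]])
    return np.array([x, y, yaw]), T_plane

# Example: a noisy 6-DoF estimate with a spurious height offset gets flattened to 3 DoF
noisy = np.eye(4)
noisy[:3, 3] = [1.2, 0.4, 0.07]
state, T = project_pose_to_plane(noisy, plane_height=0.0)
print(state)   # [x, y, yaw] on the assumed plane
```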

6.
Under the umbrella of the European Space Agency (ESA) StarTiger program, a rapid prototyping study called Seeker was initiated. A range of partners from space and nonspace sectors were brought together to develop a prototype Mars rover system capable of autonomously exploring several kilometers of highly representative Mars terrain over a three‐day period. This paper reports on our approach and the final field trials that took place in the Atacama Desert, Chile. Long‐range navigation and the associated remote rover field trials are a new departure for ESA, and this activity therefore represents a novel initiative in this area. The primary focus was to determine if current computer vision and artificial intelligence based software could enable such a capability on Mars, given the current limit of around 200 m per Martian day. The paper does not seek to introduce new theoretical techniques or compare various approaches, but it offers a unique perspective on their behavior in a highly representative environment. The final system autonomously navigated 5.05 km in highly representative terrain during one day. This work is part of a wider effort to achieve a step change in autonomous capability for future Mars/lunar exploration rover platforms.

7.
《Advanced Robotics》2013,27(4):461-482
In hand-eye systems for advanced robotic applications such as assembly, the degrees of freedom of the vision sensor should be increased and actively made use of to cope with unstable scene conditions. Particularly, in the case of using a simple vision sensor, an intelligent adaptation of the sensor is essential to compensate for its inability to adapt to a changing environment. This paper proposes a vision sensor setup planning system which operates based on environmental models and generates plans for using the sensor and its illumination assuming freedom of positioning for both. A typical vision task in which the edges of an object are measured to determine its position and orientation is assumed for the sensor setup planning. In this context, the system is able to generate plans for the camera and illumination position, and to select a set of edges best suited for determining the object's position. The system operates for stationary or moving objects by evaluating scene conditions such as edge length, contrast, and relative angles based on a model of the object and the task environment. Automatic vision sensor setup planning functions, as shown in this paper, will play an important role not only for autonomous robotic systems, but also for teleoperation systems in assisting advanced tasks.

8.
魏彤  李绪 《机器人》2020,42(3):336-345
The localization and mapping accuracy of existing simultaneous localization and mapping (SLAM) algorithms usually degrades sharply in dynamic environments. To address this, a stereo visual SLAM algorithm based on dynamic region removal is proposed. First, dynamic sparse feature points in the scene are identified using stereo-vision geometric constraints, and the scene is then segmented into regions according to depth and color information. Next, dynamic regions are marked by combining the dynamic points with the segmentation result, and the feature points inside those regions are removed from the existing stereo ORB-SLAM pipeline, eliminating the influence of dynamic objects on SLAM accuracy. Finally, experiments show that the algorithm achieves a dynamic-region segmentation recall of 92.31% on the KITTI dataset. In outdoor dynamic environments, tests on a visual guidance device for the blind achieve a segmentation recall of 93.62%, improve straight-line walking localization accuracy by 82.75% over the original stereo ORB-SLAM, and clearly improve the quality of the environment map, with an average processing speed of 4.6 frames/s. The results show that the algorithm significantly improves the localization and mapping accuracy of stereo visual SLAM in dynamic scenes and meets the real-time requirements of visual guidance for the blind.
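A minimal sketch of the feature-culling step: once dynamic regions have been marked, features are detected only in the remaining static areas. OpenCV's ORB detector accepts a mask directly; the synthetic image and mask below are placeholders, not the paper's pipeline.

```python
import cv2
import numpy as np

def detect_static_features(gray_img, dynamic_mask, n_features=1000):
    """Detect ORB features only outside regions flagged as dynamic.
    `dynamic_mask` is a uint8 image where nonzero pixels mark dynamic regions
    (e.g., from the geometric-constraint check plus depth/color segmentation)."""
    static_mask = cv2.bitwise_not(dynamic_mask)          # keep only static areas
    orb = cv2.ORB_create(nfeatures=n_features)
    keypoints, descriptors = orb.detectAndCompute(gray_img, static_mask)
    return keypoints, descriptors

# Example with a synthetic image and a dynamic region covering its left quarter
img = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
dyn = np.zeros((480, 640), dtype=np.uint8)
dyn[:, :160] = 255                                        # pretend a moving object is here
kps, desc = detect_static_features(img, dyn)
print(len(kps), "static keypoints")
```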

9.
Mars microrover navigation: Performance evaluation and enhancement   (cited 1 time: 1 self-citation, 0 by others)
In 1996, NASA will launch the Mars Pathfinder spacecraft, which will carry an 11 kg rover to explore the immediate vicinity of the lander. To assess the capabilities of the rover, as well as to set priorities for future rover research, it is essential to evaluate the performance of its autonomous navigation system as a function of terrain characteristics. Unfortunately, very little of this kind of evaluation has been done, for either planetary rovers or terrestrial applications. To fill this gap, we have constructed a new microrover testbed consisting of the Rocky 3.2 vehicle and an indoor test arena with overhead cameras for automatic, real-time tracking of the true rover position and heading. We create Mars analog terrains in this arena by randomly distributing rocks according to an exponential model of Mars rock size frequency created from Viking lander imagery. To date, we have recorded detailed logs from over 85 navigation trials in this testbed. In this paper, we outline current plans for Mars exploration over the next decade, summarize the design of the lander and rover for the 1996 Pathfinder mission, and introduce a decomposition of rover navigation into four major functions: goal designation, rover localization, hazard detection, and path selection. We then describe the Pathfinder approach to each function, present results to date of evaluating the performance of each function, and outline our approach to enhancing performance for future missions. The results show key limitations in the quality of rover localization, the speed of hazard detection, and the ability of behavior control algorithms for path selection to negotiate the rock frequencies likely to be encountered on Mars. We believe that the facilities, methodologies, and to some extent the specific performance results presented here will provide valuable examples for efforts to evaluate robotic vehicle performance in other applications.
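A small sketch of how such a rock field might be generated from an exponential size-frequency model; the rock density and mean diameter below are illustrative assumptions, not the Viking-derived parameters used in the testbed.

```python
import numpy as np

def sample_rock_field(arena_w, arena_h, rocks_per_m2=0.6, mean_diameter=0.12, seed=None):
    """Scatter rocks over a test arena with diameters drawn from an exponential
    distribution, loosely mimicking a Mars rock size-frequency model. Density and
    mean diameter are assumed values for illustration."""
    rng = np.random.default_rng(seed)
    n_rocks = rng.poisson(rocks_per_m2 * arena_w * arena_h)
    xy = rng.uniform([0.0, 0.0], [arena_w, arena_h], size=(n_rocks, 2))
    diameters = rng.exponential(scale=mean_diameter, size=n_rocks)
    return np.column_stack([xy, diameters])   # columns: x [m], y [m], diameter [m]

rocks = sample_rock_field(10.0, 10.0, seed=42)
print(f"{len(rocks)} rocks, largest {rocks[:, 2].max():.2f} m")
```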

10.
Open-source, low-cost sensors and robotic systems have developed to the point of producing meaningful, repeatable results in real-life applications. We developed a low-cost, open-source multispectral camera mounted on a small custom-built robotic rover. We compared the performance of our camera with a commercial multispectral camera and a laboratory spectrometer using minerals commonly found on Mars that exhibit different reflectance values at visible and near-infrared wavelengths. Our camera performed favourably when compared with the commercial instruments. It is a very cost-effective solution for operating in extreme situations where damage to instruments is possible. Our total system of rover and sensor would therefore be very useful for operating in delicate and inaccessible environments where damage to the area under investigation, or to human observers, is of concern.

11.
Long‐term autonomy in robotics requires perception systems that are resilient to unusual but realistic conditions that will eventually occur during extended missions. For example, unmanned ground vehicles (UGVs) need to be capable of operating safely in adverse and low‐visibility conditions, such as at night or in the presence of smoke. The key to a resilient UGV perception system lies in the use of multiple sensor modalities, e.g., operating at different frequencies of the electromagnetic spectrum, to compensate for the limitations of a single sensor type. In this paper, visual and infrared imaging are combined in a Visual‐SLAM algorithm to achieve localization. We propose to evaluate the quality of data provided by each sensor modality prior to data combination. This evaluation is used to discard low‐quality data, i.e., data most likely to induce large localization errors. In this way, perceptual failures are anticipated and mitigated. An extensive experimental evaluation is conducted on data sets collected with a UGV in a range of environments and adverse conditions, including the presence of smoke (obstructing the visual camera), fire, extreme heat (saturating the infrared camera), low‐light conditions (dusk), and at night with sudden variations of artificial light. A total of 240 trajectory estimates are obtained using five different variations of data sources and data combination strategies in the localization method. In particular, the proposed approach for selective data combination is compared to methods using a single sensor type or combining both modalities without preselection. We show that the proposed framework allows for camera‐based localization resilient to a large range of low‐visibility conditions.
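The sketch below illustrates the idea of gating each modality on a per-frame quality score before fusion; the mean-gradient metric and the threshold are assumptions for illustration, not the quality measure used in the paper.

```python
import numpy as np

def frame_quality(gray):
    """Simple image-quality score: mean gradient magnitude. Low values suggest a
    washed-out frame (smoke for the visual camera, heat saturation for the IR one).
    Metric and threshold are illustrative assumptions."""
    gy, gx = np.gradient(gray.astype(float))
    return float(np.hypot(gx, gy).mean())

def select_frames(visual, infrared, threshold=2.0):
    """Keep, per time step, only the modalities whose frames pass the quality gate,
    so that degraded data never enters the Visual-SLAM front end."""
    selected = []
    for vis, ir in zip(visual, infrared):
        usable = {}
        if frame_quality(vis) >= threshold:
            usable["visual"] = vis
        if frame_quality(ir) >= threshold:
            usable["infrared"] = ir
        selected.append(usable)
    return selected

# Example: a sharp visual frame and a nearly uniform (saturated) infrared frame
vis = np.random.randint(0, 255, (120, 160), dtype=np.uint8)
ir = np.full((120, 160), 250, dtype=np.uint8)
print(select_frames([vis], [ir], threshold=2.0)[0].keys())  # only 'visual' survives
```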

12.
In spite of the good performance of space exploration missions, open issues remain to be solved. In autonomous or composite semi-autonomous exploration of planetary surfaces, rover localization is one such issue. The rovers of these missions (e.g., the MER and MSL) navigate relative to their landing spot, ignoring their exact position in the coordinate system defined for the celestial body they explore. However, future advanced missions, like Mars Sample Return, will require localization of rovers in a global frame rather than the arbitrarily defined landing frame. In this paper we attempt to retrieve the rover's absolute location by identifying matching Regions of Interest (ROIs) between orbital and ground images. In particular, we propose a system comprising two parts, an offline one and an onboard one, which functions as follows: in advance of the mission, a Global ROI Network (GN) is built offline by investigating the satellite images near the predicted touchdown ellipse, while during the mission a Local ROI Network (LN) is constructed from the images acquired by the rover's vision system along its traverse. The latter procedure relies on accurate VO-based relative rover localization. The LN is then paired with the GN through a modified 2D DARCES algorithm. The system has been assessed on real data collected by ESA in the Atacama Desert. The results demonstrate the system's potential to perform absolute localization, on condition that the area includes discriminative ROIs. The main contribution of this work is enabling global localization on contemporary rovers without requiring any additional hardware, such as long-range LIDARs.
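As a rough illustration of pairing the two ROI networks, the toy sketch below matches local ROIs to global ROIs by pairwise-distance consistency; the actual modified 2D DARCES algorithm is considerably more elaborate, so treat this only as a conceptual stand-in with assumed tolerances.

```python
import numpy as np
from itertools import permutations

def match_roi_networks(local_xy, global_xy, tol=15.0):
    """Brute-force search for the assignment of local ROIs to global ROIs whose
    pairwise distances agree best (within `tol` meters). A toy stand-in for the
    matching between the Local and Global ROI Networks."""
    n = len(local_xy)
    d_local = np.linalg.norm(local_xy[:, None] - local_xy[None, :], axis=-1)
    best, best_err = None, np.inf
    for cand in permutations(range(len(global_xy)), n):
        g = global_xy[list(cand)]
        d_global = np.linalg.norm(g[:, None] - g[None, :], axis=-1)
        err = np.abs(d_local - d_global).max()
        if err < tol and err < best_err:
            best, best_err = cand, err
    return best, best_err

# Example: three local ROIs that correspond to global ROIs 2, 0, and 4
global_xy = np.array([[0, 0], [500, 120], [220, 340], [610, 580], [90, 700]], float)
local_xy = global_xy[[2, 0, 4]] + np.random.default_rng(1).normal(0, 2, (3, 2))
print(match_roi_networks(local_xy, global_xy))   # ((2, 0, 4), small residual)
```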

13.
Taking the "祝融号" (Zhurong) Mars rover as the study object, the mechanism of rover sinkage is analyzed based on vehicle terramechanics theory, a sinkage identification method based on slip ratio, drive current, and sinkage depth is presented, and sinkage criteria and escape strategies for the Zhurong rover under different conditions are established. Verification tests show that at critical sinkage the slip ratio is 61.25% and the drive current is 1.90 A; once the critical value is reached, the Zhurong rover cannot drive out directly, whereas the creeping motion of its active suspension is an effective means of escape. The results provide test data and a reference for the in-orbit operating strategy of the Zhurong Mars rover.
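The sketch below computes the standard longitudinal slip ratio and checks it, together with the drive current, against the critical values reported in the abstract (61.25% slip, 1.90 A); how the two thresholds are combined here is an illustrative simplification of the paper's criteria.

```python
def slip_ratio(wheel_radius, wheel_omega, ground_speed):
    """Longitudinal slip ratio s = (r*omega - v) / (r*omega) for a driving wheel
    (standard terramechanics definition; s -> 1 means the wheel spins in place)."""
    commanded = wheel_radius * wheel_omega
    return (commanded - ground_speed) / commanded

def sinkage_alert(slip, drive_current, slip_limit=0.6125, current_limit=1.90):
    """Flag incipient sinkage when the slip ratio and drive current both exceed the
    critical values reported for the Zhurong rover. Combining them with a simple
    AND is an assumed simplification."""
    return slip >= slip_limit and drive_current >= current_limit

# Example: wheel of 0.10 m radius commanded at 1.5 rad/s while the rover moves only 0.05 m/s
s = slip_ratio(0.10, 1.5, 0.05)
print(f"slip = {s:.2%}, alert = {sinkage_alert(s, drive_current=2.1)}")
```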

14.
This paper presents a 3D contour reconstruction approach employing a wheeled mobile robot equipped with an active laser‐vision system. With observation from an onboard CCD camera, a laser line projector fixed‐mounted below the camera is used for detecting the bottom shape of an object while an actively‐controlled upper laser line projector is utilized for 3D contour reconstruction. The mobile robot is driven to move around the object by a visual servoing and localization technique while the 3D contour of the object is being reconstructed based on the 2D image of the projected laser line. Asymptotical convergence of the closed‐loop system has been established. The proposed algorithm also has been used experimentally with a Dr Robot X80sv mobile robot upgraded with the low‐cost active laser‐vision system, thereby demonstrating effective real‐time performance. This seemingly novel laser‐vision robotic system can be applied further in unknown environments for obstacle avoidance and guidance control tasks. Copyright © 2011 John Wiley and Sons Asia Pte Ltd and Chinese Automatic Control Society
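The geometric core of laser-line contour reconstruction is intersecting the camera ray through each laser pixel with the laser plane. The sketch below shows this ray-plane triangulation; the intrinsics and laser-plane calibration values are assumed for illustration, not the robot's actual parameters.

```python
import numpy as np

def triangulate_laser_point(u, v, K, plane_point, plane_normal):
    """Recover the 3D point where the camera ray through pixel (u, v) meets the laser
    plane, with K the 3x3 intrinsic matrix and the plane given by a point on it and
    its normal, all expressed in the camera frame."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])           # viewing ray direction
    t = (plane_normal @ plane_point) / (plane_normal @ ray)   # ray-plane intersection
    return t * ray                                             # (X, Y, Z) in the camera frame

# Assumed calibration: projector 5 cm from the camera, laser plane tilted 5 degrees
# toward the optical axis so the two intersect in front of the robot.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
plane_point = np.array([0.0, 0.05, 0.0])
a = np.deg2rad(5.0)
plane_normal = np.array([0.0, np.cos(a), np.sin(a)])
print(triangulate_laser_point(320.0, 300.0, K, plane_point, plane_normal))
```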

15.
《Real》1999,5(2):95-107
Human beings perform remarkably well on object recognition tasks; they perceive images through their senses and convey information that is processed in parallel in the brain. To some extent, massively parallel computers offer natural support for similar tasks, since the detection of an object in a scene can be performed by repeating the same operations in different zones of the scene. Unfortunately, most parametric models commonly used in computer vision are not well suited to complex matching operations that involve both noise and severe image distortions. In this paper we discuss an expectation-driven approach to object recognition in which, on the basis of the shape of the object to be recognized, we select a few possible zones of the scene where attention will be focused (shape perception); we then examine the previously selected areas, trying to confirm or reject object hypotheses, if any (object classification). We propose the use of an architecture that relies on neural networks for both shape perception and object classification. A vision system based on the discussed architecture has been tested on board a mobile robot as a support for its localization and navigation in indoor environments. The results demonstrated good tolerance to both noise and landmark distortions, allowing the robot to perform its task "just-in-time". The proposed approach has also been tested on a massively parallel architecture, with promising performance.

16.
This article presents a review of vision-aided systems and proposes an approach to visual rehabilitation using stereo vision technology. The proposed system uses stereo vision, image processing, and a sonification procedure to support the mobility of blind users. The developed system includes a wearable computer, stereo cameras as the vision sensor, and stereo earphones, all mounted in a helmet. The image of the scene in front of the visually handicapped user is captured by the vision sensors. The captured images are processed to enhance the important features of the scene ahead for mobility assistance. The image processing is designed as a model of human vision, identifying obstacles and their depth information. The processed image is mapped onto musical stereo sound to convey the scene ahead to the blind user. The developed method has been tested in indoor and outdoor environments, and the proposed image processing methodology is found to be effective for object identification.
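A minimal sketch of one possible depth-to-stereo-sound mapping (azimuth to pan, distance to pitch and loudness); the specific mapping is an assumption for illustration and may differ from the paper's sonification procedure.

```python
import numpy as np

def sonify_obstacle(azimuth_deg, distance_m, sample_rate=16000, duration=0.3):
    """Map an obstacle to a short stereo tone: azimuth controls the left/right pan
    and distance controls the pitch and amplitude (closer = higher and louder).
    The mapping constants are illustrative assumptions."""
    t = np.linspace(0.0, duration, int(sample_rate * duration), endpoint=False)
    freq = np.clip(1200.0 / max(distance_m, 0.2), 200.0, 4000.0)   # closer -> higher pitch
    amp = np.clip(1.0 / max(distance_m, 0.2), 0.1, 1.0)            # closer -> louder
    tone = amp * np.sin(2 * np.pi * freq * t)
    pan = (azimuth_deg + 90.0) / 180.0                             # -90 deg = full left
    left, right = (1.0 - pan) * tone, pan * tone
    return np.stack([left, right], axis=1)                         # (samples, 2) stereo buffer

buf = sonify_obstacle(azimuth_deg=-30.0, distance_m=1.5)
print(buf.shape)  # write with e.g. scipy.io.wavfile to listen to the cue
```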

17.
Most state-of-the-art robotic cars’ perception systems are quite different from the way a human driver understands traffic environments. First, humans assimilate information from the traffic scene mainly through visual perception, while the machine perception of traffic environments needs to fuse information from several different kinds of sensors to meet safety-critical requirements. Second, a robotic car requires nearly 100% correct perception results for its autonomous driving, while an experienced human driver works well with dynamic traffic environments, in which machine perception could easily produce noisy perception results. In this paper, we propose a vision-centered multi-sensor fusing framework for a traffic environment perception approach to autonomous driving, which fuses camera, LIDAR, and GIS information consistently via both geometrical and semantic constraints for efficient self-localization and obstacle perception. We also discuss robust machine vision algorithms that have been successfully integrated with the framework and address multiple levels of machine vision techniques, from collecting training data, efficiently processing sensor data, and extracting low-level features, to higher-level object and environment mapping. The proposed framework has been tested extensively in actual urban scenes with our self-developed robotic cars for eight years. The empirical results validate its robustness and efficiency.

18.
This paper describes a novel approach to simultaneous localization and mapping (SLAM) techniques applied to the autonomous planetary rover exploration scenario to reduce both the relative and absolute localization errors, using two well‐proven techniques: particle filters and scan matching. Continuous relative localization is improved by matching high‐resolution sensor scans to the online created local map. Additionally, to avoid issues with drifting localization, absolute localization is globally corrected at discrete times, according to predefined event criteria, by matching the current local map to the orbiter's global map. The resolutions of local and global maps can be appropriately chosen for computation and accuracy purposes. Further, the online generated local map, of the form of a structured elevation grid map, can also be used to evaluate the traversability of the surrounding environment and allow for continuous navigation. The objective of this study is to support long‐range low‐supervision planetary exploration. The implemented SLAM technique has been validated with a data set acquired during a field test campaign performed at the Teide Volcano on the island of Tenerife, representative of a Mars/Moon exploration scenario.
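The sketch below illustrates the discrete global-correction idea: when an event criterion fires, the rover's local elevation map is matched against a window of the orbiter's global map around the pose prior. This brute-force offset search is a toy stand-in for the paper's particle-filter and scan-matching machinery.

```python
import numpy as np

def global_correction_offset(local_map, global_map, prior_rc, search_radius=10):
    """Search a window of the orbiter's global elevation map around the current pose
    prior (row, col) for the offset that best matches the rover's local elevation map,
    returning the correction in grid cells and its residual."""
    h, w = local_map.shape
    best, best_err = (0, 0), np.inf
    for dr in range(-search_radius, search_radius + 1):
        for dc in range(-search_radius, search_radius + 1):
            r0, c0 = prior_rc[0] + dr, prior_rc[1] + dc
            patch = global_map[r0:r0 + h, c0:c0 + w]
            if patch.shape != local_map.shape:
                continue                                   # window fell off the map
            err = np.nanmean((patch - local_map) ** 2)
            if err < best_err:
                best, best_err = (dr, dc), err
    return best, best_err

# Example: the local map is a patch of the global map offset by (3, -2) cells from the prior
rng = np.random.default_rng(0)
global_map = rng.normal(0, 1, (200, 200)).cumsum(axis=0)   # smooth-ish synthetic terrain
local_map = global_map[103:123, 48:68]
print(global_correction_offset(local_map, global_map, prior_rc=(100, 50)))
```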

19.
Objective: Visual perception is a key technology in intelligent vehicle systems, and effectively improving visual performance under complex challenges has become an important research topic in intelligent driving. This paper introduces the ACP methodology, composed of artificial societies, computational experiments, and parallel execution, into visual perception for intelligent driving, and proposes parallel visual perception for intelligent driving, addressing the problems of properly training and evaluating vision models and helping move intelligent driving toward practical application. Methods: Parallel visual perception simulates actual driving scenes by combining artificial subsystems, building artificial driving scenes that serve as a "computational laboratory" for intelligent-vehicle visual perception; vision models are trained and evaluated through the two operating modes of computational experiments; finally, parallel execution dynamically optimizes the vision models so that perception and understanding of complex challenges remain effective over the long term. Results: Experiments show that, in the object-detection training stage, mixed virtual-real data reach a top precision of 60.9%, which is 17.9% and 5.3% higher than using only KPC data (KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute), PASCAL VOC (pattern analysis, statistical modelling and computational learning visual object classes), and MS COCO (Microsoft common objects in context)) and only virtual data, respectively; in the evaluation stage, compared with the baseline data, average precision drops by 11.3% for the regular task (-30° with vertical movement), by 21.0% for the environmental task (fog), and by 33.7% for the hard task (all challenges). Conclusion: This work designs and carries out, for intelligent driving, visual computational experiments that are difficult or even impossible to conduct in real driving scenes, analyzes and evaluates complex visual challenges, and helps strengthen an intelligent vehicle's perception and understanding of its surroundings while driving.

20.
Rovers operating on Mars require more and more autonomous features to fulfill their challenging mission requirements. However, the inherent constraints of space systems render the implementation of complex algorithms an expensive and difficult task. In this paper, we propose an architecture for autonomous navigation. Efficient implementations of autonomous features are built on top of the ExoMars path following navigation approach to enhance the safety and traversing capabilities of the rover. These features allow the rover to detect and avoid hazards and perform significantly longer traverses planned by operators on the ground. The efficient navigation approach has been implemented and tested during field test campaigns on a planetary analogue terrain. The experiments evaluated the proposed architecture by autonomously completing several traverses of variable lengths while avoiding hazards. The approach relies only on the optical Localization Cameras stereo bench, a sensor that is found in all current rovers, and potentially allows for computationally inexpensive long‐range autonomous navigation in terrains of medium difficulty.
