Similar Literature (20 related records)
1.
We present a novel approach to estimating depth from single omnidirectional camera images by learning the relationship between visual features and range measurements available during a training phase. Our model not only yields the most likely distance to obstacles in all directions, but also the predictive uncertainties for these estimates. This information can be utilized by a mobile robot to build an occupancy grid map of the environment or to avoid obstacles during exploration—tasks that typically require dedicated proximity sensors such as laser range finders or sonars. We show in this paper how an omnidirectional camera can be used as an alternative to such range sensors. As the learning engine, we apply Gaussian processes, a nonparametric approach to function regression, as well as a recently developed extension for dealing with input-dependent noise. In practical experiments carried out in different indoor environments with a mobile robot equipped with an omnidirectional camera system, we demonstrate that our system is able to estimate range with an accuracy comparable to that of dedicated sensors based on sonar or infrared light.
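The core step, regressing range from visual features with a Gaussian process and reporting a predictive variance, can be sketched as follows. This is a minimal, self-contained illustration with a squared-exponential kernel and homoscedastic noise; the paper's input-dependent (heteroscedastic) noise extension is not reproduced, and the feature vectors are random placeholders rather than real image features.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, sigma_f=1.0):
    """Squared-exponential kernel between row-vector sets A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return sigma_f**2 * np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, y_train, X_test, noise=0.05):
    """Standard GP regression: posterior mean and standard deviation."""
    K = rbf_kernel(X_train, X_train) + noise**2 * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v**2, axis=0)
    return mean, np.sqrt(np.maximum(var, 0.0))

# Placeholder data: visual feature vectors -> measured ranges (metres).
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 8))        # hypothetical image features
ranges = 2.0 + np.sin(features[:, 0]) + 0.1 * rng.normal(size=200)

mu, sigma = gp_predict(features[:150], ranges[:150], features[150:])
print("predicted range:", mu[0], "+/-", sigma[0])
```

The predictive standard deviation is what a robot would use to decide how much to trust a given direction's range estimate when updating an occupancy grid.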

2.
To address the high uncertainty and low reliability of sonar sensor data during localization and map building in unknown environments, a new indoor mapping method is proposed. The method constructs a tolerance function to identify noise and specular reflections, draws on the idea of the Arc Transversal Median Algorithm together with grid probability estimation, and applies Bayes' rule in two data-fusion stages to reduce the uncertainty of the sonar information. Real-time mapping experiments on the MORCS2 robot platform show that the method quickly updates the local map into the global map with good accuracy and robustness.
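A minimal sketch of the Bayesian occupancy-grid fusion that such sonar mapping relies on is given below. It is a generic log-odds update for one sonar return; the paper's tolerance function and Arc Transversal Median step are not reproduced, and the inverse sensor model probabilities and cell indexing are simplified assumptions.

```python
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def update_cell(l_prior, p_meas):
    """Bayes-rule fusion in log-odds form: add the measurement evidence."""
    return l_prior + logit(p_meas)

# 20x20 grid, all cells initially unknown (P = 0.5 -> log-odds 0).
grid = np.zeros((20, 20))

# Hypothetical inverse sensor model for one sonar return: cells along the
# beam before the return are likely free, the cell at the return is likely
# occupied.
beam_cells = [(10, c) for c in range(5, 12)]
for (r_idx, c_idx) in beam_cells[:-1]:
    grid[r_idx, c_idx] = update_cell(grid[r_idx, c_idx], 0.3)    # free evidence
r_idx, c_idx = beam_cells[-1]
grid[r_idx, c_idx] = update_cell(grid[r_idx, c_idx], 0.75)       # occupied evidence

prob = 1.0 - 1.0 / (1.0 + np.exp(grid))   # back to probabilities
print(prob[10, 5:12].round(2))
```

Fusing in log-odds space means repeated readings of the same cell simply add, which is why two-stage fusion of local and global maps stays cheap.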

3.
This paper describes a sonar sensor-based exploration method. To build an accurate map in an unknown environment during exploration, a simultaneous localization and mapping problem must be solved. Therefore, a new type of sonar feature, called a "sonar salient feature" (SS-feature), is proposed for robust data association. The key concept of an SS-feature is to extract circle-feature clouds on salient convex objects from the environment by associating sets of sonar data. The SS-feature is used as an observation in the extended Kalman filter (EKF)-based SLAM framework. A suitable strategy is needed to efficiently explore the environment. We use utilities based on driving cost, expected information about unknown areas, and localization quality. Through this strategy, the exploration method greatly reduces behavior that leads a robot to re-explore previously visited places, and thus shortens the exploration distance. A robot can select a path favorable for localization by means of the localization gain during exploration, and can therefore estimate its pose more robustly than methods that do not consider localizability during exploration. The proposed exploration method was verified by various experiments, which confirm that a robot can build an accurate map fully autonomously with sonar sensors in various home environments.
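A generic EKF measurement-update step of the kind used in such a sonar-feature SLAM framework is sketched below. The state layout, range-bearing measurement model, landmark position, and noise values are illustrative assumptions, not the paper's SS-feature formulation.

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """Standard EKF correction: x, P are the prior state and covariance,
    z the measurement, h(x) the predicted measurement, H its Jacobian."""
    y = z - h(x)                          # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy example: robot pose (x, y, theta), range-bearing observation of a
# circle-feature landmark at a known map position (assumption).
landmark = np.array([2.0, 1.0])
def h(x):
    dx, dy = landmark - x[:2]
    return np.array([np.hypot(dx, dy), np.arctan2(dy, dx) - x[2]])

x = np.array([0.0, 0.0, 0.0])
P = np.diag([0.1, 0.1, 0.05])
dx, dy = landmark - x[:2]
q = dx**2 + dy**2
H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q), 0.0],
              [ dy / q,          -dx / q,         -1.0]])
R = np.diag([0.05, 0.02])
z = np.array([2.30, 0.40])               # hypothetical sonar observation
x, P = ekf_update(x, P, z, h, H, R)
print(x)
```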

4.
Advanced Robotics, 2013, 27(5-6): 661-688
In this paper, we propose a heterogeneous multisensor fusion algorithm for mapping in dynamic environments. The algorithm synergistically integrates the information obtained from an uncalibrated camera and sonar sensors to facilitate mapping and tracking. The sonar data is mainly used to build a weighted line-based map via a fuzzy clustering technique. The line weight, with confidence corresponding to the moving object, is determined by both sonar and vision data. Motion tracking is primarily accomplished with vision data using particle filtering, and the sonar vectors originating from moving objects are used to modulate the sample weighting. A fuzzy system is implemented to fuse the features of the two sensor modalities. Additionally, in order to build a consistent global map and maintain reliable tracking of moving objects, the well-known extended Kalman filter is applied to estimate the robot pose and the map features. Thus, more robust performance in both mapping and tracking is achieved. The empirical results on the Pioneer 2DX mobile robot demonstrate that the proposed algorithm outperforms methods using a homogeneous sensor in both mapping and tracking behaviors.

5.
Heterogeneous Teams of Modular Robots for Mapping and Exploration
In this article, we present the design of a team of heterogeneous, centimeter-scale robots that collaborate to map and explore unknown environments. The robots, called Millibots, are configured from modular components that include sonar and IR sensors, camera, communication, computation, and mobility modules. Robots with different configurations use their special capabilities collaboratively to accomplish a given task. For mapping and exploration with multiple robots, it is critical to know the relative positions of each robot with respect to the others. We have developed a novel localization system that uses sonar-based distance measurements to determine the positions of all the robots in the group. With their positions known, we use an occupancy grid Bayesian mapping algorithm to combine the sensor data from multiple robots with different sensing modalities. Finally, we present the results of several mapping experiments conducted by a user-guided team of five robots operating in a room containing multiple obstacles.
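One common way to realise this kind of distance-based relative localization is planar trilateration. The sketch below, under the assumptions of three teammates at known positions and noiseless 2-D ranges, solves the linearized least-squares problem; it is an illustration of the principle, not the Millibot localization algorithm itself.

```python
import numpy as np

def trilaterate(beacons, dists):
    """Least-squares position from distances to beacons (2-D).
    Linearized by subtracting the first range equation from the rest."""
    b0, d0 = beacons[0], dists[0]
    A = 2.0 * (beacons[1:] - b0)
    b = (np.sum(beacons[1:]**2, axis=1) - np.sum(b0**2)) - (dists[1:]**2 - d0**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical teammate positions (metres) and measured sonar distances.
beacons = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
true_pos = np.array([1.2, 0.7])
dists = np.linalg.norm(beacons - true_pos, axis=1)
print(trilaterate(beacons, dists))   # ~ [1.2, 0.7]
```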

6.
In low-altitude surveying, mapping and remote sensing and in land-based mobile mapping systems, the position and orientation of a moving platform depend mainly on the global positioning system (GPS) and the inertial navigation system. However, GPS signals are unavailable in applications such as deep-space exploration and indoor robot control. In such circumstances, image-based methods are very important for the self-positioning and orientation of a moving platform. Therefore, this paper first reviews the state of the art of image-based self-position and orientation methods (ISPOM) for moving platforms from the following aspects: 1) a comparison among the major image-based methods (i.e., visual odometry, structure from motion, and simultaneous localization and mapping) for position and orientation; 2) types of moving platforms; 3) integration schemes of the image sensor with other sensors; 4) calculation methodology and the number of image sensors. The paper then proposes a new ISPOM scheme for mobile robots that depends solely on image sensors. It takes advantage of both monocular and stereo vision, and estimates the relative position and orientation of the moving platform with high precision and high frequency. In short, ISPOM will gradually move from research to application and play a vital role in deep-space exploration and indoor robot control.

7.
8.
This paper presents a novel approach to the vision-based grid map building and localization problem that works in a complex indoor environment with a single forward-viewing camera. Most existing visual SLAM approaches have been limited to feature-based methods, and only a few researchers have proposed visual SLAM methods that build a grid map using a stereo vision system, which has not been popular in practical applications. In this paper, we estimate planar depth by applying a simple visual sonar ranging technique to the single-camera image, associate sequential scans through our own pseudo-dense adaptive scan-matching algorithm, which reduces the processing time compared to the standard point-to-point correspondence-based algorithm, and finally produce a grid map. To this end, we construct a Pseudo Dense Scan (PDS), an odometry-based temporal accumulation of the visual sonar readings emulating omni-directional sensing, in order to overcome the sparseness of the visual sonar. Moreover, to obtain a more refined map, we further correct the slight trajectory error incurred in the PDS construction step using Sequential Quadratic Programming (SQP), a well-known optimization scheme. Experimental results show that our method can obtain an accurate grid map using a single camera without the need for high-priced range sensors or a stereo camera.
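"Visual sonar" ranging from a single forward image is commonly implemented with a flat-floor assumption: the image row at which the floor/obstacle boundary appears maps to a range through the camera height and tilt. The sketch below shows only that geometric mapping; the camera parameters are placeholder values and the boundary-detection step is omitted, so this is an illustration of the idea rather than the paper's exact pipeline.

```python
import numpy as np

def row_to_range(v, cam_height=0.35, focal_px=500.0, cy=240.0, tilt_rad=0.0):
    """Map an image row v (pixels, measured down from the top) of the
    floor/obstacle boundary to a ground range, assuming a flat floor.
    The ray through row v dips below the horizon by tilt + atan((v - cy)/f);
    range = camera_height / tan(angle)."""
    angle = tilt_rad + np.arctan((v - cy) / focal_px)
    if angle <= 0:
        return np.inf            # at or above the horizon: no floor hit
    return cam_height / np.tan(angle)

# Example: floor/obstacle boundaries detected at a few image rows
# (hypothetical values).
for v in (260.0, 300.0, 400.0):
    print(f"row {v:.0f} -> range {row_to_range(v):.2f} m")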
```

9.
We present a novel solution for topological exploration in corridor environments using cheap and error-prone sonar sensors. Topological exploration requires the detection of significant locations and motion planning. To detect nodes (i.e., significant places) robustly, we propose a new measure, the eigenvalue ratio (EVR), which converts geometrical shapes in the environment into quantitative values using principal component analysis. For planning the safe motion of a robot, we propose the circle following (CF) method, which abstracts the geometry of the environment while taking the characteristics of the sonar sensors into consideration. Integrating the EVR with the CF method yields a topological exploration strategy based on sonar sensors. The practicality of this approach is demonstrated by simulations and real experiments in corridor environments.
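The eigenvalue ratio used for node detection can be computed from accumulated sonar points with a plain PCA. A minimal sketch follows; the threshold for declaring a node and the circle-following controller are not included, and the point clouds are synthetic.

```python
import numpy as np

def eigenvalue_ratio(points):
    """Ratio of the principal-axis eigenvalues of a 2-D point cloud.
    A corridor gives a strongly elongated cloud (large ratio); an open
    junction or room gives a more isotropic cloud (ratio near 1)."""
    centered = points - points.mean(axis=0)
    cov = np.cov(centered.T)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return eigvals[0] / max(eigvals[1], 1e-9)

rng = np.random.default_rng(1)
corridor = np.column_stack([rng.uniform(0, 10, 300), rng.normal(0, 0.3, 300)])
junction = rng.normal(0, 1.5, size=(300, 2))
print("corridor EVR:", round(eigenvalue_ratio(corridor), 1))
print("junction EVR:", round(eigenvalue_ratio(junction), 1))
```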

10.
Intelligent autonomous mobile robots must be able to sense and recognize the 3D indoor space in which they live or work. However, robots are frequently situated in cluttered environments with various objects that are hard to perceive robustly. Although monocular and binocular vision sensors have been widely used for mobile robots, they suffer from image intensity variations, insufficient feature information, and correspondence problems. In this paper, we propose a new 3D sensing system based on the laser structured-lighting method, chosen for its robustness to the nature of the navigation environment and the easy extraction of the feature information of interest. The proposed active trinocular vision system is composed of a flexible multi-stripe laser projector and two cameras arranged in a triangular shape. By modeling the laser projector as a virtual camera and using the trinocular epipolar constraints, matching pairs of line features observed in the two real camera images are established, and 3D information can be extracted from a one-shot image of the patterned scene. For robust feature matching, we propose a new correspondence matching technique based on line grouping and probabilistic voting. Finally, a series of experimental tests shows the simplicity, efficiency, and accuracy of the proposed sensor system for 3D environment sensing and recognition.

11.
This article describes a method of producing high-resolution maps of an indoor environment with an autonomous mobile robot equipped with sonar range-finding sensors. The method is based on investigating obstacles in the near vicinity of the mobile robot. The robot examines the straight line segments extracted from the sonar range data describing nearby obstacles, then moves parallel to these straight sonar segments, in close proximity to the obstacles, continually applying the sonar barrier test. The sonar barrier test exploits the physical constraints of sonar data and eliminates noisy readings; it determines whether a sonar line segment is a true obstacle edge or a false reflection. Low-resolution sonar sensors can be used with the described method. The performance of the algorithm is demonstrated using a Denning Corp. mobile robot equipped with a ring of Polaroid Corp. ultrasonic rangefinders.

12.
This paper addresses the problem of exploring and mapping an unknown environment using a robot equipped with a stereo vision sensor. The main contribution of our work is a fully automatic mapping system that operates without the use of active range sensors (such as laser or sonic transducers), can operate on-line and can consistently produce accurate maps of large-scale environments. Our approach implements a Rao-Blackwellised particle filter (RBPF) to solve the simultaneous localization and mapping problem and uses efficient data structures for real-time data association, mapping, and spatial reasoning. We employ a hybrid map representation that infers 3D point landmarks from image features to achieve precise localization, coupled with occupancy grids for safe navigation. We demonstrate two exploration approaches, one based on a greedy strategy and one based on an iteratively deepening strategy. This paper describes our framework and implementation, presents our exploration method, and reports experimental results illustrating the functionality of the system.
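The particle-weighting and resampling loop at the heart of a Rao-Blackwellised particle filter can be sketched as below. The motion and observation models are toy stand-ins, and the per-particle map that a full RBPF maintains is omitted; the numbers are placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100
particles = rng.normal([0.0, 0.0], 0.1, size=(N, 2))  # hypothesised robot positions
weights = np.full(N, 1.0 / N)

def observation_likelihood(p, z, sigma=0.2):
    """Toy observation model: z is a noisy measurement of the position."""
    return np.exp(-0.5 * np.sum((p - z) ** 2) / sigma**2)

# One filter step: sample motion, weight by the observation, resample.
control = np.array([0.5, 0.0])                        # commanded displacement
particles += control + rng.normal(0, 0.05, size=(N, 2))
z = np.array([0.52, 0.03])                            # hypothetical observation
weights *= np.array([observation_likelihood(p, z) for p in particles])
weights /= weights.sum()

n_eff = 1.0 / np.sum(weights**2)                      # effective sample size
if n_eff < N / 2:                                     # resample when degenerate
    idx = rng.choice(N, size=N, p=weights)
    particles, weights = particles[idx], np.full(N, 1.0 / N)

print("estimate:", (weights[:, None] * particles).sum(axis=0))
```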

13.
尹磊, 彭建盛, 江国来, 欧勇盛. 《集成技术》, 2019, 8(2): 11-22
Lidar and visual sensing are currently the two main localization and navigation technologies for service robots, but existing low-cost lidars offer limited localization accuracy and cannot perform large-scale loop-closure detection, while feature maps built with vision alone are not suitable for navigation. This paper therefore studies an indoor robot equipped with a low-cost lidar and a visual sensor, and proposes a localization and mapping method that combines laser and vision: laser point-cloud data and image feature points are fused, and the robot pose is optimized with a sparse pose adjustment based approach. Meanwhile, a bag-of-words model built on visual features is used for loop-closure detection, and the occupancy grid map built from the laser point cloud is further refined. Experimental results in real scenes show that, compared with laser-only or vision-only localization and mapping, the multi-sensor fusion approach achieves higher localization accuracy and effectively solves the loop-closure detection problem.
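The bag-of-words loop-closure test used in this kind of pipeline boils down to comparing visual-word histograms of the current frame and earlier keyframes. The minimal cosine-similarity sketch below omits vocabulary construction and geometric verification, and its vocabulary size, threshold, and synthetic histograms are assumptions.

```python
import numpy as np

def bow_similarity(hist_a, hist_b):
    """Cosine similarity between two visual-word histograms."""
    a = hist_a / (np.linalg.norm(hist_a) + 1e-12)
    b = hist_b / (np.linalg.norm(hist_b) + 1e-12)
    return float(a @ b)

def detect_loop(current_hist, keyframe_hists, threshold=0.8):
    """Return the index of the best-matching earlier keyframe, or None."""
    scores = [bow_similarity(current_hist, h) for h in keyframe_hists]
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None

rng = np.random.default_rng(3)
vocab_size = 500
keyframes = [rng.poisson(1.0, vocab_size).astype(float) for _ in range(50)]
revisit = keyframes[17] + rng.poisson(0.2, vocab_size)   # same place, new image
print("loop closes with keyframe:", detect_loop(revisit, keyframes))
```

A detected loop would then add a constraint to the sparse pose-adjustment problem, which is what pulls the accumulated drift out of the laser grid map.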

14.
In this paper, we provide a systematic study of the task of sensor planning for object search. The search agent's knowledge of object location is encoded as a discrete probability density which is updated whenever a sensing action occurs. Each sensing action of the agent is defined by a viewpoint, a viewing direction, a field-of-view, and the application of a recognition algorithm. The formulation casts sensor planning as an optimization problem: the goal is to maximize the probability of detecting the target with minimum cost. This problem is proved to be NP-Complete, thus a heuristic strategy is favored. To port the theoretical framework to a real working system, we propose a sensor planning strategy for a robot equipped with a camera that can pan, tilt, and zoom. In order to efficiently determine the sensing actions over time, the huge space of possible actions with fixed camera position is decomposed into a finite set of actions that must be considered. The next action is then selected from among these by comparing the likelihood of detection and the cost of each action. When detection is unlikely at the current position, the robot is moved to another position for which the probability of target detection is the highest.
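Because exact planning is NP-complete, a practical heuristic is greedy selection over the finite candidate set. The sketch below picks the action with the best ratio of detection probability to cost and falls back to relocating the robot when no action is promising; the candidate actions, probabilities, costs, and the `min_expected_gain` threshold are all placeholder assumptions, and the exact comparison rule in the paper may differ.

```python
# Greedy next-view selection: maximise expected detection per unit cost.
# Candidate actions are (pan, tilt, zoom) settings with a precomputed
# probability of detecting the target and an execution cost (assumed values).
candidates = [
    {"action": ("pan=-30", "tilt=0",  "zoom=1"), "p_detect": 0.10, "cost": 1.0},
    {"action": ("pan=0",   "tilt=0",  "zoom=2"), "p_detect": 0.35, "cost": 2.0},
    {"action": ("pan=30",  "tilt=10", "zoom=1"), "p_detect": 0.25, "cost": 1.0},
]

def next_action(candidates, min_expected_gain=0.05):
    """Pick the action with the highest p_detect/cost ratio; if even the
    best one is unlikely to pay off, signal that the robot should move."""
    best = max(candidates, key=lambda c: c["p_detect"] / c["cost"])
    if best["p_detect"] < min_expected_gain:
        return None                      # relocate the robot instead
    return best

print(next_action(candidates))
```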

15.
Wireless sensor networks (WSNs) are used in several applications such as healthcare devices, aerospace systems, the automobile industry, and security monitoring. However, WSNs face several challenges in improving the efficiency, robustness, fault tolerance, and reliability of their sensors; cooperation between sensors is therefore an important factor that increases sensor trust. Cooperative WSNs can be used to optimize the exploration of an unknown area in a distributed way. In this paper, a distributed Markovian model strategy is used because of its reasoning over past states, and the exploration strategy depends entirely on the wireless communication protocol. Hence, we propose an efficient cooperative strategy based on cognitive radio and software-defined radio, which are promising technologies that increase spectral utilization and optimize the use of radio resources. We implement a distributed exploration strategy (DES) on mobile robots, and several experiments have been performed to localize targets while avoiding obstacles. Experiments were performed with several exploration robots, and a comparison with another exploration strategy shows that DES improves the robots' exploration.

16.
This paper presents a novel method that enhances the use of external sensing by considering a multisensor system composed of sonars and a CCD camera. Monocular vision provides redundant information about the location of the geometric entities detected by the sonar sensors. To reduce ambiguity significantly, an improved and more detailed sonar model is utilized. Moreover, the Hough transform is used to extract features from the raw sonar data and the vision image, and the information is fused at the feature level. This technique significantly improves the reliability and precision of the environment observations used for the simultaneous localization and map building problem for mobile robots. Experimental results validate the favorable performance of this approach.
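Extracting line features from raw sonar returns with a Hough transform can be sketched with a simple (rho, theta) accumulator, as below. The resolution and vote threshold are assumptions, the sonar points are synthetic, and the vision-side fusion is left out.

```python
import numpy as np

def hough_lines(points, rho_res=0.05, theta_bins=180, min_votes=40):
    """Vote each 2-D point into a (rho, theta) accumulator and return the
    parameters of cells that collected enough votes.  Adjacent cells of the
    same physical line may each exceed the threshold (near-duplicates)."""
    thetas = np.linspace(0.0, np.pi, theta_bins, endpoint=False)
    rhos = points @ np.vstack([np.cos(thetas), np.sin(thetas)])  # (N, theta_bins)
    rho_max = np.abs(rhos).max() + rho_res
    rho_idx = np.round((rhos + rho_max) / rho_res).astype(int)
    acc = np.zeros((int(2 * rho_max / rho_res) + 2, theta_bins), dtype=int)
    for t in range(theta_bins):
        np.add.at(acc[:, t], rho_idx[:, t], 1)
    peaks = np.argwhere(acc >= min_votes)
    return [((r * rho_res) - rho_max, thetas[t]) for r, t in peaks]

# Synthetic sonar returns along a wall y = 1.0 with noise.
rng = np.random.default_rng(4)
wall = np.column_stack([rng.uniform(0, 4, 80), 1.0 + rng.normal(0, 0.02, 80)])
for rho, theta in hough_lines(wall):
    print(f"line: rho={rho:.2f} m, theta={np.degrees(theta):.0f} deg")
```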

17.
In this paper, we address the problem of building a grid map as accurately as possible using inexpensive and error-prone sonar sensors. In this research area, incorrect sonar measurements, which fail to detect the nearest obstacle in their beamwidth, generally have been dealt with in the same manner as correct measurements or have been excluded from the mapping. In the former case, the map quality may be severely degraded. In the latter case, the resulting map may have insufficient information after the incorrect measurements are removed, because the correct measurements alone are frequently insufficient to cover the whole environment. We propose an efficient grid-mapping approach that incorporates incorrect measurements in a specialized manner to build a better map; we call this the enhanced maximum likelihood (eML) approach. The eML approach fuses the correct and incorrect measurements into a map based on sub-maps generated from each set of measurements. We also propose the maximal sound pressure (mSP) method to detect incorrect sonar readings using the sound pressure of the waves from sonar sensors. In several indoor experiments, integrating the eML approach with the mSP method achieved the best results in terms of map quality among various mapping approaches. We call this the maximum likelihood based on sub-maps (MLS) approach. The MLS map created using only two sonar sensors exhibited similar accuracy to the reference map, which was an accurate representation of the environment.

18.
This paper presents a novel approach to the real-time SLAM problem that works in an unstructured indoor environment with a single forward-viewing camera. Most existing visual SLAM methods extract features from the environment, associate them across different images, and produce a feature map as a result. Instead, we estimate the distances between the robot and the obstacles by applying a visual sonar ranging technique to the image, associate this range data through the Iterative Closest Point (ICP) algorithm, and finally produce a grid map. Moreover, we construct a pseudo-dense scan (PDS), essentially a temporal accumulation of data based on odometry readings that emulates dense omni-directional sensing of the visual sonar, in order to overcome the sparseness of the visual sonar, and then associate this scan with the previous one. We further correct the slight trajectory error incurred in the PDS construction step to obtain a much more refined map using Sequential Quadratic Programming (SQP), a well-known optimization scheme. Experimental results show that our method can obtain an accurate grid map using a single camera alone, without the need for more expensive range sensors or a stereo camera.
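A minimal point-to-point ICP iteration of the kind used here, nearest-neighbour association followed by a closed-form SVD alignment, is sketched below. Convergence checks, outlier rejection, and the PDS construction are omitted, and the scans are synthetic.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form 2-D rigid alignment (Kabsch/SVD) mapping src onto dst."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp(src, dst, iterations=20):
    """Point-to-point ICP: associate by nearest neighbour, then re-align."""
    cur = src.copy()
    for _ in range(iterations):
        # brute-force nearest neighbours (fine for small scans)
        d2 = np.sum((cur[:, None, :] - dst[None, :, :]) ** 2, axis=2)
        matches = dst[np.argmin(d2, axis=1)]
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
    return cur

# Toy scans: the second scan is the first one rotated and shifted.
rng = np.random.default_rng(5)
scan_a = rng.uniform(0, 3, size=(100, 2))
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
scan_b = scan_a @ R_true.T + np.array([0.2, -0.1])
aligned = icp(scan_a, scan_b)
print("residual after ICP:", np.linalg.norm(aligned - scan_b))
```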

19.
Simultaneous localization and mapping (SLAM) has long been the core problem for autonomous navigation and obstacle avoidance of mobile robots: the robot must use sensors to detect surrounding objects while building a map of the corresponding area. Traditional 1D and 2D sensors, such as ultrasonic sensors, sonar, and laser rangefinders, cannot capture information along the Z axis (vertical direction) during mapping, which increases the probability of collisions and degrades the accuracy of the resulting map. In this paper, a Kinect is used as the SLAM sensor: the 3D information it captures is converted into 2D laser-style data for map building, and the robot operating system (ROS) is used for both simulation analysis and real-world testing. The results show that the Kinect compensates for the shortcomings of 1D and 2D sensors while maintaining good map completeness and reliability, making it suitable for indoor mobile robot SLAM.
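The core conversion, collapsing Kinect depth data into a 2-D laser-style scan, can be sketched as taking, for each image column, the nearest obstacle over a band of rows and turning the column index into a bearing with the pinhole model. The intrinsics, image size, and row band below are placeholder values, not the actual Kinect calibration, and depth is used directly as range (a small-angle simplification).

```python
import numpy as np

def depth_to_scan(depth, fx=570.0, cx=320.0, min_row=120, max_row=360):
    """Collapse a depth image (metres) into a 2-D range scan.
    For each column, take the nearest valid depth over a band of rows
    (so obstacles above or below the sensor plane are kept), then convert
    the column index to a bearing with the pinhole model."""
    band = depth[min_row:max_row, :].copy()
    band[band <= 0.0] = np.inf                       # mask invalid pixels
    ranges = band.min(axis=0)                        # nearest return per column
    cols = np.arange(depth.shape[1])
    bearings = np.arctan((cols - cx) / fx)           # radians, 0 = straight ahead
    return bearings, ranges

# Synthetic depth image: a wall at 2.5 m with a box at 1.0 m on the right.
depth = np.full((480, 640), 2.5)
depth[200:330, 420:520] = 1.0
bearings, ranges = depth_to_scan(depth)
print("range straight ahead:", ranges[320], "m")
print("range at +15 deg:", ranges[np.argmin(np.abs(bearings - np.radians(15)))], "m")
```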

20.
In this paper, we present a multi-robot exploration strategy for map building. We consider an indoor structured environment and a team of robots with different sensing and motion capabilities. We combine geometric and probabilistic reasoning to propose a solution to our problem. We formalize the proposed solution using stochastic dynamic programming (SDP) in states with imperfect information. Our modeling can be considered as a partially observable Markov decision process (POMDP), which is optimized using SDP. We apply the dynamic programming technique in a reduced search space that allows us to incrementally explore the environment. We propose realistic sensor models and provide a method to compute the probability of the next observation given the current state of the team of robots based on a Bayesian approach. We also propose a probabilistic motion model, which allows us to take into account errors (noise) on the velocities applied to each robot. This modeling also allows us to simulate imperfect robot motions, and to estimate the probability of reaching the next state given the current state. We have implemented all our algorithms and simulation results are presented.
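A minimal finite-horizon stochastic dynamic programming (backward value iteration) sketch over a small discretized state space is given below. The states, transition probabilities, and rewards are toy placeholders, and the partial observability of the original formulation is not modelled.

```python
import numpy as np

# Toy stochastic dynamic program: 4 abstract exploration states, 2 actions.
# P[a, s, s'] is the probability of reaching s' from s under action a
# (assumed values); R[a, s] is the expected information gain minus cost.
P = np.array([
    [[0.8, 0.2, 0.0, 0.0],     # action 0
     [0.1, 0.7, 0.2, 0.0],
     [0.0, 0.1, 0.7, 0.2],
     [0.0, 0.0, 0.0, 1.0]],
    [[0.5, 0.5, 0.0, 0.0],     # action 1
     [0.0, 0.5, 0.5, 0.0],
     [0.0, 0.0, 0.5, 0.5],
     [0.0, 0.0, 0.0, 1.0]],
])
R = np.array([
    [1.0, 0.5, 0.2, 0.0],
    [1.5, 1.0, 0.5, 0.0],
])

horizon = 5
V = np.zeros(4)                         # terminal value
for _ in range(horizon):                # backward recursion over the horizon
    Q = R + P @ V                       # Q[a, s] = R[a, s] + E[V(next state)]
    policy = Q.argmax(axis=0)
    V = Q.max(axis=0)

print("best first action per state:", policy)
print("expected value per state:", V.round(2))
```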
