Similar Documents
20 similar documents found.
1.
《Knowledge》2006,19(5):324-332
We present a system for visual robotic docking that couples an omnidirectional camera with the actor-critic reinforcement learning algorithm. The system enables a PeopleBot robot to locate and approach a table so that it can pick an object from it using the pan-tilt camera mounted on the robot. We use a staged approach, since the problem decomposes into distinct subtasks that use different sensors. The robot first wanders randomly until the table is located via a landmark; a network trained by reinforcement then allows the robot to turn towards and approach the table. Once at the table, the robot picks the object from it. We argue that our approach has considerable potential for learning robot control for navigation, since it removes the need for internal maps of the environment. This is achieved by allowing the robot to learn couplings between motor actions and the position of a landmark.
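The actor-critic scheme described above can be sketched in tabular form. Everything below (state discretization, learning rates, softmax action selection) is an illustrative assumption; the paper itself trains a network on camera input.

```python
import math
import random

# Minimal tabular actor-critic sketch (hypothetical discretization; the
# paper's network-based version and PeopleBot specifics are not reproduced).
N_STATES, N_ACTIONS = 10, 3      # e.g. coarse bearing-to-landmark bins x {left, forward, right}
ALPHA, BETA, GAMMA = 0.1, 0.1, 0.9

V = [0.0] * N_STATES                                   # critic: state-value estimates
pref = [[0.0] * N_ACTIONS for _ in range(N_STATES)]    # actor: action preferences

def select_action(state):
    # softmax (Gibbs) action selection over the actor's preferences
    exps = [math.exp(p) for p in pref[state]]
    z = sum(exps)
    r, acc = random.random(), 0.0
    for a, e in enumerate(exps):
        acc += e / z
        if r <= acc:
            return a
    return N_ACTIONS - 1

def actor_critic_update(s, a, reward, s_next):
    # the TD error drives both the critic and the actor
    delta = reward + GAMMA * V[s_next] - V[s]
    V[s] += ALPHA * delta           # critic moves toward the TD target
    pref[s][a] += BETA * delta      # actor reinforces the taken action
    return delta
```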

2.
《Advanced Robotics》2013,27(9):925-950
Since intelligent robotic systems work in real environments, it is important that they can determine their own internal condition. We therefore consider diagnosis essential for such intelligent systems, and construct a self-diagnosis system for an autonomous mobile robot. Autonomous mobile systems must carry a self-contained diagnostic system, which restricts what can be built onto a mobile robot. In this paper, we describe an internal-state sensory system and a method for diagnosing conditions in an autonomous mobile robot. The prototype of our internal sensory system consists of voltage sensors, current sensors and encoders. We show experimental results of the diagnosis using an omnidirectional mobile robot and the developed system. We also propose a method to cope with faulty internal conditions using internal sensory information. We focus on the functional units of a single robot system and categorize faulty conditions into three levels. The measures taken to cope with a faulty condition are set for each level, enabling the robot to continue executing its task. We show experimental results using an omnidirectional mobile robot equipped with the self-diagnosis system and our proposed method.
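A three-level fault categorization from internal sensor readings could look roughly like the sketch below. The sensor set (voltage, current) matches the abstract, but the thresholds and level semantics are invented for illustration, not taken from the paper.

```python
# Hypothetical three-level fault categorization from internal sensors
# (thresholds are illustrative assumptions, not the paper's criteria).
def diagnose(voltage, current, nominal_v=12.0, max_i=2.0):
    """Return 0 (normal), 1 (minor: continue with caution),
    2 (serious: reduce the task), 3 (fatal: stop and report)."""
    v_drop = (nominal_v - voltage) / nominal_v
    if current > 2 * max_i or v_drop > 0.5:
        return 3
    if current > max_i or v_drop > 0.25:
        return 2
    if v_drop > 0.1:
        return 1
    return 0
```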

3.
In this paper, we present a feature-based approach for monocular scene reconstruction based on Extended Kalman Filters (EKF). Our method processes a sequence of images taken by a single camera mounted frontally on a mobile robot. Using a combination of various techniques, we are able to produce a precise reconstruction that is free from outliers and can therefore be used for reliable obstacle detection and 3D map building. Furthermore, we present an attention-driven method that focuses the feature selection to image areas where the obstacle situation is unclear and where a more detailed scene reconstruction is necessary. In extensive real-world field tests we show that the presented approach is able to detect obstacles that are not seen by other sensors, such as laser range finders. Furthermore, we show that visual obstacle detection combined with a laser range finder can increase the detection rate of obstacles considerably, allowing the autonomous use of mobile robots in complex public and home environments.
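The EKF machinery underlying such a reconstruction can be sketched in one dimension. This shows only the generic predict/update cycle with user-supplied models, not the paper's feature-state parametrization.

```python
# Generic EKF predict/update step in one dimension: f is the (possibly
# nonlinear) motion model with Jacobian F, h the measurement model with
# Jacobian H, Q and R the process and measurement noise variances.
def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    # predict the state and its variance through the motion model
    x_pred = f(x, u)
    P_pred = F * P * F + Q
    # update with the measurement z
    y = z - h(x_pred)              # innovation
    S = H * P_pred * H + R         # innovation variance
    K = P_pred * H / S             # Kalman gain
    x_new = x_pred + K * y
    P_new = (1 - K * H) * P_pred
    return x_new, P_new
```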

4.
The first objective of this research was to develop an omnidirectional home care mobile robot. A PC-based controller controls the mobile robot platform. This service mobile robot is equipped with an "indoor positioning system" and an obstacle avoidance system. The indoor positioning system is used for rapid and precise positioning and guidance of the mobile robot. The obstacle avoidance system can detect static and dynamic obstacles. In order to understand the stability of a three-wheeled omnidirectional mobile robot, we carried out experiments to measure the rectangular and circular path errors of the proposed mobile robot. From the experimental results, we found that the path error was smaller with the guidance of the localization system. The mobile robot can also return to its starting point. The localization system can successfully maintain the robot's heading angle along a circular path.

5.
Monocular Vision for Mobile Robot Localization and Autonomous Navigation
This paper presents a new real-time localization system for a mobile robot. We show that autonomous navigation is possible in outdoor situations with the use of a single camera and natural landmarks. To do that, we use a three-step approach. In a learning step, the robot is manually guided on a path and a video sequence is recorded with a front-looking camera. Then a structure-from-motion algorithm is used to build a 3D map from this learning sequence. Finally, in the navigation step, the robot uses this map to compute its localization in real time, and it follows the learned path or a slightly different path if desired. The vision algorithms used for map building and localization are first detailed. Then a large part of the paper is dedicated to the experimental evaluation of the accuracy and robustness of our algorithms, based on experimental data collected over two years in various environments.

6.
The localization problem for an autonomous robot moving in a known environment is a well-studied problem which has seen many elegant solutions. Robot localization in a dynamic environment populated by several moving obstacles, however, is still a challenge for research. In this paper, we use an omnidirectional camera mounted on a mobile robot to perform a sort of scan matching. The omnidirectional vision system finds the distances of the closest color transitions in the environment, mimicking the way laser rangefinders detect the closest obstacles. The similarity of our sensor with classical rangefinders allows the use of practically unmodified Monte Carlo algorithms, with the additional advantage of being able to easily detect occlusions caused by moving obstacles. The proposed system was initially implemented in the RoboCup Middle-Size domain, but the experiments we present in this paper prove it to be valid in a general indoor environment with natural color transitions. We present localization experiments both in the RoboCup environment and in an unmodified office environment. In addition, we assessed the robustness of the system to sensor occlusions caused by other moving robots. The localization system runs in real-time on low-cost hardware.
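The Monte Carlo localization loop with range-like color-transition measurements can be sketched in one dimension. The map, noise levels, and Gaussian weighting below are illustrative assumptions, not the paper's models.

```python
import math
import random

# Monte Carlo localization sketch on a 1-D corridor: distances to the
# nearest "color transition" stand in for laser ranges, as in the paper.
TRANSITIONS = [0.0, 2.0, 5.0, 9.0]          # known transition positions (assumed map)

def nearest_transition_dist(x):
    return min(abs(x - t) for t in TRANSITIONS)

def mcl_step(particles, motion, measured_dist, noise=0.3):
    # 1) motion update with additive noise
    moved = [p + motion + random.gauss(0, noise) for p in particles]
    # 2) weight each particle by agreement with the range-like measurement
    weights = [math.exp(-((nearest_transition_dist(p) - measured_dist) ** 2)
                        / (2 * noise ** 2)) for p in moved]
    total = sum(weights) or 1e-12
    weights = [w / total for w in weights]
    # 3) resample proportionally to weight
    return random.choices(moved, weights=weights, k=len(moved))
```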

7.
汤一平  姜荣剑  林璐璐 《计算机科学》2015,42(3):284-288, 315
To address the heavy computational load, limited real-time performance, and restricted detection range of existing mobile-robot vision systems, an obstacle detection method based on an active omnidirectional vision sensor (AODVS) is proposed. First, a single-viewpoint omnidirectional vision sensor (ODVS) is integrated with a plane-laser generator built from four red line lasers arranged in a single plane, so that obstacles around the mobile robot are detected by active panoramic vision. Second, the robot's panoramic perception module extracts the distance and bearing of surrounding obstacles by visually processing the laser light projected onto them by the plane-laser generator. Finally, based on this information, an omnidirectional obstacle-avoidance strategy achieves fast avoidance. Experimental results show that the AODVS-based obstacle detection method enables fast and efficient obstacle avoidance while reducing the computational demands on the mobile robot.
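Recovering obstacle distance from the projected laser plane reduces to a simple triangulation. The rig dimensions and the pixel-to-elevation-angle mapping below are assumptions for illustration, not the paper's calibration.

```python
import math

# Triangulation sketch for a plane-laser + omnidirectional-camera rig:
# the laser plane is horizontal at height h_laser, the camera's single
# viewpoint is at height h_cam, and an image point maps to an elevation
# angle alpha below the viewpoint (all dimensions are assumed values).
def obstacle_range(alpha, h_cam=0.8, h_laser=0.2):
    # The laser stripe lies on the obstacle at height h_laser; the viewing
    # ray descends (h_cam - h_laser) metres over the horizontal range.
    return (h_cam - h_laser) / math.tan(alpha)
```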

8.
A robust topological navigation strategy for an omnidirectional mobile robot using an omnidirectional camera is described. The navigation system is composed of off-line and on-line stages. During the off-line learning stage, the robot executes paths based on a motion model of its omnidirectional drive structure and records a set of ordered key images from the omnidirectional camera. From this sequence a topological map is built using a probabilistic technique and a loop-closure detection algorithm, which deals with the perceptual aliasing problem in the mapping process. Each topological node provides a set of omnidirectional images characterized by geometric, affine- and scale-invariant keypoints computed with a GPU implementation. Given a topological node as a target, the robot's navigation mission is a concatenation of topological node subsets. In the on-line navigation stage, the robot hierarchically localizes itself to the most likely node through a robust probability-distribution global localization algorithm, and estimates its relative pose within the topological node using an effective solution to the classical five-point relative pose estimation algorithm. The robot is then controlled by a vision-based control law adapted to omnidirectional cameras to follow the visual path. Experiments carried out with a real robot in an indoor environment show the performance of the proposed method.

9.
Real-Time Omnidirectional Image Sensors
Conventional T.V. cameras are limited in their field of view. A real-time omnidirectional camera which can acquire an omnidirectional (360 degrees) field of view at video rate and which could be applied in a variety of fields, such as autonomous navigation, telepresence, virtual reality and remote monitoring, is presented. We have developed three different types of omnidirectional image sensors, and two different types of multiple-image sensing systems which consist of an omnidirectional image sensor and binocular vision. In this paper, we describe the outlines and fundamental optics of our developed sensors and show examples of applications for robot navigation.

10.
The structural features inherent in the visual motion field of a mobile robot contain useful clues about its navigation. The combination of these visual clues and additional inertial sensor information may allow reliable detection of the navigation direction for a mobile robot and also the independent motion that might be present in the 3D scene. The motion field, which is the 2D projection of the 3D scene variations induced by the camera‐robot system, is estimated through optical flow calculations. The singular points of the global optical flow field of omnidirectional image sequences indicate the translational direction of the robot as well as the deviation from its planned path. It is also possible to detect motion patterns of near obstacles or independently moving objects of the scene. In this paper, we introduce the analysis of the intrinsic features of the omnidirectional motion fields, in combination with gyroscopical information, and give some examples of this preliminary analysis. © 2004 Wiley Periodicals, Inc.
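The singular point of a purely translational flow field (the focus of expansion) can be estimated by least squares. This is a generic planar-image sketch of the idea, not the paper's omnidirectional formulation.

```python
# Least-squares focus-of-expansion estimate: for pure translation every
# flow vector v at pixel p lies on a ray through the FOE, so
# cross(p - foe, v) = 0, i.e.  vy*fx - vx*fy = px*vy - py*vx.
def focus_of_expansion(points, flows):
    # accumulate the 2x2 normal equations by hand
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (px, py), (vx, vy) in zip(points, flows):
        r = px * vy - py * vx
        a11 += vy * vy
        a12 += -vy * vx
        a22 += vx * vx
        b1 += vy * r
        b2 += -vx * r
    det = a11 * a22 - a12 * a12
    fx = (a22 * b1 - a12 * b2) / det
    fy = (a11 * b2 - a12 * b1) / det
    return fx, fy
```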

11.
Described here is a visual navigation method for navigating a mobile robot along a man-made route such as a corridor or a street. We have proposed an image sensor, named HyperOmni Vision, with a hyperboloidal mirror for vision-based navigation of the mobile robot. This sensing system can acquire an omnidirectional view around the robot in real time. In the case of the man-made route, road boundaries between the ground plane and wall appear as a close-looped curve in the image. By making use of this optical characteristic, the robot can avoid obstacles and move along the corridor by tracking the close-looped curve with an active contour model. Experiments that have been done in a real environment are described.

12.
Due to their wide field of view, omnidirectional cameras are becoming ubiquitous in many mobile robotic applications. A challenging problem consists of using these sensors, mounted on mobile robotic platforms, as visual compasses (VCs) to estimate the rotational motion of the camera/robot from the omnidirectional video stream. Existing VC algorithms suffer from practical limitations, since they require precise knowledge of either the camera-calibration parameters or the 3-D geometry of the observed scene. In this paper we present a novel multiple-view geometry constraint for paracatadioptric views of lines in 3-D, which we use to design a VC algorithm that requires knowledge of neither the camera-calibration parameters nor the 3-D scene geometry. In addition, our algorithm runs in real time, since it relies on a closed-form estimate of the camera/robot rotation, and can address the image-feature correspondence problem. Extensive simulations and experiments with real robots show the accuracy and robustness of the proposed method.
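A common baseline for a visual compass, included here only to make the idea concrete, correlates column profiles of unwrapped panoramic images: a pure rotation is approximately a circular column shift. Note this is not the paper's calibration-free line-geometry constraint.

```python
# Visual-compass baseline sketch: find the circular shift of the panoramic
# column profile that best matches the previous frame; the shift maps
# directly to a yaw change. (An assumed baseline, not the paper's method.)
def estimate_rotation(cols_prev, cols_cur):
    n = len(cols_prev)
    best_shift, best_score = 0, float("-inf")
    for s in range(n):  # brute-force over all circular shifts
        score = -sum((cols_prev[(i + s) % n] - cols_cur[i]) ** 2
                     for i in range(n))
        if score > best_score:
            best_score, best_shift = score, s
    return best_shift * 360.0 / n   # columns -> degrees
```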

13.
Autonomous environment mapping is an essential part of efficiently carrying out complex missions in unknown indoor environments. In this paper, a low-cost mapping system composed of a web camera with structured light and sonar sensors is presented. We propose a novel exploration strategy based on the frontier concept using this low-cost mapping system. Exploiting the complementary characteristics of the structured-light camera and the sonar sensors, the two sensors are fused so that a mobile robot can explore an unknown environment while mapping it efficiently. The sonar sensors roughly locate obstacles, and the structured-light vision system increases the occupancy probability of obstacles or walls detected by the sonar. To overcome the inaccuracy of frontier-based exploration, we propose an exploration strategy that both delineates obstacles and reveals new regions using the mapping system. Since the processing cost of the vision module is high, we solve a vision-sensing placement problem that minimizes the number of vision measurements, analyzing the geometry of the proposed sonar and vision probability models. Through simulations and indoor experiments, the efficiency of the proposed exploration strategy is demonstrated and compared with other exploration strategies.
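Frontier detection on an occupancy grid, the starting point of the strategy above, can be sketched as follows. The cell-value convention (0 free, 1 occupied, -1 unknown) is an assumption for illustration.

```python
# Frontier detection sketch: a free cell is a frontier if it borders at
# least one unknown cell (values: 0 free, 1 occupied, -1 unknown).
def find_frontiers(grid):
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 0:           # only free cells can be frontiers
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == -1:
                    frontiers.append((r, c))
                    break
    return frontiers
```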

14.
《Advanced Robotics》2013,27(3-4):441-460
This paper describes an omnidirectional vision-based ego-pose estimation method for an in-pipe mobile robot. The robot has been developed for inspecting the inner surface of various pipeline configurations, such as straight pipelines, elbows and multiple branches. Because the proposed in-pipe mobile robot has four individually driven wheels, it can move flexibly in various pipelines. Ego-pose estimation is indispensable for the autonomous navigation of the proposed in-pipe robot. An omnidirectional camera and four laser modules mounted on the mobile robot are used for ego-pose estimation; the omnidirectional camera is also used to inspect the inner surface of the pipeline. The pose of the in-pipe mobile robot is estimated from the relationship between the pose of the robot and the pixel coordinates of the four points where the rays emerging from the laser modules intersect the inside of the pipeline. This relationship is derived from a geometric analysis of the omnidirectional camera and the four laser modules. In experiments, the performance of the proposed method is evaluated by comparing the result of our algorithm with the measurements of a specifically designed sensor, a kind of gyroscope.

15.
An obstacle-avoidance algorithm is presented for autonomous mobile robots equipped with a CCD camera and ultrasonic sensors. The approach uses segmentation techniques to segregate the floor from other fixtures, and measurement techniques to measure the distance between the mobile robot and any obstacles. It uses a simple computation to select a threshold value. The approach also uses a cost function, combining image information, distance information, and a weight factor, to find an obstacle-free path. The algorithm handles cases that include shadow regions and obstacles during visual navigation, and works under various lighting conditions. This work was presented in part at the 7th International Symposium on Artificial Life and Robotics, Oita, Japan, January 16–18, 2002.
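A weighted cost over candidate headings might be sketched as below. The combination rule, normalization, and parameter values are illustrative assumptions, not the paper's exact cost function.

```python
# Sketch of a weighted cost combining an image term (fraction of obstacle
# pixels toward a heading) and a distance term (ultrasonic range); the
# weight factor and normalization are assumed for illustration.
def best_heading(candidates, weight=0.5, max_range=3.0):
    """candidates: list of (heading_deg, obstacle_pixel_ratio, sonar_dist)."""
    def cost(c):
        _, pixel_ratio, dist = c
        # low obstacle coverage and large clearance -> low cost
        return (weight * pixel_ratio
                + (1 - weight) * (1 - min(dist, max_range) / max_range))
    return min(candidates, key=cost)[0]
```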

16.
Localisation and mapping with an omnidirectional camera become more difficult as landmark appearances change dramatically in the omnidirectional image. With conventional techniques, it is difficult to match the features of a landmark with the template. We present a novel robot simultaneous localisation and mapping (SLAM) algorithm with an omnidirectional camera, which uses incremental landmark appearance learning to provide the posterior probability distribution for estimating the robot pose under a particle filtering framework. The major contribution of our work is to represent the posterior estimate of the robot pose by incremental probabilistic principal component analysis, which can be naturally incorporated into the particle filtering algorithm for robot SLAM. Moreover, our method allows the severely distorted landmark appearances viewed with an omnidirectional camera to be used for robot SLAM. The experimental results demonstrate that the localisation error is less than 1 cm in an indoor environment using five landmarks, and that the location of landmark appearances can be estimated within 5 pixels of the ground truth in the omnidirectional image, at a fairly fast speed.

17.
We have designed a mobile robot with a distributed structure for an intelligent life space. The robot was constructed on an aluminum frame. It is cylindrical, with a diameter of 40 cm, a height of 80 cm, and a weight of 40 kg. The mobile robot comprises six systems, including the structure, an obstacle avoidance and driving system, a software development system, a detection module system, a remote supervision system, and others. In the obstacle avoidance and driving system, we use an NI motion control card to drive the robot's two DC servomotors, and detect obstacles using a laser range finder and a laser positioning system. We control the mobile robot with the NI motion control card and a MAXON driver according to the programmed trajectory; the robot can avoid obstacles using the laser range finder and follow the programmed trajectory. We developed a user interface with four functions for the mobile robot. In the security system, we designed module-based security devices that detect dangerous events and transmit the detection results to the mobile robot over a wireless RF interface; the robot can then move to the event position using the laser positioning system.

18.
Rapid, safe, and incremental learning of navigation strategies
In this paper we propose a reinforcement connectionist learning architecture that allows an autonomous robot to acquire efficient navigation strategies in a few trials. Besides rapid learning, the architecture has three further appealing features. First, the robot improves its performance incrementally as it interacts with an initially unknown environment, and it ends up learning to avoid collisions even in those situations in which its sensors cannot detect the obstacles. This is a definite advantage over nonlearning reactive robots. Second, since it learns from basic reflexes, the robot is operational from the very beginning and the learning process is safe. Third, the robot exhibits high tolerance to noisy sensory data and good generalization abilities. All these features make this learning robot's architecture very well suited to real-world applications. We report experimental results obtained with a real mobile robot in an indoor environment that demonstrate the appropriateness of our approach to real autonomous robot control.

19.
Complex robot tasks are usually described as high level goals, with no details on how to achieve them. However, details must be provided to generate primitive commands to control a real robot. A sensor explication concept that makes details explicit from general commands is presented. We show how the transformation from high-level goals to primitive commands can be performed at execution time and we propose an architecture based on reconfigurable objects that contain domain knowledge and knowledge about the sensors and actuators available. Our approach is based on two premises: 1) plan execution is an information gathering process where determining what information is relevant is a great part of the process; and 2) plan execution requires that many details are made explicit. We show how our approach is used in solving the task of moving a robot to and through an unknown, and possibly narrow, doorway; where sonic range data is used to find the doorway, walls, and obstacles. We illustrate the difficulty of such a task using data from a large number of experiments we conducted with a real mobile robot. The laboratory results illustrate how the proper application of knowledge in the integration and utilization of sensors and actuators increases the robustness of plan execution.

20.
In moving-object detection against a dynamic background, the target and the background both move independently, so extracting the foreground target must account for the background changes induced by the mobile robot's own motion. An affine transformation is widely used to estimate the background transform between images. However, when an omnidirectional vision sensor (ODVS) is used on a mobile robot, the distortion of the omnidirectional image makes the background motion inconsistent across the image, so a single affine transformation cannot describe the background motion of the whole omnidirectional image. The image is therefore divided into grid windows, an affine transformation is estimated for each window separately, and moving-object regions are obtained from the background-compensated frame difference. Finally, based on the imaging characteristics of the ODVS, the distance and bearing of moving obstacles are resolved visually. Experimental results show that the proposed method accurately detects moving obstacles over the mobile robot's full 360° surroundings and localizes them precisely, effectively improving the robot's real-time obstacle-avoidance capability.
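The window-wise background compensation followed by frame differencing can be sketched as below, simplifying each window's affine transform to a pure translation for brevity (the paper fits a full affine per window).

```python
# Sketch of window-wise background compensation: warp each window of the
# previous frame by that window's own background motion (simplified here
# to a translation), then frame-difference to expose moving pixels.
def compensated_difference(prev, cur, shifts, win=2, thresh=10):
    """shifts maps a window index (r//win, c//win) to its (dr, dc) motion."""
    rows, cols = len(cur), len(cur[0])
    moving = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            dr, dc = shifts[(r // win, c // win)]   # this window's background motion
            pr, pc = r - dr, c - dc                 # where this pixel came from
            if 0 <= pr < rows and 0 <= pc < cols:
                if abs(cur[r][c] - prev[pr][pc]) > thresh:
                    moving[r][c] = 1                # residual after compensation
    return moving
```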

