1.
A vision-based navigation system is presented for determining a mobile robot's position and orientation using panoramic imagery. Omni-directional sensors are useful in obtaining a 360° field of view, permitting various objects in the vicinity of a robot to be imaged simultaneously. Recognizing landmarks in a panoramic image from an a priori model of distinct features in an environment allows a robot's location information to be updated. A system is shown for tracking vertex and line features for omni-directional cameras constructed with catadioptric (containing both mirrors and lenses) optics. With the aid of the panoramic Hough transform, line features can be tracked without restricting the mirror geometry to satisfy the single-viewpoint criterion. This allows rectangular scene features to be used as landmarks. Two paradigms for localization are explored, with experiments conducted on synthetic and real images. A working implementation on a mobile robot is also shown.
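The paper's panoramic Hough transform is specific to catadioptric optics, but the core idea of extracting bearing-only line landmarks can be illustrated on an unwrapped cylindrical panorama, where a vertical scene edge occupies a single image column. A minimal sketch (the column-to-bearing mapping and the peak-picking window are assumptions, not the authors' method):

```python
import numpy as np

def vertical_line_bearings(panorama, num_peaks=8):
    """Detect bearings of strong vertical edges in an unwrapped
    cylindrical panorama (rows x cols, grayscale). Each image
    column maps to one azimuth angle."""
    # Horizontal intensity gradient responds to vertical scene edges.
    grad = np.abs(np.diff(panorama.astype(float), axis=1))
    column_energy = grad.sum(axis=0)        # one score per bearing
    # Simple non-maximum suppression over a small angular window.
    peaks, w = [], 5
    for c in range(len(column_energy)):
        window = column_energy[max(0, c - w):c + w + 1]
        if column_energy[c] == window.max() and column_energy[c] > 0:
            peaks.append(c)
    peaks.sort(key=lambda c: -column_energy[c])
    cols = np.array(peaks[:num_peaks])
    return 2.0 * np.pi * cols / panorama.shape[1]  # bearings in radians
```

Matching these bearings against an a priori landmark model would then provide the localization update the abstract describes.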
2.
When navigating in an unknown environment for the first time, a natural behavior consists of memorizing some key views along the performed path, in order to use these references as checkpoints for a future navigation mission. The navigation framework for wheeled mobile robots presented in this paper is based on this assumption. During a human-guided learning step, the robot performs paths which are sampled and stored as a set of ordered key images, acquired by an embedded camera. The set of these obtained visual paths is topologically organized and provides a visual memory of the environment. Given an image of one of the visual paths as a target, the robot navigation mission is defined as a concatenation of visual path subsets, called a visual route. When running autonomously, the robot is controlled by a visual servoing law adapted to its nonholonomic constraint. Based on the regulation of successive homographies, this control guides the robot along the reference visual route without explicitly planning any trajectory. The proposed framework has been designed for the entire class of central catadioptric cameras (including conventional cameras). It has been validated on two architectures. In the first, the algorithms were implemented on dedicated hardware and the robot was equipped with a standard perspective camera. In the second, they were implemented on a standard PC and an omnidirectional camera was used.
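The homography-regulation step can be sketched for the conventional-camera case with OpenCV's robust homography estimator (the matched-keypoint inputs and the scalar steering law below are illustrative assumptions; the paper's actual control law handles the full central catadioptric model and the nonholonomic constraint):

```python
import cv2
import numpy as np

def homography_to_key_image(kp_cur, kp_key, matches, K):
    """Estimate the homography mapping the current view onto the next
    key image of the visual route (planar-scene assumption)."""
    src = np.float32([kp_cur[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_key[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    # Euclidean homography in normalized camera coordinates.
    return np.linalg.inv(K) @ H @ K

def steering_from_homography(He, gain=0.5):
    """Toy steering signal: drive the lateral component of the
    homography to zero (a hypothetical simplification of the paper's
    visual-servoing law)."""
    return -gain * He[0, 2]
```

Successive key images along the visual route are fed to this regulation loop, so no explicit trajectory is ever planned.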
3.
In landmark-based navigation systems for mobile robots, sensory perceptions (e.g., laser or sonar scans) are used to identify the robot's current location or to construct internal representations, maps, of the robot's environment. Being based on an external frame of reference (which is not subject to incorrigible drift errors such as those occurring in odometry-based systems), landmark-based robot navigation systems are now widely used in mobile robot applications. The problem that has attracted most attention to date in landmark-based navigation research is the question of how to deal with perceptual aliasing, i.e., perceptual ambiguities. In contrast, what constitutes a good landmark, or how to select landmarks for mapping, is still an open research topic. The usual method of landmark selection is to map perceptions at regular intervals, which has the drawback of being inefficient and possibly missing 'good' landmarks that lie between sampling points. In this paper, we present an automatic landmark selection algorithm that allows a mobile robot to select conspicuous landmarks from a continuous stream of sensory perceptions, without any pre-installed knowledge or human intervention during the selection process. This algorithm can be used to make mapping mechanisms more efficient and reliable. Experimental results obtained with two different mobile robots in a range of environments are presented and analysed.
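The paper's conspicuousness measure is not reproduced here, but the selection loop it plugs into can be sketched: store a perception as a landmark only if it differs enough from everything already stored (the normalized-distance novelty test below is a placeholder assumption, not the authors' criterion):

```python
import numpy as np

def select_landmarks(scans, novelty_threshold=0.3):
    """Keep a perception as a landmark only if it is sufficiently
    different from every landmark already stored. `scans` is an
    iterable of fixed-length range scans (e.g. laser readings)."""
    landmarks = []
    for scan in scans:
        scan = np.asarray(scan, dtype=float)
        norm = np.linalg.norm(scan) + 1e-9
        if all(np.linalg.norm(scan - lm) / norm > novelty_threshold
               for lm in landmarks):
            landmarks.append(scan)
    return landmarks
```

Unlike fixed-interval sampling, this keeps the landmark set small in uniform corridors while still capturing distinctive places between sampling points.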
4.
In this paper, an experimental study of a navigation system that allows a mobile robot to travel in an environment about which it has no prior knowledge is described. Data from multiple ultrasonic range sensors are fused into a representation called Heuristic Asymmetric Mapping to deal with the problem of uncertainties in the raw sensory data caused mainly by the transducer's beam-opening angle and specular reflections. It features a fast data-refresh rate to handle a dynamic environment. A potential-field method is used for on-line path planning based on the constructed grid-type sonar map. The mobile robot can therefore learn to find a safe path according to its self-built sonar map. To solve the problem of local minima in the conventional potential-field method, a new type of potential function is formulated. This new method is simple and fast in execution, using concepts from distance-transform path-finding algorithms. The developed navigation system has been tested on our experimental mobile robot to demonstrate its possible application in practical situations. Several interesting simulation and experimental results are presented. This work was supported partly by the National Science Council of Taiwan, ROC under grant NSC-82-0422-E-009-321.
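The new potential function itself is not given in the abstract; the sketch below shows the distance-transform idea it builds on: a BFS wavefront from the goal through free grid cells yields a potential with no spurious local minima in connected free space, which greedy descent can then follow (the grid resolution and repulsion-free formulation are assumptions):

```python
import numpy as np
from collections import deque

def wavefront_potential(occupancy, goal):
    """Distance-to-goal over a grid map, computed as a BFS wavefront
    expanding only through free cells. occupancy: 2-D bool array
    (True = occupied); goal: (row, col)."""
    dist = np.full(occupancy.shape, np.inf)
    dist[goal] = 0.0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < occupancy.shape[0] and 0 <= nc < occupancy.shape[1]
                    and not occupancy[nr, nc] and np.isinf(dist[nr, nc])):
                dist[nr, nc] = dist[r, c] + 1.0
                queue.append((nr, nc))
    return dist

def descend(dist, start):
    """Follow the wavefront potential downhill from start to goal."""
    if np.isinf(dist[start]):
        return [start]                      # goal unreachable from here
    path, cur = [start], start
    while dist[cur] > 0.0:
        r, c = cur
        nbrs = [(r + dr, c + dc)
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < dist.shape[0] and 0 <= c + dc < dist.shape[1]]
        cur = min(nbrs, key=lambda p: dist[p])  # strictly decreases by 1
        path.append(cur)
    return path
```

Because every free cell's value is its true grid distance to the goal, greedy descent cannot get trapped the way a sum of quadratic attractive/repulsive potentials can.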
5.
Owing to upcoming applications in the field of service robotics, mobile robots are currently receiving increasing attention in industry and the scientific community. Applications in the area of service robotics demand a high degree of system autonomy, which robots without learning capabilities will not be able to meet. Learning is required in the context of action models and appropriate perception procedures. In both areas, flexible adaptivity is difficult to achieve, especially when the high-bandwidth sensors (e.g. video cameras) needed in the envisioned unstructured worlds are used. This paper proposes a new methodology for image-based navigation using a self-organized visual representation of the environment. Self-organization leads to internal representations which can be used by the robot but are not transparent to the user. It is shown how this conceptual gap can be bridged.
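The abstract does not name a specific self-organization scheme; a Kohonen self-organizing map is one classical way to build such a representation, sketched below on generic image feature vectors (the grid size, learning schedules, and feature input are all assumptions):

```python
import numpy as np

def train_som(features, grid=(6, 6), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Kohonen self-organizing map: clusters image feature
    vectors onto a 2-D grid whose neighboring nodes come to respond
    to similar views, one way to self-organize a visual map."""
    rng = np.random.default_rng(seed)
    features = np.asarray(features, dtype=float)
    n_nodes = grid[0] * grid[1]
    w = rng.normal(size=(n_nodes, features.shape[1]))
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)            # decaying learning rate
        sigma = sigma0 * (1.0 - t / epochs) + 0.5  # shrinking neighborhood
        for x in rng.permutation(features):
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))  # best matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2.0 * sigma ** 2))         # neighborhood kernel
            w += lr * h[:, None] * (x - w)
    return w, coords
```

The grid coordinates of the best-matching unit for a live camera frame then serve as the robot-usable (if not human-readable) place index, which is the "conceptual gap" the paper sets out to bridge.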
6.
We present a method for tracking and human-following that fuses distributed multiple vision systems in an intelligent space, with applications to pedestrian tracking in a crowd. In this context, particle filters provide a robust tracking framework under ambiguous conditions. The particle filter technique is used in this work, but in order to reduce its computational complexity and increase its robustness, we propose to track the moving objects by generating hypotheses not in the image plane but on a top-view reconstruction of the scene. Comparative results on real video sequences show the advantage of our method for multi-object tracking. Simulations are carried out to evaluate the performance of the proposed method. The method is also applied to the intelligent environment, and its performance is verified by experiments.
This work was presented in part at the 10th International Symposium on Artificial Life and Robotics, Oita, Japan, February 4–6, 2005.
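A minimal sketch of one predict/update/resample cycle with ground-plane hypotheses, as the abstract describes (the random-walk motion model, Gaussian likelihood, and resampling threshold are illustrative assumptions):

```python
import numpy as np

def particle_filter_step(particles, weights, measurement, rng,
                         motion_std=0.05, meas_std=0.2):
    """One cycle for one pedestrian. particles: (n, 2) array of (x, y)
    hypotheses on the reconstructed top view; measurement: (x, y)
    detection fused from the distributed cameras."""
    n = len(particles)
    # Predict: random-walk motion model on the ground plane.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: Gaussian likelihood of the fused top-view detection.
    d2 = ((particles - measurement) ** 2).sum(axis=1)
    weights = weights * np.exp(-d2 / (2.0 * meas_std ** 2)) + 1e-12
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / (weights ** 2).sum() < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```

Hypothesizing in 2-D ground coordinates rather than per-camera image coordinates is what keeps the particle count, and hence the computation, low as cameras are added.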
7.
This paper addresses the sonar-based navigation of mobile robots. The extended Kalman filtering (EKF) technique is considered, but from a deterministic, nonstochastic point of view. For this problem, new results are presented on the robustness of the nonlinear observation scheme. The original feature is that the region-of-convergence question is posed in its complete nonlinear framework, that is, considering the dynamics not only of the estimation error ζ(t) but also of the covariance matrix P(t). In this way, the approach makes the treatment less conservative and improves the convergence analysis. The proposed ideas were tested successfully in simulation experiments with a mobile platform.
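For reference, the standard EKF predict/update cycle whose convergence the paper analyzes, sketched for a unicycle robot with a single sonar range measurement to a known reflector (the motion model and measurement geometry are generic textbook assumptions, not the paper's specific setup):

```python
import numpy as np

def ekf_step(x, P, u, z, beacon, dt, Q, R):
    """One EKF cycle. State x = (px, py, heading), control u = (v, w),
    z: sonar range to a reflector at known position `beacon`.
    Q: 3x3 process noise, R: 1x1 measurement noise."""
    px, py, th = x
    v, w = u
    # Predict through the nonlinear motion model.
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       th + w * dt])
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    P_pred = F @ P @ F.T + Q
    # Update with the range measurement (linearized about x_pred).
    dx, dy = x_pred[0] - beacon[0], x_pred[1] - beacon[1]
    r = np.hypot(dx, dy)
    H = np.array([[dx / r, dy / r, 0.0]])
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + (K @ np.array([z - r])).ravel()
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new
```

The paper's contribution concerns when this coupled (ζ(t), P(t)) iteration converges; the code only fixes the object of that analysis.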
8.
This paper addresses autonomous intelligent navigation of mobile robotic platforms based on the recently reported algorithms of language-measure-theoretic optimal control. Real-time sensor data and model-based information on the robot's motion dynamics are fused to construct a probabilistic finite state automaton model that dynamically computes a time-dependent discrete-event supervisory control policy. The paper also addresses detection and avoidance of livelocks that might occur during execution of the robot navigation algorithm. Performance and robustness of autonomous intelligent navigation under the proposed algorithm have been experimentally validated on Segway RMP robotic platforms in a laboratory environment.
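The language-measure-theoretic control synthesis is beyond a short sketch, but the model-building step the abstract mentions, estimating a probabilistic finite state automaton from logged sensing/motion events, can be illustrated (the count-and-normalize estimator is a common generic choice, assumed here):

```python
import numpy as np

def estimate_pfsa(state_seq, event_seq, n_states, n_events):
    """Estimate PFSA event-generation probabilities from logged
    (state, event) pairs: normalized event counts per state. This is
    only the model-construction step; the paper's language-measure
    control synthesis is not reproduced here."""
    counts = np.zeros((n_states, n_events))
    for s, e in zip(state_seq, event_seq):
        counts[s, e] += 1.0
    totals = counts.sum(axis=1, keepdims=True)
    # States never visited get all-zero rows rather than NaNs.
    return np.divide(counts, totals, out=np.zeros_like(counts),
                     where=totals > 0)
```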
9.
10.
Emanuele Menegatti, Takeshi Maeda, Hiroshi Ishiguro 《Robotics and Autonomous Systems》2004,47(4):251-267
This paper proposes a new technique for vision-based robot navigation. The basic framework is to localise the robot by comparing images taken at its current location with reference images stored in its memory. In this work, the only sensor mounted on the robot is an omnidirectional camera. The Fourier components of the omnidirectional image provide a signature for the views acquired by the robot and can be used to simplify the solution to the robot navigation problem. The proposed system can calculate the robot position with variable accuracy (‘hierarchical localisation’) saving computational time when the robot does not need a precise localisation (e.g. when it is travelling through a clear space). In addition, the system is able to self-organise its visual memory of the environment. The self-organisation of visual memory is essential to realise a fully autonomous robot that is able to navigate in an unexplored environment. Experimental evidence of the robustness of this system is given in unmodified office environments.
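The Fourier-signature idea can be sketched directly: in an unwrapped omnidirectional image, a robot rotation is a cyclic shift of each row, so the FFT magnitudes of the rows form a rotation-invariant signature, and comparing a few low-frequency components first gives the coarse stage of hierarchical localisation (the component counts and distance metric below are assumptions):

```python
import numpy as np

def fourier_signature(panorama, k=15):
    """Rotation-invariant signature of an unwrapped omnidirectional
    image: magnitudes of the first k Fourier components of each row.
    Rotation shifts rows cyclically, changing only FFT phases, so the
    magnitudes kept here are unaffected by robot heading."""
    rows = np.fft.rfft(panorama.astype(float), axis=1)
    return np.abs(rows[:, :k])

def hierarchical_match(query, memory, coarse_k=3, shortlist=5):
    """Two-stage lookup: rank stored signatures using only a few
    low-frequency components, then refine the best candidates with
    the full signature (coarse-to-fine localisation)."""
    coarse = [np.abs(query[:, :coarse_k] - m[:, :coarse_k]).sum()
              for m in memory]
    cand = np.argsort(coarse)[:shortlist]
    fine = [(i, np.abs(query - memory[i]).sum()) for i in cand]
    return min(fine, key=lambda t: t[1])[0]   # index of best reference view
```

In open space the coarse stage alone suffices, which is where the computational saving the abstract mentions comes from.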
11.
Li Maohai, Wang Han, Sun Lining, Cai Zesu 《Engineering Applications of Artificial Intelligence》2013,26(8):1942-1952
A robust topological navigation strategy for an omnidirectional mobile robot using an omnidirectional camera is described. The navigation system is composed of on-line and off-line stages. During the off-line learning stage, the robot performs paths based on a motion model of its omnidirectional motion structure and records a set of ordered key images from the omnidirectional camera. From this sequence a topological map is built using a probabilistic technique and a loop-closure detection algorithm, which can deal with the perceptual aliasing problem in the mapping process. Each topological node provides a set of omnidirectional images characterized by geometric affine- and scale-invariant keypoints computed with a GPU implementation. Given a topological node as a target, the robot navigation mission is a concatenation of topological node subsets. In the on-line navigation stage, the robot hierarchically localizes itself to the most likely node through a robust probability-distribution global localization algorithm, and estimates its relative pose within the topological node using an effective solution to the classical five-point relative pose estimation problem. The robot is then controlled by a vision-based control law adapted to omnidirectional cameras to follow the visual path. Experimental results carried out with a real robot in an indoor environment show the performance of the proposed method.
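The five-point relative pose step can be sketched with OpenCV's essential-matrix solver, shown here for a calibrated perspective model (the paper works with omnidirectional cameras, which would first require lifting keypoints to the unit sphere; that step is omitted, and the RANSAC parameters are assumptions):

```python
import cv2
import numpy as np

def relative_pose(pts_cur, pts_node, K):
    """Relative camera pose between the current view and a stored
    topological-node view from matched keypoints (Nx2 float arrays),
    via the five-point essential-matrix solver plus cheirality check."""
    E, inliers = cv2.findEssentialMat(pts_cur, pts_node, K,
                                      method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_cur, pts_node, K, mask=inliers)
    return R, t   # translation is recovered only up to scale
```

The up-to-scale translation direction is sufficient here, since the control law only needs to steer the robot along the visual path toward the node, not to a metric pose.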
12.
We examined human navigational principles for intercepting a projected object and tested their application in the design of navigational algorithms for mobile robots. These perceptual principles utilize a viewer-based geometry that allows the robot to approach the target without the need for time-consuming calculations to determine the world coordinates of either itself or the target. Human research supports the use of an Optical Acceleration Cancellation (OAC) strategy to achieve interception. Here, the fielder selects a running path that nulls out the acceleration of the retinal image of an approaching ball, and maintains an image that rises at a constant rate throughout the task. We compare two robotic control algorithms for implementing the OAC strategy in cases in which the target remains in the sagittal plane, headed directly toward the robot (which only moves forward or backward). In the “passive” algorithm, the robot keeps the orientation of the camera constant, and the image of the ball rises at a constant rate. In the “active” algorithm, the robot maintains a camera fixation that is centered on the image of the ball and keeps the tangent of the camera angle rising at a constant rate. Performance was superior with the active algorithm in both computer simulations and trials with actual mobile robots. The performance advantage is principally due to the higher gain and effectively wider viewing angle when the camera remains centered on the ball image. The findings confirm the viability and robustness of human perceptual principles in the design of mobile robot algorithms for tasks like interception.
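A toy simulation of the OAC geometry for the sagittal-plane case: the robot accelerates forward or backward to cancel any acceleration in tan(elevation) of the approaching ball, which brings it to the landing point without computing world coordinates (the discrete differentiation and gain are illustrative assumptions, not the paper's controller):

```python
import numpy as np

def simulate_oac(ball_xz, ball_vxz, robot_x, dt=0.01, g=9.81, gain=6.0):
    """Ball flies toward the robot in the sagittal plane; the robot
    moves along the ground line, nulling the optical acceleration of
    tan(elevation angle). Returns (robot stop position, ball landing
    x), which should nearly coincide on success."""
    (bx, bz), (vx, vz) = ball_xz, ball_vxz
    rv = 0.0
    prev_tan = prev_rate = None
    while bz > 0.0:
        bx += vx * dt                       # ballistic flight
        vz -= g * dt
        bz += vz * dt
        tan_e = bz / max(robot_x - bx, 1e-3)   # image elevation proxy
        if prev_tan is not None:
            rate = (tan_e - prev_tan) / dt
            if prev_rate is not None:
                optical_accel = (rate - prev_rate) / dt
                # Ball overshooting -> tan accelerates -> back up;
                # ball falling short -> tan decelerates -> move in.
                rv += gain * optical_accel * dt
            prev_rate = rate
        prev_tan = tan_e
        robot_x += rv * dt
    return robot_x, bx

# Example: ball launched from the origin toward a robot 40 m away.
print(simulate_oac((0.0, 1.0), (15.0, 12.0), 40.0))
```

The “active” variant in the paper tracks the ball with the camera and regulates the tangent of the camera angle itself; the quantity being controlled is the same as in this sketch.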
Thomas Sugar works in the areas of mobile robot navigation and wearable robotics for assisting the gait of stroke survivors. In mobile robot navigation, he is interested in combining human perceptual principles with mobile robotics. He majored in business and mechanical engineering for his Bachelor's degrees and in mechanical engineering for his Doctoral degree, all from the University of Pennsylvania. In industry, he worked as a project engineer for W. L. Gore and Associates. He has been a faculty member in the Department of Mechanical and Aerospace Engineering and the Department of Engineering at Arizona State University. His research is currently funded by three grants from the National Science Foundation and the National Institutes of Health, and focuses on perception and action, and on wearable robots using tunable springs.
Michael McBeath works in an area combining Psychology and Engineering. He majored in both fields for his Bachelor's degree from Brown University and again for his Doctoral degree from Stanford University. Parallel to his academic career, he worked as a research scientist at NASA Ames Research Center and at the Interval Corporation, a technology think tank funded by Microsoft co-founder Paul Allen. He has been a faculty member in the Department of Psychology at Kent State University and at Arizona State University, where he is Program Director for the Cognition and Behavior area and is on the Executive Committee for the interdisciplinary Arts, Media, and Engineering program. His research is currently funded by three grants from the National Science Foundation, and focuses on perception and action, particularly in sports. He is best known for his research on navigational strategies used by baseball players, animals, and robots.
13.
14.
This paper describes a method of robustly modeling road boundaries on-line for autonomous navigation. Since sensory evidence for road boundaries might change from place to place, we cannot depend on a single cue but have to use multiple sensory features. It is also necessary to cope with various road shapes and road type changes. These requirements are naturally met in the proposed particle filter-based method, which makes use of multiple features with the corresponding likelihood functions and keeps multiple road hypotheses as particles. The proposed method has been successfully applied to various road scenes with cameras and a laser range finder. To show that the proposed method is applicable to other sensors, preliminary results of using stereo instead of the laser range finder are also described.
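The key mechanism, scoring road-shape hypotheses against several sensory cues at once, can be sketched as the weighting step of such a particle filter (the hypothesis parameterization and the product-of-likelihoods fusion are assumptions; the paper defines its own per-feature likelihood functions):

```python
import numpy as np

def weight_road_hypotheses(particles, cue_likelihoods, rng):
    """Score road-shape hypotheses with multiple cues and resample.
    particles: (n, d) array where each row parameterizes a road
    boundary (e.g. lateral offset, width, curvature);
    cue_likelihoods: list of functions, each mapping one particle to
    its likelihood under one cue (color edge, intensity edge,
    range discontinuity, ...)."""
    w = np.ones(len(particles))
    for like in cue_likelihoods:
        w *= np.array([like(p) for p in particles])  # cue fusion by product
    w = w + 1e-12
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]
```

Because each cue contributes only a likelihood factor, cues that are uninformative in a given place simply flatten out instead of breaking the tracker, which is what makes the multi-cue formulation robust to changing road types.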
15.
Experimental research in biology has uncovered a number of different ways in which flying insects use cues derived from optical flow for navigational purposes, such as safe landing, obstacle avoidance, and dead reckoning. In this study, we use a synthetic methodology to gain additional insights into the navigation behavior of bees. Specifically, we focus on the mechanisms of course stabilization behavior and a visually mediated odometer, using a biological model of motion detection for the purpose of long-range goal-directed navigation in a 3D environment. Performance tests of the proposed navigation method are conducted using a blimp-type flying robot platform in uncontrolled indoor environments. The results show that the proposed mechanism can be used for goal-directed navigation. Further analysis is also conducted in order to enhance the navigation performance of autonomous aerial vehicles.
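The paper uses a biological (elementary motion detector) model; the sketch below substitutes dense Farnebäck optical flow to show the same two behaviors, balancing left/right image motion for course stabilization and accumulating flow magnitude as an odometer tick (the substitution and the sign conventions are assumptions):

```python
import cv2
import numpy as np

def flow_commands(prev_gray, cur_gray):
    """Bee-inspired visual control from two consecutive grayscale
    frames: steer away from the side with larger image motion (the
    centering response) and accumulate mean flow magnitude as a
    visual odometer increment."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    half = mag.shape[1] // 2
    left, right = mag[:, :half].mean(), mag[:, half:].mean()
    steer = (right - left) / (left + right + 1e-9)  # > 0: turn left
    odometer_tick = mag.mean()   # grows with distance traveled
    return steer, odometer_tick
```

Summing the odometer ticks gives a distance estimate in "flow units" rather than meters, which matches how the bee odometer is understood to work.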
16.
Visual navigation is one of the key technologies for enabling autonomous robot mobility. To capture the latest international research developments in visual navigation as a whole, this paper comprehensively reviews progress in visual navigation techniques for biomimetic robots, focusing on the state of the art and the open problems in three key areas: visual SLAM (Simultaneous Localization and Mapping), loop-closure detection, and visual homing. A new algorithmic framework for visual SLAM is proposed, the key theoretical problems still to be solved are identified, and the difficulties and future trends of visual navigation technology are summarized.
17.
For robots operating in a coal-water slurry environment, a navigation and positioning method based on ultrasound is proposed. The hardware layout of the positioning system and its mathematical localization model are described in detail, and ultrasonic propagation experiments were carried out in a coal-water slurry medium.
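A sketch of the kind of localization model such a system needs: a least-squares position fix from time-of-flight ranges to fixed ultrasonic transmitters (the beacon layout is hypothetical, and the ranges are assumed already converted using the speed of sound in the slurry, which differs from that in air):

```python
import numpy as np

def trilaterate(beacons, ranges):
    """Least-squares 2-D position from ranges to n >= 3 fixed
    beacons. Linearizes by subtracting the first beacon's range
    equation |x - p_i|^2 = r_i^2 from the others."""
    beacons = np.asarray(beacons, dtype=float)   # shape (n, 2)
    ranges = np.asarray(ranges, dtype=float)     # shape (n,)
    p0, r0 = beacons[0], ranges[0]
    A = 2.0 * (beacons[1:] - p0)
    b = (r0 ** 2 - ranges[1:] ** 2
         + (beacons[1:] ** 2).sum(axis=1) - (p0 ** 2).sum())
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example with a hypothetical 3-beacon layout (meters).
print(trilaterate([(0, 0), (10, 0), (0, 10)], [7.07, 7.07, 7.07]))
```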
18.
An obstacle-avoidance algorithm is presented for autonomous mobile robots equipped with a CCD camera and ultrasonic sensors. This approach uses segmentation techniques to segregate the floor from other fixtures, and measurement techniques to measure the distance between the mobile robot and any obstacles. It uses a simple computation for the selection of a threshold value. This approach also uses a cost function, which combines image information, distance information, and a weight factor, to find an obstacle-free path. This algorithm, which uses a CCD camera and ultrasonic sensors, can handle cases that include shadow regions and obstacles in visual navigation under various lighting conditions.
This work was presented in part at the 7th International Symposium on Artificial Life and Robotics, Oita, Japan, January 16–18, 2002.
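The cost function is only described qualitatively; one consistent reading, scoring candidate headings by a weighted mix of the vision-based floor segmentation and the ultrasonic ranges, is sketched below (the sector discretization and the exact mixing form are assumptions):

```python
import numpy as np

def best_heading(image_obstacle_score, sonar_distance, weight=0.5):
    """Pick a steering sector by minimizing a cost that mixes vision
    and ultrasonic cues. image_obstacle_score[i]: fraction of
    non-floor pixels in sector i (from floor segmentation);
    sonar_distance[i]: ultrasonic range reading in that sector (m)."""
    img = np.asarray(image_obstacle_score, dtype=float)
    dist = np.asarray(sonar_distance, dtype=float)
    cost = weight * img + (1.0 - weight) / (dist + 1e-3)
    return int(np.argmin(cost))   # index of the cheapest heading
```

Weighting the two cues lets the sonar dominate in shadowed regions where the floor segmentation is unreliable, and vice versa, which is how the combined sensing copes with varied lighting.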
19.