20 similar documents found (search time: 0 ms)
1.
Experimental research in biology has uncovered a number of different ways in which flying insects use cues derived from optical flow for navigational purposes, such as safe landing, obstacle avoidance and dead reckoning. In this study, we use a synthetic methodology to gain additional insights into the navigation behavior of bees. Specifically, we focus on the mechanisms of course stabilization and the visually mediated odometer, using a biological model of the motion detector for long-range, goal-directed navigation in a 3D environment. Performance tests of the proposed navigation method are conducted on a blimp-type flying robot platform in uncontrolled indoor environments. The results show that the proposed mechanism can be used for goal-directed navigation. Further analysis is also conducted in order to enhance the navigation performance of autonomous aerial vehicles.
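The visually mediated odometer described above integrates image motion over time to estimate distance travelled. A minimal sketch of that idea, assuming dense optical-flow fields are already available and using a hypothetical pixel-to-metre `gain` (not taken from the paper):

```python
import numpy as np

def visual_odometer(flow_frames, gain=1.0):
    """Accumulate image motion as a distance estimate.

    Honeybees are thought to gauge distance flown by integrating
    optical flow; this toy odometer does the same by summing the
    mean flow magnitude over a sequence of dense flow fields.
    flow_frames: iterable of (H, W, 2) arrays of per-pixel image
    displacements; gain is a hypothetical calibration constant.
    """
    distance = 0.0
    for flow in flow_frames:
        magnitude = np.linalg.norm(flow, axis=-1)  # per-pixel flow speed
        distance += gain * float(magnitude.mean())
    return distance

# Constant 2-pixel rightward flow over 10 frames -> 20 pixel-units
flows = [np.tile([2.0, 0.0], (4, 4, 1)) for _ in range(10)]
print(visual_odometer(flows))  # 20.0
```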
2.
Artificial navigation systems stand to benefit greatly from learning maps of visual environments, but traditional map-making techniques are inadequate in several respects. This paper describes an adaptive, view-based, relational map-making system for navigating within a 3D environment defined by a spatially distributed set of visual landmarks. Inspired by an analogy to learning aspect graphs of 3D objects, the system comprises two neurocomputational architectures that emulate cognitive mapping in the rat hippocampus. The first architecture performs unsupervised place learning by combining the “What” with the “Where”, namely through conjunctions of landmark identity, pose, and egocentric gaze direction within a local, restricted sensory view of the environment. The second associatively learns action consequences by incorporating the “When”, namely through conjunctions of learned places and coarsely coded robot motions. Together, these networks form a map reminiscent of a partially observable Markov decision process, and consequently provide an ideal neural substrate for prediction, environment recognition, route planning, and exploration. Preliminary results from real-time implementations on a mobile robot called MAVIN (the Mobile Adaptive VIsual Navigator) demonstrate the potential for these capabilities.
3.
Visual Navigation for Mobile Robots: A Survey (cited 4 times: 0 self-citations, 4 by others)
Francisco Bonin-Font, Alberto Ortiz, Gabriel Oliver 《Journal of Intelligent and Robotic Systems》2008,53(3):263-296
Mobile robot vision-based navigation has been the source of countless research contributions, from the domains of both vision and control. Vision is becoming more and more common in applications such as localization, automatic map construction, autonomous navigation, path following, inspection, monitoring or risky situation detection. This survey presents those pieces of work, from the nineties until nowadays, which represent the broad progress in visual navigation techniques for land, aerial and autonomous underwater vehicles. The paper deals with two major approaches: map-based navigation and mapless navigation. Map-based navigation has in turn been subdivided into metric map-based navigation and topological map-based navigation. Our outline of mapless navigation includes reactive techniques based on qualitative characteristics extraction, appearance-based localization, optical flow, feature tracking, plane ground detection/tracking, etc. The recent concept of visual sonar has also been reviewed.
This work is partially supported by DPI 2005-09001-C03-02 and FEDER funding.
4.
This paper presents a visual homing algorithm for autonomous robots inspired by the behaviour of bees and other social insects. The homing method presented is based on an affine motion model whose parameters are estimated by a best matching criterion. No attempts are made to recognize the objects or to extract 3D models from the scene. Improvements in the algorithm and in the use of colour information are introduced in order to enhance the efficiency of the navigation vector estimate. Tests and results are presented.
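The affine motion model at the heart of this homing method can be illustrated with a least-squares fit. This sketch assumes point correspondences are already available and solves for the six affine parameters directly (the paper estimates them by a best-matching criterion over the images instead):

```python
import numpy as np

def fit_affine_motion(src, dst):
    """Least-squares fit of a 2-D affine model: dst ~= A @ src + t.

    src, dst: (N, 2) arrays of matched image points, N >= 3.
    Returns (A, t) with A a 2x2 matrix and t a translation vector.
    """
    n = src.shape[0]
    # Design matrix for the parameter vector [a11, a12, a21, a22, tx, ty].
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src      # rows for the x' equations
    M[0::2, 4] = 1.0
    M[1::2, 2:4] = src      # rows for the y' equations
    M[1::2, 5] = 1.0
    b = dst.reshape(-1)
    p, *_ = np.linalg.lstsq(M, b, rcond=None)
    A = np.array([[p[0], p[1]], [p[2], p[3]]])
    t = np.array([p[4], p[5]])
    return A, t

# Recover a known stretch + translation from four correspondences.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
A_true = np.array([[1.1, 0.0], [0.0, 0.9]])
dst = src @ A_true.T + np.array([2.0, -1.0])
A, t = fit_affine_motion(src, dst)
```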
5.
When navigating in an unknown environment for the first time, a natural behavior consists of memorizing some key views along the performed path, in order to use these references as checkpoints for a future navigation mission. The navigation framework for wheeled mobile robots presented in this paper is based on this assumption. During a human-guided learning step, the robot performs paths which are sampled and stored as a set of ordered key images, acquired by an embedded camera. The set of these visual paths is topologically organized and provides a visual memory of the environment. Given an image of one of the visual paths as a target, the robot navigation mission is defined as a concatenation of visual path subsets, called a visual route. When running autonomously, the robot is controlled by a visual servoing law adapted to its nonholonomic constraint. Based on the regulation of successive homographies, this control guides the robot along the reference visual route without explicitly planning any trajectory. The proposed framework has been designed for the entire class of central catadioptric cameras (including conventional cameras). It has been validated on two architectures. In the first, the algorithms are implemented on dedicated hardware and the robot is equipped with a standard perspective camera. In the second, they run on a standard PC and an omnidirectional camera is used.
Youcef Mezouar
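The framework above regulates successive homographies between the current image and the next key image of the visual route. This sketch shows only the standard homography-estimation step for a conventional camera via the direct linear transform, not the paper's servoing law itself:

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transform (DLT) estimate of the homography H
    mapping src to dst in homogeneous coordinates (dst ~ H @ src).

    Requires at least four point correspondences in general position.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)   # null vector = stacked rows of H
    return H / H[2, 2]         # fix the projective scale

# Pure image translation by (1, 2): H should be [[1,0,1],[0,1,2],[0,0,1]].
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(x + 1, y + 2) for x, y in src]
H = homography_dlt(src, dst)
```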
6.
An obstacle-avoidance algorithm is presented for autonomous mobile robots equipped with a CCD camera and ultrasonic sensors. The approach uses segmentation techniques to segregate the floor from other fixtures, and measurement techniques to determine the distance between the mobile robot and any obstacles. It uses a simple computation to select a threshold value. The approach also uses a cost function, which combines image information, distance information and a weight factor, to find an obstacle-free path. The algorithm can handle shadow regions and obstacles during visual navigation, under various lighting conditions.
This work was presented in part at the 7th International Symposium on Artificial Life and Robotics, Oita, Japan, January 16–18, 2002
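The abstract says only that the cost function combines image information, distance information and a weight factor; the exact form is not given, so the sketch below assumes a simple convex combination with hypothetical inputs (obstacle pixel fraction from the camera, ultrasonic range in metres):

```python
def path_cost(image_term, distance, weight=0.5, d_max=3.0):
    """Hypothetical combined cost for a candidate heading.

    image_term: fraction of obstacle pixels seen in that direction.
    distance:   ultrasonic range to the nearest obstacle (m).
    A closer obstacle yields a higher distance term; `weight`
    trades the two cues off against each other.
    """
    distance_term = 1.0 - min(distance, d_max) / d_max
    return weight * image_term + (1.0 - weight) * distance_term

# Pick the cheapest of three candidate headings:
candidates = {"left": (0.10, 2.5), "ahead": (0.60, 0.8), "right": (0.05, 3.0)}
best = min(candidates, key=lambda k: path_cost(*candidates[k]))
print(best)  # right
```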
7.
In this paper, a mobile robot control law for corridor navigation and wall-following, based on sonar and odometric sensor information, is proposed. The control law allows for stable navigation while avoiding actuator saturation. The posture of the robot travelling through the corridor is estimated using odometric and sonar sensing. The control system is proved to be asymptotically stable. Obstacle-avoidance capability is added to the control system as a perturbation signal. A state-variable estimation structure that fuses the sonar and odometric information is proposed. Experimental results are presented to show the performance of the proposed control system.
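A minimal proportional wall-following law illustrates the kind of control problem the paper addresses; this is a sketch, not the paper's controller (which comes with a formal asymptotic-stability proof), though the clipped command echoes its saturation requirement:

```python
def wall_follow(d, theta, d_ref=0.5, k_d=1.5, k_theta=2.0, v=0.3, w_max=1.0):
    """Proportional wall-following for a unicycle robot (sketch).

    d      - lateral distance to the wall on the right, from sonar (m)
    theta  - heading relative to the wall (rad, 0 = parallel,
             positive = turned away from the wall)
    Returns (v, omega); omega is clipped to +/- w_max so the command
    respects actuator saturation.
    """
    omega = -k_d * (d - d_ref) - k_theta * theta
    omega = max(-w_max, min(w_max, omega))  # saturation limit
    return v, omega

v, omega = wall_follow(d=0.8, theta=0.0)   # too far from the wall...
print(omega < 0.0)                         # ...so steer toward it: True
```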
8.
In landmark-based navigation systems for mobile robots, sensory perceptions (e.g., laser or sonar scans) are used to identify the robot’s current location or to construct internal representations, maps, of the robot’s environment. Being based on an external frame of reference (which is not subject to incorrigible drift errors such as those occurring in odometry-based systems), landmark-based robot navigation systems are now widely used in mobile robot applications.
The problem that has attracted most attention to date in landmark-based navigation research is the question of how to deal with perceptual aliasing, i.e., perceptual ambiguities. In contrast, what constitutes a good landmark, or how to select landmarks for mapping, is still an open research topic. The usual method of landmark selection is to map perceptions at regular intervals, which has the drawback of being inefficient and possibly missing ‘good’ landmarks that lie between sampling points.
In this paper, we present an automatic landmark selection algorithm that allows a mobile robot to select conspicuous landmarks from a continuous stream of sensory perceptions, without any pre-installed knowledge or human intervention during the selection process. This algorithm can be used to make mapping mechanisms more efficient and reliable. Experimental results obtained with two different mobile robots in a range of environments are presented and analysed.
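The selection of conspicuous landmarks from a perception stream can be sketched with a simple novelty rule. The criterion below (keep a scan only if it differs enough from every stored landmark) is an assumption for illustration, not the published conspicuousness measure:

```python
import numpy as np

def select_landmarks(perceptions, threshold=1.0):
    """Pick 'novel' perceptions from a stream as landmarks.

    A scan is kept if its Euclidean distance to every landmark
    selected so far exceeds `threshold`; this avoids the fixed
    sampling interval criticised in the abstract.
    """
    landmarks = []
    for p in perceptions:
        if all(np.linalg.norm(p - l) > threshold for l in landmarks):
            landmarks.append(p)
    return landmarks

# Two clusters of near-identical scans yield exactly two landmarks:
stream = [np.array(v) for v in ([0.0, 0.0], [0.1, 0.0], [3.0, 0.0], [3.1, 0.1])]
print(len(select_landmarks(stream)))  # 2
```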
9.
Many researchers are studying ways to create machines that can make their own decisions and act on them. Recently, great advances have been made in intelligent mobile robot technology, advances which will provide autonomous traveling ability to autonomous systems, allowing them to surmount stairs and other obstacles. The autonomous systems are expected to gather knowledge about their environment, construct a symbolic world model of the environment, and use this model in planning and carrying out tasks set for them at a high level. An approach to automatic path planning for self-navigation problems is presented. It is structured as a knowledge-based system and is a method of planning safe paths around circular obstacles in a two-dimensional plane for autonomous systems. The expert-system path planner reduces the complexity of the problem and the computer run time, enabling the agent to respond more quickly to its environment; run time also increases very slowly with problem complexity. This is achieved by: (1) representing the environmental information by sets of facts; (2) guiding the moving object by groups of rules; and (3) deriving the result with a simple algorithm and fewer calculations. The algorithm is implemented in an expert-system environment, and some examples drawn from the system are also demonstrated.
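The geometric fact such a rule base has to encode is whether a straight move intersects a circular obstacle. The helper below is hypothetical (the paper expresses this as facts and rules in an expert-system shell, not as Python), but the computation is the standard point-to-segment distance test:

```python
import math

def segment_hits_circle(p, q, center, r):
    """True if the straight segment p->q passes through a circular
    obstacle of radius r centred at `center`."""
    px, py = p
    qx, qy = q
    cx, cy = center
    dx, dy = qx - px, qy - py
    # Project the obstacle centre onto the segment, clamped to [0, 1].
    t = ((cx - px) * dx + (cy - py) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    nx, ny = px + t * dx, py + t * dy   # nearest point on the segment
    return math.hypot(cx - nx, cy - ny) <= r

print(segment_hits_circle((0, 0), (10, 0), (5, 1), 2.0))   # True
print(segment_hits_circle((0, 0), (10, 0), (5, 5), 2.0))   # False
```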
10.
A mobile platform mounted with an omnidirectional vision sensor (ODVS) can be used to monitor large areas and detect interesting events such as independently moving persons and vehicles. To avoid false alarms due to extraneous features, the image motion induced by the moving platform should be compensated. This paper describes a formulation and application of parametric egomotion compensation for an ODVS. Omni images give a 360° view of the surroundings but undergo considerable image distortion. To account for these distortions, the parametric planar motion model is integrated with the transformations into omni image space. Prior knowledge of approximate camera calibration and camera speed is integrated with the estimation process using a Bayesian approach. Iterative, coarse-to-fine, gradient-based estimation is used to correct the motion parameters for vibrations and other inaccuracies in prior knowledge. Experiments with a camera mounted on various types of mobile platforms demonstrate successful detection of moving persons and vehicles.
Published online: 11 October 2004
11.
In this paper, we demonstrate a reliable and robust system for localization of mobile robots in indoor environments that are relatively consistent with a priori known maps. Through the use of an Extended Kalman Filter combining dead-reckoning, ultrasonic and infrared sensor data, estimation of the position and orientation of the robot is achieved. Based on a thresholding approach, unexpected obstacles can be detected and their motion predicted. Experimental results from the implementation on our mobile robot, Nomad-200, are also presented.
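The dead-reckoning half of such a filter is the standard EKF prediction step for a planar pose. This is a generic sketch of the technique, not the paper's exact model; the update step would then fuse the ultrasonic and infrared range measurements:

```python
import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """EKF prediction for a unicycle pose x = [px, py, theta].

    v, w: commanded forward and angular velocities; Q: process noise.
    Returns the propagated state and covariance.
    """
    px, py, th = x
    x_new = np.array([px + v * dt * np.cos(th),
                      py + v * dt * np.sin(th),
                      th + w * dt])
    # Jacobian of the motion model with respect to the state.
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    P_new = F @ P @ F.T + Q
    return x_new, P_new

# Drive straight for 0.5 s: the pose advances 0.5 m along x and the
# covariance grows by the process noise.
x, P = np.zeros(3), np.eye(3) * 0.01
x, P = ekf_predict(x, P, v=1.0, w=0.0, dt=0.5, Q=np.eye(3) * 1e-3)
```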
12.
Algorithms for path navigation and generation of guidance setpoints for an AGV are developed. Navigation is based on a simple flat-world model of connected nodes, using a suboptimal path solution for execution speed. The guidance algorithms use vision data from a stereo pair of linear image array cameras, which describe an obstacle's location and height where it intersects the vision system's plane of view. Building a map of the area on the AGV path allows the detection of obstacles, which may be passed subject to the aisle markings. The complete vision-guidance system can be implemented using inexpensive, commercially available 16-bit microprocessors.
13.
Mobile robot navigation in a partially structured static environment, using neural predictive control (cited 3 times: 0 self-citations, 3 by others)
This paper presents a way of implementing a model-based predictive controller (MBPC) for mobile robot navigation when unexpected static obstacles are present in the robot environment. The method uses a nonlinear model of mobile robot dynamics, and thus allows an accurate prediction of the future trajectories. An ultrasonic ranging system has been used for obstacle detection. A multilayer perceptron is used to implement the MBPC, allowing real-time implementation and also eliminating the need for high-level data sensor processing. The perceptron has been trained in a supervised manner to reproduce the MBPC behaviour. Experimental results obtained when applying the neural-network controller to a TRC Labmate mobile robot are given in the paper.
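The core idea of training a network to reproduce a predictive controller can be shown with a much smaller stand-in: a linear policy fitted by least squares to input/output pairs of a made-up expert law. The paper uses a multilayer perceptron and a real MBPC; the gains and data below are assumptions for illustration only:

```python
import numpy as np

# Supervised imitation of a controller: sample states, record the
# expert's commands u = -K x, then fit a policy to the pairs.
rng = np.random.default_rng(0)
K = np.array([1.5, 0.8])                 # hypothetical expert gains
X = rng.normal(size=(200, 2))            # sampled robot states
U = X @ (-K)                             # expert (MBPC-like) commands

# Least-squares fit recovers the expert policy exactly on clean data.
K_hat, *_ = np.linalg.lstsq(X, U, rcond=None)
print(np.allclose(K_hat, -K))  # True
```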
14.
《Advanced Robotics》2013,27(7-8):791-816
This paper presents a new obstacle recognition method for mobile robots based on analyzing optical flow information acquired from dynamic images. First, the optical flow field is detected in image sequences from a camera on a moving observer, and moving object candidates are extracted using a normalized square residual error [focus of expansion (FOE) residual error] value calculated in the process of estimating the FOE. Next, the optical flow directions and intensity values are stored for the pixels in each candidate region to calculate the distribution width values around the principal axes of inertia and the direction of the principal axes. Finally, each candidate is classified into an object category expected to appear in the scene by comparing the proportion and direction values with standard data ranges for the objects, determined by preliminary experiments. Experimental results of car/bicycle/pedestrian recognition in real outdoor scenes have shown the effectiveness of the proposed method.
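The FOE estimation underlying the residual test can be sketched as a least-squares problem: for purely translational camera motion, each flow vector points away from the FOE, so the cross product of (point - FOE) with the flow vanishes. Residuals of this fit are what the method normalises to flag independently moving objects:

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares focus-of-expansion estimate.

    For each point p with flow v, (p - foe) x v = 0 gives one linear
    equation v_y * f_x - v_x * f_y = p_x * v_y - p_y * v_x in the
    FOE coordinates (f_x, f_y). points, flows: (N, 2) arrays.
    """
    A = np.column_stack([flows[:, 1], -flows[:, 0]])
    b = points[:, 0] * flows[:, 1] - points[:, 1] * flows[:, 0]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# Synthetic expansion field radiating from (3, 2):
pts = np.array([[0.0, 0.0], [6.0, 0.0], [0.0, 5.0], [6.0, 5.0]])
flw = pts - np.array([3.0, 2.0])
foe = estimate_foe(pts, flw)
print(foe)  # approximately (3, 2)
```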
15.
《Robotics and Autonomous Systems》2014,62(8):1153-1174
In this paper, we address the problem of robot navigation in environments with deformable objects. The aim is to include the costs of object deformations when planning the robot’s motions and trade them off against the travel costs. We present our recently developed robotic system that is able to acquire deformation models of real objects. The robot determines the elasticity parameters by physical interaction with the object and by establishing a relation between the applied forces and the resulting surface deformations. The learned deformation models can then be used to perform physically realistic finite element simulations. This allows the planner to evaluate robot trajectories and to predict the costs of object deformations. Since finite element simulations are time-consuming, we furthermore present an approach to approximate object-specific deformation cost functions by means of Gaussian process regression. We present two real-world applications of our motion planner for a wheeled robot and a manipulation robot. As we demonstrate in real-world experiments, our system is able to estimate appropriate deformation parameters of real objects that can be used to predict future deformations. We show that our deformation cost approximation improves the efficiency of the planner by several orders of magnitude.
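Gaussian process regression as used here replaces an expensive simulation with a cheap interpolant. A minimal sketch with an RBF kernel follows; the kernel choice, hyperparameters and training data are assumptions, not values from the paper:

```python
import numpy as np

def gp_predict(X_train, y_train, X_test, length=1.0, noise=1e-6):
    """Gaussian-process regression mean prediction with an RBF kernel.

    Fits noisy function samples (X_train, y_train) and returns the
    posterior mean at X_test. All inputs are (N, D) arrays.
    """
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)

    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf(X_test, X_train)
    return K_s @ np.linalg.solve(K, y_train)

# Made-up deformation-cost samples versus penetration depth:
X = np.array([[0.0], [0.5], [1.0], [1.5]])
y = np.array([0.0, 0.2, 0.9, 2.1])
pred = gp_predict(X, y, np.array([[0.75]]))  # cheap surrogate query
```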
16.
Simulation Analysis of Embedded Visual Monitoring of Abnormal Vehicle Movement (cited 1 time: 0 self-citations, 1 by others)
This paper studies the problem of accurately monitoring abnormal vehicle movement visually. Abnormal movement refers to behavior that differs from normal driving; because phases of vehicle motion resemble one another, recognizing abnormal movement is subject to considerable interference, and traditional recognition methods misjudge under disturbances such as vehicle shaking or slight movement. To solve this problem, a visual monitoring method is proposed that combines embedded technology with an optical-flow motion-recovery algorithm. An embedded monitoring pipeline is designed: optical-flow recovery is applied to the vehicle surveillance images captured by the sensors, countering the interference caused by shaking or slight movement and providing a basis for visual monitoring of abnormal vehicle movement. Background separation is then performed on the optical-flow recovery results to obtain the monitoring result. Experimental results show that embedded visual monitoring with the improved algorithm eliminates the interference, greatly improves monitoring accuracy, and achieves accurate recognition of abnormal vehicle movement.
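The background-separation step can be illustrated with a generic median-background stand-in (the paper's exact separation algorithm is not given in the abstract): estimate the static scene as the per-pixel median over a frame stack, then threshold the deviation of each frame from it:

```python
import numpy as np

def separate_background(frames, threshold=25):
    """Median-filter background separation over a frame stack.

    Returns one boolean foreground mask per frame; pixels that
    deviate from the median background by more than `threshold`
    are flagged as moving.
    """
    stack = np.stack(frames).astype(float)
    background = np.median(stack, axis=0)        # static scene estimate
    return [np.abs(f - background) > threshold for f in stack]

# A bright 'vehicle' pixel moves along the diagonal of a dark scene:
frames = [np.zeros((8, 8)) for _ in range(5)]
for i, f in enumerate(frames):
    f[i, i] = 200.0
masks = separate_background(frames)
print(masks[2][2, 2], masks[2][0, 0])  # True False
```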
17.
《Robotics and Autonomous Systems》2014,62(10):1398-1407
In order to develop an autonomous mobile manipulation system that works in an unstructured environment, a modified image-based visual servo (IBVS) controller using a hybrid camera configuration is proposed in this paper. In particular, an eye-in-hand web camera is employed to visually track the target object, while a stereo camera measures the depth information online. A modified image-based controller is developed to utilize the information from the two cameras. In addition, a rule base is integrated into the visual servo controller to adaptively tune its gain based on the image-deviation data, so as to improve the response speed of the controller. A physical mobile manipulation system is developed and the IBVS controller is implemented on it. The experimental results obtained using the system validate the developed approach.
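The textbook IBVS law that the paper modifies, plus a hypothetical gain schedule standing in for its rule base (the actual rules are not given in the abstract), can be sketched as:

```python
import numpy as np

def ibvs_velocity(s, s_star, L, lam):
    """Classic IBVS law: v = -lam * pinv(L) @ (s - s_star).

    L is the image Jacobian (interaction matrix), s the measured
    image features and s_star the desired ones.
    """
    return -lam * np.linalg.pinv(L) @ (s - s_star)

def adaptive_gain(err_norm, lam_min=0.2, lam_max=1.5, scale=1.0):
    """Hypothetical gain schedule echoing the adaptive-gain idea:
    a small image error gets a larger gain for a faster response."""
    return lam_min + (lam_max - lam_min) * np.exp(-scale * err_norm)

# One 2-D feature with an identity interaction matrix for simplicity:
s, s_star, L = np.array([1.0, 1.0]), np.zeros(2), np.eye(2)
v = ibvs_velocity(s, s_star, L, adaptive_gain(np.linalg.norm(s - s_star)))
```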
18.
19.
H. Schneiderman, M. Nashman, A. J. Wavering, R. Lumia 《Machine Vision and Applications》1995,8(6):359-364
This article describes a method for vision-based autonomous convoy driving in which a robotic vehicle autonomously pursues another vehicle. Pursuit is achieved by visually tracking a target mounted on the back of the pursued vehicle. Visual tracking must be robust, since a failure leads to catastrophic results. To make our system as reliable as possible, uncertainty is accounted for in each measurement and propagated through all computations. We use a best linear unbiased estimate (BLUE) of the target's position in each separate image, and a polynomial least-mean-square fit (LMSF) to estimate the target's motion. Robust autonomous convoy driving has been demonstrated in the presence of various lighting conditions, shadowing, other vehicles, turns at intersections, curves, and hills. A continuous, autonomous convoy drive of over 33 km (20 miles) was successful, at speeds averaging between 50 and 75 km/h (30–45 mph).
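The polynomial least-mean-square fit used to smooth the per-image target positions is an ordinary least-squares polynomial fit; `numpy.polyfit` computes exactly that. The degree and trajectory below are assumptions for illustration:

```python
import numpy as np

# Smooth noisy per-image target positions with a degree-2 polynomial
# least-squares fit (the paper's LMSF step in miniature).
t = np.linspace(0.0, 2.0, 9)                    # image timestamps (s)
x_true = 1.0 + 0.5 * t + 0.25 * t ** 2          # target lateral position
x_meas = x_true                                 # noise-free for the sketch

coeffs = np.polyfit(t, x_meas, deg=2)           # highest degree first
# On clean data the fit recovers [0.25, 0.5, 1.0] up to rounding.
```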
20.
《Advanced Robotics》2013,27(3):217-232
This paper analyzes the effects of applying visual adaptation mechanisms to snapshot-based guidance methods. The guidance principle of visual homing is shown to be a visual potential function with an equilibrium point located at the goal position. The presence of a potential function means that classical control theory principles based on Lyapunov functions can be applied to assess the robustness of the navigation strategy. The Retinex algorithm, a blind chromatic-equalization pre-filtering step that performs color constancy with no a priori information about the illuminant, is proposed as an unsupervised visual adaptation mechanism. It increases the similarity of the visual information under changes in the illuminant, thus increasing the robustness of the visual guidance. Tests and comparisons are presented.
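The color-constancy property that makes Retinex useful for homing can be demonstrated with a single-scale sketch: log(image) minus log(local average). A box filter stands in for the usual Gaussian surround, and this is a sketch of the idea rather than the paper's exact algorithm; because the surround is linear, a uniform illuminant change cancels exactly:

```python
import numpy as np

def box_blur(img, k=3):
    """Mean filter with edge padding (stands in for the Gaussian
    surround of single-scale Retinex)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

def retinex_channel(img, k=3):
    """Single-scale Retinex on one strictly positive channel."""
    img = np.asarray(img, dtype=float)
    return np.log(img) - np.log(box_blur(img, k))

img = np.arange(1.0, 26.0).reshape(5, 5)
r1 = retinex_channel(img)
r2 = retinex_channel(3.0 * img)   # same scene, 3x brighter illuminant
print(np.allclose(r1, r2))        # True
```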