Similar Literature
20 similar records retrieved (search time: 31 ms)
1.
This paper proposes a novel multiple clue-based filtration algorithm (MCFA), which is developed to detect lane markings on roads using camera vision images for autonomous mobile robot navigation. The main goal of the algorithm is the robust estimation of the relative position and angle of the lane in the image by using multiple clues based on different characteristics of the lane. In particular, robustness against environmental changes is enhanced greatly since a dynamic model of the lane, besides static features of the lane such as color, intensity, etc., is incorporated for reliable estimation. The efficiency of the algorithm is verified through mobile robot experiments under various extreme illumination conditions in outdoor environments. The increased robustness enables reliable closed-loop control of a mobile robot that operates in a variety of navigation-related missions.

2.
Realizing steady and reliable navigation is a prerequisite for a mobile robot, but this facility is often undermined by unavoidable slip or irreparable sensor drift errors in long-distance navigation. Perceptual landmarks are one solution to such problems, yet landmarks are occasionally missed at specific spots when the robot moves at different speeds, especially higher ones. Landmarks placed at irregular intervals, or viewed under poor illumination conditions, are easier still to miss. In order to detect and extract artificial landmarks robustly under multiple illumination conditions, some low-level but robust image-processing techniques were implemented. The moving speed and self-location were controlled by the visual servo control method. In cases where the robot suddenly misses some specific landmarks while moving, it finds them again in a short time based on its intelligence and the inertia of the previous search motion. These methods were verified by the reliable vision-based indoor navigation of an A-life mobile robot. This work was presented in part at the 8th International Symposium on Artificial Life and Robotics, Oita, Japan, January 24–26, 2003.

3.
Road Image Understanding for Outdoor Mobile Robots Based on Augmented Transition Networks (ATN)   Cited: 2 (self-citations: 0, others: 2)
Road image understanding is a key technology in the visual navigation of autonomous outdoor mobile robots, which places high demands on real-time performance and robustness. To meet these requirements, a formalism from syntactic analysis in artificial intelligence research, the augmented transition network (ATN), is applied to road understanding for outdoor mobile robots, and an ATN-based road image understanding algorithm is proposed. Guided by a unified ATN construction scheme, the algorithm can flexibly build different road-understanding ATNs for different road conditions, achieving essential unity in design and flexibility in application. Experiments show that, while satisfying the system's robustness requirements, the algorithm achieves very high real-time performance and can fully meet the needs of high-speed autonomous navigation for mobile robots.

4.
This paper presents a robust place recognition algorithm for mobile robots that can be used for planning and navigation tasks. The proposed framework combines nonlinear dimensionality reduction, nonlinear regression under noise, and Bayesian learning to create consistent probabilistic representations of places from images. These generative models are incrementally learnt from very small training sets and used for multi-class place recognition. Recognition can be performed in near real-time and accounts for complexity such as changes in illumination, occlusions, blurring and moving objects. The algorithm was tested with a mobile robot in indoor and outdoor environments with sequences of 1579 and 3820 images, respectively. This framework has several potential applications such as map building, autonomous navigation, search-rescue tasks and context recognition.

5.
陆军, 穆海军, 朱齐丹, 杨明. 《计算机应用》 2007, 27(7): 1677-1679
A method for autonomous robot localization based on omnidirectional vision is studied. Using optical imaging principles, an omnidirectional vision sensor was designed to capture images of the robot's entire surroundings. After denoising the panoramic image, segmenting it by color thresholds, and computing region center points, known landmarks around the robot are recognized; the robot's coordinates are then computed by triangulation, laying a sound foundation for navigation and collision-avoidance tasks. Experimental results show that the method is feasible for autonomous robot localization.
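The triangulation step described above can be sketched as follows, assuming absolute bearings to two known landmarks (e.g., from a heading-referenced panoramic image); the abstract does not give the exact formulation, which may use three landmarks instead:

```python
import numpy as np

def triangulate(l1, l2, theta1, theta2):
    """Locate the robot from world-frame bearings theta1, theta2 (radians)
    to two landmarks with known positions l1, l2.
    Solves p + r1*u1 = l1 and p + r2*u2 = l2 for the robot position p.
    Degenerates if the two bearings are parallel."""
    u1 = np.array([np.cos(theta1), np.sin(theta1)])
    u2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Subtracting the two ray equations: r1*u1 - r2*u2 = l1 - l2
    A = np.column_stack([u1, -u2])
    r = np.linalg.solve(A, np.asarray(l1, float) - np.asarray(l2, float))
    return np.asarray(l1, float) - r[0] * u1
```

For example, a robot at (1, 2) sees a landmark at (5, 2) due "east" (bearing 0) and one at (1, 6) due "north" (bearing π/2); the solver recovers (1, 2).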

6.
张健, 张国山, 邴志刚, 崔世钢. 《微计算机信息》 2007, 23(20): 200-201, 300
This paper designs an autonomous mobile robot navigation system combining RFID and machine vision. The system performs self-localization and navigation by searching for and recognizing RFID tags and door number plates in corridors. The overall design of the navigation system is presented first, followed by the RFID-assisted localization method and the DSP-based door-plate recognition method; finally, the system's effectiveness is verified on the 博创 UpVoyager mobile robot platform.

7.
This paper describes an implementation of a mobile robot system for autonomous navigation in crowded outdoor walkways. The task was to navigate through nonmodified pedestrian paths with people and bicycles passing by. The robot has multiple redundant sensors, which include wheel encoders, an inertial measurement unit, a differential global positioning system, and four laser scanner sensors. All the computation was done on a single laptop computer. A previously constructed map containing waypoints and landmarks for position correction is given to the robot. The robot system's perception, road extraction, and motion planning are detailed. The system was used and tested in a 1‐km autonomous robot navigation challenge held in the City of Tsukuba, Japan, named "Tsukuba Challenge 2007." The proposed approach proved to be robust for outdoor navigation in cluttered and crowded walkways, first on campus paths and then running the challenge course multiple times between trials and the challenge final. The paper reports experimental results and overall performance of the system. Finally, the lessons learned are discussed. The main contribution of this work is the report of a system integration approach for autonomous outdoor navigation and its evaluation. © 2009 Wiley Periodicals, Inc.

8.
To address the shortcomings of GPS and inertial navigation for outdoor mobile robots, a vision-based method is proposed that, on top of GPS and inertial navigation, recognizes lane-line information on the road surface in real time to assist robot localization. Building on the classical Canny edge detection operator, an improved wavelet-threshold algorithm is fused with the Canny operator: the improved wavelet thresholding replaces the traditional Gaussian filter for smoothing and denoising, after which the Canny operator extracts edge features. Road video captured by the robot was processed in Matlab to compute the robot's deflection angle and lateral offset relative to the lane line. In experiments, only 892 of 12,000 frames failed detection, a success rate of 92.6%, a good result that provides solid support for autonomous outdoor movement of mobile robots.
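The abstract's final outputs, the deflection angle and lateral offset relative to the lane line, can be computed from two points on the detected line. This is an illustrative sketch in ground coordinates, not the paper's Matlab code:

```python
import math

def lane_deviation(p1, p2, robot_xy=(0.0, 0.0)):
    """Deflection angle (radians) and signed lateral offset of the robot
    from a lane line given two points p1, p2 on the line.
    The angle is measured against the robot's forward (+y) axis."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    angle = math.atan2(dx, dy)  # 0 when the line runs straight ahead
    # Signed perpendicular distance from the robot to the line p1-p2
    norm = math.hypot(dx, dy)
    offset = (dx * (p1[1] - robot_xy[1])
              - (p1[0] - robot_xy[0]) * dy) / norm
    return angle, offset
```

For a lane line running straight ahead two units to the robot's side, the deflection angle is 0 and the offset magnitude is 2.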

9.
10.
This paper borrows the concept of an "energy field" from physics to describe qualitatively the distance information between the robot and surrounding obstacles, deriving a navigation control algorithm: the potential field method. Experiments show that it has good real-time performance and robustness; in particular, it leads one to view navigation control from the level of robot behavior patterns, yielding a reflexive "sense-act" behavior, and navigation by this means proves feasible. The potential field method is computationally simple, low in overhead, and fast, making it well suited to low-level real-time control, and it still navigates well in complex outdoor environments.
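A minimal sketch of the classic attractive/repulsive potential-field update the abstract describes; the gains `k_att`, `k_rep` and the influence radius `d0` are illustrative values, not the paper's tuning:

```python
import numpy as np

def potential_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0,
                   d0=2.0, step=0.05):
    """One gradient-descent step on a potential field: the goal pulls
    the robot in proportion to distance, and each obstacle within
    radius d0 pushes it away."""
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    force = k_att * (goal - pos)                       # attractive pull
    for ob in obstacles:
        diff = pos - np.asarray(ob, float)
        d = np.linalg.norm(diff)
        if d < d0:                                     # repel only nearby
            force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff
    return pos + step * force
```

The per-step cost is a handful of arithmetic operations per obstacle, which is what makes the method attractive for low-level real-time control; its known weakness (not mentioned in the abstract) is local minima between closely spaced obstacles.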

11.
To meet the navigation needs of a walking-assistant robot in unknown outdoor environments, the advantages and disadvantages of different navigation approaches are analyzed, and a robot localization and navigation system based on the Global Positioning System (GPS) is designed and implemented. The creation of the outdoor environment map and the control of map accuracy are described in detail. To improve localization accuracy, map matching is used to correct GPS positioning errors, fused with the robot's real-time velocity data to obtain the final robot position. On the basis of this localization, the walking-assistant robot's ...

12.
In this paper, we describe the artificial evolution of adaptive neural controllers for an outdoor mobile robot equipped with a mobile camera. The robot can dynamically select the gazing direction by moving the body and/or the camera. The neural control system, which maps visual information to motor commands, is evolved online by means of a genetic algorithm, but the synaptic connections (receptive fields) from visual photoreceptors to internal neurons can also be modified by Hebbian plasticity while the robot moves in the environment. We show that robots evolved in physics-based simulations with Hebbian visual plasticity display more robust adaptive behavior when transferred to real outdoor environments as compared to robots evolved without visual plasticity. We also show that the formation of visual receptive fields is significantly and consistently affected by active vision as compared to the formation of receptive fields with grid sample images in the environment of the robot. Finally, we show that the interplay between active vision and receptive field formation amounts to the selection and exploitation of a small and constant subset of visual features available to the robot.
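Hebbian plasticity of receptive fields, as used above, can be illustrated with Oja's normalized Hebbian rule, a common stable variant; the paper's exact plasticity rule may differ:

```python
import numpy as np

def oja_update(w, x, lr=0.01):
    """One Hebbian weight update with Oja's normalization.
    The Hebb term lr*y*x strengthens co-active connections; the decay
    term -lr*y*y*w keeps the weight norm bounded near 1, so the weight
    vector converges toward the principal direction of the inputs."""
    y = float(w @ x)                  # postsynaptic activation
    return w + lr * y * (x - y * w)
```

Repeated over input samples whose variance is dominated by one direction, the weight vector aligns with that direction, which is the sense in which Hebbian learning "forms" a receptive field from visual statistics.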

13.
In this article, we present a novel approach to learning efficient navigation policies for mobile robots that use visual features for localization. As fast movements of a mobile robot typically introduce inherent motion blur in the acquired images, the uncertainty of the robot about its pose increases in such situations. As a result, it cannot be ensured anymore that a navigation task can be executed efficiently since the robot’s pose estimate might not correspond to its true location. We present a reinforcement learning approach to determine a navigation policy to reach the destination reliably and, at the same time, as fast as possible. Using our technique, the robot learns to trade off velocity against localization accuracy and implicitly takes the impact of motion blur on observations into account. We furthermore developed a method to compress the learned policy via a clustering approach. In this way, the size of the policy representation is significantly reduced, which is especially desirable in the context of memory-constrained systems. Extensive simulated and real-world experiments carried out with two different robots demonstrate that our learned policy significantly outperforms policies using a constant velocity and more advanced heuristics. We furthermore show that the policy is generally applicable to different indoor and outdoor scenarios with varying landmark densities as well as to navigation tasks of different complexity.
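The policy-compression idea can be illustrated by merging adjacent states whose learned velocities are nearly equal; this simple run-length scheme is an illustrative stand-in for the clustering approach the paper describes, not its actual method:

```python
def compress_policy(states, velocities, tol=0.1):
    """Compress a tabular policy (state -> velocity) by keeping only the
    states where the commanded velocity changes by more than `tol`.
    Returns (state, velocity) representatives; lookups take the
    representative with the largest state not exceeding the query."""
    reps = [[states[0], velocities[0]]]
    for s, v in zip(states[1:], velocities[1:]):
        if abs(v - reps[-1][1]) < tol:
            continue                  # covered by the current cluster
        reps.append([s, v])
    return reps
```

The memory saving is largest exactly where the learned policy is smooth, i.e., where long runs of states share nearly the same velocity command.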

14.
Objective: To reduce the non-robustness of road-detection algorithms caused by shadows, cracks, and irregular road boundaries in the visual navigation of outdoor autonomous mobile robots, a fast adaptive road detection method with a per-frame adjustable gray threshold is proposed. Method: The road image is first decomposed and reconstructed with a 2D discrete wavelet transform; the approximation images at each level are compared to determine the best resolution level that does not affect the binary "road / non-road" gray classification. In this low-resolution scale space, a fitness function is constructed jointly from the maximum between-class gray variance and the minimum within-class gray variance, and an improved genetic algorithm adaptively thresholds each road frame to find accurate road boundaries; the midpoint between the two nearest boundaries gives the robot's driving direction. A small autonomous ground vehicle served as the research platform, and the algorithm was also tested on the outdoor mobile robot road videos provided by Carnegie Mellon University (CMU). Results: The method robustly segments road boundaries under shadows, cracks, and changing illumination; the robot can travel at an average of 30 km/h on campus roads with fairly severe shadow interference, and the vision system processes a frame in 20 ms on average. Conclusion: The method shows stronger environmental adaptivity than traditional gray-histogram segmentation, achieves fairly robust outdoor road detection, and can be promoted as a comparatively robust approach to unstructured road detection for outdoor autonomous mobile robots.

15.
Wide-baseline stereo vision for terrain mapping   Cited: 3 (self-citations: 0, others: 3)
Terrain mapping is important for mobile robots to perform localization and navigation. Stereo vision has been used extensively for this purpose in outdoor mapping tasks. However, conventional stereo does not scale well to distant terrain. This paper examines the use of wide-baseline stereo vision in the context of a mobile robot for terrain mapping, and we are particularly interested in the application of this technique to terrain mapping for Mars exploration. In wide-baseline stereo, the images are not captured simultaneously by two cameras, but by a single camera at different positions. The larger baseline allows more accurate depth estimation of distant terrain, but the robot motion between camera positions introduces two new problems. One issue is that the robot estimates the relative positions of the camera at the two locations imprecisely, unlike the precise calibration that is performed in conventional stereo. Furthermore, the wide-baseline results in a larger change in viewpoint than in conventional stereo. Thus, the images are less similar and this makes the stereo matching process more difficult. Our methodology addresses these issues using robust motion estimation and feature matching. We give results using real images of terrain on Earth and Mars and discuss the successes and failures of the technique.
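The benefit of the larger baseline follows from the rectified-stereo depth model Z = fB/d: for a fixed disparity matching error, depth uncertainty shrinks in proportion to the baseline. A small sketch, with focal length in pixels, baseline in metres, and a ±0.5 px matching error assumed for illustration:

```python
def stereo_depth(f_px, baseline_m, disparity_px):
    """Depth from disparity for a rectified stereo pair: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

def depth_error(f_px, baseline_m, z_m, disp_err_px=0.5):
    """First-order depth uncertainty: dZ ~= Z**2 * dd / (f * B).
    Error grows quadratically with range, so distant terrain is where
    widening the baseline B pays off most."""
    return z_m ** 2 * disp_err_px / (f_px * baseline_m)
```

With f = 1000 px, a point 10 m away seen with a 0.1 m baseline carries about 0.5 m of depth uncertainty; stretching the baseline to 1 m (only possible by moving one camera, as the paper does) cuts that to 0.05 m, at the cost of the pose-estimation and matching difficulties the abstract describes.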

16.
With the recent proliferation of robust but computationally demanding robotic algorithms, there is now a need for a mobile robot platform equipped with powerful computing facilities. In this paper, we present the design and implementation of Beobot 2.0, an affordable research‐level mobile robot equipped with a cluster of 16 2.2‐GHz processing cores. Beobot 2.0 uses compact Computer on Module (COM) processors with modest power requirements, thus accommodating various robot design constraints while still satisfying the requirement for computationally intensive algorithms. We discuss issues involved in utilizing multiple COM Express modules on a mobile platform, such as interprocessor communication, power consumption, cooling, and protection from shocks, vibrations, and other environmental hazards such as dust and moisture. We have applied Beobot 2.0 to the following computationally demanding tasks: laser‐based robot navigation, scale‐invariant feature transform (SIFT) object recognition, finding objects in a cluttered scene using visual saliency, and vision‐based localization, wherein the robot has to identify landmarks from a large database of images in a timely manner. For the last task, we tested the localization system in three large‐scale outdoor environments, which provide 3,583, 6,006, and 8,823 test frames, respectively. The localization errors for the three environments were 1.26, 2.38, and 4.08 m, respectively. The per‐frame processing times were 421.45, 794.31, and 884.74 ms, respectively, representing speedup factors of 2.80, 3.00, and 3.58 when compared to a single dual‐core computer performing localization. © 2010 Wiley Periodicals, Inc.

17.
In this work, we present a new real-time image-based monocular path detection method. It does not require camera calibration and works on semi-structured outdoor paths. The core of the method is based on segmenting images and classifying each super-pixel to infer a contour of navigable space. This method allows a mobile robot equipped with a monocular camera to follow different naturally delimited paths. The contour shape can be used to calculate the forward and steering speed of the robot. To achieve real-time computation necessary for on-board execution in mobile robots, the image segmentation is implemented on a low-power embedded GPU. The validity of our approach has been verified with an image dataset of various outdoor paths as well as with a real mobile robot.
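Mapping the navigable-space contour to forward and steering speeds can be sketched as below; the centroid-based steering law and the gains `v_max` and `k_steer` are illustrative assumptions, not the paper's controller:

```python
import numpy as np

def speeds_from_contour(contour, img_w, v_max=0.5, k_steer=1.0):
    """Map a navigable-space contour (N x 2 pixel points) to forward and
    steering commands: steer toward the contour centroid, and scale the
    forward speed down as the navigable area shrinks."""
    pts = np.asarray(contour, float)
    cx = pts[:, 0].mean()
    steer = k_steer * (img_w / 2.0 - cx) / (img_w / 2.0)  # in [-k, k]
    # Shoelace formula for the contour's enclosed area (pixels^2)
    area = 0.5 * abs(np.dot(pts[:, 0], np.roll(pts[:, 1], 1))
                     - np.dot(pts[:, 1], np.roll(pts[:, 0], 1)))
    forward = v_max * min(1.0, area / (0.25 * img_w * img_w))
    return forward, steer
```

A contour centered in the image yields zero steering, and a narrowing path (smaller enclosed area) automatically slows the robot, which matches the qualitative behavior the abstract attributes to the contour shape.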

18.
19.
20.
Research on the Navigation Control Algorithm of THMR-V   Cited: 6 (self-citations: 1, others: 5)
李华, 丁冬花, 何克忠. 《机器人》 2001, 23(6): 525-530
THMR-V is an outdoor mobile robot developed by the State Key Laboratory of Intelligent Technology and Systems at Tsinghua University in support of a national 863 Program project. This paper presents two parts of the THMR-V navigation control algorithm: an arc-cutting tracking algorithm for turning, and algorithms for stopping at and avoiding obstacles, with the latter being the paper's focus. Simulation plots are given for both algorithms, and their feasibility was also verified in experiments.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号