Similar Documents
A total of 20 similar documents were retrieved.
1.
Moving-target tracking is an important research direction for mobile robots operating in unknown environments. This paper presents a design method for mobile-robot moving-target tracking based on active vision and ultrasonic information. An active vision system was built around a SONY EV-D31 color camera, a self-developed camera control module, and an image acquisition and processing unit. The mobile robot adopts a behavior-based distributed control architecture: active vision locks onto the moving target while the ultrasonic system senses the external environment, allowing the robot to track moving targets reliably in unknown, dynamic, unstructured and complex environments. Experiments show that the robot is highly robust and that the moving-target tracking system runs reliably.

2.
To meet the slope-perception accuracy required for smooth uphill control of a mobile robot in unknown environments, this paper proposes a transfer-learning-based slope detection algorithm that operates on a single image frame. A deep convolutional neural field-fully connected superpixel pooling network is trained on a standard indoor image dataset…

3.
To ensure effective visual servo control of mobile robots and improve its accuracy, a mobile-robot visual servo control system based on virtual reality technology is designed. The hardware consists of virtual-environment I/O devices such as a 3D vision sensor and a stereoscopic display, together with pose sensors, a visual image processor, and a servo controller. A mathematical model of the mobile robot is built covering both kinematics and dynamics; real-time visual images of the robot are generated with a calibrated camera and preprocessed through steps such as image filtering and distortion correction. The visual images are then used to construct a virtual movement environment for the robot. Using virtual reality technology, the robot's route is planned through target localization, route generation, collision detection and route adjustment, and the visual servo control function is realized by computing the control quantities. System tests show that the position control error of the designed system is small, the attitude-angle and speed control errors are only 0.05° and 0.12 m/s, and the robot collides less often, indicating good visual servo control performance and effectively improved control accuracy.
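
The filtering and distortion-correction preprocessing mentioned above is standard image preparation; the sketch below shows one way to do it with OpenCV, assuming a hypothetical intrinsic matrix K and distortion coefficients dist obtained from calibration (the paper's actual calibration values and toolchain are not stated).

    import cv2
    import numpy as np

    # Hypothetical intrinsics and distortion coefficients from camera calibration;
    # real values would come from calibrating the robot's own camera.
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    dist = np.array([-0.21, 0.08, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

    def preprocess(frame):
        """Denoise and undistort one camera frame before it is used for servoing."""
        blurred = cv2.GaussianBlur(frame, (5, 5), 1.0)   # image filtering
        return cv2.undistort(blurred, K, dist)           # distortion correction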

4.
In vision-based robot soccer systems, dynamically tracking and recognizing the focal target on the field, the ball, is the first priority of system design. To address recognition loss of the ball caused by on-field interference and occlusion by the robot cars in semi-autonomous micro-robot soccer matches, an image target tracking method based on prediction and a search window is proposed. The likely position of the lost ball is predicted by least squares, the image search is restricted to a small local region, and online state information inside the search window is used for verification, so that the moving target can be tracked and recognized effectively even when occluded. Experiments and match statistics show that the method tracks and recognizes targets in real time with good performance and strong robustness.
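
To illustrate the prediction-and-search-window idea (a sketch, not the authors' exact implementation), the code below fits the recent ball track with least squares, extrapolates one frame ahead, and limits the search to a window around the prediction; the history length, polynomial order and window size are assumptions.

    import numpy as np

    def predict_next(track, order=1):
        """Least-squares fit of recent (x, y) ball centers over time, extrapolated one frame."""
        track = np.asarray(track, dtype=float)      # shape (N, 2)
        t = np.arange(len(track))
        px = np.polyfit(t, track[:, 0], order)      # fit x(t)
        py = np.polyfit(t, track[:, 1], order)      # fit y(t)
        return np.polyval(px, len(track)), np.polyval(py, len(track))

    def search_window(pred, half=24, shape=(480, 640)):
        """Clip a square window around the predicted center to the image bounds."""
        x, y = int(round(pred[0])), int(round(pred[1]))
        x0, x1 = max(0, x - half), min(shape[1], x + half)
        y0, y1 = max(0, y - half), min(shape[0], y + half)
        return x0, y0, x1, y1   # search only this region for the occluded ball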

5.
Environment Understanding and Obstacle Detection Algorithm Based on Multi-Sensor Data Fusion
张奇  顾伟康 《机器人》1998,20(2):104-110
This paper studies multi-sensor data fusion for mobile robots based on Dempster-Shafer evidence reasoning. By fusing 2D color images acquired by a CCD color camera with 3D range images acquired by a laser range imaging radar, the reliability and accuracy of environment understanding and obstacle detection are greatly improved over what can be achieved from any single sensor alone. Several difficult practical problems of visual information fusion in mobile robots are discussed and meaningful results are obtained.
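
The core operation in Dempster-Shafer fusion is Dempster's rule of combination. The sketch below combines two mass functions over a tiny frame of discernment {obstacle, free}, standing in for evidence derived from the 2D color image and the 3D range image; the mass values are made-up placeholders, not the paper's.

    from itertools import product

    def combine(m1, m2):
        """Dempster's rule: combine two mass functions defined on subsets (frozensets)
        of the frame of discernment; mass assigned to the empty set is the conflict."""
        fused, conflict = {}, 0.0
        for a, b in product(m1, m2):
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + m1[a] * m2[b]
            else:
                conflict += m1[a] * m2[b]
        return {s: v / (1.0 - conflict) for s, v in fused.items()}

    O, F = frozenset({"obstacle"}), frozenset({"free"})
    U = O | F                              # mass on the whole frame = ignorance
    m_camera = {O: 0.6, F: 0.1, U: 0.3}    # placeholder evidence from the color image
    m_laser  = {O: 0.7, F: 0.2, U: 0.1}    # placeholder evidence from the range image
    print(combine(m_camera, m_laser))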

6.
Legged robots are an efficient alternative for navigation in challenging terrain. In this paper we describe Weaver, a six-legged robot that is designed to perform autonomous navigation in unstructured terrain. It uses stereo vision and proprioceptive sensing based terrain perception for adaptive control while using visual-inertial odometry for autonomous waypoint-based navigation. Terrain perception generates a minimal representation of the traversed environment in terms of roughness and step height. This reduces the complexity of the terrain model significantly, enabling the robot to feed back information about the environment into its controller. Furthermore, we combine exteroceptive and proprioceptive sensing to enhance the terrain perception capabilities, especially in situations in which the stereo camera is not able to generate an accurate representation of the environment. The adaptation approach described also exploits the unique properties of legged robots by adapting the virtual stiffness, stride frequency, and stride height. Weaver's unique leg design with five joints per leg improves locomotion on high gradient slopes, and this novel configuration is further analyzed. Using these approaches, we present an experimental evaluation of this fully self-contained hexapod performing autonomous navigation on a multiterrain testbed and in outdoor terrain.
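
As a rough illustration of the minimal terrain representation described (roughness and step height of the traversed patch), the sketch below computes both from a small local height map; the exact definitions used on Weaver are not given in the abstract, so the standard deviation of heights and the largest jump between neighbouring cells are used as illustrative stand-ins.

    import numpy as np

    def terrain_features(heightmap):
        """Reduce a local height map (metres, regular grid) to roughness and step height."""
        h = np.asarray(heightmap, dtype=float)
        roughness = float(np.std(h))                  # spread of heights in the patch
        step_x = np.abs(np.diff(h, axis=1)).max()     # largest jump between x-neighbours
        step_y = np.abs(np.diff(h, axis=0)).max()     # largest jump between y-neighbours
        return roughness, float(max(step_x, step_y))

    patch = np.array([[0.00, 0.01, 0.02],
                      [0.00, 0.12, 0.03],   # a 12 cm step in the middle of the patch
                      [0.01, 0.02, 0.02]])
    print(terrain_features(patch))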

7.
Development of Outdoor Intelligent Mobile Robots and Research on Their Key Technologies
欧青立  何克忠 《机器人》2000,22(6):519-526
Outdoor intelligent mobile robots have broad application prospects and are one of the hot topics in robotics research. This paper analyzes several representative systems in the development of outdoor mobile robots, and then reviews the state of the art of a number of key technologies in outdoor mobile robot research, including control architectures for mobile robots, real-time processing of robot vision information, vehicle localization systems, integration and fusion of multi-sensor information, path planning, and vehicle control.

8.
Recent developments in sensor technology have made it feasible to use mobile robots in several fields, but robots still lack the ability to accurately sense the environment. A major challenge to the widespread deployment of mobile robots is the ability to function autonomously, learning useful models of environmental features, recognizing environmental changes, and adapting the learned models in response to such changes. This article focuses on such learning and adaptation in the context of color segmentation on mobile robots in the presence of illumination changes. The main contribution of this article is a survey of vision algorithms that are potentially applicable to color-based mobile robot vision. We therefore look at algorithms for color segmentation, color learning and illumination invariance on mobile robot platforms, including approaches that tackle just the underlying vision problems. Furthermore, we investigate how the inter-dependencies between these modules and high-level action planning can be exploited to achieve autonomous learning and adaptation. The goal is to determine the suitability of the state-of-the-art vision algorithms for mobile robot domains, and to identify the challenges that still need to be addressed to enable mobile robots to learn and adapt models for color, so as to operate autonomously in natural conditions.

9.
This paper presents results generated with a new evolutionary robotics (ER) simulation environment and its complementary real mobile robot colony research test-bed. Neural controllers producing mobile robot maze searching and exploration behaviors using binary tactile sensors as inputs were evolved in a simulated environment and subsequently transferred to and tested on real robots in a physical environment. There has been a considerable amount of proof-of-concept and demonstration research done in the field of ER control in recent years, most of which has focused on elementary behaviors such as object avoidance and homing. Artificial neural networks (ANN) are the most commonly used evolvable controller paradigm found in current ER literature. Much of the research reported to date has been restricted to the implementation of very simple behaviors using small ANN controllers. In order to move beyond the proof-of-concept stage, our ER research was designed to train larger, more complicated ANN controllers, and to implement those controllers on real robots quickly and efficiently. To achieve this, a physical robot test-bed that includes a colony of eight real robots with advanced computing and communication abilities was designed and built. The real robot platform has been coupled to a simulation environment that facilitates the direct wireless transfer of evolved neural controllers from simulation to real robots (and vice versa). We believe that it is the simultaneous development of ER computing systems in both the simulated and the physical worlds that will produce advances in mobile robot colony research. Our simulation and training environment development focuses on the definition and training of our new class of ANNs, networks that include multiple hidden layers, and time-delayed and recurrent connections. Our physical mobile robot design focuses on maximizing computing and communications power while minimizing robot size, weight, and energy usage. The simulation and ANN-evolution environment was developed using MATLAB. To allow for efficient control software portability, our physical evolutionary robots (EvBots) are equipped with a PC-104-based computer running a custom distribution of Linux and connected to the Internet via a wireless network connection. In addition to other high-level computing applications, the mobile robots run a condensed version of MATLAB, enabling ANN controllers evolved in simulation to be transferred directly onto physical robots without any alteration to the code. This is the first paper in a series to be published cataloging our results in this field.
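
A minimal sketch of the evolutionary loop described, reduced to a single-hidden-layer feedforward controller and a placeholder fitness function (in the actual system, fitness would be scored by running the controller in the maze simulator); the population size, mutation scale and network dimensions are arbitrary, and the recurrent and time-delayed connections of the paper's networks are omitted for brevity.

    import numpy as np

    rng = np.random.default_rng(0)
    N_IN, N_HID, N_OUT = 8, 6, 2         # tactile inputs, hidden units, motor outputs
    N_W = N_IN * N_HID + N_HID * N_OUT   # length of the flattened weight vector

    def controller(weights, sensors):
        """Feedforward ANN: binary tactile sensors in, two motor commands out."""
        w1 = weights[:N_IN * N_HID].reshape(N_IN, N_HID)
        w2 = weights[N_IN * N_HID:].reshape(N_HID, N_OUT)
        return np.tanh(np.tanh(sensors @ w1) @ w2)

    def fitness(weights):
        """Placeholder: reward smooth forward motion on random tactile input.
        The real fitness would measure maze exploration in the simulator."""
        sensors = rng.integers(0, 2, size=(50, N_IN)).astype(float)
        motors = controller(weights, sensors)
        return float(motors[:, 0].mean() - np.abs(motors[:, 1]).mean())

    population = rng.normal(0.0, 0.5, size=(30, N_W))
    for generation in range(100):
        scores = np.array([fitness(ind) for ind in population])
        elite = population[np.argsort(scores)[-10:]]                # keep the best third
        children = elite[rng.integers(0, len(elite), 20)]
        children = children + rng.normal(0.0, 0.1, children.shape)  # Gaussian mutation
        population = np.vstack([elite, children])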

10.
A self-localization system for autonomous mobile robots is presented. This system estimates the robot position in previously learned environments, using data provided solely by an omnidirectional visual perception subsystem composed of a camera and of a special conical reflecting surface. It performs an optical pre-processing of the environment, allowing a compact representation of the collected data. These data are then fed to a learning subsystem that associates the perceived image to an estimate of the actual robot position. Both neural networks and statistical methods have been tested and compared as learning subsystems. The system has been implemented and tested and results are presented.

11.
This paper presents a learning rule, CBA, to develop oriented receptive fields similar to those found in cat striate cortex. The inherent complexity of the development of selectivity in visual cortex has led most authors to test their models by using a restricted input environment. Only recently, some learning rules (the PCA and the BCM rules) have been studied in a realistic visual environment. For these rules, which are based upon Hebbian learning, single neuron models have been proposed in order to get a better understanding of their properties and dynamics. These models suffered from unbounded growth of synaptic strength, which is remedied by a normalization process. However, normalization seems biologically implausible, given the non-local nature of this process. A detailed stability analysis of the proposed rule proves that the CBA attains a stable state without any need for normalization. Also, a comparison among the results achieved in different types of visual environments by the PCA, the BCM and the CBA rules is provided. The final results show that the CBA rule is appropriate for studying the biological process of receptive field formation and its application in image processing and artificial vision tasks.

12.
Recognizing complex terrain has long been a frontier problem for intelligent robots. A mobile robot does not move the same way on every terrain, so the choice of motion is critical for quickly and accurately identifying the terrain type. To address this, the paper proposes an active-perception exploration method based on a Bayesian framework, which lets the mobile robot actively explore motions of interest and learn the correspondence between perception and motion, thereby reducing the uncertainty in terrain recognition. To further verify reliability, a passive-perception strategy is also used to compare and analyze the differences between the strategies. Experimental results show that the active-perception method can plan effective action sequences for terrain recognition and guide the robot to actively sense the target terrain, and that in unknown outdoor environments its recognition performance after active perception is better than that of passive perception.
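
A minimal sketch of Bayesian active perception in the spirit described: the belief over terrain classes is updated after every observation, and the next motion is the one that minimises the expected posterior entropy. The terrain classes, motion set and likelihood table below are invented for illustration only.

    import numpy as np

    terrains = ["grass", "gravel", "sand"]
    motions = ["walk", "trot"]
    # p(observation | terrain, motion) for two possible observations (0, 1); invented numbers.
    lik = {
        ("walk", 0): np.array([0.5, 0.4, 0.9]), ("walk", 1): np.array([0.5, 0.6, 0.1]),
        ("trot", 0): np.array([0.3, 0.9, 0.4]), ("trot", 1): np.array([0.7, 0.1, 0.6]),
    }

    def update(belief, motion, obs):
        post = belief * lik[(motion, obs)]
        return post / post.sum()

    def entropy(p):
        p = p[p > 0]
        return float(-(p * np.log(p)).sum())

    def choose_motion(belief):
        """Active step: pick the motion whose expected posterior entropy is lowest."""
        def expected_entropy(m):
            return sum((belief * lik[(m, o)]).sum() * entropy(update(belief, m, o))
                       for o in (0, 1))
        return min(motions, key=expected_entropy)

    belief = np.ones(len(terrains)) / len(terrains)
    m = choose_motion(belief)
    belief = update(belief, m, obs=1)   # the observation would come from the robot's sensors
    print(m, dict(zip(terrains, belief.round(3))))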

13.
Wide-baseline stereo vision for terrain mapping
Terrain mapping is important for mobile robots to perform localization and navigation. Stereo vision has been used extensively for this purpose in outdoor mapping tasks. However, conventional stereo does not scale well to distant terrain. This paper examines the use of wide-baseline stereo vision in the context of a mobile robot for terrain mapping, and we are particularly interested in the application of this technique to terrain mapping for Mars exploration. In wide-baseline stereo, the images are not captured simultaneously by two cameras, but by a single camera at different positions. The larger baseline allows more accurate depth estimation of distant terrain, but the robot motion between camera positions introduces two new problems. One issue is that the robot estimates the relative positions of the camera at the two locations imprecisely, unlike the precise calibration that is performed in conventional stereo. Furthermore, the wide baseline results in a larger change in viewpoint than in conventional stereo. Thus, the images are less similar and this makes the stereo matching process more difficult. Our methodology addresses these issues using robust motion estimation and feature matching. We give results using real images of terrain on Earth and Mars and discuss the successes and failures of the technique.
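
The benefit of the wide baseline follows from the standard stereo relation: with focal length f (pixels), baseline B and disparity d, depth is Z = fB/d, so a disparity error of dd pixels gives a depth error of roughly Z^2/(fB) * dd, which shrinks as B grows. A small numeric check with made-up camera values:

    def depth_error(f_px, baseline_m, depth_m, disparity_err_px=1.0):
        """First-order stereo depth uncertainty: dZ ~ Z^2 / (f * B) * dd."""
        return depth_m ** 2 / (f_px * baseline_m) * disparity_err_px

    # Made-up camera: 1000 px focal length, terrain feature 50 m away, 1 px matching error.
    for baseline in (0.3, 3.0):   # conventional stereo head vs. baseline from robot motion
        print(f"B = {baseline:3.1f} m -> depth error ~ {depth_error(1000, baseline, 50.0):.1f} m")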

14.
Executing complex robotic tasks including dexterous grasping and manipulation requires a combination of dexterous robots, intelligent sensors and adequate object information processing. In this paper, vision has been integrated into a highly redundant robotic system consisting of a tiltable camera and a three-fingered dexterous gripper, both mounted on a PUMA-type robot arm. In order to condense the image data of the robot working space acquired from the mobile camera, contour image processing is used for offline grasp and motion planning as well as for online supervision of manipulation tasks. The performance of the desired robot and object motions is controlled by a visual feedback system coordinating motions of hand, arm and eye according to the specific requirements of the respective situation. Experiences and results based on several experiments in the field of service robotics show the possibilities and limits of integrating vision and tactile sensors into a dexterous hand-arm-eye system able to assist humans in industrial or servicing environments.

15.
Positioning technology for uncertain targets in complex field environments remains a bottleneck for mobile robots: because of environmental disturbances, objects are hard to locate precisely with a robot manipulator. To address this positioning problem, a binocular stereo vision system and the positioning principle of a picking manipulator in a virtual environment (VE) are proposed and elaborated. A manipulator positioning model is built in the VE, a positioning simulation system is developed with Microsoft Visual C++ 6.0, and a binocular stereo vision platform with three-coordinate guideway positioning is constructed for testing. The error sources of the vision positioning system are analyzed, and mathematical models of the camera system error, the experimental error, and the camera calibration and matching error are established. With the developed manipulator positioning simulation software and the vision system hardware, an experimental positioning platform is constructed; on this platform the stereo vision data are mapped to the manipulator to guide accurate positioning in the VE. Finally, a positioning error compensation experiment is carried out. The simulation in the VE and the experimental results show that the vision positioning method is feasible for positioning in field environments and can be used to control robot operation and correct positioning errors in real time, in particular for long-range precision modelling and error compensation of robots.

16.
Vision-based remote control of cellular robots
This paper describes the development and design of a vision-based remote controlled cellular robot. Cellular robots have numerous applications in industrial problems where simple inexpensive robots can be used to perform different tasks that involve covering a large working space. As a methodology, the robots are controlled based on the visual input from one or more cameras that monitor the working area. As a result, a robust control of the robot trajectory is achieved without depending on the camera calibration. The remote user simply specifies a target point in the image to indicate the robot's final position.

We describe the complete system at various levels: the visual information processing, the robot characteristics and the closed loop control system design, including the stability analysis when the camera location is unknown. Results are presented and discussed.

In our opinion, such a system may have a wide spectrum of applications in industrial robotics and may also serve as an educational testbed for advanced students in the fields of vision, robotics and control.
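
A minimal sketch of the kind of image-space feedback loop described, where the user clicks a target pixel and the controller drives the pixel error to zero without using the camera calibration; the gains, stopping threshold and command interface are assumptions rather than the paper's design.

    import numpy as np

    K_FORWARD, K_TURN = 0.004, 0.01   # assumed proportional gains

    def image_space_step(robot_px, heading_px, target_px):
        """One control step computed purely from image measurements.
        robot_px: robot position in the image, heading_px: a point ahead of the robot,
        target_px: the pixel the remote user clicked."""
        error = np.subtract(target_px, robot_px).astype(float)
        heading = np.subtract(heading_px, robot_px).astype(float)
        distance = float(np.linalg.norm(error))
        angle = np.arctan2(error[1], error[0]) - np.arctan2(heading[1], heading[0])
        angle = (angle + np.pi) % (2 * np.pi) - np.pi   # wrap to (-pi, pi]
        if distance < 5.0:                              # close enough (pixels): stop
            return 0.0, 0.0
        return K_FORWARD * distance, K_TURN * angle     # (forward, turn) commands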


17.
In this work, we present a new real-time image-based monocular path detection method. It does not require camera calibration and works on semi-structured outdoor paths. The core of the method is based on segmenting images and classifying each super-pixel to infer a contour of navigable space. This method allows a mobile robot equipped with a monocular camera to follow different naturally delimited paths. The contour shape can be used to calculate the forward and steering speed of the robot. To achieve real-time computation necessary for on-board execution in mobile robots, the image segmentation is implemented on a low-power embedded GPU. The validity of our approach has been verified with an image dataset of various outdoor paths as well as with a real mobile robot.
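
One simple way to turn the navigable-space contour into forward and steering speeds, in the spirit of the abstract but with made-up gains and thresholds, is to steer toward the horizontal centroid of the free pixels in the near half of the image and slow down as the free area shrinks:

    import numpy as np

    def speeds_from_contour(free_mask, v_max=0.5, k_steer=1.5):
        """free_mask: boolean H x W image, True where a super-pixel was classified as path.
        Returns (forward_speed, steering_rate) with made-up gains."""
        h, w = free_mask.shape
        lower = free_mask[h // 2:, :]                  # near half of the image
        if lower.sum() < 0.02 * lower.size:            # almost no free space ahead: stop
            return 0.0, 0.0
        cols = np.nonzero(lower)[1]
        offset = (cols.mean() - w / 2.0) / (w / 2.0)   # -1 (path far left) .. +1 (far right)
        forward = min(v_max, v_max * lower.mean() / 0.5)   # slower when the path narrows
        return forward, -k_steer * offset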

18.
An active vision subsystem was built from a SONY EV-D31 camera and a self-developed camera control module, and applied to the RIRA-II mobile robot to realize automatic tracking of moving targets. The RIRA-II mobile robot adopts a behavior-based distributed control architecture consisting of a set of distributed behavior modules and a centralized command arbiter. Each behavior module votes reactively based on domain knowledge, the arbiter produces the action command, and the robot executes the corresponding action. Moving-target tracking experiments in complex environments containing obstacles, narrow passages and simulated walls show that the tracking system runs reliably and with high robustness.
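
A minimal sketch of the vote-and-arbitrate scheme described: each behavior module scores a shared set of candidate actions, and the arbiter issues the action with the highest weighted total. The behaviors, candidate actions, thresholds and weights below are invented placeholders.

    ACTIONS = ["forward", "turn_left", "turn_right", "stop"]

    def track_target(state):
        """Active-vision behavior: vote to turn toward the target bearing (degrees)."""
        if state["target_bearing"] > 10:
            return {"turn_left": 1.0}
        if state["target_bearing"] < -10:
            return {"turn_right": 1.0}
        return {"forward": 1.0}

    def avoid_obstacle(state):
        """Ultrasonic behavior: vote against moving forward when something is close ahead."""
        if state["front_range"] < 0.4:
            return {"forward": -2.0, "turn_left": 0.5, "turn_right": 0.5}
        return {}

    BEHAVIORS = [(track_target, 1.0), (avoid_obstacle, 2.0)]   # (module, arbiter weight)

    def arbitrate(state):
        totals = {a: 0.0 for a in ACTIONS}
        for behavior, weight in BEHAVIORS:
            for action, vote in behavior(state).items():
                totals[action] += weight * vote
        return max(totals, key=totals.get)

    print(arbitrate({"target_bearing": 25, "front_range": 0.3}))   # -> "turn_left"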

19.
For the visual servo tracking control problem of mobile robots, a control method based on adaptive dynamic programming (ADP) is proposed. A camera mounted on the robot captures the current, desired and reference images of coplanar feature points; homography techniques then yield the robot's current and desired pose (translation and rotation angle), and the open-loop error model of the system is obtained from the difference between the current and desired translation and rotation. An optimal controller is then designed for this system together with a suitable control-input transformation, and on this basis an ADP-based visual servo control method is designed to guarantee that the mobile robot completes the trajectory tracking task. To obtain the optimal control input, a critic neural network is used to approximate the value function, learning continually to approach the solution of the Hamilton-Jacobi-Bellman (HJB) equation. Unlike previous work, the system contains time-varying terms, so the HJB equation is also time-varying, and the critic network approximating the value function therefore needs a time-varying weight structure. Finally, it is proved that under the designed control method the closed-loop system is uniformly ultimately bounded.
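
For reference, the relation that the critic network approximates has, in a generic time-invariant form (the paper's formulation adds time-varying terms), the shape of the infinite-horizon value function and its HJB equation, with tracking error e, control u and stage cost e^T Q e + u^T R u:

    V(e) = \min_{u} \int_{t}^{\infty} \left( e^{\top} Q e + u^{\top} R u \right) \mathrm{d}\tau ,
    \qquad
    0 = \min_{u} \left[ e^{\top} Q e + u^{\top} R u + \nabla V^{\top} \left( f(e) + g(e)\,u \right) \right] ,

with the critic approximating V(e) \approx \hat{W}^{\top} \phi(e); in the paper's setting the weights \hat{W}(t) must themselves be time-varying.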

20.
杨芳  王朝立 《信息与控制》2012,41(1):57-62,68
The trajectory tracking control problem of a class of uncertain nonholonomic mobile robots is studied based on visual feedback and the standard chained form. First, using the pinhole camera model, a new visual-servoing-based kinematic tracking error model for the mobile robot is proposed. Based on this model, and in the presence of uncertain visual parameters, a new adaptive dynamic feedback tracking controller is designed with the back-stepping technique, achieving globally asymptotic trajectory tracking; the stability of the closed-loop system and the boundedness of the estimated parameters are rigorously proved by the Lyapunov method. Simulation results demonstrate the effectiveness of the proposed controller.
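
For context, the standard chained form referred to above is obtained from the unicycle kinematics by a change of coordinates; one common choice is shown below (the paper's visual-servoing error model additionally carries the uncertain camera parameters):

    \dot{x} = v \cos\theta , \qquad \dot{y} = v \sin\theta , \qquad \dot{\theta} = \omega ,

    x_1 = x , \quad x_2 = \tan\theta , \quad x_3 = y , \qquad u_1 = v \cos\theta , \quad u_2 = \omega / \cos^{2}\theta ,

    \dot{x}_1 = u_1 , \qquad \dot{x}_2 = u_2 , \qquad \dot{x}_3 = x_2\, u_1 .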
