Similar articles
 18 similar articles found (search time: 140 ms)
1.
A New Localization Method for a Mobile Robot on an Inclined Plane Based on Multi-sensor Information Fusion   Total citations: 6 (self: 1, others: 5)
仲欣  吕恬生 《机器人》1999,21(5):321-327
The significance of the research is first analyzed. Then, for a wheeled mobile robot working on an inclined plane, a new localization method is proposed that fuses the information from a tilt sensor and wheel encoders via Kalman filtering. Finally, localization experiments on an inclined plane are carried out, and the results confirm the effectiveness of the proposed method.
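The abstract does not give the paper's exact state model, but the core idea (predict from the wheel encoder, correct with the tilt-sensor-derived measurement) can be sketched as a minimal scalar Kalman filter; all noise values and readings below are illustrative assumptions, not from the paper.

```python
# Minimal scalar Kalman filter: predict position from encoder dead reckoning,
# correct with a measurement derived from the tilt sensor.
# Q and R (process/measurement noise) are made-up illustrative values.

def kalman_step(x, P, u, z, Q=0.05, R=0.2):
    """One predict/update cycle.
    x, P : previous state estimate and its variance
    u    : encoder-derived displacement since the last step
    z    : position measurement derived from the tilt sensor
    """
    # Predict: dead reckoning from the wheel encoder
    x_pred = x + u
    P_pred = P + Q
    # Update: correct with the tilt-sensor-derived measurement
    K = P_pred / (P_pred + R)          # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0
for u, z in [(0.10, 0.11), (0.10, 0.19), (0.10, 0.32)]:
    x, P = kalman_step(x, P, u, z)
```

Note how the variance P shrinks with each update: the fused estimate is more certain than either sensor alone, which is the point of the fusion.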

2.
A Monocular Vision SLAM Algorithm Based on Particle Filtering   Total citations: 3 (self: 0, others: 3)
陈伟  吴涛  李政  贺汉根 《机器人》2008,30(3):1-248
For the localization and mapping problem of a miniature robot carrying a monocular camera and wheel encoders, a particle-filter-based SLAM (simultaneous localization and mapping) algorithm is proposed. Image feature points are extracted from the camera and matched across the image sequence, and the coordinates of the corresponding environmental landmarks are computed from the camera poses at the respective times; a rough estimate of the robot pose is obtained from the encoder motion model. As the robot moves, the landmark observations and the encoder information are fused by particle filtering, which improves the localization accuracy and also yields more accurate landmark coordinates. Simulation results show that the algorithm is effective and reliable.
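The fusion loop described above (encoder motion model to propagate, landmark observation to reweight) can be sketched with a toy one-dimensional particle filter; the landmark position, noise levels, and resampling scheme are illustrative assumptions, not the paper's implementation.

```python
import math
import random

# Toy 1-D particle filter: particles are propagated with a noisy encoder
# motion model and reweighted by a range observation to a known landmark.
random.seed(0)
N = 200
landmark = 5.0                  # assumed landmark position
particles = [0.0] * N
weights = [1.0 / N] * N

def step(particles, weights, u, z, motion_noise=0.1, obs_noise=0.3):
    # Motion update from the encoder reading u
    particles = [p + u + random.gauss(0, motion_noise) for p in particles]
    # Reweight by the likelihood of the observed range z to the landmark
    weights = [math.exp(-((landmark - p) - z) ** 2 / (2 * obs_noise ** 2))
               for p in particles]
    s = sum(weights)
    weights = [w / s for w in weights]
    # Resample (simple multinomial resampling for brevity)
    particles = random.choices(particles, weights=weights, k=len(particles))
    weights = [1.0 / len(particles)] * len(particles)
    return particles, weights

true_pos = 0.0
for _ in range(5):
    true_pos += 1.0             # robot advances 1 m per step
    particles, weights = step(particles, weights, 1.0, landmark - true_pos)

estimate = sum(particles) / len(particles)
```

Each observation pulls the particle cloud back toward the true pose, so the estimate drifts far less than pure encoder dead reckoning would.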

3.
Optical flow computation is an important research topic in mobile robotics, but existing methods do not take full advantage of the characteristics of mobile robots. Assuming the mobile robot is equipped with a binocular stereo vision system and wheel encoders, this paper proposes an optical flow computation method based on stereo matching. The basic idea is as follows: first, the Förstner operator extracts image feature points that are easy to track; then, stereo vision is used to compute the 3D coordinates of these feature points; on this basis, the encoder information is further combined to predict and track the feature points across consecutive frames, finally yielding accurate optical flow. Experiments show that the method is faster and more accurate than traditional approaches.

4.
Outdoor Scene Understanding for a Mobile Robot Based on Multi-sensor Information Fusion   Total citations: 1 (self: 0, others: 1)
闫飞  庄严  王伟 《控制理论与应用》2011,28(8):1093-1098
This paper studies multi-sensor information fusion for mobile robots and proposes a real-time outdoor scene understanding method that fuses laser range data with visual information. An elevation map describing the terrain is built from 3D laser range data, while a conditional random field model extracts land-cover features from the visual information; using the elevation-map grid cells as the carrier, the laser and visual information are effectively fused through projective transformation and statistical aggregation. On this basis, the traversability of the fused environment model is evaluated at both the terrain and land-cover levels, enabling real-time outdoor scene understanding for an autonomous mobile robot. Experimental results and data analysis verify the effectiveness and practicality of the proposed method.

5.
A multi-head, k-partition coding structure and a new incremental sequential-code encoder architecture are proposed, which can effectively increase the code-disc capacity, improve the resolution, reduce the number of code-track marks, and allow a smaller code disc.

6.
曹会彬  李斌  刘金国 《机器人》2007,29(5):0-484
GPS, an electronic compass, an inclinometer, and encoder sensors are applied to the autonomous motion control of a shape-shifting (reconfigurable) robot. Considering the robot's structural characteristics, a multi-sensor information-fusion method for autonomous control in field environments is proposed. The method realizes autonomous reconfiguration, obstacle avoidance, and navigation/localization in unstructured environments. Experiments verify its effectiveness.

7.
A localization system is a prerequisite for reliable obstacle avoidance and navigation: only accurate, stable localization makes both possible, so it is an indispensable component of an outdoor mobile robot. This paper introduces the integrated localization system of the outdoor mobile robot TYIRV-I, which combines GPS, a magnetic compass, and optical encoders. GPS provides fairly accurate position information but can yield invalid data due to occlusion and other causes; the magnetic compass and optical encoders obtain position by dead reckoning, which works well over short distances but suffers from severe accumulated error. Fusing the two yields a combined GPS/compass/encoder localization scheme in which the methods compensate for each other's weaknesses, with good results. Practice shows that the localization system is accurate and stable enough to meet the robot's localization and navigation needs.
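The complementary scheme described above can be sketched very simply: dead reckoning from the compass and encoders carries the pose between GPS fixes, and a valid fix resets the accumulated drift. The trivial "reset" fusion below is an illustrative assumption; a real system would weight the two sources by their covariances.

```python
import math

# Dead reckoning from compass heading + encoder distance, reset by GPS fixes.
# Headings, distances, and the fix coordinates are made-up sample data.

def dead_reckon(pose, heading_deg, distance):
    x, y = pose
    h = math.radians(heading_deg)
    return (x + distance * math.cos(h), y + distance * math.sin(h))

def fuse(pose_dr, gps_fix):
    # Hypothetical minimal fusion: a valid GPS fix overrides dead reckoning;
    # None models an invalid (occluded) fix.
    return gps_fix if gps_fix is not None else pose_dr

pose = (0.0, 0.0)
log = [(90.0, 1.0, None),        # GPS blocked: dead reckoning only
       (90.0, 1.0, None),
       (90.0, 1.0, (0.1, 3.0))]  # valid fix arrives and corrects the drift
for heading, dist, fix in log:
    pose = dead_reckon(pose, heading, dist)
    pose = fuse(pose, fix)
```

Between fixes the pose drifts with the dead-reckoning error; each valid fix bounds that accumulated error, which is exactly the "compensate for each other's weaknesses" behavior the abstract describes.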

8.
A Road Segmentation Method Based on Fused Information   Total citations: 2 (self: 0, others: 2)
A Road Segmentation Method Based on Fused Information. Wu Yongge, Yang Jingyu (Department of Computer Science, Nanjing University of Science and Technology, Nanjing 210014). Keywords: evidence theory; information fusion; attribute pyramid; road segmentation. 1 Introduction. Road segmentation is the basis for road following in an outdoor robot vision system. The road segmentation technique based on fused information proposed in this paper uses the Dempster-Sha…
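The abstract is truncated, but its keyword "evidence theory" points to Dempster's rule of combination for fusing the road cues. A minimal sketch of that rule over a two-element frame of discernment follows; the masses (road vs. non-road evidence from two hypothetical cues) are illustrative values, not from the paper.

```python
# Dempster's rule of combination for two basic probability assignments,
# represented as dicts mapping frozensets (focal elements) to mass.

def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb          # mass assigned to disjoint sets
    k = 1.0 - conflict                       # normalize out the conflict
    return {s: w / k for s, w in combined.items()}

ROAD, OFF = frozenset({"road"}), frozenset({"off"})
THETA = ROAD | OFF                           # total ignorance
color = {ROAD: 0.6, OFF: 0.1, THETA: 0.3}    # hypothetical color cue
texture = {ROAD: 0.5, OFF: 0.2, THETA: 0.3}  # hypothetical texture cue
fused = dempster_combine(color, texture)
```

Two weak, agreeing cues reinforce each other: the fused belief in "road" exceeds either input mass, while conflicting mass is discarded by the normalization.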

9.
《传感器世界》2004,10(12):39-39
The work mainly studies intelligent multi-sensor information-fusion algorithms and the temporal registration and spatial alignment algorithms needed to fuse heterogeneous sensors, and proposes a conflicting-evidence combination algorithm based on a balanced belief-assignment criterion. 1. Building on extensions of classical rough set theory, a new feature-knowledge extraction method based on a randomly weighted rough set model is proposed, together with computational methods for target-recognition fusion using array neural networks and fuzzy neural networks. 2. Knowledge-discovery and data-mining techniques are integrated, and an architecture for dynamic target-recognition fusion is given. 3. The combination of conflicting information in integrated target recognition is addressed.

10.
The paper describes the features of a position control system built around the 16-bit 80C196KC microcontroller, and presents methods for using GAL devices to process the relevant 80C196KC signals, allocate addresses efficiently, and implement frequency multiplication and phase discrimination of the encoder pulse signals.

11.
Humanoid robots have complex kinematic chains whose modeling is error prone. If the robot model is not well calibrated, its hand pose cannot be determined precisely from the encoder readings, and this affects reaching and grasping accuracy. In our work, we propose a novel method to simultaneously i) estimate the pose of the robot hand, and ii) calibrate the robot kinematic model. This is achieved by combining stereo vision, proprioception, and a 3D computer graphics model of the robot. Notably, the use of GPU programming makes it possible to perform the estimation and calibration in real time during the execution of arm reaching movements. Proprioceptive information is exploited to generate hypotheses about the visual appearance of the hand in the camera images, using the 3D computer graphics model of the robot that includes both kinematic and texture information. These hypotheses are compared with the actual visual input using particle filtering, to obtain both i) the best estimate of the hand pose and ii) a set of joint offsets to calibrate the kinematics of the robot model. We evaluate two different approaches to estimate the 6D pose of the hand from vision (silhouette segmentation and edges extraction) and show experimentally that the pose estimation error is considerably reduced with respect to the nominal robot model. Moreover, the GPU implementation runs about 3 times faster than the CPU one, allowing real-time operation.

12.
Real-time Road Scene Understanding Based on Uncertain Knowledge   Total citations: 8 (self: 0, others: 8)
Because the working environment of an outdoor robot is extremely complex, its visual navigation must be sufficiently intelligent and robust. To this end, a real-time road understanding algorithm based on uncertain knowledge is proposed. The algorithm fuses multiple sources of information and knowledge through reasoning under uncertainty to meet the robustness requirements of complex road environments, and gives good results even under strong shadows, water marks, and other disturbances. Extracting image edge information yields precise road boundaries, meeting the accuracy requirements of visual navigation; the algorithm was also designed with real-time constraints in mind, so it runs in real time. It has been tested on an actual robot with very good results.

13.
For a two-wheeled self-balancing robot, odometric position estimation fails under abnormal events encountered during operation such as wheel slip, obstacle crossing, and collisions. An "Accodometry" method is proposed that estimates position by fusing encoder and accelerometer data, eliminating the effect of non-systematic odometry errors on position estimation, reducing the adverse effect of the accelerometer's inherent drift, and improving the localization accuracy of the two-wheeled self-balancing robot. Experiments verify the effectiveness of the Accodometry method; the results show the position error is reduced to a quarter of its original value.
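The abstract does not detail the fusion rule, but the underlying idea (cross-check the encoder against an accelerometer-integrated velocity, and distrust the encoder when they disagree, as during slip) can be sketched as follows; the threshold, time step, and sample data are illustrative assumptions.

```python
# Illustrative encoder/accelerometer cross-check in the spirit of Accodometry:
# when the wheel encoder disagrees with the accelerometer-integrated velocity
# (e.g. wheel slip), fall back to the accelerometer estimate.

DT = 0.1  # sample period in seconds (assumed)

def fuse_velocity(v_prev, v_encoder, accel, slip_threshold=0.5):
    v_accel = v_prev + accel * DT          # velocity integrated from accel
    if abs(v_encoder - v_accel) > slip_threshold:
        return v_accel, True               # slip detected: use accelerometer
    return v_encoder, False                # normal: encoder is more accurate

v, x, slips = 0.0, 0.0, 0
# (encoder velocity, acceleration) samples; the 3rd sample simulates slip:
# the encoder reports a spike that the accelerometer does not confirm.
for v_enc, a in [(1.0, 10.0), (1.0, 0.0), (3.0, 0.0), (1.0, 0.0)]:
    v, slipped = fuse_velocity(v, v_enc, a)
    slips += slipped
    x += v * DT
```

Using the encoder in normal operation keeps the accelerometer's drift out of the estimate, while the accelerometer catches the non-systematic encoder errors the abstract mentions.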

14.
To address the inability of semantic segmentation to handle undefined categories when applied to downstream real-world tasks, the referring-expression segmentation task was introduced, which locates the target in an image according to a natural-language description. Most existing methods use a cross-modal decoder to fuse features extracted independently by a visual encoder and a language encoder, but this cannot exploit the image's edge features effectively and is complex to train. CLIP (contrastive language-image pre-training) is a powerful pre-trained vision-language cross-modal model that extracts image and text features effectively, so a method is proposed that fuses CLIP-encoded multimodal features in the frequency domain. First, an unsupervised model coarsely segments the image, and nouns are extracted from the natural-language text for subsequent use. CLIP's image and text encoders then encode the image and text, respectively. A wavelet transform decomposes the image and text features, making full use of the image's edge features and the positional information within the image; the image and text features are each fused in the frequency domain and the fused features are then inverse-transformed. Finally, the text features are matched against the image features pixel by pixel to produce the segmentation result, which is evaluated on commonly used datasets. Experimental results show that the network performs well in the training-free, zero-shot setting and has good robustness and generalization ability.

15.
Most localization algorithms are either range-based or vision-based, but the use of only one type of sensor cannot often ensure successful localization. This paper proposes a particle filter-based localization method that combines the range information obtained from a low-cost IR scanner with the SIFT-based visual information obtained from a monocular camera to robustly estimate the robot pose. The rough estimation of the robot pose by the range sensor can be compensated by the visual information given by the camera and the slow visual object recognition can be overcome by the frequent updates of the range information. Although the bandwidths of the two sensors are different, they can be synchronized by using the encoder information of the mobile robot. Therefore, all data from both sensors are used to estimate the robot pose without time delay and the samples used for estimating the robot pose converge faster than those from either range-based or vision-based localization. This paper also suggests a method for evaluating the state of localization based on the normalized probability of a vision sensor model. Various experiments show that the proposed algorithm can reliably estimate the robot pose in various indoor environments and can recover the robot pose upon incorrect localization. Recommended by Editorial Board member Sooyong Lee under the direction of Editor Hyun Seok Yang. This research was conducted by the Intelligent Robotics Development Program, one of the 21st Century Frontier R&D Programs funded by the Ministry of Knowledge Economy of Korea. Yong-Ju Lee received the B.S. degree in Mechanical Engineering from Korea University in 2004. He is now a Student for Ph.D. of Mechanical Engineering from Korea University. His research interests include mobile robotics. Byung-Doo Yim received the B.S. degree in Control and Instrumentation Engineering from Seoul National University of Technology in 2005. Also, he received the M.S. 
degree in Mechatronics Engineering from Korea University in 2007. His research interests include mobile robotics. Jae-Bok Song received the B.S. and M.S. degrees in Mechanical Engineering from Seoul National University in 1983 and 1985, respectively. Also, he received the Ph.D. degree in Mechanical Engineering from MIT in 1992. He is currently a Professor of Mechanical Engineering, Korea University, where he is also the Director of the Intelligent Robotics Laboratory from 1993. His current research interests lie mainly in mobile robotics, safe robot arms, and design/control of intelligent robotic systems.
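The entry above notes that the two sensors run at different bandwidths but are synchronized using the robot's encoder information. One simple way to realize that, sketched below under assumed timestamps and a 1-D pose, is to interpolate the densely sampled encoder pose at the timestamp of each slower camera frame.

```python
# Interpolate the high-rate encoder pose log at the timestamps of the
# slower vision sensor. Timestamps and poses are illustrative.

def interpolate_pose(encoder_log, t):
    """Linearly interpolate (time, pose) encoder samples at query time t."""
    for (t0, x0), (t1, x1) in zip(encoder_log, encoder_log[1:]):
        if t0 <= t <= t1:
            alpha = (t - t0) / (t1 - t0)
            return x0 + alpha * (x1 - x0)
    raise ValueError("query time outside encoder log")

# Encoder sampled every 50 ms; camera frames arrive between encoder samples.
encoder_log = [(0.00, 0.0), (0.05, 0.1), (0.10, 0.2), (0.15, 0.3)]
camera_times = [0.025, 0.125]
camera_poses = [interpolate_pose(encoder_log, t) for t in camera_times]
```

This lets every camera observation be associated with a pose without delaying the fast range updates, which is the synchronization property the abstract claims.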

16.
For the obstacle-avoidance problem of mobile robots, using the AS-R mobile robot as the research platform, a two-stage fusion method combining a neural network with a fuzzy neural network is proposed. A BP neural network fuses the readings of multiple ultrasonic sensors to reduce sensor uncertainty and improve the accuracy of obstacle recognition; a fuzzy neural network implements the robot's obstacle-avoidance decision control, making it better suited to the system's avoidance requirements. The method gives the mobile robot good flexibility and robustness during obstacle avoidance, and avoidance experiments on the robot verify its effectiveness.

17.
Cooperative Localization and Map Fusion of Multiple Robots from Orthogonal Air-Ground Views   Total citations: 1 (self: 0, others: 1)
To overcome the limited viewpoint of a single robot performing simultaneous localization and mapping in complex scenes, this paper proposes a cooperative localization and map-fusion method for an aerial drone and a ground robot with orthogonal air-ground views. Since the drone's aerial view is orthogonal to the ground robot's view, the core idea of the method is to solve the coordinate transformation between the two. First, a framework for air-ground cooperative localization and mapping is designed, in which the drone's global top-down images and the ground robot's local front-view images together provide rich, comprehensive scene information. On this basis, inertial measurement unit data and image information are fused to correct drift and optimize the trajectories, and a visual marker with known scale mounted on the ground robot is used to obtain the coordinate transformation matrix for fusing the maps. Finally, experiments in several real scenes verify the effectiveness of the method, making it a useful reference in air-ground cooperative multi-robot SLAM (simultaneous localization and mapping).

18.
Open-chain manipulator robots play an important role in industry, since they are utilized in applications requiring precise motion. High-performance motion of a robot system relies mainly on adequate trajectory planning and the controller that coordinates the movement. The controller performance depends on both the employed control law and the sensor feedback. Optical encoders are the most widely used sensors for angular position estimation of each robot joint, since they provide accurate, low-noise angular position measurements. However, they cannot detect the mechanical imperfections and deformations common in open-chain robots. Moreover, velocity and acceleration cannot be extracted from encoder data without adding phase delays. Sensor fusion techniques are a good solution to this problem; however, few works have addressed kinematic estimation of angular position, velocity, and acceleration in serial robots, since the delays induced by the filtering techniques prevent its use as controller feedback. This work proposes a novel sensor-fusion-based feedback system capable of providing complete kinematic information for each joint of a 4-degree-of-freedom serial robot, with the contribution of a proposed methodology based on Kalman filtering for fusing the information from an optical encoder, a gyroscope, and an accelerometer attached to the robot. Calibration and experiments are carried out to validate the proposal. The results are compared with another kinematic estimation technique, showing that this proposal provides more information about the robot's movement without adding state delays, which is important for use as controller feedback.


Copyright © 北京勤云科技发展有限公司  京ICP备09084417号