Similar Articles
20 similar articles found.
1.
An active vision subsystem was built from a SONY EV-D31 camera and a self-developed camera control module, and deployed on the RIRA-II mobile robot to give it automatic moving-target tracking. RIRA-II adopts a behavior-based distributed control architecture consisting of a set of distributed behavior modules and a centralized command arbiter: each behavior module casts votes reactively based on domain knowledge, the arbiter issues the motion command, and the robot executes the corresponding action. Moving-target tracking experiments in a complex environment with obstacles, narrow passages, and simulated walls show that the tracking system runs reliably and with high robustness.

2.
《Advanced Robotics》2013,27(3):273-294
In order to search and rescue victims in rubble effectively, a three-dimensional (3D) map of the rubble is required. As a part of the national project on rescue robot systems, we are investigating a method for constructing a 3D map of rubble by teleoperated mobile robots. In this paper, we developed a laser range finder for 3D map building in rubble. The developed range finder consists of a ring laser beam module and an omnivision camera. The ring laser beam is generated by using a conical mirror and is radiated toward the interior wall of the rubble around the mobile robot on which the laser range finder is mounted. The omnivision camera with a hyperbolic mirror captures the reflected image of the ring laser on the rubble. Based on the triangulation principle, cross-sectional range data are obtained. Continuing this measurement as the mobile robot moves inside the rubble, a 3D map is obtained. We constructed a geometric model of the laser range finder for error analysis and obtained an optimal dimension of the laser range finder. Based on this analysis, we prototyped a range finder. Experimental results show that the actual measurement errors match the theoretical values well. Using the prototyped laser range finder, a 3D map of rubble was built with reasonable accuracy.
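The triangulation principle the range finder relies on can be sketched in its plain two-ray form (this is the generic law-of-sines construction, not the authors' omnidirectional conical-mirror geometry; the function name is illustrative):

```python
import math

def triangulate_range(baseline_m, cam_angle_rad, laser_angle_rad):
    """Range |CP| from camera C to the laser spot P by the law of sines.

    C and the laser emitter L are separated by baseline_m; the two angles
    are measured between the baseline CL and the rays CP and LP."""
    return baseline_m * math.sin(laser_angle_rad) / math.sin(
        cam_angle_rad + laser_angle_rad)
```

For example, with a 1 m baseline and ray angles of 45° at the camera and 90° at the laser, the target lies √2 m from the camera.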

3.
To address the weaknesses of traditional ICP (iterative closest points), namely its tendency to fall into local optima and its large matching error, a new dual-threshold constraint on Euclidean distance and angle is proposed, and on this basis a Kinect-based RGB-D SLAM (simultaneous localization and mapping) system for indoor mobile robots is built. First, the Kinect captures color and depth information of the indoor environment; image features are extracted and matched and, combined with the camera intrinsics and per-pixel depth values, 3D point-cloud correspondences are established. Next, the RANSAC (random sample consensus) algorithm removes outliers to complete a coarse registration of the point clouds, and the improved registration algorithm performs the fine registration. Finally, weights are introduced into keyframe selection, and the g2o (general graph optimization) framework optimizes the robot poses. Experiments confirm the effectiveness and feasibility of the method: it improves the accuracy of the 3D point-cloud map and recovers the robot's trajectory.
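The dual Euclidean-distance and angle constraint on ICP correspondences can be sketched as a pair-rejection step (a minimal illustration assuming per-point surface normals; names and threshold values are illustrative, not the paper's exact formulation):

```python
import numpy as np

def filter_correspondences(src_pts, dst_pts, src_normals, dst_normals,
                           max_dist=0.05, max_angle_deg=30.0):
    """Keep only matched point pairs that pass BOTH the Euclidean-distance
    test and the normal-angle test (the dual-threshold idea)."""
    # Euclidean distance between each matched pair
    dists = np.linalg.norm(src_pts - dst_pts, axis=1)

    # Angle between the surface normals of each pair (normals unit-length)
    cos_ang = np.einsum('ij,ij->i', src_normals, dst_normals)
    angles = np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))

    mask = (dists <= max_dist) & (angles <= max_angle_deg)
    return src_pts[mask], dst_pts[mask], mask
```

Pairs rejected here are excluded from the least-squares alignment step of each ICP iteration, which is what suppresses the local-optimum and large-error behavior of the unconstrained algorithm.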

4.
To address the self-localization problem of mobile robots in indoor environments, an embedded infrared-landmark localization module is proposed. A homography-based initial calibration algorithm and a camera initial-calibration procedure compensate for the localization offset caused by installation errors in practical use. Experimental results show that the module is easy to implement on an embedded system, its positional accuracy reaches the centimeter level, and its angular accuracy is within 6°.

5.
This paper presents a 3D contour reconstruction approach employing a wheeled mobile robot equipped with an active laser-vision system. Observed by an onboard CCD camera, a laser line projector fixed below the camera detects the bottom shape of an object, while an actively controlled upper laser line projector is used for 3D contour reconstruction. The mobile robot is driven around the object by a visual servoing and localization technique while the 3D contour of the object is reconstructed from the 2D image of the projected laser line. Asymptotic convergence of the closed-loop system has been established. The proposed algorithm has also been validated experimentally on a Dr Robot X80sv mobile robot upgraded with the low-cost active laser-vision system, demonstrating effective real-time performance. This laser-vision robotic system can further be applied to obstacle avoidance and guidance control tasks in unknown environments. Copyright © 2011 John Wiley and Sons Asia Pte Ltd and Chinese Automatic Control Society

6.
Moving-target tracking is an important research direction in mobile robotics for unknown environments. This paper presents a design for moving-target tracking by a mobile robot based on active vision and ultrasonic information. An active vision system was built from a SONY EV-D31 color camera, a self-developed camera control module, and an image acquisition and processing unit. The mobile robot adopts a behavior-based distributed control architecture: active vision locks onto the moving target while the ultrasonic system senses the external environment, enabling the robot to track a moving target reliably in unknown, dynamic, unstructured, and complex environments. Experiments show that the robot is highly robust and that the tracking system runs reliably.

7.
This article describes a vision-based auto-recharging system that guides a mobile robot moving toward a docking station. The system contains a docking station and a mobile robot. The docking station contains a docking structure, a control device, a charger, a safety detection device, and a wireless RF interface. The mobile robot contains a power detection module (voltage and current), an auto-switch, a wireless RF interface, a controller, and a camera. The controller of the power detection module is a Holtek chip. The docking structure is designed with one active degree of freedom and two passive degrees of freedom. For image processing, the mobile robot uses a webcam to capture a real-time image. The image signal is transmitted to the controller of the mobile robot via a USB interface. We use an Otsu algorithm to calculate the distance and orientation of the docking station from the mobile robot. In the experiment, the proposed algorithm guided the mobile robot to the docking station.
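The Otsu step can be sketched in its standard between-class-variance form (a generic implementation, not necessarily the article's exact one):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold t maximizing between-class variance for an
    8-bit grayscale image; pixels <= t form class 0, the rest class 1."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)   # sum of all pixel values
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0.0
    for t in range(256):
        w0 += hist[t]                        # class-0 pixel count
        if w0 == 0:
            continue
        w1 = total - w0                      # class-1 pixel count
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0                      # class means
        mu1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

The resulting threshold segments the docking-station marker from the background, after which the blob's image position can be converted into a distance and orientation estimate.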

8.
Digital 3D models of the environment are needed in rescue and inspection robotics, facility management, and architecture. This paper presents an automatic system for gauging and digitalization of 3D indoor environments. It consists of an autonomous mobile robot, a reliable 3D laser range finder, and three elaborated software modules. The first module, a fast variant of the Iterative Closest Points algorithm, registers the 3D scans in a common coordinate system and relocalizes the robot. The second module, a next-best-view planner, computes the next nominal pose based on the acquired 3D data while avoiding complicated obstacles. The third module, a closed-loop and globally stable motor controller, navigates the mobile robot to a nominal pose based on odometry while avoiding collisions with dynamic obstacles. The 3D laser range finder acquires a 3D scan at this pose. The proposed method digitalizes large indoor environments quickly and reliably without any intervention and solves the SLAM problem. The results of two 3D digitalization experiments are presented using a fast octree-based visualization method.

9.
Focusing on the application of computer-vision systems in mobile robots, this paper analyzes and studies camera calibration, image segmentation, pattern recognition, target distance measurement, and the use of binocular vision in mobile-robot navigation. A composite algorithm is proposed that applies different processing methods to different objects of study in a given 3D scene; it recognizes simple geometric objects within the robot's field of view and, using a binocular camera rig, directly measures the depth distance and azimuth of a target object relative to the robot.

10.
An environmental camera is a camera embedded in a working environment to provide vision guidance to a mobile robot. In the setup of such robot systems, the relative position and orientation between the mobile robot and the environmental camera are parameters that must unavoidably be calibrated. Traditionally, because the configuration of the robot system is task-driven, these external parameters of the camera are measured separately and must be measured each time a task is to be performed. In this paper, a method is proposed in which calibration of the environmental camera is performed by the robot system itself on the spot after the system is set up. Specific kinds of motion patterns of the mobile robot, called test motions, have been explored for calibration. The calibration approach executes selected test motions on the mobile robot and then uses the camera to observe the robot. By comparing odometry and sensing data, the external parameters of the camera can be calibrated. Furthermore, an evaluation index (virtual sensing error) has been developed for the selection and optimization of test motions to obtain good calibration performance. All the test motion patterns are computed offline in advance and saved in a database, which greatly shortens the calibration time. Simulations and experiments verified the effectiveness of the proposed method.

11.
For miniaturized mobile robots that aim at exploring unknown environments, non-contact 3D sensing of basic geometrical features of the surroundings is one of the most important capabilities for survival and mission completion. In this paper, a low-cost active 3D triangulation laser scanner for indoor navigation of miniature mobile robots is presented. It is implemented by moving both a camera and a laser diode together on the robot's movable part. The movable part is actuated by a servo motor through a gear train to achieve a ±90° scanning view angle. The software module includes image processing and data post-processing. 3D world coordinates are calculated from 2D image coordinates based on the triangulation principle. With the 3D laser scanning method, navigation algorithms for obstacle avoidance and gateway passing are proposed. Finally, experiments are conducted to validate the performance of the scanner and to test the efficiency of the navigation algorithms.

12.
Adaptive Visual Servo Stabilization Control of Mobile Robots
For a mobile robot system with monocular vision, an adaptive visual-servo stabilization control algorithm is proposed. In the absence of a depth sensor and with unknown camera extrinsic parameters, the algorithm uses visual feedback to achieve asymptotic stabilization of the mobile robot's position and orientation. Because the translational extrinsic parameters between the robot frame and the camera frame (the hand-eye parameters) are unknown, the kinematic model of the mobile robot is established in the camera frame by exploiting how the pose of static feature points varies. Homography decomposition then yields a measurable angular error signal which, combined with the 2D image error signal and a set of coordinate transformations, gives the open-loop error equation of the system. On this basis, an adaptive stabilization controller is designed using Lyapunov stability theory. Theoretical analysis, simulations, and experiments all show that the proposed monocular visual controller drives the mobile robot asymptotically to the desired pose even when the camera extrinsics are unknown.

13.
This article is concerned with calibrating an anthropomorphic two-armed robot equipped with a stereo-camera vision system, that is, estimating the different geometric relationships involved in the model of the robot. The calibration procedure presented is fully vision-based: the relationships between each camera and the neck and between each arm and the neck are determined using visual measurements. The online calculation of all the relationships involved in the model of the robot is obtained with satisfactory precision and, above all, without expensive calibration mechanisms. For this purpose, two new main algorithms have been developed. The first implements a non-linear optimization method using quaternions for camera calibration from 2D-to-3D point or line correspondences. The second implements a real-time camera pose estimation method based on the iterative use of a paraperspective camera model.

14.
钟宇  张静  张华  肖贤鹏 《计算机工程》2022,48(3):100-106
Intelligent collaborative robots rely on a vision system to perceive and localize targets in the dynamic workspace of an unknown environment, so that the manipulator can autonomously grasp and retrieve target objects. An RGB-D camera captures color and depth images of the scene and provides the 3D point cloud of any target in view, helping the robot perceive its surroundings. To obtain the transformation between the coordinate frames of the grasping robot and the RGB-D camera, a robot hand-eye calibration method based on the yolov3 object-detection network is proposed. A 3D-printed ball is held at the end of the manipulator as the calibration target; an improved yolov3 network locates the ball's center in real time, from which the 3D position of the end-effector center in the camera frame is computed, and singular value decomposition is used to find the least-squares solution of the robot-to-camera transformation matrix. Experiments on a 6-DOF UR5 manipulator with an Intel RealSense D415 depth camera show that the method requires no auxiliary equipment and that the position error of transformed spatial points is within 2 mm, which satisfies the grasping requirements of typical visual-servoing intelligent robots.
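The SVD least-squares step is the classical Kabsch/Umeyama solution for the rigid transform between two corresponding point sets; a minimal sketch (function name and array shapes are assumptions):

```python
import numpy as np

def rigid_transform_svd(P, Q):
    """Least-squares R, t with Q ≈ R @ p + t for corresponding rows of
    P and Q (shape (N, 3)), e.g. ball-center positions expressed in the
    robot frame (P) and the camera frame (Q)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (det = -1)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```

Stacking several ball-center positions measured in both frames and feeding them to this routine yields the robot-to-camera rotation and translation in one shot.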

15.
The structural features inherent in the visual motion field of a mobile robot contain useful clues about its navigation. The combination of these visual clues and additional inertial sensor information may allow reliable detection of the navigation direction for a mobile robot and also of any independent motion present in the 3D scene. The motion field, which is the 2D projection of the 3D scene variations induced by the camera-robot system, is estimated through optical flow calculations. The singular points of the global optical flow field of omnidirectional image sequences indicate the translational direction of the robot as well as the deviation from its planned path. It is also possible to detect motion patterns of near obstacles or independently moving objects in the scene. In this paper, we introduce the analysis of the intrinsic features of omnidirectional motion fields in combination with gyroscopic information, and give some examples of this preliminary analysis. © 2004 Wiley Periodicals, Inc.

16.
In the current article, we address the problem of constructing radiofrequency identification (RFID)-augmented environments for mobile robots and the issues related to creating user interfaces for efficient remote navigation with a mobile robot in such environments. First, we describe an RFID-based positioning and obstacle identification solution for remotely controlled mobile robots in indoor environments. In the robot system, an architecture specifically developed by the authors for remotely controlled robotic systems was tested in practice. Second, using the developed system, three techniques for displaying information about the position and movements of a remote robot to the user were compared. The experimental visualization techniques displayed the position of the robot on an indoor floor plan augmented with (1) a video view from a camera attached to the robot, (2) display of nearby obstacles (identified using RFID technology) on the floor plan, and (3) both features. In the experiment, test subjects controlled the mobile robot through predetermined routes as quickly as possible avoiding collisions. The results suggest that the developed RFID-based environment and the remote control system can be used for efficient control of mobile robots. The results from the comparison of the visualization techniques showed that the technique without a camera view (2) was the fastest, and the number of steering motions made was smallest using this technique, but it also had the highest need for physical human interventions. The technique with both additional features (3) was subjectively preferred by the users. The similarities and differences between the current results and those found in the literature are discussed.

17.
We present path-planning techniques for a multiple mobile robot system. Each mobile robot has the shape of a cylinder; its diameter, height, and weight are 8 cm, 15 cm, and 1.5 kg, respectively. The controller of the mobile robot is an MCS-51 chip, which acquires detection signals from sensors through I/O pins. It receives commands from the supervising computer via a wireless RF interface and transmits the status of the robots back over the same interface. The mobile robot system is module-based, and contains a controller module (including two DC motors and drivers), an obstacle detection module, a voice module, a wireless RF module, an encoder module, and a compass detection module. We propose an evaluation method to arrange the positions of the multiple mobile robot system, and develop a path-planning interface on the supervising computer. In the experiments, the mobile robots were able to receive commands from the supervising computer and to move to their next positions according to the proposed method.

18.
Monocular Vision for Mobile Robot Localization and Autonomous Navigation
This paper presents a new real-time localization system for a mobile robot. We show that autonomous navigation is possible outdoors with a single camera and natural landmarks. To do that, we use a three-step approach. In a learning step, the robot is manually guided on a path and a video sequence is recorded with a front-looking camera. A structure-from-motion algorithm is then used to build a 3D map from this learning sequence. Finally, in the navigation step, the robot uses this map to compute its localization in real time and follows the learned path, or a slightly different path if desired. The vision algorithms used for map building and localization are first detailed. A large part of the paper is then dedicated to the experimental evaluation of the accuracy and robustness of our algorithms, based on experimental data collected over two years in various environments.

19.
Objective: SLAM (simultaneous localization and mapping) is a key technology for mobile robots exploring, perceiving, and navigating unknown environments. Laser SLAM measures accurately and its maps suit navigation and path planning, but they lack semantic information; visual SLAM images provide rich semantics and more distinctive features, but the resulting maps cannot be used directly for path planning and navigation. To let a mobile robot build a semantic map and plan paths on it, this paper proposes a semantic occupancy-grid mapping method. Method: A laser-camera system that synchronously acquires laser and semantic data is built. The collected laser segmentation data are matched against the object bounding boxes produced by an object-detection algorithm, yielding the semantic laser segments of each object. Successive frames of semantic laser segments are fused synchronously into an occupancy grid map. Grid cells of different semantic classes are clustered to obtain a semantic grid map annotated with object classes and contours. In addition, navigation tasks are issued on the semantic grid map, and a path-search algorithm, further improved here, performs path planning. Result: Semantic grid mapping experiments were run in a laboratory corridor and an office and compared with the raw occupancy grid map. Path planning was performed on the semantic grid map, and a semantic-weighting algorithm was used to compare paths around easily movable objects. Conclusion: Experiments in multiple environments show that the method produces semantic grid maps highly consistent with the real environment and annotated with object classes and contours; the hardware is simple, low cost, and performs well, making it suitable for navigation and path planning of intelligent robots.
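The semantically weighted path search can be illustrated with plain Dijkstra over a weighted occupancy grid (a sketch under assumed data structures; the paper's improved search and weighting scheme are not reproduced here):

```python
import heapq

def plan_path(grid, weights, start, goal):
    """Dijkstra on an occupancy grid. grid[r][c] is True when blocked;
    weights[r][c] is the cost of entering cell (r, c), e.g. raised for
    cells whose semantic label marks an easily movable object."""
    rows, cols = len(grid), len(grid[0])
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, float('inf')):
            continue                      # stale queue entry
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                nd = d + weights[nr][nc]
                if nd < dist.get((nr, nc), float('inf')):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    if goal not in dist:
        return None                       # unreachable
    path, cur = [goal], goal
    while cur != start:
        cur = prev[cur]
        path.append(cur)
    return path[::-1]
```

Raising `weights[r][c]` for cells labeled as easily movable objects makes the planner detour around them, which is the effect the semantic-weighting comparison in the experiments measures.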

20.