Similar Documents
 20 similar documents retrieved.
1.
Autonomous environment mapping is an essential part of efficiently carrying out complex missions in unknown indoor environments. In this paper, a low-cost mapping system composed of a web camera with structured light and sonar sensors is presented. We propose a novel exploration strategy based on the frontier concept using this low-cost mapping system. Exploiting the complementary characteristics of the structured-light web camera and the sonar sensors, the two sensors are fused so that a mobile robot can explore an unknown environment while mapping it efficiently. The sonar sensors roughly detect obstacles, and the structured-light vision system increases the occupancy probability of the obstacles or walls the sonar sensors detect. To overcome the inaccuracy of frontier-based exploration, we propose an exploration strategy that both delineates obstacles and reveals new regions using the mapping system. Since the processing cost of the vision module is high, we solve a vision-sensing placement problem, analyzing the geometry of the proposed sonar and vision probability models to minimize the number of vision measurements. Through simulations and indoor experiments, the efficiency of the proposed exploration strategy is demonstrated and compared with other exploration strategies.
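The frontier concept used above can be illustrated with a minimal sketch: on an occupancy grid, a frontier cell is a free cell adjacent to unexplored space. The three-state grid encoding and the 4-neighbourhood below are illustrative assumptions, not the paper's exact representation.

```python
# Frontier detection on a small occupancy grid.
# Cell states (an assumed encoding): 0 = free, 1 = occupied, -1 = unknown.
FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def frontier_cells(grid):
    """Return (row, col) of free cells adjacent to at least one unknown cell."""
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != FREE:
                continue
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == UNKNOWN:
                    frontiers.append((r, c))
                    break
    return frontiers

grid = [
    [0, 0, -1],
    [0, 1, -1],
    [0, 0,  0],
]
print(frontier_cells(grid))  # free cells bordering unknown space
```

An exploration strategy then picks one of these cells (e.g., the nearest or most informative) as the next sensing position, which is where the paper's sonar/vision placement analysis comes in.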

2.
Currently, most stereo vision systems are built on PC-based or multi-CPU structures with two CCD cameras, which makes them difficult to apply on mobile platforms where stand-alone operation is required. With advances in electronics, the complementary metal-oxide-semiconductor (CMOS) image sensor is now widely used in commercial electronic products, and digital signal processor (DSP) speed and capacity are sufficient for stereo vision. Here, a new stereo vision platform is designed around a TMS320C6416 DSK board integrated with two CMOS color image sensors for detecting and locating moving objects. The data communication interface, system monitoring timing flow, and image pre-processing software are also developed. The system can detect and track any moving object, without the object color and shape limitations of previous studies. Experimental results are used to evaluate the system's dynamic performance. This low-cost stereo vision system can be deployed on mobile platforms for stand-alone applications, e.g., mobile robots.

3.
The control of a robot system using camera information is a challenging task under unpredictable conditions such as feature-point mismatch and changing scene illumination. This paper presents a solution for the visual control of a nonholonomic mobile robot in demanding real-world circumstances based on machine learning techniques. A novel intelligent approach for mobile robots using neural networks (NNs), a learning from demonstration (LfD) framework, and the epipolar geometry between two views is proposed and evaluated in a series of experiments. A direct mapping from the image space to the actuator command is carried out in two phases. In an offline phase, the NN-LfD approach is employed to relate the feature position in the image plane to the angular velocity for lateral motion correction. An online phase refers to a switching vision-based scheme between the epipole-based linear velocity controller and the NN-LfD-based angular velocity controller, with the selection depending on the feature's distance from a pre-defined interest area in the image. In total, 18 architectures and 6 learning algorithms are tested to find the optimal solution for robot control. The best training outcome for each learning algorithm is then employed in real time to discover the optimal NN configuration for robot orientation correction. Experiments conducted on a nonholonomic mobile robot in a structured indoor environment confirm excellent performance with respect to system robustness and positioning accuracy at the desired location.

4.
To address the poor exploration ability and the sparse rewards over the environment state space that conventional deep reinforcement learning suffers from in mobile-robot path planning in unknown indoor environments, an improved deep reinforcement learning algorithm based on depth-image information is proposed. Depth images acquired directly by a Kinect vision sensor, together with the target position, serve as the network input, and the robot's linear and angular velocities are output as the next action command. An improved reward function is designed that raises the reward values and optimizes the state space, alleviating the reward-sparsity problem to some extent. Simulation results show that the improved algorithm strengthens the robot's exploration ability, optimizes the path trajectory, and lets the robot avoid obstacles effectively and plan shorter paths: the average path length is 21.4% shorter than that of the DQN algorithm in a simple environment and 11.3% shorter in a complex environment.
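The reward-shaping idea described above can be sketched as a dense distance-progress term added to the usual sparse goal/collision terms. The weights, goal radius, and penalty values below are illustrative assumptions, not the paper's actual reward function.

```python
import math

# Assumed reward constants (illustrative, not the paper's values).
GOAL_REWARD = 10.0        # sparse reward for reaching the target
COLLISION_PENALTY = -10.0 # sparse penalty for hitting an obstacle
PROGRESS_WEIGHT = 1.0     # dense term that eases reward sparsity

def shaped_reward(prev_pos, pos, goal, collided, goal_radius=0.2):
    """Reward for one step: sparse terminal terms plus dense progress."""
    d_prev = math.dist(prev_pos, goal)
    d_now = math.dist(pos, goal)
    if collided:
        return COLLISION_PENALTY
    if d_now < goal_radius:
        return GOAL_REWARD
    # Positive when the robot moved closer to the goal, negative otherwise.
    return PROGRESS_WEIGHT * (d_prev - d_now)

print(shaped_reward((0.0, 0.0), (0.5, 0.0), (2.0, 0.0), collided=False))  # 0.5
```

The dense progress term gives the agent a learning signal on every step, which is one common way to mitigate the sparse-reward problem the abstract mentions.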

5.
Since precise self-position estimation is required for the autonomous flight of aerial robots, there have been several studies on self-position estimation for indoor aerial robots. In this study, we tackle the self-position estimation problem by mounting a small downward-facing camera on the chassis of an aerial robot and obtaining the robot position by sensing features on the indoor floor. We use the vertex points (tile corners) where four tiles of a typical tiled floor meet as an existing floor feature. A small lightweight microcontroller is mounted on the robot to process the images from the on-board camera, and a lightweight image-processing algorithm is developed so that the microcontroller alone can perform real-time, on-board tile-corner detection. The same microcontroller also computes the control values for flight commands, which are generated from the detected tile-corner information. All of these devices are mounted on an actual machine, and the effectiveness of the system is investigated.

6.
In moving-object detection against a dynamic background, the target and the background move independently, so the background change induced by the mobile robot's own motion must be accounted for when extracting foreground moving objects. The affine transformation is widely used to estimate the background transformation between images. However, when an omnidirectional vision sensor (ODVS) is used on a mobile robot, the distortion of the omnidirectional image makes the background motion inconsistent across the image, so a single affine transformation cannot describe the background motion over the whole omnidirectional image. The image is therefore divided into grid windows, an affine transformation is estimated for each window separately, and the moving-object regions are obtained from the background-compensated frame difference. Finally, based on the imaging characteristics of the ODVS, the distance and bearing of moving obstacles are resolved visually. Experimental results show that the proposed method accurately detects moving obstacles within the robot's 360° surroundings and localizes them precisely, effectively improving the mobile robot's real-time obstacle-avoidance capability.

7.
《Advanced Robotics》2013,27(6):737-762
The latest advances in hardware technology and the state of the art in mobile robot and artificial intelligence research can be employed to develop autonomous and distributed monitoring systems. A mobile service robot requires the perception of its present position to co-exist with humans and support them effectively in populated environments. To realize this, a robot needs to keep track of relevant changes in the environment. This paper proposes localizing a mobile robot using images recognized by distributed intelligent networked devices in intelligent space (ISpace) in order to achieve these goals. The scheme combines data from the observed position, using dead-reckoning sensors, and the estimated position, using images of moving objects such as a walking human captured by a camera system, to determine the location of a mobile robot. The moving object is assumed to be a point object and is projected onto an image plane to form a geometrical constraint equation that provides position data of the object based on the kinematics of the ISpace. Using the a priori known path of a moving object and a perspective camera model, the geometric constraint equations that relate the image-frame coordinates of the moving object to the robot's estimated position are derived. The proposed method utilizes the error between the observed and estimated image coordinates to localize the mobile robot, and a Kalman filtering scheme is used to estimate the mobile robot's location. The proposed approach is applied to a mobile robot in ISpace to show the reduction of uncertainty in determining its location, and its performance is verified by computer simulation and experiment.

8.
To meet the mobility and localization needs of robots in indoor environments, an autonomous exploration method for mobile robots based on visual FastSLAM is proposed. The method jointly considers information gain and path length, selects exploration positions and plans paths based on frontiers, maximizes the robot's autonomous exploration efficiency, and ensures the exploration task is fully completed. Building on FastSLAM 2.0, vision is used as the observation modality, panoramic scanning and landmark tracking are effectively fused to improve observation efficiency, and landmark visual features are introduced to strengthen data-association estimation, completing localization and map building. Experiments show that the method correctly selects the optimal exploration position and plans reasonable paths to complete the exploration task, with high localization and mapping accuracy and good robustness.

9.
Inexpensive ultrasonic sensors, incremental encoders, and grid-based probabilistic modeling are used for improved robot navigation in indoor environments. For model building, range data from the ultrasonic sensors are constantly sampled, and a map is built and updated continuously while the robot travels through the workspace. The local world model is based on the concept of an occupancy grid. The world model extracted from the range data is based on the geometric primitive of line segments; for the extraction of these features, methods such as the Hough transform and clustering are utilized. The perceived local world model, along with dead-reckoning and ultrasonic sensor data, is combined using an extended Kalman filter in a localization scheme to estimate the current position and orientation of the mobile robot, which is subsequently fed to the map-building algorithm. Implementation issues and experimental results with the Nomad 150 mobile robot in a real-world indoor environment (office space) are presented.
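The occupancy-grid update underlying this kind of map building is commonly implemented in log-odds form: each cell accumulates the log-odds contribution of every sonar hit or miss. The sensor-model probabilities below are illustrative assumptions, not the paper's calibrated values.

```python
import math

# Assumed inverse sensor model: P(occupied | range reading says hit/miss).
P_HIT, P_MISS = 0.7, 0.4

def logit(p):
    """Log-odds of a probability."""
    return math.log(p / (1.0 - p))

def update_cell(log_odds, hit):
    """Add the measurement's log-odds to the cell's accumulated log-odds."""
    return log_odds + (logit(P_HIT) if hit else logit(P_MISS))

def probability(log_odds):
    """Convert accumulated log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

l = 0.0                                   # prior: P(occupied) = 0.5
for measurement in (True, True, False):   # two hits, one miss
    l = update_cell(l, measurement)
print(round(probability(l), 3))           # 0.784
```

Working in log-odds keeps the update a simple addition per measurement and avoids repeated probability renormalization, which is why grid-based probabilistic mapping systems typically use it.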

10.
A novel hybrid visual servoing control method based on structured-light vision is proposed for robotic arc welding with a general six-degrees-of-freedom robot. It consists of a position-control inner loop in Cartesian space and two outer loops: one is position-based visual control in Cartesian space for moving along the weld seam, i.e., weld-seam tracking; the other is image-based visual control in image space for adjustments that eliminate errors arising during tracking. A new Jacobian matrix from the image space of the feature point on the structured-light stripe to Cartesian space is derived for the differential movement of the end-effector. The control-system model is simplified and its stability is discussed. A verification experiment on CO2 gas-shielded arc welding is conducted.
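The image-based outer loop above can be sketched as a proportional law that maps the stripe feature point's pixel error through (the inverse of) an image Jacobian to a Cartesian correction. The constant diagonal Jacobian inverse and the gain below are illustrative assumptions; the paper derives the actual matrix from the structured-light geometry.

```python
# Assumed proportional gain and inverse image Jacobian (metres per pixel)
# mapping pixel error (u, v) to end-effector correction (dy, dz).
GAIN = 0.5
J_INV = [[0.01, 0.0],
         [0.0, 0.01]]

def correction(u_err, v_err):
    """Cartesian adjustment (dy, dz) from the feature-point pixel error."""
    dy = -GAIN * (J_INV[0][0] * u_err + J_INV[0][1] * v_err)
    dz = -GAIN * (J_INV[1][0] * u_err + J_INV[1][1] * v_err)
    return dy, dz

print(correction(20.0, -10.0))
```

Applying this correction each control cycle drives the stripe feature back to its reference image position while the position-based outer loop advances the torch along the seam.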

11.
This paper describes the use of vision for the navigation of mobile robots floating in 3D space. The problem addressed is automatic station keeping relative to some naturally textured environmental region. Because of motion disturbances in the environment (currents), such tasks are important for keeping the vehicle stabilized relative to an external reference frame. Assuming short-range regions in the environment, vision can be used for local navigation, so no global positioning methods are required. A planar environmental region is selected as a visual landmark and tracked throughout a monocular video sequence. For a camera moving in 3D space, the observed deformations of the tracked image region follow planar projective transformations and reveal information about the robot's position and orientation relative to the landmark. This information is then used in a visual feedback loop to realize station keeping. Both the tracking system and the control design are discussed. Two robotic platforms are used for experimental validation: an indoor aerial blimp and a remotely operated underwater vehicle. Results obtained from these experiments are described.

12.
Localization Methods for Indoor Autonomous Mobile Robots
Localization is the process of determining a robot's position within its working environment. Achieving reliable localization from the information perceived by various sensors is one of the most fundamental and important capabilities of an autonomous mobile robot. This paper surveys localization techniques for indoor autonomous mobile robots and reviews the current state of research on localization methods. Representative approaches from China and abroad are described in some detail, several localization methods commonly used by indoor autonomous mobile robots are highlighted, and the map-building and pose-estimation methods involved are covered in depth. Finally, the main problems facing autonomous mobile robot localization and map building, together with their solutions, are discussed, and future research directions in this field are indicated.

13.
Research and Implementation of a Vision-Based Localization Method for Mobile Robots
The local visual localization problem of mobile robots is studied. First, the position-time series of the target's centroid feature point is obtained through the mobile robot's visual localization and target tracking system. Then, after analyzing the shortcomings of the two-stage imaging method for obtaining target depth information, a method for acquiring the target's spatial position and motion information is proposed. The method uses image sequences and an extended Kalman filter, and the HSI model is adopted for target acquisition. Under certain maneuvering conditions of the mobile robot, the target's spatial position and motion information are obtained fairly accurately. Simulation results verify the effectiveness and feasibility of the method.

14.
尹磊, 彭建盛, 江国来, 欧勇盛 《集成技术》2019,8(2):11-22
Lidar and visual sensing are currently the two main localization and navigation technologies for service robots, but existing low-cost lidars have limited localization accuracy and cannot perform large-scale loop-closure detection, while feature maps built from vision alone are not suitable for navigation. Taking an indoor robot equipped with a low-cost lidar and a visual sensor as the research object, this paper proposes a localization, navigation, and mapping method that combines laser and vision: laser point-cloud data and image feature-point data are fused, and the robot pose is optimized using a sparse pose adjustment method. Meanwhile, a bag-of-words model based on visual features is used for loop-closure detection, and the occupancy grid map built from the laser point cloud is further optimized. Experimental results in real scenes show that, compared with laser-only or vision-only localization and mapping, the multi-sensor fusion method achieves higher localization accuracy and effectively solves the loop-closure detection problem.

15.
Research on a Vision-Based Global Localization System for Mobile Robots
魏芳  董再励  孙茂相  王晓蕾 《机器人》2001,23(5):400-403
This paper describes a vision-based global localization system for the autonomous navigation of mobile robots. The system consists of active LED landmarks, an omnidirectional vision sensor, and a data-processing system. The paper focuses on methods for speeding up omnidirectional image processing and improving the reliability and accuracy of landmark recognition, and presents experimental results. The experiments show that visual localization is a global navigation and localization technique of clear research value and application promise.

16.
A Novel Stereo-Vision System for Measuring the Pose of Parallel Robots
A stereo-vision framework for measuring the pose of a parallel robot is established, comprising five parts: image acquisition and transmission, camera calibration, scale-invariant feature transform (SIFT) matching, spatial point reconstruction, and pose measurement. Based on SIFT, the system handles feature-point matching well when images undergo large viewing-angle changes with occlusion, translation, rotation, and changes in brightness and scale, achieves high matching accuracy, and is particularly suitable for measuring the multi-degree-of-freedom, spatially complex motions of parallel robots. Finally, simulation experiments on parallel-robot pose measurement are performed with this method.

17.
This paper presents empirical results on the effect of global position information on the performance of the modified local navigation algorithm (MLNA) for unknown-world exploration. The results show that global position information enables the algorithm to maintain a 100% success rate irrespective of initial robot position, movement speed, and environment complexity. Most mobile robot systems accrue odometry error while moving and hence need external sensors to recalibrate their position on an ongoing basis. We address position calibration to compensate for the odometry error using the global position information provided by the Teleworkbench, a teleoperated platform and test bed for managing experiments with mini-robots. In this paper we demonstrate how the global position information is incorporated during and after the experiments.

18.
The combination of machine vision and robotics is a major trend in the robot industry. In obstacle-avoidance navigation schemes for mobile robots, conventional sensors suffer from many problems and provide limited information. A navigation algorithm for mobile robots based on monocular vision is proposed; in practice, if a camera with a known focal length is used, no camera calibration is needed. To reduce the influence of illumination on obstacle edge detection, the color image captured by the robot is converted to HSI space. The Canny algorithm is applied to the converted components separately, and the detection results are combined. Thresholding filters the combined edges, removing weak edge information and improving detection accuracy. Morphological processing connects scattered edges, region growing yields the non-obstacle region, and the mapping between the image coordinate system and the robot coordinate system is established from geometric relations. Fuzzy logic combined with membership functions yields the robot control parameters. Experimental results show that the color-space conversion reduces the influence of ground reflections and shadows, that the algorithm effectively rejects interference such as floor stripes and accurately detects obstacle edges, and that the fuzzy-logic decision method improves the algorithm's robustness and the reliability of the results.
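The RGB-to-HSI conversion step above can be sketched as follows (hue omitted for brevity); edge detection is then run on the converted components, which are less sensitive to illumination than raw RGB. The formulas follow the standard HSI model; scaling the channels to [0, 1] is an assumption.

```python
def rgb_to_si(r, g, b):
    """Return (saturation, intensity) for r, g, b in [0, 1].

    Standard HSI model: intensity is the channel mean; saturation measures
    how far the color is from gray (zero for pure gray, including black).
    """
    i = (r + g + b) / 3.0
    if i == 0.0:                 # black: saturation is defined as 0
        return 0.0, 0.0
    s = 1.0 - min(r, g, b) / i
    return s, i

print(rgb_to_si(0.5, 0.25, 0.25))  # a reddish pixel: moderate saturation
```

Because intensity is isolated in its own channel, a shadow that darkens a ground region changes I but leaves H and S largely intact, which is why the conversion suppresses reflections and shadows in the edge maps.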

19.
《Advanced Robotics》2013,27(1-2):179-206
The capability to acquire the position and orientation of an autonomous mobile robot is an important element in achieving tasks that require autonomous exploration of the workspace. In this paper, we present a localization method based on a fuzzy-tuned extended Kalman filter (FT-EKF) that requires no a priori knowledge of the state noise model. The proposed algorithm is employed on a mobile robot equipped with 16 Polaroid sonar sensors and tested in a structured indoor environment. The state noise model is estimated and adapted by a fuzzy rule-based scheme. The proposed algorithm is compared with other EKF localization methods through simulations and experiments, which demonstrate the improved performance of the proposed FT-EKF localization method over the conventional EKF algorithm.
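The fuzzy-tuning idea can be illustrated on a scalar Kalman filter: the process-noise variance Q is adapted by a fuzzy rule on the normalized innovation squared (a large innovation suggests the assumed noise model is too optimistic, so Q is inflated). The membership breakpoints and scale factors below are illustrative assumptions, not the paper's actual rule base or its full EKF.

```python
def fuzzy_q_scale(nis):
    """Piecewise-linear rule: normalized innovation squared -> Q scale factor."""
    if nis <= 1.0:                        # innovation consistent with the model
        return 1.0
    if nis >= 4.0:                        # clearly inconsistent: inflate Q
        return 5.0
    return 1.0 + 4.0 * (nis - 1.0) / 3.0  # interpolate between the two rules

def kf_step(x, p, z, q, r):
    """One predict/update cycle for a scalar static-state model."""
    p = p + q                             # predict
    innov = z - x
    nis = innov * innov / (p + r)         # normalized innovation squared
    q_next = q * fuzzy_q_scale(nis)       # fuzzy adaptation of Q
    k = p / (p + r)                       # Kalman gain and update
    return x + k * innov, (1.0 - k) * p, q_next

x, p, q = 0.0, 1.0, 0.1
for z in (0.9, 5.0):                      # the second measurement jumps
    x, p, q = kf_step(x, p, z, q, r=0.5)
print(round(q, 3))                        # Q inflated after the inconsistent jump
```

Adapting Q online in this way is what lets the filter run without a priori knowledge of the state noise model, at the cost of hand-chosen fuzzy rules.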

20.
We have developed a technology for a robot that uses an indoor navigation system based on visual methods to provide the required autonomy. For robots to run autonomously, it is extremely important that they be able to recognize the surrounding environment and their current location. To avoid using multiple external sensors, we built a navigation system in our test environment that reduces the information-processing burden mainly by using sight information from a monocular camera. In addition, we used only natural landmarks such as walls, because we assumed a human environment. In this article we discuss and explain two modules: a self-position recognition system and an obstacle recognition system. In both systems, the recognition is based on image processing of the sight information provided by the robot's camera. In addition, to provide autonomy for the robot, we use an encoder and information from a two-dimensional map of the space given beforehand. Here, we explain the navigation system that integrates these two modules. We applied this system to a robot in an indoor environment, evaluated its performance, and in a discussion of our experimental results we consider the remaining problems.


Copyright © 北京勤云科技发展有限公司  京ICP备09084417号