Similar Documents
20 similar documents found (search time: 31 ms)
1.
Autonomous and mobile robots are expected to provide various services in human living environments. However, many problems remain to be solved in the development of autonomous robots that can work like humans. When a robot moves, it is important that it can localize itself and recognize obstacles. A human can verify the present location by comparing memorized information, assumed to be correct, with the present situation. In addition, the distance to an object and the perception of its size can be estimated by a sense of distance based on memory or experience. The environment for robotic activity assumed in this study is therefore a finite space such as a family room, an office, or a hospital room. Because an accurate estimate of position is important to the success of a robot, we have developed a navigation system with self-localization ability which uses only a CCD camera and can detect whether the robot is moving accurately in a room or corridor. This article describes how this system was implemented and tested with our developed robot.

2.
We have developed a technology for a robot that uses an indoor navigation system based on visual methods to provide the required autonomy. For robots to run autonomously, it is extremely important that they are able to recognize the surrounding environment and their current location. Because multiple external sensors were not necessary, we built a navigation system in our test environment that reduces the burden of information processing mainly by using sight information from a monocular camera. In addition, we used only natural landmarks such as walls, because we assumed a human environment. In this article we discuss and explain two modules: a self-position recognition system and an obstacle recognition system. In both systems, the recognition is based on image processing of the sight information provided by the robot's camera. In addition, in order to provide autonomy for the robot, we use an encoder and information from a two-dimensional space map given beforehand. Here, we explain the navigation system that integrates these two modules. We applied this system to a robot in an indoor environment, evaluated its performance, and in a discussion of our experimental results consider the remaining problems.

3.
CCD Camera Calibration   (cited 3 times: 0 self-citations, 3 by others)
In a monocular-vision-based autonomous navigation system for an agricultural wheeled mobile robot, CCD camera calibration is the prerequisite and key to correct and safe navigation. Camera calibration establishes the correspondence between the three-dimensional spatial coordinates of a ground point and its two-dimensional coordinates in the computer image; from this relationship the robot computes its pose for autonomous navigation. Accordingly, based on the pinhole imaging model of the CCD camera, a system of equations is established between the known coordinates of points on a planar template in the world coordinate system and the corresponding pixel values in image space, and the camera's intrinsic and extrinsic parameters are fitted in the Matlab environment. Experimental results show that this method can calibrate a CCD camera correctly.
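The plane-based fitting step this abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: NumPy stands in for the Matlab fitting, a Direct Linear Transform homography between the ground plane and the image stands in for the full recovery of intrinsic and extrinsic parameters, and all numeric values are invented for the example.

```python
import numpy as np

def estimate_homography(world_pts, image_pts):
    """Fit the plane-to-image mapping from known point correspondences.

    Direct Linear Transform: each correspondence contributes two rows to
    A h = 0; the homography is the null vector of A (smallest singular value).
    """
    A = []
    for (X, Y), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix the arbitrary scale

def project(H, pt):
    """Map a ground-plane point (X, Y) to pixel coordinates (u, v)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

With known camera intrinsics, decomposing such a mapping would yield the pose estimate the robot uses for navigation.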

4.
A reactive navigation system for an autonomous mobile robot in unstructured dynamic environments is presented. The motion of moving obstacles is estimated for robot motion planning and obstacle avoidance. A multisensor-based obstacle predictor is utilized to obtain obstacle-motion information. Sensory data from a CCD camera and multiple ultrasonic range finders are combined to predict obstacle positions at the next sampling instant. A neural network, trained off-line, provides the desired prediction on-line in real time. The predicted obstacle configuration is employed by the proposed virtual-force-based navigation method to prevent collisions with moving obstacles. Simulation results are presented to verify the effectiveness of the proposed navigation system in an environment with multiple mobile robots or moving objects. The system was implemented and tested on an experimental mobile robot in our laboratory, and navigation results in a real environment are presented and analyzed.
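The virtual-force idea in this abstract (an attractive pull toward the goal plus repulsive pushes away from predicted obstacle positions) can be sketched as a classical potential-field rule. A hedged illustration only: the gains `k_att` and `k_rep` and the influence radius `d0` are invented parameters, and the paper's neural-network predictor is replaced by a list of already-predicted obstacle positions.

```python
import numpy as np

def virtual_force(robot, goal, predicted_obstacles, k_att=1.0, k_rep=0.5, d0=1.0):
    """Resultant virtual force on the robot: goal attraction plus
    repulsion from each predicted obstacle within influence radius d0."""
    robot = np.asarray(robot, dtype=float)
    force = k_att * (np.asarray(goal, dtype=float) - robot)   # attractive term
    for obs in predicted_obstacles:
        away = robot - np.asarray(obs, dtype=float)
        d = np.linalg.norm(away)
        if 1e-9 < d < d0:
            # classical repulsive gradient, growing sharply near the obstacle
            force += k_rep * (1.0 / d - 1.0 / d0) / d ** 2 * (away / d)
    return force
```

The resulting vector would then be converted into velocity or steering commands by the robot's low-level controller.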

5.
An autonomous mobile robot must have the ability to navigate in an unknown environment, and the simultaneous localization and map building (SLAM) problem is closely related to this ability. Vision sensors are attractive equipment for an autonomous mobile robot because they are information-rich and impose few restrictions on applications. However, many vision-based SLAM methods using a general pin-hole camera suffer from variation in illumination and from occlusion, because they mostly extract corner points for the feature map. Moreover, due to the narrow field of view of the pin-hole camera, they are not adequate for high-speed camera motion. To solve these problems, this paper presents a new SLAM method which uses vertical lines extracted from an omni-directional camera image and horizontal lines from range sensor data. Due to the large field of view of the omni-directional camera, features remain in the image long enough to estimate the pose of the robot and the features more accurately. Furthermore, since the proposed SLAM uses lines rather than corner points as features, it reduces the effect of illumination and partial occlusion. Moreover, we use not only the lines at the corners of walls but also many other vertical lines at doors, columns, and information panels on the wall, which cannot be extracted by a range sensor. Finally, since we use the horizontal lines to estimate the positions of the vertical line features, we do not require any camera calibration. Experimental work based on MORIS, our mobile robot test bed, moving at a human's pace in a real indoor environment verifies the efficacy of this approach.

6.
This paper addresses the problem of integrating the human operator with autonomous robotic visual tracking and servoing modules. A CCD camera is mounted on the end-effector of a robot, and the task is to servo around a static or moving rigid target. In manual control mode, the human operator, with the help of a joystick and a monitor, commands robot motions in order to compensate for tracking errors. In shared control mode, the human operator and the autonomous visual tracking modules command motion along orthogonal sets of degrees of freedom. In autonomous control mode, the autonomous visual tracking modules are in full control of the servoing functions. Finally, in traded control mode, control can be transferred from the autonomous visual modules to the human operator and vice versa. This paper presents an experimental setup in which all these schemes have been tested, along with experimental results for all modes of operation and a discussion of the related issues. In certain degrees of freedom (DOF) the autonomous modules perform better than the human operator. On the other hand, the human operator can compensate quickly for tracking failures, whereas the autonomous modules fail because of the difficulty of encoding an efficient contingency plan.

7.
The localization problem for an autonomous robot moving in a known environment is a well-studied problem which has seen many elegant solutions. Robot localization in a dynamic environment populated by several moving obstacles, however, is still a challenge for research. In this paper, we use an omnidirectional camera mounted on a mobile robot to perform a sort of scan matching. The omnidirectional vision system finds the distances of the closest color transitions in the environment, mimicking the way laser rangefinders detect the closest obstacles. The similarity of our sensor with classical rangefinders allows the use of practically unmodified Monte Carlo algorithms, with the additional advantage of being able to easily detect occlusions caused by moving obstacles. The proposed system was initially implemented in the RoboCup Middle-Size domain, but the experiments we present in this paper prove it to be valid in a general indoor environment with natural color transitions. We present localization experiments both in the RoboCup environment and in an unmodified office environment. In addition, we assessed the robustness of the system to sensor occlusions caused by other moving robots. The localization system runs in real-time on low-cost hardware.
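The Monte Carlo localization cycle this abstract relies on (predict with odometry, weight particles by comparing expected and observed ranges, resample) can be illustrated in toy form. Everything concrete here is an assumption for the example: the four "color transition" landmark positions, the Gaussian noise levels, and the use of per-landmark ranges in place of a full omnidirectional scan.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed positions of four fixed "color transition" landmarks in a 4 m x 3 m room.
LANDMARKS = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0], [0.0, 3.0]])

def ranges(pose):
    """Expected distance from an (x, y) pose to each landmark."""
    return np.linalg.norm(LANDMARKS - pose, axis=1)

def mcl_step(particles, control, measurement, noise=0.05, sigma=0.2):
    """One predict-weight-resample cycle of Monte Carlo localization."""
    # 1. Predict: apply the odometry increment plus motion noise.
    particles = particles + control + rng.normal(0.0, noise, particles.shape)
    # 2. Weight: Gaussian likelihood of the observed ranges.
    w = np.array([np.exp(-np.sum((ranges(p) - measurement) ** 2) / (2 * sigma ** 2))
                  for p in particles])
    w /= w.sum()
    # 3. Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]
```

Starting from a uniform particle cloud, repeated cycles concentrate the particles around the true pose; occluded measurements would simply be dropped from the likelihood.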

8.
This paper describes an object rearrangement system for an autonomous mobile robot. The objective of the robot is to autonomously explore and learn about an environment, to detect changes in the environment on a later visit after object disturbances and finally, to move objects back to their original positions. In the implementation, it is assumed that the robot does not have any prior knowledge of the environment and the positions of the objects. The system exploits Simultaneous Localisation and Mapping (SLAM) and autonomous exploration techniques to achieve the task. These techniques allow the robot to perform localisation and mapping which is required to perform the object rearrangement task autonomously. The system includes an arrangement change detector, object tracking and map update that work with a Polar Scan Match (PSM) Extended Kalman Filter (EKF) SLAM system. In addition, a path planning technique for dragging and pushing an object is also presented in this paper. Experimental results of the integrated approach are shown to demonstrate that the proposed approach provides real-time autonomous object rearrangements by a mobile robot in an initially unknown real environment. Experiments also show the limits of the system by investigating failure modes.

9.
This research aimed to develop an autonomous mobile robot that helps various kinds of people. The evasion of obstacles is absolutely imperative if the robot is to act in a human-life environment. Therefore, we developed a robot that moves through doors and avoids obstacles with the help of images taken by a camera mounted on the robot. This work was presented in part at the 13th International Symposium on Artificial Life and Robotics, Oita, Japan, January 31–February 2, 2008.

10.
Legged robots are an efficient alternative for navigation in challenging terrain. In this paper we describe Weaver, a six-legged robot that is designed to perform autonomous navigation in unstructured terrain. It uses terrain perception based on stereo vision and proprioceptive sensing for adaptive control, while using visual-inertial odometry for autonomous waypoint-based navigation. Terrain perception generates a minimal representation of the traversed environment in terms of roughness and step height. This reduces the complexity of the terrain model significantly, enabling the robot to feed back information about the environment into its controller. Furthermore, we combine exteroceptive and proprioceptive sensing to enhance the terrain perception capabilities, especially in situations in which the stereo camera is not able to generate an accurate representation of the environment. The adaptation approach described also exploits the unique properties of legged robots by adapting the virtual stiffness, stride frequency, and stride height. Weaver's unique leg design with five joints per leg improves locomotion on high-gradient slopes, and this novel configuration is further analyzed. Using these approaches, we present an experimental evaluation of this fully self-contained hexapod performing autonomous navigation on a multiterrain testbed and in outdoor terrain.

11.
This paper describes an autonomous mobile device that was designed, developed, and implemented as a library assistant robot. A complete autonomous system incorporating human–robot interaction has been developed and implemented within a real-world environment. The robotic development is comprehensively described in terms of its localization system, which incorporates simple image-processing techniques fused with odometry and sonar data and is validated through the use of an extended Kalman filter (EKF). The essential principles required for the development of a successful assistive robot are described and demonstrated through a human–robot interaction application applied to the library assistant robot.

12.
13.
We propose a path-planning algorithm for an autonomous mobile robot using geographical information, under the condition that the robot moves in an unknown environment. Images captured by a camera at every sampling time are analyzed, geographical elements are recognized, and the geographical information is embedded in an environmental map. The path is then updated by integrating the known information with predictions about the unknown environment. We used a sensor fusion method to improve the mobile robot's dead-reckoning accuracy. The experimental results confirm the effectiveness of the proposed algorithm, as the robot reached the goal successfully using the geographical information.
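The replanning loop described above (plan with what is known, treat the unexplored part of the map optimistically, and update as new geographical information arrives) can be sketched with a simple grid search. A minimal sketch under stated assumptions: the FREE/OBSTACLE/UNKNOWN grid encoding and breadth-first search are stand-ins for the paper's environmental map and path-update algorithm.

```python
from collections import deque

FREE, OBSTACLE, UNKNOWN = 0, 1, 2

def plan(grid, start, goal):
    """Breadth-first search on a 4-connected grid.

    Optimistic replanning: UNKNOWN cells are assumed traversable until the
    camera observes them, after which the caller updates the map and replans.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:                      # reconstruct the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols \
                    and nxt not in prev and grid[nr][nc] != OBSTACLE:
                prev[nxt] = cur
                queue.append(nxt)
    return None                              # goal unreachable in current map
```

Each time a camera image reveals a new obstacle, the corresponding cells are marked and `plan` is simply called again from the robot's current cell.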

14.
This paper presents the design, implementation, and evaluation of a trainable vision-guided mobile robot. The robot, CORGI, has a CCD camera as its only sensor, which it is trained to use for a variety of tasks. The techniques used for training and the choice of natural-light vision as the primary sensor make the methodology immediately applicable to tasks such as trash collection or fruit picking. For example, the robot is readily trained to perform a ball-finding task which involves avoiding obstacles and aligning with tennis balls. The robot is able to move at speeds up to 0.8 m/s while performing this task, and has never had a collision in the trained environment. It can process video and update the actuators at 11 Hz using a single $20 microprocessor to perform all computation. Further results evaluate the system for generalization across unseen domains, fault tolerance, and dynamic environments.

15.
Wyeth, Gordon. Machine Learning, 1998, 31(1–3): 201–222
This paper presents the design, implementation, and evaluation of a trainable vision-guided mobile robot. The robot, CORGI, has a CCD camera as its only sensor, which it is trained to use for a variety of tasks. The techniques used for training and the choice of natural-light vision as the primary sensor make the methodology immediately applicable to tasks such as trash collection or fruit picking. For example, the robot is readily trained to perform a ball-finding task which involves avoiding obstacles and aligning with tennis balls. The robot is able to move at speeds up to 0.8 m/s while performing this task, and has never had a collision in the trained environment. It can process video and update the actuators at 11 Hz using a single $20 microprocessor to perform all computation. Further results evaluate the system for generalization across unseen domains, fault tolerance, and dynamic environments.

16.
The fully autonomous robot soccer competition system is briefly introduced. Typical circuits for the vision system of a fully autonomous soccer robot are designed, including the CCD camera, SAA7111, AL422B, EPM7128, and TMS320VC5402, and the timing relationships of the system are analyzed in detail. Variant designs of the typical circuit are presented and compared.

17.
Research on Autonomous Stair Climbing for Mobile Robots Based on Sensor Information Fusion   (cited 2 times: 0 self-citations, 2 by others)
Autonomous stair climbing is one of the basic intelligent behaviors a mobile robot needs in order to carry out tasks such as exploration of hazardous environments, reconnaissance, and disaster relief. This paper analyzes how the diversity of staircases and the inherent instability of a tracked robot on stairs make stair climbing complex, and describes the stair-climbing procedure of a tracked mobile robot with leading front arms. It briefly introduces algorithms that use ultrasonic, video camera, and laser range-scanner information to perceive the stairs and to judge the robot's position relative to them. Finally, it proposes a control architecture for autonomous stair climbing in which stair parameters are perceived and the heading is computed by an information fusion method based on the confidence of the sensor measurements.
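The confidence-based fusion rule mentioned at the end of the abstract can be reduced to a one-line sketch: each sensor's estimate of a stair parameter is weighted by a confidence score. The numeric values and the simple weighted average below are illustrative assumptions, not the paper's actual method.

```python
def fuse(measurements):
    """Confidence-weighted average of redundant sensor estimates.

    measurements: (value, confidence) pairs, e.g. a stair-height estimate
    from the ultrasonic sensor, the video camera, and the laser scanner.
    """
    total = sum(conf for _, conf in measurements)
    return sum(value * conf for value, conf in measurements) / total
```

A low-confidence sensor (say, the camera in poor lighting) then contributes little to the fused stair parameter.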

18.
Motivated by the human autonomous development process from infancy to adulthood, we have built a robot that develops its cognitive and behavioral skills through real-time interactions with the environment. We call such a robot a developmental robot. In this paper, we present the theory and the architecture to implement a developmental robot and discuss the related techniques that address an array of challenging technical issues. As an application, experimental results on a real robot, the self-organizing, autonomous, incremental learner (SAIL), are presented, with emphasis on its auditory perception and audition-related action generation. In particular, the SAIL robot conducts auditory learning from unsegmented and unlabeled speech streams without any prior knowledge about the auditory signals, such as the designated language or the phoneme models. Nor are the actions that the robot is expected to perform available before learning starts. SAIL learns the auditory commands and the desired actions from physical contact with the environment, including the trainers.

19.
This article describes a prototype autonomous mobile robot, KANTARO, designed for inspecting sewer pipes. It is able to move autonomously in 200–300-mm-diameter sewer pipes, to turn smoothly through 90° at a junction, and to go down a 5-cm step. KANTARO carries all the resources required, such as a control unit, a camera, a 2D laser, and an IR sensor. Damage or abnormalities in sewer pipes are detected based on recorded sensory data. KANTARO has demonstrated its effectiveness in inspection and in autonomous navigation in a dry sewer test field at the FAIS–Robotics Development Support Office (FAIS–RDSO). This work was presented in part at the 11th International Symposium on Artificial Life and Robotics, Oita, Japan, January 23–25, 2006.

20.
However well we control a walking bipedal robot, the images obtained by its camera are tilted to the left or right and have small irregularities. This complicates the recognition of an environment by camera when the robot cannot move smoothly. The reason for using a bipedal robot is to make the robot as similar as possible to a human in body shape and behavior in order to make collaboration easier; this is difficult to attain with other types of robot, such as wheel-driven robots (Sato et al., AROB 2008; Fujiwara et al., WMSCI 2009). In an artificial environment which mainly consists of vertical and horizontal lines, the tilt angle of camera images can be corrected by using the Hough transformation, which detects lines that are nearly vertical (Okutomi et al. 2004; Forsyth and Ponce 2007). As a result, the robot can successfully recognize the environment with stereo vision using images obtained by correcting the tilted ones.
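The tilt-correction step described here (use a Hough-style vote to find the dominant near-vertical orientation in an artificial environment, then rotate the image back) can be sketched as follows. This is a toy version under stated assumptions: it votes over candidate roll angles directly with a pure-NumPy histogram score rather than calling a full Hough line implementation, and edge points are supplied rather than detected from an image.

```python
import numpy as np

def tilt_angle(edge_pts, search_deg=15.0, step_deg=0.5):
    """Estimate camera roll by voting over candidate line orientations.

    At the true roll angle, edge points lying on the (tilted) vertical lines
    of the environment project to a few sharp clusters along the line normal,
    so the rho-histogram is most sharply peaked there.
    """
    xs, ys = edge_pts[:, 0], edge_pts[:, 1]
    best_angle, best_score = 0.0, -1.0
    for a in np.arange(-search_deg, search_deg + step_deg, step_deg):
        t = np.deg2rad(a)
        rho = xs * np.cos(t) + ys * np.sin(t)   # signed distance along normal
        hist, _ = np.histogram(rho, bins=64)
        score = float(np.sum(hist ** 2))        # sharp peaks -> high score
        if score > best_score:
            best_score, best_angle = score, a
    return best_angle
```

Rotating the image by the negated estimate then restores near-vertical lines before stereo matching.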


Copyright © Beijing Qinyun Technology Development Co., Ltd.   京ICP备09084417号