Similar articles
20 similar articles found (search time: 31 ms)
1.
We have developed a technology for a robot that uses an indoor navigation system based on visual methods to provide the required autonomy. For robots to run autonomously, it is extremely important that they are able to recognize the surrounding environment and their current location. Because it was not necessary to use multiple external sensors, we built a navigation system in our test environment that reduced the burden of information processing mainly by using sight information from a monocular camera. In addition, we used only natural landmarks such as walls, because we assumed that the environment was a human living environment. In this article we discuss and explain two modules: a self-position recognition system and an obstacle recognition system. In both systems, the recognition is based on image processing of the sight information provided by the robot's camera. In addition, in order to provide autonomy for the robot, we use an encoder and information from a two-dimensional space map given beforehand. Here, we explain the navigation system that integrates these two modules. We applied this system to a robot in an indoor environment and evaluated its performance, and in a discussion of our experimental results we consider the resulting problems.

2.
Advanced Robotics, 2013, 27(8): 771-786
In the framework of our research on biologically inspired microrobotics, we have developed two novel Automatic Flight Control Systems (AFCs): OCTAVE (Optical flow Control sysTem for Aerospace VEhicles) and OSCAR (Optical Scanning sensor for the Control of Autonomous Robots), both based on insects' visuomotor control systems. OCTAVE confers upon a tethered aerial robot (the OCTAVE robot) the ability to perform terrain following. OSCAR gives a tethered aerial robot (the OSCAR robot) the ability to fixate and track a contrasting target with a high level of accuracy. Both OCTAVE and OSCAR robots are based on optical velocity sensors, the principle of which is based on the results of previous electrophysiological studies on the fly's Elementary Motion Detectors (EMDs) performed at our laboratory. Both processing systems described are light enough to be mounted on-board Micro-Air Vehicles (MAVs) with an avionic payload small enough to be expressed in grams rather than kilograms.

3.
To date, many studies related to robots have been performed around the world. Many of these studies have assumed operation at locations where entry is difficult, such as disaster sites, and have focused on various terrestrial robots, such as snake-like, humanoid, spider-type, and wheeled units. Another area of active research in recent years has been aerial robots with small helicopters for operation indoors and outdoors. However, less research has been performed on robots that operate both on the ground and in the air. Accordingly, in this paper, we propose a hybrid aerial/terrestrial robot system. The proposed robot system was developed by equipping a quadcopter with a mechanism for ground movement. It does not use power dedicated to ground movement, and instead uses the flight mechanism of the quadcopter to achieve ground movement as well. Furthermore, we addressed the issue of obstacle avoidance as part of studies on autonomous control. Thus, we found that autonomous control of ground movement and flight was possible for the hybrid aerial/terrestrial robot system, as was autonomous obstacle avoidance by flight when an obstacle appeared during ground movement.

4.
To meet the slope-perception accuracy required for the smooth uphill control of a mobile robot in unknown environments, this paper proposes a transfer-learning-based single-frame image slope detection algorithm for mobile robots. A standard indoor image dataset is used to train a deep convolutional neural field-fully connected superpixel pooling network (deep convolutional neural field-fully connected superpixel pooling ne...

5.
Service-oriented cruising robots have a large impact in fields such as transportation and carriage, and industry R&D investment in service robots keeps growing. To address the self-cruising strategy problem of current service robots, a self-cruising system based on the MK60FX512VLQ15 microcontroller is proposed that allows robots travelling in opposite directions on the same track to pass each other. The system captures path images with a camera and processes them on the MK60FX512VLQ15 microcontroller for path planning, uses an ultrasonic ranging module to maintain a safe distance between robots, and communicates between robots in real time over Bluetooth 2.0 to coordinate their crossing. In addition, a host-computer application developed with QT Creator provides image processing and analysis for the robots as well as remote online parameter tuning. Experimental results show that, in complex scenarios, the system's track-following service enables robots to cross each other effectively on the designated path.

6.
To address the self-cruising strategy problem of current service robots, a self-cruising system based on the MK60FX512VLQ15 is proposed that allows robots travelling in the same direction on a single track to overtake one another. The system captures images with a camera and processes them on the MK60FX512VLQ15 microcontroller for path planning, uses an ultrasonic ranging module to control the distance between robots, and communicates between robots in real time via nRF24L01+ to realize mutual overtaking. In addition, a host-computer application developed with Visual Studio provides image processing and analysis for the robots as well as remote online parameter tuning. Experimental results show that, in complex scenarios, the system's track-following service enables robots to travel effectively on the designated path and, in emergencies, to overtake one another on straight segments and at intersections.

7.
A new approach to robot location by house corners
A new approach to robot location in an in-house 3-D space using house corners as the standard mark is proposed. A monocular image of a house corner is first taken. Image processing and numerical analysis techniques are then applied to find the equations of the three lines going through the corner point. Under the reasonable assumption that the distance from the camera to the ceiling is known in advance, the position of the robot, on which the camera is mounted, is finally uniquely determined according to 3-D imaging geometry. Experimental results with location error less than 5% on the average prove the feasibility of the proposed approach. Error analysis useful for determining location precision is also included.
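The final localization step sketched in the abstract above, recovering the robot's position from the corner pixel and the known camera-to-ceiling distance, amounts to a pinhole back-projection. The following is a minimal illustrative sketch; the focal length, principal point, and upward-facing axis convention are our assumptions, not values from the paper:

```python
import numpy as np

def locate_from_corner(u, v, f, cx, cy, h):
    """Back-project the corner pixel (u, v) through a pinhole camera.

    f: focal length in pixels; (cx, cy): principal point;
    h: known distance from the camera to the ceiling.
    Returns the horizontal offset (X, Y) of the corner relative
    to the camera, which fixes the robot's position in the room.
    """
    # Direction of the ray through the pixel, in camera coordinates.
    ray = np.array([(u - cx) / f, (v - cy) / f, 1.0])
    # Scale the ray so its optical-axis component equals h.
    scale = h / ray[2]
    X, Y, _ = ray * scale
    return X, Y
```

For example, with a 500-pixel focal length, a corner imaged 80 pixels to the right of the principal point at a 2 m ceiling distance lies 0.32 m to the side of the camera.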

8.
This paper presents the design of a stable non-linear control system for the remote visual tracking of cellular robots. The robots are controlled through visual feedback based on the processing of the image captured by a fixed video camera observing the workspace. The control algorithm is based only on measurements on the image plane of the camera (direct visual control), thus avoiding the problems related to camera calibration. In addition, the camera plane may have any (unknown) orientation with respect to the robot workspace. The controller uses an on-line estimation of the image Jacobians. Considering the Jacobians' estimation errors, the control system is capable of tracking a reference point moving on the image plane (defining the reference trajectory) with an ultimately bounded error. An obstacle avoidance strategy is also developed in the same context, based on the visual impedance concept. Experimental results show the performance of the overall control system.

9.
Recently, many extensive studies have been conducted on robot control via self-positioning estimation techniques. In the simultaneous localization and mapping (SLAM) method, which is one approach to self-positioning estimation, robots generally use both autonomous position information from internal sensors and observed information on external landmarks. SLAM can yield higher accuracy positioning estimations depending on the number of landmarks; however, this technique involves a degree of uncertainty and has a high computational cost, because it utilizes image processing to detect and recognize landmarks. To overcome this problem, we propose a method called the generalized measuring-worm (GMW) algorithm for map creation and position estimation, which uses multiple cooperating robots that serve as moving landmarks for each other. This approach allows problems of uncertainty and computational cost to be overcome, because a robot must find only a simple two-dimensional marker rather than feature-point landmarks. In the GMW method, the robots are given a two-dimensional marker of known shape and size and use a front-positioned camera to determine the marker distance and direction. The robots use this information to estimate each other's positions and to calibrate their movement. To evaluate the proposed method experimentally, we fabricated two real robots and observed their behavior in an indoor environment. The experimental results revealed that the distance measurement and control error could be reduced to less than 3%.
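The marker-based distance and direction measurement described above can be approximated with similar triangles under a pinhole model. This is an illustrative sketch; the parameter names and the pinhole simplification are ours, not the paper's:

```python
import math

def marker_range_bearing(pixel_width, pixel_center_u, f, cx, marker_width):
    """Range and bearing to a marker of known physical size.

    pixel_width: apparent width of the marker in pixels;
    pixel_center_u: horizontal pixel coordinate of the marker centre;
    f: focal length in pixels; cx: principal point; marker_width: metres.
    """
    distance = f * marker_width / pixel_width     # similar triangles
    bearing = math.atan2(pixel_center_u - cx, f)  # angle off the optical axis
    return distance, bearing
```

For example, a 0.2 m marker that appears 100 pixels wide under a 500-pixel focal length is estimated to be 1 m away.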

10.
Advanced Robotics, 2013, 27(2): 215-230
Autonomous mobile robots should have the capability of recognizing their environments and manoeuvring through those environments on the basis of their own judgement. Fuzzy control is suitable for autonomous mobile robot control, where the amount of information to be handled is limited as much as possible and the processing is simple. Autonomous mobile control of a robot is derived from two kinds of controls: obstacle avoidance and guidance along an appropriate path to a destination point. Fuzzy control for obstacle avoidance is developed based on finding permissible passageways, using the edges between the floor and the walls or obstacles obtained by processing the image from a CCD camera mounted on the front of the robot. Furthermore, guidance control over paths specified in terms of maps is developed by a process that treats a wrong path as a virtual obstacle on the screen, so that the robot advances in the designated direction when it reaches intersections. An autonomous fuzzy robot based on the above method is fabricated as a trial and its usefulness is demonstrated.
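A toy version of such a fuzzy obstacle-avoidance rule base, with triangular membership functions and centre-of-gravity defuzzification, might look as follows. The rules and membership parameters are our illustrative assumptions, not the controller from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steer(left_clearance, right_clearance):
    """Steering command in [-1, 1] (+1 = steer right) from the clearances,
    in metres, to the left and right edges of the permissible passageway."""
    near_left = tri(left_clearance, -0.5, 0.0, 1.0)    # obstacle close on the left
    near_right = tri(right_clearance, -0.5, 0.0, 1.0)  # obstacle close on the right
    # IF near on the left THEN steer right; IF near on the right THEN steer left,
    # combined by a weighted average (centre-of-gravity defuzzification).
    w = near_left + near_right
    return 0.0 if w == 0.0 else (near_left - near_right) / w
```

With 2 m of clearance on both sides the command is neutral; with only 0.1 m on the left the robot steers hard right.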

11.
Uncalibrated obstacle detection using normal flow
This paper addresses the problem of obstacle detection for mobile robots. The visual information provided by a single on-board camera is used as input. We assume that the robot is moving on a planar pavement, and any point lying outside this plane is treated as an obstacle. We address the problem of obstacle detection by exploiting the geometric arrangement between the robot, the camera, and the scene. During an initialization stage, we estimate an inverse perspective transformation that maps the image plane onto the horizontal plane. During normal operation, the normal flow is computed and inversely projected onto the horizontal plane. This simplifies the resultant flow pattern, and fast tests can be used to detect obstacles. A salient feature of our method is that only the normal flow information, or first-order time-and-space image derivatives, is used, and thus we cope with the aperture problem. Another important issue is that, in contrast with other methods, the vehicle motion and the intrinsic and extrinsic parameters of the camera need not be known or calibrated. Both translational and rotational motion can be dealt with. We present motion estimation results on synthetic and real-image data. A real-time version, implemented on a mobile robot, is described.
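The initialization and detection steps above can be sketched as follows. The median-based consistency test and the tolerance value are our illustrative simplifications of the paper's fast ground-plane tests; the homography is assumed to come from the initialization stage:

```python
import numpy as np

def to_ground_plane(H_inv, pts):
    """Map image points (N x 2) to the horizontal plane using the inverse
    perspective homography H_inv (3 x 3) estimated at initialization."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
    mapped = pts_h @ H_inv.T
    return mapped[:, :2] / mapped[:, 2:3]             # dehomogenize

def flag_obstacles(flow_start, flow_end, H_inv, tol=0.2):
    """Flag points whose inversely projected flow deviates from the dominant
    ground-plane flow: points on the pavement move consistently after the
    inverse projection, while points above the plane do not."""
    v = to_ground_plane(H_inv, flow_end) - to_ground_plane(H_inv, flow_start)
    dominant = np.median(v, axis=0)
    return np.linalg.norm(v - dominant, axis=1) > tol
```

With an identity homography and three points sharing the same flow, a fourth point with much larger flow is the only one flagged.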

12.
Monocular vision-based navigation is an important capability for a home mobile robot. However, due to diverse disturbances, helping robots avoid obstacles, especially non-Manhattan obstacles, remains a big challenge. In indoor environments, there are many spatial right-angle corners that are projected into two-dimensional projections with special geometric configurations. These projections, which consist of three lines, make it possible to estimate their position and orientation in 3D scenes. In this paper, we present a method for home robots to avoid non-Manhattan obstacles in indoor environments using a monocular camera. The approach first detects non-Manhattan obstacles. By analyzing geometric features and constraints, it is possible to estimate the posture differences between the orientation of the robot and the non-Manhattan obstacles. Finally, according to the convergence of the posture differences, the robot can adjust its orientation to keep pace with the pose of the detected non-Manhattan obstacles, making it possible to avoid these obstacles by itself. Based on geometric inferences, the proposed approach requires no prior training or any knowledge of the camera's internal parameters, making it practical for robot navigation. Furthermore, the method is robust to errors in calibration and image noise. We compared the errors of the corners of estimated non-Manhattan obstacles against the ground truth. We also evaluated the validity of the convergence of the differences between the robot orientation and the posture of non-Manhattan obstacles. The experimental results showed that our method is capable of avoiding non-Manhattan obstacles, meeting the requirements for indoor robot navigation.

13.
In this work, we present a new real-time image-based monocular path detection method. It does not require camera calibration and works on semi-structured outdoor paths. The core of the method is based on segmenting images and classifying each super-pixel to infer a contour of navigable space. This method allows a mobile robot equipped with a monocular camera to follow different naturally delimited paths. The contour shape can be used to calculate the forward and steering speed of the robot. To achieve real-time computation necessary for on-board execution in mobile robots, the image segmentation is implemented on a low-power embedded GPU. The validity of our approach has been verified with an image dataset of various outdoor paths as well as with a real mobile robot.

14.
In the current article, we address the problem of constructing radiofrequency identification (RFID)-augmented environments for mobile robots and the issues related to creating user interfaces for efficient remote navigation with a mobile robot in such environments. First, we describe an RFID-based positioning and obstacle identification solution for remotely controlled mobile robots in indoor environments. In the robot system, an architecture specifically developed by the authors for remotely controlled robotic systems was tested in practice. Second, using the developed system, three techniques for displaying information about the position and movements of a remote robot to the user were compared. The experimental visualization techniques displayed the position of the robot on an indoor floor plan augmented with (1) a video view from a camera attached to the robot, (2) display of nearby obstacles (identified using RFID technology) on the floor plan, and (3) both features. In the experiment, test subjects controlled the mobile robot through predetermined routes as quickly as possible avoiding collisions. The results suggest that the developed RFID-based environment and the remote control system can be used for efficient control of mobile robots. The results from the comparison of the visualization techniques showed that the technique without a camera view (2) was the fastest, and the number of steering motions made was smallest using this technique, but it also had the highest need for physical human interventions. The technique with both additional features (3) was subjectively preferred by the users. The similarities and differences between the current results and those found in the literature are discussed.

15.
We present a system consisting of a miniature unmanned aerial vehicle (UAV) and a small carrier vehicle, in which the UAV is capable of autonomously starting from the moving ground vehicle, tracking it at a constant distance and landing on a platform on the carrier in motion. Our visual tracking approach differs from other methods by using low-cost, lightweight commodity consumer hardware. As main sensor we use a Wii remote infrared (IR) camera, which allows robust tracking of a pattern of IR lights in conditions without direct sunlight. The system does not need to communicate with the ground vehicle and works with an onboard 8-bit microcontroller. Nevertheless the position and orientation relative to the IR pattern is estimated at a frequency of approximately 50 Hz. This enables the UAV to fly fully autonomously, performing flight control, self-stabilisation and visual tracking of the ground vehicle. We present experiments in which our UAV performs autonomous flights with a moving ground carrier describing a circular path and where the carrier is rotating. The system provides small errors and allows for safe, autonomous indoor flights.

16.
In the last few years, mobile robot systems that perform complicated tasks have been studied. To work in complicated environments, the robot has to avoid collisions with obstacles. Therefore the robot needs to detect the arrangement of any surrounding obstacles. We considered a simple distance estimation algorithm using ultrasonic sonar. Since the algorithm was able to estimate distance accurately, we also attempted stereo reception using two ultrasonic microphones. The stereo reception sonar was able to detect the direction of obstacles. In order to make precise measurements, we attempted to use the signal coherence of ultrasonic waves. In order to install a small system into mobile robots and to detect any surrounding obstacles, we designed a multichannel sonar signal processing system using a high-performance embedded microcontroller. This article describes our ideas for the distance estimation algorithm for ultrasonic sonar, and a design for a signal processing system using a high-performance microcontroller.
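The basic pulse-echo distance estimate and the two-microphone direction estimate described above reduce to a few lines. This is a sketch only; the speed-of-sound constant and the far-field approximation are our assumptions, not values from the article:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C (assumed)

def sonar_distance(time_of_flight):
    """One-way distance from the round-trip time of an ultrasonic pulse."""
    return SPEED_OF_SOUND * time_of_flight / 2.0

def stereo_bearing(arrival_dt, baseline):
    """Direction of an echo from the arrival-time difference between two
    receiving microphones separated by `baseline` metres (far-field model)."""
    s = SPEED_OF_SOUND * arrival_dt / baseline
    return math.asin(max(-1.0, min(1.0, s)))  # radians off the broadside axis
```

A 10 ms round trip corresponds to an obstacle about 1.7 m away; a zero arrival-time difference means the echo source lies straight ahead.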

17.
Executing complex robotic tasks including dexterous grasping and manipulation requires a combination of dexterous robots, intelligent sensors and adequate object information processing. In this paper, vision has been integrated into a highly redundant robotic system consisting of a tiltable camera and a three-fingered dexterous gripper, both mounted on a PUMA-type robot arm. In order to condense the image data of the robot working space acquired from the mobile camera, contour image processing is used for offline grasp and motion planning as well as for online supervision of manipulation tasks. The performance of the desired robot and object motions is controlled by a visual feedback system coordinating motions of hand, arm and eye according to the specific requirements of the respective situation. Experiences and results based on several experiments in the field of service robotics show the possibilities and limits of integrating vision and tactile sensors into a dexterous hand-arm-eye system able to assist humans in industrial or servicing environments.

18.
A new visual measurement method is proposed to estimate the three-dimensional (3D) position of an object on the floor based on a single camera. The camera fixed on a robot is in an inclined position with respect to the floor. A measurement model with the camera's extrinsic parameters, such as the height and pitch angle, is described. A single image of a chessboard pattern placed on the floor is enough to calibrate the camera's extrinsic parameters once the camera's intrinsic parameters have been calibrated. The position of an object on the floor can then be computed with the measurement model. Furthermore, the height of an object can be calculated from paired points on a vertical line sharing the same position on the floor. Compared to conventional methods used to estimate positions on the plane, this method can obtain 3D positions. Indoor experiments verify the accuracy and validity of the proposed method.
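The measurement model described above, a camera at known height and pitch intersecting pixel rays with the floor plane, can be sketched as follows. The axis conventions and parameter names are ours; in the paper the height and pitch are calibrated from a single chessboard image:

```python
import math

def floor_point(u, v, f, cx, cy, height, pitch):
    """Intersect the ray through pixel (u, v) with the floor plane.

    The camera is `height` metres above the floor and pitched down by
    `pitch` radians; f is the focal length in pixels and (cx, cy) the
    principal point. Returns (lateral offset X, forward distance Y).
    """
    x_c = (u - cx) / f  # rightward component of the ray, camera frame
    y_c = (v - cy) / f  # downward component of the ray, camera frame
    # Downward component of the pitched ray (forward component z_c = 1).
    denom = y_c * math.cos(pitch) + math.sin(pitch)
    t = height / denom  # ray scale at the floor intersection
    X = t * x_c
    Y = t * (math.cos(pitch) - y_c * math.sin(pitch))
    return X, Y
```

As a sanity check, at a 45-degree pitch and 1 m height the ray through the principal point meets the floor 1 m ahead of the camera.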

19.
We are attempting to develop an autonomous personal robot that has the ability to perform practical tasks in a human living environment by using information derived from sensors. When a robot operates in a human environment, the issue of safety must be considered in regard to its autonomous movement. Thus, robots absolutely require systems that can recognize the external world and perform correct driving control. We have therefore developed a navigation system for an autonomous robot. The system requires only image data captured by a monocular CCD camera. In this system, we allow the robot to search for obstacles present on the floor. The robot then obtains the distance information necessary for evading an object, including the obstacle's width, height, and depth, by calculating angles in the images taken by the CCD camera. We applied the system to a robot in an indoor environment and evaluated its performance, and we consider the resulting problems in the discussion of our experimental results. This work was presented in part at the 13th International Symposium on Artificial Life and Robotics, Oita, Japan, January 31-February 2, 2008.

20.
Advanced Robotics, 2013, 27(12): 1361-1377
We consider the task of controlling a large team of non-holonomic ground robots with an unmanned aerial vehicle in a decentralized manner that is invariant to the number of ground robots. The central idea is the development of an abstraction for the team of ground robots that allows the aerial platform to control the team without any knowledge of the specificity of individual vehicles. This happens in much the same way as a human operator can control a single robot vehicle by simply commanding the forward and turning velocities without a detailed knowledge of the specifics of the robot. The abstraction includes a gross model of the shape of the formation of the team, and information about the position and orientation of the team in the plane. We derive controllers that allow the team of robots to move in formation while avoiding collisions and respecting the abstraction commanded by the aerial platform. We propose strategies for controlling the physical spread of the ensemble of robots by splitting and merging the team based on distributed techniques. We provide simulation and experimental results using a team of indoor mobile robots and a three-dimensional, cable-controlled, parallel robot which serves as our indoor unmanned aerial platform.

