Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
Many studies have recently been conducted on robot control via self-position estimation techniques. In simultaneous localization and mapping (SLAM), one approach to self-position estimation, robots generally combine autonomous position information from internal sensors with observations of external landmarks. SLAM can yield higher-accuracy position estimates depending on the number of landmarks; however, it involves a degree of uncertainty and a high computational cost, because it relies on image processing to detect and recognize landmarks. To overcome this problem, we propose a novel method called the generalized measuring-worm (GMW) algorithm for map creation and position estimation, in which multiple cooperating robots serve as moving landmarks for each other. This approach avoids the uncertainty and computational cost, because a robot must find only a simple two-dimensional marker rather than feature-point landmarks. In the GMW method, the robots carry a two-dimensional marker of known shape and size and use a front-mounted camera to determine the marker's distance and direction. The robots use this information to estimate each other's positions and to calibrate their movement. To evaluate the proposed method experimentally, we fabricated two real robots and observed their behavior in an indoor environment. The experimental results revealed that the distance-measurement and control error could be reduced to less than 3%.
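The abstract's core geometric step, recovering a marker's range and bearing from a calibrated front camera given the marker's known physical size, can be sketched with the pinhole model. The function names and specific parameters below (focal length in pixels, principal point) are illustrative assumptions, not the paper's actual code:

```python
import math

def marker_distance(focal_px, marker_width_m, observed_width_px):
    """Pinhole range estimate: Z = f * W / w, with f in pixels,
    W the known physical marker width, w its apparent width in pixels.
    (Illustrative sketch, not the GMW paper's implementation.)"""
    return focal_px * marker_width_m / observed_width_px

def marker_bearing(focal_px, principal_x_px, marker_center_px):
    """Horizontal bearing (radians) of the marker relative to the
    camera's optical axis."""
    return math.atan2(marker_center_px - principal_x_px, focal_px)
```

With a 600 px focal length, a 0.2 m wide marker that appears 60 px wide lies 2 m away; a marker centered on the principal point has zero bearing.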

2.
The requirement that mobile robots become independent of external sensors, such as GPS, and navigate an environment by themselves leaves designers with few alternative techniques. An increasingly popular approach is to use computer vision as a source of information about the surroundings. This paper presents an implementation of computer vision to hold a quadrocopter aircraft in a stable hovering position using a low-cost, consumer-grade video system. However, such a system cannot stabilize the aircraft on its own and must rely on a data-fusion algorithm that incorporates additional measurements from on-board inertial sensors. Special techniques had to be implemented to compensate for the increased delay that the computer vision system introduces into the closed loop: video timestamping to determine the exact delay of the vision system, and a slight modification of the Kalman filter to account for this delay. Finally, validation results for the proposed filtering technique are presented, along with the results of an autonomous flight as a proof of the proposed concept.
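The delay-handling idea described above (timestamp the video frames, fuse each delayed measurement at the past state it actually refers to, then re-propagate the correction to the present) can be illustrated with a deliberately minimal scalar filter. This is a hypothetical sketch under simplifying assumptions (constant-position model, lag known in whole prediction steps), not the authors' filter:

```python
class DelayedKF:
    """Scalar constant-position Kalman filter that fuses measurements
    arriving `lag` prediction steps late. Illustrative sketch only;
    the history buffer grows unbounded and would be windowed in practice."""

    def __init__(self, x0=0.0, p0=1.0, q=0.01, r=0.25):
        self.x, self.p, self.q, self.r = x0, p0, q, r
        self.history = [(x0, p0)]          # state snapshots, oldest first

    def predict(self):
        # constant-position model: state unchanged, uncertainty grows
        self.p += self.q
        self.history.append((self.x, self.p))

    def update_delayed(self, z, lag):
        lag = min(lag, len(self.history) - 1)
        j = len(self.history) - 1 - lag    # snapshot the measurement refers to
        x, p = self.history[j]
        k = p / (p + self.r)               # Kalman gain at that past time
        x += k * (z - x)
        p *= (1.0 - k)
        self.history[j] = (x, p)
        # re-propagate the corrected state forward to the present
        for i in range(j + 1, len(self.history)):
            p += self.q
            self.history[i] = (x, p)
        self.x, self.p = x, p
```

Feeding a constant measurement with a two-step lag, the estimate converges to the measured value despite the delay.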

3.
In this paper, we present a multi-sensor-fusion-based monocular visual navigation system for a quadrotor with limited payload, power, and computational resources. The system is equipped with an inertial measurement unit (IMU), a sonar, and a monocular down-looking camera, and works well in GPS-denied and markerless environments. Unlike most keyframe-based visual navigation systems, ours uses information from both keyframes and keypoints in each frame. A GPU-based speeded-up robust feature (SURF) implementation is employed for feature detection and matching. Based on the flight characteristics of the quadrotor, we propose a refined preliminary motion estimation algorithm that incorporates IMU data. A multi-level judgment rule is then presented, which benefits hovering conditions and effectively reduces error accumulation. The sonar sensor solves the metric scale estimation problem. We also present a novel IMU+3P (IMU with three point correspondences) algorithm for accurate pose estimation, which transforms the 6-DOF pose estimation problem into a 4-DOF problem and obtains more accurate results in less computation time. Experiments in real indoor and outdoor environments demonstrate that the monocular visual navigation system runs in real time and provides robust and accurate navigation for the quadrotor.

4.
Accurate and robust calibration is an essential prerequisite for multi-rate sensor fusion. However, most existing calibration methods ignore temporal calibration and assume that the timestamps of the multi-rate sensors are precisely aligned; more importantly, many approaches are designed for offline calibration. For these reasons, this paper develops a novel online temporal calibration method for multi-rate sensor fusion based on the motion constraints of the sensors. In this calibration framework, the high-update-rate inertial measurement unit (IMU) is utilized as the unified calibration reference, while the other moderate- or low-frequency target sensors are estimated against it. As a result, targetless, online, high-precision temporal self-calibration can be achieved. During calibration, an improved multi-state constraint Kalman filter (I-MSCKF) algorithm is proposed to estimate both the position and temporal states of the multi-rate sensors, establishing a multi-constraint filter that corrects the temporal offset error in real time. Furthermore, motion-constraint models in two-dimensional (2D) planar and three-dimensional (3D) space are developed from per-sensor ego-motion to enhance the robustness and reliability of the proposed temporal self-calibration method. Experimental results demonstrate that the proposed method can accurately estimate the temporal offset error and transformation parameters online, which significantly improves the trajectory estimation of robots equipped with multi-rate sensors.

5.
This study addresses a floor identification method for small humanoid robots that work in daily environments such as homes. The fundamental difficulty lies in understanding the physical properties of floors. To achieve floor identification with small humanoid robots, we used inertial sensors, which can be easily installed on such robots, and dynamically selected a full-body motion that physically senses the floor to achieve accurate identification. We collected a training data set over 10 different kinds of common floors in home environments and achieved 85.7% precision with our proposed method. We also demonstrate that our robot could appropriately change its locomotion behaviours depending on the floor identification results.
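As a toy illustration of inertial-feature-based floor identification, the sketch below extracts two features per sensing motion (mean and standard deviation of vertical acceleration) and classifies them with a nearest-centroid rule. The feature set, classifier, and floor names are assumptions for illustration; the paper does not specify this exact pipeline:

```python
import math

def features(accel_z):
    """Two simple features from one vertical-acceleration trace:
    (mean, standard deviation). Illustrative choice only."""
    m = sum(accel_z) / len(accel_z)
    var = sum((a - m) ** 2 for a in accel_z) / len(accel_z)
    return (m, math.sqrt(var))

def train_centroids(labeled_trials):
    """labeled_trials: {floor_name: [trace, ...]} -> per-floor mean feature."""
    cents = {}
    for floor, trials in labeled_trials.items():
        feats = [features(t) for t in trials]
        cents[floor] = tuple(sum(f[i] for f in feats) / len(feats)
                             for i in range(2))
    return cents

def classify(trace, cents):
    """Assign the trace to the floor with the nearest feature centroid."""
    f = features(trace)
    return min(cents, key=lambda c: (f[0] - cents[c][0]) ** 2
                                    + (f[1] - cents[c][1]) ** 2)
```

A soft floor damps vibration (low standard deviation) while a hard one does not, which is enough for this two-class toy to separate them.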

6.
Reinforcement learning (RL) is a popular method for solving the path planning problem of autonomous mobile robots in unknown environments. However, the primary difficulty faced by learning robots using RL is that they learn too slowly in obstacle-dense environments. To solve the path planning problem more efficiently in such environments, this paper presents a novel approach in which the robot's learning process is divided into two phases. The first phase accelerates the learning of an optimal policy by extending the well-known Dyna-Q algorithm, training the robot to learn obstacle-avoidance actions while following the vector direction. In this phase, the robot's position is represented on a uniform grid; at each time step the robot moves to one of its eight adjacent cells, so the path obtained from the optimal policy may be longer than the true shortest path. The second phase trains the robot to learn a collision-free smooth path that reduces the number of heading changes. Simulation results show that the proposed approach is efficient for path planning of autonomous mobile robots in unknown environments with dense obstacles.
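The first phase builds on Dyna-Q, which augments ordinary Q-learning with planning updates replayed from a learned model of the environment. A minimal tabular sketch on a grid is shown below; note the abstract's robot uses eight adjacent cells, while this sketch uses four for brevity, and the rewards and hyperparameters are illustrative assumptions:

```python
import random

def dyna_q(grid_w, grid_h, start, goal, obstacles,
           episodes=200, planning_steps=10, alpha=0.5, gamma=0.95, eps=0.1):
    """Tabular Dyna-Q on a 4-connected grid (illustrative sketch, not the
    paper's exact variant). Returns the greedy path from start."""
    actions = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    Q = {}       # (state, action) -> value
    model = {}   # (state, action) -> (next_state, reward), learned model

    def step(s, a):
        nx, ny = s[0] + a[0], s[1] + a[1]
        if not (0 <= nx < grid_w and 0 <= ny < grid_h) or (nx, ny) in obstacles:
            return s, -1.0                 # bump: stay put, penalty
        return ((nx, ny), 10.0) if (nx, ny) == goal else ((nx, ny), -0.1)

    def best(s):
        return max(actions, key=lambda a: Q.get((s, a), 0.0))

    for _ in range(episodes):
        s = start
        while s != goal:
            a = random.choice(actions) if random.random() < eps else best(s)
            s2, r = step(s, a)
            tgt = r + gamma * Q.get((s2, best(s2)), 0.0)
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (tgt - Q.get((s, a), 0.0))
            model[(s, a)] = (s2, r)
            for _ in range(planning_steps):  # replay learned transitions
                (ps, pa), (ps2, pr) = random.choice(list(model.items()))
                t = pr + gamma * Q.get((ps2, best(ps2)), 0.0)
                Q[(ps, pa)] = Q.get((ps, pa), 0.0) + alpha * (t - Q.get((ps, pa), 0.0))
            s = s2

    path, s = [start], start               # extract the greedy path
    for _ in range(grid_w * grid_h):
        if s == goal:
            break
        s, _ = step(s, best(s))
        path.append(s)
    return path
```

The planning loop is what distinguishes Dyna-Q from plain Q-learning: each real step is amplified by several simulated updates, which is exactly the speed-up the abstract relies on in obstacle-dense maps.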

7.
The relative pose between the inertial and visual sensors of an autonomous robot is calibrated in two steps. In the first step, the sensing system is moved along a line, and the orientation component of the relative pose is computed from at least five corresponding points in the two images captured before and after the movement. In the second step, the translation component is obtained from at least two corresponding points in the two images captured before and after a single-step motion. Experiments are conducted to verify the effectiveness of the proposed method.

8.
High-precision positioning of shearers is an important research direction in automated and intelligent coal mining, and the inertial navigation system (INS) and odometer are among the main positioning sensors of longwall fully mechanized shearers. Fusing the two can effectively suppress INS divergence and provides good autonomous navigation capability, but still cannot meet the requirements of long-duration, high-precision underground navigation. In view of this, we analyze the problems of the auxiliary sensors commonly used during shearer operation and propose an improved factor-graph optimization method based on UWB measurements at the ends of the working face. Using the UWB position measurements at the face ends, we derive and construct the constraint equations and graph-optimization model of the INS/odometer/UWB system. Preintegration of the inertial information reduces the number of nodes to be optimized and thus the computational load of the algorithm. On this basis, factor nodes for the odometer scale-factor error and installation error are added for joint estimation and optimization. Finally, simulations and real vehicle tests show that, compared with traditional Kalman-filter-based shearer positioning, the proposed method effectively improves positioning accuracy.

9.
Research on Autonomous Search Techniques for Mobile Robots in Unknown Environments
肖潇  方勇纯  贺锋  马博军 《机器人》2007,29(3):224-229
Combining full-coverage search with an object-recognition method based on dynamic template matching, we propose an autonomous object-search method suitable for unknown environments, enabling a mobile robot to carry out search tasks in unfamiliar surroundings. Specifically, the mobile robot perceives its environment with sonar and an omnidirectional camera, and performs local path planning with fuzzy logic; on this basis, it traverses the space via full-coverage search and uses dynamic template matching to recognize the target object and determine its bearing. The proposed method can start from an arbitrary position, and the algorithm adapts well to unfamiliar environments. Finally, experimental results confirm the good performance of the algorithm.
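The recognition step relies on template matching; a common matching score is normalized cross-correlation (NCC), sketched below in plain Python on grayscale images stored as nested lists. This static-template version is only illustrative, since the paper's method updates the template dynamically:

```python
def ncc(patch, template):
    """Normalized cross-correlation between two equally sized patches,
    in [-1, 1]; returns 0.0 for constant patches (degenerate case)."""
    n = len(template) * len(template[0])
    mp = sum(map(sum, patch)) / n
    mt = sum(map(sum, template)) / n
    num = sp = st = 0.0
    for prow, trow in zip(patch, template):
        for p, t in zip(prow, trow):
            num += (p - mp) * (t - mt)
            sp += (p - mp) ** 2
            st += (t - mt) ** 2
    denom = (sp * st) ** 0.5
    return num / denom if denom else 0.0

def find_template(image, template):
    """Exhaustively slide the template over the image and return the
    top-left (row, col) of the best-scoring position."""
    th, tw = len(template), len(template[0])
    best, best_pos = -2.0, (0, 0)
    for y in range(len(image) - th + 1):
        for x in range(len(image[0]) - tw + 1):
            patch = [row[x:x + tw] for row in image[y:y + th]]
            score = ncc(patch, template)
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos
```

NCC is invariant to uniform brightness and contrast changes, which is why it is a standard choice for matching under varying illumination.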

10.
《Advanced Robotics》2013,27(11-12):1493-1514
In this paper, a fully autonomous quadrotor in a heterogeneous air–ground multi-robot system is established using only minimal on-board sensors: a monocular camera and inertial measurement units (IMUs). Efficient pose and motion estimation is proposed and optimized. A continuous-discrete extended Kalman filter is applied, in which the high-frequency IMU data drive the prediction, while the estimates are corrected by the accurate and steady vision data. A high-frequency fusion at 100 Hz is achieved. Moreover, time-delay analysis and data synchronization are conducted to further improve the pose/motion estimation of the quadrotor. The complete on-board implementation of sensor-data processing and control algorithms reduces the influence of data-transfer delay, enables autonomous task accomplishment, and extends the workspace. Higher pose estimation accuracy and smaller control errors compared to standard approaches are achieved in real-time hovering and tracking experiments.

11.
Discriminating or classifying different terrains is an important ability for every autonomous mobile robot. A variety of sensors, preprocessing techniques, and algorithms have been applied to this problem on different robots. However, little attention has been paid to the way the sensory data are generated or to the contribution of different sensory modalities. In this work, a quadruped robot traverses different grounds using a variety of gaits, equipped with a collection of proprioceptive (encoders on active and passive compliant joints), inertial, and foot-pressure sensors. The effect of different gaits on classification performance is assessed, and it is demonstrated that separate terrain classifiers should be employed for each motor program. Furthermore, the poor performance of randomly generated motor commands confirms the importance of coordinated behavior in structuring sensory information. The collection of sensors sensitive to active, "tactile" terrain exploration proved effective. Among the individual modalities, encoders on passive compliant joints delivered the best results.

12.
In this field note, we detail the operations and discuss the results of an experiment conducted in the unstructured environment of an underwater cave complex using an autonomous underwater vehicle (AUV). For this experiment, the AUV was equipped with two acoustic sonar sensors to simultaneously map the caves' horizontal and vertical surfaces. Although the caves' spatial complexity required AUV guidance by a diver, this field deployment successfully demonstrates a scan-matching algorithm in a simultaneous localization and mapping framework that significantly reduces and bounds the localization error for fully autonomous navigation. These methods are generalizable for AUV exploration in confined underwater environments where surfacing or predeployment of localization equipment is not feasible, and they may provide a useful step toward AUV utilization as a response tool in confined underwater disaster areas.
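The core of point-to-point scan matching is computing the rigid transform that best aligns matched sonar points from consecutive scans. In 2D this has a closed-form least-squares solution; the sketch below assumes point correspondences are already known (finding them is the hard part of a full scan matcher) and is not the authors' algorithm:

```python
import math

def align_scans(src, dst):
    """Closed-form 2D rigid alignment (rotation theta, translation tx, ty)
    of matched point pairs, minimizing sum of squared residuals.
    Illustrative sketch: assumes known correspondences, no outliers."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    sdot = scross = 0.0
    for (ax, ay), (bx, by) in zip(src, dst):
        ax, ay, bx, by = ax - csx, ay - csy, bx - cdx, by - cdy
        sdot += ax * bx + ay * by       # sum of dot products
        scross += ax * by - ay * bx     # sum of 2D cross products
    theta = math.atan2(scross, sdot)    # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)      # translation maps rotated source
    ty = cdy - (s * csx + c * csy)      #   centroid onto target centroid
    return theta, tx, ty
```

In an iterative scan matcher, this solve alternates with nearest-neighbor correspondence search until convergence (the ICP pattern).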

13.
The growth of civil and military use has recently promoted the development of unmanned miniature aerial vehicles dedicated to surveillance tasks. These flying vehicles are often capable of carrying only a few dozen grammes of payload. To achieve autonomy for this kind of aircraft, novel sensors are required that can cope with strictly limited onboard processing power. One of the key aspects of autonomous behaviour is target tracking. Our visual tracking approach differs from other methods by using not an expensive camera but a Wii remote camera, i.e., commodity consumer hardware. The system works without stationary sensors, and all processing is done on an onboard microcontroller. The only assumptions are a good roll and pitch attitude estimate, provided by an inertial measurement unit, and a stationary pattern of four infrared spots on the target or the landing spot. This paper details experiments on hovering above a landing place, but tracking a slowly moving target is also possible.

14.
Autonomous navigation of micro aerial vehicles in environments that are simultaneously GPS-denied and visually degraded, especially in dark, texture-less, and dust- or smoke-filled settings, is particularly hard. However, a potential solution arises if such aerial robots are equipped with long-wave infrared thermal vision systems, which are unaffected by darkness and can penetrate many types of obscurants. In response, this study proposes a keyframe-based thermal–inertial odometry estimation framework tailored to the specific data and concepts of operation of thermal cameras. The front-end component of the proposed solution utilizes full radiometric data to establish reliable correspondences between thermal images, as opposed to operating on rescaled data as previous efforts have done. In parallel, by taking advantage of a keyframe-based optimization back-end, the proposed method is suited to handling the periods of data interruption common in thermal cameras, while also ensuring the joint optimization of the reprojection errors of 3D landmarks and the inertial measurement errors. The developed framework was verified with respect to its resilience, performance, and ability to enable autonomous navigation in an extensive set of experimental studies, including multiple field deployments in severely degraded, dark, and obscurant-filled underground mines.

15.
This paper proposes the gradual formation of a spatial pattern by a homogeneous robot group. Autonomous formation of a spatial pattern is one of the key technologies for advancing cooperative robotic systems, because pattern formation can be regarded as function differentiation of a multi-agent system. When multiple autonomous robots without given local tasks work cooperatively toward a global objective, function differentiation is the first and indispensable step; for example, each member of a group of cooperative insects or animals can autonomously recognize its own local task through communication with nearby members. Many papers have reported spatial pattern formation by multiple robots, but these approaches assumed that global information was available. It is, however, an impractical assumption that a small robot be equipped with an advanced sensing system for global localization, given the robot's scale and sensor size. A pattern-formation algorithm based on local information is desirable even when each robot lacks a global localization sensor. We therefore propose a gradual pattern formation algorithm, in which a group of robots increases the complexity of its pattern from a simple pattern to a goal pattern such as a polygon. The algorithm uses Turing's diffusion-driven instability theory to differentiate the roles of the robots in a group based only on local information. In experiments, we demonstrate that the robots can form several polygon patterns from a circle pattern by periodically differentiating their roles into vertices or sides, and we show the utility of the proposed gradual pattern formation algorithm for multiple autonomous robots based on local information.

16.
Considerable attention has been paid during the past decade to navigation systems based on visual optic-flow cues. Optic-flow-based visuomotor control systems have been implemented on an increasingly large number of sighted autonomous robots designed to travel under specific lighting conditions. Many algorithms based on conventional cameras or custom-made sensors are used nowadays to process visual motion. In this paper, we focus on the reliability of our optical sensors, which can measure the local one-dimensional angular speed of robots flying outdoors over a visual scene, in terms of accuracy, range, refresh rate, and sensitivity to illuminance variations. We designed, constructed, and characterized two miniature custom-made visual motion sensors: (i) the APIS (adaptive pixels for insect-based sensors) local motion sensor, based on an array custom-made in Very-Large-Scale Integration (VLSI) technology and equipped with Delbrück-type autoadaptive pixels, and (ii) the LSC-based local motion sensor (LSC is a component purchased from iC-Haus), based on off-the-shelf linearly amplified photosensors and equipped with an on-chip preamplification circuit. By combining these photodetectors with a low-cost optical assembly and a bioinspired visual processing algorithm, highly effective miniature sensors were obtained for measuring visual angular speed in field experiments. The study focused on the static characteristics and dynamic responses of these local motion sensors over a wide range of illuminance values, from 50 to 10,000 lux, both indoors and outdoors. Although outdoor experiments are of the greatest interest for equipping micro-air vehicles with visual motion sensors, we also performed indoor experiments for comparison. The LSC-based visual motion sensor was found to be more accurate in a narrow, 1.5-decade illuminance range, whereas the APIS-based visual motion sensor was more robust to illuminance changes over a larger, 3-decade range. The method presented here provides a new benchmark for thoroughly characterizing visual motion and optic-flow sensors designed to operate outdoors under various lighting conditions, in the unknown environments where future micro-aerial vehicles will have to navigate safely. © 2011 Wiley Periodicals, Inc.

17.
Future planetary exploration missions will require wheeled mobile robots ("rovers") to traverse very rough terrain with limited human supervision. Wheel–terrain interaction plays a critical role in rough-terrain mobility. In this paper, an online estimation method is presented that identifies key terrain parameters using on-board robot sensors. These parameters can be used for traversability prediction, or in a traction-control algorithm, to improve robot mobility and to plan safe actions for autonomous systems. Terrain parameters are also valuable indicators of planetary surface soil composition. The algorithm relies on a simplified form of classical terramechanics equations and uses a linear least-squares method to compute terrain parameters in real time. Simulation and experimental results show that the terrain estimation algorithm can accurately and efficiently identify key terrain parameters for various soil types.
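As an illustration of linear-least-squares terrain identification, the classical Mohr–Coulomb relation tau = c + sigma * tan(phi) is linear in the terrain parameters c (cohesion) and tan(phi) (internal friction), so both can be fit in closed form from paired shear-stress/normal-stress samples. This is a generic sketch of the least-squares step, not the paper's exact simplified terramechanics equations:

```python
def fit_mohr_coulomb(sigma, tau):
    """Closed-form least-squares fit of tau = c + sigma * tan_phi
    (Mohr-Coulomb shear strength). Returns (c, tan_phi).
    Generic sketch; real rover data would be noisy and filtered."""
    n = len(sigma)
    sx = sum(sigma)
    sy = sum(tau)
    sxx = sum(x * x for x in sigma)
    sxy = sum(x * y for x, y in zip(sigma, tau))
    det = n * sxx - sx * sx              # normal-equation determinant
    tan_phi = (n * sxy - sx * sy) / det  # slope: friction term
    c = (sy - tan_phi * sx) / n          # intercept: cohesion
    return c, tan_phi
```

Because the model is linear in its parameters, each new wheel-sensor sample can update the fit cheaply, which is what makes real-time, online estimation feasible on a rover.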

18.
Combining Stereo Vision and Inertial Navigation System for a Quad-Rotor UAV
This paper presents the development of a quad-rotor robotic platform equipped with a visual and inertial motion estimation system. Our objective is to develop a UAV capable of autonomously performing take-off, positioning, navigation, and landing in unknown environments. To provide accurate estimates of the UAV's position and velocity, stereo visual odometry and inertial measurements are fused using a Kalman filter. Real-time experiments consisting of motion detection and autonomous positioning demonstrate the performance of the robotic platform.

19.
In recent years, Unmanned Aerial Vehicles (UAVs) have gained increasing popularity. These vehicles are employed in many applications, from military operations to civilian tasks. One of the main fields of UAV research is the vehicle positioning problem. Fully autonomous vehicles are required to be as self-sustained as possible in terms of external sensors. To achieve this in situations where the global positioning system (GPS) does not function, computer vision can be used. This paper presents an implementation of computer vision to hold a quadrotor aircraft in a stable hovering position using a low-cost, consumer-grade video system. The successful implementation of this system required the development of a data-fusion algorithm that uses both inertial sensors and visual system measurements for positioning. The system design is unique in its ability to handle missing and considerably delayed video-system data. Finally, a control algorithm was implemented, and the whole system was tested experimentally. The results support the continuation of research in this field.

20.
The formation problem of distributed mobile robots has been studied in the literature for idealized robots, which are able to move instantaneously in any direction and are equipped with perfect range sensors. In this study, we address the formation problem for distributed mobile robots that are subject to physical constraints. The mobile robots considered here have physical dimensions, their motion is governed by physical laws, and they are equipped with sonar and infrared range sensors. The formation of lines and circles is investigated in detail. It is demonstrated that line and circle algorithms developed for idealized robots do not work well for physical robots. New line and circle algorithms, which take physical robots and sensors into account, are presented and validated through extensive simulations. © 1997 John Wiley & Sons, Inc.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号