Found 20 similar documents (search time: 0 ms)
1.
2.
3.
An amphibious mobile robot relies on effective sensing to adapt to complicated amphibious environments. In this paper, we present a multifunctional, low-power whisker-like touch sensor inspired by amphibious animals. The sensor comprises a leverage system and a two-dimensional position-tracing system that transform the motion of the bio-whisker into changing laser-spot coordinates. On land, the motor-driven sensor tracks the movement of the bio-whisker directly, reporting changes in contact position to sense nearby objects and explore their surfaces by touch. Underwater, the sensor obtains the external flow direction and velocity in real time through passive impulsion. Test results showed that our prototype can accurately sense flow or drag-force direction over a full 360°, measure flow velocities below 1 m/s, and correctly recognize straight and arc edges of obstacles by touch.
4.
This paper presents a method for estimating the position and orientation of multiple robots from a set of azimuth angles of landmarks and other robots observed by multiple omnidirectional vision sensors. Our method simultaneously performs self-localization by each robot and reconstruction of the relative configuration between robots. Even when it is impossible to identify the correspondence between each index of the observed azimuth angles and those of the robots, our method can reconstruct not only a relative configuration between robots using 'triangle and enumeration constraints' but also an absolute one using knowledge of landmarks in the environment. To show the validity of our method, it is applied to multiple mobile robots, each equipped with an omnidirectional vision sensor, in both simulation and a real environment. The experimental results show that our method is more precise and more stable than self-localization by each robot alone, and that it can handle the combinatorial-explosion problem.
Correspondence to: T. Nakamura (e-mail: ntakayuk@sys.wakayama-u.ac.jp)
5.
6.
Mobile robots perceive their surroundings mainly through point clouds collected by lidar and images captured by cameras. In extreme weather or at night, camera images suffer severe interference. This paper proposes a lidar-image cross-modal retrieval technique for outdoor mobile robots based on cluster-canonical correlation analysis (cluster-CCA). First, deep-learning networks extract features from the point clouds and images; then cluster-CCA maps the features of the two modalities into a shared subspace; finally, retrieval is performed by computing Euclidean distances, so that the image most similar to a given point cloud can be retrieved from an image database. The proposed method was validated on the KITTI dataset, achieving cross-modal retrieval from point clouds to images, and the results confirm the effectiveness of cluster-CCA for lidar-image retrieval on outdoor mobile robots.
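The final retrieval step described in the abstract can be sketched as a nearest-neighbor search in the shared subspace. This is a minimal illustration, assuming both modalities' features have already been projected by cluster-CCA; the feature values and function name are illustrative, not from the paper.

```python
import numpy as np

def retrieve_nearest(query_feat, gallery_feats):
    """Return the index of the gallery feature closest to the query in
    Euclidean distance (both assumed already projected into the shared
    CCA subspace)."""
    d = np.linalg.norm(gallery_feats - query_feat, axis=1)
    return int(np.argmin(d))

# Toy example: three image features in a 2-D subspace.
gallery = np.array([[0.0, 0.0], [1.0, 1.0], [3.0, 0.5]])
query = np.array([0.9, 1.1])   # projected point-cloud feature
idx = retrieve_nearest(query, gallery)
```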
7.
Taking a four-wheeled mobile robot as the research object, a complete mathematical model of the robot is established, comprising the kinematic model, the dynamic model, and the drive-motor model. On this basis, a robust trajectory-tracking controller with global convergence is designed using the backstepping approach. Including the drive-motor model makes the controller better match practical control requirements, and decomposing the design into a kinematic controller, a dynamic controller, and a motor controller reduces its difficulty. A Lyapunov function for the system is constructed to prove that, under the resulting controller, this type of mobile robot achieves global asymptotic tracking of a given trajectory. Simulation results show that the backstepping-based controller is effective.
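The kinematic layer of such a cascaded design is often the classic unicycle tracking law. The sketch below shows that standard law (a common backstepping starting point), not the paper's full three-level controller; the gains kx, ky, kth are illustrative assumptions.

```python
import math

def tracking_control(e_x, e_y, e_th, v_ref, w_ref, kx=1.0, ky=4.0, kth=2.0):
    """Kinematic-level tracking law on the pose error (e_x, e_y, e_th)
    expressed in the robot frame; returns linear and angular velocity
    commands. Gains are illustrative tuning constants."""
    v = v_ref * math.cos(e_th) + kx * e_x
    w = w_ref + v_ref * (ky * e_y + kth * math.sin(e_th))
    return v, w

# With zero tracking error the commands reduce to the reference velocities.
v, w = tracking_control(0.0, 0.0, 0.0, v_ref=0.5, w_ref=0.1)
```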
8.
Active Markov localization for mobile robots
Localization is the problem of determining the position of a mobile robot from sensor data. Most existing localization approaches are passive, i.e., they do not exploit the opportunity to control the robot's effectors during localization. This paper proposes an active localization approach. The approach is based on Markov localization and provides rational criteria for (1) setting the robot's motion direction (exploration), and (2) determining the pointing direction of the sensors so as to localize the robot most efficiently. Furthermore, it is able to deal with noisy sensors and approximate world models. The appropriateness of our approach is demonstrated empirically using a mobile robot in a structured office environment.
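At its core, Markov localization maintains a probability distribution over poses and updates it with a motion (prediction) step and a sensor (correction) step. A minimal 1-D grid sketch of that update, with toy motion and sensor models chosen for illustration:

```python
import numpy as np

def markov_update(belief, motion_kernel, likelihood):
    """One Markov-localization step on a 1-D grid: convolve the belief
    with the motion model (prediction), then weight each cell by the
    sensor likelihood and renormalize (correction)."""
    predicted = np.convolve(belief, motion_kernel, mode="same")
    posterior = predicted * likelihood
    return posterior / posterior.sum()

belief = np.full(5, 0.2)                     # uniform prior over 5 cells
motion = np.array([0.0, 1.0, 0.0])           # stay-in-place motion model
like = np.array([0.1, 0.1, 0.6, 0.1, 0.1])   # sensor favours cell 2
post = markov_update(belief, motion, like)
```

The active variant described in the abstract would, on top of this, choose the motion and sensor directions expected to reduce the belief's uncertainty fastest.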
9.
Artificial potential field path planning for nonholonomic mobile robots
Path-planning methods for mobile robots based on artificial potential fields have attracted wide attention over the past two decades. However, researchers have focused mainly on various theoretical issues and have mostly treated the robot as an unconstrained point mass or rigid body, so these methods usually cannot be applied directly to wheeled mobile robots subject to nonholonomic constraints. Addressing the implementation of the artificial-potential-field method on wheeled mobile robots, this paper analyzes two existing implementations theoretically, showing that they risk unreachable goals and cannot balance path-planning performance across different environments, and then proposes a new method based on fuzzy rules that resolves these problems by adjusting the control mode and parameters in different situations. Simulation studies show that, while guaranteeing goal reachability, the method achieves better overall planning performance in a variety of environments.
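For reference, the classic artificial-potential-field force that such implementations build on combines an attractive pull toward the goal with a repulsive push from obstacles inside an influence distance. A minimal sketch (gains and the influence distance d0 are illustrative, and this is the unconstrained point-mass form the paper says cannot be applied directly to nonholonomic robots):

```python
import numpy as np

def apf_force(q, goal, obstacles, k_att=1.0, k_rep=1.0, d0=1.0):
    """Classic artificial-potential-field force at position q: attractive
    term toward the goal plus a repulsive term from each obstacle closer
    than the influence distance d0."""
    f = k_att * (goal - q)                        # attractive term
    for obs in obstacles:
        diff = q - obs
        d = np.linalg.norm(diff)
        if 0 < d < d0:                            # inside influence range
            f += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    return f

q = np.array([0.0, 0.0])
goal = np.array([2.0, 0.0])
f = apf_force(q, goal, [np.array([5.0, 5.0])])    # obstacle far away
```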
10.
11.
12.
This paper discusses velocity estimation for a wheeled mobile robot without velocity sensors. The outputs of an accelerometer and a position sensor are used to estimate the robot's velocity in real time, and the data from the different sensors are fused by a method that adjusts the weights according to acceleration disturbances. Experiments verify the effectiveness of the method.
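One way to realize such a fusion is to blend the accelerometer-integrated velocity with the position-differenced velocity, shrinking the accelerometer weight as the measured acceleration (a disturbance proxy) grows. The weighting rule and constants below are assumptions for illustration only, not the paper's actual scheme.

```python
def fuse_velocity(v_prev, a_meas, pos_delta, dt, alpha_base=0.5, beta=1.0):
    """Blend two velocity estimates: integration of the accelerometer
    (v_prev + a*dt) and differentiation of the position sensor
    (pos_delta/dt). The accelerometer weight alpha is reduced as the
    measured acceleration grows; alpha_base and beta are illustrative."""
    v_acc = v_prev + a_meas * dt          # inertial estimate
    v_pos = pos_delta / dt                # positional estimate
    alpha = alpha_base / (1.0 + beta * abs(a_meas))
    return alpha * v_acc + (1.0 - alpha) * v_pos

# Both estimates agree at 1 m/s, so the fused value is 1 m/s.
v = fuse_velocity(v_prev=1.0, a_meas=0.0, pos_delta=0.1, dt=0.1)
```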
13.
《Advanced Robotics》2013,27(7-8):791-816
This paper presents a new obstacle-recognition method for mobile robots that analyzes optical-flow information acquired from dynamic images. First, the optical-flow field is detected in image sequences from a camera on a moving observer, and moving-object candidates are extracted using a normalized squared residual error [the focus-of-expansion (FOE) residual error] computed while estimating the FOE. Next, the optical-flow directions and intensity values are accumulated for the pixels in each candidate region to calculate the distribution widths around the principal axes of inertia and the direction of those axes. Finally, each candidate is classified into an object category expected to appear in the scene by comparing the proportion and direction values with standard data ranges for the objects, determined in preliminary experiments. Experimental results for car/bicycle/pedestrian recognition in real outdoor scenes show the effectiveness of the proposed method.
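The FOE estimation underlying that residual can be posed as a least-squares problem: for a purely translating camera, each flow vector u at pixel p is parallel to (p − FOE), giving one linear constraint per pixel. A minimal sketch with synthetic radial flow (the data are illustrative; the paper's residual would be the normalized squared error of this fit):

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares focus-of-expansion estimate: the parallelism
    constraint (p - FOE) x u = 0 rearranges to the linear equation
    uy*fx - ux*fy = px*uy - py*ux for each pixel."""
    A = np.column_stack([flows[:, 1], -flows[:, 0]])
    b = points[:, 0] * flows[:, 1] - points[:, 1] * flows[:, 0]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# Synthetic flow expanding radially from (2, 3).
true_foe = np.array([2.0, 3.0])
pts = np.array([[0.0, 0.0], [5.0, 1.0], [3.0, 7.0], [-2.0, 4.0]])
flw = 0.1 * (pts - true_foe)       # flow points away from the FOE
foe = estimate_foe(pts, flw)
```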
14.
In this paper, an experimental study of a navigation system that allows a mobile robot to travel in an environment about which it has no prior knowledge is described. Data from multiple ultrasonic range sensors are fused into a representation called Heuristic Asymmetric Mapping to deal with uncertainties in the raw sensory data, caused mainly by the transducer's beam-opening angle and specular reflections. It features a fast data-refresh rate to handle dynamic environments. The potential-field method is used for on-line path planning based on the constructed grid-type sonar map, so the mobile robot can learn to find a safe path from its self-built sonar map. To solve the local-minima problem of the conventional potential-field method, a new type of potential function is formulated; it is simple and fast in execution, using concepts from distance-transform path-finding algorithms. The developed navigation system has been tested on our experimental mobile robot to demonstrate its possible application in practical situations. Several interesting simulation and experimental results are presented. This work was supported in part by the National Science Council of Taiwan, ROC under grant NSC-82-0422-E-009-321.
15.
16.
Accounting for wheel–terrain interaction is crucial for navigation and traction control of mobile robots in outdoor environments and rough terrain. Wheel slip is a surface hazard that must be detected to mitigate the risk of losing controllability of the robot or of mission failure. The open problems addressed in the terramechanics field are (1) the need for in situ wheel-slippage estimation in harsh environments using low-cost, low-power, easy-to-integrate sensors, and (2) removing the need for prior information about the soil, which is not always available. This paper presents a novel slip-estimation method that uses only two proprioceptive sensors (an IMU and a wheel encoder) to estimate wheel slip with deep-learning methods. It is experimentally shown to be feasible in real-world, uneven outdoor terrain without assuming prior soil information. Comparisons with previously used machine-learning algorithms on the continuous and discrete slip-estimation problems show more than 9% and 14% improvement in estimation performance, respectively.
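The quantity being estimated is the longitudinal slip ratio, conventionally defined from the commanded wheel speed and the actual body speed. A minimal sketch of that definition (the paper estimates this from IMU and encoder data via deep learning; here the speeds are given directly for illustration):

```python
def slip_ratio(wheel_radius, omega, v_body, eps=1e-6):
    """Longitudinal slip ratio for a driving wheel:
    s = (r*omega - v) / (r*omega). s = 0 means pure rolling and
    s = 1 means the wheel spins in place; eps guards division by zero."""
    v_wheel = wheel_radius * omega
    return (v_wheel - v_body) / max(abs(v_wheel), eps)

# A 0.1 m wheel at 10 rad/s moving the body at 0.8 m/s slips 20%.
s = slip_ratio(wheel_radius=0.1, omega=10.0, v_body=0.8)
```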
17.
K. Misu 《Advanced Robotics》2013,27(22):1483-1495
The ability to detect and follow a specific person is indispensable for mobile service robots. Many image-based methods have been proposed for person detection and identification; however, they are sometimes vulnerable to illumination changes. This paper therefore proposes a novel approach to the problem: using 3D LIDARs for person detection and identification, and a directivity-controllable antenna (the ESPAR antenna) for localizing a specific person even under long-term occlusion and/or out-of-view situations. A sensor-fusion framework, combined with adaptive state-based strategy switching, has also been developed to achieve reliable person following. Experimental results in actual outdoor environments show the effectiveness of the proposed framework.
18.
Using infrared sensors for distance measurement in mobile robots
The amplitude response of infrared (IR) sensors based on the reflected amplitude of surrounding objects is non-linear and depends on the reflectance characteristics of the object surface. As a result, the main use of IR sensors in robotics is obstacle avoidance. Nevertheless, their inherently fast response is very attractive for enhancing the real-time operation of a mobile robot in, for instance, map-building tasks. It therefore seems worthwhile to develop new low-cost IR sensors able to measure distances accurately with reduced response times. In this paper, a new IR sensor based on the light intensity back-scattered from objects, able to measure distances of up to 1 m, is described. The sensor model is also described, and the expected errors in distance estimates are analysed and modelled. Finally, the experimental results obtained are discussed.
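A common first-order model for such intensity-based ranging is an inverse-square law, inverted to recover distance. This is a simplified assumption for illustration, not the paper's calibrated sensor model: the constant k lumps emitter power and surface reflectance and would need per-surface calibration.

```python
import math

def distance_from_intensity(intensity, k=1.0):
    """Invert the simple inverse-square reflection model I = k / d**2 to
    get d = sqrt(k / I); k (emitter power times surface reflectance) is
    an illustrative calibration constant."""
    return math.sqrt(k / intensity)

# With k = 1, a measured intensity of 4 corresponds to d = 0.5 m.
d = distance_from_intensity(intensity=4.0, k=1.0)
```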
19.
20.