Similar Literature
20 similar records found (search time: 15 ms)
1.
In field environments it is not usually possible to provide robots in advance with valid geometric models of their environment and task element locations. The robot or robot team needs to create and use these models to locate critical task elements by performing appropriate sensor-based actions. This paper presents a multi-agent algorithm for a manipulator guidance task based on cooperative visual feedback in an unknown environment. First, an information-based iterative algorithm to intelligently plan the robot's visual exploration strategy is used to enable it to efficiently build 3D models of its environment and task elements. The algorithm uses the measured scene information to find the next camera position based on the expected new information content of that pose. This is achieved by utilizing a metric derived from Shannon's information theory to determine optimal sensing poses for the agent(s) mapping a highly unstructured environment. Second, after an appropriate environment model has been built, the quality of the information content in the model is used to determine the constraint-based optimum view for task execution. The algorithm is applicable to both an individual agent and multiple cooperating agents. Simulation and experimental demonstrations on a cooperative robot platform performing a two-component insertion/mating task in the field show the effectiveness of this algorithm.
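The abstract does not give the exact formulation, but a minimal sketch of the entropy-driven next-best-view idea it describes, assuming an occupancy-grid world model and a caller-supplied visibility test (both hypothetical here, not taken from the paper), could look like this:

```python
import numpy as np

def cell_entropy(p):
    """Shannon entropy (bits) of a single occupancy probability."""
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def expected_information(grid, pose, visible_cells):
    """Expected new information at a candidate camera pose: the summed entropy of
    the grid cells that pose would observe. `visible_cells(grid, pose)` stands in
    for a ray-casting / view-frustum test supplied by the caller."""
    return sum(cell_entropy(grid[c]) for c in visible_cells(grid, pose))

def next_best_view(grid, candidate_poses, visible_cells):
    """Pick the sensing pose with the highest expected new information content."""
    return max(candidate_poses, key=lambda q: expected_information(grid, q, visible_cells))
```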

2.
As the autonomy of personal service robotic systems increases, so does their need to interact with their environment. The most basic interaction a robotic agent may have with its environment is to sense and navigate through it. For many applications it is not usually practical to provide robots in advance with valid geometric models of their environment. The robot will need to create these models by moving around and sensing the environment, while minimizing the complexity of the required sensing hardware. Here, an information-based iterative algorithm is proposed to plan the robot's visual exploration strategy, enabling it to most efficiently build a graph model of its environment. The algorithm is based on determining the information present in sub-regions of a 2-D panoramic image of the environment from the robot's current location using a single camera fixed on the mobile robot. Using a metric based on Shannon's information theory, the algorithm determines potential locations of nodes from which to further image the environment. Using a feature tracking process, the algorithm helps navigate the robot to each new node, where the imaging process is repeated. A Mellin transform and tracking process is used to guide the robot back to a previous node. This process of imaging, evaluation, branching and retracing continues until the robot has mapped the environment to a pre-specified level of detail. The set of nodes and the images taken at each node are combined into a graph to model the environment. By tracing its path from node to node, a service robot can navigate around its environment. This method is particularly well suited for flat-floored environments. Experimental results show the effectiveness of this algorithm.
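As a rough illustration only (the paper's exact metric is not reproduced here), the information content of sub-regions of a panoramic image could be scored from their grey-level histograms; the sector count and threshold below are arbitrary assumptions:

```python
import numpy as np

def region_entropy(gray_region, bins=256):
    """Shannon entropy (bits) of the grey-level histogram of an image sub-region."""
    hist, _ = np.histogram(gray_region, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def candidate_node_sectors(panorama, n_sectors=16, threshold=4.0):
    """Split a panoramic strip into angular sectors and keep the information-rich
    ones as candidate directions for placing new graph nodes."""
    sectors = np.array_split(panorama, n_sectors, axis=1)
    return [i for i, s in enumerate(sectors) if region_entropy(s) > threshold]
```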

3.
In this paper, we present techniques that allow one or multiple mobile robots to efficiently explore and model their environment. While much existing research in the area of Simultaneous Localization and Mapping (SLAM) focuses on issues related to uncertainty in sensor data, our work focuses on the problem of planning optimal exploration strategies. We develop a utility function that measures the quality of proposed sensing locations, give a randomized algorithm for selecting an optimal next sensing location, and provide methods for extracting features from sensor data and merging these into an incrementally constructed map. We also provide an efficient algorithm driven by our utility function. This algorithm is able to explore several steps ahead without incurring too high a computational cost. We have compared this exploration strategy with a totally greedy algorithm that optimizes our utility function with a one-step look-ahead. The planning algorithms which have been developed operate using simple but flexible models of the robot's sensor and actuator abilities. Techniques that allow implementation of these sensor models on top of the capabilities of actual sensors have been provided. All of the proposed algorithms have been implemented either on real robots (for the case of individual robots) or in simulation (for the case of multiple robots), and experimental results are given.
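The utility function itself is not given in the abstract; a hedged sketch of a one-step greedy selector, with an assumed trade-off between expected new information and travel cost (the weight lam and both helper functions are illustrative), might be:

```python
import math

def utility(expected_info, travel_cost, lam=0.2):
    """Illustrative utility of a candidate sensing location: reward expected new
    information, discount it by the cost of travelling there."""
    return expected_info * math.exp(-lam * travel_cost)

def greedy_next_sensing_location(candidates, info_of, cost_of):
    """One-step look-ahead: choose the candidate with the highest utility.
    `info_of` and `cost_of` are caller-supplied estimates per candidate."""
    return max(candidates, key=lambda q: utility(info_of(q), cost_of(q)))
```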

4.
Rui, Jorge, Adriano. 《Robotics and Autonomous Systems》, 2005, 53(3-4): 282-311
Cooperatively building 3-D maps of unknown environments is one of the application fields of multi-robot systems. This article addresses that problem through a probabilistic approach based on information theory. A distributed cooperative architecture model is formulated whereby robots exhibit cooperation through efficient information sharing. A probabilistic model of a 3-D map and a statistical sensor model are used to update the map upon range measurements, with an explicit representation of uncertainty through the definition of the map's entropy. Each robot is able to build a 3-D map from measurements of its own range sensor and is committed to cooperate with other robots by sharing useful measurements. An entropy-based measure of information utility is used to define a cooperation strategy for sharing useful information, without overwhelming communication resources with redundant or unnecessary information. Each robot reduces the map's uncertainty by exploring maximum-information viewpoints, using its current map to drive its sensor to frontier regions having the maximum entropy gradient. The proposed framework is validated through experiments with mobile robots equipped with stereo-vision sensors.
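A minimal sketch of the map-entropy bookkeeping described above, assuming the 3-D map is a voxel grid of occupancy probabilities (a simplification of the paper's probabilistic model, with hypothetical function names):

```python
import numpy as np

def voxel_entropy(p):
    """Per-voxel Shannon entropy (bits) of occupancy probabilities in [0, 1]."""
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def map_entropy(voxels):
    """Total entropy of the map; unknown space (p close to 0.5) dominates."""
    return float(voxel_entropy(voxels).sum())

def max_entropy_gradient_voxel(voxels):
    """Index of the voxel (in a 3-D grid) with the steepest local change of entropy,
    used here as a stand-in for the frontier region the sensor should be driven towards."""
    h = voxel_entropy(voxels)
    grad = np.linalg.norm(np.stack(np.gradient(h)), axis=0)
    return np.unravel_index(int(np.argmax(grad)), voxels.shape)
```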

5.
In dynamic unstructured environments, effectively sensing physical contact is essential for safe interaction by intelligent robots. To detect the full range of potential physical interactions, large-area tactile sensors need to be deployed over the robot's surface. Existing large-area tactile sensors are mainly realized as sensing arrays, but deploying sensing elements at scale poses great challenges in practice. Electrical Resistance Tomography (ERT), as a continuous sensing approach, promises to overcome some limitations of conventional tactile sensing arrays. A novel large-area tactile sensor is therefore designed using ERT. On this basis, an image reconstruction algorithm based on an adaptive Region of Interest (ROI) is proposed, which restricts image reconstruction to the interaction region so as to improve the sensor's spatial resolution. To validate the proposed imaging algorithm, it is evaluated comprehensively through simulations and physical experiments. The experiments verify that the algorithm effectively improves the tactile sensor's spatial resolution in the interaction region and yields high measurement accuracy. The results show that the sensor achieves an average localization error of 0.823 cm and accurately recognizes 8 different interaction patterns with an accuracy of up to 98.6%. This work shows that the sensor offers a new solution for realizing embodied tactile sensing in robots.

6.
Semantic information can help robots understand unknown environments better. In order to obtain semantic information efficiently and link it to a metric map, we present a new robot semantic mapping approach through human activity recognition in a human–robot coexisting environment. An intelligent mobile robot platform called ASCCbot creates a metric map while wearable motion sensors attached to the human body are used to recognize human activities. Combining pre-learned models of activity–furniture correlation and location–furniture correlation, the robot determines the probability distribution of the furniture types through a Bayesian framework and labels them on the metric map. Computer simulations and real experiments demonstrate that the proposed approach is able to create a semantic map of an indoor environment effectively.
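As a rough sketch of the Bayesian fusion step described above, assuming the activity cue and the location cue are conditionally independent given the furniture type (an assumption made here for illustration, not stated in the abstract; all values are hypothetical):

```python
def furniture_posterior(prior, p_activity_given_f, p_location_given_f):
    """Combine activity and location evidence about a furniture type.

    prior:              {furniture_type: P(f)}
    p_activity_given_f: {furniture_type: P(observed activity | f)}  (pre-learned model)
    p_location_given_f: {furniture_type: P(observed location | f)}  (pre-learned model)
    """
    unnorm = {f: prior[f] * p_activity_given_f.get(f, 0.0) * p_location_given_f.get(f, 0.0)
              for f in prior}
    z = sum(unnorm.values()) or 1.0
    return {f: v / z for f, v in unnorm.items()}

# Hypothetical example: a "sitting" activity observed near a wall suggests a sofa.
print(furniture_posterior(
    prior={"sofa": 0.3, "table": 0.4, "bed": 0.3},
    p_activity_given_f={"sofa": 0.7, "table": 0.2, "bed": 0.6},
    p_location_given_f={"sofa": 0.5, "table": 0.4, "bed": 0.1}))
```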

7.
Future planetary exploration missions will use cooperative robots to explore and sample rough terrain. To succeed, robots will need to cooperatively acquire and share data. Here, a cooperative multi-agent sensing architecture is presented and applied to the mapping of a cliff surface. This algorithm efficiently repositions the system's sensing agents using an information-theoretic approach and fuses sensory information using physical models to yield a geometrically consistent environment map. This map is then distributed among the agents using an information-based relevant data reduction scheme. Experimental results for cliff face mapping using the JPL Sample Return Rover (SRR) are presented. The method is shown to significantly improve mapping efficiency over conventional methods.

8.
Localization is a key issue for a mobile robot, in particular in environments where a globally accurate positioning system, such as GPS, is not available. In these environments, accurate and efficient robot localization is not a trivial task, as an increase in accuracy usually leads to an impoverishment in efficiency and vice versa. Active perception appears as an appealing way to improve the localization process by increasing the richness of the information acquired from the environment. In this paper, we present an active perception strategy for a mobile robot provided with a visual sensor mounted on a pan-tilt mechanism. The visual sensor has a limited field of view, so the goal of the active perception strategy is to use the pan-tilt unit to direct the sensor to informative parts of the environment. To achieve this goal, we use a topological map of the environment and a Bayesian non-parametric estimation of robot position based on a particle filter. We slightly modify the regular implementation of this filter by including an additional step that selects the best perceptual action using Monte Carlo estimations. We understand the best perceptual action as the one that produces the greatest reduction in uncertainty about the robot position. We also consider in our optimization function a cost term that favors efficient perceptual actions. Previous works have proposed active perception strategies for robot localization, but mainly in the context of range sensors, grid representations of the environment, and parametric techniques, such as the extended Kalman filter. Accordingly, the main contributions of this work are: i) Development of a sound strategy for active selection of perceptual actions in the context of a visual sensor and a topological map; ii) Real time operation using a modified version of the particle filter and Monte Carlo based estimations; iii) Implementation and testing of these ideas using simulations and a real case scenario. Our results indicate that, in terms of accuracy of robot localization, the proposed approach decreases mean average error and standard deviation with respect to a passive perception scheme. Furthermore, in terms of efficiency, the active scheme is able to operate in real time without adding a relevant overhead to the regular robot operation.
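A hedged sketch of the added perceptual-action selection step, assuming a particle set, a list of candidate pan-tilt actions, and caller-supplied observation-simulation and likelihood functions (all names and the trade-off weight beta are illustrative, not from the paper):

```python
import math
import random

def weight_entropy(weights):
    """Entropy of normalised particle weights, a crude proxy for pose uncertainty."""
    z = sum(weights) or 1.0
    return -sum((w / z) * math.log(w / z) for w in weights if w > 0)

def select_perceptual_action(particles, weights, actions, simulate_obs, likelihood, cost,
                             n_samples=20, beta=0.1):
    """Monte Carlo scoring of candidate pan-tilt actions: minimise the expected
    posterior entropy plus a movement-cost term."""
    def expected_entropy(a):
        total = 0.0
        for _ in range(n_samples):
            x = random.choices(particles, weights=weights)[0]   # hypothesised true pose
            z = simulate_obs(x, a)                              # simulated measurement
            posterior = [w * likelihood(z, p, a) for w, p in zip(weights, particles)]
            total += weight_entropy(posterior)
        return total / n_samples
    return min(actions, key=lambda a: expected_entropy(a) + beta * cost(a))
```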

9.
A new approach to the design of a neural network (NN) based navigator is proposed in which the mobile robot travels to a pre-defined goal position safely and efficiently without any prior map of the environment. This navigator can be optimized for any user-defined objective function through the use of an evolutionary algorithm. The motivation of this research is to develop an efficient methodology for general goal-directed navigation in generic indoor environments as opposed to learning specialized primitive behaviors in a limited environment. To this end, a modular NN has been employed to achieve the necessary generalization capability across a variety of indoor environments. Herein, each NN module takes charge of navigating in a specialized local environment, which is the result of decomposing the whole path into a sequence of local paths through clustering of all the possible environments. We verify the efficacy of the proposed algorithm over a variety of both simulated and real unstructured indoor environments using our autonomous mobile robot platform.

10.
In this paper, we propose a robust pose tracking method for mobile robot localization with an incomplete map in a highly non-static environment. This algorithm will work with a simple map that does not include complete information about the non-static environment. With only an initial incomplete map, a mobile robot cannot estimate its pose because of the inconsistency between the real observations from the environment and the predicted observations on the incomplete map. The proposed localization algorithm uses the approach of sampling from a non-corrupted window, which allows the mobile robot to estimate its pose more robustly in a non-static environment even when subjected to severe corruption of observations. The algorithm sequence involves identifying the corruption by comparing the real observations with the corresponding predicted observations of all particles, sampling particles from a non-corrupted window that consists of multiple non-corrupted sets, and filtering sensor measurements to provide weights to particles in the corrupted sets. After localization, the estimated path may still contain some errors due to long-term corruption. These errors can be corrected using nonlinear constrained least-squares optimization. The incomplete map is then updated using both the corrected path and the stored sensor information. The performance of the proposed algorithm was verified via simulations and experiments in various highly non-static environments. Our localization algorithm can increase the success rate of tracking its pose to more than 95% compared to estimates made without its use. After that, the initial incomplete map is updated based on the localization result.

11.
A mobile robot needs to know its position and orientation with accuracy in order to decide the control actions that allow it to complete its assigned tasks successfully. To obtain this information, dead-reckoning-based systems have been used, and more recently inertial navigation systems. However, these systems accumulate errors that grow over time, so eventually the information they provide becomes useless. Because of this, there should be a periodic process that updates the vehicle's position and orientation. The process of determining the robot's position and orientation using information from external sensors is defined as mobile robot relocalization. Clearly, the more frequent this process, the better the robot will know its position, and the better its movements will be directed toward the point it must reach. Algorithms to achieve this can be classified into two large groups: relocalization through an a priori map of the environment, and relocalization through the detection of landmarks present in that environment. The algorithm presented in this paper belongs to the first group. The sensor used is a combination of a laser diode and a CCD camera. The sensor information is modelled as straight lines that are matched against an a priori map of the environment, from which the position of the mobile robot is estimated. The matching process is accomplished within an extended Kalman filter. The algorithm is able to work in real time and updates the position of the robot continuously.
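The abstract gives no equations; a textbook EKF correction with a single matched line feature, assuming lines parameterised as (rho, theta) and a caller-supplied measurement Jacobian (this is a generic sketch, not the paper's exact formulation), would be:

```python
import numpy as np

def ekf_line_update(x, P, z, expected_line, H, R):
    """One EKF correction with a matched line feature.

    x, P          : robot pose estimate [x, y, theta] and its 3x3 covariance
    z             : measured line parameters, e.g. (rho, theta) from the laser/CCD sensor
    expected_line : predicted line parameters from the a priori map at pose x
    H             : 2x3 Jacobian of the measurement model (caller-supplied)
    R             : 2x2 measurement noise covariance
    """
    y = z - expected_line                       # innovation
    S = H @ P @ H.T + R                         # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```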

12.
13.
《Advanced Robotics》2013,27(13-14):1651-1673
This paper presents a new complete coverage algorithm for a robotic vacuum cleaner (RVC) with a low-cost sensor in an unknown environment. To achieve complete coverage, the RVC needs a navigation system for precise position estimation, using localization with a prior map or with a map built from information continuously collected from the environment. To do this, two-dimensional laser range finders and vision sensors are becoming increasingly popular in mobile robotics, and various methods using sensors like these have been introduced by many researchers. However, it is difficult to apply these methods to the sensors used in most RVCs due to their constraints. In this paper, we present a new method applicable to most RVCs. In developing the method, we considered two main problems: maintaining a low computational load, and coping with low-cost sensor systems with limited range, detection uncertainty and measurement error. To solve these problems, we assume that the major structures of an indoor environment are rectilinear and can be represented by sets of parallel and perpendicular lines. We then derive an algorithm that uses this assumption to map the environment, localize the robot and plan the coverage path with a new cellular decomposition approach. Simulation and experiments verify that the proposed method guarantees complete coverage.

14.
Autonomous navigation in unstructured environments is a complex task and an active area of research in mobile robotics. Unlike urban areas with lanes, road signs, and maps, the environment around our robot is unknown and unstructured. Such an environment requires careful examination, as it is random and continuous, and the number of possible perceptions and actions is infinite. We describe a terrain classification approach for our autonomous robot based on Markov Random Fields (MRFs) on fused 3D laser and camera image data. Our primary data structure is a 2D grid whose cells carry information extracted from sensor readings. All cells within the grid are classified and their surface is analyzed with regard to negotiability for wheeled robots. Knowledge of our robot's egomotion allows fusion of previous classification results with current sensor data in order to fill data gaps and regions outside the visibility of the sensors. We estimate egomotion by integrating information from an IMU, GPS measurements, and wheel odometry in an extended Kalman filter. In our experiments we achieve a recall ratio of about 90% for detecting streets and obstacles. We show that our approach is fast enough to be used on autonomous mobile robots in real time.

15.
To address the uncertainty of ultrasonic sensors in perceiving the environment and the noise present in the localization process, a Pioneer 3-AT robot is used as the experimental platform. Probabilistic algorithms are applied to handle the uncertainty of and between objects, the intrinsic relationships among the various algorithms are worked out, and localization algorithms for mobile robots are analyzed and studied. Experiments are carried out on the MobileSim platform using a self-built global map of the site. The experiments show that a mobile robot using the improved Monte Carlo algorithm achieves good localization performance and meets practical requirements.
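For context, a minimal predict-weight-resample cycle of Monte Carlo localization is sketched below; the improvements of the cited algorithm are not reproduced, and the motion and sonar measurement models are caller-supplied placeholders:

```python
import random

def mcl_step(particles, control, measurement, motion_sample, measurement_prob):
    """One predict-weight-resample cycle of Monte Carlo localization.

    motion_sample(p, u)    : sample a new pose given old pose p and control u
    measurement_prob(z, p) : likelihood of sonar reading z at pose p (map-based)
    """
    predicted = [motion_sample(p, control) for p in particles]
    weights = [measurement_prob(measurement, p) for p in predicted]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    return random.choices(predicted, weights=weights, k=len(particles))
```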

16.
靳保  王树国  付宜利  曹政才 《控制与决策》2005,20(11):1216-1220
For the problem of real-time obstacle avoidance of a multi-joint robot in unstructured environments, an online robot path-planning method based on sensor information is proposed. Infrared sensors provide information about the environment around the robot arm, and by computing C-space obstacle distances along selected directions in the configuration space, the configuration point is driven to the goal in stages. Building the C-space around the entire configuration point is avoided, which suits the real-time obstacle-avoidance requirements of robots in unknown environments. Simulation results verify the effectiveness of the algorithm.

17.
Multisensor Fusion: An Autonomous Mobile Robot
A conventional autonomous mobile robot is introduced. The main idea is the integration of many conventional and sophisticated sensor fusion techniques, introduced by several authors in recent years. We show the actual possibility of integrating all these techniques together, rather than analyzing implementation details. The topics of multisensor fusion, observation integration and sensor coordination are discussed throughout the article. The final goal is to demonstrate the validity of both mathematical and artificial intelligence techniques in guaranteeing vehicle survival in a dynamic environment, while the robot carries out a specific task. We review conventional techniques for the management of uncertainty while we describe an implementation of a mobile robot which combines on-line heterogeneous sensors in its navigation and localisation tasks.

18.
Robotic manipulation systems that operate in unstructured environments must be responsive to feedback from sensors that are disparate in both location and modality. This paper describes a distributed framework for assimilating the disparate feedback provided by force and vision sensors, including active vision sensors, for robotic manipulation systems. The main components of the expectation-based framework include object schemas and port-based agents. Object schemas represent the manipulation task internally in terms of geometric models with attached sensor mappings. Object schemas are dynamically updated by sensor feedback, and thus provide an ability to perform three-dimensional spatial reasoning during task execution. Because object schemas possess knowledge of sensor mappings, they are able to both select appropriate sensors and guide active sensors based on task characteristics. Port-based agents are the executors of reference inputs provided by object schemas and are defined in terms of encapsulated control strategies. Experimental results demonstrate the capabilities of the framework in two ways: the performance of manipulation tasks with active camera-lens systems, and the assimilation of force and vision sensory feedback.

19.
To enable a robot to effectively recognize the type of region it occupies, a region-type recognition method based on hypothesis testing is proposed. First, taking observation errors into account, a probability-based method for recognizing unknown obstacles is presented. The observations are then treated as samples of the surrounding environment: a hypothesis about the region type is made and tested against the number of unknown obstacles in the observations, thereby recognizing the region type. The method accounts for the observation errors encountered in practice and bounds the probability of misclassification. Experiments show that the method can effectively recognize the type of region the robot occupies under observation errors, and it has been successfully applied to path planning in partially unknown environments.

20.
We aim at developing autonomous miniature hovering flying robots capable of navigating in unstructured GPS-denied environments. A major challenge is the miniaturization of the embedded sensors and processors that allow such platforms to fly by themselves. In this paper, we propose a novel ego-motion estimation algorithm for hovering robots equipped with inertial and optic-flow sensors that runs in real-time on a microcontroller and enables autonomous flight. Unlike many vision-based methods, this algorithm does not rely on feature tracking, structure estimation, additional distance sensors or assumptions about the environment. In this method, we introduce the translational optic-flow direction constraint, which uses the optic-flow direction but not its scale to correct for inertial sensor drift during changes of direction. This solution requires comparatively much simpler electronics and sensors and works in environments of any geometry. Here we describe the implementation and performance of the method on a hovering robot equipped with eight 0.65 g optic-flow sensors, and show that it can be used for closed-loop control of various motions.
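The abstract only names the translational optic-flow direction constraint; as an illustration of using a flow direction without its scale, a complementary-filter-style correction of an inertially integrated velocity might look like the following (a simplified stand-in under strong assumptions, not the authors' filter; the gain and frame conventions are hypothetical):

```python
import numpy as np

def direction_constrained_velocity(v_inertial, flow_direction, gain=0.05):
    """Nudge the inertially-integrated velocity toward the direction implied by the
    translational optic flow, without using the flow's magnitude (scale is unknown).

    v_inertial     : current 2-D velocity estimate from IMU integration
    flow_direction : unit vector of the motion direction implied by the translational
                     optic flow, expressed in the same frame
    """
    speed = np.linalg.norm(v_inertial)
    if speed < 1e-9:
        return v_inertial
    v_dir = v_inertial / speed
    corrected_dir = (1 - gain) * v_dir + gain * flow_direction
    corrected_dir /= np.linalg.norm(corrected_dir)
    # Keep the inertial speed, correct only the direction of travel.
    return speed * corrected_dir
```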
