Similar Documents
20 similar documents found (search time: 31 ms)
1.
We address the problem of controlling a mobile robot to explore a partially known environment. The robot's objective is to maximize the amount of information collected about the environment. We formulate the problem as a partially observable Markov decision process (POMDP) with an information-theoretic objective function, and solve it by applying forward simulation algorithms with an open-loop approximation. We present a new sample-based approximation of mutual information useful in mobile robotics, which can be seamlessly integrated with forward simulation planning algorithms. We investigate the usefulness of POMDP-based planning for exploration and, to alleviate some of its weaknesses, propose a combination with frontier-based exploration. Experimental results in simulated and real environments show that, depending on the environment, POMDP-based planning can improve exploration performance over frontier exploration.
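The sample-based mutual-information idea can be illustrated with a minimal sketch (a toy stand-in, not the paper's actual estimator): for a single Bernoulli occupancy cell and a hypothetical beam sensor with assumed hit rates, MI(cell; reading) = H(cell) − E_z[H(cell | z)], with the expectation over readings approximated by sampling.

```python
import math
import random

def bernoulli_entropy(p):
    """Entropy (nats) of a Bernoulli occupancy cell."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log(p) - (1.0 - p) * math.log(1.0 - p)

def posterior(p, hit, p_hit_occ=0.9, p_hit_free=0.2):
    """Bayes update of the occupancy probability after one beam reading
    (the two hit rates are assumed sensor parameters)."""
    l_occ = p_hit_occ if hit else 1.0 - p_hit_occ
    l_free = p_hit_free if hit else 1.0 - p_hit_free
    return l_occ * p / (l_occ * p + l_free * (1.0 - p))

def sampled_mutual_information(p, n_samples=2000, rng=None):
    """MI(cell; reading) = H(cell) - E_z[H(cell | z)], expectation by sampling."""
    rng = rng or random.Random(0)
    expected_posterior_h = 0.0
    for _ in range(n_samples):
        occ = rng.random() < p                        # sample the true cell state
        hit = rng.random() < (0.9 if occ else 0.2)    # sample a reading given it
        expected_posterior_h += bernoulli_entropy(posterior(p, hit))
    return bernoulli_entropy(p) - expected_posterior_h / n_samples
```

An uncertain cell (p ≈ 0.5) yields high mutual information and is worth sensing; a near-known cell yields almost none — exactly the signal an information-theoretic explorer maximises.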

2.
Flexible, general-purpose robots need to autonomously tailor their sensing and information processing to the task at hand. We pose this challenge as the task of planning under uncertainty. In our domain, the goal is to plan a sequence of visual operators to apply on regions of interest (ROIs) in images of a scene, so that a human and a robot can jointly manipulate and converse about objects on a tabletop. We pose visual processing management as an instance of probabilistic sequential decision making, and specifically as a Partially Observable Markov Decision Process (POMDP). The POMDP formulation uses models that quantitatively capture the unreliability of the operators and enable a robot to reason precisely about the trade-offs between plan reliability and plan execution time. Since planning in practical-sized POMDPs is intractable, we partially ameliorate this intractability for visual processing by defining a novel hierarchical POMDP based on the cognitive requirements of the corresponding planning task. We compare our hierarchical POMDP planning system (HiPPo) with a non-hierarchical POMDP formulation and the Continual Planning (CP) framework that handles uncertainty in a qualitative manner. We show empirically that HiPPo and CP outperform the naive application of all visual operators on all ROIs. The key result is that the POMDP methods produce more robust plans than CP or the naive visual processing. In summary, visual processing problems represent a challenging domain for planning techniques and our hierarchical POMDP-based approach for visual processing management opens up a promising new line of research.

3.
Uncertainty in motion planning has three main sources: motion error, sensing error, and an imperfect environment map. Despite the significant effect of all three sources on motion planning problems, most planners take into account only one or at most two of them. We propose a new motion planner, called Guided Cluster Sampling (GCS), that takes into account all three sources of uncertainty for robots with active sensing capabilities. GCS uses the Partially Observable Markov Decision Process (POMDP) framework and the point-based POMDP approach. Although point-based POMDPs have shown impressive progress over the past few years, they perform poorly when the environment map is imperfect. This poor performance is due to the extremely high-dimensional state space, which translates to an extremely large belief space B. We alleviate this problem by constructing a more suitable sampling distribution, based on the observation that when the robot has active sensing capability, B can be partitioned into a collection of much smaller sub-spaces, and an optimal policy can often be generated by sufficiently sampling a small subset of the collection. Utilizing these observations, GCS samples B in two stages: a subspace is sampled from the collection, and then a belief is sampled from the subspace. It uses information from the set of sampled sub-spaces and sampled beliefs to guide subsequent sampling. Simulation results on marine robotics scenarios suggest that GCS can generate reasonable policies for motion planning problems with uncertain motion, sensing, and environment maps that are unsolvable by the best point-based POMDPs today. Furthermore, GCS handles POMDPs with continuous state, action, and observation spaces. We show that for a class of POMDPs that often occur in robot motion planning, given enough time, GCS converges to the optimal policy.
To the best of our knowledge, this is the first convergence result for point-based POMDPs with a continuous action space.

4.
In this paper, we address the inspection planning problem of "seeing" the whole area of a given workspace with a mobile robot. The problem is decoupled into a sensor placement problem and a multi-goal path planning problem of visiting the found sensing locations. Although the decoupled approach provides a feasible solution, its overall quality can be poor because the sub-problems are solved independently. We propose a new randomized approach that considers the path planning problem during the solution process of the sensor placement problem. The proposed algorithm guides the randomization process according to prior knowledge about the environment. The algorithm is compared with two algorithms already used in inspection planning. The performance of the algorithms is evaluated in several real environments and for a set of visibility ranges. The proposed algorithm provides better solutions in both evaluated criteria: the number of sensing locations and the length of the inspection path.
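The decoupled pipeline the paper criticises — first choose sensing locations, then plan a visiting route — can be sketched as follows (an illustrative stand-in, not the authors' randomized algorithm): greedy set cover for placement, then a nearest-neighbour ordering of the chosen locations.

```python
import math

def greedy_cover(points, radius):
    """Greedy set cover: choose sensing locations (from the candidate set
    `points` itself) until every point is within `radius` of a chosen one."""
    uncovered = set(range(len(points)))
    chosen = []
    while uncovered:
        best = max(range(len(points)),
                   key=lambda i: sum(math.dist(points[i], points[j]) <= radius
                                     for j in uncovered))
        chosen.append(best)
        uncovered = {j for j in uncovered
                     if math.dist(points[best], points[j]) > radius}
    return chosen

def nearest_neighbour_route(points, idxs, start=0):
    """Visit the chosen sensing locations in nearest-neighbour order from `start`."""
    todo, route, cur = set(idxs), [], start
    while todo:
        nxt = min(todo, key=lambda i: math.dist(points[cur], points[i]))
        route.append(nxt)
        todo.discard(nxt)
        cur = nxt
    return route
```

Because each stage ignores the other, the placement may scatter locations that force a long tour — the coupling the paper's randomized approach is designed to exploit.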

5.
This work addresses the problem of decision-making under uncertainty for robot navigation. Since robot navigation is most naturally represented in a continuous domain, the problem is cast as a continuous-state POMDP. Probability distributions over state space, or beliefs, are represented in parametric form using low-dimensional vectors of sufficient statistics. The belief space, over which the value function must be estimated, has dimensionality equal to the number of sufficient statistics. Compared to methods based on discretising the state space, this work trades the loss of the belief space’s convexity for a reduction in its dimensionality and an efficient closed-form solution for belief updates. Fitted value iteration is used to solve the POMDP. The approach is empirically compared to a discrete POMDP solution method on a simulated continuous navigation problem. We show that, for a suitable environment and parametric form, the proposed method is capable of scaling to large state-spaces.
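A minimal illustration of the parametric-belief idea, assuming a 1-D Gaussian belief with linear motion and measurement models (an assumption for this sketch, not the paper's navigation model): the belief is carried as the two sufficient statistics (mean, variance), and both updates are closed-form.

```python
def belief_predict(mu, var, u, q):
    """Motion update for a 1-D Gaussian belief: x' = x + u + noise(0, q)."""
    return mu + u, var + q

def belief_correct(mu, var, z, r):
    """Measurement update for z = x + noise(0, r): closed-form Kalman correction."""
    k = var / (var + r)                      # Kalman gain
    return mu + k * (z - mu), (1.0 - k) * var

# The belief is just the 2-vector of sufficient statistics (mu, var),
# so a planner works over a 2-D belief space instead of a discretised grid.
mu, var = belief_predict(0.0, 1.0, u=1.0, q=0.5)   # move right by 1
mu, var = belief_correct(mu, var, z=1.2, r=0.5)    # observe near x = 1.2
```

The value function is then estimated over (mu, var) pairs — this is the dimensionality reduction the abstract trades convexity for.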

6.
In this paper, we present a multi-robot exploration strategy for map building. We consider an indoor structured environment and a team of robots with different sensing and motion capabilities. We combine geometric and probabilistic reasoning to propose a solution to our problem. We formalize the proposed solution using stochastic dynamic programming (SDP) in states with imperfect information. Our modeling can be considered as a partially observable Markov decision process (POMDP), which is optimized using SDP. We apply the dynamic programming technique in a reduced search space that allows us to incrementally explore the environment. We propose realistic sensor models and provide a method to compute the probability of the next observation given the current state of the team of robots based on a Bayesian approach. We also propose a probabilistic motion model, which allows us to take into account errors (noise) on the velocities applied to each robot. This modeling also allows us to simulate imperfect robot motions, and to estimate the probability of reaching the next state given the current state. We have implemented all our algorithms, and simulation results are presented.

7.
《Advanced Robotics》2013,27(4):399-410
Building environmental models by a vision-guided mobile robot is a key problem in robotics. This paper presents a new strategy of the vision-guided mobile robot for building models of an unknown environment by panoramic sensing. The mobile robot perceives with two types of panoramic sensing: one is for acquiring omnidirectional visual information at an observation point to find the outline structure of the local environment and the other is for acquiring visual information along a route to build local environmental models. Before exploring the environment, the robot looks around and finds the outline structure of the local environment as a reference frame for acquiring the local models. Then the robot builds the local models while moving along the directions of the outline structure (the outline structure is represented by a simple convex polygon, each side of which has a direction). We have implemented the above-mentioned robot behaviors into a mobile robot which has multiple vision agents. The multiple vision agents can simultaneously execute different vision tasks needed for panoramic sensing.

8.
《Advanced Robotics》2013,27(8):751-771
We propose a new method of sensor planning for mobile robot localization using Bayesian network inference. Since we can model causal relations between situations of the robot's behavior and sensing events as nodes of a Bayesian network, we can use the inference of the network for dealing with uncertainty in sensor planning and thus derive appropriate sensing actions. In this system we employ a multi-layered-behavior architecture for navigation and localization. This architecture effectively combines mapping of local sensor information and the inference via a Bayesian network for sensor planning. The mobile robot recognizes the local sensor patterns for localization and navigation using a learned regression function. Since the environment may change during the navigation and the sensor capability has limitations in the real world, the mobile robot actively gathers sensor information to construct and reconstruct a Bayesian network, and then derives an appropriate sensing action which maximizes a utility function based on inference of the reconstructed network. The utility function takes into account belief of the localization and the sensing cost. We have conducted some simulation and real robot experiments to validate the sensor planning system.

9.
In this paper, we address the problem of suboptimal behavior during online partially observable Markov decision process (POMDP) planning caused by time constraints on planning. Taking inspiration from the related field of reinforcement learning (RL), our solution is to shape the agent's reward function in order to lead the agent to large future rewards without having to spend as much time explicitly estimating cumulative future rewards, enabling the agent to save time, broaden its planning, and build higher-quality plans. Specifically, we extend potential-based reward shaping (PBRS) from RL to online POMDP planning. In our extension, information about belief states is added to the function optimized by the agent during planning. This information provides hints of where the agent might find high future rewards beyond its planning horizon, and thus achieve greater cumulative rewards. We develop novel potential functions measuring information useful to agent metareasoning in POMDPs (reflecting on agent knowledge and/or histories of experience with the environment), theoretically prove several important properties and benefits of using PBRS for online POMDP planning, and empirically demonstrate these results in a range of classic benchmark POMDP planning problems.
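Potential-based reward shaping itself is standard and easy to sketch. Assuming a hypothetical potential function `phi` over (belief) states, the shaped reward is R + γΦ(s′) − Φ(s), and the shaping terms telescope along any trajectory — which is why shaping cannot change the optimal policy:

```python
def shaped_reward(r, s, s_next, phi, gamma=0.95):
    """Potential-based shaping: R'(s, a, s') = R + gamma * Phi(s') - Phi(s)."""
    return r + gamma * phi(s_next) - phi(s)

def shaping_bonus(traj, phi, gamma):
    """Discounted sum of the shaping terms along a state trajectory."""
    return sum(gamma ** t * (gamma * phi(traj[t + 1]) - phi(traj[t]))
               for t in range(len(traj) - 1))

phi = lambda s: float(s)   # hypothetical potential, e.g. a progress estimate
a = shaping_bonus([0, 1, 2, 3], phi, 0.9)
b = shaping_bonus([0, 2, 1, 3], phi, 0.9)
# Both sums equal gamma^3 * phi(3) - phi(0): the total bonus depends only
# on the endpoints, not on the path taken between them.
```

The paper's contribution lies in choosing potentials from belief-state information; the snippet above only shows the mechanism they build on.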

10.
Continuous-state POMDPs provide a natural representation for a variety of tasks, including many in robotics. However, most existing parametric continuous-state POMDP approaches are limited by their reliance on a single linear model to represent the world dynamics. We introduce a new switching-state dynamics model that can represent multi-modal state-dependent dynamics. We present the Switching Mode POMDP (SM-POMDP) planning algorithm for solving continuous-state POMDPs using this dynamics model. We also consider several procedures to approximate the value function as a mixture of a bounded number of Gaussians. Unlike the majority of prior work on approximate continuous-state POMDP planners, we provide a formal analysis of our SM-POMDP algorithm, providing bounds, where possible, on the quality of the resulting solution. We also analyze the computational complexity of SM-POMDP. Empirical results on an unmanned aerial vehicle collision avoidance simulation, and a robot navigation simulation where the robot has faulty actuators, demonstrate the benefit of SM-POMDP over a prior parametric approach.

11.
Sensor-based multi-robot coverage path planning problem is one of the challenging problems in managing flexible, computer-integrated, intelligent manufacturing systems. A novel pattern-based genetic algorithm is proposed for this problem. The area subject to coverage is modeled with disks representing the range of sensing devices. Then the problem is defined as finding a sequence of the disks for each robot to minimize the coverage completion time determined by the maximum time traveled by a robot in a mobile robot group. So the environment needs to be partitioned among robots considering their travel times. Robot turns cause the robot to slow down, turn and accelerate inevitably. Therefore, the actual travel time of a mobile robot is calculated based on the traveled distance and the number of turns. The algorithm is designed to handle routing and partitioning concurrently. Experiments are conducted using P3-DX mobile robots in the laboratory and simulation environment to validate the results.
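The turn-aware travel-time model described above can be sketched as follows (illustrative only — the speed and per-turn penalty are assumed constants, not taken from the paper):

```python
import math

def travel_time(route, speed=1.0, turn_penalty=0.5):
    """Distance over speed, plus a fixed slow-down/turn/accelerate cost per
    heading change (angle wrap-around is ignored for brevity)."""
    dist, turns = 0.0, 0
    for i in range(1, len(route)):
        dist += math.dist(route[i - 1], route[i])
        if i >= 2:
            h1 = math.atan2(route[i - 1][1] - route[i - 2][1],
                            route[i - 1][0] - route[i - 2][0])
            h2 = math.atan2(route[i][1] - route[i - 1][1],
                            route[i][0] - route[i - 1][0])
            if abs(h2 - h1) > 1e-9:
                turns += 1
    return dist / speed + turn_penalty * turns

def makespan(routes):
    """Coverage completion time = travel time of the slowest robot."""
    return max(travel_time(r) for r in routes)
```

The genetic algorithm then searches over disk sequences and partitions to minimise this makespan; here only the objective is sketched.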

12.
Suman  John L.   《Automatica》2007,43(12):2104-2111
In this work we present a methodology for intelligent path planning in an uncertain environment using vision-like sensors, i.e., sensors that allow sensing of the environment non-locally. Examples would include a mobile robot exploring an unknown terrain or a micro-UAV navigating in a cluttered urban environment. We show that the problem of path planning in an uncertain environment, under certain assumptions, can be posed as the adaptive optimal control of an uncertain Markov decision process, characterized by a known, control-dependent system, and an unknown, control-independent environment. The strategy for path planning then reduces to computing the control policy based on the current estimate of the environment, also known as the “certainty-equivalence” principle in the adaptive control literature. Our methodology allows the inclusion of vision-like sensors into the problem formulation, which, as empirical evidence suggests, accelerates the convergence of the planning algorithms. Further, we show that the path planning and estimation problems, as formulated in this paper, possess special structure which can be exploited to significantly reduce the computational burden of the associated algorithms. We apply this methodology to the problem of path planning of a mobile rover in a completely unknown terrain.
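The certainty-equivalence loop can be sketched in a few lines: solve the MDP induced by the current environment estimate, act greedily against it, and re-solve whenever the estimate changes. The toy solver below uses plain value iteration on a small discrete model; it illustrates the principle, not the paper's algorithm.

```python
def value_iteration(n_states, actions, trans, reward, gamma=0.95, iters=200):
    """Solve the MDP given by the *current* environment estimate.
    trans(s, a) returns a dict {next_state: probability}."""
    v = [0.0] * n_states
    for _ in range(iters):
        v = [max(sum(p * (reward(s, a, s2) + gamma * v[s2])
                     for s2, p in trans(s, a).items())
                 for a in actions)
             for s in range(n_states)]
    return v

def certainty_equivalent_policy(n_states, actions, trans, reward, gamma=0.95):
    """Act greedily w.r.t. the estimated model's value function; the caller
    re-invokes this whenever new observations update the environment estimate."""
    v = value_iteration(n_states, actions, trans, reward, gamma)
    def policy(s):
        return max(actions,
                   key=lambda a: sum(p * (reward(s, a, s2) + gamma * v[s2])
                                     for s2, p in trans(s, a).items()))
    return policy
```

In the paper's setting the transition model of the environment is itself estimated online (and informed non-locally by the vision-like sensors); here `trans` stands in for that current estimate.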

13.
A fuzzy intelligent control method is designed for the optimal path planning of mobile robots. Ultrasonic sensors probe the environment around the robot to obtain information about obstacles and the target. A fuzzy controller fuzzifies the obstacle and target position information, applies a fuzzy rule base, and defuzzifies the output so that the robot avoids obstacles effectively; the deadlock problem inherent in fuzzy algorithms is also resolved, achieving path planning for the mobile robot. Simulation results show that the fuzzy algorithm outperforms the artificial potential field method and is effective and feasible.

14.
梁家海 《计算机工程与设计》2012,33(6):2451-2454,2471
This paper studies path planning for mobile robots in 3-D environments. To address the poor environmental adaptability and lack of global awareness of existing methods, the artificial potential field method is improved and a new path planning algorithm is proposed. The algorithm first rasterises the known 3-D natural environment into a grid, builds a traversal-cost model, and computes the cost of each cell; it then constructs a repulsive field from the cell costs and an attractive field centred on the goal, and provides a method for resolving the local-minimum problem. Finally, the direction of the resultant force at each point gives the robot's heading, yielding a low-cost path from the start to the goal. Simulation results show that the algorithm effectively reduces traversal cost, has good adaptability and global awareness, and is well suited to mobile robot path planning in 3-D natural environments.
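The attractive/repulsive construction is the classic potential-field form. A minimal sketch follows, using distance-based repulsion around point obstacles rather than the paper's grid-cost-based repulsive field (the gains and influence radius are assumed values):

```python
import math

def attractive(pos, goal, k_att=1.0):
    """Pull toward the goal, proportional to the offset."""
    return tuple(k_att * (g - p) for p, g in zip(pos, goal))

def repulsive(pos, obstacles, k_rep=1.0, d0=2.0):
    """Push away from each obstacle inside the influence radius d0."""
    f = [0.0] * len(pos)
    for obs in obstacles:
        d = math.dist(pos, obs)
        if 0.0 < d < d0:
            mag = k_rep * (1.0 / d - 1.0 / d0) / d ** 2
            for i in range(len(pos)):
                f[i] += mag * (pos[i] - obs[i]) / d
    return tuple(f)

def step_direction(pos, goal, obstacles):
    """Unit direction of the resultant force: the robot's heading at `pos`."""
    f = [a + r for a, r in zip(attractive(pos, goal), repulsive(pos, obstacles))]
    n = math.hypot(*f)
    return tuple(c / n for c in f) if n > 0.0 else tuple(f)
```

The functions work unchanged for 2-D or 3-D position tuples; the local-minimum escape that the paper adds on top is not sketched here.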

15.
In active perception tasks, an agent aims to select sensory actions that reduce its uncertainty about one or more hidden variables. For example, a mobile robot takes sensory actions to efficiently navigate in a new environment. While partially observable Markov decision processes (POMDPs) provide a natural model for such problems, reward functions that directly penalize uncertainty in the agent’s belief can remove the piecewise-linear and convex (PWLC) property of the value function required by most POMDP planners. Furthermore, as the number of sensors available to the agent grows, the computational cost of POMDP planning grows exponentially with it, making POMDP planning infeasible with traditional methods. In this article, we address a twofold challenge of modeling and planning for active perception tasks. We analyze \(\rho \)POMDP and POMDP-IR, two frameworks for modeling active perception tasks, that restore the PWLC property of the value function. We show the mathematical equivalence of these two frameworks by showing that given a \(\rho \)POMDP along with a policy, they can be reduced to a POMDP-IR and an equivalent policy (and vice-versa). We prove that the value function for the given \(\rho \)POMDP (and the given policy) and the reduced POMDP-IR (and the reduced policy) is the same. To efficiently plan for active perception tasks, we identify and exploit the independence properties of POMDP-IR to reduce the computational cost of solving POMDP-IR (and \(\rho \)POMDP). We propose greedy point-based value iteration (PBVI), a new POMDP planning method that uses greedy maximization to greatly improve scalability in the action space of an active perception POMDP. Furthermore, we show that, under certain conditions, including submodularity, the value function computed using greedy PBVI is guaranteed to have bounded error with respect to the optimal value function. 
We establish the conditions under which the value function of an active perception POMDP is guaranteed to be submodular. Finally, we present a detailed empirical analysis on a dataset collected from a multi-camera tracking system employed in a shopping mall. Our method achieves similar performance to existing methods but at a fraction of the computational cost, leading to better scalability for solving active perception tasks.
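The greedy maximisation at the heart of greedy PBVI can be illustrated on a toy sensor-selection objective. The camera names and coverage sets below are hypothetical; the point is the marginal-gain loop, which carries the (1 − 1/e) guarantee for monotone submodular objectives:

```python
def greedy_select(candidates, k, gain):
    """Pick k items by repeatedly adding the one with the largest marginal gain."""
    chosen = []
    for _ in range(k):
        remaining = [c for c in candidates if c not in chosen]
        if not remaining:
            break
        best = max(remaining, key=lambda c: gain(chosen + [c]) - gain(chosen))
        chosen.append(best)
    return chosen

# Hypothetical objective: number of distinct cells the selected cameras observe
# (coverage is monotone submodular, so greedy selection is near-optimal).
coverage = {'cam1': {1, 2, 3}, 'cam2': {3, 4}, 'cam3': {1, 2}}
gain = lambda sel: len(set().union(*(coverage[c] for c in sel))) if sel else 0
```

Greedy evaluation is linear in the number of sensors per pick, instead of exponential in the joint action space — the scalability gain the article exploits.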

16.
A Genetic-Algorithm-Based Path Planning Method for Mobile Robots in Dynamic Environments   Total citations: 20 (self-citations: 1, external: 20)
刘国栋  谢宏斌  李春光 《机器人》2003,25(4):327-330
Dynamic path planning for a mobile robot in a dynamic environment is a difficult problem. This paper proposes a genetic-algorithm-based path planning method for mobile robots. The method uses real-number encoding and a fitness function with clear physical meaning, which speeds up real-time computation and improves accuracy, fully exploiting the potential of genetic algorithms for dynamic path planning of mobile robots. Computer simulations show that the method has good dynamic path planning capability.
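A toy real-coded GA in the spirit of the abstract: real-number chromosomes (waypoint offsets at fixed x-stations) and a fitness with direct physical meaning (path length plus a collision penalty). All constants and operators here are illustrative choices, not the authors':

```python
import math
import random

def fitness(ys, obstacles, xs):
    """Path length plus a fixed penalty per waypoint inside an obstacle disc
    (lower is better) -- a fitness with direct physical meaning."""
    pts = list(zip(xs, ys))
    length = sum(math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))
    penalty = sum(10.0 for p in pts for (cx, cy, r) in obstacles
                  if math.dist(p, (cx, cy)) < r)
    return length + penalty

def evolve(obstacles, xs, pop=30, gens=60, rng=None):
    """Tiny real-coded GA: elitist selection, blend crossover, Gaussian mutation."""
    rng = rng or random.Random(1)
    n = len(xs)
    popn = [[rng.uniform(-3.0, 3.0) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda ys: fitness(ys, obstacles, xs))
        elite = popn[:pop // 2]
        children = []
        while len(elite) + len(children) < pop:
            p1, p2 = rng.sample(elite, 2)
            w = rng.random()
            children.append([w * a + (1.0 - w) * b + rng.gauss(0.0, 0.1)
                             for a, b in zip(p1, p2)])
        popn = elite + children
    return min(popn, key=lambda ys: fitness(ys, obstacles, xs))
```

In a dynamic environment the same loop would be re-run (or warm-started) as obstacles move; only the static core is sketched here.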

17.
We present a robust target tracking algorithm for a mobile robot. It is assumed that a mobile robot carries a sensor with a fan-shaped field of view and finite sensing range. The goal of the proposed tracking algorithm is to minimize the probability of losing a target. If the distribution of the next position of a moving target is available as a Gaussian distribution from a motion prediction algorithm, the proposed algorithm can guarantee the tracking success probability. In addition, the proposed method minimizes the moving distance of the mobile robot based on the chosen bound on the tracking success probability. While the considered problem is a non-convex optimization problem, we derive a closed-form solution when the heading is fixed and develop a real-time algorithm for solving the considered target tracking problem. We also present a robust target tracking algorithm for aerial robots in 3D. The performance of the proposed method is evaluated extensively in simulation. The proposed algorithm has been successfully applied in field experiments using a Pioneer mobile robot with a Microsoft Kinect sensor to follow a pedestrian.

18.
For a mobile robot to be practical, it needs to navigate in dynamically changing environments and manipulate objects in the environment with operating ease. The main challenges to satisfying these requirements in mobile robot research include the collection of robot environment information, storage and organization of this information, and fast task planning based on available information. Conventional approaches to these problems are far from satisfactory due to their high computation time. In this paper, we specifically address the problems of storage and organization of the environment information and fast task planning. We propose a special object-oriented data model (OODM) for information storage and management in order to solve the first problem. This model explicitly represents domain knowledge and abstracts a global perspective about the robot's dynamically changing environment. To solve the second problem, we introduce a fast task planning algorithm that fully uses domain knowledge related to robot applications and to the given environment. Our OODM-based task planning method presents a general framework and representation, into which domain-specific information, domain decomposition methods, and specific path planners can be tailored for different task planning problems. This method unifies and integrates the salient features from various areas such as databases, artificial intelligence, and robot path planning, thus increasing the planning speed significantly.

19.
Architecture of an Autonomous Mobile Robot System   Total citations: 6 (self-citations: 3, external: 6)
张友军  吴春明 《机器人》1997,19(5):378-383
Building on an analysis of several existing multi-agent coordination models, this paper proposes a multi-agent coordination model for autonomous mobile robot systems: the (discrete) event-state model. The model organises and coordinates the sensing, planning, and control agents of an autonomous mobile robot system so that the robot can drive autonomously in complex, continuously changing environments, and it has performed well in an autonomous mobile robot project.

20.
Object recognition: A new application for smelling robots   Total citations: 1 (self-citations: 0, external: 1)
Olfaction is a challenging new sensing modality for intelligent systems. With the emergence of electronic noses, it is now possible to detect and recognize a range of different odours for a variety of applications. In this work, we introduce a new application where electronic olfaction is used in cooperation with other types of sensors on a mobile robot in order to acquire the odour property of objects. We examine the problem of deciding when, how and where the electronic nose (e-nose) should be activated by planning for active perception and we consider the problem of integrating the information provided by the e-nose with both prior information and information from other sensors (e.g., vision). Experiments performed on a mobile robot equipped with an e-nose are presented.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)   ICP licence: 京ICP备09084417号