Similar Literature
20 similar articles found
1.
《Artificial Intelligence》2007,171(8-9):453-490
This study extends the framework of partially observable Markov decision processes (POMDPs) to allow their parameters, i.e., the probability values in the state transition functions and the observation functions, to be imprecisely specified. It is shown that this extension can reduce the computational costs associated with the solution of these problems. First, the new framework, POMDPs with imprecise parameters (POMDPIPs), is formulated. We consider (1) the interval case, in which each parameter is imprecisely specified by an interval that indicates possible values of the parameter, and (2) the point-set case, in which each probability distribution is imprecisely specified by a set of possible distributions. Second, a new optimality criterion for POMDPIPs is introduced. As in POMDPs, the criterion is to regard a policy, i.e., an action-selection rule, as optimal if it maximizes the expected total reward. The expected total reward, however, cannot be calculated precisely in POMDPIPs, because of the parameter imprecision. Instead, we estimate the total reward by adopting arbitrary second-order beliefs, i.e., beliefs in the imprecisely specified state transition functions and observation functions. Although there are many possible choices for these second-order beliefs, we regard a policy as optimal as long as there is at least one such choice with which the policy maximizes the total reward. Thus there can be multiple optimal policies for a POMDPIP. We regard these policies as equally optimal, and aim at obtaining one of them. By appropriately choosing which second-order beliefs to use in estimating the total reward, computational costs incurred in obtaining such an optimal policy can be reduced significantly. We provide an exact solution algorithm for POMDPIPs that does this efficiently. Third, the performance of such an optimal policy, as well as the computational complexity of the algorithm, is analyzed theoretically. Last, empirical studies show that our algorithm quickly obtains satisfactory policies for many POMDPIPs.
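As an illustration of the interval case only (not the paper's algorithm), the minimal Python sketch below shows one plausible way to represent interval-specified transition probabilities and to pick a single distribution from the interval set when estimating value; the optimistic allocation rule and all names are assumptions made for this example.

    # Hedged sketch: interval-specified transition parameters for a toy POMDPIP.
    # The "optimistic" choice of a second-order belief is an illustrative
    # assumption, not the algorithm from the paper.
    import numpy as np

    def pick_distribution(lower, upper, values):
        """Choose one transition distribution within the interval bounds that
        maximizes the expected one-step value (an optimistic second-order
        belief).  lower/upper/values are 1-D arrays over successor states."""
        lower, upper = np.asarray(lower, float), np.asarray(upper, float)
        p = lower.copy()
        remaining = 1.0 - p.sum()            # probability mass still to allocate
        # Greedily give the remaining mass to the most valuable successors.
        for s in np.argsort(values)[::-1]:
            add = min(upper[s] - p[s], remaining)
            p[s] += add
            remaining -= add
            if remaining <= 1e-12:
                break
        return p

    # Two successor states; each transition probability lies in an interval.
    lower = [0.2, 0.3]
    upper = [0.7, 0.8]
    values = [1.0, 0.0]                      # estimated value of each successor
    print(pick_distribution(lower, upper, values))   # -> [0.7, 0.3]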

2.
POMDPs and their decentralized multiagent counterparts, DEC-POMDPs, offer a rich framework for sequential decision making under uncertainty. Their high computational complexity, however, presents an important research challenge. One way to address the intractable memory requirements of current algorithms is based on representing agent policies as finite-state controllers. Using this representation, we propose a new approach that formulates the problem as a nonlinear program, which defines an optimal policy of a desired size for each agent. This new formulation allows a wide range of powerful nonlinear programming algorithms to be used to solve POMDPs and DEC-POMDPs. Although solving the NLP optimally is often intractable, the results we obtain using an off-the-shelf optimization method are competitive with state-of-the-art POMDP algorithms and outperform state-of-the-art DEC-POMDP algorithms. Our approach is easy to implement and it opens up promising research directions for solving POMDPs and DEC-POMDPs using nonlinear programming methods.
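For reference, the finite-state-controller nonlinear program referred to above is commonly written, for a single-agent POMDP with controller nodes q, in roughly the following form (our notation, not taken from the abstract):

    \max_{P(a \mid q),\; P(q' \mid q,a,o),\; V}\; \sum_{s} b_0(s)\, V(q_0, s)
    \text{s.t. } V(q,s) = \sum_{a} P(a \mid q) \Big[ R(s,a) + \gamma \sum_{s'} T(s' \mid s,a) \sum_{o} O(o \mid s',a) \sum_{q'} P(q' \mid q,a,o)\, V(q',s') \Big] \quad \forall q, s,

with P(a | q) and P(q' | q, a, o) constrained to be probability distributions. The variables are the controller parameters and the values V(q, s); the DEC-POMDP version uses one controller per agent.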

3.
Reinforcement learning (RL) has been widely used to solve problems with little feedback from the environment. Q-learning can solve Markov decision processes (MDPs) quite well. For partially observable Markov decision processes (POMDPs), a recurrent neural network (RNN) can be used to approximate Q values. However, learning time for these problems is typically very long. We present a new combination of RL and RNN to find a good policy for POMDPs in a shorter learning time. This method contains two phases: first, the state space is divided into two groups (a fully observable state group and a hidden state group); second, a Q-value table is used to store values of fully observable states and an RNN is used to approximate values for hidden states. Results of experiments in two grid-world problems show that the proposed method enables an agent to acquire a policy with better learning performance compared to the method using only an RNN.
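A minimal sketch of the two-phase idea (tabular values for fully observable states, a learned approximator for hidden states); the recurrent network is stubbed out here, and all names and constants are illustrative assumptions rather than the paper's implementation.

    # Hedged sketch: tabular Q-values for fully observable states, a learned
    # function approximator for hidden (aliased) states.  A real implementation
    # would use an RNN; here it is stubbed with a callable.
    import random
    from collections import defaultdict

    ACTIONS = ["up", "down", "left", "right"]
    ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

    q_table = defaultdict(lambda: {a: 0.0 for a in ACTIONS})   # observable states

    def rnn_q(history, action):
        """Stand-in for the recurrent approximator over observation histories."""
        return 0.0   # an RNN would map (history, action) -> Q estimate

    def q_value(state, history, action, observable):
        return q_table[state][action] if observable else rnn_q(history, action)

    def select_action(state, history, observable):
        if random.random() < EPS:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: q_value(state, history, a, observable))

    def update_observable(state, action, reward, next_state):
        """Standard tabular Q-learning update, used only for observable states."""
        best_next = max(q_table[next_state].values())
        q_table[state][action] += ALPHA * (reward + GAMMA * best_next
                                           - q_table[state][action])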

4.
The sensitivity-based optimization of Markov systems has become an increasingly important area. From the perspective of performance sensitivity analysis, policy-iteration algorithms and gradient estimation methods can be directly obtained for Markov decision processes (MDPs). In this correspondence, the sensitivity-based optimization is extended to average reward partially observable MDPs (POMDPs). We derive the performance-difference and performance-derivative formulas of POMDPs. On the basis of the performance-derivative formula, we present a new method to estimate the performance gradients. From the performance-difference formula, we obtain a sufficient optimality condition without the discounted reward formulation. We also propose a policy-iteration algorithm to obtain a nearly optimal finite-state-controller policy.
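For background, the classical average-reward performance-difference formula for fully observable MDPs, which the POMDP formulas above extend, can be written (in standard sensitivity-analysis notation, not necessarily that of the paper) as

    \eta' - \eta = \pi' \big[ (f' - f) + (P' - P)\, g \big],

where \pi' is the steady-state distribution under the new policy, f and P are the reward vector and transition matrix, and g is the performance potential (bias) of the original policy.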

5.
A sampling-based approximation algorithm for POMDPs
The partially observable Markov decision process (POMDP) is a model of action selection for a robot acting in a dynamic, uncertain environment. For POMDP models with sparse transition matrices, this paper proposes a fast approximation algorithm. The algorithm first samples the belief space using the policy produced by the QMDP algorithm, and then quickly generates a POMDP value function through point-based iteration, yielding an approximately optimal action-selection policy. On the same POMDP test models, the policies produced by this algorithm obtain rewards comparable to those of policies produced by other approximation algorithms, but the algorithm runs faster and the set of vectors representing its policies is smaller than that produced by existing approximation algorithms. It is therefore better suited than those approximation algorithms to solving large-scale POMDP models with sparse state-transition matrices.
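As a rough illustration of the QMDP step used here for belief sampling (a generic sketch, not the paper's implementation; the array shapes, the toy model, and all names are assumptions):

    # Hedged sketch of QMDP-style action selection over a belief.
    import numpy as np

    def mdp_q_values(T, R, gamma=0.95, iters=200):
        """T: (A,S,S) transition tensor, R: (S,A) reward.  Returns Q: (S,A)."""
        S, A = R.shape
        Q = np.zeros((S, A))
        for _ in range(iters):
            V = Q.max(axis=1)
            Q = R + gamma * np.einsum("ast,t->sa", T, V)
        return Q

    def qmdp_action(belief, Q):
        """Pick the action maximizing the belief-weighted MDP Q-values."""
        return int(np.argmax(belief @ Q))

    # Tiny example: 2 states, 2 actions.
    T = np.array([[[0.9, 0.1], [0.2, 0.8]],      # action 0
                  [[0.5, 0.5], [0.5, 0.5]]])     # action 1
    R = np.array([[1.0, 0.0], [0.0, 1.0]])       # reward(s, a)
    Q = mdp_q_values(T, R)
    print(qmdp_action(np.array([0.6, 0.4]), Q))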

6.
In a partially observable Markov decision process (POMDP), if the reward can be observed at each step, then the observed reward history contains information on the unknown state. This information, in addition to the information contained in the observation history, can be used to update the state probability distribution. The policy thus obtained is called a reward-information policy (RI-policy); an optimal RI-policy performs no worse than any normal optimal policy depending only on the observation history. The above observation leads to four different problem-formulations for POMDPs depending on whether the reward function is known and whether the reward at each step is observable. This exploratory work may attract attention to these interesting problems.
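To make the idea concrete: if the reward is observable, the belief update can condition on the observed reward r as well as on the observation o. Assuming, purely for illustration, that the reward depends on (s, a, s') and that P(r | s, a, s') is known, one plausible form of the update after taking action a is

    b'(s') \propto \sum_{s} b(s)\, T(s' \mid s,a)\, O(o \mid s',a)\, P(r \mid s,a,s'),

followed by normalization; the extra factor P(r | s, a, s') carries the additional information contained in the reward history.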

7.
The partially observable Markov decision process (POMDP) is an extension of the Markov decision process (MDP). POMDPs are commonly used to model agents that make decisions in partially observable stochastic environments. To address the poor scalability of methods that solve the complete POMDP, this paper proposes decomposing a multivariate POMDP into a set of restricted POMDPs, solving each of these models independently to obtain a value function, and then combining the value functions of the restricted POMDPs to obtain a policy for the complete POMDP. The paper describes the process of identifying the state variables relevant to an independent task and how to construct a model restricted to a single task. The method is applied to two rock-sampling problems of different sizes, and the experimental results show that it obtains good policies.

8.
Uncertainty in motion planning is often caused by three main sources: motion error, sensing error, and an imperfect environment map. Despite the significant effect of all three sources of uncertainty on motion planning problems, most planners take into account only one or at most two of them. We propose a new motion planner, called Guided Cluster Sampling (GCS), that takes into account all three sources of uncertainty for robots with active sensing capabilities. GCS uses the Partially Observable Markov Decision Process (POMDP) framework and the point-based POMDP approach. Although point-based POMDPs have shown impressive progress over the past few years, they perform poorly when the environment map is imperfect. This poor performance is due to the extremely high-dimensional state space, which translates to an extremely large belief space B. We alleviate this problem by constructing a more suitable sampling distribution, based on the observations that when the robot has active sensing capability, B can be partitioned into a collection of much smaller subspaces, and an optimal policy can often be generated by sufficient sampling of a small subset of the collection. Utilizing these observations, GCS samples B in two stages: a subspace is sampled from the collection and then a belief is sampled from the subspace. It uses information from the set of sampled subspaces and sampled beliefs to guide subsequent sampling. Simulation results on marine robotics scenarios suggest that GCS can generate reasonable policies for motion planning problems with uncertain motion, sensing, and environment maps that are unsolvable by the best point-based POMDP methods today. Furthermore, GCS handles POMDPs with continuous state, action, and observation spaces. We show that for a class of POMDPs that often occur in robot motion planning, given enough time, GCS converges to the optimal policy. To the best of our knowledge, this is the first convergence result for point-based POMDPs with continuous action space.

9.
仵博  吴敏  佘锦华 《软件学报》2013,24(1):25-36
Partially observable Markov decision processes (POMDPs) are an ideal model for sequential decision making in dynamic, uncertain environments, but existing offline algorithms suffer from the curse of dimensionality and the curse of history in the belief space, while existing online algorithms cannot simultaneously achieve low error and high real-time performance, which has kept the POMDP model from being applied in practical engineering. To address this, a point-based online value iteration algorithm for POMDPs (PBOVI) is proposed. The algorithm performs backup operations only at given reachable belief points, avoiding solving over the entire belief simplex and thus speeding up the computation; it prunes the belief-state AND/OR tree online using a branch-and-bound method; and it introduces the idea of belief-node reuse, reusing the belief points already solved at the previous time step to avoid repeated computation. Experimental results show that the algorithm has a low error rate, converges quickly, and meets the real-time requirements of the system.
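For context, the backup that point-based methods of this kind perform at each sampled belief point b is the standard Bellman backup restricted to that point (our notation, not taken from the paper):

    V_{n+1}(b) = \max_{a} \Big[ \sum_{s} b(s)\, R(s,a) + \gamma \sum_{o} P(o \mid b,a)\, V_n(b^{a,o}) \Big],
    \qquad b^{a,o}(s') \propto O(o \mid s',a) \sum_{s} T(s' \mid s,a)\, b(s).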

10.
To address the difficulty of balancing target-tracking performance against sensor energy consumption in wireless sensor networks (WSNs), an energy-efficient WSN tracking algorithm based on belief reuse is proposed. The WSN operating in a dynamic, uncertain environment is modeled as a partially observable Markov decision process (POMDP), and the problem of optimally balancing tracking performance and energy consumption is cast as solving the optimal POMDP value function. A maximum-reward heuristic search is used to obtain a near-optimal value of tracking performance, and belief reuse avoids repeatedly acquiring beliefs, effectively reducing the energy consumed by sensor communication. Experimental results show that the belief-reuse algorithm effectively optimizes the trade-off between tracking performance and energy consumption, achieving high tracking performance at low energy cost.

11.
Partially observable Markov decision processes are solved by introducing a belief-state space that converts a non-Markovian problem into a Markovian one; their ability to describe the real world makes them an important branch of research on stochastic decision processes. This paper introduces the basic principles and decision process of partially observable Markov decision processes and proposes a partially observable Markov decision algorithm based on policy iteration and value iteration. The algorithm uses ideas from linear programming and dynamic programming to address the curse of dimensionality that arises when the belief-state space is large, and obtains an approximately optimal Markov decision solution. Experimental data show that the algorithm is feasible and effective.

12.
In large-scale transportation problems (TPs), various methods have been developed to obtain an optimal solution. One of these methods is the transportation simplex algorithm (TSA), which obtains an optimal solution for the TP. Various heuristic methods have been developed to find an initial basic feasible solution for transportation algorithms. These methods differ in terms of computational cost and the quality of the initial solution they find. In TSA, the better the basic feasible solution, the fewer iterations the algorithm will need. At each step, it uses a pivoting operation, in which a loop involving the nonbasic variable with the largest cost reduction is determined; the leaving basic variable is then eliminated. However, for large-scale problems, even the best basic feasible solution may result in a high number of iterations. In this paper, we present a novel algorithm called the multiloop transportation simplex algorithm, which finds multiple independent loops during the pivoting operation. This causes larger cost reductions in every iteration. We experimentally show that the average number of iterations and the runtime to solve the TP are dramatically reduced.
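For illustration, one of the classical heuristics for constructing an initial basic feasible solution, the northwest-corner rule, can be sketched as follows; the paper's multiloop pivoting itself is not shown here, and the example data are made up.

    # Hedged sketch: the northwest-corner rule, a classical heuristic for an
    # initial basic feasible solution of a transportation problem.
    def northwest_corner(supply, demand):
        supply, demand = list(supply), list(demand)
        alloc = [[0] * len(demand) for _ in supply]
        i = j = 0
        while i < len(supply) and j < len(demand):
            qty = min(supply[i], demand[j])       # ship as much as possible
            alloc[i][j] = qty
            supply[i] -= qty
            demand[j] -= qty
            if supply[i] == 0:
                i += 1                            # this source is exhausted
            else:
                j += 1                            # this destination is satisfied
        return alloc

    print(northwest_corner([20, 30, 25], [10, 35, 30]))
    # -> [[10, 10, 0], [0, 25, 5], [0, 0, 25]]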

13.
In this paper, a new structure for cooperative learning automata called extended learning automata (eDLA) is introduced. Based on the new structure, an iterative randomized heuristic algorithm using sampling is proposed for finding an optimal subgraph in a stochastic edge-weighted graph. Stochastic graphs are graphs in which the weights of edges have an unknown probability distribution. The proposed algorithm uses an eDLA to find a policy that leads to a subgraph that satisfies some restrictions such as minimum or maximum weight (length). At each stage of the proposed algorithm, the eDLA determines which edges should be sampled. The proposed eDLA-based sampling method may reduce unnecessary samples and hence decrease the time required for finding an optimal subgraph. It is shown that the proposed method converges to an optimal solution, the probability of which can be made arbitrarily close to 1 by using a sufficiently small learning parameter. A new variance-aware threshold value is also proposed that can significantly improve the convergence rate of the proposed eDLA-based algorithm. It is further shown that our algorithm is competitive in terms of the quality of the solution.

14.
Networks of workstations (NOWs) provide an economical platform for high-performance parallel computing. Such networks may comprise a variety of different types of workstations and network devices. This paper addresses the problem of efficient multicast in a heterogeneous communication model. Although the problem of finding optimal multicast schedules is known to be NP-complete in this model, a greedy algorithm has been shown experimentally to find good solutions in practice. In this paper we show that the greedy algorithm finds provably near-optimal schedules in polynomial time and that optimal schedules can be found in polynomial time when the number of distinct types of workstations is bounded by a constant. Specifically, this paper presents three results. First, when there are n workstations of some constant k distinct types, the greedy algorithm is shown to find schedules that complete at most a constant additive term later than optimal. Second, an algorithm is given that finds optimal schedules in time O(n^{2k}). Finally, it is shown that for the general problem, the greedy algorithm finds solutions that complete the multicast in at most twice the optimal time.

15.
郭锐  彭军  吴敏 《计算机工程与应用》2005,41(13):36-38,146
Reinforcement learning is a branch of machine learning that improves a policy through interaction with the environment; its online and adaptive learning characteristics make it a powerful tool for policy optimization. Multi-agent systems are a research focus in artificial intelligence, and research on multi-agent learning techniques must be built on a model of the system environment. Because multiple agents are present and influence one another, a multi-agent system is highly complex, and its environment corresponds to a non-deterministic Markov model, so directly importing reinforcement learning techniques based on the Markov model into multi-agent systems is inappropriate. Based on an independent learning mechanism for each agent, this paper proposes an improved multi-agent Q-learning algorithm suited to non-deterministic Markov environments and studies the application of this learning technique in the multi-agent RoboCup system. Experiments demonstrate the effectiveness and generalization ability of the technique. Finally, directions and further work for research on multi-agent reinforcement learning are briefly outlined.
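A minimal sketch of the independent, per-agent Q-learning setup on which such an approach builds (a generic illustration; the paper's specific improvements for non-deterministic environments are not reproduced, and all names and constants are assumptions):

    # Hedged sketch of independent per-agent Q-learning: each agent keeps its
    # own table and updates it from its local observation, ignoring the others.
    import random
    from collections import defaultdict

    class IndependentQAgent:
        def __init__(self, actions, alpha=0.1, gamma=0.9, eps=0.1):
            self.q = defaultdict(float)          # (state, action) -> value
            self.actions, self.alpha, self.gamma, self.eps = actions, alpha, gamma, eps

        def act(self, state):
            if random.random() < self.eps:
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.q[(state, a)])

        def learn(self, state, action, reward, next_state):
            best_next = max(self.q[(next_state, a)] for a in self.actions)
            td = reward + self.gamma * best_next - self.q[(state, action)]
            self.q[(state, action)] += self.alpha * td

    agents = [IndependentQAgent(actions=[0, 1, 2]) for _ in range(3)]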

16.
This paper focuses on a pursuit-evasion game (PEG) involving two teams: one side consists of pursuers trying to minimize the time required to capture evaders, and the other side consists of evaders trying to maximize the capture time by escaping the pursuers. In this paper, we propose a hybrid pursuit policy for a probabilistic PEG, which possesses the combined merits of the local-max and global-max pursuit policies proposed in previous literature. A method to find optimal pursuit and evasion policies for the two competing parties, the pursuers and the evaders, is also proposed. For this, we employ an episodic parameter optimization (EPO) algorithm to learn good values for the weighting parameters of a hybrid pursuit policy and an intelligent evasion policy. The EPO algorithm is performed during numerous repeated simulation runs of the PEG, the reward of each episode is updated using reinforcement learning, and the optimal weighting parameters are selected by using particle swarm optimization. We analyze the trend of the optimal parameter values with respect to the number of pursuers and evaders. The proposed strategy is validated both in simulations and in experiments with small ground robots.

17.
On setup level tool sequence selection for 2.5-D pocket machining
This paper describes algorithms for efficiently machining an entire setup. Previously, the author developed a graph-based algorithm to find the optimal tool sequence for machining a single 2.5-axis pocket. This paper extends this algorithm for finding an efficient tool sequence to machine an entire setup. A setup consists of a set of features with precedence constraints that are machined when the stock is clamped in a particular orientation. The precedence constraints between the features primarily result from nesting of some features within others. Four extensions to the basic graph algorithm are investigated in this research. The first method finds optimal tool sequences on a feature-by-feature basis. This is a local optimization method that does not consider inter-feature tool-path interactions. The second method uses a composite graph for finding an efficient tool sequence for the entire setup. The constrained graph and subgraph approaches have been developed for situations where different features in the setup have distinct critical tools. It is found that the first two methods can produce erroneous results, which can lead to machine crashes and incomplete machining. Illustrative examples have been generated for each method.

18.
On optimization of expertise matching with various constraints
This paper studies the problem of expertise matching with various constraints. Expertise matching, which aims to find the alignment between experts and queries, is a common problem in many applications such as conference paper-reviewer assignment, product-reviewer alignment, and product-endorser matching. Most existing methods formalize this problem as an information-retrieval problem and focus on finding a set of experts for each query independently. However, in real-world systems, various constraints often need to be considered. For example, in order to review a paper, it is desirable that there is at least one senior reviewer to guide the reviewing process. An important question is: "Can we design a framework to efficiently find the optimal solution for expertise matching under various constraints?" This paper explores such an approach by formulating the expertise matching problem in a constraint-based optimization framework. In the proposed framework, the problem of expertise matching is linked to a convex cost flow problem, which guarantees an optimal solution under various constraints. We also present an online matching algorithm to support incorporating user feedback in real time. The proposed approach has been evaluated on two different genres of expertise matching problems, namely conference paper-reviewer assignment and teacher-course assignment. Experimental results validate the effectiveness of the proposed approach. Based on the proposed method, we have also developed an online system for paper-reviewer suggestions, which has been used for paper-reviewer assignment in a top conference, and feedback from the conference organizers has been very positive.
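To illustrate the cost-flow view of such a matching problem (a toy example with made-up experts, papers, capacities, and costs, using networkx rather than the authors' system):

    # Hedged sketch: a tiny paper-reviewer assignment cast as min-cost flow.
    import networkx as nx

    G = nx.DiGraph()
    experts = {"e1": 2, "e2": 1}            # expert -> max papers they can review
    papers = {"p1": 1, "p2": 2}             # paper  -> reviewers it needs
    cost = {("e1", "p1"): 1, ("e1", "p2"): 3,   # smaller = better expertise match
            ("e2", "p1"): 2, ("e2", "p2"): 1}

    total = sum(papers.values())
    G.add_node("src", demand=-total)        # source supplies all review slots
    G.add_node("sink", demand=total)        # sink absorbs them
    for e, cap in experts.items():
        G.add_edge("src", e, capacity=cap, weight=0)
    for p, need in papers.items():
        G.add_edge(p, "sink", capacity=need, weight=0)
    for (e, p), w in cost.items():
        G.add_edge(e, p, capacity=1, weight=w)

    flow = nx.min_cost_flow(G)
    assignments = [(e, p) for e in experts for p, f in flow[e].items() if f > 0]
    print(assignments)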

19.
Continuous-state partially observable Markov decision processes (POMDPs) are an intuitive choice of representation for many stochastic planning problems with a hidden state. We consider continuous-state POMDPs with finite action and observation spaces, where the POMDP is parametrised by weighted sums of Gaussians, or Gaussian mixture models (GMMs). In particular, we study the problem of optimising the selection of the measurement channel in such a framework. A new error bound for a point-based value iteration algorithm is derived, and a method for constructing a subset of belief states that attempts to reduce the error bound is implemented. In the experiments, applying continuous-state POMDPs for optimal selection of the measurement channel is demonstrated, and the performance of three GMM simplification methods is compared. Convergence of a point-based value iteration algorithm is investigated by considering various metrics for the obtained control policies.

20.
The aircraft collision avoidance problem can be formulated using a decision-theoretic planning framework where the optimal behavior requires balancing the competing objectives of avoiding collision and adhering to a flight plan. Due to noise in the sensor measurements and the stochasticity of intruder state trajectories, a natural representation of the problem is as a partially-observable Markov decision process (POMDP), where the underlying state of the system is Markovian and the observations depend probabilistically on the state. Many algorithms for finding approximate solutions to POMDPs exist in the literature, but they typically require discretization of the state and observation spaces. This paper investigates the introduction of a sample-based representation of state uncertainty to an existing algorithm called Real-Time Belief Space Search (RTBSS), which leverages branch-and-bound pruning to make searching the belief space for the optimal action more efficient. The resulting algorithm, called Monte Carlo Real-Time Belief Space Search (MC-RTBSS), is demonstrated on encounter scenarios in simulation using a beacon-based surveillance system and a probabilistic intruder model derived from recorded radar data.
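A minimal sketch of the kind of sample-based belief representation MC-RTBSS relies on: particles are propagated through a transition model and reweighted by the observation likelihood. The dynamics and sensor models below are toy placeholders, not the encounter model from the paper.

    # Hedged sketch of a particle (sample-based) belief update.
    import math
    import random

    def transition(state, action):
        """Toy stochastic dynamics: drift by the action plus Gaussian noise."""
        return state + action + random.gauss(0.0, 1.0)

    def obs_likelihood(obs, state):
        """Toy sensor model: unnormalized Gaussian measurement likelihood."""
        return math.exp(-0.5 * (obs - state) ** 2)

    def update_belief(particles, action, obs):
        propagated = [transition(s, action) for s in particles]
        weights = [obs_likelihood(obs, s) for s in propagated]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # Resample with replacement according to the weights.
        return random.choices(propagated, weights=weights, k=len(particles))

    belief = [random.gauss(0.0, 5.0) for _ in range(100)]
    belief = update_belief(belief, action=1.0, obs=2.5)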
