Similar documents
1.
仵博  吴敏  佘锦华 《软件学报》2013,24(1):25-36
Partially observable Markov decision processes (POMDPs) are an ideal model for sequential decision making in dynamic, uncertain environments, but existing offline algorithms suffer from the "curse of dimensionality" and the "curse of history" in the belief-state space, while existing online algorithms cannot simultaneously achieve low error and high real-time performance, preventing the otherwise attractive POMDP model from being applied in practical engineering. To address this, a point-based online value iteration algorithm for POMDPs (PBOVI) is proposed. The algorithm performs backup operations only at a given set of reachable belief points, avoiding a solution over the entire belief simplex and thus accelerating computation; it prunes the belief AND-OR tree online with a branch-and-bound method; and it introduces the idea of belief-node reuse, reusing belief points already solved at the previous time step to avoid repeated computation. Experimental results show that the algorithm achieves a low error rate and fast convergence and meets the system's real-time requirements.
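Not from the cited paper — a minimal numpy sketch of the standard point-based backup that algorithms in this family apply only at reachable belief points rather than over the whole belief simplex. The model layout (T, O, R arrays) and function names are assumptions made for illustration.

```python
# Illustrative sketch: one point-based Bellman backup at a single belief point.
import numpy as np

def point_based_backup(b, Gamma, T, O, R, gamma):
    """b     : (S,) belief vector
    Gamma : non-empty list of alpha-vectors, each of shape (S,)
    T     : (A, S, S) transition probs,  T[a, s, s'] = P(s'|s, a)
    O     : (A, S, Z) observation probs, O[a, s', z] = P(z|s', a)
    R     : (A, S)    immediate rewards
    Returns the new alpha-vector for b and the greedy action."""
    A, S, _ = T.shape
    Z = O.shape[2]
    best_val, best_alpha, best_action = -np.inf, None, None
    for a in range(A):
        g_a = R[a].astype(float)
        for z in range(Z):
            # g_{a,z}(s) = sum_{s'} T[a,s,s'] O[a,s',z] alpha(s'), maximised over Gamma at b
            candidates = [T[a] @ (O[a, :, z] * alpha) for alpha in Gamma]
            g_a = g_a + gamma * max(candidates, key=lambda g: float(b @ g))
        val = float(b @ g_a)
        if val > best_val:
            best_val, best_alpha, best_action = val, g_a, a
    return best_alpha, best_action
```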

2.
Partially observable Markov decision processes (POMDP) provide a mathematical framework for agent planning in stochastic and partially observable environments. The classic Bayesian optimal solution can be obtained by transforming the problem into a Markov decision process (MDP) using belief states. However, because the belief state space is continuous and multi-dimensional, the problem is highly intractable. Many practical heuristic-based methods have been proposed, but most of them require a complete POMDP model of the environment, which is not always practical. This article introduces a modified memory-based reinforcement learning algorithm called modified U-Tree that is capable of learning from raw sensor experiences with minimum prior knowledge. It describes an enhancement of the original U-Tree's state generation process that makes the generated model more compact, and also proposes a modification of the statistical test for reward estimation, which allows the algorithm to be benchmarked against some traditional model-based algorithms on a set of well-known POMDP problems.
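As an illustration of the kind of statistical test U-Tree-style algorithms use when deciding whether a candidate distinction is worth a split (not the article's modified test), a Kolmogorov-Smirnov comparison of the return samples induced by the split might look like the sketch below; the helper name and significance level are assumptions.

```python
# Illustrative sketch: split a fringe node only if the two return distributions differ.
from scipy.stats import ks_2samp

def should_split(returns_if_true, returns_if_false, alpha=0.05):
    """returns_*: discounted-return samples for instances on each side of the
    candidate distinction. Split when the distributions differ at level alpha."""
    if len(returns_if_true) < 2 or len(returns_if_false) < 2:
        return False          # not enough evidence to justify a split
    statistic, p_value = ks_2samp(returns_if_true, returns_if_false)
    return p_value < alpha
```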

3.
冯延蓬  仵博  郑红燕  孟宪军 《计算机工程》2012,38(11):96-99,103
To address the limited node energy and uncertain sensing information in target-tracking wireless sensor networks, an online node scheduling algorithm based on partially observable Markov decision processes is proposed. The uncertainty of the moving target is described by the state-transition and observation functions, the reward function balances tracking performance against node energy consumption, and a finite-depth reachable belief AND-OR tree is constructed to reduce computational complexity, so that the scheduling policy can be solved online. Experimental results show that the algorithm balances tracking quality against node energy consumption and meets real-time requirements.
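A hedged sketch (not the authors' implementation) of a finite-depth expansion over the reachable belief tree of the kind the abstract describes for computing an online policy; the array layout of the POMDP model is an assumption.

```python
# Illustrative sketch: depth-limited expectimax lookahead over reachable beliefs.
import numpy as np

def belief_update(b, a, z, T, O):
    """Bayes update: b'(s') proportional to O(z|s',a) * sum_s T(s'|s,a) b(s).
    Returns (next belief, P(z | b, a))."""
    unnorm = O[a, :, z] * (b @ T[a])
    total = unnorm.sum()
    return (unnorm / total, total) if total > 0 else (b, 0.0)

def lookahead(b, depth, T, O, R, gamma):
    """Return (value, best_action) from a finite-depth belief-tree expansion."""
    A, _, Z = O.shape
    if depth == 0:
        return float(max(b @ R[a] for a in range(A))), None  # myopic leaf estimate
    best_val, best_a = -np.inf, None
    for a in range(A):
        val = float(b @ R[a])
        for z in range(Z):
            b_next, p_z = belief_update(b, a, z, T, O)
            if p_z > 0:
                val += gamma * p_z * lookahead(b_next, depth - 1, T, O, R, gamma)[0]
        if val > best_val:
            best_val, best_a = val, a
    return best_val, best_a
```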

4.
仵博  吴敏 《控制与决策》2013,28(6):925-929
To deal with the doubly exponential size of the belief-state space in partially observable Markov decision processes (POMDPs), an online POMDP algorithm based on Monte Carlo particle filtering is proposed. First, particle filtering and particle projection are used, respectively, to update and expand belief states and to build a reachable belief AND-OR tree; the tree is then pruned with a branch-and-bound method to reduce the size of the problem to be solved. Experimental results show that the proposed algorithm achieves a low error rate and fast convergence and can meet the system's real-time requirements.
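A minimal sketch, not the paper's code, of the Monte Carlo particle-filter belief update such an algorithm builds on; the simulator interface (sample_transition, obs_likelihood) is an assumed abstraction.

```python
# Illustrative sketch: approximate belief update by propagate-weight-resample.
import numpy as np

def particle_filter_update(particles, action, observation,
                           sample_transition, obs_likelihood, rng=None):
    """particles: (N,) array of sampled states representing the current belief.
    sample_transition(s, a, rng) -> s'   draws a successor state
    obs_likelihood(z, s', a)     -> P(z | s', a)
    Returns a resampled particle set approximating the posterior belief."""
    rng = rng or np.random.default_rng()
    propagated = np.array([sample_transition(s, action, rng) for s in particles])
    weights = np.array([obs_likelihood(observation, s, action) for s in propagated],
                       dtype=float)
    if weights.sum() == 0:           # observation impossible under every particle
        weights = np.ones_like(weights)
    weights /= weights.sum()
    idx = rng.choice(len(propagated), size=len(propagated), p=weights)  # resample
    return propagated[idx]
```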

5.
In this paper, we address the problem of suboptimal behavior during online partially observable Markov decision process (POMDP) planning caused by time constraints on planning. Taking inspiration from the related field of reinforcement learning (RL), our solution is to shape the agent’s reward function in order to lead the agent to large future rewards without having to spend as much time explicitly estimating cumulative future rewards, enabling the agent to save time, improve the breadth of its planning, and build higher-quality plans. Specifically, we extend potential-based reward shaping (PBRS) from RL to online POMDP planning. In our extension, information about belief states is added to the function optimized by the agent during planning. This information provides hints of where the agent might find high future rewards beyond its planning horizon, and thus achieve greater cumulative rewards. We develop novel potential functions measuring information useful to agent metareasoning in POMDPs (reflecting on agent knowledge and/or histories of experience with the environment), theoretically prove several important properties and benefits of using PBRS for online POMDP planning, and empirically demonstrate these results in a range of classic benchmark POMDP planning problems.
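For concreteness, a small sketch of potential-based reward shaping applied to beliefs, with a negative-entropy potential standing in for the paper's information-based potential functions; the choice of potential here is an assumption, not the authors'.

```python
# Illustrative sketch: F(b, b') = gamma * Phi(b') - Phi(b) added to the planning reward.
import numpy as np

def belief_potential(b, eps=1e-12):
    """Negative entropy of the belief: larger when the agent is more certain."""
    b = np.asarray(b, dtype=float)
    return float(np.sum(b * np.log(b + eps)))

def shaped_reward(r, b, b_next, gamma):
    """Original reward plus the potential-based shaping term; shaping of this form
    is known not to change the optimal policy in the fully observable (MDP) case."""
    return r + gamma * belief_potential(b_next) - belief_potential(b)
```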

6.
Recent scaling up of partially observable Markov decision process (POMDP) solvers toward realistic applications is largely due to point-based methods that quickly converge to an approximate solution for medium-sized domains. These algorithms compute a value function for a finite reachable set of belief points, using backup operations. Point-based algorithms differ on the selection of the set of belief points and on the order by which backup operations are executed on the selected belief points. We first show how current algorithms execute a large number of backups that can be removed without reducing the quality of the value function. We demonstrate that the ordering of backup operations on a predefined set of belief points is important. In the simpler domain of MDP solvers, prioritizing the order of equivalent backup operations on states is known to speed up convergence. We generalize the notion of prioritized backups to the POMDP framework, showing how existing algorithms can be improved by prioritizing backups. We also present a new algorithm, prioritized value iteration, and show empirically that it outperforms current point-based algorithms. Finally, a new empirical evaluation measure (in addition to the standard runtime comparison), based on the number of atomic operations and the number of belief points, is proposed in order to provide more accurate benchmark comparisons.
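A brief sketch of the MDP analogue the abstract appeals to — value iteration with backups ordered by a Bellman-error priority queue — which the paper generalizes to backups over belief points; this is illustrative, not the authors' POMDP algorithm.

```python
# Illustrative sketch: prioritized value iteration for a small discrete MDP.
import heapq
import numpy as np

def prioritized_value_iteration(T, R, gamma, eps=1e-6, max_backups=100_000):
    """T: (A, S, S) transition probs, R: (A, S) rewards. Returns state values V."""
    A, S, _ = T.shape
    V = np.zeros(S)

    def bellman(s):
        return max(R[a, s] + gamma * T[a, s] @ V for a in range(A))

    # seed the queue with every state's initial Bellman error (max-heap via negation)
    heap = [(-abs(bellman(s) - V[s]), s) for s in range(S)]
    heapq.heapify(heap)
    for _ in range(max_backups):
        if not heap:
            break
        neg_prio, s = heapq.heappop(heap)
        if -neg_prio < eps:
            break                      # largest remaining error is below tolerance
        V[s] = bellman(s)              # back up the highest-priority state
        # re-prioritize predecessors whose backed-up value may now have changed
        for p in range(S):
            if any(T[a, p, s] > 0 for a in range(A)):
                err = abs(bellman(p) - V[p])
                if err > eps:
                    heapq.heappush(heap, (-err, p))
    return V
```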

7.
This paper describes a statistically motivated framework for performing real-time dialogue state updates and policy learning in a spoken dialogue system. The framework is based on the partially observable Markov decision process (POMDP), which provides a well-founded, statistical model of spoken dialogue management. However, exact belief state updates in a POMDP model are computationally intractable, so approximate methods must be used. This paper presents a tractable method based on the loopy belief propagation algorithm. Various simplifications are made, which improve the efficiency significantly compared to the original algorithm as well as to other POMDP-based dialogue state updating approaches. A second contribution of this paper is a method for learning in spoken dialogue systems which uses a component-based policy with the episodic Natural Actor Critic algorithm. The framework proposed in this paper was tested both in simulations and in a user trial. Both indicated that using Bayesian updates of the dialogue state significantly outperforms traditional definitions of the dialogue state. Policy learning worked effectively and the learned policy outperformed all others in simulations. In user trials the learned policy was also competitive, although its optimality was less conclusive. Overall, the Bayesian update of dialogue state framework was shown to be a feasible and effective approach to building real-world POMDP-based dialogue systems.

8.
仵博  吴敏 《计算机工程与设计》2007,28(9):2116-2119,2126
Partially observable Markov decision processes are solved by introducing a belief-state space to transform a non-Markovian problem into a Markovian one, and their ability to describe the real world makes them an important branch of research on stochastic decision processes. This paper introduces the basic principles and decision process of POMDPs, then presents three representative algorithms, namely the Witness algorithm of Littman et al., the Incremental Pruning algorithm, and the point-based value iteration algorithm of Pineau et al., and analyzes and compares the three. Applications of POMDPs are also discussed.
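As a small illustration of one operation that exact solvers such as Incremental Pruning rely on (not code from the surveyed papers), pruning pointwise-dominated alpha-vectors can be sketched as follows; full solvers additionally use LP-based pruning, which is omitted here.

```python
# Illustrative sketch: drop alpha-vectors dominated everywhere by another vector.
import numpy as np

def prune_pointwise_dominated(Gamma):
    """Gamma: list of (S,) alpha-vectors. Keep only vectors not pointwise dominated."""
    kept = []
    for i, alpha in enumerate(Gamma):
        dominated = any(
            j != i and np.all(Gamma[j] >= alpha) and np.any(Gamma[j] > alpha)
            for j in range(len(Gamma))
        )
        if not dominated:
            kept.append(alpha)
    return kept
```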

9.
Point-based value iteration methods are an effective class of algorithms for solving partially observable Markov decision process (POMDP) problems. Most current point-based value iteration algorithms explore the belief point set according to a single heuristic criterion, which limits their effectiveness. To address this, a value iteration algorithm that explores the belief point set with a hybrid heuristic criterion (HHVI) is proposed, maintaining both an upper and a lower bound on the value function. When expanding the explored point set, belief points whose upper-lower bound gap exceeds a threshold are selected for expansion, and among their successor beliefs whose gap also exceeds the threshold, the point farthest from the already-explored set is chosen for exploration, so that the explored points are distributed as effectively as possible over the reachable belief space. Experiments on four benchmark problems show that HHVI maintains convergence efficiency and converges to better global solutions.
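A hedged sketch of the selection heuristic the abstract describes (not the authors' code): among successor beliefs whose bound gap exceeds the threshold, pick the one farthest from the explored set; the bound estimators upper and lower are assumed to be supplied by the solver.

```python
# Illustrative sketch: hybrid selection of the next belief point to explore.
import numpy as np

def select_next_belief(successors, explored, upper, lower, threshold):
    """successors, explored: lists of (S,) belief vectors.
    upper(b), lower(b): value-function bound estimates. Returns a belief or None."""
    candidates = [b for b in successors if upper(b) - lower(b) > threshold]
    if not candidates:
        return None
    if not explored:
        return candidates[0]
    def dist_to_explored(b):
        # L1 distance to the nearest already-explored belief point
        return min(float(np.abs(b - e).sum()) for e in explored)
    return max(candidates, key=dist_to_explored)
```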

10.
In a partially observable Markov decision process (POMDP) with an unknown model, the agent cannot directly observe the true state of the environment, and the resulting perceptual uncertainty makes learning an optimal policy challenging. To address this, a deep double Q-network reinforcement learning algorithm incorporating contrastive predictive coding representations is proposed, which models the belief state explicitly to obtain a compact, efficient encoding of the history for policy optimization. To improve data efficiency, the concept of a belief replay buffer is introduced, storing belief-transition pairs directly rather than observation and action sequences so as to reduce memory usage. In addition, a staged training strategy decouples representation learning from policy learning to improve training stability. POMDP navigation tasks were designed on the Gym-MiniGrid environment, and experimental results show that the proposed algorithm captures state-related semantic information and thereby achieves stable, efficient policy learning under partial observability.
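A minimal sketch, assuming only the interface described in the abstract, of a belief replay buffer that stores belief-transition tuples directly rather than observation-action histories; it is illustrative, not the paper's implementation.

```python
# Illustrative sketch: replay buffer over belief transitions (b, a, r, b', done).
import random
from collections import deque

class BeliefReplayBuffer:
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)   # old transitions are evicted first

    def push(self, belief, action, reward, next_belief, done):
        self.buffer.append((belief, action, reward, next_belief, done))

    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)
        beliefs, actions, rewards, next_beliefs, dones = zip(*batch)
        return beliefs, actions, rewards, next_beliefs, dones

    def __len__(self):
        return len(self.buffer)
```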

11.
We address the problem of online path planning for optimal sensing with a mobile robot. The objective of the robot is to learn the most about its pose and the environment given time constraints. We use a POMDP with a utility function that depends on the belief state to model the finite horizon planning problem. We replan as the robot progresses throughout the environment. The POMDP is high-dimensional, continuous, non-differentiable, nonlinear, non-Gaussian and must be solved in real-time. Most existing techniques for stochastic planning and reinforcement learning are therefore inapplicable. To solve this extremely complex problem, we propose a Bayesian optimization method that dynamically trades off exploration (minimizing uncertainty in unknown parts of the policy space) and exploitation (capitalizing on the current best solution). We demonstrate our approach with a visually guided mobile robot. The solution proposed here is also applicable to other closely-related domains, including active vision, sequential experimental design, dynamic sensing and calibration with mobile sensors.
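A generic sketch of a Bayesian-optimisation loop with a Gaussian-process surrogate and a UCB acquisition rule that trades exploration (posterior uncertainty) against exploitation (posterior mean); the surrogate library, one-dimensional search space, acquisition rule, and constants are all assumptions, not the authors' planner.

```python
# Illustrative sketch: Bayesian optimisation of an expensive 1-D objective.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def bayes_opt(objective, bounds=(0.0, 1.0), n_init=3, n_iter=15, kappa=2.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(*bounds, size=(n_init, 1))              # initial random probes
    y = np.array([objective(float(x)) for x in X.ravel()])
    gp = GaussianProcessRegressor(normalize_y=True)
    grid = np.linspace(*bounds, 200).reshape(-1, 1)          # candidate points
    for _ in range(n_iter):
        gp.fit(X, y)
        mu, sigma = gp.predict(grid, return_std=True)
        x_next = grid[np.argmax(mu + kappa * sigma)]          # UCB acquisition
        y_next = objective(float(x_next[0]))
        X = np.vstack([X, x_next.reshape(1, 1)])
        y = np.append(y, y_next)
    best = int(np.argmax(y))
    return float(X[best, 0]), float(y[best])

# e.g. bayes_opt(lambda x: -(x - 0.3) ** 2) finds a maximiser near x = 0.3
```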

12.
Continuous-state partially observable Markov decision processes (POMDPs) are an intuitive choice of representation for many stochastic planning problems with a hidden state. We consider continuous-state POMDPs with finite action and observation spaces, where the POMDP is parametrised by weighted sums of Gaussians, or Gaussian mixture models (GMMs). In particular, we study the problem of optimising the selection of the measurement channel in such a framework. A new error bound for a point-based value iteration algorithm is derived, and a method for constructing a subset of belief states that attempts to reduce the error bound is implemented. In the experiments, applying continuous-state POMDPs for optimal selection of the measurement channel is demonstrated, and the performance of three GMM simplification methods is compared. Convergence of a point-based value iteration algorithm is investigated by considering various metrics for the obtained control policies.

13.
To maintain human-like active balance for a humanoid robot, this paper proposes a novel adaptive non-parametric foot positioning compensation approach that can modify the predefined step position and step duration online with sensor feedback. A constrained inverted pendulum model that takes into account the restriction the support area places on CoM acceleration is used to generate offline training samples via constrained nonlinear optimization. To speed up real-time computation and make the online model adjustable, a non-parametric regression model based on an extended Gaussian process model is applied for online foot positioning compensation. In addition, a real-time, sample-efficient local adaptation algorithm is proposed for the non-parametric model to enable online adaptation of foot positioning compensation on a humanoid system. Simulation and experiments on a full-body humanoid robot validate the effectiveness of the proposed method.

14.
李响 《计算机学报》2007,30(6):999-1004
Planning is an important direction in artificial intelligence research with extremely broad applications. POMDPRS is a continual planning system that combines the continual planning mechanism of PRS, the probabilistic belief model of POMDPs, and the principle of maximum utility, and it adapts well to dynamic, uncertain environments. However, belief updating over large state spaces is the bottleneck that limits it as a real-time system. This paper introduces Monte Carlo filtering into POMDPRS in order to reduce the complexity of belief updating and meet the system's real-time requirements.

15.
Monte-Carlo tree search for Bayesian reinforcement learning
Bayesian model-based reinforcement learning can be formulated as a partially observable Markov decision process (POMDP) to provide a principled framework for optimally balancing exploitation and exploration. A POMDP solver can then be used to solve the problem. If the prior distribution over the environment’s dynamics is a product of Dirichlet distributions, the POMDP’s optimal value function can be represented using a set of multivariate polynomials. Unfortunately, the size of the polynomials grows exponentially with the problem horizon. In this paper, we examine the use of an online Monte-Carlo tree search (MCTS) algorithm for large POMDPs to solve the Bayesian reinforcement learning problem online. We show that such an algorithm successfully searches for a near-optimal policy. In addition, we examine the use of a parameter tying method to keep the model search space small, and propose the use of a nested mixture of tied models to increase the robustness of the method when our prior information does not allow us to specify the structure of tied models exactly. Experiments show that the proposed methods substantially improve the scalability of current Bayesian reinforcement learning methods.
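A small sketch of the Dirichlet bookkeeping that underlies Bayesian model-based RL when the prior over the dynamics is a product of Dirichlets: maintaining transition counts and sampling complete dynamics from the posterior; the class layout is an assumption, not the paper's solver.

```python
# Illustrative sketch: Dirichlet posterior over transition dynamics.
import numpy as np

class DirichletDynamics:
    def __init__(self, n_states, n_actions, prior=1.0):
        # counts[s, a, s'] are the Dirichlet parameters of P(.|s, a)
        self.counts = np.full((n_states, n_actions, n_states), prior, dtype=float)

    def update(self, s, a, s_next):
        """Posterior update after observing the transition (s, a) -> s_next."""
        self.counts[s, a, s_next] += 1.0

    def sample_mdp(self, rng=None):
        """Draw one set of transition distributions from the current posterior."""
        rng = rng or np.random.default_rng()
        S, A, _ = self.counts.shape
        return np.array([[rng.dirichlet(self.counts[s, a]) for a in range(A)]
                         for s in range(S)])                  # shape (S, A, S)

    def expected_dynamics(self):
        """Posterior mean P(s'|s, a) = counts / row sums."""
        return self.counts / self.counts.sum(axis=-1, keepdims=True)
```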

16.
This work addresses the problem of decision-making under uncertainty for robot navigation. Since robot navigation is most naturally represented in a continuous domain, the problem is cast as a continuous-state POMDP. Probability distributions over state space, or beliefs, are represented in parametric form using low-dimensional vectors of sufficient statistics. The belief space, over which the value function must be estimated, has dimensionality equal to the number of sufficient statistics. Compared to methods based on discretising the state space, this work trades the loss of the belief space’s convexity for a reduction in its dimensionality and an efficient closed-form solution for belief updates. Fitted value iteration is used to solve the POMDP. The approach is empirically compared to a discrete POMDP solution method on a simulated continuous navigation problem. We show that, for a suitable environment and parametric form, the proposed method is capable of scaling to large state spaces.

17.
Trial-based value iteration algorithms are an effective class of methods for solving partially observable Markov decision process (POMDP) models, and FSVI is currently one of the fastest among them. For larger POMDP problems, however, the time FSVI spends computing the underlying MDP value function cannot be ignored. A value iteration algorithm based on the shortest Hamiltonian path (SHP-VI) is proposed. The method computes an optimal belief-state trajectory with an ant colony algorithm for the shortest Hamiltonian path problem, and then updates the value function over these belief states in reverse order. Experimental comparison with FSVI shows that SHP-VI greatly improves the efficiency with which trial-based algorithms compute belief-state trajectories.
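As a stand-in for the paper's ant colony solver (explicitly not the authors' method), a greedy nearest-neighbour ordering of belief points into a Hamiltonian path illustrates the kind of trajectory along which a trial-based solver can then back up the value function in reverse; the L1 distance is an assumption.

```python
# Illustrative sketch: greedy nearest-neighbour Hamiltonian path over belief points.
import numpy as np

def greedy_hamiltonian_path(beliefs, start=0):
    """beliefs: list of (S,) belief vectors. Returns an index ordering visiting each once."""
    unvisited = set(range(len(beliefs)))
    path = [start]
    unvisited.remove(start)
    while unvisited:
        last = beliefs[path[-1]]
        nxt = min(unvisited, key=lambda i: float(np.abs(beliefs[i] - last).sum()))
        path.append(nxt)
        unvisited.remove(nxt)
    return path   # value-function backups would then run over reversed(path)
```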

18.
于丹宁  倪坤  刘云龙 《计算机工程》2021,47(2):90-94,102
QMDP-net, a value iteration algorithm for partially observable Markov decision processes (POMDPs) based on convolutional neural networks, performs well without prior knowledge, but it suffers from optimization problems such as unstable training and parameter sensitivity. A POMDP value iteration algorithm based on a recurrent convolutional neural network, RQMDP-net, is proposed; it uses a gated recurrent unit network to implement the value iteration update, preserving the convolutional structure of the input and recurrent weight matrices while strengthening the network's temporal…

19.
For the problem of online smoothing optimization in real-time VBR video streaming, a funnel-based shortest-path smoothing algorithm, SPSF, is proposed. SPSF processes the real-time VBR video in segments using a sliding window, reading and buffering each video frame into the window in sequence, and computes the shortest path for the data in the window based on the funnel principle. Once the window is full, the data are transmitted along the computed shortest path, and the window is advanced according to the characteristics of the path so that the next segment of data can be smoothed and transmitted, continuing in this way until the whole video has been transmitted smoothly. Experimental results show that, compared with traditional online smoothing algorithms, SPSF achieves a better peak transmission bit rate, trough transmission bit rate, and transmission bit-rate variance; compared with traditional shortest-path algorithms, SPSF solves the shortest path faster and improves the real-time performance of video transmission.
