Similar Documents
A total of 18 similar documents were found.
1.
Trial-based value iteration algorithms are an effective class of methods for solving partially observable Markov decision process (POMDP) models, and FSVI is currently one of the fastest among them. For larger POMDP problems, however, the time FSVI spends computing the underlying MDP value function is not negligible. This paper proposes a value iteration algorithm based on the shortest Hamiltonian path (shortest Hamiltonian path-based value iteration, SHP-VI). The method computes an optimal belief-state trajectory with an ant colony algorithm for the shortest Hamiltonian path problem, and then updates the value function backwards over these belief states. Experimental comparison with FSVI shows that SHP-VI greatly improves the efficiency with which trial-based algorithms compute belief-state trajectories.
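A minimal sketch of the reverse backup described above, assuming a discounted POMDP given as numpy arrays T[a,s,s'], O[a,s',o], R[a,s]; the belief trajectory is taken as given (SHP-VI obtains it from an ant-colony tour of a shortest Hamiltonian path), and all names here are illustrative rather than the authors' code.

```python
import numpy as np

def point_backup(b, Gamma, T, O, R, gamma):
    """One point-based Bellman backup at belief b, returning a new alpha vector.
    T[a,s,s'], O[a,s',o], R[a,s] are the POMDP model arrays; Gamma is a list of alpha vectors."""
    nA, nS, _ = T.shape
    nO = O.shape[2]
    best_vec, best_val = None, -np.inf
    for a in range(nA):
        g_a = np.array(R[a], dtype=float)
        for o in range(nO):
            # g_{a,o,alpha}(s) = sum_{s'} T[a,s,s'] O[a,s',o] alpha(s'); keep the alpha best for (b,a,o)
            cands = [T[a] @ (O[a][:, o] * alpha) for alpha in Gamma]
            g_a += gamma * max(cands, key=lambda g: b @ g)
        if b @ g_a > best_val:
            best_vec, best_val = g_a, b @ g_a
    return best_vec

def backup_trajectory(belief_traj, Gamma, T, O, R, gamma):
    """Update the value function backwards over a belief-state trajectory, as SHP-VI does."""
    for b in reversed(belief_traj):
        Gamma.append(point_backup(np.asarray(b, dtype=float), Gamma, T, O, R, gamma))
    return Gamma
```

Gamma must start non-empty, e.g. with the single lower-bound vector `np.full(nS, R.min() / (1 - gamma))`.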

3.
仵博  吴敏 《计算机工程与设计》2007,28(9):2116-2119,2126
Partially observable Markov decision processes are solved by introducing a belief-state space that turns a non-Markovian problem into a Markovian one; their ability to describe the real world makes them an important branch in the study of stochastic decision processes. This paper introduces the basic principles and decision procedure of partially observable Markov decision processes, then presents three representative algorithms, namely the Witness algorithm of Littman et al., the Incremental Pruning algorithm, and the point-based value iteration algorithm of Pineau et al., and compares and analyzes them. Applications of partially observable Markov decision processes are also described.
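The belief-state transformation mentioned above is the standard Bayesian filter; a small illustrative implementation (the array names T and O are assumptions, not from the paper):

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """Bayes update of a belief vector b after taking action a and observing o.
    T[a,s,s'] = P(s'|s,a), O[a,s',o] = P(o|a,s')."""
    b_next = O[a][:, o] * (b @ T[a])   # elementwise over s': P(o|a,s') * sum_s b(s) P(s'|s,a)
    norm = b_next.sum()
    if norm == 0.0:
        raise ValueError("observation o has zero probability under belief b and action a")
    return b_next / norm
```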

4.
仵博  吴敏  佘锦华 《软件学报》2013,24(1):25-36
Partially observable Markov decision processes (POMDPs) are an ideal model for sequential decision making in dynamic, uncertain environments, but existing offline algorithms suffer from the "curse of dimensionality" and the "curse of history" in the belief-state space, while existing online algorithms cannot satisfy low error and real-time requirements at the same time, so the otherwise attractive POMDP model is rarely applied in practical engineering. This paper proposes a point-based online value iteration algorithm for POMDPs (point-based online value iteration, PBOVI). The algorithm performs backup operations only on a given set of reachable belief points, avoiding solving over the entire belief-state simplex and thus accelerating the solution; it prunes the belief-state AND/OR tree online with a branch-and-bound method; and it reuses belief-state nodes already solved at the previous step, avoiding repeated computation. Experimental results show that the algorithm has a low error rate and fast convergence, and meets the real-time requirements of the system.
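A sketch of two of the ingredients described above, node reuse and branch-and-bound pruning; the cache key (a rounded belief tuple) and the `children` dictionary layout are assumptions made for illustration, not the paper's data structures.

```python
import numpy as np

def belief_key(b, decimals=6):
    """Hashable key for a belief point, so nodes solved at the previous step can be found again."""
    return tuple(np.round(np.asarray(b, dtype=float), decimals))

class BeliefNodeCache:
    """Node reuse: store values of belief points solved at the previous decision step."""
    def __init__(self):
        self._values = {}
    def lookup(self, b):
        return self._values.get(belief_key(b))       # None if the node was never solved
    def store(self, b, value):
        self._values[belief_key(b)] = value

def prune_branches(children, best_lower):
    """Branch-and-bound pruning of an AND/OR-tree layer: keep only branches whose upper
    bound can still beat the best lower bound found so far."""
    return [c for c in children if c["upper"] >= best_lower]
```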

5.
Planning methods based on Markov decision processes (MDPs) can handle many kinds of planning under uncertainty. Value iteration (VI) is the classical algorithm for solving MDPs, but VI must compute and update the value of every state, which makes the solution process rather slow. Based on an analysis of the causal dependencies in the MDP state graph, an improved value iteration algorithm called sequential value iteration (SVI) is proposed. It first decomposes an MDP into multiple strongly connected components in topological order and then applies value iteration to each component in sequence; this avoids computation on a large number of irrelevant states and arranges the usable states into a topological sequence. Comparative experiments demonstrate the effectiveness and superior performance of the algorithm.
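A compact sketch of the SVI scheme under stated assumptions: the state graph is condensed into strongly connected components (here with networkx), the components are processed so that all successor components are solved first, and plain value iteration runs inside each component. The model interface (`succ`, `actions`, `P`, `R`) is hypothetical, and every state is assumed to have at least one action.

```python
import networkx as nx

def scc_solve_order(succ):
    """SCCs of the state graph in reverse topological order: each component is returned
    only after every component it can transition into."""
    G = nx.DiGraph([(u, v) for u, vs in succ.items() for v in vs])
    G.add_nodes_from(succ)                       # keep states with no outgoing edges
    C = nx.condensation(G)                       # DAG of SCCs, node attribute 'members'
    return [C.nodes[c]["members"] for c in reversed(list(nx.topological_sort(C)))]

def sequential_value_iteration(succ, actions, P, R, gamma, tol=1e-6):
    """SVI-style solve: run ordinary value iteration one strongly connected component
    at a time, so values outside the current component are already fixed.
    P[s][a] is a list of (s2, prob) pairs, R[s][a] the immediate reward."""
    V = {s: 0.0 for s in succ}
    for comp in scc_solve_order(succ):
        while True:
            delta = 0.0
            for s in comp:
                v_new = max(R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                            for a in actions[s])
                delta = max(delta, abs(v_new - V[s]))
                V[s] = v_new
            if delta < tol:
                break
    return V
```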

6.
Point-based value iteration methods are an effective class of algorithms for solving partially observable Markov decision process (POMDP) problems. Most existing point-based value iteration algorithms explore the belief set with a single heuristic criterion, which limits their effectiveness. This paper therefore proposes a value iteration algorithm that explores the belief set with a hybrid criterion (HHVI) and maintains both an upper and a lower bound on the value function. When expanding the explored point set, it selects belief points whose bound gap exceeds a threshold, and among their successor belief points whose gap also exceeds the threshold it explores the one farthest from the already explored set, ensuring that the explored points are distributed as effectively as possible over the reachable belief space. Experiments on four benchmark problems show that HHVI guarantees convergence efficiency and converges to a better global optimum.
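A sketch of the hybrid selection rule described above: expand beliefs whose bound gap exceeds a threshold, and among a candidate's wide-gap successors pick the one farthest from the explored set. The function names and the Euclidean distance choice are illustrative assumptions.

```python
import numpy as np

def beliefs_to_expand(explored, upper, lower, eps):
    """Belief points whose value-bound gap upper(b) - lower(b) still exceeds the threshold."""
    return [b for b in explored if upper(b) - lower(b) > eps]

def farthest_wide_gap_successor(successors, upper, lower, explored, eps):
    """Among successor beliefs with a bound gap above eps, pick the one farthest (Euclidean
    distance) from the already explored set, so exploration spreads over the reachable space."""
    candidates = [b for b in successors if upper(b) - lower(b) > eps]
    if not candidates:
        return None
    if not explored:
        return candidates[0]
    dist = lambda b: min(np.linalg.norm(np.asarray(b) - np.asarray(e)) for e in explored)
    return max(candidates, key=dist)
```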

7.
仵博  吴敏 《控制与决策》2007,22(12):1417-1420
Solving over the belief-state space of a partially observable Markov decision process (POMDP) is an NP-hard problem. A belief state space compression (BSSC) algorithm is proposed that compresses the high-dimensional belief-state space to a lower dimension, using a dynamic Bayesian network to compress the state transition function, observation function, and reward function, thereby reducing the size of the problem and enabling real-time decision making. Comparative experiments show that the proposed algorithm can quickly compute the optimal policy and the optimal value function.

8.
A factored belief-state space compression algorithm based on dynamic Bayesian networks
To address the "curse of dimensionality" of the belief-state space in partially observable Markov decision processes (POMDPs), and exploiting the fact that belief-state variables can be factored and exhibit independence relations, a factored belief states space compression (FBSSC) algorithm based on dynamic Bayesian networks (DBNs) is proposed. The algorithm builds a dependency graph over the variables, removes redundant edges according to independence tests, and decomposes the joint transition probability into a product of conditional probabilities, achieving lossless compression of the belief-state space. Comparative experiments and RoboCupRescue simulation results show that the algorithm has a low error rate, good convergence, and broad applicability.
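The factorization described above replaces one joint transition table with a product of small conditionals, as in a dynamic Bayesian network. A toy illustration with two state variables (the variable names and CPT layout are invented for the example):

```python
def joint_transition_prob(x1, x2, a, x1_next, x2_next, cpt1, cpt2):
    """Factored transition: P(x1', x2' | x1, x2, a) = P(x1' | x1, a) * P(x2' | x2, x1, a).
    cpt1 and cpt2 are the two conditional probability tables of the DBN."""
    return cpt1[(x1, a)][x1_next] * cpt2[(x2, x1, a)][x2_next]
```

With n binary state variables the joint table needs on the order of 2^(2n) entries per action, while the factored form stores only the small conditional tables, which is where the compression comes from.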

9.
For a class of hierarchical unstructured P2P systems that exploit the respective advantages of centralized and distributed architectures, a Markov switching-space model is defined to describe their dynamic group-switching behaviour. Based on the theory of Markov decision processes, policy iteration and online policy iteration algorithms for the performance criterion are derived, and example simulations demonstrate the advantages of the method.

10.
For the optimal production control of unreliable stochastic manufacturing systems with a diffusion term, a numerical method is used to solve the mode-coupled nonlinear partial differential HJB equation satisfied by the optimal control. A Markov chain is first constructed to approximate the evolution of the production system state and, based on the principle of local consistency, the continuous-time stochastic control problem is converted into a discrete-time Markov decision process problem; value iteration and policy iteration algorithms are then used to carry out the numerical solution of the optimal control. Simulation results at the end of the paper verify the correctness and effectiveness of the method.

11.
Iterative optimization algorithms for Markov control processes with compact action sets
This paper studies optimization algorithms for a class of continuous-time Markov control processes (CTMCPs) with compact action sets under the average-cost performance criterion. Based on the performance-potential formula for CTMCPs and the average-cost optimality equation, a policy iteration algorithm and a numerical (value) iteration algorithm for computing optimal or suboptimal stationary control policies are derived, and convergence proofs for both algorithms are given without assuming that the iteration operator is an sp- (span) contraction. Finally, the analysis of a controlled queueing network example illustrates the advantages of the approach.

12.
A two-player turn-based stochastic game (2-TBSG) is a game model in which two players seek a Nash equilibrium; such games are widely used in reinforcement learning and AI. Inspired by the fact that the simplex method for solving deterministic discounted Markov decision processes is strongly polynomial independently of the discount factor, we address the open problem of whether a similar algorithm exists for 2-TBSGs. We develop a simplex strategy iteration, in which one player updates its strategy with a simplex step while the other player finds an optimal counterstrategy in turn, as well as a modified simplex strategy iteration. Both belong to a class of geometrically converging algorithms. We establish the strongly polynomial property of these algorithms by considering a strategy combined from the current strategy and the equilibrium strategy. Moreover, we present a method to transform general 2-TBSGs into special 2-TBSGs in which each state has exactly two actions.

13.
This communique presents an algorithm called "value set iteration" (VSI) for solving infinite-horizon discounted Markov decision processes with finite state and action spaces, as a simple generalization of value iteration (VI) and as a counterpart to Chang's policy set iteration. VSI generates a sequence of value functions by manipulating a set of value functions at each iteration, and the sequence converges to the optimal value function. VSI preserves the convergence properties of VI while converging no slower than VI; in particular, if the set used in VSI contains the value functions of sample policies generated independently from a given distribution together with a properly defined policy-switching policy, a probabilistic exponential convergence rate of VSI can be established. Because the set used in VSI can contain the value functions of any policies generated by other existing algorithms, VSI also serves as a general framework for combining multiple solution methods.
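The abstract does not give the exact update, but one simple instance of "manipulating a set of value functions" is to take their pointwise maximum and then apply an ordinary Bellman backup; the sketch below shows that reading and is not claimed to be the paper's precise VSI operator.

```python
import numpy as np

def vsi_step(value_set, P, R, gamma):
    """One illustrative value-set-iteration step: combine a set of value functions by
    pointwise maximum, then apply the standard Bellman optimality backup.
    P[a] is an (S, S) transition matrix, R[a] an (S,) reward vector."""
    V = np.max(np.stack(value_set), axis=0)                 # combine the set
    Q = np.stack([R[a] + gamma * P[a] @ V for a in range(len(P))])
    return np.max(Q, axis=0)                                # next value function
```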

14.
We present a Reinforcement Learning (RL) algorithm based on policy iteration for solving average-reward Markov and semi-Markov decision problems. The literature on discounted-reward RL includes algorithms based on policy iteration as well as actor-critic algorithms. Our algorithm is an asynchronous, model-free algorithm (usable on large-scale problems) that hinges on computing the value function of a given policy and searching over the policy space. In the applied operations research community, RL has been used to derive good solutions to problems previously considered intractable; hence we test the proposed algorithm on a commercially significant case study from the airline industry. It focuses on yield management, which has been hailed as the key factor for generating profits in the airline industry. In the experiments, we use our algorithm with a nearest-neighbor approach to tackle a large state space. We also present a convergence analysis of the algorithm via an ordinary differential equation method.

15.
A longest-queue-first distributed iterative scheduling algorithm is proposed. Unlike existing algorithms, it is designed around the characteristics of a scalable switch-scheduling architecture and arranges two iterations for the scheduler at the highest priority level. The first iteration finds the longest virtual output queue (VOQ) and is completed one time slot before the highest-priority slot, shortening the signal processing time. Simulation results show that, compared with existing algorithms, the delay performance and throughput under heavy uniform traffic are markedly improved, while the hardware cost of the algorithm is small, achieving a good trade-off between performance and complexity.
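A sketch of the longest-VOQ selection carried out in the extra first iteration; the queue-length matrix layout and function name are assumptions for illustration.

```python
def longest_voq_requests(queue_len):
    """For each input port, request the output whose virtual output queue (VOQ) is longest.
    queue_len[i][j] is the backlog of the VOQ at input i destined to output j."""
    requests = {}
    for i, row in enumerate(queue_len):
        if any(row):
            requests[i] = max(range(len(row)), key=lambda j: row[j])
    return requests   # input -> requested output; a later grant/accept phase resolves conflicts
```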

16.
In this paper we tackle the sailing strategies problem, a stochastic shortest-path Markov decision process. The problem of solving large Markov decision processes accurately and quickly is challenging. Because the computational effort incurred is considerable, current research focuses on finding superior acceleration techniques. For instance, the convergence properties of current solution methods depend, to a great extent, on the order of backup operations. On one hand, algorithms such as topological sorting are able to find good orderings, but their overhead is usually high. On the other hand, shortest path methods, such as Dijkstra's algorithm, which is based on priority queues, have been applied successfully to the solution of deterministic shortest-path Markov decision processes. Here, we propose improved value iteration algorithms based on Dijkstra's algorithm for solving shortest path Markov decision processes. The experimental results on a stochastic shortest-path problem show the feasibility of our approach.
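A sketch of backing up states in a priority-queue order, in the spirit of the Dijkstra-based value iteration described above (closer to generic prioritized sweeping than to the paper's exact algorithm); the model interface is an assumption.

```python
import heapq
from itertools import count

def prioritized_value_iteration(states, actions, P, cost, goal, tol=1e-6):
    """Backs up the state with the largest Bellman residual first, using a max-priority
    queue in the spirit of Dijkstra's algorithm. P[s][a] is a list of (s2, prob) pairs,
    cost[s][a] the immediate cost, goal a set of absorbing zero-cost states."""
    V = {s: 0.0 for s in states}
    pred = {s: set() for s in states}
    for s in states:
        for a in actions[s]:
            for s2, _ in P[s][a]:
                pred[s2].add(s)

    def bellman(s):
        if s in goal:
            return 0.0
        return min(cost[s][a] + sum(p * V[s2] for s2, p in P[s][a]) for a in actions[s])

    tie = count()                      # tie-breaker so states themselves are never compared
    heap = [(-abs(bellman(s) - V[s]), next(tie), s) for s in states]
    heapq.heapify(heap)
    while heap:
        neg_res, _, s = heapq.heappop(heap)
        if -neg_res < tol:             # largest remaining residual is below tolerance
            break
        V[s] = bellman(s)
        for sp in pred[s]:             # a change at s may invalidate its predecessors
            res = abs(bellman(sp) - V[sp])
            if res > tol:
                heapq.heappush(heap, (-res, next(tie), sp))
    return V
```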

17.
We propose a unified framework to Markov decision problems and performance sensitivity analysis for multichain Markov processes with both discounted and average-cost performance criteria. With the fundamental concept of performance potentials, we derive both performance-gradient and performance-difference formulas, which play the central role in performance optimization. The standard policy iteration algorithms for both discounted- and average-reward MDPs can be established using the performance-difference formulas in a simple and intuitive way; and the performance-gradient formulas together with stochastic approximation may lead to new optimization schemes. This sensitivity-based point of view of performance optimization provides some insights that link perturbation analysis, Markov decision processes, and reinforcement learning together. The research is an extension of the previous work on ergodic Markov chains (Cao, Automatica 36 (2000) 771).
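For reference, the standard average-reward performance-difference formula from the performance-potential literature, written here for ergodic chains (the paper extends the picture to multichain and discounted criteria); pi' is the stationary distribution under the transition matrix P', f and f' are the reward vectors, and g is the performance potential of (P, f):

```latex
\[
  \eta' - \eta \;=\; \pi'\bigl[(f' - f) + (P' - P)\,g\bigr].
\]
```

Because pi' is componentwise nonnegative, any policy that makes the bracketed term nonnegative in every state and positive in some state strictly improves the average reward, which is exactly how the policy improvement step is justified.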

18.
The problem of solving large Markov decision processes accurately and quickly is challenging. Since the computational effort incurred is considerable, current research focuses on finding superior acceleration techniques. For instance, the convergence properties of current solution methods depend, to a great extent, on the order of backup operations. On one hand, algorithms such as topological sorting are able to find good orderings but their overhead is usually high. On the other hand, shortest path methods, such as Dijkstra's algorithm which is based on priority queues, have been applied successfully to the solution of deterministic shortest-path Markov decision processes. Here, we propose an improved value iteration algorithm based on Dijkstra's algorithm for solving shortest path Markov decision processes. The experimental results on a stochastic shortest-path problem show the feasibility of our approach.
