Similar Literature
Found 19 similar documents (search time: 140 ms)
1.
Two abstraction modes for Markov decision processes
The introduction of Markov decision processes at the level of abstraction lets complex Markov decision processes be expressed concisely and declaratively, addressing the large state-space representation problem that conventional Markov decision processes (MDPs) run into in practice. This paper introduces the basic concepts of two different types of abstract Markov decision processes, structural and generalized, along with exact and approximate optimal-policy algorithms for the typical abstract MDPs, including one algorithm fundamentally different from those for conventional MDPs: extending the Bellman equation to the abstract state space. The paper also reviews the history of this line of research and offers an outlook on its development, so that readers gain a thorough, comprehensive, and well-focused understanding.
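As an illustrative aside (the notation is assumed here, not taken from the paper), the extension just mentioned amounts to replacing concrete states with abstract states in the Bellman optimality equation:

    % Bellman optimality equation lifted to an abstract state space
    % (illustrative notation; the paper's definitions may differ).
    \[
    V^*(\bar{s}) \;=\; \max_{a \in A(\bar{s})}
      \Big[ \bar{R}(\bar{s}, a) + \gamma \sum_{\bar{s}'} \bar{P}(\bar{s}' \mid \bar{s}, a)\, V^*(\bar{s}') \Big]
    \]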

2.
Entropy measures for the complexity of Markov decision processes
Shannon entropy and other entropy indices are applied to measure the complexity of Markov decision processes. Measures of the complexity, uncertainty, and unpredictability of Markov chains are extended to Markov decision processes, giving a set of information-theoretic complexity measures that apply to fully and partially observed Markov decision processes under both stochastic and deterministic policies. Simulations of the relevant quantities are carried out and the computed results are reported.
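As an illustrative sketch (the function name, the 2-state chain, and its numbers are assumptions, not from the paper), the entropy rate of the Markov chain induced by a fixed policy can be computed from its stationary distribution:

    import numpy as np

    def entropy_rate(P):
        """Entropy rate H = -sum_i pi_i sum_j P_ij log2 P_ij of a Markov
        chain with transition matrix P, via its stationary distribution."""
        # Stationary distribution: left eigenvector of P for eigenvalue 1.
        evals, evecs = np.linalg.eig(P.T)
        pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
        pi = pi / pi.sum()
        logP = np.zeros_like(P)
        np.log2(P, out=logP, where=P > 0)   # convention: 0 * log 0 = 0
        return float(-np.sum(pi[:, None] * P * logP))

    # Chain induced by a fixed policy on a 2-state MDP (illustrative numbers).
    P = np.array([[0.9, 0.1],
                  [0.4, 0.6]])
    print(entropy_rate(P))   # ~0.57 bits per step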

3.
仵博, 吴敏. 《计算机工程与设计》, 2007, 28(9): 2116-2119, 2126
A partially observable Markov decision process is solved by introducing a belief state space, which turns a non-Markovian problem into a Markovian one; its power to describe the real world makes it an important branch of research on stochastic decision processes. This paper introduces the basic principles and the decision procedure of partially observable Markov decision processes, then presents three representative algorithms: the Witness algorithm of Littman et al., the Incremental Pruning algorithm, and the point-based value iteration algorithm of Pineau et al., and compares and analyzes the three. Applications of partially observable Markov decision processes are also described.
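A minimal sketch of the belief-state update that makes the process Markovian (the array names and the toy model are assumptions for illustration, not the paper's):

    import numpy as np

    def belief_update(b, a, o, T, O):
        """Bayes update of belief b after taking action a and observing o.
        T[a, s, s'] = P(s' | s, a);  O[a, s', o] = P(o | s', a)."""
        b_next = O[a, :, o] * (T[a].T @ b)   # unnormalized P(s' | b, a, o)
        return b_next / b_next.sum()

    # Toy 2-state, 1-action, 2-observation POMDP (illustrative numbers).
    T = np.array([[[0.7, 0.3],
                   [0.2, 0.8]]])
    O = np.array([[[0.9, 0.1],
                   [0.3, 0.7]]])
    b = np.array([0.5, 0.5])
    print(belief_update(b, a=0, o=0, T=T, O=O))   # [0.71, 0.29]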

4.
Advances in adaptive decision-making for Markov decision processes
Building on an introduction to general Markov decision processes, this paper analyzes the basic ideas, concrete algorithmic implementations, and corresponding conclusions of the main current adaptive decision-making methods for Markov processes, summarizes the characteristics of existing adaptive decision algorithms for Markov processes, and points out problems that still need to be solved.

5.
Humans often handle a problem on two levels: first grasping it as a whole, i.e. proposing a rough plan, and then carrying it out concretely. In other words, humans are an excellent example of a multi-resolution intelligent system, able to generalize bottom-up across multiple levels (viewing the problem at a coarser granularity, akin to abstraction) and to instantiate top-down (viewing it at a finer granularity, akin to concretization). On this basis, a semi-Markov decision process is constructed from Markov decision processes running separately on two levels (an ideal space for generalization and an actual space for instantiation), called the joint dual Markov decision process model. The paper then discusses an optimal-policy algorithm for this joint model and finally gives an example showing that the joint dual Markov decision process model can economize "thought", striking a good compromise between computational effectiveness and feasibility.

6.
仵博, 吴敏. 《控制与决策》, 2007, 22(12): 1417-1420
Solving over the belief state space of a partially observable Markov decision process (POMDP) is an NP-hard problem. A belief state space compression (BSSC) algorithm is proposed that compresses the high-dimensional belief state space to a low dimension, using a dynamic Bayesian network to compress the transition function, observation function, and reward function, thereby reducing the problem size and achieving real-time decision-making. Comparative experiments show that the proposed algorithm can quickly solve for the optimal policy and optimal value function.
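The paper's DBN-based compression is not reproduced here; as an illustrative stand-in (all names, the fixed basis, and the random belief are assumptions), a linear projection onto a low-dimensional basis shows the general shape of belief compression:

    import numpy as np

    def compress(b, F):
        """Project an n-dim belief b onto k basis functions (columns of F)."""
        return F.T @ b                            # low-dimensional belief, shape (k,)

    def reconstruct(b_tilde, F):
        """Approximate the original belief and renormalize to a distribution."""
        b_hat = np.clip(F @ b_tilde, 0.0, None)
        return b_hat / b_hat.sum()

    n, k = 100, 5
    rng = np.random.default_rng(0)
    F = np.abs(rng.standard_normal((n, k)))       # assumed fixed basis
    b = rng.dirichlet(np.ones(n))                 # a random belief over 100 states
    b_tilde = compress(b, F)
    print(b_tilde.shape, reconstruct(b_tilde, F).sum())   # (5,) ~1.0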

7.
To address the high transmission latency and poor environmental adaptability of Markov decision process (MDP) based handover strategies in edge computing, a handover strategy that fuses fuzzy logic with the MDP is proposed. A fuzzy logic algorithm fuzzifies the system parameters and feeds the fuzzy values into a fitness function, so that the parameters are fused effectively; a differential evolution algorithm maximizes the fitness function to select the rules best suited to the environment, improving the adaptability of edge computing; and the fitness function is incorporated into the MDP to improve overall system performance. With a mobile smart device as the task-offloading initiator and an edge server as the offloading target, the scheme simulates a one-dimensional MDP handover strategy, a one-dimensional latency-only MDP handover strategy, a two-dimensional MDP handover strategy, the fuzzy-logic MDP handover strategy, a minimum-distance handover algorithm, and a minimum-latency handover algorithm. Simulation results show that the fuzzy-logic MDP handover strategy yields an average task execution time of 608.8 s, which is 27.2%, 8.6%, 37.1%, 41%, and 22.3% lower than the one-dimensional MDP, one-dimensional latency-only MDP, two-dimensional MDP, minimum-distance, and minimum-latency strategies respectively. The scheme improves the environmental adaptability of MDP-based edge-computing handover while substantially reducing transmission latency.
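A minimal sketch of the differential-evolution step used to maximize a fitness function (the triangular fuzzification, the two-parameter rule, and the fitness weights are placeholder assumptions, not the paper's definitions):

    import numpy as np

    def tri_membership(x, lo, mid, hi):
        """Triangular fuzzy membership of a crisp system parameter."""
        return np.maximum(np.minimum((x - lo) / (mid - lo),
                                     (hi - x) / (hi - mid)), 0.0)

    def fitness(rule):
        """Placeholder fitness: fuzzified latency and load, linearly weighted."""
        latency, load = rule
        return (tri_membership(latency, 0, 20, 100) * 0.6 +
                tri_membership(load, 0, 0.3, 1.0) * 0.4)

    rng = np.random.default_rng(1)
    pop = rng.uniform([0, 0], [100, 1], size=(20, 2))    # candidate rules
    F_scale, CR = 0.8, 0.9
    for _ in range(100):                                 # DE/rand/1/bin
        for i in range(len(pop)):
            a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
            trial = np.where(rng.random(2) < CR, a + F_scale * (b - c), pop[i])
            trial = np.clip(trial, [0, 0], [100, 1])
            if fitness(trial) > fitness(pop[i]):         # greedy selection
                pop[i] = trial
    best = max(pop, key=fitness)
    print(best, fitness(best))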

8.
Using the Markov chain from (1), this paper designs FORTRAN programs that compute the mean times for a stock price to rise and to fall, and, using Markov decision-making, implements a practical program for the optimal buy/sell policy for stocks.
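The original programs are in FORTRAN; the following is a hedged Python sketch of the underlying computation, the mean first-passage time into an "up" state of a price Markov chain (the 3-state chain and its numbers are assumptions):

    import numpy as np

    # 3-state price chain: 0 = down, 1 = flat, 2 = up (illustrative numbers).
    P = np.array([[0.5, 0.3, 0.2],
                  [0.3, 0.4, 0.3],
                  [0.2, 0.3, 0.5]])

    def mean_hitting_time(P, target):
        """Expected number of steps to first reach `target` from each state,
        solving (I - Q) h = 1 over the non-target states."""
        others = [s for s in range(len(P)) if s != target]
        Q = P[np.ix_(others, others)]
        h = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
        return dict(zip(others, h))

    print(mean_hitting_time(P, target=2))   # mean time until the price is 'up'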

9.
This paper studies the dynamic allocation of emergency supplies when supply and demand are unbalanced. To capture the conflict between the ever-growing demand for emergency supplies as a typhoon disaster evolves and their relatively scarce supply, the evolution of demand is designed as a Markov decision process and a Markov-decision-based dynamic allocation model for emergency supplies is built. The model is solved with a binary particle swarm optimization algorithm and finally applied to a disaster-relief case from an actual typhoon. The case analysis shows that the Markov decision approach can dynamically produce appropriate demand-suppression policies, keeping the overall evolution of demand stable and driving the overall demand level to its minimum.
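A minimal binary PSO sketch for a 0/1 allocation vector (the objective, budget, and every parameter are placeholder assumptions, not the paper's model):

    import numpy as np

    rng = np.random.default_rng(2)
    value = rng.uniform(1, 10, 20)     # benefit of sending each shipment
    cost = rng.uniform(1, 5, 20)       # resource cost of each shipment
    BUDGET = 30.0

    def score(x):
        """Benefit of a 0/1 allocation, with infeasible solutions rejected."""
        return value @ x if cost @ x <= BUDGET else -np.inf

    n, dim = 30, 20
    X = rng.integers(0, 2, (n, dim)).astype(float)   # particle positions (bits)
    V = np.zeros((n, dim))                           # velocities
    pbest, pscore = X.copy(), np.array([score(x) for x in X])
    for _ in range(200):
        g = pbest[pscore.argmax()]                   # global best so far
        r1, r2 = rng.random((2, n, dim))
        V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (g - X)
        X = (rng.random((n, dim)) < 1 / (1 + np.exp(-V))).astype(float)  # sigmoid bits
        s = np.array([score(x) for x in X])
        better = s > pscore
        pbest[better], pscore[better] = X[better], s[better]
    print(pscore.max())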

10.
In recent years, crowd animation has been widely studied and applied in robotics, film, games, and other fields, but traditional crowd-animation techniques all involve complex motion planning or collision-avoidance operations and are computationally inefficient. This paper proposes a trajectory-generation algorithm for crowd animation based on Markov decision processes (MDPs), which produces collision-free trajectories for the agents without collision detection. An improved value iteration algorithm is also proposed for computing the state values of the Markov decision process; experiments in a grid environment show that its computational efficiency is clearly higher than that of value iteration using Euclidean distance as a heuristic and of Dijkstra's algorithm. Crowd-animation simulations in three-dimensional (3D) scenes using the proposed trajectory-generation algorithm show that crowds move toward the goal without collisions and exhibit diverse behavior.
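The paper's improved value iteration is not reproduced here; a minimal standard value-iteration sketch on a grid (the layout, rewards, and discount are assumptions) shows the state-value computation it builds on:

    import numpy as np

    H, W, GOAL, GAMMA = 10, 10, (9, 9), 0.95
    ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

    V = np.zeros((H, W))
    for _ in range(200):                         # value iteration to convergence
        V_new = np.full((H, W), -np.inf)
        for r in range(H):
            for c in range(W):
                if (r, c) == GOAL:
                    V_new[r, c] = 0.0            # absorbing goal state
                    continue
                for dr, dc in ACTIONS:           # deterministic moves, walls clip
                    nr = min(max(r + dr, 0), H - 1)
                    nc = min(max(c + dc, 0), W - 1)
                    V_new[r, c] = max(V_new[r, c], -1.0 + GAMMA * V[nr, nc])
        if np.max(np.abs(V_new - V)) < 1e-6:
            break
        V = V_new

    # Each agent follows the greedy policy: step to the neighbor of highest value.
    print(V[0, 0])   # discounted cost-to-go from the far corner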

11.
We discuss an expected utility of rewards generated by Markov decision processes. This is applied to the optimal stopping problem with a utility treatment. A combined model of the decision processes and the stopping problem, called a stopped Markov decision process, is also considered under the utility.
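Illustratively (the notation is assumed, not the paper's), the criterion replaces the expected total reward with the expectation of a utility function u applied to the accumulated reward up to the stopping time:

    % Expected utility of the accumulated reward under policy \pi and
    % stopping time \tau (illustrative notation).
    \[
    J^{\pi,\tau}(s) \;=\; \mathbb{E}^{\pi}_{s}\!\left[\, u\!\Big(\sum_{t=0}^{\tau-1} r(s_t, a_t)\Big) \right]
    \]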

12.
A policy-optimization algorithm for call admission control is given by combining the Markov decision process with performance potentials. The resulting optimal policy is state-dependent; compared with a policy that chooses actions based on the bandwidth already occupied at a node, the state-dependent policy achieves better performance values, and the algorithm converges quickly.
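As an illustrative aside (standard notation, not necessarily the paper's), the performance potential g of a policy with transition matrix P, reward vector f, stationary distribution \pi, and average reward \eta satisfies the Poisson equation, from which policy improvement proceeds greedily:

    % Poisson equation defining the performance potential g (illustrative).
    \[
    (I - P)\, g \;=\; f - \eta \mathbf{1}, \qquad \eta = \pi^{\top} f
    \]
    % Policy improvement: at each state, choose the action maximizing
    % f(s,a) + \sum_{s'} P(s' \mid s, a)\, g(s').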

13.
Kearns, Michael; Mansour, Yishay; Ng, Andrew Y. 《Machine Learning》, 2002, 49(2-3): 193-208
A critical issue for the application of Markov decision processes (MDPs) to realistic problems is how the complexity of planning scales with the size of the MDP. In stochastic environments with very large or infinite state spaces, traditional planning and reinforcement learning algorithms may be inapplicable, since their running time typically grows linearly with the state space size in the worst case. In this paper we present a new algorithm that, given only a generative model (a natural and common type of simulator) for an arbitrary MDP, performs on-line, near-optimal planning with a per-state running time that has no dependence on the number of states. The running time is exponential in the horizon time (which depends only on the discount factor and the desired degree of approximation to the optimal policy). Our algorithm thus provides a different complexity trade-off than classical algorithms such as value iteration—rather than scaling linearly in both horizon time and state space size, our running time trades an exponential dependence on the former in exchange for no dependence on the latter. Our algorithm is based on the idea of sparse sampling. We prove that a randomly sampled look-ahead tree that covers only a vanishing fraction of the full look-ahead tree nevertheless suffices to compute near-optimal actions from any state of an MDP. Practical implementations of the algorithm are discussed, and we draw ties to our related recent results on finding a near-best strategy from a given class of strategies in very large partially observable MDPs (Kearns, Mansour, & Ng. Neural information processing systems 13, to appear).
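A compact sketch of the sparse-sampling idea, a depth-limited look-ahead tree sampled from a generative model (the toy random-walk model and the parameters C and the horizon are assumptions, not the paper's):

    import random

    def sample_model(s, a):
        """Assumed generative model: returns (next_state, reward).
        Here: a walk on the integers where action +/-1 biases the step."""
        step = a if random.random() < 0.7 else -a
        s2 = s + step
        return s2, 1.0 if s2 == 0 else 0.0     # reward for reaching 0

    def q_estimate(s, a, depth, C=8, gamma=0.9):
        """Sparse sampling: average C sampled one-step outcomes, recurse.
        Running time depends on C and depth, never on the state-space size."""
        if depth == 0:
            return 0.0
        total = 0.0
        for _ in range(C):
            s2, r = sample_model(s, a)
            total += r + gamma * max(q_estimate(s2, a2, depth - 1)
                                     for a2 in (-1, 1))
        return total / C

    # Near-optimal action at state s = 3 with a horizon of 3.
    best_a = max((-1, 1), key=lambda a: q_estimate(3, a, depth=3))
    print(best_a)   # expected: -1 (move toward 0)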

14.
With a long-run average performance as the primary criterion for a Markov decision process, variance measures are studied as its secondary criteria. The steady-state variance and the limiting average variance along a sample path are discussed. The latter one is difficult to handle due to its special form. With a sensitivity-based approach, the difference formula for the sample-path variance under different policies is intuitively constructed and then the optimality equation is presented. Moreover a policy iteration algorithm is developed. This work extends the sensitivity-based construction approach to Markov decision processes with non-standard performance criteria. The difference between these two types of variance and bias criteria is illustrated with a numerical example.
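For orientation, one common formalization of the two criteria (illustrative notation, not necessarily the paper's exact definitions): the steady-state variance weighs per-state squared deviation by the stationary distribution, while the limiting average variance along a sample path normalizes the variance of the accumulated reward and so also carries the chain's autocorrelations, which is what makes it harder to handle:

    % Steady-state variance vs. limiting average sample-path variance
    % under a policy with stationary distribution \pi, reward f, and
    % average reward \eta (illustrative notation).
    \[
    \sigma^2_{\mathrm{ss}} \;=\; \sum_{s} \pi(s)\,\bigl(f(s) - \eta\bigr)^2,
    \qquad
    \sigma^2_{\mathrm{sp}} \;=\; \lim_{N \to \infty} \frac{1}{N}\,
      \mathrm{Var}\!\Big(\sum_{t=0}^{N-1} f(s_t)\Big)
    \]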

15.
洪晔, 边信黔. 《计算机仿真》, 2007, 24(6): 146-149
When navigating a complex ocean environment, an autonomous underwater vehicle must find a good path from a given start point to an end point, passing all obstacles safely and without collision. A new global path-planning method is proposed that is based on a partially observable Markov decision process combined with prediction of obstacle motion. The mathematical model of the partially observable Markov decision process is given; a tree-structured hierarchical partially observable Markov decision model is built and applied to path planning; two methods, short-term prediction and long-term prediction, are proposed for predicting the trajectories of underwater obstacles; and finally the global path-planning capability of the AUV is verified by simulation, laying a good foundation for future trials on a real vehicle.

16.
Modelling the long-term operation of hydroelectric systems is one of the classic applications of Markov decision process (MDP). The computation of optimal policies, for MDP models, is usually done by dynamic programming (DP) on a discretized state space. A major difficulty arises when optimizing multi-reservoir systems, because the computational complexity of DP increases exponentially with the number of sites. This so-called 'curse of dimensionality' has so far restricted the applicability of DP to very small systems (2 or 3 sites). Practitioners have thus had to resort to other methodologies for the long-term planning, often at the expense of rigour, and without reliable error estimates. This paper surveys recent research of MDP computation, with application to hydro-power systems. Three main approaches are discussed: (i) discrete DP, (ii) numerical approximation of the expected future reward function, and (iii) analytic solution of the DP recursion.

17.
徐昕, 沈栋, 高岩青, 王凯. 《自动化学报》, 2012, 38(5): 673-687
Learning control of dynamic systems based on Markov decision processes (MDPs) has in recent years become an interdisciplinary research direction spanning machine learning, control theory, and operations research; its main goal is data-driven multi-stage optimal control of systems whose models are complex or uncertain. This paper surveys the research frontier of theory, algorithms, and applications of MDP-based learning control of dynamic systems, focusing on advances in reinforcement learning (RL) and approximate dynamic programming (ADP), including temporal-difference learning theory, value-function approximation methods for solving MDPs with continuous state and action spaces, direct policy search and approximate policy iteration, and adaptive critic design algorithms. Finally, applications and development trends in the related research areas are analyzed and discussed.
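As a minimal illustration of the temporal-difference learning the survey covers (the random-walk task and all parameters are assumptions), TD(0) policy evaluation with a tabular value function:

    import random

    # TD(0) on a 5-state random walk (states 0..4, terminals off both ends;
    # +1 reward only when exiting on the right). Classic evaluation task.
    N_STATES, ALPHA, GAMMA = 5, 0.1, 1.0
    V = [0.0] * N_STATES

    for _ in range(5000):
        s = N_STATES // 2                       # start in the middle
        while True:
            s2 = s + random.choice((-1, 1))     # uniform random policy
            if s2 < 0:
                r, done = 0.0, True
            elif s2 >= N_STATES:
                r, done = 1.0, True
            else:
                r, done = 0.0, False
            target = r + (0.0 if done else GAMMA * V[s2])
            V[s] += ALPHA * (target - V[s])     # TD(0) update
            if done:
                break
            s = s2

    print([round(v, 2) for v in V])   # ~ [0.17, 0.33, 0.5, 0.67, 0.83]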

18.
This communique provides an exact iterative search algorithm for the NP-hard problem of obtaining an optimal feasible stationary Markovian pure policy that achieves the maximum value averaged over an initial state distribution in finite constrained Markov decision processes. It is based on a novel characterization of the entire feasible policy space and takes the spirit of policy iteration (PI) in that a sequence of monotonically improving feasible policies is generated, converging to an optimal policy in, at worst, a number of iterations equal to the size of the policy space. Unlike PI, an unconstrained MDP needs to be solved at the iterations involving feasible policies, and the current best policy improves upon all feasible policies contained in the union of the policy spaces associated with the unconstrained MDPs.

19.
We propose a time aggregation approach for the solution of infinite horizon average cost Markov decision processes via policy iteration. In this approach, policy update is only carried out when the process visits a subset of the state space. As in state aggregation, this approach leads to a reduced state space, which may lead to a substantial reduction in computational and storage requirements, especially for problems with certain structural properties. However, in contrast to state aggregation, which generally results in an approximate model due to the loss of Markov property, time aggregation suffers no loss of accuracy, because the Markov property is preserved. Single sample path-based estimation algorithms are developed that allow the time aggregation approach to be implemented on-line for practical systems. Some numerical and simulation examples are presented to illustrate the ideas and potential computational savings.
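A sketch of the single-sample-path idea behind time aggregation (the chain, costs, and chosen subset are assumptions, not the paper's): treating successive visits to the subset S as cycle boundaries, the long-run average cost is estimated as accumulated cost per cycle divided by cycle length (the renewal-reward ratio estimator):

    import random

    # Illustrative 4-state chain with per-state costs; policy updates would
    # only occur at visits to the subset S = {0} (the regeneration states).
    P = {0: [0.5, 0.5, 0.0, 0.0],
         1: [0.0, 0.2, 0.8, 0.0],
         2: [0.0, 0.0, 0.3, 0.7],
         3: [0.6, 0.0, 0.0, 0.4]}
    cost = [1.0, 2.0, 4.0, 3.0]
    S = {0}

    s, cycle_costs, cycle_lens, c_acc, t_acc = 0, [], [], 0.0, 0
    for _ in range(200_000):
        c_acc += cost[s]
        t_acc += 1
        s = random.choices(range(4), weights=P[s])[0]
        if s in S:                       # cycle closes at a visit to S
            cycle_costs.append(c_acc)
            cycle_lens.append(t_acc)
            c_acc, t_acc = 0.0, 0
    # Ratio estimator of the long-run average cost over the sample path.
    print(sum(cycle_costs) / sum(cycle_lens))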
