Similar Documents
20 similar documents found.
1.
This communique provides an exact iterative search algorithm for the NP-hard problem of obtaining an optimal feasible stationary Markovian pure policy that achieves the maximum value averaged over an initial state distribution in finite constrained Markov decision processes. It is based on a novel characterization of the entire feasible policy space and takes the spirit of policy iteration (PI) in that a sequence of monotonically improving feasible policies is generated and converges to an optimal policy in at most as many iterations as the size of the policy space in the worst case. Unlike PI, an unconstrained MDP needs to be solved at the iterations that involve feasible policies, and the current best policy improves upon all feasible policies contained in the union of the policy spaces associated with those unconstrained MDPs.
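The method above repeatedly solves unconstrained MDPs as a subroutine. As a point of reference only, here is a minimal sketch of standard policy iteration for a finite unconstrained MDP; the tabular inputs `P`, `R` and discount `gamma` are assumptions, and this is not the communique's feasibility-constrained search itself.

```python
import numpy as np

def policy_iteration(P, R, gamma=0.95):
    """Standard policy iteration for a finite discounted MDP.

    P: (S, A, S) transition probabilities, R: (S, A) expected rewards.
    Returns a deterministic policy (length-S array of actions) and its value.
    """
    S, A, _ = P.shape
    policy = np.zeros(S, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
        P_pi = P[np.arange(S), policy]              # (S, S)
        r_pi = R[np.arange(S), policy]              # (S,)
        v = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
        # Policy improvement: greedy one-step lookahead.
        q = R + gamma * np.einsum('sat,t->sa', P, v)
        new_policy = q.argmax(axis=1)
        if np.array_equal(new_policy, policy):
            return policy, v
        policy = new_policy
```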

2.
In the form of stochastic approximation, this paper proposes several on-policy, model-free reinforcement learning algorithms for solving the system equations of average-reward Markov decision processes. These algorithms are analogous to the SARSA(λ) family that has been widely and successfully applied to discounted-reward MDPs. Preliminary experimental results are also reported to compare the performance of the new algorithms.
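For orientation, a minimal tabular sketch of an average-reward, on-policy SARSA-style update; the step sizes `alpha`, `beta`, the ε-greedy policy, and the `env.reset()`/`env.step()` interface are assumptions, and the paper's SARSA(λ)-style variants additionally use eligibility traces.

```python
import numpy as np

def average_reward_sarsa(env, num_steps, n_states, n_actions,
                         alpha=0.1, beta=0.01, epsilon=0.1, seed=0):
    """Tabular on-policy TD control under the average-reward criterion."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    rho = 0.0                                   # running average-reward estimate

    def act(s):
        if rng.random() < epsilon:
            return int(rng.integers(n_actions))
        return int(Q[s].argmax())

    s = env.reset()                             # assumed environment interface
    a = act(s)
    for _ in range(num_steps):
        s_next, r = env.step(a)                 # assumed: returns (state, reward)
        a_next = act(s_next)
        delta = r - rho + Q[s_next, a_next] - Q[s, a]   # differential TD error
        Q[s, a] += alpha * delta
        rho += beta * delta
        s, a = s_next, a_next
    return Q, rho
```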

3.
4.
Advances in adaptive decision making for Markov decision processes
Building on an introduction to general Markov decision processes, this paper analyzes the basic ideas, concrete algorithmic implementations, and corresponding conclusions of the main current adaptive decision methods for Markov processes, summarizes the characteristics of existing adaptive decision algorithms, and points out problems that remain to be solved.

5.
This communique presents an algorithm called “value set iteration” (VSI) for solving infinite-horizon discounted Markov decision processes with finite state and action spaces, as a simple generalization of value iteration (VI) and as a counterpart to Chang’s policy set iteration. VSI generates a sequence of value functions by manipulating a set of value functions at each iteration, and this sequence converges to the optimal value function. VSI preserves the convergence properties of VI while converging no slower than VI; in particular, if the set used in VSI contains the value functions of independently generated sample policies from a given distribution and a properly defined policy-switching policy, a probabilistic exponential convergence rate of VSI can be established. Because the set used in VSI can contain the value functions of any policies generated by other existing algorithms, VSI also provides a general framework for combining multiple solution methods.
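VSI generalizes standard value iteration. As a baseline, here is a minimal sketch of VI on a finite discounted MDP (tabular inputs `P`, `R` and the tolerance are assumptions; the set-valued manipulation that distinguishes VSI is not reproduced here).

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Standard value iteration for a finite discounted MDP.

    P: (S, A, S) transition probabilities, R: (S, A) expected rewards.
    Iterates the Bellman optimality operator until the update is below tol.
    """
    S, A, _ = P.shape
    v = np.zeros(S)
    while True:
        q = R + gamma * np.einsum('sat,t->sa', P, v)   # one-step lookahead
        v_new = q.max(axis=1)
        if np.abs(v_new - v).max() < tol:
            return v_new, q.argmax(axis=1)             # value, greedy policy
        v = v_new
```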

6.
This paper deals with Markov decision processes with a target set and nonpositive rewards. Two types of threshold probability criteria are discussed. The first criterion is the probability that the total reward is not greater than a given initial threshold value, and the second is the probability that the total reward is less than it. Our first (resp. second) optimization problem is to minimize the first (resp. second) threshold probability. In these problems the threshold value is interpreted as a permissible level of total reward for reaching the goal (the target set); that is, we would like to reach this set with a total reward above that level, if possible. For both problems, we show that 1) the optimal threshold probability is the unique solution to an optimality equation, 2) there exists an optimal deterministic stationary policy, and 3) a value iteration and a policy space iteration are available. In addition, we prove that the first (resp. second) optimal threshold probability is a monotone increasing and right (resp. left) continuous function of the initial threshold value, and we propose a method to obtain an optimal policy and the optimal threshold probability in the first problem by using their counterparts in the second problem.

7.
We propose a time aggregation approach for solving infinite-horizon average-cost Markov decision processes via policy iteration. In this approach, the policy is updated only when the process visits a subset of the state space. As in state aggregation, this leads to a reduced state space, which may bring a substantial reduction in computational and storage requirements, especially for problems with certain structural properties. However, in contrast to state aggregation, which generally results in an approximate model because the Markov property is lost, time aggregation suffers no loss of accuracy, because the Markov property is preserved. Single-sample-path-based estimation algorithms are developed that allow the time aggregation approach to be implemented online for practical systems. Some numerical and simulation examples are presented to illustrate the ideas and the potential computational savings.

8.
黄永皓  陈曦 《控制与决策》2010,25(6):857-861
This paper studies the optimization of sensing and access policies in opportunistic spectrum access. First, taking the belief-state Markov decision process equivalent to the original problem as the basic model, and building on the core concept of the performance potential, the performance differences of the system under different policies are analyzed from the perspective of performance sensitivity, and an iterative algorithm for optimizing the sensing and access policy is given. Then, by analyzing sample paths of the system and exploiting the fact that the continuous state space of this problem can be aggregated, a sample-path-based implementation of the policy iteration algorithm is further discussed. Two simulation examples verify the effectiveness of the algorithm.

9.
For a class of hierarchical unstructured P2P systems that exploit the respective advantages of centralized and distributed architectures, a Markov switching-space model is defined to describe their dynamic group-switching behavior. Based on the theory of Markov decision processes, policy iteration and online policy iteration algorithms for the performance criterion are given, and simulation examples illustrate the advantages of the method.

10.
An optimal-action criterion and algorithmic improvements for real-time dynamic programming
范长杰  陈小平 《软件学报》2008,19(11):2869-2878
The main goal is to improve the efficiency of real-time dynamic programming (RTDP) algorithms for solving Markov decision problems. The convergence criteria used by several typical RTDP algorithms are compared and analyzed, and, using upper and lower bounds on the value function, a convergence criterion called the optimal-action criterion is given, together with a branch-selection strategy better suited to real-time algorithms. The optimal-action criterion can identify, at an earlier stage, an optimal action at the current state that meets the precision requirement and can be executed immediately, while the new branch-selection strategy accelerates the satisfaction of this criterion. Based on these, a bounded incremental RTDP (BI-RTDP) algorithm is designed. In experiments in two typical simulated real-time environments, BI-RTDP shows better real-time performance than existing related algorithms.
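A minimal sketch of the kind of bound-based action test described above, assuming reward maximization and precomputed upper/lower Q-bound arrays `Q_up`, `Q_lo` (array shapes and the tolerance `eps` are assumptions; the actual BI-RTDP criterion and branch selection follow the paper).

```python
import numpy as np

def provably_optimal_action(Q_lo, Q_up, s, eps=0.0):
    """Return an action whose lower bound dominates every other action's
    upper bound at state s (within eps), or None if no action qualifies.

    Q_lo, Q_up: (S, A) arrays of lower / upper bounds on Q*(s, a).
    """
    a_best = int(Q_lo[s].argmax())
    others = np.delete(Q_up[s], a_best)
    if others.size == 0 or Q_lo[s, a_best] >= others.max() - eps:
        return a_best          # safe to execute immediately
    return None
```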

11.
Research on average-reward reinforcement learning algorithms
高阳  周如益  王皓  曹志新 《计算机学报》2007,30(8):1372-1378
Sequential decision problems are commonly modeled as Markov decision processes (MDPs). When decisions extend from discrete time points to continuous time, the classical MDP model extends to the semi-Markov decision process (SMDP). When the system parameters are unknown, reinforcement learning is used to learn the optimal policy. Based on performance potential theory, this paper proves an approximation theorem for average-reward reinforcement learning. By approximating the performance-potential value function relative to a reference state, a new average-reward reinforcement learning algorithm, G-learning, is studied. G-learning can be applied to both MDPs and SMDPs. Unlike the classical R-learning algorithm, G-learning replaces the relative value function of the average-adjusted reward sum with the performance-potential value function relative to a reference state. In simulation experiments on customer admission control and production inventory, G-learning outperforms R-learning and SMART.
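For context, a minimal tabular sketch of the classical R-learning baseline mentioned above; the step sizes, exploration scheme, and `env` interface are assumptions, and G-learning's reference-state potential update is not reproduced here.

```python
import numpy as np

def r_learning(env, num_steps, n_states, n_actions,
               alpha=0.1, beta=0.01, epsilon=0.1, seed=0):
    """Tabular R-learning for average-reward MDPs (classical baseline)."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    rho = 0.0                                   # average-reward estimate
    s = env.reset()                             # assumed environment interface
    for _ in range(num_steps):
        greedy = int(Q[s].argmax())
        a = int(rng.integers(n_actions)) if rng.random() < epsilon else greedy
        s_next, r = env.step(a)                 # assumed: returns (state, reward)
        Q[s, a] += alpha * (r - rho + Q[s_next].max() - Q[s, a])
        if a == greedy:                         # update rho on greedy actions only
            rho += beta * (r - rho + Q[s_next].max() - Q[s].max())
        s = s_next
    return Q, rho
```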

12.
To meet the dynamic resource-scheduling demands of multi-sensor cooperative detection created by the maneuverability of airborne targets, a new multi-sensor cooperative detection resource-scheduling algorithm combining proximal policy optimization (PPO) with a fully connected neural network is proposed. First, the complex constraints affecting multi-sensor cooperative detection resource scheduling are analyzed to form evaluation indices for the scheduling process. Then, a Markov decision process (MDP) is introduced to model the scheduling process, and, to improve algorithmic stability, the Adam optimizer is combined with learning-rate decay to control the step size of learning-rate adjustment. Finally, the dynamic resource-scheduling policy is solved by the improved PPO combined with the fully convolutional neural network, and comparative experiments demonstrate the superiority of the algorithm.
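A minimal sketch of the PPO clipped surrogate objective at the heart of such an approach, in plain NumPy; the batch quantities `logp_new`, `logp_old`, `advantages` are assumed inputs, and the paper's network architecture and scheduler-specific state/action encoding are not reproduced.

```python
import numpy as np

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """PPO clipped surrogate loss (to be minimized) for one batch.

    logp_new / logp_old: log-probabilities of the taken actions under the
    current and the data-collecting policy; advantages: estimated A(s, a).
    """
    ratio = np.exp(logp_new - logp_old)                  # importance ratio
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -np.mean(np.minimum(unclipped, clipped))      # negative surrogate
```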

13.
Optimal location of traffic counting points for OD matrix estimation
This paper discusses the reasonable placement of traffic counting points for estimating the origin-destination (OD) trip distribution matrix. According to the rules that counting points should satisfy, a nonlinear programming model of their placement is established. Given the transition probabilities between origin-destination points, the placement problem is formulated as an average-reward Markov decision process and solved by transformation into an equivalent integer linear programming problem. Results on an example show that the model is effective and reasonable.

14.
It is known that performance potentials (or, equivalently, perturbation realization factors) can be used as building blocks for the performance sensitivities of Markov systems. In parameterized systems, changes in the parameters may affect only some states, and the explicit transition probability matrix may not be known. In this paper, we use an example to show that potentials can be used to construct performance sensitivities in a more flexible way: only the potentials at the affected states need to be estimated, and the transition probability matrix need not be known. Policy iteration algorithms, which are simpler than the standard one, can be established.

15.
In large-scale stochastic control problems, value function approximation is one way to overcome the curse of dimensionality. This paper considers a relative value iteration algorithm with soft state aggregation for average-reward Markov decision processes (MDPs), proves its convergence under a span-contraction condition, and gives an error estimate.
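For reference, a minimal sketch of plain relative value iteration with a span-based stopping rule for a finite average-reward MDP (tabular, no aggregation; the paper's soft-aggregation variant additionally projects the value function onto aggregated states).

```python
import numpy as np

def relative_value_iteration(P, R, tol=1e-8, ref_state=0, max_iter=100000):
    """Relative value iteration for a finite average-reward MDP.

    P: (S, A, S) transition probabilities, R: (S, A) expected rewards.
    Returns an estimated gain (average reward), a relative value vector,
    and a greedy policy.
    """
    S, A, _ = P.shape
    h = np.zeros(S)
    for _ in range(max_iter):
        q = R + np.einsum('sat,t->sa', P, h)
        Th = q.max(axis=1)
        diff = Th - h
        if diff.max() - diff.min() < tol:      # span stopping rule
            gain = 0.5 * (diff.max() + diff.min())
            return gain, h, q.argmax(axis=1)
        h = Th - Th[ref_state]                 # subtract reference-state value
    raise RuntimeError("relative value iteration did not converge")
```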

16.
The paper is devoted to a first-order delayed linear system with relay output controlled by a proportional-integral (PI) regulator. The deterministic system exhibits stable oscillations, and, since the system itself is stable, it may be appropriate to switch the controller off when there are no disturbances over a long time interval. In the present work, the random disturbances are modelled by a Poisson stream of impulses, and the goal is to determine the instants at which the PI controller is switched on and off. Under several assumptions and after quantization of the time axis, we construct a new optimal control problem that is solved with the help of the dynamic programming approach.

17.
We use a stochastic dynamic programming (SDP) approach to determine optimal routing policies in a stochastic dynamic network. Because solving the SDP is time-consuming, we propose three techniques for pruning stochastic dynamic networks to expedite the computation of optimal routing policies: (1) static upper/lower bounds, (2) pre-processing the stochastic dynamic networks using the start time and origin location of the vehicle, and (3) a mix of pre-processing and upper/lower bounds. Our experiments show that, when finding optimal routing policies in stochastic dynamic networks, the last two strategies have a significant computational advantage over conventional SDP. Our main observation from these experiments is that the computational advantage of the pruning strategies that depend on the start time of the vehicle varies with the time input to the problem; we present the results of this variation in the experiments section. We recommend that, when comparing the computational performance of time-dependent techniques, such strategies be tested at various time inputs.

18.
Recommendation algorithms alleviate information overload to some extent, but traditional recommendation models leave room for improvement in mining data characteristics. To this end, a policy-gradient recommendation algorithm that incorporates sequence-pattern scores is proposed within a reinforcement learning framework. The recommendation process is modeled as a Markov decision process; the characteristic patterns of the underlying recommendation data are analyzed, and a feedback function that uses the sequence-pattern score as the reward is designed and learned in each iteration of the algorithm; and a standardization operation on the cumulative reward is designed to reduce the variance of the policy gradient. The method is validated on movie recommendation, and the results show good recommendation accuracy.
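A minimal sketch of the variance-reduction step described above: a REINFORCE-style gradient estimate in which the cumulative rewards are standardized before weighting the log-probability gradients. The per-episode inputs `grad_logp` and `returns` are assumptions, and the sequence-pattern reward itself is problem-specific.

```python
import numpy as np

def standardized_policy_gradient(grad_logp, returns, eps=1e-8):
    """REINFORCE gradient estimate with standardized episode returns.

    grad_logp: (N, D) per-episode gradients of log pi(a|s) summed over steps.
    returns:   (N,) cumulative reward of each episode.
    Subtracting the mean acts as a baseline; dividing by the standard
    deviation rescales the step, which reduces the estimate's variance.
    """
    g = np.asarray(returns, dtype=float)
    g = (g - g.mean()) / (g.std() + eps)        # standardization step
    return (grad_logp * g[:, None]).mean(axis=0)
```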

19.
曾斌  樊旭  李厚朴 《自动化学报》2023,49(7):1519-1529
The complex and changeable battlefield environment requires logistics support to make decisions proactively as the environment changes. To this end, a dynamic scheduling method based on reinforcement learning is proposed. To describe the support-scheduling problem accurately, a Markov decision process (MDP) model supporting preemptive scheduling, reassignment, and redeployment decisions is formulated; the model takes into account the effects of task queueing, support priorities, fuel constraints, and other issues. An improved policy iteration algorithm is then designed to train a neural-network-based scheduling model; the trained network approximates the state value function and thereby yields an optimized scheduling policy with maximum expected value. Finally, a distributed battlefield-support simulation experiment is designed; comparison with conventional scheduling policies verifies that the dynamic scheduling algorithm has good adaptability and self-learning capability, can anticipate subsequent changes from historical data and the current situation, and can re-plan and re-allocate the scheduling of support resources.

20.
The optimal reward baseline in policy-gradient reinforcement learning
王学宁  徐昕  吴涛  贺汉根 《计算机学报》2005,28(6):1021-1026
Although policy-gradient reinforcement learning algorithms have good convergence properties, the large variance of the gradient estimate is a major weakness of the approach in both theory and application. To reduce this variance, the paper proposes a new algorithm, Istate-Grbp, which adds a reward baseline to the policy-gradient algorithm Istate-GPOMDP to improve its learning performance. It is proved that introducing a reward baseline into Istate-GPOMDP does not change the expected value of the gradient estimate, and the optimal reward baseline that minimizes the variance is given. Experimental results show that, compared with existing algorithms, the proposed algorithm improves learning efficiency and accelerates convergence by reducing the variance of the gradient estimate.
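For orientation, the standard textbook form of these two facts from the policy-gradient literature, sketched in LaTeX; the Istate-GPOMDP-specific derivation in the paper may differ in details (e.g., discounted eligibility traces for partially observable settings).

```latex
% A baseline b does not bias the gradient estimate:
\[
\mathbb{E}\!\left[(R-b)\,\nabla_\theta \log \pi_\theta(a\mid s)\right]
 = \mathbb{E}\!\left[R\,\nabla_\theta \log \pi_\theta(a\mid s)\right],
\qquad\text{since }\ \mathbb{E}\!\left[\nabla_\theta \log \pi_\theta(a\mid s)\right]
 = \nabla_\theta \sum_a \pi_\theta(a\mid s) = 0 .
\]
% Scalar baseline minimizing the variance of (R-b)\nabla_\theta\log\pi_\theta:
\[
b^{\ast} \;=\; \frac{\mathbb{E}\!\left[R\,\lVert \nabla_\theta \log \pi_\theta \rVert^{2}\right]}
                    {\mathbb{E}\!\left[\lVert \nabla_\theta \log \pi_\theta \rVert^{2}\right]} .
\]
```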
