Related Articles
20 related articles found (search time: 109 ms)
1.
Partially observable Markov decision processes (POMDPs) provide a mathematical framework for agent planning in stochastic, partially observable environments. The classic Bayesian optimal solution can be obtained by transforming the problem into a Markov decision process (MDP) over belief states. However, because the belief state space is continuous and multi-dimensional, the problem is highly intractable. Many practical heuristic-based methods have been proposed, but most of them require a complete POMDP model of the environment, which is not always available. This article introduces a modified memory-based reinforcement learning algorithm, called modified U-Tree, that is capable of learning from raw sensor experiences with minimal prior knowledge. It describes an enhancement of the original U-Tree's state generation process that makes the generated model more compact, and also proposes a modification of the statistical test for reward estimation, which allows the algorithm to be benchmarked against traditional model-based algorithms on a set of well-known POMDP problems.
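As an illustration of the kind of statistical test the U-Tree family relies on, the sketch below splits a leaf only when the reward samples of two candidate children look like different distributions under a Kolmogorov-Smirnov test. This is a minimal sketch of the general idea, not the paper's exact test; the function names are hypothetical.

```python
# Minimal sketch of a U-Tree-style leaf split decision (illustrative only).
# A leaf is split when the reward samples of two candidate children differ
# significantly, suggesting the leaf aliases two distinct underlying states.
from scipy.stats import ks_2samp

def should_split(rewards_child_a, rewards_child_b, alpha=0.05):
    """Return True if the two reward samples look like different distributions."""
    if len(rewards_child_a) < 2 or len(rewards_child_b) < 2:
        return False  # not enough evidence to justify a split
    statistic, p_value = ks_2samp(rewards_child_a, rewards_child_b)
    return p_value < alpha

# Example: a candidate split that separates low rewards from high rewards
print(should_split([0.1, 0.0, 0.2, 0.1], [1.0, 0.9, 1.1, 0.95]))  # True
```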

2.
We address the problem of online path planning for optimal sensing with a mobile robot. The objective of the robot is to learn the most about its pose and the environment given time constraints. We use a POMDP with a utility function that depends on the belief state to model the finite-horizon planning problem, and we replan as the robot progresses through the environment. The POMDP is high-dimensional, continuous, non-differentiable, nonlinear, non-Gaussian, and must be solved in real time, so most existing techniques for stochastic planning and reinforcement learning are inapplicable. To solve this extremely complex problem, we propose a Bayesian optimization method that dynamically trades off exploration (minimizing uncertainty in unknown parts of the policy space) and exploitation (capitalizing on the current best solution). We demonstrate our approach with a visually guided mobile robot. The solution proposed here is also applicable to other closely related domains, including active vision, sequential experimental design, dynamic sensing, and calibration with mobile sensors.
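The exploration/exploitation trade-off described here is commonly realized with an upper-confidence-bound acquisition over a Gaussian-process surrogate. Below is a minimal sketch in that spirit, not the paper's exact method; `ucb_select` and the toy utility are assumptions for illustration.

```python
# Sketch of the exploration/exploitation trade-off in Bayesian optimization:
# pick the candidate maximizing mean + kappa * std of a Gaussian-process
# surrogate. Illustrative; not the paper's exact formulation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def ucb_select(gp, candidates, kappa=2.0):
    mean, std = gp.predict(candidates, return_std=True)
    return candidates[np.argmax(mean + kappa * std)]  # exploit mean, explore std

# Toy utility over a 1-D "policy parameter"
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5, 1))           # parameters evaluated so far
y = np.sin(3 * X).ravel()                    # observed utilities
gp = GaussianProcessRegressor().fit(X, y)
candidates = np.linspace(0, 1, 101).reshape(-1, 1)
print(ucb_select(gp, candidates))            # next parameter to evaluate
```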

3.
Combining heuristic solving with reinforcement learning techniques, this paper studies instance-based approximate algorithms for POMDP problems, including the nearest-neighbor-based algorithm NNI and its parameterized enhanced version ENNI, as well as the locally-weighted-regression-based algorithm LWI. Experimental comparisons report the practical performance of these algorithms. The experiments show that instance-based methods for solving POMDP problems can obtain suboptimal solutions of good quality.

4.
A Bayesian Deep Reinforcement Learning Algorithm for the Deep Exploration Problem (cited 1 time: 0 self-citations, 1 by others)
Balancing exploration and exploitation is a hard problem in reinforcement learning. Recently proposed reinforcement learning methods focus mainly on combining deep learning techniques to improve generalization, while neglecting the exploration-exploitation dilemma. Traditional reinforcement learning methods can solve the exploration problem effectively, but only under a restrictive condition: the state space of the Markov decision process must be discrete and finite. This paper proposes using Bayesian methods to improve the exploration efficiency of deep reinforcement learning algorithms, extending the posterior computation of Bayesian linear regression to nonlinear models such as artificial neural networks. Combining Bootstrapped DQN with the proposed computation yields the Bayesian Bootstrapped Deep Q-Network algorithm (BBDQN). Experiments in two environments show that BBDQN explores more efficiently than both DQN and Bootstrapped DQN when facing deep exploration problems.
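A minimal sketch of the Bayesian linear regression posterior the abstract builds on, applied to fixed state features, with a Thompson-style weight sample driving exploration. All names are hypothetical, and a plain feature matrix stands in for the network.

```python
# Sketch of Bayesian linear regression on fixed features phi(s): keep a
# Gaussian posterior over the last layer's weights and Thompson-sample it
# to drive exploration. Illustrative only.
import numpy as np

def posterior(Phi, y, sigma2=1.0, prior_var=10.0):
    """Posterior mean and covariance of weights for y ~ Phi @ w + noise."""
    d = Phi.shape[1]
    precision = Phi.T @ Phi / sigma2 + np.eye(d) / prior_var
    cov = np.linalg.inv(precision)
    mean = cov @ Phi.T @ y / sigma2
    return mean, cov

rng = np.random.default_rng(0)
Phi = rng.normal(size=(50, 4))                 # features of visited states
y = Phi @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.normal(size=50)
mean, cov = posterior(Phi, y)
w_sample = rng.multivariate_normal(mean, cov)  # Thompson sample of Q-weights
print(w_sample)
```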

5.
In this paper, we propose to use hierarchical action decomposition to make Bayesian model-based reinforcement learning more efficient and feasible for larger problems. We formulate Bayesian hierarchical reinforcement learning as a partially observable semi-Markov decision process (POSMDP). The main POSMDP task is partitioned into a hierarchy of POSMDP subtasks. Each subtask might consist of only primitive actions or hierarchically call other subtasks' policies, since the policies of lower-level subtasks are treated as macro actions in higher-level subtasks. A solution for this hierarchical action decomposition is to solve lower-level subtasks first, then higher-level ones. Because each formulated POSMDP has a continuous state space, we sample from a prior belief to build an approximate model for each, then solve it using a recently introduced Monte Carlo Value Iteration with Macro-Actions solver. We name this method Monte Carlo Bayesian Hierarchical Reinforcement Learning. Simulation results show that our algorithm, exploiting the action hierarchy, performs significantly better than flat Bayesian reinforcement learning in terms of both reward and, especially, solving time, which improves by at least one order of magnitude.

6.
The multi-depot vehicle routing problem (MDVRP) is a problem model widely applied in present-day supply chains. Most existing algorithms are heuristic: they solve slowly and cannot guarantee solution quality, so fast and effective solution algorithms have both academic and practical value. Taking the minimization of total vehicle route distance as the objective, this paper proposes a solution model based on multi-agent deep reinforcement learning. First, it defines a multi-agent reinforcement learning formulation of the multi-depot vehicle routing problem, including states, actions, rewards, and the state transition function, so that the model can be trained with multi-agent reinforcement learning. Then, by defining node neighborhoods and a masking mechanism for the MDVRP, it designs an attention-based policy network composed of multiple agent networks and trains it with a policy gradient algorithm to obtain a model that solves instances quickly. Next, a 2-opt local search strategy and a sampling search strategy are used to improve solution quality. Finally, simulation experiments on problems of different scales, together with comparisons against other algorithms, verify that the proposed multi-agent deep reinforcement learning model, combined with the search strategies, can quickly obtain high-quality solutions.
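For the 2-opt local search mentioned in the abstract, a minimal single-route sketch looks like this (illustrative; the multi-depot bookkeeping is omitted):

```python
# Minimal 2-opt local search sketch: reverse a segment of the route whenever
# doing so shortens the total distance. Single-route, illustrative version.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_length(route):
    return sum(dist(route[i], route[i + 1]) for i in range(len(route) - 1))

def two_opt(route):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(route) - 2):
            for j in range(i + 1, len(route) - 1):
                candidate = route[:i] + route[i:j + 1][::-1] + route[j + 1:]
                if route_length(candidate) < route_length(route):
                    route, improved = candidate, True
    return route

depot = (0.0, 0.0)
route = [depot, (1, 5), (2, 1), (3, 4), depot]  # depot -> customers -> depot
print(route_length(route), route_length(two_opt(route)))
```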

7.
Reinforcement learning (RL) is an area of machine learning concerned with how an agent learns to make decisions sequentially in order to optimize a particular performance measure. To achieve such a goal, the agent has to choose between 1) exploiting previously acquired knowledge, which might end up in local optimality, and 2) exploring to gather new knowledge that is expected to improve current performance. Among RL algorithms, Bayesian model-based RL (BRL) is well known for being able to trade off exploitation and exploration optimally via belief planning, i.e., as a partially observable Markov decision process (POMDP). However, solving that POMDP often suffers from the curse of dimensionality and the curse of history. In this paper, we make two major contributions: 1) an integration framework for temporal abstraction in BRL that results in a hierarchical POMDP formulation, which can be solved online using a hierarchical sample-based planning solver; and 2) a subgoal discovery method for hierarchical BRL that automatically discovers useful macro actions to accelerate learning. In the experiment section, we demonstrate that the proposed approach can scale up to much larger problems, and that the agent is able to discover useful subgoals that speed up Bayesian reinforcement learning.

8.
A Bayesian Network Structure Learning Algorithm Based on Genetic Algorithms and Reinforcement Learning (cited 1 time: 0 self-citations, 1 by others)
A genetic algorithm searches and optimizes over a problem's solution space according to adaptation principles based on the laws of biological inheritance in nature. Bayesian networks are a principal method for modeling and reasoning about uncertain knowledge, and the learning problems in Bayesian networks (parameter learning and structure learning) are NP-hard. Reinforcement learning is an online learning method that updates learning results from new sequential data. This paper describes how reinforcement learning can guide a genetic algorithm to learn Bayesian network structure effectively.

9.
蒋云良, 赵康, 曹军杰, 范婧, 刘勇. 《控制与决策》, 2021, 36(8): 1825-1833
In recent years, as deep learning and especially deep reinforcement learning models have kept growing, their training cost, i.e., the hyperparameter search space, has also kept expanding. However, most traditional hyperparameter search algorithms execute training runs sequentially, often taking weeks or even months to find a reasonably good hyperparameter configuration. To address the long search time and the difficulty of finding good hyperparameter configurations in deep reinforcement learning, this paper proposes a new hyperparameter search algorithm: population-evolution-based asynchronous parallel hyperparameter search (PEHS). Drawing on ideas from evolutionary algorithms, the algorithm searches a population of models and their hyperparameters asynchronously and in parallel within a fixed resource budget, thereby improving performance. The search algorithm is designed and implemented to run on the Ray parallel distributed framework. Experiments show that population-evolution-based asynchronous parallel hyperparameter search on the parallel framework outperforms traditional hyperparameter search algorithms, with stable performance.
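One exploit/explore step of a population-based search, in the spirit of the PEHS idea, might look like the sketch below. It is illustrative only; the perturbation factors and population layout are assumptions, and real workers would also copy model weights.

```python
# Sketch of one exploit/explore step of population-based hyperparameter
# search: a bottom-quartile worker copies a top performer and perturbs its
# hyperparameters. Values and structure are illustrative assumptions.
import random

def pbt_step(population):
    """population: list of dicts with 'score' and 'hypers'."""
    ranked = sorted(population, key=lambda w: w['score'], reverse=True)
    cut = max(1, len(ranked) // 4)
    top, bottom = ranked[:cut], ranked[-cut:]
    for worker in bottom:
        source = random.choice(top)              # exploit a top performer
        worker['hypers'] = {k: v * random.choice((0.8, 1.2))
                            for k, v in source['hypers'].items()}  # explore
    return population

pop = [{'score': random.random(), 'hypers': {'lr': 1e-3, 'gamma': 0.99}}
       for _ in range(8)]
pbt_step(pop)
print(pop[0]['hypers'])
```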

10.
Recent developments in graph neural networks and deep reinforcement learning have provided new methods for solving combinatorial optimization problems. Most current methods of this kind do not consider the problem of learning algorithm parameters. To address this, an intelligent optimization model is designed based on graph attention networks. The model learns from a large amount of problem data, automatically constructs neighborhood search operators and a sequence-destruction terminator, and trains its parameters with reinforcement learning. The model is tested on standard benchmark sets in three different experiments. The results show that the learned neighborhood search operators have strong optimization ability and convergence while significantly reducing the GPU memory used during training. The model can solve CVRP instances with hundreds of nodes in a short time and has some potential to scale further.

11.
The vehicle routing problem is a core problem in logistics and transportation optimization; the goal is to find a minimum-cost vehicle routing plan that satisfies customer demand. As the scale of logistics and transportation keeps growing, the problem becomes harder to solve and real-time requirements keep rising, so existing conventional algorithms no longer meet practical needs. In recent years, reinforcement-learning-based algorithms have become an important approach to the vehicle routing problem. After briefly reviewing conventional methods for solving the problem, this paper surveys reinforcement learning algorithms for the vehicle routing problem, classifying them into dynamic-programming-based, value-based, and policy-based methods, and concludes with an outlook on future research.

12.
In this paper, we address the problem of suboptimal behavior during online partially observable Markov decision process (POMDP) planning caused by time constraints on planning. Taking inspiration from the related field of reinforcement learning (RL), our solution is to shape the agent's reward function in order to lead the agent to large future rewards without having to spend as much time explicitly estimating cumulative future rewards, enabling the agent to save time, improve the breadth of its planning, and build higher-quality plans. Specifically, we extend potential-based reward shaping (PBRS) from RL to online POMDP planning. In our extension, information about belief states is added to the function optimized by the agent during planning. This information provides hints about where the agent might find high future rewards beyond its planning horizon, and thus achieve greater cumulative rewards. We develop novel potential functions measuring information useful for agent metareasoning in POMDPs (reflecting on agent knowledge and/or histories of experience with the environment), theoretically prove several important properties and benefits of using PBRS for online POMDP planning, and empirically demonstrate these results on a range of classic benchmark POMDP planning problems.
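The standard PBRS form, which this work extends to belief states, adds gamma * Phi(s') - Phi(s) to the immediate reward and provably preserves optimal policies. A minimal sketch with a hypothetical belief-state potential:

```python
# Potential-based reward shaping (standard form): the shaped reward adds
# gamma * Phi(s') - Phi(s), which preserves the optimal policy. Minimal
# sketch; the negative-entropy potential over beliefs is an assumption.
import math

def shaped_reward(r, belief, next_belief, potential, gamma=0.95):
    return r + gamma * potential(next_belief) - potential(belief)

def neg_entropy(belief):
    # Hypothetical potential: prefer low-uncertainty (peaked) beliefs
    return sum(p * math.log(p) for p in belief if p > 0)

# Reward bonus for a transition that sharpens the belief
print(shaped_reward(1.0, [0.5, 0.5], [0.9, 0.1], neg_entropy))
```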

13.
Reinforcement learning learns a mapping from environment states to actions so as to maximize a reward signal. Three families of methods in reinforcement learning can maximize return: value iteration, policy iteration, and policy search. This paper introduces the principles and algorithms of reinforcement learning, studies discrete-space value iteration algorithms with and without an environment model, and applies them to gridworld problems with fixed and random start states. Experimental results show that, compared with policy iteration, the value iteration algorithm converges faster and achieves better accuracy.
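A minimal tabular value iteration sketch for a deterministic gridworld of the kind described here (the grid size, goal location, and rewards are assumptions for illustration):

```python
# Tabular value iteration sketch for a tiny deterministic gridworld:
# V(s) <- max_a [ R(s,a) + gamma * V(next(s,a)) ]. Illustrative setup.
import numpy as np

n = 4                                  # 4x4 grid, goal at (3, 3)
gamma, theta = 0.9, 1e-6
V = np.zeros((n, n))
actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def step(s, a):
    r, c = min(max(s[0] + a[0], 0), n - 1), min(max(s[1] + a[1], 0), n - 1)
    reward = 1.0 if (r, c) == (n - 1, n - 1) else 0.0
    return (r, c), reward

while True:
    delta = 0.0
    for r in range(n):
        for c in range(n):
            if (r, c) == (n - 1, n - 1):
                continue               # terminal goal state keeps value 0
            best = max(rw + gamma * V[ns] for ns, rw in
                       (step((r, c), a) for a in actions))
            delta = max(delta, abs(best - V[r, c]))
            V[r, c] = best
    if delta < theta:
        break
print(V.round(3))
```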

14.
Solving partially observable Markov decision processes (POMDPs) is a complex task that is often intractable. This paper examines the problem of finding an optimal policy for POMDPs. While much effort has been made to develop algorithms for solving POMDPs, the question of automatically finding good low-dimensional spaces in multi-agent cooperative learning domains has not been explored thoroughly. To address this question, an online algorithm, CMEAS, is presented to improve the POMDP model. The algorithm is based on a look-ahead search to find the best action to execute at each cycle, so the overwhelming complexity of computing a policy for each possible situation is avoided. A series of simulations demonstrates the good strategy and performance of the proposed algorithm when multiple agents cooperate to find an optimal policy for POMDPs.

15.
An Adaptive BN Structure Learning Algorithm Based on a Dual-Scale Constraint Model (cited 1 time: 0 self-citations, 1 by others)
戴晶帼, 任佳, 董超, 杜文才. 《自动化学报》, 2021, 47(8): 1988-2001
Without prior information, the structure search space of a Bayesian network (BN) grows exponentially with the number of nodes, making BN structure learning drastically harder. To address this, an adaptive BN structure learning algorithm based on a dual-scale constraint model is proposed. The algorithm uses maximum mutual information and conditional independence tests to build a large-scale constraint model that initializes the BN structure search space. On this basis, an improved genetic algorithm is designed that introduces a small-scale constraint model during iterative structure optimization, dynamically shrinking the search space at small scale. An adaptive mutation-probability function is also built into the improved genetic algorithm to reduce the probability of the structure learning process getting trapped in local optima. Simulation results show that the proposed algorithm can guarantee both the accuracy of BN structure learning and the convergence speed of the iterative search without prior information.
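The large-scale constraint idea of pruning the search space by mutual information can be sketched as follows. This is illustrative only: the threshold and toy data are assumptions, and the conditional independence tests are omitted.

```python
# Sketch of a mutual-information constraint: estimate pairwise MI from data
# and keep only variable pairs above a threshold as candidate BN edges.
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
data = {'A': rng.integers(0, 2, 1000)}
data['B'] = np.where(rng.random(1000) < 0.9, data['A'], 1 - data['A'])  # dependent on A
data['C'] = rng.integers(0, 2, 1000)                                    # independent

threshold = 0.05  # assumed cutoff for candidate edges
candidates = [(x, y) for x in data for y in data
              if x < y and mutual_info_score(data[x], data[y]) > threshold]
print(candidates)  # expected: [('A', 'B')]
```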

16.
To model supply-demand interaction in power systems under electricity-market conditions more precisely, and to better match the changing relationships and complex communication topologies among demand-side load aggregators in future electricity markets, this paper combines a Stackelberg game of power-system supply-demand interaction with an evolutionary game on complex networks that captures the interactions among demand-side load aggregators, building a hybrid supply-demand interaction game model that accounts for market factors. A hybrid-game reinforcement learning algorithm is proposed to solve the resulting non-convex, non-continuous optimization problem. With Q-learning as its carrier, the algorithm introduces ideas from game theory and graph theory, combines block-wise coordination with evolutionary game methods, and makes full use of the knowledge-matrix information formed by the interactive game relationships among players, solving non-convex optimization problems for multi-agent systems on complex networks with high quality. Simulation results on four types of 3-generator, 6-load systems built on complex-network theory, as well as on the power grid of a major city in southern China, show that the hybrid-game reinforcement learning algorithm searches better than most centralized intelligent algorithms and delivers good results across different networks, exhibiting strong adaptability and stability.
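The Q-learning carrier named in the abstract reduces to the standard tabular update; a minimal single-agent sketch (the states and actions are hypothetical):

```python
# Minimal tabular Q-learning update, the carrier of the hybrid-game
# algorithm described above (illustrative, single-agent form):
# Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
from collections import defaultdict

Q = defaultdict(float)

def q_update(s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
    target = r + gamma * max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])

q_update('s0', 'bid_high', 1.0, 's1', actions=['bid_high', 'bid_low'])
print(Q[('s0', 'bid_high')])  # 0.1
```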

17.
We model reinforcement learning as the problem of learning to control a partially observable Markov decision process (POMDP) and focus on gradient ascent approaches to this problem. In an earlier work (2001, J. Artificial Intelligence Res. 14) we introduced GPOMDP, an algorithm for estimating the performance gradient of a POMDP from a single sample path, and we proved that this algorithm almost surely converges to an approximation to the gradient. In this paper, we provide a convergence rate for the estimates produced by GPOMDP and give an improved bound on the approximation error of these estimates. Both of these bounds are in terms of mixing times of the POMDP.
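The GPOMDP estimator works from a single sample path: an eligibility trace discounted by a factor beta accumulates policy score functions, and a running average of reward-weighted traces estimates the performance gradient. A minimal sketch of that form (the toy Bernoulli policy and exact per-step timing are assumptions):

```python
# Sketch of a GPOMDP-style single-sample-path gradient estimate.
import numpy as np

def gpomdp(trajectory, grad_log_policy, beta=0.9):
    """trajectory: list of (observation, action, reward) tuples."""
    z, delta = None, None
    for t, (obs, action, reward) in enumerate(trajectory):
        score = grad_log_policy(obs, action)          # grad log mu(a | theta, obs)
        z = score if z is None else beta * z + score  # discounted eligibility trace
        d = reward * z
        delta = d if delta is None else delta + (d - delta) / (t + 1)  # running mean
    return delta                                      # gradient estimate

theta = np.array([0.0])
def grad_log(obs, a):                 # Bernoulli policy: p = sigmoid(theta)
    p = 1.0 / (1.0 + np.exp(-theta[0]))
    return np.array([a - p])          # d/dtheta log p(a)

traj = [(None, 1, 1.0), (None, 0, 0.0), (None, 1, 1.0)]
print(gpomdp(traj, grad_log))
```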

18.
A popular strategy for dealing with large parameter estimation problems is to split the problem into manageable subproblems and solve them cyclically one by one until convergence. A well-known drawback of this strategy is slow convergence in low-noise conditions. We propose using so-called pattern searches, which consist of an exploratory phase followed by a line search. During the exploratory phase, a search direction is determined by combining the individual updates of all subproblems. The approach can be used to speed up several well-known learning methods, such as variational Bayesian learning (ensemble learning) and the expectation-maximization algorithm, with modest algorithmic modifications. Experimental results show that the proposed method is able to reduce the required convergence time by 60–85% in realistic variational Bayesian learning problems.
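A minimal sketch of the pattern-search step described here: one cyclic pass of subproblem updates gives the search direction, followed by a line search along it. The quadratic toy problem, step sizes, and `update_subproblem` stand-in are assumptions.

```python
# Sketch of a pattern search: exploratory cyclic updates define a direction,
# then a simple line search extrapolates along it. Illustrative only.
import numpy as np

def pattern_search_step(params, update_subproblem, loss, step_sizes=(1, 2, 4, 8)):
    old = params.copy()
    for i in range(len(params)):            # exploratory phase: cyclic updates
        params[i] = update_subproblem(params, i)
    direction = params - old                # combined update direction
    best = params
    for s in step_sizes:                    # line search along the direction
        candidate = old + s * direction
        if loss(candidate) < loss(best):
            best = candidate
    return best

# Toy quadratic where coordinate updates alone converge slowly
A = np.array([[2.0, 1.9], [1.9, 2.0]])
loss = lambda x: 0.5 * x @ A @ x
upd = lambda x, i: x[i] - (A[i] @ x) / A[i, i]   # exact coordinate minimizer
x = pattern_search_step(np.array([1.0, 1.0]), upd, loss)
print(x, loss(x))
```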

19.
This research treats a bargaining process as a Markov decision process, in which a bargaining agent's goal is to learn the optimal policy that maximizes the total rewards it receives over the process. Reinforcement learning is an effective method for agents to learn how to determine actions at any time step in a Markov decision process. Temporal-difference (TD) learning is a fundamental method for solving the reinforcement learning problem, and it can tackle the temporal credit assignment problem. This research designs agents that apply TD-based reinforcement learning to deal with online bilateral bargaining with incomplete information, and further evaluates the agents' bargaining performance in terms of average payoff and settlement rate. The results show that agents using TD-based reinforcement learning achieve good bargaining performance. This learning approach is sufficiently robust and convenient, and hence suitable for online automated bargaining in electronic commerce.
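TD learning as used here reduces, in its simplest state-value form, to the TD(0) update; a minimal sketch with hypothetical bargaining states:

```python
# Minimal TD(0) sketch, the learning rule the study builds on:
# V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s)).
from collections import defaultdict

V = defaultdict(float)

def td0_update(s, r, s_next, alpha=0.1, gamma=0.95):
    V[s] += alpha * (r + gamma * V[s_next] - V[s])

# One hypothetical bargaining episode: offer -> counteroffer -> settle (payoff 0.6)
for s, r, s2 in [('offer', 0.0, 'counter'), ('counter', 0.6, 'settled')]:
    td0_update(s, r, s2)
print(dict(V))
```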

20.
Effectively exploring the environment is a hard problem in deep reinforcement learning. The Deep Q-Network (DQN) explores with an ε-greedy policy, where the magnitude and decay of ε must be tuned by hand, and poor tuning degrades performance. This exploration strategy is not efficient enough to solve deep exploration problems. To address the low exploration efficiency of DQN's ε-greedy policy, this paper proposes a DQN algorithm based on averaged neural network parameters (Averaged Parameters DQN, AP-DQN). At the start of each episode, the algorithm averages the parameters of several online value networks the agent has previously learned, obtaining a perturbed network whose parameters are then used for action selection, thereby improving the agent's exploration efficiency. Experimental results show that AP-DQN explores more efficiently than DQN on deep exploration problems: in five Atari game environments it achieves higher average per-episode reward than DQN, with normalized scores improving over DQN by at most 112.50% and at least 19.07%.
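The parameter-averaging step at the heart of AP-DQN, as described in the abstract, can be sketched with plain arrays standing in for network weights (illustrative only):

```python
# Sketch of the parameter-averaging idea: average the weights of several
# previously learned online networks to obtain a perturbed network used
# for action selection at the start of an episode. Plain-numpy stand-in.
import numpy as np

def average_parameters(snapshots):
    """snapshots: list of dicts mapping layer name -> weight array."""
    return {name: np.mean([snap[name] for snap in snapshots], axis=0)
            for name in snapshots[0]}

rng = np.random.default_rng(0)
snapshots = [{'w1': rng.normal(size=(4, 2)), 'b1': rng.normal(size=2)}
             for _ in range(3)]
perturbed = average_parameters(snapshots)
print(perturbed['b1'])  # parameters used to pick actions in the next episode
```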
