20 similar documents found.
1.
In this work, we build upon the observation that offline reinforcement learning (RL) is synergistic with task hierarchies that decompose large Markov decision processes (MDPs). Task hierarchies can allow more efficient sample collection from large MDPs, while offline algorithms can learn better policies than the so-called "recursively optimal" or even hierarchically optimal policies learned by standard hierarchical RL algorithms. To enable this synergy, we study sample collection strategies for offline RL that are consistent with a provided task hierarchy while still providing good exploration of the state-action space. We show that naïve extensions of uniform random sampling do not work well in this case and design a strategy that has provably good convergence properties. We also augment the initial set of samples using additional information from the task hierarchy, such as state abstraction. We use the augmented set of samples to learn a policy offline. Given a capable offline RL algorithm, this policy is then guaranteed to have a value greater than or equal to the value of the hierarchically optimal policy. We evaluate our approach on several domains and show that samples generated using a task hierarchy with a suitable strategy allow significantly more sample-efficient convergence than standard offline RL. Further, our approach also shows more sample-efficient convergence to policies with value greater than or equal to hierarchically optimal policies found through an online hierarchical RL approach.
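As a rough illustration of the naïve strategy the abstract says performs poorly, here is a minimal sketch of hierarchy-consistent uniform sampling; the names `subtasks` and `admissible_pairs` are hypothetical, and the paper's improved strategy is not reproduced here:

```python
import random

def naive_hierarchical_sample(subtasks, admissible_pairs):
    """Pick a subtask uniformly, then a (state, action) pair admissible
    for it uniformly; per the abstract, this explores poorly."""
    task = random.choice(subtasks)
    return task, random.choice(admissible_pairs[task])
```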
2.
Hierarchical reinforcement learning typically requires a hand-crafted task hierarchy, and state-space-based automatic decomposition methods perform poorly when the environment's states offer no obvious subgoals. To address both problems, an automatic hierarchy-construction method based on the action space is proposed. First, the action set is partitioned into disjoint subsets according to the state components each action affects; next, the actions available to the agent in different states are analyzed to identify bottleneck actions; finally, the bottleneck actions and their execution order determine the layering among the action subsets, from which the hierarchy is constructed. In addition, the termination conditions of subtasks in the MAXQ method are modified so that the constructed hierarchy allows MAXQ to find the optimal policy. Experimental results show that the proposed algorithm constructs hierarchies automatically and is robust to environmental changes. Compared with Q-learning and Sarsa, MAXQ with the constructed hierarchy reaches the optimal policy in less time and obtains higher returns, confirming that the algorithm effectively builds MAXQ hierarchies automatically and makes finding the optimal policy more efficient.
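A minimal sketch of the first two steps described above, under assumed environment hooks; `affected_components`, `available_actions`, and the availability-ratio test are illustrative stand-ins, not the paper's exact criteria:

```python
from collections import defaultdict

def partition_actions(actions, affected_components):
    """Group actions by the set of state components each one affects;
    the groups form disjoint action subsets."""
    groups = defaultdict(list)
    for a in actions:
        groups[frozenset(affected_components[a])].append(a)
    return list(groups.values())

def bottleneck_actions(actions, states, available_actions, ratio=0.1):
    """Flag actions that are available in only a small fraction of
    states as bottleneck candidates (the threshold is an assumption)."""
    counts = {a: 0 for a in actions}
    for s in states:
        for a in available_actions(s):
            counts[a] += 1
    return [a for a in actions if counts[a] <= ratio * len(states)]
```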
3.
To address shortcomings of the MAXQ algorithm, an improved hierarchical learning algorithm called BMAXQ is proposed. The method modifies MAXQ's abstraction mechanism and exploits the properties of BP (backpropagation) neural networks, enabling the agent to discover subtasks automatically, learn the individual levels in parallel, and adapt to learning tasks in dynamic environments.
4.
5.
An online hierarchical reinforcement learning method based on path matching
Finding correct subgoals online is the key problem in option-based hierarchical reinforcement learning. By analyzing the agent's actions at subgoals, we find that the effective actions at a subgoal are restricted, which turns the search for subgoals into a search for the action-restricted states that best match along a path. For grid learning environments, a one-way value method is proposed to represent the restricted effective actions at subgoals, together with an automatic option-discovery algorithm based on it. Experiments show that options generated by the one-way value method significantly accelerate Q-learning; the influence of the timing and size of option generation on Q-learning performance is also analyzed.
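A toy sketch of the action-restriction idea in a grid world; the simple usable-action count below is an assumed stand-in for the paper's one-way value method:

```python
def usable_actions(grid, cell):
    """Moves from `cell` that do not run into a wall ('#')."""
    r, c = cell
    moves = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [(nr, nc) for nr, nc in moves
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0])
            and grid[nr][nc] != '#']

def subgoal_candidates(grid, max_usable=2):
    """Free cells where the agent's effective actions are restricted."""
    return [(r, c)
            for r in range(len(grid)) for c in range(len(grid[0]))
            if grid[r][c] != '#'
            and len(usable_actions(grid, (r, c))) <= max_usable]

# Example: the doorway between two rooms is flagged.
grid = ["#####",
        "#   #",
        "## ##",
        "#   #",
        "#####"]
print(subgoal_candidates(grid))  # includes the doorway (2, 2), plus corners
```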
6.
In this paper, we propose to use hierarchical action decomposition to make Bayesian model-based reinforcement learning more efficient and feasible for larger problems. We formulate Bayesian hierarchical reinforcement learning as a partially observable semi-Markov decision process (POSMDP). The main POSMDP task is partitioned into a hierarchy of POSMDP subtasks. Each subtask might consist of only primitive actions or hierarchically call other subtasks' policies, since the policies of lower-level subtasks are considered as macro actions in higher-level subtasks. A solution for this hierarchical action decomposition is to solve lower-level subtasks first, then higher-level ones. Because each formulated POSMDP has a continuous state space, we sample from a prior belief to build an approximate model for each, then solve it using a recently introduced Monte Carlo Value Iteration with Macro-Actions solver. We name this method Monte Carlo Bayesian Hierarchical Reinforcement Learning. Simulation results show that our algorithm, by exploiting the action hierarchy, performs significantly better than flat Bayesian reinforcement learning in terms of both reward and, especially, solving time, by at least an order of magnitude.
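A small sketch of the macro-action semantics assumed here: a lower-level subtask policy runs until its termination predicate fires, and the higher level receives the accumulated discounted reward (the `env.step(state, action) -> (state, reward)` interface is an assumption):

```python
def run_macro(env, state, policy, terminates, gamma=0.95):
    """Execute a lower-level policy to termination, returning the resulting
    state, the accumulated discounted reward, and the leftover discount
    factor needed for the SMDP-level backup."""
    total, discount = 0.0, 1.0
    while not terminates(state):
        state, reward = env.step(state, policy(state))
        total += discount * reward
        discount *= gamma
    return state, total, discount
```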
7.
Engineering Applications of Artificial Intelligence, 2007, 20(7): 936-950
The robot soccer game has been proposed as a benchmark problem for artificial intelligence and robotics research. The decision-making system is the most important part of the robot soccer system. As the environment is dynamic and complex, a reinforcement learning (RL) method named FNN-RL is employed to learn the decision-making strategy. The FNN-RL system consists of a fuzzy neural network (FNN) and RL. RL is used for structure identification and parameter tuning of the FNN; conversely, the curse-of-dimensionality problem of RL can be addressed by the function-approximation capability of the FNN. Furthermore, the residual algorithm is used to calculate the gradient of the FNN-RL method in order to guarantee the convergence and rapidity of learning. The complex decision-making task is divided into multiple learning subtasks, including dynamic role assignment, action selection, and action implementation, which together constitute a hierarchical learning system. We apply the proposed FNN-RL method to soccer agents that attempt to learn each subtask at the various layers. The effectiveness of the proposed method is demonstrated by simulation and real experiments.
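For orientation, here is a sketch of a residual-gradient value update in the spirit of the residual algorithm mentioned above, using linear features instead of the paper's fuzzy neural network for brevity; `phi` is an assumed feature map from states to numpy vectors:

```python
import numpy as np

def residual_gradient_step(w, phi, s, r, s_next,
                           gamma=0.95, alpha=0.01, done=False):
    """One gradient-descent step on the squared Bellman residual
    0.5 * (r + gamma * V(s') - V(s))**2 with V(s) = w . phi(s)."""
    v_next = 0.0 if done else w @ phi(s_next)
    delta = r + gamma * v_next - w @ phi(s)                  # Bellman residual
    grad = (0.0 if done else gamma * phi(s_next)) - phi(s)   # d(delta)/dw
    return w - alpha * delta * grad
```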
8.
Advances in hierarchical reinforcement learning research
This survey first introduces the theoretical foundations of hierarchical reinforcement learning, including semi-Markov decision processes, hierarchy, and abstraction. It then compares four representative learning methods, HAM, options, MAXQ, and HEXQ, fairly comprehensively, and discusses the state of research from the perspectives of extensions to these methods, learning the hierarchy itself, partially observable Markov decision processes, and concurrent and multi-agent cooperation. Finally, it points out future directions for hierarchical reinforcement learning.
9.
Keith L. Downing, Genetic Programming and Evolvable Machines, 2001, 2(3): 259-288
This paper introduces the Reinforced Genetic Programming (RGP) system, which enhances standard tree-based genetic programming (GP) with reinforcement learning (RL). RGP adds a new element to the GP function set: monitored action-selection points that provide hooks to a reinforcement-learning system. Using strong typing, RGP can restrict these choice points to leaf nodes, thereby turning GP trees into classify-and-act procedures. Then, environmental reinforcements channeled back through the choice points provide the basis for both lifetime learning and general GP fitness assessment. This paves the way for evolutionary acceleration via both Baldwinian and Lamarckian mechanisms. In addition, the hybrid hints at potential improvements to RL by exploiting evolution to design proper abstraction spaces, via the problem-state classifications of the internal tree nodes. This paper details the basic mechanisms of RGP and demonstrates its application on a series of static and dynamic maze-search problems.
10.
Hierarchical algorithms for Markov decision processes have proved useful for problem domains with multiple subtasks. Although existing hierarchical approaches are strong in task decomposition, they are weak in task abstraction, which is more important for task analysis and modeling. In this paper, we propose a task-oriented design to strengthen task abstraction. Our approach learns an episodic task model from the problem domain, with which the planner obtains the same control effect as the original model, but with a more concise structure and much improved performance. According to our analysis and experimental evaluation, our approach performs better than existing hierarchical algorithms such as MAXQ and HEXQ.
11.
We formalize the problem of Structured Prediction as a Reinforcement Learning task. We first define a Structured Prediction Markov Decision Process (SP-MDP), an instantiation of Markov Decision Processes for Structured Prediction, and show that learning an optimal policy for this SP-MDP is equivalent to minimizing the empirical loss. This link between the supervised learning formulation of structured prediction and reinforcement learning (RL) allows us to use approximate RL methods for learning the policy. The proposed model makes weak assumptions both on the nature of the Structured Prediction problem and on the supervision process. It does not make any assumption on the decomposition of loss functions, on data encoding, or on the availability of optimal policies for training, which allows us to cope with a wide range of structured prediction problems. Moreover, it scales well and can be used for solving both complex and large-scale real-world problems. We describe two series of experiments. The first provides an analysis of RL on classical sequence prediction benchmarks and compares our approach with state-of-the-art SP algorithms. The second introduces a tree transformation problem where most previous models fail; this is a complex instance of the general labeled tree mapping problem. We show that RL exploration is effective and leads to successful results on this challenging task. This is a clear confirmation that RL can be used for large-size and complex structured prediction problems.
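A toy instance of the SP-MDP idea, with illustrative names: states are partial labelings, actions append a label, and the terminal reward is the negative Hamming loss against the gold sequence:

```python
class SequenceLabelingMDP:
    """Sequence labeling cast as an MDP (a sketch, not the paper's code)."""

    def __init__(self, tokens, gold, labels=("O", "B", "I")):
        self.tokens, self.gold, self.labels = tokens, gold, labels

    def initial_state(self):
        return ()  # empty partial output

    def step(self, state, action):
        state = state + (action,)
        done = len(state) == len(self.tokens)
        # Reward only at the end: negative Hamming loss vs. gold labels.
        reward = -sum(p != g for p, g in zip(state, self.gold)) if done else 0.0
        return state, reward, done
```

For instance, `SequenceLabelingMDP(("the", "cat"), ("O", "B")).step((), "O")` returns `(("O",), 0.0, False)`; minimizing empirical loss then coincides with maximizing return.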
12.
Mohammad Ghavamzadeh, Sridhar Mahadevan, Rajbala Makar, Autonomous Agents and Multi-Agent Systems, 2006, 13(2): 197-229
In this paper, we investigate the use of hierarchical reinforcement learning (HRL) to speed up the acquisition of cooperative multi-agent tasks. We introduce a hierarchical multi-agent reinforcement learning (RL) framework and propose a hierarchical multi-agent RL algorithm called Cooperative HRL. In this framework, agents are cooperative and homogeneous (they use the same task decomposition). Learning is decentralized, with each agent learning three interrelated skills: how to perform each individual subtask, the order in which to carry them out, and how to coordinate with other agents. We define cooperative subtasks to be those subtasks in which coordination among agents significantly improves the performance of the overall task. Those levels of the hierarchy which include cooperative subtasks are called cooperation levels. A fundamental property of the proposed approach is that it allows agents to learn coordination faster by sharing information at the level of cooperative subtasks, rather than attempting to learn coordination at the level of primitive actions. We study the empirical performance of the Cooperative HRL algorithm using two testbeds: a simulated two-robot trash collection task, and a larger four-agent automated guided vehicle (AGV) scheduling problem. We compare the performance and speed of Cooperative HRL with other learning algorithms, as well as several well-known industrial AGV heuristics. We also address the issue of rational communication behavior among autonomous agents in this paper. The goal is for agents to learn both action and communication policies that together optimize the task given a communication cost. We extend the multi-agent HRL framework to include communication decisions and propose a cooperative multi-agent HRL algorithm called COM-Cooperative HRL. In this algorithm, we add a communication level to the hierarchical decomposition of the problem below each cooperation level. Before an agent makes a decision at a cooperative subtask, it decides if it is worthwhile to perform a communication action. A communication action has a certain cost and provides the agent with the actions selected by the other agents at a cooperation level. We demonstrate the efficiency of the COM-Cooperative HRL algorithm as well as the relation between the communication cost and the learned communication policy using a multi-agent taxi problem.
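A minimal sketch of the communicate-or-not decision at a communication level, assuming a tabular Q-function in which the communication cost has already entered through the reward during learning; all names are illustrative:

```python
import random

def choose_communication(q_comm, state, epsilon=0.1):
    """Epsilon-greedy choice between communicating (paying the cost but
    seeing the other agents' subtask choices) and staying silent."""
    if random.random() < epsilon:
        return random.choice(["communicate", "silent"])
    return max(("communicate", "silent"),
               key=lambda a: q_comm.get((state, a), 0.0))
```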
13.
As an important branch of machine learning and artificial intelligence, multi-agent hierarchical reinforcement learning combines the cooperative abilities of multiple agents with the decision-making abilities of reinforcement learning in a general form, and can effectively mitigate the curse of dimensionality by decomposing a complex reinforcement learning problem into several subproblems that are solved separately. This makes it a promising approach to intelligent decision-making problems in large-scale, complex settings. This survey first describes the main techniques involved, including reinforcement learning, semi-Markov decision processes, and multi-agent reinforcement learning. Then, from the perspective of how the hierarchy is formed, it reviews the principles and research status of four families of multi-agent hierarchical reinforcement learning methods: option-based, HAM-based (hierarchies of abstract machines), value-function-decomposition-based, and end-to-end. Finally, it surveys applications in robot control, game decision-making, and task planning.
14.
Transfer in variable-reward hierarchical reinforcement learning
Transfer learning seeks to leverage previously learned tasks to achieve faster learning in a new task. In this paper, we consider transfer learning in the context of related but distinct Reinforcement Learning (RL) problems. In particular, our RL problems are derived from Semi-Markov Decision Processes (SMDPs) that share the same transition dynamics but have different reward functions that are linear in a set of reward features. We formally define the transfer learning problem in the context of RL as learning an efficient algorithm to solve any SMDP drawn from a fixed distribution after experiencing a finite number of them. Furthermore, we introduce an online algorithm to solve this problem, Variable-Reward Reinforcement Learning (VRRL), that compactly stores the optimal value functions for several SMDPs, and uses them to optimally initialize the value function for a new SMDP. We generalize our method to a hierarchical RL setting where the different SMDPs share the same task hierarchy. Our experimental results in a simplified real-time strategy domain show that significant transfer learning occurs in both flat and hierarchical settings. Transfer is especially effective in the hierarchical setting, where the overall value functions are decomposed into subtask value functions which are more widely amenable to transfer across different SMDPs.
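A sketch of the value-initialization step under the stated linear-reward assumption: if each solved SMDP stores, per state, the vector of discounted reward features accumulated under its optimal policy, a new task with weight vector `w_new` can start from the best stored policy's value. Names are illustrative:

```python
import numpy as np

def initialize_value(stored_feature_values, w_new):
    """stored_feature_values: one dict per previously solved SMDP, mapping
    each state to its vector of discounted reward features under that
    task's optimal policy. Returns an initial V for the new task."""
    states = stored_feature_values[0].keys()
    return {s: max(float(np.dot(w_new, psi[s]))
                   for psi in stored_feature_values)
            for s in states}
```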
15.
16.
A novel supervised learning method is proposed by combining linear discriminant functions with neural networks. The proposed method results in a tree-structured hybrid architecture. Due to constructive learning, the binary-tree hierarchical architecture is automatically generated by a controlled growing process for a specific supervised learning task. Unlike the classic decision tree, the linear discriminant functions are employed only at the intermediate levels of the tree, for heuristically partitioning a large and complicated task into several smaller and simpler subtasks. These subtasks are then dealt with by component neural networks at the leaves of the tree. For constructive learning, growing and credit-assignment algorithms are developed to serve the hybrid architecture. The proposed architecture provides an efficient way to apply existing neural networks (e.g. the multilayer perceptron) to solving a large-scale problem. We have applied the proposed method to a universal approximation problem and several benchmark classification problems in order to evaluate its performance. Simulation results have shown that the proposed method yields better results and faster training in comparison with the multilayer perceptron.
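A minimal sketch of such a hybrid tree, with illustrative names: internal nodes route inputs with a linear discriminant, and leaves delegate to a component network (here any callable, e.g. a trained MLP's predict function):

```python
import numpy as np

class HybridNode:
    """Internal nodes hold (w, b) and two children; leaves hold a model."""

    def __init__(self, w=None, b=0.0, left=None, right=None, leaf_model=None):
        self.w, self.b = w, b
        self.left, self.right = left, right
        self.leaf_model = leaf_model

    def predict(self, x):
        if self.leaf_model is not None:      # leaf: delegate to the network
            return self.leaf_model(x)
        # internal: linear discriminant decides which subtask handles x
        branch = self.left if np.dot(self.w, x) + self.b <= 0 else self.right
        return branch.predict(x)
```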
17.
To speed up the automatic generation of task hierarchies in hierarchical reinforcement learning, a parallel automatic decomposition method based on a multi-agent system is proposed, taking the option framework introduced by Sutton as its theoretical basis. First, multiple agents cooperatively explore the state space in parallel, and centralized clustering produces state subspaces; the agents then learn the internal policies on each subspace in parallel, finally producing the options. The algorithm is presented for shortest-path planning between two points in a two-dimensional grid with obstacles, and simulation experiments show that the parallel method generates task hierarchies markedly faster than previous serial automatic decomposition methods. The approach is applicable to problem domains such as spatial exploration, path planning, and pursuit-evasion.
18.
Automatic complexity reduction in reinforcement learning
High dimensionality of the state representation is a major limitation for scale-up in reinforcement learning (RL). This work derives knowledge for complexity reduction from partial solutions and provides algorithms for automated dimension reduction in RL. We propose a cascading decomposition algorithm based on spectral analysis of a normalized graph Laplacian to decompose a problem into several subproblems, and then conduct parameter-relevance analysis on each subproblem to perform dynamic state abstraction. The elimination of irrelevant parameters projects the original state space into one of lower dimension, in which some subtasks are projected onto the same shared subtasks. The framework can identify irrelevant parameters based on performed action sequences and thus relieves the problem of high dimensionality in the learning process. We evaluate the framework with experiments and show that the dimension-reduction approach can indeed make some otherwise infeasible problems learnable.
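A sketch of the spectral step as described, assuming a symmetric adjacency matrix over states; the sign pattern of the low-order eigenvectors can then be used to split the state graph into subproblems:

```python
import numpy as np

def normalized_laplacian_embedding(A, k=2):
    """A: symmetric adjacency matrix of the state graph. Returns the k
    eigenvectors of L = I - D^(-1/2) A D^(-1/2) with smallest eigenvalues."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(A)) - d_inv_sqrt @ A @ d_inv_sqrt
    vals, vecs = np.linalg.eigh(L)   # ascending eigenvalues
    return vecs[:, :k]               # e.g. sign of column 1 splits the graph
```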
19.
Borja Fernandez-Gauna, Jose Manuel Lopez-Guede, Manuel Graña, Robotics and Autonomous Systems, 2013, 61(7): 694-703
Transfer learning is a hierarchical approach to reinforcement learning of complex tasks modeled as Markov Decision Processes. The learning results on the source task are used as the starting point for learning on the target task. In this paper we deal with a hierarchy of constrained systems, where the source task is an under-constrained system, hence called the Partially Constrained Model (PCM). Constraints in the framework of reinforcement learning are dealt with by state-action veto policies. We propose a theoretical background for the hierarchy of training refinements, showing that the effective action repertoires learnt on the PCM are maximal, and that the PCM-optimal policy gives maximal state value functions. We apply the approach to learn the control of Linked Multicomponent Robotic Systems using Reinforcement Learning. The paradigmatic example is the transportation of a hose. The system has strong physical constraints and a large state space. Learning experiments on the target task are realized over an accurate but computationally expensive simulation of the hose dynamics. The PCM is obtained by simplifying the hose model. Learning results of PCM Transfer Learning show a spectacular improvement over conventional Q-learning on the target task.
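A minimal sketch of action selection under a state-action veto policy, with illustrative names: vetoed pairs are excluded and the choice is greedy over the remaining repertoire:

```python
def greedy_non_vetoed(q, vetoed, state, actions):
    """Pick the highest-Q action whose (state, action) pair is not vetoed
    by the constraint system."""
    allowed = [a for a in actions if (state, a) not in vetoed]
    if not allowed:
        raise RuntimeError("no admissible action in this state")
    return max(allowed, key=lambda a: q.get((state, a), 0.0))
```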
20.
The behavior of reinforcement learning (RL) algorithms is best understood in completely observable, discrete-time controlled Markov chains with finite state and action spaces. In contrast, robot-learning domains are inherently continuous both in time and space, and moreover are partially observable. Here we suggest a systematic approach to solving such problems, in which the available qualitative and quantitative knowledge is used to reduce the complexity of the learning task. The steps of the design process are to: (i) decompose the task into subtasks using the qualitative knowledge at hand; (ii) design local controllers to solve the subtasks using the available quantitative knowledge; and (iii) learn a coordination of these controllers by means of reinforcement learning. It is argued that the approach enables fast, semi-automatic, but still high-quality robot control, as no fine-tuning of the local controllers is needed. The approach was verified on a non-trivial real-life robot task. Several RL algorithms were compared by ANOVA, and it was found that the model-based approach worked significantly better than the model-free approach. The learnt switching strategy performed comparably to a handcrafted version. Moreover, the learnt strategy seemed to exploit certain properties of the environment which were not foreseen in advance, thus supporting the view that adaptive algorithms are advantageous over nonadaptive ones in complex environments.
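A minimal sketch of step (iii), treating each local controller as one discrete action for ordinary Q-learning; the `env.reset`/`env.run` interface and hashable observations are assumptions of this sketch:

```python
import random

def q_learning_switch(env, controllers, episodes=500,
                      alpha=0.1, gamma=0.99, eps=0.1):
    """Learn a switching strategy over local controllers. env is assumed
    to expose reset() -> obs and run(controller, obs) ->
    (next_obs, reward, done)."""
    q = {}
    ids = list(range(len(controllers)))
    for _ in range(episodes):
        obs, done = env.reset(), False
        while not done:
            # epsilon-greedy choice of which controller to hand control to
            a = (random.choice(ids) if random.random() < eps
                 else max(ids, key=lambda i: q.get((obs, i), 0.0)))
            nxt, reward, done = env.run(controllers[a], obs)
            best = 0.0 if done else max(q.get((nxt, i), 0.0) for i in ids)
            old = q.get((obs, a), 0.0)
            q[(obs, a)] = old + alpha * (reward + gamma * best - old)
            obs = nxt
    return q
```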