Similar Literature
20 similar documents found.
1.
Deep inverse reinforcement learning is an emerging research focus in machine learning. To address the difficulty of obtaining the reward function in deep reinforcement learning, it reconstructs the reward function from expert demonstration trajectories. This survey first introduces the classic algorithms of three categories of deep reinforcement learning methods. It then reviews classic inverse reinforcement learning algorithms, including formulations based on apprenticeship learning, maximum margin planning, structured classification, and probabilistic models. Next, it surveys frontier directions of deep inverse reinforcement learning, including deep inverse reinforcement learning based on maximum margin methods, on deep Q-networks, and on maximum entropy models, as well as inverse reinforcement learning from non-expert demonstration trajectories. Finally, it summarizes the open problems and future directions of deep inverse reinforcement learning with respect to algorithms, theory, and applications.
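
As a rough illustration of the maximum-entropy idea mentioned in this survey (my own sketch, not code from any of the cited works), the snippet below performs one gradient step of maximum-entropy IRL with a linear reward: the gradient is the gap between expert and learner feature expectations. The feature map, trajectory format, and learning rate are hypothetical placeholders.

import numpy as np

def maxent_irl_step(theta, expert_trajs, sampled_trajs, feat, lr=0.01):
    """One gradient step of maximum-entropy IRL with a linear reward r(s) = theta . feat(s).

    The gradient is the difference between expert feature expectations and the
    feature expectations of trajectories sampled under the current reward.
    """
    def avg_features(trajs):
        # average feature count per trajectory
        return np.mean([np.sum([feat(s) for s in traj], axis=0) for traj in trajs], axis=0)

    grad = avg_features(expert_trajs) - avg_features(sampled_trajs)
    return theta + lr * grad

# toy usage: states are integers 0..4, features are one-hot vectors
feat = lambda s: np.eye(5)[s]
theta = np.zeros(5)
expert_trajs = [[0, 1, 2], [0, 2, 2]]     # trajectories an expert might produce
sampled_trajs = [[0, 3, 4], [1, 3, 3]]    # trajectories from the current policy
theta = maxent_irl_step(theta, expert_trajs, sampled_trajs, feat)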

2.
Learning, interaction, and their combination are essential capabilities for building robust, autonomous agents. Reinforcement learning is an important part of agent learning, and it comprises single-agent and multi-agent reinforcement learning. This paper presents a comparative study of single-agent and multi-agent reinforcement learning, contrasting them in terms of basic concepts, environment frameworks, learning objectives, and learning algorithms. It points out their differences and connections and discusses several open problems they face.

3.
A Bayesian deep reinforcement learning algorithm for deep exploration problems   Cited by: 1 (self-citations: 0, other citations: 1)
Balancing exploration and exploitation is a difficult problem in reinforcement learning. Reinforcement learning methods proposed in recent years have focused mainly on combining deep learning techniques to improve generalization, while neglecting the exploration-exploitation dilemma. Traditional reinforcement learning methods can solve the exploration problem effectively, but only under a restrictive condition: the state space of the Markov decision process must be discrete and finite. This paper proposes using Bayesian methods to improve the exploration efficiency of deep reinforcement learning algorithms, extending the posterior computation of Bayesian linear regression to nonlinear models such as artificial neural networks. Combining Bootstrapped DQN with the proposed computation yields the Bayesian Bootstrapped Deep Q-Network (BBDQN) algorithm. Experiments in two environments show that BBDQN explores more efficiently than DQN and Bootstrapped DQN when facing deep exploration problems.
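
A minimal sketch of the underlying idea, assuming one treats the last layer of a Q-network as Bayesian linear regression and explores by Thompson sampling (an illustration of the general scheme, not the BBDQN implementation; the frozen random projection stands in for the learned network body, and all sizes are arbitrary):

import numpy as np

class BayesianLastLayerQ:
    """Q(s, a) = w_a . phi(s), with a Gaussian posterior over each w_a."""

    def __init__(self, state_dim, n_actions, feat_dim=32, noise_var=1.0, prior_var=10.0):
        rng = np.random.default_rng(0)
        self.proj = rng.normal(size=(state_dim, feat_dim))      # frozen stand-in for the network body
        self.noise_var = noise_var
        # per-action precision matrix and data vector for Bayesian linear regression
        self.precision = [np.eye(feat_dim) / prior_var for _ in range(n_actions)]
        self.xty = [np.zeros(feat_dim) for _ in range(n_actions)]

    def phi(self, s):
        return np.tanh(np.asarray(s) @ self.proj)

    def select_action(self, s):
        # Thompson sampling: draw one weight sample per action, act greedily on the samples
        f = self.phi(s)
        sampled_q = []
        for prec, xty in zip(self.precision, self.xty):
            cov = np.linalg.inv(prec)
            mean = cov @ xty / self.noise_var
            w = np.random.multivariate_normal(mean, cov)
            sampled_q.append(w @ f)
        return int(np.argmax(sampled_q))

    def update(self, s, a, target):
        # rank-one posterior update for the chosen action's regression
        f = self.phi(s)
        self.precision[a] += np.outer(f, f) / self.noise_var
        self.xty[a] += f * target

In Bootstrapped DQN, several such heads would be trained on bootstrapped subsets of the data; the sketch keeps a single head to stay short.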

4.
Individual learning in an environment where more than one agent exists is a challenging task. In this paper, a single learning agent situated in an environment containing multiple agents is modeled using reinforcement learning. The environment is non-stationary and only partially accessible from an agent's point of view. Therefore, the learning activities of an agent are influenced by the actions of other cooperative or competitive agents in the environment. A prey-hunter capture game with the above characteristics is defined and used in experiments to simulate the learning process of individual agents. Experimental results show that there are no strict rules for reinforcement learning. We suggest two new methods to improve the performance of agents; these methods decrease the number of states while keeping as much state as necessary.

5.
Lifelong reinforcement learning continually accumulates shared knowledge by estimating inter-task relationships from the training data of previously learned tasks, so that learning on new tasks can be accelerated through knowledge reuse. Existing methods employ a linear model over task features to represent the inter-task relationships, which allows a new task to be accomplished without any further learning. However, such methods may be ineffective in general scenarios, where a linear model must build inter-task relationships from low-dimensional task features to a high-dimensional policy parameter space. In addition, errors computed from the objective function can be distorted during lifelong reinforcement learning when errors in some policy parameters suppress others due to inter-parameter correlation. In this paper, we develop a policy generation network that models the inter-task relationships nonlinearly by mapping low-dimensional task features to high-dimensional policy parameters, in order to represent the shared knowledge more effectively. We also propose a novel objective function for lifelong reinforcement learning that mitigates this error-calculation deficiency by adding weight constraints on the errors. We empirically demonstrate that our method improves zero-shot policy performance across a variety of dynamical systems.
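
A minimal sketch of a policy generation network that nonlinearly maps low-dimensional task features to a high-dimensional policy parameter vector (an illustrative stand-in, not the authors' architecture; the layer sizes and the NumPy forward pass are assumptions):

import numpy as np

class PolicyGenerationNetwork:
    """Maps a low-dimensional task feature vector to a flat policy parameter vector."""

    def __init__(self, task_feat_dim, policy_param_dim, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(scale=0.1, size=(task_feat_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(scale=0.1, size=(hidden, policy_param_dim))
        self.b2 = np.zeros(policy_param_dim)

    def generate(self, task_features):
        h = np.tanh(task_features @ self.w1 + self.b1)   # nonlinear inter-task relationship
        return h @ self.w2 + self.b2                      # policy parameters for the new task

# zero-shot use: a new task's features produce policy parameters without further learning
pgn = PolicyGenerationNetwork(task_feat_dim=3, policy_param_dim=200)
theta_new_task = pgn.generate(np.array([0.2, -1.0, 0.5]))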

6.
In this paper, we investigate the use of hierarchical reinforcement learning (HRL) to speed up the acquisition of cooperative multi-agent tasks. We introduce a hierarchical multi-agent reinforcement learning (RL) framework, and propose a hierarchical multi-agent RL algorithm called Cooperative HRL. In this framework, agents are cooperative and homogeneous (they use the same task decomposition). Learning is decentralized, with each agent learning three interrelated skills: how to perform each individual subtask, the order in which to carry them out, and how to coordinate with other agents. We define cooperative subtasks to be those subtasks in which coordination among agents significantly improves the performance of the overall task. The levels of the hierarchy that include cooperative subtasks are called cooperation levels. A fundamental property of the proposed approach is that it allows agents to learn coordination faster by sharing information at the level of cooperative subtasks, rather than attempting to learn coordination at the level of primitive actions. We study the empirical performance of the Cooperative HRL algorithm using two testbeds: a simulated two-robot trash collection task, and a larger four-agent automated guided vehicle (AGV) scheduling problem. We compare the performance and speed of Cooperative HRL with other learning algorithms, as well as several well-known industrial AGV heuristics. We also address the issue of rational communication behavior among autonomous agents: the goal is for agents to learn both action and communication policies that together optimize the task given a communication cost. We extend the multi-agent HRL framework to include communication decisions and propose a cooperative multi-agent HRL algorithm called COM-Cooperative HRL. In this algorithm, we add a communication level to the hierarchical decomposition of the problem below each cooperation level. Before an agent makes a decision at a cooperative subtask, it decides whether it is worthwhile to perform a communication action; a communication action has a certain cost and provides the agent with the actions selected by the other agents at a cooperation level. We demonstrate the efficiency of the COM-Cooperative HRL algorithm, as well as the relation between the communication cost and the learned communication policy, using a multi-agent taxi problem.

7.
Reinforcement learning is about learning agent models that make the best sequential decisions in unknown environments. In an unknown environment, the agent needs to explore the environment while exploiting the collected information, which usually forms a sophisticated problem to solve. Derivative-free optimization, meanwhile, is capable of solving sophisticated problems. It commonly uses a sampling-and-updating framework to iteratively improve the solution, in which exploration and exploitation also need to be well balanced. Derivative-free optimization therefore deals with a core issue similar to that of reinforcement learning, and it has been introduced into reinforcement learning approaches under the names of learning classifier systems and neuroevolution/evolutionary reinforcement learning. Although such methods have been developed for decades, derivative-free reinforcement learning has recently been attracting increasing attention; however, a recent survey on this topic is still lacking. In this article, we summarize derivative-free reinforcement learning methods to date and organize them along aspects including parameter updating, model selection, exploration, and parallel/distributed methods. Moreover, we discuss some current limitations and possible future directions, hoping that this article will bring more attention to this topic and serve as a catalyst for developing novel and efficient approaches.

8.
Reinforcement learning is a research hotspot in machine learning. It studies how an agent interacts with its environment, makes sequential decisions, optimizes its policy, and maximizes cumulative return. Reinforcement learning has great research value and application potential and is a key step toward general artificial intelligence. This paper surveys the research progress and development trends of reinforcement learning algorithms and applications. It first introduces the basic principles of reinforcement learning, including Markov decision processes, value functions, and the exploration-exploitation problem. It then reviews classic reinforcement learning algorithms, including value-function-based methods, policy-search-based methods, and methods that combine value functions and policy search, and surveys frontier research, focusing on multi-agent reinforcement learning and meta reinforcement learning. Finally, it reviews successful applications of reinforcement learning in game playing, robot control, urban traffic, business, and other fields, and closes with a summary and outlook.
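
As a concrete reminder of the value-function and exploration-exploitation machinery reviewed above, here is a generic tabular Q-learning sketch with epsilon-greedy exploration; it assumes a Gym-style environment with reset()/step() and hashable states, and is not tied to any particular paper in this list.

import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning; `env` is assumed to follow the Gym-style reset()/step() interface."""
    q = defaultdict(float)                      # q[(state, action)] -> value

    def greedy(state):
        return max(range(env.action_space.n), key=lambda a: q[(state, a)])

    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # epsilon-greedy: explore with probability epsilon, otherwise exploit
            if random.random() < epsilon:
                action = env.action_space.sample()
            else:
                action = greedy(state)
            next_state, reward, done, _ = env.step(action)
            # one-step temporal-difference update toward the Bellman target
            target = reward + (0 if done else gamma * q[(next_state, greedy(next_state))])
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = next_state
    return q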

9.
Multi-agent reinforcement learning methods suffer from several deficiencies rooted in the large state space of multi-agent environments. This paper tackles two of them: slow learning, and low-quality decision-making in the early stages of learning. The proposed methods are applied to a grid-world soccer game. In the proposed approach, modular reinforcement learning is used to reduce the state space of the learning agents from exponential to linear in the number of agents. The modular model proposed here includes two new modules, a partial-module and a single-module, which are effective in increasing the speed of learning in the soccer game. We also apply instance-based learning concepts to choose proper actions in states that have not been experienced adequately during learning; the key idea is to use neighbouring states that have been explored sufficiently during the learning phase. Results of experiments in a grid-soccer game environment show that the proposed methods produce a higher average reward than the modular structure without them.

10.
A new multi-agent Q-learning algorithm   Cited by: 2 (self-citations: 0, other citations: 2)
郭锐  吴敏  彭军  彭姣  曹卫华 《自动化学报》2007,33(4):367-372
For multi-agent systems in non-deterministic Markov environments, a new multi-agent Q-learning algorithm is proposed. The algorithm learns the behavior policies of other agents from statistics of joint actions, and uses the full probability distribution over the agents' policy vectors to guarantee selection of the jointly optimal action. The convergence and learning performance of the algorithm are analyzed. Its application to the multi-agent RoboCup system further demonstrates its effectiveness and generalization ability.
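
A rough sketch of the joint-action-learning idea described above: the other agent's policy is estimated from joint-action statistics, and the agent acts greedily with respect to the expected Q-value under that empirical policy. This is an illustration of the general scheme, not the authors' exact algorithm; state/action encodings and hyperparameters are placeholders.

import numpy as np

class JointActionLearner:
    """Q-learning over joint actions, with an empirical model of the other agent's policy."""

    def __init__(self, n_states, n_my_actions, n_other_actions, alpha=0.1, gamma=0.95):
        self.q = np.zeros((n_states, n_my_actions, n_other_actions))
        self.counts = np.ones((n_states, n_other_actions))   # Laplace-smoothed joint-action statistics
        self.alpha, self.gamma = alpha, gamma

    def other_policy(self, s):
        return self.counts[s] / self.counts[s].sum()

    def select_action(self, s):
        # expected Q of each of my actions under the estimated policy of the other agent
        expected_q = self.q[s] @ self.other_policy(s)
        return int(np.argmax(expected_q))

    def update(self, s, my_a, other_a, reward, s_next):
        self.counts[s, other_a] += 1.0
        best_next = np.max(self.q[s_next] @ self.other_policy(s_next))
        td_target = reward + self.gamma * best_next
        self.q[s, my_a, other_a] += self.alpha * (td_target - self.q[s, my_a, other_a])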

11.
Document ranking has long been one of the key tasks in information retrieval (IR). Benefiting from the strong modeling power of Markov decision processes and the strong solving power of reinforcement learning, ranking models based on reinforcement learning have been proposed in recent years and have achieved good results. However, because the candidate documents contain a large number of irrelevant documents, trial-and-error reinforcement learning suffers from low efficiency. To address this problem, this paper proposes IR-DAGGER, a learning-to-rank algorithm based on imitation learning, which constructs an expert policy from document relevance labels and improves learning efficiency while maintaining ranking accuracy. To evaluate IR-DAGGER, experiments were conducted on the OHSUMED dataset for relevance ranking and the TREC dataset for diversified ranking; the results show that IR-DAGGER improves both the accuracy and the efficiency of document ranking on both datasets.
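
A schematic of the DAgger-style loop that IR-DAGGER builds on, with the expert policy derived from relevance labels: states visited under the learner's own ranking decisions are labeled by the expert and aggregated for retraining. The data structures and the train() callable below are assumed placeholders, not the paper's implementation.

import random

def dagger_for_ranking(queries, relevance, train, n_iters=5, seed=0):
    """Generic DAgger-style loop for ranking (a skeleton, not the IR-DAGGER implementation).

    queries: dict mapping query id -> list of candidate doc ids
    relevance: dict mapping (query id, doc id) -> graded relevance label (the "expert" signal)
    train: callable that fits a policy on a list of (state, expert_doc) pairs
    """
    rng = random.Random(seed)
    dataset = []
    policy = lambda ranked, remaining: rng.choice(remaining)   # initial (random) learner

    for _ in range(n_iters):
        for qid, docs in queries.items():
            ranked, remaining = [], list(docs)
            while remaining:
                # expert action: the most relevant of the remaining documents
                expert_doc = max(remaining, key=lambda d: relevance[(qid, d)])
                dataset.append(((qid, tuple(ranked), tuple(remaining)), expert_doc))
                # but the next state follows the learner's own choice (the DAgger trick)
                ranked.append(policy(ranked, remaining))
                remaining.remove(ranked[-1])
        policy = train(dataset)   # refit on the aggregated dataset
    return policy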

12.
Prediction of wind speed can provide a reference for the reliable utilization of wind energy. This study focuses on 1-hour, 1-step-ahead deterministic wind speed prediction with only wind speed as input. To account for the time-varying characteristics of wind speed series, a dynamic ensemble wind speed prediction model based on deep reinforcement learning is proposed; it combines ensemble learning, multi-objective optimization, and deep reinforcement learning to ensure effectiveness. In part A, a deep echo state network enhanced by real-time wavelet packet decomposition is used to construct base models with different vanishing moments; the variety of vanishing moments naturally guarantees the diversity of the base models. In part B, multi-objective optimization is adopted to determine the combination weights of the base models, minimizing the bias and variance of the ensemble model simultaneously to improve generalization ability. In part C, the non-dominated solutions of the combination weights are embedded into a deep reinforcement learning environment to achieve dynamic selection: by reasonably designing the reinforcement learning environment, a non-dominated solution can be selected dynamically for each prediction according to the time-varying characteristics of the wind speed. Four actual wind speed series are used to validate the proposed dynamic ensemble model. The results show that: (a) the proposed dynamic ensemble model is competitive for wind speed prediction, significantly outperforming five classic intelligent prediction models and six ensemble methods; (b) every part of the proposed model is indispensable for improving prediction accuracy.
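
Illustrative only (not the paper's code): the snippet below isolates the dynamic-selection step, where one weight vector is picked from the set of non-dominated combination weights at each prediction time and used to blend the base-model forecasts. The selector here is a trivial previous-step-error heuristic standing in for the deep reinforcement learning agent.

import numpy as np

def dynamic_ensemble_forecast(base_preds, actuals, pareto_weights):
    """base_preds: array (T, n_models) of base-model forecasts for T steps.
    actuals: array (T,) of realized wind speeds.
    pareto_weights: array (K, n_models), the non-dominated combination weight vectors.

    At each step the weight vector with the smallest error on the previous step is used;
    in the paper this choice is made by a deep reinforcement learning agent instead.
    """
    T = len(actuals)
    forecasts = np.empty(T)
    choice = 0                                       # start with an arbitrary weight vector
    for t in range(T):
        forecasts[t] = pareto_weights[choice] @ base_preds[t]
        # pick the weight vector that would have done best on the step just observed
        step_errors = np.abs(pareto_weights @ base_preds[t] - actuals[t])
        choice = int(np.argmin(step_errors))
    return forecasts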

13.
A multi-agent coordination method based on distributed reinforcement learning   Cited by: 2 (self-citations: 0, other citations: 2)
范波  潘泉  张洪才 《计算机仿真》2005,22(6):115-118
Research on multi-agent systems focuses on enabling functionally independent agents to accomplish complex control tasks or solve complex problems through negotiation, coordination, and cooperation. Based on the study and analysis of distributed reinforcement learning algorithms, a multi-agent coordination method is proposed. At the coordination level, the complex system task is decomposed and a coordinating agent uses centralized reinforcement learning to assign subtasks; at the behavior level, task agents receive their respective subtasks and use independent reinforcement learning to select effective actions, cooperating to complete the system task. Application and experiments in RoboCup simulated soccer show that the multi-agent coordination method based on distributed reinforcement learning outperforms conventional reinforcement learning.

14.
In this paper we introduce a new multi-agent reinforcement learning algorithm called exploring selfish reinforcement learning (ESRL). ESRL allows agents to reach optimal solutions in repeated non-zero-sum games with stochastic rewards by using coordinated exploration. First, two ESRL algorithms are presented, for common-interest and conflicting-interest games respectively. Both are based on the same idea: an agent explores by temporarily excluding some of the local actions from its private action space, giving the team of agents the opportunity to look for better solutions in a reduced joint action space. In a later stage these two algorithms are combined into one generic algorithm that does not assume the type of game is known in advance. ESRL is able to find the Pareto-optimal solution in common-interest games without communication; in conflicting-interest games it needs only limited communication to learn a fair periodical policy, resulting in a good overall policy. Importantly, ESRL agents are independent in the sense that they base their decisions only on their own action choices and rewards, they are flexible in learning different solution concepts, and they can handle stochastic, possibly delayed rewards and asynchronous action selection. A real-life experiment, adaptive load balancing of parallel applications, is also included.
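
The coordinated-exploration idea of temporarily excluding local actions can be sketched roughly as follows, for a single agent in a repeated game; this is an illustrative fragment, not the published ESRL algorithm, and the exclusion schedule and value estimates are assumptions.

import random

class ESRLStyleAgent:
    """Sketch of exploring selfish RL: keep running value estimates per local action and
    periodically exclude the action the agent has settled on, so the team of agents is
    pushed to search a different, reduced part of the joint action space."""

    def __init__(self, n_actions, exclude_every=200):
        self.values = [0.0] * n_actions
        self.counts = [0] * n_actions
        self.active = set(range(n_actions))       # private (possibly reduced) action space
        self.exclude_every, self.t = exclude_every, 0

    def act(self):
        return random.choice(sorted(self.active))  # explore uniformly within the active set

    def observe(self, action, reward):
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]
        self.t += 1
        if self.t % self.exclude_every == 0 and len(self.active) > 1:
            # exclusion phase: temporarily remove the currently preferred action
            best = max(self.active, key=lambda a: self.values[a])
            self.active.discard(best)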

15.
Reinforcement learning has shown good performance in domains such as game playing and system control, and how to learn new tasks quickly from a small number of samples is an urgent problem in reinforcement learning. An effective current solution is to apply meta-learning to reinforcement learning, and the resulting meta reinforcement learning has become a growing research hotspot in the field. To help later researchers quickly gain a comprehensive view of the area, this paper organizes recent meta reinforcement learning work into five categories: recurrent-network-based, context-based, gradient-based, hierarchy-based, and offline meta reinforcement learning. The five categories of methods are compared and analyzed, the basic theory and challenges of meta reinforcement learning are briefly described, and, based on the current state of research, its future prospects are discussed.

16.
One of the main challenges in Grid computing is the efficient allocation of resources (CPU hours, network bandwidth, etc.) to the tasks submitted by users. Due to the lack of centralized control and the dynamic, stochastic nature of resource availability, any successful allocation mechanism should be highly distributed and robust to changes in the Grid environment. Moreover, it is desirable to have an allocation mechanism that does not rely on the availability of coherent global information. In this paper we examine a simple algorithm for distributed resource allocation in a simplified Grid-like environment that meets the above requirements. Our system consists of a large number of heterogeneous reinforcement learning agents that share common resources for their computational needs. There is no explicit communication or interaction between the agents: the only information an agent receives is the expected response time of a job it submitted to a particular resource, which serves as the reinforcement signal for that agent. The results of our experiments suggest that even simple reinforcement learning can indeed be used to achieve load-balanced resource allocation in a large-scale heterogeneous system.
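
A minimal sketch of the kind of learner described above (an illustration, not the authors' system): each agent keeps a value estimate per shared resource based only on the response times of its own jobs and chooses resources epsilon-greedily.

import random

class ResourceSelectionAgent:
    """Selects one of several shared resources using only observed job response times."""

    def __init__(self, n_resources, epsilon=0.1, alpha=0.2):
        self.values = [0.0] * n_resources     # estimated (negative) response time per resource
        self.epsilon, self.alpha = epsilon, alpha

    def choose_resource(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda r: self.values[r])

    def observe_response(self, resource, response_time):
        # faster responses -> higher reward; no other global information is used
        reward = -response_time
        self.values[resource] += self.alpha * (reward - self.values[resource])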

17.
Research on multi-agent cooperation based on reinforcement learning   Cited by: 2 (self-citations: 0, other citations: 2)
Reinforcement learning provides a robust learning method for cooperation among multiple agents. This paper first introduces the principles and components of reinforcement learning, then describes the multi-agent Markov decision process (MMDP) and gives an agent reinforcement learning model. On this basis, two reinforcement learning schemes used in multi-agent cooperation, IL (independent learning) and JAL (joint-action learning), are compared. Finally, several coordination mechanisms commonly used in cooperative multi-agent systems when multiple optimal policies exist are analyzed.

18.
郭锐  彭军  吴敏 《计算机工程与应用》2005,41(13):36-38,146
Reinforcement learning is a branch of machine learning that improves a policy through interaction with the environment; its online and adaptive nature makes it a powerful tool for policy optimization problems. Multi-agent systems are a research hotspot in artificial intelligence, and research on multi-agent learning must be built on a model of the system environment. Because multiple agents are present and influence one another, a multi-agent system is highly complex, and its environment is a non-deterministic Markov model; directly applying reinforcement learning techniques based on the standard Markov model to multi-agent systems is therefore inappropriate. Based on an independent learning mechanism for each agent, this paper proposes an improved multi-agent Q-learning algorithm suitable for non-deterministic Markov environments and studies its application in the multi-agent RoboCup system. Experiments demonstrate the effectiveness and generalization ability of the technique. Finally, directions and further work for multi-agent reinforcement learning research are briefly outlined.

19.
In this paper, we propose a set of algorithms for designing signal timing plans via deep reinforcement learning. The core idea of this approach is to set up a deep neural network (DNN) that learns the Q-function of reinforcement learning from sampled traffic state/control inputs and the corresponding traffic system performance output. Based on the obtained DNN, appropriate signal timing policies can be found by implicitly modeling the control actions and the change of system states. We explain the possible benefits and implementation tricks of this new approach, and carefully discuss its relationships with some existing approaches.
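
As a rough, generic illustration of learning a Q-function over traffic states and candidate signal phases with a DNN (not the paper's implementation; the network size, state encoding, and the simplified replay/no-target-network training step are assumptions), a PyTorch sketch might look like this:

import random
from collections import deque
import torch
import torch.nn as nn

class SignalTimingDQN:
    """Q-network mapping a traffic state vector to a Q-value per candidate signal phase."""

    def __init__(self, state_dim, n_phases, gamma=0.95, lr=1e-3):
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 64), nn.ReLU(),
                                 nn.Linear(64, n_phases))
        self.opt = torch.optim.Adam(self.net.parameters(), lr=lr)
        self.gamma = gamma
        self.replay = deque(maxlen=10000)    # (state, phase, performance reward, next state)

    def select_phase(self, state, epsilon=0.1):
        if random.random() < epsilon:
            return random.randrange(self.net[-1].out_features)
        with torch.no_grad():
            return int(self.net(torch.as_tensor(state, dtype=torch.float32)).argmax())

    def train_step(self, batch_size=32):
        if len(self.replay) < batch_size:
            return
        batch = random.sample(self.replay, batch_size)
        s, a, r, s2 = (torch.as_tensor(x, dtype=torch.float32) for x in zip(*batch))
        q = self.net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + self.gamma * self.net(s2).max(dim=1).values
        loss = nn.functional.mse_loss(q, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()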

20.
To address the problems that the global credit assignment mechanism in cooperative multi-agent reinforcement learning has difficulty capturing complex cooperative relationships among agents and cannot handle non-Markovian reward signals effectively, this paper proposes an enhanced global credit assignment mechanism for cooperative multi-agent reinforcement learning. First, a new global credit assignment structure based on reward highway connections is designed, so that when making decisions an agent can take into account both its assigned local reward signal and the team's global reward signal. Second, a value-function estimation method that can accommodate non-Markovian rewards is proposed by fusing multi-step reward signals. Experimental results on several complex scenarios of the StarCraft micromanagement benchmark show that the proposed method not only achieves state-of-the-art performance but also greatly improves sample efficiency.
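
The multi-step reward fusion mentioned above can be illustrated with a plain n-step return computation (a generic sketch, not the paper's value-function estimator):

def n_step_returns(rewards, values, gamma=0.99, n=5):
    """Blend n future reward signals with a bootstrapped value estimate.

    rewards: list of per-step (e.g. team) rewards r_t
    values: list of bootstrap values V(s_t), with len(values) == len(rewards) + 1
    Returns the n-step target for every time step.
    """
    T = len(rewards)
    targets = []
    for t in range(T):
        horizon = min(n, T - t)
        g = sum((gamma ** k) * rewards[t + k] for k in range(horizon))
        g += (gamma ** horizon) * values[t + horizon]     # bootstrap after n steps
        targets.append(g)
    return targets

# toy usage
print(n_step_returns([1, 0, 0, 2], [0.5, 0.4, 0.3, 0.2, 0.0], n=2))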
