Similar Documents
20 similar documents found.
1.
In this paper, we investigate the use of hierarchical reinforcement learning (HRL) to speed up the acquisition of cooperative multi-agent tasks. We introduce a hierarchical multi-agent reinforcement learning (RL) framework, and propose a hierarchical multi-agent RL algorithm called Cooperative HRL. In this framework, agents are cooperative and homogeneous (use the same task decomposition). Learning is decentralized, with each agent learning three interrelated skills: how to perform each individual subtask, the order in which to carry them out, and how to coordinate with other agents. We define cooperative subtasks to be those subtasks in which coordination among agents significantly improves the performance of the overall task. Those levels of the hierarchy which include cooperative subtasks are called cooperation levels. A fundamental property of the proposed approach is that it allows agents to learn coordination faster by sharing information at the level of cooperative subtasks, rather than attempting to learn coordination at the level of primitive actions. We study the empirical performance of the Cooperative HRL algorithm using two testbeds: a simulated two-robot trash collection task, and a larger four-agent automated guided vehicle (AGV) scheduling problem. We compare the performance and speed of Cooperative HRL with other learning algorithms, as well as several well-known industrial AGV heuristics. We also address the issue of rational communication behavior among autonomous agents in this paper. The goal is for agents to learn both action and communication policies that together optimize the task given a communication cost. We extend the multi-agent HRL framework to include communication decisions and propose a cooperative multi-agent HRL algorithm called COM-Cooperative HRL. In this algorithm, we add a communication level to the hierarchical decomposition of the problem below each cooperation level. Before an agent makes a decision at a cooperative subtask, it decides if it is worthwhile to perform a communication action. A communication action has a certain cost and provides the agent with the actions selected by the other agents at a cooperation level. We demonstrate the efficiency of the COM-Cooperative HRL algorithm as well as the relation between the communication cost and the learned communication policy using a multi-agent taxi problem.
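The core idea, coordinating at the granularity of cooperative subtasks rather than primitive actions, can be sketched in a few lines. The following is a minimal sketch, not the authors' implementation: a hypothetical CoopAgent indexes its Q-values at the cooperation level by the subtasks the other agents are executing; the one-step update (the paper's setting is semi-Markov) and all names and hyperparameters are illustrative assumptions.

```python
import random
from collections import defaultdict

class CoopAgent:
    """One cooperative agent; Q-values at the cooperation level are indexed
    by (state, other agents' subtasks), never by primitive actions."""

    def __init__(self, subtasks, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.subtasks = subtasks
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.Q = defaultdict(float)  # key: (state, others' subtasks, my subtask)

    def choose_subtask(self, state, others_subtasks):
        # epsilon-greedy choice among high-level subtasks
        if random.random() < self.epsilon:
            return random.choice(self.subtasks)
        return max(self.subtasks,
                   key=lambda m: self.Q[(state, others_subtasks, m)])

    def update(self, state, others, subtask, reward, next_state, next_others):
        # simplified one-step update; the paper's setting is semi-Markov
        best_next = max(self.Q[(next_state, next_others, m)]
                        for m in self.subtasks)
        key = (state, others, subtask)
        self.Q[key] += self.alpha * (reward + self.gamma * best_next - self.Q[key])

# usage: a trash-collection-style agent choosing between two subtasks
agent = CoopAgent(subtasks=["collect", "deliver"])
m = agent.choose_subtask("room1", others_subtasks=("deliver",))
agent.update("room1", ("deliver",), m, 1.0, "room2", ("collect",))
```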

2.
To handle the continually changing demands on agents in continuous, dynamic environments, we study the multi-robot box-pushing problem using reinforcement learning, and obtain a method by which an agent can complete cooperative tasks without any information about the other agents. Reinforcement learning is applicable to both cooperative and non-cooperative settings, and in the presence of noise and communication difficulties it offers advantages unmatched by other artificial intelligence methods.

3.
We describe a relational learning by observation framework that automatically creates cognitive agent programs that model expert task performance in complex dynamic domains. Our framework uses observed behavior and goal annotations of an expert as the primary input, interprets them in the context of background knowledge, and returns an agent program that behaves similarly to the expert. We map the problem of creating an agent program onto multiple learning problems that can be represented in a "supervised concept learning" setting. The acquired procedural knowledge is partitioned into a hierarchy of goals and represented with first-order rules. Using an inductive logic programming (ILP) learning component allows our framework to naturally combine structured behavior observations, parametric and hierarchical goal annotations, and complex background knowledge. To deal with the large domains we consider, we have developed an efficient mechanism for storing and retrieving structured behavior data. We have tested our approach using artificially created examples and behavior observation traces generated by AI agents. We evaluate the learned rules by comparing them to hand-coded rules.

4.
Transfer in variable-reward hierarchical reinforcement learning
Transfer learning seeks to leverage previously learned tasks to achieve faster learning in a new task. In this paper, we consider transfer learning in the context of related but distinct Reinforcement Learning (RL) problems. In particular, our RL problems are derived from Semi-Markov Decision Processes (SMDPs) that share the same transition dynamics but have different reward functions that are linear in a set of reward features. We formally define the transfer learning problem in the context of RL as learning an efficient algorithm to solve any SMDP drawn from a fixed distribution after experiencing a finite number of them. Furthermore, we introduce an online algorithm to solve this problem, Variable-Reward Reinforcement Learning (VRRL), that compactly stores the optimal value functions for several SMDPs, and uses them to optimally initialize the value function for a new SMDP. We generalize our method to a hierarchical RL setting where the different SMDPs share the same task hierarchy. Our experimental results in a simplified real-time strategy domain show that significant transfer learning occurs in both flat and hierarchical settings. Transfer is especially effective in the hierarchical setting where the overall value functions are decomposed into subtask value functions which are more widely amenable to transfer across different SMDPs.
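The reuse mechanism can be illustrated with a short sketch. Because rewards are linear in features, r = w . phi(s), the return of a fixed policy decomposes as V_pi(s) = w . Phi_pi(s), where Phi_pi(s) collects expected discounted reward-feature sums; storing Phi_pi for solved SMDPs lets a new task's value function be initialized by maximizing over the stored policies under its own weight vector. The sketch below assumes tabular states, and all shapes and names are illustrative, not the paper's code.

```python
import numpy as np

class VariableRewardLibrary:
    """Stores, per solved SMDP policy, a table Phi[s] of expected discounted
    reward-feature sums; V_pi(s) = w . Phi_pi(s) for any reward weights w."""

    def __init__(self):
        self.policies = []                    # list of (S, F) arrays

    def add(self, phi_table):
        self.policies.append(np.asarray(phi_table))

    def init_values(self, w_new):
        # initialize the new task's value function with the best stored
        # policy per state under the new weights; learning improves from here
        per_policy = np.stack([phi @ w_new for phi in self.policies])  # (P, S)
        return per_policy.max(axis=0)

# usage: two stored policies over 4 states with 3 reward features
lib = VariableRewardLibrary()
lib.add(np.random.rand(4, 3))
lib.add(np.random.rand(4, 3))
v0 = lib.init_values(np.array([1.0, 0.0, -0.5]))   # new task's reward weights
print(v0)
```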

5.
In this paper, we investigate reinforcement learning (RL) in multi-agent systems (MAS) from an evolutionary dynamical perspective. Typical for a MAS is that the environment is not stationary and the Markov property does not hold, which requires agents to be adaptive. RL is a natural approach to model the learning of individual agents. These learning algorithms are, however, known to be sensitive to the correct choice of parameter settings even in single-agent systems, and the issue is more pronounced in the MAS case due to the changing interactions among the agents. It is largely an open question for a developer of MAS how to design the individual agents such that, through learning, the agents as a collective arrive at good solutions. We show that modeling RL in MAS from an evolutionary game-theoretic point of view is a new and potentially successful way to guide learning agents to the most suitable solution for the task at hand. We show how evolutionary dynamics (ED) from evolutionary game theory can help the developer of a MAS make good choices of parameter settings for the RL algorithms used: the ED essentially predict the equilibrium outcomes of a MAS in which the agents use individual RL algorithms. More specifically, we show how the ED predict the learning trajectories of Q-learners for iterated games. Moreover, we apply our results to (an extension of) the COllective INtelligence framework (COIN), a proven engineering approach for learning cooperative tasks in MAS in which the utilities of the agents are re-engineered to contribute to the global utility. We show how the improved results for MAS RL in COIN, and in a developed extension, are predicted by the ED.
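As a concrete illustration of the ED/RL link, the sketch below integrates selection-mutation dynamics of the kind that have been derived for Boltzmann Q-learners in iterated 2x2 games: a replicator term for exploitation plus an entropy-driven term for exploration. The payoff matrix, temperature tau, and step size are illustrative assumptions, not values from the paper.

```python
import numpy as np

A = np.array([[3.0, 0.0],    # row player's payoffs in a Prisoner's Dilemma,
              [5.0, 1.0]])   # actions (Cooperate, Defect); columns get A.T

def qdot(x, y, payoff, tau=0.1):
    """Time derivative of a Boltzmann Q-learner's mixed strategy x:
    a replicator (selection) term plus an entropy-driven exploration term."""
    f = payoff @ y                                       # action fitnesses
    selection = x * (f - x @ f) / tau
    exploration = x * ((x * np.log(x)).sum() - np.log(x))
    return selection + exploration

x, y, dt = np.array([0.6, 0.4]), np.array([0.5, 0.5]), 0.01
for _ in range(5000):                                    # forward Euler
    x_next = x + dt * qdot(x, y, A)
    y_next = y + dt * qdot(y, x, A)                      # symmetric game
    x = np.clip(x_next, 1e-9, 1.0); x /= x.sum()
    y = np.clip(y_next, 1e-9, 1.0); y /= y.sum()
print(x, y)   # probability mass moves toward Defect, as the ED predict
```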

6.
Ho, F., & Kamel, M. Machine Learning, 1998, 33(2-3): 155-177
A central issue in the design of cooperative multiagent systems is how to coordinate the behavior of the agents to meet the goals of the designer. Traditionally, this has been accomplished by hand-coding the coordination strategies, but this task is complex due to the interactions that can take place among agents. Recent work in the area has focused on how strategies can be learned, yet many of these systems suffer from convergence, complexity, and performance problems. This paper presents a new approach for learning multiagent coordination strategies that addresses these issues. The effectiveness of the technique is demonstrated using a synthetic domain and the predator-and-prey pursuit problem.

7.
Cooperative Multi-Agent Learning: The State of the Art
Cooperative multi-agent systems (MAS) are ones in which several agents attempt, through their interaction, to jointly solve tasks or to maximize utility. Due to the interactions among the agents, multi-agent problem complexity can rise rapidly with the number of agents or their behavioral sophistication. The challenge this presents to the task of programming solutions to MAS problems has spawned increasing interest in machine learning techniques to automate the search and optimization process. We provide a broad survey of the cooperative multi-agent learning literature. Previous surveys of this area have largely focused on issues common to specific subareas (for example, reinforcement learning (RL) or robotics). In this survey we attempt to draw from multi-agent learning work in a spectrum of areas, including RL, evolutionary computation, game theory, complex systems, agent modeling, and robotics. We find that this broad view leads to a division of the work into two categories, each with its own special issues: applying a single learner to discover joint solutions to multi-agent problems (team learning), or using multiple simultaneous learners, often one per agent (concurrent learning). Additionally, we discuss direct and indirect communication in connection with learning, plus open issues in task decomposition, scalability, and adaptive dynamics. We conclude with a presentation of multi-agent learning problem domains, and a list of multi-agent learning resources.

8.
朱鹏飞, 张琬迎, 王煜, 胡清华. 软件学报, 2022, 33(4): 1156-1169
Deep neural networks keep achieving performance breakthroughs on classification tasks, but when faced with samples of unknown classes at test time they wrongly produce a known-class prediction. Open-set recognition aims to solve this problem: the model must not only classify the known classes accurately but also reliably identify samples of unknown classes. Although existing methods achieve good results, they do not analyze the factors that influence open-set recognition and therefore mostly design models heuristically under some assumption, which makes it hard to guarantee adaptability to real-world scenarios. This paper analyzes what existing methods have in common and, by designing a new decision-variable experiment, finds that the model's representation-learning ability on the known classes is one of the key factors. Based on this conclusion, an open-set recognition method that enhances the model's representation-learning ability is proposed. First, given the strong representation-learning ability already demonstrated by contrastive learning and the label information available in the open-set recognition task, supervised contrastive learning is introduced to improve the model's ability to model the known classes. Second, since inter-class correlations are representations at the class level and classes often exhibit a hierarchical structure, a multi-granularity class-correlation loss function is designed: by building a hierarchy in the label semantic space and measuring multi-granularity class correlations, it constrains the model to learn the relationships among different known classes and further improves its representation-learning ability. Finally, experiments on several standard datasets verify the effectiveness of the proposed method on the open-set recognition task.
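The supervised contrastive component can be sketched concisely. Below is a minimal PyTorch implementation of the standard supervised contrastive (SupCon) loss used to strengthen known-class representations; the batch, temperature, and dimensions are illustrative, and the paper's multi-granularity class-correlation loss is not reproduced here.

```python
import torch
import torch.nn.functional as F

def sup_con_loss(features, labels, tau=0.07):
    """features: (B, D) L2-normalized embeddings; labels: (B,) class ids."""
    sim = features @ features.t() / tau                  # pairwise similarities
    self_mask = torch.eye(len(labels), dtype=torch.bool)
    sim = sim.masked_fill(self_mask, -1e9)               # exclude self-pairs
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)  # row-wise log-softmax
    pos = ((labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask).float()
    # negative mean log-likelihood of each anchor's positives
    per_anchor = (log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)
    return -per_anchor.mean()

# usage on a toy batch of normalized embeddings
f = F.normalize(torch.randn(8, 32), dim=1)
y = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
print(sup_con_loss(f, y))
```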

9.
Learning the parameters of a probabilistic model is a necessary step in machine learning tasks. We present a method to improve learning from small datasets by using monotonicity conditions. Monotonicity simplifies learning, and it is often required by users. We present an algorithm for Bayesian network parameter learning. The algorithm and the monotonicity conditions are described, and it is shown that with the monotonicity conditions we can better fit the underlying data. Our algorithm is tested on artificial and empirical datasets. We use different methods satisfying the monotonicity conditions: the proposed gradient descent, isotonic regression EM, and non-linear optimization. We also provide results of unrestricted EM and gradient descent methods. Learned models are compared with respect to their ability to fit data in terms of log-likelihood and their fit to the parameters of the generating model. Our proposed method outperforms the other methods on small sets, and provides better or comparable results on larger sets.
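As an illustration of how a monotonicity condition can be imposed on parameters estimated from little data, the sketch below projects noisy conditional-probability estimates onto a non-decreasing sequence with isotonic regression (scikit-learn's IsotonicRegression); the toy conditional probability table and sample counts are assumptions, not data from the paper.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# P(Y=1 | X=x) estimated from few samples: noisy and not monotone in x
x_levels = np.arange(5)
raw_p = np.array([0.10, 0.35, 0.30, 0.60, 0.55])
counts = np.array([4, 6, 3, 5, 4])          # sample size per parent level

iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True)
smooth_p = iso.fit_transform(x_levels, raw_p, sample_weight=counts)
print(smooth_p)  # non-decreasing in x, weighted by how much data each cell has
```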

10.
Communities of Learning (CoL) have been suggested to facilitate the learning process among participants of online trainings. Yet previous studies often detached participants from the social context in which learning took place. The present study addresses this shortcoming by providing empirical evidence from 25 CoL of a global organization, where 249 staff members from different hierarchical positions engaged in collaborative learning via asynchronous discussion forums. We conduct a longitudinal study of the type of communication within these CoL, as well as participants' network positions, in order to investigate the research question: what is the impact of individuals' hierarchical positions on the type of communication within CoL? Our results indicate that the higher participants' hierarchical position, the greater their amount of social and cognitive communication, which in turn was also positively related to their network position within the CoL. We also identified a sub-group of Stars who outperformed their colleagues and were at the center of the CoL, irrespective of their hierarchical positions. Consequently, we propose design and facilitation strategies to practitioners and organizers of future CoL that can foster the learning processes and outcomes of all participants, and we consider future research avenues that could be explored further.

11.
We present a generative probabilistic model for the unsupervised learning of hierarchical natural language syntactic structure. Unlike most previous work, we do not learn a context-free grammar, but rather induce a distributional model of constituents which explicitly relates constituent yields and their linear contexts. Parameter search with EM produces higher quality analyses for human language data than those previously exhibited by unsupervised systems, giving the best published unsupervised parsing results on the ATIS corpus. Experiments on Penn treebank sentences of comparable length show an even higher constituent F1 of 71% on non-trivial brackets. We compare distributionally induced and actual part-of-speech tags as input data, and examine extensions to the basic model. We discuss errors made by the system, compare the system to previous models, and discuss upper bounds, lower bounds, and stability for this task.

12.
Ben-David, S., & Eiron, N. Machine Learning, 1998, 33(1): 87-104
We study the self-directed (SD) learning model. In this model a learner chooses examples, guesses their classification, and receives immediate feedback indicating the correctness of its guesses. We consider several fundamental questions concerning this model: the parameters of a task that determine the cost of learning, the computational complexity of a student, and the relationship between this model and the teacher-directed (TD) learning model. We answer the open problem of relating the cost of self-directed learning to the VC-dimension by showing that no such relation exists. Furthermore, we refute the conjecture that for the intersection-closed case, the cost of self-directed learning is bounded by the VC-dimension. We also show that the cost of SD learning may be arbitrarily higher than that of TD learning. Finally, we discuss the number of queries needed for learning in this model and its relationship to the number of mistakes the student incurs. We prove a trade-off formula showing that an algorithm that makes fewer queries throughout its learning process necessarily suffers a higher number of mistakes.

13.
Research on Multi-Agent Cooperation Based on Reinforcement Learning
Reinforcement learning provides a robust learning method for cooperation among multiple agents. This paper first introduces the principles and components of reinforcement learning, then describes the multi-agent Markov decision process (MMDP) and presents a reinforcement learning model for agents. On this basis, the two reinforcement learning schemes used in multi-agent cooperation, IL (independent learning) and JAL (joint-action learning), are compared. Finally, the coordination mechanisms commonly used in cooperative multi-agent systems when multiple optimal policies exist are analyzed.
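The IL/JAL contrast amounts to a difference in what the Q-table is indexed by, as the minimal sketch below shows for a two-agent setting; class names and hyperparameters are illustrative, and exploration is omitted.

```python
from collections import defaultdict

class IndependentLearner:          # IL: Q over own actions only
    def __init__(self, alpha=0.1, gamma=0.9):
        self.Q = defaultdict(float)
        self.alpha, self.gamma = alpha, gamma

    def update(self, s, a, r, s_next, actions):
        best = max(self.Q[(s_next, b)] for b in actions)
        self.Q[(s, a)] += self.alpha * (r + self.gamma * best - self.Q[(s, a)])

class JointActionLearner:          # JAL: Q over joint (own, other) actions
    def __init__(self, alpha=0.1, gamma=0.9):
        self.Q = defaultdict(float)
        self.alpha, self.gamma = alpha, gamma

    def update(self, s, a, a_other, r, s_next, actions):
        # requires observing the teammate's action a_other
        best = max(self.Q[(s_next, b, c)] for b in actions for c in actions)
        key = (s, a, a_other)
        self.Q[key] += self.alpha * (r + self.gamma * best - self.Q[key])

# usage on one transition of a toy two-action task
il = IndependentLearner()
il.update("s0", "push", 1.0, "s1", ["push", "wait"])
jal = JointActionLearner()
jal.update("s0", "push", "wait", 1.0, "s1", ["push", "wait"])
```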

14.
Learning to Perceive and Act by Trial and Error
This article considers adaptive control architectures that integrate active sensory-motor systems with decision systems based on reinforcement learning. One unavoidable consequence of active perception is that the agent's internal representation often confounds external world states. We call this phenomenon perceptual aliasing, and show that it destabilizes existing reinforcement learning algorithms with respect to the optimal decision policy. We then describe a new decision system that overcomes these difficulties for a restricted class of decision problems. The system incorporates a perceptual subcycle within the overall decision cycle and uses a modified learning algorithm to suppress the effects of perceptual aliasing. The result is a control architecture that learns not only how to solve a task but also where to focus its visual attention in order to collect the necessary sensory information.
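Perceptual aliasing is easy to reproduce in a toy setting. The sketch below (an illustration only, not the article's system) lets two hidden world states that demand opposite actions share one observation; ordinary Q-learning over observations then cannot settle on a policy.

```python
import random

# Two world states demand different actions but yield the same observation,
# so a Q-table indexed by observations cannot represent the optimal policy.
rewards = {("sA", "left"): 1.0, ("sA", "right"): -1.0,
           ("sB", "left"): -1.0, ("sB", "right"): 1.0}

Q = {"left": 0.0, "right": 0.0}            # indexed by the single observation
alpha = 0.1
for _ in range(2000):
    s = random.choice(["sA", "sB"])        # hidden world state
    a = (max(Q, key=Q.get) if random.random() > 0.1
         else random.choice(list(Q)))      # epsilon-greedy on observation
    Q[a] += alpha * (rewards[(s, a)] - Q[a])
print(Q)  # both values hover near 0: neither action can be preferred
```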

15.
Advanced Robotics, 2012, 26(17): 1967-1993
A current trend in robotics is to define robot motions so that they can be easily adapted to situations beyond those for which the motion was originally designed. In this work, we show how the challenging task of playing minigolf can be efficiently tackled by first learning a basic hitting-motion model and then learning to adapt it to different situations. We model the basic hitting motion with an autonomous dynamical system and solve the problem of learning the parameters of the model from a set of demonstrations through constrained optimization. To hit the ball with the appropriate hitting angle and speed, a nonlinear model of the hitting parameters is estimated from a set of examples of good hitting parameters. We compare two statistical methods, Gaussian Process Regression and Gaussian Mixture Regression, for inferring the hitting parameters in the minigolf task. We demonstrate the generalization ability of the model in various situations, and validate our approach on the 7-degrees-of-freedom (DoF) Barrett WAM arm and the 6-DoF Katana arm in both simulated and real environments.
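The hitting-parameter inference step can be sketched with off-the-shelf Gaussian Process Regression; the training pairs below are synthetic placeholders standing in for the demonstrated examples of good hitting parameters, and the input/output layout is an assumption.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(30, 2))           # e.g. ball position on the green
Y = np.column_stack([                          # demonstrated (angle, speed)
    0.8 * X[:, 0] + 0.1 * rng.standard_normal(30),
    1.5 * X[:, 1] + 0.1 * rng.standard_normal(30),
])

gpr = GaussianProcessRegressor(kernel=RBF(0.3) + WhiteKernel(0.01))
gpr.fit(X, Y)
angle_speed, std = gpr.predict([[0.4, 0.7]], return_std=True)
print(angle_speed, std)   # predicted hitting parameters with uncertainty
```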

16.
Multiagent learning provides a promising paradigm to study how autonomous agents learn to achieve coordinated behavior in multiagent systems. In multiagent learning, the concurrency of multiple distributed learning processes makes the environment nonstationary for each individual learner. Developing an efficient learning approach to coordinate agents' behavior in this dynamic environment is a difficult problem especially when agents do not know the domain structure and at the same time have only local observability of the environment. In this paper, a coordinated learning approach is proposed to enable agents to learn where and how to coordinate their behavior in loosely coupled multiagent systems where the sparse interactions of agents constrain coordination to some specific parts of the environment. In the proposed approach, an agent first collects statistical information to detect those states where coordination is most necessary by considering not only the potential contributions from all the domain states but also the direct causes of the miscoordination in a conflicting state. The agent then learns to coordinate its behavior with others through its local observability of the environment according to different scenarios of state transitions. To handle the uncertainties caused by agents' local observability, an optimistic estimation mechanism is introduced to guide the learning process of the agents. Empirical studies show that the proposed approach can achieve a better performance by improving the average agent reward compared with an uncoordinated learning approach and by reducing the computational complexity significantly compared with a centralized learning approach.
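The "learn where to coordinate" step can be caricatured as bookkeeping over per-state reward statistics, as in the hedged sketch below; the paper's actual statistics are richer (it also considers the direct causes of miscoordination), and the class name, threshold, and bookkeeping here are all assumptions.

```python
from collections import defaultdict

class CoordinationDetector:
    """Flags states where acting as if alone is much worse than average
    performance when another agent is nearby: candidate coordination states."""

    def __init__(self, threshold=0.5):
        self.stats = defaultdict(lambda: {"alone": [0.0, 0], "joint": [0.0, 0]})
        self.threshold = threshold

    def record(self, state, reward, other_nearby):
        key = "joint" if other_nearby else "alone"
        s = self.stats[state][key]
        s[0] += reward
        s[1] += 1

    def coordination_states(self):
        flagged = []
        for state, s in self.stats.items():
            if s["alone"][1] and s["joint"][1]:
                gap = (s["alone"][0] / s["alone"][1]
                       - s["joint"][0] / s["joint"][1])
                if gap > self.threshold:   # ignoring the other agent hurts here
                    flagged.append(state)
        return flagged
```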

17.
Action coordination in multiagent systems is a difficult task, especially in dynamic environments. If the environment possesses cooperation, least-communication, incompatibility, and local-information constraints, the task becomes even more difficult. Learning compatible action sequences to achieve a designated goal under these constraints is studied in this work. Two new multiagent learning algorithms, called QACE and NoCommQACE, are developed. To improve the performance of the QACE and NoCommQACE algorithms, four heuristics (state iteration, means-ends analysis, decreasing reward, and do-nothing) are developed. The proposed algorithms are tested on the blocks-world domain and the performance results are reported.

18.
Objective: Cross-media retrieval aims to use data of any medium as a query to retrieve related data of other media, enabling semantic interoperation and cross-retrieval between images, text, and other media. However, the "heterogeneity gap" makes the feature representations of different media inconsistent, and semantic associations are hard to establish, so cross-media retrieval faces great challenges. Meanwhile, data of different media that describe the same semantics are semantically consistent, and the data internally contain rich fine-grained information, which provides an important basis for cross-media correlation learning. Existing methods consider only the pairwise correlations between different media and ignore the contextual information among fine-grained parts within the data, so they cannot fully exploit cross-media correlations. To address these problems, this paper proposes a cross-media retrieval method based on a hierarchical recurrent attention network. Method: We first propose a two-level, intra-media and inter-media, recurrent neural network, in which the bottom-level networks separately model the fine-grained contextual information within each medium, and the top-level network mines the contextual correlations between different media through parameter sharing. We then propose an attention-based cross-media joint loss function, which mines more precise fine-grained cross-media correlations by learning inter-media joint attention, and uses semantic category information to enhance semantic discrimination during correlation learning, thereby improving cross-media retrieval accuracy. Results: On two widely used cross-media datasets, we compare against ten existing methods, with mean average precision (MAP) as the evaluation metric. The proposed method achieves MAP scores of 0.469 and 0.575 on the two datasets, exceeding all compared methods. Conclusion: By mining the fine-grained information of images and text, the proposed hierarchical recurrent attention network fully learns the precise cross-media correlations between images and text and effectively improves cross-media retrieval accuracy.
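A minimal PyTorch sketch of the two-level structure described above: bottom GRUs model fine-grained context within each medium, a top GRU with shared weights is applied to both media, and attention pools each sequence into a common embedding space for retrieval. Dimensions, names, and the omission of the joint loss function are all illustrative assumptions, not the authors' released model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierRecurrentAttention(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=300, hid=256):
        super().__init__()
        self.img_rnn = nn.GRU(img_dim, hid, batch_first=True)  # intra-media
        self.txt_rnn = nn.GRU(txt_dim, hid, batch_first=True)  # intra-media
        self.top_rnn = nn.GRU(hid, hid, batch_first=True)      # shared, inter-media
        self.attn = nn.Linear(hid, 1)                          # attention scorer

    def pool(self, h):                        # attention pooling over time
        w = F.softmax(self.attn(h), dim=1)    # (B, T, 1)
        return (w * h).sum(dim=1)             # (B, hid)

    def forward(self, img_regions, txt_words):
        hi, _ = self.img_rnn(img_regions)     # (B, Ti, hid)
        ht, _ = self.txt_rnn(txt_words)       # (B, Tt, hid)
        hi, _ = self.top_rnn(hi)              # same top-level weights are
        ht, _ = self.top_rnn(ht)              # applied to both media
        vi = F.normalize(self.pool(hi), dim=1)
        vt = F.normalize(self.pool(ht), dim=1)
        return vi, vt                         # cosine similarity = relevance

model = HierRecurrentAttention()
vi, vt = model(torch.randn(4, 36, 2048), torch.randn(4, 20, 300))
scores = vi @ vt.t()                          # cross-media retrieval scores
```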

19.
Learning to Take Actions
Khardon, R. Machine Learning, 1999, 35(1): 57-90
We formalize a model for supervised learning of action strategies in dynamic stochastic domains and show that PAC-learning results on Occam algorithms hold in this model as well. We then identify a class of rule-based action strategies for which polynomial time learning is possible. The representation of strategies is a generalization of decision lists; strategies include rules with existentially quantified conditions, simple recursive predicates, and small internal state, but are syntactically restricted. We also study the learnability of hierarchically composed strategies where a subroutine already acquired can be used as a basic action in a higher level strategy. We prove some positive results in this setting, but also show that in some cases the hierarchical learning problem is computationally hard.
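A rule-based action strategy in the spirit of a generalized decision list can be sketched as an ordered list of condition-action pairs evaluated first-match-wins; the rules below are illustrative assumptions and omit the paper's existentially quantified conditions and internal state.

```python
def make_strategy(rules):
    """rules: ordered list of (condition(obs) -> bool, action) pairs;
    the first rule whose condition holds determines the action."""
    def strategy(obs, default_action="wait"):
        for condition, action in rules:
            if condition(obs):
                return action
        return default_action
    return strategy

# usage: a two-rule strategy over simple dict observations
strategy = make_strategy([
    (lambda o: o.get("enemy_visible"), "evade"),
    (lambda o: o.get("fuel", 1.0) < 0.2, "refuel"),
])
print(strategy({"enemy_visible": False, "fuel": 0.1}))  # -> "refuel"
```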

20.
We present a methodology for learning a taxonomy from a set of text documents that each describes one concept. The taxonomy is obtained by clustering the concept definition documents with a hierarchical approach to the Self-Organizing Map. In this study, we compare three different feature extraction approaches with varying degrees of language independence. The feature extraction schemes include fuzzy logic-based feature weighting and selection, statistical keyphrase extraction, and the traditional tf-idf weighting scheme. The experiments are conducted for English, Finnish, and Spanish. The results show that while the rule-based fuzzy logic systems have an advantage in automatic taxonomy learning, taxonomies can also be constructed with tolerable results using statistical methods without domain- or style-specific knowledge.
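As a self-contained illustration of the statistical end of this spectrum, the sketch below embeds concept-definition documents with tf-idf and groups them bottom-up. Note that it substitutes plain agglomerative clustering for the paper's hierarchical Self-Organizing Map, and the four toy documents are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

docs = {                                   # one definition document per concept
    "dog": "a domesticated carnivorous mammal kept as a pet",
    "cat": "a small domesticated carnivorous mammal kept as a pet",
    "oak": "a large deciduous tree that bears acorns",
    "pine": "an evergreen coniferous tree with needle leaves",
}
X = TfidfVectorizer().fit_transform(list(docs.values())).toarray()
tree = AgglomerativeClustering(n_clusters=2).fit(X)
print(dict(zip(docs, tree.labels_)))       # expected: animals vs. trees split
```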
