Similar Literature
20 similar documents found.
1.
In this paper, we investigate reinforcement learning (RL) in multi-agent systems (MAS) from an evolutionary dynamical perspective. Typical for a MAS is that the environment is not stationary and the Markov property does not hold, which requires agents to be adaptive. RL is a natural approach to modeling the learning of individual agents, but these learning algorithms are known to be sensitive to the correct choice of parameter settings even in single-agent systems, and the issue is more acute in the MAS case because of the changing interactions among the agents. It remains largely an open question how a developer of a MAS should design the individual agents so that, through learning, the agents as a collective arrive at good solutions. We show that modeling RL in MAS from an evolutionary game-theoretic point of view is a new and potentially successful way to guide learning agents to the most suitable solution for the task at hand. We show how evolutionary dynamics (ED) from evolutionary game theory can help the developer of a MAS make good choices of parameter settings for the RL algorithms used. The ED essentially predict the equilibrium outcomes of a MAS in which the agents use individual RL algorithms; more specifically, we show how the ED predict the learning trajectories of Q-learners in iterated games. Moreover, we apply our results to (an extension of) the COllective INtelligence framework (COIN), a proven engineering approach to learning cooperative tasks in MASs, in which the utilities of the agents are re-engineered to contribute to the global utility. We show how the improved results for MAS RL in COIN, and in a developed extension, are predicted by the ED. Author funded by a doctoral grant of the institute for advancement of scientific technological research in Flanders (IWT).
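The link between Q-learning and evolutionary dynamics described in this abstract can be sketched concretely. The following is a minimal illustration, not the paper's own code: it Euler-integrates the selection-mutation (replicator) equations that evolutionary game theory associates with Boltzmann Q-learning, for an illustrative Prisoner's Dilemma payoff matrix; the payoffs, learning rate `alpha`, and temperature `tau` are all assumptions.

```python
import math

# Illustrative Prisoner's Dilemma payoffs for the row player:
# index 0 = cooperate, index 1 = defect.
A = [[3.0, 0.0],   # (C,C)=3, (C,D)=0
     [5.0, 1.0]]   # (D,C)=5, (D,D)=1

def step(x, y, alpha=0.01, tau=0.1):
    """One Euler step of the coupled selection-mutation dynamics
    associated with Boltzmann Q-learning (assumed parameters)."""
    def deriv(p, q):
        payoff = [sum(A[i][j] * q[j] for j in range(2)) for i in range(2)]
        avg = sum(p[i] * payoff[i] for i in range(2))
        d = []
        for i in range(2):
            selection = (alpha / tau) * p[i] * (payoff[i] - avg)
            mutation = alpha * p[i] * sum(
                p[j] * math.log(p[j] / p[i]) for j in range(2))
            d.append(selection + mutation)
        return d
    dx, dy = deriv(x, y), deriv(y, x)
    return ([x[i] + dx[i] for i in range(2)],
            [y[i] + dy[i] for i in range(2)])

x = y = [0.5, 0.5]          # both agents start with uniform policies
for _ in range(20000):
    x, y = step(x, y)
print(round(x[1], 2))        # probability of defection, close to 1
```

The trajectory predicted by the dynamics converges toward mutual defection, which is the kind of equilibrium-outcome prediction the abstract attributes to the ED.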

2.
In this paper, we investigate the use of hierarchical reinforcement learning (HRL) to speed up the acquisition of cooperative multi-agent tasks. We introduce a hierarchical multi-agent reinforcement learning (RL) framework, and propose a hierarchical multi-agent RL algorithm called Cooperative HRL. In this framework, agents are cooperative and homogeneous (use the same task decomposition). Learning is decentralized, with each agent learning three interrelated skills: how to perform each individual subtask, the order in which to carry them out, and how to coordinate with other agents. We define cooperative subtasks to be those subtasks in which coordination among agents significantly improves the performance of the overall task. Those levels of the hierarchy which include cooperative subtasks are called cooperation levels. A fundamental property of the proposed approach is that it allows agents to learn coordination faster by sharing information at the level of cooperative subtasks, rather than attempting to learn coordination at the level of primitive actions. We study the empirical performance of the Cooperative HRL algorithm using two testbeds: a simulated two-robot trash collection task, and a larger four-agent automated guided vehicle (AGV) scheduling problem. We compare the performance and speed of Cooperative HRL with other learning algorithms, as well as several well-known industrial AGV heuristics. We also address the issue of rational communication behavior among autonomous agents in this paper. The goal is for agents to learn both action and communication policies that together optimize the task given a communication cost. We extend the multi-agent HRL framework to include communication decisions and propose a cooperative multi-agent HRL algorithm called COM-Cooperative HRL. In this algorithm, we add a communication level to the hierarchical decomposition of the problem below each cooperation level. 
Before an agent makes a decision at a cooperative subtask, it decides if it is worthwhile to perform a communication action. A communication action has a certain cost and provides the agent with the actions selected by the other agents at a cooperation level. We demonstrate the efficiency of the COM-Cooperative HRL algorithm as well as the relation between the communication cost and the learned communication policy using a multi-agent taxi problem.

3.
Reinforcement learning (RL) is a research hotspot in machine learning. It studies how an agent interacts with its environment, makes sequential decisions, optimizes its policy, and maximizes cumulative reward. RL has great research value and application potential, and is a key step toward artificial general intelligence. This paper surveys progress and trends in RL algorithms and applications. It first introduces the basic principles of RL, including Markov decision processes, value functions, and the exploration-exploitation problem. It then reviews classic RL algorithms, including value-function-based methods, policy-search methods, and methods combining value functions with policy search, and surveys frontier research, mainly in the directions of multi-agent RL and meta-RL. Finally, it reviews successful applications of RL in domains such as game playing, robot control, urban traffic, and business, and closes with a summary and outlook.
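As a concrete companion to the basics the survey introduces (value functions, exploration versus exploitation), here is a minimal tabular Q-learning sketch on a made-up 5-state chain task; the environment, parameters, and epsilon-greedy rule are illustrative assumptions, not taken from the survey.

```python
import random

random.seed(0)

# Tiny chain MDP: states 0..4, action 1 moves right, action 0 moves left;
# reaching state 4 yields reward 1 and ends the episode (all assumed).
N_STATES, GOAL = 5, 4
alpha, gamma, eps = 0.5, 0.9, 0.3
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def env_step(s, a):
    s2 = min(max(s + (1 if a == 1 else -1), 0), GOAL)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

def choose(s):
    # epsilon-greedy with random tie-breaking: the exploration-exploitation
    # trade-off the survey mentions
    if random.random() < eps or Q[s][0] == Q[s][1]:
        return random.randrange(2)
    return 0 if Q[s][0] > Q[s][1] else 1

for _ in range(500):                       # episodes
    s, done = 0, False
    while not done:
        a = choose(s)
        s2, r, done = env_step(s, a)
        # Q-learning update: bootstrap from the greedy value of s2
        target = r + (0.0 if done else gamma * max(Q[s2]))
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2

greedy = [0 if Q[s][0] > Q[s][1] else 1 for s in range(GOAL)]
print(greedy)   # learned policy: move right toward the goal
```

After training, the greedy policy moves right in every state, and the value of the state adjacent to the goal approaches the reward of 1.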

4.
To address problems of current distance-education systems, such as their single mode of delivery and passive teaching, this paper proposes a multi-agent learning system model based on learners' personality factors. Combining intelligent-agent technology with an analysis of learners' personality factors, the model provides a capability-description language for individual agents, proposes new strategies for personalized grouping and for allocating learning tasks, and uses a compensation mechanism to encourage agent cooperation. By incorporating state-space search theory, the MAS gains stronger problem-solving ability, satisfies learners' need for active learning, and reduces system communication to some extent.

5.
As machine learning (ML) and artificial intelligence progress, more complex tasks can be addressed, often by cascading or combining existing models and technologies, an approach known as bottom-up design. Some of those tasks are addressed by agents that attempt to simulate or emulate higher cognitive abilities covering a broad range of functions; such agents are called cognitive agents. We formulate, implement, and evaluate such a cognitive agent, which combines learning by example with ML. To the best of our knowledge, the mechanisms, algorithms, and theories to be merged when training a cognitive agent to read and learn how to represent knowledge have not been defined by current state-of-the-art research. The task of learning to represent knowledge is known as semantic parsing, and we demonstrate that it is an ability that cognitive agents may attain using ML, with the acquired knowledge represented as conceptual graphs. In doing so, we create a cognitive agent that simulates properties of "learning by example" while performing semantic parsing with good accuracy. Owing to the unique and unconventional design of this agent, we first present the model and then gauge its performance, showcasing its strengths and weaknesses.

6.
We describe a framework and equations used to model and predict the behavior of multi-agent systems (MASs) with learning agents. A difference equation calculates the progression of an agent's error in its decision function, thereby telling us how the agent is expected to fare in the MAS. The equation relies on parameters that capture the agent's learning abilities, such as its change rate, learning rate, and retention rate, as well as relevant aspects of the MAS such as the impact the agents have on each other. We validate the framework with experimental results using reinforcement learning agents in a market system, as well as with other experimental results gathered from the AI literature. Finally, we use PAC theory to show how to calculate bounds on the values of the learning parameters.

7.
As an important branch of machine learning and artificial intelligence, multi-agent hierarchical reinforcement learning combines, in a general form, the cooperative ability of multiple agents with the decision-making ability of reinforcement learning. By decomposing a complex reinforcement learning problem into subproblems that are solved separately, it can effectively mitigate the curse of dimensionality, making it a potential approach to intelligent decision-making in large-scale, complex settings. This paper first describes the main techniques involved, including reinforcement learning, semi-Markov decision processes, and multi-agent reinforcement learning. It then surveys, from a hierarchical perspective, the algorithmic principles and research status of four families of multi-agent hierarchical reinforcement learning methods: option-based, hierarchies-of-abstract-machines-based, value-function-decomposition-based, and end-to-end methods. Finally, it reviews applications of multi-agent hierarchical reinforcement learning in areas such as robot control, game decision-making, and task planning.

8.
Shaping multi-agent systems with gradient reinforcement learning
An original reinforcement learning (RL) methodology is proposed for the design of multi-agent systems. In the realistic setting of situated agents with local perception, the task of automatically building a coordinated system is of crucial importance. To that end, we design simple reactive agents in a decentralized way as independent learners. To cope with the difficulties inherent in RL used in that framework, we developed an incremental learning algorithm in which agents face a sequence of progressively more complex tasks. We illustrate this general framework with computer experiments in which agents have to coordinate to reach a global goal. This work was conducted in part in NICTA's Canberra laboratory.

9.
This paper proposes a model-free learning scheme for the developmental acquisition of robot kinematic control and dexterous manipulation skills. The approach is based on a nested-hierarchical multi-agent architecture that intuitively encapsulates the topology of robot kinematic chains, where the activity of each independent degree-of-freedom (DOF) is finally mapped onto a distinct agent. Each one of those agents progressively evolves a local kinematic control strategy in a game-theoretic sense, that is, based on a partial (local) view of the whole system topology, which is incrementally updated through a recursive communication process according to the nested-hierarchical topology. Learning is thus approached not through demonstration and training but through an autonomous self-exploration process. A fuzzy reinforcement learning scheme is employed within each agent to enable efficient exploration in a continuous state–action domain. This paper constitutes in fact a proof of concept, demonstrating that global dexterous manipulation skills can indeed evolve through such a distributed iterative learning of local agent sensorimotor mappings. The main motivation behind the development of such an incremental multi-agent topology is to enhance system modularity, to facilitate extensibility to more complex problem domains and to improve robustness with respect to structural variations including unpredictable internal failures. These attributes of the proposed system are assessed in this paper through numerical experiments in different robot manipulation task scenarios, involving both single and multi-robot kinematic chains. The generalisation capacity of the learning scheme is experimentally assessed and robustness properties of the multi-agent system are also evaluated with respect to unpredictable variations in the kinematic topology. 
Furthermore, these numerical experiments demonstrate the scalability properties of the proposed nested-hierarchical architecture, in which new agents can be recursively added to the hierarchy to encapsulate individual active DOFs. The results presented in this paper demonstrate the feasibility of such a distributed multi-agent control framework, showing that the solutions that emerge are plausible and near-optimal. Numerical efficiency and computational cost issues are also discussed.

10.
For a software information agent, operating on behalf of a human owner and belonging to a community of agents, whether or not to communicate with another agent becomes a decision to be taken, since communication generally implies a cost. Since these agents often operate as recommender systems, on the basis of dynamic recognition of their human owners' behaviour and generally using hybrid machine learning techniques, three main necessities arise in their design: (i) providing the agent with an internal representation of both the interests and the behaviour of its owner, usually called an ontology; (ii) detecting inter-ontology properties that can help an agent choose the most promising agents to contact for knowledge-sharing purposes; and (iii) semi-automatically constructing the agent ontology by simply observing the behaviour of the user supported by the agent, leaving to the user only the task of defining concepts and categories of interest. We present a complete MAS architecture, called connectionist learning and inter-ontology similarities (CILIOS), for supporting agent mutual monitoring, covering all the issues above. CILIOS exploits an ontology model able to represent concepts, concept collections, functions, and causal implications among events in a multi-agent environment; moreover, it uses a mechanism capable of inducing logical rules representing agent behaviour in the ontology by means of a connectionist ontology representation based on neural-symbolic networks, i.e., networks whose input and output nodes are associated with logic variables.

11.
A main issue in cooperation in multi-agent systems is how an agent decides in which situations it is better to cooperate with other agents, and with which agents to cooperate. In this paper we focus specifically on multi-agent systems composed of learning agents whose goal is to achieve high accuracy in predicting the correct solutions of the problems they encounter. For that purpose, when encountering a new problem, each agent has to decide whether to solve it individually or to ask other agents for collaboration. We will see that learning agents can collaborate by forming committees in order to improve performance. Moreover, we present a proactive learning approach that allows the agents to learn when to convene a committee and which agents to invite to join it. Our experiments show that learning results in smaller committees while maintaining (and sometimes improving) problem-solving accuracy compared with committees composed of all agents.

12.
To address the difficulty of policy learning in multi-agent systems (MAS) caused by a non-stationary environment and mutually influencing agent decisions, this paper proposes a method called observation relation extraction (ORE). The method uses a complete graph to model the relations among different parts of an agent's observation space, and an attention mechanism to compute the importance of those relations. By applying the method to value-decomposition-based multi-agent reinforcement learning algorithms, a multi-agent reinforcement learning algorithm based on observation relation extraction is obtained. Experimental results on StarCraft micromanagement scenarios (StarCraft multi-agent challenge, SMAC) show that, compared with the original algorithms, value-decomposition multi-agent algorithms equipped with the ORE structure achieve both faster convergence and better final performance.

13.
This paper addresses the task-allocation problem in multi-agent systems. Inspired by competition in social life, it proposes a multi-agent competition model, together with a detailed algorithm for multi-agent task allocation. Game theory is introduced to study individual choice under mutual external constraints. To avoid the complexity of computing Nash equilibria, a one-step Nash equilibrium method is adopted. Simulation results demonstrate the soundness of the model and the effectiveness of the algorithm.

14.
A new multi-agent Q-learning algorithm
郭锐  吴敏  彭军  彭姣  曹卫华 《自动化学报》2007,33(4):367-372
For multi-agent systems in non-deterministic Markov environments, a new multi-agent Q-learning algorithm is proposed. The algorithm learns the behavior policies of the other agents from statistics over joint actions, and uses the full probability distribution of the agents' policy vectors to guarantee selection of the jointly optimal action. The convergence and learning performance of the algorithm are analyzed. Its application to the multi-agent RoboCup system further demonstrates its effectiveness and generalization ability.
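The idea of learning other agents' policies from joint-action statistics can be sketched as follows. This is a hedged illustration in the style of joint-action learners, not the paper's exact algorithm: each agent counts the other agent's actions, treats the empirical distribution as the opponent's policy, and picks the action maximizing the expected joint-action value. The coordination game and all parameters are assumptions.

```python
import random

random.seed(1)

# Illustrative 2-agent, 2-action coordination game (assumed payoffs)
REWARD = {(0, 0): 10.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 10.0}
alpha, eps = 0.2, 0.2

class JointActionLearner:
    def __init__(self):
        self.Q = {ja: 0.0 for ja in REWARD}   # value of each joint action
        self.counts = [1, 1]                  # opponent action counts (Laplace prior)

    def act(self, greedy=False):
        if not greedy and random.random() < eps:
            return random.randrange(2)
        total = sum(self.counts)
        probs = [c / total for c in self.counts]     # estimated opponent policy
        # expected value of each own action under the estimated opponent policy
        ev = [sum(probs[b] * self.Q[(a, b)] for b in range(2)) for a in range(2)]
        return 0 if ev[0] >= ev[1] else 1

    def update(self, own, other, r):
        self.counts[other] += 1
        self.Q[(own, other)] += alpha * (r - self.Q[(own, other)])

a1, a2 = JointActionLearner(), JointActionLearner()
for _ in range(2000):
    x, y = a1.act(), a2.act()
    r = REWARD[(x, y)]
    a1.update(x, y, r)
    a2.update(y, x, r)

print(a1.act(greedy=True), a2.act(greedy=True))  # agents coordinate on an optimum
```

Averaging the joint-action values over the estimated opponent policy is what lets each learner steer toward a jointly optimal action rather than a unilaterally greedy one.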

15.
郭锐  彭军  吴敏 《计算机工程与应用》2005,41(13):36-38,146
Reinforcement learning, a branch of machine learning, improves a policy through interaction with the environment; its online and adaptive nature makes it a powerful tool for policy-optimization problems. Multi-agent systems are a research hotspot in artificial intelligence, and the study of multi-agent learning techniques must be grounded in a model of the system's environment. Because multiple agents are present and influence one another, a multi-agent system is highly complex and its environment is a non-deterministic Markov model, so directly importing reinforcement learning techniques based on Markov models into multi-agent systems is inappropriate. Based on independent learning mechanisms among agents, this paper proposes an improved multi-agent Q-learning algorithm suited to non-deterministic Markov environments, and studies its application in the multi-agent RoboCup system. Experiments demonstrate the effectiveness and generalization ability of the technique. The paper closes with a brief outline of directions and further work in multi-agent reinforcement learning research.

16.
Communication and coordination are at the core of reaching a constructive agreement among multi-agent systems (MASs). Dividing the overall performance of a MAS among individual agents may lead to group learning as opposed to individual learning, which is one of the weak points of MASs. This paper proposes a recursive genetic framework for solving problems with high dynamism. In this framework, a combination of genetic algorithms and multi-agent capabilities is utilised to accelerate team learning and accurate credit assignment. The argumentation feature is used to accomplish agent learning, and the negotiation features of MASs are used to achieve credit assignment. The proposed framework is quite general and its recursive hierarchical structure can be extended. One special controlling module is dedicated to improving convergence time. Owing to the complexity of blackjack, we applied it as a test bed to evaluate the system's performance. The learning rate of the agents is measured, as well as their credit assignment. Analysis of the obtained results leads us to believe that our robust framework, with the proposed negotiation operator, is a promising methodology for solving similar problems in other areas with high dynamism.

17.
The development of the semantic Web will require agents to use common domain ontologies to facilitate communication of conceptual knowledge. However, the proliferation of domain ontologies may also result in conflicts between the meanings assigned to various terms: agents with diverse ontologies may use different terms to refer to the same meaning, or the same term to refer to different meanings. Agents therefore need a method for learning and translating similar semantic concepts between diverse ontologies. Only recently have researchers diverged from the last decade's common-ontology paradigm toward a paradigm in which agents can share knowledge using diverse ontologies. This paper describes how we address this agent knowledge-sharing problem by introducing a methodology and algorithms for multi-agent knowledge sharing and learning in a peer-to-peer setting. We demonstrate how this approach enables multi-agent systems to assist groups of people in locating, translating, and sharing knowledge using our Distributed Ontology Gathering Group Integration Environment (DOGGIE), and we describe our proof-of-concept experiments. DOGGIE synthesizes agent communication, machine learning, and reasoning for information sharing in the Web domain.

18.
In multi-agent reinforcement learning (MARL), the behavior of each agent can influence the learning of the others, and the agents have to search an exponentially enlarged joint-action space. It is therefore challenging for multi-agent teams to explore the environment; agents may settle on suboptimal policies and fail to solve some complex tasks. To improve exploration efficiency as well as performance on MARL tasks, in this paper we propose a new approach that transfers knowledge across tasks. Unlike traditional MARL algorithms, we first assume that the reward functions can be computed as linear combinations of a shared feature function and a set of task-specific weights. Then, we define a set of basic MARL tasks in the source domain and pre-train them as basic knowledge for further use. Finally, once the weights for a target task are available, it becomes easier to obtain a well-performing policy for exploring the target domain. The agents' learning process on target tasks is thus sped up by making full use of the previously learned basic knowledge. We evaluate the proposed algorithm on two challenging MARL tasks, cooperative box-pushing and non-monotonic predator-prey, and the experimental results demonstrate improved performance compared with state-of-the-art MARL algorithms.
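The reward assumption stated in the abstract, rewards linear in a shared feature function with task-specific weights, can be illustrated in a few lines. The feature function `phi`, the state and action names, and the weight vectors below are all invented for illustration; swapping the weight vector is what re-specifies a task without changing the shared features.

```python
# Sketch of the assumption r(s, a) = phi(s, a) . w: a shared feature
# function plus per-task weight vectors (everything here is made up).
def phi(state, action):
    # shared features: (made progress, caused a collision, constant step cost)
    progress = 1.0 if action == "push" and state == "at_box" else 0.0
    collision = 1.0 if action == "push" and state == "blocked" else 0.0
    return (progress, collision, 1.0)

def reward(state, action, w):
    return sum(f * wi for f, wi in zip(phi(state, action), w))

w_source = (1.0, -1.0, -0.01)   # source task: push boxes, punish collisions
w_target = (2.0,  0.0, -0.01)   # target task: same features, new weights

print(reward("at_box", "push", w_source))  # 0.99
print(reward("at_box", "push", w_target))  # 1.99
```

Because the features are shared, value estimates learned under `w_source` can be reused to bootstrap learning once `w_target` is known, which is the transfer effect the abstract describes.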

19.
Elevator Group Control Using Multiple Reinforcement Learning Agents
Crites  Robert H.  Barto  Andrew G. 《Machine Learning》1998,33(2-3):235-262
Recent algorithmic and theoretical advances in reinforcement learning (RL) have attracted widespread interest. RL algorithms have appeared that approximate dynamic programming on an incremental basis. They can be trained on the basis of real or simulated experience, focusing their computation on the areas of state space that are actually visited during control, which makes them computationally tractable on very large problems. If each member of a team of agents employs one of these algorithms, a new collective learning algorithm emerges for the team as a whole. In this paper we demonstrate that such collective RL algorithms can be powerful heuristic methods for addressing large-scale control problems. Elevator group control serves as our testbed. It is a difficult domain posing a combination of challenges not seen in most multi-agent learning research to date. We use a team of RL agents, each responsible for controlling one elevator car. The team receives a global reward signal that appears noisy to each agent due to the effects of the other agents' actions, the random nature of arrivals, and incomplete observation of the state. In spite of these complications, we show results that, in simulation, surpass the best heuristic elevator control algorithms of which we are aware. These results demonstrate the power of multi-agent RL on a very large-scale stochastic dynamic optimization problem of practical utility.

20.
Bus holding is a common and effective control strategy for reducing bus bunching and improving the service reliability of transit lines, and executing it requires dynamic decision-making in a stochastic, interactive system environment. Considering the availability of real-time transit operation information, this paper studies reinforcement-learning-based bus holding control in a fully cooperative multi-agent setting. It builds a conceptual multi-agent control model for a single bus line, describes the main elements of the learning framework, including agent states, action sets, reward functions, and the coordination mechanism, and solves the problem with the hysteretic Q-learning algorithm. Simulation results show that the method effectively prevents bus bunching and keeps the headways of the single-line bus service balanced.
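The hysteretic Q-learning rule mentioned above applies a large learning rate to positive temporal-difference errors and a smaller one to negative errors, which makes each cooperative agent optimistic about its teammates' exploratory mistakes. The sketch below applies the rule to an illustrative fully cooperative matrix game rather than the paper's bus-line simulation; the payoffs and parameters are assumptions.

```python
import random

random.seed(2)

alpha, beta, eps = 0.1, 0.01, 0.2   # alpha >> beta is the hysteresis

def hysteretic(q, target):
    """Hysteretic update: increase fast (alpha), decrease slowly (beta)."""
    delta = target - q
    return q + (alpha if delta >= 0 else beta) * delta

# Fully cooperative game: both agents receive the same payoff.
# Miscoordination is heavily penalized, which independent learners
# with a single learning rate tend to overreact to (assumed payoffs).
PAYOFF = [[11.0, -30.0],
          [-30.0, 7.0]]

Q1, Q2 = [0.0, 0.0], [0.0, 0.0]

def act(Q):
    if random.random() < eps:
        return random.randrange(2)
    return 0 if Q[0] >= Q[1] else 1

for _ in range(5000):
    a, b = act(Q1), act(Q2)
    r = PAYOFF[a][b]                 # shared reward, as in cooperative MAS
    Q1[a] = hysteretic(Q1[a], r)
    Q2[b] = hysteretic(Q2[b], r)

greedy = (0 if Q1[0] >= Q1[1] else 1, 0 if Q2[0] >= Q2[1] else 1)
print(greedy)   # both agents settle on the high-payoff joint action
```

Because the -30 penalties caused by the partner's exploration are absorbed with the small rate `beta`, both learners keep a high estimate for the jointly optimal action instead of retreating to the safer but worse one.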


Copyright©北京勤云科技发展有限公司  京ICP备09084417号