20 similar documents retrieved; search time: 406 ms
3.
To address the challenge of cooperative multi-agent decision making in partially observable adversarial environments, and inspired by the way the human cerebral cortex learns and reasons through memory, a new distributed cooperative multi-agent decision-making framework based on incomplete-information prediction is proposed. The framework can employ a variety of prediction methods, such as support vector regression, to predict the unobservable information in the environment from historical memory and current observations; the predicted information is then fused with the observed information and used as the basis for cooperative decision making. Distributed multi-agent reinforcement learning is subsequently used to learn a cooperative policy, yielding a decision model for each agent in the team. The framework, combined with several prediction algorithms, was validated on cooperative multi-agent decision making in a typical partially observable adversarial environment. The results show that the proposed framework generalizes across prediction algorithms and, when high prediction accuracy on the unobservable part is maintained, improves the level of multi-agent cooperative decision making by 23.4%.
4.
Chang Xiaojun, 《计算机工程与应用》 (Computer Engineering and Applications), 2011, 47(23): 212-216
Building on the classical Q-learning algorithm, a joint multi-agent Q-learning algorithm is proposed by introducing a multi-agent system. The algorithm has all agents learn under a single shared evaluation function, and the learning process takes into account the learning results of every cooperating agent. In the RoboCup 2D soccer simulation, a field-state decomposition method is introduced to reduce the number of state components, and the optimal joint state obtained through joint learning serves as the optimal action group for multi-agent cooperation, effectively solving the passing-strategy and cooperation problems among the simulated agents. Simulation and experimental results demonstrate the effectiveness and reliability of the algorithm.
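A minimal tabular sketch of the joint Q-learning idea described above: a single shared table is indexed by the joint action of all agents, so every agent learns under the same evaluation function. All sizes and identifiers are illustrative; the paper's field-state decomposition is not reproduced.

```python
import numpy as np

def joint_q_update(Q, state, joint_action, reward, next_state,
                   alpha=0.1, gamma=0.9):
    """One joint Q-learning update on a shared table: the TD target
    bootstraps from the best *joint* action in the next state."""
    best_next = Q[next_state].max()
    td_target = reward + gamma * best_next
    Q[state, joint_action] += alpha * (td_target - Q[state, joint_action])
    return Q

# toy example: 4 field states, 2 agents with 3 actions each -> 9 joint actions
Q = np.zeros((4, 9))
Q = joint_q_update(Q, state=0, joint_action=2, reward=1.0, next_state=1)
```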
5.
Efficient cooperation is an important goal of multi-agent deep reinforcement learning, but problems such as environment non-stationarity and the curse of dimensionality in multi-agent decision systems make it difficult to achieve. Existing value-decomposition methods strike a good balance between environmental stationarity and agent scalability, but they neglect the importance of the agents' policy networks and do not fully exploit the complete historical trajectories stored in the replay buffer when learning the joint action-value function. A multi-agent cooperation method based on a multi-agent multi-step dueling network is proposed: during training, an agent network and a value network decouple the evaluation of agent actions from the evaluation of environment states; multi-step learning over the entire historical trajectory is performed to estimate the temporal-difference target; and decentralized cooperative policies are trained centrally and end-to-end through a mixing network that approximates the joint action-value function. Experimental results show that, across six scenarios, the method achieves higher average win rates than cooperative multi-agent methods based on value-decomposition networks (VDN), monotonic value function factorization (QMIX), value function transformation (QTRAN), and counterfactual multi-agent policy gradients (COMA), with faster convergence and better stability.
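The two ingredients named above, the dueling decomposition that decouples state-value from action evaluation and the multi-step temporal-difference target computed over a stored trajectory, can be sketched as follows (a simplified illustration, not the paper's network):

```python
import numpy as np

def dueling_q(value, advantages):
    """Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a),
    separating the evaluation of the state from that of each action."""
    return value + advantages - advantages.mean()

def n_step_td_target(rewards, bootstrap_value, gamma=0.99):
    """Multi-step TD target over a trajectory segment: the discounted
    sum of rewards plus a discounted bootstrap value at the end."""
    g = bootstrap_value
    for r in reversed(rewards):
        g = r + gamma * g
    return g
```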
6.
This paper addresses the multi-agent cooperation problem in RoboCup simulation matches. It analyzes several typical multi-agent cooperation models currently used in RoboCup and proposes a three-layer hierarchical multi-agent cooperation model comprising a global layer, a local layer, and an individual layer. Match practice shows that the model is reasonable and effective.
9.
For the multi-agent multi-goal cooperation problem in dynamic unknown environments, a functional reward is designed and the reinforcement learning algorithm is improved so that multiple agents can reach all goal points simultaneously. The agents interact with the environment, repeatedly cycling through exploration, learning, and decision making, accumulating experience and refining their policies through this interaction. Without goal points being pre-assigned, the agents cooperatively avoid both static and dynamic obstacles and reach all goal points at the same time. Simulation results show that, compared with existing multi-agent cooperation methods, the algorithm improves learning speed by about 42.86% on average; the agents also obtain higher rewards, autonomously make decisions and assign goals, and achieve simultaneous arrival at all goal points.
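The abstract does not give the exact form of the functional reward, but a reward for simultaneous multi-goal arrival plausibly combines a distance-shaping term, a collision penalty, and a team bonus paid only on simultaneous arrival. The sketch below is entirely hypothetical (all terms and constants are assumptions for illustration):

```python
import numpy as np

def shaped_reward(agent_pos, goal_pos, collided, all_arrived, t, t_limit=200):
    """Illustrative functional reward: negative distance to goal shapes
    progress, collisions are penalized, and a team bonus (larger for
    earlier arrival) is granted only when all agents arrive together."""
    r = -np.linalg.norm(np.asarray(agent_pos) - np.asarray(goal_pos))
    if collided:
        r -= 10.0
    if all_arrived:
        r += 100.0 * (1.0 - t / t_limit)  # earlier simultaneous arrival pays more
    return r
```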
10.
Against the background of heterogeneous cooperative tasks between unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs), and exploiting their complementary characteristics, a ground-air heterogeneous multi-agent cooperative coverage model is proposed to extend and improve the dynamic coverage problem for heterogeneous multi-agent systems. During coverage, a UAV can exploit its advantages in speed and field of view to guide the UGVs' actions. Taking the agents' partial observability and uncertainty into account, the coverage scenario is modeled as a decentralized partially observable Markov decision process (DEC-POMDP), and a multi-agent reinforcement learning algorithm is used to cover the environment. Simulation experiments show that UAV-UGV cooperation speeds up the team's coverage of the environment, and the reinforcement learning algorithm improves the effectiveness of the coverage model.
11.
A Multi-Agent Coordination Method Based on Distributed Reinforcement Learning
Research on multi-agent systems focuses on enabling functionally independent agents to accomplish complex control tasks or solve complex problems through negotiation, coordination, and cooperation. Based on a study and analysis of distributed reinforcement learning algorithms, a multi-agent coordination method is proposed: the coordination level decomposes the overall system task, a coordinating agent uses centralized reinforcement learning to allocate the subtasks, and the task agents at the behavior level each receive their subtask and use independent reinforcement learning to select effective actions, cooperating to complete the system task. Application and experiments in Robot Soccer simulation matches show that this coordination method based on distributed reinforcement learning outperforms conventional reinforcement learning.
12.
Multi-agent deep reinforcement learning can be applied to real-world scenarios that require cooperation among multiple parties and is a research hotspot in reinforcement learning. In multi-goal multi-agent cooperative scenarios, the agents are bound by a complex mixture of cooperation and competition; the performance of a multi-agent reinforcement learning method in such scenarios depends on whether it can adequately measure the relationships among agents and distinguish cooperative from competitive actions, while also handling high-dimensional data efficiently. For multi-goal multi-agent cooperative scenarios, a goal-based value-decomposition deep reinforcement learning method is proposed on top of the QMIX model: an attention mechanism measures the group influence among agents, and the agents' goal information is used to realize a two-stage value decomposition, improving the method's ability to characterize complex agent relationships and thereby its performance. Experimental results show that, compared with QMIX, the method matches its score on the StarCraft II micromanagement platform, scores 4.9 points higher on average in the board game, and scores 25 and 280.4 points higher on average in the multi-particle environments merge and cross, respectively; it also achieves higher scores and better overall performance than mainstream deep reinforcement learning methods.
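The QMIX model that this method builds on mixes per-agent Q-values through a small network whose weights are constrained non-negative, so the joint value is monotone in each agent's value. A toy sketch of that mixing step (in the real method the weights come from hypernetworks conditioned on the global state; here they are plain arrays, and the paper's attention and two-stage decomposition are not shown):

```python
import numpy as np

def qmix_mix(agent_qs, w1, b1, w2, b2):
    """QMIX-style monotonic mixing: per-agent Q-values pass through a
    two-layer mixer whose weights are forced non-negative via abs(),
    guaranteeing the joint Q never decreases when an agent's Q rises."""
    hidden = np.maximum(0.0, agent_qs @ np.abs(w1) + b1)  # ReLU here for simplicity
    return float(hidden @ np.abs(w2) + b2)
```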
13.
Multi-Agent Reinforcement Learning and Its Application to Role Assignment for Soccer Robots
A robot soccer system is a typical multi-agent system: each robot player's choice of action depends not only on its own state but also on the other players, so implementing soccer-robot decision policies through reinforcement learning requires combined states and combined actions. This paper studies a multi-agent reinforcement learning algorithm based on agent action prediction, using a naive Bayes classifier to predict the other agents' actions. A policy-sharing mechanism is introduced to exchange the policies learned by the agents and speed up multi-agent reinforcement learning. Finally, the application of the proposed method to dynamic role assignment for soccer robots is studied, realizing division of labor and cooperation among multiple robots.
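A minimal sketch of the naive Bayes action predictor described above: given discrete state features, it predicts another agent's next action under the naive assumption that features are independent given the action. All class names and sizes are hypothetical; the paper's actual feature set is not specified in the abstract.

```python
import numpy as np

class NaiveBayesActionPredictor:
    """Predicts another agent's next action from discrete state features
    using naive Bayes with Laplace (add-one) smoothed counts."""

    def __init__(self, n_actions, n_features, n_values):
        self.action_counts = np.ones(n_actions)
        self.feature_counts = np.ones((n_actions, n_features, n_values))

    def observe(self, features, action):
        """Record one observed (features, action) pair."""
        self.action_counts[action] += 1
        for f, v in enumerate(features):
            self.feature_counts[action, f, v] += 1

    def predict(self, features):
        """Return the most probable action given the features."""
        log_post = np.log(self.action_counts / self.action_counts.sum())
        for f, v in enumerate(features):
            cond = self.feature_counts[:, f, v] / self.feature_counts[:, f].sum(axis=1)
            log_post += np.log(cond)
        return int(np.argmax(log_post))
```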
14.
Xue Lei, Changyin Sun, Donald Wunsch, Yingjiang Zhou, Yu Fang, IEEE/CAA Journal of Automatica Sinica, 2018, 5(1): 301-310
The iterated prisoner's dilemma (IPD) is an ideal model for analyzing interactions between agents in complex networks. It has attracted wide interest in the development of novel strategies since the success of tit-for-tat in Axelrod's tournament. This paper studies a new adaptive strategy for the IPD in different complex networks, where agents can learn and adapt their strategies through reinforcement learning. A temporal difference learning method is applied to design the adaptive strategy and optimize the agents' decision-making process. Previous studies indicated that mutual cooperation is hard to achieve in the IPD. Therefore, three examples based on a square lattice network and a scale-free network are provided to show two features of the adaptive strategy. First, mutual cooperation can be achieved by a group of adaptive agents on a scale-free network, and once the evolution has converged to mutual cooperation, it is unlikely to shift. Second, the adaptive strategy earns a better payoff than other strategies on the square lattice network. Analytical properties are discussed to verify the evolutionary stability of the adaptive strategy.
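A toy sketch of the temporal-difference adaptive strategy in the IPD: two Q-learning agents play repeatedly, each conditioning on the pair of actions from the previous round. This is an illustration of the learning mechanism only, not the networked lattice or scale-free settings the paper studies; all constants are assumptions.

```python
import numpy as np

# Prisoner's dilemma payoffs for the row player: index 0 = cooperate, 1 = defect.
PAYOFF = np.array([[3.0, 0.0],   # my payoff when I cooperate vs (C, D)
                   [5.0, 1.0]])  # my payoff when I defect    vs (C, D)

def play_ipd(rounds=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Two epsilon-greedy Q-learning agents play the IPD; each agent's
    state is the joint action of the previous round (4 states)."""
    rng = np.random.default_rng(seed)
    Q = [np.zeros((4, 2)), np.zeros((4, 2))]
    state = 0
    for _ in range(rounds):
        acts = []
        for i in range(2):
            if rng.random() < eps:
                acts.append(int(rng.integers(2)))       # explore
            else:
                acts.append(int(np.argmax(Q[i][state])))  # exploit
        next_state = acts[0] * 2 + acts[1]
        for i in range(2):
            r = PAYOFF[acts[i], acts[1 - i]]
            td = r + gamma * Q[i][next_state].max() - Q[i][state][acts[i]]
            Q[i][state][acts[i]] += alpha * td
        state = next_state
    return Q
```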
15.
Zishuo Cheng, Dayong Ye, Tianqing Zhu, Wanlei Zhou, Philip S. Yu, Congcong Zhu, International Journal of Intelligent Systems, 2022, 37(1): 799-828
In multi-agent reinforcement learning, transfer learning is one of the key techniques used to speed up learning through the exchange of knowledge among agents. However, three challenges arise when applying this technique to real-world problems. First, most real-world domains are partially rather than fully observable. Second, it is difficult to pre-collect knowledge in unknown domains. Third, negative transfer impedes learning progress. We observe that differentially private mechanisms can overcome these challenges thanks to their randomization property. We therefore propose a novel differential transfer learning method for multi-agent reinforcement learning, characterized by three key features. First, our method allows agents to transfer knowledge to each other in real time in partially observable domains. Second, it removes the constraints on the relevance of transferred knowledge, which greatly expands the knowledge set. Third, it improves robustness to negative transfer by applying differentially exponential noise and relevance weights to the transferred knowledge. The proposed method is the first to use the randomization property of differential privacy to improve learning performance in multi-agent reinforcement learning systems. We conduct extensive experiments to demonstrate the effectiveness of the proposed method.
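The third feature, perturbing transferred knowledge with exponentially distributed noise and weighting it by relevance, might be sketched as below. This is a loose illustration of the idea, not the paper's mechanism; the noise form, scale, and merging rule are all assumptions.

```python
import numpy as np

def private_transfer(q_values, relevance, epsilon=1.0, rng=None):
    """Hypothetical sketch: knowledge received from another agent (here a
    vector of Q-values) is perturbed with signed exponential noise whose
    scale shrinks as the privacy budget epsilon grows, then scaled by a
    relevance weight, trading accuracy for robustness to negative transfer."""
    rng = rng or np.random.default_rng(0)
    noise = rng.exponential(scale=1.0 / epsilon, size=len(q_values))
    sign = rng.choice([-1.0, 1.0], size=len(q_values))
    return relevance * (np.asarray(q_values) + sign * noise)
```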
18.
In a multi-robot system, the state space for reinforcement learning of cooperative environment exploration grows exponentially with the number of robots, and the enormous learning space makes convergence extremely slow. To address this, a reinforcement learning method based on action prediction, together with an action-selection strategy, is applied to multi-robot cooperation: predicting the probabilities of the actions the robots are likely to take accelerates the convergence of the learning algorithm. Experimental results show that the action-prediction-based reinforcement learning method obtains cooperative multi-robot policies faster than the original algorithm.
20.
Zhenyi Zhang, Zhibin Mo, Yutao Chen, Jie Huang, IEEE/CAA Journal of Automatica Sinica, 2022, 9(9): 1561-1573
Behavior-based autonomous systems rely on human intelligence to resolve multi-mission conflicts by designing mission priority rules and nonlinear controllers. In this work, a novel two-layer reinforcement learning behavioral control (RLBC) method is proposed to reduce that dependence through trial-and-error learning. Specifically, in the upper layer, a reinforcement learning mission supervisor (RLMS) is designed to learn the optimal mission priority. Compared with existing mission supervisors, the RLMS improves the dynamic performance of mission priority adjustment by maximizing cumulative rewards and reduces hardware storage demand when neural networks are used. In the lower layer, a reinforcement learning controller (RLC) is designed to learn the optimal control policy. Compared with existing behavioral controllers, the RLC reduces the control cost of mission priority adjustment by balancing control performance and consumption. All error signals are proved to be semi-globally uniformly ultimately bounded (SGUUB). Simulation results show that the number of mission priority adjustments and the control cost are significantly reduced compared with some existing mission supervisors and behavioral controllers, respectively.