基于Meta平衡的多Agent Q学习算法研究
Cite this article: 王万良, 艘约庆, 赵燕伟. 基于Meta平衡的多Agent Q学习算法研究[J]. 计算机科学, 2012, 39(105): 261-264
Authors: 王万良, 艘约庆, 赵燕伟
Affiliations: College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023; Key Laboratory of Special Purpose Equipment and Advanced Processing Technology of the Ministry of Education, Zhejiang University of Technology, Hangzhou 310012
Abstract: Research on multi-agent reinforcement learning has largely focused on cooperative strategies; the NashQ algorithm was an important contribution to the study of non-cooperative strategies. To address the problems that, in multi-agent systems, a Nash equilibrium cannot guarantee that the solution obtained is Pareto optimal, and that the algorithm's computational complexity is high, this paper proposes the MetaQ algorithm based on Meta equilibrium. Unlike NashQ, MetaQ obtains the optimal joint-action policy by preprocessing its own actions and predicting the actions of the other agents. Finally, experiments on a climate-cooperation strategy game show that MetaQ offers a sound theoretical interpretation and good experimental performance on non-cooperative problems.
Keywords: Reinforcement learning, Meta equilibrium, NashQ, Multi-agent system
Research on Multi-agent Q Learning Algorithm Based on Meta Equilibrium |
Abstract: Most multi-agent reinforcement learning algorithms target cooperative strategies, while NashQ is frequently mentioned as a pivotal algorithm in the study of non-cooperative strategies. In multi-agent systems, Nash equilibrium cannot ensure that the solutions obtained are Pareto optimal; besides, the algorithm has high computational complexity. The MetaQ algorithm was proposed in this paper. It differs from NashQ in that MetaQ finds the optimal solution through the pretreatment of its own behavior and the prediction of the other agents' behavior. In the end, a climate-cooperation strategy game was used, and the results show that the MetaQ algorithm, with impressive performance, is well suited to non-cooperative problems.
Keywords: Reinforcement learning, Meta equilibrium, NashQ, Multi-agent system
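The abstract's description of MetaQ — choosing actions by preprocessing one's own options and predicting the other agents' behavior — can be illustrated with a generic joint-action Q-learning sketch for a two-player repeated game. Everything below is an assumption for illustration: the class names, the payoff table (a climate-cooperation style dilemma), and the opponent model (empirical action frequencies, fictitious-play style) are hypothetical; the paper's actual Meta-equilibrium computation is given only in the full text.

```python
import random
from collections import defaultdict

# Hypothetical sketch: joint-action Q-learning with an opponent model.
# This is NOT the paper's exact MetaQ algorithm, only the general idea of
# best-responding to a prediction of the other agent's behavior.

ACTIONS = ["cooperate", "defect"]

# Assumed payoff table for the row player: (my_action, their_action) -> reward
PAYOFF = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

class JointActionQLearner:
    def __init__(self, alpha=0.1, epsilon=0.1):
        self.q = defaultdict(float)          # Q-values over joint actions (mine, theirs)
        self.opp_counts = defaultdict(int)   # empirical model of the opponent's play
        self.alpha = alpha                   # learning rate
        self.epsilon = epsilon               # exploration rate

    def predict_opponent(self):
        # Predict the opponent's next action as its most frequent past action
        # (fictitious-play style); choose uniformly before any observations.
        if not self.opp_counts:
            return random.choice(ACTIONS)
        return max(self.opp_counts, key=self.opp_counts.get)

    def act(self):
        # Epsilon-greedy best response to the predicted opponent action.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        predicted = self.predict_opponent()
        return max(ACTIONS, key=lambda mine: self.q[(mine, predicted)])

    def update(self, my_action, opp_action, reward):
        # Record the opponent's move and nudge the joint-action Q-value
        # toward the observed reward (stateless repeated game, so no bootstrap).
        self.opp_counts[opp_action] += 1
        key = (my_action, opp_action)
        self.q[key] += self.alpha * (reward - self.q[key])

def play(rounds=2000, seed=0):
    """Let two learners play the repeated game and return them."""
    random.seed(seed)
    a, b = JointActionQLearner(), JointActionQLearner()
    for _ in range(rounds):
        ma, mb = a.act(), b.act()
        a.update(ma, mb, PAYOFF[(ma, mb)])
        b.update(mb, ma, PAYOFF[(mb, ma)])
    return a, b
```

In this simplified dilemma, best-responding to a frequency model typically drives both learners toward mutual defection, which is exactly the kind of non-Pareto-optimal outcome the abstract attributes to Nash-equilibrium-based play; the paper's Meta-equilibrium construction is aimed at avoiding it.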