Policy Gradient Algorithm in Two-Player Zero-Sum Markov Games
Citation: LI Yongqiang, ZHOU Jian, FENG Yu, FENG Yuanjing. Policy Gradient Algorithm in Two-Player Zero-Sum Markov Games[J]. Pattern Recognition and Artificial Intelligence, 2023, 36(1): 81-91. DOI: 10.16451/j.cnki.issn1003-6059.202301007
Authors: LI Yongqiang  ZHOU Jian  FENG Yu  FENG Yuanjing
Affiliation: 1. College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023
Fund Projects: Supported by the General Program of the National Natural Science Foundation of China (No.62073294) and the Key Program of the Natural Science Foundation of Zhejiang Province (No.LZ21F030003)
Received: 2022-08-05

Abstract: In two-player zero-sum Markov games, each player's policy is affected by the other player's policy, so the traditional policy gradient theorem only applies to training the two players' policies alternately. To enable simultaneous training of both players' policies, a policy gradient theorem for two-player zero-sum Markov games is established. Based on this theorem, an extra-gradient based REINFORCE algorithm is proposed that drives the joint policy of the two players to an approximate Nash equilibrium. The superiority of the proposed algorithm is analyzed from multiple perspectives. First, comparative experiments on simultaneous-move games show that the proposed algorithm achieves better convergence and a faster convergence rate. Second, the characteristics of the joint policies obtained by the proposed algorithm are analyzed, and these joint policies are verified to reach an approximate Nash equilibrium. Finally, comparative experiments on simultaneous-move games with different difficulty levels show that the proposed algorithm maintains a good convergence rate at higher difficulty levels.
Keywords: Markov Game  Zero-Sum Game  Policy Gradient Theorem  Approximate Nash Equilibrium
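The extra-gradient idea underlying the proposed algorithm can be illustrated on a toy problem. The sketch below is a minimal illustration of the generic extra-gradient update, not the paper's REINFORCE variant: it is applied to the scalar bilinear zero-sum game f(x, y) = x·y, whose unique Nash equilibrium is (0, 0). Plain simultaneous gradient descent-ascent spirals away from this point, while the extra-gradient update, which re-evaluates gradients at a look-ahead point, converges to it.

```python
def extragradient(x, y, lr=0.2, steps=500):
    """Extra-gradient on f(x, y) = x * y, where x minimizes and y maximizes."""
    for _ in range(steps):
        # 1) Extrapolation ("look-ahead") step using gradients at the current point.
        x_half = x - lr * y   # df/dx = y
        y_half = y + lr * x   # df/dy = x
        # 2) Update step using gradients re-evaluated at the look-ahead point.
        x, y = x - lr * y_half, y + lr * x_half
    return x, y

def gda(x, y, lr=0.2, steps=500):
    """Plain simultaneous gradient descent-ascent, for comparison."""
    for _ in range(steps):
        x, y = x - lr * y, y + lr * x
    return x, y

# Starting from (1, 1): extra-gradient contracts toward the Nash
# equilibrium (0, 0), while plain descent-ascent spirals outward.
xe, ye = extragradient(1.0, 1.0)
xg, yg = gda(1.0, 1.0)
```

The extrapolation step is what breaks the rotational dynamics that make naive simultaneous gradient play cycle or diverge in zero-sum games; the paper applies the same two-step update to REINFORCE-style policy gradients.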