Survey on Inverse Reinforcement Learning
Cite this article: ZHANG Li-Hua, LIU Quan, HUANG Zhi-Gang, ZHU Fei. Survey on Inverse Reinforcement Learning[J]. Journal of Software, 2023, 34(10): 4772-4803
Authors: ZHANG Li-Hua  LIU Quan  HUANG Zhi-Gang  ZHU Fei
Affiliation: School of Computer Science & Technology, Soochow University, Suzhou 215006, China; Provincial Key Laboratory for Computer Information Processing Technology (Soochow University), Suzhou 215006, China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education (Jilin University), Changchun 130012, China; Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing 210023, China
Funding: National Natural Science Foundation of China (61772355, 61702055, 61876217, 62176175); Major Project of the Natural Science Research of Jiangsu Higher Education Institutions (18KJA520011, 17KJA520004); Fund of the Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University (93K172017K18, 93K172021K08); Suzhou Applied Basic Research Program, Industrial Part (SYG201422); Priority Academic Program Development of Jiangsu Higher Education Institutions
Abstract: Inverse reinforcement learning (IRL), also known as inverse optimal control (IOC), is an important research method in reinforcement learning and imitation learning. IRL recovers a reward function from expert demonstrations and then solves for the optimal policy under that reward, thereby imitating the expert policy. In recent years, IRL has produced fruitful results in imitation learning and has been widely applied to problems such as vehicle navigation, path recommendation, and robotic optimal control. This survey first presents the theoretical foundations of IRL. Then, starting from how the reward function is constructed, it analyzes IRL algorithms based on linear and nonlinear reward functions, including maximum margin IRL, maximum entropy IRL, maximum entropy deep IRL, and generative adversarial imitation learning (GAIL). Next, it reviews frontier research directions in IRL and compares representative algorithms, covering IRL with incomplete state-action information, multi-agent IRL, IRL with suboptimal demonstrations, and guided IRL. Finally, it summarizes the key open problems and discusses future directions from both theoretical and application perspectives.
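
For reference, the maximum-entropy IRL formulation named in the abstract (due to Ziebart et al., 2008) can be stated compactly as follows; the notation (reward weights theta, trajectory features phi(tau), demonstration set D, partition function Z) is ours, not necessarily the survey's:

```latex
% Standard maximum-entropy IRL (Ziebart et al., 2008); notation is ours.
% Trajectories are exponentially more likely the higher their reward:
P(\tau \mid \theta) = \frac{\exp\bigl(\theta^{\top}\phi(\tau)\bigr)}{Z(\theta)},
\qquad
Z(\theta) = \sum_{\tau'} \exp\bigl(\theta^{\top}\phi(\tau')\bigr).
% Learning maximizes the demonstration log-likelihood; the gradient is the
% gap between expert and model feature expectations:
\nabla_{\theta}\,\frac{1}{|\mathcal{D}|}\sum_{\tau \in \mathcal{D}} \log P(\tau \mid \theta)
  = \underbrace{\frac{1}{|\mathcal{D}|}\sum_{\tau \in \mathcal{D}} \phi(\tau)}_{\text{expert}}
  \; - \; \underbrace{\mathbb{E}_{\tau \sim P(\cdot \mid \theta)}\bigl[\phi(\tau)\bigr]}_{\text{model}}.
```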

Keywords: inverse reinforcement learning (IRL)  imitation learning  generative adversarial imitation learning  inverse optimal control (IOC)  reinforcement learning (RL)
Received: 2021-11-05
Revised: 2021-12-15
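
To make the pipeline described in the abstract concrete (estimate a reward from expert samples, solve the soft-optimal policy under it, and repeat), below is a minimal maximum-entropy IRL sketch. The 4-state chain MDP, one-hot features, single hand-written demonstration, and all hyperparameters are illustrative assumptions, not taken from the survey:

```python
# Minimal maximum-entropy IRL sketch on a toy problem. The 4-state chain
# MDP, one-hot features, demonstration, and hyperparameters are all
# illustrative assumptions, not from the survey.
import numpy as np

n_states, n_actions, gamma, horizon = 4, 2, 0.9, 8

# P[a, s, s']: deterministic chain; action 0 steps left, action 1 steps right.
P = np.zeros((n_actions, n_states, n_states))
for s in range(n_states):
    P[0, s, max(s - 1, 0)] = 1.0
    P[1, s, min(s + 1, n_states - 1)] = 1.0

phi = np.eye(n_states)        # one-hot state features phi(s)
theta = np.zeros(n_states)    # linear reward weights: R(s) = theta @ phi(s)

def soft_policy(r, iters=200):
    """Soft value iteration; returns the max-ent stochastic policy pi[s, a]."""
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = r[:, None] + gamma * (P @ V).T                         # Q[s, a]
        Qmax = Q.max(axis=1)
        V = Qmax + np.log(np.exp(Q - Qmax[:, None]).sum(axis=1))   # soft-max
    return np.exp(Q - V[:, None])                                  # pi(a | s)

def visitation(pi):
    """Expected state-visitation counts over the horizon, starting in s = 0."""
    d, total = np.zeros(n_states), np.zeros(n_states)
    d[0] = 1.0
    for _ in range(horizon):
        total += d
        d = np.einsum('s,sa,asn->n', d, pi, P)   # push mass one step forward
    return total

# One hypothetical expert trajectory: always step right, then stay at s = 3.
demos = [[0, 1, 2, 3, 3, 3, 3, 3]]
mu_expert = sum(phi[s] for traj in demos for s in traj) / len(demos)

for _ in range(200):
    pi = soft_policy(phi @ theta)             # policy under the current reward
    mu_model = visitation(pi) @ phi           # model feature expectations
    theta += 0.1 * (mu_expert - mu_model)     # max-ent gradient ascent

print("learned per-state reward:", np.round(phi @ theta, 2))
```

The update is exactly the feature-matching gradient shown earlier: whenever the learner visits state 3 less often than the expert, theta[3] is pushed up, until model and expert feature expectations agree.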
