A Policy Search and Transfer Approach in the Non-stationary Environment
Citation: ZHU Fei, LIU Quan, FU Qi-ming, CHEN Dong-huo, WANG Hui, FU Yu-chen. A Policy Search and Transfer Approach in the Non-stationary Environment[J]. Acta Electronica Sinica, 2017, 45(2): 257-266. DOI: 10.3969/j.issn.0372-2112.2017.02.001
Authors: ZHU Fei  LIU Quan  FU Qi-ming  CHEN Dong-huo  WANG Hui  FU Yu-chen
Affiliation: 1. School of Computer Science and Technology, Soochow University, Suzhou, Jiangsu 215006, China; 2. Provincial Key Laboratory for Computer Information Processing Technology, Soochow University, Suzhou, Jiangsu 215006, China; 3. Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin 130012, China; 4. College of Electronic and Information Engineering, Suzhou University of Science and Technology, Suzhou, Jiangsu 215011, China
Foundation items: National Natural Science Foundation of China; Natural Science Research Program of Jiangsu Higher Education Institutions; Fund of the Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education (Jilin University); Suzhou Application Basic Research Program; Provincial Key Laboratory Fund of Soochow University; China Scholarship Council
Abstract: Reinforcement learning is an online learning method in which an agent seeks an optimal policy by maximizing cumulative reward while interacting with its environment. In a non-stationary environment, however, the MDP model at any given moment changes as soon as the agent has interacted with it, so traditional reinforcement learning methods built on a stationary MDP model cannot solve for the optimal policy. To address policy solving in non-stationary environments, this paper models the environment as a distribution over MDPs and proposes a formula-set-based policy search algorithm, FSPS. During learning, FSPS collects historical samples, extracts feature information from them, uses these features to construct different formulas for action selection, and applies policy search to find the optimal formula. On this basis, an optimality bound is given for the resulting policy, and it is proved theoretically that the optimality of a policy transferred to a new MDP distribution depends mainly on the distance between the two MDP distributions and on the performance of the policy in the original MDP distribution. Finally, FSPS is applied to the classic Markov Chain problem; experimental results show that the resulting policy performs well.
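The transfer result is stated above only in qualitative form. One illustrative way to write a bound of that shape in LaTeX (an assumption for exposition, not the paper's exact statement) is:

    % Hypothetical transfer bound of the shape described in the abstract:
    % the return of policy \pi under the new MDP distribution D' is within
    % a distance-dependent term of its return under the original D.
    J_{\mathcal{D}'}(\pi) \;\ge\; J_{\mathcal{D}}(\pi) - L \, d(\mathcal{D}, \mathcal{D}')

where J_D(pi) denotes the expected return of policy pi over MDPs drawn from distribution D, d is some distance between MDP distributions, and L is an assumed Lipschitz-style constant; the paper's actual bound may differ in form.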

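As a rough illustration of the formula-set idea, the Python sketch below scores a small set of candidate action-selection formulas by their average return over MDPs sampled from a distribution and keeps the best one. Everything in it (the chain MDP, the p_success sampling distribution, the four candidate formulas) is a hypothetical stand-in chosen for this example, not the authors' implementation:

    # Minimal sketch in the spirit of FSPS, under assumed toy definitions.
    import random

    N_STATES = 10          # chain states 0..9; reaching state 9 ends an episode
    ACTIONS = (-1, 1)      # move left or right along the chain

    def sample_mdp(rng):
        """Sample one MDP from a simple distribution: the probability that an
        action succeeds (rather than being reversed) varies per MDP."""
        return {"p_success": rng.uniform(0.6, 0.95)}

    def step(mdp, state, action, rng):
        """One transition: the chosen move may be reversed; small step cost,
        reward 1 on reaching the rightmost (terminal) state."""
        move = action if rng.random() < mdp["p_success"] else -action
        nxt = min(max(state + move, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else -0.01
        return nxt, reward, nxt == N_STATES - 1

    # Candidate "formulas": each maps (state, action) to a score; the greedy
    # policy picks the action with the highest score.
    FORMULAS = [
        lambda s, a: a * s,                    # prefer right when far along
        lambda s, a: a,                        # always move right
        lambda s, a: -a,                       # always move left
        lambda s, a: a * (N_STATES - 1 - s),   # prefer right when far from goal
    ]

    def run_episode(mdp, formula, rng, max_steps=100):
        """Run one episode under the greedy policy induced by a formula."""
        state, total = 0, 0.0
        for _ in range(max_steps):
            action = max(ACTIONS, key=lambda a: formula(state, a))
            state, reward, done = step(mdp, state, action, rng)
            total += reward
            if done:
                break
        return total

    def fsps_search(n_mdps=50, episodes_per_mdp=5, seed=0):
        """Score every formula by its average return over MDPs drawn from the
        distribution, and return the index and score of the best one."""
        rng = random.Random(seed)
        best_idx, best_score = -1, float("-inf")
        for i, formula in enumerate(FORMULAS):
            returns = []
            for _ in range(n_mdps):
                mdp = sample_mdp(rng)
                for _ in range(episodes_per_mdp):
                    returns.append(run_episode(mdp, formula, rng))
            score = sum(returns) / len(returns)
            if score > best_score:
                best_idx, best_score = i, score
        return best_idx, best_score

    if __name__ == "__main__":
        idx, score = fsps_search()
        print("best formula index:", idx, "mean return: %.3f" % score)

The point mirrored from the abstract is that the search runs over a fixed set of closed-form action-selection rules rather than over value-function parameters, which is what makes the selected rule reusable under a nearby MDP distribution.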
Keywords: reinforcement learning  policy search  policy transfer  non-stationary environment  formula set
Received: 2015-11-03
