Funding: National Natural Science Foundation of China (No. 61375069); Natural Science Foundation of Jiangsu Province (BK20131277)
Received: 2014-09-15

A Probability-Based Value Iteration on Optimal Policy Algorithm for POMDP
LIU Feng, WANG Chong-jun, LUO Bin. A Probability-Based Value Iteration on Optimal Policy Algorithm for POMDP[J]. Acta Electronica Sinica, 2016, 44(5): 1078-1084.
Authors: LIU Feng, WANG Chong-jun, LUO Bin
Affiliation:1. Software Institute, Nanjing University, Nanjing, Jiangsu 210093, China; 2. Department of Computer Science and Technology, Nanjing University, Nanjing, Jiangsu 210093, China; 3. National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, Jiangsu 210093, China
Abstract: As the scale of POMDP problems in applications grows, heuristic methods based on the region reachable under the optimal policy have become a research focus. Existing algorithms of this kind guarantee global optimality, but their criterion for choosing the best action is not precise enough, which limits their efficiency. This paper proposes a new value iteration method, PBVIOP (Probability-Based Value Iteration on Optimal Policy). During depth-first heuristic exploration, the method uses Monte Carlo sampling to estimate the probability that each action is optimal, according to the distribution of each action's Q-value between its upper and lower bounds, and selects the action with the highest probability as the exploration policy. Experimental results on four benchmark problems show that the PBVIOP algorithm converges to the globally optimal solution and significantly improves convergence efficiency.
Keywords: partially observable Markov decision process (POMDP); probability-based value iteration on optimal policy (PBVIOP); Monte Carlo method
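A minimal sketch of the Monte Carlo step the abstract describes: given upper and lower bounds on each action's Q-value, sample plausible Q-values and count how often each action comes out on top. This assumes, purely for illustration, a uniform distribution of the true Q-value between its bounds; the paper's actual distribution and all names below (`optimal_action_probabilities`, `q_bounds`, `n_samples`) are assumptions, not the authors' implementation.

```python
import random

def optimal_action_probabilities(q_bounds, n_samples=10000, seed=0):
    """Estimate, by Monte Carlo sampling, the probability that each
    action is optimal.

    q_bounds: list of (lower, upper) bounds on each action's Q-value.
    Assumes the true Q-value is uniformly distributed between its bounds
    (an illustrative assumption, not necessarily the paper's model).
    """
    rng = random.Random(seed)
    counts = [0] * len(q_bounds)
    for _ in range(n_samples):
        # Draw one plausible Q-value per action from within its bounds,
        # then credit the action that attains the maximum in this sample.
        draws = [rng.uniform(lo, hi) for lo, hi in q_bounds]
        counts[draws.index(max(draws))] += 1
    return [c / n_samples for c in counts]

# Example: action 1's interval [0.6, 1.2] dominates action 2's [0.1, 0.5]
# entirely and most of action 0's [0.0, 1.0], so action 1 should receive
# the highest estimated probability and be chosen for exploration.
probs = optimal_action_probabilities([(0.0, 1.0), (0.6, 1.2), (0.1, 0.5)])
best = probs.index(max(probs))  # expected: 1
```

Because action 2's upper bound (0.5) lies below action 1's lower bound (0.6), action 2 can never be sampled as optimal, which mirrors how interval dominance prunes actions in bound-based heuristic search.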
This article is indexed in Wanfang Data and other databases.