A Modified Memory-Based Reinforcement Learning Method for Solving POMDP Problems

Authors: Lei Zheng, Siu-Yeung Cho

Affiliation: (1) Department of Computer Engineering, Bogazici University, 34342 Bebek, Istanbul, Turkey

Abstract: Partially observable Markov decision processes (POMDPs) provide a mathematical framework for agent planning in stochastic and partially observable environments. The classic Bayesian optimal solution can be obtained by transforming the problem into a Markov decision process (MDP) over belief states. However, because the belief state space is continuous and multidimensional, the resulting problem is highly intractable. Many practical heuristic-based methods have been proposed, but most of them require a complete POMDP model of the environment, which is not always available. This article introduces a modified memory-based reinforcement learning algorithm, called modified U-Tree, that is capable of learning from raw sensor experience with minimal prior knowledge. The article describes an enhancement of the original U-Tree's state generation process that makes the generated model more compact, and also proposes a modification of the statistical test for reward estimation, which allows the algorithm to be benchmarked against traditional model-based algorithms on a set of well-known POMDP problems.
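
For context, the belief-state transformation mentioned in the abstract is the standard Bayes filter over hidden states. The formulation below uses generic POMDP notation rather than the paper's own symbols: T is the transition model, O the observation model, and \eta a normalizing constant.

    % Standard POMDP belief update (Bayes filter); notation is generic, not taken from the paper.
    % b(s): current belief over states, a: action executed, o: observation received.
    b'(s') = \eta \, O(o \mid s', a) \sum_{s \in S} T(s' \mid s, a) \, b(s),
    \qquad
    \eta^{-1} = \sum_{s' \in S} O(o \mid s', a) \sum_{s \in S} T(s' \mid s, a) \, b(s)

The planning problem then becomes an MDP whose states are the beliefs b, which range over the continuous probability simplex on S; this is the intractability the abstract refers to, and it motivates model-free methods such as U-Tree that never build the belief MDP explicitly.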
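
The statistical test mentioned in the abstract governs when U-Tree refines its state representation: a candidate split of a tree leaf is accepted only if it separates experience instances whose return distributions differ significantly. The sketch below illustrates that generic mechanism only; the use of a two-sample Kolmogorov-Smirnov test (as in McCallum's original U-Tree), the significance threshold, and the should_split name are illustrative assumptions, not the paper's modified test.

    from scipy.stats import ks_2samp

    # Generic U-Tree-style split criterion (illustrative sketch, not the
    # paper's modified test). Each leaf aggregates experience instances and
    # stores the discounted return observed for each; a candidate "fringe"
    # split partitions those instances by some history feature, e.g. a past
    # observation or action.

    SIGNIFICANCE = 0.05  # assumed threshold, chosen here for illustration

    def should_split(returns_left, returns_right, alpha=SIGNIFICANCE):
        """Accept a candidate split when the two groups' return
        distributions differ under a two-sample Kolmogorov-Smirnov test."""
        if len(returns_left) < 2 or len(returns_right) < 2:
            return False  # too little data to compare distributions
        statistic, p_value = ks_2samp(returns_left, returns_right)
        return p_value < alpha

    # Example: the two groups clearly come from different reward regimes,
    # so the leaf should be refined into two distinct states.
    left = [0.10, 0.20, 0.15, 0.12, 0.18]
    right = [0.90, 1.00, 0.95, 0.88, 1.02]
    print(should_split(left, right))  # True

Accepting such a split adds a distinction over past history to the agent's state, which is how U-Tree-style methods cope with partial observability without an explicit POMDP model.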