Recent Advances in Hierarchical Reinforcement Learning

Authors: Andrew G. Barto, Sridhar Mahadevan

Affiliation: Autonomous Learning Laboratory, Department of Computer Science, University of Massachusetts, Amherst, MA 01003

Abstract: Reinforcement learning is bedeviled by the curse of dimensionality: the number of parameters to be learned grows exponentially with the size of any compact encoding of a state. Recent attempts to combat the curse of dimensionality have turned to principled ways of exploiting temporal abstraction, where decisions are not required at each step, but rather invoke the execution of temporally-extended activities which follow their own policies until termination. This leads naturally to hierarchical control architectures and associated learning algorithms. We review several approaches to temporal abstraction and hierarchical organization that machine learning researchers have recently developed. Common to these approaches is a reliance on the theory of semi-Markov decision processes, which we emphasize in our review. We then discuss extensions of these ideas to concurrent activities, multiagent coordination, and hierarchical memory for addressing partial observability. Concluding remarks address open challenges facing the further development of reinforcement learning in a hierarchical setting.
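As a concrete illustration of the semi-Markov machinery the abstract invokes, consider a standard update from the options literature (e.g., the options framework of Sutton, Precup, and Singh, one of the approaches reviewed; the equation below is a representative sketch, not one quoted from this paper). When a temporally-extended activity ("option") o is initiated in state s and terminates after \tau time steps in state s', SMDP Q-learning updates

    Q(s, o) \leftarrow Q(s, o) + \alpha \Bigl[ r + \gamma^{\tau} \max_{o' \in \mathcal{O}_{s'}} Q(s', o') - Q(s, o) \Bigr],

where r = \sum_{k=1}^{\tau} \gamma^{k-1} r_k is the discounted reward accumulated while o executed and \mathcal{O}_{s'} is the set of options available in s'. Because the update is applied only when an option terminates, decisions and learning occur at option boundaries rather than at every primitive time step, which is the sense in which temporal abstraction shortens the effective decision horizon.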
Keywords: reinforcement learning, Markov decision processes, semi-Markov decision processes, hierarchy, temporal abstraction
|