Natural Actor-Critic
Authors: Jan Peters, Stefan Schaal
Affiliations:
a. Max-Planck-Institute for Biological Cybernetics, Tuebingen, Germany
b. University of Southern California, Los Angeles, CA 90089, USA
c. ATR Computational Neuroscience Laboratories, Kyoto 619-0288, Japan

Abstract: In this paper, we suggest a novel reinforcement learning architecture, the Natural Actor-Critic. The actor updates are achieved using stochastic policy gradients employing Amari's natural gradient approach, while the critic obtains both the natural policy gradient and additional parameters of a value function simultaneously by linear regression. We show that actor improvements with natural policy gradients are particularly appealing as these are independent of the coordinate frame of the chosen policy representation, and can be estimated more efficiently than regular policy gradients. The critic makes use of a special basis function parameterization motivated by the policy-gradient compatible function approximation. We show that several well-known reinforcement learning methods such as the original Actor-Critic and Bradtke's Linear Quadratic Q-Learning are in fact Natural Actor-Critic algorithms. Empirical evaluations illustrate the effectiveness of our techniques in comparison to previous methods, and also demonstrate their applicability for learning control on an anthropomorphic robot arm.
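
The central mechanism summarized above, that regressing returns onto the compatible features psi(s,a) = grad_theta log pi(a|s) yields regression weights w equal to the natural policy gradient, can be illustrated on a toy problem. The following is a minimal sketch, not the authors' implementation: it assumes a one-dimensional Gaussian policy with fixed variance and a quadratic reward, and all names and constants (sigma, alpha, target, the batch size) are illustrative choices.

import numpy as np

rng = np.random.default_rng(0)

sigma = 0.5      # fixed exploration noise of the Gaussian policy (assumed)
alpha = 0.1      # actor step size (assumed)
theta = -2.0     # policy mean, the only actor parameter in this toy
target = 1.5     # optimum of the toy reward r(a) = -(a - target)^2 (assumed)

for _ in range(200):
    # Roll out the current stochastic policy a ~ N(theta, sigma^2).
    actions = theta + sigma * rng.standard_normal(50)
    returns = -(actions - target) ** 2

    # Critic: regress returns on the compatible feature
    # psi(a) = d/dtheta log N(a; theta, sigma^2) = (a - theta) / sigma^2,
    # plus a constant baseline term.
    psi = (actions - theta) / sigma ** 2
    X = np.column_stack([psi, np.ones_like(psi)])
    w, baseline = np.linalg.lstsq(X, returns, rcond=None)[0]

    # Actor: the compatible-feature weight w estimates the natural
    # gradient, so the update is simply theta <- theta + alpha * w.
    theta += alpha * w

print(f"learned policy mean {theta:.3f}, optimum {target}")

Because the regression implicitly rescales the vanilla gradient by the inverse Fisher information (here 1/sigma^2, so the step is multiplied by sigma^2), the resulting update does not depend on how the Gaussian policy is parameterized, which is the coordinate-frame independence mentioned in the abstract.
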
Keywords: Policy-gradient methods; Compatible function approximation; Natural gradients; Actor-Critic methods; Reinforcement learning; Robot learning
This article is indexed in ScienceDirect and other databases.