Technical Note: Q-Learning
Authors: Christopher J.C.H. Watkins, Peter Dayan
Affiliations: (1) 25b Framfield Road, Highbury, London N5 1UU, England; (2) Centre for Cognitive Science, University of Edinburgh, 2 Buccleuch Place, Edinburgh EH8 9EH, Scotland
Abstract: Q-learning (Watkins, 1989) is a simple way for agents to learn how to act optimally in controlled Markovian domains. It amounts to an incremental method for dynamic programming which imposes limited computational demands. It works by successively improving its evaluations of the quality of particular actions at particular states. This paper presents and proves in detail a convergence theorem for Q-learning based on that outlined in Watkins (1989). We show that Q-learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely. We also sketch extensions to the cases of non-discounted, but absorbing, Markov environments, and where many Q values can be changed each iteration, rather than just one.
Keywords: Q-learning; reinforcement learning; temporal differences; asynchronous dynamic programming
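The abstract describes Q-learning as an incremental, dynamic-programming-style improvement of action-value estimates. The following is a minimal sketch, for illustration only, of the tabular one-step update on a toy problem; the environment, parameter values, and epsilon-greedy exploration scheme are assumptions made here and are not taken from the paper. The example keeps a discrete Q-table and repeatedly samples every action in every state, matching the conditions of the convergence theorem.

```python
# Minimal tabular Q-learning sketch (illustrative; the toy MDP, learning rate,
# discount factor, and exploration scheme below are assumptions, not from the paper).
import random

# Hypothetical 2-state, 2-action MDP: transition[(state, action)] = (next_state, reward)
transition = {
    (0, 0): (0, 0.0),
    (0, 1): (1, 1.0),
    (1, 0): (0, 0.0),
    (1, 1): (1, 2.0),
}

n_states, n_actions = 2, 2
gamma = 0.9    # discount factor
alpha = 0.1    # learning rate (step size)
epsilon = 0.1  # exploration rate, so every action keeps being sampled in every state

# Discrete (tabular) representation of the action-values, as the theorem requires.
Q = [[0.0] * n_actions for _ in range(n_states)]

state = 0
for _ in range(10000):
    # Epsilon-greedy action selection: usually exploit, occasionally explore.
    if random.random() < epsilon:
        action = random.randrange(n_actions)
    else:
        action = max(range(n_actions), key=lambda a: Q[state][a])

    next_state, reward = transition[(state, action)]

    # One-step Q-learning update:
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(Q[next_state])
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

    state = next_state

print(Q)  # Q[1][1] should approach 2 / (1 - 0.9) = 20 in this toy MDP
```

With a constant step size the estimates fluctuate around the optimal action-values; the theorem's probability-1 convergence applies under appropriately decaying learning rates.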