Practical Issues in Temporal Difference Learning
Authors: Gerald Tesauro
Affiliation: (1) IBM Thomas J. Watson Research Center, P.O. Box 704, Yorktown Heights, NY 10598, USA
Abstract: This paper examines whether temporal difference methods for training connectionist networks, such as Sutton's TD(λ) algorithm, can be successfully applied to complex real-world problems. A number of important practical issues are identified and discussed from a general theoretical perspective. These practical issues are then examined in the context of a case study in which TD(λ) is applied to learning the game of backgammon from the outcome of self-play. This is apparently the first application of this algorithm to a complex non-trivial task. It is found that, with zero knowledge built in, the network is able to learn from scratch to play the entire game at a fairly strong intermediate level of performance, which is clearly better than conventional commercial programs, and which in fact surpasses comparable networks trained on a massive human expert data set. This indicates that TD learning may work better in practice than one would expect based on current theory, and it suggests that further analysis of TD methods, as well as applications in other complex domains, may be worth investigating.
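The core update underlying the approach described in the abstract can be illustrated with a minimal sketch of TD(λ) using eligibility traces. The linear value function, toy features, and rewards below are purely hypothetical placeholders; Tesauro's system used a multilayer neural network evaluating backgammon positions during self-play.

```python
import numpy as np

def td_lambda_episode(features, rewards, w, alpha=0.1, gamma=1.0, lam=0.7):
    """One episode of TD(lambda) updates on weight vector w.

    features: list of state feature vectors x_t (length T+1)
    rewards:  list of rewards r_{t+1}, one per transition (length T)
    """
    e = np.zeros_like(w)                        # eligibility trace
    for t in range(len(rewards)):
        x, x_next = features[t], features[t + 1]
        v, v_next = w @ x, w @ x_next           # linear value estimates
        delta = rewards[t] + gamma * v_next - v  # TD error
        e = gamma * lam * e + x                  # accumulate trace
        w = w + alpha * delta * e                # TD(lambda) weight update
    return w

# Toy two-step episode ending in a win (reward 1 at the terminal step).
feats = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.0, 0.0])]
rews = [0.0, 1.0]
w = td_lambda_episode(feats, rews, np.zeros(2))
```

The trace `e` is what lets the final reward propagate credit back to earlier positions in the episode, which is the mechanism that allows learning from self-play outcomes alone.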
Keywords: Temporal difference learning; neural networks; connectionist methods; backgammon; games; feature discovery
This article is indexed by SpringerLink and other databases.