Relative Loss Bounds for Temporal-Difference Learning

Authors: Jürgen Forster, Manfred K. Warmuth

Affiliations: (1) Lehrstuhl Mathematik & Informatik, Fakultät für Mathematik, Ruhr-Universität Bochum, 44780 Bochum, Germany; (2) Computer Science Department, University of California, Santa Cruz, CA 95064, USA

Abstract: Foster and Vovk proved relative loss bounds for linear regression, where the total loss of the on-line algorithm minus the total loss of the best linear predictor (chosen in hindsight) grows logarithmically with the number of trials. We give similar bounds for temporal-difference learning. Learning takes place in a sequence of trials in which the learner tries to predict discounted sums of future reinforcement signals. The quality of the predictions is measured with the square loss, and we bound the total loss of the on-line algorithm minus the total loss of the best linear predictor for the whole sequence of trials. Again, the difference of the losses is logarithmic in the number of trials. The bounds hold for an arbitrary (worst-case) sequence of examples. We also give a bound on the expected difference for the case when the instances are chosen from an unknown distribution. For linear regression, a corresponding lower bound shows that this expected bound cannot be improved substantially.
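
To make the quantity being bounded concrete, here is a minimal numerical sketch in Python. It is not the paper's algorithm: for simplicity the discounted returns are computed offline and revealed to a plain ridge-regression-style on-line learner after each trial, and that learner's total square loss is compared with the total square loss of the best linear predictor chosen in hindsight. The synthetic data and all parameter values (the horizon T, dimension d, discount factor gamma, and regularization a) are assumptions made purely for illustration.

```python
import numpy as np

# Hypothetical problem sizes and parameters, chosen only for illustration.
rng = np.random.default_rng(0)
T, d, gamma, a = 200, 5, 0.9, 1.0

X = rng.normal(size=(T, d))   # instances x_1, ..., x_T
r = rng.normal(size=T)        # reinforcement signals

# Discounted returns y_t = r_t + gamma*r_{t+1} + gamma^2*r_{t+2} + ...
# (truncated at trial T), computed backwards via y_t = r_t + gamma*y_{t+1}.
y = np.zeros(T)
acc = 0.0
for t in reversed(range(T)):
    acc = r[t] + gamma * acc
    y[t] = acc

# On-line learner: ridge-regression-style weights, updated after each trial.
A = a * np.eye(d)             # regularized scatter matrix
b = np.zeros(d)
online_loss = 0.0
for t in range(T):
    w = np.linalg.solve(A, b)             # current weight vector
    online_loss += float(y[t] - X[t] @ w) ** 2
    A += np.outer(X[t], X[t])             # update with the new example
    b += y[t] * X[t]

# Best linear predictor in hindsight (ordinary least squares on all trials).
w_star, *_ = np.linalg.lstsq(X, y, rcond=None)
offline_loss = float(np.sum((y - X @ w_star) ** 2))

print(f"on-line total loss:   {online_loss:.2f}")
print(f"hindsight total loss: {offline_loss:.2f}")
print(f"relative loss:        {online_loss - offline_loss:.2f}")
```

In the temporal-difference setting studied in the paper, the discounted return for trial t is not available at prediction time, since it depends on reinforcement signals that arrive only in later trials; this is what distinguishes the problem from ordinary on-line linear regression.
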
Keywords: machine learning; temporal-difference learning; on-line learning; relative loss bounds