How to Better Use Expert Advice
Authors: Rani Yaroshinsky, Ran El-Yaniv, Steven S. Seiden
Abstract: This work is concerned with online learning from expert advice. Extensive work on this problem generated numerous "expert advice algorithms" whose total loss is provably bounded above in terms of the loss incurred by the best expert in hindsight. Such algorithms were devised for various problem variants corresponding to various loss functions. For some loss functions, such as the square, Hellinger and entropy losses, optimal algorithms are known. However, for two of the most widely used loss functions, namely the 0/1 and absolute loss, there are still gaps between the known lower and upper bounds.

In this paper we present two new expert advice algorithms and prove for them the best known 0/1 and absolute loss bounds. Given an expert advice algorithm ALG, the goal is to form an upper bound on the regret $L_{ALG} - L^*$ of ALG, where $L_{ALG}$ is the loss of ALG and $L^*$ is the loss of the best expert in hindsight. Typically, regret bounds of a "canonical form" $C \cdot \sqrt{L^* \ln N}$ are sought, where $N$ is the number of experts and $C$ is a constant. So far, the best known constant for the absolute loss function is $C = 2.83$, which is achieved by the recent IAWM algorithm of Auer et al. (2002). For the 0/1 loss function no bounds of this canonical form are known, and the best known regret bound is $L_{ALG} - L^* \leq L^* + C_1 \ln N + C_2 \sqrt{L^* \ln N + \frac{e}{4}\ln^2 N}$, where $C_1 = e - 2$ and $C_2 = 2\sqrt{e}$. This bound is achieved by a "P-norm" algorithm of Gentile and Littlestone (1999). Our first algorithm is a randomized extension of the "guess and double" algorithm of Cesa-Bianchi et al. (1997). While the guess and double algorithm achieves a canonical regret bound with $C = 3.32$, the expected regret of our randomized algorithm is canonically bounded with $C = 2.49$ for the absolute loss function. The algorithm utilizes one random choice at the start of the game. Like the deterministic guess and double algorithm, a deficiency of our algorithm is that it occasionally restarts itself and therefore "forgets" what it learned. Our second algorithm does not forget and enjoys the best known asymptotic performance guarantees for both the absolute and 0/1 loss functions. Specifically, in the case of the absolute loss, our algorithm is canonically bounded with $C$ approaching $\sqrt{2}$, and in the case of the 0/1 loss, with $C$ approaching $3/\sqrt{2} \approx 2.12$. In the 0/1 loss case the algorithm is randomized and the bound is on the expected regret.
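To make the quantities in the abstract concrete, the sketch below implements a standard exponentially weighted average forecaster for the absolute loss. This is a generic baseline from the expert-advice literature, not either of the paper's algorithms; the learning rate `eta` and the synthetic data are arbitrary illustrative choices. It records the algorithm's cumulative loss $L_{ALG}$, the best expert's loss $L^*$, and compares the regret $L_{ALG} - L^*$ to the canonical quantity $\sqrt{L^* \ln N}$.

```python
# Minimal sketch (not the paper's algorithm): exponentially weighted average
# forecaster for the absolute loss, used only to illustrate L_ALG, L*, the
# regret L_ALG - L*, and the canonical quantity sqrt(L* ln N).
import math
import random

def exp_weighted_forecaster(expert_preds, outcomes, eta=0.5):
    """expert_preds: T rounds, each a list of N predictions in [0, 1].
    outcomes:     T outcomes in [0, 1].
    Returns (algorithm_loss, best_expert_loss)."""
    n = len(expert_preds[0])
    weights = [1.0] * n
    alg_loss = 0.0
    expert_loss = [0.0] * n
    for preds, y in zip(expert_preds, outcomes):
        total = sum(weights)
        # Predict the weighted average of the experts' predictions.
        p = sum(w * x for w, x in zip(weights, preds)) / total
        alg_loss += abs(p - y)                     # absolute loss of the algorithm
        for i, x in enumerate(preds):
            loss_i = abs(x - y)
            expert_loss[i] += loss_i
            weights[i] *= math.exp(-eta * loss_i)  # exponential weight update
    return alg_loss, min(expert_loss)

# Tiny synthetic run: 3 experts, 200 rounds of random binary outcomes.
random.seed(0)
T, N = 200, 3
outcomes = [random.randint(0, 1) for _ in range(T)]
expert_preds = [[random.random() for _ in range(N)] for _ in range(T)]
L_alg, L_star = exp_weighted_forecaster(expert_preds, outcomes)
regret = L_alg - L_star
canonical = math.sqrt(L_star * math.log(N))
print(f"L_ALG = {L_alg:.2f}, L* = {L_star:.2f}, regret = {regret:.2f}")
print(f"sqrt(L* ln N) = {canonical:.2f}, regret / sqrt(L* ln N) = {regret / canonical:.2f}")
```

With a learning rate tuned as a function of $L^*$ (rather than the fixed `eta` above), forecasters of this family attain regret bounds of the canonical form $C\sqrt{L^* \ln N}$ discussed in the abstract; the difficulty the paper addresses is obtaining small constants $C$ when $L^*$ is not known in advance.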
Keywords: online learning, online prediction, learning from expert advice
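The "guess and double" technique of Cesa-Bianchi et al. (1997), which the paper's first algorithm randomizes, is one way to handle the fact that $L^*$ is unknown in advance. The sketch below shows only the generic deterministic idea under simplified assumptions, not the paper's randomized extension: guess a budget for $L^*$, tune the learning rate for that guess, and restart with a doubled budget once the current best expert exceeds it. The tuning rule `eta = sqrt(2 ln N / budget)` is a common textbook choice, not the constant-optimized one analyzed in the paper. The restart discards the learned weights, which is exactly the "forgetting" deficiency noted in the abstract.

```python
# Sketch of the generic "guess and double" restart scheme (not the paper's
# randomized variant): restart the weighted forecaster with a doubled budget
# and retuned learning rate whenever the best expert's loss exceeds the guess.
import math

def guess_and_double(expert_preds, outcomes, n_experts, initial_budget=1.0):
    budget = initial_budget
    eta = math.sqrt(2.0 * math.log(n_experts) / budget)
    weights = [1.0] * n_experts
    expert_loss = [0.0] * n_experts   # expert losses since the last restart
    alg_loss = 0.0
    for preds, y in zip(expert_preds, outcomes):
        p = sum(w * x for w, x in zip(weights, preds)) / sum(weights)
        alg_loss += abs(p - y)
        for i, x in enumerate(preds):
            loss_i = abs(x - y)
            expert_loss[i] += loss_i
            weights[i] *= math.exp(-eta * loss_i)
        if min(expert_loss) > budget:
            # Guess was too small: double it, retune eta, and forget the weights.
            budget *= 2.0
            eta = math.sqrt(2.0 * math.log(n_experts) / budget)
            weights = [1.0] * n_experts
            expert_loss = [0.0] * n_experts
    return alg_loss
```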