Online variance minimization
Authors: Manfred K. Warmuth, Dima Kuzmin
Affiliations: 1. University of California, Santa Cruz, USA
2. Google, Mountain View, USA
Abstract: We consider the following type of online variance minimization problem: in every trial t our algorithms get a covariance matrix $\boldsymbol{C}^t$ and try to select a parameter vector $\boldsymbol{w}^{t-1}$ such that the total variance over a sequence of trials, $\sum_{t=1}^{T} (\boldsymbol{w}^{t-1})^{\top} \boldsymbol{C}^{t} \boldsymbol{w}^{t-1}$, is not much larger than the total variance of the best parameter vector $\boldsymbol{u}$ chosen in hindsight. Two parameter spaces in $\mathbb{R}^n$ are considered: the probability simplex and the unit sphere. The first space is associated with the problem of minimizing risk in stock portfolios, and the second space leads to an online calculation of the eigenvector with minimum eigenvalue of the total covariance matrix $\sum_{t=1}^{T} \boldsymbol{C}^{t}$. For the first parameter space we apply the Exponentiated Gradient algorithm, which is motivated by a relative entropy regularization. In the second case, the algorithm has to maintain uncertainty information over all unit directions $\boldsymbol{u}$. For this purpose, directions are represented as dyads $\boldsymbol{u}\boldsymbol{u}^{\top}$ and the uncertainty over all directions as a mixture of dyads, which is a density matrix. The motivating divergence for density matrices is the quantum version of the relative entropy, and the resulting algorithm is a special case of the Matrix Exponentiated Gradient algorithm. In each of the two cases we prove bounds on the additional total variance incurred by the online algorithm over the best offline parameter.
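To make the two updates described in the abstract concrete, the following is a minimal NumPy sketch, not the paper's own pseudocode: the learning rate eta, the eigenvalue clipping constant, and the random test covariances are assumptions introduced here for illustration. The first function is an Exponentiated Gradient step on the probability simplex for the loss $\boldsymbol{w}^{\top}\boldsymbol{C}\boldsymbol{w}$; the second is a Matrix Exponentiated Gradient step on density matrices, exponentiating $\log \boldsymbol{W} - \eta\,\boldsymbol{C}$ and renormalizing to unit trace.

```python
import numpy as np

def eg_update(w, C, eta):
    """Exponentiated Gradient step on the probability simplex.

    The per-trial loss is w^T C w, so the gradient is 2 C w; EG multiplies
    each weight by exp(-eta * gradient) and renormalizes to sum one.
    """
    grad = 2.0 * C @ w
    w_new = w * np.exp(-eta * grad)
    return w_new / w_new.sum()

def matrix_eg_update(W, C, eta):
    """Matrix Exponentiated Gradient step on density matrices.

    W is a density matrix (symmetric PSD, unit trace) representing a mixture
    of dyads u u^T. Working in the eigenbasis keeps matrix log/exp real and
    symmetric; eigenvalues are clipped away from zero for numerical safety
    (the 1e-12 floor is an implementation choice, not from the paper).
    """
    vals, vecs = np.linalg.eigh(W)
    log_W = vecs @ np.diag(np.log(np.clip(vals, 1e-12, None))) @ vecs.T
    vals2, vecs2 = np.linalg.eigh(log_W - eta * C)
    W_new = vecs2 @ np.diag(np.exp(vals2)) @ vecs2.T
    return W_new / np.trace(W_new)

# Illustrative run on random PSD covariance matrices.
rng = np.random.default_rng(0)
n, T, eta = 4, 50, 0.1
w = np.full(n, 1.0 / n)        # uniform start on the simplex
W = np.eye(n) / n              # maximally mixed density matrix
total_var_simplex = total_var_density = 0.0
for _ in range(T):
    A = rng.standard_normal((n, n))
    C = A @ A.T / n                        # random covariance matrix
    total_var_simplex += w @ C @ w         # loss of the simplex weights
    total_var_density += np.trace(W @ C)   # expected variance under W
    w = eg_update(w, C, eta)
    W = matrix_eg_update(W, C, eta)
```

Note the ordering in the loop: each trial's variance is charged to the parameters chosen before $\boldsymbol{C}^t$ is revealed, matching the $\boldsymbol{w}^{t-1}$ indexing in the abstract; $\operatorname{tr}(\boldsymbol{W}\boldsymbol{C})$ is the variance $\boldsymbol{u}^{\top}\boldsymbol{C}\boldsymbol{u}$ averaged over the mixture of directions encoded by $\boldsymbol{W}$.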
This article is indexed in SpringerLink and other databases.