Uniformly Improving the Cramér–Rao Bound and Maximum-Likelihood Estimation
Abstract: An important aspect of estimation theory is characterizing the best achievable performance in a given estimation problem, as well as determining estimators that achieve the optimal performance. The traditional Cramér–Rao type bounds provide benchmarks on the variance of any estimator of a deterministic parameter vector under suitable regularity conditions, while requiring a priori specification of a desired bias gradient. In applications, it is often not clear how to choose the required bias. A direct measure of the estimation error that takes both the variance and the bias into account is the mean squared error (MSE), which is the sum of the variance and the squared norm of the bias. Here, we develop bounds on the MSE in estimating a deterministic parameter vector $\mathbf{x}_0$ over all bias vectors that are linear in $\mathbf{x}_0$, which includes the traditional unbiased estimation as a special case. In some settings, it is possible to minimize the MSE over all linear bias vectors. More generally, direct minimization is not possible since the optimal solution depends on the unknown $\mathbf{x}_0$. Nonetheless, we show that in many cases we can find bias vectors that result in an MSE bound that is smaller than the Cramér–Rao lower bound (CRLB) for all values of $\mathbf{x}_0$. Furthermore, we explicitly construct estimators that achieve these bounds in cases where an efficient estimator exists, by performing a simple linear transformation on the standard maximum-likelihood (ML) estimator. This leads to estimators that result in a smaller MSE than the ML approach for all possible values of $\mathbf{x}_0$.
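As a rough numerical illustration of the bias–variance trade-off the abstract describes, the following Monte Carlo sketch assumes a linear Gaussian model $y = H\mathbf{x}_0 + w$ (a model not spelled out in the abstract) and compares the efficient ML (least-squares) estimator against a linearly transformed version $(I+M)\hat{\mathbf{x}}_{\mathrm{ML}}$, whose bias $M\mathbf{x}_0$ is linear in $\mathbf{x}_0$. The particular shrinkage matrix $M$ here is an arbitrary illustrative choice, not the optimized bias construction of the paper, so uniform dominance over the CRLB is not claimed; the sketch only shows how the MSE splits into a variance term and a squared-bias term.

```python
import numpy as np

# Linear Gaussian model: y = H x0 + w, w ~ N(0, sigma^2 I).
# The ML (least-squares) estimate is unbiased with covariance sigma^2 (H^T H)^{-1},
# whose trace is the CRLB on the total MSE of any unbiased estimator.
# A linearly biased estimator x_hat = (I + M) x_ml trades variance for a bias
# that is linear in x0; its MSE is the sum of variance and squared bias norm.

rng = np.random.default_rng(0)

n, m = 4, 20                          # parameter and observation dimensions
H = rng.standard_normal((m, n))
sigma = 1.0
x0 = np.array([1.0, -0.5, 0.3, 2.0])  # illustrative true parameter

J = H.T @ H / sigma**2                # Fisher information matrix
crlb = np.trace(np.linalg.inv(J))     # CRLB on total MSE (unbiased case)

M = -0.1 * np.eye(n)                  # hypothetical shrinkage, not the paper's choice

trials = 20000
err_ml = err_lin = 0.0
for _ in range(trials):
    y = H @ x0 + sigma * rng.standard_normal(m)
    x_ml = np.linalg.lstsq(H, y, rcond=None)[0]   # efficient ML estimate
    x_lin = (np.eye(n) + M) @ x_ml                # linearly transformed ML estimate
    err_ml += np.sum((x_ml - x0) ** 2)
    err_lin += np.sum((x_lin - x0) ** 2)

print(f"CRLB                : {crlb:.4f}")
print(f"ML empirical MSE    : {err_ml / trials:.4f}")
print(f"Linear-bias MSE     : {err_lin / trials:.4f}")
print(f"Squared bias ||Mx0||^2: {np.sum((M @ x0) ** 2):.4f}")
```

For this fixed $M$, the transformed estimator can beat or lose to the ML estimator depending on $\mathbf{x}_0$; the paper's contribution is to choose the linear bias so that the resulting MSE bound, and the corresponding estimator when an efficient estimator exists, improves on the CRLB for every value of $\mathbf{x}_0$.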