Abstract: | In this paper, a new class of parameter estimation algorithms, called turbo estimation algorithms (TEA), is introduced. The basic idea is that every estimation algorithm (EA) must perform a sort of intrinsic denoising of the input data in order to achieve reliable estimates. Optimal algorithms implement the best possible noise reduction compatible with the problem definition and the related lower bounds on the estimation error variance; their computational complexity, however, is often overwhelming, so that in practice one must resort to suboptimal algorithms, in which case some residual noise could still be eliminated. TEA methods reduce this residual noise by means of a closed-loop configuration: an external denoising system, fed by the output of the master estimator, generates an enhanced signal that is input to the estimator at the next iteration. The working principle of such schemes can be described in terms of the more general turbo principle, well known in information theory. As an example, a turbo algorithm for modal analysis is described, which employs the Tufts-Kumaresan (TK) method as the master EA. |
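The closed-loop scheme summarized above can be sketched in a few lines. This is an illustrative toy only, not the paper's algorithm: a crude FFT-peak frequency estimator stands in for the master EA (the paper uses the Tufts-Kumaresan method), and the model-fit denoiser, mixing weight, and signal parameters are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 1000.0      # sampling rate in Hz (hypothetical)
f_true = 123.0   # true tone frequency in Hz (hypothetical)
n = 512
t = np.arange(n) / fs
x = np.cos(2 * np.pi * f_true * t) + 0.5 * rng.standard_normal(n)

def master_ea(sig):
    """Crude frequency estimator (FFT peak); stands in for the master EA."""
    spec = np.abs(np.fft.rfft(sig))
    k = np.argmax(spec[1:]) + 1          # skip the DC bin
    return k * fs / len(sig)

def denoiser(sig, f_hat):
    """Model-based denoiser: least-squares fit of a single tone at f_hat."""
    tt = np.arange(len(sig)) / fs
    A = np.column_stack([np.cos(2 * np.pi * f_hat * tt),
                         np.sin(2 * np.pi * f_hat * tt)])
    coef, *_ = np.linalg.lstsq(A, sig, rcond=None)
    return A @ coef

# Turbo loop: the estimator output drives the external denoiser, whose
# enhanced signal is fed back to the estimator at the next iteration.
y = x
for _ in range(3):
    f_hat = master_ea(y)
    # Mix the denoised reconstruction with the raw data so the
    # observations stay in the loop (mixing weight is an assumption).
    y = 0.5 * y + 0.5 * denoiser(x, f_hat)

print(f_hat)
```

The point of the sketch is the feedback structure, not the particular estimator or denoiser: any master EA and any model-based enhancement stage can be slotted into the same loop.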