Information theoretic learning with adaptive kernels
Authors: Abhishek Singh, José C. Príncipe
Affiliation: Computational NeuroEngineering Laboratory, NEB 486, Bldg #33, P.O. Box 116130, University of Florida, Gainesville, FL 32611, USA
Abstract: This paper presents an online algorithm for adapting the kernel width, a free parameter in information theoretic cost functions based on Renyi's entropy. This kernel computes the interactions between the error samples and essentially controls the nature of the performance surface over which the parameters of the system adapt. Since the error in an adaptive system is non-stationary during training, a fixed kernel width may affect the adaptation dynamics and even compromise the location of the global optimum in parameter space. The proposed online algorithm for adapting the kernel width is derived from first principles and minimizes the Kullback-Leibler divergence between the estimated error density and the true density. We characterize the performance of this novel approach with simulations of linear and nonlinear system training, using the minimum error entropy criterion with the proposed adaptive kernel algorithm. We conclude that adapting the kernel width improves the rate of convergence of the parameters and decouples the convergence rate from the misadjustment of the filter weights.
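The abstract only outlines the approach, so a minimal sketch of how minimum error entropy (MEE) training of an FIR filter could be combined with an online kernel-width update is given below. The function name mee_adaptive_kernel, the filter length, the window length L, the step sizes, and the use of the log-likelihood of the newest error under the Parzen estimate as a stochastic surrogate for the Kullback-Leibler minimization are all illustrative assumptions, not the paper's exact derivation.

import numpy as np

def gaussian(x, sigma):
    # Gaussian kernel used in the Parzen estimate of the error density
    return np.exp(-x**2 / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)

def mee_adaptive_kernel(x, d, num_taps=4, L=20, eta_w=0.05, eta_s=0.01, sigma0=1.0):
    # Online MEE training of an FIR filter, adapting the kernel width sigma
    # alongside the weights. All parameter values are illustrative.
    w = np.zeros(num_taps)
    sigma = sigma0
    err_hist, in_hist = [], []            # sliding windows of errors and input vectors

    for n in range(num_taps, len(x)):
        u = x[n - num_taps:n][::-1]       # current input vector
        e = d[n] - w @ u                  # current error sample
        err_hist.append(e)
        in_hist.append(u)

        e_past = np.array(err_hist[-L - 1:-1])   # up to L previous errors
        u_past = np.array(in_hist[-L - 1:-1])
        if len(e_past) == 0:
            continue

        de = e - e_past                   # pairwise error differences
        du = u - u_past                   # matching input-vector differences
        k = gaussian(de, sigma)

        # Weight update: stochastic gradient ascent on the information potential
        # V = (1/L) * sum_i G_sigma(e_n - e_{n-i}), i.e. the MEE criterion.
        grad_w = np.sum((k * de)[:, None] * du, axis=0) / (sigma**2 * len(e_past))
        w += eta_w * grad_w

        # Kernel width update: gradient ascent on log p_hat(e_n), the likelihood
        # of the newest error under the Parzen estimate built from past errors.
        # This pushes the estimated density toward the true error density and is
        # used here as a stochastic surrogate for minimizing the KL divergence
        # (an assumption; the paper derives its own update from first principles).
        p_hat = np.mean(k) + 1e-12
        dp_dsigma = np.mean(k * (de**2 / sigma**3 - 1.0 / sigma))
        sigma = max(sigma + eta_s * dp_dsigma / p_hat, 1e-3)

    return w, sigma

Here x and d are 1-D NumPy arrays of equal length (filter input and desired response); the returned sigma is the kernel width reached at the end of training rather than a fixed value chosen in advance.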
Keywords: Information theoretic learning; Adaptive system training; Minimum error entropy; Kullback-Leibler divergence; Kernel width
This record is indexed by ScienceDirect and other databases.