A robust incremental learning method for non-stationary environments
Authors: David Martínez-Rego, Beatriz Pérez-Sánchez, Oscar Fontenla-Romero, Amparo Alonso-Betanzos
Affiliation:Laboratory for Research and Development in Artificial Intelligence (LIDIA), Department of Computer Science, Faculty of Informatics, University of A Coruña, Campus de Elviña s/n, 15071 A Coruña, Spain
Abstract: Recent machine learning challenges require the ability to learn in non-stationary environments, which calls for new algorithms that can cope with changes in the underlying problem being learnt. These changes may be gradual or trend changes, abrupt changes, or recurring contexts. Because the dynamics of such changes can differ widely, existing machine learning algorithms have difficulty coping with them. Several methods based, for instance, on ensembles or variable-length windowing have been proposed for this task.

In this work we propose a new method for single-layer neural networks based on introducing a forgetting function into an incremental online learning algorithm. The forgetting function assigns a monotonically increasing importance to new data. By combining incremental learning with this increasing importance assignment, the network forgets rapidly in the presence of changes while maintaining stable behavior when the context is stationary.

The performance of the method has been tested on several regression and classification problems, and its results compared with those of previous works. The proposed algorithm shows high adaptability to changes while keeping its consumption of computational resources low.
Keywords: Incremental learning; Concept drift; Online learning; Neural networks
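The forgetting-function idea described in the abstract can be illustrated with a small sketch. The code below is a minimal illustration, not the authors' algorithm: it trains a single linear unit online with exponentially weighted recursive least squares, where a forgetting factor lam discounts past samples so that newer data always receives more weight. The class name ForgettingLinearUnit, the parameter values, and the toy drift stream are all hypothetical choices made for this example.

import numpy as np

class ForgettingLinearUnit:
    """Single linear unit trained online with a forgetting factor.

    Minimal sketch (not the paper's exact method): exponentially
    weighted recursive least squares. With 0 < lam < 1, a sample
    seen k steps ago is weighted by lam**k, so importance grows
    monotonically toward newer data, as the abstract describes.
    """

    def __init__(self, n_inputs, lam=0.98, delta=100.0):
        self.lam = lam                          # forgetting factor
        self.w = np.zeros(n_inputs + 1)         # weights plus bias
        self.P = delta * np.eye(n_inputs + 1)   # inverse-covariance estimate

    def _augment(self, x):
        return np.append(x, 1.0)                # append constant bias input

    def predict(self, x):
        return self._augment(x) @ self.w

    def partial_fit(self, x, y):
        phi = self._augment(x)
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)   # correction strength for this sample
        error = y - phi @ self.w                # prediction error before updating
        self.w = self.w + gain * error
        # Dividing by lam inflates P each step, which discounts old evidence.
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam
        return error

# Usage: track an abrupt concept change in a toy regression stream.
rng = np.random.default_rng(0)
model = ForgettingLinearUnit(n_inputs=1, lam=0.95)
for t in range(2000):
    x = rng.normal(size=1)
    slope = 2.0 if t < 1000 else -1.0           # abrupt drift at t = 1000
    y = slope * x[0] + 0.1 * rng.normal()
    model.partial_fit(x, y)
print(model.w)                                   # estimate tracks the post-drift slope

A smaller lam forgets faster after a change but is noisier during stationary stretches; balancing this stability-plasticity trade-off is precisely the problem the paper's forgetting function addresses.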