AVLR-EBP: A Variable Step Size Approach to Speed-up the Convergence of Error Back-Propagation Algorithm |
| |
Authors: | Arman Didandeh Nima Mirbakhsh Ali Amiri Mahmood Fathy |
| |
Affiliation: | (1) School of Electronics, Telecommunication and Computer Engineering, Hankuk Aviation University, 200-1, Hwajeon-dong, Deokyang-gu, Koyang-city, Kyonggi-do, 412-791, Korea;(2) Software Technology Institute, Information and Communications University, 517-10, Dogok-dong, Kangnam-gu, Seoul, 135-854, Korea |
| |
Abstract: | A critical issue for Neural Network based large-scale data mining algorithms is how to speed up learning. This
problem is particularly challenging for the Error Back-Propagation (EBP) algorithm in Multi-Layered Perceptron (MLP) Neural Networks,
given their significant applications in many scientific and engineering problems. In this paper, we propose an Adaptive Variable
Learning Rate EBP (AVLR-EBP) algorithm to attack the challenging problem of reducing the convergence time of EBP, aiming
at markedly faster convergence than the standard EBP algorithm. The idea is inspired by adaptive filtering,
which led us to two related methods of calculating the learning rate. A mathematical analysis of the AVLR-EBP algorithm
confirms its convergence property. The AVLR-EBP algorithm is applied to data classification. Simulation results
on many well-known data sets demonstrate that the algorithm achieves a considerable reduction in convergence time
compared to the standard EBP algorithm. In classifying the IRIS, Wine, Breast Cancer, Semeion
and SPECT Heart datasets, the proposed algorithm requires fewer learning epochs than the standard EBP algorithm. |
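The record does not reproduce the paper's actual AVLR update rule, so the following is only a minimal sketch of the general idea of a variable-step gradient method: grow the learning rate while the error keeps falling, shrink it when the error rises. The toy quadratic loss, the `grow`/`shrink` factors, and all names here are illustrative assumptions, not the authors' algorithm.

```python
# Hedged sketch of a variable-learning-rate gradient step (NOT the
# paper's AVLR-EBP rule): adapt the step size from the observed error
# trend, on a toy quadratic loss f(w) = (w - 3)^2.

def train(w=0.0, lr=0.1, grow=1.05, shrink=0.5, epochs=100):
    loss = lambda w: (w - 3.0) ** 2
    grad = lambda w: 2.0 * (w - 3.0)
    prev = loss(w)
    for _ in range(epochs):
        w -= lr * grad(w)        # ordinary gradient step
        cur = loss(w)
        # enlarge the rate while the error decreases, cut it otherwise
        lr = lr * grow if cur < prev else lr * shrink
        prev = cur
    return w, lr

w, lr = train()
print(round(w, 3))  # converges near the minimum at w = 3
```

In a full EBP setting the same heuristic would be applied to the weight matrices of each MLP layer after every epoch, which is the sense in which the step size is "variable".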
| |
Keywords: | |
This document is indexed by SpringerLink and other databases.
|