Fast and improved backpropagation learning of multi-layer artificial neural network using adaptive activation function |
| |
Authors: | Sashmita Panda; Ganapati Panda |
| |
Affiliation: | 1. G. S. Sanyal School of Telecommunications, Indian Institute of Technology, Kharagpur, India; 2. Electronics and Communication Engineering, C V Raman Engineering College, Bhubaneswar, India |
| |
Abstract: | In the conventional backpropagation (BP) learning algorithm used for training the connecting weights of an artificial neural network (ANN), a sigmoidal activation function with a fixed slope is used. This limitation leads to slower training of the network because only the weights of the different layers are adjusted by the conventional BP algorithm. To accelerate the rate of convergence during the training phase of the ANN, in addition to the weight updates, the slope of the sigmoid function associated with each artificial neuron can also be adjusted by using a newly developed learning rule. To achieve this objective, in this paper, new BP learning rules for adjusting the slope of the activation function associated with each neuron have been derived. The combined rules, for both the connecting weights and the slopes of the sigmoid functions, are then applied to the ANN structure to achieve faster training. In addition, two benchmark problems, classification and nonlinear system identification, are solved using the trained ANN. The results of simulation-based experiments demonstrate that, in general, the proposed new BP learning rules for slope and weight adjustments of the ANN provide superior convergence performance during the training phase, as well as improved performance in terms of root mean square error and mean absolute deviation for classification and nonlinear system identification problems. |
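The core idea in the abstract, treating the sigmoid slope as a trainable parameter alongside the weights, can be sketched for a single neuron as follows. This is an illustrative gradient-descent derivation, not the authors' exact learning rules; the names (`w`, `lam`, `eta`) and the one-neuron setup are assumptions for the sketch. With output y = 1 / (1 + exp(-lam * w * x)), the chain rule gives dy/dw = lam * x * y * (1 - y) and dy/dlam = w * x * y * (1 - y), so both parameters can be updated from the same error signal.

```python
# Illustrative sketch: gradient descent on both the connecting weight w and
# the sigmoid slope lam of a single neuron, minimizing the squared error
# (d - y)^2 / 2 for a target d. Not the paper's exact update rules.
import math

def sigmoid(u, lam):
    """Sigmoid with adjustable slope: f(u) = 1 / (1 + exp(-lam * u))."""
    return 1.0 / (1.0 + math.exp(-lam * u))

def train_step(x, d, w, lam, eta=0.5):
    """One combined BP step: update both w and lam from the same error."""
    u = w * x
    y = sigmoid(u, lam)
    e = d - y
    fprime = y * (1.0 - y)                  # derivative of f w.r.t. (lam * u)
    w_new = w + eta * e * fprime * lam * x  # dy/dw   = lam * x * y * (1 - y)
    lam_new = lam + eta * e * fprime * u    # dy/dlam = w * x * y * (1 - y)
    return w_new, lam_new

# Train toward a target output of 0.9 for input x = 1.0.
w, lam = 0.1, 1.0
for _ in range(200):
    w, lam = train_step(1.0, 0.9, w, lam)
# The output sigmoid(w * 1.0, lam) should approach the target, with both
# the weight and the slope contributing to the adaptation.
```

Because the slope update is proportional to the neuron's net input, it supplies an extra adaptation path that the fixed-slope BP rule lacks, which is the mechanism the abstract credits for the faster convergence.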
| |
Keywords: | adaptive learning; activation function based classification and nonlinear system identification; artificial neural network; classification; improved fast BP algorithm; mean absolute deviation; sigmoid functions; root mean square error |