Abstract: | Despite many advances, determining the proper size of a neural network remains an important problem, especially for its practical
implications for learning and generalization. Unfortunately, it is not usually obvious which size is best; a
system that is too small will not be able to learn the data, while one that is just large enough may learn very slowly and be
very sensitive to initial conditions and learning parameters. There are two approaches to determining the network size:
pruning and growing. Pruning consists of training a network that is larger than necessary, and then removing unnecessary
weights/nodes. Here, a new pruning method is developed, based on the penalty-term method. This method improves the generalization
ability of neural networks, and reduces the retraining time needed after pruning weights/nodes.
This work was presented, in part, at the 6th International Symposium on Artificial Life and Robotics, Tokyo, Japan, January
15–17, 2001. |