Assessing the Impact of Input Features in a Feedforward Neural Network |
| |
Authors: | W. Wang, P. Jones, D. Partridge |
| |
Affiliation: | (1) Department of Computer Science, School of Engineering and Computer Science, University of Exeter, Exeter, UK |
| |
Abstract: | For a variety of reasons, the relative impacts of neural-net inputs on the output of a network’s computation are valuable information
to obtain. In particular, it is desirable to identify the significant features, or inputs, of a data-defined problem before
the data is sufficiently preprocessed to enable high performance neural-net training. We have defined and tested a technique
for assessing such input impacts, which will be compared with a method described in a paper published earlier in this journal.
The new approach, known as the ‘clamping’ technique, offers efficient impact assessment of the input features of the problem.
Results of the clamping technique prove to be robust under a variety of different network configurations. Differences in architecture,
training parameter values and subsets of the data all deliver much the same impact rankings, which supports the notion that
the technique ranks an inherent property of the available data rather than a property of any particular feedforward neural
network. The success, stability and efficiency of the clamping technique are shown to hold for a number of different real-world
problems. In addition, we subject the previously published technique, which we will call the ‘weight product’ technique, to
the same tests in order to provide directly comparable information. |
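The abstract describes the clamping technique only at a high level. A minimal sketch of one common reading of it, holding (clamping) each input feature at its mean value across the data set and measuring how much the network output changes, might look like the following. The toy data, the weights, and the `net` and `clamping_impacts` names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data set (hypothetical, not from the paper): feature 0 drives
# the target, feature 1 is uninformative noise.
X = rng.normal(size=(200, 2))

# A fixed, hand-built single-layer model standing in for a trained
# feedforward network: out = tanh(X @ W). Any trained model exposing
# the same call signature would work in its place.
W = np.array([[2.0], [0.05]])   # strong weight on feature 0, weak on 1

def net(inputs):
    return np.tanh(inputs @ W).ravel()

def clamping_impacts(model, X):
    """Estimate each input feature's impact by clamping it to its mean.

    The impact score for feature j is the mean absolute change in the
    network output when column j is held at its mean value.
    """
    baseline = model(X)
    impacts = []
    for j in range(X.shape[1]):
        X_clamped = X.copy()
        X_clamped[:, j] = X[:, j].mean()   # clamp feature j
        impacts.append(np.mean(np.abs(model(X_clamped) - baseline)))
    return np.array(impacts)

impacts = clamping_impacts(net, X)
ranking = np.argsort(-impacts)   # most influential feature first
print(ranking)                   # → [0 1]
```

Because the score depends only on the data and the trained model's input-output behaviour, not on weights or architecture directly, repeating the measurement across different network configurations gives the ranking-stability comparison the abstract reports.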
| |
Keywords: | Clamping; Feature salience ranking; Input impact; Neural networks |