The framework is empirically validated using three different types of regression task: (1) a one-to-one (o–o) task, f(x): x_i → y_j; (2) a many-to-one (m–o) task, f(x): {x_i, x_{i+1}, …} → y_j; and (3) a one-to-many (o–m) task, f(x): x_i → {y_j, y_{j+1}, …}. The first two are assigned to feedforward nets, while the third, owing to its complexity, is assigned to a recurrent neural net.
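As a rough illustration of the three task shapes (not the actual experimental setup used in the paper), the tasks might be instantiated as below; the toy layer sizes, the repeated-input decoding scheme for the recurrent net, and the choice of PyTorch are assumptions made only for this sketch.

```python
# Minimal sketch of the three regression task shapes (assumed architectures,
# not the paper's benchmarks).
import torch
import torch.nn as nn

# (1) one-to-one: f(x): x_i -> y_j, handled by a feedforward net
oo_net = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))

# (2) many-to-one: f(x): {x_i, x_{i+1}, ...} -> y_j, also feedforward
#     (here: a window of 5 inputs mapped to a single output)
mo_net = nn.Sequential(nn.Linear(5, 16), nn.Tanh(), nn.Linear(16, 1))

# (3) one-to-many: f(x): x_i -> {y_j, y_{j+1}, ...}, assigned to a recurrent net
#     (the single input is repeated over T steps; the RNN emits one y per step)
class OneToMany(nn.Module):
    def __init__(self, hidden=16, steps=5):
        super().__init__()
        self.steps = steps
        self.rnn = nn.RNN(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                                 # x: (batch, 1)
        seq = x.unsqueeze(1).repeat(1, self.steps, 1)     # (batch, T, 1)
        h, _ = self.rnn(seq)                              # (batch, T, hidden)
        return self.head(h).squeeze(-1)                   # (batch, T)

om_net = OneToMany()

x = torch.randn(8, 1)
print(oo_net(x).shape)                   # torch.Size([8, 1])
print(mo_net(torch.randn(8, 5)).shape)   # torch.Size([8, 1])
print(om_net(x).shape)                   # torch.Size([8, 5])
```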
Throughout the empirical work, higher-order generalization is validated by testing a net's ability to perform symmetrically related or isomorphic functions, generated by applying symmetric transformations (STs) to a net's weights. The transformed weights of a base net (BN) are inherited by a derived net (DN); this inheritance is viewed as the reuse of information. The overall framework is also considered in terms of its alignment with neural models, for example, which order (or level) of generalization can be performed by which specific type of neuron model.
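The following sketch illustrates one way a BN's weights could be transformed and inherited by a DN; the particular transformation shown (negating the output layer, which maps f(x) to the symmetrically related function −f(x)) is an assumed example and not necessarily the ST used in the experiments.

```python
# Sketch: a derived net (DN) inherits the transformed weights of a base net (BN).
# The sign flip of the output layer is an assumed example of a symmetric
# transformation; it yields the symmetrically related function -f(x).
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
base_net = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))  # BN

# DN inherits the BN weights (reuse of information), then the ST is applied.
derived_net = copy.deepcopy(base_net)                                     # DN
with torch.no_grad():
    derived_net[2].weight.neg_()   # negate output-layer weights
    derived_net[2].bias.neg_()     # and bias

x = torch.linspace(-1.0, 1.0, 5).unsqueeze(1)
print(torch.allclose(derived_net(x), -base_net(x)))   # True: DN computes -f(x)
```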
The complete framework may not be applicable to all neural models; in fact, some orders may be special cases that apply only to specific neuron models. This is indeed shown to be the case: lower-order generalization is a general case applicable to all neuron models, whereas higher-order generalization is a particular or special case. This paper focuses on initial results; some of the aims are demonstrated and amplified through the experimental work.