Correlation structure of training data and the fitting ability of back propagation networks: Some experimental results
Authors: Nihat Yildiz
Affiliation: (1) Department of Physics, Cumhuriyet University, 58140 Sivas, Turkey
Abstract: White [6–8] has theoretically shown that learning procedures used in network training are inherently statistical in nature. This paper takes a small but pioneering experimental step towards learning about this statistical behaviour by showing that the results obtained are completely in line with White's theory. We show that, given two random vectors X (input) and Y (target) which follow a two-dimensional standard normal distribution, and fixed network complexity, the network's fitting ability definitely improves with increasing correlation coefficient r_XY (0 ≤ r_XY ≤ 1) between X and Y. We also provide numerical examples which support that both increasing the network complexity and training for much longer improve the network's performance. However, as we clearly demonstrate, these improvements are far from dramatic, except in the case r_XY = +1. This is mainly due to the existence of a theoretical lower bound on the inherent conditional variance, as we show both analytically and numerically. Finally, the fitting ability of the network on a test set is illustrated with an example.

Nomenclature:
- X - General r-dimensional random vector. In this work it is a one-dimensional normal vector and represents the input vector
- Y - General p-dimensional random vector. In this work it is a one-dimensional normal vector and represents the target vector
- Z - General (r+p)-dimensional random vector. In this work it is a two-dimensional normal vector
- E - Expectation operator (Lebesgue integral)
- g(X) = E(Y|X) - Conditional expectation
- ε - Experimental random error (defined by Eq. (2.1))
- y - Realized target value
- o - Output value
- f - Network's output function, formally expressed as f: R^r × W → R^p, where W is the appropriate weight space
- λ - Average (or expected) performance function, defined by Eq. (2.2) as λ(w) = E[π(Y, f(X, w))], w ∈ W
- π - Network's performance
- w* - Weight vector of the optimal solution; that is, the network's objective is to make λ(w*) minimal
- C_1 - Component one
- C_2 - Component two
- Z - Matrix of realised values of the random vector Z over n observations
- Z_t - Transformed version of the matrix Z such that X and Y take values in [0, 1]
- X_t, Y_t - Transformed versions of X and Y; both are standard one-dimensional normal vectors
- n_h - Number of hidden nodes (neurons)
- r_XY - Correlation coefficient between either X and Y or X_t and Y_t
- λ_s and λ_k - In Eq. (3.1) and afterwards, λ_s is the average value over 100 different Z_t matrices and λ_k is the error function of the k-th Z_t matrix. In Eq. (3.1) the summation runs from k = 1 to 100, and in Eq. (3.2) from i = 1 to n. In Eq. (3.2), o_ki and y_ki are the output and target values for the k-th Z_t matrix and i-th observation, respectively
- λ^{1/2}(w*) and λ_k(w_n) - λ_k(w_n) is the sample analogue of λ^{1/2}(w*)
- σ_Y² - In Eq. (4.1) and afterwards, σ_Y² is the variance of Y
- σ_{Y_t}² - Variance of Y_t. In Sect. 4.3 the transformation is Y_t = aY + b
- Y_max, Y_min - The maximum and minimum values of Y
- R - Correlation matrix of X and Y
- Σ - Covariance matrix of X and Y
- ∈ - Membership symbol in set theory
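The conditional-variance bound mentioned in the abstract is explicit for this data model: for a standard bivariate normal pair, E(Y|X) = r_XY·X and Var(Y|X) = σ_Y²(1 − r_XY²) = 1 − r_XY², so no network can push the expected squared error below 1 − r_XY². The following is a minimal sketch of such an experiment, not the paper's procedure: it uses plain batch gradient descent on a 1-n_h-1 tanh network instead of the Extended-Delta-Bar method, does not rescale the data to [0, 1], and the function names and hyperparameters (make_data, train_mlp, n_h = 5, epochs, lr) are illustrative assumptions rather than values taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

def make_data(r, n=500):
    # Draw n samples of (X, Y) from a standard bivariate normal with correlation r.
    cov = np.array([[1.0, r], [r, 1.0]])
    z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
    return z[:, :1], z[:, 1:]   # X of shape (n, 1), Y of shape (n, 1)

def train_mlp(x, y, n_h=5, epochs=5000, lr=0.05):
    # Plain batch gradient descent on mean squared error for a 1-n_h-1 tanh network
    # (an assumed stand-in; the paper's Extended-Delta-Bar training is not reproduced here).
    n = x.shape[0]
    w1 = rng.normal(scale=0.5, size=(1, n_h)); b1 = np.zeros(n_h)
    w2 = rng.normal(scale=0.5, size=(n_h, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(x @ w1 + b1)          # hidden-layer activations
        o = h @ w2 + b2                   # linear output node
        grad_o = 2.0 * (o - y) / n        # gradient of MSE w.r.t. the output
        grad_w2 = h.T @ grad_o
        grad_b2 = grad_o.sum(axis=0)
        grad_h = (grad_o @ w2.T) * (1.0 - h ** 2)
        grad_w1 = x.T @ grad_h
        grad_b1 = grad_h.sum(axis=0)
        w2 -= lr * grad_w2; b2 -= lr * grad_b2
        w1 -= lr * grad_w1; b1 -= lr * grad_b1
    h = np.tanh(x @ w1 + b1)
    return float(np.mean((h @ w2 + b2 - y) ** 2))

for r in (0.0, 0.5, 0.9, 1.0):
    x, y = make_data(r)
    mse = train_mlp(x, y)
    print(f"r_XY = {r:.1f}: training MSE = {mse:.3f}, "
          f"conditional-variance bound 1 - r_XY^2 = {1 - r ** 2:.3f}")

As r_XY approaches 1 the bound shrinks to zero, which is consistent with the abstract's observation that only the case r_XY = +1 allows a dramatic improvement in fit.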
Keywords: Correlation structure; Fitting ability; Extended-Delta-Bar method; Statistical neural networks; Conditional variance; Asymptotic behaviour
This article is indexed in databases including SpringerLink.