Convergence models for Rosenblatt's perceptron learning algorithm
Authors: S. N. Diggavi, J. J. Shynk, N. J. Bershad
Affiliation: Dept. of Electrical & Computer Engineering, University of California, Santa Barbara, CA
Abstract: This paper presents a stochastic analysis of the steady-state and transient convergence properties of a single-layer perceptron for fast learning (large step-size/input-power product). The training data are modeled using a system identification formulation with zero-mean Gaussian inputs. The perceptron weights are adjusted by a learning algorithm equivalent to Rosenblatt's perceptron convergence procedure. It is shown that the convergence points of the algorithm depend on the step size μ and the input signal power (variance) σx², and that the algorithm is stable essentially for all μ > 0. Two coupled nonlinear recursions are derived that accurately model the transient behavior of the algorithm. The authors also examine how these convergence results are affected by noisy perceptron input vectors. Computer simulations are presented to verify the analytical models.
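The setup described in the abstract can be sketched in a few lines: training labels come from a fixed reference ("plant") weight vector, inputs are zero-mean Gaussian, and the weights are updated by Rosenblatt's perceptron convergence procedure with step size μ. This is a minimal illustrative simulation, not the paper's analysis; the dimension, step size, and iteration count are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 8            # input dimension (assumed for illustration)
mu = 0.1         # step size mu
sigma_x = 1.0    # input standard deviation, so sigma_x**2 is the input power
steps = 20000

w_opt = rng.standard_normal(n)   # reference-model weights generating the desired response
w = np.zeros(n)                  # perceptron weights, zero-initialized

for _ in range(steps):
    x = rng.normal(0.0, sigma_x, n)   # zero-mean Gaussian input vector
    d = np.sign(w_opt @ x)            # desired output from the system-identification model
    y = np.sign(w @ x)                # perceptron output (hard limiter)
    # Rosenblatt's rule: update only when the output disagrees with
    # the desired response; (d - y)/2 is +-1 on an error, 0 otherwise.
    w += 0.5 * mu * (d - y) * x

# The perceptron recovers the reference direction up to a positive scale,
# so cosine similarity between w and w_opt should approach 1.
cos = w @ w_opt / (np.linalg.norm(w) * np.linalg.norm(w_opt))
```

Because the labels here are noise-free and linearly separable by construction, the weight direction aligns with `w_opt`; the paper's analysis characterizes where the weights converge as a function of μ and σx², including the noisy-input case that this sketch omits.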