Similar Articles
20 similar records found.
1.
Dynamics of Generalized PCA and MCA Learning Algorithms
Principal component analysis (PCA) and minor component analysis (MCA) are two important statistical tools with many applications in signal processing and data analysis. PCA and MCA neural networks (NNs) can be used to extract principal and minor components from input data online, and it is of interest to develop generalized learning algorithms for such networks. Some novel generalized PCA and MCA learning algorithms are proposed in this paper. Convergence of PCA and MCA learning algorithms is an essential issue in practical applications. Traditionally, convergence is studied via the deterministic continuous-time (DCT) method, which requires the learning rate of the algorithms to approach zero; this is unrealistic in many practical applications. In this paper, the deterministic discrete-time (DDT) method is used instead to study the dynamical behavior of the proposed algorithms. The DDT method is more suitable for convergence analysis because it does not impose the constraints required by the DCT method. It is proven that, under some mild conditions, the weight vector in the proposed algorithms converges exponentially to the principal or minor component. Simulation results illustrate the theoretical findings.
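
To make the DCT/DDT distinction concrete, the following equations sketch the two analysis routes for a generic stochastic learning rule; Oja's rule is used as the illustrative special case, not the paper's generalized updates.

```latex
\begin{align*}
\text{stochastic rule:}\quad & w(k+1) = w(k) + \eta_k\, f\bigl(w(k), x(k)\bigr),\\[2pt]
\text{DCT analysis:}\quad & \frac{dw(t)}{dt} = \bar f\bigl(w(t)\bigr)
   \qquad\text{(justified only when } \eta_k \to 0\text{)},\\[2pt]
\text{DDT analysis:}\quad & w(k+1) = w(k) + \eta\,\bar f\bigl(w(k)\bigr)
   \qquad\text{(constant } \eta > 0\text{)},
\end{align*}
\text{where } \bar f(w) = \mathbb{E}_x\!\left[f(w,x)\right];\quad
\text{for Oja's rule, } \bar f(w) = Cw - \bigl(w^{\top}Cw\bigr)w,\ \ C = \mathbb{E}\bigl[xx^{\top}\bigr].
```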

2.
The convergence of Oja's principal component analysis (PCA) learning algorithm is difficult to study directly. Traditionally, convergence is analyzed indirectly via certain deterministic continuous-time (DCT) systems. This method requires the learning rate to converge to zero, which is not a reasonable requirement in many practical applications. Recently, deterministic discrete-time (DDT) systems have been proposed instead to interpret the dynamics of the learning algorithm. Unlike DCT systems, DDT systems allow the learning rate to be a nonzero constant. This paper provides several important results on the convergence of a DDT system of Oja's PCA learning algorithm. Its contributions are: 1) A number of invariant sets are obtained; any trajectory starting from a point in an invariant set remains in the set forever, so nondivergence of the trajectories is guaranteed. 2) The convergence of the DDT system is analyzed rigorously. It is proven that almost all trajectories starting from points in an invariant set converge exponentially to the unit eigenvector associated with the largest eigenvalue of the correlation matrix. In addition, exponential convergence rates are obtained, providing useful guidelines for selecting learning rates for fast convergence. 3) Since trajectories may diverge, careful choice of initial vectors is important; this paper suggests drawing initial vectors from the unit hypersphere to guarantee convergence. 4) Simulation results are furnished to illustrate the theoretical results.
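
A minimal numerical sketch of these results (the correlation matrix, learning rate, and iteration count below are illustrative choices, not taken from the paper): iterate the DDT system of Oja's rule with a constant learning rate from an initial vector on the unit hypersphere, then check alignment with the principal eigenvector.

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic correlation matrix C with a known eigen-structure.
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
C = Q @ np.diag([2.0, 1.0, 0.5, 0.2, 0.1]) @ Q.T

w = rng.standard_normal(5)
w /= np.linalg.norm(w)            # start on the unit hypersphere (invariant-set idea)
eta = 0.05                        # constant (nonzero) learning rate

for _ in range(2000):
    # DDT system of Oja's rule: w <- w + eta * (C w - (w' C w) w)
    w = w + eta * (C @ w - (w @ C @ w) * w)

v1 = np.linalg.eigh(C)[1][:, -1]  # principal eigenvector, for comparison
print(abs(w @ v1))                # ~1.0: w has aligned with the principal direction
```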

3.
The generalized Hebbian algorithm (GHA) is one of the most widely used principal component analysis (PCA) neural network (NN) learning algorithms. The learning rates of GHA play an important role in the convergence of the algorithm. Traditionally, the learning rates are required to converge to zero so that convergence can be analyzed by studying the corresponding deterministic continuous-time (DCT) equations. However, requiring learning rates to approach zero is impractical in applications because of computational round-off limitations and tracking requirements. In this paper, nonzero-approaching adaptive learning rates are proposed to overcome this problem. These adaptive learning rates converge to positive constants, which not only speeds up the algorithm's evolution considerably but also guarantees global convergence of the GHA algorithm. The convergence is studied in detail by analyzing the corresponding deterministic discrete-time (DDT) equations. Extensive simulations are carried out to illustrate the theory.
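
A sketch of the idea in code, with a hypothetical nonzero-approaching rate eta_k = 0.02 + 1/(k+10) standing in for the paper's rates: it runs Sanger's GHA update on synthetic data and checks that the weight rows align with the leading eigenvectors.

```python
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
C_true = Q @ np.diag([3.0, 1.5, 0.7, 0.2]) @ Q.T
X = rng.multivariate_normal(np.zeros(4), C_true, size=20000)

p = 2                                 # number of principal components to extract
W = 0.1 * rng.standard_normal((p, 4))

for k, x in enumerate(X):
    y = W @ x
    eta = 0.02 + 1.0 / (k + 10)       # hypothetical rate: converges to 0.02 > 0
    # GHA / Sanger update: dW = eta * (y x' - LT[y y'] W), LT = lower triangular
    W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

V = np.linalg.eigh(C_true)[1][:, ::-1]   # eigenvectors, descending eigenvalues
print(np.abs(W @ V[:, :p]))              # ~identity up to sign: rows have converged
```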

4.
Dezhong, Zhang, JianCheng, Yong. Neurocomputing, 2008, 71(7-9): 1748-1752.
The eigenvector associated with the smallest eigenvalue of the autocorrelation matrix of the input signal is called the minor component. Minor component analysis (MCA) is a statistical approach for extracting the minor component from input signals and has been applied in many fields of signal processing and data analysis. In this letter, we propose a neural network learning algorithm for adaptively estimating the minor component from input signals. The dynamics of the proposed algorithm are analyzed via a deterministic discrete-time (DDT) method, and sufficient conditions are obtained to guarantee its convergence.
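
The letter's algorithm is not reproduced here; as a stand-in, the sketch below uses a simple, provably convergent minor-component iteration (normalized power iteration on I - eta*C) to illustrate what an MCA extractor computes. For eta < 2/(lambda_min + lambda_max), the dominant eigenvector of I - eta*C is exactly the minor eigenvector of C.

```python
import numpy as np

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
C = Q @ np.diag([2.5, 1.2, 0.6, 0.1]) @ Q.T   # minor eigenvalue: 0.1

w = rng.standard_normal(4)
eta = 0.3                                     # satisfies eta < 2/(0.1 + 2.5)
for _ in range(500):
    w = w - eta * (C @ w)     # shrink along large-eigenvalue directions
    w /= np.linalg.norm(w)    # renormalize: power iteration on (I - eta*C)

vals, vecs = np.linalg.eigh(C)
print(abs(w @ vecs[:, 0]))    # ~1.0: aligned with the minor component
```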

5.
A non-zero-approaching adaptive learning rate is proposed to guarantee the global convergence of Oja's principal component analysis (PCA) learning algorithm. Most existing adaptive learning rates for Oja's algorithm must approach zero as the learning step increases, which is impractical in many applications due to computational round-off limitations and tracking requirements. The proposed adaptive learning rate overcomes this shortcoming: it converges to a positive constant, so the evolution rate does not decay as learning proceeds, in contrast to zero-approaching learning rates, which slow convergence considerably and increasingly over time. Rigorous mathematical proofs of the global convergence of Oja's algorithm with the proposed learning rate are given by studying the convergence of an equivalent deterministic discrete-time (DDT) system. Extensive simulations verify the theory and show that the proposed adaptive learning rate makes Oja's PCA algorithm better suited to online learning.
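
For contrast, classical stochastic-approximation analyses impose step-size conditions that force the rate to zero, whereas a non-zero-approaching rate tends to a positive limit. The example rates below are generic illustrations, not the paper's specific construction.

```latex
\text{zero-approaching:}\quad \sum_{k}\eta_k = \infty,\ \ \sum_{k}\eta_k^2 < \infty
\ \Rightarrow\ \eta_k \to 0 \quad (\text{e.g. } \eta_k = 1/k);
\qquad
\text{non-zero-approaching:}\quad \eta_k \to \eta_\infty > 0 \quad
(\text{e.g. } \eta_k = \eta_\infty + c/k).
```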

6.
Principal/minor component analysis (PCA/MCA), generalized principal/minor component analysis (GPCA/GMCA), and singular value decomposition (SVD) algorithms are important techniques for feature extraction. In the convergence analysis of these algorithms, the deterministic discrete-time (DDT) method can effectively reveal the dynamic behavior of PCA/MCA and GPCA/GMCA algorithms. However, the dynamic behavior of SVD algorithms has not been studied quantitatively because of their special structure. In this paper, for the first time, we exploit the advantages of the DDT method in PCA algorithm analysis to study the dynamics of SVD algorithms. First, taking the cross-coupled Hebbian algorithm as an example, we concatenate the two cross-coupled variables into a single vector and obtain a PCA-like DDT system. Second, we analyze the discrete-time dynamic behavior and stability of this PCA-like DDT system in detail based on the DDT method, and derive bounds on the weight vectors and the learning rate. Further discussion shows the universality of the proposed method for analyzing other SVD algorithms. The proposed method thus provides a new way to study the dynamical convergence properties of SVD algorithms.
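
A sketch of a standard cross-coupled Hebbian SVD rule (the paper's exact update may differ), with the concatenation idea noted in the comments: stacking z = [u; v] turns the coupled updates into a single Oja/PCA-like DDT iteration on the symmetric matrix M = [[0, A], [A', 0]], whose principal eigenvector encodes the top singular-vector pair of A.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3))

u, v = rng.standard_normal(4), rng.standard_normal(3)
u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
eta = 0.05

for _ in range(3000):
    s = u @ A @ v                       # coupling term u' A v
    # cross-coupled Hebbian updates; with z = [u; v] this is a PCA-like
    # iteration z <- z + eta * (M z - (z' M z / 2) z), M = [[0, A], [A', 0]]
    u, v = u + eta * (A @ v - s * u), v + eta * (A.T @ u - s * v)

U, S, Vt = np.linalg.svd(A)
print(abs(u @ U[:, 0]), abs(v @ Vt[0]))  # ~1.0, ~1.0: top singular pair found
```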

7.
Recently, many unified learning algorithms have been developed to solve the tasks of principal component analysis (PCA) and minor component analysis (MCA). These unified algorithms extract the principal component and, with a simple change of sign, also serve as minor component extractors, which is of practical significance for implementations. Convergence of the existing unified algorithms is guaranteed only under the condition that their learning rates approach zero, which is impractical in many applications. In this paper, we propose a unified PCA/MCA algorithm with a constant learning rate, and derive sufficient conditions that guarantee convergence by analyzing the discrete-time dynamics of the proposed algorithm. The theoretical results lay a solid foundation for applications of the algorithm.
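
A representative unified rule illustrating the sign-flip idea (hedged: the paper's exact update may differ): w <- w + s*eta*((w'w)Cw - (w'Cw)w). The single sign s selects the task, since on the unit sphere this is Rayleigh-quotient ascent (s = +1, PCA) or descent (s = -1, MCA).

```python
import numpy as np

rng = np.random.default_rng(4)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
C = Q @ np.diag([2.0, 1.0, 0.5, 0.1]) @ Q.T

def unified(sign, eta=0.05, iters=4000):
    w = rng.standard_normal(4)
    w /= np.linalg.norm(w)
    for _ in range(iters):
        # sign = +1: principal component; sign = -1: minor component
        w = w + sign * eta * ((w @ w) * (C @ w) - (w @ C @ w) * w)
        w /= np.linalg.norm(w)          # keep this sketch numerically tame
    return w

vecs = np.linalg.eigh(C)[1]
print(abs(unified(+1) @ vecs[:, -1]))   # ~1.0: principal component
print(abs(unified(-1) @ vecs[:, 0]))    # ~1.0: minor component
```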

8.
Li Xiaobo, Fan Yangyu, Peng Ke. Control and Decision, 2010, 25(9): 1399-1402.
The deterministic continuous-time (DCT) method imposes strict conditions that are difficult to satisfy when used to study minor component analysis (MCA) learning algorithms. To address this problem, the convergence conditions of the AMEX MCA learning algorithm are studied here using the deterministic discrete-time (DDT) method. Theoretical analysis shows that the AMEX MCA algorithm converges to the total least squares (TLS) solution of the system only when the learning factor of the algorithm and the autocorrelation matrix of the input signal satisfy certain conditions. Simulation results confirm the correctness of the derived convergence conditions.
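
Why the TLS solution appears here: the TLS estimate for an overdetermined system Ax ~ b is obtained from the minor eigenvector of the augmented correlation matrix [A b]'[A b] (a standard fact), so any convergent MCA iteration applied to that matrix solves the TLS problem. In the sketch below, synthetic data is used and an exact eigensolver stands in for the AMEX iteration.

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((200, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.05 * rng.standard_normal(200)
A_noisy = A + 0.05 * rng.standard_normal(A.shape)  # noise in A too: TLS setting

R = np.column_stack([A_noisy, b])
v = np.linalg.eigh(R.T @ R)[1][:, 0]   # minor eigenvector of the augmented matrix
x_tls = -v[:3] / v[3]                  # TLS solution from the minor component
print(x_tls)                           # close to x_true
```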

9.
A principal component analysis (PCA) neural network is developed for online extraction of multiple minor directions of an input signal. The network extracts the minor directions in parallel by computing the principal directions of a transformed input signal, which avoids, to a certain extent, the stability-speed problem of computing minor directions directly. Moreover, the learning algorithms that update the network weights use constant learning rates, overcoming the shortcoming of learning rates that approach zero. In addition, the proposed algorithms are globally convergent, so choosing initial values of the learning parameters is very simple. This paper presents the convergence analysis of the proposed algorithms by studying the corresponding deterministic discrete-time (DDT) equations; a rigorous mathematical proof of global convergence is given, and the theoretical results are further confirmed via simulations.
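
A sketch of the transformation idea (one standard construction; the paper's transform may differ): for sigma > lambda_max(C), the principal eigenvectors of C' = sigma*I - C are exactly the minor eigenvectors of C, so a PCA-type network run on the transformed signal extracts multiple minor directions in parallel. Here a GHA-style DDT iteration plays the PCA network.

```python
import numpy as np

rng = np.random.default_rng(6)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
C = Q @ np.diag([3.0, 2.0, 1.0, 0.4, 0.1]) @ Q.T

sigma = np.trace(C)                  # cheap upper bound on lambda_max(C)
C_t = sigma * np.eye(5) - C          # transformed correlation matrix

W = 0.1 * rng.standard_normal((2, 5))     # extract the 2 minor directions
eta = 0.02
for _ in range(5000):
    # GHA-style DDT update applied to the transformed matrix C_t
    W += eta * (W @ C_t - np.tril(W @ C_t @ W.T) @ W)

vecs = np.linalg.eigh(C)[1]          # ascending: columns 0, 1 are minor directions
print(np.abs(W @ vecs[:, :2]))       # ~identity up to sign
```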

10.
A self-stabilizing dual-purpose algorithm is proposed for extracting eigen-pairs of the autocorrelation matrix of a signal. By changing only a single sign, the algorithm switches between estimating the principal and the minor eigenvector, and the corresponding eigenvalue can be estimated from the norm of the estimated eigenvector, so that complete eigen-pairs are extracted. The convergence of the proposed algorithm is analyzed via the deterministic discrete-time (DDT) method, and boundary conditions for convergence are determined. Simulations comparing the proposed algorithm with existing ones verify its convergence performance.

11.
The generalized eigenvector plays an essential role in signal processing. In this paper, we present a novel neural network learning algorithm for estimating the generalized eigenvector of a Hermitian matrix pencil. Unlike some traditional algorithms, which require proper learning-rate values to be selected in advance, the proposed algorithm needs no learning rate and is therefore well suited to real applications. By analyzing all of the equilibrium points, it is proven that the proposed algorithm converges if and only if the weight vector of the neural network equals the generalized eigenvector corresponding to the largest generalized eigenvalue of the Hermitian matrix pencil. Using the deterministic discrete-time (DDT) method, convergence conditions that are satisfied with probability 1 are also obtained. Simulation results show that the proposed algorithm has fast convergence and good numerical stability, and a real application demonstrates its effectiveness in tracking the optimal beamforming vector.
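
The paper's learning-rate-free network is not reproduced here; as a reference baseline for the same problem A w = lambda B w, the sketch below runs plain power iteration on B^(-1)A for a real symmetric pencil with positive definite B, validated against scipy.linalg.eigh (whose eigenvectors are B-orthonormal).

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(7)
M = rng.standard_normal((4, 4))
A = M @ M.T + np.eye(4)              # symmetric "numerator" matrix of the pencil
N = rng.standard_normal((4, 4))
B = N @ N.T + 4 * np.eye(4)          # positive definite "denominator" matrix

w = rng.standard_normal(4)
for _ in range(200):
    w = np.linalg.solve(B, A @ w)    # one power-iteration step on B^{-1} A
    w /= np.sqrt(w @ B @ w)          # B-norm normalization

vals, vecs = eigh(A, B)              # reference generalized eigendecomposition
print(abs(w @ B @ vecs[:, -1]))      # ~1.0: largest generalized eigenvector found
```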

12.
The Möller algorithm is a self-stabilizing minor component analysis (MCA) algorithm. This paper studies the convergence and dynamic characteristics of the Möller algorithm using the deterministic discrete-time (DDT) method. Unlike other analysis methodologies, the DDT method preserves the discrete-time character of the algorithm and imposes no restrictive conditions. By analyzing the dynamics of the weight vector, several convergence conditions are derived that are beneficial for applications of the algorithm. Computer simulations and real applications demonstrate the correctness of the analysis.

13.
In this paper, a quasi-Newton-type optimized iterative learning control (ILC) algorithm is investigated for a class of discrete linear time-invariant systems. The proposed algorithm updates the learning gain matrix with a quasi-Newton-type matrix instead of the inverse of the plant. By mathematical induction, the monotone convergence of the proposed algorithm is analyzed: the tracking error converges monotonically to zero after a finite number of iterations. Compared with existing optimized ILC algorithms, the proposed learning law inherits the superlinear convergence of the quasi-Newton method, so it converges faster, is robust to ill-conditioning of the system model, and thus has a wide range of applications. Numerical simulations demonstrate its validity and effectiveness.
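
A generic quasi-Newton ILC sketch under simplifying assumptions (SISO discrete LTI plant in lifted form y = G u; a Broyden secant update for the gain matrix B_k ~ G^(-1); the paper's specific update law may differ). Each outer loop is one trial of the plant.

```python
import numpy as np

T = 30                                         # samples per trial
h = np.array([0.5, 0.3, 0.1])                  # impulse response (Markov parameters)
G = sum(np.diag(np.full(T - i, h[i]), -i) for i in range(3))  # lifted plant matrix
y_d = np.sin(np.linspace(0, 2 * np.pi, T))     # reference trajectory

u = np.zeros(T)
B = np.eye(T) / h[0]                           # crude initial guess for G^{-1}
u_prev, e_prev = None, None
for trial in range(15):
    e = y_d - G @ u                            # run one trial, measure error
    if e_prev is not None:
        s, g = u - u_prev, e_prev - e          # secant pair: g = G s for LTI plant
        B += np.outer(s - B @ g, g) / (g @ g)  # Broyden update of the gain matrix
    u_prev, e_prev = u.copy(), e.copy()
    u = u + B @ e                              # quasi-Newton ILC input update
    print(trial, np.linalg.norm(e))            # tracking error decays over trials
```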

14.
A Class of Self-Stabilizing MCA Learning Algorithms
In this letter, we propose a class of self-stabilizing learning algorithms for minor component analysis (MCA) that includes several well-known MCA learning algorithms. Self-stabilizing means that the sign of the change in the weight vector's length is independent of the presented input vector. For these algorithms, a rigorous global convergence proof is given and the convergence rate is discussed. By combining the positive properties of these algorithms, a new learning algorithm with improved performance is proposed. Simulations confirm the theoretical results.
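
In symbols, the defining property, together with one illustrative length dynamics (an assumed example form, not quoted from the letter), reads:

```latex
\operatorname{sign}\bigl(\|w(k+1)\| - \|w(k)\|\bigr)\ \text{depends only on}\ \|w(k)\|,\ \text{not on}\ x(k);
\qquad\text{e.g.}\quad
\|w(k+1)\|^2 - \|w(k)\|^2 = \eta\,\bigl(1 - \|w(k)\|^2\bigr)\,\varphi\bigl(w(k)\bigr),\ \ \varphi > 0,
```

so that, under such a form, the weight length is driven toward 1 from either side regardless of the inputs, which is what prevents divergence or collapse of the weight vector.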

15.
Jian Wang, Wei Wu, Jacek M. Zurada. Neurocomputing, 2011, 74(14-15): 2368-2376.
Conjugate gradient methods have many advantages in practical numerical experiments, such as fast convergence and low memory requirements. This paper considers a class of conjugate gradient learning methods for three-layer backpropagation neural networks. We propose a new learning algorithm for almost-cyclic training of neural networks based on the Polak-Ribière-Polyak (PRP) conjugate gradient method, and establish deterministic convergence properties for three learning modes: batch, cyclic, and almost-cyclic learning. The two deterministic convergence properties are weak convergence, meaning the gradient of the error function goes to zero, and strong convergence, meaning the weight sequence goes to a fixed point. The convergence results depend on the learning mode and on the selection strategy for the learning rate. Illustrative numerical examples support the theoretical analysis.
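
A minimal sketch of the PRP update itself, run in batch mode on a toy least-squares loss with exact line search rather than on a three-layer network; the PRP+ truncation beta <- max(beta, 0) is a common safeguard, not necessarily the paper's choice.

```python
import numpy as np

rng = np.random.default_rng(9)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)
grad = lambda w: A.T @ (A @ w - b)         # gradient of 0.5 * ||A w - b||^2

w = np.zeros(10)
g = grad(w)
d = -g                                     # first direction: steepest descent
for _ in range(30):
    Hd = A.T @ (A @ d)                     # Hessian-vector product, H = A'A
    alpha = -(g @ d) / (d @ Hd + 1e-12)    # exact line search on the quadratic
    w += alpha * d
    g_new = grad(w)
    beta = max((g_new @ (g_new - g)) / (g @ g + 1e-12), 0.0)  # PRP+ beta
    d = -g_new + beta * d                  # new conjugate direction
    g = g_new
print(np.linalg.norm(grad(w)))             # ~0: "weak convergence" (gradient -> 0)
```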

16.
This paper addresses the convergence analysis of the max-consensus algorithm (MCA) in the presence of probabilistic communication networks between agents. At each iteration of the MCA, all agents share their measurements with adjacent agents via local communication links, a setting applicable to many multi-agent systems (MASs). The communication links are assumed to have Bernoulli dropouts: information exchanged between agents may be lost with a Bernoulli distribution. In the proposed method, the information topology of the MAS is modeled as a dynamic graph with a Bernoulli adjacency matrix. It is proved that, in the presence of Bernoulli dropouts and under non-restrictive assumptions on the MAS and its communication topology, the MCA converges with probability one in finite time. Furthermore, upper bounds on the expectation and dispersion of the convergence time are provided by deterministic and probabilistic expressions, respectively. These upper bounds are shown to be asymptotic: there are specific MAS conditions under which the convergence time of the MCA tends to the proposed upper bounds. The convergence accuracy of the MCA is discussed in terms of probabilistic equations, and simulation results illustrate the validity of the proposed theorems.
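
A simulation sketch of the setting (the ring topology, drop probability, and sizes below are arbitrary choices): each agent keeps the max of its own value and whatever survives the Bernoulli link failures, and the loop terminates in finite time with probability one.

```python
import numpy as np

rng = np.random.default_rng(10)
n, p_drop = 12, 0.4
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # ring topology
x = rng.standard_normal(n)                 # initial measurements
global_max = x.max()

t = 0
while not np.allclose(x, global_max):
    t += 1
    x_new = x.copy()
    for i in range(n):
        for j in neighbors[i]:
            if rng.random() > p_drop:      # link j -> i survives this round
                x_new[i] = max(x_new[i], x[j])
    x = x_new
print(f"consensus on the global max after {t} iterations")
```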

17.
J. Andrs, Pedro J. Neurocomputing, 2007, 70(16-18): 2768.
In this paper, the behavior of Sanger's Hebbian artificial neural network is analyzed. Hebbian networks are employed to implement principal component analysis (PCA), and several improvements over Oja's original model have been developed over the last two decades; among them, Sanger's model is designed to provide the eigenvectors of the correlation matrix directly. The behavior of these models has traditionally been considered in a deterministic continuous-time (DCT) formulation whose validity rests on hypotheses about the asymptotic behavior of the learning gain that cannot be guaranteed in practical applications. This paper presents a comparative study with a deterministic discrete-time (DDT) formulation that characterizes the average evolution of the network, preserving the discrete-time form of the original network and capturing more realistic behavior of the learning gain. The results thoroughly characterize the relationship between the learning gain and the eigenvalue structure of the correlation matrix.

18.
This paper considers a special class of numerical algorithms, the so-called relaxation algorithms, for computing Nash equilibrium points in noncooperative games. Relaxation algorithms have been studied by various authors for the deterministic case, with convergence conditions based on fixed-point theorems; for example, Basar (1987) and Li and Basar (1987) proved convergence for a two-player game via the contraction mapping theorem. For the quadratic case these conditions are easy to check, but for other nonlinear payoff functions they can be difficult to verify. In this paper, the authors propose an alternative approach using the residual terms of the Nikaido-Isoda function. The convergence theorem is proved for nonsmooth, weakly convex-concave Nikaido-Isoda functions; the family of weakly convex-concave functions is broad enough for applications, since it includes the family of smooth functions. When the payoff functions are twice continuously differentiable, the condition on the residual terms reduces to strict positiveness of a matrix representing the difference of the Hessians of the Nikaido-Isoda function with respect to the first and second groups of variables. An analogous condition was used by Uryas'ev (1988) to prove convergence of a gradient-type algorithm for the Nash equilibrium problem. Only the deterministic case is discussed here; nevertheless, the approach can be generalized to stochastic Nash equilibrium problems with uncertain parameters.
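
A minimal sketch of the relaxation iteration on a two-player game with quadratic payoffs (a Cournot duopoly invented for illustration), where the Nikaido-Isoda maximizer Z(x) reduces to the stacked analytic best responses: x_{k+1} = (1 - a) x_k + a Z(x_k).

```python
import numpy as np

def best_responses(q):
    # payoff_i(q) = q_i * (10 - q_1 - q_2) - 2 q_i  =>  BR_i(q_j) = (8 - q_j) / 2
    return np.array([(8 - q[1]) / 2, (8 - q[0]) / 2])

q = np.array([0.0, 6.0])          # arbitrary starting strategies
a = 0.5                           # relaxation (averaging) parameter
for _ in range(60):
    q = (1 - a) * q + a * best_responses(q)   # relaxation step toward Z(q)

print(q)                          # -> [8/3, 8/3], the Nash equilibrium
```

For this linear best-response map the relaxation iteration is a contraction, so it converges for any starting point; the paper's residual-term conditions play the analogous role for general weakly convex-concave games.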

19.
This work presents two novel approaches, backpropagation with a magnified gradient function (MGFPROP) and deterministic weight modification (DWM), to speed up the convergence rate and improve the global convergence capability of the standard backpropagation (BP) learning algorithm. MGFPROP increases the convergence rate by magnifying the gradient function of the activation function, while DWM reduces the system error by changing the weights of a multilayered feedforward neural network in a deterministic way. Simulation results show that both approaches outperform BP and other modified BP algorithms on a number of learning problems. Moreover, integrating the two approaches into a new algorithm, MDPROP, further improves on MGFPROP and DWM: in our simulations, MDPROP always outperforms BP and other modified BP algorithms in terms of convergence rate and global convergence capability.

20.
A critical issue for neural-network-based large-scale data mining algorithms is how to speed up learning. The problem is particularly challenging for the error back-propagation (EBP) algorithm in multi-layered perceptron (MLP) neural networks because of their significant applications in many scientific and engineering problems. In this paper, we propose an adaptive variable learning rate EBP algorithm (AVLR-EBP) that attacks the problem of reducing convergence time, aiming at high-speed convergence compared with the standard EBP algorithm. The idea is inspired by adaptive filtering, which led us to two closely related methods of calculating the learning rate. Mathematical analysis of the AVLR-EBP algorithm confirms its convergence. The algorithm is applied to data classification; simulation results on many well-known data sets demonstrate a considerable reduction in convergence time compared with standard EBP. In classifying the IRIS, Wine, Breast Cancer, Semeion, and SPECT Heart datasets, the proposed algorithm reduces the number of learning epochs relative to standard EBP.
