Similar Documents
20 similar documents found (search time: 218 ms)
1.
Dynamics of Generalized PCA and MCA Learning Algorithms
Principal component analysis (PCA) and minor component analysis (MCA) are two important statistical tools with many applications in signal processing and data analysis. PCA and MCA neural networks (NNs) can extract the principal and minor components from input data online, so it is of interest to develop generalized learning algorithms for PCA and MCA NNs; some novel generalized PCA and MCA learning algorithms are proposed in this paper. Convergence of PCA and MCA learning algorithms is an essential issue in practical applications. Traditionally, convergence is studied via the deterministic continuous-time (DCT) method, which requires the learning rate of the algorithms to approach zero, a requirement that is unrealistic in many practical applications. In this paper, the deterministic discrete-time (DDT) method is used to study the dynamical behavior of the proposed algorithms. The DDT method is more suitable for convergence analysis because it does not impose the vanishing-learning-rate constraint of the DCT method. It is proven that, under some mild conditions, the weight vector in the proposed algorithms converges exponentially to the principal or minor component. Simulation results further illustrate the theoretical results.
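For readers unfamiliar with the DDT viewpoint, the sketch below (a minimal illustration, not one of the paper's generalized algorithms) iterates the classical Oja PCA update with a constant learning rate directly on a fixed covariance matrix; the constant rate is exactly what the DDT analysis permits and the DCT analysis forbids.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed covariance matrix: the DDT analysis replaces random samples by
# their correlation matrix C and studies the resulting recursion directly.
A = rng.standard_normal((5, 5))
C = A @ A.T / 5

eta = 0.05                       # constant learning rate (never sent to zero)
w = rng.standard_normal(5)
w /= np.linalg.norm(w)

for _ in range(2000):
    Cw = C @ w
    # Oja's PCA update in DDT form: w <- w + eta * (C w - (w' C w) w)
    w = w + eta * (Cw - (w @ Cw) * w)

v1 = np.linalg.eigh(C)[1][:, -1]        # true principal eigenvector
print(abs(w @ v1) / np.linalg.norm(w))  # ~1.0 once converged
```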

2.
Recently, many unified learning algorithms have been developed to solve the tasks of principal component analysis (PCA) and minor component analysis (MCA). Such a unified algorithm extracts the principal component and, altered by nothing more than a sign change, also serves as a minor component extractor, which is of practical significance for implementations. Convergence of the existing unified algorithms is guaranteed only under the condition that their learning rates approach zero, which is impractical in many applications. In this paper, we propose a unified PCA/MCA algorithm with a constant learning rate and derive sufficient conditions that guarantee convergence by analyzing the discrete-time dynamics of the proposed algorithm. These theoretical results lay a solid foundation for applying the proposed algorithm.
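The abstract does not reproduce the update rule, so the following is only a generic sketch of the sign-flip idea: the same Oja-type iteration extracts the principal component with `sign=+1.0` and the minor component with `sign=-1.0`. The per-step renormalization here is a crutch that the paper's constant-learning-rate algorithm does not need.

```python
import numpy as np

def extract(C, sign, eta=0.05, iters=5000, seed=1):
    """Oja-type iteration: sign=+1.0 extracts the principal component,
    sign=-1.0 the minor component (one sign change switches the task)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(C.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        Cw = C @ w
        w = w + sign * eta * (Cw - (w @ Cw) * w)
        w /= np.linalg.norm(w)   # stabilizing crutch; the paper's algorithm
                                 # is shown to converge without it
    return w

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))
C = A @ A.T / 6
vecs = np.linalg.eigh(C)[1]
print(abs(extract(C, +1.0) @ vecs[:, -1]))  # ~1.0: principal component
print(abs(extract(C, -1.0) @ vecs[:, 0]))   # ~1.0: minor component
```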

3.
A self-stabilizing dual-purpose algorithm is proposed for extracting eigenpairs of the autocorrelation matrix of a signal. The algorithm switches between principal and minor eigenvector estimation through a single sign change, and the corresponding eigenvalue can be estimated from the norm of the estimated eigenvector, so that complete eigenpairs are extracted. Convergence of the proposed algorithm is analyzed with the deterministic discrete-time (DDT) method, and boundary conditions for convergence are determined. Simulations comparing the algorithm with existing ones verify its convergence performance.

4.
Minor component analysis (MCA) is a statistical method for extracting the eigenvector associated with the smallest eigenvalue of the covariance matrix of input signals. Convergence is essential if MCA algorithms are to be used in practical applications. Traditionally, the convergence of MCA algorithms is analyzed indirectly via their corresponding deterministic continuous-time (DCT) systems. However, the DCT method requires the learning rate to approach zero, which is unreasonable in many applications because of round-off limitations and tracking requirements. This paper studies the convergence of the deterministic discrete-time (DDT) system associated with the OJAn MCA learning algorithm. Unlike the DCT method, the DDT method does not require the learning rate to approach zero. Some important convergence results are obtained for the OJAn MCA learning algorithm via the DDT method, and simulations are carried out to illustrate them.
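The essence of the DDT analysis is that, in the eigenbasis of the covariance matrix, the recursion decouples into scalar multiplication factors whose products determine convergence. A toy numeric sketch of this idea (a generic MCA-style recursion in eigencoordinates, not necessarily the OJAn update itself):

```python
import numpy as np

# DDT analysis idea: in the eigenbasis of C the recursion acts coordinatewise,
# so convergence can be read off scalar multiplication factors.
lams = np.array([0.2, 1.0, 3.0])   # eigenvalues of a toy covariance matrix
eta = 0.1                          # constant learning rate
z = np.ones(3)                     # weight vector in eigencoordinates

for _ in range(500):
    r = (lams * z**2).sum() / (z**2).sum()   # Rayleigh quotient
    # Generic MCA-style recursion: coordinates with eigenvalues above the
    # Rayleigh quotient shrink, the one below it survives.
    z = z - eta * (lams - r) * z

print(np.round(z / np.linalg.norm(z), 4))    # -> [+-1, 0, 0]: minor direction
```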

5.
Principal component analysis (PCA) and minor component analysis (MCA) are powerful methods for a wide variety of applications, such as pattern recognition and signal processing. In this paper, we first propose a differential equation for the generalized eigenvalue problem. We prove that the stable points of this differential equation are the eigenvectors corresponding to the largest eigenvalue. From this generalized differential equation, a class of PCA and MCA learning algorithms can be obtained. We demonstrate that many existing PCA and MCA learning algorithms are special cases of this class, and that the class includes some new and simpler MCA learning algorithms. Our results show that all learning algorithms of this class have the same order of convergence speed and are robust to implementation error.
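The paper's differential equation is not reproduced in the abstract; as a generic illustration of the same claim, Euler-integrating a standard generalized Rayleigh-quotient flow also converges to the eigenvector of the largest generalized eigenvalue:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)
# Symmetric positive-definite pencil (A, B) for the problem A v = lam B v.
M = rng.standard_normal((5, 5)); A = M @ M.T / 5
N = rng.standard_normal((5, 5)); B = N @ N.T / 5 + np.eye(5)

w = rng.standard_normal(5)
dt = 0.01
for _ in range(20000):
    # Euler integration of a generalized Rayleigh-quotient flow:
    # dw/dt = A w - (w' A w / w' B w) B w
    r = (w @ A @ w) / (w @ B @ w)
    w = w + dt * (A @ w - r * (B @ w))

v = eigh(A, B)[1][:, -1]   # eigenvector of the largest generalized eigenvalue
print(abs(w @ B @ v) / np.sqrt((w @ B @ w) * (v @ B @ v)))  # ~1.0
```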

6.
A principal component analysis (PCA) neural network is developed for online extraction of the multiple minor directions of an input signal. The network extracts the multiple minor directions in parallel by computing the principal directions of a transformed input signal, so the stability-speed problem of computing the minor directions directly is avoided to a certain extent. Moreover, the learning algorithms for updating the network weights use constant learning rates, overcoming the shortcoming of learning rates that must approach zero. In addition, the proposed algorithms are globally convergent, so choosing the initial values of the learning parameters is very simple. This paper presents a convergence analysis of the proposed algorithms by studying the corresponding deterministic discrete-time (DDT) equations; a rigorous mathematical proof of global convergence is given. The theoretical results are further confirmed by simulations.
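The transformation trick can be checked offline in a few lines: shift and negate the covariance so that the minor directions of C become the principal directions of the transformed matrix (a batch sketch; the paper does this online with a PCA network):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 6))
C = A @ A.T / 6

# Shift-and-negate the covariance: any shift s >= lambda_max(C) works, and the
# trace always dominates the largest eigenvalue of a PSD matrix.
s = np.trace(C)
Cp = s * np.eye(6) - C

# The principal subspace of Cp is exactly the minor subspace of C (computed
# here by eigendecomposition for brevity; the paper extracts it online).
minor_dirs = np.linalg.eigh(Cp)[1][:, -2:]   # top-2 of Cp = bottom-2 of C
true_minor = np.linalg.eigh(C)[1][:, :2]

P1 = minor_dirs @ minor_dirs.T               # compare the two 2-D subspaces
P2 = true_minor @ true_minor.T
print(np.allclose(P1, P2, atol=1e-8))        # True
```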

7.
In this paper, we first propose a differential equation for the generalized eigenvalue problem. We prove that the stable points of this differential equation are the eigenvectors corresponding to the largest eigenvalue. From this generalized differential equation, a class of principal component analysis (PCA) and minor component analysis (MCA) learning algorithms can be obtained. We demonstrate that many existing PCA and MCA learning algorithms are special cases of this class, and that the class includes some new and simpler MCA learning algorithms. Our results show that all learning algorithms of this class have the same order of convergence speed and are robust to implementation error.

8.
Dezhong, Zhang, JianCheng, Yong. Neurocomputing, 2008, 71(7-9): 1748-1752
The eigenvector associated with the smallest eigenvalue of the autocorrelation matrix of the input signals is called the minor component. Minor component analysis (MCA) is a statistical approach for extracting the minor component from input signals and has been applied in many fields of signal processing and data analysis. In this letter, we propose a neural-network learning algorithm for adaptively estimating the minor component of input signals. The dynamics of the proposed algorithm are analyzed via a deterministic discrete-time (DDT) method, and sufficient conditions are obtained to guarantee its convergence.

9.
Generalized minor component analysis (GMCA) plays an important role in many areas of modern signal processing. Most existing algorithms do not simultaneously possess an associated information criterion together with convergence, self-stability, and the ability to extract multiple generalized minor components. To address these problems, a generalized minor component extraction algorithm is derived from a new information propagation rule, and its global convergence is analyzed with the deterministic discrete-time (DDT) method. Meanwhile, a theoretical analysis of the relation between convergence and the algorithm's initial state shows that the algorithm is self-stabilizing. The algorithm's application to multiple generalized minor component extraction is further explored. Compared with earlier algorithms, the proposed algorithm converges faster. Matlab simulations verify its performance.

10.
The generalized eigenvector plays an essential role in signal processing. In this paper, we present a novel neural-network learning algorithm for estimating the generalized eigenvector of a Hermitian matrix pencil. Unlike some traditional algorithms, which require proper learning-rate values to be selected before use, the proposed algorithm needs no learning rate and is therefore well suited to real applications. By analyzing all of the equilibrium points, it is proven that the proposed algorithm converges if and only if the weight vector of the neural network equals the generalized eigenvector corresponding to the largest generalized eigenvalue of the Hermitian matrix pencil. Using the deterministic discrete-time (DDT) method, convergence conditions that can be satisfied with probability 1 are also obtained. Simulation results show that the proposed algorithm has fast convergence and good numerical stability, and a real application demonstrates its effectiveness in tracking the optimal beamforming vector.
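As a point of comparison, the classical learning-rate-free way to obtain the largest generalized eigenvector of a Hermitian pencil is power iteration on B^{-1}A (a baseline sketch, not the paper's neural algorithm):

```python
import numpy as np
from scipy.linalg import eigh, solve

rng = np.random.default_rng(5)
# A Hermitian matrix pencil (A, B) with B positive definite.
M = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = M @ M.conj().T / 5
N = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
B = N @ N.conj().T / 5 + np.eye(5)

# Power iteration on B^{-1} A: like the paper's algorithm it needs no
# learning rate, only a rescaling each step.
w = rng.standard_normal(5) + 1j * rng.standard_normal(5)
for _ in range(500):
    w = solve(B, A @ w)        # w <- B^{-1} A w
    w /= np.linalg.norm(w)

v = eigh(A, B)[1][:, -1]       # eigenvector of the largest generalized eigenvalue
corr = abs(w.conj() @ B @ v) / np.sqrt(abs(w.conj() @ B @ w) * abs(v.conj() @ B @ v))
print(corr)                    # ~1.0
```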

11.
This paper presents a unified theory for a class of learning neural networks for principal component analysis (PCA) and minor component analysis (MCA). First, some fundamental properties common to all networks in the class are addressed. Second, a subclass called the generalized asymmetric learning algorithm is investigated, and the kind of asymmetric structure generally required to obtain the individual eigenvectors of the correlation matrix of a data sequence is clarified. Third, focusing on a single-neuron model, a systematic way of deriving both PCA and MCA learning algorithms is shown, revealing a relation between the normalization in PCA algorithms and that in MCA algorithms. This work was presented, in part, at the Third International Symposium on Artificial Life and Robotics, Oita, Japan, January 19-21, 1998.

12.
A Class of Self-Stabilizing MCA Learning Algorithms
In this letter, we propose a class of self-stabilizing learning algorithms for minor component analysis (MCA) that includes a few well-known MCA learning algorithms. Self-stabilizing means that the sign of the change in the weight vector's length is independent of the presented input vector. For these algorithms, a rigorous global convergence proof is given and the convergence rate is discussed. By combining the positive properties of these algorithms, a new learning algorithm is proposed that improves performance. Simulations confirm our theoretical results.
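To make "self-stabilizing" concrete: in rules of this kind the update term is orthogonal to the weight vector, so to first order the weight length cannot change, whatever the input is. A sketch using one well-known rule of this family (an assumption on my part; the letter treats a whole class):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((5, 5))
C = A @ A.T / 5

eta = 0.02
w = rng.standard_normal(5)
w /= np.linalg.norm(w)

for _ in range(20000):
    Cw = C @ w
    # Self-stabilizing MCA update: delta is orthogonal to w (w' delta = 0),
    # so the weight length is preserved to first order regardless of the
    # input statistics.
    delta = -eta * ((w @ w) * Cw - (w @ Cw) * w)
    w = w + delta

vecs = np.linalg.eigh(C)[1]
print(round(np.linalg.norm(w), 3))                        # stays close to 1
print(round(abs(w @ vecs[:, 0]) / np.linalg.norm(w), 4))  # ~1: minor component
```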

13.
The convergence of Oja's principal component analysis (PCA) learning algorithms is difficult to study directly. Traditionally, it is analyzed indirectly via certain deterministic continuous-time (DCT) systems, a method that requires the learning rate to converge to zero, which is unreasonable in many practical applications. Recently, deterministic discrete-time (DDT) systems have instead been proposed to interpret the dynamics of these learning algorithms. Unlike DCT systems, DDT systems allow learning rates to be constant (and nonzero). This paper provides several important results on the convergence of a DDT system of Oja's PCA learning algorithm. Its contributions are: 1) A number of invariant sets are obtained, and any trajectory starting from a point in an invariant set remains in the set forever, so nondivergence of the trajectories is guaranteed. 2) The convergence of the DDT system is analyzed rigorously; it is proven that almost all trajectories starting from points in an invariant set converge exponentially to the unit eigenvector associated with the largest eigenvalue of the correlation matrix, and exponential convergence rates are obtained, providing useful guidelines for selecting a fast-convergence learning rate. 3) Since trajectories may diverge, the careful choice of initial vectors is an important issue; this paper suggests drawing initial vectors from the unit hypersphere to guarantee convergence. 4) Simulation results are furnished to illustrate the theoretical results.
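The practical upshot of items 1)-3) can be checked numerically: starting Oja's DDT recursion from the unit hypersphere with a modest constant learning rate, trajectories stay bounded and align with the principal eigenvector. The specific bound eta * lambda_max = 0.1 below is my conservative choice, not the paper's exact condition.

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((4, 4))
C = A @ A.T / 4
lam_max = np.linalg.eigvalsh(C)[-1]
eta = 0.1 / lam_max    # conservative constant rate (an assumption, not the
                       # paper's exact bound)
v1 = np.linalg.eigh(C)[1][:, -1]

for trial in range(5):
    w = rng.standard_normal(4)
    w /= np.linalg.norm(w)        # start on the unit hypersphere, as suggested
    max_norm = 0.0
    for _ in range(3000):
        Cw = C @ w
        w = w + eta * (Cw - (w @ Cw) * w)     # Oja's DDT recursion
        max_norm = max(max_norm, np.linalg.norm(w))
    # Trajectories stay bounded (invariant set) and align with v1.
    print(trial, round(max_norm, 3), round(abs(w @ v1), 4))
```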

14.
Neural-network algorithms for principal component analysis (PCA) and minor component analysis (MCA) are important in signal processing. A unified (dual-purpose) algorithm is capable of both PCA and MCA, which is valuable for reducing the complexity and cost of hardware implementations, while a coupled algorithm can mitigate the speed-stability problem that exists in most noncoupled algorithms. Although unified and coupled algorithms have these advantages over single-purpose and noncoupled algorithms, respectively, only a few of either have been proposed and, to the best of the authors' knowledge, no algorithm that is both unified and coupled has been proposed. In this paper, based on a novel information criterion, we propose two self-stabilizing algorithms that are both unified and coupled. The derivation is simpler than with traditional methods because the inverse Hessian matrix need not be calculated. Experimental results show that the proposed algorithms outperform existing coupled and unified algorithms.
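For intuition about "coupled", here is a toy system of my own construction (not the paper's information-criterion-derived rules) in which the eigenvector estimate w and the eigenvalue estimate lam are updated jointly; this joint evolution is what lets coupled rules sidestep the speed-stability tradeoff.

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((5, 5))
C = A @ A.T / 5

eta, gamma = 0.05, 0.1
w = rng.standard_normal(5)
w /= np.linalg.norm(w)
lam = 0.0            # eigenvalue estimate, updated jointly with w ("coupled")

for _ in range(5000):
    Cw = C @ w
    r = (w @ Cw) / (w @ w)
    # Toy coupled update: w is corrected using the current eigenvalue estimate
    # (plus a norm-restoring term), while lam tracks the Rayleigh quotient.
    w = w + eta * (Cw - lam * w + (1.0 - w @ w) * w)
    lam = lam + gamma * (r - lam)

vals, vecs = np.linalg.eigh(C)
print(round(lam, 4), round(vals[-1], 4))                   # lam -> largest eigenvalue
print(round(abs(w @ vecs[:, -1]) / np.linalg.norm(w), 4))  # ~1.0
```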

15.
The generalized Hebbian algorithm (GHA) is one of the most widely used principal component analysis (PCA) neural network (NN) learning algorithms, and its learning rates play an important role in its convergence. Traditionally, the learning rates of GHA are required to converge to zero so that convergence can be analyzed by studying the corresponding deterministic continuous-time (DCT) equations; however, requiring learning rates to approach zero is impractical in applications because of computational round-off limitations and tracking requirements. In this paper, nonzero-approaching adaptive learning rates are proposed to overcome this problem. The proposed adaptive learning rates converge to positive constants, which not only speeds up the algorithm's evolution considerably but also guarantees global convergence of the GHA algorithm. Convergence is studied in detail by analyzing the corresponding deterministic discrete-time (DDT) equations, and extensive simulations illustrate the theory.
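A deterministic GHA sketch with a learning rate that decays toward a positive constant rather than zero (the decay schedule below is a generic choice of mine; the paper derives its own adaptive rates):

```python
import numpy as np

rng = np.random.default_rng(9)
A = rng.standard_normal((6, 6))
C = A @ A.T / 6

m = 3                                  # number of principal components
W = 0.1 * rng.standard_normal((m, 6))  # rows will converge to eigenvectors

for k in range(4000):
    # An adaptive learning rate that approaches a positive constant (0.02),
    # not zero.
    eta = 0.02 + 0.1 * np.exp(-k / 500)
    WC = W @ C
    # Deterministic GHA update: W <- W + eta * (W C - LT[W C W'] W),
    # where LT[.] keeps the lower triangle (including the diagonal).
    W = W + eta * (WC - np.tril(WC @ W.T) @ W)

top = np.linalg.eigh(C)[1][:, ::-1][:, :m]   # true top-m eigenvectors (columns)
print(np.round(np.abs(W @ top), 3))          # ~ identity matrix
```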

16.
Principal component analysis (PCA) and minor component analysis (MCA) are similar but have different dynamical performances. Unexpectedly, a sequential extraction algorithm for MCA proposed by Luo and Unbehauen [11] does not work for MCA, although it works for PCA. We propose a different sequential-addition algorithm that does work for MCA, and we also show a conversion mechanism by which any PCA algorithm is converted to a dynamically equivalent MCA algorithm and vice versa.

17.
The Möller algorithm is a self-stabilizing minor component analysis (MCA) algorithm. This paper studies the convergence and dynamic characteristics of the Möller algorithm using the deterministic discrete-time (DDT) method which, unlike other analysis methods, preserves the discrete-time character of the algorithm and imposes no vanishing-learning-rate constraint. By analyzing the dynamics of the weight vector, several convergence conditions are derived that are beneficial to the algorithm's application. Computer simulations and real applications confirm the conclusions of the analysis.

18.
This article introduces new low-cost algorithms for the adaptive estimation and tracking of principal and minor components. The proposed algorithms are based on the well-known OPAST method, which is adapted and extended to achieve the desired MCA or PCA (minor or principal component analysis). For the PCA case, we propose efficient solutions using Givens rotations to estimate the principal components from the weight matrix given by the OPAST method. These solutions are then extended to the MCA case by using a transformed data covariance matrix, such that the desired minor components are obtained from the PCA of the new (transformed) matrix. Finally, as a byproduct of our PCA algorithm, we propose a fast adaptive algorithm for data whitening that is shown to outperform the recently proposed RLS-based whitening method.
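The whitening byproduct is easiest to see in its batch form (the paper's contribution is an adaptive OPAST-based scheme that tracks this quantity online; the sketch below is only the batch operation it approximates):

```python
import numpy as np

rng = np.random.default_rng(10)
X = rng.standard_normal((4, 4)) @ rng.standard_normal((4, 2000))  # correlated data

# Batch whitening with the inverse square root of the sample covariance.
C = X @ X.T / X.shape[1]
vals, vecs = np.linalg.eigh(C)
W = vecs @ np.diag(vals ** -0.5) @ vecs.T    # C^{-1/2}
Z = W @ X

print(np.round(Z @ Z.T / Z.shape[1], 3))     # ~ identity covariance
```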

19.
Minor component analysis is an important tool in signal processing. To date, however, algorithms capable of extracting multiple minor components remain rare, and some existing ones carry many restrictive conditions. To address these problems, the Möller algorithm is extended to a multiple minor component extraction algorithm by means of a weighting matrix. The algorithm places no requirements on the eigenvalues of the input signal and converges well without any norm-constraining measure. Simulation results show that the algorithm extracts multiple minor components in parallel and converges faster than some existing algorithms.
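For comparison with the parallel weighted-matrix extension described here, a simple sequential-deflation baseline for multiple minor components (my sketch, not the paper's method) looks like this:

```python
import numpy as np

rng = np.random.default_rng(11)
A = rng.standard_normal((6, 6))
C = A @ A.T / 6

def minor_vec(C, eta=0.02, iters=20000, seed=0):
    """One minor component via a self-stabilizing Oja-type rule."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(C.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        Cw = C @ w
        w = w - eta * ((w @ w) * Cw - (w @ Cw) * w)
    return w / np.linalg.norm(w)

# Sequential deflation: shift each extracted minor component to the top of
# the spectrum so the next run finds the following one.
Cd = C.copy()
shift = np.trace(C)
minors = []
for i in range(2):
    w = minor_vec(Cd, seed=i)
    minors.append(w)
    Cd = Cd + shift * np.outer(w, w)

true2 = np.linalg.eigh(C)[1][:, :2]                     # two smallest eigenvectors
print(np.round(np.abs(np.array(minors) @ true2), 3))    # ~ identity matrix
```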

20.
When the independent sources are known to be nonnegative and well-grounded, meaning that they have a nonzero pdf in the region of zero, Oja and Plumbley have proposed a "nonnegative principal component analysis (PCA)" algorithm to separate these positive sources. In general, it is very difficult to prove the convergence of a discrete-time independent component analysis (ICA) learning algorithm. However, by using the skew-symmetry property of this discrete-time "nonnegative PCA" algorithm, its global convergence can be proven provided the learning rate satisfies a suitable condition. Simulation results further illustrate the advantages of this theory.
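A minimal sketch of the nonnegative-separation setting, my own construction in the spirit of the Oja-Plumbley approach (the exact discrete-time rule and its learning-rate condition are in the paper): whiten without centering so the nonnegative sources remain a pure rotation away, then rotate with a skew-symmetric update driven by the negative parts of the outputs.

```python
import numpy as np

rng = np.random.default_rng(12)
n, T = 3, 5000
S = rng.exponential(1.0, size=(n, T))     # nonnegative, well-grounded sources
X = rng.standard_normal((n, n)) @ S       # observed linear mixtures

# Whiten with the covariance of the *centered* data but apply the transform to
# the uncentered data, so the sources stay a pure rotation away.
Xc = X - X.mean(axis=1, keepdims=True)
vals, vecs = np.linalg.eigh(Xc @ Xc.T / T)
Z = (vecs @ np.diag(vals ** -0.5) @ vecs.T) @ X

def neg_ms(W):
    """Mean-squared negative part of the outputs: zero at a separating rotation."""
    return float((np.minimum(W @ Z, 0.0) ** 2).mean())

W = np.linalg.qr(rng.standard_normal((n, n)))[0]   # random initial rotation
print("before:", round(neg_ms(W), 4))

eta = 0.002
for t in range(4 * T):
    y = W @ Z[:, t % T]
    y_neg = np.minimum(y, 0.0)
    # Skew-symmetric update: rotates the outputs away from the negative
    # orthant; it vanishes exactly when all outputs are nonnegative.
    G = np.outer(y, y_neg) - np.outer(y_neg, y)
    W = W + eta * G @ W
    if (t + 1) % 1000 == 0:                # re-orthonormalize: the discrete
        U, _, Vt = np.linalg.svd(W)        # step keeps W orthogonal only to
        W = U @ Vt                         # first order

print("after: ", round(neg_ms(W), 4))      # ~0: separated nonnegative sources
```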
