Similar Documents
1.
Dynamics of Generalized PCA and MCA Learning Algorithms
Principal component analysis (PCA) and minor component analysis (MCA) are two important statistical tools with many applications in signal processing and data analysis. PCA and MCA neural networks (NNs) can be used to extract principal and minor components from input data online, so it is of interest to develop generalized PCA and MCA learning algorithms. Some novel generalized PCA and MCA learning algorithms are proposed in this paper. Convergence of PCA and MCA learning algorithms is an essential issue in practical applications. Traditionally, convergence is studied via the deterministic continuous-time (DCT) method, which requires the learning rate of the algorithms to approach zero, an assumption that is unrealistic in many practical applications. In this paper, the deterministic discrete-time (DDT) method is used instead to study the dynamical behavior of the proposed algorithms; it is better suited to convergence analysis because it does not impose that constraint. It is proven that, under some mild conditions, the weight vector in the proposed algorithms converges exponentially to the principal or minor component. Simulation results further illustrate the theoretical results.
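For concreteness, here is a minimal sketch of the kind of constant-learning-rate DDT iteration analyzed in this line of work, using the classical Oja-type PCA rule and its sign-flipped MCA counterpart; the paper's generalized algorithms differ in detail, and the per-step renormalization is a guard I add against the well-known norm drift of the plain discrete-time rule.

```python
import numpy as np

def ddt_pca_mca(C, eta=0.05, steps=2000, mca=False, seed=0):
    """Oja-type DDT iteration w(k+1) = w(k) +/- eta*(C w - (w'Cw) w).

    With '+' the weight vector tracks the principal component of C; the
    sign-flipped variant tracks the minor component. eta stays constant,
    which is exactly the regime the DDT analysis addresses.
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(C.shape[0])
    w /= np.linalg.norm(w)
    sign = -1.0 if mca else 1.0
    for _ in range(steps):
        Cw = C @ w
        w = w + sign * eta * (Cw - (w @ Cw) * w)
        w /= np.linalg.norm(w)  # guard against norm drift (see lead-in)
    return w

# Demo: compare against the exact eigenvectors of a toy covariance matrix.
A = np.random.default_rng(1).standard_normal((200, 5))
C = A.T @ A / 200.0
vals, vecs = np.linalg.eigh(C)
print(abs(vecs[:, -1] @ ddt_pca_mca(C)))           # ~1: principal component
print(abs(vecs[:, 0] @ ddt_pca_mca(C, mca=True)))  # ~1: minor component
```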

2.
Adaptive total least squares algorithms of the minor component analysis (MCA) type are easily affected by the initial weight vector and may fail to converge. To solve this problem, an MCA learning algorithm that is not affected by the initial weight vector is proposed; the convergence conditions and the final convergence region of the algorithm are derived, and its correctness is verified by computer simulation.

3.
Minor component analysis (MCA) is a statistical method for extracting the eigenvector associated with the smallest eigenvalue of the covariance matrix of input signals. Convergence is essential if MCA algorithms are to be used in practical applications. Traditionally, the convergence of MCA algorithms is analyzed indirectly via their corresponding deterministic continuous-time (DCT) systems. However, the DCT method requires the learning rate to approach zero, which is not reasonable in many applications because of round-off limitations and tracking requirements. This paper studies the convergence of the deterministic discrete-time (DDT) system associated with the OJAn MCA learning algorithm. Unlike the DCT method, the DDT method does not require the learning rate to approach zero. Some important convergence results are obtained for the OJAn MCA learning algorithm via the DDT method, and simulations are carried out to illustrate them.

4.
Principal/minor component analysis (PCA/MCA), generalized principal/minor component analysis (GPCA/GMCA), and singular value decomposition (SVD) algorithms are important techniques for feature extraction. In the convergence analysis of these algorithms, the deterministic discrete-time (DDT) method can reveal the dynamic behavior of PCA/MCA and GPCA/GMCA algorithms effectively. However, the dynamic behavior of SVD algorithms has not been studied quantitatively because of their special structure. In this paper, for the first time, we utilize the advantages of the DDT method in PCA algorithm analysis to study the dynamics of SVD algorithms. First, taking the cross-coupled Hebbian algorithm as an example, by concatenating the two cross-coupled variables into a single vector, we obtain a PCA-like DDT system. Second, we analyze the discrete-time dynamic behavior and stability of this PCA-like DDT system in detail based on the DDT method, and obtain the boundedness of the weight vectors and the learning rate. Further discussion shows the universality of the proposed method for analyzing other SVD algorithms. As a result, the proposed method provides a new way to study the dynamical convergence properties of SVD algorithms.
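To make the concatenation trick concrete, here is a sketch under my own assumptions of one common form of the cross-coupled Hebbian rule (the paper's exact update may differ): stacking z = [u; v] turns the coupled pair into a single Oja-like iteration driven by the symmetric matrix [[0, A], [A.T, 0]], which is what makes the system "PCA-like".

```python
import numpy as np

def cross_coupled_hebbian(A, eta=0.02, steps=3000, seed=0):
    """Estimate the top singular pair (u, v) of A with coupled updates."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(m)
    v = rng.standard_normal(n)
    u /= np.linalg.norm(u)
    v /= np.linalg.norm(v)
    for _ in range(steps):
        Av, Atu = A @ v, A.T @ u
        s = u @ Av                     # current singular-value estimate
        # Coupled Hebbian steps; with z = [u; v] this is one Oja-like
        # update on the symmetric matrix [[0, A], [A.T, 0]].
        u, v = u + eta * (Av - s * u), v + eta * (Atu - s * v)
        u /= np.linalg.norm(u)         # keep the iterates bounded
        v /= np.linalg.norm(v)
    return u, v, u @ A @ v

A = np.random.default_rng(1).standard_normal((6, 4))
u, v, s = cross_coupled_hebbian(A)
print(s, np.linalg.svd(A, compute_uv=False)[0])  # both ~ largest singular value
```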

5.
Principal component analysis (PCA) and minor component analysis (MCA) are powerful tools for a wide variety of applications, such as pattern recognition and signal processing. In this paper, we first propose a differential equation for the generalized eigenvalue problem. We prove that the stable points of this differential equation are the eigenvectors corresponding to the largest eigenvalue. Based on this generalized differential equation, a class of PCA and MCA learning algorithms can be obtained. We demonstrate that many existing PCA and MCA learning algorithms are special cases of this class, which also includes some new and simpler MCA learning algorithms. Our results show that all learning algorithms in this class have the same order of convergence speed and are robust to implementation error.
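The abstract does not reproduce the differential equation itself; one representative form of such a flow for a pencil $(A, B)$, offered as my assumption rather than the authors' exact system, is

$$\dot{w} = A w - \frac{w^{\top} A w}{w^{\top} B w}\, B w,$$

whose equilibria satisfy $A w = \lambda B w$ with $\lambda = w^{\top} A w / w^{\top} B w$, i.e. they are generalized eigenvectors; for $B = I$ this reduces to an Oja-type PCA flow, and a sign flip of the right-hand side gives an MCA counterpart.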

6.
In this paper, we first propose a differential equation for the generalized eigenvalue problem. We prove that the stable points of this differential equation are the eigenvectors corresponding to the largest eigenvalue. Based on this generalized differential equation, a class of principal component analysis (PCA) and minor component analysis (MCA) learning algorithms can be obtained. We demonstrate that many existing PCA and MCA learning algorithms are special cases of this class, and this class includes some new and simpler MCA learning algorithms. Our results show that all the learning algorithms of this class have the same order of convergence speed, and they are robust to implementation error.

7.
Recently, many unified learning algorithms have been developed for principal component analysis (PCA) and minor component analysis (MCA). Such a unified algorithm can extract the principal component and, with a simple change of sign, also serve as a minor component extractor, which is of practical significance for implementation. However, convergence of the existing unified algorithms is guaranteed only when their learning rates approach zero, which is impractical in many applications. In this paper, we propose a unified PCA and MCA algorithm with a constant learning rate and derive sufficient conditions for convergence by analyzing the discrete-time dynamics of the proposed algorithm. The theoretical results lay a solid foundation for applications of the algorithm.
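A generic sign-switched rule of the kind described, offered only as a sketch since the paper's exact update is not given here, is

$$ w(k+1) = w(k) + \sigma \eta \left( C w(k) - \frac{w(k)^{\top} C w(k)}{w(k)^{\top} w(k)}\, w(k) \right), \qquad \sigma \in \{+1, -1\}, $$

where $C$ is the input autocorrelation matrix, $\sigma = +1$ extracts the principal component, $\sigma = -1$ the minor component, and the constant learning rate $\eta$ is precisely what the derived convergence conditions must constrain.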

8.
This paper presents a unified theory of a class of learning neural nets for principal component analysis (PCA) and minor component analysis (MCA). First, some fundamental properties are addressed which all neural nets in the class have in common. Second, a subclass called the generalized asymmetric learning algorithm is investigated, and the kind of asymmetric structure which is required in general to obtain the individual eigenvectors of the correlation matrix of a data sequence is clarified. Third, focusing on a single-neuron model, a systematic way of deriving both PCA and MCA learning algorithms is shown, through which a relation between the normalization in PCA algorithms and that in MCA algorithms is revealed. This work was presented, in part, at the Third International Symposium on Artificial Life and Robotics, Oita, Japan, January 19–21, 1998.

9.
A New Iterative Learning Control Algorithm Based on Vector Plot Analysis
Based on vector plot analysis, a geometric framework for iterative learning control is explored. First, a geometric analysis of the vector plots formed by conventional algorithms is used to derive the structure of a new iterative learning control algorithm; a complete theoretical convergence analysis of the derived algorithm is then given. The structure of the resulting algorithm is entirely different from that of existing algorithms, yet its convergence speed and accuracy are clearly improved. Simulation results demonstrate the effectiveness and superiority of the new algorithm.

10.
A Nonlinear Iterative Learning Control Algorithm Based on Vector Plot Analysis
Breaking away from the Arimoto line of thinking that has confined the field for many years, this work seeks a different route to studying iterative learning control, with the aim of building a geometric theory for it. Using geometric methods, a class of fast new iterative learning control algorithms is obtained by analyzing the vector plots formed by conventional algorithms, and a complete theoretical convergence analysis of the new algorithm structure is then given. Unlike all current iterative learning control algorithms, this class has a nonlinear structure. Simulation results demonstrate the effectiveness and superiority of the proposed algorithms.
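For reference, the classical Arimoto-type (D-type) update that this line of work departs from is

$$ u_{k+1}(t) = u_k(t) + \Gamma\, \dot{e}_k(t), \qquad e_k(t) = y_d(t) - y_k(t), $$

where $u_k$ is the control input on the $k$-th trial, $y_d$ the desired output, and $\Gamma$ a constant learning gain; the vector-plot algorithms above replace this linear correction with a nonlinear one.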

11.
Iterative Learning Control for Distributed Parameter Systems Based on Vector Plot Analysis
The iterative learning control problem for a class of uncertain linear distributed parameter systems is discussed. Based on vector plot analysis, a new iterative learning control algorithm for distributed parameter systems is proposed; unlike existing algorithms, it has a nonlinear form. In addition, a complete convergence analysis of the proposed algorithm is given in terms of a norm.

12.
Solving the L1-Regularized Hinge Loss Problem with Subgradient Methods
The hinge loss function is key to the success of support vector machines (SVM), and L1 regularization plays a central role in sparse learning. Since both are non-differentiable functions, higher-order gradient information cannot be used. This work systematically studies stochastic subgradient methods for solving the L1-regularized hinge loss problem on large-scale data. The stochastic forms of the direct subgradient method and the projected subgradient method are first described, and the convergence and convergence rates of the algorithms are analyzed theoretically. Experiments on large-scale real-world datasets show that the projected subgradient method converges faster and produces better sparsity on large sparse data. The experiments further clarify the influence of the projection threshold on the sparsity achieved by the algorithm.
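A minimal sketch of the two stochastic variants as I read the abstract; the function names are hypothetical, the objective is assumed to be $\lambda\lVert w\rVert_1$ plus the average hinge loss, and reading the "projection" step as soft-thresholding is my interpretation.

```python
import numpy as np

def direct_subgrad_step(w, x, y, lam, eta):
    # Direct stochastic subgradient of lam*||w||_1 + hinge loss on (x, y);
    # np.sign(w) is a valid subgradient choice for |.| (0 at w_j = 0).
    g = lam * np.sign(w)
    if y * (w @ x) < 1.0:       # margin violated: hinge term contributes -y*x
        g = g - y * x
    return w - eta * g

def projected_subgrad_step(w, x, y, lam, eta):
    # Hinge step first, then soft-threshold the weights; the threshold
    # eta*lam is what drives coordinates exactly to zero (sparsity).
    if y * (w @ x) < 1.0:
        w = w + eta * y * x
    return np.sign(w) * np.maximum(np.abs(w) - eta * lam, 0.0)

# Toy run on synthetic sparse data.
rng = np.random.default_rng(0)
n, d = 2000, 50
X = rng.standard_normal((n, d))
w_star = np.zeros(d); w_star[:5] = 1.0          # only 5 informative features
y = np.sign(X @ w_star)

w1 = np.zeros(d); w2 = np.zeros(d)
for t in range(n):
    i = rng.integers(n)
    eta = 0.1 / np.sqrt(t + 1)                  # diminishing step size
    w1 = direct_subgrad_step(w1, X[i], y[i], 0.01, eta)
    w2 = projected_subgrad_step(w2, X[i], y[i], 0.01, eta)
print((np.abs(w1) < 1e-12).sum(), (np.abs(w2) < 1e-12).sum())  # w2 is sparser
```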

13.
A Class of Iterative Learning Control Algorithms Based on Geometric Analysis
Based on geometric analysis, a geometric framework for iterative learning control is explored. First, a geometric analysis of the vector plots formed by the Arimoto algorithm is used to derive a class of new iterative learning algorithm structures; a complete theoretical convergence analysis of the derived algorithms is then given. The algorithm structure is entirely different from that of existing algorithms, yet its convergence speed and accuracy are clearly improved. Simulation results demonstrate the effectiveness and superiority of the new algorithms.

14.
Minor component analysis is an important tool in the field of signal processing. However, few algorithms to date can extract multiple minor components, and some existing ones are subject to many restrictive conditions. To address these problems, the Möller algorithm is extended into a multiple-minor-component extraction algorithm by means of a weighting matrix. The resulting algorithm places no requirements on the eigenvalues of the input signal and converges well even without any norm-constraining measures. Simulation results show that it can extract multiple minor components in parallel and converges faster than some existing algorithms.

15.
The generalized eigenvector plays an essential role in signal processing. In this paper, we present a novel neural network learning algorithm for estimating the generalized eigenvector of a Hermitian matrix pencil. Unlike some traditional algorithms, which require proper learning rates to be chosen before use, the proposed algorithm needs no learning rate and is thus well suited to real applications. By analyzing all of the equilibrium points, it is proven that the algorithm reaches convergence if and only if the weight vector of the neural network equals the generalized eigenvector corresponding to the largest generalized eigenvalue of the Hermitian matrix pencil. Using the deterministic discrete-time (DDT) method, convergence conditions that can be satisfied with probability 1 are also obtained. Simulation results show that the proposed algorithm has fast convergence and good numerical stability, and a real application demonstrates its effectiveness in tracking the optimal beamforming vector.
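Whatever the network's internals, the quantity it estimates is well defined: the generalized eigenvector of the Hermitian pencil $(A, B)$ associated with the largest generalized eigenvalue. A quick batch reference computation (not the paper's algorithm) with SciPy:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = X @ X.conj().T                     # Hermitian
Y = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
B = Y @ Y.conj().T + 4 * np.eye(4)     # Hermitian positive definite

vals, vecs = eigh(A, B)                # solves A v = lambda B v
w = vecs[:, -1]                        # eigenvector of the largest eigenvalue
print(np.allclose(A @ w, vals[-1] * B @ w))   # True
```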

16.
The quantum gate circuit neural network (QGCNN) is a quantum neural network model that uses quantum theory directly to design the network topology or the training algorithm. Momentum updating adds a momentum term to the weight update of a neural network, giving the weight vector a certain inertia as it changes and thus preventing it from oscillating continually during training. By introducing the momentum-update principle into the learning algorithm of the basic quantum gate circuit neural network, a quantum gate circuit network algorithm with momentum updating (QGCMA) is proposed. The study shows that QGCMA maintains a 100% convergence rate for the network while, at the same learning rate, converging faster than the basic algorithm.
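The momentum mechanism described here is the classical one; the sketch below shows that mechanism in isolation, with plain gradient descent for contrast, and does not attempt to reproduce the quantum-gate parameterization of the QGCNN.

```python
import numpy as np

def momentum_step(w, grad, velocity, eta=0.03, alpha=0.9):
    """Classical momentum update: the velocity term supplies the inertia
    that damps oscillation of the weight vector during training."""
    velocity = alpha * velocity - eta * grad
    return w + velocity, velocity

# Demo on an ill-conditioned quadratic (gradient H @ w): at the same
# learning rate, the momentum iterate ends up much closer to the optimum.
H = np.diag([1.0, 25.0])
w_gd = np.array([1.0, 1.0])
w_mo = np.array([1.0, 1.0])
v = np.zeros(2)
for _ in range(100):
    w_gd = w_gd - 0.03 * (H @ w_gd)
    w_mo, v = momentum_step(w_mo, H @ w_mo, v)
print(np.linalg.norm(w_gd), np.linalg.norm(w_mo))
```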

17.
A Survey of Training Algorithms for Support Vector Machines
The support vector machine (SVM) is a method developed on the basis of statistical learning theory, and its training is essentially a quadratic programming problem. This paper first outlines the basic principles of SVM, then surveys the state of research on SVM training algorithms at home and abroad, focusing on shrinking algorithms and algorithms with linear convergence; the performance of these algorithms is compared, and extensions of SVM are also briefly introduced. Finally, open problems and development trends in this field are discussed.
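The quadratic program in question is the standard SVM dual:

$$ \max_{\alpha}\; \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j K(x_i, x_j) \quad \text{s.t.} \quad 0 \le \alpha_i \le C,\;\; \sum_{i} \alpha_i y_i = 0. $$

The shrinking strategies surveyed exploit the fact that most $\alpha_i$ settle at the bounds $0$ or $C$ long before convergence, so those variables can be temporarily removed from the working set.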

18.
We compare several new SVD learning algorithms which are based on the subspace method in principal component analysis with the APEX-like algorithm proposed by Diamantaras. It is shown experimentally that the convergence of these algorithms is as fast as the convergence of the APEX-like algorithm.

19.
The MCA EXIN neuron for the minor component analysis
Minor component analysis (MCA) deals with recovering the eigenvector associated with the smallest eigenvalue of the autocorrelation matrix of the input data and is a very important tool for signal processing and data analysis. It is almost exclusively solved by linear neurons. This paper presents a linear neuron endowed with a novel learning law, called MCA EXIN, and analyzes its features. The neural literature on MCA is sparse: little theoretical basis is given (almost always focusing on the asymptotic ODE approximation), and only experiments on toy problems (at most four-dimensional) are presented, without numerical analysis. This work addresses these problems and lays sound theoretical foundations for neural MCA theory. In particular, it classifies MCA neurons according to their Riemannian metric and explains, from an analysis of the degeneracy of the error cost, their different behavior in approaching convergence. The cost landscape is studied and used as a basis for analyzing the asymptotic behavior. All phases of the dynamics of MCA algorithms are investigated in detail and, together with the numerical analysis, lead to the identification of three possible kinds of divergence, here called sudden, dynamic, and numerical. The importance of choosing small initial conditions is also explained. Considerable attention is given to the experimental part, where simulations on high-dimensional problems are presented and analyzed. The orthogonal regression, or total least squares (TLS), technique is also presented, together with a real-world application to identifying the parameters of an electrical machine. It is concluded that MCA EXIN is the best MCA neuron in terms of stability (no finite-time divergence), speed, and accuracy.
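The TLS connection is concrete: the TLS solution of $Ax \approx b$ is read off the minor component of the augmented matrix $[A\; b]^{\top}[A\; b]$, which is exactly the quantity an MCA neuron estimates online. A batch reference computation:

```python
import numpy as np

def tls_via_minor_component(A, b):
    """Total least squares from the minor component of [A b]^T [A b]."""
    Z = np.column_stack([A, b])
    vals, vecs = np.linalg.eigh(Z.T @ Z)
    v = vecs[:, 0]                     # minor component
    return -v[:-1] / v[-1]             # TLS parameter vector

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(100)
print(tls_via_minor_component(A, b))   # close to x_true
```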

20.
李晓波, 樊养余, 彭轲. 控制与决策 (Control and Decision), 2010, 25(9): 1399-1402
The deterministic continuous-time (DCT) method imposes strict conditions that are difficult to satisfy when analyzing minor component analysis (MCA) learning algorithms. Based on the deterministic discrete-time (DDT) method, the convergence conditions of the AMEX MCA learning algorithm are therefore studied. Theoretical analysis shows that the AMEX MCA learning algorithm converges to the total least squares solution of the system only when its learning factor and the autocorrelation matrix of the input signal satisfy certain conditions. Finally, simulation results verify the correctness of the convergence conditions.
