相似文献 (Similar Articles)
20 similar articles found.
1.
Recently, many unified learning algorithms have been developed to solve the tasks of principal component analysis (PCA) and minor component analysis (MCA). These unified algorithms extract principal components and, with a simple sign change, can also serve as minor component extractors, which is of practical significance when implementing the algorithms. Convergence of the existing unified algorithms is guaranteed only under the condition that the learning rate approaches zero, which is unrealistic in many applications. In this paper, we propose a unified PCA/MCA algorithm with a constant learning rate, and derive sufficient conditions that guarantee convergence by analyzing the discrete-time dynamics of the proposed algorithm. The theoretical results lay a solid foundation for applications of the proposed algorithm.
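The sign-switch idea can be illustrated with a generic Oja-style rule; this is a minimal sketch with explicit renormalization, not the paper's exact constant-learning-rate update, and the function name is hypothetical:

```python
import numpy as np

def unified_pca_mca(C, sign=+1, eta=0.01, iters=5000, seed=0):
    # sign=+1 extracts the principal component of C, sign=-1 the minor one.
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(C.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        # Oja-style update; flipping the sign turns the PCA rule into an MCA rule
        w = w + sign * eta * (C @ w - (w @ C @ w) * w)
        w /= np.linalg.norm(w)   # renormalize for numerical stability
    return w
```

With `C = diag(3, 1, 0.5)`, `sign=+1` converges to the first coordinate axis and `sign=-1` to the last, using the same code path.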

2.
We propose a self-stabilizing dual-purpose algorithm for extracting eigenpairs of a signal's autocorrelation matrix. By changing only a single sign, the algorithm switches between principal and minor eigenvector estimation, and the corresponding eigenvalue is estimated from the norm of the estimated eigenvector, so that complete eigenpairs are extracted. Convergence of the proposed algorithm is analyzed with the deterministic discrete-time method, and the boundary conditions for convergence are determined. Simulations comparing the proposed algorithm with existing ones verify its convergence performance.
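The "eigenvalue from the norm" idea can be sketched with an unnormalized self-stabilizing rule: at its fixed point C w = ||w||^2 w, so the squared norm converges to an eigenvalue. This is an illustrative principal-side sketch (the dual minor-side variant would flip a sign, as the abstract describes), not the paper's exact algorithm:

```python
import numpy as np

def eigenpair_estimate(C, eta=0.005, iters=20000, seed=1):
    # Self-stabilizing rule without normalization: at the fixed point
    # C w = (w.w) w, so w.w converges to an eigenvalue of C.
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(C.shape[0]) * 0.1
    for _ in range(iters):
        w = w + eta * (C @ w - (w @ w) * w)
    lam = w @ w                    # eigenvalue estimate from the norm
    v = w / np.linalg.norm(w)      # unit eigenvector estimate
    return lam, v
```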

3.
Principal component analysis (PCA) and minor component analysis (MCA) are similar but exhibit different dynamical performances. Unexpectedly, a sequential extraction algorithm for MCA proposed by Luo and Unbehauen [11] does not work for MCA, although it works for PCA. We propose a different sequential-addition algorithm that does work for MCA. We also show a conversion mechanism by which any PCA algorithm can be converted to a dynamically equivalent MCA algorithm and vice versa.
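The paper's conversion concerns dynamical equivalence of learning rules; a simpler spectral version of the same idea can be sketched as follows: the minor component of C is the principal component of sigma*I - C for any sigma bounding the largest eigenvalue, so any PCA routine can be reused for MCA. Function names here are illustrative:

```python
import numpy as np

def power_iteration(A, iters=500):
    # A stand-in PCA routine: returns the dominant eigenvector of A.
    v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v

def mca_via_pca(C, pca_solver=power_iteration):
    # PCA-to-MCA conversion: reflect the spectrum so the smallest
    # eigenvalue of C becomes the largest eigenvalue of sigma*I - C.
    sigma = np.trace(C)   # the trace upper-bounds lambda_max for PSD C
    return pca_solver(sigma * np.eye(C.shape[0]) - C)
```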

4.
This paper presents a unified theory of a class of learning neural nets for principal component analysis (PCA) and minor component analysis (MCA). First, some fundamental properties that all neural nets in the class have in common are addressed. Second, a subclass called the generalized asymmetric learning algorithm is investigated, and the kind of asymmetric structure required in general to obtain the individual eigenvectors of the correlation matrix of a data sequence is clarified. Third, focusing on a single-neuron model, a systematic way of deriving both PCA and MCA learning algorithms is shown, through which a relation between the normalization in PCA algorithms and that in MCA algorithms is revealed. This work was presented, in part, at the Third International Symposium on Artificial Life and Robotics, Oita, Japan, January 19–21, 1998.

5.
Dynamics of Generalized PCA and MCA Learning Algorithms
Principal component analysis (PCA) and minor component analysis (MCA) are two important statistical tools with many applications in signal processing and data analysis. PCA and MCA neural networks (NNs) can be used to extract principal and minor components from input data online, so it is of interest to develop generalized learning algorithms for PCA and MCA NNs. Some novel generalized PCA and MCA learning algorithms are proposed in this paper. Convergence of PCA and MCA learning algorithms is an essential issue in practical applications. Traditionally, convergence has been studied via the deterministic continuous-time (DCT) method, which requires the learning rate to approach zero, a condition that is unrealistic in many applications. In this paper, the deterministic discrete-time (DDT) method is used to study the dynamical behaviors of the proposed algorithms. The DDT method is better suited to the convergence analysis because it does not impose the DCT method's vanishing-learning-rate constraint. It is proven that, under some mild conditions, the weight vectors of the proposed algorithms converge exponentially to the principal or minor component. Simulation results further illustrate the theoretical results.
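The DDT viewpoint replaces the random samples in an online rule by the true covariance, giving a deterministic map with the same constant learning rate. The following sketch (hypothetical function name, generic Oja rule rather than the paper's algorithms) runs an online PCA rule next to its DDT counterpart and compares their alignment with the principal eigenvector:

```python
import numpy as np

def ddt_vs_online(eta=0.02, steps=4000, seed=3):
    # Online stochastic PCA rule vs. its deterministic discrete-time (DDT)
    # counterpart, in which samples x are replaced by the covariance C.
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
    lam = np.array([3.0, 2.0, 1.0, 0.5])
    C = Q @ np.diag(lam) @ Q.T              # true covariance
    top = Q[:, 0]                           # principal eigenvector
    L = Q @ np.diag(np.sqrt(lam))           # sampling factor, C = L L^T
    w_onl = np.ones(4) / 2.0
    w_ddt = w_onl.copy()
    for _ in range(steps):
        x = L @ rng.standard_normal(4)      # sample with covariance C
        y = x @ w_onl
        w_onl = w_onl + eta * (y * x - y * y * w_onl)                    # online Oja
        w_ddt = w_ddt + eta * (C @ w_ddt - (w_ddt @ C @ w_ddt) * w_ddt)  # DDT map
    cos_d = abs(w_ddt @ top) / np.linalg.norm(w_ddt)
    cos_o = abs(w_onl @ top) / np.linalg.norm(w_onl)
    return cos_d, cos_o
```

The DDT iterate converges cleanly with a constant learning rate, while the online iterate fluctuates around the same fixed point.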

6.
Principal/minor component analysis (PCA/MCA), generalized principal/minor component analysis (GPCA/GMCA), and singular value decomposition (SVD) algorithms are important techniques for feature extraction. In the convergence analysis of these algorithms, the deterministic discrete-time (DDT) method can effectively reveal the dynamic behavior of PCA/MCA and GPCA/GMCA algorithms. However, the dynamic behavior of SVD algorithms has not been studied quantitatively because of their special structure. In this paper, for the first time, we exploit the advantages of the DDT method in PCA algorithm analysis to study the dynamics of SVD algorithms. First, taking the cross-coupled Hebbian algorithm as an example, we concatenate the two cross-coupled variables into a single vector and thereby obtain a PCA-like DDT system. Second, we analyze the discrete-time dynamic behavior and stability of this PCA-like DDT system in detail based on the DDT method, and obtain bounds on the weight vectors and the learning rate. Further discussion shows the universality of the proposed method for analyzing other SVD algorithms. The proposed method thus provides a new way to study the dynamical convergence properties of SVD algorithms.
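A common form of the cross-coupled Hebbian rule can be sketched as below (step sizes and renormalization are illustrative, not the paper's exact system). Stacking z = [u; v] turns the two coupled updates into a single PCA-like iteration on the symmetric matrix [[0, A], [A^T, 0]], which is the concatenation trick described above:

```python
import numpy as np

def cross_coupled_hebbian(A, eta=0.01, iters=5000, seed=0):
    # Estimates the top singular triple (sigma, u, v) of A via a
    # cross-coupled Hebbian rule: u learns from A v, v learns from A^T u.
    rng = np.random.default_rng(seed)
    m, n = A.shape
    u = rng.standard_normal(m); u /= np.linalg.norm(u)
    v = rng.standard_normal(n); v /= np.linalg.norm(v)
    for _ in range(iters):
        s = u @ A @ v                      # current singular value estimate
        u = u + eta * (A @ v - s * u)      # update for the left vector
        v = v + eta * (A.T @ u - s * v)    # coupled update for the right vector
        u /= np.linalg.norm(u); v /= np.linalg.norm(v)
    return u @ A @ v, u, v
```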

7.
An adaptive learning algorithm for principal component analysis
Principal component analysis (PCA) is one of the most general-purpose feature extraction methods. A variety of learning algorithms for PCA have been proposed. Many conventional algorithms, however, either diverge or converge very slowly if the learning rate parameters are not chosen properly. In this paper, an adaptive learning algorithm (ALA) for PCA is proposed. By adaptively selecting the learning rate parameters, we show that the m weight vectors in the ALA converge to the first m principal component vectors at almost the same rate. Compared with Sanger's generalized Hebbian algorithm (GHA), the ALA can quickly find the desired principal component vectors in cases where the GHA fails to do so. Finally, simulation results illustrate the effectiveness of the ALA.
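A batch-form GHA with per-neuron adaptive step sizes can be sketched as follows. The adaptive choice beta / lambda_hat_i used here is illustrative (it equalizes per-neuron convergence rates) and is not necessarily the paper's exact ALA rule; the function name is hypothetical:

```python
import numpy as np

def gha_adaptive(C, m, beta=0.5, iters=3000, seed=0):
    # Batch-form generalized Hebbian algorithm (GHA) on a covariance C.
    # Each neuron i gets step size beta / lambda_hat_i, where lambda_hat_i
    # is its current eigenvalue estimate, so all m rows converge at
    # comparable rates instead of the slowest eigenvalue dominating.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((m, C.shape[0])) * 0.1   # rows = weight vectors
    for _ in range(iters):
        M = W @ C @ W.T
        G = W @ C - np.tril(M) @ W                   # Sanger's deflation rule
        lam = np.maximum(np.diag(M), 1e-3)           # per-neuron eigenvalue estimates
        W = W + (beta / lam)[:, None] * G
    return W
```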

8.
This article introduces new low-cost algorithms for the adaptive estimation and tracking of principal and minor components. The proposed algorithms are based on the well-known OPAST method, which is adapted and extended to achieve the desired MCA or PCA (minor or principal component analysis). For the PCA case, we propose efficient solutions using Givens rotations to estimate the principal components from the weight matrix given by the OPAST method. These solutions are then extended to the MCA case by using a transformed data covariance matrix, in such a way that the desired minor components are obtained from the PCA of the new (transformed) matrix. Finally, as a byproduct of our PCA algorithm, we propose a fast adaptive algorithm for data whitening that is shown to outperform the recently proposed RLS-based whitening method.
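The whitening byproduct can be illustrated offline; this batch PCA whitener is a simple stand-in for the adaptive OPAST-based scheme above (name and interface are illustrative):

```python
import numpy as np

def pca_whitener(X, eps=1e-8):
    # Batch PCA whitening: y = D^{-1/2} U^T (x - mu), so the whitened
    # data has (sample) covariance equal to the identity.
    mu = X.mean(axis=0)
    d, U = np.linalg.eigh(np.cov(X, rowvar=False))
    W = U / np.sqrt(d + eps)       # scale each eigenvector column by 1/sqrt(eigenvalue)
    return (X - mu) @ W, W
```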

9.
Principal component analysis (PCA) and minor component analysis (MCA) are powerful methodologies for a wide variety of applications such as pattern recognition and signal processing. In this paper, we first propose a differential equation for the generalized eigenvalue problem. We prove that the stable points of this differential equation are the eigenvectors corresponding to the largest eigenvalue. Based on this generalized differential equation, a class of PCA and MCA learning algorithms can be obtained. We demonstrate that many existing PCA and MCA learning algorithms are special cases of this class, which also includes some new and simpler MCA learning algorithms. Our results show that all learning algorithms in this class have the same order of convergence speed and are robust to implementation error.
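One differential equation of this kind can be sketched by Euler discretization; this is a hedged example of *a* flow whose stable points solve the generalized eigenvalue problem C w = rho B w, not necessarily the paper's exact equation:

```python
import numpy as np

def generalized_eig_flow(C, B, h=0.01, steps=20000, seed=0):
    # Euler discretization of the flow  dw/dt = C w - rho(w) B w,
    # where rho is the generalized Rayleigh quotient. The flow is
    # stationary exactly when C w = rho B w, and it is attracted to
    # the eigenvector of the largest generalized eigenvalue.
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(C.shape[0])
    for _ in range(steps):
        rho = (w @ C @ w) / (w @ B @ w)   # generalized Rayleigh quotient
        w = w + h * (C @ w - rho * (B @ w))
    return rho, w
```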

10.
Local PCA algorithms
In recent years, various principal component analysis (PCA) algorithms have been proposed. In this paper we use a general framework to describe those PCA algorithms that are based on Hebbian learning. For an important subset of these algorithms, the local algorithms, we fully describe their equilibria, in which all lateral connections are set to zero, and their local stability. We show how the parameters in the PCA algorithms must be chosen to obtain an algorithm that converges to a stable equilibrium providing principal component extraction.

11.
Feature extraction is an important component of a pattern recognition system. It performs two tasks: transforming the input parameter vector into a feature vector and/or reducing its dimensionality. A well-defined feature extraction algorithm makes the classification process more effective and efficient. Two popular methods for feature extraction are linear discriminant analysis (LDA) and principal component analysis (PCA). In this paper, the minimum classification error (MCE) training algorithm, originally proposed for optimizing classifiers, is investigated for feature extraction, and a generalized MCE (GMCE) training algorithm is proposed to remedy the shortcomings of the MCE training algorithm. The LDA, PCA, MCE, and GMCE algorithms all extract features through linear transformations. Support vector machine (SVM) is a more recently developed pattern classification algorithm that uses non-linear kernel functions to achieve non-linear decision boundaries in the parameter space. In this paper, SVM is also investigated and compared with the linear feature extraction algorithms.

12.
In this paper, we first propose a differential equation for the generalized eigenvalue problem. We prove that the stable points of this differential equation are the eigenvectors corresponding to the largest eigenvalue. Based on this generalized differential equation, a class of principal component analysis (PCA) and minor component analysis (MCA) learning algorithms can be obtained. We demonstrate that many existing PCA and MCA learning algorithms are special cases of this class, which also includes some new and simpler MCA learning algorithms. Our results show that all learning algorithms in this class have the same order of convergence speed and are robust to implementation error.

13.
韩旭, 刘强, 许瑾, 谌海云. 《计算机科学》, 2018, 45(Z11): 278-281, 307
PCA (Principal Component Analysis) is one of the most important dimensionality-reduction algorithms, but opinions in the literature differ on the information loss incurred during reduction. This paper therefore proposes an improved algorithm, SPCA (Similar Principal Component Analysis), which retains part of the detail information during processing. Taking the MNIST handwritten-digit database as an example, nearest-feature screening is applied to the original vector set to obtain a multidimensional composite non-orthogonal feature vector set; the vector set derived from the training set is then compared against that of the test set to recognize the handwritten digits under test. The results show that the algorithm achieves fairly complete recognition of the test samples with a relatively small number of training samples.

14.
Principal component analysis (PCA) has been proven to be an efficient method in pattern recognition and image analysis. Recently, PCA has been extensively employed in face-recognition algorithms such as eigenface and fisherface, encouraging results have been reported and discussed in the literature, and many PCA-based face-recognition systems have been developed in the last decade. However, existing PCA-based face-recognition systems are hard to scale up because of the computational cost and memory requirements. To overcome this limitation, an incremental approach is usually adopted. Incremental PCA (IPCA) methods have been studied for many years in the machine-learning community, but their major limitation is that there is no guarantee on the approximation error. In view of this limitation, this paper proposes a new IPCA method based on the idea of a singular value decomposition (SVD) updating algorithm, namely the SVD updating-based IPCA (SVDU-IPCA) algorithm, for which we mathematically prove that the approximation error is bounded. A complexity analysis of the proposed method is also presented. Another characteristic of the SVDU-IPCA algorithm is that it can easily be extended to a kernel version. The proposed method has been evaluated on public databases, namely FERET, AR, and Yale B, and applied to existing face-recognition algorithms. Experimental results show that the difference in average recognition accuracy between the proposed incremental method and the batch-mode method is less than 1%, implying that the proposed SVDU-IPCA method gives a close approximation to batch-mode PCA.
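One block step of SVD updating can be sketched as follows. This is a hedged sketch in the spirit of SVDU-IPCA, not the paper's exact construction: given the singular values S and right singular vectors V of the data seen so far, a new block of rows Y is folded in by diagonalizing a small middle matrix, without revisiting the old data:

```python
import numpy as np

def svd_update(S, V, Y, tol=1e-10):
    # S: current singular values (k,); V: current right singular vectors (d, k);
    # Y: new block of data rows (m, d). Returns updated (S, V).
    L = Y @ V                                    # in-subspace coordinates of Y
    H = Y - L @ V.T                              # out-of-subspace residual
    Uh, sh, _ = np.linalg.svd(H.T, full_matrices=False)
    J = Uh[:, sh > tol]                          # orthonormal residual basis (d, r)
    K = J.T @ H.T                                # residual coordinates (r, m)
    k, r, m = S.size, J.shape[1], Y.shape[0]
    M = np.zeros((k + m, k + r))                 # small middle matrix
    M[:k, :k] = np.diag(S)
    M[k:, :k] = L
    M[k:, k:] = K.T
    _, S2, Vt2 = np.linalg.svd(M, full_matrices=False)
    V2 = np.hstack([V, J]) @ Vt2.T               # rotate the enlarged basis
    return S2, V2                                # truncate to top-k for IPCA
```

For IPCA one would keep only the top-k columns after each update; kept untruncated, the update reproduces the batch SVD exactly.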

15.
Minor component analysis is an important tool in signal processing. To date, however, few algorithms can extract multiple minor components, and several existing ones impose restrictive conditions. To address these problems, a weighting-matrix approach is used to extend Möller's algorithm to the extraction of multiple minor components. The extended algorithm places no requirements on the eigenvalues of the input signal and converges well without any norm-constraining measures. Simulations show that the algorithm extracts multiple minor components in parallel and converges faster than several existing algorithms.
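Parallel extraction of several minor components can be illustrated with a simpler classical scheme; this sketch runs orthogonal (subspace) iteration on sigma*I - C, which maps minor components of C to principal ones, rather than the weighted Möller rule of the paper:

```python
import numpy as np

def multiple_mca(C, p, iters=500, seed=0):
    # Extracts the p minor components of a PSD matrix C in parallel.
    rng = np.random.default_rng(seed)
    sigma = np.trace(C)                      # upper bound on lambda_max
    A = sigma * np.eye(C.shape[0]) - C       # spectrum-reflected matrix
    W = rng.standard_normal((C.shape[0], p))
    for _ in range(iters):
        W, _ = np.linalg.qr(A @ W)           # orthogonal (subspace) iteration
    return W                                 # columns: minor components, ascending eigenvalue
```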

16.
Principal component analysis (PCA) by neural networks is one of the most frequently used feature extraction methods. To process huge data sets, many neural-network-based learning algorithms for PCA have been proposed. However, traditional algorithms are not globally convergent. In this paper, a new PCA learning algorithm based on a cascade recursive least squares (CRLS) neural network is proposed. This algorithm guarantees that the network weight vector converges globally to an eigenvector associated with the largest eigenvalue of the input covariance matrix; a rigorous mathematical proof is given. Simulation results show the effectiveness of the algorithm.
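A single-unit recursive-least-squares PCA update can be sketched as below. This PAST-style rule is an illustrative stand-in for the cascade RLS network, not the paper's exact CRLS algorithm; the gain accumulator plays the role of the RLS normalization:

```python
import numpy as np

def past_pca(xs, beta=1.0, delta0=10.0):
    # xs: iterable of data vectors. Returns a unit-norm estimate of the
    # principal eigenvector of the data covariance.
    w = np.zeros(xs.shape[1])
    w[0] = 1.0                              # arbitrary unit initial vector
    delta = delta0                          # RLS gain accumulator
    for x in xs:
        y = w @ x                           # neuron output
        delta = beta * delta + y * y        # update the accumulated energy
        w = w + (y / delta) * (x - y * w)   # RLS-normalized Hebbian step
    return w / np.linalg.norm(w)
```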

17.
A Class of Self-Stabilizing MCA Learning Algorithms
In this letter, we propose a class of self-stabilizing learning algorithms for minor component analysis (MCA) which includes several well-known MCA learning algorithms. Self-stabilizing means that the sign of the change in the weight vector's length is independent of the presented input vector. For these algorithms, a rigorous global convergence proof is given and the convergence rate is also discussed. By combining the positive properties of these algorithms, a new learning algorithm with improved performance is proposed. Simulations are employed to confirm the theoretical results.
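One norm-invariant member of this family can be sketched as follows (an illustrative rule, not necessarily one of the letter's algorithms): the update leaves ||w|| unchanged to first order, so the sign of the length change does not depend on the presented input:

```python
import numpy as np

def self_stabilizing_mca(C, eta=0.005, iters=10000, seed=0):
    # Self-stabilizing MCA rule: w'(C w (w.w) - (w.C w) w) = 0, so the
    # weight-vector length is preserved to first order in eta.
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(C.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        w = w - eta * ((w @ w) * (C @ w) - (w @ C @ w) * w)
    return w
```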

18.
张红娟, 郭崇慧. 《控制工程》, 2007, 14(4): 398-400
This paper introduces kernel independent component analysis, a new method for analyzing functional magnetic resonance imaging (fMRI) data that combines kernel methods with canonical correlation analysis. The kernel ICA method proposed by Bach and Jordan is applied to fMRI data analysis and, based on the correlation coefficients between the experimentally obtained time series and a reference function, its results are compared with those of two classical component analysis methods: principal component analysis (PCA) and the fixed-point algorithm FastICA. The results show that, for fMRI data, kernel ICA and FastICA outperform PCA.

19.
Accurate illuminant estimation from digital image data is a fundamental step of practically every image colour correction. Combinational illuminant estimation schemes have been shown to improve estimation accuracy significantly compared with other colour constancy algorithms. These schemes combine the individual estimates of simpler colour constancy algorithms in some 'intelligent' manner into a joint and usually more accurate illuminant estimate. Among them, a combinational method based on support vector regression (SVR) was proposed recently and demonstrated more accurate illuminant estimation (Li et al., IEEE Trans. Image Process. 23(3), 1194–1209, 2014). We extended this method with our previously introduced convolutional framework, in which the illuminant is estimated by a set of image-specific filters generated using linear analysis. In this work, the convolutional framework is reformulated so that each image-specific filter obtained by principal component analysis (PCA) produces one illuminant estimate, and all these individual estimates are then combined into a joint illuminant estimate using SVR. Each illuminant estimate produced by a single image-specific PCA filter within the convolutional framework thus serves as one base algorithm for the SVR-based combinational method. The proposed method was validated on the well-known Gehler image dataset, reprocessed and prepared by the author Shi, as well as on the NUS multi-camera dataset. The median and trimean angular errors were (non-significantly) lower for the proposed method than for the original SVR-based combinational method, while our method used just 6 image-specific PCA filters where the original required 12 base algorithms for similar results. The proposed method thus formally unifies the grey-edge framework, PCA, linear filtering theory, and SVR regression for combinational illuminant estimation.

20.
We propose a constrained EM algorithm for principal component analysis (PCA) using a coupled probability model derived from single-standard factor analysis models with an isotropic noise structure. Standard probabilistic PCA, especially in the noise-free case, can find only a vector set that is a linear superposition of the principal components and requires postprocessing, such as the diagonalization of symmetric matrices. By contrast, the proposed algorithm finds the actual principal components, sorted in descending order of eigenvalue, and requires no additional calculation or postprocessing. The method is easily applied to kernel PCA. It is also shown that the new EM algorithm can be derived from a generalized least-squares formulation.
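The baseline the abstract contrasts against can be sketched concretely: zero-noise EM for probabilistic PCA (Roweis-style) recovers only a basis of the principal subspace, so an extra orthogonalization/rotation step is needed as postprocessing; the constrained EM algorithm above is designed to avoid exactly that step. Names here are illustrative:

```python
import numpy as np

def em_pca(X, k, iters=100, seed=0):
    # Zero-noise EM for probabilistic PCA. Plain EM only finds a linear
    # superposition of the principal components, so we finish with an
    # SVD of W (the postprocessing the constrained EM variant avoids).
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    W = rng.standard_normal((X.shape[1], k))
    for _ in range(iters):
        Z = np.linalg.solve(W.T @ W, W.T @ Xc.T)    # E-step: latent coordinates
        W = Xc.T @ Z.T @ np.linalg.inv(Z @ Z.T)     # M-step: new basis
    U, _, _ = np.linalg.svd(W, full_matrices=False) # postprocessing rotation
    return U                                        # orthonormal subspace basis
```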
