Similar Literature
20 similar documents found.
1.
Principal/minor component analysis (PCA/MCA), generalized principal/minor component analysis (GPCA/GMCA), and singular value decomposition (SVD) algorithms are important techniques for feature extraction. In the convergence analysis of these algorithms, the deterministic discrete-time (DDT) method can effectively reveal the dynamic behavior of PCA/MCA and GPCA/GMCA algorithms. However, the dynamic behavior of SVD algorithms has not been studied quantitatively because of their special structure. In this paper, for the first time, we exploit the advantages of the DDT method in PCA algorithm analysis to study the dynamics of SVD algorithms. First, taking the cross-coupled Hebbian algorithm as an example, we concatenate the two cross-coupled variables into a single vector and thereby obtain a PCA-like DDT system. Second, we analyze the discrete-time dynamic behavior and stability of this PCA-like DDT system in detail using the DDT method, and derive bounds on the weight vectors and the learning rate. Further discussion shows the universality of the proposed method for analyzing other SVD algorithms. As a result, the proposed method provides a new way to study the dynamical convergence properties of SVD algorithms.
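
A minimal NumPy sketch of the idea, assuming a generic form of the cross-coupled Hebbian rule (the paper's exact normalization may differ); the function name, learning rate, and step count are illustrative choices:

```python
import numpy as np

def cross_coupled_hebbian_svd(A, eta=0.005, steps=20000, seed=0):
    """Estimate the principal singular triple of A with a cross-coupled
    Hebbian iteration (generic form, written to expose the concatenation
    idea from the abstract)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    u = rng.standard_normal(m); u /= np.linalg.norm(u)   # left singular vector estimate
    v = rng.standard_normal(n); v /= np.linalg.norm(v)   # right singular vector estimate
    for _ in range(steps):
        s = u @ A @ v                          # running singular-value estimate
        u_new = u + eta * (A @ v - s * u)      # u is driven by A v
        v_new = v + eta * (A.T @ u - s * v)    # v is driven by A^T u
        u, v = u_new, v_new
    return u / np.linalg.norm(u), v / np.linalg.norm(v), u @ A @ v

# The concatenation trick: stacking z = [u; v] turns the coupled pair into a
# single PCA-like DDT recursion z <- z + eta*(B z - (z' B z / 2) z) with the
# symmetric block matrix B = [[0, A], [A.T, 0]], whose principal eigenvector
# is exactly the stacked pair of principal singular vectors.
```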

2.
Minor component analysis (MCA) is a statistical method for extracting the eigenvector associated with the smallest eigenvalue of the covariance matrix of input signals. Convergence is essential if MCA algorithms are to be used in practical applications. Traditionally, the convergence of MCA algorithms is analyzed indirectly via their corresponding deterministic continuous-time (DCT) systems. However, the DCT method requires the learning rate to approach zero, which is unreasonable in many applications because of round-off limitations and tracking requirements. This paper studies the convergence of the deterministic discrete-time (DDT) system associated with the OJAn MCA learning algorithm. Unlike the DCT method, the DDT method does not require the learning rate to approach zero. Several important convergence results are obtained for the OJAn MCA learning algorithm via the DDT method, and simulations are carried out to illustrate them.
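
A sketch of a DDT-style MCA iteration with a constant learning rate, assuming a common Oja-type normalization (the exact OJAn form varies slightly across papers); the demo data and parameters are illustrative:

```python
import numpy as np

def mca_ddt(C, eta=0.01, steps=20000, seed=0):
    """DDT iteration of an Oja-type MCA rule on a fixed covariance C."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(C.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(steps):
        # anti-Hebbian step pushes w toward the minor eigenvector of C;
        # the Rayleigh-quotient term keeps the norm from collapsing
        w = w - eta * (C @ w - (w @ C @ w) / (w @ w) * w)
    return w / np.linalg.norm(w)

# Demo: recover the minor component of a synthetic covariance matrix.
rng = np.random.default_rng(1)
X = rng.standard_normal((5000, 4)) @ np.diag([3.0, 2.0, 1.0, 0.3])
C = X.T @ X / len(X)
print(mca_ddt(C), np.linalg.eigh(C)[1][:, 0])  # should align up to sign
```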

3.
The generalized Hebbian algorithm (GHA) is one of the most widely used principal component analysis (PCA) neural network (NN) learning algorithms. The learning rates of GHA play an important role in its convergence. Traditionally, they are required to converge to zero so that convergence can be analyzed by studying the corresponding deterministic continuous-time (DCT) equations. However, requiring the learning rates to approach zero is impractical in applications because of computational round-off limitations and tracking requirements. In this paper, nonzero-approaching adaptive learning rates are proposed to overcome this problem. The proposed adaptive learning rates converge to positive constants, which not only speeds up the evolution of the algorithm considerably but also guarantees global convergence of GHA. The convergence is studied in detail by analyzing the corresponding deterministic discrete-time (DDT) equations. Extensive simulations are carried out to illustrate the theory.
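
A sketch of GHA (Sanger's rule) with a learning rate that decays to a positive constant rather than to zero; the decay schedule below is an illustrative stand-in, not the rate derived in the paper:

```python
import numpy as np

def gha(X, p=3, eta0=0.05, eta_inf=0.005, steps=20000, seed=0):
    """Generalized Hebbian algorithm estimating the top-p eigenvectors of
    the correlation matrix of the rows of X."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W = rng.standard_normal((p, n)) * 0.1      # rows estimate the eigenvectors
    for k in range(steps):
        x = X[rng.integers(len(X))]            # one random sample per step
        y = W @ x
        # nonzero-approaching schedule: eta -> eta_inf > 0 as k grows
        eta = eta_inf + (eta0 - eta_inf) / (1 + k / 1000)
        # Sanger's rule: Hebbian term minus lower-triangular deflation
        W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W
```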

4.
The convergence of Oja's principal component analysis (PCA) learning algorithms is difficult to study directly. Traditionally, their convergence is analyzed indirectly via certain deterministic continuous-time (DCT) systems. This method requires the learning rate to converge to zero, which is not a reasonable requirement in many practical applications. Recently, deterministic discrete-time (DDT) systems have been proposed instead to interpret the dynamics of these learning algorithms. Unlike DCT systems, DDT systems allow the learning rate to be a nonzero constant. This paper provides several important results on the convergence of a DDT system of Oja's PCA learning algorithm. Its contributions are: 1) A number of invariant sets are obtained, and it is shown that any trajectory starting from a point in an invariant set remains in the set forever, so nondivergence of the trajectories is guaranteed. 2) The convergence of the DDT system is analyzed rigorously: almost all trajectories starting from points in an invariant set are proven to converge exponentially to the unit eigenvector associated with the largest eigenvalue of the correlation matrix. In addition, an exponential convergence rate is obtained, providing useful guidelines for selecting learning rates for fast convergence. 3) Since trajectories may diverge, the choice of initial vectors is important; this paper suggests taking initial vectors from the unit hypersphere to guarantee convergence. 4) Simulation results are furnished to illustrate the theoretical findings.
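
A minimal sketch of the DDT system in question: Oja's single-unit rule iterated on a fixed correlation matrix C with a constant learning rate, initialized on the unit hypersphere as the paper suggests (the specific eta and step count are assumptions):

```python
import numpy as np

def oja_pca_ddt(C, eta=0.05, steps=2000, seed=0):
    """DDT system of Oja's single-unit PCA rule with constant learning rate."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(C.shape[0])
    w /= np.linalg.norm(w)                    # start on the unit hypersphere
    for _ in range(steps):
        # averaged Oja rule: for small eta the trajectory stays in an
        # invariant set and converges to the principal unit eigenvector
        w = w + eta * (C @ w - (w @ C @ w) * w)
    return w
```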

5.
A self-stabilizing dual-purpose algorithm is proposed for extracting the eigen-pairs of the autocorrelation matrix of a signal. By changing only a single sign, the algorithm switches between principal and minor eigenvector estimation, and the corresponding eigenvalue can be estimated from the norm of the estimated eigenvector, so that complete eigen-pairs are extracted. The convergence of the proposed algorithm is analyzed with the deterministic discrete-time method, and the boundary conditions for convergence are determined. Simulations comparing the proposed algorithm with existing ones verify its convergence performance.
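
An illustrative dual-purpose construction in the spirit of this abstract, not the paper's exact rule: flipping one sign switches between principal and minor extraction. Note one deliberate deviation, labeled here: the eigenvalue is read from the Rayleigh quotient rather than from the weight norm as in the paper.

```python
import numpy as np

def dual_purpose_eigenpair(C, minor=False, eta=0.02, steps=30000, seed=0):
    """Dual-purpose eigen-pair extraction: s = +1 for PCA, s = -1 for MCA."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(C.shape[0])
    s = -1.0 if minor else 1.0
    for _ in range(steps):
        n2 = w @ w
        r = (w @ C @ w) / n2                   # Rayleigh quotient
        # signed eigen-direction step plus a norm-anchoring penalty term,
        # which keeps ||w|| near 1 in both modes (self-stabilizing behavior)
        w = w + eta * (s * (C @ w - r * w) + (1.0 - n2) * w)
    return w / np.linalg.norm(w), (w @ C @ w) / (w @ w)
```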

6.
Recently, many unified learning algorithms have been developed to perform both principal component analysis (PCA) and minor component analysis (MCA). Such a unified algorithm extracts the principal component and, with a simple sign change, serves as a minor component extractor, which is of practical significance for implementations. However, convergence of the existing unified algorithms is guaranteed only when their learning rates approach zero, which is impractical in many applications. In this paper, we propose a unified PCA and MCA algorithm with a constant learning rate, and derive sufficient conditions that guarantee its convergence by analyzing the discrete-time dynamics of the proposed algorithm. The theoretical results lay a solid foundation for applications of the algorithm.

7.
In this paper, we first propose a differential equation for the generalized eigenvalue problem. We prove that the stable points of this differential equation are the eigenvectors corresponding to the largest eigenvalue. Based on this generalized differential equation, a class of principal component analysis (PCA) and minor component analysis (MCA) learning algorithms can be obtained. We demonstrate that many existing PCA and MCA learning algorithms are special cases of this class, which also includes some new and simpler MCA learning algorithms. Our results show that all learning algorithms of this class have the same order of convergence speed and are robust to implementation error.
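
A sketch of a flow of this kind, assuming a generic Rayleigh-quotient ascent for a symmetric pencil (A, B) with B positive definite; the paper's exact differential equation may differ. Its stable stationary points satisfy A w = lambda B w with lambda the largest generalized eigenvalue:

```python
import numpy as np

def generalized_eig_flow(A, B, eta=0.05, steps=5000, seed=0):
    """Euler discretization of a Rayleigh-quotient ascent flow for (A, B)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(A.shape[0])
    for _ in range(steps):
        r = (w @ A @ w) / (w @ B @ w)          # generalized Rayleigh quotient
        w = w + eta * (A @ w - r * (B @ w))    # ascent direction of the quotient
        w = w / np.sqrt(w @ B @ w)             # keep w on the B-unit sphere
    return w, r
```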

8.
A principal component analysis (PCA) neural network is developed for online extraction of multiple minor directions of an input signal. The network extracts the minor directions in parallel by computing the principal directions of a transformed input signal, so that the stability-speed problem of computing the minor directions directly is avoided to a certain extent. Moreover, the learning algorithms for updating the network weights use constant learning rates, which overcomes the shortcoming of learning rates that approach zero. In addition, the proposed algorithms are globally convergent, so choosing initial values for the learning parameters is simple. This paper presents the convergence analysis of the proposed algorithms by studying the corresponding deterministic discrete-time (DDT) equations; rigorous mathematical proof of global convergence is given, and the theoretical results are confirmed via simulations.
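
A sketch of the transformation trick the abstract describes: the minor directions of C are the principal directions of C' = sigma*I - C for any sigma above the largest eigenvalue. The subspace rule below is a generic Oja-type PCA iteration, not the paper's specific network, and sigma = trace(C) is an assumed bound valid for positive semidefinite C:

```python
import numpy as np

def minor_directions_via_pca(C, p=2, eta=0.005, steps=20000, seed=0):
    """Extract p minor directions of C by running PCA on sigma*I - C."""
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    sigma = np.trace(C)                 # trace >= largest eigenvalue (PSD C)
    Ct = sigma * np.eye(n) - C          # transformed 'covariance' matrix
    W = rng.standard_normal((n, p)) * 0.1
    for _ in range(steps):
        Y = Ct @ W
        W = W + eta * (Y - W @ (W.T @ Y))   # Oja subspace rule on Ct
    return W    # columns span the p minor directions of C
```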

9.
Principal component analysis (PCA) and minor component analysis (MCA) are powerful methodologies for a wide variety of applications such as pattern recognition and signal processing. In this paper, we first propose a differential equation for the generalized eigenvalue problem and prove that its stable points are the eigenvectors corresponding to the largest eigenvalue. Based on this generalized differential equation, a class of PCA and MCA learning algorithms can be obtained. We demonstrate that many existing PCA and MCA learning algorithms are special cases of this class, which also includes some new and simpler MCA learning algorithms. Our results show that all learning algorithms of this class have the same order of convergence speed and are robust to implementation error.

10.
Dezhong, Zhang, JianCheng, Yong. 《Neurocomputing》, 2008, 71(7-9): 1748-1752
The eigenvector associated with the smallest eigenvalue of the autocorrelation matrix of input signals is called the minor component. Minor component analysis (MCA) is a statistical approach for extracting the minor component from input signals and has been applied in many fields of signal processing and data analysis. In this letter, we propose a neural network learning algorithm for adaptively estimating the minor component from input signals. The dynamics of the proposed algorithm are analyzed via a deterministic discrete-time (DDT) method, and sufficient conditions are obtained to guarantee its convergence.

11.
李晓波, 樊养余, 彭轲. 《控制与决策》, 2010, 25(9): 1399-1402
The deterministic continuous-time (DCT) method imposes strict conditions that are difficult to satisfy when studying minor component analysis (MCA) learning algorithms. To address this problem, the convergence conditions of the AMEX MCA learning algorithm are studied with the deterministic discrete-time (DDT) method. Theoretical analysis shows that the AMEX MCA learning algorithm converges to the total least squares solution of the system only when the learning factor of the algorithm and the autocorrelation matrix of the input signal satisfy certain conditions. Simulation results confirm the correctness of the convergence conditions.
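
For context, a batch sketch of the connection the abstract relies on: the total least squares (TLS) solution is recovered from the minor component of the augmented autocorrelation matrix, which the AMEX MCA algorithm estimates adaptively. Here it is computed by eigendecomposition for clarity:

```python
import numpy as np

def tls_via_minor_component(A, b):
    """TLS solution of A x ~= b from the minor eigenvector of [A b]'[A b]."""
    Z = np.column_stack([A, b])
    R = Z.T @ Z / len(Z)              # augmented autocorrelation matrix
    vals, vecs = np.linalg.eigh(R)
    v = vecs[:, 0]                    # eigenvector of the smallest eigenvalue
    return -v[:-1] / v[-1]            # scale so the last entry equals -1
```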

12.
This paper presents a unified theory of a class of learning neural networks for principal component analysis (PCA) and minor component analysis (MCA). First, some fundamental properties common to all neural networks in the class are addressed. Second, a subclass called the generalized asymmetric learning algorithm is investigated, and the kind of asymmetric structure generally required to obtain the individual eigenvectors of the correlation matrix of a data sequence is clarified. Third, focusing on a single-neuron model, a systematic way of deriving both PCA and MCA learning algorithms is shown, which reveals a relation between the normalization in PCA algorithms and that in MCA algorithms. This work was presented, in part, at the Third International Symposium on Artificial Life and Robotics, Oita, Japan, January 19-21, 1998.

13.
A non-zero-approaching adaptive learning rate is proposed to guarantee the global convergence of Oja's principal component analysis (PCA) learning algorithm. Most existing adaptive learning rates for Oja's algorithm are required to approach zero as the learning step increases, which is impractical in many applications because of computational round-off limitations and tracking requirements. The proposed adaptive learning rate overcomes this shortcoming: it converges to a positive constant, so the evolution rate does not decay as the learning step increases, unlike learning rates that approach zero and thereby slow convergence considerably over time. Rigorous mathematical proofs of the global convergence of Oja's algorithm with the proposed learning rate are given in detail by studying the convergence of an equivalent deterministic discrete-time (DDT) system. Extensive simulations illustrate and verify the theory, and the results show that this adaptive learning rate is well suited to using Oja's PCA algorithm in online learning situations.
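
A sketch of Oja's rule with an adaptive learning rate that approaches a positive constant; the schedule eta_k = alpha / (w' C w) is an illustrative choice in the spirit of the abstract, not the paper's exact rate:

```python
import numpy as np

def oja_pca_adaptive(C, alpha=0.5, steps=3000, seed=0):
    """Oja's rule with a non-zero-approaching adaptive learning rate."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(C.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(steps):
        q = w @ C @ w                    # Rayleigh quotient, tends to lambda_1
        eta = alpha / q                  # converges to alpha / lambda_1 > 0
        w = w + eta * (C @ w - q * w)
    return w
```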

14.
J. Andrés, Pedro J. 《Neurocomputing》, 2007, 70(16-18): 2768
In this paper, the behavior of Sanger's Hebbian artificial neural networks is analyzed. Hebbian networks are employed to implement principal component analysis (PCA), and several improvements over the original model due to Oja have been developed over the last two decades. Among them, the Sanger model is designed to provide the eigenvectors of the correlation matrix directly. The behavior of these models has traditionally been studied with a deterministic continuous-time (DCT) formulation whose validity rests on hypotheses about the asymptotic behavior of the learning gain that cannot be guaranteed in practical applications. This paper presents a comparative study with a deterministic discrete-time (DDT) formulation that characterizes the average evolution of the network, preserving the discrete-time form of the original network and capturing a more realistic behavior of the learning gain. The results thoroughly characterize the relationship between the learning gain and the eigenvalue structure of the correlation matrix.

15.
This study develops, for the first time, an adaptive neural network (NN) formulation of two-dimensional principal component analysis (2DPCA), whose space complexity is far lower than that of its statistical version. Unlike the NN formulation of principal component analysis (PCA, i.e., 1DPCA), the solution requires fewer iterations and works directly on the original image matrices. We also establish the consistency of the concepts of 'eigenfaces' and 'eigengaits' between 1DPCA and 2DPCA neural networks. To evaluate the performance of the proposed NN, experiments were carried out on the AR face database and on 64 × 64 pixel gait energy images from the CASIA(B) gait database. For large sample sets, the proposed NN achieved a lower reconstruction error than adaptive learning algorithms for PCA NNs; for small sample sets, however, it could yield a higher residual error. The amount of computation required by the proposed NN for feature extraction on the same image matrix is smaller than that of PCA NNs, which represents an efficient solution to the problem of training on images directly. On face and gait recognition tasks, a simple nearest-neighbor classifier test indicated a particular benefit of the proposed network, which serves as an efficient alternative to conventional PCA NNs.
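
A minimal adaptive sketch of the 2DPCA idea, under the standard observation that the image covariance G = E[(A - mean)' (A - mean)] treats image rows as samples, so a Sanger-type update on rows estimates the projection axes without forming G. This is an illustrative formulation, not the paper's exact network:

```python
import numpy as np

def two_d_pca_nn(images, p=4, eta=0.01, epochs=10, seed=0):
    """Adaptive 2DPCA sketch; images has shape (M, m, n)."""
    rng = np.random.default_rng(seed)
    mean = images.mean(axis=0)
    n = images.shape[2]
    W = rng.standard_normal((p, n)) * 0.1     # rows ~ 2DPCA projection axes
    for _ in range(epochs):
        for A in images:
            for x in (A - mean):              # each image row is one sample
                y = W @ x
                W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

# Feature extraction for an image A: A @ W.T gives an m x p feature matrix.
```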

16.
A Class of Self-Stabilizing MCA Learning Algorithms
In this letter, we propose a class of self-stabilizing learning algorithms for minor component analysis (MCA) that includes a few well-known MCA learning algorithms. Self-stabilizing means that the sign of the change in the weight vector length is independent of the presented input vector. For these algorithms, a rigorous global convergence proof is given and the convergence rate is discussed. By combining the positive properties of these algorithms, a new learning algorithm with improved performance is proposed. Simulations confirm the theoretical results.
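
One common self-stabilizing MCA rule of the kind such classes contain, shown in DDT form (whether this exact member appears in the letter is an assumption). Its norm dynamics satisfy d||w||^2 proportional to (w'Cw) ||w||^2 (1 - ||w||^2), so the sign of the length change depends only on ||w||, not on the presented input, matching the abstract's definition:

```python
import numpy as np

def self_stabilizing_mca(C, eta=0.01, steps=20000, seed=0):
    """Self-stabilizing MCA iteration on a fixed covariance matrix C."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(C.shape[0]) * 0.5
    for _ in range(steps):
        # (w'Cw) w  pulls the norm toward 1; (w'w)^2 Cw  is anti-Hebbian,
        # steering w toward the minor eigenvector
        w = w + eta * ((w @ C @ w) * w - (w @ w) ** 2 * (C @ w))
    return w / np.linalg.norm(w)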

17.
The generalized eigenvector plays an essential role in signal processing. In this paper, we present a novel neural network learning algorithm for estimating the generalized eigenvector of a Hermitian matrix pencil. Unlike some traditional algorithms, which require proper learning-rate values to be selected in advance, the proposed algorithm needs no learning rate and is therefore well suited to real applications. By analyzing all of the equilibrium points, it is proven that the algorithm converges if and only if the weight vector of the neural network equals the generalized eigenvector corresponding to the largest generalized eigenvalue of the Hermitian matrix pencil. Using the deterministic discrete-time (DDT) method, convergence conditions that hold with probability 1 are also obtained. Simulation results show that the proposed algorithm has a fast convergence speed and good numerical stability, and a real application demonstrates its effectiveness in tracking the optimal beamforming vector.
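
As a stand-in that conveys the learning-rate-free idea (it is not the paper's neural algorithm), generalized power iteration also needs no step size; it assumes B is invertible and the pencil has a dominant generalized eigenvalue:

```python
import numpy as np

def generalized_power_iteration(A, B, steps=200, seed=0):
    """Learning-rate-free estimate of the principal generalized eigenpair
    of the pencil (A, B)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(A.shape[0])
    for _ in range(steps):
        w = np.linalg.solve(B, A @ w)      # one application of B^{-1} A
        w = w / np.linalg.norm(w)          # normalization replaces a step size
    lam = (w @ A @ w) / (w @ B @ w)        # largest generalized eigenvalue
    return w, lam
```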

18.
Generalized minor component analysis (GMCA) plays an important role in many areas of modern signal processing. Most existing algorithms do not simultaneously possess a corresponding information criterion together with convergence, self-stabilization, and the ability to extract multiple generalized minor components. To address these issues, a GMCA algorithm is derived from a new information-propagation rule, and its global convergence is analyzed with the deterministic discrete-time (DDT) method. A theoretical analysis of the relationship between the algorithm's convergence and its initial state shows that the algorithm is self-stabilizing. Furthermore, the application of the algorithm to the extraction of multiple generalized minor components is explored. Compared with previous algorithms, the proposed algorithm converges faster. Matlab simulations verify these properties.
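
A batch sketch of multiple generalized minor component extraction via B-orthogonal deflation, assuming symmetric positive definite A and B (the paper does this adaptively with a neural update; this shows only the deflation mechanism). After each component is extracted, its eigenvalue is shifted upward so the next minor component becomes dominant in the deflated pencil:

```python
import numpy as np

def generalized_minor_components(A, B, p=2):
    """Extract p generalized minor components of the pencil (A, B)."""
    L = np.linalg.cholesky(B)
    shift = np.trace(np.linalg.solve(B, A))       # exceeds every eigenvalue
    A_work = A.copy()
    W = []
    for _ in range(p):
        # reduce to a standard eigenproblem: M = L^{-1} A L^{-T}
        M = np.linalg.solve(L, np.linalg.solve(L, A_work.T).T)
        vals, vecs = np.linalg.eigh(M)
        w = np.linalg.solve(L.T, vecs[:, 0])      # minor generalized eigenvector
        w = w / np.sqrt(w @ B @ w)                # B-normalize
        W.append(w)
        Bw = B @ w
        A_work = A_work + shift * np.outer(Bw, Bw)   # deflate this direction
    return np.column_stack(W)
```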

19.
Neural network algorithms for principal component analysis (PCA) and minor component analysis (MCA) are important in signal processing. A unified (dual-purpose) algorithm is capable of both PCA and MCA, which is valuable for reducing the complexity and cost of hardware implementations, while a coupled algorithm can mitigate the speed-stability problem that exists in most noncoupled algorithms. Although unified algorithms and coupled algorithms have these advantages over single-purpose and noncoupled algorithms, respectively, only a few of them have been proposed, and, to the best of the authors' knowledge, no algorithm that is both unified and coupled has been reported. In this paper, based on a novel information criterion, we propose two self-stabilizing algorithms that are both unified and coupled. Their derivation is easier than with traditional methods because the inverse Hessian matrix need not be calculated. Experimental results show that the proposed algorithms perform better than existing coupled algorithms and unified algorithms.
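
An illustrative construction of a rule that is both unified and coupled, not the paper's information-criterion-derived algorithm: the weight vector w and its eigenvalue estimate lam are updated jointly (coupling, which scales the step and eases the speed-stability problem), and one sign choice switches between PCA and MCA (unification). It assumes a positive definite C with moderate eigenvalue spread:

```python
import numpy as np

def coupled_unified(C, minor=False, eta=0.05, steps=5000, seed=0):
    """Coupled, unified eigenpair iteration: s = +1 for PCA, s = -1 for MCA."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(C.shape[0])
    w /= np.linalg.norm(w)
    lam = w @ C @ w                           # initial eigenvalue estimate
    s = -1.0 if minor else 1.0
    for _ in range(steps):
        w = w + s * eta * (C @ w / lam - w)   # eigenvalue-scaled direction step
        w = w / np.linalg.norm(w)
        lam = lam + eta * (w @ C @ w - lam)   # eigenvalue tracks w' C w
    return w, lam
```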

20.
This article introduces new low-cost algorithms for the adaptive estimation and tracking of principal and minor components. The proposed algorithms are based on the well-known OPAST method, which is adapted and extended to achieve the desired minor or principal component analysis (MCA or PCA). For the PCA case, we propose efficient solutions that use Givens rotations to estimate the principal components from the weight matrix given by the OPAST method. These solutions are then extended to the MCA case by using a transformed data covariance matrix, so that the desired minor components are obtained from the PCA of the new (transformed) matrix. Finally, as a byproduct of our PCA algorithm, we propose a fast adaptive algorithm for data whitening that is shown to outperform the recently proposed RLS-based whitening method.
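
For reference, a one-shot batch sketch of the whitening operation that the article's adaptive byproduct performs recursively; it is computed here via eigendecomposition for clarity and is not the article's OPAST-based recursion:

```python
import numpy as np

def whiten(X, eps=1e-8):
    """Whiten X so its sample covariance becomes (approximately) identity."""
    Xc = X - X.mean(axis=0)
    C = Xc.T @ Xc / len(Xc)
    vals, vecs = np.linalg.eigh(C)
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T   # C^{-1/2}
    return Xc @ W        # whitened data
```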

