Similar Literature
20 similar documents retrieved (search time: 31 ms)
1.
The generalized Hebbian algorithm (GHA) is one of the most widely used principal component analysis (PCA) neural network (NN) learning algorithms. The learning rates of GHA play an important role in the algorithm's convergence in applications. Traditionally, the learning rates of GHA are required to converge to zero so that convergence can be analyzed by studying the corresponding deterministic continuous-time (DCT) equations. However, requiring the learning rates to approach zero is not practical in applications, due to computational round-off limitations and tracking requirements. In this paper, nonzero-approaching adaptive learning rates are proposed to overcome this problem. The proposed adaptive learning rates converge to positive constants, which not only speeds up the evolution of the algorithm considerably but also guarantees global convergence of GHA. The convergence is studied in detail by analyzing the corresponding deterministic discrete-time (DDT) equations. Extensive simulations are carried out to illustrate the theory.
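For orientation, Sanger's GHA updates each weight row as w_i <- w_i + eta * y_i * (x - sum_{j<=i} y_j w_j). The sketch below pairs that update with a hypothetical learning-rate schedule that decays toward a positive constant instead of zero, which captures the flavor of the idea; the paper's actual adaptive rates are data-dependent and derived differently, and the toy covariance and constants here are assumptions.

```python
import numpy as np

def gha_step(W, x, eta):
    """One generalized Hebbian algorithm (GHA) step.

    W: (m, n) weight matrix, one extracted component per row.
    x: (n,) input sample; eta: learning rate for this step.
    """
    y = W @ x                                 # unit outputs y_i = w_i . x
    W_old = W.copy()                          # update all rows from a snapshot
    for i in range(W.shape[0]):
        recon = y[: i + 1] @ W_old[: i + 1]   # Gram-Schmidt-like deflation
        W[i] += eta * y[i] * (x - recon)
    return W

# Hypothetical schedule: the rate decays toward 0.02 > 0, not toward zero.
rng = np.random.default_rng(0)
C = np.diag([3.0, 1.0, 0.5])                  # toy covariance (assumed)
W = 0.1 * rng.normal(size=(2, 3))
for k in range(5000):
    x = rng.multivariate_normal(np.zeros(3), C)
    eta = 0.02 + 0.1 / (1.0 + k)              # nonzero limiting learning rate
    W = gha_step(W, x, eta)
print(np.round(W, 2))                         # rows hover near +/-e1, +/-e2
```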

2.
The convergence of Oja's principal component analysis (PCA) learning algorithms is a difficult topic to study directly. Traditionally, the convergence of these algorithms is analyzed indirectly via certain deterministic continuous-time (DCT) systems. This method requires the learning rate to converge to zero, which is not a reasonable requirement in many practical applications. Recently, deterministic discrete-time (DDT) systems have been proposed instead to indirectly interpret the dynamics of the learning algorithms. Unlike DCT systems, DDT systems allow the learning rate to be constant (and nonzero). This paper provides some important results on the convergence of a DDT system of Oja's PCA learning algorithm. It makes the following contributions: 1) A number of invariant sets are obtained, based on which we can show that any trajectory starting from a point in an invariant set remains in the set forever; thus, nondivergence of the trajectories is guaranteed. 2) The convergence of the DDT system is analyzed rigorously. It is proven that almost all trajectories of the system starting from points in an invariant set converge exponentially to the unit eigenvector associated with the largest eigenvalue of the correlation matrix. In addition, exponential convergence rates are obtained, providing useful guidelines for selecting a learning rate that gives fast convergence. 3) Since trajectories may diverge, the careful choice of initial vectors is an important issue; this paper suggests taking initial vectors from within the unit hypersphere to guarantee convergence. 4) Simulation results are furnished to illustrate the theoretical results.
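For reference, the DDT system associated with Oja's rule replaces the random inputs with the correlation matrix C: w(k+1) = w(k) + eta * (C w(k) - (w(k)^T C w(k)) w(k)), with constant eta. A minimal sketch (the toy C and constants are assumptions, not from the paper) that iterates this map from a point inside the unit hypersphere:

```python
import numpy as np

# DDT system for Oja's rule with a constant, nonzero learning rate:
#   w(k+1) = w(k) + eta * (C w(k) - (w(k)^T C w(k)) w(k))
C = np.array([[2.0, 0.4],
              [0.4, 1.0]])                 # toy correlation matrix (assumed)
eta = 0.05                                 # constant learning rate
w = np.array([0.3, 0.3])                   # start inside the unit hypersphere

for _ in range(300):
    w = w + eta * (C @ w - (w @ C @ w) * w)

v1 = np.linalg.eigh(C)[1][:, -1]           # unit eigenvector, largest eigenvalue
print(np.round(w, 4), np.round(v1, 4))     # w converges to +/- v1
```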

3.
Dynamics of Generalized PCA and MCA Learning Algorithms
Principal component analysis (PCA) and minor component analysis (MCA) are two important statistical tools with many applications in signal processing and data analysis. PCA and MCA neural networks (NNs) can be used to extract principal and minor components from input data online. It is of interest to develop generalized learning algorithms for PCA and MCA NNs, and some novel generalized PCA and MCA learning algorithms are proposed in this paper. Convergence of PCA and MCA learning algorithms is an essential issue in practical applications. Traditionally, convergence is studied via the deterministic continuous-time (DCT) method, which requires the learning rate of the algorithms to approach zero, an unrealistic requirement in many practical applications. In this paper, the deterministic discrete-time (DDT) method is used to study the dynamical behavior of the proposed algorithms. The DDT method is more suitable for convergence analysis since it does not impose the constraints that the DCT method does. It is proven that, under some mild conditions, the weight vector in the proposed algorithms converges exponentially to the principal or minor component. Simulation results further illustrate the theoretical results.

4.
A Globally Convergent PCA Neural Network Learning Algorithm
Principal component analysis (PCA), also known as the Karhunen-Loève (K-L) transform, is an important method for feature extraction. In recent years, many PCA neural networks based on Hebbian learning algorithms have been proposed to handle massive data sets. Traditional algorithms usually cannot guarantee convergence, or they converge slowly. Based on the CRLS neural network, this paper proposes a new learning algorithm that ensures convergence of the weight vector, without requiring the weight vector to be normalized during computation. It is also proven that the learning algorithm makes the weight vector converge to the eigenvector associated with the largest eigenvalue. Experiments show that, compared with the traditional CRLS neural network, the accuracy of the proposed algorithm is greatly improved.

5.
This paper proposes a linear neural network for principal component analysis whose weight vector lengths converge to the variances of the principal components in the input data. The neural network breaks the symmetry in its learning process through the differences in weight vector lengths and, unlike other linear neural networks described in the literature, does not need to assume any asymmetries in its structure to extract the principal components. We prove the asymptotic stability of a stationary solution of the network's learning equation. Simulations show that the weight vectors converge to this solution. A comparison of convergence speeds shows that, in the simulations, the proposed neural network is about as fast as Sanger's generalized Hebbian algorithm (GHA) network, the weighted subspace rule network of Oja et al., and Xu's LMSER network (weighted linear version).

6.
Recently, many unified learning algorithms have been developed to solve the tasks of principal component analysis (PCA) and minor component analysis (MCA). These unified algorithms extract the principal component and, altered simply by a sign change, also serve as minor component extractors, which is of practical significance for implementation. Convergence of the existing unified algorithms is guaranteed only under the condition that their learning rates approach zero, which is impractical in many applications. In this paper, we propose a unified PCA and MCA algorithm with a constant learning rate, and derive sufficient conditions that guarantee convergence by analyzing the discrete-time dynamics of the proposed algorithm. These theoretical results lay a solid foundation for applications of the proposed algorithm.
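To make the sign-flip idea concrete, here is a hedged Oja-type illustration: the same DDT update extracts the principal component with one sign and, with the opposite sign (plus a renormalization step added here to keep the iteration bounded), the minor component. This shows the general scheme only; the paper's unified algorithm and its sufficient conditions differ, and the toy matrix and constants are assumptions.

```python
import numpy as np

def unified_step(w, C, eta, sign):
    """Oja-type DDT step: sign=+1 extracts the principal component,
    sign=-1 the minor component. Illustrative only; the paper's
    unified algorithm and its sufficient conditions differ."""
    w = w + sign * eta * (C @ w - (w @ C @ w) * w)
    if sign < 0:
        w /= np.linalg.norm(w)            # keep the MCA iteration bounded
    return w

C = np.array([[2.0, 0.4], [0.4, 1.0]])    # toy correlation matrix (assumed)
for sign in (+1, -1):
    w = np.array([0.6, 0.4])
    for _ in range(2000):
        w = unified_step(w, C, eta=0.05, sign=sign)
    print(sign, np.round(w / np.linalg.norm(w), 4))  # principal, then minor
```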

7.
韩旭, 刘强, 许瑾, 谌海云. 《计算机科学》 (Computer Science), 2018, 45(Z11): 278-281, 307
Principal component analysis (PCA) is one of the most important dimensionality-reduction algorithms, and there is no consensus in the literature on the information loss that occurs during dimensionality reduction. Motivated by this, the paper proposes a new improved algorithm, Similar Principal Component Analysis (SPCA), which retains some of the detail information during processing. Taking the handwritten digit database MNIST as an example, neighboring-feature screening is applied to the original vector set to obtain a multidimensional composite non-orthogonal feature vector set; the vector set obtained from the training database is then compared with that of the test set to recognize the handwritten digits under test. The results show that the algorithm achieves fairly complete recognition of the test samples with a relatively small number of training samples.

8.
Local PCA algorithms
In recent years, various principal component analysis (PCA) algorithms have been proposed. In this paper we use a general framework to describe those PCA algorithms that are based on Hebbian learning. For an important subset of these algorithms, the local algorithms, we fully describe their equilibria, in which all lateral connections are set to zero, and their local stability. We show how the parameters in the PCA algorithms have to be chosen in order to obtain an algorithm that converges to a stable equilibrium providing principal component extraction.

9.
Principal/minor component analysis (PCA/MCA), generalized principal/minor component analysis (GPCA/GMCA), and singular value decomposition (SVD) algorithms are important techniques for feature extraction. In the convergence analysis of these algorithms, the deterministic discrete-time (DDT) method can effectively reveal the dynamic behavior of PCA/MCA and GPCA/GMCA algorithms. However, the dynamic behavior of SVD algorithms has not been studied quantitatively because of their special structure. In this paper, for the first time, we use the advantages of the DDT method in the analysis of PCA algorithms to study the dynamics of SVD algorithms. First, taking the cross-coupled Hebbian algorithm as an example, by concatenating the two cross-coupled variables into a single vector, we obtain a PCA-like DDT system. Second, we analyze the discrete-time dynamic behavior and stability of this PCA-like DDT system in detail based on the DDT method, and obtain the boundedness of the weight vectors and the learning rate. Further discussion shows the universality of the proposed method for analyzing other SVD algorithms. As a result, the proposed method provides a new way to study the dynamical convergence properties of SVD algorithms.
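The cross-coupled Hebbian rule mentioned here updates a left vector u and a right vector v coupled through the data matrix A, u <- u + eta * (A v - (u^T A v) u) and v <- v + eta * (A^T u - (u^T A v) v), and stacking w = [u; v] is what yields the PCA-like DDT system. A sketch with an assumed toy A and constants:

```python
import numpy as np

# Cross-coupled Hebbian iteration for the principal singular triple of A.
# Concatenating (u, v) into one vector is what turns this into the
# PCA-like DDT system analyzed in the paper.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 3))               # toy data matrix (assumed)
u = rng.normal(size=4); u /= np.linalg.norm(u)
v = rng.normal(size=3); v /= np.linalg.norm(v)
eta = 0.05                                # constant learning rate (assumed)

for _ in range(2000):
    s = u @ A @ v                         # running estimate of sigma_1
    u, v = (u + eta * (A @ v - s * u),
            v + eta * (A.T @ u - s * v))

print(np.round(s, 3), np.round(np.linalg.svd(A)[1][0], 3))  # s -> sigma_1
```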

10.
In this letter, we introduce a nonlinear hierarchic PCA-type neural network with a simple architecture. The learning algorithm is a nonlinear extension of Sanger's well-known generalized Hebbian algorithm (GHA), derived from a nonlinear optimization criterion. Experiments with sinusoidal data show that the neurons become sensitive to different sinusoids. Standard linear PCA algorithms do not have such a separation property.
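One generic way to realize such a nonlinear extension, consistent with this description though not necessarily the letter's exact criterion, is to pass the unit outputs through a nonlinearity g in the Hebbian term:

```python
import numpy as np

def nonlinear_gha_step(W, x, eta, g=np.tanh):
    """GHA step with unit outputs passed through a nonlinearity g.
    A generic nonlinear extension for illustration; with g = identity
    it reduces to Sanger's linear GHA."""
    y = g(W @ x)                                # nonlinear unit outputs
    W_old = W.copy()                            # update rows from a snapshot
    for i in range(W.shape[0]):
        W[i] += eta * y[i] * (x - y[: i + 1] @ W_old[: i + 1])
    return W
```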

11.
Principal component analysis (PCA) by neural networks is one of the most frequently used feature extraction methods. To process huge data sets, many neural-network-based learning algorithms for PCA have been proposed. However, traditional algorithms are not globally convergent. In this paper, a new PCA learning algorithm based on the cascade recursive least squares (CRLS) neural network is proposed. This algorithm guarantees that the network weight vector converges globally to an eigenvector associated with the largest eigenvalue of the input covariance matrix. A rigorous mathematical proof is given. Simulation results show the effectiveness of the algorithm.

12.
The problem of robust stability and convergence of the learning parameters of adaptation algorithms in a noisy environment is addressed for the single perceptron. The case in which the same input pattern is presented in each adaptation cycle is analyzed. The algorithm proposed is of the Widrow-Hoff type, and it is concluded that this algorithm is robust. However, the weight vectors do not necessarily converge in the presence of measurement noise. A modified version of this algorithm, in which the reduction factors are allowed to vary with time, is proposed, and it is shown that this algorithm is robust and that the weight vectors converge in the presence of bounded noise. Only deterministic-type arguments are used in the analysis. An ultimate bound on the error, in terms of a convex combination of the initial error and the bound on the noise, is obtained.
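The Widrow-Hoff-type update in question has the form w <- w + mu * (d - w^T x) * x / (x^T x) for the repeated pattern x and desired output d. A minimal sketch with a hypothetical time-varying reduction factor mu(k) and bounded noise (all constants assumed), illustrating that the output error settles into a band set by the noise bound:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.array([1.0, -0.5, 0.3])            # the same pattern every cycle
w_true = np.array([0.8, 0.2, -0.4])       # generates the desired output
w = np.zeros(3)

for k in range(200):
    noise = rng.uniform(-0.01, 0.01)      # bounded measurement noise
    d = w_true @ x + noise                # noisy desired output
    mu = 1.0 / (1.0 + 0.1 * k)            # hypothetical time-varying factor
    w += mu * (d - w @ x) * x / (x @ x)   # normalized Widrow-Hoff step

print(abs(w @ x - w_true @ x))            # output error within the noise band
```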

13.
A learning algorithm for principal component analysis (PCA) is developed based on least-squares minimization. Dual learning-rate parameters are adjusted adaptively to make the proposed algorithm capable of fast convergence and high accuracy in extracting all principal components. The proposed algorithm is robust to the error accumulation present in sequential PCA algorithms. We show that all information needed for PCA can be completely represented by the unnormalized weight vector, which is updated based only on the corresponding neuron's input-output product. The update of the normalized weight vector can be viewed as a leaky Hebb's rule. The convergence of the proposed algorithm is briefly analyzed. We also establish the relation between Oja's rule and the least-squares learning rule. Finally, simulation results are given to illustrate the effectiveness of this algorithm for PCA and for tracking time-varying directions of arrival.

14.
A non-zero-approaching adaptive learning rate is proposed to guarantee the global convergence of Oja's principal component analysis (PCA) learning algorithm. Most existing adaptive learning rates for Oja's PCA learning algorithm are required to approach zero as the learning step increases. However, this is not practical in many applications, due to computational round-off limitations and tracking requirements. The proposed adaptive learning rate overcomes this shortcoming: it converges to a positive constant and thus keeps the evolution rate up as the learning step increases, unlike learning rates that approach zero, which slow the convergence considerably, and increasingly so, over time. Rigorous mathematical proofs of the global convergence of Oja's algorithm with the proposed learning rate are given in detail by studying the convergence of an equivalent deterministic discrete-time (DDT) system. Extensive simulations are carried out to illustrate and verify the theory. Simulation results show that this adaptive learning rate makes Oja's PCA algorithm better suited to online learning.

15.
Algorithms for accelerated convergence of adaptive PCA
We derive and discuss adaptive algorithms for principal component analysis (PCA) that are shown to converge faster than the traditional PCA algorithms due to Oja and Karhunen (1985), Sanger (1989), and Xu (1993). It is well known that traditional PCA algorithms that are derived by using gradient descent on an objective function are slow to converge. Furthermore, the convergence of these algorithms depends on appropriate choices of the gain sequences. Since online applications demand faster convergence and an automatic selection of gains, we present new adaptive algorithms to solve these problems. We first present an unconstrained objective function, which can be minimized to obtain the principal components. We derive adaptive algorithms from this objective function by using: (1) gradient descent; (2) steepest descent; (3) conjugate direction; and (4) Newton-Raphson methods. Although gradient descent produces Xu's LMSER algorithm, the steepest descent, conjugate direction, and Newton-Raphson methods produce new adaptive algorithms for PCA. We also provide a discussion on the landscape of the objective function, and present a global convergence proof of the adaptive gradient descent PCA algorithm using stochastic approximation theory. Extensive experiments with stationary and nonstationary multidimensional Gaussian sequences show faster convergence of the new algorithms over the traditional gradient descent methods. We also compare the steepest descent adaptive algorithm with state-of-the-art methods on stationary and nonstationary sequences.

16.
The standard BP neural network updates its weights and thresholds only along the negative gradient of the prediction error, so its learning process converges slowly, easily falls into local minima, and generalizes poorly. To address this, an improved RPROP method whose learning rate varies with learning experience is proposed as the weight and threshold update rule for the BP neural network, and it is combined with principal component analysis (PCA) to form a PCA-improved neural network algorithm. Classification experiments on four classes of music signals were carried out in Matlab. The results show that the improved algorithm raises the stable recognition rate by 2.6% over the standard algorithm and, when the stable recognition rate reaches 90%, saves 75% of the time, indicating that the algorithm accelerates network convergence and improves generalization.
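For context, RPROP adapts a per-weight step size from the sign pattern of successive gradients, growing the step while the sign repeats and shrinking it on a sign flip. The sketch below is the generic textbook RPROP- rule with the standard 1.2/0.5 factors on a toy objective, not the paper's learning-experience variant:

```python
import numpy as np

def rprop_update(w, grad, prev_grad, step,
                 up=1.2, down=0.5, s_min=1e-6, s_max=50.0):
    """One RPROP- step: per-weight step sizes grow (x1.2) when the
    gradient sign repeats and shrink (x0.5) when it flips; only the
    sign of the gradient is used. Generic textbook form, not the
    paper's modified variant."""
    same = grad * prev_grad
    step = np.where(same > 0, np.minimum(step * up, s_max), step)
    step = np.where(same < 0, np.maximum(step * down, s_min), step)
    return w - np.sign(grad) * step, step

# Toy quadratic objective f(w) = 0.5 * ||w - t||^2, gradient w - t
t = np.array([1.0, -2.0, 0.5])
w, step, prev = np.zeros(3), np.full(3, 0.1), np.zeros(3)
for _ in range(100):
    grad = w - t
    w, step = rprop_update(w, grad, prev, step)
    prev = grad
print(np.round(w, 3))                     # approaches t
```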

17.
This paper applies statistical physics to the problem of robust principal component analysis (PCA). The commonly used PCA learning rules are first related to energy functions. These functions are generalized by adding a binary decision field with a given prior distribution, so that outliers in the data are dealt with explicitly in order to make PCA robust. Each of the generalized energy functions is then used to define a Gibbs distribution, from which a marginal distribution is obtained by summing over the binary decision field. The marginal distribution defines an effective energy function, from which self-organizing rules have been developed for robust PCA. In the presence of outliers, both the standard PCA methods and the existing self-organizing PCA rules studied in the neural network literature perform quite poorly. By contrast, the robust rules proposed here resist outliers well and perform excellently on various PCA-like tasks, such as obtaining the first principal component vector, obtaining the first k principal component vectors, and directly finding the subspace spanned by the first k principal component vectors without solving for each vector individually. Comparative experiments have been made, and the results show that the proposed robust rules improve the performance of the existing PCA algorithms significantly when outliers are present.

18.
An Adaptive Hybrid Differential Evolution Simulated Annealing Algorithm for Clustering Problems
To address the K-means clustering algorithm's sensitivity to initial values and its tendency to fall into local optima, a K-means clustering algorithm based on adaptive hybrid differential evolution and simulated annealing is proposed. Built on the differential evolution algorithm, it strengthens global search capability through the update strategy of simulated annealing, and uses adaptive techniques to select the learning strategy and determine the algorithm's key parameters. The new algorithm is compared with the traditional K-means clustering algorithm and several recently proposed clustering algorithms of the same kind. Experimental results show that it largely overcomes the shortcomings of traditional K-means and has good global convergence, strong stability, and fast convergence.

19.
Industrial inspection images are often affected by uneven illumination, and for such images local adaptive segmentation algorithms produce better results than global ones. However, among local algorithms, block-based methods lack guidance on how to partition the image, while neighborhood-based methods tend to misclassify pixels inside the background or foreground. To address these shortcomings, this paper proposes an adaptive threshold segmentation algorithm based on a multi-directional gray-level fluctuation transform. The algorithm first transforms the image according to gray-level fluctuations along multiple directions, constructing a fluctuation transform matrix of multidimensional vectors; it then uses principal component analysis (PCA) to compress the high-dimensional vectors to one dimension and generate a transformed image; finally, the Otsu algorithm segments the transformed image. The algorithm needs no block partitioning and requires only two parameters: a fluctuation amplitude threshold and a Boolean background color. Experimental results show that it effectively reduces the influence of uneven illumination on industrial inspection image segmentation and clearly improves segmentation quality compared with several local algorithms such as the Niblack and Sauvola methods.
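The PCA-compression and Otsu stages described here can be illustrated directly: each pixel carries a feature vector (the features below are a stand-in for the paper's multi-directional gray-fluctuation features), PCA projects it onto the first principal axis to form a one-channel transformed image, and Otsu thresholds that image. A hedged sketch using skimage's Otsu implementation and a toy image:

```python
import numpy as np
from skimage.filters import threshold_otsu

def pca_otsu_segment(features):
    """features: (H, W, d) per-pixel feature vectors (a stand-in for the
    paper's multi-directional gray-fluctuation features). Projects onto
    the first principal axis, then Otsu-thresholds the 1-D image."""
    h, w, d = features.shape
    X = features.reshape(-1, d).astype(float)
    X -= X.mean(axis=0)                          # center
    pc = np.linalg.svd(X, full_matrices=False)[2][0]
    if pc[0] < 0:                                # fix PCA's sign ambiguity so
        pc = -pc                                 # brighter pixels project higher
    proj = (X @ pc).reshape(h, w)                # transformed 1-channel image
    return proj > threshold_otsu(proj)           # binary segmentation

# Toy case: bright square on an unevenly lit background
img = np.linspace(0.0, 0.3, 64)[None, :] * np.ones((64, 1))
img[20:40, 20:40] += 0.8
feats = np.stack([img,
                  np.gradient(img, axis=0),
                  np.gradient(img, axis=1)], axis=-1)
print(pca_otsu_segment(feats).sum())             # ~400: the 20x20 square
```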

20.
A class of neural networks for independent component analysis
Independent component analysis (ICA) is a recently developed, useful extension of standard principal component analysis (PCA). The ICA model is utilized mainly in blind separation of unknown source signals from their linear mixtures. In this application only the source signals which correspond to the coefficients of the ICA expansion are of interest. In this paper, we propose neural structures related to multilayer feedforward networks for performing complete ICA. The basic ICA network consists of whitening, separation, and basis vector estimation layers. It can be used for both blind source separation and estimation of the basis vectors of ICA. We consider learning algorithms for each layer, and modify our previous nonlinear PCA type algorithms so that their separation capabilities are greatly improved. The proposed class of networks yields good results in test examples with both artificial and real-world data.
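Of the three layers, the whitening layer has a standard closed form worth recalling: with the eigendecomposition C = E D E^T of the input covariance, V = E D^(-1/2) maps centered inputs to unit covariance, after which separation reduces to an orthogonal rotation. A sketch of that stage on an assumed toy mixture (the paper's network learns an equivalent transform adaptively rather than computing it in closed form):

```python
import numpy as np

def whiten(X):
    """Whitening layer: decorrelate inputs and scale to unit variance.
    X: (n_samples, n_signals). Closed-form version; the paper's network
    learns an equivalent transform adaptively."""
    Xc = X - X.mean(axis=0)
    d, E = np.linalg.eigh(np.cov(Xc, rowvar=False))  # C = E diag(d) E^T
    V = E / np.sqrt(d)                               # V = E D^{-1/2}
    return Xc @ V                                    # cov of result = I

# Two mixed sources; after whitening, separation is only a rotation away
rng = np.random.default_rng(3)
S = np.column_stack([np.sign(rng.normal(size=2000)),   # sub-Gaussian source
                     rng.laplace(size=2000)])          # super-Gaussian source
X = S @ np.array([[1.0, 0.6], [0.4, 1.0]]).T           # linear mixture (assumed)
Z = whiten(X)
print(np.round(np.cov(Z, rowvar=False), 2))            # ~ identity matrix
```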
