Similar Literature
20 similar documents found.
1.
Based on optimization theory, an improved quasi-Newton algorithm built on a new quasi-Newton equation is proposed for training BP neural networks. The improved algorithm uses a new family of Hessian-matrix update equations, which endows it with global convergence and local superlinear convergence. Combining the improved quasi-Newton algorithm with BP weight training yields a new training algorithm for BP neural network weights. Compared with the traditional quasi-Newton algorithms for neural network weight learning, networks trained with the improved algorithm converge markedly faster. The improved algorithm effectively remedies the slow convergence of BP networks and significantly improves both the training convergence speed and the learning accuracy of BP neural networks.
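As a rough illustration of the idea, the sketch below trains a tiny BP network with a standard quasi-Newton method (SciPy's BFGS). The paper's modified quasi-Newton equation is not reproduced; the network size, data, and loss are illustrative assumptions.

```python
# Minimal sketch: quasi-Newton (BFGS) training of a small BP network.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                   # toy inputs
y = (X[:, 0] * X[:, 1] > 0).astype(float)       # toy targets

n_in, n_hid = 4, 6
shapes = [(n_in, n_hid), (n_hid,), (n_hid, 1), (1,)]
sizes = [int(np.prod(s)) for s in shapes]

def unpack(theta):
    parts, i = [], 0
    for s, n in zip(shapes, sizes):
        parts.append(theta[i:i + n].reshape(s))
        i += n
    return parts

def loss(theta):
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)                    # hidden layer
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2))).ravel()
    return np.mean((out - y) ** 2)

theta0 = rng.normal(scale=0.1, size=sum(sizes))
res = minimize(loss, theta0, method="BFGS")     # quasi-Newton weight training
print(res.fun, res.nit)
```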

2.
The self-organizing feature map (SOFM) neural network converges slowly and is computationally expensive when applied to vector quantization. This paper therefore proposes a vector quantization algorithm based on a hybrid PCA/SOFM neural network: a principal component analysis (PCA) linear neural network first reduces the dimensionality of the input vectors, and a SOFM network then performs the vector quantization. The network is optimized by tuning the SOFM learning function, neighborhood weights, and initial codebook. Experiments show that the improved algorithm shortens image-compression time and improves codebook quality.
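A minimal sketch of the two-stage pipeline: PCA reduces the block dimensionality, then a small 1-D SOFM learns the codebook. The learning-rate and neighborhood schedules are illustrative defaults, not the tuned ones from the paper.

```python
# PCA dimensionality reduction followed by SOFM codebook training.
import numpy as np

rng = np.random.default_rng(1)
blocks = rng.normal(size=(1000, 16))            # e.g. flattened 4x4 image blocks

# PCA stage: project blocks onto the top-k principal components.
k = 6
mean = blocks.mean(axis=0)
vals, vecs = np.linalg.eigh(np.cov((blocks - mean).T))
Z = (blocks - mean) @ vecs[:, -k:]              # reduced-dimension vectors

# SOFM stage: train a small 1-D map as the codebook.
m = 32                                          # codebook size
W = rng.normal(scale=0.1, size=(m, k))
for t, z in enumerate(rng.permutation(Z)):
    eta = 0.5 * (1 - t / len(Z))                # decaying learning rate
    sigma = max(1.0, m / 4 * (1 - t / len(Z)))  # shrinking neighborhood
    win = np.argmin(np.linalg.norm(W - z, axis=1))
    h = np.exp(-((np.arange(m) - win) ** 2) / (2 * sigma ** 2))
    W += eta * h[:, None] * (z - W)             # pull neighbors toward input

codes = np.argmin(np.linalg.norm(Z[:, None] - W[None], axis=2), axis=1)
```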

3.
To overcome the BP algorithm's tendency to become trapped in local minima, and to reduce the dimensionality of the sample data, a genetic neural network method based on principal component analysis (PCA) is proposed. Dimensionality reduction and decorrelation speed up convergence; an improved genetic algorithm optimizes the network weights, and the network is trained with a momentum gradient-descent algorithm using an adaptive learning rate. MATLAB simulations show that the method outperforms the BP algorithm in both accuracy and convergence, and that its detection rate and false-alarm rate when applied to an intrusion detection system are clearly better than those of traditional methods.

4.
A Geometric Learning Algorithm for Classification Problems of Binary Neural Networks   Cited by: 6 (self-citations: 0, others: 6)
朱大铭  马绍汉 《软件学报》1997,8(8):622-629
Classification is a central problem in feedforward neural network research. Using geometric methods, this paper presents a new learning algorithm for K-class (K ≥ 2) classification with binary neural networks. By analyzing the geometric positions and class labels of the training points, the algorithm constructs a four-layer feedforward network that classifies the input vectors. Its advantages are: learning is guaranteed to converge, and faster than the BP algorithm and several other existing feedforward learning algorithms; and the algorithm determines the network structure while achieving exact vector classification. Moreover, the constructed network consists of linear threshold units whose synaptic weights and thresholds are all integers, making it particularly suitable for integrated-circuit implementation.

5.
The standard BP neural network corrects weights and thresholds only along the negative gradient of the prediction error, so learning converges slowly, easily falls into local minima, and generalizes poorly. To address this, an improved RPROP method whose learning rate varies with learning experience is proposed for updating the weights and thresholds of a BP network, and it is combined with principal component analysis (PCA) to form a PCA-improved neural network algorithm. Classification experiments on four classes of music signals were carried out in Matlab. The results show that the improved algorithm raises the stable recognition rate by 2.6% over the standard algorithm and, at a 90% stable recognition rate, saves 75% of the time, indicating that the algorithm accelerates network convergence and improves generalization.
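For reference, the sketch below shows the standard Rprop update on a generic parameter vector; the paper's learning-experience-based rate variation is not reproduced, and all constants are the usual illustrative defaults.

```python
# Standard Rprop update: adapt per-weight step sizes by gradient sign.
import numpy as np

def rprop_step(w, g, g_prev, delta, eta_plus=1.2, eta_minus=0.5,
               d_min=1e-6, d_max=50.0):
    sign_change = g * g_prev
    delta = np.where(sign_change > 0, np.minimum(delta * eta_plus, d_max), delta)
    delta = np.where(sign_change < 0, np.maximum(delta * eta_minus, d_min), delta)
    g = np.where(sign_change < 0, 0.0, g)       # suppress update after a sign flip
    w = w - np.sign(g) * delta
    return w, g, delta

# Usage on a toy quadratic loss f(w) = ||w||^2 / 2, whose gradient is g = w.
w = np.array([3.0, -2.0])
g_prev, delta = np.zeros_like(w), np.full_like(w, 0.1)
for _ in range(50):
    w, g_prev, delta = rprop_step(w, w.copy(), g_prev, delta)
```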

6.
An Effective Learning Algorithm for Large-Scale Feedforward Neural Networks and Its Application   Cited by: 3 (self-citations: 0, others: 3)
In complex-system modeling, feedforward neural networks have been limited to small or medium-sized systems, mainly because existing neural network learning algorithms either converge too slowly or fail to converge on large-scale problems. To address this, the paper proposes a neural network learning algorithm based on an improved quasi-Newton method. The algorithm requires little memory and converges quickly, making it suitable for training high-dimensional neural networks. It is used to train a neural network that models the quality of an industrial product with 32 inputs, and the results demonstrate its effectiveness.
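The low-memory flavor of this idea can be sketched with SciPy's L-BFGS-B, which keeps only a few curvature pairs instead of a full Hessian approximation; the 32-input toy model below is an assumption, not the paper's industrial data.

```python
# Limited-memory quasi-Newton fit of a 32-input model.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 32))                  # 32 process inputs
y = X @ rng.normal(size=32) + 0.1 * rng.normal(size=500)

def loss(w):
    return 0.5 * np.mean((X @ w - y) ** 2)

def grad(w):
    return X.T @ (X @ w - y) / len(y)

res = minimize(loss, np.zeros(32), jac=grad, method="L-BFGS-B",
               options={"maxcor": 10})          # store only 10 curvature pairs
print(res.fun)
```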

7.
A PCA Face Recognition Algorithm Based on an Improved BP Neural Network   Cited by: 1 (self-citations: 0, others: 1)
Face recognition, a hot topic in pattern recognition, has received wide attention. Although the traditional BP algorithm is self-learning and adaptive, has strong nonlinear mapping ability, and achieves high accuracy on face image recognition, it converges slowly, oscillates during training, and easily falls into local minima. To address these shortcomings, a PCA face recognition algorithm based on an improved BP neural network is proposed: PCA extracts the principal features of the images, and a BP network improved with a new weight-adjustment method performs classification and recognition. Simulations on the ORL face database show that the algorithm converges faster and recognizes faces at a higher rate than the traditional algorithm.
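An eigenfaces-style PCA feature extractor can be sketched as follows; the improved BP weight adjustment itself is not reproduced, and the image sizes are toy assumptions (ORL images are actually 92×112 grayscale).

```python
# PCA (eigenfaces) feature extraction for a face classifier.
import numpy as np

rng = np.random.default_rng(3)
faces = rng.normal(size=(40, 1024))             # 40 toy "face" vectors

mean_face = faces.mean(axis=0)
A = faces - mean_face
# Trick for n_samples << n_pixels: eigendecompose the small Gram matrix.
G = A @ A.T                                     # 40x40 instead of 1024x1024
vals, U = np.linalg.eigh(G)
eigfaces = A.T @ U[:, -20:]                     # top-20 eigenfaces
eigfaces /= np.linalg.norm(eigfaces, axis=0)    # normalize columns

features = A @ eigfaces                         # inputs for the BP classifier
```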

8.
To address the low learning efficiency, slow convergence, and susceptibility to local minima of traditional BP neural networks, a method of optimizing a BP network with an improved PSO is proposed. Randomly varying acceleration constants are introduced into the PSO algorithm to obtain optimal weights, which are used to optimize and train the BP network; the optimized network is then applied to predicting the age of onset of hereditary hypertension. Experimental results show that the method largely resolves the local-minimum problem of traditional BP networks and improves the algorithm's convergence speed and stability.
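A minimal PSO loop with acceleration constants redrawn at random each iteration, per the stated improvement; treating the particle position as the BP network's weight vector, and all constants, are illustrative assumptions.

```python
# PSO with randomly varying acceleration constants.
import numpy as np

rng = np.random.default_rng(4)

def pso(loss, dim, n_particles=20, iters=100, w_inertia=0.7):
    x = rng.uniform(-1, 1, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([loss(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        # acceleration constants redrawn each iteration, per the improvement
        c1, c2 = rng.uniform(1.0, 2.5, size=2)
        r1, r2 = rng.random((2, n_particles, dim))
        v = w_inertia * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([loss(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest

w_opt = pso(lambda w: np.sum((w - 0.3) ** 2), dim=10)   # toy loss standing in
                                                        # for the BP training error
```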

9.
Principal component analysis of high-dimensional data is difficult to handle because the time and space complexity of the computation grows sharply with the data dimensionality. This paper proposes a PCA algorithm that learns directly from the data: at each iteration the new weight vector is a weighted sum of all sample vectors, so the data covariance matrix need not be computed. For PCA on a given set of sample vectors or on a stationary stochastic process, the algorithm compensates for the shortcomings of current batch and incremental algorithms. Its convergence is proven theoretically, and experiments show that it converges rapidly to the exact solution within very few iterations.
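The core trick can be sketched as power iteration written sample-wise: each new weight vector is a weighted sum of the sample vectors (with weights x_i·w), so the covariance matrix is never formed. This is an illustrative stand-in, not the paper's exact algorithm.

```python
# Covariance-free PCA: iterate on weighted sums of samples.
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(500, 50)) @ np.diag(np.linspace(2, 0.1, 50))
X -= X.mean(axis=0)

w = rng.normal(size=50)
for _ in range(30):
    w = X.T @ (X @ w) / len(X)      # weighted sum of samples; no 50x50 matrix
    w /= np.linalg.norm(w)

# w now approximates the first principal direction.
```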

10.
The traditional BP gradient-descent algorithm easily falls into local optima when solving scheduling policies for flexible manufacturing systems. To address this, a maximum-entropy weight optimization algorithm for neural networks is proposed on the basis of a regularized network structure. Using the conditional probabilities of the hidden-layer node variables, the optimization computes the expectation of the entropy function by varying a convergence operator and iteratively solves for the network's optimal weight vector. Comparative experiments show that, relative to BP gradient descent, the maximum-entropy weight-adjustment algorithm searches a larger data space, reliably converges to the global optimum, and is robust. In practical scheduling applications, the algorithm markedly shortens the makespan of the overall production task, thereby raising production efficiency.

11.
A learning algorithm for principal component analysis (PCA) is developed based on least-squares minimization. The dual learning rate parameters are adjusted adaptively to make the proposed algorithm capable of fast convergence and high accuracy for extracting all principal components. The proposed algorithm is robust to the error accumulation existing in the sequential PCA algorithm. We show that all information needed for PCA can be completely represented by the unnormalized weight vector, which is updated based only on the corresponding neuron input-output product. The updating of the normalized weight vector can be referred to as a leaky Hebb's rule. The convergence of the proposed algorithm is briefly analyzed. We also establish the relation between Oja's rule and the least-squares learning rule. Finally, simulation results are given to illustrate the effectiveness of this algorithm for PCA and for tracking time-varying directions-of-arrival.
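For context, the classical Oja rule that the abstract relates to looks like this in code; the data model and the fixed learning rate are illustrative assumptions.

```python
# Oja's PCA rule: Hebbian term y*x with a "leaky" decay -y^2 * w.
import numpy as np

rng = np.random.default_rng(6)
C = np.diag([3.0, 1.0, 0.5])
X = rng.multivariate_normal(np.zeros(3), C, size=5000)

w = rng.normal(size=3)
eta = 0.01
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)      # w converges to a unit principal eigenvector
print(w)                            # approaches +-[1, 0, 0]
```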

12.
A self-stabilizing dual-purpose algorithm is proposed for extracting eigenpairs of a signal's autocorrelation matrix. By changing only a single sign, the algorithm switches between estimating principal and minor eigenvectors, and it estimates the corresponding eigenvalue from the norm of the estimated eigenvector, thereby extracting complete eigenpairs. Convergence of the proposed algorithm is analyzed via the deterministic discrete-time (DDT) method, and boundary conditions for convergence are determined. Simulations comparing the algorithm with existing ones verify its convergence performance.
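A sketch of the dual-purpose idea: flipping one sign switches the same update between principal and minor eigenvector estimation, and the eigenvalue is read off from the converged vector (here via the Rayleigh quotient rather than the paper's norm-based estimate). The Oja/anti-Oja update used is illustrative, not the paper's exact self-stabilizing form.

```python
# One update, two purposes: +1 extracts the principal eigenpair, -1 the minor.
import numpy as np

rng = np.random.default_rng(7)
C = np.diag([4.0, 2.0, 0.25])                   # autocorrelation matrix

def extract(C, sign=+1, eta=0.05, iters=2000):
    w = rng.normal(size=C.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        w += sign * eta * (C @ w - (w @ C @ w) * w)   # +1: PCA, -1: MCA
        w /= np.linalg.norm(w)
    return w, w @ C @ w                               # eigenvector, eigenvalue

print(extract(C, +1)[1])   # ~4.0  (largest eigenvalue)
print(extract(C, -1)[1])   # ~0.25 (smallest eigenvalue)
```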

13.
A non-zero-approaching adaptive learning rate is proposed to guarantee the global convergence of Oja's principal component analysis (PCA) learning algorithm. Most of the existing adaptive learning rates for Oja's PCA learning algorithm are required to approach zero as the learning step increases. However, this is not practical in many applications due to computational round-off limitations and tracking requirements. The proposed adaptive learning rate overcomes this shortcoming: it converges to a positive constant, thus increasing the evolution rate as the learning step increases. This differs from learning rates that approach zero, which slow the convergence considerably, and increasingly so over time. Rigorous mathematical proofs of the global convergence of Oja's algorithm with the proposed learning rate are given in detail via studying the convergence of an equivalent deterministic discrete time (DDT) system. Extensive simulations are carried out to illustrate and verify the theory derived. Simulation results show that this adaptive learning rate makes Oja's PCA algorithm better suited to online learning situations.

14.
When the independent sources are known to be nonnegative and well-grounded, meaning that they have a nonzero pdf in the region of zero, Oja and Plumbley have proposed a "nonnegative principal component analysis (PCA)" algorithm to separate these positive sources. Generally, it is very difficult to prove the convergence of a discrete-time independent component analysis (ICA) learning algorithm. However, by using the skew-symmetry property of this discrete-time "nonnegative PCA" algorithm, its global convergence can be proven provided the learning rate satisfies a suitable condition. Simulation results are employed to further illustrate the advantages of this theory.
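The rectified update can be sketched as below. We assume the data are already whitened so the mixing is orthonormal; the source model, learning rate, and the use of the nonlinear PCA subspace rule with a rectification nonlinearity are illustrative assumptions based on the abstract's description.

```python
# "Nonnegative PCA" sketch: rectification inside an Oja-type subspace update.
import numpy as np

rng = np.random.default_rng(8)
S = rng.exponential(size=(2, 20000))             # nonnegative, well-grounded,
                                                 # unit-variance sources
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # orthonormal (post-whitening) mixing
X = Q @ S

W = np.linalg.qr(rng.normal(size=(2, 2)))[0]     # orthonormal initial weights
eta = 0.005
for x in X.T:
    y = np.maximum(W @ x, 0.0)                   # rectification nonlinearity
    W += eta * np.outer(y, x - W.T @ y)          # nonlinear PCA subspace rule

print(W @ Q)                                     # approx. a signed permutation
```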

15.
The convergence of Oja's principal component analysis (PCA) learning algorithms is a difficult topic for direct study and analysis. Traditionally, the convergence of these algorithms is indirectly analyzed via certain deterministic continuous time (DCT) systems. Such a method requires the learning rate to converge to zero, which is not a reasonable requirement to impose in many practical applications. Recently, deterministic discrete time (DDT) systems have been proposed instead to indirectly interpret the dynamics of the learning algorithms. Unlike DCT systems, DDT systems allow learning rates to be constant (and nonzero). This paper provides some important results relating to the convergence of a DDT system of Oja's PCA learning algorithm. It has the following contributions: 1) A number of invariant sets are obtained, based on which we can show that any trajectory starting from a point in an invariant set will remain in the set forever; thus, the nondivergence of the trajectories is guaranteed. 2) The convergence of the DDT system is analyzed rigorously. It is proven that almost all trajectories of the system starting from points in an invariant set will converge exponentially to the unit eigenvector associated with the largest eigenvalue of the correlation matrix. In addition, an exponential convergence rate is obtained, providing useful guidelines for selecting a learning rate with fast convergence. 3) Since the trajectories may diverge, the careful choice of initial vectors is an important issue; this paper suggests using the unit hypersphere as the domain of initial vectors to guarantee convergence. 4) Simulation results are furnished to illustrate the theoretical results achieved.

16.
The generalized Hebbian algorithm (GHA) is one of the most widely used principal component analysis (PCA) neural network (NN) learning algorithms. Learning rates of GHA play important roles in convergence of the algorithm for applications. Traditionally, the learning rates of GHA are required to converge to zero so that its convergence can be analyzed by studying the corresponding deterministic continuous-time (DCT) equations. However, the requirement for learning rates to approach zero is not a practical one in applications due to computational roundoff limitations and tracking requirements. In this paper, nonzero-approaching adaptive learning rates are proposed to overcome this problem. These proposed adaptive learning rates converge to some positive constants, which not only speed up the algorithm evolution considerably, but also guarantee global convergence of the GHA algorithm. The convergence is studied in detail by analyzing the corresponding deterministic discrete-time (DDT) equations. Extensive simulations are carried out to illustrate the theory.
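For reference, the underlying GHA (Sanger's rule) update looks like this with a constant learning rate; the paper's contribution, a nonzero-approaching adaptive rate with guaranteed convergence, is not reproduced here, and the data and constants are illustrative.

```python
# GHA (Sanger's rule): extract several principal components at once.
import numpy as np

rng = np.random.default_rng(9)
C = np.diag([5.0, 3.0, 1.0, 0.5])
X = rng.multivariate_normal(np.zeros(4), C, size=20000)

m = 2                                            # number of components
W = 0.1 * rng.normal(size=(m, 4))
eta = 0.003
for x in X:
    y = W @ x
    # Hebbian term minus the lower-triangular deflation term
    W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

print(W)                                         # rows approach +-e1, +-e2
```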

17.
Recently, many unified learning algorithms have been developed to solve the tasks of principal component analysis (PCA) and minor component analysis (MCA). These unified algorithms can extract principal components and, with a simple sign change, can also serve as minor component extractors, which is of practical significance for implementation. However, convergence of the existing unified algorithms is guaranteed only under the condition that the learning rates approach zero, which is impractical in many applications. In this paper, we propose a unified PCA & MCA algorithm with a constant learning rate, and derive sufficient conditions that guarantee convergence by analyzing the discrete-time dynamics of the proposed algorithm. The theoretical results achieved lay a solid foundation for applications of the proposed algorithm.

18.
19.
Research on a MapReduce-Based Principal Component Analysis Algorithm   Cited by: 1 (self-citations: 0, others: 1)
With the popularity of the MapReduce parallelization framework, parallelizing data mining algorithms has become a hot research topic, and the parallelization of principal component analysis (PCA) has drawn increasing attention. A survey of existing work on parallel PCA shows that current algorithms are only partially parallelized, especially the eigenvalue computation. The PCA pipeline consists of two stages: computing the correlation matrix, and performing the singular value decomposition (SVD). Using MapReduce, currently the most popular parallel framework, together with the QR decomposition of matrices, a parallel implementation of SVD is proposed. Randomly generated double-precision matrices of various sizes are used to compare the parallel SVD against the traditional serial algorithm and to analyze its efficiency. The parallel SVD is then integrated into the PCA algorithm, a parallel computation of the correlation matrix is also proposed, and both stages of PCA are thereby fully parallelized. Using matrices of different dimensions, the proposed fully parallel PCA is compared with the existing partially parallel PCA and with conventional PCA in terms of running speed, and its speedup is analyzed; the results show that the proposed algorithm takes considerably less time when processing large-scale data.
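The map/reduce split of the correlation-matrix stage can be sketched as follows, with Python's multiprocessing standing in for a real MapReduce cluster (an illustrative assumption): each "mapper" emits local sums over its row block, and the "reducer" combines them, so no single node sees the whole data set.

```python
# Map/reduce computation of a correlation matrix from row blocks.
import numpy as np
from multiprocessing import Pool

def partial_gram(block):
    """Map step: local sums needed for the correlation matrix."""
    return block.shape[0], block.sum(axis=0), block.T @ block

def correlation_matrix(blocks):
    with Pool() as pool:
        parts = pool.map(partial_gram, blocks)
    n = sum(p[0] for p in parts)                 # reduce step: combine sums
    s = sum(p[1] for p in parts)
    G = sum(p[2] for p in parts)
    cov = G / n - np.outer(s, s) / n**2          # covariance from the sums
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)                  # correlation matrix

if __name__ == "__main__":
    rng = np.random.default_rng(10)
    X = rng.normal(size=(10000, 8))
    R = correlation_matrix(np.array_split(X, 4)) # 4 "mappers"
    # The SVD stage would then factor R, e.g. via parallel QR iterations.
```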

20.
In several application domains, high-dimensional observations are collected and then analysed in search of naturally occurring data clusters which might provide further insights about the nature of the problem. In this paper we describe a new approach for partitioning such high-dimensional data. Our assumption is that, within each cluster, the data can be approximated well by a linear subspace estimated by means of a principal component analysis (PCA). The proposed algorithm, Predictive Subspace Clustering (PSC), partitions the data into clusters while simultaneously estimating cluster-wise PCA parameters. The algorithm minimises an objective function that depends upon a new measure of influence for PCA models. A penalised version of the algorithm is also described for carrying out simultaneous subspace clustering and variable selection. The convergence of PSC is discussed in detail, and extensive simulation results and comparisons to competing methods are presented. The comparative performance of PSC has been assessed on six real gene expression data sets, for which PSC often provides state-of-the-art results.
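The core alternating loop of cluster-wise PCA can be sketched as a k-subspaces iteration: assign each point to the subspace with the smallest reconstruction error, then refit each cluster's PCA basis. PSC's influence measure, penalised variant, and model selection are not reproduced; the synthetic data and constants are assumptions.

```python
# k-subspaces sketch: alternating cluster-wise PCA and reassignment.
import numpy as np

rng = np.random.default_rng(11)
parts = []
for _ in range(3):                               # 3 noisy 1-D subspaces in R^10
    basis = rng.normal(size=(1, 10))
    parts.append(rng.normal(size=(150, 1)) @ basis
                 + 0.05 * rng.normal(size=(150, 10)))
X = np.vstack(parts)

k, q = 3, 1                                      # clusters, subspace dimension
labels = rng.integers(k, size=len(X))
for _ in range(20):
    means, bases = [], []
    for j in range(k):
        Xj = X[labels == j]
        if len(Xj) < q + 1:                      # guard against empty clusters
            Xj = X[rng.integers(len(X), size=q + 1)]
        mu = Xj.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xj - mu, full_matrices=False)
        means.append(mu); bases.append(Vt[:q])   # cluster-wise PCA basis
    # reassign each point to the subspace that reconstructs it best
    errs = np.stack([np.linalg.norm((X - m) - (X - m) @ B.T @ B, axis=1)
                     for m, B in zip(means, bases)], axis=1)
    labels = errs.argmin(axis=1)
```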
