Similar Literature
19 similar documents found (search time: 234 ms).
1.
韩敏  王新迎 《控制与决策》2011,26(5):757-760
To address the problem that the computational complexity of the pseudoinverse method for computing the output weights of an extreme learning machine (ELM) limits its training speed, a new ELM based on the trust-region Newton algorithm (TRON-ELM) is proposed, in which the output weights are solved with a trust-region Newton method. The algorithm first constructs the Newton equation of the ELM cost function, treats it as an unconstrained optimization problem, and solves it with the conjugate gradient method, avoiding the inversion of the Hessian matrix of the cost function and thus improving training speed; the trust-region condition guarantees the global convergence of the algorithm. Simulation results verify the effectiveness of the proposed method.
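The key step in this abstract is replacing the explicit pseudoinverse with an iterative conjugate-gradient solve of the Newton (normal) equations. Below is a minimal Python sketch of that idea; the function names, the sigmoid activation, and the ridge term `lam` are illustrative assumptions, not the paper's exact TRON formulation.

```python
import numpy as np

def elm_hidden(X, W, b):
    """Hidden-layer output matrix H with a sigmoid activation (assumed)."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def cg_output_weights(H, T, lam=1e-3, iters=100, tol=1e-10):
    """Solve (H^T H + lam*I) beta = H^T T column by column with
    conjugate gradient, avoiding any explicit matrix inversion."""
    A = H.T @ H + lam * np.eye(H.shape[1])   # SPD for lam > 0
    B = H.T @ T
    beta = np.zeros_like(B)
    for j in range(B.shape[1]):
        x = np.zeros(A.shape[0]); r = B[:, j].copy(); p = r.copy()
        rs = r @ r
        for _ in range(iters):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if rs_new < tol:            # squared-residual stopping rule
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        beta[:, j] = x
    return beta
```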

2.
王粲  夏元清  邹伟东 《计算机应用研究》2021,38(6):1724-1727,1764
To address the instability caused by the uncertainty of the hidden nodes in the extreme learning machine (ELM), as well as its heavy computational burden on large data sets, a regularized ELM based on an adaptive momentum optimization algorithm (adaptive and momentum method, AdaMom) is proposed. The main idea is to construct a continuously differentiable objective function, compute an adaptive learning rate during gradient descent, take the exponentially weighted moving average of the product of the adaptive learning rate and the gradient, and obtain by iteration the hidden-layer output weight matrix that minimizes the loss function. Experimental results show that, on the same benchmark data sets, the AdaMom-ELM algorithm achieves very good generalization performance and robustness while improving computational efficiency.
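A hedged sketch of the adaptive-momentum idea follows, using the standard Adam update as a stand-in for AdaMom: exponentially weighted averages of the gradient and squared gradient with bias correction drive the descent on the output weights. All names and hyperparameters here are assumptions.

```python
import numpy as np

def adam_elm_output_weights(H, T, lam=1e-3, lr=1e-2,
                            beta1=0.9, beta2=0.999, eps=1e-8, steps=2000):
    """Minimize (1/n)||H beta - T||^2 + lam*||beta||^2 with an Adam-style
    adaptive-momentum gradient descent (a stand-in for AdaMom)."""
    n, L = H.shape
    beta = np.zeros((L, T.shape[1]))
    m = np.zeros_like(beta)   # first moment: momentum estimate
    v = np.zeros_like(beta)   # second moment: adaptive learning-rate scale
    for t in range(1, steps + 1):
        g = 2.0 * (H.T @ (H @ beta - T)) / n + 2.0 * lam * beta
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)      # bias correction
        v_hat = v / (1 - beta2 ** t)
        beta -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return beta
```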

3.
The extreme learning machine (Extreme Learning Machine, ELM) is a single-hidden-layer feedforward neural network (Single-hidden Layer Feedforward Neural Networks, SLFN). Compared with traditional neural network algorithms, it has a simple structure, fast learning speed, and good generalization performance. However, the output weights computed by least squares (Least Squares, LS) often carry large errors because the design matrix is singular, and the robustness of the algorithm cannot be guaranteed on noisy data. Principal component estimation is an improvement on least squares estimation: it effectively mitigates the effects of a singular design matrix and improves the robustness and noise resistance of the network model. An ELM method based on principal component estimation (PC-ELM) is proposed; experimental results show that it effectively improves the robustness and generalization ability of the algorithm.
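Principal component estimation amounts to regressing on the leading singular directions of the design matrix, discarding the near-zero singular values that make it ill-conditioned. A minimal sketch follows, assuming an energy-based truncation rule (the paper's exact cutoff is not given here).

```python
import numpy as np

def pc_output_weights(H, T, energy=0.99):
    """Principal-component (truncated-SVD) estimate of the output
    weights: drop directions with tiny singular values so that a
    near-singular design matrix cannot blow up the solution."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    # smallest k whose leading singular values capture `energy` of the spectrum
    k = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy) + 1
    Uk, sk, Vk = U[:, :k], s[:k], Vt[:k, :].T
    return Vk @ ((Uk.T @ T) / sk[:, None])   # beta = V_k S_k^{-1} U_k^T T
```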

4.
The extreme learning machine (ELM) has been widely applied in pattern recognition for its speed, efficiency, and good generalization ability. However, current ELM algorithms and their variants do not fully consider the influence of the hidden-layer output matrix on the generalization ability of the ELM. Experiments show that an improperly chosen activation function or excessively high data dimensionality drives the hidden-node outputs toward zero, making the output weight matrix inaccurate to solve and degrading the classification performance of ELM. To address this, a diffeomorphism-optimized extreme learning machine algorithm is proposed, which combines dimensionality reduction with diffeomorphism techniques to improve the robustness of the activation function and overcome the problem of hidden-node outputs tending to zero. Experiments on face data verify the effectiveness of the proposed algorithm; the results show that it has good generalization performance.

5.
When there are more mixed signals than source signals, the mixing matrix in the blind source separation model is overdetermined, so the separation matrix cannot be obtained directly by estimating an inverse matrix. A blind source separation method based on conjugate gradients is proposed for this linear overdetermined mixing case. Based on the minimum mutual information criterion, a cost function for overdetermined blind source separation is introduced through singular value decomposition of the full-row-rank separation matrix. An update formula for iteratively computing the separation matrix is derived with the conjugate gradient optimization algorithm. In each iteration, the score function of the separated signals is estimated online with a kernel density estimator of the probability density, avoiding the problem, common to many traditional blind separation algorithms, of substituting an empirically chosen nonlinear function for the score function. Simulation results verify the effectiveness of the proposed algorithm.

6.
The ever-growing size and structural complexity of the data involved in applications make running the extreme learning machine (ELM) on large-scale data a challenging task. To meet this challenge, a secure and practical mechanism for outsourcing ELM in a cloud computing environment is proposed. The mechanism explicitly splits ELM into a private part and a public part, which effectively reduces training time and ensures the security of the algorithm's inputs and outputs. The private part is responsible for generating the random parameters and for some simple matrix computations; the public part is outsourced to cloud servers, where the cloud provider performs the most expensive operation in the ELM algorithm: computing the Moore-Penrose generalized inverse. This generalized inverse also serves as evidence for verifying the correctness and reliability of the results. The security of the outsourcing mechanism is analyzed theoretically. Experimental results on the CIFAR-10 data set show that the proposed mechanism effectively reduces the user's computational load.
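The split described above can be sketched as a simple two-party protocol: the client keeps the cheap random-parameter generation, the "cloud" computes the Moore-Penrose inverse, and the client verifies the returned inverse with inexpensive multiplications (the four Penrose conditions). The input/output masking that provides the actual security guarantees is deliberately omitted here; all names are illustrative.

```python
import numpy as np

def client_private_part(X, L=50, seed=0):
    """Private side: random hidden parameters and the hidden-layer
    output matrix H (cheap matrix operations only)."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1, 1, (X.shape[1], L))
    b = rng.uniform(-1, 1, L)
    return np.tanh(X @ W + b)

def cloud_public_part(H):
    """Public side: the expensive Moore-Penrose inverse, outsourced."""
    return np.linalg.pinv(H)

def client_verify(H, Hp, tol=1e-8):
    """Check the four Penrose conditions with cheap multiplications, so
    the returned inverse itself serves as evidence of correctness."""
    return (np.allclose(H @ Hp @ H, H, atol=tol)
            and np.allclose(Hp @ H @ Hp, Hp, atol=tol)
            and np.allclose((H @ Hp).T, H @ Hp, atol=tol)
            and np.allclose((Hp @ H).T, Hp @ H, atol=tol))
```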

7.
This paper introduces a general framework for projective reconstruction based on singular value decomposition. With the constraint that the measurement matrix has rank 4, perspective projection is approximated by affine projection, the projective depths are estimated with the conjugate gradient method, and projective reconstruction is achieved via singular value decomposition. The unknown scale factors in the Kruppa equations are determined with the conjugate gradient method; the Kruppa equations are then solved linearly with the determined scale factors to calibrate the camera intrinsic parameters. With the intrinsic parameters known, a nonsingular matrix satisfying the Euclidean reconstruction condition is solved for, and the projective reconstruction is upgraded to a Euclidean reconstruction through this matrix. Experimental results show that the proposed algorithm is effective.

8.
An extreme learning machine method based on robust estimation
The extreme learning machine (ELM) is a single-hidden-layer feedforward neural network (single-hidden layer feedforward neural networks, SLFNs). Compared with traditional neural network algorithms, it has a simple structure, fast learning speed, and good generalization performance. The output weights of an ELM are computed by least squares (least squares, LS); however, the classical LS estimator has poor resistance to outliers and tends to amplify the influence of outliers and noise, so the trained model parameters can be inaccurate or even completely wrong. To solve this problem, a robust extreme learning machine algorithm based on M-estimation (RBELM) is proposed, which replaces ordinary least squares with weighted least squares for computing the output weights. Regression and classification experiments on several data sets show that the method effectively reduces the influence of outliers and has good robustness.

9.
The extreme learning machine (Extreme Learning Machine, ELM) is an efficient single-hidden-layer feedforward neural network that is widely applied in many fields thanks to its fast training and good generalization performance. However, ELM generates its input weights and hidden-layer bias matrix randomly, and this randomness affects the generalization performance and stability of the trained model and lowers its classification accuracy. To solve this problem, the parallel search ability of the multiple individuals in an ant lion population, as used in the ant lion optimization algorithm, is borrowed to optimize the input weights and hidden-layer bias matrix of the ELM, yielding a model with higher classification accuracy. Classification experiments on data from the UCI repository show that, on five UCI data sets, the ant-lion-optimized extreme learning machine (ALO-ELM) achieves higher classification accuracy than PSO-ELM and SaDE-ELM.

10.
Image restoration is in essence a deconvolution problem, in which the convolution kernel matrix is a large Toeplitz matrix. To reduce the computational complexity of iterative restoration algorithms, this paper analyzes the ill-conditioning of the Toeplitz system and common fast solution methods, and proposes a preconditioned conjugate gradient algorithm based on reconstructing the convolution kernel matrix. First, using the property that a Toeplitz matrix can be decomposed into a sum of Kronecker products, the point spread function is factored by singular value decomposition; the left and right singular vectors corresponding to each singular value are used to construct Toeplitz submatrices, whose Kronecker products are summed to obtain a decomposition of the convolution kernel matrix. Then, using the properties of the Kronecker product, this decomposition is used to construct a preconditioner, and finally the system is solved with the preconditioned conjugate gradient method. Complexity analysis and experiments show that the method helps accelerate convergence of the iteration and yields stable results.
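The core construction is the SVD of the point spread function, whose rank-1 terms correspond to Kronecker products of Toeplitz factors in the blur matrix. The sketch below shows that decomposition and the resulting sum of separable (column-then-row) convolutions; assembling the actual preconditioner from the leading terms is omitted, and all names are illustrative.

```python
import numpy as np

def separable_psf_terms(psf, k=2):
    """SVD of the point spread function: psf ~= sum_i s_i * u_i v_i^T.
    Each rank-1 term corresponds to a Kronecker product of two small
    Toeplitz factors in the blur matrix; the leading terms can be used
    to build a preconditioner for conjugate gradient."""
    U, s, Vt = np.linalg.svd(psf)
    return [(s[i], U[:, i], Vt[i, :]) for i in range(k)]

def apply_separable_blur(img, terms):
    """Apply the sum of separable convolutions (columns, then rows)."""
    out = np.zeros_like(img, dtype=float)
    for s_i, u, v in terms:
        tmp = np.apply_along_axis(lambda c: np.convolve(c, u, mode='same'), 0, img)
        out += s_i * np.apply_along_axis(lambda r: np.convolve(r, v, mode='same'), 1, tmp)
    return out
```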

11.
Extreme learning machine (ELM) [G.-B. Huang, Q.-Y. Zhu, C.-K. Siew, Extreme learning machine: a new learning scheme of feedforward neural networks, in: Proceedings of the International Joint Conference on Neural Networks (IJCNN2004), Budapest, Hungary, 25-29 July 2004], a novel learning algorithm much faster than traditional gradient-based learning algorithms, was recently proposed for single-hidden-layer feedforward neural networks (SLFNs). However, ELM may need a higher number of hidden neurons due to the random determination of the input weights and hidden biases. In this paper, a hybrid learning algorithm is proposed which uses the differential evolution algorithm to select the input weights and the Moore-Penrose (MP) generalized inverse to analytically determine the output weights. Experimental results show that this approach is able to achieve good generalization performance with much more compact networks.
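A compact sketch of the hybrid scheme: differential evolution (DE/rand/1/bin) searches over the input weights and biases, while each candidate's output weights are obtained analytically with the Moore-Penrose inverse. The population sizes, tanh activation, and validation-MSE fitness below are illustrative assumptions.

```python
import numpy as np

def elm_fitness(w_flat, X, T, Xv, Tv, L):
    """Train output weights analytically, score on validation data."""
    d = X.shape[1]
    W = w_flat[:d * L].reshape(d, L); b = w_flat[d * L:]
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ T                 # MP generalized inverse
    return np.mean((np.tanh(Xv @ W + b) @ beta - Tv) ** 2)

def de_elm(X, T, Xv, Tv, L=20, pop=30, gens=50, F=0.5, CR=0.9, seed=0):
    rng = np.random.default_rng(seed)
    dim = X.shape[1] * L + L
    P = rng.uniform(-1, 1, (pop, dim))
    fit = np.array([elm_fitness(p, X, T, Xv, Tv, L) for p in P])
    for _ in range(gens):
        for i in range(pop):
            idx = rng.choice(np.delete(np.arange(pop), i), 3, replace=False)
            a, b_, c = P[idx]
            mutant = a + F * (b_ - c)             # DE/rand/1 mutation
            cross = rng.random(dim) < CR          # binomial crossover
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, P[i])
            f = elm_fitness(trial, X, T, Xv, Tv, L)
            if f < fit[i]:                        # greedy selection
                P[i], fit[i] = trial, f
    return P[np.argmin(fit)]
```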

12.
To address the fact that the input weight matrix and hidden-layer biases of the traditional extreme learning machine are randomly assigned, which can lead to clearly insufficient accuracy in the computer-aided diagnosis of breast tumors, an improved fish swarm algorithm is proposed to optimize the ELM. In building an effective aid for breast tumor diagnosis, this work exploits the ELM's fast training and good generalization ability and combines it with the improved fish swarm algorithm to optimize the ELM's hidden-layer biases, constructing a nonlinear mapping between breast tumors and the 10 feature vectors extracted from breast tumor sample data. The simulation results of the proposed method are compared with those of the AFSA-ELM, ELM, LVQ, and BP methods in terms of recognition accuracy, false-negative rate, and learning speed. The results show that the proposed method achieves higher classification accuracy, a lower false-negative rate, and a faster learning speed for breast tumor diagnosis.

13.
Xiaoliang  Min   《Neurocomputing》2009,72(13-15):3066
There are two problems preventing the further development of the extreme learning machine (ELM). First, the ill-conditioning of the hidden-layer output matrix reduces the stability of ELM. Second, the complexity of the singular value decomposition (SVD) used to compute the Moore–Penrose generalized inverse limits the learning speed of ELM. For these two problems, this paper proposes the partial Lanczos ELM (PL-ELM), which employs a hybrid of partial Lanczos bidiagonalization and SVD to compute the output weights. Experimental results indicate that, compared with ELM, PL-ELM not only effectively improves stability and generalization performance but also raises the learning speed.
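In the same spirit, a truncated SVD of H can be computed with a Lanczos-type iterative solver rather than a full dense SVD. The sketch below uses SciPy's ARPACK-based `svds` as a stand-in for the paper's partial Lanczos bidiagonalization; the rank parameter `k` is an assumption.

```python
import numpy as np
from scipy.sparse.linalg import svds

def pl_elm_output_weights(H, T, k=50):
    """Output weights from a rank-k truncated SVD of H computed with a
    Lanczos-type iterative solver, instead of a full dense SVD - a
    sketch in the spirit of PL-ELM, not the exact method."""
    k = min(k, min(H.shape) - 1)          # svds requires k < min(H.shape)
    U, s, Vt = svds(H, k=k)               # leading k singular triplets
    return Vt.T @ ((U.T @ T) / s[:, None])
```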

14.
Recently, a novel learning algorithm for single-hidden-layer feedforward neural networks (SLFNs) named the extreme learning machine (ELM) was proposed by Huang et al. The essence of ELM is that the learning parameters of the hidden nodes, including input weights and biases, are randomly assigned and need not be tuned, while the output weights can be analytically determined by a simple generalized inverse operation. The only parameter that needs to be defined is the number of hidden nodes. Compared with other traditional learning algorithms for SLFNs, ELM provides much faster learning, better generalization performance, and the least human intervention. This paper first gives a brief review of ELM, describing its principle and algorithm. We then put emphasis on the improved methods and typical variants of ELM, especially incremental ELM, pruning ELM, error-minimized ELM, two-stage ELM, online sequential ELM, evolutionary ELM, voting-based ELM, ordinal ELM, fully complex ELM, and symmetric ELM. Next, the paper summarizes the applications of ELM in classification, regression, function approximation, pattern recognition, forecasting, diagnosis, and so on. Finally, the paper discusses several open issues of ELM that may be worth exploring in the future.
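The ELM principle described in this review fits in a few lines. A minimal sketch follows; the tanh activation and uniform random initialization are assumptions.

```python
import numpy as np

class ELM:
    """Minimal ELM as described above: random hidden parameters,
    analytic output weights via the Moore-Penrose inverse."""
    def __init__(self, n_hidden=100, seed=0):
        self.L = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, T):
        d = X.shape[1]
        self.W = self.rng.uniform(-1, 1, (d, self.L))  # input weights: random, never tuned
        self.b = self.rng.uniform(-1, 1, self.L)       # hidden biases: random, never tuned
        H = np.tanh(X @ self.W + self.b)
        self.beta = np.linalg.pinv(H) @ T              # analytic output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta
```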

15.
As a novel learning algorithm for single-hidden-layer feedforward neural networks, extreme learning machines (ELMs) have been a promising tool for regression and classification applications. However, it is not trivial for ELMs to find the proper number of hidden neurons due to the nonoptimal input weights and hidden biases. In this paper, a new model selection method for ELM based on multi-objective optimization is proposed to obtain compact networks with good generalization ability. First, a new leave-one-out (LOO) error bound of ELM is derived; it can be calculated at negligible computational cost once the ELM training is finished. Furthermore, hidden nodes are added to the network one by one, and at each step a multi-objective optimization algorithm is used to select optimal input weights by simultaneously minimizing this LOO bound and the norm of the output weights in order to avoid over-fitting. Experiments on five UCI regression data sets demonstrate that the proposed algorithm generally obtains better generalization performance with a more compact network than the conventional gradient-based back-propagation method, the original ELM, and the evolutionary ELM.
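For models that are linear in the output weights, a leave-one-out error can indeed be computed at negligible cost after a single training, via the classical PRESS identity e_loo,i = e_i / (1 - h_ii) with the hat matrix. The paper derives its own LOO bound; the sketch below shows the standard identity for a ridge-regularized, single-output ELM.

```python
import numpy as np

def elm_press_loo(H, t, lam=1e-3):
    """Closed-form leave-one-out (PRESS) error of a ridge-regularized
    ELM: e_loo_i = e_i / (1 - h_ii), where the hat matrix is
    H (H^T H + lam I)^{-1} H^T (single-output case)."""
    A = H.T @ H + lam * np.eye(H.shape[1])
    hat = H @ np.linalg.solve(A, H.T)          # hat (smoother) matrix
    e = t - hat @ t                             # training residuals
    h = np.diag(hat)                            # leverages
    return np.mean((e / (1.0 - h)) ** 2)
```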

16.
To address the error between the hidden layer and the output layer in the extreme learning machine (ELM), analysis shows that the error originates in the computation of the Moore-Penrose generalized inverse H⁺ of the hidden-layer output matrix H: the matrix HH⁺ deviates from the identity matrix, and a suitable output matrix H can be chosen according to the degree of this deviation to obtain a smaller training error. Based on the definitions of the generalized inverse and the auxiliary matrix, the target matrix HH⁺ and the L2,1-norm error metric are first determined; experimental analysis then shows that the L2,1 norm of HH⁺ is significantly linearly correlated with the ELM error; finally, Gaussian filtering is introduced to denoise the target matrix, which effectively reduces its L2,1 norm and, with it, the ELM error, thereby optimizing the ELM algorithm.
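A sketch of the diagnostic and denoising step described above: measure how far HH⁺ is from the identity in the L2,1 norm, then Gaussian-filter the matrix. The exact target matrix and filter settings in the paper may differ; everything here is an assumption for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def l21_norm(M):
    """L2,1 norm: sum of the row-wise Euclidean norms."""
    return np.sum(np.sqrt(np.sum(M * M, axis=1)))

def hh_pinv_deviation(H):
    """L2,1 norm of (H H^+ - I): the deviation from the identity that
    the abstract links to the ELM training error."""
    Hp = np.linalg.pinv(H)
    return l21_norm(H @ Hp - np.eye(H.shape[0]))

def denoise_hidden_matrix(H, sigma=1.0):
    """Gaussian filtering of the target matrix, as the denoising step."""
    return gaussian_filter(H, sigma=sigma)
```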

17.
In order to overcome the disadvantages of traditional algorithms for SLFNs (single-hidden-layer feedforward neural networks), an improved algorithm called the extreme learning machine (ELM) was proposed by Huang et al. However, ELM is sensitive to the number of hidden neurons, and selecting it is a difficult problem. In this paper, a self-adaptive mechanism is introduced into ELM, yielding a new variant called the self-adaptive extreme learning machine (SaELM). SaELM is a self-adaptive learning algorithm that can always select the best number of hidden neurons to form the neural network, with no need to adjust any parameters during training. To evaluate the performance of SaELM, it is used to solve the Italian wine and iris classification problems. Comparisons between SaELM and traditional back-propagation, basic ELM, and general regression neural networks show that SaELM has a faster learning speed and better generalization performance on these classification problems.
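The selection of the hidden-neuron count can be sketched as training one ELM per candidate count and keeping the best; the validation-MSE criterion and candidate grid below are simplifying assumptions, not the paper's adaptation rule.

```python
import numpy as np

def select_hidden_count(X, T, Xv, Tv, candidates=range(5, 105, 5), seed=0):
    """Try each candidate hidden-neuron count, train an ELM, and keep
    the count with the lowest validation error - a simple stand-in
    for SaELM's self-adaptive selection."""
    rng = np.random.default_rng(seed)
    best_err, best_model = np.inf, None
    for L in candidates:
        W = rng.uniform(-1, 1, (X.shape[1], L))
        b = rng.uniform(-1, 1, L)
        beta = np.linalg.pinv(np.tanh(X @ W + b)) @ T
        err = np.mean((np.tanh(Xv @ W + b) @ beta - Tv) ** 2)
        if err < best_err:
            best_err, best_model = err, (L, W, b, beta)
    return best_model
```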

18.
Interval data offer a valuable way of representing the available information in complex problems where uncertainty, inaccuracy, or variability must be taken into account. This paper considers the learning of interval neural networks, whose inputs and outputs are vectors with interval components and whose weights are real numbers. The back-propagation (BP) learning algorithm is very slow for interval neural networks, just as for usual real-valued neural networks, whereas the extreme learning machine (ELM) learns much faster. In this paper, ELM is applied to the learning of interval neural networks, resulting in an interval extreme learning machine (IELM). ELM for usual feedforward neural networks has two steps: the first randomly generates the weights connecting the input and hidden layers, and the second uses the Moore–Penrose generalized inverse to determine the weights connecting the hidden and output layers. The first step can be applied directly to interval neural networks, but the second cannot, because of the nonlinear constraint conditions involved in IELM. Instead, the same idea as in the BP algorithm is used to form a nonlinear optimization problem that determines the weights connecting the hidden and output layers of IELM. Numerical experiments show that IELM is much faster than the usual BP algorithm, and its generalization performance is much better, while its training error is slightly worse than that of BP, implying that BP may be over-fitting.

19.
We present a Newton-based extremum seeking algorithm for the multivariable case. The design extends recent Newton-based extremum seeking algorithms for the scalar case and introduces a dynamic estimator of the inverse of the Hessian matrix that removes the difficulty with the possible singularity of a direct estimate of the Hessian. The estimator of the Hessian inverse has the form of a differential Riccati equation. We prove local stability of the new algorithm for general nonlinear dynamic systems using averaging and singular perturbations. In comparison with standard gradient-based multivariable extremum seeking, the proposed algorithm removes the dependence of the convergence rate on the unknown Hessian matrix and makes the convergence rates, of both the parameter estimates and the estimates of the Hessian inverse, user-assignable. In particular, the new algorithm allows all parameters to converge with the same speed, yielding straight trajectories to the extremum even for maps with highly elongated level sets, in contrast to the curved "steepest descent" trajectories of the gradient algorithm. Simulation results show the advantage of the proposed approach over gradient-based extremum seeking by assigning equal, desired convergence rates to all parameters using Newton's approach.
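The distinctive component is the Riccati estimator of the Hessian inverse. The sketch below integrates that Riccati ODE together with the Newton flow on a static quadratic map with elongated level sets; for clarity, the gradient and Hessian "estimates" are exact callables here, whereas the paper obtains them from sinusoidal perturbations, so this is an illustrative simplification only.

```python
import numpy as np

def newton_es_riccati(grad, hess, theta0, k=1.0, omega_r=5.0,
                      dt=1e-3, steps=10000):
    """Newton-flow part of the algorithm: Gamma tracks the inverse
    Hessian through the Riccati ODE
        dGamma/dt = omega_r * Gamma - omega_r * Gamma @ H @ Gamma,
    avoiding inversion of a possibly singular Hessian estimate, while
    theta follows the Newton direction -k * Gamma @ grad(theta)."""
    theta = np.array(theta0, dtype=float)
    Gamma = np.eye(len(theta))             # initial inverse-Hessian estimate
    for _ in range(steps):
        H = hess(theta)
        Gamma += dt * omega_r * (Gamma - Gamma @ H @ Gamma)
        theta += dt * (-k * Gamma @ grad(theta))
    return theta, Gamma

# Elongated quadratic map: the Newton flow gives equal convergence
# rates in all coordinates, unlike plain gradient descent.
Q = np.diag([100.0, 1.0])
theta_star = np.array([1.0, -2.0])
grad = lambda th: Q @ (th - theta_star)
hess = lambda th: Q
theta, Gamma = newton_es_riccati(grad, hess, [5.0, 5.0])
# theta approaches theta_star; Gamma approaches inv(Q)
```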
