Similar Documents
 A total of 19 similar documents were found (search time: 296 ms)
1.
To address the network structure design problem of the extreme learning machine (ELM), a sensitivity-analysis-based ELM pruning algorithm is proposed. Using the hidden-node outputs and the corresponding output-layer weight vectors, the sensitivity of the learning residual to each hidden node and a network-scale fitness measure are defined. The importance of each hidden node is judged by its sensitivity, the number of hidden nodes is determined by the network-scale fitness, and nodes of low importance are deleted. Simulation results show that the proposed algorithm can fairly accurately determine a network scale matched to the training samples, solving the ELM network structure design problem.
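The abstract does not spell out the algorithm, but the core loop — score each hidden node, drop the weak ones, re-fit the output weights — can be sketched as below. The sensitivity score and the `keep_ratio` parameter here are illustrative stand-ins for the paper's definitions, not the authors' exact formulas.

```python
import numpy as np

def train_elm(X, T, n_hidden, rng):
    """Standard ELM: random hidden-layer parameters, least-squares output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)              # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T        # output weights via pseudo-inverse
    return W, b, beta

def prune_by_sensitivity(X, T, W, b, beta, keep_ratio=0.4):
    """Keep only the hidden nodes with the largest (illustrative) sensitivity."""
    H = np.tanh(X @ W + b)
    # Score node j by how strongly it can move the network output:
    # ||H[:, j]|| * ||beta[j]|| -- a stand-in for the paper's residual sensitivity.
    scores = np.linalg.norm(H, axis=0) * np.linalg.norm(beta, axis=1)
    k = max(1, int(keep_ratio * len(scores)))
    keep = np.argsort(scores)[-k:]
    beta_new = np.linalg.pinv(H[:, keep]) @ T   # re-fit output weights after pruning
    return W[:, keep], b[keep], beta_new

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))
T = np.sin(3 * X)                                # toy regression target
W, b, beta = train_elm(X, T, n_hidden=50, rng=rng)
W, b, beta = prune_by_sensitivity(X, T, W, b, beta)
pred = np.tanh(X @ W + b) @ beta
print("RMSE with", W.shape[1], "remaining nodes:", np.sqrt(np.mean((pred - T) ** 2)))
```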

2.
The number of hidden layers and hidden nodes determines the scale of a neural network and strongly affects its performance. On the premise of keeping the minimum number of hidden nodes the network requires, a pruning algorithm deletes redundant nodes to obtain a more compact structure. Penalty-function-based pruning appends a penalty term to the objective function, where the penalty is a function of the network weights. Since the weight variables in the penalty can carry an adjustable parameter, the single penalty term can be generalized into a family of penalty functions that vary with this parameter; the original penalty is the special case in which the parameter takes a fixed value. Experiments on XOR data with a standard BP neural network show how the hidden-node pruning effect and the network weights change as the penalty function is generalized, and the data analysis identifies generalization parameters that yield better pruning and a better network structure.
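A minimal sketch of what such a parameterized penalty family might look like in BP training on XOR. The functional form |w|^p / (c^p + |w|^p) is an assumption (p = 2 recovers the classic weight-elimination penalty); the paper's exact penalty is not given in the abstract, and convergence depends on the random seed.

```python
import numpy as np

def penalty_grad(w, c=1.0, p=2.0):
    """Gradient of the assumed penalty family sum |w|^p / (c^p + |w|^p).

    p = 2 recovers the classic weight-elimination penalty; varying p gives a
    parameterized family of the kind the abstract describes (assumed form).
    """
    return (p * np.sign(w) * np.abs(w) ** (p - 1) * c ** p) / (c ** p + np.abs(w) ** p) ** 2

def train_xor(lam=0.01, p=2.0, epochs=10000, lr=0.5, n_hidden=4, seed=0):
    """BP with squared error plus lam * penalty; convergence depends on the seed."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)
    W1 = rng.standard_normal((2, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.standard_normal((n_hidden, 1)); b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sig(X @ W1 + b1)
        Y = sig(H @ W2 + b2)
        dY = (Y - T) * Y * (1 - Y)          # output delta (squared error)
        dH = (dY @ W2.T) * H * (1 - H)      # hidden delta
        W2 -= lr * (H.T @ dY + lam * penalty_grad(W2, p=p))
        b2 -= lr * dY.sum(axis=0)
        W1 -= lr * (X.T @ dH + lam * penalty_grad(W1, p=p))
        b1 -= lr * dH.sum(axis=0)
    return Y, W2

Y, W2 = train_xor(p=2.0)
print("XOR outputs:", Y.ravel().round(3))
# Small |W2| entries flag hidden nodes the penalty has pushed toward pruning.
print("|hidden->output| weights:", np.abs(W2).ravel().round(3))
```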

3.
Compared with the radial basis function (RBF) neural network, the extreme learning machine (ELM) trains faster and generalizes better; meanwhile, the affinity propagation (AP) clustering algorithm can determine the number of clusters automatically. This paper therefore proposes a multi-label learning model (ML-AP-RBF-RELM) that combines AP clustering, the multi-label RBF (ML-RBF), and the regularized ELM (RELM). First, the input layer is mapped by ML-RBF, with AP clustering automatically determining the number of clusters for each label class and thus the number of hidden nodes. Then, using the cluster count of each label class, K-means clustering determines the centers of the hidden-node RBF functions. Finally, RELM quickly solves for the hidden-to-output connection weights. Experiments show that ML-AP-RBF-RELM performs well.
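A single-target simplification of this pipeline: AP picks the number of RBF centers automatically, a Gaussian hidden layer is built on them, and a ridge-style closed-form solve plays the role of RELM. The multi-label ML-RBF machinery is omitted, and the width `sigma` and regularization `C` are arbitrary choices.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (300, 2))
y = (np.sin(X[:, 0]) + np.cos(X[:, 1])).reshape(-1, 1)   # toy single target

# AP decides the number of RBF centers by itself -- no k to choose.
ap = AffinityPropagation(random_state=0).fit(X)
centers = ap.cluster_centers_
print("AP chose", len(centers), "centers")

# Gaussian RBF hidden layer built on the AP centers (sigma assumed).
sigma = 1.0
d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
H = np.exp(-d2 / (2 * sigma ** 2))

# Regularized ELM step: ridge-style closed-form output weights (C assumed).
C = 1e3
beta = np.linalg.solve(H.T @ H + np.eye(H.shape[1]) / C, H.T @ y)
print("train RMSE:", float(np.sqrt(np.mean((H @ beta - y) ** 2))))
```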

4.
To increase the running speed of wavelet networks and shorten their training and running time, an improved wavelet network based on the lifting wavelet transform and neural network algorithms — the lifting wavelet network — is proposed. A signal with salient features is taken as the network input; after weighting, it is fed to the hidden nodes for lifting wavelet transform processing, and the low-frequency coefficients of the signal are extracted as the hidden-node outputs. After further weighting, these are fed to the output-layer nodes for 0-1 output, thereby achieving feature recognition of the signal...
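The hidden nodes above keep the low-frequency coefficients of a lifting wavelet transform. The sketch below shows one lifting step of the simplest (Haar) wavelet as a generic illustration of the split-predict-update scheme; it is not the paper's particular transform.

```python
import numpy as np

def haar_lifting_step(x):
    """One split-predict-update step of the Haar lifting scheme.

    Returns (approximation, detail); the low-frequency approximation is what
    the abstract's hidden nodes pass on as their outputs.
    """
    even, odd = x[0::2], x[1::2]
    detail = odd - even            # predict the odd samples from the even ones
    approx = even + detail / 2     # update so approx keeps the local average
    return approx, detail

t = np.linspace(0, 1, 64)
signal = np.sin(2 * np.pi * 4 * t) + 0.1 * np.random.default_rng(0).standard_normal(64)
approx, detail = haar_lifting_step(signal)
print(len(signal), "samples ->", len(approx), "low-frequency coefficients")
```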

5.
To address the difficulty of determining the model order in model identification, an RBF neural network pruning algorithm is proposed. Based on this algorithm, both the hidden nodes and the input nodes of the RBF network are pruned, which not only simplifies the network structure but also reduces the number of input nodes and thereby determines the model order. Meanwhile, to avoid deleting input nodes by mistake, the process inputs and outputs are pruned separately when pruning the input nodes. The algorithm is applied to thermal process identification, and simulation results show that the proposed RBF-network-based pruning algorithm is effective.

6.
An excessive number of hidden nodes in a BP (back propagation) neural network degrades the network's generalization performance and efficiency. The self-constructing learning algorithm deletes and merges hidden nodes by examining the correlations among their outputs, but it suffers from inconsistent network convergence during deletion and merging. A randomness-degree concept is therefore introduced into the self-constructing algorithm, and, building on the divide-and-conquer idea, a cyclic self-constructing algorithm is proposed to optimize the network structure. Comparative Matlab experiments verify that the cyclic self-constructing algorithm can prune networks with different or identical initial hidden-node counts down to a consistent structure and optimize that structure to its most compact form.
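A rough sketch of the correlation-based deletion/merging idea the self-constructing algorithm builds on. Folding a redundant node's output weight into its partner, as done below, is exact only when the two outputs are nearly identical; the paper's actual merge rule is not given in the abstract.

```python
import numpy as np

def merge_correlated_hidden_nodes(H, W2, thresh=0.95):
    """Merge hidden nodes whose training-set outputs are highly correlated.

    H  : (n_samples, n_hidden) hidden-layer outputs
    W2 : (n_hidden, n_out) hidden-to-output weights
    Folding node j's weight into node i is a simplification that stands in
    for the paper's merge rule.
    """
    R = np.corrcoef(H, rowvar=False)
    keep = list(range(H.shape[1]))
    W2 = W2.copy()
    for i in range(H.shape[1]):
        if i not in keep:
            continue
        for j in range(i + 1, H.shape[1]):
            if j in keep and R[i, j] > thresh:
                W2[i] += W2[j]     # fold node j's contribution into node i
                keep.remove(j)
    return keep, W2[keep]

rng = np.random.default_rng(0)
H = rng.standard_normal((100, 5))
H[:, 4] = H[:, 1] + 1e-3 * rng.standard_normal(100)   # near-duplicate node
W2 = rng.standard_normal((5, 1))
keep, W2m = merge_correlated_hidden_nodes(H, W2)
print("kept nodes:", keep)    # node 4 should be merged into node 1
```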

7.
An RBF neural network algorithm based on GEP optimization   (Cited by: 1; self-citations: 0; other citations: 1)
As an artificial neural network that performs function mapping through local tuning, the RBF neural network performs well in approximation capability, classification capability, and learning speed. However, because the number of hidden nodes and their centers are difficult to determine, the accuracy of the whole network suffers, which greatly restricts its wide application. This paper therefore proposes an RBF neural network algorithm based on GEP (gene expression programming) optimization, which optimizes the center vectors and connection weights. Experiments show that the proposed algorithm reduces the prediction error by 48.96% on average compared with the standard RBF algorithm.

8.
A new structurally adaptive radial basis function (RBF) neural network model is proposed. In this network, a self-organizing map (SOM) neural network serves as the clustering network: it classifies the input samples with an unsupervised learning algorithm and passes the cluster centers and their corresponding weight vectors to the RBF neural network as the radial basis function centers and weight vectors. The RBF neural network serves as the base network: Gaussian functions realize the nonlinear mapping from the input layer to the hidden layer, while the output layer trains the network weights with a supervised learning algorithm, thereby realizing the nonlinear mapping from the input layer to the output layer. Simulations on a letter data set show that the network performs well.
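A compact sketch of this two-stage scheme under several simplifying assumptions: a minimal 1-D SOM supplies the RBF centers, a fixed Gaussian width is assumed, and the supervised output stage is a one-shot least-squares solve rather than iterative training.

```python
import numpy as np

def train_som(X, n_units=12, epochs=30, rng=None):
    """Minimal 1-D SOM: online updates with a shrinking learning rate and
    neighborhood; the trained unit weights are returned as RBF centers."""
    rng = rng or np.random.default_rng(0)
    W = X[rng.choice(len(X), n_units, replace=False)].copy()
    idx = np.arange(n_units)
    for ep in range(epochs):
        lr = 0.5 * (1 - ep / epochs) + 0.01
        radius = max(1.0, (n_units / 2) * (1 - ep / epochs))
        for x in X[rng.permutation(len(X))]:
            winner = np.argmin(((W - x) ** 2).sum(axis=1))
            h = np.exp(-((idx - winner) ** 2) / (2 * radius ** 2))
            W += lr * h[:, None] * (x - W)
    return W

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, (300, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1).astype(float).reshape(-1, 1)  # toy labels

centers = train_som(X, n_units=12, rng=rng)               # stage 1: unsupervised
d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
H = np.exp(-d2 / (2 * 0.5 ** 2))                          # Gaussian hidden layer
beta = np.linalg.pinv(H) @ y                              # stage 2: supervised
print("train accuracy:", float(np.mean(((H @ beta) > 0.5) == (y > 0.5))))
```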

9.
Xu Rui, Liang Xun, Ma Yuefeng, Qi Jinshan. Chinese Journal of Computers, 2021, 44(9): 1888-1906
Owing to its flexible nonlinear modeling capability and good pattern recognition capability, the single hidden layer feedforward neural network (SLFN) has long been a focus of machine learning and data mining. Network structure is well known to be one of the key factors affecting SLFN generalization; for a given application, automatically selecting the optimal number of hidden nodes during training remains a major challenge. The extreme learning machine (ELM) trains an SLFN by randomly generating the hidden-node parameters and solving the output-layer weights by least squares, which to some extent overcomes the slow convergence and local minima of traditional gradient-based learning methods. However, ELM still requires the number of hidden nodes to be set manually, which is not only tedious but also cannot guarantee an optimal or near-optimal network structure. To further reduce network complexity without sacrificing generalization, this paper improves ELM by casting network structure learning as subset model selection and proposes an adaptive orthogonal search method for hidden nodes. First, a candidate pool of hidden nodes is built with standard ELM. Then an orthogonal forward selection algorithm adds to the model the candidate hidden node most correlated with the desired network output. Each time a new hidden node is introduced, the previously selected nodes are checked one by one, and any node that has become unimportant is removed from the network. Finally, an enhanced backward removal strategy is designed to correct mistakes made in earlier steps and further eliminate redundant hidden nodes remaining in the model. The method fully accounts for the interdependence and mutual influence of hidden nodes, and experimental results show that it not only achieves good generalization performance but also produces fairly compact network structures.
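A simplified sketch of the forward-selection core (without the orthogonalization or the enhanced backward-removal strategy): build a candidate pool of random ELM nodes, then greedily add the candidate that most reduces the residual.

```python
import numpy as np

def forward_select_hidden_nodes(H_pool, t, max_nodes=15, tol=1e-4):
    """Greedily add the candidate hidden node that most reduces the residual
    sum of squares of the least-squares fit to the target t."""
    selected = []
    err_prev = float(np.sum(t ** 2))
    for _ in range(max_nodes):
        best_j, best_err = None, err_prev
        for j in range(H_pool.shape[1]):
            if j in selected:
                continue
            Hs = H_pool[:, selected + [j]]
            beta, *_ = np.linalg.lstsq(Hs, t, rcond=None)
            err = float(np.sum((t - Hs @ beta) ** 2))
            if err < best_err:
                best_j, best_err = j, err
        if best_j is None or err_prev - best_err < tol:
            break                                  # no candidate helps enough
        selected.append(best_j)
        err_prev = best_err
    beta, *_ = np.linalg.lstsq(H_pool[:, selected], t, rcond=None)
    return selected, beta

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (150, 1))
t = np.sin(4 * X).ravel()
W = rng.standard_normal((1, 60)); b = rng.standard_normal(60)
H_pool = np.tanh(X @ W + b)                        # standard-ELM candidate pool
selected, beta = forward_select_hidden_nodes(H_pool, t)
print("selected", len(selected), "of 60 candidate nodes")
```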

10.
A new structurally adaptive radial basis function (RBF) neural network model is proposed. In this model, a self-organizing map (SOM) neural network serves as the clustering network: it classifies the input samples with an unsupervised learning algorithm and passes the cluster centers and their corresponding weight vectors to the RBF neural network, respectively, as the radial basis function centers and the corresponding weight vectors. The RBF neural network serves as the base network: Gaussian functions realize the nonlinear mapping from the input layer to the hidden layer, while the output layer trains the network weights with a supervised learning algorithm, thereby realizing the nonlinear mapping from the input layer to the output layer. Simulations on a letter data set show that the network performs well.

11.
In this paper, we introduce a new learning method for composite function wavelet neural networks (CFWNN) that combines the differential evolution (DE) algorithm with the extreme learning machine (ELM), referred to as CWN-E-ELM for short. The recently proposed CFWNN trained with ELM (CFWNN-ELM) has several promising features, but it may contain redundant nodes, because the number of hidden nodes is assigned a priori and the input weight matrix and hidden-node parameter vector are randomly generated once and never changed during the learning phase. DE is introduced into CFWNN-ELM to search for the optimal network parameters and to reduce the number of hidden nodes used in the network. Simulations on several artificial function approximations, real-world data regressions, and a chaotic signal prediction problem show the advantages of the proposed CWN-E-ELM: compared with CFWNN-ELM, it has a much more compact network size, and compared with several relevant methods, it achieves better generalization performance.
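A toy illustration of the DE-plus-ELM division of labor, using SciPy's differential_evolution and an ordinary tanh network in place of the composite function wavelet nodes: DE searches the hidden-node parameters, while ELM's least-squares solve supplies the output weights inside the fitness function. All sizes and bounds are arbitrary.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (120, 1))
t = np.sin(4 * X).ravel()
n_hidden = 5

def elm_error(params):
    """DE fitness: given candidate hidden-node parameters, run the ELM step
    (least-squares output weights) and return the training RMSE."""
    W = params[:n_hidden].reshape(1, n_hidden)
    b = params[n_hidden:]
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, t, rcond=None)
    return float(np.sqrt(np.mean((H @ beta - t) ** 2)))

bounds = [(-3, 3)] * (2 * n_hidden)      # input weights and biases
result = differential_evolution(elm_error, bounds, maxiter=50, seed=0)
print("RMSE with only", n_hidden, "DE-tuned hidden nodes:", round(result.fun, 4))
```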

12.
Zhang Hui, Chai Yi. Computer Engineering and Applications, 2012, 48(20): 146-149, 157
An improved parameter optimization algorithm for RBF neural networks is proposed. The number of hidden nodes is determined by the resource allocating network (RAN) algorithm, a pruning strategy is introduced to delete nodes that contribute little to the network, and an improved particle swarm optimization algorithm optimizes the centers, widths, and weights of the RBF network, so that the network obtains both a suitable structure and suitable control parameters. Applying this algorithm to prediction of a continuous stirred-tank reactor model shows that the optimized RBF network has a small structure and high generalization ability.

13.
Considering the uncertainty of hidden neurons, choosing significant hidden nodes, known as model selection, plays an important role in applications of extreme learning machines (ELMs). How to define and measure this uncertainty is a key issue of model selection for ELM. From the information-geometry point of view, this paper presents a new model selection method of ELM for regression problems based on the Riemannian metric. First, the paper proves theoretically that the uncertainty can be characterized by a form of Riemannian metric. On this basis, a new uncertainty evaluation of ELM is proposed by averaging the Riemannian metric over all hidden neurons. Finally, hidden nodes are added to the network one by one, and at each step a multi-objective optimization algorithm selects optimal input weights by simultaneously minimizing this uncertainty evaluation and the norm of the output weights, in order to obtain better generalization performance. Experiments on five UCI regression data sets and a cylindrical shell vibration data set demonstrate that the proposed method generally obtains lower generalization error than the original ELM, the evolutionary ELM, ELM with model selection, and the multi-dimensional support vector machine. Moreover, the proposed algorithm generally needs fewer hidden neurons and less computational time than the traditional approaches, which is very favorable in engineering applications.

14.
As a novel learning algorithm for single-hidden-layer feedforward neural networks, extreme learning machines (ELMs) have been a promising tool for regression and classification applications. However, it is not trivial for ELMs to find the proper number of hidden neurons, owing to the nonoptimal input weights and hidden biases. In this paper, a new model selection method for ELM based on multi-objective optimization is proposed to obtain compact networks with good generalization ability. First, a new leave-one-out (LOO) error bound of ELM is derived; it can be calculated at negligible computational cost once ELM training is finished. Furthermore, hidden nodes are added to the network one by one, and at each step a multi-objective optimization algorithm selects optimal input weights by simultaneously minimizing this LOO bound and the norm of the output weights, in order to avoid over-fitting. Experiments on five UCI regression data sets demonstrate that the proposed algorithm generally obtains better generalization performance with a more compact network than the conventional gradient-based back-propagation method, the original ELM, and the evolutionary ELM.
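The paper's LOO bound is not reproduced in the abstract, but for any linear-in-parameters output layer such as ELM's, exact LOO residuals are available in closed form from a single fit via the classical PRESS identity, which illustrates why such a criterion can be computed at negligible cost once training is finished:

```python
import numpy as np

def press_loo_mse(H, t, reg=1e-6):
    """Exact LOO mean squared error of a ridge least-squares fit, from one fit.

    Uses the PRESS identity e_loo_i = e_i / (1 - S_ii), where S is the hat
    matrix H (H^T H + reg I)^{-1} H^T; no retraining is needed.
    """
    A = np.linalg.inv(H.T @ H + reg * np.eye(H.shape[1]))
    S_diag = np.einsum('ij,jk,ik->i', H, A, H)     # diagonal of H A H^T
    e = t - H @ (A @ (H.T @ t))                    # ordinary residuals
    return float(np.mean((e / (1 - S_diag)) ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (100, 1))
t = np.sin(3 * X).ravel() + 0.05 * rng.standard_normal(100)
for n_hidden in (5, 20, 80):                       # LOO MSE exposes over-fitting
    W = rng.standard_normal((1, n_hidden)); b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    print(n_hidden, "hidden nodes -> LOO MSE", round(press_loo_mse(H, t), 5))
```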

15.
Recently, a novel learning algorithm for single-hidden-layer feedforward neural networks (SLFNs) named the extreme learning machine (ELM) was proposed by Huang et al. The essence of ELM is that the learning parameters of the hidden nodes, including input weights and biases, are randomly assigned and need not be tuned, while the output weights can be determined analytically by a simple generalized inverse operation. The only parameter that needs to be defined is the number of hidden nodes. Compared with other traditional learning algorithms for SLFNs, ELM provides extremely fast learning speed and better generalization performance with minimal human intervention. This paper first gives a brief review of ELM, describing its principle and algorithm. It then puts emphasis on improved methods and typical variants of ELM, especially incremental ELM, pruning ELM, error-minimized ELM, two-stage ELM, online sequential ELM, evolutionary ELM, voting-based ELM, ordinal ELM, fully complex ELM, and symmetric ELM. Next, it summarizes applications of ELM in classification, regression, function approximation, pattern recognition, forecasting, diagnosis, and so on. Finally, it discusses several open issues of ELM that may be worth exploring in the future.
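The ELM training procedure summarized above fits in a few lines: random hidden parameters, one generalized-inverse solve. A minimal regression sketch (the activation, node count, and toy data are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.uniform(0, 2 * np.pi, (300, 1))
T = np.sin(X) + 0.05 * rng.standard_normal((300, 1))

n_hidden = 30
W = rng.standard_normal((1, n_hidden))   # input weights: random, never tuned
b = rng.standard_normal(n_hidden)        # hidden biases: random, never tuned
H = np.tanh(X @ W + b)                   # hidden-layer output matrix

beta = np.linalg.pinv(H) @ T             # output weights: generalized inverse

X_test = np.linspace(0, 2 * np.pi, 50).reshape(-1, 1)
Y_test = np.tanh(X_test @ W + b) @ beta
print("test RMSE vs sin:", float(np.sqrt(np.mean((Y_test - np.sin(X_test)) ** 2))))
```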

16.
Dynamic deletion of hidden-layer nodes in feedforward neural networks   (Cited by: 5; self-citations: 0; other citations: 5)
This paper first makes a simple modification to the error function to remedy a deficiency of the BP algorithm, greatly accelerating network convergence. In addition, it proposes an algorithm based on linear regression analysis to determine the number of hidden nodes: when a trained network has too many hidden units, the algorithm computes the linear correlations among the hidden-node outputs, estimates the number of redundant hidden units, and deletes them, thereby obtaining a network of appropriate structure.
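One cheap way to estimate the number of linearly redundant hidden units from the dependence among hidden outputs is the numerical rank of the hidden output matrix; this is a stand-in for the paper's regression-analysis procedure, not its exact method:

```python
import numpy as np

def estimate_redundant_units(H, rtol=1e-6):
    """Number of hidden units minus the numerical rank of the hidden output
    matrix: an estimate of how many units are linearly redundant."""
    rank = np.linalg.matrix_rank(H, tol=rtol * np.linalg.norm(H))
    return H.shape[1] - rank

rng = np.random.default_rng(0)
H = rng.standard_normal((50, 8))                  # 8 hidden units, 50 samples
H[:, 7] = 2 * H[:, 0] - H[:, 3]                   # one unit is a linear combo
print("estimated redundant hidden units:", estimate_redundant_units(H))  # -> 1
```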

17.
The extreme learning machine (ELM) is a learning algorithm for generalized single-hidden-layer feedforward networks (SLFNs). To obtain a suitable network architecture, the Incremental Extreme Learning Machine (I-ELM) is a variant of ELM that constructs SLFNs by adding hidden nodes one by one. Although various I-ELM-class algorithms have been proposed to improve the convergence rate or to minimize the training error, they either leave I-ELM's construction scheme unchanged or face an over-fitting risk; making the testing error converge quickly and stably therefore remains an important issue. In this paper, we propose a new incremental ELM, referred to as the Length-Changeable Incremental Extreme Learning Machine (LCI-ELM). It allows more than one hidden node to be added to the network at a time, and the existing network is treated as a whole when the output weights are tuned. The output weights of newly added hidden nodes are determined using a partial error-minimizing method. We prove that an SLFN constructed using LCI-ELM has approximation capability on a universal compact input set as well as on a finite training set. Experimental results demonstrate that LCI-ELM achieves a higher convergence rate and a lower over-fitting risk than several competitive I-ELM-class algorithms.
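For contrast with LCI-ELM, here is a sketch of the baseline I-ELM construction it generalizes: nodes are added strictly one at a time, and each new node's output weight is fitted to the current residual while all earlier weights stay frozen (the constraints LCI-ELM relaxes).

```python
import numpy as np

def i_elm(X, t, max_nodes=100, rng=None):
    """Basic I-ELM: add random hidden nodes one at a time; each new node's
    output weight fits the current residual, and existing weights are frozen
    (LCI-ELM instead re-solves all output weights and adds nodes in batches).
    """
    rng = rng or np.random.default_rng(0)
    residual, nodes = t.copy(), []
    for _ in range(max_nodes):
        w = rng.standard_normal(X.shape[1])
        b = rng.standard_normal()
        h = np.tanh(X @ w + b)
        beta = float(h @ residual) / float(h @ h)   # 1-D least squares
        residual = residual - beta * h
        nodes.append((w, b, beta))
    return nodes, residual

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 2))
t = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])
nodes, res = i_elm(X, t, max_nodes=100, rng=rng)
print("training RMSE after 100 nodes:", float(np.sqrt(np.mean(res ** 2))))
```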

18.
This paper presents a performance enhancement scheme for the recently developed extreme learning machine (ELM) for classifying power system disturbances using particle swarm optimization (PSO). Learning time is an important factor when designing any computational intelligence algorithm for classification. ELM is a single-hidden-layer neural network with good generalization capability and extremely fast learning capacity; its input weights are chosen randomly, and its output weights are calculated analytically. However, ELM may need a higher number of hidden neurons due to the random determination of the input weights and hidden biases. One advantage of ELM over other methods is that the only parameter the user must properly adjust is the number of hidden nodes, but optimal selection of this parameter can further improve performance. In this paper, a hybrid optimization mechanism is proposed that combines discrete-valued PSO with continuous-valued PSO to optimize both the input feature subset selection and the number of hidden nodes, thereby enhancing the performance of ELM. The experimental results show that the proposed algorithm is faster and more accurate in discriminating power system disturbances.
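A stripped-down sketch of the flavor of PSO tuning described above, optimizing only the hidden-node count against a validation split; the paper's hybrid scheme also runs a discrete-valued PSO over the input feature subset, which is omitted here, and all swarm constants are conventional guesses.

```python
import numpy as np

def elm_val_error(n_hidden, X_tr, t_tr, X_va, t_va):
    """Validation MSE of an ELM with n_hidden nodes (seeded per size so the
    fitness landscape is deterministic for the swarm)."""
    r = np.random.default_rng(n_hidden)
    W = r.standard_normal((X_tr.shape[1], n_hidden))
    b = r.standard_normal(n_hidden)
    beta = np.linalg.pinv(np.tanh(X_tr @ W + b)) @ t_tr
    pred = np.tanh(X_va @ W + b) @ beta
    return float(np.mean((pred - t_va) ** 2))

def pso_hidden_count(X_tr, t_tr, X_va, t_va, n_particles=8, iters=15, lo=2, hi=100):
    """Continuous PSO over the hidden-node count, rounded at evaluation time."""
    rng = np.random.default_rng(0)
    pos = rng.uniform(lo, hi, n_particles)
    vel = np.zeros(n_particles)
    fit = lambda p: elm_val_error(int(round(p)), X_tr, t_tr, X_va, t_va)
    pbest = pos.copy()
    pbest_f = np.array([fit(p) for p in pos])
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        f = np.array([fit(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[np.argmin(pbest_f)]
    return int(round(gbest))

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (400, 2))
t = (np.sin(3 * X[:, 0]) + np.cos(2 * X[:, 1])).reshape(-1, 1)
best_n = pso_hidden_count(X[:300], t[:300], X[300:], t[300:])
print("PSO-selected number of hidden nodes:", best_n)
```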

19.
Evolutionary selection extreme learning machine optimization for regression   (Cited by: 2; self-citations: 1; other citations: 1)
Regression neural network models can approximate unknown datasets with small error. As an important global regression method, the extreme learning machine (ELM) is a typical learning method for single-hidden-layer feedforward networks, owing to its good generalization performance and fast implementation. The randomness of the input weights lets the nonlinear combination achieve arbitrary function approximation. In this paper, we seek an alternative mechanism for the input connections, with an idea derived from evolutionary algorithms. After predefining the number L of hidden nodes, we generate original ELM models, with each hidden node treated as a gene. The hidden nodes are ranked, and those with larger weights are reassigned to the updated ELM, while L/2 trivial hidden nodes are placed in a candidate reservoir. Then L/2 new hidden nodes are generated and combined with nodes drawn from this candidate reservoir to form L hidden nodes, with a second ranking used to choose them. Fitness-proportional selection then picks L/2 hidden nodes and recombines the evolutionary selection ELM. The entire algorithm can be applied to large-scale dataset regression. Verification shows that its regression performance is better than that of the traditional ELM and the Bayesian ELM, at lower cost.
