Similar documents
20 similar documents retrieved (search time: 46 ms)
1.
As a novel learning algorithm for single-hidden-layer feedforward neural networks, extreme learning machines (ELMs) have been a promising tool for regression and classification applications. However, it is not trivial for ELMs to find the proper number of hidden neurons due to the nonoptimal input weights and hidden biases. In this paper, a new model selection method for ELM based on multi-objective optimization is proposed to obtain compact networks with good generalization ability. First, a new leave-one-out (LOO) error bound of ELM is derived, which can be calculated at negligible computational cost once ELM training is finished. Then, hidden nodes are added to the network one by one, and at each step a multi-objective optimization algorithm selects optimal input weights by simultaneously minimizing this LOO bound and the norm of the output weights, in order to avoid over-fitting. Experiments on five UCI regression data sets demonstrate that the proposed algorithm generally obtains better generalization performance with a more compact network than the conventional gradient-based back-propagation method, the original ELM, and the evolutionary ELM.
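The paper's specific LOO bound is not reproduced here, but the general idea that a leave-one-out error of the ELM's linear output layer is available at negligible extra cost can be illustrated with the standard PRESS residuals of that layer (the data, network size, and variable names below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (100, 2))
t = np.sin(X[:, 0]) + X[:, 1] ** 2

# Random hidden layer of an already-trained ELM
W = rng.standard_normal((2, 15))
b = rng.standard_normal(15)
H = np.tanh(X @ W + b)

# Ordinary least-squares output weights and training residuals
beta, *_ = np.linalg.lstsq(H, t, rcond=None)
resid = t - H @ beta

# PRESS leave-one-out residuals: e_i / (1 - h_ii), where h_ii is the
# diagonal of the hat matrix H (H^T H)^+ H^T -- no retraining needed
G = np.linalg.pinv(H.T @ H)
h_diag = np.einsum('ij,jk,ik->i', H, G, H)
loo_mse = np.mean((resid / (1 - h_diag)) ** 2)
```

Because the hat-matrix diagonal lies in [0, 1], the PRESS LOO error is never smaller than the training error, and both come from quantities already computed during training.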

2.
Since the hidden neurons of an extreme learning machine (ELM) carry uncertainty, choosing significant hidden nodes, known as model selection, plays an important role in ELM applications, and how to define and measure this uncertainty is a key issue. From the information geometry point of view, this paper presents a new model selection method for ELM regression based on a Riemannian metric. First, it is proved theoretically that the uncertainty can be characterized by a form of Riemannian metric; a new uncertainty evaluation of ELM is then proposed by averaging the Riemannian metric over all hidden neurons. Finally, hidden nodes are added to the network one by one, and at each step a multi-objective optimization algorithm selects optimal input weights by simultaneously minimizing this uncertainty evaluation and the norm of the output weights, in order to obtain better generalization performance. Experiments on five UCI regression data sets and a cylindrical shell vibration data set demonstrate that the proposed method generally obtains lower generalization error than the original ELM, evolutionary ELM, ELM with model selection, and the multi-dimensional support vector machine. Moreover, the proposed algorithm generally needs fewer hidden neurons and less computational time than the traditional approaches, which is very favorable in engineering applications.

3.
Recently, a novel learning algorithm for single-hidden-layer feedforward neural networks (SLFNs) named the extreme learning machine (ELM) was proposed by Huang et al. The essence of ELM is that the learning parameters of the hidden nodes, including input weights and biases, are randomly assigned and need not be tuned, while the output weights can be analytically determined by a simple generalized inverse operation. The only parameter that needs to be defined is the number of hidden nodes. Compared with other traditional learning algorithms for SLFNs, ELM provides much faster learning speed and better generalization performance with minimal human intervention. This paper first gives a brief review of ELM, describing its principle and algorithm. It then puts emphasis on improved methods and typical variants of ELM, especially incremental ELM, pruning ELM, error-minimized ELM, two-stage ELM, online sequential ELM, evolutionary ELM, voting-based ELM, ordinal ELM, fully complex ELM, and symmetric ELM. Next, the paper summarizes applications of ELM in classification, regression, function approximation, pattern recognition, forecasting, diagnosis, and so on. Finally, the paper discusses several open issues of ELM that may be worth exploring in the future.
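The core procedure this review describes (random hidden-node parameters, output weights from a generalized inverse) can be sketched in a few lines of NumPy; the network size and the toy regression data below are illustrative:

```python
import numpy as np

def elm_train(X, T, n_hidden, rng):
    """Minimal ELM: random hidden layer, analytic output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                     # Moore-Penrose solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Fit a noisy sine curve
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 1))
T = np.sin(X) + 0.05 * rng.standard_normal((200, 1))
W, b, beta = elm_train(X, T, n_hidden=30, rng=rng)
mse = np.mean((elm_predict(X, W, b, beta) - T) ** 2)
```

The single `pinv` solve is the only "training" step, which is the source of ELM's speed advantage over iterative back-propagation.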

4.
Han Min, Liu Xiaoxin. 《控制与决策》 (Control and Decision), 2014, 29(9): 1576-1580

To address variable selection and network structure design in regression problems, a mutual-information-based training algorithm for the extreme learning machine (ELM) is proposed that performs input variable selection and hidden-layer structure optimization simultaneously. The algorithm embeds mutual-information-based input variable selection into the ELM learning process, uses the network's learning performance as the criterion for judging whether an input variable is relevant to the output, and determines the number of hidden-layer nodes incrementally. Simulation results on the Lorenz, Gas Furnace, and 10 benchmark data sets demonstrate the effectiveness of the proposed algorithm, which not only simplifies the network structure but also improves generalization performance.
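The mutual-information criterion used for input variable selection can be illustrated with a simple histogram MI estimate. This is a sketch of the general idea only, not the paper's embedded selection procedure; the data and function names are illustrative:

```python
import numpy as np

def mutual_info(x, y, bins=10):
    """Histogram estimate of mutual information between two 1-D variables."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(2)
n = 2000
x_rel = rng.uniform(-1, 1, n)   # input actually related to the output
x_irr = rng.uniform(-1, 1, n)   # irrelevant input
y = np.sin(3 * x_rel) + 0.1 * rng.standard_normal(n)

mi_rel = mutual_info(x_rel, y)  # high: x_rel drives y
mi_irr = mutual_info(x_irr, y)  # near zero: x_irr is independent of y
```

Ranking candidate inputs by such an MI score, and keeping only those above a threshold, is the usual way an MI criterion feeds a variable-selection loop.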


5.
This paper presents a performance enhancement scheme for the recently developed extreme learning machine (ELM) for classifying power system disturbances using particle swarm optimization (PSO). Learning time is an important factor when designing any computationally intelligent algorithm for classification. ELM is a single-hidden-layer neural network with good generalization capabilities and extremely fast learning capacity: the input weights are chosen randomly and the output weights are calculated analytically. However, ELM may need a higher number of hidden neurons due to the random determination of the input weights and hidden biases. One advantage of ELM over other methods is that the only parameter the user must properly adjust is the number of hidden nodes, but optimal selection of its parameters can further improve its performance. In this paper, a hybrid optimization mechanism is proposed that combines discrete-valued PSO with continuous-valued PSO to optimize the input feature subset selection and the number of hidden nodes, thereby enhancing the performance of ELM. The experimental results show that the proposed algorithm is faster and more accurate in discriminating power system disturbances.

6.
This paper presents a performance enhancement scheme for the recently developed extreme learning machine (ELM) for multi-category sparse data classification problems. ELM is a single-hidden-layer neural network with good generalization capabilities and extremely fast learning capacity, in which the input weights are randomly chosen and the output weights are analytically calculated. The generalization performance of ELM on sparse data classification problems depends critically on three free parameters: the number of hidden neurons, the input weights, and the bias values, which need to be optimally chosen; selecting them for the best performance involves a complex optimization problem. In this paper, we present a new real-coded genetic algorithm approach called 'RCGA-ELM' to select the optimal number of hidden neurons, input weights, and bias values. Two new genetic operators, a 'network-based operator' and a 'weight-based operator', are proposed to find a compact network with higher generalization performance. We also present an alternative, less computationally intensive approach called 'sparse-ELM', which searches for the best parameters of ELM using K-fold validation. A multi-class human cancer classification problem using (sparse) micro-array gene expression data is used to evaluate the performance of the two schemes. Results indicate that the proposed RCGA-ELM and sparse-ELM significantly improve ELM performance for sparse multi-category classification problems.

7.
The extreme learning machine (ELM) is a novel algorithm for single-hidden-layer feedforward neural networks. It overcomes the drawbacks of the traditional error back-propagation method, which requires many iterations and has a large computational cost and search space: one only needs to set a suitable number of hidden-layer nodes and assign random values to the input weights and hidden-layer biases, and training completes in a single pass without iteration. Research shows that the stock market is a highly complex nonlinear system whose modeling draws on artificial intelligence, statistics, and economics. This paper applies the extreme learning machine to stock price prediction and, by comparison with the support vector machine (SVM) and the back-propagation neural network (BP network), analyzes the feasibility and advantages of ELM for this task. The results show that ELM achieves high prediction accuracy and has clear advantages in parameter selection and training speed.

8.
The extreme learning machine (ELM) is widely used for classification and regression owing to its efficient training, but different input weights can strongly affect its learning performance. To further improve that performance, this paper studies the input weights of ELM: exploiting the sparsity of local receptive fields in images, the local-receptive-field idea is applied to the autoencoder-based ELM (ELM-AE), yielding a locally receptive class-constrained extreme learning machine (RF-C2ELM). Classification experiments on the MNIST data set show that, with the same number of hidden nodes, the proposed method achieves higher classification accuracy.

9.
To control the network size of the extreme learning machine (ELM), this paper takes a pruning approach and proposes an ELM classification algorithm based on influence-degree pruning. Using the weight vectors connecting each hidden node to the input and output layers, that node's output, the initial number of hidden nodes, and the number of training samples, the influence degree of each hidden node on the learning of the whole network is defined. Hidden nodes are ranked by this influence degree to judge their importance, redundant nodes are deleted with a pruning step size matched to the network size, and the weight vectors connecting the hidden layer to the input and output layers are then updated. Classification experiments on several UCI machine learning data sets, comparing the proposed algorithm with EM-ELM, PELM, and ELM, show that it achieves high stability and test accuracy, trains quickly, and effectively controls the network size.

10.
In this paper, we introduce a new learning method for composite function wavelet neural networks (CFWNN) that combines the differential evolution (DE) algorithm with the extreme learning machine (ELM), abbreviated CWN-E-ELM. The recently proposed CFWNN trained with ELM (CFWNN-ELM) has several promising features, but it may contain redundant nodes, because the number of hidden nodes is assigned a priori and the input weight matrix and hidden-node parameter vector are randomly generated once and never changed during the learning phase. DE is introduced into CFWNN-ELM to search for the optimal network parameters and to reduce the number of hidden nodes used in the network. Simulations on several artificial function approximation tasks, real-world data regressions, and a chaotic signal prediction problem show the advantages of the proposed CWN-E-ELM: compared with CFWNN-ELM it has a much more compact network size, and compared with several relevant methods it achieves better generalization performance.

11.
TROP-ELM: A double-regularized ELM using LARS and Tikhonov regularization
In this paper, an improvement of the optimally pruned extreme learning machine (OP-ELM), in the form of an L2 regularization penalty applied within the OP-ELM, is proposed. The OP-ELM originally proposes a wrapper methodology around the extreme learning machine (ELM), meant to reduce the sensitivity of the ELM to irrelevant variables and to obtain more parsimonious models through neuron pruning. The proposed modification of the OP-ELM uses a cascade of two regularization penalties: first an L1 penalty to rank the neurons of the hidden layer, followed by an L2 penalty on the regression weights (the regression between the hidden layer and the output layer) for numerical stability and efficient pruning of the neurons. The new methodology is tested against state-of-the-art methods such as support vector machines and Gaussian processes, as well as the original ELM and OP-ELM, on 11 different data sets; it systematically outperforms the OP-ELM (on average 27% better mean square error) and provides more reliable results in terms of the standard deviation of the results, while always remaining less than one order of magnitude slower than the OP-ELM.

12.
A study on effectiveness of extreme learning machine
The extreme learning machine (ELM), proposed by Huang et al., has been shown to be a promising learning algorithm for single-hidden-layer feedforward neural networks (SLFNs). Nevertheless, because of the random choice of input weights and biases, the ELM algorithm sometimes leaves the hidden-layer output matrix H of the SLFN without full column rank, which lowers the effectiveness of ELM. This paper discusses the effectiveness of ELM and proposes an improved algorithm called EELM that makes a proper selection of the input weights and biases before calculating the output weights, ensuring the full column rank of H in theory. This improves, to some extent, the learning performance (testing accuracy, prediction accuracy, learning time) and the robustness of the networks. Experimental results on both benchmark function approximation and real-world problems, including classification and regression applications, show the good performance of EELM.
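The full-column-rank condition on H can be checked directly with NumPy. As a rough illustration of the issue (a resampling loop, not the constructive selection procedure of EELM itself, with illustrative sizes):

```python
import numpy as np

def full_rank_hidden_layer(X, n_hidden, rng, max_tries=10):
    """Resample random input weights until the hidden-layer output
    matrix H has full column rank (a simple stand-in for EELM's
    deliberate weight selection)."""
    for _ in range(max_tries):
        W = rng.standard_normal((X.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        H = np.tanh(X @ W + b)
        if np.linalg.matrix_rank(H) == n_hidden:
            return W, b, H
    raise RuntimeError("could not obtain a full-column-rank H")

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (50, 3))
W, b, H = full_rank_hidden_layer(X, n_hidden=10, rng=rng)
rank = np.linalg.matrix_rank(H)
```

When H is rank-deficient, the pseudoinverse solution for the output weights is no longer the unique least-squares solution, which is the effectiveness loss the abstract refers to.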

13.
为了能够更加高效地检测和诊断模拟电路中的故障元件,提出了自适应狼群算法优化极限学习机的方法。该方法采用自适应遗传算法对特征参数进行选择,从而生成最优特征子集,然后利用最优特征子集构造样本输入极限学习机ELM网络对故障进行分类。针对极限学习机的输入层和隐含层之间的连接权值、隐含层的偏差都将会使其学习速度和分类正确率受到影响的问题,采用本文方法对它们进行优化并选择相应的最优值,提高了极限学习机网络训练的稳定性与故障诊断的成功率。通过2个典型模拟电路的诊断实例,给出了这些方法的具体实现过程,故障诊断率均在99%以上。仿真结果表明使用该方法进行模拟电路故障诊断时具有良好的正确率和稳定性。  相似文献   

14.
Symmetric extreme learning machine
The extreme learning machine (ELM) can be considered a black-box modeling approach that seeks a model representation extracted from the training data. In this paper, a modified ELM algorithm, called symmetric ELM (S-ELM), is proposed that incorporates a priori information about symmetry. S-ELM is realized by transforming the original activation function of the hidden neurons into one that is symmetric with respect to the input variables of the samples. In theory, S-ELM can approximate N arbitrary distinct samples with zero error. Simulation results show that, in applications where prior knowledge of symmetry exists, S-ELM obtains better generalization performance, faster learning speed, and a more compact network architecture.
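One common way to build such a symmetric hidden-node response, assuming odd symmetry of the target function (a sketch of the general construction, not necessarily the exact transformation used in S-ELM), is to antisymmetrize the activation:

```python
import numpy as np

def odd_symmetric_hidden(X, W, b):
    """Symmetrized hidden output h_s(x) = (g(Wx+b) - g(-Wx+b)) / 2.
    By construction h_s(-x) = -h_s(x), so any linear combination of
    these nodes is an odd function of the input."""
    g = np.tanh
    return 0.5 * (g(X @ W + b) - g(-X @ W + b))

rng = np.random.default_rng(4)
W = rng.standard_normal((2, 8))
b = rng.standard_normal(8)
x = rng.standard_normal((5, 2))

# Verify the odd symmetry numerically: h_s(x) + h_s(-x) should vanish
sym_err = np.max(np.abs(odd_symmetric_hidden(x, W, b)
                        + odd_symmetric_hidden(-x, W, b)))
```

Since the symmetry is baked into the hidden layer, the output-weight solve is unchanged and the prior knowledge costs nothing extra at training time.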

15.
In this paper, we develop an online sequential learning algorithm for single-hidden-layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm, referred to as the online sequential extreme learning machine (OS-ELM), can learn data one by one or chunk by chunk (a block of data) with fixed or varying chunk size. The activation functions for additive nodes in OS-ELM can be any bounded nonconstant piecewise continuous functions, and the activation functions for RBF nodes can be any integrable piecewise continuous functions. In OS-ELM, the parameters of the hidden nodes (the input weights and biases of additive nodes, or the centers and impact factors of RBF nodes) are randomly selected, and the output weights are analytically determined based on the sequentially arriving data. The algorithm uses the ideas of Huang's ELM, developed for batch learning, which has been shown to be extremely fast with generalization performance better than other batch training methods. Apart from selecting the number of hidden nodes, no other control parameters have to be manually chosen. A detailed performance comparison of OS-ELM with other popular sequential learning algorithms is carried out on benchmark problems drawn from the regression, classification, and time series prediction areas. The results show that OS-ELM is faster than the other sequential algorithms and produces better generalization performance.

16.
The extreme learning machine (ELM) is widely used in complex industrial problems; in particular, the online sequential extreme learning machine (OS-ELM) performs well in industrial online modeling. However, OS-ELM requires batch samples to be pre-trained to obtain the initial weights, which may reduce the timeliness of the samples. This paper proposes a novel model for online process regression prediction, called the Recurrent Extreme Learning Machine (Recurrent-ELM). In Recurrent-ELM, the nodes between the hidden layers are connected, so the hidden layer receives information both from the current input layer and from the previous hidden layer. Moreover, the weights and biases of the proposed model are generated by analysis rather than at random. Six regression applications are used to verify the designed Recurrent-ELM; compared with the extreme learning machine (ELM), the fast learning network (FLN), the online sequential extreme learning machine (OS-ELM), and an ensemble of online sequential extreme learning machines (EOS-ELM), the experimental results show that Recurrent-ELM has better generalization and stability on several data sets. In addition, to further test its performance, we employ Recurrent-ELM in the combustion modeling of a 330 MW coal-fired boiler, compared with FLN, SVR, and OS-ELM. The results show that Recurrent-ELM has better accuracy and generalization ability, and the model has potential value in practical applications.

17.
Compared with traditional learning methods such as the back-propagation (BP) method, the extreme learning machine provides much faster learning speed and needs less human intervention, and has thus been widely used. In this paper we combine the L1/2 regularization method with the extreme learning machine to prune it. A variable learning coefficient is employed to prevent too large a learning increment. A numerical experiment demonstrates that a network pruned by L1/2 regularization has fewer hidden nodes but provides better performance than both the original network and a network pruned by L2 regularization.

18.

To address the large number of redundant nodes in the incremental extreme learning machine (I-ELM), which reduce its learning efficiency and accuracy, an improved incremental kernel extreme learning algorithm based on the Delta test (DT) and the chaos optimization algorithm (COA) is proposed. The global search ability of COA is used to optimize the hidden-node parameters of I-ELM, and the DT is used to check the model output error and determine the effective number of hidden nodes, thereby reducing network complexity and improving learning efficiency; adding a kernel function strengthens the network's online prediction ability. Simulation results show that the proposed DCI-ELMK algorithm achieves good prediction accuracy and generalization ability with a more compact network structure.


19.
To address the network structure design problem of the extreme learning machine (ELM), a sensitivity-analysis-based ELM pruning algorithm is proposed. Using the outputs of the hidden nodes and the corresponding output-layer weight vectors, the sensitivity of the learning residual with respect to each hidden node and a network-size fitness are defined; each hidden node's importance is judged by its sensitivity, the number of hidden nodes is determined by the network-size fitness, and nodes of low importance are deleted. Simulation results show that the proposed algorithm can fairly accurately determine a network size matched to the training samples, solving the ELM network structure design problem.
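A minimal sketch of this kind of sensitivity-based pruning follows. The score used here, each node's output-weight magnitude times the norm of its output, is an illustrative stand-in for the paper's sensitivity definition, and the sizes and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.uniform(-1, 1, (300, 1))
t = np.sin(np.pi * X[:, 0])

# Deliberately oversized ELM: 60 random tanh hidden nodes
W = rng.standard_normal((1, 60)); b = rng.standard_normal(60)
H = np.tanh(X @ W + b)
beta, *_ = np.linalg.lstsq(H, t, rcond=None)

# Importance score of node j: |beta_j| times the norm of its output column,
# i.e. the size of that node's contribution to the network output
score = np.abs(beta) * np.linalg.norm(H, axis=0)

# Keep the 15 most influential nodes and re-solve the output weights
keep = np.argsort(score)[-15:]
H_p = H[:, keep]
beta_p, *_ = np.linalg.lstsq(H_p, t, rcond=None)
mse_pruned = np.mean((H_p @ beta_p - t) ** 2)
```

Re-solving the output weights after deletion is the step that lets a much smaller network retain the accuracy of the oversized one.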

20.
To address the problems of the resource allocating network (RAN) algorithm, whose hidden nodes are strongly affected by the initial training data and whose convergence is slow, a new RAN learning algorithm is proposed. A mean-based algorithm determines the initial hidden nodes, and an RMS window is added to the original novelty criterion to better decide whether a hidden node should be added. Meanwhile, the least mean squares (LMS) algorithm is combined with the extended Kalman filter (EKF) to adjust the network parameters and speed up learning. Because word-vector-space text models struggle with the high dimensionality and semantic complexity of text, semantic feature selection is used to extract semantic features from, and reduce the dimensionality of, the text input space. Experimental results show that the new RAN learning algorithm learns quickly, has a compact network structure, and classifies well; moreover, performing dimensionality reduction alongside semantic feature selection greatly reduces text classification time and effectively improves classification accuracy.
