Similar Documents (20 results)
1.
As a novel learning algorithm for single-hidden-layer feedforward neural networks, extreme learning machines (ELMs) have become a promising tool for regression and classification applications. However, it is not trivial for ELMs to find the proper number of hidden neurons, because the input weights and hidden biases are not optimized. In this paper, a new model selection method for ELM based on multi-objective optimization is proposed to obtain compact networks with good generalization ability. First, a new leave-one-out (LOO) error bound of ELM is derived, which can be calculated at negligible computational cost once ELM training is finished. Furthermore, hidden nodes are added to the network one by one, and at each step a multi-objective optimization algorithm selects optimal input weights by simultaneously minimizing this LOO bound and the norm of the output weights in order to avoid over-fitting. Experiments on five UCI regression data sets demonstrate that the proposed algorithm generally obtains better generalization performance with a more compact network than the conventional gradient-based back-propagation method, the original ELM, and the evolutionary ELM.
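
The paper's exact LOO bound is not reproduced in the abstract, but for any linear-in-parameters model such as ELM a PRESS-style leave-one-out residual can be computed in closed form once training is done, which conveys the "negligible cost" idea. A minimal numpy sketch; the sigmoid hidden layer, node count, and toy data are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def elm_hidden(X, W, b):
    """Sigmoid hidden-layer output matrix H (n_samples x n_hidden)."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                      # toy inputs
t = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)   # toy targets

n_hidden = 20
W = rng.normal(size=(3, n_hidden))   # random input weights (ELM)
b = rng.normal(size=n_hidden)        # random hidden biases

H = elm_hidden(X, W, b)
beta = np.linalg.pinv(H) @ t         # analytic output weights
resid = t - H @ beta

# PRESS leave-one-out residuals: e_i / (1 - h_ii), where h_ii are the
# diagonal entries of the hat matrix H (H^T H)^+ H^T -- no retraining needed.
hat_diag = np.einsum('ij,ji->i', H, np.linalg.pinv(H))
loo = resid / (1.0 - hat_diag)
print('LOO MSE estimate:', np.mean(loo ** 2))
```

Because the hat-matrix diagonal is built from quantities training already produces, the LOO estimate never refits the model.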

2.
Evolutionary selection extreme learning machine optimization for regression
Neural network models can approximate unknown functions from data with low error. As an important method for global regression, the extreme learning machine (ELM) is a typical learning method for single-hidden-layer feedforward networks, thanks to its good generalization performance and fast implementation. The randomness of the input weights lets a nonlinear combination of hidden nodes achieve arbitrary function approximation. In this paper, we seek an alternative mechanism for the input connections, derived from evolutionary algorithms. After predefining the number L of hidden nodes, we generate original ELM models and treat each hidden node as a gene. The hidden nodes are ranked, and those with larger output weights are retained in the updated ELM, while the L/2 less significant nodes are placed in a candidate reservoir. Then, L/2 newly generated hidden nodes are combined with nodes drawn from this reservoir, a second ranking is applied, and fitness-proportional selection picks L/2 hidden nodes to recombine into the evolutionary selection ELM. The entire algorithm can be applied to large-scale dataset regression. Verification shows that its regression performance is better than that of the traditional ELM and the Bayesian ELM at lower cost.
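
The keep-half/regenerate-half cycle can be illustrated directly. A hedged sketch of one possible reading, using |β| as the ranking criterion and plain random regeneration in place of the paper's reservoir recombination and fitness-proportional selection; the toy data and generation count are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
t = X[:, 0] ** 2 - X[:, 1]            # toy regression target

def train(W, b):
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ t
    return beta, np.mean((H @ beta - t) ** 2)

L = 40
W = rng.normal(size=(2, L)); b = rng.normal(size=L)
for gen in range(10):                 # a few "generations"
    beta, mse = train(W, b)
    keep = np.argsort(np.abs(beta))[-L // 2:]   # nodes with large |beta| survive
    W_new = rng.normal(size=(2, L // 2))        # fresh candidate nodes
    b_new = rng.normal(size=L // 2)
    W = np.hstack([W[:, keep], W_new])
    b = np.concatenate([b[keep], b_new])
print('final training MSE:', train(W, b)[1])
```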

3.
The number of hidden nodes is a key parameter affecting the generalization performance of the extreme learning machine (ELM). Traditional algorithms for determining it suffer from complex optimization procedures and are prone to over-fitting or local optima, so a structural risk minimization ELM (SRM-ELM) algorithm is proposed. By analyzing the relationship between the VC dimension and the number of hidden nodes, the VC confidence function is approximated so that it becomes concave, and it is combined with the empirical risk to reconstruct an approximate SRM objective. On this basis, the position value of a particle in particle swarm optimization is used directly as the number of hidden nodes of the ELM, and the swarm minimizes the structural risk function to obtain the optimal node count. Simulations on six UCI data sets and a capsule-defect data set show that the algorithm finds the optimal number of hidden nodes for the ELM and yields better generalization ability.
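
A hedged sketch of the overall loop: a one-dimensional PSO searches over the node count, scoring each candidate by empirical risk plus a complexity penalty. The penalty sqrt(L·log n / n) below is an illustrative stand-in, not the paper's concave VC-confidence approximation, and the data and PSO constants are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(150, 4))
t = X.sum(axis=1) + 0.1 * rng.normal(size=150)
n = len(t)
W_all = rng.normal(size=(4, 80)); b_all = rng.normal(size=80)

def srm(L):
    """Empirical risk of an L-node ELM plus a stand-in complexity term."""
    L = int(np.clip(round(L), 1, 80))
    H = np.tanh(X @ W_all[:, :L] + b_all[:L])
    emp = np.mean((H @ (np.linalg.pinv(H) @ t) - t) ** 2)
    return emp + np.sqrt(L * np.log(n) / n)   # illustrative penalty only

# a tiny 1-D PSO over the node count
pos = rng.uniform(1, 80, size=10); vel = np.zeros(10)
pbest, pval = pos.copy(), np.array([srm(p) for p in pos])
g = pbest[pval.argmin()]
for _ in range(20):
    r1, r2 = rng.random(10), rng.random(10)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = np.clip(pos + vel, 1, 80)
    val = np.array([srm(p) for p in pos])
    improved = val < pval
    pbest[improved], pval[improved] = pos[improved], val[improved]
    g = pbest[pval.argmin()]
print('selected hidden nodes:', int(round(g)))
```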

4.
This paper presents a performance enhancement scheme for the recently developed extreme learning machine (ELM) for classifying power system disturbances using particle swarm optimization (PSO). Learning time is an important factor when designing any computationally intelligent classification algorithm. ELM is a single-hidden-layer neural network with good generalization capability and extremely fast learning capacity. In ELM, the input weights are chosen randomly and the output weights are calculated analytically. However, ELM may need a higher number of hidden neurons because of the random determination of the input weights and hidden biases. One advantage of ELM over other methods is that the only parameter the user must adjust is the number of hidden nodes; optimal selection of this parameter, however, can further improve its performance. In this paper, a hybrid optimization mechanism is proposed that combines discrete-valued PSO with continuous-valued PSO to optimize the input feature subset and the number of hidden nodes, enhancing the performance of ELM. Experimental results show that the proposed algorithm is faster and more accurate at discriminating power system disturbances.
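
The hybrid encoding can be pictured as one particle carrying a binary feature mask (updated with the sigmoid rule of discrete PSO) alongside a real-valued node count (updated with the ordinary continuous rule). A minimal single-particle sketch under assumed constants; a real run would add a swarm and a fitness function:

```python
import numpy as np

rng = np.random.default_rng(3)
n_feat = 8

# one particle: a binary feature mask (discrete part) plus a real node count
mask = rng.integers(0, 2, size=n_feat)
nodes = rng.uniform(5.0, 50.0)
v_mask = np.zeros(n_feat)
v_nodes = 0.0
best_mask, best_nodes = np.ones(n_feat, dtype=int), 30.0  # pretend personal best

for _ in range(5):
    # discrete-valued PSO: sigmoid of the velocity gives bit-flip probabilities
    v_mask = 0.7 * v_mask + 1.5 * rng.random(n_feat) * (best_mask - mask)
    mask = (rng.random(n_feat) < 1.0 / (1.0 + np.exp(-v_mask))).astype(int)
    # continuous-valued PSO: ordinary position update for the node count
    v_nodes = 0.7 * v_nodes + 1.5 * rng.random() * (best_nodes - nodes)
    nodes = float(np.clip(nodes + v_nodes, 5.0, 50.0))

print('feature mask:', mask, '| hidden nodes:', round(nodes))
```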

5.
Redundant hidden neurons in the extreme learning machine (ELM) weaken its generalization ability. To address this, an ELM model selection algorithm based on a hidden feature space is proposed. First, to find a suitable ELM hidden layer, a regularization term is added to ELM: the Frobenius norm of the mapping matrix from the existing hidden space to a low-dimensional hidden feature space. Second, to solve the resulting non-convex problem, an alternating optimization strategy is adopted and the hidden space is learned through convex quadratic optimization; the optimal mapping function and ELM model are finally obtained adaptively. The algorithm is tested on UCI benchmark data sets and on engineering load-identification data. The results show that, compared with the classical ELM, it effectively improves prediction accuracy and numerical stability, and that compared with existing model selection algorithms it achieves comparable accuracy while greatly reducing running time.
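
The abstract does not spell out the convex QP, so the sketch below only illustrates the alternating pattern: with the projection P fixed, solve a regularized least-squares problem for the output weights; with the weights fixed, update P against the Frobenius penalty. The gradient step for P is a stand-in for the paper's QP, and all sizes and data are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(120, 3))
t = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]

m, d, lam = 30, 5, 1e-2                 # hidden size, latent size, Frobenius weight
W = rng.normal(size=(3, m)); b = rng.normal(size=m)
H = np.tanh(X @ W + b)                  # fixed random ELM hidden layer
P = 0.1 * rng.normal(size=(m, d))       # map: hidden space -> low-dim feature space

for _ in range(50):                     # alternating optimization
    Z = H @ P
    beta = np.linalg.solve(Z.T @ Z + 1e-2 * np.eye(d), Z.T @ t)  # beta-step
    # P-step: one gradient step on ||H P beta - t||^2 + lam * ||P||_F^2
    # (the paper solves this via a convex QP; a gradient step is a stand-in)
    r = H @ P @ beta - t
    P -= 1e-3 * (H.T @ np.outer(r, beta) + lam * P)

print('train MSE:', np.mean((H @ P @ beta - t) ** 2))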

6.
For big-data classification applications, a fast hidden-layer optimization method is designed to solve a prominent problem of the distributed extreme learning machine (ELM): the training procedure must be run independently many times to optimize the number of hidden nodes or the generalization performance of the model. Without increasing the time complexity of the algorithm, the new method trains multiple ELM hidden-layer networks at the same time, jointly optimizing generalization ability and the number of hidden nodes, and avoids a large amount of repeated computation through distributed computing. In addition, this approach makes it possible to study more precisely and more intuitively how varying the number of hidden nodes affects the model. Experiments on several types of standard test functions show that, compared with the distributed ELM, the new algorithm greatly improves solution accuracy, generalization ability, and stability.
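
One way many node counts can be trained "at the same time" without repeated work is to build the hidden matrix once for the largest candidate network and let every smaller candidate reuse a prefix of its columns. This serial numpy sketch only shows that sharing idea, not the paper's distributed implementation; data and sizes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 6))
t = X[:, 0] - 2 * X[:, 3] + 0.1 * rng.normal(size=300)
train, test = slice(0, 200), slice(200, 300)

L_max = 100
W = rng.normal(size=(6, L_max)); b = rng.normal(size=L_max)
H = np.tanh(X @ W + b)              # computed once, for the largest network

# every candidate size L reuses the first L columns of H -- no regeneration,
# so many "parallel" ELMs are evaluated from one pass over the data
results = {}
for L in range(10, L_max + 1, 10):
    beta = np.linalg.pinv(H[train, :L]) @ t[train]
    results[L] = np.mean((H[test, :L] @ beta - t[test]) ** 2)

best = min(results, key=results.get)
print('best node count:', best, '| test MSE:', results[best])
```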

7.
Symmetric extreme learning machine
Extreme learning machine (ELM) can be considered as a black-box modeling approach that seeks a model representation extracted from the training data. In this paper, a modified ELM algorithm, called symmetric ELM (S-ELM), is proposed by incorporating a priori information of symmetry. S-ELM is realized by transforming the original activation function of hidden neurons into a symmetric one with respect to the input variables of the samples. In theory, S-ELM can approximate N arbitrary distinct samples with zero error. Simulation results show that, in applications where prior knowledge of symmetry exists, S-ELM can obtain better generalization performance, faster learning speed, and a more compact network architecture.
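
One common way to build such a symmetric activation is to antisymmetrize it: h(x) = (g(w·x + b) − g(−w·x + b))/2, which makes every hidden output an odd function of the input. Whether this matches the paper's exact transform is an assumption; the sketch below just demonstrates the construction on an odd toy target:

```python
import numpy as np

def g(a):
    return 1.0 / (1.0 + np.exp(-a))   # ordinary sigmoid

def h_odd(X, W, b):
    """Antisymmetrized hidden output: h(-x) = -h(x) by construction."""
    return 0.5 * (g(X @ W + b) - g(-X @ W + b))

rng = np.random.default_rng(6)
X = rng.uniform(-2, 2, size=(200, 1))
t = np.sin(X[:, 0])                   # an odd target function

W = rng.normal(size=(1, 25)); b = rng.normal(size=25)
H = h_odd(X, W, b)
beta = np.linalg.pinv(H) @ t
print('train MSE:', np.mean((H @ beta - t) ** 2))
print('symmetry holds:', np.allclose(h_odd(-X, W, b), -H))
```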

8.
Recently, a novel learning algorithm for single-hidden-layer feedforward neural networks (SLFNs) named the extreme learning machine (ELM) was proposed by Huang et al. The essence of ELM is that the learning parameters of the hidden nodes, including input weights and biases, are randomly assigned and need not be tuned, while the output weights can be determined analytically by a simple generalized-inverse operation. The only parameter that needs to be defined is the number of hidden nodes. Compared with other traditional learning algorithms for SLFNs, ELM provides much faster learning, better generalization performance, and requires the least human intervention. This paper first gives a brief review of ELM, describing its principle and algorithm. It then puts emphasis on improved methods and typical variants of ELM, especially incremental ELM, pruning ELM, error-minimized ELM, two-stage ELM, online sequential ELM, evolutionary ELM, voting-based ELM, ordinal ELM, fully complex ELM, and symmetric ELM. Next, the paper summarizes applications of ELM in classification, regression, function approximation, pattern recognition, forecasting, diagnosis, and so on. Finally, it discusses several open issues of ELM that may be worth exploring in the future.

9.
In this paper, an extreme learning machine (ELM) for regression with the ε-insensitive error loss function, formulated in the 2-norm as an unconstrained optimization problem in primal variables, is proposed. Since the objective function of this unconstrained problem is not twice differentiable, the popular generalized Hessian matrix and smoothing approaches are considered, leading to optimization problems whose solutions are determined using a fast Newton–Armijo algorithm. The main advantage of the algorithm is that at each iteration only a system of linear equations is solved. Numerical experiments on a number of interesting synthetic and real-world datasets compare the proposed method with ELM using additive and radial basis function hidden nodes, and with support vector regression (SVR) using a Gaussian kernel. Similar or better generalization performance on the test data, in computational time comparable to ELM and SVR, clearly illustrates its efficiency and applicability.
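
The squared (2-norm) ε-insensitive loss max(|r| − ε, 0)² is once but not twice differentiable, which is why the paper needs the generalized Hessian or smoothing. The sketch below shows the loss, its gradient, and an Armijo backtracking line search; plain gradient descent stands in for the paper's Newton–Armijo step, and ε, λ, and the data are assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(150, 2))
t = X @ np.array([1.5, -2.0]) + 0.05 * rng.normal(size=150)

W = rng.normal(size=(2, 30)); b = rng.normal(size=30)
H = np.tanh(X @ W + b)
eps, lam = 0.1, 1e-2

def loss_grad(beta):
    r = H @ beta - t
    s = np.sign(r) * np.maximum(np.abs(r) - eps, 0.0)  # eps-insensitive residual
    f = np.sum(s ** 2) + lam * beta @ beta             # squared loss + regularizer
    grad = 2 * H.T @ s + 2 * lam * beta
    return f, grad

beta = np.zeros(30)
for _ in range(200):
    f, grad = loss_grad(beta)
    step, d = 1.0, -grad
    # Armijo backtracking: halve the step until sufficient decrease
    while loss_grad(beta + step * d)[0] > f + 1e-4 * step * (grad @ d):
        step *= 0.5
        if step < 1e-10:
            break
    beta = beta + step * d
print('final loss:', loss_grad(beta)[0])
```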

10.
韩敏, 刘晓欣. 《控制与决策》, 2014, 29(9): 1576-1580

To address the problems of variable selection and network structure design in regression, a mutual-information-based training algorithm for the extreme learning machine (ELM) is proposed that selects the input variables and optimizes the hidden-layer structure at the same time. The algorithm embeds mutual-information input-variable selection into the learning process of the ELM network, uses the network's learning performance as the criterion for whether an input variable is relevant to the output, and determines the number of hidden nodes incrementally. Simulation results on the Lorenz and Gas Furnace data and on 10 benchmark data sets demonstrate the effectiveness of the proposed algorithm, which not only simplifies the network structure but also improves generalization performance.
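
The paper embeds variable selection inside training; the hedged sketch below only shows the two ingredients side by side: rank candidate inputs by mutual information (using scikit-learn's mutual_info_regression), then grow the hidden layer one node at a time while a held-out error keeps improving. Data, split, and the "keep top 2" choice are assumptions:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(8)
X = rng.normal(size=(400, 6))
t = np.sin(X[:, 0]) + X[:, 2] ** 2 + 0.1 * rng.normal(size=400)  # inputs 0, 2 matter

mi = mutual_info_regression(X, t, random_state=0)
keep = np.argsort(mi)[::-1][:2]       # keep the top-ranked inputs
Xs = X[:, keep]
tr, va = slice(0, 300), slice(300, 400)

best_err, H_cols = np.inf, []
for L in range(1, 60):                # grow the hidden layer one node at a time
    w = rng.normal(size=Xs.shape[1]); b0 = rng.normal()
    H_cols.append(np.tanh(Xs @ w + b0))
    H = np.column_stack(H_cols)
    beta = np.linalg.pinv(H[tr]) @ t[tr]
    err = np.mean((H[va] @ beta - t[va]) ** 2)
    if err < best_err:
        best_err, best_L = err, L
print('selected inputs:', keep, '| best node count:', best_L, '| val MSE:', best_err)
```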


11.
Extreme learning machines (ELMs) are widely used in complex industrial problems; in particular, the online-sequential extreme learning machine (OS-ELM) performs well in industrial online modeling. However, OS-ELM requires a batch of samples to be pre-trained to obtain initial weights, which may reduce the timeliness of the samples. This paper proposes a novel model for online process regression prediction, called the Recurrent Extreme Learning Machine (Recurrent-ELM). In Recurrent-ELM, the nodes between hidden layers are connected, so the hidden layer receives information both from the current input layer and from the previous hidden layer. Moreover, the weights and biases of the proposed model are generated analytically rather than randomly. Six regression applications are used to verify the designed Recurrent-ELM; compared with the extreme learning machine (ELM), the fast learning network (FLN), the online sequential extreme learning machine (OS-ELM), and an ensemble of online sequential extreme learning machines (EOS-ELM), the experimental results show that Recurrent-ELM has better generalization and stability on several data sets. In addition, to further test its performance, Recurrent-ELM is employed in combustion modeling of a 330 MW coal-fired boiler and compared with FLN, SVR, and OS-ELM. The results show that Recurrent-ELM has better accuracy and generalization ability, and the model has potential value in practical applications.
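
The recurrent hidden layer can be sketched as h_t = tanh(W·x_t + V·h_{t−1} + b), with the output weights still solved analytically as in ELM. The paper generates weights analytically; the sketch keeps them random for brevity, and the sequence data, sizes, and the 0.1 scaling of the recurrent matrix are assumptions:

```python
import numpy as np

rng = np.random.default_rng(9)
T = 300
x = np.sin(np.linspace(0, 30, T)).reshape(-1, 1)   # toy sequential input
t = np.roll(x[:, 0], -1)                           # predict the next value

m = 20
W = rng.normal(size=(1, m))        # input -> hidden
V = 0.1 * rng.normal(size=(m, m))  # hidden -> hidden (the recurrent connections)
b = rng.normal(size=m)

H = np.zeros((T, m))
h = np.zeros(m)
for i in range(T):                 # hidden state sees input AND previous hidden layer
    h = np.tanh(x[i] @ W + V @ h + b)
    H[i] = h

beta = np.linalg.pinv(H[:-1]) @ t[:-1]   # output weights still solved analytically
print('one-step MSE:', np.mean((H[:-1] @ beta - t[:-1]) ** 2))
```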

12.
To overcome a disadvantage of traditional algorithms for single-hidden-layer feedforward neural networks (SLFNs), an improved algorithm called the extreme learning machine (ELM) was proposed by Huang et al. However, ELM is sensitive to the number of neurons in the hidden layer, and selecting it is a difficult problem. In this paper, a self-adaptive mechanism is introduced into ELM, yielding a new variant called the self-adaptive extreme learning machine (SaELM). SaELM is a self-adaptive learning algorithm that always selects the best number of hidden neurons to form the neural network, with no parameters to adjust during training. To evaluate SaELM, it is used to solve the Italian wine and iris classification problems. Comparisons between SaELM and traditional back-propagation, basic ELM, and general regression neural networks show that SaELM has faster learning speed and better generalization performance on these classification problems.
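The abstract does not detail the adaptation rule, so the hedged sketch below uses a plain validation sweep over candidate node counts as a stand-in for the "self-adaptive" selection, on the iris data the paper also uses (loaded via scikit-learn); the split and candidate grid are assumptions:

```python
import numpy as np
from sklearn.datasets import load_iris

rng = np.random.default_rng(10)
X, y = load_iris(return_X_y=True)
X = (X - X.mean(0)) / X.std(0)
T = np.eye(3)[y]                          # one-hot targets for classification
idx = rng.permutation(len(y))
tr, va = idx[:100], idx[100:]

best = (0.0, None)
for L in range(5, 105, 5):                # stand-in search over node counts
    W = rng.normal(size=(4, L)); b = rng.normal(size=L)
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H[tr]) @ T[tr]
    acc = np.mean(np.argmax(H[va] @ beta, axis=1) == y[va])
    if acc > best[0]:
        best = (acc, L)
print('chosen hidden nodes:', best[1], '| validation accuracy:', best[0])
```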

13.
This paper presents a performance enhancement scheme for the recently developed extreme learning machine (ELM) for multi-category sparse data classification problems. ELM is a single-hidden-layer neural network with good generalization capability and extremely fast learning capacity. In ELM, the input weights are chosen randomly and the output weights are calculated analytically. The generalization performance of ELM for sparse data classification depends critically on three free parameters: the number of hidden neurons, the input weights, and the bias values, all of which need to be chosen optimally. Selecting these parameters for the best performance of ELM is a complex optimization problem. In this paper, we present a new real-coded genetic algorithm approach called 'RCGA-ELM' to select the optimal number of hidden neurons, input weights, and bias values for better performance. Two new genetic operators, a 'network-based operator' and a 'weight-based operator', are proposed to find a compact network with higher generalization performance. We also present an alternate, less computationally intensive approach called 'sparse-ELM', which searches for the best parameters of ELM using K-fold validation. A multi-class human cancer classification problem using (sparse) micro-array gene expression data is used to evaluate the two schemes. Results indicate that the proposed RCGA-ELM and sparse-ELM significantly improve ELM performance for sparse multi-category classification problems.
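
A real-coded GA over ELM parameters can be sketched compactly: each genome holds the input weights and biases, fitness is validation accuracy, and generic arithmetic crossover plus Gaussian mutation stand in for the paper's network-based and weight-based operators. The toy problem, population size, and GA constants are assumptions:

```python
import numpy as np

rng = np.random.default_rng(11)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] ** 2 > 0.5).astype(int)   # toy 2-class problem
T = np.eye(2)[y]
tr, va = slice(0, 150), slice(150, 200)
L = 15

def fitness(genome):
    W = genome[:5 * L].reshape(5, L); b = genome[5 * L:]
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H[tr]) @ T[tr]
    return np.mean(np.argmax(H[va] @ beta, 1) == y[va])

pop = rng.normal(size=(20, 6 * L))               # real-coded genomes: W and b
for gen in range(15):
    fit = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(fit)[::-1][:10]]    # truncation selection
    a = rng.random((10, 1))
    children = a * parents + (1 - a) * parents[::-1]   # arithmetic crossover
    children += 0.1 * rng.normal(size=children.shape)  # Gaussian mutation
    pop = np.vstack([parents, children])
print('best validation accuracy:', max(fitness(g) for g in pop))
```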

14.
Owing to their significant efficiency and simple implementation, extreme learning machine (ELM) algorithms have recently received much attention in regression and classification applications. Much effort has been devoted to enhancing the performance of ELM from both the methodology (ELM training strategies) and the structure (incremental or pruned ELMs) perspectives. In this paper, a local coupled extreme learning machine (LC-ELM) algorithm is presented. By assigning an address in the input space to each hidden node, LC-ELM introduces a decoupler framework to ELM that reduces the complexity of the weight search space. The activation degree of a hidden node is measured by the membership degree of the similarity between its address and the given input. Experimental results confirm that the proposed approach works effectively and generally outperforms the original ELM in both regression and classification applications.
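
The address mechanism can be sketched by weighting each node's ordinary activation with a membership value computed from the distance between the node's address and the input. A Gaussian membership function is an assumption here (the paper's fuzzy membership may differ), as are the data and the width σ:

```python
import numpy as np

rng = np.random.default_rng(12)
X = rng.uniform(-1, 1, size=(250, 2))
t = np.cos(3 * X[:, 0]) * X[:, 1]

L, sigma = 30, 0.5
W = rng.normal(size=(2, L)); b = rng.normal(size=L)
C = rng.uniform(-1, 1, size=(L, 2))   # an "address" for each hidden node

def hidden(Xa):
    act = np.tanh(Xa @ W + b)         # ordinary ELM activation
    d2 = ((Xa[:, None, :] - C[None]) ** 2).sum(-1)
    mu = np.exp(-d2 / sigma ** 2)     # membership of input to each node's address
    return act * mu                   # locally coupled: distant nodes barely fire

H = hidden(X)
beta = np.linalg.pinv(H) @ t
print('train MSE:', np.mean((H @ beta - t) ** 2))
```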

15.
Compared with radial basis function (RBF) neural networks, the extreme learning machine (ELM) trains faster and generalizes better, and the affinity propagation (AP) clustering algorithm can determine the number of clusters automatically. This paper therefore proposes a multi-label learning model, ML-AP-RBF-RELM, that combines AP clustering, the multi-label RBF network (ML-RBF), and the regularized ELM (RELM). First, the input layer uses ML-RBF for the feature mapping, the AP clustering algorithm automatically determines the number of clusters for each label class, and the number of hidden nodes is computed from these counts. Then, using the per-label cluster counts, K-means clustering determines the centers of the RBF functions of the hidden nodes. Finally, RELM quickly solves for the connection weights from the hidden layer to the output layer. Experiments show that ML-AP-RBF-RELM performs well.
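
A single-label simplification of the pipeline conveys the idea: AP (scikit-learn's AffinityPropagation) finds the cluster count and exemplars, the exemplars become RBF centers of the hidden layer, and a ridge (regularized ELM) solve gives the output weights. The blob data, σ, and λ are assumptions, and the multi-label ML-RBF mapping is omitted:

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(13)
X = np.vstack([rng.normal(m, 0.3, size=(60, 2)) for m in (-1.5, 0.0, 1.5)])
y = np.repeat([0, 1, 2], 60)
T = np.eye(3)[y]

ap = AffinityPropagation(random_state=0).fit(X)  # cluster count found automatically
centers = ap.cluster_centers_                    # become the RBF centers
sigma = 0.5

d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
H = np.exp(-d2 / (2 * sigma ** 2))               # RBF hidden layer

lam = 1e-2                                       # regularized ELM (ridge) solution
beta = np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ T)
acc = np.mean(np.argmax(H @ beta, 1) == y)
print('clusters found:', len(centers), '| train accuracy:', acc)
```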

16.

The incremental extreme learning machine (I-ELM) contains many redundant hidden nodes that reduce learning efficiency and accuracy. To address this, an improved incremental kernel extreme learning algorithm based on the Delta test (DT) and a chaos optimization algorithm (COA) is proposed. The global search ability of COA is used to optimize the hidden-node parameters of I-ELM, and the DT is used to check the model's output error and determine the effective number of hidden nodes, thereby reducing network complexity and improving learning efficiency; adding a kernel function strengthens the network's online prediction ability. Simulation results show that the proposed DCI-ELMK algorithm achieves good prediction accuracy and generalization ability with a more compact network structure.
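
A heavily hedged sketch of the growth loop only: each new node is the best of several random candidates (a stand-in for the COA search), and growth stops when a held-out error stops improving (a stand-in for the Delta-test criterion); the kernel variant is omitted and all constants are assumptions:

```python
import numpy as np

rng = np.random.default_rng(14)
X = rng.uniform(-1, 1, size=(300, 3))
t = X[:, 0] * X[:, 1] + np.sin(2 * X[:, 2])
tr, va = slice(0, 200), slice(200, 300)

H_cols, best_err, patience = [], np.inf, 0
while patience < 5 and len(H_cols) < 100:
    # candidate search stands in for the paper's chaos-optimization step
    cands = [(rng.normal(size=3), rng.normal()) for _ in range(10)]
    errs = []
    for w, b0 in cands:
        H = np.column_stack(H_cols + [np.tanh(X @ w + b0)])
        beta = np.linalg.pinv(H[tr]) @ t[tr]
        errs.append(np.mean((H[va] @ beta - t[va]) ** 2))
    k = int(np.argmin(errs))
    H_cols.append(np.tanh(X @ cands[k][0] + cands[k][1]))
    # the validation check stands in for the paper's Delta-test criterion
    patience = 0 if errs[k] < best_err else patience + 1
    best_err = min(best_err, errs[k])
print('hidden nodes kept:', len(H_cols), '| val MSE:', best_err)
```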


17.
王粲, 夏元清, 邹伟东. 《计算机应用研究》, 2021, 38(6): 1724-1727, 1764
To address the system instability caused by hidden-node uncertainty in the extreme learning machine (ELM), as well as the heavy computational burden on large data sets, a regularized extreme learning machine based on an adaptive momentum optimization algorithm (adaptive and momentum method, AdaMom) is proposed. The main idea is to construct a continuously differentiable objective function, compute an adaptive learning rate during gradient descent, take the exponentially weighted average of the product of the adaptive learning rate and the gradient, and obtain by iteration the hidden-layer output weight matrix that minimizes the loss function. Experimental results show that, trained on the same benchmark data sets, the AdaMom-ELM algorithm has very good generalization performance and robustness and improves computational efficiency.
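
The described update (adaptive learning rate, exponentially weighted averages, iterating on the output weights) closely resembles an Adam-style optimizer applied to the regularized ELM objective; the sketch below uses the standard Adam rule as a stand-in for the paper's exact AdaMom formula, with assumed data and hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(15)
X = rng.normal(size=(500, 4))
t = X @ np.array([1.0, -1.0, 0.5, 2.0]) + 0.1 * rng.normal(size=500)

W = rng.normal(size=(4, 40)); bh = rng.normal(size=40)
H = np.tanh(X @ W + bh)
lam = 1e-3

beta = np.zeros(40)
m = np.zeros(40); v = np.zeros(40)    # first/second moment estimates
b1, b2, lr, eps = 0.9, 0.999, 0.01, 1e-8
for k in range(1, 501):
    g = 2 * H.T @ (H @ beta - t) / len(t) + 2 * lam * beta  # grad of ridge loss
    m = b1 * m + (1 - b1) * g          # exponentially weighted average (momentum)
    v = b2 * v + (1 - b2) * g ** 2     # adaptive per-coordinate scaling
    beta -= lr * (m / (1 - b1 ** k)) / (np.sqrt(v / (1 - b2 ** k)) + eps)
print('train MSE:', np.mean((H @ beta - t) ** 2))
```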

18.
Zou Weidong, Yao Fenxi, Zhang Baihai, Guan Zixiao. Neural Computing & Applications, 2018, 30(11): 3363-3370

Liao et al. (Neurocomputing 128:81–87, 2014) proposed a meta-learning approach to the extreme learning machine (Meta-ELM), which can obtain good generalization performance by training multiple ELMs. However, one of its open problems is overfitting when minimizing the training error. In this paper, we propose an improved meta-learning model of ELM (improved Meta-ELM) to handle this problem. The improved Meta-ELM architecture is composed of base ELMs, which are error-feedback incremental extreme learning machines (EFI-ELMs), and a top ELM. The improved Meta-ELM works in two stages. First, each base EFI-ELM is trained on a subset of the training data. Then, the top ELM learns with the base ELMs as hidden nodes. Simulation results on some artificial and benchmark datasets show that the proposed improved Meta-ELM model is more feasible and effective than Meta-ELM.
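
The two-stage structure can be sketched directly: base ELMs train on disjoint subsets, then their outputs serve as the hidden layer of the top ELM. The bases below are plain ELMs; the error-feedback mechanism of EFI-ELM is omitted, and data, subset count, and sizes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(16)
X = rng.normal(size=(600, 3))
t = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]

def base_elm(Xs, ts, L=15):
    """Train one base ELM and return its prediction function."""
    W = rng.normal(size=(3, L)); b = rng.normal(size=L)
    beta = np.linalg.pinv(np.tanh(Xs @ W + b)) @ ts
    return lambda Xq: np.tanh(Xq @ W + b) @ beta

# stage 1: each base ELM trains on its own subset of the data
subsets = np.array_split(rng.permutation(600), 5)
bases = [base_elm(X[s], t[s]) for s in subsets]

# stage 2: base-ELM outputs act as the hidden layer of the top ELM
Z = np.column_stack([f(X) for f in bases])
beta_top = np.linalg.pinv(Z) @ t
print('meta train MSE:', np.mean((Z @ beta_top - t) ** 2))
```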


19.
In this paper, a novel self-adaptive extreme learning machine (ELM) based on affinity propagation (AP) is proposed to optimize radial basis function neural networks (RBFNNs). As is well known, the parameters of the original ELM developed by G.-B. Huang are randomly determined, which cannot objectively yield an optimal set of parameters for an RBFNN trained by the ELM algorithm on different realistic datasets. The AP algorithm automatically produces a set of clustering centers for a given dataset; from its results we obtain the number of clusters and the radius of each cluster. These values can then be used to initialize the number and the widths of the hidden-layer neurons in the RBFNN, which are exactly the parameters of the coefficient matrix H of ELM. This avoids subjective prior knowledge and the randomness of training the RBFNN. Experimental results show that the proposed method has a more powerful generalization capability than the conventional ELM for an RBFNN.
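
A hedged sketch of the initialization: AP exemplars become the RBF centers, the mean member-to-exemplar distance of each cluster becomes that node's width, and the resulting coefficient matrix H is solved as in ELM. The blob data and the mean-distance width rule are assumptions (the paper only says the cluster radius is used):

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(17)
X = np.vstack([rng.normal(c, s, size=(50, 2))
               for c, s in [(-2, 0.2), (0, 0.5), (2, 0.3)]])
t = np.sin(X[:, 0]) + X[:, 1]

ap = AffinityPropagation(random_state=0).fit(X)
centers = ap.cluster_centers_
# per-cluster radius: mean distance of members to their exemplar -> RBF widths
widths = np.array([
    np.linalg.norm(X[ap.labels_ == k] - c, axis=1).mean() + 1e-6
    for k, c in enumerate(centers)])

d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
H = np.exp(-d2 / (2 * widths ** 2))   # the coefficient matrix H of the ELM step
beta = np.linalg.pinv(H) @ t          # analytic output weights as in ELM
print('clusters:', len(centers), '| train MSE:', np.mean((H @ beta - t) ** 2))
```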

20.
A survey of extreme learning machines
The extreme learning machine is a training algorithm for single-hidden-layer feedforward networks whose main characteristics are extremely fast training and very high generalization performance. This paper reviews the development of the extreme learning machine, analyzes its mathematical model, introduces its various improved algorithms in detail, and lists its applications in recognition, prediction, and medical diagnosis. Finally, it summarizes likely directions for future improvement of the extreme learning machine.
