Similar Articles
20 similar articles found (search time: 234 ms).
1.

To optimize the network structure of the extreme learning machine (ELM), an improved sensitivity-pruned ELM (ImSAP-ELM) is proposed. ImSAP-ELM introduces an ℓ2 regularization factor into SAP-ELM and uses the leave-one-out criterion to determine the optimal number of hidden nodes. A formula for computing the output weights based on singular value decomposition (SVD) is derived, avoiding invalid solutions caused by matrix singularity. ImSAP-ELM is then applied to fault prediction: multiple ImSAP-ELM models are built from multiple datasets of the same fault type, and the predictions of the different ImSAP-ELMs are fused by weighting. A case study on the transmitter of a certain type of unmanned aerial vehicle shows that, compared with ELM, OP-ELM (optimally pruned ELM) and SAP-ELM, ImSAP-ELM is the most time-consuming, but its prediction error is smaller than that of the other three methods.
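A compact way to see the SVD route to the output weights, together with a closed-form leave-one-out (PRESS) criterion for choosing the hidden-layer size, is sketched below. The sigmoid activation, the ridge factor C and the helper names (`elm_hidden`, `svd_output_weights`, `loo_press`) are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def elm_hidden(X, W, b):
    """Random-feature hidden layer with sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def svd_output_weights(H, T, C=1e2):
    """Solve min ||H beta - T||^2 + ||beta||^2 / C via SVD.

    With H = U S V^T, beta = V diag(s / (s^2 + 1/C)) U^T T,
    which stays well-defined even when H^T H is singular."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    d = s / (s**2 + 1.0 / C)
    return Vt.T @ (d[:, None] * (U.T @ T))

def loo_press(H, T, C=1e2):
    """Leave-one-out (PRESS) error in closed form from the hat
    matrix, instead of N retrainings."""
    U, s, _ = np.linalg.svd(H, full_matrices=False)
    d = s**2 / (s**2 + 1.0 / C)
    hat_diag = np.einsum('ij,j,ij->i', U, d, U)
    resid = T - H @ svd_output_weights(H, T, C)
    return np.mean((resid / (1.0 - hat_diag)[:, None])**2)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 3))
T = np.sin(X.sum(axis=1, keepdims=True))
best = min(range(5, 60, 5),
           key=lambda L: loo_press(elm_hidden(X, rng.normal(size=(3, L)),
                                              rng.normal(size=L)), T))
print("LOO-selected hidden size:", best)
```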


2.
Evolutionary selection extreme learning machine optimization for regression
A regression neural network model can approximate unknown datasets with small error. As an important method for global regression, the extreme learning machine (ELM) is a typical learning method for single-hidden-layer feedforward networks, thanks to its good generalization performance and fast implementation: the "randomness" of the input weights lets the nonlinear combination reach arbitrary function approximation. In this paper we seek an alternative mechanism for the input connections, with the idea derived from evolutionary algorithms. After predefining the number L of hidden nodes, we generate original ELM models and treat each hidden node as a gene. The hidden nodes are ranked, the larger-weight nodes are reassigned to the updated ELM, and L/2 trivial hidden nodes are put into a candidate reservoir. We then generate L/2 new hidden nodes and combine them with hidden nodes from this candidate reservoir; a second ranking chooses among these nodes, and fitness-proportional selection picks L/2 of them to recombine the evolutionary-selection ELM. The entire algorithm can be applied to large-scale dataset regression. Verification shows that the regression performance is better than that of the traditional ELM and the Bayesian ELM at a lower cost.
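One plausible reading of this selection loop, with the node ranking taken as the magnitude of the fitted output weights (an assumption; the paper's fitness measure may differ), is sketched here:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_beta(H, T):
    return np.linalg.pinv(H) @ T

def evolve_elm(X, T, L=40, generations=20):
    """Treat each hidden node (weight/bias pair) as a gene, rank
    nodes by the magnitude of their fitted output weights, keep the
    top L/2, and recombine with L/2 fresh candidates each generation."""
    n = X.shape[1]
    W, b = rng.normal(size=(n, L)), rng.normal(size=L)
    for _ in range(generations):
        H = np.tanh(X @ W + b)
        beta = fit_beta(H, T)
        rank = np.argsort(-np.abs(beta).sum(axis=1))   # strongest first
        keep = rank[:L // 2]
        W_new = np.concatenate([W[:, keep],
                                rng.normal(size=(n, L - L // 2))], axis=1)
        b_new = np.concatenate([b[keep], rng.normal(size=L - L // 2)])
        # accept the recombined network only if it does not hurt the fit
        H_new = np.tanh(X @ W_new + b_new)
        if np.linalg.norm(T - H_new @ fit_beta(H_new, T)) <= \
           np.linalg.norm(T - H @ beta):
            W, b = W_new, b_new
    H = np.tanh(X @ W + b)
    return W, b, fit_beta(H, T)

X = rng.uniform(-1, 1, (300, 2))
T = (X[:, :1] * X[:, 1:])**2 + 0.05 * rng.normal(size=(300, 1))
W, b, beta = evolve_elm(X, T)
print("train RMSE:", np.sqrt(np.mean((T - np.tanh(X @ W + b) @ beta)**2)))
```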

3.
Compared with traditional learning methods such as back propagation (BP), the extreme learning machine provides much faster learning speed and needs less human intervention, and has therefore been widely used. In this paper we combine the L1/2 regularization method with the extreme learning machine to prune the network. A variable learning coefficient is employed to prevent too large a learning increment. A numerical experiment demonstrates that a network pruned by L1/2 regularization has fewer hidden nodes but performs better than both the original network and a network pruned by L2 regularization.
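A minimal sketch of the idea, assuming gradient descent on the ELM least-squares loss plus a λ·Σ|β|^(1/2) penalty, with the subgradient smoothed near zero and the variable learning coefficient realized as a decaying, clipped step:

```python
import numpy as np

rng = np.random.default_rng(2)

def l_half_prune_elm(X, T, L=50, lam=1e-3, epochs=500, eps=1e-4):
    """Random hidden layer as in ELM, but output weights learned by
    gradient descent on MSE + lam * sum(|beta|^{1/2}); small weights
    are driven to zero and their hidden nodes are then removed."""
    n, N = X.shape[1], X.shape[0]
    W, b = rng.normal(size=(n, L)), rng.normal(size=L)
    H = np.tanh(X @ W + b)
    beta = 0.1 * rng.normal(size=(L, T.shape[1]))
    for epoch in range(epochs):
        grad = H.T @ (H @ beta - T) / N
        # subgradient of |beta|^{1/2}, smoothed near zero
        grad += lam * 0.5 * np.sign(beta) / np.sqrt(np.abs(beta) + eps)
        eta = 0.5 / (1.0 + epoch / 100)    # variable learning coefficient
        beta -= np.clip(eta * grad, -0.1, 0.1)   # cap the increment
    keep = np.abs(beta).max(axis=1) > 1e-2       # prune near-zero nodes
    return W[:, keep], b[keep], beta[keep]

X = rng.uniform(-1, 1, (400, 2))
T = np.sin(np.pi * X[:, :1]) * X[:, 1:]
W, b, beta = l_half_prune_elm(X, T)
print("hidden nodes kept:", W.shape[1])
```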

4.
The learning accuracy of the existing extreme learning machine is strongly affected by the number of hidden nodes. Both the single-hidden-layer ELMs proposed so far and multi-hidden-layer neural networks first fix the number of hidden layers and then improve accuracy by adding neurons to each layer. When the training set is large, however, many hidden nodes must be introduced, making the computation of the generalized inverse matrix expensive and hindering learning efficiency. This paper proposes MHL-ELM (Extreme Learning Machine with Incremental Hidden Layers), an ELM whose hidden layers are added one by one. The idea is to first randomly assign the weights of the current hidden layer (whose size is small and not optimized, so the complexity is low) and compute the approximation error in the ELM fashion; if the error does not meet the requirement, another hidden layer is added and optimized with the ELM idea. Hidden layers are added until the required error accuracy is reached. The overall algorithmic complexity of MHL-ELM is ∑_{l=1}^{M} O(N_l^3), where N_l is the number of nodes in layer l. Experiments on 10 real UCI and KEEL datasets, compared with traditional methods such as BP and OP-ELM, show that MHL-ELM generalizes better and greatly improves both learning accuracy and learning speed.
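A rough sketch of the layer-wise growth, under the simplifying assumption that each new layer is solved ELM-style on top of the previous layer's output and growth stops once the training RMSE reaches a tolerance (`mhl_elm` and the stopping rule are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def mhl_elm(X, T, nodes_per_layer=15, tol=1e-3, max_layers=8):
    """Stack small, randomly weighted (never tuned) hidden layers;
    after each addition, re-solve the output weights ELM-style and
    stop when the training error reaches tol."""
    layers, A = [], X
    for _ in range(max_layers):
        W = rng.normal(size=(A.shape[1], nodes_per_layer))
        b = rng.normal(size=nodes_per_layer)
        layers.append((W, b))
        A = np.tanh(A @ W + b)
        beta = np.linalg.pinv(A) @ T           # per-layer cost O(N_l^3)
        err = np.sqrt(np.mean((A @ beta - T)**2))
        if err < tol:
            break
    return layers, beta

def predict(layers, beta, X):
    A = X
    for W, b in layers:
        A = np.tanh(A @ W + b)
    return A @ beta

X = rng.uniform(-1, 1, (500, 2))
T = np.cos(X[:, :1]) + X[:, 1:]**2
layers, beta = mhl_elm(X, T)
print("layers used:", len(layers),
      "RMSE:", np.sqrt(np.mean((predict(layers, beta, X) - T)**2)))
```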

5.
Owing to its efficient training procedure, the extreme learning machine (ELM) is widely used for classification and regression; however, different input weights affect its learning performance considerably. To further improve the learning performance of ELM, this work studies its input weights: exploiting the sparsity of local receptive fields in images, the local receptive field idea is applied to the autoencoder-based ELM (ELM-AE), yielding a local receptive field class-constrained ELM (RF-C2ELM). Classification experiments on the MNIST dataset show that, with the same number of hidden nodes, the proposed method achieves higher classification accuracy.

6.
Extreme learning machine (ELM) works for generalized single-hidden-layer feedforward networks (SLFNs), and its essence is that the hidden layer of the SLFN need not be tuned. However, ELM only uses labeled data to carry out a supervised learning task. In order to exploit unlabeled data in the ELM model, we first extend the manifold regularization (MR) framework and then demonstrate the relation between the extended MR framework and ELM. Finally, a manifold regularized extreme learning machine is derived from the proposed framework, which maintains the properties of ELM and is applicable to large-scale learning problems. Experimental results show that the proposed semi-supervised extreme learning machine is the most cost-efficient method: it tends to have better scalability and to achieve satisfactory generalization performance at a relatively faster learning speed than traditional semi-supervised learning algorithms.
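The derived solution plausibly takes the form of a ridge-ELM fit on the labeled rows plus a graph-Laplacian smoothness penalty over all rows; the sketch below assumes exactly that, with a kNN Gaussian similarity graph and hyperparameters C and λ as knobs (the paper's precise objective may differ):

```python
import numpy as np

rng = np.random.default_rng(4)

def knn_laplacian(X, k=5):
    """Unnormalized graph Laplacian of a kNN similarity graph."""
    d2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
    S = np.zeros_like(d2)
    for i, row in enumerate(d2):
        nn = np.argsort(row)[1:k + 1]            # skip the point itself
        S[i, nn] = np.exp(-row[nn] / (row[nn].mean() + 1e-12))
    S = np.maximum(S, S.T)                       # symmetrize
    return np.diag(S.sum(1)) - S

def mr_elm(Xl, Tl, Xu, L=40, C=10.0, lam=0.1):
    X = np.vstack([Xl, Xu])
    W, b = rng.normal(size=(X.shape[1], L)), rng.normal(size=L)
    H = np.tanh(X @ W + b)                       # labeled + unlabeled rows
    Hl = H[:len(Xl)]
    Lap = knn_laplacian(X)
    # ridge-ELM fit on labeled data plus manifold smoothness penalty
    A = np.eye(L) / C + Hl.T @ Hl + lam * H.T @ Lap @ H
    return W, b, np.linalg.solve(A, Hl.T @ Tl)

Xl = rng.uniform(-1, 1, (30, 2))
Tl = np.sign(Xl[:, :1] * Xl[:, 1:])              # XOR-like labels in {-1, +1}
Xu = rng.uniform(-1, 1, (200, 2))
W, b, beta = mr_elm(Xl, Tl, Xu)
pred = np.sign(np.tanh(Xu @ W + b) @ beta)
print("prediction split on unlabeled data:", np.unique(pred, return_counts=True))
```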

7.
To address the network structure design problem of the extreme learning machine (ELM), an ELM pruning algorithm based on sensitivity analysis is proposed. Using the hidden-node outputs and the corresponding output-layer weight vectors, the sensitivity of the learning residual to each hidden node and a network-scale fitness are defined; node importance is judged by its sensitivity, the number of hidden nodes is determined by the network-scale fitness, and the least important nodes are deleted. Simulation results show that the proposed algorithm can fairly accurately determine a network scale matched to the learning samples, solving the ELM network structure design problem.
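One way to realize sensitivity-based pruning is sketched below: each hidden node is scored by the energy of its output contribution, and low-score nodes are greedily dropped while the fit stays within a tolerance. The scoring and acceptance rules here are assumptions standing in for the paper's sensitivity and network-scale fitness definitions:

```python
import numpy as np

rng = np.random.default_rng(5)

def sensitivity_prune(X, T, L=60, tol=0.02):
    """Score node j by the norm of its contribution H[:, j] * beta[j],
    then drop low-score nodes while the training RMSE stays within
    (1 + tol) of the full network's RMSE."""
    W, b = rng.normal(size=(X.shape[1], L)), rng.normal(size=L)
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ T
    base = np.sqrt(np.mean((H @ beta - T)**2))
    # per-node sensitivity: output-contribution energy
    score = np.linalg.norm(H[:, :, None] * beta[None, :, :], axis=(0, 2))
    keep = np.ones(L, dtype=bool)
    for j in np.argsort(score):                  # least important first
        keep[j] = False
        beta_try = np.linalg.pinv(H[:, keep]) @ T
        err = np.sqrt(np.mean((H[:, keep] @ beta_try - T)**2))
        if err > base * (1 + tol):
            keep[j] = True                       # removing j hurts; keep it
    return W[:, keep], b[keep], np.linalg.pinv(H[:, keep]) @ T

X = rng.uniform(-1, 1, (300, 2))
T = X[:, :1]**2 - X[:, 1:]
W, b, beta = sensitivity_prune(X, T)
print("nodes after pruning:", W.shape[1])
```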

8.
Extreme learning machine (ELM) is widely used in complex industrial problems; in particular, the online sequential extreme learning machine (OS-ELM) performs well in industrial online modeling. However, OS-ELM requires a batch of samples to be pre-trained to obtain the initial weights, which may reduce the timeliness of the samples. This paper proposes a novel model for online process regression prediction, called the Recurrent Extreme Learning Machine (Recurrent-ELM). In Recurrent-ELM the nodes between the hidden layers are connected, so the hidden layer receives information both from the current input layer and from the previous hidden layer. Moreover, the weights and biases of the proposed model are generated analytically rather than randomly. Six regression applications are used to verify the designed Recurrent-ELM; compared with the extreme learning machine (ELM), the fast learning network (FLN), the online sequential extreme learning machine (OS-ELM) and an ensemble of online sequential extreme learning machines (EOS-ELM), the experimental results show that Recurrent-ELM has better generalization and stability across several samples. In addition, to further test its performance, we employ Recurrent-ELM in the combustion modeling of a 330 MW coal-fired boiler, compared with FLN, SVR and OS-ELM. The results show that Recurrent-ELM has better accuracy and generalization ability, and the model has potential value in practical applications.

9.
Due to their significant efficiency and simple implementation, extreme learning machine (ELM) algorithms have recently attracted much attention in regression and classification applications. Much effort has been devoted to enhancing the performance of ELM from both the methodology (ELM training strategies) and the structure (incremental or pruned ELMs) perspectives. In this paper, a local coupled extreme learning machine (LC-ELM) algorithm is presented. By assigning an address to each hidden node in the input space, LC-ELM introduces a decoupler framework into ELM in order to reduce the complexity of the weight search space. The activation degree of a hidden node is measured by the membership degree of the similarity between its address and the given input. Experimental results confirm that the proposed approach works effectively and generally outperforms the original ELM in both regression and classification applications.
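The address/membership mechanism can be illustrated directly: each hidden node gets an address in the input space and its activation is gated by a fuzzy membership of the input's distance to that address. The Gaussian membership, the choice of training points as addresses, and σ are assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

def lc_elm(X, T, L=40, sigma=0.5):
    """Locally coupled ELM sketch: node j's tanh activation is gated
    by a Gaussian membership of the distance between the input and
    node j's address, so each node responds mainly to its own
    neighbourhood of the input space."""
    n = X.shape[1]
    W, b = rng.normal(size=(n, L)), rng.normal(size=L)
    addr = X[rng.choice(len(X), L, replace=False)]   # node addresses

    def hidden(Xq):
        d2 = ((Xq[:, None, :] - addr[None, :, :])**2).sum(-1)
        mu = np.exp(-d2 / (2 * sigma**2))            # membership degree
        return mu * np.tanh(Xq @ W + b)              # locally coupled

    beta = np.linalg.pinv(hidden(X)) @ T
    return hidden, beta

X = rng.uniform(-1, 1, (400, 2))
T = np.sin(3 * X[:, :1]) * np.cos(2 * X[:, 1:])
hidden, beta = lc_elm(X, T)
print("train RMSE:", np.sqrt(np.mean((hidden(X) @ beta - T)**2)))
```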

10.
The extreme learning machine (ELM), a single-hidden-layer feedforward neural network algorithm, was tested on nine environmental regression problems. The prediction accuracy and computational speed of the ensemble ELM were evaluated against multiple linear regression (MLR) and three nonlinear machine learning (ML) techniques: artificial neural networks (ANN), support vector regression and random forests (RF). Simple automated algorithms were used to estimate the parameters (e.g. the number of hidden neurons) needed for model training. Scaling the range of the random weights in ELM improved its performance. Excluding large datasets (with a large number of cases and predictors), ELM tended to be the fastest of the nonlinear models; for large datasets, RF tended to be the fastest. ANN and ELM had similar skill, but ELM was much faster than ANN except on large datasets. Generally, the tested ML techniques outperformed MLR, but no single method was best on all nine datasets.

11.
This paper presents a performance enhancement scheme for the recently developed extreme learning machine (ELM) for multi-category sparse data classification problems. ELM is a single-hidden-layer neural network with good generalization capability and extremely fast learning capacity. In ELM, the input weights are randomly chosen and the output weights are analytically calculated. The generalization performance of ELM for sparse data classification depends critically on three free parameters: the number of hidden neurons, the input weights and the bias values, which need to be chosen optimally. Selecting these parameters for the best performance of ELM is a complex optimization problem. In this paper, we present a new real-coded genetic algorithm approach called 'RCGA-ELM' to select the optimal number of hidden neurons, input weights and bias values for better performance. Two new genetic operators, a 'network-based operator' and a 'weight-based operator', are proposed to find a compact network with higher generalization performance. We also present an alternative, less computationally intensive approach called 'sparse-ELM', which searches for the best ELM parameters using K-fold validation. A multi-class human cancer classification problem using (sparse) micro-array gene expression data is used to evaluate the performance of the two schemes. Results indicate that the proposed RCGA-ELM and sparse-ELM significantly improve ELM performance for sparse multi-category classification problems.

12.

To address the large number of redundant nodes in the incremental extreme learning machine (I-ELM), which degrade its learning efficiency and accuracy, an improved incremental kernel extreme learning algorithm based on the Delta test (DT) and the chaos optimization algorithm (COA) is proposed. The global search capability of COA is used to optimize the hidden-node parameters of I-ELM, and the DT is used to check the model's output error and determine the effective number of hidden nodes, thereby reducing network complexity and improving learning efficiency; adding a kernel function strengthens the network's online prediction ability. Simulation results show that the proposed DCI-ELMK algorithm achieves good prediction accuracy and generalization with a more compact network structure.


13.
Extreme learning machine for regression and multiclass classification
Due to the simplicity of their implementations, the least squares support vector machine (LS-SVM) and the proximal support vector machine (PSVM) have been widely used in binary classification applications. The conventional LS-SVM and PSVM cannot be used in regression and multiclass classification applications directly, although variants of LS-SVM and PSVM have been proposed to handle such cases. This paper shows that both LS-SVM and PSVM can be simplified further and that a unified learning framework of LS-SVM, PSVM, and other regularization algorithms, referred to as the extreme learning machine (ELM), can be built. ELM works for "generalized" single-hidden-layer feedforward networks (SLFNs), where the hidden layer (also called the feature mapping) need not be tuned. Such SLFNs include, but are not limited to, SVMs, polynomial networks, and conventional feedforward neural networks. This paper shows the following: 1) ELM provides a unified learning platform with a wide variety of feature mappings and can be applied in regression and multiclass classification applications directly; 2) from the optimization point of view, ELM has milder optimization constraints than LS-SVM and PSVM; 3) in theory, compared to ELM, LS-SVM and PSVM achieve suboptimal solutions and require higher computational complexity; and 4) in theory, ELM can approximate any target continuous function and classify any disjoint regions. As verified by the simulation results, ELM tends to have better scalability and achieve similar (for regression and binary-class cases) or much better (for multiclass cases) generalization performance at much faster learning speed (up to thousands of times) than traditional SVM and LS-SVM.
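The unified solution has a well-known closed form, β = (I/C + HᵀH)⁻¹HᵀT, which handles regression and multiclass classification (one-hot targets, argmax decoding) alike; the toy data and the helper name `unified_elm` below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def unified_elm(X, T, L=100, C=1e3):
    """Closed form from the unified framework:
    beta = (I/C + H^T H)^{-1} H^T T, with H any random (untuned)
    feature mapping. Real-valued T gives regression; one-hot T plus
    argmax decoding gives multiclass classification."""
    W, b = rng.normal(size=(X.shape[1], L)), rng.normal(size=L)
    H = np.tanh(X @ W + b)
    beta = np.linalg.solve(np.eye(L) / C + H.T @ H, H.T @ T)
    return lambda Xq: np.tanh(Xq @ W + b) @ beta

# three-class toy problem with one-hot targets
X = rng.normal(size=(600, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int) + (X[:, 0] - X[:, 1] > 1).astype(int)
T = np.eye(3)[y]
predict = unified_elm(X, T)
print("train accuracy:", round(np.mean(predict(X).argmax(1) == y), 3))
```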

14.
Han Min, Liu Xiaoxin. 《控制与决策》 (Control and Decision), 2014, 29(9): 1576-1580

To address the variable selection and network structure design problems in regression, a mutual-information-based training algorithm for the extreme learning machine (ELM) is proposed that performs input variable selection and hidden-layer structure optimization simultaneously. The algorithm embeds mutual-information-based input variable selection into the learning process of the ELM network, using the network's learning performance as the criterion for whether an input variable is relevant to the output, and determines the number of hidden nodes incrementally. Simulation results on the Lorenz and Gas Furnace datasets and 10 benchmark datasets demonstrate the effectiveness of the proposed algorithm, which not only simplifies the network structure but also improves its generalization performance.


15.
To overcome the disadvantages of traditional training algorithms for single-hidden-layer feedforward neural networks (SLFNs), an improved algorithm called the extreme learning machine (ELM) was proposed by Huang et al. However, ELM is sensitive to the number of neurons in the hidden layer, and selecting this number is a hard problem. In this paper, a self-adaptive mechanism is introduced into ELM, giving a new variant called the self-adaptive extreme learning machine (SaELM). SaELM is a self-adaptive learning algorithm that always selects the best number of hidden neurons to form the neural network, and no parameters need to be adjusted during training. To assess its performance, SaELM is used to solve the Italian wine and iris classification problems. Comparisons between SaELM and traditional back propagation, the basic ELM and the general regression neural network show that SaELM has faster learning speed and better generalization performance on these classification problems.

16.
In this paper, a novel self-adaptive extreme learning machine (ELM) based on affinity propagation (AP) is proposed to optimize the radial basis function neural network (RBFNN). As is well known, the parameters of the original ELM developed by G.-B. Huang are randomly determined, so a set of optimal RBFNN parameters cannot be obtained objectively by the ELM algorithm for different realistic datasets. The AP algorithm automatically produces a set of clustering centers for a given dataset; from its results we obtain the number of clusters and the radius of each cluster, which are then used to initialize the number and the widths of the hidden-layer neurons in the RBFNN, i.e. the parameters of the coefficient matrix H of ELM. This successfully avoids subjective prior knowledge and the randomness of training the RBFNN. Experimental results show that the proposed method has a more powerful generalization capability than the conventional ELM for an RBFNN.
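A sketch of the initialization pipeline using scikit-learn's AffinityPropagation: the exemplars become RBF centers, the per-cluster mean distance to the center stands in for the widths (an assumed rule, possibly differing from the paper's), and the output weights are solved ELM-style:

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(8)

def ap_rbf_elm(X, T):
    """AP picks the cluster centers and their count automatically;
    centers become RBF hidden units, per-cluster radii become the
    widths, and output weights are solved by pseudo-inverse."""
    ap = AffinityPropagation(random_state=0).fit(X)
    centers = ap.cluster_centers_
    widths = np.array([
        np.linalg.norm(X[ap.labels_ == k] - c, axis=1).mean() + 1e-8
        for k, c in enumerate(centers)])

    def hidden(Xq):
        d2 = ((Xq[:, None, :] - centers[None, :, :])**2).sum(-1)
        return np.exp(-d2 / (2 * widths[None, :]**2))

    beta = np.linalg.pinv(hidden(X)) @ T
    return hidden, beta

X = np.vstack([rng.normal(m, 0.3, (60, 2)) for m in (-1.5, 0.0, 1.5)])
T = np.sin(X[:, :1] + X[:, 1:])
hidden, beta = ap_rbf_elm(X, T)
print("hidden units chosen by AP:", hidden(X).shape[1],
      "RMSE:", np.sqrt(np.mean((hidden(X) @ beta - T)**2)))
```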

17.
To address the problem that the extreme learning machine (ELM) requires a large number of hidden-layer nodes during training, an ELM based on an artificial bee colony algorithm improved by differential evolution and clonal expansion (DECABC-ELM) is proposed. Building on the artificial bee colony algorithm, the differential mutation operator of differential evolution and the clonal expansion operator of the immune clonal algorithm are introduced to remedy the slow convergence of the artificial bee colony, and the improved algorithm is used to compute the hidden-node parameters of the ELM. The algorithm is applied to regression and classification datasets and compared with other algorithms, obtaining good results.

18.
Recently, a novel learning algorithm for single-hidden-layer feedforward neural networks (SLFNs) named the extreme learning machine (ELM) was proposed by Huang et al. The essence of ELM is that the learning parameters of the hidden nodes, including input weights and biases, are randomly assigned and need not be tuned, while the output weights can be analytically determined by a simple generalized inverse operation. The only parameter that needs to be defined is the number of hidden nodes. Compared with other traditional learning algorithms for SLFNs, ELM provides extremely fast learning speed and better generalization performance with the least human intervention. This paper first gives a brief review of ELM, describing its principle and algorithm. We then put emphasis on improved methods and typical variants of ELM, especially incremental ELM, pruning ELM, error-minimized ELM, two-stage ELM, online sequential ELM, evolutionary ELM, voting-based ELM, ordinal ELM, fully complex ELM, and symmetric ELM. Next, the paper summarizes applications of ELM in classification, regression, function approximation, pattern recognition, forecasting, diagnosis, and so on. Finally, the paper discusses several open issues of ELM that may be worth exploring in the future.

19.
To improve the regression accuracy and model stability of the extreme learning machine (ELM), a new approach based on an improved M-estimation optimized double-parallel extreme learning machine, called the robust double-parallel extreme learning machine (RD-ELM), is proposed in this study. First, RD-ELM is constructed with a double-parallel forward structure, so that information is received from both the hidden-layer neurons and the input-layer neurons. Second, an improved M-estimation is used to calculate the output weights of the neural network by iteratively reweighted least-squares estimation (LSE), with weights assigned by the least-absolute-residual estimation of the samples. Finally, a regression prediction model is established to test the goodness of fit on the SinC function and to verify the regression ability on eight benchmark regression problems, and the proposed method is applied to an actual operating condition of a power plant. Experimental results show that the proposed method can efficiently handle the influence of outliers and noise, with strong anti-jamming ability. Compared with other methods, RD-ELM shows stronger robustness and better generalization performance on many benchmark datasets and in practical experiments.
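The iteratively reweighted least-squares step can be sketched as follows, with a Huber-type weight function standing in for the paper's improved M-estimator and the double-parallel direct input-output links omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(9)

def huber_weights(r, k=1.345):
    """Huber-type M-estimation weights: full weight for small
    residuals, down-weighting ~ 1/|r| beyond the threshold."""
    a = np.abs(r)
    return np.where(a <= k, 1.0, k / np.maximum(a, 1e-12))

def robust_elm(X, T, L=50, iters=10):
    """Output weights by IRLS: fit, compute residuals, down-weight
    outlying samples via the M-estimator, refit."""
    W, b = rng.normal(size=(X.shape[1], L)), rng.normal(size=L)
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ T
    for _ in range(iters):
        r = (T - H @ beta).ravel()
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # robust scale
        w = huber_weights(r / s)
        Hw = H * w[:, None]     # Hw.T @ H == H^T diag(w) H
        beta = np.linalg.solve(Hw.T @ H + 1e-8 * np.eye(L), Hw.T @ T)
    return W, b, beta

X = rng.uniform(-3, 3, (300, 1))
T = np.sinc(X)                                        # SinC test function
T[::15] += rng.normal(0, 2.0, size=T[::15].shape)     # inject outliers
W, b, beta = robust_elm(X, T)
print("clean-data RMSE:",
      np.sqrt(np.mean((np.tanh(X @ W + b) @ beta - np.sinc(X))**2)))
```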

20.
Li Jun, Nai Yongqiang. 《控制与决策》 (Control and Decision), 2015, 30(9): 1559-1566

For a class of multi-input multi-output (MIMO) affine nonlinear dynamic systems, a robust adaptive neural control method based on the extreme learning machine (ELM) is proposed. ELM randomly determines the hidden-layer parameters of single-hidden-layer feedforward networks (SLFNs), so that only the output weights of the network need to be adjusted, achieving good generalization at an extremely fast learning speed. In the proposed method, ELM approximates the unknown nonlinear terms of the system; parameter adaptive laws are designed for the ELM output weights, the approximation error and the unknown upper bound of the external disturbance, and Lyapunov stability analysis guarantees that all signals of the closed-loop system are semi-globally uniformly ultimately bounded. Simulation results demonstrate the effectiveness of the control method.

