Similar Documents
20 similar documents retrieved.
1.
Although the extreme learning machine (ELM) is widely used for classification, regression, and feature learning thanks to its speed, simplicity, ease of implementation, and universal approximation capability, it, like other standard classifiers, takes maximizing the overall classification performance across all classes as its optimization objective. When the class distribution of the data is imbalanced, the algorithm is therefore biased towards the majority class. Research on class-imbalance learning with ELM started late and few algorithms exist. After reviewing the state of the art in class-imbalanced ELM learning and its representative algorithm, the weighted ELM, together with its improved variants, this paper proposes an AdaBoost-boosted weighted ELM that requires no preprocessing of the original imbalanced samples. Experiments on 15 imbalanced UCI data sets show that the proposed algorithm achieves better classification performance.
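
A minimal sketch of the idea described in this abstract, assuming a basic sigmoid-hidden-layer ELM trained by weighted ridge regression and the standard AdaBoost reweighting rule for binary labels in {-1, +1}; the function and parameter names (`fit_elm`, `n_hidden`, `reg`, `n_rounds`) are illustrative, not the authors' implementation.

```python
import numpy as np

def fit_elm(X, y, sample_weight, n_hidden=50, reg=1e-3, rng=None):
    """Train a sigmoid ELM by weighted ridge regression on +/-1 labels."""
    rng = np.random.default_rng(rng)
    W = rng.normal(size=(X.shape[1], n_hidden))      # random input weights
    b = rng.normal(size=n_hidden)                    # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # hidden-layer output
    D = np.diag(sample_weight)                       # per-sample boosting weights
    beta = np.linalg.solve(H.T @ D @ H + reg * np.eye(n_hidden), H.T @ D @ y)
    return W, b, beta

def predict_elm(model, X):
    W, b, beta = model
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.sign(H @ beta)

def adaboost_weighted_elm(X, y, n_rounds=10, seed=0):
    """AdaBoost over weighted-ELM base learners; y must be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                          # sample distribution
    models, alphas = [], []
    for t in range(n_rounds):
        model = fit_elm(X, y, w, rng=seed + t)
        pred = predict_elm(model, X)
        err = np.clip(np.sum(w * (pred != y)), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)        # learner weight
        w *= np.exp(-alpha * y * pred)               # emphasize misclassified samples
        w /= w.sum()
        models.append(model); alphas.append(alpha)
    return models, np.array(alphas)

def adaboost_predict(models, alphas, X):
    votes = sum(a * predict_elm(m, X) for m, a in zip(models, alphas))
    return np.sign(votes)
```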

2.
Cost-sensitive learning is a crucial problem in machine learning research. The traditional classification setting assumes that misclassifying any category carries the same cost, so the learning algorithm aims to minimize the expected error rate. In cost-sensitive learning, the misclassification costs of samples from different categories differ, and the algorithm instead minimizes the total misclassification cost. Cost-sensitive learning matches the demands of many real-life classification problems, such as medical diagnosis and financial forecasting. Owing to its fast learning speed and strong performance, the extreme learning machine (ELM) has become one of the best classification algorithms, and voting based on extreme learning machine (V-ELM) makes classification results more accurate and stable. However, V-ELM and other ELM variants all assume that every misclassification costs the same, so they cannot handle cost-sensitive problems well. To overcome this drawback, an algorithm called cost-sensitive ELM (CS-ELM) is proposed, which introduces the misclassification cost of each sample into V-ELM. Experimental results on gene expression data show that CS-ELM is effective in reducing misclassification cost.
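
One plausible reading of CS-ELM, sketched below under the assumption that each sample's misclassification cost enters the ELM least-squares objective as a weight and that the V-ELM part is a plain majority vote over independently initialized networks; the names `cost`, `n_learners`, and the one-hot target convention are illustrative assumptions.

```python
import numpy as np

def train_cost_elm(X, Y, cost, n_hidden=60, reg=1e-2, rng=None):
    """One ELM whose output weights minimise a cost-weighted squared error.
    Y is one-hot (n_samples x n_classes); cost is the misclassification cost
    assigned to each training sample."""
    rng = np.random.default_rng(rng)
    W = rng.uniform(-1, 1, (X.shape[1], n_hidden))
    b = rng.uniform(-1, 1, n_hidden)
    H = np.tanh(X @ W + b)
    C = np.diag(cost)                                # per-sample cost matrix
    beta = np.linalg.solve(H.T @ C @ H + reg * np.eye(n_hidden), H.T @ C @ Y)
    return W, b, beta

def train_cs_elm(X, Y, cost, n_learners=7):
    """Ensemble of cost-weighted ELMs (V-ELM style, costs folded in)."""
    return [train_cost_elm(X, Y, cost, rng=k) for k in range(n_learners)]

def vote_predict(models, X, n_classes):
    """Each ELM votes for a class; the majority wins."""
    votes = np.zeros((X.shape[0], n_classes))
    for W, b, beta in models:
        H = np.tanh(X @ W + b)
        pred = np.argmax(H @ beta, axis=1)
        votes[np.arange(X.shape[0]), pred] += 1
    return np.argmax(votes, axis=1)
```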

3.
A fixed-memory extreme learning machine and its application
张弦  王宏力 《控制与决策》2012,27(8):1206-1210
To enable online training of the extreme learning machine (ELM), a fixed-memory ELM (FM-ELM) is proposed. FM-ELM adds each newly arriving training sample while discarding the oldest one, which improves its adaptability to the dynamically changing characteristics of the system, and it uses the matrix inversion lemma to update the network output weights recursively, reducing the computational cost of online training. Application to online state prediction of a nonlinear system with time-varying dynamics shows that FM-ELM is an effective online training scheme for ELM and achieves faster adaptation and higher prediction accuracy than the online sequential ELM.
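
A minimal sketch of the sliding-window mechanism described above: the newest sample is added and the oldest removed at each step, and the output weights are updated with Sherman-Morrison (matrix-inversion-lemma) rank-one formulas rather than re-solved from scratch. Class and parameter names (`FixedMemoryELM`, `n_hidden`, `reg`) are illustrative; single-output regression with a 1-D target is assumed.

```python
import numpy as np

class FixedMemoryELM:
    """Fixed-memory ELM sketch: sliding window + recursive least squares."""

    def __init__(self, n_inputs, n_hidden=30, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.uniform(-1, 1, (n_inputs, n_hidden))
        self.b = rng.uniform(-1, 1, n_hidden)

    def _h(self, X):
        return np.tanh(X @ self.W + self.b)

    def init_batch(self, X0, y0, reg=1e-6):
        """Build the initial model from the first window of samples."""
        H = self._h(X0)
        self.P = np.linalg.inv(H.T @ H + reg * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ y0
        self.window = list(zip(X0, y0))              # samples kept in memory

    def step(self, x_new, y_new):
        # 1) add the newest sample (rank-one update of P and beta)
        h = self._h(x_new[None, :])
        self.P -= (self.P @ h.T @ h @ self.P) / (1.0 + h @ self.P @ h.T)
        self.beta += (self.P @ h.T) @ (np.atleast_1d(y_new) - h @ self.beta)
        self.window.append((x_new, y_new))
        # 2) discard the oldest sample (rank-one downdate)
        x_old, y_old = self.window.pop(0)
        g = self._h(x_old[None, :])
        self.P += (self.P @ g.T @ g @ self.P) / (1.0 - g @ self.P @ g.T)
        self.beta -= (self.P @ g.T) @ (np.atleast_1d(y_old) - g @ self.beta)

    def predict(self, X):
        return self._h(X) @ self.beta
```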

4.
Combining the extreme learning machine with the Rotation Forest algorithm, an ensemble learning model, RF-ELM, is proposed that uses ELM as the base classifier within the Rotation Forest framework. Three groups of prediction experiments were conducted on eight data sets; based on the results, the influence of the number of hidden-layer neurons on ELM predictions and the instability of a single ELM model are discussed. Comparing RF-ELM with a single ELM and with a Bagging-based ELM ensemble in two sets of experiments on stability and prediction accuracy shows that ensemble learning effectively improves ELM performance, and that RF-ELM is more stable and more accurate than the other two models, confirming that RF-ELM is an effective ELM ensemble learning model.
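
A rough sketch of a Rotation-Forest-plus-ELM ensemble of the kind described above, assuming the usual Rotation Forest recipe (random feature subsets, PCA on a bootstrap sample per subset, block rotation matrix) with a plain ELM as base learner; all names (`rf_elm_fit`, `n_subsets`, `n_learners`) and constants are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

def _elm(X, Y, n_hidden=40, reg=1e-2, rng=None):
    """Plain ELM classifier on one-hot targets Y (illustrative base learner)."""
    rng = np.random.default_rng(rng)
    W = rng.uniform(-1, 1, (X.shape[1], n_hidden))
    b = rng.uniform(-1, 1, n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return lambda Z: np.tanh(Z @ W + b) @ beta

def _rotation_matrix(X, n_subsets, rng):
    """Split features into subsets, fit PCA on a bootstrap sample per subset,
    and assemble the loadings into a block rotation matrix."""
    d = X.shape[1]
    perm = rng.permutation(d)
    R = np.zeros((d, d))
    for block in np.array_split(perm, n_subsets):
        boot = rng.choice(X.shape[0], size=int(0.75 * X.shape[0]), replace=True)
        pca = PCA(n_components=len(block)).fit(X[np.ix_(boot, block)])
        R[np.ix_(block, block)] = pca.components_.T
    return R

def rf_elm_fit(X, y, n_classes, n_learners=10, n_subsets=3, seed=0):
    rng = np.random.default_rng(seed)
    Y = np.eye(n_classes)[y]
    ensemble = []
    for k in range(n_learners):
        R = _rotation_matrix(X, n_subsets, rng)
        ensemble.append((R, _elm(X @ R, Y, rng=seed + k)))
    return ensemble

def rf_elm_predict(ensemble, X):
    scores = sum(model(X @ R) for R, model in ensemble)   # average the outputs
    return np.argmax(scores, axis=1)
```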

5.
Existing machine learning algorithms struggle to improve the classification accuracy of minority-class samples in sequentially arriving imbalanced data. To address this, an online sequential extreme learning machine based on a hybrid sampling strategy is proposed. The algorithm improves minority-class accuracy while limiting the loss of majority-class accuracy, and consists of an offline stage and an online stage. In the offline stage, a balanced sampling strategy is adopted: principal curves are used to construct the trusted regions of the majority and minority classes, and, without changing the sample distribution characteristics, the minority class is expanded and the majority class is reduced within these regions, yielding a balanced offline training set from which the initial model is built. In the online stage, only the sequentially arriving majority-class data are undersampled, the most valuable majority samples being selected according to sample importance, and the network weights are updated dynamically. A theoretical analysis shows that the information loss of the proposed algorithm has an upper bound. Simulations on UCI benchmark data sets and real air-pollution forecasting data from Macau show that, compared with the existing online sequential ELM (OS-ELM), ELM, and meta-cognitive online sequential ELM (MCOS-ELM), the proposed algorithm predicts minority-class samples more accurately and is numerically stable.

6.

To optimize the network structure of the extreme learning machine (ELM), an improved sensitivity-analysis-based pruned ELM (ImSAP-ELM) is proposed. ImSAP-ELM introduces an L2 regularization factor into SAP-ELM and determines the optimal number of hidden nodes with a leave-one-out criterion. A formula for computing the output weights based on the singular value decomposition is derived, which avoids the failure of the solution when the matrix is singular. ImSAP-ELM is applied to fault prediction: multiple ImSAP-ELM models are built from several data sets of the same fault type, and their predictions are fused by weighting. A case study on the transmitter of a certain type of unmanned aerial vehicle shows that, compared with ELM, OP-ELM (optimally pruned ELM), and SAP-ELM, ImSAP-ELM is the most time-consuming, but its prediction error is smaller than that of the other three methods.
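
The two numerical ingredients mentioned above, an SVD-based ridge solution for the output weights and a leave-one-out criterion, can be sketched as below. The PRESS statistic is used here as one common way to realise a leave-one-out criterion and is an assumption, not necessarily the paper's exact derivation; `H` is the hidden-layer output matrix, `T` the target matrix of shape (n_samples, n_outputs), and `lam` the L2 regularization factor.

```python
import numpy as np

def elm_output_weights_svd(H, T, lam=1e-3):
    """Ridge-regularised ELM output weights via the SVD of H, so the solution
    stays well defined even when H^T H is (near-)singular:
        beta = V diag(s / (s^2 + lam)) U^T T,   with H = U diag(s) V^T."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    d = s / (s ** 2 + lam)                     # filtered inverse singular values
    return Vt.T @ (d[:, None] * (U.T @ T))

def press_loo_error(H, T, lam=1e-3):
    """PRESS leave-one-out residual, usable to compare hidden-node counts:
    e_i = (t_i - h_i beta) / (1 - hat_ii), HAT = H (H^T H + lam I)^{-1} H^T."""
    beta = elm_output_weights_svd(H, T, lam)
    hat = H @ np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T)
    resid = (T - H @ beta) / (1.0 - np.diag(hat))[:, None]
    return np.mean(resid ** 2)
```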


7.
Dynamic ensemble extreme learning machine based on sample entropy
The extreme learning machine (ELM) is a learning algorithm proposed for single-hidden-layer feed-forward neural networks. By randomly selecting the input weights and hidden-layer biases, ELM overcomes many drawbacks of traditional gradient-based learning algorithms, such as local minima, improper learning rates, and slow learning. However, ELM suffers from instability and over-fitting, especially on large data sets. In this paper, a dynamic ensemble extreme learning machine based on sample entropy is proposed, which alleviates the problems of instability and over-fitting to some extent and increases prediction accuracy. The experimental results show that the proposed approach is robust and efficient.

8.
Imbalanced data are common in real life, yet most traditional classification algorithms assume a balanced class distribution or equal misclassification costs, so minority-class samples tend to be misclassified. To address this, a new imbalanced-data classification algorithm based on cost-sensitive ensemble learning, NIBoost (New Imbalanced Boost), is proposed. First, in each iteration an oversampling algorithm adds a number of minority-class samples to balance the data set, and a classifier is trained on this new set; next, the classifier labels the data set, yielding the predicted labels and the classification error rate; finally, the weight coefficient of the classifier and the new weight of each sample are computed from the error rate and the predicted labels. Using decision trees and naive Bayes as weak classifiers, experiments on UCI data sets show that, with a decision tree as the base classifier, the algorithm improves the F-value by up to 5.91 percentage points, the G-mean by up to 7.44 percentage points, and the AUC by up to 4.38 percentage points compared with RareBoost, so the new algorithm has clear advantages for imbalanced-data classification.
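
A rough sketch of a NIBoost-style loop for the binary case, assuming random duplication as the oversampler and a shallow decision tree as the weak learner; the learner/sample weight updates follow the standard AdaBoost form, which is an assumption about the paper's exact formulas, and all names (`niboost_fit`, `n_rounds`) are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def niboost_fit(X, y, n_rounds=10, seed=0):
    """Each round: oversample the minority class to balance the training set,
    fit a weak learner, then compute learner and sample weights from its
    error rate on the original (un-resampled) data."""
    rng = np.random.default_rng(seed)
    n = len(y)
    w = np.full(n, 1.0 / n)
    learners, alphas = [], []
    minority = min(np.unique(y), key=lambda c: np.sum(y == c))
    for _ in range(n_rounds):
        idx_min = np.flatnonzero(y == minority)
        extra = rng.choice(idx_min, size=np.sum(y != minority) - len(idx_min),
                           replace=True)                 # random oversampling
        Xb = np.vstack([X, X[extra]])
        yb = np.concatenate([y, y[extra]])
        wb = np.concatenate([w, w[extra]])
        clf = DecisionTreeClassifier(max_depth=3).fit(Xb, yb, sample_weight=wb)
        pred = clf.predict(X)
        err = np.clip(np.sum(w * (pred != y)), 1e-10, 0.499)
        alpha = 0.5 * np.log((1 - err) / err)             # classifier weight
        w *= np.exp(np.where(pred == y, -alpha, alpha))   # new sample weights
        w /= w.sum()
        learners.append(clf); alphas.append(alpha)
    return learners, alphas

def niboost_predict(learners, alphas, X, classes=(0, 1)):
    score = sum(a * np.where(clf.predict(X) == classes[1], 1.0, -1.0)
                for clf, a in zip(learners, alphas))
    return np.where(score > 0, classes[1], classes[0])
```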

9.
Owing to their efficiency and simple implementation, extreme learning machine (ELM) algorithms have recently attracted much attention in regression and classification applications. Much effort has been devoted to enhancing ELM performance from both the methodological (ELM training strategies) and the structural (incremental or pruned ELMs) perspectives. In this paper, a local coupled extreme learning machine (LC-ELM) algorithm is presented. By assigning an address in the input space to each hidden node, LC-ELM introduces a decoupling framework into ELM in order to reduce the complexity of the weight search space. The activation degree of a hidden node is measured by the membership degree of the similarity between its address and the given input. Experimental results confirm that the proposed approach works effectively and generally outperforms the original ELM in both regression and classification applications.

10.
Existing learning algorithms struggle to improve the classification accuracy of minority-class samples in imbalanced online sequential data. To address this, a weighted online sequential extreme learning machine based on imbalanced sample reconstruction is proposed. The algorithm starts from the distribution characteristics of the online sequential data and comprises an offline stage and an online stage. In the offline stage, principal curves are used to construct the trusted region of the minority class, and samples in this region are oversampled to build a balanced sample set that follows the sample distribution trend, from which the initial model is established. In the online stage, each sequentially arriving sample is assigned a weight according to its training error, while the network weights are updated dynamically. Comparative experiments on UCI benchmark data sets and measured meteorological data from Macau show that, compared with the existing online sequential ELM (OS-ELM), ELM, and meta-cognitive online sequential ELM (MCOS-ELM), the proposed algorithm recognizes minority-class samples better, with a model training time close to that of the other three algorithms; that is, it effectively improves minority-class accuracy without affecting algorithmic complexity.
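
The online stage described above can be sketched as a weighted OS-ELM update: each arriving chunk is scaled by per-sample weights before the usual recursive-least-squares step, so highly weighted (e.g. minority-class) samples pull the output weights harder. The weighting rule itself is left to the caller here, and the initial `P` is a simple large-diagonal stand-in for the usual offline initialisation; class and parameter names are illustrative.

```python
import numpy as np

class WeightedOSELM:
    """Online phase only: OS-ELM whose chunk update is weighted per sample."""

    def __init__(self, n_inputs, n_outputs, n_hidden=40, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.uniform(-1, 1, (n_inputs, n_hidden))
        self.b = rng.uniform(-1, 1, n_hidden)
        # Normally P0 = (H0^T H0)^{-1} from the offline batch; a large diagonal
        # is used here as a weak-prior stand-in.
        self.P = np.eye(n_hidden) * 1e4
        self.beta = np.zeros((n_hidden, n_outputs))

    def _h(self, X):
        return np.tanh(X @ self.W + self.b)

    def partial_fit(self, X, T, sample_weight):
        """RLS-style OS-ELM update for one chunk (X, T) with per-sample weights;
        T has shape (chunk_size, n_outputs)."""
        H = self._h(X) * np.sqrt(sample_weight)[:, None]   # weighted rows
        T = T * np.sqrt(sample_weight)[:, None]
        K = np.eye(len(X)) + H @ self.P @ H.T
        G = self.P @ H.T @ np.linalg.inv(K)                # gain matrix
        self.P -= G @ H @ self.P
        self.beta += G @ (T - H @ self.beta)
        return self

    def predict(self, X):
        return self._h(X) @ self.beta
```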

11.
To address the problems that the extreme learning machine (ELM) uses a fixed activation function and cannot perform residual compensation, a new learning algorithm, a variable-activation-function extreme learning machine based on residual prediction compensation, is proposed. During learning, the proposed method adjusts the steepness, position, and mapping range of the activation function simultaneously. To enhance the nonlinear mapping capability of ELM, particle swarm optimization is used to optimize the variable parameters according to the root-mean-square error of the model's predictions. To further improve predictive accuracy, an auto-regressive moving average model is fitted to the residuals between the actual values and the predictions of the variable-activation-function ELM (V-ELM), and the predicted residuals are used to correct the V-ELM predictions. Simulation results on the Pole, Auto-Mpg, Housing, Diabetes, Triazines, and Stock benchmark data sets verify the effectiveness and feasibility of the method. It was also used to develop a soft-sensor model for the gasoline dry point in delayed coking, with satisfactory results.

12.
Industrial processes often exhibit strong nonlinearity, time-varying behaviour, and other complex characteristics, and the conventional extreme learning machine sometimes cannot fully exploit the information in the data, so the resulting soft-sensor models predict poorly. To improve the generalization ability and prediction accuracy of the extreme learning machine, a soft-sensor modelling method based on an extreme learning machine optimized by an improved particle swarm optimization algorithm is proposed. First, the inertia weight is updated adaptively using the shape of the Gaussian (normal) distribution, and the learning factors are varied linearly to improve the convergence speed and search capability of particle swarm optimization; this algorithm is then used to optimize the penalty coefficient and kernel width of the extreme learning machine, yielding a set of optimal hyperparameters; finally, the method is applied to soft-sensor modelling of a debutanizer column. Simulation results show that the optimized extreme learning machine model predicts markedly more accurately, confirming that the proposed method is not only feasible but also offers good prediction accuracy and generalization performance.
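
A sketch of the search loop described above, assuming a kernel ELM with an RBF kernel whose penalty coefficient `C` and kernel width `gamma` are tuned by PSO; the Gaussian decay used for the inertia weight and the linear schedules for the acceleration factors are illustrative choices, not the paper's exact constants.

```python
import numpy as np

def kelm_fit_predict(X_tr, y_tr, X_te, C, gamma):
    """Kernel ELM with an RBF kernel:  f(x) = k(x)^T (I/C + K)^{-1} y."""
    sq = lambda A, B: np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None] - 2*A @ B.T
    K = np.exp(-gamma * sq(X_tr, X_tr))
    alpha = np.linalg.solve(np.eye(len(y_tr)) / C + K, y_tr)
    return np.exp(-gamma * sq(X_te, X_tr)) @ alpha

def pso_optimize_kelm(X_tr, y_tr, X_val, y_val, n_particles=20, n_iter=40, seed=0):
    """PSO over (log10 C, log10 gamma) with Gaussian-decaying inertia weight
    and linearly varying learning factors; fitness is validation RMSE."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array([-2.0, -4.0]), np.array([4.0, 1.0])      # search box
    pos = rng.uniform(lo, hi, (n_particles, 2))
    vel = np.zeros_like(pos)
    def fitness(p):
        pred = kelm_fit_predict(X_tr, y_tr, X_val, 10**p[0], 10**p[1])
        return np.sqrt(np.mean((pred - y_val) ** 2))
    best_p = pos.copy()
    best_f = np.array([fitness(p) for p in pos])
    g = best_p[np.argmin(best_f)].copy()
    for t in range(n_iter):
        w = 0.4 + 0.5 * np.exp(-(t / (0.4 * n_iter)) ** 2)      # Gaussian inertia
        c1 = 2.5 - 1.5 * t / n_iter                             # linearly varying
        c2 = 0.5 + 1.5 * t / n_iter                             # learning factors
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (best_p - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        f = np.array([fitness(p) for p in pos])
        improved = f < best_f
        best_p[improved], best_f[improved] = pos[improved], f[improved]
        g = best_p[np.argmin(best_f)].copy()
    return 10 ** g[0], 10 ** g[1]                               # best C, gamma
```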

13.
In this paper, a novel self-adaptive extreme learning machine (ELM) based on affinity propagation (AP) is proposed to optimize the radial basis function neural network (RBFNN). As is well known, the parameters of the original ELM developed by G.-B. Huang are determined randomly, so a set of parameters that is optimal for an RBFNN trained by the ELM algorithm cannot be obtained objectively for different real-world data sets. The AP algorithm automatically produces a set of clustering centres for each data set; from its results we obtain the number of clusters and the radius of each cluster. The cluster count and radius values can then be used to initialize the number and widths of the hidden-layer neurons in the RBFNN, which are also the parameters of the coefficient matrix H in ELM. This avoids relying on subjective prior knowledge and the randomness of training the RBFNN. Experimental results show that the proposed method yields an RBFNN with a more powerful generalization capability than the conventional ELM.
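
A minimal sketch of the pipeline described above, assuming scikit-learn's AffinityPropagation supplies the centres, each centre's width is taken as the mean distance of its own cluster members (one plausible reading of "radius"), and the output weights are then solved ELM-style by regularised least squares; names such as `ap_rbf_elm_fit` and `reg` are illustrative.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def _rbf_layer(X, centers, widths):
    d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * widths ** 2))

def ap_rbf_elm_fit(X, Y, reg=1e-3):
    """AP picks the RBF centres (so the hidden-layer size is not set by hand),
    widths come from each cluster's spread, and the output weights are the
    regularised least-squares ELM solution. Y: one-hot or real-valued targets."""
    ap = AffinityPropagation(random_state=0).fit(X)
    centers = ap.cluster_centers_                       # hidden-neuron centres
    widths = np.array([
        np.linalg.norm(X[ap.labels_ == k] - c, axis=1).mean() + 1e-8
        for k, c in enumerate(centers)
    ])                                                  # per-cluster radius
    H = _rbf_layer(X, centers, widths)
    beta = np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ Y)
    return centers, widths, beta

def ap_rbf_elm_predict(model, X):
    centers, widths, beta = model
    return _rbf_layer(X, centers, widths) @ beta
```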

14.
A wavelet extreme learning machine
The extreme learning machine (ELM) has been widely used in various fields to overcome the low training speed of conventional neural networks. The kernel extreme learning machine (KELM) introduces the kernel method into the ELM model and is applicable in Stat ML. However, if the number of samples in Stat ML is too small, the unbalanced samples may not reflect the statistical characteristics of the input data, so the learning ability of Stat ML will be impaired. At the same time, the mixed kernel functions used in KELM are conventional functions, so the selection of kernel function can still be optimized. Based on these problems, we introduce a weighting method into KELM to deal with the unbalanced samples. Wavelet kernel functions have been widely used in support vector machines and achieve good classification performance. Therefore, to combine wavelet analysis with KELM, we introduce wavelet kernel functions into the KELM model, using a mixed kernel of a wavelet kernel and a sigmoid kernel, together with the weighting method to balance the sample distribution, and propose the weighted wavelet-mix-kernel extreme learning machine. The experimental results show that this method effectively improves classification ability with better generalization, and the wavelet kernel functions perform very well compared with conventional kernel functions in the KELM model.
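
A sketch of the two ingredients named above, under clearly stated assumptions: a Morlet-style wavelet kernel mixed with a sigmoid (tanh) kernel, and one common formulation of the weighted kernel ELM solution, alpha = (I/C + W K)^{-1} W y, with W a diagonal matrix of per-sample weights used to counter class imbalance. The mixing weight, kernel parameters, and function names are illustrative, not the paper's values.

```python
import numpy as np

def wavelet_sigmoid_kernel(A, B, a=1.0, scale=1.0, offset=0.0, mix=0.7):
    """Mix kernel = mix * wavelet kernel + (1 - mix) * sigmoid kernel.
    Wavelet part: prod_d cos(1.75*(x_d - z_d)/a) * exp(-(x_d - z_d)^2 / (2 a^2))."""
    diff = A[:, None, :] - B[None, :, :]
    wav = np.prod(np.cos(1.75 * diff / a) * np.exp(-diff ** 2 / (2 * a ** 2)), axis=2)
    sig = np.tanh(scale * (A @ B.T) + offset)
    return mix * wav + (1.0 - mix) * sig

def weighted_wavelet_kelm_fit(X, y, sample_weight, C=10.0, **kern):
    """Weighted kernel ELM: alpha = (I/C + W K)^{-1} W y, W = diag(sample_weight)
    (e.g. w_i inversely proportional to the size of sample i's class)."""
    K = wavelet_sigmoid_kernel(X, X, **kern)
    W = np.diag(sample_weight)
    alpha = np.linalg.solve(np.eye(len(y)) / C + W @ K, W @ y)
    return X, alpha, kern

def weighted_wavelet_kelm_predict(model, X_new):
    X, alpha, kern = model
    return wavelet_sigmoid_kernel(X_new, X, **kern) @ alpha
```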

15.
Traditional methods for creating diesel engine models include analytical methods such as multi-zone models and intelligence-based models such as artificial neural network (ANN) models. However, the analytical models require excessive assumptions, while the ANN models have many drawbacks such as a tendency to overfit and difficulty in determining the optimal network structure. In this paper, several emerging advanced machine learning techniques, including the least squares support vector machine (LS-SVM), relevance vector machine (RVM), basic extreme learning machine (ELM) and kernel-based ELM, are newly applied to the modelling of diesel engine performance. Experiments were carried out to collect sample data for model training and verification. Limited by the experimental conditions, only 24 sample data sets were acquired, resulting in data scarcity, so six-fold cross-validation is adopted to address this issue. Some of the sample data also suffer from data exponentiality, where the engine performance output grows exponentially with engine speed and engine torque, which seriously deteriorates prediction accuracy; logarithmic transformation of the dependent variables is therefore used to pre-process the data. Besides, a hybrid of leave-one-out cross-validation and Bayesian inference is, for the first time, proposed for selecting the hyperparameters of the kernel-based ELM. The advanced machine learning techniques are compared with two traditional types of ANN model, the back-propagation neural network (BPNN) and the radial basis function neural network (RBFNN), in terms of time complexity, space complexity, and prediction accuracy. The evaluation results show that kernel-based ELM with the logarithmic transformation and hybrid inference is far better than basic ELM, LS-SVM, RVM, BPNN and RBFNN in terms of prediction accuracy and training time.

16.
The extreme learning machine (ELM) is widely used for classification and regression because of its efficient training scheme, but different input weights can greatly affect its learning performance. To further improve ELM, this work studies its input weights: exploiting the sparsity of local perception in images, the local-receptive-field idea is applied to the autoencoder-based ELM (ELM-AE), yielding a local-receptive-field class-constrained extreme learning machine (RF-C2ELM). Classification experiments on the MNIST data set show that, with the same number of hidden nodes, the proposed method achieves higher classification accuracy.

17.
The extreme learning machine (ELM), a single-hidden-layer feedforward neural network algorithm, was tested on nine environmental regression problems. The prediction accuracy and computational speed of the ensemble ELM were evaluated against multiple linear regression (MLR) and three nonlinear machine learning (ML) techniques: artificial neural network (ANN), support vector regression and random forest (RF). Simple automated algorithms were used to estimate the parameters (e.g. number of hidden neurons) needed for model training. Scaling the range of the random weights in ELM improved its performance. Excluding large datasets (with a large number of cases and predictors), ELM tended to be the fastest of the nonlinear models; for large datasets, RF tended to be the fastest. ANN and ELM had similar skill, but ELM was much faster than ANN except on large datasets. Generally, the tested ML techniques outperformed MLR, but no single method was best on all nine datasets.

18.
This work concerns receiver design for light-emitting diode (LED) multiple-input multiple-output (MIMO) communications, where LED nonlinearity can severely degrade communication performance. We first propose an extreme learning machine (ELM) based receiver to jointly handle the LED nonlinearity and cross-LED interference. Then, by taking advantage of the features of the ELM, we propose a circulant structure for the input weight matrix and a fast Fourier transform (FFT) based implementation, leading to a significant reduction in computational complexity. It is demonstrated that the proposed ELM-based receivers handle the nonlinearity and interference much more effectively than conventional techniques, and that the low-complexity ELM-based receiver with a circulant input matrix delivers almost the same performance as the receiver based on the conventional ELM.
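
The circulant-plus-FFT trick can be illustrated as follows: if the input weight matrix is the circulant matrix generated by a single vector, the product of a data row with that matrix is a circular correlation and can be evaluated with FFTs in O(n log n) per sample instead of O(n^2). The square (input dimension = hidden dimension) case is shown for simplicity, which is an assumption; the sanity check at the bottom verifies the FFT shortcut against the explicit matrix product.

```python
import numpy as np
from scipy.linalg import circulant

def circulant_elm_hidden(X, c):
    """Hidden-layer output of an ELM whose input weight matrix is circulant(c).
    X @ circulant(c) is a circular correlation per row, computed here via FFT."""
    d = len(c)
    Xc = np.fft.irfft(np.fft.rfft(X, axis=1) * np.conj(np.fft.rfft(c)),
                      n=d, axis=1)
    return np.tanh(Xc)                       # same nonlinearity as an ordinary ELM

def circulant_elm_fit(X, Y, reg=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    c = rng.standard_normal(X.shape[1])      # one vector defines all input weights
    H = circulant_elm_hidden(X, c)
    beta = np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ Y)
    return c, beta

if __name__ == "__main__":
    # Sanity check of the FFT shortcut against the explicit circulant product.
    rng = np.random.default_rng(1)
    X, c = rng.standard_normal((4, 8)), rng.standard_normal(8)
    fast = np.fft.irfft(np.fft.rfft(X, axis=1) * np.conj(np.fft.rfft(c)),
                        n=8, axis=1)
    assert np.allclose(X @ circulant(c), fast)
```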

19.
To overcome the disadvantages of the traditional algorithms for single-hidden-layer feedforward neural networks (SLFNs), Huang et al. proposed an improved algorithm called the extreme learning machine (ELM). However, ELM is sensitive to the number of neurons in the hidden layer, and selecting it is a difficult problem. In this paper, a self-adaptive mechanism is introduced into ELM, yielding a new variant called the self-adaptive extreme learning machine (SaELM). SaELM is a self-adaptive learning algorithm that always selects the best hidden-layer neuron number to form the neural network, with no parameters to adjust during training. To evaluate SaELM, it is used to solve the Italian wine and iris classification problems. Comparisons between SaELM and traditional back propagation, basic ELM and general regression neural networks show that SaELM has a faster learning speed and better generalization performance on these classification problems.

20.
A study on effectiveness of extreme learning machine
The extreme learning machine (ELM), proposed by Huang et al., has been shown to be a promising learning algorithm for single-hidden-layer feedforward neural networks (SLFNs). Nevertheless, because of the random choice of input weights and biases, the ELM algorithm sometimes leaves the hidden-layer output matrix H of the SLFN without full column rank, which lowers the effectiveness of ELM. This paper discusses the effectiveness of ELM and proposes an improved algorithm, called EELM, that makes a proper selection of the input weights and biases before calculating the output weights, ensuring the full column rank of H in theory. This improves to some extent the learning rate (testing accuracy, prediction accuracy, learning time) and the robustness of the networks. Experimental results on benchmark function approximation and real-world classification and regression problems show the good performance of EELM.
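
For illustration only: EELM selects the input weights constructively, which the sketch below does not reproduce; it merely shows the rank condition the abstract refers to being enforced by the simplest possible means, redrawing random weights until the hidden-layer matrix H has full column rank, and then taking the Moore-Penrose solution. All names and constants are assumptions.

```python
import numpy as np

def eelm_style_fit(X, Y, n_hidden=30, max_tries=50, seed=0):
    """Redraw input weights/biases until H has full column rank, then solve
    the output weights with the pseudoinverse (rank condition as in EELM,
    selection mechanism simplified to rejection sampling)."""
    rng = np.random.default_rng(seed)
    for _ in range(max_tries):
        W = rng.uniform(-1, 1, (X.shape[1], n_hidden))
        b = rng.uniform(-1, 1, n_hidden)
        H = np.tanh(X @ W + b)
        if np.linalg.matrix_rank(H) == n_hidden:     # full column rank achieved
            beta = np.linalg.pinv(H) @ Y             # Moore-Penrose solution
            return W, b, beta
    raise RuntimeError("no full-column-rank hidden layer found; "
                       "n_hidden must not exceed the number of samples")
```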
