Similar Documents
20 similar documents found
1.
The Extreme Learning Machine (ELM) is an efficient single-hidden-layer feedforward neural network widely applied in many fields thanks to its fast training and good generalization. However, ELM generates its input weights and hidden-layer bias matrix at random, and this randomness harms the generalization performance and stability of the trained model and lowers its classification accuracy. To address this, the parallel search ability of the ant lion population in the Ant Lion Optimizer is borrowed to optimize ELM's input weights and hidden-layer bias matrix, yielding a model with higher classification accuracy. Classification experiments on data from the UCI repository show that, on five UCI data sets, the ant-lion-optimized ELM (ALO-ELM) achieves higher classification accuracy than PSO-ELM and SaDE-ELM.

2.
翟俊海  臧立光  张素芳 《计算机科学》2016,43(12):125-129, 145
The extreme learning machine is an algorithm for training single-hidden-layer feedforward neural networks: it initializes the input-layer weights and hidden-node biases at random and determines the output-layer weights analytically. It learns fast and generalizes well. Many studies initialize the input weights and hidden biases with random numbers drawn uniformly from [-1,1], yet the soundness of this random initialization has not been examined. This work studies the question experimentally, comparing the effect on ELM performance of random weights drawn from uniform, Gaussian, and exponential distributions. The experiments show that the distribution of the random weights does affect ELM performance, and that for different problems or data sets, weights drawn uniformly from [-1,1] are not necessarily the best choice. These findings offer useful guidance to researchers working on extreme learning machines.
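To make the training procedure described in the two abstracts above concrete, here is a minimal ELM sketch in Python/NumPy (our own illustration, not code from either paper): the input weights and hidden biases are drawn at random from a selectable distribution, as in the study above, and the output weights are computed analytically through the pseudo-inverse of the hidden-layer output matrix.

```python
import numpy as np

def elm_train(X, Y, n_hidden, rng, dist="uniform"):
    """Train a basic ELM: random input weights, analytic output weights.

    dist selects the random-weight distribution compared in the study
    above: uniform on [-1, 1], Gaussian, or exponential.
    """
    n_features = X.shape[1]
    if dist == "uniform":
        W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
        b = rng.uniform(-1.0, 1.0, size=n_hidden)
    elif dist == "gaussian":
        W = rng.normal(0.0, 1.0, size=(n_features, n_hidden))
        b = rng.normal(0.0, 1.0, size=n_hidden)
    else:  # exponential
        W = rng.exponential(1.0, size=(n_features, n_hidden))
        b = rng.exponential(1.0, size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ Y             # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy usage with one-hot targets for a 3-class problem.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
Y = np.eye(3)[rng.integers(0, 3, size=200)]
W, b, beta = elm_train(X, Y, n_hidden=50, rng=rng, dist="gaussian")
pred = elm_predict(X, W, b, beta).argmax(axis=1)
```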

3.
This paper proposes a network model combining Alexnet with the extreme learning machine. Alexnet is a good feature extractor, but most of its parameters sit in the last three fully connected layers used for classification, and tuning and training those parameters takes considerable time, whereas the extreme learning machine has few training parameters and learns quickly. We therefore use Alexnet for feature extraction and an extreme learning machine for image classification, combining the advantages of both. The method classifies the CIFAR10 data set effectively while saving training time.

4.
Ensemble classification, which combines several weak classifiers according to some rule, can effectively improve classification performance. In the combination, the weak classifiers usually differ in how much they should contribute to the final decision. The extreme learning machine is a recently proposed algorithm for training single-hidden-layer feedforward neural networks. Taking ELM as the base classifier, this paper proposes a weighted ELM ensemble method based on differential evolution, in which the differential evolution algorithm optimizes the weight of each base classifier in the ensemble. Experimental results show that, compared with simple-voting and Adaboost ensembles, the method achieves higher classification accuracy and better generalization.
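As a sketch of the weighting step only (the paper's exact encoding and fitness function are not given in the abstract), the snippet below uses SciPy's `differential_evolution` to choose per-classifier weights that minimize validation error; `probs`, `y_val`, and the function names are our own placeholders.

```python
import numpy as np
from scipy.optimize import differential_evolution

def weighted_vote_error(w, probs, y_true):
    """Validation error of a weighted ensemble.

    probs: array (n_classifiers, n_samples, n_classes) of base-ELM
    class scores; w: one weight per base classifier.
    """
    w = np.abs(w) / (np.abs(w).sum() + 1e-12)   # normalize weights
    combined = np.tensordot(w, probs, axes=1)   # weighted sum of class scores
    return np.mean(combined.argmax(axis=1) != y_true)

# probs and y_val would come from base ELMs trained as in the sketch
# above; random data stands in for them here.
rng = np.random.default_rng(0)
probs = rng.random((5, 100, 3))                 # 5 base classifiers
y_val = rng.integers(0, 3, size=100)

result = differential_evolution(
    weighted_vote_error,
    bounds=[(0.0, 1.0)] * probs.shape[0],       # one weight per classifier
    args=(probs, y_val),
    seed=0,
)
best_weights = np.abs(result.x) / np.abs(result.x).sum()
```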

5.
The Extreme Learning Machine (ELM) is a single-hidden-layer feedforward neural network algorithm that avoids the drawbacks of traditional back-propagation training, namely its many iterations and large computational cost and search space: one only needs to choose a suitable number of hidden nodes and assign random values to the input weights and hidden biases, and training completes in a single pass without iteration. Research shows that the stock market is a highly complex nonlinear system whose analysis draws on artificial intelligence, statistics, and economics. This paper applies ELM to stock price prediction and, through comparison with the Support Vector Machine (SVM) and the back-propagation neural network (BP network), analyzes the feasibility and advantages of ELM for this task. The results show that ELM predicts with high accuracy and holds clear advantages in parameter selection and training speed.

6.
To detect and diagnose faulty components in analog circuits more efficiently, a method that optimizes the extreme learning machine with an adaptive wolf pack algorithm is proposed. An adaptive genetic algorithm selects the feature parameters to produce an optimal feature subset, from which samples are built and fed into an ELM network for fault classification. Since the connection weights between the input and hidden layers and the hidden-layer biases affect both learning speed and classification accuracy, the proposed method optimizes them and selects their optimal values, improving the stability of ELM training and the success rate of fault diagnosis. Two typical analog-circuit diagnosis examples illustrate the procedure in detail, with fault-diagnosis rates above 99%. Simulation results show that the method diagnoses analog-circuit faults with good accuracy and stability.

7.
To improve the efficiency and accuracy of classifying stroke-related Transcranial Doppler (TCD) data, the Bat Algorithm (BA) is applied to optimize an Extreme Learning Machine (ELM) model for stroke classification and prediction. When an ELM model is trained, the randomness of the elements of the hidden-layer input-weight matrix and the hidden-layer bias matrix affects model performance, so BA is used to optimize these two matrices, and the resulting BA-ELM model classifies the experimental TCD data set. Experimental results show that BA-ELM improves classification accuracy over plain ELM by 22.77% and can predict stroke effectively.

8.
《传感器与微系统》2019,(1):122-125
To address the slow computation and limited accuracy of classification algorithms caused by the large volume of network-intrusion data, redundant attributes, and linear correlation among attributes, an extreme-learning-machine classification algorithm for network intrusion with improved rough-set attribute reduction is proposed. The attribute core of the training set is obtained by combining the rough-set positive region with the discernibility matrix, and the data are filtered down to the attribute core to yield a feature set free of redundant attributes. An extreme learning machine (ELM) then serves as the classification model; comparisons with the support vector machine (SVM) and neural networks demonstrate the effectiveness of the method, offering a new approach to network intrusion detection.

9.
When an unsupervised domain-adaptive classifier is built on the extreme learning machine, the hidden-layer parameters are usually chosen at random, and randomly chosen parameters carry no capacity for domain adaptation. To strengthen the knowledge-transfer ability of cross-domain ELMs, a new ELM-based unsupervised domain-adaptation classifier learning method is proposed: an ELM autoencoder performs reconstruction learning on the source- and target-domain data, yielding hidden-layer parameters with domain-invariant properties. Further, by combining joint probability-distribution matching with manifold regularization, the output-layer weights are adjusted adaptively. The proposed algorithm thus endows both layers of ELM parameters with domain-adaptation ability, and experiments on character and object-recognition data sets show high cross-domain classification accuracy.

10.
The extreme learning machine (ELM) needs no tuning of hidden-node parameters during training, and this efficient training scheme has made it widely used for classification and regression; nevertheless, ELM faces serious problems such as structure selection and over-fitting. To resolve them, this work studies how the number of hidden nodes added per increment affects convergence speed and training time, and proposes a variable-length incremental ELM algorithm (VI-ELM) that uses the rate of change of the network output error to control the network's growth speed. Regression and classification experiments on several data sets show that the proposed method attains good generalization performance with a more efficient training procedure.
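The abstract does not give VI-ELM's exact growth rule, so the following sketch is only our guess at the idea: hidden nodes are added in batches whose size adapts to how fast the output error is falling, and growth stops when the error stops improving.

```python
import numpy as np

def grow_elm(X, Y, rng, max_hidden=200, tol=1e-4):
    """Grow an ELM by adding hidden nodes in variable-sized batches.

    The batch size is driven by the rate of change of the output
    error, in the spirit of VI-ELM; this heuristic is our guess,
    not the paper's exact rule.
    """
    W = np.empty((X.shape[1], 0))
    b = np.empty(0)
    prev_err = np.mean(Y ** 2)          # error of the all-zero predictor
    step = 1
    while W.shape[1] < max_hidden:
        W = np.hstack([W, rng.uniform(-1, 1, (X.shape[1], step))])
        b = np.concatenate([b, rng.uniform(-1, 1, step)])
        H = np.tanh(X @ W + b)
        beta = np.linalg.pinv(H) @ Y
        err = np.mean((H @ beta - Y) ** 2)
        rate = (prev_err - err) / max(prev_err, 1e-12)
        if rate < tol:                  # error no longer improving: stop growing
            break
        step = step * 2 if rate > 0.1 else 1  # add more nodes while error drops fast
        prev_err = err
    return W, b, beta
```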

11.
In this paper, an integrated model based on efficient extreme learning machine (EELM) and differential evolution (DE) is proposed to predict chaotic time series. In the proposed model, a novel learning algorithm called EELM is presented and used to model the chaotic time series. The EELM inherits the basic idea of extreme learning machine (ELM) in training single hidden layer feedforward networks, but replaces the commonly used singular value decomposition with a reduced complete orthogonal decomposition to calculate the output weights, which can achieve a much faster learning speed than ELM. Moreover, in order to obtain a more accurate and more stable prediction performance for chaotic time series prediction, this model abandons the traditional two-stage modeling approach and adopts an integrated parameter selection strategy which employs a modified DE algorithm to optimize the phase space reconstruction parameters of chaotic time series and the model parameter of EELM simultaneously, based on a hybrid validation criterion. Experimental results show that the proposed integrated prediction model can not only provide stable prediction performance with high efficiency but also achieve much more accurate prediction results than its counterparts for chaotic time series prediction.
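The speed gain described above comes from swapping the SVD-based least-squares solve for a complete orthogonal decomposition. SciPy exposes both through its LAPACK drivers, so the contrast can be sketched as follows (an illustration of the idea, not the paper's implementation; `H` and `Y` are placeholder data):

```python
import numpy as np
from scipy.linalg import lstsq

# H: hidden-layer output matrix, Y: targets (as in the ELM sketch above).
rng = np.random.default_rng(0)
H = np.tanh(rng.normal(size=(500, 60)))
Y = rng.normal(size=(500, 1))

# Standard ELM route: SVD-based least squares (LAPACK *gelsd).
beta_svd, *_ = lstsq(H, Y, lapack_driver="gelsd")

# EELM-style route: complete orthogonal decomposition (LAPACK *gelsy),
# typically faster than the SVD driver for problems of this shape.
beta_cod, *_ = lstsq(H, Y, lapack_driver="gelsy")
```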

12.
A study on effectiveness of extreme learning machine
Extreme learning machine (ELM), proposed by Huang et al., has been shown to be a promising learning algorithm for single-hidden layer feedforward neural networks (SLFNs). Nevertheless, because of the random choice of input weights and biases, the ELM algorithm sometimes leaves the hidden layer output matrix H of the SLFN without full column rank, which lowers the effectiveness of ELM. This paper discusses the effectiveness of ELM and proposes an improved algorithm called EELM that makes a proper selection of the input weights and biases before calculating the output weights, ensuring the full column rank of H in theory. This improves to some extent the learning rate (testing accuracy, prediction accuracy, learning time) and the robustness of the networks. Experimental results on both benchmark function approximation and real-world problems, including classification and regression applications, show the good performance of EELM.
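EELM constructs input weights that guarantee H has full column rank; as a naive stand-in for that idea (rejection sampling rather than the paper's constructive selection), one can simply redraw random weights until the rank condition holds:

```python
import numpy as np

def draw_full_rank_hidden_layer(X, n_hidden, rng, max_tries=10):
    """Redraw random input weights until H has full column rank.

    A naive stand-in for EELM's proper selection of input weights,
    which the paper does constructively rather than by rejection.
    Assumes n_samples >= n_hidden, otherwise full column rank is
    impossible.
    """
    for _ in range(max_tries):
        W = rng.uniform(-1, 1, (X.shape[1], n_hidden))
        b = rng.uniform(-1, 1, n_hidden)
        H = np.tanh(X @ W + b)
        if np.linalg.matrix_rank(H) == n_hidden:
            return W, b, H
    raise RuntimeError("could not obtain a full-column-rank H")
```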

13.
This paper presents a performance enhancement scheme for the recently developed extreme learning machine (ELM) for multi-category sparse data classification problems. ELM is a single hidden layer neural network with good generalization capabilities and extremely fast learning capacity. In ELM, the input weights are randomly chosen and the output weights are analytically calculated. The generalization performance of the ELM algorithm for sparse data classification problems depends critically on three free parameters: the number of hidden neurons, the input weights, and the bias values, all of which need to be optimally chosen. Selecting these parameters for the best performance of ELM involves a complex optimization problem. In this paper, we present a new real-coded genetic algorithm approach called ‘RCGA-ELM’ to select the optimal number of hidden neurons, input weights, and bias values for better performance. Two new genetic operators, a ‘network based operator’ and a ‘weight based operator’, are proposed to find a compact network with higher generalization performance. We also present an alternative and less computationally intensive approach called ‘sparse-ELM’, which searches for the best parameters of ELM using K-fold validation. A multi-class human cancer classification problem using micro-array gene expression data (which is sparse) is used to evaluate the performance of the two schemes. Results indicate that the proposed RCGA-ELM and sparse-ELM significantly improve ELM performance for sparse multi-category classification problems.

14.
Identifying discriminative features can effectively improve the performance of aerial scene classification. Deep convolutional neural networks (DCNN) have been widely used in aerial scene classification for their ability to learn discriminative features. A DCNN feature can be made more discriminative by optimizing the training loss function and by using transfer learning. To enhance the discriminative power of a DCNN feature, the improved loss functions of pretrained models are combined with a softmax loss function and a centre loss function. To further improve performance, in this article we propose hybrid DCNN features for aerial scene classification. First, we use DCNN models with joint loss functions and transfer learning from pretrained deep DCNN models. Second, the dense DCNN features are extracted, and the discriminative hybrid features are created by linear connection. Finally, an ensemble extreme learning machine (EELM) classifier is adopted for classification owing to its general superiority and low computational cost. Experimental results on three public benchmark data sets demonstrate that the hybrid features obtained with the proposed approach and classified by the EELM classifier yield remarkable performance.

15.
An approach is proposed that uses the extreme learning machine (ELM) to build pixel-classification models online for segmenting white blood cell images. In the training stage, exploiting the deep staining of the white-cell nucleus, a Mean-shift procedure first locates the nucleus region in RGB space; after morphological dilation of that region, the region with the largest ratio of entropy to area is taken as the candidate region for positive samples, and pixels outside it as the candidate region for negative samples. Sampling positive and negative pixels to form a training set, a two-class ELM model can be trained online, and repeated sampling yields multiple ELM models. In the testing stage, the ensemble of these ELM models classifies all pixels, achieving automatic white-cell segmentation. Compared with traditional segmentation algorithms, the method needs essentially no parameter tuning and adapts to the color variation caused by illumination and staining conditions, segmenting well. Experimental results confirm the effectiveness of the algorithm.

16.
Extreme Learning Machine (ELM) is a supervised learning technique for a class of feedforward neural networks with random weights that has recently been used with success for the classification of hyperspectral images. In this work, we show that morphological techniques can be integrated into this kind of classifier using several composite feature mappings proposed for ELM. In particular, we present a spectral–spatial ELM-based classifier for hyperspectral remote-sensing images that integrates the information provided by extended morphological profiles. The proposed spectral–spatial classifier allows different weights for spatial and spectral features, outperforming other ELM-based classifiers in accuracy for land-cover applications. The classification accuracy is also better than that obtained by equivalent spectral–spatial Support-Vector-Machine-based classifiers.

17.
The extreme learning machine (ELM) is widely used for classification and regression because of its efficient training, but different input weights can markedly affect its learning performance. To further improve ELM's learning performance, this work focuses on the input weights: exploiting the sparsity of local receptive fields in images, the local-perception idea is applied to the autoencoder-based ELM (ELM-AE), and a class-constrained ELM with local receptive fields (RF-C2ELM) is proposed. Classification experiments on the MNIST data set show that, with the same number of hidden nodes, the proposed method achieves higher classification accuracy.

18.
As a novel learning algorithm for single-hidden-layer feedforward neural networks, extreme learning machines (ELMs) have been a promising tool for regression and classification applications. However, it is not trivial for ELMs to find the proper number of hidden neurons due to the nonoptimal input weights and hidden biases. In this paper, a new model selection method of ELM based on multi-objective optimization is proposed to obtain compact networks with good generalization ability. First, a new leave-one-out (LOO) error bound of ELM is derived, and it can be calculated with negligible computational cost once the ELM training is finished. Furthermore, the hidden nodes are added to the network one-by-one, and at each step, a multi-objective optimization algorithm is used to select optimal input weights by minimizing this LOO bound and the norm of output weight simultaneously in order to avoid over-fitting. Experiments on five UCI regression data sets are conducted, demonstrating that the proposed algorithm can generally obtain better generalization performance with more compact network than the conventional gradient-based back-propagation method, original ELM and evolutionary ELM.
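One reason an LOO bound of ELM can be computed at negligible cost once training is done is that, for a linear output layer, leave-one-out residuals have a closed form (the classical PRESS statistic). The paper's own bound may differ; the sketch below shows the PRESS version under a ridge-regularized solve:

```python
import numpy as np

def elm_loo_mse(H, Y, lam=1e-6):
    """Closed-form leave-one-out MSE (PRESS) for a linear output layer.

    For beta = (H'H + lam*I)^{-1} H'Y, the LOO residual of sample i is
    e_i / (1 - hat_ii), where hat_ii is the i-th diagonal element of
    the hat matrix H (H'H + lam*I)^{-1} H'. Y is a 2-D target array
    of shape (n_samples, n_outputs).
    """
    A = H.T @ H + lam * np.eye(H.shape[1])
    M = np.linalg.solve(A, H.T)            # A^{-1} H', shape (n_hidden, n)
    hat = np.einsum("ij,ji->i", H, M)      # diagonal of H A^{-1} H'
    resid = Y - H @ (M @ Y)                # ordinary training residuals
    loo = resid / (1.0 - hat)[:, None]     # leave-one-out residuals
    return np.mean(loo ** 2)
```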

19.
In this paper, a regularized correntropy criterion (RCC) for the extreme learning machine (ELM) is proposed to deal with training sets contaminated by noise or outliers. In RCC, a Gaussian kernel function is substituted for the Euclidean norm of the mean square error (MSE) criterion; replacing MSE by RCC enhances the anti-noise ability of ELM. Moreover, the optimal weights connecting the hidden and output layers, together with the optimal bias terms, can be obtained promptly by the half-quadratic (HQ) optimization technique in an iterative manner. Experimental results on four synthetic data sets and fourteen benchmark data sets demonstrate that the proposed method is superior to both the traditional ELM and the regularized ELM trained by the MSE criterion.
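The half-quadratic optimization the abstract mentions amounts to iteratively reweighted least squares: each iteration weights samples by a Gaussian kernel of their current residual, so outliers contribute little to the solve. The sketch below captures that idea (our simplification, not the paper's exact RCC formulation):

```python
import numpy as np

def correntropy_elm_output(H, Y, sigma=1.0, lam=1e-3, n_iter=10):
    """Half-quadratic-style solve of ELM output weights under a
    correntropy-like loss: a sketch of the idea, not the paper's RCC.

    Each iteration reweights samples by a Gaussian kernel of their
    current residual (outliers get small weights), then solves a
    weighted regularized least-squares problem. Y is 2-D,
    shape (n_samples, n_outputs).
    """
    n, h = H.shape
    beta = np.linalg.solve(H.T @ H + lam * np.eye(h), H.T @ Y)
    for _ in range(n_iter):
        resid = np.linalg.norm(Y - H @ beta, axis=1)
        w = np.exp(-resid ** 2 / (2 * sigma ** 2))   # Gaussian kernel weights
        Hw = H * w[:, None]
        beta = np.linalg.solve(H.T @ Hw + lam * np.eye(h), Hw.T @ Y)
    return beta
```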

20.
Recently there has been renewed interest in single-hidden-layer neural networks (SHLNNs), owing to their powerful modeling ability and the existence of efficient learning algorithms. A prominent example of such algorithms is the extreme learning machine (ELM), which assigns random values to the lower-layer weights. While ELM can be trained efficiently, it requires many more hidden units than are typically needed by conventional neural networks to achieve matched classification accuracy. A large number of hidden units translates into significantly increased test time, which in practice is more valuable than training time. In this paper, we propose a series of new efficient learning algorithms for SHLNNs. Our algorithms exploit both the structure of SHLNNs and the gradient information over all training epochs, and update the weights in the direction along which the overall square error is reduced the most. Experiments on the MNIST handwritten digit recognition task and the MAGIC gamma telescope dataset show that the proposed algorithms obtain significantly better classification accuracy than ELM when the same number of hidden units is used. To obtain the same classification accuracy, our best algorithm requires only 1/16 of the model size, and thus approximately 1/16 of the test time, of ELM. This advantage is gained at the expense of at most 5 times the training cost of ELM.
