Similar Documents
Found 20 similar documents (search time: 218 ms)
1.
The extreme learning machine (ELM) is a novel learning algorithm for single-hidden-layer feedforward neural networks: training only requires choosing a suitable number of hidden nodes, the input weights and hidden-layer biases are assigned at random, and the network is solved in a single pass with no iteration. Exploiting the strength of genetic algorithms in searching for optimal model parameters, the best ELM parameter values are found and a passenger-throughput prediction model for Chengdu Shuangliu International Airport is built. Comparisons against support vector machines and BP neural networks are used to analyze the feasibility and advantages of the genetic-algorithm ELM for passenger-throughput prediction. Simulation results show that the genetic-algorithm ELM is not only feasible but also clearly superior to the original ELM in both prediction accuracy and training speed.
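The closed-form training that makes the ELM fast, random input weights and biases followed by a one-shot least-squares solve for the output weights, can be sketched as follows. This is a minimal illustration on toy data, not the airport-throughput model from the entry above; all names and parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, Y, n_hidden=30):
    """Train a single-hidden-layer ELM: input weights and biases are
    drawn at random and never updated; only the output weights are
    solved for, in closed form, via the Moore-Penrose pseudoinverse."""
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # sigmoid hidden outputs
    beta = np.linalg.pinv(H) @ Y                     # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# toy regression: learn y = sin(x) on [0, pi]
X = np.linspace(0, np.pi, 200).reshape(-1, 1)
Y = np.sin(X)
W, b, beta = elm_train(X, Y)
mse = float(np.mean((elm_predict(X, W, b, beta) - Y) ** 2))
print(mse)  # training MSE; no iterative weight updates were needed
```

The entire "training" step is one pseudoinverse, which is why the entry above can compare favorably on training speed against BP networks.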

2.
A four-layer neural network based on random feature mapping (FRMFNN) and its incremental learning algorithm are proposed. FRMFNN first maps the original input features into random mapped features, stored in the first hidden layer, via a specific random mapping algorithm; these are then transformed nonlinearly by an activation function to form the second hidden layer; finally, the second hidden layer is connected to the output layer through the output weights. Because the weights of the first and second hidden layers are generated at random from an arbitrary continuous sampling distribution and need no training or updating, and the output weights can be solved quickly by ridge regression, the time-consuming training of conventional back-propagation networks is avoided. When FRMFNN has not reached the desired accuracy, a fast incremental algorithm can keep improving the network's performance without retraining the whole network. The structure and principles of FRMFNN and its incremental algorithm are described in detail, and the universal approximation property of FRMFNN is proved. Compared with the incremental learning algorithms of broad learning systems (BLS) and extreme learning machines (ELM), experimental results on several mainstream classification and regression data sets demonstrate the effectiveness of FRMFNN and its incremental learning algorithm.
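The ridge-regression solve for the output weights that FRMFNN (like BLS and ELM variants) relies on can be sketched as follows; the two stacked random layers and all sizes here are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def ridge_output_weights(H, Y, lam=1e-3):
    """Solve the output weights in closed form with ridge regression:
    beta = (H^T H + lam * I)^{-1} H^T Y.
    This replaces iterative back-propagation entirely."""
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ Y)

# two stacked random feature layers, FRMFNN-style (sizes are assumptions)
X = rng.standard_normal((100, 5))
Y = X @ rng.standard_normal((5, 1))             # toy linear target
Z = np.tanh(X @ rng.standard_normal((5, 20)))   # first random mapped features
H = np.tanh(Z @ rng.standard_normal((20, 40)))  # second (nonlinear) hidden layer
beta = ridge_output_weights(H, Y)
mse = float(np.mean((H @ beta - Y) ** 2))
print(mse)
```

Only `beta` is ever fitted; both random layers are frozen at initialization, which is what makes the incremental extension cheap.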

3.
The conventional extreme learning machine is a supervised learning model that assigns the input weights and biases of the hidden neurons at random and completes learning by computing the hidden neurons' output weights. To address its insufficient prediction accuracy in data analysis and forecasting, an extreme learning machine improved by simulated annealing is proposed. First, a conventional ELM is trained on the training set to obtain the hidden-layer output weights, and an evaluation criterion for the predictions is chosen. Then, treating the conventional ELM's hidden-layer input weights and biases as the initial solution and the prediction evaluation criterion as the objective function, the cooling process of simulated annealing is used to find the optimal solution, i.e. the hidden-neuron input weights and biases that minimize the prediction error during learning. Finally, the hidden-layer output weights are computed as in the conventional ELM. Experiments on the iris classification data and the Boston housing-price data show that the simulated-annealing ELM outperforms the conventional ELM in both classification and regression.
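The idea of treating the ELM's random input weights and biases as a solution to be annealed can be sketched as follows. The cooling schedule, step size, and toy data are assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(5)

def elm_mse(X, y, W, b):
    """Fit only the output weights in closed form and return training MSE."""
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ y
    return float(np.mean((H @ beta - y) ** 2))

def anneal_elm(X, y, n_hidden=15, n_steps=200, T0=1.0):
    """Simulated-annealing sketch: the ELM's random input weights and
    biases are the solution being cooled; training MSE is the objective."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    cur = best = elm_mse(X, y, W, b)
    for k in range(n_steps):
        T = T0 * (0.95 ** k)                         # geometric cooling schedule
        W2 = W + 0.1 * rng.standard_normal(W.shape)  # perturb current solution
        b2 = b + 0.1 * rng.standard_normal(b.shape)
        cand = elm_mse(X, y, W2, b2)
        # accept better solutions always, worse ones with Boltzmann probability
        if cand < cur or rng.random() < np.exp(-(cand - cur) / max(T, 1e-12)):
            W, b, cur = W2, b2, cand
            best = min(best, cand)
    return best

X = rng.standard_normal((120, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
best = anneal_elm(X, y)
print(best)
```

Each candidate still gets its output weights from the usual closed-form solve; only the normally-random hidden parameters are searched.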

4.
A Comparative Study of Extreme Learning Machines and Support Vector Machines for Reservoir Permeability Prediction   (Cited by 4; self-citations: 0, citations by others: 4)
The extreme learning machine (ELM) is a simple, easy-to-use, and effective learning algorithm for single-hidden-layer feedforward neural networks (SLFNs). Traditional neural-network training algorithms (such as BP) require many manually set training parameters and easily fall into local optima. The ELM only requires setting the number of hidden nodes; it needs no adjustment of the network's input weights or hidden biases during execution and produces a unique optimal solution, so it learns fast and generalizes well. This paper applies the ELM to reservoir permeability prediction and, through comparison with support vector machines, analyzes its feasibility and advantages for this task. Experimental results show that the ELM achieves prediction accuracy close to that of support vector machines, with clear advantages in parameter selection and learning speed.

5.
The extreme learning machine is a randomized algorithm: it randomly generates the input-layer connection weights and hidden biases of a single-hidden-layer network and determines the output weights analytically. For a given network structure, repeatedly training the network with the ELM yields different learned models. This paper proposes an ensemble-model approach to classification. First, several single-hidden-layer feedforward networks are trained with the ELM algorithm; the trained networks are then combined by majority voting; finally, the ensemble model classifies the data. The approach is compared experimentally with the ELM and ensemble ELMs on 10 data sets, and the results show that the proposed method outperforms both.
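The majority-voting ensemble described in this entry can be sketched as follows: several ELMs are trained independently (each with fresh random hidden parameters) and their class votes are tallied. The toy problem and all sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def train_elm_classifier(X, y, n_hidden=40):
    """One ELM classifier: random hidden layer, least-squares output
    weights against one-hot class targets."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    T = np.eye(y.max() + 1)[y]          # one-hot targets
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def predict(X, model):
    W, b, beta = model
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

def majority_vote(X, models):
    """Combine independently trained ELMs by majority voting."""
    votes = np.stack([predict(X, m) for m in models])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

# toy two-class problem: the label is the sign of the first coordinate
X = rng.standard_normal((300, 4))
y = (X[:, 0] > 0).astype(int)
models = [train_elm_classifier(X, y) for _ in range(7)]
acc = float(np.mean(majority_vote(X, models) == y))
print(acc)
```

Because every member uses different random hidden parameters, the members disagree on different samples, which is exactly the diversity that voting exploits.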

6.
Handwritten character recognition, an important branch of image recognition, applies data mining and machine learning to recognize handwritten digits, letters, and text. Current methods focus on refining and improving various deep learning models; among them, the multilayer extreme learning machine has drawn wide attention in academia and industry because it trains faster than deep belief networks and deep Boltzmann machines while achieving higher recognition accuracy. However, the predictions of a multilayer ELM are highly sensitive to its random weights, and the more layers there are, the stronger the effect. Building on an in-depth analysis of how shallow ELMs are trained, this paper proposes a shallow ELM based on factorizing the hidden-layer output matrix and applies it to handwritten character recognition. The factorized ELM needs no feature extraction from the character images; instead, it obtains the ELM's output-layer weights by factorizing the large hidden-layer output matrix. Compared with deep ELMs, the factorized ELM reduces the randomness of ELM-based handwriting-recognition model training. Comparisons on the MNIST-family data sets (MNIST, EMNIST, KMNIST, and K49-MNIST) show that, for the same training time, the factorized ELM achieves higher recognition accuracy than the multilayer ELM; for the same recognition accuracy, its training time is markedly shorter. The experimental results confirm the feasibility of the factorized ELM and its … in handling handwritten character recognition.

7.
To handle the big-data processing problem in distributed-system quality prediction, a multimode quality prediction model for big data based on a distributed parallel hierarchical extreme learning machine (dp-HELM) is proposed. Under the Map-Reduce framework, the efficient ELM algorithm is recast in a distributed parallel modeling form. Since the deep-learning network structure of the hierarchical extreme learning machine (HELM) offers a prediction-accuracy advantage on features, dp-HELM is further developed by combining ELM autoencoders in the deep hidden layers. The process modes, partitioned by distributed parallel K-means, are trained by dp-ELM and dp-HELM in a synchronized parallel manner, and Bayesian model fusion is used to integrate the local models for online prediction. The proposed prediction model is applied to estimating the residual carbon dioxide content in a pre-decarbonization unit, and the experimental results demonstrate its effectiveness and feasibility.

8.
The extreme learning machine (ELM) is a neural-network training algorithm with fast learning capability. It reduces training time by choosing the parameters of the network nodes at random and applying least squares, but it needs to generate a large number of network nodes to support the computation. This paper proposes an ELM optimized by iterative Lasso regression (Lasso-ELM), which has the following advantages: (1) it greatly reduces the number of hidden-layer nodes; (2) it generalizes better. Experiments show that the overall performance of Lasso-ELM is superior to that of ELM, BP, and SVM.
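The core Lasso-ELM idea, an L1 penalty on the output weights so that whole hidden nodes are driven to zero and can be pruned, can be sketched with a plain ISTA (proximal-gradient) solver. The solver choice, regularization strength, and toy data are assumptions, not the paper's iterative scheme.

```python
import numpy as np

rng = np.random.default_rng(3)

def lasso_ista(H, y, lam=2.0, n_iter=500):
    """L1-regularized least squares via ISTA (proximal gradient descent).
    Coefficients pushed to exactly zero correspond to hidden nodes that
    can be pruned, which is the core idea behind Lasso-ELM."""
    L = np.linalg.norm(H, 2) ** 2          # Lipschitz constant of the gradient
    beta = np.zeros(H.shape[1])
    for _ in range(n_iter):
        g = H.T @ (H @ beta - y)           # gradient of the squared loss
        z = beta - g / L
        beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return beta

# deliberately oversized random hidden layer; the L1 penalty prunes it
X = rng.standard_normal((200, 3))
y = np.sin(X[:, 0])
H = np.tanh(X @ rng.standard_normal((3, 100)) + rng.standard_normal(100))
beta = lasso_ista(H, y)
print(np.count_nonzero(beta), "of", H.shape[1], "hidden nodes kept")
```

The soft-thresholding step produces exact zeros, so the surviving support directly identifies which hidden nodes to keep.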

9.
For the fault-assessment problem of a certain aircraft type's control system, a differential-evolution extreme learning machine (DE-ELM) algorithm built on flight-parameter data is proposed. The algorithm combines differential evolution (DE) with the extreme learning machine (ELM) and, by training on flight-parameter data, constructs a black-box model of the aircraft control system. Because the ELM's input weights and hidden-layer thresholds are generated at random, the ELM is highly stochastic and not very stable; DE, with its strong search capability, is therefore used to optimize the ELM's input weights and hidden-layer thresholds, optimizing the ELM's structure and improving its stability and robustness. Simulation results show that the DE-ELM algorithm achieves a coefficient of determination of 97.6%, with a mean squared error about 79% lower than a BP neural network's and 64% lower than a plain ELM's. The method thus effectively improves accuracy while offering better generalization.

10.
To improve the stability of extreme learning machine (ELM) networks, an ELM based on improved particle swarm optimization (IPSO-ELM) is proposed. The improved PSO algorithm searches for the ELM's optimal input weights, hidden-layer biases, and number of hidden nodes. A mutation operator is introduced to increase population diversity and speed up convergence. To process large-scale electric-load data, a parallel algorithm based on the Spark parallel computing framework (PIPSO-ELM) is proposed. Experiments on real electric-load data show that PIPSO-ELM has higher stability and scalability and is well suited to processing large-scale electric-load data.

11.
杨菊, 袁玉龙, 于化龙. 《计算机科学》 (Computer Science), 2016, 43(10): 266-271
To address the low classification accuracy and poor generalization of existing ELM ensemble learning algorithms, a selective ensemble learning algorithm for extreme learning machines based on ant colony optimization is proposed. The algorithm first generates a large number of diverse ELM classifiers by randomly assigning the hidden-layer input weights and biases, then iteratively searches for the optimal classifier combination with a binary ant-colony optimization search, and finally classifies test samples with that combination. Tested on 12 standard data sets, the algorithm obtained the best results on 9 of them and the second-best results on the other 3, significantly improving classification accuracy and generalization performance.

12.
As a novel learning algorithm for single-hidden-layer feedforward neural networks, extreme learning machines (ELMs) have been a promising tool for regression and classification applications. However, it is not trivial for ELMs to find the proper number of hidden neurons due to the nonoptimal input weights and hidden biases. In this paper, a new model selection method of ELM based on multi-objective optimization is proposed to obtain compact networks with good generalization ability. First, a new leave-one-out (LOO) error bound of ELM is derived, and it can be calculated with negligible computational cost once the ELM training is finished. Furthermore, the hidden nodes are added to the network one-by-one, and at each step, a multi-objective optimization algorithm is used to select optimal input weights by minimizing this LOO bound and the norm of output weight simultaneously in order to avoid over-fitting. Experiments on five UCI regression data sets are conducted, demonstrating that the proposed algorithm can generally obtain better generalization performance with more compact network than the conventional gradient-based back-propagation method, original ELM and evolutionary ELM.

13.
In traditional fingerprint positioning, received-signal-strength values fluctuate heavily in complex indoor environments, making the fingerprint information unreliable and the positioning accuracy insufficient. To address this, an ultra-wideband positioning method based on a deep belief network and an extreme learning machine is proposed, using ranging values as the fingerprint information. First, at the bottom of the deep belief network, several stacked restricted Boltzmann machines perform unsupervised learning on the input data to extract deep features; then, at the top, an extreme learning machine performs supervised learning on the input data and position labels. To streamline fingerprint collection and reduce manual surveying cost when building the fingerprint database, an ultra-wideband fingerprint-database augmentation method based on Gaussian process regression is proposed. Experimental results in real scenarios show that the positioning method achieves centimeter-level accuracy in both line-of-sight and non-line-of-sight environments.

14.
This work considers scalable incremental extreme learning machine (I-ELM) algorithms, which could be suitable for big data regression. During the training of I-ELMs, the hidden neurons are presented one by one, and the weights are based solely on simple direct summations, which can be most efficiently mapped on parallel environments. Existing incremental versions of ELMs are the I-ELM, enhanced incremental ELM (EI-ELM), and convex incremental ELM (CI-ELM). We study the enhanced and convex incremental ELM (ECI-ELM) algorithm, which combines the latter two versions. The main findings are that ECI-ELM is fast, accurate, and fully scalable when it operates in a parallel system of distributed-memory workstations. Experimental simulations on several benchmark data sets demonstrate that the ECI-ELM is the most accurate among the existing I-ELM, EI-ELM, and CI-ELM algorithms. We also analyze the convergence as a function of the hidden neurons and demonstrate that ECI-ELM has the lowest error-rate curve and converges much faster than the other algorithms on all of the data sets. The parallel simulations also reveal that the data-parallel training of the ECI-ELM can guarantee simplicity and straightforward mappings and can deliver speedups and scale-ups very close to linear.
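The I-ELM construction this entry builds on, adding hidden neurons one at a time and setting each new output weight by a direct projection onto the current residual, can be sketched as follows (toy data and sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

def i_elm(X, y, max_hidden=80):
    """Incremental ELM sketch: hidden neurons are added one at a time;
    each new neuron's output weight is the closed-form projection of the
    current residual onto that neuron's activations, so the residual
    norm can never increase."""
    residual = y.copy()
    neurons = []
    for _ in range(max_hidden):
        w = rng.standard_normal(X.shape[1])   # random weights for new neuron
        b = rng.standard_normal()             # random bias for new neuron
        h = np.tanh(X @ w + b)                # new neuron's activations
        beta = (h @ residual) / (h @ h)       # optimal weight for the residual
        residual = residual - beta * h        # update the residual
        neurons.append((w, b, beta))
    return neurons, residual

X = np.linspace(-1, 1, 150).reshape(-1, 1)
y = np.sin(3 * X[:, 0])
neurons, residual = i_elm(X, y)
print(float(np.mean(residual ** 2)))  # shrinks as neurons are added
```

Because each step only needs two dot products over the data, the per-neuron work is a pair of simple summations, which is what makes the data-parallel mapping in the entry above straightforward.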

15.
Due to their significant efficiency and simple implementation, extreme learning machine (ELM) algorithms have recently attracted much attention in regression and classification applications. Much effort has been devoted to enhancing the performance of ELM from both the methodology (ELM training strategies) and structure (incremental or pruned ELMs) perspectives. In this paper, a local coupled extreme learning machine (LC-ELM) algorithm is presented. By assigning an address to each hidden node in the input space, LC-ELM introduces a decoupler framework to ELM in order to reduce the complexity of the weight search space. The activation degree of a hidden node is measured by the membership degree of the similarity between its associated address and the given input. Experimental results confirm that the proposed approach works effectively and generally outperforms the original ELM in both regression and classification applications.

16.
Urban haze pollution is becoming increasingly serious and is considered very harmful to humans by the World Health Organization (WHO). Haze forecasts can be used to protect human health. In this paper, a Selective ENsemble based on an Extreme Learning Machine (ELM) and an Improved Discrete Artificial Fish swarm algorithm (IDAFSEN) is proposed, which overcomes the drawback that a single ELM is unstable in its classification. First, the initial pool of base ELMs is generated by bootstrap sampling and then pre-pruned by calculating the pair-wise diversity measure of each base ELM. Second, base ELMs with higher precision and greater diversity are selected from the pre-pruned pool using an Improved Discrete Artificial Fish Swarm Algorithm (IDAFSA). Finally, the selected base ELMs are integrated through majority voting. Experimental results on 16 data sets from the UCI Machine Learning Repository demonstrate that IDAFSEN achieves better classification accuracy than other previously reported methods. After a performance evaluation of the proposed approach, the paper discusses how it can be used for haze forecasting in China to protect human health.

17.
Online prediction of mill load is useful to control system design in the grinding process. It is a challenging problem to estimate the parameters of the load inside the ball mill using measurable signals. This paper aims to develop a computational intelligence approach for predicting the mill load. Extreme learning machines (ELMs) are employed as learner models to implement the map between frequency spectral features and the mill load parameters. The inputs of the ELM model are reduced features, which are extracted and selected from the vibration frequency spectrum of the mill shell using partial least squares (PLS) algorithm. Experiments are carried out in the laboratory with comparisons on the well-known back-propagation learning algorithm, the original ELM and an optimization-based ELM (OELM). Results indicate that the reduced feature-based OELM can perform reasonably well at mill load parameter estimation, and it outperforms other learner models in terms of generalization capability.

18.
This paper presents a performance enhancement scheme for the recently developed extreme learning machine (ELM) for classifying power system disturbances using particle swarm optimization (PSO). Learning time is an important factor when designing any computationally intelligent algorithm for classification. The ELM is a single-hidden-layer neural network with good generalization capabilities and extremely fast learning capacity. In the ELM, the input weights are chosen randomly and the output weights are calculated analytically. However, the ELM may need a higher number of hidden neurons due to the random determination of the input weights and hidden biases. One advantage of the ELM over other methods is that the only parameter the user must properly adjust is the number of hidden nodes, but optimal selection of this parameter can improve its performance. In this paper, a hybrid optimization mechanism is proposed that combines discrete-valued PSO with continuous-valued PSO to optimize the input feature subset selection and the number of hidden nodes to enhance the performance of the ELM. The experimental results show that the proposed algorithm is faster and more accurate in discriminating power system disturbances.

19.
GPU-accelerated and parallelized ELM ensembles for large-scale regression   (Cited by 2; self-citations: 0, citations by others: 2)
The paper presents an approach for performing regression on large data sets in reasonable time, using an ensemble of extreme learning machines (ELMs). The main purpose and contribution of this paper are to explore how the evaluation of this ensemble of ELMs can be accelerated in three distinct ways: (1) training and model structure selection of the individual ELMs are accelerated by performing these steps on the graphics processing unit (GPU), instead of the processor (CPU); (2) the training of ELM is performed in such a way that computed results can be reused in the model structure selection, making training plus model structure selection more efficient; (3) the modularity of the ensemble model is exploited and the process of model training and model structure selection is parallelized across multiple GPU and CPU cores, such that multiple models can be built at the same time. The experiments show that competitive performance is obtained on the regression tasks, and that the GPU-accelerated and parallelized ELM ensemble achieves attractive speedups over using a single CPU. Furthermore, the proposed approach is not limited to a specific type of ELM and can be employed for a large variety of ELMs.

20.
To reduce the impact of feature noise on classification performance, a contractive extreme learning machine (CELM), a robust model based on the extreme learning machine (ELM), is proposed. An autoencoder reconstructs the input data, and the Frobenius norm of the Jacobian matrix of the hidden-layer outputs with respect to the inputs is introduced into the objective function, extracting more robust abstract feature representations; the extracted features are then used to train a regular ELM layer, improving the method's robustness. Simulations on MNIST, UCI data sets, the TE process data set, and MNIST with added mixed Gaussian noise of varying strengths show that the proposed method achieves higher classification accuracy and better robustness than ELM and HELM.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号