Similar Articles
 20 similar articles found (search time: 166 ms)
1.
The traditional extreme learning machine (ELM) is a supervised learning model that assigns the input weights and biases of its hidden-layer neurons arbitrarily and completes learning by computing the hidden-layer output weights. To address the limited prediction accuracy of the traditional ELM in data-analysis and forecasting tasks, an ELM improved by the simulated annealing (SA) algorithm is proposed. First, a traditional ELM is trained on the training set to obtain the hidden-layer output weights, and an evaluation criterion for the prediction results is chosen. Then, treating the hidden-layer input weights and biases of the traditional ELM as the initial solution and the evaluation criterion as the objective function, the SA cooling process searches for the optimal solution, i.e., the hidden-layer input weights and biases that minimize the prediction error; finally, the hidden-layer output weights are computed with the traditional ELM. Experiments on the Iris classification dataset and the Boston housing-price dataset show that, compared with the traditional ELM, the SA-improved ELM performs better on both classification and regression.
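To make the setup concrete, here is a minimal sketch (in Python with NumPy) of a basic ELM followed by a simulated-annealing search over the hidden-layer input weights and biases, as the abstract describes. The toy sine-regression task, the cooling schedule, and all variable names are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, n_hidden):
    """Basic ELM: random input weights/biases, output weights by pseudo-inverse."""
    W = rng.uniform(-1, 1, (X.shape[1], n_hidden))  # random input weights
    b = rng.uniform(-1, 1, n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                          # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                    # output weights (least squares)
    return W, b, beta

# toy regression task: y = sin(x)
X = np.linspace(-3, 3, 200).reshape(-1, 1)
T = np.sin(X)
W, b, beta = elm_train(X, T, 30)
mse = np.mean((np.tanh(X @ W + b) @ beta - T) ** 2)

# simulated-annealing search over (W, b), recomputing beta in closed form
cur_W, cur_b, cur_err = W, b, mse
best = (W, b, beta, mse)
temp = 1.0
for _ in range(100):
    cand_W = cur_W + rng.normal(0, 0.1, cur_W.shape)
    cand_b = cur_b + rng.normal(0, 0.1, cur_b.shape)
    H = np.tanh(X @ cand_W + cand_b)
    cand_beta = np.linalg.pinv(H) @ T
    err = np.mean((H @ cand_beta - T) ** 2)
    # accept improvements always, worse moves with Metropolis probability
    if err < cur_err or rng.random() < np.exp((cur_err - err) / temp):
        cur_W, cur_b, cur_err = cand_W, cand_b, err
        if err < best[3]:
            best = (cand_W, cand_b, cand_beta, err)
    temp *= 0.95   # cooling schedule
```

The key point is that each SA candidate only perturbs the randomly assigned (W, b); the output weights are always recomputed in closed form, so the annealing never touches the least-squares step.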

2.
Extreme learning machine (ELM) is widely used for complex industrial problems; in particular, the online sequential extreme learning machine (OS-ELM) performs well in industrial online modeling. However, OS-ELM requires a batch of samples for pre-training to obtain initial weights, which may reduce the timeliness of the samples. This paper proposes a novel model for online process regression prediction, called the Recurrent Extreme Learning Machine (Recurrent-ELM). In Recurrent-ELM the hidden-layer nodes are recurrently connected, so the hidden layer receives information from both the current input layer and the previous hidden layer. Moreover, the weights and biases of the proposed model are generated analytically rather than at random. Six regression applications are used to verify the designed Recurrent-ELM against ELM, the fast learning network (FLN), OS-ELM, and an ensemble of online sequential extreme learning machines (EOS-ELM); the experimental results show that Recurrent-ELM has better generalization and stability across these samples. In addition, to further test its performance, Recurrent-ELM is applied to combustion modeling of a 330 MW coal-fired boiler and compared with FLN, SVR, and OS-ELM. The results show that Recurrent-ELM has better accuracy and generalization ability, and the model shows potential value in practical applications.
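For context, a minimal sketch of the standard OS-ELM update that Recurrent-ELM builds on (this is plain OS-ELM, not the paper's recurrent variant; the toy sine data, sizes, and names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n_hidden = 20
W = rng.uniform(-1, 1, (1, n_hidden))      # fixed random input weights
b = rng.uniform(-1, 1, n_hidden)           # fixed random hidden biases
hid = lambda X: np.tanh(X @ W + b)

# initial batch: OS-ELM must be pre-trained on a block of samples
X0 = rng.uniform(-3, 3, (50, 1))
H0 = hid(X0)
P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(n_hidden))  # tiny ridge for stability
beta = P @ H0.T @ np.sin(X0)

def os_elm_update(P, beta, Xc, Tc):
    """Recursive least-squares update for one arriving chunk."""
    H = hid(Xc)
    K = np.linalg.inv(np.eye(len(Xc)) + H @ P @ H.T)
    P = P - P @ H.T @ K @ H @ P
    beta = beta + P @ H.T @ (Tc - H @ beta)
    return P, beta

# chunks arrive sequentially; no old data is revisited
for _ in range(20):
    Xc = rng.uniform(-3, 3, (10, 1))
    P, beta = os_elm_update(P, beta, Xc, np.sin(Xc))

Xt = np.linspace(-3, 3, 100).reshape(-1, 1)
mse = np.mean((hid(Xt) @ beta - np.sin(Xt)) ** 2)
```

The initial batch supplies the starting P and beta; each chunk then updates them by recursive least squares. That mandatory pre-training batch is exactly the limitation the abstract points out.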

3.
To address the random assignment of input weights and hidden-layer thresholds in the traditional extreme learning machine, an output-value back-allocation algorithm is proposed. Building on the traditional ELM, the algorithm obtains the optimal output-value allocation coefficients through optimization and determines the network input parameters by least squares. Applied to common datasets and compared with other improved ELM algorithms, it shows good learning and generalization ability and yields a simpler network structure, demonstrating its effectiveness.

4.
To improve the accuracy of target threat estimation, a target threat estimation model (OKH-ELM) is built in which a krill herd algorithm improved with opposition-based learning (OKH) optimizes an extreme learning machine, and an algorithm based on this model is proposed. The model uses an opposition-based learning strategy to improve the krill herd algorithm, and the improved algorithm optimizes the initial input weights and biases of the ELM so that the optimized ELM predicts the threat-level test set more accurately. Experimental results show that the OKH algorithm optimizes the ELM's weights and thresholds more effectively, giving the resulting threat estimation model higher prediction accuracy and stronger generalization, and enabling accurate and effective target threat estimation.

5.
To address the problem that manually fixed weights in the weighted extreme learning machine may miss better weights, an improved weighted ELM is proposed. The method sets the initial weight of the majority class to 1 and uses the ratio of majority-class to minority-class sample counts as the initial weight of the minority class; it then adds a weight-adjustment factor to the majority or minority class so the weights can be tuned in both the shrinking and enlarging directions, and finally selects the optimal weights based on experimental results. Experiments comparing the original weighted ELM, ELMs with other weightings, and the new method on modified UCI datasets show that the new method outperforms the other weighted ELMs on both F-measure and G-mean.
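A sketch of the weighted-ELM idea on assumed toy data (the minority weight N_maj/N_min follows the abstract's initial setting, while the dataset, sizes, and names are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# imbalanced toy set: 180 majority samples (label -1), 20 minority (label +1)
X = np.vstack([rng.normal(0.0, 1.0, (180, 2)), rng.normal(2.0, 1.0, (20, 2))])
t = np.r_[-np.ones(180), np.ones(20)]

n_hidden = 40
W = rng.uniform(-1, 1, (2, n_hidden))
b = rng.uniform(-1, 1, n_hidden)
H = np.tanh(X @ W + b)

# initial weights as in the abstract: majority 1, minority N_maj / N_min
s = np.where(t > 0, 180.0 / 20.0, 1.0)

# weighted least squares: beta = (H' S H + eps I)^-1 H' S t
S = np.diag(s)
beta = np.linalg.solve(H.T @ S @ H + 1e-6 * np.eye(n_hidden), H.T @ S @ t)

pred = np.sign(H @ beta)
minority_recall = float(np.mean(pred[t > 0] == 1))
```

The per-sample weight matrix S makes errors on minority samples cost more in the least-squares objective, pulling the decision boundary toward the majority class; the paper's adjustment factor then tunes this initial ratio up or down.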

6.
During the training of the incremental extreme learning machine (incremental extreme learning machine, I-ELM), the random choice of input weights and hidden-neuron thresholds leaves some hidden neurons with very small output weights; these neurons contribute little to the network output and become ineffective. This problem not only makes the network more complex but also reduces its stability. To address it, this paper proposes an improved method that adds a bias to the I-ELM hidden-layer output (II-ELM) and proves the existence of such a bias. Finally, comparative simulations of the I-ELM method on classification and regression problems verify the effectiveness of II-ELM.
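The "ineffective neuron" problem is easy to see in a minimal I-ELM sketch (toy sine data; all names and sizes are illustrative). Each new random neuron receives the output weight that best reduces the current residual, and a near-zero weight marks a neuron that contributes almost nothing:

```python
import numpy as np

rng = np.random.default_rng(3)
X = np.linspace(-3, 3, 200).reshape(-1, 1)
t = np.sin(X).ravel()

e = t.copy()            # current residual error
betas = []
for _ in range(50):     # add hidden neurons one at a time
    w = rng.uniform(-1, 1, (1,))
    b0 = rng.uniform(-1, 1)
    h = np.tanh(X @ w + b0)             # the new neuron's output vector
    beta = (e @ h) / (h @ h)            # optimal output weight for this neuron
    e = e - beta * h                    # residual after adding the neuron
    betas.append(beta)

init_mse = np.mean(t ** 2)
final_mse = np.mean(e ** 2)
small = sum(abs(v) < 1e-3 for v in betas)  # "ineffective" neurons with tiny weights
```

Because each output weight is the least-squares projection of the residual onto the new neuron, the residual never grows, but neurons whose random parameters happen to align poorly with the residual get tiny weights and only inflate the network, which is the problem II-ELM's hidden-layer bias targets.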

7.
Online learning algorithms are preferred in many applications because they can learn from sequentially arriving data. One effective algorithm recently proposed for training single-hidden-layer feedforward neural networks (SLFNs) is the online sequential extreme learning machine (OS-ELM), which can learn data one-by-one or chunk-by-chunk with fixed or varying chunk sizes. It is based on the extreme learning machine (ELM), in which the input weights and hidden-layer biases are chosen randomly and the output weights are then determined by the pseudo-inverse operation. The learning speed of this algorithm is extremely high. However, it does not yield good generalization on noisy data, and its parameters are difficult to initialize so as to avoid singular and ill-posed problems. In this paper, we propose an improvement of OS-ELM based on a bi-objective optimization approach that minimizes the empirical error while keeping the norm of the network weight vector small. Singular and ill-posed problems are overcome with Tikhonov regularization. The approach can likewise learn data one-by-one or chunk-by-chunk. Experimental results on benchmark datasets show the better generalization performance of the proposed approach.
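A small sketch of the Tikhonov-regularized (ridge) solution the paper relies on, compared with the plain pseudo-inverse solution (toy noisy data; the regularization constant is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(4)
X = np.linspace(-3, 3, 100).reshape(-1, 1)
t = np.sin(X).ravel() + rng.normal(0, 0.2, 100)   # noisy targets

n_hidden = 80                                     # deliberately over-parameterized
W = rng.uniform(-1, 1, (1, n_hidden))
b = rng.uniform(-1, 1, n_hidden)
H = np.tanh(X @ W + b)

# plain ELM: minimum-norm least squares via pseudo-inverse
beta_ls = np.linalg.pinv(H) @ t

# Tikhonov-regularized ELM: minimizes ||H beta - t||^2 + lam * ||beta||^2
lam = 1e-1
beta_ridge = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ t)

norm_ls = np.linalg.norm(beta_ls)
norm_ridge = np.linalg.norm(beta_ridge)
```

By construction the ridge solution trades a little empirical error for a smaller weight norm, which is exactly the bi-objective trade-off the abstract describes; the added lam * I term also keeps the normal-equations matrix invertible in the singular, ill-posed cases.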

8.
A hybrid intelligent method combining differential evolution (DE) and particle swarm optimization (PSO), the DEPSO algorithm, is proposed; tests on 10 typical functions show that DEPSO has good optimization performance. An improved learning algorithm for single-hidden-layer feedforward networks (SLFNs), DEPSO-ELM, is then proposed: DEPSO optimizes the hidden-node parameters of the SLFN, and the extreme learning machine (ELM) computes the output weights. Applied to regression on six typical real-world datasets, DEPSO-ELM obtains more accurate results than DE-ELM and SaE-ELM. Finally, DEPSO-ELM is applied to modeling and predicting the thermal error of CNC machine tools, with good predictive results.

9.
The traditional deep kernel extreme learning machine classifies using only the final-layer features, which are incomplete, and fault-diagnosis classifiers often use an inappropriate kernel function. To address both problems, a marine diesel engine fault-diagnosis method based on multi-layer feature representation and a multi-kernel extreme learning machine is proposed. A deep ELM network extracts multi-layer features from the fault data; the features of each layer are concatenated into a single multi-attribute feature vector; a multi-kernel ELM classifier then performs accurate diesel-engine fault diagnosis. Experimental results on standard classification datasets and a simulated marine diesel engine fault dataset show that, compared with other ELM algorithms, the method effectively improves the accuracy and stability of fault diagnosis and generalizes well, making it a more practical tool for diesel-engine fault diagnosis.

10.
Detection of false data injection attacks on power grids based on an improved multi-hidden-layer extreme learning machine
False data injection attacks (FDIA) seriously threaten state estimation in power cyber-physical systems (CPS), yet most current detection methods focus on detecting whether an attack exists and cannot locate it accurately. This paper therefore proposes an FDIA detection method for power CPS based on a multi-layer extreme learning machine (ML-ELM) optimized by gray wolf optimization (GWO). The method treats attack detection as a multi-label binary classification problem: the ELM used for feature extraction and classifier training is extended from a single hidden layer to multiple hidden layers to overcome the ELM's limited representational power, and the GWO algorithm, with its strong global search ability, is incorporated to improve the classification accuracy and generalization of the multi-layer ELM. The method can thus automatically identify anomalies in the state variables of each node and locate the attack precisely. Extensive experiments on the IEEE 14- and 57-bus test systems under different scenarios verify the effectiveness of the method; compared respectively with the ELM, the multi-layer ELM without gray wolf optimization, and the support vector machine (SVM), the proposed...

11.
The extreme learning machine (ELM), a single-hidden-layer feedforward neural network algorithm, was tested on nine environmental regression problems. The prediction accuracy and computational speed of the ensemble ELM were evaluated against multiple linear regression (MLR) and three nonlinear machine learning (ML) techniques: artificial neural networks (ANN), support vector regression, and random forests (RF). Simple automated algorithms were used to estimate the parameters (e.g., the number of hidden neurons) needed for model training. Scaling the range of the random weights in ELM improved its performance. Excluding large datasets (with a large number of cases and predictors), ELM tended to be the fastest of the nonlinear models; for large datasets, RF tended to be the fastest. ANN and ELM had similar skill, but ELM was much faster than ANN except on large datasets. Generally, the tested ML techniques outperformed MLR, but no single method was best on all nine datasets.

12.
To prevent the flower pollination algorithm from falling into local optima when optimizing an extreme learning machine, an ELM algorithm based on cloud-model quantum flower pollination is proposed. First, the cloud model and a quantum system are introduced into the flower pollination algorithm to strengthen its global search, letting particles search in different states. Then the cloud quantum flower pollination algorithm optimizes the parameters of the ELM to improve its recognition accuracy and efficiency. In simulations on six standard test functions against several algorithms, the proposed cloud quantum flower pollination algorithm outperforms three other swarm-intelligence optimizers. Finally, the improved ELM is applied to oil and gas reservoir identification, reaching an accuracy of 98.62% and cutting training time by 1.6802 s relative to the classical ELM; the algorithm offers high accuracy and efficiency and can be widely applied to practical classification tasks.

13.
何佩苑, 刘勇. 《计算机应用研究》, 2022, 39(3): 785-789+796.
To address the low optimization accuracy of teaching-learning-based optimization (teaching-learning-based optimization, TLBO) and its tendency to fall into local optima, a new TLBO variant incorporating theories from cognitive psychology (cognitive psychology teaching-learning-based optimization, CPTLBO) is proposed. In the teaching phase, the foot-in-the-door effect is introduced: students who struggle to learn are given staged learning goals, raising the overall level of the class. In the learning phase, a teacher-guidance mechanism speeds up convergence. A self-adjustment phase is then added: following locus-of-control theory, students are divided into internal and external types, which attribute their results differently and take corresponding measures. Tests on classic benchmark functions show that CPTLBO has advantages in optimization accuracy and convergence speed. A CPTLBO-ELM model for predicting municipal water-supply volume is then built, in which CPTLBO optimizes the input weights and hidden-layer thresholds of an extreme learning machine to improve prediction accuracy and generalization. Simulations show that the CPTLBO-optimized model predicts more accurately.
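For reference, a compact sketch of the baseline TLBO loop that CPTLBO modifies (standard teacher and learner phases only, without the paper's cognitive-psychology additions; the population size, iteration count, and sphere benchmark are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)

def sphere(x):                          # benchmark objective to minimize
    return np.sum(x ** 2, axis=-1)

pop = rng.uniform(-5, 5, (20, 3))       # 20 learners, 3 decision variables
for _ in range(100):
    # teacher phase: move learners toward the best one, away from the class mean
    f = sphere(pop)
    teacher = pop[np.argmin(f)]
    TF = rng.integers(1, 3)             # teaching factor in {1, 2}
    cand = pop + rng.random(pop.shape) * (teacher - TF * pop.mean(axis=0))
    better = sphere(cand) < f
    pop[better] = cand[better]
    # learner phase: each learner interacts with a random peer,
    # stepping toward better peers and away from worse ones
    f = sphere(pop)
    j = rng.permutation(len(pop))
    is_better = (f < f[j])[:, None]
    cand = pop + rng.random(pop.shape) * np.where(is_better, pop - pop[j], pop[j] - pop)
    better = sphere(cand) < f
    pop[better] = cand[better]

best = float(sphere(pop).min())
```

Both phases accept a candidate only if it improves the objective, so the best learner's fitness is non-increasing; CPTLBO's staged goals and guidance mechanism plug into exactly these two update rules.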

14.

To address the many redundant nodes in the incremental extreme learning machine (I-ELM) that lower learning efficiency and accuracy, an improved incremental kernel extreme learning algorithm based on the Delta test (DT) and a chaos optimization algorithm (COA) is proposed. The global search ability of COA optimizes the hidden-node parameters of the I-ELM, and the DT checks the model's output error to determine the effective number of hidden nodes, reducing network complexity and improving learning efficiency; adding a kernel function strengthens the network's online prediction ability. Simulation results show that the proposed DCI-ELMK algorithm achieves good prediction accuracy and generalization with a more compact network structure.


15.
罗家祥, 罗丹, 胡跃明. 《控制与决策》, 2018, 33(6): 1033-1040.
The online extreme learning machine learns sample data batch by batch or chunk by chunk and is well suited to analyzing online data from a production process to detect faults. To improve detection accuracy and speed, an online ELM fault-detection method with changing weights and decision fusion is proposed. During learning, the method increases the weight of new samples that the current monitoring model mispredicts, and it introduces decision-level fusion into the monitoring model to strengthen its overall decision-making ability. Comparative simulations on UCI datasets and the TE process show that the proposed method performs well in both training time and detection accuracy.

16.
韩敏, 刘晓欣. 《控制与决策》, 2014, 29(9): 1576-1580.

To address variable selection and network-structure design in regression problems, a mutual-information-based training algorithm for the extreme learning machine (ELM) is proposed that selects input variables and optimizes the hidden-layer structure simultaneously. The algorithm embeds mutual-information variable selection into the ELM learning process, uses the network's learning performance as the indicator of whether an input variable is relevant to the output, and determines the hidden-layer size incrementally. Simulations on Lorenz, Gas Furnace, and 10 benchmark datasets demonstrate the algorithm's effectiveness: it simplifies the network structure while improving generalization.


17.
When building unsupervised domain-adaptive classifiers on extreme learning machines, the hidden-layer parameters are usually chosen at random and therefore lack any domain-adaptation ability. To strengthen the knowledge transfer of cross-domain ELMs, a new unsupervised domain-adaptive classifier learning method based on the ELM is proposed. The method uses an ELM autoencoder to reconstruct the source- and target-domain data, yielding hidden-layer parameters with domain-invariant properties. Further, combining joint probability-distribution matching with manifold regularization, the output-layer weights are adjusted adaptively. The proposed algorithm thus equips both layers of ELM parameters with domain-adaptation ability; experiments on character and object recognition datasets show high cross-domain classification accuracy.

18.
TROP-ELM: A double-regularized ELM using LARS and Tikhonov regularization
In this paper an improvement of the optimally pruned extreme learning machine (OP-ELM), in the form of an L2 regularization penalty applied within the OP-ELM, is proposed. The OP-ELM originally proposes a wrapper methodology around the extreme learning machine (ELM) meant to reduce the sensitivity of the ELM to irrelevant variables and to obtain more parsimonious models through neuron pruning. The proposed modification of the OP-ELM uses a cascade of two regularization penalties: first an L1 penalty to rank the neurons of the hidden layer, followed by an L2 penalty on the regression weights (the regression between the hidden layer and the output layer) for numerical stability and efficient pruning of the neurons. The new methodology is tested against state-of-the-art methods such as support vector machines and Gaussian processes, as well as the original ELM and OP-ELM, on 11 different datasets; it systematically outperforms the OP-ELM (on average 27% lower mean squared error) and provides more reliable results in terms of their standard deviation, while always remaining less than one order of magnitude slower than the OP-ELM.
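A rough sketch of the two-stage idea (greedy residual-correlation ranking is used here as a crude stand-in for the paper's LARS/L1 ranking, and all sizes and constants are illustrative assumptions, not TROP-ELM's actual procedure):

```python
import numpy as np

rng = np.random.default_rng(6)
X = np.linspace(-3, 3, 150).reshape(-1, 1)
t = np.sin(X).ravel() + rng.normal(0, 0.1, 150)

n_hidden = 60
W = rng.uniform(-1, 1, (1, n_hidden))
b = rng.uniform(-1, 1, n_hidden)
H = np.tanh(X @ W + b)

# stage 1: rank hidden neurons greedily by correlation with the residual
# (a crude stand-in for the L1 / LARS ranking in OP-ELM / TROP-ELM)
cols = (H - H.mean(0)) / H.std(0)
order, resid = [], t.copy()
for _ in range(15):                     # keep the 15 highest-ranked neurons
    scores = np.abs(cols.T @ resid)
    scores[order] = -np.inf             # never pick the same neuron twice
    order.append(int(np.argmax(scores)))
    coef = np.linalg.lstsq(H[:, order], t, rcond=None)[0]
    resid = t - H[:, order] @ coef

# stage 2: Tikhonov (L2) regression on the pruned hidden layer
Hs = H[:, order]
lam = 1e-2
beta = np.linalg.solve(Hs.T @ Hs + lam * np.eye(len(order)), Hs.T @ t)
mse = float(np.mean((Hs @ beta - t) ** 2))
```

The cascade mirrors the paper's design: a sparsity-driven ranking decides which neurons survive, and the L2 penalty then stabilizes the final regression on the survivors.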

19.
To improve the accuracy of network traffic prediction and to address the training-sample selection problem of the extreme learning machine, a network traffic prediction model based on an improved ELM (IELM) is proposed. The network traffic is reconstructed using the optimal delay time and embedding dimension to build learning samples, which are fed into the improved ELM for training; the network weights are solved incrementally as new samples arrive to increase learning speed, and Cholesky decomposition is introduced to improve the model's generalization. Simulation tests on real network traffic data show that IELM not only achieves higher accuracy than traditional traffic prediction models but also greatly reduces computation time and improves modeling efficiency, satisfying the requirements of network traffic prediction well.

20.
As a novel learning algorithm for single-hidden-layer feedforward neural networks, the extreme learning machine (ELM) has been a promising tool for regression and classification applications. However, it is not trivial for ELMs to find the proper number of hidden neurons because of the non-optimal input weights and hidden biases. In this paper, a new model selection method for ELM based on multi-objective optimization is proposed to obtain compact networks with good generalization ability. First, a new leave-one-out (LOO) error bound of ELM is derived; it can be calculated with negligible computational cost once ELM training is finished. Hidden nodes are then added to the network one by one, and at each step a multi-objective optimization algorithm selects optimal input weights by minimizing this LOO bound and the norm of the output weights simultaneously, in order to avoid over-fitting. Experiments on five UCI regression datasets demonstrate that the proposed algorithm generally obtains better generalization performance with a more compact network than the conventional gradient-based back-propagation method, the original ELM, and the evolutionary ELM.


Copyright © 北京勤云科技发展有限公司  京ICP备09084417号