Similar Articles
 18 similar articles found (search time: 203 ms)
1.
The online robust random vector functional-link network (OR-RVFLN) offers good approximation ability, fast convergence, strong robustness, and a small memory footprint. However, the OR-RVFLN computation can produce ill-conditioned matrices, which lowers the accuracy of the hidden-layer output matrix. To address this problem, this paper proposes an online robust regularized random network based on singular value decomposition (SVD-OR-RRVFLN). Building on OR-RVFLN, the algorithm introduces a regularization term into the weight estimation and applies singular value decomposition to the hidden-layer output matrix; kernel density estimation (KDE) is then used to update the weight matrix of the whole SVD-OR-RRVFLN network, and the necessity and convergence of the proposed algorithm are analyzed. Finally, the method is applied to benchmark datasets and to the prediction of grinding particle size indices. The experimental results confirm that the algorithm not only improves the model's prediction accuracy and robustness but also trains faster.
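For context only (the abstract does not give the paper's exact update, so this is the standard ridge-regression form written through the SVD): with the thin SVD of the hidden-layer output matrix $H = U\Sigma V^{T}$, the regularized output weights are
\[ \hat{\beta} = V\,(\Sigma^{2} + \lambda I)^{-1}\,\Sigma\,U^{T} T , \]
which avoids explicitly inverting the possibly ill-conditioned matrix $H^{T}H$; small singular values are damped by the regularization parameter $\lambda$ instead of being amplified.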

2.
For nonstationary time-series prediction, an online sequential extreme learning machine with generalized regularization and a forgetting mechanism is proposed. The algorithm learns online by incrementally incorporating new samples and strengthens its ability to track nonstationary dynamics by forgetting outdated samples; a generalized $l_2$ regularization keeps the regularization effect persistent, which guarantees the continued stability of the algorithm. Simulation examples show that the proposed algorithm is more stable and has smaller prediction errors than comparable algorithms, making it suitable for online modeling and prediction of nonstationary time series with dynamically changing characteristics.

3.
郭威  徐涛  于建江  汤克明 《控制与决策》2017,32(9):1556-1564
For large-scale online learning problems, a bi-dimensionally partitioned sequential regularized extreme learning machine (BP-SRELM) is proposed. Built on the online sequential extreme learning machine and the divide-and-conquer strategy, BP-SRELM partitions the high-dimensional hidden-layer output matrix along both the instance and the feature dimensions, reducing the problem size and computational complexity and thereby greatly improving efficiency on large-scale learning problems. In addition, BP-SRELM incorporates Tikhonov regularization to further strengthen its stability and generalization ability in practical applications. Experimental results show that BP-SRELM not only achieves higher stability and prediction accuracy but also has a clear advantage in learning speed, making it suitable for online learning and real-time modeling of large-scale data streams.

4.
张明洋  闻英友  杨晓陶  赵宏 《控制与决策》2017,32(10):1887-1893
To address the low learning efficiency and poor accuracy of the online sequential extreme learning machine (OS-ELM) on incremental data, an online sequential extreme learning machine based on incremental weighted averaging (WOS-ELM) is proposed. The residuals of the model trained on the original data and on the incremental data are weighted and combined into a cost function, from which a training model that balances the original and incremental data is derived; the original data are used to damp the fluctuations of the incremental data, giving the online extreme learning machine better stability and thus improving learning efficiency and accuracy. Simulation results show that the proposed WOS-ELM algorithm achieves good prediction accuracy and generalization on incremental data.
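The abstract does not spell out the weighted cost; one plausible reading, given only as an illustration, weights the residuals on the original block $(H_0, T_0)$ and the incremental block $(H_1, T_1)$ as
\[ J(\beta) = w_0\,\lVert H_0\beta - T_0\rVert^{2} + w_1\,\lVert H_1\beta - T_1\rVert^{2}, \]
whose minimizer $\beta = (w_0 H_0^{T}H_0 + w_1 H_1^{T}H_1)^{-1}(w_0 H_0^{T}T_0 + w_1 H_1^{T}T_1)$ balances the two blocks and lets the original data damp fluctuations introduced by the increment.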

5.
As time passes, a wind-farm power forecasting model gradually loses applicability and its prediction accuracy declines. To solve this problem, an online updating strategy for short-term wind power forecasting models is proposed based on the online sequential extreme learning machine (OS-ELM). The OS-ELM model encodes the wind farm's historical data into the hidden-layer output matrix, so when the model is updated only the newly generated data need to be fed into the current network, greatly reducing the computational resources required. An extreme learning machine (ELM) is used to correct the wind-speed forecasts from numerical weather prediction (NWP), and the forecast power is further corrected according to the confidence interval of the wind power. Experimental results show that the model updated with the OS-ELM algorithm is more applicable and more accurate, and that the correction model based on the wind-power confidence interval markedly improves forecasting accuracy.

6.
To address the overfitting and unstable classifier outputs of the conventional online sequential extreme learning machine, structural risk minimization is introduced into the extreme learning machine, wavelet functions replace the original hidden-layer activation functions to build a regularized wavelet extreme learning machine, and this is combined with online learning to yield an online regularized wavelet extreme learning machine. Simulation results show that the online regularized wavelet extreme learning machine overcomes overfitting and local optima, supports fast online learning, and has good generalization and robustness.

7.

To address the large number of redundant nodes in the incremental extreme learning machine (I-ELM), which reduce learning efficiency and accuracy, an improved incremental kernel extreme learning algorithm based on the Delta test (DT) and a chaos optimization algorithm (COA) is proposed. The global search capability of the COA is used to optimize the hidden-node parameters of I-ELM, and the DT checks the model's output error to determine an effective number of hidden nodes, which reduces network complexity and improves learning efficiency; adding a kernel function strengthens the network's online prediction capability. Simulation results show that the proposed DCI-ELMK algorithm achieves good prediction accuracy and generalization with a more compact network structure.

8.
任瑞琪  李军 《测控技术》2018,37(6):15-19
For power load forecasting, an optimized kernel extreme learning machine (O-KELM) method is proposed. The kernel extreme learning machine (KELM) represents the unknown nonlinear hidden-layer feature mapping solely through a kernel function, so the number of hidden nodes does not need to be chosen, and the output weights are computed by a regularized least-squares algorithm. Optimization algorithms are then applied to KELM, yielding three optimized variants based on the genetic algorithm, differential evolution, and simulated annealing, which tune the kernel parameters and the regularization coefficient to further improve KELM's learning performance. To validate the method, O-KELM is applied to medium-term peak load forecasting for a region and compared under identical conditions with the optimized extreme learning machine (O-ELM), SVM, and other methods. Experimental results show that O-KELM performs very well, with GA-KELM achieving the highest modeling accuracy.
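As a point of reference, a minimal KELM regression sketch in the spirit described above (this is not the paper's code; the names rbf_kernel, gamma, and C are illustrative, and the GA/DE/SA loop that would tune gamma and C is omitted):

    import numpy as np

    def rbf_kernel(A, B, gamma):
        # Gram matrix K[i, j] = exp(-gamma * ||A[i] - B[j]||^2)
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def kelm_fit(X, T, gamma, C):
        # Output weights live in the span of the training data:
        # alpha = (K + I/C)^{-1} T  (regularized least squares)
        K = rbf_kernel(X, X, gamma)
        return np.linalg.solve(K + np.eye(len(X)) / C, T)

    def kelm_predict(Xnew, X, alpha, gamma):
        return rbf_kernel(Xnew, X, gamma) @ alpha

In this reading, the outer optimizer (GA, DE, or SA) would simply search over (gamma, C) to minimize a validation error computed with kelm_fit/kelm_predict.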

9.
To handle online updating of a classification model as batches of fault-feature samples arrive, a limited-sample sequential extreme learning machine (LSSELM) is proposed. LSSELM improves the model's dynamic adaptability by gradually adding new samples while removing the old sample of the same class that is most similar to each new one, and it uses the Sherman-Morrison matrix inversion lemma to reduce computational complexity, solving for the output weights recursively and completing online training of the model. Applied to online fault diagnosis of analog circuits, LSSELM achieves higher diagnostic accuracy and better generalization than the online sequential extreme learning machine (OS-ELM).
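The rank-one update that makes the recursive output-weight solution cheap is the standard Sherman-Morrison identity (stated here in its generic form, not the paper's notation):
\[ (A + u v^{T})^{-1} = A^{-1} - \frac{A^{-1} u\, v^{T} A^{-1}}{1 + v^{T} A^{-1} u}, \]
so adding one sample (or, with a sign change, removing one) requires only matrix-vector products with the already-stored inverse instead of a full re-inversion.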

10.
To obtain accurate and fast online prediction of chaotic sequences with a radial basis function (RBF) neural network, a sequential learning algorithm that constructs a variable-structure RBF network online is proposed. The algorithm maintains a sliding data window that is updated in real time; by learning from the data in the window it adds and removes hidden nodes, dynamically determining the number and center positions of the RBF hidden nodes, and it adjusts the hidden-to-output connection weights online. The algorithm has few tuning parameters, learns quickly, and yields a compact network structure. Applied to online prediction of the Mackey-Glass chaotic time series, the results verify that the algorithm provides good online dynamic identification and prediction of this chaotic sequence.

11.
In this paper, we develop an online sequential learning algorithm for single hidden layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm is referred to as the online sequential extreme learning machine (OS-ELM) and can learn data one-by-one or chunk-by-chunk (a block of data) with fixed or varying chunk size. The activation functions for additive nodes in OS-ELM can be any bounded nonconstant piecewise continuous functions, and the activation functions for RBF nodes can be any integrable piecewise continuous functions. In OS-ELM, the parameters of the hidden nodes (the input weights and biases of additive nodes, or the centers and impact factors of RBF nodes) are randomly selected, and the output weights are analytically determined from the sequentially arriving data. The algorithm uses the ideas of the ELM of Huang et al., developed for batch learning, which has been shown to be extremely fast with better generalization performance than other batch training methods. Apart from selecting the number of hidden nodes, no other control parameters have to be manually chosen. A detailed performance comparison of OS-ELM with other popular sequential learning algorithms is carried out on benchmark problems drawn from regression, classification, and time-series prediction. The results show that OS-ELM is faster than the other sequential algorithms and produces better generalization performance.
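A minimal numpy sketch of the OS-ELM recursion described above, assuming sigmoid additive nodes and chunk-wise arrival (the variable names, data shapes, and activation choice are illustrative, not the authors' code):

    import numpy as np

    def hidden(X, W, b):
        # random-feature hidden layer: H = sigmoid(X W + b)
        return 1.0 / (1.0 + np.exp(-(X @ W + b)))

    # --- initialization phase on a small batch (X0, T0) ---
    rng = np.random.default_rng(0)
    n_hidden, n_in = 20, 3
    W = rng.uniform(-1, 1, (n_in, n_hidden))   # fixed after initialization
    b = rng.uniform(-1, 1, n_hidden)
    X0, T0 = rng.random((50, n_in)), rng.random((50, 1))
    H0 = hidden(X0, W, b)
    P = np.linalg.inv(H0.T @ H0)               # needs at least n_hidden init samples
    beta = P @ H0.T @ T0

    # --- sequential phase: update on each arriving chunk (Xk, Tk) ---
    def os_elm_update(P, beta, Xk, Tk):
        Hk = hidden(Xk, W, b)
        S = np.linalg.inv(np.eye(len(Hk)) + Hk @ P @ Hk.T)
        P = P - P @ Hk.T @ S @ Hk @ P
        beta = beta + P @ Hk.T @ (Tk - Hk @ beta)
        return P, beta

    # e.g. P, beta = os_elm_update(P, beta, rng.random((10, n_in)), rng.random((10, 1)))

Only the small matrices P and beta are retained between chunks, which is what keeps the per-step cost independent of the amount of data already seen.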

12.
Huang et al. (2004) recently proposed an online sequential ELM (OS-ELM) that enables the extreme learning machine (ELM) to train on data one-by-one as well as chunk-by-chunk. OS-ELM is based on a recursive least-squares-type algorithm that uses a constant forgetting factor. In OS-ELM, the parameters of the hidden nodes are randomly selected and the output weights are determined from the sequentially arriving data. However, OS-ELM with a constant forgetting factor cannot provide satisfactory performance in time-varying or nonstationary environments. We therefore propose an OS-ELM algorithm with an adaptive forgetting factor that maintains good performance in such environments. The proposed algorithm has the following advantages: (1) the adaptive forgetting factor requires minimal additional complexity of O(N), where N is the number of hidden neurons, and (2) the algorithm with the adaptive forgetting factor is comparable to the conventional OS-ELM with an optimal forgetting factor.
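For reference, the forgetting-factor form of the recursion that such an adaptive scheme plugs into (a standard RLS-style statement with forgetting factor $\lambda_k$ at step $k$, not necessarily the paper's exact notation):
\[ P_{k} = \frac{1}{\lambda_k}\Bigl(P_{k-1} - P_{k-1} H_k^{T}\,(\lambda_k I + H_k P_{k-1} H_k^{T})^{-1} H_k P_{k-1}\Bigr), \qquad \beta_k = \beta_{k-1} + P_k H_k^{T}\,(T_k - H_k \beta_{k-1}). \]
With $\lambda_k \equiv 1$ this reduces to plain OS-ELM, while smaller values discount older data; the adaptive rule adjusts $\lambda_k$ online at only O(N) extra cost.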

13.
The extreme learning machine (ELM) is widely used for complex industrial problems; in particular, the online sequential extreme learning machine (OS-ELM) performs well in industrial online modeling. However, OS-ELM requires a batch of samples for pre-training to obtain the initial weights, which may reduce the timeliness of the samples. This paper proposes a novel model for online process regression prediction, called the Recurrent Extreme Learning Machine (Recurrent-ELM). In Recurrent-ELM the hidden-layer nodes are recurrently connected, so the hidden layer receives information both from the current input and from the hidden layer at the previous step. Moreover, the weights and biases of the proposed model are generated analytically rather than at random. Six regression applications are used to verify the designed Recurrent-ELM against the extreme learning machine (ELM), the fast learning network (FLN), the online sequential extreme learning machine (OS-ELM), and an ensemble of online sequential extreme learning machines (EOS-ELM); the experimental results show that Recurrent-ELM has better generalization and stability across the tested samples. In addition, to further test its performance, we employ Recurrent-ELM in the combustion modeling of a 330 MW coal-fired boiler and compare it with FLN, SVR, and OS-ELM. The results show that Recurrent-ELM has better accuracy and generalization ability, and the model has potential value in practical applications.

14.
Online learning algorithms are preferred in many applications because of their ability to learn from sequentially arriving data. One of the effective algorithms recently proposed for training single hidden-layer feedforward neural networks (SLFNs) is the online sequential extreme learning machine (OS-ELM), which can learn data one-by-one or chunk-by-chunk with fixed or varying chunk sizes. It is based on the ideas of the extreme learning machine (ELM), in which the input weights and hidden-layer biases are randomly chosen and the output weights are then determined by a pseudo-inverse operation. The learning speed of this algorithm is extremely high. However, it does not yield good generalization models for noisy data, and its parameters are difficult to initialize in a way that avoids singular and ill-posed problems. In this paper, we propose an improvement of OS-ELM based on a bi-objective optimization approach that minimizes the empirical error while keeping the norm of the network weight vector small. Singular and ill-posed problems are overcome by using Tikhonov regularization. The approach can also learn data one-by-one or chunk-by-chunk. Experimental results show better generalization performance of the proposed approach on benchmark datasets.
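Under Tikhonov regularization, the bi-objective trade-off between the empirical error and the weight norm is usually collapsed into a single regularized least-squares problem; in the common ELM notation (a generic statement, not necessarily the paper's exact formulation):
\[ \min_{\beta}\; \lVert H\beta - T\rVert^{2} + \lambda \lVert\beta\rVert^{2} \;\;\Rightarrow\;\; \beta = (H^{T}H + \lambda I)^{-1} H^{T} T , \]
so the added $\lambda I$ keeps $H^{T}H$ invertible even when it is singular or ill-conditioned, which is exactly the initialization problem cited above.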

15.

To curb model expansion in kernel learning methods and adapt to the nonlinear dynamics encountered in online prediction of nonstationary time series, a new online sequential learning algorithm with a sparse update and an adaptive regularization scheme is proposed, based on the kernel-based incremental extreme learning machine (KB-IELM). For online sparsification, a new method selects the sparse dictionary according to an instantaneous information measure; it uses a pruning strategy that removes the least significant centers and preserves the important ones by minimizing the redundancy of the dictionary online. For the adaptive regularization scheme, a new objective function is constructed on top of the basic ELM model, so that the model carries different structural risks in different nonlinear regions; at each training step the newly added sample is assigned an optimal regularization factor by an optimization procedure. Performance comparisons with other existing online sequential learning methods on artificial and real-world nonstationary time-series data indicate that the proposed method achieves higher prediction accuracy, better generalization performance, and better stability.

16.
Ensemble of online sequential extreme learning machine
Yuan Lan, Yeng Chai Soh, Guang-Bin Huang 《Neurocomputing》2009, 72(13-15): 3391
Liang et al. [A fast and accurate online sequential learning algorithm for feedforward networks, IEEE Transactions on Neural Networks 17 (6) (2006) 1411–1423] have proposed an online sequential learning algorithm called the online sequential extreme learning machine (OS-ELM), which can learn data one-by-one or chunk-by-chunk with fixed or varying chunk size. The same work showed that OS-ELM runs much faster and provides better generalization performance than other popular sequential learning algorithms. However, we find that the stability of OS-ELM can be further improved. In this paper, we propose an ensemble of online sequential extreme learning machines (EOS-ELM) based on OS-ELM. The results show that EOS-ELM is more stable and accurate than the original OS-ELM.

17.
Exploiting the powerful nonlinear mapping ability of kernel learning, a class of kernel-based prediction models is proposed for short-term traffic flow forecasting. Kernel recursive least squares (KRLS) with the approximate linear dependence (ALD) technique reduces computational complexity and storage; it is an online kernel learning method suited to learning on relatively large datasets. Kernel partial least squares (KPLS) projects the input variables onto latent variables and extracts latent features from the covariance between inputs and outputs. The kernel extreme learning machine (KELM) represents the unknown nonlinear hidden-layer feature mapping with a kernel function and computes the output weights by regularized least squares, achieving good generalization at extremely fast learning speed. To verify the effectiveness of the proposed methods, KELM, KPLS, and ALD-KRLS are applied to several measured traffic flow datasets and compared with existing methods under identical conditions. Experimental results show that the kernel learning methods improve both prediction accuracy and training speed, demonstrating their potential for short-term traffic flow forecasting.
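The ALD test that keeps the KRLS dictionary small can be summarized as follows (the standard form from the kernel RLS literature; $\mathcal{D}$ is the current dictionary and $\nu$ a user-chosen threshold, both notational assumptions here):
\[ \delta_t = \min_{a}\Bigl\lVert \phi(x_t) - \sum_{j\in\mathcal{D}} a_j\,\phi(x_j) \Bigr\rVert^{2} = k(x_t, x_t) - \mathbf{k}_t^{T} K^{-1} \mathbf{k}_t , \]
where $K$ is the kernel matrix over the dictionary and $\mathbf{k}_t$ the vector of kernel values between $x_t$ and the dictionary elements; $x_t$ is added only when $\delta_t > \nu$, which bounds both memory use and per-step cost.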

18.
Blast furnace gas is an important secondary energy source for iron and steel enterprises, and accurate real-time prediction of its generation and consumption plays an important role in balancing and scheduling the blast furnace gas system. However, the frequently changing operating conditions of the system and the large fluctuations in the generation and consumption data make accurate prediction very challenging. Based on an in-depth analysis of the characteristics of the gas generation and consumption data, an online prediction algorithm using an extreme learning machine with an adaptive forgetting factor (AF-ELM) is proposed. Building on the sequential extreme learning machine, a forgetting factor is introduced to gradually forget old samples, and a prediction-error feedback mechanism adaptively adjusts the forgetting factor, improving the method's adaptability to dynamic changes in operating conditions and its prediction accuracy. Applied to online prediction of blast furnace gas generation and consumption in a steel enterprise, experimental results show that, compared with the sequential extreme learning machine, the proposed method maintains higher prediction accuracy under changing operating conditions and is better suited to online prediction of blast furnace gas generation and consumption.
