Similar Documents
20 similar documents found (search time: 31 ms)
1.
In this paper, a regularized correntropy criterion (RCC) for the extreme learning machine (ELM) is proposed to handle training sets contaminated by noise or outliers. In RCC, a Gaussian kernel function is substituted for the Euclidean norm of the mean square error (MSE) criterion; replacing MSE with RCC enhances the noise robustness of ELM. Moreover, the optimal weights connecting the hidden and output layers, together with the optimal bias terms, can be obtained promptly by the half-quadratic (HQ) optimization technique in an iterative manner. Experimental results on four synthetic data sets and fourteen benchmark data sets demonstrate that the proposed method is superior to both the traditional ELM and the regularized ELM trained with the MSE criterion.
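The HQ optimization described above can be viewed as an iteratively reweighted ridge solve, where the Gaussian kernel supplies per-sample weights that shrink toward zero for outliers. A minimal NumPy sketch of that idea (function names, parameter values, and the toy data are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_features(X, W, b):
    """Hidden-layer output matrix H with a sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def rcc_elm_fit(X, y, n_hidden=20, sigma=1.0, lam=1e-3, n_iter=10):
    """HQ-style sketch: alternate between Gaussian (correntropy) weights
    that down-weight large residuals and a weighted ridge solve."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = elm_features(X, W, b)
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)
    for _ in range(n_iter):
        r = y - H @ beta                        # current residuals
        w = np.exp(-r ** 2 / (2 * sigma ** 2))  # correntropy weights in (0, 1]
        Hw = H * w[:, None]
        beta = np.linalg.solve(H.T @ Hw + lam * np.eye(n_hidden), Hw.T @ y)
    return W, b, beta

# toy regression with one injected outlier
X = np.linspace(0, 1, 50)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
y[10] += 5.0
W, b, beta = rcc_elm_fit(X, y)
pred = elm_features(X, W, b) @ beta
```

The outlier's weight decays exponentially with its squared residual, so it contributes little to the final solve.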

2.
This paper proposes an identification method combining UD (upper-diagonal) decomposition with bias compensation for errors-in-variables (EIV) model identification. For a single-input single-output (SISO) linear dynamic system whose input and output are corrupted by zero-mean Gaussian white measurement noise of unknown variance, estimating the model parameters is a typical EIV identification problem. To obtain unbiased estimates of the EIV model parameters, the paper first derives the relationship between the least-squares parameter-estimation bias and the input/output noise variances, and between the least-squares loss function and those variances. It then obtains the parameter estimates recursively via UD decomposition and compensates the estimation bias using the estimated input/output noise variances, thereby yielding unbiased parameter estimates. Practical issues arising in implementing the algorithm and their remedies are also discussed, and a simulation example verifies the effectiveness of the proposed identification method.

3.
The extreme learning machine is a randomized algorithm: it randomly generates the input-layer weights and hidden-layer biases of a single-hidden-layer neural network and determines the output-layer weights analytically. For a given network structure, repeatedly training the network with the extreme learning machine yields different models. This paper proposes an ensemble-model approach to data classification: first, several single-hidden-layer feedforward neural networks are trained with the extreme learning machine algorithm; the trained networks are then combined by majority voting; finally, the ensemble model classifies the data. Experiments on 10 data sets compare the method with the extreme learning machine and an ensemble extreme learning machine, and the results show that the proposed method outperforms both.
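The ensemble scheme above — train several randomized ELMs independently, then combine their predictions by majority vote — can be sketched as follows (helper names and the toy two-class data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def train_elm_classifier(X, T, n_hidden=30, lam=1e-2):
    """One randomized ELM: random input weights/biases, ridge-solved
    output weights against one-hot targets T."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def predict(model, X):
    W, b, beta = model
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

def majority_vote(models, X):
    votes = np.stack([predict(m, X) for m in models])  # (n_models, n_samples)
    return np.array([np.bincount(col).argmax() for col in votes.T])

# toy two-class data: two Gaussian blobs
X = np.vstack([rng.normal(-1, 0.3, (40, 2)), rng.normal(1, 0.3, (40, 2))])
labels = np.repeat([0, 1], 40)
T = np.eye(2)[labels]
models = [train_elm_classifier(X, T) for _ in range(7)]
acc = (majority_vote(models, X) == labels).mean()
```

Because each ELM draws different random hidden parameters, the members disagree on borderline points and the vote smooths out individual mistakes.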

4.
As a novel learning algorithm for single-hidden-layer feedforward neural networks, the extreme learning machine (ELM) has become a promising tool for regression and classification applications. However, it is not trivial for ELM to find the proper number of hidden neurons, owing to the non-optimal input weights and hidden biases. In this paper, a new model selection method for ELM based on multi-objective optimization is proposed to obtain compact networks with good generalization ability. First, a new leave-one-out (LOO) error bound of ELM is derived; it can be calculated at negligible computational cost once ELM training is finished. Then hidden nodes are added to the network one by one, and at each step a multi-objective optimization algorithm selects optimal input weights by simultaneously minimizing this LOO bound and the norm of the output weights in order to avoid over-fitting. Experiments on five UCI regression data sets demonstrate that the proposed algorithm generally obtains better generalization performance with a more compact network than the conventional gradient-based back-propagation method, the original ELM, and the evolutionary ELM.
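For a fixed hidden layer, the LOO residuals of a ridge-regularized output-weight solve admit the classical PRESS shortcut e_i / (1 - h_ii), which illustrates how a LOO quantity can be computed at negligible cost after training (the paper's actual bound may differ; this sketch only verifies the shortcut against brute-force retraining, with illustrative names and data):

```python
import numpy as np

rng = np.random.default_rng(2)

def loo_errors_closed_form(H, y, lam=1e-2):
    """PRESS-style leave-one-out residuals for a ridge solve on the
    hidden-layer matrix H: e_i / (1 - h_ii), no retraining needed."""
    A = H @ np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T)  # hat matrix
    resid = y - A @ y
    return resid / (1.0 - np.diag(A))

def loo_errors_brute(H, y, lam=1e-2):
    """Reference implementation: actually retrain with sample i held out."""
    out = np.empty(len(y))
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        Hi, yi = H[mask], y[mask]
        beta = np.linalg.solve(Hi.T @ Hi + lam * np.eye(H.shape[1]), Hi.T @ yi)
        out[i] = y[i] - H[i] @ beta
    return out

H = rng.standard_normal((30, 5))   # stand-in for a trained hidden-layer matrix
y = rng.standard_normal(30)
fast = loo_errors_closed_form(H, y)
slow = loo_errors_brute(H, y)
```

The closed form is exact for the ridge solve (a Sherman-Morrison identity), which is why the LOO criterion costs essentially one extra matrix product.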

5.
To detect latent faults in WSN nodes promptly and accurately track the operating state of a WSN, this paper applies a rough-set attribute-reduction algorithm (RS) to reduce the fault attributes of WSN nodes, reconstructs the training data set from the optimal fault-attribute decision table, and feeds it to an extreme learning machine (ELM) neural network, building a data-driven fault-diagnosis model for WSN nodes. The crow search algorithm (CSA) is used to optimize the input weights and hidden-layer thresholds of the ELM, mitigating the unstable outputs and low classification accuracy caused by randomly generated network parameters. Simulation analysis of the RS-CSA-ELM model shows that it maintains high fault-diagnosis efficiency on data sets of differing reliability and meets the requirements of WSN node fault diagnosis.

6.
The extreme learning machine (ELM) is a novel single-hidden-layer feedforward neural network with advantages in many respects, especially training speed; however, some shortcomings, such as sensitivity to perturbation and multicollinearity in the linear model, still restrict its further development. To counter the adverse effects of perturbation and multicollinearity, this paper proposes an enhanced ELM based on ridge regression (RR-ELM) for regression, which replaces the least-squares method for calculating the output weights. With the additional ridge adjustment, these properties all improve. Simulation results show that RR-ELM has better stability and generalization performance than ELM.
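Swapping the least-squares solve for a ridge solve amounts to one changed line in the output-weight computation. A sketch of how the ridge term tames near-collinear hidden outputs (toy data and the λ value are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def elm_output_weights(H, T, lam=0.0):
    """Output-weight solve: lam=0 gives the plain least-squares ELM,
    lam>0 gives the ridge-regularized RR-ELM variant."""
    n = H.shape[1]
    if lam == 0.0:
        return np.linalg.pinv(H) @ T
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ T)

# nearly collinear hidden outputs make the unregularized solve unstable
base = rng.standard_normal((100, 1))
H = np.hstack([base, base + 1e-8 * rng.standard_normal((100, 1))])
T = base[:, 0] + 0.01 * rng.standard_normal(100)

beta_ls = elm_output_weights(H, T)            # coefficients blow up
beta_rr = elm_output_weights(H, T, lam=1e-2)  # coefficients stay moderate
```

The ridge term adds λ to every eigenvalue of HᵀH, so the tiny eigenvalue created by the collinear columns no longer amplifies the noise in T.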

7.
This paper presents a performance enhancement scheme for the recently developed extreme learning machine (ELM) for multi-category sparse data classification problems. ELM is a single-hidden-layer neural network with good generalization capability and extremely fast learning capacity. In ELM, the input weights are chosen randomly and the output weights are calculated analytically. The generalization performance of ELM on sparse data classification depends critically on three free parameters: the number of hidden neurons, the input weights, and the bias values, all of which must be chosen optimally. Selecting these parameters for the best performance of ELM is a complex optimization problem. In this paper, we present a new real-coded genetic algorithm approach called 'RCGA-ELM' to select the optimal number of hidden neurons, input weights, and bias values for better performance. Two new genetic operators, a 'network-based operator' and a 'weight-based operator', are proposed to find a compact network with higher generalization performance. We also present an alternative, less computationally intensive approach called 'sparse-ELM', which searches for the best ELM parameters using K-fold validation. A multi-class human cancer classification problem using (sparse) micro-array gene expression data is used to evaluate the performance of the two schemes. Results indicate that the proposed RCGA-ELM and sparse-ELM significantly improve ELM performance for sparse multi-category classification problems.

8.
The extreme learning machine (ELM) is a learning algorithm for generalized single-hidden-layer feedforward networks (SLFNs). To obtain a suitable network architecture, the Incremental Extreme Learning Machine (I-ELM) constructs SLFNs by adding hidden nodes one by one. Although various I-ELM-class algorithms have been proposed to improve the convergence rate or to minimize the training error, they either leave the construction scheme of I-ELM unchanged or face the risk of over-fitting; making the testing error converge quickly and stably therefore remains an important issue. In this paper, we propose a new incremental ELM, the Length-Changeable Incremental Extreme Learning Machine (LCI-ELM). It allows more than one hidden node to be added at a time, and the existing network is treated as a whole when the output weights are tuned. The output weights of newly added hidden nodes are determined by a partial error-minimizing method. We prove that an SLFN constructed by LCI-ELM has approximation capability on a universal compact input set as well as on a finite training set. Experimental results demonstrate that LCI-ELM achieves a higher convergence rate and a lower over-fitting risk than several competitive I-ELM-class algorithms.
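The baseline I-ELM construction that LCI-ELM generalizes — one random node per step, with its output weight set by minimizing the residual against that node alone — can be sketched as (an illustrative implementation, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(4)

def ielm_fit(X, y, max_nodes=50):
    """I-ELM-style construction: add one random hidden node at a time and
    set its output weight to the least-squares fit of the current residual."""
    residual = y.copy()
    nodes, betas = [], []
    for _ in range(max_nodes):
        w = rng.standard_normal(X.shape[1])
        b = rng.standard_normal()
        h = np.tanh(X @ w + b)
        beta = (h @ residual) / (h @ h)   # partial error minimization
        residual = residual - beta * h    # residual never grows in norm
        nodes.append((w, b))
        betas.append(beta)
    return nodes, np.array(betas), residual

X = np.linspace(-1, 1, 80)[:, None]
y = np.sin(3 * X[:, 0])
nodes, betas, residual = ielm_fit(X, y)
```

Each step is a one-dimensional projection, so the training residual is monotonically non-increasing; LCI-ELM's batch additions and whole-network weight tuning aim to make this decrease faster and more stable.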

9.
Because hidden neurons carry uncertainty, choosing significant hidden nodes, known as model selection, plays an important role in applications of extreme learning machines (ELMs); how to define and measure this uncertainty is a key issue of model selection for ELM. From the information-geometry point of view, this paper presents a new model selection method for ELM regression based on the Riemannian metric. First, the paper proves theoretically that the uncertainty can be characterized by a form of Riemannian metric, and accordingly proposes a new uncertainty evaluation for ELM obtained by averaging the Riemannian metric over all hidden neurons. Then hidden nodes are added to the network one by one, and at each step a multi-objective optimization algorithm selects optimal input weights by simultaneously minimizing this uncertainty evaluation and the norm of the output weights in order to obtain better generalization performance. Experiments on five UCI regression data sets and a cylindrical-shell vibration data set demonstrate that the proposed method generally obtains lower generalization error than the original ELM, the evolutionary ELM, ELM with model selection, and the multi-dimensional support vector machine. Moreover, the proposed algorithm generally needs fewer hidden neurons and less computational time than the traditional approaches, which is very favorable in engineering applications.

10.
A soft-sensor modeling method based on an improved extreme learning machine
To address the difficulty of measuring certain biological parameters in fermentation processes, a soft-sensor modeling method based on an improved extreme learning machine (IELM) is proposed. The method computes the optimal input-to-hidden-layer learning parameters via least squares and an error-feedback principle, improving the stability and prediction accuracy of the model. The optimal output weights are computed via bidiagonalization, which resolves the ill-conditioning of the output matrix and further improves stability. The proposed method is applied to soft sensing of biomass concentration in an erythromycin fermentation process. The results show that, compared with the ELM, PL-ELM, and IRLS-ELM soft-sensor modeling methods, the online IELM soft-sensor modeling method achieves higher prediction accuracy and stronger generalization ability.

11.
Recently there has been renewed interest in single-hidden-layer neural networks (SHLNNs), owing to their powerful modeling ability and the existence of efficient learning algorithms. A prominent example of such algorithms is the extreme learning machine (ELM), which assigns random values to the lower-layer weights. While ELM can be trained efficiently, it requires many more hidden units than conventional neural networks typically need to achieve matched classification accuracy. A large number of hidden units translates into significantly increased test time, which in practice matters more than training time. In this paper, we propose a series of new efficient learning algorithms for SHLNNs. Our algorithms exploit both the structure of SHLNNs and the gradient information over all training epochs, and update the weights in the direction along which the overall squared error is reduced the most. Experiments on the MNIST handwritten-digit recognition task and the MAGIC gamma telescope data set show that the proposed algorithms obtain significantly better classification accuracy than ELM with the same number of hidden units. To reach the same classification accuracy, our best algorithm requires only 1/16 of the model size, and thus approximately 1/16 of the test time, of ELM. This large advantage is gained at the expense of at most 5 times the training cost of ELM.

12.
Li Jun, Nai Yongqiang. Control and Decision (《控制与决策》), 2015, 30(9): 1559-1566

For a class of multi-input multi-output (MIMO) affine nonlinear dynamic systems, a robust adaptive neural control method based on the extreme learning machine (ELM) is proposed. ELM randomly assigns the hidden-layer parameters of single-hidden-layer feedforward networks (SLFNs) and tunes only the output weights, achieving good generalization at extremely fast learning speed. In the proposed method, ELM approximates the unknown nonlinear terms of the system, and parameter adaptation laws are designed for the ELM weights, the approximation error, and the unknown upper bound of the external disturbance. Lyapunov stability analysis guarantees that all signals of the closed-loop system are semi-globally uniformly ultimately bounded. Simulation results demonstrate the effectiveness of the proposed control method.


13.
High-frequency surface wave radar (HFSWR) is widely used for over-the-horizon detection and tracking of ship targets. However, the electromagnetic environment in the HFSWR operating band is highly complex, and ship target signals are often buried in various kinds of noise. This paper proposes a method based on an optimized error self-adjustment extreme learning machine (Optimized error self-adjustment e...

14.
This paper proposes a modified ELM algorithm that properly selects the input weights and biases before training the output weights of single-hidden-layer feedforward neural networks with a sigmoidal activation function, and proves mathematically that the hidden-layer output matrix maintains full column rank. The modified ELM avoids the randomness of the original ELM. Experimental results on both regression and classification problems show the good performance of the modified ELM algorithm.

15.
The extreme learning machine (ELM) is widely applied to classification and regression because of its efficient training, but different input weights can substantially affect its learning performance. To further improve this performance, this paper studies the input weights of ELM: exploiting the sparsity of local receptive fields in images, the local-receptive-field approach is applied to the autoencoder-based ELM (ELM-AE), yielding a local-receptive-field class-constrained extreme learning machine (RF-C2ELM). Classification experiments on the MNIST data set show that, with the same number of hidden nodes, the proposed method achieves higher classification accuracy.

16.
This paper proposes a novel self-constructing least-Wilcoxon generalized radial basis function neural-fuzzy system (LW-GRBFNFS) and applies it to nonlinear function approximation and chaotic time-series prediction. In most traditional RBFNFSs, the hidden-layer parameters of the antecedent part are decided in advance and the output weights of the consequent part are evaluated by least-squares estimation. The hidden-layer structure of such an RBFNFS lacks flexibility, because it is fixed and cannot be adjusted effectively to the dynamic behavior of the system; furthermore, the performance of least-squares estimation of the output weights is often weakened by noise and outliers. This paper creates a self-constructing scenario that generates the antecedent part of the RBFNFS with a particle swarm optimizer (PSO). For training the consequent part, the least-Wilcoxon (LW) norm is employed for estimation instead of traditional least-squares (LS) estimation. As is well known in statistics, linear fits obtained with the rank-based LW norm are usually robust against (insensitive to) noise and outliers, which increases the accuracy of the output weights of the RBFNFS. Several nonlinear function approximation and chaotic time-series prediction problems are used to verify the efficiency of the proposed self-constructing LW-GRBFNFS. The experimental results show that the proposed method not only creates optimal hidden nodes but also effectively mitigates the effects of noise and outliers.

17.
This paper presents a performance enhancement scheme that applies particle swarm optimization (PSO) to the recently developed extreme learning machine (ELM) for classifying power system disturbances. Learning time is an important factor in designing any computationally intelligent classification algorithm. ELM is a single-hidden-layer neural network with good generalization capability and extremely fast learning capacity; its input weights are chosen randomly and its output weights are calculated analytically. However, ELM may need a large number of hidden neurons because the input weights and hidden biases are determined randomly. One advantage of ELM over other methods is that the only parameter the user must tune is the number of hidden nodes, and optimal selection of this parameter can improve performance. In this paper, a hybrid optimization mechanism is proposed that combines discrete-valued PSO with continuous-valued PSO to optimize the input feature subset and the number of hidden nodes, thereby enhancing the performance of ELM. The experimental results show that the proposed algorithm is faster and more accurate in discriminating power system disturbances.

18.
Feedforward neural networks have been used extensively to approximate complex nonlinear mappings directly from input samples, but their traditional learning algorithms are usually much slower than required. In this work, two hidden-feature-space ridge regression methods, HFSR and centered-ELM, are first proposed for feedforward networks. As special kernel methods, HFSR and centered-ELM share two important characteristics: the rigorous Mercer condition on kernel functions is not required, and they can inherently propagate the prominent advantages of ELM to multilayer feedforward networks (MLFNs). In addition to the randomly assigned weights adopted in both ELM and HFSR, HFSR exploits another source of randomness: exemplars randomly selected from the training set for the kernel activation functions. Through forward layer-by-layer data transformation, HFSR and centered-ELM extend to MLFNs. Accordingly, as a unified framework for HFSR and centered-ELM, the least learning machine (LLM) is proposed for both SLFNs and MLFNs with single or multiple outputs. LLM gives a new learning method for MLFNs that keeps the virtues ELM has for SLFNs: only the parameters in the last hidden layer need to be adjusted, all parameters in the other hidden layers can be assigned randomly, and LLM is also much faster than BP for MLFNs in training. The experimental results clearly indicate the power of LLM in nonlinear regression modeling.

19.
Recently, a novel learning algorithm for single-hidden-layer feedforward neural networks (SLFNs) named the extreme learning machine (ELM) was proposed by Huang et al. The essence of ELM is that the learning parameters of the hidden nodes, including input weights and biases, are randomly assigned and need not be tuned, while the output weights can be determined analytically by a simple generalized inverse operation; the only parameter that needs to be defined is the number of hidden nodes. Compared with other traditional learning algorithms for SLFNs, ELM provides much faster learning speed and better generalization performance with the least human intervention. This paper first gives a brief review of ELM, describing its principle and algorithm. It then focuses on improved methods and typical variants of ELM, especially incremental ELM, pruning ELM, error-minimized ELM, two-stage ELM, online sequential ELM, evolutionary ELM, voting-based ELM, ordinal ELM, fully complex ELM, and symmetric ELM. Next, the paper summarizes applications of ELM to classification, regression, function approximation, pattern recognition, forecasting, diagnosis, and so on. Finally, the paper discusses several open issues of ELM that may be worth exploring in the future.
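The core algorithm reviewed above fits in a few lines: draw random hidden parameters, then perform a single pseudoinverse solve for the output weights. A minimal sketch (activation choice, weight scaling, and the toy data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

def elm_train(X, T, n_hidden=25):
    """Canonical ELM: random input weights and biases, output weights by
    the Moore-Penrose generalized inverse of the hidden-layer matrix."""
    W = 5.0 * rng.standard_normal((X.shape[1], n_hidden))  # scaled for varied sigmoids
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))                 # sigmoid hidden layer
    beta = np.linalg.pinv(H) @ T                           # analytic solve, no iteration
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return (1.0 / (1.0 + np.exp(-(X @ W + b)))) @ beta

X = np.linspace(0, 1, 60)[:, None]
T = np.cos(2 * np.pi * X[:, 0])
model = elm_train(X, T)
mse = np.mean((elm_predict(model, X) - T) ** 2)
```

All the variants surveyed in the abstract (incremental, pruning, online sequential, and so on) modify how `H` is built or grown; the analytic output-weight solve is the invariant at the center.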

20.
Affected by road conditions and human factors, a real traffic system can be regarded as a complex nonlinear dynamical system, and traffic-flow data are strongly nonlinear, time-varying, and easily corrupted by random noise. For short-term traffic-flow prediction in such complex environments, this paper proposes a method based on a fireworks-differential-evolution hybrid algorithm and an extreme learning machine. Singular spectrum analysis filters the noise components from the raw traffic-flow data, and the denoised data are used to train an extreme learning machine (ELM) prediction model. Phase-space reconstruction is performed, with the C-C algorithm determining the structure and key parameters of the ELM network. A fireworks-differential-evolution hybrid algorithm is proposed by fusing the fireworks algorithm with differential evolution, which effectively improves the overall optimization performance of the basic algorithms. The improved hybrid algorithm is used to optimize the weights and thresholds of the ELM network (structure 9-11-1, dimension 110), yielding a short-term traffic-flow prediction model. Test and application results show that the constructed model achieves high prediction accuracy and strong generalization ability (mean squared error 7.75, mean absolute percentage error 0.0867), with the predicted values fitting the actual values well.
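The singular-spectrum-analysis denoising step used above can be sketched as Hankel embedding, truncated SVD, and diagonal averaging (the window length, rank, and toy signal are illustrative, not the paper's settings):

```python
import numpy as np

def ssa_denoise(x, window=20, rank=2):
    """SSA sketch: embed the series into a trajectory (Hankel) matrix,
    keep the leading singular components, and average back along
    anti-diagonals to recover a series."""
    n = len(x)
    k = n - window + 1
    H = np.column_stack([x[i:i + window] for i in range(k)])  # trajectory matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]                 # low-rank part
    out = np.zeros(n)
    cnt = np.zeros(n)
    for j in range(k):                                        # diagonal averaging
        out[j:j + window] += Hr[:, j]
        cnt[j:j + window] += 1
    return out / cnt

rng = np.random.default_rng(6)
t = np.linspace(0, 4 * np.pi, 200)
clean = np.sin(t)
noisy = clean + 0.3 * rng.standard_normal(200)
denoised = ssa_denoise(noisy)
```

A single oscillatory mode occupies a pair of leading singular components, so a small rank already separates the signal from broadband noise before the ELM is trained.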

