Similar Literature
20 similar documents found
1.
Symmetric extreme learning machine
Extreme learning machine (ELM) can be considered a black-box modeling approach that seeks a model representation extracted from the training data. In this paper, a modified ELM algorithm, called symmetric ELM (S-ELM), is proposed by incorporating a priori symmetry information. S-ELM is realized by transforming the original activation function of the hidden neurons into one that is symmetric with respect to the input variables of the samples. In theory, S-ELM can approximate N arbitrary distinct samples with zero error. Simulation results show that, in applications where prior knowledge of symmetry exists, S-ELM can obtain better generalization performance, faster learning speed, and a more compact network architecture.
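For intuition, the following is a minimal numpy sketch of the symmetrization idea, not the authors' exact construction: each hidden neuron's response is replaced by its odd (or even) part, so the network output is antisymmetric (or symmetric) in the input by construction. The node count and initialization range are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def selm_fit(X, y, n_hidden=50, odd=True, seed=0):
    """Sketch of a symmetric ELM: hidden outputs are the odd (or even)
    part of the sigmoid response, so the model inherits the symmetry."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, (X.shape[1], n_hidden))   # random input weights
    b = rng.uniform(-1.0, 1.0, n_hidden)                 # random biases
    s = -1.0 if odd else 1.0
    H = 0.5 * (sigmoid(X @ W + b) + s * sigmoid(-X @ W + b))
    beta = np.linalg.pinv(H) @ y                         # analytic output weights
    return W, b, s, beta

def selm_predict(X, W, b, s, beta):
    H = 0.5 * (sigmoid(X @ W + b) + s * sigmoid(-X @ W + b))
    return H @ beta

# Usage: approximating an odd function such as sin on [-pi, pi]
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()
model = selm_fit(X, y)
y_hat = selm_predict(X, *model)
```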

2.
Extreme learning machine (ELM) [G.-B. Huang, Q.-Y. Zhu, C.-K. Siew, Extreme learning machine: a new learning scheme of feedforward neural networks, in: Proceedings of the International Joint Conference on Neural Networks (IJCNN2004), Budapest, Hungary, 25-29 July 2004], a novel learning algorithm much faster than traditional gradient-based learning algorithms, was recently proposed for single-hidden-layer feedforward neural networks (SLFNs). However, ELM may need a higher number of hidden neurons due to the random determination of the input weights and hidden biases. In this paper, a hybrid learning algorithm is proposed that uses a differential evolution algorithm to select the input weights and the Moore-Penrose (MP) generalized inverse to analytically determine the output weights. Experimental results show that this approach achieves good generalization performance with much more compact networks.
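A toy sketch of the hybrid scheme: differential evolution searches over the flattened input weights and biases, while each candidate's output weights come analytically from the Moore-Penrose inverse. The DE/rand/1/bin operators and the hyperparameters (pop, F, CR) are illustrative choices, not the paper's settings.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elm_rmse(params, X, y, d, m):
    """Decode a flat DE candidate into (W, b), fit output weights
    analytically with the MP inverse, and score the candidate."""
    W = params[:d * m].reshape(d, m)
    b = params[d * m:]
    H = sigmoid(X @ W + b)
    beta = np.linalg.pinv(H) @ y
    return np.sqrt(np.mean((H @ beta - y) ** 2))

def de_elm(X, y, n_hidden=10, pop=20, gens=50, F=0.8, CR=0.9, seed=0):
    rng = np.random.default_rng(seed)
    d, m = X.shape[1], n_hidden
    dim = d * m + m
    P = rng.uniform(-1.0, 1.0, (pop, dim))        # population of (W, b) vectors
    cost = np.array([elm_rmse(p, X, y, d, m) for p in P])
    for _ in range(gens):
        for i in range(pop):
            a, b2, c = P[rng.choice(pop, 3, replace=False)]
            mutant = a + F * (b2 - c)             # DE/rand/1 mutation
            trial = np.where(rng.random(dim) < CR, mutant, P[i])
            f = elm_rmse(trial, X, y, d, m)
            if f < cost[i]:                       # greedy selection
                P[i], cost[i] = trial, f
    return P[np.argmin(cost)]                     # best (W, b) found
```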

3.
In order to overcome the disadvantages of traditional algorithms for single-hidden-layer feedforward neural networks (SLFNs), an improved algorithm called the extreme learning machine (ELM) was proposed by Huang et al. However, ELM is sensitive to the number of neurons in the hidden layer, and selecting that number is a hard problem. In this paper, a self-adaptive mechanism is introduced into ELM, yielding a new variant called the self-adaptive extreme learning machine (SaELM). SaELM is a self-adaptive learning algorithm that always selects the best number of hidden neurons to form the network, with no parameters to adjust during training. To evaluate SaELM, it is applied to the Italian wine and iris classification problems. Comparisons with traditional back-propagation, basic ELM, and general regression neural networks show that SaELM has a faster learning speed and better generalization performance on these classification problems.
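The abstract does not spell out the self-adaptive mechanism; one plausible minimal reading is a validation sweep that keeps the hidden-node count with the lowest held-out error, sketched below (the candidate grid is illustrative).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_elm(X, y, n_hidden, rng):
    W = rng.uniform(-1.0, 1.0, (X.shape[1], n_hidden))
    b = rng.uniform(-1.0, 1.0, n_hidden)
    beta = np.linalg.pinv(sigmoid(X @ W + b)) @ y
    return W, b, beta

def select_best_elm(X_tr, y_tr, X_va, y_va, candidates=range(5, 105, 5), seed=0):
    """Try each hidden-node count, keep the model with the lowest
    validation error (a stand-in for the paper's self-adaptive step)."""
    rng = np.random.default_rng(seed)
    best, best_err = None, np.inf
    for m in candidates:
        W, b, beta = fit_elm(X_tr, y_tr, m, rng)
        err = np.mean((sigmoid(X_va @ W + b) @ beta - y_va) ** 2)
        if err < best_err:
            best, best_err = (W, b, beta), err
    return best, best_err
```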

4.
Variational Bayesian extreme learning machine
Extreme learning machine (ELM) randomly generates the parameters of the hidden nodes and then analytically determines the output weights, giving fast learning. However, the ill-posedness of the hidden-node parameter matrix directly causes unstable performance, and automatic selection of hidden nodes is critical to keeping ELM efficient. Focusing on these two problems, this paper proposes the variational Bayesian extreme learning machine (VBELM). First, a Bayesian probabilistic model is incorporated into ELM, where the Bayesian prior distribution avoids the ill-posedness of the hidden-node matrix. Then, variational inference is employed in the Bayesian model to approximate the posterior distribution and the independent variational hyperparameters, which can be used to select hidden nodes automatically. Theoretical analysis and experimental results show that VBELM achieves more stable performance with more compact architectures, provides probabilistic rather than traditional point predictions, and supplies a hyperparameter criterion for hidden-node selection.

5.
Convex incremental extreme learning machine
Guang-Bin Huang, Lei Chen. Neurocomputing, 2007, 70(16-18): 3056
Unlike conventional neural network theories and implementations, Huang et al. [Universal approximation using incremental constructive feedforward networks with random hidden nodes, IEEE Transactions on Neural Networks 17(4) (2006) 879–892] have recently proposed a new theory showing that single-hidden-layer feedforward networks (SLFNs) with randomly generated additive or radial basis function (RBF) hidden nodes (according to any continuous sampling distribution) can work as universal approximators, and that the resulting incremental extreme learning machine (I-ELM) outperforms many popular learning algorithms. I-ELM randomly generates the hidden nodes and analytically calculates the output weights of SLFNs; however, it does not recalculate the output weights of the existing nodes when a new node is added. This paper shows that, while retaining the same simplicity, the convergence rate of I-ELM can be further improved by recalculating the output weights of the existing nodes, based on a convex optimization method, whenever a new hidden node is randomly added. Furthermore, we show that, given a type of piecewise continuous computational hidden nodes (possibly not neuron-like), if SLFNs can work as universal approximators with adjustable hidden node parameters, then from a function approximation point of view the hidden node parameters of such "generalized" SLFNs (including sigmoid networks, RBF networks, trigonometric networks, threshold networks, fuzzy inference systems, fully complex neural networks, high-order networks, ridge polynomial networks, wavelet networks, etc.) can actually be randomly generated according to any continuous sampling distribution. In theory, the parameters of these SLFNs can be analytically determined by ELM instead of being tuned.
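The convex recalculation admits a compact closed form that follows from elementary optimization: writing the new output as the convex combination f_n = (1 - beta) * f_{n-1} + beta * h_n, the beta minimizing ||y - f_n||^2 is beta = <e, h - f> / ||h - f||^2, after which every previous output weight is rescaled by (1 - beta). A sketch under these assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def incremental_elm(X, y, max_nodes=100, convex=True, seed=0):
    """I-ELM with an optional convex update of existing output weights.
    convex=False reproduces the plain I-ELM step beta = <e,h>/<h,h>."""
    rng = np.random.default_rng(seed)
    f = np.zeros(len(y))                 # current network output on X
    nodes, betas = [], []
    for _ in range(max_nodes):
        w = rng.uniform(-1.0, 1.0, X.shape[1])
        b = rng.uniform(-1.0, 1.0)
        h = sigmoid(X @ w + b)           # new node's activations
        e = y - f                        # current residual
        if convex:
            d = h - f
            beta = float(e @ d) / float(d @ d)           # optimal convex step
            betas = [bt * (1.0 - beta) for bt in betas]  # rescale old weights
            f = (1.0 - beta) * f + beta * h
        else:
            beta = float(e @ h) / float(h @ h)
            f = f + beta * h
        nodes.append((w, b))
        betas.append(beta)
    return nodes, betas
```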

6.
To address the architecture-design problem of the extreme learning machine (ELM), a structure-growing algorithm for feedforward neural networks is proposed based on the hidden-layer activation function and its derivative. First, taking the sigmoid function as an example, a derivation property of this class of basis functions is given: the derivative can be expressed in terms of the original function. Second, using this property, an ELM structure-design method is proposed that automatically generates a two-hidden-layer feedforward network: the nodes of the first hidden layer are generated randomly one at a time, the outputs of the second hidden layer are determined by the activation function of the newly added first-layer node and its derivative, and the output-layer weights are obtained analytically by least squares. Finally, theoretical proofs of the convergence and stability of the proposed algorithm are given. Simulation results on nonlinear system identification and the two-spiral classification problem demonstrate the effectiveness of the proposed algorithm.
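The derivation property for the sigmoid is easy to verify numerically: g'(z) = g(z)(1 - g(z)), which is consistent with building the second hidden layer from the first layer's activation values alone. A quick check:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-5.0, 5.0, 11)
g = sigmoid(z)
# Central finite difference vs. the closed form g'(z) = g(z) * (1 - g(z))
eps = 1e-6
numeric = (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)
assert np.allclose(numeric, g * (1.0 - g), atol=1e-8)
```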

7.
Extreme learning machine (ELM) works for generalized single-hidden-layer feedforward networks (SLFNs), and its essence is that the hidden layer of the SLFN need not be tuned. However, ELM only utilizes labeled data for supervised learning. In order to exploit unlabeled data in the ELM model, we first extend the manifold regularization (MR) framework and then demonstrate the relation between the extended MR framework and ELM. Finally, a manifold-regularized extreme learning machine is derived from the proposed framework, which retains the properties of ELM and is applicable to large-scale learning problems. Experimental results show that the proposed semi-supervised extreme learning machine is the most cost-efficient method: it tends to scale better and to achieve satisfactory generalization performance at a faster learning speed than traditional semi-supervised learning algorithms.
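As a simplified reading of the approach (not the paper's exact formulation), the sketch below adds a graph-Laplacian smoothness penalty, computed over labeled and unlabeled points, to the regularized least-squares problem for the output weights; the Gaussian affinity and all hyperparameters are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def semi_supervised_elm(Xl, yl, Xu, n_hidden=50, lam=0.1, C=1.0, seed=0):
    """Sketch: ELM output weights with a graph-Laplacian smoothness penalty
    computed over labeled (Xl) and unlabeled (Xu) data."""
    rng = np.random.default_rng(seed)
    X = np.vstack([Xl, Xu])
    W = rng.uniform(-1.0, 1.0, (X.shape[1], n_hidden))
    b = rng.uniform(-1.0, 1.0, n_hidden)
    H = sigmoid(X @ W + b)
    Hl = H[:len(Xl)]
    # Dense Gaussian affinity graph (fine for a sketch; use k-NN at scale)
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    S = np.exp(-D2 / D2.mean())
    L = np.diag(S.sum(axis=1)) - S            # unnormalized graph Laplacian
    A = Hl.T @ Hl + lam * H.T @ L @ H + np.eye(n_hidden) / C
    beta = np.linalg.solve(A, Hl.T @ yl)
    return W, b, beta
```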

8.
A wavelet extreme learning machine
Extreme learning machine (ELM) has been widely used in various fields to overcome the low training speed of conventional neural networks. Kernel extreme learning machine (KELM) introduces the kernel method into the ELM model, making it applicable to statistical machine learning. However, if the number of samples is too small, unbalanced samples may fail to reflect the statistical characteristics of the input data, degrading learning ability. At the same time, the kernel functions commonly used in KELM are conventional ones, so the choice of kernel function can still be optimized. To address these problems, we introduce a weighting method into KELM to handle unbalanced samples. Wavelet kernel functions have been widely used in support vector machines and achieve good classification performance; therefore, to combine wavelet analysis with KELM, we introduce wavelet kernel functions into the KELM model, using a mixed kernel of a wavelet kernel and a sigmoid kernel, and apply the weighting method to balance the sample distribution. We then propose the weighted wavelet-mixed-kernel extreme learning machine. Experimental results show that this method effectively improves classification ability with better generalization, and that the wavelet kernel functions perform very well compared with conventional kernel functions in the KELM model.
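A sketch of the weighted mixed-kernel idea under assumed choices: a Morlet-style wavelet kernel mixed with a sigmoid (tanh) kernel, and per-sample weights folded into the kernel system. The solution form alpha = (I/C + W K)^{-1} W Y follows the weighted-ELM literature and is an assumption here, as are all hyperparameters.

```python
import numpy as np

def wavelet_kernel(X, Z, a=1.0):
    """Morlet-style wavelet kernel, a common choice in the SVM literature."""
    diff = X[:, None, :] - Z[None, :, :]
    return np.prod(np.cos(1.75 * diff / a) * np.exp(-diff ** 2 / (2 * a ** 2)), axis=2)

def sigmoid_kernel(X, Z, gamma=0.1, c=0.0):
    return np.tanh(gamma * X @ Z.T + c)

def mixed_kernel(X, Z, mix=0.5):
    return mix * wavelet_kernel(X, Z) + (1.0 - mix) * sigmoid_kernel(X, Z)

def weighted_kelm_fit(X, Y, w, mix=0.5, C=10.0):
    """Per-sample weights w (e.g. inverse class frequency) balance the classes."""
    K = mixed_kernel(X, X, mix)
    Wm = np.diag(w)
    return np.linalg.solve(np.eye(len(X)) / C + Wm @ K, Wm @ Y)

def weighted_kelm_predict(X_new, X_train, alpha, mix=0.5):
    return mixed_kernel(X_new, X_train, mix) @ alpha
```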

9.
来杰, 王晓丹, 李睿, 赵振冲. 《计算机应用》 (Journal of Computer Applications), 2019, 39(6): 1619-1625
To address the problems that the random assignment of parameters in the extreme learning machine (ELM) reduces robustness and that ELM performance is significantly affected by noise, a denoising-autoencoder-based extreme learning machine (DAE-ELM) is proposed by combining the denoising autoencoder (DAE) with ELM. First, the denoising autoencoder generates the input data, input weights, and hidden-layer parameters of the ELM; then the ELM computes the hidden-layer output weights to complete the training of the classifier. On one hand, the algorithm inherits the advantages of the DAE: the automatically extracted features are more representative and robust, with strong noise suppression. On the other hand, it overcomes the randomness of ELM parameter assignment and enhances robustness. Experimental results show that, in the noise-free setting, the classification error rate of DAE-ELM is at least 5.6% lower than that of ELM, PCA-ELM, and SAA-2 on the MNIST dataset, at least 3.0% lower on Fashion-MNIST, at least 2.0% lower on Rectangles, and at least 12.7% lower on Convex.
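A minimal sketch of the denoising step in ELM-autoencoder form; the Gaussian corruption and the ELM-AE-style feature map g(X beta^T) are assumptions, and the paper's full pipeline has more stages.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dae_elm_features(X, n_hidden=100, noise=0.1, seed=0):
    """Denoising ELM autoencoder sketch: corrupt the input, learn output
    weights that reconstruct the clean data, then reuse them as a feature map."""
    rng = np.random.default_rng(seed)
    X_noisy = X + noise * rng.standard_normal(X.shape)   # Gaussian corruption
    W = rng.uniform(-1.0, 1.0, (X.shape[1], n_hidden))
    b = rng.uniform(-1.0, 1.0, n_hidden)
    H = sigmoid(X_noisy @ W + b)
    beta = np.linalg.pinv(H) @ X          # reconstruct the *clean* input
    return sigmoid(X @ beta.T)            # ELM-AE-style learned features
```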

10.
The extreme learning machine (ELM) is widely used for classification and regression because of its efficient training, but different input weights can greatly affect its learning performance. To further improve the learning performance of ELM, this paper studies its input weights: exploiting the sparsity of local receptive fields in images, the local-receptive-field method is applied to the autoencoder-based ELM (ELM-AE), and a local-receptive-field class-constrained extreme learning machine (RF-C2ELM) is proposed. Classification experiments on the MNIST dataset show that, with the same number of hidden nodes, the proposed method achieves higher classification accuracy.

11.
The original extreme learning machine (ELM) was designed for balanced data and equalized the misclassification cost of every sample to obtain its solution. The weighted extreme learning machine assumes that balance can be achieved by making the total misclassification costs equal. This paper improves the previous weighted ELM with a decay-weight matrix for balanced and optimized learning. The decay-weight matrix is based on the number of samples in each class, but the weight sums of the classes are not necessarily equal: as the number of samples decreases, the weight sum also decreases. By adjusting the decay rate, the classifier can reach a more appropriate boundary position. Experimental results show that the decay-weighted ELM performs better on imbalanced classification tasks, particularly multiclass tasks. The method was successfully applied to build the prediction model in an urban traffic congestion prediction system.
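A sketch of the decay-weight construction together with a weighted ELM solve; the decay exponent and the solution form beta = (I/C + H^T W H)^{-1} H^T W Y follow the weighted-ELM literature and are assumptions, not the paper's exact matrices.

```python
import numpy as np

def decay_weights(y, decay=0.5):
    """Per-sample weights that shrink with class size; unlike plain
    class-balancing, the per-class weight *sums* are not forced equal."""
    classes, counts = np.unique(y, return_counts=True)
    per_class = {c: 1.0 / (n ** decay) for c, n in zip(classes, counts)}
    return np.array([per_class[c] for c in y])

def weighted_elm_fit(H, Y, w, C=10.0):
    """Weighted regularized least squares for the ELM output weights."""
    Wm = np.diag(w)
    A = np.eye(H.shape[1]) / C + H.T @ Wm @ H
    return np.linalg.solve(A, H.T @ Wm @ Y)
```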

12.
As a new learning paradigm, label distribution learning with specialized algorithms built on the maximum-entropy model can solve certain label-ambiguity problems well, but at a huge computational cost. This paper therefore introduces the faster and more stable kernel extreme learning machine and proposes a label distribution learning algorithm based on it (KELM-LDL). First, the RBF kernel maps the features into a high-dimensional space; then a KELM regression model is built on the original label space to obtain the output weights; finally, the model predicts the label distribution of unknown samples. Experiments on datasets of different sizes from various domains show that the algorithm outperforms several comparison algorithms, and statistical hypothesis tests further confirm the effectiveness and stability of KELM-LDL.
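A sketch of the pipeline as described: RBF-kernel ridge regression from features to the label-distribution matrix, plus a softmax renormalization of predictions (the renormalization is an added assumption to keep outputs valid distributions):

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    D2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * D2)

def kelm_ldl_fit(X, D, C=10.0, gamma=1.0):
    """Fit a KELM regression from features to the label-distribution
    matrix D, whose rows sum to 1."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + np.eye(len(X)) / C, D)

def kelm_ldl_predict(X_new, X_train, alpha, gamma=1.0):
    raw = rbf_kernel(X_new, X_train, gamma) @ alpha
    # Softmax so each predicted row is a valid label distribution
    e = np.exp(raw - raw.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
```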

13.
The extreme learning machine has been widely used in pattern recognition for its speed, efficiency, and good generalization ability. However, existing ELM algorithms and their variants do not fully consider the effect of data dimensionality on classification performance and generalization: when the dimensionality is too high, redundant attributes and noisy points inevitably degrade the generalization ability of ELM. To address this problem, this paper proposes an extreme learning machine based on manifold learning, which combines dimensionality-reduction techniques to effectively remove the influence of redundant attributes and noise on ELM classification. Experiments on widely used image datasets show that the proposed algorithm significantly improves the generalization performance of ELM.
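A minimal pipeline sketch; PCA stands in for the unspecified manifold-learning step purely to show where dimensionality reduction slots in before ELM.

```python
import numpy as np
from sklearn.decomposition import PCA

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reduce_then_elm(X_tr, y_tr, X_te, n_components=20, n_hidden=100, seed=0):
    """Strip redundant/noisy dimensions first, then train a plain ELM."""
    pca = PCA(n_components=n_components).fit(X_tr)
    Z_tr, Z_te = pca.transform(X_tr), pca.transform(X_te)
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, (n_components, n_hidden))
    b = rng.uniform(-1.0, 1.0, n_hidden)
    beta = np.linalg.pinv(sigmoid(Z_tr @ W + b)) @ y_tr
    return sigmoid(Z_te @ W + b) @ beta
```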

14.

To address the large number of redundant nodes in the incremental extreme learning machine (I-ELM), which reduce learning efficiency and accuracy, an improved incremental kernel extreme learning algorithm based on the Delta test (DT) and the chaos optimization algorithm (COA) is proposed. The global search ability of COA is used to optimize the hidden-node parameters of I-ELM, while the DT checks the model's output error and determines an effective number of hidden nodes, thereby reducing network complexity and improving learning efficiency; adding a kernel function enhances the network's online prediction ability. Simulation results show that the proposed DCI-ELMK algorithm achieves good prediction accuracy and generalization ability with a more compact network structure.


15.
郭威, 徐涛, 于建江, 汤克明. 《控制与决策》 (Control and Decision), 2017, 32(9): 1556-1564
For large-scale online learning problems, a two-dimensionally partitioned sequential regularized extreme learning machine (BP-SRELM) is proposed. Building on the online sequential extreme learning machine and the divide-and-conquer strategy, BP-SRELM partitions the high-dimensional hidden-layer output matrix along both the instance and the feature dimensions to reduce problem size and computational complexity, greatly improving efficiency on large-scale learning problems. BP-SRELM also incorporates Tikhonov regularization to further enhance stability and generalization in practical applications. Experimental results show that BP-SRELM not only achieves higher stability and prediction accuracy but also has a clear advantage in learning speed, making it suitable for online learning and real-time modeling of large-scale data streams.
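The sequential core that BP-SRELM builds on can be written as chunk-wise recursive least squares; below is a sketch of a regularized online sequential ELM, with the two-dimensional partitioning itself omitted.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class OSELM:
    """Regularized online sequential ELM sketch (recursive least squares).
    The chunk-wise update is standard OS-ELM; BP-SRELM's two-dimensional
    partitioning of H is omitted for brevity."""
    def __init__(self, n_in, n_out, n_hidden=100, C=10.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.uniform(-1.0, 1.0, (n_in, n_hidden))
        self.b = rng.uniform(-1.0, 1.0, n_hidden)
        self.P = C * np.eye(n_hidden)          # running (H'H + I/C)^-1 estimate
        self.beta = np.zeros((n_hidden, n_out))

    def _hidden(self, X):
        return sigmoid(X @ self.W + self.b)

    def partial_fit(self, X, Y):
        H = self._hidden(X)
        G = np.linalg.solve(np.eye(len(X)) + H @ self.P @ H.T, H @ self.P)
        self.P -= self.P @ H.T @ G             # RLS covariance update
        self.beta += self.P @ H.T @ (Y - H @ self.beta)

    def predict(self, X):
        return self._hidden(X) @ self.beta
```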

16.
Fingerprint matching based on extreme learning machine
Considering fingerprint matching as a classification problem, the extreme learning machine (ELM) is a powerful classifier for assigning inputs to their corresponding classes: it offers better generalization performance, much faster learning, and minimal human intervention, and is therefore able to overcome the disadvantages of other gradient-based, standard optimization-based, and least-squares-based learning techniques, such as high computational complexity and difficult parameter tuning. This paper proposes a novel fingerprint recognition system that applies ELM and regularized ELM (R-ELM) to fingerprint matching to overcome the shortcomings of traditional learning methods. The proposed method includes the following steps: effective preprocessing, extraction of invariant moment features, and PCA for feature selection; finally, ELM and R-ELM are used for fingerprint matching. Experimental results show that the proposed methods achieve higher matching accuracy and are less time-consuming, making them suitable for real-time processing; comparative studies also show that the ELM- and R-ELM-based methods outperform traditional ones.

17.
Due to its fast learning speed, simplicity of implementation, and minimal human intervention, the extreme learning machine has recently received considerable attention, mostly from the machine learning community. Generally, the extreme learning machine and its variants focus on classification and regression problems; their potential for analyzing censored time-to-event data is yet to be verified. In this study, we present an extreme learning machine ensemble for modeling right-censored survival data by combining the Buckley-James transformation and the random forest framework. Experimental and statistical analyses show that the proposed model outperforms popular survival models such as random survival forests and Cox proportional hazards models on well-known low-dimensional and high-dimensional benchmark datasets in terms of both prediction accuracy and time efficiency.

18.
Xia Min, Wang Jie, Liu Jia, Weng Liguo, Xu Yiqing. Neural Computing and Applications, 2020, 32(12): 7747-7758
This paper proposes a density-based semi-supervised online sequential extreme learning machine (D-SOS-ELM). The proposed method can realize online learning of...

19.
Face recognition based on extreme learning machine
Extreme learning machine (ELM) is an efficient learning algorithm for generalized single-hidden-layer feedforward networks (SLFNs), and it performs well in both regression and classification applications. It has recently been shown that, from an optimization point of view, ELM and the support vector machine (SVM) are equivalent, but ELM has less stringent optimization constraints. Due to these mild constraints, ELM is easy to implement and usually obtains better generalization performance. In this paper we study the performance of one-against-all (OAA) and one-against-one (OAO) ELM for classification in multi-label face recognition applications. The performance is verified on four benchmark face image data sets.

20.

To address the network-structure optimization problem of the extreme learning machine (ELM), an improved sensitivity-based pruned ELM (ImSAP-ELM) is proposed. ImSAP-ELM introduces an L2 regularization factor into SAP-ELM and uses the leave-one-out criterion to determine the optimal number of hidden nodes. An SVD-based formula for computing the output weights is derived, which avoids invalid solutions caused by matrix singularity. ImSAP-ELM is applied to fault prediction: multiple ImSAP-ELM models are built from several datasets of the same fault type, and their predictions are fused by weighting. A case study on a UAV transmitter shows that, compared with ELM, OP-ELM (optimally pruned ELM), and SAP-ELM, ImSAP-ELM is the most time-consuming, but its prediction error is smaller than that of the other three methods.
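The SVD route to the output weights, which sidesteps a singular H^T H, can be sketched directly (the truncation tolerance is illustrative):

```python
import numpy as np

def svd_output_weights(H, Y, tol=1e-10):
    """beta = V diag(1/s) U^T Y, dropping near-zero singular values so a
    rank-deficient hidden-layer matrix H still yields a valid solution."""
    Y = Y.reshape(len(H), -1)             # accept 1-D or 2-D targets
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    s_inv = np.where(s > tol * s.max(), 1.0 / s, 0.0)
    return Vt.T @ (s_inv[:, None] * (U.T @ Y))
```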

