Similar Literature
1.
Recently, a novel learning algorithm for single-hidden-layer feedforward neural networks (SLFNs) named the extreme learning machine (ELM) was proposed by Huang et al. The essence of ELM is that the learning parameters of the hidden nodes, including input weights and biases, are randomly assigned and need not be tuned, while the output weights can be analytically determined by a simple generalized inverse operation. The only parameter that needs to be defined is the number of hidden nodes. Compared with other traditional learning algorithms for SLFNs, ELM provides much faster learning speed and better generalization performance with the least human intervention. This paper first gives a brief review of ELM, describing its principle and algorithm. Then we focus on improved methods and typical variants of ELM, especially incremental ELM, pruning ELM, error-minimized ELM, two-stage ELM, online sequential ELM, evolutionary ELM, voting-based ELM, ordinal ELM, fully complex ELM, and symmetric ELM. Next, the paper summarizes the applications of ELM in classification, regression, function approximation, pattern recognition, forecasting, diagnosis, and so on. Finally, the paper discusses several open issues of ELM that may be worth exploring in the future.
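The two steps this review describes — random hidden parameters followed by an analytic least-squares solve — can be sketched in a few lines of NumPy. This is a minimal illustration rather than code from any of the surveyed papers; the tanh activation, node count, and toy sine-regression task are assumptions:

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=0):
    """Basic ELM: random input weights/biases, analytic output weights."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1, 1, (X.shape[1], n_hidden))  # random input weights (never tuned)
    b = rng.uniform(-1, 1, n_hidden)                # random hidden biases (never tuned)
    H = np.tanh(X @ W + b)                          # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                    # Moore-Penrose generalized inverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy regression: learn y = sin(x) on [0, pi]
X = np.linspace(0, np.pi, 200).reshape(-1, 1)
T = np.sin(X)
W, b, beta = elm_train(X, T, n_hidden=20)
mse = np.mean((elm_predict(X, W, b, beta) - T) ** 2)
```

The only user-set quantity is `n_hidden`, matching the review's observation that the number of hidden nodes is the sole parameter that needs to be defined.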

2.
Evolutionary selection extreme learning machine optimization for regression
Neural network models for regression can approximate unknown datasets with low error. As an important global regression method, the extreme learning machine (ELM) is a typical learning method for single-hidden-layer feedforward networks, owing to its good generalization performance and fast implementation. The randomness of the input weights lets the nonlinear combination achieve arbitrary function approximation. In this paper, we seek an alternative mechanism for the input connections, with an idea derived from evolutionary algorithms. After predefining the number L of hidden nodes, we generate the original ELM models, treating each hidden node as a gene. The hidden nodes are ranked, and those with larger output weights are retained for the updated ELM, while L/2 trivial hidden nodes are placed in a candidate reservoir. Then, L/2 new hidden nodes are generated and combined with hidden nodes drawn from this candidate reservoir; a second ranking selects among them. Fitness-proportional selection picks L/2 hidden nodes and recombines them into the evolutionary selection ELM. The entire algorithm can be applied to large-scale dataset regression. Verification shows that the regression performance is better than that of the traditional ELM and the Bayesian ELM at a lower cost.
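The node-level selection loop can be illustrated with a deliberately simplified sketch: here the paper's candidate reservoir and fitness-proportional selection are replaced by a plain keep-the-strongest rule on output-weight magnitude, so this shows the general idea, not the exact algorithm. The sinc target and generation count are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-2, 2, 300).reshape(-1, 1)
T = np.sinc(X)                             # toy regression target

L = 20                                     # predefined number of hidden nodes
W = rng.uniform(-1, 1, (1, L))
b = rng.uniform(-1, 1, L)

for generation in range(5):
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ T
    rank = np.abs(beta).ravel()            # |output weight| as node "fitness"
    keep = np.argsort(rank)[L // 2:]       # keep the L/2 strongest nodes ("genes")
    W_new = rng.uniform(-1, 1, (1, L // 2))  # regenerate the trivial half
    b_new = rng.uniform(-1, 1, L // 2)
    W = np.hstack([W[:, keep], W_new])
    b = np.r_[b[keep], b_new]

H = np.tanh(X @ W + b)
beta = np.linalg.pinv(H) @ T
mse = np.mean((H @ beta - T) ** 2)
```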

3.
In this paper, a novel self-adaptive extreme learning machine (ELM) based on affinity propagation (AP) is proposed to optimize the radial basis function neural network (RBFNN). As is well known, the parameters of the original ELM, which was developed by G.-B. Huang, are randomly determined. However, this cannot objectively yield a set of optimal RBFNN parameters when training with the ELM algorithm on different realistic datasets. The AP algorithm automatically produces a set of clustering centers for each dataset. From the AP results, we obtain the cluster number and the radius of each cluster, which can then be used to initialize the number and widths of the hidden-layer neurons in the RBFNN; these are also the parameters of the hidden-layer output matrix H of ELM. This avoids both the subjective prior knowledge and the randomness involved in training an RBFNN. Experimental results show that the proposed method has a more powerful generalization capability than conventional ELM for an RBFNN.

4.
In this work, we look at the symmetry of normal modes in symmetric structures, particularly structures with cyclic symmetry. We show that normal modes of symmetric structures have different levels of symmetry, or symmetricity. One novel theoretical result of this work is that, for a ring structure with m subunits, the symmetricity of the normal modes falls into m groups of equal size, with normal modes in each group having the same symmetricity. The normal modes in each group can be computed separately, using a much smaller amount of memory and time (up to m3 less). Lastly, we show that symmetry in normal modes depends strongly on symmetry in structure. This work suggests a deeper reason for the existence of symmetric complexes: that they may be formed not only for structural purpose, but likely also for a dynamical reason, that certain structural symmetry is needed to obtain certain symmetric motions that are functionally critical.  相似文献   

5.
Cost-sensitive learning is a crucial problem in machine learning research. The traditional classification problem assumes that misclassification of each category has the same cost, and the target of the learning algorithm is to minimize the expected error rate. In cost-sensitive learning, the misclassification costs for samples of different categories are not the same, and the target of the algorithm is to minimize the total misclassification cost. Cost-sensitive learning meets the actual demands of real-life classification problems such as medical diagnosis and financial projections. Due to its fast learning speed and good performance, the extreme learning machine (ELM) has become one of the best classification algorithms, and voting-based ELM (V-ELM) makes classification results more accurate and stable. However, V-ELM and other versions of ELM are all based on the assumption that all misclassifications carry the same cost, so they cannot solve cost-sensitive problems well. To overcome this drawback, an algorithm called cost-sensitive ELM (CS-ELM) is proposed by introducing the misclassification cost of each sample into V-ELM. Experimental results on gene expression data show that CS-ELM is effective in reducing misclassification cost.
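The combination of voting and misclassification costs can be sketched as follows: an ensemble of independent ELMs votes, and instead of taking the majority class, the final decision minimizes the expected cost given the vote proportions. The cost matrix, toy data, and ensemble size are assumptions; this illustrates the decision rule in the spirit of CS-ELM, not the paper's exact formulation:

```python
import numpy as np

def train_elm(X, T_onehot, n_hidden, rng):
    W = rng.uniform(-1, 1, (X.shape[1], n_hidden))
    b = rng.uniform(-1, 1, n_hidden)
    H = np.tanh(X @ W + b)
    return W, b, np.linalg.pinv(H) @ T_onehot

def scores(X, model):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# toy 2-class data: class 1 is a rare, expensive-to-miss minority
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (90, 2)), rng.normal(1.5, 1.0, (10, 2))])
y = np.r_[np.zeros(90), np.ones(10)].astype(int)
T = np.eye(2)[y]

# cost[i, j] = cost of predicting class j when the truth is class i
cost = np.array([[0.0, 1.0],
                 [10.0, 0.0]])   # missing class 1 is ten times worse

models = [train_elm(X, T, 15, rng) for _ in range(7)]            # V-ELM ensemble
votes = np.mean([np.eye(2)[scores(X, m).argmax(1)] for m in models], axis=0)
plain = votes.argmax(1)          # ordinary majority vote
pred = (votes @ cost).argmin(1)  # pick the class with the least expected cost
```

With this cost matrix, the cost-sensitive rule predicts the minority class whenever the majority vote would, and additionally in borderline cases, which is exactly the behavior a cost-sensitive classifier should show.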

6.
In statistical machine translation (SMT), re-ranking the huge number of randomly generated translation hypotheses is one of the essential components in determining the quality of the translation result. In this work, a novel re-ranking framework called cascaded re-ranking modelling (CRM) is proposed, cascading a classification model with a regression model. The proposed CRM effectively and efficiently selects the good but rare hypotheses, simultaneously alleviating the issues of translation quality and computational cost. CRM can be partnered with any classifier, such as support vector machines (SVM) or the extreme learning machine (ELM). Compared to other state-of-the-art methods, experimental results show that CRM partnered with ELM (CRM-ELM) can improve translation quality by up to 11.6% on the popular benchmark Chinese–English corpus (IWSLT 2014) and the French–English parallel corpus (WMT 2015), with extremely fast training on huge corpora.

7.
In this paper, we present a machine learning approach to measure the visual quality of JPEG-coded images. The features for predicting perceived image quality are extracted by considering key human visual sensitivity (HVS) factors such as edge amplitude, edge length, background activity, and background luminance. Image quality assessment involves estimating the functional relationship between HVS features and subjective test scores. The quality of a compressed image is obtained without referring to its original image (a 'no-reference' metric). Here, the quality-estimation problem is transformed into a classification problem and solved using the extreme learning machine (ELM) algorithm. In ELM, the input weights and bias values are randomly chosen and the output weights are analytically calculated. The generalization performance of the ELM algorithm for classification problems with an imbalance in the number of samples per quality class depends critically on the input weights and bias values. Hence, we propose two schemes, the k-fold selection scheme (KS-ELM) and the real-coded genetic algorithm (RCGA-ELM), to select input weights and bias values that maximize the generalization performance of the classifier. Results indicate that the proposed schemes significantly improve the performance of the ELM classifier under imbalanced conditions for image quality assessment, and that the visual quality estimated by RCGA-ELM emulates the mean opinion score very well. The experimental results are compared with an existing JPEG no-reference image quality metric and the full-reference structural similarity image quality metric.

8.
This paper presents a performance enhancement scheme for the recently developed extreme learning machine (ELM) for multi-category sparse data classification problems. ELM is a single-hidden-layer neural network with good generalization capabilities and extremely fast learning capacity. In ELM, the input weights are randomly chosen and the output weights are analytically calculated. The generalization performance of the ELM algorithm for sparse data classification depends critically on three free parameters: the number of hidden neurons, the input weights, and the bias values, which need to be optimally chosen. Selecting these parameters for the best performance of ELM involves a complex optimization problem. In this paper, we present a new real-coded genetic algorithm approach called 'RCGA-ELM' to select the optimal number of hidden neurons, input weights, and bias values for better performance. Two new genetic operators, a 'network-based operator' and a 'weight-based operator', are proposed to find a compact network with higher generalization performance. We also present an alternative, less computationally intensive approach called 'sparse-ELM', which searches for the best ELM parameters using K-fold validation. A multi-class human cancer classification problem using (sparse) micro-array gene expression data is used to evaluate both schemes. Results indicate that the proposed RCGA-ELM and sparse-ELM significantly improve ELM performance for sparse multi-category classification problems.
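The validation-driven parameter search behind sparse-ELM can be illustrated with a stripped-down stand-in: instead of full K-fold validation or genetic operators, this sketch samples several random (input-weight, bias) candidates and keeps the one with the lowest held-out error. The candidate count, node count, and toy target are assumptions:

```python
import numpy as np

def fit_elm(X, T, W, b):
    H = np.tanh(X @ W + b)
    return np.linalg.pinv(H) @ T

def val_mse(X, T, W, b, beta):
    return np.mean((np.tanh(X @ W + b) @ beta - T) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))
T = np.sin(3 * X[:, :1]) * X[:, 1:]           # toy regression target
Xtr, Xv, Ttr, Tv = X[:150], X[150:], T[:150], T[150:]

# sample candidate (W, b) pairs; keep the one with the lowest validation error
best = None
for _ in range(20):
    W = rng.uniform(-1, 1, (2, 25))
    b = rng.uniform(-1, 1, 25)
    beta = fit_elm(Xtr, Ttr, W, b)
    err = val_mse(Xv, Tv, W, b, beta)
    if best is None or err < best[0]:
        best = (err, W, b, beta)
```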

9.
In this paper we introduce a new symmetry feature named the 'symmetry kernel' (SK) to support a measure of symmetry. Given any symmetry transform S, the SK of a pattern P is the maximal included symmetric subset of P over all directions and shifts. We provide a first algorithm to exhibit this kernel, where the centre of symmetry is assumed to be the centre of mass. We then prove that, in any direction, the optimal axis corresponds to the maximal correlation of a pattern with its symmetric version, which leads to a second algorithm. The associated symmetry measure is a modified difference between the respective surfaces of a pattern and its kernel. A series of experiments validates the algorithms.

10.
Extreme learning machine (ELM) works for generalized single-hidden-layer feedforward networks (SLFNs), and its essence is that the hidden layer of SLFNs need not be tuned. However, ELM only uses labeled data to carry out the supervised learning task. In order to exploit unlabeled data in the ELM model, we first extend the manifold regularization (MR) framework and then demonstrate the relation between the extended MR framework and ELM. Finally, a manifold-regularized extreme learning machine is derived from the proposed framework, which maintains the properties of ELM and is applicable to large-scale learning problems. Experimental results show that the proposed semi-supervised extreme learning machine is highly cost-efficient: it tends to have better scalability and achieves satisfactory generalization performance at a faster learning speed than traditional semi-supervised learning algorithms.
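One common way to realize manifold regularization in the ELM feature space is to add a graph-Laplacian penalty to the ridge solve for the output weights; the sketch below follows that recipe on a toy two-blob problem with only four labeled points. The 5-NN graph, the ±1 targets, and the regularization constants are assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
# two blobs; only four samples carry labels
Xa = rng.normal([0.0, 0.0], 0.4, (40, 2))
Xb = rng.normal([2.5, 0.0], 0.4, (40, 2))
X = np.vstack([Xa, Xb])
y = np.r_[np.zeros(40), np.ones(40)]
labeled = np.r_[0, 1, 40, 41]

# random ELM feature map over labeled *and* unlabeled samples
W = rng.uniform(-1, 1, (2, 30))
b = rng.uniform(-1, 1, 30)
H = np.tanh(X @ W + b)

# 5-NN graph Laplacian encoding the data manifold
D2 = ((X[:, None] - X[None]) ** 2).sum(-1)
A = np.zeros_like(D2)
for i in range(len(X)):
    for j in np.argsort(D2[i])[1:6]:
        A[i, j] = A[j, i] = 1.0
Lap = np.diag(A.sum(1)) - A

J = np.diag(np.isin(np.arange(len(X)), labeled).astype(float))
T = np.zeros((len(X), 1))
T[labeled, 0] = 2 * y[labeled] - 1            # +-1 targets on labeled points

lam, gamma = 0.1, 1e-3                        # manifold and ridge strengths
beta = np.linalg.solve(H.T @ J @ H + lam * H.T @ Lap @ H + gamma * np.eye(30),
                       H.T @ J @ T)
pred = (H @ beta > 0).ravel().astype(int)
acc = (pred == y).mean()
```

The Laplacian term pushes neighbouring points toward similar outputs, which is how the four labels propagate over the unlabeled majority.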

11.
Domain adaptation based on parameter transfer of extreme learning machines
For transfer learning problems with only a few labeled samples, this paper proposes a domain adaptation algorithm based on parameter transfer of the extreme learning machine (ELM). Its core idea is to project the parameters of the target-domain ELM classifier into the source-domain parameter space so that they match the distribution of the source-domain classifier parameters as closely as possible. In addition, considering that transfer may introduce negative transfer, a regularization constraint is added to the objective function. Compared with previous domain adaptation algorithms, the advantage of this algorithm is that the classifier parameters and the transfer matrix are optimized simultaneously, and the objective function is relatively simple to solve. Experimental results show that, compared with mainstream domain adaptation algorithms, the proposed algorithm has clear advantages in both accuracy and efficiency.

12.
In the big data era, the extreme learning machine (ELM) can be a good solution for learning from large sample data, as it has high generalization performance and fast training speed. However, the emerging big and distributed data blocks may still challenge the method, since they may require large-scale training that a single commodity machine cannot finish in a limited time. In this paper, we propose a MapReduce-based distributed framework named MR-ELM to enable large-scale ELM training. Under the framework, ELM submodels are trained in parallel on the distributed data blocks of the cluster and then combined into a complete single-hidden-layer feedforward neural network. Both the classification and regression capabilities of MR-ELM have been theoretically proven, and its generalization performance is shown to be as high as that of the original ELM and some common ELM ensemble methods on many typical benchmarks. Compared with the original ELM and other parallel ELM algorithms, MR-ELM is a general and scalable ELM training framework for both classification and regression, and it is suitable for big data learning in cloud environments where the data are usually distributed rather than located on one machine.
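MR-ELM combines submodels trained on distributed blocks; a closely related and simpler way to see why a map/reduce split can work is that the normal-equation moments H^T H and H^T T decompose into per-block sums, so each "mapper" can emit partial moments and a single "reducer" solves for the output weights. The sketch below is a generic illustration of that decomposition, not the paper's exact combination scheme:

```python
import numpy as np

def hidden(X, W, b):
    return np.tanh(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 3))
T = X @ np.array([[1.0], [-2.0], [0.5]]) + 0.01 * rng.normal(size=(600, 1))

n_hidden = 40
W = rng.uniform(-1, 1, (3, n_hidden))
b = rng.uniform(-1, 1, n_hidden)

# "map" phase: each data block contributes partial moment matrices
HtH = np.zeros((n_hidden, n_hidden))
HtT = np.zeros((n_hidden, 1))
for Xblk, Tblk in zip(np.array_split(X, 3), np.array_split(T, 3)):
    Hblk = hidden(Xblk, W, b)
    HtH += Hblk.T @ Hblk
    HtT += Hblk.T @ Tblk

# "reduce" phase: one small solve recovers the output weights
beta = np.linalg.solve(HtH + 1e-6 * np.eye(n_hidden), HtT)

# identical (up to round-off) to training on all the data at once
H = hidden(X, W, b)
beta_full = np.linalg.solve(H.T @ H + 1e-6 * np.eye(n_hidden), H.T @ T)
```

Because only the small n_hidden-by-n_hidden moments travel between machines, the communication cost is independent of the number of samples.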

13.
The extreme learning machine is a randomized algorithm: it randomly generates the input-layer connection weights and hidden-layer biases of a single-hidden-layer neural network and determines the output-layer weights analytically. Given a fixed network structure, repeatedly training the network with the extreme learning machine yields different models. This paper proposes an ensemble-model method for data classification. First, several single-hidden-layer feedforward neural networks are trained repeatedly with the extreme learning machine algorithm; then the trained networks are combined by majority voting; finally, the ensemble model classifies the data. Experiments on 10 datasets compare the method with the extreme learning machine and the ensemble extreme learning machine, and the results show that the proposed method outperforms both.

14.
In order to overcome the disadvantages of traditional algorithms for the SLFN (single-hidden-layer feedforward neural network), an improved algorithm called the extreme learning machine (ELM) was proposed by Huang et al. However, ELM is sensitive to the number of neurons in the hidden layer, and selecting it is a difficult problem. In this paper, a self-adaptive mechanism is introduced into ELM, yielding a new variant called the self-adaptive extreme learning machine (SaELM). SaELM is a self-adaptive learning algorithm that always selects the best number of hidden-layer neurons to form the neural network, with no need to adjust any parameters during training. To evaluate SaELM, it is used to solve the Italian wine and iris classification problems. Comparisons between SaELM and traditional back-propagation, basic ELM, and general regression neural networks show that SaELM has a faster learning speed and better generalization performance on these classification problems.

15.
Dynamic ensemble extreme learning machine based on sample entropy
The extreme learning machine (ELM) has been proposed as a new learning algorithm for single-hidden-layer feedforward neural networks. By randomly selecting the input weights and hidden-layer biases, ELM overcomes many drawbacks of traditional gradient-based learning algorithms, such as local minima, improper learning rates, and low learning speed. However, ELM suffers from instability and over-fitting, especially on large datasets. In this paper, a dynamic ensemble extreme learning machine based on sample entropy is proposed, which alleviates the problems of instability and over-fitting to some extent and increases prediction accuracy. Experimental results show that the proposed approach is robust and efficient.

16.

In this paper, a new method is proposed to identify a solid oxide fuel cell using an extreme learning machine–Hammerstein model (ELM–Hammerstein). The ELM–Hammerstein model consists of a static ELM neural network followed by a linear dynamic subsystem. First, the structure of the ELM–Hammerstein model is determined from input–output data using the Lipschitz quotient criterion. Then, a generalized ELM algorithm is proposed to estimate the parameters of the ELM–Hammerstein model, including the parameters of the linear dynamic part and the output weights of the ELM. The proposed method obtains accurate identification results with efficient computation. Simulation results demonstrate its effectiveness.


17.
In this paper, the extreme learning machine (ELM) is used to reconstruct a surface at high speed. An improved ELM, called the polyharmonic extreme learning machine (P-ELM), is proposed to reconstruct a smoother surface with high accuracy and robust stability. P-ELM improves ELM by adding a polynomial to the single-hidden-layer feedforward network that approximates the unknown surface function. P-ELM retains the advantages of ELM, namely an extremely fast learning speed and good generalization performance, while also reflecting the intrinsic properties of the reconstructed surface. Detailed comparisons of P-ELM, the RBF algorithm, and ELM are carried out in simulation to show the good performance and effectiveness of the proposed algorithm.
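The core P-ELM idea — augment the random hidden layer with explicit polynomial columns before the analytic output-weight solve — can be sketched as follows. The cubic order and the 1-D toy "surface" are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-1, 1, 100).reshape(-1, 1)
T = X ** 3 - 0.5 * X                      # toy curve standing in for a surface

W = rng.uniform(-1, 1, (1, 10))
b = rng.uniform(-1, 1, 10)
H = np.tanh(X @ W + b)                    # ordinary ELM hidden matrix

# P-ELM idea: append low-order polynomial columns to the hidden matrix
P = np.hstack([np.ones_like(X), X, X ** 2, X ** 3])
Hp = np.hstack([H, P])

beta = np.linalg.pinv(Hp) @ T             # analytic output-weight solve
mse = np.mean((Hp @ beta - T) ** 2)
```

When the underlying surface has a polynomial component, the added columns capture it exactly and the random tanh nodes only need to model the residual, which is why the reconstruction is smoother.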

18.

Recently, the extreme learning machine (ELM) has attracted increasing attention due to its successful applications in classification, regression, and ranking. Normally, the desired output of a learning system using these machine learning techniques is a simple scalar. However, many applications in machine learning require more complex outputs, so the system is trained to predict structured output instead. Previously, the support vector machine (SVM) has been introduced for structured-output learning in various applications; however, from a machine learning point of view, ELM is known to offer better generalization performance than other learning techniques. In this study, we extend ELM to a more generalized framework that handles complex outputs, with simple outputs as special cases. Besides the good generalization property of ELM, the resulting model possesses a rich internal structure that reflects task-specific relations and constraints. The experimental results show that structured ELM achieves similar (for binary problems) or better (for multi-class problems) generalization performance compared to ELM. Moreover, as verified by the simulation results, structured ELM has comparable or better precision than structured SVM when tested on more complex outputs such as the object localization problem on PASCAL VOC2006. The investigation of parameter selection is also presented and discussed for all problems.


19.
The extreme learning machine (ELM) trains fast and achieves high classification rates, and it has been widely applied to practical problems such as face recognition with good results. However, real-world data are often high-dimensional and frequently contain noise and outliers, which lowers the classification rate of ELM. This is mainly because (1) the input samples are too high-dimensional and (2) the activation function is poorly chosen; both drive the activation function's output toward zero and ultimately degrade ELM's performance. For the first problem, a robust linear dimensionality-reduction method (RAF-global embedding, RAF-GE) is proposed to preprocess high-dimensional data before classification with ELM. For the second, the properties of different activation functions are analyzed in depth, and a robust activation function (RAF) is proposed that keeps the activation output away from zero, improving the performance of both RAF-GE and ELM. Experiments confirm that the resulting face recognition method generally outperforms counterparts using other activation functions.

20.
To improve the fault-diagnosis accuracy of analog circuits, a new multi-kernel extreme learning machine diagnosis model is proposed, combined with a feature selection algorithm based on the one-dimensional fuzziness between fault features. The model introduces a virtual base kernel so that the regularization parameter is folded into the solution of the base-kernel weights; meanwhile, the within-class scatter of the feature space is integrated into the multi-kernel optimization objective so that, while the training error is minimized, fault samples of the same mode become more concentrated, effectively improving the discriminability between fault modes. Two analog-circuit diagnosis cases show that, compared with single-kernel learning algorithms, the proposed method significantly improves diagnosis accuracy and isolates hard-to-distinguish fault samples into the corresponding fuzzy groups more accurately; compared with general multi-kernel learning algorithms, it achieves similar diagnosis accuracy with less time cost.
