Similar Documents
1.
Extreme learning machine (ELM) works for generalized single-hidden-layer feedforward networks (SLFNs), and its essence is that the hidden layer of SLFNs need not be tuned. However, ELM only utilizes labeled data to carry out the supervised learning task. In order to exploit unlabeled data in the ELM model, we first extend the manifold regularization (MR) framework and then demonstrate the relation between the extended MR framework and ELM. Finally, a manifold regularized extreme learning machine is derived from the proposed framework, which maintains the properties of ELM and can be applied to large-scale learning problems. Experimental results show that the proposed semi-supervised extreme learning machine is the most cost-efficient of the compared methods: it tends to have better scalability and to achieve satisfactory generalization performance at a relatively faster learning speed than traditional semi-supervised learning algorithms.
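To make the idea concrete, here is a minimal single-machine sketch of a manifold-regularized ELM. It assumes labeled rows come first, a tanh hidden layer, a k-NN Gaussian similarity graph, and ridge parameters C and lam; these choices and the function name are illustrative, not the authors' exact formulation.

```python
import numpy as np

def ss_elm_fit(X, y_labeled, n_labeled, n_hidden=200, C=1.0, lam=0.1, k=5, rng=0):
    """Sketch of a manifold-regularized (semi-supervised) ELM.

    X         : all samples, labeled rows first (n_labeled of them)
    y_labeled : 2-D one-hot targets for the labeled rows
    The hidden layer is random and never tuned, as in standard ELM.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    W = rng.standard_normal((d, n_hidden))     # random input weights (fixed)
    b = rng.standard_normal(n_hidden)          # random biases (fixed)
    H = np.tanh(X @ W + b)                     # hidden-layer output matrix

    # Graph Laplacian L from a k-NN similarity graph over ALL samples,
    # so unlabeled data shape the solution through the manifold term.
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    S = np.exp(-D2 / D2.mean())
    far = np.argsort(D2, axis=1)[:, k + 1:]    # drop all but self + k nearest
    np.put_along_axis(S, far, 0.0, axis=1)
    S = np.maximum(S, S.T)
    L = np.diag(S.sum(1)) - S

    # Ridge-style closed form with an extra manifold penalty lam * H^T L H;
    # J masks out unlabeled rows in the data-fit term.
    J = np.zeros((n, n)); J[:n_labeled, :n_labeled] = np.eye(n_labeled)
    Y = np.zeros((n, y_labeled.shape[1])); Y[:n_labeled] = y_labeled
    beta = np.linalg.solve(np.eye(n_hidden) / C + H.T @ J @ H + lam * H.T @ L @ H,
                           H.T @ J @ Y)
    return W, b, beta
```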

2.
Owing to their significant efficiency and simple implementation, extreme learning machine (ELM) algorithms have recently received much attention in regression and classification applications. Many efforts have been made to enhance the performance of ELM from both the methodology (ELM training strategies) and the structure (incremental or pruned ELMs) perspectives. In this paper, a local coupled extreme learning machine (LC-ELM) algorithm is presented. By assigning an address to each hidden node in the input space, LC-ELM introduces a decoupler framework to ELM in order to reduce the complexity of the weight search space. The activation degree of a hidden node is measured by the membership degree of the similarity between its address and the given input. Experimental results confirm that the proposed approach works effectively and generally outperforms the original ELM in both regression and classification applications.
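A minimal sketch of the local coupling idea: each hidden node carries an address in the input space, and its ordinary activation is gated by a fuzzy membership of the input to that address. The Gaussian membership and the sigma parameter are assumptions for illustration; the paper's similarity and fuzzification choices may differ.

```python
import numpy as np

def lc_elm_hidden(X, W, b, A, sigma=1.0):
    """Sketch of a locally coupled hidden layer (LC-ELM idea).

    A : one 'address' per hidden node in the input space (n_hidden x d).
    Each node's activation is gated by a fuzzy membership of the input
    to that node's address, so distant nodes contribute little.
    """
    G = np.tanh(X @ W + b)                          # ordinary ELM activations
    D2 = ((X[:, None, :] - A[None, :, :]) ** 2).sum(-1)
    M = np.exp(-D2 / (2.0 * sigma ** 2))            # membership in [0, 1]
    return G * M                                    # coupled hidden output
```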

3.

Recently, the extreme learning machine (ELM) has attracted increasing attention due to its successful applications in classification, regression, and ranking. Normally, the desired output of a learning system using these machine learning techniques is a simple scalar. However, many applications in machine learning require a more complex output than a simple scalar one; for such applications the system is trained to predict structured output instead. Previously, the support vector machine (SVM) was introduced for structured output learning in various applications. However, from a machine learning point of view, ELM is known to offer better generalization performance than other learning techniques. In this study, we extend ELM to a more generalized framework that handles complex outputs, with simple outputs as special cases. Besides the good generalization property of ELM, the resulting model possesses a rich internal structure that reflects task-specific relations and constraints. The experimental results show that structured ELM achieves similar (for binary problems) or better (for multi-class problems) generalization performance compared with standard ELM. Moreover, as verified by the simulation results, structured ELM has comparable or better precision than structured SVM when tested on more complex outputs such as the object localization problem on PASCAL VOC2006. An investigation of parameter selection is also presented and discussed for all problems.
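A toy sketch of scoring structured outputs with an ELM. The joint feature map psi and the explicit candidate enumeration are assumptions for illustration; real structured tasks such as object localization replace the brute-force argmax with a task-specific search.

```python
import numpy as np

def structured_elm_predict(x, candidates, psi, W, b, beta):
    """Sketch of structured prediction with an ELM scorer.

    psi(x, y) maps an input/output pair to a joint feature vector;
    the ELM scores each candidate output and the argmax is returned.
    'candidates' enumerates feasible structured outputs (assumed small
    here; real tasks need a smarter search over the output space).
    """
    scores = [np.tanh(psi(x, y) @ W + b) @ beta for y in candidates]
    return candidates[int(np.argmax(scores))]
```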


4.
Face recognition based on extreme learning machine
Extreme learning machine (ELM) is an efficient learning algorithm for generalized single-hidden-layer feedforward networks (SLFNs) that performs well in both regression and classification applications. It has recently been shown that, from an optimization point of view, ELM and the support vector machine (SVM) are equivalent, but ELM has less stringent optimization constraints. Owing to these milder constraints, ELM is easy to implement and usually obtains better generalization performance. In this paper we study the performance of the one-against-all (OAA) and one-against-one (OAO) ELM for classification in multi-label face recognition applications. The performance is verified on four benchmark face image data sets.
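For reference, a compact sketch of the one-against-all scheme on top of a plain ridge-regularized ELM; the names, the tanh activation, and the ±1 target coding are illustrative choices.

```python
import numpy as np

def elm_train(X, Y, n_hidden=500, C=1.0, rng=0):
    """Plain ELM: random hidden layer, ridge solution for output weights."""
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ Y)
    return W, b, beta

def oaa_targets(labels, n_classes):
    """One-against-all coding: +1 for the true class, -1 elsewhere."""
    Y = -np.ones((len(labels), n_classes))
    Y[np.arange(len(labels)), labels] = 1.0
    return Y

# Usage sketch: train once with OAA targets, predict by the largest output.
# W, b, beta = elm_train(X_train, oaa_targets(y_train, n_classes))
# y_pred = np.argmax(np.tanh(X_test @ W + b) @ beta, axis=1)
```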

5.
Although the extreme learning machine is widely used in classification, regression, and feature learning for its speed, simplicity, ease of implementation, and universal approximation capability, like other standard classification methods it takes the maximization of overall classification performance across all classes as its optimization objective. Consequently, when the class distribution of the data encountered in practice is imbalanced, the algorithm is biased toward the majority classes. Research on class-imbalance learning for ELM started late and few algorithms exist. After reviewing the state of the art in class-imbalance learning with ELM and its typical algorithm, the weighted extreme learning machine, together with its improved variants, this paper proposes an AdaBoost-boosted weighted extreme learning machine that requires no preprocessing of the original imbalanced samples. Experiments on 15 imbalanced UCI data sets show that the proposed algorithm achieves better classification performance.
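The weighted-ELM baseline referred to above admits a short sketch: each sample receives a weight inversely proportional to its class frequency, which enters the regularized least-squares solution, so minority-class errors cost more. The AdaBoost variant proposed in the paper would reweight samples round by round instead; the inverse-frequency weighting and the names here are illustrative assumptions.

```python
import numpy as np

def weighted_elm(X, y, n_hidden=200, C=1.0, rng=0):
    """Sketch of weighted ELM for class imbalance."""
    rng = np.random.default_rng(rng)
    classes, counts = np.unique(y, return_counts=True)
    w = np.array([1.0 / counts[c] for c in np.searchsorted(classes, y)])

    Win = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ Win + b)
    Y = -np.ones((len(y), len(classes)))
    Y[np.arange(len(y)), np.searchsorted(classes, y)] = 1.0

    # Weighted ridge: beta = (H^T W H + I/C)^{-1} H^T W Y
    WH = H * w[:, None]
    beta = np.linalg.solve(H.T @ WH + np.eye(n_hidden) / C, H.T @ (w[:, None] * Y))
    return Win, b, beta
```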

6.
The extreme learning machine has been widely applied in pattern recognition for its speed, efficiency, and good generalization ability. However, existing ELM algorithms and their variants do not fully consider the effect of data dimensionality on ELM's classification performance and generalization ability: when the dimensionality is too high, the redundant attributes and noise it contains inevitably degrade ELM's generalization. To address this problem, this paper proposes a manifold-learning-based extreme learning machine, which combines dimensionality-reduction techniques to effectively remove the influence of redundant attributes and noise on ELM's classification performance. To verify the effectiveness of the proposed method, experiments are conducted on widely used image data; the results show that the proposed algorithm significantly improves the generalization performance of ELM.
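A sketch of the two-stage pipeline follows, assuming Isomap as the manifold-learning reducer purely for illustration (the abstract does not name the exact reducer), with a plain ridge ELM in the reduced space.

```python
import numpy as np
from sklearn.manifold import Isomap   # one common manifold-learning reducer

def manifold_elm(X_train, Y_train, X_test, n_components=20, n_hidden=200, C=1.0, rng=0):
    """Sketch: manifold dimensionality reduction, then a plain ELM."""
    reducer = Isomap(n_neighbors=10, n_components=n_components)
    Z_train = reducer.fit_transform(X_train)   # learn the embedding
    Z_test = reducer.transform(X_test)         # embed unseen data

    rng = np.random.default_rng(rng)
    W = rng.standard_normal((n_components, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(Z_train @ W + b)
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ Y_train)
    return np.tanh(Z_test @ W + b) @ beta      # test-set predictions
```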

7.
As a novel learning algorithm for single-hidden-layer feedforward neural networks, the extreme learning machine (ELM) has become a promising tool for regression and classification applications. However, it is not trivial for ELM to find the proper number of hidden neurons due to the non-optimal input weights and hidden biases. In this paper, a new model selection method for ELM based on multi-objective optimization is proposed to obtain compact networks with good generalization ability. First, a new leave-one-out (LOO) error bound for ELM is derived; it can be calculated at negligible computational cost once ELM training is finished. Furthermore, hidden nodes are added to the network one by one, and at each step a multi-objective optimization algorithm selects optimal input weights by minimizing this LOO bound and the norm of the output weights simultaneously in order to avoid over-fitting. Experiments on five UCI regression data sets demonstrate that the proposed algorithm generally obtains better generalization performance with a more compact network than the conventional gradient-based back-propagation method, the original ELM, and the evolutionary ELM.
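Although the paper derives its own LOO bound, the classical analogue for a ridge-type solution is the PRESS statistic, which likewise costs almost nothing once training is done; the sketch below shows that version.

```python
import numpy as np

def elm_press_loo(H, Y, C=1.0):
    """Sketch of a leave-one-out error via the PRESS statistic.

    H : hidden-layer output matrix (n_samples x n_hidden)
    Y : 2-D targets (n_samples x n_outputs)
    Each LOO residual follows from the training residual and the
    hat-matrix diagonal, so no model is ever retrained.
    """
    n_hidden = H.shape[1]
    A = np.linalg.inv(H.T @ H + np.eye(n_hidden) / C)
    beta = A @ H.T @ Y
    residual = Y - H @ beta
    hat_diag = np.einsum('ij,jk,ik->i', H, A, H)   # diag of H A H^T
    loo_residual = residual / (1.0 - hat_diag)[:, None]
    return np.mean(loo_residual ** 2)              # LOO mean squared error
```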

8.
In order to overcome the disadvantages of the traditional training algorithm for SLFNs (single-hidden-layer feedforward neural networks), an improved algorithm called the extreme learning machine (ELM) was proposed by Huang et al. However, ELM is sensitive to the number of neurons in the hidden layer, and selecting it is a difficult problem. In this paper, a self-adaptive mechanism is introduced into ELM, yielding a new variant called the self-adaptive extreme learning machine (SaELM). SaELM is a self-adaptive learning algorithm that always selects the best number of hidden neurons to form the neural network, with no need to adjust any parameters during training. To evaluate SaELM, it is used to solve the Italian wine and iris classification problems. Comparisons with traditional back-propagation, the basic ELM, and the general regression neural network show that SaELM has faster learning speed and better generalization performance on these classification problems.
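SaELM's exact adaptation mechanism is not spelled out in the abstract; the naive form of the idea, sketched below, trains an ELM at each candidate hidden size and keeps the best validation performer (one-hot or ±1 coded 2-D targets assumed).

```python
import numpy as np

def select_hidden_size(X_tr, Y_tr, X_val, Y_val, candidates=range(10, 510, 10), C=1.0, rng=0):
    """Sketch of self-adaptive hidden-size selection by validation accuracy."""
    rng = np.random.default_rng(rng)
    best_model, best_acc = None, -np.inf
    for L in candidates:
        W = rng.standard_normal((X_tr.shape[1], L))
        b = rng.standard_normal(L)
        H = np.tanh(X_tr @ W + b)
        beta = np.linalg.solve(H.T @ H + np.eye(L) / C, H.T @ Y_tr)
        pred = np.argmax(np.tanh(X_val @ W + b) @ beta, axis=1)
        acc = np.mean(pred == np.argmax(Y_val, axis=1))
        if acc > best_acc:
            best_model, best_acc = (W, b, beta), acc
    return best_model, best_acc
```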

9.
The extreme learning machine (ELM) is widely used in classification and regression because of its efficient training, but different input weights can strongly affect its learning performance. To further improve ELM's learning performance, this paper studies its input weights: exploiting the sparsity of local receptive fields in images, it applies the local receptive field approach to the autoencoder-based ELM (ELM-AE) and proposes a local receptive field, class-constrained extreme learning machine (RF-C2ELM). Classification experiments on the MNIST data set show that, with the same number of hidden nodes, the proposed method achieves higher classification accuracy.
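The ELM-AE building block referred to above can be sketched briefly: a random hidden layer is fit to reconstruct its own input, and the learned output weights are reused (transposed) as data-adapted input weights for a subsequent ELM. The receptive-field masking and class constraint of RF-C2ELM are omitted here.

```python
import numpy as np

def elm_autoencoder_weights(X, n_hidden=100, C=1.0, rng=0):
    """Sketch of an ELM autoencoder (ELM-AE)."""
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    # Reconstruction fit: targets are the inputs themselves.
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ X)
    return beta.T    # (d x n_hidden): usable as learned input weights
```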

10.
Extreme learning machine for regression and multiclass classification
Due to the simplicity of their implementations, the least squares support vector machine (LS-SVM) and the proximal support vector machine (PSVM) have been widely used in binary classification applications. The conventional LS-SVM and PSVM cannot be used directly in regression and multiclass classification, although variants have been proposed to handle such cases. This paper shows that both LS-SVM and PSVM can be simplified further and that a unified learning framework covering LS-SVM, PSVM, and other regularization algorithms, referred to as the extreme learning machine (ELM), can be built. ELM works for "generalized" single-hidden-layer feedforward networks (SLFNs), but the hidden layer (or feature mapping) in ELM need not be tuned. Such SLFNs include, but are not limited to, SVMs, polynomial networks, and conventional feedforward neural networks. This paper shows the following: 1) ELM provides a unified learning platform with a widespread type of feature mappings and can be applied directly in regression and multiclass classification; 2) from the optimization point of view, ELM has milder optimization constraints than LS-SVM and PSVM; 3) in theory, compared with ELM, LS-SVM and PSVM achieve suboptimal solutions and require higher computational complexity; and 4) in theory, ELM can approximate any target continuous function and classify any disjoint regions. As verified by the simulation results, ELM tends to have better scalability and to achieve similar (for regression and binary-class cases) or much better (for multiclass cases) generalization performance at much faster learning speed (up to thousands of times) than traditional SVM and LS-SVM.
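The unified regularized solution has two algebraically equivalent closed forms; a sketch that picks whichever matrix is smaller to invert, so the cost stays low in both the many-samples and many-hidden-nodes regimes:

```python
import numpy as np

def unified_elm_beta(H, T, C=1.0):
    """Sketch of the unified regularized ELM output weights.

    H : hidden-layer (feature-mapping) matrix, n samples x L nodes
    T : targets, n x m
    """
    n, L = H.shape
    if n >= L:   # invert an L x L matrix
        return np.linalg.solve(np.eye(L) / C + H.T @ H, H.T @ T)
    else:        # invert an n x n matrix instead
        return H.T @ np.linalg.solve(np.eye(n) / C + H @ H.T, T)
```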

11.
As a new technique, the extreme learning machine (ELM) shows good generalization performance in regression and classification. The weighted fuzzy local information C-means algorithm (WFLICM) uses the spatial information of neighboring pixels to weight the influence factor of the center pixel, strengthening the denoising ability of fuzzy C-means clustering. Building on ELM theory, WFLICM is improved and optimized, and an ELM-based fuzzy local-information C-means image segmentation algorithm (new kernel weighted fuzzy local information C-means based on ELM, ELM-NKWFLICM) is proposed. The method uses the ELM feature-mapping technique to project the original data into a high-dimensional ELM hidden space, and then clusters with an improved new-kernel weighted fuzzy local-information C-means algorithm (NKWFLICM). Experimental results show that ELM-NKWFLICM has stronger denoising ability than WFLICM while preserving the details of the original image well; the algorithm is more efficient on complex nonlinear data and overcomes the sensitivity of fuzzy clustering algorithms to the fuzzification exponent.
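A sketch of the pipeline's two stages: a random ELM feature map followed by plain fuzzy C-means in the mapped space. The paper's NKWFLICM adds kernel and local spatial-information terms on top of this basic update; the parameters and names below are illustrative.

```python
import numpy as np

def elm_feature_map(X, n_hidden=200, rng=0):
    """Random ELM feature mapping into a high-dimensional hidden space."""
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    return np.tanh(X @ W + b)

def fuzzy_cmeans(Z, c=3, m=2.0, iters=100, eps=1e-9, rng=0):
    """Plain fuzzy C-means in the mapped space (no spatial terms)."""
    rng = np.random.default_rng(rng)
    U = rng.random((len(Z), c)); U /= U.sum(1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ Z) / (Um.sum(0)[:, None] + eps)
        D = np.linalg.norm(Z[:, None, :] - centers[None], axis=-1) + eps
        p = 2.0 / (m - 1.0)
        U = 1.0 / (D ** p * (1.0 / D ** p).sum(1, keepdims=True))
    return U.argmax(1), centers
```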

12.
Evolutionary selection extreme learning machine optimization for regression
Neural network regression models can approximate unknown data sets with small error. As an important method for global regression, the extreme learning machine (ELM) is a typical learning method for single-hidden-layer feedforward networks, owing to its good generalization performance and fast implementation. The randomness of the input weights lets the nonlinear combination of hidden nodes achieve arbitrary function approximation. In this paper, we seek an alternative mechanism for the input connections, with an idea derived from evolutionary algorithms. After predefining the number L of hidden nodes, we generate original ELM models, treating each hidden node as a gene. The hidden nodes are ranked, and the nodes with larger output weights are kept for the updated ELM, while the L/2 trivial hidden nodes are placed in a candidate reservoir. Then L/2 new hidden nodes are generated and combined with hidden nodes drawn from this candidate reservoir, with a second ranking used to choose among them. Fitness-proportional selection picks L/2 hidden nodes and recombines the evolutionary-selection ELM. The entire algorithm can be applied to large-scale data set regression. Experiments show that its regression performance is better than that of the traditional ELM and the Bayesian ELM at a lower cost.
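The ranking step admits a short sketch: after fitting the output weights, the nodes whose outgoing weights have a larger norm are kept, and the lower half feeds the candidate reservoir. The norm-based importance measure is an assumption for illustration.

```python
import numpy as np

def rank_hidden_nodes(H, Y, C=1.0):
    """Sketch of the gene-ranking step in evolutionary selection ELM.

    Returns indices of kept nodes (top half by output-weight norm) and
    of the lower half, which go to the candidate reservoir.
    """
    L = H.shape[1]
    beta = np.linalg.solve(H.T @ H + np.eye(L) / C, H.T @ Y)
    importance = np.linalg.norm(beta.reshape(L, -1), axis=1)
    order = np.argsort(-importance)            # most useful nodes first
    return order[: L // 2], order[L // 2:]
```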

13.
For big-data classification applications, a fast hidden-layer optimization method is designed to solve a prominent problem of the distributed extreme learning machine (ELM): the training process must be run independently and repeatedly many times to optimize the number of hidden nodes or the model's generalization performance. Without increasing the algorithm's time complexity, the new algorithm trains multiple ELM hidden-layer networks simultaneously, jointly optimizing generalization ability and the number of hidden nodes, and avoids a large amount of repeated computation through distributed computing. At the same time, this approach gives a more precise and more intuitive picture of how changing the number of hidden nodes affects the model during training. Experiments on several types of standard benchmark functions show that, compared with the distributed ELM, the new algorithm greatly improves solution accuracy, generalization ability, and stability.
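A single-machine analogue of the idea: build the hidden matrix once at the largest candidate size, then reuse its leading columns (and the corresponding Gram sub-blocks) for every smaller candidate instead of retraining from scratch. Names and parameters are illustrative.

```python
import numpy as np

def multi_size_elm(X, Y, L_max=500, candidates=(50, 100, 200, 500), C=1.0, rng=0):
    """Sketch: evaluate several hidden-layer sizes from one hidden matrix."""
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], L_max))
    b = rng.standard_normal(L_max)
    H = np.tanh(X @ W + b)
    G, HtY = H.T @ H, H.T @ Y          # computed once, sliced below
    models = {}
    for L in candidates:
        # The first L random nodes are themselves a valid random layer.
        models[L] = np.linalg.solve(G[:L, :L] + np.eye(L) / C, HtY[:L])
    return W, b, models
```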

14.
Symmetric extreme learning machine
The extreme learning machine (ELM) can be considered a black-box modeling approach that seeks a model representation extracted from the training data. In this paper, a modified ELM algorithm, called the symmetric ELM (S-ELM), is proposed by incorporating a priori information about symmetry. S-ELM is realized by transforming the original activation function of the hidden neurons into one that is symmetric with respect to the input variables of the samples. In theory, S-ELM can approximate N arbitrary distinct samples with zero error. Simulation results show that, in applications where prior knowledge of symmetry exists, S-ELM obtains better generalization performance, faster learning speed, and a more compact network architecture.
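A sketch of the construction, assuming the odd-symmetric case (an even version uses the sum instead of the difference):

```python
import numpy as np

def symmetric_hidden(X, W, b, odd=True):
    """Sketch of the S-ELM idea: symmetrize each hidden activation so the
    network output is odd (or even) in the inputs by construction,
    encoding the prior knowledge of symmetry."""
    G_pos = np.tanh(X @ W + b)
    G_neg = np.tanh(-X @ W + b)
    return 0.5 * (G_pos - G_neg) if odd else 0.5 * (G_pos + G_neg)
```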

15.
In a big-data environment, traditional machine learning algorithms are mostly trained offline on a single machine, which clearly cannot keep up with continuously growing large-scale streaming data. To address this problem, a distributed online ensemble learning algorithm based on the Flink platform is proposed. Built on the Flink distributed computing framework, the method first trains online learning algorithms in a distributed manner through data parallelism; the trained sub-models are then dynamically weighted by stochastic gradient descent to aggregate their results; at the same time, sub-models that perform poorly are updated online with their samples. Finally, experiments comparing single-machine and cluster environments on different data sets show that distributed ensemble training of online learning algorithms on Flink can reach the performance of centralized training while greatly improving training time efficiency.

16.
This paper presents a performance enhancement scheme for the recently developed extreme learning machine (ELM) for multi-category sparse data classification problems. ELM is a single-hidden-layer neural network with good generalization capability and extremely fast learning capacity. In ELM, the input weights are chosen randomly and the output weights are calculated analytically. The generalization performance of the ELM algorithm for sparse data classification depends critically on three free parameters: the number of hidden neurons, the input weights, and the bias values, which need to be chosen optimally. Selecting these parameters for the best performance of ELM is a complex optimization problem. In this paper, we present a new real-coded genetic algorithm approach called 'RCGA-ELM' to select the optimal number of hidden neurons, input weights, and bias values for better performance. Two new genetic operators, a 'network-based operator' and a 'weight-based operator', are proposed to find a compact network with higher generalization performance. We also present an alternative, less computationally intensive approach called 'sparse-ELM', which searches for the best ELM parameters using K-fold validation. A multi-class human cancer classification problem using microarray gene expression data (which is sparse) is used to evaluate the performance of the two schemes. Results indicate that the proposed RCGA-ELM and sparse-ELM significantly improve ELM performance for sparse multi-category classification problems.

17.
Choosing the number of data partitions is one of the basic model-selection problems in parallel/distributed machine learning, directly affecting both the generalization ability and the running efficiency of the learning algorithm. Existing parallel/distributed machine learning methods usually choose the number of partitions by experience or by the number of processors, without an explicit selection criterion. This paper proposes a parallel-efficiency-aware criterion for choosing the number of data partitions in parallel/distributed machine learning, which improves computational efficiency while preserving the test accuracy of the model. First, the relation between the model's generalization error and the number of partitions is derived. On this basis, a selection criterion that trades off generalization against parallel efficiency is proposed. Finally, a large-scale support vector machine implementation that adopts this criterion is given in a random Fourier feature space under the ADMM framework, and the effectiveness of the proposed criterion is verified experimentally on a high-performance computing cluster and large standard data sets.
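The random Fourier feature map used in the implementation is standard and easy to sketch: it approximates an RBF kernel, turning the kernel SVM into a linear model that splits naturally across data partitions. The parameter names are illustrative.

```python
import numpy as np

def random_fourier_features(X, n_features=1000, gamma=1.0, rng=0):
    """Sketch of a random Fourier feature map for the RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2): z(x) @ z(y) approximates k(x, y)."""
    rng = np.random.default_rng(rng)
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), (X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```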

18.
The extreme learning machine (ELM), as a new learning approach, has shown good generalization performance in regression and classification applications. Clustering analysis is an important tool for exploring the structure of data and has been employed in many disciplines and applications. In this paper, we present a method that projects input data into a high-dimensional feature space with ELM and then performs unsupervised clustering using the artificial bee colony (ABC) algorithm. While the ELM projection facilitates the separability of clusters, a metaheuristic such as the ABC algorithm overcomes the dependence on the initialization of cluster centers and the convergence to local minima suffered by conventional algorithms such as K-means. The proposed ELM-ABC algorithm is tested on 12 benchmark data sets. The experimental results show that ELM-ABC can effectively improve the quality of clustering.

19.
Incorporating prior knowledge into learning by dividing training data
In most large-scale real-world pattern classification problems, there is always some explicit information besides the given training data, namely prior knowledge, with which the training data are organized. In this paper, we propose a framework for incorporating this kind of prior knowledge into the training of a min-max modular (M3) classifier to improve learning performance. To evaluate the proposed method, we perform experiments on a large-scale Japanese patent classification problem and consider two kinds of prior knowledge included in patent documents: the patent's publication date and the hierarchical structure of the patent classification system. In the experiments, the traditional support vector machine (SVM) and M3-SVM without prior knowledge are adopted as baseline classifiers. Experimental results demonstrate that the proposed method is superior to the baselines in terms of training cost and generalization accuracy. Moreover, M3-SVM with prior knowledge is found to be much more robust than the traditional SVM to noisy dated patent samples, which is crucial for incremental learning.
