Similar Literature (20 results)
1.
Face recognition based on extreme learning machine
Extreme learning machine (ELM) is an efficient learning algorithm for generalized single-hidden-layer feedforward networks (SLFNs), which performs well in both regression and classification applications. It has recently been shown that, from the optimization point of view, ELM and the support vector machine (SVM) are equivalent, but ELM has less stringent optimization constraints. Owing to these milder constraints, ELM is easy to implement and usually achieves better generalization performance. In this paper we study the performance of one-against-all (OAA) and one-against-one (OAO) ELM for classification in multi-label face recognition applications. The performance is verified on four benchmark face image data sets.

2.
Application of the extreme learning machine to lithology identification
To address the slow training and difficult parameter selection of the traditional support vector machine (SVM), lithology identification based on the extreme learning machine (ELM) is proposed. ELM is a new learning algorithm for single-hidden-layer feedforward networks (SLFNs) that both simplifies parameter selection and speeds up network training. After determining the optimal parameters, an ELM lithology classification model is built and its results are compared with those of SVM. Experiments show that ELM achieves classification accuracy comparable to SVM with fewer hidden neurons, with simpler parameter selection and much shorter training time, demonstrating the feasibility and effectiveness of applying ELM to lithology identification.

3.
Recently, a novel learning algorithm for single-hidden-layer feedforward neural networks (SLFNs) named the extreme learning machine (ELM) was proposed by Huang et al. The essence of ELM is that the learning parameters of the hidden nodes, including input weights and biases, are randomly assigned and need not be tuned, while the output weights can be analytically determined by a simple generalized-inverse operation. The only parameter that needs to be defined is the number of hidden nodes. Compared with other traditional learning algorithms for SLFNs, ELM offers extremely fast learning speed and good generalization performance with minimal human intervention. This paper first gives a brief review of ELM, describing its principle and algorithm. It then focuses on improved methods and typical variants of ELM, especially incremental ELM, pruning ELM, error-minimized ELM, two-stage ELM, online sequential ELM, evolutionary ELM, voting-based ELM, ordinal ELM, fully complex ELM, and symmetric ELM. Next, the paper summarizes applications of ELM to classification, regression, function approximation, pattern recognition, forecasting, diagnosis, and so on. Finally, it discusses several open issues of ELM that may be worth exploring in the future.
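The core ELM computation this survey describes (random hidden parameters, output weights by a generalized inverse) fits in a few lines. A minimal sketch, not taken from any of the surveyed implementations; the toy data, node count, and tanh activation are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class problem in 2-D, with one-hot targets.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
T = np.eye(2)[y]                      # one-hot target matrix, shape (200, 2)

# ELM: hidden-layer parameters are random and never tuned.
L = 50                                # number of hidden nodes (the only parameter)
W = rng.normal(size=(2, L))           # random input weights
b = rng.normal(size=L)                # random biases
H = np.tanh(X @ W + b)                # hidden-layer output matrix

# Output weights via the Moore-Penrose generalized inverse.
beta = np.linalg.pinv(H) @ T

pred = np.argmax(H @ beta, axis=1)
print("training accuracy:", (pred == y).mean())
```

The absence of any iterative tuning loop is what gives ELM its speed: training is one matrix product with a pseudo-inverse.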

4.
Extreme learning machine (ELM) works for generalized single-hidden-layer feedforward networks (SLFNs), and its essence is that the hidden layer of SLFNs need not be tuned. However, ELM only utilizes labeled data to carry out the supervised learning task. In order to exploit unlabeled data in the ELM model, we first extend the manifold regularization (MR) framework and then demonstrate the relation between the extended MR framework and ELM. Finally, a manifold-regularized extreme learning machine is derived from the proposed framework, which maintains the properties of ELM and is applicable to large-scale learning problems. Experimental results show that the proposed semi-supervised extreme learning machine is the most cost-efficient method: it tends to have better scalability and achieves satisfactory generalization performance at a faster learning speed than traditional semi-supervised learning algorithms.

5.
A study on the effectiveness of extreme learning machine
Extreme learning machine (ELM), proposed by Huang et al., has been shown to be a promising learning algorithm for single-hidden-layer feedforward neural networks (SLFNs). Nevertheless, because of the random choice of input weights and biases, the ELM algorithm sometimes yields a hidden-layer output matrix H that is not of full column rank, which lowers the effectiveness of ELM. This paper discusses the effectiveness of ELM and proposes an improved algorithm called EELM, which properly selects the input weights and biases before calculating the output weights, ensuring the full column rank of H in theory. This improves, to some extent, the learning rate (testing accuracy, prediction accuracy, learning time) and the robustness of the networks. Experimental results on both benchmark function-approximation and real-world classification and regression problems show the good performance of EELM.
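The full-column-rank condition on H that EELM guarantees can be checked numerically. The resampling loop below is only an illustrative stand-in for EELM's principled weight-selection procedure, on arbitrary toy data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
L = 20

# Resample hidden-layer parameters until H has full column rank:
# a crude stand-in for EELM's proper selection of weights and biases.
for attempt in range(10):
    W = rng.normal(size=(3, L))
    b = rng.normal(size=L)
    H = np.tanh(X @ W + b)
    if np.linalg.matrix_rank(H) == L:
        break

print("H has full column rank:", np.linalg.matrix_rank(H) == L)
```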

6.
Over the last two decades, automatic speaker recognition has been an interesting and challenging problem for speech researchers. It can be divided into two categories, speaker identification and speaker verification. In this paper, a new classifier, the extreme learning machine, is examined on the text-independent speaker verification task and compared with an SVM classifier. Extreme learning machine (ELM) classifiers have been proposed for generalized single-hidden-layer feedforward networks with a wide variety of hidden nodes. They are extremely fast in learning and perform well in many artificial and real regression and classification applications. The database used to evaluate the ELM and SVM classifiers is the ELSDSR corpus, and Mel-frequency cepstral coefficients were extracted and used as input to the classifiers. Empirical studies show that ELM classifiers and their variants can outperform SVM classifiers on this dataset with less training time.

7.
李军  乃永强 《控制与决策》2015,30(9):1559-1566

For a class of multi-input multi-output (MIMO) affine nonlinear dynamic systems, a robust adaptive neural control method based on the extreme learning machine (ELM) is proposed. ELM randomly assigns the hidden-layer parameters of single-hidden-layer feedforward networks (SLFNs) and only the output weights need to be adjusted, achieving good generalization at extremely fast learning speed. In the proposed method, ELM is used to approximate the unknown nonlinear terms of the system, and parameter adaptive laws are designed for the ELM network weights, the approximation error, and the unknown upper bound of the external disturbance. Lyapunov stability analysis guarantees that all signals of the closed-loop system are semi-globally uniformly ultimately bounded. Simulation results demonstrate the effectiveness of the proposed control method.


8.

Recently, the extreme learning machine (ELM) has attracted increasing attention due to its successful applications in classification, regression, and ranking. Normally, the desired output of a learning system using these machine-learning techniques is a simple scalar. However, many applications in machine learning require more complex output than a simple scalar one; structured output is used for such applications, where the system is trained to predict a structured output instead of a simple one. Previously, the support vector machine (SVM) was introduced for structured-output learning in various applications. However, from a machine-learning point of view, ELM is known to offer better generalization performance than other learning techniques. In this study, we extend ELM to a more general framework that handles complex outputs, with simple outputs as special cases. Besides the good generalization property of ELM, the resulting model possesses a rich internal structure that reflects task-specific relations and constraints. The experimental results show that structured ELM achieves similar (for binary problems) or better (for multi-class problems) generalization performance compared to ELM. Moreover, as verified by the simulation results, structured ELM has comparable or better precision than structured SVM when tested on more complex outputs such as the object-localization problem on PASCAL VOC2006. An investigation of parameter selection is also presented and discussed for all problems.


9.
Extreme learning machine (ELM) [G.-B. Huang, Q.-Y. Zhu, C.-K. Siew, Extreme learning machine: a new learning scheme of feedforward neural networks, in: Proceedings of the International Joint Conference on Neural Networks (IJCNN2004), Budapest, Hungary, 25-29 July 2004], a novel learning algorithm much faster than traditional gradient-based learning algorithms, was proposed recently for single-hidden-layer feedforward neural networks (SLFNs). However, ELM may need a higher number of hidden neurons due to the random determination of the input weights and hidden biases. In this paper, a hybrid learning algorithm is proposed that uses a differential evolutionary algorithm to select the input weights and the Moore-Penrose (MP) generalized inverse to analytically determine the output weights. Experimental results show that this approach achieves good generalization performance with much more compact networks.

10.
Application of posterior probability to multi-class support vector machines
The support vector machine is a new classification-rule mining method based on statistical learning theory. Building on existing multi-class support vector machines, a geometric-distance multi-class support vector classifier is proposed for the first time; the posterior-probability output of the binary support vector machine is then extended to multi-class problems, avoiding iterative algorithms and improving prediction accuracy while keeping prediction fast. Numerical experiments show that both methods generalize well and clearly improve the classifier's accuracy on unseen samples.

11.
A novel multi-class classification support vector machine
The least-squares support vector machine replaces the quadratic programming used by the traditional support vector machine with a least-squares linear system for solving pattern-recognition problems. This paper derives and analyzes the binary least-squares support vector machine algorithm in detail, constructs a multi-class least-squares support vector machine, and tests it on typical samples. The results show that multi-class least-squares support vector machines are effective and feasible for pattern recognition.
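The least-squares reformulation above replaces the SVM's quadratic program with a single linear system. A sketch of the function-estimation form of LS-SVM; the RBF kernel, its width, and the regularization constant gamma are arbitrary assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression problem.
X = np.linspace(-3, 3, 40)[:, None]
y = np.sin(X).ravel()
gamma, sigma2 = 10.0, 1.0

K = np.exp(-((X - X.T) ** 2) / (2 * sigma2))   # RBF kernel matrix

# LS-SVM solves one bordered linear system instead of a QP:
#   [ 0    1^T       ] [b]     [0]
#   [ 1    K + I/gam ] [alpha] [y]
n = len(y)
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / gamma
rhs = np.concatenate(([0.0], y))
sol = np.linalg.solve(A, rhs)
b, alpha = sol[0], sol[1:]

y_hat = K @ alpha + b                 # f(x) = sum_i alpha_i k(x, x_i) + b
print("max abs training error:", np.abs(y_hat - y).max())
```

The price of this convenience is that every training point gets a nonzero alpha, so LS-SVM solutions are not sparse like classical SVM solutions.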

12.
A traffic-object classification method based on the proximal support vector machine (PSVM)
This paper introduces the characteristics of support vector machines and the problems traditional support vector machines face in practical applications. To overcome these shortcomings, a proximal support vector machine (PSVM) algorithm is introduced and applied to the classification and recognition of traffic objects. Experimental results show that this algorithm is more accurate than a BP neural network and more efficient than the traditional SVM.

13.
A comparative study of extreme learning machines and support vector machines for reservoir permeability prediction
The extreme learning machine (ELM) is a simple, easy-to-use, and effective learning algorithm for single-hidden-layer feedforward networks (SLFNs). Traditional neural-network learning algorithms (such as BP) require many training parameters to be set manually and easily fall into local optima. ELM only requires setting the number of hidden-layer nodes; during execution it does not adjust the network's input weights or hidden biases, and it produces a unique optimal solution, so it learns fast and generalizes well. This paper applies ELM to reservoir permeability prediction and, by comparison with the support vector machine, analyzes its feasibility and advantages for this task. Experimental results show that ELM and SVM have similar prediction accuracy, but ELM has clear advantages in parameter selection and learning speed.

14.
It is well known that single-hidden-layer feedforward networks (SLFNs) with additive models are universal approximators. However, training of these models was slow until the birth of the extreme learning machine (ELM) [Huang et al., Neurocomputing 70(1-3):489-501 (2006)] and its later improvements. Before ELM, the fastest algorithms for efficiently training SLFNs were gradient-based ones, which must be applied iteratively until a proper model is obtained. This slow convergence means that SLFNs are not used as widely as they could be, even considering their overall good performance. ELM made SLFNs a suitable option for classifying a great number of patterns in a short time. Until now, the hidden nodes have been randomly initialized and, in some approaches, tuned. This paper proposes a deterministic algorithm to initialize any hidden node with an additive activation function to be trained with ELM. Our algorithm uses information retrieved from principal component analysis to fit the hidden nodes. This approach considerably decreases computational cost compared with later ELM improvements and outperforms them.

15.
Online learning algorithms are preferred in many applications due to their ability to learn from sequentially arriving data. One of the effective algorithms recently proposed for training single-hidden-layer feedforward neural networks (SLFNs) is the online sequential extreme learning machine (OS-ELM), which can learn data one-by-one or chunk-by-chunk at fixed or varying sizes. It is based on the ideas of the extreme learning machine (ELM), in which the input weights and hidden-layer biases are randomly chosen and the output weights are then determined by a pseudo-inverse operation. The learning speed of this algorithm is extremely high. However, it does not yield good generalization models for noisy data, and it is difficult to initialize its parameters so as to avoid singular and ill-posed problems. In this paper, we propose an improvement of OS-ELM based on a bi-objective optimization approach that minimizes the empirical error while keeping the norm of the network weight vector small. Singular and ill-posed problems are overcome by Tikhonov regularization. This approach can also learn data one-by-one or chunk-by-chunk. Experimental results show the better generalization performance of the proposed approach on benchmark datasets.
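The Tikhonov-regularized solve that this approach relies on differs from the plain ELM least-squares solve only in the normal equations. A sketch contrasting the two on toy regression data; the node count and the regularization weight lam are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data: noisy linear target.
X = rng.normal(size=(150, 4))
t = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.normal(size=150)

L, lam = 100, 1e-2                    # hidden nodes, Tikhonov weight
W, b = rng.normal(size=(4, L)), rng.normal(size=L)
H = np.tanh(X @ W + b)

# Plain ELM: minimum-norm least-squares solution via the pseudo-inverse.
beta_plain = np.linalg.pinv(H) @ t

# Tikhonov-regularized ELM: minimize ||H beta - t||^2 + lam ||beta||^2,
# which keeps the weight norm small and avoids ill-posed solves.
beta_reg = np.linalg.solve(H.T @ H + lam * np.eye(L), H.T @ t)

print("plain  weight norm:", np.linalg.norm(beta_plain))
print("ridged weight norm:", np.linalg.norm(beta_reg))
```

Shrinking every singular direction by sigma^2/(sigma^2 + lam), the regularized solution always has a weight norm no larger than the pseudo-inverse one, which is exactly the "small norm of the network weight vector" objective the abstract mentions.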

16.
In this paper, we develop an online sequential learning algorithm for single-hidden-layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm, referred to as the online sequential extreme learning machine (OS-ELM), can learn data one-by-one or chunk-by-chunk (a block of data) with fixed or varying chunk size. The activation functions for additive nodes in OS-ELM can be any bounded nonconstant piecewise continuous functions, and the activation functions for RBF nodes can be any integrable piecewise continuous functions. In OS-ELM, the parameters of the hidden nodes (the input weights and biases of additive nodes, or the centers and impact factors of RBF nodes) are randomly selected, and the output weights are analytically determined from the sequentially arriving data. The algorithm builds on the batch-learning ELM of Huang et al., which has been shown to be extremely fast with generalization performance better than other batch training methods. Apart from selecting the number of hidden nodes, no other control parameters have to be chosen manually. A detailed performance comparison of OS-ELM with other popular sequential learning algorithms is carried out on benchmark problems from the regression, classification, and time-series prediction areas. The results show that OS-ELM is faster than the other sequential algorithms and produces better generalization performance.
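The chunk-by-chunk update in OS-ELM is a recursive least-squares iteration on the hidden-layer outputs. A sketch on assumed toy streaming data; the node count, chunk sizes, and target function are arbitrary choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden(X, W, b):
    return np.tanh(X @ W + b)

# Fixed random hidden layer, as in ELM.
W, b = rng.normal(size=(3, 25)), rng.normal(size=25)
true_w = np.array([2.0, -1.0, 0.5])

def make(n):
    """Simulate a chunk of arriving regression data."""
    X = rng.normal(size=(n, 3))
    return X, X @ true_w

# Initialization phase on the first chunk (needs at least 25 samples here
# so that H0^T H0 is invertible).
X0, t0 = make(60)
H0 = hidden(X0, W, b)
P = np.linalg.inv(H0.T @ H0)
beta = P @ H0.T @ t0

# Sequential phase: recursive least-squares update per arriving chunk,
# never revisiting old data.
for _ in range(5):
    Xk, tk = make(20)
    Hk = hidden(Xk, W, b)
    P = P - P @ Hk.T @ np.linalg.inv(np.eye(len(tk)) + Hk @ P @ Hk.T) @ Hk @ P
    beta = beta + P @ Hk.T @ (tk - Hk @ beta)

Xtest, ttest = make(50)
err = np.abs(hidden(Xtest, W, b) @ beta - ttest).mean()
print("mean abs test error:", err)
```

The update is algebraically exact: after each chunk, beta equals the batch least-squares solution over all data seen so far, which is why OS-ELM matches batch ELM accuracy while storing only P and beta.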

17.
Damage-location detection is directly relevant to aerospace structures, since a detection system can inspect exterior damage that may affect the operation of the equipment. In the literature, several kinds of learning algorithms have been applied to construct such detection systems, and some gave good results. However, most learning algorithms are time-consuming due to their computational complexity, so the real-time requirement of many practical applications cannot be fulfilled. The kernel extreme learning machine (kernel ELM) is a learning algorithm with good prediction performance that maintains extremely fast learning speed. In this research, kernel ELM is applied for the first time to predict the location of an impact event on a clamped aluminum plate that simulates the shell of an aerospace structure. The results were compared with several previous works, including the support vector machine (SVM) and conventional back-propagation neural networks (BPNN). The comparison reveals the effectiveness of kernel ELM for impact detection, showing that it has accuracy comparable to SVM but much faster speed on this application than both SVM and BPNN.
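Kernel ELM replaces the random hidden layer with a kernel matrix, so the output weights come from one regularized linear solve. A sketch assuming an RBF kernel and an arbitrary regularization constant C, on toy data rather than the paper's impact-detection setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, s2=1.0):
    """RBF kernel matrix between row-sets A and B."""
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2 * s2))

# Toy nonlinear 2-class problem: inside vs outside the unit circle.
X = rng.normal(size=(120, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1.0).astype(int)
T = np.eye(2)[y]
C = 100.0

# Kernel ELM: f(x) = k(x, X) (I/C + Omega)^{-1} T, with Omega_ij = k(x_i, x_j).
Omega = rbf(X, X)
alpha = np.linalg.solve(np.eye(len(X)) / C + Omega, T)

pred = np.argmax(rbf(X, X) @ alpha, axis=1)
print("training accuracy:", (pred == y).mean())
```

No hidden-node count needs to be chosen at all in the kernel form, which is part of what keeps its training so fast relative to iteratively trained BPNN.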

18.
Convex incremental extreme learning machine
Guang-Bin Huang, Lei Chen. 《Neurocomputing》, 2007, 70(16-18): 3056
Unlike conventional neural network theories and implementations, Huang et al. [Universal approximation using incremental constructive feedforward networks with random hidden nodes, IEEE Transactions on Neural Networks 17(4) (2006) 879-892] have recently proposed a new theory showing that single-hidden-layer feedforward networks (SLFNs) with randomly generated additive or radial basis function (RBF) hidden nodes (according to any continuous sampling distribution) can work as universal approximators, and that the resulting incremental extreme learning machine (I-ELM) outperforms many popular learning algorithms. I-ELM randomly generates the hidden nodes and analytically calculates the output weights of SLFNs; however, it does not recalculate the output weights of the existing nodes when a new node is added. This paper shows that, while retaining the same simplicity, the convergence rate of I-ELM can be further improved by recalculating the output weights of the existing nodes with a convex optimization method whenever a new hidden node is randomly added. Furthermore, we show that, given a type of piecewise continuous computational hidden node (possibly not neuron-like), if SLFNs can work as universal approximators with adjustable hidden-node parameters, then from a function-approximation point of view the hidden-node parameters of such "generalized" SLFNs (including sigmoid networks, RBF networks, trigonometric networks, threshold networks, fuzzy inference systems, fully complex neural networks, high-order networks, ridge polynomial networks, wavelet networks, etc.) can actually be randomly generated according to any continuous sampling distribution. In theory, the parameters of these SLFNs can be analytically determined by ELM instead of being tuned.
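The I-ELM growth step summarized above adds one random node at a time and fits only that node's output weight to the current residual. A toy sketch; the target function, node budget, and tanh activation are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D function-approximation problem.
X = rng.uniform(-1, 1, size=(200, 1))
t = np.sin(3 * X).ravel()

e = t.copy()                          # residual error, starts at the target
betas, nodes = [], []

# I-ELM: add one random hidden node at a time; its output weight is the
# least-squares fit of the node's output to the current residual, and
# existing weights are never recalculated.
for _ in range(200):
    a, b = rng.normal(size=1), rng.normal()
    g = np.tanh(X @ a + b)            # output vector of the new node
    beta = (e @ g) / (g @ g)          # beta_n = <e, g_n> / ||g_n||^2
    e = e - beta * g                  # residual never increases
    betas.append(beta)
    nodes.append((a, b))

print("residual RMS after 200 nodes:", np.sqrt((e ** 2).mean()))
```

The convex variant (CI-ELM) described in the paper improves on this by also adjusting the existing output weights at each step, which speeds up the decay of the residual without losing the one-pass simplicity.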

19.
衣治安  刘杨 《计算机应用》2007,27(11):2860-2862
Currently the better-performing multi-class algorithms include 1-v-r support vector machines (SVM), 1-1-1 SVM, and DDAG SVM, but they suffer from large indivisible regions and long training times. A binary-tree-based multi-class SVM algorithm is proposed for e-mail classification and filtering, converting the multi-class problem into binary classification by constructing a binary tree. The algorithm follows a cluster-then-classify idea: it computes the maximum similarity between test samples and subclass centers, and the separability between subclasses, to construct the optimal separating hyperplane at each decision node. A C-class problem needs only C-1 decision functions, which saves training time. Experiments show that the algorithm achieves high recall and precision.

20.
In recent years, expert target recognition has become a very important topic in the radar literature. In this study, a target-recognition system is introduced for expert automatic target recognition (ATR) using radar target echo signals of high-range-resolution (HRR) radars. The study combines adaptive feature extraction and classification using optimal wavelet-entropy parameter values; the features are extracted from radar target echo signals. A genetic wavelet extreme learning machine classifier model (GAWELM) is developed for expert target recognition. GAWELM consists of three stages: a genetic algorithm, wavelet analysis, and an extreme learning machine (ELM) classifier. Previous studies of radar target recognition have shown that the learning speed of feedforward networks is in general much slower than required, which has been a major disadvantage, for two main reasons: (1) slow gradient-based learning algorithms are commonly used to train the networks, and (2) all parameters of the networks are tuned iteratively by such learning algorithms.
In this paper, the extreme learning machine (ELM) for single-hidden-layer feedforward networks (SLFNs), which randomly chooses hidden nodes and analytically determines the output weights, is used to eliminate these disadvantages of feedforward networks in the expert-target-recognition area. The genetic algorithm (GA) stage is then used to select the feature-extraction method and to find the optimal wavelet-entropy parameter values: the best of four candidate feature-extraction methods is obtained by the GA. The four feature-extraction methods of the proposed GAWELM model are the discrete wavelet transform (DWT), DWT-short-time Fourier transform (DWT-STFT), DWT-Born-Jordan time-frequency transform (DWT-BJTFT), and DWT-Choi-Williams time-frequency transform (DWT-CWTFT). The discrete wavelet transform stage performs optimal feature extraction in the time-frequency domain, including the computation of discrete wavelet entropies. The ELM classifier evaluates the fitness function of the genetic algorithm and classifies the radar targets. The performance of the developed GAWELM expert radar-target-recognition system is examined using noisy real radar target echo signals.
The results show that the GAWELM system is effective in classifying real radar target echo signals, with a correct classification rate of about 90% for the radar target types used in this study.
