20 similar documents retrieved; search time: 15 ms
1.
2.
Although the extreme learning machine (ELM) is widely applied to classification, regression, and feature learning thanks to its speed, simplicity, ease of implementation, and universal approximation capability, it shares a drawback with other standard classifiers: its optimization objective maximizes overall classification performance across all classes. Consequently, when the class distribution of the data is imbalanced in practice, the algorithm is biased toward the majority class. Research on class-imbalanced learning with ELM started late and few algorithms exist. After reviewing the state of the art in class-imbalanced learning with ELM and its representative algorithm, the weighted ELM and its refinements, this paper proposes an AdaBoost-boosted weighted ELM that requires no preprocessing of the original imbalanced samples. Experiments on 15 imbalanced UCI datasets show that the proposed algorithm achieves better classification performance.
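The scheme the abstract describes can be sketched as follows. This is a minimal illustration rather than the paper's exact algorithm: the base learner is a sample-weighted regularized ELM, and a standard AdaBoost loop reweights the samples; all function names, widths, and constants are illustrative choices.

```python
import numpy as np

def weighted_elm(X, y, d, n_hidden=25, C=10.0, seed=0):
    """Weighted ELM base learner: beta solves (I/C + H^T W H) beta = H^T W y,
    where the diagonal of W holds the per-sample weights d."""
    rng = np.random.default_rng(seed)
    Wi = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (not tuned)
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ Wi + b)
    Hw = H * d[:, None]                            # rows of H scaled by the weights
    beta = np.linalg.solve(np.eye(n_hidden) / C + H.T @ Hw, Hw.T @ y)
    return lambda Xq: np.sign(np.tanh(Xq @ Wi + b) @ beta)

def adaboost_welm(X, y, n_rounds=10):
    """AdaBoost over weighted-ELM base learners; labels y must be in {-1, +1}."""
    d = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for t in range(n_rounds):
        f = weighted_elm(X, y, d, seed=t)
        pred = f(X)
        err = np.clip(np.sum(d * (pred != y)), 1e-10, 0.499)
        alpha = 0.5 * np.log((1.0 - err) / err)
        d = d * np.exp(-alpha * y * pred)          # upweight the misclassified samples
        d = d / d.sum()
        ensemble.append((alpha, f))
    return lambda Xq: np.sign(sum(a * f(Xq) for a, f in ensemble))

# Imbalanced toy problem: 100 majority vs. 15 minority samples
rng = np.random.default_rng(42)
X = np.vstack([rng.normal([0, 0], 0.5, size=(100, 2)),
               rng.normal([3, 3], 0.5, size=(15, 2))])
y = np.concatenate([-np.ones(100), np.ones(15)])
clf = adaboost_welm(X, y)
minority_recall = np.mean(clf(X[100:]) == 1)
```

Because the boosting distribution concentrates on misclassified (often minority) samples, no resampling of the original data is needed, which matches the abstract's claim.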
3.
Mario A. Muñoz Laura Villanova Davaatseren Baatar Kate Smith-Miles 《Machine Learning》2018,107(1):109-147
This paper tackles the issue of objective performance evaluation of machine learning classifiers, and the impact of the choice of test instances. Given that statistical properties or features of a dataset affect the difficulty of an instance for particular classification algorithms, we examine the diversity and quality of the UCI repository of test instances used by most machine learning researchers. We show how an instance space can be visualized, with each classification dataset represented as a point in the space. The instance space is constructed to reveal pockets of hard and easy instances, and enables the strengths and weaknesses of individual classifiers to be identified. Finally, we propose a methodology to generate new test instances with the aim of enriching the diversity of the instance space, enabling potentially greater insights than can be afforded by the current UCI repository.
4.
Interval data offer a valuable way of representing the available information in complex problems where uncertainty, inaccuracy, or variability must be taken into account. Considered in this paper is the learning of interval neural networks, of which the input and output are vectors with interval components, and the weights are real numbers. The back-propagation (BP) learning algorithm is very slow for interval neural networks, just as for usual real-valued neural networks. Extreme learning machine (ELM) has faster learning speed than the BP algorithm. In this paper, ELM is applied for learning of interval neural networks, resulting in an interval extreme learning machine (IELM). There are two steps in the ELM for usual feedforward neural networks. The first step is to randomly generate the weights connecting the input and the hidden layers, and the second step is to use the Moore–Penrose generalized inverse to determine the weights connecting the hidden and output layers. The first step can be directly applied for interval neural networks. But the second step cannot, due to the involvement of nonlinear constraint conditions for IELM. Instead, we use the same idea as that of the BP algorithm to form a nonlinear optimization problem to determine the weights connecting the hidden and output layers of IELM. Numerical experiments show that IELM is much faster than the usual BP algorithm. The generalization performance of IELM is also much better than that of BP, while the training error of IELM is a little worse than that of BP, implying that there might be over-fitting for BP.
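The two ELM steps described above — random hidden-layer weights, then a Moore–Penrose pseudoinverse for the output weights — can be sketched for an ordinary real-valued network (the interval arithmetic of IELM is omitted; function names and sizes are illustrative):

```python
import numpy as np

def elm_fit(X, T, n_hidden=30, seed=0):
    """Train a basic ELM regressor: random hidden layer + pseudoinverse readout."""
    rng = np.random.default_rng(seed)
    # Step 1: randomly generate input weights and biases (never tuned).
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)              # hidden-layer output matrix
    # Step 2: solve the output weights with the Moore-Penrose pseudoinverse.
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: learn y = sin(x) on [0, pi]
X = np.linspace(0, np.pi, 200).reshape(-1, 1)
T = np.sin(X)
W, b, beta = elm_fit(X, T, n_hidden=30, seed=0)
rmse = np.sqrt(np.mean((elm_predict(X, W, b, beta) - T) ** 2))
```

Only step 2 involves any fitting, which is why ELM training reduces to a single linear solve.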
5.
Sattar Ahmed M. A. Ertuğrul Ömer Faruk Gharabaghi B. McBean E. A. Cao J. 《Neural computing & applications》2019,31(1):157-169
Neural Computing and Applications - A novel failure rate prediction model is developed by the extreme learning machine (ELM) to provide key information needed for optimum ongoing...
6.
Extreme learning machine for regression and multiclass classification
Huang GB Zhou H Ding X Zhang R 《IEEE transactions on systems, man, and cybernetics. Part B, Cybernetics》2012,42(2):513-529
Due to the simplicity of their implementations, the least squares support vector machine (LS-SVM) and proximal support vector machine (PSVM) have been widely used in binary classification applications. The conventional LS-SVM and PSVM cannot be used in regression and multiclass classification applications directly, although variants of LS-SVM and PSVM have been proposed to handle such cases. This paper shows that both LS-SVM and PSVM can be simplified further and a unified learning framework of LS-SVM, PSVM, and other regularization algorithms referred to as the extreme learning machine (ELM) can be built. ELM works for "generalized" single-hidden-layer feedforward networks (SLFNs), but the hidden layer (also called the feature mapping) in ELM need not be tuned. Such SLFNs include but are not limited to SVM, polynomial network, and the conventional feedforward neural networks. This paper shows the following: 1) ELM provides a unified learning platform with a wide range of feature mappings and can be applied in regression and multiclass classification applications directly; 2) from the optimization method point of view, ELM has milder optimization constraints compared to LS-SVM and PSVM; 3) in theory, compared to ELM, LS-SVM and PSVM achieve suboptimal solutions and require higher computational complexity; and 4) in theory, ELM can approximate any target continuous function and classify any disjoint regions. As verified by the simulation results, ELM tends to have better scalability and achieve similar (for regression and binary class cases) or much better (for multiclass cases) generalization performance at much faster learning speed (up to thousands of times) than traditional SVM and LS-SVM.
7.
Recently, a novel learning algorithm for single-hidden-layer feedforward neural networks (SLFNs) named extreme learning machine (ELM) was proposed by Huang et al. The essence of ELM is that the learning parameters of hidden nodes, including input weights and biases, are randomly assigned and need not be tuned, while the output weights can be analytically determined by a simple generalized inverse operation. The only parameter that needs to be defined is the number of hidden nodes. Compared with other traditional learning algorithms for SLFNs, ELM provides an extremely fast learning speed and better generalization performance, and requires minimal human intervention. This paper first gives a brief review of ELM, describing its principle and algorithm. Then, we focus on improved methods and typical variants of ELM, especially incremental ELM, pruning ELM, error-minimized ELM, two-stage ELM, online sequential ELM, evolutionary ELM, voting-based ELM, ordinal ELM, fully complex ELM, and symmetric ELM. Next, the paper summarizes the applications of ELM in classification, regression, function approximation, pattern recognition, forecasting, diagnosis, and so on. Finally, the paper discusses several open issues of ELM that may be worth exploring in the future.
8.
An extreme learning machine method based on robust estimation
The extreme learning machine (ELM) is a type of single-hidden-layer feedforward neural network (SLFN). Compared with traditional neural network algorithms, it has a simple structure, a fast learning speed, and good generalization performance. The output weights of ELM are computed by least squares (LS), but the classical LS estimator has poor resistance to outliers: it tends to amplify the influence of outliers and noise, yielding inaccurate or even completely wrong model parameters. To solve this problem, a robust extreme learning machine algorithm (RBELM) based on M-estimation is proposed, which replaces ordinary least squares with weighted least squares when computing the output weights. Regression and classification experiments on several datasets show that the method effectively reduces the influence of outliers and has good robustness.
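The M-estimation idea can be sketched as an iteratively reweighted least-squares readout with Huber weights. This is only an illustration of the approach, not the paper's RBELM: the hidden-layer matrix H is stood in for by a plain design matrix, and the tuning constant k = 1.345 is the customary Huber default.

```python
import numpy as np

def robust_elm_readout(H, T, n_iter=10, k=1.345):
    """Solve the ELM output weights by iteratively reweighted least squares
    with Huber's M-estimator, downweighting samples with large residuals."""
    beta = np.linalg.pinv(H) @ T                   # ordinary LS starting point
    for _ in range(n_iter):
        r = (T - H @ beta).ravel()
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # robust scale (MAD)
        u = np.abs(r) / s
        w = np.where(u <= k, 1.0, k / u)           # Huber weights
        Hw = H * w[:, None]
        # Weighted normal equations: (H^T W H) beta = H^T W T
        beta = np.linalg.solve(Hw.T @ H + 1e-8 * np.eye(H.shape[1]), Hw.T @ T)
    return beta

# Linear toy data y = 2x with one gross outlier
x = np.linspace(0, 1, 50)
H = np.column_stack([x, np.ones_like(x)])          # stand-in hidden output matrix
T = (2 * x).reshape(-1, 1)
T[10] += 25.0                                      # corrupt one observation
beta_ls = np.linalg.pinv(H) @ T                    # plain LS, pulled by the outlier
beta_rob = robust_elm_readout(H, T)                # robust fit, outlier downweighted
```

The outlier receives a weight near zero after a few reweighting passes, so the robust slope stays close to the true value of 2 while the plain LS slope is badly biased.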
9.
To address the limitations of emotion recognition from audio and images, a new modality is proposed: touch-based emotion recognition. A series of experiments is conducted on the CoST (corpus of social touch) dataset: the data are preprocessed and several features for touch-based emotion recognition are proposed. An extreme learning machine (ELM) classifier is used to study emotion recognition under different gestures, recognizing three emotions (gentle, normal, aggressive) across 14 gestures with high accuracy and short recognition time. The results show that the gesture affects recognition accuracy: the gesture "stroke" yields the highest accuracy under every classifier tested, reaching 72.07%; the ELM classifier achieves good accuracy with fast recognition; and some gestures are inherently associated with particular emotions, which influences the classification results.
10.
Ee May Kan Meng Hiot Lim Yew Soon Ong Ah Hwee Tan Swee Ping Yeo 《Neural computing & applications》2013,22(3-4):469-477
Unmanned aerial vehicles (UAVs) rely on global positioning system (GPS) information to ascertain their position for navigation during mission execution. In the absence of GPS information, the capability of a UAV to carry out its intended mission is hindered. In this paper, we investigate alternative means for UAVs to derive real-time positional reference information so as to ensure the continuity of the mission. We present the extreme learning machine as a mechanism for learning stored digital elevation information so as to help UAVs navigate through terrain without the need for GPS. The proposed algorithm accommodates the needs of on-line implementation by supporting multi-resolution terrain access, and is thus capable of generating an immediate path with high accuracy within the allowable time scale. Numerical tests have demonstrated the potential benefits of the approach.
11.
The learning accuracy of existing extreme learning machines is strongly affected by the number of hidden nodes. Both the proposed single-hidden-layer ELMs and multilayer neural networks fix the number of hidden layers first and then improve accuracy by adding neurons to each layer. When the training set is large, however, many hidden nodes must be introduced, which makes computing the generalized inverse expensive and hurts learning efficiency. This paper proposes MHL-ELM (Extreme Learning Machine with Incremental Hidden Layers), which adds hidden layers one at a time. The idea is to randomly assign the weights of the current hidden layer (whose size is small and not optimized, so the cost is low) and compute the approximation error in ELM fashion; if the error does not meet the requirement, another hidden layer is added and the current hidden layer is optimized with the ELM approach, and layers are added until the error tolerance is met. The algorithmic complexity of MHL-ELM is ∑_{l=1}^{M} O(N_l^3). Experiments on 10 real datasets from UCI and KEEL, compared against traditional methods such as BP and OP-ELM, show that MHL-ELM generalizes better and improves substantially in both learning accuracy and learning speed.
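Under the assumption that "adding a layer" means stacking a small random hidden layer on top of the previous one and re-solving the readout, the layer-by-layer loop might look like the sketch below. This is a loose illustration only; the paper's per-layer optimization step is not reproduced, and the widths, activation, and stopping rule are arbitrary.

```python
import numpy as np

def mhl_elm(X, T, tol=1e-3, max_layers=5, width=20, seed=0):
    """Layer-by-layer ELM sketch: keep stacking a small random hidden layer
    on the previous layer's output and re-solving the readout until the
    training RMSE drops below tol (or max_layers is reached)."""
    rng = np.random.default_rng(seed)
    A = X
    layers = []
    for _ in range(max_layers):
        W = rng.normal(size=(A.shape[1], width))   # small random layer, not optimized
        b = rng.normal(size=width)
        A = np.tanh(A @ W + b)                     # new top hidden layer
        layers.append((W, b))
        beta = np.linalg.pinv(A) @ T               # ELM readout on the current top layer
        rmse = np.sqrt(np.mean((A @ beta - T) ** 2))
        if rmse < tol:                             # stop once the tolerance is met
            break
    return layers, beta, rmse

# Toy regression target
X = np.linspace(0, np.pi, 200).reshape(-1, 1)
T = np.sin(X)
layers, beta, rmse = mhl_elm(X, T, tol=1e-3)
```

Because each added layer is narrow, every pseudoinverse involves only a small matrix, which is the complexity advantage the abstract claims.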
12.
Extreme learning machine (ELM) is widely used in training single-hidden layer feedforward neural networks (SLFNs) because of its good generalization and fast speed. However, most improved ELMs address the approximation problem for sample data with noise in the output values only, not for sample data with noise in both input and output values, i.e., the errors-in-variables (EIV) model. In this paper, a novel algorithm, called (regularized) TLS-ELM, is proposed to approximate the EIV model based on ELM and the total least squares (TLS) method. The proposed TLS-ELM uses the idea of ELM to choose the hidden weights, and applies the TLS method to determine the output weights. Furthermore, the perturbation quantities of the hidden output matrix and the observed values are given simultaneously. Comparison experiments with the least squares method, the TLS method, and ELM show that the proposed TLS-ELM achieves better accuracy and less training time.
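The TLS step for the readout can be sketched with the classical SVD construction: take the SVD of the augmented matrix [H | T] and form beta = -V12 V22^{-1} from blocks of the right singular vectors. A sketch under these assumptions (the names and the sanity check are illustrative, not the paper's experiments):

```python
import numpy as np

def tls_readout(H, T):
    """Total-least-squares solution of H @ beta ~ T via the SVD of [H | T],
    accounting for noise in both H and T."""
    n_h = H.shape[1]
    _, _, Vt = np.linalg.svd(np.hstack([H, T]), full_matrices=False)
    V = Vt.T
    V12 = V[:n_h, n_h:]          # top-right block of V
    V22 = V[n_h:, n_h:]          # bottom-right block of V
    return -V12 @ np.linalg.inv(V22)

# Sanity check: with noise-free data, TLS recovers the exact weights
rng = np.random.default_rng(0)
H = rng.normal(size=(100, 5))
beta_true = rng.normal(size=(5, 2))
beta_tls = tls_readout(H, H @ beta_true)
```

Unlike ordinary least squares, which attributes all residual error to T, this construction perturbs H and T jointly, which is exactly the EIV assumption.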
13.
The extreme learning machine (ELM) is widely used in many fields because of its fast training speed and good generalization, but when the number of training samples is very large its training slows down and the program may even fail. This paper therefore proposes replacing the generalized-inverse computation in the ELM model with an improved conjugate gradient algorithm. Experimental results show that, at the same generalization accuracy, the conjugate gradient ELM trains faster than the ELM based on matrix inversion. The study finds that the conjugate-gradient-based ELM does not need to compute the generalized inverse of a large matrix, whereas most generalized-inverse computations rely on singular value decomposition (SVD), which is very inefficient for matrices of high order. Since the conjugate gradient algorithm is proven to find the solution in a finite number of iterations, the conjugate-gradient-based ELM trains quickly and is well suited to large-scale data.
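The replacement the abstract proposes — conjugate gradient on the normal equations (H^T H) beta = H^T T instead of an SVD-based pseudoinverse — can be sketched as follows (a bare-bones CG with illustrative names and sizes, not the paper's improved variant):

```python
import numpy as np

def cg_solve(A, b, tol=1e-12, max_iter=None):
    """Conjugate gradient for a symmetric positive definite system A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter or 10 * len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:      # residual small enough: stop early
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def cg_elm_readout(H, T):
    """ELM output weights by CG on the normal equations, one target column at a time."""
    A = H.T @ H
    G = H.T @ T
    return np.column_stack([cg_solve(A, G[:, j]) for j in range(T.shape[1])])

# Compare against the SVD-based least-squares solution
rng = np.random.default_rng(0)
H = np.tanh(rng.normal(size=(300, 3)) @ rng.normal(size=(3, 15)) + rng.normal(size=15))
T = rng.normal(size=(300, 2))
beta_cg = cg_elm_readout(H, T)
beta_ls = np.linalg.lstsq(H, T, rcond=None)[0]
```

CG needs only matrix-vector products with H^T H, so no large matrix is ever inverted or decomposed, which is the source of the speedup the abstract reports for large data.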
14.
15.
Neural Computing and Applications - Based on the theory of local receptive field based extreme learning machine (ELM-LRF) and ELM auto encoder (ELM-AE), a new network...
16.
Most manifold learning techniques are used to transform high-dimensional data sets into low-dimensional space. In the use of such techniques, after unseen data samples are added to the data set, retraining is usually necessary. However, retraining is a time-consuming process with no guarantee that the new transformation maps existing points to exactly the same coordinates, thus presenting a barrier to the application of manifold learning as a preprocessing step in predictive modeling. To solve this problem, learning a mapping from high-dimensional representations to low-dimensional coordinates via a structured support vector machine is proposed. After training the mapping, low-dimensional representations of unobserved data samples can be easily predicted. Experiments on several datasets show that the proposed method outperforms existing out-of-sample extension methods.
17.
18.
Isa Ebtehaj Hossein Bonakdari Shahaboddin Shamshirband 《Engineering with Computers》2016,32(4):691-704
The minimum velocity required to prevent sediment deposition in open channels is examined in this study. The parameters affecting transport are first determined and then categorized into different dimensionless groups, including "movement," "transport," "sediment," "transport mode," and "flow resistance." Six different models are presented to identify the effect of each of these parameters. The feed-forward neural network (FFNN) is used to predict the densimetric Froude number (Fr) and the extreme learning machine (ELM) algorithm is utilized to train it. The results of this algorithm are compared with back propagation (BP), genetic programming (GP) and existing sediment transport equations. The results indicate that FFNN-ELM produced better results than FFNN-BP, GP and existing sediment transport methods in both training (RMSE = 0.26 and MARE = 0.052) and testing (RMSE = 0.121 and MARE = 0.023). Moreover, the performance of FFNN-ELM is examined for different pipe diameters.
19.
Extreme learning machine: algorithm, theory and applications
Shifei Ding Han Zhao Yanan Zhang Xinzheng Xu Ru Nie 《Artificial Intelligence Review》2015,44(1):103-115
20.
Active learning for multi-label classification based on sphere-structured support vector machines
To achieve multi-label classification while reducing the cost of labeled training samples, sphere-structured support vector machines are combined with active learning. Sample labels are determined from the distance differences of samples lying in the overlapping regions of the spheres; the characteristics of multi-label classification are analyzed, and the classifier is updated using a nearest-neighbor strategy. Experimental results show that the method obtains more effective classification results with fewer training samples.