Similar Documents (20 results)
1.
This paper presents a modified structure of a neural network with a tunable activation function and provides a new learning algorithm for training it. Simulation results on the XOR problem, the Feigenbaum function, and the Henon map show that the new algorithm outperforms the BP (back propagation) algorithm, with shorter convergence time and higher convergence accuracy. Further modifications of the network structure, combined with the faster learning algorithm, yield a simpler structure with even faster convergence and better accuracy.

2.
To solve the problem that the extreme learning machine (ELM) algorithm uses a fixed activation function and cannot perform residual compensation, a new learning algorithm, the variable activation function extreme learning machine based on residual prediction compensation, is proposed. In the learning process, the proposed method adjusts the steepness, position, and mapping scope of the activation function simultaneously. To enhance the nonlinear mapping capability of ELM, a particle swarm optimization algorithm optimizes the variable parameters against the root-mean-square error of the model's predictions. To further improve predictive accuracy, an auto-regressive moving average model is fitted to the residual errors between the actual values and the predictions of the variable activation function extreme learning machine (V-ELM), and the predicted residuals are used to rectify the V-ELM prediction. Simulation results on the Pole, Auto-Mpg, Housing, Diabetes, Triazines, and Stock benchmark datasets verified the effectiveness and feasibility of this method. The method was also used to develop a soft sensor model for the gasoline dry point in delayed coking, with satisfactory results.

3.
This paper proposes a modified ELM algorithm that properly selects the input weights and biases before training the output weights of single-hidden-layer feedforward neural networks with sigmoidal activation functions, and proves mathematically that the hidden-layer output matrix maintains full column rank. The modified ELM avoids the randomness of the original ELM. Experimental results on both regression and classification problems show good performance of the modified ELM algorithm.

4.
Extreme learning machine (ELM) has been an important research topic over the last decade due to its high efficiency, easy implementation, unification of classification and regression, and unification of binary and multi-class learning tasks. Despite these advantages, existing ELM algorithms cannot directly handle the case where some features of the samples are missing or unobserved, which is very common in practical applications. This paper fills the gap by proposing an absent ELM (A-ELM) algorithm to address the issue. Observing that some structural characteristics of a portion of packed malware instances hold unreasonable values, we cast packed-executable identification as an absence learning problem, which can be efficiently addressed by the proposed A-ELM algorithm. Extensive experiments were conducted on six UCI data sets and a packed data set to evaluate the performance of the proposed algorithm. The results indicate that A-ELM is superior to imputation algorithms and existing state-of-the-art methods.

5.
To improve the classification accuracy of the kernel extreme learning machine (KELM), a KELM parameter optimization method combining K-fold cross-validation (K-CV) with a genetic algorithm (GA), denoted GA-KELM, is proposed. The average accuracy of the models trained under CV serves as the GA fitness function, providing an evaluation criterion for KELM parameter optimization; the KELM with the GA-optimized parameters is then used for data classification. Simulations on UCI data sets show that the proposed method outperforms GA combined with support vector machines (GA-SVM) and GA combined with back propagation (GA-BP) in overall performance, achieving higher classification accuracy.
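The KELM whose parameters the GA tunes solves a regularized kernel system in closed form. A minimal NumPy sketch under assumed conventions (RBF kernel; `gamma` and `C` are exactly the hyperparameters a GA with CV fitness would search over; all function names are illustrative, not from the paper):

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian (RBF) kernel matrix between row-sample matrices X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def kelm_fit(X, T, gamma, C):
    """KELM training: solve (I/C + K) alpha = T for the output coefficients."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(np.eye(len(X)) / C + K, T)

def kelm_predict(Xnew, X, alpha, gamma):
    """Prediction: f(x) = k(x, X) @ alpha."""
    return rbf_kernel(Xnew, X, gamma) @ alpha

# toy regression; gamma and C are the quantities a GA + K-fold CV would optimize
X = np.linspace(0, 1, 50).reshape(-1, 1)
T = np.sin(2 * np.pi * X)
alpha = kelm_fit(X, T, gamma=10.0, C=1e4)
mse = np.mean((kelm_predict(X, X, alpha, gamma=10.0) - T) ** 2)
```

A GA wrapper would simply evaluate this fit/predict cycle on each CV fold and average the resulting accuracies as the fitness value.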

6.
To overcome the disadvantages of the traditional algorithms for SLFNs (single-hidden-layer feedforward neural networks), an improved algorithm called the extreme learning machine (ELM) was proposed by Huang et al. However, ELM is sensitive to the number of neurons in the hidden layer, and selecting that number is a difficult problem. In this paper, a self-adaptive mechanism is introduced into ELM, yielding a new variant called the self-adaptive extreme learning machine (SaELM). SaELM is a self-adaptive learning algorithm that always selects the best hidden-neuron number to form the network, with no parameters to adjust during training. To evaluate SaELM, it is applied to the Italian wine and iris classification problems. Comparisons between SaELM and traditional back propagation, basic ELM, and the general regression neural network show that SaELM has a faster learning speed and better generalization performance on these classification problems.

7.
Building on classification applications of the kernel-based extreme learning machine (KELM), and exploiting the strong global search ability and fast convergence of the lion swarm optimization (LSO) algorithm, an LSO-optimized KELM algorithm is proposed. The test accuracy is used as the fitness function for the LSO-optimized KELM, and the optimal ... is obtained according to the moving positions.

8.
Extreme learning machine (ELM) can be considered a black-box modeling approach that seeks a model representation extracted from the training data. In this paper, a modified ELM algorithm, called symmetric ELM (S-ELM), is proposed by incorporating a priori knowledge of symmetry. S-ELM is realized by transforming the original activation function of the hidden neurons into one that is symmetric with respect to the input variables of the samples. In theory, S-ELM can approximate N arbitrary distinct samples with zero error. Simulation results show that, in applications where prior knowledge of symmetry exists, S-ELM obtains better generalization performance, faster learning speed, and a more compact network architecture.
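One standard way to make a hidden activation symmetric in the inputs, consistent with the transformation the abstract describes, is to average the activation over x and -x, so every hidden output (and hence the trained network) satisfies f(x) = f(-x). A minimal sketch with illustrative names (the paper's exact symmetrization may differ):

```python
import numpy as np

def sym_hidden(X, W, b):
    """Even-symmetrized hidden layer: h_s(x) = (g(Wx + b) + g(-Wx + b)) / 2.
    Each column is invariant under x -> -x by construction."""
    return 0.5 * (np.tanh(X @ W + b) + np.tanh(-X @ W + b))

# random inputs and (fixed, ELM-style random) hidden parameters
rng = np.random.default_rng(1)
X = rng.standard_normal((10, 3))
W = rng.standard_normal((3, 8))
b = rng.standard_normal(8)

H_pos = sym_hidden(X, W, b)
H_neg = sym_hidden(-X, W, b)   # identical to H_pos: the symmetry is built in
```

Training then proceeds exactly as in ordinary ELM, with this symmetrized matrix replacing the usual hidden-layer output matrix; an odd prior (f(x) = -f(-x)) would use the difference instead of the sum.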

9.
Extreme learning machine (ELM) [G.-B. Huang, Q.-Y. Zhu, C.-K. Siew, Extreme learning machine: a new learning scheme of feedforward neural networks, in: Proceedings of the International Joint Conference on Neural Networks (IJCNN2004), Budapest, Hungary, 25-29 July 2004], a novel learning algorithm much faster than traditional gradient-based learning algorithms, was recently proposed for single-hidden-layer feedforward neural networks (SLFNs). However, ELM may need a higher number of hidden neurons due to the random determination of the input weights and hidden biases. In this paper, a hybrid learning algorithm is proposed that uses a differential evolutionary algorithm to select the input weights and the Moore-Penrose (MP) generalized inverse to analytically determine the output weights. Experimental results show that this approach achieves good generalization performance with much more compact networks.
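The basic ELM training scheme this hybrid builds on (random input weights and biases, output weights via the MP generalized inverse) fits in a few lines of NumPy. A sketch with illustrative names; the differential-evolution step of the paper would simply replace the random draw of `W` and `b` with an evolved choice:

```python
import numpy as np

def elm_train(X, T, n_hidden, rng):
    """Basic ELM: random hidden parameters, analytic output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (never tuned)
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                     # Moore-Penrose generalized inverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy regression: fit y = sin(x) on 200 points
rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 200).reshape(-1, 1)
T = np.sin(X)
W, b, beta = elm_train(X, T, n_hidden=50, rng=rng)
mse = np.mean((elm_predict(X, W, b, beta) - T) ** 2)
```

Because only `beta` is solved for, there is no iterative gradient loop at all, which is the source of ELM's speed advantage over BP.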

10.
In this paper, we propose the Dandelion Algorithm (DA), based on the behaviour of dandelion sowing. In DA, dandelion seeds are sown within a range determined by a dynamic radius. The dandelion also has a self-learning ability: it can select a number of excellent seeds to learn from. We compare the proposed algorithm with existing algorithms; simulations show that it outperforms them. Moreover, the proposed algorithm can be applied to optimise the extreme learning machine (ELM), yielding very good classification and prediction capability.

11.
To address the difficulty of measuring certain biological parameters in fermentation processes, a soft-sensor modeling method based on an improved extreme learning machine (IELM) is proposed. The method computes the optimal input-to-hidden-layer learning parameters via the least-squares method and an error-feedback principle, improving the stability and prediction accuracy of the model. The optimal output weights are computed by a bidiagonalization method, resolving the ill-conditioning of the output matrix and further improving model stability. The proposed method is applied to soft sensing of biomass concentration in the erythromycin fermentation process. The results show that, compared with the ELM, PL-ELM, and IRLS-ELM soft-sensor modeling methods, the IELM online soft-sensor modeling method achieves higher prediction accuracy and stronger generalization ability.

12.
Data stream classification is an important approach to extracting useful knowledge from massive and dynamic data. Because of concept drift, traditional data mining techniques cannot be applied directly in data stream environments. The extreme learning machine (ELM) is a single-hidden-layer feedforward neural network (SLFN); compared with traditional neural networks (e.g. BP networks), ELM trains much faster, making it well suited to real-time data processing. To address the challenges of data stream classification, this paper proposes a new ELM-based approach. The approach uses ELMs as base classifiers and adaptively decides the number of hidden-layer neurons; in addition, activation functions are randomly selected from a set of candidates to improve performance. The algorithm trains a series of classifiers, and decisions for unlabeled data are made by a weighted voting strategy. While the concept in the data stream remains stable, every classifier is incrementally updated with new data; if concept drift is detected, classifiers with weak performance are discarded. In the experiments, 7 artificial data sets and 9 real data sets from the UCI repository were used to evaluate the proposed approach. The results show that, compared with conventional data stream classification methods such as ELM, BP, AUE2, and Learn++.MF, on most data sets the new approach not only has the simplest structure but also attains higher and more stable accuracy with lower time consumption.

13.
《传感器与微系统》2019,(7):138-141
To address the measurement errors of a tri-axial accelerometer, an implicit nonlinear error model is established, and an autonomous reverse-tuning extreme learning machine (RT-ELM) is proposed to train the error model. Experimental results show that after compensation the tri-axial error is essentially confined within ±0.07 m/s², with a root-mean-square error below 0.004 m/s²: a reduction of more than 100 times compared with the uncompensated error, and roughly 7 times the compensation accuracy of a fixed ELM. The compensation performance is essentially consistent for arbitrarily chosen training and test sets, demonstrating the good generalization ability and robustness of the extreme learning algorithm. Moreover, training on several thousand sample points takes only about 0.06 s, thousands of times faster than a traditional back-propagation (BP) neural network, making the method suitable for error compensation, control systems, and other applications with demanding real-time requirements.

14.
To automate and quantify home-based rehabilitation assessment of the upper limb after stroke, an automatic score-prediction model for the clinically most widely used Fugl-Meyer motor function assessment (FMA) scale was built with an extreme learning machine (ELM). Four movements from the shoulder-elbow portion of the FMA were selected, and motion data of 24 patients were collected with two acceleration sensors fixed on the forearm and upper arm of the hemiplegic side. After preprocessing and feature extraction, feature selection was performed with a genetic algorithm (GA) and ELM, and both per-movement ELM prediction models and a combined prediction model were built. The results show that the model can accurately and automatically predict the shoulder-elbow FMA score, with a root-mean-square prediction error of 2.1849 points. The method overcomes the subjectivity and time cost of conventional assessment and its dependence on rehabilitation physicians or therapists, and can be conveniently used for home-based rehabilitation assessment.

15.
Extreme learning machine (ELM) works for generalized single-hidden-layer feedforward networks (SLFNs); its essence is that the hidden layer of SLFNs need not be tuned. However, ELM uses only labeled data to carry out its supervised learning task. To exploit unlabeled data in the ELM model, we first extend the manifold regularization (MR) framework and then demonstrate the relation between the extended MR framework and ELM. Finally, a manifold regularized extreme learning machine is derived from the proposed framework, which retains the properties of ELM and is applicable to large-scale learning problems. Experimental results show that the proposed semi-supervised extreme learning machine is the most cost-efficient method: it tends to have better scalability and achieves satisfactory generalization performance at a relatively faster learning speed than traditional semi-supervised learning algorithms.

16.
Convex incremental extreme learning machine
Guang-Bin Huang, Lei Chen, 《Neurocomputing》 2007, 70(16-18): 3056
Unlike conventional neural network theories and implementations, Huang et al. [Universal approximation using incremental constructive feedforward networks with random hidden nodes, IEEE Transactions on Neural Networks 17(4) (2006) 879-892] have recently proposed a new theory showing that single-hidden-layer feedforward networks (SLFNs) with randomly generated additive or radial basis function (RBF) hidden nodes (drawn from any continuous sampling distribution) can work as universal approximators, and that the resulting incremental extreme learning machine (I-ELM) outperforms many popular learning algorithms. I-ELM randomly generates the hidden nodes and analytically calculates the output weights of SLFNs; however, it does not recalculate the output weights of the existing nodes when a new node is added. This paper shows that, while retaining the same simplicity, the convergence rate of I-ELM can be further improved by recalculating the output weights of the existing nodes with a convex optimization method whenever a new hidden node is randomly added. Furthermore, we show that, given a type of piecewise continuous computational hidden node (possibly not neuron-like), if SLFNs can work as universal approximators with adjustable hidden-node parameters, then from a function approximation point of view the hidden-node parameters of such "generalized" SLFNs (including sigmoid networks, RBF networks, trigonometric networks, threshold networks, fuzzy inference systems, fully complex neural networks, high-order networks, ridge polynomial networks, wavelet networks, etc.) can actually be randomly generated according to any continuous sampling distribution. In theory, the parameters of these SLFNs can be analytically determined by ELM instead of being tuned.
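The I-ELM growth rule that the convex variant improves on is simple: each new random node receives the single output weight that best fits the current residual, beta = <e, h> / <h, h>, which can never increase the residual norm. A sketch with illustrative names (the convex recalculation step of this paper is not shown):

```python
import numpy as np

def ielm_fit(X, t, n_nodes, rng):
    """Incremental ELM sketch: add random tanh nodes one at a time.
    Each node's weight is the least-squares fit of that node alone to the
    current residual; earlier weights are never revisited."""
    e = t.copy()                       # current residual
    nodes, residuals = [], []
    for _ in range(n_nodes):
        w = rng.standard_normal(X.shape[1])
        b = rng.standard_normal()
        h = np.tanh(X @ w + b)         # output vector of the new random node
        beta = (e @ h) / (h @ h)       # optimal weight for this node: <e,h>/<h,h>
        e = e - beta * h               # residual after adding the node
        nodes.append((w, b, beta))
        residuals.append(np.linalg.norm(e))
    return nodes, residuals

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 2))
t = np.sin(X[:, 0]) + 0.5 * X[:, 1]
_, residuals = ielm_fit(X, t, n_nodes=30, rng=rng)
```

Because each step subtracts an orthogonal projection of the residual, the residual norm is monotonically non-increasing; the paper's contribution is to shrink it faster by re-solving all output weights convexly at each addition.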

17.
Variational Bayesian extreme learning machine
Extreme learning machine (ELM) randomly generates the parameters of the hidden nodes and then analytically determines the output weights with fast learning speed. The ill-posedness of the hidden-node parameter matrix directly causes unstable performance, and the automatic selection of hidden nodes is critical to maintaining the high efficiency of ELM. Focusing on these two problems, this paper proposes the variational Bayesian extreme learning machine (VBELM). First, a Bayesian probabilistic model is incorporated into ELM, where the Bayesian prior distribution avoids the ill-posedness of the hidden-node matrix. Then, variational approximate inference is employed in the Bayesian model to compute the posterior distribution and the independent variational hyperparameters, which can be used to select the hidden nodes automatically. Theoretical analysis and experimental results show that VBELM has more stable performance with more compact architectures, provides probabilistic predictions in contrast to traditional point predictions, and offers a hyperparameter criterion for hidden-node selection.

18.
Due to their significant efficiency and simple implementation, extreme learning machine (ELM) algorithms have recently received much attention in regression and classification applications. Much effort has been devoted to enhancing the performance of ELM from both the methodology (ELM training strategies) and structure (incremental or pruned ELMs) perspectives. In this paper, a local coupled extreme learning machine (LC-ELM) algorithm is presented. By assigning an address to each hidden node in the input space, LC-ELM introduces a decoupler framework to ELM that reduces the complexity of the weight search space. The activation degree of a hidden node is measured by the membership degree of the similarity between its address and the given input. Experimental results confirm that the proposed approach works effectively and generally outperforms the original ELM in both regression and classification applications.

19.
A wavelet extreme learning machine
Extreme learning machine (ELM) has been widely used in various fields to overcome the slow training speed of conventional neural networks. The kernel extreme learning machine (KELM) introduces the kernel method into the ELM model, which is applicable in statistical machine learning. However, if the number of samples is too small, the unbalanced samples may not reflect the statistical characteristics of the input data, and the learning ability of the model suffers. At the same time, the kernel functions used in KELM are conventional ones, so the choice of kernel function can still be optimized. To address these problems, we introduce a weighted method into KELM to handle unbalanced samples. Wavelet kernel functions have been widely used in support vector machines and obtain good classification performance; therefore, to combine wavelet analysis with KELM, we introduce wavelet kernel functions into the KELM model, using a mixed kernel of a wavelet kernel and a sigmoid kernel, and apply the weighted method to balance the sample distribution. We thus propose the weighted wavelet-mix kernel extreme learning machine. The experimental results show that this method effectively improves classification ability with better generalization, and the wavelet kernel functions perform very well compared with conventional kernel functions in the KELM model.
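A common wavelet kernel in the SVM literature is the Morlet-type product kernel, and a mixed kernel is typically a convex combination of two base kernels. A sketch under those assumptions (the scale `a`, sigmoid parameters, and mixing weight `w` are illustrative hyperparameters, not the paper's values):

```python
import numpy as np

def morlet_wavelet_kernel(X, Y, a=1.0):
    """Morlet-type wavelet kernel:
    K(x, y) = prod_i cos(1.75 * (x_i - y_i) / a) * exp(-(x_i - y_i)^2 / (2 a^2))."""
    D = X[:, None, :] - Y[None, :, :]                      # pairwise coordinate differences
    return np.prod(np.cos(1.75 * D / a) * np.exp(-D**2 / (2 * a**2)), axis=2)

def sigmoid_kernel(X, Y, alpha=0.01, c=0.0):
    """Sigmoid (tanh) kernel on inner products."""
    return np.tanh(alpha * (X @ Y.T) + c)

def mix_kernel(X, Y, w=0.7):
    """Mixed kernel: convex combination of wavelet and sigmoid kernels."""
    return w * morlet_wavelet_kernel(X, Y) + (1 - w) * sigmoid_kernel(X, Y)

rng = np.random.default_rng(3)
X = rng.standard_normal((5, 4))
Kw = morlet_wavelet_kernel(X, X)   # diagonal is exactly 1
K = mix_kernel(X, X)               # symmetric Gram matrix
```

Dropping this `mix_kernel` into the KELM closed-form solution in place of a conventional RBF kernel is all the kernel swap requires; the weighting for unbalanced samples is a separate per-sample reweighting of the regularization term.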

20.
The extreme learning machine model is studied, and an ELM prediction model incorporating expected risk minimization is proposed. The basic idea is to consider structural risk and expected risk simultaneously: using the relation between expected risk and empirical risk, the expected risk is converted into empirical risk, and the ELM prediction model minimizing the expected risk is then solved. Numerical experiments on regression problems with artificial and real data sets compare the method with the extreme learning machine (ELM) and the regularized extreme learning machine (RELM); the results show that the proposed method effectively improves generalization ability.
