Similar Articles
 20 similar articles found
1.
To address the network-structure design problem of the extreme learning machine (ELM), a pruning algorithm based on sensitivity analysis is proposed. Using the hidden-node outputs and the corresponding output-layer weight vectors, the sensitivity of the learning residual to each hidden node and a network-size fitness measure are defined. The importance of each hidden node is judged by its sensitivity, the number of hidden nodes is determined by the network-size fitness, and low-importance nodes are deleted. Simulation results show that the proposed algorithm can accurately determine a network size matched to the training samples, solving the ELM structure-design problem.
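The pruning idea can be sketched in a few lines of numpy. In this toy illustration, a node's sensitivity is taken to be the norm of its contribution H[:, j]·β_j to the fitted output — an assumption standing in for the paper's exact sensitivity definition — and the least important nodes are deleted before the output weights are re-solved:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data
X = rng.normal(size=(100, 3))
y = np.sin(X[:, 0:1]) + 0.1 * X[:, 1:2]

L = 20                                     # initial number of hidden nodes
W = rng.normal(size=(3, L))                # random input weights (never trained)
b = rng.normal(size=L)
H = np.tanh(X @ W + b)                     # hidden-layer output matrix
beta = np.linalg.pinv(H) @ y               # output weights via least squares

# Sensitivity of node j: size of its contribution H[:, j] * beta[j] to the output
sens = np.linalg.norm(H * beta.T, axis=0)
keep = np.argsort(sens)[-10:]              # keep the 10 most important nodes

H2 = H[:, keep]
beta2 = np.linalg.pinv(H2) @ y             # re-solve output weights on pruned net
err_full = np.linalg.norm(H @ beta - y)
err_pruned = np.linalg.norm(H2 @ beta2 - y)
print(err_full, err_pruned)
```

Since the pruned network's hidden columns span a subspace of the full network's, the pruned residual can only match or exceed the full one; a good sensitivity measure keeps the increase small.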

2.
The extreme learning machine (ELM) is widely used in pattern recognition for its speed, efficiency, and good generalization. However, current ELM algorithms and their variants do not fully consider the effect of the hidden-layer output matrix on generalization. Experiments show that a poorly chosen activation function and excessively high data dimensionality drive the hidden-node outputs toward zero, making the output-weight solution inaccurate and degrading classification performance. To address this, a diffeomorphism-optimized ELM algorithm is proposed, which combines dimensionality reduction with diffeomorphism techniques to improve the robustness of the activation function and overcome the problem of hidden-node outputs tending to zero. Experiments on face data verify the effectiveness of the proposed algorithm and show that it has good generalization performance.

3.
In the extreme learning machine (ELM), many values map into the saturated region of the activation function, and the hidden-layer inputs and outputs are far from sharing a common distribution, which severely degrades generalization. To address this, an ELM that optimizes an affine transformation (AT) inside the activation function under a Gaussian-distribution criterion is studied. The main idea is to introduce a new linear relationship on the hidden-layer input data and to optimize the scaling and translation parameters of the error function by gradient descent, so that the hidden-layer outputs closely follow a Gaussian distribution. Computing the affine parameters from the Gaussian assumption keeps the hidden nodes mutually independent while still emphasizing their strong dependence. Experiments on real classification datasets and image-regression datasets show that the hidden-layer outputs do not follow a uniform distribution well but do tend toward a Gaussian one, and that the method achieves better overall results, with clear improvements over the original ELM and the AT-ELM1 algorithm.
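The effect of an affine transformation on activation saturation can be illustrated directly. The sketch below standardizes the pre-activation by simple moment matching rather than the paper's gradient-descent optimization — an assumption made for brevity — and measures how many tanh outputs are saturated before and after:

```python
import numpy as np

rng = np.random.default_rng(1)
X = 10.0 * rng.normal(size=(500, 5))       # poorly scaled inputs
w = rng.normal(size=5)
z = X @ w                                   # raw pre-activation: large magnitude

# Affine transformation a*z + b chosen so the transformed pre-activation is
# approximately standard Gaussian (moment matching, not the paper's method).
a = 1.0 / z.std()
b = -z.mean() / z.std()

h_raw = np.tanh(z)
h_at = np.tanh(a * z + b)

saturated = lambda h: np.mean(np.abs(h) > 0.999)  # fraction of saturated outputs
print(saturated(h_raw), saturated(h_at))
```

With the raw pre-activation, most tanh outputs sit in the saturated region; after the affine transformation the pre-activation concentrates in the responsive part of the curve, which is the behavior the AT-ELM line of work exploits.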

5.
In most statistical models, the output is a superposition of highly nonlinear and linear functions of the input. To better support data-driven research, this paper proposes an ELM with a hybrid hidden layer (Extreme Learning Machine with Hybrid Hidden Layer, HHL-ELM). The HHL-ELM adds to the hidden layer of a conventional ELM a special node whose activation function differs from that of the other hidden nodes, forming a hybrid hidden-layer structure intended to strengthen the output of the ELM model. The network is tested on the Housing dataset from the UCI benchmark repository and validated on an industrial application example. Model comparisons show that the HHL-ELM network achieves high accuracy on complex data, offering a new idea for the development and application of neural networks.

6.
An optimized BP algorithm for feedforward neural networks
吴小红  金炳尧 《计算机科学》2004,31(Z2):240-241
1 Introduction. In 1986, Rumelhart, Hinton, and Williams presented a complete and concise error back-propagation learning algorithm (the BP algorithm) for artificial neural networks [1]. Its learning process consists of two parts, forward and backward propagation. In the forward pass, the input information is propagated to the hidden (intermediate) nodes and transformed by a chosen activation function (also called a transfer function; in principle many functions could be used, but the sigmoid is commonly chosen for artificial neurons because its mapping is simple to compute and it converges well), then propagated layer by layer from the hidden nodes to the output nodes. In this process the output of each layer of neurons affects only the next layer, so the input parameters are mapped through the hidden layers to the output layer. If the error between the actual output and the desired output exceeds the allowed tolerance, the backward pass begins: the error signal is returned along the original connection paths, propagating layer by layer from the output layer back toward the input layer, and the connection weights between the layers are adjusted along the way to reduce the output error, completing one learning cycle. Learning repeats over many such cycles until the error is equal to or below the tolerance.
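The forward/backward procedure described above can be sketched in a few lines of numpy. This is a minimal toy illustration of one-hidden-layer back-propagation with sigmoid activations, not the paper's optimized variant; the data, learning rate, and network size are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy data: 2 inputs, 1 output in (0, 0.8)
X = rng.uniform(-1, 1, size=(200, 2))
y = 0.5 * (X[:, :1] + 1) * 0.8

W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)   # hidden -> output
lr = 0.5

def loss():
    return float(np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2))

loss0 = loss()
for _ in range(500):
    # Forward pass: propagate input through hidden layer to output
    H = sigmoid(X @ W1 + b1)
    out = sigmoid(H @ W2 + b2)
    # Backward pass: propagate the error back, layer by layer
    d_out = (out - y) * out * (1 - out)          # output-layer error signal
    d_H = d_out @ W2.T * H * (1 - H)             # error reflected to hidden layer
    # Adjust connection weights to reduce the output error
    W2 -= lr * H.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_H / len(X);   b1 -= lr * d_H.mean(0)
print(loss0, loss())
```

Each loop iteration is one learning cycle in the abstract's terminology; training stops here after a fixed number of cycles rather than an error tolerance, purely to keep the sketch short.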

7.
Infrared small-target detection based on artificial neural networks
A fast algorithm is proposed that uses an artificial neural network to estimate the background of an infrared image, and a target model is built from the characteristics of dim small targets in infrared imagery. Two concentric windows, one large and one small, are used: the outer layer of the large window estimates the background around the target, giving the output of the first hidden node, while the small window inside the large one estimates the characteristics of the central pixel, giving the output of the second hidden node. The difference between the second and first hidden-node outputs decides whether the central pixel belongs to the target or the background; the larger the difference, the larger the output. Training the network weights with this idea gives better detection of true targets and rejection of false ones.
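The dual-window idea can be illustrated without the network itself. The hypothetical numpy sketch below replaces the two trained hidden nodes with plain window means, keeping only the inner-minus-outer differencing the abstract describes, on a synthetic image with one planted bright target:

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.normal(100, 2, size=(64, 64))    # flat IR background plus noise
img[30:33, 30:33] += 40                    # small bright target at (31, 31)

def dual_window_response(img, r_in=2, r_out=5):
    """Inner-window mean minus outer-ring mean at each pixel
    (a stand-in for the paper's two hidden-node outputs)."""
    resp = np.zeros_like(img)
    H, W = img.shape
    for i in range(r_out, H - r_out):
        for j in range(r_out, W - r_out):
            big = img[i - r_out:i + r_out + 1, j - r_out:j + r_out + 1]
            small = img[i - r_in:i + r_in + 1, j - r_in:j + r_in + 1]
            ring = (big.sum() - small.sum()) / (big.size - small.size)
            resp[i, j] = small.mean() - ring   # large where a small target sits
    return resp

resp = dual_window_response(img)
peak = np.unravel_index(resp.argmax(), resp.shape)
print(peak)
```

The response peaks where the inner window covers the target while the outer ring still sees background, which is exactly the contrast the two-node construction is trained to exploit.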

8.
The extreme learning machine (ELM) trains quickly and classifies accurately, and has been widely applied to practical problems such as face recognition with good results. However, real-world data are often high-dimensional and contaminated by noise and outliers, which lowers the classification rate of the ELM algorithm. This is mainly because: 1) the input-sample dimensionality is too high; and 2) the activation function is poorly chosen. Both drive the activation-function outputs toward zero and ultimately degrade ELM performance. For the first problem, a robust linear dimensionality-reduction method (RAF-global embedding, RAF-GE) is proposed to preprocess the high-dimensional data before ELM classification. For the second, the properties of different activation functions are analyzed in depth and a robust activation function (RAF) is proposed that avoids driving the activation outputs toward zero, improving the performance of both RAF-GE and the ELM algorithm. Experiments confirm that the resulting face-recognition method generally outperforms comparison methods using other activation functions.

9.
The role of the random mapping in the ELM algorithm and the effect of the number of hidden nodes on the network's generalization ability are studied experimentally. Experiments on 35 datasets find, for each dataset, the hidden-node count corresponding to the network's best accuracy. The results show that when the random mapping raises the data to a sufficiently high dimension, network performance improves.

10.
To address the accurate estimation of generators in a wind-power grid and the grid's dynamic adjustment, a wind-turbine power-prediction method based on the extreme learning machine (ELM) is proposed. The hidden-layer activation function of the ELM is first modeled; the output power of the wind turbines is then predicted through steps including preprocessing the sample data, choosing the activation function, setting the maximum number of hidden nodes and the maximum number of principal components, checking the mean RMSECV, and computing the output weights and output matrix. Finally, system simulation and comparative data analysis verify the performance gains of the proposed method in accurate generator estimation and dynamic grid adjustment.

11.
The extreme learning machine (ELM) is a new method for training single-hidden-layer feedforward networks with a much simpler training procedure. While conventional kernel-based classifiers are built on a single kernel, in practice it is often desirable to base classifiers on combinations of multiple kernels. In this paper, we address multiple-kernel learning (MKL) for ELM by formulating it as a semi-infinite linear program, and extend the idea by integrating MKL techniques. The kernel function in this ELM formulation no longer needs to be fixed: it can be learned automatically as a combination of multiple kernels. Two formulations of multiple-kernel classifiers are proposed. The first is based on a convex combination of the given base kernels, while the second uses a convex combination of so-called equivalent kernels. Empirically, the second formulation is particularly competitive. Experiments on a large number of both toy and real-world data sets (including a high-magnification-sampling-rate image data set) show that the resulting classifier is fast and accurate and can be trained easily by simply solving a linear program.

12.
Feedforward neural networks have been used extensively to approximate complex nonlinear mappings directly from input samples; however, their traditional learning algorithms are usually much slower than required. In this work, two hidden-feature-space ridge-regression methods, HFSR and centered-ELM, are first proposed for feedforward networks. As special kernel methods, both share two important characteristics: the rigorous Mercer condition on kernel functions is not required, and they inherently propagate the prominent advantages of ELM into multilayer feedforward networks (MLFN). Besides the randomly assigned weights adopted in both ELM and HFSR, HFSR exploits a second source of randomness: exemplars randomly selected from the training set for the kernel activation functions. Through forward layer-by-layer data transformation, HFSR and centered-ELM extend to MLFN. Accordingly, as a unified framework for the two, the least learning machine (LLM) is proposed for both SLFN and MLFN with single or multiple outputs. LLM gives a new learning method for MLFN that keeps the virtues ELM has for SLFN: only the parameters in the last hidden layer need to be adjusted, all parameters in the other hidden layers can be randomly assigned, and LLM is much faster than BP when training MLFN on sample sets. The experimental results clearly indicate the power of LLM in nonlinear regression modeling.

13.
Evolutionary selection extreme learning machine optimization for regression
A neural network regression model can approximate unknown datasets with small error. As an important global regression method, the extreme learning machine (ELM) is a typical learning scheme for single-hidden-layer feedforward networks, owing to its good generalization performance and fast implementation; the randomness of the input weights lets the nonlinear combination achieve arbitrary function approximation. In this paper, we seek an alternative mechanism for the input connections, with an idea derived from evolutionary algorithms. After predefining the number L of hidden nodes, we generate original ELM models, treating each hidden node as a gene. The hidden nodes are ranked, the larger-weight nodes are reassigned to the updated ELM, and the L/2 trivial hidden nodes are placed in a candidate reservoir. We then generate L/2 new hidden nodes and combine them with L hidden nodes drawn from this reservoir, using a second ranking to choose among them. Fitness-proportional selection picks L/2 hidden nodes and recombines the evolutionary-selection ELM. The entire algorithm can be applied to large-scale dataset regression. Verification shows that its regression performance is better than that of the traditional ELM and the Bayesian ELM, at lower cost.

14.
With the arrival of the big-data era, managing heterogeneous and distributed fuzzy XML data is increasingly important, and classifying fuzzy XML documents is a key problem in that management. A two-hidden-layer extreme learning machine model is proposed to classify fuzzy XML documents automatically. The model has two parts: the first layer uses an extreme learning machine to extract features of the fuzzy XML documents, and the second layer uses a kernel extreme learning machine to perform the final classification from those features. Experiments verify the performance advantages of the proposed method. The main tuning parameters, namely the number of hidden nodes L, the constant C, and the kernel parameter γ, are studied first; subsequent comparison experiments show that the proposed two-hidden-layer ELM (Extreme Learning Machine) approach achieves markedly higher classification accuracy and shorter training time than the conventional single-hidden-layer ELM and SVM (Support Vector Machine) methods.
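The kernel-ELM stage used for the final classification admits a simple closed form. Below is a minimal sketch assuming an RBF kernel and toy 2-D blob data; the fuzzy-XML feature-extraction stage is omitted, and the kernel, regularization constant C, and data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
# Two-class toy "feature" data standing in for extracted document features
X = np.vstack([rng.normal(-1.5, 0.5, size=(50, 2)),
               rng.normal(1.5, 0.5, size=(50, 2))])
T = np.vstack([np.tile([1, 0], (50, 1)), np.tile([0, 1], (50, 1))])  # one-hot

def rbf_kernel(A, B, gamma=0.5):
    # Pairwise squared distances, then Gaussian kernel
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

C = 10.0
K = rbf_kernel(X, X)
# Kernel ELM output weights: alpha = (K + I/C)^(-1) T
alpha = np.linalg.solve(K + np.eye(len(X)) / C, T)

pred = (K @ alpha).argmax(axis=1)          # decision: f(x) = K(x, X) alpha
acc = (pred == T.argmax(axis=1)).mean()
print(acc)
```

In the kernel formulation no explicit hidden layer is materialized; the kernel matrix plays the role of H·Hᵀ, and a new sample is classified by evaluating its kernel row against the training set.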

15.
Because the input-weight matrix and hidden-layer biases of the conventional extreme learning machine are given randomly, its accuracy can be clearly insufficient in research on computer-aided diagnosis of breast tumors. An improved fish-swarm algorithm is therefore proposed to optimize the ELM. In building an effective aid to breast-tumor diagnosis, this work exploits the ELM's fast training and good generalization, and uses the improved fish-swarm algorithm to optimize the ELM's hidden-layer biases, constructing a nonlinear mapping between breast tumors and ten feature vectors extracted from breast-tumor sample data. Simulation results of the proposed breast-tumor recognition method are compared with those of the AFSA-ELM, ELM, LVQ, and BP methods in terms of recognition accuracy, false-negative rate, and learning speed. The results show that the proposed method achieves high classification accuracy, a low false-negative rate, and fast learning for breast-tumor diagnosis.

16.
This paper presents a performance-enhancement scheme for the recently developed extreme learning machine (ELM) on multi-category sparse-data classification problems. ELM is a single-hidden-layer neural network with good generalization capability and extremely fast learning. In ELM, the input weights are chosen randomly and the output weights are calculated analytically. The generalization performance of the ELM algorithm on sparse-data classification depends critically on three free parameters: the number of hidden neurons, the input weights, and the bias values, all of which need to be chosen optimally; selecting them for the best performance of ELM is a complex optimization problem. In this paper, we present a new real-coded genetic-algorithm approach called 'RCGA-ELM' to select the optimal number of hidden neurons, input weights, and bias values for better performance. Two new genetic operators, a 'network-based operator' and a 'weight-based operator', are proposed to find a compact network with higher generalization performance. We also present an alternate, less computationally intensive approach called 'sparse-ELM', which searches for the best parameters of ELM using K-fold validation. A multi-class human-cancer classification problem using (sparse) micro-array gene-expression data is used to evaluate the performance of the two schemes. Results indicate that the proposed RCGA-ELM and sparse-ELM significantly improve ELM performance on sparse multi-category classification problems.
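The core ELM training step the abstract describes — input weights chosen randomly, output weights calculated analytically — can be sketched as follows. The toy two-class data and network size are assumptions for illustration; the genetic-algorithm parameter selection is not shown:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two-class toy data: well-separated Gaussian blobs, one-hot targets
X = np.vstack([rng.normal(-2, 1, size=(100, 2)),
               rng.normal(2, 1, size=(100, 2))])
T = np.vstack([np.tile([1, 0], (100, 1)), np.tile([0, 1], (100, 1))])

# ELM: input weights and biases are randomly chosen and never trained
L = 30
W = rng.normal(size=(2, L))
b = rng.normal(size=L)
H = np.tanh(X @ W + b)                     # hidden-layer output matrix

# Output weights follow analytically via the Moore-Penrose pseudoinverse
beta = np.linalg.pinv(H) @ T

pred = (H @ beta).argmax(axis=1)
acc = (pred == T.argmax(axis=1)).mean()
print(acc)
```

The free parameters the paper optimizes are visible here as L, W, and b: a bad draw of random weights or a poorly sized hidden layer directly changes H and hence the quality of the analytic solution for beta.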

17.
To detect and diagnose faulty components in analog circuits more efficiently, an extreme learning machine optimized by an adaptive wolf-pack algorithm is proposed. The method uses an adaptive genetic algorithm to select the feature parameters and generate an optimal feature subset, from which samples are constructed and fed into an ELM network to classify the faults. Because the connection weights between the ELM's input and hidden layers and the hidden-layer biases affect its learning speed and classification accuracy, the proposed method optimizes them and selects the corresponding optimal values, improving the stability of ELM network training and the success rate of fault diagnosis. Two typical analog-circuit diagnosis examples illustrate the concrete procedure, with fault-diagnosis rates above 99% in both. Simulation results show that the method achieves good accuracy and stability in analog-circuit fault diagnosis.

18.
In this paper, we propose a novel method that performs dynamic action classification by exploiting the effectiveness of the Extreme Learning Machine (ELM) algorithm for training single-hidden-layer feedforward neural networks. It involves data grouping and ELM-based data projection at multiple levels. Given a test action instance, a neural network is trained using the labeled action instances that form the groups residing in the test sample's neighborhood. The action instances involved in this procedure are subsequently mapped to a new feature space determined by the trained network outputs. This procedure is repeated, the number of repetitions determined by the test action instance at hand, until only a single class is retained. Experimental results demonstrate the effectiveness of the dynamic classification approach compared with the static one, as well as the effectiveness of ELM in the proposed dynamic classification setting.

19.
In this paper, we propose an extreme learning machine with a tunable activation function (TAF-ELM), whose activation functions are determined dynamically from the input data by means of a differential-evolution algorithm. The main objective is to overcome ELM's dependence on the fixed slope of its activation function. We mainly consider benchmark problems in function approximation and pattern classification. Compared with the ELM and E-ELM learning algorithms at the same network size, or with a more compact network configuration, the proposed algorithm achieves improved generalization performance with good accuracy. In addition, it performs very well among TAF neural-network learning algorithms.

20.
Compared with radial-basis-function (RBF) neural networks, the extreme learning machine (ELM) trains faster and generalizes better, and the affinity-propagation (AP) clustering algorithm can determine the number of clusters automatically. This paper therefore proposes a multi-label learning model (ML-AP-RBF-RELM) that fuses AP clustering, multi-label RBF (ML-RBF), and regularized ELM (RELM). In the model, the input layer is mapped with ML-RBF, and AP clustering automatically determines the number of clusters for each label class, from which the number of hidden nodes is computed. The per-label cluster counts are then used with K-means clustering to determine the centers of the hidden-node RBF functions. Finally, RELM rapidly solves the connection weights from the hidden layer to the output layer. Experiments show that ML-AP-RBF-RELM performs well.
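The RELM step — solving the hidden-to-output weights in closed form with a regularization constant — can be sketched as follows. The toy regression data and the plain random tanh hidden layer are assumptions standing in for the model's ML-RBF/AP-clustering front end; C is an illustrative regularization constant:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(80, 4))
T = X @ rng.normal(size=(4, 1)) + 0.05 * rng.normal(size=(80, 1))

L, C = 200, 100.0                          # more hidden nodes than samples
W = rng.normal(size=(4, L))
b = rng.normal(size=L)
H = np.tanh(X @ W + b)                     # hidden-layer output matrix

# Regularized ELM closed form (L > N case):
#   beta = H^T (H H^T + I/C)^(-1) T
beta = H.T @ np.linalg.solve(H @ H.T + np.eye(len(X)) / C, T)

err = np.linalg.norm(H @ beta - T) / np.linalg.norm(T)
print(err)
```

Relative to the plain pseudoinverse solution, the I/C term keeps the output weights bounded when H is ill-conditioned (here, with more hidden nodes than samples), which is what gives RELM its stability.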

