Similar Literature
20 similar items found (search time: 0 ms)
1.
Yahia  Siwar  Said  Salwa  Zaied  Mourad 《Multimedia Tools and Applications》2020,79(19-20):13869-13890
Multimedia Tools and Applications - In this paper, we present a novel classification approach based on Extreme Learning Machine (ELM) and Wavelet Neural Networks. We introduce two novel...

2.
Convolutional neural networks are good feature extractors but not optimal classifiers, whereas extreme learning machines classify well but cannot learn complex features. Combining the strengths of both, this paper proposes a new face recognition method: a convolutional neural network extracts facial features, and an extreme learning machine performs recognition on those features. The paper also proposes fixing some of the network's convolution kernels to reduce the number of trainable parameters and thereby improve recognition accuracy. Tests on the ORL and XM2VTS face databases show that the combined method effectively improves the face recognition rate, and that fixing part of the convolution kernels is advantageous when training samples are scarce.

3.
Neural networks have been widely applied in pattern recognition, automatic control, data mining, and other fields, but the speed of their learning methods cannot meet practical demands. Traditional error back-propagation (BP) is based on gradient descent and requires many iterations; all network parameters must be determined iteratively during training, so the algorithm's computational cost and search space are large. The Extreme Learning Machine (ELM) instead uses a one-shot learning idea that greatly increases learning speed, avoids repeated iteration and local minima, and offers good generalization, robustness, and controllability. However, across different datasets and application domains, whether ELM is used for classification or regression, the algorithm itself still has open problems. This paper therefore gives an in-depth comparative analysis of existing methods and points out future directions for extreme learning research.

4.
Research on extreme learning methods for neural networks    Cited 57 times (0 self-citations, 57 by others)
Single-hidden-layer feedforward neural networks (SLFNs) have been widely applied in pattern recognition, automatic control, data mining, and other fields, but the speed of traditional learning methods falls far short of practical needs and has become the main bottleneck to their development. There are two main causes: (1) traditional error back-propagation (BP) is based on gradient descent and requires many iterations; (2) all network parameters must be determined iteratively during training. The computational cost and search space of such algorithms are therefore large. To address these problems, this paper draws on ELM's one-shot learning idea and, based on structural risk minimization, proposes a fast learning method (RELM) that avoids repeated iteration and local minima and offers good generalization, robustness, and controllability. Experiments show that the overall performance of RELM is superior to ELM, BP, and SVM.
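The one-shot ELM training idea that items 3 and 4 refer to can be sketched in a few lines of NumPy. This is a generic illustration, not the RELM method of any cited paper; the hidden-node count, tanh activation, and test function are arbitrary choices:

```python
import numpy as np

def elm_train(X, y, n_hidden=100, seed=0):
    """One-shot ELM: random, untrained hidden layer; output weights by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights, never adjusted
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y                     # Moore-Penrose minimum-norm solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

The only trained quantities are the output weights `beta`; the random hidden layer is never adjusted, which is what removes BP's iterative gradient search.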

5.
In this paper, we introduce a new method based on the Bernstein Neural Network model (BeNN) and the extreme learning machine algorithm to solve differential equations. In the proposed method, we develop a single-layer functional-link BeNN in which the hidden layer is eliminated by expanding the input pattern with Bernstein polynomials. The network parameters are obtained by solving a system of linear equations using the extreme learning machine algorithm. Finally, numerical experiments are carried out in MATLAB and the results are compared with existing methods, demonstrating the feasibility and superiority of the proposed method.

6.
Compared with radial basis function (RBF) neural networks, the extreme learning machine (ELM) trains faster and generalizes better, and the affinity propagation (AP) clustering algorithm can determine the number of clusters automatically. This paper therefore proposes a multi-label learning model (ML-AP-RBF-RELM) that combines AP clustering, multi-label RBF (ML-RBF), and regularized ELM (RELM). In the model, the input layer is mapped with ML-RBF, and AP clustering automatically determines the number of clusters for each label, from which the number of hidden nodes is computed. The centers of the hidden nodes' RBF functions are then determined by K-means clustering using the per-label cluster counts. Finally, the connection weights from the hidden layer to the output layer are solved quickly with RELM. Experiments show that ML-AP-RBF-RELM performs well.

7.
A wavelet neural network for function learning    Cited 9 times (0 self-citations, 9 by others)
In nonlinear system identification, system inputs are often multivariate, and wavelets handle such problems only with difficulty. Combining the form of neural networks with the properties of wavelets, this paper builds a new type of network that solves the multi-input problem simply and effectively. A mathematical proof that the network can approximate any continuous function is given, and the correctness of the method is verified by examples.

8.
A novel FLIR image segmentation technique based on a wavelet neural network architecture is proposed, combining the time-frequency localization of the wavelet transform with the self-learning ability of neural networks, so that the FLIR segmentation algorithm gains strong approximation and fault-tolerance capabilities. The algorithm has been applied in a FLIR-ATR system, with good results in extracting FLIR target contours and suppressing cluttered backgrounds.

9.
Relevance ranking has been a popular and interesting topic over the years, with a large variety of applications. A number of machine learning techniques have been successfully applied as learning algorithms for relevance ranking, including neural networks, regularized least squares, support vector machines, and so on. From a machine learning point of view, the extreme learning machine actually provides a unified framework in which the aforementioned algorithms can be considered special cases. In this paper, pointwise ELM and pairwise ELM are proposed to learn relevance ranking problems for the first time. In particular, a linear random node for ELM is newly proposed, together with a linear kernel version of ELM. The well-known publicly available dataset collection LETOR is used to compare the ELM-based ranking algorithms with state-of-the-art linear ranking algorithms.

10.
Differential Evolution Training Algorithm for Feed-Forward Neural Networks    Cited 11 times (0 self-citations, 11 by others)
Differential evolution, an evolutionary optimization method over continuous search spaces, has recently been successfully applied to real-world and artificial optimization problems and has also been proposed for neural network training. However, differential evolution has not been comprehensively studied in the context of training neural network weights, i.e., how useful differential evolution is in finding the global optimum at the expense of convergence speed. In this study, differential evolution has been analyzed as a candidate global optimization method for feed-forward neural networks. In comparison to gradient-based methods, differential evolution does not seem to provide any distinct advantage in terms of learning rate or solution quality. Differential evolution can rather be used in the validation of reached optima and in the development of regularization terms and non-conventional transfer functions that do not necessarily provide gradient information. This revised version was published online in June 2006 with corrections to the Cover Date.

11.
乃永强  李军 《信息与控制》2015,44(3):257-262
For the control of rigid manipulator systems, an adaptive neural control algorithm based on the extreme learning machine (ELM) is proposed. ELM randomly selects the hidden nodes and parameters of a single-hidden-layer feedforward network (SLFN) and adjusts only the output weights, achieving good generalization at extremely high learning speed. Using Lyapunov synthesis, the proposed ELM controller approximates the model uncertainties of the system through adaptive adjustment of the output weights, thereby guaranteeing the stability of the whole closed-loop control system. The adaptive neural controller is applied to a 2-DOF planar manipulator and compared with an existing radial basis function (RBF) neural network adaptive control algorithm. Experimental results show that, under identical conditions, the ELM controller achieves good tracking performance, demonstrating the effectiveness of the proposed algorithm.

12.
Traditional neural networks used to diagnose pumping-unit dynamometer cards are limited to synchronous instantaneous inputs and cannot effectively capture the time-accumulation effect of continuous input signals, so their diagnostic accuracy is low. To address this, an extreme-learning discrete process neuron network is proposed, in which cubic-spline numerical integration handles the time-domain aggregation of discrete samples and weights. Training uses extreme learning: model training is converted into a least-squares problem, and the output weights are computed from the Moore-Penrose generalized inverse and the hidden-layer output matrix, increasing learning speed. For dynamometer-card recognition, the displacement and load discrete time series are fed directly into the model to identify five common card states. Experimental results show that the method achieves high recognition accuracy and learns faster than other process neural network models.

13.
Building on an analysis of the kernel extreme learning machine, a wavelet function is used as the kernel, yielding the wavelet kernel extreme learning machine (WKELM). Experiments show that this algorithm improves classification performance and robustness. On this basis, detecting particle swarm optimization (DPSO) is used to optimize the WKELM parameters, producing a DPSO-WKELM classifier with better classification results. Simulations on UCI gene data, comparing the results against the RBF-kernel extreme learning machine (KELM) and WKELM, show that the proposed algorithm achieves higher classification accuracy.
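A minimal sketch of the wavelet-kernel ELM idea, assuming the commonly used Morlet-style product kernel; the exact kernel of the cited work and the DPSO parameter search are not reproduced, and the values of `a` and `C` are illustrative:

```python
import numpy as np

def wavelet_kernel(X, Z, a=0.2):
    """Morlet-style wavelet kernel: K(x,z) = prod_i cos(1.75 u) exp(-u^2/2), u = (x_i - z_i)/a."""
    U = (X[:, None, :] - Z[None, :, :]) / a
    return np.prod(np.cos(1.75 * U) * np.exp(-U ** 2 / 2), axis=-1)

def kelm_train(X, y, C=1e6):
    """Kernel ELM output weights: alpha = (K + I/C)^(-1) y (regularized least squares)."""
    K = wavelet_kernel(X, X)
    return np.linalg.solve(K + np.eye(len(X)) / C, y)

def kelm_predict(X_new, X_train, alpha):
    return wavelet_kernel(X_new, X_train) @ alpha
```

Unlike the random-hidden-layer ELM, the kernel version needs no explicit hidden nodes; the Gram matrix plays the role of the hidden-layer output matrix.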

14.
To address depth-cue quality and nonlinear-model classification problems in large-scale RGB-D datasets, a 3D object recognition method based on convolutional-recursive neural networks and a kernel extreme learning machine is proposed. A depth-map encoding algorithm corrects the missing values and noise in the original depth maps and normalizes the point clouds to a standard viewpoint, forming depth-encoded maps that are combined with the original depth maps as new depth cues. Convolutional-recursive neural networks learn hierarchical features of the different visual cues, and a two-path spatial pyramid pooling method processes the multi-cue features separately. Finally, a kernel-based extreme learning machine is built as the classifier to perform 3D object recognition. Experiments show that the method effectively improves the 3D recognition rate and classification efficiency.

15.
A survey of extreme learning machines    Cited 3 times (0 self-citations, 3 by others)
The extreme learning machine is a training algorithm for single-hidden-layer feedforward networks whose main features are extremely fast training and very high generalization performance. This survey reviews the development of the extreme learning machine, analyzes its mathematical model, describes its many improved variants in detail, and lists applications in recognition, prediction, and medical diagnosis. Finally, it summarizes and predicts directions for future improvement.

16.
Adjusting parameters iteratively is the traditional way of training neural networks, and Rough RBF Neural Networks (R-RBF-NN) follow the same idea. However, this approach has many disadvantages, for instance in training accuracy and generalization accuracy, so how to change this situation is a hot topic in academia. On the basis of the Extreme Learning Machine (ELM), this paper proposes a Weighted Regularized Extreme Learning Machine (WRELM), which takes into account both structural risk minimization and the weighted least-squares principle, to train the R-RBF-NN. The traditional iterative training is replaced by the minimum-norm least-squares solution of a general linear system. The proposed method increases the controllability of the entire learning process and, by considering both structural and empirical risk, improves learning and generalization performance. Experiments show that WRELM reaches very good performance in both time and accuracy when training Rough RBF Neural Networks for pattern classification and function regression; in pattern classification in particular, it improves generalization accuracy by more than 3.36 % compared with ELM. The performance of the proposed method is clearly better than that of the traditional methods.

17.
Self-Adaptive Evolutionary Extreme Learning Machine    Cited 1 time (0 self-citations, 1 by others)
In this paper, we propose an improved learning algorithm named self-adaptive evolutionary extreme learning machine (SaE-ELM) for single hidden layer feedforward networks (SLFNs). In SaE-ELM, the network hidden node parameters are optimized by the self-adaptive differential evolution algorithm, whose trial vector generation strategies and their associated control parameters are self-adapted in a strategy pool by learning from their previous experiences in generating promising solutions, and the network output weights are calculated using the Moore–Penrose generalized inverse. SaE-ELM outperforms the evolutionary extreme learning machine (E-ELM) and the different evolutionary Levenberg–Marquardt methods in general, as it can self-adaptively determine the suitable control parameters and generation strategies involved in DE. Simulations have shown that SaE-ELM not only performs better than E-ELM with several manually chosen generation strategies and control parameters but also obtains better generalization performance than several related methods.

18.
Extreme learning machine (ELM) is a learning algorithm for generalized single-hidden-layer feed-forward networks (SLFNs). In order to obtain a suitable network architecture, the Incremental Extreme Learning Machine (I-ELM) is a variant of ELM that constructs SLFNs by adding hidden nodes one by one. Although various I-ELM-class algorithms have been proposed to improve the convergence rate or to minimize training error, they either do not change the construction scheme of I-ELM or face an over-fitting risk. Making the testing error converge quickly and stably therefore becomes an important issue. In this paper, we propose a new incremental ELM, referred to as the Length-Changeable Incremental Extreme Learning Machine (LCI-ELM). It allows more than one hidden node to be added to the network, and the existing network is regarded as a whole during output weight tuning. The output weights of newly added hidden nodes are determined using a partial error-minimizing method. We prove that an SLFN constructed using LCI-ELM has approximation capability on a universal compact input set as well as on a finite training set. Experimental results demonstrate that LCI-ELM achieves a higher convergence rate and a lower over-fitting risk than several competitive I-ELM-class algorithms.
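The basic I-ELM construction described above (one random node at a time, with its output weight chosen to minimize the remaining error) can be sketched as follows. This is a generic illustration of plain I-ELM, not LCI-ELM itself; the node count and test function are arbitrary:

```python
import numpy as np

def ielm_train(X, y, max_nodes=30, seed=0):
    """I-ELM: add random hidden nodes one at a time, each reducing the current residual."""
    rng = np.random.default_rng(seed)
    residual = y.astype(float).copy()
    nodes = []
    for _ in range(max_nodes):
        w = rng.standard_normal(X.shape[1])  # random node parameters, fixed once drawn
        b = rng.standard_normal()
        h = np.tanh(X @ w + b)
        beta = (residual @ h) / (h @ h)      # error-minimizing output weight for this node
        residual = residual - beta * h       # projection step: residual norm never grows
        nodes.append((w, b, beta))
    return nodes, residual

def ielm_predict(X, nodes):
    return sum(beta * np.tanh(X @ w + b) for w, b, beta in nodes)
```

Because each `beta` is the least-squares weight for a single node against the current residual, the training error is non-increasing in the number of nodes, which is the property LCI-ELM generalizes to groups of nodes.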

19.
The weighted extreme learning machine fixes class weights by hand and may therefore miss better weights; to address this, an improved weighted extreme learning machine is proposed. The initial weight of the majority class is set to 1, and the ratio of majority-class to minority-class sample counts is used as the initial weight of the minority class. A weight adjustment factor is then added to the majority or minority class to tune the weights in both directions, shrinking and enlarging, and the best weight is selected experimentally. Experiments compare the original weighted extreme learning machine, extreme learning machines with other weights, and the new method on modified UCI datasets. The results show that the new method outperforms the other weighted extreme learning machines on both F-measure and G-mean.
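The initial weighting rule described above (majority weight 1, minority weight equal to the majority/minority sample ratio) can be sketched as follows; the adjustment-factor search is omitted, and the hidden-node count, regularization constant, and test data are illustrative assumptions:

```python
import numpy as np

def weighted_elm_train(X, y, n_hidden=40, C=100.0, seed=0):
    """Weighted regularized ELM for binary imbalance: minority samples weighted N_maj / N_min."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    classes, counts = np.unique(y, return_counts=True)
    majority = classes[np.argmax(counts)]
    s = np.where(y == majority, 1.0, counts.max() / counts.min())  # per-sample weights
    HtS = H.T * s                                 # H^T S without forming the diagonal matrix
    beta = np.linalg.solve(HtS @ H + np.eye(n_hidden) / C, HtS @ y)
    return W, b, beta

def weighted_elm_predict(X, W, b, beta):
    return np.sign(np.tanh(X @ W + b) @ beta)
```

Up-weighting minority samples in the least-squares objective is what pushes the decision boundary away from the minority class, improving F-measure and G-mean at some cost in raw accuracy.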

20.
Digit recognition based on wavelet networks and multi-module networks    Cited 2 times (0 self-citations, 2 by others)
This paper studies a new digit recognition method that uses a wavelet neural network for feature extraction and a multi-module neural network as the pattern classifier. The wavelet neural network, which combines the function-approximation power of wavelet decomposition with the learning ability of artificial neural networks, has good feature-description properties and serves as the feature extraction tool. The multi-module network converts a k-class classification problem into k mutually independent two-class problems, decomposing a complex classification task into several simple ones; the modules run in parallel, each responsible for recognizing one pattern. A modified BP training method for this multi-module structure accelerates training and improves training accuracy, and the modules can be trained independently of one another. Training and testing on US NIST digit samples gave good results. The method can also be applied to broader planar shape recognition problems.
