Similar documents
20 similar documents retrieved.
1.
Extreme learning machine (ELM), which can be viewed as a variant of the Random Vector Functional Link (RVFL) network without the input–output direct connections, has been extensively used to create multi-layer (deep) neural networks. Such networks employ randomization-based autoencoders (AE) for unsupervised feature extraction followed by an ELM classifier for final decision making. Each randomization-based AE acts as an independent feature extractor, and a deep network is obtained by stacking several such AEs. Inspired by the better performance of RVFL over ELM, in this paper we propose several deep RVFL variants built on the framework of stacked autoencoders. Specifically, we introduce direct connections (feature reuse) from preceding layers to the layers ahead of them, as in the original RVFL network. Such connections help to regularize the randomization and also reduce the model complexity. Furthermore, we introduce a denoising criterion, recovering clean inputs from their corrupted versions, in the autoencoders to obtain better higher-level representations than ordinary autoencoders. Extensive experiments on several classification datasets show that the proposed deep networks achieve overall better and faster generalization than other relevant state-of-the-art deep neural networks.
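As a rough illustration of the direct connections (feature reuse) that distinguish RVFL from ELM, the following sketch fits a single RVFL layer in which the raw inputs are concatenated with random hidden features before the least-squares readout; the ridge penalty, activation, and variable names are my own choices, not the paper's.

```python
# Minimal RVFL sketch with input-output direct connections (illustrative only).
import numpy as np

def rvfl_fit(X, Y, n_hidden=100, reg=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random, fixed input weights
    b = rng.standard_normal(n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                            # randomized hidden features
    D = np.hstack([X, H])                             # direct connections: reuse the raw inputs
    beta = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ Y)
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    return np.hstack([X, np.tanh(X @ W + b)]) @ beta
```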

2.
Ensemble of online sequential extreme learning machine
Yuan Lan, Yeng Chai Soh, Guang-Bin Huang. Neurocomputing, 2009, 72(13-15): 3391.
Liang et al. [A fast and accurate online sequential learning algorithm for feedforward networks, IEEE Transactions on Neural Networks 17 (6) (2006) 1411–1423] proposed an online sequential learning algorithm called the online sequential extreme learning machine (OS-ELM), which can learn data one-by-one or chunk-by-chunk with fixed or varying chunk size. The same work shows that OS-ELM runs much faster and generalizes better than other popular sequential learning algorithms. However, we find that the stability of OS-ELM can be further improved. In this paper, we propose an ensemble of online sequential extreme learning machines (EOS-ELM) based on OS-ELM. The results show that EOS-ELM is more stable and accurate than the original OS-ELM.
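For context, a minimal sketch of the standard OS-ELM recursive update that EOS-ELM ensembles over (EOS-ELM trains several such learners and averages their outputs); the class layout and names are illustrative.

```python
# Sketch of the OS-ELM chunk update in recursive-least-squares form (illustrative).
import numpy as np

class OSELM:
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_in, n_hidden))   # fixed random input weights
        self.b = rng.standard_normal(n_hidden)           # fixed random biases
        self.P = None                                    # inverse correlation matrix
        self.beta = np.zeros((n_hidden, n_out))

    def _h(self, X):
        return np.tanh(X @ self.W + self.b)

    def init_phase(self, X0, T0):            # needs at least n_hidden samples (or add a ridge)
        H = self._h(X0)
        self.P = np.linalg.inv(H.T @ H)
        self.beta = self.P @ H.T @ T0

    def update(self, X, T):                  # learn one-by-one or chunk-by-chunk
        H = self._h(X)
        K = np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.P -= self.P @ H.T @ K @ H @ self.P
        self.beta += self.P @ H.T @ (T - H @ self.beta)
```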

3.
Online learning algorithms have been preferred in many applications due to their ability to learn from sequentially arriving data. One effective algorithm recently proposed for training single hidden-layer feedforward neural networks (SLFNs) is the online sequential extreme learning machine (OS-ELM), which can learn data one-by-one or chunk-by-chunk at fixed or varying sizes. It is based on the ideas of the extreme learning machine (ELM), in which the input weights and hidden-layer biases are chosen randomly and the output weights are then determined by a pseudo-inverse operation. The learning speed of this algorithm is extremely high. However, it does not generalize well on noisy data, and its parameters are difficult to initialize in a way that avoids singular and ill-posed problems. In this paper, we propose an improvement of OS-ELM based on a bi-objective optimization approach: it minimizes the empirical error while keeping the norm of the network weight vector small. Singular and ill-posed problems are overcome by Tikhonov regularization. The approach can also learn data one-by-one or chunk-by-chunk. Experimental results on benchmark datasets show the better generalization performance of the proposed approach.
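The bi-objective idea of trading empirical error against the norm of the output weights reduces, for a fixed hidden layer, to a Tikhonov-regularized least-squares solve; a minimal sketch, with the penalty lambda chosen by the user:

```python
# Tikhonov (ridge) regularized ELM readout: trades training error against weight norm.
import numpy as np

def regularized_readout(H, T, lam=1e-2):
    # beta = argmin ||H @ beta - T||^2 + lam * ||beta||^2
    return np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ T)
```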

4.

In the present article, delay differential equations and systems of delay differential equations are treated using feed-forward artificial neural networks. We have solved multiple problems using neural network architectures with different depths. The neural networks are trained using the extreme learning machine algorithm to satisfy the delay differential equations and the associated initial/boundary conditions. Further, numerical rates of convergence of the proposed algorithm are reported based on the variation of the error in the obtained solution for different numbers of training points. The emphasis is on analysing whether deeper network architectures trained with the extreme learning machine algorithm can outperform shallow architectures in approximating the solutions of delay differential equations.
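A minimal sketch (my own illustration, not the article's code) of the collocation idea for a linear delay differential equation: because the output weights enter linearly, satisfying the equation at collocation points reduces to one least-squares solve.

```python
# ELM-style collocation for y'(t) = -y(t - 1) on [0, 2] with history y(t) = 1 for t <= 0.
# Trial solution y(t) = 1 + t * h(t) @ beta satisfies the initial condition by construction.
import numpy as np

rng = np.random.default_rng(0)
n_hidden, tau = 50, 1.0
w = rng.standard_normal(n_hidden)
b = rng.standard_normal(n_hidden)

h  = lambda t: np.tanh(np.outer(t, w) + b)                    # hidden features
dh = lambda t: w * (1.0 - np.tanh(np.outer(t, w) + b) ** 2)   # their time derivative

t = np.linspace(0.0, 2.0, 200)
A = h(t) + t[:, None] * dh(t)            # contribution of y'(t), linear in beta
rhs = -np.ones_like(t)                   # from the constant history y(t - 1) = 1
late = t > tau                           # points where the delayed argument is positive
A[late] += (t[late] - tau)[:, None] * h(t[late] - tau)

beta, *_ = np.linalg.lstsq(A, rhs, rcond=None)
y = 1.0 + t * (h(t) @ beta)              # approximate solution on [0, 2]
```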


5.
The classification of bone marrow cells is of significant diagnostic value in medicine. Bone marrow cell images are first segmented and their features extracted; the resulting training set is used to train an extreme learning machine, which then classifies unknown samples. To address the performance instability of a single classifier, an extreme learning machine ensemble algorithm based on cellular automata is proposed. A cellular-automaton sampling strategy builds highly diverse training subsets, multiple classifiers learn in parallel, and the final decision is made by majority voting. Experimental results show that, compared with BP networks and support vector machines, the algorithm requires almost no parameter tuning, learns quickly, reaches a classification accuracy of 97.33%, and effectively overcomes the instability of neural network classifiers.
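A sketch of the ensemble-and-vote structure described above; plain bootstrap resampling stands in for the cellular-automaton sampling strategy, which is not reproduced here.

```python
# ELM ensemble with majority voting; bootstrap sampling is an illustrative stand-in.
import numpy as np

def train_elm(X, Y_onehot, n_hidden=200, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    beta = np.linalg.pinv(np.tanh(X @ W + b)) @ Y_onehot
    return W, b, beta

def train_ensemble(X, Y_onehot, n_models=9, seed=0):
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), len(X))        # bootstrap subset (stand-in for CA sampling)
        models.append(train_elm(X[idx], Y_onehot[idx], rng=rng))
    return models

def majority_vote(models, X):
    votes = np.stack([(np.tanh(X @ W + b) @ beta).argmax(axis=1) for W, b, beta in models])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```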

6.
Journal of Intelligent Manufacturing - In this paper, an enhanced random vector functional link network (RVFL) algorithm was employed to predict kerf quality indices during CO2 laser cutting of...

7.
Constrained by time efficiency, multivariate time series prediction algorithms often suffer from insufficient prediction accuracy. To address this, a time series prediction algorithm based on the graph Laplacian transform and the extreme learning machine is proposed. Semi-supervised features are extracted from the time series via the graph Laplacian transform, and the supervised and unsupervised features are fused through a scatter matrix. An online extreme learning machine is designed in which only the output weight matrix needs to be updated online to complete the learning of the network. The extracted features are used to train the extreme learning machine online, enabling real-time prediction of multivariate time series. Simulation experiments on multiple datasets show that the algorithm effectively improves prediction accuracy.
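As a hedged stand-in for the semi-supervised graph-Laplacian feature extraction and the online ELM described above, the following sketch derives unsupervised features from an (unnormalized) graph Laplacian and fits a batch ELM readout; the online update and the scatter-matrix fusion are omitted.

```python
# Laplacian-eigenmap style features followed by an ELM readout (illustrative only).
import numpy as np

def laplacian_features(X, n_feat=5, sigma=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))            # dense similarity graph
    L = np.diag(W.sum(axis=1)) - W                # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:n_feat + 1]                  # skip the trivial constant eigenvector

def elm_readout(features, targets, n_hidden=100, seed=0):
    rng = np.random.default_rng(seed)
    Wh = rng.standard_normal((features.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(features @ Wh + b)
    beta = np.linalg.pinv(H) @ targets
    return H @ beta, (Wh, b, beta)
```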

8.
Functional data learning is an extension of traditional data learning, that is, learning from data chosen from the Euclidean space $\mathbb{R}^{n}$ to a metric space. This paper focuses on functional data learning with generalized single-hidden-layer feedforward neural networks (GSLFNs) acting on some metric spaces. In addition, three learning algorithms, named the Hilbert parallel overrelaxation backpropagation (H-PORBP) algorithm, ν-generalized support vector regression (ν-GSVR) and the generalized extreme learning machine (G-ELM), are proposed to train the GSLFNs acting on these metric spaces. The experimental results on some metric spaces indicate that G-ELM with additive/RBF hidden nodes has a faster learning speed, better accuracy, and better stability than H-PORBP and ν-GSVR for training on functional data. The idea of G-ELM can be used to extend those improved extreme learning machines (ELMs) that act on the Euclidean space $\mathbb{R}^{n}$, such as the online sequential ELM, incremental ELM, pruning ELM and so on, to metric spaces.
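One way to picture a GSLFN acting on a metric space is an ELM whose RBF hidden nodes respond to distances under an arbitrary metric d; a sketch under that assumption (my illustration, not the paper's formulation):

```python
# ELM-style network with RBF hidden nodes defined through a user-supplied metric.
import numpy as np

def metric_elm_fit(X, Y, metric, n_hidden=50, seed=0):
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), n_hidden, replace=False)]   # random centres from the data
    widths = rng.uniform(0.5, 2.0, n_hidden)
    dists = np.array([[metric(x, c) for c in centres] for x in X])
    H = np.exp(-(dists ** 2) / widths ** 2)                    # RBF responses to metric distances
    beta = np.linalg.pinv(H) @ Y
    return centres, widths, beta
```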

9.
In this paper, we introduce a new method based on the Bernstein neural network model (BeNN) and the extreme learning machine algorithm to solve differential equations. In the proposed method, we develop a single-layer functional-link BeNN in which the hidden layer is eliminated by expanding the input pattern in Bernstein polynomials. The network parameters are obtained by solving a system of linear equations using the extreme learning machine algorithm. Finally, numerical experiments are carried out in MATLAB, and the results are compared with an existing method, demonstrating the feasibility and superiority of the proposed approach.
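A minimal sketch of the functional-link idea for a simple ODE, y'(t) = -y(t) with y(0) = 1: the hidden layer is replaced by a Bernstein polynomial expansion of the input, and the coefficients come from one least-squares (ELM-style) solve. The degree, collocation grid, and trial form are my own choices.

```python
# Bernstein functional-link network trained by a single least-squares solve.
import numpy as np
from scipy.special import comb

n = 10                                              # Bernstein degree
t = np.linspace(0.0, 1.0, 100)

def bernstein(t, n):
    k = np.arange(n + 1)
    return comb(n, k) * t[:, None] ** k * (1 - t[:, None]) ** (n - k)

def bernstein_deriv(t, n):
    B = bernstein(t, n - 1)
    D = np.zeros((len(t), n + 1))
    D[:, :-1] -= n * B                              # d/dt B_{k,n} = n (B_{k-1,n-1} - B_{k,n-1})
    D[:, 1:] += n * B
    return D

# trial solution y(t) = 1 + t * B(t) @ beta satisfies y(0) = 1 automatically
B, dB = bernstein(t, n), bernstein_deriv(t, n)
A = (1 + t)[:, None] * B + t[:, None] * dB          # residual of y' + y, linear in beta
beta, *_ = np.linalg.lstsq(A, -np.ones_like(t), rcond=None)
y = 1.0 + t * (B @ beta)                            # compare with exp(-t)
```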

10.
Application of the extreme learning machine to lithology identification
To address the slow training and difficult parameter selection of traditional support vector machines (SVM), lithology identification based on the extreme learning machine (ELM) is proposed. ELM is a new learning algorithm for single-hidden-layer feedforward neural networks (SLFNs) that both simplifies parameter selection and speeds up network training. After determining the optimal parameters, an ELM lithology classification model is built and its results are compared with those of SVM. Experimental results show that ELM attains a classification accuracy comparable to SVM with fewer neurons, its parameter selection is simpler, and its training time is substantially shorter, demonstrating the feasibility and effectiveness of applying ELM to lithology identification.

11.
Ning Wang, Meng Joo Er, Xianyao Meng. Neurocomputing, 2009, 72(16-18): 3818.
In this paper, we present a fast and accurate online self-organizing scheme for parsimonious fuzzy neural networks (FAOS-PFNN), in which a novel structure learning algorithm that incorporates a pruning strategy into new growth criteria is developed. Because pruning is built into the growth criteria, no separate pruning stage is needed; this speeds up the online learning process and yields a more parsimonious fuzzy neural network while achieving comparable performance and accuracy. The FAOS-PFNN starts with no hidden neurons and parsimoniously generates new hidden units according to the proposed growth criteria as learning proceeds. In the parameter learning phase, all the free parameters of the hidden units, whether newly created or already existing, are updated by the extended Kalman filter (EKF) method. The effectiveness and superiority of the FAOS-PFNN paradigm are demonstrated by comparison with other popular approaches such as the resource allocation network (RAN), RAN via the extended Kalman filter (RANEKF), the minimal resource allocation network (MRAN), the adaptive-network-based fuzzy inference system (ANFIS), orthogonal least squares (OLS), RBF-AFS, dynamic fuzzy neural networks (DFNN), generalized DFNN (GDFNN), generalized GAP-RBF (GGAP-RBF), the online sequential extreme learning machine (OS-ELM) and the self-organizing fuzzy neural network (SOFNN) on various benchmark problems in function approximation, nonlinear dynamic system identification, chaotic time-series prediction and real-world regression. Simulation results demonstrate that the proposed FAOS-PFNN algorithm achieves a faster learning speed and a more compact network structure with comparably high approximation and generalization accuracy.
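For orientation, a generic extended-Kalman-filter step for a scalar-output model's parameter vector, of the kind used in the parameter-learning phase; the paper's exact Jacobians, noise settings, and growth/pruning criteria are not reproduced.

```python
# Generic EKF update of a parameter vector theta for a scalar-output model (illustrative).
import numpy as np

def ekf_step(theta, P, x, y, model, jacobian, R=1e-2, Q=1e-6):
    e = y - model(theta, x)                 # innovation (prediction error)
    H = jacobian(theta, x)[None, :]         # 1 x n_params gradient of the output
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T / S                         # Kalman gain
    theta = theta + (K * e).ravel()
    P = P - K @ H @ P + Q * np.eye(len(theta))   # covariance update with process noise
    return theta, P
```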

12.
In modern wood-processing enterprises, solid wood boards are graded mainly by their defects and grain texture. To meet this need, a deep learning algorithm combining local binary patterns, a self-adaptive deep belief network, and a softmax classifier is proposed to classify the defects and textures of solid wood boards. Defect and texture features are first extracted from the boards; a deep belief network is then trained on the locally binarized features, with a self-adaptive learning-rate scheme used to accelerate convergence and reduce training time; finally, a softmax classifier outputs the classification results for common defects as well as straight and figured grain. Compared with several classic algorithms such as BP neural networks, support vector machines, and extreme learning machines, the deep belief network achieves an error rate of about 3.59% on defect and texture recognition, giving better recognition performance on solid wood board defects and textures.
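A minimal 3×3 local binary pattern (LBP) histogram, the first stage of the pipeline; the deep belief network and softmax stages are omitted here.

```python
# 3x3 LBP histogram as a simple texture descriptor (illustrative).
import numpy as np

def lbp_3x3(img):
    # compare each pixel's 8 neighbours against the centre pixel and pack the bits
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:-1, 1:-1]
    codes = np.zeros_like(centre, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (neighbour >= centre).astype(int) << bit
    return np.bincount(codes.ravel(), minlength=256)   # 256-bin LBP histogram
```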

13.
The extreme learning machine (ELM) is a novel single-hidden-layer feedforward neural network algorithm: training only requires choosing a suitable number of hidden nodes, the input weights and hidden-layer biases are assigned randomly, and the solution is obtained in one pass without iteration. Exploiting the strength of genetic algorithms in searching for optimal model parameters, the optimal ELM parameters are found and a passenger throughput prediction model for Chengdu Shuangliu International Airport is built; comparisons with support vector machines and BP neural networks examine the feasibility and advantages of the genetic-algorithm extreme learning machine (GA-ELM) for passenger throughput prediction. Simulation results show that GA-ELM is not only feasible but, compared with the original ELM, also has clear advantages in prediction accuracy and training speed.
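A hedged sketch of a genetic search over ELM hyper-parameters (hidden-node count and ridge penalty), scored by validation RMSE; the encoding, operators, and fitness below are illustrative rather than the article's.

```python
# Simple GA over (n_hidden, ridge penalty) for an ELM regressor (illustrative).
import numpy as np

rng = np.random.default_rng(0)

def elm_val_rmse(n_hidden, lam, Xtr, ytr, Xval, yval):
    W = rng.standard_normal((Xtr.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(Xtr @ W + b)
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ ytr)
    pred = np.tanh(Xval @ W + b) @ beta
    return np.sqrt(np.mean((pred - yval) ** 2))

def ga_search(Xtr, ytr, Xval, yval, pop=20, gens=30):
    genes = np.column_stack([rng.integers(10, 200, pop).astype(float),
                             10.0 ** rng.uniform(-4, 1, pop)])
    for _ in range(gens):
        fit = np.array([elm_val_rmse(int(n), lam, Xtr, ytr, Xval, yval) for n, lam in genes])
        parents = genes[np.argsort(fit)[:pop // 2]]                 # truncation selection
        idx = rng.integers(0, len(parents), (pop - len(parents), 2))
        children = np.column_stack([parents[idx[:, 0], 0],          # crossover: mix genes
                                    parents[idx[:, 1], 1]])
        children[:, 0] = np.clip(children[:, 0] + rng.normal(0, 5, len(children)), 10, 200)
        children[:, 1] *= 10.0 ** rng.normal(0, 0.1, len(children)) # mutation
        genes = np.vstack([parents, children])                      # elitism: keep the parents
    best = genes[np.argmin([elm_val_rmse(int(n), lam, Xtr, ytr, Xval, yval)
                            for n, lam in genes])]
    return int(best[0]), best[1]
```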

14.
A face recognition method combining convolutional neural networks and extreme learning machines
A convolutional neural network is a good feature extractor but not the best classifier, whereas an extreme learning machine classifies well but cannot learn complex features. Exploiting the strengths and weaknesses of both, a new face recognition method is proposed that combines them: the convolutional neural network extracts facial features, and the extreme learning machine performs recognition based on these features. A method of fixing some of the network's convolution kernels is also proposed to reduce the number of trainable parameters and thereby improve recognition accuracy. Tests on the ORL and XM2VTS face databases show that the combined method effectively improves the face recognition rate, and that fixing part of the convolution kernels is advantageous when training samples are scarce.
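A sketch of the combination under a simplifying assumption: a bank of fixed (here random, untrained) convolution kernels extracts features, and an ELM readout classifies them. The article trains a CNN and fixes only part of its kernels.

```python
# Fixed convolutional feature extractor + ELM classifier (illustrative).
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
kernels = rng.standard_normal((8, 5, 5))                  # fixed, untrained 5x5 filter bank

def conv_features(img):                                   # img: 2-D grayscale array
    maps = [np.maximum(convolve2d(img, k, mode='valid'), 0) for k in kernels]  # conv + ReLU
    return np.concatenate([m[::4, ::4].ravel() for m in maps])                 # stride-4 subsampling

def fit_elm_classifier(F, Y_onehot, n_hidden=300):        # F: stacked feature vectors
    W = rng.standard_normal((F.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    beta = np.linalg.pinv(np.tanh(F @ W + b)) @ Y_onehot
    return lambda Fnew: (np.tanh(Fnew @ W + b) @ beta).argmax(axis=1)
```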

15.
In this paper, we propose an extreme learning machine with a tunable activation function (TAF-ELM) learning algorithm, which determines its activation functions dynamically by means of a differential evolution algorithm based on the input data. The main objective is to overcome ELM's dependence on a fixed activation-function slope. We mainly consider benchmark problems in function approximation and pattern classification. Compared with the ELM and E-ELM learning algorithms with the same network size or a compact network configuration, the proposed algorithm achieves improved generalization performance with good accuracy. In addition, the proposed algorithm also performs very well among TAF neural network learning algorithms.
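A simplified stand-in for the tunable-activation idea: differential evolution tunes a single slope a in g(x) = tanh(a·x), scoring each candidate by ELM validation error (the article tunes richer per-node activation parameters).

```python
# Differential evolution over one activation slope for an ELM (illustrative).
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)

def make_objective(Xtr, ytr, Xval, yval, n_hidden=100):
    W = rng.standard_normal((Xtr.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    def objective(params):
        a = params[0]                                   # candidate slope
        H = np.tanh(a * (Xtr @ W + b))
        beta = np.linalg.pinv(H) @ ytr
        pred = np.tanh(a * (Xval @ W + b)) @ beta
        return np.mean((pred - yval) ** 2)              # validation error as fitness
    return objective

# result = differential_evolution(make_objective(Xtr, ytr, Xval, yval), bounds=[(0.1, 5.0)])
```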

16.
This paper proposes a novel cross-correlation neural network (CNN) model for finding the principal singular subspace of a cross-correlation matrix between two high-dimensional data streams. We introduce a novel nonquadratic criterion (NQC) for searching for the optimum weights of two linear neural networks (LNNs). The NQC exhibits a single global minimum, attained if and only if the weight matrices of the left and right neural networks span the left and right principal singular subspaces of the cross-correlation matrix, respectively. The other stationary points of the NQC are (unstable) saddle points. We develop an adaptive algorithm based on the NQC for tracking the principal singular subspace of a cross-correlation matrix between two high-dimensional vector sequences. The NQC algorithm provides fast online learning of the optimum weights of the two LNNs. The global asymptotic stability of the NQC algorithm is analyzed. The NQC algorithm has several key advantages, such as faster convergence, which is illustrated through simulations.
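For orientation only (not the paper's adaptive NQC algorithm), the quantity being tracked is the rank-r principal singular subspace of the sample cross-correlation matrix between the two streams:

```python
# Batch reference: principal singular subspace of a sample cross-correlation matrix.
import numpy as np

def principal_singular_subspace(X, Y, r):
    # X: (T, n), Y: (T, m) -- paired, zero-mean samples of the two streams
    C = X.T @ Y / len(X)                 # sample cross-correlation matrix
    U, s, Vt = np.linalg.svd(C)
    return U[:, :r], Vt[:r].T            # left and right principal subspaces
```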

17.
A classification method based on multiple evolutionary neural networks
商琳, 王金根, 姚望舒, 陈世福. 软件学报 (Journal of Software), 2005, 16(9): 1577-1583.
Classification is an important topic in data mining and machine learning. A classification approach based on multiple evolutionary neural networks, CABEN (classification approach based on evolutionary neural networks), is proposed. Several three-layer feedforward neural networks are trained simultaneously using an improved evolution strategy and the Levenberg-Marquardt method. After the individual classification models are trained, the data to be recognized are fed into each of them, and the final classification is decided by absolute majority voting. Experimental results show that the method classifies data well and that, compared with traditional neural network, Bayesian, and decision-tree methods, it...

18.
Taking into account the complexity and heterogeneity of heterogeneous information networks, a link prediction method based on graph convolutional neural networks, HeGCNE (heterogeneous graph convolution neural network embedding), is proposed. To address the shortcomings of the classic layer-wise propagation rule of graph convolutional networks, an improved propagation rule is proposed to learn representations of the heterogeneous nodes and fuse...
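For context, the classic GCN layer-wise propagation rule that the paper improves on, H' = σ(D̂^{-1/2}(A+I)D̂^{-1/2} H W), in a minimal dense form for a homogeneous graph; the improved heterogeneous rule is not reproduced here.

```python
# Classic GCN layer: symmetric-normalized adjacency with self-loops, then a linear map + ReLU.
import numpy as np

def gcn_layer(A, H, W):
    A_hat = A + np.eye(len(A))                      # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    D_inv_sqrt = np.diag(d_inv_sqrt)
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0)   # ReLU activation
```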

19.
张明洋, 闻英友, 杨晓陶, 赵宏. 控制与决策 (Control and Decision), 2017, 32(10): 1887-1893.
To address the low learning efficiency and poor accuracy of the online sequential extreme learning machine (OS-ELM) on incremental data, a weighted online sequential extreme learning machine (WOS-ELM) based on incremental weighted averaging is proposed. The cost function is a weighted combination of the training residual on the original data and the training residual on the incremental data, from which a training model that balances the original and incremental data is derived; the original data are used to dampen fluctuations in the incremental data, giving the online extreme learning machine better stability and thereby improving the learning efficiency and accuracy of the algorithm. Simulation results show that the proposed WOS-ELM algorithm achieves good prediction accuracy and generalization on incremental data.
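A sketch of my reading of the weighted cost: the output weights minimize a weighted sum of residuals on the original chunk (H0, T0) and the incremental chunk (H1, T1); the closed form and the weights w0, w1 below are illustrative, not the paper's derivation.

```python
# Weighted least-squares readout balancing original and incremental chunks (illustrative).
import numpy as np

def weighted_output_weights(H0, T0, H1, T1, w0=0.7, w1=0.3, lam=1e-6):
    # beta = argmin  w0*||H0 beta - T0||^2 + w1*||H1 beta - T1||^2 + lam*||beta||^2
    n = H0.shape[1]
    A = w0 * H0.T @ H0 + w1 * H1.T @ H1 + lam * np.eye(n)
    rhs = w0 * H0.T @ T0 + w1 * H1.T @ T1
    return np.linalg.solve(A, rhs)
```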

20.
Zhou Zhiyu, Liu Dexin, Wang Yaming, Zhu Zefei. Multimedia Tools and Applications, 2022, 81(18): 25007-25027.
Multimedia Tools and Applications - To enhance the accuracy of illumination estimation, this study proposes illumination correction using a modified random vector functional link (RVFL) algorithm...
