Similar Articles
20 similar articles found (search time: 31 ms)
1.
In recent years, automatic target recognition has become an important topic in the radar literature. This study introduces an automatic target recognition (ATR) system that uses radar target echo signals from high range resolution (HRR) radars. It combines adaptive feature extraction with classification using optimal wavelet entropy parameter values; the features are extracted from the radar target echo signals. A genetic wavelet extreme learning machine classifier model (GAWELM) is developed for target recognition. GAWELM consists of three stages: a genetic algorithm, wavelet analysis, and an extreme learning machine (ELM) classifier. Previous studies of radar target recognition have shown that the learning speed of feedforward networks is generally far slower than required, which has been a major drawback, for two main reasons: (1) slow gradient-based learning algorithms are commonly used to train the networks, and (2) all the parameters of the networks are tuned iteratively by such algorithms.
In this paper, extreme learning machine (ELM), a learning algorithm for single-hidden-layer feedforward networks (SLFNs) (Ahern et al., 1989; Al-Otum and Al-Sowayan, 2011; Avci et al., 2005a, 2005b; Biswal et al., 2009; Frigui et al., in press; Cao et al., 2010; Guo et al., 2011; Famili et al., 1997; Han and Huang, 2006; Huang et al., 2011; Huang et al., 2006; Huang and Siew, 2005; Huang et al., 2009; Jiang et al., 2011; Kubrusly and Levan, 2009; Le et al., 2011; Lhermitte et al., in press; Martínez-Martínez et al., 2011; Matlab, 2011; Nelson et al., 2002; Nejad and Zakeri, 2011; Tabib et al., 2009; Tang et al., 2011) that randomly chooses the hidden nodes and analytically determines the output weights, is used to eliminate these disadvantages of feedforward networks in the target recognition area. A genetic algorithm (GA) stage then selects the feature extraction method and finds the optimal wavelet entropy parameter values: the best of four candidate feature extraction methods is obtained by the GA. The four feature extraction methods in the proposed GAWELM model are the discrete wavelet transform (DWT), discrete wavelet transform–short-time Fourier transform (DWT–STFT), discrete wavelet transform–Born–Jordan time–frequency transform (DWT–BJTFT), and discrete wavelet transform–Choi–Williams time–frequency transform (DWT–CWTFT). The discrete wavelet transform stage performs feature extraction in the time–frequency domain; it comprises the discrete wavelet transform itself and the calculation of discrete wavelet entropies. The ELM classifier both evaluates the fitness function of the genetic algorithm and classifies the radar targets. The performance of the developed GAWELM radar target recognition system is examined using noisy real radar target echo signals.
The application results show that the developed GAWELM system is effective in classifying real radar target echo signals; its correct classification rate is about 90% for the radar target types used in this study.

2.
Extreme learning machine (ELM) [G.-B. Huang, Q.-Y. Zhu, C.-K. Siew, Extreme learning machine: a new learning scheme of feedforward neural networks, in: Proceedings of the International Joint Conference on Neural Networks (IJCNN2004), Budapest, Hungary, 25–29 July 2004], a novel learning algorithm much faster than traditional gradient-based learning algorithms, was recently proposed for single-hidden-layer feedforward neural networks (SLFNs). However, ELM may require a larger number of hidden neurons due to the random determination of the input weights and hidden biases. In this paper, a hybrid learning algorithm is proposed that uses a differential evolutionary algorithm to select the input weights and the Moore–Penrose (MP) generalized inverse to analytically determine the output weights. Experimental results show that this approach achieves good generalization performance with much more compact networks.
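A minimal sketch of the hybrid idea described above: a differential evolution (DE/rand/1/bin) loop searches over the input weights and biases, while the output weights are always obtained analytically from the Moore–Penrose pseudoinverse. Toy data, illustrative DE settings, and our own function names; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(w_flat, X, T, n_hidden):
    # Decode a candidate: input weights W and biases b, then solve the
    # output weights analytically (Moore-Penrose pseudoinverse) and
    # return the training RMSE as the fitness value.
    d = X.shape[1]
    W = w_flat[: d * n_hidden].reshape(d, n_hidden)
    b = w_flat[d * n_hidden:]
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ T
    return np.sqrt(np.mean((H @ beta - T) ** 2))

# Toy regression target and a bare-bones DE/rand/1/bin loop
X = np.linspace(-1, 1, 100).reshape(-1, 1)
T = X ** 2
n_hidden = 5
dim = 1 * n_hidden + n_hidden            # input weights + biases
pop = rng.uniform(-1, 1, (20, dim))
scores = np.array([fitness(p, X, T, n_hidden) for p in pop])
F, CR = 0.5, 0.9                          # mutation factor, crossover rate
for _ in range(30):
    for i in range(len(pop)):
        idx = rng.choice(len(pop), 3, replace=False)
        x1, x2, x3 = pop[idx]
        mutant = x1 + F * (x2 - x3)
        trial = np.where(rng.random(dim) < CR, mutant, pop[i])
        s = fitness(trial, X, T, n_hidden)
        if s < scores[i]:                 # greedy selection
            pop[i], scores[i] = trial, s
best_rmse = scores.min()
```

Note that every fitness evaluation solves the output weights in closed form, so DE only has to search the (much smaller) space of hidden-layer parameters.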

3.
Convex incremental extreme learning machine
Guang-Bin Huang, Lei Chen. Neurocomputing, 2007, 70(16–18): 3056
Unlike conventional neural network theories and implementations, Huang et al. [Universal approximation using incremental constructive feedforward networks with random hidden nodes, IEEE Transactions on Neural Networks 17(4) (2006) 879–892] have recently proposed a new theory showing that single-hidden-layer feedforward networks (SLFNs) with randomly generated additive or radial basis function (RBF) hidden nodes (according to any continuous sampling distribution) can work as universal approximators, and that the resulting incremental extreme learning machine (I-ELM) outperforms many popular learning algorithms. I-ELM randomly generates the hidden nodes and analytically calculates the output weights of SLFNs; however, it does not recalculate the output weights of all the existing nodes when a new node is added. This paper shows that, while retaining the same simplicity, the convergence rate of I-ELM can be further improved by recalculating the output weights of the existing nodes, based on a convex optimization method, whenever a new hidden node is randomly added. Furthermore, we show that given a type of piecewise continuous computational hidden nodes (possibly not neural-alike nodes), if SLFNs can work as universal approximators with adjustable hidden node parameters, then from a function approximation point of view the hidden node parameters of such “generalized” SLFNs (including sigmoid networks, RBF networks, trigonometric networks, threshold networks, fuzzy inference systems, fully complex neural networks, high-order networks, ridge polynomial networks, wavelet networks, etc.) can actually be randomly generated according to any continuous sampling distribution. In theory, the parameters of these SLFNs can be analytically determined by ELM instead of being tuned.
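The I-ELM construction described above can be illustrated as follows: each randomly generated node receives the output weight that best reduces the current residual, and the weights of existing nodes are never recomputed. A toy sketch with a 1-D target chosen for illustration; variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(5)
X = np.linspace(-1, 1, 200).reshape(-1, 1)
f = np.sin(2 * X)                 # target function samples

# I-ELM: add random hidden nodes one at a time; each new node's output
# weight is beta_n = <e, h_n> / ||h_n||^2, where e is the current
# residual.  Existing weights stay frozen, so the residual norm can
# only decrease as nodes are added.
e = f.copy()                      # residual error
y = np.zeros_like(f)              # network output so far
for _ in range(500):
    w, b = rng.standard_normal(), rng.standard_normal()
    h = np.tanh(w * X + b)        # new random additive node
    beta = (h * e).sum() / (h * h).sum()
    y += beta * h
    e = f - y
rmse = np.sqrt(np.mean(e ** 2))
```

Because each weight is the least-squares projection of the residual onto the new node's output, the residual shrinks monotonically, which is the basic convergence mechanism the convex variant then accelerates.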

4.
Recently, a novel learning algorithm for single-hidden-layer feedforward neural networks (SLFNs) named the extreme learning machine (ELM) was proposed by Huang et al. The essence of ELM is that the learning parameters of the hidden nodes, including input weights and biases, are randomly assigned and need not be tuned, while the output weights can be determined analytically by a simple generalized inverse operation. The only parameter that needs to be defined is the number of hidden nodes. Compared with other traditional learning algorithms for SLFNs, ELM provides much faster learning, better generalization performance, and requires the least human intervention. This paper first gives a brief review of ELM, describing its principle and algorithm. It then focuses on improved methods and typical variants of ELM, in particular incremental ELM, pruning ELM, error-minimized ELM, two-stage ELM, online sequential ELM, evolutionary ELM, voting-based ELM, ordinal ELM, fully complex ELM, and symmetric ELM. Next, it summarizes applications of ELM in classification, regression, function approximation, pattern recognition, forecasting, diagnosis, and so on. Finally, it discusses several open issues of ELM that may be worth exploring in the future.
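The core training procedure summarized above (random hidden-layer parameters, output weights solved in closed form via the Moore–Penrose generalized inverse) can be sketched in a few lines of NumPy. This is a generic illustration on toy data, not code from any of the cited papers; the tanh activation and all names are our choices.

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=0):
    """Train a single-hidden-layer ELM.

    Hidden-layer input weights W and biases b are drawn at random and
    never tuned; only the output weights beta are computed, in closed
    form, via the Moore-Penrose pseudoinverse of the hidden-layer
    output matrix H.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)            # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T      # analytic least-squares solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: learn y = sin(x) on [0, pi]
X = np.linspace(0, np.pi, 200).reshape(-1, 1)
T = np.sin(X)
W, b, beta = elm_train(X, T, n_hidden=30)
err = np.max(np.abs(elm_predict(X, W, b, beta) - T))
```

The only hyperparameter exposed here is `n_hidden`, mirroring the claim that the number of hidden nodes is the single parameter a user must define.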

5.
A study on the effectiveness of extreme learning machine
Extreme learning machine (ELM), proposed by Huang et al., has been shown to be a promising learning algorithm for single-hidden-layer feedforward neural networks (SLFNs). Nevertheless, because of the random choice of input weights and biases, the ELM algorithm sometimes yields a hidden-layer output matrix H of the SLFN that is not of full column rank, which lowers the effectiveness of ELM. This paper discusses the effectiveness of ELM and proposes an improved algorithm, called EELM, that makes a proper selection of the input weights and biases before calculating the output weights, ensuring in theory that H has full column rank. This improves to some extent the learning performance (testing accuracy, prediction accuracy, learning time) and the robustness of the networks. Experimental results on both benchmark function approximation and real-world problems, including classification and regression applications, show the good performance of EELM.

6.
Ensemble of online sequential extreme learning machine
Yuan Lan, Yeng Chai Soh, Guang-Bin Huang. Neurocomputing, 2009, 72(13–15): 3391
Liang et al. [A fast and accurate online sequential learning algorithm for feedforward networks, IEEE Transactions on Neural Networks 17(6) (2006) 1411–1423] proposed an online sequential learning algorithm called the online sequential extreme learning machine (OS-ELM), which can learn data one-by-one or chunk-by-chunk with fixed or varying chunk size. They showed that OS-ELM runs much faster and provides better generalization performance than other popular sequential learning algorithms. However, we find that the stability of OS-ELM can be further improved. In this paper, we propose an ensemble of online sequential extreme learning machines (EOS-ELM) based on OS-ELM. The results show that EOS-ELM is more stable and accurate than the original OS-ELM.
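The chunk-by-chunk learning OS-ELM performs is essentially recursive least squares over a fixed random hidden layer. A compact sketch under that reading follows (toy data; the small ridge term in the initial phase is our addition for numerical safety, and all names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)

# Fixed random hidden layer, shared by all sequential updates
n_in, n_hidden = 1, 20
W = rng.standard_normal((n_in, n_hidden))
b = rng.standard_normal(n_hidden)

def phi(Z):
    return np.tanh(Z @ W + b)

# Initial batch phase: beta0 = P0 H0^T T0 with P0 = (H0^T H0)^-1
X0 = rng.uniform(-1, 1, (50, 1))
T0 = np.sin(3 * X0)
H0 = phi(X0)
P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(n_hidden))
beta = P @ H0.T @ T0

# Sequential phase: recursive least-squares update per arriving chunk
for _ in range(10):
    Xk = rng.uniform(-1, 1, (10, 1))
    Tk = np.sin(3 * Xk)
    Hk = phi(Xk)
    K = P @ Hk.T @ np.linalg.inv(np.eye(len(Xk)) + Hk @ P @ Hk.T)
    P = P - K @ Hk @ P
    beta = beta + P @ Hk.T @ (Tk - Hk @ beta)

# Evaluate the sequentially trained model on a dense grid
Xt = np.linspace(-1, 1, 100).reshape(-1, 1)
rmse = np.sqrt(np.mean((phi(Xt) @ beta - np.sin(3 * Xt)) ** 2))
```

The EOS-ELM idea would then run several such learners (different random W, b) and average their predictions to reduce the variance the abstract refers to as instability.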

7.
This paper investigates the problem of the pth moment exponential stability for a class of stochastic recurrent neural networks with Markovian jump parameters. With the help of a Lyapunov function, stochastic analysis techniques, a generalized Halanay inequality and the Hardy inequality, some novel sufficient conditions on the pth moment exponential stability of the considered system are derived. The results obtained in this paper are completely new and complement and improve some of the previously known results (Liao and Mao, Stoch Anal Appl, 14:165–185, 1996; Wan and Sun, Phys Lett A, 343:306–318, 2005; Hu et al., Chaos Solitons Fractals, 27:1006–1010, 2006; Sun and Cao, Nonlinear Anal Real, 8:1171–1185, 2007; Huang et al., Inf Sci, 178:2194–2203, 2008; Wang et al., Phys Lett A, 356:346–352, 2006; Peng and Liu, Neural Comput Appl, 20:543–547, 2011). Moreover, a numerical example is provided to demonstrate the effectiveness and applicability of the theoretical results.

8.
It is well known that single-hidden-layer feedforward networks (SLFNs) with additive models are universal approximators. However, training these models was slow until the birth of the extreme learning machine (ELM) [Huang et al., Neurocomputing 70(1–3):489–501 (2006)] and its later improvements. Before ELM, the fastest algorithms for efficiently training SLFNs were gradient-based ones, which must be applied iteratively until a proper model is obtained. This slow convergence means that SLFNs are not used as widely as they could be, despite their overall good performance. ELM made SLFNs a suitable option for classifying large numbers of patterns in a short time. Until now, the hidden nodes have been randomly initialized and, in some approaches, tuned. This paper proposes a deterministic algorithm for initializing any hidden node with an additive activation function to be trained with ELM. Our algorithm uses information retrieved from principal component analysis to fit the hidden nodes. This approach considerably decreases computational cost compared with later ELM improvements while surpassing their performance.
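One plausible reading of a deterministic, PCA-based initialization is to take the principal directions of the input data as the hidden-node input weights and then solve the output weights as in standard ELM. The sketch below follows that reading on synthetic data; the paper's exact fitting procedure may well differ.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic correlated inputs and a smooth target
X = rng.normal(0, 1, (100, 5)) @ rng.normal(0, 1, (5, 5))
T = np.sin(X[:, :1])

# Deterministic hidden-node initialisation: use the principal
# directions of the (centred) input data as the input weights,
# instead of drawing them at random.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
W = Vt.T                        # columns = principal directions
H = np.tanh(Xc @ W)             # hidden-layer output matrix
beta = np.linalg.pinv(H) @ T    # output weights, as in standard ELM
rmse = np.sqrt(np.mean((H @ beta - T) ** 2))
```

Unlike random initialization, this construction is reproducible: the hidden layer is fully determined by the training inputs.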

9.
A comparative study of extreme learning machine and support vector machine for reservoir permeability prediction
The extreme learning machine (ELM) is a simple, easy-to-use, and effective learning algorithm for single-hidden-layer feedforward neural networks (SLFNs). Traditional neural network learning algorithms (such as BP) require many manually set training parameters and easily converge to local optima. ELM requires setting only the number of hidden-layer nodes: during execution it adjusts neither the input weights nor the hidden-unit biases, and it produces a unique optimal solution, giving it fast learning speed and good generalization. This paper applies ELM to reservoir permeability prediction and, through a comparison with the support vector machine (SVM), analyzes its feasibility and advantages for this task. Experimental results show that ELM achieves prediction accuracy close to that of SVM while holding a clear advantage in parameter selection and learning speed.

10.
Blind signatures and ring signatures are two signature schemes with privacy concerns. Zhang [Jianhong Zhang, Linkability analysis of some blind signature schemes, in: International Conference on Computational Intelligence and Security 2006, IEEE, vol. 2, 2006, pp. 1367–1370, available at http://dx.doi.org/10.1109/ICCIAS.2006.295283] analyzed the unlinkability of the schemes of Zhang and Kim [Fangguo Zhang, Kwangjo Kim, ID-based blind signature and ring signature from pairings, in: Yuliang Zheng (Ed.), Advances in Cryptology — ASIACRYPT 2002, 8th International Conference on the Theory and Application of Cryptology and Information Security, Queenstown, New Zealand, December 1–5, 2002, Proceedings, Lecture Notes in Computer Science, vol. 2501, Springer, 2002, pp. 533–547], Huang et al. [Zhenjie Huang, Kefei Chen, Yumin Wang, Efficient identity-based signatures and blind signatures, in: Yvo Desmedt, Huaxiong Wang, Yi Mu, Yongqing Li (Eds.), Cryptology and Network Security, 4th International Conference, CANS 2005, Xiamen, China, December 14–16, 2005, Proceedings, Lecture Notes in Computer Science, vol. 3810, Springer, 2005, pp. 120–133] and Wu et al. [Qianhong Wu, Willy Susilo, Yi Mu, Fangguo Zhang, Efficient partially blind signatures with provable security, in: Osvaldo Gervasi, Marina L. Gavrilova (Eds.), Computational Science and Its Applications — ICCSA 2007, International Conference, Kuala Lumpur, Malaysia, August 26–29, 2007, Proceedings, Part III, Lecture Notes in Computer Science, vol. 4707, Springer, 2007, pp. 1096–1105], and claimed that they are in fact linkable. On the other hand, Gamage et al. [Chandana Gamage, Ben Gras, Bruno Crispo, Andrew S. Tanenbaum, An identity-based ring signature scheme with enhanced privacy, Securecomm and Workshops 2006, IEEE, 2006, pp. 1–5, available at http://dx.doi.org/10.1109/SECCOMW.2006.359554] claimed that the scheme of Chow et al. [Sherman S.M. Chow, Siu-Ming Yiu, Lucas Chi Kwong Hui, Efficient identity based ring signature, in: John Ioannidis, Angelos D. Keromytis, Moti Yung (Eds.), Applied Cryptography and Network Security, Third International Conference, ACNS 2005, New York, NY, USA, June 7–10, 2005, Proceedings, Lecture Notes in Computer Science, vol. 3531, 2005, pp. 499–512] is vulnerable to a key exposure attack. This paper shows that all these claims are incorrect. Furthermore, we show that the scheme of Gamage et al., which aimed to provide enhanced privacy, actually offers a reduced level of privacy. We hope this work pinpoints the standard one should use when analyzing the unlinkability of blind signatures and the anonymity of ring signatures.

11.
According to conventional neural network theories, single-hidden-layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes are universal approximators when all the parameters of the networks are allowed to be adjusted. However, as observed in most neural network implementations, tuning all the parameters may make learning complicated and inefficient, and it may be difficult to train networks with nondifferentiable activation functions such as threshold networks. Unlike conventional neural network theories, this paper proves, by an incremental constructive method, that in order to let SLFNs work as universal approximators one may simply randomly choose the hidden nodes and then only needs to adjust the output weights linking the hidden layer and the output layer. In such SLFN implementations, the activation functions for additive nodes can be any bounded nonconstant piecewise continuous functions g: R → R, and the activation functions for RBF nodes can be any integrable piecewise continuous functions g: R → R with ∫R g(x) dx ≠ 0. The proposed incremental method is efficient not only for SLFNs with continuous (including nondifferentiable) activation functions but also for SLFNs with piecewise continuous (such as threshold) activation functions. Compared with other popular methods, such a network is fully automatic: users need not intervene in the learning process by manually tuning control parameters.

12.
One of the open problems in neural network research is how to determine network architectures automatically for given applications. In this brief, we propose a simple and efficient approach for automatically determining the number of hidden nodes in generalized single-hidden-layer feedforward networks (SLFNs), which need not be neural-alike. This approach, referred to as the error-minimized extreme learning machine (EM-ELM), can add random hidden nodes to SLFNs one by one or group by group (with varying group size). During the growth of the network, the output weights are updated incrementally. The convergence of this approach is also proved in this brief. Simulation results demonstrate that our new approach is much faster than other sequential/incremental/growing algorithms while achieving good generalization performance.
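The growing scheme can be sketched as follows. For clarity this toy version recomputes the output weights with a full pseudoinverse after each added node; EM-ELM proper updates them incrementally, which is where its speed advantage comes from. All names and the stopping target are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
X = np.linspace(-1, 1, 150).reshape(-1, 1)
T = np.sin(4 * X)

# Grow the hidden layer node by node until the training RMSE target is
# met (or a node budget is exhausted), letting the data decide the
# architecture rather than fixing it in advance.
H = np.empty((len(X), 0))        # hidden-layer output matrix, 0 nodes
target, max_nodes = 0.05, 100
for n in range(1, max_nodes + 1):
    w = rng.standard_normal((1, 1))
    b = rng.standard_normal()
    H = np.hstack([H, np.tanh(X @ w + b)])   # append one random node
    beta = np.linalg.pinv(H) @ T             # refit output weights
    rmse = np.sqrt(np.mean((H @ beta - T) ** 2))
    if rmse <= target:
        break
```

After the loop, `n` is the automatically determined hidden-layer size for this error target.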

13.
Application of extreme learning machine to lithology identification
To address the slow training and difficult parameter selection of the traditional support vector machine (SVM), lithology identification based on the extreme learning machine (ELM) is proposed. ELM is a new learning algorithm for single-hidden-layer feedforward neural networks (SLFNs) that both simplifies parameter selection and speeds up network training. After determining the optimal parameters, an ELM lithology classification model was built and its results were compared with those of SVM. Experimental results show that ELM attains classification accuracy comparable to SVM with fewer neurons, that its parameter selection is simpler, and that its training time is much lower, demonstrating the feasibility and effectiveness of applying ELM to lithology identification.

14.
The problem of mean square exponential stability for a class of impulsive stochastic fuzzy cellular neural networks with distributed delays and reaction–diffusion terms is investigated in this paper. By using the properties of M-cone, eigenspace of the spectral radius of nonnegative matrices, Lyapunov functional, Itô’s formula and inequality techniques, several new sufficient conditions guaranteeing the mean square exponential stability of its equilibrium solution are obtained. The derived results are less conservative than the results recently presented in Wang and Xu (Chaos Solitons Fractals 42:2713–2721, 2009), Zhang and Li (Stability analysis of impulsive stochastic fuzzy cellular neural networks with time varying delays and reaction–diffusion terms. World Academy of Science, Engineering and Technology 2010), Huang (Chaos Solitons Fractals 31:658–664, 2007), and Wang (Chaos Solitons Fractals 38:878–885, 2008). In fact, the systems discussed in Wang and Xu (Chaos Solitons Fractals 42:2713–2721, 2009), Zhang and Li (Stability analysis of impulsive stochastic fuzzy cellular neural networks with time varying delays and reaction–diffusion terms. World Academy of Science, Engineering and Technology 2010), Huang (Chaos Solitons Fractals 31:658–664, 2007), and Wang (Chaos Solitons Fractals 38:878–885, 2008) are special cases of ours. Two examples are presented to illustrate the effectiveness and efficiency of the results.

15.
Extreme learning machine for regression and multiclass classification
Due to the simplicity of their implementations, the least squares support vector machine (LS-SVM) and the proximal support vector machine (PSVM) have been widely used in binary classification applications. The conventional LS-SVM and PSVM cannot be used directly in regression and multiclass classification applications, although variants of LS-SVM and PSVM have been proposed to handle such cases. This paper shows that both LS-SVM and PSVM can be simplified further and that a unified learning framework of LS-SVM, PSVM, and other regularization algorithms, referred to as the extreme learning machine (ELM), can be built. ELM works for "generalized" single-hidden-layer feedforward networks (SLFNs), but the hidden layer (also called the feature mapping) in ELM need not be tuned. Such SLFNs include but are not limited to SVMs, polynomial networks, and conventional feedforward neural networks. This paper shows the following: 1) ELM provides a unified learning platform with a widespread type of feature mappings and can be applied directly in regression and multiclass classification applications; 2) from the optimization-method point of view, ELM has milder optimization constraints than LS-SVM and PSVM; 3) in theory, compared with ELM, LS-SVM and PSVM achieve suboptimal solutions and require higher computational complexity; and 4) in theory, ELM can approximate any target continuous function and classify any disjoint regions. As verified by the simulation results, ELM tends to have better scalability and achieves similar (for regression and binary-class cases) or much better (for multiclass cases) generalization performance at much faster learning speed (up to thousands of times) than traditional SVM and LS-SVM.
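In the unified, regularized formulation, the output weights solve β = (I/C + HᵀH)⁻¹HᵀT, and multiclass classification is handled through one-hot targets with an argmax over the network outputs. A small sketch on synthetic three-class data; C, the sizes, and all names are arbitrary illustrative choices, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(4)

def relm_train(X, T, n_hidden, C=1e3):
    # Regularized ELM: beta = (I/C + H^T H)^-1 H^T T
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.solve(np.eye(n_hidden) / C + H.T @ H, H.T @ T)
    return W, b, beta

# Three well-separated Gaussian clusters; one-hot target matrix T
X = np.vstack([rng.normal(m, 0.3, (40, 2)) for m in (-1.0, 0.0, 1.0)])
y = np.repeat([0, 1, 2], 40)
T = np.eye(3)[y]

W, b, beta = relm_train(X, T, n_hidden=40)
pred = np.argmax(np.tanh(X @ W + b) @ beta, axis=1)  # class = argmax
acc = np.mean(pred == y)
```

The regularization parameter C plays the same role as in LS-SVM: larger C weights the training error more heavily, smaller C shrinks the output weights.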

16.
In order to overcome the disadvantages of the traditional algorithms for SLFNs (single-hidden-layer feedforward neural networks), an improved algorithm called the extreme learning machine (ELM) was proposed by Huang et al. However, ELM is sensitive to the number of neurons in the hidden layer, and selecting that number is a difficult problem. In this paper, a self-adaptive mechanism is introduced into ELM, yielding a new variant called the self-adaptive extreme learning machine (SaELM). SaELM is a self-adaptive learning algorithm that always selects the best number of hidden-layer neurons to form the network; no parameters need to be adjusted during training. To evaluate SaELM, it is applied to the Italian wine and iris classification problems. Comparisons between SaELM and traditional back-propagation, basic ELM, and generalized regression neural networks show that SaELM achieves faster learning speed and better generalization performance on these classification problems.

17.
Li Jun, Nai Yongqiang. 《控制与决策》 (Control and Decision), 2015, 30(9): 1559-1566

For a class of multi-input multi-output (MIMO) affine nonlinear dynamic systems, a robust adaptive neural control method based on the extreme learning machine (ELM) is proposed. ELM randomly determines the hidden-layer parameters of single-hidden-layer feedforward networks (SLFNs) and adjusts only the output weights, achieving good generalization at extremely fast learning speed. In the proposed method, ELM is used to approximate the unknown nonlinear terms of the system, and parameter adaptive laws are designed for the ELM output weights, the approximation error, and the unknown upper bound of the external disturbance. Lyapunov stability analysis guarantees that all signals of the closed-loop system are semi-globally uniformly ultimately bounded. Simulation results demonstrate the effectiveness of the control method.


18.
We present two classes of convergent algorithms for learning continuous functions and regressions that are approximated by feedforward networks. The first class of algorithms, applicable to networks with unknown weights located only in the output layer, is obtained by utilizing the potential function methods of Aizerman et al. (1970). The second class, applicable to general feedforward networks, is obtained by utilizing the classical Robbins-Monro style stochastic approximation methods (1951). Conditions relating the sample sizes to the error bounds are derived for both classes of algorithms using martingale-type inequalities. For concreteness, the discussion is presented in terms of neural networks, but the results are applicable to general feedforward networks, in particular to wavelet networks. The algorithms can be directly adapted to concept learning problems.

19.
This paper investigates the learning of a wide class of single-hidden-layer feedforward neural networks (SLFNs) with two sets of adjustable parameters, i.e., the nonlinear parameters in the hidden nodes and the linear output weights. The main objective is to both speed up the convergence of second-order learning algorithms such as Levenberg-Marquardt (LM), as well as to improve the network performance. This is achieved here by reducing the dimension of the solution space and by introducing a new Jacobian matrix. Unlike conventional supervised learning methods which optimize these two sets of parameters simultaneously, the linear output weights are first converted into dependent parameters, thereby removing the need for their explicit computation. Consequently, the neural network (NN) learning is performed over a solution space of reduced dimension. A new Jacobian matrix is then proposed for use with the popular second-order learning methods in order to achieve a more accurate approximation of the cost function. The efficacy of the proposed method is shown through an analysis of the computational complexity and by presenting simulation results from four different examples.

20.
In previously published research (Hamade et al., 2005, 2007, 2009; Hamade and Artail, 2008), the authors developed a framework for analyzing the technical profiles of novice computer-aided design (CAD) trainees about to start training in a formal setting. That research used a questionnaire to establish the trainees’ CAD-relevant technical foundation, which served as the basis for statistically correlating this data with experimental measurements of the trainees’ performance over the duration of training. In this paper, we build on that work and attempt to forecast the performance of these CAD users from their profiled technical attributes. For this purpose, we utilize three artificial neural network (ANN) techniques: feed-forward backpropagation, Elman backpropagation, and generalized regression, and compare their capabilities with those of simulated annealing as well as linear regression techniques. Based on the profiled technical attributes, the generalized regression neural network (GRNN) method is found to be the most successful at discriminating the trainees, including predicting both their initial performance and their progress.

