Similar Literature
A total of 20 similar records were found (search time: 15 ms).
1.

Recently, the extreme learning machine (ELM) has attracted increasing attention due to its successful applications in classification, regression, and ranking. Normally, the desired output of a learning system using these machine learning techniques is a simple scalar. However, many applications in machine learning require a more complex output rather than a simple scalar one, so structured output is used and the system is trained to predict structured outputs instead of simple ones. Previously, the support vector machine (SVM) has been introduced for structured output learning in various applications. However, from a machine learning point of view, ELM is known to offer better generalization performance than other learning techniques. In this study, we extend ELM to a more generalized framework that handles complex outputs, with simple outputs treated as special cases. Besides the good generalization property of ELM, the resulting model possesses a rich internal structure that reflects task-specific relations and constraints. The experimental results show that structured ELM achieves similar (for binary problems) or better (for multi-class problems) generalization performance than ELM. Moreover, as verified by the simulation results, structured ELM has comparable or better precision than structured SVM when tested on more complex outputs such as the object localization problem on PASCAL VOC2006. The investigation of parameter selection is also presented and discussed for all problems.


2.
Neural networks do not readily provide an explanation of the knowledge stored in their weights as part of their information processing. Until recently, neural networks were considered to be black boxes, with the knowledge stored in their weights not readily accessible. Since then, research has resulted in a number of algorithms for extracting knowledge in symbolic form from trained neural networks. This article addresses the extraction of knowledge in symbolic form from recurrent neural networks trained to behave like deterministic finite-state automata (DFAs). To date, methods used to extract knowledge from such networks have relied on the hypothesis that networks' states tend to cluster and that clusters of network states correspond to DFA states. The computational complexity of such a cluster analysis has led to heuristics that either limit the number of clusters that may form during training or limit the exploration of the space of hidden recurrent state neurons. These limitations, while necessary, may lead to decreased fidelity, in which the extracted knowledge may not model the true behavior of a trained network, perhaps not even for the training set. The method proposed here uses a polynomial time, symbolic learning algorithm to infer DFAs solely from the observation of a trained network's input-output behavior. Thus, this method has the potential to increase the fidelity of the extracted knowledge.

3.
Neural Computing and Applications - A novel failure rate prediction model is developed using the extreme learning machine (ELM) to provide key information needed for optimum ongoing...

4.
Extreme learning machine for regression and multiclass classification (total citations: 13; self-citations: 0; citations by others: 13)
Due to the simplicity of their implementations, the least squares support vector machine (LS-SVM) and the proximal support vector machine (PSVM) have been widely used in binary classification applications. The conventional LS-SVM and PSVM cannot be used in regression and multiclass classification directly, although variants of LS-SVM and PSVM have been proposed to handle such cases. This paper shows that both LS-SVM and PSVM can be simplified further and that a unified learning framework of LS-SVM, PSVM, and other regularization algorithms, referred to as the extreme learning machine (ELM), can be built. ELM works for "generalized" single-hidden-layer feedforward networks (SLFNs), but the hidden layer (also called the feature mapping) in ELM need not be tuned. Such SLFNs include, but are not limited to, SVMs, polynomial networks, and conventional feedforward neural networks. This paper shows the following: 1) ELM provides a unified learning platform with a wide range of feature mappings and can be applied to regression and multiclass classification directly; 2) from the optimization point of view, ELM has milder optimization constraints than LS-SVM and PSVM; 3) in theory, compared to ELM, LS-SVM and PSVM achieve suboptimal solutions and require higher computational complexity; and 4) in theory, ELM can approximate any target continuous function and classify any disjoint regions. As verified by the simulation results, ELM tends to have better scalability and to achieve similar (for regression and binary-class cases) or much better (for multiclass cases) generalization performance at much faster learning speed (up to thousands of times faster) than traditional SVM and LS-SVM.
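To make the closed-form solution underlying this unified framework concrete, here is a minimal Python sketch (not the authors' code; the sigmoid feature mapping, the hidden-layer size, and the regularization constant C are illustrative assumptions) that computes the output weights as beta = (I/C + H^T H)^{-1} H^T T:

```python
import numpy as np

def elm_train_regularized(X, T, n_hidden=100, C=1.0, rng=None):
    """Minimal regularized ELM sketch: random hidden layer + ridge-style closed-form solution."""
    rng = np.random.default_rng(rng)
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))  # random input weights (not tuned)
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                # random hidden biases (not tuned)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))                   # sigmoid feature mapping
    # Output weights: beta = (I/C + H^T H)^{-1} H^T T
    beta = np.linalg.solve(np.eye(n_hidden) / C + H.T @ H, H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

For multiclass problems, T would be a one-hot target matrix and the predicted label is the index of the largest entry of the network output.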

5.
Recently, a novel learning algorithm for single-hidden-layer feedforward neural networks (SLFNs) named the extreme learning machine (ELM) was proposed by Huang et al. The essence of ELM is that the learning parameters of the hidden nodes, including input weights and biases, are randomly assigned and need not be tuned, while the output weights can be analytically determined by a simple generalized inverse operation. The only parameter that needs to be defined is the number of hidden nodes. Compared with other traditional learning algorithms for SLFNs, ELM provides extremely fast learning speed and better generalization performance with the least human intervention. This paper first gives a brief review of ELM, describing its principle and algorithm. Then, we put emphasis on improved methods and typical variants of ELM, especially incremental ELM, pruning ELM, error-minimized ELM, two-stage ELM, online sequential ELM, evolutionary ELM, voting-based ELM, ordinal ELM, fully complex ELM, and symmetric ELM. Next, the paper summarizes the applications of ELM in classification, regression, function approximation, pattern recognition, forecasting, diagnosis, and so on. Finally, the paper discusses several open issues of ELM that may be worth exploring in the future.
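As a concrete illustration of the ELM principle reviewed here (hidden-node parameters assigned at random, output weights obtained by the generalized inverse), a minimal sketch follows; the tanh activation and the number of hidden nodes are arbitrary illustrative choices, and this is the unregularized counterpart of the sketch under item 4:

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, rng=None):
    """Basic ELM sketch: random hidden nodes, output weights via the Moore-Penrose pseudo-inverse."""
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))  # input weights, never tuned
    b = rng.standard_normal(n_hidden)                # hidden biases, never tuned
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y                     # analytic output weights
    return W, b, beta
```

The only hyperparameter a user must choose is n_hidden, which mirrors the statement above that the number of hidden nodes is the sole parameter to be defined.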

6.
In real life, information about the world is uncertain and imprecise. This uncertainty arises from deficiencies in the given information, from the fuzzy nature of our perception of events and objects, and from the limitations of the models we use to explain the world. The development of new methods for dealing with uncertain information is crucial for solving real-life problems. In this paper three interval type-2 fuzzy neural network (IT2FNN) architectures are proposed, with hybrid learning algorithms (gradient descent backpropagation and gradient descent with adaptive learning rate backpropagation). At the antecedent layer, an interval type-2 fuzzy neuron (IT2FN) model is used, and at the consequent layer an interval type-1 fuzzy neuron model (IT1FN), in order to fuzzify the antecedents and consequents of the rules of an interval type-2 Takagi-Sugeno-Kang fuzzy inference system (IT2-TSK-FIS). The IT2-TSK-FIS is integrated into an adaptive neural network in order to take advantage of the best of both models. This provides a high-order intuitive mechanism for representing imperfect information by means of fuzzy If-Then rules, in addition to handling uncertainty and imprecision. On the other hand, neural networks are highly adaptable, with learning and generalization capabilities. The experimental results are of two kinds: first, a non-linear identification problem for control systems is simulated and a comparative analysis of the IT2FNN and ANFIS learning architectures is carried out; second, a non-linear Mackey-Glass chaotic time series prediction problem with uncertainty sources is studied. The IT2FNN proved to be a more efficient mechanism for modeling real-world problems.

7.
A robust locally adaptive learning algorithm is developed via two enhancements of the Resilient Propagation (RPROP) method. Remaining drawbacks of the gradient-based approach are addressed by hybridization with gradient-independent Local Search. Finally, a global optimization method based on recursion of the hybrid is constructed, making use of tabu neighborhoods to accelerate the search for minima through diversification. Enhanced RPROP is shown to be faster and more accurate than the standard RPROP in solving classification tasks based on natural data sets taken from the UCI repository of machine learning databases. Furthermore, the use of Local Search is shown to improve Enhanced RPROP by solving the same classification tasks as part of the global optimization method.
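For reference, the sign-based update that standard RPROP performs on each weight (the starting point for the enhancements described above) can be sketched as follows; this is a simplified iRPROP−-style rule with commonly used default step bounds, not the authors' enhanced variant:

```python
import numpy as np

def rprop_step(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One iRPROP- style update: per-weight step sizes adapted from gradient signs only."""
    sign_change = grad * prev_grad
    # Same sign as last iteration: accelerate; opposite sign: shrink the step.
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(sign_change < 0, 0.0, grad)   # suppress the update right after a sign flip
    w = w - np.sign(grad) * step                  # move each weight by its own step size
    return w, grad, step                          # returned grad serves as prev_grad next time
```

Because only the sign of the gradient is used, the rule is insensitive to gradient magnitude, which is the property the Enhanced RPROP variants build on.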

8.
Unmanned aerial vehicles (UAVs) rely on global positioning system (GPS) information to ascertain their position for navigation during mission execution. In the absence of GPS information, the capability of a UAV to carry out its intended mission is hindered. In this paper, we explore alternative means for UAVs to derive real-time positional reference information so as to ensure the continuity of the mission. We present the extreme learning machine as a mechanism for learning stored digital elevation information so as to help UAVs navigate through terrain without the need for GPS. The proposed algorithm accommodates the needs of an on-line implementation by supporting multi-resolution terrain access, and is thus capable of generating an immediate path with high accuracy within the allowable time scale. Numerical tests have demonstrated the potential benefits of the approach.

9.
The extreme learning machine (ELM) is widely used for training single-hidden-layer feedforward neural networks (SLFNs) because of its good generalization and fast speed. However, most improved ELMs address the approximation problem for sample data with noise only in the output values, not for sample data with noise in both input and output values, i.e., the error-in-variable (EIV) model. In this paper, a novel algorithm, called (regularized) TLS-ELM, is proposed to approximate the EIV model based on ELM and the total least squares (TLS) method. The proposed TLS-ELM uses the idea of ELM to choose the hidden weights and applies the TLS method to determine the output weights. Furthermore, the perturbation quantities of the hidden output matrix and the observed values are given simultaneously. Comparison experiments with the least squares method, the TLS method, and ELM show that the proposed TLS-ELM achieves better accuracy and shorter training time.
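The total least squares step used for the output weights can be illustrated with the classical SVD construction; the snippet below is a generic TLS solve of H·beta ≈ t for a single output column under standard assumptions, not the authors' exact (regularized) formulation:

```python
import numpy as np

def tls_solve(H, t):
    """Classical total least squares solution of H @ beta ≈ t (single output column)."""
    Z = np.column_stack([H, t])            # augmented matrix [H | t]
    _, _, Vt = np.linalg.svd(Z)            # right singular vectors of the augmented matrix
    v = Vt[-1]                             # singular vector for the smallest singular value
    beta = -v[:-1] / v[-1]                 # TLS estimate (requires v[-1] != 0)
    return beta
```

In the TLS-ELM setting, H would be the hidden-layer output matrix produced by the randomly chosen hidden weights and t the (noisy) observed targets, so perturbations in both H and t are accounted for.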

10.
To address the problem that the extreme learning machine (ELM) requires a large number of hidden-layer nodes during training, an ELM optimized by an artificial bee colony algorithm improved with differential evolution and clonal operators (DECABC-ELM) is proposed. Building on the artificial bee colony algorithm, the differential mutation operator of differential evolution and the clonal expansion operator of the immune clonal algorithm are introduced to remedy drawbacks such as the slow convergence of the artificial bee colony, and the improved artificial bee colony algorithm is used to compute the hidden-layer node parameters of the ELM. The algorithm is applied to regression and classification data sets and compared with other algorithms, achieving good results.

11.
12.
Wu, Chao; Li, Yaqian; Zhao, Zhibiao; Liu, Bin. Neural Computing & Applications, 2020, 32(12): 8157-8173
Neural Computing and Applications - Based on the theory of the local receptive field based extreme learning machine (ELM-LRF) and the ELM autoencoder (ELM-AE), a new network...

13.
The problem of training feedforward neural networks is considered. To solve it, new algorithms are proposed. They are based on the asymptotic analysis of the extended Kalman filter (EKF) and on a separable network structure. Linear weights are interpreted as diffusion random variables with zero expectation and a covariance matrix proportional to an arbitrarily large parameter λ. Asymptotic expressions for the EKF are derived as λ→∞. They are called diffusion learning algorithms (DLAs). It is shown that they are robust with respect to the accumulation of rounding errors, in contrast to their prototype EKF with a large but finite λ, and that, under certain simplifying assumptions, an extreme learning machine (ELM) algorithm can be obtained from a DLA. A numerical example shows that the accuracy of a DLA may be higher than that of an ELM algorithm.

14.
We consider the problem of learning the dependence of one random variable on another from a finite string of independent and identically distributed (i.i.d.) copies of the pair. The problem is first converted to that of learning a function of the latter random variable and an independent random variable uniformly distributed on the unit interval. However, this cannot be achieved using the usual function learning techniques because the samples of the uniformly distributed random variables are not available. We propose a novel loss function, the minimizer of which results in an approximation to the needed function. Through successive approximation results (suggested by the proposed loss function), a suitable class of functions represented by combination feedforward neural networks is selected as the class to learn from. These results are also extended to countable as well as continuous state-space Markov chains. The effectiveness of the proposed method is indicated through simulation studies.

15.
Translated from Kibernetika i Sistemnyi Analiz, No. 4, pp. 156–167, July–August 1994.

16.
Knowledge and Information Systems - We present InfoMotif, a new semi-supervised, motif-regularized learning framework over graphs. We overcome two key limitations of message passing in popular...

17.
Analog neural network for support vector machine learning (total citations: 1; self-citations: 0; citations by others: 1)
An analog neural network for support vector machine learning is proposed, based on a partially dual formulation of the quadratic programming problem. It results in a simpler circuit implementation with respect to existing neural solutions for the same application. The effectiveness of the proposed network is shown through computer simulations on benchmark problems.

18.
微型机与应用 (Microcomputer & Its Applications), 2015(17): 81-84
Because the extreme learning machine algorithm handles the classification of imbalanced data unsatisfactorily, an extreme learning machine algorithm based on clustering-based undersampling is proposed. The new algorithm first clusters the negative-class samples of the training set into different clusters, then undersamples each cluster at a prescribed sampling rate; the extracted samples form a new negative-class data set, so that the numbers of positive and negative samples in the training set become relatively balanced. Finally, the classifier is trained and evaluated on the test set. Experimental results show that the new algorithm effectively reduces the impact of data imbalance on classification accuracy and achieves better classification performance.
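A minimal sketch of the clustering-based undersampling step described above is given below; scikit-learn's KMeans is used purely for illustration, the number of clusters and the sampling rate are hypothetical parameters, and any classifier (for example an ELM) could then be trained on the rebalanced training set:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_undersample(X_neg, n_clusters=5, rate=0.3, rng=None):
    """Cluster the majority (negative) class and undersample each cluster at a fixed rate."""
    rng = np.random.default_rng(rng)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X_neg)
    kept = []
    for c in range(n_clusters):
        idx = np.flatnonzero(labels == c)                     # samples in cluster c
        n_keep = max(1, int(round(rate * idx.size)))          # prescribed sampling rate
        kept.append(rng.choice(idx, size=n_keep, replace=False))
    return X_neg[np.concatenate(kept)]                        # rebalanced negative-class subset
```

Sampling within clusters rather than from the whole negative class is intended to preserve the diversity of the majority class while shrinking it toward the size of the minority class.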

19.
The minimum velocity required to prevent sediment deposition in open channels is examined in this study. The parameters affecting transport are first determined and then categorized into different dimensionless groups, including "movement," "transport," "sediment," "transport mode," and "flow resistance." Six different models are presented to identify the effect of each of these parameters. A feed-forward neural network (FFNN) is used to predict the densimetric Froude number (Fr), and the extreme learning machine (ELM) algorithm is utilized to train it. The results of this algorithm are compared with back propagation (BP), genetic programming (GP), and existing sediment transport equations. The results indicate that FFNN-ELM produced better results than FFNN-BP, GP, and the existing sediment transport methods in both training (RMSE = 0.26 and MARE = 0.052) and testing (RMSE = 0.121 and MARE = 0.023). Moreover, the performance of FFNN-ELM is examined for different pipe diameters.

20.
Qian, Yinlong; Dong, Jing; Wang, Wei; Tan, Tieniu. Multimedia Tools and Applications, 2018, 77(15): 19633-19657
Multimedia Tools and Applications - Traditional steganalysis methods usually rely on handcrafted features. However, with the rapid development of advanced steganography, manual design of complex...
