Similar Literature
20 similar documents found.
1.
To overcome the disadvantages of traditional training algorithms for single-hidden-layer feedforward neural networks (SLFNs), Huang et al. proposed an improved algorithm called the extreme learning machine (ELM). However, ELM is sensitive to the number of neurons in the hidden layer, and selecting this number is a difficult problem. In this paper, a self-adaptive mechanism is introduced into ELM, yielding a new variant called the self-adaptive extreme learning machine (SaELM). SaELM is a self-adaptive learning algorithm that always selects the best hidden-layer neuron count when forming the network, and no parameters need to be tuned during training. To evaluate SaELM, it is applied to the Italian wine and iris classification problems. Comparisons with traditional back-propagation, basic ELM, and the general regression neural network show that SaELM achieves faster learning and better generalization on these classification problems.
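For readers comparing these ELM variants, the following minimal NumPy sketch shows the basic ELM mechanics the abstract builds on — random hidden weights with a least-squares output layer — plus a simple validation-based search over hidden-node counts in the spirit of SaELM. The selection loop, toy data, and sizes are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def elm_train(X, Y, n_hidden, rng):
    """Basic ELM: random hidden weights, least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ Y                  # Moore-Penrose least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Illustrative "self-adaptive" choice of hidden-node count: try several
# sizes and keep the best validation accuracy (an assumption; SaELM's
# actual selection mechanism may differ).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
Y = np.eye(2)[y]                                  # one-hot targets
Xtr, Ytr = X[:150], Y[:150]
Xva, yva = X[150:], y[150:]

best = None
for n_hidden in (5, 10, 20, 40, 80):
    W, b, beta = elm_train(Xtr, Ytr, n_hidden, rng)
    acc = np.mean(elm_predict(Xva, W, b, beta).argmax(axis=1) == yva)
    if best is None or acc > best[0]:
        best = (acc, n_hidden)
print("best hidden-node count:", best[1], "val acc: %.2f" % best[0])
```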

2.
Feedforward neural networks (FNNs) have been applied to complex problems in pattern recognition, classification, and function approximation. Despite the general success of learning methods for FNNs, such as the backpropagation (BP) algorithm and second-order algorithms, the long training time required for convergence remains a problem to be overcome. In this paper, we propose a new hybrid algorithm for FNNs that combines unsupervised training for the hidden neurons (Kohonen algorithm) with supervised training for the output neurons (gradient-descent method). Simulation results show the effectiveness of the proposed algorithm compared with other well-known learning methods.
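A minimal sketch of such a hybrid scheme, assuming a winner-take-all (neighborhood-free) Kohonen-style update for the hidden prototypes, Gaussian hidden responses, and plain gradient descent for the output weights; the paper's actual update rules may differ:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
y = (np.linalg.norm(X, axis=1) > 1.0).astype(float)  # toy binary target

# Stage 1 (unsupervised): Kohonen-style winner-take-all updates move
# hidden prototypes toward the input distribution.
n_hidden, lr = 10, 0.1
protos = rng.normal(size=(n_hidden, 2))
for epoch in range(20):
    for x in X:
        win = np.argmin(np.linalg.norm(protos - x, axis=1))  # best-matching unit
        protos[win] += lr * (x - protos[win])

def hidden(X):
    # Gaussian responses around the learned prototypes (an assumed choice)
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return np.exp(-d ** 2)

# Stage 2 (supervised): plain gradient descent on the output weights only.
H = hidden(X)
w = np.zeros(n_hidden)
for epoch in range(200):
    err = H @ w - y
    w -= 0.01 * H.T @ err / len(X)
print("train MSE: %.4f" % np.mean((H @ w - y) ** 2))
```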

3.
There is no exact method for determining the optimal topology of a multi-layer neural network for a given problem; usually the designer selects a topology and then trains the network. Since determining the optimal topology belongs to the class of NP-hard problems, most existing algorithms are approximate. They can be classified into four main groups: pruning algorithms, constructive algorithms, hybrid algorithms, and evolutionary algorithms. These algorithms can produce near-optimal solutions, but most rely on hill climbing and may get stuck in local minima. In this article, we first introduce a learning automaton and study its behaviour, and then present an algorithm based on it, called the survival algorithm, for determining the number of hidden units in three-layer neural networks. The survival algorithm uses learning automata as a global search method to increase the probability of obtaining the optimal topology, and it treats topology optimization as object partitioning rather than as search or parameter optimization, as in existing algorithms. Training begins with a large network; hidden units are then added and deleted until a near-optimal topology is obtained. The algorithm has been tested on a number of problems, and simulations show that the generated networks are near optimal.
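As a concrete illustration of the learning-automaton building block, here is a standard linear reward-inaction (L_RI) automaton in NumPy; the environment, action set, and parameters are assumptions, and the survival algorithm itself layers object partitioning on top of this kind of update:

```python
import numpy as np

rng = np.random.default_rng(8)
n_actions = 4                                  # e.g., candidate hidden-unit counts
p = np.ones(n_actions) / n_actions             # action probabilities
a = 0.05                                       # reward (learning) parameter
reward_prob = np.array([0.2, 0.8, 0.4, 0.5])   # unknown environment (assumed)

for step in range(2000):
    i = rng.choice(n_actions, p=p)
    if rng.random() < reward_prob[i]:          # favourable response
        p = p - a * p                          # L_RI: shrink all probabilities ...
        p[i] += a                              # ... then reinforce the chosen action
    # unfavourable response: inaction (probabilities unchanged)
print(np.round(p, 3))                          # mass concentrates on action 1
```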

4.
Recently there has been renewed interest in single-hidden-layer neural networks (SHLNNs), owing to their powerful modeling ability and the existence of efficient learning algorithms. A prominent example is the extreme learning machine (ELM), which assigns random values to the lower-layer weights. While ELM can be trained efficiently, it requires many more hidden units than conventional neural networks typically need to achieve the same classification accuracy, and the large number of hidden units translates into significantly increased test time, which in practice matters more than training time. In this paper, we propose a series of new efficient learning algorithms for SHLNNs. Our algorithms exploit both the structure of SHLNNs and the gradient information over all training epochs, and update the weights in the direction that reduces the overall squared error the most. Experiments on the MNIST handwritten digit recognition task and the MAGIC gamma telescope dataset show that the proposed algorithms obtain significantly better classification accuracy than ELM when the same number of hidden units is used. For the same classification accuracy, our best algorithm requires only 1/16 of the model size, and thus approximately 1/16 of the test time, of ELM. This advantage is gained at the expense of at most five times the training cost of ELM.

5.
R. Setiono. Neural Computation, 2001, 13(12): 2865-2877
This article presents an algorithm that constructs feedforward neural networks with a single hidden layer for pattern classification. The algorithm starts with a small number of hidden units and adds more as needed to improve the network's predictive accuracy. To decide when to stop adding hidden units, it holds out a subset of the available training samples for cross-validation: new hidden units are added only if they improve the classification accuracy of the network on both the training samples and the cross-validation samples. Extensive experimental results show that the algorithm is effective in obtaining networks with better predictive accuracy than state-of-the-art decision-tree methods.
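A sketch of the stopping rule described above, assuming a pluggable trainer (here an ELM-style random-feature fit, purely for illustration; the paper trains the growing network directly) and stopping as soon as a new hidden unit fails to improve both training and cross-validation accuracy:

```python
import numpy as np

def grow_network(train_and_eval, max_hidden=50):
    """Add hidden units while BOTH training and cross-validation accuracy
    improve, per the abstract's stopping rule (a simplification: we retrain
    at each size instead of growing the trained network incrementally).
    train_and_eval(h) must return (train_acc, cv_acc) for h hidden units."""
    best_tr, best_cv, best_h = -1.0, -1.0, 0
    h = 1
    while h <= max_hidden:
        tr, cv = train_and_eval(h)
        if tr > best_tr and cv > best_cv:   # the new unit must help on both
            best_tr, best_cv, best_h = tr, cv, h
            h += 1
        else:
            break                           # stop adding hidden units
    return best_h, best_cv

# Toy trainer (hypothetical): random tanh features + least-squares readout.
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 3))
y = (X[:, 0] * X[:, 1] > 0).astype(int)
Xtr, ytr, Xcv, ycv = X[:200], y[:200], X[200:], y[200:]

def train_and_eval(h):
    W = rng.normal(size=(3, h)); b = rng.normal(size=h)
    feats = lambda Z: np.tanh(Z @ W + b)
    beta = np.linalg.pinv(feats(Xtr)) @ np.eye(2)[ytr]
    acc = lambda Z, t: np.mean((feats(Z) @ beta).argmax(1) == t)
    return acc(Xtr, ytr), acc(Xcv, ycv)

print(grow_network(train_and_eval))
```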

6.
The geometrical learning of binary neural networks (cited 12 times; 0 self-citations)
In this paper, a learning algorithm called expand-and-truncate learning (ETL) is proposed to train multilayer binary neural networks (BNNs) with guaranteed convergence for any binary-to-binary mapping. The most significant contribution of this paper is a learning algorithm for three-layer BNNs that guarantees convergence while automatically determining the required number of hidden-layer neurons. Furthermore, the proposed ETL algorithm learns much faster than the backpropagation algorithm in a binary field. Neurons in the proposed BNN employ a hard-limiter activation function with only integer weights and integer thresholds, which greatly facilitates hardware implementation of the proposed BNN using currently available digital VLSI technology.

7.
Real-time learning capability of neural networks (cited 4 times; 0 self-citations)
In some practical applications of neural networks, fast response to external events within an extremely short time is demanded. However, the widely used gradient-descent-based learning algorithms cannot satisfy such real-time learning needs, especially for large-scale applications and/or when high generalization performance is required. Based on Huang's constructive network model, this paper proposes a simple learning algorithm capable of real-time learning, which automatically selects appropriate values for the neural quantizers and analytically determines the network parameters (weights and biases) in a single pass. The performance of the proposed algorithm has been systematically investigated on a large batch of benchmark real-world regression and classification problems. The experimental results demonstrate that the algorithm not only produces good generalization performance but also offers real-time learning and prediction capability, providing an alternative for practical applications of neural networks where real-time learning and prediction are required.

8.
An efficient constrained training algorithm for feedforward networks (cited 3 times; 0 self-citations)
A novel algorithm is presented that supplements the training phase of feedforward networks with various forms of information about desired learning properties. This information is represented by conditions that must be satisfied in addition to minimizing the usual mean-square-error cost function. The purpose of these conditions is to improve convergence, learning speed, and generalization through prompt activation of the hidden units, optimal alignment of successive weight-vector offsets, elimination of excessive hidden nodes, and regulation of the magnitude of search steps in weight space. The algorithm is applied to several small- and large-scale binary benchmark training tasks, to test its convergence and learning speed, as well as to a large-scale OCR problem, to test its generalization capability. Its performance in terms of percentage of local minima, learning speed, and generalization ability is found superior to that of the backpropagation algorithm and its variants, especially when the statistical significance of the results is taken into account.

9.
A new evolutionary system for evolving artificial neural networks (cited 39 times; 0 self-citations)
This paper presents a new evolutionary system, EPNet, for evolving artificial neural networks (ANNs). The evolutionary algorithm used in EPNet is based on Fogel's evolutionary programming (EP). Unlike most previous studies on evolving ANNs, this paper emphasizes evolving ANN behaviors, and the five mutation operators proposed in EPNet reflect this emphasis. Close behavioral links between parents and their offspring are maintained by mutations such as partial training and node splitting. EPNet evolves ANN architectures and connection weights (including biases) simultaneously in order to reduce noise in fitness evaluation, and it encourages parsimony in the evolved ANNs by preferring node/connection deletion over addition. EPNet has been tested on a number of benchmark problems in machine learning and ANNs, such as the parity problem, medical diagnosis problems, the Australian credit card assessment problem, and the Mackey-Glass time series prediction problem. The experimental results show that EPNet can produce very compact ANNs with good generalization ability compared with other algorithms.

10.
In Gaussian mixture modeling, it is crucial to select the number of Gaussians for a sample set, which becomes much more difficult when the overlap in the mixture is greater. Under regularization theory, we address this problem with a semi-supervised learning algorithm that incorporates pairwise constraints into entropy-regularized likelihood (ERL) learning, which performs automatic model selection for Gaussian mixtures. Simulation experiments demonstrate that the presented semi-supervised learning algorithm (the constrained ERL learning algorithm) can automatically detect the number of Gaussians with good parameter estimation, even when two or more of the actual Gaussians in the mixture overlap to a high degree. Moreover, the constrained ERL learning algorithm yields promising results when applied to iris data classification and image-database categorization.

11.
To address the structure-design problem of the extreme learning machine (ELM), a constructive algorithm for feedforward neural networks is proposed based on the hidden-layer activation function and its derivative. First, taking the sigmoid function as an example, a derivation property of a class of basis functions is given: the derivative can be expressed in terms of the original function. Second, using this property, an ELM structure-design method is proposed that automatically generates a feedforward neural network with two hidden layers; the nodes of the first hidden layer are generated randomly one by one, the outputs of the second hidden layer are determined by the activation functions of the newly added first-layer nodes and their derivatives, and the output-layer weights are obtained analytically by least squares. Finally, theoretical proofs of the convergence and stability of the proposed algorithm are given. Simulation results on nonlinear system identification and the two-spiral classification problem demonstrate the effectiveness of the algorithm.
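The derivation property cited for the sigmoid is sigma'(x) = sigma(x)(1 - sigma(x)); here is a quick numerical check of that identity (the two-hidden-layer network construction itself is left to the paper):

```python
import numpy as np

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-5, 5, 11)
analytic = sigmoid(x) * (1.0 - sigmoid(x))                 # derivative via the original function
numeric = (sigmoid(x + 1e-6) - sigmoid(x - 1e-6)) / 2e-6   # central difference
print(np.max(np.abs(analytic - numeric)))                  # ~1e-10: the property holds
```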

12.
A new adaptive learning algorithm for constructing and training wavelet networks is proposed, based on the time-frequency localization properties of wavelet frames and the adaptive projection algorithm. The exponential convergence of the adaptive projection algorithm in finite-dimensional Hilbert spaces is constructively proved, with the exponential decay ratios given to high accuracy. The learning algorithm fully utilizes the time-frequency information contained in the training data, iteratively determines the number of hidden-layer nodes and the weights of the wavelet network, and thereby solves the structure-optimization problem of wavelet networks. The algorithm is simple and efficient, as illustrated by examples of signal representation and denoising.

13.
Neurocomputing, 1999, 24(1-3): 13-36
This paper reviews different approaches to improving the real-time recurrent learning (RTRL) algorithm and attempts to group them into common frameworks. The characteristics of the sub-grouping strategy, mode-exchange RTRL, and cellular genetic algorithms are discussed; the relationships between these algorithms are highlighted, and their time complexities and convergence capabilities are compared. The learning algorithms are applied to train recurrent neural networks in an attempt to solve a long-term dependency problem, to model the Hénon map, and to predict the chaotic intensity pulsations of an NH3 laser. The results show that the original RTRL algorithm achieves the lowest error among the gradient-based algorithms but requires the longest training time, whereas the sub-grouping strategy uses the shortest training time but has the poorest convergence capability. The results also demonstrate that the cellular genetic algorithm is an alternative means of training recurrent neural networks when the gradient-based methods fail to find an acceptable solution.

14.
As a novel learning algorithm for single-hidden-layer feedforward neural networks, the extreme learning machine (ELM) has become a promising tool for regression and classification applications. However, it is not trivial for ELMs to find the proper number of hidden neurons, owing to the non-optimal input weights and hidden biases. In this paper, a new model-selection method for ELM based on multi-objective optimization is proposed to obtain compact networks with good generalization ability. First, a new leave-one-out (LOO) error bound for ELM is derived, which can be calculated at negligible computational cost once ELM training is finished. Hidden nodes are then added to the network one by one, and at each step a multi-objective optimization algorithm selects the optimal input weights by simultaneously minimizing this LOO bound and the norm of the output weights, so as to avoid over-fitting. Experiments on five UCI regression data sets demonstrate that the proposed algorithm generally obtains better generalization performance with a more compact network than the conventional gradient-based back-propagation method, the original ELM, and the evolutionary ELM.
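One classical way such a cheap post-training LOO estimate arises is the PRESS statistic for a least-squares output layer, e_i^LOO = e_i / (1 - h_ii) with h_ii the diagonal of the hat matrix. The sketch below assumes this form, which may differ in detail from the paper's bound:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 2))
y = X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=100)

# ELM-style random hidden layer, least-squares output layer
W = rng.normal(size=(2, 15)); b = rng.normal(size=15)
H = np.tanh(X @ W + b)
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

# PRESS statistic: exact LOO residuals from a single trained model.
hat = H @ np.linalg.pinv(H)            # hat (projection) matrix
resid = y - H @ beta
loo_resid = resid / (1.0 - np.diag(hat))
print("LOO MSE estimate: %.4f" % np.mean(loo_resid ** 2))
```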

15.
Research on the structure of supervised three-layer feedforward neural networks with linear basis functions (cited 148 times; 0 self-citations)
Choosing the optimal number of hidden nodes is an important and difficult problem first encountered when applying supervised three-layer feedforward neural networks with linear basis functions trained by the error back-propagation algorithm. From a large number of application examples at home and abroad, this paper summarizes an empirical formula for the initial choice of the number of hidden nodes, proposes a concrete method for judging whether the selected number of hidden nodes is excessive, and gives a detailed theoretical derivation.
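The abstract does not reproduce the empirical formula itself. A commonly quoted rule of thumb of this general shape is h = sqrt(n + m) + a with a in [1, 10]; the sketch below uses that heuristic purely as an assumed illustration, not as this paper's formula:

```python
import math

def initial_hidden_nodes(n_inputs, n_outputs, a=5):
    """A commonly quoted rule of thumb for the initial hidden-node count:
    h = sqrt(n_inputs + n_outputs) + a, with a in [1, 10]. NOTE: an
    illustrative heuristic only; the paper's own empirical formula may differ."""
    return math.ceil(math.sqrt(n_inputs + n_outputs)) + a

print(initial_hidden_nodes(8, 2))  # e.g., 9 for an 8-input, 2-output network
```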

16.
To address the effect of the number of hidden nodes on the classification accuracy of the regularized extreme learning machine (RELM), a sensitivity-based regularized extreme learning machine (SRELM) algorithm is proposed. First, from the outputs of the hidden-layer activation functions and the corresponding output-layer weight coefficients, a formula is derived for the sensitivity, with respect to each hidden node, of the residual between the actual values and the hidden-node outputs. The hidden nodes are then ranked by sensitivity, and minor nodes are pruned according to the classification accuracy on an optimization sample set, which effectively improves the classification accuracy of SRELM. Experiments on the MNIST handwritten digit database show that, compared with traditional SVM and RELM, SRELM takes about as long as RELM, both far less than SVM, and achieves the highest recognition accuracy on the handwritten digits.
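A hedged sketch of sensitivity-based pruning: since the paper's derived sensitivity formula is not reproduced in the abstract, the code uses an assumed proxy (the variance of each hidden node's contribution beta_j * h_j to the output) to rank and prune nodes:

```python
import numpy as np

def node_importance(H, beta):
    """Proxy sensitivity: variance of each hidden node's contribution to the
    output (an assumed surrogate for the paper's derived sensitivity).
    H: (n_samples, n_hidden) activations; beta: (n_hidden, n_outputs)."""
    contrib = H[:, :, None] * beta[None, :, :]   # per-node output contribution
    return contrib.var(axis=0).sum(axis=1)       # one score per hidden node

rng = np.random.default_rng(4)
H = np.tanh(rng.normal(size=(200, 30)))
beta = rng.normal(size=(30, 2))
order = np.argsort(node_importance(H, beta))     # least important first
prune = order[:10]                               # drop the 10 weakest nodes
keep = np.setdiff1d(np.arange(30), prune)
H2, beta2 = H[:, keep], beta[keep]               # pruned activations/weights (beta would be re-fit in practice)
print("kept", len(keep), "of 30 hidden nodes")
```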

17.
Research on gradient algorithms for reinforcement learning with neural networks (cited 11 times; 1 self-citation)
徐昕, 贺汉根. 计算机学报 (Chinese Journal of Computers), 2003, 26(2): 227-233
For Markov decision problems with continuous state spaces and discrete action spaces, a new gradient-descent reinforcement learning algorithm is proposed that uses a multilayer feedforward neural network for value-function approximation. The algorithm adopts a near-greedy, continuously differentiable Boltzmann-distribution action-selection policy, and approximates the optimal value function of the Markov decision process by minimizing the sum of squared Bellman residuals under a non-stationary action policy. The convergence of the algorithm and the performance of the resulting near-optimal policy are analyzed theoretically, and a simulation study of the Mountain-Car learning control problem further verifies the algorithm's learning efficiency and generalization performance.
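The Boltzmann action-selection policy mentioned in the abstract can be sketched directly; unlike epsilon-greedy it is smooth and differentiable, and it becomes near-greedy as the temperature falls. The Q-values below are assumed toy numbers:

```python
import numpy as np

def boltzmann_policy(q_values, temperature=0.5):
    """Boltzmann (softmax) action selection: near-greedy as the temperature
    falls, yet smooth and differentiable, unlike epsilon-greedy."""
    prefs = np.asarray(q_values) / temperature
    prefs -= prefs.max()                          # numerical stability
    probs = np.exp(prefs) / np.exp(prefs).sum()
    return probs

q = [1.0, 1.5, 0.2]                               # assumed action values for one state
for t in (2.0, 0.5, 0.1):
    print(t, np.round(boltzmann_policy(q, t), 3))  # sharpens toward the argmax
```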

18.
Extended least-squares-based algorithm for training feedforward networks (cited 2 times; 0 self-citations)
An extended least-squares-based algorithm for training feedforward networks is proposed. The weights connecting the last hidden layer and the output layer are first evaluated by a least-squares algorithm; the weights between the input and hidden layers are then evaluated using modified gradient-descent algorithms. This arrangement eliminates the stalling problem experienced by pure least-squares-type algorithms while still maintaining fast convergence. In the problems investigated, the total number of FLOPs required for the networks to converge with the proposed training algorithm is only 0.221%-16.0% of that required by the Levenberg-Marquardt algorithm, and the number of floating-point operations per iteration is only 1.517-3.521 times that of the standard backpropagation algorithm.
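A minimal sketch of the alternating scheme described above, assuming a plain (rather than the paper's modified) gradient-descent step for the hidden weights and exact least squares for the output weights:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

n_hidden, lr = 12, 0.05
W = rng.normal(size=(2, n_hidden)); b = np.zeros(n_hidden)

for epoch in range(100):
    H = np.tanh(X @ W + b)
    # Output weights: exact least squares (avoids stalling, per the abstract)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    # Hidden weights: one plain gradient-descent step on the squared error
    # (a simplification; the paper uses modified gradient-descent rules)
    err = H @ beta - y                      # (n_samples,)
    dH = np.outer(err, beta) * (1 - H ** 2)  # backprop through tanh
    W -= lr * X.T @ dH / len(X)
    b -= lr * dH.mean(axis=0)
print("final MSE: %.4f" % np.mean((np.tanh(X @ W + b) @ beta - y) ** 2))
```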

19.
Reinforcement learning is an important method for solving adaptive problems and is widely used for learning control in continuous state spaces, but it suffers from low efficiency and slow convergence. Building on back-propagation (BP) neural networks and combining them with eligibility traces, an algorithm is proposed that implements multi-step updates in the reinforcement learning process. It solves the problem of back-propagating the local gradients of the output layer to the hidden-layer nodes, thereby enabling fast updates of the hidden-layer weights, and an algorithmic description is provided. An improved residual method is also proposed: during network training, the weight updates of each layer are combined with optimized linear weights, obtaining both the learning speed of the gradient-descent method and the convergence of the residual-gradient method; applying it to the hidden-layer weight updates improves the convergence of the value function. The algorithm is verified and analyzed through a simulated inverted-pendulum balancing experiment. The results show that after a short period of learning the method successfully controls the inverted pendulum and significantly improves learning efficiency.
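The multi-step update mechanism combined with BP here is, in its standard form, semi-gradient TD(lambda) with per-weight eligibility traces; the sketch below applies it to a small tanh network on an assumed toy trajectory (the paper's improved residual weighting is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(6)
n_in, n_hid = 4, 8
W1 = rng.normal(scale=0.1, size=(n_in, n_hid)); W2 = rng.normal(scale=0.1, size=n_hid)
E1, E2 = np.zeros_like(W1), np.zeros_like(W2)   # eligibility traces per weight
alpha, gamma, lam = 0.05, 0.95, 0.8

def value_and_grads(s):
    h = np.tanh(s @ W1)
    v = h @ W2
    g2 = h                                  # dV/dW2
    g1 = np.outer(s, W2 * (1 - h ** 2))     # dV/dW1 via backprop through tanh
    return v, g1, g2

s = rng.normal(size=n_in)
for step in range(100):                     # toy trajectory with assumed rewards
    s_next = rng.normal(size=n_in)
    r = float(s[0] > 0)
    v, g1, g2 = value_and_grads(s)
    delta = r + gamma * value_and_grads(s_next)[0] - v   # TD error
    E1 = gamma * lam * E1 + g1              # decay traces, add current gradient
    E2 = gamma * lam * E2 + g2
    W1 += alpha * delta * E1                # multi-step credit assignment
    W2 += alpha * delta * E2
    s = s_next
print("done; V(s) =", round(float(value_and_grads(s)[0]), 3))
```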

20.
Autoencoders can learn semantic features of data through deep unsupervised learning, but because the number of hidden-layer nodes is hard to determine effectively, the resulting representations often yield low classification accuracy and weak stability when used for classification. To address these problems, a sparse and label-constrained semi-supervised autoencoder (SLRAE) is proposed, which organically combines unsupervised and supervised learning to extract the essential features of samples more accurately. The sparsity-constraint term imposes a condition on the response of each hidden node, so that latent structure in the data can still be discovered even when the number of hidden neurons is large; meanwhile, a label-constraint term compares the actual labels with the expected labels in a supervised manner and adjusts the network parameters accordingly, further improving classification accuracy. To verify the effectiveness of the proposed method, extensive tests were conducted on multiple datasets. The results show that, fed into the same classifier, data processed by SLRAE achieve markedly higher classification accuracy and stability than data processed by the traditional autoencoder (AE), the sparse autoencoder (SAE), and the extreme learning machine (ELM).
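A hedged sketch of an SLRAE-style objective, assuming a reconstruction term, a KL sparsity penalty on mean hidden activations, and a cross-entropy label-constraint term; the exact forms and weightings in the paper may differ:

```python
import numpy as np

def slrae_style_loss(x, x_hat, h, logits, y_onehot,
                     rho=0.05, beta=3.0, lam=1.0):
    """Sketch of a sparse, label-constrained autoencoder objective:
    reconstruction error + KL sparsity penalty + supervised label term.
    The forms and weights here are illustrative assumptions."""
    recon = np.mean((x - x_hat) ** 2)
    rho_hat = np.clip(h.mean(axis=0), 1e-6, 1 - 1e-6)   # mean activation per hidden unit
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    ce = -np.mean(np.sum(y_onehot * np.log(p + 1e-12), axis=1))  # label constraint
    return recon + beta * kl + lam * ce

# Toy usage with assumed shapes
rng = np.random.default_rng(7)
x = rng.random((32, 10)); h = rng.random((32, 6))
logits = rng.normal(size=(32, 3))
y = np.eye(3)[rng.integers(0, 3, size=32)]
print(round(slrae_style_loss(x, x + 0.01, h, logits, y), 3))
```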
