Similar Documents
20 similar documents found (search time: 546 ms)
1.
Artificial neural networks have been used to support applications across a variety of business and scientific disciplines in recent years. Artificial neural network applications are frequently viewed as black boxes which mystically determine complex patterns in data. Contrary to this popular view, neural network designers typically perform extensive knowledge engineering and incorporate a significant amount of domain knowledge into artificial neural networks. This paper details heuristics that utilize domain knowledge to produce an artificial neural network with optimal output performance. The effect of using the heuristics on neural network performance is illustrated by examining several applied artificial neural network systems. Identification of an optimal-performance artificial neural network requires a full factorial design with respect to the number of input nodes, hidden nodes, hidden layers, and the learning algorithm. The heuristic methods discussed in this paper produce optimal or near-optimal performance artificial neural networks using only a fraction of the time needed for a full factorial design.
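As a rough illustration of the full factorial search that the abstract contrasts with its heuristics, the sketch below enumerates every combination of the four design factors. The factor levels and the train_and_score stand-in are hypothetical, not taken from the paper.

```python
import random
from itertools import product

# Hypothetical factor levels; the paper does not give concrete values.
input_nodes   = [4, 8, 16]
hidden_nodes  = [2, 4, 8, 16]
hidden_layers = [1, 2]
algorithms    = ["backprop", "rprop", "quickprop"]

def train_and_score(n_in, n_hid, n_layers, algo):
    """Stand-in for one full training run; replace with real training and validation."""
    return random.random()

random.seed(0)
best = max(product(input_nodes, hidden_nodes, hidden_layers, algorithms),
           key=lambda design: train_and_score(*design))
print("cells evaluated:", len(input_nodes) * len(hidden_nodes) * len(hidden_layers) * len(algorithms))
print("best design:", best)
```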

2.
The learning accuracy of existing extreme learning machines (ELM) is strongly affected by the number of hidden nodes. Both the single-hidden-layer ELM and existing multi-hidden-layer neural networks first fix the number of hidden layers and then raise accuracy by adding neurons to each layer. When the training set is large, however, many hidden nodes are usually required, which makes computing the generalized (Moore-Penrose) inverse matrix expensive and hampers learning efficiency. This paper proposes MHL-ELM (Extreme Learning Machine with Incremental Hidden Layers), in which hidden layers are added one at a time. The idea is to first assign random weights to the current hidden layer (whose neuron count is small and not tuned, so the cost is low) and compute the approximation error in the ELM manner; if the error does not meet the requirement, another hidden layer is added and the current hidden layer is then optimized with the ELM idea. Hidden layers are added gradually until the error tolerance is satisfied. The overall complexity of MHL-ELM is $\sum_{l=1}^{M}O(N_l^3)$. Experiments on 10 real data sets from UCI and KEEL, compared with traditional methods such as BP and OP-ELM, show that MHL-ELM generalizes better and improves substantially in both learning accuracy and learning speed.
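A minimal numpy sketch of the layer-by-layer idea, under the assumption that each newly added hidden layer is stacked on the previous one with random weights and that the output weights are re-solved with the Moore-Penrose pseudoinverse at every step; this is one plausible reading of MHL-ELM, not the authors' exact update.

```python
import numpy as np

def mhl_elm(X, T, n_hidden=20, tol=1e-4, max_layers=10, seed=0):
    """Add small random hidden layers until the ELM least-squares error meets tol."""
    rng = np.random.default_rng(seed)
    H, layers, beta = X, [], None
    for _ in range(max_layers):
        W = rng.uniform(-1, 1, (H.shape[1], n_hidden))
        b = rng.uniform(-1, 1, n_hidden)
        H = np.tanh(H @ W + b)              # new hidden layer stacked on the last one
        beta = np.linalg.pinv(H) @ T        # ELM-style output weights for the current depth
        layers.append((W, b))
        if np.mean((H @ beta - T) ** 2) < tol:
            break
    return layers, beta

# Toy usage: approximate a 1-D function.
X = np.linspace(-1, 1, 200).reshape(-1, 1)
T = np.sin(3 * X)
layers, beta = mhl_elm(X, T)
```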

3.
Back-propagation neural networks that represent specific process parameters in a composite board manufacturing process were analyzed to determine their sensitivity to network design and to the values of the learning parameters used in the back-propagation algorithm. The effects of the number of hidden layers, the number of nodes in a hidden layer, and the values of the learning rate and momentum factor were studied. Three network modification strategies were applied to evaluate their effect on the predictive capability of the network: the convergence criteria were tightened, the number of hidden nodes was increased, and the number of hidden layers was increased. These modifications did not improve the predictive capability of the composite board networks.
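For reference, the standard back-propagation weight update with the two learning parameters the study varies (learning rate and momentum factor) can be sketched as follows; the loss and data here are illustrative, not the composite-board model.

```python
import numpy as np

def bp_step(w, grad, velocity, lr=0.1, momentum=0.9):
    """One back-propagation update: delta_w = momentum * previous delta_w - lr * gradient."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# Illustrative use on the quadratic loss 0.5 * ||w||^2, whose gradient is simply w.
w = np.array([1.0, -2.0])
v = np.zeros_like(w)
for _ in range(200):
    w, v = bp_step(w, grad=w, lr=0.1, momentum=0.9)
print(w)   # converges toward the minimum at the origin
```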

4.
To address the difficulty of determining the hidden-layer structure of radial basis function (RBF) networks, an RBF structure design algorithm is proposed that exploits the good online classification behaviour of the adaptive resonance theory (ART) network. The algorithm applies the clustering property of the ART network to RBF structure design: input vectors are classified by comparing their similarity with the stored patterns, which determines the number of hidden nodes and their initial parameters and yields a compact network structure. Simulation results on the approximation of typical nonlinear functions show that the proposed structure learns quickly and approximates well.
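A hedged sketch of the ART-like idea: each input is compared with the stored prototypes and a new hidden node (center) is created only when no prototype is similar enough. The vigilance threshold and the prototype-update rule below are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def art_like_centers(X, vigilance=0.5):
    """Return RBF centers grown online: a new center is added when the nearest prototype is too far."""
    centers, counts = [], []
    for x in X:
        if centers:
            d = np.linalg.norm(np.array(centers) - x, axis=1)
            j = int(np.argmin(d))
            if d[j] <= vigilance:               # resonance: update the matching prototype
                counts[j] += 1
                centers[j] += (x - centers[j]) / counts[j]
                continue
        centers.append(x.astype(float).copy())  # mismatch: recruit a new hidden node
        counts.append(1)
    return np.array(centers)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
print(art_like_centers(X, vigilance=1.0).shape)
```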

5.
This paper studies how to train a new feed-forward neural network, the radial basis perceptron (RBP) neural network, for distinguishing different sets in R^L. The RBP neural network is based on the radial basis function (RBF) neural network and the perceptron neural network. It has two hidden layers in which the nodes are not fully connected but use selective connection. A training algorithm corresponding to the structure of the RBP network is presented. It adopts the input-output clustering (IOC) method to provide an efficient and powerful procedure for constructing an RBP network that generalizes very well. First, during the learning procedure, the RBP neural network adopts the IOC method to define the number of units of the hidden layers and to select centers. Second, the width parameter σ of the centers is self-adjustable according to the information contained in the learning samples. The effectiveness of this network is illustrated using an example taken from applications for component analysis of civil building materials. Simulation shows that the RBP neural network can be used to predict the components of civil building materials successfully and achieves good generalization ability.
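The abstract states that the width σ of each center is self-adjusted from the information in the learning samples; one common heuristic of that kind, assumed here purely for illustration, sets each width from the distances to the center's nearest neighbors.

```python
import numpy as np

def nearest_neighbor_widths(centers, k=2):
    """Set sigma_j to the mean distance from center j to its k nearest other centers."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # ignore the zero self-distance
    nearest = np.sort(d, axis=1)[:, :k]
    return nearest.mean(axis=1)

centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 3.0]])
print(nearest_neighbor_widths(centers))
```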

6.
Prediction of the cohesive energy of metal crystals with a Bayesian regularized neural network (total citations: 8, self-citations: 3, citations by others: 8)
A Bayesian regularized neural network (BRNN) was used to predict the cohesive energies of 61 metal crystals. The network structure, the training and prediction sets, and the number of training epochs were optimized, and the BRNN was checked against independent prediction samples. The results show that, in terms of generalization ability, the BRNN outperforms the well-known back-propagation (BP) neural network and multiple linear regression (MLR), and it promises to become a useful auxiliary tool for studying structure-property relationships of elements and compounds.

7.
A new structure-adaptive radial basis function (RBF) neural network model is proposed. In this model, a self-organizing map (SOM) network serves as the clustering network: it classifies the input samples with an unsupervised learning algorithm and passes the cluster centers and their associated weight vectors to the RBF network, where they are used respectively as the radial basis function centers and the corresponding weight vectors. The RBF network serves as the base network: Gaussian functions realize the nonlinear mapping from the input layer to the hidden layer, and the output-layer weights are trained with a supervised learning algorithm, thereby realizing the nonlinear mapping from the input layer to the output layer. Simulations on a letter data set show that the network performs well.
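A minimal sketch of the two-stage pipeline described above, with K-means standing in for the SOM clustering stage and a linear least-squares fit standing in for the supervised output-layer training; both substitutions are simplifying assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def rbf_from_clusters(X, Y, n_centers=10, sigma=1.0, seed=0):
    """Stage 1: cluster inputs to get centers. Stage 2: fit output weights by least squares."""
    centers = KMeans(n_clusters=n_centers, n_init=10, random_state=seed).fit(X).cluster_centers_
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    H = np.exp(-d2 / (2 * sigma ** 2))          # Gaussian hidden-layer activations
    W, *_ = np.linalg.lstsq(H, Y, rcond=None)   # supervised output-layer weights
    return centers, W

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 2))
Y = np.sin(np.pi * X[:, :1]) * np.cos(np.pi * X[:, 1:])
centers, W = rbf_from_clusters(X, Y, n_centers=20)
```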

8.
In this paper, we introduce a new learning method for composite function wavelet neural networks (CFWNN) by combining the differential evolution (DE) algorithm with the extreme learning machine (ELM), in short, CWN-E-ELM. The recently proposed CFWNN trained with ELM (CFWNN-ELM) has several promising features, but it may retain redundant nodes, because the number of hidden nodes is assigned a priori and the input weight matrix and the hidden-node parameter vector are randomly generated once and never changed during the learning phase. DE is introduced into CFWNN-ELM to search for the optimal network parameters and to reduce the number of hidden nodes used in the network. Simulations on several artificial function approximations, real-world data regressions and a chaotic signal prediction problem show the advantages of the proposed CWN-E-ELM: compared with CFWNN-ELM, it has a much more compact network size, and compared with several relevant methods, it achieves better generalization performance.

9.
Binary neural networks can represent any Boolean function exactly, but parity-like problems with many isolated vertices are hard to realize with a compact network structure. To address this, a learning algorithm for binary neural networks is proposed for the class of Boolean functions, such as parity check, that contain many isolated vertices. The algorithm first uses an ant colony optimization algorithm to choose a good visiting order of the true and false vertices; it then combines this order with a geometric learning algorithm and gives the steps for expanding the separating hyperplanes according to the optimized visiting order, thereby reducing the number of hidden neurons, and the expressions of the hidden neurons and the output units are given. Finally, typical examples verify the effectiveness of the algorithm.

10.
An RBF neural network algorithm optimized by GEP (total citations: 1, self-citations: 0, citations by others: 1)
As an artificial neural network that performs function mapping through local tuning, the RBF network performs well in approximation ability, classification ability and learning speed. However, because the number of hidden nodes and the centers of the hidden nodes are difficult to determine, the accuracy of the whole network suffers, which greatly restricts its wide application. This paper therefore proposes an RBF neural network algorithm optimized by gene expression programming (GEP), in which the center vectors and the connection weights are optimized. Experiments show that the proposed algorithm reduces the prediction error by 48.96% on average compared with the standard RBF algorithm.

11.
R. Setiono, Neural Computation, 2001, 13(12): 2865-2877
This article presents an algorithm that constructs feedforward neural networks with a single hidden layer for pattern classification. The algorithm starts with a small number of hidden units in the network and adds more hidden units as needed to improve the network's predictive accuracy. To determine when to stop adding new hidden units, the algorithm makes use of a subset of the available training samples for cross validation. New hidden units are added to the network only if they improve the classification accuracy of the network on the training samples and on the cross-validation samples. Extensive experimental results show that the algorithm is effective in obtaining networks with predictive accuracy rates that are better than those obtained by state-of-the-art decision tree methods.
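A rough sketch in the spirit of the constructive loop described above, using scikit-learn's MLPClassifier as a stand-in for the network and a single held-out split as the cross-validation set; the growth step and stopping rule are simplified relative to the article.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_cv, y_tr, y_cv = train_test_split(X, y, test_size=0.25, random_state=0)

best_cv, n_hidden = 0.0, 1
while True:
    net = MLPClassifier(hidden_layer_sizes=(n_hidden,), max_iter=2000, random_state=0)
    net.fit(X_tr, y_tr)
    cv_acc = net.score(X_cv, y_cv)
    if cv_acc <= best_cv:        # stop adding units once cross-validation accuracy stops improving
        break
    best_cv, n_hidden = cv_acc, n_hidden + 1
print("selected hidden units:", n_hidden - 1, "cv accuracy:", best_cv)
```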

12.
A radial basis function (RBF) neural network was designed for time series forecasting using both an adaptive learning algorithm and response surface methodology (RSM). To improve the traditional RBF network's forecasting capability, the generalized delta rule learning method was employed to modify the radius of the kernel function. RSM was then utilized to explore the mean square error response surface so that an appropriate combination of network parameters, such as the number of hidden nodes and the initial learning rates, could be found. Extensive studies were performed on the effect of the initial values of the connection weights on the accuracy of the backpropagation learning method employed in training the RBF artificial neural network. The effectiveness of the neural network with the proposed radius-modification technique and the RSM method was demonstrated with an example of forecasting intensity pulsations of a laser. It was found that, by utilizing the proposed techniques, the neural network provided a more accurate prediction of the response.

13.
《遥感信息》 (Remote Sensing Information), 2009, 28(1): 71-76
The number of hidden layers and of hidden-layer nodes directly determines the learning ability of a BP network, but there is still no practical theory for choosing the number of hidden nodes, which is usually set by experience or by trial and error. This paper proposes a piecewise adaptive algorithm for determining the number of hidden nodes: it adjusts the hidden node count according to the relative error of the network output, and through iteration it gradually reduces the relative output error while approaching the possibly optimal number of hidden nodes. This optimal number is usually the node count at which the relative output error begins to oscillate; a network built with this node count achieves a good balance between output accuracy and computational cost.
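A simplified sketch of a piecewise search of this kind: the hidden-node count grows in coarse steps and the step is halved once the relative output error stops improving, approximating the point where the error begins to oscillate. The relative_error function is a toy placeholder for a full BP training-and-evaluation run, and the step schedule is an assumption, not the paper's exact procedure.

```python
def relative_error(n_hidden):
    """Placeholder for training a BP net with n_hidden nodes and returning its relative error."""
    return abs(12 - n_hidden) / 12 + 0.01 * (n_hidden % 3)   # toy curve with a minimum near 12

def piecewise_hidden_node_search(n_start=2, n_max=64, coarse=8):
    n, step, best = n_start, coarse, (n_start, relative_error(n_start))
    while step >= 1 and n + step <= n_max:
        err = relative_error(n + step)
        if err < best[1]:
            n += step                      # keep growing while the error still falls
            best = (n, err)
        else:
            step //= 2                     # error started to rise/oscillate: refine the step
    return best

print(piecewise_hidden_node_search())      # (node count, relative error) at the chosen size
```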

14.
This study presents an experimental evaluation of neural networks for nonlinear time-series forecasting. The effects of three main factors (input nodes, hidden nodes and sample size) are examined through a simulated computer experiment. Results show that neural networks are valuable tools for modeling and forecasting nonlinear time series, while traditional linear methods are not as competent for this task. The number of input nodes is much more important than the number of hidden nodes in neural network model building for forecasting. Moreover, a large sample helps to ease the overfitting problem. Scope and purpose: Interest in using artificial neural networks for forecasting has led to a tremendous surge in research activities in the past decade. Yet, mixed results are often reported in the literature and the effect of key modeling factors on performance has not been thoroughly examined. The lack of systematic approaches to neural network model building is probably the primary cause of inconsistencies in reported findings. In this paper, we present a systematic investigation of the application of neural networks for nonlinear time-series analysis and forecasting. The purpose is to have a detailed examination of the effects of certain important neural network modeling factors on nonlinear time-series modeling and forecasting.

15.
An improved method for determining a reasonable number of hidden nodes in BP neural networks (total citations: 1, self-citations: 0, citations by others: 1)
Choosing a reasonable number of hidden-layer nodes is a key issue in constructing a BP neural network and has an important influence on the network's adaptability and learning rate. An improved method for determining the number of hidden nodes is proposed. Based on the linear dependence and linear independence relations among the outputs of the hidden-layer neurons, the method prunes hidden nodes and reduces the network size. A model is built with the machining parameters of a part's process plan as the BP network inputs and the finished part dimensions as the outputs; applying the method to this network model, the training results demonstrate its effectiveness.
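One way to act on the linear-dependence idea is to examine the matrix of hidden-layer outputs over the training set and drop columns that are (nearly) linear combinations of the others, for example with a pivoted QR decomposition; the sketch below illustrates that idea and is not the paper's exact criterion.

```python
import numpy as np
from scipy.linalg import qr

def redundant_hidden_nodes(H, tol=1e-6):
    """H: (n_samples, n_hidden) hidden outputs. Return indices of nearly dependent columns."""
    q, r, piv = qr(H, mode="economic", pivoting=True)
    diag = np.abs(np.diag(r))
    rank = int((diag > tol * diag[0]).sum())     # effective numerical rank
    return sorted(piv[rank:])                    # columns adding (almost) no new direction

rng = np.random.default_rng(0)
H = rng.normal(size=(100, 5))
H[:, 3] = 2 * H[:, 0] - H[:, 1]                  # make node 3 a linear combination of 0 and 1
print(redundant_hidden_nodes(H))                 # flags one of the mutually dependent columns (0, 1 or 3)
```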

16.
An adaptive radial basis function neural network and its application (total citations: 7, self-citations: 2, citations by others: 5)
An adaptive RBF neural network based on the hard C-means algorithm is proposed. According to the change of the training error, the algorithm adaptively adjusts the learning step size when updating the hidden-to-output weights; the usual method for computing the basis-function widths is improved; and dead nodes produced by the hard C-means algorithm are deleted automatically during the run. The improved adaptive RBF network is used to predict the hydrogen-to-nitrogen ratio of an ammonia synthesis plant; the computation error is small, convergence is fast and the results are satisfactory, showing that the network has good performance.

17.
A recurrent self-organizing neural fuzzy inference network (total citations: 15, self-citations: 0, citations by others: 15)
A recurrent self-organizing neural fuzzy inference network (RSONFIN) is proposed. The RSONFIN is inherently a recurrent multilayered connectionist network for realizing the basic elements and functions of dynamic fuzzy inference, and may be considered to be constructed from a series of dynamic fuzzy rules. The temporal relations embedded in the network are built by adding some feedback connections representing the memory elements to a feedforward neural fuzzy network. Each weight as well as node in the RSONFIN has its own meaning and represents a special element in a fuzzy rule. There are no hidden nodes initially in the RSONFIN. They are created online via concurrent structure identification and parameter identification. The structure learning together with the parameter learning forms a fast learning algorithm for building a small, yet powerful, dynamic neural fuzzy network. Two major characteristics of the RSONFIN can thus be seen: 1) the recurrent property of the RSONFIN makes it suitable for dealing with temporal problems and 2) no predetermination, like the number of hidden nodes, must be given, since the RSONFIN can find its optimal structure and parameters automatically and quickly. Moreover, to reduce the number of fuzzy rules generated, a flexible input partition method, the aligned clustering-based algorithm, is proposed. Various simulations on temporal problems are done and performance comparisons with some existing recurrent networks are also made. Efficiency of the RSONFIN is verified from these results.

18.
A new structure-adaptive radial basis function (RBF) neural network model is proposed. In this network, a self-organizing map (SOM) network serves as the clustering network: it classifies the input samples with an unsupervised learning algorithm and passes the cluster centers and their associated weight vectors to the RBF network, where they are used as the radial basis function centers and the corresponding weight vectors. The RBF network serves as the base network: Gaussian functions realize the nonlinear mapping from the input layer to the hidden layer, and the output-layer weights are trained with a supervised learning algorithm, thereby realizing the nonlinear mapping from the input layer to the output layer. Simulations on a letter data set show that the network performs well.

19.
Compared with radial basis function (RBF) neural networks, the extreme learning machine (ELM) trains faster and generalizes better; moreover, the affinity propagation (AP) clustering algorithm can determine the number of clusters automatically. This paper therefore proposes a multi-label learning model, ML-AP-RBF-RELM, that combines AP clustering, multi-label RBF (ML-RBF) and regularized ELM (RELM). First, the input layer is mapped with ML-RBF, where the AP clustering algorithm automatically determines the number of clusters for each label class and hence the number of hidden nodes. Then, using the number of clusters for each label, K-means clustering determines the centers of the RBF functions of the hidden nodes. Finally, the hidden-to-output connection weights are solved quickly with RELM. Experiments show that ML-AP-RBF-RELM performs well.

20.
A neural network model based on orthogonal polynomial bases (total citations: 1, self-citations: 0, citations by others: 1)
This paper uses a set of orthogonal polynomials as the activation functions of the neurons to construct an orthogonal-polynomial-basis neural network. The topology of the network and the corresponding orthogonal polynomial basis are determined during the learning process, and the network weights are obtained by a least-squares algorithm, which avoids the problem of local minima. Simulation results show that the proposed method is feasible and effective.
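A compact sketch of the idea, assuming Chebyshev polynomials as the orthogonal set and a single-input, single-output network with a fixed degree; in the paper the topology and basis are determined during learning.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def fit_chebyshev_network(x, y, degree=8):
    """Hidden 'neurons' are Chebyshev polynomials T_0..T_degree; weights come from least squares."""
    Phi = C.chebvander(x, degree)                # (n_samples, degree + 1) design matrix
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # global least-squares solution, no local minima
    return w

x = np.linspace(-1, 1, 200)
y = np.sin(2 * np.pi * x) + 0.05 * np.random.default_rng(0).normal(size=x.shape)
w = fit_chebyshev_network(x, y)
print(np.abs(C.chebval(x, w) - y).mean())        # mean absolute training error
```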
