Similar Documents
19 similar documents found (search time: 140 ms)
1.
王冬  赵同林 《数字社区&智能家居》2009,(11):8829-8830,8833
By learning from cutting-area survey data and actual production scaling records, mean DBH, mean tree height, residual stand density, and stand volume were chosen as input neurons. The factors affecting the learning efficiency and prediction accuracy of a BP network were analyzed, and the BP model for predicting merchantable timber outturn rate was optimized mainly with respect to the number of hidden-layer neurons, the number of training epochs, the hidden-layer activation function, and the number of training samples, yielding an artificial neural network model for predicting empirical stand outturn rates by timber assortment. This provides a new approach to compiling empirical assortment outturn tables.

2.
This paper introduces the principles of BP artificial neural networks and the Bayesian regularization algorithm, and discusses the construction of a Bayesian-regularized BP network model. By experimenting with different numbers of hidden-layer neurons, an optimal network structure for coal-and-gas outburst prediction was obtained: a single hidden layer containing only one neuron. Training and testing the network on standard outburst prediction indices showed that the predicted outburst risk levels agreed completely with the actual situations. Analysis of the weights connecting the input prediction indices to the output showed that the geological structure type of the coal seam has the greatest influence on outbursts. These results provide a useful reference for outburst prediction and prevention research and for improving prediction accuracy.

3.
To improve traffic-dispatch efficiency and relieve increasingly severe congestion, a scheme for predicting vehicle transit time with a BP neural network model is proposed. Based on the characteristics of the intersection, a three-layer BP (Back Propagation) network was built, with four neurons each in the input and output layers. Simulation of the collected traffic data in MATLAB fixed the hidden layer at nine neurons. Feasibility was verified on prediction samples: the model predicts the transit time of queued vehicles with errors within 10%, so it can serve as a basis for signal-timing schemes in traffic controllers and improve throughput.
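The hidden-layer size in studies like this one is typically chosen by trying candidates from an empirical rule and comparing simulation errors. A minimal sketch of one common rule of thumb, h = sqrt(n_in + n_out) + a with a an integer constant in [1, 10]; the function name and the rule's use here are illustrative, not taken from the paper:

```python
import math

def hidden_candidates(n_in, n_out, a_range=range(1, 11)):
    """Candidate hidden-layer sizes from the empirical rule
    h = sqrt(n_in + n_out) + a; each candidate is then compared
    by simulation error on held-out data."""
    base = round(math.sqrt(n_in + n_out))
    return sorted({base + a for a in a_range})

# 4 input and 4 output neurons, as in the traffic study;
# the candidate set covers 9, the value the study settled on
print(hidden_candidates(4, 4))
```

The rule only narrows the search range; the final choice still comes from comparing prediction error across candidates.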

4.
An improved method for determining a reasonable number of hidden nodes in BP neural networks   (Cited: 1; self-citations: 0; by others: 1)
Choosing a reasonable number of hidden-layer nodes is a key issue in constructing a BP neural network, with an important effect on its adaptability and learning rate. An improved method for determining the number of hidden nodes is proposed: based on the linear dependence and independence relations among the outputs of the hidden-layer neurons, redundant hidden nodes are pruned and the network size is reduced. A model was built with the machining parameters of a part's process plan as BP network inputs and the finished part dimensions as outputs; the training results on this model confirm the method's effectiveness.
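The pruning idea described above can be sketched as follows: collect each hidden neuron's output over the training set as a column vector and keep only columns that are linearly independent of those already kept. A minimal NumPy illustration with a synthetic activation matrix (all names and the tolerance are placeholders, not from the paper):

```python
import numpy as np

def prune_redundant_hidden(H, tol=1e-8):
    """H: (n_samples, n_hidden) matrix of hidden-neuron outputs.
    Return indices of a maximal linearly independent set of columns;
    the other hidden neurons are candidates for removal."""
    keep = []
    for j in range(H.shape[1]):
        cand = H[:, keep + [j]]
        # column j is kept only if it raises the rank
        if np.linalg.matrix_rank(cand, tol=tol) == len(keep) + 1:
            keep.append(j)
    return keep

# toy case: the third neuron's output is an exact linear
# combination of the first two, so it should be pruned
rng = np.random.default_rng(0)
H = rng.normal(size=(100, 2))
H = np.column_stack([H, 2 * H[:, 0] - H[:, 1]])
print(prune_redundant_hidden(H))  # -> [0, 1]
```

In practice outputs are only approximately dependent, so the rank tolerance acts as the pruning threshold.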

5.
For the problem of predicting the degree of alcoholysis in polyvinyl alcohol production, a neural network model was built. The factors influencing the degree of alcoholysis were studied; the configuration of the input, output, and hidden-layer neurons and the network training parameters were discussed; and three training algorithms, gradient-descent BP, momentum with adaptive learning rate, and Levenberg-Marquardt BP, were compared on this problem and against an RBF network. Weighing training time, training accuracy, and generalization ability, the momentum/adaptive-learning-rate algorithm proved the most suitable for alcoholysis-degree prediction, and the neural network model was built on it. Applied in an alcoholysis-degree prediction system, actual operation showed that predicting the degree of alcoholysis with this neural network model is feasible and effective.

6.
邢毓华  李凡菲 《计算机仿真》2021,38(8):97-102,166
Equipment repair time in photovoltaic charging stations has an important effect on operating efficiency. To improve the accuracy of repair-time prediction, and considering the influence of the hidden-layer neuron count on prediction accuracy in neural network algorithms, an improved GA-BP neural network algorithm is proposed and validated on a sample of 60 equipment repair times from a PV charging station. The results show that prediction accuracy is highest with five hidden-layer neurons, and that the improved GA-BP algorithm achieves a mean relative error of only 6.1%, which is 90.4% and 57% lower than a grey model and a plain BP network, respectively. The improved GA-BP network thus predicts far more accurately than either baseline, and the predicted repair times can support maintenance-crew scheduling.

7.
A quantum BP network model and an improved learning algorithm are proposed. Exploiting the universality of the single-qubit phase-shift gate and the two-qubit controlled-NOT gate, a quantum neuron is constructed and used to build the hidden layer, which is trained by gradient descent; the output layer uses conventional neurons trained by an improved gradient-descent method with momentum and an adaptive learning rate. Experiments with this model on two UCI data sets show better convergence speed and accuracy than a conventional BP network.

8.
When a traditional BP neural network is applied to predicting algal growth, long training times and low output accuracy are common. This paper shows that in a BP network with two hidden layers, for a fixed total number of neurons, a sensible allocation of the neurons between the two hidden layers reduces the number of training iterations while still meeting the required accuracy. An application example shows the method to be very effective for predicting algal growth.

9.
A BP network is a multilayer feedforward network with one-way signal propagation that can be used for optimal prediction. This paper describes how to build a BP network structure model, including how to determine the number of hidden layers and hidden-layer nodes, and constructs a three-layer 9-20-1 network. The network is used to predict quantitative changes in the coal oxidation rate; the predictions can be used to study the oxidation behavior of coal and offer guidance for assessing its spontaneous-combustion tendency.

10.
The fresh-cut-flower price index is a barometer of the fresh-cut-flower market, so studying its movements and grasping the market's dynamics and regularities is important. For this time-series index, a short-term prediction model is built on the L-M (Levenberg-Marquardt) optimization algorithm within the BP framework, using tansig and purelin as the transfer functions between layers. Time-series analysis determines the number of input-layer neurons, and the number of hidden-layer neurons is fixed by comparing experimental results. Prediction accuracy is checked with three metrics: mean absolute error, mean relative error, and root-mean-square error. The experiments show that the model is effective and of practical value.
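Two of the steps above, turning a time series into supervised input/target pairs (which fixes the input-neuron count as the lag window) and scoring with error metrics, can be sketched in NumPy. The function names and the toy index values are illustrative, not from the paper:

```python
import numpy as np

def lagged_design(series, n_lags):
    """Window a 1-D price-index series into (X, y) pairs: the n_lags
    previous values predict the next one, so the network needs
    n_lags input neurons."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = np.array(series[n_lags:])
    return X, y

def mae(y, yhat):
    """Mean absolute error."""
    return float(np.mean(np.abs(y - yhat)))

def rmse(y, yhat):
    """Root-mean-square error."""
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

idx = [100, 102, 101, 105, 107, 110]   # hypothetical index values
X, y = lagged_design(idx, 3)
print(X.shape, y.shape)  # (3, 3) (3,)
```

Mean relative error follows the same pattern with `np.abs((y - yhat) / y)`.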

11.
An intelligent system for sorting pistachio nut varieties   (Cited: 1; self-citations: 0; by others: 1)
An intelligent pistachio nut sorting system combining acoustic emission analysis, Principal Component Analysis (PCA), and a Multilayer Feedforward Neural Network (MFNN) classifier was developed and tested. To evaluate the system, 3200 pistachio nuts from four native Iranian varieties were used, each variety consisting of 400 split-shell and 400 closed-shell nuts. Randomly selected nuts slid down a chute inclined 60° above the horizontal, struck a steel plate, and the acoustic signal of each impact was recorded; the time-domain sound signals were saved for subsequent analysis. The method is based on feature generation by the Fast Fourier Transform (FFT), feature reduction by PCA, and classification by the MFNN. Features such as the amplitude, phase, and power spectrum of the sound signals were computed via a 1024-point FFT. PCA reduced the dimension of the feature vector by more than 98%. To find the optimal MFNN classifier, various topologies with different numbers of neurons in the hidden layer were designed and evaluated. The best MFNN had a 40-12-4 structure: a network with 40 input neurons, one hidden layer of 12 neurons, and 4 output neurons (the pistachio varieties). Selection of the optimal model was based on mean square error, correlation coefficient, and correct separation rate (CSR). The CSR, the total weighted average system accuracy, for the 40-12-4 structure was 97.5%; only 2.5% of nuts were misclassified.

12.
In this letter, we attempt to quantify the significance of increasing the number of neurons in the hidden layer of a feedforward neural network architecture using the singular value decomposition (SVD). Through this, we extend some well-known properties of the SVD to evaluating the generalizability of single-hidden-layer feedforward networks (SLFNs) with respect to the number of hidden-layer neurons. The generalization capability of the SLFN is measured by the degree of linear independence of the patterns in hidden-layer space, which can be quantified indirectly, in a post-learning step, from the singular values obtained from the SVD. A pruning/growing technique based on these singular values is then used to estimate the necessary number of neurons in the hidden layer. More importantly, we describe in detail properties of the SVD in determining the structure of a neural network, particularly with respect to the robustness of the selected model.
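The core of the SVD-based estimate can be sketched directly: take the hidden-layer output matrix after training and count the singular values that are significant relative to the largest one; that count is the effective number of independent hidden directions. A minimal NumPy sketch with a synthetic rank-deficient matrix (function name and tolerance are illustrative, not the letter's exact criterion):

```python
import numpy as np

def effective_hidden_count(H, rel_tol=1e-6):
    """H: (n_samples, n_hidden) hidden-layer output matrix, post-learning.
    Count singular values significant relative to the largest; hidden
    neurons beyond this count contribute (near-)dependent patterns."""
    s = np.linalg.svd(H, compute_uv=False)   # sorted descending
    return int(np.sum(s > rel_tol * s[0]))

# 10 hidden neurons, but only 3 independent directions by construction
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 3))
H = A @ rng.normal(size=(3, 10))
print(effective_hidden_count(H))  # -> 3
```

A pruning/growing loop would then remove neurons (or add them) until the network's hidden-layer size matches this effective count.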

13.
The multilayer perceptron is widely used in classification problems. For multilayer perceptrons built from hyperplane threshold neurons, this paper derives an analytic expression for the maximum number of regions into which the input-layer neurons can partition the input space; this index largely characterizes the classification ability of the input layer. The paper also discusses the constraint relation between the number of hidden-layer neurons and the number of input-layer neurons, yielding a more accurate upper bound on the number of hidden-layer neurons. When the dimension of the classification space is much smaller than the number of input-layer neurons, the upper bound obtained here is smaller than existing results.

14.
A sequential orthogonal approach to building and training a single-hidden-layer neural network is presented in this paper. The Sequential Learning Neural Network (SLNN) model proposed by Zhang and Morris [1] is used to tackle the common problem, encountered with the conventional Feed Forward Neural Network (FFNN), of determining the network structure: the number of hidden layers and the number of hidden neurons in each layer. The procedure starts with a single hidden neuron and sequentially adds hidden neurons until the model error is sufficiently small. The classical Gram-Schmidt orthogonalization method is used at each step to form an orthogonal basis for the space spanned by the output vectors of the hidden neurons. This approach makes it possible to determine the necessary number of hidden neurons; for the problems investigated in this paper, however, a single hidden neuron suffices to achieve the desired accuracy. The neural network architecture was trained and tested on two practical civil engineering problems: soil classification, and the prediction of strength and workability of high-performance concrete.
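The Gram-Schmidt step at the heart of the sequential procedure can be sketched as follows: each candidate hidden neuron's output vector is orthogonalized against the basis built so far, and a neuron whose output adds (almost) no new direction is not worth adding. A minimal NumPy illustration on toy vectors (the function name and tolerance are illustrative, not from the paper):

```python
import numpy as np

def gram_schmidt_step(basis, v, tol=1e-10):
    """Orthogonalize candidate hidden-neuron output v against the
    current orthonormal basis; return the new unit basis vector,
    or None if v is (almost) in the span of the basis."""
    r = np.asarray(v, dtype=float).copy()
    for b in basis:
        r -= (r @ b) * b          # remove component along b
    n = np.linalg.norm(r)
    return None if n < tol else r / n

basis = []
candidates = [np.array([1.0, 0.0, 0.0]),
              np.array([1.0, 1.0, 0.0]),
              np.array([2.0, 2.0, 0.0])]   # redundant: in span of first two
for v in candidates:
    q = gram_schmidt_step(basis, v)
    if q is not None:
        basis.append(q)
print(len(basis))  # -> 2: the third candidate was rejected
```

In the full procedure the loop would stop once the residual model error, projected out of the current basis, falls below a target.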

15.
In this paper, methodologies are proposed to estimate the number of neurons to be placed in the hidden layer of artificial neural networks (ANNs), and new criteria are developed for fixing the hidden-neuron count in multilayer perceptron neural networks. With the number of hidden neurons so computed, the developed neural network model is applied to wind speed forecasting. Random selection of hidden neurons in an ANN model risks overfitting or underfitting, and this paper addresses that risk. The contribution lies in developing 151 different criteria, which are tested for validity using various statistical error measures. Simulation results show that the proposed methodology minimizes computational error and improves prediction accuracy. A convergence theorem is applied to the developed criterion to validate its applicability for fixing the number of hidden neurons. To evaluate the effectiveness of the proposed approach, simulations were carried out on collected real-time wind data; the results confirm that the approach can be used for wind speed forecasting with minimal error. A comparative analysis of hidden-neuron estimation methods for multilayer perceptron networks was also performed. The presented approach is compact, improves accuracy with reduced error, and converges faster.

16.
A direct method for estimating the number of hidden-layer neurons in feedforward neural networks   (Cited: 19; self-citations: 0; by others: 19)
李玉鉴 《计算机学报》1999,22(11):1204-1208
There is as yet no effective method for directly estimating the number of hidden-layer neurons in a feedforward network. This paper first proposes a method, based on a monotone index, for directly estimating the number of hidden neurons in a three-layer feedforward network so that the network is guaranteed to approximate any given training data. Theoretical analysis and computational experiments show that the method can determine, before training, an optimal (minimal) or near-optimal number of hidden neurons, so that after training the network both reflects the trends in the training data well and achieves satisfactory approximation accuracy.

17.
Binary neural networks (BNNs) are valuable in many application areas. They adopt linearly separable structures that are simple and easy to implement in hardware. For a BNN with a single hidden layer, the problem of determining an upper bound on the number of hidden neurons has not been fully solved. This paper defines a special structure in Boolean space called most isolated samples (MIS). We prove that at least 2^(n-1) hidden neurons are needed to express the MIS logical relationship in Boolean space if the hidden neurons of a BNN and its output neuron form an AND/OR logic structure, and we point out that the n-bit parity problem is exactly equivalent to the MIS structure. Furthermore, by proposing the new concept of a restraining neuron and using it in the hidden layer, we can reduce the number of hidden neurons to n. This result explains the important role of restraining neurons in some cases. Finally, on the basis of the Hamming sphere and the SP function, both the restraining neuron and the n-bit parity problem are given a clear logical meaning and can be described by a series of logical expressions.
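The 2^(n-1) bound for AND/OR structures has a concrete combinatorial reading: in sum-of-products form, n-bit parity has one AND term (one hidden neuron) per odd-parity input, and exactly half of the 2^n inputs have odd parity. A small sketch enumerating them (the function name is illustrative):

```python
from itertools import product

def parity_minterms(n):
    """All n-bit inputs with odd parity. In AND/OR (sum-of-products)
    form each such input needs its own AND term, which is where the
    2^(n-1) hidden-neuron lower bound comes from."""
    return [x for x in product([0, 1], repeat=n) if sum(x) % 2 == 1]

print(len(parity_minterms(4)))  # -> 8, i.e. 2**(4 - 1)
```

The paper's restraining-neuron construction escapes this bound precisely by moving beyond plain AND/OR logic in the hidden layer.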

18.
The central problem in training a radial basis function neural network (RBFNN) is the selection of hidden-layer neurons, which includes the selection of their centers and widths. In this paper, we propose an enhanced swarm intelligence clustering (ESIC) method to select hidden-layer neurons, and then train a cosine RBFNN by a gradient-descent learning process. We also apply this new method to the classification of deep Web sources. Experimental results show that the average Precision, Recall, and F-measure of our ESIC-based RBFNN classifier are higher than those of BP, Support Vector Machines (SVM), and OLS RBF on our deep Web source classification problems.



Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号