Similar Articles
20 similar articles found (search time: 234 ms)
1.
Based on the characteristics of RBF neural networks, a method for dynamically adjusting the number of hidden-layer nodes is proposed. It consists of two parts: first, the number of hidden nodes is adjusted according to the mean squared error of the network output and its rate of change; then the center values of the hidden nodes are tuned and optimized, and the output-layer weights are computed by the generalized-inverse (pseudo-inverse) method. The resulting network uses the minimum number of hidden nodes and trains faster. A mathematical model for integrated strip shape and gauge control was constructed, and control simulations with the dynamic RBF network, using the new model-processing approach, produced satisfactory results.
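
As a rough Python illustration of the weight-determination step described above, the sketch below fits the output layer of a Gaussian RBF network with the Moore-Penrose pseudo-inverse and keeps adding hidden nodes while the training MSE is still dropping; the Gaussian basis, the naive center placement, and all function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rbf_hidden(X, centers, width):
    """Gaussian RBF activations: one column per hidden node."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def rbf_output_weights(X, Y, centers, width):
    """Output-layer weights via the generalized (Moore-Penrose) inverse."""
    H = rbf_hidden(X, centers, width)          # n_samples x n_hidden
    return np.linalg.pinv(H) @ Y               # n_hidden x n_outputs

def grow_rbf(X, Y, max_nodes=20, width=1.0, tol=1e-4):
    """Add hidden nodes while the MSE still drops faster than `tol` per node."""
    prev_mse = np.inf
    for k in range(1, max_nodes + 1):
        centers = X[np.linspace(0, len(X) - 1, k, dtype=int)]   # naive center choice
        W = rbf_output_weights(X, Y, centers, width)
        mse = np.mean((rbf_hidden(X, centers, width) @ W - Y) ** 2)
        if prev_mse - mse < tol:               # error change is small -> stop growing
            break
        prev_mse = mse
    return centers, W

# toy usage: approximate y = sin(x) on [-3, 3]
X = np.linspace(-3, 3, 200).reshape(-1, 1)
Y = np.sin(X)
centers, W = grow_rbf(X, Y)
```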

2.
A direct weight-determination method for Chebyshev orthogonal-basis neural networks   Cited by: 2 (self-citations: 0, others: 2)
The classical BP learning algorithm for neural networks is based on error back-propagation. For certain network models, however, the pseudo-inverse can determine the weights directly, avoiding the repeated iterative corrections used in the past. Based on polynomial interpolation and approximation theory, a Chebyshev orthogonal-basis neural network is constructed; the model uses a three-layer structure with a set of Chebyshev orthogonal polynomials as the activation functions of the hidden neurons. Following the back-propagation (BP) idea, an iterative weight-update formula can be derived for this model, and iterative training with this formula yields the optimal weights. Departing from that classical approach, a pseudo-inverse-based direct weight-determination method is proposed for the Chebyshev orthogonal-basis network, which avoids the lengthy training process in which the weights are obtained only after repeated iterations. Simulation results show that the method computes faster with at least the same accuracy, confirming its advantage.
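
A minimal sketch of the pseudo-inverse-based direct weight determination, assuming a single input scaled to [-1, 1] and Chebyshev polynomials of the first kind as the hidden activations; it illustrates the one-shot least-squares step rather than the authors' exact model.

```python
import numpy as np

def chebyshev_hidden(x, n_basis):
    """Hidden-layer matrix whose columns are T_0(x), T_1(x), ..., built from the
    recurrence T_0 = 1, T_1 = x, T_{k+1} = 2x*T_k - T_{k-1} (x scaled to [-1, 1])."""
    H = np.empty((len(x), n_basis))
    H[:, 0] = 1.0
    if n_basis > 1:
        H[:, 1] = x
    for k in range(2, n_basis):
        H[:, k] = 2.0 * x * H[:, k - 1] - H[:, k - 2]
    return H

# direct weight determination: w = pinv(H) @ y instead of BP iteration
x = np.linspace(-1, 1, 400)
y = np.exp(x) * np.cos(3 * x)            # target function to approximate
H = chebyshev_hidden(x, n_basis=10)
w = np.linalg.pinv(H) @ y                # one-shot least-squares solution
approx_error = np.max(np.abs(H @ w - y))
```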

3.
吕芳 《自动化技术与应用》2021,40(3):113-117,123
To address the low accuracy of cost estimation for high-rise buildings, a BP-neural-network-based cost estimation model for high-rise construction projects is proposed. Based on the overall investment composition of a construction project, cost indices are classified and an overall cost correction coefficient is determined; grey relational analysis is then used to build the evaluation index system. From the structure of the BP network, the network error is computed, the error signals of the output and hidden layers are defined via gradient descent, and the weight-update formulas are obtained. Finally, an adaptive learning-rate formula is used to set the network parameters, the key parameters of the construction project are fed into the input layer, and the final cost estimation model is established. Simulation experiments show that the proposed method can estimate the best cost scheme for high-rise construction quickly and accurately from relatively little information and has strong nonlinear information-processing capability.

4.
To address the problem of designing the network structure of an extreme learning machine (ELM), a sensitivity-analysis-based ELM pruning algorithm is proposed. Using the output of each hidden node and its corresponding output-layer weight vector, the sensitivity of the learning residual to the hidden node and a network-size fitness measure are defined. The importance of each hidden node is judged by its sensitivity, the number of hidden nodes is determined by the network-size fitness, and nodes of low importance are removed. Simulation results show that the algorithm can fairly accurately determine a network size matched to the training samples, solving the ELM structure-design problem.
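
The sketch below illustrates the general pruning idea on a basic ELM; the sensitivity measure used here (the norm of a node's activations times the norm of its output weights) and the fixed number of nodes kept are simplifying assumptions, not the paper's exact definitions of sensitivity and network-size fitness.

```python
import numpy as np

def elm_train(X, Y, n_hidden, rng):
    """Plain ELM: random input weights and biases, output weights by pseudo-inverse."""
    W_in = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W_in + b)
    beta = np.linalg.pinv(H) @ Y
    return W_in, b, beta

def prune_by_sensitivity(X, Y, W_in, b, beta, keep):
    """Rank hidden nodes by a simple contribution proxy and keep the `keep` best.
    Proxy (an assumption): ||h_j|| * ||beta_j||, the size of node j's output term."""
    H = np.tanh(X @ W_in + b)
    score = np.linalg.norm(H, axis=0) * np.linalg.norm(beta, axis=1)
    keep_idx = np.argsort(score)[::-1][:keep]              # most important nodes
    W_in, b = W_in[:, keep_idx], b[keep_idx]
    beta = np.linalg.pinv(np.tanh(X @ W_in + b)) @ Y       # re-solve after pruning
    return W_in, b, beta

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
Y = np.sin(X.sum(axis=1, keepdims=True))
W_in, b, beta = elm_train(X, Y, n_hidden=80, rng=rng)
W_in, b, beta = prune_by_sensitivity(X, Y, W_in, b, beta, keep=20)
```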

5.
An RBF network design method using center clustering and PSO   Cited by: 3 (self-citations: 0, others: 3)
Based on center clustering and particle swarm optimization (PSO), a design algorithm for radial basis function (RBF) networks is proposed. The algorithm clusters the input samples with a center-clustering method to adaptively determine the initial parameters of the RBF hidden layer; a classical PSO with a modified global-best computation then optimizes the hidden-layer parameters, further refining the network structure; and the output-layer weights are updated online by recursive least squares with a forgetting factor. The method was used to build a predictive model of the relationship between sinter composition and tumbler strength in the ironmaking process and was validated with plant data. The experimental results show fast convergence and high prediction accuracy, so the method can be used for modeling complex nonlinear systems.
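
The online output-weight update can be sketched as recursive least squares with a forgetting factor, as below; k-means is used here only as a stand-in for the paper's center-clustering step, and the PSO refinement of the hidden-layer parameters is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

class RLSOutputLayer:
    """Recursive least squares with forgetting factor for the RBF output weights."""
    def __init__(self, n_hidden, n_out, lam=0.98, delta=1e3):
        self.w = np.zeros((n_hidden, n_out))
        self.P = np.eye(n_hidden) * delta    # inverse correlation matrix
        self.lam = lam                       # forgetting factor (0 < lam <= 1)

    def update(self, h, y):
        h = h.reshape(-1, 1)                 # hidden activations as a column vector
        y = np.atleast_1d(y)
        k = self.P @ h / (self.lam + h.T @ self.P @ h)    # gain vector
        e = y - (h.T @ self.w).ravel()                     # prediction error
        self.w += k @ e.reshape(1, -1)
        self.P = (self.P - k @ h.T @ self.P) / self.lam

def gaussian_hidden(x, centers, width):
    return np.exp(-np.sum((x - centers) ** 2, axis=1) / (2 * width ** 2))

# centers from clustering (k-means as a stand-in), then online RLS weight updates
X = np.random.default_rng(1).uniform(-1, 1, size=(500, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
centers = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X).cluster_centers_
layer = RLSOutputLayer(n_hidden=10, n_out=1)
for xi, yi in zip(X, y):
    layer.update(gaussian_hidden(xi, centers, width=0.5), yi)
```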

6.
A direct weight-determination method for Fourier trigonometric-basis neural networks   Cited by: 1 (self-citations: 0, others: 1)
Based on Fourier transform theory, this paper constructs a class of feedforward neural network models built on a trigonometric orthogonal basis. The model consists of an input layer, a hidden layer, and an output layer; the input and output layers use linear activation functions, and a set of trigonometric orthogonal basis functions serve as the activations of the hidden neurons. Following the error back-propagation (BP) algorithm, an iterative weight-update formula is derived. To overcome the slow convergence and limited approximation accuracy of BP iteration, a pseudo-inverse-based direct weight-determination method is further proposed, which avoids the lengthy process of repeated weight iteration. Simulation and prediction results show that the method computes faster and achieves higher simulation and test accuracy than traditional BP iteration.

7.
A method for determining the depth of DBN networks   Cited by: 5 (self-citations: 0, others: 5)
To address the difficulty of choosing the number of hidden layers in a DBN, the paper first analyzes, from a mathematical-biological perspective, why gradient descent with random initialization causes network training to fail, verifies the analysis, and proves a theorem on the positive correlation between RBM reconstruction error and network energy. Then, based on the relationship between hidden layers and error, a reconstruction-error-based method for judging network depth is proposed, which trains the network in a self-organizing way during training so that it can solve AI problems in a manner closer to how humans handle them. Experiments on handwritten digit recognition show that the method effectively improves computational efficiency and reduces computational cost.
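
A simplified sketch of the depth-selection idea: stack RBMs one layer at a time and stop when the reconstruction error of a newly trained layer no longer improves. The one-step-Gibbs error proxy, the stopping threshold, and the fixed layer width are assumptions for illustration, not the paper's exact criterion.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

def reconstruction_error(rbm, V):
    """Mean squared error after one Gibbs step (a common reconstruction-error proxy)."""
    return np.mean((V - rbm.gibbs(V)) ** 2)

def grow_dbn(X, n_hidden=64, max_layers=5, min_gain=1e-3, seed=0):
    """Stack RBMs layer by layer; stop adding depth once a new layer's
    reconstruction error no longer improves on the previous layer's error."""
    layers, V, prev_err = [], X, np.inf
    for _ in range(max_layers):
        rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.05,
                           n_iter=20, random_state=seed)
        rbm.fit(V)
        err = reconstruction_error(rbm, V)
        if prev_err - err < min_gain:        # depth judged sufficient
            break
        layers.append(rbm)
        V = rbm.transform(V)                 # hidden activations feed the next RBM
        prev_err = err
    return layers

# usage with data scaled to [0, 1], e.g. flattened digit images
X = np.random.default_rng(0).random((200, 64))
dbn = grow_dbn(X)
```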

8.
李军  乃永强 《控制与决策》2015,30(9):1559-1566

For a class of multi-input multi-output (MIMO) affine nonlinear dynamic systems, a robust adaptive neural control method based on the extreme learning machine (ELM) is proposed. ELM randomly assigns the hidden-layer parameters of single-hidden-layer feedforward networks (SLFNs) and only the output weights need to be adjusted, so good generalization can be obtained at extremely fast learning speed. In the proposed control scheme, ELM approximates the unknown nonlinear terms of the system, and parameter adaptation laws are designed for the ELM weights, the approximation error, and the unknown upper bound of external disturbances. Lyapunov stability analysis guarantees that all signals of the closed-loop system are semi-globally uniformly ultimately bounded. Simulation results demonstrate the effectiveness of the control method.


9.
A learning algorithm combining weight initialization with activation-function adjustment   Cited by: 2 (self-citations: 0, others: 2)
A neural network learning algorithm is proposed that combines a weight-initialization method based on independent component analysis (ICA) with dynamic adjustment of the slope of the sigmoid activation function. The method uses ICA to extract salient feature information from the input data to initialize the input-to-hidden weights, and it initializes the hidden-to-output weights so that the network outputs fall in the active region of the activation function. During learning, the slope of the activation function of each hidden and output unit is then adjusted automatically. Finally, computer simulations on real benchmark problems verify the effectiveness of the proposed method. Experimental results show that the method effectively speeds up the training of multilayer feedforward neural networks.
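
A small sketch of the ICA-based initialization of the input-to-hidden weights using scikit-learn's FastICA; the scaling of extra units and the omission of the hidden-to-output initialization and slope adjustment are simplifications.

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_init_input_weights(X, n_hidden, seed=0):
    """Initialize input-to-hidden weights from ICA components of the input data:
    each hidden unit's weight vector is one independent component direction."""
    n_comp = min(n_hidden, X.shape[1])
    ica = FastICA(n_components=n_comp, random_state=seed, max_iter=1000)
    ica.fit(X)
    W = np.zeros((X.shape[1], n_hidden))
    W[:, :n_comp] = ica.components_.T                    # salient directions from ICA
    if n_hidden > n_comp:                                # extra units: small random weights
        W[:, n_comp:] = np.random.default_rng(seed).normal(
            scale=0.1, size=(X.shape[1], n_hidden - n_comp))
    return W

# usage: hidden pre-activations start out aligned with independent components
X = np.random.default_rng(0).normal(size=(500, 8))
W_in = ica_init_input_weights(X, n_hidden=12)
hidden = np.tanh(X @ W_in)
```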

10.
To control the network size of an extreme learning machine (ELM), an ELM classification algorithm based on influence-degree pruning is proposed from a pruning perspective. Using the weight vectors connecting each hidden node to the input and output layers, the node's output, the initial number of hidden nodes, and the number of training samples, the influence of a single hidden node on the learning of the whole network is defined. Hidden nodes are judged and ranked by importance according to this influence, redundant nodes are deleted with a pruning step size matched to the ELM network size, and finally the weight vectors connecting the hidden layer to the input and output layers are updated. Classification experiments on several UCI machine-learning datasets, comparing the proposed algorithm with EM-ELM, PELM, and ELM, show that the algorithm has high stability and test accuracy, trains quickly, and effectively controls the network size.

11.
A new method for determining RBF neural network parameters   Cited by: 8 (self-citations: 0, others: 8)
邓继雄  李志舜  梁红 《微处理机》2006,27(4):48-49,52
An effective method for determining the hidden-layer neurons and weights of an RBF network is proposed. It combines an automatic clustering algorithm with a symmetric distance to optimize the center vector of each hidden neuron, and uses the pseudo-inverse to determine the weights from the hidden neurons to the output neurons. Experimental results show that the method has better classification ability than the automatic clustering algorithm alone.

12.
Normalized Gaussian Radial Basis Function networks   Cited by: 4 (self-citations: 0, others: 4)
Guido Bugmann 《Neurocomputing》1998,20(1-3):97-110
The performances of normalized RBF (NRBF) nets and standard RBF nets are compared in simple classification and mapping problems. In normalized RBF networks, the traditional roles of weights and activities in the hidden layer are switched. Hidden nodes perform a function similar to a Voronoi tessellation of the input space, and the output weights become the network's output over the partition defined by the hidden nodes. Consequently, NRBF nets lose the localized characteristics of standard RBF nets and exhibit excellent generalization properties, to the extent that hidden nodes need to be recruited only for training data at the boundaries of class domains. Reflecting this, a new learning rule is proposed that greatly reduces the number of hidden nodes needed in classification tasks. As for mapping applications, it is shown that NRBF nets may outperform standard RBF nets and exhibit more uniform errors. In both applications, the width of the basis functions is not critical, which makes NRBF nets easy to use.
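
A minimal sketch of the normalization that distinguishes NRBF nets from standard RBF nets: hidden activities are divided by their sum, so the output becomes a weighted average of the output weights over a soft partition of the input space. Widths, centers, and weights below are toy values.

```python
import numpy as np

def nrbf_predict(X, centers, width, out_weights):
    """Normalized Gaussian RBF net: hidden activities are normalized to sum to one,
    giving a soft, Voronoi-like partition of the input space."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    phi = np.exp(-d2 / (2.0 * width ** 2))           # standard RBF activities
    phi_norm = phi / phi.sum(axis=1, keepdims=True)  # normalization step
    return phi_norm @ out_weights

# toy usage: two hidden nodes partition a 1-D input into two regions
X = np.linspace(-2, 2, 9).reshape(-1, 1)
centers = np.array([[-1.0], [1.0]])
y = nrbf_predict(X, centers, width=0.5, out_weights=np.array([0.0, 1.0]))
```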

13.
In this paper, we propose a new combination modeling method whose structure consists of three components: an extreme learning machine (ELM), an adaptive neuro-fuzzy inference system (ANFIS), and PS-ABC, a modified hybrid artificial bee colony algorithm. The combination modeling method is proposed in an attempt to obtain good approximation and generalization performance. In the overall model, ELM is used to build a global model, and ANFIS is applied to compensate for the output errors of the ELM model to improve the overall performance. To obtain a model with better generalization ability and stability, PS-ABC is adopted to optimize the input weights and biases of the ELM. To demonstrate the validity of the proposed model, it is applied to set up the mapping relation between boiler efficiency and the operating conditions of a 300 MW coal-fired boiler. Compared with other combination models, the proposed model shows better approximation and generalization performance.

14.
Haq, Nuhman Ul; Khan, Ahmad; Rehman, Zia ur; Din, Ahmad; Shao, Ling; Shah, Sajid 《Multimedia Tools and Applications》2021,80(14):21771-21787

The semantic segmentation process divides an image into its constituent objects and background by assigning a corresponding class label to each pixel in the image. Semantic segmentation is an important area in computer vision with wide practical applications. Contemporary semantic segmentation approaches are primarily based on two types of deep neural network architectures, i.e., symmetric and asymmetric networks. Both types of networks consist of several layers of neurons, which are arranged in two sections called the encoder and the decoder. The encoder section receives the input image and the decoder section outputs the segmented image. However, both sections in symmetric networks have the same number of layers, and the number of neurons in an encoder layer is the same as that of the corresponding layer in the decoder section, whereas asymmetric networks do not strictly follow such a one-to-one correspondence between encoder and decoder layers. At the moment, SegNet and ESNet are the two leading state-of-the-art symmetric encoder-decoder deep neural network architectures. However, both architectures require extensive training for good generalization and need several hundred epochs for convergence. This paper aims to improve the convergence and enhance network generalization by introducing two novelties into the network training process. The first novelty is a weight initialization method and the second contribution is an adaptive mechanism for dynamic layer learning rate adjustment in the training loop. The proposed initialization technique uses transfer learning to initialize the encoder section of the network; for initialization of the decoder section, the weights of the encoder section layers are copied to the corresponding layers of the decoder section. The second contribution of the paper is an adaptive layer learning rate method, wherein the learning rates of the encoder layers are updated based on a metric representing the difference between the probability distributions of the input images and the encoder weights. Likewise, the learning rates of the decoder layers are updated based on the difference between the probability distributions of the output labels and the decoder weights. Intensive empirical validation of the proposed approach shows significant improvement in terms of faster convergence and generalization.


15.
To address the random setting of input weights and hidden-layer biases in the traditional extreme learning machine, an output-value reverse-allocation algorithm is proposed. Building on the traditional ELM, the algorithm obtains the optimal output-value allocation coefficients through an optimization procedure and determines the network input parameters by least squares. The algorithm was tested on common datasets and compared with other improved ELM algorithms; the results show good learning and generalization ability and a simple network structure, demonstrating the effectiveness of the algorithm.

16.
In this paper, a new learning algorithm is proposed for the problem of simultaneously learning a function and its derivatives, as an extension of the study of the error-minimized extreme learning machine for single-hidden-layer feedforward neural networks. Our formulation leads to solving a system of linear equations, and its solution is obtained by the Moore-Penrose generalized pseudo-inverse. In this approach, the number of hidden nodes is automatically determined by repeatedly adding new hidden nodes to the network, either one by one or group by group, and updating the output weights incrementally in an efficient manner until the network output error is less than the given expected learning accuracy. To verify the efficiency of the proposed method, a number of interesting examples are considered, and the results obtained with the proposed method are compared with those of two other popular methods. It is observed that the proposed method is fast and produces similar or better generalization performance on the test data.
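
A compact sketch of the growth loop described above; for simplicity the output weights are re-solved from scratch with the Moore-Penrose pseudo-inverse after each group of nodes is added, whereas the paper updates them incrementally, which is cheaper.

```python
import numpy as np

def incremental_elm(X, Y, target_mse=1e-4, group=5, max_hidden=200, seed=0):
    """Grow the hidden layer group by group until the training error is small enough."""
    rng = np.random.default_rng(seed)
    Ws, bs = [], []
    H = np.empty((X.shape[0], 0))
    beta, mse = None, np.inf
    while H.shape[1] < max_hidden and mse > target_mse:
        W_new = rng.normal(size=(X.shape[1], group))      # random weights of the new nodes
        b_new = rng.normal(size=group)
        Ws.append(W_new); bs.append(b_new)
        H = np.hstack([H, np.tanh(X @ W_new + b_new)])    # append their activations
        beta = np.linalg.pinv(H) @ Y                      # re-solve output weights
        mse = np.mean((H @ beta - Y) ** 2)
    return np.hstack(Ws), np.concatenate(bs), beta, mse

# toy usage on a smooth 2-D target
X = np.random.default_rng(1).uniform(-1, 1, (400, 2))
Y = np.sin(np.pi * X[:, :1]) * X[:, 1:]
W_in, b, beta, mse = incremental_elm(X, Y)
```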

17.
陈华伟  年晓玲  靳蕃 《计算机应用》2006,26(5):1106-1108
A new learning algorithm for feedforward neural networks is proposed in which the weights between different layers can be adjusted as needed in both the forward and backward phases: in the forward phase, the weights connecting the hidden layer to the output layer are determined by the minimum-norm least-squares solution; in the backward phase, the weights connecting the input layer to the hidden layer are adjusted by error gradient descent. The algorithm learns and converges quickly and, to a certain extent, guarantees the generalization ability of the trained network. Experimental results give a preliminary verification of the new algorithm's performance.

18.
This paper presents a performance enhancement scheme for the recently developed extreme learning machine (ELM) for multi-category sparse data classification problems. ELM is a single-hidden-layer neural network with good generalization capabilities and extremely fast learning capacity. In ELM, the input weights are randomly chosen and the output weights are analytically calculated. The generalization performance of the ELM algorithm for sparse data classification problems depends critically on three free parameters. They are the number of hidden neurons, the input weights, and the bias values, which need to be optimally chosen. Selection of these parameters for the best performance of ELM involves a complex optimization problem. In this paper, we present a new real-coded genetic algorithm approach called ‘RCGA-ELM’ to select the optimal number of hidden neurons, input weights, and bias values, which results in better performance. Two new genetic operators called ‘network-based operator’ and ‘weight-based operator’ are proposed to find a compact network with higher generalization performance. We also present an alternate and less computationally intensive approach called ‘sparse-ELM’. Sparse-ELM searches for the best parameters of ELM using K-fold validation. A multi-class human cancer classification problem using micro-array gene expression data (which is sparse) is used for evaluating the performance of the two schemes. Results indicate that the proposed RCGA-ELM and sparse-ELM significantly improve ELM performance for sparse multi-category classification problems.

19.
To improve the accuracy of sunspot prediction and forecasting, two learning algorithms are proposed: the fixed extreme learning process neural network (FELM-PNN) and the incremental extreme learning process neural network (IELM-PNN). FELM-PNN has a fixed number of hidden nodes; it uses SVD to compute the Moore-Penrose generalized inverse of the hidden-layer output matrix and obtains the hidden-layer output weights by least squares. IELM-PNN adds hidden nodes one at a time and computes the output weights of each added node from the hidden-layer output matrix and the network error. The effectiveness of both methods is verified on Henon time-series prediction, and they are applied to medium- and long-term forecasting of the smoothed monthly mean sunspot numbers of solar cycle 24. Experimental results show that both methods improve prediction accuracy to some extent, and IELM-PNN converges better in training than FELM-PNN.

20.
This paper proposes a novel artificial neural network called the fast learning network (FLN). In the FLN, the input weights and hidden-layer biases are randomly generated, and the weights connecting the output layer to the input layer and those connecting the output layer to the hidden layer are analytically determined by least squares. To test the FLN's validity, it is applied to nine regression applications; experimental results show that, compared with the support vector machine, back-propagation, and the extreme learning machine, the FLN with a much more compact network can achieve very good generalization performance and stability at a very fast training speed, with a quick reaction of the trained network to new observations. In addition, to further test its validity, the FLN is applied to model the thermal efficiency and NOx emissions of a 330 MW coal-fired boiler and achieves very good prediction precision and generalization ability at a high learning speed.
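
A rough sketch of an FLN-style model under the assumption that the output layer is fed by both the hidden activations and direct links from the inputs, with all output-side weights obtained in one least-squares solve; the activation choice and parameter ranges are illustrative.

```python
import numpy as np

def fln_train(X, Y, n_hidden=30, seed=0):
    """FLN-style sketch: random input weights and biases, then a single
    least-squares solve for the weights from BOTH the hidden layer and the
    input layer (direct links) to the output layer."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-1, 1, size=(X.shape[1], n_hidden))
    b = rng.uniform(-1, 1, size=n_hidden)
    H = np.tanh(X @ W_in + b)
    G = np.hstack([H, X])                       # hidden activations + direct input links
    beta = np.linalg.pinv(G) @ Y                # analytic output-side weights
    return W_in, b, beta

def fln_predict(X, W_in, b, beta):
    return np.hstack([np.tanh(X @ W_in + b), X]) @ beta

# toy regression usage
X = np.random.default_rng(2).uniform(-1, 1, (300, 3))
Y = X[:, :1] ** 2 - X[:, 1:2] * X[:, 2:]
model = fln_train(X, Y)
pred = fln_predict(X, *model)
```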
