Similar Documents
19 similar documents found (search time: 203 ms)
1.
This paper proposes a new structurally adaptive radial basis function (RBF) neural network model. In this model, a self-organizing map (SOM) neural network serves as the clustering network: an unsupervised learning algorithm classifies the input samples in a self-organizing manner, and the resulting cluster centers and their associated weight vectors are passed to the RBF network as the radial basis function centers and the corresponding weight vectors. The RBF network serves as the base network: Gaussian functions implement the nonlinear mapping from the input layer to the hidden layer, while the output-layer weights are trained with a supervised learning algorithm, realizing the nonlinear mapping from input to output. Simulations on a letter dataset show that the network performs well.
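The two-stage scheme this abstract describes can be sketched in a few lines. This is a minimal illustration under stated assumptions: hypothetical toy data stands in for the letter dataset, a tiny 1-D SOM stands in for the paper's SOM, and the supervised output weights are solved by least squares rather than by the paper's exact training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: two Gaussian blobs with binary labels,
# standing in for the letter dataset mentioned in the abstract.
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

# Stage 1: a minimal 1-D SOM clusters the inputs (unsupervised).
n_nodes, lr, sigma0 = 4, 0.5, 1.0
W = rng.normal(1, 1, (n_nodes, 2))               # SOM codebook vectors
for t in range(200):
    x = X[rng.integers(len(X))]
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))  # best-matching unit
    d = np.abs(np.arange(n_nodes) - bmu)         # grid distance to BMU
    h = np.exp(-d ** 2 / (2 * sigma0 ** 2))      # neighborhood function
    W += lr * (1 - t / 200) * h[:, None] * (x - W)

# Stage 2: SOM codebook vectors become the Gaussian RBF centers;
# the output-layer weights are then trained with supervision.
width = 0.5
G = np.exp(-((X[:, None, :] - W[None, :, :]) ** 2).sum(-1) / (2 * width ** 2))
beta, *_ = np.linalg.lstsq(G, y, rcond=None)

acc = ((G @ beta > 0.5) == y).mean()
```

On this easy toy problem the handoff of SOM codebook vectors to the RBF layer is enough to separate the two classes almost perfectly.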

2.
Application of an MRBF neural network to image classification
A new structurally adaptive radial basis function (MRBF) neural network is proposed for texture-image classification. A self-organizing map (SOM) neural network acts as the clustering network: an unsupervised learning algorithm classifies the input samples in a self-organizing manner, and the cluster centers and corresponding weight vectors are fed to the hidden layer of the RBF network, addressing the RBF network's sensitivity to center placement. Simulations on nine texture images selected from the Brodatz albums as test images show that the network performs well.

3.
To address the problem that many important parameters in industrial production processes cannot be measured precisely or in real time, a prediction method is proposed that combines a self-organizing map (SOM) neural network with a radial basis function (RBF) neural network. The RBF network serves as the base network, realizing the mapping from the input layer to the output layer and producing the predicted output; the SOM network serves as the clustering network, classifying the input samples in a self-organizing manner, with the cluster centers and their associated weight vectors used as the centers of the RBF network's radial basis functions. Taking the optimal control of the hydrogen decrepitation process for Nd-Fe-B as an example, a detection model for the alloy's hydrogen content was built and compared with a plain RBF detection model. Simulation results show that the hybrid model achieves higher detection accuracy and stronger generalization, confirming the effectiveness of the method.

4.
A new method of improving sensor measurement accuracy with an RBF neural network
A new method is introduced that improves sensor accuracy by combining a radial basis function (RBF) neural network with the DS18B20 digital temperature sensor. The RBF network has good nonlinear mapping, self-learning, and generalization capabilities; a two-input, single-output network model was built by training on a large set of sample data, and an improved algorithm achieves high-precision temperature compensation for the sensor.

5.
Classification and prediction with generalized radial basis function neural networks
The radial basis function network is a widely used neural network design method that treats network design as a curve-fitting problem in a high-dimensional space. Besides the general strengths of neural networks, such as multidimensional nonlinear mapping, generalization, and parallel information processing, RBF networks offer strong cluster-analysis capability and a simple, convenient learning algorithm. For a practical classification problem, a generalized RBF network is trained and used to predict the classes of a test dataset. The algorithm trains the centers of the generalized RBF network with k-means clustering and computes the output-layer weights by singular value decomposition. Implementation details and possible improvements are briefly analyzed. Experiments confirm the strong cluster-analysis capability and simple learning algorithm of the generalized RBF approach.
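The training pipeline this abstract describes (k-means for the centers, SVD for the output weights) can be sketched as follows. The dataset is a hypothetical toy, and `np.linalg.pinv` is used for the output weights because it computes the SVD-based pseudoinverse internally.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-class toy data (the abstract's real dataset is unspecified).
X = np.vstack([rng.normal(-1, 0.4, (60, 2)), rng.normal(1, 0.4, (60, 2))])
y = np.concatenate([np.zeros(60), np.ones(60)])

# k-means (Lloyd's algorithm) places the generalized-RBF centers.
k = 4
C = X[rng.choice(len(X), k, replace=False)].copy()
for _ in range(20):
    lab = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
    for j in range(k):
        if (lab == j).any():
            C[j] = X[lab == j].mean(axis=0)

# Gaussian design matrix; output weights via the SVD-based pseudoinverse.
sigma = 1.0
G = np.exp(-((X[:, None] - C[None]) ** 2).sum(-1) / (2 * sigma ** 2))
w = np.linalg.pinv(G) @ y

acc = ((G @ w > 0.5) == y).mean()
```

The same split of labor (unsupervised center placement, linear-algebra solve for the output layer) is what keeps the learning algorithm "simple and convenient" in the abstract's wording.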

6.
罗庚合 《计算机应用》2013,33(7):1942-1945
To address the random selection of input-layer weights in the extreme learning machine (ELM) algorithm, an extension-clustering-based ELM (EC-ELM) neural network is proposed, drawing on the clustering idea of the type-2 extension neural network (ENN-2). The network takes the radial basis centers of the hidden neurons as the input-layer weights, dynamically adjusts the number of hidden nodes and the radial basis centers with an extension clustering algorithm, and then, given the determined input-layer weights, quickly solves for the output-layer weights using the Moore-Penrose generalized inverse. Tests on the standard Friedman#1 regression dataset and the Wine classification dataset show that EC-ELM offers a simple way to learn network structure and parameters, and achieves higher modeling accuracy and faster learning than both the extension-theory-based RBF (ERBF) and standard ELM networks, suggesting a new approach to modeling complex processes.
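For context, the baseline ELM step that EC-ELM builds on is shown below. This is a minimal sketch of standard ELM only (not the extension-clustering variant): random hidden parameters, sigmoid hidden layer, and a one-shot Moore-Penrose solve for the output weights. The data is a hypothetical 1-D regression toy, not the Friedman#1 benchmark.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 1-D regression toy (not the Friedman#1 benchmark).
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X[:, 0])

# Baseline ELM: random input weights and biases, sigmoid hidden layer,
# output weights solved in one step via the Moore-Penrose pseudoinverse.
L = 30
Win = rng.normal(0, 2, (1, L))
b = rng.normal(0, 1, L)
H = 1 / (1 + np.exp(-(X @ Win + b)))
beta = np.linalg.pinv(H) @ y

rmse = np.sqrt(np.mean((H @ beta - y) ** 2))
```

EC-ELM replaces the random `Win` with centers found by extension clustering; the closed-form `pinv` solve for `beta` is the part both variants share.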

7.
A PSO-based parameter optimization algorithm for RBF networks
To remedy some shortcomings of neural networks, a learning algorithm based on particle swarm optimization (PSO) is studied and applied to training RBF neural networks. A PSO-based parameter optimization algorithm for radial basis function (RBF) networks is proposed: subtractive clustering first determines the number of radial basis function centers; PSO then optimizes the centers and widths of the basis functions; finally, PSO trains the weights from the hidden layer to the output layer, finding the optimal network weights and thereby optimizing network learning. An experiment comparing the method with a least-squares-optimized neural network verifies its effectiveness.
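A compressed sketch of the idea follows, with several simplifications relative to the abstract: the number of centers k is fixed by hand instead of found by subtractive clustering, PSO searches over the centers and one shared log-width, and the output weights are solved by least squares inside the fitness function rather than by a separate PSO stage.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 1-D regression toy.
X = np.linspace(-2, 2, 80)[:, None]
y = np.sin(2 * X[:, 0])
k = 5  # number of RBF centers (fixed here, not from subtractive clustering)

def fitness(p):
    # p = [c_1..c_k, log_width]; output weights solved by least squares.
    C, width = p[:k], np.exp(p[k])
    G = np.exp(-((X - C[None]) ** 2) / (2 * width ** 2))
    w, *_ = np.linalg.lstsq(G, y, rcond=None)
    return np.mean((G @ w - y) ** 2)

# Minimal global-best PSO over the RBF centers and width.
n, dim, iters = 20, k + 1, 60
pos = rng.uniform(-2, 2, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
g = pbest[np.argmin(pbest_f)].copy()
for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = pos + vel
    f = np.array([fitness(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    g = pbest[np.argmin(pbest_f)].copy()

best_mse = pbest_f.min()
```

Searching the log of the width keeps that coordinate positive without constraint handling, a common trick when PSO optimizes scale parameters.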

8.
Radial basis function (RBF) neural networks are widely applied to signal processing and pattern recognition, and several learning algorithms exist for determining the RBF centers and training the network. Since determining the RBF center vectors and the network weights can be viewed as a single system problem, this paper applies the extended Kalman filter (EKF) as the learning algorithm for a multi-input multi-output RBF network: once the number of hidden nodes is fixed, the EKF estimates the center vectors and the weight matrix simultaneously. To speed up convergence, an EKF with a suboptimal fading factor (SFEKF) is further proposed as the RBF learning algorithm. Simulation results show that applying the EKF during learning outperforms a conventional RBF network, with markedly faster learning than gradient descent and a reduced computational burden.

9.
Combining principal component analysis (PCA) with a radial basis function (RBF) neural network, a groundwater-dynamics simulation and soft-sensing prediction model is built. PCA extracts the principal components for data preprocessing; the selected components serve as the inputs of the RBF network; k-means clustering determines the hidden-layer parameters, and recursive least squares determines the output-layer weights. Simulation results show that the model simplifies the network structure and improves prediction accuracy.
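The PCA-then-RBF pipeline can be sketched as below. Assumptions are hypothetical throughout: synthetic correlated data stands in for the groundwater measurements, centers are random training points instead of k-means results, and plain least squares replaces the recursive least squares of the abstract.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical correlated process data with a nonlinear target
# (standing in for the groundwater measurements, which are not given).
Z = rng.normal(0, 1, (150, 2))
X = np.column_stack([Z[:, 0],
                     Z[:, 0] + 0.01 * rng.normal(size=150),  # redundant column
                     Z[:, 1]])
y = np.sin(Z[:, 0]) + 0.5 * Z[:, 1]

# PCA by SVD: keep the components carrying most of the variance.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
T = Xc @ Vt[:2].T                 # principal-component scores = RBF inputs

# RBF on the scores: random training points as centers, output weights
# by plain least squares (simplifying the abstract's k-means +
# recursive least squares scheme).
C = T[rng.choice(len(T), 25, replace=False)]
G = np.exp(-((T[:, None] - C[None]) ** 2).sum(-1) / 2.0)
w, *_ = np.linalg.lstsq(G, y, rcond=None)
rmse = np.sqrt(np.mean((G @ w - y) ** 2))
```

Dropping the near-duplicate input column via PCA is exactly the structure simplification the abstract credits to the preprocessing step.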

10.
RBF neural network evaluation of product concept design schemes
After analyzing the problems of existing evaluation methods, an RBF network model is built with the Matlab Neural Network Toolbox and applied to evaluating refrigerator designs as a case study. The RBF network uses a supervised learning algorithm and orthogonal least squares (OLS) to determine the basis function centers and widths and the hidden-to-output weights. Compared with a BP neural network model, the RBF evaluation model achieves higher prediction accuracy and faster convergence.

11.
According to conventional neural network theories, single-hidden-layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes are universal approximators when all the parameters of the networks are allowed to be adjustable. However, as observed in most neural network implementations, tuning all the parameters may make learning complicated and inefficient, and it may be difficult to train networks with nondifferentiable activation functions such as threshold networks. Unlike conventional neural network theories, this paper proves by an incremental constructive method that, for SLFNs to work as universal approximators, one may simply choose hidden nodes randomly and then adjust only the output weights linking the hidden layer and the output layer. In such SLFN implementations, the activation function for additive nodes can be any bounded nonconstant piecewise continuous function g: R → R, and the activation function for RBF nodes can be any integrable piecewise continuous function g: R → R with ∫_R g(x) dx ≠ 0. The proposed incremental method is efficient not only for SLFNs with continuous (including nondifferentiable) activation functions but also for SLFNs with piecewise continuous (such as threshold) activation functions. Compared with other popular methods, such a network is fully automatic, and users need not intervene in the learning process by manually tuning control parameters.
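The incremental-constructive argument can be seen in a few lines: each randomly generated hidden node receives an output weight beta = <e, g> / <g, g> fixed against the current residual e, a projection step that can never increase the residual norm. A sketch on hypothetical 1-D data (the constants and target are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical 1-D target.
X = rng.uniform(-1, 1, (100, 1))
y = np.sin(4 * X[:, 0])

# I-ELM-style construction: randomly generate sigmoid hidden nodes one
# at a time; each node's output weight is fixed analytically against
# the current residual and is never re-tuned afterwards.
e = y.copy()
errs = []
for _ in range(200):
    a, b = rng.normal(0, 4), rng.normal(0, 1)
    g = 1 / (1 + np.exp(-(a * X[:, 0] + b)))  # random additive node
    beta = (e @ g) / (g @ g)                  # analytic output weight
    e = e - beta * g                          # residual cannot grow
    errs.append(np.sqrt(np.mean(e ** 2)))
```

Because each step subtracts the projection of the residual onto the new node's output, the training error is monotonically non-increasing even though no existing weight is ever revisited.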

12.
A GEP-optimized RBF neural network algorithm
As an artificial neural network that performs function mapping through local tuning, the RBF network performs well in approximation ability, classification ability, and learning speed. However, because the number of hidden nodes and the hidden-node centers are hard to determine, the accuracy of the whole network suffers, which greatly limits its wide application. This paper therefore proposes a GEP-optimized RBF neural network algorithm that optimizes the center vectors and connection weights. Experiments show that the proposed algorithm reduces the prediction error by 48.96% on average compared with the standard RBF algorithm.

13.
Convex incremental extreme learning machine
Guang-Bin  Lei   《Neurocomputing》2007,70(16-18):3056
Unlike the conventional neural network theories and implementations, Huang et al. [Universal approximation using incremental constructive feedforward networks with random hidden nodes, IEEE Transactions on Neural Networks 17(4) (2006) 879–892] have recently proposed a new theory to show that single-hidden-layer feedforward networks (SLFNs) with randomly generated additive or radial basis function (RBF) hidden nodes (according to any continuous sampling distribution) can work as universal approximators and the resulting incremental extreme learning machine (I-ELM) outperforms many popular learning algorithms. I-ELM randomly generates the hidden nodes and analytically calculates the output weights of SLFNs, however, I-ELM does not recalculate the output weights of all the existing nodes when a new node is added. This paper shows that while retaining the same simplicity, the convergence rate of I-ELM can be further improved by recalculating the output weights of the existing nodes based on a convex optimization method when a new hidden node is randomly added. Furthermore, we show that given a type of piecewise continuous computational hidden nodes (possibly not neural alike nodes), if SLFNs can work as universal approximators with adjustable hidden node parameters, from a function approximation point of view the hidden node parameters of such “generalized” SLFNs (including sigmoid networks, RBF networks, trigonometric networks, threshold networks, fuzzy inference systems, fully complex neural networks, high-order networks, ridge polynomial networks, wavelet networks, etc.) can actually be randomly generated according to any continuous sampling distribution. In theory, the parameters of these SLFNs can be analytically determined by ELM instead of being tuned.

14.
Normalized Gaussian Radial Basis Function networks
Guido Bugmann 《Neurocomputing》1998,20(1-3):97-110
The performances of normalized RBF (NRBF) nets and standard RBF nets are compared in simple classification and mapping problems. In normalized RBF networks, the traditional roles of weights and activities in the hidden layer are switched. Hidden nodes perform a function similar to a Voronoi tessellation of the input space, and the output weights become the network's output over the partition defined by the hidden nodes. Consequently, NRBF nets lose the localized characteristics of standard RBF nets and exhibit excellent generalization properties, to the extent that hidden nodes need to be recruited only for training data at the boundaries of class domains. Reflecting this, a new learning rule is proposed that greatly reduces the number of hidden nodes needed in classification tasks. As for mapping applications, it is shown that NRBF nets may outperform standard RBF nets and exhibit more uniform errors. In both applications, the width of the basis functions is uncritical, which makes NRBF nets easy to use.
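The normalization the abstract describes is a one-line change to a standard RBF layer; this sketch (with illustrative centers and width, not values from the paper) shows the resulting partition-of-unity behavior.

```python
import numpy as np

# Normalized RBF activations: each Gaussian response is divided by the
# sum over all hidden nodes, so the hidden layer forms a partition of
# unity resembling a Voronoi tessellation of the input space.
def nrbf(X, C, width):
    G = np.exp(-((X[:, None] - C[None]) ** 2).sum(-1) / (2 * width ** 2))
    return G / G.sum(axis=1, keepdims=True)

C = np.array([[0.0], [1.0], [2.0]])   # hidden-node centers (illustrative)
X = np.array([[0.1], [1.9], [5.0]])
A = nrbf(X, C, 0.5)
# Far from every center (x = 5), the nearest node still dominates,
# instead of all activations vanishing as in a standard RBF net.
```

This loss of locality is precisely why NRBF nets generalize beyond the support of the training data and why the basis-function width becomes uncritical.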

15.
P.A.  C.  M.  J.C.   《Neurocomputing》2009,72(13-15):2731
This paper proposes a hybrid neural network model using a possible combination of different transfer projection functions (sigmoidal unit, SU, product unit, PU) and kernel functions (radial basis function, RBF) in the hidden layer of a feed-forward neural network. An evolutionary algorithm is adapted to this model and applied for learning the architecture, weights and node typology. Three different combined basis function models are proposed with all the different pairs that can be obtained with SU, PU and RBF nodes: product–sigmoidal unit (PSU) neural networks, product–radial basis function (PRBF) neural networks, and sigmoidal–radial basis function (SRBF) neural networks; and these are compared to the corresponding pure models: product unit neural network (PUNN), multilayer perceptron (MLP) and the RBF neural network. The proposals are tested using ten benchmark classification problems from well known machine learning problems. Combined functions using projection and kernel functions are found to be better than pure basis functions for the task of classification in several datasets.

16.
In this study we investigate a hybrid neural network architecture for modelling purposes. The proposed network is based on the multilayer perceptron (MLP) network. However, in addition to the usual hidden layers, the first hidden layer is selected to be a centroid layer. Each unit in this new layer incorporates a centroid that is located somewhere in the input space, and the output of these units is the Euclidean distance between the centroid and the input. The centroid layer clearly resembles the hidden layer of radial basis function (RBF) networks, so the centroid-based multilayer perceptron (CMLP) network can be regarded as a hybrid of MLP and RBF networks. The presented benchmark experiments show that the proposed hybrid architecture is able to combine the good properties of MLP and RBF networks, resulting in fast and efficient learning and a compact network structure.
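The forward pass of such a hybrid can be sketched directly from the description above; the centroids and weights below are illustrative placeholders (random, untrained), since the paper's trained values are not given.

```python
import numpy as np

rng = np.random.default_rng(6)

# CMLP-style forward pass: the first hidden layer is a centroid layer
# whose units output Euclidean distances to stored centroids; those
# distance features then feed an ordinary tanh MLP layer.
centroids = np.array([[0.0, 0.0], [1.0, 1.0]])
W1 = rng.normal(0, 1, (2, 3))   # distance features -> 3 hidden units
W2 = rng.normal(0, 1, (3, 1))   # hidden units -> scalar output

def cmlp_forward(X):
    D = np.sqrt(((X[:, None] - centroids[None]) ** 2).sum(-1))  # centroid layer
    H = np.tanh(D @ W1)                                         # MLP layer
    return H @ W2

out = cmlp_forward(np.array([[0.0, 0.0], [1.0, 1.0]]))
```

The distance layer gives the network the RBF-like local geometry, while the subsequent tanh layer retains the MLP's global combination of features.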

17.
To address the overly complex structure caused by an excessive number of hidden-layer nodes in RBF neural networks, an RBF optimization algorithm based on an improved genetic algorithm (IGA) is proposed. The IGA optimizes the structure of an orthogonal-least-squares-based RBF network by globally searching over the column vectors of the hidden-layer output matrix, yielding a better-structured IGA-based RBF network (IGA-RBF). The IGA-RBF learning algorithm was applied to a temperature-humidity prediction model for the storage environment of electronic components and compared with the orthogonal-least-squares RBF network. The results show that the IGA-RBF design reduces training by 44 steps and the hidden layer by 34 nodes, while producing smaller temperature-humidity errors, a fitting accuracy above 0.95, and higher prediction accuracy.

18.
Structure design of RBF neural networks based on information strength
Building on a systematic study of feedforward neural networks, a flexible structure-optimization method is proposed for the design of radial basis function (RBF) networks. The output information (OI) of the hidden neurons and the multi-information (MI) between hidden and output neurons are used to analyze connection strength, which decides whether hidden neurons are added or deleted while the network topology is adjusted, effectively solving the RBF structure-design problem. A gradient-descent parameter-correction algorithm guarantees the accuracy of the final RBF network, achieving self-calibration of both structure and parameters. Results on approximating typical nonlinear functions and on modeling key water-quality parameters in wastewater treatment show that the flexible RBF has good dynamic response and approximation ability, with notable improvements in training speed, generalization, and final network size over minimal resource allocation networks (MRAN), generalized growing and pruning RBF (GGAP-RBF), and self-organizing RBF (SORBF) networks.

19.
Artificial neural networks have been used to support applications across a variety of business and scientific disciplines in recent years. Artificial neural network applications are frequently viewed as black boxes which mystically determine complex patterns in data. Contrary to this popular view, neural network designers typically perform extensive knowledge engineering and incorporate a significant amount of domain knowledge into artificial neural networks. This paper details heuristics that utilize domain knowledge to produce an artificial neural network with optimal output performance. The effect of using the heuristics on neural network performance is illustrated by examining several applied artificial neural network systems. Identifying an optimal-performance artificial neural network requires a full factorial design over the number of input nodes, hidden nodes, hidden layers, and the learning algorithm. The heuristic methods discussed in this paper produce optimal or near-optimal networks using only a fraction of the time needed for a full factorial design.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号