Similar Articles
10 similar articles found (search time: 125 ms)
1.
A novel structurally adaptive radial basis function (RBF) neural network model is proposed. In this model, a self-organizing map (SOM) network serves as the clustering network: an unsupervised learning algorithm performs self-organizing classification of the input samples, and the resulting cluster centers and their associated weight vectors are passed to the RBF network to serve, respectively, as the radial basis function centers and the corresponding weight vectors. The RBF network serves as the base network: Gaussian functions realize the nonlinear mapping from the input layer to the hidden layer, while the output-layer weights are trained with a supervised learning algorithm, thereby realizing the nonlinear mapping from input to output. Simulations on a letter dataset show that the network performs well.
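The SOM-then-RBF pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy 1-D SOM, the two-cluster synthetic data, the fixed width `sigma`, and the least-squares fit of the output weights are all assumptions made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(X, n_units=4, epochs=50, lr0=0.5):
    """Toy 1-D SOM: unsupervised clustering of the inputs into n_units centers."""
    centers = X[rng.choice(len(X), n_units, replace=False)].astype(float)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                 # decaying learning rate
        for x in X[rng.permutation(len(X))]:
            w = np.argmin(np.linalg.norm(centers - x, axis=1))   # winning unit
            for j in range(n_units):                # neighborhood update
                h = np.exp(-((j - w) ** 2) / 2.0)
                centers[j] += lr * h * (x - centers[j])
    return centers

def rbf_hidden(X, centers, sigma=1.0):
    """Gaussian mapping from input layer to hidden layer."""
    d2 = np.square(np.linalg.norm(X[:, None, :] - centers[None], axis=2))
    return np.exp(-d2 / (2 * sigma ** 2))

# Two well-separated 2-D clusters with one-hot class labels.
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
Y = np.vstack([np.tile([1.0, 0.0], (20, 1)), np.tile([0.0, 1.0], (20, 1))])

centers = train_som(X, n_units=4)                   # SOM supplies the RBF centers
H = rbf_hidden(X, centers)
W, *_ = np.linalg.lstsq(H, Y, rcond=None)           # supervised output-weight fit
pred = (rbf_hidden(X, centers) @ W).argmax(axis=1)
print((pred == Y.argmax(axis=1)).mean())            # training accuracy
```

The key design point is the division of labor: the SOM determines *where* the basis functions sit, and only the linear output layer is trained against the labels.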

2.
A novel structurally adaptive radial basis function (RBF) neural network model is proposed. In this network, a self-organizing map (SOM) network serves as the clustering network: an unsupervised learning algorithm performs self-organizing classification of the input samples, and the resulting cluster centers and their associated weight vectors are passed to the RBF network as the radial basis function centers and corresponding weight vectors. The RBF network serves as the base network: Gaussian functions realize the nonlinear mapping from the input layer to the hidden layer, while the output-layer weights are trained with a supervised learning algorithm, thereby realizing the nonlinear mapping from input to output. Simulations on a letter dataset show that the network performs well.

3.
Functional Analysis and Simulation Study of Neural Network Transfer Functions   (cited 1 time: 0 self-citations, 1 by others)
From the perspective of function mapping, and taking a three-layer feedforward network as an example, the mapping realized by a neural network is analyzed. It is proposed that the mapping of a feedforward network can be viewed as a generalized series expansion: the expansion coefficients are the hidden-to-output connection weights, while the transfer function supplies a "mother basis" which, together with the input-to-hidden connection weights, constructs the different expansion functions. Based on this theory, the role of the transfer function in the mapping is analyzed in detail, and it is pointed out that flexibly choosing several composite transfer functions allows the network to realize the input-output mapping with fewer parameters and fewer hidden nodes, thereby improving generalization. Training simulations on a two-class classification problem using genetic optimization show that mixed transfer functions do realize the required mapping with fewer hidden nodes, yielding lower structural complexity and better generalization. This result further confirms the generalized-series-expansion view of neural network mappings.
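The "generalized series expansion" view above can be made concrete with a small sketch. Everything here is illustrative, not from the paper: the 1-D target function, the choice of tanh and Gaussian as the two mixed "mother bases", and fitting the expansion coefficients (the hidden-to-output weights) by least squares with the hidden parameters held at random values.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 200)
target = np.sin(x) + 0.5 * np.exp(-x ** 2)     # function to approximate

# Hidden layer with mixed transfer functions: half tanh units, half Gaussian.
n_hidden = 8
w = rng.normal(0, 1, n_hidden)                  # input-to-hidden weights
b = rng.normal(0, 1, n_hidden)                  # hidden biases
A = np.empty((len(x), n_hidden))
for j in range(n_hidden):
    z = w[j] * x + b[j]                         # each column is one expansion function
    A[:, j] = np.tanh(z) if j % 2 == 0 else np.exp(-z ** 2)

# Expansion coefficients = hidden-to-output weights, fit by least squares.
c, *_ = np.linalg.lstsq(A, target, rcond=None)
approx = A @ c
print(np.max(np.abs(approx - target)))          # worst-case approximation error
```

The columns of `A` play the role of the expansion functions; changing the transfer function (or mixing several, as the paper advocates) changes the basis the coefficients `c` expand over.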

4.
In this paper, we develop an online sequential learning algorithm for single hidden layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm, referred to as the online sequential extreme learning machine (OS-ELM), can learn data one-by-one or chunk-by-chunk (a block of data) with fixed or varying chunk size. The activation functions for additive nodes in OS-ELM can be any bounded nonconstant piecewise continuous functions, and the activation functions for RBF nodes can be any integrable piecewise continuous functions. In OS-ELM, the parameters of the hidden nodes (the input weights and biases of additive nodes, or the centers and impact factors of RBF nodes) are randomly selected, and the output weights are analytically determined from the sequentially arriving data. The algorithm builds on the extreme learning machine (ELM) of Huang et al., developed for batch learning, which has been shown to be extremely fast while generalizing better than other batch training methods. Apart from the number of hidden nodes, no other control parameters have to be chosen manually. A detailed performance comparison of OS-ELM with other popular sequential learning algorithms is carried out on benchmark problems drawn from regression, classification, and time-series prediction. The results show that OS-ELM is faster than the other sequential algorithms and produces better generalization performance.

5.
Zhang Yun, Fang Zongde, Wang Cheng, Tian Lili, Zhao Yong. Computer Measurement & Control (《计算机测量与控制》), 2009, 17(6): 1095-1097, 1105
A combined RBF network training method based on dynamic clustering and a genetic algorithm is proposed. Dynamic clustering is applied to the sample data so that the number of hidden-node centers of the RBF network is determined automatically during training, with an empirical formula serving as the criterion for selecting the optimal number of clusters. A genetic algorithm then optimizes the hidden-layer centers and widths and the hidden-to-output weights, searching globally for the optimal network model. Finally, texture features are extracted from wheelset defects to form training and test samples, which are fed into the network for training and testing. Experimental results show that the combined method achieves a higher recognition rate than traditional methods.
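A minimal version of the GA stage can be sketched as follows, under several assumptions not taken from the paper: the cluster count is fixed at four rather than chosen dynamically, the toy 1-D target, the elitist selection scheme, and mutation-only offspring (no crossover). Fitness is training RMSE with the output weights fit analytically, so the GA only searches over centers and widths.

```python
import numpy as np

rng = np.random.default_rng(6)

X = rng.uniform(-2, 2, size=(80, 1))
T = np.sin(2 * X)                                      # toy 1-D regression target
n_centers = 4

def fitness(genome):
    """Genome = 4 centers + 4 widths; output weights fit by least squares."""
    c, s = genome[:n_centers], np.abs(genome[n_centers:]) + 1e-3
    H = np.exp(-((X - c[None, :]) ** 2) / (2 * s[None, :] ** 2))
    w, *_ = np.linalg.lstsq(H, T, rcond=None)
    return np.sqrt(np.mean((H @ w - T) ** 2))          # training RMSE

pop = rng.normal(0, 1, size=(30, 2 * n_centers))       # initial random population
init_best = min(fitness(g) for g in pop)

for gen in range(40):
    errs = np.array([fitness(g) for g in pop])
    elite = pop[np.argsort(errs)[:10]]                 # elitism: keep the best third
    children = elite[rng.integers(0, 10, 20)] + rng.normal(0, 0.1, (20, 2 * n_centers))
    pop = np.vstack([elite, children])                 # elites + mutated offspring

best_err = min(fitness(g) for g in pop)
print(best_err)
```

Because the elites always survive, the best fitness is monotonically non-increasing over generations.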

6.
Application of RBF Neural Networks to Remote Sensing Image Classification   (cited 7 times: 0 self-citations, 7 by others)
An RBF neural network is applied to remote sensing image classification. In the network design, the number of nodes in both the RBF layer and the output layer is set equal to the number of classes. When the Kohonen clustering algorithm is used to determine the RBF centers, the means of the training samples serve as the initial centers, and the computation of the RBF widths is modified to avoid memory overflow. The resulting RBF classification model has a simple structure and a concise algorithm. Experimental results show that the method achieves high classification accuracy on remote sensing imagery and has practical application value.

7.
A novel structure for radial basis function networks is proposed. In this structure, unlike the traditional RBF network, we place weights between the input and hidden layers. These weights, which take values around unity, are multiplicative factors for the input vector and perform a linear mapping. This increases the number of free parameters of the network, but since these weights are trainable, the overall performance of the network improves significantly. Because of the new weight vector, we call this structure the Weighted RBF, or WRBF. The weight adjustment formula is derived by applying the gradient descent algorithm. Two classification problems were used to evaluate the performance of the new RBF network: letter classification on a UCI dataset with 16 features (a difficult problem) and digit recognition on the HODA dataset with 64 features (an easy problem). WRBF is compared with the classic RBF network and an MLP, and our experiments show that WRBF significantly outperforms both. For example, with 200 hidden neurons, WRBF achieved a recognition rate of 92.78% on the UCI dataset, while the RBF network and the MLP achieved 83.13% and 89.25%, respectively. On the HODA dataset, WRBF reached a 97.94% recognition rate, whereas the RBF network achieved 97.14% and the MLP 97.63%.
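The structural change the abstract describes, input weights near unity applied before the Gaussian layer, can be sketched as a forward pass. This is an interpretation, not the paper's code: the tiny dimensions, random parameters, and the finite-difference gradient (standing in for the paper's analytic gradient-descent update) are all assumptions made here.

```python
import numpy as np

rng = np.random.default_rng(3)

def wrbf_forward(X, a, centers, sigma, W):
    """WRBF: inputs are scaled elementwise by trainable weights `a` (near 1)
    before the usual Gaussian RBF layer; classic RBF is the special case a = 1."""
    Z = X * a                                           # linear input mapping
    d2 = np.square(np.linalg.norm(Z[:, None, :] - centers[None], axis=2))
    H = np.exp(-d2 / (2 * sigma ** 2))
    return H @ W, H

# Toy setup: 2 inputs, 3 hidden units, 1 output.
X = rng.normal(size=(5, 2))
a = np.ones(2) + rng.normal(0, 0.05, 2)                 # factors around unity
centers = rng.normal(size=(3, 2))
W = rng.normal(size=(3, 1))
y, H = wrbf_forward(X, a, centers, sigma=1.0, W=W)

# One illustrative gradient step on `a` (finite differences stand in for the
# paper's analytic weight-adjustment formula).
t = rng.normal(size=(5, 1))                             # hypothetical targets
def loss(a_):
    return np.mean((wrbf_forward(X, a_, centers, 1.0, W)[0] - t) ** 2)
g = np.array([(loss(a + 1e-5 * np.eye(2)[k]) - loss(a)) / 1e-5 for k in range(2)])
a = a - 0.1 * g                                         # gradient-descent update
print(y.shape, H.shape)
```

Setting `a = np.ones(2)` recovers the classic RBF network exactly, which is why the extra parameters cost nothing when they are not needed.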

8.
The traditional extreme learning machine (ELM), a supervised learning model, assigns the input weights and biases of the hidden-layer neurons arbitrarily and completes learning by computing the hidden-layer output weights. To address the insufficient prediction accuracy of the traditional ELM in data analysis and prediction, an ELM improved by the simulated annealing algorithm is proposed. First, a traditional ELM is trained on the training set to obtain the hidden-layer output weights, and an evaluation criterion for the predictions is selected. Then, treating the ELM's hidden-layer input weights and biases as the initial solution and the evaluation criterion as the objective function, the cooling process of simulated annealing searches for the optimal solution: the hidden-layer input weights and biases that minimize the prediction error during learning. Finally, the hidden-layer output weights are computed by the traditional ELM. Experiments on the iris classification dataset and the Boston housing price dataset show that, compared with the traditional ELM, the simulated-annealing-improved ELM performs better on both classification and regression.
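The loop described above, perturb the hidden parameters, score them by retraining the analytic output weights, accept or reject by temperature, can be sketched briefly. The toy data, Gaussian perturbations, geometric cooling schedule, and training RMSE as the objective are assumptions made here, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def elm_error(X, T, Win, b):
    """Fit output weights analytically (least squares) and return training RMSE."""
    H = 1.0 / (1.0 + np.exp(-(X @ Win + b)))
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)
    return np.sqrt(np.mean((H @ beta - T) ** 2))

X = rng.normal(size=(120, 2))
T = X[:, :1] ** 2 + X[:, 1:]                   # toy regression target

n_hid = 10
Win = rng.normal(size=(2, n_hid))              # initial solution: random ELM weights
b = rng.normal(size=n_hid)
init_err = elm_error(X, T, Win, b)
cur = best = init_err

temp = 1.0
for step in range(200):                        # annealing: perturb, accept, cool
    W2 = Win + rng.normal(0, 0.1, Win.shape)
    b2 = b + rng.normal(0, 0.1, b.shape)
    e2 = elm_error(X, T, W2, b2)
    if e2 < cur or rng.random() < np.exp((cur - e2) / temp):
        Win, b, cur = W2, b2, e2               # accept (possibly worse) solution
        best = min(best, cur)
    temp *= 0.97                               # geometric cooling schedule

print(best)
```

Early on, the high temperature lets clearly worse solutions through, which is what distinguishes this from plain hill climbing; as `temp` shrinks, the search settles into a minimum.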

9.
We propose a modified radial basis function (RBF) network in which regression weights are used to replace the constant weights in the output layer. It is shown that the modified RBF network can reduce the number of hidden units significantly. A computationally efficient algorithm, the expectation-maximization (EM) algorithm, is used to estimate the parameters of the regression weights. A salient feature of this algorithm is that it decomposes a complicated multiparameter optimization problem into L separate small-scale optimization problems, where L is the number of hidden units. The superior performance of the modified RBF network over the standard RBF network is illustrated by computer simulations.
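The model structure, each hidden unit contributing a local linear regression rather than a constant, can be shown in a few lines. This sketch makes two simplifications not in the paper: a plain least-squares fit stands in for the EM algorithm (the model is linear in the regression-weight parameters, so this exposes the structure without the EM machinery), and the 1-D cubic target, fixed centers, and width are invented here.

```python
import numpy as np

rng = np.random.default_rng(5)

# Model: y(x) = sum_k phi_k(x) * (a_k . x + c_k), where the regression weights
# (a_k, c_k) replace the constant output weights of a standard RBF network.
def design(X, centers, sigma):
    d2 = np.square(np.linalg.norm(X[:, None, :] - centers[None], axis=2))
    Phi = np.exp(-d2 / (2 * sigma ** 2))
    Xa = np.hstack([X, np.ones((len(X), 1))])          # augmented input [x, 1]
    # Feature block for unit k: phi_k(x) * [x, 1]; stack the blocks over all k.
    return np.hstack([Phi[:, k:k + 1] * Xa for k in range(len(centers))])

X = rng.uniform(-2, 2, size=(100, 1))
T = X ** 3                                             # smooth 1-D target
centers = np.array([[-1.0], [0.0], [1.0]])             # only 3 hidden units

F = design(X, centers, sigma=1.0)
theta, *_ = np.linalg.lstsq(F, T, rcond=None)          # least-squares stand-in for EM
rmse = np.sqrt(np.mean((F @ theta - T) ** 2))
print(rmse)
```

A standard RBF network would need noticeably more Gaussian units to track a cubic this well, which is the unit-count saving the abstract claims.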

10.
A GEP-Optimized RBF Neural Network Algorithm   (cited 1 time: 0 self-citations, 1 by others)
As an artificial neural network that performs function mapping through local tuning, the RBF network performs well in approximation ability, classification ability, and learning speed. However, because the number of hidden nodes and the hidden-node centers are difficult to determine, the accuracy of the whole network suffers, which greatly limits its wide application. This paper therefore proposes an RBF neural network algorithm optimized by gene expression programming (GEP), which optimizes the center vectors and connection weights. Experiments show that the proposed algorithm reduces the prediction error by 48.96% on average compared with the standard RBF algorithm.

