Similar Documents
17 similar documents found.
1.
This paper uses the Young inequality from the conjugate properties of convex functions to construct an optimization objective function for feedforward neural networks. With the weights fixed, the objective is convex in the hidden-layer outputs; with the hidden-layer outputs fixed, it is convex in the weights. Consequently the objective has no local minima. It can be optimized quickly, which greatly improves the learning efficiency of feedforward networks. Simulation experiments show that, compared with traditional algorithms such as error back-propagation (BP), BP with a momentum factor, and existing layer-wise optimization algorithms, the new algorithm converges faster and achieves a lower learning error. The fast algorithm was also applied to simulated ore-body prediction, with good results.

2.
The number of hidden layers and hidden nodes determines the size of a neural network and strongly affects its performance. Provided the minimum number of hidden nodes required by the network is retained, a pruning algorithm can delete redundant nodes, reducing the number of hidden nodes and yielding a more compact network structure. Penalty-function-based pruning appends a penalty term to the objective function, where the penalty term is a function of the network weights. Because an adjustable parameter can be attached to the weight variables in this term, the single penalty function can be generalized into a family of penalty functions that vary with the parameter; the original penalty is the special case in which the parameter takes a fixed value. Experiments on XOR data with a standard BP network show how the hidden-node pruning effect and the network weights change as the penalty function is generalized, and the data analysis identifies generalization parameters that give better pruning and a better network structure.
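The abstract does not give the exact penalty family, so the following is a minimal sketch under assumptions: the classic weight-elimination penalty is used as the base term and generalized with a tunable exponent `p` (a hypothetical stand-in for the adjustable parameter), and its gradient is simply added to plain batch BP on XOR with an oversized hidden layer, so that hidden units whose outgoing weights are driven toward zero become pruning candidates.

```python
import numpy as np

# Hypothetical parameterised pruning penalty (sketch): the classic
# weight-elimination term sum((w/w0)^2 / (1 + (w/w0)^2)) raised to a
# tunable exponent p; p = 1 recovers the usual penalty.
def penalty_grad(w, w0=1.0, p=1.0, lam=1e-3):
    r = (w / w0) ** 2
    base = r / (1.0 + r)
    # d/dw [ (r/(1+r))^p ] = p * base^(p-1) * (2w/w0^2) / (1+r)^2
    return lam * p * base ** (p - 1) * (2 * w / w0 ** 2) / (1.0 + r) ** 2

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# XOR data, 2-4-1 network (oversized hidden layer) trained by plain batch BP
# with the penalty gradient added to each weight update.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(2, 4)), rng.normal(size=(4, 1))

for _ in range(20000):
    H = sigmoid(X @ W1)                 # hidden activations
    out = sigmoid(H @ W2)               # network output
    d2 = (out - y) * out * (1 - out)    # output-layer delta (MSE loss)
    d1 = (d2 @ W2.T) * H * (1 - H)      # hidden-layer delta
    W2 -= 0.5 * (H.T @ d2 + penalty_grad(W2))
    W1 -= 0.5 * (X.T @ d1 + penalty_grad(W1))

# Hidden units whose outgoing weights were driven near zero are pruning candidates.
out = sigmoid(sigmoid(X @ W1) @ W2)
print(np.round(out.ravel(), 2), np.round(np.abs(W2).ravel(), 3))
```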

3.
The proposed algorithm uses the Young inequality from the conjugate properties of convex functions to construct an optimization objective that is convex in both the weights and the hidden-layer outputs, so it has no local minima. The hidden-layer outputs are first treated as variables and updated by optimization; the weights on both sides of the hidden layer are then computed quickly. Numerical experiments show that the algorithm is simple, converges quickly, generalizes well, and greatly reduces the learning error.
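The Young-inequality objective itself is not given in the abstract, so the sketch below only illustrates the alternating, layer-wise flavor of the method under assumptions: the hidden-layer output `H` is treated as a free variable and nudged toward values that explain the targets, after which the weights on both sides of the hidden layer are re-solved by least squares. All function and variable names are illustrative.

```python
import numpy as np

def fit_alternating(X, Y, n_hidden=10, iters=50, rng=np.random.default_rng(0)):
    act, inv_act = np.tanh, np.arctanh
    N = X.shape[0]
    H = np.clip(rng.normal(scale=0.3, size=(N, n_hidden)), -0.9, 0.9)
    for _ in range(iters):
        # weights after the hidden layer: least squares H -> Y
        W2 = np.linalg.lstsq(H, Y, rcond=None)[0]
        # nudge hidden outputs toward targets (damped step), keep them in (-1, 1)
        H_target = Y @ np.linalg.pinv(W2)
        H = np.clip(0.5 * H + 0.5 * H_target, -0.95, 0.95)
        # weights before the hidden layer: least squares X -> arctanh(H)
        W1 = np.linalg.lstsq(X, inv_act(H), rcond=None)[0]
        H = act(X @ W1)                  # recompute consistent hidden outputs
    return W1, W2

# toy usage: approximate y = sin(x), with a bias column appended to the input
X = np.linspace(-3, 3, 200).reshape(-1, 1)
Y = np.sin(X)
Xb = np.hstack([X, np.ones_like(X)])
W1, W2 = fit_alternating(Xb, Y)
pred = np.tanh(Xb @ W1) @ W2
print("train MSE:", float(np.mean((pred - Y) ** 2)))
```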

4.
贾文臣, 叶世伟. 《计算机工程》, 2005, 31(10): 142-144, 176
The proposed algorithm uses the Young inequality from the conjugate properties of convex functions to construct an optimization objective that is convex in both the weights and the hidden-layer outputs, so it has no local minima. The hidden-layer outputs are first treated as variables and updated by optimization; the weights on both sides of the hidden layer are then computed quickly. Numerical experiments show that the algorithm is simple, converges quickly, generalizes well, and greatly reduces the learning error.

5.
An RBF Neural Network Optimization Method Based on a Genetic Algorithm
A new training method for RBF neural networks is proposed: a genetic algorithm optimizes the hidden-layer center values and widths, and recursive least squares trains the weights between the hidden and output layers. Simulations approximating a nonlinear function verify the effectiveness of the algorithm.
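As an illustration of the second training stage only, here is a hedged sketch of recursive least squares (RLS) for the hidden-to-output weights of an RBF network; the centers and widths are simply fixed on a grid rather than GA-optimized, and all names are illustrative.

```python
import numpy as np

def rbf_features(X, centers, width):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def rls_fit(Phi, y, lam=0.99, delta=1e3):
    n = Phi.shape[1]
    w = np.zeros(n)
    P = delta * np.eye(n)                       # inverse correlation matrix
    for phi, t in zip(Phi, y):                  # one sample at a time
        k = P @ phi / (lam + phi @ P @ phi)     # gain vector
        w = w + k * (t - phi @ w)               # update with the a-priori error
        P = (P - np.outer(k, phi @ P)) / lam    # update inverse correlation matrix
    return w

# toy usage: approximate a 1-D nonlinear function
X = np.linspace(-3, 3, 300).reshape(-1, 1)
y = np.sinc(X).ravel()
centers = np.linspace(-3, 3, 12).reshape(-1, 1)   # assumed grid, not GA-optimised
Phi = rbf_features(X, centers, width=0.6)
w = rls_fit(Phi, y)
print("MSE:", float(np.mean((Phi @ w - y) ** 2)))
```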

6.
An RBF Neural Network Optimization Method Based on an Algebraic Algorithm
A new training method for RBF neural networks is proposed: a dynamic K-means method optimizes the hidden-layer center values and widths, and an algebraic algorithm trains the weights between the hidden and output layers. Simulations approximating a nonlinear function verify the effectiveness of the algorithm.
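A hedged sketch of the two-stage idea, with plain k-means and a pseudoinverse solve standing in for the paper's dynamic K-means and algebraic training (neither of which is specified in the abstract):

```python
import numpy as np

def kmeans(X, k, iters=100, rng=np.random.default_rng(0)):
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[labels == j].mean(0) if np.any(labels == j) else C[j]
                      for j in range(k)])
    return C

def train_rbf(X, y, k=10, width=0.5):
    C = kmeans(X, k)                                              # stage 1: centers
    Phi = np.exp(-((X[:, None] - C[None]) ** 2).sum(-1) / (2 * width ** 2))
    w = np.linalg.pinv(Phi) @ y                                   # stage 2: closed-form weights
    return C, w

# toy usage on a 1-D target function
X = np.linspace(-2, 2, 200).reshape(-1, 1)
y = np.exp(-X.ravel() ** 2) * np.cos(3 * X.ravel())
C, w = train_rbf(X, y)
Phi = np.exp(-((X[:, None] - C[None]) ** 2).sum(-1) / (2 * 0.5 ** 2))
print("MSE:", float(np.mean((Phi @ w - y) ** 2)))
```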

7.
An immune monoclonal algorithm is used to optimize the hidden-layer center values and widths of an RBF neural network, and recursive least squares trains the weights between the hidden and output layers. A new affinity-based mutation method is also proposed, which effectively reduces the impact that changes in mutation amplitude have on the algorithm's accuracy while preserving the characteristic antibody mutation of the clonal selection algorithm. Simulation experiments approximating a nonlinear function show that the immune monoclonal algorithm markedly improves the learning ability of the RBF network.

8.
Network Structure Selection in Neural Network Identification of Nonlinear Systems
A weight quasi-entropy of a neural network is defined and added as a constraint term to the conventional objective function used to train a multilayer feedforward network, altering the weight distribution and thereby revising the network structure. Applying this method to neural-network identification of a class of nonlinear systems optimizes the number of model input terms and the number of hidden nodes.
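The exact form of the weight quasi-entropy is not given in the abstract; one plausible reading, sketched below as an assumption, treats the normalized squared weights as a distribution and penalizes its entropy, so the training objective favors a few large weights and many near-zero ones that can later be removed.

```python
import numpy as np

# Assumed form of a "weight quasi-entropy" (not the paper's definition):
# normalise the squared weights into a distribution and take its entropy.
def weight_quasi_entropy(weights, eps=1e-12):
    w2 = np.concatenate([w.ravel() ** 2 for w in weights])
    p = w2 / (w2.sum() + eps)
    return -np.sum(p * np.log(p + eps))

def regularised_loss(pred, target, weights, beta=1e-2):
    mse = np.mean((pred - target) ** 2)
    return mse + beta * weight_quasi_entropy(weights)   # objective with constraint term

# usage with hypothetical two-layer weights
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(5, 8)), rng.normal(size=(8, 1))
print(weight_quasi_entropy([W1, W2]))
```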

9.
A new structure-adaptive radial basis function (RBF) neural network model is proposed. A self-organizing map (SOM) acts as the clustering network: it classifies the input samples by unsupervised learning and passes the cluster centers and their weight vectors to the RBF network as the centers of the radial basis functions and the corresponding weight vectors. The RBF network serves as the base network: Gaussian functions realize the nonlinear mapping from the input layer to the hidden layer, and a supervised learning algorithm trains the output-layer weights, realizing the nonlinear mapping from input to output. Simulations on a letter dataset show that the network performs well.

10.
Dynamical Systems Analysis of Discrete-Time Hopfield Networks
The discrete-time Hopfield network model is a nonlinear dynamical system. A new energy function is introduced for the network state variables, and conditions for the monotone decrease of the network's state energy are obtained from the subgradient property of convex functions. For a Hopfield network with symmetric connection weights and monotonically non-decreasing (not necessarily strictly increasing) activation functions, the fully parallel mode converges asymptotically if the gain of each neuron's activation function exceeds the smallest eigenvalue of the weight matrix; in serial mode, it suffices that, for every neuron, the sum of the activation-function gain and that neuron's self-feedback weight is greater than zero. Moreover, if the activation functions are monotone and the connection weights are symmetric, the subgradient property of convex functions is used to prove that the fully parallel discrete-time Hopfield network converges to a limit cycle of period at most 2.
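For reference, a minimal sketch of the discrete-time Hopfield dynamics the analysis refers to, with symmetric weights, sign activations, and the standard energy E(x) = -1/2 xᵀWx - bᵀx evaluated under serial and fully parallel updates; the convergence conditions quoted above are not re-derived here.

```python
import numpy as np

def energy(W, b, x):
    return -0.5 * x @ W @ x - b @ x

def run_hopfield(W, b, x, serial=True, steps=50, rng=np.random.default_rng(0)):
    for _ in range(steps):
        if serial:                              # asynchronous: one neuron per step
            i = rng.integers(len(x))
            x[i] = 1 if W[i] @ x + b[i] >= 0 else -1
        else:                                   # fully parallel update
            x = np.where(W @ x + b >= 0, 1, -1)
        # with symmetric W and zero diagonal, the serial mode never increases E
    return x, energy(W, b, x)

n = 8
rng = np.random.default_rng(1)
A = rng.normal(size=(n, n))
W = (A + A.T) / 2
np.fill_diagonal(W, 0)                          # symmetric, zero self-feedback
x0 = rng.choice([-1, 1], size=n).astype(float)
print(run_hopfield(W, np.zeros(n), x0.copy(), serial=True))
```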

11.
Feedforward neural networks are the most commonly used function approximation techniques in neural networks. By the universal approximation theorem, a single-hidden-layer feedforward neural network (FNN) is sufficient to approximate the corresponding desired outputs arbitrarily closely. Some researchers use genetic algorithms (GAs) to explore the global optimal solution of the FNN structure, but using GA to train an FNN is rather time consuming. In this paper, we propose a new optimization algorithm for a single-hidden-layer FNN. The method is based on a convex combination algorithm for massaging information in the hidden layer; in effect, this technique explores a continuum idea that combines the classic mutation and crossover strategies of GA. The proposed method has an advantage over GA, which requires a lot of preprocessing work to break the data down into a sequence of binary codes before learning or mutation can be applied. We also set up a new error function to measure the performance of the FNN and obtain the optimal choice of the connection weights, so the nonlinear optimization problem can be solved directly. Several computational experiments illustrate that the proposed algorithm has good exploration and exploitation capabilities in searching for the optimal weights of single-hidden-layer FNNs.
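A hedged sketch of the continuum operator hinted at in the abstract: a new candidate hidden-layer weight matrix is formed as a convex combination of two parents plus a small perturbation, merging crossover and mutation into one real-valued step. The noise term and all names are illustrative assumptions, not the paper's exact operator.

```python
import numpy as np

def convex_combination_offspring(W_a, W_b, sigma=0.01, rng=np.random.default_rng(0)):
    alpha = rng.uniform(0.0, 1.0)                      # convex-combination coefficient
    child = alpha * W_a + (1.0 - alpha) * W_b          # "crossover" on a continuum
    return child + sigma * rng.normal(size=W_a.shape)  # mild "mutation"

parent_a = np.random.default_rng(1).normal(size=(4, 6))
parent_b = np.random.default_rng(2).normal(size=(4, 6))
print(convex_combination_offspring(parent_a, parent_b).shape)
```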

12.
Artificial neural networks (ANN) have been extensively used as global approximation tools in the context of approximate optimization. An ANN traditionally minimizes the absolute difference between target outputs and approximate outputs; as a result, when it is used as a metamodel for inequality constraint functions, the approximate optimal solutions can actually be infeasible. The paper explores the development of an efficient back-propagation neural network (BPN)-based metamodel that ensures the constraint feasibility of the approximate optimal solution. The BPN architecture is optimized via two approaches, a derivative-based method and a genetic algorithm (GA), to determine the interconnection weights between layers in the network. The proposed approach is verified on a standard ten-bar truss problem. Finally, a GA-based approximate optimization of a suspension with an optical flying head is conducted to enhance shock resistance in addition to the dynamic characteristics.

13.
Interval data offer a valuable way of representing the available information in complex problems where uncertainty, inaccuracy, or variability must be taken into account. Considered in this paper is the learning of interval neural networks, whose input and output are vectors with interval components and whose weights are real numbers. The back-propagation (BP) learning algorithm is very slow for interval neural networks, just as for usual real-valued neural networks. The extreme learning machine (ELM) has a faster learning speed than the BP algorithm. In this paper, ELM is applied to the learning of interval neural networks, resulting in an interval extreme learning machine (IELM). The ELM for usual feedforward neural networks has two steps: the first randomly generates the weights connecting the input and hidden layers, and the second uses the Moore–Penrose generalized inverse to determine the weights connecting the hidden and output layers. The first step can be applied directly to interval neural networks, but the second cannot, because of the nonlinear constraint conditions involved in IELM. Instead, the same idea as in the BP algorithm is used to form a nonlinear optimization problem that determines the weights connecting the hidden and output layers of IELM. Numerical experiments show that IELM is much faster than the usual BP algorithm, and its generalization performance is much better, while its training error is slightly worse, implying that BP may be over-fitting.
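For the ordinary (real-valued) case, the two ELM steps described above can be sketched as follows; the interval version replaces the pseudoinverse step with a constrained nonlinear optimization, which is not reproduced here. Names and dimensions are illustrative.

```python
import numpy as np

def elm_fit(X, Y, n_hidden=50, rng=np.random.default_rng(0)):
    W_in = rng.uniform(-1, 1, size=(X.shape[1], n_hidden))   # step 1: random input weights
    b = rng.uniform(-1, 1, size=n_hidden)
    H = np.tanh(X @ W_in + b)
    W_out = np.linalg.pinv(H) @ Y                             # step 2: Moore-Penrose pseudoinverse
    return W_in, b, W_out

def elm_predict(X, W_in, b, W_out):
    return np.tanh(X @ W_in + b) @ W_out

# toy usage: approximate y = sin(x)
X = np.random.default_rng(1).uniform(-3, 3, size=(400, 1))
Y = np.sin(X)
params = elm_fit(X, Y)
print("train MSE:", float(np.mean((elm_predict(X, *params) - Y) ** 2)))
```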

14.
A novel algorithm for weight adjustments in a multilayer neural network is derived using the principles of dynamic programming. The algorithm computes the optimal values for weights on a layer-by-layer basis starting from the output layer of the network. The advantage of this algorithm is that it provides an error function for every hidden layer expressed entirely in terms of the weights and outputs of the hidden layer, and minimization of this error function yields the optimum weights for the hidden layer.

15.
To improve the approximation ability of neural networks, a neural network model and algorithm based on sequence inputs are proposed. The hidden layer consists of sequence neurons and the output layer of ordinary neurons; the input is a multidimensional discrete sequence and the output an ordinary real-valued vector. Each dimension of the discrete input sequence is first mapped point by point with weights in order, these mapped results are then aggregated with weights and mapped to the outputs of the hidden-layer sequence neurons, and finally the network output is computed. A learning algorithm for the model is designed using the Levenberg-Marquardt algorithm. Simulation results show that when the number of input nodes and the sequence length are close, the model's approximation ability is clearly better than that of an ordinary neural network.

16.
Elman neural networks provide dynamic mapping when processing complex nonlinear data. However, because the Elman network inherits features of the back-propagation network to some extent, it has several defects: it easily falls into local minima, uses a fixed learning rate, and has an uncertain number of hidden-layer neurons, all of which affect processing accuracy. We therefore optimize the weights, thresholds, and number of hidden-layer neurons of the Elman network with a genetic algorithm, which improves the training speed and generalization ability of the Elman network and yields an optimal model. Case analysis shows that the new algorithm is superior to the traditional model in convergence rate, prediction error, number of successful training runs, and other measures, demonstrating its effectiveness and its suitability for wider use.

17.
There is no method that determines the optimal topology of multi-layer neural networks for a given problem; usually the designer selects a topology for the network and then trains it. Since determining the optimal topology of neural networks belongs to the class of NP-hard problems, most existing algorithms for determining the topology are approximate. These algorithms can be classified into four main groups: pruning algorithms, constructive algorithms, hybrid algorithms, and evolutionary algorithms. They can produce near-optimal solutions, but most use a hill-climbing method and may get stuck at local minima. In this article, we first introduce a learning automaton and study its behaviour, and then present an algorithm based on it, called the survival algorithm, for determining the number of hidden units of three-layer neural networks. The survival algorithm uses learning automata as a global search method to increase the probability of obtaining the optimal topology. It treats the optimization of the network topology as object partitioning rather than as searching or parameter optimization, as in existing algorithms. In the survival algorithm, training begins with a large network, and a near-optimal topology is then obtained by adding and deleting hidden units. The algorithm has been tested on a number of problems, and simulations show that the generated networks are near-optimal.
