Similar Articles
20 similar articles found (search time: 0 ms)
1.
In this paper, the deterministic convergence of an online gradient method with penalty and momentum is investigated for training two-layer feedforward neural networks. The monotonicity of the new error function with the penalty term during the training iteration is first proved. Based on this result, we show that the weights remain uniformly bounded during training and that the algorithm is deterministically convergent. Sufficient conditions are also provided for both weak and strong convergence.
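The update analysed here, a gradient step combined with an L2 penalty term and a classical momentum term, can be sketched as follows (a minimal illustration on a toy quadratic error; the step sizes and the test objective are our own, not from the paper):

```python
import numpy as np

def penalty_momentum_step(w, w_prev, grad, eta=0.1, lam=0.01, mu=0.5):
    """One gradient step with an L2 penalty (lam * w) and a classical
    momentum term mu * (w - w_prev). Parameter values are illustrative."""
    return w - eta * (grad + lam * w) + mu * (w - w_prev)

# Drive a toy quadratic error E(w) = 0.5 * ||w||^2 (gradient = w) to its minimum.
w_prev = np.array([1.0, -2.0])
w = w_prev.copy()
for _ in range(200):
    w, w_prev = penalty_momentum_step(w, w_prev, grad=w), w
```

For this choice of eta, lam, and mu the per-coordinate recursion is a stable second-order difference equation, so the iterates contract toward zero, matching the paper's monotonicity and convergence claims on this toy case.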

2.
In this brief, we consider an online gradient method with penalty for training feedforward neural networks. Specifically, the penalty is a term proportional to the norm of the weights; its role is to control the magnitude of the weights and to improve the generalization performance of the network. By proving that the weights are automatically bounded during network training with the penalty, we simplify the conditions required in the literature for the convergence of the online gradient method. A numerical example is given to support the theoretical analysis.
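The automatic boundedness claim is easy to check numerically: an L2 penalty makes each update contract the weights toward zero, so a uniform gradient bound G yields the coordinate-wise bound G/λ. A small sketch (the constants and the random-gradient stand-in are our own assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
eta, lam, G = 0.1, 0.5, 1.0        # step size, penalty coefficient, gradient bound
w = np.zeros(3)
norms = []
for _ in range(500):
    g = rng.uniform(-G, G, size=3)    # any gradient with |g_i| <= G
    w = w - eta * (g + lam * w)       # online update with penalty term
    norms.append(np.abs(w).max())
```

By induction, |w_i| stays below the fixed point of x -> (1 - eta*lam) x + eta*G, which is G/lam, no matter what bounded gradients arrive.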

3.
Sigma-Pi (Σ-Π) neural networks (SPNNs) are known to provide more powerful mapping capability than traditional feed-forward neural networks. A unified convergence analysis for the batch gradient algorithm for SPNN learning is presented, covering three classes of SPNNs: Σ-Π-Σ, Σ-Σ-Π and Σ-Π-Σ-Π. The monotonicity of the error function in the iteration is also guaranteed.

4.
Fast Learning Algorithms for Feedforward Neural Networks (cited 7 times: 0 self-citations, 7 by others)
To improve the training speed of multilayer feedforward neural networks (MLFNNs), we propose and explore two new fast backpropagation (BP) algorithms, obtained (1) by changing the error function, using the exponential attenuation (bell impulse) function and the Fourier kernel function as alternatives, and (2) by introducing a hybrid conjugate-gradient algorithm for global optimization with a dynamic learning rate, to overcome the conventional BP problems of getting stuck in local minima and slow convergence. Our experimental results demonstrate the effectiveness of the modified error functions, whose training speed exceeds that of existing fast methods. In addition, on real speech data our hybrid algorithm achieves a higher recognition rate than the Polak-Ribière conjugate-gradient and conventional BP algorithms, and requires less training time, is less complicated, and is more robust than the Fletcher-Reeves conjugate-gradient and conventional BP algorithms.

5.
Feedforward neural networks (FNNs) have been proposed to solve complex problems in pattern recognition, classification, and function approximation. Despite the general success of learning methods for FNNs, such as the backpropagation (BP) algorithm and second-order algorithms, the long learning time needed for convergence remains a problem to be overcome. In this paper, we propose a new hybrid algorithm for FNNs that combines unsupervised training of the hidden neurons (Kohonen algorithm) with supervised training of the output neurons (gradient descent). Simulation results show the effectiveness of the proposed algorithm compared with other well-known learning methods.
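A minimal sketch of such a two-stage hybrid, assuming Gaussian hidden units whose centres are placed by winner-take-all competitive (Kohonen-style) learning and whose linear readout is then trained by gradient descent (the sizes, rates, and kernel width below are illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D regression problem.
X = np.linspace(-1, 1, 40).reshape(-1, 1)
y = np.sin(np.pi * X[:, 0])

# Stage 1: unsupervised, competitive placement of hidden centres.
k = 8
centres = X[rng.choice(len(X), k, replace=False)].copy()
for _ in range(20):
    for x in X:
        j = np.argmin(np.linalg.norm(centres - x, axis=1))  # winning unit
        centres[j] += 0.1 * (x - centres[j])                # move winner toward input

# Stage 2: supervised gradient descent on the linear output layer only.
def hidden(X):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / 0.1)          # Gaussian activations; width is illustrative

H = hidden(X)
w = np.zeros(k)
for _ in range(5000):
    w -= 0.05 * H.T @ (H @ w - y) / len(X)

mse = float(np.mean((H @ w - y) ** 2))
```

Because only the output layer is trained supervised, each gradient step is a linear least-squares update, which is where the speed-up over full BP comes from.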

6.
Optimal Design of Feedforward Neural Networks Using a Real-Coded Genetic Algorithm (cited 11 times: 0 self-citations, 11 by others)
叶德谦, 康建红, 杨樱. 《计算机工程》 (Computer Engineering), 2005, 31(16): 163-164, 175
A real-coded genetic algorithm with a comprehensive control strategy is proposed and used to optimize the structure and the weights of a feedforward network simultaneously. Simulation experiments on a nonlinear function approximation problem show that the algorithm determines the network structure and weights quickly and effectively.
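A toy real-coded GA of the general kind described, with arithmetic crossover and sparse Gaussian mutation over real-valued chromosomes, minimising a stand-in error function (the operators and constants here are generic illustrations; the paper's specific comprehensive control strategy is not detailed in the abstract):

```python
import numpy as np

rng = np.random.default_rng(0)

def error(w):                     # stand-in for the network training error
    return float(np.sum(w ** 2))

pop = rng.uniform(-5, 5, size=(30, 4))          # real-coded chromosomes
for _ in range(100):
    scores = np.array([error(ind) for ind in pop])
    parents = pop[np.argsort(scores)[:10]]      # truncation selection (elitist)
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.choice(len(parents), 2, replace=False)]
        alpha = rng.uniform(0, 1, size=4)
        child = alpha * a + (1 - alpha) * b                          # arithmetic crossover
        child += rng.normal(0, 0.1, size=4) * (rng.random(4) < 0.2)  # sparse mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = min(error(ind) for ind in pop)
```

Real coding lets crossover interpolate directly between parent weight vectors, which is why it suits simultaneous structure-and-weight optimization better than binary coding.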

7.
Optimal Design of Neural Networks Based on a Real-Coded Genetic Algorithm (cited 3 times: 0 self-citations, 3 by others)
An improved real-coded genetic algorithm based on a comprehensive control strategy is proposed and used to optimize the structure and the weights of a feedforward neural network. Experimental results show that the algorithm determines the network structure and weights quickly and effectively.

8.
A Three-Stage Learning Method for Feedforward Neural Networks Based on Complementary Genetic Operators (cited 1 time: 0 self-citations, 1 by others)
This paper proposes a new three-stage learning method for feedforward neural networks based on complementary genetic operators. The learning process is divided into three stages. The first stage is structure identification: a genetic algorithm selects the number of hidden nodes and sets the initial parameters, and efficient complementary genetic operators are designed based on the observed complementary effect of genetic operators. The second stage is parameter identification: an efficient training algorithm such as the Levenberg-Marquardt (L-M) algorithm further refines the network parameters. The third stage is pruning, which seeks a minimal network structure to improve generalization. Throughout the learning process, a satisfactory balance is achieved between the controllability of learning and the approximation accuracy, complexity, and generalization ability of the network. Simulation results demonstrate the effectiveness of the method.

9.
To solve linear matrix equations, a recurrent neural network model based on the negative gradient method is applied, and the global exponential convergence of this network for solving linear matrix equations in real time is investigated. Building on an asymptotic convergence analysis, it is further proved that the network is globally exponentially convergent when the coefficient matrices satisfy the solvability condition, and globally stable when they do not. Computer simulation results confirm the theoretical analysis and the effectiveness of the network for solving linear matrix equations in real time.
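For the equation AX = B, the negative-gradient network descends the residual energy E(X) = ½‖AX − B‖²_F, giving the dynamics dX/dt = −γ Aᵀ(AX − B). A discretised sketch for a small solvable instance (the matrices, γ, and step size are our choices):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 1.0]])          # invertible, so AX = B is solvable
B = np.array([[1.0, 2.0],
              [0.0, 1.0]])
gamma, dt = 1.0, 0.05

# Euler discretisation of the network dynamics dX/dt = -gamma * A^T (A X - B).
X = np.zeros((2, 2))
for _ in range(2000):
    X -= dt * gamma * A.T @ (A @ X - B)

residual = float(np.linalg.norm(A @ X - B))
```

When the solvability condition holds, the residual decays exponentially at a rate set by the smallest eigenvalue of AᵀA, consistent with the global exponential convergence result.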

10.
Another Algorithm for Improving the Convergence Rate of BP Networks (cited 3 times: 1 self-citation, 3 by others)
陈玉芳, 雷霖. 《计算机仿真》 (Computer Simulation), 2004, 21(11): 74-77
Improving the training rate of BP networks is an important task in improving their performance. Building on the error backpropagation (BP) algorithm, this paper proposes a new training algorithm that modifies the traditional momentum method for BP networks, adjusting the weights dynamically to reduce training time. A simulation example of the improved algorithm is provided; the results show its advantage over the traditional BP algorithm on certain problems.
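The abstract does not spell out the dynamic adjustment rule, so here is one common heuristic in the same spirit (the "bold driver" scheme): grow the learning rate while the error keeps falling, and shrink it and damp the momentum when the error rises. This is purely illustrative, not the paper's exact method:

```python
import numpy as np

def train_bold_driver(grad, loss, w, eta=0.01, mu=0.9, iters=300):
    """Momentum BP with a dynamically adjusted learning rate (bold-driver
    heuristic). A stand-in for the paper's rule, which the abstract omits."""
    v = np.zeros_like(w)
    prev = loss(w)
    for _ in range(iters):
        v = mu * v - eta * grad(w)
        w_new = w + v
        cur = loss(w_new)
        if cur <= prev:
            eta *= 1.05                 # accept step, speed up
            w, prev = w_new, cur
        else:
            eta *= 0.5                  # reject step, slow down
            v = np.zeros_like(w)        # damp momentum after a bad step
    return w

# Toy quadratic error surface: loss = 0.5 * ||w||^2, gradient = w.
w = train_bold_driver(grad=lambda w: w,
                      loss=lambda w: 0.5 * float(np.sum(w ** 2)),
                      w=np.array([3.0, -2.0]))
```

The accept/reject test makes the loss non-increasing over accepted iterates, which is the practical source of the speed-up over a fixed learning rate.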

11.
A New Learning Algorithm for Feedforward Neural Networks and Its Application (cited 18 times: 0 self-citations, 18 by others)
张星昌. 《控制与决策》 (Control and Decision), 1997, 12(3): 213-216
To improve the efficiency of weight learning in multilayer feedforward neural networks, a new learning algorithm is proposed by introducing the variable metric (quasi-Newton) method. In theory, the new algorithm not only retains all the advantages of variable metric optimization but also plays the same role as the momentum and correction terms in the Kick-Out learning algorithm, while avoiding the difficulty of choosing appropriate momentum and correction coefficients. Simulation experiments confirm the effectiveness of the new algorithm for modeling nonlinear dynamic systems.

12.
This paper presents a new stochastic learning method for feedforward neural networks, focusing on its implementation, and discusses combining it with the BP algorithm to obtain a highly practical neural network learning algorithm.

13.
Multistable neural networks have attracted much interest in recent years, since monostable networks are computationally restricted. This paper studies a recurrent network of N linear threshold neurons without self-excitatory connections. Our analysis shows that this network exhibits Winner-Take-All (WTA) behavior, which is recognized as a basic computational model in the brain. The contributions of this paper are: (1) it is proved mathematically that the proposed model is non-divergent; (2) an important implication (Winner-Take-All) of the proposed network model is studied; (3) digital computer simulations are carried out to validate the theoretical findings. This work was supported by the Specialized Research Fund for the Doctoral Program of Higher Education of China under Grant 200806141049.

14.
Single-layer, continuous-time cellular neural/nonlinear networks (CNN) are considered with linear templates. The networks are programmed by the template-parameters. A fundamental question in template training or adaptation is the gradient computation or approximation of the error as a function of the template parameters. Exact equations are developed for computing the gradients. These equations are similar to the CNN network equations, i.e. they have the same neighborhood and connectivity as the original CNN network. It is shown that a CNN network, with a modified output function, can compute the gradients. Thus, fast on-line gradient computation is possible via the CNN Universal Machine, which allows on-line adaptation and training. The method for computing the gradient on-chip is investigated and demonstrated.

15.
In this paper, we consider a general class of neural networks with arbitrary constant delays in the neuron interconnections and neuron activations belonging to the set of discontinuous, monotone increasing, and (possibly) unbounded functions. Based on topological degree theory and the Lyapunov functional method, we provide some new sufficient conditions for the global exponential stability and global convergence in finite time of these delayed neural networks. Under these conditions, the uniqueness of the solution to the initial value problem (IVP) is proved. The exponential convergence rate can be quantitatively estimated from the parameters defining the neural network. These conditions are easily testable and independent of the delay. Finally, some remarks and examples are discussed to compare the present results with existing ones.

16.
A Quasi-Newton Learning Algorithm for Feedforward Neural Networks (cited 10 times: 0 self-citations, 10 by others)
杨秋贵, 张杰. 《控制与决策》 (Control and Decision), 1997, 12(4): 357-360
To address the shortcomings of existing BP learning algorithms for feedforward neural networks, a neural network learning algorithm based on the quasi-Newton method is proposed by drawing on nonlinear optimization techniques. The algorithm effectively improves the learning convergence speed, achieving better convergence performance and faster learning than the conventional BP algorithm.
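The quasi-Newton idea, maintaining a positive-definite approximation of the inverse Hessian from gradient differences instead of computing second derivatives, can be sketched with a textbook BFGS update plus Armijo backtracking. This is a generic BFGS sketch on a toy quadratic, not the paper's exact algorithm:

```python
import numpy as np

def bfgs(f, grad, x, iters=50):
    """Minimal BFGS with an Armijo backtracking line search."""
    n = len(x)
    H = np.eye(n)                        # inverse-Hessian approximation
    g = grad(x)
    for _ in range(iters):
        p = -H @ g                       # quasi-Newton search direction
        eta = 1.0
        while f(x + eta * p) > f(x) + 1e-4 * eta * (g @ p):
            eta *= 0.5                   # backtrack until sufficient decrease
        x_new = x + eta * p
        g_new = grad(x_new)
        s, ydiff = x_new - x, g_new - g
        sy = s @ ydiff
        if sy > 1e-12:                   # curvature condition keeps H positive definite
            rho = 1.0 / sy
            I = np.eye(n)
            H = (I - rho * np.outer(s, ydiff)) @ H @ (I - rho * np.outer(ydiff, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Quadratic test: f(x) = 0.5 x^T A x - b^T x, minimiser A^{-1} b = (1, 1).
A = np.array([[3.0, 0.0], [0.0, 1.0]])
b = np.array([3.0, 1.0])
x = bfgs(lambda x: 0.5 * x @ A @ x - b @ x,
         lambda x: A @ x - b,
         np.zeros(2))
```

Because the curvature information is accumulated from first-order quantities only, each iteration costs little more than a BP step while converging superlinearly near the minimum, which is the source of the speed-up the paper reports.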

17.
18.
Stability Analysis of Delayed Cellular Neural Networks Based on the LMI Method (cited 9 times: 0 self-citations, 9 by others)
A neural network is a complex large-scale nonlinear system, and the dynamic behavior of delayed neural networks is even richer and more complex. Among existing methods for studying the stability of delayed neural networks, the Lyapunov method is the most popular: it converts the stability problem into one concerning certain functionals suitably defined along the system trajectories, from which the corresponding stability conditions can be obtained. This paper derives some sufficient conditions for the asymptotic stability of delayed cellular neural networks. Using the Lyapunov-Krasovskii stability theory for functional differential equations and the linear matrix inequality (LMI) approach, the authors refine and generalize some existing results; the new conditions are less conservative than those reported in the literature. Additional criteria for determining the stability of delayed cellular neural networks are also given.

19.
Efficient Neural Network Training Based on Analytic Correction of Output-Layer Weights (cited 3 times: 0 self-citations, 3 by others)
Based on an analysis of the gradient of the training error with respect to the weights, this paper proposes alternating correction of the output-layer and hidden-layer weights. Exploiting the dependence between the output-layer and hidden-layer weights, concrete methods are established for analytic correction of the output-layer weights and for correction of the hidden-layer weights; the proposed approach improves the effectiveness of training by improving the accuracy of the weight corrections. Based on the relationship between the output error of an output node and its total input error, a further method is proposed to improve the generalization of the resulting network. Numerical examples show that the method significantly improves training efficiency and effectively enhances generalization.
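The alternating scheme, solving the linear output layer exactly by least squares at each round and then taking a gradient step on the hidden layer with that readout held fixed, can be sketched as follows (the architecture, sizes, and rates are illustrative assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (50, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2          # toy regression target

n_hidden = 6
W1 = rng.normal(0, 1, (2, n_hidden))        # hidden weights, trained by gradient
b1 = np.zeros(n_hidden)

def hidden(X):
    return np.tanh(X @ W1 + b1)

for _ in range(100):
    H = np.column_stack([hidden(X), np.ones(len(X))])
    w2, *_ = np.linalg.lstsq(H, y, rcond=None)   # analytic output-layer weights
    err = H @ w2 - y
    # One gradient step on the hidden layer with w2 held fixed.
    Hh = H[:, :-1]
    dH = np.outer(err, w2[:-1]) * (1 - Hh ** 2)  # backprop through tanh
    W1 -= 0.05 * X.T @ dH / len(X)
    b1 -= 0.05 * dH.mean(0)

# Final analytic readout for the trained hidden layer.
H = np.column_stack([hidden(X), np.ones(len(X))])
w2, *_ = np.linalg.lstsq(H, y, rcond=None)
mse = float(np.mean((H @ w2 - y) ** 2))
```

Since the output layer is linear in its weights, the least-squares step is exact rather than iterative, which is what makes the alternating correction more accurate than a pure gradient update of both layers.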

20.
In this paper, we study a class of nonautonomous cellular neural networks with reaction-diffusion terms. By employing the method of variation of parameters, applying inequality techniques, and introducing a number of real parameters, we present some sufficient conditions ensuring the boundedness and global exponential stability of the solutions of nonautonomous cellular neural networks with reaction-diffusion terms. The results obtained extend and improve earlier publications. Finally, three examples with numerical simulations are provided to show the correctness of our analysis.
