Similar Documents
19 similar documents found.
1.
A Class of Modified DY Conjugate Gradient Methods and Its Global Convergence (cited by 2: 0 self, 2 others)
This paper proposes a class of modified DY conjugate gradient methods for unconstrained optimization. The algorithm uses a new iteration scheme that generates a sufficient descent direction at every iteration on its own. Global convergence is proved under the Wolfe line search. Numerical results confirm that the algorithm is effective.
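For context on the DY family recurring in entries 1–5, the classical Dai–Yuan update sets β_k = ‖g_{k+1}‖² / d_kᵀ(g_{k+1} − g_k). The sketch below is not the paper's modified scheme: it is a minimal plain-DY iteration in which a simple Armijo backtracking search stands in for the Wolfe search, and the convex quadratic test problem is invented for illustration.

```python
import numpy as np

def dy_conjugate_gradient(f, grad, x0, max_iter=500, tol=1e-8):
    """Dai-Yuan conjugate gradient sketch:
    beta_k = ||g_{k+1}||^2 / d_k^T (g_{k+1} - g_k),
    with Armijo backtracking standing in for a Wolfe line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking line search
        alpha, c1 = 1.0, 1e-4
        while f(x + alpha * d) > f(x) + c1 * alpha * g.dot(d):
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        denom = d.dot(g_new - g)
        beta = g_new.dot(g_new) / denom if abs(denom) > 1e-16 else 0.0
        d = -g_new + beta * d  # DY direction update
        x, g = x_new, g_new
    return x

# Invented test problem: convex quadratic f(x) = 0.5 x^T A x - b^T x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda v: 0.5 * v.dot(A).dot(v) - b.dot(v)
grad = lambda v: A.dot(v) - b
x_star = dy_conjugate_gradient(f, grad, np.zeros(2))
```

On a quadratic the denominator d_kᵀ(g_{k+1} − g_k) = α d_kᵀA d_k is positive, so the direction stays a descent direction even without the Wolfe curvature condition.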

2.
By combining the MFR and MDY methods and adjusting the search direction, this paper proposes a class of modified DY conjugate gradient methods for unconstrained optimization. At every iteration the method generates a sufficient descent direction on its own, independent of any line search. Under suitable conditions, the algorithm is proved globally convergent for nonconvex optimization problems under the Armijo line search. Numerical experiments show that the algorithm is effective.

3.
Motivated by using a modified HS method to improve efficiency and the DY method to guarantee global convergence, two new hybrid conjugate gradient methods for large-scale unconstrained optimization are proposed under different sets of conditions. Global convergence of both algorithms is proved under the general Wolfe line search without imposing a descent condition. Numerical experiments show that the proposed algorithms are effective, performing particularly well on some large-scale unconstrained problems.

4.
An improved conjugate gradient method for unconstrained optimization is proposed on the basis of the DY conjugate gradient method. The method is sufficiently descent under the standard Wolfe line search, and the algorithm is globally convergent. Numerical results demonstrate its effectiveness. Finally, the algorithm is applied to nonlinear parameter estimation for an SO2 oxidation reaction kinetics model, with satisfactory results.

5.
Based on a modified conjugate gradient formula, this paper proposes a conjugate gradient algorithm with sufficient descent. The algorithm requires no line search; its stepsize is given by a fixed formula. To some extent, the algorithm exploits second-order information about the objective function, performing an exact line search on an (approximate) quadratic model of the objective. Only one gradient evaluation is needed per iteration, making the algorithm especially suitable for large-scale optimization. A global convergence analysis is given, with a strong convergence result. Numerical experiments show that the algorithm is promising.

6.
Concrete Tomography Based on a Preconditioned Conjugate Gradient Method (cited by 1: 0 self, 1 other)
樊瑶, 赵祥模, 褚燕利, 党乐. Computer Engineering (《计算机工程》), 2008, 34(23): 258-260
Starting from the conjugate gradient iteration for conventional image reconstruction, a preconditioned conjugate gradient method is proposed. A new preconditioner M improves the condition number of the coefficient matrix; combining it with the ordinary conjugate gradient method yields the preconditioned algorithm. Experimental results show that the preconditioned conjugate gradient algorithm achieves better CT reconstruction and denoising than the plain conjugate gradient algorithm, improving both computational accuracy and reconstructed image quality.
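As background for entry 6, a generic preconditioned conjugate gradient iteration for a symmetric positive definite system Ax = b looks as follows. The Jacobi (diagonal) preconditioner is only a stand-in for the paper's preconditioner M, and the small test system is invented for illustration.

```python
import numpy as np

def pcg(A, b, M_inv, x0=None, tol=1e-10, max_iter=100):
    """Preconditioned conjugate gradient for SPD A.
    M_inv is a callable applying the inverse preconditioner M^{-1}."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A.dot(x)           # residual
    z = M_inv(r)               # preconditioned residual
    p = z.copy()               # search direction
    rz = r.dot(z)
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Ap = A.dot(p)
        alpha = rz / p.dot(Ap)
        x += alpha * p
        r -= alpha * Ap
        z = M_inv(r)
        rz_new = r.dot(z)
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Invented SPD system; Jacobi preconditioner as a simple stand-in for M
A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
diag = np.diag(A)
x = pcg(A, b, lambda r: r / diag)
```

A better-conditioned M⁻¹A clusters the eigenvalues, which is what reduces the iteration count in the reconstruction setting the abstract describes.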

7.
Application of the Conjugate Gradient Method to BP Networks (cited by 7: 0 self, 7 others)
To address the slow convergence and local oscillation of the widely used BP algorithm for feedforward multilayer networks, this paper proposes a conjugate gradient improvement of BP: weights are updated along conjugate gradient directions, and a probabilistic acceptance rule decides whether a change in the objective value is kept. A penalty-function method for improving the network's resistance to overfitting is also given. Examples show that, from different initial values, the conjugate gradient method exhibits fast global convergence.

8.
This paper proposes a modified PRP conjugate gradient method for unconstrained optimization. The algorithm computes its parameter by a new formula that avoids excessively small stepsizes. Under suitable conditions, the algorithm is shown to have the descent property and to be globally convergent under the strong Wolfe line search. Preliminary numerical results are reported.

9.
This paper proposes a new nonstandard conjugate gradient algorithm for unconstrained optimization, whose search direction resembles that of curve search algorithms. Global convergence of the new algorithm is proved, and numerical simulations verify that it is effective and fast.

10.
To obtain high-quality reconstructed images quickly, a symmetric conjugate gradient imaging algorithm is proposed, greatly reducing the number of iterations. The ERT physical model is normalized and Tikhonov-regularized, the idea of QR decomposition is introduced into the solution of the ERT equations, and a QR-based symmetric conjugate gradient algorithm achieving single-step image reconstruction is proposed. Theoretical analysis shows that the algorithm has good convergence. Simulation experiments on typical flow patterns demonstrate that the algorithm can…

11.
In this paper, two modified spectral conjugate gradient methods satisfying the sufficient descent property are developed for unconstrained optimization problems. For uniformly convex problems, the first modified spectral conjugate gradient algorithm is proposed under the Wolfe line search rule; its search direction is sufficiently descent for uniformly convex functions. Furthermore, based on the Dai–Liao conjugacy condition, the second spectral conjugate gradient algorithm generates a sufficient descent direction at each iteration for general functions, and can therefore be regarded as a modified version of the Dai–Liao algorithm. Under suitable conditions, the proposed algorithms are globally convergent for uniformly convex functions and general functions, respectively. Numerical results show that the approaches presented in this paper are feasible and efficient.

12.
In this paper, based on the numerical efficiency of the Hestenes–Stiefel (HS) method, a new modified HS algorithm is proposed for unconstrained optimization. The new direction, independent of the line search, satisfies the sufficient descent condition. Motivated by theoretical and numerical features of the three-term conjugate gradient (CG) methods of Narushima et al., and similar to the approach of Dai and Kou, the new direction is computed by minimizing the distance between the CG direction and the direction of those three-term CG methods. Under mild conditions, we establish global convergence of the new method for general functions when the standard Wolfe line search is used. Numerical experiments on test problems from the CUTEst collection demonstrate the efficiency of the proposed method.

13.
To address the low quality of magnetic resonance images reconstructed from undersampled data, a reconstruction algorithm based on convex-nonconvex sparse regularization and plug-and-play proximal gradient descent is proposed. First, the proximal operator of the convex-nonconvex sparse regularizer is derived. A proximal gradient descent algorithm is then built on this operator. Finally, replacing the proximal operator with a suitable denoiser (such as a neural network denoiser) yields a plug-and-play proximal gradient descent algorithm, which is applied to MRI reconstruction. In numerical experiments comparing different target images, sampling masks, and denoisers, the proposed algorithm with a neural network denoiser improves peak signal-to-noise ratio by 6.26 dB over existing algorithms, with markedly better visual quality, particularly around edges and textures, verifying the effectiveness of the algorithm.
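The plug-and-play idea in entry 13 can be sketched generically: run proximal gradient descent, but replace the proximal operator with an off-the-shelf denoiser. The soft-threshold denoiser and toy data-fitting term below are illustrative stand-ins, not the paper's convex-nonconvex regularizer or neural denoiser.

```python
import numpy as np

def pnp_proximal_gradient(grad_f, denoiser, x0, step, n_iter=50):
    """Plug-and-play proximal gradient descent: gradient step on the
    data term, then a denoiser in place of the proximal operator."""
    x = x0.copy()
    for _ in range(n_iter):
        x = denoiser(x - step * grad_f(x))
    return x

# Toy example: data term f(x) = 0.5*||x - y||^2 and soft-thresholding
# (the exact proximal operator of lam*||x||_1), so the scheme reduces
# to the classical ISTA iteration.
y = np.array([3.0, -0.2, 1.5, -2.0, 0.05])
lam = 0.5
soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
x_rec = pnp_proximal_gradient(lambda x: x - y, soft, np.zeros_like(y),
                              step=1.0)
```

With step 1 the gradient step maps every iterate to y, so the iteration reaches its fixed point soft(y) immediately; swapping in a learned denoiser changes only the `denoiser` callable.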

14.
A two-stage algorithm combining the advantages of an adaptive genetic algorithm and a modified Newton method is developed for effective training of feedforward neural networks. The genetic algorithm, with adaptive reproduction, crossover, and mutation operators, searches for the initial weights and biases of the neural network, while the modified Newton method, similar to the BFGS algorithm, improves training performance. Benchmark tests show that the two-stage algorithm is superior to many conventional methods (steepest descent, steepest descent with adaptive learning rate, conjugate gradient, and Newton-based methods) and is well suited to small networks in engineering applications. In addition to numerical simulation, the effectiveness of the two-stage algorithm is validated by experiments in system identification and vibration suppression.

15.
The conjugate gradient method is an effective method for large-scale unconstrained optimization problems. Recent research has proposed conjugate gradient methods based on secant conditions to establish fast convergence; however, these methods do not always generate a descent search direction. In contrast, Y. Narushima, H. Yabe, and J.A. Ford [A three-term conjugate gradient method with sufficient descent property for unconstrained optimization, SIAM J. Optim. 21 (2011), pp. 212–230] proposed a three-term conjugate gradient method that always satisfies the sufficient descent condition. This paper combines both ideas to propose descent three-term conjugate gradient methods based on particular secant conditions, and then shows their global convergence properties. Finally, numerical results are given.
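The sufficient-descent mechanism behind the three-term methods in entries 12 and 15 admits a compact sketch: with d_{k+1} = −g_{k+1} + β_k d_k − β_k (g_{k+1}ᵀd_k / ‖g_{k+1}‖²) g_{k+1}, the identity g_{k+1}ᵀd_{k+1} = −‖g_{k+1}‖² holds for any β_k. The HS choice of β, the Armijo search, and the quadratic test problem below are illustrative assumptions, not the exact methods of the cited papers.

```python
import numpy as np

def three_term_cg(f, grad, x0, max_iter=500, tol=1e-8):
    """Three-term CG sketch: the extra g-term enforces
    g.dot(d) = -||g||^2 (sufficient descent) for any beta."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = 1.0
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * g.dot(d):  # Armijo
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        denom = d.dot(y)
        beta = g_new.dot(y) / denom if abs(denom) > 1e-16 else 0.0  # HS beta
        theta = g_new.dot(d) / g_new.dot(g_new)
        d = -g_new + beta * d - beta * theta * g_new  # three-term direction
        x, g = x_new, g_new
    return x

# Invented convex quadratic test problem
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = three_term_cg(lambda v: 0.5 * v @ A @ v - b @ v,
                  lambda v: A @ v - b, np.zeros(2))
```

Expanding g_{k+1}ᵀd_{k+1} shows the β_k terms cancel exactly, which is why the descent guarantee does not depend on the line search.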

16.
Singularities in the parameter spaces of hierarchical learning machines are known to be a main cause of the slow convergence of gradient descent learning. The EM algorithm, another learning algorithm yielding a maximum likelihood estimator, also suffers from slow convergence, which often appears when the component overlap is large. We analyze the dynamics of the EM algorithm for Gaussian mixtures around singularities and show that there exists a slow manifold caused by the singular structure, which is closely related to the slow convergence of the EM algorithm. We also conduct numerical simulations to confirm the theoretical analysis. Through the simulations, we compare the dynamics of the EM algorithm with those of the gradient descent algorithm, and show that their slow dynamics are caused by the same singular structure, so they behave alike around singularities.
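A minimal EM loop for a one-dimensional, two-component Gaussian mixture (equal weights, unit variances) shows the E-step/M-step alternation the analysis concerns. The data and initialization are invented, and the components are deliberately well separated, i.e. away from the large-overlap regime where the slow manifold appears.

```python
import numpy as np

def em_gmm_1d(x, mu, n_iter=50):
    """EM for a 1-D two-component Gaussian mixture with equal weights
    and unit variances; only the means are estimated."""
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 0 for each point
        d0 = np.exp(-0.5 * (x - mu[0]) ** 2)
        d1 = np.exp(-0.5 * (x - mu[1]) ** 2)
        r0 = d0 / (d0 + d1)
        # M-step: responsibility-weighted means
        mu = np.array([np.sum(r0 * x) / np.sum(r0),
                       np.sum((1 - r0) * x) / np.sum(1 - r0)])
    return mu

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(2.0, 1.0, 500)])
mu = em_gmm_1d(x, np.array([-1.0, 1.0]))
```

Moving the two true means close together (say ±0.3) makes the same loop crawl, which is the slow-convergence phenomenon the paper traces to the singular structure.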

17.
For the implementation of unconstrained image segmentation models, a fast numerical algorithm based on block coordinate descent is proposed. The algorithm converts the model's dual problem into a set of constrained univariate or bivariate quadratic extremum problems, avoiding difficulties such as local nondifferentiability and high nonlinearity in the primal problem, so the solution process is simple and easy to implement. Moreover, compared with existing gradient-descent-based algorithms, it is unconditionally globally convergent and converges markedly faster. Simulation results demonstrate the algorithm's effectiveness and its superior segmentation efficiency.
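The subproblem structure in entry 17, solving tiny quadratic problems exactly one block at a time, can be illustrated with plain cyclic coordinate descent on a strictly convex quadratic. The matrix below is an invented example, not the segmentation dual.

```python
import numpy as np

def coordinate_descent(A, b, x0, n_sweeps=100):
    """Cyclic coordinate descent for f(x) = 0.5 x^T A x - b^T x with A
    symmetric positive definite: each coordinate subproblem is a
    one-variable quadratic solved in closed form."""
    x = x0.copy()
    for _ in range(n_sweeps):
        for i in range(len(b)):
            # Minimize over x_i with the others fixed:
            # d f / d x_i = A[i, :] @ x - b[i] = 0
            x[i] += (b[i] - A[i].dot(x)) / A[i, i]
    return x

# Invented SPD test problem
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = coordinate_descent(A, b, np.zeros(2))
```

Each inner update is exact and costs O(n), which is the source of the per-iteration cheapness the abstract contrasts with gradient descent.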

18.
International Journal of Computer Mathematics, 2012, 89(3-4): 253-260
An algorithm using second derivatives for solving unconstrained optimization problems is presented. In this brief note, the descent direction of the algorithm is based on a modification of the Newton direction, while the Armijo rule is used to choose the stepsize. The rate of convergence of the algorithm is shown to be superlinear. Our computational experience shows that the method performs quite well; numerical results are presented in Section 4.
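The ingredients of entry 18, a modified Newton direction plus the Armijo stepsize rule, can be sketched as follows. The Levenberg-style diagonal shift (adding λI until the Hessian is positive definite) and the Rosenbrock test function are illustrative choices, not the specific modification of the paper.

```python
import numpy as np

def damped_newton(f, grad, hess, x0, max_iter=100, tol=1e-10):
    """Modified Newton direction with the Armijo stepsize rule."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        H = hess(x)
        lam = 0.0
        # Shift the Hessian until H + lam*I is positive definite
        while True:
            try:
                np.linalg.cholesky(H + lam * np.eye(len(x)))
                break
            except np.linalg.LinAlgError:
                lam = max(2.0 * lam, 1e-3)
        d = -np.linalg.solve(H + lam * np.eye(len(x)), g)
        # Armijo rule for the stepsize
        alpha, c1 = 1.0, 1e-4
        while f(x + alpha * d) > f(x) + c1 * alpha * g.dot(d):
            alpha *= 0.5
        x = x + alpha * d
    return x

# Rosenbrock function as a standard unconstrained test problem
f = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
grad = lambda x: np.array(
    [-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
     200 * (x[1] - x[0] ** 2)])
hess = lambda x: np.array(
    [[2 - 400 * (x[1] - 3 * x[0] ** 2), -400 * x[0]],
     [-400 * x[0], 200.0]])
x = damped_newton(f, grad, hess, np.array([-1.2, 1.0]))
```

Near the minimizer λ stays at zero and full steps are accepted, recovering the superlinear local rate the note proves for its modification.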

19.
A Numerical-Integration-Based Training Algorithm for Process Neural Networks (cited by 1: 0 self, 1 other)
许少华, 王颖, 王皓, 何新贵. Computer Science (《计算机科学》), 2010, 37(11): 203-205
For the training of process neural networks, a learning algorithm based on numerical integration is proposed. Numerical integration is applied directly to the time-domain weighted aggregation of dynamic samples with the connection weight functions, and gradient descent adjusts the characteristic parameters of the connection weight functions and the network parameters. Three numerical-integration training methods for process neural networks are designed, based on the trapezoidal, Simpson, and Cotes rules. Simulation experiments on sunspot-number prediction show that the numerical-integration training algorithms are effective, with the Simpson rule performing best.
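The aggregation step the entry describes, numerically integrating a weight function against a dynamic input over the time domain, can be illustrated with the trapezoidal and composite Simpson rules (the Cotes rule is omitted). The weight function w(t) and input x(t) below are invented examples, not taken from the paper.

```python
import numpy as np

def trapezoid(y, h):
    """Composite trapezoidal rule on uniformly spaced samples y."""
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

def simpson(y, h):
    """Composite Simpson rule; requires an odd number of samples."""
    return h / 3 * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-2:2].sum() + y[-1])

# Aggregate a dynamic input x(t) = t^2 against a weight function
# w(t) = sin(pi t) over [0, 1]; the exact value is 1/pi - 4/pi^3.
t = np.linspace(0.0, 1.0, 101)
w = np.sin(np.pi * t)
x = t ** 2
h = t[1] - t[0]
agg_trap = trapezoid(w * x, h)
agg_simp = simpson(w * x, h)
```

The trapezoidal error is O(h²) while Simpson's is O(h⁴), consistent with the paper's finding that the Simpson-based variant performs best.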
