20 similar documents found; search took 62 ms.
1.
2.
3.
A single-objective coupled optimization algorithm based on adaptive chaotic gradient descent is proposed. It uses a variable-step-size gradient descent method to reach a local optimum and applies a rule to decide whether that point is a local minimum; it then uses a chaotic ergodic search with an adaptive scale that grows from small to large to find a better point that replaces the local minimum, thereby escaping the local-minimum state. The global optimum is obtained by iterating this process. Simulation results show that the algorithm combines the speed of gradient-based search with the global search capability of chaotic search, effectively escapes local minima, and quickly finds the optimum.
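A minimal sketch of the escape scheme described above, assuming a logistic map as the chaotic generator, a fixed-step gradient descent in place of the paper's variable-step rule, and a toy 1-D multimodal function (all illustrative choices, not the paper's):

```python
import numpy as np

def f(x):
    # Illustrative multimodal function: global minimum near x = -1.3064,
    # plus a local minimum near x = 3.84.
    return x**2 + 10.0 * np.sin(x)

def grad(x):
    return 2.0 * x + 10.0 * np.cos(x)

def gradient_descent(x, lr=0.01, steps=500):
    # Fixed small step stands in for the variable-step-size rule.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def chaotic_escape(x_best, scale, n=200, z=0.3):
    # Logistic map z <- 4z(1-z) is ergodic on (0, 1); map it onto
    # [x_best - scale, x_best + scale] and keep any improving point.
    for _ in range(n):
        z = 4.0 * z * (1.0 - z)
        cand = x_best + scale * (2.0 * z - 1.0)
        if f(cand) < f(x_best):
            x_best = cand
    return x_best

x = gradient_descent(3.0)         # lands in the local minimum near 3.84
for scale in (0.5, 2.0, 8.0):     # chaotic search radius grows small-to-large
    x = gradient_descent(chaotic_escape(x, scale))
```

At the small scales the chaotic search cannot improve on the local minimum; the large scale reaches the global basin, after which gradient descent refines the result to the global minimum near x ≈ -1.3064.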
4.
5.
《计算机应用与软件》2017,(2)
To address the poor late-stage search precision and local-optimum trapping of the artificial physics optimization (APO) algorithm, an improved APO algorithm combining the conjugate gradient method and chaotic perturbation is proposed. When APO struggles to search finely in its later stage, the high-precision analytic conjugate gradient method replaces APO for local search; chaotic perturbation is injected throughout the algorithm to avoid premature convergence. Simulation results show that the algorithm converges quickly with high precision and has a clear advantage in escaping local optima, making it suitable for optimizing high-dimensional complex functions.
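The conjugate gradient local search the abstract relies on can be illustrated in its simplest form, linear CG on a quadratic model; this is a generic textbook sketch, not the paper's hybrid:

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10):
    # Linear CG: minimizes 0.5 x^T A x - b^T x for symmetric positive
    # definite A; with exact line searches it converges in at most
    # len(b) steps, which is what makes it a high-precision refiner.
    x = np.asarray(x0, dtype=float)
    r = b - A.dot(x)                  # residual = negative gradient
    d = r.copy()
    for _ in range(len(b)):
        Ad = A.dot(d)
        alpha = r.dot(r) / d.dot(Ad)  # exact line search along d
        x = x + alpha * d
        r_new = r - alpha * Ad
        beta = r_new.dot(r_new) / r.dot(r)
        d = r_new + beta * d
        r = r_new
        if np.linalg.norm(r) < tol:
            break
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])     # SPD model Hessian
b = np.array([1.0, 1.0])
x = conjugate_gradient(A, b, np.zeros(2))  # solves A x = b, i.e. x = (0.2, 0.4)
```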
6.
《计算机科学与探索》2016,(6):891-900
Teaching–learning-based optimization (TLBO) solves complex problems by simulating the teaching and learning behavior of a class and has been widely applied. To overcome its tendency toward premature convergence and low solution precision, an improved hybrid TLBO with chaos and the conjugate gradient method is proposed. The improved algorithm initializes the population with the Chebyshev chaotic map to increase the initial population's coverage of the solution space. To preserve population diversity, a dynamic learning factor is introduced so that students mainly learn from the teacher in the early stage, while the influence of their own knowledge on their evolution gradually grows. After each iteration, the teacher individual performs a conjugate gradient search. Students with poor fitness whose state has not improved for a long time undergo secondary learning based on opposition-based learning and Gaussian learning. Experiments on several standard test functions show that the improved algorithm achieves better global convergence and higher solution precision than related algorithms and is suitable for higher-dimensional function optimization problems.
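The Chebyshev chaotic initialization step can be sketched as follows; the map order k and seed y0 are illustrative assumptions, not values from the paper:

```python
import numpy as np

def chebyshev_init(pop_size, dim, lower, upper, k=4, y0=0.7):
    # Chebyshev map y <- cos(k * arccos(y)) is chaotic on [-1, 1] for
    # k >= 2; one long orbit fills the whole population so the initial
    # solutions spread over the entire search box.
    y = y0
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        for j in range(dim):
            y = np.cos(k * np.arccos(np.clip(y, -1.0, 1.0)))
            pop[i, j] = lower + (y + 1.0) / 2.0 * (upper - lower)
    return pop

pop = chebyshev_init(pop_size=20, dim=5, lower=-100.0, upper=100.0)
```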
7.
8.
9.
Huang Liming 《数值计算与计算机应用》2008,29(2):119-125
Chaotic particle swarm optimization (CPSO) combines the fast convergence of particle swarm optimization (PSO) with the randomness, ergodicity, and sensitivity to initial values of chaotic motion: chaotic states are introduced into the optimization variables, and the ergodic range of the chaos is mapped onto the variables' feasible range. During execution, chaotic perturbation of elite individuals helps the search escape local extrema and find the global optimum. PSO and CPSO were each applied to function optimization problems to test performance; the results show that CPSO outperforms PSO in both the success rate of finding the global optimum and convergence speed. Combining CPSO with thresholding, particle positions are chaotically initialized at the start of the algorithm and elite individuals are chaotically perturbed during the run to avoid local optima, which largely resolves the heavy computational cost of traditional multi-threshold image segmentation. Experimental results show that CPSO-based threshold search reduces search time and improves the convergence rate.
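A compact sketch of chaotic perturbation of the global best inside a standard PSO loop, tested on the sphere function; all parameter values and the logistic-map perturbation scale are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    return float(np.sum(x**2))

def chaotic_pso(f, dim=5, n=20, iters=150, w=0.72, c1=1.49, c2=1.49, bound=10.0):
    X = rng.uniform(-bound, bound, (n, dim))   # particle positions
    V = np.zeros((n, dim))                     # velocities
    P = X.copy()                               # personal bests
    pf = np.array([f(x) for x in X])
    g = P[pf.argmin()].copy()                  # global best
    z = 0.6                                    # logistic-map state
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
        X = np.clip(X + V, -bound, bound)
        fx = np.array([f(x) for x in X])
        better = fx < pf
        P[better], pf[better] = X[better], fx[better]
        g = P[pf.argmin()].copy()
        # chaotic perturbation of the elite (global best) to escape
        # local optima; accepted only if it improves
        z = 4.0 * z * (1.0 - z)
        cand = g + 0.1 * bound * (2.0 * z - 1.0)
        if f(cand) < f(g):
            g = cand
    return g, f(g)

gbest, fbest = chaotic_pso(sphere)
```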
10.
Online parameter identification is currently the main approach to power system load modeling, and identification mainly relies on optimization algorithms. Chaos optimization is a relatively new search method that has already been applied to power system load forecasting and reactive power optimization. The algorithm presented here improves the flow of earlier chaos optimization algorithms by adding automatic shrinking of the parameter search range and removing one chaotic-sequence generation step. Optimization results on test functions show that the improved algorithm greatly increases search speed while preserving accuracy. The improved algorithm was then applied to parameter identification of a composite load model that directly accounts for the distribution network; simulation results show it searches quickly with good identification accuracy. Analysis of the simulation results indicates that, for load model parameter identification, the number of chaotic-sequence iterations need not exceed fifty thousand, and reasonably shrinking the parameter search range helps improve the algorithm's accuracy.
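The range-shrinking idea can be sketched as a carrier-wave chaos search whose interval contracts around the incumbent best after each round; the shrink factor, sample counts, and 1-D test function are illustrative assumptions, not the load-model setup:

```python
import numpy as np

def shrinking_chaos_search(f, lo, hi, rounds=5, n=300, shrink=0.5, z=0.345):
    # Map one logistic chaotic sequence over the current interval, then
    # automatically contract the interval around the best point found.
    best_x, best_f = None, np.inf
    for _ in range(rounds):
        for _ in range(n):
            z = 4.0 * z * (1.0 - z)          # logistic chaotic variable
            x = lo + z * (hi - lo)           # carrier-wave mapping
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
        half = shrink * (hi - lo) / 2.0      # shrink the search range
        lo = max(lo, best_x - half)
        hi = min(hi, best_x + half)
    return best_x, best_f

# 1-D test function with known minimum at x = 2 (illustrative only)
x, fx = shrinking_chaos_search(lambda x: (x - 2.0)**2, -10.0, 10.0)
```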
11.
《国际计算机数学杂志》2012,89(16):3436-3447
The sufficient descent condition is crucial in establishing the global convergence of nonlinear conjugate gradient methods. In this paper, we modify two conjugate gradient methods so that both satisfy this property. Under suitable conditions, we prove the global convergence of the proposed methods. Numerical results show that the proposed methods are efficient on the given test problems.
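The sufficient descent condition itself, g_k^T d_k ≤ -c‖g_k‖² for some fixed c > 0, is easy to check numerically; a small sketch (the test vectors and c are illustrative):

```python
import numpy as np

def is_sufficient_descent(g, d, c=1e-4):
    # Sufficient descent: g^T d <= -c * ||g||^2 for a fixed c > 0.
    return float(g.dot(d)) <= -c * float(g.dot(g))

g = np.array([3.0, -4.0])                # ||g||^2 = 25
steepest = is_sufficient_descent(g, -g)  # g^T(-g) = -25: holds for any c <= 1
orthogonal = is_sufficient_descent(g, np.array([4.0, 3.0]))  # g^T d = 0: fails
```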
12.
An improved chaos optimization method and its applications (total citations: 8; self-citations: 0; citations by others: 8)
Zhao Qiang 《自动化与仪器仪表》2006,(3):90-92
An improved chaos optimization method is proposed that perturbs the current point with a chaotic variable and gradually reduces the perturbation amplitude during the search via a time-varying parameter, whose initial value is determined in a specified way. The improved method was applied to global optimization problems over continuous domains; simulation results show that it significantly improves convergence speed and accuracy.
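A sketch of the perturbation-with-decaying-amplitude scheme, using a logistic map as the chaotic variable and an assumed a0/(1 + 0.01t) schedule for the time-varying parameter (the paper's schedule and initial value are not specified here):

```python
def chaos_perturb_search(f, x0, iters=2000, a0=1.0, z=0.48):
    # Perturb the current best with a chaotic variable; the time-varying
    # amplitude a shrinks so early steps explore widely and late steps
    # refine locally. The schedule below is an illustrative choice.
    x_best, f_best = x0, f(x0)
    for t in range(iters):
        z = 4.0 * z * (1.0 - z)              # logistic chaotic variable
        a = a0 / (1.0 + 0.01 * t)            # decreasing amplitude
        cand = x_best + a * (2.0 * z - 1.0)  # perturb around current best
        if f(cand) < f_best:
            x_best, f_best = cand, f(cand)
    return x_best, f_best

# 1-D convex test with known minimum at x = 1.5 (illustrative only)
x, fx = chaos_perturb_search(lambda v: (v - 1.5)**2, 0.0)
```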
13.
14.
This paper establishes a spectral conjugate gradient method for solving unconstrained optimization problems, in which the conjugate parameter and the spectral parameter satisfy a restrictive relationship. The search direction is a sufficient descent direction at every iteration without restarts, and this property is independent of the line search. Under the standard Wolfe line search, the global convergence of the proposed method is proved when a certain parameter condition holds. Preliminary numerical results show the effectiveness of the proposed method. 相似文献
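One standard construction illustrates how tying the spectral parameter to the conjugate parameter forces sufficient descent regardless of the line search; the specific relationship below is an illustrative choice, not necessarily the paper's:

```python
import numpy as np

def spectral_cg_direction(g, d_prev, beta):
    # Spectral CG direction d = -theta * g + beta * d_prev. Choosing
    # theta = 1 + beta * g^T d_prev / ||g||^2 gives
    # g^T d = -||g||^2 identically, i.e. sufficient descent (with c = 1)
    # independent of the line search used to produce d_prev.
    theta = 1.0 + beta * g.dot(d_prev) / g.dot(g)
    return -theta * g + beta * d_prev

g = np.array([1.0, -2.0, 0.5])       # current gradient
d_prev = np.array([-0.3, 0.8, 0.1])  # previous direction
d = spectral_cg_direction(g, d_prev, beta=0.4)
```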
15.
16.
The conjugate gradient method is an effective method for large-scale unconstrained optimization problems. Recent research has proposed conjugate gradient methods based on secant conditions to achieve fast convergence. However, these methods do not always generate a descent search direction. In contrast, Y. Narushima, H. Yabe, and J.A. Ford [A three-term conjugate gradient method with sufficient descent property for unconstrained optimization, SIAM J. Optim. 21 (2011), pp. 212–230] proposed a three-term conjugate gradient method that always satisfies the sufficient descent condition. This paper combines both ideas to propose descent three-term conjugate gradient methods based on particular secant conditions, and then shows their global convergence properties. Finally, numerical results are given.
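The always-sufficient-descent property of three-term directions comes from a projection-style third term; a sketch of one such construction (the concrete form is illustrative of the Narushima–Yabe–Ford style, and beta could come from any secant condition):

```python
import numpy as np

def three_term_direction(g, d_prev, beta):
    # Three-term form d = -g + beta*d_prev - beta*(g^T d_prev/||g||^2)*g:
    # the third term cancels d_prev's component along g, so
    # g^T d = -||g||^2 for any beta (sufficient descent with c = 1).
    return -g + beta * d_prev - beta * (g.dot(d_prev) / g.dot(g)) * g

g = np.array([0.5, 1.5, -1.0])      # current gradient
d_prev = np.array([1.0, -0.2, 0.4]) # previous direction
d = three_term_direction(g, d_prev, beta=-0.7)
```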
17.
In this paper, two modified spectral conjugate gradient methods that satisfy the sufficient descent property are developed for unconstrained optimization problems. For uniformly convex problems, the first modified spectral conjugate gradient algorithm is proposed under the Wolfe line search rule, and its search direction is sufficiently descent for uniformly convex functions. Furthermore, based on the Dai–Liao conjugacy condition, the second spectral conjugate gradient algorithm generates a sufficient descent direction at each iteration for general functions; it can therefore be viewed as a modified version of the Dai–Liao algorithm. Under suitable conditions, the proposed algorithms are globally convergent for uniformly convex functions and for general functions, respectively. The numerical results show that the approaches presented in this paper are feasible and efficient.
18.
Zahra Khoshgam 《Optimization methods & software》2019,34(4):783-796
In this paper, based on the fifth-order Taylor expansion of the objective function and the modified secant equation suggested by Li and Fukushima, a new modified secant equation is presented. A new modification of the scaled memoryless BFGS preconditioned conjugate gradient algorithm is also suggested, in which the scaling parameter is computed from a two-point approximation of the new modified secant equation. A remarkable feature of the proposed method is that it is globally convergent even without a convexity assumption on the objective function. Numerical results show that the proposed modified scaled conjugate gradient method is efficient.
19.