Similar Articles
20 similar articles retrieved (search time: 31 ms).
1.
A Particle Swarm Optimization Algorithm with Adaptive Diffusion and Hybrid Mutation Mechanisms   Cited by: 11 (self-citations: 0, other citations: 11)
To keep particle swarm optimization (PSO) from being trapped in local extrema during global optimization, the causes of premature convergence in the standard PSO algorithm are analyzed and a PSO with adaptive diffusion and a hybrid mutation mechanism (InformPSO) is proposed. Drawing on the way information diffuses in biological populations, a function of the particle distribution and the iteration count is designed to adaptively adjust the "social cognition" ability of particles and increase population diversity. Gene self-organization and chaotic evolution are simulated: clonal selection lets the global best particle gBest undergo small genetic variations and local proliferation with deterministic mutation, and a Logistic sequence then guides random drift of gBest, further strengthening the ability to escape local extrema. Global convergence of the new algorithm is proved from the stochastic state-transition process of the population. Compared with several other PSO variants, simulation results on complex benchmark functions show that the new algorithm converges quickly, achieves high solution accuracy, is stable, and effectively suppresses premature convergence.
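The Logistic-sequence drift of gBest is not spelled out in the abstract; the following Python sketch shows one plausible form, assuming the standard Logistic map with control parameter 4, a hypothetical drift_scale parameter, and greedy acceptance of improving moves.

    import numpy as np

    def logistic_drift(gbest, fitness, n_steps=10, drift_scale=0.1, seed=0.37):
        """Chaotic random drift of the global best particle (hedged sketch).

        A Logistic sequence (x <- 4x(1-x)) supplies one chaotic number per
        dimension; gBest is nudged by drift_scale * (2x - 1), and a move is
        kept only if it improves the fitness (minimization assumed).
        """
        dim = gbest.size
        x = seed                              # chaotic state in (0, 1), avoid 0.25/0.5/0.75
        best = gbest.copy()
        best_fit = fitness(best)
        for _ in range(n_steps):
            chaos = np.empty(dim)
            for d in range(dim):
                x = 4.0 * x * (1.0 - x)       # Logistic map iteration
                chaos[d] = x
            cand = best + drift_scale * (2.0 * chaos - 1.0)   # map (0,1) to (-1,1)
            cand_fit = fitness(cand)
            if cand_fit < best_fit:           # greedy acceptance (assumption)
                best, best_fit = cand, cand_fit
        return best, best_fit

For example, logistic_drift(np.ones(10), lambda v: float(np.sum(v**2))) perturbs a 10-dimensional point on the sphere function.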

2.
To address the tendency of particle swarm optimization (PSO) to fall into local optima on high-dimensional, multimodal problems, a novel hybrid algorithm, catalytic particle swarm optimization (CPSO), is proposed. During CPSO optimization, every particle in the population always retains its personal historical best (pbest). The CPSO population is updated by alternating three search operators: a modified PSO, horizontal crossover, and vertical crossover; the moderate solutions produced by each operator are turned into dominant pbest solutions through greedy selection and serve as the parent population for the next operator. In CPSO, the crisscross optimization algorithm (CSO) acts as an accelerating catalyst for PSO: horizontal crossover improves the global convergence of PSO, while vertical crossover maintains population diversity. Simulation results on six typical benchmark functions show that, compared with other mainstream PSO variants, CPSO has clear advantages in global convergence ability and convergence rate.
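The horizontal and vertical crossover operators referred to above come from crisscross optimization (CSO); the sketch below uses the commonly published forms of those operators, which is an assumption since the abstract gives no formulas, together with greedy replacement of pbest.

    import numpy as np

    def horizontal_crossover(x_i, x_j):
        """Horizontal crossover between two particles, one dimension at a time."""
        r1 = np.random.rand(*x_i.shape)
        r2 = np.random.rand(*x_i.shape)
        c1 = np.random.uniform(-1, 1, x_i.shape)
        c2 = np.random.uniform(-1, 1, x_i.shape)
        child_i = r1 * x_i + (1 - r1) * x_j + c1 * (x_i - x_j)
        child_j = r2 * x_j + (1 - r2) * x_i + c2 * (x_j - x_i)
        return child_i, child_j

    def vertical_crossover(x_i, d1, d2):
        """Vertical crossover inside one particle between dimensions d1 and d2."""
        r = np.random.rand()
        child = x_i.copy()
        child[d1] = r * x_i[d1] + (1 - r) * x_i[d2]
        return child

    def greedy_update(pbest, pbest_fit, child, fitness):
        """Keep a child only if it dominates the stored pbest (minimization)."""
        f = fitness(child)
        return (child, f) if f < pbest_fit else (pbest, pbest_fit)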

3.
An Opposition-Based Learning Particle Swarm Optimization Algorithm with Adaptive Cauchy Mutation   Cited by: 1 (self-citations: 0, other citations: 1)
To address the tendency of traditional particle swarm optimization toward premature convergence, an opposition-based learning PSO with adaptive mutation is proposed. Building on generalized opposition-based learning, the algorithm introduces an adaptive Cauchy mutation (ACM) strategy. Generalized opposition-based learning generates opposite solutions, enlarging the search space and strengthening global exploration. To keep particles from stagnating in local optima, the ACM strategy perturbs the current best particle and adaptively chooses the mutation point, improving local exploitation while allowing the algorithm to converge to the global optimum more smoothly and quickly. A nonlinear adaptive inertia weight further balances global search and local exploitation. Compared with several opposition-based PSO algorithms on 14 test functions, the proposed algorithm achieves substantially better solution accuracy and convergence speed.
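As a hedged illustration of the two ingredients named above, the sketch below generates generalized opposite solutions over the population's current dynamic bounds and applies a Cauchy perturbation to the best particle; the coefficient k ~ U(0,1) and the fixed mutation scale are assumptions, since the paper's exact adaptive rule is not given in the abstract.

    import numpy as np

    def generalized_opposition(pop):
        """Generalized opposition-based learning over dynamic bounds.

        For each dimension, the opposite point is k*(a+b) - x, where a and b
        are the current min/max of the population and k ~ U(0,1).
        """
        a = pop.min(axis=0)
        b = pop.max(axis=0)
        k = np.random.rand(*pop.shape)
        return k * (a + b) - pop

    def cauchy_mutate_best(gbest, scale=0.1):
        """Perturb the current best particle with heavy-tailed Cauchy noise.

        The fixed scale stands in for the paper's adaptive choice (assumption).
        """
        return gbest + scale * np.random.standard_cauchy(size=gbest.shape)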

4.
Particle swarm optimization (PSO) is a population-based swarm intelligence algorithm that has been deeply studied and widely applied to a variety of problems. However, it is easily trapped in local optima and premature convergence appears when solving complex multimodal problems. To address these issues, we present a new particle swarm optimization by introducing chaotic maps (Tent and Logistic) and a Gaussian mutation mechanism, as well as a local re-initialization strategy, into the standard PSO algorithm. On one hand, the chaotic map is utilized to generate uniformly distributed particles to improve the quality of the initial population. On the other hand, Gaussian mutation and a local re-initialization strategy based on the maximal focus distance are exploited to help the algorithm escape from local optima and keep the particles searching in other regions of the solution space. In addition, an auxiliary velocity-position update strategy is used exclusively for the global best particle, which effectively guarantees the convergence of the proposed particle swarm optimization. Extensive experiments on eight well-known benchmark functions with different dimensions demonstrate that the proposed PSO is superior or highly competitive compared with several state-of-the-art PSO variants in dealing with complex multimodal problems.
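A minimal sketch of two of the pieces named in this abstract, Tent-map population initialization and Gaussian mutation, follows. The classical Tent map x <- 2x (for x < 0.5) or 2(1-x) (otherwise) is assumed, the helper names tent_init and gaussian_mutate are hypothetical, and the paper's exact maps and parameters may differ.

    import numpy as np

    def tent_init(n_particles, dim, lower, upper, x0=0.31):
        """Initialize a swarm from a Tent chaotic sequence mapped to [lower, upper]."""
        x = x0
        chaos = np.empty((n_particles, dim))
        for i in range(n_particles):
            for d in range(dim):
                x = 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)
                # tiny jitter guards against the finite-precision collapse of the Tent iteration
                x = min(max(x + 1e-10 * np.random.rand(), 1e-10), 1.0 - 1e-10)
                chaos[i, d] = x
        return lower + chaos * (upper - lower)

    def gaussian_mutate(position, sigma=0.1):
        """Gaussian mutation: add zero-mean normal noise to a position."""
        return position + np.random.normal(0.0, sigma, size=position.shape)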

5.
Memetic algorithms, a class of nature-inspired algorithms, have been successfully applied to solve numerous optimization problems in diverse fields. In this paper, we propose a new memetic computing model that uses a hierarchical particle swarm optimizer (HPSO) and Latin hypercube sampling (LHS). In the bottom layer of the hierarchical PSO, several swarms evolve in parallel to avoid being trapped in local optima. The learning strategy for each swarm is the well-known comprehensive learning method with a newly designed mutation operator. After the evolution process in the bottom layer is completed, one particle from each swarm is selected as a candidate to construct the swarm in the top layer, which evolves by the same strategy employed in the bottom layer. A local search strategy based on LHS is applied to particles in the top layer every specified number of generations. The new memetic computing model is extensively evaluated on a suite of 16 numerical optimization functions as well as the cylindricity error evaluation problem. Experimental results show that the proposed algorithm compares favorably with conventional PSO and several variants.
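The LHS-based local search in the top layer can be sketched as below, under the assumption that it simply draws a small Latin hypercube sample in a box around a particle and keeps the best point; SciPy's qmc module is assumed to be available, and the box radius and sample count are hypothetical parameters.

    import numpy as np
    from scipy.stats import qmc

    def lhs_local_search(center, fitness, radius=0.1, n_samples=20, seed=None):
        """Latin-hypercube local search around `center` (minimization)."""
        dim = center.size
        sampler = qmc.LatinHypercube(d=dim, seed=seed)
        unit = sampler.random(n=n_samples)                  # points in [0, 1)^dim
        cand = qmc.scale(unit, center - radius, center + radius)
        fits = np.array([fitness(c) for c in cand])
        i = int(fits.argmin())
        if fits[i] < fitness(center):
            return cand[i], float(fits[i])
        return center, float(fitness(center))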

6.
林国汉, 章兢, 刘朝华. 《计算机应用》, 2014, 34(11): 3241-3244
To counter the premature convergence and inefficient late-stage search of the basic particle swarm optimization (PSO) algorithm, a PSO algorithm exploiting population mean information and elite mutation, MEPSO, is proposed. The algorithm introduces the mean information of individual particles and of the whole swarm to strengthen global search, and adopts time-varying acceleration coefficients (TVAC) to balance local and global search. In the later stage, an elite learning strategy applies Cauchy mutation to the elite particles, further improving global search and reducing the risk of being trapped in local optima. On six typical complex functions, compared with basic PSO (BPSO), PSO with time-varying acceleration coefficients (PSO-TVAC), PSO with time-varying inertia weight (PSO-TVIW), and PSO with wavelet mutation (HPSOWM), MEPSO achieves better means and standard deviations, the shortest optimization time, and better reliability. The results show that MEPSO balances local and global search well, converges quickly, and attains high convergence accuracy and search efficiency.
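The time-varying acceleration coefficients (TVAC) mentioned above are usually scheduled linearly between an initial and a final value; the sketch below uses the common (2.5 -> 0.5) and (0.5 -> 2.5) settings, which are an assumption taken from the TVAC literature rather than from this abstract.

    def tvac(t, t_max, c1_start=2.5, c1_end=0.5, c2_start=0.5, c2_end=2.5):
        """Time-varying acceleration coefficients: c1 decays, c2 grows with iteration t."""
        frac = t / float(t_max)
        c1 = c1_start + (c1_end - c1_start) * frac
        c2 = c2_start + (c2_end - c2_start) * frac
        return c1, c2

    # Example use inside the standard velocity update (w is the inertia weight):
    #   c1, c2 = tvac(t, t_max)
    #   v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)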

7.
Most real-world applications can be formulated as optimization problems, which commonly suffer from being trapped in local optima. In this paper, we make full use of the global search capability of particle swarm optimization (PSO) and the local search ability of extremal optimization (EO), and propose a gradient-based adaptive PSO with improved EO (called GAPSO-IEO) to overcome the problem of local optima in high-dimensional search and to reduce the time complexity of the algorithm. In the proposed algorithm, the improved EO (IEO) is adaptively incorporated into PSO to keep particles from being trapped in local optima, according to the evolutionary states of the swarm, which are estimated from the gradients of the particles' fitness functions. We also improve the mutation strategy of EO by performing polynomial mutation (PLM) on each particle, instead of on each component of the particle; therefore, the algorithm is not sensitive to the dimension of the swarm. The proposed algorithm is tested on several unimodal/multimodal benchmark functions and the Berkeley Segmentation Dataset and Benchmark (BSDS300). The experimental results show the superiority and efficiency of the proposed approach compared with state-of-the-art algorithms, and it achieves better performance on high-dimensional tasks.
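The polynomial mutation (PLM) step can be sketched as below, following Deb's standard polynomial-mutation formula applied to the whole particle vector in one shot, which is one reading of "on each particle"; the distribution index eta and the bounds are illustrative assumptions.

    import numpy as np

    def polynomial_mutation(x, lower, upper, eta=20.0):
        """Deb's polynomial mutation applied to every component of a particle.

        Assumes x already lies within [lower, upper].
        """
        x = np.asarray(x, dtype=float)
        span = upper - lower
        u = np.random.rand(*x.shape)
        d1 = (x - lower) / span
        d2 = (upper - x) / span
        left = u <= 0.5
        delta = np.empty_like(x)
        delta[left] = (2 * u[left] + (1 - 2 * u[left]) *
                       (1 - d1[left]) ** (eta + 1)) ** (1.0 / (eta + 1)) - 1.0
        delta[~left] = 1.0 - (2 * (1 - u[~left]) + 2 * (u[~left] - 0.5) *
                              (1 - d2[~left]) ** (eta + 1)) ** (1.0 / (eta + 1))
        return np.clip(x + delta * span, lower, upper)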

8.
As with many evolutionary algorithms, the performance of simple PSO depends on its parameters, and it often suffers from being trapped in local optima, causing premature convergence. In this paper, an improved particle swarm optimization with a decline disturbance index (DDPSO) is proposed to improve the ability of particles to explore global and local optimization solutions and to reduce the probability of being trapped in local optima. The correctness of the modification, which incorporates a decline disturbance index, is proved, and the key question of why the proposed method can reduce the probability of being trapped in local optima is answered. Theoretical analysis based on stochastic processes proves that the particle trajectory is a Markov process and that the DDPSO algorithm converges to the global optimal solution in the mean-square sense. After the exploration based on DDPSO, a neighborhood search strategy is used for local search, and an adaptive meta-Lamarckian strategy is employed to dynamically decide which neighborhood should be selected to stress exploitation in each generation. Multi-objective combinatorial problems are then solved with DDPSO to find the Pareto front under given performance indices. Simulation results and comparisons with typical algorithms show the effectiveness and robustness of the proposed DDPSO.

9.
To address the drawbacks of the basic gravitational search algorithm (GSA), such as premature convergence, susceptibility to local optima, and the lack of an effective acceleration mechanism, a GSA based on an improved adaptive black-hole mechanism (IABHGSA) is proposed. An improved Tent map initializes the population so that the initial distribution is more random, uniform, and comprehensive, strengthening global exploration. The improved adaptive black-hole mechanism chooses a position-update strategy according to each particle's evolutionary state, making updates more reasonable and effectively reducing the chance of particles being trapped in local optima. A learning-based update strategy for the best and worst particles enhances the ability to escape local optima and speeds up the search, and population migration provides an effective mechanism for accelerating convergence. Finally, IABHGSA is tested on eight benchmark functions and compared with related algorithms, and the results demonstrate its better optimization performance.

10.
The global search performance of particle swarm optimization is easily degraded by local extrema. To address this, a grid-based particle swarm optimization with a dynamic population size (GB-DPPPSO) is proposed. By designing a grid-information update strategy, a particle-generation strategy, and a particle-elimination strategy, the number of particles is controlled dynamically according to the state of the search, maintaining population diversity and improving global search performance. Simulation experiments on four typical mathematical test functions show that, compared with DPPPSO, the algorithm clearly improves both the global search success rate and the search efficiency.

11.
To address the difficulty that particle swarm optimization has in balancing exploitation and exploration, a multi-population PSO with dimension-wise resetting based on density peaks is proposed. First, the relative-distance idea from density-peak clustering, combined with fitness values, divides the population into two subpopulations: a top-layer swarm and a bottom-layer swarm. A learning strategy focused on exploitation is then designed for the top layer, and one biased toward exploration for the bottom layer, balancing exploration and exploitation. Finally, particles trapped in local optima are reset dimension by dimension through crossover with the global best particle, which effectively avoids premature convergence and significantly reduces wasted evaluations. The proposed algorithm is compared with other improved optimization algorithms on basic optimization problems and on the CEC2017 test suite, and statistical tests on the mean results show that the improvements are statistically significant.
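One way to read the dimension-wise crossover reset described above is sketched below: each dimension of a stagnant particle is recombined with the corresponding dimension of the global best using a random mixing coefficient. The trigger for deciding that a particle is trapped and the exact mixing rule are assumptions.

    import numpy as np

    def dimension_wise_reset(x, gbest):
        """Reset a stagnant particle dimension by dimension via crossover with gbest."""
        r = np.random.rand(*x.shape)          # one mixing coefficient per dimension
        return r * x + (1.0 - r) * gbest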

12.
Applied Soft Computing, 2008, 8(1): 295-304
Several modified particle swarm optimizers are proposed in this paper. In DVPSO, a distribution vector is used in the update of velocity; this vector is adjusted automatically according to the distribution of particles in each dimension. In COPSO, the probabilistic use of a 'crossing over' update is introduced to escape from local minima. The landscape adaptive particle swarm optimizer (LAPSO) combines these two schemes with the aim of achieving a more robust and efficient search. Empirical performance comparisons are presented on several benchmark problems between these new modified PSO methods, the inertia weight PSO (IFPSO), the constriction factor PSO (CFPSO), and a covariance matrix adaptation evolution strategy (CMAES). All the experimental results show that LAPSO is an efficient method for escaping convergence to local optima, and it approaches the global optimum rapidly on the problems used.
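The abstract does not give the DVPSO update, so the sketch below is only one plausible reading: a per-dimension distribution vector derived from the swarm's spread rescales the stochastic terms of the standard velocity update. The function name, the normalization, and the default coefficients are all assumptions.

    import numpy as np

    def dv_velocity_update(v, x, pbest, gbest, swarm, w=0.7, c1=1.5, c2=1.5):
        """Velocity update scaled per dimension by the swarm's spread (hedged sketch)."""
        spread = swarm.std(axis=0)                 # distribution of particles per dimension
        dvec = spread / (spread.max() + 1e-12)     # normalized distribution vector
        r1 = np.random.rand(*x.shape)
        r2 = np.random.rand(*x.shape)
        return (w * v
                + dvec * c1 * r1 * (pbest - x)
                + dvec * c2 * r2 * (gbest - x))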

13.
For the problem of multiple robots forming a dynamic target formation in a dynamic environment, this paper proposes a new PSO-based method as an intelligent algorithm for rapid multi-robot formation. The algorithm introduces a greedy mechanism to accelerate convergence; to keep the greedy mechanism from falling into local minima, intelligent swapping is implemented by defining two probabilities, one for "converging toward the center" and the other for "covering the region". Simulation results show that the algorithm can effectively form dynamic target formations. Moreover, it can handle discrete-variable problems and therefore has a wide range of applications.

14.
In this paper, a modified particle swarm optimization (PSO) algorithm is developed for solving multimodal function optimization problems. The difference between the proposed method and standard PSO is that the original single population is split into several subpopulations according to the order of particles. The best particle within each subpopulation is recorded and then applied in the velocity-update formula to replace the global best particle of the whole population. The modified velocity formula is used to update all particles in each subpopulation. Based on the idea of multiple subpopulations, several optima of a multimodal function, including the global and local solutions, can be found separately by these best particles. To show the efficiency of the proposed method, two kinds of function optimization problems are provided: a unimodal function and a complex multimodal function. Simulation results demonstrate the convergence behavior of the particles over the iterations, and the global and local solutions are found by the best particles of the subpopulations.
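A minimal sketch of the modified velocity formula described above follows: the best particle of a particle's own subpopulation replaces the global best. The inertia weight, acceleration coefficients, and helper names are illustrative defaults, not the paper's settings.

    import numpy as np

    def split_by_order(swarm, n_subpops):
        """Split the swarm into subpopulations according to the order of particles."""
        return np.array_split(swarm, n_subpops)

    def sub_velocity_update(v, x, pbest, sub_best, w=0.72, c1=1.49, c2=1.49):
        """Velocity update that uses the subpopulation best instead of the global best."""
        r1 = np.random.rand(*x.shape)
        r2 = np.random.rand(*x.shape)
        return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (sub_best - x)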

15.
To overcome the low solution accuracy and tendency toward local optima of the standard grey wolf optimizer on complex engineering optimization problems, a new grey wolf optimizer for unconstrained continuous function optimization is proposed. The algorithm first uses an opposition-based learning strategy to generate the initial population, laying the foundation for global search. Inspired by particle swarm optimization, a nonlinearly decreasing update formula for the convergence factor is proposed, adjusting it dynamically to balance global and local search. To keep the algorithm from being trapped in local optima, a mutation operation is applied to the current best grey wolf. Simulation experiments on 10 test functions show that, compared with the standard grey wolf optimizer, the improved algorithm achieves better solution accuracy and faster convergence.
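The nonlinearly decreasing convergence-factor schedule is not given in the abstract; the sketch below shows one common nonlinear alternative to the standard linear decay a = 2 - 2t/T used in the grey wolf optimizer, purely as an illustration.

    def convergence_factor(t, t_max, a_init=2.0, power=2.0):
        """Nonlinear decay of the GWO convergence factor from a_init to 0.

        power=1 recovers the standard linear schedule; power>1 keeps `a`
        larger for longer (more exploration) before it drops off.
        """
        return a_init * (1.0 - (t / float(t_max)) ** power)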

16.
Improved Chaotic Particle Swarm Optimization Based on Function Transformation   Cited by: 1 (self-citations: 0, other citations: 1)
李焱. 《计算机应用研究》, 2010, 27(11): 4105-4107
During its search, particle swarm optimization easily becomes stuck in local regions and fails to find the global optimum. To resolve this premature convergence, an improved chaotic PSO based on function transformation is proposed. The method introduces the Logistic map and an improved Tent map into the swarm in place of random numbers, and applies a function transformation to the velocity and position updates to amplify the difference between the global optimum and local optima, so that particles jump out of local optima and refine the search toward the global optimum. Numerical experiments show that the function-transformation-based improved chaotic PSO outperforms standard PSO and Logistic-map-based chaotic PSO in search time and efficiency. The improved algorithm is feasible and effective.
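As a hedged sketch of "chaotic maps in place of random numbers", the generator below yields numbers from a Logistic map (or a Tent map) that can be substituted for the uniform random draws in the PSO velocity update; the seed values and the choice of which draws are replaced are assumptions.

    def chaotic_stream(kind="logistic", x0=0.37):
        """Yield an endless chaotic sequence in (0, 1) to replace uniform random numbers."""
        x = x0
        while True:
            if kind == "logistic":
                x = 4.0 * x * (1.0 - x)
            else:                                        # tent map
                x = 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)
                x = min(max(x, 1e-10), 1.0 - 1e-10)      # guard the finite-precision collapse
            yield x

    # Example: draw r1, r2 for one velocity update from the chaotic stream
    #   stream = chaotic_stream()
    #   r1, r2 = next(stream), next(stream)
    #   v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)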

17.
Particle swarm optimization (PSO) is a population-based optimization tool that is inspired by the collective intelligent behavior of birds seeking food. It can be easily implemented and applied to solve various function optimization problems. However, relatively few researchers have explored the potential of PSO for multimodal problems. Although PSO is a simple, easily implemented, and powerful technique, it has a tendency to get trapped in a local optimum. This premature convergence makes it difficult to find global optimum solutions for multimodal problems. A hybrid Fletcher–Reeves based PSO (FRPSO) method is proposed in this paper. It is based on the idea of increasing exploitation of the local optimum, while maintaining a good exploration capability for finding better solutions. In FRPSO, standard PSO is used to update the particle’s current position, which is then further refined by the Fletcher–Reeves conjugate gradient method. This enhances the performance of standard PSO. The results of experiments conducted on seventeen benchmark test functions demonstrate that the proposed method shows superior performance on a set of multimodal functions when compared with standard PSO, a genetic algorithm (GA) and fitness distance ratio PSO (FDRPSO).
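The Fletcher–Reeves refinement step can be sketched as below: a few conjugate-gradient iterations with finite-difference gradients and a simple backtracking line search polish a particle's position. The step sizes, iteration counts, and helper names are illustrative assumptions, not the paper's settings.

    import numpy as np

    def num_grad(f, x, h=1e-6):
        """Central-difference gradient estimate."""
        g = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = h
            g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
        return g

    def fletcher_reeves_refine(f, x0, n_iters=5, alpha0=1.0):
        """Refine a particle position with a few Fletcher-Reeves CG steps (minimization)."""
        x = np.asarray(x0, dtype=float).copy()
        g = num_grad(f, x)
        d = -g
        for _ in range(n_iters):
            alpha, fx = alpha0, f(x)
            while f(x + alpha * d) > fx and alpha > 1e-10:    # backtracking line search
                alpha *= 0.5
            x = x + alpha * d
            g_new = num_grad(f, x)
            beta = g_new.dot(g_new) / (g.dot(g) + 1e-12)      # Fletcher-Reeves coefficient
            d = -g_new + beta * d
            g = g_new
        return x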

18.
A Hybrid Optimization Algorithm for Overcoming Premature Convergence in Particle Swarm Optimization   Cited by: 1 (self-citations: 0, other citations: 1)
To counter the premature convergence that particle swarm optimization is prone to during search, this work proposes that, when the swarm stagnates, particles be randomly selected from the population for conjugate gradient computation; the information produced by the conjugate gradient method is then used to influence the velocity updates, keeping the swarm active and breaking the state in which the swarm's information is stuck in a local optimum. Unlike traditional PSO, the algorithm organically combines the global search capability of the particle swarm with the strong local search capability of the conjugate gradient method, thereby effectively alleviating premature convergence. Simulation results show that the improved PSO performs well on nonlinear functions of various dimensions.

19.
Fan Qian, Chen Zhenjian, Li Zhao, Xia Zhanghua, Yu Jiayong, Wang Dongzheng. Engineering with Computers, 2021, 37(3): 1851-1878

Similar to other swarm-based algorithms, the recently developed whale optimization algorithm (WOA) has the problems of low accuracy and slow convergence, and it also falls easily into local optima. Moreover, WOA and its variants cannot perform well enough in solving high-dimensional optimization problems. This paper puts forward a new improved WOA with joint search mechanisms, called JSWOA, to overcome these disadvantages. First, the improved algorithm uses a Tent chaotic map to maintain the diversity of the initial population for global search. Second, a new adaptive inertia weight is introduced to improve convergence accuracy and speed and to help the algorithm jump out of local optima. Finally, to enhance the quality and diversity of the whale population and to increase the probability of obtaining the global optimal solution, an opposition-based learning mechanism is used to update the individuals of the whale population continuously during each iteration. The performance of the proposed JSWOA is tested on twenty-three benchmark functions of various types and dimensions. The results are then compared with the basic WOA, several variants of WOA, and other swarm-based intelligent algorithms. The experimental results show that the proposed JSWOA algorithm with multiple mechanisms is superior to WOA and the other state-of-the-art algorithms in the comparison, exhibiting remarkable advantages in solution accuracy and convergence speed. It is also suitable for dealing with high-dimensional global optimization problems.


20.

To address the premature convergence of the novel global harmony search (NGHS) algorithm, an adaptive global harmony search (AGHS) algorithm is proposed. The norm of a difference vector is introduced to define the diversity of the harmony memory, a new position-update strategy is given, and the mutation operation is removed. New harmonies are generated dynamically under the guidance of the harmony-memory diversity information, improving the algorithm's ability to exploit information about the solution space and avoiding the premature convergence and susceptibility to local optima of NGHS. AGHS is simpler to operate and needs fewer parameters. Its performance is compared with several of the better improved HS algorithms in the current literature, as well as with PSO and GA; the test results show that AGHS achieves higher optimization accuracy and faster convergence.
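A hedged sketch of the diversity measure described above: harmony-memory diversity is taken here as the average norm of the difference vectors between harmonies, and a new harmony is drawn with a perturbation scaled by that diversity. The exact definition and update rule in the paper may differ, and the function names are hypothetical.

    import numpy as np

    def hm_diversity(harmony_memory):
        """Diversity of the harmony memory as the mean norm of pairwise difference vectors."""
        hm = np.asarray(harmony_memory, dtype=float)
        n = hm.shape[0]
        diffs = [np.linalg.norm(hm[i] - hm[j]) for i in range(n) for j in range(i + 1, n)]
        return float(np.mean(diffs)) if diffs else 0.0

    def new_harmony(harmony_memory, lower, upper):
        """Generate a new harmony guided by the current diversity (illustrative rule)."""
        hm = np.asarray(harmony_memory, dtype=float)
        base = hm[np.random.randint(hm.shape[0])]            # pick an existing harmony
        step = hm_diversity(hm) * np.random.uniform(-1, 1, size=base.shape)
        return np.clip(base + step, lower, upper)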

