Similar Documents
20 similar documents found (search time: 31 ms)
1.
Particle swarm optimization (PSO) has been shown to be an effective tool for solving global optimization problems. So far, most PSO algorithms use a single learning pattern for all particles, which means that all particles in a swarm use the same strategy. This monotonic learning pattern may leave a particular particle without the flexibility to deal with different complex situations. This paper presents a novel algorithm, called self-learning particle swarm optimizer (SLPSO), for global optimization problems. In SLPSO, each particle has a set of four strategies to cope with different situations in the search space. The cooperation of the four strategies is implemented by an adaptive learning framework at the individual level, which enables a particle to choose the optimal strategy according to its own local fitness landscape. An experimental study on a set of 45 test functions and two real-world problems shows that SLPSO performs favorably in comparison with several other peer algorithms.
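The individual-level adaptive framework described above can be sketched as a per-particle roulette wheel over candidate strategies whose probabilities track each strategy's recent success rate. This is an illustrative sketch only: the class name, the success-ratio update rule, and the learning-rate/floor parameters are assumptions, not the exact SLPSO formulation.

```python
import random

class AdaptiveStrategySelector:
    """Per-particle roulette-wheel selection over learning strategies,
    with probabilities adapted from observed success rates.
    (Sketch; the update rule is an assumption, not SLPSO's exact one.)"""

    def __init__(self, n_strategies=4, learning_rate=0.1):
        self.n = n_strategies
        self.lr = learning_rate
        self.probs = [1.0 / n_strategies] * n_strategies
        self.success = [0] * n_strategies
        self.used = [0] * n_strategies

    def choose(self):
        # roulette-wheel draw over current strategy probabilities
        r, acc = random.random(), 0.0
        for i, p in enumerate(self.probs):
            acc += p
            if r <= acc:
                return i
        return self.n - 1

    def report(self, strategy, improved):
        # record whether the chosen strategy improved the particle
        self.used[strategy] += 1
        if improved:
            self.success[strategy] += 1

    def update_probs(self, floor=0.05):
        # move probabilities toward each strategy's success ratio,
        # with a floor so no strategy is abandoned entirely
        ratios = [(self.success[i] + 1) / (self.used[i] + 1)
                  for i in range(self.n)]
        total = sum(ratios)
        target = [r / total for r in ratios]
        self.probs = [(1 - self.lr) * p + self.lr * t
                      for p, t in zip(self.probs, target)]
        s = sum(self.probs)
        self.probs = [max(p / s, floor) for p in self.probs]
        s = sum(self.probs)
        self.probs = [p / s for p in self.probs]
```

A particle would call `choose()` before each update, `report()` afterwards, and `update_probs()` periodically, so strategies that work on its local fitness landscape gradually dominate.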

2.
In swarm-intelligence algorithms, population diversity gradually disappears in the late stage of evolution and individuals become increasingly similar, so the main drawback of particle swarm optimization is its tendency to fall into local optima. This paper proposes an improved particle swarm algorithm that combines a constriction factor with a comprehensive-information strategy: the constriction factor balances local and global search in PSO, while the comprehensive information strengthens population diversity. The improved algorithm was compared with the basic PSO, an adaptive PSO, and the constriction-factor PSO on seven test functions in terms of accuracy, success probability, and convergence speed; the results show that the new algorithm achieves higher search accuracy and faster convergence.
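The constriction factor mentioned above is the standard Clerc-Kennedy coefficient, computed as chi = 2 / |2 - phi - sqrt(phi^2 - 4*phi)| for phi = c1 + c2 > 4, and applied to the whole velocity update. A minimal sketch (the comprehensive-information strategy of this paper is not reproduced):

```python
import math
import random

def constriction_coefficient(c1=2.05, c2=2.05):
    """Clerc-Kennedy constriction factor; requires c1 + c2 > 4."""
    phi = c1 + c2
    assert phi > 4, "constriction requires c1 + c2 > 4"
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

def velocity_update(v, x, pbest, gbest, c1=2.05, c2=2.05):
    """One constriction-factor PSO velocity update for a single particle."""
    chi = constriction_coefficient(c1, c2)
    return [chi * (vi
                   + c1 * random.random() * (pi - xi)
                   + c2 * random.random() * (gi - xi))
            for vi, xi, pi, gi in zip(v, x, pbest, gbest)]
```

With the common setting c1 = c2 = 2.05, chi is about 0.7298, which damps the velocity and balances exploration against convergence without a separate inertia weight.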

3.
In particle swarm optimization (PSO) each particle uses its personal and global or local best positions by linear summation. However, it is very time consuming to find the global or local best positions in case of complex problems. To overcome this problem, we propose a new multi-objective variant of PSO called attributed multi-objective comprehensive learning particle swarm optimizer (A-MOCLPSO). In this technique, we do not use global or local best positions to modify the velocity of a particle; instead, we use the best position of a randomly selected particle from the whole population to update the velocity of each dimension. This method not only increases the speed of the algorithm but also searches in more promising areas of the search space. We perform an extensive experimentation on well-known benchmark problems such as Schaffer (SCH), Kursawe (KUR), and Zitzler–Deb–Thiele (ZDT) functions. The experiments show very convincing results when the proposed technique is compared with existing versions of PSO known as multi-objective comprehensive learning particle swarm optimizer (MOCLPSO) and multi-objective particle swarm optimization (MOPSO), as well as non-dominated sorting genetic algorithm II (NSGA-II). As a case study, we apply our proposed A-MOCLPSO algorithm on an attack tree model for the security hardening problem of a networked system in order to optimize the total security cost and the residual damage, and provide diverse solutions for the problem. The results of our experiments show that the proposed algorithm outperforms the previous solutions obtained for the security hardening problem using NSGA-II, as well as MOCLPSO for the same problem. Hence, the proposed algorithm can be considered as a strong alternative to solve multi-objective optimization problems.
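The per-dimension update described above (comprehensive-learning style) can be sketched as follows: for each velocity dimension, an exemplar particle is drawn at random from the whole population and its personal best guides that dimension. The inertia weight `w` and acceleration coefficient `c` are assumed placeholder values, and the multi-objective bookkeeping of A-MOCLPSO is omitted.

```python
import random

def velocity_update_random_exemplar(v, x, pbests, w=0.7, c=1.5):
    """Update each velocity dimension toward the personal best of a
    randomly drawn particle, instead of a single global/local best.
    v, x: this particle's velocity and position;
    pbests: personal-best positions of the whole population."""
    new_v = []
    for d in range(len(v)):
        k = random.randrange(len(pbests))  # random exemplar per dimension
        new_v.append(w * v[d] + c * random.random() * (pbests[k][d] - x[d]))
    return new_v
```

Because no global ranking is needed, this update avoids the cost of locating the swarm-wide best position at every step, which is the speed advantage the abstract points to.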

4.
A Multi-Objective Particle Swarm Optimization Algorithm for Constrained Optimization Problems
This paper proposes a multi-objective particle swarm optimization algorithm for constrained optimization problems (MOCPSO). The constrained optimization problem is first converted into a multi-objective problem, and an infeasibility threshold is introduced to make full use of the information carried by infeasible particles to guide the swarm's flight. A comparison criterion between particles is then proposed to rank their quality. Finally, Gaussian white-noise perturbation is introduced to increase population diversity and improve the swarm's ability to escape local optima. Simulation experiments on representative standard test functions show that, compared with other algorithms, MOCPSO is an effective algorithm for constrained optimization problems.
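The first step above, converting a constrained problem into a multi-objective one, is commonly done by treating the total constraint violation as a second objective. A minimal sketch of that conversion (the infeasibility threshold and MOCPSO's comparison criterion are not reproduced here):

```python
def constrained_to_biobjective(f, constraints, x):
    """Convert min f(x) s.t. g_i(x) <= 0 into the bi-objective pair
    (f(x), total constraint violation). Feasible points have
    violation 0; infeasible points carry graded violation information
    that can still guide the swarm."""
    violation = sum(max(0.0, g(x)) for g in constraints)
    return f(x), violation
```

Under this view, Pareto comparison between the two objectives replaces a hard feasible/infeasible cutoff, which is what lets infeasible particles contribute useful search information.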

5.
For multimodal optimization with particle swarm algorithms, existing niching methods suffer from the drawback of requiring problem-specific parameters; this paper proposes a niching algorithm that needs no such parameter. The algorithm determines each particle's local best from two factors: the particle's share of the total population fitness and the Euclidean distances between particles. The radius of each niche is then determined, in every iteration, as the average Euclidean distance between each local-best particle and the ordinary particles that take it as their local best. Experimental results on several widely used test functions show that the algorithm outperforms niching algorithms that require niche parameters (FERPSO, SPSO) in convergence speed and success rate.
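The local-best rule above combines a fitness share with Euclidean distance. One plausible reading, sketched below under stated assumptions (maximization, and a score of fitness-share divided by distance; the paper's exact formula may differ), is that each particle adopts the fitter particle that best trades off being both fit and close:

```python
import math

def euclid(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def local_best(i, positions, fitness):
    """Pick particle i's local best among strictly fitter particles:
    maximize (fitness share of the population) / (distance to i).
    Returns i itself if no fitter particle exists (i is a peak).
    Sketch of the parameter-free niching idea; maximization assumed."""
    total = sum(fitness)
    best_j, best_score = i, -1.0
    for j, (p, f) in enumerate(zip(positions, fitness)):
        if j == i or f <= fitness[i]:
            continue
        d = euclid(positions[i], p)
        score = (f / total) / (d + 1e-12)  # fit AND close wins
        if score > best_score:
            best_score, best_j = score, j
    return best_j
```

Because particles with no fitter neighbour become their own local best, the swarm naturally splits into niches around each peak without a user-supplied niche radius.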

6.
To address the tendency of particle swarm optimization to converge prematurely into local optima, this paper combines a model of the global path-planning problem for mobile robots and proposes a particle swarm optimization algorithm with a perturbation mechanism. For individuals that have entered evolutionary stagnation, an individual-correction strategy generates new individuals to replace them, guiding the algorithm's search for feasible paths and helping particles escape local extrema. Simulation experiments show that, compared with other algorithms, the proposed algorithm has better search accuracy and global optimization ability.

7.
Applied Soft Computing, 2008, 8(1): 295–304
Several modified particle swarm optimizers are proposed in this paper. In DVPSO, a distribution vector is used in the update of velocity. This vector is adjusted automatically according to the distribution of particles in each dimension. In COPSO, the probabilistic use of a ‘crossing over’ update is introduced to escape from local minima. The landscape adaptive particle swarm optimizer (LAPSO) combines these two schemes with the aim of achieving more robust and efficient search. Empirical performance comparisons between these new modified PSO methods, and also the inertia weight PSO (IFPSO), the constriction factor PSO (CFPSO) and a covariance matrix adaptation evolution strategy (CMAES) are presented on several benchmark problems. All the experimental results show that LAPSO is an efficient method to escape from convergence to local optima and approaches the global optimum rapidly on the problems used.

8.
To address the performance and technical bottlenecks of particle swarm optimization, such as premature convergence on high-dimensional, large-scale, variable-coupled, multimodal, multi-extremum optimization problems, this paper builds a multi-operator selection and fusion mechanism with a biased roulette wheel, based on the behavior-learning operator of PSO and three differential mutation operators with different learning preferences, and proposes a multi-operator cooperative particle swarm optimization algorithm with a biased roulette wheel (MOCPSO). Working on the exemplar-particle set of the iterating swarm, MOCPSO first groups the iterating population and its exemplar set by quality, uses the roulette wheel to assign a mutation operator with a different learning preference to each group of exemplars, and matches each group with differential base vectors and best base vectors, pre-learning and optimizing the iterating population and its exemplars to balance the algorithm's global exploration and local exploitation. It then merges all subpopulations and, combined with the PSO behavior-learning operator, guides the state update of the iterating population to improve global convergence. Finally, combined with an elite learning strategy, Gaussian perturbation is applied to the swarm's historical best to improve the algorithm's ability to escape local optima and preserve the diversity of convergence. Experimental results show that MOCPSO is competitive with five state-of-the-art swarm intelligence algorithms of the same type on the CEC2014 benchmark problems and has stronger optimization properties.

9.
Memetic algorithms, a class of nature-inspired algorithms, have been successfully applied to solve numerous optimization problems in diverse fields. In this paper, we propose a new memetic computing model, using a hierarchical particle swarm optimizer (HPSO) and the Latin hypercube sampling (LHS) method. In the bottom layer of the hierarchical PSO, several swarms evolve in parallel to avoid being trapped in local optima. The learning strategy for each swarm is the well-known comprehensive learning method with a newly designed mutation operator. After the evolution process in the bottom layer is completed, one particle from each swarm is selected as a candidate to construct the swarm in the top layer, which evolves by the same strategy employed in the bottom layer. The local search strategy based on LHS is imposed on particles in the top layer every specified number of generations. The new memetic computing model is extensively evaluated on a suite of 16 numerical optimization functions as well as the cylindricity error evaluation problem. Experimental results show that the proposed algorithm compares favorably with conventional PSO and several variants.

10.
To address the weak ability of particle swarm optimization to balance global and local search, this paper proposes a particle swarm algorithm with time-varying acceleration coefficients. The new algorithm builds on the constriction-factor PSO and uses two constriction mechanisms: the first constriction factor adjusts the global and local search models, while the second uses time-varying acceleration coefficients to further balance the influence of the global and local best values on swarm updates. Three tests, covering accuracy, success probability, and convergence speed, were run against the basic PSO, the constriction-factor PSO, and a chaotic PSO on eight standard benchmark functions; the experimental results show that the new algorithm achieves higher accuracy and faster convergence. Through its time-varying acceleration coefficients, the new algorithm better balances the global and local search models of PSO.

11.
This paper introduces a novel parameter automation strategy for the particle swarm algorithm and two further extensions to improve its performance after a predefined number of generations. Initially, to efficiently control the local search and convergence to the global optimum solution, time-varying acceleration coefficients (TVAC) are introduced in addition to the time-varying inertia weight factor in particle swarm optimization (PSO). From the basis of TVAC, two new strategies are discussed to improve the performance of the PSO. First, the concept of "mutation" is introduced to particle swarm optimization along with TVAC (MPSO-TVAC), by adding a small perturbation to a randomly selected modulus of the velocity vector of a random particle with predefined probability. Second, we introduce a novel particle swarm concept, the "self-organizing hierarchical particle swarm optimizer with TVAC (HPSO-TVAC)". Under this method, only the "social" part and the "cognitive" part of the particle swarm strategy are considered to estimate the new velocity of each particle, and particles are reinitialized whenever they stagnate in the search space. In addition, to overcome the difficulties of selecting an appropriate mutation step size for different problems, a time-varying mutation step size was introduced. Further, for most of the benchmarks, the performance of the MPSO-TVAC method is found to be insensitive to the mutation probability. On the other hand, the effect of reinitialization velocity on the performance of the HPSO-TVAC method is also observed. A time-varying reinitialization step size is found to be an efficient parameter optimization strategy for the HPSO-TVAC method. The HPSO-TVAC strategy outperformed all the methods considered in this investigation for most of the functions. Furthermore, it has also been observed that both the MPSO and HPSO strategies perform poorly when the acceleration coefficients are fixed at two.
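The TVAC scheme above varies the two acceleration coefficients linearly over the run: the cognitive coefficient c1 decreases while the social coefficient c2 increases, shifting the swarm from exploration toward convergence. A minimal sketch; the endpoint values 2.5/0.5 are the commonly cited settings, stated here as assumptions:

```python
def tvac(t, T, c1_i=2.5, c1_f=0.5, c2_i=0.5, c2_f=2.5):
    """Time-varying acceleration coefficients at iteration t of T:
    c1 ramps linearly from c1_i down to c1_f (cognitive, exploration),
    c2 ramps linearly from c2_i up to c2_f (social, convergence)."""
    frac = t / float(T)
    c1 = c1_i + (c1_f - c1_i) * frac
    c2 = c2_i + (c2_f - c2_i) * frac
    return c1, c2
```

Early on (t near 0) particles mostly follow their own personal bests and roam the space; late in the run (t near T) the social term dominates and the swarm contracts toward the best-found region.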

12.
Particle swarm optimizer (PSO) is an effective tool for solving many optimization problems. However, it may easily get trapped into a local optimum when solving complex multimodal nonseparable problems. This paper presents a novel algorithm called distributed learning particle swarm optimizer (DLPSO) to solve multimodal nonseparable problems. The strategy of DLPSO is to extract good vector information from local vectors which are distributed around the search space and then to form a new vector which can jump out of local optima and be optimized further. Experimental studies on a set of test functions show that DLPSO exhibits better performance than several other peer algorithms in solving optimization problems with few interactions between variables.

13.
A Chaotic Particle Swarm Optimization Algorithm Based on SQP Local Search
This paper proposes a chaotic particle swarm optimization method based on sequential quadratic programming (CPSO-SQP). The chaotic PSO serves as the global searcher, while SQP accelerates local search, allowing particles to search the whole space on top of fast local refinement; this both guarantees the convergence of the algorithm and greatly increases the probability of reaching the global optimum. Simulation results show that the algorithm has high accuracy, a high success rate, and fast global convergence, clearly outperforming existing algorithms. The proposed algorithm is applied to optimizing the unit consumption of ethylene in the cascade reaction process of a high-density polyethylene (HDPE) plant; analysis based on the industrial reaction mechanism and plant operating experience shows that the proposed algorithm is feasible.

14.
In this paper, we try to improve the performance of the particle swarm optimizer by incorporating the linkage concept, which is an essential mechanism in genetic algorithms, and design a new linkage identification technique called dynamic linkage discovery to address the linkage problem in real-parameter optimization problems. Dynamic linkage discovery is a costless and effective linkage recognition technique that adapts the linkage configuration by employing only the selection operator without extra judging criteria irrelevant to the objective function. Moreover, a recombination operator that utilizes the discovered linkage configuration to promote the cooperation of particle swarm optimizer and dynamic linkage discovery is accordingly developed. By integrating the particle swarm optimizer, dynamic linkage discovery, and recombination operator, we propose a new hybridization of optimization methodologies called particle swarm optimization with recombination and dynamic linkage discovery (PSO-RDL). In order to study the capability of PSO-RDL, numerical experiments were conducted on a set of benchmark functions as well as on an important real-world application. The benchmark functions used in this paper were proposed in the 2005 Institute of Electrical and Electronics Engineers Congress on Evolutionary Computation. The experimental results on the benchmark functions indicate that PSO-RDL can provide a level of performance comparable to that given by other advanced optimization techniques. In addition to the benchmark, PSO-RDL was also used to solve the economic dispatch (ED) problem for power systems, which is a real-world problem and highly constrained. The results indicate that PSO-RDL can successfully solve the ED problem for the three-unit power system and obtain the currently known best solution for the 40-unit system.

15.
Active Contour Model Based on Gaussian Mixture Models for Brain MRI Segmentation
Traditional active contour models for image segmentation rely on boundary information of the target, and it is hard to obtain the true solution when the image contains strong noise or the target has weak boundaries. A Gaussian mixture model is introduced to construct a new constraint term, under which the model reduces the influence of noise and prevents leakage through weak boundaries. Gaussian mixture models are usually estimated with the Expectation-Maximization (EM) algorithm, which is a local optimization algorithm and is sensitive to initial values. A particle swarm algorithm is therefore introduced and an improved version proposed, using its global optimization capability to estimate the parameters of the Gaussian mixture model and improve parameter accuracy. Segmentation experiments on brain magnetic resonance images (MRI) show that the model achieves good segmentation results.

16.
According to the “No Free Lunch (NFL)” theorem, there is no single optimization algorithm to solve every problem effectively and efficiently. Different algorithms possess capabilities for solving different types of optimization problems. It is difficult to predict the best algorithm for every optimization problem. However, the ensemble of different optimization algorithms could be a potential solution and more efficient than using one single algorithm for solving complex problems. Inspired by this, we propose an ensemble of different particle swarm optimization algorithms called the ensemble particle swarm optimizer (EPSO) to solve real-parameter optimization problems. In each generation, a self-adaptive scheme is employed to identify the top algorithms by learning from their previous experiences in generating promising solutions. Consequently, the best-performing algorithm can be determined adaptively for each generation and assigned to individuals in the population. The performance of the proposed ensemble particle swarm optimization algorithm is evaluated using the CEC2005 real-parameter optimization benchmark problems and compared with each individual algorithm and other state-of-the-art optimization algorithms to show the superiority of the proposed ensemble particle swarm optimization (EPSO) algorithm.
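The self-adaptive assignment described above can be sketched as a success-proportional lottery: each PSO variant's recent count of improved solutions sets its probability of being assigned to individuals in the next generation, with a small epsilon keeping unlucky variants alive. This is an illustrative sketch of the idea, not EPSO's exact bookkeeping.

```python
import random

def ensemble_assign(success_counts, population_size, epsilon=0.01):
    """Assign one of several PSO variants to each individual, with
    probability proportional to each variant's recent success count
    (plus epsilon so no variant is abandoned entirely).
    Returns a list of variant indices, one per individual."""
    weights = [c + epsilon for c in success_counts]
    total = sum(weights)
    probs = [w / total for w in weights]
    assignment = []
    for _ in range(population_size):
        r, acc = random.random(), 0.0
        for k, p in enumerate(probs):
            acc += p
            if r <= acc:
                assignment.append(k)
                break
        else:
            assignment.append(len(probs) - 1)
    return assignment
```

Resetting the success counts every few generations (a design choice assumed here) keeps the scheme responsive when the best-performing variant changes over the course of the run.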

17.
A heuristic particle swarm optimizer (HPSO) algorithm for truss structures with discrete variables is presented based on the standard particle swarm optimizer (PSO) and the harmony search (HS) scheme. The HPSO is tested on several truss structures with discrete variables and is compared with the PSO and the particle swarm optimizer with passive congregation (PSOPC), respectively. The results show that the HPSO is able to accelerate the convergence rate effectively and has the fastest convergence rate among these three algorithms. The research shows the proposed HPSO can be effectively used to solve optimization problems for steel structures with discrete variables.

18.
To improve the distribution and convergence of the solution sets produced by multi-objective optimization algorithms, this paper proposes a multi-objective particle swarm optimization algorithm based on decomposition and differential evolution (dMOPSO-DE). The algorithm generates a set of uniform direction vectors via a proposed direction angle to ensure a uniform distribution of particles; it introduces an implicit elitism strategy and a differential-evolution correction mechanism to select the global best particle, preventing the population from falling onto a local Pareto front; and it adopts a particle-reset strategy to maintain population diversity. Compared with the non-dominated sorting genetic algorithm (NSGA-II), multi-objective particle swarm optimization (MOPSO), decomposition-based multi-objective particle swarm optimization (dMOPSO), and the decomposition-based multi-objective evolutionary algorithm with differential evolution (MOEA/D-DE), the experimental results show that the proposed algorithm has good convergence and diversity when solving multi-objective optimization problems.

19.
A Clustering-Based Multi-Subswarm Particle Swarm Optimization Algorithm
Building on particle swarm optimization, this paper proposes a clustering-based multi-subswarm particle swarm optimization algorithm. In each iteration, the swarm is first partitioned into several subswarms by a clustering method; each particle then updates its velocity and position using its personal best and the best particle within its subswarm. This treatment increases information exchange among particles and exploits the information of more particles during iteration, giving the algorithm better convergence. Simulation results show that the algorithm's performance is superior to that of standard particle swarm optimization.

20.
A Dynamic Multi-Swarm Particle Swarm Optimization Algorithm Based on K-Means Clustering and Its Application
To address the tendency of particle swarm optimization to fall into local optima when solving complex multimodal problems, this paper proposes a dynamic multi-swarm particle swarm optimization algorithm based on K-means clustering (KDMSPSO). The algorithm uses K-means clustering to divide the population into several subswarms (clusters); to strengthen information exchange among subswarms, the subswarms are dynamically regrouped; and within each subswarm, a particle's velocity is adjusted jointly by the center particle of its subswarm and the information of all of the particle's neighbors. In benchmark-function tests and a practical application, the results show that KDMSPSO has certain advantages over other PSO algorithms.
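The clustering step shared by the last two entries can be sketched as follows: a plain k-means pass labels each particle with a subswarm, and each subswarm then exposes its own best particle for its members to learn from. Minimization is assumed, and the dynamic regrouping and center-particle velocity rule of KDMSPSO are not reproduced; this only illustrates the partition-then-local-best idea.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means; returns a cluster label for every point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # assign each point to its nearest center
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centers[c])))
        # move each center to the mean of its members
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = [sum(col) / len(members)
                              for col in zip(*members)]
    return labels

def subswarm_bests(positions, fitness, k=2):
    """Partition the swarm into k clusters and return, per cluster, the
    index of its best (lowest-fitness) particle -- the subswarm best
    that members of that cluster learn from."""
    labels = kmeans(positions, k)
    best = {}
    for i, (l, f) in enumerate(zip(labels, fitness)):
        if l not in best or f < fitness[best[l]]:
            best[l] = i
    return labels, best
```

Replacing the single global best with per-cluster bests keeps spatially separated subswarms exploring different basins, which is the mechanism both abstracts rely on to resist premature convergence.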
