Similar documents
20 similar documents found
1.
A clustering-based niching technique is proposed that can effectively solve multimodal problems, locate multiple optima, and converge quickly. The cognition-only particle swarm optimizer uses only each particle's own cognitive information, so every particle searches a local region and drifts toward that region's optimum, but it suffers from slow convergence. To address this, an improved algorithm is proposed that lets particles converge rapidly to the neighborhoods of local optima. The best positions visited by the particles eventually form several clusters; clustering them identifies the particles belonging to each cluster. The problem is thereby converted into several single-modal sub-problems, and the guaranteed-convergence particle swarm optimizer is then applied within each cluster to obtain that cluster's optimum. Experiments demonstrate the effectiveness of the method in a ring-topology setting.
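A rough sketch of the pipeline described above, assuming k-means as the clustering step and a Rastrigin test function (both are my choices; the abstract does not name them): a cognition-only swarm is run first, the personal-best positions it leaves behind are clustered, and each cluster would then be refined by a guaranteed-convergence PSO (only indicated by the final loop here).

```python
# Sketch only: cognition-only PSO, then cluster the personal bests into niches.
import numpy as np
from sklearn.cluster import KMeans

def rastrigin(x):
    return 10 * len(x) + float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

rng = np.random.default_rng(0)
dim, n_particles, iters = 2, 40, 200
pos = rng.uniform(-5.12, 5.12, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([rastrigin(p) for p in pos])

w, c1 = 0.7, 1.5
for _ in range(iters):
    # cognition-only update: each particle is attracted only to its own pbest
    r1 = rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos)
    pos = pos + vel
    vals = np.array([rastrigin(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]

# cluster the personal bests; each cluster approximates one niche / local optimum
k = 5  # assumed number of niches
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pbest)
for c in range(k):
    members = pbest[labels == c]
    # each cluster would now be refined by a guaranteed-convergence PSO
    print(f"cluster {c}: best value {min(rastrigin(m) for m in members):.4f}")
```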

2.
According to the “No Free Lunch” (NFL) theorem, there is no single optimization algorithm that solves every problem effectively and efficiently. Different algorithms possess capabilities for solving different types of optimization problems, and it is difficult to predict the best algorithm for every optimization problem. However, an ensemble of different optimization algorithms can be a potential solution, and more efficient than any single algorithm, for solving complex problems. Inspired by this, we propose an ensemble of different particle swarm optimization algorithms, called the ensemble particle swarm optimizer (EPSO), to solve real-parameter optimization problems. In each generation, a self-adaptive scheme is employed to identify the top algorithms by learning from their previous experience in generating promising solutions. Consequently, the best-performing algorithm can be determined adaptively for each generation and assigned to the individuals in the population. The performance of EPSO is evaluated on the CEC2005 real-parameter optimization benchmark problems and compared with each constituent algorithm and with other state-of-the-art optimization algorithms, demonstrating the superiority of the proposed approach.
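A rough sketch of the success-based, self-adaptive assignment idea: two toy velocity-update rules stand in for the ensemble members (the actual EPSO uses a set of established PSO variants and its own adaptation scheme), and each generation a rule is assigned to each particle with probability proportional to its recent record of improving personal bests. All names and parameter values here are illustrative assumptions.

```python
# Sketch only: roulette-wheel assignment of update rules based on recent success.
import numpy as np

def sphere(x):
    return float(np.sum(x**2))

rng = np.random.default_rng(1)
dim, n, iters = 10, 30, 300
pos = rng.uniform(-10, 10, (n, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([sphere(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

def rule_inertia(i):      # classic inertia-weight update
    return 0.72 * vel[i] + 1.49 * rng.random(dim) * (pbest[i] - pos[i]) \
                         + 1.49 * rng.random(dim) * (gbest - pos[i])

def rule_cognition(i):    # cognition-heavy update (illustrative second member)
    return 0.6 * vel[i] + 2.0 * rng.random(dim) * (pbest[i] - pos[i])

rules = [rule_inertia, rule_cognition]
success = np.ones(len(rules))            # Laplace-smoothed success counters

for _ in range(iters):
    probs = success / success.sum()
    choice = rng.choice(len(rules), size=n, p=probs)
    for i in range(n):
        vel[i] = rules[choice[i]](i)
        pos[i] = pos[i] + vel[i]
        v = sphere(pos[i])
        if v < pbest_val[i]:             # credit the rule that produced the improvement
            pbest[i], pbest_val[i] = pos[i].copy(), v
            success[choice[i]] += 1
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best value:", pbest_val.min())
```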

3.
Applied Soft Computing, 2008, 8(1): 295-304
Several modified particle swarm optimizers are proposed in this paper. In DVPSO, a distribution vector is used in the velocity update; this vector is adjusted automatically according to the distribution of the particles in each dimension. In COPSO, a probabilistic ‘crossing over’ update is introduced to escape from local minima. The landscape adaptive particle swarm optimizer (LAPSO) combines these two schemes with the aim of achieving a more robust and efficient search. Empirical performance comparisons between these new modified PSO methods, the inertia weight PSO (IFPSO), the constriction factor PSO (CFPSO), and a covariance matrix adaptation evolution strategy (CMAES) are presented on several benchmark problems. The experimental results show that LAPSO is effective at escaping convergence to local optima and approaches the global optimum rapidly on the problems used.

4.
Adaptive cooperative particle swarm optimizer
An Adaptive Cooperative Particle Swarm Optimizer (ACPSO) is introduced in this paper, which realizes cooperation through the use of the Learning Automata (LA) algorithm. The cooperative strategy of ACPSO optimizes the problem collaboratively and evaluates it in different contexts. In the ACPSO algorithm, a set of learning automata associated with the dimensions of the problem tries to find the correlated variables of the search space and to optimize the problem intelligently. This collective behavior of ACPSO accomplishes the adaptive selection of swarm members. Simulations were conducted on four types of benchmark suites, which contain three state-of-the-art numerical optimization benchmarks in addition to one new set of active coordinate rotated test functions. The results demonstrate the ability of ACPSO to learn the correlated variables of the search space and show how efficiently it optimizes coordinate-rotated multimodal problems, composition functions, and high-dimensional multimodal problems.

5.
To address the premature convergence problem of particle swarm optimization, a particle swarm optimization algorithm with alternating population splitting and sub-swarm merging (SPSDPSO) is proposed. When the search stagnates, the swarm is split into sub-swarms; within each sub-swarm, optimization performance is improved by randomly re-initializing particles and by an individual replacement strategy. After the sub-swarms have evolved for a certain number of generations, they are merged back into one population and the optimization continues, with whole-population evolution and sub-swarm evolution alternating until the termination condition is met. The alternating population/sub-swarm evolution strategy of SPSDPSO enhances swarm diversity and allows sufficient information exchange between sub-swarms. A convergence analysis shows that SPSDPSO converges to the global optimum with probability 1, and results on test functions show that the global convergence of the new algorithm is significantly improved.
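A rough sketch of the split-on-stagnation idea (a simplified reading of the abstract, not the paper's exact SPSDPSO): if the global best has not improved for a number of generations, the swarm is split into two sub-swarms, part of each sub-swarm is randomly re-initialized, the sub-swarms evolve separately for a few generations, and then they are merged back into one population. The individual-replacement strategy and all parameter values are assumptions made for illustration.

```python
# Sketch only: stagnation-triggered split / partial re-initialization / merge.
import numpy as np

def f(x):                         # toy objective (Sphere)
    return float(np.sum(x**2))

rng = np.random.default_rng(2)
dim, n = 5, 20

def step(pos, vel, pbest, pval, gbest, w=0.72, c1=1.49, c2=1.49):
    """One standard PSO generation, operating in place on the given arrays."""
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel[:] = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([f(p) for p in pos])
    imp = vals < pval
    pbest[imp], pval[imp] = pos[imp], vals[imp]
    return pbest[np.argmin(pval)].copy(), float(pval.min())

pos = rng.uniform(-10, 10, (n, dim)); vel = np.zeros_like(pos)
pbest, pval = pos.copy(), np.array([f(p) for p in pos])
gbest, gval = pbest[np.argmin(pval)].copy(), float(pval.min())

stall, stall_limit, sub_gens = 0, 10, 15
for gen in range(300):
    gbest, gval_new = step(pos, vel, pbest, pval, gbest)
    stall = 0 if gval_new < gval - 1e-12 else stall + 1
    gval = min(gval, gval_new)
    if stall >= stall_limit:
        for idx in (slice(0, n // 2), slice(n // 2, n)):   # two sub-swarms
            k = max(1, (n // 2) // 4)                      # re-initialize a quarter of each
            pos[idx][:k] = rng.uniform(-10, 10, (k, dim))
            sub_best = pbest[idx][np.argmin(pval[idx])].copy()
            for _ in range(sub_gens):                      # sub-swarms evolve independently
                sub_best, _ = step(pos[idx], vel[idx], pbest[idx], pval[idx], sub_best)
        stall = 0                                          # merging = continuing with the full arrays
print("best value found:", gval)
```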

6.
This paper proposes Hyperspherical Acceleration Effect Particle Swarm Optimization (HAEPSO) for optimizing complex, multi-modal functions. The HAEPSO algorithm finds particles that are trapped in deep local minima and accelerates them in the direction of the global optimum. The technique improves efficiency by manipulating the PSO parameters in a hyperspherical coordinate system. Performance comparisons of HAEPSO against different PSO variants on standard benchmark functions indicate that the proposed algorithm is robust, produces good-quality solutions, and converges faster, making it an effective technique for solving complex, higher-dimensional multi-modal functions.

7.
A new improved particle swarm optimization method
To keep particle swarm optimization from becoming trapped in local optima, two new methods are proposed that adjust the inertia weight in parallel. Particles whose fitness is better than or equal to the swarm's mean fitness adjust their inertia weight with a dynamic nonlinear equation, gradually converging toward the global optimum while preserving their relatively favorable surroundings; particles worse than the mean adjust their inertia weight with a dynamic Logistic chaotic map, gradually escaping local optima in a complex, changing environment and dynamically searching for the global optimum. The two methods complement and coordinate with each other, so that the two dynamic sub-populations cooperate and co-evolve. Experimental results confirm that the algorithm outperforms well-known improved algorithms of the same kind under a variety of conditions.
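The abstract does not give the two weight-update equations, so the forms below are only illustrative assumptions: a smooth nonlinear decrease for particles at or better than the mean fitness, and a Logistic chaotic map for the others.

```python
# Sketch only: two parallel inertia-weight rules selected by comparison with mean fitness.
import numpy as np

def weight_nonlinear(t, t_max, w_max=0.9, w_min=0.4):
    # assumed dynamic nonlinear decrease toward w_min as iterations proceed
    return w_min + (w_max - w_min) * (1 - t / t_max) ** 2

def weight_chaotic(w_prev):
    # Logistic map z' = 4 z (1 - z) keeps the weight jumping around inside (0.4, 0.9)
    z = (w_prev - 0.4) / 0.5 if 0.4 < w_prev < 0.9 else 0.37
    z = 4.0 * z * (1.0 - z)
    return 0.4 + 0.5 * z

fitness = np.array([3.2, 1.1, 7.5, 2.0])          # smaller is better here
mean_fit = fitness.mean()
w = np.full_like(fitness, 0.9)
for t in range(1, 6):
    for i, fi in enumerate(fitness):
        if fi <= mean_fit:                        # "good" particle: converge smoothly
            w[i] = weight_nonlinear(t, 100)
        else:                                     # "bad" particle: chaotic escape
            w[i] = weight_chaotic(w[i])
    print(t, np.round(w, 3))
```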

8.
To improve the convergence and distribution of the multi-objective particle swarm optimization (MOPSO) algorithm along the Pareto front, the traditional MOPSO method is improved. First, gbest is selected by a fitness-proportionate method based on the concept of Pareto dominance; second, the external elite archive is updated using a dynamic crowding distance, and genetic operators are applied to the elite population; finally, an adaptive elimination mechanism is introduced into the particle population to strengthen the evolution of both the particle population and the elite population. Results on typical test functions show that the algorithm clearly improves convergence accuracy and distribution.

9.
Experience-bootstrap particle swarm optimization algorithm
The experience-bootstrap particle swarm optimization algorithm (EIPSO) introduces an experience-bootstrap (EI) search operator into particle swarm optimization. The operator re-initializes part of a randomly selected particle's individual experience to form a candidate experience, and the particle's new experience is determined from the fitness values of the candidate and the original experience. In every generation, the EI search is performed with probability p, and the experience-guided evolutionary search is performed with probability 1-p. The EI operator preserves the particles' search range and diversity, so the algorithm retains some search ability even after the particles have converged. Comparative experimental results show that EIPSO has good overall performance.
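A small sketch of the experience-bootstrap (EI) operator as read from the abstract: part of a randomly chosen particle's personal best is re-initialized to form a candidate experience, the better of the two is kept, and the operator fires with probability p each generation. The bounds, the re-initialization fraction, and the Sphere objective are illustrative assumptions.

```python
# Sketch only: the EI operator applied to an array of personal bests.
import numpy as np

rng = np.random.default_rng(3)

def sphere(x):
    return float(np.sum(x**2))

def experience_bootstrap(pbest, pbest_val, lo=-10.0, hi=10.0):
    i = rng.integers(len(pbest))                    # pick one particle at random
    candidate = pbest[i].copy()
    mask = rng.random(candidate.shape) < 0.5        # re-initialize about half the dimensions
    candidate[mask] = rng.uniform(lo, hi, mask.sum())
    v = sphere(candidate)
    if v < pbest_val[i]:                            # keep whichever experience is fitter
        pbest[i], pbest_val[i] = candidate, v
    return pbest, pbest_val

pbest = rng.uniform(-10, 10, (5, 4))
pbest_val = np.array([sphere(p) for p in pbest])
p_ei = 0.3                                          # probability of applying EI each generation
for _ in range(20):
    if rng.random() < p_ei:
        pbest, pbest_val = experience_bootstrap(pbest, pbest_val)
print(np.round(pbest_val, 3))
```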

10.
A PSO algorithm with an improved inertia weight
For high-dimensional complex function optimization, the standard PSO algorithm converges slowly and easily falls into local optima. An inertia-weight function is proposed that achieves a good balance between the algorithm's global and local search abilities, so that fast convergence is obtained; in addition, a mutation operation applied in the later stage effectively strengthens the algorithm's ability to jump out of local optima. Comparative experiments on three typical test functions show that the improved algorithm achieves better results in both solution quality and solution speed.

11.
Acceleration Factor Harmonious Particle Swarm Optimizer
A Particle Swarm Optimizer (PSO) exhibits good performance on optimization problems, although it cannot guarantee convergence to a global, or even a local, minimum. However, there are adjustable parameters and restrictive conditions that affect the performance of the algorithm. In this paper, sufficient conditions for the asymptotic stability of the acceleration factor and inertia weight are deduced, and the admissible range of the inertia weight ω is extended to (-1, 1). Furthermore, a new adaptive PSO algorithm, the Acceleration Factor Harmonious PSO (AFHPSO), is proposed and proved to be a global search algorithm. AFHPSO is used for the parameter design of a fuzzy controller for a linear-motor-driven servo system. The performance of the nonlinear model of the servo system demonstrates the effectiveness of the optimized fuzzy controller and of AFHPSO.

12.
A novel dynamic particle swarm optimization algorithm
To improve the global search performance of the standard particle swarm optimization algorithm, a multi-swarm particle swarm optimization algorithm with a dynamically changing population is proposed. When the search stagnates, the population is split into two sub-swarms, and swarm diversity is enhanced by randomly re-initializing some particles in the sub-swarms and by an individual replacement mechanism. After the two sub-swarms have searched in parallel for a certain number of generations, they are merged so that particles from different sub-swarms can exchange information. A convergence analysis shows that the algorithm converges to the global optimum with probability 1, and experimental results show that it has good global search ability and fast convergence.

13.
Dempster-Shafer (D-S) evidence theory is an information fusion method with excellent performance. Because the evidence provided by different sensors differs in importance, the pieces of evidence need to be weighted before combination. Existing work on weighted D-S evidence theory has concentrated on the combination rule and has paid little attention to how to obtain optimized evidence weights, yet determining the weights is the basis of, and the key to, weighted evidence combination. To address this gap, a method for finding the optimal evidence weights is proposed. The idea is first explained and an optimization model is established; a particle swarm optimization algorithm is then improved and, exploiting its strength in solving nonlinear multimodal functions, used to solve for the optimal weights. Simulation examples show that this weighting scheme for evidence theory is effective and achieves better fusion results than the methods compared against.
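A minimal sketch of weighted evidence combination in the spirit described above: the basic probability assignments are weighted-averaged and the average is then combined with itself by Dempster's rule. The weights, which the paper obtains with an improved PSO, are fixed numbers here; the frame, the masses, and the averaging-then-combining scheme are illustrative assumptions rather than the paper's exact rule.

```python
# Sketch only: weighted averaging of mass functions followed by Dempster combination.
from itertools import product

def dempster(m1, m2):
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y                      # mass assigned to the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# two sensors reporting over the frame {A, B}; focal elements as frozensets
m_sensors = [
    {frozenset("A"): 0.8, frozenset("B"): 0.1, frozenset("AB"): 0.1},
    {frozenset("A"): 0.3, frozenset("B"): 0.6, frozenset("AB"): 0.1},
]
weights = [0.7, 0.3]                               # would be searched by PSO in the paper

# weighted average of the mass functions
avg = {}
for w, m in zip(weights, m_sensors):
    for k, v in m.items():
        avg[k] = avg.get(k, 0.0) + w * v

# combine the averaged evidence with itself n-1 times (n = number of sensors)
result = avg
for _ in range(len(m_sensors) - 1):
    result = dempster(result, avg)
print({''.join(sorted(k)): round(v, 3) for k, v in result.items()})
```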

14.
微型机与应用, 2014, (15): 72-75
An improved multi-swarm cooperative particle swarm optimization algorithm is proposed. The whole population is organized in a master-slave model with one master swarm and several slave swarms, and the particles of the slave swarms are initialized jointly, which avoids repeated searching by the multiple swarms. The algorithm also adopts a perturbation strategy: when the current global best has not changed within the iteration period defined by a perturbation factor, the particles' velocities are reset, forcing the swarm out of local minima. This not only increases population diversity and enlarges the search range but also alleviates the tendency of the whole population to fall into local minima. Tests on nine benchmark functions show that IMCPSO clearly outperforms the MCPSO algorithm.

15.
Stability analysis of the particle dynamics in particle swarm optimizer
Previous stability analysis of the particle swarm optimizer was restricted to the assumption that all parameters are nonrandom, in effect a deterministic particle swarm optimizer. We analyze the stability of the particle dynamics without this restrictive assumption using Lyapunov stability analysis and the concept of passive systems. Sufficient conditions for stability are derived, and an illustrative example is given. Simulation results confirm the prediction from theory that stability of the particle dynamics requires increasing the maximum value of the random parameter when the inertia factor is reduced.
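For reference, the deterministic setting that the earlier analyses mentioned in the first sentence addressed can be written down explicitly; the recurrence and the classical stability condition below are standard background on the non-random case, not the Lyapunov/passivity conditions derived in this paper.

```latex
% Background only: deterministic single-particle dynamics, with p the attractor
% and \varphi = c_1 + c_2 treated as a fixed, non-random parameter.
\[
  v_{t+1} = \omega\, v_t + \varphi\,(p - x_t), \qquad x_{t+1} = x_t + v_{t+1},
\]
% and the well-known deterministic stability (convergence) condition
\[
  |\omega| < 1, \qquad 0 < \varphi < 2\,(1 + \omega).
\]
```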

16.
We investigate the runtime of a binary Particle Swarm Optimizer (PSO) for optimizing pseudo-Boolean functions f: {0,1}^n -> R. The binary PSO maintains a swarm of particles searching for good solutions. Each particle consists of a current position from {0,1}^n, its own best position, and a velocity vector used in a probabilistic process to update its current position. The velocities for a particle are updated in the direction of its own best position and the position of the best particle in the swarm. We present a lower bound for the time needed to optimize any pseudo-Boolean function with a unique optimum. To prove upper bounds we transfer a fitness-level argument that is well-established for evolutionary algorithms (EAs) to PSO. This method is applied to estimate the expected runtime for the class of unimodal functions. A simple variant of the binary PSO is considered in more detail for the test function OneMax, showing that there the binary PSO is competitive to EAs. An additional experimental comparison reveals further insights.
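A small runnable sketch of the binary PSO scheme described above, applied to OneMax: velocities are pulled toward each particle's own best and the swarm best, and each bit is then set to 1 with probability given by a sigmoid of the velocity. The parameter values and the velocity clamp are common defaults, not the ones analyzed in the paper.

```python
# Sketch only: binary PSO on OneMax (maximize the number of ones).
import numpy as np

rng = np.random.default_rng(4)
n_bits, n_particles, iters = 30, 20, 200

pos = rng.integers(0, 2, (n_particles, n_bits))
vel = np.zeros((n_particles, n_bits))
pbest = pos.copy()
pbest_val = pos.sum(axis=1)
gbest = pbest[np.argmax(pbest_val)].copy()

w, c1, c2, v_max = 0.72, 1.49, 1.49, 4.0
for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    vel = np.clip(vel, -v_max, v_max)             # standard velocity clamping
    prob_one = 1.0 / (1.0 + np.exp(-vel))         # sigmoid turns velocity into a bit probability
    pos = (rng.random(pos.shape) < prob_one).astype(int)
    vals = pos.sum(axis=1)                        # OneMax fitness
    imp = vals > pbest_val
    pbest[imp], pbest_val[imp] = pos[imp], vals[imp]
    gbest = pbest[np.argmax(pbest_val)].copy()

print("best OneMax value:", pbest_val.max(), "of", n_bits)
```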

17.
For solving constrained optimization problems, an improved particle swarm optimization algorithm (CMPSO) is proposed. In CMPSO, to increase population diversity and improve the swarm's ability to escape local optima, a population diversity threshold is introduced; when the diversity falls below the given threshold, polynomial mutation is applied to the global best position and to each particle's personal best position. Based on the degree to which particles violate the constraints, a new criterion for comparing particles is proposed; this criterion retains some well-performing infeasible solutions. To increase the probability that the swarm flies toward the global optimum, a generalized learning strategy is adopted. Simulation results on classical test functions show that the proposed algorithm is a feasible approach to constrained optimization problems.
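A small sketch of the diversity-triggered mutation step: when a simple diversity measure of the swarm drops below a threshold, the global best position undergoes polynomial mutation. The diversity measure, threshold, bounds, and distribution index are illustrative assumptions; the constraint-comparison criterion and the generalized learning strategy are not reproduced here.

```python
# Sketch only: swarm diversity check plus polynomial mutation of the global best.
import numpy as np

rng = np.random.default_rng(5)

def diversity(pos):
    # mean distance of the particles to the swarm centroid
    return float(np.mean(np.linalg.norm(pos - pos.mean(axis=0), axis=1)))

def polynomial_mutation(x, lo, hi, eta=20.0, pm=0.5):
    y = x.copy()
    for j in range(len(y)):
        if rng.random() < pm:
            u = rng.random()
            if u < 0.5:
                delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
            y[j] = np.clip(y[j] + delta * (hi - lo), lo, hi)
    return y

pos = rng.normal(0.0, 0.01, (20, 5))           # an almost-collapsed swarm
gbest = pos.mean(axis=0)
div_threshold = 0.1                            # assumed diversity threshold
if diversity(pos) < div_threshold:
    gbest = polynomial_mutation(gbest, lo=-10.0, hi=10.0)
print("diversity:", round(diversity(pos), 4), "mutated gbest:", np.round(gbest, 3))
```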

18.
In this paper, the dynamic multi-swarm particle swarm optimizer (DMS-PSO) is improved by hybridizing it with the harmony search (HS) algorithm; the resulting algorithm is abbreviated DMS-PSO-HS. We present a novel approach that merges the HS algorithm into each sub-swarm of the DMS-PSO. Combining the exploration capabilities of the DMS-PSO with the stochastic exploitation of HS, the DMS-PSO-HS is developed. The whole DMS-PSO population is divided into a large number of small, dynamic sub-swarms, which also serve as individual HS populations. These sub-swarms are regrouped frequently, and information is exchanged among the particles in the whole swarm. The DMS-PSO-HS demonstrates improved performance on multimodal and composition test problems when compared with the DMS-PSO and the HS.

19.
To improve the ability of particle swarm optimization to solve multi-objective problems, the influence of the population initialization method on the algorithm is analyzed, and a multi-objective particle swarm optimization algorithm based on orthogonal design (ODMOPSO) is proposed. During the run, the initial population is generated by orthogonal design so that it is distributed uniformly over the feasible region, allowing the algorithm to search the whole feasible solution space evenly; in addition, a generalized learning strategy is introduced to increase the probability that particles fly toward the Pareto front. Tests on benchmark functions show that ODMOPSO obtains solutions of higher quality.

20.
A particle swarm algorithm with a nonlinearly varying inertia weight
A decay exponent and an iteration threshold are introduced to improve the linearly decreasing inertia-weight strategy of the basic particle swarm optimization algorithm: during the optimization iterations, the inertia weight varies nonlinearly with the current iteration number, the decay exponent, and the iteration threshold. Simulation experiments on three representative test functions, with comparisons against the basic particle swarm algorithm and other improved particle swarm algorithms, show that the proposed improvement has clear advantages in search accuracy, convergence speed, and stability.
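The abstract does not state the weight formula, so the schedule below is only an illustrative guess at a nonlinear decrease controlled by a decay exponent and an iteration threshold: the weight falls from w_max to w_min as a power of the remaining fraction of iterations before the threshold, and stays at w_min afterwards.

```python
# Sketch only: an assumed nonlinear inertia-weight schedule with exponent k and a threshold.
def inertia_weight(t, t_threshold, k=2.0, w_max=0.9, w_min=0.4):
    if t >= t_threshold:
        return w_min                              # hold the minimum after the threshold iteration
    return w_min + (w_max - w_min) * (1.0 - t / t_threshold) ** k

for t in (0, 50, 100, 150, 200, 300):
    print(t, round(inertia_weight(t, t_threshold=200, k=2.0), 3))
```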
