Similar Documents
1.
With the help of grey relational analysis, this study proposes two grey-based parameter automation strategies for particle swarm optimization (PSO): one for the inertia weight and the other for the acceleration coefficients. Under the proposed approaches, each particle has its own inertia weight and acceleration coefficients, whose values depend on the corresponding grey relational grade. Since the relational grade of a particle varies over the iterations, these parameters are also time-varying; even within the same iteration, they may differ from particle to particle. In addition, because grey relational analysis incorporates information about the population distribution, these parameter automation strategies allow the grey PSO to perform a global search over the search space with faster convergence. The proposed grey PSO is applied to the optimization of 12 unimodal and multimodal benchmark functions for illustration. Simulation results are compared with the adaptive PSO (APSO) and two well-known PSO variants, PSO with linearly varying inertia weight (PSO-LVIW) and PSO with time-varying acceleration coefficients (HPSO-TVAC), to demonstrate the search performance of the grey PSO.
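A minimal sketch of how a grey relational grade could drive per-particle parameters, assuming the global-best position as the reference sequence and an illustrative linear mapping from grade to inertia weight and acceleration coefficients; the abstract does not give the paper's exact mapping, so every bound below is an assumption.

```python
import numpy as np

def grey_relational_grades(X, ref, rho=0.5):
    """Grey relational grade of each particle position in X (n x d) with
    respect to a reference vector ref (e.g. the global-best position),
    using the usual grey relational coefficient formula.  A simplified
    sketch of the analysis named in the abstract, not the paper's exact
    procedure."""
    delta = np.abs(X - ref)                        # deviation sequences
    dmin, dmax = delta.min(), delta.max()          # global extrema of deviations
    coeff = (dmin + rho * dmax) / (delta + rho * dmax + 1e-12)
    return coeff.mean(axis=1)                      # one grade per particle

def grey_parameters(grade, w_lo=0.4, w_hi=0.9, c_lo=1.5, c_hi=2.5):
    """Hypothetical mapping from grade to per-particle parameters: particles
    close to the reference (high grade) get a small inertia weight and a
    stronger social pull; distant particles keep exploring.  The bounds are
    illustrative, not taken from the paper."""
    w  = w_hi - grade * (w_hi - w_lo)
    c1 = c_hi - grade * (c_hi - c_lo)              # cognitive coefficient
    c2 = c_lo + grade * (c_hi - c_lo)              # social coefficient
    return w, c1, c2
```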

2.
Particle swarm optimization (PSO) is a population-based stochastic optimization algorithm motivated by the intelligent collective behavior of some animals, such as flocks of birds or schools of fish. The most important features of PSO are its easy implementation and few adjustable parameters. A novel PSO method called LHNPSO, with low-discrepancy-sequence-initialized particles, a high-order (1/π²) nonlinear time-varying inertia weight, and constant acceleration coefficients, is proposed in this paper. The initial population is generated with the Halton sequence so as to fill the search space efficiently. Nonlinear functions with orders varied over wide ranges are employed to adjust the inertia weight and the cognitive and social parameters. Based on a sensitivity analysis of PSO performance with respect to the orders of these nonlinear functions, a nonlinear function of order 1/π² is selected to adjust the time-varying inertia weight, and the two acceleration coefficients are set to constants. A set of well-known benchmark optimization problems is then used to investigate the performance of the proposed LHNPSO algorithm and to facilitate comparison with three other types of PSO algorithms. The results show that the easily implemented LHNPSO converges faster and gives much more accurate final solutions for a variety of benchmark test functions.
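A brief sketch of the two ingredients described above, assuming SciPy's qmc.Halton generator for the low-discrepancy initialization and a simple power law of order 1/π² for the inertia weight; the abstract does not state the exact functional form, so the expression below is only one plausible choice.

```python
import numpy as np
from scipy.stats import qmc   # Halton low-discrepancy sequence (SciPy >= 1.7)

def halton_init(n_particles, dim, lo, hi, seed=0):
    """Fill the search box [lo, hi]^dim with Halton points so the initial
    swarm covers the space more evenly than uniform random sampling."""
    u = qmc.Halton(d=dim, seed=seed).random(n_particles)
    return lo + u * (hi - lo)

def nonlinear_inertia(t, t_max, w_max=0.9, w_min=0.4, order=1.0 / np.pi**2):
    """One plausible nonlinear time-varying inertia weight of order 1/pi^2;
    the exact expression used by LHNPSO is not given in the abstract, so
    treat this power law (and the weight bounds) as assumptions."""
    return w_max - (w_max - w_min) * (t / t_max) ** order
```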

3.
In this article we describe a novel Particle Swarm Optimization (PSO) approach to multi-objective optimization (MOO), called Time Variant Multi-Objective Particle Swarm Optimization (TV-MOPSO). TV-MOPSO is made adaptive by allowing its vital parameters (viz., the inertia weight and acceleration coefficients) to change with the iterations. This adaptiveness helps the algorithm explore the search space more efficiently. A new diversity parameter is used to ensure sufficient diversity among the solutions of the non-dominated fronts while retaining convergence to the Pareto-optimal front. TV-MOPSO has been compared with some recently developed multi-objective PSO techniques and evolutionary algorithms on 11 function optimization problems, using different performance measures.

4.
To address the problems that the traditional particle swarm optimization algorithm easily falls into local optima and depends heavily on parameter settings when solving complex optimization problems, a particle swarm optimization algorithm with independent adaptive parameter adjustment is proposed. The algorithm redefines particle evolutionary capability, population evolutionary capability, and the evolution rate, and on this basis gives independent adjustment strategies for the inertia weight and learning factors, better balancing the algorithm's local and global search abilities. To maintain population diversity and speed up convergence of the particles toward the global best position, a particle reconstruction strategy is applied during the iterations: particles with weaker evolutionary capability learn from particles with stronger evolutionary capability and are reconstructed into new particles. Finally, the algorithm is compared with four improved PSO algorithms on 10 benchmark functions from CEC2013 at different dimensionalities; the experimental results verify its efficiency in solving complex functions, and a convergence analysis demonstrates the algorithm's effectiveness.

5.
This paper introduces a novel parameter automation strategy for the particle swarm algorithm and two further extensions to improve its performance after a predefined number of generations. Initially, to efficiently control the local search and convergence to the global optimum, time-varying acceleration coefficients (TVAC) are introduced in addition to the time-varying inertia weight factor in particle swarm optimization (PSO). On the basis of TVAC, two new strategies are developed to improve the performance of PSO. First, the concept of "mutation" is introduced into particle swarm optimization along with TVAC (MPSO-TVAC), by adding, with a predefined probability, a small perturbation to a randomly selected modulus of the velocity vector of a random particle. Second, we introduce a novel particle swarm concept, the "self-organizing hierarchical particle swarm optimizer with TVAC (HPSO-TVAC)". Under this method, only the "social" part and the "cognitive" part of the particle swarm strategy are considered when estimating the new velocity of each particle, and particles are reinitialized whenever they stagnate in the search space. In addition, to overcome the difficulty of selecting an appropriate mutation step size for different problems, a time-varying mutation step size is introduced. Furthermore, for most of the benchmarks, the performance of the MPSO-TVAC method is found to be insensitive to the mutation probability. The effect of the reinitialization velocity on the performance of the HPSO-TVAC method is also examined, and a time-varying reinitialization step size is found to be an efficient parameter optimization strategy for HPSO-TVAC. The HPSO-TVAC strategy outperformed all the methods considered in this investigation for most of the functions. It has also been observed that both the MPSO and HPSO strategies perform poorly when the acceleration coefficients are fixed at two.
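The following sketch shows one common way to realize TVAC together with a linearly decreasing inertia weight; the start and end values are the ones usually quoted for this family of methods, not values taken from the abstract.

```python
def tvac(t, t_max, c1_start=2.5, c1_end=0.5, c2_start=0.5, c2_end=2.5,
         w_start=0.9, w_end=0.4):
    """Time-varying acceleration coefficients plus a linearly decreasing
    inertia weight, in the spirit of (H)PSO-TVAC: the cognitive coefficient
    c1 shrinks while the social coefficient c2 grows over the run.  Note
    that HPSO-TVAC itself drops the inertia term; w is returned here only
    for the MPSO-TVAC style update."""
    r = t / t_max
    c1 = c1_start + (c1_end - c1_start) * r
    c2 = c2_start + (c2_end - c2_start) * r
    w  = w_start + (w_end - w_start) * r
    return w, c1, c2
```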

6.
Particle swarm optimization (PSO) is a stochastic population-based algorithm motivated by the intelligent collective behavior of birds. The performance of the PSO algorithm depends strongly on the choice of appropriate parameters. The inertia weight is one such parameter, first proposed by Shi and Eberhart to bring about a balance between the exploration and exploitation characteristics of PSO. This paper presents an adaptive approach that determines the inertia weight in each dimension for each particle, based on its performance and its distance from its best position. Each particle therefore plays a different role in different dimensions of the search environment. By considering the stability condition together with the adaptive inertia weight, the acceleration parameters of PSO are also determined adaptively. The corresponding approach is called stability-based adaptive inertia weight (SAIW). The proposed method and several other models for adjusting the inertia weight are evaluated and compared. The efficiency of SAIW is validated on 22 static test problems, the moving peaks benchmark (MPB), and a real-world radar system design problem. Experimental results indicate that the proposed model greatly improves PSO performance in terms of solution quality as well as convergence speed in both static and dynamic environments.
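A hedged sketch of a per-particle, per-dimension inertia weight combined with stability-based acceleration coefficients; the distance-based mapping and the 10% safety margin are illustrative assumptions, since the abstract does not spell out the SAIW update rule.

```python
import numpy as np

def adaptive_inertia(X, pbest, improved, w_lo=0.4, w_hi=0.9):
    """Per-particle, per-dimension inertia weight in the spirit of SAIW:
    dimensions in which a particle sits far from its personal best get a
    larger weight, and a particle that just improved gets its weights
    lowered slightly.  X and pbest are (n x d); improved is a length-n
    boolean array.  The mapping itself is an assumption."""
    dist = np.abs(X - pbest)
    dist = dist / (dist.max(axis=1, keepdims=True) + 1e-12)
    w = w_lo + (w_hi - w_lo) * dist
    return np.where(improved[:, None], np.maximum(w - 0.1, w_lo), w)

def stability_based_accelerations(w):
    """Choose c1 = c2 from the classical PSO stability condition
    c1 + c2 < 2 (1 + w), keeping a 10% safety margin."""
    return 0.9 * (1.0 + w), 0.9 * (1.0 + w)
```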

7.
Second-Order Particle Swarm Optimization
To improve the global convergence of the standard particle swarm optimization algorithm, a new variant, the second-order particle swarm optimization algorithm, is proposed. First, the motivation for the second-order algorithm is presented, its convergence is analyzed, and the admissible ranges of its parameters are studied. Second, based on an analysis of the evolution equation of the second-order algorithm, a standard particle swarm algorithm with a random inertia weight is derived. Third, an oscillation factor is added to the second-order algorithm to regulate the rate of change of the particle velocities, so that the algorithm converges more reliably to the global optimum. Finally, these improvements are evaluated through simulations on typical test functions; the experimental results show that the methods effectively overcome premature convergence and outperform the standard particle swarm algorithm in both global convergence and convergence speed.

8.
An Improved Self-Organizing PSO Algorithm for Solving the TSP
To address the premature convergence of particle swarm optimization (PSO), this work starts from population diversity and, drawing on the characteristics of self-organized criticality, improves the parameter settings of PSO by using a self-organizing inertia weight and acceleration coefficients and by adding a mutation operator. Borrowing the concepts of swap operators and swap sequences, an improved self-organizing PSO algorithm that can search directly in the discrete domain is designed. The algorithm is applied to the traveling salesman problem (TSP), and its performance is compared with the basic PSO and other typical improved PSO algorithms. Experimental results confirm that the improved self-organizing PSO algorithm is effective.

9.
Adjustment Strategies for the Inertia Weight in Particle Swarm Optimization
胡建秀  曾建潮 《计算机工程》2007,33(11):193-195
The inertia weight is a key parameter of particle swarm optimization; it balances the algorithm's global and local search abilities and improves its convergence behavior. This paper analyzes the influence of the inertia weight on the convergence of PSO and, to further improve global optimality, proposes several strategies for adjusting the inertia weight. Simulation experiments on four test functions verify the feasibility of these strategies and show that they improve the global convergence and convergence speed of the algorithm simply and efficiently.

10.
An Adaptive Particle Swarm Optimization Algorithm Based on Reinforcement Learning
The inertia weight is an important parameter of particle swarm optimization (PSO); it balances the algorithm's global and local search abilities and improves its performance. To this end, an adaptive particle swarm optimization algorithm based on reinforcement learning (RPSO) is proposed. Different inertia-weight adjustment strategies are first treated as the particles' action set; Q-function values are then computed to assess the effect of several evolution steps of a particle, after which the optimal evolution strategy for the particle is selected and the inertia weight is adjusted dynamically, so as to strengthen the algorithm's ability to find the global optimum...

11.
PSO, like many stochastic search methods, is very sensitive to parameter settings, such that modifying a single parameter may cause a considerable change in the result. In this paper, we study the ability of learning automata to perform adaptive PSO parameter selection. We introduce two classes of learning-automata-based algorithms for adaptively selecting the inertia weight and acceleration coefficients. In the first class, all particles of a swarm use the same parameter values, adjusted by learning automata. In the second class, each particle has its own characteristics and sets its parameter values individually. In addition, for both classes of proposed algorithms, two approaches for changing the parameter values are applied. In the first approach, named adventurous, the value of a parameter is selected from a finite set, while in the second approach, named conservative, the value of a parameter either changes by a fixed amount or remains unchanged. Experimental results show that, compared with other schemes such as SPSO, PSOIW, PSO-TVAC, PSOLP, DAPSO, GPSO, and DCPSO, the proposed learning-automata-based algorithms have the same or even higher ability to find better solutions. In addition, the proposed algorithms converge to the stopping criteria significantly faster for some of the highly multimodal functions.
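As a rough illustration of the "adventurous" class, the sketch below uses a linear reward-inaction learning automaton that picks the inertia weight from a finite set and is reinforced whenever the swarm improves; the action set, learning rate, and reward signal are assumptions, not the paper's settings.

```python
import numpy as np

class LriAutomaton:
    """Linear reward-inaction learning automaton: keeps a probability vector
    over a finite action set, reinforces the chosen action when the
    environment reports success, and leaves the vector unchanged otherwise."""

    def __init__(self, actions, alpha=0.1, rng=None):
        self.actions = np.asarray(actions, dtype=float)
        self.p = np.full(len(self.actions), 1.0 / len(self.actions))
        self.alpha = alpha
        self.rng = rng or np.random.default_rng()
        self.last = 0

    def choose(self):
        """Sample an action (here: an inertia-weight value)."""
        self.last = int(self.rng.choice(len(self.actions), p=self.p))
        return self.actions[self.last]

    def reward(self, improved):
        """Reinforce the last action only if the swarm improved; on failure
        the probabilities are left as they are (reward-inaction)."""
        if improved:
            self.p *= (1.0 - self.alpha)
            self.p[self.last] += self.alpha

# Example use (first class: one automaton shared by the whole swarm):
# wa = LriAutomaton([0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
# w = wa.choose(); ...run one PSO iteration...; wa.reward(gbest_improved)
```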

12.
A PSO Algorithm with a Random Inertia Weight
Particle swarm optimization (PSO) is an optimization algorithm that simulates the swarm-intelligent behavior of bird flocks, fish schools, and similar groups, and it currently attracts wide attention from researchers at home and abroad. Based on an analysis of the velocity evolution equation of the basic PSO algorithm, this paper proposes a velocity equation that better describes the particles' evolutionary process, from which a PSO algorithm with a random inertia weight is derived. Simulation experiments on five typical test functions verify its feasibility and show that the PSO algorithm with a random inertia weight clearly improves on the PSO algorithm with a linearly decreasing inertia weight in both convergence speed and global convergence.
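A minimal sketch of one PSO iteration with a stochastic inertia weight, drawn independently for each particle as w = 0.5 + U(0,1)/2, which is a common random-weight choice; the paper's exact distribution is not given in the abstract, so treat it as an assumption.

```python
import numpy as np

def pso_step(X, V, pbest, pbest_f, gbest, f, c1=2.0, c2=2.0, rng=None):
    """One iteration of a PSO with a random inertia weight per particle.
    X, V, pbest are (n x d) arrays, pbest_f is the length-n array of
    personal-best values, and f is the objective to minimise."""
    rng = rng or np.random.default_rng()
    n, d = X.shape
    w = 0.5 + rng.random((n, 1)) / 2.0             # random inertia weight in [0.5, 1)
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    X = X + V
    fx = np.apply_along_axis(f, 1, X)              # evaluate all particles
    better = fx < pbest_f
    pbest[better], pbest_f[better] = X[better], fx[better]
    gbest = pbest[np.argmin(pbest_f)]
    return X, V, pbest, pbest_f, gbest
```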

13.
The particle swarm optimization (PSO) algorithm is an algorithmic technique for solving a wide range of optimization problems. This paper presents a new approach that extends PSO with a feedback control mechanism (FCPSO). The proposed FCPSO consists of two major steps. First, by evaluating the fitness value of each particle, a simple particle evolutionary fitness function is designed to automatically control the parameters, namely the acceleration coefficients, refreshing gap, learning probabilities, and number of potential exemplars. With such a simple particle evolutionary fitness function, each particle has its own search parameters within the search environment. Second, a local learning method using a competitive penalized scheme is developed to refine the solution. FCPSO has been comprehensively evaluated on 18 unimodal, multimodal, and composite benchmark functions, with and without rotation. Compared with various state-of-the-art algorithms, including traditional PSO algorithms and representative PSO variants, the performance of FCPSO is promising. The effects of parameter adaptation, parameter sensitivity, and the local search method are studied. Lastly, the proposed FCPSO is applied, together with the K-means method, to constructing a radial basis neural network for time-series prediction.

14.
The particle swarm optimizer (PSO), a new evolutionary computation algorithm, exhibits good performance on optimization problems, although it cannot guarantee convergence to a global minimum, or even a local minimum. However, there are some adjustable parameters and restrictive conditions that affect the performance of the algorithm. In this paper, the algorithm is analyzed as a time-varying dynamic system, and sufficient conditions for asymptotic stability involving the acceleration factors, the increment of the acceleration factors, and the inertia weight are deduced. The admissible range of the inertia weight is extended to (-1, 1). Based on the deduced principle for the acceleration factors, a new adaptive PSO algorithm, harmonious PSO (HPSO), is proposed, and it is proved that HPSO is a global search algorithm. In the experiments, HPSO is applied to the model identification of a linear-motor-driven servo system. A fitness function based on the Akaike information criterion is designed, so that the algorithm not only estimates the parameters but also determines the order of the model simultaneously. The results demonstrate the effectiveness of HPSO.

16.
To overcome the shortcomings of the standard grey wolf optimizer, namely limited solution accuracy and a tendency to fall into local optima when solving complex engineering optimization problems, a new grey wolf optimization algorithm is proposed for unconstrained continuous function optimization. The algorithm first uses an opposition-based learning strategy to generate the initial population, laying the groundwork for the global search. Inspired by particle swarm optimization, a nonlinearly decreasing update formula for the convergence factor is proposed; it is adjusted dynamically to balance the algorithm's global and local search abilities. To avoid getting trapped in local optima, a mutation operation is applied to the current best grey wolf individual. Simulation experiments on 10 test functions show that, compared with the standard grey wolf optimizer, the improved algorithm achieves better solution accuracy and faster convergence.
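A short sketch of two of the ingredients mentioned above, namely opposition-based initialization and a nonlinearly decreasing convergence factor; the exponent of the decrease and the simple greedy selection are assumptions, since the abstract does not give the paper's formulas.

```python
import numpy as np

def opposition_init(n, dim, lo, hi, fitness, rng=None):
    """Opposition-based initialisation: generate a random population, build
    its opposite population (lo + hi - x), and keep the n fittest of both
    (minimisation assumed)."""
    rng = rng or np.random.default_rng()
    X = lo + rng.random((n, dim)) * (hi - lo)
    opp = lo + hi - X
    both = np.vstack([X, opp])
    order = np.argsort([fitness(x) for x in both])
    return both[order[:n]]

def nonlinear_convergence_factor(t, t_max, a0=2.0, k=2.0):
    """One plausible nonlinear decrease of the GWO convergence factor from
    a0 to 0; the exponent k is an assumption, as the abstract only states
    that the decrease is nonlinear."""
    return a0 * (1.0 - (t / t_max) ** k)
```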

17.
康琦  汪镭  安静  吴启迪 《自动化学报》2010,36(8):1171-1181
The dynamic optimization of particle swarm parameters is investigated from the perspective of optimal control of systems. To cope with the "curse of dimensionality" in discrete dynamic programming, a population-based heuristic random search mechanism is introduced into the computation of the optimal policy, and a swarm-intelligence approximate dynamic programming scheme is proposed. Based on this scheme, a simplified approximate method for optimizing the parameters of a deterministic particle swarm feedback-control system is given and then extended to particle swarm systems with random variables. Simulations yield an approximately optimal dynamic law for the acceleration coefficients of the particle swarm, and the resulting policy is compared with a time-varying acceleration coefficients (TVAC) strategy on function optimization tasks. Preliminary experimental results show that the approximate dynamic programming scheme can be used effectively for the dynamic setting of particle swarm parameters.

18.
The artificial bee colony algorithm (ABC), inspired by the foraging behavior of honey bee swarms, is a biologically inspired optimization algorithm. It has been shown to be more effective than the genetic algorithm (GA), particle swarm optimization (PSO), and ant colony optimization (ACO). However, ABC is good at exploration but poor at exploitation, and its convergence speed is also an issue in some cases. To address these shortcomings, we propose an improved ABC algorithm called I-ABC. In I-ABC, the best-so-far solution, an inertia weight, and acceleration coefficients are introduced to modify the search process, with the inertia weight and acceleration coefficients defined as functions of the fitness. In addition, to further balance the search processes, the modification forms of the employed bees and the onlookers differ in the second acceleration coefficient. Experiments show that, for most functions, I-ABC has a faster convergence speed and better performance than both ABC and the gbest-guided ABC (GABC). However, I-ABC still cannot achieve the best solution for all optimization problems, and in a few cases it does not find better results than ABC or GABC. To inherit the strengths of ABC, GABC, and I-ABC, a high-efficiency hybrid ABC algorithm called PS-ABC, which has the abilities of prediction and selection, is proposed. Results show that PS-ABC has a convergence speed as fast as I-ABC and better search ability than the other relevant methods for almost all functions.
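A heavily hedged sketch of an I-ABC style candidate-generation step, keeping ABC's random perturbation, adding a best-so-far term, and making the weights functions of the (normalised) fitness; the sigmoid mapping and the coefficients are assumptions rather than the paper's definitions.

```python
import numpy as np

def i_abc_candidate(x, neighbor, gbest, norm_fit, rng=None):
    """Generate a candidate food source from the current one x (1-d array),
    a randomly chosen neighbor, and the best-so-far solution gbest.
    norm_fit is the current source's fitness scaled to [0, 1]."""
    rng = rng or np.random.default_rng()
    w  = 1.0 / (1.0 + np.exp(-norm_fit))           # fitness-dependent inertia weight (assumed form)
    c1 = 2.0 * w                                    # first acceleration coefficient (assumed)
    c2 = 2.0 * w                                    # second acceleration coefficient (assumed)
    phi = rng.uniform(-1.0, 1.0, size=x.shape)      # ABC's usual random factor
    psi = rng.uniform(0.0, 1.5, size=x.shape)       # factor for the best-so-far term
    return w * x + c1 * phi * (x - neighbor) + c2 * psi * (gbest - x)
```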

19.
A Chaotic Particle Swarm Optimization Algorithm
To address the defects of the traditional particle swarm algorithm, which easily falls into local minima and whose particles lose their search ability late in a run because their velocities drop too quickly, this paper proposes a new particle swarm algorithm based on chaos. The algorithm generates the inertia weight from a chaotic sequence instead of the traditional linearly decreasing scheme, which diversifies the particle velocities and improves the global search ability; the inertia weight is further adjusted according to the swarm's mean particle speed, preventing the loss of search ability caused by velocities decreasing too early; finally, by introducing the constraint relation between the inertia weight and the acceleration coefficients required for stability of the PSO system model, the local search ability of the algorithm is strengthened. Comparative simulation experiments show that the proposed improved chaotic particle swarm algorithm has better search performance than the traditional particle swarm algorithm.
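A small sketch of a logistic-map-based inertia weight plus a mean-speed guard, in the spirit of the chaotic scheme described above; the weight interval, the map parameter, and the adjustment rule are assumptions, as the abstract does not give them.

```python
import numpy as np

def next_chaotic_w(z, w_lo=0.4, w_hi=0.9, mu=4.0):
    """Advance the logistic map and map the chaotic variable z in (0, 1)
    onto an inertia-weight interval, replacing the classical linearly
    decreasing scheme."""
    z = mu * z * (1.0 - z)
    return z, w_lo + (w_hi - w_lo) * z

def velocity_guarded_w(w, velocities, v_floor):
    """Raise the inertia weight when the swarm's mean speed has fallen below
    a floor, so particles do not lose search ability late in the run."""
    mean_speed = float(np.mean(np.linalg.norm(velocities, axis=1)))
    return min(0.9, w * 1.1) if mean_speed < v_floor else w
```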

20.
Analysis and Improvement of the Particle Swarm Optimization Algorithm
The influence of the inertia weight on the optimization performance of particle swarm optimization (PSO) is analyzed, and a new strategy for choosing the inertia weight is proposed: the inertia weight is chosen at random while its mathematical expectation is adjusted adaptively, which effectively balances the algorithm's global and local search abilities. Tests show that the PSO algorithm based on this random inertia weight (RIW) strategy achieves clearly higher global search speed and accuracy.
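A compact sketch of a random inertia weight whose expectation is adapted according to whether the global best improved; the uniform distribution, step size, and bounds are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def random_inertia(mu, spread=0.1, rng=None):
    """Draw a random inertia weight around an adjustable expectation mu."""
    rng = rng or np.random.default_rng()
    return float(np.clip(rng.uniform(mu - spread, mu + spread), 0.0, 1.0))

def adapt_expectation(mu, improved, step=0.02, lo=0.3, hi=0.9):
    """Shift the expectation of the random weight: shrink it when the global
    best keeps improving (favour exploitation), enlarge it otherwise
    (favour exploration)."""
    mu = mu - step if improved else mu + step
    return float(np.clip(mu, lo, hi))
```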
