20 similar documents found; search time: 0 ms
1.
To solve constrained optimization problems, an improved particle swarm optimization algorithm (CMPSO) is proposed. In CMPSO, a population-diversity threshold is introduced to increase swarm diversity and improve the swarm's ability to escape local optima: when diversity falls below the given threshold, polynomial mutation is applied to the global best position and to each particle's personal best position. Based on the degree to which a particle violates the constraints, a new pairwise comparison criterion is proposed to rank particles; this criterion retains some of the better-performing infeasible solutions. To raise the probability of the swarm flying toward the global optimum, a generalized learning strategy is adopted. Simulation results on classical test functions show that the proposed algorithm is a viable method for solving constrained optimization problems.
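Two of the ingredients above can be sketched in a few lines: a diversity measure over the swarm, and Deb's polynomial mutation applied to a best position once diversity drops below the threshold. This is a minimal illustrative sketch, not the paper's implementation; the function names, the distribution index `eta`, and the per-coordinate mutation probability `pm` are assumptions.

```python
import random

def polynomial_mutation(position, low, high, eta=20.0, pm=0.5):
    """Polynomial mutation: perturb each coordinate with probability pm,
    clamping the result to [low, high]."""
    mutated = list(position)
    for i, x in enumerate(mutated):
        if random.random() >= pm:
            continue
        u = random.random()
        if u < 0.5:
            delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
        else:
            delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
        mutated[i] = min(high, max(low, x + delta * (high - low)))
    return mutated

def swarm_diversity(positions):
    """Mean Euclidean distance of particles from the swarm centroid."""
    dim = len(positions[0])
    centroid = [sum(p[i] for p in positions) / len(positions) for i in range(dim)]
    return sum(
        sum((p[i] - centroid[i]) ** 2 for i in range(dim)) ** 0.5 for p in positions
    ) / len(positions)
```

When `swarm_diversity` falls below the chosen threshold, `polynomial_mutation` would be applied to the global best and personal best positions before the next velocity update.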
2.
A particle swarm optimization algorithm based on dynamic neighbors and a mutation factor. Total citations: 8 (self: 2, others: 8)
A particle swarm optimization algorithm based on dynamic neighbors and a mutation factor (DNMPSO) is proposed. In this algorithm, a particle's neighbors change dynamically as it moves. Each particle's learning mechanism has two parts: its own historical experience and the experience of all its neighbors. To solve multimodal problems effectively, a horizontal hybrid mutation is applied to the current solution at every iteration, enabling each particle to perform better local search and improving its ability to escape local optima. Comparisons with other algorithms show that the proposed algorithm performs best on multimodal problems.
3.
To address the tendency of particle swarm optimization to converge prematurely and its low efficiency on complex optimization problems, a PSO algorithm based on dynamic neighborhoods and adaptive inertia weight is proposed. By defining dynamic neighborhoods and their per-dimension best values, a learning strategy is introduced in which each particle tracks both its personal best and the neighborhood's best value on each dimension, increasing the diversity of learning exemplars and avoiding local convergence. A method for dynamically adjusting the inertia weight based on individual fitness is also proposed to improve search efficiency. The effectiveness of the proposed method is verified on five typical test functions.
4.
5.
A particle swarm optimization algorithm with nonlinearly varying inertia weight. Total citations: 14 (self: 3, others: 14)
The linearly decreasing inertia-weight strategy of the basic particle swarm algorithm is improved by introducing an exponential decrement rate and an iteration threshold: during the optimization iterations, the inertia weight varies nonlinearly with the current iteration number, the exponential decrement rate, and the iteration threshold. Simulation experiments on three representative test functions, compared against the basic PSO and other improved PSO variants, show that the proposed algorithm has clear advantages in search accuracy, convergence speed, and stability.
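A minimal sketch of such a nonlinear inertia-weight schedule. The abstract does not give the exact formula, so the exponential form, the weight bounds, and the default threshold at 80% of the run are illustrative assumptions:

```python
import math

def nonlinear_inertia(t, t_max, w_max=0.9, w_min=0.4, k=4.0, t_switch=None):
    """Inertia weight decreasing nonlinearly with iteration t.

    Before the iteration threshold t_switch the weight decays exponentially
    from w_max toward w_min (k is the exponential decrement rate); after the
    threshold it is held at w_min to favor local refinement."""
    if t_switch is None:
        t_switch = int(0.8 * t_max)  # assumed default threshold
    if t >= t_switch:
        return w_min
    return w_min + (w_max - w_min) * math.exp(-k * t / t_max)
```

The weight thus starts near `w_max` for exploration and decays faster than a linear schedule, which matches the abstract's claim of improved convergence speed.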
6.
Modal transducers can be designed by optimizing the polarity of the electrode which covers the piezoelectric layers bonded to the host structure. This paper is intended as a continuation of our previous work (Donoso and Bellido, J Appl Mech 57:434–441, 2009a) to improve the performance of such piezoelectric devices by simultaneously optimizing the structure layout and the electrode profile. As the host structure is no longer fixed, the typical drawbacks in eigenproblem optimization, such as spurious modes, mode tracking and switching, or repeated eigenvalues, soon appear. Further, our model has the novel feature that both cost and constraints explicitly depend on mode shapes. Moreover, due to the physics of the problem, the appearance of large gray areas is another pitfall to be resolved. Our proposed approach overcomes all these difficulties and yields nearly 0-1 designs that improve on the existing optimal electrode profiles over a homogeneous plate.
7.
This paper presents a novel meta-heuristic algorithm, the dynamic particle swarm optimizer with escaping prey (DPSOEP), for solving constrained non-convex and piecewise optimization problems. In DPSOEP, the particles, developed from two different species, are classified into three types (preys, strong particles, and weak particles) to simulate the hunting and escaping behavior observed in nature. Compared to other variants of the particle swarm optimizer (PSO), the proposed algorithm incorporates an escaping mechanism for the preys to circumvent the problem of local optima, and develops a classification mechanism to cope with different situations in the search space, achieving a good balance between global exploration and local exploitation. Simulation results on thirteen benchmark functions and two practical economic dispatch problems demonstrate the effectiveness and applicability of DPSOEP on non-convex and piecewise optimization problems with linear equality and inequality constraints.
8.
We investigate the runtime of a binary Particle Swarm Optimizer (PSO) for optimizing pseudo-Boolean functions f:{0,1}^n→R. The binary PSO maintains a swarm of particles searching for good solutions. Each particle consists of a current position from {0,1}^n, its own best position, and a velocity vector used in a probabilistic process to update its current position. The velocities of a particle are updated in the direction of its own best position and the position of the best particle in the swarm. We present a lower bound for the time needed to optimize any pseudo-Boolean function with a unique optimum. To prove upper bounds, we transfer a fitness-level argument that is well established for evolutionary algorithms (EAs) to PSO. This method is applied to estimate the expected runtime for the class of unimodal functions. A simple variant of the binary PSO is considered in more detail for the test function OneMax, showing that there the binary PSO is competitive with EAs. An additional experimental comparison reveals further insights.
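The binary PSO described above can be sketched for OneMax. This is a minimal illustrative implementation in the style of Kennedy and Eberhart's binary PSO (velocities squashed through a sigmoid into bit probabilities); the parameter values, velocity clamp, and function name are assumptions, not the paper's exact variant:

```python
import math, random

def binary_pso_onemax(n=20, swarm=10, iters=200, vmax=4.0, seed=1):
    """Minimal binary PSO maximizing OneMax(x) = number of one-bits.

    Each velocity component is clamped to [-vmax, vmax] and mapped through a
    sigmoid to the probability that the corresponding bit is set to 1.
    Returns the best OneMax value found."""
    rng = random.Random(seed)
    f = lambda x: sum(x)
    pos = [[rng.randint(0, 1) for _ in range(n)] for _ in range(swarm)]
    vel = [[0.0] * n for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = max(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(n):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] += 2.0 * r1 * (pbest[i][d] - pos[i][d]) \
                           + 2.0 * r2 * (gbest[d] - pos[i][d])
                vel[i][d] = max(-vmax, min(vmax, vel[i][d]))
                prob = 1.0 / (1.0 + math.exp(-vel[i][d]))
                pos[i][d] = 1 if rng.random() < prob else 0
            if f(pos[i]) > f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) > f(gbest):
                    gbest = pbest[i][:]
    return f(gbest)
```

Because pbest and gbest only ever improve, the returned value is monotone in the number of evaluations, which is what the fitness-level argument in the abstract exploits.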
9.
S.-Z. Zhao, P.N. Suganthan, Quan-Ke Pan, M. Fatih Tasgetiren, Expert Systems with Applications, 2011, 38(4):3735–3742
In this paper, the dynamic multi-swarm particle swarm optimizer (DMS-PSO) is improved by hybridizing it with the harmony search (HS) algorithm; the resulting algorithm is abbreviated DMS-PSO-HS. We present a novel approach that merges the HS algorithm into each sub-swarm of the DMS-PSO, combining the exploration capabilities of the DMS-PSO with the stochastic exploitation of the HS. The whole DMS-PSO population is divided into a large number of small, dynamic sub-swarms, which also serve as individual HS populations. These sub-swarms are regrouped frequently, and information is exchanged among the particles in the whole swarm. The DMS-PSO-HS demonstrates improved performance on multimodal and composition test problems when compared with the DMS-PSO and the HS.
10.
An improved particle swarm optimization algorithm based on a distance-based behavior model is proposed, in which a particle's flight velocity is adjusted according to the region it occupies: in the attraction region, particles accelerate toward the swarm's best position; in the repulsion region, they fly at normal speed. To study the algorithm's performance, several typical high-dimensional nonlinear functions were tested. The results show that, compared with the basic PSO, the improved algorithm has faster convergence and higher accuracy.
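The region-dependent velocity rule above can be sketched as a toy function. The abstract does not specify how the attraction region is defined or how strongly particles accelerate, so the distance threshold `radius`, the `boost` factor, and the function name are illustrative assumptions:

```python
def distance_based_velocity(v, pos, gbest, radius, boost=1.5):
    """Adjust a particle's velocity by region: inside the attraction region
    (within `radius` of the global best) the particle gains an extra pull
    toward gbest; outside it (repulsion region) the velocity is unchanged."""
    dist = sum((p - g) ** 2 for p, g in zip(pos, gbest)) ** 0.5
    if dist <= radius:
        # accelerate toward the best position
        return [vi + boost * (g - p) for vi, p, g in zip(v, pos, gbest)]
    return list(v)
```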
11.
This paper presents a new multi-objective optimization algorithm in which a multi-swarm cooperative strategy is incorporated into particle swarm optimization, called the multi-swarm cooperative multi-objective particle swarm optimizer (MC-MOPSO). This algorithm consists of multiple slave swarms and one master swarm. Each slave swarm is designed to optimize one objective function of the multi-objective problem in order to find all the non-dominated optima of that objective function. To produce a well-distributed Pareto front, the master swarm is developed to cover gaps among non-dominated optima using a local MOPSO algorithm. Moreover, to strengthen the PSO's capability of locating multiple optima, several improved techniques are introduced, such as a Pareto dominance-based species technique and an escape strategy for mature species. The simulation results indicate that our algorithm is highly competitive in solving multi-objective optimization problems.
12.
Particle swarm optimization (PSO) has received increasing interest from the optimization community due to its simplicity of implementation and its inexpensive computational overhead. However, PSO suffers from premature convergence, especially on complex multimodal functions. Extremal optimization (EO) is a recently developed local-search heuristic that has been successfully applied to a wide variety of hard optimization problems. To overcome the limitation of PSO, this paper proposes a novel hybrid algorithm, called the hybrid PSO–EO algorithm, obtained by introducing EO into PSO. The hybrid approach elegantly combines the exploration ability of PSO with the exploitation ability of EO. We evaluate the performance of the proposed approach on a suite of unimodal/multimodal benchmark functions and provide comparisons with other meta-heuristics. The proposed approach is shown to have superior performance and a strong capability of preventing premature convergence, comparing favorably with the other algorithms.
13.
To improve the performance of particle swarm optimization on complex optimization problems, especially high-dimensional ones, an opposition-based learning competitive particle swarm optimizer with Solis & Wets local search is proposed (solis and wets-opposition based learning competitive particle swarm optimizer with local se...
14.
Most real-world applications can be formulated as optimization problems, which commonly suffer from being trapped in local optima. In this paper, we make full use of the global search capability of particle swarm optimization (PSO) and the local search ability of extremal optimization (EO), and propose a gradient-based adaptive PSO with improved EO (called GAPSO-IEO) to overcome the problem of local optima in high-dimensional search and to reduce the time complexity of the algorithm. In the proposed algorithm, the improved EO (IEO) is adaptively incorporated into PSO to prevent the particles from being trapped in local optima, according to the evolutionary states of the swarm, which are estimated from the gradients of the particles' fitness functions. We also improve the mutation strategy of EO by performing polynomial mutation (PLM) on each particle, instead of on each component of a particle; therefore, the algorithm is not sensitive to the dimension of the swarm. The proposed algorithm is tested on several unimodal/multimodal benchmark functions and on the Berkeley Segmentation Dataset and Benchmark (BSDS300). The experimental results show the superiority and efficiency of the proposed approach compared with state-of-the-art algorithms, and it achieves better performance in high-dimensional tasks.
15.
Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. Total citations: 25 (self: 0, others: 25)
A. Ratnaweera, S.K. Halgamuge, H.C. Watson, IEEE Transactions on Evolutionary Computation, 2004, 8(3):240–255
This paper introduces a novel parameter automation strategy for the particle swarm algorithm and two further extensions to improve its performance after a predefined number of generations. Initially, to efficiently control the local search and convergence to the global optimum solution, time-varying acceleration coefficients (TVAC) are introduced in addition to the time-varying inertia-weight factor in particle swarm optimization (PSO). On the basis of TVAC, two new strategies are discussed to improve the performance of the PSO. First, the concept of "mutation" is introduced into particle swarm optimization along with TVAC (MPSO-TVAC): with a predefined probability, a small perturbation is added to a randomly selected modulus of the velocity vector of a random particle. Second, we introduce a novel particle swarm concept, the "self-organizing hierarchical particle swarm optimizer with TVAC (HPSO-TVAC)". Under this method, only the "social" and "cognitive" parts of the particle swarm strategy are considered in estimating the new velocity of each particle, and particles are reinitialized whenever they stagnate in the search space. In addition, to overcome the difficulty of selecting an appropriate mutation step size for different problems, a time-varying mutation step size was introduced. Further, for most of the benchmarks, the performance of the MPSO-TVAC method is found to be insensitive to the mutation probability. The effect of the reinitialization velocity on the performance of the HPSO-TVAC method is also examined; a time-varying reinitialization step size is found to be an efficient parameter optimization strategy for HPSO-TVAC. The HPSO-TVAC strategy outperformed all the methods considered in this investigation for most of the functions. Furthermore, both the MPSO and HPSO strategies were observed to perform poorly when the acceleration coefficients are fixed at two.
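The TVAC schedule itself is a pair of linear ramps: the cognitive coefficient c1 decreases while the social coefficient c2 increases over the run, shifting the swarm from exploration toward exploitation. A minimal sketch (the boundary values 2.5/0.5 follow the ranges commonly quoted for this method, but the function name and defaults here are assumptions):

```python
def tvac(t, t_max, c1_i=2.5, c1_f=0.5, c2_i=0.5, c2_f=2.5):
    """Time-varying acceleration coefficients: linearly interpolate
    c1 from c1_i down to c1_f and c2 from c2_i up to c2_f over t_max
    iterations, returning (c1, c2) for iteration t."""
    frac = t / t_max
    c1 = c1_i + (c1_f - c1_i) * frac
    c2 = c2_i + (c2_f - c2_i) * frac
    return c1, c2
```

These coefficients would replace the fixed values (e.g., c1 = c2 = 2) in the standard velocity update at each iteration.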
16.
An improved memetic algorithm using ring neighborhood topology for constrained optimization. Total citations: 1 (self: 0, others: 1)
Zhenzhou Hu, Xinye Cai, Zhun Fan, Soft Computing: A Fusion of Foundations, Methodologies and Applications, 2014, 18(10):2023–2041
This paper proposes an improved memetic algorithm relying on ring neighborhood topology for constrained optimization problems, based on our previous work in Cai et al. (Soft Comput (in press), 2013). The main motivation for using ring neighborhood topology is to provide a good balance between effective exploration and efficient exploitation, a very important design issue for memetic algorithms. More specifically, a novel variant of invasive weed optimization (IWO) is proposed as the local refinement procedure. The proposed IWO variant adopts a neighborhood-based dispersal operator to achieve more fine-grained local search through the estimation of neighborhood fitness information over the ring neighborhood topology. Furthermore, a modified version of differential evolution (DE), known as "DE/current-to-best/1", is integrated into the improved memetic algorithm to provide more effective exploration. The performance of the improved memetic algorithm has been comprehensively tested on 13 well-known benchmark test functions and four engineering constrained optimization problems. The experimental results show that the improved memetic algorithm is more competitive than the original memetic approach of Cai et al. (Soft Comput (in press), 2013) and other state-of-the-art algorithms. The effectiveness of each modified component of the proposed approach is also discussed in the paper.
17.
V. Kadirkamanathan, K. Selvarajah, P.J. Fleming, IEEE Transactions on Evolutionary Computation, 2006, 10(3):245–255
Previous stability analysis of the particle swarm optimizer was restricted to the assumption that all parameters are nonrandom, in effect a deterministic particle swarm optimizer. We analyze the stability of the particle dynamics without this restrictive assumption using Lyapunov stability analysis and the concept of passive systems. Sufficient conditions for stability are derived, and an illustrative example is given. Simulation results confirm the prediction from theory that stability of the particle dynamics requires increasing the maximum value of the random parameter when the inertia factor is reduced.
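For the deterministic case this abstract starts from, stability can be checked directly: with inertia w and combined acceleration c, the particle dynamics reduce to the second-order recurrence x_{t+1} = (1 + w - c) x_t - w x_{t-1} + c p, stable iff both characteristic roots lie inside the unit circle (the classical condition |w| < 1, 0 < c < 2(1 + w)). A sketch under those standard definitions (the function name is an assumption):

```python
def deterministic_pso_stable(w, c):
    """Return True iff the deterministic PSO recurrence
    x_{t+1} = (1 + w - c) x_t - w x_{t-1} + c p is stable,
    i.e. both roots of z^2 - (1 + w - c) z + w = 0 lie in the unit circle."""
    a, b = 1 + w - c, -w          # z^2 - a z - b = 0
    disc = a * a + 4 * b
    if disc >= 0:
        r1 = (a + disc ** 0.5) / 2
        r2 = (a - disc ** 0.5) / 2
        return max(abs(r1), abs(r2)) < 1
    # complex-conjugate roots: |root|^2 equals the product of roots, w
    return abs(b) < 1
```

The paper's point is that once the parameters are random, this eigenvalue check no longer suffices, which is why the authors turn to Lyapunov and passivity arguments.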
18.
This article describes an evolutionary image filter design for noise reduction using particle swarm optimization (PSO), in which mixed constraints on circuit complexity, power, and signal delay are optimized. First, the evaluated values of correctness, complexity, power, and signal delay are introduced into the fitness function. Then PSO autonomously synthesizes a filter. To verify the validity of our method, an image filter for noise reduction was synthesized. The performance of the filter produced by PSO was similar to that of a genetic algorithm (GA), but the running time of PSO was 10% shorter than that of GA.
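Folding the four evaluated values into a single fitness function can be as simple as a weighted sum that rewards correctness and penalizes the three cost terms. The weights below are illustrative assumptions, not the paper's values:

```python
def filter_fitness(correctness, complexity, power, delay,
                   weights=(1.0, 0.1, 0.1, 0.1)):
    """Scalar fitness for an evolved filter: reward correctness, penalize
    circuit complexity, power, and signal delay (weights are assumed)."""
    w_c, w_x, w_p, w_d = weights
    return w_c * correctness - w_x * complexity - w_p * power - w_d * delay
```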
19.
In this article, we present a multi-objective discrete particle swarm optimizer (DPSO) for learning dynamic Bayesian network (DBN) structures. The proposed method introduces a hierarchical structure consisting of DPSOs and a multi-objective genetic algorithm (MOGA). Groups of DPSOs find effective DBN sub-network structures, and a group of MOGAs finds the whole DBN network structure. Through numerical simulations, the proposed method finds more effective DBN structures, and obtains them faster, than the conventional method.
20.
A parallel particle swarm clustering algorithm with adaptive inertia weight. Total citations: 2 (self: 2, others: 2)
To address shortcomings of the K-means clustering algorithm and of genetic algorithm (GA)-based clustering, and given that particle swarm optimization outperforms genetic algorithms on real-valued optimization problems, a parallel particle swarm clustering algorithm with adaptive inertia weight is proposed. Theoretical analysis and experiments show that the algorithm clearly outperforms GA-based clustering in both convergence speed and convergence accuracy.
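The abstract does not give the adaptive rule, but a common fitness-based scheme assigns particles that are better than the swarm average a smaller inertia weight (fine local search) and keeps the maximum weight for worse particles (exploration). A minimal sketch under that assumption, for a minimization setting:

```python
def adaptive_inertia(f_i, f_avg, f_min, w_max=0.9, w_min=0.4):
    """Fitness-adaptive inertia weight (minimization): particles at or
    below the swarm-average fitness get a weight interpolated between
    w_min (at the swarm best) and w_max (at the average); worse particles
    keep w_max. The formula is an assumed common scheme, not the paper's."""
    if f_i <= f_avg and f_avg > f_min:
        return w_min + (w_max - w_min) * (f_i - f_min) / (f_avg - f_min)
    return w_max
```

Each particle would evaluate this per iteration with its own fitness and the swarm's current average and best fitness before its velocity update.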