Similar documents
20 similar documents found (search time: 31 ms)
1.
Improved artificial bee colony algorithm for global optimization
The artificial bee colony (ABC) algorithm is a relatively new optimization technique. This paper presents an improved artificial bee colony (IABC) algorithm for global optimization. Inspired by differential evolution (DE) and introducing a parameter M, we propose two improved solution search equations, namely “ABC/best/1” and “ABC/rand/1”. Then, to exploit the advantages of both equations while avoiding their shortcomings, we use a selective probability p to control how often “ABC/rand/1” and “ABC/best/1” are applied, yielding a new search mechanism. In addition, to enhance the global convergence speed, both chaotic systems and opposition-based learning are employed when producing the initial population. Experiments are conducted on a suite of unimodal and multimodal benchmark functions. The results demonstrate the good performance of the IABC algorithm in solving complex numerical optimization problems when compared with thirteen recent algorithms.
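A minimal sketch of the hybrid search mechanism described above, assuming the two equations follow their DE namesakes (DE/best/1 and DE/rand/1) with the scaling factor drawn uniformly from [-M, M]; the exact forms and parameter settings in the paper may differ.

```python
import random

def abc_best_1(pop, best, i, M=1.0):
    """Candidate via an 'ABC/best/1'-style equation: v = x_best + phi * (x_r1 - x_r2)."""
    r1, r2 = random.sample([k for k in range(len(pop)) if k != i], 2)
    phi = random.uniform(-M, M)
    return [xb + phi * (a - b) for xb, a, b in zip(best, pop[r1], pop[r2])]

def abc_rand_1(pop, i, M=1.0):
    """Candidate via an 'ABC/rand/1'-style equation: v = x_r1 + phi * (x_r2 - x_r3)."""
    r1, r2, r3 = random.sample([k for k in range(len(pop)) if k != i], 3)
    phi = random.uniform(-M, M)
    return [a + phi * (b - c) for a, b, c in zip(pop[r1], pop[r2], pop[r3])]

def hybrid_search(pop, best, i, p=0.5, M=1.0):
    """The selective probability p decides which equation generates the candidate."""
    return abc_best_1(pop, best, i, M) if random.random() < p else abc_rand_1(pop, i, M)
```

With p close to 1 the search leans on the best-so-far solution (exploitation); with p close to 0 it behaves like a purely random-base search (exploration).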

2.
This paper proposes a new battery swapping station (BSS) model to determine the optimized charging scheme for each incoming Electric Vehicle (EV) battery. The objective is to maximize the BSS’s battery stock level and minimize the average charging damage with the use of different types of chargers. An integrated objective function is defined for the multi-objective optimization problem. The genetic algorithm (GA), differential evolution (DE) algorithm and three versions of particle swarm optimization (PSO) algorithms have been implemented to solve the problem, and the results show that GA and DE perform better than the PSO algorithms, but the computational times of GA and DE are longer than those of PSO. Hence, the varied population genetic algorithm (VPGA) and the varied population differential evolution (VPDE) algorithm are proposed to determine the optimal solution and reduce the computational time of typical evolutionary algorithms. The simulation results show that the performance of the proposed algorithms is comparable with that of the typical GA and DE, but the computational times of the VPGA and VPDE are significantly shorter. A 24-h simulation study is carried out to examine the feasibility of the model.
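One way a "varied population" evolutionary algorithm can cut computation is by shrinking the population as the run progresses. The sketch below is a plausible reading only: the linear shrink schedule, truncation survival, and the operators are illustrative assumptions, not the paper's actual VPGA design.

```python
import random

def varied_population_ga(fitness, dim, n0=60, n_min=10, gens=40, bounds=(0.0, 1.0)):
    """GA sketch (minimization) whose population shrinks linearly from n0 to n_min,
    spending fewer fitness evaluations in later generations."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n0)]
    for g in range(gens):
        size = max(n_min, n0 - (n0 - n_min) * g // gens)   # shrinking population size
        pop.sort(key=fitness)
        parents = pop[:max(2, size // 2)]                  # keep the better half
        children = []
        while len(parents) + len(children) < size:         # refill with offspring
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, dim) if dim > 1 else 0
            child = a[:cut] + b[cut:]                      # one-point crossover
            j = random.randrange(dim)                      # Gaussian mutation, clamped
            child[j] = min(hi, max(lo, child[j] + random.gauss(0, 0.1)))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)
```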

3.
Inspired by the successful application of evolutionary algorithms to difficult optimization problems, we explore in this paper the applicability of genetic algorithms (GAs) to the cover printing problem, which consists in grouping book covers on offset plates in order to minimize the total production cost. We combine GAs with a linear programming solver and propose some innovative features, such as the “unfixed two-point crossover operator” and “binary stochastic sampling with replacement” for selection. Two approaches are proposed: an adapted genetic algorithm and a multiobjective genetic algorithm using the Pareto fitness genetic algorithm. The resulting solutions are compared. Computational experiments are also reported that analyze the effects of different genetic operators on both algorithms.
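Sketches of the two named operators, under stated assumptions: "unfixed" is read here as the two cut points being drawn anew for every mating, and "binary stochastic sampling with replacement" as a binary tournament with replacement; the paper's operators may differ in detail.

```python
import random

def unfixed_two_point_crossover(p1, p2, rng=random):
    """Two-point crossover where both cut points are re-drawn for each mating
    (one plausible reading of 'unfixed')."""
    assert len(p1) == len(p2) and len(p1) >= 2
    i, j = sorted(rng.sample(range(len(p1) + 1), 2))
    c1 = p1[:i] + p2[i:j] + p1[j:]          # swap the middle segments
    c2 = p2[:i] + p1[i:j] + p2[j:]
    return c1, c2

def stochastic_sampling_with_replacement(pop, fitness, n, rng=random):
    """Binary-tournament reading of the selection scheme: repeatedly draw two
    individuals uniformly with replacement and keep the fitter (smaller fitness)."""
    return [min(rng.choices(pop, k=2), key=fitness) for _ in range(n)]
```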

4.
This research builds on prior work on developing near-optimal solutions to product line design problems within the conjoint analysis framework. We investigate and compare different genetic algorithm operators; in particular, we systematically examine the impact of alternative population maintenance strategies and mutation techniques within our problem context. Two alternative population maintenance methods, which we term the “Emigration” and “Malthusian” strategies, govern how individual product lines in one generation are carried over to the next. We also allow for two types of reproduction methods: “Equal Opportunity”, in which the parents to be paired for mating are selected with equal probability, and “Queen Bee”, in which the best string in the current generation is always chosen as one parent while the other parent is randomly selected from the set of parent strings. We also examine the impact of integrating this artificial intelligence approach with a traditional optimization approach by seeding the GA with solutions obtained from a dynamic programming heuristic proposed by others. A detailed statistical analysis, by means of a Monte Carlo simulation study, determines the impact of various problem and technique aspects on multiple measures of performance. Our results indicate that the proposed procedures provide multiple “good” solutions, giving decision makers the flexibility to select from a number of very good product lines. The results obtained using our approaches are encouraging, with statistically significant improvements averaging 5% or more compared with the traditional benchmark of the heuristic dynamic programming technique.
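The two mating schemes contrasted above can be sketched in a few lines. This is a minimal illustration (maximization assumed); the function names and the pair-list representation are ours, not the paper's.

```python
import random

def queen_bee_pairs(pop, fitness, n_pairs, rng=random):
    """'Queen Bee' mating: the best individual is always one parent,
    the other parent is drawn at random from the population."""
    queen = max(pop, key=fitness)
    return [(queen, rng.choice(pop)) for _ in range(n_pairs)]

def equal_opportunity_pairs(pop, n_pairs, rng=random):
    """'Equal Opportunity' mating: both parents drawn uniformly at random."""
    return [(rng.choice(pop), rng.choice(pop)) for _ in range(n_pairs)]
```

The Queen Bee scheme concentrates selection pressure on the incumbent best string, which speeds convergence but risks diversity loss, the trade-off the population maintenance strategies are meant to manage.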

5.
A hybrid algorithm based on PSO and GA
Combining the strengths of the PSO and GA algorithms, a novel PSO-GA hybrid algorithm (PGHA) is proposed. The hybrid algorithm uses the velocity and position update rules of PSO and introduces the selection, crossover, and mutation ideas of GA. Experiments on four standard benchmark functions, with comparison against the standard PSO algorithm, show that the hybrid algorithm achieves better performance.
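A sketch of one iteration of such a hybrid: a standard PSO velocity/position update followed by GA-style crossover and mutation. The constants (w, c1, c2, pm) and the arithmetic-blend crossover are illustrative assumptions, not the paper's settings.

```python
import random

def pgha_step(swarm, vel, pbest, gbest, fitness, w=0.7, c1=1.5, c2=1.5, pm=0.1):
    """One iteration of a PSO-GA hybrid sketch (minimization)."""
    dim = len(swarm[0])
    # PSO velocity and position update
    for i, x in enumerate(swarm):
        for d in range(dim):
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - x[d])
                         + c2 * random.random() * (gbest[d] - x[d]))
            x[d] += vel[i][d]
    # GA-style crossover: arithmetic blend of two random particles
    a, b = random.sample(range(len(swarm)), 2)
    alpha = random.random()
    swarm[a] = [alpha * u + (1 - alpha) * v for u, v in zip(swarm[a], swarm[b])]
    # GA-style mutation: occasional Gaussian nudge of one coordinate
    for x in swarm:
        if random.random() < pm:
            x[random.randrange(dim)] += random.gauss(0, 0.1)
    # update personal and global bests
    for i, x in enumerate(swarm):
        if fitness(x) < fitness(pbest[i]):
            pbest[i] = x[:]
    gbest = min(pbest, key=fitness)
    return swarm, vel, pbest, gbest
```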

6.
The decomposition method is currently one of the major methods for solving the convex quadratic optimization problems associated with Support Vector Machines (SVM-optimization). A key issue in this approach is the policy for working set selection. We would like to find policies that realize (as well as possible) three goals simultaneously: “(fast) convergence to an optimal solution”, “efficient procedures for working set selection”, and “a high degree of generality” (including typical variants of SVM-optimization as special cases). In this paper, we study a general policy for working set selection that was proposed in [Nikolas List, Hans Ulrich Simon, A general convergence theorem for the decomposition method, in: Proceedings of the 17th Annual Conference on Computational Learning Theory, 2004, pp. 363–377] and further analyzed in [Nikolas List, Hans Ulrich Simon, General polynomial time decomposition algorithms, in: Proceedings of the 18th Annual Conference on Computational Learning Theory, 2005, pp. 308–322]. It is known to efficiently approach feasible solutions with minimum cost for any convex quadratic optimization problem. Here, we investigate its computational complexity when it is used for SVM-optimization. It turns out that, for a variable size of the working set, the general policy poses an NP-hard working set selection problem. But a slight variation of it (sharing the convergence properties of the original policy) can be solved in polynomial time. For working sets of fixed size 2, the situation is even better: the general policy coincides with the “rate certifying pair approach” introduced by Hush and Scovel. We show that maximum rate certifying pairs can be found in linear time, which leads to a quite efficient decomposition method with a polynomial convergence rate for SVM-optimization.
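For context, a linear-time size-2 working set selection can be sketched with the classical SMO-style "maximal violating pair" rule, which is related to (but not identical with) the rate-certifying-pair rule discussed above. The variable names (grad = gradient of the dual objective, y = labels, alpha = dual variables, C = box bound) are ours.

```python
def most_violating_pair(grad, y, alpha, C, tol=1e-6):
    """Linear-time sketch of size-2 working set selection for the SVM dual:
    pick i maximizing -y_i * grad_i over indices that can move 'up' and
    j minimizing it over indices that can move 'down'; stop when the gap
    is below tol (approximate KKT optimality)."""
    i_up = i_low = None
    up_val, low_val = float('-inf'), float('inf')
    for t in range(len(alpha)):
        v = -y[t] * grad[t]
        # t is in I_up if alpha_t can increase along its feasible direction
        if (y[t] > 0 and alpha[t] < C) or (y[t] < 0 and alpha[t] > 0):
            if v > up_val:
                up_val, i_up = v, t
        # t is in I_low if alpha_t can decrease along its feasible direction
        if (y[t] > 0 and alpha[t] > 0) or (y[t] < 0 and alpha[t] < C):
            if v < low_val:
                low_val, i_low = v, t
    return (i_up, i_low) if up_val - low_val > tol else None
```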

7.
Evolutionary crystal structure prediction has proved to be a powerful approach for discovering new materials. Certain limitations are encountered for systems with a large number of degrees of freedom (“large systems”) and complex energy landscapes (“complex systems”). We explore the nature of these limitations and address them with a number of newly developed tools. For large systems a major problem is the lack of diversity: any randomly produced population consists predominantly of high-energy disordered structures, offering virtually no routes toward the ordered ground state. We offer two solutions: first, modified variation operators that favor atoms with higher local order (a function we introduce here); second, construction of the first generation non-randomly, using pseudo-subcells with, in general, fractional atomic occupancies. This enhances order and diversity and improves the energies of the structures. We introduce an additional variation operator, coordinate mutation, which applies preferentially to low-order (“badly placed”) atoms. Biasing other variation operators by local order is also found to produce improved results. One promising version of coordinate mutation, explored here, displaces atoms along the eigenvector of the lowest-frequency vibrational mode. For complex energy landscapes, the key problem is the possible existence of several energy funnels: in this situation it is possible to get trapped in one funnel (not necessarily the one containing the ground state). To address this problem, we develop an algorithm incorporating the idea of an abstract “distance” between structures. These new ingredients improve the performance of the evolutionary algorithm USPEX, in terms of efficiency and reliability, for large and complex systems.

8.
It is widely assumed, and observed in experiments, that the use of diversity mechanisms in evolutionary algorithms can have a great impact on their running time. Up to now there has been no rigorous analysis pointing out how different diversity mechanisms influence runtime behavior. We consider evolutionary algorithms that differ only in the way they ensure diversity and point out situations where the right mechanism is crucial for the success of the algorithm. The considered evolutionary algorithms diversify the population either with respect to the search points or with respect to function values. Investigating simple plateau functions, we show that using the “right” diversity strategy makes the difference between an exponential and a polynomial runtime. We then examine how the drawback of the “wrong” diversity mechanism can be compensated for by increasing the population size.
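The two diversity notions contrasted above (search-point diversity versus function-value diversity) reduce to a simple acceptance test when a new individual is offered to the population. A minimal sketch; the surrounding (mu+1)-EA loop and the function names are ours.

```python
def diversify(pop, candidate, fitness, mode="genotype"):
    """Accept a candidate into the population only if it is 'new':
    mode='genotype' rejects duplicate search points,
    mode='fitness'  rejects duplicate function values."""
    if mode == "genotype":
        return candidate not in pop
    return fitness(candidate) not in {fitness(x) for x in pop}
```

On a plateau, genotype diversity lets the population spread across equal-fitness points, while fitness diversity collapses the whole plateau to a single admissible individual, which is exactly the kind of distinction that separates polynomial from exponential runtimes in the analysis.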

9.
We continue the study of priority or “greedy-like” algorithms as initiated in Borodin et al. (2003) [10] and as extended to graph theoretic problems in Davis and Impagliazzo (2009) [12]. Graph theoretic problems pose some modeling problems that did not exist in the original applications of Borodin et al. and Angelopoulos and Borodin (2002) [3]. Following the work of Davis and Impagliazzo, we further clarify these concepts. In the graph theoretic setting, there are several natural input formulations for a given problem and we show that priority algorithm bounds in general depend on the input formulation. We study a variety of graph problems in the context of arbitrary and restricted priority models corresponding to known “greedy algorithms”.

10.
This paper proposes a novel and unconventional Memetic Computing approach for solving continuous optimization problems under memory limitations. Rather than employing an explorative evolutionary framework with a set of local search algorithms, the proposed algorithm employs multiple exploitative search operators within the main framework and performs a multiple-step global search by means of a randomized perturbation of the virtual population, corresponding to a periodic randomization of the search for the exploitative operators. The approach is based on a population-less (compact) evolutionary framework which, instead of processing a population of solutions, handles its statistical model. This framework is based on Differential Evolution and cooperatively employs two exploitative search operators: the first is a standard Differential Evolution mutation with exponential crossover, and the second is the trigonometric mutation. These two operators have an exploitative action on the algorithmic framework and thus contribute to the rapid convergence of the virtual population towards promising candidate solutions. Their action is counterbalanced by a periodic stochastic perturbation of the virtual population, whose role is to “disturb” the excessively exploitative action of the framework and thus inhibit premature convergence. The proposed algorithm, namely Disturbed Exploitation compact Differential Evolution, is a simple and memory-cheap structure that uses the Memetic Computing paradigm to solve complex optimization problems. The approach has been tested on a set of various test problems and compared with state-of-the-art compact algorithms and with some modern population-based meta-heuristics.
Numerical results show that Disturbed Exploitation compact Differential Evolution significantly outperforms the other compact algorithms in the literature and reaches competitive performance with respect to modern population-based algorithms, including some memetic approaches and complex modern Differential Evolution based algorithms. To show the potential of the proposed approach in real-world applications, Disturbed Exploitation compact Differential Evolution has been implemented to control a space robot, simulating the implementation within the robot's micro-controller. Numerical results show the superiority of the proposed algorithm with respect to other modern compact algorithms in the literature.
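The compact (virtual-population) mechanism at the heart of this family of algorithms can be sketched as follows, using the usual real-valued compact-EA model update and a plain DE/rand/1 mutation. The paper's exponential crossover, trigonometric mutation, and periodic disturbance are omitted here, and all parameter values are illustrative.

```python
import random

def compact_de(fitness, dim, iters=1500, np_virtual=50, F=0.5, seed=None):
    """Compact DE sketch (minimization): the population is replaced by a
    per-dimension Gaussian model (mu, sigma). Trial solutions are sampled
    from the model, and the model is shifted toward the winner of each
    elite-vs-trial comparison, so memory use is O(dim), not O(pop * dim)."""
    rng = random.Random(seed)
    mu = [0.0] * dim
    sigma = [1.0] * dim
    sample = lambda: [rng.gauss(mu[d], sigma[d]) for d in range(dim)]
    elite = sample()
    for _ in range(iters):
        xr, xs, xt = sample(), sample(), sample()
        trial = [a + F * (b - c) for a, b, c in zip(xr, xs, xt)]   # DE/rand/1 mutation
        winner, loser = (trial, elite) if fitness(trial) < fitness(elite) else (elite, trial)
        for d in range(dim):   # shift the virtual population toward the winner
            old_mu = mu[d]
            mu[d] += (winner[d] - loser[d]) / np_virtual
            var = sigma[d] ** 2 + old_mu ** 2 - mu[d] ** 2 \
                  + (winner[d] ** 2 - loser[d] ** 2) / np_virtual
            sigma[d] = max(var, 1e-12) ** 0.5
        elite = winner
    return elite
```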

11.
One of the main reasons for using parallel evolutionary algorithms (PEAs) is to obtain efficient algorithms with an execution time much lower than that of their sequential counterparts in order, e.g., to tackle more complex problems. This naturally leads to measuring the speedup of the PEA. PEAs have sometimes been reported to provide super-linear performance for different problems, parameterizations, and machines. Super-linear speedup means that using “m” processors leads to an algorithm that runs more than “m” times faster than the sequential version. However, reporting super-linear speedup is controversial, especially for the “traditional” research community, since some non-orthodox practices could be thought to be the cause of this result. Therefore, we begin by offering a taxonomy for speedup, in order to clarify what is being measured, and we analyze the sources of such a scenario. Finally, we study an assorted set of results. Our conclusion is that super-linear performance is possible for PEAs, theoretically and in practice, on both homogeneous and heterogeneous parallel machines.

12.
In their paper “Tight bound on Johnson's algorithm for maximum satisfiability” [J. Comput. System Sci. 58 (3) (1999) 622-640] Chen, Friesen and Zheng provided a tight bound on the approximation ratio of Johnson's algorithm for Maximum Satisfiability [J. Comput. System Sci. 9 (3) (1974) 256-278]. We give a simplified proof of their result and investigate to what extent it may be generalized to non-Boolean domains.
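For readers unfamiliar with it, Johnson's MAX-SAT algorithm fixes variables greedily, choosing for each variable the value with the larger total "modified weight" sum of 2^(-|C|) over the still-unsatisfied clauses C that value would satisfy. A sketch for unweighted clauses (clause representation and function name are ours):

```python
def johnson_maxsat(n_vars, clauses):
    """Johnson's greedy MAX-SAT algorithm (sketch, unit clause weights).
    Clauses are lists of nonzero ints: literal v means x_v, -v means not x_v."""
    live = [set(c) for c in clauses]          # unsatisfied clauses, shrinking as we fix vars
    assignment = {}
    for v in range(1, n_vars + 1):
        pos = sum(2.0 ** -len(c) for c in live if v in c)    # gain if x_v = True
        neg = sum(2.0 ** -len(c) for c in live if -v in c)   # gain if x_v = False
        val = pos >= neg
        assignment[v] = val
        sat_lit, unsat_lit = (v, -v) if val else (-v, v)
        live = [c - {unsat_lit} for c in live if sat_lit not in c]
        live = [c for c in live if c]         # a clause emptied of literals is falsified
    satisfied = sum(1 for c in clauses
                    if any((l > 0) == assignment[abs(l)] for l in c))
    return assignment, satisfied
```

The tight bound shown by Chen, Friesen and Zheng concerns the worst-case ratio of `satisfied` to the optimum over all formulas.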

13.
In this paper, we describe a granular algorithm for translating information between two granular worlds, represented as fuzzy rulebases. These granular worlds are defined on the same universe of discourse, but employ different granulations of this universe. In order to translate information from one granular world to the other, we must regranulate the information so that it matches the information granularity of the target world. This is accomplished through the use of a first-order interpolation algorithm, implemented using linguistic arithmetic, a set of elementary granular computing operations. We first demonstrate this algorithm by studying the common “fuzzy-PD” rulebase at several different granularities, and conclude that the “3 × 3” granulation may be too coarse for this objective. We then examine the question of what the “natural” granularity of a system might be; this is studied through a 10-fold cross-validation experiment involving three different granulations of the same underlying mapping. For the problem under consideration, we find that a 7 × 7 granulation appears to be the minimum necessary precision.

14.
In this paper, a novel parametric and global image histogram thresholding method is presented. It is based on the estimation of the statistical parameters of “object” and “background” classes by the expectation-maximization (EM) algorithm, under the assumption that these two classes follow a generalized Gaussian (GG) distribution. The adoption of such a statistical model as an alternative to the more common Gaussian model is motivated by its attractive capability to approximate a broad variety of statistical behaviors with a small number of parameters. Since the quality of the solution provided by the iterative EM algorithm is strongly affected by initial conditions (which, if inappropriately set, may lead to unreliable estimation), a robust initialization strategy based on genetic algorithms (GAs) is proposed. Experimental results obtained on simulated and real images confirm the effectiveness of the proposed method.
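A simplified sketch of the EM core: two plain Gaussian classes instead of the paper's generalized Gaussians (whose extra shape parameter is omitted), and a range-based initialization in place of the GA initialization the paper proposes. A threshold would then be placed between the two returned class means.

```python
import math

def em_threshold(pixels, iters=30):
    """EM for a two-component Gaussian mixture on gray levels (sketch).
    Returns the two class means, sorted (object / background)."""
    lo, hi = min(pixels), max(pixels)
    mu = [lo + 0.25 * (hi - lo), lo + 0.75 * (hi - lo)]   # crude init, not GA-based
    var = [((hi - lo) / 4.0) ** 2 + 1e-9] * 2
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each class for each pixel
        resp = []
        for x in pixels:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = (p[0] + p[1]) or 1e-300
            resp.append((p[0] / s, p[1] / s))
        # M-step: re-estimate weights, means, variances
        for k in range(2):
            nk = sum(r[k] for r in resp) or 1e-12
            pi[k] = nk / len(pixels)
            mu[k] = sum(r[k] * x for r, x in zip(resp, pixels)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, pixels)) / nk + 1e-9
    return sorted(mu)
```

The sensitivity to `mu`'s starting values in this sketch is precisely the initialization problem the paper's GA strategy addresses.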

15.
This paper proposes a novel PSO algorithm, referred to as SFIPSO (scale-free fully informed particle swarm optimization). A modified Barabási-Albert (BA) model [4] is used as a self-organizing construction mechanism to adaptively generate a population topology exhibiting the scale-free property. The swarm is divided into two subpopulations: active particles and inactive particles. The former fly around the solution space to find the global optima, whereas the latter are iteratively activated by the active particles via attachment to them, according to their degrees, fitness values, and spatial positions. The topology is therefore generated gradually, with the construction process and the optimization process progressing synchronously. Moreover, the cognitive and social effects on the variance of a particle’s velocity vector are distributed according to its “contextual fitness” value, and the social effect is further distributed via a time-varying weighted fully informed mechanism originating from [27]. Comparative experiments on eight benchmark test functions show that the scale-free topology construction mechanism and the weighted fully informed learning strategy provide the swarm with stronger diversity during convergence. As a result, SFIPSO obtained a success rate of 100% on all eight test functions. Furthermore, SFIPSO also yielded good-quality solutions, especially on multimodal test functions. We further test the network properties of the generated population topology. The results show that (1) the degree distribution of the topology follows a power law, i.e., it exhibits the scale-free property, and (2) the topology exhibits the “disassortative mixing” property, which can be interpreted as an important condition for the reinforcement of population diversity.
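The standard BA preferential-attachment process that SFIPSO builds on can be sketched as follows. The paper's modification additionally weights attachment by fitness and spatial position; those terms are omitted here.

```python
import random

def barabasi_albert(n, m=2, rng=random):
    """Standard Barabási-Albert model sketch: each new node attaches to m
    existing nodes chosen with probability proportional to their degree.
    Returns the edge list; degree distribution follows a power law."""
    repeated = []                 # node list in which each node appears once per incident edge
    targets = list(range(m))      # the first m nodes
    edges = []
    for source in range(m, n):
        edges.extend((source, t) for t in targets)
        repeated.extend(targets)
        repeated.extend([source] * m)
        # preferential choice of m distinct targets for the next node:
        # sampling uniformly from 'repeated' is sampling proportional to degree
        targets = []
        while len(targets) < m:
            t = rng.choice(repeated)
            if t not in targets:
                targets.append(t)
    return edges
```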

16.
Using exhaustive computer search we show that sorting 15 elements requires 42 comparisons, and that for n &lt; 47 there is no algorithm of the following form: “m and n − m elements are sorted using the Ford-Johnson algorithm first, then the sorted sequences are merged”, whose total number of comparisons is smaller than the number of comparisons used by the Ford-Johnson algorithm to sort n elements directly.
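To put the 42-comparison result in context: the information-theoretic lower bound ceil(log2(n!)) gives only 41 for n = 15, so the exhaustive search closes a gap that counting arguments cannot. The Ford-Johnson (merge-insertion) comparison count has the known closed form F(n) = sum over k of ceil(log2(3k/4)).

```python
import math

def info_theoretic_bound(n):
    """ceil(log2(n!)): minimum worst-case comparisons any sorting algorithm needs,
    since each comparison can at best halve the set of consistent permutations."""
    return math.ceil(math.log2(math.factorial(n)))

def ford_johnson_count(n):
    """Worst-case comparisons of the Ford-Johnson (merge-insertion) algorithm:
    F(n) = sum_{k=1..n} ceil(log2(3k/4))."""
    return sum(math.ceil(math.log2(3 * k / 4)) for k in range(1, n + 1))
```

For n = 15 the two functions give 41 and 42 respectively, so the search result that 42 comparisons are necessary shows Ford-Johnson is exactly optimal there, one above the counting bound.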

17.
Proximity searches become very difficult on “high dimensional” metric spaces, that is, those whose histogram of distances has a large mean and/or a small variance. This so-called “curse of dimensionality”, well known in vector spaces, is also observed in metric spaces. The search complexity grows sharply with the dimension and with the search radius. We present a general probabilistic framework applicable to any search algorithm and whose net effect is to reduce the search radius. The higher the dimension, the more effective the technique. We illustrate empirically its practical performance on a particular class of algorithms, where large improvements in the search time are obtained at the cost of a very small error probability.  
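The "large mean and/or small variance" criterion is often condensed into the intrinsic dimensionality estimate rho = mu^2 / (2 sigma^2) computed from a sample of pairwise distances, as in Chávez et al.'s line of work on metric-space searching. A sampling sketch (function name and sampling scheme are ours):

```python
import random
import statistics

def intrinsic_dimensionality(points, dist, samples=1500, rng=random):
    """Estimate rho = mu^2 / (2 sigma^2) from sampled pairwise distances.
    Larger rho (large mean and/or small variance of the distance histogram)
    signals a 'higher-dimensional', harder-to-search metric space."""
    ds = []
    for _ in range(samples):
        a, b = rng.sample(points, 2)
        ds.append(dist(a, b))
    mu = statistics.fmean(ds)
    var = statistics.pvariance(ds)
    return mu * mu / (2 * var) if var > 0 else float('inf')
```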

18.
Lu Zhiying, Lin Lichen, Pang Yong. Computer Simulation (《计算机仿真》), 2006, 23(1): 96-99, 179
This paper analyzes premature convergence, a known defect of the simple genetic algorithm (SGA), and on that basis proposes a varied-parameter genetic algorithm based on population diversity (VPGA). The algorithm analyzes, from a probabilistic perspective, the effects of the genetic operators on the search range and on diversity, and automatically adjusts the parameters of the genetic algorithm according to the population's diversity in order to suppress premature convergence. Both genetic algorithms were simulated on four well-known test functions used to evaluate genetic algorithm performance; the simulation results demonstrate the superiority of the proposed algorithm over the basic genetic algorithm and its effectiveness in suppressing premature convergence.
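The idea of steering GA parameters by population diversity can be sketched as below. Both the diversity measure (mean per-gene standard deviation) and the linear adaptation law are illustrative assumptions; the paper derives its adjustment from a probabilistic analysis of the operators.

```python
import statistics

def diversity(pop):
    """Population diversity sketch: mean per-gene standard deviation
    of a real-coded population."""
    dim = len(pop[0])
    return statistics.fmean(statistics.pstdev(x[d] for x in pop) for d in range(dim))

def adapt_rates(div, div0, pc=(0.6, 0.95), pm=(0.01, 0.2)):
    """As diversity falls relative to its initial value div0, lower the
    crossover rate toward pc[0] and raise the mutation rate toward pm[1],
    counteracting premature convergence."""
    r = max(0.0, min(1.0, div / div0)) if div0 > 0 else 0.0
    return pc[0] + (pc[1] - pc[0]) * r, pm[1] - (pm[1] - pm[0]) * r
```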

19.
We give a randomized algorithm (the “Wedge Algorithm”) for any metrical task system on a uniform space of k points, for any k ≥ 2, with competitiveness expressed in terms of H_k, the kth harmonic number. This algorithm has better competitiveness than the Irani-Seiden algorithm if k is smaller than 10^8. The algorithm is better by a factor of 2 if k &lt; 47.

20.