Similar Documents
20 similar documents found (search time: 109 ms)
1.
Hill climbing is an algorithm with very good local search ability, mainly because it guides the search using information about the relative quality of individuals. The traditional genetic algorithm, by contrast, is a global search method that ignores information between individuals during the search and relies only on individual fitness to guide it, which limits its convergence. This paper applies an oriented hill-climbing mechanism to the genetic algorithm and proposes an oriented-hill-climbing genetic algorithm (OHCGA). The algorithm combines the advantages of hill climbing and genetic algorithms: by comparing the quality of individuals, an oriented hill-climbing operator steers the search toward more promising solution regions. Experimental results show that, compared with the traditional genetic algorithm (TGA), OHCGA greatly improves convergence speed and the ability to find optimal solutions.
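As a rough illustration of the kind of directed hill-climbing step this abstract describes (the function name, step size, and trial count below are illustrative placeholders, not the paper's actual operator), consider:

```python
import random

def directed_hill_climb(x, fitness, step=0.1, trials=10):
    """One directed pass: probe random perturbations and keep
    only the moves that improve fitness (higher is better)."""
    best, best_fit = list(x), fitness(x)
    for _ in range(trials):
        cand = [xi + random.uniform(-step, step) for xi in best]
        cand_fit = fitness(cand)
        if cand_fit > best_fit:        # move only toward better regions
            best, best_fit = cand, cand_fit
    return best, best_fit

# Maximize -(x^2 + y^2); the optimum is at the origin.
random.seed(0)
point, value = [1.0, -1.0], None
for _ in range(200):
    point, value = directed_hill_climb(point, lambda p: -sum(v * v for v in p))
```

Because only improving moves are accepted, the trajectory is monotonically non-worsening, which is the "use individual quality to guide the search" property the abstract contrasts with plain fitness-proportional selection.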

2.
A new genetic algorithm based on Lamarckian learning and an elitism strategy is presented. A Lamarckian learning rule and a Lamarckian GA framework are designed, a mathematical proof of the algorithm's convergence is given, and comparative experiments against the classical genetic algorithm are run on test functions. The results show that the algorithm has good convergence behavior and local search ability and can be applied to a variety of engineering optimization problems.

3.
An improved genetic algorithm based on a growth operator, and its simulation (Total citations: 1; self-citations: 0; citations by others: 1)
Imitating the growth and development process of living organisms, a growth operator is added to the genetic algorithm framework, yielding a new framework: the growth genetic algorithm (growth GA). The algorithm overcomes the slow search speed and weak local search ability of the simple genetic algorithm. Exploiting the strong local search ability of hill climbing, a concrete implementation of the growth operator is given, and it is proved that adding the operator does not change the algorithm's convergence. Comparative function-optimization experiments against the simple genetic algorithm and the deterministic crowding genetic algorithm show that the growth GA balances search speed and convergence accuracy.

4.
To improve the search ability and convergence speed of graph coloring algorithms, a hybrid optimization algorithm (GA-HM) is proposed that combines the strengths of tabu search and genetic algorithms. The algorithm uses a genetic algorithm to generate initial solutions, assigning the elements to be colored to different color classes, and then applies tabu search with variable neighborhoods to update the vertex coloring. Experimental results show that, for the same target solutions, GA-HM achieves better global optimality and convergence.

5.
Borrowing the concentration- and fitness-based antibody update strategy of immune algorithms, an immune particle swarm optimization algorithm with improved local search ability is proposed and its convergence is analyzed. Under the convergence conditions, the algorithm dynamically adjusts the acceleration factors according to particle concentration and fitness, preserving swarm diversity and sustained search ability. In comparative simulations against the genetic algorithm, the immune genetic algorithm, and basic particle swarm optimization, the proposed algorithm found the best near-optimal solution and converged fastest. Control experiments on an unmanned underwater vehicle simulation platform show that a controller based on the immune PSO algorithm performs well and strongly resists ocean-current disturbance; the simulation results confirm the algorithm's feasibility.
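The concentration-based adjustment can be sketched roughly as follows. The formulas and parameter names here are illustrative guesses at the mechanism, not the paper's actual update rule:

```python
def concentration(i, fitnesses, eps=0.1):
    """Antibody-style concentration: the fraction of the swarm whose
    fitness lies within eps of particle i's fitness."""
    fi = fitnesses[i]
    return sum(1 for f in fitnesses if abs(f - fi) <= eps) / len(fitnesses)

def adjusted_accel(conc, c_base=2.0, gamma=0.5):
    """Shrink the acceleration factor for crowded particles so the
    swarm keeps its diversity and sustained search ability."""
    return c_base / (1.0 + gamma * conc)

fits = [1.00, 1.02, 1.05, 3.00]
c_crowded = concentration(0, fits)   # three particles cluster near 1.0
c_isolated = concentration(3, fits)  # the outlier counts only itself
```

A crowded particle gets a smaller acceleration factor than an isolated one, so dense regions of the swarm are discouraged from collapsing further, which is the diversity-preservation idea borrowed from immune algorithms.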

6.
An immune genetic algorithm for the series-parallel system configuration problem (Total citations: 1; self-citations: 0; citations by others: 1)
Based on an analysis of the reliability configuration problem for series-parallel systems, a solution method using an immune genetic algorithm (IGA) is proposed. While retaining the random global search ability of the basic genetic algorithm, it borrows the antibody diversity maintenance strategy of the biological immune mechanism, greatly improving population diversity. Experimental results show that the immune genetic algorithm effectively mitigates the premature convergence and weak local search of the basic genetic algorithm, exhibits good global convergence, and improves both global convergence and convergence speed.

7.
A hybrid algorithm fusing the ant colony system, the immune algorithm, and the genetic algorithm is proposed. The immune and genetic algorithms are embedded in each ant colony iteration, exploiting the local optimization ability of the immune algorithm and the global search ability of the genetic algorithm to speed up the ant colony system's convergence. Through the selection, crossover, and mutation operations of the genetic algorithm and the adaptive vaccination operation of the immune algorithm, the algorithm effectively overcomes the ant colony system's tendency to fall into local optima and to degenerate. Simulations on the traveling salesman problem show that the algorithm has very good convergence speed and global-optimum search ability.

8.
Research and application of an adaptive genetic BP neural network model (Total citations: 1; self-citations: 0; citations by others: 1)
温泉彻, 彭宏, 黎琼. 《计算机仿真》(Computer Simulation), 2006, 23(12): 160-162, 166
To improve the convergence speed and quality of neural networks more effectively, an adaptive genetic BP neural network model is proposed, based on the global search of genetic algorithms and the precise local search of BP neural networks. The model first uses an adaptive genetic algorithm to optimize the BP network's initial weights and then runs the BP training process; finally, the paper studies how to use the model to predict triple jump performance. Experimental results show that the method outperforms the traditional BP algorithm, improves the network's convergence and learning ability, and raises the accuracy of triple jump performance prediction to a certain extent, giving it practical value.
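The two-stage idea (evolutionary search for initial weights, then gradient-based training) can be sketched on a toy linear model. Everything below, including the population size, rates, and the model itself, is illustrative rather than the paper's setup:

```python
import random

def mse(w, data):
    """Mean squared error of the linear model y ≈ w[0]*x + w[1]."""
    return sum((w[0] * x + w[1] - y) ** 2 for x, y in data) / len(data)

data = [(x, 2.0 * x + 1.0) for x in range(-5, 6)]   # target: y = 2x + 1
random.seed(1)

# Stage 1: a tiny GA searches for good initial weights.
pop = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(20)]
for _ in range(30):
    pop.sort(key=lambda w: mse(w, data))
    parents = pop[:10]
    children = [
        [(a + b) / 2 + random.gauss(0, 0.1)
         for a, b in zip(*random.sample(parents, 2))]
        for _ in range(10)
    ]
    pop = parents + children
w = min(pop, key=lambda w: mse(w, data))

# Stage 2: gradient descent (standing in for BP training) fine-tunes.
for _ in range(200):
    g0 = sum(2 * (w[0] * x + w[1] - y) * x for x, y in data) / len(data)
    g1 = sum(2 * (w[0] * x + w[1] - y) for x, y in data) / len(data)
    w = [w[0] - 0.02 * g0, w[1] - 0.05 * g1]
```

The GA avoids the sensitivity of gradient training to a bad random initialization, while the gradient stage supplies the precise local convergence the GA lacks.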

9.
郑娟毅, 程秀琦, 付姣姣. 《计算机仿真》(Computer Simulation), 2021, 38(5): 126-130, 167
To address the sharp performance degradation of existing dynamic route guidance algorithms as the traffic problem grows in scale, an improved hybrid genetic ant colony algorithm is proposed. To resolve the local-optimum trap caused by the ant colony algorithm's strong dependence on pheromone, as well as the genetic algorithm's strong global search but slow convergence, the two algorithms are combined: based on the genetic algorithm's crossover and mutation factors, the pheromone update scheme is modified to strengthen the global search ability of the traditional ant colony algorithm, while the ant colony algorithm's strong local search is used to speed up the traditional genetic algorithm's convergence. Simulation results show that, compared with the genetic algorithm and the ant colony algorithm alone, the proposed algorithm has stronger global search ability and faster convergence when solving traveling salesman problems of different sizes.

10.
An improved genetic algorithm for solving the TSP (Total citations: 1; self-citations: 0; citations by others: 1)
The traveling salesman problem (TSP) is a classic problem to which genetic algorithms have been applied successfully. This paper improves the genetic algorithm with a new selection strategy and a new crossover operator, and introduces a sibling-competition strategy to accelerate convergence and strengthen global search. Applied to TSP instances of different types, the algorithm shows better convergence and computational efficiency than the traditional genetic algorithm, demonstrating that the improvements are effective.

11.
A hybrid genetic algorithm for function optimization problems (Total citations: 22; self-citations: 0; citations by others: 22)
彭伟, 卢锡城. 《软件学报》(Journal of Software), 1999, 10(8): 819-823
Combining a traditional local search algorithm with a genetic algorithm can largely alleviate the genetic algorithm's slow convergence before it reaches the global optimum. This paper presents a hybrid algorithm combining the flexible polyhedron method with an orthogonal genetic algorithm. Experiments show that by alternating global and local search over the problem's solution space, it solves function optimization problems more effectively.

12.
A novel parallel hybrid intelligence optimization algorithm (PHIOA) is proposed, combining the merits of particle swarm optimization and genetic algorithms. The PHIOA uses the selection, crossover, and mutation of genetic algorithms (GAs) and the velocity and position updates of particle swarm optimization (PSO), with the two running independently. The algorithm divides the individuals into two equal groups according to their fitness values: the subgroup with the top fitness values is evolved by the GA and the other subgroup is evolved by PSO. The best individual is selected as the global optimum in every cycle, which yields better results than either PSO or GAs alone and improves the overall performance of the algorithm. The PHIOA is used to optimize the structure and parameters of a fuzzy neural network. Finally, experimental results demonstrate the superiority of the proposed PHIOA in searching for the global optimal solution: compared with existing methods, it improves error accuracy while speeding up convergence and effectively avoids premature convergence.
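The fitness-based split into two subgroups might look like this. This is a minimal sketch, with the division rule simplified to the equal halves the abstract describes:

```python
def split_by_fitness(pop, fitness):
    """Divide the population into two equal groups by fitness: the top
    half goes to the GA operators, the bottom half to the PSO updates."""
    ranked = sorted(pop, key=fitness, reverse=True)
    half = len(ranked) // 2
    return ranked[:half], ranked[half:]

ga_group, pso_group = split_by_fitness([3, 1, 4, 1, 5, 9], lambda x: x)
```

Each subgroup is then evolved by its own operator set, and the best individual across both groups is taken as the global optimum for the cycle.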

13.
Hybrid genetic algorithms for feature selection (Total citations: 15; self-citations: 0; citations by others: 15)
This paper proposes a novel hybrid genetic algorithm for feature selection. Local search operations are devised and embedded in hybrid GAs to fine-tune the search. The operations are parameterized in terms of their fine-tuning power, and their effectiveness and timing requirements are analyzed and compared. The hybridization produces two desirable effects: a significant improvement in final performance and the acquisition of subset-size control. The hybrid GAs showed better convergence properties than classical GAs. A method for rigorous timing analysis was developed to compare the timing requirements of the conventional and proposed algorithms. Experiments on various standard data sets revealed that the proposed hybrid GA is superior to both a simple GA and sequential search algorithms.

14.
A novel, simple, and efficient real-coded genetic algorithm (RCGA) with an adaptive directed mutation (ADM) operator is proposed and employed to solve complex function optimization problems. The suggested ADM operator enhances the GA's ability to find global optima and speeds convergence by integrating a local directional search strategy with adaptive random search strategies. Using 41 benchmark global optimization test functions, the performance of the new algorithm is compared with five conventional mutation operators and then with six genetic algorithms (GAs) reported in the literature. Results indicate that the proposed ADM-RCGA is fast, accurate, and reliable, and outperforms all the other GAs considered in the study.
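A crude sketch of a directed mutation of this kind follows. The blend of directional and random moves below is our guess at the mechanism, not the paper's exact operator:

```python
import random

def directed_mutation(x, prev_x, prev_fit, fitness, scale=0.5):
    """If the last move improved fitness, mutate along that direction
    (local directional search); otherwise fall back to a random
    mutation (adaptive random search)."""
    if fitness(x) > prev_fit:
        direction = [a - b for a, b in zip(x, prev_x)]
        return [a + scale * random.random() * d for a, d in zip(x, direction)]
    return [a + random.gauss(0, scale) for a in x]

# The last step moved from 0.0 to 1.0 and improved f(v) = v, so the
# mutant continues in the positive direction.
mutant = directed_mutation([1.0], [0.0], 0.0, lambda v: v[0])
```

The directional branch exploits recent progress, while the random branch keeps the exploration needed to escape a misleading direction.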

15.
In this paper we propose several efficient hybrid methods based on genetic algorithms and fuzzy logic. The proposed hybridization methods combine a rough search technique, a fuzzy logic controller, and a local search technique. The rough search technique is used to initialize the population of the genetic algorithm (GA); its strategy is to make large jumps in the search space in order to avoid being trapped in local optima. The fuzzy logic controller dynamically regulates the fine-tuning of the genetic algorithm's parameters (crossover ratio and mutation ratio). The local search technique finds a better solution in the convergence region, either after the GA loop or within it. Five algorithms, one plain GA and four hybrid GAs, along with some conventional heuristics, are applied to three complex optimization problems. The results are analyzed and the best hybrid algorithm is recommended.

16.
Study on hybrid PS-ACO algorithm (Total citations: 4; self-citations: 2; citations by others: 2)
Ant colony optimization (ACO) is a recent meta-heuristic inspired by the behavior of real ant colonies. The algorithm uses a parallel computation mechanism and exhibits strong robustness, but it suffers from stagnation and premature convergence. In this paper, a hybrid PS-ACO algorithm, an ACO algorithm modified by particle swarm optimization (PSO), is presented. The pheromone update rules of ACO are combined with the local and global search mechanisms of PSO: on one hand, the search space is expanded by local exploration; on the other, the search process is directed by global experience. The local and global search mechanisms are combined stochastically to balance exploration and exploitation, so that search efficiency can be improved. Convergence analysis and parameter selection are given through simulations on traveling salesman problems (TSP). The results show that the hybrid PS-ACO algorithm has better convergence performance than the genetic algorithm (GA), ACO, and MMAS under a limited number of evolution iterations.

17.
Stochastic optimization algorithms like genetic algorithms (GAs) and particle swarm optimization (PSO) perform global optimization but waste computational effort on random search. Deterministic algorithms like gradient descent, on the other hand, converge rapidly but may get stuck in local minima of multimodal functions. An approach that combines the strengths of stochastic and deterministic optimization while avoiding their weaknesses is therefore of interest. This paper presents a new hybrid optimization algorithm that combines the PSO algorithm with gradient-based local search to achieve faster convergence and a more accurate final solution without getting trapped in local minima. In the new gradient-based PSO algorithm, referred to as the GPSO algorithm, PSO is used for global exploration and a gradient-based scheme for accurate local exploration; the global minimum is located by finding progressively better local minima. The GPSO algorithm avoids inertial weights and constriction coefficients, which can cause PSO to converge to a local minimum if chosen improperly. The De Jong suite of benchmark optimization problems was used to test the new algorithm and to facilitate comparison with the classical PSO algorithm. The GPSO algorithm is compared with four refinements of PSO from the literature and shown to converge faster to a significantly more accurate final solution on a variety of benchmark test functions.
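A minimal sketch of the global-PSO/local-gradient alternation on the sphere function follows. For brevity it uses a plain velocity update with a fixed inertia constant, which the actual GPSO deliberately avoids, so treat all constants as placeholders:

```python
import random

def gradient_refine(x, grad, lr=0.1, steps=50):
    """Accurate local exploration: plain gradient descent from a point."""
    for _ in range(steps):
        x = [xi - lr * gi for xi, gi in zip(x, grad(x))]
    return x

def hybrid_pso(f, grad, dim=2, particles=8, iters=20):
    """Alternate a PSO-style global step with gradient-based refinement
    of the swarm's best point (simplified hybrid, not the paper's GPSO)."""
    random.seed(3)
    xs = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vs = [[0.0] * dim for _ in range(particles)]
    best = min(xs, key=f)
    for _ in range(iters):
        for i in range(particles):
            # Simplified velocity update pulled toward the global best.
            vs[i] = [0.7 * v + 1.4 * random.random() * (b - x)
                     for v, b, x in zip(vs[i], best, xs[i])]
            xs[i] = [x + v for x, v in zip(xs[i], vs[i])]
            if f(xs[i]) < f(best):
                best = xs[i]
        # Gradient-based local exploration refines the global best.
        best = gradient_refine(best, grad)
    return best

sphere = lambda x: sum(v * v for v in x)
best = hybrid_pso(sphere, lambda x: [2.0 * v for v in x])
```

The swarm supplies candidate basins; gradient descent then drives the best candidate to the bottom of its basin, so the global minimum is approached through progressively better local minima.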

18.
A class of efficient hybrid genetic algorithms (Total citations: 2; self-citations: 0; citations by others: 2)
A class of real-coded hybrid genetic algorithms for function optimization is proposed. The algorithm consists of a global search model and a local search model, and applies orthogonal crossover to the offspring produced by the genetic operators. On one hand, the proposed hybrid genetic algorithm effectively maintains population diversity; on the other, orthogonal crossover produces high-quality individuals. Optimization results on four test functions show that it has advantages on high-dimensional problems and on complex multimodal optimization problems.

19.
A genetic algorithm with disruptive selection (Total citations: 9; self-citations: 0; citations by others: 9)
Genetic algorithms are a class of adaptive search techniques based on the principles of population genetics; the metaphor underlying them is natural evolution. Applying the "survival-of-the-fittest" principle, traditional genetic algorithms allocate more trials to above-average schemata. However, increasing the sampling rate of above-average schemata does not guarantee convergence to a global optimum: the global optimum could be a relatively isolated peak, or located in schemata with large variance in performance. In this paper we propose a novel selection method, disruptive selection. This method adopts a nonmonotonic fitness function, quite different from traditional monotonic fitness functions: unlike traditional genetic algorithms, it favors both superior and inferior individuals. Experimental results show that GAs using the proposed method easily find the optimal solution of a function that is hard for traditional GAs to optimize. We also present a convergence analysis that estimates the occurrence ratio of the optima of a deceptive function after a certain number of generations. Experiments further show that GAs using disruptive selection can find the optima more quickly and reliably than GAs using directional selection. These results suggest that disruptive selection is useful for problems with large variance within schemata and for GA-deceptive problems.
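The nonmonotonic fitness at the heart of disruptive selection can be illustrated with a distance-from-the-mean rescaling. This particular rescaling is our illustrative choice; the paper may define the nonmonotonic function differently:

```python
def disruptive_fitness(raw):
    """Nonmonotonic rescaling: fitness becomes the distance from the
    population mean, so both superior and inferior individuals are
    favored over average ones."""
    mean = sum(raw) / len(raw)
    return [abs(f - mean) for f in raw]

scores = disruptive_fitness([1.0, 5.0, 9.0])   # mean is 5.0
```

Both extremes receive the same rescaled fitness while the average individual receives the least, which is exactly the inversion of the usual "survival-of-the-fittest" allocation the abstract describes.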

20.
Genetic algorithms (GAs) are population-based global search methods that can escape local optima and locate the globally optimal regions. Near the optimum set, however, their intensification process is often inaccurate, because the GA search strategy is completely probabilistic: with random search near the optimum set, the probability of improving the current solution is small. Another drawback of GAs is genetic drift. The GA search process is a black box; no one knows which region the algorithm is searching, and it may explore only a small part of the feasible space. Moreover, GAs usually do not use the information about optimal regions gathered in past iterations.
In this paper, a new method called SOM-Based Multi-Objective GA (SBMOGA) is proposed to improve genetic diversity. In SBMOGA, a grid of neurons based on the learning rule of a Self-Organizing Map (SOM), supported by Variable Neighborhood Search (VNS), learns from the genetic algorithm, improving both local and global search. A SOM is a neural network capable of learning that can improve the efficiency of data-processing algorithms; the VNS algorithm is developed to enhance local search efficiency in evolutionary algorithms (EAs). The SOM uses a multi-objective learning rule based on Pareto dominance to train its neurons. The neurons gradually move toward better-fitness areas along trajectories in the feasible space, so the knowledge of the optimal front in past generations is saved in the form of trajectories. The final state of the neurons determines a set of new solutions that can be regarded as a probability density distribution of the high-fitness areas in the multi-objective space; this new set of solutions can potentially improve the GA's overall efficiency.
In the last section of the paper, the applicability of the proposed algorithm is examined by developing optimal policies for a real-world multi-objective multi-reservoir system, a non-linear, non-convex, multi-objective optimization problem.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司). 京ICP备09084417号