Similar Documents
19 similar documents found (search time: 156 ms)
1.
丁有军, 钟声. 《计算机科学》, 2012, 39(10): 218-219
Estimation of distribution algorithms (EDAs) build a probabilistic model from a macroscopic point of view to describe the distribution of the solution space, and obtain superior individuals through evolutionary computation. Research on discrete EDAs is by now fairly mature, whereas progress on continuous EDAs has been slow. Using the idea of shrinking the sampling region with uniform distributions, a new EDA is designed for continuous optimization problems. Experimental data show that this EDA is effective for solving continuous problems.
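The region-shrinking idea summarized above can be sketched as a minimal continuous EDA; the population size, elite fraction, and shrink rule below are illustrative assumptions, not the authors' exact design:

```python
import random

def continuous_eda(fitness, dim, lo, hi, pop_size=50, elite_frac=0.3, gens=100):
    """Minimal continuous EDA: sample each variable uniformly from a
    per-variable interval, then shrink that interval to the range
    spanned by the elite individuals (assumed shrink rule)."""
    bounds = [(lo, hi)] * dim
    best = None
    for _ in range(gens):
        pop = [[random.uniform(a, b) for a, b in bounds] for _ in range(pop_size)]
        pop.sort(key=fitness)  # minimization
        elite = pop[:max(2, int(pop_size * elite_frac))]
        if best is None or fitness(elite[0]) < fitness(best):
            best = elite[0]
        # shrink each variable's sampling interval to the elite's range
        bounds = [(min(ind[i] for ind in elite), max(ind[i] for ind in elite))
                  for i in range(dim)]
    return best

# usage: minimize the sphere function on [-5, 5]^5
sphere = lambda x: sum(v * v for v in x)
sol = continuous_eda(sphere, dim=5, lo=-5.0, hi=5.0)
```

On a unimodal function such as the sphere the intervals contract geometrically around the optimum; on multimodal problems a slower shrink rule or a restart mechanism would be needed.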

2.
An Improved MIMIC Algorithm for the Traveling Salesman Problem (cited by: 2)
To solve the traveling salesman problem (TSP) in combinatorial optimization effectively, an improved estimation of distribution algorithm with bivariate dependencies, the MIMIC algorithm, is proposed. The improved MIMIC algorithm replaces the original binary encoding with a decimal encoding, builds a probabilistic model for the TSP that describes the distribution of travel routes over the search space, and draws random samples from this route distribution to guide the generation of the offspring population, evolving the population toward the optimal route. Simulation experiments show that the improved MIMIC algorithm is an effective method for solving the TSP.
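As a rough illustration of sampling tours from a route-probability model, the sketch below estimates a simple edge-frequency model P(next city | current city) from elite tours and samples new tours from it. This is a simplified stand-in for the paper's bivariate MIMIC chain, and all names and parameters are illustrative:

```python
import random

def build_edge_model(elite_tours, n_cities, smooth=0.1):
    """Estimate P(next city | current city) from edge frequencies
    in elite tours (tours are closed: last city links back to first)."""
    counts = [[smooth] * n_cities for _ in range(n_cities)]
    for tour in elite_tours:
        for a, b in zip(tour, tour[1:] + tour[:1]):
            counts[a][b] += 1
    return counts

def sample_tour(counts, n_cities):
    """Sample a tour city by city, renormalizing the edge
    probabilities over the still-unvisited cities."""
    tour = [random.randrange(n_cities)]
    unvisited = set(range(n_cities)) - {tour[0]}
    while unvisited:
        cur = tour[-1]
        cand = list(unvisited)
        weights = [counts[cur][c] for c in cand]
        tour.append(random.choices(cand, weights=weights)[0])
        unvisited.remove(tour[-1])
    return tour

# usage: build a toy model from three elite tours, then sample a new tour
model = build_edge_model([[0, 1, 2, 3], [0, 1, 3, 2], [1, 0, 2, 3]], n_cities=4)
new_tour = sample_tour(model, n_cities=4)
```

Restricting sampling to unvisited cities guarantees every sampled individual is a valid permutation, so no repair step is needed.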

3.
Optimal Design of Artificial Neural Networks Based on an Estimation of Distribution Algorithm (cited by: 1)
周晓燕. 《微计算机信息》, 2005, 21(30): 130-131
Estimation of distribution algorithms (EDAs) are a new class of evolutionary algorithms: from statistics over the individuals selected from the current population they derive a probability distribution for the next generation, and generate the next population by random sampling. This paper applies an EDA built on a Gaussian network of general structure to the optimization of artificial neural networks. Simulation results show that an EDA used to optimize a neural network can converge to the global optimum in a very short time, avoiding the shortcomings of the BP algorithm and improving the network's learning performance, thus providing a new approach to the optimization of artificial neural networks.

4.
Bayesian statistical inference is introduced into the probabilistic model of estimation of distribution algorithms, and a discrete EDA based on Bayesian inference is proposed. A prior probability model is built from the distribution of solutions of the discrete optimization problem; combining the probability model of the elite group with the forest-structured model of the bivariate marginal distribution algorithm yields a conditional probability model; Bayesian inference then combines these two models into a posterior probability model that guides the generation of the new population. Simulation results show that the algorithm converges faster than the EDAs1 algorithm on the gr21 traveling salesman instance. With the population size and the maximum number of generations fixed, the influence of the combination rate and the learning rate on performance is analyzed; the algorithm is most stable when each is set to 0.2.

5.
An Estimation of Distribution Algorithm Built on the Maximum Entropy Principle and Its Applications (cited by: 1)
Estimation of distribution algorithms are a new direction in evolutionary computation. They use probabilistic models to construct the search that a genetic algorithm would perform: instead of relying on crossover and mutation, they estimate the probability distribution of the better individuals and use that distribution to guide exploration of the search space. This paper proposes a class of EDAs based on maximum entropy. Experimental results show that, on some relatively complex problems, the proposed algorithm outperforms the genetic algorithm.

6.
To address the difficulties traditional estimation of distribution algorithms face when building a probabilistic model, this paper proposes a model based on conditional probabilities and Gibbs sampling that effectively improves the generality of EDAs. An EDA using this model constructs several supervised-learning sample sets from the promising individuals found during evolution, estimates from each set the conditional probability of the corresponding variable, and then uses this set of conditional probabilities in Gibbs sampling to generate new individuals that replace the inferior ones in the population. Simulation experiments show that the improved algorithm finds the global optimum of additively decomposable functions and exhibits strong global optimization ability.
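A minimal sketch of the conditional-probability-plus-Gibbs-sampling idea, under the simplifying assumption that each bit's conditional depends only on its two neighboring positions (the paper's actual conditioning sets may differ):

```python
import random

def estimate_conditionals(elite, n, smooth=0.5):
    """For each position i, estimate P(x_i = 1 | left, right neighbor)
    from the elite set (positions wrap around at the ends)."""
    tables = []
    for i in range(n):
        counts = {(l, r): [smooth, smooth] for l in (0, 1) for r in (0, 1)}
        for s in elite:
            ctx = (s[(i - 1) % n], s[(i + 1) % n])
            counts[ctx][s[i]] += 1
        tables.append({ctx: c[1] / (c[0] + c[1]) for ctx, c in counts.items()})
    return tables

def gibbs_sample(tables, init, sweeps=5):
    """Generate a new individual by Gibbs sweeps: resample each bit
    from its estimated conditional given the current neighbors."""
    x = list(init)
    n = len(x)
    for _ in range(sweeps):
        for i in range(n):
            p1 = tables[i][(x[(i - 1) % n], x[(i + 1) % n])]
            x[i] = 1 if random.random() < p1 else 0
    return x
```

In an EDA loop, individuals produced by `gibbs_sample` would replace the worst members of the population, as the abstract describes.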

7.
As an emerging stochastic optimization method, estimation of distribution algorithms hold an important place in computer science and are a focus of current research. EDAs combine the strengths of genetic algorithms and statistical learning: the probabilistic model of the distribution of individuals is built chiefly by statistical-learning methods. EDAs have a wide range of applications, one of which is examination timetabling. Scheduling examinations is important but complex, yet many schools still do it by hand, which is time-consuming, labor-intensive, inefficient, and error-prone. An EDA can make exam scheduling both more convenient and more accurate. This paper explores the application of estimation of distribution algorithms to examination timetabling.

8.
A Multi-Probability-Model Estimation of Distribution Algorithm for Multi-Objective Optimization (cited by: 1)
钟润添, 龚海峰, 李斌, 庄镇泉. 《计算机仿真》, 2007, 24(4): 180-182, 234
A multi-probability-model estimation of distribution algorithm for multi-objective optimization is proposed. In every generation the algorithm uses several probability models to guide the search for the Pareto-optimal region of the multi-objective problem. An EDA uses a probability model to guide the search for optimal solutions; using several models preserves the diversity of the resulting Pareto-optimal set. The algorithm has strong search ability, and its solutions cover the Pareto front well. Its performance is evaluated on a set of test functions and compared with other multi-objective optimization algorithms; the results show that it solves multi-objective problems better than comparable algorithms.

9.
Estimation of distribution algorithms (EDAs) are a new class of evolutionary algorithms: from statistics over the individuals selected from the current population they derive a probability distribution for the next generation, and generate the next population by random sampling. This paper applies an EDA built on a Gaussian network of general structure to the optimization of artificial neural networks. Simulation results show that an EDA used to optimize a neural network can converge to the global optimum in a very short time, avoiding the shortcomings of the BP algorithm and improving the network's learning performance, thus providing a new approach to the optimization of artificial neural networks.

10.
To further improve the convergence speed and solution-set distribution of multi-objective evolutionary algorithms (MOEAs) on problems with independent variables, a coevolutionary multi-objective optimization algorithm balancing distribution and convergence (CMOA-BDC) is proposed, built on a cooperative coevolution model. CMOA-BDC first maintains an elite archive: the first front is selected from the evolving population and the archive by the dominance relation, and crowding distance preserves its distribution. The first front is then partitioned by clustering and a probability model is built for each cluster. Finally, simulated annealing combines estimation of distribution with genetic evolution to achieve coevolution. Comparisons with classical MOEAs show that the solution sets obtained by CMOA-BDC have better convergence and distribution.
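The crowding-distance measure used to preserve the first front's distribution is the standard NSGA-II quantity; a minimal sketch (illustrative, not the paper's exact implementation):

```python
def crowding_distance(front):
    """Crowding distance of each solution in a front (list of objective
    vectors): for each objective, boundary solutions get infinite distance,
    interior ones accumulate the normalized gap between their neighbors."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        dist[order[0]] = dist[order[-1]] = float('inf')
        span = front[order[-1]][k] - front[order[0]][k] or 1.0
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][k] - front[order[j - 1]][k]) / span
    return dist

# usage: middle point of a 3-point front gets a finite crowding distance
d = crowding_distance([[0.0, 2.0], [1.0, 1.0], [2.0, 0.0]])
```

Selection then prefers solutions with larger crowding distance, spreading the retained front along the Pareto surface.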

11.
Estimation of distribution algorithms (EDAs) are a wide-ranging family of evolutionary algorithms whose common feature is the way they evolve: by learning a probability distribution from the best individuals in a population and sampling it to generate the next one. Although they have been widely applied to solve combinatorial optimization problems, there are also extensions that work with continuous variables. In this paper [an extended version of delaOssa et al., Initial approaches to the application of islands-based parallel EDAs in continuous domains, in: Proceedings of the 34th International Conference on Parallel Processing Workshops (ICPP 2005 Workshops), Oslo, 2005, pp. 580–587] we focus on the solution of the latter by means of island models. Besides evaluating the performance of traditional island models when applied to EDAs, our main goal is to gain some insight into the behavior and benefits of the migration of probability models that this framework allows.

12.
The Bayesian optimization algorithm (BOA) is one of the successful and widely used estimation of distribution algorithms (EDAs), which have been employed to solve many different optimization problems. In EDAs, a model that encodes interactions among problem variables is learned from the selected population; new individuals are generated by sampling the model and incorporated into the population. Different probabilistic models have been used in EDAs to learn interactions; the Bayesian network (BN) is the well-known graphical model used in BOA. Learning a proper model in EDAs, and particularly in BOA, is known to be a computationally expensive task, and different methods have been proposed in the literature to reduce the cost of model building. This paper employs bivariate dependencies to learn accurate BNs in BOA efficiently. The proposed approach extracts the bivariate dependencies using an appropriate pairwise interaction-detection metric. Because the structure of the underlying problems is static, these dependencies are reused in each generation of BOA to learn an accurate network, reducing the computational cost of model building dramatically. Various optimization problems are solved by the algorithm. The experimental results show that the proposed approach efficiently finds the optimum in problems with different types of interactions; significant speedups are observed in the model-building procedure as well.
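The abstract does not name its pairwise interaction-detection metric; a common choice is the mutual information between pairs of variables in the selected population, sketched below as an illustrative assumption:

```python
from math import log

def mutual_information(pop, i, j):
    """Pairwise mutual information (in nats) between binary variables
    i and j, estimated from a selected population: high values indicate
    a candidate dependency edge for the Bayesian network."""
    n = len(pop)
    pij = {}
    for s in pop:
        key = (s[i], s[j])
        pij[key] = pij.get(key, 0) + 1 / n
    # marginals of each variable
    pi = {v: sum(p for (a, _), p in pij.items() if a == v) for v in (0, 1)}
    pj = {v: sum(p for (_, b), p in pij.items() if b == v) for v in (0, 1)}
    mi = 0.0
    for (a, b), p in pij.items():
        if p > 0 and pi[a] > 0 and pj[b] > 0:
            mi += p * log(p / (pi[a] * pj[b]))
    return mi
```

Pairs whose mutual information exceeds a threshold would be kept as fixed candidate dependencies across generations, matching the reuse idea in the abstract.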

13.
Evolutionary algorithms (EAs) are particularly suited to solving problems for which little information is available. From this standpoint, estimation of distribution algorithms (EDAs), which guide the search with probabilistic models of the population, have brought a new view to evolutionary computation. While solving a given problem with an EDA, the user has access to a set of models that reveal probabilistic dependencies between variables, an important source of information about the problem. However, as the complexity of the models used increases, the chance of overfitting, and consequently of reduced model interpretability, increases as well. This paper investigates the relationship between the probabilistic models learned by the Bayesian optimization algorithm (BOA) and the underlying problem structure. The purpose of the paper is threefold. First, model building in BOA is analyzed to understand how the problem structure is learned. Second, it is shown how the selection operator can lead to model overfitting in Bayesian EDAs. Third, the scoring metric that guides the search for an adequate model structure is modified to take into account the non-uniform distribution of the mating pool generated by tournament selection. Overall, this paper contributes to understanding and improving model accuracy in BOA, providing more interpretable models to assist efficiency-enhancement techniques and human researchers.

14.
An important problem in the study of evolutionary algorithms is how to continuously predict promising solutions while simultaneously escaping from local optima. In this paper we propose, to the best of our knowledge for the first time, an elitist probability schema (EPS). Our schema is an index over binary strings that expresses the similarity of an elitist population at every string position, capturing the cumulative effect of fitness selection on the coding similarity of the population. In each generation, EPS quantifies the coding similarity of the population objectively and quickly. One of our key innovations is that EPS can continuously predict promising solutions while simultaneously escaping from local optima in most cases. To demonstrate its abilities, we designed an elitist probability schema genetic algorithm and an elitist probability schema compact genetic algorithm; both are estimation of distribution algorithms (EDAs). We provide a fair comparison with the persistent elitist compact genetic algorithm (PeCGA), the quantum-inspired evolutionary algorithm (QEA), and particle swarm optimization (PSO) on the 0–1 knapsack problem. The proposed algorithms converge more quickly than PeCGA, QEA, and PSO, especially on the large knapsack instances. Furthermore, their computation time is lower than that of some EDAs based on building explicit probability models, and approximately the same as that of QEA and PSO. This is acceptable for evolutionary algorithms, and satisfactory for EDAs. The proposed algorithms succeed with respect to both convergence performance and computation time, which suggests that EPS is effective.
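As a rough illustration of a per-position similarity index over an elitist population (the paper's EPS definition is richer; this is only an assumed simplification):

```python
def elitist_probability_schema(elite):
    """Frequency of bit 1 at each string position in the elitist population.

    Positions with frequency near 0 or 1 are highly similar across the
    elite (converged); positions near 0.5 are still undecided.
    (Simplified stand-in for the paper's EPS index.)"""
    m = len(elite)
    return [sum(bits) / m for bits in zip(*elite)]

# usage: positions 0 and 2 are fully converged, position 1 is undecided
schema = elitist_probability_schema([[1, 1, 0], [1, 0, 0]])
```

A search algorithm can bias sampling toward converged positions while deliberately perturbing undecided ones, which is one way such an index supports both prediction and escape from local optima.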

15.
Recently, a novel probabilistic model-building evolutionary algorithm (a so-called estimation of distribution algorithm, or EDA), named probabilistic model building genetic network programming (PMBGNP), has been proposed. PMBGNP uses graph structures for its individual representation, which gives it higher expressive ability than classical EDAs and extends EDAs to a range of problems such as data mining and agent control. This paper proposes a continuous version of PMBGNP for continuous optimization in agent control problems. Unlike other continuous EDAs, the proposed algorithm evolves the continuous variables by reinforcement learning (RL). We compare its performance with several state-of-the-art algorithms on a real mobile-robot control problem. The results show that the proposed algorithm outperforms the others with statistically significant differences.

16.
Genetic algorithms perform crossover effectively when linkage sets - sets of variables tightly linked to form building blocks - are identified. Several methods have been proposed to detect linkage sets: perturbation methods (PMs) investigate fitness differences caused by perturbations of gene values, while estimation of distribution algorithms (EDAs) estimate the distribution of promising strings. In this paper, we propose a novel approach combining the two, which detects dependencies among variables by estimating the distribution of strings clustered according to fitness differences. The proposed algorithm, called Dependency Detection for Distribution Derived from fitness Differences (D5), can detect dependencies in a class of functions that are difficult for EDAs, and requires less computational cost than PMs.

17.
In estimation of distribution algorithms (EDAs), the joint probability distribution of high-performance solutions is represented by a probability model; that is, the priority search areas of the solution space are characterized by the probability model. From this point of view, an environment identification-based memory management scheme (EI-MMS) is proposed to adapt binary-coded EDAs to dynamic optimization problems (DOPs). Within this scheme, the probability models that characterize the search space of each environment are stored and retrieved to adapt the EDA according to environmental changes. A diversity-loss correction scheme and a boundary correction scheme are combined to counteract the diversity loss during the static evolutionary process in each environment. Experimental results show the validity of the EI-MMS and indicate that it can be applied to any binary-coded EDA. In comparison with three state-of-the-art algorithms, the univariate marginal distribution algorithm (UMDA) using the EI-MMS performs better on three decomposable DOPs. To understand the EI-MMS more deeply, a sensitivity analysis of its parameters is also carried out in this paper.

18.
The adoption of probabilistic models of selected individuals is a powerful approach in evolutionary computation. Probabilistic models based on high-order statistics have been used by estimation of distribution algorithms (EDAs), resulting in better effectiveness when searching for the global optima of hard optimization problems. This paper proposes a new framework for evolutionary algorithms that combines a simple EDA based on order-1 statistics with a clustering technique, in order to avoid the high computational cost of higher-order EDAs. The algorithm uses clustering to group genotypically similar solutions, on the premise that different clusters focus on different substructures and that combining information from different clusters effectively combines those substructures. The combination mechanism uses an information-gain measure to decide which cluster is more informative for any given gene position during a pairwise cluster combination. Empirical evaluations cover a comprehensive range of benchmark optimization problems.

19.
Simplified lattice models have played an important role in protein structure prediction and protein folding problems. These models can be useful for an initial approximation of the protein structure, and for the investigation of the dynamics that govern the protein folding process. Estimation of distribution algorithms (EDAs) are efficient evolutionary algorithms that can learn and exploit the search space regularities in the form of probabilistic dependencies. This paper introduces the application of different variants of EDAs to the solution of the protein structure prediction problem in simplified models, and proposes their use as a simulation tool for the analysis of the protein folding process. We develop new ideas for the application of EDAs to the bidimensional and tridimensional (2-d and 3-d) simplified protein folding problems. This paper analyzes the rationale behind the application of EDAs to these problems, and elucidates the relationship between our proposal and other population-based approaches proposed for the protein folding problem. We argue that EDAs are an efficient alternative for many instances of the protein structure prediction problem and are indeed appropriate for a theoretical analysis of search procedures in lattice models. All the algorithms introduced are tested on a set of difficult 2-d and 3-d instances from lattice models. Some of the results obtained with EDAs are superior to the ones obtained with other well-known population-based optimization algorithms.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号