Similar Documents
20 similar documents found (search time: 31 ms)
1.
Xue Feng, Shi Xuhua, Shi Feifan. Journal of Computer Applications, 2020, 40(4): 1091-1096
For constrained optimization problems with computationally expensive objective functions, a surrogate model is proposed to replace the expensive objective function, and constrained individuals are selected with the help of objective-function information, yielding a surrogate-assisted differential evolution algorithm for constrained optimization. First, an initial population is built by Latin hypercube sampling and evaluated with the expensive objective function; these samples are used to train a neural-network surrogate of the objective. Then, differential evolution generates one offspring for each parent in the population, the offspring are evaluated with the surrogate, the feasibility rule is used to compare each offspring with its parent and update the population, and a replacement mechanism swaps inferior individuals in the population for superior individuals from a backup archive. Finally, the algorithm stops when the maximum number of fitness evaluations is reached and returns the best solution. Results on 10 test functions show that the proposed algorithm is more accurate than the comparison algorithms. Applied to an I-beam optimization problem, it reduces the number of fitness evaluations by 80% compared with the unoptimized algorithm, and by 36% compared with the FROFI (Feasibility Rule with the incorporation of Objective Function Information) algorithm. The proposed algorithm effectively reduces the number of calls to the expensive objective function, improves optimization efficiency, and saves computational cost.
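The loop this abstract describes — surrogate pre-screening of DE trial vectors, with the expensive objective reserved for promising candidates — can be sketched as follows. This is a minimal illustration, not the paper's algorithm: a 1-nearest-neighbour regressor stands in for the neural-network surrogate, the feasibility rule and replacement archive are omitted, and all names (`expensive_objective`, `nn_surrogate`, `de_step`) are invented for the sketch.

```python
import random

def expensive_objective(x):
    # Stand-in for the costly objective; the paper's real target is far more expensive.
    return sum(xi ** 2 for xi in x)

def nn_surrogate(archive):
    # 1-nearest-neighbour regressor standing in for the neural-network surrogate.
    def predict(x):
        nearest = min(archive, key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], x)))
        return nearest[1]
    return predict

def de_step(pop, f=0.5, cr=0.9):
    # Classic DE/rand/1/bin trial-vector generation, one trial per parent.
    trials = []
    for i, parent in enumerate(pop):
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        j_rand = random.randrange(len(parent))
        trials.append([a[k] + f * (b[k] - c[k])
                       if (random.random() < cr or k == j_rand) else parent[k]
                       for k in range(len(parent))])
    return trials

random.seed(0)
dim, pop_size, budget = 3, 10, 60
pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
archive = [(p, expensive_objective(p)) for p in pop]   # initial expensive evaluations
evals = len(archive)
for _ in range(100):                                   # generation cap
    if evals >= budget:
        break
    predict = nn_surrogate(archive)
    for i, trial in enumerate(de_step(pop)):
        if evals >= budget:
            break
        if predict(trial) < predict(pop[i]):           # cheap pre-screen on the surrogate
            fit = expensive_objective(trial)           # only promising trials are paid for
            archive.append((trial, fit))
            evals += 1
            pop[i] = trial
best_x, best_f = min(archive, key=lambda s: s[1])
```

The point of the pre-screen is that the expensive function is never called on a trial the surrogate already considers worse than its parent, which is how the evaluation budget is stretched.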

2.
Traditional evolutionary computing techniques use an explicit fitness function – mathematical or simulated – to derive a solution to a problem from a population of individuals, over a number of generations. This paper examines an approach that allows such techniques to be used on problems in which evaluations are costly, cannot be expressed formally, or are difficult to simulate. A neural network is trained using example individuals with the explicit fitness, and the resulting model of the fitness function is then used by the evolutionary algorithm to find a solution. It is shown that the approach is effective over a small range of function types in comparison to the traditional approach when limited training data is available. An iterative step is then added whereby, after a number of generations, the current best individual in a population is evaluated directly on the explicit fitness function. The individual and its “real” fitness are then added to the training data, and the neural network is re-trained to improve its approximation of the fitness function. It is shown that in this way the performance of the model-based architecture is greatly improved on more rugged/complex landscapes without a large increase in the amount of training data required.
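The retraining loop described here — evolve on the model, periodically evaluate the current best on the explicit fitness, and fold the result back into the training data — can be sketched as below. A toy 1-NN regressor stands in for the neural network, and all names (`true_fitness`, `NearestModel`) are illustrative.

```python
import random

def true_fitness(x):
    # Stand-in for a costly explicit fitness function.
    return (x - 0.3) ** 2

class NearestModel:
    # Tiny 1-NN regressor standing in for the trained neural network.
    def __init__(self, xs, ys):
        self.xs, self.ys = list(xs), list(ys)
    def predict(self, x):
        i = min(range(len(self.xs)), key=lambda j: abs(self.xs[j] - x))
        return self.ys[i]
    def add(self, x, y):
        self.xs.append(x)
        self.ys.append(y)

random.seed(1)
train_x = [random.uniform(0, 1) for _ in range(5)]
model = NearestModel(train_x, [true_fitness(x) for x in train_x])
pop = [random.uniform(0, 1) for _ in range(8)]
true_evals = len(train_x)
for gen in range(9):
    # Evolve on the model only: mutate each individual, keep the better predicted one.
    pop = [min(p, p + random.gauss(0, 0.1), key=model.predict) for p in pop]
    if gen % 3 == 2:
        best = min(pop, key=model.predict)   # every few generations, the current best...
        model.add(best, true_fitness(best))  # ...is checked on the real fitness and
        true_evals += 1                      # folded back into the training data
best = min(pop, key=model.predict)
```

Only three true evaluations are spent across nine generations of model-driven search; the rest of the selection pressure comes from the (progressively refined) approximation.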

3.
Reducing the number of evaluations of expensive fitness functions is one of the main concerns in evolutionary algorithms, especially when working with instances of contemporary engineering problems. In response to this efficiency constraint, surrogate-based methods are grounded in the construction of approximate models that estimate the solutions’ fitness by modeling the relationships between solution variables and their performance. This paper proposes a methodology based on granular computing for the construction of surrogate models for evolutionary algorithms. Under the proposed method, granules are associated with representative solutions of the problem under analysis. New solutions are evaluated with the expensive (original) fitness function only if they are not already covered by an existing granule. The parameters defining granules are periodically adapted as the search goes on by a neuro-fuzzy network, which not only reduces the number of fitness function evaluations but also provides better convergence capabilities. The proposed method is evaluated on classical benchmark functions and on a recent benchmark created to test large-scale optimization models. Our results show that the proposed method considerably reduces the actual number of fitness function evaluations without significantly degrading the quality of solutions.

4.
A comprehensive survey of fitness approximation in evolutionary computation
Evolutionary algorithms (EAs) have received increasing interest in both academia and industry. One main difficulty in applying EAs to real-world applications is that they usually need a large number of fitness evaluations before a satisfactory result can be obtained. However, fitness evaluations are not always straightforward in many real-world applications: either an explicit fitness function does not exist, or evaluating the fitness is computationally very expensive. In both cases, it is necessary to estimate the fitness function by constructing an approximate model. This paper presents a comprehensive survey of research on fitness approximation in evolutionary computation. The main issues, such as approximation levels, approximate-model management schemes, and model construction techniques, are reviewed. To conclude, open questions and interesting issues in the field are discussed.

5.
Estimating the fitness value of individuals in an evolutionary algorithm in order to reduce the computational expense of actually calculating the fitness has been a classical pursuit of practitioners. One area which could benefit from progress in this endeavour is bot evolution, i.e. the evolution of non-playing characters in computer games. Because assigning a fitness value to a bot (or rather, the decision tree that controls its behaviour) requires playing the game, the process is very costly. In this work, we introduce two major contributions to speed up this process in the computer game Unreal Tournament 2004. Firstly, a method for fitness value approximation in genetic programming which is based on the idea that individuals that behave in a similar fashion will have a similar fitness. Thus, similarity of individuals is taken at the performance level, in contrast to commonly employed approaches which are either based on similarity of genotypes or, less frequently, phenotypes. The approximation performs a weighted average of the fitness values of a number of individuals, attaching a confidence level which is based on similarity estimation. The latter is the second contribution of this work, namely a method for estimating the similarity between individuals. This involves carrying out a number of tests consisting of playing a ‘static’ version of the game (with fixed inputs) with the individuals whose similarity is under evaluation and comparing the results. Because the tests involve a limited version of the game, the computational expense of the similarity estimation plus that of the fitness approximation is much lower than that of directly calculating the fitness. The success of the fitness approximation by similarity estimation method for bot evolution in UT2K4 allows us to expect similar results in environments that share the same characteristics.
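The similarity-weighted fitness approximation can be sketched as follows, under heavy simplification: bots are plain functions, the 'static tests' are fixed numeric inputs, and the similarity kernel is an invented illustrative choice rather than the paper's measure.

```python
def behaviour_signature(bot, tests):
    # Play a fixed 'static' version of the game: here a bot is just a function
    # and the tests are fixed inputs.
    return [bot(t) for t in tests]

def similarity(sig_a, sig_b):
    # Similarity from behavioural distance on the fixed tests (illustrative form).
    dist = sum((a - b) ** 2 for a, b in zip(sig_a, sig_b)) ** 0.5
    return 1.0 / (1.0 + dist)

def approximate_fitness(new_bot, evaluated, tests):
    # Weighted average of known fitness values, with a confidence level
    # taken from the best similarity found among evaluated individuals.
    sig = behaviour_signature(new_bot, tests)
    weights = [(similarity(sig, s), f) for s, f in evaluated]
    total = sum(w for w, _ in weights)
    estimate = sum(w * f for w, f in weights) / total
    confidence = max(w for w, _ in weights)
    return estimate, confidence

tests = [0.0, 0.5, 1.0]
bots = [lambda x: x, lambda x: x + 0.01, lambda x: 1 - x]
fitnesses = [1.0, 1.0, 0.2]        # assumed to come from full (costly) games
evaluated = [(behaviour_signature(b, tests), f) for b, f in zip(bots, fitnesses)]
estimate, confidence = approximate_fitness(lambda x: x + 0.02, evaluated, tests)
```

A new bot behaving almost exactly like the two high-fitness bots receives a high estimate with high confidence, without ever playing a full game.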

6.
Blood velocity estimation is required in many clinical applications. Ultrasonic imaging is often used to reach this goal. This article presents a velocity vector estimation method from ultrasonic imaging. It complements Doppler imaging, which has several limitations. New techniques such as block-matching (BM) and decorrelation-based methods have already been developed to overcome these limitations. Our method is based on spatiotemporal filtering to estimate the apparent velocity vector for each pixel of the sequence of ultrasound images. A moving object is represented by a group of pixels travelling from image to image in the sequence, leaving a trace in the spatiotemporal volume. A bank of filters was designed to estimate a local texture orientation related to the velocity of the object. The method was first developed in 2D and then extended to 3D to estimate the two components in the imaging plane. The method was applied to sequences of ultrasound images of calibrated flow in a vessel (mean velocity ). The velocity estimates obtained in 2D and 3D showed mean errors less than 5% and 12%, respectively. The results are presented as dynamic cartography and dense fields of velocity vectors. The associated velocity profiles show good agreement with the theoretical parabolic profile of laminar flow. Our approach has been compared with three other velocity estimation methods and showed good performance in comparison with them.

7.
In this paper, we propose a novel and highly robust estimator, called MDPE (Maximum Density Power Estimator). This estimator applies nonparametric density estimation and density gradient estimation techniques in parametric estimation (model fitting). MDPE optimizes an objective function that measures more than just the size of the residuals. Both the density distribution of data points in residual space and the size of the residual corresponding to the local maximum of the density distribution are considered as important characteristics in our objective function. MDPE can tolerate more than 85% outliers. Compared with several other recently proposed similar estimators, MDPE has higher robustness to outliers and less error variance. We also present a new range image segmentation algorithm, based on a modified version of MDPE (Quick-MDPE), and compare its performance to several other segmentation methods. Segmentation requires more than a simple-minded application of an estimator, no matter how good that estimator is: our segmentation algorithm overcomes several difficulties faced when applying a statistical estimator to this task.
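The idea of scoring a model fit by the density peak of its residuals, rather than by their total size, can be illustrated with a crude histogram-based stand-in for MDPE's density-power objective. The scoring formula below is invented for illustration; it only shares MDPE's two ingredients: the height of the residual-density peak and how close that peak sits to zero.

```python
def residual_mode_score(params, points, bin_width=0.5):
    # Score a candidate line y = a*x + b by the height of the density peak of
    # its residuals and by how close that peak sits to zero: a crude stand-in
    # for MDPE's density-power objective.
    a, b = params
    residuals = [y - (a * x + b) for x, y in points]
    bins = {}
    for r in residuals:
        key = round(r / bin_width)
        bins[key] = bins.get(key, 0) + 1
    peak_bin, count = max(bins.items(), key=lambda kv: kv[1])
    peak_residual = abs(peak_bin * bin_width)
    return count / (1.0 + peak_residual)   # high density at a small residual wins

# Inliers on y = 2x + 1 plus a cluster of gross outliers (33% contamination).
points = [(x, 2 * x + 1) for x in range(10)] + [(x, 40.0) for x in range(5)]
good = residual_mode_score((2.0, 1.0), points)    # the true line
bad = residual_mode_score((0.0, 40.0), points)    # the line through the outliers
```

Because the score rewards a dense residual mode near zero, the true line outscores the line fitted through the outlier cluster even under heavy contamination, which is the intuition behind MDPE's robustness.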

8.
The aim of this paper is to show how the hybridization of a multi-objective evolutionary algorithm (MOEA) and a local search method based on the use of rough set theory is a viable alternative to obtain a robust algorithm able to solve difficult constrained multi-objective optimization problems at a moderate computational cost. This paper extends a previously published MOEA [Hernández-Díaz AG, Santana-Quintero LV, Coello Coello C, Caballero R, Molina J. A new proposal for multi-objective optimization using differential evolution and rough set theory. In: 2006 genetic and evolutionary computation conference (GECCO’2006). Seattle, Washington, USA: ACM Press; July 2006], which was limited to unconstrained multi-objective optimization problems. Here, the main idea is to use this sort of hybrid approach to approximate the Pareto front of a constrained multi-objective optimization problem while performing a relatively low number of fitness function evaluations. Since in real-world problems the cost of evaluating the objective functions is the most significant, our underlying assumption is that, by aiming to minimize the number of such evaluations, our MOEA can be considered efficient. As in its previous version, our hybrid approach operates in two stages: in the first one, a multi-objective version of differential evolution is used to generate an initial approximation of the Pareto front. Then, in the second stage, rough set theory is used to improve the spread and quality of this initial approximation. To assess the performance of our proposed approach, we adopt, on the one hand, a set of standard bi-objective constrained test problems and, on the other hand, a large real-world problem with eight objective functions and 160 decision variables. The first set of problems are solved performing 10,000 fitness function evaluations, which is a competitive value compared to the number of evaluations previously reported in the specialized literature for such problems. 
The real-world problem is solved performing 250,000 fitness function evaluations, mainly because of its high dimensionality. Our results are compared with those generated by NSGA-II, a MOEA representative of the state of the art in the area.

9.
In evolutionary algorithms, one of the main issues is how to reduce the number of fitness evaluations required to obtain optimal solutions. Generally, a large number of evaluations is needed to find optimal solutions, which increases computational time, and an expensive cost may have to be paid for each fitness evaluation as well. Differential evolution (DE), which is widely used in many applications due to its simplicity and good performance, cannot escape this problem either. To address it, fitness approximation models have been proposed that replace evaluation with the real fitness function. In fitness approximation, the ability to produce accurate estimates with a compact structure is needed for good performance. In this paper we therefore propose an efficient differential evolution using a fitness estimator. We choose k-nearest neighbor (kNN) as the fitness estimator because it needs no training period or complex computation. However, too many training samples in the estimator may cause the computational complexity to become exponentially high. Accordingly, two schemes concerning accuracy and efficiency are proposed to improve the estimator. Our proposed algorithm is tested on various benchmark functions and shown to find good optimal solutions with fewer fitness evaluations and a more compact size, compared with DE and DE-kNN.
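A kNN fitness estimator of the kind described — predicting a candidate's fitness as the average fitness of its k nearest previously evaluated neighbours, with no training phase — might look like this minimal sketch (all names are illustrative):

```python
import random

def real_fitness(x):
    # Stand-in for the expensive fitness function.
    return sum(xi ** 2 for xi in x)

def knn_estimate(x, samples, k=3):
    # Predict fitness as the average over the k nearest evaluated points;
    # no training phase is needed, exactly as with kNN regression.
    nearest = sorted(samples,
                     key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], x)))[:k]
    return sum(f for _, f in nearest) / len(nearest)

random.seed(2)
samples = []
for _ in range(20):
    x = [random.uniform(-1, 1) for _ in range(2)]
    samples.append((x, real_fitness(x)))      # archive of true evaluations

estimate = knn_estimate([0.0, 0.0], samples)  # cheap estimate near the origin
```

The abstract's caveat shows up directly here: every estimate scans the whole archive, so the per-query cost grows with the number of stored samples, motivating the paper's schemes for keeping the estimator compact.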

10.
The Guided Genetic Algorithm (GGA) is a hybrid of the Genetic Algorithm and Guided Local Search, a meta-heuristic search algorithm. As the search progresses, GGA modifies both the fitness function and the fitness template of candidate solutions based on feedback from constraints. The fitness template is then used to bias crossover and mutation. The Radio Link Frequency Assignment Problem (RLFAP) is a class of problem with practical relevance to both military and civil applications. In this paper, we show how GGA can be applied to the RLFAP. We focus on an abstraction of a real-life military application that involves assigning frequencies to radio links. GGA was tested on a set of eleven benchmark problems provided by the French military. This set of problems has been studied intensively by a number of prominent groups in Europe. It covers a variety of needs in military applications, including the satisfaction of constraints, finding optimal solutions that satisfy all the constraints, and optimization of some objective functions whenever no solution exists (partial constraint satisfaction). Not only do these benchmark problems vary in nature, they are also reasonably large for military applications (up to 916 variables and up to 5,548 constraints). This makes them a serious challenge to the generality, reliability, and efficiency of algorithms. We show in this paper that GGA is capable of reliably producing excellent results on the whole set of benchmark problems.

11.
Prediction and interpolation of a process is investigated for known covariance functions of measurement error and perturbations belonging to a set of nonnegative-definite functions. Guaranteed estimation is studied, assuming that the guaranteed estimate is the best estimate of the parameters of the useful signal in the sense of the minimum of the mean-square error under the worst behavior of measurement errors and perturbations with covariance functions belonging to the set. Depending on the type of constraints, i.e., characteristics of the set, different approaches and methods are applied to guaranteed estimation. Papers concerned with this topic are reviewed. Prediction and interpolation are analytically investigated for the case in which covariance functions are bounded by the constraints imposed on their variance matrices. For the continuous case, the weight function and its equations are derived. Prediction and interpolation accuracies are evaluated and compared with the least-squares filters.

12.
Some theoretical models have been proposed in the literature to predict the dynamics of real-coded evolutionary algorithms. These models are often applied to very simplified algorithms or simple real-coded functions, or they make it difficult to obtain quantitative measures of algorithm performance. To reduce these simplifications and obtain a more useful model, this paper proposes a model that describes the behavior of a slightly simplified version of the popular real-coded CHC on multi-peaked landscape functions. Our approach is based on predicting the shape of the search pattern by modeling the dynamics of clusters formed by individuals of the population. This is performed in terms of dynamical probability distributions, as a basis for estimating the algorithm's averaged behavior. Numerical experiments show that, within reasonable time, it is possible to achieve accurate quantitative predictions on functions of up to 5 dimensions for performance measures such as the average fitness, the best fitness reached, and the number of fitness function evaluations.

13.
Applied Soft Computing, 2003, 2(3): 156-173
Evolutionary algorithms (EAs) are a popular and robust strategy for optimization problems. However, these algorithms may require huge computational power when solving real problems. This paper introduces a “fast evolutionary algorithm” (FEA) that does not evaluate all new individuals and thus operates faster. A fitness and an associated reliability value are assigned to each new individual, which is evaluated with the true fitness function only if the reliability value is below a threshold. Moreover, applying random evaluation and error-compensation strategies to the FEA further enhances its performance. Simulation results show that, for six optimization functions, an average reduction of 40% in the number of evaluations was observed while obtaining solutions similar to those found using a traditional evolutionary algorithm. For these same functions, at completion, the algorithm also finds a 4% better fitness value on average for the same number of evaluations. For an image compression system, the algorithm found on average 3% (12%) better fitness values, or compression ratios, using only 58% (65%) of the number of evaluations needed by an EA in lossless (lossy) compression mode.
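The reliability mechanism — children inherit an estimated fitness and a decayed reliability from their parents, and only pay for a true evaluation when reliability falls below a threshold — can be sketched as follows. The inheritance and decay formulas here are invented for illustration, not taken from the paper.

```python
import random

def true_fitness(x):
    # Stand-in for an expensive fitness function.
    return (x - 2.0) ** 2

def inherit(parents):
    # Child inherits the averaged fitness of its parents and a decayed
    # reliability; both formulas are illustrative assumptions.
    fit = sum(f for f, _ in parents) / len(parents)
    rel = 0.5 * sum(r for _, r in parents) / len(parents)
    return fit, rel

random.seed(3)
THRESHOLD = 0.4
pop, true_evals = [], 0
for _ in range(6):
    x = random.uniform(0, 4)
    pop.append((x, true_fitness(x), 1.0))   # truly evaluated: reliability 1.0
    true_evals += 1
for _ in range(20):
    (xa, fa, ra), (xb, fb, rb) = random.sample(pop, 2)
    child = (xa + xb) / 2
    fit, rel = inherit([(fa, ra), (fb, rb)])
    if rel < THRESHOLD:                     # too unreliable: pay for a true evaluation
        fit, rel = true_fitness(child), 1.0
        true_evals += 1
    pop.append((child, fit, rel))
```

First-generation children of fully evaluated parents keep enough reliability to skip the true evaluation; only after the estimate has been inherited a second time does the reliability drop below the threshold and force a real evaluation, which is the source of the evaluation savings the abstract reports.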

14.
Epochal dynamics, in which long periods of stasis in an evolving population are punctuated by a sudden burst of change, is a common behavior in both natural and artificial evolutionary processes. We analyze the population dynamics for a class of fitness functions that exhibit epochal behavior using a mathematical framework developed recently, which incorporates techniques from the fields of mathematical population genetics, molecular evolution theory, and statistical mechanics. Our analysis predicts the total number of fitness function evaluations to reach the global optimum as a function of mutation rate, population size, and the parameters specifying the fitness function. This allows us to determine the optimal evolutionary parameter settings for this class of fitness functions. We identify a generalized error threshold that smoothly bounds the two-dimensional regime of mutation rates and population sizes for which epochal evolutionary search operates most efficiently. Specifically, we analyze the dynamics of epoch destabilization under finite-population sampling fluctuations and show how the evolutionary parameters effectively introduce a coarse graining of the fitness function. More generally, we find that the optimal parameter settings for epochal evolutionary search correspond to behavioral regimes in which the consecutive epochs are marginally stable against the sampling fluctuations. Our results suggest that in order to achieve optimal search, one should set evolutionary parameters such that the coarse graining of the fitness function induced by the sampling fluctuations is just large enough to hide local optima.

15.
Adaptive suboptimal tracking for a first-order discrete object under Lipschitz uncertainty and bounded external perturbations is studied. The new nonlinear control law due to L.L. Xie and L. Guo, based on nonparametric estimation of the stationary uncertainty, is applied to control the object with a known linear part. This control law admits uncertainty with the Lipschitz constant L < 3/2 + √2. The upper estimate for the quality criterion, which is equal to the upper limit of the modulus of deviation of the output of the object from a given signal, is derived as a function of the Lipschitz constant and the norm of the external perturbation. Unknown parameters in the closed loop are estimated by the method of recurrent objective inequalities. The adaptive control is suboptimal due to the use of cone estimates for the vector of unknown parameters.

16.
Quality evaluations in optimization processes are frequently noisy. In particular, evolutionary algorithms have been shown to cope with such stochastic variations better than other optimization algorithms. So far, mostly additive noise models have been assumed for the analysis. However, we argue in this paper that this restriction must be relaxed for a large class of applied optimization problems. We suggest systematic noise as an alternative scenario, where the noise term is added to the objective parameters or to environmental parameters inside the fitness function. We thoroughly analyze the sphere function with systematic noise for the evolution strategy with global intermediate recombination. The progress rate formula and a measure for the efficiency of the evolutionary progress lead to a recommended ratio between and . Furthermore, analysis of the dynamics identifies limited regions of convergence dependent on the normalized noise strength and the normalized mutation strength. A residual localization error R can be quantified, and a second to ratio is derived by minimizing R.

17.
This study investigates the applicability of empirical and radiative transfer models to estimate water content at leaf and landscape level. The main goal is to evaluate and compare the accuracy of these two approaches for estimating leaf water content by means of laboratory reflectance/transmittance measurements and for mapping leaf and canopy water content by using airborne Multispectral Infrared and Visible Imaging Spectrometer (MIVIS) data acquired over intensive poplar plantations (Ticino, Italy). At leaf level, we tested the performance of different spectral indices to estimate leaf equivalent water thickness (EWT) and leaf gravimetric water content (GWC) by using inverse ordinary least squares (OLS) regression and reduced major axis (RMA) regression. The analysis showed that leaf reflectance is related to changes in EWT rather than GWC, with best results obtained by using RMA regression exploiting the spectral index related to the continuum-removed area of the 1200 nm water absorption feature, with an explained variance of 61% and a prediction error of 6.6%. Moreover, we inverted the PROSPECT leaf radiative transfer model to estimate leaf EWT and GWC and compared the results with those obtained by means of empirical models. The inversion of this model showed that leaf EWT can be successfully estimated with no prior information, with mean relative errors of 14% and a determination coefficient of 0.65. Inversion of the PROSPECT model showed some difficulties in the simultaneous estimation of leaf EWT and dry matter content, which led to large errors in GWC estimation. At landscape level with MIVIS data, we tested the performance of different spectral indices to estimate canopy water per unit ground area (EWTcanopy). We found a relative error of 20% using a continuum-removed spectral index around 1200 nm.
Furthermore, we used a model simulation to evaluate the possibility of applying empirical models based on appositely developed MIVIS double ratios to estimate mean leaf EWT at landscape level (). It is shown that combined indices (double ratios) yielded significant results in estimating leaf EWT at landscape level by using MIVIS data (with errors around 2.6%), indicating their potential in reducing the effects of LAI on the recorded signal. The accuracy of the empirical estimation of EWTcanopy and was finally compared with that obtained from inversion of the PROSPECT + SAILH canopy reflectance model to evaluate the potential of both methods for practical applications. A relative error of 27% was found for EWTcanopy and an overestimation of leaf with relative errors around 19%. Results arising from this remote sensing application support the robustness of hyperspectral regression indices for estimating water content at both leaf and landscape level, with lower relative errors compared to those obtained from inversion of leaf and 1D canopy radiative transfer models.

18.

Context

The effort estimates of software development work are on average too low. A possible reason for this tendency is that software developers, perhaps unconsciously, assume ideal conditions when they estimate the most likely use of effort. In this article, we propose and evaluate a two-step estimation process that may induce more awareness of the difference between idealistic and realistic conditions and as a consequence more realistic effort estimates. The proposed process differs from traditional judgment-based estimation processes in that it starts with an effort estimation that assumes ideal conditions before the most likely use of effort is estimated.

Objective

The objective of the paper is to examine the potential of the proposed method to induce more realism in the judgment-based estimates of work effort.

Method

Three experiments with software professionals as participants were completed. In all three experiments, one group of participants followed the proposed estimation process and another group followed the traditional one. In one of the experiments there was an additional group which started with a probabilistically defined estimate of minimum effort before estimating the most likely effort.

Results

We found, in all three experiments, that estimation of most likely effort seems to assume rather idealistic assumptions and that the use of the proposed process seems to yield more realistic effort estimates. In contrast, starting with an estimate of the minimum effort, rather than an estimate based on ideal conditions, did not have the same positive effect on the subsequent estimate of the most likely effort.

Conclusion

The empirical results from our studies, together with similar results from other domains, suggest that the proposed estimation process is promising for improving the realism of software development effort estimates.

19.
This paper presents a study of building blocks (BBs) in the context of genetic algorithms (GAs). In the GA literature, BBs are common structures of high-quality solutions. The aim is to identify and maintain the BBs while performing solution recombination. To identify the BBs, we construct a simultaneity matrix from a set of -bit solutions. The matrix element in row i and column j, denoted by m ij, is the degree of dependency between bit i and bit j. We then search for a partition of the bit positions based on the matrix. The main idea of partitioning is to put i and j in the same partition subset when m ij is significantly high. The partition represents the bit positions of the BBs and is exploited in solution recombination so that the bits governed by the same partition subset are passed together. It can be shown that, by exploiting the simultaneity matrix, additively decomposable functions can be solved with a polynomial relationship between the number of function evaluations required to reach the optimum and the problem size. A comparison to the Bayesian optimization algorithm (BOA) is made. Empirical results show that the BOA uses fewer function evaluations than our algorithm; however, computing the matrix is ten times faster than constructing the Bayesian network.
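The simultaneity-matrix idea can be sketched as follows: a toy dependency measure counts how often two bit positions agree across good solutions, and a greedy pass groups strongly dependent positions into partition subsets. Both the measure and the partitioning rule are simplified stand-ins for the paper's definitions.

```python
def simultaneity_matrix(solutions):
    # m[i][j] is the fraction of solutions in which bits i and j agree:
    # a toy stand-in for the dependency measure described in the paper.
    n = len(solutions[0])
    m = [[0.0] * n for _ in range(n)]
    for s in solutions:
        for i in range(n):
            for j in range(n):
                if i != j and s[i] == s[j]:
                    m[i][j] += 1.0 / len(solutions)
    return m

def greedy_partition(m, threshold=0.9):
    # Put bit positions whose pairwise dependency is high into the same subset.
    n = len(m)
    unassigned, subsets = set(range(n)), []
    while unassigned:
        i = min(unassigned)
        group = {i} | {j for j in unassigned if j != i and m[i][j] >= threshold}
        subsets.append(sorted(group))
        unassigned -= group
    return subsets

# Bits 0-1 always agree, bits 2-3 always agree, and the pairs are independent.
solutions = [[0, 0, 1, 1], [1, 1, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1]]
matrix = simultaneity_matrix(solutions)
building_blocks = greedy_partition(matrix)
```

Crossover would then treat each recovered subset as an indivisible unit, passing the bits of a building block together rather than splitting them.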

20.
Applied Soft Computing, 2008, 8(2): 1085-1092
In this paper the design of maximally flat linear-phase finite impulse response (FIR) filters is considered. The problem with using the genetic algorithm (GA) on this kind of problem is the high cost of evaluating the fitness of each string in the population. Designing optimum FIR filters under given constraints and required criteria involves an exhaustive number of evaluations of filter coefficients, and repeated evaluations of objective functions that implicitly require constructing the filter transfer functions. This problem is handled here, with acceptable results, using a Markov random field (MRF) approach. We establish a new theoretical approach and apply it to the design of FIR filters. This approach allows us to construct an explicit probabilistic model of the GA fitness function, forming what is called the “Ising GA”, which is based on sampling from a Gibbs distribution. The Ising GA avoids the exhaustive design of the suggested FIR filters (solutions) for every string of coefficients in every generation, replacing it with a probabilistic model of fitness rebuilt every gap (period) of iterations. Experiments with the Ising GA's probabilistic fitness models are less costly than those with the standard GA, while producing high-quality solutions.

