Similar Literature
20 similar records retrieved (search time: 0 ms)
1.
This paper deals with variable-fidelity optimization, a technique in which the advantages of high- and low-fidelity models are used in an optimization process. The high-fidelity model provides solution accuracy while the low-fidelity model reduces the computational cost. An outline of the theory of the Approximation Management Framework (AMF) proposed by Alexandrov (1996) and Lewis (1996) is given. The AMF algorithm provides the mathematical robustness required for variable-fidelity optimization. This paper introduces a subproblem formulation adapted to a modular implementation of the AMF. Also, we propose two types of second-order corrections (additive and multiplicative) which serve to build the approximation of the high-fidelity model based on the low-fidelity one. Results for a transonic airfoil shape optimization problem are presented. Application of a variable-fidelity algorithm leads to threefold savings in high-fidelity solver calls, compared to a direct optimization using the high-fidelity solver only. However, premature stops of the algorithm are observed in some cases. A study of the influence of the numerical noise of solvers on this robustness deficiency is presented. The study shows that numerical noise artificially introduced into an analytical function causes premature stops of the AMF. Numerical noise observed with our CFD solvers is therefore strongly suspected to be the cause of the robustness problems encountered.
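A minimal sketch of the two correction types may help fix ideas. The fragment below implements first-order additive and multiplicative corrections (the paper's corrections are second-order, which add a Hessian-difference term); the quadratic `f_hi`/`f_lo` pair is an invented stand-in for the CFD solvers, and all names are illustrative:

```python
import numpy as np

def additive_correction(f_hi, f_lo, g_hi, g_lo, x0):
    """First-order additive correction: matches the high-fidelity
    value and gradient at the trust-region centre x0."""
    d0 = f_hi(x0) - f_lo(x0)
    gd = g_hi(x0) - g_lo(x0)
    return lambda x: f_lo(x) + d0 + gd @ (x - x0)

def multiplicative_correction(f_hi, f_lo, g_hi, g_lo, x0):
    """First-order multiplicative correction via the linearised
    ratio beta(x) = f_hi(x) / f_lo(x) around x0."""
    b0 = f_hi(x0) / f_lo(x0)
    gb = (g_hi(x0) * f_lo(x0) - f_hi(x0) * g_lo(x0)) / f_lo(x0) ** 2
    return lambda x: (b0 + gb @ (x - x0)) * f_lo(x)

# invented stand-ins for the solvers (hi = "truth", lo = cheap, biased)
f_hi = lambda x: np.sum(x ** 2) + 1.0
f_lo = lambda x: 0.8 * np.sum(x ** 2) + 0.5
g_hi = lambda x: 2.0 * x
g_lo = lambda x: 1.6 * x

x0 = np.array([1.0, -1.0])
add = additive_correction(f_hi, f_lo, g_hi, g_lo, x0)
mul = multiplicative_correction(f_hi, f_lo, g_hi, g_lo, x0)
```

Both corrected models match the high-fidelity value and gradient at the trust-region centre, which is the consistency the AMF convergence theory requires.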

2.
In simulation-based engineering design optimization, relying solely on high-fidelity, high-cost analysis models leads to a prohibitive computational burden, while relying on low-fidelity, low-cost models yields optimization results of low credibility that cannot meet practical engineering requirements. To balance accuracy against cost effectively, a sequential hierarchical Kriging model is built to fuse high- and low-fidelity data: a large number of cheap, low-fidelity sample points capture the trend of the high-fidelity analysis model, and a small number of expensive, high-fidelity sample points correct the low-fidelity model, so that the optimization objective can be predicted with high accuracy. To prevent the error of the hierarchical Kriging model from degrading the optimization result, the model is coupled with a genetic algorithm: following the 6σ design criterion, the prediction interval of each generation's best solution is computed, and a current best solution with a large prediction interval is taken as a new high-fidelity sample point. The hierarchical Kriging model is updated sequentially during the optimization, improving its prediction accuracy near the optimum and thereby ensuring the reliability of the design result. The proposed method is applied to the structural design optimization of a micro air vehicle fuselage to verify its effectiveness and advantages. Mesh models with different element counts serve as the low- and high-fidelity analysis models; optimal Latin hypercube design is used to select 60 low-fidelity and 20 high-fidelity sample points for the initial hierarchical Kriging model, and the results of the proposed method are compared with those of direct optimization using the high-fidelity simulation model alone. The comparison shows that the proposed method makes effective use of the information at both high- and low-fidelity sample points to build an accurate hierarchical Kriging model, and obtains a near-optimal solution at only a small computational cost, markedly improving design efficiency and providing a reference for similar structural design optimization problems.

3.
Response surface methods use least-squares regression analysis to fit low-order polynomials to a set of experimental data. It is becoming increasingly popular to apply response surface approximations for the purpose of engineering design optimization based on computer simulations. However, the substantial expense involved in obtaining enough data to build quadratic response approximations seriously limits the practical size of problems. Multifidelity techniques, which combine cheap low-fidelity analyses with more accurate but expensive high-fidelity solutions, offer means by which this prohibitive computational cost can be reduced. Two optimum design problems are considered, both pertaining to the fluid flow in diffusers. In both cases, the high-fidelity analyses consist of solutions to the full Navier-Stokes equations, whereas the low-fidelity analyses are either simple empirical formulas or flow solutions to the Navier-Stokes equations obtained on coarse computational meshes. The multifidelity strategy includes the construction of two separate response surfaces: a quadratic approximation based on the low-fidelity data, and a linear correction response surface that approximates the ratio of high- and low-fidelity function evaluations. The paper demonstrates that this approach may yield major computational savings.
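The two-surface strategy is simple enough to sketch. In the hedged one-dimensional toy below (the actual studies use Navier-Stokes diffuser analyses; the functions here are invented for illustration), a quadratic surface is fitted to many cheap low-fidelity samples and a linear correction surface to the ratio of a few paired high/low-fidelity evaluations:

```python
import numpy as np

# invented 1-D stand-ins for the diffuser analyses
f_hi = lambda x: (x - 0.3) ** 2 + 0.1 * x + 1.0   # "full Navier-Stokes"
f_lo = lambda x: 1.15 * (x - 0.25) ** 2 + 1.0     # "empirical formula"

# quadratic response surface from many cheap low-fidelity samples
x_lo = np.linspace(0.0, 1.0, 21)
quad = np.polynomial.Polynomial.fit(x_lo, f_lo(x_lo), deg=2)

# linear correction surface from a few paired hi/lo evaluations
x_hi = np.array([0.0, 0.5, 1.0])
corr = np.polynomial.Polynomial.fit(x_hi, f_hi(x_hi) / f_lo(x_hi), deg=1)

# corrected surrogate: correction ratio times low-fidelity surface
surrogate = lambda x: corr(x) * quad(x)
```

Only three high-fidelity evaluations enter the surrogate; the quadratic trend comes entirely from the cheap model, which is where the computational savings originate.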

4.
Jin Yi, Mi Xiao, Junnan Xu, Lin Zhang. Engineering Optimization, 2017, 49(1): 161-180
Engineering design often involves different types of simulation, which results in expensive computational costs. Variable-fidelity approximation-based design optimization approaches enable effective simulation and efficient optimization of the design space using approximation models with different levels of fidelity, and have been widely used in different fields. As the foundation of variable-fidelity approximation models, the selection of sample points, known as nested designs, is essential. In this article a novel nested maximin Latin hypercube design is constructed based on successive local enumeration and a modified novel global harmony search algorithm. In the proposed nested designs, successive local enumeration is employed to select sample points for the low-fidelity model, whereas the modified novel global harmony search algorithm is employed to select sample points for the high-fidelity model. A comparative study with multiple criteria and an engineering application are employed to verify the efficiency of the proposed nested designs approach.
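The nesting idea, stripped of the successive-local-enumeration and harmony-search machinery, can be illustrated with a plain Latin hypercube for the low-fidelity set and a greedy maximin subset of it for the high-fidelity set. This is only a simplified stand-in for the article's construction; all function names are ours:

```python
import numpy as np

def latin_hypercube(n, dim, rng):
    """Plain LHS on [0, 1)^dim: one point per stratum in each coordinate."""
    strata = np.column_stack([rng.permutation(n) for _ in range(dim)])
    return (strata + rng.random((n, dim))) / n

def greedy_maximin_subset(points, k, rng):
    """Greedily pick k of the given points, maximising the minimum
    distance to the already chosen ones; the result is nested in `points`."""
    chosen = [int(rng.integers(len(points)))]
    while len(chosen) < k:
        dists = np.linalg.norm(points[:, None, :] - points[chosen][None, :, :], axis=2)
        d = dists.min(axis=1)
        d[chosen] = -1.0          # never re-pick an already chosen point
        chosen.append(int(np.argmax(d)))
    return np.array(sorted(chosen))

rng = np.random.default_rng(1)
lo = latin_hypercube(60, 2, rng)              # low-fidelity design (60 points)
hi = lo[greedy_maximin_subset(lo, 12, rng)]   # nested high-fidelity design (12 points)
```

Because the high-fidelity points are drawn from the low-fidelity set, every expensive evaluation can reuse its cheap counterpart, which is the point of a nested design.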

5.
The global variable-fidelity modelling (GVFM) method presented in this article extends the original variable-complexity modelling (VCM) algorithm, which uses a low-fidelity model and a scaling function to approximate a high-fidelity function for efficiently solving design-optimization problems. GVFM uses design of experiments to sample values of the high- and low-fidelity functions to explore the global design space and to initialize a scaling function using a radial basis function (RBF) network. This approach makes it possible to remove high-fidelity gradient evaluation from the process, which makes GVFM more efficient than VCM for high-dimensional design problems. The proposed algorithm converges with 65% fewer high-fidelity function calls than VCM for a one-dimensional problem and approximately 80% fewer for a two-dimensional numerical problem. The GVFM method is applied to the design optimization of transonic and subsonic aerofoils. Both aerofoil design problems show design improvement with a reasonable number of high- and low-fidelity function evaluations.

6.
This work presents a new bi-fidelity model reduction approach to the inverse problem under the framework of Bayesian inference. A low-rank approximation is introduced to the solution of the corresponding forward problem and admits a variable-separation form in terms of stochastic basis functions and physical basis functions. The calculation of the stochastic basis functions is computationally predominant in the low-rank expression. To significantly improve the efficiency of constructing the low-rank approximation, we propose a bi-fidelity model reduction based on a novel variable-separation method, where a low-fidelity model is used to compute the stochastic basis functions and a high-fidelity model is used to compute the physical basis functions. The low-fidelity model has lower accuracy but is cheaper to evaluate than the high-fidelity model; it accelerates the derivation of the recursive formulation for the stochastic basis functions. The high-fidelity model is computed in parallel for a few samples scattered in the stochastic space when we construct the high-fidelity physical basis functions. The required number of forward model simulations in constructing the basis functions is very limited. The bi-fidelity model can thus be constructed efficiently while retaining good accuracy. In the proposed approach, both the stochastic basis functions and the physical basis functions are calculated using the model information. This implies that a few basis functions may accurately represent the model solution in high-dimensional stochastic spaces. The bi-fidelity model reduction is applied to Bayesian inverse problems to accelerate posterior exploration. A few numerical examples in time-fractional diffusion models are carried out to identify the smooth field and the channel-structured field in porous media in the framework of Bayesian inverse problems.

7.
Computer simulation models are ubiquitous in modern engineering design. In many cases, they are the only way to evaluate a given design with sufficient fidelity. Unfortunately, an added computational expense is associated with higher fidelity models. Moreover, the systems being considered are often highly nonlinear and may feature a large number of designable parameters. Therefore, it may be impractical to solve the design problem with conventional optimization algorithms. A promising approach to alleviate these difficulties is surrogate-based optimization (SBO). Among proven SBO techniques, the methods utilizing surrogates constructed from corrected physics-based low-fidelity models are, in many cases, the most efficient. This article reviews a particular technique of this type, namely, shape-preserving response prediction (SPRP), which works on the level of the model responses to correct the underlying low-fidelity models. The formulation and limitations of SPRP are discussed. Applications to several engineering design problems are provided.

8.
In this article, a simple yet efficient and reliable technique for fully automated multi-objective design optimization of antenna structures using sequential domain patching (SDP) is discussed. The optimization procedure according to SDP is a two-step process: (i) obtaining the initial set of Pareto-optimal designs representing the best possible trade-offs between considered conflicting objectives, and (ii) Pareto set refinement for yielding the optimal designs at the high-fidelity electromagnetic (EM) simulation model level. For the sake of computational efficiency, the first step is realized at the level of a low-fidelity (coarse-discretization) EM model by sequential construction and relocation of small design space segments (patches) in order to create a path connecting the extreme Pareto front designs obtained beforehand. The second stage involves response correction techniques and local response surface approximation models constructed by reusing EM simulation data acquired in the first step. A major contribution of this work is an automated procedure for determining the patch dimensions. It allows for appropriate selection of the number of patches for each geometry variable so as to ensure reliability of the optimization process while maintaining its low cost. The importance of this procedure is demonstrated by comparing it with uniform patch dimensions.

9.
This paper deals with the response determination of a visco-elastic Timoshenko beam under static loading conditions, taking into account fractional calculus. In particular, the fractional derivative terms arise from representing the constitutive behavior of the visco-elastic material. Taking advantage of the Mellin transform method recently developed for the solution of fractional differential equations, the fractional Timoshenko beam problem is assessed in the time domain without invoking Laplace transforms as usual. The solution provided by the Mellin transform procedure is compared with that of the classical central difference scheme, based on the Grünwald–Letnikov approximation of the fractional derivative. Moreover, the Timoshenko beam response is generally evaluated by solving a pair of coupled differential equations. In this paper, by expressing the equation of the elastic curve through a single relation, a more general procedure is developed that allows the determination of the beam response for any load condition and type of constraint.
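For reference, the Grünwald–Letnikov approximation used by the central-difference benchmark is easy to state: D^α f(t) ≈ h^(−α) Σ_j w_j f(t − jh), with weights w_0 = 1 and w_j = w_{j−1}(1 − (α+1)/j). A hedged sketch, checked against the known half-derivative of f(t) = t:

```python
import numpy as np
from math import gamma

def gl_fractional_derivative(f, t, alpha, h):
    """Grünwald-Letnikov approximation of the order-alpha derivative
    of f at time t (first-order accurate in the step h)."""
    n = int(round(t / h))
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):               # recursive GL weights
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w @ f(t - h * np.arange(n + 1)) / h ** alpha

# check against the exact half-derivative of f(t) = t:
# D^{1/2} t = Gamma(2) / Gamma(3/2) * t^{1/2}
alpha, t = 0.5, 1.0
approx = gl_fractional_derivative(lambda s: s, t, alpha, h=1e-3)
exact = gamma(2.0) / gamma(1.5) * t ** 0.5
```

The scheme is only first-order accurate, which is one motivation for the transform-based alternatives discussed in the paper.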

10.
Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of the high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient of an aircraft are provided to demonstrate the approximation capability of the proposed approach, compared against three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.

11.
In this paper we propose a general methodology to obtain lumped parameter models for systems governed by parabolic partial differential equations, which we call Galerkin lumped parameter methods. The idea consists of decomposing the computational domain into several subdomains connected through so-called ports. Then a time-independent adapted reduced basis is introduced by numerically solving several elliptic problems in each subdomain. The proposed lumped parameter model is the Galerkin approximation of the original problem in the space spanned by this basis. The relationship of these methods with classical lumped parameter models is analyzed. Numerical results are shown as well as a comparison of the solution obtained with the lumped model and the 'exact' one computed by standard finite element procedures.

12.
In applications of the homogenization method for optimal structural topology design the solution is obtained by solving the optimality conditions directly. This reduces the computational burden by taking advantage of closed-form solutions, but it restricts the optimization model to having only one constraint. The article develops a generalized class of convex approximation methods for mathematical programming that can be used for the optimal topology homogenization problem with multiple constraints included in the model, without substantial reduction in computational efficiency. A richer class of design models can then be addressed using the homogenization method. Design examples illustrate the performance of the proposed solution strategy.

13.
A distributed evolutionary algorithm is presented that is based on a hierarchy of (fitness or cost function) evaluation passes within each deme and is efficient in solving engineering optimization problems. Starting with non-problem-specific evaluations (using surrogate models or metamodels, trained on previously evaluated individuals) and ending up with high-fidelity problem-specific evaluations, intermediate passes rely on other available lower-fidelity problem-specific evaluations with lower CPU cost per evaluation. The sequential use of evaluation models or metamodels, of different computational cost and modelling accuracy, by screening the generation members to get rid of non-promising individuals, leads to reduced overall computational cost. The distributed scheme is based on loosely coupled demes that exchange regularly their best-so-far individuals. Emphasis is put on the optimal way of coupling distributed and hierarchical search methods. The proposed method is tested on mathematical and compressor cascade airfoil design problems.

14.
Structural uncertainty quantification measures how parameter uncertainty propagates into uncertainty in the structural response. The traditional Monte Carlo method requires a large number of numerical evaluations and is too time-consuming to apply to the uncertainty quantification of large, complex structures. Surrogate-model methods build an approximate mathematical model from a small set of training samples and use it in place of the original physical model to improve computational efficiency. To address the trade-off between expensive high-fidelity samples and inaccurate low-fidelity samples, this paper proposes a generalized co-Gaussian process model that integrates high- and low-fidelity training samples. Within this framework, analytical expressions for the mean and variance of the structural response are derived, yielding a closed-form quantification of structural uncertainty. Three spatial-structure examples are used to verify the accuracy of the proposed analytical method, and the results are compared with those of the traditional Monte Carlo method, the co-Gaussian process model and the Gaussian process model. The comparison shows that the proposed method is superior in both computational accuracy and efficiency.

15.
We present a methodical procedure for topology optimization under uncertainty with multiresolution finite element (FE) models. We use our framework in a bifidelity setting where a coarse and a fine mesh corresponding to low- and high-resolution models are available. The inexpensive low-resolution model is used to explore the parameter space and approximate the parameterized high-resolution model and its sensitivity, where parameters are considered in both structural load and stiffness. We provide error bounds for bifidelity FE approximations and their sensitivities and conduct numerical studies to verify these theoretical estimates. We demonstrate our approach on benchmark compliance minimization problems, where we show significant reduction in computational cost for expensive problems such as topology optimization under manufacturing variability, reliability-based topology optimization, and three-dimensional topology optimization while generating almost identical designs to those obtained with a single-resolution mesh. We also compute the parametric von Mises stress for the generated designs via our bifidelity FE approximation and compare them with standard Monte Carlo simulations. The implementation of our algorithm, which extends the well-known 88-line topology optimization code in MATLAB, is provided.

16.
In this paper, we implement the method of proper orthogonal decomposition (POD) to generate a reduced order model (ROM) of an optimization-based mesh movement scheme. In this study it is shown that POD can be used effectively to generate an ROM that accurately reproduces the full order mesh movement algorithm, with a decrease in computational time of over 99%. We further introduce a novel training procedure whereby the POD models are generated in a fully automated fashion. The technology is applicable to any mesh movement method and enables potential reductions of up to four orders of magnitude in mesh movement related costs. The proposed model can be implemented, without having to pre-train the POD model, in any fluid–structure interaction code with an existing mesh movement scheme.
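Snapshot POD itself is a few lines of linear algebra: collect solution snapshots as columns, take an SVD, and keep the leading left singular vectors as the reduced basis. The sketch below uses a synthetic snapshot family in place of real mesh-displacement fields; everything beyond the SVD recipe is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic stand-in for mesh-displacement snapshots: one column per
# parameter value, each a smooth field sampled on 200 nodes
x = np.linspace(0.0, 1.0, 200)
snapshots = np.column_stack([np.sin(np.pi * p * x) + 0.3 * p * x ** 2
                             for p in rng.random(40)])

# POD: left singular vectors of the mean-centred snapshot matrix
mean = snapshots.mean(axis=1)
U, s, _ = np.linalg.svd(snapshots - mean[:, None], full_matrices=False)

# truncate once the captured "energy" exceeds 99.99%
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.9999)) + 1
basis = U[:, :r]                      # reduced basis of r modes

def rom_project(field):
    """Project a full-order field onto the POD basis and reconstruct it."""
    return mean + basis @ (basis.T @ (field - mean))
```

The speed-up reported in the paper comes from replacing the full mesh movement solve with operations in this r-dimensional subspace, with r typically far smaller than the number of mesh nodes.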

17.
Zhang Wei, Han Xu, Liu Jie, Yang Gang. Engineering Mechanics, 2013, 30(3): 58-65
A validation method for numerical models of explosions in soil based on orthogonal experimental design is proposed. Using data from corresponding physical experiments, the method recasts the model validation problem as an optimization problem: determining the best combination of the factors, and their levels, that influence the numerical results. Orthogonal experimental design is adopted as the optimization tool, so that the best factor-level combination is found with relatively few trials. Through range analysis, the influence of each factor and its levels on the simulated results of an explosion in soil is studied under different validation criteria. The results show that when minimum computation time or maximum computational accuracy alone is taken as the validation criterion, mesh size is the dominant factor affecting the numerical results, whereas when a combined assessment of the two is the criterion, the computational method is the dominant factor. The proposed approach offers a new way to study the validation of complex numerical models.

18.
We consider engineering design optimization problems where the objective and/or constraint functions are evaluated by means of computationally expensive blackboxes. Our practical optimization strategy consists of solving surrogate optimization problems in the search step of the mesh adaptive direct search algorithm. In this paper, we consider locally weighted regression models to build the necessary surrogates, and present three ideas for appropriate and effective use of locally weighted scatterplot smoothing (LOWESS) models for surrogate optimization. First, a method is proposed to reduce the computational cost of LOWESS models. Second, a local scaling coefficient is introduced to adapt LOWESS models to the density of neighboring points while retaining smoothness. Finally, an appropriate order error metric is used to select the optimal shape coefficient of the LOWESS model. Our surrogate-assisted optimization approach utilizes LOWESS models to both generate and rank promising candidates found in the search and poll steps. The "real" blackbox functions that govern the original optimization problem are then evaluated at these ranked candidates with an opportunistic strategy, reducing CPU time significantly. Computational results are reported for four engineering design problems with up to six variables and six constraints. The results demonstrate the effectiveness of the LOWESS models as well as the order error metric for surrogate optimization.
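A bare-bones LOWESS evaluator (tricube weights, local linear fit) conveys the idea behind these surrogates; the article's scaling coefficient, order error metric and cost reductions are not reproduced here, and the implementation below is our own sketch:

```python
import numpy as np

def lowess(x_query, x, y, frac=0.4):
    """Locally weighted linear regression: at each query point, fit a
    weighted straight line through the nearest frac*n data points,
    with tricube weights decaying to zero at the neighbourhood edge."""
    q = np.atleast_1d(np.asarray(x_query, dtype=float))
    k = max(2, int(np.ceil(frac * len(x))))
    out = np.empty(q.size)
    for i, xq in enumerate(q):
        d = np.abs(x - xq)
        idx = np.argsort(d)[:k]                         # k nearest neighbours
        w = (1.0 - (d[idx] / d[idx].max()) ** 3) ** 3   # tricube kernel
        A = np.column_stack([np.ones(k), x[idx] - xq])
        # weighted least squares: solve (A^T W A) beta = A^T W y
        beta = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y[idx]))
        out[i] = beta[0]                                # local fit value at xq
    return out

# smooth a noisy 1-D "blackbox" response
rng = np.random.default_rng(2)
xs = np.linspace(0.0, 2.0 * np.pi, 120)
ys = np.sin(xs) + 0.1 * rng.standard_normal(xs.size)
smooth = lowess(xs, xs, ys, frac=0.25)
```

In a surrogate-assisted search, such a model would be evaluated at candidate points to rank them before any expensive blackbox call is spent.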

19.
Physiological simulators which are intended for use in clinical environments face harsh expectations from medical practitioners; they must cope with significant levels of uncertainty arising from non-measurable parameters, population heterogeneity and disease heterogeneity, and their validation must provide watertight proof of their applicability and reliability in the clinical arena. This paper describes a systems engineering framework for the validation of an in silico simulation model of pulmonary physiology. We combine explicit modelling of uncertainty/variability with advanced global optimization methods to demonstrate that the model predictions never deviate from physiologically plausible values for realistic levels of parametric uncertainty. The simulation model considered here has been designed to represent a dynamic in vivo cardiopulmonary state iterating through a mass-conserving set of equations based on established physiological principles and has been developed for a direct clinical application in an intensive-care environment. The approach to uncertainty modelling is adapted from the current best practice in the field of systems and control engineering, and a range of advanced optimization methods are employed to check the robustness of the model, including sequential quadratic programming, mesh-adaptive direct search and genetic algorithms. An overview of these methods and a comparison of their reliability and computational efficiency in comparison to statistical approaches such as Monte Carlo simulation are provided. The results of our study indicate that the simulator provides robust predictions of arterial gas pressures for all realistic ranges of model parameters, and also demonstrate the general applicability of the proposed approach to model validation for physiological simulation.

20.
In this paper, the proper generalized decomposition (PGD) is used for model reduction in the solution of an inverse heat conduction problem within the Bayesian framework. Two PGD reduced order models are proposed, and the Approximation Error Model (AEM) is applied to account for the errors between the complete and the reduced models. For the first PGD model, the direct problem solution is computed considering a separate representation of each coordinate of the problem during the process of solving the inverse problem. On the other hand, the second PGD model is based on a generalized solution integrating the unknown parameter as one of the coordinates of the decomposition. For the second PGD model, the reduced solution of the direct problem is computed before the inverse problem within the parameter space provided by the prior information about the parameters, which is required to be proper. These two reduced models are evaluated in terms of accuracy and reduction of the computational time on a transient three-dimensional, two-region inverse heat transfer problem. In fact, both reduced models result in a substantial reduction of the computational time required for the solution of the inverse problem, and provide accurate estimates for the unknown parameter due to the application of the approximation error model approach.
