Similar Documents
1.
In simulation-based engineering design optimization, relying on high-accuracy, high-cost analysis models leads to a heavy computational burden, whereas relying on low-accuracy, low-cost analysis models yields optimization results of low credibility that can hardly meet practical engineering requirements. To balance the conflict between high accuracy and low cost, a sequential hierarchical Kriging model is built to fuse high- and low-fidelity data: a large number of cheap, low-fidelity sample points capture the trend of the high-fidelity analysis model, and a small number of expensive, high-fidelity sample points correct the low-fidelity model, so that the optimization objective can be predicted with high accuracy. To prevent the approximation error of the hierarchical Kriging model from biasing the optimization result, the model is coupled with a genetic algorithm: following the 6-sigma design criterion, the prediction interval of each generation's best solution is computed, and a current best solution with a large prediction interval is taken as a new high-fidelity sample point. The hierarchical Kriging model is thus updated sequentially during the optimization, improving its prediction accuracy near the optimum and ensuring the reliability of the design result. The method is applied to the design optimization of a micro air vehicle fuselage structure to verify its effectiveness and superiority. Mesh models with different numbers of elements serve as the low- and high-fidelity analysis models; an optimal Latin hypercube design selects 60 low-fidelity and 20 high-fidelity sample points to build the initial hierarchical Kriging model, and the result is compared against direct optimization with the high-fidelity simulation model. The results show that the proposed method effectively exploits the information at both high- and low-fidelity sample points to build an accurate hierarchical Kriging model, and that it finds a near-optimal solution at only a small computational cost, markedly improving design efficiency and providing a reference for similar structural design optimization problems.
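The two-level construction described in this abstract can be sketched compactly. Below is a minimal illustration, not the authors' code: a Gaussian process fitted to many cheap low-fidelity samples supplies the trend, a scale factor and a discrepancy GP are fitted from the few high-fidelity points, and a 6-sigma-style interval is read off the discrepancy model. The toy functions, sample counts and scikit-learn API choices are assumptions made for the sketch.

```python
# Two-level ("hierarchical") Kriging sketch: a GP on many cheap LF samples
# supplies the trend; a scale factor rho and a GP on the residual are
# fitted from a few expensive HF samples. Toy 1-D functions stand in for
# the finite element models of the paper.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

f_lo = lambda x: 0.5 * np.sin(8 * x) + 0.2 * x     # cheap analysis
f_hi = lambda x: np.sin(8 * x) + 0.3 * x           # expensive analysis

X_lo = np.linspace(0, 1, 60).reshape(-1, 1)        # 60 LF points
X_hi = np.linspace(0, 1, 20).reshape(-1, 1)        # 20 HF points

gp_lo = GaussianProcessRegressor(ConstantKernel() * RBF())
gp_lo.fit(X_lo, f_lo(X_lo).ravel())

m_lo = gp_lo.predict(X_hi)                         # LF trend at HF points
rho = np.linalg.lstsq(m_lo[:, None], f_hi(X_hi).ravel(), rcond=None)[0][0]
gp_d = GaussianProcessRegressor(ConstantKernel() * RBF(), alpha=1e-8)
gp_d.fit(X_hi, f_hi(X_hi).ravel() - rho * m_lo)

def predict_hk(X, k=6.0):
    """Prediction and a k-sigma interval (from the discrepancy GP only,
    for brevity); such intervals drive the selection of new HF points."""
    m_d, s_d = gp_d.predict(X, return_std=True)
    m = rho * gp_lo.predict(X) + m_d
    return m, m - k * s_d, m + k * s_d
```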

2.
Jin Yi, Mi Xiao, Junnan Xu & Lin Zhang. Engineering Optimization, 2017, 49(1): 161-180
Engineering design often involves different types of simulation, which results in expensive computational costs. Variable-fidelity approximation-based design optimization approaches can realize effective simulation and efficient optimization of the design space using approximation models with different levels of fidelity, and have been widely used in different fields. As the foundation of variable-fidelity approximation models, the selection of sample points, known as nested designs, is essential. In this article a novel nested maximin Latin hypercube design is constructed based on successive local enumeration and a modified novel global harmony search algorithm. In the proposed nested designs, successive local enumeration is employed to select sample points for the low-fidelity model, whereas the modified novel global harmony search algorithm is employed to select sample points for the high-fidelity model. A comparative study with multiple criteria and an engineering application verify the efficiency of the proposed nested designs approach.
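To show what a nested maximin design is, a greedy exchange heuristic can stand in for the successive-local-enumeration and harmony-search machinery of the article; the toy 2-D setting and scipy's qmc module below are illustrative assumptions, not the article's algorithm.

```python
# Nested design sketch: pick an n_hi-point high-fidelity subset of a
# low-fidelity Latin hypercube so that the subset's minimum pairwise
# distance (the maximin criterion) is as large as possible. A greedy
# exchange heuristic stands in for the paper's search algorithms.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import qmc

rng = np.random.default_rng(0)
X_lo = qmc.LatinHypercube(d=2, seed=0).random(n=60)    # LF design

def maximin(points):
    return pdist(points).min()

idx = list(rng.choice(60, size=20, replace=False))     # random start
improved = True
while improved:                                        # greedy exchanges
    improved = False
    for i in range(len(idx)):
        for j in set(range(60)) - set(idx):
            trial = idx[:i] + [j] + idx[i + 1:]
            if maximin(X_lo[trial]) > maximin(X_lo[idx]):
                idx, improved = trial, True
X_hi = X_lo[idx]                                       # nested HF subset
```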

3.
Response surface methods use least-squares regression analysis to fit low-order polynomials to a set of experimental data. It is becoming increasingly popular to apply response surface approximations for engineering design optimization based on computer simulations. However, the substantial expense involved in obtaining enough data to build quadratic response approximations seriously limits the practical size of problems. Multifidelity techniques, which combine cheap low-fidelity analyses with more accurate but expensive high-fidelity solutions, offer means by which the prohibitive computational cost can be reduced. Two optimum design problems are considered, both pertaining to the fluid flow in diffusers. In both cases, the high-fidelity analyses consist of solutions to the full Navier-Stokes equations, whereas the low-fidelity analyses are either simple empirical formulas or flow solutions to the Navier-Stokes equations obtained on coarse computational meshes. The multifidelity strategy includes the construction of two separate response surfaces: a quadratic approximation based on the low-fidelity data, and a linear correction response surface that approximates the ratio of high- and low-fidelity function evaluations. The paper demonstrates that this approach may yield major computational savings.
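The two-surface construction is simple enough to sketch directly; below is an illustrative 1-D version with toy stand-ins for the diffuser analyses.

```python
# Multifidelity response-surface sketch: a quadratic polynomial is fitted
# to plentiful low-fidelity data, a linear surface is fitted to the ratio
# f_hi/f_lo at the few points where both are known, and the corrected
# model is their product.
import numpy as np

f_lo = lambda x: x**2 + 1.0                  # cheap analysis
f_hi = lambda x: 1.1 * x**2 + 0.2 * x + 1.0  # expensive analysis

x_lo = np.linspace(-2, 2, 30)
q = np.polyfit(x_lo, f_lo(x_lo), deg=2)      # quadratic LF surface

x_hi = np.array([-2.0, 0.0, 2.0])            # few HF evaluations
ratio = f_hi(x_hi) / np.polyval(q, x_hi)
c = np.polyfit(x_hi, ratio, deg=1)           # linear correction surface

def corrected(x):
    """Corrected response: quadratic LF surface times linear ratio."""
    return np.polyval(q, x) * np.polyval(c, x)
```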

4.
An Overview of First-Order Model Management for Engineering Optimization
First-order approximation/model management optimization (AMMO) is a rigorous methodology for solving high-fidelity optimization problems with minimal expense in high-fidelity function and derivative evaluation. AMMO is a general approach that is applicable to any derivative-based optimization algorithm and any combination of high-fidelity and low-fidelity models. This paper gives an overview of the principles that underlie AMMO and puts the method in perspective with other similarly motivated methods. AMMO is first illustrated by an example of a scheme for solving bound-constrained optimization problems. The principles can be easily extrapolated to other optimization algorithms. The applicability to general models is demonstrated on two recent computational studies of aerodynamic optimization with AMMO. One study considers variable-resolution models, where the high-fidelity model is provided by solutions on a fine mesh, while the corresponding low-fidelity model is computed by solving the same differential equations on a coarser mesh. The second study uses variable-fidelity physics models, with the high-fidelity model provided by the Navier-Stokes equations and the low-fidelity model by the Euler equations. Both studies show promising savings in terms of high-fidelity function and derivative evaluations. The overview serves to introduce the reader to the general concept of AMMO and to illustrate the basic principles with current computational results.
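The first-order consistency condition at the heart of such model management can be written out directly; the sketch below uses toy 1-D models and an additive correction, which is one standard realization (the framework itself is more general).

```python
# First-order additive correction: at the current iterate x0 the corrected
# low-fidelity model matches the high-fidelity value AND gradient, so a
# step computed on the cheap model is descent-consistent with the
# expensive one. Toy 1-D models are illustrative.
import numpy as np

f_hi = lambda x: np.sin(3 * x) + x**2        # expensive model
g_hi = lambda x: 3 * np.cos(3 * x) + 2 * x   # its gradient
f_lo = lambda x: x**2                        # cheap model
g_lo = lambda x: 2 * x                       # its gradient

def corrected_lo(x, x0):
    """f_lo plus a linear additive correction anchored at x0."""
    a = f_hi(x0) - f_lo(x0)                  # zeroth-order match
    b = g_hi(x0) - g_lo(x0)                  # first-order match
    return f_lo(x) + a + b * (x - x0)

x0 = 0.5
assert np.isclose(corrected_lo(x0, x0), f_hi(x0))   # value consistency
```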

5.
Variable-complexity methods are applied to aerodynamic shape design problems with the objective of reducing the total computational cost of the optimization process. Two main strategies are employed: the use of different levels of fidelity in the analysis models (variable fidelity) and the use of different sets of design variables (variable parameterization). Variable-fidelity methods with three different types of corrections are implemented and applied to a set of two-dimensional airfoil optimization problems that use computational fluid dynamics for the analysis. Variable parameterization is also used to solve the same problems. Both strategies are shown to reduce the computational cost of the optimization.
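For contrast with the additive correction sketched under entry 4, a first-order multiplicative (beta) correction, one of the correction types such methods employ, looks as follows; the models are toy stand-ins, not the article's CFD analyses.

```python
# Multiplicative first-order correction: scale the low-fidelity model by
# beta(x), a linearization of the ratio f_hi/f_lo at the anchor point x0.
import numpy as np

f_hi = lambda x: np.exp(-x) * (1 + 0.1 * np.sin(5 * x))
g_hi = lambda x: -np.exp(-x) * (1 + 0.1 * np.sin(5 * x)) + 0.5 * np.exp(-x) * np.cos(5 * x)
f_lo = lambda x: np.exp(-x)
g_lo = lambda x: -np.exp(-x)

def beta_corrected(x, x0):
    """f_lo scaled by a first-order Taylor expansion of f_hi/f_lo at x0."""
    beta0 = f_hi(x0) / f_lo(x0)
    dbeta = (g_hi(x0) * f_lo(x0) - f_hi(x0) * g_lo(x0)) / f_lo(x0) ** 2
    return (beta0 + dbeta * (x - x0)) * f_lo(x)
```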

6.
Surrogate-based optimization (SBO) has recently found widespread use in aerodynamic shape design owing to its potential to speed up the whole process: a low-cost surrogate of the objective function reduces the required number of expensive computational fluid dynamics simulations. However, the application of SBO methods to industrial configurations still faces several challenges, the most crucial being the 'curse of dimensionality', i.e. the ability of surrogates to handle a high number of design parameters. This article presents an application study of how the number and location of design variables affect the surrogate-based design process, and aims to draw conclusions on their ability to provide optimal shapes in an efficient manner. To do so, an optimization framework based on the combined use of a surrogate modelling technique (support vector machines for regression), an evolutionary algorithm and a volumetric non-uniform rational B-spline parameterization is applied to the shape optimization of a two-dimensional aerofoil (RAE 2822) and a three-dimensional wing (DPW) in transonic flow conditions.
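The SBO loop this abstract describes (fit a surrogate, search it with an evolutionary algorithm, evaluate the truth at the surrogate optimum, refit) can be sketched as below; the SVR settings, mutation scheme and 2-D test function are assumptions for illustration, not the article's setup.

```python
# Surrogate-based optimization sketch: support vector regression as the
# surrogate, a simple mutation-and-selection evolutionary loop searching
# it, and the true (expensive) function queried only once per iteration.
import numpy as np
from sklearn.svm import SVR

expensive = lambda X: np.sum((X - 0.3) ** 2, axis=1)   # stand-in for CFD

rng = np.random.default_rng(1)
X = rng.random((20, 2))                                # initial DoE
y = expensive(X)

for _ in range(10):                                    # SBO iterations
    surrogate = SVR(kernel="rbf", C=100.0).fit(X, y)
    pop = rng.random((200, 2))                         # EA: random init
    for _ in range(30):                                # mutate + select
        children = np.clip(pop + rng.normal(0, 0.05, pop.shape), 0, 1)
        both = np.vstack([pop, children])
        pop = both[np.argsort(surrogate.predict(both))[:200]]
    x_new = pop[0]                                     # surrogate optimum
    X = np.vstack([X, x_new])                          # evaluate the truth
    y = np.append(y, expensive(x_new[None, :]))

print("best design:", X[np.argmin(y)], "objective:", y.min())
```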

7.
It is important to design robust and reliable systems by accounting for uncertainty and variability in the design process. However, performing optimization in this setting can be computationally expensive, requiring many evaluations of the numerical model to compute statistics of the system performance at every optimization iteration. This paper proposes a multifidelity approach to optimization under uncertainty that makes use of inexpensive, low-fidelity models to provide approximate information about the expensive, high-fidelity model. The multifidelity estimator is developed based on the control variate method to reduce the computational cost of achieving a specified mean square error in the statistic estimate. The method optimally allocates the computational load between the two models based on their relative evaluation cost and the strength of the correlation between them. This paper also develops an information reuse estimator that exploits the autocorrelation structure of the high-fidelity model in the design space to reduce the cost of repeatedly estimating statistics during the course of optimization. Finally, a combined estimator incorporates the features of both the multifidelity estimator and the information reuse estimator. The methods demonstrate 90% computational savings in an acoustic horn robust optimization example and practical design turnaround time in a robust wing optimization problem.
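The control-variate estimator named in the abstract has a compact form; the sketch below estimates a mean from a few high-fidelity and many low-fidelity samples. The toy models and sample sizes are illustrative assumptions.

```python
# Control-variate multifidelity mean estimator: a small paired sample of
# high- and low-fidelity outputs plus a large independent low-fidelity
# sample, combined so that the estimator variance (and hence cost for a
# target MSE) drops with the correlation between fidelities.
import numpy as np

f_hi = lambda z: np.sin(z) + 0.1 * z**2          # expensive model
f_lo = lambda z: np.sin(z)                       # cheap, correlated model

rng = np.random.default_rng(2)
z_hi = rng.normal(size=100)                      # n HF evaluations
z_lo = rng.normal(size=10_000)                   # m >> n LF evaluations

y_hi, y_lo = f_hi(z_hi), f_lo(z_hi)              # paired samples
alpha = np.cov(y_hi, y_lo)[0, 1] / np.var(y_lo, ddof=1)

# HF sample mean, corrected by the LF discrepancy between the small
# paired sample and the large independent sample.
est = y_hi.mean() + alpha * (f_lo(z_lo).mean() - y_lo.mean())
print("multifidelity estimate of E[f_hi]:", est)
```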

8.
The use of response surface methods is well established in the global optimization of expensive functions, the response surface acting as a surrogate for the expensive objective function. In structural design, however, the objective may vary little between models of different fidelity: it is more often the constraints that change. Here, approaches are described whereby the coarse-model constraints are mapped so that the mapped constraints more faithfully approximate the fine-model constraints. The shape optimization of a simple structure demonstrates the approach.

9.
In many real-world optimization problems, the underlying objective and constraint functions are evaluated using computationally expensive iterative simulations, such as solvers for computational electromagnetics, computational fluid dynamics or the finite element method. The default practice is to run such simulations until convergence, using termination criteria such as a maximum number of iterations, residual error thresholds or limits on computational time, to estimate the performance of a given design. This information is used to build computationally cheap approximations/surrogates which are subsequently used during the course of optimization in lieu of the actual simulations. However, it is possible to exploit information from pre-converged solutions if one has the control to abort simulations at various stages of convergence, which amounts to access to performance estimates at lower fidelities. Surrogate-assisted optimization methods have rarely been used to deal with such classes of problem, where estimates at various levels of fidelity are available. In this article, a multiple-surrogate-assisted optimization approach is presented, in which solutions are evaluated at various levels of fidelity during the course of the search. For any solution under consideration, the choice of an appropriate fidelity level is derived from neighbourhood information, i.e. rank correlations between performance at different fidelity levels and at the highest fidelity level of the neighbouring solutions. Moreover, multiple types of surrogates are used to gain a competitive edge. The performance of the approach is illustrated using a simple 1D unconstrained analytical test function, then further assessed using three 10D and three 20D test problems, and finally a practical design problem involving drag minimization of an unmanned underwater vehicle. The numerical experiments clearly demonstrate the benefits of the proposed approach for such classes of problem.
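The neighbourhood signal used to pick a fidelity level can be illustrated with a rank correlation; the threshold and toy data below are assumptions for the sketch, not values from the article.

```python
# Fidelity-selection signal sketch: Spearman rank correlation between
# cheap (aborted-early) and fully converged evaluations of neighbouring
# solutions. A high rank correlation suggests the cheap estimate ranks
# candidates reliably, so nearby new points can be screened at low fidelity.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
f_converged = rng.random(12)                       # neighbours, full runs
f_early = f_converged + rng.normal(0, 0.05, 12)    # same points, aborted early

rho, _ = spearmanr(f_early, f_converged)
fidelity = "low (aborted) run is enough" if rho > 0.9 else "run to convergence"
print(f"rank correlation {rho:.2f} -> evaluate new neighbour at: {fidelity}")
```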

10.
This paper deals with variable-fidelity optimization, a technique in which the advantages of both high- and low-fidelity models are used in an optimization process: the high-fidelity model provides solution accuracy while the low-fidelity model reduces computational cost. An outline of the theory of the Approximation Management Framework (AMF) proposed by Alexandrov (1996) and Lewis (1996) is given. The AMF algorithm provides the mathematical robustness required for variable-fidelity optimization. This paper introduces a subproblem formulation adapted to a modular implementation of the AMF. We also propose two types of second-order corrections (additive and multiplicative) which serve to build the approximation of the high-fidelity model based on the low-fidelity one. Results for a transonic airfoil shape optimization problem are presented. Application of the variable-fidelity algorithm leads to a threefold saving in high-fidelity solver calls compared to direct optimization using the high-fidelity solver only. However, premature stops of the algorithm are observed in some cases, so a study of the influence of solver numerical noise on this robustness deficiency is presented. The study shows that numerical noise artificially introduced into an analytical function causes premature stops of the AMF; numerical noise observed with our CFD solvers is therefore strongly suspected to be the cause of the robustness problems encountered.

11.
This paper presents an efficient metamodel-building technique for solving collaborative optimization (CO) based on high-fidelity models. The proposed method is based on a metamodeling concept designed to simultaneously utilize computationally efficient (low-fidelity) and expensive (high-fidelity) models in an optimization process. A distinctive feature of the method is the utilization of interaction between low- and high-fidelity models to construct high-quality metamodels at both the discipline level and the system level of the CO. The low-fidelity model is tuned so that it approaches the accuracy of the high-fidelity model while remaining computationally inexpensive; the tuned low-fidelity models are then used in the discipline-level optimization. At the system level, a model management strategy together with the metamodeling technique is used to handle the computational cost of the equality constraints in CO. To determine the fidelity of the metamodels, the predictive estimation of model fidelity method is applied. The developed method is demonstrated on a 2D airfoil design problem involving tightly coupled high-fidelity structural and aerodynamic models. The results show that the proposed method significantly reduces computational cost and improves the convergence rate when solving multidisciplinary optimization problems based on high-fidelity models.

12.
In this article, a simple yet efficient and reliable technique for fully automated multi-objective design optimization of antenna structures using sequential domain patching (SDP) is discussed. The optimization procedure according to SDP is a two-step process: (i) obtaining the initial set of Pareto-optimal designs representing the best possible trade-offs between considered conflicting objectives, and (ii) Pareto set refinement for yielding the optimal designs at the high-fidelity electromagnetic (EM) simulation model level. For the sake of computational efficiency, the first step is realized at the level of a low-fidelity (coarse-discretization) EM model by sequential construction and relocation of small design space segments (patches) in order to create a path connecting the extreme Pareto front designs obtained beforehand. The second stage involves response correction techniques and local response surface approximation models constructed by reusing EM simulation data acquired in the first step. A major contribution of this work is an automated procedure for determining the patch dimensions. It allows for appropriate selection of the number of patches for each geometry variable so as to ensure reliability of the optimization process while maintaining its low cost. The importance of this procedure is demonstrated by comparing it with uniform patch dimensions.

13.
The increasing computational requirements of advanced numerical tools for simulating material behaviour can prohibit the direct integration of these tools into a design optimization procedure where multiple iterations are required. Therefore, a design approach is needed that can incorporate multiple simulations (multi-physics, with different input variables) of varying fidelity in an iterative model management framework that significantly reduces design cycle times. In this research, a material design tool based on a variable-fidelity model management framework is applied to obtain the optimal size of a second phase, consisting of silicon carbide (SiC) fibres, in a silicon nitride (Si3N4) matrix, yielding continuous-fibre SiC-Si3N4 ceramic composites (CFCCs) with maximum high-temperature strength and high-temperature creep resistance. The investigation shows how models with different dimensions and input design variables can be handled and integrated efficiently by the trust-region model management framework, while significantly reducing design cycle times in application to the design of multiphase composite materials.
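The trust-region model management loop underlying such frameworks can be sketched in a few lines; the version below uses a zeroth-order additive correction and toy 1-D models for brevity (frameworks of this kind typically enforce first-order consistency as well).

```python
# Trust-region model management sketch: optimize the corrected low-fidelity
# model inside a trust region, then grow or shrink the region according to
# how well predicted improvement matches actual high-fidelity improvement.
import numpy as np
from scipy.optimize import minimize_scalar

f_hi = lambda x: (x - 1.0) ** 2 + 0.1 * np.sin(10 * x)   # expensive
f_lo = lambda x: (x - 0.8) ** 2                          # cheap

x, delta = 3.0, 1.0
for _ in range(15):
    shift = f_hi(x) - f_lo(x)                # additive (zeroth-order) correction
    res = minimize_scalar(lambda s: f_lo(s) + shift,
                          bounds=(x - delta, x + delta), method="bounded")
    pred = (f_lo(x) + shift) - res.fun       # predicted decrease
    actual = f_hi(x) - f_hi(res.x)           # actual HF decrease
    rho = actual / pred if pred > 0 else -1.0
    if rho > 0.1:                            # accept the step
        x = res.x
    delta = delta * 2 if rho > 0.75 else (delta if rho > 0.1 else delta / 2)
print("converged near x =", x)
```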

14.
In optimization under uncertainty for engineering design, the behavior of the system outputs due to uncertain inputs needs to be quantified at each optimization iteration, but this can be computationally expensive. Multifidelity techniques can significantly reduce the computational cost of Monte Carlo sampling methods for quantifying the effect of uncertain inputs, but existing multifidelity techniques in this context apply only to Monte Carlo estimators that can be expressed as a sample average, such as estimators of statistical moments. Information reuse is a particular multifidelity method that treats previous optimization iterations as lower fidelity models. This work generalizes information reuse to be applicable to quantities whose estimators are not sample averages. The extension makes use of bootstrapping to estimate the error of estimators and the covariance between estimators at different fidelities. Specifically, the horsetail matching metric and quantile function are considered as quantities whose estimators are not sample averages. In an optimization under uncertainty for an acoustic horn design problem, generalized information reuse demonstrated computational savings of over 60% compared with regular Monte Carlo sampling.
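The bootstrap ingredient is straightforward to illustrate for a quantile, an estimator that is not a sample average; the data and sample sizes below are toy assumptions, not the paper's models.

```python
# Bootstrap sketch: error and cross-fidelity covariance of estimators that
# are NOT sample averages (here the 90% quantile). Resampling the paired
# outputs gives both the estimator variance and the covariance used to
# weight the fidelities.
import numpy as np

rng = np.random.default_rng(4)
z = rng.normal(size=500)
y_hi = z + 0.1 * z**2                  # current-iteration (HF) outputs
y_lo = z                               # previous-iteration (LF-like) outputs

B = 2000
q_hi, q_lo = np.empty(B), np.empty(B)
for b in range(B):                     # paired bootstrap resampling
    i = rng.integers(0, 500, size=500)
    q_hi[b] = np.quantile(y_hi[i], 0.9)
    q_lo[b] = np.quantile(y_lo[i], 0.9)

var_hi = q_hi.var(ddof=1)              # estimator error (variance)
cov = np.cov(q_hi, q_lo)[0, 1]         # cross-fidelity covariance
print(f"bootstrap var {var_hi:.4f}, covariance {cov:.4f}")
```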

15.
The global variable-fidelity modelling (GVFM) method presented in this article extends the original variable-complexity modelling (VCM) algorithm, which uses a low-fidelity model and a scaling function to approximate a high-fidelity function for efficiently solving design-optimization problems. GVFM uses a design of experiments to sample values of the high- and low-fidelity functions, exploring the global design space and initializing the scaling function with a radial basis function (RBF) network. This approach removes high-fidelity gradient evaluation from the process, which makes GVFM more efficient than VCM for high-dimensional design problems. The proposed algorithm converges with 65% fewer high-fidelity function calls than VCM for a one-dimensional problem and approximately 80% fewer for a two-dimensional numerical problem. The GVFM method is applied to the design optimization of transonic and subsonic aerofoils; both problems show design improvement with a reasonable number of high- and low-fidelity function evaluations.
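The scaling-function construction can be sketched with scipy's RBFInterpolator standing in for the paper's RBF network; the DoE size and toy 2-D models below are assumptions for illustration.

```python
# GVFM-style scaling sketch: fit an RBF model to the HF/LF ratio at the
# DoE points, then use s(x) * f_lo(x) as the gradient-free surrogate of
# the high-fidelity function.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.stats import qmc

f_lo = lambda X: np.sum(X**2, axis=1) + 1.0
f_hi = lambda X: 1.2 * np.sum(X**2, axis=1) + 0.1 * X[:, 0] + 1.0

X_doe = qmc.LatinHypercube(d=2, seed=5).random(12) * 4 - 2   # DoE in [-2,2]^2
scale = RBFInterpolator(X_doe, f_hi(X_doe) / f_lo(X_doe))

def scaled_lo(X):
    """Low-fidelity model multiplied by the RBF scaling function."""
    return scale(X) * f_lo(X)
```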

16.
Reduced-order models that are able to approximate output quantities of interest of high-fidelity computational models over a wide range of input parameters play an important role in making tractable large-scale optimal design, optimal control, and inverse problem applications. We consider the problem of determining a reduced model of an initial value problem that spans all important initial conditions, and pose the task of determining appropriate training sets for reduced-basis construction as a sequence of optimization problems. We show that, under certain assumptions, these optimization problems have an explicit solution in the form of an eigenvalue problem, yielding an efficient model reduction algorithm that scales well to systems with states of high dimension. Furthermore, tight upper bounds are given for the error in the outputs of the reduced models. The reduction methodology is demonstrated for a large-scale contaminant transport problem.

17.
The design of efficient flapping wings for human-engineered micro aerial vehicles (MAVs) has long been an elusive goal, in part because of the large size of the design space. One strategy for overcoming this difficulty is to use a multifidelity simulation strategy that appropriately balances computation time and accuracy. We compare two models with different geometric and physical fidelity. The low-fidelity model is an inviscid doublet-lattice method with infinitely thin lifting surfaces. The high-fidelity model is a high-order accurate discontinuous Galerkin Navier-Stokes solver, which uses an accurate representation of the flapping-wing geometry. To compare the performance of the two methods, we consider a model flapping wing with an elliptical planform and an analytically prescribed spanwise wing twist, at size scales relevant to MAVs. Our results show that in many cases, including those with mild separation, low-fidelity simulations can accurately predict integrated forces, provide insight into the flow structure, indicate regions of likely separation, and shed light on design-relevant quantities. But for problems with significant levels of separation, higher-fidelity methods are required to capture the details of the flow field. Inevitably, high-fidelity simulations are needed to establish the limits of validity of the lower-fidelity simulations.

18.
Unlike the traditional topology optimization approach that uses the same discretization for finite element analysis and design optimization, this paper proposes a framework for improving multiresolution topology optimization (iMTOP) via multiple distinct discretizations for: (1) finite elements; (2) design variables; and (3) density. This approach leads to high-fidelity resolution with a relatively low computational cost. In addition, an adaptive multiresolution topology optimization (AMTOP) procedure is introduced, which consists of selective adjustment and refinement of design variable and density fields. Various two-dimensional and three-dimensional numerical examples demonstrate that the proposed schemes can significantly reduce computational cost in comparison to the existing element-based approach.

19.
This paper deals with the optimal design of laminated composite plates with integrated piezoelectric actuators. Refined finite element models based on equivalent single-layer high-order shear deformation theories are used. These models are combined with simulated annealing, a stochastic global optimization technique, to find the optimal locations of the piezoelectric actuators and the optimal fibre reinforcement angles, in both cases with the objective of maximizing the buckling load of the composite adaptive plate structure. To show the performance of the proposed optimization models, two illustrative and simple examples are presented and discussed. In one of these examples a comparison between the simulated annealing technique and a gradient-based optimization scheme is carried out.
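A minimal simulated-annealing sketch for this kind of discrete placement problem is given below; the "buckling load" is a toy surrogate objective rather than a laminate model, and the move and cooling schedules are assumptions.

```python
# Simulated-annealing sketch for discrete actuator placement: choose which
# of n candidate sites receive an actuator so that a stand-in objective is
# maximized, accepting worse moves with the Metropolis probability.
import numpy as np

rng = np.random.default_rng(6)
gain = rng.random(12)                      # toy per-site effectiveness

def buckling_load(mask):
    """Toy objective: diminishing returns on total actuation authority."""
    return np.sqrt(gain[mask.astype(bool)].sum())

mask = np.zeros(12)
mask[:4] = 1                               # start: 4 actuators
T = 1.0
for step in range(2000):
    trial = mask.copy()
    i = rng.choice(np.where(mask == 1)[0])
    j = rng.choice(np.where(mask == 0)[0])
    trial[i], trial[j] = 0, 1              # move one actuator
    delta = buckling_load(trial) - buckling_load(mask)
    if delta > 0 or rng.random() < np.exp(delta / T):
        mask = trial                       # accept (Metropolis rule)
    T *= 0.998                             # cool down
print("actuator sites:", np.where(mask == 1)[0])
```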

20.
The optimization of subsurface flow processes is important for many applications, including oil field operations and the geological storage of carbon dioxide. These optimizations are very demanding computationally due to the large number of flow simulations that must be performed and the typically large dimension of the simulation models. In this work, reduced-order modeling (ROM) techniques are applied to reduce the simulation time of complex large-scale subsurface flow models. The procedures all entail proper orthogonal decomposition (POD), in which a high-fidelity training simulation is run, solution snapshots are stored, and an eigen-decomposition (SVD) is performed on the resulting data matrix. Additional recently developed ROM techniques are also implemented, including a snapshot clustering procedure and a missing point estimation technique to eliminate rows from the POD basis matrix. The implementation of the ROM procedures into a general-purpose research simulator is described. Extensive flow simulations involving water injection into a geologically complex 3D oil reservoir model containing 60 000 grid blocks are presented. The various ROM techniques are assessed in terms of their ability to reproduce high-fidelity simulation results for different well schedules, and also in terms of the computational speedups they provide. The numerical solutions demonstrate that the ROM procedures can accurately reproduce the reference simulations and can provide speedups of up to an order of magnitude when compared with a high-fidelity model simulated using an optimized solver.
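The POD pipeline described here (training run, snapshot matrix, SVD, basis truncation, projection) is easy to sketch; the random snapshot data below stand in for the reservoir simulator's states, and the energy threshold is an assumption.

```python
# POD sketch: stack solution snapshots from a high-fidelity training run
# as columns, take an SVD, keep the leading left singular vectors as the
# reduced basis, and project full-order states onto it.
import numpy as np

n_state, n_snap = 5000, 80                 # full-order size, snapshot count
rng = np.random.default_rng(7)
snapshots = rng.standard_normal((n_state, 3)) @ rng.standard_normal((3, n_snap))
snapshots += 0.01 * rng.standard_normal((n_state, n_snap))   # small noise

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1   # retain 99.99% of the energy
Phi = U[:, :r]                                 # POD basis

x = snapshots[:, 0]                            # some full-order state
x_rom = Phi @ (Phi.T @ x)                      # project and reconstruct
print(f"rank {r}, reconstruction error "
      f"{np.linalg.norm(x - x_rom) / np.linalg.norm(x):.2e}")
```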
