Similar Documents
20 similar documents retrieved, search time 31 ms
1.
In optimization under uncertainty for engineering design, the behavior of the system outputs due to uncertain inputs needs to be quantified at each optimization iteration, but this can be computationally expensive. Multifidelity techniques can significantly reduce the computational cost of Monte Carlo sampling methods for quantifying the effect of uncertain inputs, but existing multifidelity techniques in this context apply only to Monte Carlo estimators that can be expressed as a sample average, such as estimators of statistical moments. Information reuse is a particular multifidelity method that treats previous optimization iterations as lower fidelity models. This work generalizes information reuse to be applicable to quantities whose estimators are not sample averages. The extension makes use of bootstrapping to estimate the error of estimators and the covariance between estimators at different fidelities. Specifically, the horsetail matching metric and quantile function are considered as quantities whose estimators are not sample averages. In an optimization under uncertainty for an acoustic horn design problem, generalized information reuse demonstrated computational savings of over 60% compared with regular Monte Carlo sampling.
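The bootstrap step described above can be sketched as follows. This is a minimal sketch in which the paired low-/high-fidelity outputs, the sample size, and the 90th-percentile quantity of interest are all illustrative assumptions rather than details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Paired samples from a high- and a low-fidelity model evaluated at the
# same random inputs (toy stand-ins; real models would be simulations).
x = rng.normal(size=500)
f_hi = np.sin(x) + 0.1 * rng.normal(size=x.size)   # high-fidelity output
f_lo = np.sin(x) + 0.3                             # biased low-fidelity output

def quantile_90(s):
    """A statistic whose estimator is NOT a sample average."""
    return np.quantile(s, 0.9)

# Bootstrap: resample the paired data to estimate the variance of each
# quantile estimator and the covariance between the two fidelities.
B = 2000
stats = np.empty((B, 2))
for b in range(B):
    idx = rng.integers(0, x.size, size=x.size)     # resample with replacement
    stats[b, 0] = quantile_90(f_hi[idx])
    stats[b, 1] = quantile_90(f_lo[idx])

cov = np.cov(stats, rowvar=False)
print("Var[q_hi]  ~", cov[0, 0])
print("Cov[q_hi, q_lo] ~", cov[0, 1])
```

The strong correlation between the two quantile estimators (they are computed on the same resamples) is exactly what a control-variate-style multifidelity estimator exploits.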

2.
The analysis and optimization of complex multiphysics systems presents a series of challenges that limit the practical use of computational tools. Specifically, the optimization of such systems involves multiple interconnected components with competing quantities of interest and high-dimensional spaces and necessitates the use of costly high-fidelity solvers to accurately simulate the coupled multiphysics. In this paper, we put forth a data-driven framework to address these challenges leveraging recent advances in machine learning. We combine multifidelity Gaussian process regression and Bayesian optimization to construct probabilistic surrogate models for given quantities of interest and explore high-dimensional design spaces in a cost-effective manner. The synergistic use of these computational tools gives rise to a tractable and general framework for tackling realistic multidisciplinary optimization problems. To demonstrate the specific merits of our approach, we have chosen a challenging large-scale application involving the hydrostructural optimization of three-dimensional supercavitating hydrofoils. To this end, we have developed an automated workflow for performing multiresolution simulations of turbulent multiphase flows and multifidelity structural mechanics (combining three-dimensional and one-dimensional finite element results), the results of which drive our machine learning analysis in pursuit of the optimal hydrofoil shape.
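A minimal sketch of the two-fidelity surrogate idea, assuming a simple additive-discrepancy structure between fidelities; the RBF kernel, length scales, and one-dimensional toy models are this sketch's assumptions, not the paper's setup:

```python
import numpy as np

def gp_posterior_mean(X, y, Xs, length, noise=1e-6):
    """Posterior mean of a zero-mean GP with an RBF kernel (bare-bones)."""
    k = lambda A, B: np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / length**2)
    K = k(X, X) + noise * np.eye(len(X))
    return k(Xs, X) @ np.linalg.solve(K, y)

# Toy low-/high-fidelity pair (illustrative, not from the paper):
f_lo = lambda x: np.sin(8 * x)              # cheap, biased analysis
f_hi = lambda x: np.sin(8 * x) + 0.2 * x    # expensive analysis

X_lo = np.linspace(0, 1, 25)                # many cheap samples
X_hi = np.linspace(0, 1, 6)                 # few expensive samples
Xs = np.linspace(0, 1, 101)                 # prediction grid

# Stage 1: GP surrogate of the low-fidelity model.
mu_lo_at_hi = gp_posterior_mean(X_lo, f_lo(X_lo), X_hi, length=0.1)
mu_lo_at_s = gp_posterior_mean(X_lo, f_lo(X_lo), Xs, length=0.1)

# Stage 2: GP on the high-fidelity residual (additive discrepancy).
delta = gp_posterior_mean(X_hi, f_hi(X_hi) - mu_lo_at_hi, Xs, length=0.5)
mu_mf = mu_lo_at_s + delta                  # multifidelity prediction

# For comparison: a GP trained on the scarce high-fidelity data alone.
mu_hi_only = gp_posterior_mean(X_hi, f_hi(X_hi), Xs, length=0.5)

err_mf = np.max(np.abs(mu_mf - f_hi(Xs)))
err_hi = np.max(np.abs(mu_hi_only - f_hi(Xs)))
print("multifidelity error:", err_mf, " high-fidelity-only error:", err_hi)
```

The abundant cheap samples resolve the oscillatory trend that the six expensive samples alone cannot, which is the essential multifidelity payoff.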

3.
This paper presents an efficient metamodel-building technique for solving collaborative optimization (CO) based on high-fidelity models. The proposed method is based on a metamodeling concept designed to simultaneously utilize computationally efficient (low-fidelity) and expensive (high-fidelity) models in an optimization process. A distinctive feature of the method is the utilization of the interaction between low- and high-fidelity models to construct high-quality metamodels at both the discipline and system levels of the CO. The low-fidelity model is tuned so that it approaches the accuracy of the high-fidelity model while remaining computationally inexpensive. The tuned low-fidelity models are then used in the discipline-level optimization process. At the system level, a model management strategy combined with the metamodeling technique is used to handle the computational cost of the equality constraints in CO. To determine the fidelity of the metamodels, the predictive estimation of model fidelity method is applied. The developed method is demonstrated on a 2D airfoil design problem involving tightly coupled high-fidelity structural and aerodynamic models. The results show that the proposed method significantly reduces computational cost and improves the convergence rate for solving multidisciplinary optimization problems based on high-fidelity models.

4.
Variable-complexity methods are applied to aerodynamic shape design problems with the objective of reducing the total computational cost of the optimization process. Two main strategies are employed: the use of different levels of fidelity in the analysis models (variable fidelity) and the use of different sets of design variables (variable parameterization). Variable-fidelity methods with three different types of corrections are implemented and applied to a set of two-dimensional airfoil optimization problems that use computational fluid dynamics for the analysis. Variable parameterization is also used to solve the same problems. Both strategies are shown to reduce the computational cost of the optimization.
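The additive and multiplicative correction types mentioned above can be illustrated on a hypothetical pair of analysis models; the functions and the current design point below are toy stand-ins, not the paper's CFD models:

```python
import numpy as np

# Hypothetical analysis models of two fidelities (toy stand-ins for CFD):
f_hi = lambda x: (6 * x - 2) ** 2 * np.sin(12 * x - 4)   # "high fidelity"
f_lo = lambda x: 0.5 * f_hi(x) + 10 * (x - 0.5) - 5      # "low fidelity"

x0 = 0.4                                                 # current design point

# Central finite-difference slopes at x0.
h = 1e-5
d_hi = (f_hi(x0 + h) - f_hi(x0 - h)) / (2 * h)
d_lo = (f_lo(x0 + h) - f_lo(x0 - h)) / (2 * h)

# First-order additive correction: matches f_hi's value and slope at x0.
add_corr = lambda x: f_lo(x) + (f_hi(x0) - f_lo(x0)) + (d_hi - d_lo) * (x - x0)

# Zeroth-order multiplicative correction: matches f_hi's value at x0.
mult_corr = lambda x: f_lo(x) * (f_hi(x0) / f_lo(x0))

print("f_hi(x0)       :", f_hi(x0))
print("additive corr  :", add_corr(x0))
print("multiplic. corr:", mult_corr(x0))
```

Matching the high-fidelity value (and, for the additive form, the slope) at the current iterate is the consistency condition that trust-region model-management frameworks rely on.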

5.
Unlike the traditional topology optimization approach that uses the same discretization for finite element analysis and design optimization, this paper proposes a framework for improving multiresolution topology optimization (iMTOP) via multiple distinct discretizations for: (1) finite elements; (2) design variables; and (3) density. This approach leads to high fidelity resolution with a relatively low computational cost. In addition, an adaptive multiresolution topology optimization (AMTOP) procedure is introduced, which consists of selective adjustment and refinement of design variable and density fields. Various two-dimensional and three-dimensional numerical examples demonstrate that the proposed schemes can significantly reduce computational cost in comparison to the existing element-based approach. Copyright © 2012 John Wiley & Sons, Ltd.

6.
Jin Yi, Mi Xiao, Junnan Xu, Lin Zhang. Engineering Optimization, 2017, 49(1): 161-180
Engineering design often involves several types of simulation, which incurs expensive computational costs. Variable-fidelity approximation-based design optimization approaches enable efficient exploration and optimization of the design space using approximation models with different levels of fidelity, and they have been widely used in many fields. The selection of sample points for variable-fidelity approximation models, known as nested designs, is fundamental to their construction. In this article a novel nested maximin Latin hypercube design is constructed based on successive local enumeration and a modified novel global harmony search algorithm. In the proposed nested designs, successive local enumeration is employed to select sample points for the low-fidelity model, whereas the modified novel global harmony search algorithm is employed to select sample points for the high-fidelity model. A comparative study with multiple criteria and an engineering application verify the efficiency of the proposed nested designs approach.
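As a baseline illustration of the maximin criterion (not the successive-local-enumeration or harmony-search machinery of the article), one can pick the best of many random Latin hypercubes:

```python
import numpy as np

def maximin_lhs(n, dim, n_restarts=200, rng=None):
    """Best-of-random Latin hypercube under the maximin criterion
    (largest minimal pairwise distance) -- a simple baseline."""
    rng = np.random.default_rng(rng)
    best, best_score = None, -np.inf
    for _ in range(n_restarts):
        # One random LHS: a random permutation per dimension, jittered in-cell.
        X = (np.array([rng.permutation(n) for _ in range(dim)]).T
             + rng.random((n, dim))) / n
        d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
        np.fill_diagonal(d, np.inf)
        score = d.min()              # minimal pairwise distance of this design
        if score > best_score:
            best, best_score = X, score
    return best, best_score

X, score = maximin_lhs(10, 2, rng=0)
print("min pairwise distance:", score)
```

Every candidate keeps the Latin property (one point per axis-aligned bin); the restarts only improve space-fillingness. Nested designs additionally require the high-fidelity point set to be a subset of the low-fidelity one, which this baseline does not enforce.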

7.
In simulation-based engineering design optimization, relying on high-accuracy, high-cost analysis models leads to a heavy computational burden, while relying on low-accuracy, low-cost models yields optimization results of low credibility that cannot meet practical engineering requirements. To balance the conflict between high accuracy and low cost, a sequential hierarchical Kriging model is built to fuse high- and low-fidelity data: a large number of cheap, low-fidelity sample points capture the trend of the high-fidelity analysis model, and a small number of expensive, high-fidelity sample points correct the low-fidelity model, yielding high-accuracy predictions of the optimization objective. To prevent hierarchical Kriging model error from contaminating the optimization result, the hierarchical Kriging model is coupled with a genetic algorithm: following the 6σ design criterion, the prediction interval of the best solution in each generation is computed, and the current best solution with a large prediction interval is taken as a new high-fidelity sample point. The hierarchical Kriging model is updated sequentially during optimization, improving its prediction accuracy near the optimum and thereby ensuring the reliability of the design result. The proposed method is applied to the structural design optimization of a micro air vehicle fuselage to verify its effectiveness and advantages. Mesh models with different element counts serve as the low- and high-fidelity analysis models, and optimal Latin hypercube design is used to select 60 low-fidelity and 20 high-fidelity sample points to build the initial hierarchical Kriging model. Results obtained with the proposed method are compared with those from optimization performed directly on the high-fidelity simulation model. The comparison shows that the method makes effective use of the information at both high- and low-fidelity sample points to build an accurate hierarchical Kriging model, and that it finds a near-optimal solution at only a small computational cost, markedly improving design efficiency and providing a reference for similar structural design optimization problems.

8.
The design of efficient flapping wings for human-engineered micro aerial vehicles (MAVs) has long been an elusive goal, in part because of the large size of the design space. One strategy for overcoming this difficulty is to use a multifidelity simulation strategy that appropriately balances computation time and accuracy. We compare two models with different geometric and physical fidelity. The low-fidelity model is an inviscid doublet lattice method with infinitely thin lifting surfaces. The high-fidelity model is a high-order accurate discontinuous Galerkin Navier–Stokes solver, which uses an accurate representation of the flapping-wing geometry. To compare the performance of the two methods, we consider a model flapping wing with an elliptical planform and an analytically prescribed spanwise wing twist, at size scales relevant to MAVs. Our results show that in many cases, including those with mild separation, low-fidelity simulations can accurately predict integrated forces, provide insight into the flow structure, indicate regions of likely separation, and shed light on design-relevant quantities. But for problems with significant levels of separation, higher-fidelity methods are required to capture the details of the flow field. Inevitably, high-fidelity simulations are needed to establish the limits of validity of the lower-fidelity simulations. Copyright © 2012 John Wiley & Sons, Ltd.

9.
In this paper, we propose an efficient strategy for robust design based on Bayesian Monte Carlo simulation. Robust design is formulated as a multiobjective problem to allow explicit trade-off between the mean performance and variability. The proposed method is applied to a compressor blade design in the presence of manufacturing uncertainty. Process capability data are utilized in conjunction with a parametric geometry model for manufacturing uncertainty quantification. High-fidelity computational fluid dynamics simulations are used to evaluate the aerodynamic performance of the compressor blade. A probabilistic analysis for estimating the effect of manufacturing variations on the aerodynamic performance of the blade is performed and a case for the application of robust design is established. The proposed approach is applied to robust design of compressor blades and a selected design from the final Pareto set is compared with an optimal design obtained by minimizing the nominal performance. The selected robust blade has substantial improvement in robustness against manufacturing variations in comparison with the deterministic optimal blade. Significant savings in computational effort using the proposed method are also illustrated. Copyright © 2007 John Wiley & Sons, Ltd.

10.
Accelerated life testing (ALT) design is usually performed based on assumptions about life distributions, the stress–life relationship, and empirical reliability models. Time-dependent reliability analysis, on the other hand, seeks to predict product and system life distributions based on physics-informed simulation models. This paper proposes an ALT design framework that takes advantage of both types of analysis. For a given testing plan, the corresponding life distributions under different stress levels are estimated via time-dependent reliability analysis. Because both aleatory and epistemic uncertainty sources are involved in the reliability analysis, ALT data are used to update the epistemic uncertainty using Bayesian statistics. The variance of the reliability estimate at the nominal stress level is then computed from the updated time-dependent reliability analysis model. A design optimization model is formulated to minimize the overall expected testing cost subject to a constraint on the confidence in the variance of the reliability estimate. The computational effort for solving the optimization model is reduced in three ways: (i) an efficient time-dependent reliability analysis method is used; (ii) a surrogate model is constructed for time-dependent reliability under different stress levels; and (iii) the ALT design optimization model is decoupled into a deterministic design optimization model and a probabilistic analysis model. A cantilever beam and a helicopter rotor hub are used to demonstrate the proposed method. The results show the effectiveness of the proposed ALT design optimization model. Copyright © 2015 John Wiley & Sons, Ltd.
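A minimal sketch of the Bayesian updating step, assuming (purely for illustration, not from the paper) that the epistemic uncertainty is a Beta prior on a failure probability updated with binomial ALT outcomes:

```python
# Epistemic uncertainty in a failure probability p at an elevated stress
# level, encoded as a Beta prior (hypothetical numbers):
a0, b0 = 2.0, 8.0            # prior Beta(2, 8): mean 0.2

# ALT outcome at that stress level: 3 failures among 25 tested units.
failures, n = 3, 25

# Conjugate Bayesian update: Beta(a0 + failures, b0 + survivors).
a1, b1 = a0 + failures, b0 + (n - failures)

prior_mean = a0 / (a0 + b0)
post_mean = a1 / (a1 + b1)
post_var = a1 * b1 / ((a1 + b1) ** 2 * (a1 + b1 + 1))

print("prior mean        :", prior_mean)
print("posterior mean    :", post_mean)
print("posterior variance:", post_var)
```

The shrinking posterior variance is what drives the paper's confidence constraint: more test units at a stress level tighten the epistemic uncertainty, at a higher testing cost.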

11.
IIE Transactions, 2008, 40(5): 509-523
In this paper we introduce a robust optimization approach to solve the Vehicle Routing Problem (VRP) with demand uncertainty. This approach yields routes that minimize transportation costs while satisfying all demands in a given bounded uncertainty set. We show that for the Miller-Tucker-Zemlin formulation of the VRP and specific uncertainty sets, solving for the robust solution is no more difficult than solving a single deterministic VRP. Our computational results on benchmark instances and on families of clustered instances show that the robust solution can protect from unmet demand while incurring a small additional cost over deterministic optimal routes. This is most pronounced for clustered instances under moderate uncertainty, where remaining vehicle capacity is used to protect against variations within each cluster at a small additional cost. We compare the robust optimization model with classic stochastic VRP models for this problem to illustrate the differences and similarities between them. We also observe that the robust solution amounts to a clever management of the remaining vehicle capacity compared to uniformly and non-uniformly distributing this slack over the vehicles.

12.
This study presents a topology optimization method that simultaneously considers a stress constraint and uncertainty in the load positions for practical applications. The phase-field design method is used to derive the topologically optimal structural shape. A stress penalization function, which makes intermediate design-variable values disproportionately expensive, is employed to ensure numerical stability and avoid singularities during the optimization process. Adaptive mesh refinement and a modified P-norm stress with a correction factor are also employed to reduce the computational cost. As numerical examples, cantilever beam, L-shaped beam, and MBB beam problems are presented to verify the proposed design method. The open-source code FreeFEM++ is used for the finite element analysis and the design process.
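The P-norm stress with a correction factor can be sketched as follows; the stress values and the choice p = 8 are illustrative, and in practice the factor is recomputed periodically as the design evolves rather than at every step:

```python
import numpy as np

def pnorm_stress(sigma, p=8.0):
    """Smooth, differentiable aggregate of element von Mises stresses."""
    return np.sum(sigma ** p) ** (1.0 / p)

# Element stresses from one FE solve (illustrative numbers, e.g. in MPa):
sigma = np.array([120.0, 250.0, 180.0, 240.0, 90.0])

raw = pnorm_stress(sigma)
# The sum-based P-norm bounds the true maximum from above for finite p;
# a correction factor rescales it toward the actual maximum stress.
# (In an optimization loop, c is frozen for several iterations and updated
# as the stress field changes.)
c = sigma.max() / raw

print("max stress      :", sigma.max())
print("P-norm (p=8)    :", raw)
print("corrected P-norm:", c * raw)
```

Replacing the non-differentiable max with this smooth aggregate is what makes the stress constraint usable with gradient-based topology optimization.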

13.
This paper reviews the evolution of off-line quality engineering methods with respect to one or more quality criteria, and presents some recent results. The fundamental premises that justify the use of robust product/process design are established with an illustrative example. The use of designed experiments to model quality criteria and their optimization is briefly reviewed. The fact that most design-for-quality problems involve multiple quality criteria motivates the development of multiobjective optimization techniques for robust parameter design. Two situations are considered: one in which response surface models for the quality characteristics can be obtained using regression and considered over a continuous factor space, and one in which the problem scenario and the experiment permit only discrete parameter settings for the design factors. In the former scenario, a multiobjective optimization technique based on the reference-point method is presented; this technique also incorporates an inference mechanism to deal with uncertainty in the response surface models caused by finite, noisy data. In the discrete-factors scenario, an efficient method to reduce computational complexity for a class of models is presented.

14.
To address fluctuations in airfoil aerodynamic performance caused by uncertain factors, the effect of random geometric perturbations of an airfoil on its aerodynamic characteristics is investigated, and a robust optimization design is performed to reduce that effect. The class-shape transformation (CST) parameterization is introduced, which greatly reduces the number of design-variable degrees of freedom. Taking the NACA0012 airfoil as an example, an aerodynamic uncertainty analysis accounting for random geometric perturbations shows that small fluctuations in the geometry have little effect on the lift characteristics. A robust optimization design method based on response surfaces and a genetic algorithm is developed that can efficiently reduce drag and its variability. The results show that although the robust optimum has slightly higher drag than the deterministic optimum, its variability is smaller and its aerodynamic performance is more robust.
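A minimal sketch of the CST parameterization mentioned above (class function times a Bernstein-polynomial shape function); the coefficient values below are illustrative and not a fitted NACA0012:

```python
import numpy as np
from math import comb

def cst_thickness(x, a, n1=0.5, n2=1.0):
    """Class-shape transformation: class function x^n1 * (1-x)^n2 times a
    Bernstein-polynomial shape function with coefficients a."""
    n = len(a) - 1
    C = x**n1 * (1 - x) ** n2                       # round nose, sharp tail
    S = sum(a[i] * comb(n, i) * x**i * (1 - x) ** (n - i) for i in range(n + 1))
    return C * S

# Airfoil half-thickness described by only four design variables
# (coefficients chosen for illustration):
x = np.linspace(0, 1, 101)
a = np.array([0.17, 0.15, 0.16, 0.14])
y = cst_thickness(x, a)
print("max half-thickness:", y.max())
```

With n1 = 0.5 and n2 = 1.0 the class function enforces a rounded leading edge and a sharp trailing edge automatically, which is why a handful of Bernstein coefficients suffices as the design vector.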

15.
Response surface methods use least-squares regression analysis to fit low-order polynomials to a set of experimental data. It is becoming increasingly popular to apply response surface approximations to engineering design optimization based on computer simulations. However, the substantial expense involved in obtaining enough data to build quadratic response approximations seriously limits the practical size of problems. Multifidelity techniques, which combine cheap low-fidelity analyses with more accurate but expensive high-fidelity solutions, offer a means by which this prohibitive computational cost can be reduced. Two optimum design problems are considered, both pertaining to fluid flow in diffusers. In both cases, the high-fidelity analyses consist of solutions to the full Navier-Stokes equations, whereas the low-fidelity analyses are either simple empirical formulas or flow solutions to the Navier-Stokes equations obtained on coarse computational meshes. The multifidelity strategy includes the construction of two separate response surfaces: a quadratic approximation based on the low-fidelity data, and a linear correction response surface that approximates the ratio of high- and low-fidelity function evaluations. The paper demonstrates that this approach may yield major computational savings.
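The two-surface strategy (a quadratic low-fidelity response surface plus a linear correction surface for the high/low ratio) can be sketched with toy analyses in place of the Navier-Stokes solvers; the functions and sample counts are this sketch's assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy analyses (stand-ins for coarse-/fine-mesh flow solutions):
f_lo = lambda x: 1.0 + 2.0 * x + 3.0 * x**2
f_hi = lambda x: (1.1 + 0.4 * x) * f_lo(x)          # hi/lo ratio is linear

# Many cheap low-fidelity points -> quadratic response surface.
x_lo = rng.random(30)
c_lo = np.linalg.lstsq(np.vander(x_lo, 3), f_lo(x_lo), rcond=None)[0]

# Few expensive high-fidelity points -> linear correction surface
# fitted to the ratio of high- to low-fidelity evaluations.
x_hi = np.array([0.1, 0.5, 0.9])
ratio = f_hi(x_hi) / f_lo(x_hi)
c_r = np.linalg.lstsq(np.vander(x_hi, 2), ratio, rcond=None)[0]

def predict(x):
    """Corrected surrogate: linear ratio surface times quadratic surface."""
    return np.polyval(c_r, x) * np.polyval(c_lo, x)

xs = np.linspace(0, 1, 11)
err = np.max(np.abs(predict(xs) - f_hi(xs)))
print("max abs error:", err)
```

In this toy case the ratio really is linear, so three expensive evaluations recover the high-fidelity response exactly; in practice the correction surface only needs far fewer points than a full quadratic fit of the high-fidelity model would.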

16.
Genetic algorithms have been applied to many fields of engineering as a general optimization tool, at the price of expensive sampling of the coded design space. To reduce this computational cost in practice, evolutionary strategies increasingly make adaptive use of problem-specific information. This paper proposes a hybrid strategy that maintains a cooperative dynamic memory of the more competitive solutions, combining the indirect information sharing of ant systems with direct constructive genetic search. Suitable coding techniques are employed to enable testing the method with various sets of control parameters. As a challenging field of interest, its application to structural layout optimization is considered, and a traveling salesman problem is also treated as a combinatorial benchmark. Copyright © 2007 John Wiley & Sons, Ltd.

17.
Fluid–structure interactions (FSI) play a crucial role in many engineering fields. However, the computational cost associated with high-fidelity aeroelastic models currently precludes their direct use in industry, especially for strong interactions. The strongly coupled segregated problem that results from domain partitioning can be interpreted as an optimization problem on a fluid–structure interface residual. Multifidelity optimization techniques can therefore be applied directly to this problem to obtain the solution efficiently. Previous work has shown that aggressive space mapping (ASM) can be used in this context. In this contribution, we extend that research towards the use of space mapping for FSI simulations. We investigate the performance of two other approaches, generalized space mapping and output space mapping, by applying them to both compressible and incompressible 2D problems. Moreover, an analysis of the influence of the applied low-fidelity model on the achievable speedup is presented. The results indicate that output space mapping is a viable alternative to ASM when applied in the context of solver coupling for partitioned FSI, showing similar performance to ASM and reducing computational cost by up to 50% with respect to the reference quasi-Newton method. Copyright © 2015 John Wiley & Sons, Ltd.

18.
We present a robust optimization framework that is applicable to general nonlinear programs (NLP) with uncertain parameters. We focus on design problems with partial differential equations (PDE), which involve high computational cost. Our framework addresses the uncertainty with a deterministic worst-case approach. Since the resulting min–max problem is computationally intractable, we propose an approximate robust formulation that employs quadratic models of the involved functions that can be handled efficiently with standard NLP solvers. We outline numerical methods to build the quadratic models, compute their derivatives, and deal with high-dimensional uncertainties. We apply the presented approach to the parametrized shape optimization of systems that are governed by different kinds of PDE and present numerical results.
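A toy illustration of the worst-case idea, using a first-order (rather than full quadratic) model of the uncertain-parameter dependence; the objective, design point, and uncertainty radius are hypothetical, not from the paper:

```python
import numpy as np

# Objective with uncertain parameters u (toy stand-in for a PDE output):
f = lambda x, u: (x - 1.0) ** 2 + u[0] * x + 0.5 * u[1] * x**2

x = 0.8                      # fixed design point
u0 = np.zeros(2)             # nominal parameter values
Delta = 0.1                  # uncertainty radius: ||u - u0|| <= Delta

# Build a local model of f in u at u0 via finite differences. For a
# first-order model, the worst case over the ball has the closed form
#   f(x, u0) + Delta * ||grad_u f||.
h = 1e-6
g = np.array([(f(x, u0 + h * e) - f(x, u0 - h * e)) / (2 * h)
              for e in np.eye(2)])
worst_approx = f(x, u0) + Delta * np.linalg.norm(g)

# Check against brute force over the boundary of the uncertainty ball
# (for an objective linear in u, the maximum lies on the boundary):
angles = np.linspace(0, 2 * np.pi, 361)
U = Delta * np.stack([np.cos(angles), np.sin(angles)], axis=1)
worst_true = max(f(x, u) for u in U)
print("approx worst case:", worst_approx, " brute force:", worst_true)
```

This tractable inner maximization is what replaces the intractable min-max; the paper's quadratic models add curvature terms to the same construction.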

19.
A number of multi-objective evolutionary algorithms have been proposed in recent years and many of them have been used to solve engineering design optimization problems. However, designs need to be robust for real-life implementation, i.e. performance should not degrade substantially under expected variations in the variable values or operating conditions. Solutions of constrained robust design optimization problems should not be too close to the constraint boundaries so that they remain feasible under expected variations. A robust design optimization problem is far more computationally expensive than a design optimization problem as neighbourhood assessments of every solution are required to compute the performance variance and to ensure neighbourhood feasibility. A framework for robust design optimization using a surrogate model for neighbourhood assessments is introduced in this article. The robust design optimization problem is modelled as a multi-objective optimization problem with the aim of simultaneously maximizing performance and minimizing performance variance. A modified constraint-handling scheme is implemented to deal with neighbourhood feasibility. A radial basis function (RBF) network is used as a surrogate model and the accuracy of this model is maintained via periodic retraining. In addition to using surrogates to reduce computational time, the algorithm has been implemented on multiple processors using a master–slave topology. The preliminary results of two constrained robust design optimization problems indicate that substantial savings in the actual number of function evaluations are possible while maintaining an acceptable level of solution quality.
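A bare-bones sketch of surrogate-based neighbourhood assessment, using a Gaussian RBF interpolant as a stand-in for the article's periodically retrained RBF network; the performance function, sample counts, and variation scale are assumptions of this sketch:

```python
import numpy as np

def rbf_fit(X, y, eps=4.0):
    """Train a Gaussian RBF interpolant and return a predictor."""
    sq = lambda A, B: ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    w = np.linalg.solve(np.exp(-eps * sq(X, X)) + 1e-8 * np.eye(len(X)), y)
    return lambda Z: np.exp(-eps * sq(Z, X)) @ w

# A toy "expensive" performance function over two design variables:
f = lambda X: np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])

rng = np.random.default_rng(2)
X_train = rng.random((60, 2))
surrogate = rbf_fit(X_train, f(X_train))

# Neighbourhood assessment of one candidate design: instead of many
# expensive evaluations, query the cheap surrogate at perturbed designs.
x = np.array([0.5, 0.5])
nbrs = x + 0.05 * rng.standard_normal((200, 2))   # expected variations
vals = surrogate(nbrs)
print("mean performance    :", vals.mean())
print("performance variance:", vals.var())
```

The mean and variance computed this way feed the two objectives (maximize performance, minimize its variance) at a tiny fraction of the cost of 200 true function evaluations per candidate.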

20.