Similar Documents
20 similar documents found (search time: 15 ms)
1.
An Overview of First-Order Model Management for Engineering Optimization   (total citations: 3; self-citations: 3; citations by others: 0)
First-order approximation/model management optimization (AMMO) is a rigorous methodology for solving high-fidelity optimization problems with minimal expense in high-fidelity function and derivative evaluations. AMMO is a general approach that is applicable to any derivative-based optimization algorithm and any combination of high-fidelity and low-fidelity models. This paper gives an overview of the principles that underlie AMMO and puts the method in perspective with other similarly motivated methods. AMMO is first illustrated by an example of a scheme for solving bound-constrained optimization problems; the principles extrapolate readily to other optimization algorithms. The applicability to general models is demonstrated on two recent computational studies of aerodynamic optimization with AMMO. One study considers variable-resolution models, where the high-fidelity model is provided by solutions on a fine mesh and the corresponding low-fidelity model is computed by solving the same differential equations on a coarser mesh. The second study uses variable-fidelity physics models, with the high-fidelity model provided by the Navier-Stokes equations and the low-fidelity model by the Euler equations. Both studies show promising savings in terms of high-fidelity function and derivative evaluations. The overview serves to introduce the reader to the general concept of AMMO and to illustrate the basic principles with current computational results.
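As an illustrative aside (not taken from the paper): the first-order consistency at the heart of AMMO can be sketched in a few lines. The toy objectives `f_hi`/`f_lo` below are assumptions; the point is that an additively corrected low-fidelity model matches the high-fidelity value and slope at the current iterate before each approximate subproblem is solved.

```python
import numpy as np

# Toy high- and low-fidelity objectives (illustrative assumptions only).
def f_hi(x):
    return np.sin(x) + 0.1 * x**2          # "expensive" model

def f_lo(x):
    return x - x**3 / 6 + 0.05 * x**2      # "cheap" approximation

def grad(f, x, h=1e-6):
    """Central finite-difference derivative."""
    return (f(x + h) - f(x - h)) / (2 * h)

def additive_correction(x0):
    """Corrected low-fidelity model that matches the high-fidelity value
    and slope at x0 -- the first-order consistency AMMO enforces."""
    a = f_hi(x0) - f_lo(x0)                # value mismatch at x0
    b = grad(f_hi, x0) - grad(f_lo, x0)    # slope mismatch at x0
    return lambda x: f_lo(x) + a + b * (x - x0)

x0 = 0.5
f_corr = additive_correction(x0)
value_gap = abs(f_corr(x0) - f_hi(x0))     # zero by construction
slope_gap = abs(grad(f_corr, x0) - grad(f_hi, x0))
```

By construction the corrected model agrees with the high-fidelity model to first order at `x0`, so a trust-region step taken on it is provably convergent under standard assumptions.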

2.
In simulation-based engineering design optimization, a high-accuracy, high-cost analysis model leads to a heavy computational burden, while a low-accuracy, low-cost analysis model yields optimization results of low credibility that can hardly meet practical engineering requirements. To balance the conflict between high accuracy and low cost, a sequential hierarchical Kriging model is built to fuse high- and low-fidelity data: a large number of cheap low-fidelity sample points capture the trend of the high-fidelity analysis model, and a small number of expensive high-fidelity sample points correct the low-fidelity model, so that the optimization objective can be predicted with high accuracy. To prevent the hierarchical Kriging model's error from degrading the optimization result, the model is combined with a genetic algorithm: following the 6σ design criterion, the prediction interval of each generation's best solution is computed, and a current best solution with a large prediction interval is taken as a new high-fidelity sample point. The hierarchical Kriging model is updated sequentially during optimization to improve its prediction accuracy near the optimum and thereby guarantee the reliability of the design result. The proposed method is applied to the design optimization of a micro-air-vehicle fuselage structure to verify its effectiveness and superiority. Mesh models with different numbers of elements serve as the low- and high-fidelity analysis models, and optimal Latin hypercube design selects 60 low-fidelity and 20 high-fidelity sample points to build the initial hierarchical Kriging model. The solution obtained with the proposed method is compared with that obtained directly from the high-fidelity simulation model. The results show that the method exploits the information at both the high- and low-fidelity sample points to build an accurate hierarchical Kriging model, and finds a near-optimal solution at only a small computational cost, effectively improving design efficiency and providing a reference for similar structural design optimization problems.
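A minimal numeric sketch of the two-stage fusion idea behind such methods (hierarchical Kriging is replaced here by a simple Gaussian-kernel interpolant; the Forrester-style test pair, sample counts, and kernel width are all assumptions for illustration): many cheap points fit the low-fidelity trend, a few expensive points fit a scale factor plus discrepancy.

```python
import numpy as np

# Forrester-style test pair standing in for expensive/cheap solvers
# (the functions, sample counts, and kernel width are assumptions).
f_hi = lambda x: (6 * x - 2) ** 2 * np.sin(12 * x - 4)
f_lo = lambda x: 0.5 * f_hi(x) + 10 * (x - 0.5) - 5

def rbf(a, b, ell=0.15):
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * ell**2))

def gp_interp(x_tr, y_tr, x_te, ell=0.15, nugget=1e-8):
    """Simple Gaussian-kernel interpolant (a stand-in for Kriging)."""
    K = rbf(x_tr, x_tr, ell) + nugget * np.eye(len(x_tr))
    return rbf(x_te, x_tr, ell) @ np.linalg.solve(K, y_tr)

# Stage 1: many cheap points capture the low-fidelity trend.
x_lo = np.linspace(0.0, 1.0, 11)
trend = lambda x: gp_interp(x_lo, f_lo(x_lo), x)

# Stage 2: few expensive points fit rho * trend + discrepancy.
x_hi = np.array([0.0, 0.3, 0.55, 0.8, 1.0])
rho = np.polyfit(trend(x_hi), f_hi(x_hi), 1)[0]   # crude scale factor
delta = lambda x: gp_interp(x_hi, f_hi(x_hi) - rho * trend(x_hi), x)
f_hk = lambda x: rho * trend(x) + delta(x)        # fused surrogate

x_test = np.linspace(0.0, 1.0, 101)
err_hk = np.max(np.abs(f_hk(x_test) - f_hi(x_test)))
err_lo = np.max(np.abs(trend(x_test) - f_hi(x_test)))
```

The fused surrogate interpolates the five expensive evaluations exactly and is far closer to the high-fidelity function everywhere than the raw low-fidelity trend.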

3.
In this article, a hierarchical surrogate model combined with a dimensionality-reduction technique is investigated for uncertainty propagation in high-dimensional problems. In the proposed method, a low-fidelity sparse polynomial chaos expansion model is first constructed to capture the global trend of the model response and to identify a low-dimensional active subspace (AS). A high-fidelity (HF) stochastic Kriging model is then built on the reduced space by mapping the original high-dimensional input onto the identified AS. The effective dimensionality of the AS is estimated by maximum likelihood estimation. The result is an accurate HF surrogate model for uncertainty propagation in high-dimensional stochastic problems. The proposed method is validated on two challenging high-dimensional stochastic examples, and the results demonstrate that it is effective for high-dimensional uncertainty propagation.
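As a small illustration of the active-subspace step (the rank-one test model below is an assumption, not the article's examples): the AS basis is recovered from the eigendecomposition of the sampled gradient covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 10                                            # nominal input dimension

# Hypothetical model f(x) = sin(w.x) that truly varies along one direction w.
w = rng.normal(size=d)
w /= np.linalg.norm(w)
grad_f = lambda X: np.cos(X @ w)[:, None] * w     # analytic gradient

# Active subspace from the eigendecomposition of the gradient covariance.
X = rng.normal(size=(200, d))
G = grad_f(X)
C = G.T @ G / len(X)
eigval, eigvec = np.linalg.eigh(C)
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]    # sort descending

k = int(np.sum(eigval > 1e-8 * eigval[0]))        # effective dimensionality
W1 = eigvec[:, :k]                                # active-subspace basis
```

For this rank-one model the spectrum has a single dominant eigenvalue, so the recovered subspace is one-dimensional and aligned with `w`; an HF surrogate can then be built on `X @ W1` instead of the full ten coordinates.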

4.
The global variable-fidelity modelling (GVFM) method presented in this article extends the original variable-complexity modelling (VCM) algorithm, which uses a low-fidelity model and a scaling function to approximate a high-fidelity function for efficiently solving design-optimization problems. GVFM uses design of experiments to sample values of the high- and low-fidelity functions, exploring the global design space and initializing a scaling function with a radial basis function (RBF) network. This approach removes high-fidelity gradient evaluation from the process, which makes GVFM more efficient than VCM for high-dimensional design problems. The proposed algorithm converges with 65% fewer high-fidelity function calls than VCM for a one-dimensional problem and approximately 80% fewer for a two-dimensional numerical problem. The GVFM method is applied to the design optimization of transonic and subsonic aerofoils. Both aerofoil design problems show design improvement with a reasonable number of high- and low-fidelity function evaluations.
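A rough sketch of the RBF-interpolated multiplicative scaling idea (the toy solver pair, DOE size, and kernel width are assumptions): the ratio of high- to low-fidelity values at a small design of experiments is interpolated by an RBF network and used to scale the cheap model globally, with no gradients required.

```python
import numpy as np

# Toy functions standing in for expensive/cheap solvers (assumptions).
f_hi = lambda x: np.sin(3 * x) + 2.0
f_lo = lambda x: np.sin(3 * x - 0.2) + 2.1

def rbf_fit(x_tr, y_tr, ell=0.3):
    """Interpolate y_tr with a Gaussian RBF network; returns a callable."""
    K = np.exp(-((x_tr[:, None] - x_tr[None, :]) ** 2) / (2 * ell**2))
    wts = np.linalg.solve(K + 1e-9 * np.eye(len(x_tr)), y_tr)
    return lambda x: np.exp(-((x[:, None] - x_tr[None, :]) ** 2) / (2 * ell**2)) @ wts

x_doe = np.linspace(0.0, 2.0, 7)                   # small design of experiments
beta = rbf_fit(x_doe, f_hi(x_doe) / f_lo(x_doe))   # multiplicative scaling
f_vfm = lambda x: beta(x) * f_lo(x)                # scaled low-fidelity surrogate

x_test = np.linspace(0.0, 2.0, 101)
err_vfm = np.max(np.abs(f_vfm(x_test) - f_hi(x_test)))
err_raw = np.max(np.abs(f_lo(x_test) - f_hi(x_test)))
```

The scaled surrogate reproduces the high-fidelity values exactly at the DOE points and reduces the global error relative to the raw low-fidelity model.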

5.
This paper presents two techniques, the proper orthogonal decomposition (POD) and the stochastic collocation method (SCM), for constructing surrogate models to accelerate Bayesian inference for parameter estimation problems associated with partial differential equations. POD is a model-reduction technique that derives reduced-order models using an optimal problem-adapted basis, significantly reducing the problem size and hence the computational cost. SCM is an uncertainty-propagation technique that approximates the parameterized solution and reduces further forward solves to function evaluations. The utility of the techniques is assessed on the nonlinear inverse problem of probabilistically calibrating scalar Robin coefficients from boundary measurements, which arises in the quenching process and in non-destructive evaluation. A hierarchical Bayesian model that flexibly handles the regularization parameter and the noise level is employed, and the posterior state space is explored by Markov chain Monte Carlo. The numerical results indicate that significant computational gains can be realized without sacrificing accuracy. Copyright © 2008 John Wiley & Sons, Ltd.
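As an illustrative aside on the POD step (the synthetic snapshot set below is an assumption): the reduced basis is obtained from the singular value decomposition of a snapshot matrix, truncated at a prescribed fraction of the snapshot energy.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 400, 50                                   # state dimension, snapshot count

# Synthetic snapshots lying (almost) in a 3-dimensional subspace (assumption).
x = np.linspace(0.0, 1.0, n)
modes = np.stack([np.sin(np.pi * x), np.sin(2 * np.pi * x), x**2], axis=1)
S = modes @ rng.normal(size=(3, m)) + 1e-6 * rng.normal(size=(n, m))

# POD basis from the SVD of the snapshot matrix.
U, s, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1     # modes capturing 99.99% energy
Ur = U[:, :r]                                    # POD basis

S_approx = Ur @ (Ur.T @ S)                       # projection onto the POD basis
rel_err = np.linalg.norm(S - S_approx) / np.linalg.norm(S)
```

Three modes capture essentially all of the snapshot energy here, so the reduced-order model works with 3 unknowns instead of 400.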

6.
This paper deals with variable-fidelity optimization, a technique in which the advantages of high- and low-fidelity models are exploited in an optimization process: the high-fidelity model provides solution accuracy while the low-fidelity model reduces the computational cost. An outline of the theory of the Approximation Management Framework (AMF) proposed by Alexandrov (1996) and Lewis (1996) is given. The AMF algorithm provides the mathematical robustness required for variable-fidelity optimization. This paper introduces a subproblem formulation adapted to a modular implementation of the AMF. We also propose two types of second-order corrections (additive and multiplicative) which serve to build the approximation of the high-fidelity model from the low-fidelity one. Results for a transonic airfoil shape-optimization problem are presented. Application of the variable-fidelity algorithm leads to a threefold saving in high-fidelity solver calls compared to direct optimization using the high-fidelity solver only. However, premature stops of the algorithm are observed in some cases. A study of the influence of the solvers' numerical noise on this robustness deficiency shows that numerical noise artificially introduced into an analytical function causes premature stops of the AMF; the numerical noise observed in our CFD solvers is therefore strongly suspected to be the cause of the robustness problems encountered.

7.
Structural uncertainty quantification measures how parameter uncertainty propagates into uncertainty in the structural response. The traditional Monte Carlo method requires a large number of numerical simulations and is too time-consuming to be applied to large, complex structures. Surrogate models, approximate mathematical models built from a small number of training samples, can replace the original physical model in uncertainty quantification to improve computational efficiency. To address the trade-off between expensive high-fidelity samples and inaccurate low-fidelity samples, this paper proposes a generalized co-Gaussian-process model that fuses high- and low-fidelity training samples. Based on this model framework, analytical expressions for the mean and variance of the structural response are derived, giving a closed-form quantification of structural uncertainty. Three spatial-structure examples verify the accuracy of the analytical method, and comparison with the traditional Monte Carlo method, the co-Gaussian-process model, and the Gaussian-process model shows that the proposed method is superior in both accuracy and efficiency.

8.
Stochastic analysis of structures using probability methods requires statistical knowledge of the uncertain material parameters. It is often easier to identify these statistics indirectly from the structural response by solving an inverse stochastic problem. In this paper, a robust and efficient inverse stochastic method based on the non-sampling generalized polynomial chaos method is presented for identifying uncertain elastic parameters from experimental modal data. A data set of natural frequencies is collected from experimental modal analysis of sample orthotropic plates. The Pearson model is used to identify the distribution functions of the measured natural frequencies. This realization is then employed to construct the random orthogonal basis for each vibration mode. The uncertain parameters are represented by polynomial chaos expansions with unknown coefficients and the same random orthogonal basis as the vibration modes, and the coefficients are identified via a stochastic inverse problem. The results show good agreement with the experimental data.
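As a small aside on the polynomial chaos representation used here (the lognormal toy parameter and sample size are assumptions, not the paper's data): a random parameter can be expanded in probabilists' Hermite polynomials of a standard normal germ, with coefficients fitted by least squares; the zeroth coefficient is then the mean.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(5)

# Hypothetical uncertain modulus: lognormal samples playing the role of
# identified data (distribution, scale, and sample size are assumptions).
xi = rng.normal(size=2000)                  # standard normal germ
E = 200.0 * np.exp(0.1 * xi)                # parameter samples

# Least-squares fit of a degree-3 expansion in probabilists' Hermite
# polynomials He_k(xi):  E ~ sum_k c_k He_k(xi).
deg = 3
Psi = np.stack([hermeval(xi, np.eye(deg + 1)[k]) for k in range(deg + 1)], axis=1)
coef, *_ = np.linalg.lstsq(Psi, E, rcond=None)

pce_mean = coef[0]                          # He_0 = 1; higher He_k are zero-mean
true_mean = 200.0 * np.exp(0.1**2 / 2)      # analytic lognormal mean
```

Because the Hermite polynomials are orthogonal under the Gaussian measure, the fitted zeroth coefficient matches the analytic mean of the lognormal parameter to within sampling error.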

9.
We present a model-reduction approach to the solution of large-scale statistical inverse problems in a Bayesian inference setting. A key to the model reduction is an efficient representation of the nonlinear terms in the reduced model. To achieve this, we present a formulation that employs masked projection of the discrete equations; that is, we compute an approximation of the nonlinear term using a select subset of interpolation points. Through this formulation we also show similarities among the existing techniques of gappy proper orthogonal decomposition, missing point estimation, and empirical interpolation via coefficient-function approximation. The resulting model-reduction methodology is applied to a highly nonlinear combustion problem governed by an advection-diffusion-reaction partial differential equation (PDE). Our reduced model is used as a surrogate for a finite element discretization of the nonlinear PDE within the Markov chain Monte Carlo sampling employed by the Bayesian inference approach. In two spatial dimensions, we show that this approach yields accurate results while reducing the computational cost by several orders of magnitude. For the full three-dimensional problem, a forward solve using a reduced model that has high fidelity over the input parameter space is more than two million times faster than the full-order finite element model, making tractable the solution of the statistical inverse problem that would otherwise require many years of CPU time. Copyright © 2009 John Wiley & Sons, Ltd.
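A rough sketch of the interpolation-point idea behind masked projection (the Gaussian-bump snapshot family is an assumption; the greedy selection follows the general DEIM/empirical-interpolation recipe, not necessarily the paper's exact variant): a handful of mask points is chosen greedily from the POD basis of the nonlinear term, and any new snapshot is reconstructed from its values at those points alone.

```python
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)

# Snapshots of a nonlinear term: translated Gaussian bumps (an assumption
# standing in for snapshots of the PDE's nonlinearity).
mus = np.linspace(0.2, 0.8, 40)
F = np.stack([np.exp(-((x - mu) ** 2) / 0.02) for mu in mus], axis=1)

U, s, _ = np.linalg.svd(F, full_matrices=False)
m = 10
Um = U[:, :m]                                 # POD basis of the nonlinear term

# Greedy DEIM-style selection of interpolation ("mask") points.
pts = [int(np.argmax(np.abs(Um[:, 0])))]
for j in range(1, m):
    P = np.array(pts)
    c = np.linalg.solve(Um[P, :j], Um[P, j])  # fit u_j at the chosen points
    r = Um[:, j] - Um[:, :j] @ c              # residual vanishes at those points
    pts.append(int(np.argmax(np.abs(r))))
P = np.array(pts)

# Masked projection: recover a new snapshot from its values at pts alone.
f_new = np.exp(-((x - 0.37) ** 2) / 0.02)
f_rec = Um @ np.linalg.solve(Um[P, :], f_new[P])
rel_err = np.linalg.norm(f_rec - f_new) / np.linalg.norm(f_new)
```

Evaluating the nonlinear term at 10 mask points instead of all 200 grid points is what makes the reduced model cheap inside an MCMC loop.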

10.
A Hurty-Craig-Bampton (HCB) reduced-order component can have an unnecessarily large dimension if it contains many interface degrees of freedom. This is often the case for high-spatial-resolution models. Furthermore, for such high-fidelity models the static constraint modes can be expensive to compute. To overcome these problems, a component mode synthesis method with interface reduction is developed using multifidelity models. The interface reduction basis is computed from the assembled system by coarsening each substructure's mesh while keeping the model resolution at the interface intact. It is shown that such mesh coarsening has a small effect on the quality of the interface reduction basis. Using this reduction basis, the dimension of the static-constraint-modes problem can be reduced and the modes computed at low cost. When few interface modes can be used without significant loss of accuracy, the HCB basis can be enriched with modal truncation augmentation (MTA) vectors to increase accuracy at a small extra cost. The accuracy of a procedure that utilises MTA vectors together with the multifidelity interface reduction is investigated. The method's performance and accuracy are illustrated on a planar problem and a more complex problem from industry.

11.
In this paper, we develop a Bayesian analysis of a threshold autoregressive model with exponential noise. An approximate Bayes methodology, introduced here, and the Gibbs sampler are used to compute marginal posterior densities for the parameters of the model, including the threshold parameter, and to compute one-step-ahead predictive density functions. The proposed methodology is illustrated with a simulation study and a real example.

12.
In this article, we study the balancing principle for Tikhonov regularization in Hilbert scales for deterministic and statistical nonlinear inverse problems. While the rates of convergence in the deterministic setting are order optimal, they prove to be order optimal up to a logarithmic term in the stochastic framework. The two-step approach allows us to consider a data-driven algorithm in a general error model for which an exponential tail bound holds for the estimator chosen in the first step. Finally, we compute the overall rate of convergence for a Hammerstein operator equation and for a parameter-identification problem. We illustrate the rates for the latter application after studying some large-sample properties of the local polynomial estimator in a general stochastic framework.

13.
This paper develops a Bayesian methodology for assessing the confidence in model prediction by comparing the model output with experimental data when both are stochastic. The prior distribution of the response is first computed and then updated, using Bayesian analysis of the experimental observations, to compute a validation metric. A model-error estimation methodology is then developed to include model-form error, discretization error, stochastic analysis (UQ) error, input-data error and output-measurement error. The sensitivity of the validation metric to the various error components and model parameters is discussed, and a numerical example is presented to illustrate the proposed methodology.

14.
Tuning and calibration are processes for improving the representativeness of a computer simulation code with respect to a physical phenomenon. This article introduces a statistical methodology for simultaneously determining tuning and calibration parameters in settings where data are available from both a computer code and the associated physical experiment. Tuning parameters are set by minimizing a discrepancy measure, while the distribution of the calibration parameters is determined from a hierarchical Bayesian model that views the output as a realization of a Gaussian stochastic process with hyper-priors. Draws from the resulting posterior distribution are obtained by Markov chain Monte Carlo simulation. Our methodology is compared with an alternative approach in examples and is illustrated in a biomechanical engineering application. Supplemental materials, including the software and a user manual, are available online and can be requested from the first author.
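As a stripped-down illustration of posterior sampling for a calibration parameter (the quadratic simulator, data, prior, and step size below are all invented; the full methodology additionally models the code output as a Gaussian process): a random-walk Metropolis chain draws from the posterior of a single calibration parameter given noisy field data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical simulator and synthetic field data (all values invented).
model = lambda x, theta: theta * x**2
theta_true, sigma = 1.7, 0.05
x_obs = np.linspace(0.0, 1.0, 15)
y_obs = model(x_obs, theta_true) + sigma * rng.normal(size=15)

def log_post(theta):
    """Gaussian likelihood with a flat prior on (0, 10]."""
    if theta <= 0.0 or theta > 10.0:
        return -np.inf
    resid = y_obs - model(x_obs, theta)
    return -0.5 * np.sum(resid**2) / sigma**2

# Random-walk Metropolis draws from the calibration posterior.
theta, lp, chain = 1.0, log_post(1.0), []
for _ in range(5000):
    prop = theta + 0.05 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)

post_mean = float(np.mean(chain[1000:]))       # discard burn-in
```

After burn-in the chain concentrates near the data-generating value, and the retained draws estimate the posterior mean and spread of the calibration parameter.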

15.
There exists a deep chasm between machine learning (ML) and high-fidelity computational material models in science and engineering. Because of the complex interaction of internal physics, ML methods can hardly reproduce or improve such models. To bridge the chasm, this paper draws on the central notions of deep learning (DL) and proposes an information index and link functions, which are essential for infusing principles of physics into ML. Like the convolution process of DL, the proposed information index integrates adjacent information and quantifies the physical similarity between laboratory and reality, enabling ML to see through a complex target system from the perspective of scientists. Like the weights of DL's hidden layers, the proposed link functions unravel the hidden relations between the information index and physics rules. Like the error backpropagation of DL, the proposed framework adopts the fitness-based spawning scheme of an evolutionary algorithm. The framework demonstrates that a fusion of the information index, link functions, an evolutionary algorithm, and a Bayesian update scheme can engender self-evolving computational material models, and that this fusion can make ML a partner of researchers across science and engineering.

16.
In the computational sciences, optimization problems are frequently encountered when solving inverse problems to compute system parameters from data measured at specific sensor locations, or when performing design of system parameters. The task becomes increasingly complicated in the presence of uncertainties in boundary conditions or material properties. Computing the optimal probability density function (PDF) of parameters based on measurements of physical fields of interest, themselves given as PDFs, is posed as a stochastic optimization problem. It is solved by dividing it into two problems: an auxiliary optimization problem to construct stochastic-space representations from the PDF of the measurement data, and a stochastic optimization problem to compute the PDF of the problem parameters. The auxiliary optimization problem is solved using a downhill simplex method, whilst a gradient-based approach is employed for the stochastic optimization problem; the required gradients are defined using appropriate stochastic sensitivity problems. A computationally efficient sparse-grid collocation scheme is utilized to solve these stochastic sensitivity problems. The implementation requires minimal intrusion into existing deterministic solvers and is thus applicable to a variety of problems. Numerical examples involving stochastic inverse heat-conduction problems, contamination-source identification problems and large-deformation robust design problems are discussed.

17.
A priori model reduction methods based on separated representations are introduced for the prediction of the low-frequency response of uncertain structures within a parametric stochastic framework. The proper generalized decomposition method is used to construct a quasi-optimal separated representation of the random solution at some frequency samples. At each frequency, an accurate representation of the solution is obtained on reduced bases of spatial functions and stochastic functions. An extraction of the deterministic bases allows for the generation of a global reduced basis yielding a reduced-order model of the uncertain structure, which appears to be accurate on the whole frequency band under study and for all values of input random parameters. This strategy can be seen as an alternative to traditional constructions of reduced-order models in structural dynamics in the presence of parametric uncertainties. This reduced-order model can then be used for further analyses such as the computation of the response at unresolved frequencies or the computation of more accurate stochastic approximations at some frequencies of interest. Because the dynamic response is highly nonlinear with respect to the input random parameters, a second level of separation of variables is introduced for the representation of functions of multiple random parameters, thus allowing the introduction of very fine approximations in each parametric dimension even when dealing with high parametric dimension. Copyright © 2011 John Wiley & Sons, Ltd.

18.
19.
The Bayesian inference method has frequently been adopted to develop safety performance functions. One advantage of Bayesian inference is that prior information on the independent variables can be included in the inference procedure. However, few studies have discussed how to formulate informative priors for the independent variables or evaluated the effects of incorporating such priors in developing safety performance functions. This paper addresses that deficiency by introducing four approaches to developing informative priors for the independent variables based on historical data and expert experience. The merits of these informative priors are tested with two types of Bayesian hierarchical models (Poisson-gamma and Poisson-lognormal). The deviance information criterion (DIC), R-square values, and coefficients of variation of the estimates are used as evaluation measures to select the best model(s). Comparison across the models indicates that the Poisson-gamma model is superior, with a better model fit, and is much more robust with the informative priors. Moreover, two-stage Bayesian updating of the informative priors provides the best goodness-of-fit and coefficient-estimation accuracy. Informative priors for the inverse dispersion parameter are also introduced and tested, and the effects of the different types of informative priors on model estimation and goodness-of-fit are compared. Finally, recommendations for future research topics and applications are made.
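As a minimal illustration of the Poisson-gamma machinery behind such informative priors (all counts below are invented): historical data can be turned into a Gamma prior by moment matching, and Gamma-Poisson conjugacy then makes the update closed-form, shrinking the new-site estimate toward the prior.

```python
import numpy as np

# Historical counts at comparable sites (hypothetical data) inform the prior.
hist = np.array([3, 5, 2, 4, 6, 3, 4])           # counts per site-year
m, v = hist.mean(), hist.var()

# Method-of-moments Gamma(a0, b0) prior: mean a0/b0, variance a0/b0**2.
b0 = m / v
a0 = m * b0

# Poisson likelihood for a new site; conjugacy gives the posterior directly.
new_counts = np.array([7, 6, 8])                 # new site's observed counts
a_post = a0 + new_counts.sum()
b_post = b0 + len(new_counts)
post_mean = a_post / b_post                      # shrunk toward the prior mean
```

The posterior mean lies between the historical mean and the new site's raw mean, which is exactly the stabilizing effect an informative prior is meant to provide when the new data are sparse.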

20.
Modal derivatives are an approach to computing a reduced basis for model-order reduction of large-scale nonlinear systems that typically stem from the discretization of partial differential equations. In this way, a complex nonlinear simulation model can be integrated into an optimization problem or the design of a controller, based on the resulting small-scale state-space model. We investigate the approximation properties of modal derivatives analytically and thus lay a theoretical foundation for their use in model-order reduction, which has been missing so far. Concentrating on the application field of structural mechanics and structural dynamics, we show that the concept of modal derivatives can also be applied as a nonlinear extension of the Craig-Bampton family of substructuring methods. We furthermore generalize the approach from a pure projection scheme to a novel reduced-order modeling method that replaces all nonlinear terms by quadratic expressions in the reduced state variables. This complexity reduction leads to a frequency-preserving nonlinear quadratic state-space model. Numerical examples with carefully chosen nonlinear model problems and three-dimensional nonlinear elasticity confirm the analytical properties of the modal-derivative reduction and show the potential of the proposed complexity-reduction methods, along with their current limitations. Copyright © 2016 John Wiley & Sons, Ltd.
