Similar Articles
20 similar articles found (search time: 15 ms)
1.
We propose a novel deep learning based surrogate model for solving high-dimensional uncertainty quantification and uncertainty propagation problems. The proposed architecture integrates the well-known U-net with the Gaussian Gated Linear Network (GGLN) and is referred to as the Gated Linear Network induced U-net, or GLU-net. GLU-net treats uncertainty propagation as an image-to-image regression problem and is therefore extremely data-efficient. It also provides estimates of the predictive uncertainty. The network architecture is less complex than contemporary works, with 44% fewer parameters. We illustrate the performance of GLU-net in solving the Darcy flow problem under uncertainty in a sparse-data scenario, with stochastic input dimensionality up to 4225. Benchmark results are generated using vanilla Monte Carlo simulation. The proposed GLU-net is accurate and extremely efficient even when no information about the structure of the inputs is provided to the network. Case studies varying the training sample size and stochastic input dimensionality illustrate the robustness of the approach.
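The vanilla Monte Carlo benchmark mentioned above can be sketched in a few lines; the quadratic model below is a hypothetical stand-in for the expensive Darcy solver, and the sample count is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Hypothetical cheap stand-in for an expensive PDE solve:
    # maps a high-dimensional random input vector to a scalar QoI.
    return np.sum(x**2, axis=-1)

d = 4225                      # stochastic input dimensionality from the paper
n = 1000                      # illustrative number of Monte Carlo samples
samples = rng.standard_normal((n, d))
qoi = model(samples)

mean_est = qoi.mean()
var_est = qoi.var(ddof=1)
# Standard error of the mean shrinks as 1/sqrt(n), independent of d.
sem = np.sqrt(var_est / n)
```

Note that the Monte Carlo error bar depends only on the sample count, not on the input dimensionality, which is why it serves as the dimension-robust benchmark here.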

2.
Ning Zhang  Huachao Dong 《工程优选》2019,51(8):1336-1351
Constructing approximation models with surrogate modelling is often carried out in engineering design to save computational cost. However, the ‘curse of dimensionality’ remains, and high-dimensional model representation (HDMR) has been proven to be very efficient in solving high-dimensional, computationally expensive black-box problems. This article proposes a new HDMR that combines separate stand-alone metamodels into an ensemble based on cut-HDMR. Compared with previous HDMRs, it improves prediction accuracy and alleviates prediction uncertainty across different problems. Ten representative mathematical examples and two engineering examples are used to illustrate the proposed technique and previous HDMRs. Furthermore, a comprehensive comparison of four metrics between the ensemble HDMR and the single HDMRs is presented across a wide range of dimensionalities. The results show that the single HDMRs perform well on specific examples, but the ensemble HDMR provides more accurate predictions for all the test problems.
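The cut-HDMR backbone that these metamodels share decomposes a function around a cut point c, with the first-order expansion f(x) ≈ f(c) + Σᵢ [f(c₁,…,xᵢ,…,c_d) − f(c)]. A minimal sketch follows (the additive test function is hypothetical; first-order cut-HDMR is exact for such functions):

```python
import numpy as np

def f(x):
    # Hypothetical additive test function; cut-HDMR truncated at
    # first order reproduces additive functions exactly.
    return np.sin(x[0]) + x[1]**2 + 3.0 * x[2]

def cut_hdmr_first_order(func, cut, x):
    """First-order cut-HDMR prediction at x around the cut point."""
    f0 = func(cut)
    pred = f0
    for i in range(len(cut)):
        xi = cut.copy()
        xi[i] = x[i]          # vary one coordinate along the cut line
        pred += func(xi) - f0
    return pred

cut = np.zeros(3)             # cut point, often the input mean
x = np.array([0.5, -1.2, 2.0])
pred = cut_hdmr_first_order(f, cut, x)
```

Higher-order component functions (the bivariate cuts and beyond) are added on top of this backbone when interactions matter.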

3.
Surrogate modeling techniques have been increasingly developed for optimization and uncertainty quantification problems in many engineering fields. Developing surrogates requires modeling high-dimensional and nonsmooth functions with limited information. To this end, hybrid surrogate modeling, in which different surrogate models are combined, offers an effective solution. In this paper, a new hybrid modeling technique is proposed by combining polynomial chaos expansion and a kernel function in a sparse Bayesian learning framework. The proposed hybrid model possesses both the global approximation strength of polynomial chaos expansion and the local approximation strength of the Gaussian kernel. Parameterized priors are utilized to encourage sparsity of the model. Moreover, an optimization algorithm that maximizes the Bayesian evidence is proposed for parameter optimization. To assess the performance of the proposed method, a detailed comparison is made with the well-established PC-Kriging technique. The results show that the proposed method is superior in terms of accuracy and robustness.
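The global ingredient of such a hybrid, a polynomial chaos expansion, can be fitted by plain least squares on an orthogonal basis. The sketch below uses probabilists' Hermite polynomials for a standard normal input and a hypothetical degree-2 test model; it is not the paper's sparse Bayesian scheme:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(1)

def model(xi):
    # Hypothetical test model, exactly degree 2 in the He basis:
    # He0 + 2*He1(xi) + 0.5*He2(xi), with He2(x) = x^2 - 1.
    return 1.0 + 2.0 * xi + 0.5 * (xi**2 - 1.0)

xi = rng.standard_normal(200)       # samples of the standard normal germ
y = model(xi)
deg = 3
Psi = hermevander(xi, deg)          # columns He_0..He_3, orthogonal under N(0,1)
coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)
# coef ≈ [1.0, 2.0, 0.5, 0.0]; the He_0 coefficient is the output mean.
```

The sparse Bayesian framework in the paper would additionally prune negligible coefficients and append kernel terms for local behavior.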

4.
In this work, an adaptive simplex stochastic collocation method is introduced in which sample refinement is informed by variability in the solution of the system. The proposed method is based on the concept of multi-element stochastic collocation methods and is capable of dealing with very high-dimensional models whose solutions are expressed as a vector, a matrix, or a tensor. The method leverages random samples to create a multi-element polynomial chaos surrogate model that incorporates local anisotropy in the refinement, informed by the variance of the estimated solution. This feature makes it beneficial for strongly nonlinear and/or discontinuous problems with correlated non-Gaussian uncertainties. To solve large systems, a reduced-order model (ROM) of the high-dimensional response is identified using singular value decomposition (higher-order SVD for matrix/tensor solutions), and polynomial chaos is used to interpolate the ROM. The method is applied to several stochastic systems with varying response types (scalar/vector/matrix) and shows considerable improvement in performance compared with existing simplex stochastic collocation methods and adaptive sparse grid collocation methods.
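The SVD-based ROM step can be sketched as follows: collect solution snapshots as columns, truncate the SVD at the numerical rank, and use the left singular vectors as a reduced basis. The synthetic rank-3 snapshot matrix below is illustrative, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Snapshot matrix: each column is one high-dimensional solution sample.
n_dof, n_snap, rank = 500, 40, 3
modes = rng.standard_normal((n_dof, rank))
weights = rng.standard_normal((rank, n_snap))
snapshots = modes @ weights          # synthetic rank-3 data

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = int(np.sum(s > 1e-10 * s[0]))    # numerical rank of the snapshot set
basis = U[:, :r]                     # reduced basis

# A new solution with the same structure is recovered from just its
# r reduced coordinates instead of all n_dof degrees of freedom.
x_new = modes @ rng.standard_normal(rank)
x_rec = basis @ (basis.T @ x_new)
```

In the paper's setting, the polynomial chaos surrogate then interpolates the r reduced coordinates over the stochastic space rather than the full response.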

5.
In this work, an alternative machine learning methodology is proposed, which utilizes nonlinear manifold learning techniques in the frame of surrogate modeling. Under the assumption that the solutions of a parametrized physical system lie on a low-dimensional manifold embedded in a high-dimensional Euclidean space, the goal is to unveil the manifold's intrinsic dimensionality and use it for the construction of a surrogate model, which serves as a cost-efficient emulator of the high-dimensional physical system. To this end, a computational framework based on the diffusion maps algorithm is put forth, where a set of system solutions is used to identify the geometry of a low-dimensional space called the diffusion maps space. This space is completely described by a low-dimensional basis constructed from the eigenvectors and eigenvalues of a diffusion operator on the data. The proposed approach exploits the reduced dimensionality of the diffusion maps space to construct locally clustered interpolation schemes between the parameter space, the diffusion maps space, and the solution space, which are cheap to evaluate and highly accurate. This way, the need to formulate and solve the governing equations of the system is eliminated. In addition, a sampling methodology based on the metric of the diffusion maps space is proposed to efficiently sample the parameter space, thus ensuring the quality of the surrogate model. Although it is exploited here in the context of uncertainty quantification, the methodology is applicable to any other problem type that depends on a parametric space (e.g., optimization or sensitivity analysis). The numerical examples show that the proposed surrogate model is capable of high levels of accuracy as well as significant computational gains.
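The core of the diffusion maps algorithm is a Gaussian kernel on the data, row-normalized into a Markov transition matrix whose leading nontrivial eigenvectors supply the low-dimensional coordinates. A minimal sketch on a hypothetical one-dimensional curve embedded in 3D (the median-distance bandwidth is a common heuristic, not necessarily the paper's choice):

```python
import numpy as np

rng = np.random.default_rng(3)

# Data on a 1D curve embedded in 3D: a low-dimensional manifold.
t = np.sort(rng.uniform(0.0, 1.0, 200))
X = np.column_stack([np.cos(2 * t), np.sin(2 * t), 2 * t])

# Pairwise Gaussian kernel with a median-distance bandwidth heuristic.
d2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
eps = np.median(d2)
K = np.exp(-d2 / eps)

# Row-normalize into a Markov transition matrix.
P = K / K.sum(axis=1, keepdims=True)

# Eigenvectors of P give the diffusion-maps coordinates; the leading
# eigenvector is constant (eigenvalue 1) and is discarded.
vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)
psi1 = vecs[:, order[1]].real        # leading nontrivial coordinate
```

The surrogate in the paper then interpolates between the parameter space, these diffusion coordinates, and the solution space.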

6.
This paper presents a novel methodology for structural reliability analysis by means of the stochastic finite element method (SFEM). The key issue in structural reliability analysis is to determine the limit state function and the corresponding multidimensional integral, which are usually related to the structural stochastic displacement and/or its derivatives, e.g., the stress and strain. In this paper, a novel weak-intrusive SFEM is first used to calculate the structural stochastic displacements at all spatial positions. In this method, the stochastic displacement is decoupled into a combination of a series of deterministic displacements with random variable coefficients. An iterative algorithm is then given to solve for the deterministic displacements and the corresponding random variables. Based on the stochastic displacement obtained by the SFEM, the limit state function described by the stochastic displacement (and/or its derivatives) and the corresponding multidimensional integral encountered in reliability analysis can be calculated in a straightforward way. Failure probabilities at all spatial positions can be obtained at once, since the stochastic displacements of all spatial points are already known from the proposed SFEM. Furthermore, the proposed method can be applied to high-dimensional stochastic problems without any modification, so the curse of dimensionality, one of the most challenging problems in high-dimensional reliability analysis, can be circumvented with great success. Three numerical examples, including low- and high-dimensional reliability analyses, are given to demonstrate the good accuracy and high efficiency of the proposed method.

7.
This paper presents an approach for efficient uncertainty analysis (UA) using an intrusive generalized polynomial chaos (gPC) expansion. The key step of gPC-based uncertainty quantification (UQ) is the stochastic Galerkin (SG) projection, which converts a stochastic model into a set of coupled deterministic models. The SG projection generally yields a high-dimensional integration problem with respect to the number of random variables used to describe the parametric uncertainties in a model. However, when the number of uncertainties is large and the governing equations of the system are highly nonlinear, deriving explicit expressions for the gPC coefficients with the SG approach can be challenging because of slow convergence of the SG projection. To tackle this challenge, we propose to use a bivariate dimension reduction method (BiDRM) to approximate the high-dimensional integrals in the SG projection with a few one- and two-dimensional integrations. The efficiency of the proposed method is demonstrated on three different examples, including chemical reactions and cell signaling. Compared with other UA methods, such as Monte Carlo simulation and nonintrusive stochastic collocation (SC), the proposed method shows superior performance in terms of computational efficiency and UA accuracy.
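The univariate version of this dimension reduction idea, on which the bivariate method builds, replaces a d-dimensional integral with d one-dimensional Gauss quadratures along coordinate cuts through the mean: E[g(X)] ≈ Σᵢ E[g(μ₁,…,Xᵢ,…,μ_d)] − (d−1)g(μ). A sketch for independent standard normal inputs (the test function is hypothetical; this truncation is exact for additive functions):

```python
import numpy as np

# 5-point Gauss quadrature for the standard normal weight (probabilists').
nodes, weights = np.polynomial.hermite_e.hermegauss(5)
weights = weights / weights.sum()          # normalize to a probability measure

def g(x):
    # Hypothetical additive test function; the univariate cut
    # approximation is exact for additive g.
    return np.sum(x**3 + x**2)

d = 6
mu = np.zeros(d)                           # cut point at the input mean

est = -(d - 1) * g(mu)
for i in range(d):
    for node, w in zip(nodes, weights):
        xi = mu.copy()
        xi[i] = node                       # vary one coordinate along the cut
        est += w * g(xi)
# est ≈ E[g(X)], here d * (E[X^3] + E[X^2]) = 6 for standard normal inputs
```

The bivariate method in the paper adds two-dimensional cuts on top of this to capture pairwise interactions.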

8.
We address the curse of dimensionality in methods for solving stochastic coupled problems with an emphasis on stochastic expansion methods such as those involving polynomial chaos expansions. The proposed method entails a partitioned iterative solution algorithm that relies on a reduced-dimensional representation of information exchanged between subproblems to allow each subproblem to be solved within its own stochastic dimension while interacting with a reduced projection of the other subproblems. The proposed method extends previous work by the authors by introducing a reduced chaos expansion with random coefficients. The representation of the exchanged information by using this reduced chaos expansion with random coefficients enables an expeditious construction of doubly stochastic polynomial chaos expansions that separate the effect of uncertainty local to a subproblem from the effect of statistically independent uncertainty coming from other subproblems through the coupling. After laying out the theoretical framework, we apply the proposed method to a multiphysics problem from nuclear engineering. Copyright © 2013 John Wiley & Sons, Ltd.

9.
In many engineering optimization problems, the number of function evaluations is often very limited because of the computational cost of running one high-fidelity numerical simulation. Using a classic optimization algorithm, such as a derivative-based or evolutionary algorithm, directly on a computational model is not suitable in this case. A common approach to addressing this challenge is to use black-box surrogate modelling techniques. The most popular surrogate-based optimization algorithm is the efficient global optimization (EGO) algorithm, an iterative sampling algorithm that adds one (or more) point(s) per iteration. This algorithm is often based on an infill sampling criterion, called expected improvement, which represents a trade-off between promising and uncertain areas. Many studies have shown the efficiency of EGO, particularly when the number of input variables is relatively low. However, its performance on high-dimensional problems is still poor, since the Kriging models used are time-consuming to build. To deal with this issue, this article introduces a surrogate-based optimization method suited to high-dimensional problems. The method first uses the ‘locating the regional extreme’ criterion, which combines minimizing the surrogate model with maximizing the expected improvement criterion. It then replaces the Kriging models by KPLS(+K) models (Kriging combined with the partial least squares method), which are more suitable for high-dimensional problems. Finally, the proposed approach is validated by a comparison with alternative methods from the literature on some analytical functions and on 12-dimensional and 50-dimensional instances of the benchmark automotive problem ‘MOPTA08’.
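The expected improvement criterion has a closed form for a Gaussian prediction N(μ, σ²): EI = (y_best − μ)Φ(z) + σφ(z) with z = (y_best − μ)/σ, for minimization. A self-contained sketch using only the standard library:

```python
from math import erf, exp, pi, sqrt

def norm_pdf(z):
    return exp(-0.5 * z * z) / sqrt(2.0 * pi)

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def expected_improvement(mu, sigma, y_best):
    """Closed-form EI of a Gaussian prediction N(mu, sigma^2) for
    minimization, given the best value observed so far."""
    if sigma <= 0.0:
        return max(y_best - mu, 0.0)
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm_cdf(z) + sigma * norm_pdf(z)

# High EI: predicted below the incumbent, with meaningful uncertainty.
ei_promising = expected_improvement(mu=0.5, sigma=0.3, y_best=1.0)
# Near-zero EI: confident prediction no better than the incumbent.
ei_flat = expected_improvement(mu=1.0, sigma=1e-9, y_best=1.0)
```

The two evaluations illustrate the trade-off the abstract describes: EI rewards both a promising predicted value and predictive uncertainty.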

10.
The control of probabilistic Boolean networks as a model of genetic regulatory networks is formulated as an optimal stochastic control problem and has been solved using dynamic programming; however, the proposed methods fail when the number of genes in the network goes beyond a small number. There are two dimensionality problems. First, the complexity of optimal stochastic control increases exponentially with the number of genes. Second, the complexity of estimating the probability distributions specifying the model also increases exponentially with the number of genes. We propose an approximate stochastic control method based on reinforcement learning that mitigates both curses of dimensionality and provides polynomial time complexity. By using a simulator, the proposed method eliminates the complexity of estimating the probability distributions, and because it is model-free, it eliminates the impediment of model estimation. The method can be applied to networks for which dynamic programming cannot be used owing to computational limitations. Experimental results demonstrate that the performance of the method is close to optimal stochastic control.

11.
Performance analysis for automated storage and retrieval systems
Automated storage and retrieval (AS/R) systems have had a dramatic impact on material handling and inventory control in warehouses and production systems. A unit-load AS/R system is generic, and other AS/R systems represent its variations. Common techniques used to predict the performance of a unit-load AS/RS are static analysis and computer simulation. A static analysis requires guessing the ratio of single cycles to dual cycles, which can lead to poor predictions, while computer simulation can be time-consuming and expensive. To resolve the weaknesses of both techniques, we present a stochastic analysis of a unit-load AS/RS using a single-server queueing model with unique features. To our knowledge, this is the first stochastic analysis of unit-load AS/R systems by an analytical method. Experimental results show that the proposed method is robust against violation of the underlying assumptions and is effective for both short-term and long-term planning of AS/R systems.
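As a simple illustration of the single-server queueing view (the paper's model adds AS/RS-specific features beyond this), the textbook M/M/1 steady-state formulas read ρ = λ/μ, L = ρ/(1−ρ), and W = L/λ by Little's law:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state metrics of a plain M/M/1 queue; a unit-load AS/RS
    model would refine the service process beyond this sketch."""
    rho = arrival_rate / service_rate
    if rho >= 1.0:
        raise ValueError("queue is unstable: utilization must be < 1")
    L = rho / (1.0 - rho)            # mean number of requests in the system
    W = L / arrival_rate             # mean time in system, via Little's law
    Wq = W - 1.0 / service_rate      # mean waiting time before service
    return {"utilization": rho, "L": L, "W": W, "Wq": Wq}

# Hypothetical rates, e.g. 8 retrieval requests/hour against a crane
# that completes 10 cycles/hour.
m = mm1_metrics(arrival_rate=8.0, service_rate=10.0)
```

With these illustrative rates, utilization is 0.8 and the mean number of requests in the system is 4, showing how sharply congestion grows as utilization approaches 1.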

12.
The propagation of thermal uncertainty in composite structures poses significant computational challenges. This paper presents the propagation of thermal, ply-level, and material uncertainty in the frequency responses of laminated composite plates by employing a surrogate model capable of dealing with both correlated and uncorrelated input parameters. The present approach introduces generalized high dimensional model representation (GHDMR), wherein diffeomorphic modulation under observable response preserving homotopy (D-MORPH) regression is utilized to ensure the hierarchical orthogonality of the high dimensional model representation component functions. The stochastic thermal field ranges from cryogenic temperatures of 125 K to elevated temperatures of 375 K. Statistical analysis of the first three natural frequencies is presented to illustrate the results and the performance of the approach.

13.
闫海  邓忠民 《复合材料学报》2019,36(6):1413-1420
Drawing on the advantages of deep learning in image recognition, a convolutional neural network (CNN) is applied as a finite element surrogate model to predict the effective elastic parameters of polyurethane composites reinforced with randomly distributed in-plane short fibers, and a data augmentation method is proposed to address the overfitting that arises during training. To verify the effectiveness of the surrogate model, its accuracy in predicting the effective Young's modulus and shear modulus is compared with that of traditional surrogate models. On this basis, the CNN surrogate model is combined with the Monte Carlo method to study the forward propagation of error due to uncertainty in the material's micro-geometric parameters. The results show that, compared with traditional surrogate models, the CNN model better learns the internal features of the image samples, yields more accurate predictions, and maintains good robustness within a certain range outside the training sample space; as the fiber aspect ratio increases, uncertainty in the micro-geometric parameters propagates larger errors into the predicted effective properties.

14.
DIviding RECTangles (DIRECT) is a well-known derivative-free global optimization method that has been found to be effective and efficient for low-dimensional problems. When facing high-dimensional black-box problems, however, DIRECT's performance deteriorates. This work proposes a series of modifications to DIRECT for high-dimensional problems (dimensionality d > 10). The principal idea is to increase the convergence speed by breaking the single initialization-to-convergence approach into several more intricate steps. Specifically, starting with the entire feasible area, the search domain shrinks gradually and adaptively towards the region enclosing the potential optimum. Several stopping criteria are introduced to avoid premature convergence, and a diversification subroutine is developed to prevent the algorithm from being trapped in local minima. The proposed approach is benchmarked on nine standard high-dimensional test functions and one black-box engineering problem. All these tests show a significant efficiency improvement over the original DIRECT for high-dimensional design problems.

15.
Reliability-based techniques have been an area of active research in structural design during the last decade, and different methods have been developed. The same has occurred with stochastic programming, a framework for modeling optimization problems involving uncertainty. The discipline of stochastic programming has grown and broadened to cover a wide range of applications, such as agriculture, capacity planning, energy, finance, fisheries management, production control, scheduling, transportation, and water management, and because of this, techniques for solving stochastic programming models are of great interest to the scientific community. This paper presents a new approach for solving stochastic programming problems with the following characteristics: (i) the joint probability distributions of the random variables are given, (ii) these distributions do not depend on the decisions made, and (iii) the random variables only affect the objective function. The method is based on mathematical programming decomposition procedures and first-order reliability methods, and constitutes an efficient approach for optimizing quantiles in high-dimensional settings. The solution provided by the method allows us to make informed decisions that account for uncertainty.

16.
Metamodels, also known as surrogate models, can be used in place of computationally expensive simulation models to increase computational efficiency for the purposes of design optimization or design space exploration. The accuracy of these metamodels varies with the scale and complexity of the underlying model. In this article, three metamodelling methods are evaluated with respect to their capabilities for modelling high-dimensional, nonlinear, multimodal functions. Methods analyzed include kriging, radial basis functions, and support vector regression. Each metamodelling technique is used to model a set of single output functions with dimensionality ranging from fifteen to fifty independent variables and modality ranging from one to ten local maxima. The number of points used to train the models is increased until a predetermined error threshold is met. Results show that kriging metamodels perform most consistently across a variety of functions, although radial basis functions and support vector regression are very competitive for highly multimodal functions and functions with large local gradients, respectively. Support vector regression metamodels consistently offer the shortest build and prediction times when applied to large scale multimodal problems.
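A Gaussian radial basis function metamodel of the kind compared here interpolates the training data by solving the linear system Φw = y, where Φᵢⱼ = exp(−γ‖xᵢ − xⱼ‖²). A minimal sketch (the 15-dimensional test function and the shape parameter are illustrative, not from the article):

```python
import numpy as np

rng = np.random.default_rng(4)

def rbf_fit(X, y, gamma=1.0):
    """Interpolating Gaussian-RBF metamodel: solve Phi w = y."""
    d2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
    Phi = np.exp(-gamma * d2)
    return np.linalg.solve(Phi, y)

def rbf_predict(X_train, w, X_new, gamma=1.0):
    d2 = ((X_new[:, None, :] - X_train[None, :, :])**2).sum(-1)
    return np.exp(-gamma * d2) @ w

X = rng.uniform(-1.0, 1.0, (60, 15))   # 15-D inputs, the article's low end
y = np.sin(X.sum(axis=1))              # hypothetical single-output function
w = rbf_fit(X, y)
y_hat = rbf_predict(X, w, X)           # interpolant reproduces training data
```

By construction the interpolant passes through every training point; generalization depends on the shape parameter γ and the sample density, which is where the article's accuracy comparison comes in.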

17.
This paper is a first attempt to develop a numerical technique to analyze the sensitivity and the propagation of uncertainty through a system whose inputs are stochastic processes with independent increments. Similar to Sobol’ indices for random variables, a meta-model based on chaos expansions is used and is shown to be well suited to address such problems. New global sensitivity indices are also introduced to tackle the specificity of stochastic processes. The accuracy and efficiency of the proposed method are demonstrated on an analytical example with three different input stochastic processes: a Wiener process, an Ornstein–Uhlenbeck process, and a Brownian bridge process. The considered output, a function of these three processes, is a non-Gaussian process. The same ideas are then applied to an example without a known analytical solution.
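For a chaos-expansion meta-model of a scalar output, Sobol’ indices fall out of the expansion coefficients: each basis term contributes its squared coefficient times the basis norm to the variance, and a first-order index sums the terms involving only that variable. A sketch for two standard normal inputs and a hypothetical degree-2 model with an interaction term:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(5)
n = 4000
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
y = 2.0 * x1 + 0.5 * x2 + 0.3 * x1 * x2   # hypothetical model

# Tensorized Hermite basis up to total degree 2:
# [1, He1(x1), He1(x2), He2(x1), He1(x1)He1(x2), He2(x2)]
H1, H2 = hermevander(x1, 2), hermevander(x2, 2)
Psi = np.column_stack([H1[:, 0], H1[:, 1], H2[:, 1],
                       H1[:, 2], H1[:, 1] * H2[:, 1], H2[:, 2]])
norms = np.array([1.0, 1.0, 1.0, 2.0, 1.0, 2.0])   # E[He_a^2] = a!, tensorized
coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)

var_terms = coef[1:]**2 * norms[1:]    # variance contributed by each term
total_var = var_terms.sum()
# First-order index of x1: basis terms that involve x1 only.
S1 = (var_terms[0] + var_terms[2]) / total_var
```

Here the true variance is 4 + 0.25 + 0.09 = 4.34, so the first-order index of x1 is 4/4.34 ≈ 0.92; the paper's contribution is extending this machinery from random variables to stochastic-process inputs.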

18.
A surrogate stochastic reduced order model is developed for the analysis of randomly parameterized structural systems with complex geometries. It is assumed that the mathematical model is available in terms of large-order finite element (FE) matrices. The structural material properties are assumed to have spatial random inhomogeneities and are modelled as non-Gaussian random fields. A polynomial chaos expansion (PCE) based framework is developed for modelling the random fields directly from measurements and for uncertainty quantification of the response. Difficulties in implementing PCE due to geometrical complexities are circumvented by adopting PCE on a geometrically regular domain that bounds the physical domain, which is shown to lead to a mathematically equivalent representation. The static condensation technique is subsequently extended to stochastic cases based on the PCE formalism to obtain reduced order stochastic FE models. The efficacy of the method is illustrated through two numerical examples.

19.
In the road roughness literature, different stochastic models of parallel road tracks have been suggested. A new method is proposed to evaluate their accuracy by comparing measured parallel tracks with synthetic parallel tracks realized from a stochastic model. A model is judged accurate if synthetic and measured roads induce a similar amount of fatigue damage in a vehicle. A lack-of-fit measure is assigned to the evaluated models, facilitating a quick and simple comparison. The uncertainty of the vehicle fatigue indicated for the measured profile is accounted for in the definition of the lack-of-fit measure, and a bootstrap technique is applied to estimate this uncertainty.

20.
This paper presents two techniques, the proper orthogonal decomposition (POD) and the stochastic collocation method (SCM), for constructing surrogate models to accelerate the Bayesian inference approach for parameter estimation problems associated with partial differential equations. POD is a model reduction technique that derives reduced-order models using an optimal problem-adapted basis to effect significant reduction of the problem size and hence computational cost. SCM is an uncertainty propagation technique that approximates the parameterized solution and reduces further forward solves to function evaluations. The utility of the techniques is assessed on the non-linear inverse problem of probabilistically calibrating scalar Robin coefficients from boundary measurements, arising in the quenching process and non-destructive evaluation. A hierarchical Bayesian model that flexibly handles the regularization parameter and the noise level is employed, and the posterior state space is explored by Markov chain Monte Carlo. The numerical results indicate that significant computational gains can be realized without sacrificing accuracy. Copyright © 2008 John Wiley & Sons, Ltd.
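The Markov chain Monte Carlo exploration that the surrogates accelerate can be illustrated by a bare random-walk Metropolis sampler; the one-dimensional standard normal log-posterior below is a hypothetical stand-in for the Robin-coefficient posterior:

```python
import math
import random

random.seed(0)

def log_post(theta):
    # Hypothetical log-posterior: standard normal up to a constant.
    return -0.5 * theta * theta

def metropolis(log_post, theta0, n_steps, step=1.0):
    """Random-walk Metropolis: propose a Gaussian jump, accept with
    probability min(1, posterior ratio)."""
    theta, samples = theta0, []
    lp = log_post(theta)
    for _ in range(n_steps):
        prop = theta + random.gauss(0.0, step)
        lp_prop = log_post(prop)
        if math.log(random.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta)
    return samples

chain = metropolis(log_post, theta0=3.0, n_steps=20000)
burned = chain[5000:]                      # discard burn-in
mean = sum(burned) / len(burned)
```

Each Metropolis step needs one posterior (i.e., forward-model) evaluation, which is exactly why replacing the PDE solve with a POD or SCM surrogate pays off over tens of thousands of steps.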
