Found 20 similar documents; search took 15 ms
1.
Nitin Agarwal N. R. Aluru 《International journal for numerical methods in engineering》2010,83(5):575-597
This work presents a data‐driven stochastic collocation approach to include the effect of uncertain design parameters during complex multi‐physics simulation of Micro‐ElectroMechanical Systems (MEMS). The proposed framework comprises two key steps: first, probabilistic characterization of the input uncertain parameters based on available experimental information, and second, propagation of these uncertainties through the predictive model to relevant quantities of interest. The uncertain input parameters are modeled as independent random variables, for which the distributions are estimated based on available experimental observations, using a nonparametric diffusion‐mixing‐based estimator, Botev (Nonparametric density estimation via diffusion mixing. Technical Report, 2007). The diffusion‐based estimator derives from the analogy between the kernel density estimation (KDE) procedure and the heat dissipation equation and constructs density estimates that are smooth and asymptotically consistent. The diffusion model allows for the incorporation of the prior density and leads to an improved density estimate, in comparison with the standard KDE approach, as demonstrated through several numerical examples. Following the characterization step, the uncertainties are propagated to the output variables using the stochastic collocation approach, based on sparse grid interpolation, Smolyak (Soviet Math. Dokl. 1963; 4:240–243). The developed framework is used to study the effect of variations in Young's modulus, induced by variations in manufacturing process parameters or heterogeneous measurements, on the performance of a MEMS switch. Copyright © 2010 John Wiley & Sons, Ltd.
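As an aside on the heat-equation analogy above: a minimal 1D sketch (our own toy discretization, not Botev's estimator) evolves a histogram of the samples under the discrete heat equation, where the diffusion pseudo-time plays the role of the squared bandwidth:

```python
import numpy as np

def diffusion_kde(data, grid, t):
    """Toy 1D density estimate via the heat-equation view of KDE.

    A normalized histogram of the samples is evolved under the discrete
    heat equation for pseudo-time t (t ~ bandwidth**2 / 2). Boundaries
    are periodic, so the data are assumed to lie well inside the grid.
    Minimal sketch of the analogy only, not Botev's estimator.
    """
    dx = grid[1] - grid[0]
    u, _ = np.histogram(data, bins=len(grid),
                        range=(grid[0] - dx / 2, grid[-1] + dx / 2))
    u = u / (u.sum() * dx)
    # Explicit finite differences; dt/dx**2 = 0.4 < 0.5 keeps it stable
    # and preserves nonnegativity and total mass.
    dt = 0.4 * dx**2
    for _ in range(max(1, int(t / dt))):
        u = u + dt / dx**2 * (np.roll(u, 1) - 2 * u + np.roll(u, -1))
    return u

grid = np.linspace(-5, 5, 200)
rng = np.random.default_rng(0)
density = diffusion_kde(rng.normal(size=2000), grid, t=0.05)
```

Larger `t` yields smoother estimates, mirroring the bandwidth choice in ordinary KDE.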
2.
Nitin Agarwal N. R. Aluru 《International journal for numerical methods in engineering》2011,85(11):1365-1389
This paper deals with the numerical solution of differential equations with random inputs, defined on a bounded random domain with non‐uniform probability measures. Recently, there has been a growing interest in the stochastic collocation approach, which seeks to approximate the unknown stochastic solution using polynomial interpolation in the multi‐dimensional random domain. Existing approaches employ sparse grid interpolation based on the Smolyak algorithm, which leads to orders of magnitude reduction in the number of support nodes as compared with the usual tensor product. However, such sparse grid interpolation approaches based on piecewise linear interpolation employ uniformly sampled nodes from the random domain and do not take into account the probability measures during the construction of the sparse grids. Such a construction based on uniform sparse grids may not be ideal, especially for highly skewed or localized probability measures. To this end, this work proposes a weighted Smolyak algorithm based on piecewise linear basis functions, which incorporates information regarding non‐uniform probability measures during the construction of sparse grids. The basic idea is to construct piecewise linear univariate interpolation formulas, where the support nodes are specially chosen based on the marginal probability distribution. These weighted univariate interpolation formulas are then used to construct weighted sparse grid interpolants, using the standard Smolyak algorithm. This algorithm results in sparse grids with a higher number of support nodes in regions of the random domain with higher probability density. Several numerical examples are presented to demonstrate that the proposed approach results in a more efficient algorithm for the purpose of computing moments of the stochastic solution, while maintaining the accuracy of the approximation of the solution. Copyright © 2010 John Wiley & Sons, Ltd.
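The core idea of density-aware node placement can be sketched by choosing univariate nodes at equiprobable quantiles of the marginal distribution (an illustrative assumption of ours; the paper's weighted construction is more elaborate):

```python
import numpy as np

def weighted_nodes(inv_cdf, level):
    """Univariate interpolation nodes placed at equiprobable quantiles.

    inv_cdf is the quantile function of the marginal measure; mapping
    uniformly spaced probability levels through it clusters the
    2**level + 1 nodes where the density is high. Illustration of
    density-aware node placement only, not the paper's weighted
    Smolyak construction.
    """
    n = 2**level + 1
    p = (np.arange(n) + 0.5) / n  # avoid p = 0, 1 where inv_cdf may blow up
    return inv_cdf(p)

# Example: an Exp(1) marginal, whose density is concentrated near zero.
nodes = weighted_nodes(lambda p: -np.log(1.0 - p), level=3)
```

For this skewed marginal the node spacing widens away from zero, i.e. nodes concentrate where the probability mass is.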
3.
Kendra L. Van Buren François M. Hemez 《International journal for numerical methods in engineering》2016,105(5):351-371
This work proposes a method for statistical effect screening to identify design parameters of a numerical simulation that are influential to performance while simultaneously being robust to epistemic uncertainty introduced by calibration variables. Design parameters are controlled by the analyst, but the optimal design is often uncertain, while calibration variables are introduced by modeling choices. We argue that uncertainty introduced by design parameters and calibration variables should be treated differently, despite potential interactions between the two sets. Herein, a robustness criterion is embedded in our effect screening to guarantee the influence of design parameters, irrespective of values used for calibration variables. The Morris screening method is utilized to explore the design space, while robustness to uncertainty is quantified in the context of info‐gap decision theory. The proposed method is applied to the National Aeronautics and Space Administration Multidisciplinary Uncertainty Quantification Challenge Problem, which is a black‐box code for aeronautic flight guidance that requires 35 input parameters. The application demonstrates that a large number of variables can be handled without formulating simplifying assumptions about the potential coupling between calibration variables and design parameters. Because of the computational efficiency of the Morris screening method, we conclude that the analysis can be applied to even larger‐dimensional problems. (Approved for unlimited, public release on October 9, 2013, LA‐UR‐13‐27839, Unclassified.) Copyright © 2015 John Wiley & Sons, Ltd.
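A minimal one-at-a-time sketch of Morris elementary-effect screening (simplified from the standard trajectory design; the toy model and parameter choices are ours, not the challenge problem's):

```python
import numpy as np

def morris_elementary_effects(f, k, r=20, delta=0.1, seed=0):
    """One-at-a-time Morris screening on the unit hypercube [0, 1]^k.

    For r random base points, each of the k inputs is perturbed by
    delta in turn and the elementary effect (change in f / delta) is
    recorded. Returns mu_star (mean absolute effect: influence) and
    sigma (std of effects: nonlinearity/interaction) per input.
    Simplified sketch; standard Morris uses trajectories of k+1 points.
    """
    rng = np.random.default_rng(seed)
    effects = np.empty((r, k))
    for i in range(r):
        x = rng.uniform(0, 1 - delta, size=k)  # keep x + delta inside [0, 1]
        fx = f(x)
        for j in range(k):
            xp = x.copy()
            xp[j] += delta
            effects[i, j] = (f(xp) - fx) / delta
    return np.abs(effects).mean(axis=0), effects.std(axis=0)

# Toy model: input 0 strong and linear, input 1 weak, input 2 inert.
mu_star, sigma = morris_elementary_effects(
    lambda x: 10 * x[0] + 0.5 * x[1] ** 2, k=3)
```

Inputs with small `mu_star` can be screened out; a large `sigma` flags nonlinear or interacting inputs that deserve closer study.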
4.
We describe a Gauss–Seidel algorithm for optimizing a three‐dimensional unstructured grid so as to conform to a given metric. The objective function for the optimization process is based on the maximum value of an elemental residual measuring the distance of any simplex in the grid to the local target metric. We analyse different possible choices for the objective function, and we highlight their relative merits and deficiencies. Alternative strategies for conducting the optimization are compared and contrasted in terms of resulting grid quality and computational costs. Numerical simulations are used for demonstrating the features of the proposed methodology, and for studying some of its characteristics. Copyright © 2004 John Wiley & Sons, Ltd.
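The notion of an elemental quality measure combining shape and size can be illustrated with a standard triangle quality ratio (a common textbook measure, not the paper's metric-based residual):

```python
import numpy as np

def triangle_quality(a, b, c):
    """Shape quality q = 4*sqrt(3)*area / (sum of squared edge lengths).

    q = 1 for an equilateral triangle and tends to 0 as the element
    degenerates into a sliver. A common elemental measure; the paper's
    residual additionally accounts for the local target metric.
    """
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    u, v = b - a, c - a
    area = 0.5 * abs(u[0] * v[1] - u[1] * v[0])  # 2D cross product
    sq = sum(float(np.dot(e, e)) for e in (b - a, c - b, a - c))
    return 4.0 * np.sqrt(3.0) * area / sq

q_eq = triangle_quality([0, 0], [1, 0], [0.5, np.sqrt(3) / 2])
q_sliver = triangle_quality([0, 0], [1, 0], [0.5, 0.01])
```

An optimizer of the kind described above would sweep over vertices, moving each to improve the worst adjacent element's quality.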
5.
Kunkun Tang Pietro M. Congedo Rémi Abgrall 《International journal for numerical methods in engineering》2015,102(9):1554-1584
An anchored analysis of variance (ANOVA) method is proposed in this paper to decompose the statistical moments. Compared to the standard ANOVA with mutually orthogonal component functions, the anchored ANOVA, with an arbitrary choice of the anchor point, loses the orthogonality if employing the same measure. However, an advantage of the anchored ANOVA is the considerably reduced number of deterministic solver evaluations, which renders the uncertainty quantification of real engineering problems much easier. Different from existing methods, the covariance decomposition of the output variance is used in this work to take account of the interactions between non‐orthogonal components, yielding an exact variance expansion and thus, with a suitable numerical integration method, a convergent strategy. This convergence is verified on academic tests. In particular, the sensitivity of existing methods to the choice of anchor point is analyzed via the Ishigami case, and we point out that the covariance decomposition is immune to this issue. Also, with a truncated anchored ANOVA expansion, numerical results show that the proposed approach is less sensitive to the anchor point. Covariance‐based sensitivity indices (SI) are also computed and compared with variance‐based SI. Furthermore, we emphasize that the covariance decomposition can be generalized in a straightforward way to decompose higher‐order moments. For academic problems, results show the method converges to the exact solution for both the skewness and the kurtosis. Finally, the proposed method is applied to a realistic case: estimating chemical reaction uncertainties in a hypersonic flow around a space vehicle during atmospheric reentry. Copyright © 2015 John Wiley & Sons, Ltd.
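The first-order anchored (cut-HDMR) expansion that such methods build on can be sketched as follows (a minimal illustration of the decomposition; the paper's covariance-based variance expansion is not shown):

```python
import numpy as np

def anchored_first_order(f, anchor):
    """First-order anchored (cut-HDMR) expansion around an anchor point c:

        f(x) ~ f(c) + sum_i [ f(c with i-th coordinate set to x_i) - f(c) ]

    Each component function only requires evaluations of f along lines
    through the anchor, hence far fewer solver calls than a full tensor
    quadrature. Exact for additively separable models.
    """
    c = np.asarray(anchor, dtype=float)
    f0 = f(c)

    def surrogate(x):
        x = np.asarray(x, dtype=float)
        total = f0
        for i in range(len(c)):
            xi = c.copy()
            xi[i] = x[i]
            total += f(xi) - f0
        return total

    return surrogate

# Additive model: the first-order anchored expansion reproduces it exactly.
g = lambda x: np.sin(x[0]) + x[1] ** 2 + 3.0
g1 = anchored_first_order(g, anchor=[0.3, 0.7])
```

For models with interactions the truncation incurs an error that, as the abstract notes, depends on the anchor point in existing variance formulas.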
6.
Anton G. Zaicenco 《International journal for numerical methods in engineering》2017,110(13):1247-1271
This paper explores the advantages offered by the stochastic collocation method based on Smolyak grids for the solution of differential equations with random inputs in the parameter space. We use sparse Smolyak grids and Chebyshev polynomials to construct a multidimensional basis and approximate decoupled stochastic differential equations via interpolation. Disjoint sets of grid points and basis functions allow us to gain a significant improvement over the conventional Smolyak algorithm. The density function and statistical moments of the solution are obtained by means of quadrature rules if the inputs are uncorrelated and uniformly distributed. Otherwise, a Monte Carlo analysis can be run inexpensively using the obtained sparse approximation. An adaptive technique to sample from a multivariate density function using a sparse grid is proposed to reduce the number of required sampling points. Global sensitivity analysis is viewed as an extension of the sparse interpolant construction and is performed by means of the Sobol' variance‐based or the Kullback–Leibler entropy methods, identifying the degree of contribution from the individual inputs as well as the cross terms. Copyright © 2016 John Wiley & Sons, Ltd.
7.
8.
Gustavo C. Buscaglia Enzo A. Dari 《International journal for numerical methods in engineering》1997,40(22):4119-4136
The construction of solution-adapted meshes is addressed within an optimization framework. An approximation of the second spatial derivative of the solution is used to obtain a suitable metric in the computational domain. A mesh quality measure is proposed and optimized under this metric, accounting for both the shape and the size of the elements. For this purpose, a topological and geometrical mesh improvement method of high generality is introduced. It is shown that the resulting adaptive algorithm recovers optimal convergence rates in singular problems, and that it captures boundary and internal layers in convection-dominated problems. Several important implementation issues are discussed. © 1997 John Wiley & Sons, Ltd.
9.
We discuss a control problem involving a stochastic Burgers equation with a random diffusion coefficient. Numerical schemes are developed, involving the finite element method for the spatial discretisation and the sparse grid stochastic collocation method in the random parameter space. We also use these schemes to compute closed-loop suboptimal state feedback control. Several numerical experiments are performed to demonstrate the efficiency and plausibility of our approximation methods for the stochastic Burgers equation and the related control problem.
10.
John D. Jakeman Michael S. Eldred Gianluca Geraci Alex Gorodetsky 《International journal for numerical methods in engineering》2020,121(6):1314-1343
In this paper, we present an adaptive algorithm to construct response surface approximations of high-fidelity models using a hierarchy of lower fidelity models. Our algorithm is based on multi-index stochastic collocation and automatically balances physical discretization error and response surface error to construct an approximation of model outputs. This surrogate can be used for uncertainty quantification (UQ) and sensitivity analysis (SA) at a fraction of the cost of a purely high-fidelity approach. We demonstrate the effectiveness of our algorithm on a canonical test problem from the UQ literature and a complex multiphysics model that simulates the performance of an integrated nozzle for an unmanned aerospace vehicle. We find that, when the input-output response is sufficiently smooth, our algorithm produces approximations that can be over two orders of magnitude more accurate than single fidelity approximations for a fixed computational budget.
11.
This study investigates the influence of uncertainty in packaging-system parameters on vibration reliability, and analyzes the sensitivity of the vibration reliability index to each uncertain parameter. The Karhunen-Loeve expansion is used to represent stationary random vibration with given spectral characteristics in the space of standard normal random variables, and the first-order reliability method is applied to analyze the vibration reliability index of a linear packaging system. Four random parameters are considered: the elastic and damping properties of the cushioning material, and the elastic and damping properties between the main body of the product and its fragile component...
12.
Edoardo Menga María J. Sánchez Ignacio Romero 《International journal for numerical methods in engineering》2020,121(5):904-924
Nonintrusive methods are now established in the engineering community as a pragmatic approach for the uncertainty quantification (UQ) and global sensitivity analysis (GSA) of complex models. However, especially for computationally expensive models, both types of analyses can only be completed by employing surrogates that replace the original models and are considerably less expensive. This work studies the construction of accurate and predictive meta-models for their use in both UQ and GSA, and their application to complex problems in nonlinear mechanics. In particular, meta-models based on radial functions are examined and enhanced with anisotropic metrics for improved predictiveness and cost effectiveness. Three numerical examples illustrate the performance of the proposed methodology.
13.
Nazmiye Acikgoz Carlo L. Bottasso 《International journal for numerical methods in engineering》2007,71(2):201-223
We report on results obtained with a metric-driven mesh optimization procedure for simplicial meshes based on the simulated annealing (SA) method. The use of SA improves the chances of removing pathological clusters of bad elements that tend to lock into frozen configurations in difficult regions of the model, such as corners and complex face intersections, degrading the overall quality of the final grid. A local version of the algorithm is developed that significantly lowers the computational cost. Numerical examples illustrate the effectiveness of the proposed methodology, which is compared to a classical greedy Gauss–Seidel optimization. Substantial improvement in the quality of the worst elements of the grid is observed for the local simulated annealing optimization. Furthermore, the method appears to be robust to the choice of the algorithmic parameters. Copyright © 2006 John Wiley & Sons, Ltd.
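A generic simulated annealing loop, of the kind adapted to mesh optimization above, accepts worse moves with probability exp(-delta/T) so the search can escape frozen local configurations (the toy objective and cooling schedule below are ours, not the paper's mesh-specific implementation):

```python
import math
import random

def anneal(objective, x0, neighbor, t0=1.0, cooling=0.995, steps=3000, seed=0):
    """Generic simulated annealing minimizer.

    Improving moves are always accepted; worsening moves are accepted
    with probability exp(-(delta)/T), where T decays geometrically.
    This lets the search escape local minima that a greedy
    Gauss-Seidel sweep would lock into.
    """
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = objective(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# Toy multimodal objective with many local minima.
f = lambda x: x * x + 10 * math.sin(3 * x)
xm, fm = anneal(f, x0=4.0, neighbor=lambda x, r: x + r.gauss(0, 0.5))
```

A greedy search started at the same point would typically stall in the nearest basin; the occasional uphill acceptances are what distinguish SA.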
14.
M. Bogomolny 《International journal for numerical methods in engineering》2010,82(5):617-636
This study shows how Combined Approximations (CA) can be used to reduce the computational effort in topology optimization for free vibrations. The previously developed approach is based on the integration of several concepts and methods, including matrix factorization, series expansion, and reduced basis. In this paper, the CA method is used for repeated eigenvalue analysis. Adjoint sensitivity analysis is developed such that the inaccuracies of the approximation are taken into consideration. Several 2‐D and 3‐D numerical examples show how optimal topology designs can be achieved with reduced computational effort compared with exact eigenvalue analysis. Copyright © 2009 John Wiley & Sons, Ltd.
15.
Kaan Öcal Michael U. Gutmann Guido Sanguinetti Ramon Grima 《Journal of the Royal Society Interface》2022,19(192)
Estimating uncertainty in model predictions is a central task in quantitative biology. Biological models at the single-cell level are intrinsically stochastic and nonlinear, creating formidable challenges for their statistical estimation, which inevitably has to rely on approximations that trade accuracy for tractability. Despite intensive interest, a sweet spot in this trade-off has not been found yet. We propose a flexible procedure for uncertainty quantification in a wide class of reaction networks describing stochastic gene expression, including those with feedback. The method is based on creating a tractable coarse-graining of the model, learned from simulations (a synthetic model), to approximate the likelihood function. We demonstrate that synthetic models can substantially outperform state-of-the-art approaches on a number of non-trivial systems and datasets, yielding an accurate and computationally viable solution to uncertainty quantification in stochastic models of gene expression.
16.
Ryan J. Murphy Oliver J. Maclaren Alivia R. Calabrese Patrick B. Thomas David J. Warne Elizabeth D. Williams Matthew J. Simpson 《Journal of the Royal Society Interface》2022,19(197)
Throughout the life sciences, biological populations undergo multiple phases of growth, often referred to as biphasic growth for the commonly encountered situation involving two phases. Biphasic population growth occurs over a massive range of spatial and temporal scales, ranging from microscopic growth of tumours over several days, to decades-long regrowth of corals in coral reefs that can extend for hundreds of kilometres. Different mathematical models and statistical methods are used to diagnose, understand and predict biphasic growth. Common approaches can lead to inaccurate predictions of future growth that may result in inappropriate management and intervention strategies being implemented. Here, we develop a very general computationally efficient framework, based on profile likelihood analysis, for diagnosing, understanding and predicting biphasic population growth. The two key components of the framework are as follows: (i) an efficient method to form approximate confidence intervals for the change point of the growth dynamics and model parameters and (ii) parameter-wise profile predictions that systematically reveal the influence of individual model parameters on predictions. To illustrate our framework we explore real-world case studies across the life sciences.
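Ingredient (i), profile-likelihood confidence intervals, can be sketched for the simplest case of the mean of normal data (a generic illustration with assumed data, not the authors' code or their change-point model):

```python
import numpy as np

def profile_ci(data, level_drop=1.92):
    """Profile-likelihood confidence interval for the mean of normal data.

    The nuisance parameter (sigma) is profiled out: for each candidate
    mean, sigma is set to its conditional MLE. The interval is the set
    of means whose profile log-likelihood stays within level_drop
    (half the chi-square 1-dof 95% quantile, 3.84/2) of the maximum.
    """
    data = np.asarray(data, dtype=float)
    n = len(data)

    def profile_loglik(mu):
        # For fixed mu, the MLE of sigma^2 is the mean squared deviation.
        s2 = np.mean((data - mu) ** 2)
        return -0.5 * n * (np.log(2 * np.pi * s2) + 1)

    mus = np.linspace(data.mean() - 3 * data.std(),
                      data.mean() + 3 * data.std(), 801)
    ll = np.array([profile_loglik(m) for m in mus])
    keep = mus[ll >= ll.max() - level_drop]
    return keep[0], keep[-1]

rng = np.random.default_rng(1)
lo, hi = profile_ci(rng.normal(loc=2.0, scale=1.0, size=100))
```

The same grid-and-threshold recipe extends to any scalar parameter of interest, including a change point, by re-optimizing the remaining parameters at each grid value.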
17.
P.G. Constantine E.T. Phipps T.M. Wildey 《International journal for numerical methods in engineering》2014,99(3):183-202
We consider a multiphysics system with multiple component PDE models coupled together through network coupling interfaces, that is, a handful of scalars. If each component model contains uncertainties represented by a set of parameters, a straightforward uncertainty quantification study would collect all uncertainties into a single set and treat the multiphysics model as a black box. Such an approach ignores the rich structure of the multiphysics system, and the combined space of uncertainties can have a large dimension that prohibits the use of polynomial surrogate models. We propose an intrusive methodology that exploits the structure of the network coupled multiphysics system to efficiently construct a polynomial surrogate of the model output as a function of uncertain inputs. Using a nonlinear elimination strategy, we treat the solution as a composite function: the model outputs are functions of the coupling terms, which are functions of the uncertain parameters. The composite structure allows us to construct and employ a reduced polynomial basis that depends on the coupling terms. The basis can be constructed with many fewer PDE solves than the naive approach, which results in substantial computational savings. We demonstrate the method on an idealized model of a nuclear reactor. Copyright © 2014 John Wiley & Sons, Ltd.
18.
Masayuki Yano 《International journal for numerical methods in engineering》2020,121(23):5200-5226
We introduce a goal-oriented model reduction framework for rapid and reliable solution of parametrized nonlinear partial differential equations with applications in aerodynamics. Our goal is to provide quantitative and automatic control of various sources of errors in model reduction. Our framework builds on the following ingredients: a discontinuous Galerkin finite element (FE) method, which provides stability for convection-dominated problems; reduced basis (RB) spaces, which provide rapidly convergent approximations; the dual-weighted residual method, which provides effective output error estimates for both the FE and RB approximations; output-based adaptive RB snapshots; and the empirical quadrature procedure (EQP), which hyperreduces the primal residual, adjoint residual, and output forms to enable online-efficient evaluations while providing quantitative control of hyperreduction errors. The framework constructs a reduced model which provides, for parameter values in the training set, output predictions that meet the user-prescribed tolerance by controlling the FE, RB, and EQP errors; in addition, the reduced model equips, for any parameter value, the output prediction with an effective, online-efficient error estimate. We demonstrate the framework for parametrized aerodynamics problems modeled by the Reynolds-averaged Navier-Stokes equations; reduced models provide over two orders of magnitude online computational reduction and sharp error estimates for three-dimensional flows.
19.
Anna Madra Piotr Breitkopf Alain Rassineux François Trochu 《International journal for numerical methods in engineering》2017,112(9):1235-1252
A method based on dual kriging is proposed to process X‐ray microtomographic scans of textile composites in order to construct a 3D representation of the fiber architecture with a regulated level of details. The geometry is optimized by using the curvature energy of fiber tow profiles in order to determine the best discretization scheme; then the nugget effect is applied in kriging to smooth the outward surface of fiber tows. This approach allows creating 3D models of variable resolution ranging from the X‐ray scan level to geometric representations with surface meshes required for numerical simulation. The method is applied to a glass fiber textile laminate embedded in a thermoplastic matrix, and preliminary results for the estimation of the local permeability of the fiber tows are presented. Copyright © 2017 John Wiley & Sons, Ltd.
20.
In this study, an inventory-theory-based interval-parameter two-stage stochastic programming (IB-ITSP) model is proposed by integrating inventory theory into an interval-parameter two-stage stochastic optimization framework. This method can not only address system uncertainties with complex presentation but also reflect the transfer batch (the quantity transferred at one time) and period (the corresponding cycle time) in decision-making problems. A case of water allocation problems in water resources management planning is studied to demonstrate the applicability of this method. Under different flow levels, different transfer measures are generated by this method when the promised water allocation cannot be met. Moreover, interval solutions associated with different transfer costs are also provided. They can be used for generating decision alternatives and thus help water resources managers to identify desired policies. Compared with the ITSP method, the IB-ITSP model can provide a positive measure for solving water shortage problems and useful information for decision makers under uncertainty.