Similar Documents
20 similar documents retrieved.
1.
This work presents a data-driven stochastic collocation approach for including the effect of uncertain design parameters in complex multi-physics simulation of Micro-ElectroMechanical Systems (MEMS). The proposed framework comprises two key steps: first, probabilistic characterization of the uncertain input parameters based on available experimental information, and second, propagation of these uncertainties through the predictive model to the relevant quantities of interest. The uncertain input parameters are modeled as independent random variables, whose distributions are estimated from the available experimental observations using a nonparametric diffusion-mixing-based estimator, Botev (Nonparametric density estimation via diffusion mixing. Technical Report, 2007). The diffusion-based estimator derives from the analogy between the kernel density estimation (KDE) procedure and the heat dissipation equation, and constructs density estimates that are smooth and asymptotically consistent. The diffusion model allows for the incorporation of a prior density and leads to an improved density estimate compared with the standard KDE approach, as demonstrated through several numerical examples. Following the characterization step, the uncertainties are propagated to the output variables using the stochastic collocation approach based on sparse grid interpolation, Smolyak (Soviet Math. Dokl. 1963; 4:240–243). The developed framework is used to study the effect on the performance of a MEMS switch of variations in Young's modulus induced by variations in manufacturing process parameters or by heterogeneous measurements. Copyright © 2010 John Wiley & Sons, Ltd.
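The KDE/heat-equation analogy at the core of this estimator can be illustrated in a few lines: diffusing unit point masses placed at the observations for a "time" t gives exactly a Gaussian KDE with bandwidth sqrt(t). The sketch below shows this baseline analogy only, with invented Young's-modulus-like data, not Botev's full prior-informed estimator.

```python
# Minimal sketch of the KDE <-> heat-equation analogy (not Botev's full
# diffusion estimator): evolving the empirical measure under the heat
# equation for "time" t equals Gaussian KDE with bandwidth h = sqrt(t).
import numpy as np

def diffusion_kde(samples, x_grid, t):
    """Density on x_grid after diffusing point masses at `samples` for time t."""
    h = np.sqrt(t)                                 # heat kernel width
    z = (x_grid[:, None] - samples[None, :]) / h
    kern = np.exp(-0.5 * z**2) / (h * np.sqrt(2.0 * np.pi))
    return kern.mean(axis=1)                       # average of heat kernels = KDE

rng = np.random.default_rng(0)
obs = rng.normal(170.0, 5.0, size=200)    # hypothetical Young's-modulus data (GPa)
grid = np.linspace(150.0, 190.0, 401)
pdf = diffusion_kde(obs, grid, t=1.5)     # larger t => more diffusion => smoother
print((pdf * (grid[1] - grid[0])).sum())  # integrates to ~1, a valid density
```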

2.
This paper deals with the numerical solution of differential equations with random inputs, defined on a bounded random domain with non-uniform probability measures. Recently, there has been growing interest in the stochastic collocation approach, which seeks to approximate the unknown stochastic solution using polynomial interpolation in the multi-dimensional random domain. Existing approaches employ sparse grid interpolation based on the Smolyak algorithm, which leads to an orders-of-magnitude reduction in the number of support nodes compared with the usual tensor product. However, such sparse grid interpolation approaches based on piecewise linear interpolation employ uniformly sampled nodes from the random domain and do not take the probability measures into account during the construction of the sparse grids. Such a construction based on uniform sparse grids may not be ideal, especially for highly skewed or localized probability measures. To this end, this work proposes a weighted Smolyak algorithm based on piecewise linear basis functions, which incorporates information about the non-uniform probability measures during the construction of the sparse grids. The basic idea is to construct piecewise linear univariate interpolation formulas whose support nodes are specially chosen based on the marginal probability distribution, as sketched below. These weighted univariate interpolation formulas are then used to construct weighted sparse grid interpolants using the standard Smolyak algorithm. The algorithm results in sparse grids with a higher number of support nodes in regions of the random domain with higher probability density. Several numerical examples demonstrate that the proposed approach yields a more efficient algorithm for computing moments of the stochastic solution while maintaining the accuracy of the approximation. Copyright © 2010 John Wiley & Sons, Ltd.
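One plausible reading of the weighted node placement, sketched in one dimension (the equiprobable-quantile rule here is an assumption for illustration, not necessarily the paper's exact formula): put the piecewise-linear nodes at quantiles of the marginal CDF, so high-density regions automatically receive more nodes than uniform spacing would give them.

```python
# 1D sketch: support nodes placed at quantiles of a skewed marginal, then a
# piecewise-linear surrogate evaluated under that same measure.
import numpy as np
from scipy import stats

def weighted_nodes(marginal, n):
    """n support nodes that are equiprobable under `marginal` (frozen scipy dist)."""
    probs = np.linspace(0.0, 1.0, n + 2)[1:-1]     # interior quantile levels
    return marginal.ppf(probs)

marginal = stats.beta(a=8, b=2)                    # a strongly skewed measure
f = lambda x: np.sin(2 * np.pi * x)                # model response along this axis

nodes = weighted_nodes(marginal, 9)                # clustered where density is high
x = marginal.rvs(size=20000, random_state=0)
approx = np.interp(x, nodes, f(nodes))             # piecewise-linear surrogate

print("surrogate mean:", approx.mean(), " reference mean:", f(x).mean())
```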

3.
Moment-independent regional sensitivity analysis (RSA) is a very useful tool for assessing the effect of a specific range of an individual input on the uncertainty of the model output, but performing RSA involves a large computational burden, which limits its engineering application. The main tasks in performing RSA are to estimate the probability density function (PDF) of the model output and the joint PDF of the model output and the input variable by suitable techniques. Firstly, a method based on the concepts of maximum entropy, fractional moments, and sparse grid integration is used to estimate the PDF of the model output. Secondly, the Nataf transformation is applied to obtain the joint PDF of the model output and the input variable. Finally, through an integral transformation, the regional sensitivity indices can be computed by a Monte Carlo procedure without extra function evaluations. Because all the PDFs can be estimated very efficiently and only a small number of function evaluations are involved in the whole process, the proposed method greatly decreases the computational burden. Several examples with explicit or implicit input–output relations are introduced to demonstrate the accuracy and efficiency of the proposed method. Copyright © 2015 John Wiley & Sons, Ltd.
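The Nataf step admits a compact sketch: correlated standard normals built through a Cholesky factor are pushed through the normal CDF and then through the inverse marginal CDFs, producing a joint distribution with the prescribed marginals. The marginals and correlation value below are illustrative; the fractional-moment maximum-entropy PDF estimation step is not reproduced here.

```python
# Minimal sketch of a Nataf-type construction of a correlated joint
# distribution from given marginals (Gaussian copula).
import numpy as np
from scipy import stats

def nataf_sample(marg1, marg2, rho_z, n, seed=0):
    """Sample (X1, X2) with given marginals, correlated in normal space by rho_z."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(np.array([[1.0, rho_z], [rho_z, 1.0]]))
    z = rng.standard_normal((n, 2)) @ L.T          # correlated standard normals
    u = stats.norm.cdf(z)                          # map to correlated uniforms
    return marg1.ppf(u[:, 0]), marg2.ppf(u[:, 1])  # impose the target marginals

x, y = nataf_sample(stats.lognorm(s=0.3), stats.gamma(a=4), rho_z=0.7, n=50000)
print(np.corrcoef(x, y)[0, 1])                     # induced non-normal correlation
```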

4.
An anchored analysis of variance (ANOVA) method is proposed in this paper to decompose the statistical moments. Compared with standard ANOVA, whose component functions are mutually orthogonal, the anchored ANOVA, with an arbitrary choice of anchor point, loses orthogonality when the same measure is employed. However, an advantage of the anchored ANOVA is the considerably reduced number of deterministic solver computations, which renders the uncertainty quantification of real engineering problems much easier. Different from existing methods, a covariance decomposition of the output variance is used in this work to account for the interactions between non-orthogonal components, yielding an exact variance expansion and thus, with a suitable numerical integration method, a convergent strategy. This convergence is verified on academic tests. In particular, the sensitivity of existing methods to the choice of anchor point is analyzed via the Ishigami case, and we show that the covariance decomposition is free of this issue. Numerical results also show that, with a truncated anchored ANOVA expansion, the proposed approach is less sensitive to the anchor point. Covariance-based sensitivity indices (SI) are used and compared with variance-based SI. Furthermore, we emphasize that the covariance decomposition generalizes in a straightforward way to higher-order moments. For academic problems, results show that the method converges to the exact solution for both skewness and kurtosis. Finally, the proposed method is applied to a realistic case: estimating the uncertainties of chemical reactions in a hypersonic flow around a space vehicle during atmospheric reentry. Copyright © 2015 John Wiley & Sons, Ltd.
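A first-order anchored-ANOVA split and its covariance-based indices can be demonstrated on the Ishigami function itself; the anchor point and sample size below are chosen arbitrarily for illustration. Note how anchoring at the origin annihilates the x3 component, a concrete instance of the anchor-dependence discussed above.

```python
# First-order anchored ANOVA of the Ishigami function with covariance-based
# indices Cov(f_i, f) / Var(f), estimated by Monte Carlo.
import numpy as np

a, b = 7.0, 0.1
def ishigami(x):
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

rng = np.random.default_rng(1)
x = rng.uniform(-np.pi, np.pi, size=(100000, 3))
fx = ishigami(x)
anchor = np.zeros(3)                      # arbitrary anchor; sin(0)=0 kills the
f0 = ishigami(anchor[None, :])[0]         # x3 component: the anchor-choice issue

var_f = fx.var()
for i in range(3):
    xi = np.tile(anchor, (len(x), 1))
    xi[:, i] = x[:, i]                    # vary only the i-th input
    fi = ishigami(xi) - f0                # anchored first-order component
    cov = np.cov(fi, fx)[0, 1]            # covariance with the full model
    print(f"covariance-based S_{i+1} ~ {cov / var_f:.3f}")
```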

5.
In this paper, we present an adaptive algorithm to construct response surface approximations of high-fidelity models using a hierarchy of lower-fidelity models. Our algorithm is based on multi-index stochastic collocation and automatically balances physical discretization error and response surface error to construct an approximation of the model outputs. This surrogate can be used for uncertainty quantification (UQ) and sensitivity analysis (SA) at a fraction of the cost of a purely high-fidelity approach. We demonstrate the effectiveness of our algorithm on a canonical test problem from the UQ literature and on a complex multiphysics model that simulates the performance of an integrated nozzle for an unmanned aerospace vehicle. We find that, when the input-output response is sufficiently smooth, our algorithm produces approximations that can be over two orders of magnitude more accurate than single-fidelity approximations for a fixed computational budget.
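The essence of the multifidelity trade can be shown with a two-level toy (both model "fidelities" below are invented, and this is a drastic reduction of multi-index collocation): interpolate a cheap model densely and add a sparsely sampled correction for the high-fidelity discrepancy, so most evaluations go to the inexpensive model.

```python
# Two-level multifidelity surrogate: dense cheap interpolant + sparse correction.
import numpy as np
from numpy.polynomial import chebyshev

f_hi = lambda x: np.exp(-x) * np.sin(4 * x)        # "expensive" model
f_lo = lambda x: (1 - x) * np.sin(4 * x)           # crude discretization of it

def cheb_nodes(n):                                  # Chebyshev points on [0, 1]
    return 0.5 * (1 - np.cos(np.pi * np.arange(n) / (n - 1)))

x_lo, x_hi = cheb_nodes(17), cheb_nodes(5)          # many cheap runs, few costly
p_lo = chebyshev.Chebyshev.fit(x_lo, f_lo(x_lo), deg=16, domain=[0, 1])
p_d = chebyshev.Chebyshev.fit(x_hi, f_hi(x_hi) - f_lo(x_hi), deg=4, domain=[0, 1])

surrogate = lambda x: p_lo(x) + p_d(x)              # low fidelity + correction
xs = np.linspace(0, 1, 1000)
print("max surrogate error:", np.abs(surrogate(xs) - f_hi(xs)).max())
```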

6.
This study investigates the effect of uncertainty in the parameters of a packaged product on its vibration reliability and analyzes the sensitivity of the vibration reliability index to each uncertain parameter. A Karhunen-Loève expansion is used to represent stationary random vibration with prescribed spectral characteristics in the space of standard normal random variables, and the first-order reliability method is applied to analyze the vibration reliability index of a linear packaging system. Four random parameters are considered: the elastic and damping characteristics of the cushioning material, and the elastic and damping characteristics between the main body of the product and its fragile component...
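The Karhunen-Loève representation step admits a compact numerical sketch (an exponential covariance kernel is assumed for illustration; the spectral characteristics prescribed in the study may differ): eigendecompose the discretized covariance and express the stationary process in independent standard normal variables.

```python
# Discrete Karhunen-Loeve expansion of a stationary process.
import numpy as np

n, T, corr_len = 200, 10.0, 1.0
t = np.linspace(0, T, n)
C = np.exp(-np.abs(t[:, None] - t[None, :]) / corr_len)   # stationary covariance

lam, phi = np.linalg.eigh(C)                       # eigenvalues, ascending
lam, phi = lam[::-1], phi[:, ::-1]                 # sort descending
m = np.searchsorted(np.cumsum(lam) / lam.sum(), 0.95) + 1  # keep 95% energy
print(f"{m} KL terms retained out of {n}")

rng = np.random.default_rng(0)
xi = rng.standard_normal(m)                        # standard normal coordinates
sample_path = phi[:, :m] @ (np.sqrt(lam[:m]) * xi) # one process realization
```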

7.
In this study, a post-optimization technique that corrects the inaccurate optimum obtained using the first-order reliability method (FORM) is proposed for accurate reliability-based design optimization (RBDO). In the proposed method, RBDO using FORM is first performed, and the proposed second-order reliability method (SORM) is then applied at the FORM optimum for more accurate reliability assessment and sensitivity analysis. In the proposed SORM, the Hessian of a performance function is approximated by reusing derivative information accumulated during the previous RBDO iterations using FORM, so that no additional function evaluations are required. The proposed SORM calculates the probability of failure and its first-order and second-order stochastic sensitivities by applying importance sampling to a complete second-order Taylor series of the performance function. The proposed post-optimization step constructs a second-order Taylor expansion of the probability of failure using the results of the proposed SORM. Because this Taylor expansion is based on a reliability method more accurate than FORM, the corrected optimum satisfies the target reliability more accurately. In this way, the proposed method simultaneously achieves both the efficiency of FORM and the accuracy of SORM. Copyright © 2015 John Wiley & Sons, Ltd.
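The importance-sampling step is easy to sketch: sample standard normals recentred at the design point and reweight by the density ratio, evaluating only the second-order Taylor surrogate of the performance function. The limit state, design point, gradient, and Hessian below are toy values, not the paper's Hessian-reuse construction.

```python
# Importance sampling of P_f on a second-order Taylor surrogate, centred at
# the most probable point (MPP) of a toy limit state in standard normal space.
import numpy as np
from scipy import stats

g = lambda u: 3.0 - u[:, 0] + 0.2 * u[:, 1]**2     # toy performance function
u_star = np.array([3.0, 0.0])                      # MPP from an assumed FORM run
grad = np.array([-1.0, 0.0])                       # gradient of g at u_star
H = np.array([[0.0, 0.0], [0.0, 0.4]])             # Hessian of g at u_star

def g_taylor(u):                                   # complete 2nd-order Taylor series
    d = u - u_star
    return g(u_star[None, :])[0] + d @ grad + 0.5 * np.einsum('ni,ij,nj->n', d, H, d)

rng = np.random.default_rng(0)
u = u_star + rng.standard_normal((200000, 2))      # sampling density centred at MPP
w = stats.norm.pdf(u).prod(axis=1) / stats.norm.pdf(u - u_star).prod(axis=1)
pf = np.mean((g_taylor(u) <= 0) * w)               # reweighted failure indicator
print(f"P_f ~ {pf:.2e}  (FORM estimate: {stats.norm.cdf(-3.0):.2e})")
```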

8.
This paper proposes a risk-averse formulation for the problem of piezoelectric control of random vibrations of elastic structures. The proposed formulation, inspired by the notion of risk aversion in economics, is applied to the piezoelectric control of a Bernoulli-Euler beam subjected to uncertainties in its input data. To address the high computational burden associated with the presence of random fields in the model and the discontinuities involved in the cost functional and its gradient, a combination of a nonintrusive anisotropic polynomial chaos approach for uncertainty propagation with a Monte Carlo sampling method is proposed. In the first part, the well-posedness of the control problem is established by proving the existence of optimal controls. In the second part, an adaptive gradient-based method is proposed for the numerical solution of the problem. Several experiments illustrate the performance of the proposed approach and the significant differences that may occur between the classical deterministic formulation of the problem and its stochastic risk-averse counterpart.
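A Monte Carlo sketch of one common risk-averse cost, mean cost plus a penalized exceedance probability, shows where the discontinuity comes from: the indicator term. The resonance-type response and all parameter values below are invented for illustration and the paper's actual functional may differ.

```python
# Risk-averse objective E[J] + beta * P(J > threshold), estimated by sampling;
# the indicator makes the functional (and its gradient) discontinuous.
import numpy as np

rng = np.random.default_rng(0)
omega = rng.normal(1.5, 0.2, size=20000)           # uncertain excitation data

def per_sample_cost(gain):                         # toy resonance-type response
    return 1.0 / ((1 - omega**2)**2 + (gain * omega)**2)

def risk_averse_cost(gain, threshold=1.0, beta=5.0):
    J = per_sample_cost(gain)
    return J.mean() + beta * (J > threshold).mean()   # mean + exceedance penalty

for c in (0.5, 1.0, 2.0):
    print(f"damping gain {c}: risk-averse cost = {risk_averse_cost(c):.4f}")
```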

9.
We address the curse of dimensionality in methods for solving stochastic coupled problems with an emphasis on stochastic expansion methods such as those involving polynomial chaos expansions. The proposed method entails a partitioned iterative solution algorithm that relies on a reduced-dimensional representation of information exchanged between subproblems to allow each subproblem to be solved within its own stochastic dimension while interacting with a reduced projection of the other subproblems. The proposed method extends previous work by the authors by introducing a reduced chaos expansion with random coefficients. The representation of the exchanged information by using this reduced chaos expansion with random coefficients enables an expeditious construction of doubly stochastic polynomial chaos expansions that separate the effect of uncertainty local to a subproblem from the effect of statistically independent uncertainty coming from other subproblems through the coupling. After laying out the theoretical framework, we apply the proposed method to a multiphysics problem from nuclear engineering. Copyright © 2013 John Wiley & Sons, Ltd.

10.
Probabilistic sensitivity analysis identifies the influential uncertain inputs to guide decision-making. We propose a general sensitivity framework with respect to the input distribution parameters that unifies a wide range of sensitivity measures, including information-theoretical metrics such as the Fisher information. The framework is derived analytically via a constrained maximisation, and the sensitivity analysis is reformulated as an eigenvalue problem. Only two main steps are needed to implement the framework using the likelihood ratio/score function method: a Monte Carlo-type sampling step followed by the solution of an eigenvalue equation. The resulting eigenvectors provide the directions for simultaneous variations of the input parameters and indicate where perturbing the uncertainty matters most. Not only is the framework conceptually simple; numerical examples demonstrate that it also provides new sensitivity insights, such as the combined sensitivity of multiple correlated uncertainty metrics, robust sensitivity analysis with an entropic constraint, and approximation of deterministic sensitivities. Three different examples, ranging from a simple cantilever beam to an offshore marine riser, are used to demonstrate the potential applications of the proposed sensitivity framework to applied mechanics problems.
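Two building blocks of such a framework can be sketched for a scalar Gaussian input: the score-function (likelihood-ratio) estimator of the parametric gradient of E[f], and an eigen-decomposition whose leading eigenvector gives a dominant simultaneous-perturbation direction. The model f, the parameter values, and the use of the raw Fisher matrix as the eigen-system are illustrative assumptions; the paper's constrained-maximisation form is not reproduced.

```python
# Score-function gradient of E[f(X)] w.r.t. (mu, sigma), and the eigenvectors
# of the Monte Carlo Fisher information matrix, for X ~ N(mu, sigma).
import numpy as np

mu, sigma = 1.0, 0.5
f = lambda x: x**3 + np.sin(x)                     # illustrative model output

rng = np.random.default_rng(0)
x = rng.normal(mu, sigma, size=500000)

# Score (gradient of log-density) per sample, one column per parameter.
score = np.column_stack([(x - mu) / sigma**2,
                         ((x - mu)**2 - sigma**2) / sigma**3])

grad = (f(x)[:, None] * score).mean(axis=0)        # d E[f] / d(mu, sigma)
fisher = score.T @ score / len(x)                  # Fisher information matrix

eigval, eigvec = np.linalg.eigh(fisher)
print("gradient of E[f]:", grad)
print("dominant direction:", eigvec[:, -1], "eigenvalue:", eigval[-1])
```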

11.
To support effective decision making, engineers should comprehend and manage various uncertainties throughout the design process. Unfortunately, in today's modern systems, uncertainty analysis can become cumbersome and computationally intractable for one individual or group to manage. This is particularly true for systems composed of a large number of components. In many cases, these components may be developed by different groups and may even run on different computational platforms. This paper proposes an approach for decomposing the uncertainty analysis task among the various components comprising a feed-forward system and for synthesizing the local uncertainty analyses into a system-level uncertainty analysis. The proposed decomposition-based multicomponent uncertainty analysis approach is shown to be provably convergent in distribution under certain conditions. The proposed method is illustrated on the quantification of uncertainty for a multidisciplinary gas turbine system and is compared with a traditional system-level Monte Carlo uncertainty analysis approach. Copyright © 2014 John Wiley & Sons, Ltd.
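The flavor of the decomposition can be caught with a two-component feed-forward toy: each group analyzes its component locally and passes only a distributional summary downstream. The components, the Gaussian hand-off, and all numbers below are invented; the paper's provably convergent scheme exchanges richer distributional information than two moments.

```python
# Decomposed vs. all-at-once uncertainty analysis of a chain g(h(X)).
import numpy as np

rng = np.random.default_rng(0)
h = lambda x: np.log1p(x**2)                       # upstream component
g = lambda y: 3 * y + np.sin(y)                    # downstream component

# Group 1: local analysis of h; shares only summary statistics downstream.
y = h(rng.normal(2.0, 0.3, size=100000))
summary = (y.mean(), y.std())                      # what gets handed over

# Group 2: local analysis of g, driven by the received summary.
y_stand_in = rng.normal(*summary, size=100000)     # Gaussian stand-in for Y
z_decomposed = g(y_stand_in)

z_coupled = g(h(rng.normal(2.0, 0.3, size=100000)))  # system-level MC reference
print("decomposed mean/std:", z_decomposed.mean(), z_decomposed.std())
print("coupled    mean/std:", z_coupled.mean(), z_coupled.std())
```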

12.
Uncertainty considered in robust optimization is usually treated as irreducible, since it is not reduced during the optimization procedure. In contrast, uncertainty considered in sensitivity analysis is treated as partially or fully reducible in order to quantify the effect of input uncertainty on the outputs of the system. Considering this, and the usual coexistence of reducible and irreducible uncertainty, an approach that can perform robust optimization and sensitivity analysis simultaneously is of much interest. This article presents such an integrated optimization model, applicable to both robust optimization and sensitivity analysis for problems with irreducible and reducible interval uncertainty, multiple objective functions, and mixed continuous-discrete design variables. The proposed model is illustrated by two engineering examples of differing complexity to demonstrate its applicability.
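In its simplest scalar form, the irreducible-interval ingredient reduces to a worst-case inner problem. The objective, the interval, and the single-objective continuous setting below are simplifying assumptions relative to the article's multi-objective mixed-variable model.

```python
# Worst-case robust optimization over an interval-uncertain parameter p.
import numpy as np
from scipy.optimize import minimize_scalar

f = lambda x, p: (x - p)**2 + 0.1 * x              # objective with uncertain p
p_grid = np.linspace(0.8, 1.2, 41)                 # interval uncertainty [0.8, 1.2]

robust = lambda x: max(f(x, p) for p in p_grid)    # inner worst-case evaluation
res = minimize_scalar(robust, bounds=(0, 2), method='bounded')
print("robust optimum x* ~", res.x, " worst-case objective:", res.fun)
```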

13.
A performance-based design sensitivity analysis procedure for inelastic steel moment frameworks under equivalent static earthquake loading is presented in this paper. Analytical formulations defining the sensitivity of displacements to modifications in member sizes are derived based on a load-control pushover analysis procedure. Only material non-linearity under bending moment is accounted for. Although the formulations were derived for continuous design variables, they are readily extended to the case of discrete design variables. A 3-storey moment frame example illustrates the applicability and accuracy of the developed methodology. Copyright © 2005 John Wiley & Sons, Ltd.

14.
This work focuses on devising an efficient hierarchy of higher-order methods for linear transient analysis, equipped with an effective dissipative action on the spurious high modes of the response. The proposed strategy stems from the Nørsett idea and is based on a multi-stage algorithm designed to hierarchically improve accuracy while retaining the desired dissipative behaviour. Computational efficiency is pursued by requiring that each stage involve just one set of implicit equations of the size of the problem to be solved (as in standard time integration methods) and, in addition, that all stages share the same coefficient matrix. This target is achieved by formulating the methods on the basis of the discontinuous collocation approach. The resulting procedure is shown to be well suited to adaptive solution strategies. In particular, it embeds two natural tools to effectively control the error propagation: one estimates the local error through the next-stage solution, which is one order more accurate; the other through the solution discontinuity at the beginning of the current time step, which is permitted by the present formulation. The performance of the procedure and the quality of the two error estimators are experimentally verified on different classes of problems. Some typical numerical tests in transient heat conduction and elasto-dynamics are presented. Copyright © 2006 John Wiley & Sons, Ltd.
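The "next-stage solution as error estimator" idea is generic and can be sketched with the simplest possible stage pair, explicit Euler embedded in Heun (not the paper's implicit discontinuous-collocation family): the second stage is one order more accurate, so the stage difference estimates the local error and drives the step size.

```python
# Adaptive time stepping with an embedded next-stage error estimate.
import numpy as np

def adaptive_heun(f, y0, t_end, tol=1e-5, dt=0.1):
    t, y, history = 0.0, y0, [(0.0, y0)]
    while t < t_end:
        dt = min(dt, t_end - t)
        k1 = f(t, y)
        y_euler = y + dt * k1                      # stage 1: first-order prediction
        k2 = f(t + dt, y_euler)
        y_heun = y + 0.5 * dt * (k1 + k2)          # stage 2: one order more accurate
        err = abs(y_heun - y_euler)                # local error estimate
        if err <= tol:                             # accept the step
            t, y = t + dt, y_heun
            history.append((t, y))
        dt *= min(2.0, max(0.2, 0.9 * np.sqrt(tol / max(err, 1e-14))))
    return history

hist = adaptive_heun(lambda t, y: -50 * (y - np.cos(t)), y0=0.0, t_end=2.0)
print(len(hist), "accepted steps, final y =", hist[-1][1])
```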

15.
The paper presents a 3D-based adaptive first-order shell finite element to be applied to hierarchical modelling and adaptive analysis of complex structures. The main feature of the element is that it is equipped with 3D degrees of freedom, while its mechanical model corresponds to classical first-order shell theory. Other useful features of the element are its modelling and adaptive capabilities. The element is intended for hierarchical modelling and hpq-adaptive analysis of the shell parts of complex structures consisting of solid, thick- and thin-shell parts, as well as of transition zones, where h, p and q denote the mesh density parameter and the longitudinal and transverse orders of approximation, respectively. The proposed hp-adaptive first-order shell element can be joined with 3D-based hpq-adaptive hierarchical shell elements or 3D hpp-adaptive solid elements by means of the family of 3D-based hpq/hp- or hpp/hp-adaptive transition elements. The main objective of the first part of our research, presented in the first part of the paper, was to provide non-standard information on the original parts of the element algorithm. Here we describe the second part of the research, devoted to the methodology and results of applying the element to various plate and shell problems. The main objective of this part is to verify the algorithms of the element and to show its usefulness in the modelling and adaptive analysis of the shell and plate parts of complex structures. To this end, we present the results of a comparative analysis of model plate and shell problems using the classical and the proposed elements, with equidistributed and integrated Legendre shape functions. For the plate problem, a comparison of the results obtained from adaptive and non-adaptive analyses is also included. Additionally, some advantages of the application of our element are shown through a comparative analysis of p-convergence of the thin plate problem and an adaptive analysis of an exemplary complex structure. Copyright © 2006 John Wiley & Sons, Ltd.

16.
Numerical simulators are widely used to model physical phenomena, and global sensitivity analysis (GSA) aims at studying the global impact of the input uncertainties on the simulator output. To perform GSA, statistical tools based on input/output dependence measures are commonly used. We focus here on the Hilbert-Schmidt independence criterion (HSIC). Sometimes, the probability distributions modeling the uncertainty of the inputs may themselves be uncertain, and it is important to quantify the impact of this on GSA results. We call this second-level global sensitivity analysis (GSA2). However, GSA2 performed with a Monte Carlo double loop requires a large number of model evaluations, which is intractable for CPU-time-expensive simulators. To cope with this limitation, we propose a new statistical methodology based on a Monte Carlo single loop with a limited calculation budget. First, we build a single sample of inputs and simulator outputs from a well-chosen probability distribution of the inputs. From this sample, we perform GSA for various assumed probability distributions of the inputs by using weighted HSIC measure estimators. The statistical properties of these weighted estimators are demonstrated. Subsequently, we define second-level HSIC-based measures between the distributions of the inputs and the GSA results, which constitute the GSA2 indices. The efficiency of our GSA2 methodology is illustrated on an analytical example, on which several technical options are compared. Finally, an application to a test case simulating a severe accident scenario in a nuclear reactor is provided.
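The plain, unweighted HSIC V-statistic that the weighted estimators generalise takes only a few lines with Gaussian kernels (median-heuristic bandwidth assumed). Unlike linear correlation, it detects the non-monotone dependence in the example below.

```python
# Unweighted HSIC V-statistic with Gaussian kernels: HSIC = tr(K H L H) / n^2.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def gaussian_gram(v):
    d = squareform(pdist(v[:, None]))              # pairwise distances
    bw = np.median(d[d > 0])                       # median-heuristic bandwidth
    return np.exp(-(d / bw)**2 / 2)

def hsic(x, y):
    n = len(x)
    K, L = gaussian_gram(x), gaussian_gram(y)
    H = np.eye(n) - np.ones((n, n)) / n            # centring matrix
    return np.trace(K @ H @ L @ H) / n**2

rng = np.random.default_rng(0)
x1, x2 = rng.uniform(-1, 1, 300), rng.uniform(-1, 1, 300)
y = x1**2 + 0.1 * rng.standard_normal(300)         # y depends on x1 only
print("HSIC(x1, y) =", hsic(x1, y))                # clearly positive
print("HSIC(x2, y) =", hsic(x2, y))                # near zero
```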

17.
This paper presents a method to compute consistent response sensitivities of force-based finite element models of structural frame systems with respect to both material constitutive and discrete loading parameters. It has been shown that force-based frame elements are superior to classical displacement-based elements in the sense that they enable, at no significant additional cost, a drastic reduction in the number of elements required for a given level of accuracy in the computed response of the finite element model. This advantage of force-based elements is of even greater interest in structural reliability analysis, which requires accurate and efficient computation of the structural response and its sensitivities. This paper focuses on material non-linearities in the context of both static and dynamic response analysis. The formulation presented herein assumes the use of a general-purpose non-linear finite element analysis program based on the direct stiffness method. It is based on the so-called direct differentiation method (DDM) for computing response sensitivities. The complete analytical formulation is presented at the element level, and details are provided about its implementation in a general-purpose finite element analysis program. The new formulation and its implementation are validated through application examples in which analytical response sensitivities are compared with their counterparts obtained using forward finite difference (FFD) analysis. The force-based finite element methodology, augmented with the developed procedure for analytical response sensitivity computation, offers a powerful general tool for structural response sensitivity analysis. Copyright © 2004 John Wiley & Sons, Ltd.
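The DDM-versus-FFD contrast fits in a one-degree-of-freedom toy (the residual below is a made-up stand-in for an element state determination): differentiate the converged residual equation and reuse the already-assembled tangent, instead of re-solving a perturbed problem.

```python
# DDM vs. FFD sensitivity on a toy nonlinear residual R(u; k) = k*u + u^3 - P.
import numpy as np

P = 10.0

def solve(k, u=1.0):
    for _ in range(50):                            # Newton iterations
        R = k * u + u**3 - P
        Kt = k + 3 * u**2                          # consistent tangent dR/du
        u -= R / Kt
    return u, k + 3 * u**2

k0 = 2.0
u0, Kt = solve(k0)

# DDM: R(u(k); k) = 0  =>  du/dk = -Kt^{-1} * dR/dk, reusing the tangent
# already available at the converged state (here dR/dk = u).
ddm = -u0 / Kt

# FFD: re-solve a perturbed problem; the result depends on the step size.
h = 1e-6
ffd = (solve(k0 + h)[0] - u0) / h
print(f"DDM: {ddm:.8f}   FFD: {ffd:.8f}")
```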

18.
This paper presents a level-set-based topology optimization method built on numerically consistent sensitivity analysis. The proposed method uses a direct steepest-descent update of the design variables of the level-set method, namely the level-set nodal values. An exact Heaviside formulation is used to relate the level-set function to element densities. The level-set function is not required to be a signed-distance function, and reinitialization is not necessary. Using this approach, level-set-based topology optimization problems can be solved consistently and multiple constraints can be treated simultaneously. The proposed method leads to more insight into the nature of level-set-based topology optimization problems. The level-set-based design parametrization can describe gray areas and numerical hinges, and consistency causes the results to contain these numerical artifacts. We demonstrate that alternative parameterizations, or level-set-based or density-based regularization, can be used to avoid such artifacts in the final results. The effectiveness of the proposed method is demonstrated on several benchmark problems, and its capability to treat multiple constraints shows its potential. Furthermore, owing to the consistency, the optimizer can run into local minima, a fundamental difficulty of level-set-based topology optimization. More advanced optimization strategies and more efficient optimizers may increase the performance in the future. Copyright © 2012 John Wiley & Sons, Ltd.

19.
A continuum sensitivity analysis is presented for large inelastic deformations and metal forming processes. The formulation is based on the differentiation of the governing field equations of the direct problem and the development of weak forms for the corresponding field sensitivity equations. Special attention is given to modelling the sensitivity boundary conditions that arise from frictional contact between the die and the workpiece. The contact problem in the direct deformation analysis is modelled using an augmented Lagrangian formulation. To avoid issues of non-differentiability of the contact conditions, appropriate regularizing assumptions are introduced for the calculation of the sensitivity of the contact tractions. The proposed analysis is used to calculate sensitivity fields with respect to various process parameters, including the die surface. The accuracy and effectiveness of the proposed method are demonstrated with a number of representative example problems. In the die design applications, a Bézier representation of the die curve is introduced, with the control points of the Bézier curve used as the design parameters. Comparison of the computed sensitivity results with those obtained from direct analyses of two nearby dies and a finite difference approximation indicates very high accuracy of the proposed analysis. The method is applied to the design of extrusion dies that minimize the standard deviation of the material state in the final product or minimize the required extrusion force for a given reduction ratio. An open-forging die is also designed which, for a specified stroke and initial workpiece, produces a final product of the desired shape. Copyright © 2000 John Wiley & Sons, Ltd.
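A Bézier die-curve parametrization is easy to sketch (the control points below are illustrative): de Casteljau evaluation of the curve, plus the design sensitivity of any curve point with respect to a control point, which is simply the corresponding Bernstein polynomial.

```python
# Bezier curve from control points (design parameters), via de Casteljau.
import numpy as np
from math import comb

def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve at parameter values t; ctrl has shape (n+1, 2)."""
    pts = np.repeat(ctrl[None, :, :], len(t), axis=0)
    while pts.shape[1] > 1:                        # repeated linear interpolation
        pts = (1 - t)[:, None, None] * pts[:, :-1] + t[:, None, None] * pts[:, 1:]
    return pts[:, 0, :]

def bernstein(n, i, t):                            # d(curve)/d(ctrl_i), per axis
    return comb(n, i) * t**i * (1 - t)**(n - i)

ctrl = np.array([[0.0, 1.0], [0.4, 0.9], [0.7, 0.5], [1.0, 0.3]])  # die profile
t = np.linspace(0, 1, 5)
print(de_casteljau(ctrl, t))
print("sensitivity to control point 2:", bernstein(3, 2, t))
```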

20.
Most physical parameters of high-temperature components are correlated with the temperature parameter, which makes reliability-based structural optimization design more complicated. So far, few efficient sensitivity analysis methods suited to highly nonlinear performance functions with correlated non-normal variables have been developed. An effective simulation approach is presented here to perform reliability sensitivity analysis with the aid of Cholesky factorization and curve fitting. Moreover, to improve programming and computational efficiency, a mixed-programming technique combining Visual Basic and MATLAB was employed to develop a reliability sensitivity analysis program. A numerical case shows that the results are consistent with those from traditional methods, and engineering cases indicate that the proposed method can be effectively used for highly nonlinear functions with correlated non-normal parameters. The results also demonstrate that the correlation coefficient parameter of high-temperature components is very important in reliability-based structural optimization design; hence, this parameter should be fully taken into consideration to ensure that the analysis results are accurate and reasonable. Copyright © 2011 John Wiley & Sons, Ltd.
