Similar documents
20 similar documents found
1.
This paper is a first attempt to develop a numerical technique to analyze the sensitivity and the propagation of uncertainty through a system whose inputs are stochastic processes with independent increments. Similar to Sobol' indices for random variables, a meta-model based on chaos expansions is used and is shown to be well suited to such problems. New global sensitivity indices are also introduced to address the specific nature of stochastic processes. The accuracy and efficiency of the proposed method are demonstrated on an analytical example with three different input stochastic processes: a Wiener process, an Ornstein–Uhlenbeck process, and a Brownian bridge process. The considered output, a function of these three processes, is a non-Gaussian process. The same ideas are then applied to an example with no known analytical solution.
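A hedged illustration of this kind of analysis: the sketch below estimates Sobol'-type first-order indices for a toy output driven by the three named processes, using a plain pick-freeze Monte Carlo estimator rather than the paper's chaos-expansion metamodel; the output functional, step counts and sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_mc = 100, 10_000
dt = 1.0 / n_steps

def wiener(z):                                   # Wiener path from standard normal increments z
    return np.cumsum(np.sqrt(dt) * z, axis=1)

def ornstein_uhlenbeck(z, theta=1.0, sigma=1.0): # Euler-Maruyama discretization
    x, path = np.zeros(z.shape[0]), []
    for k in range(z.shape[1]):
        x = x - theta * x * dt + sigma * np.sqrt(dt) * z[:, k]
        path.append(x)
    return np.column_stack(path)

def brownian_bridge(z):
    w = wiener(z)
    t = np.linspace(dt, 1.0, z.shape[1])
    return w - t * w[:, [-1]]                    # pinned to zero at t = 1

def output(zw, zo, zb):                          # toy non-Gaussian functional of the three processes
    return wiener(zw)[:, -1] * ornstein_uhlenbeck(zo)[:, -1] + brownian_bridge(zb)[:, n_steps // 2]

A = [rng.standard_normal((n_mc, n_steps)) for _ in range(3)]
B = [rng.standard_normal((n_mc, n_steps)) for _ in range(3)]
yA = output(*A)
for i, name in enumerate(["Wiener", "OU", "bridge"]):
    mixed = list(B)
    mixed[i] = A[i]                              # pick-freeze: reuse the i-th driving noise
    yC = output(*mixed)
    S_i = (np.mean(yA * yC) - yA.mean() * yC.mean()) / yA.var()
    print(f"first-order index, {name} noise: {S_i:.3f}")
```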

2.
A method is proposed for the optimization, by finite element analysis, of design variables of sheet metal forming processes. The method is useful when the non-controllable process parameters (e.g. the coefficient of friction or the material properties) can be modelled as random variables, introducing a degree of uncertainty into any process solution. The method is suited to problems with large FEM computational times and a small process window. The problem is formulated as the minimization of a cost function subject to a reliability constraint. The cost function is indirectly optimized through a "metamodel" built by Kriging interpolation. The reliability, i.e. the failure probability, is assessed by a binary logistic regression analysis of the simulation results. The method is applied to the U-channel forming and springback problem presented at Numisheet 1993, modified by treating the blankholder force as a time-dependent variable.
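A hedged sketch of the overall workflow: a Kriging metamodel of the cost plus a logistic-regression failure model, used to minimize predicted cost under a reliability constraint. The toy cost and failure data below stand in for FEM simulation results, and all variable names are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(40, 2))          # normalized design variables (placeholders)
cost = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.standard_normal(40)   # "simulated" cost
failed = (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(int)                         # binary failure flag

krig = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True).fit(X, cost)
logit = LogisticRegression().fit(X, failed)      # metamodel of the failure probability

p_max = 0.05                                     # allowed failure probability
cons = {"type": "ineq",
        "fun": lambda x: p_max - logit.predict_proba(x.reshape(1, -1))[0, 1]}
res = minimize(lambda x: krig.predict(x.reshape(1, -1))[0],
               x0=np.array([0.3, 0.3]), bounds=[(0, 1), (0, 1)], constraints=cons)
print("optimal design:", res.x, " predicted cost:", res.fun)
```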

3.
In deterministic computer experiments, it is often known that the output is a monotonic function of some of the inputs. In these cases, a monotonic metamodel will tend to give more accurate and interpretable predictions, with less prediction uncertainty, than a nonmonotonic metamodel. The widely used Gaussian process (GP) models are not monotonic. A recent article in Biometrika offers a modification that projects GP sample paths onto the cone of monotonic functions. However, that approach does not account for the fact that the GP model is more informative about the true function at locations near design points than at locations far away. Moreover, it uses a grid-based method, which is memory intensive and gives predictions only at grid points. This article proposes a weighted projection approach that uses the information in the GP model more effectively, together with two computational implementations. The first is isotonic regression on a grid, while the second is projection onto a cone of monotone splines, which alleviates the problems faced by a grid-based approach. Simulations show that the monotone B-spline metamodel gives particularly good results. Supplementary materials for this article are available online.
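A hedged sketch of the grid-based projection idea: GP posterior sample paths are projected onto the monotone cone with plain, unweighted isotonic regression; the article's weighted projection and B-spline variant are not reproduced, and the training data are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(2)
x_train = np.array([[0.05], [0.3], [0.55], [0.8], [0.95]])
y_train = np.sin(2.0 * x_train[:, 0]) + 0.01 * rng.standard_normal(5)   # monotone truth on [0, 1]

gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-4).fit(x_train, y_train)
grid = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
paths = gp.sample_y(grid, n_samples=200, random_state=3)                # (101, 200) posterior paths

iso = IsotonicRegression(increasing=True)
monotone_paths = np.column_stack(
    [iso.fit_transform(grid.ravel(), paths[:, j]) for j in range(paths.shape[1])]
)
pred_mean = monotone_paths.mean(axis=1)          # monotone point prediction on the grid
print(pred_mean[:5])
```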

4.
Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data on the input variables, so as to better characterize their probability distributions, can reduce the variance of statistical estimates. The proposed methodology determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses the multivariate t-distribution and the Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given the amount of available data. It handles both independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable on the output function, and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
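A hedged sketch in the spirit of the sampling step above, using a normal-inverse-Wishart-style resampling of the population mean and covariance (the paper draws the mean from the multivariate t-distribution and the covariance from the Wishart distribution directly); the data, the toy output g, and the sample sizes are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
data = rng.multivariate_normal([1.0, 2.0], [[0.04, 0.01], [0.01, 0.09]], size=15)   # initial tests
n, d = data.shape
xbar, S = data.mean(axis=0), np.cov(data, rowvar=False)

def g(x):                                        # toy output function
    return x[..., 0] ** 2 + 0.5 * x[..., 1]

est_means = []
for _ in range(2000):
    cov = stats.invwishart.rvs(df=n - 1, scale=(n - 1) * S, random_state=rng)  # plausible covariance
    mu = rng.multivariate_normal(xbar, cov / n)                                # plausible mean
    x = rng.multivariate_normal(mu, cov, size=500)
    est_means.append(g(x).mean())

print("std of the estimated output mean due to limited input data:", np.std(est_means))
```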

5.
It is important to include correlations among input variables in uncertainty propagation in probabilistic safety assessment (PSA), because otherwise the output variable (e.g. the system failure probability) may be significantly underestimated. As an improvement on the method presented previously by the author (Qin Zhang, A general method dealing with correlation in uncertainty propagation in fault trees. Reliability Engineering & System Safety 26 (1989) 231–247), this paper provides a further solution to the problem, one that uses the traditional correlation coefficients (TCCs) instead of the correlation fraction coefficients (CFCs) newly defined in the author's previous paper. An example is provided to illustrate the method and shows that the results obtained with CFCs and TCCs are the same.
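A hedged numerical illustration of the point about correlations: two correlated lognormal basic-event probabilities are propagated through a simple two-event union, and the result is compared with the independent assumption; all numbers are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
mu, sigma, rho = np.log(1e-3), 0.5, 0.8          # uncertain basic-event probabilities (lognormal)

# Correlated case: a shared underlying normal factor with correlation rho.
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
p1, p2 = np.exp(mu + sigma * z[:, 0]), np.exp(mu + sigma * z[:, 1])
p_sys_corr = p1 + p2 - p1 * p2                   # union of the two basic events

# Independent case: same marginals, zero correlation.
zi = rng.standard_normal((n, 2))
q1, q2 = np.exp(mu + sigma * zi[:, 0]), np.exp(mu + sigma * zi[:, 1])
p_sys_ind = q1 + q2 - q1 * q2

print("95th percentile, correlated :", np.percentile(p_sys_corr, 95))
print("95th percentile, independent:", np.percentile(p_sys_ind, 95))
```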

6.
Composites Part B, 2007, 38(5–6): 651–673
Current design approaches for seismic retrofit use deterministic variables to describe the geometry, the material properties, and the applied loads on the bridge column. Using a mechanistic model that considers nonlinear material behavior, these deterministic input variables can be directly mapped to the design parameters. However, the results often give a false sense of reliability because they neglect uncertainties related to the input variables of the analysis (data uncertainty), unpredictable fluctuations of loads and the natural variability of material properties, and/or the uncertainty in the analytical model itself (model uncertainty). While methods of reliability analysis can provide a means of designing so as not to exceed specific levels of "acceptable" risk, they do not consider the uncertainty in the assumed distribution functions for each of the input variables and are built on the basic assumption that the models used perfectly describe reality. This still leaves significant unknowns and often yields design models that are not truly validated across their response space. This paper describes the application of a fuzzy probabilistic approach to capture the inherent uncertainty in such applications. The approach is demonstrated through an example, and the results are compared to those obtained from conventional deterministic analytical models. The confidence in the achieved safety of the retrofit system based on the fuzzy probabilistic approach is much higher than that achieved with the deterministic approach, because uncertainty in the material parameters, as well as in the assumed crack angle, is considered during the design process.

7.
This work proposes a method for statistical effect screening to identify design parameters of a numerical simulation that are influential to performance while simultaneously being robust to the epistemic uncertainty introduced by calibration variables. Design parameters are controlled by the analyst, but the optimal design is often uncertain, while calibration variables are introduced by modeling choices. We argue that uncertainty introduced by design parameters and calibration variables should be treated differently, despite potential interactions between the two sets. Herein, a robustness criterion is embedded in our effect screening to guarantee the influence of design parameters, irrespective of the values used for calibration variables. The Morris screening method is utilized to explore the design space, while robustness to uncertainty is quantified in the context of info-gap decision theory. The proposed method is applied to the National Aeronautics and Space Administration Multidisciplinary Uncertainty Quantification Challenge Problem, which is a black-box code for aeronautic flight guidance that requires 35 input parameters. The application demonstrates that a large number of variables can be handled without formulating simplifying assumptions about the potential coupling between calibration variables and design parameters. Because of the computational efficiency of the Morris screening method, we conclude that the analysis can be applied to even larger-dimensional problems. (Approved for unlimited, public release on October 9, 2013, LA-UR-13-27839, Unclassified.) Copyright © 2015 John Wiley & Sons, Ltd.
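A hedged sketch of Morris elementary-effects screening on a toy five-parameter black box (not the NASA challenge problem): mu* ranks the influence of each input and sigma flags nonlinearity or interaction.

```python
import numpy as np

rng = np.random.default_rng(7)
k, r, delta = 5, 30, 0.25                        # inputs, trajectories, step size

def black_box(x):                                # placeholder for the simulation code
    return 3 * x[0] + x[1] ** 2 + 0.5 * x[2] * x[3] + 0.01 * x[4]

effects = [[] for _ in range(k)]
for _ in range(r):                               # r random one-at-a-time trajectories
    x = rng.uniform(0.0, 1.0 - delta, size=k)
    y = black_box(x)
    for i in rng.permutation(k):                 # perturb each input once, in random order
        x_new = x.copy()
        x_new[i] += delta
        y_new = black_box(x_new)
        effects[i].append((y_new - y) / delta)   # elementary effect of input i
        x, y = x_new, y_new

mu_star = [np.mean(np.abs(e)) for e in effects]  # mean absolute elementary effect
sigma = [np.std(e) for e in effects]             # spread indicates nonlinearity/interaction
for i in range(k):
    print(f"x{i}: mu* = {mu_star[i]:.3f}, sigma = {sigma[i]:.3f}")
```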

8.
This paper develops a metamodel approach to approximate the transient relationship between a univariate output response and one or more continuous-valued input factors in a computer simulation. The approach is based on a variant of Frequency Domain Methodology (FDM) in which a metamodel is hypothesized in the time domain; the analysis and parameter estimation are performed in the frequency domain; and finally prediction and inference are made back in the time domain. The switching of domains for analysis and estimation is advantageous because it permits the simultaneous consideration of multiple input factor changes. The metamodel is then used to estimate the mean transient function of the output response after a discontinuous change has been made to each individual input factor. The methodology is illustrated on an M/M/1 queue and a three-station tandem queue.

9.
Uncertainty analysis (UA) quantitatively identifies and characterizes the output uncertainty and has crucial implications in engineering applications. Efficient estimation of structural output moments in probability space plays an important part in UA and has great engineering significance. With this in mind, this paper proposes a new UA method based on a Kriging surrogate model with closed-form expressions for the estimates of the output mean and variance. The method is effective because it directly reflects the prediction uncertainty of the metamodel's output moments and thereby quantifies the accuracy level. The estimation is completed by directly evaluating the closed-form expressions for the model's output mean and variance, avoiding additional post-processing cost and error. Furthermore, an adaptive Kriging framework for estimating the mean (AKEM) is presented to reduce the uncertainty in the moment estimates more efficiently. In the adaptive strategy of AKEM, a new learning function based on the closed-form expressions is proposed. Because the closed-form expressions account for the computational error caused by metamodeling uncertainty, the learning function guides the updating of the metamodel so that prediction uncertainty and computational cost are reduced efficiently. Several applications demonstrate the effectiveness and efficiency of AKEM compared with a standard adaptive Kriging method, highlighting its potential in engineering applications.
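A hedged sketch of the generic adaptive-Kriging idea (not the paper's closed-form AKEM expressions or learning function): the mean of g(X) is estimated on a fixed Monte Carlo sample, and at each iteration the sample point with the largest predictive standard deviation is added to the design.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(9)

def g(x):                                        # expensive model (placeholder)
    return x[:, 0] ** 2 + np.sin(3 * x[:, 1])

x_mc = rng.normal(size=(5000, 2))                # Monte Carlo population of the inputs
train = rng.normal(size=(8, 2))                  # small initial design
y_train = g(train)

for it in range(15):
    gp = GaussianProcessRegressor(kernel=RBF(1.0), alpha=1e-8, normalize_y=True)
    gp.fit(train, y_train)
    mean, std = gp.predict(x_mc, return_std=True)
    print(f"iter {it:2d}: estimated E[g] = {mean.mean():.4f}, max predictive std = {std.max():.3f}")
    best = np.argmax(std)                        # simple variance-based learning criterion
    train = np.vstack([train, x_mc[best]])
    y_train = np.append(y_train, g(x_mc[best:best + 1]))
```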

10.
This work presents a data-driven stochastic collocation approach to include the effect of uncertain design parameters during complex multi-physics simulation of Micro-ElectroMechanical Systems (MEMS). The proposed framework comprises two key steps: first, probabilistic characterization of the uncertain input parameters based on available experimental information, and second, propagation of these uncertainties through the predictive model to relevant quantities of interest. The uncertain input parameters are modeled as independent random variables, for which the distributions are estimated from available experimental observations using a nonparametric diffusion-mixing-based estimator, Botev (Nonparametric density estimation via diffusion mixing. Technical Report, 2007). The diffusion-based estimator derives from the analogy between the kernel density estimation (KDE) procedure and the heat dissipation equation and constructs density estimates that are smooth and asymptotically consistent. The diffusion model allows for the incorporation of the prior density and leads to an improved density estimate in comparison with the standard KDE approach, as demonstrated through several numerical examples. Following the characterization step, the uncertainties are propagated to the output variables using the stochastic collocation approach based on sparse grid interpolation, Smolyak (Soviet Math. Dokl. 1963; 4:240–243). The developed framework is used to study the effect of variations in Young's modulus, induced as a result of variations in manufacturing process parameters or heterogeneous measurements, on the performance of a MEMS switch. Copyright © 2010 John Wiley & Sons, Ltd.

11.
An uncertainty-based sensitivity index represents the contribution that uncertainty in model input Xi makes to the uncertainty in model output Y. This paper addresses the situation where the uncertainties in the model inputs are expressed as closed convex sets of probability measures, a situation that exists when inputs are expressed as intervals or sets of intervals with no particular distribution specified over the intervals, or as probability distributions with interval-valued parameters. Three different approaches to measuring uncertainty, and hence uncertainty-based sensitivity, are explored. Variance-based sensitivity analysis (VBSA) estimates the contribution that each uncertain input, acting individually or in combination, makes to variance in the model output. The partial expected value of perfect information (partial EVPI) quantifies the (financial) value of learning the true numeric value of an input. For both of these sensitivity indices, the generalization to closed convex sets of probability measures yields lower and upper sensitivity indices. Finally, the use of relative entropy as an uncertainty-based sensitivity index is introduced and extended to the imprecise setting, drawing upon recent work on entropy measures for imprecise information.

12.
Non-probabilistic convex models require only the variation bounds of the parameters rather than their exact probability distributions; thus, such models can be applied to the uncertainty analysis of complex structures when experimental information is lacking. The interval and ellipsoidal models are the two most commonly used modeling methods in the field of non-probabilistic convex modeling. However, the former can only deal with independent variables, while the latter can only deal with dependent variables. This paper presents a more general non-probabilistic convex model, the multidimensional parallelepiped model. This model can include independent and dependent uncertain variables in a unified framework and can effectively deal with complex ‘multi-source uncertainty’ problems in which dependent variables and independent variables coexist. For any two parameters, the concepts of the correlation angle and the correlation coefficient are defined. Through the marginal intervals of all the parameters and their correlation coefficients, a multidimensional parallelepiped can easily be built as the uncertainty domain for the parameters. Through the introduction of affine coordinates, the parallelepiped model in the original parameter space is converted to an interval model in the affine space, greatly facilitating subsequent structural uncertainty analysis. The parallelepiped model is applied to structural uncertainty propagation analysis, and the response interval of the structure is obtained for uncertain initial parameters. Finally, the method is applied to several numerical examples. Copyright © 2015 John Wiley & Sons, Ltd.
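A hedged two-dimensional sketch of the parallelepiped idea: marginal intervals plus a correlation coefficient define a sheared box, which is then sampled to bound a toy response. The shear map used here is one simple choice consistent with the marginal intervals, not the paper's correlation-angle construction, and all values are illustrative.

```python
import numpy as np

x1_lo, x1_hi = 0.8, 1.2                          # marginal interval of parameter 1
x2_lo, x2_hi = 1.5, 2.5                          # marginal interval of parameter 2
rho = 0.5                                        # assumed correlation coefficient

c = np.array([(x1_lo + x1_hi) / 2, (x2_lo + x2_hi) / 2])   # interval midpoints
r = np.array([(x1_hi - x1_lo) / 2, (x2_hi - x2_lo) / 2])   # interval half-widths

# Affine map: the unit square [-1, 1]^2 is sheared by the correlation, so its image
# is a parallelepiped whose projections stay within the marginal intervals.
T = np.array([[1.0, rho],
              [rho, 1.0]])

def response(x):                                 # toy structural response
    return x[:, 0] ** 2 * x[:, 1]

u = np.random.default_rng(12).uniform(-1, 1, size=(200_000, 2))
x = c + (u @ T.T) * r / (1 + abs(rho))           # rescale so the marginals respect the bounds
print("response interval ≈ [%.3f, %.3f]" % (response(x).min(), response(x).max()))
```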

13.
A stochastic response surface method (SRSM), previously proposed for problems involving only random variables, is extended in this paper to problems in which physical properties exhibit spatial random variation and may be modeled as random fields. The formalism of the extended SRSM is similar to the spectral stochastic finite element method (SSFEM) in the sense that both use a Karhunen–Loève (K–L) expansion to represent the input and a polynomial chaos expansion to represent the output. However, in SRSM the coefficients of the polynomial chaos expansion are calculated using a probabilistic collocation approach. This strategy decouples the finite element and stochastic computations, so the finite element code can be treated as a black box, as in the case of a commercial code. The collocation-based SRSM approach is compared in this paper with an existing analytical SSFEM approach, which uses a Galerkin-based weighted residual formulation, and with a black-box SSFEM approach, which uses Latin hypercube sampling for the design of experiments. Numerical examples are used to illustrate the features of the extended SRSM and to compare its efficiency and accuracy with the existing analytical and black-box versions of SSFEM.
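A hedged sketch of the two building blocks named above: a truncated K–L expansion of a one-dimensional Gaussian field (exponential covariance on a grid) and a small collocation/regression fit of a Hermite chaos surrogate. The field parameters and the scalar output are placeholders for the finite element model, and only the leading K–L variable is retained in the chaos fit.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermefit

# 1) Truncated K-L expansion of a zero-mean field on [0, 1] with exponential covariance.
n, corr_len, sigma = 100, 0.3, 1.0
s = np.linspace(0.0, 1.0, n)
C = sigma ** 2 * np.exp(-np.abs(s[:, None] - s[None, :]) / corr_len)
lam, phi = np.linalg.eigh(C / n)                 # discrete eigenpairs (quadrature weight 1/n)
keep = np.argsort(lam)[::-1][:3]                 # three dominant modes
lam, phi = lam[keep], phi[:, keep]

def field(xi):                                   # realization for xi ~ N(0, I_3)
    return (phi * np.sqrt(lam)) @ xi * np.sqrt(n)

def model(xi):                                   # placeholder "FEM" output
    k = np.exp(field(xi))                        # lognormal property field
    return 1.0 / np.mean(1.0 / k)                # harmonic mean as a toy response

# 2) Probabilistic collocation in the leading K-L variable: evaluate the model at
#    Gauss-Hermite points and regress a quadratic Hermite chaos; its He_0 coefficient
#    is the chaos estimate of the output mean.
nodes, _ = hermegauss(5)
y = np.array([model(np.array([z, 0.0, 0.0])) for z in nodes])
coeffs = hermefit(nodes, y, deg=2)
print("chaos estimate of the output mean:", coeffs[0])
```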

14.
In the past several years there has been considerable commercial and academic interest in methods for variance-based sensitivity analysis. The industrial focus is motivated by the importance of attributing variance contributions to input factors. A more complete understanding of these relationships enables companies to achieve goals related to quality, safety and asset utilization. In a number of applications, it is possible to distinguish between two types of input variables: regressive variables and model parameters. Regressive variables are those that can be influenced by process design or by a control strategy. With model parameters, there are typically no opportunities to directly influence their variability. In this paper, we propose a new method to perform sensitivity analysis through a partitioning of the input variables into these two groupings: regressive variables and model parameters. A sequential analysis is proposed, in which a sensitivity analysis is first performed with respect to the regressive variables. In the second step, the uncertainty effects arising from the model parameters are included. This strategy can be quite useful in understanding process variability and in developing strategies to reduce overall variability. When this method is used for nonlinear models that are linear in the parameters, analytical solutions can be utilized. In the more general case of models that are nonlinear in both the regressive variables and the parameters, either first-order approximations can be used, or numerically intensive methods must be used.

15.
Uncertainty inverse problems with insufficiency and imprecision in the input and/or output parameters are widespread and remain unsolved in practical engineering. Insufficiency refers to partly known parameters in the input and/or output, and imprecision refers to the measurement errors of the known parameters. In this paper, a combined method is proposed to deal with such problems. In this method, the imprecision of the known parameters is described by a probability distribution with a given mean value and variance. A sensitivity matrix method is first used to transform the insufficient formulation in the input and/or output into a resolvable one, and the mean values of the unknown parameters are then identified by maximizing the likelihood of the measurements. Finally, to quantify the uncertainty propagation, confidence intervals of the obtained solutions are calculated based on linearization and Monte Carlo methods. Two numerical examples are presented to demonstrate the effectiveness of the method.

16.
In this paper, the authors propose an analytical method for estimating the possible worst-case measurement due to the propagation of uncertainty. This analytical method uses polynomial chaos theory (PCT) to formally include the effects of uncertainty as it propagates through an indirect measurement. The main assumption is that an analytical model of the measurement process is available. To demonstrate the use of PCT to assess a worst-case measurement, the authors present two examples. The first involves the use of PCT to estimate the possible worst case of a measurement due to the propagation of parametric uncertainty of a low-pass filter. This case study concerns the analysis of nonlinear effects on the propagation of uncertainty of a signal-conditioning stage used in power measurement. The PCT method is applied to determine the probability density function (pdf) of the magnitude and phase of the frequency response of the filter and their impact on the power measurement. Of particular interest is the use of PCT to determine the worst-case, expected-case, and best-case effects of the filter, avoiding the reconstruction of the complete pdf of the filter output. The results illustrate the potential of this method to determine the significant boundary of measurement uncertainty, even when the uncertainty propagates through a nonlinear, nonpolynomial function. In the second example, the authors use PCT to perform a worst-case analysis for an indirect measurement of a loop impedance. For both examples, the PCT method is compared with numerical Monte Carlo analysis and with the analytical method described in the Guide to the Expression of Uncertainty in Measurement (GUM).

17.
The numerical solution of a nonlinear chance constrained optimization problem poses a major challenge. The idea of back-mapping, as introduced by M. Wendt, P. Li and G. Wozny in 2002, is a viable approach for transforming chance constraints on output variables (of unknown distribution) into chance constraints on uncertain input variables (of known distribution) based on a monotonicity relation. Once the transformation of the chance constraints has been accomplished, the resulting optimization problem can be solved using a gradient-based algorithm. However, the computation of values and gradients of the chance constraints and the objective function involves the evaluation of multi-dimensional integrals, which is computationally very expensive. This study proposes an easy-to-use method for analysing monotonic relations between constrained outputs and uncertain inputs. In addition, sparse-grid integration techniques are used to reduce the computational time decisively. Two examples from process optimization under uncertainty demonstrate the performance of the proposed approach.

18.
Haoxiang Jie, Jianwan Ding. Engineering Optimization, 2013, 45(11): 1459–1480
In this article, an adaptive metamodel-based global optimization (AMGO) algorithm is presented to solve unconstrained black-box problems. In the AMGO algorithm, a hybrid model composed of kriging and an augmented radial basis function (RBF) is used as the surrogate model. The weight factors of the hybrid model are adaptively selected during the optimization process. To balance local and global search, a sub-optimization problem is constructed during each iteration to determine the new iterative points. As numerical experiments, six standard two-dimensional test functions are selected to show the distributions of the iterative points. The AMGO algorithm is also tested on seven well-known benchmark optimization problems and contrasted with three representative metamodel-based optimization methods: efficient global optimization (EGO), Gutmann's RBF method, and the hybrid and adaptive metamodel (HAM) method. The test results demonstrate the efficiency and robustness of the proposed method. The AMGO algorithm is finally applied to the structural design of the import and export chamber of a cycloid gear pump, achieving satisfactory results.
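A hedged sketch of one metamodel-based global-optimization loop in the spirit of the methods compared above: a plain kriging/expected-improvement loop (EGO-like), not the AMGO hybrid kriging-RBF model; the objective function and budgets are illustrative.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def black_box(x):                                # expensive objective (placeholder)
    return (6 * x - 2) ** 2 * np.sin(12 * x - 4)

rng = np.random.default_rng(18)
X = rng.uniform(0, 1, size=(4, 1))               # small initial design
y = black_box(X).ravel()
grid = np.linspace(0, 1, 501).reshape(-1, 1)

for it in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-8, normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    imp = y.min() - mu
    z = np.where(sd > 0, imp / sd, 0.0)
    ei = imp * norm.cdf(z) + sd * norm.pdf(z)    # expected improvement
    x_new = grid[np.argmax(ei)].reshape(1, -1)   # new iterate: maximize EI
    X = np.vstack([X, x_new])
    y = np.append(y, black_box(x_new).ravel())
    print(f"iter {it}: x_new = {x_new[0, 0]:.3f}, best f = {y.min():.3f}")
```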

19.
Junqi Yang, Kai Zheng, Jie Hu, Ling Zheng. Engineering Optimization, 2016, 48(12): 2026–2045
Metamodels are becoming increasingly popular for handling large-scale optimization problems in product development. Metamodel-based reliability-based design optimization (RBDO) helps to improve the computational efficiency and reliability of optimal design. However, a metamodel in engineering applications is an approximation of a high-fidelity computer-aided engineering model and frequently suffers from a significant loss of predictive accuracy. This issue must be appropriately addressed before metamodels are ready to be applied in RBDO. In this article, an enhanced strategy with metamodel selection and bias correction is proposed to improve the predictive capability of metamodels. A similarity-based assessment for metamodel selection (SAMS) is derived from cross-validation and similarity theories. The selected metamodel is then improved by Bayesian inference-based bias correction. The proposed strategy is illustrated through an analytical example and further demonstrated with a lightweight vehicle design problem. The results show its potential in handling real-world engineering problems.
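A hedged sketch of the selection step, using ordinary k-fold cross-validation to choose between two candidate metamodels; the article's SAMS similarity criterion and Bayesian bias correction are not reproduced, and the training data are placeholders for CAE runs.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(19)
X = rng.uniform(-1, 1, size=(60, 3))             # sampled design points (placeholder data)
y = X[:, 0] ** 2 + np.sin(2 * X[:, 1]) + 0.1 * X[:, 2] + 0.05 * rng.standard_normal(60)

candidates = {
    "kriging": GaussianProcessRegressor(kernel=RBF(0.5), alpha=1e-6, normalize_y=True),
    "rbf-ridge": KernelRidge(kernel="rbf", alpha=1e-3, gamma=2.0),
}
scores = {name: cross_val_score(m, X, y, cv=5, scoring="neg_root_mean_squared_error").mean()
          for name, m in candidates.items()}
best = max(scores, key=scores.get)               # highest (least negative) CV score wins
print(scores, "-> selected metamodel:", best)
```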

20.
A new supplement to the GUM outlines uncertainty calculations using matrix algebra for models with more than one output quantity. This technique is applied to the problem of uncertainty propagation for platinum resistance thermometers (PRTs). PRTs are calibrated at specified sets of defining fixed points, depending on the desired temperature range. The propagation of uncertainty is discussed from the input quantities (the fixed-point calibration results and the resistance of the PRT in use), through the coefficients of the deviation function as intermediate results, to the temperature as the sole output quantity. A general solution in matrix form for any temperature range of the ITS-90 defined by PRTs is highlighted. The presented method makes it easy to account for the input quantity correlations, which depend on how the fixed-point calibrations and the resistance measurement of the thermometer in use are carried out. An example calculation for a specific temperature range, based on a simplified model for the input quantity correlations, demonstrates this benefit.
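A hedged sketch of the matrix form of uncertainty propagation referred to above, U_y = J U_x J^T, applied to a toy two-input, two-output model rather than the ITS-90 deviation-function model; all numerical values are illustrative.

```python
import numpy as np

# Input estimates, standard uncertainties and a correlation between them (illustrative).
x = np.array([100.0, 0.25])
u = np.array([0.02, 0.005])
rho = 0.4
U_x = np.array([[u[0] ** 2, rho * u[0] * u[1]],
                [rho * u[0] * u[1], u[1] ** 2]])   # input covariance matrix

def model(x):                                       # toy bivariate measurement model
    return np.array([x[0] * (1 + x[1]), x[0] - 2 * x[1]])

# Jacobian (sensitivity matrix) evaluated analytically at the input estimate.
J = np.array([[1 + x[1], x[0]],
              [1.0, -2.0]])

U_y = J @ U_x @ J.T                                 # output covariance, GUM matrix form
y = model(x)
print("output estimates:", y)
print("standard uncertainties:", np.sqrt(np.diag(U_y)))
print("output correlation:", U_y[0, 1] / np.sqrt(U_y[0, 0] * U_y[1, 1]))
```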
