20 similar records retrieved.
1.
Erin C. DeCarlo Sankaran Mahadevan Benjamin P. Smarslok 《Structural and Multidisciplinary Optimization》2018,58(6):2325-2340
Existing methods for the computation of global sensitivity indices are challenged by both the number of input-output samples required and the presence of dependent or correlated variables. First, a methodology is developed to increase the efficiency of sensitivity computations with independent variables by incorporating optimal space-filling quasi-random sequences into an existing importance sampling-based kernel regression sensitivity method. Two prominent situations where parameter correlations cannot be ignored, however, are (1) posterior distributions of calibrated parameters and (2) transient, coupled simulations. Therefore, the sensitivity methodology is generalized to dependent variables, allowing for efficient post-calibration sensitivity analyses using input-output samples obtained directly from Bayesian calibration. These methods are illustrated using coupled aerothermal simulations, where it is observed that model errors and parameter correlations control the sensitivity estimates until coupling effects become dominant over time.
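As an illustration of the space-filling quasi-random sequences the first step relies on, the sketch below draws a scrambled Sobol' design with SciPy and compares its discrepancy against plain Monte Carlo sampling; the dimension, bounds, and sample size are invented placeholders, and the kernel-regression estimator itself is not reproduced.

```python
# Sketch: space-filling quasi-random (Sobol') input samples vs. pseudo-random
# Monte Carlo, the kind of low-discrepancy design fed into a sampling-based
# sensitivity estimator. Dimensions and bounds are placeholders.
import numpy as np
from scipy.stats import qmc

d = 3                                      # number of input variables (assumed)
sampler = qmc.Sobol(d=d, scramble=True, seed=0)
x_qmc = sampler.random_base2(m=8)          # 2^8 = 256 points in [0, 1)^d
x_mc = np.random.default_rng(0).random((256, d))

# Lower discrepancy = better space filling
print("Sobol' discrepancy:", qmc.discrepancy(x_qmc))
print("MC discrepancy:    ", qmc.discrepancy(x_mc))

# Rescale to physical input ranges before running the model
lo, hi = np.array([0.0, 10.0, 1e-3]), np.array([1.0, 50.0, 1e-1])
x_phys = qmc.scale(x_qmc, lo, hi)
```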
2.
Assessing the time-varying sensitivity of environmental models has become a common approach, both to understand the value of different data periods for estimating specific parameters and as part of a diagnostic analysis of the model structure itself (i.e. whether dominant processes emerge in the model at the right times and over the appropriate time periods). These results are not straightforward to visualize, however, given that the window size over which the time-varying sensitivity is best integrated generally varies for different parameters. In this short communication we present a new approach to visualizing such time-varying sensitivity across time scales of integration. As a case study, we estimate first-order sensitivity indices with the FAST (Fourier Amplitude Sensitivity Test) method for a typical conceptual rainfall-runoff model. The resulting plots can guide data selection for model calibration, support diagnostic model evaluation, and help to define the timing and length of spot-gauging campaigns in places where long-term calibration data are not yet available.
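A minimal numeric sketch of the idea follows. FAST requires a dedicated frequency-based sampling scheme, so a simple binned estimator of the first-order index S_i ≈ Var(E[Y|X_i])/Var(Y) stands in for it here, applied over windows of different lengths of a toy two-parameter time-series model; all names and values are illustrative placeholders.

```python
# Sketch: first-order sensitivity of a time-varying output, integrated over
# windows of increasing length (the quantity the paper visualizes). A binned
# estimator stands in for FAST; the model is a toy surrogate in which
# parameter 0 matters early and parameter 1 late.
import numpy as np

rng = np.random.default_rng(1)
N, T, k = 2000, 100, 2
X = rng.random((N, k))                       # parameter samples in [0, 1]^k
t = np.arange(T)
Y = (np.exp(-t / 30.0) * X[:, [0]] + (1 - np.exp(-t / 30.0)) * X[:, [1]]
     + 0.05 * rng.standard_normal((N, T)))

def first_order(xi, y, bins=20):
    """Binned estimator of Var(E[Y|X_i]) / Var(Y)."""
    edges = np.quantile(xi, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, xi, side="right") - 1, 0, bins - 1)
    means = np.array([y[idx == b].mean() for b in range(bins)])
    return means.var() / y.var()

# S_1 integrated over windows of increasing length, at several start times
for w in (5, 20, 50):
    yw = np.stack([Y[:, s:s + w].mean(axis=1) for s in range(0, T - w, 10)], 1)
    Si = [first_order(X[:, 0], yw[:, j]) for j in range(yw.shape[1])]
    print(f"window={w:3d}:", np.round(Si, 2))
```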
3.
Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) Among the qualitative SA methods, the Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be ineffective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. The Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400-600 samples for the same purpose, and Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them. (2) Among the quantitative SA methods, the Fourier Amplitude Sensitivity Test (FAST) needs at least 2777 samples to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effects and more than 1000 samples to assess the two-way interaction effects; OALH and LPτ (LPTAU) sampling techniques are more appropriate for it. For the Sobol' method, a minimum of 1050 samples is needed to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient, but less accurate and robust, than quantitative ones.
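Of the methods compared, MOAT is the simplest to sketch. Below is a minimal Morris elementary-effects screening in plain NumPy (upward steps only, a common simplification); the three-input test function is a stand-in for SAC-SMA, and the trajectory count and grid levels are illustrative choices.

```python
# Sketch of Morris One-At-a-Time (MOAT) screening: r trajectories through a
# p-level grid give r elementary effects per input; mu* (mean |EE|) ranks
# importance and sigma flags nonlinearity/interactions.
import numpy as np

def morris(f, k, r=10, p=4, seed=0):
    rng = np.random.default_rng(seed)
    delta = p / (2.0 * (p - 1))              # standard Morris step; 2/3 for p=4
    levels = np.arange(p // 2) / (p - 1)     # start levels with x + delta <= 1
    ee = np.zeros((r, k))
    for t in range(r):
        x = levels[rng.integers(0, p // 2, size=k)]
        fx = f(x)
        for i in rng.permutation(k):         # one factor at a time, random order
            x_new = x.copy()
            x_new[i] += delta
            f_new = f(x_new)
            ee[t, i] = (f_new - fx) / delta
            x, fx = x_new, f_new
    return np.abs(ee).mean(axis=0), ee.std(axis=0)   # mu* and sigma

f = lambda x: x[0] + 2 * x[1] ** 2 + 0.1 * x[2] + x[0] * x[1]
mu_star, sigma = morris(f, k=3)
print("mu*  :", np.round(mu_star, 2))   # screening ranking
print("sigma:", np.round(sigma, 2))     # nonlinearity/interaction signal
```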
4.
P. Baraldi M. Librizzi E. Zio L. Podofillini V.N. Dang 《Expert Systems with Applications》2009,36(10):12461-12471
Problems characterized by qualitative uncertainty described by expert judgments can be addressed by the fuzzy logic modeling paradigm, structured within a so-called fuzzy expert system (FES) to handle and propagate the qualitative, linguistic assessments of the experts. Once constructed, the FES model should be verified to make sure that it correctly represents the experts' knowledge. For FES verification, there is typically not enough data to directly compare the expert- and FES-inferred solutions. Thus, it is necessary to develop indirect methods for determining whether the expert system model provides a proper representation of the expert knowledge. A possible way to proceed is to examine the importance of the different input factors in determining the output of the FES model and to verify whether it agrees with the experts' conceptualization of the model. In this view, two sensitivity and uncertainty analysis techniques applicable to generic FES models are proposed in this paper, with the objective of providing appropriate verification tools to support the experts in the FES design phase. To analyze the insights gained by using the proposed techniques, a case study concerning a FES developed in the field of human reliability analysis is considered.
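For concreteness, a toy Mamdani-type FES and a crude one-way sensitivity probe are sketched below; the membership functions and rule base are invented placeholders, not the paper's human reliability analysis model or its two proposed techniques.

```python
# A minimal Mamdani-style fuzzy expert system plus a one-way sensitivity
# probe: sweep one input over its range (others fixed) and inspect how
# strongly the inferred output moves. Memberships and rules are placeholders.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function; a == b or b == c gives a shoulder."""
    x = np.asarray(x, dtype=float)
    left = (x - a) / (b - a) if b > a else np.ones_like(x)
    right = (c - x) / (c - b) if c > b else np.ones_like(x)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

y_grid = np.linspace(0.0, 1.0, 201)            # output universe
OUT = {"low": tri(y_grid, 0.0, 0.0, 0.5),      # output fuzzy sets
       "high": tri(y_grid, 0.5, 1.0, 1.0)}

def fes(x1, x2):
    # fuzzification: two linguistic terms per input
    x1_low, x1_high = tri(x1, 0, 0, 1), tri(x1, 0, 1, 1)
    x2_low, x2_high = tri(x2, 0, 0, 1), tri(x2, 0, 1, 1)
    # placeholder rule base (min for AND, max for aggregation)
    w_low = max(min(x1_low, x2_low), min(x1_low, x2_high))    # -> low output
    w_high = max(min(x1_high, x2_low), min(x1_high, x2_high)) # -> high output
    agg = np.maximum(np.minimum(w_low, OUT["low"]),
                     np.minimum(w_high, OUT["high"]))
    return (y_grid * agg).sum() / (agg.sum() + 1e-12)         # centroid

# One-way sensitivity: output range as x1 sweeps, x2 held at mid-range
sweep = [fes(v, 0.5) for v in np.linspace(0, 1, 11)]
print("x1 sweep:", np.round(sweep, 2), "range:", round(max(sweep) - min(sweep), 2))
```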
5.
A new variance-based global sensitivity analysis technique
A new set of variance-based sensitivity indices, called W-indices, is proposed. As with Sobol' indices, both main and total effect indices are defined. The W-main effect indices measure the average reduction of the model output variance when the ranges of a set of inputs are reduced, and the total effect indices quantify the average residual variance when the ranges of the remaining inputs are reduced. Geometrical interpretations show that the W-indices capture the full information of the variance ratio function, whereas Sobol' indices reflect only the marginal information. The double-loop-repeated-set Monte Carlo (MC) procedure (denoted DLRS MC), the double-loop-single-set MC procedure (denoted DLSS MC) and a model emulation procedure are then introduced for estimating the W-indices. It is shown that the DLRS MC procedure is suitable for computing all the W-indices despite its high computational cost. The DLSS MC procedure is computationally efficient, but it is applicable only to low-order indices. Model emulation is able to estimate all the W-indices at low computational cost as long as the model behavior is correctly captured by the emulator. The Ishigami function, a modified Sobol' function and two engineering models are used to compare the W- and Sobol' indices and to verify the efficiency and convergence of the three numerical methods. Results show that, even for an additive model, the W-total effect index of one input may be significantly larger than its W-main effect index. This indicates that there may exist interaction effects among the inputs of an additive model when their distribution ranges are reduced.
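The W-indices themselves are defined through a range-reduction loop the abstract only outlines, so the sketch below instead shows the Sobol' baseline they are compared against: Saltelli/Jansen pick-freeze estimators of the main and total effect indices, applied to the Ishigami function used in the paper.

```python
# Monte Carlo estimators of Sobol' first-order (Saltelli) and total (Jansen)
# indices, the comparison point for the W-indices. Sample size is illustrative.
import numpy as np

def sobol_indices(f, k, N=2**14, seed=0):
    rng = np.random.default_rng(seed)
    A, B = rng.random((N, k)), rng.random((N, k))
    fA, fB = f(A), f(B)
    var = np.concatenate([fA, fB]).var()
    S, T = np.zeros(k), np.zeros(k)
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                          # resample only column i
        fABi = f(ABi)
        S[i] = np.mean(fB * (fABi - fA)) / var       # Saltelli first-order
        T[i] = 0.5 * np.mean((fA - fABi) ** 2) / var # Jansen total effect
    return S, T

# Ishigami function on [-pi, pi]^3, the paper's first test case
def ishigami(u, a=7.0, b=0.1):
    x = -np.pi + 2 * np.pi * u
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

S, T = sobol_indices(ishigami, k=3)
print("first-order:", np.round(S, 3))   # analytic: ~[0.314, 0.442, 0]
print("total:      ", np.round(T, 3))   # analytic: ~[0.558, 0.442, 0.244]
```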
6.
Sensitivity Analysis (SA) investigates how the variation in the output of a numerical model can be attributed to variations of its input factors. SA is increasingly being used in environmental modelling for a variety of purposes, including uncertainty assessment, model calibration and diagnostic evaluation, dominant control analysis and robust decision-making. In this paper we review the SA literature with the goal of providing: (i) a comprehensive view of SA approaches, also in relation to other methodologies for model identification and application; (ii) a systematic classification of the most commonly used SA methods; (iii) practical guidelines for the application of SA. The paper aims to deliver an introduction to SA for non-specialist readers, as well as practical advice with best-practice examples from the literature, and to stimulate discussion within the community of SA developers and users regarding the setting of good practices and the definition of priorities for future research.
7.
Over the last 10 years, different interpretative methods for analysing the effect or importance of input variables on the output of a feedforward neural network have been proposed. These methods can be grouped into two sets: analyses based on the magnitude of the weights, and sensitivity analyses. However, as described throughout this study, these methods present a series of limitations. We have defined and validated a new method, called Numeric Sensitivity Analysis (NSA), that overcomes these limitations, proving to be the procedure that, in general terms, best describes the effect or importance of the input variables on the output, independently of the nature (quantitative or discrete) of the variables included. The interpretative methods used in this study are implemented in the software program Sensitivity Neural Network 1.0, created by our team.
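As a generic illustration (not the paper's NSA procedure or its Sensitivity Neural Network 1.0 tool), the sketch below probes a small feedforward network by numerically differentiating the output with respect to each input and averaging over the data; the network weights are random stand-ins for a trained model.

```python
# Numeric input-sensitivity probe of a feedforward network: mean absolute
# central-difference derivative of the output wrt each input across the data.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((3, 8)), rng.standard_normal(8)   # stand-in weights
W2, b2 = rng.standard_normal((8, 1)), rng.standard_normal(1)

def mlp(X):
    return np.tanh(X @ W1 + b1) @ W2 + b2       # one hidden layer, tanh

X = rng.random((500, 3))                        # evaluation data
h = 1e-3
sens = np.zeros(3)
for i in range(3):
    Xp, Xm = X.copy(), X.copy()
    Xp[:, i] += h
    Xm[:, i] -= h
    sens[i] = np.abs((mlp(Xp) - mlp(Xm)) / (2 * h)).mean()
print("input sensitivities:", np.round(sens, 3))
```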
8.
Variance-based approaches are widely used for Global Sensitivity Analysis (GSA) of environmental models. However, methods that consider the entire Probability Density Function (PDF) of the model output, rather than its variance only, are preferable in cases where variance is not an adequate proxy of uncertainty, e.g. when the output distribution is highly skewed or multi-modal. Still, the adoption of density-based methods has been limited so far, possibly because they are relatively more difficult to implement. Here we present a novel GSA method, called PAWN, to efficiently compute density-based sensitivity indices. The key idea is to characterise output distributions by their Cumulative Distribution Functions (CDFs), which are easier to derive than PDFs. We discuss and demonstrate the advantages of PAWN through applications to numerical and environmental modelling examples. We expect PAWN to increase the application of density-based approaches and to be a complementary approach to variance-based GSA.
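A compact given-data sketch of the PAWN idea: slice each input into quantile bins, compare the conditional and unconditional empirical CDFs of the output with the Kolmogorov-Smirnov distance, and summarise over slices. The slicing scheme, summary statistic and test model are illustrative assumptions.

```python
# Density-based sensitivity via CDFs: KS distance between the unconditional
# ECDF of Y and ECDFs of Y conditional on quantile slices of each input.
import numpy as np

def pawn(X, y, n_slices=10, stat=np.median):
    N, k = X.shape
    y_sorted = np.sort(y)
    def ecdf(sample, points):
        return np.searchsorted(np.sort(sample), points, side="right") / len(sample)
    F_y = ecdf(y, y_sorted)                     # unconditional CDF on a grid
    idx_all = np.zeros(k)
    for i in range(k):
        edges = np.quantile(X[:, i], np.linspace(0, 1, n_slices + 1))
        ks = []
        for s in range(n_slices):
            mask = (X[:, i] >= edges[s]) & (X[:, i] <= edges[s + 1])
            if mask.sum() < 2:
                continue
            ks.append(np.abs(ecdf(y[mask], y_sorted) - F_y).max())  # KS distance
        idx_all[i] = stat(ks)                   # summary over slices
    return idx_all

rng = np.random.default_rng(0)
X = rng.random((5000, 3))
y = X[:, 0] + 10 * (X[:, 1] > 0.8) + 0.01 * X[:, 2]   # skewed, threshold-driven
print("PAWN indices:", np.round(pawn(X, y), 3))
```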
9.
This paper considers the outcome of a formal sensitivity analysis on a series of epidemic model structures developed to study the population-level effects of maternal antibodies. The analysis is used to compare the potential influence of maternally acquired immunity on various age- and time-domain observations of infection and serology, with and without seasonality. The results of the analysis indicate that time-series observations are largely insensitive to variations in the average duration of this protection, and that age-related empirical data are likely to be most appropriate for estimating these characteristics.
10.
Malcolm McPhee Jim Oltjen James Fadel David Mayer Roberto Sainz 《Mathematics and Computers in Simulation》2009
The Davis Growth Model (a dynamic steer growth model encompassing 4 fat deposition models) is currently being used by the phenotypic prediction program of the Cooperative Research Centre (CRC) for Beef Genetic Technologies to predict P8 fat (mm) in beef cattle and so help beef producers meet market specifications. The concepts of cellular hyperplasia and hypertrophy are integral components of the Davis Growth Model. The net synthesis of total body fat (kg) is calculated from the net energy available after accounting for the energy needs of maintenance and protein synthesis. Total body fat (kg) is then partitioned into 4 fat depots (intermuscular, intramuscular, subcutaneous, and visceral). This paper reports on the parameter estimation and sensitivity analysis of the DNA (deoxyribonucleic acid) logistic growth equations and the fat deposition first-order differential equations in the Davis Growth Model using acslXtreme (Xcellon, Huntsville, AL, USA). The DNA and fat deposition parameter coefficients were found to be important determinants of model function: the DNA parameter coefficients for days on feed >100 days, and the fat deposition parameter coefficients for all days on feed. The generalized NL2SOL optimization algorithm had the fastest processing time and the minimum number of objective function evaluations when estimating the 4 fat deposition parameter coefficients from 2 observed values (initial and final fat). The subcutaneous fat parameter coefficient did indicate a metabolic difference between frame sizes. The results look promising, and the prototype Davis Growth Model has the potential to help the beef industry meet market specifications.
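A hedged sketch of the two equation families being calibrated, a logistic growth curve for DNA and a first-order differential equation for a fat depot; the rates, the DNA-to-fat coupling and the initial conditions are invented placeholders, not the Davis Growth Model's actual equations or coefficients.

```python
# Logistic DNA growth coupled to a first-order fat-deposition ODE, solved with
# SciPy. All parameter values below are illustrative placeholders.
import numpy as np
from scipy.integrate import solve_ivp

K, r = 100.0, 0.05           # assumed DNA asymptote and growth rate
k_fat = 0.02                 # assumed first-order fat deposition rate (1/day)

def rhs(t, y):
    dna, fat = y
    d_dna = r * dna * (1 - dna / K)          # logistic DNA growth
    target_fat = 4.0 * dna                   # placeholder DNA-driven ceiling
    d_fat = k_fat * (target_fat - fat)       # first-order approach to ceiling
    return [d_dna, d_fat]

sol = solve_ivp(rhs, (0, 300), [5.0, 2.0], t_eval=np.linspace(0, 300, 7))
print("days on feed:", sol.t.astype(int))
print("fat (arb.)  :", np.round(sol.y[1], 1))
```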
11.
Elizabeth M. Hashimoto Edwin M.M. Ortega Gilberto A. Paula Mauricio L. Barreto 《Computational Statistics & Data Analysis》2011,55(2):993-1007
In this study, regression models are evaluated for grouped survival data when the effect of censoring time is considered in the model and the regression structure is modeled through four link functions. The methodology for grouped survival data is based on life tables, and the times are grouped into k intervals so that ties are eliminated. The data are thus modeled with discrete lifetime regression models. The model parameters are estimated using the maximum likelihood and jackknife methods. To detect influential observations in the proposed models, diagnostic measures based on case deletion, termed global influence, and influence measures based on small perturbations in the data or in the model, referred to as local influence, are used. In addition to these measures, the total local influence estimate is also employed. Various simulation studies are performed to compare the performance of the four link functions of the regression models for grouped survival data under different parameter settings, sample sizes and numbers of intervals. Finally, a data set is analyzed using the proposed regression models.
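A minimal sketch of a discrete (grouped) survival regression of this kind: an interval hazard tied to a covariate through a complementary log-log link (one common choice for grouped data; the abstract does not name the four links compared), fit by maximum likelihood on simulated data. Interval-specific intercepts are omitted for brevity.

```python
# Grouped survival: lifetimes fall into k intervals; per-interval hazard
# follows a cloglog link in a covariate; parameters fit by MLE.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, k = 400, 5
x = rng.standard_normal(n)

def haz(alpha, beta, x):
    return 1 - np.exp(-np.exp(alpha + beta * x))   # complementary log-log link

# simulate grouped lifetimes; interval == k means censored past the last interval
interval = np.full(n, k)
event = np.zeros(n, dtype=bool)
for j in range(k):
    fail = (~event) & (rng.random(n) < haz(-2.0, 0.8, x))
    interval[fail] = j
    event |= fail

def negloglik(theta):
    a, b = theta
    h = np.clip(haz(a, b, x), 1e-10, 1 - 1e-10)
    # survive `interval` intervals at probability (1 - h) each, then fail if event
    return -(interval * np.log(1 - h) + event * np.log(h)).sum()

fit = minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
print("MLE (alpha, beta):", np.round(fit.x, 2), "  true: (-2.0, 0.8)")
```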
12.
Hemalatha Sathyanarayanamurthy Ratna Babu Chinnam 《Computers & Industrial Engineering》2009,57(3):996-1007
It is routine in probabilistic engineering design to conduct modeling studies to determine the influence of an input variable (or a combination of them) on the output variable(s). The output, or response, can then be fine-tuned by changing the design parameters based on this information. However, simply fine-tuning the output to the desired or target value is not adequate. Robust design principles suggest that we study not only the mean response for a given input vector but also the variance in the output attributed to noise and other unaccounted-for factors. Given our desire to reduce variability in any process, it is also important to understand which of the input factors affect the variability in the output the most. Given the significant computational overhead associated with most Computer Aided Engineering models, it is becoming popular to conduct such analysis through surrogate models built using a variety of metamodeling techniques. In this regard, the existing literature on metamodeling and sensitivity analysis techniques provides useful insights into the scenarios each suits best. However, few studies have simultaneously considered combinations of metamodeling and sensitivity analysis techniques and the environments in which each combination works best. This paper aims to address this gap through a study based on multiple metrics and two test problems. Two test functions have been used to build metamodels with three popular metamodeling techniques: Kriging, Radial-Basis Function (RBF) networks, and Support Vector Machines (SVMs). The metamodels are then used for sensitivity analysis with two popular methods, the Fourier Amplitude Sensitivity Test (FAST) and the Sobol' method, to determine the influence of variance in the input variables on the variance of the output variables. The advantages and disadvantages of the different metamodeling techniques, in combination with the sensitivity analysis methods, in determining the extent to which variability in the inputs affects variability in the output are analyzed.
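The workflow is easy to sketch: fit a cheap surrogate to a handful of expensive runs, then estimate sensitivity indices on the surrogate. Below, SciPy's RBFInterpolator stands in for the metamodel and a Saltelli pick-freeze estimator for the sensitivity step; the "expensive" model and the design size are placeholders.

```python
# Metamodel-based sensitivity analysis: RBF surrogate fit on a small design,
# then first-order Sobol' indices estimated on the cheap surrogate.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

def expensive(X):                               # stand-in for a costly CAE model
    return np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2

X_train = rng.random((200, 2))                  # small design of experiments
surrogate = RBFInterpolator(X_train, expensive(X_train))

# First-order Sobol' indices via Saltelli pick-freeze on the surrogate
N = 2**13
A, B = rng.random((N, 2)), rng.random((N, 2))
fA, fB = surrogate(A), surrogate(B)
var = np.concatenate([fA, fB]).var()
for i in range(2):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    Si = np.mean(fB * (surrogate(ABi) - fA)) / var
    print(f"S_{i+1} ~ {Si:.3f}")
```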
13.
Zhou Yicheng Lu Zhenzhou Xiao Sinan Yun Wanying 《Structural and Multidisciplinary Optimization》2019,60(3):1189-1207
Global sensitivity analysis (GSA) plays an important role in quantifying the relative importance of uncertain parameters to the model response. However, ...
14.
Andrea Saltelli 《Computer Physics Communications》2002,145(2):280-297
15.
A multi-agent system (MAS) model is coupled with a physically-based groundwater model to understand the declining water table in the heavily irrigated Republican River basin. Each agent in the MAS model is associated with five behavioral parameters, and we estimate their influences on the coupled models using Global Sensitivity Analysis (GSA). This paper utilizes Hadoop-based cloud computing techniques and a Polynomial Chaos Expansion (PCE) based variance decomposition approach to improve GSA for large-scale socio-hydrological models. With these techniques, running 1000 scenarios of the coupled models can be completed within two hours on Hadoop clusters, a substantial improvement over the 42 days required to run these scenarios sequentially on a desktop machine. Based on the model results, GSA is conducted with the surrogate model derived using PCE to measure the impacts of the spatio-temporal variations of the behavioral parameters on crop profits and the water table, identifying the influential parameters.
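A self-contained sketch of PCE-based GSA: fit a polynomial chaos expansion by least squares in an orthonormal Legendre basis (appropriate for uniform inputs), then read Sobol' indices directly off the coefficients. Two inputs and a toy model stand in for the five behavioral parameters and the coupled MAS-groundwater model.

```python
# PCE surrogate + variance decomposition: with an orthonormal basis, the
# output variance is the sum of squared non-constant coefficients, and the
# first-order index of x_i sums the terms involving only x_i.
import numpy as np
from numpy.polynomial.legendre import legval
from itertools import product

def psi(n, x):
    """Orthonormal Legendre polynomial of degree n for U(-1, 1) inputs."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return np.sqrt(2 * n + 1) * legval(x, c)

rng = np.random.default_rng(0)
d, deg, N = 2, 3, 1000
X = rng.uniform(-1, 1, (N, d))
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.3 * X[:, 0] * X[:, 1]   # placeholder model

multi = [m for m in product(range(deg + 1), repeat=d) if sum(m) <= deg]
Phi = np.column_stack([np.prod([psi(m[j], X[:, j]) for j in range(d)], axis=0)
                       for m in multi])
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

var = sum(c**2 for m, c in zip(multi, coef) if sum(m) > 0)
for i in range(d):
    Si = sum(c**2 for m, c in zip(multi, coef)
             if m[i] > 0 and sum(m) == m[i]) / var
    print(f"S_{i+1} = {Si:.3f}")
```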
16.
H.-R. Bae R. V. Grandhi R. A. Canfield 《Structural and Multidisciplinary Optimization》2006,31(4):270-279
Sensitivity analysis for the quantified uncertainty in evidence theory is developed. In reliability quantification, classical probabilistic analysis has been a popular approach in many engineering disciplines. However, when sufficient data to construct probability distributions in a large, complex system cannot be obtained, the classical probability methodology may not be appropriate for quantifying the uncertainty. Evidence theory, also called Dempster–Shafer theory, has the potential to quantify aleatory (random) and epistemic (subjective) uncertainties because it can directly handle insufficient data and incomplete knowledge. In this paper, interval information is assumed to be the best representation of imprecise information, and the sensitivity of plausibility in evidence theory is analytically derived with respect to expert opinions and structural parameters. The results of the sensitivity analysis are expected to be very useful in finding the major contributors to the quantified uncertainty and in redesigning the structural system for risk minimization.
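A minimal sketch of the evidence-theory quantities involved: interval-valued focal elements with basic probability assignments (BPA) yield belief and plausibility of a failure region, and a finite-difference probe shows how plausibility responds to a perturbed expert mass. The intervals, masses, and failure region are invented placeholders, and the paper's analytical derivatives are not reproduced.

```python
# Belief and plausibility of a failure region from interval-valued focal
# elements, plus a crude sensitivity of plausibility to one expert's BPA.
import numpy as np

# Focal elements: (interval for an uncertain parameter, BPA mass from experts)
focal = [((0.0, 0.4), 0.3),
         ((0.3, 0.7), 0.5),
         ((0.6, 1.0), 0.2)]
fail_lo, fail_hi = 0.5, 1.0          # "failure" region of the parameter

bel = sum(m for (lo, hi), m in focal if lo >= fail_lo and hi <= fail_hi)
pl = sum(m for (lo, hi), m in focal if hi > fail_lo and lo < fail_hi)
print(f"Bel(failure) = {bel:.2f}, Pl(failure) = {pl:.2f}")   # 0.20, 0.70

# Sensitivity of plausibility to one expert opinion: bump a BPA, renormalise
def pl_of(masses):
    masses = np.asarray(masses, float)
    masses = masses / masses.sum()
    return sum(m for ((lo, hi), _), m in zip(focal, masses)
               if hi > fail_lo and lo < fail_hi)

base, bumped = [0.3, 0.5, 0.2], [0.3, 0.55, 0.2]
print("dPl/dm2 ~", (pl_of(bumped) - pl_of(base)) / 0.05)     # finite difference
```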
17.
Sensitivity analysis is indispensable to structural design and optimization. This paper focuses on sensitivity analysis for models with correlated inputs. To explore the contributions of correlated inputs to the uncertainty in a model output, universal expressions for the variance contributions of the correlated inputs are first derived based on the high-dimensional model representation (HDMR) of the model function. Then, by analyzing the composition of these variance contributions, the variance contribution of an individual correlated input to the model output is further decomposed into four components: the independent contribution of the individual input itself, the independent contribution of its interactions with the other inputs, the contribution purely due to its correlation with the other inputs, and the contribution of interactions associated with that correlation. General expressions for these components are also derived. Based on the characteristics of these general expressions, a universal framework for estimating the various variance contributions of the correlated inputs is developed, taking the efficient state dependent parameter (SDP) method as an illustration. Numerical and engineering tests show that this decomposition of the variance contributions of the correlated inputs can provide useful information for exploring the sources of output uncertainty and identifying the structure of the model function for complicated models with correlated inputs. The efficiency and accuracy of the SDP-based method for estimating the various variance contributions of the correlated inputs are also demonstrated by the examples.
18.
The mathematical model underlying the results of The Limits to Growth has been found to be very sensitive to small perturbations. Even qualitatively different results, such as a lack of evidence on which to base a prediction of the collapse of world population, can be obtained by a combination of small changes. Nevertheless, comment is made on the value of applying system-theoretic techniques to non-technical systems.
19.
Francesca Campolongo Andrea Saltelli Jessica Cariboni 《Computer Physics Communications》2011,(4):978-988
The present work is a sequel to a recent paper published in this journal, where the superiority of the ‘radial design’ for computing the ‘total sensitivity index’ was ascertained. Both concepts belong to the sensitivity analysis of model output. A radial design is one in which, starting from a random point in the hyperspace of the input factors, one step is taken in turn for each factor. The procedure is iterated a number of times with a different starting random point, so as to collect a sample of elementary shifts for each factor. The total sensitivity index is a powerful sensitivity measure which can be estimated based on such a sample. Given the similarity between the total sensitivity index and a screening test known as the method of elementary effects (or method of Morris), we test the radial design on this method. Both methods are best practices: the total sensitivity index in the class of quantitative measures, and the elementary effects in that of screening methods. We find that the radial design is indeed superior even for the computation of the elementary effects method. This opens the door to a sensitivity analysis strategy whereby the analyst can start with a small number of points (screening-wise) and then, depending on the results, possibly increase the number of points until a fully quantitative measure can be computed. Also of interest to practitioners is that a radial design is nothing other than an iterated ‘One factor At a Time’ (OAT) approach: OAT is a radial design of size one. While OAT is not a good practice, modelers in all domains keep using it for sensitivity analysis for reasons discussed elsewhere (Saltelli and Annoni, 2010) [23]. With the present approach, modelers are offered a straightforward and economical upgrade of their OAT which maintains OAT's appeal of moving just one factor at each step.
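A radial-design sketch of the elementary effects computation discussed here: from each base point, one factor at a time is moved to the value of an auxiliary point, and the scaled output shifts are averaged into the screening measure mu*. Plain random points stand in for the quasi-random base points typically used.

```python
# Radial design for elementary effects: each block yields one elementary
# effect per factor; r blocks give the Morris-type screening sample.
import numpy as np

def radial_ee(f, k, r=20, seed=0):
    rng = np.random.default_rng(seed)
    ee = np.zeros((r, k))
    for t in range(r):
        a, b = rng.random(k), rng.random(k)     # base and auxiliary points
        b = np.where(np.abs(b - a) < 1e-3, (a + 0.5) % 1.0, b)  # avoid tiny steps
        fa = f(a)
        for i in range(k):
            x = a.copy()
            x[i] = b[i]                          # move factor i only (radial step)
            ee[t, i] = (f(x) - fa) / (b[i] - a[i])
    return np.abs(ee).mean(axis=0)               # mu*, the screening measure

f = lambda x: x[0] + 2 * x[1] ** 2 + 0.01 * x[2]
print("mu*:", np.round(radial_ee(f, 3), 3))
```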
20.
To clearly explore the origin of the variance of the output response when correlated input variables are involved, a novel method based on the state dependent parameters (SDP) approach is proposed to decompose the contribution of the correlated input variables to the output variance into two parts: the uncorrelated contribution, due to the unique variations of a variable, and the correlated contribution, due to the variations of a variable correlated with other variables. The correlated contribution is composed of the components of the individual input variable correlated with each of the other input variables. A conceptually simple and effective SDP method is further proposed to decompose the correlated contribution into these components, from which a second-order importance matrix can be solved to explicitly expose the contribution components of each correlated input variable to the variance of the output response. Compared with the existing regression-based method for decomposing the contribution of correlated input variables to the output variance, the proposed method is applicable not only to linear response functions but also to nonlinear ones. It has advantages in both efficiency and accuracy, which are demonstrated by several numerical and engineering examples.
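For the linear-response case the abstract contrasts with, the split is easy to demonstrate numerically: regress the output on each input for the full (correlated) contribution, and on the input's residual given the other input for the uncorrelated part. The coefficients and correlation below are invented placeholders.

```python
# Uncorrelated vs. correlated variance contribution of x1 in a linear model
# with correlated Gaussian inputs, via the regression-based split.
import numpy as np

rng = np.random.default_rng(0)
rho, a1, a2, N = 0.6, 1.0, 2.0, 200_000
z1, z2 = rng.standard_normal((2, N))
x1 = z1
x2 = rho * z1 + np.sqrt(1 - rho**2) * z2        # corr(x1, x2) = rho
y = a1 * x1 + a2 * x2

# Full contribution of x1: Var(E[Y|X1]) via the linear regression of y on x1
slope = np.cov(x1, y)[0, 1] / x1.var()
total_x1 = slope**2 * x1.var() / y.var()

# Uncorrelated contribution: use the residual of x1 after regressing on x2
x1_resid = x1 - (np.cov(x1, x2)[0, 1] / x2.var()) * x2
slope_u = np.cov(x1_resid, y)[0, 1] / x1_resid.var()
uncorr_x1 = slope_u**2 * x1_resid.var() / y.var()

print(f"total contribution of x1:       {total_x1:.3f}")
print(f"  uncorrelated part:            {uncorr_x1:.3f}")
print(f"  correlated part (difference): {total_x1 - uncorr_x1:.3f}")
```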