Similar Literature
20 similar documents retrieved.
1.
Dynamic crop models usually have a complex structure and a large number of parameters. Those parameter values usually cannot be directly measured, and they vary with crop cultivars, environmental conditions and management practices. Thus, parameter estimation and model calibration are always difficult issues for crop models. Therefore, the quantification of parameter sensitivity and the identification of influential parameters are very important and useful. In this work, late-season rice was simulated with meteorological data in Nanchang, China. Furthermore, we conducted a sensitivity analysis of 20 selected parameters in ORYZA_V3 using the Extended FAST method. We presented the sensitivity results for four model outputs (LAI, WAGT, WST and WSO) at four development stages and the results for yield. Meanwhile, we compared the differences among the sensitivity results for the model outputs simulated in cold, normal and hot years. The uncertainty of output variables derived from parameter variation and weather conditions was also quantified. We found that the development rates, RGRLMN and FLV0.5 had strong effects on all model outputs in all conditions, and the parameters WGRMX and SPGF had relatively high effects on yield in the cold year. Only LAI was sensitive to ASLA. The influential parameters had unequal effects on different outputs, and their effects differed across the four development stages. Through the interaction of parameter variation and weather conditions, the uncertainty of model outputs varied significantly. However, the weather conditions had negligible effects on the identification of influential parameters, although they had slight effects on the sensitivity ranks of the parameters for outputs in the panicle-formation and grain-filling phases, including yield at maturity. The results suggest that the influential parameters should be given priority in recalibration and fine-tuned with higher accuracy during model calibration.
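As a rough illustration of the Extended FAST workflow described above, the sketch below uses the SALib package with a toy yield function standing in for ORYZA_V3; the parameter names, bounds and functional form are illustrative assumptions, not the paper's actual setup.

```python
# Minimal Extended FAST sketch with SALib; a toy yield function stands in
# for ORYZA_V3, and the parameter names/bounds are illustrative only.
import numpy as np
from SALib.sample import fast_sampler
from SALib.analyze import fast

problem = {
    "num_vars": 3,
    "names": ["RGRLMN", "FLV0.5", "SPGF"],      # hypothetical subset of ORYZA parameters
    "bounds": [[0.005, 0.012], [0.4, 0.7], [40000, 70000]],
}

def toy_yield(x):
    """Stand-in for one ORYZA_V3 run; returns a scalar 'yield'."""
    rgrl, flv, spgf = x
    return 8000 * rgrl / 0.01 * flv + 0.02 * spgf + 50 * np.sin(10 * rgrl)

X = fast_sampler.sample(problem, 1000)          # N runs per parameter search curve
Y = np.array([toy_yield(x) for x in X])
Si = fast.analyze(problem, Y)                   # first-order (S1) and total (ST) indices
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: S1={s1:.3f}, ST={st:.3f}")
```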

2.
This paper is concerned with the development of a calibration procedure for dynamic CGE models in non-steady-state situations. While the literature on calibration of dynamic models mainly concentrates on calibration in the context of a steady state, this essay centers on the calibration procedure for the case in which the model's state variables are not initially at their steady-state values. Though this is the more realistic interpretation of a base year's data set, it raises a number of additional difficulties, including intertemporal feedback and the requirement that the basic data be consistent with the intertemporal equations of the model. These difficulties are discussed and a procedure is presented which can cope with them. Finally, the procedure is applied to an overlapping generations growth model with foresight, and the steps involved in calibration are illustrated in detail.

3.
Parameterizing the phenology of new crop varieties is a major challenge in crop modeling. Here we consider calibration of the phenology sub-model of the widely used crop model APSIM-Oryza, using commonly available varietal data. We show that the dynamic phenology sub-model can be well approximated by a static model, with three equations. It is then straightforward to estimate the parameters using any standard statistical software package. The approach is applied to four rice varieties from Sri Lanka. The software provides not only the best-fit parameters, but also uncertainty information about those parameters. This is essential for understanding how well the model will predict out of sample. Here the photoperiod sensitivity coefficient has large uncertainty, and so predictions for day lengths outside the data set are very unreliable. The uncertainty information is also used to show that in our case, doing more field trials would have very little effect on uncertainty.  相似文献   
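A minimal sketch of the static-fit idea: a hypothetical photothermal equation (not the paper's actual three equations) is fitted with scipy's curve_fit, whose returned covariance matrix supplies the parameter-uncertainty information discussed above; the narrow photoperiod range in the synthetic data illustrates why the photoperiod-sensitivity coefficient can be poorly constrained.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical static phenology equation (NOT the paper's actual equations):
# days from sowing to flowering as a function of mean temperature and photoperiod,
# with a thermal-time requirement tt and a photoperiod-sensitivity coefficient ps.
def days_to_flowering(X, tt, ps):
    tmean, photoperiod = X
    return tt * (1.0 + ps * np.maximum(photoperiod - 10.0, 0.0)) / (tmean - 8.0)

# Synthetic "varietal trial" data for one variety
rng = np.random.default_rng(7)
tmean = rng.uniform(24.0, 30.0, 12)
photoperiod = rng.uniform(11.5, 12.8, 12)       # narrow range -> ps is poorly constrained
days = days_to_flowering((tmean, photoperiod), 1600.0, 0.15) + rng.normal(0, 2.0, 12)

popt, pcov = curve_fit(days_to_flowering, (tmean, photoperiod), days, p0=[1500.0, 0.1])
perr = np.sqrt(np.diag(pcov))                   # standard errors of the parameters
for name, p, e in zip(["tt", "ps"], popt, perr):
    print(f"{name} = {p:.3g} +/- {e:.3g}")
```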

4.
The Taylor series approach for uncertainty analyses is advanced as an efficient method of producing a probabilistic output from air dispersion models. A probabilistic estimate helps in making better-informed decisions when compared to results of deterministic models. In this work, the Industrial Source Complex Short Term (ISCST) model is used as an analytical model to predict pollutant transport from a point source. First- and second-order Taylor series approximations are used to calculate the uncertainty in ground level concentrations of ISCST calculations. The results of the combined ISCST and uncertainty calculations are then validated with traditional Monte Carlo (MC) simulations. The Taylor series uncertainty estimates are a function of the variance in input parameters (wind speed and temperature) and the model sensitivities to input parameters. While the input variance is spatially invariant, sensitivity is spatially variable; hence the uncertainty in modeled output varies spatially. A comparison with the MC approach shows that uncertainty estimated by first-order Taylor series is found to be appropriate for ambient temperature, while second-order Taylor series is observed to be more accurate for wind speed. Since the Taylor series approach is simple and time-efficient compared to the MC method, it provides an attractive alternative.  相似文献   
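The first-order Taylor (delta-method) propagation can be sketched in a few lines and checked against Monte Carlo; the plume-like function below is a toy stand-in, not the ISCST model.

```python
import numpy as np

# Toy Gaussian-plume-style ground-level concentration as a function of
# wind speed u and temperature T (illustrative stand-in for ISCST).
def conc(u, T):
    return 500.0 / u * np.exp(-0.002 * (T - 285.0) ** 2)

u0, T0 = 4.0, 290.0          # mean inputs
su, sT = 0.8, 2.5            # input standard deviations

# First-order Taylor variance: var(C) ~ (dC/du)^2 su^2 + (dC/dT)^2 sT^2
eps = 1e-5
dCdu = (conc(u0 + eps, T0) - conc(u0 - eps, T0)) / (2 * eps)
dCdT = (conc(u0, T0 + eps) - conc(u0, T0 - eps)) / (2 * eps)
var_taylor = dCdu**2 * su**2 + dCdT**2 * sT**2

# Monte Carlo check
rng = np.random.default_rng(0)
C = conc(rng.normal(u0, su, 100_000), rng.normal(T0, sT, 100_000))
print(f"Taylor std: {np.sqrt(var_taylor):.2f}, Monte Carlo std: {C.std():.2f}")
```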

5.
Crisp input and output data are fundamentally indispensable in traditional data envelopment analysis (DEA). However, the input and output data in real-world problems are often imprecise or ambiguous. Some researchers have proposed interval DEA (IDEA) and fuzzy DEA (FDEA) to deal with imprecise and ambiguous data in DEA. Nevertheless, many real-life problems use linguistic data that cannot be used as interval data, and a large number of input variables in fuzzy logic could result in a significant number of rules that are needed to specify a dynamic model. In this paper, we propose an adaptation of the standard DEA under conditions of uncertainty. The proposed approach is based on a robust optimization model in which the input and output parameters are constrained to be within an uncertainty set, with additional constraints based on the worst-case solution with respect to the uncertainty set. Our robust DEA (RDEA) model seeks to maximize efficiency (similar to standard DEA) but under the assumption of a worst-case efficiency defined by the uncertainty set and its supporting constraint. A Monte-Carlo simulation is used to compute the conformity of the rankings in the RDEA model. The contribution of this paper is fourfold: (1) we consider ambiguous, uncertain and imprecise input and output data in DEA; (2) we address the gap in the imprecise DEA literature for problems not suitable or difficult to model with interval or fuzzy representations; (3) we propose a robust optimization model in which the input and output parameters are constrained to be within an uncertainty set with additional constraints based on the worst-case solution with respect to the uncertainty set; and (4) we use Monte-Carlo simulation to specify a range of Gamma in which the rankings of the DMUs occur with high probability.
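For context, the crisp input-oriented CCR model that RDEA extends can be solved as a small linear program; the sketch below shows only this deterministic baseline with made-up data, not the paper's robust worst-case formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Crisp input-oriented CCR efficiency (multiplier form) for each DMU.
# This is only the deterministic baseline that the robust (RDEA) model extends.
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0]])   # inputs  (DMU x m)
Y = np.array([[1.0], [1.0], [1.0], [1.0]])                       # outputs (DMU x s)

def ccr_efficiency(o):
    m, s = X.shape[1], Y.shape[1]
    c = np.concatenate([-Y[o], np.zeros(m)])            # maximize u . y_o
    A_eq = np.concatenate([np.zeros(s), X[o]])[None]    # v . x_o = 1
    A_ub = np.hstack([Y, -X])                            # u . y_j - v . x_j <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(len(X)),
                  A_eq=A_eq, b_eq=[1.0], bounds=(0, None), method="highs")
    return -res.fun

for o in range(len(X)):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```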

6.
The present study proposes a General Probabilistic Framework (GPF) for uncertainty and global sensitivity analysis of deterministic models in which, in addition to scalar inputs, non-scalar and correlated inputs can be considered as well. The analysis is conducted with the variance-based approach of Sobol/Saltelli where first and total sensitivity indices are estimated. The results of the framework can be used in a loop for model improvement, parameter estimation or model simplification. The framework is applied to SWAP, a 1D hydrological model for the transport of water, solutes and heat in unsaturated and saturated soils. The sources of uncertainty are grouped in five main classes: model structure (soil discretization), input (weather data), time-varying (crop) parameters, scalar parameters (soil properties) and observations (measured soil moisture). For each source of uncertainty, different realizations are created based on direct monitoring activities. Uncertainty of evapotranspiration, soil moisture in the root zone and bottom fluxes below the root zone are considered in the analysis. The results show that the sources of uncertainty are different for each output considered and it is necessary to consider multiple output variables for a proper assessment of the model. Improvements on the performance of the model can be achieved reducing the uncertainty in the observations, in the soil parameters and in the weather data. Overall, the study shows the capability of the GPF to quantify the relative contribution of the different sources of uncertainty and to identify the priorities required to improve the performance of the model. The proposed framework can be extended to a wide variety of modelling applications, also when direct measurements of model output are not available.  相似文献   
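One common way to fold a non-scalar input (e.g. a weather realization) into a variance-based analysis is to sample an integer index over a set of pre-generated realizations alongside the scalar parameters; the SALib-based sketch below illustrates that idea with a toy model and made-up names, and is not the SWAP/GPF implementation.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# A non-scalar input (a weather realization) is handled by sampling an integer
# index into pre-generated realizations, alongside two scalar soil parameters.
n_weather = 10
weather_effect = np.random.default_rng(1).normal(0.0, 0.5, n_weather)

problem = {
    "num_vars": 3,
    "names": ["theta_sat", "k_sat", "weather_idx"],
    "bounds": [[0.35, 0.50], [5.0, 50.0], [0, n_weather - 1e-9]],
}

def model(x):
    theta, k, w = x
    return 2.0 * theta + 0.01 * k + weather_effect[int(w)]

X = saltelli.sample(problem, 1024)              # Saltelli cross-sampling design
Y = np.array([model(row) for row in X])
Si = sobol.analyze(problem, Y)                  # first-order S1 and total ST indices
print(dict(zip(problem["names"], np.round(Si["S1"], 3))))
print(dict(zip(problem["names"], np.round(Si["ST"], 3))))
```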

7.
Manual calibration of distributed models with many unknown parameters can result in problems of equifinality and high uncertainty. In this study, the Generalized Likelihood Uncertainty Estimation (GLUE) technique was used to address these issues through uncertainty and sensitivity analysis of a distributed watershed scale model (SAHYSMOD) for predicting changes in the groundwater levels of the Rechna Doab basin, Pakistan. The study proposes and then describes a stepwise methodology for SAHYSMOD uncertainty analysis that has not been explored in any study before. One thousand input data files created through Monte Carlo simulations were classified as behavior and non-behavior sets using threshold likelihood values. The model was calibrated (1983–1988) and validated (1998–2003) through satisfactory agreement between simulated and observed data. Acceptable values were observed in the statistical performance indices. Approximately 70% of the observed groundwater level values fell within uncertainty bounds. Groundwater pumping (Gw) and hydraulic conductivity (Kaq) were found to be highly sensitive parameters affecting groundwater recharge.  相似文献   
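A compact GLUE sketch, with a toy groundwater-level function standing in for SAHYSMOD and Nash-Sutcliffe efficiency as the informal likelihood (an assumption; the paper's likelihood measure and threshold may differ):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy groundwater-level model standing in for SAHYSMOD: the simulated level
# depends on pumping Gw and aquifer conductivity Kaq (illustrative form only).
def simulate(gw, kaq, t):
    return 120.0 - 0.8 * gw * t / (kaq + 1.0)

t_obs = np.arange(1.0, 11.0)
obs = simulate(3.0, 4.0, t_obs) + rng.normal(0, 0.3, t_obs.size)   # synthetic observations

# 1) Monte Carlo sampling of parameter sets
gw = rng.uniform(1.0, 6.0, 1000)
kaq = rng.uniform(1.0, 10.0, 1000)
sims = np.array([simulate(g, k, t_obs) for g, k in zip(gw, kaq)])

# 2) Informal likelihood (Nash-Sutcliffe efficiency) and behavioural threshold
nse = 1.0 - ((sims - obs) ** 2).sum(axis=1) / ((obs - obs.mean()) ** 2).sum()
behavioural = nse > 0.5
weights = nse[behavioural] / nse[behavioural].sum()

# 3) Likelihood-weighted 5th-95th percentile uncertainty bounds per time step
lower, upper = [], []
for j in range(t_obs.size):
    s = sims[behavioural][:, j]
    idx = np.argsort(s)
    cum = np.cumsum(weights[idx])
    lower.append(s[idx][np.searchsorted(cum, 0.05)])
    upper.append(s[idx][np.searchsorted(cum, 0.95)])
print("share of observations inside the bounds:",
      np.mean((obs >= np.array(lower)) & (obs <= np.array(upper))))
```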

8.
We demonstrate the use of sensitivity analysis to rank sources of uncertainty in models for economic appraisal of flood risk management policies, taking into account spatial scale issues. A methodology of multi-scale variance-based global sensitivity analysis is developed, and illustrated on the NOE model on the Orb River, France. The variability of the amount of expected annual flood avoided damages, and the associated sensitivity indices, are estimated over different spatial supports, ranging from small cells to the entire floodplain. Both uncertainty maps and sensitivity maps are produced to identify the key input variables in the NOE model at different spatial scales. Our results show that on small spatial supports, variance of the output indicator is mainly due to the water depth maps and the assets map (spatially distributed model inputs), while on large spatial supports, it is mainly due to the flood frequencies and depth–damage curves (non spatial inputs).  相似文献   

9.
Existing methods for the computation of global sensitivity indices are challenged by both number of input-output samples required and the presence of dependent or correlated variables. First, a methodology is developed to increase the efficiency of sensitivity computations with independent variables by incorporating optimal space-filling quasi-random sequences into an existing importance sampling-based kernel regression sensitivity method. Two prominent situations where parameter correlations cannot be ignored, however, are (1) posterior distributions of calibrated parameters and (2) transient, coupled simulations. Therefore, the sensitivity methodology is generalized to dependent variables allowing for efficient post-calibration sensitivity analyses using input-output samples obtained directly from Bayesian calibration. These methods are illustrated using coupled, aerothermal simulations where it is observed that model errors and parameter correlations control the sensitivity estimates until coupling effects become dominant over time.  相似文献   
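The space-filling aspect can be illustrated with scipy's quasi-Monte Carlo module: a scrambled Sobol' sequence covers the input space far more uniformly (lower discrepancy) than a pseudo-random sample of the same size.

```python
import numpy as np
from scipy.stats import qmc

# Space-filling quasi-random (Sobol') sample vs. plain pseudo-random sample,
# compared by the centred L2 discrepancy (lower = more uniform coverage).
d, m = 4, 9                                   # 4 inputs, 2**9 = 512 points
sobol_pts = qmc.Sobol(d=d, scramble=True, seed=0).random_base2(m=m)
random_pts = np.random.default_rng(0).random((2**m, d))
print("Sobol' discrepancy :", qmc.discrepancy(sobol_pts))
print("random discrepancy :", qmc.discrepancy(random_pts))
```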

10.
Global Sensitivity Analysis (GSA) is an essential technique to support the calibration of environmental models by identifying the influential parameters (screening) and ranking them. In this paper, the widely used variance-based method (Sobol') and the recently proposed moment-independent PAWN method for GSA are applied to the Soil and Water Assessment Tool (SWAT) and compared in terms of the ranking and screening results for 26 SWAT parameters. In order to set a threshold for parameter screening, we propose the use of a "dummy parameter", which has no influence on the model output. The sensitivity index of the dummy parameter is calculated from sampled data, without changing the model equations. We find that Sobol' and PAWN identify the same 12 influential parameters but rank them differently, and discuss how this result may be related to the limitations of the Sobol' method when the output distribution is asymmetric.
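A simple variant of the dummy-parameter idea is sketched below: an explicit unused input is added, and its total-order index, which is pure estimation noise, serves as the screening threshold. The paper computes the dummy's index directly from the existing sample without adding a column, and the parameter names here are only SWAT-flavoured placeholders.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Screening with a "dummy" parameter: the third input is sampled like the
# others but never used by the toy model, so its estimated index reflects
# estimation noise and serves as a data-driven screening threshold.
problem = {
    "num_vars": 3,
    "names": ["cn2", "alpha_bf", "dummy"],
    "bounds": [[35, 98], [0.0, 1.0], [0.0, 1.0]],
}
f = lambda x: 0.05 * x[0] + 2.0 * np.sin(np.pi * x[1])     # ignores the dummy

X = saltelli.sample(problem, 1024)
Y = np.array([f(row) for row in X])
Si = sobol.analyze(problem, Y)
threshold = Si["ST"][-1]                                    # total-order index of the dummy
influential = [n for n, st in zip(problem["names"], Si["ST"]) if st > threshold]
print("screening threshold (dummy ST):", round(threshold, 4))
print("influential parameters:", influential)
```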

11.
Complex social-ecological systems models typically need to consider deeply uncertain long run future conditions. The influence of this deep (i.e. incalculable, uncontrollable) uncertainty on model parameter sensitivities needs to be understood and robustly quantified to reliably inform investment in data collection and model refinement. Using a variance-based global sensitivity analysis method (eFAST), we produced comprehensive model diagnostics of a complex social-ecological systems model under deep uncertainty characterised by four global change scenarios. The uncertainty of the outputs, and the influence of input parameters differed substantially between scenarios. We then developed sensitivity indicators that were robust to this deep uncertainty using four criteria from decision theory. The proposed methods can increase our understanding of the effects of deep uncertainty on output uncertainty and parameter sensitivity, and incorporate the decision maker's risk preference into modelling-related activities to obtain greater resilience of decisions to surprise.  相似文献   
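The abstract does not name the four decision-theoretic criteria used; the sketch below applies one plausible set (maximin, maximax, Laplace, minimax regret) to hypothetical per-scenario sensitivity indices to show how scenario-robust parameter rankings can be derived.

```python
import numpy as np

# Hypothetical total-order sensitivity indices for 5 parameters under 4 global
# change scenarios (rows = scenarios, columns = parameters); values are made up.
S = np.array([
    [0.45, 0.20, 0.15, 0.10, 0.05],
    [0.30, 0.35, 0.10, 0.15, 0.05],
    [0.50, 0.10, 0.20, 0.10, 0.08],
    [0.25, 0.30, 0.25, 0.10, 0.06],
])
names = ["p1", "p2", "p3", "p4", "p5"]

maximin = S.min(axis=0)                        # worst-case (most pessimistic) index
maximax = S.max(axis=0)                        # best-case index
laplace = S.mean(axis=0)                       # equal scenario weights
regret = S.max(axis=1, keepdims=True) - S      # shortfall from the top parameter per scenario
minimax_r = regret.max(axis=0)                 # minimax regret (lower = more robustly important)

for crit, vals, pick in [("maximin", maximin, "max"), ("maximax", maximax, "max"),
                         ("Laplace", laplace, "max"), ("minimax regret", minimax_r, "min")]:
    order = np.argsort(vals)
    if pick == "max":
        order = order[::-1]
    print(f"{crit:>15}: {[names[i] for i in order]}")
```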

12.
Existing sensitivity indicators have difficulty describing the performance of a synthetic aperture radiometer after complex calibration. Considering the linear relationship between the retrieved brightness-temperature distribution and the visibility samples, a method for estimating the measurement uncertainty of the visibility samples is proposed to characterize the performance of the synthetic aperture radiometer. First, a mathematical model linking the calibration parameters to the visibility samples is established according to the calibration procedure; then, the measurement uncertainty of each calibration parameter is estimated; finally, the combined measurement uncertainty of the visibility samples is estimated on the basis of these two steps. For synthetic aperture radiometer calibration, this approach helps in selecting reasonable calibration parameters and optimizing the calibration procedure.

13.
Computers & Geosciences, 2006, 32(6): 803-817
Analysis of the sensitivity of predictions of slope instability to input data and model uncertainties provides a rationale for targeted site investigation and iterative refinement of geotechnical models. However, sensitivity methods based on local derivatives do not reflect model behaviour over the whole range of input variables, whereas methods based on standardised regression or correlation coefficients cannot detect non-linear and non-monotonic relationships between model input and output. Variance-based sensitivity analysis (VBSA) provides a global, model-independent sensitivity measure. The approach is demonstrated using the Combined Hydrology and Stability Model (CHASM) and is applicable to a wide variety of computer models. The method of Sobol', assuming independence between input variables, was used to identify interactions between model input variables, whilst replicated Latin Hypercube Sampling (LHS) was used to investigate the effects of statistical dependence between the input variables. The SIMLAB software was used both to generate the input sample and to calculate the sensitivity indices. The analysis provided quantified evidence of well-known sensitivities as well as demonstrating how uncertainty in slope failure during rainfall is, for the examples tested here, more attributable to uncertainty in the soil strength than to uncertainty in the rainfall.

14.
Model-based reliability analysis is affected by different types of epistemic uncertainty, due to inadequate data and modeling errors. When the physics-based simulation model is computationally expensive, a surrogate has often been used in reliability analysis, introducing additional uncertainty due to the surrogate. This paper proposes a framework to include statistical uncertainty and model uncertainty in surrogate-based reliability analysis. Two types of surrogates have been considered: (1) general-purpose surrogate models that compute the system model output over the desired ranges of the random variables; and (2) limit-state surrogates. A unified approach to connect the model calibration analysis using the Kennedy and O’Hagan (KOH) framework to the construction of limit state surrogate and to estimating the uncertainty in reliability analysis is developed. The Gaussian Process (GP) general-purpose surrogate of the physics-based simulation model obtained from the KOH calibration analysis is further refined at the limit state (local refinement) to construct the limit state surrogate, which is used for reliability analysis. An efficient single-loop sampling approach using the probability integral transform is used for sampling the input variables with statistical uncertainty. The variability in the GP prediction (surrogate uncertainty) is included in reliability analysis through correlated sampling of the model predictions at different inputs. The Monte Carlo sampling (MCS) error, which represents the error due to limited Monte Carlo samples, is quantified by constructing a probability density function. All the different sources of epistemic uncertainty are quantified and aggregated to estimate the uncertainty in the reliability analysis. Two examples are used to demonstrate the proposed techniques.  相似文献   
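A reduced sketch of the surrogate-uncertainty step only: a scikit-learn GP is fitted to a toy limit-state function, and correlated posterior draws at all Monte Carlo inputs propagate the surrogate uncertainty into the failure-probability estimate. The KOH calibration, local limit-state refinement and the MCS-error PDF of the paper are omitted, and the limit-state function is an assumption.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# Toy "physics" limit-state function g; failure is defined as g < 0.
def g_true(x):
    return 3.0 - x[:, 0] ** 2 - 0.5 * np.sin(3 * x[:, 1])

# 1) Fit a GP surrogate from a small design of experiments
X_train = rng.uniform(-2, 2, size=(25, 2))
gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_train, g_true(X_train))

# 2) Single-loop MCS on the surrogate, with surrogate uncertainty included by
#    drawing correlated samples of the GP prediction at all MCS inputs at once
X_mc = rng.normal(0.0, 1.0, size=(2000, 2))                # input-variable uncertainty
g_draws = gp.sample_y(X_mc, n_samples=50, random_state=1)  # (2000, 50) correlated draws
pf = (g_draws < 0).mean(axis=0)                            # failure prob. per GP realization

# 3) Epistemic uncertainty in the reliability estimate due to the surrogate
print(f"P_f mean = {pf.mean():.4f}, std across GP realizations = {pf.std():.4f}")
```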

15.
We propose a modification to the Levenberg-Marquardt minimization algorithm for a more robust and more efficient calibration of highly parameterized, strongly nonlinear models of multiphase flow through porous media. The new method combines the advantages of truncated singular value decomposition with those of the classical Levenberg-Marquardt algorithm, thus enabling a more robust solution of underdetermined inverse problems with complex relations between the parameters to be estimated and the observable state variables used for calibration. The truncation limit separating the solution space from the calibration null space is re-evaluated during the iterative calibration process. In between these re-evaluations, fewer forward simulations are required, compared to the standard approach, to calculate the approximate sensitivity matrix. Truncated singular values are used to calculate the Levenberg-Marquardt parameter updates, ensuring that safe small steps along the steepest-descent direction are taken for highly correlated parameters of low sensitivity, whereas efficient quasi-Gauss-Newton steps are taken for independent parameters with high impact. The performance of the proposed scheme is demonstrated for a synthetic data set representing infiltration into a partially saturated, heterogeneous soil, where hydrogeological, petrophysical, and geostatistical parameters are estimated based on the joint inversion of hydrological and geophysical data.  相似文献   
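The core update of a truncated-SVD Levenberg-Marquardt scheme can be written compactly; the linear test problem below is only an illustration of the update step, not the multiphase-flow inverse problem of the paper.

```python
import numpy as np

def tsvd_lm_step(J, r, lam, trunc_tol=1e-3):
    """One Levenberg-Marquardt update computed in the truncated-SVD subspace.

    J: Jacobian (sensitivity matrix) of residuals w.r.t. parameters (n_obs x n_par)
    r: residual vector (observed minus simulated), length n_obs
    lam: Levenberg-Marquardt damping parameter
    trunc_tol: singular values below trunc_tol * s_max span the calibration
               null space and are discarded
    """
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    k = int(np.sum(s > trunc_tol * s[0]))          # dimension of the solution space
    Uk, sk, Vk = U[:, :k], s[:k], Vt[:k].T
    # Damped update in the retained subspace: small steepest-descent-like steps
    # for low-sensitivity directions, near Gauss-Newton steps for strong ones.
    return Vk @ (sk / (sk**2 + lam) * (Uk.T @ r))

# Tiny illustration on a linear test problem with two highly correlated columns
rng = np.random.default_rng(3)
J = rng.normal(size=(30, 6))
J[:, 5] = J[:, 4] + 1e-6 * rng.normal(size=30)     # nearly redundant parameter
p_true = np.array([1.0, -2.0, 0.5, 0.0, 1.5, 1.5])
r = J @ p_true + 0.01 * rng.normal(size=30)
print("parameter update:", np.round(tsvd_lm_step(J, r, lam=0.1), 3))
```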

16.
This work proposes a robust near-optimal non-linear output feedback controller design for a broad class of non-linear systems with time-varying bounded uncertain variables. Both vanishing and non-vanishing uncertainties are considered. Under the assumptions of input-to-state stable (ISS) inverse dynamics and vanishing uncertainty, a robust dynamic output feedback controller is constructed through combination of a high-gain observer with a robust optimal state feedback controller synthesized via Lyapunov's direct method and the inverse optimal approach. The controller enforces exponential stability and robust asymptotic output tracking with arbitrary degree of attenuation of the effect of the uncertain variables on the output of the closed-loop system, for initial conditions and uncertainty in arbitrarily large compact sets, provided that the observer gain is sufficiently large. Utilizing the inverse optimal control approach and singular perturbation techniques, the controller is shown to be near-optimal in the sense that its performance can be made arbitrarily close to the optimal performance of the robust optimal state feedback controller on the infinite time-interval by selecting the observer gain to be sufficiently large. For systems with non-vanishing uncertainties, the same controller is shown to ensure boundedness of the states, uncertainty attenuation and near-optimality on a finite time-interval. The developed controller is successfully applied to a chemical reactor example.  相似文献   

17.
18.
In this study we present an application of a sensitivity analysis to identify a set of important factors that are allowed to be calibrated in the grid setup of a prognostic meteorological model for laboratory-based simulations. The use of a calibrated grid is of paramount importance for repeated procedures by leaving unnecessary information unprocessed and inevitably reducing run times. Identification and evaluation of sensitivity, importance and uncertainty elements is attempted by the design of a ‘good practice experiment’ for site-specific calibration. The factors of varied grid size and resolution, based on a one-factor-at-a-time approach, are used for the determination of local sensitivity in the area of interest. A total of five simulations was performed for a grid configuration study that aimed to calibrate The Air Pollution Model.  相似文献   
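A minimal one-factor-at-a-time loop of the kind described above, with a toy scoring function in place of The Air Pollution Model; the factor names and values are purely illustrative.

```python
import numpy as np

# One-factor-at-a-time (OAT) local sensitivity around a baseline grid setup.
# The "model" is a toy stand-in returning an error score for a grid configuration.
def run_model(grid_size_km, resolution_m):
    return 0.4 + 0.002 * resolution_m - 0.001 * grid_size_km

baseline = {"grid_size_km": 100.0, "resolution_m": 500.0}
perturb = 0.10   # vary each factor by +/-10 % while holding the other at baseline

for factor, value in baseline.items():
    lo = dict(baseline, **{factor: value * (1 - perturb)})
    hi = dict(baseline, **{factor: value * (1 + perturb)})
    s = (run_model(**hi) - run_model(**lo)) / (2 * perturb)  # output change per 100 % factor change
    print(f"{factor}: local sensitivity = {s:+.3f}")
```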

19.
Global sensitivity analysis has been widely used to detect the relative contributions of input variables to the uncertainty of model output, and then more resources can be assigned to the important input variables to reduce the uncertainty of model output more efficiently. In this paper, a new kind of global sensitivity index based on Gini’s mean difference is proposed. The proposed sensitivity index is more robust than the variance-based first order sensitivity index for the cases with non-normal distributions. Through the decomposition of Gini’s mean difference, it shows that the proposed sensitivity index can be represented by the energy distance, which measures the difference between probability distributions. Therefore, the proposed sensitivity index also takes the probability distribution of model output into consideration. In order to estimate the proposed sensitivity index efficiently, an efficient Monte Carlo simulation method is also proposed, which avoids the nested sampling procedure. The test examples show that the proposed sensitivity index is more robust than the variance-based first order sensitivity index for the cases with non-normal distributions.  相似文献   
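The paper's exact estimator is not given in the abstract; the sketch below uses a simple binning analogue of the variance-based first-order index, with Gini's mean difference replacing variance, to show the basic idea on non-normal inputs.

```python
import numpy as np

def gmd(y):
    """Gini's mean difference E|Y - Y'| via the sorted-sample formula."""
    y = np.sort(np.asarray(y, dtype=float))
    n = y.size
    coeff = 2 * np.arange(n) - n + 1
    return (coeff * y).sum() / (n * (n - 1) / 2)

def gmd_first_order(x, y, n_bins=20):
    """S_i = 1 - E_{X_i}[GMD(Y | X_i)] / GMD(Y), estimated by binning X_i.
    One plausible estimator, analogous to the variance-based first-order index."""
    bins = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(x, bins[1:-1]), 0, n_bins - 1)
    cond = np.array([gmd(y[idx == b]) for b in range(n_bins) if (idx == b).sum() > 1])
    return 1.0 - cond.mean() / gmd(y)

rng = np.random.default_rng(0)
n = 20_000
x1, x2 = rng.lognormal(0, 0.5, n), rng.lognormal(0, 0.5, n)   # non-normal inputs
y = x1 + 0.3 * x2 ** 2
print("S1 =", round(gmd_first_order(x1, y), 3), " S2 =", round(gmd_first_order(x2, y), 3))
```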

20.
The identification and representation of uncertainty is recognized as an essential component in model applications. One important approach in the identification of uncertainty is sensitivity analysis. Sensitivity analysis evaluates how the variations in the model output can be apportioned to variations in model parameters. One of the most popular sensitivity analysis techniques is Fourier amplitude sensitivity test (FAST). The main mechanism of FAST is to assign each parameter with a distinct integer frequency (characteristic frequency) through a periodic sampling function. Then, for a specific parameter, the variance contribution can be singled out of the model output by the characteristic frequency based on a Fourier transformation. One limitation of FAST is that it can only be applied for models with independent parameters. However, in many cases, the parameters are correlated with one another. In this study, we propose to extend FAST to models with correlated parameters. The extension is based on the reordering of the independent sample in the traditional FAST. We apply the improved FAST to linear, nonlinear, nonmonotonic and real application models. The results show that the sensitivity indices derived by FAST are in a good agreement with those from the correlation ratio sensitivity method, which is a nonparametric method for models with correlated parameters.  相似文献   
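The reordering idea can be illustrated with an Iman-Conover-style rank reordering that imposes a target correlation on an independent sample while preserving the marginals; the paper's actual scheme additionally has to respect the FAST search-curve structure, so this is only the generic reordering step.

```python
import numpy as np

def impose_rank_correlation(X_indep, target_corr, rng):
    """Reorder each column of an independent sample so that its rank-correlation
    structure approximates target_corr (Iman-Conover-style reordering)."""
    n, d = X_indep.shape
    # Reference sample with the desired correlation (Gaussian copula ranks)
    L = np.linalg.cholesky(target_corr)
    Z = rng.standard_normal((n, d)) @ L.T
    X_out = np.empty_like(X_indep)
    for j in range(d):
        ranks = np.argsort(np.argsort(Z[:, j]))        # ranks of the reference column
        X_out[:, j] = np.sort(X_indep[:, j])[ranks]    # same marginal, new ordering
    return X_out

rng = np.random.default_rng(0)
X = rng.uniform(size=(4096, 2))                        # stand-in for an independent FAST sample
target = np.array([[1.0, 0.7], [0.7, 1.0]])
Xc = impose_rank_correlation(X, target, rng)
print("sample correlation:\n", np.corrcoef(Xc, rowvar=False).round(2))
```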
