Similar Articles
1.
Model calibration aims to bring model predictions closer to reality. The classical Kennedy–O’Hagan approach is widely used for this purpose: it accounts for the inadequacy of the computer model while simultaneously estimating the unknown calibration parameters. In many applications, censoring occurs: the exact outcome of the physical experiment is not observed but is only known to fall within a certain region. In such cases the Kennedy–O’Hagan approach cannot be used directly, and we propose a method to incorporate the censoring information when performing model calibration. The method is applied to study the compression of liquid inside a bottle. The results show significant improvement over traditional calibration methods, especially when the number of censored observations is large. Supplementary materials for this article are available online.
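For reference, the Kennedy–O’Hagan formulation that this censoring extension builds on can be written as below; the interval-censored likelihood term is a generic construction under that model (with μ_i(θ) and s_i denoting a predictive mean and standard deviation at the i-th design point), not necessarily the paper's exact estimator.

```latex
% Kennedy-O'Hagan model: simulator \eta at calibration parameter \theta,
% discrepancy \delta, observation noise \varepsilon
y(x) = \eta(x,\theta) + \delta(x) + \varepsilon, \qquad
\varepsilon \sim \mathcal{N}(0,\sigma^2)

% A censored outcome known only to lie in [l_i, u_i] contributes a
% normal-CDF difference instead of a density (generic interval censoring):
L_i(\theta) = \Phi\!\left(\frac{u_i-\mu_i(\theta)}{s_i}\right)
            - \Phi\!\left(\frac{l_i-\mu_i(\theta)}{s_i}\right)
```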

2.
The calibration of constitutive models is based on the solution of an optimization problem, whereby the sought parameter values minimize an objective function that measures the discrepancy between experimental observations and the corresponding simulated response. By the introduction of an appropriate adjoint problem, the resulting formulation becomes well suited for a gradient-based optimization scheme. A class of viscoelastic models is studied, where a discontinuous Galerkin method is used to integrate the governing evolution equation in time. A practical solution algorithm, which utilizes the time-flow structure of the underlying evolution equation, is presented. Based on the proposed formulation it is convenient to estimate the sensitivity of the calibrated parameters with respect to measurement noise. The sensitivity is computed using a dual method, which compares favourably with the conventional primal method. The strategy is applied to a viscoelasticity model using experimental data from a uniaxial compression test. Copyright © 2006 John Wiley & Sons, Ltd.

3.
Gaussian process (GP) metamodels have been widely used as surrogates for computer simulations or physical experiments. The heart of GP modeling lies in optimizing the log-likelihood function with respect to the hyperparameters to fit the model to a set of observations. The complexity of the log-likelihood function, the computational expense, and numerical instabilities challenge this process, and these issues become more limiting as the size of the training data set and/or the problem dimensionality grows. To address them, we develop a novel approach for fitting GP models that significantly improves computational expense and prediction accuracy. Our approach leverages the smoothing effect of the nugget parameter on the log-likelihood profile to track the evolution of the optimal hyperparameter estimates as the nugget parameter is adaptively varied. The new approach is implemented in the R package GPM and compared to a popular GP modeling R package (GPfit) on a set of benchmark problems. The effectiveness of the approach is also demonstrated on an engineering problem: learning the constitutive law of a hyperelastic composite, where the level of accuracy required in estimating the response gradient necessitates a large training data set.
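A minimal sketch of the nugget-annealing idea, assuming a Gaussian correlation function and a profiled process variance; the schedule and function names are illustrative, and this is not the GPM package implementation.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(log_theta, X, y, delta):
    """Profiled negative log-likelihood of a GP with nugget delta."""
    theta = np.exp(log_theta)                         # length-scales > 0
    d2 = (((X[:, None, :] - X[None, :, :]) / theta) ** 2).sum(-1)
    R = np.exp(-0.5 * d2) + delta * np.eye(len(X))    # correlation + nugget
    L = np.linalg.cholesky(R)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    sigma2 = y @ alpha / len(y)                       # profiled process variance
    return 0.5 * (len(y) * np.log(sigma2) + 2 * np.log(np.diag(L)).sum())

def fit_gp_annealed_nugget(X, y, nuggets=(1e-1, 1e-2, 1e-4, 1e-8)):
    """Anneal the nugget from large (smooth likelihood) to small,
    warm-starting the hyperparameter search at each step."""
    log_theta = np.zeros(X.shape[1])
    for delta in nuggets:
        res = minimize(neg_log_likelihood, log_theta, args=(X, y, delta))
        log_theta = res.x                             # track the optimum
    return np.exp(log_theta)
```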

4.
Advanced Powder Technology, 2020, 31(9): 3947-3959
Real sand is usually idealized with upscaled particles because of the very large number of particles involved in tire–sand interaction. This study aims to determine a unique and complete set of DEM-FEM model parameters that preserves the numerical accuracy of tire–sand interaction after particle idealization. To this end, a novel method based on experimental design is proposed to calibrate the DEM-FEM model parameters through a series of single-factor numerical calibration tests. First, the interaction properties, such as the equivalent friction coefficients for particle–particle, particle–soil bin, and particle–tire contacts, are determined successively by comparing experimental tests with numerical simulations, using the angle of repose as the bulk response. The material parameters of the particles are then modified iteratively to match the stress–strain behavior of the granular assembly in a triaxial test. The calibrated parameter set is then used to investigate the interaction mechanisms between an off-road tire and the granular terrain. Finally, the simulation results agree qualitatively with the soil bin experiments, which verifies the effectiveness of the calibrated parameter set for tractive performance analysis of tire–sand interaction.

5.
This paper surveys issues associated with the statistical calibration of physics-based computer simulators. Even in solidly physics-based models there are usually a number of parameters that are suitable targets for calibration. Statistical calibration means refining the prior distributions of such uncertain parameters based on matching some simulation outputs with data, as opposed to the practice of “tuning” or point estimation that is commonly called calibration in non-statistical contexts. Older methods for statistical calibration are reviewed before turning to recent work in which the calibration problem is embedded in a Gaussian process model. In procedures of this type, parameter estimation is carried out simultaneously with the estimation of the relationship between the calibrated simulator and truth.

6.
In this research, a universal framework for automated calibration of the microscopic properties of modeled granular materials is proposed. The framework targets industrial-scale applications, where optimization of the computational time step is important, and it can be applied to all types of DEM simulation setups. It consists of three phases: database generation, parameter optimization, and verification. In the first phase, DEM simulations are carried out on a multi-dimensional grid of sampled input parameter values to generate a database of macroscopic material responses. The database and experimental data are then used to interpolate the objective functions with respect to an arbitrary set of parameters. In the second phase, the Non-dominated Sorting Genetic Algorithm II (NSGA-II) is used to solve the multi-objective calibration problem. In the third phase, DEM simulations using the calibrated input parameters are carried out to compute the macroscopic responses, which are compared with experimental measurements for verification and validation. The proposed calibration framework is demonstrated by a case study with two objectives: model accuracy and simulation time. Based on the concept of Pareto dominance, the trade-off between these two conflicting objectives becomes apparent. Through the verification and validation steps, the approach proves successful for accurate calibration of material parameters at the optimal simulation time.
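To make the Pareto-dominance concept in the case study concrete, here is a minimal non-dominated filter over a hypothetical table of (calibration error, simulation time) objective values; the NSGA-II search itself would be delegated to an optimization library.

```python
import numpy as np

def pareto_front(F):
    """Return indices of non-dominated rows of F (all objectives minimized)."""
    keep = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        # j dominates i if j is <= on every objective and < on at least one
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated.any():
            keep[i] = False
    return np.where(keep)[0]

# Hypothetical database rows: column 0 = model error, column 1 = CPU time [s]
F = np.array([[0.10, 50.0], [0.05, 120.0], [0.12, 40.0], [0.05, 200.0]])
print(pareto_front(F))   # -> [0 1 2]; (0.05, 200) is dominated by (0.05, 120)
```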

7.
Corncob is one of the main components of corn ears. Because the mechanical properties of different parts of the corncob differ greatly, and the simulation and calibration of corncob parameters have so far received little study, established corn ear and corncob simulation models have low accuracy and poor reliability. In this study, simulation tests for calibrating corncob parameters are carried out based on the DEM. First, a modelling method is proposed to establish sample models of corncob. Then, the DEM simulation parameters (the restitution, static friction, and rolling friction coefficients for both particle–particle and particle–geometry contacts, and the Poisson's ratio of the particles) are determined by Plackett-Burman and Box-Behnken tests. Next, a simulated bending test of the corncob is carried out using the calibrated parameters. Finally, comparison of the physical and simulated bending tests shows failure forces of 204.52 N and 197.3 N, respectively, a relative error of 3.53%. This study verifies the reliability of parameter calibration for the discrete element model of corncob and provides a new method for establishing simulation models of corn ears and other materials.
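The quoted relative error is consistent with taking the physical test value as the baseline:

```latex
\frac{\lvert 204.52 - 197.3 \rvert}{204.52} = \frac{7.22}{204.52} \approx 3.53\%
```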

8.
This paper presents a simplified calibration procedure for the microscopic Weibull stress model to estimate the cumulative probability of cleavage fracture in ferritic steels. The proposed method requires two discrete values of the macroscopic Weibull scale parameter (K0), in contrast to the two sets of statistically significant fracture toughness data mandated by previous calibration schemes. The approach is predicated on the fundamental assumption that the macroscopic toughness of specimens dominated by cleavage mechanisms follows the three-parameter Weibull model outlined in the testing standards. The calibration procedure thus generates two sets of fictitious toughness data corresponding to two sets of specimens with marked differences in crack-front constraint. The calibrated Weibull parameters agree closely with calibration results based on the conventional approach for the Euro steels. The proposed calibration also leads to an improved method for determining a limiting load level beyond which extensive plastic deformation propagates in the specimen.
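The three-parameter Weibull model for cleavage toughness referred to in the testing standards (e.g., ASTM E1921) is conventionally written with a fixed shape exponent of 4, where P_f is the cumulative probability of cleavage fracture at toughness K_Jc and K_0 is the scale parameter being calibrated:

```latex
P_f = 1 - \exp\!\left[-\left(\frac{K_{Jc}-K_{\min}}{K_0-K_{\min}}\right)^{4}\right],
\qquad K_{\min} \approx 20\ \mathrm{MPa\sqrt{m}}
```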

9.
Hydrologic models are composed of several components, all of which are parameter dependent. In the general setting, parameter values are selected based on regionalization of observed rainfall-runoff events, or by calibration against local stream gauge data when available. A parameter set selected from these data is then used in the hydrologic model. However, hydrologic model outputs are seldom examined for the total variation in output due to the independent but coupled variations in the input parameter values. In this paper, three of the more common techniques for evaluating model output distributions are compared on a selected hydrologic model: an exhaustion technique, the Monte Carlo simulation method, and the more recently advanced Rosenblueth technique. It is concluded that, for the hydrologic model considered, the Monte Carlo technique is more accurate than the Rosenblueth technique for the same computational effort, but less accurate than exhaustion.
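A toy contrast of two of the three compared techniques, with a hypothetical two-parameter model standing in for the hydrologic model; for uncorrelated, symmetric inputs the Rosenblueth point-estimate method evaluates the model at the 2^n corners mu_i +/- sigma_i with equal weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(k, s):                       # toy runoff model: placeholder only
    return k * np.sqrt(s)

mu, sigma = np.array([2.0, 9.0]), np.array([0.3, 1.5])

# Monte Carlo: sample the inputs and inspect the full output distribution
samples = rng.normal(mu, sigma, size=(10_000, 2))
q_mc = f(samples[:, 0], samples[:, 1])
print("MC mean/std:         ", q_mc.mean(), q_mc.std())

# Rosenblueth: evaluate f at the 2^n corners mu +/- sigma, equal weights
corners = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])
q_pe = np.array([f(*(mu + c * sigma)) for c in corners])
print("Rosenblueth mean/std:", q_pe.mean(), q_pe.std())
```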

10.
In this paper, we investigate a joint modeling method for hard failures where both degradation signals and time-to-event data are available. A mixed-effects model is used for the degradation signals, and an extended hazard model is used for the time-to-event data. The extended hazard model is general and includes two well-known hazard rate models, the Cox proportional hazards model and the accelerated failure time model, as special cases. A two-stage estimation approach is used to obtain the model parameters, based on which the remaining useful life of an in-service unit can be predicted. The performance of the method is demonstrated through both simulation studies and a real case study.
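One common parameterization of the extended hazard family (the exact form used in the paper may differ) makes the nesting explicit:

```latex
\lambda(t \mid x) = \lambda_0\!\left(t\,e^{x^{\top}\alpha}\right) e^{x^{\top}\beta}
% \alpha = 0      -> Cox proportional hazards model
% \alpha = \beta  -> accelerated failure time (AFT) model
```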

11.
Several studies have investigated the relationship between field-measured conflicts and the conflicts obtained from micro-simulation models using the Surrogate Safety Assessment Model (SSAM). Recent results have shown that while reasonable correlation between simulated and real traffic conflicts can be obtained, especially after proper calibration, more work is still needed to confirm that simulated conflicts provide safety measures beyond what can be expected from exposure. The results have also emphasized that using micro-simulation models to evaluate safety without proper model calibration should be avoided. The calibration process adjusts relevant simulation parameters to maximize the correlation between field-measured and simulated conflicts. The main objective of this study is to investigate the transferability of calibrated parameters of the traffic simulation model (VISSIM) for safety analysis between different sites: whether the calibrated parameters, when applied to other sites, give reasonable correlation between field-measured and simulated conflicts. Eighty-three hours of video data from two signalized intersections in Surrey, BC were used. Automated video-based computer vision techniques were used to extract vehicle trajectories and identify field-measured rear-end conflicts. Calibrated VISSIM parameters obtained from the first intersection, which maximized the correlation between simulated and field-observed conflicts, were used to estimate traffic conflicts at the second intersection, and the results were compared to parameters optimized specifically for the second intersection. The results show that the VISSIM parameters are generally transferable between the two locations, as the transferred parameters provided better correlation between simulated and field-measured conflicts than the default VISSIM parameters. Of the six VISSIM parameters identified as important for the safety analysis, two were directly transferable, three were transferable to some degree, and one was not transferable.

12.
13.
New model fusion techniques based on spatial-random-process modeling are developed in this work for combining multi-fidelity data from simulations and experiments. Existing works in multi-fidelity modeling generally assume a hierarchical structure in which the levels of fidelity of the simulation models can be clearly ranked. In contrast, we consider the nonhierarchical situation in which one wishes to incorporate multiple models whose levels of fidelity are unknown or cannot be differentiated (e.g., if the fidelity of the models changes over the input domain). We propose three new nonhierarchical multi-model fusion approaches with different assumptions or structures regarding the relationships between the simulation models and physical observations. One approach models the true response as a weighted sum of the multiple simulation models and a single discrepancy function. The other two approaches model the true response as the sum of one simulation model and a corresponding discrepancy function, and differ in their assumptions regarding the statistical behavior of the discrepancy functions, such as independence with the true response or a common spatial correlation function. The proposed approaches are compared via numerical examples and a real engineering application. Furthermore, the effectiveness and relative merits of the different approaches are discussed. Copyright © 2015 John Wiley & Sons, Ltd.
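The first of the three fusion structures can be sketched, in assumed notation, as a weighted sum of the m simulators plus a single spatial-random-process discrepancy and measurement error:

```latex
y(x) = \sum_{i=1}^{m} w_i\,\eta_i(x) + \delta(x) + \varepsilon
% \eta_i: simulation models, w_i: fusion weights,
% \delta: discrepancy process, \varepsilon: measurement error
```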

14.
In this study, to evaluate the chemical and mechanical properties of polypropylene (PP), activation-energy and tensile tests were performed at room temperature (25°C) on pure PP and PP reinforced with glass fibre (GF). To improve the accuracy of fatigue life prediction, three models based on calibration of the Zhurkov model were proposed: a regression model, a modified strain-rate model, and a lethargy coefficient-based model. Based on the experimental data analysis and statistical assessment, we recommend the modified strain-rate model, which satisfies the dependency of the physical parameters and is congruent with the predicted fatigue life data. The experimental data and the modified strain-rate model were compared with direct cyclic analysis results. The trend of the frequency factor, the correction parameter in the modified strain-rate model, was consistent with the experimental activation energy and the increasing GF content.
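For context, the Zhurkov kinetic rate equation that the three calibrated models build on gives the time to failure under stress σ at absolute temperature T:

```latex
\tau = \tau_0 \exp\!\left(\frac{U_0 - \gamma\sigma}{kT}\right)
% \tau_0: period of atomic oscillation, U_0: activation energy,
% \gamma: lethargy coefficient, k: Boltzmann constant
```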

15.
This paper presents a new hybrid approach for multiaxial fatigue life estimation based on continuum damage mechanics theory and a genetic algorithm with a critical plane model formulation. The hybrid model employs a genetic algorithm-based setup calibrated with standard proportional and non-proportional profiles to predict fatigue life for complex loading profiles. The model is evaluated using experimental fatigue life data for SS304 steel. Calibration using simplified profiles is in line with the requirement for cost-effective experimental fatigue life testing: in-phase and out-of-phase loads are used for calibration, and fatigue life is predicted for more complicated profiles. The results show good agreement between estimated and experimental fatigue life, and calibration on simple loading histories to predict fatigue life for complex histories appears to be an effective strategy with the proposed model. A brief comparison of the fatigue life estimation performance of the proposed model against models available in commercial codes is also presented; the proposed model was found to be more consistent in fatigue life prediction across the various loading conditions.

16.
Poisson integer-valued autoregressive (INAR) models have been proposed for modeling correlated count data, and Poisson lognormal (PLN) INAR models extend their use to overdispersed contexts. In this paper, we propose the use of a repeated Sequential Probability Ratio Test (SPRT) procedure to detect change in first-order INAR and PLN INAR models. We consider changes in the mean, the autocorrelation parameter, and the overdispersion parameter. Simulation results show that the repeated SPRT procedure performs favorably relative to previously proposed CUSUM procedures based either on the observations themselves or on residuals of the observations from predicted values. A dataset on invasive insect species is used to illustrate the repeated SPRT procedure.
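A minimal sketch of a repeated SPRT for an upward shift in the mean of a count series, using plain Poisson likelihood-ratio increments as a stand-in; for the INAR(1) and PLN INAR models of the paper, lr() would be replaced by the corresponding conditional likelihood ratio, and the data below are hypothetical.

```python
import math

def sprt_monitor(counts, lam0=4.0, lam1=6.0, alpha=0.01, beta=0.10):
    """Repeated SPRT: signal when the cumulative log-likelihood ratio
    crosses b; restart from zero whenever it drops below a."""
    a = math.log(beta / (1 - alpha))      # accept-H0 boundary (< 0)
    b = math.log((1 - beta) / alpha)      # change-signal boundary (> 0)
    def lr(x):                            # Poisson log-likelihood ratio
        return x * math.log(lam1 / lam0) - (lam1 - lam0)
    s = 0.0
    for t, x in enumerate(counts, start=1):
        s += lr(x)
        if s >= b:
            return t                      # change signalled at time t
        if s <= a:
            s = 0.0                       # accept H0 locally, restart test
    return None

counts = [4, 3, 5, 4, 4, 8, 9, 7, 10, 8]  # hypothetical, shift after t = 5
print(sprt_monitor(counts))                # -> 10
```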

17.
Computer model calibration is the process of determining input parameter settings to a computational model that are consistent with physical observations. This is often quite challenging due to the computational demands of running the model. In this article, we use the ensemble Kalman filter (EnKF) for computer model calibration. The EnKF has proven effective in quantifying uncertainty in data assimilation problems such as weather forecasting and ocean modeling. We find that the EnKF can be directly adapted to Bayesian computer model calibration. It is motivated by the mean and covariance relationship between the model inputs and outputs, producing an approximate posterior ensemble of the calibration parameters. While this approach may not fully capture effects due to nonlinearities in the computer model response, its computational efficiency makes it a viable choice for exploratory analyses, design problems, or problems with large numbers of model runs, inputs, and outputs.
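A compact sketch of the perturbed-observation ensemble Kalman update applied to calibration parameters, with a toy two-output function standing in for the computer model; all names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def h(theta):                              # toy computer model: placeholder
    return np.array([theta[0] + theta[1], theta[0] * theta[1]])

y_obs = np.array([3.0, 2.0])               # physical observations
R = 0.05 * np.eye(2)                       # observation-error covariance

m = 200
Theta = rng.normal([1.0, 1.0], 0.5, size=(m, 2))   # prior parameter ensemble
Y = np.array([h(t) for t in Theta])                 # ensemble of model outputs

C_ty = np.cov(Theta.T, Y.T)[:2, 2:]        # cross-covariance C_{theta,y}
C_yy = np.cov(Y.T)                         # output covariance C_{yy}
K = C_ty @ np.linalg.inv(C_yy + R)         # Kalman gain

# Perturbed-observation update: theta_a = theta + K (y + eps - h(theta))
eps = rng.multivariate_normal(np.zeros(2), R, size=m)
Theta_post = Theta + (y_obs + eps - Y) @ K.T

print("approximate posterior mean:", Theta_post.mean(axis=0))
```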

18.
Fatigue crack growth models applicable to the Mirage IIIO aircraft have been calibrated with test data. Of the four crack growth retardation models examined (Wheeler, Willenborg, modified Willenborg, and crack closure), the Wheeler and modified Willenborg models are the most satisfactory, but both require calibration by test. Even so, crack growth is not accurately predicted when the specimen geometry and the test sequence are varied from those used in calibrating the models. Apart from the crack growth models, the main sources of inaccuracy in predicting crack growth are inadequate basic growth-rate data, incorrect assumptions about crack shape, and uncertainty in the stress history. Thus, crack growth life cannot be confidently predicted to better than a factor of two on actual life and, in some cases, the factor may be as high as ten.

19.
To improve the absolute positioning accuracy of serial robots, a geometric parameter calibration method based on the zero-reference model (ZRM) is proposed. A zero-reference model of the robot, comprising direction vectors and connection vectors, is established. Exploiting the structure of this model, an improved genetic algorithm (IGA) is used to optimize the zero-position direction components and the position direction components; the method for computing the objective function values in IGA-based geometric parameter calibration and the concrete steps for solving the geometric parameter errors are given. Simulated calibration and physical measurements on an ER10L-C10 industrial robot with different sets of measurement points show that the IGA method can rapidly calibrate the geometric parameters of the robot's ZRM. With about 50 calibration points, the calibrated robot generalizes its accuracy improvement well to the test points, and physical calibration over the entire workspace of the ER10L-C10 robot improved the absolute positioning accuracy of the end-effector by about 90%. The method is well suited for wide application to serial robots with high positioning accuracy requirements.

20.
The problem of simultaneous, accurate measurement of two dynamic quantities, the time dependencies of flow velocity and of ultrasound velocity in the flow, is analyzed. To measure the two dynamic quantities simultaneously, a theory of the transit-time method has been developed and a theoretical model of a microprocessor-based measuring system has been derived. Ways to improve the accuracy and information content of such dual-channel measurement systems are examined. It is shown that invariance between the two channels of a measurement system can be achieved when dynamic, nonlinear, parametric models of these channels are identified in real time during the measurement process, and when multipulse irradiation of the flow is used. Results of computer simulation of the dynamic errors of the transit-time method are presented, and a method for minimizing these errors is proposed.
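The standard transit-time relations show why both quantities can be recovered from one pair of measurements: with path length L, beam angle θ, and upstream/downstream transit times t_u, t_d,

```latex
t_u = \frac{L}{c - v\cos\theta}, \qquad t_d = \frac{L}{c + v\cos\theta}
\;\Longrightarrow\;
c = \frac{L}{2}\left(\frac{1}{t_d} + \frac{1}{t_u}\right), \qquad
v = \frac{L}{2\cos\theta}\left(\frac{1}{t_d} - \frac{1}{t_u}\right)
```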
