Similar literature
20 similar documents found (search time: 593 ms).
1.
A building block approach to model validation may proceed through various levels, such as material to component to subsystem to system, comparing model predictions with experimental observations at each level. Usually, experimental data become scarce as one proceeds from lower to higher levels. This paper presents a structural equation modeling approach to make use of the lower-level data for higher-level model validation under uncertainty, integrating several components: lower-level data, higher-level data, the computational model, and latent variables. The proposed method uses latent variables to model two sets of relationships, namely, the computational model to system-level data, and lower-level data to system-level data. A Bayesian network with Markov chain Monte Carlo simulation is applied to represent the two relationships and to estimate the influencing factors between them. Bayesian hypothesis testing is employed to quantify the confidence in the predictive model at the system level, and the role of lower-level data in that assessment. The proposed methodology is implemented for hierarchical assessment of three validation problems, using discrete observations and time-series data.
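As a minimal illustration of the Bayes-factor confidence metric described above (a sketch in Python, not the paper's implementation), consider comparing a scalar model prediction against a few system-level observations; the data, noise level, and vague prior on the alternative are all invented for the example:

```python
import numpy as np
from scipy import stats

# Sketch: evidence for "data follow the model prediction" (H0)
# versus a vague alternative (H1). All numbers are assumptions.
rng = np.random.default_rng(0)
y_obs = np.array([10.2, 9.8, 10.5])   # hypothetical system-level data
mu_model, sigma = 10.0, 0.5           # model prediction and assumed noise

# Marginal likelihood under H0 (mean fixed at the model prediction):
m0 = stats.norm.pdf(y_obs, mu_model, sigma).prod()
# Under H1, integrate the likelihood over a vague prior on the mean:
mu = rng.normal(mu_model, 2.0, 100_000)              # assumed prior
m1 = stats.norm.pdf(y_obs[None, :], mu[:, None], sigma).prod(axis=1).mean()

B = m0 / m1                                          # Bayes factor B01
print(f"B01 = {B:.2f}, P(H0|data) = {B / (1 + B):.2f}")  # equal prior odds
```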

2.
Computational methods for model reliability assessment (Cited by: 1; self-citations: 0; others: 1)
This paper investigates various statistical approaches for the validation of computational models when both model prediction and experimental observation have uncertainties, and proposes two new methods for this purpose. The first method utilizes hypothesis testing to accept or reject a model at a desired significance level. Interval-based hypothesis testing is found to be more practically useful for model validation than the commonly used point null hypothesis testing. Both classical and Bayesian approaches are investigated. The second and more direct method formulates model validation as a limit state-based reliability estimation problem. Both simulation-based and analytical methods are presented to compute the model reliability for single or multiple comparisons of the model output and observed data. The proposed methods are illustrated and compared using numerical examples.
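A sketch of the second, reliability-based metric, under assumed distributions for both prediction and observation and an illustrative tolerance eps:

```python
import numpy as np

# Sketch: the model is "reliable" when prediction and observation
# differ by less than a tolerance. All distributions and the
# tolerance eps are illustrative assumptions.
rng = np.random.default_rng(1)
n = 200_000
y_pred = rng.normal(10.0, 0.4, n)   # uncertain model prediction
y_obs = rng.normal(10.3, 0.5, n)    # uncertain experimental observation
eps = 0.8                           # acceptable difference (problem-specific)

# Limit state g = eps - |y_pred - y_obs|; model reliability r = P(g > 0).
r = np.mean(np.abs(y_pred - y_obs) < eps)
print(f"Estimated model reliability r = {r:.3f}")
```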

3.
Validation of reliability computational models using Bayes networks (Cited by: 9; self-citations: 2; others: 9)
This paper proposes a methodology based on Bayesian statistics to assess the validity of reliability computational models when full-scale testing is not possible. Sub-module validation results are used to derive a validation measure for the overall reliability estimate. Bayes networks are used for the propagation and updating of validation information from the sub-modules to the overall model prediction. The methodology includes uncertainty in the experimental measurement, and the posterior and prior distributions of the model output are used to compute a validation metric based on Bayesian hypothesis testing. Validation of a reliability prediction model for an engine blade under high-cycle fatigue is illustrated using the proposed methodology.
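The propagation idea can be sketched with a toy two-sub-module Bayes network solved by exact enumeration; every probability below is an invented assumption, not a value from the paper:

```python
# Toy discrete Bayes network, a sketch of propagating sub-module
# validation results to the full-model prediction.
# Nodes: V1, V2 = "sub-module i is valid"; S = "system model is valid".
p_v = 0.5                                   # prior P(Vi = valid)
p_s_given = {(1, 1): 0.95, (1, 0): 0.40,    # P(S valid | V1, V2)
             (0, 1): 0.40, (0, 0): 0.05}
# Sub-module tests: likelihood of the observed test data given validity.
lik_v1 = {1: 0.8, 0: 0.3}   # P(test1 data | V1)
lik_v2 = {1: 0.7, 0: 0.4}   # P(test2 data | V2)

# Joint enumeration (exact inference on this tiny network).
num = den = 0.0
for v1 in (0, 1):
    for v2 in (0, 1):
        w = (p_v if v1 else 1 - p_v) * (p_v if v2 else 1 - p_v) \
            * lik_v1[v1] * lik_v2[v2]
        den += w
        num += w * p_s_given[(v1, v2)]
print(f"P(system model valid | sub-module tests) = {num / den:.3f}")
```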

4.
Bayesian risk-based decision method for model validation under uncertainty (Cited by: 2; self-citations: 0; others: 2)
This paper develops a decision-making methodology for computational model validation, considering the risk of using the current model, data support for the current model, and the cost of acquiring new information to improve the model. A Bayesian decision theory-based method is developed for this purpose, using a likelihood ratio as the validation metric for model assessment. An expected risk or cost function is defined as a function of the decision costs and the likelihood and prior of each hypothesis. The risk is minimized by correctly assigning experimental data to two decision regions based on comparison of the likelihood ratio with a decision threshold. A Bayesian validation metric is derived based on the risk minimization criterion. Two types of validation tests are considered: pass/fail tests and system response value measurement tests. The methodology is illustrated for the validation of reliability prediction models for a tension bar and an engine blade subjected to high-cycle fatigue. The proposed method can effectively integrate optimal experimental design into model validation to simultaneously reduce the cost and improve the accuracy of reliability model assessment.
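A minimal sketch of the risk-based accept/reject rule, with invented costs, priors, and hypothesis distributions; the threshold follows from minimizing the expected Bayes risk:

```python
from scipy import stats

# Sketch: compare the likelihood ratio of "model valid" (H0) vs
# "model invalid" (H1) to a threshold set by the decision costs.
# Costs, priors, and distributions are illustrative.
y = 10.6                                      # a measured system response
L0 = stats.norm.pdf(y, loc=10.0, scale=0.5)   # likelihood under H0
L1 = stats.norm.pdf(y, loc=11.0, scale=0.5)   # likelihood under H1 (assumed)

c_fa, c_miss = 1.0, 5.0     # cost of false rejection / false acceptance
p0, p1 = 0.5, 0.5           # prior probabilities of each hypothesis
threshold = (c_miss * p1) / (c_fa * p0)   # minimizes expected Bayes risk

ratio = L0 / L1
print("accept model" if ratio > threshold else "reject model",
      f"(LR = {ratio:.2f}, threshold = {threshold:.2f})")
```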

5.
Validation of models with multivariate output (Cited by: 2; self-citations: 0; others: 2)
This paper develops metrics for validating computational models with experimental data, considering uncertainties in both. A computational model may generate multiple response quantities, and the validation experiment might yield corresponding measured values. Alternatively, a single response quantity may be predicted and observed at different spatial and temporal points. Model validation in such cases involves comparison of multiple correlated quantities, and multiple univariate comparisons may give conflicting inferences. Therefore, aggregate validation metrics are developed in this paper. Both classical and Bayesian hypothesis testing are investigated for this purpose, using multivariate analysis. Since commonly used statistical significance tests are based on normality assumptions, appropriate transformations are investigated in the case of non-normal data. The methodology is implemented to validate an empirical model for energy dissipation in lap joints under dynamic loading.
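An aggregate comparison of correlated outputs can be sketched with a Mahalanobis-distance check, one of the standard multivariate statistics such metrics build on; the response vectors and covariance below are illustrative:

```python
import numpy as np
from scipy import stats

# Sketch: compare a vector of observed responses to the model
# prediction with a single Mahalanobis distance, instead of
# several possibly conflicting univariate tests.
y_model = np.array([10.0, 4.0, 2.5])          # predicted response vector
y_obs = np.array([10.4, 3.7, 2.6])            # measured response vector
cov = np.array([[0.25, 0.05, 0.00],           # assumed covariance of the
                [0.05, 0.16, 0.02],           # combined prediction and
                [0.00, 0.02, 0.09]])          # measurement uncertainty

d = y_obs - y_model
d2 = d @ np.linalg.solve(cov, d)              # squared Mahalanobis distance
p_value = stats.chi2.sf(d2, df=len(d))        # chi-square if cov is known
print(f"d^2 = {d2:.2f}, p-value = {p_value:.3f}")
```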

6.
This paper presents a methodology for uncertainty quantification and model validation in fatigue crack growth analysis. Several models – finite element model, crack growth model, surrogate model, etc. – are connected through a Bayes network that aids in model calibration, uncertainty quantification, and model validation. Three types of uncertainty are included in both uncertainty quantification and model validation: (1) natural variability in loading and material properties; (2) data uncertainty due to measurement errors, sparse data, and different inspection results (crack not detected, crack detected but size not measured, and crack detected with size measurement); and (3) modeling uncertainty and errors during crack growth analysis, numerical approximations, and finite element discretization. Global sensitivity analysis is used to quantify the contribution of each source of uncertainty to the overall prediction uncertainty and to identify the important parameters that need to be calibrated. Bayesian hypothesis testing is used for model validation and the Bayes factor metric is used to quantify the confidence in the model prediction. The proposed methodology is illustrated using a numerical example of surface cracking in a cylindrical component.
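The global sensitivity step can be sketched with a simple binning estimator of first-order Sobol indices; an algebraic function stands in for the crack growth model, and all input distributions are assumptions:

```python
import numpy as np

# Sketch of variance-based global sensitivity analysis: estimate
# first-order Sobol indices S_i = Var(E[Y|X_i]) / Var(Y) by binning
# Monte Carlo samples on equal-count quantile bins.
rng = np.random.default_rng(2)
n, n_bins = 200_000, 50
x1 = rng.normal(1.0, 0.2, n)     # e.g. load variability (assumed)
x2 = rng.normal(0.5, 0.1, n)     # e.g. material property (assumed)
x3 = rng.normal(0.0, 0.3, n)     # e.g. measurement error (assumed)
y = x1**2 + 2.0 * x2 + 0.5 * x3  # stand-in for the crack growth model

var_y = y.var()
for name, x in [("x1", x1), ("x2", x2), ("x3", x3)]:
    bins = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(x, bins) - 1, 0, n_bins - 1)
    cond_mean = np.array([y[idx == b].mean() for b in range(n_bins)])
    s_i = cond_mean.var() / var_y          # first-order Sobol index
    print(f"S_{name} = {s_i:.2f}")
```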

7.
This paper develops a methodology to assess the validity of computational models when some quantities may be affected by epistemic uncertainty. Three types of epistemic uncertainty regarding input random variables – interval data, sparse point data, and probability distributions with parameter uncertainty – are considered. When the model inputs are described using sparse point data and/or interval data, a likelihood-based methodology is used to represent these variables as probability distributions. Two approaches – a parametric approach and a non-parametric approach – are pursued for this purpose. While the parametric approach leads to a family of distributions due to distribution parameter uncertainty, the principles of conditional probability and total probability can be used to integrate the family of distributions into a single distribution. The non-parametric approach directly yields a single probability distribution. The probabilistic model predictions are compared against experimental observations, which may again be point data or interval data. A generalized likelihood function is constructed for Bayesian updating, and the posterior distribution of the model output is estimated. The Bayes factor metric is extended to assess the validity of the model under both aleatory and epistemic uncertainty and to estimate the confidence in the model prediction. The proposed method is illustrated using a numerical example.
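The likelihood-based treatment of sparse point and interval data can be sketched as follows: a point datum contributes a density value, while an interval contributes a CDF difference; the normal model and all data are illustrative:

```python
import numpy as np
from scipy import stats, optimize

# Sketch: maximum-likelihood fit of a normal model to mixed data.
# A point x contributes f(x); an interval [a, b] contributes F(b) - F(a).
points = np.array([9.6, 10.3])               # sparse point observations
intervals = [(9.0, 10.0), (10.0, 11.5)]      # expert-provided intervals

def neg_log_lik(theta):
    mu, sigma = theta[0], np.exp(theta[1])   # log-sigma keeps sigma > 0
    ll = stats.norm.logpdf(points, mu, sigma).sum()
    for a, b in intervals:
        ll += np.log(stats.norm.cdf(b, mu, sigma)
                     - stats.norm.cdf(a, mu, sigma))
    return -ll

res = optimize.minimize(neg_log_lik, x0=[10.0, 0.0])
print(f"MLE: mu = {res.x[0]:.2f}, sigma = {np.exp(res.x[1]):.2f}")
```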

8.
Bayesian uncertainty analysis with applications to turbulence modeling (Cited by: 2; self-citations: 0; others: 2)
In this paper, we apply Bayesian uncertainty quantification techniques to the processes of calibrating complex mathematical models and predicting quantities of interest (QoIs) with such models. These techniques also enable the systematic comparison of competing model classes. The processes of calibration and comparison constitute the building blocks of a larger validation process, the goal of which is to accept or reject a given mathematical model for the prediction of a particular QoI for a particular scenario. In this work, we take the first step in this process by applying the methodology to the analysis of the Spalart-Allmaras turbulence model in the context of incompressible boundary layer flows. Three competing model classes based on the Spalart-Allmaras model are formulated, calibrated against experimental data, and used to issue a prediction with quantified uncertainty. The model classes are compared in terms of their posterior probabilities and their predictions of QoIs. The model posterior probability represents the relative plausibility of a model class given the data; thus, it incorporates the model's ability to fit experimental observations. Alternatively, comparing models using the predicted QoI connects the process to the needs of the decision makers who use the results of the model. We show that by using both the model plausibility and the predicted QoI, one has the opportunity to reject some model classes after calibration, before subjecting the remaining classes to additional validation challenges.
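Model-class comparison by posterior probability can be sketched with Monte Carlo estimates of each class's evidence; the toy linear/quadratic classes and synthetic data below merely stand in for the calibrated turbulence-model classes:

```python
import numpy as np
from scipy import stats

# Sketch: the posterior plausibility of each model class is
# proportional to its evidence (marginal likelihood), estimated
# here by simple Monte Carlo over an assumed vague prior.
rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 8)
y = 1.0 + 2.1 * x + rng.normal(0, 0.1, x.size)   # synthetic observations

def evidence(model, n_prior=200_000):
    theta = rng.normal(0.0, 3.0, (n_prior, 2))   # assumed vague prior
    pred = model(x[None, :], theta)
    lik = stats.norm.pdf(y[None, :], pred, 0.1).prod(axis=1)
    return lik.mean()

m_lin = evidence(lambda x, t: t[:, :1] + t[:, 1:] * x)       # class M1
m_qud = evidence(lambda x, t: t[:, :1] + t[:, 1:] * x**2)    # class M2
post = np.array([m_lin, m_qud]) / (m_lin + m_qud)  # equal prior model odds
print(f"P(M1|data) = {post[0]:.3f}, P(M2|data) = {post[1]:.3f}")
```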

9.
Two problems of great interest in software reliability are the prediction of future times to failure and the calculation of the optimal release time. An important assumption in software reliability analysis is that reliability grows whenever bugs are found and removed. In this paper we present a model for software reliability analysis using the Bayesian statistical approach, in order to incorporate into the analysis prior assumptions such as the (decreasing) ordering of the assumed constant failure rates of prescribed intervals. As the prior model we use a product of gamma distributions, one for each pair of subsequent intervals' constant failure rates, taking the failure rate of the following interval as the location parameter of the gamma prior for the preceding interval; in this way the failure-rate ordering information is included. Using this approach sequentially, we predict the time to the next failure from the information obtained so far. Using the relevant predictive distributions, we also calculate the optimal release time for two different requirements of interest: (a) the probability of an in-service failure in a prescribed time t; (b) the cost associated with one or more failures in a prescribed time t. Finally a numerical example is presented. Copyright © 2000 John Wiley & Sons, Ltd.
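A much-simplified conjugate sketch of the sequential idea (the paper's coupled product-of-gammas prior is richer): each interval's rate gets a gamma prior centred on the previous estimate, and the predictive distribution gives the probability of an in-service failure within a mission time; all numbers are illustrative:

```python
# Sketch of sequential Bayesian reliability growth: per-interval
# failure rate with a Gamma prior tied to the previous interval,
# updated with the newly observed inter-failure time.
times = [12.0, 30.0, 55.0, 140.0]   # observed inter-failure times
a0 = 2.0                            # prior shape (strength of coupling)
lam = 1.0 / times[0]                # initial rate estimate
t_mission = 100.0                   # prescribed in-service time t

for i, ti in enumerate(times, 1):
    a, b = a0, a0 / lam             # prior centred on previous estimate
    a, b = a + 1.0, b + ti          # conjugate update with this interval
    lam = a / b                     # posterior mean rate for this interval
    p_fail = 1.0 - (b / (b + t_mission)) ** a   # predictive (Lomax) CDF
    print(f"interval {i}: rate = {lam:.4f}/h, "
          f"P(failure within {t_mission:.0f} h) = {p_fail:.2f}")
```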

10.
Bayesian networks for multilevel system reliability (Cited by: 1; self-citations: 0; others: 1)
Bayesian networks have recently found many applications in systems reliability; however, the focus has been on binary outcomes. In this paper we extend their use to multilevel discrete data and discuss how to make joint inference about all of the nodes in the network. These methods are applicable when system structures are too complex to be represented by fault trees. The methods are illustrated through four examples that are structured to clarify the scope of the problem.

11.
This paper compares Evidence Theory (ET) and Bayesian Theory (BT) for uncertainty modeling and decision under uncertainty when the evidence about uncertainty is imprecise. The basic concepts of ET and BT are introduced, and the ways these theories model uncertainties, propagate them through systems, and assess the safety of these systems are presented. ET and BT approaches are demonstrated and compared on challenge problems involving an algebraic function whose input variables are uncertain. The evidence about the input variables consists of intervals provided by experts. It is recommended that a decision-maker compute both the Bayesian probabilities of the outcomes of alternative actions and their plausibility and belief measures when evidence about uncertainty is imprecise, because this helps assess the importance of imprecision and the value of additional information. Finally, the paper presents and demonstrates a method for testing approaches for decision under uncertainty in terms of their effectiveness in making decisions.
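The contrast between the two treatments of imprecise interval evidence can be sketched for a requirement x <= 1.5; the intervals and basic belief assignments are invented expert inputs, and a uniform distribution over each interval is the assumed Bayesian counterpart:

```python
# Sketch: Dempster-Shafer belief and plausibility bound the
# probability that x meets the requirement, while a Bayesian
# analysis with uniform distributions gives a single value.
intervals = [(0.8, 1.2), (1.0, 1.8), (1.4, 2.0)]   # expert intervals for x
masses = [0.5, 0.3, 0.2]                           # basic belief assignments
limit = 1.5                                        # requirement: x <= limit

belief = sum(m for (a, b), m in zip(intervals, masses) if b <= limit)
plaus = sum(m for (a, b), m in zip(intervals, masses) if a <= limit)

# Bayesian counterpart: uniform distribution over each interval.
p_bayes = sum(m * min(max((limit - a) / (b - a), 0.0), 1.0)
              for (a, b), m in zip(intervals, masses))
print(f"Bel = {belief:.2f} <= P = {p_bayes:.2f} <= Pl = {plaus:.2f}")
```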

12.
Elías Moreno, TEST, 2005, 14(1): 181-198
The one-sided testing problem can be naturally formulated as the comparison between two nonnested models. In an objective Bayesian setting, that is, when subjective prior information is not available, no general method exists either for deriving proper prior distributions on parameters or for computing Bayes factors and model posterior probabilities. The encompassing approach solves this difficulty by converting the problem into a nested model comparison for which standard methods can be applied to derive proper priors. We argue that the usual way of encompassing does not have a Bayesian justification, and propose a variant of this method that provides an objective Bayesian solution. The solution proposed here is further extended to the case where nuisance parameters are present and where the hypotheses to be tested are separated by an interval. Some illustrative examples are given for regular and non-regular sampling distributions. This paper was supported by Ministerio de Ciencia y Tecnología under grant BEC20001-2982.

13.
This paper presents an adaptive, surrogate-based engineering design methodology for the efficient use of numerical simulations of physical models. These surrogates are nonlinear regression models fitted with data obtained from deterministic numerical simulations using optimal sampling. A multistage Bayesian procedure is followed in the formulation of surrogates to support the evolutionary nature of engineering design. Information from computer simulations of different levels of accuracy and detail is integrated, and surrogates are updated sequentially to improve their accuracy. Data-adaptive optimal sampling is conducted by minimizing the sum of the eigenvalues of the prior covariance matrix. Metrics to quantify prediction errors are proposed and tested to evaluate surrogate accuracy given cost and time constraints. The proposed methodology is tested with a known analytical function to illustrate accuracy and cost tradeoffs. The methodology is then applied to the thermal design of embedded electronic packages with five design parameters. Temperature distributions of embedded electronic chip configurations are calculated using spectral element direct numerical simulations. Surrogates, built from 30 simulations in two stages, are used to predict responses of new design combinations and to minimize the maximum chip temperature.
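The surrogate idea can be sketched with a small Gaussian-process regression fitted to a handful of "simulation" runs; the paper's surrogates are multistage Bayesian regression models, so this is only an illustrative stand-in with an assumed RBF kernel and a toy objective function:

```python
import numpy as np

# Sketch: fit a GP surrogate to a few expensive runs and predict
# (with uncertainty) at new design points.
def rbf(a, b, ls=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ls**2)

x_train = np.array([0.0, 0.25, 0.5, 0.75, 1.0])    # sampled design points
y_train = np.sin(2 * np.pi * x_train)              # stand-in simulation output

K = rbf(x_train, x_train) + 1e-8 * np.eye(x_train.size)
x_new = np.array([0.1, 0.6, 0.9])
k_star = rbf(x_new, x_train)

alpha = np.linalg.solve(K, y_train)
mean = k_star @ alpha                              # surrogate prediction
var = 1.0 - np.einsum('ij,ji->i', k_star, np.linalg.solve(K, k_star.T))
for xi, mi, si in zip(x_new, mean, np.sqrt(np.maximum(var, 0))):
    print(f"x = {xi:.2f}: prediction = {mi:+.3f} +/- {si:.3f}")
```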

14.
This paper addresses the problem of assessing the current performance of a system, given that performance can change over time. In many problems of interest, although a significant body of historical evidence is available, current performance information is too sparse to be the sole basis for an assessment of how well the system is currently performing. Therefore, it is desirable to apply current data within a Bayesian framework, making use of a broader body of evidence. However, both noninformative priors and simple informative priors have drawbacks for this application. The present work shows that ‘mixture’ priors have relatively desirable properties for performance assessment. These properties are illustrated using a simple example in assessment of reliability performance. It is also shown that one implementation of the mixture prior (the ‘fixed-constituent model’) is formally equivalent to methods used in signal detection, statistical decision rules in medical diagnosis, and many other applications. Building on the medical analogy, the potential benefits of an integrated treatment of reliability data and inspection results are illustrated. A companion paper develops a more sophisticated implementation of the mixture prior (the ‘variable-constituent model’), and extends the treatment to more complex examples.
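A minimal sketch of the mixture-prior update for a success/failure reliability assessment, with invented constituents (a "historical" and a "degraded" Beta component) and invented current data:

```python
from scipy import stats

# Sketch: the prior on current reliability p is a weighted mix of
# two Beta constituents; current data update both the component
# posteriors and the mixture weights.
w = [0.7, 0.3]                          # prior constituent weights
components = [(45.0, 5.0), (5.0, 5.0)]  # Beta(a, b): historical vs degraded
n, k = 10, 8                            # current data: 8 successes in 10

# Marginal likelihood of the data under each Beta-binomial constituent.
def marg(a, b):
    return stats.betabinom.pmf(k, n, a, b)

m = [marg(a, b) for a, b in components]
post_w = [wi * mi / sum(wj * mj for wj, mj in zip(w, m))
          for wi, mi in zip(w, m)]
post_mean = sum(pw * (a + k) / (a + b + n)
                for pw, (a, b) in zip(post_w, components))
print(f"posterior weights = {post_w[0]:.2f}/{post_w[1]:.2f}, "
      f"posterior mean reliability = {post_mean:.3f}")
```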

15.
Sometimes the assessment of very high reliability levels is difficult, for the following main reasons:
- the high reliability level of each item makes it impossible to obtain, in a reasonably short time, a sufficient number of failures;
- the high cost of the high-reliability items submitted to life tests makes it unfeasible to collect enough data for ‘classical’ statistical analyses.
In this context, the paper presents a Bayesian solution to the problem of estimating the parameters of the Weibull–inverse power law model on the basis of a limited number (say six) of life tests, carried out at different stress levels, all higher than the normal one. The over-stressed (i.e. accelerated) tests allow the use of experimental data obtained in a reasonably short time. The Bayesian approach enables one to reduce the required number of failures by adding the available a priori engineering knowledge to the failure information. This involvement of engineers conforms with advanced management policies that aim at engaging everyone's commitment in order to obtain total quality. A Monte Carlo study of the non-asymptotic properties of the proposed estimators, and a comparison with the properties of maximum likelihood estimators, close the work.
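A coarse grid-based sketch of the accelerated-test inference (fixing the Weibull shape and using invented failure times, priors, and grids; the paper's treatment is fuller):

```python
import numpy as np
from scipy import stats

# Sketch: at stress V the Weibull scale is eta = C / V**p; a handful
# of accelerated failure times update a flat prior on a (C, p) grid.
beta = 1.5                               # assumed known Weibull shape
data = [(2.0, 150.0), (2.0, 420.0),      # (stress level, failure time)
        (3.0, 60.0), (3.0, 110.0),
        (4.0, 25.0), (4.0, 55.0)]
V = np.array([v for v, _ in data])
t = np.array([ti for _, ti in data])

C_grid = np.linspace(200.0, 3000.0, 120)
p_grid = np.linspace(0.5, 4.0, 120)
logpost = np.zeros((C_grid.size, p_grid.size))   # flat prior on the grid

for i, C in enumerate(C_grid):
    for j, p in enumerate(p_grid):
        logpost[i, j] = stats.weibull_min.logpdf(t, beta,
                                                 scale=C / V**p).sum()

post = np.exp(logpost - logpost.max())
post /= post.sum()
C_mean = (post.sum(axis=1) * C_grid).sum()
p_mean = (post.sum(axis=0) * p_grid).sum()
print(f"posterior means: C = {C_mean:.0f}, p = {p_mean:.2f}")
print(f"predicted Weibull scale at normal stress V=1: {C_mean:.0f} h")
```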

16.
This paper deals with the use of Bayesian networks to compute system reliability. The reliability analysis problem is described, and the usual methods for quantitative reliability analysis are presented within a case study. Some drawbacks that justify the use of Bayesian networks are identified. The basic concepts of applying Bayesian networks to reliability analysis are introduced, and a model to compute the reliability for the case study is presented. Dempster-Shafer theory for treating epistemic uncertainty in reliability analysis is then discussed, and its basic concepts, which can be applied thanks to the Bayesian network inference algorithm, are introduced. Finally, it is shown, with a numerical example, how Bayesian networks' inference algorithms compute complex system reliability and what Dempster-Shafer theory can contribute to reliability analysis.

17.
This paper uses a Bayesian Belief Network (BBN) methodology to model the reliability of Search And Rescue (SAR) operations within UK Coastguard (Maritime Rescue) coordination centres. This is an extension of earlier work, which investigated the rationale of the government's decision to close a number of coordination centres. The previous study made use of secondary data sources and employed a binary logistic regression methodology to support the analysis. The present study focused on the collection of primary data through a structured elicitation process, which resulted in the construction of a BBN. The main finding of the study is that statistical analysis of secondary data can be used to complement BBNs. The former provided a more objective assessment of associations between variables, but was restricted in the level of detail that could be explicitly expressed within the model due to a lack of available data. The latter method provided a much more detailed model, but the validity of the numeric assessments was more questionable. Each method can be used to inform and defend the development of the other. The paper describes in detail the elicitation process employed to construct the BBN and reflects on the potential for bias.

18.
We study the sample sizes required for Bayesian reliability demonstration in terms of failure-free periods after testing, under the assumption that the tests lead to zero failures. For the process after testing, we consider both deterministic and random numbers of tasks, including tasks arriving as Poisson processes. It turns out that the deterministic case is the worst, in the sense that it requires the most tasks to be tested. We consider such reliability demonstration for a single type of task, as well as for multiple types of tasks to be performed by one system. We also consider the situation where tests of different types of tasks may have different costs, aiming at minimal expected total cost, assuming that a failure in the process would be catastrophic, in the sense that the process would be discontinued. Generally, these inferences are very sensitive to the choice of prior distribution, so one must be very careful when interpreting priors as non-informative.
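The zero-failure demonstration question can be sketched for a single task type with a Beta prior on the per-task failure probability; the prior and demonstration target below are assumptions, and the prior sensitivity noted above applies directly:

```python
# Sketch: with a Beta(a, b) prior on the per-task failure probability,
# find the smallest number of failure-free tests n such that the
# predictive probability of m subsequent failure-free tasks meets a target.
a, b = 1.0, 1.0          # assumed prior (uniform, often called non-informative)
m, target = 100, 0.90    # failure-free period of m tasks, required probability

def pred_success(n):
    # P(next m tasks succeed | n zero-failure tests), Beta-binomial:
    # product over the m tasks of (b + n + k) / (a + b + n + k).
    p = 1.0
    for k in range(m):
        p *= (b + n + k) / (a + b + n + k)
    return p

n = 0
while pred_success(n) < target:
    n += 1
print(f"tests required: n = {n} (predictive prob = {pred_success(n):.3f})")
```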

19.
Ranking a group of candidate sites and selecting from it the high-risk locations, or hotspots, for detailed engineering study and countermeasure evaluation is the first step in a transport safety improvement program. However, past studies have mainly focused on applying appropriate methods for ranking locations, with few addressing how to define selection methods or threshold rules for hotspot identification. The primary goal of this paper is to introduce a multiple-testing-based approach to the problem of selecting hotspots. Following recent developments in the literature, two testing procedures are studied under a Bayesian framework: the Bayesian test with weights (BTW) and a Bayesian test controlling the posterior false discovery rate (FDR) or false negative rate (FNR). The hypothesis tests are implemented on the basis of two random-effect (Bayesian) models, namely the hierarchical Poisson/Gamma (Negative Binomial) model and the hierarchical Poisson/Lognormal model. A dataset of highway–railway grade crossings is used as an application example to illustrate the proposed procedures, incorporating both the posterior distribution of accident frequency and the posterior distribution of ranks. The effects of the various decision parameters used in hotspot identification procedures are also discussed.
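A sketch of a posterior-FDR selection rule under the hierarchical Poisson/Gamma model, with invented counts, exposure, prior, and thresholds:

```python
import numpy as np
from scipy import stats

# Sketch: flag site i as a hotspot when the posterior probability
# that its accident rate exceeds a limit is high, keeping the
# average posterior FDR of the flagged set below a target.
counts = np.array([1, 3, 9, 2, 14, 4, 0, 7])   # accidents per site
expo = np.ones_like(counts, dtype=float)       # equal exposure (assumed)
a, b = 2.0, 0.5                                # Gamma prior on the rate
rate_limit, fdr_target = 6.0, 0.10

# Conjugate posterior: rate_i | data ~ Gamma(a + counts, b + exposure).
p_hot = stats.gamma.sf(rate_limit, a + counts, scale=1.0 / (b + expo))

order = np.argsort(-p_hot)                     # most likely hotspots first
flagged = []
for i in order:
    cand = flagged + [int(i)]
    if np.mean(1.0 - p_hot[cand]) <= fdr_target:   # posterior FDR of set
        flagged = cand
print("flagged hotspots (site index):", sorted(flagged))
```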

20.
This paper presents a general methodology to improve risk assessment in the specific workshops of semiconductor manufacturers. We are concerned here with the problem of equipment failures and drifts. These failures are generally observed, with delay, during the product metrology phase. To improve the reactivity of the control system, we propose a predictive approach based on Bayesian techniques, whose growing use reflects the advantages they provide. This approach allows early action, for example maintaining the equipment before it drifts. Our contribution consists in proposing a generic model to predict the equipment health factor (EHF), which supports decision strategies on preventive maintenance to avoid unscheduled equipment downtime. Following the proposed methodology, a data extraction and processing prototype is also designed to identify the real failure modes that instantiate the Bayesian model. EHF results are decision-support elements; they can be further used to improve production performance: reduced cycle time, improved yield, and enhanced equipment effectiveness.
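A toy sketch (not the paper's EHF model) of a Bayesian health factor obtained by naive-Bayes combination of discrete monitoring signals; every probability is invented for the example:

```python
# Sketch: P(healthy | observed symptoms) as a Bayesian health factor,
# combining conditionally independent monitoring signals.
p_healthy = 0.95                     # prior equipment health (assumed)
lik = {                              # P(symptom | healthy), P(symptom | drifting)
    "vibration_high":  (0.05, 0.60),
    "temp_drift":      (0.10, 0.70),
    "pressure_normal": (0.90, 0.40),
}
observed = ["vibration_high", "pressure_normal"]

num = p_healthy
den = 1.0 - p_healthy
for s in observed:
    ph, pd = lik[s]
    num *= ph
    den *= pd
ehf = num / (num + den)              # posterior health factor
print(f"EHF = {ehf:.3f}" + ("  -> schedule maintenance" if ehf < 0.7 else ""))
```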

