20 related references found.
1.
Eduard Hofer Martina Kloos Bernard Krzykacz-Hausmann Jörg Peschke Martin Woltereck 《Reliability Engineering & System Safety》2002,77(3)
Epistemic uncertainty analysis is an essential feature of any model application subject to ‘state of knowledge’ uncertainties. Such analysis is usually carried out on the basis of a Monte Carlo simulation, sampling the epistemic variables and performing the corresponding model runs. In situations where aleatory uncertainties are also present in the model, however, an adequate treatment of both types of uncertainty requires a two-stage nested Monte Carlo simulation, i.e. sampling the epistemic variables (‘outer loop’) and nested sampling of the aleatory variables (‘inner loop’). For complex and long-running codes, the computational effort to perform all the resulting model runs may be prohibitive. Therefore, an approximate approach to epistemic uncertainty analysis is suggested which is based solely on two simple Monte Carlo samples: (a) joint sampling of both epistemic and aleatory variables simultaneously, and (b) sampling of the aleatory variables alone with the epistemic variables held fixed at their reference values. The applications of this approach to dynamic reliability analyses presented in this paper look quite promising and suggest that performing such an approximate epistemic uncertainty analysis is preferable to not performing any.
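To make the contrast concrete, here is a minimal sketch (not the authors' code; the model, distributions and sample sizes are invented) of a nested two-loop Monte Carlo versus the approximate two-sample alternative described above:

    import numpy as np

    rng = np.random.default_rng(0)

    def model(e, a):
        # Stand-in for a long-running code: output depends on an epistemic
        # parameter e and an aleatory variable a.
        return e * np.exp(-a) + a

    def nested_two_loop(n_epi=100, n_ale=200):
        # Outer loop over epistemic samples, nested inner loop over aleatory
        # samples (n_epi * n_ale model runs in total).
        means = []
        for _ in range(n_epi):
            e = rng.uniform(0.5, 1.5)            # epistemic ('state of knowledge')
            a = rng.exponential(1.0, n_ale)      # aleatory (randomness)
            means.append(model(e, a).mean())
        return np.array(means)                   # epistemic distribution of the mean outcome

    def approximate_two_samples(n=200, e_ref=1.0):
        # (a) joint sampling of epistemic and aleatory variables simultaneously.
        joint = model(rng.uniform(0.5, 1.5, n), rng.exponential(1.0, n))
        # (b) aleatory-only sampling with the epistemic variable held at its reference value.
        fixed = model(e_ref, rng.exponential(1.0, n))
        # The extra spread of (a) over (b) indicates the epistemic contribution (2n runs only).
        return joint.var(ddof=1) - fixed.var(ddof=1)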
2.
Quantification of epistemic and aleatory uncertainties in level-1 probabilistic safety assessment studies
K. Durga Rao H.S. Kushwaha A.K. Verma A. Srividya 《Reliability Engineering & System Safety》2007,92(7):947-956
The availability models of complex processes and phenomena inevitably involve simplifying assumptions and idealizations. These simplifications and idealizations generate uncertainties, which can be classified as aleatory (arising from randomness) and/or epistemic (arising from lack of knowledge). Acknowledging and treating uncertainty is vital for the practical usability of reliability analysis results, and distinguishing between the two types is useful both for making reliability/risk-informed decisions with confidence and for effective management of uncertainty. In level-1 probabilistic safety assessment (PSA) of nuclear power plants (NPP), the current practice is to carry out epistemic uncertainty analysis on the basis of a simple Monte Carlo simulation by sampling the epistemic variables in the model, while the aleatory uncertainty is neglected and point estimates of the aleatory variables, viz. time to failure and time to repair, are used. Treating both types of uncertainty requires a two-phase Monte Carlo simulation in which the outer loop samples epistemic variables and the inner loop samples aleatory variables. A methodology based on two-phase Monte Carlo simulation is presented for distinguishing the two kinds of uncertainty in the context of availability/reliability evaluation in level-1 PSA studies of NPP.
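A hedged sketch of the two-phase idea in an availability setting (all distributions and parameter values below are invented for illustration, not taken from the paper):

    import numpy as np

    rng = np.random.default_rng(1)

    def availability(lam, mu, horizon=10_000.0):
        # Inner (aleatory) loop: simulate one alternating up/down history from
        # sampled times to failure and times to repair.
        t, up_time = 0.0, 0.0
        while t < horizon:
            ttf = rng.exponential(1.0 / lam)     # time to failure
            ttr = rng.exponential(1.0 / mu)      # time to repair
            up_time += min(ttf, horizon - t)
            t += ttf + ttr
        return up_time / horizon

    # Outer (epistemic) loop: sample the uncertain failure/repair rates themselves.
    avail = [availability(rng.lognormal(np.log(1e-3), 0.3),
                          rng.lognormal(np.log(1e-1), 0.3))
             for _ in range(500)]
    print(np.percentile(avail, [5, 50, 95]))     # epistemic uncertainty band on availability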
3.
Jon C. Helton Jay D. Johnson Cédric J. Sallaberry 《Reliability Engineering & System Safety》2011,96(9):1014-1033
In 2001, the National Nuclear Security Administration (NNSA) of the U.S. Department of Energy (DOE), in conjunction with the national security laboratories (i.e., Los Alamos National Laboratory, Lawrence Livermore National Laboratory, and Sandia National Laboratories), initiated development of a process designated quantification of margins and uncertainties (QMU) for the use of risk assessment methodologies in the certification of the reliability and safety of the nation's nuclear weapons stockpile. A previous presentation, “Quantification of Margins and Uncertainties: Conceptual and Computational Basis,” describes the basic ideas that underlie QMU and illustrates these ideas with two notional examples. The basic ideas and challenges that underlie NNSA's mandate for QMU are present, and have been successfully addressed, in a number of past analyses for complex systems. To provide perspective on the implementation of a requirement for QMU in the analysis of a complex system, three past analyses are presented as examples: (i) the probabilistic risk assessment carried out for the Surry Nuclear Power Station as part of the U.S. Nuclear Regulatory Commission's (NRC's) reassessment of the risk from commercial nuclear power in the United States (i.e., the NUREG-1150 study), (ii) the performance assessment for the Waste Isolation Pilot Plant carried out by the DOE in support of a successful compliance certification application to the U.S. Environmental Protection Agency, and (iii) the performance assessment for the proposed high-level radioactive waste repository at Yucca Mountain, Nevada, carried out by the DOE in support of a license application to the NRC. Each of the preceding analyses involved a detailed treatment of uncertainty and produced results used to establish compliance with specific numerical requirements on the performance of the system under study. As a result, these studies illustrate the determination of both margins and the uncertainty in margins in real analyses.
4.
Ricardo Cao 《TEST》1999,8(1):95-116
In this paper an overview of the existing literature on bootstrapping for estimation and prediction in time series is presented. Some of the methods are detailed, organized according to the aim they are designed for (estimation or prediction) and according to whether or not some parametric structure is assumed for the dependence. Finally, a new kernel-based bootstrap method is presented for prediction when no parametric assumption is made for the dependence.
This research was partially supported by the “Dirección General de Investigación Científica y Técnica” Grants PB94-0494 and PB95-0826 and by the “Xunta de Galicia” Grant XUGA 10501B97.
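One of the classical schemes covered by such surveys can be sketched in a few lines; the example below is a residual bootstrap for an AR(1) model under assumed parametric structure (it does not reproduce the paper's kernel-based prediction method):

    import numpy as np

    rng = np.random.default_rng(2)

    def ar1_residual_bootstrap_prediction(x, B=1000):
        # Fit AR(1) by least squares: x_t = phi * x_{t-1} + eps_t.
        phi = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
        resid = x[1:] - phi * x[:-1]
        resid = resid - resid.mean()                    # centred residuals
        preds = np.empty(B)
        for b in range(B):
            eps = rng.choice(resid, size=len(x))        # resample residuals
            xb = np.empty(len(x))
            xb[0] = x[0]
            for t in range(1, len(x)):                  # rebuild a bootstrap series
                xb[t] = phi * xb[t - 1] + eps[t]
            phi_b = np.dot(xb[1:], xb[:-1]) / np.dot(xb[:-1], xb[:-1])
            preds[b] = phi_b * x[-1] + rng.choice(resid)   # one-step-ahead prediction
        return np.percentile(preds, [5, 95])            # bootstrap prediction interval

    x = np.empty(200)
    x[0] = 0.0
    for t in range(1, 200):                             # simulated AR(1) path for the demo
        x[t] = 0.6 * x[t - 1] + rng.normal()
    print(ar1_residual_bootstrap_prediction(x))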
5.
J.C. Helton J.D. Johnson W.L. Oberkampf C.J. Sallaberry 《Reliability Engineering & System Safety》2006,91(10-11):1414-1434
Three applications of sampling-based sensitivity analysis in conjunction with evidence theory representations for epistemic uncertainty in model inputs are described: (i) an initial exploratory analysis to assess model behavior and provide insights for additional analysis; (ii) a stepwise analysis showing the incremental effects of uncertain variables on complementary cumulative belief functions and complementary cumulative plausibility functions; and (iii) a summary analysis showing a spectrum of variance-based sensitivity analysis results that derive from probability spaces that are consistent with the evidence space under consideration.
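A minimal sketch of how complementary cumulative belief and plausibility functions arise from an evidence space (one uncertain input, interval focal elements with basic probability assignments; the numbers and the monotone model f are invented):

    # (interval focal element, basic probability assignment) pairs for one input.
    focal = [((0.0, 1.0), 0.3), ((0.5, 2.0), 0.5), ((1.5, 3.0), 0.2)]
    f = lambda x: x ** 2            # monotone model on the relevant range

    def ccbf_ccpf(y):
        # Image of a focal interval under a monotone increasing f is [f(lo), f(hi)].
        bel = sum(m for (lo, hi), m in focal if f(lo) > y)   # image entirely above y
        pl = sum(m for (lo, hi), m in focal if f(hi) > y)    # image intersects (y, inf)
        return bel, pl              # CCBF(y) <= CCPF(y)

    for y in (0.5, 1.0, 4.0):
        print(y, ccbf_ccpf(y))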
6.
Quantification of margins and uncertainties: Alternative representations of epistemic uncertainty
In 2001, the National Nuclear Security Administration of the U.S. Department of Energy in conjunction with the national security laboratories (i.e., Los Alamos National Laboratory, Lawrence Livermore National Laboratory and Sandia National Laboratories) initiated development of a process designated Quantification of Margins and Uncertainties (QMU) for the use of risk assessment methodologies in the certification of the reliability and safety of the nation's nuclear weapons stockpile. A previous presentation, “Quantification of Margins and Uncertainties: Conceptual and Computational Basis,” describes the basic ideas that underlie QMU and illustrates these ideas with two notional examples that employ probability for the representation of aleatory and epistemic uncertainty. The current presentation introduces and illustrates the use of interval analysis, possibility theory and evidence theory as alternatives to the use of probability theory for the representation of epistemic uncertainty in QMU-type analyses. The following topics are considered: the mathematical structure of alternative representations of uncertainty, alternative representations of epistemic uncertainty in QMU analyses involving only epistemic uncertainty, and alternative representations of epistemic uncertainty in QMU analyses involving a separation of aleatory and epistemic uncertainty. Analyses involving interval analysis, possibility theory and evidence theory are illustrated with the same two notional examples used in the presentation indicated above to illustrate the use of probability to represent aleatory and epistemic uncertainty in QMU analyses.
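As a small worked illustration of the interval-analysis alternative (notional numbers, not the presentation's notional examples), a QMU-type margin with purely epistemic, interval-valued inputs can be bounded with interval arithmetic:

    def interval_sub(a, b):
        # [a_lo, a_hi] - [b_lo, b_hi] = [a_lo - b_hi, a_hi - b_lo]
        return (a[0] - b[1], a[1] - b[0])

    capacity = (9.0, 11.0)           # epistemic interval on system capacity
    load = (6.0, 8.0)                # epistemic interval on imposed load
    margin = interval_sub(capacity, load)
    print(margin)                    # (1.0, 5.0): the margin is guaranteed positive here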
7.
Davit Khachatryan Søren Bisgaard 《Quality and Reliability Engineering International》2009,25(8):947-960
Data used for monitoring and control of industrial processes are often best modeled as a time series. An important issue is to determine whether such time series are stationary. In this article we discuss the variogram, a graphical tool for assessing stationarity. We build on previous work and provide further details and more general results, including analytical structures of the variogram for various non-stationary processes, and we illustrate with a number of variogram examples using standard data sets from the literature and simulated data sets. Copyright © 2009 John Wiley & Sons, Ltd.
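A minimal sketch of the sample variogram diagnostic in its standardized form, G_k = Var(z_{t+k} - z_t) / Var(z_{t+1} - z_t), with illustrative data: for a stationary series G_k levels off as the lag grows, while for a nonstationary series it keeps increasing.

    import numpy as np

    def variogram(z, max_lag=20):
        d1 = np.var(z[1:] - z[:-1], ddof=1)
        return [np.var(z[k:] - z[:-k], ddof=1) / d1 for k in range(1, max_lag + 1)]

    rng = np.random.default_rng(3)
    stationary = rng.normal(size=500)               # white noise: G_k stays near 1
    random_walk = np.cumsum(rng.normal(size=500))   # nonstationary: G_k grows roughly linearly
    print(variogram(stationary, 5))
    print(variogram(random_walk, 5))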
8.
9.
10.
Y. R. Tao X. Han S. Y. Duan C. Jiang 《International Journal for Numerical Methods in Engineering》2014,97(1):68-78
Epistemic and aleatory uncertain variables always exist simultaneously in a multidisciplinary system and can be modeled by evidence and probability theories, respectively. The propagation of uncertainty through coupled subsystems and the strong nonlinearity of the multidisciplinary system make reliability analysis difficult and computationally expensive. In this paper, a novel reliability analysis procedure is proposed for multidisciplinary systems with epistemic and aleatory uncertain variables. First, the probability density function of the aleatory variables is assumed to be piecewise uniform based on a Bayes method, and an approximate most probable point is obtained by an equivalent normalization method. Then, an importance sampling method is used to calculate the failure probability together with its variance and coefficient of variation. The effectiveness of the procedure is demonstrated by two numerical examples. Copyright © 2013 John Wiley & Sons, Ltd.
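A hedged sketch of the importance-sampling step for a failure probability, with the sampling density centred at an approximate most probable point (a simple linear limit state in standard normal space is used here, not the paper's multidisciplinary examples):

    import numpy as np
    from scipy import stats

    beta = 3.0
    g = lambda u: beta - (u[:, 0] + u[:, 1]) / np.sqrt(2)    # failure when g(u) < 0
    mpp = np.array([beta, beta]) / np.sqrt(2)                # most probable point of this limit state

    rng = np.random.default_rng(4)
    n = 20_000
    u = rng.normal(size=(n, 2)) + mpp                        # sample around the MPP
    w = (stats.norm.pdf(u).prod(axis=1)                      # original standard normal density
         / stats.norm.pdf(u - mpp).prod(axis=1))             # importance sampling density
    hit = (g(u) < 0).astype(float)
    pf = np.mean(hit * w)
    cov = np.std(hit * w, ddof=1) / (np.sqrt(n) * pf)        # coefficient of variation of the estimate
    print(pf, stats.norm.cdf(-beta), cov)                    # IS estimate vs exact value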
11.
Failure and reliability prediction by support vector machines regression of time series data
Márcio das Chagas Moura Enrico Zio Isis Didier Lins 《Reliability Engineering & System Safety》2011,96(11):1527-1534
Support Vector Machines (SVMs) are kernel-based learning methods that have been successfully adopted for regression problems. However, their use in reliability applications has not been widely explored. In this paper, a comparative analysis is presented in order to evaluate the effectiveness of SVMs in forecasting time-to-failure and reliability of engineered components based on time series data. The performance of SVM regression on literature case studies is measured against other advanced learning methods such as the Radial Basis Function, the traditional MultiLayer Perceptron model, the Box-Jenkins autoregressive integrated moving average and the Infinite Impulse Response Locally Recurrent Neural Networks. The comparison shows that in the analyzed cases SVM outperforms or is comparable to the other techniques.
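A minimal sketch of SVM regression on lagged values of a synthetic reliability series using scikit-learn's SVR; the data, lag structure and hyperparameters are illustrative, not those of the paper's case studies:

    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(5)
    t = np.arange(120)
    series = np.exp(-0.02 * t) + 0.02 * rng.normal(size=t.size)   # notional reliability decay

    lags = 4                                                      # embed the series in lagged vectors
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    y = series[lags:]

    model = SVR(kernel="rbf", C=10.0, epsilon=0.005).fit(X[:-10], y[:-10])
    print(model.predict(X[-10:]))      # one-step-ahead forecasts on the held-out tail
    print(y[-10:])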
12.
Error and uncertainty in modeling and simulation
William L. Oberkampf Sharon M. DeLand Brian M. Rutherford Kathleen V. Diegert Kenneth F. Alvin 《Reliability Engineering & System Safety》2002,75(3)
This article develops a general framework for identifying error and uncertainty in computational simulations that deal with the numerical solution of a set of partial differential equations (PDEs). A comprehensive, new view of the general phases of modeling and simulation is proposed, consisting of the following phases: conceptual modeling of the physical system, mathematical modeling of the conceptual model, discretization and algorithm selection for the mathematical model, computer programming of the discrete model, numerical solution of the computer program model, and representation of the numerical solution. Our view incorporates the modeling and simulation phases that are recognized in the systems engineering and operations research communities, but it adds phases that are specific to the numerical solution of PDEs. In each of these phases, general sources of uncertainty, both aleatory and epistemic, and error are identified. Our general framework is applicable to any numerical discretization procedure for solving ODEs or PDEs. To demonstrate this framework, we describe a system-level example: the flight of an unguided, rocket-boosted, aircraft-launched missile. This example is discussed in detail at each of the six phases of modeling and simulation. Two alternative models of the flight dynamics are considered, along with aleatory uncertainty of the initial mass of the missile and epistemic uncertainty in the thrust of the rocket motor. We also investigate the interaction of modeling uncertainties and numerical integration error in the solution of the ordinary differential equations for the flight dynamics.
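A heavily simplified sketch in the spirit of the system-level example (a 1-D point-mass trajectory with invented numbers, not the paper's missile model): an aleatory initial mass is sampled, an epistemic thrust interval is swept over its bounds, and the integration tolerance is the knob that controls numerical solution error.

    import numpy as np
    from scipy.integrate import solve_ivp

    rng = np.random.default_rng(6)
    thrust_bounds = (9_500.0, 10_500.0)          # epistemic interval on motor thrust [N]
    burn_time, drag_coeff, g0 = 5.0, 0.02, 9.81

    def rhs(t, y, thrust, mass):
        h, v = y
        a = (thrust * (t < burn_time) - drag_coeff * v * abs(v)) / mass - g0
        return [v, a]

    def apogee(thrust, mass, rtol=1e-6):
        # rtol controls the numerical integration error of the ODE solution.
        sol = solve_ivp(rhs, (0.0, 60.0), [0.0, 0.0], args=(thrust, mass),
                        rtol=rtol, atol=1e-8)
        return sol.y[0].max()

    for thrust in thrust_bounds:                 # sweep the epistemic bounds
        masses = rng.normal(100.0, 2.0, 50)      # aleatory initial mass [kg]
        peaks = [apogee(thrust, m) for m in masses]
        print(thrust, np.mean(peaks), np.std(peaks))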
13.
Optimization leads to specialized structures which are not robust to disturbance events like unanticipated abnormal loading or human errors. Typical reliability-based and robust optimization mainly address objective aleatory uncertainties. To date, the impact of subjective epistemic uncertainties on optimal design has not been comprehensively investigated. In this paper, we use an independent parameter, the latent failure probability, to investigate the effects of epistemic uncertainties on optimal design. Reliability-based and risk-based truss topology optimization are addressed. It is shown that optimal risk-based designs can be divided into three groups: (A) when epistemic uncertainty is small (in comparison to aleatory uncertainty), the optimal design is indifferent to it and yields isostatic structures; (B) when aleatory and epistemic uncertainties are both relevant, the optimal design is controlled by epistemic uncertainty and yields hyperstatic but nonredundant structures, for which the expected costs of direct collapse are controlled; and (C) when epistemic uncertainty becomes too large, the optimal design becomes redundant, as a way to control increasing expected costs of collapse. The three regions above are divided by hyperstatic and redundancy thresholds. The redundancy threshold is the point where the structure needs to become redundant so that its reliability becomes larger than the latent reliability of the simplest isostatic system. Simple truss topology optimization is considered herein, but the conclusions have immediate relevance to the optimal design of realistic structures subject to aleatory and epistemic uncertainties.
14.
Søren Bisgaard Davit Khachatryan 《Quality and Reliability Engineering International》2010,26(3):259-265
Industrial processes are often monitored via data sampled at a high frequency and hence are likely to produce autocorrelated time series that may or may not be stationary. To determine whether a time series is stationary, the standard approach is to check whether the sample autocorrelation function fades out relatively quickly. An alternative and somewhat sounder approach is to use the variogram. In this article we review the basic properties of the variogram and then derive a general expression for asymptotic confidence intervals for the variogram based on the delta method. We illustrate the computations with an industrial process example. Copyright © 2009 John Wiley & Sons, Ltd.
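As a hedged illustration of the kind of interval involved (a generic first-order result, not the article's exact derivation), assume the standardized variogram estimator $\hat G_k = \hat V_k / \hat V_1$, with $\hat V_k$ estimating $V_k = \mathrm{Var}(z_{t+k} - z_t)$. The first-order delta method for a ratio gives

$$ \mathrm{Var}(\hat G_k) \approx \frac{\mathrm{Var}(\hat V_k)}{V_1^{2}} + \frac{V_k^{2}\,\mathrm{Var}(\hat V_1)}{V_1^{4}} - \frac{2\,V_k\,\mathrm{Cov}(\hat V_k,\hat V_1)}{V_1^{3}}, $$

so an approximate 95% confidence interval is $\hat G_k \pm 1.96\sqrt{\widehat{\mathrm{Var}}(\hat G_k)}$, where the variances and covariances of the $\hat V_k$ depend on the assumed process and the sample size.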
15.
This paper focuses on sensitivity analysis of results from computer models in which both epistemic and aleatory uncertainties are present. Sensitivity is defined in the sense of “uncertainty importance” in order to identify and rank the principal sources of epistemic uncertainty. A natural and consistent way to arrive at sensitivity results in such cases would be a two-dimensional or double-loop nested Monte Carlo sampling strategy in which the epistemic parameters are sampled in the outer loop and the aleatory variables are sampled in the nested inner loop. However, the computational effort of this procedure may be prohibitive for complex and time-demanding codes. This paper therefore suggests an approximate method for sensitivity analysis based on particular one-dimensional or single-loop sampling procedures, which require substantially less computational effort. From the results of such sampling one can obtain approximate estimates of several standard uncertainty importance measures for the aleatory probability distributions and related probabilistic quantities of the model outcomes of interest. The reliability of the approximate sensitivity results depends on the effect of all epistemic uncertainties on the total joint epistemic and aleatory uncertainty of the outcome. The magnitude of this effect can be expressed quantitatively and estimated from the same single-loop samples; the higher it is, the more accurate the approximate sensitivity results will be. A case study is provided which shows that the results from the proposed approximate method are comparable to those obtained with the full two-dimensional approach.
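A hedged sketch of the single-loop idea (notional model; the rank correlation below is just one standard sampling-based importance measure, not necessarily those used in the paper): a single joint sample of epistemic and aleatory variables is drawn, and an uncertainty-importance measure is estimated for each epistemic parameter directly from it.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    n = 2_000
    e1 = rng.uniform(0.5, 1.5, n)                 # epistemic parameters
    e2 = rng.uniform(0.0, 0.2, n)
    a = rng.exponential(1.0, n)                   # aleatory variable
    y = e1 * np.exp(-a) + e2 + 0.05 * rng.normal(size=n)   # model outcome

    for name, e in (("e1", e1), ("e2", e2)):
        rho = stats.spearmanr(e, y)[0]            # rank correlation as an importance measure
        print(name, round(rho, 3))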
16.
For real engineering systems, it is sometimes difficult to obtain sufficient data to estimate the precise values of some parameters in reliability analysis. This kind of uncertainty is called epistemic uncertainty. Because of epistemic uncertainty, the traditional universal generating function (UGF) technique is not appropriate for analyzing the reliability of systems with a performance sharing mechanism. This paper proposes a belief UGF (BUGF)-based method to evaluate the reliability of multi-state series systems with a performance sharing mechanism under epistemic uncertainty. The proposed BUGF-based reliability analysis method is validated by an illustrative example and compared with interval UGF (IUGF)-based methods using interval arithmetic or affine arithmetic. The illustrative example shows that the proposed BUGF-based method is more efficient than the IUGF-based methods for the reliability analysis of multi-state systems (MSSs) with a performance sharing mechanism under epistemic uncertainty.
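For orientation, a minimal sketch of the classical (precise-probability) UGF for a two-element multi-state series system, where the series composition operator combines states with min(); the paper's belief UGF and performance sharing mechanism are not reproduced here:

    from itertools import product

    # Each element: list of (performance level, probability) pairs, i.e. its u-function.
    u1 = [(0, 0.05), (5, 0.25), (10, 0.70)]
    u2 = [(0, 0.10), (8, 0.90)]

    def compose_series(ua, ub):
        # Series composition: combine states with min() and multiply probabilities.
        combined = {}
        for (ga, pa), (gb, pb) in product(ua, ub):
            g = min(ga, gb)
            combined[g] = combined.get(g, 0.0) + pa * pb
        return sorted(combined.items())

    u_sys = compose_series(u1, u2)
    demand = 5
    print(u_sys)
    print(sum(p for g, p in u_sys if g >= demand))   # system reliability w.r.t. the demand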
17.
In this paper, a method is presented for classifying objects based on autoregressive model parameters obtained from a time series representation of shape boundaries in digital images of the objects. The technique is insensitive to size and is rotation invariant. The objects chosen are four types of aircraft from a digital photograph, and recognition accuracies of more than 90% were obtained for all the pattern classes. All pattern recognition problems involve two random variables: the pattern vector and the class to which it belongs. The interdependence of the two variables is given by the conditional probability density function, and the degree of dependence between the pattern vector and a particular class is measured by a “distance”. A simple Bhattacharyya distance classifier was used for this purpose.
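A small sketch of the distance computation (the standard closed form of the Bhattacharyya distance for Gaussian class-conditional densities, with invented AR-coefficient statistics; not necessarily the authors' exact formulation):

    import numpy as np

    def bhattacharyya_distance(mu1, cov1, mu2, cov2):
        cov = (cov1 + cov2) / 2.0
        diff = mu1 - mu2
        term1 = diff @ np.linalg.solve(cov, diff) / 8.0
        term2 = 0.5 * np.log(np.linalg.det(cov)
                             / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
        return term1 + term2

    # Notional AR-coefficient feature statistics for two aircraft classes.
    mu_a, cov_a = np.array([1.2, -0.5]), np.diag([0.02, 0.01])
    mu_b, cov_b = np.array([0.9, -0.1]), np.diag([0.03, 0.02])
    print(bhattacharyya_distance(mu_a, cov_a, mu_b, cov_b))
    # A query pattern would be assigned to the class at the smallest distance.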
18.
Angel Urbina Sankaran Mahadevan Thomas L. Paez 《Reliability Engineering & System Safety》2011,96(9):1114-1125
Performance assessment of complex systems is ideally done through full system-level testing, which is seldom available for high-consequence systems. Further, a reality of engineering practice is that some features of system behavior are known not from experimental data but only from expert assessment. On the other hand, data for the individual components that make up the full system are more readily available. The lack of system-level data and the complexity of the system lead to a need to build computational models of the system in a hierarchical or building-block approach (from simple components to the full system). The models are then used for performance prediction in lieu of experiments, to estimate the confidence in the performance of these systems. Central to this is the need to quantify the uncertainties present in the system and to compare the system response to an expected performance measure. This is the basic idea behind Quantification of Margins and Uncertainties (QMU). QMU is applied in decision making, where there are many uncertainties caused by inherent variability (aleatory) in materials, configurations, environments, etc., and by lack of information (epistemic) in the models for the deterministic and random variables that influence system behavior and performance. This paper proposes a methodology to quantify margins and uncertainty in the presence of both aleatory and epistemic uncertainty. It presents a framework based on Bayes networks that uses available data at multiple levels of complexity (i.e. components, subsystems, etc.) and demonstrates a method to incorporate epistemic uncertainty given in terms of intervals on a model parameter.
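A hedged, highly simplified sketch of the margin/uncertainty bookkeeping with an interval-valued epistemic parameter (notional response model and numbers; the paper's Bayes network framework is not reproduced):

    import numpy as np

    rng = np.random.default_rng(8)
    threshold = 12.0                       # required performance bound
    theta_bounds = (0.9, 1.1)              # epistemic interval on a model parameter

    for theta in theta_bounds:             # sweep the interval bounds
        response = theta * (10.0 + rng.normal(0.0, 0.5, 5_000))   # aleatory variability
        margin = threshold - response.mean()
        uncertainty = 3.0 * response.std(ddof=1)      # one possible uncertainty metric
        print(theta, round(margin / uncertainty, 2))  # margin-to-uncertainty ratio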
19.
Prabhu Soundappan Efstratios Nikolaidis Raphael T. Haftka Ramana Grandhi Robert Canfield 《Reliability Engineering & System Safety》2004,85(1-3):295
This paper compares Evidence Theory (ET) and Bayesian Theory (BT) for uncertainty modeling and decision under uncertainty, when the evidence about uncertainty is imprecise. The basic concepts of ET and BT are introduced and the ways these theories model uncertainties, propagate them through systems and assess the safety of these systems are presented. ET and BT approaches are demonstrated and compared on challenge problems involving an algebraic function whose input variables are uncertain. The evidence about the input variables consists of intervals provided by experts. It is recommended that a decision-maker compute both the Bayesian probabilities of the outcomes of alternative actions and their plausibility and belief measures when evidence about uncertainty is imprecise, because this helps assess the importance of imprecision and the value of additional information. Finally, the paper presents and demonstrates a method for testing approaches for decision under uncertainty in terms of their effectiveness in making decisions.
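A toy sketch of the comparison (invented function, intervals and weights): the same expert interval evidence is treated once with a Bayesian model (uniform densities on the intervals) and once with evidence theory (belief and plausibility of the same event), so the effect of imprecision becomes visible.

    import numpy as np

    intervals = [(1.0, 3.0), (2.0, 4.0)]     # expert intervals for input x, equal weights
    m = [0.5, 0.5]                           # basic probability assignment per interval
    f = lambda x: x ** 2                     # algebraic function of the uncertain input
    threshold = 6.0                          # event of interest: f(x) > threshold

    # Bayesian treatment: uniform density on each interval, averaged over experts.
    rng = np.random.default_rng(9)
    samples = np.concatenate([rng.uniform(lo, hi, 50_000) for lo, hi in intervals])
    p_bayes = np.mean(f(samples) > threshold)

    # Evidence-theory treatment: belief and plausibility of the same event (f monotone).
    bel = sum(w for (lo, hi), w in zip(intervals, m) if f(lo) > threshold)
    pl = sum(w for (lo, hi), w in zip(intervals, m) if f(hi) > threshold)
    # The Bayesian answer is a single number; belief and plausibility bracket it.
    print(round(p_bayes, 3), bel, pl)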
20.
Ramstedt M 《Accident Analysis & Prevention》2008,40(4):1273-1281
AIMS: To estimate the association between per capita alcohol consumption and fatal accidents in the United States and to compare the outcome with findings from Europe and Canada. DATA AND METHOD: Yearly data on fatal accidents by gender and age were analysed in relation to per capita alcohol consumption for 1950-2002 using the Box-Jenkins technique for time series analysis. FINDINGS: A 1-L increase in per capita consumption was on average followed by 4.4 additional male deaths per 100,000 inhabitants, but had no significant effect on female accident mortality. Regarding specific categories of accidents, the effect on fatal motor vehicle accidents accounted for a large part of the overall effect for men and was also significant for women. With respect to fatal falling accidents and other accidents, the only significant effects were found among young males. For women, the association with per capita consumption in the US was weak in comparison with Canada and Europe. The US effect estimate for overall male accidents was, however, as strong as in Northern Europe (5.2) or Canada (5.9), and stronger than that found in Central and Southern Europe (2.1 and 1.6, respectively). With respect to alcohol and fatal motor vehicle accidents, the male association of 3.2 was stronger than in Europe and closer to the Canadian finding (3.6). CONCLUSIONS: Per capita alcohol consumption at least partly explains the development of male fatal accident rates, and particularly motor vehicle accident rates, in the post-war United States. High traffic density and relatively high legal blood alcohol concentration (BAC) limits for drunken driving are suggested as explanations for the strong association found between alcohol and fatal motor vehicle accidents. The results also suggest that a reduction in per capita consumption would have its greatest preventive impact on fatal accidents among younger males.