Similar literature
20 similar documents retrieved.
1.
Engineering design often has to handle aleatory and epistemic uncertainty simultaneously. For the analysis and quantification of aleatory uncertainty, many methods have been developed at home and abroad, such as the Monte Carlo method, orthogonal polynomial expansion theory, and probability density evolution theory. Research on epistemic uncertainty, and in particular on cases where aleatory and epistemic uncertainty are coupled, remains relatively scarce. In this paper, for epistemic uncertainty arising from data scarcity and data updating, the Bootstrap method and the Bayesian updating method are first introduced for uncertainty characterization, respectively. On this basis, combined with a new unified theoretical framework for quantifying the two types of uncertainty based on probability density evolution and change of probability measure, an efficient method for uncertainty propagation and reliability analysis in the presence of epistemic uncertainty, together with its numerical algorithms, is proposed. This yields a basic data-driven approach to uncertainty quantification, propagation, and reliability analysis of engineering systems. Three engineering examples with real engineering data, namely infinite slope stability analysis, retaining wall stability analysis, and reliability analysis of a roof truss structure, verify the accuracy and efficiency of the proposed method.
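As a rough, self-contained illustration of the first step described here (bootstrap characterization of epistemic uncertainty from scarce data, not the paper's probability-density-evolution framework or its examples), the following sketch resamples a small hypothetical data set and propagates the resulting parameter uncertainty into a failure-probability estimate; all data values, the normal strength model, and the 25 kPa demand threshold are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scarce measurements of a strength parameter (kPa).
data = np.array([31.2, 28.7, 35.1, 30.4, 33.8, 29.5, 32.6, 27.9])

def failure_probability(mean, std, n_mc=50_000):
    """Aleatory Monte Carlo: P[strength < hypothetical demand of 25 kPa]."""
    strength = rng.normal(mean, std, n_mc)
    return np.mean(strength < 25.0)

# Epistemic loop: bootstrap the scarce data to get replicates of (mean, std).
n_boot = 500
pf_samples = []
for _ in range(n_boot):
    resample = rng.choice(data, size=data.size, replace=True)
    pf_samples.append(failure_probability(resample.mean(), resample.std(ddof=1)))

pf_samples = np.array(pf_samples)
print(f"P_f point estimate       : {failure_probability(data.mean(), data.std(ddof=1)):.4f}")
print(f"P_f 95% epistemic interval: [{np.quantile(pf_samples, 0.025):.4f}, "
      f"{np.quantile(pf_samples, 0.975):.4f}]")
```

The spread of `pf_samples` reflects the epistemic contribution, which shrinks as more data become available, whereas the inner Monte Carlo loop represents the irreducible aleatory variability.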

2.
There will be simplifying assumptions and idealizations in the availability models of complex processes and phenomena. These simplifications and idealizations generate uncertainties which can be classified as aleatory (arising due to randomness) and/or epistemic (due to lack of knowledge). The problem of acknowledging and treating uncertainty is vital for practical usability of reliability analysis results. The distinction between the uncertainties is useful for making reliability/risk-informed decisions with confidence and also for effective management of uncertainty. In level-1 probabilistic safety assessment (PSA) of nuclear power plants (NPP), the current practice is to carry out epistemic uncertainty analysis on the basis of a simple Monte Carlo simulation by sampling the epistemic variables in the model. However, the aleatory uncertainty is neglected and point estimates of the aleatory variables, viz., time to failure and time to repair, are used. Treatment of both types of uncertainty requires a two-phase Monte Carlo simulation, in which the outer loop samples the epistemic variables and the inner loop samples the aleatory variables. A methodology based on two-phase Monte Carlo simulation is presented for distinguishing both kinds of uncertainty in the context of availability/reliability evaluation in level-1 PSA studies of NPP.
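The nested simulation this abstract describes can be sketched in a few lines. The toy below uses a single repairable component with exponential times to failure and repair; the lognormal state-of-knowledge distributions for the rates are purely hypothetical and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

N_EPISTEMIC = 200    # outer loop: samples of imprecisely known rates
N_ALEATORY  = 2_000  # inner loop: random failure/repair histories

availability = np.empty(N_EPISTEMIC)
for i in range(N_EPISTEMIC):
    # Outer loop: epistemic uncertainty in failure and repair rates
    # (hypothetical lognormal state-of-knowledge distributions, per hour).
    lam = rng.lognormal(mean=np.log(1e-3), sigma=0.5)   # failure rate
    mu  = rng.lognormal(mean=np.log(1e-1), sigma=0.3)   # repair rate

    # Inner loop: aleatory uncertainty in times to failure and repair.
    ttf = rng.exponential(1.0 / lam, N_ALEATORY)
    ttr = rng.exponential(1.0 / mu,  N_ALEATORY)
    availability[i] = ttf.sum() / (ttf.sum() + ttr.sum())

print(f"mean availability        : {availability.mean():.4f}")
print(f"5th/95th epistemic bounds: {np.quantile(availability, 0.05):.4f} / "
      f"{np.quantile(availability, 0.95):.4f}")
```

The outer-loop spread of the availability estimates is the epistemic uncertainty that a point-estimate treatment of the aleatory variables would hide.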

3.
Epistemic uncertainty analysis is an essential feature of any model application subject to ‘state of knowledge’ uncertainties. Such analysis is usually carried out on the basis of a Monte Carlo simulation sampling the epistemic variables and performing the corresponding model runs. In situations, however, where aleatory uncertainties are also present in the model, an adequate treatment of both types of uncertainties would require a two-stage nested Monte Carlo simulation, i.e. sampling the epistemic variables (‘outer loop’) and nested sampling of the aleatory variables (‘inner loop’). It is clear that for complex and long-running codes the computational effort to perform all the resulting model runs may be prohibitive. Therefore, an approach to approximate epistemic uncertainty analysis is suggested which is based solely on two simple Monte Carlo samples: (a) joint sampling of both epistemic and aleatory variables simultaneously, and (b) sampling of the aleatory variables alone with the epistemic variables held fixed at their reference values. The applications of this approach to dynamic reliability analyses presented in this paper look quite promising and suggest that performing such an approximate epistemic uncertainty analysis is preferable to the alternative of not performing any.
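A minimal sketch of the two-sample idea, assuming a toy model and an invented epistemic range; comparing the two output variances, as done at the end, is only one plausible way to exploit the two samples and is not necessarily the estimator used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def model(theta, x):
    """Toy model: theta = epistemic parameter, x = aleatory variables."""
    return theta * x[..., 0] + np.sqrt(x[..., 1])

N = 20_000
theta_ref = 1.0                                   # reference (best-estimate) value

# Sample (a): epistemic and aleatory variables varied jointly.
theta_a = rng.uniform(0.8, 1.2, N)                # hypothetical epistemic range
x_a = np.column_stack([rng.normal(2.0, 0.5, N), rng.exponential(1.0, N)])
y_a = model(theta_a, x_a)

# Sample (b): aleatory variables only, epistemic parameter fixed at reference.
x_b = np.column_stack([rng.normal(2.0, 0.5, N), rng.exponential(1.0, N)])
y_b = model(theta_ref, x_b)

# Comparing the two output variances indicates how much of the total spread
# is contributed by the epistemic parameter.
print(f"total variance (a)     : {y_a.var():.4f}")
print(f"aleatory-only variance : {y_b.var():.4f}")
print(f"epistemic share (rough): {1 - y_b.var() / y_a.var():.2%}")
```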

4.
The paper describes an approach to representing, aggregating and propagating aleatory and epistemic uncertainty through computational models. The framework for the approach employs the theory of imprecise coherent probabilities. The approach is exemplified by a simple algebraic system, the inputs of which are uncertain. Six different uncertainty situations are considered, including mixtures of epistemic and aleatory uncertainty.

5.
The problem of accounting for epistemic uncertainty in risk management decisions is conceptually straightforward, but is riddled with practical difficulties. Simple approximations are often used whereby future variations in epistemic uncertainty are ignored or worst-case scenarios are postulated. These strategies tend to produce sub-optimal decisions. We develop a general framework based on Bayesian decision theory and exemplify it for the case of seismic design of buildings. When temporal fluctuations of the epistemic uncertainties and regulatory safety constraints are included, the optimal level of seismic protection exceeds the normative level at the time of construction. Optimal Bayesian decisions do not depend on the aleatory or epistemic nature of the uncertainties, but only on the total (epistemic plus aleatory) uncertainty and how that total uncertainty varies randomly during the lifetime of the project.
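As a toy version of the expected-cost trade-off this abstract describes (not the paper's seismic model, code constraints, or temporal fluctuations), the sketch below picks the design capacity that minimizes construction cost plus expected failure cost under the total (aleatory plus epistemic) uncertainty in demand; every number, distribution, and cost term is invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Total uncertainty in lifetime peak seismic demand: aleatory variability plus
# a hypothetical epistemic shift in the median (both lognormal, invented numbers).
N = 100_000
epistemic_median = rng.lognormal(np.log(0.3), 0.2, N)      # median demand, g
demand = epistemic_median * rng.lognormal(0.0, 0.4, N)     # realized demand, g

def expected_total_cost(design_capacity):
    """Construction cost grows with capacity; a large failure cost is incurred
    when demand exceeds capacity (hypothetical cost model)."""
    construction = 1.0 + 2.0 * design_capacity
    p_fail = np.mean(demand > design_capacity)
    return construction + 50.0 * p_fail

capacities = np.linspace(0.2, 1.5, 60)
costs = [expected_total_cost(c) for c in capacities]
best = capacities[int(np.argmin(costs))]
print(f"cost-optimal design capacity ≈ {best:.2f} g")
```

Note that only the total distribution of `demand` enters the decision, consistent with the abstract's point that the aleatory/epistemic split does not affect the optimal Bayesian choice.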

6.
The risk assessment community has begun to make a clear distinction between aleatory and epistemic uncertainty in theory and in practice. Aleatory uncertainty is also referred to in the literature as variability, irreducible uncertainty, inherent uncertainty, and stochastic uncertainty. Epistemic uncertainty is also termed reducible uncertainty, subjective uncertainty, and state-of-knowledge uncertainty. Methods to efficiently represent, aggregate, and propagate different types of uncertainty through computational models are clearly of vital importance. The most widely known and developed methods are available within the mathematics of probability theory, whether frequentist or subjectivist. Newer mathematical approaches, which extend or otherwise depart from probability theory, are also available, and are sometimes referred to as generalized information theory (GIT). For example, possibility theory, fuzzy set theory, and evidence theory are three components of GIT. To try to develop a better understanding of the relative advantages and disadvantages of traditional and newer methods and encourage a dialog between the risk assessment, reliability engineering, and GIT communities, a workshop was held. To focus discussion and debate at the workshop, a set of prototype problems, generally referred to as challenge problems, was constructed. The challenge problems concentrate on the representation, aggregation, and propagation of epistemic uncertainty and mixtures of epistemic and aleatory uncertainty through two simple model systems. This paper describes the challenge problems and gives numerical values for the different input parameters so that results from different investigators can be directly compared.
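For readers unfamiliar with the evidence-theory component of GIT mentioned here, the short sketch below computes belief and plausibility of an interval event from a hypothetical body of evidence; the focal elements, masses, and event are invented and unrelated to the challenge problems' actual inputs.

```python
# Dempster–Shafer evidence theory on interval focal elements (hypothetical masses).
focal_elements = [((1.0, 3.0), 0.4),   # (interval, basic probability assignment)
                  ((2.0, 5.0), 0.5),
                  ((4.0, 6.0), 0.1)]

def belief(event):
    """Sum of masses of focal elements entirely contained in the event interval."""
    lo, hi = event
    return sum(m for (a, b), m in focal_elements if lo <= a and b <= hi)

def plausibility(event):
    """Sum of masses of focal elements that intersect the event interval."""
    lo, hi = event
    return sum(m for (a, b), m in focal_elements if a <= hi and b >= lo)

event = (3.5, 7.0)   # hypothetical event: "the quantity lies in [3.5, 7.0]"
print(f"Bel = {belief(event):.2f}, Pl = {plausibility(event):.2f}")
```

The gap between Bel and Pl is the imprecision carried by the epistemic evidence; a single probability measure would collapse it to one number.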

7.
The ‘Epistemic Uncertainty Workshop’ sponsored by Sandia National Laboratories was held in Albuquerque, New Mexico, on 6–7 August 2002. The workshop was organized around a set of Challenge Problems involving both epistemic and aleatory uncertainty that the workshop participants were invited to solve and discuss. This concluding article in a special issue of Reliability Engineering and System Safety based on the workshop discusses the intent of the Challenge Problems, summarizes some discussions from the workshop, and provides a technical comparison among the papers in this special issue. The Challenge Problems were computationally simple models that were intended as vehicles for the illustration and comparison of conceptual and numerical techniques for use in analyses that involve: (i) epistemic uncertainty, (ii) aggregation of multiple characterizations of epistemic uncertainty, (iii) combination of epistemic and aleatory uncertainty, and (iv) models with repeated parameters. There was considerable diversity of opinion at the workshop about both methods and fundamental issues, and yet substantial consensus about what the answers to the problems were, and even about how each of the four issues should be addressed. Among the technical approaches advanced were probability theory, Dempster–Shafer evidence theory, random sets, sets of probability measures, imprecise coherent probabilities, coherent lower previsions, probability boxes, possibility theory, fuzzy sets, joint distribution tableaux, polynomial chaos expansions, and info-gap models. Although some participants maintained that a purely probabilistic approach is fully capable of accounting for all forms of uncertainty, most agreed that the treatment of epistemic uncertainty introduces important considerations and that the issues underlying the Challenge Problems are legitimate and significant. Topics identified as meriting additional research include elicitation of uncertainty representations, aggregation of multiple uncertainty representations, dependence and independence, model uncertainty, solution of black-box problems, efficient sampling strategies for computation, and communication of analysis results.

8.
In 2001, the National Nuclear Security Administration of the U.S. Department of Energy in conjunction with the national security laboratories (i.e., Los Alamos National Laboratory, Lawrence Livermore National Laboratory and Sandia National Laboratories) initiated development of a process designated Quantification of Margins and Uncertainties (QMU) for the use of risk assessment methodologies in the certification of the reliability and safety of the nation's nuclear weapons stockpile. This presentation discusses and illustrates the conceptual and computational basis of QMU in analyses that use computational models to predict the behavior of complex systems. The following topics are considered: (i) the role of aleatory and epistemic uncertainty in QMU, (ii) the representation of uncertainty with probability, (iii) the probabilistic representation of uncertainty in QMU analyses involving only epistemic uncertainty, and (iv) the probabilistic representation of uncertainty in QMU analyses involving aleatory and epistemic uncertainty.

9.
In 2001, the National Nuclear Security Administration of the U.S. Department of Energy in conjunction with the national security laboratories (i.e., Los Alamos National Laboratory, Lawrence Livermore National Laboratory and Sandia National Laboratories) initiated development of a process designated Quantification of Margins and Uncertainties (QMU) for the use of risk assessment methodologies in the certification of the reliability and safety of the nation's nuclear weapons stockpile. A previous presentation, “Quantification of Margins and Uncertainties: Conceptual and Computational Basis,” describes the basic ideas that underlie QMU and illustrates these ideas with two notional examples that employ probability for the representation of aleatory and epistemic uncertainty. The current presentation introduces and illustrates the use of interval analysis, possibility theory and evidence theory as alternatives to the use of probability theory for the representation of epistemic uncertainty in QMU-type analyses. The following topics are considered: the mathematical structure of alternative representations of uncertainty, alternative representations of epistemic uncertainty in QMU analyses involving only epistemic uncertainty, and alternative representations of epistemic uncertainty in QMU analyses involving a separation of aleatory and epistemic uncertainty. Analyses involving interval analysis, possibility theory and evidence theory are illustrated with the same two notional examples used in the presentation indicated above to illustrate the use of probability to represent aleatory and epistemic uncertainty in QMU analyses.
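A bare-bones illustration of the interval-analysis alternative mentioned above: epistemic inputs are carried as intervals and a toy margin function is bounded by evaluating it at the corner combinations (adequate here because the function is monotone in each input); the margin function and its bounds are hypothetical, not the paper's notional QMU examples.

```python
import itertools

# Hypothetical epistemic intervals for a margin M = capacity - demand.
capacity_bounds = (9.0, 12.0)
demand_bounds   = (6.0, 10.0)

def margin(capacity, demand):
    return capacity - demand

# For a general (non-monotone) function one would have to optimize over the box;
# here taking the extremes over the corner combinations is enough.
corners = [margin(c, d) for c, d in itertools.product(capacity_bounds, demand_bounds)]
print(f"margin interval: [{min(corners):.1f}, {max(corners):.1f}]")
print("a negative lower bound means failure cannot be ruled out "
      "given the stated epistemic intervals")
```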

10.
This paper focuses on sensitivity analysis of results from computer models in which both epistemic and aleatory uncertainties are present. Sensitivity is defined in the sense of “uncertainty importance” in order to identify and to rank the principal sources of epistemic uncertainty. A natural and consistent way to arrive at sensitivity results in such cases would be a two-dimensional or double-loop nested Monte Carlo sampling strategy in which the epistemic parameters are sampled in the outer loop and the aleatory variables are sampled in the nested inner loop. However, the computational effort of this procedure may be prohibitive for complex and time-demanding codes. This paper therefore suggests an approximate method for sensitivity analysis based on particular one-dimensional or single-loop sampling procedures, which require substantially less computational effort. From the results of such sampling one can obtain approximate estimates of several standard uncertainty importance measures for the aleatory probability distributions and related probabilistic quantities of the model outcomes of interest. The reliability of the approximate sensitivity results depends on the effect of all epistemic uncertainties on the total joint epistemic and aleatory uncertainty of the outcome. The magnitude of this effect can be expressed quantitatively and estimated from the same single-loop samples. The higher it is, the more accurate the approximate sensitivity results will be. A case study, which shows that the results from the proposed approximate method are comparable to those obtained with the full two-dimensional approach, is provided.

11.
This paper develops a novel computational framework to compute the Sobol indices that quantify the relative contributions of various uncertainty sources towards the system response prediction uncertainty. In the presence of both aleatory and epistemic uncertainty, two challenges are addressed in this paper for the model-based computation of the Sobol indices: due to data uncertainty, input distributions are not precisely known; and due to model uncertainty, the model output is uncertain even for a fixed realization of the input. An auxiliary variable method based on the probability integral transform is introduced to distinguish and represent each uncertainty source explicitly, whether aleatory or epistemic. The auxiliary variables facilitate building a deterministic relationship between the uncertainty sources and the output, which is needed in the Sobol indices computation. The proposed framework is developed for two types of model inputs: random variable input and time series input. A Bayesian autoregressive moving average (ARMA) approach is chosen to model the time series input due to its capability to represent both natural variability and epistemic uncertainty due to limited data. A novel controlled-seed computational technique based on pseudo-random number generation is proposed to efficiently represent the natural variability in the time series input. This controlled-seed method significantly accelerates the Sobol indices computation under time series input, and makes it computationally affordable.
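To make the Sobol indices themselves concrete (the paper's auxiliary-variable and controlled-seed machinery is not reproduced here), the sketch below estimates first-order indices for a toy function with the standard pick-freeze/Saltelli sampling scheme; the model and its standard-normal inputs are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

def model(x):
    """Toy model with three independent standard-normal inputs."""
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

N, d = 100_000, 3
A = rng.normal(size=(N, d))
B = rng.normal(size=(N, d))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]          # replace column i of A with that of B
    yABi = model(ABi)
    # First-order Sobol index, Saltelli-type estimator.
    S_i = np.mean(yB * (yABi - yA)) / var_y
    print(f"S_{i + 1} ≈ {S_i:.3f}")
```

Here the second input dominates and the third has a negligible first-order index, which is the kind of ranking the "uncertainty importance" viewpoint is after.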

12.
Two approaches to the calculation of probability of loss of assured safety (PLOAS) in temperature dependent weak link/strong link systems are described and compared on the basis of three test problems. The approaches differ in that the first approach permits a separation of epistemic and aleatory uncertainty in the calculation of PLOAS and the second approach combines epistemic and aleatory uncertainty before the calculation of PLOAS. A discrepancy in the results obtained with the test problems led to the identification of an implementation error for one of the approaches. The importance and efficacy of well-designed verification test problems are demonstrated.

13.
Epistemic and aleatory uncertain variables always exist simultaneously in multidisciplinary systems and can be modeled by evidence theory and probability theory, respectively. The propagation of uncertainty through coupled subsystems and the strong nonlinearity of the multidisciplinary system make reliability analysis difficult and computationally expensive. In this paper, a novel reliability analysis procedure is proposed for multidisciplinary systems with epistemic and aleatory uncertain variables. First, the probability density function of the aleatory variables is assumed to be piecewise uniform based on the Bayes method, and an approximate most probable point is found by an equivalent normalization method. Then, importance sampling is used to calculate the failure probability together with its variance and coefficient of variation. The effectiveness of the procedure is demonstrated by two numerical examples. Copyright © 2013 John Wiley & Sons, Ltd.
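As a generic illustration of the importance-sampling step mentioned in this abstract (not the paper's multidisciplinary procedure, piecewise-uniform Bayes construction, or equivalent normalization), the sketch below shifts the sampling density to an assumed most probable point for a simple limit state in standard normal space.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def limit_state(u):
    """Failure when g <= 0; toy limit state in standard normal space."""
    return 3.0 - (u[:, 0] + u[:, 1]) / np.sqrt(2.0)

# Assume an (approximate) most probable point has been located beforehand.
mpp = np.array([3.0 / np.sqrt(2.0), 3.0 / np.sqrt(2.0)])

N = 20_000
u = rng.normal(size=(N, 2)) + mpp                  # sample around the MPP
indicator = limit_state(u) <= 0.0

# Importance-sampling weights: original density / sampling density.
w = (stats.multivariate_normal(mean=[0.0, 0.0]).pdf(u)
     / stats.multivariate_normal(mean=mpp).pdf(u))
pf = np.mean(indicator * w)
cov = np.std(indicator * w) / (pf * np.sqrt(N))    # coefficient of variation
print(f"P_f ≈ {pf:.3e} (exact {stats.norm.cdf(-3.0):.3e}), c.o.v. ≈ {cov:.3f}")
```

Centering the sampling density on the most probable point concentrates samples in the failure region, which is why the estimator reaches a small coefficient of variation with relatively few model evaluations.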

14.
Key ideas underlying the application of Quantification of Margins and Uncertainties (QMU) to nuclear weapons stockpile lifecycle decisions are described. While QMU is a broad process and methodology for generating critical technical information to be used in U.S. nuclear weapon stockpile management, this paper emphasizes one component, which is information produced by computational modeling and simulation. In particular, the following topics are discussed: (i) the key principles of developing QMU information in the form of Best Estimate Plus Uncertainty, (ii) the need to separate aleatory and epistemic uncertainty in QMU, and (iii) the properties of risk-informed decision making (RIDM) that are best suited for effective application of QMU. The paper is written at a high level, but provides an extensive bibliography of useful papers for interested readers to deepen their understanding of the presented ideas.

15.
We present stochastic projection schemes for approximating the solution of a class of deterministic linear elliptic partial differential equations defined on random domains. The key idea is to carry out spatial discretization using a combination of finite element methods and stochastic mesh representations. We prove a result to establish the conditions that the input uncertainty model must satisfy to ensure the validity of the stochastic mesh representation and hence the well posedness of the problem. Finite element spatial discretization of the governing equations using a stochastic mesh representation results in a linear random algebraic system of equations in a polynomial chaos basis whose coefficients of expansion can be non-intrusively computed either at the element or the global level. The resulting randomly parametrized algebraic equations are solved using stochastic projection schemes to approximate the response statistics. The proposed approach is demonstrated for modeling diffusion in a square domain with a rough wall and heat transfer analysis of a three-dimensional gas turbine blade model with uncertainty in the cooling core geometry. The numerical results are compared against Monte Carlo simulations, and it is shown that the proposed approach provides high-quality approximations for the first two statistical moments at modest computational effort. Copyright © 2010 John Wiley & Sons, Ltd.
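The following one-dimensional toy, far simpler than the random-domain finite-element setting of the paper, shows how polynomial chaos coefficients can be computed non-intrusively by Gauss–Hermite projection; the response function is invented.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

def response(xi):
    """Hypothetical scalar response of a standard-normal germ xi."""
    return np.exp(0.3 * xi) + 0.1 * xi ** 2

# Probabilists' Gauss–Hermite quadrature (weight exp(-x^2/2)); rescale the
# weights so they integrate the standard normal density to one.
nodes, weights = He.hermegauss(20)
weights = weights / np.sqrt(2.0 * np.pi)

order = 4
coeffs = []
for k in range(order + 1):
    Hek = He.hermeval(nodes, [0] * k + [1])          # He_k evaluated at the nodes
    # c_k = E[response(xi) He_k(xi)] / E[He_k(xi)^2], with E[He_k^2] = k!
    coeffs.append(np.sum(weights * response(nodes) * Hek) / factorial(k))

print("PC coefficients:", np.round(coeffs, 4))

# Sanity check of the two leading statistical moments against Monte Carlo.
rng = np.random.default_rng(6)
mc = response(rng.normal(size=200_000))
var_pce = sum(c ** 2 * factorial(k) for k, c in enumerate(coeffs) if k > 0)
print(f"mean  PCE {coeffs[0]:.4f} vs MC {mc.mean():.4f}")
print(f"var   PCE {var_pce:.4f} vs MC {mc.var():.4f}")
```

The projection uses only evaluations of the response at quadrature nodes, which is what "non-intrusive" means in this context: the solver (here a plain function) is treated as a black box.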

16.
Interval reliability analysis of the "strong shear–weak flexure" requirement for reinforced concrete columns
易伟建  李浩 《工程力学》2007,24(9):72-79
In the seismic design of reinforced concrete structures, "strong shear–weak flexure" is an important design concept for ensuring structural ductility. Interval variables are introduced to represent epistemic uncertainty, and an interval analysis of the failure probability of reinforced concrete frame columns is carried out. The mathematical description of the uncertainties is completed by combining interval variables representing epistemic uncertainty with random variables representing aleatory uncertainty. On this basis, an interval reliability model of "strong shear–weak flexure" is established according to the inclusion relations of the basic events, and it is shown from the standpoint of evidence theory that the lower and upper bounds of the failure probability interval are essentially equivalent to the belief and plausibility functions of evidence theory. For the calculation of structural resistance with interval-valued uncertain parameters, the Berz–Taylor model is introduced into the computation to reduce the error caused by interval expansion. In the numerical simulation, a simulated annealing genetic algorithm (SAGA) is used to determine the approximate design region of "strong shear–weak flexure". Based on this design region, a special sampling function is constructed for importance sampling simulation, from which the failure probability interval is obtained. Error analysis shows that the method has good accuracy. Finally, the influence of the various design factors on the "strong shear–weak flexure" reliability is analyzed through examples, and corresponding design suggestions are given.
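A much-simplified sketch of the random-plus-interval setting this abstract describes (no Berz–Taylor arithmetic, no SAGA search, and a toy limit state instead of the shear/flexure model): for each value of the interval variable the failure probability is estimated by Monte Carlo, and the envelope over the interval gives the failure-probability bounds, which evidence theory would read as belief and plausibility of failure.

```python
import numpy as np

rng = np.random.default_rng(7)

def g(resistance, load, theta):
    """Toy limit state: failure when g < 0; theta is an epistemic interval
    variable (e.g., an imprecisely known model-uncertainty factor)."""
    return theta * resistance - load

N = 200_000
resistance = rng.normal(30.0, 3.0, N)     # aleatory random variables (hypothetical)
load       = rng.gumbel(20.0, 2.0, N)

theta_interval = (0.9, 1.1)               # hypothetical epistemic interval
pf = []
for theta in np.linspace(*theta_interval, 21):   # crude scan of the interval
    pf.append(np.mean(g(resistance, load, theta) < 0.0))

print(f"failure probability interval ≈ [{min(pf):.4f}, {max(pf):.4f}]")
```

The crude grid scan stands in for the optimization over the interval variables that a real analysis (such as the SAGA search in the paper) would perform.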

17.
The traditional reliability analysis method based on probabilistic methods requires the probability distributions of all the uncertain parameters. However, in practical applications, the distributions of some parameters may not be precisely known due to the lack of sufficient sample data. Probabilistic theory cannot directly measure the reliability of structures with epistemic uncertainty, i.e., subjective randomness and fuzziness. Hence, a hybrid reliability analysis (HRA) problem arises when aleatory and epistemic uncertainties coexist in a structure. In this paper, by combining probability theory and uncertainty theory into a chance theory, a probability-uncertainty hybrid model is established, and a new quantification method based on uncertain random variables for structural reliability is presented in order to simultaneously satisfy the duality of random variables and the subadditivity of uncertain variables; then, a reliability index is explored based on the chance expected value and variance. Besides, the formulas of the chance theory-based reliability and reliability index are derived to uniformly assess the reliability of structures under hybrid aleatory and epistemic uncertainties. The numerical experiments illustrate the validity of the proposed method, and its results provide a more accurate assessment of the structural system under mixed uncertainties than the ones obtained separately from probability theory and uncertainty theory.

18.
Accelerated life testing (ALT) design is usually performed based on assumptions of life distributions, stress-life relationships, and empirical reliability models. Time-dependent reliability analysis, on the other hand, seeks to predict product and system life distributions based on physics-informed simulation models. This paper proposes an ALT design framework that takes advantage of both types of analyses. For a given testing plan, the corresponding life distributions under different stress levels are estimated based on time-dependent reliability analysis. Because both aleatory and epistemic uncertainty sources are involved in the reliability analysis, ALT data is used in this paper to update the epistemic uncertainty using Bayesian statistics. The variance of the reliability estimate at the nominal stress level is then estimated based on the updated time-dependent reliability analysis model. A design optimization model is formulated to minimize the overall expected testing cost with a constraint on the confidence of the variance of the reliability estimate. Computational effort for solving the optimization model is reduced in three directions: (i) an efficient time-dependent reliability analysis method is used; (ii) a surrogate model is constructed for time-dependent reliability under different stress levels; and (iii) the ALT design optimization model is decoupled into a deterministic design optimization model and a probabilistic analysis model. A cantilever beam and a helicopter rotor hub are used to demonstrate the proposed method. The results show the effectiveness of the proposed ALT design optimization model. Copyright © 2015 John Wiley & Sons, Ltd.
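To illustrate just the Bayesian-updating ingredient of this framework (not the time-dependent reliability model, the surrogate, or the cost optimization), the sketch below updates an imprecisely known Weibull scale parameter with hypothetical test failure times using a simple grid posterior, then averages a reliability prediction over the remaining epistemic uncertainty.

```python
import numpy as np

# Hypothetical accelerated-test failure times (hours) at one stress level.
failure_times = np.array([120.0, 180.0, 95.0, 210.0, 150.0])

# Epistemic uncertainty: the Weibull scale eta is imprecisely known; the shape
# is taken as fixed for simplicity.
shape = 2.0
eta_grid = np.linspace(50.0, 400.0, 701)
prior = np.ones_like(eta_grid)                       # flat prior over the grid

def weibull_loglik(eta, t, k):
    """Log-likelihood of Weibull(shape=k, scale=eta) for observed times t."""
    return np.sum(np.log(k / eta) + (k - 1) * np.log(t / eta) - (t / eta) ** k)

log_post = np.array([weibull_loglik(eta, failure_times, shape) for eta in eta_grid])
log_post += np.log(prior)
post = np.exp(log_post - log_post.max())
post /= post.sum()                                   # discrete posterior weights

mean_eta = np.sum(eta_grid * post)
# Reliability at 100 h, averaged over the updated epistemic uncertainty.
rel_100 = np.sum(np.exp(-(100.0 / eta_grid) ** shape) * post)
print(f"posterior mean of eta ≈ {mean_eta:.1f} h, R(100 h) ≈ {rel_100:.3f}")
```

In the paper's framework this kind of update would feed back into the time-dependent reliability model before the testing-cost optimization is re-evaluated.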

19.
20.
Optimization leads to specialized structures which are not robust to disturbance events like unanticipated abnormal loading or human errors. Typical reliability-based and robust optimization mainly address objective aleatory uncertainties. To date, the impact of subjective epistemic uncertainties in optimal design has not been comprehensively investigated. In this paper, we use an independent parameter to investigate the effects of epistemic uncertainties in optimal design: the latent failure probability. Reliability-based and risk-based truss topology optimization are addressed. It is shown that optimal risk-based designs can be divided into three groups: (A) when epistemic uncertainty is small (in comparison to aleatory uncertainty), the optimal design is indifferent to it and yields isostatic structures; (B) when aleatory and epistemic uncertainties are relevant, optimal design is controlled by epistemic uncertainty and yields hyperstatic but nonredundant structures, for which expected costs of direct collapse are controlled; (C) when epistemic uncertainty becomes too large, the optimal design becomes redundant, as a way to control increasing expected costs of collapse. The three regions above are divided by hyperstatic and redundancy thresholds. The redundancy threshold is the point where the structure needs to become redundant so that its reliability becomes larger than the latent reliability of the simplest isostatic system. Simple truss topology optimization is considered herein, but the conclusions have immediate relevance to the optimal design of realistic structures subject to aleatory and epistemic uncertainties.
