Similar Documents
20 similar documents found (search time: 15 ms)
1.
Residual‐based control charts for autocorrelated processes are known to be sensitive to time series modeling errors, which can seriously inflate the false alarm rate. This paper presents a design approach for a residual‐based exponentially weighted moving average (EWMA) chart that mitigates this problem by modifying the control limits based on the level of model uncertainty. Using a Bayesian analysis, we derive the approximate expected variance of the EWMA statistic, where the expectation is with respect to the posterior distribution of the unknown model parameters. The result is a relatively clean expression for the expected variance as a function of the estimated parameters and their covariance matrix. We use control limits proportional to the square root of the expected variance. We compare our approach to two other approaches for designing robust residual‐based EWMA charts and argue that our approach generally results in a more appropriate widening of the control limits. Copyright © 2010 John Wiley & Sons, Ltd.
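To make the construction concrete, here is a minimal sketch of a residual-based EWMA chart whose limits are widened by a model-uncertainty factor. The `inflation` parameter is a hypothetical stand-in for the paper's posterior expected-variance ratio; the chart recursion and the time-varying EWMA variance formula are standard.

```python
import numpy as np

def ewma_chart(residuals, lam=0.1, L=3.0, inflation=1.0):
    """Residual-based EWMA chart. `inflation` is a hypothetical stand-in
    for the ratio of the posterior expected variance of the EWMA statistic
    to its nominal variance; the paper derives this from the estimated
    parameters and their covariance matrix."""
    sigma = np.std(residuals, ddof=1)
    z, out = 0.0, []
    for t, e in enumerate(residuals, start=1):
        z = lam * e + (1 - lam) * z
        # exact time-varying EWMA variance, widened by the inflation factor
        var_z = sigma**2 * lam / (2 - lam) * (1 - (1 - lam) ** (2 * t))
        limit = L * np.sqrt(inflation * var_z)
        out.append((z, limit, abs(z) > limit))
    return out

rng = np.random.default_rng(0)
res = rng.normal(size=500)  # stand-in for one-step-ahead model residuals
alarms = [t for t, (_, _, sig) in enumerate(ewma_chart(res, inflation=1.3)) if sig]
print("alarm times:", alarms)
```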

2.
Ricardo Cao, TEST, 1999, 8(1): 95-116
In this paper an overview of the existing literature on bootstrapping for estimation and prediction in time series is presented. Some of the methods are detailed, organized according to the aim they are designed for (estimation or prediction) and according to whether or not some parametric structure is assumed for the dependence. Finally, a new bootstrap (kernel-based) method is presented for prediction when no parametric assumption is made for the dependence. This research was partially supported by the “Dirección General de Investigación Científica y Técnica” Grants PB94-0494 and PB95-0826 and by the “Xunta de Galicia” Grant XUGA 10501B97.
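As a hedged illustration of the kernel-based idea, the sketch below resamples one-step-ahead values from historical successors, weighted by a Gaussian kernel on the current state. It is a toy stand-in, not Cao's actual algorithm; the bandwidth `h` and the first-order Markov assumption are ours.

```python
import numpy as np

def kernel_bootstrap_predict(x, h=0.5, B=500, rng=None):
    """Toy one-step-ahead bootstrap prediction for a Markov-type series:
    resample among historical successors, weighted by a Gaussian kernel on
    the distance between each past state and the current value."""
    rng = rng or np.random.default_rng()
    past, succ = x[:-1], x[1:]
    w = np.exp(-0.5 * ((past - x[-1]) / h) ** 2)
    w /= w.sum()
    draws = rng.choice(succ, size=B, p=w)
    return draws.mean(), np.percentile(draws, [5, 95])

rng = np.random.default_rng(1)
x = np.zeros(300)
for t in range(1, 300):  # AR(1) series, purely for illustration
    x[t] = 0.7 * x[t - 1] + rng.normal()
point, interval = kernel_bootstrap_predict(x, rng=rng)
print(point, interval)
```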

3.
A probabilistic approach for representation of interval uncertainty (total citations: 1; self-citations: 0; citations by others: 1)
In this paper, we propose a probabilistic approach to represent interval data for input variables in reliability and uncertainty analysis problems, using flexible families of continuous Johnson distributions. Such a probabilistic representation of interval data facilitates a unified framework for handling aleatory and epistemic uncertainty. For fitting probability distributions, methods such as moment matching are commonly used in the literature. However, unlike point data, where single estimates of the moments can be calculated, moments of interval data can only be computed in terms of upper and lower bounds. Finding bounds on the moments of interval data has generally been considered an NP-hard problem, because it involves a search among combinations of multiple values of the variables, including the interval endpoints. In this paper, we present efficient algorithms based on continuous optimization to find the bounds on second and higher moments of interval data. With numerical examples, we show that the proposed bounding algorithms scale polynomially with the number of intervals. Using the bounds on moments computed with the proposed approach, we fit a family of Johnson distributions to interval data. Furthermore, using an optimization approach based on percentiles, we find the bounding envelopes of the family of distributions, termed a Johnson p-box. The idea of bounding envelopes for the family of Johnson distributions is analogous to the notion of an empirical p-box in the literature. Several sets of interval data with different numbers of intervals and types of overlap are presented to demonstrate the proposed methods. In contrast to the computationally expensive nested analysis typically required in the presence of interval variables, the proposed probabilistic representation enables inexpensive optimization-based strategies to estimate bounds on an output quantity of interest.
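The flavor of the moment-bounding step can be sketched with off-the-shelf continuous optimization. Assuming only the interval endpoints, the toy below bounds the sample variance over the box of admissible values; the multi-start for the maximum is a heuristic (the true maximum lies at a vertex of the box), and the paper's own algorithms are more refined.

```python
import numpy as np
from scipy.optimize import minimize

def variance_bounds(lo, hi):
    """Bound the sample variance of interval data [lo_i, hi_i] by continuous
    optimization over the box of admissible values. The minimum is a convex
    problem; the maximum is attained at a vertex of the box, so the
    multi-start local search below is only a heuristic."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    box = list(zip(lo, hi))
    var = lambda x: np.var(x, ddof=1)
    mid = (lo + hi) / 2
    vmin = minimize(var, mid, bounds=box).fun
    vmax = max(-minimize(lambda x: -var(x), s, bounds=box).fun
               for s in (lo, hi, mid))
    return vmin, vmax

print(variance_bounds([0.0, 1.0, 2.5], [0.5, 2.0, 3.0]))
```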

4.
The risk assessment community has begun to make a clear distinction between aleatory and epistemic uncertainty in theory and in practice. Aleatory uncertainty is also referred to in the literature as variability, irreducible uncertainty, inherent uncertainty, and stochastic uncertainty. Epistemic uncertainty is also termed reducible uncertainty, subjective uncertainty, and state-of-knowledge uncertainty. Methods to efficiently represent, aggregate, and propagate different types of uncertainty through computational models are clearly of vital importance. The most widely known and developed methods are available within the mathematics of probability theory, whether frequentist or subjectivist. Newer mathematical approaches, which extend or otherwise depart from probability theory, are also available, and are sometimes referred to as generalized information theory (GIT). For example, possibility theory, fuzzy set theory, and evidence theory are three components of GIT. To try to develop a better understanding of the relative advantages and disadvantages of traditional and newer methods and encourage a dialog between the risk assessment, reliability engineering, and GIT communities, a workshop was held. To focus discussion and debate at the workshop, a set of prototype problems, generally referred to as challenge problems, was constructed. The challenge problems concentrate on the representation, aggregation, and propagation of epistemic uncertainty and mixtures of epistemic and aleatory uncertainty through two simple model systems. This paper describes the challenge problems and gives numerical values for the different input parameters so that results from different investigators can be directly compared.

5.
The paper describes an approach to representing, aggregating and propagating aleatory and epistemic uncertainty through computational models. The framework for the approach employs the theory of imprecise coherent probabilities. The approach is exemplified by a simple algebraic system, the inputs of which are uncertain. Six different uncertainty situations are considered, including mixtures of epistemic and aleatory uncertainty.

6.
Availability models of complex processes and phenomena inevitably involve simplifying assumptions and idealizations. These simplifications and idealizations generate uncertainties, which can be classified as aleatory (arising from randomness) and/or epistemic (due to lack of knowledge). Acknowledging and treating uncertainty is vital for the practical usability of reliability analysis results. Distinguishing the two kinds of uncertainty is useful for making reliability/risk-informed decisions with confidence and for effective management of uncertainty. In level-1 probabilistic safety assessment (PSA) of nuclear power plants (NPP), the current practice is to carry out epistemic uncertainty analysis using a simple Monte Carlo simulation that samples the epistemic variables in the model. The aleatory uncertainty, however, is neglected, and point estimates of the aleatory variables, viz. time to failure and time to repair, are used. Treating both types of uncertainty requires a two-phase Monte Carlo simulation, in which an outer loop samples the epistemic variables and an inner loop samples the aleatory variables. A methodology based on two-phase Monte Carlo simulation is presented for distinguishing both kinds of uncertainty in the context of availability/reliability evaluation in level-1 PSA studies of NPP.
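A minimal sketch of the two-phase (nested) Monte Carlo loop, assuming exponential failure and repair times and lognormal epistemic distributions for the MTTF/MTTR parameters (all illustrative choices, not the paper's case-study values):

```python
import numpy as np

rng = np.random.default_rng(2)

def availability(mttf, mttr, horizon=10_000.0):
    """Inner (aleatory) loop: one alternating failure/repair history with
    exponential time to failure and time to repair."""
    t, up = 0.0, 0.0
    while t < horizon:
        ttf = rng.exponential(mttf)
        up += min(ttf, horizon - t)
        t += ttf + rng.exponential(mttr)
    return up / horizon

# Outer (epistemic) loop: the MTTF/MTTR parameters themselves are uncertain;
# the lognormal epistemic distributions below are illustrative only.
n_epistemic, n_aleatory = 50, 20
mean_avail = []
for _ in range(n_epistemic):
    mttf = rng.lognormal(np.log(1000.0), 0.3)
    mttr = rng.lognormal(np.log(10.0), 0.3)
    inner = [availability(mttf, mttr) for _ in range(n_aleatory)]
    mean_avail.append(np.mean(inner))
print("5th-95th percentile of mean availability:",
      np.percentile(mean_avail, [5, 95]))
```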

7.
This paper is a first attempt to develop a numerical technique to analyze the sensitivity and the propagation of uncertainty through a system whose inputs are stochastic processes with independent increments. Similar to Sobol’ indices for random variables, a meta-model based on chaos expansions is used and is shown to be well suited to such problems. New global sensitivity indices are also introduced to address the specificity of stochastic processes. The accuracy and efficiency of the proposed method are demonstrated on an analytical example with three different input stochastic processes: a Wiener process, an Ornstein–Uhlenbeck process, and a Brownian bridge process. The considered output, which is a function of these three processes, is a non-Gaussian process. We then apply the same ideas to an example without a known analytical solution.
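The paper builds a chaos-expansion meta-model; as a cruder but self-contained illustration of a global sensitivity index for a process-valued input, the sketch below uses a Monte Carlo pick-freeze estimator on the same three processes. The output functional `f` and all discretization choices are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, T = 200, 1.0
dt = T / n

def wiener():
    return np.cumsum(rng.normal(0.0, np.sqrt(dt), n))

def ornstein_uhlenbeck(theta=2.0, sigma=1.0):
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = x[i - 1] - theta * x[i - 1] * dt + sigma * rng.normal(0.0, np.sqrt(dt))
    return x

def brownian_bridge():
    w = wiener()
    s = np.linspace(dt, T, n)
    return w - (s / T) * w[-1]

def f(w, o, b):  # toy functional of the three processes (non-Gaussian output)
    return float(np.sum(w * o + b**2) * dt)

# Pick-freeze estimate of the first-order index of the Wiener input:
# reuse the same Wiener path, redraw the other two processes.
N = 2000
Y, Y_frozen = np.empty(N), np.empty(N)
for k in range(N):
    w = wiener()
    Y[k] = f(w, ornstein_uhlenbeck(), brownian_bridge())
    Y_frozen[k] = f(w, ornstein_uhlenbeck(), brownian_bridge())
S_wiener = np.cov(Y, Y_frozen)[0, 1] / np.var(Y, ddof=1)
print("first-order sensitivity index of the Wiener input:", round(S_wiener, 3))
```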

8.
Thanks to its very simple recursive computing scheme, exponential smoothing has become a popular technique for forecasting time series. In this work, we show the advantages of its multivariate version and present some properties of the model, which allow us to perform a dynamic factor analysis. This analysis leads to a simple methodology to reduce the number of parameters (useful when the dimension of the observations is large) via a linear transformation that decomposes the multivariate process into independent univariate exponential smoothing processes, each characterized by a single smoothing parameter that ranges from zero (white-noise process) to one (random walk process). A computer implementation of the expectation-maximization (EM) algorithm has been built for maximum likelihood estimation of the models. The practicality of the method is demonstrated by its application to hourly electricity price prediction in several day-ahead markets, such as the Omel, Powernext, and Nord Pool markets, whose forecasts are given as examples. This article has supplementary material online.
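For reference, the univariate building block into which the multivariate process is decomposed is the plain exponential smoothing recursion; a minimal sketch (the EM estimation of the smoothing parameters is not shown):

```python
import numpy as np

def exp_smooth_forecast(y, lam):
    """Simple exponential smoothing: s_t = lam*y_t + (1-lam)*s_{t-1}.
    The one-step-ahead forecast is the last smoothed level; lam near 0
    behaves like white noise, lam near 1 like a random walk."""
    s = y[0]
    for yt in y[1:]:
        s = lam * yt + (1 - lam) * s
    return s

rng = np.random.default_rng(4)
y = np.cumsum(rng.normal(size=100)) * 0.1 + rng.normal(size=100)
print(exp_smooth_forecast(y, lam=0.3))
```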

9.
(Total citations: 1; self-citations: 0; citations by others: 1)
This article develops a general framework for identifying error and uncertainty in computational simulations that deal with the numerical solution of a set of partial differential equations (PDEs). A comprehensive, new view of the general phases of modeling and simulation is proposed, consisting of the following phases: conceptual modeling of the physical system, mathematical modeling of the conceptual model, discretization and algorithm selection for the mathematical model, computer programming of the discrete model, numerical solution of the computer program model, and representation of the numerical solution. Our view incorporates the modeling and simulation phases that are recognized in the systems engineering and operations research communities, but it adds phases that are specific to the numerical solution of PDEs. In each of these phases, general sources of uncertainty, both aleatory and epistemic, and error are identified. Our general framework is applicable to any numerical discretization procedure for solving ODEs or PDEs. To demonstrate this framework, we describe a system-level example: the flight of an unguided, rocket-boosted, aircraft-launched missile. This example is discussed in detail at each of the six phases of modeling and simulation. Two alternative models of the flight dynamics are considered, along with aleatory uncertainty of the initial mass of the missile and epistemic uncertainty in the thrust of the rocket motor. We also investigate the interaction of modeling uncertainties and numerical integration error in the solution of the ordinary differential equations for the flight dynamics.
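To illustrate the interaction of aleatory input uncertainty with numerical integration error, here is a deliberately tiny 1-D "rocket" sketch: the initial mass is sampled (aleatory), and the explicit-Euler step size is varied (numerical error). All physical numbers are invented, and the model is far simpler than the paper's missile example.

```python
import numpy as np

rng = np.random.default_rng(5)

def apogee(m0, thrust=2000.0, burn=5.0, dt=0.01, g=9.81, mdot=2.0):
    """Toy 1-D rocket: integrate v' = T/m - g with explicit Euler during
    the burn, then add the ballistic coast. All numbers are invented."""
    v, h, m, t = 0.0, 0.0, m0, 0.0
    while t < burn:
        v += (thrust / m - g) * dt
        h += v * dt
        m -= mdot * dt
        t += dt
    return h + v**2 / (2 * g)

masses = rng.normal(100.0, 2.0, 200)  # aleatory: uncertain initial mass
for dt in (0.1, 0.01):                # numerical: two integration step sizes
    a = [apogee(m, dt=dt) for m in masses]
    print(f"dt={dt}: mean apogee {np.mean(a):.1f} m, spread (std) {np.std(a):.1f} m")
```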

10.
Data used for monitoring and control of industrial processes are often best modeled as a time series. An important issue is to determine whether such time series are stationary. In this article we discuss the variogram, a graphical tool for assessing stationarity. We build on previous work and provide further details and more general results, including analytical structures of the variogram for various non-stationary processes, and we illustrate with a number of examples of variograms using standard data sets from the literature and simulated data sets. Copyright © 2009 John Wiley & Sons, Ltd.
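A minimal sketch of the sample variogram, standardized by the lag-1 difference variance as is common in the SPC literature; interpretation as in the article (flattening suggests stationarity, unbounded growth suggests non-stationarity):

```python
import numpy as np

def variogram(z, max_lag=20):
    """Sample standardized variogram G_m = Var(z_{t+m} - z_t) / Var(z_{t+1} - z_t).
    For a stationary series G_m levels off as m grows; for a random walk it
    keeps increasing roughly linearly with the lag."""
    z = np.asarray(z, float)
    d1 = np.var(np.diff(z), ddof=1)
    return [np.var(z[m:] - z[:-m], ddof=1) / d1 for m in range(1, max_lag + 1)]

rng = np.random.default_rng(6)
print(np.round(variogram(rng.normal(size=500))[:5], 2))             # stationary
print(np.round(variogram(np.cumsum(rng.normal(size=500)))[:5], 2))  # random walk
```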

11.
The challenge problems for the Epistemic Uncertainty Workshop at Sandia National Laboratories provide common ground for comparing different mathematical theories of uncertainty, referred to as General Information Theories (GITs). These problems also present the opportunity to discuss the use of expert knowledge as an important constituent of uncertainty quantification. More specifically, how do the principles and methods of eliciting and analyzing expert knowledge apply to these problems and similar ones encountered in complex technical problem solving and decision making? We will address this question, demonstrating how the elicitation issues and the knowledge that experts provide can be used to assess the uncertainty in outputs that emerge from a black box model or computational code represented by the challenge problems. In our experience, the rich collection of GITs provides an opportunity to capture the experts' knowledge and associated uncertainties consistent with their thinking, problem solving, and problem representation. The elicitation process is rightly treated as part of an overall analytical approach, and the information elicited is not simply a source of data. In this paper, we detail how the elicitation process itself impacts the analyst's ability to represent, aggregate, and propagate uncertainty, as well as how to interpret uncertainties in outputs. While this approach does not advocate a specific GIT, answers under uncertainty do result from the elicitation.

12.
In this paper, we present an application of sensitivity analysis to design verification of nuclear turbosets. Before acquiring a turbogenerator, power utilities perform an independent design assessment in order to ensure safe operating conditions of the new machine in its environment. The variables of interest relate to the vibration behaviour of the machine: its eigenfrequencies and its dynamic sensitivity to unbalance. In the framework of design verification, epistemic uncertainties are preponderant. This lack of knowledge is due to nonexistent or imprecise information about the design, as well as to the interaction of the rotating machinery with supporting structures and sub-structures. Sensitivity analysis enables the analyst to rank sources of uncertainty with respect to their importance and, possibly, to screen out insignificant ones. Further studies, if necessary, can then focus on the predominant parameters. In particular, the constructor can be asked for detailed information only about the most significant parameters.

13.
Uncertainty, probability and information-gaps (total citations: 1; self-citations: 0; citations by others: 1)
This paper discusses two main ideas. First, we focus on info-gap uncertainty, as distinct from probability. Info-gap theory is especially suited for modelling and managing uncertainty in system models: we invest all our knowledge in formulating the best possible model; this leaves the modeller with very faulty and fragmentary information about the variation of reality around that optimal model. Second, we examine the interdependence between uncertainty modelling and decision-making. Good uncertainty modelling requires contact with the end use, namely, with the decision-making application of the uncertainty model. The most important avenue of uncertainty propagation is from initial data and model uncertainties into uncertainty in the decision domain. Two questions arise. Is the decision robust to the initial uncertainties? Is the decision prone to opportune windfall success? We apply info-gap robustness and opportunity functions to the analysis of representation and propagation of uncertainty in several of the Sandia Challenge Problems.
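A hedged sketch of an info-gap robustness function for a toy fractional-error uncertainty model: the robustness of a decision is the largest uncertainty horizon h at which the worst-case reward still meets a critical requirement. The interval model, reward function, and grid search are all illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def robustness(q, u_nominal, r_crit, reward, h_grid):
    """Info-gap robustness of decision q: the largest uncertainty horizon h
    such that the worst-case reward over the fractional-error interval model
    U(h) = [u*(1-h), u*(1+h)] still meets the critical requirement r_crit."""
    h_hat = 0.0
    for h in h_grid:
        lo, hi = u_nominal * (1 - h), u_nominal * (1 + h)
        worst = min(reward(q, u) for u in np.linspace(lo, hi, 201))
        if worst >= r_crit:
            h_hat = h
        else:
            break
    return h_hat

reward = lambda q, u: 1.0 - abs(q - u)  # toy reward; require at least 0.5
print(robustness(1.0, u_nominal=1.0, r_crit=0.5, reward=reward,
                 h_grid=np.linspace(0.0, 1.0, 101)))
```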

14.
Parametric (or traditional) control charts are based on the assumption that the quality characteristic of interest follows a specific distribution. However, in many applications, there is a lack of knowledge about the underlying distribution. To this end, nonparametric (or distribution-free) control charts have been developed in recent years. In this article, a nonparametric double homogeneously weighted moving average (DHWMA) control chart based on the sign statistic is proposed for monitoring the location parameter of an unknown and continuous distribution. The performance of the proposed chart is measured through the run-length distribution and its associated characteristics by performing Monte Carlo simulations. The DHWMA sign chart is compared with other nonparametric sign charts, such as the homogeneously weighted moving average, generally weighted moving average (GWMA), double GWMA, and triple exponentially weighted moving average sign charts, as well as the traditional DHWMA chart. The results indicate that the proposed chart performs just as well as and in some cases better than its competitors, especially for small shifts. Finally, two examples are provided to show the application and implementation of the proposed chart.
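As a sketch of the building blocks, the code below computes the subgroup sign statistic and applies the homogeneously weighted moving average (HWMA) recursion to it twice; treating the double chart as a cascade of two HWMA passes is our simplifying assumption, and the published DHWMA definition and control limits differ in detail.

```python
import numpy as np

def sign_statistic(sample, target_median=0.0):
    """SN = (number of observations above the target) - (number below)."""
    return int(np.sum(np.sign(np.asarray(sample) - target_median)))

def hwma(stats, w=0.1, mean0=0.0):
    """Homogeneously weighted moving average:
    H_t = w*SN_t + (1-w)*mean(SN_1, ..., SN_{t-1}), with the in-control
    mean (0 for the sign statistic) used before any history exists."""
    out, hist = [], []
    for s in stats:
        prev = np.mean(hist) if hist else mean0
        out.append(w * s + (1 - w) * prev)
        hist.append(s)
    return out

rng = np.random.default_rng(7)
sn = [sign_statistic(rng.normal(0.3, 1.0, 10)) for _ in range(50)]  # shifted process
dhwma = hwma(hwma(sn))  # second pass: our assumed reading of the "double" chart
print(np.round(dhwma[:10], 3))
```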

15.
This paper compares Evidence Theory (ET) and Bayesian Theory (BT) for uncertainty modeling and decision under uncertainty, when the evidence about uncertainty is imprecise. The basic concepts of ET and BT are introduced and the ways these theories model uncertainties, propagate them through systems and assess the safety of these systems are presented. ET and BT approaches are demonstrated and compared on challenge problems involving an algebraic function whose input variables are uncertain. The evidence about the input variables consists of intervals provided by experts. It is recommended that a decision-maker compute both the Bayesian probabilities of the outcomes of alternative actions and their plausibility and belief measures when evidence about uncertainty is imprecise, because this helps assess the importance of imprecision and the value of additional information. Finally, the paper presents and demonstrates a method for testing approaches for decision under uncertainty in terms of their effectiveness in making decisions.
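The contrast between the two treatments of interval evidence can be sketched in a few lines: belief and plausibility follow from checking whether each expert interval maps entirely or partly into the event, while the Bayesian probability requires an extra distributional assumption (here, uniform within each interval). The model `f`, the intervals, and the equal masses are invented for illustration.

```python
import numpy as np

# Expert evidence: intervals for one uncertain input, equal BPA masses
intervals = [(1.0, 2.0), (1.5, 3.0), (2.5, 4.0)]
masses = [1 / 3, 1 / 3, 1 / 3]
f = lambda x: x**2            # toy algebraic model, increasing on these intervals
event = lambda y: y <= 6.0    # outcome of interest

# Evidence theory: since f is increasing here, the image of [a, b] is
# [f(a), f(b)], so endpoint checks decide inclusion and intersection.
bel = sum(m for (a, b), m in zip(intervals, masses) if event(f(b)))  # fully inside
pl = sum(m for (a, b), m in zip(intervals, masses) if event(f(a)))   # intersects

# Bayesian treatment: assume a uniform distribution within each interval
rng = np.random.default_rng(8)
x = np.concatenate([rng.uniform(a, b, 10_000) for a, b in intervals])
p = float(np.mean(event(f(x))))

print(f"belief={bel:.3f}  plausibility={pl:.3f}  Bayesian P={p:.3f}")
```

As the printout shows, the Bayesian probability falls between belief and plausibility, which is exactly the comparison the paper recommends the decision-maker make.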

16.
Nonparametric control charts are used in process monitoring when there is insufficient information about the form of the underlying distribution. In this article, we propose a triple exponentially weighted moving average (TEWMA) control chart based on the sign statistic for monitoring the location parameter of an unknown continuous distribution. The run-length characteristics of the proposed chart are evaluated by performing Monte Carlo simulations. We also compare its statistical performance with existing nonparametric sign charts, such as the cumulative sum (CUSUM), exponentially weighted moving average (EWMA), generally weighted moving average (GWMA), and double exponentially weighted moving average (DEWMA) sign charts, as well as the parametric TEWMA-X̄ chart. The results show that the TEWMA sign chart is superior to its competitors, especially for small shifts. Moreover, two examples are given to demonstrate the application of the new scheme.
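The triple EWMA statistic is simply the EWMA recursion applied three times in cascade; a minimal sketch on simulated sign statistics (subgroup size, shift, and smoothing constant are illustrative):

```python
import numpy as np

def tewma(stats, lam=0.1, z0=0.0):
    """Triple EWMA: the recursion z = lam*s + (1-lam)*z applied three times
    in cascade to the sequence of charting statistics."""
    z1 = z2 = z3 = z0
    out = []
    for s in stats:
        z1 = lam * s + (1 - lam) * z1
        z2 = lam * z1 + (1 - lam) * z2
        z3 = lam * z2 + (1 - lam) * z3
        out.append(z3)
    return out

rng = np.random.default_rng(9)
sn = [int(np.sum(np.sign(rng.normal(0.2, 1.0, 10)))) for _ in range(30)]  # sign stats
print(np.round(tewma(sn), 3)[:5])
```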

17.
Most control charts have been developed based on an assumed distribution for the quality characteristic of interest. However, in many applications, there is a lack of knowledge about the process distribution. Therefore, in recent years, nonparametric (or distribution-free) control charts have been introduced for monitoring the process location or scale parameter. In this article, a nonparametric double generally weighted moving average control chart based on the signed-rank statistic (referred to as the DGWMA-SR chart) is proposed for monitoring the location parameter. We provide an exact approach to compute the run-length distribution, and through an extensive simulation study, we compare the performance of the proposed chart with existing nonparametric charts, such as the exponentially weighted moving average signed-rank (EWMA-SR), the generally weighted moving average signed-rank (GWMA-SR), the double exponentially weighted moving average signed-rank (DEWMA-SR), and the double generally weighted moving average sign (DGWMA-SN) charts, as well as the parametric DGWMA-X̄ chart for subgroup averages. The simulation results show that the DGWMA-SR chart (with suitable parameters) is more sensitive than the other competing charts for small shifts in the location parameter and performs as well as the other nonparametric charts for larger shifts. Finally, two examples are given to illustrate the application of the proposed chart.
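For completeness, the charting statistic itself: a sketch of the Wilcoxon signed-rank statistic of a subgroup relative to the target median (tie handling omitted); the DGWMA weighting applied to it is not shown here.

```python
import numpy as np

def signed_rank(sample, target_median=0.0):
    """Wilcoxon signed-rank statistic of a subgroup against the target median:
    sum of sign(x_i - m0) times the rank of |x_i - m0| (ties ignored here)."""
    d = np.asarray(sample, float) - target_median
    ranks = np.argsort(np.argsort(np.abs(d))) + 1
    return int(np.sum(np.sign(d) * ranks))

print(signed_rank([0.2, -0.1, 0.5, 0.7, -0.3], 0.0))
```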

18.
An exact distribution function of a random phenomenon cannot be determined from a limited number of observations. Therefore, in the present paper the stochastic properties of a random variable are treated as uncertain quantities, and the maximum entropy distribution is used instead of predefined distribution types. Efficient methods for a reliability analysis that accounts for these uncertain stochastic parameters are presented. Based on approximation strategies, this extended analysis requires no additional limit state function evaluations. Finally, variance-based sensitivity measures are used to evaluate the contribution of the uncertainty in each stochastic parameter to the total variation of the failure probability.
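A toy version of the idea, assuming only the first two moments are constrained (in which case the maximum entropy distribution on the real line is the normal distribution) and a closed-form limit state; a bootstrap over the limited sample stands in for the uncertainty of the estimated moments. The paper's approximation strategies are more general.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(10)
data = rng.normal(10.0, 2.0, 30)  # limited observations of a load variable
capacity = 16.0                   # failure if the load exceeds this value

# With only mean and variance constrained, the maximum entropy distribution
# on the real line is the normal, giving a closed-form failure probability.
# Bootstrapping the small sample propagates the uncertainty of the estimated
# moments into the failure probability without extra limit-state evaluations.
pf = []
for _ in range(2000):
    bs = rng.choice(data, size=data.size, replace=True)
    pf.append(norm.sf(capacity, loc=bs.mean(), scale=bs.std(ddof=1)))
print("Pf 5th-95th percentile:", np.percentile(pf, [5, 95]))
```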

19.
Epistemic uncertainty analysis is an essential feature of any model application subject to ‘state of knowledge’ uncertainties. Such analysis is usually carried out on the basis of a Monte Carlo simulation sampling the epistemic variables and performing the corresponding model runs.

In situations, however, where aleatory uncertainties are also present in the model, an adequate treatment of both types of uncertainties would require a two-stage nested Monte Carlo simulation, i.e. sampling the epistemic variables (‘outer loop’) and nested sampling of the aleatory variables (‘inner loop’). It is clear that for complex and long-running codes the computational effort to perform all the resulting model runs may be prohibitive.

Therefore, an approximate epistemic uncertainty analysis is suggested which is based solely on two simple Monte Carlo samples: (a) joint sampling of both epistemic and aleatory variables simultaneously, and (b) sampling of the aleatory variables alone with the epistemic variables held fixed at their reference values. The applications of this approach to dynamic reliability analyses presented in this paper look quite promising and suggest that performing such an approximate epistemic uncertainty analysis is preferable to the alternative of not performing any.
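A minimal sketch of the two-sample idea on a toy model: the variance of the joint sample in excess of the aleatory-only sample is read as the epistemic contribution. The model, the distributions, and this particular variance comparison are illustrative assumptions, not the paper's dynamic reliability application.

```python
import numpy as np

rng = np.random.default_rng(11)

def model(theta, x):
    return theta * x  # toy model: epistemic parameter theta, aleatory input x

n = 100_000
theta_ref = 1.0

# (a) joint sample: epistemic and aleatory variables varied simultaneously
theta = rng.uniform(0.8, 1.2, n)   # epistemic (state-of-knowledge) variable
x = rng.exponential(1.0, n)        # aleatory (inherently random) variable
y_joint = model(theta, x)

# (b) aleatory-only sample, epistemic variable held at its reference value
y_aleatory = model(theta_ref, rng.exponential(1.0, n))

excess = y_joint.var() - y_aleatory.var()
print(f"total variance {y_joint.var():.3f}, aleatory-only {y_aleatory.var():.3f}, "
      f"excess attributed to epistemic uncertainty {max(excess, 0.0):.3f}")
```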

20.
Nonparametric (or distribution-free) control charts are used for monitoring processes where there is a lack of knowledge about the underlying distribution. In this article, a triple exponentially weighted moving average control chart based on the signed-rank statistic (referred to as the TEWMA-SR chart) is proposed for monitoring shifts in the location parameter of an unknown, but continuous and symmetric, distribution. The run-length characteristics of the proposed chart are evaluated by performing Monte Carlo simulations. A comparison study with other existing nonparametric control charts based on the signed-rank statistic, the TEWMA sign chart, and the parametric TEWMA-X̄ chart indicates that the proposed chart is more effective in detecting small shifts, while it is comparable with the other charts for moderate and large shifts. Finally, two illustrative examples are provided to demonstrate the application of the proposed chart.
