Similar Articles
1.
This paper gives a general method for use in the chemical industry for eliciting and quantifying an expert's subjective opinion concerning a normal linear regression model. The intention is to ask the expert assessment questions that he or she can meaningfully answer and to use the elicited values to determine a probability distribution on the regression parameters that quantifies and expresses the expert's opinions. A regression model may represent a chemical production process, for example, and the corresponding elicited distribution would embody the expert's opinion concerning the effects on product output of independent variables for process control and environmental factors. It may be uncertain what independent variables should be featured in the regression, so the expert's opinion is represented by a mixture of multivariate distributions, where each distribution in the mixture corresponds to a different subset of independent variables. Among the uses to which an elicited distribution might be put is design of experiments, discussed here with regard to Bayesian design criteria. An example is given of the elicitation and use of a subjective distribution in which an industrial chemist quantified his opinion about a chemical process.

2.
The concept of a Bayesian probability of agreement was recently introduced to give the posterior probabilities that the response surfaces for two different groups are within δ of one another. For example, a difference of less than δ in the mean response at fixed levels of the predictor variables might be thought to be practically unimportant. In such a case, we would say that the mean responses are in agreement. The posterior probability of this is called the Bayesian probability of agreement. In this article, we quantify the probability that new response observations from two groups will be within δ for a continuous response, and the probability that the two responses agree completely for categorical cases such as logistic regression and Poisson regression. We call these Bayesian comparative predictive probabilities, with the former being the predictive probability of agreement. We use Markov chain Monte Carlo simulation to estimate the posterior distribution of the model parameters and then the predictive probability of agreement. We illustrate the use of this methodology with three examples and provide a freely available R Shiny app that automates the computation and estimation associated with the methodology.
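Once posterior draws of the group parameters are available, the agreement probability reduces to a Monte Carlo average. A minimal sketch under assumed toy data (two normal groups with flat priors; the helper `posterior_mean_draws`, the data, and the value of δ are illustrative, not the article's exact model):

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 0.5  # practical-agreement margin (hypothetical)

# Toy measurements for two groups.
y1 = rng.normal(10.0, 1.0, size=50)
y2 = rng.normal(10.2, 1.0, size=50)

def posterior_mean_draws(y, n_draws=10_000, rng=rng):
    """Posterior draws of the group mean for a normal model with flat
    priors: sigma^2 from its scaled inverse-chi-square marginal, then
    mu | sigma^2 ~ Normal(ybar, sigma^2 / n)."""
    n = len(y)
    s2 = np.var(y, ddof=1)
    sigma2 = (n - 1) * s2 / rng.chisquare(n - 1, size=n_draws)
    return rng.normal(np.mean(y), np.sqrt(sigma2 / n))

mu1 = posterior_mean_draws(y1)
mu2 = posterior_mean_draws(y2)

# Bayesian probability of agreement: posterior mass of |mu1 - mu2| < delta.
# (The predictive version adds observation noise to each draw.)
prob_agreement = np.mean(np.abs(mu1 - mu2) < delta)
```

The same Monte Carlo average applies unchanged when the draws come from an MCMC sampler for a regression surface rather than this conjugate toy model.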

3.
One standard approach for estimating a subjective distribution is to elicit subjective quantiles from a human expert. However, most decision-making models require a random variable's moments and/or distribution function instead of its quantiles. In the literature little attention has been given to the problem of converting a given set of subjective quantiles into moments and/or a distribution function. We show that this conversion problem is far from trivial, and that the most commonly used conversion procedure often produces large errors. An alternative procedure using “Tocher's curve” is proposed, and its performance is evaluated with a wide variety of test distributions. The method is shown to be more accurate than a commonly used procedure.
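As a concrete instance of the conversion problem, one can integrate an interpolated quantile function to recover approximate moments; the flat-tail truncation in this sketch is exactly the kind of step that introduces the errors discussed (the probability levels and elicited values are hypothetical, and this is a naive procedure, not Tocher's curve):

```python
import numpy as np

# Elicited quantiles (hypothetical): probability levels and assessed values.
probs = np.array([0.05, 0.25, 0.50, 0.75, 0.95])
quants = np.array([1.2, 2.0, 2.6, 3.3, 4.5])

def _trap(y, x):
    """Trapezoidal rule."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def moments_from_quantiles(probs, quants, n_grid=10_001):
    """Approximate mean and variance via E[X^k] = integral_0^1 Q(p)^k dp,
    with Q linearly interpolated between elicited points. The mass beyond
    the outermost quantiles is simply renormalized away -- a crude
    truncation that distorts tail-sensitive moments."""
    p = np.linspace(probs[0], probs[-1], n_grid)
    q = np.interp(p, probs, quants)
    mass = probs[-1] - probs[0]
    mean = _trap(q, p) / mass
    m2 = _trap(q**2, p) / mass
    return mean, m2 - mean**2

mean, var = moments_from_quantiles(probs, quants)
```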

4.
I. J. Good, TEST, 1980, 31(1): 489–519
Summary A standard technique in subjective “Bayesian” methodology is for a subject (“you”) to make judgements of the probabilities that a physical probability lies in various intervals. In the hierarchical Bayesian technique you make probability judgements (of a higher type, order, level, or stage) concerning the judgements of lower type. The paper will outline some of the history of this hierarchical technique with emphasis on the contributions by I. J. Good because I have read every word written by him.

5.
We describe two stochastic network interdiction models for thwarting nuclear smuggling. In the first model, the smuggler travels through a transportation network on a path that maximizes the probability of evading detection, and the interdictor installs radiation sensors to minimize that evasion probability. The problem is stochastic because the smuggler's origin-destination pair is known only through a probability distribution at the time when the sensors are installed. In this model, the smuggler knows the locations of all sensors and the interdictor and the smuggler “agree” on key network parameters, namely the probabilities the smuggler will be detected while traversing the arcs of the transportation network. Our second model differs in that the interdictor and smuggler can have differing perceptions of these network parameters. This model captures the case in which the smuggler is aware of only a subset of the sensor locations. For both models, we develop the important special case in which the sensors can only be installed at border crossings of a single country so that the resulting model is defined on a bipartite network. In this special case, a class of valid inequalities reduces the computation time for the identical-perceptions model.
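The smuggler's subproblem in the first model — choosing a path that maximizes the product of per-arc evasion probabilities — becomes an ordinary shortest-path computation after a log transform. A sketch on a small hypothetical network (the arcs and probabilities are invented for illustration):

```python
import heapq
import math

# Hypothetical network: arc -> probability of evading detection on that arc.
evade = {
    ("s", "a"): 0.9, ("s", "b"): 0.7,
    ("a", "t"): 0.8, ("b", "t"): 0.95,
    ("a", "b"): 0.99,
}

def best_evasion_path(evade, source, sink):
    """Maximize the product of evasion probabilities by running Dijkstra
    on arc weights -log(p), which are nonnegative since p <= 1."""
    graph = {}
    for (u, v), p in evade.items():
        graph.setdefault(u, []).append((v, -math.log(p)))
    dist, prev = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, math.inf):
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (d + w, v))
    # Reconstruct the optimal path and undo the log transform.
    path, node = [sink], sink
    while node != source:
        node = prev[node]
        path.append(node)
    return path[::-1], math.exp(-dist[sink])

path, p_evade = best_evasion_path(evade, "s", "t")
```

The interdictor's sensor-placement problem then sits above this routine as a min-max optimization, which the paper formulates as a stochastic program.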

6.
In the present paper the applications of the integro-differential Chapman-Kolmogorov equation to the problems of pure-jump stochastic processes and continuous-jump response processes are discussed. The pure-jump processes considered herein are the counting Poisson process, a two-state jump process, and a multi-state jump process. The differential equations governing the Markov state probabilities are obtained from the degenerate, pure differential form, of the general, integro-differential Chapman-Kolmogorov equation, with the aid of the jump probability intensity functions. The continuous-jump response process is the response of a dynamic system to a multi-component renewal impulse process excitation. The excitation consists of n statistically independent random trains of impulses, each of which is driven by an Erlang renewal process with parameters (ν_j, k_j). Continuously increasing stress is not involved; instead, each of the impulse processes is characterized by an auxiliary zero-one jump stochastic process, which consists of k_j negative-exponentially distributed phases. The Markov states for the whole problem are determined by the coincidences of the phases of the individual jump processes. Thus the response probability distribution may be characterized by a joint probability density-discrete distribution of the state variables of the dynamic system and of the states of the pertinent Markov chain. The explicit integro-differential equations governing the joint probability density-discrete distribution of the response are obtained from the general forward integro-differential Chapman-Kolmogorov equation, after the determination of the jump probability intensity functions for the continuous-jump and pure-jump processes.
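For the counting Poisson process, the degenerate (pure differential) form of the Chapman–Kolmogorov equation reduces to the familiar birth equations dP_n/dt = λP_{n−1} − λP_n. A minimal forward-Euler sketch (the state-space truncation and step size are arbitrary choices, not from the paper):

```python
import numpy as np

# Forward equations for the counting Poisson process:
#   dP_n/dt = lam * P_{n-1} - lam * P_n,   P_0(0) = 1.
lam, t_end, dt, n_states = 2.0, 1.0, 1e-4, 30

P = np.zeros(n_states)
P[0] = 1.0
for _ in range(int(t_end / dt)):
    dP = -lam * P           # outflow from every state
    dP[1:] += lam * P[:-1]  # inflow from the state below
    P = P + dt * dP

# P[n] now approximates the Poisson(lam * t_end) pmf, e^{-2} 2^n / n!.
```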

7.
In this paper, we develop a Bayesian approach for monitoring Weibull quantiles under Type II censoring when prior information is negligible relative to the data. The posterior median of quantiles is considered as the monitored statistic. A method based on the relationship between Bayesian and conditional limits under an appropriate prior distribution is proposed to obtain the posterior median of quantiles in closed form. A pivotal quantity based on the monitored statistic is proposed, and its distribution is conditionally derived. Then, the Bayes‐conditional control limits are proposed. For the proposed charts, the probability of out‐of‐control can be derived without use of simulation. The performance of the Bayes‐conditional charts is compared with the bootstrap charts through simulation. The results show that, when monitoring lower quantiles, the lower‐sided Bayes‐conditional charts outperform bootstrap charts in detecting a downward shift caused by a decrease in the shape parameter. Finally, an illustrative example is provided. Copyright © 2016 John Wiley & Sons, Ltd.

8.
Software reliability assessment models in use today treat software as a monolithic block. An aversion towards 'atomic' models seems to exist. These models appear to add complexity to the modeling and to the data collection, and seem intrinsically difficult to generalize. In 1997, we introduced an architecturally based software reliability model called FASRE. The model is based on an architecture derived from the requirements which captures both functional and nonfunctional requirements and on a generic classification of functions, attributes and failure modes. The model focuses on evaluation of failure mode probabilities and uses a Bayesian quantification framework. Failure mode probabilities of functions and attributes are propagated to the system level using fault trees. It can incorporate any type of prior information, such as results of developers' testing or historical information on a specific functionality and its attributes, and is ideally suited for reusable software. By building an architecture and deriving its potential failure modes, the model forces early appraisal and understanding of the weaknesses of the software, allows reliability analysis of the structure of the system, and provides assessments at a functional level as well as at the system level. In order to quantify the probability of failure (or the probability of success) of a specific element of our architecture, data are needed. The term element of the architecture is used here in its broadest sense to mean a single failure mode or a higher level of abstraction such as a function. The paper surveys the potential sources of software reliability data available during software development. Next the mechanisms for incorporating these sources of relevant data to the FASRE model are identified.

9.
The assessment of structural integrity data requires statistical analysis. However, most statistical analysis methods make some assumption regarding the underlying distribution. Here, a new distribution-free statistical assessment method based on a combination of Rank and Bimodal probability estimates is presented and shown to result in consistent estimates of different probability quantiles. The method is applicable for any data set expressed as a function of two parameters. Data for more than two parameters can always be expressed as different subsets varying only two parameters. In principle, this makes the method applicable to the analysis of more complex data sets. The strength of the statistical analysis method presented lies in the objectiveness of the result. There is no need to make any subjective assumptions regarding the underlying distribution, or about the relationship between the parameters considered.

10.
In this article we consider a generalization of the univariate g-and-h distribution to the multivariate situation with the aim of providing a flexible family of multivariate distributions that incorporate skewness and kurtosis. The approach is to modify the underlying random variables and their quantiles, directly giving rise to a family of distributions in which the quantiles rather than the densities are the foci of attention. Using the ideas of multivariate quantiles, we show how to fit multivariate data to our multivariate g-and-h distribution. This provides a more flexible family than the skew-normal and skew-elliptical distributions when quantiles are of principal interest. Unlike those families, the distribution of quadratic forms from the multivariate g-and-h distribution depends on the underlying skewness. We illustrate our methods on Australian athletes data, as well as on some wind speed data from the northwest Pacific.
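The g-and-h construction works by transforming the quantiles of a standard normal rather than specifying a density; a univariate sketch (the parameter values are arbitrary illustrations):

```python
import numpy as np

def gh_transform(z, g, h):
    """Tukey g-and-h transform of standard normal z:
    g controls skewness, h controls tail heaviness (kurtosis)."""
    if g == 0:
        return z * np.exp(h * z**2 / 2)
    return (np.expm1(g * z) / g) * np.exp(h * z**2 / 2)

rng = np.random.default_rng(1)
z = rng.standard_normal(100_000)
x = gh_transform(z, g=0.5, h=0.1)  # positive g => right skew

# Sample skewness of the transformed draws.
skew = np.mean((x - x.mean())**3) / np.std(x)**3
```

The multivariate version of the paper applies this idea componentwise to an underlying multivariate normal, which is why quantiles, not densities, remain the natural objects to work with.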

11.
There are a wide variety of short-fiber-reinforced cement composites. Among these materials are Strain Hardening Cementitious Composites (SHCC), which exhibit strain hardening and multiple cracking in tension. Quantitative material design methods considering the properties of matrix, fiber and their interface should be established. In addition, numerical models to simulate the fracture process, including crack width and crack distribution, are needed. This paper introduces a numerical model for three-dimensional analysis of SHCC fracture, in which the salient features of the material meso-scale (i.e. matrix, fibers and their interface) are discretized. The fibers are randomly arranged within the specimen models. Load test simulations are conducted and compared with experimental results. The proposed model simulates well the tensile failure of Ultra High Performance-Strain Hardening Cementitious Composites (UHP-SHCC), including strain-hardening behavior and crack patterns. The effects of matrix strength, its probability distribution inside the specimen, and fiber distribution on the tensile fracture are numerically investigated. Consideration of the probability distributions of material properties, such as matrix strength, appears to be essential for predicting the fracture process of SHCC.

12.
Theoretical analyses have yielded grain-size probability distributions of varied form for nanomaterials: approximately lognormal, Rayleigh, normal, Weibull, etc. Hillert's isotropic model of grain growth, which is better suited to soap froth, has frequently been used to establish these distributions in the hope of approximating experimental observations. Observed grain growth in nanomaterials, however, departs from Hillert's model. In the present paper, the probability distribution of grain size in nanomaterials is dealt with. Use is made of a modified model of grain growth in polycrystalline nanomaterials developed recently by the authors. The modified model accounts for grain growth caused by curvature-driven grain boundary migration and by grain rotation-coalescence mechanisms. Since the grain size in the aggregate is random, the stochastic counterpart of the expression governing the incremental change in individual grain size is obtained by the addition of two fluctuation terms. The integro-differential equation governing the development of the probability density function of the grain size is obtained; it is the generalised Fokker–Planck–Kolmogorov equation, and a numerical solution to it is presented. Results from analytical modelling of the grain size probability distribution in polycrystalline nanomaterials differ when the grain rotation-coalescence mechanism is taken into account and, further, when the fluctuation terms are added. Results also depend on the nature of the fluctuation term, which is a material property, as the fluctuation in grain sizes varies from one material to another. It is shown that many of the major attributes of grain growth, such as self-similarity (the probability density approaching a stationary one), can be predicted by the solution of the Fokker–Planck–Kolmogorov equation.

13.
An aerospace vehicle in atmospheric flight can be exposed to random air turbulence which may cause critical structural failure. Especially for a high-aspect-ratio wing, the effect of gusts becomes more significant and the response to random gusts needs to be analyzed precisely. In this paper, a reliability analysis is conducted for a composite wing subjected to gust loads. For this, the probability distribution function of the bending moment from random gusts is calculated by power spectral analysis, and the material properties of the composite skin are assumed to be normal random variables to account for uncertainty. With these distributions of random variables, the probability of failure of the wing structure is calculated by Monte Carlo simulation.
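The final step — Monte Carlo estimation of the failure probability from the load and strength distributions — can be sketched as follows (all distribution parameters are hypothetical placeholders, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Hypothetical limit state: failure when the gust bending moment M
# exceeds the composite skin's bending strength R (units illustrative).
M = rng.normal(100.0, 20.0, size=n)   # load effect from gust PSD analysis
R = rng.normal(180.0, 15.0, size=n)   # material strength, normal random

# Monte Carlo failure probability: fraction of samples with M > R.
p_fail = np.mean(M > R)
```

For these numbers, R − M is normal with mean 80 and standard deviation 25, so the exact failure probability is Φ(−3.2) ≈ 6.9 × 10⁻⁴; the Monte Carlo estimate should land near that value.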

14.
Technometrics, 2012, 54(4): 429–444
Abstract

The empirical quantiles of independent data provide a good summary of the underlying distribution of the observations. For high-dimensional time series defined in two dimensions, such as in space and time, one can define empirical quantiles of all observations at a given time point, but such time-wise quantiles can only reflect properties of the data at that time point. They often fail to capture the dynamic dependence of the data. In this article, we propose a new definition of empirical dynamic quantiles (EDQ) for high-dimensional time series that mitigates this limitation by imposing that the quantile must be one of the observed time series. The word dynamic emphasizes the fact that these newly defined quantiles capture the time evolution of the data. We prove that the EDQ converge to the time-wise quantiles under some weak conditions as the dimension increases. A fast algorithm to compute the dynamic quantiles is presented and the resulting quantiles are used to produce summary plots for a collection of many time series. We illustrate with two real datasets that the time-wise and dynamic quantiles convey different and complementary information. We also briefly compare the visualization provided by EDQ with that obtained by functional depth. The R code and a vignette for computing and plotting EDQ are available at https://github.com/dpena157/HDts/.
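One way to read the EDQ constraint — the quantile must be one of the observed series — is as a discrete minimization of total check loss over the candidate series. A simplified brute-force sketch on simulated data (evenly spaced series levels are an assumption of this illustration; the paper's fast algorithm is not reproduced here):

```python
import numpy as np

def check_loss(u, tau):
    """Quantile (check) loss rho_tau(u)."""
    return np.where(u >= 0, tau * u, (tau - 1) * u)

def edq(X, tau):
    """Pick the observed series (row of X) minimizing the total check
    loss against all series over all time points -- a plausible reading
    of the empirical dynamic quantile idea."""
    losses = [check_loss(X - X[j], tau).sum() for j in range(X.shape[0])]
    return int(np.argmin(losses))

rng = np.random.default_rng(3)
# 20 series, 100 time points, with evenly spaced mean levels from -2 to 2.
X = rng.normal(0, 1, size=(20, 100)) + np.linspace(-2, 2, 20)[:, None]

idx = edq(X, tau=0.5)  # should select a series near the middle level
```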

15.
Variable-stress accelerated life testing trials are experiments in which each of the units in a random sample of units of a product is run under increasingly severe conditions to get information quickly on its life distribution. We consider a fatigue failure model in which accumulated decay is governed by a continuous Gaussian process W(y) whose distribution changes at certain stress change points t_0 < t_1 < … < t_k. Continuously increasing stress is also considered. Failure occurs the first time W(y) crosses a critical boundary ω. The distribution of time to failure for the models can be represented in terms of time-transformed inverse Gaussian distribution functions, and the parameters in models for experiments with censored data can be estimated using maximum likelihood methods. A common approach to the modeling of failure times for experimental units subject to increased stress at certain stress change points is to assume that the failure times follow a distribution that consists of segments of Weibull distributions with the same shape parameter. Our Wiener-process approach gives an alternative flexible class of time-transformed inverse Gaussian models in which time to failure is modeled in terms of accumulated decay reaching a critical level and in which parametric functions are used to express how higher stresses accelerate the rate of decay and the time to failure. Key parameters such as mean life under normal stress, quantiles of the normal stress distribution, and decay rate under normal and accelerated stress appear naturally in the model. A variety of possible parameterizations of the decay rate leads to flexible modeling. Model fit can be checked by percentage-percentage plots.
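The inverse Gaussian time-to-failure law can be checked by direct simulation of the decay process crossing its critical boundary; a constant-stress sketch with assumed parameters (μ, σ, ω and all numerical settings are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)

def first_passage_times(mu, sigma, omega, n_paths=5_000, dt=1e-2, t_max=20.0):
    """Simulate the first time a Wiener decay process with drift mu and
    volatility sigma crosses the boundary omega. The exact law is inverse
    Gaussian with mean omega / mu (small upward discretization bias)."""
    n_steps = int(t_max / dt)
    times = np.full(n_paths, np.nan)
    w = np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)
    for k in range(1, n_steps + 1):
        # Advance only the paths that have not yet crossed.
        w[alive] += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(alive.sum())
        crossed = alive & (w >= omega)
        times[crossed] = k * dt
        alive &= ~crossed
        if not alive.any():
            break
    return times

t = first_passage_times(mu=1.0, sigma=0.5, omega=2.0)
mean_t = np.nanmean(t)   # theoretical mean is omega / mu = 2.0
```

Under increasing stress, the drift μ would be replaced by a stress-dependent decay rate, which is the time transformation the abstract refers to.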

16.
Systematic research is still lacking on determining the joint distribution function of variables under incomplete probability information and on its effect on structural system reliability. This paper investigates how the copula function characterizing the dependence between variables affects structural system reliability. First, the copula-based method for constructing joint distribution functions of variables is briefly introduced. Second, a method for computing the failure probability of parallel systems is proposed, and the corresponding formulas are derived. Finally, taking several typical copula functions as examples, the effect of the copula type on the reliability of structural parallel systems is studied. The results show that the copula type characterizing the dependence between variables has a pronounced effect on structural system reliability: failure probabilities computed with different copulas differ markedly, and the differences grow as the component failure probabilities decrease. When the failure region of the parallel system lies in the tail of the copula, the copula's tail dependence strongly affects system reliability, and the computed failure probability is larger than that obtained from copulas without tail dependence. When the performance functions of the two components of the parallel system are positively correlated, the system failure probability increases with the correlation coefficient; when they are negatively correlated, the system failure probability decreases as the correlation coefficient increases. Moreover, regardless of how the component failure probabilities and the inter-variable correlation coefficients vary, the failure probabilities computed with copulas always lie within the upper and lower bounds of the system failure probability.
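A minimal Monte Carlo sketch of a parallel-system failure probability under a Gaussian copula (the reliability indices and correlation are hypothetical; other copula families would simply replace the multivariate-normal draw):

```python
import numpy as np

rng = np.random.default_rng(11)
n = 500_000

# Two component limit states, each marginally standard normal, coupled
# through a Gaussian copula with correlation rho.
rho = 0.6
beta1, beta2 = 2.5, 3.0   # component reliability indices (hypothetical)

z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)

# A parallel system fails only when BOTH components fail (z_i < -beta_i).
p_parallel = np.mean((z[:, 0] < -beta1) & (z[:, 1] < -beta2))
```

Consistent with the abstract's bounds result, the estimate must lie between the independent product Φ(−β₁)Φ(−β₂) and the smaller component probability Φ(−β₂), and grows with ρ.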

17.
TEST, 1990, 5(1): 1–60
Summary In Bayesian inference and decision analysis, inferences and predictions are inherently probabilistic in nature. Scoring rules, which involve the computation of a score based on probability forecasts and what actually occurs, can be used to evaluate probabilities and to provide appropriate incentives for “good” probabilities. This paper reviews scoring rules and some related measures for evaluating probabilities, including decompositions of scoring rules, attributes of “goodness” of probabilities, comparability of scores, and the design of scoring rules for specific inferential and decision-making problems. Read before the Spanish Statistical Society at a meeting organized by the Universitat de València on Tuesday, April 23, 1996.
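The Brier score is the canonical example of such a rule; a small sketch showing both the score computation and the propriety property the review discusses (the forecasts and outcomes are invented):

```python
import numpy as np

def brier_score(forecasts, outcomes):
    """Mean squared difference between probability forecasts and 0/1
    outcomes; a strictly proper scoring rule (lower is better)."""
    forecasts = np.asarray(forecasts, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    return float(np.mean((forecasts - outcomes) ** 2))

score_honest = brier_score([0.8, 0.3, 0.5], [1, 0, 1])

# Propriety: when the true event probability is p, the expected score
#   p * (1 - q)^2 + (1 - p) * q^2
# is minimized by reporting q = p, so honesty is the best policy.
p = 0.7
qs = np.linspace(0.0, 1.0, 101)
expected = p * (1 - qs) ** 2 + (1 - p) * qs ** 2
best_q = qs[np.argmin(expected)]
```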

18.
An empirical approach is proposed to estimate the transition probabilities associated with the non-homogenous Markov chains typically used in developing stochastic pavement performance prediction models. A reliable pavement performance prediction model is a key component of any advanced pavement management system. The proposed empirical approach is designed to account for two major factors that cause the transition probabilities (i.e. deterioration rates) to increase over time. The first major factor is the progressive increase in traffic loading as represented by equivalent single axle load applications. The second major factor is the gradual decline in pavement structural capacity, which can be represented by an appropriate pavement strength indicator such as the structural number. The proposed empirical model can recursively estimate the non-homogenous transition probabilities for an analysis period of n transitions by simply multiplying the first-year (i.e. present) transition probabilities by two adjustment factors, namely the load and strength factors. Once the empirical model is calibrated, these two factors can capture the impact of traffic load increases and gradual pavement structural losses on the transition probabilities over time. The calibration process requires the estimation of the model's two exponents, obtained by minimising the sum of squared errors, where the error is defined as the difference between the observed and predicted pavement distress ratings (DRs). The predicted DRs are estimated mainly from the state probabilities, which are recursively derived from the non-homogenous Markov model. A sample empirical model is presented with results indicating its effectiveness in estimating the pavement non-homogenous transition probabilities.
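The recursive structure — a first-year transition matrix adjusted by load and strength factors, then propagated through the state probabilities — can be sketched as follows (the adjustment functional form and all numbers are invented for illustration; the paper calibrates its two exponents to observed distress ratings instead):

```python
import numpy as np

def adjusted_transitions(P1, load_factor, strength_factor, year):
    """Hypothetical sketch: shrink each state's staying probability over
    time via the load/strength adjustment, pushing probability mass
    toward the next (more deteriorated) condition state."""
    P = P1.copy()
    adj = (load_factor * strength_factor) ** year   # grows past 1.0
    for i in range(P.shape[0] - 1):
        stay = P[i, i] ** adj                       # stay < P1[i, i]
        P[i, i], P[i, i + 1] = stay, 1.0 - stay
    return P

# First-year transition matrix over 3 condition states (illustrative);
# the last state is absorbing.
P1 = np.array([[0.8, 0.2, 0.0],
               [0.0, 0.7, 0.3],
               [0.0, 0.0, 1.0]])

# Propagate state probabilities through 5 non-homogenous transitions.
state = np.array([1.0, 0.0, 0.0])
for year in range(1, 6):
    state = state @ adjusted_transitions(P1, 1.05, 1.02, year)
```

The predicted distress rating would then be a weighted average of the states in `state`, which is where the calibration residuals come from.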

19.
In this article, we study exponentially weighted moving average (EWMA) charts for monitoring Weibull quantiles (percentiles) based on a monitoring statistic conditioned on ancillary statistics when samples may be Type II censored. The monitoring statistic has a distribution form that is intractable, but analytic forms of the density and distribution functions can be derived when it is conditioned on ancillary statistics. We use these results to develop EWMA control charts and, in certain cases, evaluate their average run length without resorting to simulations. We compare the average run length performance of the EWMA charts with those of probability‐limit charts, studied by the authors, and probability‐limit charts enhanced with Western Electric alarm rules. We apply the charts to the breaking strength of carbon fibers to detect shifts in a specific Weibull quantile. Copyright © 2016 John Wiley & Sons, Ltd.
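A generic EWMA recursion for a monitored statistic, with the standard asymptotic control limit (the data, shift, and limit constants below are illustrative; the article's charts use the conditional distribution of the Weibull quantile statistic instead of this normal stand-in):

```python
import numpy as np

def ewma(stats, lam, target):
    """EWMA recursion z_t = lam * x_t + (1 - lam) * z_{t-1},
    started at the in-control target."""
    z = np.empty(len(stats))
    prev = target
    for t, x in enumerate(stats):
        prev = lam * x + (1 - lam) * prev
        z[t] = prev
    return z

rng = np.random.default_rng(5)
# Monitored statistic per sample: in-control noise around 10, with a
# downward shift after observation 30 (hypothetical numbers).
x = np.concatenate([rng.normal(10.0, 1.0, 30), rng.normal(8.5, 1.0, 20)])

lam = 0.2
z = ewma(x, lam=lam, target=10.0)

# Asymptotic 3-sigma lower control limit for a lower-sided chart.
lcl = 10.0 - 3.0 * 1.0 * np.sqrt(lam / (2.0 - lam))

signals = np.where(z < lcl)[0]   # indices where the chart signals
```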

20.
Abstract

The performance of reliability inference strongly depends on the modeling of the product's lifetime distribution. Many products have complex lifetime distributions whose optimal settings are not easily found, so practitioners often prefer a simpler lifetime distribution to facilitate data modeling, even when the true distribution is known. The effects of model mis-specification on the product's lifetime prediction are therefore an interesting research area. This article presents some results on the behavior of the relative bias (RB) and relative variability (RV) of the pth quantile of the accelerated lifetime (ALT) experiment when the generalized Gamma (GG3) distribution is incorrectly specified as a Lognormal or Weibull distribution. Both complete and censored ALT models are analyzed. First, the analytical expression for the expected log-likelihood of the misspecified model with respect to the true model is derived. The best parameters for the incorrect model are then obtained directly via numerical optimization, making the misspecified model as close as possible to the true one for the end-goal task. The results demonstrate that the tail quantiles are significantly overestimated (underestimated) when data are wrongly fitted by the Lognormal (Weibull) distribution. Moreover, the variability of the tail quantiles is significantly enlarged when the model is incorrectly specified as Lognormal or Weibull. The effect on the tail quantiles is more pronounced when the sample size and censoring ratio are not large enough. Supplementary materials for this article are available online.
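The over/underestimation pattern can be reproduced in a small simulation, substituting a plain gamma for the GG3 as the "true" lifetime model (an assumption of this sketch, not the article's setup) and fitting Lognormal and Weibull by maximum likelihood via SciPy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

# Stand-in "true" lifetimes, wrongly fitted with Lognormal and Weibull.
data = rng.gamma(shape=2.0, scale=100.0, size=5_000)

p = 0.99
q_true = np.quantile(data, p)

# Lognormal MLE fit with the location fixed at zero.
s, loc, scale = stats.lognorm.fit(data, floc=0)
q_lognorm = stats.lognorm.ppf(p, s, loc, scale)

# Weibull MLE fit with the location fixed at zero.
c, loc_w, scale_w = stats.weibull_min.fit(data, floc=0)
q_weibull = stats.weibull_min.ppf(p, c, loc_w, scale_w)

# Relative bias of the tail quantile under each misspecified model:
# Lognormal's heavier right tail overestimates, Weibull underestimates.
rb_lognorm = (q_lognorm - q_true) / q_true
rb_weibull = (q_weibull - q_true) / q_true
```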
