Similar Documents
20 similar documents found (search time: 31 ms).
1.
Monte Carlo studies of a modification of the original version of the half-normal plot (Daniel, Technometrics, 1 (1959), 311–341) and four new versions are reported. Data representative of the 15 contrasts from a 2^(p−q), p − q = 4, factorial experiment are generated. Design parameters in the Main Simulation Study are the probability error rate, the number of real contrasts, and the size of the real contrasts.

The critical values used by the various versions control the probability error rate. These critical values are considerably different from those given by Daniel.

The Monte Carlo studies indicate that the detection rate, i.e., the proportion of real contrasts declared significant, is larger for one of the new versions than for the original version. The detection rate of all versions decreases drastically when the number of real contrasts present increases from one to two to four.

Nomination procedures for analyzing single-replicate 2^4 factorial experiments have a smaller detection rate than the half-normal plot with an equivalent probability error rate, unless the experimenter can accurately nominate ten error contrasts in the 2^4 experiment.
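For readers unfamiliar with the technique discussed above, the following Python sketch shows how a basic half-normal plot of 15 contrasts is constructed; the simulated contrast values and the use of scipy/matplotlib are illustrative assumptions, not taken from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Hypothetical example: 15 contrasts from a single-replicate 2^4 factorial.
rng = np.random.default_rng(0)
contrasts = rng.normal(0.0, 1.0, size=15)
contrasts[0] += 4.0                           # pretend one contrast is "real"

abs_c = np.sort(np.abs(contrasts))            # ordered |contrast| values
n = len(abs_c)
p = (np.arange(1, n + 1) - 0.5) / n           # plotting positions
q = stats.halfnorm.ppf(p)                     # half-normal quantiles

plt.scatter(q, abs_c)
plt.xlabel("half-normal quantile")
plt.ylabel("|contrast|")
plt.title("Half-normal plot of 15 contrasts (illustrative data)")
plt.show()
```

Pure-error contrasts should fall near a straight line through the origin; points well above that line at the upper right are candidates for real effects.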

2.
Plotting the empirical cumulative distribution of the usual set of orthogonal contrasts computed from a 2^p experiment on a special grid may aid in its criticism and interpretation. Bad values, heteroscedasticity, dependence of variance on mean, and some types of defective randomization all leave characteristic stigmata. The half-normal plot can be used to estimate the error standard deviation and to make judgments about the reality of the observed effects. An accompanying paper by A. Birnbaum gives some operating characteristics of these judgments. Examples are given of the use of half-normal plots in each of these ways.

3.
We reduce the dimension of integration in the computation of the cumulative distribution function for version X of the half-normal plot. This speeds computation of the critical values.

4.
This paper discusses several methods for judging which of m contrasts provided by a factorial design without replication may be different from zero. These include the technique of half-normal plotting, proposed by C. Daniel, for which some operating characteristics are given.

5.
A modified version of Weibull's statistical theory of the strength of brittle materials is proposed, in which the expression for failure probability contains an additional term. While this term is negligible when failure originates from a flaw of relatively large size, it becomes increasingly significant as the flaw size is reduced. The resulting revised expressions for failure probability under uniform, uniaxial tension and under Hertzian indentation loading are given, and the effect of a bimodal flaw size distribution is considered in both cases. The implications with regard to the assumed invariance of Weibull statistical parameters under different experimental conditions are discussed.
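As background for the abstract above, here is a minimal sketch of the classical (unmodified) two-parameter Weibull failure probability under uniform, uniaxial tension; the paper's additional term is not reproduced here, and all numerical values are purely illustrative.

```python
import numpy as np

def weibull_failure_probability(sigma, sigma0, m, volume_ratio=1.0):
    """Classical two-parameter Weibull failure probability under uniform,
    uniaxial tension: P_f = 1 - exp[-(V/V0) * (sigma/sigma0)^m].
    (The paper's modification adds a further term, not reproduced here.)"""
    return 1.0 - np.exp(-volume_ratio * (sigma / sigma0) ** m)

# Illustrative numbers only (not from the paper): stress 300 MPa,
# characteristic strength 400 MPa, Weibull modulus 10.
print(weibull_failure_probability(sigma=300.0, sigma0=400.0, m=10.0))
```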

6.
J. M. Dickey, TEST, 1980, 31(1): 471–487
Summary. Parameterized families of subjective probability distributions can be used to great advantage to model beliefs of experts, especially when such models include dependence on concomitant variables. In one such model, probabilities of simple events can be expressed in loglinear form. In another, a generalization of the multivariate t distribution has concomitant variables entering linearly through the location vector. Interactive interview methods for assessing this second model and matrix extensions thereof were given in recent joint work of the author with A. P. Dawid, J. B. Kadane and others. In any such verbal assessment method, elicited quantiles must be fitted by subjective probability models. The fitting requires the use of a further probability model for errors of elicitation. This paper gives new theory relating the form of the distribution of elicited probabilities and elicited quantiles to the form of the subjective probability distribution. The first and second order moment structures are developed to permit generalized least squares fits. Present affiliation: State University of New York, Albany.

7.
In the online and printed versions of the original article, there was a typographical error in I. Peker's name. His name is spelled correctly above. This error occurred in both the html and pdf versions of the online article. The online version of the original article can be found at

8.
Requirements Imposed by Fracture Mechanics on Nondestructive Testing Methods. Fracture mechanics is a tool for evaluating the magnitude of critical flaws in structures. By means of material properties such as fracture toughness and subcritical flaw growth, together with the existing primary and secondary stresses, it becomes possible to evaluate critical values for given flaw configurations. In Section XI, Appendix A of the ASME Nuclear Pressure Vessel Code, allowable flaw depths are usually determined as a function of flaw configuration, location, and wall thickness. The intent is to enable a fracture-mechanics assessment of detected defects and of the safety of a component against brittle and ductile fracture, and also to provide an estimate of subcritical flaw growth. Practical application of fracture mechanics therefore depends on progress in nondestructive surveillance methods. In the present work the guidelines for integrity assessment of flawed structures based on Appendix A are described and the problems associated with fracture-mechanics approaches are outlined. To date it is not possible to obtain quantitative statements about the configuration, location, and size of flaws in structures by means of nondestructive testing; safety factors are therefore introduced with the goal of assuring the integrity of flawed structures.

9.
In References 1–3 we presented a computer-based theory for analysing the asymptotic accuracy (quality of robustness) of error estimators for mesh-patches in the interior of the domain. In this paper we review the approach employed in References 1–3 and extend it to analyse the asymptotic quality of error estimators for mesh-patches at or near a domain boundary. We analyse two error estimators which were found in References 1–3 to be robust in the interior of the mesh (the element residual with p-order equilibrated fluxes and a (p+1)-degree bubble solution or (p+1)-degree polynomial solution, ERpB or ERpPp+1; see References 1–3) and the Zienkiewicz–Zhu Superconvergent Patch Recovery (ZZ-SPR; see References 4–7), and we show that the robustness of these estimators for elements adjacent to the boundary can be significantly inferior to their robustness for interior elements. This deterioration is due to the difference in the definition of the estimators for the elements in the interior of the mesh and the elements adjacent to the boundary. In order to demonstrate how our approach can be employed to determine the most robust version of an estimator, we analysed the versions of the ZZ estimator proposed in References 9–12. We found that the original ZZ-SPR proposed in References 4–7 is the most robust one among the various versions tested, and some of the proposed ‘enhancements’ can lead to a significant deterioration of the asymptotic robustness of the estimator. From the analyses given in References 1–3 and in this paper, we found that the original ZZ estimator (given in References 4–7) is the most robust among all estimators analysed in References 1–3 and in this study. © 1997 John Wiley & Sons, Ltd.

10.
11.
Nondestructive evaluation (NDE) techniques are widely used to detect flaws in critical components of systems like aircraft engines, nuclear power plants, and oil pipelines to prevent catastrophic events. Many modern NDE systems generate image data. In some applications, an experienced inspector performs the tedious task of visually examining every image to provide accurate conclusions about the existence of flaws. This approach is labor-intensive and can cause misses due to operator ennui. Automated evaluation methods seek to eliminate human-factors variability and improve throughput. Simple methods based on peak amplitude in an image are sometimes employed and a trained-operator-controlled refinement that uses a dynamic threshold based on signal-to-noise ratio (SNR) has also been implemented. We develop an automated and optimized detection procedure that mimics these operations. The primary goal of our methodology is to reduce the number of images requiring expert visual evaluation by filtering out images that are overwhelmingly definitive on the existence or absence of a flaw. We use an appropriate model for the observed values of the SNR-detection criterion to estimate the probability of detection. Our methodology outperforms current methods in terms of its ability to detect flaws. Supplementary materials for this article are available online.
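A minimal sketch of the kind of SNR-based screening described above is shown below; the threshold values, the choice of noise region, and the three-way decision rule are assumptions for illustration, not the optimized procedure developed in the paper.

```python
import numpy as np

def snr_screen(image, noise_region, snr_threshold=3.0):
    """Flag an NDE image based on a simple SNR criterion.
    noise_region: pair of slices selecting a flaw-free area used to estimate noise.
    Returns 'flaw', 'clean', or 'review' (ambiguous, send to an inspector)."""
    noise = image[noise_region]
    mu, sigma = noise.mean(), noise.std(ddof=1)
    snr = (image.max() - mu) / sigma          # peak amplitude relative to noise
    if snr >= 2 * snr_threshold:              # overwhelmingly definitive: flaw
        return "flaw"
    if snr < snr_threshold:                   # overwhelmingly definitive: clean
        return "clean"
    return "review"                           # leave for visual evaluation

rng = np.random.default_rng(1)
img = rng.normal(0.0, 1.0, size=(64, 64))
img[30:33, 30:33] += 8.0                      # synthetic flaw indication
print(snr_screen(img, (slice(0, 16), slice(0, 16))))
```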

12.
For ageing airframe structures, a critical challenge for next generation linear elastic fracture mechanics (LEFM) modelling is to predict the effect of corrosion damage on the remaining fatigue life and structural integrity of components. This effort aims to extend a previously developed LEFM modelling approach to field corroded specimens and variable amplitude loading. Iterations of LEFM modelling were performed with different initial flaw sizes and crack growth rate laws and compared to detailed experimental measurements of crack formation and small crack growth. Conservative LEFM-based lifetime predictions of corroded components were achieved using a corrosion modified-equivalent initial flaw size along with crack growth rates from a constant Kmax-decreasing ΔK protocol. The source of the error in each of the LEFM iterations is critiqued to identify the bounds for engineering application.
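To illustrate the kind of LEFM iteration described above, the sketch below integrates a Paris-type crack growth law from an equivalent initial flaw size to a critical size; the constant-amplitude simplification, the geometry factor, and the material constants are assumptions, not the paper's corrosion-modified model.

```python
import numpy as np

def lefm_cycles_to_failure(a0, a_crit, delta_sigma, C, m, Y=1.12, n_steps=20000):
    """Numerically integrate the Paris law da/dN = C * (dK)^m from an
    equivalent initial flaw size a0 to a critical size a_crit, with
    dK = Y * delta_sigma * sqrt(pi * a).  Constant-amplitude simplification."""
    a_grid = np.linspace(a0, a_crit, n_steps)
    dK = Y * delta_sigma * np.sqrt(np.pi * a_grid)
    dNda = 1.0 / (C * dK ** m)                     # dN/da at each crack size
    # trapezoidal integration of dN/da over the crack-size grid -> cycles
    return np.sum(0.5 * (dNda[1:] + dNda[:-1]) * np.diff(a_grid))

# Illustrative values only (SI units: m, MPa, da/dN in m/cycle with dK in MPa*sqrt(m)).
print(lefm_cycles_to_failure(a0=50e-6, a_crit=5e-3,
                             delta_sigma=150.0, C=1e-11, m=3.0))
```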

13.
In this study, we investigate five methods for predicting the failure rate of a modified, upgraded version of a switching power converter. Considerable data were available on the operational reliability of the original version of this power converter and three others of a similar design. Rather than producing a single point estimate of the failure rate from each method, we obtain a probability distribution in each case. The associated cumulative distribution function then shows the probability that the power converter's failure rate exceeds any particular chosen value.
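One simple way to obtain a failure-rate distribution rather than a point estimate, in the spirit of the abstract above, is a conjugate gamma posterior built from observed failures and operating time; this is a generic sketch with hypothetical numbers, not one of the five methods compared in the paper.

```python
from scipy import stats

# Hypothetical operational data for the original converter design.
failures, hours = 3, 2.0e6                   # 3 failures in 2 million unit-hours
prior_shape, prior_rate = 0.5, 1.0e5         # weak gamma prior (assumption)

# Gamma posterior for the failure rate lambda (failures per hour):
# shape = prior_shape + failures, rate = prior_rate + exposure.
post = stats.gamma(a=prior_shape + failures, scale=1.0 / (prior_rate + hours))

lam0 = 2.0e-6                                # chosen failure-rate requirement
print("P(lambda >", lam0, ") =", post.sf(lam0))
print("posterior mean rate:", post.mean())
```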

14.
We present a hierarchical Bayesian method for estimating the density and size distribution of subclad-flaws in French Pressurized Water Reactor (PWR) vessels. This model takes into account in-service inspection (ISI) data, a flaw size-dependent probability of detection (different functions are considered) with a threshold of detection, and a flaw sizing error distribution (different distributions are considered). The resulting model is identified through a Markov Chain Monte Carlo (MCMC) algorithm. The article includes discussion for choosing the prior distribution parameters and an illustrative application is presented highlighting the model's ability to provide good parameter estimates even when a small number of flaws are observed.
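Two ingredients named in the abstract above are a flaw-size-dependent probability of detection with a detection threshold and a sizing-error distribution; the sketch below illustrates plausible forms for both. The log-logistic POD curve, the lognormal sizing error, and all parameter values are assumptions, and the hierarchical MCMC model itself is not reproduced.

```python
import numpy as np

def pod(a, a_threshold=1.0, a50=3.0, slope=1.5):
    """Flaw-size-dependent probability of detection with a hard detection
    threshold (sizes in mm; the log-logistic shape is an assumption)."""
    a = np.asarray(a, dtype=float)
    p = 1.0 / (1.0 + np.exp(-slope * (np.log(a) - np.log(a50))))
    return np.where(a < a_threshold, 0.0, p)

def observed_size(true_size, sizing_sd=0.3, rng=None):
    """Apply a multiplicative lognormal sizing error to the true flaw size."""
    if rng is None:
        rng = np.random.default_rng()
    return true_size * rng.lognormal(mean=0.0, sigma=sizing_sd,
                                     size=np.shape(true_size))

rng = np.random.default_rng(2)
true_sizes = rng.exponential(scale=2.0, size=5)        # hypothetical flaws (mm)
detected = rng.random(5) < pod(true_sizes)
print(true_sizes.round(2), detected, observed_size(true_sizes, rng=rng).round(2))
```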

15.
An efficient algorithm has been proposed for determining the probability of failure of structures containing flaws. The algorithm is based on a powerful generic equation, a central parameter in which is the conditional individual probability of initiating failure by a single flaw. The equation avoids conservative predictions related to the probability of locally initiated failure and is a powerful alternative to existing approaches. It is based on the concept of a ‘conditional individual probability of initiating failure’ characterising a single fault, which permits us to relate in a simple fashion the conditional individual probability of failure characterising a single fault to the probability of failure characterising a population of faults. A method for estimating the conditional individual probability has been proposed based on combining a Monte Carlo simulation and a failure criterion. The generic equation has been modified to determine the probability of fatigue failure initiated by flaws. Other important applications discussed in the paper include: comparing different types of loading and selecting the type of loading associated with the smallest probability of over-stress failure; optimizing designs by minimizing their vulnerability to over-stress failure initiated by flaws; determining failure triggered by random faults in a large system; and determining the probability of overloading of a supply system from random demands.
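The abstract above centres on the conditional individual probability of initiating failure from a single flaw. One common way to combine such a probability over a Poisson-distributed flaw population is P_f = 1 − exp(−λ·V·p̄); the sketch below uses that form together with a simple overstress criterion, both of which are assumptions rather than the paper's exact equations.

```python
import numpy as np

def individual_probability_mc(strength_dist, applied_stress,
                              n_samples=100_000, rng=None):
    """Monte Carlo estimate of the conditional probability that a single flaw,
    whose local strength is drawn from strength_dist, initiates failure under
    applied_stress (simple overstress criterion, used here as an assumption)."""
    if rng is None:
        rng = np.random.default_rng(3)
    local_strength = strength_dist(rng, n_samples)
    return np.mean(local_strength < applied_stress)

def population_failure_probability(flaw_density, volume, p_individual):
    """Combine the individual probability over a Poisson flaw population:
    P_f = 1 - exp(-lambda * V * p_individual)."""
    return 1.0 - np.exp(-flaw_density * volume * p_individual)

p_ind = individual_probability_mc(
    strength_dist=lambda rng, n: rng.normal(500.0, 60.0, n),  # MPa, illustrative
    applied_stress=380.0)
print(population_failure_probability(flaw_density=20.0, volume=0.05,
                                     p_individual=p_ind))
```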

16.
Industrial scientists and engineers often use experimental designs in which all degrees of freedom are used to estimate effects, so no classical estimate of the error is possible. Robust scale estimates provide an alternative measure of the error. In this study, several such scale estimators are evaluated based on the power of related significance tests. The pseudo standard error method of Lenth provides the best overall performance. Lenth's t approximation for critical values was found to be inaccurate, however, so new tables are provided. Additional recommendations are made according to the experimenter's prior belief in the number of likely important factors.
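Lenth's pseudo standard error, mentioned above, is straightforward to compute; the sketch below follows the usual definition and, for illustration only, applies Lenth's original t-approximation for the margin of error (the very approximation the study found inaccurate; its replacement tables are not reproduced here).

```python
import numpy as np
from scipy import stats

def lenth_pse(effects):
    """Lenth's pseudo standard error for m effect estimates from a
    saturated (unreplicated) design."""
    abs_e = np.abs(np.asarray(effects, dtype=float))
    s0 = 1.5 * np.median(abs_e)                       # initial scale estimate
    return 1.5 * np.median(abs_e[abs_e < 2.5 * s0])   # trimmed re-estimate

effects = np.array([0.2, -0.4, 5.1, 0.3, -0.1, 0.7, -0.6, 2.9,
                    0.0, 0.5, -0.2, 0.4, -0.3, 0.1, 0.6])  # illustrative values
pse = lenth_pse(effects)
d = len(effects) / 3.0                        # Lenth's nominal degrees of freedom
margin = stats.t.ppf(0.975, d) * pse          # Lenth's t-approximation (see caveat)
print("PSE:", pse, "effects flagged:", np.abs(effects) > margin)
```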

17.
The process region at the tip of a crack in a linear elastic structure has been modelled by a cohesive zone. Growth of the front end of the cohesive zone is governed by a critical stress intensity factor criterion, and advance of the original traction free crack is determined by a critical crack opening at the rear end of the cohesive zone. Damage resistance curves relating the applied stress intensity factor to the growth of the cohesive zone have been calculated for an idealized structure containing two characteristic dimensions. Instability resulting in failure of the structure is found to occur either by unstable growth of the front end of the cohesive zone, without a fully developed cohesive zone, or by unstable growth of the original flaw, when the crack opening displacement at the rear end of the cohesive zone reaches a critical value. The influence of the size of the structure compared to the length of the cohesive zone is investigated, and conditions for the limits of validity of the small scale yielding assumption are discussed. Comparisons are made between the maximum load and the length of the cohesive zone at instability resulting from the present analysis, and the values predicted by linear elastic fracture mechanics.

18.
Performance standards for detector systems often include requirements for probability of detection and probability of false alarm at a specified level of statistical confidence. This paper reviews the accepted definitions of confidence level and of critical value. It describes the testing requirements for establishing either of these probabilities at a desired confidence level. These requirements are computable in terms of functions that are readily available in statistical software packages and general spreadsheet applications. The statistical interpretations of the critical values are discussed. A table is included for illustration, and a plot is presented showing the minimum required numbers of pass-fail tests. The results given here are applicable to one-sided testing of any system with performance characteristics conforming to a binomial distribution.
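The computation described above can be sketched directly from the binomial distribution: find the smallest number of one-sided pass/fail trials that demonstrates a required probability of detection at a given confidence level. The 90%/95% example values are illustrative, not taken from the paper.

```python
from scipy import stats

def min_trials(pod_required, confidence, max_failures=0, n_max=10_000):
    """Smallest n such that, if the true detection probability were only
    pod_required, observing at most max_failures misses in n trials would
    have probability <= 1 - confidence (one-sided binomial demonstration)."""
    alpha = 1.0 - confidence
    for n in range(max_failures + 1, n_max + 1):
        # misses ~ Binomial(n, 1 - pod_required)
        if stats.binom.cdf(max_failures, n, 1.0 - pod_required) <= alpha:
            return n
    raise ValueError("n_max too small")

# Classic example: 90% POD at 95% confidence with zero misses -> 29 trials.
print(min_trials(0.90, 0.95, max_failures=0))
print(min_trials(0.90, 0.95, max_failures=1))
```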

19.
The mechanical performance of ceramic materials is highly dependent on the existence of incipient flaws. This paper investigates the relationship between the size of the pre-existing flaw and the failure stress for disc-shaped specimens of zirconia bioceramic subjected to an equibiaxial stress field. As the size of the initiating flaw increased, the stress at which discs failed decreased, allowing the fracture toughness of the material to be calculated. The value obtained, 8 MPa·m^(1/2), is in reasonable agreement with previous experience, giving confidence in the validation procedure used and the data obtained. For cyclic loading, periods of stable fatigue crack growth occurred, with initial defects extending to reach critical values. Based on data for discs that failed under monotonic loading conditions, it was possible to determine the critical flaw size and hence the degree of crack growth necessary for discs to fail from fatigue at a given peak cyclic stress. Predictive constant-flaw-size fatigue curves showed reasonable accuracy in that the estimated incipient flaw size at a given fatigue life was equivalent to the true flaw size measured from the fracture surface of failed disc specimens.
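A toughness value of the kind quoted above follows from a Griffith/Irwin-type relation, K_Ic = Y·σ_f·√(π·a_c). The sketch below applies that relation in both directions; the geometry factor and the example numbers are assumptions for illustration, not the paper's measurements.

```python
import numpy as np

def fracture_toughness(failure_stress, flaw_size, Y=1.12):
    """K_Ic = Y * sigma_f * sqrt(pi * a_c); stress in MPa, flaw size in m,
    result in MPa*sqrt(m).  Geometry factor Y is an assumption."""
    return Y * failure_stress * np.sqrt(np.pi * flaw_size)

def critical_flaw_size(peak_stress, K_Ic, Y=1.12):
    """Invert the same relation to get the flaw size at which fast fracture
    occurs at a given peak (cyclic) stress."""
    return (K_Ic / (Y * peak_stress)) ** 2 / np.pi

# Illustrative: a 20 um flaw failing at ~900 MPa gives roughly 8 MPa*sqrt(m).
print(fracture_toughness(failure_stress=900.0, flaw_size=20e-6))
print(critical_flaw_size(peak_stress=600.0, K_Ic=8.0))
```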

20.
Y. S. Dai, M. Xie, K. L. Poh, S. H. Ng, IIE Transactions, 2004, 36(12): 1183–1192
The multi-version programming technique is a method to increase the reliability of safety-critical software. In this technique a number of versions are developed and a voting scheme is used before a final result is provided. In the analysis of this type of system, a common assumption is the independence of the different versions. However, the different versions are usually interdependent and failures are correlated due to the nature of the product design and development. One version may fail simultaneously with another version because of a common cause. In this paper, a model for these dependent failures is developed and studied. Using the developed model, a reliability function can be easily computed. A method is also proposed to estimate the parameters of the model. Finally, as an application of the developed model, an optimal testing resource allocation problem is formulated and a genetic algorithm is presented to solve the problem.
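As a simple point of comparison with the dependency model developed in the paper, the sketch below evaluates a 2-out-of-3 voting scheme with independent version failures plus a beta-factor-style common-cause term; this structure is an assumption and is not the model proposed by the authors.

```python
from math import comb

def voting_reliability(p_fail, n_versions=3, k_required=2, p_common=0.0):
    """Probability that a k-out-of-n voted output is correct, where each
    version fails independently with probability p_fail and an additional
    common-cause event (probability p_common) fails every version at once."""
    p_ok = 1.0 - p_fail
    # P(at least k of n independent versions succeed)
    p_vote = sum(comb(n_versions, m) * p_ok**m * p_fail**(n_versions - m)
                 for m in range(k_required, n_versions + 1))
    return (1.0 - p_common) * p_vote

print(voting_reliability(p_fail=0.01))                   # independent versions only
print(voting_reliability(p_fail=0.01, p_common=0.001))   # with a common cause
```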
