Similar Documents
20 similar documents found.
1.
Many reliability experiments are not completely randomized. Instead they involve subsamples, blocks, split-plot structures, etc. A common analysis uses random effects to account for the impact of the experimental protocol. The two-stage method is an easy way for practitioners to incorporate random effects in the analysis. This article compares the performance of the two-stage method under Type I censored, Type II censored, and uncensored data from a Weibull distribution. We evaluate the effects of censoring type, censoring rate, sample size, and shape parameter on the two-stage method. We then apply the two-stage method to a real experiment and close with recommendations for practitioners on designing and analyzing reliability experiments.
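
As a rough Python sketch of the two-stage analysis (not the authors' exact implementation), the code below first fits a censored Weibull model within each whole plot and then analyzes the per-plot log-scale estimates as ordinary responses; all data, sample sizes, and parameter values are invented.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import ttest_ind

def weibull_mle(times, failed):
    # Censored Weibull likelihood: failures contribute log f(t), censored units log S(t)
    def negloglik(p):
        k, lam = np.exp(p)                           # optimize on the log scale to keep both positive
        z = (times / lam) ** k
        ll = np.sum(failed * (np.log(k) - np.log(lam) + (k - 1) * np.log(times / lam))) - z.sum()
        return -ll
    res = minimize(negloglik, x0=[0.0, np.log(times.mean())])
    return np.exp(res.x)                             # (shape, scale)

rng = np.random.default_rng(1)
pseudo = {0: [], 1: []}                              # stage-1 log-scale estimates, by treatment
for trt in (0, 1):
    for _ in range(4):                               # four whole plots per treatment
        t = rng.weibull(2.0, 10) * (100 + 30 * trt)  # hypothetical lifetimes
        failed = (t <= 150).astype(float)            # Type I censoring at 150 hours
        shape, scale = weibull_mle(np.minimum(t, 150), failed)
        pseudo[trt].append(np.log(scale))            # one pseudo-response per whole plot
print(ttest_ind(pseudo[1], pseudo[0]))               # stage 2: compare treatments on the pseudo-responses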

2.
In the traditional industrial verification process, when the aim is compliance with assigned specifications, it is difficult to find an affordable statistical method for the purpose. Most data tables in industrial procedures and standards deal with tolerance limits, neglecting the potential need to verify assigned specification limits. A two-sided tolerance interval, combined with a bivariate statistical hypothesis test, can be used to address this problem. The proposed risk-based approach leads to the determination of the minimum sample size with preestablished probabilities of Type I and Type II errors, which are essential elements for estimating safety and reliability risk. A novel method is proposed for determining the tolerance-interval testing factors. This approach calculates the testing factors from the deviations of the mean and the variance from the null hypothesis at a specified value of the Type II error. The deviations of the mean and variance are determined so that an assigned proportion of the population falls within the specification limits. Additional studies assess the robustness of the method in nonnormal environments and compare it with other methods.
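
For context, a minimal sketch of a two-sided tolerance-interval acceptance check, using Howe's classical approximation to the normal tolerance factor rather than the testing factors derived in the paper; the sample, specification limits, coverage, and confidence level below are illustrative.

import numpy as np
from scipy.stats import norm, chi2

def tolerance_factor(n, coverage=0.95, confidence=0.95):
    # Howe's approximation to the two-sided normal tolerance factor
    nu = n - 1
    zp = norm.ppf((1 + coverage) / 2)
    return zp * np.sqrt(nu * (1 + 1 / n) / chi2.ppf(1 - confidence, nu))

lsl, usl = 9.0, 11.0                                 # specification limits (illustrative)
x = np.random.default_rng(0).normal(10.0, 0.3, 30)
k = tolerance_factor(len(x))
lo, hi = x.mean() - k * x.std(ddof=1), x.mean() + k * x.std(ddof=1)
print(k, lsl <= lo and hi <= usl)                    # accept if the tolerance interval sits inside the specs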

3.
A statistical model for evaluating the effectiveness of motor vehicle inspection programs in reducing highway crashes is presented. The model is based on the assumption that the waiting time between highway crashes follows an exponential distribution. Since highway crashes are relatively rare events, it is assumed that the length of the study period is such that censoring occurs. Under these assumptions, the maximum likelihood estimate of the mean waiting time θ until a crash for the non-inspected (inspected) vehicles is obtained and the corresponding test statistic is derived. As mechanically caused accidents are only a small part of the overall accident picture, and since inspection should affect only this portion, sample size requirements are investigated for various combinations of θ, Δ (the increase in average time until a crash due to the effect of inspection), L (the length of the study period), and α = β (the probability of a Type I error set equal to the probability of a Type II error). For reasonable Δ, the required sample size is indeed sizable.
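
Under the exponential waiting-time assumption, the censored MLE of θ takes the familiar closed form "total time on test divided by the number of observed crashes". A sketch with invented values follows; θ, Δ, L, and the fleet size are placeholders, not the paper's figures.

import numpy as np

def exp_mle_censored(t, L):
    # MLE of the mean waiting time under Type I censoring at time L
    total_time = np.minimum(t, L).sum()              # total time on test
    return total_time / np.sum(t <= L)               # divided by the number of observed crashes

rng = np.random.default_rng(0)
theta, delta, L, n = 2.0, 1.0, 5.0, 500              # illustrative values (years, fleet size)
print(exp_mle_censored(rng.exponential(theta, n), L))          # non-inspected fleet
print(exp_mle_censored(rng.exponential(theta + delta, n), L))  # inspected fleet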

4.
This paper deals with estimation of the stress–strength reliability R = P(Y < X) when the stress and strength are two independent exponentiated Gumbel random variables with different shape parameters but a common scale parameter, using the maximum product of spacings under progressive Type-II hybrid censored samples. Two progressive Type-II hybrid censoring schemes are considered. Case I: the stress and strength samples have equal sizes and a common hybrid censoring time, and the product-of-spacings function is formed under the progressive Type-II hybrid censoring scheme. Case II: the stress and strength samples have different sizes, and the life-testing experiment with a progressive censoring scheme is terminated at a random time T ∈ (0, ∞). Maximum likelihood estimation and maximum product of spacings estimation under progressive Type-II hybrid censored samples are discussed for the stress–strength model, and the spacings-based estimates are compared with the classical maximum likelihood estimates. Furthermore, to compare the performance of the various cases, a Markov chain Monte Carlo simulation is conducted using iterative procedures such as Newton–Raphson or conjugate-gradient methods. Finally, two real datasets are analyzed for illustration: breaking strengths of jute fiber, and waiting times before service for the customers of two banks.
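
To illustrate the maximum product of spacings principle in its simplest, complete-sample form, here is a sketch for an exponential rate; the paper applies the same idea to the exponentiated Gumbel model under progressive Type-II hybrid censoring, which changes the spacings but not the principle. The data are simulated.

import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import expon

def mps_rate(x):
    # Maximize the product of spacings D_i = F(x_(i)) - F(x_(i-1)), with F(x_(0)) = 0, F(x_(n+1)) = 1
    xs = np.sort(x)
    def neg_log_spacings(lam):
        F = expon.cdf(xs, scale=1 / lam)
        D = np.diff(np.concatenate(([0.0], F, [1.0])))
        return -np.sum(np.log(np.clip(D, 1e-300, None)))   # clip guards against ties
    return minimize_scalar(neg_log_spacings, bounds=(1e-6, 1e3), method="bounded").x

x = np.random.default_rng(2).exponential(0.5, 40)    # true rate = 2
print(mps_rate(x))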

5.
This paper develops formulas that can be used in the design of multiple-criteria sampling plans or charts for fraction nonconforming, with sampling by variables or attributes. Products often have multiple requirements, and the usual acceptance tests or charts do not take this into account, so the overall quality of the product may be poorly controlled. We design tests or charts on the basis of the probabilities of Type I and Type II errors (α and β) that refer to acceptable and rejectable levels of the overall fraction of the product that is nonconforming. Further, recognizing that the average proportion of the product actually nonconforming on each characteristic may vary independently of the other characteristics, our formulas give protection on a 'worst case' basis.
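
As background for the two-point (α, β) design logic, the sketch below finds the smallest classical single-sampling attributes plan meeting both risk points; the paper's formulas extend this kind of calculation to multiple characteristics with worst-case protection. The levels p1 and p2 and the risks are illustrative.

from scipy.stats import binom

def two_point_plan(p1, p2, alpha, beta, n_max=2000):
    # Smallest (n, c) with P(accept | p1) >= 1 - alpha and P(accept | p2) <= beta
    for n in range(1, n_max):
        c = int(binom.ppf(1 - alpha, n, p1))         # smallest c meeting the producer's risk
        if binom.cdf(c, n, p2) <= beta:              # check the consumer's risk
            return n, c
    return None

print(two_point_plan(p1=0.01, p2=0.05, alpha=0.05, beta=0.10))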

6.
Using mean square error as the criterion, we compare two least squares estimates of the Weibull parameters, based on non-parametric estimates of the unreliability, with the maximum likelihood estimates (MLEs). The two non-parametric estimators are that of Herd–Johnson and one recently proposed by Zimmer. Data were generated by computer simulation for three small sample sizes (5, 10 and 15) with three multiply-censored patterns for each sample size. Our results indicate that the MLE is a better estimator of the Weibull characteristic value, θ, than the least squares estimators considered. No firm conclusion can be drawn regarding the best estimate of the Weibull shape parameter, although the use of maximum likelihood is not recommended for small sample sizes. Whenever least squares estimation of both Weibull parameters is appropriate, we recommend the Zimmer estimator of reliability.
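
A sketch of the Herd–Johnson unreliability estimates for a multiply censored sample, followed by the usual least squares Weibull fit on the linearized CDF; the data are invented, and the Zimmer estimator compared in the paper is not reproduced here.

import numpy as np

def herd_johnson(times, failed):
    # Survival updated only at failures: R <- R * r / (r + 1), where r is the reverse rank
    order = np.argsort(times)
    t, d = np.asarray(times, float)[order], np.asarray(failed)[order]
    n, R, pts = len(t), 1.0, []
    for i in range(n):
        r = n - i                                    # reverse rank of the i-th ordered unit
        if d[i]:
            R *= r / (r + 1.0)
            pts.append((t[i], 1.0 - R))              # (time, unreliability) plotting position
    return np.array(pts)

times = [35, 48, 60, 60, 72, 95, 110, 130, 130, 150]
failed = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]              # 0 marks a censored unit
pts = herd_johnson(times, failed)
x, y = np.log(pts[:, 0]), np.log(-np.log(1.0 - pts[:, 1]))   # linearized Weibull CDF
beta, b0 = np.polyfit(x, y, 1)
print(beta, np.exp(-b0 / beta))                      # least squares shape and characteristic value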

7.
One of the most interesting decision problems is how to select the most reliable design from among k competing designs. Under a Type II censoring plan, this paper constructs an MLR (modified likelihood ratio) rule together with a simple algorithm to compute the sample size, number of failures, and critical value called for by this rule. In addition, the performance of this selection rule is compared with that of the intuitive selection rule under several criteria; the MLR selection rule turns out to be the better of the two.

8.
The effects of the honing process on the friction and wear behavior of Twin Wire Arc Spray (TWAS) coated aluminum cylinder liners were investigated using a pin-on-reciprocating tribotester. Two types of coated cylinder liners were prepared for the tests: Type I, smooth honing (SH) with a ~40° honing angle, and Type II, helical structure honing (HSH) with a ~140° honing angle. The aluminum cylinder liners were coated with an Fe0.8C wire by the TWAS process. Under unlubricated conditions, the Type II specimen showed a lower coefficient of friction (COF) than the Type I specimen, because the grooves of Type II were large enough to trap wear particles that would otherwise contribute to three-body abrasive wear. Under lubricated conditions, Type I showed a lower COF owing to its lower roughness compared with Type II. The experimental results indicate that the TWAS process, combined with an optimized honing process for the cylinder liner, can be effectively utilized for engine applications.

9.
Lifetime data collected from reliability tests often exhibit significant heterogeneity caused by variations in manufacturing, which makes standard lifetime models inadequate. Finite mixture models provide more flexibility for modeling such data. In this paper, the Weibull-log-logistic mixture distribution is introduced as a new class of flexible models for heterogeneous lifetime data. Some statistical properties of the model are presented, including the failure rate function, moment generating function, and characteristic function. The identifiability of the class of all finite mixtures of Weibull-log-logistic distributions is proved. Maximum likelihood estimation (MLE) of the model parameters under Type I and Type II censoring schemes is derived, and numerical illustrations study the behavior of the resulting estimators. The model is applied to the hard-drive failure data published by the Backblaze data center, where it provides more flexibility than the univariate life distributions (Weibull, exponential, logistic, log-logistic, Fréchet). The failure rate of hard disk drives (HDDs) is obtained from the MLE estimates, and analysis of the failure rate function on the basis of SMART attributes shows that HDD failures can have different causes and mechanisms.
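
A minimal sketch of the censored mixture likelihood (Type I censoring only), using scipy's weibull_min and fisk (log-logistic) distributions; all parameter values, sample sizes, starting points, and the censoring time are invented for illustration.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min, fisk            # fisk is scipy's log-logistic

def negloglik(p, t, failed):
    w = 1.0 / (1.0 + np.exp(-p[0]))                  # mixing weight kept in (0, 1)
    k1, s1, k2, s2 = np.exp(p[1:])                   # shapes and scales kept positive
    pdf = w * weibull_min.pdf(t, k1, scale=s1) + (1 - w) * fisk.pdf(t, k2, scale=s2)
    sf = w * weibull_min.sf(t, k1, scale=s1) + (1 - w) * fisk.sf(t, k2, scale=s2)
    return -np.sum(np.where(failed, np.log(pdf), np.log(sf)))   # failures use f, censored use S

t = np.concatenate([weibull_min.rvs(1.5, scale=100, size=300, random_state=1),
                    fisk.rvs(2.5, scale=60, size=200, random_state=2)])
failed = t <= 250.0                                  # Type I censoring at 250 hours
t = np.minimum(t, 250.0)
res = minimize(negloglik, x0=[0.0, 0.0, np.log(100.0), 1.0, np.log(60.0)], args=(t, failed))
print(1.0 / (1.0 + np.exp(-res.x[0])), np.exp(res.x[1:]))      # fitted weight, shapes, scales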

10.
When analysing the effects of a factorial design, it is customary to take into account the probability of making a Type I error (considering an effect significant when it is not), but not the probability of making a Type II error (considering an effect non-significant when it is significant). Making a Type II error, however, may lead to incorrect decisions regarding the values the factors should take or how subsequent experiments should be conducted. In this paper, we introduce the concept of the minimum effect size of interest and present a visualization method for selecting the critical value of the effects, the threshold above which an effect should be considered significant, which takes both the Type I and Type II error probabilities into account.
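
One way to make the trade-off concrete, assuming the effect estimate is approximately normal with known standard error: the scan below tabulates α and β over candidate critical values, with the minimum effect size of interest and the standard error both illustrative.

import numpy as np
from scipy.stats import norm

def alpha_beta(c, se, delta):
    # alpha: declare |effect| > c when the true effect is 0; beta: miss a true effect of size delta
    alpha = 2 * norm.sf(c / se)
    beta = norm.cdf((c - delta) / se) - norm.cdf((-c - delta) / se)
    return alpha, beta

se, delta = 1.0, 3.0                                 # effect standard error, minimum effect of interest
for c in np.linspace(0.5, 3.0, 6):                   # candidate critical values
    a, b = alpha_beta(c, se, delta)
    print(f"c = {c:.2f}  alpha = {a:.3f}  beta = {b:.3f}")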

11.
Point estimation of the scale and location parameters of the extreme-value (Type I) distribution by linear functions of order statistics from Type II progressively censored samples is investigated. Four types of linear estimators are considered: the best linear unbiased (BLU), an approximation to the BLU, unweighted regression, and a linearized maximum likelihood estimator. Linear transformations of the estimators are also considered for reducing mean square error. Exact bias, variance, and mean square error comparisons of the estimators are made for several censoring patterns. Since the natural logarithm of a Weibull variate has an extreme-value distribution, the investigation also applies to estimation for Weibull distributions.

12.
13.
Modern products frequently feature monitors designed to detect actual or impending malfunctions. False alarms (Type I errors) or excessive delays in detecting real malfunctions (Type II errors) can seriously reduce a monitor's utility. Sound engineering practice includes physical evaluation of error rates. Type II error rates are relatively easy to evaluate empirically. However, adequate evaluation of a low Type I error rate is difficult without accelerated testing concepts: inducing false alarms using artificially low thresholds and then selecting production thresholds by appropriate extrapolation, as outlined here. This acceleration methodology allows informed determination of detection thresholds and confidence in monitor performance, with substantial reductions over current alternatives in the time and cost required for monitor development.
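
The abstract does not spell out the extrapolation model, but the flavor of the acceleration idea can be sketched by assuming a log-linear relation between the detection threshold and the false-alarm rate; every number below is invented.

import numpy as np

thresholds = np.array([1.0, 1.5, 2.0, 2.5])          # artificially low settings that provoke alarms
rates = np.array([120.0, 35.0, 9.0, 2.4])            # observed false alarms per 1000 h (made up)
b1, b0 = np.polyfit(thresholds, np.log(rates), 1)    # assumed log-linear acceleration model
target = 1e-3                                        # desired false-alarm rate in production
print((np.log(target) - b0) / b1)                    # extrapolated production threshold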

14.
Reliability experiments determine which factors drive product reliability. The reliability or lifetime data collected in these experiments often follow distinctly non-normal distributions and typically include censored observations, which occur when products do not fail within the allotted test time. The experimental design should accommodate the skewed nature of the response and allow for censored observations. To account for these design and analysis considerations, Monte Carlo simulation is frequently used to evaluate experimental design properties. Simulation provides accurate power calculations as a function of sample size, allowing researchers to determine adequate sample sizes at each level of the treatment; however, simulation can be inefficient for comparing multiple experiments of various sizes. We present a closed-form approach for calculating power, based on the noncentral chi-squared approximation to the distribution of the likelihood ratio statistic for large samples. The solution can be used to rapidly compare multiple designs and supports trade-space analyses between power, effect size, model formulation, sample size, censoring rate, and design type. To demonstrate the efficiency of our approach, we provide a comparison with estimates from simulation.
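
A sketch of the closed-form power calculation, assuming the likelihood ratio statistic is approximately noncentral chi-squared under the alternative; the per-unit noncentrality used here (0.1) is a placeholder, whereas in the paper it would follow from the lifetime model, effect size, and censoring rate.

from scipy.stats import chi2, ncx2

def lr_power(alpha, df, noncentrality):
    # Power = P(LR statistic exceeds the central chi-squared critical value)
    crit = chi2.ppf(1 - alpha, df)
    return ncx2.sf(crit, df, noncentrality)

for n in [10, 20, 40, 80]:                           # noncentrality grows linearly with sample size
    print(n, lr_power(alpha=0.05, df=1, noncentrality=0.1 * n))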

15.
In this paper, we give variables sampling plans for items whose failure times are distributed as either extreme-value variates or Weibull variates (the logarithms of which are extreme-value variates). Tables of acceptance regions and operating characteristics are given for sample sizes n ranging from 3 to 18. The tables allow for Type II censoring, with censoring number r ranging from 3 to n. To fix the maximum time on test, the sampling plan also allows for Type I censoring.

Acceptance or rejection is based upon a statistic incorporating best linear invariant estimates or, alternatively, maximum likelihood estimates of the location and scale parameters of the underlying extreme-value distribution. The operating characteristics are computed using an approximation discussed by Fertig and Mann (1980).

16.
Multi-parameter one-sided hypothesis testing problems arise naturally in many applications. We are particularly interested in effective tests for monitoring multiple quality indices in forestry products. Our search reveals that the literature contains many effective statistical methods for normal data, and that these methods can easily be used to test hypotheses about parameter values that admit asymptotically normal estimators. We find the classical likelihood ratio test unsatisfactory because, to control its size, it must cope with the least favorable distributions at the cost of power. In this article, we find a novel way to slightly ease the size control and thereby obtain a much more powerful test. Simulation confirms that the new test retains good control of the Type I error and is markedly more powerful than the likelihood ratio test as well as many competitors based on normal data. The new method performs well in the context of monitoring multiple quality indices.

17.
This paper evaluates different methods of calculating the type-I and type-II errors of a measurement system. We apply the bootstrap method to construct confidence intervals for the type-I and type-II errors and compare the proposed method with the generalized inference method. Several factors, such as the sample size, the measurement error, the process mean, and the process variation, are simulated to validate the performance. The simulation results show that the two methods have almost the same performance. In addition, we develop a computer program that can evaluate the error of a measurement system without changing the underlying information or data. Two case studies on nano-measurement data demonstrate the application. The simulations indicate that the sample size has an influence in all cases, and that the type-I and type-II errors decrease as the measurement error increases; the errors are affected by the measurement error, the process mean, and the process deviation. The case studies show that the development of nanotechnology requires immediate attention to measurement capability.
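
A sketch of a percentile bootstrap for the two misclassification rates of a gauge, assuming a normal process and normal measurement error; the gauge-study layout, spec limits, and variance decomposition are invented for illustration and are not the paper's generalized-inference setup.

import numpy as np

rng = np.random.default_rng(4)
lsl, usl = 9.0, 11.0                                 # specification limits (illustrative)

def error_rates(mu, sp, sm, m=20_000):
    # Monte Carlo: Type I = conforming part measured out of spec; Type II = the reverse
    x = rng.normal(mu, sp, m)                        # true part values
    y = x + rng.normal(0.0, sm, m)                   # measured values
    conf = (x >= lsl) & (x <= usl)
    meas = (y >= lsl) & (y <= usl)
    return np.mean(conf & ~meas) / conf.mean(), np.mean(~conf & meas) / (~conf).mean()

parts = rng.normal(10.0, 0.4, 50)                    # gauge study: 50 parts, 5 repeats each (invented)
reps = parts[:, None] + rng.normal(0.0, 0.1, (50, 5))
boot = []
for _ in range(200):                                 # percentile bootstrap over parts
    r = reps[rng.integers(0, 50, 50)]
    sm = np.sqrt(np.mean(np.var(r, axis=1, ddof=1)))                     # repeatability
    sp = np.sqrt(max(np.var(r.mean(axis=1), ddof=1) - sm**2 / 5, 1e-8))  # part-to-part
    boot.append(error_rates(r.mean(), sp, sm))
print(np.percentile(np.array(boot), [2.5, 97.5], axis=0))                # 95% CIs for the two rates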

18.
The standard efficient testing procedures in the Generalized Inverse Gaussian (GIG) family (also known as the Halphen Type A family) are likelihood ratio tests, and hence rely on maximum likelihood (ML) estimation of the three parameters of the GIG. The particular form of GIG densities, involving modified Bessel functions, in general prevents a closed-form expression for the ML estimators, which are obtained at the expense of complex numerical approximation methods. Method of moments (MM) estimators, by contrast, have concise expressions, but tests based on them lack efficiency compared with likelihood ratio tests. This is why, in recent years, trade-offs between ML and MM estimators have been proposed, resulting in simpler yet not completely efficient estimators and tests. In the present paper, we propose not such a trade-off but rather an optimal combination of both methods: our tests inherit efficiency from an ML-like construction and simplicity from the MM estimators of the nuisance parameters. This goal is reached by attacking the problem from a new angle, namely via the Le Cam methodology. Besides providing simple efficient testing methods, the theoretical background of this methodology further allows us to write out explicit power expressions for our tests. A Monte Carlo simulation study shows that, even at small sample sizes, our simpler procedures perform at least as well as the complex likelihood ratio tests. We conclude the paper by applying our findings to two real data sets.

19.
The primary aim of this paper is to extend the treatment of inspection error to chain sampling schemes, an area that has not been dealt with in the literature. A mathematical model is developed to investigate the performance of chain sampling schemes under constant inspection errors. Expressions for performance measures, such as the operating characteristic function, average total inspection, and average outgoing quality, are derived to aid the analysis of a general chain sampling scheme, ChSP-4A(c1, c2)r, developed by Frishman. This study reveals that as the Type I inspection error increases, the probability of acceptance decreases, and as the Type II inspection error increases, the acceptance probability increases. The effect of the Type II error on the probability of acceptance is very marginal compared with that of the Type I error, especially when the true fraction nonconforming is small. In addition, the effects of inspection errors can be 'eliminated' by transforming the plan to its equivalent perfect-inspection counterpart, greatly reducing the complexity of the analysis. The effects of other sampling parameters are also studied to serve as a foundation for future plan design.
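
The paper analyzes Frishman's ChSP-4A; as a simpler sketch of the "equivalent perfect inspection" transformation, the snippet below substitutes the apparent fraction nonconforming into the OC function of Dodge's basic ChSP-1 chain plan. Plan parameters and error rates are illustrative.

from scipy.stats import binom

def apparent_p(p, e1, e2):
    # Fraction classified nonconforming under Type I (e1) and Type II (e2) inspection errors
    return p * (1 - e2) + (1 - p) * e1

def chsp1_pa(p, n, i):
    # Dodge's ChSP-1: accept on 0 defectives, or on 1 if the preceding i samples had none
    p0, p1 = binom.pmf(0, n, p), binom.pmf(1, n, p)
    return p0 + p1 * p0 ** i

for p in [0.005, 0.01, 0.02, 0.05]:
    print(p, chsp1_pa(apparent_p(p, e1=0.01, e2=0.05), n=10, i=2))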

20.
A common type of reliability data is right-censored time-to-failure data. In this article, we develop a control chart to monitor time-to-failure data in the presence of right censoring using weighted rank tests. On the basis of the asymptotic properties of the rank statistics, we derive generic formulae for the operating characteristic functions of the control chart, showing the relationship between the Type I error probability, Type II error probability, sample size, and hazard rate change. We present case studies to illustrate the design procedure and the effectiveness of the proposed control chart system, and we investigate and compare the performance of the proposed monitoring procedure with some available monitoring techniques for nonconformities.
