Similar Articles (20 results)
1.

Many practical situations have both a quality characteristic and a reliability characteristic with the goal to find an appropriate compromise for the optimum conditions. Standard analyses of quality and reliability characteristics in designed experiments usually assume a completely randomized design. However, many experiments involve restrictions on randomization, e.g., subsampling, blocking, split-plot.

This article considers an experiment involving both a quality characteristic and a reliability characteristic (lifetime) within a subsampling protocol. The particular experiment uses Type I censoring for the lifetime. Previous work on analyzing reliability data within a subsampling protocol assumed Type II censoring. This article extends such an analysis for Type I censoring. The method then uses a desirability function approach combined with the Pareto front to obtain a trade-off between the quality and reliability characteristics. A case study illustrates the methodology.
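The desirability-plus-Pareto trade-off described above can be illustrated in a few lines. This is a generic sketch, not the authors' implementation: individual desirabilities use the Derringer-Suich larger-is-better form, the overall score is their geometric mean, and a dominance filter extracts the Pareto front from candidate (quality, reliability) pairs. All function names are hypothetical.

```python
import math

def d_larger_is_better(y, low, high, s=1.0):
    # Derringer-Suich desirability for a larger-is-better response:
    # 0 below `low`, 1 above `high`, a power curve in between.
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return ((y - low) / (high - low)) ** s

def overall_desirability(ds):
    # Geometric mean of the individual desirabilities.
    if any(d == 0 for d in ds):
        return 0.0
    return math.exp(sum(math.log(d) for d in ds) / len(ds))

def pareto_front(points):
    # Keep (quality, reliability) settings not dominated by any other
    # candidate; larger is better in both coordinates.
    return [p for p in points
            if not any(q[0] >= p[0] and q[1] >= p[1] and q != p
                       for q in points)]
```

A practitioner would score each candidate operating condition with `overall_desirability` and then restrict attention to `pareto_front` of the two responses before choosing a compromise.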

2.
A method is presented for estimating dispersion effects (DE) from robust design experiments (RDE) with control and noise factors involving censored response data. The method is developed to discern the significance of DE from RDE and aims at analyzing a multi-level/multi-factor experiment. Censored data are imputed by a regression-based imputation technique, assuming that the distribution of lifetime before and after censoring is identical. The residuals are then modeled to identify important DE, assuming that the distribution of the observed random variables of the model is the same with or without censored response data. Finally, the method is demonstrated through a numerical example. Copyright © 2001 John Wiley & Sons, Ltd.

3.
In this paper, we propose three new control charts for monitoring the lower Weibull percentiles under complete data and Type-II censoring. Transforming the Weibull distribution to the smallest extreme value distribution, Pascual et al (2017) presented an exponentially weighted moving average (EWMA) control chart, hereafter referred to as EWMA-SEV-Q, based on a pivotal quantity conditioned on ancillary statistics. We extend their concept to construct a cumulative sum (CUSUM) control chart, denoted CUSUM-SEV-Q, and provide more insight into the statistical properties of the monitoring statistic. Additionally, transforming a Weibull distribution to a standard normal distribution, we propose EWMA and CUSUM control charts, denoted EWMA-YP and CUSUM-YP, respectively, based on a pivotal quantity for monitoring the Weibull percentiles with complete data. With complete data, the EWMA-YP and CUSUM-YP control charts perform better than the EWMA-SEV-Q and CUSUM-SEV-Q control charts in terms of average run length. Under Type-II censoring, the EWMA-SEV-Q chart is slightly better than the CUSUM-SEV-Q chart in terms of average run length. Two numerical examples illustrate the applications of the proposed control charts.
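As a point of reference for how such charts operate, here is a minimal sketch of the standard EWMA recursion with time-varying control limits for an approximately normal monitoring statistic. It is generic textbook machinery, not the EWMA-SEV-Q or EWMA-YP constructions themselves, and the parameter defaults are illustrative.

```python
import math

def ewma_run_length(xs, mu0, sigma, lam=0.2, L=3.0):
    """Return the first index (1-based) at which the EWMA statistic
    z_i = lam*x_i + (1-lam)*z_{i-1} leaves its time-varying limits
    mu0 +/- L*sigma*sqrt(lam/(2-lam)*(1-(1-lam)^(2i))), or None."""
    z = mu0
    for i, x in enumerate(xs, start=1):
        z = lam * x + (1 - lam) * z
        half_width = L * sigma * math.sqrt(
            lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
        if abs(z - mu0) > half_width:
            return i
    return None
```

The average run length comparisons in the abstract are obtained by simulating many such run lengths under in-control and shifted distributions of the chart statistic.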

4.
Left censoring or left truncation occurs when specific failure information on machines is not available before a certain age. If only the number of failures but not the actual failure times before a certain age is known, we have left censoring. If neither the number of failures nor the times of failure are known, we have left truncation. A datacenter will typically include servers and storage equipment installed on different dates. However, data collection on failures and repairs may not begin on the installation date. Often, the capture of reliability data starts only after the initiation of a service contract on a particular date. Thus, such data may exhibit severe left censoring or truncation, since machines may have operated for considerable time periods without any reliability history being recorded. This situation is quite different from the notion of left censoring in non-repairable systems, which has been dealt with extensively in the literature. Parametric modeling methods are less intuitive when the data has severe left censoring. In contrast, non-parametric methods based on the Mean Cumulative Function (MCF), recurrence rate plots, and calendar time analysis are simple to use and can provide valuable insights into the reliability of repairable systems, even under severe left censoring or truncation. The techniques shown have been successfully applied at a large server manufacturer to quantify the reliability of computer servers at customer sites. In this discussion, the techniques will be illustrated with actual field examples.
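The nonparametric MCF estimate mentioned above has a simple form: at each recurrence time, add (number of events at that time) divided by (number of units then under observation). A minimal sketch, assuming each unit's history is reduced to an observation window plus its failure times; the window start is what accommodates histories that only begin at a service-contract date. Names are illustrative.

```python
def mcf(units):
    """Nonparametric Mean Cumulative Function for repairable units.
    units: list of (start, end, failure_times); a unit is 'at risk'
    at time t when start <= t <= end. Returns [(t, MCF(t)), ...]."""
    times = sorted({t for _, _, fails in units for t in fails})
    total, curve = 0.0, []
    for t in times:
        at_risk = sum(1 for start, end, _ in units if start <= t <= end)
        events = sum(fails.count(t) for _, _, fails in units)
        total += events / at_risk       # MCF increment at time t
        curve.append((t, total))
    return curve
```

Plotting the returned curve against age (or against calendar time, with windows expressed in calendar dates) gives the recurrence-rate views described in the abstract.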

5.
This paper deals with estimation of the stress-strength reliability model R = P(Y < X) when the stress and strength are two independent exponentiated Gumbel random variables with different shape parameters but a common scale parameter, and maximum product spacing estimates are obtained under progressive Type-II hybrid censored samples in two cases. In Case I, the stress and strength samples have equal sizes and a common hybrid censoring time; in Case II, the sample sizes differ, and the life-testing experiment under the progressive censoring scheme is terminated at a random time T ∈ (0, ∞). Maximum likelihood estimation and maximum product spacing estimation under progressive Type-II hybrid censored samples are discussed for the stress-strength model, and the product spacing estimates are compared with the classical maximum likelihood estimates. Furthermore, to compare the performance of the various cases, a Markov chain Monte Carlo simulation is conducted using iterative procedures such as Newton-Raphson or conjugate-gradient. Finally, two real datasets are analyzed for illustrative purposes: the first on the breaking strengths of jute fiber, and the second on the waiting times before service for the customers of two banks.
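Maximum product of spacings itself is easy to illustrate in a simplified setting. The sketch below applies it to an uncensored exponential sample via a grid search; the paper's exponentiated Gumbel, hybrid-censored setting is considerably more involved, and the function names and grid here are illustrative only.

```python
import math

def log_spacing_sum(lam, xs):
    # Sum of log spacings of the fitted exponential CDF at the order
    # statistics, including both tail spacings.
    F = [1 - math.exp(-lam * x) for x in xs]
    probs = [F[0]] + [F[i] - F[i - 1] for i in range(1, len(F))] + [1 - F[-1]]
    if any(p <= 0 for p in probs):
        return float('-inf')
    return sum(math.log(p) for p in probs)

def mps_exponential_rate(data):
    # Grid search for the rate maximizing the product of spacings,
    # centered at the ML estimate len(xs)/sum(xs).
    xs = sorted(data)
    center = len(xs) / sum(xs)
    grid = [center * (0.2 + 0.01 * i) for i in range(401)]
    return max(grid, key=lambda lam: log_spacing_sum(lam, xs))
```

In the censored settings of the paper the spacing function is modified to account for the censoring scheme, but the principle of maximizing the product of CDF spacings is the same.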

6.
The progressive censoring scheme has received a considerable amount of attention in the last 15 years, and during the last few years the joint progressive censoring scheme has gained some popularity. Recently, Mondal and Kundu ("A New Two Sample Type-II Progressive Censoring Scheme," Communications in Statistics-Theory and Methods) introduced a balanced two-sample Type-II progressive censoring scheme and provided the exact inference when the two populations are exponentially distributed. In this article, we consider the case when the two populations follow Weibull distributions with a common shape parameter and different scale parameters. We obtain the maximum likelihood estimators of the unknown parameters. It is observed that the maximum likelihood estimators cannot be obtained in explicit form; hence, we propose approximate maximum likelihood estimators, which can be. We construct asymptotic and bootstrap confidence intervals of the population parameters. Further, we derive an exact joint confidence region of the unknown parameters. We propose an objective function based on the expected volume of this confidence region and use it to obtain the optimum progressive censoring scheme. Extensive simulations have been performed to assess the performance of the proposed method, and one real data set has been analyzed for illustrative purposes.

7.
When analysing the effects of a factorial design, it is customary to take into account the probability of making a Type I error (the probability of considering an effect significant when it is non‐significant), but not to consider the probability of making a Type II error (the probability of considering an effect as non‐significant when it is significant). Making a Type II error, however, may lead to incorrect decisions regarding the values that the factors should take or how subsequent experiments should be conducted. In this paper, we introduce the concept of minimum effect size of interest and present a visualization method for selecting the critical value of the effects, the threshold value above which an effect should be considered significant, which takes into account the probability of Type I and Type II errors. Copyright © 2006 John Wiley & Sons, Ltd.
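Under the usual normal approximation for effect estimates, both error probabilities for a given critical value can be computed directly. This is a hedged sketch of that arithmetic, not the paper's visualization method: the effect estimate is taken as N(0, sigma^2) under the null and N(delta, sigma^2) when the true effect equals the minimum effect size of interest delta.

```python
import math

def phi(z):
    # Standard normal CDF via math.erf.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def error_rates(c, sigma, delta):
    """Type I and Type II error rates when an effect is declared
    significant if its estimate falls outside +/- c."""
    alpha = 2 * (1 - phi(c / sigma))                    # false alarm
    beta = phi((c - delta) / sigma) - phi((-c - delta) / sigma)  # miss
    return alpha, beta
```

Scanning `c` over a range and plotting alpha and beta together reproduces the basic trade-off the visualization method is built on: lowering the critical value shrinks beta at the cost of alpha.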

8.
In many industrial applications, it is not always feasible to continuously monitor life testing experiments to collect lifetime data. Moreover, intermediate removals of the test units from the life testing experiment are sometimes essential. Progressive Type-I interval censoring schemes are useful in these scenarios. Optimal planning of such progressive Type-I interval censoring schemes is an important issue for the experimenter, as optimal plans can achieve the desired objectives using far fewer resources. This article provides Bayesian D-optimal progressive Type-I interval censoring schemes, assuming that the lifetime follows a log-normal distribution. An algorithm is provided to find the optimal censoring schemes and the number of inspections. The algorithm is then used to obtain the optimal Bayesian progressive Type-I interval censoring schemes in two different contexts. The resulting optimal Bayesian censoring schemes are compared with the corresponding locally optimal censoring schemes. A detailed sensitivity analysis is performed to investigate the effect of prior information. The sampling variation associated with the optimal censoring schemes is visualized through a simulation study.

9.
Robust parameter design (RPD) aims to build product quality into the early design phase of product development by optimizing the operating conditions of process parameters. The vast majority of current RPD studies are based on an uncensored random sample from a process distribution. In reality, censoring schemes are widely implemented in lifetime testing, survival analysis, and reliability studies, in which the value of a measurement is only partially known; however, there has been little work on RPD when censored data are under study. To fill this gap, this paper proposes response surface-based RPD models that focus on survival times and hazard rate. Primary tools used in this paper include the Kaplan-Meier estimator, Greenwood's formula, the Cox proportional hazards regression method, and a nonlinear programming method. The experimental modeling and optimization procedures are demonstrated through a numerical example. Various response surface-based RPD optimization models are proposed, and their RPD solutions are compared.

10.
In this article we compute the expected Fisher information and the asymptotic variance–covariance matrix of the maximum likelihood estimates based on a progressively type II censored sample from a Weibull distribution by direct calculation as well as the missing-information principle. We then use these values to determine the optimal progressive censoring plans. Three optimality criteria are considered, and some selected optimal progressive censoring plans are presented according to these optimality criteria. We also discuss the construction of progressively censored reliability sampling plans for the Weibull distribution. Three illustrative examples are provided with discussion.

11.
Based on masked system lifetime data, this paper discusses reliability estimation for the Burr XII components of a series system. Using Type-II (fixed failure number) censored samples, Bayes estimates of the unknown component parameters, the reliability function, and the failure rate function are derived under squared error loss, q-symmetric entropy loss, LINEX loss, and MLINEX loss. Finally, Monte Carlo simulation is used to study how the censoring number and the masking level affect the estimates and to compare the various estimators.

12.
Data collection and analysis in the field of nuclear safety is an important task because it drives improvement of both the safety and the reliability of the plant. Occupational exposure data analysis is therefore used to measure the safety or reliability of radiation protection at a given facility, and it is also a basic input for decisions on radiation protection regulations and recommendations. A common practice in radiation protection is to record a zero for observations below the minimum detection limit (MDL), which leads to an underestimation of true doses and an overestimation of the dose-response relationship. Exposure data (both external and internal) are collected by monitoring each individual, and this kind of monitoring is generally graded as low-level monitoring. In such low-level monitoring, the occurrence of exposures below the MDL introduces statistical complications in estimating the mean and variance because the data are censored, i.e., observations below the MDL are only marked. In Type I censoring, the point of censoring (e.g., the detection limit) is fixed a priori for all observations and the number of censored observations varies; in Type II censoring, the number of censored observations is fixed a priori and the point of censoring varies. The methodology generally followed for estimating the mean and variance from such censored data was to replace each missing dose by half the MDL. In this paper, the authors instead use the maximum likelihood estimation (MLE) approach to estimate the mean and standard deviation. A computer code, BDLCENSOR, has been developed in which these MLE-based algorithms are implemented; an expectation maximisation (EM) algorithm has also been implemented. The code is written in Visual BASIC 6.0. The paper describes the details of the algorithms adopted for handling such censored data to estimate a bias-free mean and standard deviation.
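The abstract does not give the algorithms inside BDLCENSOR, but the standard EM iteration for normal data with readings below a fixed detection limit is easy to sketch: the E-step replaces each censored reading with the conditional moments of a normal truncated above at the MDL, and the M-step refits the mean and variance from the completed sufficient statistics. A minimal illustration, not the BDLCENSOR code; the normality assumption and all names are ours.

```python
import math

SQRT2PI = math.sqrt(2 * math.pi)

def npdf(z):
    return math.exp(-z * z / 2) / SQRT2PI

def ncdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def em_censored_normal(observed, n_cens, mdl, iters=200):
    """EM estimates of (mu, sigma) when n_cens normal readings fell
    below the detection limit mdl and only `observed` values are known."""
    n = len(observed) + n_cens
    mu = sum(observed) / len(observed)
    var = sum((x - mu) ** 2 for x in observed) / len(observed)
    sigma = math.sqrt(var) if var > 0 else 1.0
    for _ in range(iters):
        a = (mdl - mu) / sigma
        lam = npdf(a) / ncdf(a)                  # inverse Mills ratio, X < mdl
        m1 = mu - sigma * lam                    # E[X | X < mdl]
        v = sigma ** 2 * (1 - a * lam - lam ** 2)  # Var[X | X < mdl]
        m2 = v + m1 * m1                         # E[X^2 | X < mdl]
        s1 = sum(observed) + n_cens * m1         # completed sum
        s2 = sum(x * x for x in observed) + n_cens * m2  # completed sum of squares
        mu = s1 / n
        sigma = math.sqrt(max(s2 / n - mu * mu, 1e-12))
    return mu, sigma
```

Replacing the censored readings by MDL/2 instead, as in the traditional practice the paper criticizes, biases both the mean and the standard deviation; the EM fit removes that bias under the assumed distribution.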

13.
The ability to model lifetime data from life test experiments is of paramount importance to all manufacturers, engineers and consumers. The Weibull distribution is commonly used to model data from life tests. Standard Weibull analysis assumes a completely randomized design. However, not all life test experiments come from completely randomized designs, and experiments involving sub-sampling require a method for properly modeling the data. We provide a Weibull nonlinear mixed model (NLMM) methodology for incorporating random effects in the analysis and apply it to a reliability life test on glass capacitors. We compare the NLMM methodology to other available methods for incorporating random effects in reliability analysis. A simulation study reveals that the method proposed in this paper is robust to both model misspecification and increasing levels of variance of the random effect. Copyright © 2012 John Wiley & Sons, Ltd.

14.
Accelerated life testing is widely used in product life testing experiments because it can quickly provide information on lifetime distributions by testing products or materials at higher-than-normal levels of stress, such as pressure, temperature, vibration, voltage, or load, to induce early failures. In this paper, a step-stress partially accelerated life test (SS-PALT) is considered under progressively type-II censored data with random removals, where the removals from the test are assumed to follow a binomial distribution and the lifetimes of the tested items are assumed to follow a length-biased weighted Lomax distribution. The maximum likelihood method is used to estimate the model parameters of the length-biased weighted Lomax, and asymptotic confidence intervals for the model parameters are evaluated using the Fisher information matrix. The Bayesian estimators cannot be obtained in explicit form, so the Markov chain Monte Carlo method is employed, which yields both the Bayesian estimates and the credible intervals of the involved parameters. The precision of the Bayesian estimates and the maximum likelihood estimates is compared by simulation, and the performance of the considered confidence intervals is compared for different parameter values and sample sizes. The bootstrap confidence intervals give more accurate results than the approximate confidence intervals, since the former are shorter than the latter for different sample sizes, observed failures, and censoring schemes, in most cases; likewise, the percentile bootstrap confidence intervals are shorter, and hence more accurate, than the bootstrap-t intervals in most cases. Further performance comparison is conducted through experiments with real data.
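For concreteness, the percentile bootstrap interval compared above works as follows: resample the data with replacement many times, recompute the statistic on each resample, and read the interval off the empirical percentiles. A generic sketch, not tied to the SS-PALT model; names and defaults are illustrative.

```python
import random

def percentile_bootstrap_ci(data, stat, level=0.95, B=2000, seed=1):
    """Percentile bootstrap confidence interval for stat(data)."""
    rng = random.Random(seed)
    n = len(data)
    # B bootstrap replicates of the statistic, sorted.
    reps = sorted(stat([rng.choice(data) for _ in range(n)])
                  for _ in range(B))
    lo = reps[int((1 - level) / 2 * B)]
    hi = reps[int((1 + level) / 2 * B) - 1]
    return lo, hi
```

The bootstrap-t variant additionally studentizes each replicate by an estimated standard error before taking percentiles, which is why the two interval types can differ in length.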

15.
A common type of reliability data is the right censored time‐to‐failure data. In this article, we developed a control chart to monitor the time‐to‐failure data in the presence of right censoring using weighted rank tests. On the basis of the asymptotic properties of the rank statistics, we derived the generic formulae for the operating characteristic functions of the control chart to show the relationship between type I error probability, type II error probability, sample size, and hazard rate change. We presented case studies to illustrate the design procedure and the effectiveness of the proposed control chart system. We also investigated and compared the performance of the proposed monitoring procedure with some available monitoring techniques for nonconformities. Copyright © 2011 John Wiley & Sons, Ltd.

16.
We consider the problem of estimating multicomponent stress-strength (MSS) reliability under progressive Type II censoring when the stress and strength variables follow unit Gompertz distributions with a common scale parameter. We estimate MSS reliability under both frequentist and Bayesian approaches. Bayes estimates are obtained using the Lindley approximation and the Metropolis-Hastings algorithm. Further, we obtain uniformly minimum variance unbiased estimates of the reliability when the common scale parameter is known. Asymptotic and bootstrap confidence intervals and highest posterior density credible intervals are constructed. We perform Monte Carlo simulations to compare the performance of the proposed estimates and present a discussion. Finally, three real data sets are analyzed for illustrative purposes.
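As a quick sanity check on any closed-form estimate of R_{s,k}, multicomponent stress-strength reliability can be approximated by plain Monte Carlo. A generic sketch, not specific to the unit Gompertz model; the samplers are passed in, and all names are illustrative.

```python
import random

def mss_reliability(draw_strength, draw_stress, s, k, n=100000, seed=7):
    """Monte Carlo estimate of multicomponent stress-strength reliability
    R_{s,k} = P(at least s of k independent strengths exceed the stress)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        y = draw_stress(rng)
        if sum(draw_strength(rng) > y for _ in range(k)) >= s:
            hits += 1
    return hits / n
```

For identically distributed continuous stress and strength with s = k = 1, symmetry gives R = 0.5, which makes a convenient check of the machinery.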

17.
Exponentially weighted moving average (EWMA) control charts are widely used for the detection of small shifts, as opposed to Shewhart charts, which are commonly used for the detection of large shifts in a process. Many interesting features of EWMA charts are available in the literature, mainly for complete data. This study investigates EWMA control charts under Type-I censoring for Poisson-exponential distributed lifetimes. Two commonly used sampling schemes, simple random sampling and ranked set sampling, are considered. The monitoring of mean level shifts using censored data is of great interest in many applied problems, and the idea of conditional expected values is employed here to monitor small mean level shifts. The performance of the EWMA charts is evaluated using the average run length, extra quadratic loss, and performance comparison index measures. Optimum sample-size comparisons for the specified and unspecified parameter cases are also part of this study. Moreover, an illustrative example and a case study are discussed for practical consideration. It is observed that varying censoring rates affect the performance of the chart, depending upon the type of sampling scheme and the size of the shift. Copyright © 2015 John Wiley & Sons, Ltd.
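The conditional expected value idea is simplest to see for plain exponential lifetimes (the Poisson-exponential case in the abstract is more involved): by memorylessness, an observation Type-I censored at c has conditional mean c + theta0, and the chart monitors the adjusted sample. An illustrative sketch with hypothetical names, not the authors' chart design.

```python
def cev_adjust(xs, c, theta0):
    """Replace observations Type-I censored at c with their conditional
    expected value E[X | X > c] = c + theta0, which holds exactly for
    exponential lifetimes with in-control mean theta0 (memorylessness)."""
    return [x if x < c else c + theta0 for x in xs]

def monitored_mean(xs, c, theta0):
    # The chart statistic: mean of the CEV-adjusted sample, which then
    # feeds a standard EWMA recursion.
    adj = cev_adjust(xs, c, theta0)
    return sum(adj) / len(adj)
```

Heavier censoring pushes more observations to the fixed value c + theta0, which is one mechanism behind the abstract's observation that the censoring rate affects chart performance.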

18.
When accelerated life tests can be applied to simulate normal product operating conditions, engineers usually terminate the test upon successfully running to a multiple of a prespecified bogey without any failure, in order to demonstrate a required minimum reliability level. This testing philosophy is called a 'bogey test' (or extended test) in the automotive industry and is a subset of Type I censored tests in the standard reliability literature. Engineers sometimes encounter difficulty using this reliability demonstration approach when incidents do occur during the test. An incident might be a legitimate failure or a withdrawal caused by external forces such as a broken fixture or a non-functional power supply. This paper derives a two-stage sampling plan as a possible back-up solution for planning and running a bogey test with possible occurrence of incidents. A Weibull distribution with a given shape parameter is assumed for the underlying life characteristic.
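For the failure-free case, the required sample size has a well-known closed form. A hedged sketch, assuming a Weibull life with known shape beta: a unit run failure-free to m bogeys demonstrates reliability R at one bogey, and n units demonstrate it with confidence C once n >= ln(1 - C) / (m^beta * ln R). The function name is illustrative; the paper's two-stage plan with incidents goes beyond this baseline.

```python
import math

def bogey_sample_size(reliability, confidence, bogey_multiple=1.0, beta=2.0):
    """Units needed for a zero-failure demonstration of `reliability`
    at one bogey with the given confidence, when each unit runs
    failure-free to `bogey_multiple` bogeys and lifetimes are Weibull
    with known shape `beta` (extended/bogey test)."""
    n = math.log(1 - confidence) / (bogey_multiple ** beta
                                    * math.log(reliability))
    return math.ceil(n)
```

The m^beta factor is why extending the test duration (m > 1) can sharply reduce the number of units required when beta exceeds one.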

19.
This paper presents a new method for estimating the modified failure ranks used for probability plotting and least squares estimation from randomly censored failure samples. The proposed method uses a Bayesian smoothed piecewise hazard rate approximation to derive modified failure ranks that are sensitive to the age at censoring. Using a simulation study, the new estimator is shown to be more robust than the known failure-rank estimators, particularly for heavy censoring and small failure samples. Practical uses include field failure data analysis, for which the survival and failure probability predictions are more realistic, i.e. less pessimistic, when the new estimator is used compared with predictions obtained from the traditional Johnson method for failure rank estimation. Copyright © 2000 John Wiley & Sons, Ltd.
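For contrast, the traditional Johnson method the paper benchmarks against can be sketched directly: each failure's adjusted rank grows by an increment that spreads the unknown ranks of preceding suspensions over the remaining units, and median ranks then follow from Benard's approximation. An illustrative implementation of that baseline, not the paper's Bayesian hazard-rate estimator.

```python
def johnson_ranks(events):
    """Johnson's adjusted ranks and Benard median ranks for a randomly
    right-censored sample. events: list of (time, is_failure) tuples.
    Returns [(time, adjusted_rank, median_rank), ...] for failures only."""
    events = sorted(events)
    n = len(events)
    prev_rank = 0.0
    out = []
    for i, (t, failed) in enumerate(events):
        if failed:
            # Increment spreads remaining rank mass over units still on test.
            inc = (n + 1 - prev_rank) / (1 + (n - i))
            prev_rank += inc
            median_rank = (prev_rank - 0.3) / (n + 0.4)  # Benard's approximation
            out.append((t, prev_rank, median_rank))
    return out
```

With no censoring the increments are all 1 and the adjusted ranks reduce to the ordinary order numbers, which is a useful check.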

20.
Right‐censored failure time data is a common data type in manufacturing industry and healthcare applications. Some control charting procedures were previously proposed to monitor the right‐censored failure time data under some specific distributional assumptions for the observed failure times and censoring times. But these assumptions may not be always satisfied in the real‐world data. Therefore, a more generalized control chart technique, which can handle different types of distributions of the data, is highly needed. Considering the limitations of existing methodologies for detecting changes of hazard rate, this paper develops a generalized statistical procedure to monitor the failure time data in the presence of random right censoring when abundant historical failure times are available. The developed method makes use of the one‐sample nonparametric rank tests without any specific assumptions of the data distribution. The operating characteristic functions of the control chart are derived on the basis of the asymptotic properties of the rank statistics. Case studies are presented to show the effectiveness of the proposed control chart technique, and its performance is investigated and compared with some Shewhart‐type control charts based on the conditional expected value weight. Copyright © 2014 John Wiley & Sons, Ltd.
