Similar Articles
20 similar articles found.
1.
The critical values presented in the standard tables of the Kolmogorov-Smirnov statistic do not apply when one or more of the parameters must be estimated from the data. However, it is possible, using Monte Carlo methods, to construct a table of this statistic for use when the parameters must be estimated from the data. A table of values for use with the two-parameter Weibull distribution is presented herein. Moreover, a Monte Carlo study of the power of the test against commonly used statistical models is presented. Furthermore, additional studies suggest that the same table can be used for the normal, lognormal, and Gumbel (Extreme Value Distribution, Type I of Maxima) distributions.
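As a rough illustration of how such a table can be generated, the following sketch (assuming NumPy and SciPy; the sample size, simulation count, seed, and generating parameters are arbitrary choices, not the paper's) draws Weibull samples, re-estimates the parameters from each sample, and takes an upper quantile of the resulting Kolmogorov-Smirnov statistics:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def ks_critical_value(n, alpha=0.05, n_sim=2000):
    """Monte Carlo critical value of the KS statistic for a 2-parameter
    Weibull whose shape and scale are estimated from the tested sample."""
    d = np.empty(n_sim)
    for i in range(n_sim):
        x = rng.weibull(1.5, size=n) * 2.0            # generating values are arbitrary
        c, _, s = stats.weibull_min.fit(x, floc=0)    # estimate from the data itself
        d[i] = stats.kstest(x, stats.weibull_min(c, 0, s).cdf).statistic
    return np.quantile(d, 1 - alpha)

print(ks_critical_value(n=20))  # noticeably smaller than the standard-table value
```

Because the log of a Weibull variate is location-scale (Gumbel), the simulated critical values should not depend on the generating shape and scale, which is consistent with the abstract's observation that one table serves several related distributions.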

2.
A flexible procedure is described and demonstrated to determine approximate confidence intervals for system reliability when there is uncertainty regarding component reliability information. The approach is robust, and applies to many system-design configurations and component time-to-failure distributions, resulting in few restrictions on the use of these confidence intervals. The methods do not require any parametric assumptions for component reliability or time-to-failure, and allow type-I or type-II censored data records. The confidence intervals are based on the variance of the component and system reliability estimates and a lognormal distribution assumption for the system reliability estimate. This approach applies to any system design which can be decomposed into series and/or parallel connections between the components. To evaluate the validity of the confidence limits, numerous simulations were performed for two hypothetical systems with different data sample sizes and confidence levels. The test cases and empirical results demonstrate that this new method for estimating confidence intervals provides good coverage, can be readily applied, requires only minimal computational effort, and applies to a much greater range of design configurations and data types than other methods. For many design problems, these confidence intervals are preferable because there is no requirement for an exponential time-to-failure distribution, nor are component data limited to binomial data.
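A minimal sketch of the lognormal-based interval idea for the simplest case, a pure series system (function names and the example numbers are illustrative, not from the paper):

```python
import numpy as np
from scipy import stats

def series_reliability_ci(r_hat, var_hat, conf=0.90):
    """Approximate CI for a series system, Rs = prod(r_i).
    First-order (delta-method) variance propagation, then a lognormal
    assumption for the distribution of the system-reliability estimate."""
    r_hat, var_hat = np.asarray(r_hat), np.asarray(var_hat)
    rs = np.prod(r_hat)
    var_rs = rs**2 * np.sum(var_hat / r_hat**2)    # delta method for a product
    # match a lognormal to the estimate's mean and variance
    sigma2 = np.log(1 + var_rs / rs**2)
    mu = np.log(rs) - sigma2 / 2
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    # note: bounds are not clipped to [0, 1]
    return np.exp(mu - z * np.sqrt(sigma2)), np.exp(mu + z * np.sqrt(sigma2))

# three components in series, each with a reliability estimate and its variance
print(series_reliability_ci([0.95, 0.98, 0.90], [1e-4, 5e-5, 2e-4]))
```

The delta-method variance step is an assumption of this sketch; the paper derives the variance of the component and system reliability estimates without parametric assumptions on time-to-failure.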

3.
Persons designing testing programs to specify reliability functions and failure rates need theoretical means of assigning confidence intervals to the quantities obtained. Also, persons designing acceptance tests for items with unknown reliability functions need simple, general methods of test design which do not rely on the myriad specialized tables for specific mathematical models available in the literature. The two problems are really opposite ways of looking at the same theoretical structure. The theoretical structure is examined here and solutions to the problems are posed.

4.
For censored Weibull regression data arising from typical accelerated life tests (ALTs), the performance of small-sample normal-theory confidence intervals is summarized by three points: (1) they have highly asymmetric error rates; (2) they can be extremely anti-conservative; and (3) these effects worsen when higher confidence levels are used. Likelihood-ratio-based confidence intervals have much more symmetric error rates, and are not as extremely anti-conservative as normal-theory intervals can be. For typical ALTs, likelihood-ratio-based confidence intervals are better than those based on asymptotic normal theory. They require more computation than intervals based on the asymptotic normality of the maximum-likelihood estimators, but the resource spent on computing is usually very small compared to the other costs involved in an ALT.
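A likelihood-ratio interval can be sketched by profiling the log-likelihood and keeping the parameter values whose profile likelihood stays within half a chi-square quantile of the maximum. The sketch below (assuming SciPy; the data, starting values, and grid limits are arbitrary) does this for the shape parameter of a right-censored Weibull sample:

```python
import numpy as np
from scipy import optimize, stats

def weibull_loglik(params, t, d):
    """Right-censored Weibull log-likelihood; d = 1 failure, 0 censored."""
    beta, eta = params
    if beta <= 0 or eta <= 0:
        return -np.inf
    z = (t / eta) ** beta
    return np.sum(d * (np.log(beta / eta) + (beta - 1) * np.log(t / eta)) - z)

def lr_ci_shape(t, d, conf=0.95):
    """Profile-likelihood-ratio CI for the Weibull shape parameter."""
    mle = optimize.minimize(lambda p: -weibull_loglik(p, t, d),
                            x0=[1.0, np.mean(t)], method="Nelder-Mead").x
    cut = weibull_loglik(mle, t, d) - stats.chi2.ppf(conf, df=1) / 2
    def profile(beta):
        res = optimize.minimize_scalar(lambda e: -weibull_loglik([beta, e], t, d),
                                       bounds=(mle[1] / 10, mle[1] * 10),
                                       method="bounded")
        return -res.fun
    grid = np.linspace(mle[0] / 4, mle[0] * 4, 400)
    keep = grid[[profile(b) >= cut for b in grid]]
    return keep.min(), keep.max()

t = np.array([30, 45, 49, 82, 90, 96, 100, 100, 100, 100.0])  # 100 = censoring time
d = (t < 100).astype(float)
print(lr_ci_shape(t, d))
```

The grid search is crude but shows the mechanics; the resulting interval is generally asymmetric about the MLE, unlike the normal-theory interval.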

5.
The simple step-stress model under Type-II censoring based on Weibull lifetimes, which provides a more flexible model than the exponential one, is considered in this paper. For this model, the maximum likelihood estimates (MLE) of the parameters, as well as the corresponding observed Fisher information matrix, are derived. The likelihood equations do not lead to closed-form expressions for the MLE, and they need to be solved using an iterative procedure such as the Newton-Raphson method. We also present a simplified estimator which is easier to compute, and hence is suitable as an initial estimate in the iterative determination of the MLE. We then evaluate the bias and mean squared error of these estimates, and provide asymptotic and bootstrap confidence intervals for the parameters of the Weibull simple step-stress model. Finally, the results are illustrated with some examples.
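For flavor, here is what the iterative step looks like in the simplest setting, a complete (uncensored) two-parameter Weibull sample, where the scale can be profiled out and Newton-Raphson runs on the shape alone; the step-stress likelihood in the paper is more involved, so treat this only as a sketch of the mechanics (names and starting values are illustrative):

```python
import numpy as np

def weibull_mle_newton(t, beta0=1.0, tol=1e-8, max_iter=50):
    """Newton-Raphson for the 2-parameter Weibull MLE, complete sample.
    Profiling out the scale leaves one equation in the shape b:
        g(b) = sum(t**b * ln t)/sum(t**b) - 1/b - mean(ln t) = 0."""
    t = np.asarray(t, float)
    logt = np.log(t)
    b = beta0
    for _ in range(max_iter):
        tb = t ** b
        A = np.sum(tb * logt) / np.sum(tb)
        B = np.sum(tb * logt**2) / np.sum(tb)
        g = A - 1.0 / b - logt.mean()
        dg = (B - A**2) + 1.0 / b**2        # derivative g'(b)
        step = g / dg
        b -= step
        if abs(step) < tol:
            break
    eta = np.mean(t ** b) ** (1.0 / b)      # profiled scale estimate
    return b, eta

rng = np.random.default_rng(1)
sample = rng.weibull(2.0, 200) * 10.0       # true shape 2, scale 10
print(weibull_mle_newton(sample))
```

The crude starting value beta0 = 1 plays the role the paper assigns to its simplified estimator: a cheap initial point for the iteration.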

6.
A hybrid censoring scheme is a mixture of type-I and type-II censoring schemes. This article presents statistical inference on the Weibull parameters when the data are type-II hybrid censored. The maximum likelihood estimators and the approximate maximum likelihood estimators are developed for estimating the unknown parameters. Asymptotic distributions of the maximum likelihood estimators are used to construct approximate confidence intervals. Bayes estimates, and the corresponding highest posterior density credible intervals, of the unknown parameters are obtained using suitable priors and Markov chain Monte Carlo techniques. A method for obtaining the optimum censoring scheme based on the maximum information measure is also developed. We perform Monte Carlo simulations to compare the performances of the different methods, and we analyse one data set for illustrative purposes.

7.
We propose to construct confidence intervals for the parameters of 1/f-type signals using a nonparametric wavelet-based bootstrap method. Bootstrap-based confidence intervals of maximum likelihood parameter estimates are compared to the confidence intervals derived from the Cramer-Rao lower bound (CRLB). For moderately large sample sizes, the bootstrap approach achieves the nominal coverage and may perform better than the CRLB-based parametric approach.
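A sketch of the wavelet-domain bootstrap idea (assuming PyWavelets; the wavelet choice, estimator, and replication count are placeholders, not the paper's): decompose the signal, resample detail coefficients within each scale, where a 1/f-type signal is approximately decorrelated, reconstruct surrogate signals, and re-estimate:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_bootstrap_ci(x, estimator, conf=0.95, n_boot=500, wavelet="db4"):
    """Percentile CI from a wavelet-domain bootstrap: resample detail
    coefficients within each scale, reconstruct, and re-apply estimator."""
    rng = np.random.default_rng(0)
    coeffs = pywt.wavedec(x, wavelet)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        boot = [coeffs[0]] + [rng.choice(c, size=len(c), replace=True)
                              for c in coeffs[1:]]
        reps[b] = estimator(pywt.waverec(boot, wavelet))
    a = (1 - conf) / 2
    return tuple(np.quantile(reps, [a, 1 - a]))

x = np.cumsum(np.random.default_rng(1).standard_normal(1024))  # crude 1/f^2-type signal
print(wavelet_bootstrap_ci(x, np.var))
```

Here `estimator` stands for any function returning the parameter of interest, e.g. a spectral-slope estimator; the percentile interval is just one of several bootstrap interval constructions that could be used.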

8.
The average experiment time under a type-II censoring plan, when the lifetime follows a 2-parameter Weibull distribution, is expressed here in computable form. The formulae can be used to compute ratios of average experiment times under type-II censoring and complete sampling plans. The ratios provide information on how much experiment time can be saved by using a type-II censoring plan instead of a complete sampling plan. These measures can help reliability analysts choose a suitable combination of censoring number and initial sample size. Tables of average experiment times and ratios are provided. Applications of the tables, confidence intervals for the ratio, and computational issues are discussed.
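The paper's formulae give these quantities exactly; as a quick cross-check, the ratios can also be reproduced by simulation (the sample size, shape value, and seed below are arbitrary):

```python
import numpy as np

def expected_test_time(n, r, beta, eta=1.0, n_sim=50_000, seed=0):
    """Monte Carlo estimate of E[T_(r:n)], the average duration of a
    type-II censored test that stops at the r-th failure among n units."""
    rng = np.random.default_rng(seed)
    samples = eta * rng.weibull(beta, size=(n_sim, n))
    return np.sort(samples, axis=1)[:, r - 1].mean()

n, beta = 20, 1.5
for r in (5, 10, 15, 20):
    ratio = expected_test_time(n, r, beta) / expected_test_time(n, n, beta)
    print(f"r={r:2d}: expected time is {ratio:.2f} x the complete-sample time")
```

The printed ratios show how much expected test time each choice of censoring number r saves relative to waiting for all n failures.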

9.
The Fail-Safe principle as applied to aircraft structural design implies that there is insufficient knowledge of the life capability of the design. Control of inspection intervals is not supported by risk calculations, yet only a sample of aircraft is inspected, at intervals whose duration is rapidly increased. This paper provides risk estimates based on a simple mathematical model. Catastrophic failure is treated in two stages, modeled respectively by 2-parameter and 3-parameter Weibull distributions. Bayes inferences are made about the scale parameter using in-service survivor times. Only those cases are treated for which no failures have occurred. This results in a suggested form of inspection policy. A separate non-Bayes analysis confirms the Bayes risk estimate; the assumed improper prior is therefore of interest. This prior, the only simple one which is tractable for the case of no failures, transforms, for the exponential distribution, to the uniform prior, in contrast to the hyperbolic one usually used. The analysis is simplistic but provides a ball-park estimate which would otherwise be unavailable. It can be used, with caution, as a check on inspection programs already derived by other means. It can also serve as a tutorial demonstration, for airworthiness managers, of the statistical effects of the various parameters. Possibly it might form the basis of a more sophisticated analysis.

10.
In this paper, the estimation of parameters based on a progressively Type-II censored sample from a modified Weibull distribution is studied. The likelihood equations and the maximum likelihood estimators (MLE) are derived. Estimators based on a least-squares fit of a multiple linear regression on a Weibull probability paper plot are compared with the MLE via Monte Carlo simulations. The observed Fisher information matrix, as well as the asymptotic variance-covariance matrix of the MLE, are derived. Approximate confidence intervals for the parameters are constructed based on the s-normal approximation to the asymptotic distribution of the MLE and the log-transformed MLE. The coverage probabilities of the individual s-normal-approximation confidence intervals for the parameters are examined numerically. Some recommendations are made from the results of a Monte Carlo simulation study, and a numerical example is presented to illustrate all of the methods of inference developed here.
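A sketch of the probability-paper least-squares estimator in its simplest, complete-sample form (the paper works with progressively Type-II censored data, which needs adjusted plotting positions; Bernard's median-rank approximation below is one common convention, not necessarily the paper's):

```python
import numpy as np

def weibull_plot_lsq(t):
    """Least-squares fit on Weibull probability paper, complete sample.
    Since F(t) = 1 - exp(-(t/eta)**b):  ln(-ln(1-F)) = b*ln(t) - b*ln(eta)."""
    t = np.sort(np.asarray(t, float))
    n = len(t)
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # Bernard's median ranks
    x = np.log(t)
    y = np.log(-np.log(1.0 - F))
    b, a = np.polyfit(x, y, 1)                    # slope = shape
    return b, np.exp(-a / b)                      # (shape, scale)

rng = np.random.default_rng(7)
print(weibull_plot_lsq(rng.weibull(1.8, 50) * 5.0))   # roughly (1.8, 5.0)
```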

11.
Using degradation measurements is becoming more important in reliability studies because fewer failures are observed during short experiment times. Most of the literature discusses continuous degradation processes such as Wiener, gamma, linear, and nonlinear random-effect processes. However, some degradation processes do not occur in a continuous pattern. Discrete degradation has been found in many practical problems, such as the leakage current of thin gate oxides in nanotechnology, crack growth in metal fatigue, and fatigue damage of laminates used for industrial specimens. In this research, we establish a likelihood-based procedure to assess reliability using a discrete degradation model. A non-homogeneous Weibull compound Poisson model with accelerated stress variables is considered. We provide a general maximum likelihood approach for estimating the model parameters, and derive the breakdown-time distributions. A data set measuring the leakage current of nanometer-scale gate oxides is analyzed using the procedure. Goodness-of-fit tests are considered to check the proposed models for the amount of degradation increment and the rate of event occurrence. The estimated reliabilities are calculated at a lower stress of the accelerated variable, and approximate confidence intervals of quantiles of the breakdown-time distribution are given to quantify the uncertainty of the estimates. Finally, a simulation study based on the gate-oxide data is built for the discrete degradation model to explore the finite-sample properties of the proposed procedure.
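To make the model concrete, a minimal simulation of one discrete degradation path under assumed ingredients (power-law/Weibull cumulative intensity, exponential jump sizes, a single stress level; all parameter values are invented):

```python
import numpy as np

def simulate_discrete_degradation(beta, eta, jump_mean, horizon, rng):
    """One compound-Poisson degradation path: jump epochs from an NHPP with
    Weibull cumulative intensity L(t) = (t/eta)**beta, exponential jumps."""
    big_l = (horizon / eta) ** beta               # L(horizon)
    n = rng.poisson(big_l)
    u = np.sort(rng.uniform(0, big_l, n))         # homogeneous arrivals on [0, L(T)]
    times = eta * u ** (1.0 / beta)               # invert L to get event epochs
    jumps = rng.exponential(jump_mean, n)
    return times, np.cumsum(jumps)                # degradation = running sum

rng = np.random.default_rng(3)
times, level = simulate_discrete_degradation(2.0, 50.0, 0.8, horizon=200.0, rng=rng)
threshold = 10.0                                  # breakdown = first passage
crossed = level >= threshold
print("breakdown time:", times[crossed][0] if crossed.any() else "> horizon")
```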

12.
We introduce formula-based approaches for statistical tests on peri-stimulus time histograms (PSTHs) in cases where a conditioning stimulus is triggered by a motor-unit discharge with a constant delay time. The individual-bin test and the cumulative-sum test have recently been reported as methods for the quantitative analysis of PSTHs in the evaluation of human neural projections. These tests calculate confidence intervals using simulated point processes on the discharge of a motor unit, but require a significant amount of calculation time for the point-process simulation. To overcome this disadvantage in practical use, we provide a statistical formulization of the two above-mentioned tests using combinatorial theory. In general, the formula-based approaches computed confidence intervals 2-13 times faster than the previous approaches. Unlike the previous approaches, the formula-based ones do not need to judge whether a simulation process has converged to a quasi-stationary state, and they calculate an ideal distribution of statistical noise on the histogram, thereby providing high accuracy. We conclude that the formula-based approaches increase reliability and are sufficiently sophisticated for practical use.

13.
Parametric statistical methods assume samples that have a normal distribution and representative sample sizes (i.e., n > 20). Quantitative electron microscopy is inherently restricted to small sample sizes, and a priori there is no way to know whether the expression of the ligand being studied has a normal distribution. Thus, making statistical inferences from data generated by quantitative electron microscopy using parametric methods may not be justified. Nonparametric statistical methods offer a tool for the evaluation of data that do not meet the criteria for analysis by parametric methods. In this report I show the utility of nonparametric statistical methods for the analysis of data generated by quantitative electron microscopy.
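As a small illustration of the kind of nonparametric analysis meant here (assuming SciPy; the counts are invented), the Mann-Whitney U test compares two small samples without any normality assumption:

```python
import numpy as np
from scipy import stats

# hypothetical small-sample counts (e.g., immunogold particles per region)
control = np.array([3, 5, 2, 8, 4, 6, 3])
treated = np.array([9, 12, 7, 15, 10, 8])

# nonparametric alternative to the two-sample t-test
u, p = stats.mannwhitneyu(control, treated, alternative="two-sided")
print(f"U = {u}, p = {p:.4f}")
```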

14.
A model is developed to determine the variance of system-reliability estimates, and to estimate confidence intervals, for series-parallel systems with arbitrarily repeated components. In these systems, different copies of the same component type are used several or many times within the system, but only a single reliability estimate is available for each distinct component type. The single estimate is used everywhere the component appears in the system design, and component estimation error is then magnified at the system level. The system-reliability-estimate variance and confidence intervals are derived when the number of component failures follows the binomial distribution with an unknown, yet estimable, probability of failure. They are obtained by expressing system reliability as a linear sum of products of higher-order moments of component unreliability, with the generating function used to determine the moments of the component-unreliability estimates. This model is preferable for many system-reliability estimation problems because it does not require independent component and subsystem reliability estimates; it is demonstrated with an example.
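The paper obtains the moments exactly through the generating function; the following sketch conveys the same error-magnification effect by parametric Monte Carlo instead (the component counts and the number of copies k are invented):

```python
import numpy as np
from scipy import stats

def system_rel_ci(successes, trials, k=3, conf=0.90, n_sim=200_000):
    """Variance and normal-approximation CI for R_sys = r**k when the SAME
    binomial estimate r_hat = successes/trials is reused for k series copies."""
    rng = np.random.default_rng(0)
    r_hat = successes / trials
    # resample the estimator: s* ~ Binomial(n, r_hat), then reuse it k times
    r_star = rng.binomial(trials, r_hat, n_sim) / trials
    var = (r_star ** k).var()          # reuse magnifies the estimation error
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    est = r_hat ** k
    return est, var, (est - z * np.sqrt(var), est + z * np.sqrt(var))

print(system_rel_ci(successes=92, trials=100, k=3))
```

Treating the k copies as if they had independent estimates would understate this variance by roughly a factor of k, which is the effect the paper's model corrects for.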

15.
This paper critically examines several important aspects of the experimental determination of Weibull shape factors (slopes). Statistical characteristics of breakdown distributions, such as the area-scaling property and the extreme-value distribution, are reviewed. We discuss the experimental measurement methodology for time-to-breakdown (T_BD) or charge-to-breakdown (Q_BD) distributions, with emphasis on accuracy. The influence of sample size on the estimation of Weibull distribution parameters, such as the characteristic T_BD and the Weibull slope, is investigated in the context of confidence limits. Some examples of measurement fallacies concerning Weibull slopes are given. Three different experimental techniques for measuring Weibull slopes are described and compared in terms of their advantages and disadvantages. Having established these fundamental aspects of Weibull slope measurement, we present our extensive experimental data on the thickness, voltage, temperature, and polarity dependence of Weibull slopes in Part II.

16.
A number of applications, including claims made under Federal social welfare programs, require retrospective sampling over multiple time periods. A common characteristic of such samples is that population members can appear in multiple time periods. When this occurs, and when the marginal cost of obtaining multiperiod information is minimal for a member appearing in the sample of the period being actively sampled, a method herein called progressive random sampling (PRS) may be applied. The proposed method serves either to improve sampling estimates or to reduce sample sizes, as demonstrated by two example applications.

17.
Finite-difference time-domain (FDTD) modeling is based on the assumption that field behavior between sample points (i.e., cell nodes) is linear; for propagation in lossless or low-loss materials, the assumption of linearity will be valid as long as the number of cells per wavelength is kept above some minimum value. For good conductors, where the wavelength decreases many orders of magnitude from its free-space size and the fields decay exponentially, it becomes impractical to shrink the cell size so as to maintain linearity between cells. When the cells-per-wavelength criterion is violated at a boundary, FDTD will not yield correct estimates of reflection from, or transmission into, that boundary. The work presented details and validates two approaches that can be used to achieve realistic results when modeling good conductors with FDTD using practical cell sizes. These approaches do not require modifications to the FDTD algorithms, and do not affect program execution times. Achieving accurate loss estimates will be of particular interest to those modeling resonant structures with FDTD.
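The scale of the problem is easy to quantify with the standard skin-depth formula δ = √(2/(ωμσ)); the frequency and material values below are illustrative, not from the paper:

```python
import numpy as np

# Why uniform FDTD cells fail inside a good conductor: at 1 GHz the
# wavelength in copper collapses to the micron scale.
f = 1e9                    # frequency, Hz
mu0 = 4e-7 * np.pi         # permeability of free space, H/m
sigma = 5.8e7              # conductivity of copper, S/m
omega = 2 * np.pi * f

delta = np.sqrt(2.0 / (omega * mu0 * sigma))   # skin depth
lam_conductor = 2 * np.pi * delta              # wavelength in a good conductor
lam_free = 3e8 / f                             # free-space wavelength

print(f"skin depth        : {delta * 1e6:.2f} um")
print(f"conductor lambda  : {lam_conductor * 1e6:.2f} um")
print(f"free-space lambda : {lam_free * 100:.1f} cm")
print(f"shrink factor     : {lam_free / lam_conductor:.0f}x")
```

Maintaining even 10 cells per wavelength inside the copper would require micron-scale cells, which is why boundary treatments that avoid resolving the conductor interior are attractive.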

18.
19.
Methods are presented for estimating the fractional area coverage of geophysical phenomena from measurements taken along a transect. These methods are most useful for assessing potential errors in sampling strategies, but may also be used for the analysis of data. The procedure provides a means to compute confidence-interval estimates of the true area fraction when the autocovariance function of the geophysical field is known or assumed. Another approach, which does not require a priori knowledge of the underlying autocovariance function, is described for the special case of linear features modeled as a Poisson line process.

20.
Monte Carlo simulation is used to assess the statistical properties of some Bayes procedures in situations where only a few data on a system governed by a NHPP (nonhomogeneous Poisson process) can be collected, and where there is little or imprecise prior information available. In particular, for failure-truncated data, two Bayes procedures are analyzed. The first uses a uniform prior PDF (probability density function) for the power-law parameter and a noninformative prior PDF for the scale parameter α; the other uses a uniform PDF for the power-law parameter while assuming an informative PDF for the scale parameter, obtained by placing a gamma distribution on the prior knowledge of the mean number of failures in a given time interval. For both cases, point and interval estimation of the power-law parameter, and point estimation of the scale parameter, are discussed. Comparisons are given with the corresponding point and interval maximum-likelihood estimates for sample sizes of 5 and 10. The Bayes procedures are computationally much more onerous than the corresponding maximum-likelihood ones, since they in general require a numerical integration. For small sample sizes, however, their use may be justified by the exceptionally favorable statistical properties they show when compared with the classical ones, in particular their robustness with respect to a wrong assumption on the prior β mean.
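The maximum-likelihood side of the comparison is compact enough to sketch; assuming the common power-law parametrization with intensity λ(t) = αβt^(β−1) and failure truncation at the last observed failure, the ML estimates have closed form (the failure times below are invented):

```python
import numpy as np

def power_law_nhpp_mle(t):
    """ML estimates for a power-law NHPP with intensity a*b*t**(b-1),
    failure-truncated at the last observed failure time t[-1]."""
    t = np.sort(np.asarray(t, float))
    n = len(t)
    beta = n / np.sum(np.log(t[-1] / t[:-1]))   # the i = n term is zero
    alpha = n / t[-1] ** beta
    return beta, alpha

# small failure-truncated sample (n = 5), cumulative operating hours
times = [35.0, 180.0, 420.0, 700.0, 1100.0]
print(power_law_nhpp_mle(times))
```

With only 5 or 10 failures these point estimates are noisy, which is exactly the regime where the abstract finds the Bayes procedures worth their extra computation.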
