Similar literature
20 similar references found (search time: 31 ms).
1.
It is shown that the Laplace transform of a continuous lifetime random variable with a polynomial failure rate function satisfies a certain differential equation. This generates a set of differential equations which can be used to express the polynomial coefficients in terms of the derivatives of the Laplace transform at the origin. The technique described here establishes a procedure for estimating the polynomial coefficients from the sample moments of the distribution. Some special cases are worked through symbolically using computer algebra. Real data on bus motor failures from the literature are used to compare the proposed approach with results based on the least squares procedure.
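A minimal SymPy sketch of the idea, assuming the simplest special case of a constant failure rate h(t) = a (an exponential lifetime); the higher-degree polynomial cases and the least-squares comparison mentioned in the abstract are not reproduced, and all symbol names are illustrative.

import sympy as sp

# For h(t) = a, f(t) = h(t) S(t) and L{S}(s) = (1 - phi(s))/s give the
# functional equation phi(s) = a (1 - phi(s)) / s for the Laplace transform phi.
s, a = sp.symbols('s a', positive=True)
phi = sp.Function('phi')
phi_sol = sp.solve(sp.Eq(phi(s), a * (1 - phi(s)) / s), phi(s))[0]
print(phi_sol)                                   # a/(a + s), the exponential transform

# The coefficient a is recovered from the first moment m1 = -phi'(0) = E[T].
m1 = sp.limit(-sp.diff(phi_sol, s), s, 0)        # equals 1/a
print(sp.solve(sp.Eq(sp.Symbol('m1'), m1), a))   # [1/m1]: coefficient from a sample moment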

2.
Often the quality of a process is determined by several correlated univariate variables. In such cases, the considered quality characteristic should be treated as a vector. Several different multivariate process capability indices (MPCIs) have been developed for such a situation, but confidence intervals or tests have been derived for only a handful of these. In practice, the conclusion about process capability needs to be drawn from a random sample, making confidence intervals or tests for the MPCIs important. Principal component analysis (PCA) is a well-known tool to use in multivariate situations. We present, under the assumption of multivariate normality, a new MPCI by applying PCA to a set of suitably transformed variables. We also propose a decision procedure, based on a test of this new index, to be used to decide whether a process can be claimed capable or not at a stated significance level. This new MPCI and its accompanying decision procedure avoid drawbacks found for previously published MPCIs with confidence intervals. By transforming the original variables, we need to consider the first principal component only. Hence, a multivariate situation can be converted into a familiar univariate process capability index. Furthermore, the proposed new MPCI has the property that if the index exceeds a given threshold value, the probability of non-conformance is bounded by a known value. Properties of the proposed decision procedure, such as significance level and power, are evaluated through a simulation study in the two-dimensional case. A comparative simulation study between our new MPCI and an MPCI previously suggested in the literature is also performed. These studies show that our proposed MPCI with its accompanying decision procedure has desirable properties and is worth studying further. Copyright © 2012 John Wiley & Sons, Ltd.
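A hedged numerical sketch of a PCA-based capability index in this spirit: each variable is rescaled by its own specification limits (here (x - midpoint)/(half tolerance width), an assumed transform, not necessarily the authors' exact one), PCA is applied to the rescaled data, and a familiar univariate Cpk-type index is computed for the first principal component; the projected specification limits for the scores are likewise an illustrative choice.

import numpy as np

def pca_capability_index(X, LSL, USL):
    """X: (n, p) sample; LSL, USL: length-p specification limits."""
    mid = (np.asarray(USL, float) + np.asarray(LSL, float)) / 2.0
    half = (np.asarray(USL, float) - np.asarray(LSL, float)) / 2.0
    Z = (X - mid) / half                       # rescaled variables, specs mapped to +/-1
    _, _, Vt = np.linalg.svd(Z - Z.mean(axis=0), full_matrices=False)
    v1 = Vt[0]                                 # first principal direction
    scores = Z @ v1                            # first principal component of the rescaled data
    usl_pc, lsl_pc = np.sum(np.abs(v1)), -np.sum(np.abs(v1))  # projected spec box (illustrative)
    mu, sigma = scores.mean(), scores.std(ddof=1)
    return min(usl_pc - mu, mu - lsl_pc) / (3.0 * sigma)      # Cpk-type index on the scores

rng = np.random.default_rng(0)
X = rng.multivariate_normal([5.0, 10.0], [[0.04, 0.02], [0.02, 0.09]], size=200)
print(pca_capability_index(X, LSL=[4.0, 8.5], USL=[6.0, 11.5]))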

3.
Technometrics, 2013, 55(3): 436-444
Goodness-of-fit tests are proposed for the assumption of normality of random errors in experimental designs where the variance of the response may vary with the levels of the covariates. The exact distribution of standardized residuals is used to make the probability integral transform for use in tests based on the empirical distribution function. A different mean and variance are estimated for each level of the covariate; corresponding large-sample theory is provided. The proposed tests are robust to a possible misspecification of the model and permit data collected from several similar experiments to be pooled to improve the power of the test.

4.
In this article, we propose a general procedure for multivariate generalizations of univariate distribution-free tests involving two independent samples as well as matched pair data. This proposed procedure is based on ranks of real-valued linear functions of multivariate observations. The linear function used to rank the observations is obtained by solving a classification problem between the two multivariate distributions from which the observations are generated. Our proposed tests retain the distribution-free property of their univariate analogs, and they perform well for high-dimensional data even when the dimension exceeds the sample size. Asymptotic results on their power properties are derived when the dimension grows to infinity and the sample size may or may not grow with the dimension. We analyze several high-dimensional simulated and real data sets to compare the empirical performance of our proposed tests with several other tests available in the literature.
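A rough sketch of the general recipe, under assumptions that differ from the paper's exact construction: the scoring direction here is a ridge-regularized linear discriminant fitted to the pooled sample, the observations are ranked by their linear scores, and the difference in mean ranks is calibrated by permutation rather than by the distribution-free argument used by the authors.

import numpy as np
from scipy.stats import rankdata

def classifier_rank_test(X, Y, n_perm=1000, ridge=1.0, seed=0):
    rng = np.random.default_rng(seed)
    Z = np.vstack([X, Y])
    labels = np.r_[np.zeros(len(X), int), np.ones(len(Y), int)]
    S = np.cov(Z.T) + ridge * np.eye(Z.shape[1])       # pooled, ridge-regularized covariance

    def stat(lab):
        w = np.linalg.solve(S, Z[lab == 1].mean(0) - Z[lab == 0].mean(0))  # discriminant direction
        r = rankdata(Z @ w)                            # ranks of the linear scores
        return abs(r[lab == 1].mean() - r[lab == 0].mean())

    obs = stat(labels)
    perm = np.array([stat(rng.permutation(labels)) for _ in range(n_perm)])
    return obs, (1 + np.sum(perm >= obs)) / (n_perm + 1)   # permutation p-value

# High-dimensional toy example: dimension 50 exceeds each sample size of 15
rng = np.random.default_rng(1)
X, Y = rng.normal(0.0, 1.0, (15, 50)), rng.normal(0.4, 1.0, (15, 50))
print(classifier_rank_test(X, Y))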

5.
The two major assumptions required by the two-sample t test to guarantee α are normality and a known ratio (usually 1) of the variances in the two populations. Alternatives to this test are reviewed for situations where either or both of these assumptions are in doubt. Small-sample power curves were derived for each test procedure by Monte Carlo sampling. The sampling was done under conditions most favorable to the t test. The results showed that, for the cases studied, the power curves for two of the alternative tests compared very well with those of the t test, even for small samples.
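A small Monte Carlo sketch in this spirit, comparing the pooled two-sample t test with two common alternatives (Welch's t and the Wilcoxon rank-sum test) under normality with equal variances, the setting most favorable to the t test; the shift values, sample size, and replication count are illustrative, and the specific alternatives studied in the article may differ.

import numpy as np
from scipy import stats

def power_curves(n=10, shifts=(0.0, 0.5, 1.0, 1.5), reps=5000, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    for d in shifts:
        rej = np.zeros(3)
        for _ in range(reps):
            x = rng.normal(0.0, 1.0, n)
            y = rng.normal(d, 1.0, n)
            rej[0] += stats.ttest_ind(x, y).pvalue < alpha                    # pooled t test
            rej[1] += stats.ttest_ind(x, y, equal_var=False).pvalue < alpha   # Welch's t test
            rej[2] += stats.ranksums(x, y).pvalue < alpha                     # Wilcoxon rank-sum
        print(d, np.round(rej / reps, 3))            # empirical power at shift d

power_curves()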

6.
A global crack-line displacement fitting procedure to extract the stress intensity factors (SIFs) is proposed in this paper. The proposed procedure uses the entire crack opening displacement (COD) data, and its numerical calculation involves only displacement fields. Post-processing is greatly reduced, and no new contours or remeshing are needed. The procedure can be easily applied to mixed-mode crack problems with arbitrary crack shapes. In addition, the errors of the obtained SIFs can be estimated from the error information of the COD data by this procedure. The procedure has been applied to several test examples of crack problems, with their COD data calculated using a constant element boundary element method with two special crack tip elements. The results verified that the proposed procedure is reliable, accurate, and easy to implement for extracting SIFs.

7.
The Weibull distribution is widely used as a failure model, particularly for mechanical components. This distribution is rich in shape and requires a fairly large sample size to produce accurate statistical estimators, particularly for the lower percentiles, as is usually required for a reliability analysis. In practice, sample sizes are almost always small and subjective judgement is applied, aided by a Weibull plot of the test data to determine the adequacy of the component design to meet reliability goals. The procedure is somewhat ad hoc, but apparently reasonably good results are obtained based on our experience with many past design and development programs and by comparison with actual field performance. We conjecture that the reason this procedure is successful is that test programs and methodology are standardized and quite well documented, from the standpoint of the physical test parameters. Test personnel have a wealth of experience in testing components, from one program to the next, and reliability judgements are made with regard to well-known points in the product's life. All of these factors tend to promote correct outcomes from the decision process even though sample sizes are small.

The Bayesian approach provides some structure for the application of subjective judgement to this decision process. To apply this approach, several complex decisions must be made. In this article, we have provided a structure for this decision process.

8.
In this article, we present a range test using a two-stage sampling procedure for testing the hypothesis that the normal means fall into a practical indifference zone. Both the level and the power associated with the proposed test are controllable and are completely independent of the unknown variances. Tables needed for implementation are given.

9.
Multinomial sampling, in which the total number of sampled subjects is fixed, is probably one of the most commonly used sampling schemes in categorical data analysis. When we apply multinomial sampling to collect subjects who are subject to random exclusion from our data analysis, the number of subjects falling into each comparison group is random and can be small with a positive probability. Thus, the application of the traditional statistics derived from large-sample theory for testing equality between two independent proportions can sometimes be theoretically invalid. On the other hand, using Fisher's exact test always assures that the true type I error is less than or equal to a nominal α-level. Thus, we discuss here power and sample size calculation based on this exact test. For a desired power at a given α-level, we develop an exact sample size calculation procedure that accounts for a random loss of sampled subjects when testing equality between two independent proportions under multinomial sampling. Because the exact sample size calculation procedure requires intensive computation when the required sample size is large, we also present an approximate sample size formula using large-sample theory. Monte Carlo simulation shows that the power obtained with this approximate sample size formula generally agrees well with the desired power under the exact test. Finally, we propose a trial-and-error procedure that uses the approximate sample size as an initial estimate and Monte Carlo simulation to expedite the search for the minimum required sample size.
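A hedged sketch of the exact and trial-and-error ideas for fixed, equal group sizes; the multinomial allocation and the random loss of subjects handled in the article are not modeled, and the brute-force power enumeration is practical only for moderate sample sizes.

import numpy as np
from scipy import stats

def fisher_power(n, p1, p2, alpha=0.05):
    """Exact power of the two-sided Fisher test with n subjects per group."""
    pmf1 = stats.binom.pmf(np.arange(n + 1), n, p1)
    pmf2 = stats.binom.pmf(np.arange(n + 1), n, p2)
    power = 0.0
    for x1 in range(n + 1):
        for x2 in range(n + 1):
            if stats.fisher_exact([[x1, n - x1], [x2, n - x2]])[1] <= alpha:
                power += pmf1[x1] * pmf2[x2]   # probability of landing in the rejection region
    return power

def approx_n(p1, p2, alpha=0.05, target=0.8):
    """Large-sample starting value for the per-group sample size."""
    za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(target)
    return int(np.ceil((za + zb) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2))

def min_n_exact(p1, p2, alpha=0.05, target=0.8):
    """Trial-and-error search started a little below the approximate value."""
    n = max(2, approx_n(p1, p2, alpha, target) - 5)
    while fisher_power(n, p1, p2, alpha) < target:
        n += 1
    return n

print(min_n_exact(0.2, 0.5))   # smallest per-group n with roughly 80% power at alpha = 0.05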

10.
An extended failure mode, effects and criticality analysis (FMECA) based sample allocation method for testability verification is presented in this study to deal with the poor representativeness of test sample sets and the randomness of testability evaluation results caused by unreasonable selection of failure samples. First, the fault propagation intensity is introduced as part of the extended FMECA information, and the sample allocation impact factors of component units and failure modes are determined under this framework. Then, the failure mode similarity and impact factor support are defined, and a game-decision method for weighing the relationship between similarity and support is proposed to obtain the weight of the failure mode impact factor. Finally, a two-step allocation framework for test samples is formulated to realize the sample allocation of component units and failure modes. The method is applied to the testability verification test of a launch control system. Results show that it yields more representative test samples than the traditional sample allocation method while effectively reducing the randomness of a single testability evaluation result.

11.
To estimate power plant reliability, a probabilistic safety assessment might combine failure data from various sites. Because dependent failures are a critical concern in the nuclear industry, combining failure data from component groups of different sizes is a challenging problem. One procedure, called data mapping, translates failure data across component group sizes. This includes common cause failures, which are simultaneous failure events of two or more components in a group. In this paper, we present a framework for predicting future plant reliability using mapped common cause failure data. The prediction technique is motivated by discrete failure data from emergency diesel generators at US plants. The underlying failure distributions are based on homogeneous Poisson processes. Both Bayesian and frequentist prediction methods are presented, and if non-informative prior distributions are applied, the upper prediction bounds for the generators are the same.
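A hedged sketch of the Bayesian prediction step for a homogeneous Poisson failure process; the mapping of common cause failure data across group sizes is not reproduced, and the Jeffreys-type prior and exposure figures are illustrative.

import numpy as np
from scipy import stats

def predictive_upper_bound(x, t, s, level=0.95, a0=0.5, b0=0.0):
    """Upper prediction bound on the failure count over a future exposure s,
    given x failures observed in exposure t and a Gamma(a0, b0) prior on the rate."""
    a_post, b_post = a0 + x, b0 + t           # gamma posterior for the failure rate
    p = b_post / (b_post + s)                 # negative-binomial success probability
    return stats.nbinom.ppf(level, a_post, p) # posterior predictive quantile

# e.g. 3 failures in 2.5e5 generator-hours, predicted over 5e4 future hours
print(predictive_upper_bound(x=3, t=2.5e5, s=5e4))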

12.
An improved harmony search algorithm is proposed which is found to be more efficient than the original harmony search algorithm for slope stability analysis. The effectiveness of the proposed algorithm is examined by considering several published cases. The improved harmony search method is applied to slope stability problems with five types of procedure for generating trial slip surfaces. It is demonstrated that the improved harmony search algorithm is efficient and effective for the minimization of factors of safety for various difficult problems, and the method of generating the trial failure surfaces can be important in the minimization process.
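For reference, a compact sketch of a basic harmony search minimizer; the paper's specific improvements and the slope-stability factor-of-safety objective are not reproduced, and the Rosenbrock function stands in for the objective here.

import numpy as np

def harmony_search(obj, lb, ub, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=5000, seed=0):
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    hm = rng.uniform(lb, ub, size=(hms, dim))           # harmony memory
    fit = np.array([obj(x) for x in hm])
    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:                     # memory consideration
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:                  # pitch adjustment
                    new[j] += bw * (ub[j] - lb[j]) * rng.uniform(-1, 1)
            else:                                       # random selection
                new[j] = rng.uniform(lb[j], ub[j])
        new = np.clip(new, lb, ub)
        f_new = obj(new)
        worst = np.argmax(fit)
        if f_new < fit[worst]:                          # replace the worst harmony
            hm[worst], fit[worst] = new, f_new
    best = np.argmin(fit)
    return hm[best], fit[best]

rosen = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
print(harmony_search(rosen, lb=[-2, -2], ub=[2, 2]))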

13.
Trend analysis is a common statistical method used to investigate the operation and changes of a repairable system over time. This method takes historical failure data of a system, or a group of similar systems, and determines whether the recurrent failures exhibit an increasing or decreasing trend. Most trend analysis methods proposed in the literature assume that the failure times are known, so the failure data are statistically complete; however, in many situations, such as hidden failures, failure times are subject to censoring. In this paper we assume that the failure process of a group of similar independent repairable units follows a non-homogeneous Poisson process with a power-law intensity function. Moreover, the failure data are subject to left, interval and right censoring. The paper proposes using the likelihood ratio test to check for trends in the failure data. It uses the Expectation-Maximization (EM) algorithm to find the parameters that maximize the data likelihood under the null and alternative hypotheses. A recursive procedure is used to solve the main technical problem of calculating the expected values in the Expectation step. The proposed method is applied to a hospital's maintenance data for trend analysis of the components of a general infusion pump.
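A sketch of the complete-data (uncensored) version of the trend test for a single unit; the EM machinery for left, interval and right censoring developed in the paper is not reproduced. For a power-law NHPP observed on (0, T], the statistic 2*sum(log(T/t_i)) is chi-squared with 2n degrees of freedom under the null hypothesis of a homogeneous Poisson process (no trend); the failure times below are made up for illustration.

import numpy as np
from scipy import stats

def power_law_trend_test(failure_times, T):
    t = np.asarray(failure_times, dtype=float)
    n = t.size
    beta_hat = n / np.log(T / t).sum()       # power-law shape MLE; beta > 1 suggests deterioration
    stat = 2.0 * np.log(T / t).sum()         # ~ chi2(2n) under the no-trend (HPP) null
    p_value = 2 * min(stats.chi2.cdf(stat, 2 * n), stats.chi2.sf(stat, 2 * n))  # two-sided
    return beta_hat, stat, p_value

print(power_law_trend_test([5.2, 11.8, 16.4, 19.0, 20.1], T=22.0))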

14.
Operating characteristic (OC) curves are often useful in determining how large a sample is required to detect a specified difference for a particular consumer and producer risk. In this paper, OC curves with Bayes stopping rules for the exponential distribution are developed. Example curves are provided for the sequential and batch testing situations. The power of the test is greater under batch testing. A table illustrates the performance of the plans for stopping at each opportunity in a sample of size 20. Some examples are used to demonstrate the application of the proposed methodology. Copyright © 2004 John Wiley & Sons, Ltd.
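A hedged sketch of an OC curve for a time-truncated exponential life test that accepts when at most c of n units have failed by time t0, a classical fixed-sample plan; the Bayes stopping rules and the sequential variant discussed in the paper are not reproduced, and all plan constants are illustrative.

import numpy as np
from scipy import stats

def oc_curve(n, c, t0, thetas):
    """P(accept) versus true exponential mean life theta for the plan (n, c, t0)."""
    thetas = np.asarray(thetas, dtype=float)
    p_fail = 1.0 - np.exp(-t0 / thetas)       # probability a unit fails by t0 given mean theta
    return stats.binom.cdf(c, n, p_fail)      # accept iff at most c of n units fail

thetas = np.linspace(200, 4000, 8)
for theta, pa in zip(thetas, oc_curve(n=20, c=2, t0=100.0, thetas=thetas)):
    print(f"mean life {theta:7.1f}  P(accept) = {pa:.3f}")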

15.
In this paper, we consider the length-biased weighted Lomax distribution and construct new acceptance sampling plans (ASPs) in which the life test is assumed to be truncated at a pre-assigned time. For the new suggested ASPs, tables of the minimum sample sizes needed to assert a specified mean life of the test units are obtained. In addition, the values of the corresponding operating characteristic function and the associated producer's risks are calculated. Analyses of two real data sets are presented to investigate the applicability of the proposed acceptance sampling plans; one data set contains the first failures of 20 small electric carts, and the other contains the failure times of the air conditioning system of an airplane. Comparisons are made between the proposed acceptance sampling plans and some existing acceptance sampling plans considered in this study on the basis of the minimum sample sizes. It is observed that the sample sizes required by the proposed acceptance sampling plans are smaller than those of their competitors considered in this study. The suggested acceptance sampling plans are recommended for practitioners in the field.

16.
This paper discusses procedures for analyzing factorial experiments, where the experiment deals with the life testing of components or equipment. These procedures assume an underlying general distribution of “times-to-failure”, of which the exponential, Weibull, and extreme value distributions are special cases. Statistical tests and confidence procedures are outlined, and an example illustrating the procedure for life-test results of glass capacitors is included. Small sample approximations, which are adequate for practical applications, are given for the proposed procedures. This is shown empirically by generating thousands of life-test experiments on an electronic computer. An empirical sampling investigation is given of the robustness of the proposed procedures. From the sampling results, it is concluded that these techniques are sensitive (non-robust) to departures from the original assumptions on the probability distribution of failure-times. An investigation is also given of a transformation which appears to give robust results. These same techniques carry over exactly to the situation where one is analyzing an array of variance estimates from an underlying normal population.

17.
In this paper, we propose degradation test sampling plans (DTSPs) used to determine the acceptability of a product under a Wiener process model. A test statistic and an acceptance criterion based on the Wiener process parameter estimates are proposed. The design of a degradation test is investigated using a model incorporating a test cost constraint to minimize the asymptotic variance of the proposed test statistic. Several important variables, including the sample size, measurement frequency, and total test time, are chosen as decision variables in a degradation test plan. The asymptotic variance of the test statistic and approximate functional forms of the optimal solutions are derived. A search algorithm for finding the optimal DTSPs is also presented as a flow chart. In addition, we assess the minimum cost required for the test procedure to satisfy both the producer's risk and the consumer's risk requirements. When the given test budget is not large enough, we suggest some methods for finding appropriate solutions. Finally, a numerical example is used to illustrate the proposed methodology. Optimum DTSPs are obtained and tabulated for some combinations of commonly used producer and consumer risk requirements. A sensitivity analysis is also conducted to investigate the sensitivity of the obtained DTSPs to the cost parameters used.
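A hedged sketch of the estimation step for a Wiener degradation model: maximum likelihood drift and diffusion estimates from measured increments, plus an illustrative acceptance rule based on the estimated mean time to reach a failure threshold; the paper's actual test statistic, cost model and optimal plan search are not reproduced, and all numbers below are made up.

import numpy as np

def wiener_mle(times, paths):
    """times: (m,) measurement epochs; paths: (n, m) degradation of n units."""
    dt = np.diff(np.asarray(times, float))               # (m-1,) time steps
    dy = np.diff(np.asarray(paths, float), axis=1)       # (n, m-1) degradation increments
    n_units = dy.shape[0]
    mu_hat = dy.sum() / (n_units * dt.sum())             # drift MLE
    sigma2_hat = np.mean((dy - mu_hat * dt) ** 2 / dt)   # diffusion MLE
    return mu_hat, sigma2_hat

def accept(times, paths, Df, required_mean_life):
    """Illustrative rule: accept if the estimated mean time to cross threshold Df is long enough."""
    mu_hat, _ = wiener_mle(times, paths)
    return (Df / mu_hat) >= required_mean_life            # E[T] = Df / mu for a drifting Wiener path

rng = np.random.default_rng(2)
times = np.linspace(0.0, 50.0, 11)
dt = np.diff(times)
paths = np.cumsum(rng.normal(0.08 * dt, np.sqrt(0.02 * dt), size=(12, 10)), axis=1)
paths = np.hstack([np.zeros((12, 1)), paths])             # degradation starts at zero
print(wiener_mle(times, paths), accept(times, paths, Df=10.0, required_mean_life=100.0))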

18.
Approaches for software failure probability estimation are mainly based on the results of testing. Test cases represent the inputs that are encountered in actual use. The test inputs for a safety-critical application such as a reactor protection system (RPS) of a nuclear power plant are the inputs that cause the activation of a protective action such as a reactor trip. A digital system treats inputs from instrumentation sensors as discrete digital values by using an analog-to-digital converter. The input profile must be determined in consideration of these characteristics for effective software failure probability quantification. Another important characteristic of software testing is that the test need not be repeated for the same input value, since the software response is deterministic for each specific digital input. With these considerations, we propose an effective software testing method for quantifying the failure probability. As an example application, the input profile of the digital RPS is developed based on typical plant data. The proposed method is expected to provide a simple but realistic means to quantify the software failure probability based on the input profile and system dynamics.

19.
The design of supplementary damping controllers to mitigate the effects of electromechanical oscillations in power systems is a highly complex and time-consuming process, which requires a significant amount of knowledge on the part of the designer. In this study, the authors propose an automatic technique that takes the burden of tuning the controller parameters away from the power engineer and places it on the computer. Unlike other approaches that do the same based on robust control theories or evolutionary computing techniques, the proposed procedure uses an optimisation algorithm that works over a formulation of the classical tuning problem in terms of bilinear matrix inequalities. Using this formulation, it is possible to apply linear matrix inequality solvers to find a solution to the tuning problem via an iterative process, with the advantage that these solvers are widely available and have well-known convergence properties. The proposed algorithm is applied to tune the parameters of supplementary controllers for thyristor controlled series capacitors placed in the New England/New York benchmark test system, aiming at the improvement of the damping factor of inter-area modes under several different operating conditions. The results of the linear analysis are validated by non-linear simulation and demonstrate the effectiveness of the proposed procedure.

20.
A heterogeneous approach for FE upper bound limit analyses of out-of-plane loaded masonry panels is presented. Under the assumption of associated plasticity for the constituent materials, mortar joints are reduced to interfaces with a Mohr–Coulomb failure criterion with tension cut-off and a cap in compression, whereas for the bricks both limited and unlimited strength are taken into account. At each interface, plastic dissipation can occur as a combination of out-of-plane shear, bending and torsion. In order to test the reliability of the proposed model, several examples of dry-joint panels loaded out-of-plane and tested at the University of Calabria (Italy) are discussed. Numerical results are compared with experimental data for three different series of walls at different values of the applied in-plane compressive vertical loads. The comparisons show that reliable predictions of both collapse loads and failure mechanisms can be obtained by means of the numerical procedure employed.
