Similar Documents
20 similar documents found (search time: 31 ms)
1.
This paper proposes a Bayesian method to set tolerance or specification limits on one or more responses and obtain optimal values for a set of controllable factors. The existence of such controllable factors (or parameters) that can be manipulated by the process engineer and that affect the responses is assumed. The dependence between the controllable factors and the responses is assumed to be captured by a regression model fit from experimental data, where the data are assumed to be available. The proposed method finds the optimal setting of the control factors (parameter design) and the corresponding specification limits for the responses (tolerance control) in order to achieve a desired posterior probability of conformance of the responses to their specifications. Contrary to standard approaches in this area, the proposed Bayesian approach uses the complete posterior predictive distribution of the responses; thus, the tolerances and settings obtained implicitly account for both the mean and variance of the responses and for the uncertainty in the regression model parameters.
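A minimal Monte Carlo sketch of this idea follows, assuming posterior draws of the regression coefficients and error standard deviation for a single response are already available (for example, from MCMC). The posterior draws, control-factor grid, spec half-widths, and target conformance of 0.95 are illustrative stand-ins, not the paper's actual model.

```python
# A minimal Monte Carlo sketch of the idea, not the paper's actual model: assume we
# already have posterior draws (beta_draws, sigma_draws) for a single-response linear
# regression y = x @ beta + eps, eps ~ N(0, sigma^2). All names here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws (stand-ins for MCMC output from the fitted model).
n_draws = 4000
beta_draws = rng.normal([10.0, 2.5, -1.2], [0.3, 0.2, 0.2], size=(n_draws, 3))
sigma_draws = np.sqrt(1.0 / rng.gamma(shape=20.0, scale=1.0 / 15.0, size=n_draws))

def conformance_probability(x, lsl, usl):
    """Posterior predictive P(LSL <= Y <= USL) at control setting x."""
    mu = beta_draws @ x                  # predictive mean for each posterior draw
    y = rng.normal(mu, sigma_draws)      # one predictive draw per posterior draw
    return np.mean((y >= lsl) & (y <= usl))

# Scan a small grid of candidate control settings and symmetric spec half-widths,
# keeping the tightest tolerance that still meets the target conformance probability.
target = 0.95
best = None
for x1 in np.linspace(-1, 1, 11):
    for x2 in np.linspace(-1, 1, 11):
        x = np.array([1.0, x1, x2])      # intercept plus two control factors
        center = beta_draws.mean(axis=0) @ x
        for half_width in np.linspace(0.5, 3.0, 26):
            p = conformance_probability(x, center - half_width, center + half_width)
            if p >= target and (best is None or half_width < best[0]):
                best = (half_width, x1, x2, p)

print("tightest half-width meeting the target conformance:", best)
```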

2.
Recent Food and Drug Administration (FDA) validation guidelines and comments indicate that applying finished product content uniformity specifications to blend testing is unacceptable. The scenario the FDA has presented is one in which disorder increases as the process progresses so that blend test specifications should be more restrictive (tighter) than finished product testing specifications. In other publications, it has been suggested that finished product assay limits be applied as a blend specification along with a lower relative standard deviation value than the current USP content uniformity limit (6.0%). This approach is questionable since assay results are applied to an aggregate finished product sample rather than individual doses. A method is presented in this paper for applying statistical tolerance limits (STLs) to blend data. This procedure provides a 95% confidence level that at least 90% of the values for the entire population are within the calculated limits. These statistical tolerance limits provide an acceptable criterion that is statistically tighter than the application of USP XXIII finished product content uniformity specifications. In addition, this method involves a decision process or multiple-level evaluation based on a statistical comparison of the variance and mean for the blend and finished product. In cases where the calculated STLs are unacceptable, the decision process allows for determining if the out-of-specification values from the first level of testing are due to a true blend failure or if the cause of the aberration is due to other phenomena, which could include sampling technique, thief design, and analytical testing problems.
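A hedged sketch of a 95%-confidence / 90%-coverage two-sided normal tolerance limit calculation is shown below; it uses Howe's approximation for the k-factor, whereas the paper may rely on exact or tabulated factors, and the blend potency values are made up for illustration.

```python
# Two-sided normal tolerance limits (95% confidence / 90% coverage) via Howe's
# approximation; the blend data below are illustrative, not from the paper.
import numpy as np
from scipy import stats

def two_sided_tolerance_limits(x, coverage=0.90, confidence=0.95):
    x = np.asarray(x, dtype=float)
    n = x.size
    df = n - 1
    z = stats.norm.ppf((1.0 + coverage) / 2.0)
    chi2 = stats.chi2.ppf(1.0 - confidence, df)      # lower chi-square quantile
    k = z * np.sqrt(df * (1.0 + 1.0 / n) / chi2)     # Howe (1969) approximation
    m, s = x.mean(), x.std(ddof=1)
    return m - k * s, m + k * s

blend_potency = [99.1, 100.4, 98.7, 101.2, 99.8, 100.9, 98.9, 100.2, 99.5, 100.6]
lo, hi = two_sided_tolerance_limits(blend_potency)
print(f"STL: [{lo:.2f}, {hi:.2f}] % of label claim")
```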

4.
Statistical tolerance intervals are often used during design verification or process validation in diverse applications, such as the manufacturing of medical devices, the construction of nuclear reactors, and the development of protective armor for the military. Like other statistical problems, the determination of a minimum required sample size when using tolerance intervals commonly arises. Under the Faulkenberry-Weeks approach for sample size determination of parametric tolerance intervals, the user must specify two quantities, typically set to rule-of-thumb values, that characterize the desired precision of the tolerance interval. Practical applications of sample size determination for tolerance intervals often have historical data that one expects to closely follow the distribution of the future data to be collected. Moreover, such data are typically required to meet specification limits. We provide a strategy for specifying the precision quantities in the Faulkenberry-Weeks approach that utilizes both historical data and the required specification limits. Our strategy is motivated by a sampling plan problem for the manufacturing of a certain medical device that requires calculation of normal tolerance intervals. Both classical and Bayesian normal tolerance intervals are considered. Additional numerical studies are provided to demonstrate the general applicability of our strategy for setting the precision quantities.
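The following simulation sketch illustrates the general flavor of a precision-driven sample-size search for a normal tolerance interval; it is not the authors' closed-form strategy. It follows the spirit of the Faulkenberry-Weeks criterion, requiring that the interval cover more than a proportion P' (larger than the nominal 90%) with probability at most delta. The historical mean and standard deviation and all threshold values are hypothetical.

```python
# Simulation-based sample-size search for a 95%/90% normal tolerance interval:
# require P(interval content > P') <= delta. Not the authors' approach; the
# historical estimates and thresholds below are hypothetical stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu_hist, sd_hist = 12.0, 0.5            # hypothetical historical estimates
coverage, confidence = 0.90, 0.95
p_prime, delta = 0.97, 0.10

def k_factor(n):
    z = stats.norm.ppf((1 + coverage) / 2)
    chi2 = stats.chi2.ppf(1 - confidence, n - 1)
    return z * np.sqrt((n - 1) * (1 + 1 / n) / chi2)     # Howe approximation

def prob_overcoverage(n, reps=2000):
    k = k_factor(n)
    x = rng.normal(mu_hist, sd_hist, size=(reps, n))
    m, s = x.mean(axis=1), x.std(axis=1, ddof=1)
    # True content of each simulated interval under the assumed normal population.
    content = (stats.norm.cdf(m + k * s, mu_hist, sd_hist)
               - stats.norm.cdf(m - k * s, mu_hist, sd_hist))
    return np.mean(content > p_prime)

for n in range(10, 201, 5):
    if prob_overcoverage(n) <= delta:
        print("smallest n meeting the precision requirement (approx.):", n)
        break
```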

5.
This paper deals with an approach for choosing the proper subgroup size for control charts. The approach is particularly appropriate for batch industries where some batch-to-batch variation is to be expected and should be accommodated. It uses two tests to evaluate the proper subgroup sizes. The tests are ANOVA for testing that the process mean is in control and Bartlett's test for testing that the process variance is in control. In addition to the two tests, the process spread estimated from measurements at the selected subgroup size needs to be compared with the desired specification limits. An example is included to illustrate the use of the approach.
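A minimal sketch of the two checks described above follows, using SciPy's one-way ANOVA and Bartlett's test on candidate subgroupings; the measurement stream, specification limits, and pass/fail logic are purely illustrative.

```python
# Evaluate candidate subgroup sizes with ANOVA (mean in control) and Bartlett's test
# (variance in control), plus a crude spread-versus-spec check. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def evaluate_subgroup_size(data, subgroup_size, alpha=0.05, lsl=9.0, usl=11.0):
    """Split the data stream into subgroups of the candidate size and run both tests."""
    n_groups = len(data) // subgroup_size
    groups = [data[i * subgroup_size:(i + 1) * subgroup_size] for i in range(n_groups)]
    f_stat, p_mean = stats.f_oneway(*groups)        # mean stability across subgroups
    b_stat, p_var = stats.bartlett(*groups)         # variance stability across subgroups
    width_ok = (data.min() >= lsl) and (data.max() <= usl)
    return {"subgroup_size": subgroup_size, "p_mean": round(p_mean, 4),
            "p_var": round(p_var, 4), "mean_in_control": p_mean > alpha,
            "var_in_control": p_var > alpha, "within_spec": width_ok}

data = rng.normal(10.0, 0.25, size=120)             # stand-in for batch measurements
for size in (4, 5, 6, 8, 10):
    print(evaluate_subgroup_size(data, size))
```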

6.
In industrial processes, capability indices like the Cpk are frequently used. This index requires that specification limits have been set. The current paper focuses on setting preliminary specification limits based on the statistical performance of the process alone. This preliminary interval should correspond to a Cpk value that is considered sufficiently high, with a pre-specified confidence level. The paper presents a theoretical framework for setting these limits.
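One simple way to realize this idea is sketched below: choose symmetric limits around the sample mean wide enough that Cpk meets a target even if the true standard deviation is as large as its upper confidence bound. This is an illustration under those assumptions, not necessarily the framework derived in the paper; the sample data and Cpk target are made up.

```python
# Preliminary spec limits from process data alone: width = 3 * Cpk_target * sigma_upper,
# where sigma_upper is an upper confidence bound for sigma. Illustrative only.
import numpy as np
from scipy import stats

def preliminary_limits(x, cpk_target=1.33, confidence=0.95):
    x = np.asarray(x, dtype=float)
    n, m, s = x.size, x.mean(), x.std(ddof=1)
    # Upper confidence bound for sigma from the chi-square distribution of (n-1)s^2/sigma^2.
    sigma_upper = s * np.sqrt((n - 1) / stats.chi2.ppf(1 - confidence, n - 1))
    half_width = 3.0 * cpk_target * sigma_upper
    return m - half_width, m + half_width

rng = np.random.default_rng(3)
sample = rng.normal(50.0, 1.2, size=40)             # stand-in process measurements
lsl, usl = preliminary_limits(sample)
print(f"preliminary specification limits: [{lsl:.2f}, {usl:.2f}]")
```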

7.
A β-content tolerance interval (TI) is a statistical interval which contains at least some fraction (proportion) β of the population with a given confidence level. When we are interested in the precision of a quality characteristic, a TI for the sample variance is useful. In this paper, we consider an exact two-sided β-content TI for the sample variance from a normal distribution with a specified ratio of the tail probabilities. The proposed tolerance interval allows the practitioner more control over how the probabilities in the tails are distributed, which may be useful in certain applications. A comparison with an existing two-sided β-content TI shows that the proposed TI is better on the basis of expected coverage and standard deviation of the coverage. In addition, the proposed TI is shown to require fewer subgroups to achieve a specific accuracy level. Moreover, a phase II control chart with guaranteed performance is obtained from the proposed TI. Finally, a real dataset and a simulated dataset are used for illustration.

8.
Tolerance analysis of an assembly is an important issue for mechanical design. Among various tolerance analysis methods, statistical analysis is the most commonly employed method. However, the conventional statistical tolerance method is often based on the normal distribution. It fails to predict the resultant tolerance of an assembly of parts with non-normal distributions. In this paper, a novel method based on statistical moments is proposed. Tolerance distributions of parts are first transferred into statistical moments that are then used for computing tolerance stack-up. The computed moments, particularly the variance, the skewness and the kurtosis, are then mapped back to probability distributions in order to calculate the resultant tolerance of the assembly. The proposed method can be used to analyse the resultant tolerance specification for non-normal distributions with different skewness and kurtosis. Simulated results showed that tail coefficients of different distributions with the same kurtosis are close to each other for normalised probabilities between −3 and 3. That is, the tail coefficients of a statistical distribution can be predicted by the coefficients of skewness and kurtosis. Two examples are illustrated in the paper to demonstrate the proposed method. The predicted resultant tolerances of the two examples differ by only 0.5% and 1.5% from the results of a Monte Carlo simulation with 1,000,000 samples. The proposed method is much faster in computation with higher accuracy than conventional statistical tolerance methods. The merit of the proposed method is that the computation is fast and comparatively accurate for both symmetrical and unsymmetrical distributions, particularly when the required probability is between ±2σ and ±3σ.
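The sketch below covers only the moment-combination step of such an approach (not the authors' mapping back to a distribution): for a linear stack-up of independent part dimensions, the mean, variance, and third central moment add, while the fourth central moment picks up a cross term. The three part distributions are illustrative choices, and the analytic moments are checked against Monte Carlo.

```python
# Combine moments of independent part dimensions for a linear stack-up and verify
# against simulation. Part distributions are illustrative, not from the paper.
import numpy as np
from scipy import stats

parts = [stats.norm(10.0, 0.05),
         stats.uniform(5.0, 0.2),
         stats.gamma(4.0, loc=2.0, scale=0.03)]

mean = sum(p.mean() for p in parts)
var = sum(p.var() for p in parts)
mu3 = sum(p.stats(moments="s") * p.std() ** 3 for p in parts)              # third central moments add
mu4_parts = [(p.stats(moments="k") + 3.0) * p.var() ** 2 for p in parts]   # fourth central moments
cross = sum(parts[i].var() * parts[j].var()
            for i in range(len(parts)) for j in range(i + 1, len(parts)))
mu4 = sum(mu4_parts) + 6.0 * cross                  # independence cross term

skew = mu3 / var ** 1.5
kurt = mu4 / var ** 2                               # non-excess kurtosis

# Monte Carlo check of the analytic stack-up moments.
rng = np.random.default_rng(4)
samples = sum(p.rvs(size=200_000, random_state=rng) for p in parts)
print("analytic :", mean, var, skew, kurt)
print("simulated:", samples.mean(), samples.var(ddof=1),
      stats.skew(samples), stats.kurtosis(samples, fisher=False))
```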

9.
We investigate acceptance-sampling methods for univariate and multivariate normal data in which the quality of the process relative to specification limits is measured by an estimate of the proportion nonconforming, and the mean and variance are unknown. A maximum likelihood method is developed, and we compare it with existing approaches to acceptance sampling. This method is applied to a problem involving government regulation of the gas industry in Canada. The justification for basing national and international standards on an approach that uses the minimum variance unbiased estimator of the proportion nonconforming is examined.
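A minimal sketch of the univariate-normal case follows: the maximum likelihood estimate of the proportion nonconforming plugs the sample mean and the ML standard deviation (divisor n) into the normal tail areas outside the specification limits. The lot data, limits, and accept/reject threshold are illustrative, not the paper's plan.

```python
# ML estimate of the proportion nonconforming for a normal lot with unknown mean
# and variance; the acceptance threshold below is only illustrative.
import numpy as np
from scipy import stats

def ml_proportion_nonconforming(x, lsl, usl):
    x = np.asarray(x, dtype=float)
    mu_hat = x.mean()
    sigma_hat = x.std(ddof=0)                       # ML estimate uses divisor n
    return stats.norm.cdf(lsl, mu_hat, sigma_hat) + stats.norm.sf(usl, mu_hat, sigma_hat)

rng = np.random.default_rng(5)
lot = rng.normal(100.0, 2.0, size=30)               # stand-in sample from a lot
p_hat = ml_proportion_nonconforming(lot, lsl=94.0, usl=106.0)
print(f"estimated proportion nonconforming: {p_hat:.4f}")
print("accept lot" if p_hat <= 0.01 else "reject lot")
```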

10.
Robust parameter design (RPD) and tolerance design (TD) are two important stages in the design process for quality improvement. Simultaneous optimization of RPD and TD is well established for linear models under the assumption of constant variance. However, little attention has been paid to RPD and TD with non-constant variance of residuals or non-normal responses. In order to obtain further quality improvement and cost reduction, a hybrid approach for simultaneous optimization of RPD and TD with non-constant variance or non-normal responses is proposed based on generalized linear models (GLMs). First, the mathematical relationship among the process mean, process variance and the control factors, noise factors and tolerances is derived from a dual-response approach based on GLMs, and a quality loss function that incorporates tolerance is developed. Second, a total cost model for concurrent RPD-TD optimization based on GLMs is proposed to determine the best control factor settings and the optimal tolerance values synchronously; the model is solved in detail by a genetic algorithm. Finally, the proposed approach is applied to an example of electronic circuit design with non-constant variance, and the results show that the proposed approach performs better in terms of quality improvement and cost reduction. Copyright © 2012 John Wiley & Sons, Ltd.

11.
Tolerance is one of the most important parameters in design and manufacturing. Tolerance synthesis has a significant impact on manufacturing cost and product quality. In the international standards community, two approaches for statistical tolerancing of mechanical parts are being discussed: process capability indices and distribution function zone. The distribution function zone (DFZone) approach defines the acceptability of a population of parts by requiring that the distribution function of relevant values of the parts be bounded by a pair of specified distribution functions. In order to apply this approach to statistical tolerancing, one needs a method to decompose the assembly-level tolerance specification to obtain tolerance parameters for each component in conjunction with a corresponding tolerance-cost model. This paper introduces an optimization-based statistical tolerance synthesis model based on the DFZone tolerance specifications. A new tolerance-cost model is proposed and the model is illustrated with an assembly example.

12.
Principle of non-statistical hypothesis testing and its application   (total citations: 2; self-citations: 0; citations by others: 2)
Xia Xintao, Wang Zhongyu. 《计量学报》 (Acta Metrologica Sinica), 2006, 27(2): 190-195
Based on fuzzy set theory, a new hypothesis testing method, called non-statistical hypothesis testing, is proposed. Starting from a small amount of sampled data, the method uses a linear estimation technique to automatically identify the membership function of the population distribution and to estimate directly the true value of the population and its distribution interval. A rejection region for the hypothesis test, described by membership degree, is established on the basis of fuzzy set theory. Comparative analysis against statistical methods over a large number of cases shows that the proposed non-statistical hypothesis testing method performs well, with a confidence level of 95%.

13.
Statistical tolerance intervals are widely used in industry and in various areas of science, especially in conformity assessment and acceptance of products or processes in terms of quality. When the interest is in precision, a tolerance interval for the variance is useful. In this paper, we consider two-sided tolerance intervals for the population of sample variances for data that arise from a normal distribution. These intervals are useful in applications where one needs information about process deterioration as well as process improvement, to properly assess product quality. The theory for these tolerance intervals is developed, and tables of the tolerance factors required to calculate the proposed tolerance limits are provided for various settings. Construction and implementation of the proposed tolerance intervals are illustrated using a dataset from a real application. Summary and conclusions are offered.

14.
This paper develops a new robust topology optimization method for the concurrent design of cellular composites with an array of identical microstructures subject to random-interval hybrid uncertainties. A concurrent topology optimization framework is formulated to optimize both the composite macrostructure and the material microstructure. The robust objective function is defined based on the interval mean and interval variance of the corresponding objective function. A new uncertainty propagation approach, termed the hybrid univariate dimension reduction method, is proposed to estimate the interval mean and variance. The sensitivity information of the robust objective function can be obtained after the uncertainty analysis. Several numerical examples are used to validate the effectiveness of the proposed robust topology optimization method.

15.
Continuous improvement of the quality of industrial products is an essential factor in modern-day manufacturing. The investigation of those factors that affect the process mean and the process dispersion (standard deviation) is an important step in such improvements. Most often, experiments are executed for such investigations. To detect mean factors, I use the usual analysis of variance on the experimental data. However, there is no unified method for identifying dispersion factors. In recent years several methods have been proposed for identifying such factors when the factors have two levels. Multilevel factors, especially three-level factors, are common in industrial experiments, but we lack methods for identifying dispersion effects of multilevel factors. In this paper, I develop a method for identifying dispersion effects from general fractional factorial experiments. This method consists of two stages. The first stage involves the identification of mean factors using the performance characteristic as the response. The second stage involves the computation of a dispersion measure and the identification of dispersion factors using the dispersion measure as the response. The sequence for identifying dispersion factors is first to test the significance of the total dispersion effect of a factor, then to test the dispersion contrasts of interest, a method similar to the typical post hoc testing procedure in ANOVA. This familiar approach should be appealing to practitioners. Copyright © 2001 John Wiley & Sons, Ltd.
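A rough two-stage stand-in for this kind of screen is sketched below; it is not the author's dispersion-contrast procedure. Stage one fits a location (mean) model and keeps the residuals; stage two screens each factor for dispersion effects by testing homogeneity of residual variance across its levels with Levene's test. The three-level design and response data are simulated for illustration.

```python
# Two-stage dispersion screen: (1) fit a location model and take residuals,
# (2) test residual-variance homogeneity across each factor's levels.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
levels = [-1, 0, 1]
design = pd.DataFrame([(a, b) for a in levels for b in levels for _ in range(6)],
                      columns=["A", "B"])
# Simulated response: factor A shifts the mean, factor B inflates the spread at its high level.
sigma = np.where(design["B"] == 1, 2.0, 0.7)
design["y"] = 10 + 1.5 * design["A"] + rng.normal(0.0, sigma)

# Stage 1: location model with the factors treated as categorical; keep residuals.
fit = smf.ols("y ~ C(A) + C(B)", data=design).fit()
design["resid"] = fit.resid

# Stage 2: for each factor, test homogeneity of residual variance across its levels.
for factor in ("A", "B"):
    groups = [g["resid"].to_numpy() for _, g in design.groupby(factor)]
    stat, p = stats.levene(*groups)
    verdict = "possible dispersion effect" if p < 0.05 else "no evidence"
    print(f"factor {factor}: Levene p-value = {p:.4f} -> {verdict}")
```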

16.
Statistical hypothesis testing is useful for controlling and improving processes, products, and services. This most fundamental, yet powerful, continuous improvement tool has a wide range of applications in quality and reliability engineering. Some application areas include statistical process control, process capability analysis, design of experiments, life testing, and reliability analysis. It is well-known that most parametric hypothesis tests on a population mean, such as the z-test and t-test, require a random sample from the population under study. However, there are special situations in engineering where specification limits, such as lower and upper specification limits, are imposed on the process externally, and a product is typically reworked or scrapped if its performance does not fall within that range. In such cases, a random sample needs to be taken from a truncated distribution; however, there has been little work on the theoretical foundation of statistical hypothesis testing procedures in this special situation. The objective of this paper is twofold. First, we provide mathematical justification that the central limit theorem works quite well for large sample sizes when samples are taken from a truncated distribution. We also verify this finding using simulation. Second, we then develop new one-sided and two-sided z-test and t-test procedures, including their test statistics, confidence intervals, and P-values, using appropriate truncated statistics. Copyright © 2014 John Wiley & Sons, Ltd.
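A small simulation in the spirit of the first point is sketched below: sample means computed from a truncated normal population are compared with the normal reference suggested by the central limit theorem. The truncation bounds play the role of externally imposed specification limits, and all numbers are illustrative rather than taken from the paper.

```python
# Simulate sample means from a truncated normal population and compare their
# quantiles with the CLT normal reference. All parameter values are illustrative.
import numpy as np
from scipy import stats

mu, sigma = 50.0, 4.0
lsl, usl = 46.0, 58.0                              # external spec limits (truncation points)
a, b = (lsl - mu) / sigma, (usl - mu) / sigma      # standardized truncation bounds
population = stats.truncnorm(a, b, loc=mu, scale=sigma)

rng = np.random.default_rng(7)
n, reps = 40, 20_000
means = population.rvs(size=(reps, n), random_state=rng).mean(axis=1)

# CLT reference: normal with the truncated-population mean and sd / sqrt(n).
clt_ref = stats.norm(population.mean(), population.std() / np.sqrt(n))
qs = [0.025, 0.25, 0.5, 0.75, 0.975]
print(f"truncated-population mean = {population.mean():.3f}, sd = {population.std():.3f}")
print("simulated quantiles of the sample mean:", np.quantile(means, qs).round(3))
print("CLT normal reference quantiles:        ", clt_ref.ppf(qs).round(3))
```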

17.
According to International Council for Harmonisation (ICH) guideline Q6A, dissolution testing can be replaced by disintegration testing if it can be shown that the active pharmaceutical ingredient is highly soluble and the formulation is rapidly releasing. In addition, a relationship between dissolution and disintegration has to be established. For a fixed-dose combination tablet of empagliflozin and linagliptin, this relationship was established by applying two different approaches. In the first approach, the extent to which the disintegration process of the film-coated tablets contributes to the release of the active ingredients was investigated. In the second approach, the mean disintegration times in a disintegration tester were correlated with the mean dissolution rates at a selected sampling time point. By correlating disintegration times in the dissolution vessel with the dissolution rate at selected sampling times, it is demonstrated that the disintegration into primary particles is the rate-limiting step for the dissolution process. A direct correlation of disintegration times in the disintegration tester with dissolution rate at a selected sampling time is established, supporting a relationship between dissolution and disintegration testing for this type of formulation. Additionally, it could also be shown that the disintegration test method exhibits at least a similar discriminatory power compared to the proposed dissolution method. Based on a statistical approach and data from a bioavailability study, a clinically relevant specification for the disintegration time was established. All presented data support the replacement of dissolution by disintegration testing according to ICH Q6A for the selected fixed-dose combination product.

18.
Engineering tolerance design plays an important role in modern manufacturing. Both symmetric and asymmetric tolerances are common in many manufacturing processes. Recently, various revised loss functions have been proposed for overcoming the drawbacks of Taguchi's loss function. In this article, Kapur's economic tolerance design model is modified and the economic specification limits for both symmetric and asymmetric losses are established. Three different loss functions are compared in the optimal symmetric and asymmetric tolerance design: a revised Taguchi quadratic loss function, an inverted normal loss function and a revised inverted normal loss function. The relationships among the three loss functions and process capability indices are established. A numerical example is given to compare the economic specification limits established by using the three loss functions. The results suggest that the revised inverted normal loss function be used in determining economic specification limits.
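For orientation, the sketch below compares two of the loss shapes mentioned above (the standard quadratic loss and the standard inverted normal loss, not the revised forms from the article) and derives symmetric "break-even" specification limits where the customer loss equals a rework cost. This is a textbook-style illustration under those assumptions, not the modified Kapur model, and all cost figures are made up.

```python
# Compare quadratic and inverted normal loss functions and set symmetric spec limits
# at the break-even deviation where shipping costs as much as rework. Illustrative only.
import numpy as np

target = 100.0
A = 20.0             # maximum loss used to scale both loss functions (cost units)
gamma = 3.0          # shape parameter of the inverted normal loss
k = A / gamma**2     # quadratic-loss coefficient chosen so the two losses are comparable
rework_cost = 5.0

def quadratic_loss(y):
    return k * (y - target) ** 2

def inverted_normal_loss(y):
    return A * (1.0 - np.exp(-((y - target) ** 2) / (2.0 * gamma**2)))

# Break-even half-widths: the deviation at which shipping the item costs as much as rework.
delta_quad = np.sqrt(rework_cost / k)
delta_inlf = gamma * np.sqrt(-2.0 * np.log(1.0 - rework_cost / A))
print(f"quadratic loss:       spec limits = {target - delta_quad:.2f} .. {target + delta_quad:.2f}")
print(f"inverted normal loss: spec limits = {target - delta_inlf:.2f} .. {target + delta_inlf:.2f}")
```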

19.
This article investigates computation of pointwise and simultaneous tolerance limits under the logistic regression model for binary data. The data consist of n binary responses, where the probability of a positive response depends on covariates via the logistic regression function. Upper tolerance limits are constructed for the number of positive responses in m future trials for fixed as well as varying levels of the covariates. The former provides pointwise upper tolerance limits, and the latter provides simultaneous upper tolerance limits. The upper tolerance limits are obtained from upper confidence limits for the probability of a positive response, modeled using the logistic function. To compute pointwise upper confidence limits for the logistic function, likelihood-based asymptotic methods, small-sample asymptotics, and bootstrap methods are investigated and numerically compared. To compute simultaneous upper tolerance limits, a bootstrap approach is investigated. The problems have been motivated by an application of interest to the U.S. Army, dealing with the testing of ballistic armor plates for protecting soldiers from projectiles and shrapnel, where the success probability depends on covariates such as the projectile velocity, size of the armor plate, etc. Such an application is used to illustrate the tolerance interval computations in the article. We provide the R codes used for the calculations presented in the examples in the article as supplementary material, available online.
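A Python sketch of the pointwise bootstrap construction described above is shown below for a single covariate value: refit the logistic regression on bootstrap resamples, take an upper percentile of the predicted success probability, and convert it to an upper tolerance limit for the number of positives in m future trials via the binomial quantile. The data, covariate value x0, and settings are simulated stand-ins, not the Army application data.

```python
# Pointwise bootstrap upper tolerance limit for the number of positives in m future
# trials under a logistic regression model. Data and settings are simulated.
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(8)

# Simulated data: success probability increasing in a single covariate (e.g., velocity).
n = 200
x = rng.uniform(0.0, 10.0, size=n)
p_true = 1.0 / (1.0 + np.exp(-(-4.0 + 0.8 * x)))
y = rng.binomial(1, p_true)

X = sm.add_constant(x)
x0, m, content, confidence = 6.0, 20, 0.90, 0.95
boot_p = []
for _ in range(1000):
    idx = rng.integers(0, n, size=n)
    fit = sm.Logit(y[idx], X[idx]).fit(disp=0)
    boot_p.append(fit.predict([1.0, x0])[0])        # predicted P(success) at x0

p_upper = np.quantile(boot_p, confidence)           # upper confidence limit for p(x0)
upper_tl = stats.binom.ppf(content, m, p_upper)     # upper tolerance limit for m future trials
print(f"upper {confidence:.0%} confidence limit for p({x0}) = {p_upper:.3f}")
print(f"upper tolerance limit for positives in {m} future trials: {int(upper_tl)}")
```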

20.
Knowing the time of changes in mean and variance in a process is crucial for engineers to identify the special cause quickly and correctly. Because assignable causes may give rise to changes in mean and variance at the same time, monitoring the mean and variance simultaneously is required. In this paper, a mixture likelihood approach is proposed to detect shifts in mean and variance simultaneously in a normal process. We first recast the change point model as a mixture model and then employ the expectation-maximization algorithm to estimate the time of shifts in mean and variance simultaneously. The proposed method, called EMCP (expectation and maximization change point), can be used in both phase I and II applications without knowledge of the in-control process parameters. Moreover, EMCP can detect the time of multiple shifts and simultaneously produce the estimates of shifts in each individual segment. Numerical data and real datasets are employed to compare EMCP with the direct statistical maximum likelihood method without the use of mixture models. The experimental results show the superiority and effectiveness of the proposed EMCP. The outperformance of EMCP in detecting the time of small shifts is particularly important and beneficial for engineers to identify assignable causes rapidly and accurately in phase II applications, in which small shifts occur more often and hence lead to a large average run length. Copyright © 2015 John Wiley & Sons, Ltd.
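For context, the sketch below implements the direct maximum-likelihood baseline mentioned above (not the EMCP mixture algorithm itself): scan every candidate change point, fit separate normal means and variances to the two segments, and pick the split with the highest log-likelihood. The data are simulated with a joint shift in mean and variance at t = 60.

```python
# Direct ML single change-point search for a joint shift in mean and variance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
data = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(1.0, 2.0, 40)])

def segment_loglik(x):
    mu, sd = x.mean(), x.std(ddof=0)
    return stats.norm.logpdf(x, mu, sd).sum()

best_tau, best_ll = None, -np.inf
for tau in range(5, len(data) - 5):                 # keep a few points in each segment
    ll = segment_loglik(data[:tau]) + segment_loglik(data[tau:])
    if ll > best_ll:
        best_tau, best_ll = tau, ll

print(f"estimated change point: {best_tau} (true value 60)")
print("segment (mean, sd) estimates:",
      (data[:best_tau].mean(), data[:best_tau].std(ddof=0)),
      (data[best_tau:].mean(), data[best_tau:].std(ddof=0)))
```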
