Similar Documents
A total of 20 similar documents were found.
1.
An economic model is developed to assist in the selection of minimum-cost acceptance sampling plans by variables. The quadratic Taguchi loss function is adopted to model the cost of accepting items whose quality characteristics deviate from the target value. The case of a normally distributed quality characteristic with known variance is examined, and a simple and efficient optimization algorithm is proposed. Comparisons with other methods of deriving sampling plans reveal that the cost penalties for using an inappropriate plan may be very large.
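For orientation, the expected acceptance loss under these assumptions follows from a standard identity (not taken from the paper): for a characteristic $X \sim N(\mu, \sigma^2)$ with target $T$ and loss coefficient $k$,

\[
\mathbb{E}\bigl[k\,(X-T)^2\bigr] \;=\; k\bigl[\sigma^2 + (\mu - T)^2\bigr],
\]

so the cost of acceptance decomposes into a variance penalty and an off-target penalty, both of which the optimization must trade off against sampling cost.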

2.
The maximum exponentially weighted moving average (MaxEWMA) control chart combines two EWMA charts into one and monitors both increases and decreases in the mean and/or variability. In this paper, we develop the economic–statistical design of the MaxEWMA control chart, in which Taguchi's quadratic loss function is incorporated into Duncan's economic model. Numerical simulations are performed to minimize the expected total cost and determine the optimal decision variables: the sample size, sampling interval, control limit width, and smoothing constant of the MaxEWMA control chart. It is shown that the optimal control limit width and smoothing constant increase as the optimal cost value increases, and that both the optimal sample size and sampling interval always decrease as the magnitudes of mean and/or variance shifts increase. Copyright © 2012 John Wiley & Sons, Ltd.
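A minimal sketch of the MaxEWMA statistic itself, following the commonly published construction (standardized subgroup mean plus a chi-square probability integral transform of the subgroup variance); the economic–statistical design layer of the paper, and the smoothing constant below, are not taken from the paper:

```python
import numpy as np
from scipy import stats

def max_ewma(samples, mu0, sigma0, lam=0.1):
    """Compute MaxEWMA statistics M_t = max(|U_t|, |V_t|) for a sequence
    of subgroups (one row per subgroup)."""
    u = v = 0.0
    out = []
    for x in samples:
        n = len(x)
        # Standardized subgroup mean.
        z = (x.mean() - mu0) / (sigma0 / np.sqrt(n))
        # Map the subgroup variance to an approximately N(0,1) score via
        # the chi-square probability integral transform.
        y = stats.norm.ppf(
            stats.chi2.cdf((n - 1) * x.var(ddof=1) / sigma0**2, df=n - 1))
        u = lam * z + (1 - lam) * u            # EWMA of the mean score
        v = lam * y + (1 - lam) * v            # EWMA of the variance score
        out.append(max(abs(u), abs(v)))        # signal if above the UCL
    return np.array(out)

# Example: 30 in-control subgroups of size 5.
rng = np.random.default_rng(1)
print(max_ewma(rng.normal(0.0, 1.0, size=(30, 5)), mu0=0.0, sigma0=1.0)[:5])
```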

3.
The nominal-the-best (N-type) loss function is derived from a Taylor expansion, and results become more accurate as more expansion terms are retained. The N-type loss function neglects terms with powers higher than two, which inevitably introduces a deviation between the calculated result and the true value. In this paper, the Taylor expansion is retained to third order, extending the quality loss function to three terms. The quality loss coefficients of each term are determined, and an asymmetric piecewise cubic quality loss function is established. The deviation between the cubic and quadratic functions is evaluated. A formula for calculating the hidden quality cost of a product is derived by choosing an appropriate density distribution function and using process capability. Two cases are used to analyse and discuss the quality loss and hidden quality cost of a product under the cubic and quadratic quality loss functions. This paper provides a more accurate approach for the study of product quality management.
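The shape of such a third-order expansion, written schematically (the paper's coefficient derivation is not reproduced here): since $L(T)=0$ and $L'(T)=0$ at the target $T$, retaining the cubic term and allowing side-dependent coefficients gives the asymmetric piecewise form

\[
L(y) \approx
\begin{cases}
k_2^{-}\,(y-T)^2 + k_3^{-}\,(y-T)^3, & y < T,\\
k_2^{+}\,(y-T)^2 + k_3^{+}\,(y-T)^3, & y \ge T.
\end{cases}
\]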

4.
We consider double sampling inspection plans for a given lot size, cost function, and prior distribution of the number of defectives. The restriction we impose on the sampling plan is that the size and acceptance number of the second sample do not depend on the outcome of the first sample. Hence, a double sampling plan is determined by five parameters: two sample sizes, two acceptance numbers, and one rejection number. The optimal plan is defined as the plan that minimizes the expected cost. Our results consist of finding optimal relations between the sample sizes, acceptance and rejection numbers, and the lot size as the lot size tends to infinity. An asymptotic expansion of the regret function shows that, for the optimal plan, the regret increases as the lot size raised to the power 2/5 times the logarithm of the lot size raised to the power 1/5.
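In symbols, the quoted asymptotic growth of the regret of the optimal plan, with $N$ the lot size and $c$ a constant depending on the cost function and prior, is

\[
R(N) \sim c\, N^{2/5} (\log N)^{1/5}, \qquad N \to \infty.
\]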

5.
Approximate models for the optimum economic design of double sampling plans for attributes are presented. Total cost is assumed to consist of sampling costs, the cost of accepting defective items, and the cost of rejecting good items. This cost is minimized relative to a specified prior distribution of process fraction defective. Models are developed for situations in which rejected lots are either scrapped or subjected to 100 percent inspection with defective items removed. Other variations of the basic model include the incorporation of restrictions among the sampling plan parameters, which may be helpful in improving administrative efficiency, and the use of curtailment on the second sample. Numerical examples for the various models are presented. Model sensitivity to the cost coefficients and to potential misspecification of the parameters of the prior distribution is investigated.
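A minimal sketch of the kind of expected-cost evaluation such a model requires, for the scrapping variant. All cost coefficients, the Beta prior, and the plan parameters below are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy import stats

def expected_cost(n1, c1, r1, n2, c2, lot_size=1000,
                  cost_sample=1.0, cost_accept_def=10.0, cost_reject_good=0.5,
                  prior=stats.beta(1, 19)):
    """Expected total cost of the double sampling plan (n1, c1, r1, n2, c2),
    averaged over a Beta prior on the process fraction defective p.
    Rejected lots are assumed scrapped (one of the paper's two variants)."""
    nodes, weights = np.polynomial.legendre.leggauss(64)
    ps = 0.5 * (nodes + 1.0)        # map Gauss-Legendre nodes to (0, 1)
    ws = 0.5 * weights
    total = 0.0
    for p, wt in zip(ps, ws):
        d = np.arange(n1 + 1)
        pmf1 = stats.binom.pmf(d, n1, p)
        p_accept = pmf1[:c1 + 1].sum()           # accept on the first sample
        undecided = d[(d > c1) & (d < r1)]       # take the second sample
        p_second = pmf1[undecided].sum()
        for d1 in undecided:                     # accept if d1 + d2 <= c2
            p_accept += pmf1[d1] * stats.binom.cdf(c2 - d1, n2, p)
        e_inspected = n1 + p_second * n2         # expected items inspected
        cost = (cost_sample * e_inspected
                + p_accept * cost_accept_def * p * lot_size            # accepted defectives
                + (1 - p_accept) * cost_reject_good * (1 - p) * lot_size)  # scrapped good items
        total += wt * prior.pdf(p) * cost
    return total

print(expected_cost(n1=50, c1=1, r1=4, n2=100, c2=4))
```

Minimizing this quantity over the five plan parameters gives the economic design; the paper's approximate models replace the brute-force evaluation above.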

6.
A Bayesian life test sampling plan is considered for products with a Weibull lifetime distribution that are sold under a warranty policy. It is assumed that the shape parameter of the distribution is a known constant, while the scale parameter is a random variable varying from lot to lot according to a known prior distribution. A cost model is constructed that involves three cost components: test cost, accept cost, and reject cost. A method of finding optimal sampling plans that minimize the expected average cost per lot is presented, and sensitivity analyses for the parameters of the lifetime and prior distributions are performed.
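Schematically, with notation assumed here rather than taken from the paper, the plan (sample size $n$, test time $t$, acceptance number $c$) minimizes the prior-averaged cost over the Weibull scale parameter $\lambda$ (shape $\beta$ known):

\[
\min_{n,\,t,\,c}\ \int_0^\infty \Bigl[ C_{\mathrm{test}}(n,t) + P_a(\lambda)\,C_{\mathrm{acc}}(\lambda) + \bigl(1-P_a(\lambda)\bigr)\,C_{\mathrm{rej}}(\lambda) \Bigr]\,\pi(\lambda)\,\mathrm{d}\lambda,
\]

where $P_a(\lambda)$ is the probability that the lot passes the test and $\pi$ is the known prior.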

7.
Assume that lot quality characteristics obey a normal distribution. Kanagawa et al. have proposed the (x̄, s) control chart, which enables the user to monitor changes in the mean and variance simultaneously. Further, the results of Watakabe and Arizono enable the user to evaluate the out-of-control performance of the (x̄, s) control chart. On the other hand, Taguchi has presented an approach to quality improvement in which reduction of deviation from the target value is the guiding principle. In this approach, the loss is expressed as a quadratic form in the difference between the measured value x of a product characteristic X and the target value T of the product quality characteristic. Process quality can then be evaluated using Taguchi's loss criterion. We consider here the economical operation of the (x̄, s) control chart based on an expected total operation cost function composed of the sampling cost and the loss due to deviation in process quality. First, we consider the economical operation of the (x̄, s) control chart when the loss is known. We then discuss the economical operation of the chart under an arbitrary loss instead of a known one. The relationship between the two economical operations proposed here corresponds to the relationship between lot tolerance per cent defective plans under a fixed fraction defective and average outgoing quality limit plans under any fraction defective in rectifying inspection plans.

8.
As short-run manufacturing becomes more prevalent, run-to-run components of variation, such as that contributed by set-up error, have greater potential to crucially affect product quality. While efforts should be made to eliminate such between-run variance contributors, some will always be present. Here, we assume there is one such factor, which we envisage as set-up error, that remains fixed throughout a run unless the process is adjusted. We develop a single-adjustment strategy based on taking a sample of fixed size from the process. If a significant set-up error is indicated, a single compensatory adjustment, equal to the predicted process offset, is executed. The actual procedure depends on process parameters, including adjustment error, run size, and adjustment and sampling costs. The procedure specifies not only the adjustment amount, if any, but also the time during the run at which to adjust. The procedure is optimal among all fixed-sample-size procedures for the chosen cost function. Besides incorporating adjustment and sampling costs, the cost function is based on a 0-1 loss criterion, where the loss is 0 (1) units per item produced if the process offset caused by set-up error is less than or equal to (greater than) a specified amount. Tables are provided, with examples, illustrating the procedure for representative values of process parameters, costs, and run sizes.
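A minimal sketch of the one-shot compensation idea, assuming a simple significance threshold in place of the paper's cost-optimal rule (all parameter values illustrative):

```python
import numpy as np

def single_adjustment(x, target, sigma, crit=2.0):
    """One-shot compensation for set-up error: estimate the fixed offset
    from a sample taken during the run and, if it looks significant,
    return a single compensatory adjustment equal to the predicted offset.
    The threshold `crit` stands in for the paper's optimal rule, which is
    derived from the 0-1 loss and the adjustment/sampling costs."""
    n = len(x)
    offset_hat = np.mean(x) - target
    if abs(offset_hat) > crit * sigma / np.sqrt(n):
        return -offset_hat          # adjust the process by this amount
    return 0.0                      # leave the set-up alone

rng = np.random.default_rng(7)
sample = rng.normal(10.4, 0.2, size=9)   # run started 0.4 above a target of 10
print(single_adjustment(sample, target=10.0, sigma=0.2))
```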

9.
Clearing functions, which describe the expected output of a production resource as a function of its expected workload, have yielded promising production planning models. However, there is as yet no fully satisfactory approach to estimating clearing functions from data. We identify several issues that arise in estimating clearing functions, such as sampling issues, systematic underestimation, and model misspecification. We address the model misspecification problem by introducing a generalised functional form, and the sampling issues via iterative refinement (IR) of initial parameter estimates. The IR approach yields improved performance for planning models at higher levels of utilisation, and the generalised functional form results in significantly better production plans both alone and when combined with IR. The IR approach also obtains solutions of similar quality to the much more computationally demanding simulation optimisation approaches used in previous work.
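For context, a sketch of fitting a clearing function to workload/output observations. The saturating form below is the classic one from the clearing-function literature, not the paper's generalised form, and the data points are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def clearing_fn(w, capacity, k):
    """Classic saturating clearing function: expected output rises with
    workload w but levels off at `capacity`; k controls the curvature."""
    return capacity * w / (w + k)

# Fit to (workload, output) observations from a simulation or the shop
# floor (values below are illustrative).
w_obs = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
out_obs = np.array([1.6, 3.3, 5.0, 6.3, 7.1, 7.6])
params, _ = curve_fit(clearing_fn, w_obs, out_obs, p0=[8.0, 5.0])
print(params)   # estimated capacity and curvature
```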

10.
This paper provides general mathematical models for determining product tolerances that minimize the combined manufacturing cost and quality loss. The models represent quality cost with a quadratic loss function and manufacturing cost with geometrically decaying functions. They are also formulated with multiple variables representing the set of characteristics in a part. Applications of these models include minimizing total cost through effective tolerance allocation in product design.
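A minimal sketch of this cost/loss trade-off for a part with two characteristics; the exponential-decay manufacturing cost and quadratic loss terms follow the structure named in the abstract, while all coefficient values and the solver choice are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Manufacturing cost decays geometrically in the tolerance t_i, while the
# quadratic (Taguchi-style) quality loss grows with t_i**2.
a = np.array([5.0, 3.0])   # manufacturing cost scale per characteristic
b = np.array([2.0, 1.5])   # cost decay rate
k = np.array([4.0, 6.0])   # quality loss coefficient

def total_cost(t):
    return np.sum(a * np.exp(-b * t) + k * t**2)

res = minimize(total_cost, x0=[0.5, 0.5], bounds=[(1e-3, 2.0)] * 2)
print(res.x)   # cost-minimizing tolerance allocation
```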

11.
Burn-in is a quality control process used to minimize the warranty cost of a product by screening out defective units through operation for a period of time before sale. Two decision criteria used to determine the optimal burn-in time are the maximization of the reliability of the delivered product and the minimization of the total cost, which comprises the cost of the burn-in process and the cost of warranty claims. Because of uncertainty regarding the underlying lifetime distribution of the product, both the product reliability and the total cost are random variables. In this paper, the uncertainty in reliability and cost is quantified through Bayesian analysis. The joint distribution of reliability and cost is inferred from the uncertainty distribution of the parameters of the product lifetime distribution. To incorporate the uncertainty in reliability and cost, as well as the tradeoff between them, into the selection of the optimal burn-in time, a joint utility function of reliability and cost is constructed using their joint distribution. The optimal burn-in time is selected as the time that maximizes this joint utility function. Copyright © 2010 John Wiley & Sons, Ltd.

12.
Robust product and process design is an important technique for achieving high quality at low cost. It involves making the product's function much less sensitive to various sources of noise, such as manufacturing variation, environmental variation, and deterioration. This is an optimization problem involving minimization of the mean square loss resulting from the deviation of the product's function from its target. Here we show that the optimization can be carried out in two steps: first maximize a quantity called the signal-to-noise ratio (S/N), then bring the performance on target using special adjustment parameters. The two-step procedure works for a wide variety of product functions and makes the optimization process more efficient and practical than direct minimization of the quadratic loss function.
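The standard nominal-the-best form of the S/N ratio, with the reasoning behind the two steps (a textbook sketch, assuming the adjustment factor rescales the mean while leaving the coefficient of variation $\sigma/\mu$ unchanged):

\[
\eta = 10 \log_{10} \frac{\mu^2}{\sigma^2}.
\]

After adjusting the mean to the target $T$, the expected quadratic loss $k[\sigma^2 + (\mu - T)^2]$ reduces to $k\,T^2\,(\sigma/\mu)^2 = k\,T^2\,10^{-\eta/10}$, so maximizing $\eta$ first and adjusting afterwards minimizes the final loss.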

13.
A general model for multiattribute Bayesian acceptance sampling plans is developed which incorporates the multiattribute utility function of a decision maker in its design. The model accommodates various dispositions of rejected lots such as screening and scrapping. The disposition of rejected lots is shown to have a substantial impact on the solution approach used and on the ease of incorporation of multiattribute utility functions in terms of their measurement complexity, functional form, and parameter estimation. For example, if all attributes are screenable upon rejection, and the prior distributions of lot quality on each attribute are independent, then an optimal multiattribute sampling plan can be obtained simply by solving for an optimal single sampling plan on each attribute independently. A discrete search algorithm, based on pattern search, is also developed and shown to be very effective in obtaining an optimal multiattribute inspection plan when such separability cannot be accomplished.

14.
In this paper, we propose degradation test sampling plans (DTSPs) for determining the acceptability of a product under a Wiener process model. A test statistic and an acceptance criterion based on the Wiener process parameter estimates are proposed. The design of the degradation test is formulated, subject to a test cost constraint, to minimize the asymptotic variance of the proposed test statistic. The decision variables of the plan are the sample size, the measurement frequency, and the total test time. The asymptotic variance of the test statistic and approximate functional forms of the optimal solutions are derived. A search algorithm for finding the optimal DTSPs is also presented as a flow chart. In addition, we assess the minimum test cost required for the test procedure to satisfy given producer's and consumer's risk requirements. When the given test budget is not large enough, we suggest methods for finding appropriate solutions. Finally, a numerical example illustrates the proposed methodology. Optimum DTSPs are obtained and tabulated for some combinations of commonly used producer and consumer risk requirements. A sensitivity analysis is also conducted to investigate the sensitivity of the obtained DTSPs to the cost parameters.
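For reference, the standard Wiener degradation model that underlies such plans (the paper's specific test statistic is not reproduced here): each unit's degradation path is

\[
W(t) = \mu t + \sigma B(t),
\]

where $B$ is standard Brownian motion, so increments over measurement intervals of length $\Delta t$ are independent $N(\mu\,\Delta t,\ \sigma^2\,\Delta t)$ variables. With $n$ units observed over a total test time $\tau$, the drift estimate $\hat\mu = \sum_{i=1}^{n} W_i(\tau)/(n\tau)$ has variance $\sigma^2/(n\tau)$, which is why the sample size, measurement frequency, and total test time jointly control the precision of the acceptance decision.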

15.
Probabilistic approaches to flaw detection, classification, or characterization often assume prior knowledge of the flaw distribution. It is implicit that there is a scattering amplitude distribution associated with the flaw distribution. In a number of previously published probabilistic analyses, it has been assumed that scattering amplitude is an uncorrelated, Gaussian random variable with zero mean and known variance. In the work reported here, these assumptions are evaluated for the case of a lognormal distribution of spherical flaws. The correlation, mean, variance, and nature of the scattering amplitude distribution are considered as a function of frequency and as a function of the breadth of the assumed flaw distribution. It is shown for the assumed flaw distributions that scattering amplitude is not uncorrelated and does not have zero mean. It is shown that errors in estimating the flaw distribution variance affect both the scattering amplitude mean and variance. Using both analytical and numerical procedures, the scattering amplitude distribution is shown to be lognormal at long wavelength for a lognormal distribution of spherical scatterers. At high frequency, the distribution is shown to have a bimodal character.

16.
This paper introduces a mathematical model for tolerance chart balancing during machining process planning. The criteria considered are based on the combined effects of manufacturing cost and quality loss, under the constraints of process capability limits, design functionality restrictions, and product quality requirements. Manufacturing cost is expressed as geometrically decreasing functions of the tolerances to be assigned, and process variability is penalized by quadratic loss functions of the deviation between the part measurement and the target value. Applying this model minimizes the total cost of manufacturing activities and quality issues in machining process planning, particularly in the early stages of planning.

17.
Technometrics, 2013, 55(3): 436-444
Goodness-of-fit tests are proposed for the assumption of normality of random errors in experimental designs where the variance of the response may vary with the levels of the covariates. The exact distribution of the standardized residuals is used to make the probability integral transform for use in tests based on the empirical distribution function. A different mean and variance are estimated for each level of the covariate; corresponding large-sample theory is provided. The proposed tests are robust to possible misspecification of the model and permit data collected from several similar experiments to be pooled to improve the power of the test.

18.
Optimal asymmetric tolerance design
An asymmetric tolerance design occurs when deviation (from the ideal target) of a quality characteristic in one direction is more harmful than deviation in the opposite direction. Asymmetric tolerances are common in many manufacturing processes. Traditionally, the designer of a manufactured component would either choose the smaller tolerance as the tolerance for both sides of the ideal target or set the process mean at the middle of the tolerances. Both methods fail to minimize the expected value of Taguchi's societal quality loss when the quality loss function is asymmetric. Linear and quadratic quality loss functions are considered to determine the optimal value of the process mean, i.e. the value that minimizes the expected quality loss. A quality loss model involving a poka-yoke defect prevention procedure is also investigated.
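A minimal sketch of the optimal-mean calculation for the asymmetric quadratic case; the loss coefficients, target, and sigma below are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy import integrate, stats
from scipy.optimize import minimize_scalar

def expected_asym_loss(mu, target=10.0, sigma=1.0, k_low=4.0, k_high=1.0):
    """Expected asymmetric quadratic loss for X ~ N(mu, sigma^2):
    deviations below the target cost k_low*(x-T)^2, deviations above
    cost k_high*(x-T)^2."""
    def integrand(x):
        k = k_low if x < target else k_high
        return k * (x - target) ** 2 * stats.norm.pdf(x, mu, sigma)
    val, _ = integrate.quad(integrand, mu - 8 * sigma, mu + 8 * sigma)
    return val

res = minimize_scalar(expected_asym_loss, bounds=(8.0, 12.0), method="bounded")
# With k_low > k_high, the optimal mean sits above the target, shifted
# toward the cheaper side of the loss function.
print(res.x)
```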

19.
Technometrics, 2013, 55(3): 242-249
In industry, one sometimes compares a sample mean and minimum, or a mean and maximum, to reference values to determine whether a lot should be accepted. Particularly prominent examples of such procedures are “Category B” sampling plans for checking the net contents of packaged goods. Because the exact joint distribution of an extremum and the mean of a sample is usually complicated, establishing these reference values using statistical considerations typically involves crude approximations or simulation, even under the assumption of normality. The purpose of this article is to use the saddlepoint method to develop a fairly simple and very accurate approximation to the joint cumulative distribution function (cdf) of the mean and an extremum of a normal sample. This approximation can be used to establish statistically based acceptance criteria or to evaluate the performance of sampling plans based on criteria derived in other ways. These uses are illustrated with examples.
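To make the target quantity concrete, here is the brute-force Monte Carlo evaluation that the article's saddlepoint approximation replaces (all numbers illustrative):

```python
import numpy as np

# Joint acceptance probability for a "Category B"-style rule: accept the
# lot if both the sample mean and the sample minimum clear their
# reference values. Pure simulation, shown only to define the quantity
# the saddlepoint method approximates analytically.
rng = np.random.default_rng(0)
x = rng.normal(loc=100.0, scale=2.0, size=(200_000, 12))  # lots of 12 packages
accept = (x.mean(axis=1) >= 100.0) & (x.min(axis=1) >= 96.0)
print(accept.mean())   # estimated acceptance probability
```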

20.
The traditional quality evaluation method holds that when the product quality characteristic is within the specification limits, no loss is produced. Taguchi proposed that even if the characteristic is within the range of users' demands, fluctuation of the quality characteristic still causes loss to users and society, and he proposed a quadratic quality loss function to describe this loss. The function is based on a Taylor expansion and neglects terms with powers higher than two, which causes a certain deviation between the calculated and true values. Moreover, the tolerance and loss in the quadratic loss function must satisfy a specific relationship, which limits its use. In this paper, the Taylor expansion is retained to third order. The quality loss coefficients are discussed and analyzed, and the cubic quality loss function is established. In addition, a method of calculating the hidden quality cost using the cubic loss function is given. The hidden quality cost is affected by the quality loss coefficients and is therefore a range rather than a single value. The cubic quality loss function addresses cases where the quadratic loss function does not apply, thus widening the scope of application of quality loss functions.
