Similar Documents
20 similar records were retrieved.
1.
In the design of tolerance allocation, the cost–tolerance function is usually employed as the objective function to be minimized. The traditional cost–tolerance functions in the literature are concerned with only one characteristic. In this paper we obtain a bivariate cost–tolerance function to describe the relationship between the cost and the tolerances of two characteristics (i.e. the thickness and inner diameter) of a lock wheel. The bivariate loss function is then combined with the bivariate cost–tolerance function to determine the optimal tolerances for the thickness and inner diameter of a lock wheel, so that the user's potential loss/cost can be evaluated. When the quality loss is considered, the tolerances of both characteristics become tighter. To include the effect of product degradation, the present worth of expected bivariate quality loss is then introduced as a quality performance measure. By assuming linear drifts in both the thickness and inner diameter of the lock wheels, the model with the present worth of quality loss leads to tighter tolerances for both characteristics. In addition, a longer planning horizon (or a longer useful life of the product) leads to tighter tolerances, and a larger user's discount rate results in looser tolerances for both characteristics. Copyright © 2000 John Wiley & Sons, Ltd.
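As a point of reference (the abstract does not give the exact functional forms), a common schematic for this kind of analysis combines a bivariate quadratic loss with discounting of the expected loss over the planning horizon; the coefficients k_1, k_2, k_{12}, the drift rates d_j, the horizon H and the discount rate r below are illustrative symbols, not values from the paper.

```latex
% Bivariate quadratic (Taguchi-type) loss for thickness y_1 and inner diameter y_2
L(y_1, y_2) = k_1 (y_1 - T_1)^2 + k_2 (y_2 - T_2)^2 + k_{12}(y_1 - T_1)(y_2 - T_2)

% Present worth of the expected loss under linear drifts y_j(t) = y_j(0) + d_j t,
% planning horizon H and continuous discount rate r
\mathrm{PW} = \int_0^H \mathbb{E}\!\left[ L\bigl(y_1(t), y_2(t)\bigr) \right] e^{-rt}\, dt
```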

2.
Assembly is the integrative process of joining components to make a completed product; it brings together the upstream processes of design, engineering and manufacturing. The functional performance of an assembled product and its manufacturing cost are directly affected by the individual component tolerances. However, the selective assembly method can achieve a tight assembly tolerance from components manufactured with wider tolerances. The components are segregated into selective groups (bins) and mated according to a purposeful strategy rather than at random, so that small clearances are obtained at the assembly level at lower manufacturing cost. In this paper, the effect of mean shift in the manufacture of the mating components and the selection of the number of groups for selective assembly are analysed. A new model is proposed, based on these effects, to obtain the minimum assembly clearance within the specification range. However, according to Taguchi's concept, manufacturing a product within the specification may not be sufficient; rather, it must be manufactured to the target dimension. Taguchi's loss function is therefore applied to the selective assembly method to evaluate the deviation from the mean. Subsequently, a genetic algorithm is used to obtain the best combination of selective groups with minimum clearance and least loss value within the clearance specification. The effect of the ratio between the dimensional distributions of the mating parts' quality characteristics is also analysed.
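A minimal sketch of the selective-assembly bookkeeping described above (not the paper's model or its genetic algorithm): parts are binned into equal-width selective groups, a candidate group pairing is scored by its mean clearance and a Taguchi-type quadratic loss, and a search procedure such as a GA would then look for the best pairing. All distributions, group counts, targets and loss coefficients are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative mating-part dimensions (e.g., shaft and hole) with a mean shift
shaft = rng.normal(10.00, 0.01, 5000)   # mm
hole = rng.normal(10.05, 0.02, 5000)    # mm, wider spread and shifted mean

def make_groups(x, n_groups, lo, hi):
    """Segregate parts into equal-width selective groups (bins) over [lo, hi]."""
    edges = np.linspace(lo, hi, n_groups + 1)
    return [x[(x >= edges[i]) & (x < edges[i + 1])] for i in range(n_groups)]

def score_pairing(shaft_groups, hole_groups, pairing, target=0.05, k=1e4):
    """Mean clearance and Taguchi-type loss k*E[(c - target)^2] for a group pairing."""
    clear, loss, n = 0.0, 0.0, 0
    for i, j in enumerate(pairing):      # shaft group i is mated with hole group j
        m = min(len(shaft_groups[i]), len(hole_groups[j]))
        if m == 0:
            continue
        c = hole_groups[j][:m] - shaft_groups[i][:m]
        clear += c.sum()
        loss += k * ((c - target) ** 2).sum()
        n += m
    return clear / n, loss / n

n_groups = 4
sg = make_groups(shaft, n_groups, 9.97, 10.03)
hg = make_groups(hole, n_groups, 9.99, 10.11)
# One candidate pairing; a GA would search over such group combinations.
print(score_pairing(sg, hg, pairing=[0, 1, 2, 3]))
```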

3.
An important issue for design engineers is how to assign tolerance limits economically. Most work related to tolerance design concerns nominal-is-best (N-type) quality characteristics and is restricted by a normality assumption. However, smaller-is-better (S-type) and larger-is-better (L-type) quality characteristics are common in real applications. The practical distributions for S-type or L-type data are typically skewed, so the normality assumption is violated, and determining tolerances for non-normal data with methodologies based on a normality assumption is not appropriate. This study considers the case in which measurements are recorded without their algebraic signs; the folded normal distribution fits these absolute data well. Based on the statistical properties of the folded normal distribution, this study develops an economic model encompassing quality loss, manufacturing costs and rework costs to determine tolerances. A procedure based on the Newton–Raphson method is used to minimise the total cost and obtain the optimal solution. Finally, a welding machine experiment is carried out to demonstrate the applicability of the proposed model.
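For reference, if the underlying measurement X follows N(μ, σ²) and only its magnitude is recorded, then Y = |X| follows the folded normal distribution; its density and mean are shown below, and these are the quantities an economic model of quality loss and rework cost would be built on.

```latex
f_Y(y) = \frac{1}{\sigma\sqrt{2\pi}}
         \left[ e^{-(y-\mu)^2/(2\sigma^2)} + e^{-(y+\mu)^2/(2\sigma^2)} \right],
         \qquad y \ge 0,

\mathbb{E}[Y] = \sigma\sqrt{\tfrac{2}{\pi}}\, e^{-\mu^2/(2\sigma^2)}
              + \mu\left[ 1 - 2\,\Phi\!\left( -\tfrac{\mu}{\sigma} \right) \right].
```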

4.
Taguchi's robust design provides an important paradigm for producing robust products. There are many successful applications of this paradigm, but few have dealt with reliability, i.e. the case in which the quality characteristic is lifetime. In this paper, an actual experiment is presented that was performed to achieve robust reliability of light emitting diodes. Three major factors chosen from many potentially important manufacturing factors and one noise factor were investigated. For light emitting diodes, failure occurs when their luminosity, or light intensity, falls below a specified level. An interesting feature of this experiment is the periodic monitoring of the luminosity. The paper shows how the luminosity's degradation over time provides a practical way to achieve robust reliability of light emitting diodes, which are already highly reliable.
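Purely as an illustration of how a degradation path converts into a lifetime (the abstract does not state the degradation model actually used), if luminosity decayed exponentially from an initial level L_0 toward a failure threshold L_f, the pseudo failure time would be obtained by solving for the crossing point:

```latex
L(t) = L_0\, e^{-\alpha t}
\quad\Longrightarrow\quad
t_f = \frac{1}{\alpha}\,\ln\!\frac{L_0}{L_f}.
```

In the experiment, the periodic luminosity readings serve to estimate such a degradation path for each run, even when no unit actually fails during the test.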

5.
Most of the published literature on robust design is concerned with a single response. In reality, however, common industrial problems usually involve several quality characteristics, which are often correlated. Traditional approaches to multidimensional quality do not offer much information on how much better or worse a process is when finding optimal settings. Köksoy and Fan [Engineering Optimization 44 (8): 935–945] pointed out that the upside-down normal loss function provides a more reasonable risk assessment of the losses of being off-target in product engineering research; however, they consider only the single-response case. This article generalizes their idea to more than one response under possible correlations and co-movement effects of the responses on the process loss. Response surface methodology is adapted to estimate the expected multivariate upside-down normal loss function of a multidimensional system and to find the optimal control factor settings of a given problem. The procedure and its merits are illustrated through an example.
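For reference, the univariate upside-down (inverted) normal loss function is bounded above by a maximum loss K, in contrast to the unbounded quadratic loss; a natural multivariate generalization for correlated responses uses a shape matrix Γ. The symbols below are generic, not necessarily the article's notation.

```latex
L(y) = K\left[ 1 - \exp\!\left( -\frac{(y - T)^2}{2\gamma^2} \right) \right],
\qquad
L(\mathbf{y}) = K\left[ 1 - \exp\!\left( -\tfrac{1}{2}
      (\mathbf{y} - \mathbf{T})^{\top} \Gamma^{-1} (\mathbf{y} - \mathbf{T}) \right) \right].
```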

6.
7.
Robust design of discrete tolerances based on a simulation experiment method
Building on an analysis of the current state of research in computer-aided tolerance design, a robust design method for discrete tolerances based on a simulation experiment method is proposed. In this method, machining cost and quality loss are treated as two independent objective functions, from which a discrete tolerance optimization design model is established. Robust tolerance design is realized by combining the design-of-experiments approach with the CP method. Finally, an example demonstrates the feasibility of the method.

8.
The maximum exponentially weighted moving average (MaxEWMA) control chart effectively combines two EWMA charts into one chart and monitors both increases and decreases in the mean and/or variability. In this paper, we develop the economic–statistical design of the MaxEWMA control chart, in which Taguchi's quadratic loss function is incorporated into Duncan's economic model. Numerical simulations are executed to minimize the expected total cost model and determine the optimal decision variables, including the sample size, sampling interval, control limit width, and smoothing constant of the MaxEWMA control chart. It is shown that the optimal control limit width and smoothing constant increase as the optimal cost value increases, and that both the optimal sample size and sampling interval always decrease as the magnitudes of the mean and/or variance shifts increase. Copyright © 2012 John Wiley & Sons, Ltd.
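For orientation, the MaxEWMA statistic is commonly built from standardized sample-mean and sample-variance statistics and compared against a single upper control limit; here n is the sample size, λ the smoothing constant, H(·; n−1) the chi-square cdf with n−1 degrees of freedom and Φ⁻¹ the standard normal quantile. The economic–statistical design described above chooses n, the sampling interval, the limit width and λ to minimize the expected total cost.

```latex
Z_i = \frac{\bar{X}_i - \mu_0}{\sigma_0/\sqrt{n}},
\qquad
Y_i = \Phi^{-1}\!\left\{ H\!\left( \frac{(n-1)S_i^2}{\sigma_0^2};\, n-1 \right) \right\},

U_i = \lambda Z_i + (1-\lambda) U_{i-1},
\qquad
V_i = \lambda Y_i + (1-\lambda) V_{i-1},
\qquad
M_i = \max\bigl( |U_i|,\, |V_i| \bigr) > \mathrm{UCL} \;\Rightarrow\; \text{signal}.
```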

9.
10.
A complex product often requires high machining precision. This is often achieved by a closed-loop machining process carried out in several stages, with measurements, fixture adjustments, and feedback or feed-forward control inserted after each stage. The Complex Product Machining Process (CPMP) Capability Index (CPMPCI) is affected by the control and adjustments in a CPMP, and hence the calculated CPMPCI can be used as a reference for selecting a proper CPMP. In this paper, we present a novel method for calculating the CPMPCI as a quality control and improvement technology in a CPMP. A linear model is proposed to describe the variation propagation effect through all stages of a CPMP, and an observation model with a pre-specified control and adjustment strategy is employed to calculate the process mean and variation during a CPMP. Finally, through application of Taguchi's quality loss function, the CPMPCI calculation method is derived. The feasibility and effectiveness of this method are validated by a case study on a three-stage CPMP.
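The abstract does not spell out the linear model; a typical stream-of-variation form for a multistage machining process is a state-space model in which the deviation state is propagated stage to stage and updated by measurements and adjustments. The symbols below are generic, not the paper's.

```latex
\mathbf{x}_k = \mathbf{A}_k \mathbf{x}_{k-1} + \mathbf{B}_k \mathbf{u}_k + \mathbf{w}_k,
\qquad
\mathbf{y}_k = \mathbf{C}_k \mathbf{x}_k + \mathbf{v}_k,
```

where x_k collects the feature deviations after stage k, u_k the fixture adjustments or control inputs, y_k the measurements that feed the feedback or feed-forward control, and w_k, v_k the process and measurement noise.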

11.
12.
Mixing errors in the manufacturing process of a mixture may cause sizeable variation in the performance of the product, leading to the need for tolerance design. Although a variety of procedures have been proposed for optimal tolerance design based on quality loss and manufacturing costs, no tolerance design methods are available for the case where mixing errors exist in the manufacturing process of a mixture. In this article, we propose a new tolerance design method for the case where mixing errors are involved in the mass manufacturing process of a secondary rechargeable battery. Using an approximation method, we derive a quality loss function reflecting the effects of mixing errors on the product performances. Statistical design of mixture experiments is applied to build empirical models of the performances as functions of the component proportions in the corresponding quality loss function. A real-life case study on the tolerance design of a secondary battery is provided to illustrate the proposed method. The results show the efficiency of the proposed method in designing tolerances that minimize the quality loss and manufacturing costs. Copyright © 2008 John Wiley & Sons, Ltd.

13.
In this globally competitive business environment, design engineers constantly strive to establish new and effective tools and techniques to ensure robust and reliable product design. Robust design (RD) and reliability-based design approaches have shown the potential to deal with variability in the life cycle of a product. This paper explores the possibility of combining both approaches into a single model and proposes a hybrid quality-loss-function-based multi-objective optimization model. The model is unique because it uses a hybrid form of quality-loss-based objective function, defined in terms of desirable as well as undesirable deviations, to obtain efficient design points with minimum quality loss. The proposed approach attempts to optimize the product design by addressing quality loss, variability, and life-cycle issues simultaneously, combining the reliability-based and RD approaches into a single model with various customer aspirations. The application of the approach is demonstrated using a leaf spring design example. Copyright © 2009 John Wiley & Sons, Ltd.

14.
System design, parameter design and tolerance design are the three stages of the design process as presented by G. Taguchi. System design identifies the basic elements of the design to provide new or improved products to customers. Parameter design determines the optimal parameter settings that minimize variation from the target performance of the product. Tolerance design finally identifies the components of the design that are sensitive in terms of affecting product quality, and establishes tolerance limits that give the required level of variation in the design. Most studies have focused primarily on optimizing the parameter design or tolerance design for multiple static quality characteristics. In this paper, a mathematical formulation is derived from Taguchi's quadratic quality loss function to minimize the expected total cost for the parameter design of multiple dynamic quality characteristics. When the optimal parameter design is not sufficient to reduce the output variation, a first-order Taylor series expansion is then used to analyse the variation contributed by the noise factors and optimize the tolerance design. The paper concludes with an example demonstrating this approach.
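For reference, for a dynamic (signal–response) characteristic y with signal level M and ideal slope β, Taguchi's quadratic loss and the first-order Taylor variance transmission typically used in the tolerance stage take the forms below; k and the noise factors x_i are generic symbols, not the paper's notation.

```latex
L = k\,\bigl( y - \beta M \bigr)^2,
\qquad
\sigma_y^2 \;\approx\; \sum_{i} \left( \frac{\partial f}{\partial x_i} \right)^{\!2} \sigma_{x_i}^2,
\quad \text{where } y = f(x_1, \dots, x_m).
```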

15.
In an attempt to improve the effectiveness of statistical process control (SPC) procedures, a variety of adaptive schemes have been developed in recent decades. For attribute control charts, however, relatively few adaptive schemes have been proposed, and most of them only recently. The common characteristic of these schemes is that one or more chart parameters are allowed to vary adaptively during SPC operation according to the sampling history, typically the current point plotted on the chart. In this way, adaptive schemes are smarter than the related static ones, but they are also more complicated to implement. The purpose of the present work is to evaluate and compare the economic performance of the main adaptive schemes for an attribute control chart, in order to draw conclusions on their relative effectiveness. In particular, the analysis focuses on the c chart, which is used to monitor the total number of nonconformities in an inspection unit. A numerical comparative study based on a fractional factorial design is carried out to investigate the influence of several operating and cost parameters, and the related considerations are given. The results show that the chart parameter with the most impact on economic performance is the sampling interval. Therefore, in most cases, a c chart with adaptive sampling intervals is a better choice than other adaptive schemes, which are also more complicated to implement. Copyright © 2013 John Wiley & Sons, Ltd.
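For reference, the c chart plots the nonconformity count per inspection unit against Poisson-based limits; a variable-sampling-interval (VSI) version typically switches to a short interval when the current count falls in a warning region near the limits. The warning threshold w and the two intervals below are generic placeholders, not values studied in the paper.

```latex
\mathrm{UCL} = \bar{c} + 3\sqrt{\bar{c}},
\qquad
\mathrm{CL} = \bar{c},
\qquad
\mathrm{LCL} = \max\!\bigl( 0,\; \bar{c} - 3\sqrt{\bar{c}} \bigr),

h_{i+1} =
\begin{cases}
h_{\text{long}},  & |c_i - \bar{c}| \le w,\\
h_{\text{short}}, & w < |c_i - \bar{c}| \le 3\sqrt{\bar{c}}.
\end{cases}
```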

16.
17.
Cyano-functionalized spherical silica nanoparticles (SNPs) were synthesized via the Stöber method. A 2^(k−p) fractional factorial design of resolution IV (2^(k−p)_IV FFD) was used to efficiently prepare monodisperse, evenly distributed SNPs. Six factors were considered: the concentrations of tetraethylorthosilicate (TEOS), 3-cyanopropyltriethoxysilane (CPTS), water and ammonia, the reaction time (RT) and the stirring time (ST). Two responses were measured: particle size (PS, measured by SEM) and particle-size distribution (PSD, calculated as the standard deviation, ±SD). Control charts were used to assess the impact of linear effects and two-way interactions on both responses. Derringer's desirability function was used to consolidate these multiple responses into a single composite performance characteristic. Both screening and optimization were accompanied by ANOVA testing at a 95.0% confidence interval (CI). The optimal synthesis conditions were obtained from the composite desirability plots. Cyano-functionalized SNPs with an average PS of 474.04 ± 86.71 nm were produced. Raman spectroscopy and FTIR were used to confirm the functionalization, and thermogravimetric analysis (TGA) was used to evaluate the thermal behavior of the synthesized particles.
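Derringer's approach mentioned above combines the individual desirabilities d_i ∈ [0,1] of the m responses (here PS and PSD) into a composite desirability via a geometric mean; for a smaller-is-better response the individual desirability typically has the piecewise form shown, where L, U and the weight s are user-chosen.

```latex
D = \left( \prod_{i=1}^{m} d_i \right)^{1/m},
\qquad
d_i =
\begin{cases}
1, & \hat{y}_i \le L,\\
\left( \dfrac{U - \hat{y}_i}{U - L} \right)^{s}, & L < \hat{y}_i < U,\\
0, & \hat{y}_i \ge U.
\end{cases}
```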

18.
Sparsity features of simultaneous analysis and design (SAND) formulations are studied and exploited for the optimization of large-scale truss structures. Three formulations are described and implemented with an existing analysis code. SAND formulations have a large number of variables; however, the gradients of the functions and the Hessian of the Lagrangian are quite sparsely populated. This structure of the problem is therefore exploited, and an optimization algorithm with sparse matrix capability is used to solve large-scale problems. Existing analysis software is integrated with an optimizer based on a sparse sequential quadratic programming (SQP) algorithm to solve sample problems. The formulations and algorithms work quite well for the sample problems, and their performances are compared and discussed. For all the cases considered, the SAND formulations outperform the conventional formulation except in one case. Further research is suggested to fully study and utilize the sparse features of the alternative SAND formulations and to develop more efficient sparse solvers for large-scale and more complex applications. Copyright © 2006 John Wiley & Sons, Ltd.
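As a schematic of the SAND idea for trusses (not the paper's three specific formulations): the nodal displacements u are kept as unknowns alongside the sizing variables x, and equilibrium enters as sparse equality constraints rather than being solved at every function evaluation.

```latex
\min_{\mathbf{x},\,\mathbf{u}} \; W(\mathbf{x}) = \sum_{j} \rho\, A_j(\mathbf{x})\, \ell_j
\quad
\text{s.t.}\quad
\mathbf{K}(\mathbf{x})\,\mathbf{u} = \mathbf{F},
\quad
g_{\sigma}(\mathbf{x}, \mathbf{u}) \le 0,
\quad
g_{\delta}(\mathbf{u}) \le 0,
\quad
\mathbf{x}_{\min} \le \mathbf{x} \le \mathbf{x}_{\max},
```

where the Jacobian of K(x)u − F provides the sparse structure that a sparse SQP solver can exploit.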

19.
Infinitesimal deformations of a functionally graded thick elastic plate are analyzed using a meshless local Petrov–Galerkin (MLPG) method and a higher-order shear and normal deformable plate theory (HOSNDPT). Two types of radial basis functions (RBFs), i.e. multiquadrics and thin plate splines, are employed for constructing the trial solutions, while a fourth-order spline function is used as the weight/test function over a local subdomain. Effective material moduli of the plate, made of two isotropic constituents with volume fractions varying only in the thickness direction, are computed using the Mori–Tanaka homogenization technique. Computed results for a simply supported aluminum/ceramic plate are found to agree well with those obtained analytically. Results for a plate with two opposite edges free and the other two simply supported agree very well with those obtained by analyzing the three-dimensional deformations of the plate with the finite element method. The distributions of the deflection and stresses through the plate thickness are also presented for different boundary conditions. It is found that both types of basis functions give accurate values of the plate deflection, but the multiquadrics give better values of the stresses than the thin plate splines.
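The two radial basis functions named above have the standard forms shown below, where r = ‖x − x_j‖ is the distance to the centre x_j and c is the multiquadric shape parameter.

```latex
\varphi_{\mathrm{MQ}}(r) = \sqrt{r^2 + c^2},
\qquad
\varphi_{\mathrm{TPS}}(r) = r^2 \ln r .
```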

20.
Recent studies have shown that enhancing the common T2 control chart with variable sample sizes (VSS) and variable sampling intervals (VSI) under a double warning line scheme (DWL) yields improvements in detection time over either pure VSI or pure VSS schemes for almost all shifts in the process mean. In this paper, we look at this problem from an economic perspective, certainly at least as important a criterion as shift detection time if one considers what occurs in industry today. Our method is to first construct a cost model to find the economic statistical design (ESD) of the DWL T2 control chart using the general model of Lorenzen and Vance (Technometrics 1986; 28:3–11). Subsequently, we find the values of the chart parameters that minimize the cost model using a genetic algorithm. Cost comparisons of fixed-ratio sampling, VSI, VSS, VSIVSS with DWL, and multivariate exponentially weighted moving average (MEWMA) charts are made, which indicate the economic efficacy of using either VSIVSS with DWL or MEWMA charts in practice if cost minimization is of interest to the control chart user. Copyright © 2010 John Wiley & Sons, Ltd.
