Similar Documents
20 similar documents found (search time: 343 ms)
1.
Mixing errors in the manufacturing process of a mixture may cause sizeable variation in the performance of the product, creating the need for tolerance design. Although a variety of procedures have been proposed for optimal tolerance design based on quality loss and manufacturing costs, no tolerance design methods are available for the case where mixing errors exist in the manufacturing process of a mixture. In this article, we propose a new tolerance design method for the case where mixing errors are involved in the mass manufacturing process of a secondary rechargeable battery. Using an approximation method, we derive a quality loss function reflecting the effects of mixing errors on product performance. Statistical design of mixture experiments is applied to build empirical models of the performances, as functions of the component proportions, for use in the corresponding quality loss function. A real-life case study on the tolerance design of a secondary battery illustrates the proposed method. The results show the efficiency of the proposed method in designing tolerances that minimize quality loss and manufacturing costs. Copyright © 2008 John Wiley & Sons, Ltd.
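The expected quadratic (Taguchi-type) quality loss that drives this kind of tolerance design can be sketched in a few lines; the linear performance model, the nominal proportion, and the loss coefficient below are hypothetical stand-ins, not values from the study.

```python
import numpy as np

def expected_quality_loss(y, target, k=1.0):
    """Taguchi-style expected quadratic loss k*E[(y - target)^2],
    which decomposes into k * (variance + squared bias)."""
    y = np.asarray(y, dtype=float)
    return k * (y.var() + (y.mean() - target) ** 2)

rng = np.random.default_rng(0)
nominal, tol = 0.30, 0.02            # nominal proportion and candidate tolerance
# Mixing error: the realized proportion varies uniformly within the tolerance
prop = rng.uniform(nominal - tol, nominal + tol, 10_000)
performance = 100.0 * prop           # toy linear performance model
loss_wide = expected_quality_loss(performance, target=30.0, k=2.0)

# Halving the tolerance shrinks the variance and hence the expected loss
prop_tight = rng.uniform(nominal - tol / 2, nominal + tol / 2, 10_000)
loss_tight = expected_quality_loss(100.0 * prop_tight, target=30.0, k=2.0)
```

Tolerance design then trades this falling quality loss against the rising cost of holding a tighter tolerance.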

2.
Tolerance design affects the quality, cost, and cycle time of a product. Most of the literature on tolerance design problems has focused on developing exact methods to minimize manufacturing cost or quality loss. The inherent assumption in this approach is that the assembly function is known before a tolerance design problem is analysed. With the current development of CAD (Computer-Aided Design) software, design engineers can proceed with tolerance design problems without knowing assembly functions in advance. In this study, Monte Carlo simulation is employed using VSA-3D/Pro software to obtain experimental data. The design of experiments (DOE) approach is then adopted for data analysis in order to select critical components for cost reduction and quality improvement. By implementing the computer experiments discussed here, a tolerance design analysis that improves quality and reduces cost can be performed for any complex assembly via computer during the early stage of design.
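A Monte Carlo tolerance simulation of the kind VSA-3D/Pro performs can be mimicked for a simple assembly; the three-part gap stack-up, the nominal dimensions, and the specification limits below are hypothetical illustrations, not the study's model.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 50_000                                   # number of Monte Carlo assemblies

# Component dimensions ~ Normal(nominal, tol/3), i.e. tolerance ~ 3 sigma
nominals = np.array([10.0, 5.0, 14.8])       # two parts and a slot (hypothetical)
tols = np.array([0.10, 0.05, 0.12])
dims = rng.normal(nominals, tols / 3.0, size=(N, 3))

# Assembly function: clearance gap = slot - (part1 + part2)
gap = dims[:, 2] - (dims[:, 0] + dims[:, 1])
spec_lo, spec_hi = -0.5, 0.1                 # hypothetical specification limits
fraction_out = np.mean((gap < spec_lo) | (gap > spec_hi))
```

Repeating such runs at different tolerance settings yields the data that a DOE analysis can screen for the critical components.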

3.
When the component proportions in mixture experiments are restricted by lower and upper bounds, the design space can become an irregular region that can induce multicollinearity among the component proportions. We therefore suggest the use of ridge regression as a means of stabilizing the estimates of the coefficients in the fitted model. We use fraction-of-design-space plots and violin plots to illustrate and evaluate the effect of ridge regression estimators with respect to the prediction variance and to guide the choice of the ridge constant k. We illustrate the methods with three examples from the literature. Copyright © 2010 John Wiley & Sons, Ltd.
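The ridge estimator itself is a one-line modification of least squares, beta = (X'X + kI)^(-1) X'y; the mixture-style data below, with near-collinearity induced by the proportions summing to one, are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
x1 = rng.uniform(0.2, 0.5, n)
x2 = rng.uniform(0.1, 0.3, n)
x3 = 1.0 - x1 - x2                     # mixture constraint: proportions sum to 1
X = np.column_stack([x1, x2, x3])
y = 3 * x1 + 2 * x2 + 1 * x3 + rng.normal(0, 0.05, n)

def ridge(X, y, k):
    """beta = (X'X + k I)^{-1} X'y; k = 0 gives ordinary least squares."""
    return np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y)

beta_ols = ridge(X, y, 0.0)
beta_ridge = ridge(X, y, 0.1)          # shrunken, more stable coefficients
```

Increasing k always shrinks the coefficient vector, which is what stabilizes prediction variance over the irregular region at the price of some bias.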

4.
Most work on tolerance design optimization has focused on developing exact methods to reduce manufacturing cost or to increase product quality. The inherent assumption in this approach is that assembly functions are known before a tolerance design problem is analyzed. With the current development of CAD (Computer-Aided Design) software, design engineers can address the tolerance design problem without knowing assembly functions in advance. In this study, VSA-3D/Pro software, which contains a set of simulation tools, is employed to generate experimental assembly data. These computer experimental data are converted into other quantities such as total cost and the process capability index, where total cost consists of tolerance cost and quality loss. Empirical equations for these two quantities are then obtained through statistical regression. After that, mathematical optimization and sensitivity analysis are performed within the constrained ‘desired design and process’ space. Consequently, tolerance design via computer experiments enables engineers to optimize design tolerances and manufacturing variation to achieve the highest quality at the most cost-effective price during the design and planning stage. Copyright © 2001 John Wiley & Sons, Ltd.
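The tolerance-cost versus quality-loss trade-off behind such a total-cost model can be sketched directly; the reciprocal cost model and its coefficients below are hypothetical, not fitted values from the study.

```python
import numpy as np

t = np.linspace(0.01, 0.5, 500)          # candidate tolerance values
tol_cost = 2.0 + 0.5 / t                 # reciprocal tolerance-cost model
quality_loss = 40.0 * (t / 3.0) ** 2     # k * sigma^2 with sigma = t/3
total = tol_cost + quality_loss
t_star = t[np.argmin(total)]             # tolerance minimizing total cost
```

Tight tolerances make the first term explode while loose ones inflate the second, so the total cost has an interior minimum that the optimization step locates.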

5.
Mixture experiments involve developing a dedicated formulation for a specific application. We generate robust mixture designs using genetic algorithms (GAs) for cases where the region of interest is an irregularly shaped polyhedral region formed by constraints on the proportions of the mixture components. As the objective function for the GA, we propose a weighted optimality criterion based on the geometric mean. It is assumed that terms in the initial model that display unimportant effects are removed; the design generation objective therefore requires model robustness across the set of reduced models. Taking this alternative route to the problem, we find that the proposed GA designs based on G- and/or IV-efficiency are robust to model misspecification.
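A geometric-mean criterion over a set of reduced models can be sketched as follows; the simplex-centroid design, the D-criterion scaling, and the candidate models are illustrative assumptions rather than the paper's exact formulation (which weights G- and IV-efficiencies).

```python
import numpy as np

def d_criterion(X):
    """Scaled D-criterion |X'X|^(1/p) / n for an n x p model matrix."""
    n, p = X.shape
    return np.linalg.det(X.T @ X) ** (1.0 / p) / n

def geometric_mean_score(design, model_builders):
    """Geometric mean of the criterion values across candidate models."""
    vals = [d_criterion(build(design)) for build in model_builders]
    return float(np.exp(np.mean(np.log(vals))))

# Simplex-centroid design for three components (rows sum to one)
design = np.array([
    [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0],
    [0.5, 0.5, 0.0], [0.5, 0.0, 0.5], [0.0, 0.5, 0.5],
    [1/3, 1/3, 1/3],
])

full = lambda d: np.column_stack(
    [d, d[:, [0]] * d[:, [1]], d[:, [0]] * d[:, [2]], d[:, [1]] * d[:, [2]]])
reduced1 = lambda d: np.column_stack([d, d[:, [0]] * d[:, [1]]])
reduced2 = lambda d: d                   # first-order Scheffe model
score = geometric_mean_score(design, [full, reduced1, reduced2])
```

Because the geometric mean collapses to zero if any single model is estimated poorly, maximizing it pushes the GA toward designs that are adequate for every reduced model at once.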

6.
Unconstrained mixture experiments seldom occur in practice, and the constraints imposed on the component proportions considerably complicate the design of the experiment. In addition, the constraints initially specified by the engineers often leave the statistician with an empty design region. In this technical note, we show how integer programming may help the statistician relax one or more constraints in order to remove the inconsistency.
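The note's approach uses integer programming to decide which constraints to relax; as a much simpler stand-in, the sketch below applies the classical consistency conditions for mixture bounds (the region is nonempty only if the lower bounds sum to at most one and the upper bounds sum to at least one) and relaxes a lower bound until they hold. The bounds themselves are hypothetical.

```python
import numpy as np

lower = np.array([0.4, 0.4, 0.3])        # sum = 1.1 > 1: empty design region
upper = np.array([0.6, 0.6, 0.5])

def consistent(lower, upper):
    """Region {x : lower <= x <= upper, sum(x) = 1} is nonempty iff these hold."""
    return bool(np.all(lower <= upper) and lower.sum() <= 1.0 <= upper.sum())

inconsistent = not consistent(lower, upper)
relaxed = lower.copy()
while not consistent(relaxed, upper):
    j = int(np.argmax(relaxed))          # relax the largest lower bound
    relaxed[j] = max(0.0, relaxed[j] - (relaxed.sum() - 1.0))
now_consistent = consistent(relaxed, upper)
```

The integer-programming formulation generalizes this idea: binary variables select which bounds to relax, so the smallest (or cheapest) set of modifications can be found rather than a greedy one.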

7.
Robust parameter designs are widely used to produce products and processes that perform consistently well across the various conditions known as noise factors. Recently, the robust parameter design method has been implemented in computer experiments, where the structure of the conventional product array design becomes unsuitable because of its extensive number of runs and its reliance on polynomial modeling. In this article, we propose a new framework, robust parameter design via stochastic approximation (RPD-SA), to efficiently optimize robust parameter design criteria. It can be applied to general robust parameter design problems, but is particularly powerful in the context of computer experiments. It has four advantages: (1) fast convergence to the optimal product setting with a small number of function evaluations; (2) incorporation of high-order effects of both design and noise factors; (3) adaptation to a constrained, irregular region of operability; and (4) no requirement for a statistical analysis phase. In the numerical studies, we compare RPD-SA to Monte Carlo sampling with Newton-Raphson-type optimization. An “Airfoil” example is used to compare the performance of RPD-SA, conventional product array designs, and space-filling designs with the Gaussian process. The studies show that RPD-SA has preferable performance in terms of effectiveness, efficiency, and reliability.
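The core of a stochastic-approximation optimizer is a short loop; the sketch below uses Kiefer-Wolfowitz finite differences on a noisy one-dimensional quality loss, which is a generic stand-in for the article's RPD criteria, not its actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)

def noisy_loss(x):
    """One noisy evaluation of the (unknown) expected quality loss."""
    return (x - 2.0) ** 2 + 1.0 + rng.normal(0, 0.1)

x = 0.0                                 # starting parameter setting
for k in range(1, 2001):
    a_k = 1.0 / k                       # decreasing gain sequence
    c_k = 1.0 / k ** (1 / 3)            # finite-difference half-width
    grad = (noisy_loss(x + c_k) - noisy_loss(x - c_k)) / (2 * c_k)
    x = x - a_k * grad                  # Robbins-Monro style update
# x should now be near the true minimizer 2.0
```

Each iteration costs only two function evaluations and never fits a surrogate model, which is the source of the "no statistical analysis phase" advantage claimed above.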

8.
Conventionally, parameter design precedes tolerance design in the course of product design or process planning. To lower production costs as well as to improve quality, this study proposes the simultaneous determination of parameter and tolerance values when designing an electronic circuit. With the current development of CAD (Computer-Aided Design) software for electronic circuit design, engineers can determine parameter and tolerance values without providing transfer functions for circuit analysis. In this study, a computer experiment is performed using CAD software (PSpice) to obtain outputs that are converted into a total cost comprising the quality loss, the tolerance cost, and the failure cost. Response Surface Methodology (RSM) is then employed to minimize the total cost and to find the optimal parameter and tolerance values statistically. Consequently, a parameter and tolerance design for quality improvement and cost reduction can be achieved for any complex electronic circuit during the early stages of design.

9.
The general problem considered is an optimization problem involving the selection of design parameters that yield an optimal response. We assume that some initial response data are available and that further experimentation (physical experiments and/or computer simulations) is to be used to obtain more information. We assume further that resources and system complexity together restrict the number of experiments or simulations that can be performed. Consequently, the levels of the design parameters used in the simulations must be selected in a way that will efficiently approximate the optimal design ‘location’ and the optimal value. This paper describes an algorithmic ‘response-modeling’ approach for performing this selection. The algorithm is demonstrated on a simple analytical surface and is applied to two additional problems that have been addressed in the literature, for comparison with other approaches.

10.
This article presents a case study of developing a space-filling design (SFD) for a constrained mixture experiment when the experimental region is specified by single-component constraints (SCCs), linear multiple-component constraints (LMCCs), and nonlinear multiple-component constraints (NMCCs). Traditional methods and software for designing constrained mixture experiments with SCCs and LMCCs (using either optimal design or SFD approaches) are not directly applicable because of the NMCCs. An SFD algorithm in the JMP® software was modified to accommodate the NMCCs; the modification is described in this article. The case study involves high-level waste (HLW) glass that is subject to the formation of nepheline crystals as the glass cools, which can significantly reduce the durability of HLW glass. The goal of the study was to develop an SFD for the HLW glass compositional region where nepheline may form, and to generate data for modeling nepheline formation as a function of HLW glass composition. The HLW glass composition region was specified in terms of eight components with SCCs, two LMCCs, and two NMCCs. The NMCCs were based on a nonlinear logistic regression model for a binary nepheline response that was developed from previous data. This article discusses the HLW glass example, the constraints specifying the experimental composition region, and how an existing algorithm for generating SFDs was modified to accommodate the NMCCs. The methodology can be applied to any example in which the experimental region is specified by one or more nonlinear constraints in addition to linear constraints on mixture components and/or non-mixture variables.
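One generic way to space-fill an irregular region like this is rejection sampling followed by a greedy maximin selection; the three-component region and the nonlinear constraint below are hypothetical stand-ins for the HLW glass constraints, and the actual modified JMP algorithm differs.

```python
import numpy as np

rng = np.random.default_rng(3)

def in_region(x):
    """Hypothetical constrained region: bounds plus one nonlinear constraint."""
    bounds_ok = np.all(x >= 0.1) and np.all(x <= 0.7)
    nonlinear_ok = x[0] * x[1] <= 0.15          # stand-in nonlinear MCC
    return bool(bounds_ok and nonlinear_ok)

# Uniform candidates on the simplex, then reject the infeasible ones
cands = rng.dirichlet(np.ones(3), size=5000)
cands = np.array([c for c in cands if in_region(c)])

def greedy_maximin(cands, n_pts):
    """Greedily add the candidate farthest from the points chosen so far."""
    chosen = [0]
    for _ in range(n_pts - 1):
        d = np.min(np.linalg.norm(cands[:, None] - cands[chosen], axis=2), axis=1)
        chosen.append(int(np.argmax(d)))
    return cands[chosen]

design = greedy_maximin(cands, 10)
```

Because feasibility is checked point by point, nonlinear constraints cost nothing extra here, which is the basic reason constraint shape matters less for SFD-style algorithms than for vertex-based optimal design.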

11.
We propose and develop a genetic algorithm (GA) for generating D-optimal designs when the experimental region is an irregularly shaped polyhedral region. Our approach does not require selecting points from a user-defined candidate set of mixtures, and it allows movement through a continuous region that includes highly constrained mixture regions. This approach is useful in situations where extreme vertices (EV) designs or conventional exchange algorithms fail to find a near-optimal design. For illustration, examples with three and four components are presented, comparing our GA designs with those obtained using EV designs and exchange-point algorithms over an irregularly shaped polyhedral region. The results show that the designs produced by the GA perform as well as, if not better than, the designs produced by the exchange-point algorithms, and that they perform better than the EV designs. This suggests that the GA is an alternative approach for constructing D-optimal designs for mixture experiments when EV designs or exchange-point algorithms are insufficient. Copyright © 2012 John Wiley & Sons, Ltd.
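A stripped-down GA of this kind fits in a few lines: evolve small mixture designs toward a higher D-criterion (log|X'X| for a first-order Scheffé model) with truncation selection and Gaussian mutation projected back onto the simplex. The population size, rates, and unconstrained three-component region are illustrative simplifications of the paper's method.

```python
import numpy as np

rng = np.random.default_rng(13)
n_runs, n_comp, pop_size = 6, 3, 30

def fitness(design):
    """log|X'X| for the first-order Scheffe model (columns = proportions)."""
    sign, ld = np.linalg.slogdet(design.T @ design)
    return ld if sign > 0 else -np.inf

def mutate(design):
    """Gaussian perturbation, projected back onto the simplex."""
    child = np.abs(design + rng.normal(0, 0.1, design.shape))
    return child / child.sum(axis=1, keepdims=True)

pop = [rng.dirichlet(np.ones(n_comp), size=n_runs) for _ in range(pop_size)]
f0 = max(fitness(p) for p in pop)          # best fitness in the initial population
for gen in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:pop_size // 2]          # truncation selection (elitist)
    pop = parents + [mutate(p) for p in parents]
best = max(pop, key=fitness)
```

Since designs are represented as continuous coordinates rather than indices into a candidate list, the search is not limited to vertices or any pre-enumerated point set, which is the property the paper exploits.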

12.
Computer experiments have received a great deal of attention in many fields of science and technology. Most of the literature assumes that all the input variables are quantitative. However, researchers often encounter computer experiments involving both qualitative and quantitative variables (BQQV). In this article, a new interface on design and analysis for computer experiments with BQQV is proposed. The new designs are a kind of sliced Latin hypercube design with points clustered in the design region, possessing good uniformity within each slice. For computer experiments with BQQV, such designs help to measure the similarities among the responses of different level combinations of the qualitative variables. An adaptive analysis strategy intended for the proposed designs is developed. The proposed strategy allows us to automatically extract information from useful auxiliary responses to increase the precision of prediction for the target response. The interface between the proposed design and the analysis strategy is demonstrated to be effective via simulation and a real-life example from the food engineering literature. Supplementary materials for this article are available online.

13.
Conventional parameter or tolerance designs focus on developing exact methods to minimize quality loss or manufacturing cost. The inherent assumption is that the response functions, which represent the link between the controllable variables and the response values of the quality characteristics, are known before a design is developed. Moreover, parameter and tolerance values have been treated as independent controllable variables in previous works; that is, they are determined separately in design activities. Currently, advanced computer software, such as computer-aided engineering tools, can help engineers handle design problems with unknown response functions at the stage of product design and process planning. Therefore, in this study, the software ANSYS was employed to obtain simulation data representing the response values of the quality characteristics. These response values are used to fit a set of response functions for later analysis. However, previous work on computer simulation for design and planning usually lacks consideration of the noise impact from an external design system. To approximate a realistic design environment, various levels of the controllable variables, in conjunction with artificial noises created from uncontrollable variables, are used to generate simulated data for statistical analysis via Response Surface Methodology (RSM). An optimization technique, such as mathematical programming, is then adopted to integrate these response functions into one formulation so that optimal parameter and tolerance values are determined concurrently, with multiple quality characteristics taken into consideration. A bike-frame design was used to demonstrate the presented approach, with four quality characteristics of interest: material cost, bike-frame weight, structure reliability, and rigidity dependability. The goal is to minimize material cost and bike-frame weight and to maximize structure reliability and rigidity dependability. This approach is useful for solving complex design problems in their early stages, while providing enhanced functionality, quality, economic benefits, and a shorter design cycle.

14.
This article describes the solution to a unique and challenging mixture experiment design problem involving (1) 19 and 21 components for two different parts of the design, (2) many single-component and multicomponent constraints, (3) augmentation of existing data, (4) a layered design developed in stages, and (5) a no-candidate-point optimal design approach. The problem involved studying the liquidus temperature of spinel crystals as a function of nuclear waste glass composition. A D-optimal approach was used to augment existing glasses with new nonradioactive and radioactive glasses chosen to cover the designated nonradioactive and radioactive experimental regions.

The traditional approach to building D-optimal mixture experiment designs is to generate a set of candidate points from which design points are D-optimally selected. The large number of mixture components (19 or 21) and the many constraints defining each layer of the waste glass experimental region made it impossible to generate and store the huge number of vertices and other typical candidate points. A new coordinate-exchange algorithm for constrained mixture experiments, implemented in JMP®, was used to D-optimally select design points without candidate points. This new coordinate-exchange algorithm for mixture experiments is described in the article.
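A toy coordinate-exchange pass conveys the idea of optimizing without candidate points: each coordinate move follows a Cox direction (rescaling the other proportions to keep the row summing to one) and is accepted only if it does not decrease log|X'X|. Three components, a first-order Scheffé model, and the grid of trial values are simplifying assumptions; the JMP implementation is far more elaborate and handles the constraints.

```python
import numpy as np

rng = np.random.default_rng(5)

def log_det(X):
    sign, ld = np.linalg.slogdet(X.T @ X)
    return ld if sign > 0 else -np.inf

def cox_move(row, j, v):
    """Set component j to v, rescaling the others so the row sums to one."""
    out = row.copy()
    idx = [i for i in range(row.size) if i != j]
    rest = 1.0 - row[j]
    if rest > 1e-12:
        out[idx] = row[idx] * (1.0 - v) / rest
    else:
        out[idx] = (1.0 - v) / len(idx)      # the other components were all zero
    out[j] = v
    return out

D = rng.dirichlet(np.ones(3), size=6)        # random 6-run starting design
ld_start = log_det(D)
grid = np.linspace(0.0, 1.0, 21)             # trial values for each coordinate
for _ in range(10):                          # a few full exchange passes
    for r in range(D.shape[0]):
        for j in range(3):
            trials = list(grid) + [D[r, j]]  # keep the current value as fallback
            v_best = max(trials, key=lambda v: log_det(
                np.vstack([D[:r], cox_move(D[r], j, v)[None, :], D[r + 1:]])))
            D[r] = cox_move(D[r], j, v_best)
```

Because each step only evaluates moves along one coordinate, memory use is independent of how many vertices the constrained region has, which is why the approach scales to 19 or 21 components.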

15.
To study the distribution of ferrous burden (a mixture of pellets and sinter) in the blast furnace, the burden must be characterised in terms of input parameters that can be used in discrete element method (DEM) simulations. A methodology is presented to determine the parameters that represent the ferrous burden mixture. First, angle-of-repose experiments are performed for pellets, sinter, and their mixtures at different proportions. Using these experimental data, the DEM parameters for pellets and sinter are chosen individually, based on previously determined experimental values and a DEM calibration approach, so that each material is represented accurately. The DEM parameters for pellet-sinter contact are then taken as the averages of their individual counterparts. Using all of the determined parameters for intra-material as well as inter-material particle contacts, simulations of the angle of repose for mixtures at varying proportions are performed, and a good match is found between experimental and simulated values at all proportions. In this way, binary mixtures are characterised while maintaining the constituents as individual species.

16.
This article presents a case study of developing an experimental design for a constrained mixture experiment when the experimental region is defined by single-component constraints (SCCs), linear multiple-component constraints (MCCs), and a nonlinear MCC. Traditional methods and software for designing constrained mixture experiments with SCCs and linear MCCs are not directly applicable because of the nonlinear MCC. A modification of existing methodology to account for the nonlinear MCC was developed and is described in this article. The case study involves a 15-component nuclear waste glass example in which SO3 is one of the components. SO3 has a solubility limit in glass that depends on the composition of the balance of the glass. A goal was to design the experiment so that SO3 would not exceed its predicted solubility limit for any of the experimental glasses. A partial quadratic mixture model expressed in the relative proportions of the 14 other components was used to construct a nonlinear MCC in terms of all 15 components. In addition, there were SCCs and linear MCCs. This article discusses the waste glass example and how a layered design was generated to (1) account for the SCCs, linear MCCs, and nonlinear MCC and (2) meet the goals of the study.

17.
Sliced Latin hypercube designs (SLHDs) have important applications in designing computer experiments with continuous and categorical factors. However, a randomly generated SLHD can be poor in terms of space-filling, and with the existing construction method, which generates the SLHD column by column using sliced permutation matrices, it is also difficult to search for the optimal SLHD. In this article, we develop a new construction approach that first generates a small Latin hypercube design in each slice and then arranges the slices together to form the SLHD. The new approach is intuitive and can be easily adapted to generate orthogonal SLHDs and orthogonal array-based SLHDs. More importantly, it enables us to develop general algorithms that can search for the optimal SLHD efficiently.
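The slice-first construction can be sketched compactly: each of t slices is an m-run Latin hypercube, and the t fine sub-levels of each coarse level are dealt out across the slices so that the combined (m*t)-run design is itself a Latin hypercube. Randomization details and the optimality search from the article are omitted.

```python
import numpy as np

rng = np.random.default_rng(11)

def sliced_lhd(m, t, d):
    """(m*t) x d design in [0,1)^d; rows s*m:(s+1)*m form slice s.
    Each slice is an m-run LHD and the whole design is an (m*t)-run LHD."""
    X = np.zeros((m * t, d))
    for col in range(d):
        # Deal the t fine sub-levels of each coarse level out across slices
        fine = np.zeros((t, m), dtype=int)
        for v in range(m):
            fine[:, v] = v * t + rng.permutation(t)
        for s in range(t):
            perm = rng.permutation(m)       # slice s visits each coarse level once
            levels = fine[s, perm]
            X[s * m:(s + 1) * m, col] = (levels + rng.random(m)) / (m * t)
    return X

X = sliced_lhd(m=4, t=3, d=2)
```

Working slice by slice like this is what makes it straightforward to bolt a space-filling or orthogonality search onto each small component design.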

18.
We investigate the merits of replication, and provide methods for optimal design (including replicates), with the goal of obtaining globally accurate emulation of noisy computer simulation experiments. We first show that replication can be beneficial from both design and computational perspectives, in the context of Gaussian process surrogate modeling. We then develop a lookahead-based sequential design scheme that can determine if a new run should be at an existing input location (i.e., replicate) or at a new one (explore). When paired with a newly developed heteroscedastic Gaussian process model, our dynamic design scheme facilitates learning of signal and noise relationships which can vary throughout the input space. We show that it does so efficiently, on both computational and statistical grounds. In addition to illustrative synthetic examples, we demonstrate performance on two challenging real-data simulation experiments, from inventory management and epidemiology. Supplementary materials for the article are available online.
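The basic statistical benefit of replication is easy to demonstrate in isolation: averaging r replicates at one input cuts the noise variance of the response estimate by a factor of r, which a heteroscedastic surrogate can then exploit for separating signal from noise. The numbers below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(21)
r = 10                                        # replicates per design point

single = rng.normal(5.0, 1.0, size=20_000)    # single noisy runs at one input
replicated = rng.normal(5.0, 1.0, size=(20_000, r)).mean(axis=1)

var_single = single.var()                     # ~1.0: noise variance of one run
var_rep = replicated.var()                    # ~1/r: variance of the r-run mean
```

The computational benefit in Gaussian process modeling comes on top of this: n runs collapsed onto fewer unique inputs shrink the covariance matrix that must be inverted.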

19.
A deterministic optimization usually ignores the effects of uncertainties in the design variables or design parameters on the constraints. In practical applications, the optimum solution must be able to endure some tolerance, so that the constraints are still satisfied when the solution undergoes variations within the tolerance range. An optimization problem under tolerance conditions is formulated in this article. It is a kind of robust design and a special case of a generalized semi-infinite programming (GSIP) problem. To overcome the deficiency of directly solving the double-loop optimization, two sequential algorithms are proposed in which the double-loop optimization is solved by a sequence of cycles; in each cycle, a deterministic optimization and a worst-case analysis are performed in succession. In sequential algorithm 1 (SA1), a shifting factor is introduced to adjust the feasible region in the next cycle, while in sequential algorithm 2 (SA2), the shifting factor is replaced by a shifting vector. Several examples are presented to demonstrate the efficiency of the proposed methods. An optimal design obtained with the presented method can endure a certain variation of the design variables without violating the constraints. For GSIP, it is shown that SA1 can obtain a solution with accuracy and efficiency equivalent to those of a local reduction method (LRM). Nevertheless, the LRM is not applicable to the tolerance design problem studied in this article.
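The sequential scheme can be illustrated on a one-variable toy problem: alternate a deterministic optimization with a worst-case analysis, shifting the constraint boundary by the observed violation until the optimum tolerates the full variation. The problem (maximize x subject to x <= 1, tolerance ±0.1) is a hypothetical illustration, not one of the article's examples.

```python
tol = 0.1                         # the solution must tolerate x +/- tol
shift = 0.0
for cycle in range(10):
    # Deterministic optimization: maximize x subject to x <= 1 - shift
    x_opt = 1.0 - shift
    # Worst-case analysis: max of g(x) = x - 1 over the tolerance box
    worst = x_opt + tol
    violation = worst - 1.0
    if violation <= 1e-9:
        break                     # the shifted optimum now tolerates +/- tol
    shift += violation            # SA1-style update of the shifting factor
```

Each cycle solves only cheap single-loop subproblems, which is the point of avoiding the nested double-loop formulation.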

20.
Jung S, Choi DH, Choi BL, Kim JH. Applied Optics 2011;50(23):4688-4700.
In the manufacturing process for the lens system of a mobile phone camera, various types of assembly and manufacturing tolerances, such as tilt and decenter, must be appropriately allocated. Because these tolerances affect manufacturing cost and the expected optical performance, a systematic design methodology for determining optimal tolerances is necessary. To determine the tolerances that minimize production cost while satisfying reliability constraints on important optical performance indices, we propose a tolerance design procedure for a lens system. A tolerance analysis is carried out using Latin hypercube sampling to evaluate the expected optical performance. The tolerance optimization is carried out using a function-based sequential approximate optimization technique that can reduce the computational burden and smooth the numerical noise occurring in the optimization process. Using the proposed design approach, production cost was decreased by 28.3% compared to the initial cost while all the constraints on the expected optical performance were satisfied. We believe that the tolerance analysis and design procedure presented in this study can be applied to the tolerance optimization of other systems.
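Latin hypercube sampling for such a tolerance analysis is a short routine: one point per stratum in each dimension, here mapped onto two hypothetical perturbation variables (tilt and decenter) and a toy merit function standing in for the real optical performance indices.

```python
import numpy as np

rng = np.random.default_rng(9)

def latin_hypercube(n, d, rng):
    """n points in [0,1)^d with exactly one point in each axis stratum."""
    U = np.empty((n, d))
    for j in range(d):
        U[:, j] = (rng.permutation(n) + rng.random(n)) / n
    return U

U = latin_hypercube(64, 2, rng)
tilt = -0.05 + 0.10 * U[:, 0]            # tilt perturbation (deg), hypothetical
decenter = -10.0 + 20.0 * U[:, 1]        # decenter perturbation (um), hypothetical
# Toy merit function standing in for the lens system's optical performance
merit = 1.0 - 0.3 * (tilt / 0.05) ** 2 - 0.2 * (decenter / 10.0) ** 2
```

Compared with plain Monte Carlo, the stratification covers the tolerance ranges evenly, so far fewer ray-trace evaluations are needed for a stable estimate of expected performance.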


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)