Similar Documents
20 similar documents retrieved (search time: 359 ms)
1.
Second‐order experimental designs are employed when an experimenter wishes to fit a second‐order model to account for response curvature over the region of interest. Partition designs are utilized when the output quality or performance characteristics of a product depend not only on the effects of the factors in the current process, but also on the effects of factors from preceding processes. Standard experimental design methods are often difficult to apply to several sequential processes. We present an approach to building second‐order response models for sequential processes with several design factors and multiple responses. The proposed design expands current experimental designs to incorporate two processes into one partitioned design. Potential advantages include a reduction in the time required to execute the experiment, a decrease in the number of experimental runs, and improved understanding of the process variables and their influence on the responses. Copyright © 2002 John Wiley & Sons, Ltd.
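As a sketch of the kind of second-order (quadratic) model such designs support, the snippet below builds the full second-order model matrix for two coded factors and fits it by least squares; the factor settings and responses are simulated placeholders, not data from the paper.

import numpy as np

def second_order_matrix(X):
    # Expand an n x k factor matrix into intercept, main effects,
    # two-factor interactions, and pure quadratic columns.
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 2))                      # 20 runs, 2 coded factors
y = (5 + 2*X[:, 0] - X[:, 1] + 0.5*X[:, 0]*X[:, 1]
     - 3*X[:, 0]**2 + rng.normal(0, 0.1, 20))             # simulated response
beta, *_ = np.linalg.lstsq(second_order_matrix(X), y, rcond=None)
print(np.round(beta, 2))                                  # ~ [5, 2, -1, 0.5, -3, 0]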

2.
In many industrial experiments there are restrictions on the resources (or cost) required for performing the runs in a response surface design. This requires practitioners to choose some subset of the candidate set of experimental runs. The appropriate selection of design points under resource constraints is an important aspect of multi‐factor experimentation. A well‐planned experiment should consist of factor‐level combinations selected such that the resulting design has desirable statistical properties while the resource constraints are not violated or the experimental cost is minimized. The resulting designs are referred to as cost‐efficient designs. We use a genetic algorithm for constructing cost‐constrained G‐efficient second‐order response surface designs over cuboidal regions when the experimental cost at certain factor levels is high and a resource constraint exists. Consideration of practical resource (or cost) restrictions and different cost structures provides valuable information for planning effective and economical experiments when optimizing statistical design properties. Copyright © 2005 John Wiley & Sons, Ltd.
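A toy sketch of the idea (not the authors' algorithm): a small genetic algorithm selects 8 runs from a 3^2 candidate grid, scoring each design by its maximum scaled prediction variance (a G-efficiency proxy) plus a penalty when an assumed cost cap is exceeded. The cost structure, budget, and GA settings are all invented for illustration.

import numpy as np
from itertools import product

candidates = np.array(list(product([-1.0, 0.0, 1.0], repeat=2)))  # 3^2 grid
cost = np.where(candidates[:, 0] == 1.0, 3.0, 1.0)  # assumed: high level of x1 is costly
N, budget = 8, 14.0

def model_matrix(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1*x2, x1**2, x2**2])

def fitness(idx):
    # Max scaled prediction variance over the grid (lower = more G-efficient),
    # plus a large penalty for exceeding the cost budget.
    F = model_matrix(candidates[idx])
    M = F.T @ F
    if np.linalg.matrix_rank(M) < M.shape[1]:
        return np.inf
    Minv = np.linalg.inv(M)
    G = model_matrix(candidates)
    spv = N * np.einsum('ij,jk,ik->i', G, Minv, G)
    return spv.max() + 1e6 * max(0.0, cost[idx].sum() - budget)

rng = np.random.default_rng(1)
pop = [rng.integers(0, 9, size=N) for _ in range(40)]     # designs = index vectors
for _ in range(200):
    pop = sorted(pop, key=fitness)[:10]                   # keep the 10 fittest
    children = []
    for _ in range(30):
        a, b = pop[rng.integers(10)], pop[rng.integers(10)]
        child = np.where(rng.random(N) < 0.5, a, b)       # uniform crossover
        if rng.random() < 0.3:
            child[rng.integers(N)] = rng.integers(9)      # point mutation
        children.append(child)
    pop = pop + children
best = min(pop, key=fitness)
print(candidates[np.sort(best)], "cost:", cost[best].sum())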

3.
We propose ‘low‐cost response surface methods' (LCRSMs) that typically require half the experimental runs of standard response surface methods based on central composite and Box–Behnken designs, but yield comparable or lower modeling errors under realistic assumptions. In addition, the LCRSMs have substantially lower modeling errors and greater expected savings compared with alternatives having comparable numbers of runs, including small composite designs and computer‐generated designs based on popular criteria such as D‐optimality. The LCRSM procedures appear to be the first experimental design methods derived as the solution to a simulation optimization problem. Together with modern computers, simulation optimization offers unprecedented opportunities for applying clear, realistic multicriterion objectives and assumptions to produce useful experimental design methods. We compare the proposed LCRSMs with alternatives based on six criteria. We conclude that the proposed methods offer attractive alternatives when the experimenter is considering dropping factors to use standard response surface methods, or would like to perform relatively few runs and stop with a second‐order model. Copyright © 2002 John Wiley & Sons, Ltd.

4.
Two‐level factorial designs in blocks of size two are useful in a variety of experimental settings, including microarray experiments. Replication is typically used to allow estimation of the relevant effects, but when the number of factors is large this common practice can result in designs with a prohibitively large number of runs. One alternative is to use a design with fewer runs that allows estimation of both main effects and two‐factor interactions. Such designs are available in full factorial experiments, though they may still require a great many runs. In this article, we develop fractional factorial designs in blocks of size two for cases where the number of factors is less than nine, using just half of the runs needed for the designs given by Kerr (J. Qual. Tech. 2006; 38:309–318). Two approaches, the orthogonal array approach and the generator approach, are used to construct our designs. Analysis of the resulting experimental data from the suggested designs is also given. Copyright © 2011 John Wiley & Sons, Ltd.
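To make the generator approach concrete, here is a minimal sketch (an illustration of the general technique, not Kerr's or the authors' specific construction): a 2^(4-1) fraction defined by D = ABC, split into blocks of size two by confounding the blocks with AB and AC.

import numpy as np
from itertools import product

base = np.array(list(product([-1, 1], repeat=3)))   # full 2^3 in A, B, C
A, B, C = base.T
D = A * B * C                                       # generator D = ABC (I = ABCD)
design = np.column_stack([A, B, C, D])

block_sign = np.column_stack([A * B, A * C])        # two blocking generators
blocks = {tuple(s): i for i, s in enumerate(np.unique(block_sign, axis=0))}
for run, s in zip(design, block_sign):
    print(run, "-> block", blocks[tuple(s)])        # 4 blocks, 2 runs each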

5.
A historically common choice for evaluating response surface designs is to use alphabetic optimality criteria. Single‐number criteria such as D, A, G, and V optimality do not completely reflect the estimation or prediction variance characteristics of the designs in question. For prediction‐based assessment, alternatives to single‐number summaries include graphical displays of the prediction variance across the design region. Variance dispersion graphs, fraction of design space plots, and quantile plots have been suggested to evaluate the overall prediction capability of response surface designs. The quantile plots use the percentiles of the distribution of the prediction variance at a given radius, instead of just the mean, maximum, and minimum prediction variance values on concentric spheres inside the region of interest. Previously, the user had to select several values of the radius and draw corresponding quantile plots to evaluate the overall prediction capability of a design. This user‐specified choice of radii makes the plot somewhat subjective. Alternatively, we propose to remove this subjectivity by using a three‐dimensional quantile plot. As another extension of the quantile plots, we suggest dynamic quantile plots that animate the quantile plots for comparing and evaluating response surface designs. Copyright © 2011 John Wiley & Sons, Ltd.
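The idea behind the quantile plots can be sketched in a few lines: sample many points on a circle of a given radius, evaluate the scaled prediction variance (SPV) of the design there, and report percentiles of its distribution rather than only the minimum, mean, and maximum. The face-centered CCD below is an arbitrary example; being non-rotatable, its SPV genuinely varies on each circle.

import numpy as np

def model_matrix(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1*x2, x1**2, x2**2])

# Face-centered CCD (alpha = 1): 4 factorial + 4 axial + 3 center runs
ccd = np.array([[-1,-1],[1,-1],[-1,1],[1,1],
                [-1,0],[1,0],[0,-1],[0,1],[0,0],[0,0],[0,0]])
Minv = np.linalg.inv(model_matrix(ccd).T @ model_matrix(ccd))

rng = np.random.default_rng(2)
for r in (0.5, 1.0):
    theta = rng.uniform(0, 2*np.pi, 5000)
    P = r * np.column_stack([np.cos(theta), np.sin(theta)])   # points at radius r
    G = model_matrix(P)
    spv = len(ccd) * np.einsum('ij,jk,ik->i', G, Minv, G)     # scaled prediction variance
    print(f"r={r}:", np.percentile(spv, [0, 25, 50, 75, 100]).round(2))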

6.
In industrial experiments, restrictions on the execution of the experimental runs or the existence of one or more hard‐to‐change factors often leads to split‐plot experiments, where there are two types of experimental units and two independent randomizations. The resulting compound symmetric error structure, as well as the settings of whole‐plot and subplot factors, play important roles in the performance of split‐plot experiments. When the practitioner is interested in predicting the response, a response surface design for a second‐order model such as a central composite design (CCD) is often used. The prediction variance of second‐order designs under a split‐plot error structure is often of interest. In this paper, fraction of design space (FDS) plots are adapted to split‐plot designs. In addition to the global curve exploring the entire design space, sliced curves at various whole‐plot levels are presented to study prediction performance for subregions in the design space. The different sizes of the constrained subregions are accounted for by the proportional size of the sliced curves. The construction and use of the FDS plots are demonstrated through two examples of the restricted CCD in split‐plot schemes. We also consider the impact of the variance ratio on design performance. Copyright © 2006 John Wiley & Sons, Ltd.
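A minimal FDS sketch for an ordinary (completely randomized) design conveys the construction the paper adapts to split plots: sample the cuboidal region uniformly, evaluate the scaled prediction variance at each point, and sort; plotting the sorted values against the cumulative fraction of points gives the FDS curve. The split-plot extension, with its compound symmetric error structure and sliced curves, is omitted here.

import numpy as np

def model_matrix(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1*x2, x1**2, x2**2])

ccd = np.array([[-1,-1],[1,-1],[-1,1],[1,1],
                [-1,0],[1,0],[0,-1],[0,1],[0,0],[0,0]])      # face-centered CCD
Minv = np.linalg.inv(model_matrix(ccd).T @ model_matrix(ccd))

rng = np.random.default_rng(3)
P = rng.uniform(-1, 1, size=(20000, 2))                      # uniform over the cube
G = model_matrix(P)
spv = len(ccd) * np.einsum('ij,jk,ik->i', G, Minv, G)
fds_fraction = np.arange(1, len(spv) + 1) / len(spv)         # x-axis of the FDS plot
fds_spv = np.sort(spv)                                       # y-axis of the FDS plot
print("median SPV:", fds_spv[len(spv) // 2].round(2),
      "| max SPV:", fds_spv[-1].round(2))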

7.
The sequential design approach to response surface exploration is often viewed as advantageous, as it provides the opportunity to learn from each successive experiment with the ultimate goal of determining optimum operating conditions for the system or process under study. Recent literature has explored factor screening and response surface optimization using only one three‐level design to handle situations where conducting multiple experiments is prohibitive. The most straightforward and accessible analysis strategy for such designs is to first perform a main‐effects‐only analysis to screen important factors before projecting the design onto these factors to conduct response surface exploration. This article proposes the use of optimal designs with minimal aliasing (MA designs) and demonstrates that they are more effective at screening important factors than the existing designs recommended for single‐design response surface exploration. For comparison purposes, we construct 27‐run MA designs with up to 13 factors and demonstrate their utility using established design criteria and a simulation study. Copyright © 2011 John Wiley & Sons, Ltd.

8.
Traditional space-filling designs are a convenient way to explore throughout an input space of flexible dimension and have design points close to any region where future predictions might be of interest. In some applications, there may be a model connecting the input factors to the response(s), which provides an opportunity to consider the spacing not only in the input space but also in the response space. In this paper, we present an approach for leveraging current understanding of the relationship between inputs and responses to generate designs that allow the experimenter to flexibly balance the spacing in these two regions to find an appropriate design for the experimental goals. Applications where good spacing of the observed response values is desirable include calibration problems, where the goal is to demonstrate the adequacy of the model across the range of the responses; sensitivity studies, where the outputs from a submodel may be used as inputs for subsequent models; and inverse problems, where the outputs of a process will be used in the inverse prediction for the unknown inputs. We use the multi-objective optimization method of Pareto fronts to generate multiple non-dominated designs with different emphases on the input and response space-filling criteria, from which the experimenter can choose. The methods are illustrated through several examples and a chemical engineering case study.
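The Pareto-front machinery can be sketched directly (assuming SciPy is available; the "current model" f below is an invented stand-in for prior knowledge of the input-response relationship): score each candidate design by its maximin distance in the input space and in the predicted response space, then retain the non-dominated set.

import numpy as np
from scipy.spatial.distance import pdist

def f(X):
    # Assumed toy input-response model (two responses from two inputs)
    return np.column_stack([X[:, 0]**2 + X[:, 1], np.exp(X[:, 0]) * X[:, 1]])

rng = np.random.default_rng(4)
designs = [rng.uniform(0, 1, size=(10, 2)) for _ in range(500)]   # random candidates
# Criterion pair (both maximized): minimum pairwise distance among design
# points in the input space and among their predicted responses.
scores = np.array([[pdist(D).min(), pdist(f(D)).min()] for D in designs])

pareto = [i for i, s in enumerate(scores)
          if not any((t >= s).all() and (t > s).any() for t in scores)]
print(len(pareto), "non-dominated designs out of", len(designs))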

9.
Most preset response surface methodology (RSM) designs offer ease of implementation and good performance over a wide range of process and design optimization applications. These designs often lack the ability to adapt to the characteristics of the application and the experimental space so as to reduce the number of experiments necessary. Hence, they are not cost‐effective for applications where the cost of experimentation is high or when the experimentation resources are limited. In this paper, we present an adaptive sequential response surface methodology (ASRSM) for industrial experiments with high experimentation cost, limited experimental resources, and high design optimization performance requirements. The proposed approach is a sequential adaptive experimentation approach that combines concepts from nonlinear optimization, design of experiments, and response surface optimization. The ASRSM uses the information gained from previous experiments to design the subsequent experiment by simultaneously reducing the region of interest and identifying factor combinations for new experiments. Its major advantage is experimentation efficiency: for a given response target, it identifies the input factor combination (or a region containing it) in fewer experiments than the classical single‐shot RSM designs. Through extensive simulated experiments and real‐world case studies, we show that the proposed ASRSM method outperforms the popular central composite design method and compares favorably with optimal designs. Copyright © 2012 John Wiley & Sons, Ltd.
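A deliberately simplified sequential loop in the spirit of such adaptive methods (a generic sketch, not the authors' ASRSM; the "true" process below is an assumed test function): fit a quadratic on a small design in the current region, step to its stationary point, and shrink the region of interest.

import numpy as np
from itertools import product

def true_response(X):                          # unknown process (assumed for the demo)
    return -((X[:, 0] - 0.6)**2 + (X[:, 1] + 0.3)**2)

def quad_matrix(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1*x2, x1**2, x2**2])

center, width = np.zeros(2), 1.0
for it in range(4):
    local = center + width * np.array(list(product([-1, 0, 1], repeat=2)))  # 3^2 design
    b, *_ = np.linalg.lstsq(quad_matrix(local), true_response(local), rcond=None)
    H = np.array([[2*b[4], b[3]], [b[3], 2*b[5]]])
    stat = np.linalg.solve(H, -b[1:3])         # stationary point of the fitted quadratic
    center = np.clip(stat, center - width, center + width)  # stay inside current region
    width *= 0.6                               # shrink the region of interest
    print(f"iter {it}: center={center.round(3)}, width={width:.2f}")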

10.
In this paper, minimax loss response surface designs are constructed. These designs are more robust to one missing design point than the original designs. The proposed designs are compared with designs in the literature and are better in terms of loss and number of runs. Moreover, the new suggestion for the value of α generates designs not only with smaller losses but also with higher D‐efficiency.
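The robustness computation can be sketched as follows: delete each run in turn and record the relative drop in |X'X|^(1/p), a D-efficiency-style proxy for the paper's loss. The CCD and the axial distance α are illustrative choices, not the paper's designs.

import numpy as np

def model_matrix(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1*x2, x1**2, x2**2])

a = 1.2                                        # axial distance alpha (assumed value)
ccd = np.array([[-1,-1],[1,-1],[-1,1],[1,1],
                [-a,0],[a,0],[0,-a],[0,a],[0,0],[0,0]])
F = model_matrix(ccd)
p = F.shape[1]
d_full = np.linalg.det(F.T @ F) ** (1 / p)

losses = []
for i in range(len(ccd)):                      # drop each design point once
    Fi = np.delete(F, i, axis=0)
    d_i = max(np.linalg.det(Fi.T @ Fi), 0.0) ** (1 / p)
    losses.append(1 - d_i / d_full)            # relative loss in the D-proxy
print("worst single-missing-run loss:", round(max(losses), 4))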

11.
Activities such as global sensitivity analysis, statistical effect screening, uncertainty propagation, or model calibration have become integral to the Verification and Validation (V&V) of numerical models and computer simulations. One of the goals of V&V is to assess prediction accuracy and uncertainty, which feeds directly into reliability analysis or the Quantification of Margin and Uncertainty (QMU) of engineered systems. Because these analyses involve multiple runs of a computer code, they can rapidly become computationally expensive. An alternative to Monte Carlo-like sampling is to combine a design of computer experiments with meta-modeling, and replace the potentially expensive computer simulation by a fast-running emulator. The surrogate can then be used to estimate sensitivities, propagate uncertainty, and calibrate model parameters at a fraction of the cost it would take to wrap a sampling algorithm or optimization solver around the physics-based code. Doing so, however, carries the risk of developing an incorrect emulator that erroneously approximates the “true-but-unknown” sensitivities of the physics-based code. We demonstrate the extent to which this occurs when Gaussian Process Modeling (GPM) emulators are trained in high-dimensional spaces using too-sparsely populated designs-of-experiments. Our illustration analyzes a variant of the Rosenbrock function in which several effects are made statistically insignificant while others are strongly coupled, thereby mimicking a situation that is often encountered in practice. In this example, the combination of a GPM emulator and design-of-experiments leads to an incorrect approximation of the function. A mathematical proof of the origin of the problem is proposed. The adverse effects that too-sparsely populated designs may produce are discussed for the coverage of the design space, estimation of sensitivities, and calibration of parameters. This work attempts to raise awareness of the potential dangers of not allocating enough resources when exploring a design space to develop fast-running emulators.
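The failure mode is easy to reproduce with off-the-shelf tools (scikit-learn is assumed available; the dimension and sample sizes are illustrative, and the plain Rosenbrock function stands in for the paper's variant): train a Gaussian process emulator on too few runs in a moderately high-dimensional space and check it against held-out truth.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def rosenbrock(X):
    return np.sum(100*(X[:, 1:] - X[:, :-1]**2)**2 + (1 - X[:, :-1])**2, axis=1)

rng = np.random.default_rng(5)
d = 6
X_train = rng.uniform(-2, 2, size=(20, d))    # deliberately too sparse for 6 dimensions
X_test = rng.uniform(-2, 2, size=(500, d))

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(np.ones(d)),
                              normalize_y=True).fit(X_train, rosenbrock(X_train))
resid = gp.predict(X_test) - rosenbrock(X_test)
print("relative RMSE:", round(np.sqrt((resid**2).mean()) / rosenbrock(X_test).std(), 2))
# A value near (or above) 1 means the emulator is no better than a constant.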

12.
Recently, the application of response surface methodology (RSM) to robust parameter design has attracted a great deal of attention. In some cases, experiments are very expensive and may require a great deal of time to perform. Central composite designs (CCDs) and Box–Behnken designs (BBDs), which are commonly used for RSM, may lead to an unacceptably large number of experimental runs. In this paper, a supersaturated design for RSM is constructed and its application to robust parameter design is proposed. A response surface model is fitted using data from the designed experiment and stepwise variable selection. An illustrative example is presented to show that the proposed method considerably reduces the number of experimental runs compared with CCDs and BBDs. Numerical experiments are also conducted in which type I and type II error rates are evaluated. The results imply that the proposed method may be effective for finding the effects (i.e. main effects, two‐factor interactions, and pure quadratic effects) of active factors under the ‘effect sparsity' assumption. Copyright © 2010 John Wiley & Sons, Ltd.
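The analysis half of the proposal can be sketched with a greedy forward-selection loop under effect sparsity (a simplified stand-in for a full stepwise procedure: the design here is a random three-level stand-in for a proper supersaturated design, and the active effects are simulated).

import numpy as np

rng = np.random.default_rng(6)
n, k = 15, 6
X = rng.choice([-1.0, 0.0, 1.0], size=(n, k))  # stand-in for a supersaturated design
terms = {f"x{i+1}": X[:, i] for i in range(k)}
terms |= {f"x{i+1}*x{j+1}": X[:, i]*X[:, j] for i in range(k) for j in range(i+1, k)}
terms |= {f"x{i+1}^2": X[:, i]**2 for i in range(k)}   # 27 candidate terms, 15 runs
y = 3*terms["x1"] - 2*terms["x4"] + 1.5*terms["x1*x4"] + rng.normal(0, 0.3, n)

selected, resid = [], y - y.mean()
for _ in range(4):                             # greedy forward steps
    def score(t):
        if t in selected:
            return -1.0
        c = terms[t] - terms[t].mean()         # center the candidate column
        return abs(c @ resid) / (np.linalg.norm(c) + 1e-12)
    selected.append(max(terms, key=score))
    F = np.column_stack([np.ones(n)] + [terms[t] for t in selected])
    beta, *_ = np.linalg.lstsq(F, y, rcond=None)
    resid = y - F @ beta
print("selected terms:", selected)             # the three active terms should rank high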

13.
Unreplicated designs are fairly common in industrial applications; however, there is resistance to their use in agricultural science. In the agriculture community, there is still a belief that lack of replication may prevent the experimenter from drawing useful conclusions. Nevertheless, sound statistical methods that permit valid comparisons in unreplicated studies are available for many types of designs. The objective of this paper is to present an analysis procedure for unreplicated designs combining typical characteristics found in industrial experimentation (factorial designs augmented with center points) and in agricultural applications (inclusion of control treatments and repeated measurements). We illustrate the method through a real experiment to evaluate the use of sugarcane by‐products in chicken diets. Specifically, it is an unreplicated two‐level factorial design with two additional runs (a center point and a control treatment), with experimental units measured in two periods of time. Replication was initially planned in the case study, but the actual treatment application led to an unreplicated design. The application of the proposed method allows interpretation of the data collected. We conclude that the appropriate use of unreplicated designs in agricultural and biological research may reduce overall costs and lessen the use of in vivo testing. Copyright © 2015 John Wiley & Sons, Ltd.
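One sound method of the kind alluded to above is Lenth's analysis for unreplicated two-level factorials, sketched below on a simulated 2^3 experiment (the design, effects, and noise are invented; the critical value is the conservative t-based one from Lenth's original proposal).

import numpy as np
from itertools import product

X = np.array(list(product([-1, 1], repeat=3)))            # unreplicated 2^3 design
A, B, C = X.T
y = 10 + 4*A - 3*B + 2*A*B + np.random.default_rng(7).normal(0, 0.5, 8)

contrasts = {"A": A, "B": B, "C": C, "AB": A*B, "AC": A*C, "BC": B*C, "ABC": A*B*C}
effects = {k: (v @ y) / 4 for k, v in contrasts.items()}  # effect = contrast / (N/2)

s0 = 1.5 * np.median(np.abs(list(effects.values())))
pse = 1.5 * np.median([abs(e) for e in effects.values() if abs(e) < 2.5 * s0])
ME = 3.9 * pse        # approx. t(0.975, m/3 = 7/3) * PSE, Lenth's margin of error
for name, e in effects.items():
    print(f"{name}: {e:+.2f}", "<- active" if abs(e) > ME else "")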

14.
15.
Exact G-optimal designs have rarely, if ever, been employed in practical applications. One reason for this is that, due to the computational difficulties involved, no statistical software system currently provides capabilities for constructing them. Two algorithms for exact G-optimal design construction of small designs involving one to three factors have been discussed in the literature: one employing a genetic algorithm and one employing a coordinate-exchange algorithm. However, these algorithms are extremely computer intensive even for small experiments and do not scale beyond two or three factors. In this article, we develop a new method for constructing exact G-optimal designs using the integrated variance criterion, Iλ-optimality. We show that, with careful selection of the weight function, a difficult exact G-optimal design construction problem can be converted to an equivalent exact Iλ-optimal design problem, which is easily and quickly solved. We illustrate the use of the algorithm for full quadratic models in one to five factors. The MATLAB codes used to implement our algorithm and the exact G-optimal designs produced by the algorithm for each test case are available online as supplementary material.
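The relationship the paper exploits can be seen in miniature (one factor, quadratic model, uniform weights; the paper's contribution is choosing a non-uniform weight function λ so that the I-optimal solution is also G-optimal): the G-criterion is the maximum scaled prediction variance over the region, while the I-criterion is a weighted average of it.

import numpy as np

def model_matrix(x):
    return np.column_stack([np.ones(len(x)), x, x**2])    # quadratic in one factor

design = np.array([-1.0, 0.0, 1.0])                       # candidate 3-run exact design
F = model_matrix(design)
Minv = np.linalg.inv(F.T @ F)

x = np.linspace(-1, 1, 2001)
G = model_matrix(x)
spv = len(design) * np.einsum('ij,jk,ik->i', G, Minv, G)  # scaled prediction variance
w = np.full(len(x), 1 / len(x))                           # uniform weights (lambda)
print("G-criterion (max SPV):     ", spv.max().round(3))  # 3.0 here: G-optimal
print("I-criterion (weighted SPV):", (w @ spv).round(3))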

16.
This paper explores the issue of model misspecification, or bias, in the context of response surface design problems involving quantitative and qualitative factors. New designs are proposed specifically to address bias and compared with five types of alternatives ranging from types of composite to D‐optimal designs using four criteria including D‐efficiency and measured accuracy on test problems. Findings include that certain designs from the literature are expected to cause prediction errors that practitioners would likely find unacceptable. A case study relating to the selection of science, technology, engineering, or mathematics majors by college students confirms that the expected substantial improvements in prediction accuracy using the proposed designs can be realized in relevant situations. Copyright © 2011 John Wiley & Sons, Ltd.
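The bias at issue can be demonstrated in a few lines (an illustration of the concept only, not the paper's designs or criteria): when a second-order model is fitted but the true response contains a cubic term, the design's point placement determines how much the missing term corrupts predictions.

import numpy as np

def quad_matrix(x):
    return np.column_stack([np.ones(len(x)), x, x**2])

def max_bias(design, beta3=1.0):
    # Expected prediction bias over [-1, 1] when the truth adds beta3 * x^3
    F = quad_matrix(design)
    alias = np.linalg.solve(F.T @ F, F.T @ design**3)     # alias vector (F'F)^-1 F' x^3
    grid = np.linspace(-1, 1, 401)
    bias = beta3 * (quad_matrix(grid) @ alias - grid**3)  # E[y_hat] - E[y]
    return np.abs(bias).max()

print("3-run design {-1, 0, 1}:      ", round(max_bias(np.array([-1.0, 0.0, 1.0])), 3))
print("4-run design {-1,-1/3,1/3,1}: ", round(max_bias(np.array([-1, -1/3, 1/3, 1.0])), 3))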

17.
Multi‐response optimization methods rely on empirical process models based on estimates of the model parameters that relate the response variables to a set of design variables. However, in determining the optimal conditions for the design variables, model uncertainty is typically neglected, resulting in an unstable optimal solution. This paper proposes a new optimization strategy that takes model uncertainty into account via the prediction region for multiple responses. To avoid obtaining an overly conservative design, the location and dispersion performances are constructed based on the best‐case strategy and the worst‐case strategy of expected loss. We show that the traditional loss function and the minimax/maximin strategy are both special cases of the proposed approach. An example is presented to illustrate the procedure and the effectiveness of the proposed loss function. The results show that the proposed approach can give reasonable results when both the location and dispersion performances are important issues. Copyright © 2015 John Wiley & Sons, Ltd.

18.
The preset response surface methodology (RSM) designs are commonly used in a wide range of process and design optimization applications. Although they offer ease of implementation and good performance, they are not sufficiently adaptive to reduce the required number of experiments and thus are not cost‐effective for applications with a high cost of experimentation. We propose an efficient adaptive sequential methodology based on optimal design and experiment ranking for response surface optimization (O‐ASRSM) for industrial experiments with high experimentation cost, limited experimental resources, and high design optimization performance requirements. The proposed approach combines concepts from optimal design of experiments, nonlinear optimization, and RSM. By using the information gained from previous experiments, O‐ASRSM designs the subsequent experiment by simultaneously reducing the region of interest and identifying factor combinations for new experiments. For a given response target, O‐ASRSM identifies the input factor combination in fewer experiments than the classical single‐shot RSM designs. We conducted extensive simulated experiments involving quadratic and nonlinear response functions. The results show that the O‐ASRSM method outperforms the popular central composite design, the Box–Behnken design, and the optimal designs, and is competitive with other sequential response surface methods in the literature. Furthermore, the results indicate that O‐ASRSM's performance is robust as the number of factors increases. Copyright © 2012 John Wiley & Sons, Ltd.

19.
Efficient estimation of response variables in a process is an important problem that requires experimental designs appropriate for each specific situation. When we have a system involving control and noise variables, we are often interested in the simultaneous optimization of the prediction variance of the mean (PVM) and the prediction variance of the slope (PVS). The goal of this simultaneous optimization is to construct designs that will result in the efficient estimation of important parameters. We construct new computer‐generated designs using a desirability function by transforming PVM and PVS into one desirability value that can be optimized using a genetic algorithm. Fraction of design space (FDS) plots are used to evaluate the new designs, and six cases are discussed to illustrate the procedure. Copyright © 2009 John Wiley & Sons, Ltd.
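A sketch of combining PVM and PVS into a single desirability score for a candidate design (the one-sided desirability transforms and their caps are assumed for illustration, not taken from the paper; a genetic algorithm would then maximize this score over candidate designs):

import numpy as np

def f(x):
    x1, x2 = x
    return np.array([1, x1, x2, x1*x2, x1**2, x2**2])     # quadratic model terms

def grad_f(x):
    x1, x2 = x
    return np.array([[0, 1, 0, x2, 2*x1, 0],              # d(terms)/d x1
                     [0, 0, 1, x1, 0, 2*x2]])             # d(terms)/d x2

rng = np.random.default_rng(8)
pts = rng.uniform(-1, 1, size=(2000, 2))                  # evaluation points

def desirability(design, pvm_cap=2.0, pvs_cap=6.0):
    F = np.array([f(x) for x in design])
    Minv = np.linalg.inv(F.T @ F)
    pvm = max(f(x) @ Minv @ f(x) for x in pts)            # worst variance of the mean
    pvs = max(np.trace(g @ Minv @ g.T) for g in map(grad_f, pts))  # worst slope variance
    d1 = max(0.0, 1 - pvm / pvm_cap)                      # assumed one-sided desirabilities
    d2 = max(0.0, 1 - pvs / pvs_cap)
    return np.sqrt(d1 * d2)                               # geometric mean, to be maximized

ccd = np.array([[-1,-1],[1,-1],[-1,1],[1,1],
                [-1,0],[1,0],[0,-1],[0,1],[0,0],[0,0]])
print("desirability of a face-centered CCD:", round(desirability(ccd), 3))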

20.
We study the design of two-level experiments with N runs and n factors large enough to estimate the interaction model, which contains all the main effects and all the two-factor interactions. Yet, an effect hierarchy assumption suggests that main effect estimation should be given more prominence than the estimation of two-factor interactions. Orthogonal arrays (OAs) favor main effect estimation. However, complete enumeration becomes infeasible for cases relevant for practitioners. We develop a partial enumeration procedure for these cases and we establish upper bounds on the D-efficiency for the interaction model based on arrays that have not been generated by the partial enumeration. We also propose an optimal design procedure that favors main effect estimation. Designs created with this procedure have smaller D-efficiencies for the interaction model than D-optimal designs, but standard errors for the main effects in this model are improved. Generated OAs for 7–10 factors and 32–72 runs are smaller or have a higher D-efficiency than the smallest OAs from the literature. Designs obtained with the new optimal design procedure or strength-3 OAs (which have main effects that are not correlated with two-factor interactions) are recommended if main effects unbiased by possible two-factor interactions are of primary interest. D-optimal designs are recommended if interactions are of primary interest. Supplementary materials for this article are available online.
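As a small illustration of the D-efficiency yardstick used here (the designs below are textbook examples, not the paper's generated arrays): the full 2^4 factorial estimates the interaction model with D-efficiency 1, while a resolution IV half fraction cannot estimate it at all because its two-factor interactions are aliased in pairs.

import numpy as np
from itertools import combinations, product

def interaction_matrix(X):
    n, k = X.shape
    inter = [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    return np.column_stack([np.ones(n), X] + inter)       # intercept, MEs, 2FIs

def d_efficiency(X):
    F = interaction_matrix(X)
    n, p = F.shape
    return max(np.linalg.det(F.T @ F / n), 0.0) ** (1 / p)   # 1.0 = orthogonal ideal

full = np.array(list(product([-1.0, 1.0], repeat=4)))     # 2^4 full factorial
half = full[full.prod(axis=1) == 1.0]                     # half fraction, I = ABCD
print("full 2^4  :", round(d_efficiency(full), 3))        # 1.0
print("half 2^4-1:", round(d_efficiency(half), 3))        # 0.0: model not estimable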
