Similar Articles (20 results)
1.
Finding a D‐optimal design for a split‐plot experiment requires knowledge of the relative size of the whole plot (WP) and sub‐plot error variances. Since this information is typically not known a priori, we propose an optimization strategy based on balancing performance across a range of plausible variance ratios. This approach provides protection against selecting a design which could be sub‐optimal if a single initial guess is incorrect. In addition, options for incorporating experimental cost into design selection are explored. The method uses Pareto front multiple criteria optimization to balance these objectives and allows the experimenter to understand the trade‐offs between several design choices and select one that best suits the goals of the experiment. We present new algorithms for populating the Pareto front for the split‐plot situation when the number of WPs is either fixed or flexible. We illustrate the method with a case study and demonstrate how considering robustness across variance ratios offers improved performance. The Pareto approach identifies multiple promising designs and allows the experimenter to understand trade‐offs between alternatives and to examine their robustness to different ways of combining the objectives. New graphical summaries for up to four criteria are developed to help guide improved decision‐making. Copyright © 2012 John Wiley & Sons, Ltd.
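As a rough illustration of scoring a split‐plot design across variance ratios, the sketch below evaluates the D‐criterion log|X′V⁻¹X| with V = I + d·ZZ′ for a tiny hypothetical 4‐run design. The design, model, and ratio values are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical 4-run split-plot design: 2 whole plots, 2 sub-plot runs each.
# Columns of X: intercept, whole-plot factor w, sub-plot factor s.
X = np.array([[1, -1, -1],
              [1, -1,  1],
              [1,  1, -1],
              [1,  1,  1]], dtype=float)
Z = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)  # WP membership

def d_criterion(X, Z, d):
    """log|X' V^-1 X| with V = I + d*ZZ', d = WP-to-sub-plot variance ratio."""
    V = np.eye(len(X)) + d * Z @ Z.T
    M = X.T @ np.linalg.solve(V, X)
    return np.linalg.slogdet(M)[1]

# Score the same design across a range of plausible variance ratios.
scores = {d: d_criterion(X, Z, d) for d in (0.1, 1.0, 10.0)}
```

For this design the criterion degrades smoothly as d grows; comparing such score curves for several candidate designs is one simple way to judge robustness to the unknown variance ratio.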

2.
In a decision‐making process, relying on only one objective can often lead to oversimplified decisions that ignore important considerations. Incorporating multiple, and likely competing, objectives is critical for balancing trade‐offs on different aspects of performance. When multiple objectives are considered, it is often hard to make a precise decision on how to weight the different objectives when combining their performance for ranking and selecting designs. We show that there are situations when selecting a design with near‐optimality for a broad range of weight combinations of the criteria is a better test selection strategy compared with choosing a design that is strictly optimal under very restricted conditions. We propose a new design selection strategy that identifies several top‐ranked solutions across broad weight combinations using layered Pareto fronts and then selects the final design that offers the best robustness to different user priorities. This method involves identifying multiple leading solutions based on the primary objectives and comparing the alternatives using secondary objectives to make the final decision. We focus on the selection of screening designs because they are widely used in industrial research, development, and operational testing. The method is illustrated with an example of selecting a single design from a catalog of designs of a fixed size. However, the method can be adapted to more general designed experiment selection problems that involve searching through a large design space.
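A minimal sketch of the weight‐robustness idea: sweep the weight in a two‐objective weighted sum and count how often each candidate design is top‐ranked. The three designs and their scores below are made up; the point is that a design that is never strictly best for extreme weights ("B") can still win for the broadest range of weights.

```python
# Made-up scores (higher is better) for three candidate designs on two
# objectives; the weighted-sum ranking is swept over weight w on objective 1.
designs = {"A": (0.90, 0.60), "B": (0.85, 0.85), "C": (0.55, 0.95)}

def best_for_weight(w):
    return max(designs, key=lambda k: w * designs[k][0] + (1 - w) * designs[k][1])

weights = [i / 10 for i in range(11)]
wins = {k: sum(best_for_weight(w) == k for w in weights) for k in designs}
robust_choice = max(wins, key=wins.get)  # design that is top-ranked most often
```

Here "A" and "C" each win only at extreme weights, while "B" is best across the broad middle range, so it would be the robust choice.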

3.
The familiar factorial, fractional factorial, and response surface designs are designs for regularly-shaped regions of interest, typically cuboidal regions and spherical regions. An irregularly shaped region of experimentation arises in situations where there are constraints on the factor level combinations that can be run or restrictions on portions of the region of exploration. Computer-generated designs based on some optimality criterion are a logical alternative for these problems. We give a brief tutorial on design optimality criteria and show how one of these, the D-optimality criterion, can lead to very reasonable designs for constrained regions of interest. We show through a simulation study that D-optimal designs perform very well with respect to the capability of selecting the correct model and accurately estimating the design factor levels that result in the optimal response.
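The D-criterion mentioned above can be sketched in a few lines: compute |X′X| for each candidate point set under an assumed model and prefer the larger value. The first-order model and the two illustrative point sets below are assumptions, not from the study.

```python
import numpy as np

# D-criterion |X'X| for a first-order model f(x) = (1, x1, x2);
# the candidate point sets are illustrative only.
def d_value(points):
    X = np.array([[1.0, x1, x2] for x1, x2 in points])
    return np.linalg.det(X.T @ X)

factorial = [(-1, -1), (1, -1), (-1, 1), (1, 1)]    # points spread to the corners
collinear = [(-1, -1), (0, 0), (0.5, 0.5), (1, 1)]  # points along a line
```

The factorial points give |X′X| = 64, while the collinear set is singular (determinant 0): the criterion rewards point sets that let all model coefficients be estimated precisely, which is why it adapts naturally to constrained regions.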

4.
Alphabetic optimality criteria, such as the D, A, and I criteria, require specifying a model to select optimal designs. They are not model‐free, and the designs obtained by them may not be robust. Recently, many extensions of the D and A criteria have been proposed for selecting robust designs with high estimation efficiency. However, approaches for finding robust designs with high prediction efficiency are rarely studied in the literature. In this paper, we propose a compound criterion and apply the coordinate‐exchange 2‐phase local search algorithm to generate robust designs with high estimation, high prediction, or balanced estimation and prediction efficiency for projective submodels. Examples demonstrate that the designs obtained by our method have better projection efficiency than many existing designs.

5.
We propose ‘low‐cost response surface methods’ (LCRSMs) that typically require half the experimental runs of standard response surface methods based on central composite and Box‐Behnken designs, but yield comparable or lower modeling errors under realistic assumptions. In addition, the LCRSMs have substantially lower modeling errors and greater expected savings compared with alternatives with comparable numbers of runs, including small composite designs and computer‐generated designs based on popular criteria such as D‐optimality. The LCRSM procedures appear to be the first experimental design methods derived as the solution to a simulation optimization problem. Together with modern computers, simulation optimization offers unprecedented opportunities for applying clear, realistic multicriterion objectives and assumptions to produce useful experimental design methods. We compare the proposed LCRSMs with alternatives based on six criteria. We conclude that the proposed methods offer attractive alternatives when the experimenter is considering dropping factors to use standard response surface methods or would like to perform relatively few runs and stop with a second‐order model. Copyright © 2002 John Wiley & Sons, Ltd.

6.
Subir Ghosh, Yun Shen. TEST (2006) 15(2): 485–504
We often assume the standard linear model with uncorrelated observations for comparison of designs without realizing a possible presence of correlation in observations. In this paper we present several change of variance functions, including the one given in Zhou (2001), for comparing designs in the presence of possible correlation in observations. We find a design by minimizing one of our proposed change of variance functions in a simple response surface setup. We then compare its performance with the all-variance design, the all-bias design, and the design making the average variance equal to the average squared bias. We also compare a second order rotatable design with a non-rotatable design. The rotatable design is better than the non-rotatable design with respect to A-, D-, and E-optimality criterion functions under the standard linear model with uncorrelated observations. We observe that the rotatable design may not perform better than the non-rotatable design with respect to the change of variance functions. We present some important properties of the change of variance functions. We find that A-optimum designs may perform poorly with respect to a change of variance function.

7.
When multiple responses are considered in process optimization, the degree to which they can be simultaneously optimized depends on the optimization objectives and the amount of trade‐offs between the responses. The normalized hypervolume of the Pareto front is a useful summary to quantify the amount of trade‐offs required to balance performance across the multiple responses. To quantify the impact of uncertainty in the estimated response surfaces and add realism about what to expect from future data, two versions of the scaled normalized hypervolume of the Pareto front are presented. To demonstrate the variation of the hypervolume distributions, we explore a case study for a chemical process involving three responses, each with a different type of optimization goal. Results show that the global normalized hypervolume characterizes the proximity to the ideal results possible, while the instance‐specific summary considers the richness of the front and the severity of trade‐offs between alternatives. The two scaling schemes complement each other and highlight different features of the Pareto front, and hence are useful to quantify what solutions are possible for simultaneous optimization of multiple responses.
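A minimal two‐objective sketch of the normalized hypervolume idea: with both responses scaled to [0, 1] and maximized, the hypervolume is the area dominated by the front relative to a reference point at the origin. The front below is made up for illustration.

```python
# Minimal 2-objective hypervolume sketch: both responses scaled to [0, 1]
# and maximized, reference point at the origin. The front below is made up.
front = [(0.2, 0.9), (0.5, 0.7), (0.8, 0.3)]

def hypervolume_2d(front):
    """Area of [0,1]^2 dominated by a non-dominated front (objective 2
    strictly decreasing once points are sorted by objective 1)."""
    hv, x_prev = 0.0, 0.0
    for x, y in sorted(front):
        hv += (x - x_prev) * y   # strip between consecutive x-values
        x_prev = x
    return hv

nhv = hypervolume_2d(front)  # already normalized: the ideal box has area 1
```

A value near 1 means little trade‐off is required to approach the ideal point; the paper's two scaling schemes differ in what box the raw hypervolume is normalized against.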

8.
In this research, we consider the maximization of process capability as the criterion in product/process design that is used for selecting preferred design factor levels and propose several approaches for single and multiple response performance measure designs. All of these approaches assume that the relationship between a process performance measure and a set of design factors is represented via an estimate of a response surface function. In particular, we develop: (i) criteria for selecting an optimal design, which we call MCpk and MCpm; (ii) mathematical programming formulations for maximizing MCpk and MCpm, including formulations for maximizing the desirability index (Harrington, 1965) and for maximizing the standardized performance criteria (Barton and Tsui, 1991) as special cases of the formulation for maximizing MCpk; (iii) formulations for considering cost when maximizing MCpk and MCpm; (iv) a means for assessing propagation of error; (v) a robust design method for assessing design factor effects on residual variance; (vi) a means for assessing the optimality of a proposed solution; and (vii) an original application in the screening of printed circuit board panels.
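As a hedged sketch of the desirability‐index style of criterion referenced above: each response is mapped to a desirability in [0, 1] and the overall index is their geometric mean. The "larger is better" shape, the two responses, and their limits below are invented, not Harrington's exact transformation or the paper's formulation.

```python
# Overall desirability = geometric mean of per-response desirabilities.
# Responses, limits, and the linear "larger is better" shape are invented.
def d_larger_is_better(y, lo, hi):
    """0 at or below lo, 1 at or above hi, linear in between."""
    return min(1.0, max(0.0, (y - lo) / (hi - lo)))

ds = [d_larger_is_better(78.0, 50.0, 100.0),  # e.g. a yield response
      d_larger_is_better(0.90, 0.0, 1.0)]     # e.g. a purity score
overall = 1.0
for d in ds:
    overall *= d
overall **= 1.0 / len(ds)
```

The geometric mean is deliberately unforgiving: if any single response is fully undesirable (d = 0), the overall index is 0 regardless of the others.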

9.
Mixed‐level designs are employed when factors with different numbers of levels are involved. Practitioners use mixed‐level fractional factorial designs as the total number of runs of the full factorial increases rapidly as the number of factors and/or the number of factor levels increases. One important decision is to determine which fractional designs should be chosen. A new criterion, the general balance metric (GBM), is proposed to evaluate and compare mixed‐level fractional factorial designs. The GBM measures the degree of balance for both main effects and interaction effects. This criterion is tied to, and dominates, orthogonality criteria as well as traditional minimum aberration criteria. Furthermore, the proposal is easy to use and has practical interpretations. As part of the GBM, the concept of resolution is generalized and the confounding structure of mixed‐level fractional factorial designs is also revealed. Moreover, the metric can also be used for the purpose of design augmentation. Examples are provided to compare this approach with existing criteria. Copyright © 2008 John Wiley & Sons, Ltd.

10.
In many experimental situations, practitioners are confronted with costly, time‐consuming, or hard‐to‐change (HTC) factors. These practical or economic restrictions on randomization can be accommodated with a split‐plot design structure that minimizes the manipulation of the HTC factors. Selecting a good design is a challenging task and requires knowledge of the opportunities and restrictions imposed by the experimental apparatus and an evaluation of statistical performance among competing designs. Building on the well‐established evaluation criteria for the completely randomized context, we emphasize the unique qualitative and quantitative evaluation criteria for split‐plot designs. An example from hypersonic propulsion research is used to demonstrate the consideration of multiple design evaluation criteria. Published in 2007 by John Wiley & Sons, Ltd.

11.
A design optimality criterion, tr (L)-optimality, is applied to the problem of designing two-level multifactor experiments to detect the presence of interactions among the controlled variables. We give rules for constructing tr (L)-optimal foldover designs and tr (L)-optimal fractional factorial designs. Some results are given on the power of these designs for testing the hypothesis that there are no two-factor interactions. Augmentation of the tr (L)-optimal designs produces designs that achieve a compromise between the criteria of D-optimality (for parameter estimation in a first-order model) and tr (L)-optimality (for detecting lack of fit). We give an example to demonstrate an application to the sensitivity analysis of a computer model.

12.
Robust design, axiomatic design, and reliability‐based design provide effective approaches to deal with quality problems, and their integration will achieve better quality improvement. An integrated design optimization framework combining robust design, axiomatic design, and reliability‐based design is proposed in this paper. First, the fitted response model of each quality characteristic is obtained by response surface methodology and the mean square error (MSE) estimation is given by a second‐order Taylor series approximation expansion. Then the multiple quality characteristics robust design model is developed using the MSE criterion. Finally, the independence axiom constraints for decoupling and reliability constraints are integrated into the multiple quality characteristics robust design model, and the integrated design optimization framework is formulated, where the weighted Tchebycheff approach is adopted to solve the multiple objective programming. An illustrative example is presented at the end, and the results show that the proposed approach can obtain better trade‐offs among conflicting quality characteristics, variability, coupling degree, and reliability requirements. Copyright © 2011 John Wiley & Sons, Ltd.
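The weighted Tchebycheff scalarization mentioned above can be sketched in a few lines: each candidate is scored by its largest weighted deviation from the ideal point, and the candidate with the smallest such score is selected. The ideal point, candidates, and weights below are made up for illustration.

```python
# Weighted Tchebycheff scalarization for a two-objective minimization:
# pick the candidate minimizing the largest weighted distance to the
# ideal point z*. Candidates and weights are illustrative.
def tchebycheff(f, z_star, w):
    return max(wi * abs(fi - zi) for wi, fi, zi in zip(w, f, z_star))

z_star = (0.0, 0.0)
candidates = {"x1": (0.2, 0.8), "x2": (0.5, 0.4), "x3": (0.9, 0.1)}
w = (0.5, 0.5)
best = min(candidates, key=lambda k: tchebycheff(candidates[k], z_star, w))
```

Unlike a plain weighted sum, minimizing the worst weighted deviation can reach Pareto‐optimal points in non‐convex regions of the front, which is a common reason for choosing this scalarization.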

13.
A historically common choice for evaluating response surface designs is to use alphabetic optimality criteria. Single‐number criteria such as D, A, G, and V optimality do not completely reflect the estimation or prediction variance characteristics of the designs in question. For prediction‐based assessment, alternatives to single‐number summaries include graphical displays of the prediction variance across the design regions. Variance dispersion graphs, fraction of design space plots, and quantile plots have been suggested to evaluate the overall prediction capability of response surface designs. The quantile plots use the percentiles of the distribution at a given radius instead of just the mean, maximum, and minimum prediction variance values on concentric spheres inside the region of interest. Previously, the user had to select several values of radius and draw corresponding quantile plots to evaluate the overall prediction capability of response surface designs. The user‐specified choice of radii to examine makes the plot somewhat subjective. Alternatively, we propose to remove this subjectivity by using a three‐dimensional quantile plot. As another extension of the quantile plots, we suggest dynamic quantile plots to animate the quantile plots and use them for comparing and evaluating response surface designs. Copyright © 2011 John Wiley & Sons, Ltd.
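A sketch of the quantity behind such plots: the scaled prediction variance N·f(x)′(X′X)⁻¹f(x), summarized by quantiles over points on a circle of fixed radius. The 5‐run first‐order design below is an invented example, not one from the article.

```python
import numpy as np

# 2^2 factorial plus a center run; first-order model f(x) = (1, x1, x2).
X = np.array([[1, -1, -1], [1, 1, -1], [1, -1, 1], [1, 1, 1], [1, 0, 0]], float)
XtX_inv = np.linalg.inv(X.T @ X)

def spv(x1, x2):
    """Scaled prediction variance N f(x)' (X'X)^{-1} f(x)."""
    f = np.array([1.0, x1, x2])
    return len(X) * f @ XtX_inv @ f

def spv_quantiles(radius, n=1000, qs=(0.05, 0.5, 0.95)):
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    vals = [spv(radius * np.cos(t), radius * np.sin(t)) for t in theta]
    return np.quantile(vals, qs)

q_r1 = spv_quantiles(1.0)
```

This particular design is rotatable for the first‐order model, so the SPV is constant on each circle (1 + 5r²/4, i.e. 2.25 at r = 1) and all quantiles coincide; for non‐rotatable designs the quantiles at a radius spread apart, and that spread is exactly what quantile plots display.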

14.
This article discusses the benefits of different infill sampling criteria used in surrogate-based constrained global optimization. A new method which selects multiple updates based on Pareto optimal solutions is introduced showing improvements over a number of existing methods. The construction of surrogates (also known as meta-models or response surface models) involves the selection of a limited number of designs which are analysed using the original expensive functions. A typical approach involves two stages. First the surrogate is built using an initial sampling plan; the second stage updates the model using an infill sampling criterion to select further designs that offer improvement. Selecting multiple update points at each iteration, allowing distribution of the expensive function evaluations on several processors, offers large potential for accelerating the overall optimization process. This article provides a comparison between different infill sampling criteria suitable for selecting multiple update points in the presence of constraints.

15.
Prediction variance properties for completely randomized designs (CRD) are fairly well covered in the response surface literature for both spherical and cuboidal designs. This paper evaluates the impact of changes in the variance ratio on the prediction properties of second‐order split‐plot designs (SPD). It is shown that the variance ratio not only influences the value of the G‐criterion but also its location, in contrast with the G‐criterion tendencies in CRD. An analytical method, rather than a heuristic optimization algorithm, is used to compute the prediction variance properties, which include the maximum, minimum, and integrated variances for second‐order SPD. The analytical equations are functions of the design parameters, radius, and variance ratio. As a result, the exact values for these quantities are reported along with the location of the maximum prediction variance used in the G‐criterion. The two design spaces of the whole plot and the subplot are studied and as a result, relative efficiency values for these distinct spaces are suggested. Copyright © 2008 John Wiley & Sons, Ltd.

16.
Reversing plus and minus signs of one or more factors is the traditional method to fold over two‐level fractional factorial designs. However, when factors in the original design have more than two levels, the method of ‘reversing signs’ loses its efficacy. This article develops a mechanism to fold over designs involving factors with different numbers of levels, that is, mixed‐level designs. By exhaustive search we identify the optimal foldover plans. The criterion used is the general balance metric, which can reveal the aberration properties of the combined designs (original design plus foldover). The optimal foldovers for some efficient mixed‐level fractional factorial designs are provided for practical use. Copyright © 2008 John Wiley & Sons, Ltd.
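The traditional two‐level foldover that the article generalizes can be sketched directly: reverse every sign in the fraction and append the reversed runs. The fraction below is a standard 2^(3−1) with the assumed generator C = AB, chosen here only to illustrate the mechanism.

```python
from itertools import product

# Classical full foldover of a two-level fraction: reverse every sign and
# append. The 2^(3-1) fraction below uses the (assumed) generator C = AB.
base = [(a, b, a * b) for a, b in product((-1, 1), repeat=2)]
foldover = [tuple(-x for x in run) for run in base]   # reverse all signs
combined = base + foldover

# The 8 combined runs recover the full 2^3 factorial, breaking the alias.
is_full = sorted(set(combined)) == sorted(product((-1, 1), repeat=3))
```

For mixed‐level factors there is no "minus" level to reverse, which is exactly why the article must replace sign reversal with level permutations evaluated under the general balance metric.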

17.
Since their introduction by Box and Hunter, resolution criteria have been widely used when comparing regular fractional factorial designs. In this article, we investigate how a generalized resolution criterion can be used to assess some recently developed three-level screening designs, such as definitive screening designs (DSDs) and screening designs from weighing matrices. The aim of this paper is to capture the projection properties of those three-level screening designs, complementing the work of Deng and Tang, who used generalized resolution and minimum aberration criteria for ranking different two-level designs, particularly Plackett-Burman and other nonregular factorial designs. An advantage of generalized resolution, extended here to work on three-level designs, is that it offers a useful criterion for ranking three-level screening designs, whereas the Deng and Tang resolution is used mainly for the assessment of two-level designs. In addition, we applied a projection estimation capacity (PEC) criterion to select three-level screening designs with desirable properties. Practical examples and the best projections of the designs are presented in tables.

18.
This paper explores the issue of model misspecification, or bias, in the context of response surface design problems involving quantitative and qualitative factors. New designs are proposed specifically to address bias and compared with five types of alternatives ranging from types of composite to D‐optimal designs using four criteria including D‐efficiency and measured accuracy on test problems. Findings include that certain designs from the literature are expected to cause prediction errors that practitioners would likely find unacceptable. A case study relating to the selection of science, technology, engineering, or mathematics majors by college students confirms that the expected substantial improvements in prediction accuracy using the proposed designs can be realized in relevant situations. Copyright © 2011 John Wiley & Sons, Ltd.

19.
EEG/MEG source localization requires a subject's brain MRI to compute the sourcemodel and headmodel. As part of this computation, co-registration of the digitized head information and brain MRI scan is the essential step. However, in the absence of a brain MRI scan, an approximated sourcemodel and headmodel can be computed from the subject's digitized head information and brain MRI scans from other subjects. In the present work, we compared the fiducial (FID)- and iterative closest point (ICP)-based co-registration approaches for computing an approximated sourcemodel using single and multiple available brain MRI scans. We also evaluated two different template MRI selection strategies: one based on objective registration error, and another on sourcemodel approximation error. The outcome suggests that averaged approximated solutions using multiple template brain MRI scans showed better performance than single-template MRI-based solutions. The FID-based approach performed better than the ICP-based approach for co-registration of the digitized head surface and brain MRI scan. While selecting template MRIs, the selection approach based on objective registration error showed better performance than a sourcemodel approximation error-based criterion. Cross-dataset performance analysis showed a higher model approximation error than within-dataset analysis. In conclusion, the FID-based co-registration approach and objective registration error-based MRI selection criteria provide a simple, fast, and more accurate solution to compute averaged approximated models compared with the ICP-based approach. The demographics of the template brain MRI scans should be similar to those of the query subject whose brain MRI scan is unavailable.

20.
Frequency‐based designs are presented for exploring large numbers of factors in simulation experiments. This approach yields completely orthogonal full second‐order space‐filling designs. We describe how they are generated, explore their space‐filling properties, and compare their performance to other designs of similar sizes. We illustrate their use for test planning on a simulation model of a live counter‐IED (improvised explosive device) test event and present some ideas about ways in which simulation experiments can be used to support planning for live tests.
