Similar Documents
20 similar documents found.
1.
Prototype testing and experimentation play a key role in the development of new products. It is common practice to build a single prototype product and then test it at specified operating conditions. It is often beneficial, however, to make several variants of a prototype according to a fractional factorial design. The information obtained can be important in comparing design options and improving product performance and quality. In such experiments the response of interest is often not a single number but a performance curve over the test conditions. In this article we develop a general method for the design and analysis of prototype experiments that combines orthogonal polynomials with two-level fractional factorials. The proposed method is simple to use and has wide applicability. We explain our ideas by reference to an experiment reported by Taguchi on carbon monoxide exhaust of combustion engines. We then apply them to an experiment on a prototype fluid-flow controller.
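
The abstract above describes combining two-level fractional factorials with orthogonal polynomials fitted to each prototype's performance curve. As a rough, hypothetical illustration of that general idea (not the authors' exact procedure, and not their engine-exhaust or fluid-flow data), the sketch below builds a 2^(4-1) fraction of prototype variants, fits a low-order Legendre polynomial to each simulated curve, and estimates factorial effects on each polynomial coefficient.

```python
# Sketch: fractional factorial prototypes x orthogonal-polynomial curve responses.
# Hypothetical data; not the engine-exhaust or fluid-flow data from the article.
import numpy as np

# 2^(4-1) fractional factorial (resolution IV), generator D = ABC
base = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])
design = np.column_stack([base, base[:, 0] * base[:, 1] * base[:, 2]])  # A B C D=ABC

# Test conditions at which each prototype's performance curve is measured
x = np.linspace(-1, 1, 9)

# Simulated performance curves (stand-in for real prototype measurements)
rng = np.random.default_rng(0)
curves = np.array([
    1.0 + 0.5 * row[0] + (0.8 + 0.3 * row[1]) * x + 0.2 * row[2] * x**2
    + rng.normal(0, 0.05, x.size)
    for row in design
])

# Fit a degree-2 Legendre (orthogonal) polynomial to each prototype's curve
coefs = np.array([np.polynomial.legendre.legfit(x, y, 2) for y in curves])

# Treat each polynomial coefficient as a response: estimated factorial effect =
# mean of the coefficient at the +1 level minus the mean at the -1 level
for j, name in enumerate("ABCD"):
    hi = coefs[design[:, j] == 1].mean(axis=0)
    lo = coefs[design[:, j] == -1].mean(axis=0)
    print(name, "effects on (const, linear, quadratic) coefficients:", hi - lo)
```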

2.
Bisgaard investigated the reduction of the effective resolution of two-level fractional factorial experiments caused by blocking. He did not consider experiments such as the expansible sequences of Addelman. Addelman did not consider the confounding due to blocking. This article considers the block confounding accompanying expansible sequences. Strategies are described for designing experiments as expansible sequences of orthogonal blocks under conditions of crossed-classification block effects of differing types. Rules are given for identifying, at each stage of the expansible options, which aliased sets of full-model coefficients are confounded with which block-effect parameters. The principles are illustrated with a sequence, one stage of which resembles Bisgaard's 2^(6-2) experiment.
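
For readers unfamiliar with block confounding, the following minimal sketch (a textbook example, not the paper's expansible-sequence construction) splits a 2^3 full factorial into two blocks of four by confounding the ABC interaction with blocks.

```python
# Minimal sketch of block confounding in a two-level factorial (not the paper's
# expansible-sequence construction): split a 2^3 design into two blocks of four
# by confounding the ABC interaction with the block effect.
import numpy as np
from itertools import product

runs = np.array(list(product((-1, 1), repeat=3)))   # full 2^3 design, columns A B C
abc = runs.prod(axis=1)                             # defining contrast ABC

block1 = runs[abc == 1]    # block I:  runs with ABC = +1
block2 = runs[abc == -1]   # block II: runs with ABC = -1

print("Block I:\n", block1)
print("Block II:\n", block2)
# Any block-to-block difference is indistinguishable from the ABC effect,
# i.e. ABC is confounded with blocks; main effects and 2fi's remain estimable.
```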

3.
Ensemble methods are proposed as a means to extend Adaptive One-Factor-at-a-Time (aOFAT) experimentation. The proposed method executes multiple aOFAT experiments on the same system with minor differences in experimental setup, such as 'starting points'. Experimental conclusions are arrived at by aggregating the multiple, individual aOFATs. A comparison is made to test the performance of the new method against that of a traditional form of experimentation, namely a single fractional factorial design that is equally resource intensive. The comparisons between the two experimental algorithms are conducted using a hierarchical probability meta-model and an illustrative case study. The case is a wet clutch system with the goal of minimizing drag torque. In this study, the proposed procedure consistently outperformed fractional factorial arrays across various experimental settings. At best, the proposed algorithm provides an expected value of improvement that is 15% higher than the traditional approach; at worst, the two methods are equally effective, and on average the improvement is about 10% higher with the new method. These findings suggest that running multiple adaptive experiments in parallel can be an effective way to improve the quality and performance of engineering systems; the method also provides a reasonable aggregation procedure by which to bring together the results of the many separate experiments. Copyright © 2011 John Wiley & Sons, Ltd.
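
The sketch below illustrates the basic ensemble-of-aOFAT idea on a toy two-level objective. The clutch-drag case study, the hierarchical probability meta-model, and the article's exact aggregation rule are not reproduced; the per-factor majority vote used here is only one plausible aggregation.

```python
# Sketch of an ensemble of adaptive one-factor-at-a-time (aOFAT) searches on a
# toy two-level objective, aggregated by a simple per-factor majority vote.
import numpy as np

rng = np.random.default_rng(1)
k = 6  # number of two-level factors

def objective(x, noise=0.5):
    """Toy response to be maximized; stands in for e.g. negative drag torque."""
    true_effects = np.array([2.0, -1.5, 1.0, 0.5, -0.25, 0.1])
    return true_effects @ x + 0.8 * x[0] * x[1] + rng.normal(0, noise)

def aofat(start):
    """Adaptive OFAT: toggle one factor at a time, keep a change if it helps."""
    x = start.copy()
    best = objective(x)
    for j in range(k):
        x[j] *= -1
        y = objective(x)
        if y > best:
            best = y
        else:
            x[j] *= -1   # revert the toggle
    return x

# Run several aOFATs from random starting points and aggregate by majority vote
starts = rng.choice([-1, 1], size=(8, k))
finals = np.array([aofat(s) for s in starts])
consensus = np.sign(finals.sum(axis=0))   # ties, if any, show up as 0
print("Per-aOFAT recommendations:\n", finals)
print("Majority-vote setting:", consensus)
```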

4.
Two-level factorial designs in blocks of size two are useful in a variety of experimental settings, including microarray experiments. Replication is typically used to allow estimation of the relevant effects, but when the number of factors is large this common practice can result in designs with a prohibitively large number of runs. One alternative is to use a design with fewer runs that allows estimation of both main effects and two-factor interactions. Such designs are available for full factorial experiments, though they may still require a great many runs. In this article, we develop fractional factorial designs in blocks of size two when the number of factors is less than nine, using just half of the runs needed for the designs given by Kerr (J. Qual. Technol. 2006; 38: 309–318). Two approaches, the orthogonal array approach and the generator approach, are utilized to construct our designs. Analysis of the resulting experimental data from the suggested designs is also given. Copyright © 2011 John Wiley & Sons, Ltd.

5.
This paper considers an experimentation strategy when resource constraints permit only a single design replicate per time interval and one or more design variables are hard to change. The experimental designs considered are two-level full-factorial or fractional-factorial designs run as balanced split plots. These designs are common in practice and appropriate for fitting a main-effects-plus-interactions model, while minimizing the number of times the whole-plot treatment combination is changed. Depending on the postulated model, single replicates of these designs can result in the inability to estimate error at the whole-plot level, suggesting that formal statistical hypothesis testing on the whole-plot effects is not possible. We refer to these designs as balanced two-level whole-plot saturated split-plot designs. In this paper, we show that, for these designs, it is appropriate to use ordinary least squares to analyze the subplot factor effects at the 'intermittent' stage of the experiments (i.e., after a single design replicate is run); however, formal inference on the whole-plot effects may or may not be possible at this point. We exploit the sensitivity of ordinary least squares in detecting whole-plot effects in a split-plot design and propose a data-based strategy for determining whether to run an additional replicate following the intermittent analysis or whether to simply reduce the model at the whole-plot level to facilitate testing. The performance of the proposed strategy is assessed using Monte Carlo simulation. The method is then illustrated using wind tunnel test data obtained from a NASCAR Winston Cup Chevrolet Monte Carlo stock car. Copyright © 2012 John Wiley & Sons, Ltd.
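
As a small illustration of the intermittent-stage analysis described above, the sketch below fits ordinary least squares to a single replicate of a balanced two-level split-plot design with one whole-plot factor. The data, factor names, and effect sizes are invented, and the paper's data-based decision rule is not reproduced.

```python
# Sketch: ordinary least squares on a single replicate of a balanced two-level
# split-plot design (one hard-to-change whole-plot factor W, two subplot factors
# A and B).  With a single replicate there are no whole-plot error degrees of
# freedom, so only the subplot effects can be formally tested at this stage.
import numpy as np
from itertools import product

rng = np.random.default_rng(2)

rows = []
for w in (-1, 1):                      # whole plots (W is changed only twice)
    wp_err = rng.normal(0, 1.0)        # whole-plot error, shared within the plot
    for a, b in product((-1, 1), repeat=2):
        y = 3 * w + 2 * a - 1 * b + 0.5 * a * b + wp_err + rng.normal(0, 0.3)
        rows.append((w, a, b, y))
data = np.array(rows)

W, A, B, y = data[:, 0], data[:, 1], data[:, 2], data[:, 3]
X = np.column_stack([np.ones_like(W), W, A, B, A * B, W * A, W * B])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, b in zip(["I", "W", "A", "B", "AB", "WA", "WB"], beta):
    print(f"{name:>2}: {b: .3f}")
# The W estimate absorbs the whole-plot error; with one replicate there is no
# whole-plot error estimate against which to judge its significance.
```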

6.
Continuous improvement of the quality of industrial products is an essential factor in modern-day manufacturing. The investigation of those factors that affect process mean and process dispersion (standard deviation) is an important step in such improvements. Most often, experiments are executed for such investigations. To detect mean factors, I use the usual analysis of variance on the experimental data. However, there is no unified method to identify dispersion factors. In recent years several methods have been proposed for identifying such factors with two levels. Multilevel factors, especially three-level factors, are common in industrial experiments, but we lack methods for identifying dispersion effects in multilevel factors. In this paper, I develop a method for identifying dispersion effects from general fractional factorial experiments. This method consists of two stages. The first stage involves the identification of mean factors, using the performance characteristic as the response. The second stage involves the computation of a dispersion measure and the identification of dispersion factors, using the dispersion measure as the response. The sequence for identifying dispersion factors is first to test the significance of the total dispersion effect of a factor and then to test the dispersion contrasts of interest, a procedure similar to typical post hoc testing in ANOVA. This familiar approach should be appealing to practitioners. Copyright © 2001 John Wiley & Sons, Ltd.
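
The following sketch illustrates the generic two-stage idea (a mean model first, then a dispersion measure as the response) on a simulated replicated 2^3 experiment. It uses the log within-run sample variance as the dispersion measure, which is an assumption for illustration; the paper's specific dispersion measure, contrasts, and tests are not reproduced.

```python
# Sketch of a generic two-stage mean/dispersion analysis for a replicated 2^3
# factorial: stage 1 looks at mean effects, stage 2 uses the log within-run
# sample variance as a dispersion response.
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
design = np.array(list(product((-1, 1), repeat=3)))      # 2^3 design, factors A B C
reps = 4

# Simulated data: A shifts the mean, C inflates the spread
y = np.array([
    [5 + 2 * a + rng.normal(0, 1.5 if c == 1 else 0.5) for _ in range(reps)]
    for a, b, c in design
])

# Stage 1: mean effects (difference of run averages between factor levels)
means = y.mean(axis=1)
# Stage 2: dispersion measure = log of the within-run sample variance
disp = np.log(y.var(axis=1, ddof=1))

for j, name in enumerate("ABC"):
    hi, lo = design[:, j] == 1, design[:, j] == -1
    print(f"{name}: mean effect = {means[hi].mean() - means[lo].mean(): .2f}, "
          f"dispersion effect = {disp[hi].mean() - disp[lo].mean(): .2f}")
```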

7.
In many industrial experiments there are restrictions on the resource (or cost) required for performing the runs in a response surface design. This will require practitioners to choose some subset of the candidate set of experimental runs. The appropriate selection of design points under resource constraints is an important aspect of multi-factor experimentation. A well-planned experiment should consist of factor-level combinations selected such that the resulting design will have desirable statistical properties but the resource constraints should not be violated or the experimental cost should be minimized. The resulting designs are referred to as cost-efficient designs. We use a genetic algorithm for constructing cost-constrained G-efficient second-order response surface designs over cuboidal regions when an experimental cost at a certain factor level is high and a resource constraint exists. Consideration of practical resource (or cost) restrictions and different cost structures will provide valuable information for planning effective and economical experiments when optimizing statistical design properties. Copyright © 2005 John Wiley & Sons, Ltd.

8.
An analogue of the Box-Hunter rotatability property for second order response surface designs in k independent variables is presented. When such designs are used to estimate the first derivatives with respect to each independent variable, the variance of the estimated derivative is a function of the coordinates of the point at which the derivative is evaluated and is also a function of the design. By choice of design it is possible to make this variance constant for all points equidistant from the design origin. This property is called slope-rotatability by analogy with the corresponding property for the variance of the estimated response, ŷ.

For central composite designs slope-rotatability can be achieved simply by adjusting the axial point distances (α), so that the variance of the pure quadratic coefficients is one-fourth the variance of the mixed second order coefficients. Tables giving appropriate values of α have been constructed for 2 ≤ k ≤ 8. For 5 ≤ k ≤ 8 central composite designs involving fractional factorials are used. It is also shown that appreciable advantage is gained by replicating axial points rather than confining replication to the center point only.
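
The condition stated above, that the variance of the pure quadratic coefficients be one-fourth the variance of the mixed second-order coefficients, can be checked numerically. The sketch below scans the axial distance α for a k = 3 central composite design with a single centre point (an assumption made for the sketch); the article's tables remain the authoritative values.

```python
# Numerical sketch: scan the axial distance alpha of a k = 3 central composite
# design (one centre point assumed) for the slope-rotatability condition
# var(pure quadratic coef) = (1/4) * var(mixed second-order coef).
import numpy as np
from itertools import product

def ccd(k, alpha, n_center=1):
    cube = np.array(list(product((-1.0, 1.0), repeat=k)))
    axial = np.vstack([s * alpha * np.eye(k)[i] for i in range(k) for s in (-1, 1)])
    return np.vstack([cube, axial, np.zeros((n_center, k))])

def variance_ratio(k, alpha):
    D = ccd(k, alpha)
    cols = [np.ones(len(D))] + [D[:, i] for i in range(k)]
    cols += [D[:, i] * D[:, j] for i in range(k) for j in range(i + 1, k)]
    cols += [D[:, i] ** 2 for i in range(k)]
    X = np.column_stack(cols)
    C = np.linalg.inv(X.T @ X)        # coefficient covariances / sigma^2
    var_mixed = C[1 + k, 1 + k]       # a two-factor-interaction coefficient
    var_quad = C[-1, -1]              # a pure quadratic coefficient
    return var_quad / var_mixed

k = 3
alphas = np.linspace(1.0, 2.5, 1501)
ratios = np.array([variance_ratio(k, a) for a in alphas])
idx = np.argmin(np.abs(ratios - 0.25))
print(f"k = {k}: ratio closest to 0.25 is {ratios[idx]:.3f} at alpha = {alphas[idx]:.3f}")
```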

9.
This paper presents a multi-rate state-space control scheme for digital control of a cascaded continuous-time system with fractional time delays. First, a discrete-time state-space representation of a continuous-time system with a fractional input delay is established. Based on this time-delay digital model, an ideal state reconstructor is also presented such that system states are exactly reconstructed from the measurement histories of inputs and outputs without a state observer. Next, a time-delay subsystem (designated subsystem 1) with a fast sampling rate is designed to form the inner loop of the overall system, and the designed closed-loop subsystem 1 is then cascaded with a time-delay open-loop subsystem 2 with a slow sampling rate. A digital model of the time-delay open-loop subsystem 2, based on fast-rate sampling, is also formed to obtain the digital model of the overall cascaded continuous-time system by using the block-pulse function approximation. The fast-rate overall system is then converted into a slow-rate model via the newly developed model-conversion technique. Furthermore, subsystem 2 is separated from the slow-rate overall system via a linear transformation to achieve a reduced-order subsystem design. As a consequence, a digital control law is determined to meet specific goals for the overall system. The proposed method is suitable for digital control of a multivariable, multi-rate, time-delay system in which the state variables are not accessible.

10.
In experimental situations where observation loss is common, it is important for a design to be robust against breakdown. For incomplete block designs, with one treatment factor and a single blocking factor, conditions for connectivity and robustness are developed using the concepts of treatment and block partitions, and of linking blocks. Lower bounds are given for the block breakdown number in terms of parameters of the design and its support. The results provide guidance for construction of designs with good robustness properties.

11.
Design of experiments is a quality technology to achieve product excellence, that is, to achieve high quality at low cost. It is a tool to optimize product and process designs, to accelerate the development cycle, to reduce development costs, to improve the transition of products from R & D to manufacturing and to troubleshoot manufacturing problems effectively. It has been successfully, but sporadically, used in the United States. More recently, it has been identified as a major technological reason for the success of Japan in producing high-quality products at low cost. In the United States, the need for increased competitiveness and the emphasis on quality improvement demand widespread use of design of experiments by engineers, scientists and quality professionals. In the past, such widespread use has been hampered by a lack of proper training and a lack of availability of tools to easily implement design of experiments in industry. Three steps are essential, and are being taken, to change this situation dramatically. First, simple graphical methods to design and analyse experiments need to be developed, particularly when the necessary microcomputer resources are not available. Second, engineers, scientists and quality professionals must have access to microcomputer-based software for design and analysis of experiments.1 Availability of such software would allow users to concentrate on the important scientific and engineering aspects of the problem by computerizing the necessary statistical expertise. Finally, since a majority of the current workforce is expected to be working in the year 2000, a massive training effort, based upon simple graphical methods and appropriate computer software, is necessary.2 The purpose of this paper is to describe a methodology, based upon a new graphical method called interaction graphs and other previously known techniques, to simplify the correct design of practically important fractional factorial experiments. The essential problem in designing a fractional factorial experiment is first stated. The interaction graph for a 16-trial fractional factorial design is given to illustrate how the graphical procedure can be easily used to design a two-level fractional factorial experiment. Other previously known techniques are described to easily modify the two-level fractional factorial designs to create mixed multi-level designs. Interaction graphs for other practically useful fractional factorial designs are provided. A computer package called CADE (computer aided design of experiments), which automatically generates the appropriate fractional factorial designs based upon user specifications of factors, levels and interactions and conducts complete analyses of the designed experiments, is briefly described.1 Finally, the graphical method is compared with other available methods for designing fractional factorial experiments.

12.
Reversing the plus and minus signs of one or more factors is the traditional method for folding over two-level fractional factorial designs. However, when factors in the original design have more than two levels, the method of 'reversing signs' loses its efficacy. This article develops a mechanism to fold over designs involving factors with different numbers of levels, that is, mixed-level designs. By exhaustive search we identify the optimal foldover plans. The criterion used is the general balance metric, which can reveal the aberration properties of the combined designs (original design plus foldover). The optimal foldovers for some efficient mixed-level fractional factorial designs are provided for practical use. Copyright © 2008 John Wiley & Sons, Ltd.
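
For context, the sketch below shows the traditional sign-reversing foldover that the article generalizes, applied to a 2^(3-1) resolution III fraction; the general balance metric and the mixed-level foldover search themselves are not reproduced.

```python
# Sketch of the traditional fold-over by reversing signs, applied to a 2^(3-1)
# resolution III fraction (the baseline method the article generalizes).
import numpy as np
from itertools import product

base = np.array(list(product((-1, 1), repeat=2)))            # columns A, B
frac = np.column_stack([base, base[:, 0] * base[:, 1]])      # C = AB  -> 2^(3-1), res III

folded = np.vstack([frac, -frac])   # full foldover: reverse the signs of every factor

# In the combined 8-run design each main effect is orthogonal to (no longer
# aliased with) the two-factor interaction of the other two factors.
for j, name in enumerate("ABC"):
    others = [o for o in range(3) if o != j]
    fi = folded[:, others[0]] * folded[:, others[1]]
    pair = "".join("ABC"[o] for o in others)
    print(f"inner product of {name} with {pair}:", int(folded[:, j] @ fi))
```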

13.
Regular two-level fractional factorial designs have complete aliasing in which the associated columns of multiple effects are identical. In this article, we show how Bayesian variable selection can be used to analyze experiments that use such designs. Bayesian variable selection naturally incorporates heredity in addition to sparsity and hierarchy. This prior information is used to identify the most likely combinations of active terms. The method is demonstrated on simulated and real experiments. Copyright © 2016 John Wiley & Sons, Ltd.
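
The complete aliasing mentioned above is easy to exhibit. The sketch below shows that in a regular 2^(4-1) fraction with D = ABC the model columns for AB and CD are identical, which is exactly the ambiguity that heredity-aware Bayesian variable selection is used to resolve (the Bayesian search itself is not reproduced here).

```python
# Minimal sketch of complete aliasing in a regular 2^(4-1) fraction with D = ABC:
# the model columns for AB and CD are identical, so least squares alone cannot
# separate them; prior information such as effect heredity is needed.
import numpy as np
from itertools import product

base = np.array(list(product((-1, 1), repeat=3)))      # A, B, C
design = np.column_stack([base, base.prod(axis=1)])    # D = ABC

A, B, C, D = design.T
print("AB column:", A * B)
print("CD column:", C * D)
print("identical:", np.array_equal(A * B, C * D))
# Under effect heredity, AB is the more plausible of the aliased pair when A and
# B are active and C and D are not; a heredity-aware Bayesian search formalizes this.
```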

14.
J. E. J. Technometrics 2013, 55(4): 502–503
This article demonstrates advantages of using nonorthogonal resolution IV designs for running small screening experiments when the primary goal is identification of important main effects (MEs) with a secondary goal of entertaining a small number of potentially important second-order interactions. This is accomplished by evaluating the structure and performance of designs obtained by folding over small efficient nonorthogonal resolution III designs and comparing them with more commonly used orthogonal resolution III designs of comparable size, such as fractional factorials and Plackett–Burman designs. The folded-over designs are available for a wider class of run sizes and perform as well as or better than resolution III competitors in selecting the correct model when a few active two-factor interactions are present and significantly outperform resolution III competitors in terms of correctly identifying MEs. A simple two-step procedure is proposed for analyzing data from such designs that separates the goals and is well suited for sorting through likely models quickly.

15.
The paper discusses the similarities and differences between blocking factors (blocked designs) and noise factors (robust designs) in industrial two-level factorial experiments. The discussion ranges from the objectives of both design types and the nature of blocking and noise factors to the types of designs and the assumptions needed in each case. The conclusions are as follows: the nature and characteristics of noise and blocking factors are equal or very similar; the designs used in both situations are also similar; and the main differences lie in the assumptions and the objectives. The paper argues that the objectives are not in conflict and can easily be harmonized. In consequence, we argue in favor of a unified approach that would clarify the issue, especially for students and practitioners.

16.
Atmospheric water harvesting (AWH)—producing fresh water by collecting moisture from air—enables sustainable water delivery without geographical and hydrologic limitations. However, the fundamental design principles for preparing materials that can convert the water vapor in the air into collectible liquid water are still largely unknown. Here, a super moisture-absorbent gel is presented, composed of hygroscopic polypyrrole chloride penetrating a hydrophilicity-switchable polymeric network of poly(N-isopropylacrylamide). Based on this design, high-efficiency water production by AWH has been achieved over a broad range of relative humidity. The synergistic effect enabled by the molecular-level integration of hygroscopic and hydrophilicity-switchable polymers in a network architecture provides controllable interaction between the gel and water molecules, simultaneously realizing efficient vapor capture, in situ water liquefaction, high-density water storage and fast water release under different weather conditions. As an effective means of regulating the migration of water molecules, this design represents a novel strategy to improve AWH, and it is also fundamental to other water management systems for environmental cooling, surface moisturizing and beyond.

17.
In industrial experiments, restrictions on the execution of the experimental runs or the existence of one or more hard-to-change factors often leads to split-plot experiments, where there are two types of experimental units and two independent randomizations. The resulting compound symmetric error structure, as well as the settings of whole-plot and subplot factors, play important roles in the performance of split-plot experiments. When the practitioner is interested in predicting the response, a response surface design for a second-order model such as a central composite design (CCD) is often used. The prediction variance of second-order designs under a split-plot error structure is often of interest. In this paper, fraction of design space (FDS) plots are adapted to split-plot designs. In addition to the global curve exploring the entire design space, sliced curves at various whole-plot levels are presented to study prediction performance for subregions in the design space. The different sizes of the constrained subregions are accounted for by the proportional size of the sliced curves. The construction and use of the FDS plots are demonstrated through two examples of the restricted CCD in split-plot schemes. We also consider the impact of the variance ratio on design performance. Copyright © 2006 John Wiley & Sons, Ltd.
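
As a simplified illustration, the sketch below computes an ordinary (unsliced) FDS curve for a face-centred central composite design under a completely randomized error structure; the split-plot error structure, sliced curves, and variance-ratio effects discussed in the paper are not included.

```python
# Sketch of a basic fraction-of-design-space (FDS) summary for a face-centred
# CCD and a full second-order model, ignoring the split-plot error structure.
import numpy as np
from itertools import product

rng = np.random.default_rng(4)
k, alpha, n_center = 3, 1.0, 3

cube = np.array(list(product((-1.0, 1.0), repeat=k)))
axial = np.vstack([s * alpha * np.eye(k)[i] for i in range(k) for s in (-1, 1)])
design = np.vstack([cube, axial, np.zeros((n_center, k))])

def model_matrix(pts):
    cols = [np.ones(len(pts))] + [pts[:, i] for i in range(k)]
    cols += [pts[:, i] * pts[:, j] for i in range(k) for j in range(i + 1, k)]
    cols += [pts[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

X = model_matrix(design)
XtX_inv = np.linalg.inv(X.T @ X)

# Scaled prediction variance N * x'(X'X)^{-1} x at random points in the cube
pts = rng.uniform(-1, 1, size=(20000, k))
F = model_matrix(pts)
spv = len(design) * np.einsum("ij,jk,ik->i", F, XtX_inv, F)

spv_sorted = np.sort(spv)
for q in (0.1, 0.5, 0.9, 1.0):
    print(f"FDS {q:.0%}: scaled prediction variance <= "
          f"{spv_sorted[int(q * len(spv_sorted)) - 1]:.2f}")
```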

18.
This paper presents a model for supply-chain design that considers the Cost of Quality as well as the traditional manufacturing and distribution costs (the SC-COQ model). It makes three main contributions: (1) the SC-COQ model internally computes quality costs for the whole supply chain, considering the interdependencies among business entities, whereas previous work has assumed exogenously given Cost of Quality functions; (2) the SC-COQ model can be used at the strategic planning level to design a logistic route that achieves maximum profit while considering the overall quality level within a supply chain; and (3) we provide two solution methods, based on simulated annealing and a genetic algorithm, and perform computational experiments on test instances.

19.
Recent developments in the fractional representation approach to linear feedback control systems offer some promise for a comprehensive unification of various design methodologies. This paper presents some results of the fractional representation theory in terms of mappings of simple algebraic functions defined on appropriate domains of free design parameters. We emphasize that the symmetry of the closed-loop feedback system can provide information that is useful for the robustness analysis of the closed-loop system.

20.
Two of the basic approaches to choosing an n-point experimental design in many industrial situations are (i) to set down a simple factorial or fractional factorial design in the factors being studied, or (ii) to choose a design based on the well-known |X′X| criterion. Experimenters often prefer (i) due to its simplicity; our viewpoint here is that (ii) is much better. We first indicate some situations for which (when all the factors are restricted to a cuboidal region) the factorial approach is optimal, as judged by the |X′X| criterion, but the assumed models are often not sensible ones in practical work. We then examine what (similarly restricted) designs are optimal under the |X′X| criterion for the standard linear models of first and second order; because of the very rapid increase in computational difficulties, we consider only “cube plus star” type designs for k ≥ 3 (except for k = 3, n = 10). In spite of computational requirements, we recommend use of the |X′X| criterion in general rather than the indiscriminate use of factorials and we briefly discuss the reasons why, both for linear and nonlinear model situations.
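
The sketch below is a toy illustration of the |X′X| criterion under a first-order model on the cube, comparing the 2^3 factorial with random eight-point designs; the article's second-order "cube plus star" comparisons are not redone here.

```python
# Toy illustration of the |X'X| criterion under a first-order model in the cube:
# compare the 2^3 factorial with random 8-point designs.
import numpy as np
from itertools import product

rng = np.random.default_rng(5)

def det_xtx(points):
    X = np.column_stack([np.ones(len(points)), points])   # first-order model matrix
    return np.linalg.det(X.T @ X)

factorial = np.array(list(product((-1.0, 1.0), repeat=3)))
best_random = max(det_xtx(rng.uniform(-1, 1, size=(8, 3))) for _ in range(2000))

print("|X'X| for the 2^3 factorial:   ", det_xtx(factorial))
print("best |X'X| over random designs:", round(best_random, 1))
# For the first-order model the factorial attains the maximum value 8**4 = 4096,
# matching the article's point that factorials can be |X'X|-optimal for simple
# models on a cuboidal region.
```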
