Similar Literature
20 similar records retrieved.
1.
For surrogate construction, a good experimental design (ED) is essential to simultaneously reduce the effect of noise and bias errors. However, most EDs cater to a single criterion and may lead to small gains in that criterion at the expense of large deteriorations in other criteria. We use multiple criteria to assess the performance of different popular EDs. We demonstrate that these EDs offer different trade‐offs, and that use of a single criterion is indeed risky. In addition, we show that popular EDs, such as Latin hypercube sampling (LHS) and D‐optimal designs, often leave large regions of the design space unsampled even for moderate dimensions. We discuss a few possible strategies to combine multiple criteria and illustrate them with examples. We show that complementary criteria (e.g. a bias handling criterion for variance‐based designs and vice versa) can be combined to improve the performance of EDs. We demonstrate improvements in the trade‐offs between noise and bias error by combining a model‐based criterion, like the D‐optimality criterion, and a geometry‐based criterion, like LHS. Next, we demonstrate that selecting an ED from three candidate EDs using a suitable error‐based criterion helped eliminate potentially poor designs. Finally, we show benefits from combining the multiple criteria‐based strategies, that is, generating multiple EDs using the D‐optimality and LHS criteria and selecting one design using a pointwise bias error criterion. The encouraging results from the examples indicate that it may be worthwhile studying these strategies more rigorously and in more detail. Copyright © 2007 John Wiley & Sons, Ltd.
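The following minimal sketch (not taken from the paper) illustrates one way such criteria can be combined, assuming a full quadratic model in two factors over the unit square: several random Latin hypercube designs are generated, each is scored by a D-type criterion (log-determinant of the information matrix) and by a maximin-distance criterion, and the design with the best combined ranking is retained.

```python
import numpy as np

rng = np.random.default_rng(0)

def lhs(n, d):
    """Random Latin hypercube sample of n points in [0, 1]^d."""
    cols = [(rng.permutation(n) + rng.random(n)) / n for _ in range(d)]
    return np.column_stack(cols)

def quadratic_model_matrix(X):
    """Full quadratic model in two factors: 1, x1, x2, x1*x2, x1^2, x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

def d_criterion(X):
    """log |F'F| for the quadratic model (larger is better)."""
    F = quadratic_model_matrix(X)
    return np.linalg.slogdet(F.T @ F)[1]

def maximin(X):
    """Smallest pairwise distance in the design (larger is better)."""
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff**2).sum(-1))
    return dist[np.triu_indices(len(X), 1)].min()

# Generate candidate LHS designs and score them on both criteria.
candidates = [lhs(12, 2) for _ in range(50)]
d_scores = np.array([d_criterion(X) for X in candidates])
m_scores = np.array([maximin(X) for X in candidates])

# Combined ranking: sum of the two rank positions (one simple choice).
combined = d_scores.argsort().argsort() + m_scores.argsort().argsort()
best = candidates[int(np.argmax(combined))]
print("best design D-score:", d_criterion(best), "maximin:", maximin(best))
```

The rank-averaging rule is only one plausible way to combine the two scores; a weighted sum or a Pareto filter would serve equally well for illustration.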

2.
We propose and develop a genetic algorithm (GA) for generating D‐optimal designs where the experimental region is an irregularly shaped polyhedral region. Our approach does not require selection of points from a user‐defined candidate set of mixtures and allows movement through a continuous region that includes highly constrained mixture regions. This approach is useful in situations where extreme vertices (EV) designs or conventional exchange algorithms fail to find a near‐optimal design. For illustration, examples with three and four components are presented with comparisons of our GA designs with those obtained using EV designs and exchange‐point algorithms over an irregularly shaped polyhedral region. The results show that the designs produced by the GA perform as well as, if not better than, those produced by the exchange‐point algorithms, and better than those produced by the EV approach. This suggests that the GA is a viable alternative for constructing D‐optimal designs in mixture experiments when EV designs or exchange‐point algorithms are insufficient. Copyright © 2012 John Wiley & Sons, Ltd.
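As a rough illustration of the idea (not the authors' implementation), the sketch below evolves a small 7-run design for a hypothetical three-component mixture with component bounds, using a first-order Scheffé model and log|X'X| as the fitness; mutation blends existing points with freshly sampled feasible points, so every candidate design stays inside the constrained region.

```python
import numpy as np

rng = np.random.default_rng(1)

# Component bounds for a hypothetical constrained 3-component mixture region.
LOWER = np.array([0.1, 0.2, 0.1])
UPPER = np.array([0.6, 0.7, 0.5])

def random_feasible_point():
    """Rejection-sample a mixture point satisfying the bounds (sums to 1)."""
    while True:
        x = rng.dirichlet(np.ones(3))
        if np.all(x >= LOWER) and np.all(x <= UPPER):
            return x

def log_det(design):
    """Fitness: log|X'X| for a first-order Scheffe mixture model (X = design)."""
    X = np.asarray(design)
    sign, val = np.linalg.slogdet(X.T @ X)
    return val if sign > 0 else -np.inf

def mutate(design, rate=0.2):
    """Blend a few design points toward fresh feasible points (stays feasible,
    since the constrained region is convex)."""
    out = design.copy()
    for i in range(len(out)):
        if rng.random() < rate:
            w = rng.random()
            out[i] = w * out[i] + (1 - w) * random_feasible_point()
    return out

n_runs, pop_size, n_gen = 7, 30, 200
population = [np.array([random_feasible_point() for _ in range(n_runs)])
              for _ in range(pop_size)]

for _ in range(n_gen):
    population.sort(key=log_det, reverse=True)
    parents = population[: pop_size // 2]        # truncation selection
    children = [mutate(p) for p in parents]      # mutation only, for brevity
    population = parents + children

best = max(population, key=log_det)
print("best log|X'X|:", log_det(best))
print(np.round(best, 3))
```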

3.
Alphabetic optimality criteria, such as the D, A, and I criteria, require specifying a model to select optimal designs. They are not model‐free, and the designs obtained by them may not be robust. Recently, many extensions of the D and A criteria have been proposed for selecting robust designs with high estimation efficiency. However, approaches for finding robust designs with high prediction efficiency are rarely studied in the literature. In this paper, we propose a compound criterion and apply the coordinate‐exchange 2‐phase local search algorithm to generate robust designs with high estimation, high prediction, or balanced estimation and prediction efficiency for projective submodels. Examples demonstrate that the designs obtained by our method have better projection efficiency than many existing designs.
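A toy coordinate-exchange pass with a compound criterion is sketched below; this illustrates the general idea only, not the two-phase local search algorithm used in the paper. For a main-effects model in three factors on a three-level grid, each design coordinate is cycled through its candidate levels and a swap is kept whenever it improves a weighted combination of a D-type estimation score and an I-type average prediction variance score.

```python
import numpy as np

rng = np.random.default_rng(2)
levels = np.array([-1.0, 0.0, 1.0])          # candidate coordinate values

def model_matrix(X):
    """Main-effects model: intercept plus three linear terms."""
    return np.column_stack([np.ones(len(X)), X])

def compound(X, grid, w=0.5):
    """Weighted compound of a D-type (estimation) and I-type (prediction) score."""
    F = model_matrix(X)
    M = F.T @ F
    sign, logdet = np.linalg.slogdet(M)
    if sign <= 0:
        return -np.inf
    d_part = logdet / F.shape[1]                 # log D-efficiency, up to scaling
    G = model_matrix(grid)
    avg_pred_var = np.mean(np.sum(G @ np.linalg.inv(M) * G, axis=1))
    i_part = -np.log(avg_pred_var)               # smaller average variance is better
    return w * d_part + (1 - w) * i_part

grid = np.array(np.meshgrid(*[np.linspace(-1, 1, 11)] * 3)).reshape(3, -1).T
X = rng.choice(levels, size=(8, 3))              # random starting design

improved = True
while improved:                                   # coordinate-exchange sweeps
    improved = False
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            best_val, best_score = X[i, j], compound(X, grid)
            for v in levels:                      # try every candidate level
                X[i, j] = v
                score = compound(X, grid)
                if score > best_score:
                    best_val, best_score, improved = v, score, True
            X[i, j] = best_val
print("final compound score:", compound(X, grid))
print(X)
```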

4.
This article presents and develops a genetic algorithm (GA) to generate D‐efficient designs for mixture‐process variable experiments. It is assumed that the levels of the process variable are controlled during the process. The GA approach searches design points from a set of possible points over a continuous region and works without having a finite user‐defined candidate set. We compare the performance of designs generated by the GA with designs generated by two exchange algorithms (DETMAX and k‐exchange) in terms of D‐efficiency and fraction of design space (FDS) plots, which are used to evaluate a design's prediction variance properties. To illustrate the methodology, examples involving three and four mixture components and one process variable are used to create the optimal designs. The results show that GA designs have superior prediction variance properties in comparison with the DETMAX and k‐exchange algorithm designs when the design space is the simplex or is a highly constrained subspace of the simplex.
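The fraction of design space (FDS) summary used for such comparisons can be computed directly. The sketch below (an illustration, not the paper's mixture setting) does so for a 3^2 factorial under a full quadratic model on the square [-1, 1]^2: the design space is sampled uniformly, the scaled prediction variance is evaluated at each sample, and the sorted values plotted against the cumulative fraction of points form the FDS curve.

```python
import numpy as np

rng = np.random.default_rng(3)

def quad_terms(X):
    """Quadratic model matrix in two factors."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

# A 3^2 factorial as the design under evaluation (stand-in for a generated design).
grid = np.array([-1.0, 0.0, 1.0])
design = np.array([[a, b] for a in grid for b in grid])

F = quad_terms(design)
M_inv = np.linalg.inv(F.T @ F)
n = len(design)

# Sample the design space and compute the scaled prediction variance n * x'(F'F)^-1 x.
points = rng.uniform(-1, 1, size=(20000, 2))
G = quad_terms(points)
spv = n * np.sum(G @ M_inv * G, axis=1)

fds = np.sort(spv)                      # FDS curve: fraction of space vs. variance
for q in (0.1, 0.5, 0.9, 1.0):
    print(f"fraction {q:.1f}: scaled prediction variance <= {np.quantile(spv, q):.2f}")
# Plotting fds against np.linspace(0, 1, len(fds)) gives the usual FDS plot.
```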

5.
Experimental design strategies most often involve an initial choice of a classic factorial or response surface design and adapt that design to meet restrictions or unique requirements of the system under study. One such experience is described here, in which the objective was to develop an efficient experimental design strategy that would facilitate building second‐order response models with excellent prediction capabilities. In development, careful attention was paid to the desirable properties of response surface designs. Once developed, the proposed design was evaluated using Monte Carlo simulation to prove the concept, a pilot implementation of the design was carried out to evaluate the accuracy of the response models, and a set of validation runs was enacted to look for potential weaknesses in the approach. The purpose of the exercise was to develop a procedure to efficiently and effectively calibrate strain‐gauge balances to be used in wind tunnel testing. The current calibration testing procedure is based on a time‐intensive one‐factor‐at‐a‐time method. In this study, response surface methods were used to reduce the number of calibration runs required during the labor‐intensive heavy load calibration, to leverage the prediction capabilities of response surface designs, and to provide an estimate of uncertainty for the calibration models. Results of the three‐phased approach for design evaluation are presented. The new calibration process will require significantly fewer tests to achieve the same or improved levels of precision in balance calibration. Copyright © 2008 John Wiley & Sons, Ltd.

6.
A historically common choice for evaluating response surface designs is to use alphabetic optimality criteria. Single‐number criteria such as D, A, G, and V optimality do not completely reflect the estimation or prediction variance characteristics of the designs in question. For prediction‐based assessment, alternatives to single‐number summaries include graphical displays of the prediction variance across the design region. Variance dispersion graphs, fraction of design space plots, and quantile plots have been suggested to evaluate the overall prediction capability of response surface designs. Quantile plots use the percentiles of the prediction variance distribution at a given radius instead of just the mean, maximum, and minimum prediction variance values on concentric spheres inside the region of interest. Previously, the user had to select several values of the radius and draw corresponding quantile plots to evaluate the overall prediction capability of response surface designs. The user‐specified choice of radii to examine makes the plot somewhat subjective. Alternatively, we propose to remove this subjectivity by using a three‐dimensional quantile plot. As another extension of the quantile plots, we suggest dynamic quantile plots to animate the quantile plots and use them for comparing and evaluating response surface designs. Copyright © 2011 John Wiley & Sons, Ltd.
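The quantities behind such plots are straightforward to compute. The following sketch (an illustration, not the paper's code) evaluates selected percentiles of the scaled prediction variance on circles of several radii for a small central composite design under a full quadratic model in two factors; tabulating these percentiles over a grid of radii is essentially what a three-dimensional quantile plot displays.

```python
import numpy as np

rng = np.random.default_rng(4)

def quad_terms(X):
    """Full quadratic model matrix in two factors."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

# A small central composite design in two factors (axial distance sqrt(2), 3 center runs).
a = np.sqrt(2.0)
design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                   [-a, 0], [a, 0], [0, -a], [0, a],
                   [0, 0], [0, 0], [0, 0]], dtype=float)
F = quad_terms(design)
M_inv = np.linalg.inv(F.T @ F)
n = len(design)

percentiles = (0, 25, 50, 75, 100)
for r in np.linspace(0.2, a, 8):                 # radii spanning the region
    theta = rng.uniform(0, 2 * np.pi, 5000)      # many points on the circle of radius r
    ring = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
    spv = n * np.sum(quad_terms(ring) @ M_inv * quad_terms(ring), axis=1)
    q = np.percentile(spv, percentiles)
    print(f"r = {r:.2f}: " + ", ".join(f"p{p}={v:.2f}" for p, v in zip(percentiles, q)))
```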

7.
This paper presents a numerical strategy that lowers the cost of predicting the value of homogenized tensors in elliptic problems. This is performed by solving a coupled problem, in which the complex microstructure is confined to a small region and surrounded by a tentative homogenized medium. The characteristics of this homogenized medium are updated using a self‐consistent approach and are shown to converge to the actual solution. The main feature of the coupling strategy is that it really couples the random microstructure with the deterministic homogenized model, and not one (deterministic) realization of the random medium with a homogenized model. The advantages of doing so are twofold: (a) the influence of the boundary conditions is significantly mitigated, and (b) the ergodicity of the random medium can be used in full through appropriate definition of the coupling operator. Both of these advantages imply that the resulting coupled problem is less expensive to solve, for a given bias, than the computation of the homogenized tensor using classical approaches. Examples of 1‐D and 2‐D problems with continuous properties, as well as a 2‐D matrix‐inclusion problem, illustrate the effectiveness and potential of the method. Copyright © 2013 John Wiley & Sons, Ltd.

8.
9.
We propose ‘low‐cost response surface methods’ (LCRSMs) that typically require half the experimental runs of standard response surface methods based on central composite and Box‐Behnken designs, but yield comparable or lower modeling errors under realistic assumptions. In addition, the LCRSMs have substantially lower modeling errors and greater expected savings compared with alternatives with comparable numbers of runs, including small composite designs and computer‐generated designs based on popular criteria such as D‐optimality. The LCRSM procedures appear to be the first experimental design methods derived as the solution to a simulation optimization problem. Together with modern computers, simulation optimization offers unprecedented opportunities for applying clear, realistic multicriterion objectives and assumptions to produce useful experimental design methods. We compare the proposed LCRSMs with alternatives based on six criteria. We conclude that the proposed methods offer attractive alternatives when the experimenter is considering dropping factors to use standard response surface methods or would like to perform relatively few runs and stop with a second‐order model. Copyright © 2002 John Wiley & Sons, Ltd.

10.
Sequential experiment design strategies have been proposed for efficiently augmenting initial designs to solve many problems of interest to computer experimenters, including optimization, contour and threshold estimation, and global prediction. We focus on batch sequential design strategies for achieving maturity in global prediction of discrepancy inferred from computer model calibration. Predictive maturity focuses on adding field experiments to efficiently improve discrepancy inference. Several design criteria are extended to allow batch augmentation, including integrated and maximum mean square error, maximum entropy, and two expected improvement criteria. In addition, batch versions of maximin distance and weighted distance criteria are developed. Two batch optimization algorithms are considered: modified Fedorov exchange and a binning methodology motivated by optimizing augmented fractional factorial skeleton designs.
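A minimal version of batch augmentation under one of the distance-based criteria mentioned above (maximin distance) is sketched below; it is an illustrative greedy scheme, not the modified Fedorov exchange or binning algorithm from the paper. Starting from an initial design in [0, 1]^2, follow-up points are added one at a time, each chosen from a pool of random candidates to maximize its minimum distance to every point already in the design.

```python
import numpy as np

rng = np.random.default_rng(5)

def augment_maximin(initial, batch_size, n_candidates=2000, dim=2):
    """Greedily add batch_size points, each maximizing its minimum distance
    to the points already in the (growing) design. Space is [0, 1]^dim."""
    design = list(initial)
    for _ in range(batch_size):
        candidates = rng.random((n_candidates, dim))
        current = np.array(design)
        # Distance from every candidate to its nearest existing design point.
        d = np.sqrt(((candidates[:, None, :] - current[None, :, :]) ** 2).sum(-1))
        nearest = d.min(axis=1)
        design.append(candidates[np.argmax(nearest)])
    return np.array(design)

initial = rng.random((6, 2))      # stand-in for an initial computer-experiment design
augmented = augment_maximin(initial, batch_size=4)
print("added points:")
print(np.round(augmented[len(initial):], 3))
```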

11.
One of the common and important problems in production scheduling is to quote an attractive but attainable due date for an arriving customer order. Among a wide variety of prediction methods proposed to improve due date quotation (DDQ) accuracy, artificial neural networks (ANNs) are considered the most effective because of their flexible non-linear and interaction-effects modelling capability. In spite of this growing use of ANNs in a DDQ context, ANNs have several intrinsic shortcomings, such as instability, bias, and variance problems, that undermine their accuracy. In this paper, we develop an enhanced ANN-based DDQ model using machine learning, evolutionary, and metaheuristic learning concepts. Computational experiments suggest that the proposed model outperforms the conventional ANN-based DDQ method under different shop environments and different training data sizes.

12.
The edge‐based smoothed finite element method (ES‐FEM) was proposed recently in Liu, Nguyen‐Thoi, and Lam to improve the accuracy of the FEM for 2D problems. This method belongs to the wider family of smoothed FEMs, for which smoothing cells are defined to perform the numerical integration over the domain. Later, the face‐based smoothed FEM (FS‐FEM) was proposed to generalize the ES‐FEM to 3D problems. According to this method, the smoothing cells are centered along the faces of the tetrahedrons of the mesh. In the present paper, an alternative method for the extension of the ES‐FEM to 3D is investigated. This method is based on an underlying mesh composed of tetrahedrons, and the approximation of the field variables is associated with the tetrahedral elements; however, in contrast to the FS‐FEM, the smoothing cells of the proposed ES‐FEM are centered along the edges of the tetrahedrons of the mesh. From selected numerical benchmark problems, it is observed that the ES‐FEM is characterized by a higher accuracy and improved computational efficiency compared with linear tetrahedral elements and with the FS‐FEM for a given number of degrees of freedom. Copyright © 2013 John Wiley & Sons, Ltd.

13.
This paper presents a novel face‐based smoothed finite element method (FS‐FEM) to improve the accuracy of the finite element method (FEM) for three‐dimensional (3D) problems. The FS‐FEM uses 4‐node tetrahedral elements that can be generated automatically for complicated domains. In the FS‐FEM, the system stiffness matrix is computed using strains smoothed over the smoothing domains associated with the faces of the tetrahedral elements. The results demonstrate that the FS‐FEM is significantly more accurate than the FEM using tetrahedral elements for both linear and geometrically non‐linear solid mechanics problems. In addition, a novel domain‐based selective scheme is proposed, leading to a combined FS/NS‐FEM model that is immune from volumetric locking and hence works well for nearly incompressible materials. The implementation of the FS‐FEM is straightforward, and no penalty parameters or additional degrees of freedom are used. The computational efficiency of the FS‐FEM is found to be better than that of the FEM. Copyright © 2008 John Wiley & Sons, Ltd.

14.
This paper presents a methodology for constructing low‐order surrogate models of finite element/finite volume discrete solutions of parameterized steady‐state partial differential equations. The construction of proper orthogonal decomposition modes in both physical space and parameter space allows us to represent high‐dimensional discrete solutions using only a few coefficients. An incremental greedy approach is developed for efficiently tackling problems with high‐dimensional parameter spaces. For numerical experiments and validation, several non‐linear steady‐state convection–diffusion–reaction problems are considered: first in one spatial dimension with two parameters, and then in two spatial dimensions with two and five parameters. In the two‐dimensional spatial case with two parameters, it is shown that a 7 × 7 coefficient matrix is sufficient to accurately reproduce the expected solution, while in the five‐parameter problem, a 13 × 6 coefficient matrix is shown to reproduce the solution with sufficient accuracy. The proposed methodology is expected to find applications in parameter variation studies, uncertainty analysis, inverse problems and optimal design. Copyright © 2009 John Wiley & Sons, Ltd.
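The physical-space half of this idea can be illustrated in a few lines (the parameter-space modes and the incremental greedy sampling from the paper are omitted): snapshots of a toy parameterized 1-D solution are collected, POD modes are extracted from an SVD of the snapshot matrix, and each high-dimensional snapshot is then represented by a handful of modal coefficients.

```python
import numpy as np

# Snapshots of a toy parameterized 1-D steady "solution" u(x; mu) on a grid.
x = np.linspace(0, 1, 200)
params = np.linspace(0.5, 2.0, 25)
snapshots = np.column_stack([np.exp(-mu * x) * np.sin(np.pi * x) for mu in params])

# POD modes from the (thin) SVD of the snapshot matrix.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1      # keep ~99.99% of the energy
modes = U[:, :r]
print("retained modes:", r)

# Each high-dimensional snapshot is now represented by r coefficients.
coeffs = modes.T @ snapshots                       # shape (r, n_params)

# Reconstruction error for an arbitrary snapshot.
k = 12
recon = modes @ coeffs[:, k]
rel_err = np.linalg.norm(recon - snapshots[:, k]) / np.linalg.norm(snapshots[:, k])
print(f"relative reconstruction error for snapshot {k}: {rel_err:.2e}")
```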

15.
In this paper, we consider prediction interval estimation in the original units of observation after fitting a linear model to an appropriately transformed response variable. We assume that the residuals obtained from fitting the linear model in the transformed space are iid zero‐mean normal random variables, at least approximately. We discuss the bias in the retransformed mean and derive a reduced‐bias estimator for the kth moment of the original response, given settings of the design variables. This is then used to compute reduced‐bias estimates for the mean and variance of the untransformed response at various locations in design space. We then exploit a well‐known probability inequality, along with our proposed moment estimator, to construct an approximate 100(1 − α)% prediction interval on the original response, given settings of the design factors. We used Monte Carlo simulation to evaluate the performance of the proposed prediction interval estimator relative to two commonly used alternatives. Our results suggest the proposed method is often the better alternative when the sample size is small and/or when the underlying model is misspecified. We illustrate the application of our new method by applying it to a real experimental data set obtained from the literature, where machine tool life was studied as a function of various machining parameters.
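The retransformation bias at issue here can be seen with a short simulation. The sketch below illustrates the general phenomenon, not the paper's reduced-bias moment estimator: after fitting OLS to log(y), the naive back-transform exp(x'b) targets the conditional median, whereas under the normality assumption the conditional mean is exp(x'b + sigma^2/2).

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulate a log-linear truth: log(y) = 1 + 0.8*x + eps, eps ~ N(0, 0.5^2).
n, sigma = 200, 0.5
x = rng.uniform(-1, 1, n)
y = np.exp(1.0 + 0.8 * x + rng.normal(0, sigma, n))

# Fit a linear model to the transformed response.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
resid = np.log(y) - X @ beta
s2 = resid @ resid / (n - X.shape[1])              # residual variance estimate

x0 = np.array([1.0, 0.5])                          # prediction point x = 0.5
naive = np.exp(x0 @ beta)                          # biased low for the mean of y
corrected = np.exp(x0 @ beta + s2 / 2)             # lognormal mean correction
true_mean = np.exp(1.0 + 0.8 * 0.5 + sigma**2 / 2)
print(f"naive back-transform:  {naive:.3f}")
print(f"variance-corrected:    {corrected:.3f}")
print(f"true conditional mean: {true_mean:.3f}")
```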

16.
Highly photosensitive nanocrystal (NC) skins based on exciton funneling are proposed and demonstrated using a graded bandgap profile across which no external bias is applied in operation for light‐sensing. Four types of gradient NC skin devices (GNS) made of NC monolayers of distinct sizes with photovoltage readout are fabricated and comparatively studied. In all structures, polyelectrolyte polymers separating CdTe NC monolayers set the interparticle distances between the monolayers of ligand‐free NCs to <1 nm. In this photosensitive GNS platform, excitons funnel along the gradually decreasing bandgap gradient of cascaded NC monolayers, and are finally captured by the NC monolayer with the smallest bandgap interfacing the metal electrode. Time‐resolved measurements of the cascaded NC skins are conducted at the donor and acceptor wavelengths, and the exciton transfer process is confirmed in these active structures. These findings are expected to enable large‐area GNS‐based photosensing with highly efficient full‐spectrum conversion.

17.
Deterministic simulation is a popular tool used to numerically solve complex mathematical models in engineering applications. These models often involve parameters in the form of numerical values that can be calibrated when real‐life observations are available. This paper presents a systematic approach to parameter calibration using Response Surface Methodology (RSM). Additional modeling that accounts for correlation in the error structure is suggested to compensate for the inadequacy of the computer model and improve prediction at untried points. A Computational Fluid Dynamics (CFD) model for manure storage ventilation is used for illustration. A simulation study shows that, in comparison to likelihood‐based parameter calibration, the proposed parameter calibration method performs better in accuracy and consistency of the calibrated parameter value. The results of a sensitivity analysis lead to a guideline for setting the factorial distance in relation to the initial parameter values. The proposed calibration method extends RSM beyond its conventional use for process yield improvement and can also be applied widely to calibrate other types of models when real‐life observations are available. Moreover, the proposed inadequacy modeling is useful for improving the accuracy of simulation output, especially when a computer model is too expensive to run at its finest level of detail. Copyright © 2011 John Wiley & Sons, Ltd.
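A toy version of response-surface-based calibration is sketched below, with a hypothetical one-parameter stand-in simulator rather than the CFD model, and without the correlated-error inadequacy modeling from the paper: the simulator is run at a small design over the parameter range, a quadratic surface is fitted to the sum of squared discrepancies against field observations, and the minimizer of that surface is taken as the calibrated parameter value.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulator(theta, t):
    """Hypothetical stand-in computer model with one calibration parameter theta."""
    return np.exp(-theta * t)

# Hypothetical field observations generated at the 'true' theta = 1.3 plus noise.
t_obs = np.linspace(0.1, 2.0, 15)
y_obs = simulator(1.3, t_obs) + rng.normal(0, 0.01, t_obs.size)

def sse(theta):
    """Sum of squared discrepancies between simulator output and observations."""
    return np.sum((simulator(theta, t_obs) - y_obs) ** 2)

# Evaluate the discrepancy at a small design over the parameter range.
thetas = np.linspace(0.8, 1.8, 7)
z = np.array([sse(th) for th in thetas])

# Fit a quadratic response surface SSE(theta) ~ b0 + b1*theta + b2*theta^2.
A = np.column_stack([np.ones_like(thetas), thetas, thetas**2])
b, *_ = np.linalg.lstsq(A, z, rcond=None)
theta_cal = -b[1] / (2 * b[2])                  # minimizer of the fitted quadratic
print(f"calibrated theta from the response surface: {theta_cal:.3f}")
```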

18.
The paper deals with the issue of accuracy for multiscale methods applied to solve stochastic problems. More precisely, it focuses on the control of a coupling, performed using the Arlequin framework, between a deterministic continuum model and a stochastic continuum one. By using residual‐type estimates and adjoint‐based techniques, a strategy for goal‐oriented error estimation is presented for this coupling, and the contributions of various error sources (modeling, space discretization, and Monte Carlo approximation) are assessed. Furthermore, an adaptive strategy is proposed to enhance the quality of outputs of interest obtained by the coupled stochastic‐deterministic model. Performance of the proposed approach is illustrated on 1D and 2D numerical experiments. Copyright © 2013 John Wiley & Sons, Ltd.

19.
20.
This paper considers an experimentation strategy when resource constraints permit only a single design replicate per time interval and one or more design variables are hard to change. The experimental designs considered are two‐level full‐factorial or fractional‐factorial designs run as balanced split plots. These designs are common in practice and appropriate for fitting a main‐effects‐plus‐interactions model, while minimizing the number of times the whole‐plot treatment combination is changed. Depending on the postulated model, single replicates of these designs can result in the inability to estimate error at the whole‐plot level, suggesting that formal statistical hypothesis testing on the whole‐plot effects is not possible. We refer to these designs as balanced two‐level whole‐plot saturated split‐plot designs. In this paper, we show that, for these designs, it is appropriate to use ordinary least squares to analyze the subplot factor effects at the ‘intermittent’ stage of the experiments (i.e., after a single design replicate is run); however, formal inference on the whole‐plot effects may or may not be possible at this point. We exploit the sensitivity of ordinary least squares in detecting whole‐plot effects in a split‐plot design and propose a data‐based strategy for determining whether to run an additional replicate following the intermittent analysis or whether to simply reduce the model at the whole‐plot level to facilitate testing. The performance of the proposed strategy is assessed using Monte Carlo simulation. The method is then illustrated using wind tunnel test data obtained from a NASCAR Winston Cup Chevrolet Monte Carlo stock car. Copyright © 2012 John Wiley & Sons, Ltd.
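To make the 'intermittent' ordinary least squares analysis concrete, the sketch below simulates a single replicate of a 2^3 full factorial run as a balanced split plot (one hypothetical hard-to-change whole-plot factor A and two subplot factors B and C) and fits the main-effects-plus-two-factor-interactions model by OLS; with only two whole plots the whole-plot stratum is saturated by A, which is exactly the situation in which no whole-plot error degrees of freedom remain for a formal test of A.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(8)

# Single replicate of a 2^3 factorial, run as a split plot: A is the hard-to-change
# whole-plot factor; B and C are subplot factors randomized within each whole plot.
runs = np.array(list(product([-1, 1], repeat=3)), dtype=float)   # columns A, B, C
A, B, C = runs.T

# Simulated response: the whole-plot error is shared by all runs with the same A level.
whole_plot_err = {a: rng.normal(0, 1.0) for a in (-1.0, 1.0)}
y = (2.0 * A + 1.5 * B - 1.0 * C + 0.8 * A * B
     + np.array([whole_plot_err[a] for a in A])
     + rng.normal(0, 0.3, len(A)))                               # subplot error

# Main-effects-plus-two-factor-interactions model, fitted by ordinary least squares.
X = np.column_stack([np.ones(8), A, B, C, A * B, A * C, B * C])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, b in zip(["Int", "A", "B", "C", "AB", "AC", "BC"], beta):
    print(f"{name:>3}: {b:+.3f}")
# With only two whole plots, the whole-plot stratum is saturated by the A effect,
# so no whole-plot error degrees of freedom remain for a formal test of A.
```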
