Similar Articles
20 similar articles found.
1.
Computer experiments have received a great deal of attention in many fields of science and technology. Most of the literature assumes that all the input variables are quantitative. However, researchers often encounter computer experiments involving both qualitative and quantitative variables (BQQV). In this article, a new interface between design and analysis for computer experiments with BQQV is proposed. The new designs are a kind of sliced Latin hypercube design with points clustered in the design region, and each slice possesses good uniformity. For computer experiments with BQQV, such designs help to measure the similarities among the responses of different level combinations of the qualitative variables. An adaptive analysis strategy intended for the proposed designs is developed. The proposed strategy automatically extracts information from useful auxiliary responses to increase the precision of prediction for the target response. The interface between the proposed design and the analysis strategy is demonstrated to be effective via simulation and a real-life example from the food engineering literature. Supplementary materials for this article are available online.
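As a rough illustration of the design class mentioned above, the sketch below (Python, assuming NumPy) constructs a standard sliced Latin hypercube design in which every slice collapses to a smaller Latin hypercube and the combined runs form a full Latin hypercube; the clustered variant proposed in the article adds further structure not reproduced here, and the function name is illustrative.

import numpy as np

def sliced_lhd(m, s, d, seed=None):
    # n = m*s runs split into s slices of m runs each, in d dimensions.
    # Each slice collapses to an m-level Latin hypercube and the combined
    # design is an n-level Latin hypercube (generic construction only).
    rng = np.random.default_rng(seed)
    n = m * s
    X = np.empty((s, m, d))
    for c in range(d):
        # sigma[j] is a random permutation of the s slices for coarse cell j
        sigma = np.stack([rng.permutation(s) for _ in range(m)])
        for k in range(s):
            coarse = rng.permutation(m)               # slice k hits every coarse cell once
            fine = coarse * s + sigma[coarse, k]      # distinct fine levels across slices
            X[k, :, c] = (fine + rng.random(m)) / n   # jitter inside each fine cell
    return X                                          # X[k] is the k-th slice, shape (m, d)

# Example: 3 slices of 4 runs in 2 dimensions, one slice per qualitative level
design = sliced_lhd(m=4, s=3, d=2, seed=1)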

2.
This article is motivated by a computer experiment conducted for optimizing residual stresses in the machining of metals. Although kriging is widely used in the analysis of computer experiments, it cannot be easily applied to model the residual stresses because they are obtained as a profile. The high dimensionality caused by this functional response introduces severe computational challenges in kriging. It is well known that if the functional data are observed on a regular grid, the computations can be simplified using an application of Kronecker products. However, the case of an irregular grid is quite complex. In this article, we develop a Gibbs sampling-based expectation maximization algorithm, which converts the irregularly spaced data into a regular grid so that the Kronecker product-based approach can be employed for efficiently fitting a kriging model to the functional data. Supplementary materials are available online.
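The Kronecker-product simplification referred to above can be sketched as follows (Python with NumPy; variable names are illustrative, not the authors' code): when the functional response sits on a regular input-by-time grid, the covariance matrix factors as Kx ⊗ Kt, and the kriging system can be solved through two small eigendecompositions instead of one large factorization.

import numpy as np

def kron_gp_solve(Kx, Kt, y, nugget=1e-6):
    # Solve (Kx kron Kt + nugget*I) alpha = y without forming the n_x*n_t matrix.
    dx, Ux = np.linalg.eigh(Kx)          # eigendecompose the input-space covariance
    dt, Ut = np.linalg.eigh(Kt)          # eigendecompose the time covariance
    Y = y.reshape(Kx.shape[0], Kt.shape[0])
    S = Ux.T @ Y @ Ut                    # rotate the data into the joint eigenbasis
    S = S / (np.outer(dx, dt) + nugget)  # divide by the Kronecker eigenvalues
    return (Ux @ S @ Ut.T).ravel()       # rotate back; alpha solves the kriging system

# Usage with squared-exponential covariances on a 30 x 50 grid
x = np.linspace(0, 1, 30); t = np.linspace(0, 1, 50)
Kx = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.1)
Kt = np.exp(-(t[:, None] - t[None, :]) ** 2 / 0.05)
y = np.random.default_rng(0).normal(size=30 * 50)
alpha = kron_gp_solve(Kx, Kt, y)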

3.
Computer experiments are used frequently for the study and improvement of processes. Optimizing such a process directly through the computer model is costly, so an approximation of the computer model, or metamodel, is used instead. Efficient global optimization (EGO) is a sequential optimization method for computer experiments based on a Gaussian process model approximation to the computer model response. A long-standing problem in EGO is that it does not consider the uncertainty in the parameter estimates of the Gaussian process. Treating these estimates as if they were the true parameters leads to an improper assessment of the precision of the approximation, a precision that is crucial to assess not only in optimization but in metamodeling in general. One way to account for these uncertainties is to use bootstrapping, studied by previous authors. Alternatively, other authors have noted that a Bayesian approach may be the best way to incorporate the parameter uncertainty in the optimization, but no fully Bayesian approach for EGO has been implemented in practice. In this paper, we present a fully Bayesian implementation of the EGO method. The proposed Bayesian EGO algorithm is validated through simulation of noisy nonlinear functions and compared with the standard EGO method and the bootstrapped EGO. We also apply the Bayesian EGO algorithm to the optimization of a stochastic computer model. It is shown how a Bayesian approach to EGO allows one to optimize any function of the posterior predictive density. Copyright © 2014 John Wiley & Sons, Ltd.
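For reference, the expected improvement criterion that standard EGO maximizes at each iteration has the closed form below, evaluated here with plug-in GP predictions (a minimal sketch in Python with SciPy; the fully Bayesian version in the article instead averages such quantities over the posterior of the GP parameters).

import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_min):
    # EI(x) = (f_min - mu) * Phi(z) + sigma * phi(z),  z = (f_min - mu) / sigma,
    # for minimization, with GP predictive mean mu and standard deviation sigma.
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    z = np.where(sigma > 0, (f_min - mu) / np.maximum(sigma, 1e-12), 0.0)
    ei = (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    return np.where(sigma > 0, ei, np.maximum(f_min - mu, 0.0))

# Example: candidate with predictive mean 1.2 and sd 0.4, current best value 1.0
print(expected_improvement(1.2, 0.4, 1.0))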

4.
In deterministic computer experiments, it is often known that the output is a monotonic function of some of the inputs. In these cases, a monotonic metamodel will tend to give more accurate and interpretable predictions with less prediction uncertainty than a nonmonotonic metamodel. The widely used Gaussian process (GP) models are not monotonic. A recent article in Biometrika offers a modification that projects GP sample paths onto the cone of monotonic functions. However, that approach does not account for the fact that the GP model is more informative about the true function at locations near design points than at locations far away. Moreover, a grid-based method is used, which is memory intensive and gives predictions only at grid points. This article proposes a weighted projection approach that uses the information in the GP model more effectively, together with two computational implementations. The first is isotonic regression on a grid, while the second is projection onto a cone of monotone splines, which alleviates the problems faced by a grid-based approach. Simulations show that the monotone B-spline metamodel gives particularly good results. Supplementary materials for this article are available online.
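A minimal version of the projection idea, assuming scikit-learn and NumPy are available: draw GP sample paths on a grid and project each onto the monotone cone by (optionally weighted) isotonic regression. The article's weighted projection and monotone B-spline variants refine this basic step; the function below is only a sketch.

import numpy as np
from sklearn.isotonic import IsotonicRegression

def project_paths_monotone(grid, paths, weights=None):
    # Project each GP sample path (row of `paths`) onto nondecreasing functions.
    # `weights` can down-weight grid points far from the design points, in the
    # spirit of the weighted projection discussed above.
    iso = IsotonicRegression(increasing=True)
    return np.vstack([iso.fit_transform(grid, p, sample_weight=weights) for p in paths])

# Example: two noisy, nearly monotone sample paths on a 50-point grid
g = np.linspace(0, 1, 50)
rng = np.random.default_rng(0)
raw = np.sin(2 * g) + 0.05 * rng.normal(size=(2, 50))
monotone = project_paths_monotone(g, raw)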

5.
6.
Constrained blackbox optimization is a difficult problem, with most approaches coming from the mathematical programming literature. The statistical literature is sparse, especially in addressing problems with nontrivial constraints. This situation is unfortunate because statistical methods have many attractive properties: global scope, handling noisy objectives, sensitivity analysis, and so forth. To narrow that gap, we propose a combination of response surface modeling, expected improvement, and the augmented Lagrangian numerical optimization framework. This hybrid approach allows the statistical model to think globally and the augmented Lagrangian to act locally. We focus on problems where the constraints are the primary bottleneck, requiring expensive simulation to evaluate and substantial modeling effort to map out. In that context, our hybridization presents a simple yet effective solution that allows existing objective-oriented statistical approaches, like those based on Gaussian process surrogates and expected improvement heuristics, to be applied to the constrained setting with minor modification. This work is motivated by a challenging, real-data benchmark problem from hydrology where, even with a simple linear objective function, learning a nontrivial valid region complicates the search for a global minimum. Supplementary materials for this article are available online.
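One way to picture the hybrid is the augmented Lagrangian outer loop sketched below (Python/NumPy; a schematic under the usual AL formulation for inequality constraints, with the surrogate-assisted inner search abstracted into a placeholder `inner_solver`): the composite objective is minimized with an EI-type method, then the multipliers and penalty are updated from the observed constraint values.

import numpy as np

def al_objective(f, c, lam, rho):
    # Augmented Lagrangian for min f(x) s.t. c_i(x) <= 0:
    #   f + lam . c + (1 / (2 rho)) * sum_i max(0, c_i)^2
    c = np.asarray(c, float)
    return f + lam @ c + np.sum(np.maximum(c, 0.0) ** 2) / (2.0 * rho)

def al_outer_loop(inner_solver, m, iters=10, rho=1.0):
    # inner_solver(lam, rho) stands in for an EI/GP-surrogate search that
    # approximately minimizes al_objective and returns (x, f(x), c(x)).
    lam = np.zeros(m)
    best = None
    for _ in range(iters):
        x, fx, cx = inner_solver(lam, rho)
        cx = np.asarray(cx, float)
        lam = np.maximum(0.0, lam + cx / rho)      # multiplier update
        if np.any(cx > 0):                          # still infeasible:
            rho /= 2.0                              # tighten the penalty
        elif best is None or fx < best[1]:
            best = (x, fx)                          # keep the best feasible point
    return best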

7.
In the past two decades, more and more quality and reliability activities have been moving into the design of products and processes. The design and analysis of computer experiments, as a new frontier of the design of experiments, has become increasingly popular among modern companies for optimizing product and process conditions and producing high-quality yet low-cost products and processes. This article focuses on constructing cheap metamodels as alternatives to expensive computer simulators and proposes a new metamodeling method based on the Gaussian stochastic process model, or Gaussian Kriging. Rather than a constant mean as in ordinary Kriging or a fixed mean function as in universal Kriging, the new method captures the overall trend of the performance characteristics of products and processes through a more accurate mean, by efficiently incorporating a sparseness prior-based Bayesian inference scheme into Kriging. Meanwhile, the mean model is able to adaptively exclude the unimportant effects that deteriorate prediction performance. The results of an experiment on empirical applications demonstrate that, compared with several benchmark methods in the literature, the proposed Bayesian method is not only much more effective in approximation but also very efficient in implementation, and hence more appropriate for real-world empirical applications than the widely used ordinary Kriging. Copyright © 2011 John Wiley & Sons, Ltd.

8.
Sequential experiment design strategies have been proposed for efficiently augmenting initial designs to solve many problems of interest to computer experimenters, including optimization, contour and threshold estimation, and global prediction. We focus on batch sequential design strategies for achieving maturity in global prediction of discrepancy inferred from computer model calibration. Predictive maturity focuses on adding field experiments to efficiently improve discrepancy inference. Several design criteria are extended to allow batch augmentation, including integrated and maximum mean square error, maximum entropy, and two expected improvement criteria. In addition, batch versions of maximin distance and weighted distance criteria are developed. Two batch optimization algorithms are considered: modified Fedorov exchange and a binning methodology motivated by optimizing augmented fractional factorial skeleton designs.
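As one concrete instance of the distance-based criteria, the sketch below (Python/NumPy, illustrative only) augments an existing design with a batch chosen greedily to maximize the minimum distance to all points selected so far; the article's exchange and binning algorithms search this kind of criterion more thoroughly.

import numpy as np

def greedy_maximin_batch(existing, candidates, batch_size):
    # Pick `batch_size` candidate rows, each time taking the candidate whose
    # nearest neighbour among (existing + already chosen) points is farthest.
    candidates = np.asarray(candidates, float)
    current = np.asarray(existing, float)
    chosen, pool = [], list(range(len(candidates)))
    for _ in range(batch_size):
        d = np.linalg.norm(candidates[pool, None, :] - current[None, :, :], axis=2)
        best = pool[int(np.argmax(d.min(axis=1)))]
        chosen.append(best)
        pool.remove(best)
        current = np.vstack([current, candidates[best]])
    return np.array(chosen)

# Example: add a batch of 4 field runs from a random candidate set
rng = np.random.default_rng(0)
X0 = rng.random((10, 3))          # initial design
cand = rng.random((200, 3))       # candidate field-experiment sites
idx = greedy_maximin_batch(X0, cand, batch_size=4)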

9.
This article addresses the issue of kriging-based optimization of stochastic simulators. Many of these simulators depend on factors that tune the level of precision of the response, the gain in accuracy coming at the price of computational time. The contribution of this work is twofold: first, we propose a quantile-based criterion for the sequential design of experiments, in the fashion of the classical expected improvement criterion, which allows an elegant treatment of heterogeneous response precisions. Second, we present a procedure for allocating the computational time given to each measurement, allowing a better distribution of the computational effort and increased efficiency. Finally, the optimization method is applied to an original application in nuclear criticality safety. This article has supplementary material available online. The proposed criterion is available in the R package DiceOptim.

10.
Robust parameter design with computer experiments is becoming increasingly important for product design. Existing methodologies for this problem are mostly for finding optimal control factor settings. However, in some cases, the objective of the experimenter may be to understand how the noise and control factors contribute to variation in the response. The functional analysis of variance (ANOVA) and variance decompositions of the response, in addition to the mean and variance models, help achieve this objective. Estimation of these quantities is not easy, and few methods are able to quantify the estimation uncertainty. In this article, we show that the use of an orthonormal polynomial model of the simulator leads to simple formulas for the functional ANOVA and variance decompositions, as well as the mean and variance models. We show that estimation uncertainty can be taken into account in a simple way by first fitting a Gaussian process model to the experiment data and then approximating it with the orthonormal polynomial model. This leads to a joint normal distribution for the polynomial coefficients that quantifies the estimation uncertainty. Supplementary materials for this article are available online.
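The simple formulas alluded to above follow from orthonormality: once the emulator is written in a tensor-product orthonormal polynomial basis, each functional ANOVA term's variance is a sum of squared coefficients. A small sketch (Python, with made-up coefficients; not the authors' code):

import numpy as np

def variance_decomposition(coefs):
    # `coefs` maps a multi-index (one degree per input) to the coefficient of the
    # corresponding orthonormal basis function. For an orthonormal expansion the
    # total variance is the sum of squared non-constant coefficients, and the
    # variance attributed to a group of inputs sums the coefficients whose
    # multi-index is nonzero exactly on that group.
    total = sum(c ** 2 for idx, c in coefs.items() if any(idx))
    contrib = {}
    for idx, c in coefs.items():
        group = tuple(i for i, d in enumerate(idx) if d > 0)
        if group:
            contrib[group] = contrib.get(group, 0.0) + c ** 2
    return total, {g: v / total for g, v in contrib.items()}

# Example with two inputs: constant, x1, x2 (degree 2), and an x1*x2 interaction
coefs = {(0, 0): 1.5, (1, 0): 0.8, (0, 2): 0.3, (1, 1): 0.2}
total_var, anova_shares = variance_decomposition(coefs)   # normalized ANOVA shares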

11.
Robust parameter designs are widely used to produce products and processes that perform consistently well across various conditions known as noise factors. Recently, the robust parameter design method has been implemented in computer experiments. The conventional product array structure becomes unsuitable because of its extensive number of runs and its reliance on polynomial modeling. In this article, we propose a new framework, robust parameter design via stochastic approximation (RPD-SA), to efficiently optimize robust parameter design criteria. It can be applied to general robust parameter design problems, but is particularly powerful in the context of computer experiments. It has the following four advantages: (1) fast convergence to the optimal product setting with a small number of function evaluations; (2) incorporation of high-order effects of both design and noise factors; (3) adaptation to constrained irregular regions of operability; (4) no requirement of a statistical analysis phase. In the numerical studies, we compare RPD-SA with Monte Carlo sampling followed by Newton–Raphson-type optimization. An “Airfoil” example is used to compare the performance of RPD-SA, conventional product array designs, and space-filling designs with the Gaussian process. The studies show that RPD-SA has preferable performance in terms of effectiveness, efficiency, and reliability.
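The stochastic-approximation idea can be illustrated with a bare-bones SPSA-style loop (Python/NumPy; a generic sketch, not the RPD-SA algorithm itself, and the names `spsa_rpd`, `loss`, and `noise_sampler` are illustrative): at each step the noise factors are sampled, a quality loss is evaluated on the simulator or its emulator, and the control setting is nudged along a noisy gradient estimate.

import numpy as np

def spsa_rpd(loss, x0, noise_sampler, iters=200, a=0.1, c=0.05, seed=0):
    # loss(x, z): quality loss at control setting x with sampled noise factors z.
    # Simultaneous-perturbation gradient estimate with fresh noise each iteration,
    # so the iterate drifts toward a setting that is robust on average.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    for k in range(1, iters + 1):
        ak, ck = a / k, c / k ** 0.25                 # slowly decaying gains
        delta = rng.choice([-1.0, 1.0], size=x.shape)
        z = noise_sampler(rng)
        g = (loss(x + ck * delta, z) - loss(x - ck * delta, z)) / (2 * ck) * (1 / delta)
        x = np.clip(x - ak * g, 0.0, 1.0)             # project back onto the unit box
    return x

# Example: quadratic loss with an interaction between control x and noise z
loss = lambda x, z: (x[0] - 0.3) ** 2 + (x[1] + 0.5 * z[0] - 0.6) ** 2
noise = lambda rng: rng.normal(0.0, 0.2, size=1)
x_robust = spsa_rpd(loss, x0=[0.5, 0.5], noise_sampler=noise)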

12.
Xu He, Technometrics, 2019, 61(1): 66–76
Space-filling designs are popular choices for computer experiments. A sliced design is a design that can be partitioned into several subdesigns. We propose a new type of sliced space-filling design called sliced rotated sphere packing designs. Their full designs and subdesigns are rotated sphere packing designs. They are constructed by rescaling, rotating, translating, and extracting the points from a sliced lattice. We provide two fast algorithms to generate such designs. Furthermore, we propose a strategy to use sliced rotated sphere packing designs adaptively. Under this strategy, initial runs are uniformly distributed in the design space, follow-up runs are added by incorporating information gained from initial runs, and the combined design is space-filling for any local region. Examples are given to illustrate its potential application.

13.
Technometrics, 2013, 55(4): 527–541
Computer simulation often is used to study complex physical and engineering processes. Although a computer simulator often can be viewed as an inexpensive way to gain insight into a system, it still can be computationally costly. Much of the recent work on the design and analysis of computer experiments has focused on scenarios where the goal is to fit a response surface or to optimize the process. In this article we develop a sequential methodology for estimating a contour from a complex computer code. The approach uses a stochastic process model as a surrogate for the computer simulator. The surrogate model and its associated uncertainty are key components of a new criterion used to identify the computer trials aimed specifically at improving the contour estimate. The proposed approach is applied to the exploration of a contour for a network queuing system. Issues related to practical implementation of the proposed approach are also addressed.
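A simple surrogate-based score of the kind used for contour estimation can be sketched as follows (Python/SciPy; a generic "probability of being near the contour" criterion under the GP predictive distribution, not necessarily the exact criterion developed in the article): candidates whose predictive distribution places much mass in a band around the target level a are the ones most worth running next.

import numpy as np
from scipy.stats import norm

def near_contour_probability(mu, sigma, a, eps_mult=1.0):
    # P( |Y(x) - a| <= eps ) under the GP predictive N(mu, sigma^2),
    # with the band half-width eps tied to the local predictive sd.
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    eps = eps_mult * sigma
    return norm.cdf((a + eps - mu) / sigma) - norm.cdf((a - eps - mu) / sigma)

# Example: score three candidate inputs against the contour level a = 0.7
score = near_contour_probability(mu=[0.4, 0.69, 1.1], sigma=[0.2, 0.05, 0.3], a=0.7)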

14.
An efficient sequential optimization approach for complex computer models was presented by Jones et al. (1998). After fitting a stochastic process model based on an initial space-filling design, the model is sequentially refined using the expected improvement criterion. This criterion balances the need to search in areas of the design space where the prediction is optimal against the need to search where the model uncertainty is high. The approach can easily be extended to physical processes. Since in practice the overall quality of the products of a production process is assessed by more than one response, a multivariate version of the expected improvement criterion is proposed based on desirability functions. This criterion is then used to optimize a metal spinning process.
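A Monte Carlo version of the multivariate criterion described above can be sketched in a few lines (Python/NumPy; the desirability definition and the independence assumption across responses are illustrative simplifications): map each simulated response vector through desirability functions, combine them geometrically, and average the improvement over the current best overall desirability.

import numpy as np

def desirability_larger_is_better(y, low, high):
    # Ramp from 0 below `low` to 1 above `high` (one common one-sided form).
    return np.clip((y - low) / (high - low), 0.0, 1.0)

def expected_desirability_improvement(mu, sigma, bounds, d_best, n_mc=4000, seed=0):
    # mu, sigma: GP predictive means/sds of the k responses at one candidate point
    # (responses treated as independent here); bounds: (low, high) per response.
    rng = np.random.default_rng(seed)
    y = rng.normal(mu, sigma, size=(n_mc, len(mu)))
    d = np.ones(n_mc)
    for j, (low, high) in enumerate(bounds):
        d *= desirability_larger_is_better(y[:, j], low, high)
    d = d ** (1.0 / len(mu))                      # geometric mean of desirabilities
    return np.mean(np.maximum(d - d_best, 0.0))   # Monte Carlo expected improvement

# Example: two responses at one candidate, current best overall desirability 0.55
edi = expected_desirability_improvement(mu=[3.1, 40.0], sigma=[0.4, 5.0],
                                        bounds=[(2.0, 4.0), (30.0, 60.0)], d_best=0.55)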

15.
Computer models of dynamic systems produce outputs that are functions of time; models that solve systems of differential equations often have this character. In many cases, time series output can be usefully reduced via principal components to simplify analysis. Time-indexed inputs, such as the functions that describe time-varying boundary conditions, are also common with such models. However, inputs that are functions of time often do not have one or a few “characteristic shapes” that are more common with output functions, and so principal component representation has less potential for reducing the dimension of input functions. In this article, Gaussian process surrogates are described for models with inputs and outputs that are both functions of time. The focus is on construction of an appropriate covariance structure for such surrogates, some experimental design issues, and an application to a model of marrow cell dynamics.

16.
The Gaussian process (GP) is a popular method for emulating deterministic computer simulation models. Its natural extension to computer models with multivariate outputs employs a multivariate Gaussian process (MGP) framework. Nevertheless, with a significant increase in the number of design points and the number of model parameters, building an MGP model is a very challenging task. Under a general MGP model framework with nonseparable covariance functions, we propose an efficient metamodeling approach featuring a pairwise model building scheme. The proposed method has excellent scalability even for a large number of output levels. Some properties of the proposed method have been investigated, and its performance has been demonstrated through several numerical examples. Supplementary materials for this article are available online.

17.
Computer models of physical systems are often written based on known theory or “first principles” of a system, reflecting substantial knowledge of each component or subsystem, but also the need to use a numerical approach to mimic the more complex behavior of the entire system of interest. However, in some cases, there is insufficient known theory to encode all necessary aspects of the system, and empirical studies are required to generate approximate functional forms. We consider the question of how a physical experiment might be designed to approximate one module or subroutine of a computer model that can otherwise be written from first principles. The concept of preposterior analysis is used to suggest an approach to generating a kind of I-optimal design for this purpose, when the remainder of the computer model is a composition of nonlinear functions that can be directly evaluated as part of the design process. Extensions are then described for situations in which one or more known components must themselves be approximated by metamodels due to the large number of evaluations needed, and for computer models that have iterative structure. A simple “toy” model is used to demonstrate the ideas. Online supplementary material accompanies this article.

18.
When categorical noise variables are present in the Robust Parameter Design (RPD) context, it is possible to reduce process variance not only by manipulating the levels of the control factors but also by adjusting the proportions associated with the levels of the categorical noise factor(s). When no adjustment factors exist, or when the adjustment factors are unable to bring the process mean close to target, a popular approach for determining optimal operating conditions is to find the levels of the control factors that minimize the estimated mean squared error of the response. Although this approach is effective, engineers may have a difficult time translating mean squared error into quality. We propose the use of a parts-per-million defective objective function. Furthermore, we point out that in many situations the levels of the control factors are not equally desirable due to cost and/or time issues. We term these types of factors non-uniform control factors. We propose the use of desirability functions to determine optimal operating conditions when non-uniform control factors are present and illustrate this methodology with an example from industry. Copyright © 2006 John Wiley & Sons, Ltd.
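Under a normal-response assumption, the parts-per-million defective objective mentioned above reduces to evaluating two tail probabilities of the fitted mean and variance models (a minimal sketch in Python/SciPy; the spec limits and the normality assumption are illustrative):

import numpy as np
from scipy.stats import norm

def ppm_defective(mean, sd, lsl, usl):
    # Expected parts per million outside the spec limits, assuming Y ~ N(mean, sd^2)
    # with `mean` and `sd` coming from the fitted mean and variance models.
    p_out = norm.cdf((lsl - mean) / sd) + norm.sf((usl - mean) / sd)
    return 1e6 * p_out

# Example: compare two candidate control-factor settings against specs [9.5, 10.5]
for mean, sd in [(10.02, 0.12), (10.00, 0.20)]:
    print(mean, sd, ppm_defective(mean, sd, lsl=9.5, usl=10.5))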

19.
This paper deals with optimal replacement policies following the expiration of warranty, for both renewing and non-renewing warranties. If the system fails during its warranty period, it is replaced with a new one; if the system fails after the warranty period has expired, it is minimally repaired at each failure. The criterion used to determine the optimality of the replacement period is an overall value function, established from the expected downtime and the expected cost rate combined. First, we develop the expected downtime per unit time and the expected cost rate per unit time for our replacement model when the cost and downtime structures of maintaining the system are given. The overall value function suggested by Jiang and Ji [Age replacement policy: a multi-attribute value model. Reliab Eng Syst Saf 2002;76:311–8] is then utilized to determine the optimal maintenance period based on the expected downtime and the expected cost rate. Numerical examples are presented for illustrative purposes.
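As background for the cost-rate component of such a value function, a generic periodic-replacement-with-minimal-repair criterion can be written as below (Python; a textbook-style sketch with a Weibull failure intensity, not the paper's warranty-specific model, which also folds in downtime and the warranty period):

import numpy as np

def cost_rate(T, c_replace, c_minimal, beta, eta):
    # Long-run expected cost per unit time when the unit is replaced every T time
    # units, with minimal repairs at failures in between. The expected number of
    # minimal repairs over (0, T] equals the cumulative hazard H(T) = (T / eta)**beta.
    return (c_replace + c_minimal * (T / eta) ** beta) / T

# Crude numerical search for the replacement period minimizing the cost rate
grid = np.linspace(0.1, 10.0, 1000)
rates = cost_rate(grid, c_replace=50.0, c_minimal=5.0, beta=2.0, eta=3.0)
T_opt = grid[np.argmin(rates)]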

20.
Experiments where the response of interest is a curve or ‘profile’ arise in a variety of applications in engineering practice. In a recent paper (Journal of Quality Technology, 44, 2, pp. 117–135, 2012), a mixed-effects Bayesian approach was proposed for the Bayesian optimization of profile response systems, where a particular shape of the profile response defines desired properties of the product or process. This paper proposes an alternative spatio-temporal Gaussian random function process model for such profile response systems, which is more flexible with respect to the types of desired profile shapes that can be modeled and allows us to model profile-to-profile correlation, if this exists. The method is illustrated with real examples taken from the literature, and practical aspects related to model building and diagnostics are discussed. Copyright © 2013 John Wiley & Sons, Ltd.
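A common starting point for such a spatio-temporal Gaussian-process model is a separable covariance over (control setting, within-profile location) pairs, sketched below (Python/NumPy; the kernel forms and parameters are illustrative, and the article's model need not be separable):

import numpy as np

def separable_cov(X1, T1, X2, T2, theta_x=0.5, theta_t=0.1, var=1.0):
    # k((x, t), (x', t')) = var * k_x(x, x') * k_t(t, t'); correlation is modeled
    # both across profiles (through x) and along each profile (through t).
    dx = np.linalg.norm(X1[:, None, :] - X2[None, :, :], axis=2)
    dt = np.abs(T1[:, None] - T2[None, :])
    return var * np.exp(-(dx / theta_x) ** 2) * np.exp(-(dt / theta_t) ** 2)

# Example: covariance among all (setting, location) pairs of two observed profiles
X = np.repeat(np.array([[0.2, 0.7], [0.8, 0.3]]), 5, axis=0)   # 2 settings x 5 points
T = np.tile(np.linspace(0.0, 1.0, 5), 2)                        # within-profile locations
K = separable_cov(X, T, X, T)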
