Similar Documents
20 similar documents retrieved.
1.
    
Expensive black box systems arise in many engineering applications but can be difficult to optimize because their output functions may be complex, multi-modal, and difficult to understand. The task becomes even more challenging when the optimization is subject to multiple constraints and no derivative information is available. In this article, we combine response surface modeling and filter methods in order to solve problems of this nature. In employing a filter algorithm for solving constrained optimization problems, we establish a novel probabilistic metric for guiding the filter. Overall, this hybridization of statistical modeling and nonlinear programming efficiently utilizes both global and local search in order to quickly converge to a global solution to the constrained optimization problem. To demonstrate the effectiveness of the proposed methods, we perform numerical tests on a synthetic test problem, a problem from the literature, and a real-world hydrology computer experiment optimization problem.
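For readers who want a concrete feel for how statistical surrogates can steer a constrained black-box search, the short Python sketch below scores candidate points by expected improvement on the objective weighted by the predicted probability of constraint feasibility. This is a generic acquisition rule under a toy objective and constraint of our own choosing, not the article's filter-based probabilistic metric; all function names and settings are illustrative.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Toy constrained problem (hypothetical stand-in for an expensive simulator).
def f(x): return (x[:, 0] - 0.3) ** 2 + (x[:, 1] - 0.7) ** 2   # objective
def g(x): return 0.5 - x[:, 0] - x[:, 1]                       # feasible when g(x) <= 0

rng = np.random.default_rng(0)
X = rng.uniform(size=(20, 2))                                  # initial space-filling sample
gp_f = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, f(X))
gp_g = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, g(X))

def score(x_cand, best):
    # Expected improvement on the objective, weighted by the GP's
    # probability that the constraint is satisfied at the candidate.
    mu_f, sd_f = gp_f.predict(x_cand, return_std=True)
    mu_g, sd_g = gp_g.predict(x_cand, return_std=True)
    z = (best - mu_f) / np.maximum(sd_f, 1e-12)
    ei = (best - mu_f) * norm.cdf(z) + sd_f * norm.pdf(z)
    p_feas = norm.cdf((0.0 - mu_g) / np.maximum(sd_g, 1e-12))
    return ei * p_feas

feas = g(X) <= 0
best = f(X)[feas].min() if feas.any() else f(X).min()
cand = rng.uniform(size=(5000, 2))
x_next = cand[np.argmax(score(cand, best))]
print("next evaluation point:", x_next)
```

The highest-scoring candidate would then be evaluated on the expensive simulator and both surrogates refit before the next iteration.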

2.
Profile monitoring is often conducted when the product quality is characterized by profiles. Although existing methods almost exclusively deal with univariate profiles, observations of multivariate profile data are increasingly encountered in practice. These data are seldom analyzed in the area of statistical process control due to lack of effective modeling tools. In this article, we propose to analyze them using the multivariate Gaussian process model, which offers a natural way to accommodate both within-profile and between-profile correlations. To mitigate the prohibitively high computation in building such models, a pairwise estimation strategy is adopted. Asymptotic normality of the parameter estimates from this approach has been established. Comprehensive simulation studies are conducted. In the case study, the method has been demonstrated using transmittance profiles from low-emittance glass. Supplementary materials for this article are available online.

3.
Technometrics, 2013, 55(4): 527-541
Computer simulation often is used to study complex physical and engineering processes. Although a computer simulator often can be viewed as an inexpensive way to gain insight into a system, it still can be computationally costly. Much of the recent work on the design and analysis of computer experiments has focused on scenarios where the goal is response surface estimation or process optimization. In this article we develop a sequential methodology for estimating a contour from a complex computer code. The approach uses a stochastic process model as a surrogate for the computer simulator. The surrogate model and associated uncertainty are key components in a new criterion used to identify the computer trials aimed specifically at improving the contour estimate. The proposed approach is applied to exploration of a contour for a network queuing system. Issues related to practical implementation of the proposed approach also are addressed.
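A rough illustration of sequential contour estimation with a surrogate: the sketch below refits a Gaussian process after each run and selects the next input where predictive uncertainty is concentrated near the target level. The score used here is a simple stand-in, not the criterion proposed in the article, and the simulator and level are hypothetical.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical simulator and target contour level.
simulator = lambda x: np.sin(6 * x[:, 0]) + x[:, 1] ** 2
level = 0.5

rng = np.random.default_rng(1)
X = rng.uniform(size=(15, 2))
y = simulator(X)

for step in range(10):                                # sequential augmentation
    gp = GaussianProcessRegressor(kernel=RBF(0.3), normalize_y=True).fit(X, y)
    cand = rng.uniform(size=(2000, 2))
    mu, sd = gp.predict(cand, return_std=True)
    # Contour-oriented score: predictive uncertainty weighted by how close
    # the predicted mean is to the target level (a stand-in criterion).
    score = sd * norm.pdf((level - mu) / np.maximum(sd, 1e-12))
    x_new = cand[np.argmax(score)][None, :]
    X = np.vstack([X, x_new])
    y = np.append(y, simulator(x_new))
```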

4.
A mixture experiment is characterized by having two or more inputs that are specified as a percentage contribution to a total amount of material. In such situations, the input variables are correlated because they must sum to one. Consequently, additional care must be taken when fitting statistical models or visualizing the effect of one or more inputs on the response. In this article, we consider the use of a Gaussian process to model the output from a computer simulator taking a mixture input. We introduce a procedure to perform global sensitivity analysis of the code output, providing main effects and revealing interactions. The resulting methodology is illustrated using a function with analytically tractable results for comparison, a chemical compositional simulator, and a physical experiment. Supplementary materials providing assistance with implementing this methodology are available online.
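The following sketch shows one simple way to fit a Gaussian process to a mixture input and read off a crude main effect for one component by averaging predictions while the remaining proportions are rescaled to respect the sum-to-one constraint. It is only a rough analogue of the sensitivity analysis developed in the article; the three-component simulator and all settings are invented for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical simulator taking a 3-component mixture (rows sum to 1).
simulator = lambda x: 10 * x[:, 0] * x[:, 1] + 5 * x[:, 2] ** 2

rng = np.random.default_rng(2)
X = rng.dirichlet(np.ones(3), size=60)          # random design on the simplex
gp = GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True).fit(X, simulator(X))

# Crude main-effect estimate for component 0: fix its proportion on a grid and
# average predictions over the remaining components rescaled to sum to 1 - v.
grid = np.linspace(0.05, 0.9, 15)
main_effect = []
for v in grid:
    rest = rng.dirichlet(np.ones(2), size=500) * (1 - v)
    Xc = np.column_stack([np.full(500, v), rest])
    main_effect.append(gp.predict(Xc).mean())
print(np.round(main_effect, 2))
```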

5.
Computer models of dynamic systems produce outputs that are functions of time; models that solve systems of differential equations often have this character. In many cases, time series output can be usefully reduced via principal components to simplify analysis. Time-indexed inputs, such as the functions that describe time-varying boundary conditions, are also common with such models. However, inputs that are functions of time often do not have one or a few “characteristic shapes” that are more common with output functions, and so, principal component representation has less potential for reducing the dimension of input functions. In this article, Gaussian process surrogates are described for models with inputs and outputs that are both functions of time. The focus is on construction of an appropriate covariance structure for such surrogates, some experimental design issues, and an application to a model of marrow cell dynamics.

6.
Gaussian processes have become a standard framework for modeling deterministic computer simulations and producing predictions of the response surface. This article investigates a new covariance function that is shown to offer superior prediction compared to the more common covariances for computer simulations of real physical systems. This is demonstrated via a gamut of realistic examples. A simple, closed-form expression for the covariance is derived as a limiting form of a Brownian-like covariance model as it is extended to some hypothetical higher-dimensional input domain, and so we term it a lifted Brownian covariance. This covariance has connections with the multiquadric kernel. Through analysis of the kriging model, this article offers some theoretical comparisons between the proposed covariance model and existing covariance models. The major emphasis of the theory is explaining why the proposed covariance is superior to its traditional counterparts for many computer simulations of real physical systems. Supplementary materials for this article are available online.

7.
Engineering model development involves several simplifying assumptions for the purpose of mathematical tractability, which are often not realistic in practice. This leads to discrepancies in the model predictions. A commonly used statistical approach to overcome this problem is to build a statistical model for the discrepancies between the engineering model and observed data. In contrast, an engineering approach would be to find the causes of discrepancy and fix the engineering model using first principles. However, the engineering approach is time consuming, whereas the statistical approach is fast. The drawback of the statistical approach is that it treats the engineering model as a black box and therefore, the statistically adjusted models lack physical interpretability. This article proposes a new framework for model calibration and statistical adjustment. It tries to open up the black box using simple main effects analysis and graphical plots and introduces statistical models inside the engineering model. This approach leads to simpler adjustment models that are physically more interpretable. The approach is illustrated using a model for predicting the cutting forces in a laser-assisted mechanical micro-machining process. This article has supplementary material online.

8.
Sequential experiment design strategies have been proposed for efficiently augmenting initial designs to solve many problems of interest to computer experimenters, including optimization, contour and threshold estimation, and global prediction. We focus on batch sequential design strategies for achieving maturity in global prediction of discrepancy inferred from computer model calibration. Predictive maturity focuses on adding field experiments to efficiently improve discrepancy inference. Several design criteria are extended to allow batch augmentation, including integrated and maximum mean square error, maximum entropy, and two expected improvement criteria. In addition, batch versions of maximin distance and weighted distance criteria are developed. Two batch optimization algorithms are considered: modified Fedorov exchange and a binning methodology motivated by optimizing augmented fractional factorial skeleton designs.
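As a rough analogue of the batch distance-based criteria mentioned above, the sketch below augments an existing design with a small batch chosen greedily to maximize each new point's minimum distance to all points selected so far. It is a simple stand-in for the exchange and binning algorithms in the article; the existing design and candidate pool are synthetic.

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(3)
existing = rng.uniform(size=(12, 2))      # hypothetical existing design points
candidates = rng.uniform(size=(5000, 2))  # candidate pool
batch = []

# Greedy maximin augmentation: each new point maximizes its minimum distance
# to the existing design plus the points already chosen for the batch.
for _ in range(4):
    current = np.vstack([existing] + ([np.vstack(batch)] if batch else []))
    d = cdist(candidates, current).min(axis=1)
    batch.append(candidates[np.argmax(d)][None, :])

print(np.vstack(batch))
```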

9.
    
Computer models of physical systems are often written based on known theory or “first principles” of a system, reflecting substantial knowledge of each component or subsystem, but also the need to use a numerical approach to mimic the more complex behavior of the entire system of interest. However, in some cases, there is insufficient known theory to encode all necessary aspects of the system, and empirical studies are required to generate approximate functional forms. We consider the question of how a physical experiment might be designed to approximate one module or subroutine of a computer model that can otherwise be written from first principles. The concept of preposterior analysis is used to suggest an approach to generating a kind of I-optimal design for this purpose, when the remainder of the computer model is a composition of nonlinear functions that can be directly evaluated as part of the design process. Extensions are then described for situations in which one or more known components must themselves be approximated by metamodels due to the large number of evaluations needed, and for computer models that have iterative structure. A simple “toy” model is used to demonstrate the ideas. Online supplementary material accompanies this article.

10.
For deterministic computer simulations, Gaussian process models are a standard procedure for fitting data. These models can be used only when the study design avoids having replicated points. This characteristic is also desirable for one-dimensional projections of the design, since it may happen that one of the design factors has a strongly nonlinear effect on the response. Latin hypercube designs have uniform one-dimensional projections, but are not efficient for fitting low-order polynomials when there is a small error variance. D-optimal designs are very efficient for polynomial fitting but have substantial replication in projections. We propose a new class of designs that bridge the gap between D-optimal designs and D-optimal Latin hypercube designs. These designs guarantee a minimum distance between points in any one-dimensional projection, allowing for the fit of either polynomial or Gaussian process models. Subject to this constraint they are D-optimal for a prespecified model.
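The defining constraint of these designs is a minimum separation in every one-dimensional projection. The sketch below enforces only that constraint by rejection sampling, with no D-optimality step, so it is merely a toy illustration of the projection property; the run size and separation delta are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, delta = 12, 2, 0.04     # runs, factors, minimum 1-D projection distance
design = []

# Greedy construction: accept a random candidate only if it is at least
# `delta` away from every existing point in EVERY one-dimensional projection.
while len(design) < n:
    x = rng.uniform(size=d)
    if all(np.abs(x - p).min() >= delta for p in design):
        design.append(x)

print(np.round(np.array(design), 3))
```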

11.
We investigate the merits of replication, and provide methods for optimal design (including replicates), with the goal of obtaining globally accurate emulation of noisy computer simulation experiments. We first show that replication can be beneficial from both design and computational perspectives, in the context of Gaussian process surrogate modeling. We then develop a lookahead-based sequential design scheme that can determine if a new run should be at an existing input location (i.e., replicate) or at a new one (explore). When paired with a newly developed heteroscedastic Gaussian process model, our dynamic design scheme facilitates learning of signal and noise relationships which can vary throughout the input space. We show that it does so efficiently, on both computational and statistical grounds. In addition to illustrative synthetic examples, we demonstrate performance on two challenging real-data simulation experiments, from inventory management and epidemiology. Supplementary materials for the article are available online.
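One computational benefit of replication is easy to illustrate: replicates at a site can be averaged, and a Gaussian process can then be fit to the site means with per-site noise variances, so the model scales with the number of unique inputs rather than the total number of runs. The sketch below shows that device with scikit-learn; it is not the heteroscedastic GP or lookahead design scheme of the article, and the simulator, replicate counts, and noise level are hypothetical.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Noisy toy simulator, evaluated with replicates at each unique input.
rng = np.random.default_rng(5)
truth = lambda x: np.sin(4 * x)
x_unique = np.linspace(0, 1, 10)
reps = rng.integers(2, 6, size=10)            # replicate counts per site
ybar, noise_var = [], []
for x, r in zip(x_unique, reps):
    y = truth(x) + rng.normal(0, 0.3, size=r)
    ybar.append(y.mean())
    noise_var.append(0.3 ** 2 / r)            # variance of the sample mean

# Fitting to replicate means with per-site noise keeps the GP the size of
# the unique design (10 points) rather than the full data (sum of reps).
gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=np.array(noise_var))
gp.fit(x_unique[:, None], ybar)
print(gp.predict(np.array([[0.25], [0.75]])))
```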

12.
Advanced Powder Technology, 2014, 25(4): 1285-1291
The synthesis of compounds in a multi-component system involving volatile elements remains a costly trial-and-error practice today, although the governing thermo-chemistry theory and relevant computer modeling tools are well-established. We report a design of synthesis routes based on thermodynamic principles and the fabrication of Cu2ZnSnSe4 (CZTSe) via a solvo-thermal method. To suppress the sublimation of Se, we make use of binary compounds of high vaporization temperature by first converting solid Se into a liquid-like alloy. To simulate the sublimation and vaporization behavior of the system, estimates of the liquid-phase Gibbs energy must first be derived for the compounds SnSe, SnCu, SnZn and SnS using FactSage (a well-established computational thermo-chemistry platform). Finally, single-phase CZTSe compounds are successfully synthesized and characterized. The relationships among synthesis parameters, microstructure, optical band gap, and visible light absorption are analyzed based on thermodynamic principles. This thermo-chemistry-aided research strategy is significant not only to general chemical synthesis involving volatile constituent(s) but also to a wide range of subjects in materials chemistry.

13.
This article proposes a two-stage statistical method for the analysis of multivariate computer experiments when at least one of the output dimensions is large. The stage-one data are modeled by a multivariate extension of a widely used scalar statistical model for computer output. Conditioned on stage-one data, a simple statistical model is then proposed for the stage-two data. The method is demonstrated in a geophysical application involving an ocean model.

14.
Gaussian processes (GPs) can be used to approximate complex non-linear functions with relative simplicity. Their regression performance is at least comparable to that achieved via artificial neural networks (ANNs) and, in fact, both methods are intrinsically related. They are both non-parametric and, as Neal (1994) [1] has shown, when the number of nodes in the hidden layer of a neural network tends to infinity, the ANN converges to a Gaussian process. In most cases, the GP will map a multivariate input into a univariate response. In this paper, however, we present an approach to process monitoring that combines several GPs so that multivariate responses can be appropriately modeled. We review a similar approach recently proposed in the literature and highlight some concerns related to it that need to be taken into consideration. Additionally, we propose an alternative procedure for the way in which new observations are mapped into the non-linear model. A simulation study is provided that will help understand the method's flexibility. Furthermore, results from a real example are also discussed.
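A minimal sketch of the general idea of combining several univariate GPs for multivariate responses follows: one independent GP is fit per response dimension, and a new observation is scored by summing its squared standardized residuals. This is a simplified stand-in for the approach discussed in the paper, with invented in-control data and an ad hoc monitoring statistic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical in-control data: 3 process inputs, 2 correlated responses.
rng = np.random.default_rng(6)
X = rng.normal(size=(100, 3))
Y = np.column_stack([X[:, 0] + 0.5 * X[:, 1] ** 2,
                     np.sin(X[:, 2]) + 0.3 * X[:, 0]]) + rng.normal(0, 0.1, (100, 2))

# One GP per response dimension (independent GPs, one simple way to combine
# several univariate GPs into a multivariate monitoring model).
kernel = RBF(length_scale=np.ones(3)) + WhiteKernel(0.01)
gps = [GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, Y[:, j])
       for j in range(Y.shape[1])]

def monitor_stat(x_new, y_new):
    # Squared standardized residuals summed over responses; large values
    # flag a potential out-of-control observation.
    stat = 0.0
    for j, gp in enumerate(gps):
        mu, sd = gp.predict(x_new, return_std=True)
        stat += ((y_new[j] - mu[0]) / sd[0]) ** 2
    return stat

print(monitor_stat(rng.normal(size=(1, 3)), np.array([0.2, -0.1])))
```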

15.
16.
    
Quality Engineering, 2012, 24(4): 661-665

17.
    
Quality Engineering, 2012, 24(1): 55-59

18.
Measuring Process Performance for Multiple Variables (total citations: 1; self-citations: 0; cited by others: 1)

19.
The construction of decision-theoretical Bayesian designs for realistically complex nonlinear models is computationally challenging, as it requires the optimization of analytically intractable expected utility functions over high-dimensional design spaces. We provide the most general solution to date for this problem through a novel approximate coordinate exchange algorithm. This methodology uses a Gaussian process emulator to approximate the expected utility as a function of a single design coordinate in a series of conditional optimization steps. It has the flexibility to address problems for any choice of utility function and for a wide range of statistical models with different numbers of variables, numbers of runs, and randomization restrictions. In contrast to existing approaches to Bayesian design, the method can find multi-variable designs in large numbers of runs without resorting to asymptotic approximations to the posterior distribution or expected utility. The methodology is demonstrated on a variety of challenging examples of practical importance, including design for pharmacokinetic models and design for mixed models with discrete data. For many of these models, Bayesian designs are not currently available. Comparisons are made to results from the literature, and to designs obtained from asymptotic approximations. Supplementary materials for this article are available online.
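The sketch below caricatures a few sweeps of approximate coordinate exchange: for each design coordinate in turn, a noisy expected-utility estimate is evaluated on a small grid, emulated with a one-dimensional Gaussian process, and the coordinate is moved to the emulator's maximizer. The toy utility function is invented purely so the loop runs; it is not a Bayesian expected utility, and the code is not the authors' implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Toy noisy "expected utility" estimator for a design (hypothetical stand-in
# for a Monte Carlo estimate under a nonlinear model).
rng = np.random.default_rng(7)
def utility_hat(design):
    return -np.sum((np.sort(design.ravel()) - np.linspace(0, 1, design.size)) ** 2) \
           + rng.normal(0, 0.01)

design = rng.uniform(size=(4, 1))      # 4 runs, 1 design variable

# Approximate coordinate exchange: emulate the noisy utility as a 1-D function
# of each coordinate and move the coordinate to the emulator's maximizer.
for sweep in range(3):
    for i in range(design.shape[0]):
        for j in range(design.shape[1]):
            grid = np.linspace(0, 1, 20)
            u = []
            for v in grid:
                trial = design.copy()
                trial[i, j] = v
                u.append(utility_hat(trial))
            gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-4,
                                          normalize_y=True).fit(grid[:, None], u)
            fine = np.linspace(0, 1, 400)[:, None]
            design[i, j] = fine[np.argmax(gp.predict(fine)), 0]
print(np.round(design, 3))
```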

20.
The purpose of model calibration is to make the model predictions closer to reality. The classical Kennedy–O’Hagan approach is widely used for model calibration, which can account for the inadequacy of the computer model while simultaneously estimating the unknown calibration parameters. In many applications, the phenomenon of censoring occurs when the exact outcome of the physical experiment is not observed, but is only known to fall within a certain region. In such cases, the Kennedy–O’Hagan approach cannot be used directly, and we propose a method to incorporate the censoring information when performing model calibration. The method is applied to study the compression phenomenon of liquid inside a bottle. The results show significant improvement over the traditional calibration methods, especially when the number of censored observations is large. Supplementary materials for this article are available online.
