Similar Literature
Found 20 similar documents (search time: 31 ms)
1.
The Gaussian process (GP) model provides a powerful methodology for calibrating a computer model in the presence of model uncertainties. However, if the data contain systematic experimental errors, then the GP model can lead to an unnecessarily complex adjustment of the computer model. In this work, we introduce an adjustment procedure that brings the computer model closer to the data by making minimal changes to it. This is achieved by applying a lasso-based variable selection on the systematic experimental error terms while fitting the GP model. Two real examples and simulations are presented to demonstrate the advantages of the proposed approach. This article has supplementary material available online.
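The lasso-based selection idea can be illustrated with a toy sketch (this is not the authors' procedure or data; the batch structure and offsets below are invented for illustration): field observations are modeled as the simulator output plus batch-specific systematic offsets, and a lasso on batch indicators decides which offsets are genuinely nonzero, keeping the adjustment minimal.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Toy setup: field data equal the computer model plus a systematic offset in
# one experimental batch plus small measurement noise.
n = 60
x = rng.uniform(0, 1, size=n)
computer_model = np.sin(2 * np.pi * x)      # stand-in for an expensive simulator
batch = np.repeat([0, 1, 2], n // 3)        # three experimental batches
true_offset = np.array([0.0, 0.8, 0.0])     # only batch 1 is systematically biased
y = computer_model + true_offset[batch] + rng.normal(0, 0.05, size=n)

# Lasso on batch indicators: the L1 penalty zeroes out offsets that the data
# do not support, so only genuinely biased batches get an adjustment.
Z = np.eye(3)[batch]                        # one indicator column per batch
lasso = Lasso(alpha=0.05, fit_intercept=False)
lasso.fit(Z, y - computer_model)
selected = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)
```

In this sketch only the biased batch survives the penalty; in the article the selection is performed jointly with GP fitting rather than on fixed residuals.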

2.
Computer model calibration is the process of determining input parameter settings to a computational model that are consistent with physical observations. This is often quite challenging due to the computational demands of running the model. In this article, we use the ensemble Kalman filter (EnKF) for computer model calibration. The EnKF has proven effective in quantifying uncertainty in data assimilation problems such as weather forecasting and ocean modeling. We find that the EnKF can be directly adapted to Bayesian computer model calibration. It is motivated by the mean and covariance relationship between the model inputs and outputs, producing an approximate posterior ensemble of the calibration parameters. While this approach may not fully capture effects due to nonlinearities in the computer model response, its computational efficiency makes it a viable choice for exploratory analyses, design problems, or problems with large numbers of model runs, inputs, and outputs.
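A minimal EnKF update for a scalar calibration parameter might look as follows (the simulator, prior, and observation here are toy placeholders; real applications involve expensive codes). Each ensemble member is shifted using the sample cross-covariance between parameters and outputs, which is the linear-Gaussian approximation the abstract refers to:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy simulator with one calibration parameter theta (illustrative only).
def simulator(theta):
    return np.array([theta, theta ** 2])

y_obs = simulator(1.5) + rng.normal(0, 0.05, size=2)   # synthetic observation
R = 0.05 ** 2 * np.eye(2)                              # observation error covariance

# Prior ensemble of the calibration parameter and its simulated outputs.
m = 200
theta = rng.normal(0.0, 1.0, size=m)
Y = np.stack([simulator(t) for t in theta])

# EnKF update via sample cross- and output covariances (Kalman gain K).
th_anom = theta - theta.mean()
Y_anom = Y - Y.mean(axis=0)
C_ty = th_anom @ Y_anom / (m - 1)          # cross-covariance, shape (2,)
C_yy = Y_anom.T @ Y_anom / (m - 1)         # output covariance, shape (2, 2)
K = C_ty @ np.linalg.inv(C_yy + R)
perturbed = y_obs + rng.multivariate_normal(np.zeros(2), R, size=m)
theta_post = theta + (perturbed - Y) @ K   # approximate posterior ensemble
```

The updated ensemble concentrates near the truth used to generate the observation; with strongly nonlinear responses the approximation degrades, as the abstract notes.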

3.
Metamodels are widely used to facilitate the analysis and optimization of engineering systems that involve computationally expensive simulations. Kriging is a metamodelling technique that is well known for its ability to build surrogate models of responses with non‐linear behaviour. However, the assumption of a stationary covariance structure underlying Kriging does not hold in situations where the level of smoothness of a response varies significantly. Although non‐stationary Gaussian process models have been studied for years in statistics and geostatistics communities, this has largely been for physical experimental data in relatively low dimensions. In this paper, the non‐stationary covariance structure is incorporated into Kriging modelling for computer simulations. To represent the non‐stationary covariance structure, we adopt a non‐linear mapping approach based on parameterized density functions. To avoid over‐parameterizing for the high dimension problems typical of engineering design, we propose a modified version of the non‐linear map approach, with a sparser, yet flexible, parameterization. The effectiveness of the proposed method is demonstrated through both mathematical and engineering examples. The robustness of the method is verified by testing multiple functions under various sampling settings. We also demonstrate that our method is effective in quantifying prediction uncertainty associated with the use of metamodels. Copyright © 2006 John Wiley & Sons, Ltd.

4.
Partial differential equation (PDE) models of physical systems with initial and boundary conditions are often solved numerically via a computer code called the simulator. To study the dependence of the solution on a functional input, the input is expressed as a linear combination of a finite number of basis functions, and the coefficients of the bases are varied. In such studies, Gaussian process (GP) emulators can be constructed to reduce the amount of simulations required from time-consuming simulators. For linear initial-boundary value problems (IBVPs) with functional inputs as additive terms in the PDE, initial conditions, or boundary conditions, the IBVP solution is theoretically a linear function of the coefficients conditional on all other inputs, which is a result called the principle of superposition. Since numerical errors cause deviation from linearity and nonlinear IBVPs are widely solved in practice, we generalize the result to account for nonlinearity. Based on this generalized result, we propose mean and covariance functions for building GP emulators that capture the approximate conditional linear effect of the coefficients. Numerical simulations demonstrate the substantial improvements in prediction performance achieved with the proposed emulator. Matlab codes for reproducing the results in this article are available in the online supplement.

5.
During crack growth of real materials, the total energy released can be partitioned into elastic and dissipative terms. By analyzing material models with mechanisms for dissipating energy and tracking all energy terms during crack growth, it is proposed that computer simulations of fracture can model crack growth by a total energy balance condition. One approach for developing fracture simulations is illustrated by analysis of elastic-plastic fracture. General equations were derived to predict crack growth and crack stability in terms of global energy release rate and irreversible energy effects. To distinguish plastic fracture from non-linear elastic fracture, it was necessary to introduce an extra irreversible energy term. A key component of fracture simulations is to model this extra work. A model used here was to assume that the extra irreversible energy is proportional to the plastic work in a plastic-flow analysis. This idea was used to develop a virtual material based on Dugdale yield zones at the crack tips. A Dugdale virtual material was subjected to computer fracture experiments that showed it has many fracture properties in common with real ductile materials. A Dugdale material can serve as a model material for new simulations with the goal of studying the role of structure in the fracture properties of composites. One sample calculation showed that the toughness of a Dugdale material in an adhesive joint mimics the effect of joint thickness on the toughness of real adhesives. It is expected, however, that better virtual materials will be required before fracture simulations will be a viable approach to studying composite fracture. The approach of this paper is extensible to more advanced plasticity models and therefore to the development of better virtual materials.

6.
Improving the quality of a product or process using a computer simulator is a much less expensive option than real physical testing. However, simulation using computationally intensive computer models can be time consuming, and therefore optimizing directly on the computer simulator can be infeasible. Experimental design and statistical modeling techniques can be used to overcome this problem. This article reviews experimental designs known as space-filling designs that are suitable for computer simulations. Special emphasis is given to a recently developed space-filling design called the maximum projection design. Its advantages are illustrated using a simulation conducted for optimizing a milling process.
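The space-filling idea can be sketched in a few lines with a Latin hypercube design; the maximin-by-random-search selection below is a simple stand-in for illustration, not the maximum projection criterion the article emphasizes:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

def latin_hypercube(n, d):
    """One random Latin hypercube: each column is a permuted, jittered grid,
    so every one-dimensional projection covers [0, 1) evenly."""
    grid = np.tile(np.arange(n), (d, 1))
    return (rng.permuted(grid, axis=1).T + rng.uniform(size=(n, d))) / n

def min_pairwise_dist(X):
    """Maximin criterion: the smallest distance between any two design points."""
    return min(np.linalg.norm(a - b) for a, b in combinations(X, 2))

# Keep the best of many random Latin hypercubes under the maximin criterion.
best = max((latin_hypercube(20, 3) for _ in range(200)), key=min_pairwise_dist)
```

Dedicated design packages use much better search strategies, but even this crude sketch shows the two ingredients: a stratified projection structure plus a space-filling criterion to choose among candidates.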

7.
This paper presents an adaptive, surrogate-based, engineering design methodology for the efficient use of numerical simulations of physical models. These surrogates are nonlinear regression models fitted with data obtained from deterministic numerical simulations using optimal sampling. A multistage Bayesian procedure is followed in the formulation of surrogates to support the evolutionary nature of engineering design. Information from computer simulations of different levels of accuracy and detail is integrated, updating surrogates sequentially to improve their accuracy. Data-adaptive optimal sampling is conducted by minimizing the sum of the eigenvalues of the prior covariance matrix. Metrics to quantify prediction errors are proposed and tested to evaluate surrogate accuracy given cost and time constraints. The proposed methodology is tested with a known analytical function to illustrate accuracy and cost tradeoffs. This methodology is then applied to the thermal design of embedded electronic packages with five design parameters. Temperature distributions of embedded electronic chip configurations are calculated using spectral element direct numerical simulations. Surrogates, built from 30 simulations in two stages, are used to predict responses of new design combinations and to minimize the maximum chip temperature.

8.
苏帅, 孙超, 杨益新. 《声学技术》 (Technical Acoustics), 2007, 26(2): 179-183
For the five-arm array, which offers good expandability in underwater sonobuoy systems, a null-broadening method is introduced to improve the robustness of its adaptive beamformer. The method mainly comprises Mailloux's array covariance matrix expansion approach and Zatman's dispersion synthesis approach. Both replace the actual interference source with multiple virtual interference sources, which jointly form a broadened null. A detailed theoretical derivation is given, and simulation experiments verify the feasibility and effectiveness of the method in practical sonar systems.

9.
Error and uncertainty in modeling and simulation
This article develops a general framework for identifying error and uncertainty in computational simulations that deal with the numerical solution of a set of partial differential equations (PDEs). A comprehensive, new view of the general phases of modeling and simulation is proposed, consisting of the following phases: conceptual modeling of the physical system, mathematical modeling of the conceptual model, discretization and algorithm selection for the mathematical model, computer programming of the discrete model, numerical solution of the computer program model, and representation of the numerical solution. Our view incorporates the modeling and simulation phases that are recognized in the systems engineering and operations research communities, but it adds phases that are specific to the numerical solution of PDEs. In each of these phases, general sources of uncertainty, both aleatory and epistemic, and error are identified. Our general framework is applicable to any numerical discretization procedure for solving ODEs or PDEs. To demonstrate this framework, we describe a system-level example: the flight of an unguided, rocket-boosted, aircraft-launched missile. This example is discussed in detail at each of the six phases of modeling and simulation. Two alternative models of the flight dynamics are considered, along with aleatory uncertainty of the initial mass of the missile and epistemic uncertainty in the thrust of the rocket motor. We also investigate the interaction of modeling uncertainties and numerical integration error in the solution of the ordinary differential equations for the flight dynamics.

10.
This article describes the recent developments in the computer modeling of packing of complex-shaped particles and prediction of physical properties of the structures represented by the packing. The computer model DigiPac is capable of packing particles of any shapes and sizes in a container of arbitrary geometry. The ability to predict the packing structure of real particle shapes and to compute directly some structure-dependent physical properties such as liquid permeability, mechanical strength/stability, compaction and sintering, and dissolution and leaching is obviously highly desirable and has significant potential in industrial applications. Examples are presented relating to the packing of bulk and granular materials.

12.
Computer simulations often involve both qualitative and numerical inputs. Existing Gaussian process (GP) methods for handling this mainly assume a different response surface for each combination of levels of the qualitative factors and relate them via a multiresponse cross-covariance matrix. We introduce a substantially different approach that maps each qualitative factor to underlying numerical latent variables (LVs), with the mapped values estimated similarly to the other correlation parameters, and then uses any standard GP covariance function for numerical variables. This provides a parsimonious GP parameterization that treats qualitative factors the same as numerical variables and views them as affecting the response via similar physical mechanisms. This has strong physical justification, as the effects of a qualitative factor in any physics-based simulation model must always be due to some underlying numerical variables. Even when the underlying variables are many, sufficient dimension reduction arguments imply that their effects can be represented by a low-dimensional LV. This conjecture is supported by the superior predictive performance observed across a variety of examples. Moreover, the mapped LVs provide substantial insight into the nature and effects of the qualitative factors. Supplementary materials for the article are available online.
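The latent-variable mapping can be illustrated as follows. The level names and latent coordinates below are placeholders invented for this sketch; in the method they are estimated jointly with the other correlation parameters. Once each level has latent coordinates, any standard kernel for numerical inputs applies unchanged:

```python
import numpy as np

# Each level of a qualitative factor maps to a point in a 2-D latent space.
# These coordinates are illustrative placeholders, not estimated values.
latent = {"materialA": np.array([0.0, 0.0]),
          "materialB": np.array([0.1, 0.0]),
          "materialC": np.array([1.2, 0.7])}

def kernel(x1, lvl1, x2, lvl2, ls=0.5):
    """Squared-exponential kernel on the concatenation of the numerical
    inputs and the latent coordinates of the qualitative levels."""
    z1 = np.concatenate([np.atleast_1d(x1), latent[lvl1]])
    z2 = np.concatenate([np.atleast_1d(x2), latent[lvl2]])
    return np.exp(-np.sum((z1 - z2) ** 2) / (2 * ls ** 2))
```

Levels whose estimated latent points sit close together (here A and B) behave as near-identical inputs, while distant levels (C) induce low correlation, which is how the fitted LVs reveal the structure of the qualitative factor.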

13.
The Gaussian process (GP) is a popular method for emulating deterministic computer simulation models. Its natural extension to computer models with multivariate outputs employs a multivariate Gaussian process (MGP) framework. Nevertheless, with significant increases in the number of design points and the number of model parameters, building an MGP model is a very challenging task. Under a general MGP model framework with nonseparable covariance functions, we propose an efficient meta-modeling approach featuring a pairwise model building scheme. The proposed method has excellent scalability even for a large number of output levels. Some properties of the proposed method have been investigated and its performance has been demonstrated through several numerical examples. Supplementary materials for this article are available online.

14.
Composite materials, as the name indicates, are composed of different materials that yield superior performance as compared to the individual components. Pultrusion is one of the most cost-effective manufacturing techniques for producing fiber-reinforced composites with constant cross-sectional profiles. This makes it attractive for both researchers and practitioners to investigate the optimum process parameters. Validated computer simulations cost less than physical experiments, which makes them an efficient tool for numerical optimization. However, the complexity of the numerical models can still make them "expensive" and forces us to use them sparingly. These relatively complex models can be replaced with "surrogates": less complex representative models that are therefore faster to evaluate. In this article, a previously validated thermochemical simulation of the pultrusion process is briefly presented. Following this, a new constrained optimization methodology based on a well-known surrogate method, Kriging, is introduced. Next, a validation case is presented to clarify the working principles of the implementation, which also supports the upcoming main optimization test cases. This design problem involves the design of the heating die with one, two, and three heaters together with the pulling speed. The results show that the proposed methodology is very efficient in finding the optimal process and design parameters.

15.
Computer experiments with qualitative and quantitative factors occur frequently in various applications in science and engineering. Analysis of such experiments is not yet completely resolved. In this work, we propose an additive Gaussian process model for computer experiments with qualitative and quantitative factors. The proposed method considers an additive correlation structure across qualitative factors, and assumes that the correlation function for each qualitative factor and the correlation function of the quantitative factors are multiplicative. It retains an unrestrictive correlation structure for the qualitative factors through the hypersphere decomposition, allowing greater flexibility in modeling the complex systems studied in computer experiments. The merits of the proposed method are illustrated by several numerical examples and a real data application. Supplementary materials for this article are available online.

16.
Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs at a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. In simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which works for single-accuracy experiments. Supplementary materials for this article are available online.

17.
Journal of Modern Optics, 2013, 60(4): 913-924
Stochastic simulations of quantum trajectories have been proposed as an alternative to density matrix calculations for some open quantum systems. In this paper it is shown that Monte Carlo wavefunction simulations of open atomic systems produce quantum trajectories in a discrete Markov process. This process can be simulated with a discrete Markov chain, and this is demonstrated for several example systems. Statistical properties for these atomic systems are produced more rapidly and with greater accuracy using this method than by calculating an ensemble of individual trajectories.
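The discrete Markov chain idea can be illustrated with a two-state toy model (the transition probabilities below are arbitrary placeholders, not derived from any master equation): simulate the chain step by step, then compare the empirical state occupation against the stationary distribution obtained directly from the transition matrix.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-state toy chain over {ground, excited}; rows are current states,
# columns are next-state probabilities per time step.
P = np.array([[0.95, 0.05],     # ground  -> ground / excited
              [0.20, 0.80]])    # excited -> ground / excited

# Simulate one long trajectory of the chain.
steps = 20000
state = 0
counts = np.zeros(2)
for _ in range(steps):
    state = rng.choice(2, p=P[state])
    counts[state] += 1
empirical = counts / steps

# Stationary distribution: left eigenvector of P with eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()
```

The long-run occupation fractions from the single simulated trajectory agree with the stationary distribution, which is the efficiency gain the abstract describes relative to averaging many individual trajectories.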

18.
Classification problems are commonly seen in practice. In this article, we aim to develop classifiers that can enjoy great interpretability as linear classifiers, and at the same time have model flexibility as nonlinear classifiers. We propose convex bidirectional large margin classifiers to fill the gap between linear and general nonlinear classifiers for high-dimensional data. Our method provides a new data visualization tool for classification of high-dimensional data. The obtained bilinear projection structure makes the proposed classifier very interpretable. Additional shrinkage to approximate variable selection is also considered. Through analysis of simulated and real data in high-dimensional settings, our method is shown to have superior prediction performance and interpretability when there are potential subpopulations in the data. The computer code of the proposed method is available as supplemental materials.

19.
The reliability and real-time performance of industrial wireless sensor networks (IWSNs) are absolute requirements for industrial systems, and they are the two foremost obstacles to large-scale application of IWSNs. This paper studies the multi-objective node placement problem to guarantee the reliability and real-time performance of IWSNs from a systems perspective. A novel multi-objective node deployment model is proposed in which the reliability, real-time performance, costs, and scalability of IWSNs are addressed. Considering that optimal node placement is an NP-hard problem, a new multi-objective binary differential evolution harmony search (MOBDEHS) is developed to tackle it, inspired by the mechanisms of harmony search and differential evolution. Three large-scale node deployment problems are generated as benchmarks to verify the proposed model and algorithm. The experimental results demonstrate that the developed model is valid and can be used to design large-scale IWSNs with guaranteed reliability and real-time performance efficiently. Moreover, the comparison results indicate that the proposed MOBDEHS is an effective tool for multi-objective node placement problems and is superior to Pareto-based binary differential evolution algorithms, the nondominated sorting genetic algorithm II (NSGA-II), and modified NSGA-II.

20.
Technometrics, 2013, 55(4): 527-541
Computer simulation often is used to study complex physical and engineering processes. Although a computer simulator often can be viewed as an inexpensive way to gain insight into a system, it still can be computationally costly. Much of the recent work on the design and analysis of computer experiments has focused on scenarios where the goal is to fit a response surface or to optimize a process. In this article we develop a sequential methodology for estimating a contour from a complex computer code. The approach uses a stochastic process model as a surrogate for the computer simulator. The surrogate model and associated uncertainty are key components in a new criterion used to identify the computer trials aimed specifically at improving the contour estimate. The proposed approach is applied to exploration of a contour for a network queuing system. Issues related to practical implementation of the proposed approach also are addressed.
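A sketch of the sequential idea, using a simple uncertainty-of-membership score in place of the article's criterion (the simulator `f`, the kernel length-scale, and the contour level `a` are all illustrative choices, not from the article): fit a GP surrogate to a few runs, then propose the next run where the surrogate is least sure whether the response lies below the contour level.

```python
import numpy as np
from scipy.stats import norm

def f(x):                        # stand-in for an expensive simulator
    return np.sin(3 * x) + x

# Fit a zero-mean GP with a squared-exponential kernel to a few runs.
X = np.array([0.0, 0.4, 1.0, 1.6, 2.0])
y = f(X)

def k(u, v, ls=0.4):
    return np.exp(-(u[:, None] - v[None, :]) ** 2 / (2 * ls ** 2))

K = k(X, X) + 1e-8 * np.eye(len(X))
Kinv = np.linalg.inv(K)

grid = np.linspace(0, 2, 201)
Ks = k(grid, X)
mu = Ks @ Kinv @ y                                        # posterior mean
var = np.clip(1.0 - np.einsum("ij,jk,ik->i", Ks, Kinv, Ks), 1e-12, None)
sd = np.sqrt(var)

# Contour-focused acquisition (a stand-in score, not the paper's criterion):
# p is the probability that f(x) <= a; p*(1-p) peaks where membership in the
# contour's sublevel set is most uncertain.
a = 1.0
p = norm.cdf((a - mu) / sd)
x_next = grid[np.argmax(p * (1 - p))]
```

The proposed point lands between existing runs, near where the predicted response crosses the contour level; iterating run-then-refit concentrates simulator calls along the contour rather than over the whole input space.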
