1.
Technometrics, 2013, 55(4): 527-541
Computer simulation often is used to study complex physical and engineering processes. Although a computer simulator often can be viewed as an inexpensive way to gain insight into a system, it still can be computationally costly. Much of the recent work on the design and analysis of computer experiments has focused on scenarios where the goal is to fit a response surface or to optimize a process. In this article we develop a sequential methodology for estimating a contour from a complex computer code. The approach uses a stochastic process model as a surrogate for the computer simulator. The surrogate model and associated uncertainty are key components in a new criterion used to identify the computer trials aimed specifically at improving the contour estimate. The proposed approach is applied to exploration of a contour for a network queuing system. Issues related to practical implementation of the proposed approach also are addressed.
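For a concrete picture of the sequential idea, here is a minimal Python sketch that uses a Gaussian process surrogate and a generic "near the contour but still uncertain" selection rule; the `simulator` function, candidate set, and target level are placeholders, and the rule is a simple heuristic rather than the exact criterion developed in the article.

```python
# Minimal sketch: sequential design for contour estimation with a GP surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(x):                          # hypothetical expensive computer code
    return np.sin(6 * x[:, 0]) + x[:, 1] ** 2

target = 0.5                               # contour level of interest
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(10, 2))        # small initial design
y = simulator(X)

cand = rng.uniform(0, 1, size=(2000, 2))   # candidate points for the search
for _ in range(15):                        # sequential follow-up runs
    gp = GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True).fit(X, y)
    mu, sd = gp.predict(cand, return_std=True)
    # favour candidates predicted to lie near the target contour
    # while their predictive uncertainty is still large
    score = sd - np.abs(mu - target)
    x_new = cand[np.argmax(score)][None, :]
    X = np.vstack([X, x_new])
    y = np.append(y, simulator(x_new))
```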
2.
The unknown input parameters of a simulation code are usually adjusted by the nonlinear least squares estimation (NLSE) method, which minimizes the sum of squared differences between computer responses and real observations. However, when a simulation program is very complex and takes several hours for one execution, the NLSE method may not be computationally feasible. In this case, one may build a statistical metamodel which approximates the complex simulation code. This metamodel is then used as if it were the true simulation code in the NLSE method, which makes the problem computationally feasible. This 'approximated' NLSE method is described in this article. A Gaussian process model is used as a metamodel of the complex simulation code. The proposed method is validated through a toy-model study where the true parameters are known a priori. An application to a nuclear fusion device is presented.
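As a rough illustration of the "approximated" NLSE idea, the sketch below fits a Gaussian process metamodel to a handful of simulator runs and then runs nonlinear least squares against the metamodel instead of the expensive code; the `slow_simulator`, design ranges, and observations are invented for the example, and the GP setup is only one plausible choice.

```python
# Minimal sketch: NLSE against a GP metamodel of an expensive simulator.
import numpy as np
from scipy.optimize import least_squares
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def slow_simulator(theta):                 # stands in for an hours-long code
    return np.array([theta[0] * np.exp(-theta[1] * t) for t in (1.0, 2.0, 3.0)])

rng = np.random.default_rng(1)
design = rng.uniform([0.5, 0.1], [2.0, 1.0], size=(30, 2))   # training runs
runs = np.array([slow_simulator(th) for th in design])

# one GP metamodel per output component, fitted once and reused in the NLSE
gps = [GaussianProcessRegressor(kernel=RBF([0.3, 0.2]), normalize_y=True)
       .fit(design, runs[:, j]) for j in range(runs.shape[1])]

observed = np.array([0.9, 0.55, 0.33])                       # field observations

def residuals(theta):
    pred = np.array([gp.predict(theta[None, :])[0] for gp in gps])
    return pred - observed

fit = least_squares(residuals, x0=np.array([1.0, 0.5]),
                    bounds=([0.5, 0.1], [2.0, 1.0]))
print(fit.x)    # parameter estimate from the metamodel-based NLSE
```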
3.
Sequential experiments composed of initial experiments and follow-up experiments are widely adopted for economical computer emulations. Many kinds of Latin hypercube designs with good space-filling properties have been proposed for designing the initial computer experiments. However, little work based on Latin hypercubes has focused on the design of the follow-up experiments. Although some constructions of nested Latin hypercube designs can be adapted to sequential designs, the size of the follow-up experiments needs to be a multiple of that of the initial experiments. In this article, a general method for constructing sequential designs of flexible size is proposed, which allows the combined designs to have good one-dimensional space-filling properties. Moreover, the sampling properties and a type of central limit theorem are derived for these designs. Several improvements of these designs are made to achieve better space-filling properties. Simulations are carried out to verify the theoretical results. Supplementary materials for this article are available online.
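The article's constructions are not reproduced here, but the following naive sketch shows the basic task: generate an initial Latin hypercube, then add a follow-up design of flexible size while trying to keep the combined design stratified in each dimension; `lhs` and `augment` are illustrative helpers, not the proposed method.

```python
# Naive sketch: augmenting an initial Latin hypercube with a flexible-size follow-up.
import numpy as np

def lhs(n, d, rng):
    """Plain Latin hypercube sample of n points in [0, 1)^d."""
    perms = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1)  # one permutation per dimension
    return (perms.T + rng.uniform(size=(n, d))) / n

def augment(initial, n_new, rng):
    """Add n_new points, targeting (per dimension) the empty strata of the
    (n_old + n_new)-way partition of [0, 1), so the combined design stays
    roughly one-dimensionally space-filling."""
    n_old, d = initial.shape
    n_tot = n_old + n_new
    cols = []
    for j in range(d):
        occupied = set(np.floor(initial[:, j] * n_tot).astype(int))
        empty = [k for k in range(n_tot) if k not in occupied]
        chosen = rng.choice(empty, size=n_new, replace=False)
        cols.append((chosen + rng.uniform(size=n_new)) / n_tot)
    return np.column_stack(cols)

rng = np.random.default_rng(2)
X0 = lhs(8, 3, rng)            # initial experiment
X1 = augment(X0, 5, rng)       # follow-up of flexible size (not a multiple of 8)
X = np.vstack([X0, X1])        # combined design
```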
4.
Using the theory of fuzzy random variables, this paper studies methods for estimating the unknown parameters of a population distribution in the model setting of multivariate statistical analysis. New multidimensional fuzzy data are defined, and definitions and related properties are given for consistent estimators, unbiased estimators, and maximum likelihood estimators of the parameters.
5.
Statisticians typically recommend completely randomized experimental designs. The reasoning behind this advice is theoretically sound. Unfortunately, the engineers who typically run industrial experiments often fail to recognize restrictions on randomization, that is, split-plot structures, and are often unaware of the risks of analyzing split-plot experiments as if they were completely randomized. Similarly, issues concerning the inference space of the experiment frequently are not given adequate consideration. Conversely, statisticians frequently are unaware that a restriction on randomization does not necessarily translate into less information than a completely randomized design.
In this paper, we discuss a proactive methodology for identifying and incorporating information concerning restrictions on randomization and inference space in industrial experiments. We also present the factor relationship diagram (FRD), a tool that assists engineers in recognizing restrictions on randomization and guides the development of questions that encourage the experimenter to understand the sources of variation that may contribute to a lack of precision in a split-plot experiment or a lack of repeatability in an inference space different from the one studied in the experiment. Examples that illustrate the use of the methodology and the FRD are included.
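A minimal sketch of the analysis issue raised here, assuming statsmodels and an invented data set: the same split-plot data analysed once as if completely randomized (ordinary least squares) and once with a random whole-plot effect; the factor names, whole-plot layout, and effect sizes are hypothetical.

```python
# Minimal sketch: split-plot data analysed with and without a whole-plot random effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
whole_plots = np.repeat(np.arange(8), 4)          # 8 whole plots, 4 subplots each
A = np.repeat([-1, 1], 16)                        # hard-to-change (whole-plot) factor
B = np.tile([-1, -1, 1, 1], 8)                    # easy-to-change (subplot) factor
wp_error = rng.normal(0, 1.0, 8)[whole_plots]     # whole-plot error component
y = 5 + 2 * A + 1.5 * B + 0.5 * A * B + wp_error + rng.normal(0, 0.5, 32)

df = pd.DataFrame({"y": y, "A": A, "B": B, "wp": whole_plots})

# analysis that ignores the randomization restriction (treats all runs as independent)
ols_fit = smf.ols("y ~ A * B", data=df).fit()
print(ols_fit.params)

# mixed model with a random intercept for each whole plot
mixed_fit = smf.mixedlm("y ~ A * B", data=df, groups=df["wp"]).fit()
print(mixed_fit.summary())
```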
6.
On Computer Course Teaching for the Printing Engineering Major
Based on the characteristics of the printing engineering major and its future development trends, this paper proposes strengthening computer course teaching for the major, mainly covering fundamental theory, basic programming languages, databases, common software, and networking.
7.
Precision estimation is an important part of linear parameter processing, and the cumulative method is a new approach to such processing. After briefly reviewing the concept of the ordinary cumulative method and the principle of linear parameter estimation, the computational procedure and formulas for precision estimation of linear parameters under equal-precision measurements are derived, and a precision estimation is carried out for the example of measuring the length and linear expansion coefficient of a copper rod. The results show that the procedure is simple and the formulas are correct. For unequal-precision measurements, one only needs to convert them into equal-precision measurements.
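The cumulative-method formulas themselves are not reproduced here; as a rough counterpart, the sketch below runs the copper-rod example with ordinary least-squares precision estimates for the intercept and slope. The measurement values are made up, and standard least-squares formulas are shown in place of the paper's cumulative-method derivation.

```python
# Rough sketch: least-squares precision estimate for the copper-rod example.
import numpy as np

temps = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])               # deg C
length = np.array([500.08, 500.16, 500.25, 500.33, 500.42, 500.50])  # mm (invented)

# linear model: L(t) = L0 + b * t, with b = L0 * alpha
X = np.column_stack([np.ones_like(temps), temps])
coef, *_ = np.linalg.lstsq(X, length, rcond=None)
resid = length - X @ coef
dof = len(temps) - 2
s2 = resid @ resid / dof                        # residual variance
cov = s2 * np.linalg.inv(X.T @ X)               # covariance of (L0, b)
se_L0, se_b = np.sqrt(np.diag(cov))

alpha = coef[1] / coef[0]                       # linear expansion coefficient
print(f"L0 = {coef[0]:.3f} +/- {se_L0:.3f} mm")
print(f"alpha = {alpha:.2e} per deg C (slope s.e. {se_b:.2e})")
```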
8.
Regina Y. Liu, Technometrics, 2013, 55(4): 488-491
Computer model calibration is the process of determining input parameter settings to a computational model that are consistent with physical observations. This is often quite challenging due to the computational demands of running the model. In this article, we use the ensemble Kalman filter (EnKF) for computer model calibration. The EnKF has proven effective in quantifying uncertainty in data assimilation problems such as weather forecasting and ocean modeling. We find that the EnKF can be directly adapted to Bayesian computer model calibration. It is motivated by the mean and covariance relationship between the model inputs and outputs, producing an approximate posterior ensemble of the calibration parameters. While this approach may not fully capture effects due to nonlinearities in the computer model response, its computational efficiency makes it a viable choice for exploratory analyses, design problems, or problems with large numbers of model runs, inputs, and outputs.
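A minimal sketch of an EnKF-style calibration update under a cheap stand-in model: the ensemble of parameter draws is shifted toward the observations through the sample cross-covariance and a Kalman gain. The `simulator`, observations, and error covariance are placeholders, and this is the generic perturbed-observation EnKF update rather than the article's full procedure.

```python
# Minimal sketch: one EnKF-style update of a calibration-parameter ensemble.
import numpy as np

def simulator(theta):                       # hypothetical computer model
    return np.array([theta[0] + theta[1], theta[0] * theta[1]])

rng = np.random.default_rng(4)
obs = np.array([1.8, 0.72])                 # physical observations
obs_cov = 0.05 ** 2 * np.eye(2)             # observation error covariance

theta_ens = rng.normal([1.0, 1.0], 0.5, size=(100, 2))       # prior ensemble
runs = np.array([simulator(t) for t in theta_ens])            # model outputs

# sample cross-covariance (parameters vs outputs) and output covariance
theta_c = theta_ens - theta_ens.mean(axis=0)
runs_c = runs - runs.mean(axis=0)
C_ty = theta_c.T @ runs_c / (len(theta_ens) - 1)
C_yy = runs_c.T @ runs_c / (len(theta_ens) - 1)

K = C_ty @ np.linalg.inv(C_yy + obs_cov)                      # Kalman gain
perturbed = obs + rng.multivariate_normal(np.zeros(2), obs_cov, size=len(theta_ens))
theta_post = theta_ens + (perturbed - runs) @ K.T             # approximate posterior ensemble
print(theta_post.mean(axis=0))
```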
9.
Metal nanoparticles are widely used in new-materials research. Their oxidation behavior has been studied in depth both theoretically and experimentally, and computer simulation is an especially important approach. This paper briefly introduces computer simulation methods at the microscopic scale and, on that basis, reviews progress in computer simulation studies of the oxidation behavior of metal nanoparticles in China and abroad.
10.
Research on Image Contour Extraction Methods Based on Computer Vision Measurement Technology
Image segmentation and contour extraction are key steps in computer vision measurement. Addressing the problems of traditional edge detection methods and considering the characteristics of computer vision measurement, a practical contour extraction method is proposed. The method uses gray-level thresholding for image segmentation, repairs defects in the binary image with mathematical morphology, and stores contour information through chain-code tracking, achieving image contour extraction with single-pixel-wide edges. The principles and implementation of the key techniques are presented. Experiments show that, compared with classical edge detection methods, this method offers stronger noise immunity and higher accuracy, and meets the practical needs of engineering measurement.
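A short OpenCV sketch of the pipeline described above (gray-level thresholding, morphological defect repair, contour following); the image path is a placeholder and `cv2.findContours` is used here in place of the explicit chain-code tracking described in the paper.

```python
# Short sketch: threshold segmentation, morphological repair, contour extraction.
import cv2
import numpy as np

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)    # hypothetical input image

# 1) segment by gray-level threshold (Otsu picks the threshold automatically)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 2) repair small defects in the binary image with morphological closing
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
clean = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

# 3) follow and store the object contours (single-pixel-wide boundaries)
contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

outline = np.zeros_like(img)
cv2.drawContours(outline, contours, -1, 255, 1)        # 1-px contour image
cv2.imwrite("outline.png", outline)
```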