Similar Literature
20 similar documents found.
1.
We propose an approach for constructing a new type of design, called a sliced orthogonal array-based Latin hypercube design. This approach exploits a slicing structure of orthogonal arrays with strength two and makes use of sliced random permutations. Such a design achieves one- and two-dimensional uniformity and can be divided into smaller Latin hypercube designs with one-dimensional uniformity. Sampling properties of the proposed designs are derived. Examples are given for illustrating the construction method and corroborating the derived theoretical results. Potential applications of the constructed designs include uncertainty quantification of computer models, computer models with qualitative and quantitative factors, cross-validation and efficient allocation of computing resources. Supplementary materials for this article are available online.
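A minimal sketch of the slicing idea (Python with NumPy assumed; the `sliced_lhd` helper is illustrative and does not reproduce the orthogonal-array-based construction of the article): the full design is a Latin hypercube, and each slice, after collapsing levels, is itself a smaller Latin hypercube.

```python
import numpy as np

def sliced_lhd(m, t, d, rng=None):
    """Illustrative sliced Latin hypercube: n = m*t points in d dimensions,
    split into t slices of m points; each slice collapses to an m-point LHD."""
    rng = np.random.default_rng(rng)
    n = m * t
    design = np.empty((n, d))
    for k in range(d):
        # For each coarse bin j, scatter its t fine levels across the t slices.
        fine = np.arange(n).reshape(m, t)          # bin j holds levels j*t .. j*t+t-1
        for j in range(m):
            fine[j] = rng.permutation(fine[j])
        col = np.empty(n)
        for i in range(t):                          # slice i occupies rows i*m .. (i+1)*m-1
            levels = rng.permutation(fine[:, i])    # one fine level from every coarse bin
            col[i * m:(i + 1) * m] = (levels + rng.random(m)) / n
        design[:, k] = col
    return design  # rows [i*m, (i+1)*m) form slice i

X = sliced_lhd(m=4, t=3, d=2, rng=0)
```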

2.
Activities such as global sensitivity analysis, statistical effect screening, uncertainty propagation, or model calibration have become integral to the Verification and Validation (V&V) of numerical models and computer simulations. One of the goals of V&V is to assess prediction accuracy and uncertainty, which feeds directly into reliability analysis or the Quantification of Margin and Uncertainty (QMU) of engineered systems. Because these analyses involve multiple runs of a computer code, they can rapidly become computationally expensive. An alternative to Monte Carlo-like sampling is to combine a design of computer experiments with meta-modeling and replace the potentially expensive computer simulation by a fast-running emulator. The surrogate can then be used to estimate sensitivities, propagate uncertainty, and calibrate model parameters at a fraction of the cost it would take to wrap a sampling algorithm or optimization solver around the physics-based code. Doing so, however, carries the risk of developing an incorrect emulator that erroneously approximates the “true-but-unknown” sensitivities of the physics-based code. We demonstrate the extent to which this occurs when Gaussian Process Modeling (GPM) emulators are trained in high-dimensional spaces using too sparsely populated designs of experiments. Our illustration analyzes a variant of the Rosenbrock function in which several effects are made statistically insignificant while others are strongly coupled, thereby mimicking a situation that is often encountered in practice. In this example, using a combination of GPM emulator and design of experiments leads to an incorrect approximation of the function. A mathematical proof of the origin of the problem is proposed. The adverse effects that too sparsely populated designs may produce are discussed for the coverage of the design space, estimation of sensitivities, and calibration of parameters. This work attempts to raise awareness of the potential dangers of not allocating enough resources when exploring a design space to develop fast-running emulators.
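A hedged illustration of the failure mode described above, assuming Python with NumPy and scikit-learn; the Rosenbrock variant, dimension, and run size are stand-ins rather than the article's exact setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def rosenbrock(X):
    """Sum of 2-D Rosenbrock terms; a cheap stand-in for the paper's variant."""
    return np.sum(100.0 * (X[:, 1:] - X[:, :-1] ** 2) ** 2
                  + (1.0 - X[:, :-1]) ** 2, axis=1)

rng = np.random.default_rng(1)
d, n_train, n_test = 8, 20, 2000            # deliberately sparse: 20 runs in 8-D
X_train = rng.uniform(-2, 2, size=(n_train, d))
X_test = rng.uniform(-2, 2, size=(n_test, d))

gp = GaussianProcessRegressor(ConstantKernel() * RBF(np.ones(d)), normalize_y=True)
gp.fit(X_train, rosenbrock(X_train))
pred = gp.predict(X_test)
truth = rosenbrock(X_test)
rel_rmse = np.sqrt(np.mean((pred - truth) ** 2)) / np.std(truth)
print(f"relative RMSE with only {n_train} runs: {rel_rmse:.2f}")  # typically far from zero
```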

3.
For deterministic computer simulations, Gaussian process models are a standard procedure for fitting data. These models can be used only when the study design avoids having replicated points. This characteristic is also desirable for one-dimensional projections of the design, since it may happen that one of the design factors has a strongly nonlinear effect on the response. Latin hypercube designs have uniform one-dimensional projections, but are not efficient for fitting low-order polynomials when there is a small error variance. D-optimal designs are very efficient for polynomial fitting but have substantial replication in projections. We propose a new class of designs that bridge the gap between D-optimal designs and D-optimal Latin hypercube designs. These designs guarantee a minimum distance between points in any one-dimensional projection, allowing for the fit of either polynomial or Gaussian process models. Subject to this constraint, they are D-optimal for a prespecified model.
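The quantity these designs constrain can be checked directly. A small sketch follows (Python with NumPy assumed; `min_projection_spacing` is an illustrative helper, not from the article):

```python
import numpy as np

def min_projection_spacing(X):
    """Smallest gap between points in any one-dimensional projection of design X.
    A zero (or near-zero) value signals replicated projected points, which is
    exactly what the designs described above are built to avoid."""
    X = np.asarray(X, dtype=float)
    gaps = [np.min(np.diff(np.sort(X[:, j]))) for j in range(X.shape[1])]
    return min(gaps)

# A replicated factorial-style design collapses in every projection:
factorial = np.array([[x1, x2] for x1 in (0.0, 1.0) for x2 in (0.0, 1.0)] * 2)
print(min_projection_spacing(factorial))   # 0.0 -> replicated projections
```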

4.
Region control of a class of weighted rational cubic splines
Constraining an interpolating curve to lie within a given region is an important problem in curve shape control. This paper constructs a class of weighted rational cubic interpolation spline functions from a rational cubic interpolation spline with quadratic denominator and a rational cubic interpolation spline based only on function values. The new splines contain weight coefficients, which adds flexibility in handling the problem and makes constrained control convenient. Sufficient conditions are given for constraining the interpolating curve above, below, or between given polylines or quadratic curves, together with necessary and sufficient conditions for constraining it above, below, or between given polylines. The existence of weighted rational splines satisfying the constraints is proved.

5.
Large computer simulators usually have complex and nonlinear input-output functions. This complicated input-output relation can be analyzed by global sensitivity analysis; however, this usually requires massive Monte Carlo simulations. To effectively reduce the number of simulations, statistical techniques such as Gaussian process emulators can be adopted. The accuracy and reliability of these emulators strongly depend on the experimental design, where suitable evaluation points are selected. In this paper, a new sequential design strategy called hierarchical adaptive design is proposed to obtain an accurate emulator using the least possible number of simulations. The hierarchical design proposed in this paper is tested on various standard analytic functions and on a challenging reservoir forecasting application. Comparisons with standard one-stage designs such as maximin Latin hypercube designs show that the hierarchical adaptive design produces a more accurate emulator with the same number of computer experiments. Moreover, a stopping criterion is proposed that makes it possible to perform only the number of simulations necessary to obtain the required approximation accuracy.
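A generic variance-driven sequential design conveys the flavor of adaptive emulation, though it is not the hierarchical adaptive design of the paper (Python with NumPy and scikit-learn assumed; the test function and settings are illustrative).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def adaptive_design(f, X_init, candidates, n_add):
    """Repeatedly add the candidate point with the largest GP predictive
    standard deviation (a simple stand-in for a sequential design strategy)."""
    X = np.array(X_init, dtype=float)
    y = f(X)
    for _ in range(n_add):
        gp = GaussianProcessRegressor(RBF(length_scale=0.2), normalize_y=True).fit(X, y)
        _, sd = gp.predict(candidates, return_std=True)
        x_new = candidates[np.argmax(sd)]               # most uncertain location
        X = np.vstack([X, x_new])
        y = np.append(y, f(x_new[None, :]))
    return X, y

f = lambda X: np.sin(6 * X[:, 0]) + X[:, 1] ** 2        # cheap analytic test function
rng = np.random.default_rng(0)
X0 = rng.uniform(size=(5, 2))                           # small initial design
grid = rng.uniform(size=(500, 2))                       # candidate evaluation points
X, y = adaptive_design(f, X0, grid, n_add=10)
```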

6.
We have recently presented a boundary cloud method (BCM) [Comput. Meth. Appl. Mech. Engng 191 (2002) 2337], which combines boundary integral formulations with scattered point interpolation techniques. A generalized least-squares approach, which requires information about the outward normal to the boundary, is employed to construct interpolation functions. Since an outward normal is not well defined for geometries with corners for 2D problems (or for corners and edges for 3D problems), points could not be placed at corners when discretizing the surface of the object. In this paper, we introduce a new implementation of the BCM, which uses a varying base interpolating polynomial to construct interpolation functions. The key idea is to define an appropriate polynomial basis which ensures linear completeness. The polynomial basis can change from cloud to cloud depending on the definition of the cloud at each point. The new implementation can handle points at corners and is much simpler and at least an order of magnitude faster compared to our original implementation. The original implementation can be more accurate and can give higher order convergence rates, but is limited because it cannot handle points at corners. Numerical results comparing the original and the new implementation are shown for several potential and electrostatic problems.

7.
While the orthogonal design of split-plot fractional factorial experiments has received much attention already, there are still major voids in the literature. First, designs with one or more factors acting at more than two levels have not yet been considered. Second, published work on nonregular fractional factorial split-plot designs was either based only on Plackett–Burman designs, or on small nonregular designs with limited numbers of factors. In this article, we present a novel approach to designing general orthogonal fractional factorial split-plot designs. One key feature of our approach is that it can be used to construct two-level designs as well as designs involving one or more factors with more than two levels. Moreover, the approach can be used to create two-level designs that match or outperform alternative designs in the literature, and to create two-level designs that cannot be constructed using existing methodology. Our new approach involves the use of integer linear programming and mixed integer linear programming, and, for large design problems, it combines integer linear programming with variable neighborhood search. We demonstrate the usefulness of our approach by constructing two-level split-plot designs of 16–96 runs, an 81-run three-level split-plot design, and a 48-run mixed-level split-plot design. Supplementary materials for this article are available online.

8.
In estimating a response surface where the k variables represent proportions in a mixture, the experimenter is often interested in a reasonably well-defined region of interest which may, for example, center about current operating levels. Previously developed designs are difficult to use except in exploring the entire factor space, and even then there are several disadvantages to these designs. A general method of constructing designs from familiar response surface designs in k − 1 independent variables and the appropriate analysis for a general polynomial is given. Special attention is given to the first and second order polynomials.
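One simple way to map a factorial design in k − 1 independent variables to mixture proportions around current operating levels is sketched below (Python with NumPy assumed); the transformation and the `mixture_from_independent` helper are illustrative and not necessarily the construction derived in the paper.

```python
import numpy as np
from itertools import product

def mixture_from_independent(center, delta, levels=(-1.0, 1.0)):
    """Map a two-level factorial in k-1 independent variables to mixture points
    around a center of current operating proportions (illustrative mapping)."""
    center = np.asarray(center, dtype=float)
    k = center.size
    runs = []
    for d in product(levels, repeat=k - 1):
        x = center.copy()
        x[:-1] += delta * np.array(d)
        x[-1] -= delta * np.sum(d)          # keeps the proportions summing to one
        runs.append(x)
    return np.array(runs)

design = mixture_from_independent(center=[0.5, 0.3, 0.2], delta=0.05)
print(design.sum(axis=1))                    # every run sums to 1.0
```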

9.
A class of weighted rational cubic interpolation functions is constructed from two rational cubic interpolation splines with quadratic denominators, one involving derivatives and one based only on function values. For given interpolation data, a convexity-preserving method for the interpolating curve is obtained by adjusting the parameters and weight coefficients of the interpolation function, together with necessary and sufficient conditions for the method to succeed, generalizing and improving some related results. These conditions are simple linear inequality constraints on the parameters and weight coefficients, and are therefore easy to apply in practice in computer-aided geometric design.

10.
Reference [22] constructed a bivariate rational interpolation spline with parameters based only on function values, with a biquartic numerator and a biquadratic denominator. This paper studies the boundedness of this bivariate rational interpolation spline, gives an approximation expression for the interpolation, and discusses point control of the shape of the interpolating surface. With the interpolation conditions unchanged, the value of the interpolating function at any point in the interpolation region can be modified as the design requires by selecting the parameters, so that the interpolating surface can be modified locally.

11.
Two methods for evaluating thermocouple calibration uncertainty over the temperature range of the calibration are presented, when the thermocouple is calibrated at only a few temperatures. The evaluation of the uncertainty at fixed-point temperatures is well established, but it is often not clear how the uncertainty arising from interpolation between fixed points can be determined. We present a conventional method, based on that described in the “Guide to the expression of uncertainty in measurement” (GUM), and a numerically based Monte Carlo method, for quantifying the calibration uncertainty arising from the use of an interpolating polynomial defined by calibration data. The two methods are compared and found to be in excellent agreement, but the Monte Carlo method is, in general, more flexible, e.g., when measurements are described by non-normal distributions.
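A bare-bones version of the Monte Carlo approach, assuming Python with NumPy; the calibration temperatures, readings, and uncertainties below are invented illustrative numbers, not data from the paper.

```python
import numpy as np

# Illustrative only: calibration temperatures (deg C), measured readings,
# and standard uncertainties at the calibration points.
t_cal = np.array([0.0, 100.0, 200.0, 300.0, 400.0])
y_cal = np.array([0.000, 4.096, 8.138, 12.209, 16.397])
u_cal = np.full_like(t_cal, 0.010)

t_eval = np.linspace(t_cal[0], t_cal[-1], 81)
deg = t_cal.size - 1                     # exact interpolating polynomial through the points
n_mc = 20000
rng = np.random.default_rng(42)

curves = np.empty((n_mc, t_eval.size))
for i in range(n_mc):
    # Perturb each calibration reading by its standard uncertainty (normal model),
    # refit the interpolating polynomial, and evaluate it over the range.
    y_pert = y_cal + rng.normal(0.0, u_cal)
    coeffs = np.polyfit(t_cal, y_pert, deg)
    curves[i] = np.polyval(coeffs, t_eval)

u_interp = curves.std(axis=0)            # Monte Carlo standard uncertainty vs. temperature
print(u_interp.max())
```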

12.
Robust parameter design with computer experiments is becoming increasingly important for product design. Existing methodologies for this problem are mostly for finding optimal control factor settings. However, in some cases, the objective of the experimenter may be to understand how the noise and control factors contribute to variation in the response. The functional analysis of variance (ANOVA) and variance decompositions of the response, in addition to the mean and variance models, help achieve this objective. Estimation of these quantities is not easy and few methods are able to quantify the estimation uncertainty. In this article, we show that the use of an orthonormal polynomial model of the simulator leads to simple formulas for functional ANOVA and variance decompositions, and the mean and variance models. We show that estimation uncertainty can be taken into account in a simple way by first fitting a Gaussian process model to experiment data and then approximating it with the orthonormal polynomial model. This leads to a joint normal distribution for the polynomial coefficients that quantifies estimation uncertainty. Supplementary materials for this article are available online.
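The link between orthonormal polynomial coefficients and variance decompositions can be illustrated directly (Python with NumPy assumed). The sketch uses a shifted Legendre basis for uniform inputs and a cheap analytic function in place of a fitted Gaussian process; it shows the mechanism the article exploits, not its full methodology.

```python
import numpy as np
from itertools import product

# Orthonormal shifted Legendre polynomials on [0, 1] (uniform inputs assumed).
basis_1d = [lambda x: np.ones_like(x),
            lambda x: np.sqrt(3.0) * (2.0 * x - 1.0),
            lambda x: np.sqrt(5.0) * (6.0 * x ** 2 - 6.0 * x + 1.0)]

def f(X):                                   # cheap stand-in for a GP-fitted simulator
    return X[:, 0] + 2.0 * X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 1]

d, rng = 2, np.random.default_rng(0)
X = rng.uniform(size=(20000, d))
multi = list(product(range(len(basis_1d)), repeat=d))          # tensor-product terms
Phi = np.column_stack([np.prod([basis_1d[m](X[:, j]) for j, m in enumerate(mi)], axis=0)
                       for mi in multi])
coef, *_ = np.linalg.lstsq(Phi, f(X), rcond=None)

total_var = np.sum(coef[1:] ** 2)           # the constant term carries the mean
for j in range(d):
    idx = [i for i, mi in enumerate(multi)
           if mi[j] > 0 and all(m == 0 for jj, m in enumerate(mi) if jj != j)]
    print(f"main-effect index of x{j}: {np.sum(coef[idx] ** 2) / total_var:.3f}")
```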

13.
This paper deals with the use of the local optimal point interpolating (LOPI) formula in solving partial differential equations (PDEs) with a collocation method. LOPI is an interpolating formula constructed by localization of optimal point interpolation formulas that reproduces polynomials and satisfies the Kronecker delta property. This scheme results in a truly meshless method that produces high quality output and accurate solutions.

14.
Transform methods for the interpolation of regularly spaced data are described, based on fast evaluation using discrete Fourier transforms. For adequately sampled periodic data, the fast Fourier transform (FFT) is used directly. With undersampled or aperiodic data, a Chebyshev interpolating polynomial is evaluated by means of the FFT to provide minimum deviation and distributed ripple. The merits of two kinds of Chebyshev series are compared. All the methods described produce an interpolation passing directly through the given values and are applied easily to the multi-dimensional case.
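A compact sketch of Chebyshev interpolation with coefficients obtained by a fast cosine transform (an FFT-type transform), assuming Python with NumPy and SciPy; the Runge-type test function is illustrative.

```python
import numpy as np
from scipy.fft import dct
from numpy.polynomial import chebyshev as C

def chebyshev_interpolant(f, N):
    """Degree-N Chebyshev interpolating polynomial on [-1, 1], with coefficients
    computed from the sampled values by a DCT-I."""
    x = np.cos(np.pi * np.arange(N + 1) / N)     # Chebyshev-Lobatto nodes
    c = dct(f(x), type=1) / N                    # DCT-I gives the cosine sums
    c[0] /= 2.0                                  # endpoint terms carry half weight
    c[-1] /= 2.0
    return C.Chebyshev(c)

p = chebyshev_interpolant(lambda x: 1.0 / (1.0 + 25.0 * x ** 2), N=64)  # Runge function
xs = np.linspace(-1, 1, 1001)
print(np.max(np.abs(p(xs) - 1.0 / (1.0 + 25.0 * xs ** 2))))  # small, distributed error
```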

15.
A new way of describing the density field in density-based topology optimization is introduced. The new method uses finite elements constructed from Bernstein polynomials rather than the more common Lagrange polynomials. Use of the Bernstein finite elements allows higher-order elements to be used in the density-field interpolation without producing unrealistic density values, i.e., values lower than zero or higher than one. Results on several test problems indicate that using the higher-order Bernstein elements produces optimal designs with sharper estimates of the optimal boundary on coarse design meshes. However, higher-order elements are also required in the structural analysis to prevent the appearance of unrealistic material distributions. The Bernstein element density interpolation can be combined with adaptive mesh refinement to further improve design accuracy even on design domains with complex geometry.
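The key property, namely that Bernstein interpolation of admissible nodal densities cannot leave [0, 1] whereas Lagrange interpolation can, is easy to demonstrate in one dimension (Python with NumPy assumed; the helpers are illustrative).

```python
import numpy as np
from math import comb

def bernstein_field(rho_nodes, xi):
    """Density along one edge of a degree-p Bernstein element: a convex combination
    of the nodal densities, so it stays in [0, 1] whenever the nodes do."""
    p = len(rho_nodes) - 1
    B = np.array([comb(p, k) * xi ** k * (1 - xi) ** (p - k) for k in range(p + 1)])
    return rho_nodes @ B

def lagrange_field(rho_nodes, xi):
    """Same nodal densities interpolated with equally spaced Lagrange polynomials,
    which can overshoot outside [0, 1]."""
    p = len(rho_nodes) - 1
    nodes = np.linspace(0.0, 1.0, p + 1)
    L = np.array([np.prod([(xi - nodes[j]) / (nodes[k] - nodes[j])
                           for j in range(p + 1) if j != k], axis=0)
                  for k in range(p + 1)])
    return rho_nodes @ L

rho = np.array([1.0, 0.0, 0.0, 1.0])          # admissible nodal densities, degree 3
xi = np.linspace(0.0, 1.0, 201)
print(bernstein_field(rho, xi).min())          # >= 0: physically meaningful
print(lagrange_field(rho, xi).min())           # < 0: unrealistic density
```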

16.
The reproducing kernel element method is a numerical technique that combines finite element and meshless methods to construct shape functions of arbitrary order and continuity, yet retains the Kronecker-δ property. Central to constructing these shape functions is the construction of global partition polynomials on an element. This paper shows that asymmetric interpolations may arise due to such things as changes in the local-to-global node numbering, and that these asymmetries may adversely affect the interpolation capability of the method. This issue arises from the use of incomplete polynomials that are consequently not affine invariant. This paper lays out a new framework for generating general, symmetric, truly minimal and complete affine invariant global partition polynomials for triangular and tetrahedral elements. Optimal convergence rates were observed in the solution of Kirchhoff plate problems with rectangular domains.

17.
Sequential experiments composed of initial experiments and follow-up experiments are widely adopted for economical computer emulations. Many kinds of Latin hypercube designs with good space-filling properties have been proposed for designing the initial computer experiments. However, little work based on Latin hypercubes has focused on the design of the follow-up experiments. Although some constructions of nested Latin hypercube designs can be adapted to sequential designs, the size of the follow-up experiments needs to be a multiple of that of the initial experiments. In this article, a general method for constructing sequential designs of flexible size is proposed, which allows the combined designs to have good one-dimensional space-filling properties. Moreover, the sampling properties and a type of central limit theorem are derived for these designs. Several improvements of these designs are made to achieve better space-filling properties. Simulations are carried out to verify the theoretical results. Supplementary materials for this article are available online.
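A greatly simplified illustration of follow-up runs of flexible size that respect one-dimensional space-filling (Python with NumPy assumed); the greedy `augment_design` helper is illustrative and is not the construction proposed in the article.

```python
import numpy as np

def augment_design(X_init, n_add, rng=None):
    """Greedy follow-up sampling of flexible size: each new point is drawn from
    the currently least-occupied one-dimensional bin in every coordinate."""
    rng = np.random.default_rng(rng)
    X = np.array(X_init, dtype=float)
    for _ in range(n_add):
        n_bins = X.shape[0] + 1
        point = np.empty(X.shape[1])
        for j in range(X.shape[1]):
            counts, edges = np.histogram(X[:, j], bins=n_bins, range=(0.0, 1.0))
            b = rng.choice(np.flatnonzero(counts == counts.min()))
            point[j] = rng.uniform(edges[b], edges[b + 1])
        X = np.vstack([X, point])
    return X

X0 = (np.argsort(np.random.default_rng(0).random((5, 2)), axis=0) + 0.5) / 5  # 5-run LHD
X = augment_design(X0, n_add=3, rng=1)   # follow-up size need not be a multiple of 5
```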

18.
Risk assessment of rare natural hazards, such as large volcanic block and ash or pyroclastic flows, is addressed. Assessment is approached through a combination of computer modeling, statistical modeling, and extreme-event probability computation. A computer model of the natural hazard is used to provide the needed extrapolation to unseen parts of the hazard space. Statistical modeling of the available data is needed to determine the initializing distribution for exercising the computer model. In dealing with rare events, direct simulations involving the computer model are prohibitively expensive. The solution instead requires a combination of adaptive design of computer model approximations (emulators) and rare event simulation. The techniques that are developed for risk assessment are illustrated on a test-bed example involving volcanic flow.
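A bare-bones illustration of pushing the heavy sampling onto an emulator (Python with NumPy and scikit-learn assumed); the analytic stand-in model, threshold, and plain Monte Carlo step are illustrative and omit the adaptive design and rare-event simulation machinery described above.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_model(X):
    """Cheap analytic stand-in for the hazard simulator."""
    return np.exp(1.5 * X[:, 0]) + 0.5 * X[:, 1] ** 2

rng = np.random.default_rng(3)
X_design = rng.uniform(0.0, 1.0, size=(40, 2))          # affordable simulator runs
gp = GaussianProcessRegressor(RBF(0.3), normalize_y=True)
gp.fit(X_design, expensive_model(X_design))

# Heavy sampling is done on the emulator, not on the simulator.
X_mc = rng.uniform(0.0, 1.0, size=(200_000, 2))
threshold = 4.5
p_exceed = np.mean(gp.predict(X_mc) > threshold)
print(f"estimated exceedance probability: {p_exceed:.2e}")
```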

19.
An automated approach to the design of wavelength-division multiplexing (WDM) filters is based on the combination of ideas from classical design approaches with an integer optimization technique. This approach turns out to be extremely efficient from a computational point of view and makes it possible to construct a set of significantly different filter designs with nearly equivalent spectral properties. The sensitivity of WDM filters is analyzed by a computer simulation of the deposition process with turning-point optical monitoring. This analysis enables the designer to compare feasibility properties of various filter designs.

20.
Probability-of-failure sensitivity analysis is a subject of major interest in uncertainty-based optimization. However, the computational effort required to obtain accurate results is frequently very high. In this work we present a novel strategy for probability-of-failure sensitivity analysis that is based on polynomial expansions of both the performance function and its derivatives. Sensitivity analysis is then carried out using the obtained polynomial expansions together with standard expressions. Since the simulation step requires only the evaluation of closed-form polynomials, very large samples can be used to obtain accurate results with small computational effort. The proposed approach is expected to be efficient whenever polynomial expansion methods are suitable. Four numerical examples are presented in order to show the effectiveness of the proposed approach.
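A small sketch of the surrogate-then-sample idea, assuming Python with NumPy; the performance function, quadratic expansion, and finite-difference sensitivity are illustrative choices rather than the expressions used in the paper.

```python
import numpy as np

def g(X):                                    # performance function: failure when g < 0
    return 3.0 - X[:, 0] ** 2 - 0.5 * X[:, 1]

def features(X):                             # quadratic polynomial basis in two inputs
    return np.column_stack([np.ones(len(X)), X, X ** 2, X[:, :1] * X[:, 1:]])

rng = np.random.default_rng(7)
mu = np.array([1.0, 0.5])                    # means of the two normal input variables

# 1) Fit the polynomial expansion of g from a modest number of evaluations.
X_fit = mu + rng.normal(size=(200, 2))
coef, *_ = np.linalg.lstsq(features(X_fit), g(X_fit), rcond=None)
g_poly = lambda X: features(X) @ coef        # closed-form surrogate: cheap to sample

# 2) Very large surrogate-only sample for the failure probability and its
#    sensitivity to the mean of the first input (common random numbers,
#    central finite difference).
Z = rng.normal(size=(1_000_000, 2))
h = 0.01
pf = np.mean(g_poly(mu + Z) < 0.0)
dpf_dmu1 = (np.mean(g_poly(mu + [h, 0.0] + Z) < 0.0)
            - np.mean(g_poly(mu + [-h, 0.0] + Z) < 0.0)) / (2.0 * h)
print(pf, dpf_dmu1)
```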
