20 similar documents found
1.
Matthias Hwai Yong Tan 《技术计量学》2017,59(1):1-10
In deterministic computer experiments, it is often known that the output is a monotonic function of some of the inputs. In these cases, a monotonic metamodel will tend to give more accurate and interpretable predictions, with less prediction uncertainty, than a nonmonotonic metamodel. The widely used Gaussian process (GP) models are not monotonic. A recent article in Biometrika offers a modification that projects GP sample paths onto the cone of monotonic functions. However, that approach does not account for the fact that the GP model is more informative about the true function at locations near design points than at locations far away. Moreover, it uses a grid-based method, which is memory intensive and gives predictions only at grid points. This article proposes a weighted projection approach that uses the information in the GP model more effectively, together with two computational implementations. The first is isotonic regression on a grid, while the second is projection onto a cone of monotone splines, which alleviates the problems of a grid-based approach. Simulations show that the monotone B-spline metamodel gives particularly good results. Supplementary materials for this article are available online.
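As a rough illustration of the projection idea (not the authors' weighted projection), the sketch below draws GP sample paths and projects each onto the monotone cone with isotonic regression; the kernel, design, and data are all invented for the example.

```python
# A minimal sketch: draw Gaussian-process sample paths and project each onto
# the cone of monotone functions with isotonic regression, yielding a
# monotone predictor and pointwise uncertainty bands.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 8).reshape(-1, 1)                     # design points
y = np.sin(2 * X).ravel() + 0.01 * rng.standard_normal(8)   # monotone truth + noise

gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-4).fit(X, y)
grid = np.linspace(0, 1, 200).reshape(-1, 1)
paths = gp.sample_y(grid, n_samples=200, random_state=1)    # GP sample paths

iso = IsotonicRegression(increasing=True)
projected = np.column_stack(
    [iso.fit_transform(grid.ravel(), p) for p in paths.T]   # project each path
)

pred = projected.mean(axis=1)                                # monotone prediction
lo, hi = np.percentile(projected, [2.5, 97.5], axis=1)       # pointwise band
```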
2.
Christine M. Anderson‐Cook Todd L. Graves Michael S. Hamada 《Quality and Reliability Engineering International》2009,25(4):481-494
To assess the reliability of a complex system, many different types of data may be available. Full-system tests are the most direct measure of reliability, but may be prohibitively expensive or difficult to obtain. Less direct measures, such as component- or section-level tests, may be cheaper and more readily available. Using a single Bayesian analysis, multiple sources of data can be combined to give component and system reliability estimates. Resource allocation then asks which new data would most improve the precision of the system reliability estimate, so that testing effort yields the greatest gain in understanding. In this paper, we consider a relatively simple system with different types of data from the components and system. We present a methodology for assessing the relative improvement in system reliability estimation from additional data of the various types. Various metrics for comparing improvement, and a response surface approach to modeling the relationship between improvement and the additional data, are presented. Copyright © 2008 John Wiley & Sons, Ltd.
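A minimal sketch of the data-combination idea, under strong simplifying assumptions of my own rather than the authors' full methodology: a two-component series system, Beta posteriors for the components, importance weighting for the system-level tests, and hypothetical test counts.

```python
# Combine component-level and full-system pass/fail data for a two-component
# series system by Monte Carlo, then ask which extra test most shrinks the
# posterior standard deviation of system reliability.
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

def system_sd(c1, c2, sys):
    """Posterior sd of series-system reliability given (passes, trials) data."""
    r1 = rng.beta(1 + c1[0], 1 + c1[1] - c1[0], N)   # component 1 posterior
    r2 = rng.beta(1 + c2[0], 1 + c2[1] - c2[0], N)   # component 2 posterior
    rs = r1 * r2                                      # series-system reliability
    # Fold in the system tests by weighting draws with the binomial likelihood.
    w = rs ** sys[0] * (1 - rs) ** (sys[1] - sys[0])
    m = np.average(rs, weights=w)
    return np.sqrt(np.average((rs - m) ** 2, weights=w))

base = system_sd((18, 20), (45, 50), (9, 10))
# Crude look at 10 more component-1 tests, optimistically assuming all pass:
more_c1 = system_sd((28, 30), (45, 50), (9, 10))
print(f"posterior sd: {base:.4f} -> {more_c1:.4f} with extra component tests")
```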
3.
4.
Accelerated life tests (ALT) provide timely information on product reliability. As product complexity increases, ALT often generate multiple dependent failure modes. However, the planning of an ALT with dependent failure modes has not been well studied in the literature. This article investigates the statistical modeling and planning of ALT with multiple dependent failure modes. An ALT model is constructed. Associated with each failure mode there is a latent lifetime described by a log-location-scale distribution, and the statistical dependence between different failure modes is described by a Gamma frailty model. The proposed model incorporates the ALT model with independent failure modes as a special limiting case. We obtain the c-optimal test plans by minimizing the large-sample approximate variance of the maximum likelihood estimator of a certain life quantile at use condition. The method is illustrated by developing ALT plans for field-effect transistors with competing gate oxide breakdown. A sensitivity analysis is performed to investigate the robustness of the optimal ALT plan against misspecification of model parameter values. This article has supplementary materials that are available online.
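The frailty construction described in the abstract can be sketched as follows; the Weibull parameters and frailty variance are illustrative, and only the simulation side, not the c-optimal planning, is shown.

```python
# Two competing Weibull failure modes made dependent through a shared Gamma
# frailty with mean 1 and variance theta.
import numpy as np

rng = np.random.default_rng(2)

def simulate_alt(n, theta, etas, betas):
    """Latent lifetimes per mode; a shared frailty Z scales every mode's hazard."""
    z = rng.gamma(shape=1 / theta, scale=theta, size=n)      # E[Z]=1, Var[Z]=theta
    u = rng.uniform(size=(n, len(etas)))
    # Invert the conditional survival exp(-z*(t/eta)^beta) = u for each mode.
    t = np.stack([eta * (-np.log(u[:, j]) / z) ** (1 / beta)
                  for j, (eta, beta) in enumerate(zip(etas, betas))], axis=1)
    return t.min(axis=1), t.argmin(axis=1)                   # time, failure mode

times, modes = simulate_alt(1000, theta=0.5, etas=[2000., 3000.], betas=[1.5, 2.0])
print("mode shares:", np.bincount(modes) / len(modes))
# As theta -> 0 the frailty degenerates to 1 and the modes become independent,
# matching the limiting special case noted in the abstract.
```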
5.
6.
We consider Bayesian design of experiments problems in which we maximize the prior expectation of a utility function over a set of permutations, for example, when sequencing a number of tasks to perform. When the number of tasks is large and the expected utility is expensive to compute, it may be unreasonable or infeasible to evaluate the expected utility of all permutations. We propose an approach to emulate the expected utility using a surrogate function based on a parametric probabilistic model for permutations. The surrogate function is fitted by maximizing the correlation with the expected utility over a set of training points. We propose a suitable transformation of the expected utility to improve the fit. We provide results linking the correlation between the two functions to the number of expected utility evaluations required. The approach is applied to the sequencing of reliability growth tasks in the development of hardware systems, in which there are a large number of potential tasks to perform and engineers are interested in meeting a reliability target subject to minimizing costs and time. An illustrative example shows how the approach can be used and a simulation study demonstrates the performance of the approach more generally. Supplementary materials for this article are available online.
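A toy version of the surrogate idea might look like the following; the linear-in-position surrogate, the stand-in utility, and all problem sizes are my assumptions, not the authors' parametric permutation model.

```python
# Emulate an expensive expected utility over task orderings with a cheap
# surrogate, fitted by maximizing Pearson correlation on a training set.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n_tasks = 8

def expected_utility(perm):            # stand-in for an expensive computation
    return -np.sum(np.arange(1, n_tasks + 1) * (np.asarray(perm) + 1) ** 0.8)

train = [rng.permutation(n_tasks) for _ in range(40)]
y = np.array([expected_utility(p) for p in train])
P = np.array([np.argsort(p) for p in train], dtype=float)  # task -> its position

def neg_corr(beta):                    # fit by correlation, not least squares
    return -pearsonr(P @ beta, y)[0]

beta = minimize(neg_corr, rng.standard_normal(n_tasks), method="Nelder-Mead").x

# Rank many candidate permutations cheaply; evaluate only the surrogate's pick.
cands = [rng.permutation(n_tasks) for _ in range(2000)]
scores = np.array([np.argsort(c).astype(float) @ beta for c in cands])
best = cands[int(np.argmax(scores))]
print("surrogate-preferred ordering:", best, " true EU:", expected_utility(best))
```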
7.
We consider the problem of constructing metamodels for computationally expensive simulation codes; that is, we construct interpolators/predictors of function values (responses) from a finite collection of evaluations (observations). We use Gaussian process (GP) modeling and kriging, and combine a Bayesian approach, based on a finite set of GP models, with the use of localized covariances indexed by the point where the prediction is made. Our approach is not based on postulating a generative model for the unknown function, but by letting the covariance functions depend on the prediction site, it provides enough flexibility to accommodate arbitrary nonstationary observations. Contrary to kriging prediction with plug-in parameter estimates, the resulting Bayesian predictor is constructed explicitly, without requiring any numerical optimization, and locally adjusts the weights given to the different models according to the data variability in each neighborhood. The predictor inherits the smoothness properties of the covariance functions that are used and its superiority over plug-in kriging, sometimes also called empirical-best-linear-unbiased prediction, is illustrated on various examples, including the reconstruction of an oceanographic field over a large region from a small number of observations. Supplementary materials for this article are available online.
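One way to sketch the locally weighted model combination; the weighting rule here is my own crude stand-in (distance-weighted leave-one-out residuals), not the paper's Bayesian construction.

```python
# Keep a small dictionary of GP models with different length-scales and, at
# each prediction site, weight them by how well they cross-validate on nearby
# design points, so the combination adapts to local data variability.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)
X = np.sort(rng.uniform(0, 10, 40)).reshape(-1, 1)
y = np.where(X.ravel() < 5, np.sin(5 * X.ravel()), 0.1 * X.ravel())  # nonstationary

models, loo = [], []
for ls in (0.1, 0.5, 2.0):
    models.append(GaussianProcessRegressor(kernel=RBF(ls), alpha=1e-3).fit(X, y))
    # Leave-one-out squared residuals as a local quality score for this model.
    res = np.array([y[i] - GaussianProcessRegressor(kernel=RBF(ls), alpha=1e-3)
                    .fit(np.delete(X, i, 0), np.delete(y, i)).predict(X[i:i+1])[0]
                    for i in range(len(X))])
    loo.append(res ** 2)

def predict(x):
    k = np.exp(-np.abs(X.ravel() - x))                # locality kernel around x
    scores = np.array([np.sum(k * r) / np.sum(k) for r in loo])
    w = np.exp(-scores / scores.min())
    w /= w.sum()                                       # local model weights
    return sum(wi * m.predict([[x]])[0] for wi, m in zip(w, models))

print(predict(2.0), predict(8.0))   # weights differ between the two regimes
```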
8.
9.
10.
Renata J. Romanowicz Keith J. Beven 《Reliability Engineering & System Safety》2006,91(10-11):1315-1321
The paper presents an application of the generalised likelihood uncertainty estimation (GLUE) methodology to the problem of estimating the uncertainty of predictions produced by environmental models. The methodology is placed in the wider context of different approaches to inverse modelling and, in particular, a comparison is made with Bayesian estimation techniques based on explicit structural assumptions about model error. Using a simple example of a rainfall-flow model, different evaluation measures and their influence on the prediction uncertainty and credibility intervals are demonstrated.
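GLUE itself is straightforward to sketch on a toy rainfall-flow model; the likelihood measure (Nash-Sutcliffe efficiency) and the behavioural threshold below are common informal choices, not prescriptions from the paper.

```python
# GLUE sketch: sample parameters, score each with an informal likelihood
# measure, discard non-behavioural sets, and form weighted prediction limits.
import numpy as np

rng = np.random.default_rng(5)
rain = rng.gamma(2.0, 1.0, 100)                        # synthetic rainfall series
obs = 0.6 * rain + 0.2 * rng.standard_normal(100)      # "observed" flow

ks = rng.uniform(0.0, 1.5, 5000)                       # prior parameter samples
sims = np.array([k * rain for k in ks])                # toy rainfall-flow model
nse = 1 - ((sims - obs) ** 2).sum(1) / ((obs - obs.mean()) ** 2).sum()

behavioural = nse > 0.5                                # informal acceptance rule
B = sims[behavioural]
w = nse[behavioural] - 0.5
w /= w.sum()                                           # likelihood weights

lo, hi = np.empty(100), np.empty(100)                  # weighted 5%/95% limits
for t in range(100):
    order = np.argsort(B[:, t])
    cw = np.cumsum(w[order])
    lo[t] = B[order, t][np.searchsorted(cw, 0.05)]
    hi[t] = B[order, t][np.searchsorted(cw, 0.95)]
print("mean width of 90% GLUE band:", (hi - lo).mean())
```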
11.
Mathematical models have been constructed for three types of uncertainty (interval, stochastic, and Bayesian), and the application of these models is discussed for describing measurements in the presence of unmonitored fluctuations leading to ambiguities in the results.

Translated from Izmeritel'naya Tekhnika, No. 9, pp. 39–44, September, 2005.
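The three uncertainty models can be contrasted on one set of repeated readings; the data, the prior, and the two-sigma convention below are illustrative assumptions on my part.

```python
# The same measurement data summarised under the three uncertainty models
# named in the abstract: interval, stochastic (frequentist), and Bayesian.
import numpy as np

x = np.array([9.98, 10.02, 10.01, 9.97, 10.03, 10.00])   # illustrative readings
n, mean, s = len(x), x.mean(), x.std(ddof=1)

interval = (x.min(), x.max())                              # interval model
stochastic = (mean - 2 * s / np.sqrt(n), mean + 2 * s / np.sqrt(n))

# Conjugate normal update with sigma taken as s and a weak prior N(10, 1).
prior_mu, prior_var, var = 10.0, 1.0 ** 2, s ** 2
post_var = 1 / (1 / prior_var + n / var)
post_mu = post_var * (prior_mu / prior_var + x.sum() / var)
bayes = (post_mu - 2 * np.sqrt(post_var), post_mu + 2 * np.sqrt(post_var))

for name, iv in [("interval", interval), ("stochastic", stochastic), ("Bayesian", bayes)]:
    print(f"{name:10s}: [{iv[0]:.4f}, {iv[1]:.4f}]")
```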
12.
Response surface-based design optimization has been commonly used for optimizing large-scale design problems in the automotive industry. However, most response surface models are built from a limited number of design points without considering data uncertainty. In addition, the selection of a response surface in the literature is often arbitrary. This article uses a Bayesian metric to systematically select the best available response surface among several candidates in a library while considering data uncertainty. An adaptive, efficient response surface strategy, which minimizes the number of computationally intensive simulations, was developed for design optimization of large-scale complex problems. This methodology was demonstrated by a crashworthiness optimization example.
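A sketch of metric-based surface selection, using BIC as a stand-in for the article's Bayesian metric and a small polynomial library as the candidate set; both choices are assumptions for illustration.

```python
# Fit several candidate response surfaces to the same design points and keep
# the one the selection metric prefers.
import numpy as np

rng = np.random.default_rng(6)
x = rng.uniform(-2, 2, 25)                                   # design points
y = 1.0 + 0.5 * x - 0.8 * x**2 + 0.1 * rng.standard_normal(25)  # noisy quadratic

def bic(degree):
    """BIC of a polynomial response surface (smaller is better)."""
    coef = np.polyfit(x, y, degree)
    resid = y - np.polyval(coef, x)
    n, k = len(y), degree + 1
    return n * np.log((resid ** 2).mean()) + k * np.log(n)

library = {f"poly deg {d}": bic(d) for d in (1, 2, 3, 4)}
best = min(library, key=library.get)
print(library, "\nselected response surface:", best)
```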
13.
Saturated fractional factorial experimental designs and orthogonal main effect plans are extremely valuable tools in quality engineering. However, one problem with these designs is that there are no replicate runs to be used for estimating experimental error. This note develops an estimator of the experimental error based on the hypothesis that not all factor effects will be non-zero. A joint Bayesian prior distribution is presented for the experimental error variance of an effect, σ², and the probability that each effect is non-zero. From this prior distribution a posterior marginal distribution for σ² is derived along with a direct estimate of σ². This method is compared with the traditional methods of estimating σ² in unreplicated designs through a numerical example.
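The flavour of the estimator can be sketched with a two-component mixture for each effect and a grid prior on σ; the contrasts, the prior probability p, and the active-effect scale τ below are illustrative and not the note's exact prior.

```python
# Treat each of m effect estimates as N(0, sigma^2) if inactive and
# N(0, sigma^2 + tau^2) if active with prior probability p, then report the
# posterior mean of sigma^2 over a grid prior on sigma.
import numpy as np
from scipy import stats

effects = np.array([11.2, -0.4, 0.9, -8.7, 0.2, -1.1, 0.5])  # illustrative contrasts
p, tau = 0.25, 10.0                     # prior P(active), active-effect scale

sigmas = np.linspace(0.05, 5.0, 400)    # uniform grid prior on sigma
post = np.empty_like(sigmas)
for i, s in enumerate(sigmas):
    like_inactive = stats.norm.pdf(effects, scale=s)
    like_active = stats.norm.pdf(effects, scale=np.hypot(s, tau))
    post[i] = np.prod((1 - p) * like_inactive + p * like_active)
post /= post.sum()

print("posterior mean of sigma^2:", np.sum(post * sigmas ** 2))
```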
14.
Michael S. Hamada Stefan H. Steiner R. Jock MacKay C. Shane Reese 《Quality and Reliability Engineering International》2017,33(3):657-668
A commonly used model to analyze experiments with normal responses does not distinguish between replicates and repeats. The same problem arises with binary and count responses where we can use a generalized linear model. In this article, we propose using models that explicitly allow for two sources of variation, that due to replicates and that due to repeats. In addition, for experiments carried out on high‐volume, existing processes, there are often large amounts of data, collected in different ways, that are available to aid in the planning and analysis of the experiment. We demonstrate the value of using these available data with two detailed examples. We finish with a brief summary and raise some further issues. Copyright © 2016 John Wiley & Sons, Ltd.
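A minimal sketch of a model with the two variance sources, on simulated data with a hypothetical two-level factor; a mixed model with a random intercept per run is one convenient way to fit it, not necessarily the authors' approach.

```python
# Separate replicate-to-replicate variation (random intercept per run) from
# repeat-to-repeat measurement variation (residual) with a mixed model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
runs, repeats = 16, 5
trt = np.repeat(rng.integers(0, 2, runs), repeats)          # two-level factor
run_id = np.repeat(np.arange(runs), repeats)
run_effect = np.repeat(rng.normal(0, 1.0, runs), repeats)   # replicate variation
y = 10 + 2 * trt + run_effect + rng.normal(0, 0.5, runs * repeats)  # + repeat noise

df = pd.DataFrame({"y": y, "trt": trt, "run": run_id})
fit = smf.mixedlm("y ~ trt", df, groups=df["run"]).fit()
print(fit.summary())  # 'Group Var' ~ replicate variance; 'Scale' ~ repeat variance
```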
15.
Junhui Cui 《中国材料科技与设备》2012,(6):62-64
To put the experimental study underlying a macroscopic constitutive model of a material on a sounder footing, this paper applies orthogonal experimental design on the basis of a cause-and-effect diagram analysis. Range analysis and contribution analysis are used to identify the macroscopic factors that influence the material's flow stress, and the macroscopic constitutive model is then built using orthogonal polynomials.
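A sketch of the workflow on simulated numbers; the factor names and the response values are invented stand-ins, and the final orthogonal-polynomial fit is omitted.

```python
# An L9(3^4) orthogonal array for candidate macroscopic factors, with range
# analysis to rank their influence on flow stress.
import numpy as np

L9 = np.array([[1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
               [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
               [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1]])
stress = np.array([152., 168., 181., 160., 175., 158., 177., 163., 171.])

for j, name in enumerate(["temperature", "strain rate", "strain", "(empty)"]):
    level_means = [stress[L9[:, j] == lvl].mean() for lvl in (1, 2, 3)]
    rng_j = max(level_means) - min(level_means)      # range analysis statistic
    print(f"{name:12s} level means {np.round(level_means, 1)}  range {rng_j:.1f}")
# A larger range indicates a stronger macroscopic influence; the ranked
# factors then feed the constitutive-model fit.
```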
16.
17.
Lawrence W. Robinson 《Quality Engineering》2003,15(3):403-406
This modest paper presents a much more concise representation of the popular experimental design generators for fractional factorial designs. The generators are indexed by what the experimenter starts with: the number of factors and the desired resolution for the experiment. Its format makes it easy to explore the effect of changing the experimental design by varying the number of factors or the resolution.
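The indexing idea reduces naturally to a lookup table; the few generators shown below are standard textbook entries to the best of my knowledge, not a reproduction of the paper's table.

```python
# Look up fractional factorial design generators by
# (number of factors, resolution).
GENERATORS = {
    (4, "IV"):  ["D = ABC"],
    (5, "III"): ["D = AB", "E = AC"],
    (5, "V"):   ["E = ABCD"],
    (6, "IV"):  ["E = ABC", "F = BCD"],
    (7, "IV"):  ["E = ABC", "F = BCD", "G = ACD"],
}

def design_for(n_factors, resolution):
    gens = GENERATORS.get((n_factors, resolution))
    if gens is None:
        return f"no 2^({n_factors}-k) design of resolution {resolution} tabulated"
    k = len(gens)
    return (f"2^({n_factors}-{k}) design, {2 ** (n_factors - k)} runs, "
            f"generators: " + ", ".join(gens))

print(design_for(6, "IV"))   # changing factor count or resolution is just
print(design_for(5, "V"))    # a different key into the same table
```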
18.
19.
This article presents methods to enhance the efficiency of Evolutionary Algorithms (EAs), particularly those assisted by surrogate evaluation models or metamodels. The gain in efficiency becomes important in applications related to industrial optimization problems with a large number of design variables. The development is based on the principal components analysis of the elite members of the evolving EA population, the outcome of which is used to guide the application of evolution operators and/or train dependable metamodels/artificial neural networks by reducing the number of sensory units. Regarding the latter, the metamodels are trained with less computing cost and yield more relevant objective function predictions. The proposed methods are applied to constrained, single- and two-objective optimization of thermal and hydraulic turbomachines.
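The core PCA-guided operator can be sketched as follows; the objective, population sizes, and mutation rule are placeholders of mine, and the metamodel-training half of the method is omitted.

```python
# Take the elite members of the current EA population, extract their principal
# directions, and mutate along that rotated basis so search effort follows the
# subspace the elites already occupy.
import numpy as np

rng = np.random.default_rng(8)

def objective(x):                     # stand-in for an expensive evaluation
    return np.sum((x - 1.0) ** 2, axis=-1)

pop = rng.uniform(-5, 5, (60, 10))
for gen in range(50):
    fitness = objective(pop)
    elites = pop[np.argsort(fitness)[:15]]             # elite subset
    centred = elites - elites.mean(axis=0)
    _, svals, Vt = np.linalg.svd(centred, full_matrices=False)
    scale = svals / np.sqrt(len(elites))               # per-direction spread
    # Offspring: elite parents perturbed along principal components only.
    parents = elites[rng.integers(0, 15, 60)]
    steps = rng.standard_normal((60, len(scale))) * scale
    pop = parents + steps @ Vt
print("best objective:", objective(pop).min())
```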
20.
This article describes an implementation of a particular design of experiment (DoE) plan based upon optimal Latin hypercubes that have certain space-filling and uniformity properties, with the goal of maximizing the information gained. The feature emphasized here is the concept of simultaneous model building and model validation plans whose union retains the same properties as the component sets. Two Latin hypercube DoE are constructed simultaneously for use in a meta-modelling context, one for model building and one for model validation. The goal is to optimize the uniformity of both sets with respect to space-filling properties of the designs whilst satisfying the key requirement that the merged DoE, comprising the union of build and validation sets, has similar space-filling properties. This develops an optimal sampling approach for the first iteration, the initial model building and validation, where the most information is gained and full advantage can be taken of parallel computing. A permutation genetic algorithm using several genetic operator strategies is implemented in which fitness evaluation is based upon the Audze-Eglais potential energy function, and an example is presented based upon the well-known six-hump camel-back function. The relative efficiency of the strategies and the associated computational aspects are discussed with respect to the quality of the designs obtained. The requirement for such design approaches arises from the need for multiple calls to traditionally expensive system and discipline analyses within iterative multi-disciplinary optimisation frameworks.
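The Audze-Eglais criterion and a swap-mutation permutation GA are easy to sketch; the GA below is far simpler than the paper's multi-operator algorithm and ignores the simultaneous build/validation construction.

```python
# Optimise the Audze-Eglais potential energy of a Latin hypercube over
# column permutations with elitist truncation selection and swap mutation.
import numpy as np

rng = np.random.default_rng(9)
n_pts, n_dim = 12, 2

def audze_eglais(design):
    """Sum of inverse squared point-to-point distances (smaller is better)."""
    d2 = ((design[:, None, :] - design[None, :, :]) ** 2).sum(-1)
    iu = np.triu_indices(n_pts, 1)
    return (1.0 / d2[iu]).sum()

def random_lh():
    return np.column_stack([rng.permutation(n_pts) for _ in range(n_dim)])

pop = [random_lh() for _ in range(30)]
for gen in range(300):
    pop.sort(key=audze_eglais)
    children = []
    for parent in pop[:15]:                       # keep the better half
        child = parent.copy()
        col = rng.integers(n_dim)                 # swap two levels in one column
        i, j = rng.choice(n_pts, 2, replace=False)
        child[[i, j], col] = child[[j, i], col]   # stays a Latin hypercube
        children.append(child)
    pop = pop[:15] + children
print("best Audze-Eglais energy:", audze_eglais(min(pop, key=audze_eglais)))
```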