Similar Documents
20 similar documents found (search time: 31 ms)
1.
Teng Long  Di Wu  Xin Chen  Xiaosong Guo  Li Liu 《工程优选》2016,48(6):1019-1036
Space-filling and projective properties are desired features of design-of-computer-experiments methods for metamodelling. To enable the production of high-quality sequential samples, this article presents a novel deterministic sequential maximin Latin hypercube design (LHD) method using successive local enumeration, notated as sequential-successive local enumeration (S-SLE). First, a mesh-mapping algorithm is proposed to map the positions of existing points into the new hyper-chessboard to ensure the projective property. According to the maximin distance criterion, new sequential samples are generated through successive local enumeration iterations to improve the space-filling uniformity. Through a number of comparative studies, several appealing merits of S-SLE are demonstrated: (1) S-SLE outperforms several existing LHD methods in terms of sequential sampling quality; (2) it is flexible and robust enough to produce high-quality multiple-stage sequential samples; and (3) the proposed method can improve the overall performance of sequential metamodel-based optimization algorithms. Thus, S-SLE is a promising sequential LHD method for metamodel-based optimization.
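The maximin distance criterion that drives S-SLE can be illustrated with a minimal sketch. This is not the successive-local-enumeration algorithm itself: it simply generates several random Latin hypercube designs and keeps the one whose smallest pairwise distance is largest. All function names are illustrative.

```python
import itertools
import random


def latin_hypercube(n, d, rng):
    """One random Latin hypercube sample: each dimension is stratified
    into n equal bins with exactly one point per bin (projective property)."""
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(p + rng.random()) / n for p in perm])
    return list(zip(*cols))  # n points in [0, 1)^d


def min_pairwise_distance(points):
    """Smallest Euclidean distance between any two points of the design."""
    return min(
        sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
        for p, q in itertools.combinations(points, 2)
    )


def maximin_lhd(n, d, n_candidates=50, seed=0):
    """Among random LHDs, keep the one maximizing the minimum distance."""
    rng = random.Random(seed)
    designs = [latin_hypercube(n, d, rng) for _ in range(n_candidates)]
    return max(designs, key=min_pairwise_distance)


design = maximin_lhd(n=10, d=2)
```

Because every candidate is a Latin hypercube, the chosen design keeps one point per one-dimensional stratum while the maximin selection improves space-filling uniformity.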

2.
Space-filling and projective properties are probably the two most important features in computer experiments. Existing research has developed various kinds of sequential Latin hypercube design (LHD) to meet these two properties. However, most if not all of them cannot simultaneously ensure both properties in their versions of sequential LHD. In this paper, we propose a novel sequential LHD that simultaneously meets the space-filling and projective properties at each stage. A search algorithm is employed to find how many design points should be added in each stage to ensure the projective property, and the maximin criterion is used to meet the space-filling property. Examples in low and higher dimensions are presented to illustrate how these sequential sampling processes are realized. The proposed method can be applied in areas where computationally expensive simulations are involved.
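One common way to preserve the projective property across stages, assuming the stage size is allowed to double (a special case, not this paper's general search algorithm), is to refine the one-dimensional grid and place new points only in the strata left empty:

```python
import random


def refine_lhd(points, rng):
    """Double a Latin hypercube design: with n points on an n-bin grid,
    refine every dimension to 2n bins and fill only the bins left empty,
    so the combined 2n points are again Latin in each 1-D projection."""
    n = len(points)
    new_cols = []
    for dim in range(len(points[0])):
        occupied = {int(p[dim] * 2 * n) for p in points}
        empty = [b for b in range(2 * n) if b not in occupied]
        rng.shuffle(empty)
        new_cols.append([(b + rng.random()) / (2 * n) for b in empty])
    return points + list(zip(*new_cols))


rng = random.Random(0)
# a 4-point Latin hypercube on [0, 1)^2 (bin indices 0..3 in each dimension)
base = [((b + rng.random()) / 4, (p + rng.random()) / 4)
        for b, p in zip(range(4), rng.sample(range(4), 4))]
combined = refine_lhd(base, rng)
```

Each original point occupies exactly one of the two sub-bins it splits into, so the n empty sub-bins per dimension accommodate exactly n new points.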

3.
Sequential experiments composed of initial experiments and follow-up experiments are widely adopted for economical computer emulations. Many kinds of Latin hypercube designs with good space-filling properties have been proposed for designing the initial computer experiments. However, little work based on Latin hypercubes has focused on the design of the follow-up experiments. Although some constructions of nested Latin hypercube designs can be adapted to sequential designs, the size of the follow-up experiments needs to be a multiple of that of the initial experiments. In this article, a general method for constructing sequential designs of flexible size is proposed, which allows the combined designs to have good one-dimensional space-filling properties. Moreover, the sampling properties and a type of central limit theorem are derived for these designs. Several improvements of these designs are made to achieve better space-filling properties. Simulations are carried out to verify the theoretical results. Supplementary materials for this article are available online.

4.
A classifier-guided sampling (CGS) method is introduced for solving engineering design optimization problems with discrete and/or continuous variables and continuous and/or discontinuous responses. The method merges concepts from metamodel-guided sampling and population-based optimization algorithms. The CGS method uses a Bayesian network classifier for predicting the performance of new designs based on a set of known observations or training points. Unlike most metamodelling techniques, however, the classifier assigns a categorical class label to a new design, rather than predicting the resulting response in continuous space, and thereby accommodates non-differentiable and discontinuous functions of discrete or categorical variables. The CGS method uses these classifiers to guide a population-based sampling process towards combinations of discrete and/or continuous variable values with a high probability of yielding preferred performance. Accordingly, the CGS method is appropriate for discrete/discontinuous design problems that are ill suited for conventional metamodelling techniques and too computationally expensive to be solved by population-based algorithms alone. The rates of convergence and computational properties of the CGS method are investigated when applied to a set of discrete variable optimization problems. Results show that the CGS method significantly improves the rate of convergence towards known global optima, on average, compared with genetic algorithms.
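A minimal sketch of the classifier-guided idea, substituting a Gaussian naive Bayes classifier for the Bayesian network classifier used in the article; the labels, bounds, objective, and thresholds are all illustrative assumptions:

```python
import math
import random


def fit_gaussian_nb(X, y):
    """Per-class prior and per-variable (mean, variance) estimates."""
    model = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        stats = []
        for dim in range(len(X[0])):
            vals = [r[dim] for r in rows]
            mu = sum(vals) / len(vals)
            var = sum((v - mu) ** 2 for v in vals) / len(vals) + 1e-9
            stats.append((mu, var))
        model[label] = (len(rows) / len(X), stats)
    return model


def predict(model, x):
    """Class label with the highest log posterior."""
    def log_post(label):
        prior, stats = model[label]
        lp = math.log(prior)
        for v, (mu, var) in zip(x, stats):
            lp += -0.5 * math.log(2 * math.pi * var) - (v - mu) ** 2 / (2 * var)
        return lp
    return max(model, key=log_post)


def cgs_step(f, X, rng, n_candidates=10000):
    """One classifier-guided step: label the better half of the observed
    designs 'good', fit the classifier, and return a random candidate
    predicted to be 'good'."""
    cut = sorted(f(x) for x in X)[len(X) // 2]
    y = ['good' if f(x) <= cut else 'bad' for x in X]
    model = fit_gaussian_nb(X, y)
    cand = tuple(rng.uniform(-5, 5) for _ in X[0])
    for _ in range(n_candidates):
        if predict(model, cand) == 'good':
            break
        cand = tuple(rng.uniform(-5, 5) for _ in X[0])
    return cand
```

The categorical labelling is what lets the approach tolerate discontinuous or non-differentiable responses: only an ordering of observed designs is needed, never a continuous prediction.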

5.
Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the improved VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of the high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient for an aircraft are provided to demonstrate the approximation capability of the proposed approach, as well as three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.
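The idea of varying a low-fidelity model through a polynomial response surface can be illustrated in its simplest first-order form: a least-squares fit of high-fidelity samples against low-fidelity predictions. This is a stand-in sketch, not the IHK formulation itself.

```python
def fit_scaling(lf_vals, hf_vals):
    """Least-squares fit of hf ~ a + b * lf: the low-fidelity trend is
    shifted and scaled to track the high-fidelity samples (a first-order
    stand-in for the polynomial scaling in hierarchical kriging)."""
    n = len(lf_vals)
    sx = sum(lf_vals)
    sy = sum(hf_vals)
    sxx = sum(v * v for v in lf_vals)
    sxy = sum(u * v for u, v in zip(lf_vals, hf_vals))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b


# illustrative models: hf is an exact affine transform of lf
xs = [0.0, 1.0, 2.0, 3.0]
lf = [x * x for x in xs]            # low-fidelity prediction
hf = [2 * x * x + 1 for x in xs]    # high-fidelity observation
a, b = fit_scaling(lf, hf)
```

In a full VF model, a kriging correction would then be trained on the residuals hf - (a + b * lf); here the affine relation is exact, so the residuals vanish.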

6.
Jinglai Wu  Zhen Luo  Nong Zhang 《工程优选》2013,45(9):1264-1288
The accuracy of metamodelling is determined by both the sampling and approximation. This article proposes a new sampling method based on the zeros of Chebyshev polynomials to capture the sampling information effectively. First, the zeros of one-dimensional Chebyshev polynomials are applied to construct Chebyshev tensor product (CTP) sampling, and the CTP is then used to construct high-order multi-dimensional metamodels using the ‘hypercube’ polynomials. Secondly, the CTP sampling is further enhanced to develop Chebyshev collocation method (CCM) sampling, to construct the ‘simplex’ polynomials. The samples of CCM are randomly and directly chosen from the CTP samples. Two widely studied sampling methods, namely the Smolyak sparse grid and Hammersley, are used to demonstrate the effectiveness of the proposed sampling method. Several numerical examples are utilized to validate the approximation accuracy of the proposed metamodel under different dimensions.
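The one-dimensional Chebyshev zeros and the tensor-product samples built from them follow standard formulas; a small sketch with illustrative function names:

```python
import math


def chebyshev_zeros(m):
    """Zeros of the degree-m Chebyshev polynomial T_m on [-1, 1]:
    cos((2k - 1) * pi / (2m)) for k = 1..m."""
    return [math.cos((2 * k - 1) * math.pi / (2 * m)) for k in range(1, m + 1)]


def ctp_grid(m, d):
    """Chebyshev tensor-product sample: the d-fold Cartesian product
    of the one-dimensional zeros (m**d points in [-1, 1]^d)."""
    pts = [()]
    for _ in range(d):
        pts = [p + (z,) for p in pts for z in chebyshev_zeros(m)]
    return pts
```

The CCM sampling described in the abstract would then draw a random subset of `ctp_grid` large enough to determine the 'simplex' polynomial basis.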

7.
When fitting complex models, such as finite element or discrete event simulations, the experiment design should exhibit desirable properties of both projectivity and orthogonality. To reduce experimental effort, sequential design strategies allow experimenters to collect data only until some measure of prediction precision is reached. In this article, we present a batch sequential experiment design method that uses sliced full factorial-based Latin hypercube designs (sFFLHDs), which are an extension to the concept of sliced orthogonal array-based Latin hypercube designs (OALHDs). At all stages of the sequential design, good univariate stratification is achieved. The structure of the FFLHDs also tends to produce uniformity in higher dimensions, especially at certain stages of the design. We show that our batch sequential design approach has good sampling and fitting qualities through both empirical studies and theoretical arguments. Supplementary materials are available online.

8.
Metamodels are widely used to facilitate the analysis and optimization of engineering systems that involve computationally expensive simulations. Kriging is a metamodelling technique that is well known for its ability to build surrogate models of responses with non-linear behaviour. However, the assumption of a stationary covariance structure underlying Kriging does not hold in situations where the level of smoothness of a response varies significantly. Although non-stationary Gaussian process models have been studied for years in statistics and geostatistics communities, this has largely been for physical experimental data in relatively low dimensions. In this paper, the non-stationary covariance structure is incorporated into Kriging modelling for computer simulations. To represent the non-stationary covariance structure, we adopt a non-linear mapping approach based on parameterized density functions. To avoid over-parameterizing for the high dimension problems typical of engineering design, we propose a modified version of the non-linear map approach, with a sparser, yet flexible, parameterization. The effectiveness of the proposed method is demonstrated through both mathematical and engineering examples. The robustness of the method is verified by testing multiple functions under various sampling settings. We also demonstrate that our method is effective in quantifying prediction uncertainty associated with the use of metamodels. Copyright © 2006 John Wiley & Sons, Ltd.

9.
Adel Younis 《工程优选》2013,45(8):691-718
Global optimization techniques have been used extensively due to their capability in handling complex engineering problems. In addition to a number of well known global optimization techniques, many new methods have been introduced recently for various optimal design applications. In this work, a number of representative, well known and recently introduced global optimization techniques are closely examined and compared. The historical development, special features and trends in the development of global optimization algorithms are reviewed. Special attention is devoted to the recent developments of multidisciplinary design optimization algorithms based on effective metamodelling techniques. Commonly used benchmark optimization problems are used as test examples to reveal the pros and cons of these global optimization methods. A new metamodel-based global optimization search method, introduced and improved recently by the authors, is also included in the tests and comparison.

10.
Quinn Thomson 《工程优选》2013,45(6):615-633
This article presents an adaptive accuracy trust region (AATR) optimization strategy where cross-validation is used by the trust region to reduce the number of sample points needed to construct metamodels for each step of the optimization process. Lower accuracy metamodels are initially used for the larger trust regions, and higher accuracy metamodels are used for the smaller trust regions towards the end of optimization. Various metamodelling strategies are used in the AATR algorithm: optimal and inherited Latin hypercube sampling to generate experimental designs; quasi-Newton, kriging and polynomial regression metamodels to approximate the objective function; and the leave-k-out method for validation. The algorithm is tested with two-dimensional single-discipline problems. Results show that the AATR algorithm is a promising method when compared to a traditional trust region method. Polynomial regression in conjunction with a new hybrid inherited-optimal Latin hypercube sampling performed the best.
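The leave-k-out validation step used by AATR can be sketched generically; here `fit` and `predict` stand for any metamodel's training and prediction routines (illustrative assumptions, not the article's API):

```python
import itertools


def leave_k_out_error(fit, predict, X, y, k):
    """Mean squared leave-k-out error: every size-k subset of the data is
    held out in turn, the metamodel is fit on the rest, and the held-out
    responses are predicted."""
    errs = []
    for held in itertools.combinations(range(len(X)), k):
        train = [i for i in range(len(X)) if i not in held]
        model = fit([X[i] for i in train], [y[i] for i in train])
        errs.extend((predict(model, X[i]) - y[i]) ** 2 for i in held)
    return sum(errs) / len(errs)
```

A trust-region scheme of the kind described could shrink or grow the region depending on whether this error meets the accuracy required at the current stage.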

11.
In the field of engineering design and optimization, metamodels are widely used to replace expensive simulation models in order to reduce computing costs. To improve the accuracy of metamodels effectively and efficiently, sequential sampling designs have been developed. In this article, a sequential sampling design using the Monte Carlo method and space reduction strategy (MCSR) is implemented and discussed in detail. The space reduction strategy not only maintains good sampling properties but also improves the efficiency of the sampling process. Furthermore, a local boundary search (LBS) algorithm is proposed to efficiently improve the performance of MCSR, which is called LBS-MCSR. Comparative results with several sequential sampling approaches from low to high dimensions indicate that the space reduction strategy generates samples with better sampling properties (and thus better metamodel accuracy) in less computing time.

12.
In engineering design optimization, computing structural responses by numerical simulation consumes considerable time and computational cost, which poses a great challenge for computation-intensive design optimization; sequential optimization methods based on surrogate models have therefore been studied in depth and widely applied. This article gives a brief overview of the framework of surrogate-model-based sequential optimization; to address shortcomings of existing methods, a class of model-independent hybrid infill criteria is developed, so that the new sample points generated during the optimization loop are distributed over the current…

13.
Large computer simulators usually have complex and nonlinear input-output functions. This complicated input-output relation can be analyzed by global sensitivity analysis; however, this usually requires massive Monte Carlo simulations. To effectively reduce the number of simulations, statistical techniques such as Gaussian process emulators can be adopted. The accuracy and reliability of these emulators strongly depend on the experimental design, in which suitable evaluation points are selected. In this paper a new sequential design strategy called hierarchical adaptive design is proposed to obtain an accurate emulator using the least possible number of simulations. The hierarchical design is tested on various standard analytic functions and on a challenging reservoir forecasting application. Comparisons with standard one-stage designs such as maximin Latin hypercube designs show that the hierarchical adaptive design produces a more accurate emulator with the same number of computer experiments. Moreover, a stopping criterion is proposed that allows the experimenter to perform only the number of simulations necessary to reach the required approximation accuracy.

14.
A novel infill sampling criterion is proposed for efficient estimation of the global robust optimum of expensive computer simulation based problems. The algorithm is especially geared towards addressing problems that are affected by uncertainties in design variables and problem parameters. The method is based on constructing metamodels using Kriging and adaptively sampling the response surface via a principle of expected improvement adapted for robust optimization. Several numerical examples and an engineering case study are used to demonstrate the ability of the algorithm to estimate the global robust optimum using a limited number of expensive function evaluations.
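The expected-improvement principle that the criterion adapts has a well-known closed form for a Gaussian predictor. The sketch below shows the standard (non-robust) form under the minimization convention; the paper's robust adaptation differs.

```python
import math


def expected_improvement(mu, sigma, y_best):
    """Closed-form EI for a Gaussian predictor: E[max(y_best - Y, 0)]
    with Y ~ N(mu, sigma^2), i.e. improvement over the best observed
    value y_best when minimizing."""
    if sigma <= 0:
        return max(y_best - mu, 0.0)
    z = (y_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return (y_best - mu) * cdf + sigma * pdf
```

The infill point is the candidate maximizing this quantity over the design space, trading off exploitation (low `mu`) against exploration (high `sigma`).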

15.
A Sequential Experiment Design Method for Uncertainty Problems Based on the Stochastic Kriging Model
Uncertainty studies require large numbers of repeated samples, which poses a great challenge for computationally expensive numerical simulations. Design-of-experiments methods can effectively reduce the computational cost of uncertainty studies; however, most current experiment design research that considers uncertainty still focuses on traditional methods. To address this problem, and to obtain more accurate uncertainty assessments through a more rational allocation of computing resources, a three-stage sequential experiment design method for uncertainty problems is proposed based on the finite-sample Stochastic Kriging model. First, the IMSE is simplified through sampling at specific locations, and a pre-selection strategy for the step information is constructed: by pre-selecting the total number of incremental samples and their allocation among the sampling locations, the target accuracy of the stochastic surrogate model is achieved. Meanwhile, an IMSE-based single-round point-selection criterion using the step information is constructed, so that both the sampling locations of the design variables and their allocation are considered simultaneously. Comparisons with traditional methods on numerical examples show that the proposed method yields a more accurate stochastic surrogate model from the same number of samples, verifying its feasibility and advantages for uncertainty problems.

16.
Sequential experiment design strategies have been proposed for efficiently augmenting initial designs to solve many problems of interest to computer experimenters, including optimization, contour and threshold estimation, and global prediction. We focus on batch sequential design strategies for achieving maturity in global prediction of discrepancy inferred from computer model calibration. Predictive maturity focuses on adding field experiments to efficiently improve discrepancy inference. Several design criteria are extended to allow batch augmentation, including integrated and maximum mean square error, maximum entropy, and two expected improvement criteria. In addition, batch versions of maximin distance and weighted distance criteria are developed. Two batch optimization algorithms are considered: modified Fedorov exchange and a binning methodology motivated by optimizing augmented fractional factorial skeleton designs.
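A batch maximin-distance augmentation can be sketched greedily: each new point maximizes its distance to the nearest existing or already-chosen point. This is a common heuristic, not the Fedorov-exchange or binning algorithms studied in the article.

```python
def dist(a, b):
    """Euclidean distance between two points."""
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5


def batch_maximin_augment(existing, candidates, k):
    """Greedy batch augmentation: each of the k new points maximizes its
    distance to the nearest point already in the design (either an
    existing point or one chosen earlier in the batch)."""
    chosen = list(existing)
    picked = []
    pool = list(candidates)
    for _ in range(k):
        best = max(pool, key=lambda c: min(dist(c, p) for p in chosen))
        picked.append(best)
        chosen.append(best)
        pool.remove(best)
    return picked
```

Greedy selection is not optimal for the batch as a whole, which is why exchange-type algorithms that revisit earlier choices can do better.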

17.
The calibration of computer models using physical experimental data has received considerable interest in the last decade. Recently, multiple works have addressed the functional calibration of computer models, where the calibration parameters are functions of the observable inputs rather than taking a set of fixed values as traditionally treated in the literature. While much of the recent work on functional calibration has focused on estimation, the issue of sequential design for functional calibration still presents itself as an open question. Addressing the sequential design issue is thus the focus of this article. We investigate different sequential design approaches and show that the simple separate design approach has its merit in practical use when designing for functional calibration. Analysis is carried out on multiple simulated and real-world examples.

18.
曲杰  苏海赋 《工程力学》2013,30(2):332-339
This article proposes a surrogate-model-based optimization design method for complex structures and applies it to the structural optimization of the rotor of a ventilated disc brake. The proposed method integrates CAE analysis, design of experiments, surrogate model construction, and nonlinear optimization: Latin hypercube sampling is adopted for the design of experiments, an improved response surface model serves as the surrogate, and sequential quadratic programming is used as the nonlinear optimization algorithm. To address the problem that some predictions of the traditional response surface model deviate considerably from the experimental values, the improved approach accepts a response surface model as feasible only if the relative error between the predicted and experimental values at every sampling point is within a specified range. With the rotor mass held constant, the integrated optimization method is applied to maximize the fatigue life of the brake rotor, with good results. The fatigue life of the rotor is predicted by the Coffin-Manson method, while the maximum thermal stress and maximum temperature on the rotor surface during braking are obtained from thermo-mechanically coupled finite element simulation of an emergency braking process. The optimization results show that the proposed method is an effective optimization design approach for complex structures.
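The Coffin-Manson life prediction mentioned above relates the plastic strain amplitude to cycles to failure. A sketch with illustrative material constants (the article's actual parameter values are not given here):

```python
def coffin_manson_life(delta_eps_p, eps_f, c):
    """Cycles to failure N_f from the Coffin-Manson law
        delta_eps_p / 2 = eps_f * (2 * N_f) ** c,
    where delta_eps_p is the plastic strain range, eps_f the fatigue
    ductility coefficient, and c the (negative) fatigue ductility
    exponent. Solving for N_f gives the expression below."""
    return 0.5 * (delta_eps_p / (2.0 * eps_f)) ** (1.0 / c)


# illustrative constants, not the paper's fitted values
life = coffin_manson_life(delta_eps_p=0.01, eps_f=0.5, c=-0.6)
```

In the workflow described, `delta_eps_p` would come from the thermo-mechanically coupled finite element simulation of the braking cycle.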

19.
Preparation of Gradient Materials by Powder Metallurgy. In mechanical constructions as well as in electronic devices, the designer is commonly faced with the problem of matching material properties to the demands for strength, ductility, conductivity, corrosion resistance, etc. Usually, these demands differ considerably in different parts of the component. Composition gradient materials offer an elegant and economic solution to this problem. The article describes a reliable preparation method for gradient materials with a one-dimensional gradient of composition. The method is based on an entirely novel powder metallurgy technique that employs a computer-controlled dosing system and a centrifugal forming unit. As the results show, it is possible to realize almost any desired property profile.

20.
We propose a systematic approach to determine the optimal maintenance policy for an automated manufacturing system which includes a flexible manufacturing cell (FMC) and several automated machine shops. The systematic approach combines simulation, fractional factorial design, noise or outer array of Taguchi design, regression metamodelling, and classical queueing analysis. A useful expression of the fractional utilization of the manufacturing system is derived and incorporated into formulating and solving the corresponding decision problem. The systematic approach provides an effective implementation procedure to handle practical maintenance problems found in a complex manufacturing environment. © 1997 John Wiley & Sons, Ltd.
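Classical queueing relations of the kind combined with the regression metamodel can be sketched for a single M/M/1 station (an illustration only; the paper's fractional-utilization expression for the full system is more elaborate):

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state M/M/1 measures: utilization rho = lambda/mu, mean
    number in system L = rho/(1 - rho), and mean time in system
    W = L/lambda (Little's law). Requires rho < 1 for stability."""
    rho = arrival_rate / service_rate
    assert rho < 1, "queue is unstable"
    L = rho / (1 - rho)
    W = L / arrival_rate
    return rho, L, W
```

A maintenance policy changes the effective service rate (through downtime), so expressions like these let utilization feed directly into the maintenance decision problem.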
