Similar Documents
20 similar documents found (search time: 46 ms)
1.
F. Xiong, Y. Xiong, S. Yang, Engineering Optimization, 2013, 45(8): 793-810
Space-filling and projective properties are desired features in the design of computer experiments to create global metamodels to replace expensive computer simulations in engineering design. The goal in this article is to develop an efficient and effective sequential Quasi-LHD (Latin Hypercube design) sampling method to maintain and balance the two aforementioned properties. The sequential sampling is formulated as an optimization problem, with the objective being the Maximin Distance, a space-filling criterion, and the constraints based on a set of pre-specified minimum one-dimensional distances to achieve the approximate one-dimensional projective property. Through comparative studies on sampling property and metamodel accuracy, the new approach is shown to outperform other sequential sampling methods for global metamodelling and is comparable to the one-stage sampling method while providing more flexibility in a sequential metamodelling procedure.
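The Maximin Distance criterion named above can be illustrated with a minimal sketch (not the authors' constrained sequential formulation): generate several random Latin hypercube designs and keep the one whose smallest pairwise point distance is largest.

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """Random LHD in [0,1]^d: each dimension places exactly one point per stratum."""
    strata = np.array([rng.permutation(n) for _ in range(d)]).T  # shape (n, d)
    return (strata + rng.random((n, d))) / n

def min_pairwise_distance(x):
    """Maximin criterion value: smallest Euclidean distance between any two points."""
    diff = x[:, None, :] - x[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)  # ignore self-distances
    return dist.min()

def maximin_lhd(n, d, n_restarts=200, seed=0):
    """Pick the best of several random LHDs under the Maximin Distance criterion."""
    rng = np.random.default_rng(seed)
    best, best_score = None, -np.inf
    for _ in range(n_restarts):
        cand = latin_hypercube(n, d, rng)
        score = min_pairwise_distance(cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score
```

A restart search like this only approximates the optimum; the article instead optimizes the criterion directly subject to one-dimensional distance constraints.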

2.
Metamodel-based global optimization methods have been extensively studied for their great potential in solving expensive problems. In this work, a design space management strategy is proposed to improve the accuracy and efficiency of metamodel-based optimization methods. In this strategy, the whole design space is divided into two parts: the important region, constructed using several expensive points, and the other region. Combined with a previously developed hybrid metamodel strategy, a hybrid metamodel-based design space management method (HMDSM) is developed. In this method, three representative metamodels are used simultaneously in the search for the global optimum in both the important region and the other region. During the search, the important region is iteratively reduced and the global optimum is quickly captured. Tests using a series of benchmark mathematical functions and a practical expensive problem demonstrate the excellent performance of the proposed method.

3.
Quinn Thomson, Engineering Optimization, 2013, 45(6): 615-633
This article presents an adaptive accuracy trust region (AATR) optimization strategy where cross-validation is used by the trust region to reduce the number of sample points needed to construct metamodels for each step of the optimization process. Lower accuracy metamodels are initially used for the larger trust regions, and higher accuracy metamodels are used for the smaller trust regions towards the end of optimization. Various metamodelling strategies are used in the AATR algorithm: optimal and inherited Latin hypercube sampling to generate experimental designs; quasi-Newton, kriging and polynomial regression metamodels to approximate the objective function; and the leave-k-out method for validation. The algorithm is tested with two-dimensional single-discipline problems. Results show that the AATR algorithm is a promising method when compared to a traditional trust region method. Polynomial regression in conjunction with a new hybrid inherited-optimal Latin hypercube sampling performed the best.

4.
Long Tang, Hu Wang, Engineering Optimization, 2016, 48(10): 1759-1777
Categorical multi-objective optimization is an important issue involved in many matching design problems. Non-numerical variables and their uncertainty are the major challenges of such optimizations. Therefore, this article proposes a dual-mode nested search (DMNS) method. In the outer layer, kriging metamodels are established using standard regular simplex mapping (SRSM) from categorical candidates to numerical values. Assisted by the metamodels, a k-cluster-based intelligent sampling strategy is developed to search Pareto frontier points. The inner layer uses an interval number method to model the uncertainty of categorical candidates. To improve the efficiency, a multi-feature convergent optimization via most-promising-area stochastic search (MFCOMPASS) is proposed to determine the bounds of objectives. Finally, typical numerical examples are employed to demonstrate the effectiveness of the proposed DMNS method.

5.
A Minimum Bias Latin Hypercube Design
Deterministic engineering design simulators can be too complex to be amenable to direct optimization. An indirect route involves data collection from the simulator and fitting of less complex surrogates: metamodels, which are more readily optimized. However, common statistical experiment plans are not appropriate for data collection from deterministic simulators due to their poor projection properties. Data collection plans based upon number-theoretic methods are also inappropriate because they tend to require large sample sizes in order to achieve their desirable properties. We develop a new class of data collection plan, the Minimum Bias Latin Hypercube Design (MBLHD), for sampling from deterministic process simulators. The class represents a compromise between empirical model bias reduction and dispersion of the points within the input variable space. We compare the MBLHD class to previously known classes by several model independent measures selected from three general families: discrepancies, maximin distance measures, and minimax distance measures. In each case, the MBLHD class is at least competitive with the other classes; and, in several cases the MBLHD class demonstrates superior performance. We also make a comparison of the empirical squared bias of fitted metamodels. We approximate a mechanistic model for water flow through a borehole, using both kriging and polynomial metamodels. Here again, the performance of the MBLHD class is encouraging.
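The borehole water-flow model mentioned above is a widely used eight-input metamodelling benchmark; the sketch below gives its standard form (variable names follow the usual convention: rw is the borehole radius, r the radius of influence, Tu/Tl the upper/lower aquifer transmissivities, Hu/Hl the potentiometric heads, L the borehole length, and Kw the hydraulic conductivity).

```python
import math

def borehole(rw, r, Tu, Hu, Tl, Hl, L, Kw):
    """Water flow rate through a borehole (m^3/yr), standard benchmark form."""
    log_r_rw = math.log(r / rw)
    num = 2.0 * math.pi * Tu * (Hu - Hl)
    den = log_r_rw * (1.0 + 2.0 * L * Tu / (log_r_rw * rw ** 2 * Kw) + Tu / Tl)
    return num / den
```

Typical input ranges in the literature are roughly rw ∈ [0.05, 0.15], r ∈ [100, 50000], Tu ∈ [63070, 115600], Hu ∈ [990, 1110], Tl ∈ [63.1, 116], Hl ∈ [700, 820], L ∈ [1120, 1680], Kw ∈ [9855, 12045].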

6.
This paper presents a crashworthiness design optimization method based on a metamodelling technique. Crashworthiness optimization is a highly nonlinear, large-scale problem that involves various nonlinearities, such as geometry, material and contact, and requires a large number of expensive evaluations. In order to obtain a robust approximation efficiently, a probability-based least-squares support vector regression is suggested for constructing metamodels under structural risk minimization. Further, to save computational cost, an intelligent sampling strategy is applied to generate sample points at the design of experiment (DOE) stage. Two cases are considered in this paper: a cylinder and a full-vehicle frontal collision. The results demonstrate that the proposed metamodel-based optimization is efficient and effective in solving crashworthiness design optimization problems.

7.
Junqi Yang, Kai Zheng, Jie Hu, Ling Zheng, Engineering Optimization, 2016, 48(12): 2026-2045
Metamodels are becoming increasingly popular for handling large-scale optimization problems in product development. Metamodel-based reliability-based design optimization (RBDO) helps to improve the computational efficiency and reliability of optimal design. However, a metamodel in engineering applications is an approximation of a high-fidelity computer-aided engineering model and it frequently suffers from a significant loss of predictive accuracy. This issue must be appropriately addressed before the metamodels are ready to be applied in RBDO. In this article, an enhanced strategy with metamodel selection and bias correction is proposed to improve the predictive capability of metamodels. A similarity-based assessment for metamodel selection (SAMS) is derived from the cross-validation and similarity theories. The selected metamodel is then improved by Bayesian inference-based bias correction. The proposed strategy is illustrated through an analytical example and further demonstrated with a lightweight vehicle design problem. The results show its potential in handling real-world engineering problems.

8.
The use of surrogate models or metamodeling has led to new areas of research in simulation-based design optimization. Metamodeling approaches have advantages over traditional techniques when dealing with the noisy responses and/or high computational cost characteristic of many computer simulations. This paper focuses on a particular algorithm, Efficient Global Optimization (EGO), that uses kriging metamodels. Several infill sampling criteria are reviewed, namely criteria for selecting design points at which the true functions are evaluated. The infill sampling criterion has a strong influence on how efficiently and accurately EGO locates the optimum. Variance-reducing criteria substantially reduce the RMS error of the resulting metamodels, while other criteria influence how locally or globally EGO searches. Criteria that place more emphasis on global searching require more iterations to locate optima and do so less accurately than criteria emphasizing local search.
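EGO's best-known infill sampling criterion is expected improvement (EI), which trades off the kriging predictor's mean against its predictive uncertainty; a stdlib-only sketch for minimization:

```python
import math

def expected_improvement(mu, sigma, f_min):
    """EI infill criterion: expected amount by which a point with kriging
    prediction (mu, sigma) improves on the current best observation f_min."""
    if sigma <= 0.0:
        return 0.0  # no predictive uncertainty, no expected improvement
    z = (f_min - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal cdf
    return (f_min - mu) * cdf + sigma * pdf
```

The first term rewards points predicted to beat f_min (local search); the second rewards high-variance points (global search), which is why the choice of infill criterion shifts EGO between local and global behaviour.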

9.
Space-filling and projective properties are probably the two most important features in computer experiments. Existing research has tried to develop various kinds of sequential Latin hypercube design (LHD) to meet these two properties; however, most if not all cannot simultaneously ensure both in their sequential versions. In this paper, we propose a novel sequential LHD that simultaneously meets the space-filling and projective properties at each stage. A search algorithm is employed to determine how many design points should be added at each stage to ensure the projective property, and the "Maximin" criterion is used to meet the space-filling property. Examples in both low and higher dimensions illustrate how these sequential sampling processes are realized. The proposed method can be applied in areas where computationally expensive simulations are involved.

10.
Chong Chen, Huili Yu, Hui Zhao, Engineering Optimization, 2013, 45(10): 1761-1776
In engineering design optimization, the use of hybrid metamodels (HMs) can take full advantage of the individual metamodels and improve the robustness of predictions by reducing the impact of a poor metamodel. When there are many candidates, it is difficult to decide which metamodels to choose before building an HM; the decision should simultaneously take into account the number, accuracy and diversity of the selected metamodels. To address this problem, this research developed an efficient decision-making framework based on partial least squares for metamodel screening. A new significance index is first derived from the viewpoint of fitting error in a regression model. A desirable metamodel combination consisting of only the significant metamodels is then configured for constructing the final HM. The effectiveness of the proposed framework is demonstrated through several benchmark problems.

11.
Computer experiments are often used as inexpensive alternatives to real-world experiments. Statistical metamodels of the computer model's input-output behavior can be constructed to serve as approximations of the response surface of the real-world system. The suitability of a metamodel depends in part on its intended use. While decision makers may want to understand the entire response surface, they may be particularly keen on finding interesting regions of the design space, such as where the gradient is steep. We present an adaptive, value-enhanced batch sequential algorithm that samples more heavily in such areas while still providing an understanding of the entire surface. The design points within each batch can be run in parallel to leverage modern multi-core computing assets. We illustrate our approach for deterministic computer models, but it has potential for stochastic simulation models as well.

12.
In this paper, a metamodel-based optimization method integrating support vector regression (SVR) with an intelligent sampling strategy is applied to sheet forming design. Unlike other popular metamodelling techniques, SVR is based on the principle of structural risk minimization (SRM), as opposed to the empirical risk minimization used in conventional regression techniques; an accurate and robust metamodel can thus be obtained. The intelligent sampling strategy is essentially a design of experiment (DOE) method whose characteristic is to generate new samples automatically based on the responses of the objective functions. In contrast to traditional DOE methods, the number of samples is not fixed in advance but varies from case to case, and both the number of samples and the size of the design space can be well controlled by the intelligent strategy. To minimize the objective functions of wrinkling, cracking and thickness variation efficiently, the proposed method is employed as a fast analysis tool to surrogate the time-consuming finite-element (FE) procedure within the iterations of the optimization algorithm. An example is studied to illustrate the application of the proposed approach, and it is concluded that the method is feasible for sheet forming optimization.
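The response-driven sampling idea can be sketched in one dimension with a stand-in exploration rule (an illustrative assumption, not the article's actual strategy): each iteration fits two polynomial surrogates of different order and samples where they disagree most, so the sample set grows adaptively rather than being fixed in advance.

```python
import numpy as np

def adaptive_sampling(f, lo, hi, n_init=4, n_add=6):
    """Toy response-driven DOE in 1-D: new samples are placed where a
    low-order and a higher-order polynomial surrogate disagree most."""
    x = np.linspace(lo, hi, n_init)
    y = f(x)
    cand = np.linspace(lo, hi, 201)  # dense candidate grid
    for _ in range(n_add):
        p_low = np.polyval(np.polyfit(x, y, 2), cand)
        p_high = np.polyval(np.polyfit(x, y, min(len(x) - 1, 5)), cand)
        gap = np.abs(p_high - p_low)
        # exclude candidates too close to existing samples
        gap[np.abs(cand[:, None] - x[None, :]).min(axis=1) < (hi - lo) / 200] = -np.inf
        x_new = cand[int(np.argmax(gap))]
        x = np.append(x, x_new)
        y = np.append(y, f(x_new))
    return x, y
```

In the paper's setting f would be the expensive FE response rather than a cheap analytic function, and SVR would replace the polynomial surrogates.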

13.
A distributed evolutionary algorithm is presented that is based on a hierarchy of (fitness or cost function) evaluation passes within each deme and is efficient in solving engineering optimization problems. Starting with non-problem-specific evaluations (using surrogate models or metamodels, trained on previously evaluated individuals) and ending up with high-fidelity problem-specific evaluations, intermediate passes rely on other available lower-fidelity problem-specific evaluations with lower CPU cost per evaluation. The sequential use of evaluation models or metamodels, of different computational cost and modelling accuracy, by screening the generation members to get rid of non-promising individuals, leads to reduced overall computational cost. The distributed scheme is based on loosely coupled demes that exchange regularly their best-so-far individuals. Emphasis is put on the optimal way of coupling distributed and hierarchical search methods. The proposed method is tested on mathematical and compressor cascade airfoil design problems.
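The generation-screening idea, cheap evaluations to discard non-promising individuals and expensive ones only for the survivors, can be sketched as a two-level toy (the paper's hierarchy may involve more passes and intermediate fidelities):

```python
def hierarchical_screen(population, cheap_eval, exact_eval, keep=0.3):
    """Two-pass screening for one generation: rank all members with a cheap
    surrogate, then spend expensive exact evaluations only on the most
    promising fraction, returning the best individual found."""
    ranked = sorted(population, key=cheap_eval)       # cheap pass over everyone
    n_keep = max(1, int(keep * len(ranked)))
    return min(ranked[:n_keep], key=exact_eval)       # expensive pass on survivors
```

With a keep fraction of 0.3, only ~30% of the generation ever reaches the high-fidelity evaluator, which is where the overall cost reduction comes from.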

14.
Multivariate polynomials are increasingly being used to construct emulators of computer models for uncertainty quantification. For deterministic computer codes, interpolating polynomial metamodels should be used instead of noninterpolating ones for logical consistency and prediction accuracy. However, available methods for constructing interpolating polynomials only provide point predictions. There is no known method that can provide probabilistic statements about the interpolation error. Furthermore, there are few alternatives to grid designs and sparse grids for constructing multivariate interpolating polynomials. A significant disadvantage of these designs is the large gaps between allowable design sizes. This article proposes a stochastic interpolating polynomial (SIP) that seeks to overcome the problems discussed above. A Bayesian approach in which interpolation uncertainty is quantified probabilistically through the posterior distribution of the output is employed. This allows assessment of the effect of interpolation uncertainty on estimation of quantities of interest based on the metamodel. A class of transformed space-filling design and a sequential design approach are proposed to efficiently construct the SIP with any desired number of runs. Simulations demonstrate that the SIP can outperform Gaussian process (GP) emulators. This article has supplementary material online.

15.
Metamodels, also known as surrogate models, can be used in place of computationally expensive simulation models to increase computational efficiency for the purposes of design optimization or design space exploration. The accuracy of these metamodels varies with the scale and complexity of the underlying model. In this article, three metamodelling methods are evaluated with respect to their capabilities for modelling high-dimensional, nonlinear, multimodal functions. Methods analyzed include kriging, radial basis functions, and support vector regression. Each metamodelling technique is used to model a set of single output functions with dimensionality ranging from fifteen to fifty independent variables and modality ranging from one to ten local maxima. The number of points used to train the models is increased until a predetermined error threshold is met. Results show that kriging metamodels perform most consistently across a variety of functions, although radial basis functions and support vector regression are very competitive for highly multimodal functions and functions with large local gradients, respectively. Support vector regression metamodels consistently offer the shortest build and prediction times when applied to large scale multimodal problems.

16.
The coupling of Finite Element (FE) simulations with approximate optimization techniques is becoming increasingly popular in the forming industry. By doing so, it is implicitly assumed that the optimization objective and possible constraints are smooth functions of the design variables and, in the case of robust optimization, design and noise variables. However, non-linear FE simulations are known to introduce numerical noise caused by the discrete nature of the simulation algorithms, e.g. errors caused by re-meshing, time-step adjustments or contact algorithms. The subsequent use of metamodels based on such noisy data reduces the prediction quality of the optimization routine and is known to even magnify the numerical errors. This work provides an approach to handling noisy numerical data in approximate optimization of forming processes, covering several fundamental research questions in dealing with numerical noise. First, the deteriorating effect of numerical noise on the prediction quality of several well-known metamodelling techniques is demonstrated using an analytical test function. Next, numerical noise is quantified and its effect is minimized by the application of local approximation and regularization techniques. A general approximate optimization strategy is subsequently presented and coupling with a sequential update algorithm is proposed. The strategy is demonstrated by the sequential deterministic and robust optimization of two industrial metal forming processes, i.e. a V-bending application and a cup-stretching application. Although numerical noise is often neglected in practice, both applications in this work show that general awareness of its presence is highly important to increase the overall accuracy of optimization results.

17.
Whereas an optimal Pseudo-Random Number (PRN) assignment strategy for simulation experiments involving the estimation of linear metamodels currently exists, no such optimal assignment strategy for quadratic metamodels has been proposed. This situation is now rectified by the introduction of a PRN assignment strategy for a quadratic metamodel for 3^k factorial designs. In addition to extending the theory from linear to quadratic metamodels, the proposed PRN strategy is shown to be superior to a number of existing and competing strategies in terms of various variance measures.

18.
In engineering design optimization, computing structural responses via numerical simulation consumes substantial time and computational cost, posing a great challenge to computation-intensive design optimization; sequential optimization methods based on surrogate models have therefore been studied in depth and widely applied. This paper briefly reviews the framework of surrogate-model-based sequential optimization methods. To address the shortcomings of existing methods, a class of model-independent hybrid infill criteria is developed, so that the new sample points generated during the optimization loop are distributed in the current …

19.
This article presents methods to enhance the efficiency of Evolutionary Algorithms (EAs), particularly those assisted by surrogate evaluation models or metamodels. The gain in efficiency becomes important in applications related to industrial optimization problems with a great number of design variables. The development is based on the principal components analysis of the elite members of the evolving EA population, the outcome of which is used to guide the application of evolution operators and/or train dependable metamodels/artificial neural networks by reducing the number of sensory units. Regarding the latter, the metamodels are trained with less computing cost and yield more relevant objective function predictions. The proposed methods are applied to constrained, single- and two-objective optimization of thermal and hydraulic turbomachines.
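The principal-components step can be sketched with plain numpy (a generic PCA of the elite designs; the article's coupling to evolution operators and network training is not reproduced here):

```python
import numpy as np

def principal_directions(elites, k):
    """PCA of the elite set: returns the k leading principal directions,
    which can be used to reduce the number of metamodel input units."""
    centered = elites - elites.mean(axis=0)
    # SVD of the centered elite matrix; rows of vt are principal directions,
    # ordered by decreasing singular value.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k], s

def reduce_inputs(x, directions):
    """Project candidate designs onto the retained principal directions."""
    return x @ directions.T
```

Training the metamodel on the k projected coordinates instead of all original design variables is what reduces the number of sensory units and the training cost.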

20.
Teng Long, Di Wu, Xin Chen, Xiaosong Guo, Li Liu, Engineering Optimization, 2016, 48(6): 1019-1036
Space-filling and projective properties of design of computer experiments methods are desired features for metamodelling. To enable the production of high-quality sequential samples, this article presents a novel deterministic sequential maximin Latin hypercube design (LHD) method using successive local enumeration, denoted sequential-successive local enumeration (S-SLE). First, a mesh-mapping algorithm is proposed to map the positions of existing points into the new hyper-chessboard to ensure the projective property. According to the maximin distance criterion, new sequential samples are generated through successive local enumeration iterations to improve the space-filling uniformity. Through a number of comparative studies, several appealing merits of S-SLE are demonstrated: (1) S-SLE outperforms several existing LHD methods in terms of sequential sampling quality; (2) it is flexible and robust enough to produce high-quality multiple-stage sequential samples; and (3) the proposed method can improve the overall performance of sequential metamodel-based optimization algorithms. Thus, S-SLE is a promising sequential LHD method for metamodel-based optimization.
