Similar Literature
20 similar records found
1.
A popular method for reducing the computational effort of simulation-based engineering design is approximation. An approximation method involves two steps: Design of Experiments (DOE) and metamodeling. In this paper, a new DOE approach is introduced. The proposed approach is adaptive and samples more design points in regions where the simulation response is expected to be highly nonlinear and multi-modal. Numerical and engineering examples are used to demonstrate the applicability of the proposed DOE approach. The results from these examples show that, for the same number of simulation evaluations, the proposed DOE approach achieves better metamodel accuracy than two previous methods, the maximum entropy design method and the maximum scaled distance method, for the majority of test examples.
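As a rough illustration of the adaptive-sampling idea (not the paper's actual criterion), the 1-D sketch below scores each candidate point by a local-curvature estimate (a proxy for nonlinearity) times the distance to the nearest existing sample; the function and parameter names are our own:

```python
import numpy as np

def adaptive_doe(f, n_init=5, n_add=10, n_cand=200, seed=0):
    """Illustrative 1-D adaptive DOE: start from a uniform sample, then
    repeatedly add the candidate where (curvature proxy) * (distance to
    nearest existing point) is largest."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n_init)
    y = f(x)
    for _ in range(n_add):
        cand = rng.uniform(0.0, 1.0, n_cand)
        order = np.argsort(x)
        xs, ys = x[order], y[order]
        # curvature proxy: absolute second derivative of the current data
        curv = np.abs(np.gradient(np.gradient(ys, xs), xs))
        c_curv = np.interp(cand, xs, curv)
        dist = np.min(np.abs(cand[:, None] - x[None, :]), axis=1)
        best = cand[np.argmax(c_curv * dist)]
        x = np.append(x, best)
        y = np.append(y, f(best))
    return x, y

x, y = adaptive_doe(lambda t: np.sin(8 * np.pi * t**2), n_add=20)
```

For a response whose oscillation frequency grows across the domain, the scoring rule concentrates later samples where the response changes fastest.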

2.
A new approach for single-response adaptive design of deterministic computer experiments is presented. The approach is called SFCVT, for Space-Filling Cross-Validation Tradeoff. SFCVT uses metamodeling to obtain an estimate of cross-validation errors, which are maximized subject to a space-filling constraint to determine sample points in the design space. The proposed method is compared, using a test suite of forty-four numerical examples, with three DOE methods from the literature. The numerical test examples can be classified into symmetric and asymmetric functions: symmetric examples are functions whose extreme points are located symmetrically in the design space, and asymmetric examples are those whose extreme regions are not. Based on the comparison results, SFCVT performs better than an existing adaptive and a non-adaptive DOE method for asymmetric multimodal functions with high nonlinearity near the boundary, and is comparable for symmetric multimodal functions and the other test problems. The proposed approach is integrated with a multi-scale heat exchanger optimization tool to reduce the computational effort involved in the design of novel air-to-water heat exchangers. The resulting designs are shown to be significantly more compact than mainstream heat exchanger designs.
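A minimal sketch of the SFCVT idea, under simplifying assumptions of our own: estimate leave-one-out cross-validation (LOO-CV) errors with a crude linear-interpolation surrogate (not the paper's metamodel), then pick the candidate that maximizes the interpolated CV error subject to a minimum-distance space-filling constraint:

```python
import numpy as np

def sfcvt_next_point(x, y, n_cand=500, min_dist=0.05, seed=1):
    """Pick the next sample: max interpolated LOO error, subject to
    staying at least min_dist away from all existing samples."""
    rng = np.random.default_rng(seed)
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    # LOO error at each interior point: predict it from its two neighbors
    loo = np.zeros_like(xs)
    for i in range(1, len(xs) - 1):
        pred = np.interp(xs[i], [xs[i - 1], xs[i + 1]], [ys[i - 1], ys[i + 1]])
        loo[i] = abs(pred - ys[i])
    cand = rng.uniform(xs[0], xs[-1], n_cand)
    dist = np.min(np.abs(cand[:, None] - xs[None, :]), axis=1)
    feasible = cand[dist >= min_dist]
    if feasible.size == 0:            # constraint too tight: fall back
        feasible = cand
    return feasible[np.argmax(np.interp(feasible, xs, loo))]

x = np.linspace(0, 1, 8)
y = np.sin(2 * np.pi * x)
x_new = sfcvt_next_point(x, y)
```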

3.
This paper presents some improvements to Multi-Objective Genetic Algorithms (MOGAs). A MOGA modifies certain operators within the GA itself to produce a multiobjective optimization technique. The improvements are made to overcome shortcomings in niche formation, stopping criteria, and interaction with a design decision-maker. The technique involves filtering, mating restrictions, the idea of objective constraints, and detecting Pareto solutions in the non-convex region of the Pareto set. A step-by-step procedure for the improved MOGA has been developed and demonstrated on two multiobjective engineering design examples: (i) two-bar truss design, and (ii) vibrating platform design. The two-bar truss example has continuous variables, while the vibrating platform example has mixed-discrete (combinatorial) variables. Both examples are solved by the MOGA with and without the improvements. It is shown that the MOGA with the improvements performs better on both examples in terms of the number of function evaluations.

4.
To reduce the computational cost of running computer-based simulations and analyses in engineering design, a variety of metamodeling techniques have been developed. Metamodels, also called approximation models or surrogate models, can replace expensive simulation codes for design and optimization. In this paper, the gene expression programming (GEP) algorithm from evolutionary computing is investigated as an alternative metamodeling technique for approximating a design space. The approximation performance of GEP is tested on several low-dimensional mathematical and engineering problems. A comparative study is conducted between GEP and three metamodeling techniques common in engineering design (response surface methodology (RSM), kriging, and radial basis functions (RBF)) for the approximation of low-dimensional design spaces. Multiple evaluation criteria are considered: accuracy, robustness, transparency, and efficiency. Two different sample sizes are adopted: small and large. Comparative results indicate that GEP achieves the most accurate and robust approximation of a low-dimensional design space for small sample sets. For large sample sets, GEP also provides good prediction accuracy and high robustness. Moreover, the transparency of GEP is the best of the four, since it provides clear functional relationships and factor contributions by means of compact expressions. As a novel metamodeling technique, GEP shows great promise for metamodeling applications in low-dimensional design spaces, especially when only a few sample points are available for training.
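To make the kind of comparison described above concrete, here is a small sketch of two of the compared metamodel families (Gaussian RBF interpolation versus a second-order polynomial response surface) fitted to the same training samples and scored by RMSE on a dense test grid; the test function and the shape parameter `eps` are arbitrary choices of ours:

```python
import numpy as np

def rbf_fit_predict(x_train, y_train, x_test, eps=2.0):
    """Gaussian RBF interpolation: solve for weights on the training
    Gram matrix, then evaluate the basis expansion at test points."""
    phi = lambda a, b: np.exp(-(eps * (a[:, None] - b[None, :]))**2)
    w = np.linalg.solve(phi(x_train, x_train), y_train)
    return phi(x_test, x_train) @ w

x_tr = np.linspace(-1, 1, 9)
y_tr = np.sin(3 * x_tr)
x_te = np.linspace(-1, 1, 101)
y_rbf = rbf_fit_predict(x_tr, y_tr, x_te)
# second-order polynomial response surface (RSM) for comparison
y_rsm = np.polyval(np.polyfit(x_tr, y_tr, 2), x_te)
rmse = lambda p: np.sqrt(np.mean((p - np.sin(3 * x_te))**2))
```

On this smooth, oscillatory target the interpolating RBF is far more accurate than the low-order polynomial, which is the sort of accuracy gap such comparative studies quantify.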

5.
A multi-surrogate approximation method for metamodeling
Metamodeling methods have been widely used in engineering applications to create surrogate models for complex systems. Traditionally, the input–output relationship of the complex system is approximated globally using a single metamodel. In this research, a new metamodeling method, the multi-surrogate approximation (MSA) metamodeling method, is developed that uses multiple metamodels when the sample data collected from different regions of the design space have different characteristics. In this method, sample data are first classified into clusters based on their similarities in the design space, and a local metamodel is identified for each cluster. A global metamodel is then built from these local metamodels, accounting for their contributions in different regions of the design space. Compared with the traditional approach of global metamodeling using a single metamodel, the MSA method improves modeling accuracy considerably. Applications of the method are also demonstrated in this research.
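The cluster-then-blend structure can be sketched in one dimension as follows; the k-means clustering, local linear fits, and inverse-distance blending here are simplified stand-ins of our own, not the paper's actual choices:

```python
import numpy as np

def msa_predict(x_tr, y_tr, x_te, k=2, iters=20):
    """Illustrative multi-surrogate approximation: cluster the samples
    (1-D k-means, endpoints as initial centers), fit a local linear
    model per cluster, and blend local models with inverse-distance
    weights to form the global metamodel."""
    centers = np.linspace(x_tr.min(), x_tr.max(), k)
    for _ in range(iters):
        lab = np.argmin(np.abs(x_tr[:, None] - centers[None, :]), axis=1)
        centers = np.array([x_tr[lab == j].mean() for j in range(k)])
    local = [np.polyfit(x_tr[lab == j], y_tr[lab == j], 1) for j in range(k)]
    # inverse-distance weights over cluster centers blend the local fits
    d = np.abs(x_te[:, None] - centers[None, :]) + 1e-9
    w = (1.0 / d) / (1.0 / d).sum(axis=1, keepdims=True)
    preds = np.stack([np.polyval(c, x_te) for c in local], axis=1)
    return (w * preds).sum(axis=1)

x_tr = np.linspace(-1.0, 1.0, 20)
y_hat = msa_predict(x_tr, np.abs(x_tr), np.array([-0.5, 0.5]))
```

For the piecewise-linear target |x|, each half of the design space gets its own exact local model, while a single global linear fit would miss the kink entirely.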

6.
In this paper, we present the Gauss-type quadrature formula as a rigorous method for statistical moment estimation involving arbitrary input distributions, and further extend its use to robust design optimization. The mathematical background of the Gauss-type quadrature formula is introduced, and its relation to other methods such as design of experiments (DOE) and the point estimate method (PEM) is discussed. Methods for constructing one-dimensional Gauss-type quadrature formulae are summarized and insights are provided. To improve its efficiency for robust design optimization, a semi-analytic design sensitivity analysis with respect to the statistical moments is proposed for two multi-dimensional integration methods: the tensor product quadrature (TPQ) formula and the univariate dimension reduction (UDR) method. Through several examples, it is shown that the Gauss-type quadrature formula can be used effectively in robust design involving various non-normal distributions. The proposed design sensitivity analysis significantly reduces the number of function calls of robust optimization using the TPQ formulae, whereas with the UDR method savings in function calls are observed only in limited situations.
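The moment-estimation step is easy to illustrate for a Gaussian input: Gauss–Hermite quadrature with n nodes integrates polynomials up to degree 2n−1 exactly against the normal density. The sketch below (our own minimal example, not the paper's robust-optimization machinery) estimates the mean and variance of g(X) for X ~ N(0, 1):

```python
import numpy as np

# Gauss-Hermite nodes/weights are for the weight exp(-x^2); rescale
# them for the standard normal density.
nodes, weights = np.polynomial.hermite.hermgauss(5)
x = np.sqrt(2.0) * nodes
w = weights / np.sqrt(np.pi)

g = lambda t: t**2 + 3.0 * t      # example response function
mean = np.sum(w * g(x))           # E[g(X)]
var = np.sum(w * (g(x) - mean)**2)
```

Since g is a degree-2 polynomial, both moments are exact here: E[g(X)] = 1 and Var[g(X)] = 11.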

7.
There is an ever-increasing need to use optimization methods for the thermal design of data centers and the hardware populating them. Airflow simulations of cabinets and data centers are computationally intensive, and this problem is exacerbated when the simulation model is integrated with a design optimization method. Generally speaking, thermal design of data center hardware can be posed as a constrained multi-objective optimization problem. A popular approach for solving this kind of problem is to use Multi-Objective Genetic Algorithms (MOGAs). However, the large number of simulation evaluations needed by MOGAs has prevented their application to realistic engineering design problems. In this paper, a substantially more efficient MOGA is formulated and demonstrated on a thermal analysis simulation model of a data center cabinet. First, a reduced-order model of the cabinet problem is constructed using Proper Orthogonal Decomposition (POD). The POD model is then used to form the objective and constraint functions of an optimization model. Next, this optimization model is integrated with the new MOGA, which uses a kriging-guided operation in addition to conventional genetic algorithm operations to search the design space for globally optimal design solutions. This approach is essential for complex multi-objective situations, where the optimal solutions may be non-obvious from simple analyses or intuition. It is shown that, in optimizing the data center cabinet problem, the new MOGA outperforms a conventional MOGA by estimating the Pareto front with 50% fewer simulation calls, which makes it very promising for complex thermal design problems. Recommended by: Monem Beitelmal
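The POD step above amounts to a truncated SVD of a snapshot matrix: keep the k dominant modes and reconstruct. The sketch below uses synthetic two-mode snapshot data of our own invention (not the cabinet model) to show the mechanics:

```python
import numpy as np

t = np.linspace(0, 1, 50)
x = np.linspace(0, 1, 40)
# snapshots: two spatial modes with time-varying amplitudes + small noise
rng = np.random.default_rng(0)
S = (np.outer(np.sin(np.pi * x), np.cos(2 * np.pi * t))
     + 0.3 * np.outer(np.sin(2 * np.pi * x), np.sin(2 * np.pi * t))
     + 0.001 * rng.standard_normal((40, 50)))
U, s, Vt = np.linalg.svd(S, full_matrices=False)
k = 2                                   # retain the two dominant POD modes
S_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
rel_err = np.linalg.norm(S - S_k) / np.linalg.norm(S)
```

Because the data are essentially rank-2 plus noise, two modes reconstruct the snapshots to within the noise level; a reduced-order model then works with the k modal amplitudes instead of the full field.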

8.
This work deals with multiobjective optimization problems using Genetic Algorithms (GAs). A MultiObjective GA (MOGA) is proposed to solve multiobjective problems combining both continuous and discrete variables. This kind of problem is commonly found in chemical engineering, since process design and operability involve structural and decisional choices as well as the determination of operating conditions. In this paper, the design of a basic MOGA that copes successfully with a range of typical chemical engineering optimization problems is considered, and the key points of its architecture are described in detail. Several performance tests are presented, based on the influence of bit-range encoding in a chromosome. Four mathematical functions were used as a test bench. The MOGA was able to find the optimal solution for each objective function, as well as a large number of Pareto-optimal solutions. The results of two multiobjective case studies in batch plant design and retrofit are then presented, showing the flexibility and adaptability of the MOGA to various engineering problems.

9.
Zhou, Qi; Wu, Jinhong; Xue, Tao; Jin, Peng. Engineering with Computers, 2021, 37(1): 623–639.

Surrogate model-assisted multi-objective genetic algorithms (MOGAs) show great potential for solving engineering design problems, since they save computational cost by reducing the number of calls to expensive simulations. In this paper, a two-stage adaptive multi-fidelity surrogate (MFS) model-assisted MOGA (AMFS-MOGA) is developed to further relieve the computational burden. In the warm-up stage, a preliminary Pareto frontier is obtained relying only on data from the low-fidelity (LF) model. In the second stage, an initial MFS model is constructed from both LF and high-fidelity (HF) model data at samples selected from the preliminary Pareto set according to crowding distance in the objective space. The fitness values of individuals are then evaluated using the MFS model, which is adaptively updated according to two strategies: an individual-based updating strategy, which considers the prediction uncertainty of the MFS model, and a generation-based updating strategy, which takes the dispersion of the population into consideration. The effectiveness and merits of the proposed AMFS-MOGA approach are illustrated using three benchmark tests and the design optimization of a stiffened cylindrical shell. Comparisons with existing approaches, considering both the quality of the obtained Pareto frontiers and computational efficiency, show that AMFS-MOGA obtains Pareto frontiers comparable to those obtained by a MOGA using the HF model, while significantly reducing the number of evaluations of the expensive HF model.


10.
Large-scale, multidisciplinary engineering design is difficult due to the complexity and dimensionality of the problems involved. Direct coupling between analysis codes and optimization routines can be prohibitively time-consuming due to the complexity of the underlying simulation codes. One way of tackling this problem is to construct computationally cheap(er) approximations of the expensive simulations that mimic the behavior of the simulation model as closely as possible. This paper presents a data-driven, surrogate-based optimization algorithm that uses a trust-region-based sequential approximate optimization (SAO) framework and a statistical sampling approach based on design of experiments (DOE) arrays. The algorithm is implemented using techniques from two packages, SURFPACK and SHEPPACK, which provide a collection of approximation algorithms to build the surrogates, and three different DOE techniques (full factorial (FF), Latin hypercube sampling, and central composite design) are used to train the surrogates. The results are compared with the optimization results obtained by directly coupling an optimizer with the simulation code. The biggest concern in using the SAO framework based on statistical sampling is the generation of the required database: as the number of design variables grows, the computational cost of generating it grows rapidly. A data-driven approach is proposed to tackle this situation, in which the expensive simulation is run if and only if a nearby data point does not exist in the cumulatively growing database. Over time the database matures and is enriched as more optimizations are performed. Results show that the proposed methodology dramatically reduces the total number of calls to the expensive simulation during the optimization process.
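The "run only if no nearby point exists" rule is essentially a distance-thresholded cache around the simulator. A minimal sketch, with names and the tolerance value chosen by us for illustration:

```python
import numpy as np

class CachedSimulation:
    """Run the expensive simulation only if no already-evaluated point
    lies within `tol` of the query; otherwise reuse the nearest stored
    result. The database grows cumulatively across queries."""
    def __init__(self, sim, tol=1e-2):
        self.sim, self.tol = sim, tol
        self.X, self.Y = [], []
        self.calls = 0
    def __call__(self, x):
        x = np.asarray(x, dtype=float)
        if self.X:
            d = [np.linalg.norm(x - xi) for xi in self.X]
            i = int(np.argmin(d))
            if d[i] <= self.tol:
                return self.Y[i]          # nearby point exists: no new run
        self.calls += 1                   # otherwise run the simulation
        y = self.sim(x)
        self.X.append(x)
        self.Y.append(y)
        return y

sim = CachedSimulation(lambda x: float(np.sum(x**2)), tol=0.05)
for q in ([0.0, 0.0], [0.001, 0.0], [1.0, 1.0], [0.0, 0.001]):
    sim(q)
```

Of the four queries above, only two trigger actual simulation runs; the other two are answered from the database.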

11.
Multi-fidelity metamodeling provides an efficient way to approximate expensive black-box problems by utilizing samples of multiple fidelities. However, it still faces the "curse of dimensionality" when approximating high-dimensional problems. On the other hand, the high dimensional model representation (HDMR) method, an efficient tool for tackling high-dimensional problems, can only handle single-fidelity samples. Therefore, a hybrid metamodel that combines Cut-HDMR with Co-kriging and kriging is proposed to improve metamodeling efficiency for high-dimensional problems. The developed HDMR, termed MF-HDMR, efficiently uses multi-fidelity samples to approximate black-box problems via a two-stage metamodeling strategy. It naturally explores and exploits the linearity/nonlinearity and correlations among variables of the underlying problems, which are unknown or computationally expensive to probe. To further improve the efficiency of MF-HDMR, an extended maximin distance sequential sampling method is proposed to add new sample points of different fidelities during metamodeling. A mathematical function is used to illustrate the modeling theory and procedure of MF-HDMR. To validate the proposed method, it is tested on several numerical benchmark problems, successfully applied to the optimal design of a long cylinder pressure vessel, and compared against several other metamodeling methods. Results show that the proposed method is very efficient in approximating high-dimensional problems using multi-fidelity samples, making it particularly suitable for high-dimensional engineering design problems involving computationally expensive simulations.
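For background, the Cut-HDMR backbone that MF-HDMR builds on expands a function around a cut point x0 into low-order component functions: f(x) ≈ f(x0) + Σᵢ [f(x0 with xᵢ varied) − f(x0)] at first order. The sketch below implements just that single-fidelity, first-order step (the multi-fidelity Co-kriging blending of the paper is omitted); names are our own:

```python
import numpy as np

def cut_hdmr_first_order(f, x0, grids):
    """First-order Cut-HDMR: tabulate each 1-D component function on a
    grid through the cut point x0, then sum interpolated components."""
    x0 = np.asarray(x0, dtype=float)
    f0 = f(x0)
    comp = []
    for i, g in enumerate(grids):
        vals = []
        for gi in g:
            xi = x0.copy()
            xi[i] = gi                 # vary only coordinate i
            vals.append(f(xi) - f0)
        comp.append((np.asarray(g), np.asarray(vals)))
    def model(x):
        return f0 + sum(np.interp(x[i], g, v) for i, (g, v) in enumerate(comp))
    return model

f = lambda x: np.sin(x[0]) + x[1]**2          # additive test function
m = cut_hdmr_first_order(f, [0.0, 0.0], [np.linspace(-2, 2, 21)] * 2)
```

Because the test function is additive, the first-order expansion reproduces it exactly at grid points; interaction terms would require the second-order components.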

12.
Design space exploration and metamodeling techniques have become increasingly prominent in complex engineering design problems. Modeling efficiency and accuracy are directly associated with the design space. To reduce the complexity of the design space and improve modeling accuracy, a multi-stage design space reduction and metamodeling optimization methodology based on self-organizing maps and fuzzy clustering is proposed in this paper. Using the proposed three-stage optimization approach, the design space is systematically reduced to a relatively small promising region. Self-organizing maps act as the preliminary reduction approach by analyzing the underlying mapping relations between design variables and system responses within the original samples. The Gustafson–Kessel (GK) clustering algorithm is employed to determine the proper number of clusters using clustering validity indices, and sample points are then clustered with the fuzzy c-means (FCM) method using that number of clusters, so that the search can focus on the most promising area and be better supported by the constructed kriging metamodel. Empirical studies on multi-hump benchmark problems and two practical nonlinear engineering design problems show that accurate results can be obtained within the reduced design space, significantly improving overall efficiency.

13.
A design methodology for micromixers is presented that systematically integrates computational fluid dynamics (CFD) with an optimization methodology based on design of experiments (DOE), a function approximation (FA) technique, and a multi-objective genetic algorithm (MOGA). The methodology allows the simultaneous investigation of the effect of geometric parameters on the mixing performance of micromixers whose design strategy is based fundamentally on the generation of chaotic advection. The methodology has been applied to a Staggered Herringbone Micromixer (SHM) at several Reynolds numbers. The geometric features of the SHM are optimized and their effects on mixing are evaluated. The degree of mixing and the pressure drop are the performance criteria used to define the efficiency of the micromixer under different design requirements.

14.
This paper presents a new approach to shaping the frequency response of the sensitivity function. In this approach, a desired frequency response is assumed to be specified at a finite number of frequency points. The sensitivity shaping problem is formulated as approximation of the desired frequency response by a function from a class of sensitivity functions with a degree bound, and is reduced to a finite-dimensional constrained nonlinear least-squares optimization problem. To solve the optimization problem numerically, standard algorithms for unconstrained nonlinear least-squares problems are modified to incorporate the constraint. Numerical examples illustrate how the design parameters are tuned in an intuitive manner, as well as how the design proceeds in actual control problems.

15.
An important direction of metamodeling research focuses on developing methods that can iteratively improve the accuracy of the metamodel. Such strategies typically use space reduction to steer response surface refinement toward a smaller design space, with new sample points generated near the current optimum. The potential risk is that some characteristics of the given problem might be lost, especially for nonlinear problems. Therefore, a novel metamodel-assisted optimization called "Min–Median–Max" (M3) is proposed. The algorithm classifies sample points into three categories (maximum, median, and minimum) based on their objective function values, and new sample points are generated by considering a combination of the three kinds of samples. To avoid local convergence and control the number of sample points, the particle swarm optimization (PSO) algorithm and radial basis function (RBF) metamodeling are integrated to implement the M3 strategy. To validate its performance, multiple mathematical test functions are used to evaluate accuracy and efficiency. As a practical engineering application, the drawbead design of a stamping system is optimized. The results demonstrate the applicability and effectiveness of the M3 algorithm.
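The classification step can be sketched as a simple split of the samples by objective value; the thirds-based split below is our own simplification of the paper's rule, purely to show the bookkeeping:

```python
import numpy as np

def m3_classify(x, fx):
    """Split sample points into minimum / median / maximum groups by
    objective value, in the spirit of the M3 strategy."""
    order = np.argsort(fx)
    n = len(fx)
    lo, hi = order[: n // 3], order[-(n // 3):]
    mid = order[n // 3 : n - n // 3]
    return x[lo], x[mid], x[hi]

x = np.linspace(0, 1, 9)
fx = (x - 0.3)**2
x_min, x_med, x_max = m3_classify(x, fx)
```

New candidates would then be generated from combinations of the three groups, rather than only near the current best, so that global structure is not discarded.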

16.
Computational simulation models with variable fidelity have been widely used in complex systems design. However, running the most accurate simulation models tends to be very time-consuming, so they can only be used sporadically, while relying solely on less accurate, inexpensive models may result in inaccurate design alternatives. To trade off high accuracy against low expense, variable-fidelity (VF) metamodeling approaches that integrate information from both low-fidelity (LF) and high-fidelity (HF) models have gained increasing popularity. In this paper, an adaptive global VF metamodeling approach named difference adaptive decreasing variable-fidelity metamodeling (DAD-VFM) is proposed, in which the one-shot VF metamodeling process is transformed into an iterative process that utilizes already-acquired information about the difference characteristics between the HF and LF models. In DAD-VFM, support vector regression (SVR) is adopted to map the difference between the HF and LF models. In addition, a generalized objective-oriented sampling strategy is introduced to adaptively probe and sample more points in the interesting regions where the differences between the HF and LF models are multi-modal, non-smooth, or change abruptly. Several numerical cases and a long cylinder pressure vessel optimization design problem verify the applicability of the proposed VF metamodeling approach.

17.
Managing approximation models in multiobjective optimization
In engineering problems, computationally intensive high-fidelity models or expensive computer simulations hinder the use of standard optimization techniques because they must be invoked repeatedly during optimization, despite the tremendous growth of computer capability. Therefore, these expensive analyses are often replaced with approximation models that can be evaluated with considerably less effort. However, due to their limited accuracy, it is practically impossible to find the actual optimum (or the set of actual noninferior solutions) of the original single- (or multi-) objective optimization problem exactly. Significant efforts have been made to overcome this limitation; the model management framework is one such endeavour. In it, the approximation models are sequentially updated during the iterative optimization process so that their ability to accurately model the original functions, especially in the region of interest, improves. The models are modified and improved using one or several sample points generated by making good use of the predictive ability of the approximation models. However, these approaches have been restricted to single-objective optimization problems; there appears to be no reported management framework that can handle a multi-objective optimization problem. This paper suggests strategies that can successfully treat not only a single objective but also multiple objectives, by extending the concept of sequentially managed approximation models and combining it with a genetic algorithm that can treat multiple objectives (MOGA). Consequently, the number of exact analyses required to converge to an actual optimum, or to generate a sufficiently accurate Pareto set, can be reduced considerably; for multiple objectives the reduction is especially striking. We confirm these effects through several illustrative examples.
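The core update loop of such a model management framework, stripped to a single objective and a quadratic surrogate (a deliberate simplification of ours, not the paper's MOGA-coupled framework), looks like this: fit, minimize the surrogate, evaluate the true function at the surrogate optimum, add the point, refit:

```python
import numpy as np

def managed_optimize(f, lo=-2.0, hi=2.0, n_init=3, iters=6):
    """Sequential approximation management sketch: each iteration fits
    a quadratic surrogate to all exact evaluations so far, then spends
    one exact evaluation at the surrogate's minimizer."""
    x = list(np.linspace(lo, hi, n_init))
    y = [f(v) for v in x]
    for _ in range(iters):
        a, b, c = np.polyfit(x, y, 2)          # quadratic surrogate
        x_new = np.clip(-b / (2 * a), lo, hi) if a > 0 else (lo + hi) / 2
        x.append(float(x_new))
        y.append(f(x_new))
    return x[int(np.argmin(y))]

x_best = managed_optimize(lambda t: (t - 0.7)**2 + np.sin(5 * t) * 0.1)
```

Each iteration costs exactly one exact analysis, which is the point of managing the approximation: the surrogate, not the expensive function, absorbs the search effort.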

18.
Fuzzy approximation via grid point sampling and singular value decomposition
This paper introduces a new approach for fuzzy approximation of a continuous function on a compact domain. The approach calls for sampling the function over a set of rectangular grid points and applying singular value decomposition (SVD) to the sample matrix. The resulting quantities are then tailored into rule consequences and membership functions via the conditions of sum normalization and non-negativeness. The inference paradigm of product-sum-gravity is apparent from the structure of the decomposition equation. All information is extracted directly from the function samples. The approach yields a class of equivalent fuzzy approximators to a given function. A tight bounding technique to produce normal or close-to-normal membership functions is also formulated. The fuzzy output approximates the given function to within an error that depends on the sampling intervals and the singular values discarded from the approximation process. The trade-off between the number of membership functions and the desired approximation accuracy is also discussed.
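The grid-sampling-plus-SVD step behind the method, and the role of the discarded singular values in the error bound, can be seen in a few lines; the conversion of the SVD factors into membership functions and rule consequences is omitted here, and the test function is our own:

```python
import numpy as np

# Sample f on a rectangular grid and truncate the SVD of the sample
# matrix; the grid reconstruction error is governed by the discarded
# singular values.
f = lambda x, y: np.exp(-x) * np.sin(y) + 0.5 * np.cos(x) * y
x = np.linspace(0, 1, 30)
y = np.linspace(0, 2, 40)
F = f(x[:, None], y[None, :])            # sample matrix over the grid
U, s, Vt = np.linalg.svd(F, full_matrices=False)
k = 2                                    # f is a sum of 2 separable terms
F_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
max_err = np.max(np.abs(F - F_k))
```

Since this f is exactly a sum of two separable terms, the third and later singular values are at machine-precision level and the rank-2 reconstruction is essentially exact on the grid; a function of higher "separable rank" would leave a residual equal to the discarded spectrum.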

19.
Existing collaborative optimization techniques with multiple coupled subsystems are predominantly focused on single-objective deterministic optimization. However, in many engineering optimization problems the system and its subsystems can each be multi-objective, constrained, and subject to uncertainty. The literature reports a few deterministic Multi-objective Multi-Disciplinary Optimization (MMDO) techniques, but these generally require a large number of function calls, and their computational cost is exacerbated when uncertainty is present. In this paper, a new Approximation-Assisted Multi-objective collaborative Robust Optimization (new AA-McRO) under interval uncertainty is presented. This approach uses a single-objective optimization problem to coordinate all system- and subsystem-level multi-objective optimization problems in a Collaborative Optimization (CO) framework, converting the consistency constraints of CO into penalty terms that are integrated into the system and subsystem objective functions. The new AA-McRO explores the design space better and obtains optimum design solutions more efficiently. It also obtains an estimate of Pareto-optimal solutions for MMDO problems whose system-level objective and constraint functions are relatively insensitive (robust) to input uncertainties. Another characteristic of the new AA-McRO is the use of online approximation of objective and constraint functions for system robustness evaluation and subsystem-level optimization. Based on results from a numerical and an engineering example, it is concluded that the new AA-McRO performs better than previously reported MMDO methods.

20.
Engineers widely use the Gaussian process regression framework to construct surrogate models that replace computationally expensive physical models during design space exploration. Thanks to Gaussian process properties, both samples generated by a high-fidelity function (an expensive and accurate representation of a physical phenomenon) and by a low-fidelity function (a cheap and coarse approximation of the same phenomenon) can be used in constructing a surrogate model. However, if sample sizes exceed a few thousand points, the computational costs of Gaussian process regression become prohibitive, both for learning and for prediction. We propose two approaches to circumvent this computational burden: one is based on the Nyström approximation of sample covariance matrices, the other on intelligent use of a blackbox that can evaluate the low-fidelity function on the fly at any point of the design space. We examine the performance of the proposed approaches on a number of artificial and real problems, including engineering optimization of a rotating disk shape.
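For reference, the Nyström approximation reconstructs an n×n kernel matrix from m landmark columns as K ≈ K_nm K_mm⁺ K_mn, reducing the cubic cost in n to a cost driven by m. A generic sketch with a squared-exponential kernel of our own choosing (not the paper's specific covariance):

```python
import numpy as np

def nystrom(K_fn, X, m, seed=0):
    """Nystrom approximation: evaluate the kernel against m random
    landmarks and reconstruct K ~ K_nm pinv(K_mm) K_mn."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    K_nm = K_fn(X, X[idx])            # n x m cross-kernel block
    K_mm = K_nm[idx]                  # m x m landmark block
    return K_nm @ np.linalg.pinv(K_mm) @ K_nm.T

rbf = lambda A, B: np.exp(-(A[:, None] - B[None, :])**2 / 0.5)
X = np.linspace(0, 3, 200)
K = rbf(X, X)
K_hat = nystrom(rbf, X, m=30)
rel_err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
```

Because smooth kernels have rapidly decaying spectra, a modest number of landmarks recovers the full matrix to small relative error, which is what makes the approximation attractive for large-sample Gaussian process regression.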


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号