Similar Documents
20 similar documents found (search time: 31 ms)
1.
It is important to design robust and reliable systems by accounting for uncertainty and variability in the design process. However, performing optimization in this setting can be computationally expensive, requiring many evaluations of the numerical model to compute statistics of the system performance at every optimization iteration. This paper proposes a multifidelity approach to optimization under uncertainty that makes use of inexpensive, low‐fidelity models to provide approximate information about the expensive, high‐fidelity model. The multifidelity estimator is developed based on the control variate method to reduce the computational cost of achieving a specified mean square error in the statistic estimate. The method optimally allocates the computational load between the two models based on their relative evaluation cost and the strength of the correlation between them. This paper also develops an information reuse estimator that exploits the autocorrelation structure of the high‐fidelity model in the design space to reduce the cost of repeatedly estimating statistics during the course of optimization. Finally, a combined estimator incorporates the features of both the multifidelity estimator and the information reuse estimator. The methods demonstrate 90% computational savings in an acoustic horn robust optimization example and practical design turnaround time in a robust wing optimization problem. Copyright © 2014 John Wiley & Sons, Ltd.
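
The control-variate construction at the heart of this estimator is compact enough to sketch. The following is a minimal sketch, assuming hypothetical `f_hi`/`f_lo` models and a fixed sample split; the paper derives the split optimally from the cost ratio and the correlation, which the sketch instead estimates empirically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical high- and low-fidelity models of a scalar system output.
f_hi = lambda z: np.sin(z) + 0.05 * z ** 2
f_lo = lambda z: np.sin(z)                  # cheap, correlated approximation

n, m = 100, 2000        # n high-fidelity and m >= n low-fidelity evaluations
z = rng.normal(size=m)  # shared samples of the uncertain input

y_hi = f_hi(z[:n])
y_lo = f_lo(z)

# Control-variate coefficient rho * sigma_hi / sigma_lo, estimated from
# the n paired samples.
c = np.cov(y_hi, y_lo[:n])
alpha = c[0, 1] / c[1, 1]

# Multifidelity estimate of E[f_hi]: the extra low-fidelity samples
# correct the n-sample high-fidelity mean.
s_mf = y_hi.mean() + alpha * (y_lo.mean() - y_lo[:n].mean())
print(f"multifidelity: {s_mf:.4f}, plain n-sample MC: {y_hi.mean():.4f}")
```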

2.
Partitioned procedures are appealing for solving complex fluid‐structure interaction (FSI) problems, as they allow existing computational fluid dynamics (CFD) and computational structural dynamics algorithms and solvers to be combined and reused. However, for problems involving incompressible flow and strong added‐mass effect (e.g., heavy fluid and slender structure), partitioned procedures suffer from numerical instability, which typically requires additional subiterations between the fluid and structural solvers, hence significantly increasing the computational cost. This paper investigates the use of Robin‐Neumann transmission conditions to mitigate the above instability issue. Firstly, an embedded Robin boundary method is presented in the context of projection‐based incompressible CFD and finite element–based computational structural dynamics. The method utilizes operator splitting and a modified ghost fluid method to enforce the Robin transmission condition on fluid‐structure interfaces embedded in structured non–body‐conforming CFD grids. The method is demonstrated and verified using the Turek and Hron benchmark problem, which involves a slender beam undergoing large transient deformation in an unsteady vortex‐dominated channel flow. Secondly, this paper investigates the effect of the combination parameter in the Robin transmission condition, i.e., α_f, on numerical stability and solution accuracy. This paper presents a numerical study using the Turek and Hron benchmark problem and an analytical study using a simplified FSI model featuring an Euler‐Bernoulli beam interacting with a two‐dimensional incompressible inviscid flow. Both studies reveal a trade‐off between stability and accuracy: smaller values of α_f tend to improve numerical stability, yet deteriorate the accuracy of the partitioned solution. Using the simplified FSI model, the critical value of α_f that optimizes this trade‐off is derived and discussed.

3.
This paper presents an efficient metamodel building technique for solving collaborative optimization (CO) based on high fidelity models. The proposed method is based on a metamodeling concept that is designed to simultaneously utilize computationally efficient (low fidelity) and expensive (high fidelity) models in an optimization process. A distinctive feature of the method is the utilization of interaction between low and high fidelity models in the construction of high quality metamodels at both the discipline level and the system level of the CO. The low fidelity model is tuned in such a way that it approaches the same level of accuracy as the high fidelity model, while remaining computationally inexpensive. The tuned low fidelity models are then used in the discipline level optimization process. At the system level, a model management strategy along with a metamodeling technique is used to handle the computational cost of the equality constraints in CO. To determine the fidelity of the metamodels, the predictive estimation of model fidelity method is applied. The developed method is demonstrated on a 2D airfoil design problem involving tightly coupled high fidelity structural and aerodynamic models. The results obtained show that the proposed method significantly reduces computational cost and improves the convergence rate for solving the multidisciplinary optimization problem based on high fidelity models.

4.
Variable-complexity methods are applied to aerodynamic shape design problems with the objective of reducing the total computational cost of the optimization process. Two main strategies are employed: the use of different levels of fidelity in the analysis models (variable fidelity) and the use of different sets of design variables (variable parameterization). Variable-fidelity methods with three different types of corrections are implemented and applied to a set of two-dimensional airfoil optimization problems that use computational fluid dynamics for the analysis. Variable parameterization is also used to solve the same problems. Both strategies are shown to reduce the computational cost of the optimization.
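
As a sketch of what such corrections look like, the snippet below implements zeroth-order additive, multiplicative, and hybrid corrections on hypothetical one-dimensional models; the paper's three correction types are applied to CFD analyses and are not necessarily these exact forms.

```python
import numpy as np

# Hypothetical analysis models (stand-ins for CFD at two fidelities).
f_hi = lambda x: (6 * x - 2) ** 2 * np.sin(12 * x - 4)
f_lo = lambda x: 0.5 * f_hi(x) + 10 * (x - 0.5) - 5

x0 = 0.4  # current iterate / trust-region centre

# Additive correction: shifts f_lo so it matches f_hi at x0.
A = f_hi(x0) - f_lo(x0)
f_add = lambda x: f_lo(x) + A

# Multiplicative correction: scales f_lo so it matches f_hi at x0.
B = f_hi(x0) / f_lo(x0)
f_mul = lambda x: B * f_lo(x)

# Convex combination of the two (a simple hybrid correction).
w = 0.5
f_hyb = lambda x: w * f_add(x) + (1 - w) * f_mul(x)

for f in (f_add, f_mul, f_hyb):
    assert np.isclose(f(x0), f_hi(x0))  # zeroth-order consistency at x0
```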

5.
Vision system calibration and identification are important issues for effective implementation of high‐performance robotic systems. Vision system identification addresses the problem of determining the mapping from points in the world frame to their corresponding location in a computer image frame. By assuming rotation of the camera frame around one of the principal axes of the world frame—but incorporating radial lens distortion—we show that this mapping can be expressed as a linear regression model in terms of a suitable combination of the intrinsic and extrinsic camera parameters. This property allows the application of several known techniques based on resolution of a determined set of linear equations and least‐squares–based methods to estimate these parameters from experimental input‐output data. Experimental comparisons are carried out to illustrate the performances of these methods. © 2001 John Wiley & Sons, Inc. Int J Imaging Syst Technol 11, 170–180, 2000
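
A minimal sketch of the estimation step, assuming a hypothetical linear-in-parameters model with an illustrative radial-distortion regressor; the paper's exact regressor combination follows from the assumed camera geometry.

```python
import numpy as np

# Hypothetical world points (planar target) and their measured pixel
# coordinates; in practice these come from a calibration experiment.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0],
              [0.5, 0.5], [0.2, 0.8]])
u = np.array([102.1, 301.9, 99.8, 298.7, 200.5, 141.0])  # image u-coords

# Linear regression model u = Phi(X) @ theta.  The regressors here
# (affine terms plus a quadratic radial term standing in for lens
# distortion) are illustrative, not the paper's exact combination.
r2 = (X ** 2).sum(axis=1)
Phi = np.column_stack([X[:, 0], X[:, 1], np.ones(len(X)), r2])

# Least-squares estimate of the combined camera parameters.
theta, residuals, rank, sv = np.linalg.lstsq(Phi, u, rcond=None)
print("estimated parameters:", theta)
print("fit residual:", np.linalg.norm(Phi @ theta - u))
```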

6.
Unlike the traditional topology optimization approach that uses the same discretization for finite element analysis and design optimization, this paper proposes a framework for improving multiresolution topology optimization (iMTOP) via multiple distinct discretizations for: (1) finite elements; (2) design variables; and (3) density. This approach leads to high fidelity resolution with a relatively low computational cost. In addition, an adaptive multiresolution topology optimization (AMTOP) procedure is introduced, which consists of selective adjustment and refinement of design variable and density fields. Various two‐dimensional and three‐dimensional numerical examples demonstrate that the proposed schemes can significantly reduce computational cost in comparison to the existing element‐based approach. Copyright © 2012 John Wiley & Sons, Ltd.

7.
In optimization under uncertainty for engineering design, the behavior of the system outputs due to uncertain inputs needs to be quantified at each optimization iteration, but this can be computationally expensive. Multifidelity techniques can significantly reduce the computational cost of Monte Carlo sampling methods for quantifying the effect of uncertain inputs, but existing multifidelity techniques in this context apply only to Monte Carlo estimators that can be expressed as a sample average, such as estimators of statistical moments. Information reuse is a particular multifidelity method that treats previous optimization iterations as lower fidelity models. This work generalizes information reuse to be applicable to quantities whose estimators are not sample averages. The extension makes use of bootstrapping to estimate the error of estimators and the covariance between estimators at different fidelities. Specifically, the horsetail matching metric and quantile function are considered as quantities whose estimators are not sample averages. In an optimization under uncertainty for an acoustic horn design problem, generalized information reuse demonstrated computational savings of over 60% compared with regular Monte Carlo sampling.
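
A minimal sketch of the bootstrap machinery, using a 90% quantile (an estimator that is not a sample average) and hypothetical paired fidelities.

```python
import numpy as np

rng = np.random.default_rng(1)

# Paired outputs of a hypothetical high-fidelity model and a lower
# "fidelity" (e.g. a previous optimization iterate), same input samples.
z = rng.normal(size=500)
y_hi = z + 0.1 * z ** 2
y_lo = z

def q90(y):
    # Quantile estimator -- not expressible as a sample average.
    return np.quantile(y, 0.9)

# Paired bootstrap: resample input indices, re-evaluate both estimators.
B = 2000
idx = rng.integers(0, len(z), size=(B, len(z)))
boot_hi = np.array([q90(y_hi[i]) for i in idx])
boot_lo = np.array([q90(y_lo[i]) for i in idx])

var_hi = boot_hi.var(ddof=1)            # estimator error (variance)
cov_hl = np.cov(boot_hi, boot_lo)[0, 1]  # cross-fidelity covariance
rho = cov_hl / np.sqrt(var_hi * boot_lo.var(ddof=1))
print(f"Var[q90_hi] ~ {var_hi:.4g}, correlation ~ {rho:.3f}")
```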

8.
In this paper we perform shape optimization of a pediatric pulsatile ventricular assist device (PVAD). The device simulation is carried out using fluid–structure interaction (FSI) modeling techniques within a computational framework that combines FEM for fluid mechanics and isogeometric analysis for structural mechanics modeling. The PVAD FSI simulations are performed under realistic conditions (i.e., flow speeds, pressure levels, boundary conditions, etc.), and account for the interaction of air, blood, and a thin structural membrane separating the two fluid subdomains. The shape optimization study is designed to reduce thrombotic risk, a major clinical problem in PVADs. Thrombotic risk is quantified in terms of particle residence time in the device blood chamber. Methods to compute particle residence time in the context of moving spatial domains are presented in a companion paper published in the same issue (Comput Mech, doi:10.1007/s00466-013-0931-y, 2013). The surrogate management framework, a derivative-free pattern search optimization method that relies on surrogates for increased efficiency, is employed in this work. For the optimization study shown here, particle residence time is used to define a suitable cost or objective function, while four adjustable design optimization parameters are used to define the device geometry. The FSI-based optimization framework is implemented in a parallel computing environment, and deployed with minimal user intervention. Using five SEARCH/POLL steps the optimization scheme identifies a PVAD design with significantly better throughput efficiency than the original device.
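
A stripped-down SEARCH/POLL loop in the spirit of the surrogate management framework is sketched below on a cheap analytic stand-in for the FSI cost; the RBF surrogate, candidate sampling, and mesh-update rules are illustrative simplifications.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def f(x):  # stand-in for an expensive FSI-based cost (e.g. residence time)
    return (x[0] - 0.3) ** 2 + 2 * (x[1] + 0.1) ** 2

rng = np.random.default_rng(0)
X = [p for p in rng.uniform(-1.0, 1.0, (5, 2))]  # initial design
y = [f(p) for p in X]
x = X[int(np.argmin(y))]
step = 0.5

for _ in range(25):
    fx = min(y)
    # SEARCH step: propose the minimizer of an RBF surrogate built from
    # all evaluations so far; one true evaluation per proposal.
    surr = RBFInterpolator(np.array(X), np.array(y))
    cand = x + step * rng.uniform(-1, 1, (64, 2))
    s = cand[int(np.argmin(surr(cand)))]
    X.append(s); y.append(f(s))
    if y[-1] < fx:
        x = s
        step *= 2.0          # expand mesh on success
        continue
    # POLL step: a positive-spanning stencil around the incumbent.
    success = False
    for d in np.vstack([np.eye(2), -np.eye(2)]):
        p = x + step * d
        X.append(p); y.append(f(p))
        if y[-1] < fx:
            x, success = p, True
            break
    step = step * 2.0 if success else step / 2.0
print("best design:", x, "cost:", min(y))
```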

9.
In many real-world optimization problems, the underlying objective and constraint function(s) are evaluated using computationally expensive iterative simulations such as the solvers for computational electromagnetics, computational fluid dynamics, the finite element method, etc. The default practice is to run such simulations until convergence using termination criteria, such as maximum number of iterations, residual error thresholds or limits on computational time, to estimate the performance of a given design. This information is used to build computationally cheap approximations/surrogates which are subsequently used during the course of optimization in lieu of the actual simulations. However, it is possible to exploit information on pre-converged solutions if one has the control to abort simulations at various stages of convergence. This would mean access to various performance estimates in lower fidelities. Surrogate assisted optimization methods have rarely been used to deal with such classes of problem, where estimates at various levels of fidelity are available. In this article, a multiple surrogate assisted optimization approach is presented, where solutions are evaluated at various levels of fidelity during the course of the search. For any solution under consideration, the choice to evaluate it at an appropriate fidelity level is derived from neighbourhood information, i.e. rank correlations between performance at different fidelity levels and the highest fidelity level of the neighbouring solutions. Moreover, multiple types of surrogates are used to gain a competitive edge. The performance of the approach is illustrated using a simple 1D unconstrained analytical test function. Thereafter, the performance is further assessed using three 10D and three 20D test problems, and finally a practical design problem involving drag minimization of an unmanned underwater vehicle. The numerical experiments clearly demonstrate the benefits of the proposed approach for such classes of problem.
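
The neighbourhood-based fidelity choice can be sketched compactly: a Spearman rank correlation between fidelity levels over already-evaluated neighbours decides whether a cheap estimate suffices. The models and threshold below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)

# Hypothetical objective evaluated at increasing solver convergence:
# f_low aborts the simulation early, f_high runs it to convergence.
f_high = lambda x: np.sum((x - 0.5) ** 2, axis=-1)
f_low = lambda x: f_high(x) + 0.05 * np.sin(20 * x).sum(axis=-1)

# Archive of neighbours already evaluated at both fidelity levels.
neighbours = rng.uniform(0, 1, (12, 4))
lo, hi = f_low(neighbours), f_high(neighbours)

# Rank correlation between fidelities over the neighbourhood: if the
# cheap estimate ranks neighbours like the converged one, evaluate the
# new solution cheaply; otherwise pay for the high-fidelity run.
rho, _ = spearmanr(lo, hi)
x_new = rng.uniform(0, 1, 4)
fidelity = "low" if rho > 0.9 else "high"
value = f_low(x_new) if fidelity == "low" else f_high(x_new)
print(f"rho={rho:.3f} -> evaluate at {fidelity} fidelity: {value:.4f}")
```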

10.
We present a model reduction approach to the solution of large‐scale statistical inverse problems in a Bayesian inference setting. A key to the model reduction is an efficient representation of the non‐linear terms in the reduced model. To achieve this, we present a formulation that employs masked projection of the discrete equations; that is, we compute an approximation of the non‐linear term using a select subset of interpolation points. Further, through this formulation we show similarities among the existing techniques of gappy proper orthogonal decomposition, missing point estimation, and empirical interpolation via coefficient‐function approximation. The resulting model reduction methodology is applied to a highly non‐linear combustion problem governed by an advection–diffusion‐reaction partial differential equation (PDE). Our reduced model is used as a surrogate for a finite element discretization of the non‐linear PDE within the Markov chain Monte Carlo sampling employed by the Bayesian inference approach. In two spatial dimensions, we show that this approach yields accurate results while reducing the computational cost by several orders of magnitude. For the full three‐dimensional problem, a forward solve using a reduced model that has high fidelity over the input parameter space is more than two million times faster than the full‐order finite element model, making tractable the solution of the statistical inverse problem that would otherwise require many years of CPU time. Copyright © 2009 John Wiley & Sons, Ltd.
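
A minimal sketch of the interpolation-point selection behind such masked projections, in the spirit of the (discrete) empirical interpolation method the paper relates to; the snapshot family is hypothetical.

```python
import numpy as np

def interpolation_points(U):
    """Greedy selection of interpolation indices for the basis U (n x m),
    in the spirit of the (discrete) empirical interpolation method."""
    n, m = U.shape
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for l in range(1, m):
        # Coefficients that match the current basis at the chosen points.
        c = np.linalg.solve(U[np.ix_(p, range(l))], U[p, l])
        r = U[:, l] - U[:, :l] @ c        # residual of the next basis vector
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

# Snapshots of a hypothetical nonlinear term, reduced to a POD basis.
rng = np.random.default_rng(3)
s = np.linspace(0, 1, 200)
snaps = np.array([np.exp(-mu * s) * np.sin(8 * s)
                  for mu in rng.uniform(1, 9, 40)]).T
U = np.linalg.svd(snaps, full_matrices=False)[0][:, :6]

pts = interpolation_points(U)
# Approximate a new nonlinear-term snapshot from its values at only the
# selected points (masked projection): f ~ U @ inv(U[pts, :]) @ f[pts].
f = np.exp(-4.2 * s) * np.sin(8 * s)
f_hat = U @ np.linalg.solve(U[pts, :], f[pts])
print("relative error:", np.linalg.norm(f - f_hat) / np.linalg.norm(f))
```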

11.
Reduced‐order models that are able to approximate output quantities of interest of high‐fidelity computational models over a wide range of input parameters play an important role in making tractable large‐scale optimal design, optimal control, and inverse problem applications. We consider the problem of determining a reduced model of an initial value problem that spans all important initial conditions, and pose the task of determining appropriate training sets for reduced‐basis construction as a sequence of optimization problems. We show that, under certain assumptions, these optimization problems have an explicit solution in the form of an eigenvalue problem, yielding an efficient model reduction algorithm that scales well to systems with states of high dimension. Furthermore, tight upper bounds are given for the error in the outputs of the reduced models. The reduction methodology is demonstrated for a large‐scale contaminant transport problem. Copyright © 2007 John Wiley & Sons, Ltd.
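
A snapshot-based analogue of the basis construction, as a loose sketch: the optimal basis follows from an eigenvalue problem for the snapshot Gramian, computed here via an SVD on a hypothetical stable linear system rather than the paper's optimization-based training-set selection.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400                                   # full state dimension

# Hypothetical stable linear initial-value problem x_{k+1} = A x_k.
A = np.diag(np.exp(-np.arange(n) / 10.0))
X0 = rng.standard_normal((n, 20))         # training initial conditions

# Collect trajectory snapshots spanning the training initial conditions.
snaps = []
for j in range(X0.shape[1]):
    x = X0[:, j]
    for _ in range(25):
        x = A @ x
        snaps.append(x)
S = np.column_stack(snaps)

# The optimal basis is obtained from an eigenvalue problem for the
# snapshot Gramian S S^T; equivalently, the leading left singular vectors.
V = np.linalg.svd(S, full_matrices=False)[0][:, :12]

# Galerkin-reduced model; state error for an unseen initial condition.
Ar = V.T @ A @ V
x = rng.standard_normal(n)
xr = V.T @ x
for _ in range(25):
    x, xr = A @ x, Ar @ xr
print("relative state error:",
      np.linalg.norm(x - V @ xr) / np.linalg.norm(x))
```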

12.
We study practical strategies for estimating numerical errors in scalar outputs calculated from unsteady simulations of convection‐dominated flows, including those governed by the compressible Navier–Stokes equations. The discretization is a discontinuous Galerkin finite element method in space and time on static spatial meshes. Time‐integral quantities are considered for scalar outputs and these are shown to superconverge with temporal refinement. Output error estimates are calculated using the adjoint‐weighted residual method, where the unsteady adjoint solution is obtained using a discrete approach with an iterative solver. We investigate the accuracy versus computational cost trade‐off for various approximations of the fine‐space adjoint and find that exact adjoint solutions are accurate but expensive. To reduce the cost, we propose a local temporal reconstruction that takes advantage of superconvergence properties at Radau points, and a spatial reconstruction based on nearest‐neighbor elements. This inexact adjoint yields output error estimates at a computational cost of less than 2.5 times that of the forward problem for the cases tested. The calculated error estimates account for numerical error arising from both the spatial and temporal discretizations, and we present a method for identifying the percentage contributions of each discretization to the output error. Copyright © 2011 John Wiley & Sons, Ltd.
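
For a linear problem the adjoint-weighted residual recovers the output error exactly, which the following one-dimensional sketch demonstrates; the paper's unsteady discontinuous Galerkin setting adds the reconstruction machinery described above.

```python
import numpy as np

def poisson(n):
    """1D Poisson -u'' = 1 on (0,1), homogeneous Dirichlet, n interior pts."""
    h = 1.0 / (n + 1)
    A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h ** 2
    return A, np.ones(n)

# Fine ("fine-space") and coarse discretizations; output J(u) = mean(u).
nH, nh = 15, 31                  # coarse nodes coincide with every other fine node
A_h, f_h = poisson(nh)
A_H, f_H = poisson(nH)
g = np.full(nh, 1.0 / nh)        # output functional on the fine space

u_h = np.linalg.solve(A_h, f_h)
u_H = np.linalg.solve(A_H, f_H)

# Inject the coarse solution into the fine space (linear interpolation).
xH = np.linspace(0, 1, nH + 2)[1:-1]
xh = np.linspace(0, 1, nh + 2)[1:-1]
u_Hh = np.interp(xh, np.concatenate(([0], xH, [1])),
                 np.concatenate(([0], u_H, [0])))

# Adjoint-weighted residual: dJ ~ -psi^T R(u_H); exact for linear problems.
psi = np.linalg.solve(A_h.T, g)
residual = A_h @ u_Hh - f_h
est = -psi @ residual
true = g @ u_h - g @ u_Hh
print(f"estimated output error {est:.3e}  vs  true {true:.3e}")
```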

13.
In simulation-based engineering design optimization, using high-fidelity, high-cost analysis models leads to a heavy computational burden, while using low-fidelity, low-cost analysis models yields design results of low credibility that can hardly meet practical engineering requirements. To effectively balance the conflict between high fidelity and low cost, a sequential hierarchical Kriging model is built to fuse high- and low-fidelity data: a large number of cheap, low-fidelity sample points capture the trend of the high-fidelity analysis model, while a small number of expensive, high-fidelity sample points correct the low-fidelity model, enabling high-accuracy prediction of the optimization objective. To prevent hierarchical Kriging model error from degrading the optimization result, the hierarchical Kriging model is combined with a genetic algorithm: the prediction interval of the best solution in each generation is computed according to the 6σ design criterion, and a current best solution with a large prediction interval becomes a new high-fidelity sample point. Meanwhile, the hierarchical Kriging model is sequentially updated during the optimization to improve its prediction accuracy near the optimum, thereby guaranteeing the reliability of the design result. The proposed method is applied to the design optimization of a micro air vehicle fuselage structure to verify its effectiveness and superiority. Mesh models with different numbers of elements serve as the low- and high-fidelity analysis models; optimal Latin hypercube design is used to select 60 low-fidelity and 20 high-fidelity sample points to build the initial hierarchical Kriging model, and the results of the proposed method are compared with those obtained directly from the high-fidelity simulation model. The results show that the proposed method makes effective use of the information at the high- and low-fidelity sample points to build an accurate hierarchical Kriging model; it finds a near-optimal solution at only a small computational cost, effectively improving design efficiency, and provides a reference for similar structural design optimization problems.
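
A simplified two-level sketch in the spirit of hierarchical Kriging, using scikit-learn and modelling the high-fidelity data as the low-fidelity prediction plus a discrepancy GP; the analytic models, sample counts, and infill rule here are illustrative stand-ins for the finite element models and the 6σ criterion in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical high-/low-fidelity responses (stand-ins for the fine and
# coarse finite element models of the airframe).
f_hi = lambda x: (6 * x - 2) ** 2 * np.sin(12 * x - 4)
f_lo = lambda x: 0.5 * f_hi(x) + 10 * (x - 0.5) - 5

X_lo = np.linspace(0, 1, 30)[:, None]    # many cheap samples
X_hi = np.linspace(0, 1, 6)[:, None]     # few expensive samples

# Level 1: Kriging model of the low-fidelity trend.
gp_lo = GaussianProcessRegressor(ConstantKernel() * RBF())
gp_lo.fit(X_lo, f_lo(X_lo.ravel()))

# Level 2: Kriging of the high-fidelity data with the low-fidelity
# prediction as trend (modelled here as a residual GP, a simplification).
resid = f_hi(X_hi.ravel()) - gp_lo.predict(X_hi)
gp_d = GaussianProcessRegressor(ConstantKernel() * RBF(), alpha=1e-8)
gp_d.fit(X_hi, resid)

# Prediction and 6-sigma-style interval at the incumbent optimum: if the
# interval is wide, the optimum itself becomes the next HF sample point.
X = np.linspace(0, 1, 200)[:, None]
mu_d, sd = gp_d.predict(X, return_std=True)
mu = gp_lo.predict(X) + mu_d
i = int(np.argmin(mu))
print(f"predicted optimum x={X[i, 0]:.3f}, mu={mu[i]:.3f} +/- {6 * sd[i]:.3f}")
```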

14.
The ability to quickly and intuitively edit digital content has become increasingly important in our everyday life. However, existing edit propagation methods for digital images are typically based on optimization whose computational cost becomes prohibitive for large inputs, making them inefficient and highly time-consuming. Accordingly, to improve edit efficiency, this paper proposes a novel edit propagation method using a bilateral grid, which can achieve instant propagation of sparse image edits. Firstly, given an input image with user interactions, we resample each of its pixels into a regularly sampled bilateral grid, which facilitates efficient mapping from an image to the bilateral space. As a result, all pixels with the same feature information (color, coordinates) are clustered to the same grid cell, which reduces both the amount of image data to process and the cost of calculation. We then reformulate the propagation as an interpolation problem in bilateral space, which is solved very efficiently using radial basis functions. Experimental results show that our method improves the efficiency of color editing, making it faster than existing edit propagation approaches while producing high-quality edited images.
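
A minimal grayscale sketch of the pipeline: splat sparse edits into an (x, y, intensity) bilateral grid, spread them in bilateral space, and slice per pixel. The paper interpolates with radial basis functions; Gaussian grid smoothing stands in here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def propagate_edits(img, edits, mask, bins=(16, 16, 8), sigma=1.5):
    """Propagate sparse edits over a grayscale image via a bilateral grid.

    img: HxW floats in [0,1]; edits: HxW edit strengths; mask: HxW bool
    marking user-edited pixels.
    """
    H, W = img.shape
    gx = (np.arange(H)[:, None] * (bins[0] - 1) // max(H - 1, 1)).repeat(W, 1)
    gy = (np.arange(W)[None, :] * (bins[1] - 1) // max(W - 1, 1)).repeat(H, 0)
    gz = np.clip((img * (bins[2] - 1)).round().astype(int), 0, bins[2] - 1)

    val = np.zeros(bins)
    wgt = np.zeros(bins)
    # Splat: accumulate user edits into their (x, y, intensity) cells.
    np.add.at(val, (gx[mask], gy[mask], gz[mask]), edits[mask])
    np.add.at(wgt, (gx[mask], gy[mask], gz[mask]), 1.0)

    # Spread the edits to nearby cells in bilateral space (Gaussian
    # smoothing as a stand-in for the paper's RBF interpolation).
    val = gaussian_filter(val, sigma)
    wgt = gaussian_filter(wgt, sigma)

    # Slice: read each pixel's propagated edit back out of the grid.
    return val[gx, gy, gz] / np.maximum(wgt[gx, gy, gz], 1e-8)

# Toy example: darken everything similar to the scribbled region.
img = np.tile(np.linspace(0, 1, 64), (64, 1))
mask = np.zeros_like(img, bool); mask[30:34, 40:44] = True
edits = np.where(mask, -0.5, 0.0)
out = propagate_edits(img, edits, mask)
print(out[32, 35:50].round(2))   # edit strength near the scribble
```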

15.
This article presents a computational approach to the image reconstruction of a perfectly conducting cylinder illuminated by transverse electric waves. A perfectly conducting cylinder of unknown shape is buried in one half‐space and scatters the incident wave into another half‐space, where the scattered field is recorded. Based on the boundary condition and the measured scattered field, a set of nonlinear integral equations is derived, and the imaging problem is reformulated into an optimization problem. The steady state genetic algorithm is then employed to find the global extremum of the cost function. Numerical results demonstrate that good reconstruction can be obtained even when the initial guess is far from the exact shape, a case in which gradient‐based methods often get trapped in a local extremum. In addition, the effect of different noise levels on the reconstruction is investigated. © 2006 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 15, 261–265, 2005

16.
Genetic algorithms (GAs) have become a popular optimization tool for many areas of research and topology optimization an effective design tool for obtaining efficient and lighter structures. In this paper, a versatile, robust and enhanced GA is proposed for structural topology optimization by using problem‐specific knowledge. The original discrete black‐and‐white (0–1) problem is directly solved by using a bit‐array representation method. To address the related pronounced connectivity issue effectively, the four‐neighbourhood connectivity is used to suppress the occurrence of checkerboard patterns. A simpler version of the perimeter control approach is developed to obtain a well‐posed problem and the total number of hinges of each individual is explicitly penalized to achieve a hinge‐free design. To handle the problem of representation degeneracy effectively, a recessive gene technique is applied to viable topologies while unusable topologies are penalized in a hierarchical manner. An efficient FEM‐based function evaluation method is developed to reduce the computational cost. A dynamic penalty method is presented for the GA to convert the constrained optimization problem into an unconstrained problem without the possible degeneracy. With all these enhancements and appropriate choice of the GA operators, the present GA can achieve significant improvements in evolving into near‐optimum solutions and viable topologies with checkerboard free, mesh independent and hinge‐free characteristics. Numerical results show that the present GA can be more efficient and robust than the conventional GAs in solving the structural topology optimization problems of minimum compliance design, minimum weight design and optimal compliant mechanisms design. It is suggested that the present enhanced GA using problem‐specific knowledge can be a powerful global search tool for structural topology optimization. Copyright © 2005 John Wiley & Sons, Ltd.
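
Two of the problem-specific checks are easy to sketch: counting 2×2 checkerboard patterns for penalization, and testing material connectivity under four-neighbourhood adjacency. The encodings below are illustrative.

```python
import numpy as np
from scipy.ndimage import label

def checkerboard_count(t):
    """Number of 2x2 checkerboard patterns in a 0-1 topology bit-array."""
    a, b = t[:-1, :-1], t[1:, 1:]     # diagonal pair
    c, d = t[:-1, 1:], t[1:, :-1]     # anti-diagonal pair
    return int(np.sum((a == b) & (c == d) & (a != c)))

def is_connected(t):
    """True if all solid elements form one four-connected region."""
    structure = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])  # 4-neighbourhood
    _, num = label(t, structure=structure)
    return num <= 1

# A deliberately pathological individual: checkerboards are stiff in the
# FE model but unmanufacturable, so the GA penalizes them explicitly.
t = np.indices((6, 6)).sum(axis=0) % 2
print(checkerboard_count(t), is_connected(t))  # many patterns, disconnected
```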

17.
Mehdi Ebrahimi, Engineering Optimization, 2017, 49(12): 2079–2094
An efficient strategy is presented for global shape optimization of wing sections with a parallel genetic algorithm. Several computational techniques are applied to increase the convergence rate and the efficiency of the method. A variable fidelity computational evaluation method is applied in which the expensive Navier–Stokes flow solver is complemented by an inexpensive multi-layer perceptron neural network for the objective function evaluations. A population dispersion method that consists of two phases, of exploration and refinement, is developed to improve the convergence rate and the robustness of the genetic algorithm. Owing to the nature of the optimization problem, a parallel framework based on the master/slave approach is used. The outcomes indicate that the method is able to find the global optimum with significantly lower computational time in comparison to the conventional genetic algorithm.

18.
A new computational approach to modelling and control of a flexible beam is proposed. The structural modelling and the control design problems are formulated in a unified mathematical framework that allows simultaneous structural and control design iterations, resulting in optimal overall system performance. The method employs space–time spectral elements for simultaneous space and time discretization of a Timoshenko beam model. Dimensionless equations of motion are derived using Hamilton's principle of variable action, and an integral formulation in the framework of space–time spectral elements is introduced. An optimal control problem formulated for the continuum model is transformed by the space–time spectral element formulation into an optimization problem in a finite-dimensional parameter space. Dynamic programming is then used to obtain both open and closed loop control laws. A simulation study shows good performance of the control law applied to the nominal model. It is also demonstrated that proper discretization yields performance robustness of the system with respect to modal truncation.

19.
The fractional step method (FSM) is an efficient solution technique for the particle finite element method, a Lagrangian‐based approach to simulate fluid–structure interaction (FSI). Despite various refinements, the applicability of the FSM has been limited to low viscosity flow and FSI simulations with a small number of equations along the fluid–structure interface. To overcome these limitations, while incorporating nonlinear response in the structural domain, an FSM that unifies structural and fluid response in the discrete governing equations is developed using the quasi‐incompressible formulation. With this approach, fluid and structural particles do not need to be treated separately, and both domains are unified in the same system of equations. Thus, the equations along the fluid–structure interface do not need to be segregated from the fluid and structural domains. Numerical examples compare the unified FSM with the non‐unified FSM and show that the computational cost of the proposed method overcomes the slow convergence of the non‐unified FSM for high values of viscosity. As opposed to the non‐unified FSM, the number of iterations required for convergence with the unified FSM becomes independent of viscosity and time step, and the simulation run time does not depend on the size of the FSI interface. Copyright © 2016 John Wiley & Sons, Ltd.

20.
Mass transport processes are known to play an important role in many fields of biomechanics such as respiratory, cardiovascular, and biofilm mechanics. In this paper, we present a novel computational model considering the effect of local solid deformation and fluid flow on mass transport. As the transport processes are assumed to influence neither structure deformation nor fluid flow, a sequential one‐way coupling of a fluid–structure interaction (FSI) and a multi‐field scalar transport model is realized. In each time step, first the non‐linear monolithic FSI problem is solved to determine current local deformations and velocities. Using this information, the mass transport equations can then be formulated on the deformed fluid and solid domains. At the interface, concentrations are related depending on the interfacial permeability. First numerical examples demonstrate that the proposed approach is suitable for simulating convective and diffusive scalar transport on coupled, deformable fluid and solid domains. Copyright © 2014 John Wiley & Sons, Ltd.
