Similar Documents (20 results)
1.
We present a methodical procedure for topology optimization under uncertainty with multiresolution finite element (FE) models. We use our framework in a bifidelity setting where a coarse and a fine mesh corresponding to low- and high-resolution models are available. The inexpensive low-resolution model is used to explore the parameter space and approximate the parameterized high-resolution model and its sensitivity, where parameters are considered in both structural load and stiffness. We provide error bounds for bifidelity FE approximations and their sensitivities and conduct numerical studies to verify these theoretical estimates. We demonstrate our approach on benchmark compliance minimization problems, where we show significant reduction in computational cost for expensive problems such as topology optimization under manufacturing variability, reliability-based topology optimization, and three-dimensional topology optimization while generating almost identical designs to those obtained with a single-resolution mesh. We also compute the parametric von Mises stress for the generated designs via our bifidelity FE approximation and compare them with standard Monte Carlo simulations. The implementation of our algorithm, which extends the well-known 88-line topology optimization code in MATLAB, is provided.
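The bifidelity idea above can be sketched in miniature with made-up scalar models (the paper works with coarse/fine FE meshes; `compliance_low`/`compliance_high` below are illustrative stand-ins, not its code): many cheap low-resolution evaluations are combined with a few expensive high-resolution ones in a control-variate fashion.

```python
import numpy as np

def compliance_low(theta):          # cheap low-resolution surrogate (assumed)
    return np.sin(theta) + 0.1 * theta**2

def compliance_high(theta):         # expensive high-resolution model (assumed)
    return np.sin(theta) + 0.1 * theta**2 + 0.01 * np.cos(3.0 * theta)

rng = np.random.default_rng(0)
theta_many = rng.normal(0.0, 1.0, 10_000)   # uncertain load parameter samples
theta_few = theta_many[:50]                 # budget for fine-mesh solves

# E[f_high] ~= mean of f_low over many samples + mean correction over few
correction = compliance_high(theta_few) - compliance_low(theta_few)
bifi_mean = compliance_low(theta_many).mean() + correction.mean()
```

The low-fidelity sweep captures most of the statistics; the few high-fidelity solves only need to resolve the (hopefully small and smooth) discrepancy.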

2.
In this paper, the proper generalized decomposition (PGD) is used for model reduction in the solution of an inverse heat conduction problem within the Bayesian framework. Two PGD reduced-order models are proposed, and the approximation error model (AEM) is applied to account for the errors between the complete and the reduced models. For the first PGD model, the direct problem solution is computed considering a separate representation of each coordinate of the problem during the process of solving the inverse problem. On the other hand, the second PGD model is based on a generalized solution integrating the unknown parameter as one of the coordinates of the decomposition. For the second PGD model, the reduced solution of the direct problem is computed before solving the inverse problem, within the parameter space provided by the prior information about the parameters, which is required to be proper. These two reduced models are evaluated in terms of accuracy and reduction of the computational time on a transient three-dimensional two-region inverse heat transfer problem. In fact, both reduced models result in a substantial reduction of the computational time required for the solution of the inverse problem and provide accurate estimates of the unknown parameter owing to the application of the approximation error model approach.
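The AEM step can be illustrated with stand-in models (the forward models below are hypothetical; the paper's are PGD solutions of a heat conduction problem): the full/reduced discrepancy is sampled offline under the prior, and its statistics correct the reduced model in the likelihood.

```python
import numpy as np

def full_model(p):     # hypothetical "complete" forward model
    return np.array([p, p**2, np.sin(p)])

def reduced_model(p):  # hypothetical cheap reduced model with a bias
    return np.array([p, p**2 - 0.05, np.sin(p) + 0.02])

# Offline: sample the (proper) prior and collect discrepancy statistics.
rng = np.random.default_rng(4)
prior_samples = rng.uniform(0.5, 1.5, 200)
eps = np.array([full_model(p) - reduced_model(p) for p in prior_samples])
eps_mean, eps_cov = eps.mean(axis=0), np.cov(eps.T)

# Online: the likelihood uses reduced_model(p) + eps_mean, with the noise
# covariance augmented by eps_cov; only the mean correction is shown here.
corrected = reduced_model(1.0) + eps_mean
```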

3.
We study practical strategies for estimating numerical errors in scalar outputs calculated from unsteady simulations of convection-dominated flows, including those governed by the compressible Navier–Stokes equations. The discretization is a discontinuous Galerkin finite element method in space and time on static spatial meshes. Time-integral quantities are considered for scalar outputs and these are shown to superconverge with temporal refinement. Output error estimates are calculated using the adjoint-weighted residual method, where the unsteady adjoint solution is obtained using a discrete approach with an iterative solver. We investigate the accuracy versus computational cost trade-off for various approximations of the fine-space adjoint and find that exact adjoint solutions are accurate but expensive. To reduce the cost, we propose a local temporal reconstruction that takes advantage of superconvergence properties at Radau points, and a spatial reconstruction based on nearest-neighbor elements. This inexact adjoint yields output error estimates at a computational cost of less than 2.5 times that of the forward problem for the cases tested. The calculated error estimates account for numerical error arising from both the spatial and temporal discretizations, and we present a method for identifying the percentage contributions of each discretization to the output error. Copyright © 2011 John Wiley & Sons, Ltd.
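The adjoint-weighted residual identity behind such estimates can be shown on a toy linear system (generic data, not the paper's DG discretization): for A u = b with output J(u) = gᵀu, the output error of an approximate solution equals minus the adjoint-weighted residual, exactly in the linear case.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
A = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned "fine-space" system
b = rng.normal(size=n)
g = rng.normal(size=n)                        # output functional J(u) = g @ u

u_exact = np.linalg.solve(A, b)
u_coarse = u_exact + 1e-3 * rng.normal(size=n)  # mimic discretization error

psi = np.linalg.solve(A.T, g)                 # adjoint solve: A^T psi = g
residual = A @ u_coarse - b                   # fine-space residual of coarse solution
error_estimate = -psi @ residual              # adjoint-weighted residual estimate
true_error = g @ u_exact - g @ u_coarse
```

For nonlinear problems the identity holds only to leading order, which is why the accuracy of the (approximate) adjoint matters.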

4.
This paper proposes a generalized pointwise bias error bounds estimation method for polynomial-based response surface approximations when bias errors are substantial. A relaxation parameter is introduced to account for inconsistencies between the data and the assumed true model. The method is demonstrated with a polynomial example in which the model is a quadratic polynomial while the true function is assumed to be a cubic polynomial. The effect of the relaxation parameter is studied. It is demonstrated that when bias errors dominate, the bias error bounds characterize the actual error field better than the standard error. The bias error bound estimates also help to identify regions in the design space where the accuracy of the response surface approximation is inadequate. It is demonstrated that this information can be utilized for adaptive sampling in order to improve accuracy in such regions. Copyright © 2005 John Wiley & Sons, Ltd.
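The quadratic-model/cubic-truth setting can be reproduced in a few lines (the bound derivation itself is not; the cubic below is a hypothetical truth): fitting a quadratic response surface to noise-free cubic data leaves a pointwise bias error field that is invisible to noise-based standard errors.

```python
import numpy as np

def true_fn(t):
    return 1.0 + t + 0.5 * t**2 + 0.8 * t**3   # hypothetical cubic true function

x = np.linspace(-1.0, 1.0, 9)                  # design points
y = true_fn(x)                                 # noise-free data

X = np.vander(x, 3, increasing=True)           # quadratic model basis: 1, x, x^2
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

xq = np.linspace(-1.0, 1.0, 201)
fit = np.vander(xq, 3, increasing=True) @ beta
bias = true_fn(xq) - fit                       # actual pointwise bias error field
# The bias vanishes at the centre and grows toward the domain boundary,
# flagging the regions where adaptive sampling would help most.
```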

5.
Based on duality theory, this paper provides a relatively comprehensive a posteriori error analysis of the regularization method for elliptic variational inequalities. We consider the frictional contact problem and the obstacle problem separately: by choosing bounded operators and functionals of a different form, we derive the dual formulations and establish $H^1$-norm a posteriori error estimates for the regularization method. Finally, using duality theory from convex analysis, we establish a general framework for residual-type a posteriori error estimates for the obstacle problem, and by selecting a particular dual variable and functional form we obtain a residual-type error estimate for this problem together with its efficiency. A posteriori error estimates of numerical solutions are the foundation for developing efficient adaptive algorithms, while a posteriori estimates of the model error are very useful when analyzing the effect of data uncertainty in a problem.

6.
We introduce a port (interface) approximation and a posteriori error bound framework for a general component-based static condensation method in the context of parameter-dependent linear elliptic partial differential equations. The key ingredients are as follows: (i) efficient empirical port approximation spaces—the dimensions of these spaces may be chosen small to reduce the computational cost associated with formation and solution of the static condensation system; and (ii) a computationally tractable a posteriori error bound realized through a non-conforming approximation and associated conditioner—the error in the global system approximation, or in a scalar output quantity, may be bounded relatively sharply with respect to the underlying finite element discretization. Our approximation and a posteriori error bound framework is of particular computational relevance for the static condensation reduced basis element (SCRBE) method. We provide several numerical examples within the SCRBE context, which serve to demonstrate the convergence rate of our port approximation procedure as well as the efficacy of our port reduction error bounds. Copyright © 2013 John Wiley & Sons, Ltd.
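Static condensation itself is plain linear algebra and can be shown in miniature (generic SPD data, not the SCRBE machinery): interior unknowns are eliminated, leaving a Schur-complement system in the port (interface) unknowns only.

```python
import numpy as np

rng = np.random.default_rng(5)
ni, nb = 6, 2                                  # interior and port (interface) DOFs
A = rng.normal(size=(ni + nb, ni + nb))
A = A @ A.T + (ni + nb) * np.eye(ni + nb)      # SPD system matrix
f = rng.normal(size=ni + nb)

Aii, Aib = A[:ni, :ni], A[:ni, ni:]
Abi, Abb = A[ni:, :ni], A[ni:, ni:]
fi, fb = f[:ni], f[ni:]

S = Abb - Abi @ np.linalg.solve(Aii, Aib)      # condensed (Schur complement) matrix
g = fb - Abi @ np.linalg.solve(Aii, fi)        # condensed right-hand side

ub = np.linalg.solve(S, g)                     # solve for port unknowns only
ui = np.linalg.solve(Aii, fi - Aib @ ub)       # recover interior unknowns
u_full = np.linalg.solve(A, f)                 # direct solve, for comparison
```

Reducing the dimension of the port spaces shrinks S, which is exactly where the empirical port approximation of the paper pays off.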

7.
This article introduces a novel error estimator for the proper generalized decomposition (PGD) approximation of parametrized equations. The estimator is intrinsically random: it builds on concentration inequalities of Gaussian maps and an adjoint problem with random right-hand side, which we approximate using the PGD. The effectivity of this randomized error estimator can be arbitrarily close to unity with high probability, allowing the estimation of the error with respect to any user-defined norm as well as the error in some quantity of interest. The performance of the error estimator is demonstrated and compared with some existing error estimators for the PGD for a parametrized time-harmonic elastodynamics problem and the parametrized equations of linear elasticity with a high-dimensional parameter space.
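The probabilistic core of such estimators can be shown in miniature (plain Gaussian projections of a vector; the paper's adjoint and PGD machinery is not reproduced): since E[(zᵀe)²] = ‖e‖² for z ~ N(0, I), a few random probes of the error vector estimate its norm with high probability.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 1000, 64                        # problem size, number of random probes
e = 0.01 * rng.normal(size=n)          # stand-in for the unknown error vector

Z = rng.normal(size=(k, n))            # Gaussian test vectors
est = np.sqrt(np.mean((Z @ e) ** 2))   # randomized estimate of ||e||_2
exact = np.linalg.norm(e)
```

In the actual method the projections zᵀe are computed via (approximate) adjoint solves, so e itself is never formed.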

8.
We propose a certified reduced basis approach for the strong- and weak-constraint four-dimensional variational (4D-Var) data assimilation problem for a parametrized PDE model. While the standard strong-constraint 4D-Var approach uses the given observational data to estimate only the unknown initial condition of the model, the weak-constraint 4D-Var formulation additionally provides an estimate for the model error and thus can deal with imperfect models. Since the model error is a distributed function in both space and time, the 4D-Var formulation leads to a large-scale optimization problem for every given parameter instance of the PDE model. To solve the problem efficiently, various reduced order approaches have therefore been proposed in the recent past. Here, we employ the reduced basis method to generate reduced order approximations for the state, adjoint, initial condition, and model error. Our main contribution is the development of efficiently computable a posteriori upper bounds for the error of the reduced basis approximation with respect to the underlying high-dimensional 4D-Var problem. Numerical experiments are conducted to test the validity of our approach.
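The strong-constraint formulation can be reduced to a toy scalar model for intuition (the dynamics u_{k+1} = a·u_k below are made up; the paper's setting is a parametrized PDE): only the initial condition u0 is estimated, by minimizing the observation misfit under a perfect model.

```python
import numpy as np

a = 0.9                                    # known model coefficient (assumed)
K = 10                                     # number of observed time steps
u0_true = 2.0
phi = a ** np.arange(1, K + 1)             # sensitivity of each observation to u0
obs = u0_true * phi                        # noise-free synthetic observations

def misfit(u0):
    # quadratic strong-constraint 4D-Var cost (no background term)
    return 0.5 * np.sum((u0 * phi - obs) ** 2)

# The model is linear in u0, so the minimizer follows from one normal equation.
u0_hat = (phi @ obs) / (phi @ phi)
```

The weak-constraint version would add one unknown forcing term per step, which is what makes the full optimization problem large-scale.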

9.
Mathematical methods provide a useful framework for the analysis and design of complex systems. In newer contexts such as biology, however, there is a need both to adapt existing methods and to develop new ones. Using a combination of analytical and computational approaches, the authors adapt and develop the method of describing functions to represent the input–output responses of biomolecular signalling systems. They approximate representative systems exhibiting various saturating and hysteretic dynamics in a way that improves on standard linearisation. Furthermore, they develop analytical upper bounds for the computational error estimates. Finally, they use these error estimates to augment the limit cycle analysis with a simple and quick way to bound the predicted oscillation amplitude. These results provide system approximations that add more insight into the local behaviour of these systems than standard linearisation, allow responses to other periodic inputs to be computed, and support the analysis of limit cycles.
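A describing function can be computed numerically for any static nonlinearity; a sketch for an ideal saturation (a textbook example, not the authors' biomolecular models) compares the numerical first-harmonic gain with the classical closed form.

```python
import numpy as np

def describing_function_saturation(A, s=1.0, n=20_000):
    # first-harmonic gain of y = clip(u, -s, s) for input u = A sin(t)
    t = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    y = np.clip(A * np.sin(t), -s, s)
    b1 = (2.0 / n) * np.sum(y * np.sin(t))   # first Fourier sine coefficient
    return b1 / A

def saturation_df_exact(A, s=1.0):
    # classical closed form for the saturation describing function
    if A <= s:
        return 1.0
    a = s / A
    return (2.0 / np.pi) * (np.arcsin(a) + a * np.sqrt(1.0 - a * a))

N_num = describing_function_saturation(3.0)
N_ref = saturation_df_exact(3.0)
```

Setting N(A)·G(jω) = −1 for a loop transfer function G then predicts limit-cycle amplitude and frequency, which is the analysis the error bounds of the paper make rigorous.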

10.
The conventional determination of model parameter errors in least-squares regression of experimental cyclic voltammetric data assumes validity of local approximations (e.g., linearization) in the parameter space and normal distributions of the data and parameter errors. Such assumptions may not always be satisfied in practice. Bootstrap resampling techniques present a more universally applicable approach to error estimation, which until now has not been used in cyclic voltammetric studies, owing to the high costs of the required voltammogram simulations. We demonstrate that the burden of computing voltammograms can be significantly reduced by the use of high-dimensional model representation (HDMR) solution mapping techniques, thereby making it feasible to apply the bootstrap data analysis in cyclic voltammetry. We perform computational experiments with bootstrap resampling, enhanced by HDMR maps, for a typical cyclic voltammetric model (i.e., the Eqrev Cirr Eqrev reaction mechanism at a planar macroelectrode under semi-infinite, pure diffusion transport conditions). The experiments reveal that the bootstrap distributions of the estimated parameters provide a satisfactory quantification of the parameter errors and can also be used for detecting statistical correlations of the parameters.
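Residual-bootstrap error estimation can be sketched on a cheap surrogate (an exponential-decay model stands in for the HDMR-mapped voltammogram simulator; all numbers are illustrative): residuals are resampled around the fitted model to build a distribution for the estimated parameter.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.1, 3.0, 50)
k_true = 0.7                                     # assumed "true" rate constant
data = np.exp(-k_true * t) + 0.005 * rng.normal(size=t.size)

def fit_k(y):
    # log-linear least-squares fit of y ~ exp(-k t)
    return -np.polyfit(t, np.log(y), 1)[0]

k_hat = fit_k(data)
resid = data - np.exp(-k_hat * t)                # residuals about the fit

# Bootstrap: refit on model + resampled residuals, many times.
boot = [fit_k(np.exp(-k_hat * t) + rng.choice(resid, size=resid.size))
        for _ in range(500)]
k_se = np.std(boot)                              # bootstrap standard error
```

The bootstrap distribution (`boot`) also exposes skewness and parameter correlations that a linearized covariance estimate would miss.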

11.
S. J. Cooper, Applied Optics, 1999, 38(15): 3258–3265
Two laser parameters of considerable practical interest are the small-signal gain and the saturation irradiance of the gain medium. These are commonly measured by observing the dependence of the output power on some adjustable cavity loss parameter and comparing the measured data with the predictions of a suitable laser model. Because of the inevitable approximations in this model, the resulting estimates of gain and saturation irradiance are always affected to some extent by systematic errors. The small-gain, plane-wave, mean-field, and pure homogeneous or inhomogeneous line-broadening approximations are considered, with estimates of the magnitudes of these errors presented for the case in which the gain, the saturation irradiance, and the cavity loss are fitted to the data. It is shown that these errors can be quite substantial, so accurate absolute measurements of the three laser parameters can be difficult to obtain with the variable-loss method. As an illustration of these errors, a comparison between the measured output power from an HCN laser and the power predicted using experimentally measured gain and saturation irradiance values is shown. The poor quality of these predictions illustrates the serious effects that the systematic errors can have. An alternative analysis, in which the cavity loss is supplied and only the gain and saturation irradiance are fitted, is also shown; it gives good predictions despite inaccuracies in the model.

12.
A fatigue crack growth (FCG) model for specimens with well-characterized residual stress fields has been studied using experimental analysis and finite element (FE) modeling. The residual stress field was obtained using four point bending tests performed on 7050-T7451 aluminum alloy rectangular specimens and consecutively modeled using the FE method. The experimentally obtained residual stress fields were characterized using a digital image correlation technique and a slitting method, and a good agreement between the experimental residual stress fields and the stress field in the FE model was obtained. The FE FCG models were developed using a linear elastic model, a linear elastic model with crack closure and an elastic–plastic model with crack closure. The crack growth in the FE FCG model was predicted using Paris–Erdogan data obtained from the residual stress free samples, using the Harter T-method for interpolating between different baseline crack growth curves, and using the effective stress intensity factor range and stress ratio. The elastic–plastic model with crack closure effects provides results close to the experimental data for the FCG with positive applied stress ratios reproducing the FCG deceleration in the compressive zone of the residual stress field. However, in the case of a negative stress ratio all models with crack closure effects strongly underestimate the FCG rates, in which case a linear elastic model provides the best fit with the experimental data. The results demonstrate that the negative part of the stress cycle with a fully closed crack contributes to the driving force for the FCG and thus should be accounted for in the fatigue life estimates.
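Paris–Erdogan crack growth in its simplest form can be integrated in a few lines (illustrative constants and geometry factor Y = 1 for a centre crack; not the paper's 7050-T7451 data or its closure-corrected models): the crack length is marched and the consumed cycles are accumulated.

```python
import numpy as np

C, m = 1e-11, 3.0                 # assumed Paris constants
dS = 100.0                        # assumed applied stress range
a0, a_final = 1e-3, 1e-2          # initial and final crack lengths
da = 1e-6                         # crack-length increment for the marching scheme

a, N = a0, 0.0
while a < a_final:
    dK = dS * np.sqrt(np.pi * a)  # stress intensity factor range (Y = 1)
    N += da / (C * dK**m)         # cycles spent growing the crack by da
    a += da

# For m = 3 and Y = 1 the cycle count has a closed form, for comparison.
N_exact = 2.0 * (a0**-0.5 - a_final**-0.5) / (C * dS**3 * np.pi**1.5)
```

Residual stresses enter such models by shifting the effective stress ratio and stress intensity range, which is where the closure models of the paper differ.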

13.
In this paper, we consider the problem of constructing reduced-order models of a class of time-dependent randomly parametrized linear partial differential equations. Our objective is to efficiently construct a reduced basis approximation of the solution as a function of the spatial coordinates, parameter space, and time. The proposed approach involves decomposing the solution in terms of undetermined spatial and parametrized temporal basis functions. The unknown basis functions in the decomposition are estimated using an alternating iterative Galerkin projection scheme. Numerical studies on the time-dependent randomly parametrized diffusion equation are presented to demonstrate that the proposed approach provides good accuracy at significantly lower computational cost compared with polynomial chaos-based Galerkin projection schemes. Comparison studies are also made against Nouy's generalized spectral decomposition scheme to demonstrate that the proposed approach provides a number of computational advantages. Copyright © 2012 John Wiley & Sons, Ltd.
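The alternating-iteration idea can be sketched on a plain matrix (toy space-time data, not the paper's stochastic Galerkin setting): each separable term w(x)·v(t) is found by alternating least-squares updates, and terms are added greedily.

```python
import numpy as np

rng = np.random.default_rng(6)
x = np.linspace(0.0, 1.0, 50)
t = np.linspace(0.0, 1.0, 40)
# Toy space-time "solution" matrix with two separable contributions:
U = np.outer(np.sin(np.pi * x), np.exp(-t)) + 0.3 * np.outer(x**2, t)

approx = np.zeros_like(U)
for _ in range(3):                      # three greedy enrichment terms
    R = U - approx                      # current residual
    v = rng.normal(size=t.size)         # initialize the temporal mode
    for _ in range(50):                 # alternating (fixed-point) updates
        w = R @ v / (v @ v)             # best spatial mode given v
        v = R.T @ w / (w @ w)           # best temporal mode given w
    approx += np.outer(w, v)

rel_err = np.linalg.norm(U - approx) / np.linalg.norm(U)
```

In the actual method the modes solve Galerkin projections of the PDE rather than least-squares fits of a known matrix, but the alternating structure is the same.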

14.
In this work, an explicit-implicit time-marching procedure with model/solution-adaptive time integration parameters is proposed for the analysis of hyperbolic models. The two time integrators of the methodology are locally evaluated, enabling their different spatial and temporal distributions. The first parameter defines the explicit/implicit subdomains of the model, and it is defined in a way that stability is always ensured, as well as period elongation errors are reduced; the second parameter controls the dissipative properties of the methodology, allowing spurious high-frequency modes to be properly eliminated, rendering reduced amplitude decay errors. In addition, the proposed explicit-implicit approach allows contracted systems of equations to be obtained, reducing the computational effort of the analysis. The main features of the novel methodology can be summarized as follows: (i) it is simple; (ii) it is locally defined; (iii) it has guaranteed stability; (iv) it is an efficient noniterative single-step procedure; (v) it provides enhanced accuracy; (vi) it enables advanced controllable algorithmic dissipation in the higher modes; (vii) it considers a link between the temporal and the spatial discretization; (viii) it stands as a single-solve framework based on reduced systems of equations; (ix) it is truly self-starting; and (x) it is entirely automatic.

15.
We show that a posteriori estimation of the errors in the numerical simulation of non-linear parabolic equations can be reduced to a posteriori estimation of the errors in the approximation of an elliptic problem whose right-hand side depends on known data of the problem and the computed numerical solution. A procedure to obtain local error estimates for the p version of the finite element method, by solving small discrete elliptic problems whose right-hand side is the residual of the p-FEM solution, is introduced. The boundary conditions are inherited from those of the space of hierarchical bases to which the error estimator belongs. We prove that the error in the numerical solution can be reduced by adding the estimators, which behave as a locally defined correction to the computed approximation. When the error being estimated is that of an elliptic problem, constant-free local lower bounds are obtained. The local error estimation procedure is applied to non-linear parabolic differential equations in several space dimensions. Some numerical experiments for both the elliptic and the non-linear parabolic cases are provided. Copyright © 2005 John Wiley & Sons, Ltd.

16.
Data reliability at the output of the error-correction code decoder in a compact-disc system is influenced by the decoding strategy employed by the decoder, as well as by the statistical distribution of errors that contaminate the recorded data. Recovered-data reliability estimates have been computed by use of error statistics obtained from the measurement of errors that contaminate the actual data stored on clean write-once and read-only-memory compact discs. These estimates consist of probabilities that specify the occurrence of residual errors in the data that appear at the output of a compact-disc player's cross-interleaved Reed–Solomon code (CIRC) decoder. Data reliability estimates that apply to five specific CIRC decoding strategies are reported.

17.
The interaction of acoustic waves with submerged structures remains one of the most difficult and challenging problems in underwater acoustics. Many techniques such as coupled Boundary Element (BE)/Finite Element (FE) or coupled Infinite Element (IE)/Finite Element approximations have evolved. In the present work, we focus on the steady‐state formulation only, and study a general coupled hp‐adaptive BE/FE method. A particular emphasis is placed on an a posteriori error estimation for the viscoelastic scattering problems. The highlights of the proposed methodology are as follows: (1) The exterior Helmholtz equation and the Sommerfeld radiation condition are replaced with an equivalent Burton–Miller (BM) boundary integral equation on the surface of the scatterer. (2) The BM equation is coupled to the steady‐state form of viscoelasticity equations modelling the behaviour of the structure. (3) The viscoelasticity equations are approximated using hp‐adaptive FE isoparametric discretizations with order of approximation p⩾5 in order to avoid the ‘locking’ phenomenon. (4) A compatible hp superparametric discretization is used to approximate the BM integral equation. (5) Both the FE and BE approximations are based on a weak form of the equations, and the Galerkin method, allowing for a full convergence analysis. (6) An a posteriori error estimate for the coupled problem of a residual type is derived, allowing for estimating the error in pressure on the wet surface of the scatterer. (7) An adaptive scheme, an extension of the Texas Three Step Adaptive Strategy is used to manipulate the mesh size h and the order of approximation p so as to approximately minimize the number of degrees of freedom required to produce a solution with a specified accuracy. The use of this hp‐scheme may exhibit exponential convergence rates. Several numerical experiments illustrate the methodology. 
These include detailed convergence studies for the problem of scattering of a plane acoustic wave on a viscoelastic sphere, and adaptive solutions of viscoelastic scattering problems for a series of MOCK0 models. Copyright © 1999 John Wiley & Sons, Ltd.

18.
Approximations are given relating the shape parameter of the two-parameter Weibull distribution to the coefficient of variation of a complete (uncensored) data set. The accuracy of the approximations over 0.4 < β < 5 is such as to give adequate estimates in most practical applications. For 0.25 < β < 0.4 and 5 < β < 10, the estimates from the approximation may provide initial values for iterative procedures using moments, maximum likelihood, minimum χ², etc. The simplicity of the functions means that jack-knife methods may be used to obtain standard errors of estimates, including the scale parameter, where a suitable gamma-function approximation is incorporated.
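The exact relation that such approximations target is worth writing down: for a two-parameter Weibull with shape β, the coefficient of variation depends on β alone, CV(β) = √(Γ(1 + 2/β) − Γ(1 + 1/β)²) / Γ(1 + 1/β), and it is monotone in β, so it can be inverted numerically (a bisection sketch, not the paper's closed-form approximations).

```python
from math import gamma, sqrt

def weibull_cv(beta):
    # coefficient of variation of a Weibull(shape=beta) distribution
    g1 = gamma(1.0 + 1.0 / beta)
    g2 = gamma(1.0 + 2.0 / beta)
    return sqrt(g2 - g1 * g1) / g1

def shape_from_cv(cv, lo=0.25, hi=10.0, tol=1e-10):
    # invert CV -> beta by bisection; CV decreases as beta increases
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if weibull_cv(mid) > cv:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Feeding a sample coefficient of variation into `shape_from_cv` gives a moment-style shape estimate, for which the simple published approximations provide fast starting values.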

19.
Stability and convergence analysis of finite element approximations of Biot's equations governing quasistatic consolidation of saturated porous media are discussed. A family of decay functions, parametrized by the number of time steps, is derived for the fully discrete backward Euler–Galerkin formulation, showing that the pore-pressure oscillations, arising from an unstable approximation of the incompressibility constraint on the initial condition, decay in time. Error estimates holding over the unbounded time domain for both semidiscrete and fully discrete formulations are presented, and a post-processing technique is employed to improve the pore-pressure accuracy.

20.
This paper presents a method for optimizing computational meshes for the prediction of scalar outputs when using hybridized and embedded discontinuous Galerkin (HDG/EDG) discretizations. Hybridization offers memory and computational time advantages compared to the standard discontinuous Galerkin (DG) method through a decoupling of elemental degrees of freedom and the introduction of face degrees of freedom that become the only globally coupled unknowns. However, the additional equations of weak flux continuity on each interior face introduce new residuals that augment output error estimates and complicate existing element-centric mesh optimization methods. This work presents techniques for converting face-based error estimates to elements and sampling their reduction with refinement in order to determine element-specific anisotropic convergence rate tensors. The error sampling uses fine-space adjoint projections and does not require additional solves on subelements. Together with a degree-of-freedom cost model, the error models drive metric-based unstructured mesh optimization. Adaptive results for inviscid and viscous two-dimensional flow problems demonstrate (i) improvement of EDG mesh optimality when using error models that incorporate face errors, (ii) the relative insensitivity of HDG mesh optimality to the incorporation of face errors, and (iii) degree of freedom and computational-time benefits of hybridized methods, particularly EDG, relative to DG.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号