Similar documents
20 similar documents found (search time: 31 ms)
1.
Traditional computational analyses of perforated materials, much like traditional computational analyses of heterogeneous materials, involve the use of homogenization, using either the assumption of periodicity or the assumption of statistical homogeneity. Recently, Oden and Vemaganti (Journal of Computational Physics 2000; 164: 22–47) have proposed a goal-oriented approach to the modelling of heterogeneous multi-phase linearly elastic materials that does not rely on such idealized situations. In this approach, the mathematical model is adaptively selected based on estimates of the modelling error: the error resulting from the smearing out of rapidly varying material moduli. The approach is said to be goal-oriented because the adaptive process is driven by local estimates of modelling error in quantities of interest to the analyst, instead of global estimates. We extend this goal-oriented adaptive approach to the case of perforated domains composed of linearly elastic materials. Toward this end, new global and local bounds on modelling errors resulting from homogenization of perforated materials are developed. A representative numerical experiment is presented. Copyright © 2004 John Wiley & Sons, Ltd.

2.
In this paper, the proper generalized decomposition (PGD) is used for model reduction in the solution of an inverse heat conduction problem within the Bayesian framework. Two PGD reduced-order models are proposed, and the approximation error model (AEM) is applied to account for the errors between the complete and the reduced models. For the first PGD model, the direct problem solution is computed with a separate representation of each coordinate of the problem during the process of solving the inverse problem. The second PGD model, on the other hand, is based on a generalized solution that integrates the unknown parameter as one of the coordinates of the decomposition. For the second PGD model, the reduced solution of the direct problem is computed before the inverse problem is solved, within the parameter space provided by the prior information about the parameters, which is required to be proper. These two reduced models are evaluated in terms of accuracy and reduction of the computational time on a transient three-dimensional two-region inverse heat transfer problem. Both reduced models result in a substantial reduction of the computational time required for the solution of the inverse problem and, owing to the approximation error model, provide accurate estimates of the unknown parameter.
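The separated-representation idea at the heart of PGD can be sketched with a toy alternating fixed-point iteration (a minimal illustration of ours, not the reduced models of the paper): a space-time field U(x, t) is sought as a product X(x)T(t), and each factor is updated in turn by least squares.

```python
import numpy as np

def rank1_separated(U, iters=50):
    """Alternating fixed point for a rank-1 separated representation
    U(x, t) ~ X(x) * T(t), the basic building block of PGD-type methods."""
    n, m = U.shape
    T = np.ones(m)
    for _ in range(iters):
        X = U @ T / (T @ T)      # best X for fixed T (least squares)
        T = U.T @ X / (X @ X)    # best T for fixed X
    return X, T

# A separable field: the single separated mode recovers it almost exactly.
x = np.linspace(0.0, 1.0, 40)
t = np.linspace(0.0, 2.0, 30)
U = np.outer(np.sin(np.pi * x), np.exp(-t))
X, T = rank1_separated(U)
err = np.linalg.norm(U - np.outer(X, T)) / np.linalg.norm(U)
```

For non-separable fields one would enrich the representation mode by mode; this sketch stops at a single mode to keep the fixed-point structure visible.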

3.
This article presents a new approach to assess the error in specific quantities of interest in the framework of linear elastodynamics. In particular, a new type of quantity of interest (referred to as a timeline-dependent quantity) is proposed. These quantities are scalar time-dependent outputs of the transient solution, which are better suited to time-dependent problems than the standard scalar quantities frozen in time. The proposed methodology furnishes error estimates for both the standard scalar and the new timeline-dependent quantities of interest. The key ingredient is the modal-based approximation of the associated adjoint problems, which allows the adjoint solution to be computed and stored efficiently. The approximated adjoint solution is readily post-processed to produce an enhanced solution, requiring only one spatial post-process for each vibration mode and using the time-harmonic hypothesis to recover the time dependence. The proposed goal-oriented error estimate thus consists of injecting this enhanced adjoint solution into the residual of the direct problem. The resulting estimate is very well suited for transient dynamic simulations because the enhanced adjoint solution is computed before starting the forward time integration of the direct problem, so the cost of the error estimate at each time step is very low. Copyright © 2013 John Wiley & Sons, Ltd.
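The modal-superposition device the article relies on can be sketched in a few lines (our own illustration, assuming for simplicity a unit mass matrix and a static right-hand side): the solution of K u = f is reconstructed mode by mode and agrees with a direct solve.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): superpose
# eigenmodes of a stiffness matrix to reconstruct a solution, the same
# device the article uses to represent the adjoint solution per mode.
rng = np.random.default_rng(0)
n = 20
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)          # symmetric positive definite "stiffness"
f = rng.standard_normal(n)

lam, Phi = np.linalg.eigh(K)         # vibration modes (unit mass matrix assumed)
u_modal = Phi @ ((Phi.T @ f) / lam)  # sum_i (phi_i . f / lambda_i) phi_i
u_direct = np.linalg.solve(K, f)
err = np.linalg.norm(u_modal - u_direct)
```

In practice the sum would be truncated to the first few modes, which is what makes storing the adjoint solution cheap.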

4.
We describe how wavelets constructed out of finite element interpolation functions provide a simple and convenient mechanism for both goal-oriented error estimation and adaptivity in finite element analysis. This is done by posing an adaptive refinement problem as one of compactly representing a signal (the solution to the governing partial differential equation) in a multiresolution basis. To compress the solution in an efficient manner, we first approximately compute the details to be added to the solution on a coarse mesh in order to obtain the solution on a finer mesh (the estimation step) and then compute exactly the coefficients corresponding to only those basis functions contributing significantly to a functional of interest (the adaptation step). In this sense the proposed approach is unified: unlike many contemporary error estimation and adaptive refinement methods, the basis functions used for error estimation are the same as those used for adaptive refinement. We illustrate the application of the proposed technique for goal-oriented error estimation and adaptivity for second- and fourth-order linear, elliptic PDEs and demonstrate its advantages over existing methods. Copyright © 2005 John Wiley & Sons, Ltd.
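The "details" in the estimation step can be pictured with a hypothetical one-dimensional example (ours, not the paper's): the detail coefficients of a nodal field are what a fine mesh adds beyond interpolation of the coarse-mesh values, so small details mean the coarse representation already compresses the field well.

```python
import numpy as np

# Hierarchical-basis details of a smooth nodal field: fine-mesh values
# minus the linear interpolation of the coarse-mesh values.
x_coarse = np.linspace(0.0, 1.0, 9)
x_fine = np.linspace(0.0, 1.0, 17)

u = np.sin(np.pi * x_fine)                                  # "solution" on the fine mesh
u_pred = np.interp(x_fine, x_coarse, np.sin(np.pi * x_coarse))
details = u - u_pred                                        # wavelet-like detail coefficients

max_detail = np.abs(details).max()
```

Details vanish at the coarse nodes and are largest where the coarse mesh misses curvature, which is exactly the information an adaptive scheme needs.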

5.
Microscopic considerations are drawing increasing attention in modern simulation techniques. Micromorphic continuum theories, which consider micro degrees of freedom, are usually adopted for the simulation of localization effects such as shear bands. The increased number of degrees of freedom clearly motivates the application of adaptive methods. In this work, the adaptive FEM is tailored to micromorphic elasticity. The proposed adaptive procedure is driven by a goal-oriented a posteriori error estimator based on duality techniques. For the efficient computation of the dual solution, a patch-based recovery technique is proposed and compared to a reference approach. To theoretically ensure the optimal convergence order of the proposed adaptive procedure, adjoint consistency of the FE-discretized solution for linear elastic micromorphic continua is shown. Numerical examples are provided for illustration. Copyright © 2017 John Wiley & Sons, Ltd.

6.
The paper deals with the issue of accuracy for multiscale methods applied to stochastic problems. More precisely, it focuses on the control of a coupling, performed within the Arlequin framework, between a deterministic continuum model and a stochastic continuum one. Using residual-type estimates and adjoint-based techniques, a strategy for goal-oriented error estimation is presented for this coupling, and the contributions of the various error sources (modeling, space discretization, and Monte Carlo approximation) are assessed. Furthermore, an adaptive strategy is proposed to enhance the quality of the outputs of interest obtained from the coupled stochastic-deterministic model. The performance of the proposed approach is illustrated on 1D and 2D numerical experiments. Copyright © 2013 John Wiley & Sons, Ltd.

7.
In this work, we show that the reduced basis method accelerates the solution of a partial differential equation-constrained optimization problem, in which a nonlinear discretized system with a large number of degrees of freedom must be solved repeatedly during optimization. Such an optimization problem arises, for example, in batch chromatography. To reduce the computational burden of repeatedly solving the large-scale system under parameter variations, a parametric reduced-order model with a small number of equations is derived using the reduced basis method. As a result, the small reduced-order model, rather than the full system, is solved at each step of the optimization process. An adaptive technique for selecting the snapshots is proposed, so that the complexity and runtime for generating the reduced basis are largely reduced. An output-oriented error bound is derived in the vector space, whereby the construction of the reduced model is managed automatically. An early-stop criterion is proposed to circumvent stagnation of the error and to make the construction of the reduced model more efficient. Numerical examples show that the adaptive technique is very efficient in reducing the offline time, and that the optimization based on the reduced model is successful in terms of both accuracy and the runtime for obtaining the optimal solution. Copyright © 2015 John Wiley & Sons, Ltd.
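The abstract does not spell out the reduced-basis construction, so the following is a generic stand-in (names and the SVD-based compression are our assumptions): snapshots of a parametric system are compressed into a small basis, and a Galerkin-projected system is solved instead of the full one.

```python
import numpy as np

# Minimal reduced-basis sketch for a parametric system (K0 + mu*I) u = b:
# snapshots at a few parameter values, SVD for an orthonormal basis,
# then a small projected system for new parameter values.
n = 50
K0 = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D diffusion stiffness
b = np.ones(n)

def full_solve(mu):
    return np.linalg.solve(K0 + mu * np.eye(n), b)        # "expensive" solve

snapshots = np.column_stack([full_solve(mu) for mu in (0.1, 1.0, 5.0, 10.0)])
V, _, _ = np.linalg.svd(snapshots, full_matrices=False)   # reduced basis (4 vectors)

def reduced_solve(mu):
    A = K0 + mu * np.eye(n)
    a = np.linalg.solve(V.T @ A @ V, V.T @ b)             # only a 4x4 system
    return V @ a

mu_test = 3.0                                             # parameter not in snapshots
err = np.linalg.norm(reduced_solve(mu_test) - full_solve(mu_test)) \
      / np.linalg.norm(full_solve(mu_test))
```

The paper's adaptive snapshot selection and error bound would govern which parameter values enter `snapshots`; here they are fixed by hand.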

8.
In this paper, we outline a computational technique for the calibration of macroscopic constitutive laws with automatic error control. In the most general situation, the state variables of the constitutive law, as well as the material parameters, are spatially non-homogeneous. The experimental observations are given in space-time. Based on an appropriate dual problem, we compute a posteriori the discretization error contributions from the approximations of the parameter, state, and costate fields in space-time for an arbitrarily chosen goal-oriented error measure of engineering significance. Such a measure can be used in an adaptive strategy (not discussed in this paper) to meet a predefined error tolerance. An important observation is that the Jacobian matrix associated with the resulting Newton method is used (in principle) in solving the dual problem. Rather than treating the Jacobian in a monolithic fashion, we utilize a sequential solution strategy, whereby the FE topology of the discretized state problem is used repeatedly. Moreover, the proposed solution strategy lends itself naturally to the computation of first- and second-order sensitivities, which are obtained with little extra computational effort. Numerical results are given for a prototype model of confined aquifer flow with spatially non-homogeneous permeability. The efficiency of the optimization strategy and the effectivity of the error computation are assessed. Copyright © 2005 John Wiley & Sons, Ltd.

9.
To be feasible for computationally intensive applications such as parametric studies, optimization, and control design, large-scale finite element analysis requires model order reduction. This is particularly true in nonlinear settings, which tend to dramatically increase computational complexity. Although significant progress has been achieved in the development of computational approaches for the reduction of nonlinear computational mechanics models, addressing the issue of contact remains a major hurdle. To this effect, this paper introduces a projection-based model reduction approach for both static and dynamic contact problems. It features the application of a non-negative matrix factorization scheme to the construction of a positive reduced-order basis for the contact forces, and a greedy sampling algorithm coupled with an error indicator for achieving robustness with respect to model parameter variations. The proposed approach is successfully demonstrated for the reduction of several simple but representative two-dimensional contact and self-contact computational models. Copyright © 2015 John Wiley & Sons, Ltd.
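Non-negative matrix factorization itself can be sketched with the classical multiplicative updates of Lee and Seung (a generic NMF scheme; the paper does not state which algorithm it uses): a non-negative snapshot matrix V is factored as W H with both factors kept element-wise non-negative, so the reduced basis inherits the sign constraint of the contact forces.

```python
import numpy as np

# NMF by multiplicative updates: V ~ W @ H with W, H >= 0 throughout.
rng = np.random.default_rng(1)
V = rng.random((30, 12))            # non-negative "snapshot" matrix
r = 4                               # size of the positive reduced basis
W = rng.random((30, r)) + 0.1
H = rng.random((r, 12)) + 0.1
eps = 1e-12                         # guards against division by zero

err0 = np.linalg.norm(V - W @ H)
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)
err = np.linalg.norm(V - W @ H)
```

Because the updates only ever multiply by non-negative ratios, non-negativity is preserved without explicit projection, which is the property that makes NMF attractive for contact forces.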

10.
In goal-oriented adaptivity, the error in the quantity of interest is represented using the error functions of the direct and adjoint problems. This error representation is subsequently bounded above by element-wise error indicators that are used to drive optimal refinements. In this work, we propose to replace, in the error representation, the adjoint problem by an alternative operator. The main advantage of the proposed approach is that, when judiciously selecting such alternative operator, the corresponding upper bound of the error representation becomes sharper, leading to a more efficient goal-oriented adaptivity. While the method can be applied to a variety of problems, we focus here on two- and three-dimensional (2-D and 3-D) Helmholtz problems. We show via extensive numerical experimentation that the upper bounds provided by the alternative error representations are sharper than the classical ones and lead to a more robust p-adaptive process. We also provide guidelines for finding operators delivering sharp error representation upper bounds. We further extend the results to a convection-dominated diffusion problem as well as to problems with discontinuous material coefficients. Finally, we consider a sonic logging-while-drilling problem to illustrate the applicability of the proposed method.
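For a linear problem a(u, v) = ℓ(v) and a linear quantity of interest Q, the classical error representation the abstract refers to can be written (notation ours, standard in the goal-oriented literature):

```latex
Q(u) - Q(u_h) = r(u_h; w), \qquad r(u_h; v) := \ell(v) - a(u_h, v),
\qquad |Q(u) - Q(u_h)| \le \sum_{K \in \mathcal{T}_h} \eta_K ,
```

where w solves the adjoint problem a(v, w) = Q(v) for all v, and the η_K are element-wise indicators. The proposal in this entry replaces the operator defining the adjoint problem by an alternative one so that the element-wise upper bound becomes sharper.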

11.
Proper generalized decomposition (PGD) is often used for multiquery and fast-response simulations. It is a powerful tool for alleviating the curse of dimensionality affecting multiparametric partial differential equations. Most implementations of PGD are intrusive extensions based on in-house developed FE solvers. In this work, we propose a nonintrusive PGD scheme that uses off-the-shelf FE codes (such as certified commercial software) as an external solver. The scheme is implemented and monitored by in-house flow-control codes, and a typical implementation is provided with downloadable codes. Moreover, a novel parametric separation strategy for the PGD resolution is presented. The parametric space is split into two- or three-dimensional subspaces, allowing the PGD technique to solve problems with constrained parametric spaces and to achieve a higher convergence ratio. Numerical examples are provided. In particular, a practical example in biomechanics is included, with potential application to patient-specific simulation.

12.
This work focuses on providing accurate low-cost approximations of stochastic finite element simulations in the framework of linear elasticity. In a previous work, an adaptive strategy was introduced as an improved Monte Carlo method for multi-dimensional large stochastic problems. We provide here a complete analysis of the method, including a new enhanced goal-oriented error estimator and estimates of the gain in CPU (central processing unit) cost. Technical insights into these two topics are presented in detail, and numerical examples demonstrate the value of these new developments. Copyright © 2016 John Wiley & Sons, Ltd.

13.
In this paper, a proper generalized decomposition (PGD) approach is employed for uncertainty quantification purposes. The neutron diffusion equation with external sources, a diffusion-reaction problem, is used as the parametric model. The uncertainty parameters include the zone-wise constant material diffusion and reaction coefficients as well as the source strengths, yielding a large uncertain space in highly heterogeneous geometries. The PGD solution, parameterized in all uncertain variables, can then be used to compute the mean, variance, and more generally probability distributions of various quantities of interest. In addition to parameterized properties, parameterized geometrical variations of three-dimensional models are also considered in this paper. To achieve and analyze a parametric PGD solution, algorithms are developed to decompose the model's parametric space and to semianalytically integrate solutions for evaluating statistical moments. Problems of varying dimension are evaluated to showcase PGD's ability to solve high-dimensional problems and to analyze its convergence.
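The "semianalytic" moment evaluation that a separated solution enables can be illustrated with a hypothetical rank-1 example (our construction, not the paper's model): if u(x, μ) = X(x)·M(μ), the mean over the uncertain parameter μ only requires averaging the one-dimensional parametric factor.

```python
import numpy as np

# Mean of a separated field over a uniformly distributed parameter mu:
# average the 1-D parametric mode, then scale the spatial mode.
x = np.linspace(0.0, 1.0, 101)
mu = np.linspace(0.5, 1.5, 201)          # grid over the uncertain parameter

X = np.sin(np.pi * x)                     # spatial mode
M = mu**2                                 # parametric mode

mean_u_separated = X * M.mean()           # E[u](x) from one 1-D average

# Brute-force check: average the full tensor-product field over mu.
U = np.outer(X, M)
mean_u_grid = U.mean(axis=1)
err = np.abs(mean_u_separated - mean_u_grid).max()
```

The same factorization applies mode by mode for higher-rank separated solutions, which is why moments stay cheap even in high-dimensional parameter spaces.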

14.
Many approaches for solving stochastic inverse problems suffer from both stochastic and deterministic sources of error. The finite number of samples used to construct a solution is a common source of stochastic error. When computational models are expensive to evaluate, surrogate response surfaces are often employed to increase the number of samples available for approximating the solution. This leads to a reduction in finite sampling errors while the deterministic error in the evaluation of each sample is potentially increased. The pointwise accuracy of sampling the surrogate is primarily impacted by two sources of deterministic error: the local order of accuracy in the surrogate and the numerical error from the numerical solution of the model. In this work, we use adjoints to simultaneously give a posteriori error and derivative estimates in order to construct low-order, piecewise-defined surrogates on sets of unstructured samples. Several examples demonstrate the computational gains of this approach in obtaining accurate estimates of probabilities for events in the design space of model input parameters. This lays the groundwork for future studies on goal-oriented adaptive refinement of such surrogates.
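The benefit of derivative information in a piecewise-defined surrogate can be sketched with a hypothetical one-dimensional example (ours; in the paper the derivatives come from adjoints): storing a value and a derivative at each sample lets a local first-order Taylor model beat a value-only surrogate.

```python
import math

# Piecewise surrogate at scattered samples: nearest-sample constant
# versus nearest-sample first-order Taylor model (value + derivative).
samples = [0.0, 0.4, 0.8, 1.2]
f = math.exp                      # the "expensive" model (illustrative)
vals = [f(s) for s in samples]
ders = [f(s) for s in samples]    # d/dx exp(x) = exp(x)

def nearest(x):
    return min(range(len(samples)), key=lambda i: abs(x - samples[i]))

def surrogate_const(x):           # value-only surrogate
    return vals[nearest(x)]

def surrogate_linear(x):          # value + derivative surrogate
    i = nearest(x)
    return vals[i] + ders[i] * (x - samples[i])

xs = [0.1 * k for k in range(13)]  # evaluation points in [0, 1.2]
err_const = max(abs(surrogate_const(x) - f(x)) for x in xs)
err_linear = max(abs(surrogate_linear(x) - f(x)) for x in xs)
```

The adjoint machinery in the paper supplies both the derivative used here and an error estimate for each sample evaluation; this sketch only shows why the derivative is worth having.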

15.
Existing convergence estimates for numerical scattering methods based on boundary integral equations are asymptotic in the limit of vanishing discretization length, and break down as the electrical size of the problem grows. In order to analyse the efficiency and accuracy of numerical methods for the large scattering problems of interest in computational electromagnetics, we study the spectrum of the electric field integral equation (EFIE) for an infinite, conducting strip for both the TM (weakly singular kernel) and TE polarizations (hypersingular kernel). Due to the self-coupling of surface wave modes, the condition number of the discretized integral equation increases as the square root of the electrical size of the strip for both polarizations. From the spectrum of the EFIE, the solution error introduced by discretization of the integral equation can also be estimated. Away from the edge singularities of the solution, the error is second order in the discretization length for low-order bases with exact integration of matrix elements, and is first order if an approximate quadrature rule is employed. Comparison with numerical results demonstrates the validity of these condition number and solution error estimates. The spectral theory offers insights into the behaviour of numerical methods commonly observed in computational electromagnetics. Copyright © 2001 John Wiley & Sons, Ltd.

16.
Robust global/goal-oriented error estimation is used nowadays to control the approximate finite element (FE) solutions obtained from simulation. In the context of computational mechanics, the construction of admissible stress fields (i.e. stress tensors which verify the equilibrium equations) is required to set up strict and guaranteed error bounds (using residual-based error estimators) and plays an important role in the quality of the error estimates. This work focuses on the different procedures used in the calculation of admissible stress fields, which is a crucial and technically complicated point. The three main existing techniques, the element equilibration technique (EET), the star-patch equilibration technique (SPET), and the element equilibration + star-patch technique (EESPT), are investigated and compared with respect to three criteria: the quality of the associated error estimators, the computational cost, and the ease of practical implementation into commercial FE codes. The numerical results presented focus on industrial problems; they highlight the main advantages and drawbacks of the different methods and show that the behavior of the three estimators, which have the same convergence rate as the exact global error, is consistent. 2D and 3D experiments have been carried out in order to compare the performance and the computational cost of the three approaches. The analysis of the results reveals that the SPET is more accurate than the EET and EESPT methods, but at a higher computational cost. Overall, the numerical tests demonstrate the value of the hybrid EESPT method and show that it offers a good compromise between the quality of the error estimate, ease of practical implementation, and computational cost. Furthermore, the influence of the cost function involved in the EET and the EESPT is studied in order to optimize the estimators. Copyright © 2011 John Wiley & Sons, Ltd.

17.
A fast computational technique that speeds up the process of parametric macro-model extraction is proposed. An efficient starting point is the technique of parametric model order reduction (PMOR). The key step in PMOR is the computation of a projection matrix V, which requires the computation of multiple moment matrices of the underlying system. In turn, for each moment matrix, a linear system with multiple right-hand sides has to be solved. Usually, a considerable number of linear systems must be solved when the system includes more than two free parameters, and if the original system is very large, the linear solution step is computationally expensive. In this paper, the subspace recycling algorithm GCRO-DR (the outer generalized conjugate residual method combined with the generalized minimal residual method with deflated restarting) is considered as a basis for solving the sequence of linear systems. In particular, two more efficient recycling algorithms, G-DRvar1 and G-DRvar2, are proposed. Theoretical analysis and simulation results show that both the GCRO-DR method and its variants G-DRvar1 and G-DRvar2 are very efficient when compared with standard solvers. Furthermore, the presented algorithms overcome the bottleneck of a recently proposed subspace recycling method, the modified Krylov recycling generalized minimal residual method. With these subspace recycling algorithms, the PMOR process for macro-model extraction can be significantly accelerated. Copyright © 2013 John Wiley & Sons, Ltd.
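The core payoff of subspace recycling can be shown with a toy example far simpler than GCRO-DR (our sketch; GCRO-DR additionally recycles approximate invariant subspaces inside the Krylov iteration): when solving a sequence of nearby systems, projecting onto the span of earlier solutions yields an initial guess whose residual is already tiny.

```python
import numpy as np

# Recycling for a sequence A_k x = b: Galerkin projection of the new
# system onto the subspace Y spanned by two earlier solutions.
rng = np.random.default_rng(2)
n = 40
B = rng.standard_normal((n, n))
A0 = B @ B.T + n * np.eye(n)               # SPD base operator
b = rng.standard_normal(n)

# Two earlier systems in the sequence (slightly shifted operators).
Y = np.column_stack(
    [np.linalg.solve(A0 + s * np.eye(n), b) for s in (0.0, 0.5)]
)

A = A0 + 0.25 * np.eye(n)                  # next system in the sequence
y = np.linalg.solve(Y.T @ A @ Y, Y.T @ b)  # Galerkin on the recycled subspace
x0 = Y @ y

res_zero = np.linalg.norm(b)               # residual of the zero initial guess
res_recycled = np.linalg.norm(b - A @ x0)
```

An iterative solver started from `x0` then needs far fewer iterations, which is the effect the recycling variants in the paper exploit at scale.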

18.
The general deformation problem with material and geometric non-linearities is typically divided into a number of subproblems including the kinematic, the constitutive, and the contact/friction subproblems. These problems are introduced for algorithmic purposes; however, each of them represents distinct physical aspects of the deformation process. For each of these subproblems, several well-established mathematical and numerical models based on the finite element method have been proposed. Recent developments in software engineering and in the field of object-oriented C++ programming have made it possible to model physical processes and mechanisms more expressively than ever before. In particular, the various subproblems and computational models in a large inelastic deformation analysis can be implemented using appropriate hierarchies of classes that accurately represent their underlying physical, mathematical and/or geometric structures. This paper addresses such issues and demonstrates that an approach to deformation processing using classes, inheritance and virtual functions allows very fast and robust implementation and testing of various physical processes and computational algorithms. Here, specific ideas are provided for the development of an object-oriented C++ programming approach to the FEM analysis of large inelastic deformations. It is shown that the maintainability, generality, expandability, and code re-usability of such FEM codes are greatly improved. Finally, the efficiency and accuracy of an object-oriented programming approach to the analysis of large inelastic deformations are investigated using a number of benchmark metal-forming examples. Copyright © 1999 John Wiley & Sons, Ltd.
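The design idea — one class hierarchy per subproblem, with the solver talking only to an abstract interface — can be sketched in Python (the paper uses C++; class and method names here are illustrative, and an abstract method plays the role of a C++ virtual function):

```python
from abc import ABC, abstractmethod

# A constitutive-subproblem hierarchy: the driver never needs to know
# which concrete material model it is driving.
class ConstitutiveModel(ABC):
    @abstractmethod
    def stress(self, strain: float) -> float: ...

class LinearElastic(ConstitutiveModel):
    def __init__(self, E: float):
        self.E = E                            # Young's modulus
    def stress(self, strain: float) -> float:
        return self.E * strain

class PerfectPlasticity(ConstitutiveModel):
    def __init__(self, E: float, yield_stress: float):
        self.E, self.yield_stress = E, yield_stress
    def stress(self, strain: float) -> float:
        return min(self.E * strain, self.yield_stress)  # capped at yield

def solve_step(model: ConstitutiveModel, strain: float) -> float:
    # dispatch through the abstract interface (virtual call in C++)
    return model.stress(strain)

s_elastic = solve_step(LinearElastic(E=200.0), 0.01)
s_plastic = solve_step(PerfectPlasticity(200.0, 1.5), 0.01)
```

Swapping in a new material model, contact law, or kinematic description then means adding a subclass, not editing the solver, which is the maintainability gain the abstract argues for.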

19.
20.
This paper proposes a new strategy for the real-time identification or updating of structural mechanics models defined as dynamical systems. The main idea is to introduce the modified constitutive relation error concept, a practical tool that enables identification problems with highly corrupted data to be solved efficiently, into Kalman filtering, a classical framework for data assimilation. Furthermore, a PGD-based model reduction method is employed in order to optimize the capabilities of the online updating strategy. The performance of the proposed approach, in terms of robustness gain and computational cost reduction, is illustrated on several unsteady thermal applications. Copyright © 2015 John Wiley & Sons, Ltd.
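The classical Kalman filtering framework the paper starts from can be sketched with a minimal scalar filter (our illustration only; the paper's contribution is what it adds on top: the modified constitutive relation error and PGD reduction): predict the state, then correct it with each noisy measurement.

```python
# One predict/update cycle of a scalar Kalman filter.
def kalman_step(x, P, z, A=1.0, Q=0.01, H=1.0, R=0.1):
    # predict: propagate state estimate and its variance
    x_pred = A * x
    P_pred = A * P * A + Q
    # update: blend prediction and measurement via the Kalman gain
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0                               # poor initial guess, large variance
truth = 2.0
for z in (1.9, 2.1, 2.05, 1.95):              # noisy measurements of "truth"
    x, P = kalman_step(x, P, z)

err_before = abs(0.0 - truth)
err_after = abs(x - truth)
```

In the paper, the state would be a reduced (PGD) representation of the thermal field and the update would be regularized by the modified constitutive relation error rather than this plain gain formula.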
