Similar Documents
A total of 20 similar documents were retrieved (search time: 15 ms).
1.
Reduced order models are useful for accelerating simulations in many‐query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models (ROMs) can have prohibitively expensive memory and floating‐point operation costs in high‐performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time‐stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition‐based ROMs, and memory usage is minimized by computing the singular value decomposition using a single‐pass incremental algorithm. Results for a viscous Burgers' test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full‐order model is recovered to within discretization error. A parallel version of the resulting method can be used on supercomputers to generate proper orthogonal decomposition‐based ROMs, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space. Copyright © 2016 John Wiley & Sons, Ltd.
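The single‐pass incremental SVD mentioned in this abstract can be illustrated with a short sketch. The update below is a generic Brand‐style column update that keeps only the left singular vectors and singular values in memory; the function name, tolerance, and random snapshots are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def incremental_svd_update(U, S, snapshot, tol=1e-10):
    # Rank-1 update of a thin SVD basis (left vectors U, singular values S)
    # with one new snapshot column; a minimal single-pass sketch, not the
    # paper's exact adaptive algorithm.
    if U is None:                            # first snapshot seeds the basis
        nrm = np.linalg.norm(snapshot)
        return snapshot[:, None] / nrm, np.array([nrm])
    coeffs = U.T @ snapshot                  # component inside the current basis
    residual = snapshot - U @ coeffs
    r = np.linalg.norm(residual)
    if r < tol:                              # snapshot already well captured
        K = np.hstack([np.diag(S), coeffs[:, None]])
    else:                                    # grow the basis by one direction
        U = np.hstack([U, residual[:, None] / r])
        K = np.vstack([np.hstack([np.diag(S), coeffs[:, None]]),
                       np.hstack([np.zeros(len(S)), [r]])])
    Uk, Sk, _ = np.linalg.svd(K, full_matrices=False)
    return U @ Uk, Sk

# Usage: feed snapshots one at a time, keeping only U and S in memory.
U, S = None, None
for t in range(100):
    snap = np.random.rand(500)               # stand-in for a PDE solution at time t
    U, S = incremental_svd_update(U, S, snap)
```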

2.
3.
This paper deals with the extension of proper generalized decomposition methods to non‐linear problems, in particular, to hyperelasticity. Among the different approaches that can be considered for the linearization of the doubly weak form of the problem, we have implemented a new one that uses asymptotic numerical methods in conjunction with proper generalized decomposition to avoid the complex consistent linearization schemes necessary in Newton strategies. This approach results in an approximation of the problem solution in the form of a series expansion. Each term of the series is expressed as a finite sum of separated functions. The advantage of this approach is the presence of only one tangent operator, identical for every term in the series. The resulting approach has proved to yield very accurate results that can be stored in the form of a meta‐model in a very compact format. This opens the possibility of using these results in real time, reaching kHz feedback rates, or of deploying them on handheld devices such as smartphones and tablets. Copyright © 2013 John Wiley & Sons, Ltd.
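To see why such a separated meta‐model can be evaluated at kHz rates, consider a minimal sketch of the online stage: the solution is stored as a finite sum of products of one‐dimensional modes, so a query reduces to a few dot products. The array sizes and random modes below are placeholders, not data from the paper.

```python
import numpy as np

# A PGD-style meta-model stores the solution as u(x, p) ~ sum_i F_i(x) G_i(p):
# a handful of tabulated 1D modes instead of one solve per parameter value.
nx, npar, nmodes = 200, 50, 8
F = np.random.rand(nmodes, nx)      # spatial modes F_i(x), tabulated on a grid
G = np.random.rand(nmodes, npar)    # parametric modes G_i(p), tabulated likewise

def evaluate(ix, ip):
    # Online evaluation is a dot product over the modes -- cheap enough for
    # kHz feedback rates on modest hardware.
    return F[:, ix] @ G[:, ip]

u = evaluate(10, 3)
```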

4.
A nonoverlapping domain decomposition (DD) method is proposed for the iterative solution of systems of equations arising from the discretization of Helmholtz problems by the discontinuous enrichment method. This discretization method is a discontinuous Galerkin finite element method with plane wave basis functions for locally approximating the solution and dual Lagrange multipliers for weakly enforcing its continuity over the element interfaces. The primal subdomain degrees of freedom are eliminated by local static condensations to obtain an algebraic system of equations formulated in terms of the interface Lagrange multipliers only. As in the FETI‐H and FETI‐DPH DD methods for continuous Galerkin discretizations, this system of Lagrange multipliers is iteratively solved by a Krylov method equipped with both a local preconditioner based on subdomain data, and a global one using a coarse space. Numerical experiments performed for two‐ and three‐dimensional acoustic scattering problems suggest that the proposed DD‐based iterative solver is scalable with respect to both the size of the global problem and the number of subdomains. Copyright © 2009 John Wiley & Sons, Ltd.
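The local static condensation step can be pictured schematically: each subdomain solve eliminates the primal unknowns, leaving a dual system in the interface Lagrange multipliers only. The sketch below assumes invertible local matrices and dense algebra for brevity; it shows the generic structure of such a dual assembly, not the specific discontinuous enrichment formulation of the paper.

```python
import numpy as np

def dual_interface_system(Ks, Bs, fs):
    # Eliminate the primal subdomain unknowns u_s from K_s u_s = f_s - B_s^T lam
    # and assemble the dual operator F = sum_s B_s K_s^{-1} B_s^T and right-hand
    # side d = sum_s B_s K_s^{-1} f_s, so that F lam = d involves only the
    # interface Lagrange multipliers.  Invertible K_s is assumed here
    # (no floating-subdomain null spaces), purely for illustration.
    n_lam = Bs[0].shape[0]
    F = np.zeros((n_lam, n_lam))
    d = np.zeros(n_lam)
    for K, B, f in zip(Ks, Bs, fs):
        Kinv_Bt = np.linalg.solve(K, B.T)     # local subdomain solves only
        F += B @ Kinv_Bt
        d += B @ np.linalg.solve(K, f)
    return F, d
```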

5.
6.
One of the main difficulties that a reduced‐order method could face is the poor separability of the solution. This problem is common to both a posteriori model order reduction (proper orthogonal decomposition, reduced basis) and a priori [proper generalized decomposition (PGD)] model order reduction. Early approaches to solve it include the construction of local reduced‐order models in the framework of POD. We present here an extension of local models in a PGD (and thus a priori) context. Three different strategies are introduced to estimate the size of the different patches or regions in the solution manifold where PGD is applied. As will be noticed, no gluing or special technique is needed to deal with the resulting set of local reduced‐order models, in contrast to most proper orthogonal decomposition local approximations. The resulting method can be seen as a sort of a priori manifold learning or nonlinear dimensionality reduction technique. Examples are shown that demonstrate the pros and cons of each strategy for different problems.

7.
Domain decomposition methods often exhibit very poor performance when applied to engineering problems with large heterogeneities. In particular, for heterogeneities along domain interfaces, the iterative techniques used to solve the interface problem lack an efficient preconditioner. Recently, a robust approach, named finite element tearing and interconnection (FETI)–generalized eigenvalues in the overlaps (Geneo), was proposed, in which troublesome modes are precomputed and deflated from the interface problem. The cost of FETI–Geneo is, however, high. We propose in this paper techniques that share similar ideas with FETI–Geneo but require no preprocessing and can be easily and efficiently implemented as an alternative to standard domain decomposition methods. In the block iterative approaches presented in this paper, the search space at every iteration on the interface problem contains as many directions as there are domains in the decomposition. Those search directions originate either from the domain‐wise preconditioner (in the simultaneous FETI method) or from the block structure of the right‐hand side of the interface problem (block FETI). We show on two‐dimensional structural examples that both methods are robust and provide good convergence in the presence of high heterogeneities, even when the interface is jagged or when the domains have a bad aspect ratio. The simultaneous FETI method was also efficiently implemented in an optimized parallel code and exhibited excellent performance compared with the regular FETI method. Copyright © 2015 John Wiley & Sons, Ltd.

8.
The direct approximation of strong form using radial basis functions (RBFs), commonly called the radial basis collocation method (RBCM), has been recognized as an effective means for solving boundary value problems. Nevertheless, the non‐compactness of the RBFs precludes its application to problems with local features, such as fracture problems, among others. This work attempts to apply RBCM to fracture mechanics by introducing a domain decomposition technique with proper interface conditions. The proposed method allows (1) natural representation of discontinuity across the crack surfaces and (2) enrichment of crack‐tip solution in a local subdomain. With the proper domain decomposition and interface conditions, exponential convergence rate can be achieved while keeping the discrete system well‐conditioned. The analytical prediction and numerical results demonstrate that an optimal dimension of the near‐tip subdomain exists. The effectiveness of the proposed method is justified by the mathematical analysis and demonstrated by the numerical examples. Copyright © 2010 John Wiley & Sons, Ltd.
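For readers unfamiliar with strong‐form RBF collocation, a minimal one‐dimensional sketch is given below: the PDE is collocated at interior points and the boundary conditions at boundary points, using globally supported Gaussian RBFs. The node count, shape parameter, and manufactured problem are illustrative choices, not values from the paper, and the full, possibly ill‐conditioned system is exactly the limitation the abstract points out.

```python
import numpy as np

# Solve u'' = f on [0, 1] with u(0) = u(1) = 0 by Gaussian RBF collocation;
# exact solution u = sin(pi x) for the manufactured right-hand side below.
n, eps = 25, 5.0
x = np.linspace(0.0, 1.0, n)
f = -np.pi**2 * np.sin(np.pi * x)

d = x[:, None] - x[None, :]                      # pairwise signed distances
phi = np.exp(-(eps * d)**2)                      # Gaussian RBF values
phi_xx = (4 * eps**4 * d**2 - 2 * eps**2) * phi  # their second x-derivatives

A = phi_xx.copy()                                # interior rows: collocate the PDE
A[0, :], A[-1, :] = phi[0, :], phi[-1, :]        # boundary rows: collocate u itself
rhs = f.copy()
rhs[0], rhs[-1] = 0.0, 0.0

coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)   # full, typically ill-conditioned system
u = phi @ coef
print(np.max(np.abs(u - np.sin(np.pi * x))))     # error vs. the exact solution
```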

9.
Motivated by atomistic‐to‐continuum coupling, we consider a fine‐scale problem defined on a small region embedded in a much larger coarse‐scale domain and propose an efficient solution technique on the basis of the domain decomposition framework. Specifically, we develop a nonoverlapping Schwarz method with two important features: (i) the use of an efficient approximation of the Dirichlet‐to‐Neumann map for the interface conditions; and (ii) the utilization of the inherent scale separation in the solution. The paper includes a detailed formulation of the proposed interface condition, along with an illustration of its effectiveness using simple but representative numerical experiments. Copyright © 2012 John Wiley & Sons, Ltd.
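Interface conditions of this kind are typically written in Robin form; the expression below is the generic template such nonoverlapping Schwarz methods start from, with $\mathcal{S}$ standing in for the approximated Dirichlet‐to‐Neumann operator. It is meant only as orientation, not as the specific condition derived in the paper.

$$\frac{\partial u_1^{(k+1)}}{\partial n_1} + \mathcal{S}\,u_1^{(k+1)} \;=\; -\frac{\partial u_2^{(k)}}{\partial n_2} + \mathcal{S}\,u_2^{(k)} \qquad \text{on } \Gamma ,$$

where $u_1$ and $u_2$ are the solutions on the two nonoverlapping subdomains and $n_1 = -n_2$ on the interface $\Gamma$. In the two‐subdomain case, using the exact Dirichlet‐to‐Neumann operator of the complementary domain for $\mathcal{S}$ makes the iteration converge in two steps, which is why the quality of the approximation of $\mathcal{S}$ governs the convergence rate.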

10.
The use of cohesive zone models is an efficient way to treat damage, especially when the crack path is known a priori, as is the case in the modeling of delamination in composite laminates. However, simulations using cohesive zone models are expensive from a computational point of view. When using implicit time integration or when solving static problems, the non‐linearity related to the cohesive model requires many iterations before reaching convergence. In explicit approaches, a large number of iterations is also needed because of the time step stability condition. In this article, a new approach based on a separated representation of the solution is proposed. The proper generalized decomposition is used to build the solution. This technique, coupled with a cohesive zone model, allows a significant reduction of the computational cost. The results approximated with the proper generalized decomposition are very close to the ones obtained using the classical finite element approach. Copyright © 2014 John Wiley & Sons, Ltd.

11.
A new and efficient two‐level, non‐overlapping domain decomposition (DD) method is developed for the Helmholtz equation in the two Lagrange multiplier framework. The transmission conditions are designed by utilizing perfectly matched discrete layers (PMDLs), which are a more accurate representation of the exterior Dirichlet‐to‐Neumann map than the polynomial approximations used in the optimized Schwarz method. Another important ingredient affecting the convergence of a DD method, namely, the coarse space augmentation, is also revisited. Specifically, the widely successful approach based on plane waves is modified to that based on interface waves, defined directly on the subdomain boundaries, hence ensuring linear independence and facilitating the estimation of the optimal size for the coarse problem. The effectiveness of both PMDL‐based transmission conditions and interface‐wave‐based coarse space augmentation is illustrated with an array of numerical experiments that include comprehensive scalability studies with respect to frequency, mesh size and the number of subdomains. Copyright © 2015 John Wiley & Sons, Ltd.

12.
This paper presents a strategy for the computation of structures with repeated patterns based on domain decomposition and block‐Krylov solvers. It can be seen as a special variant of the FETI method. We propose exploiting the presence of repeated domains in the problem to compute the solution by minimizing the interface error along several directions simultaneously. The method not only drastically decreases the size of the problems to solve but also accelerates the convergence of the interface problem for nearly no additional computational cost and minimizes expensive memory accesses. The numerical performances are illustrated on some thermal and elastic academic problems. Copyright © 2008 John Wiley & Sons, Ltd.

13.
In this paper we consider the application of the method of fundamental solutions to crack problems. These problems present difficulties that are not only related to the intrinsic singular nature of the problem, but are mainly related to the impossibility of choosing appropriate point sources to represent the solution as a whole. In this paper we present: (1) a domain decomposition technique that allows a piecewise approximation of the solution to be expressed using the method of fundamental solutions applied to each subdomain; (2) an enriched approximation whereby singular functions (fully representing the singular behaviour around the cracks or other sources of boundary singularities) are used. An application of the proposed techniques to the torsion of cracked components is carried out.
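A minimal sketch of the plain method of fundamental solutions helps explain why cracks need the subdomain splitting and singular enrichment described above: the sources sit on a fictitious boundary outside the domain, so the basis is smooth inside it and cannot represent a crack by itself. The radii, point counts, and smooth test problem below are illustrative assumptions.

```python
import numpy as np

# MFS for the 2D Laplace equation on the unit disk: sources on a fictitious
# circle of radius 1.5, coefficients fitted to the Dirichlet data by least squares.
nb, ns = 80, 40
tb = np.linspace(0, 2 * np.pi, nb, endpoint=False)
ts = np.linspace(0, 2 * np.pi, ns, endpoint=False)
xb = np.c_[np.cos(tb), np.sin(tb)]            # boundary collocation points (r = 1)
xs = 1.5 * np.c_[np.cos(ts), np.sin(ts)]      # exterior source points (r = 1.5)

def G(p, q):
    # Fundamental solution of the 2D Laplacian
    return -np.log(np.linalg.norm(p - q)) / (2 * np.pi)

A = np.array([[G(p, q) for q in xs] for p in xb])
g = xb[:, 0] ** 2 - xb[:, 1] ** 2             # harmonic Dirichlet data u = x^2 - y^2
coef, *_ = np.linalg.lstsq(A, g, rcond=None)

x_test = np.array([0.3, 0.2])                 # interior evaluation point
u = sum(c * G(x_test, q) for c, q in zip(coef, xs))
print(u, x_test[0] ** 2 - x_test[1] ** 2)     # MFS value vs. exact harmonic solution
```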

14.
This paper presents the implementation of advanced domain decomposition techniques for the parallel solution of large‐scale shape sensitivity analysis problems. The methods presented in this study are based on the FETI method proposed by Farhat and Roux, which is a dual domain decomposition implementation. Two variants of the basic FETI method have been implemented in this study: (i) FETI‐1, where the rigid‐body modes of the floating subdomains are computed explicitly; and (ii) FETI‐2, where the local problem at each subdomain is solved by the PCG method and the rigid‐body modes are computed explicitly. A two‐level iterative method, particularly tailored to solving re‐analysis type problems, is proposed, in which the dual domain decomposition method is incorporated in the preconditioning step of a subdomain global PCG implementation. The superiority of this two‐level iterative solver is demonstrated with a number of numerical tests in serial as well as in parallel computing environments. Copyright © 1999 John Wiley & Sons, Ltd.

15.
In this paper, the efficiency of a parallelizable preconditioner for domain decomposition methods is investigated in the context of the solution of non‐symmetric linear equations arising from the discretization of the Saint‐Venant equations. The proposed interface strip preconditioner (IS) is based on solving a problem in a narrow strip around the interface. It requires much less memory and computing time than the classical Neumann–Neumann preconditioner, and handles correctly the flux splitting among sub‐domains that share the interface. The performance of this preconditioner is assessed with an analytical study of Schur complement matrix eigenvalues and numerical experiments conducted in a parallel computational environment (consisting of a Beowulf cluster of 20 nodes). Copyright © 2005 John Wiley & Sons, Ltd.
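For orientation, the interface operator whose eigenvalues are being analyzed is the standard Schur complement of a splitting into interior (I) and interface (Γ) unknowns; the notation below is the generic one, not necessarily the paper's.

$$S = A_{\Gamma\Gamma} - A_{\Gamma I}\,A_{II}^{-1}\,A_{I\Gamma}, \qquad S\,u_\Gamma = f_\Gamma - A_{\Gamma I}\,A_{II}^{-1}\,f_I .$$

The Krylov iteration on the interface then works with the preconditioned operator $M^{-1}S$, whose eigenvalue clustering determines the convergence rate; in an interface strip preconditioner, $M$ is built from the same construction restricted to a narrow strip of unknowns around $\Gamma$.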

16.
The motivation of this work is to address real-time sequential inference of parameters with a full Bayesian formulation. First, the proper generalized decomposition (PGD) is used to reduce the computational cost of evaluating the posterior density in the online phase. Second, Transport Map sampling is used to build a deterministic coupling between a reference measure and the posterior measure. The determination of the transport maps involves the solution of a minimization problem. As the PGD model is quasi-analytical and in variable-separated form, the use of gradient and Hessian information speeds up the minimization algorithm. Finally, uncertainty quantification on outputs of interest of the model can easily be performed thanks to the global nature of the PGD solution over all coordinate domains. Numerical examples highlight the performance of the method.

17.
The analysis of transient heat conduction problems in large, complex computational domains is of interest in many technological applications, including electronic cooling, encapsulation using functionally graded composite materials, and cryogenics. In many of these applications, the domains may be multiply connected and contain moving boundaries, making it desirable to consider meshless methods of analysis. The method of fundamental solutions, along with a parallel domain decomposition method, is developed for the solution of three‐dimensional parabolic differential equations. In the current approach, time is discretized using the generalized trapezoidal rule, transforming the original parabolic partial differential equation into a sequence of non‐homogeneous modified Helmholtz equations. An approximate particular solution is derived using polyharmonic splines. Interfacial conditions between subdomains are satisfied using a Schwarz Neumann–Neumann iteration scheme. Outside of the first time step, where zero initial flux is assumed, the initial estimate for the interfacial flux is given by the converged solution obtained during the previous time step. This significantly reduces the number of iterations required to meet the convergence criterion. The accuracy of the method of fundamental solutions approach is demonstrated through two benchmark problems. The parallel efficiency of the domain decomposition method is evaluated by considering cases with 8, 27, and 64 subdomains. Copyright © 2004 John Wiley & Sons, Ltd.
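The reduction to modified Helmholtz equations can be made explicit. For the heat equation $\partial u/\partial t = \alpha\,\nabla^2 u$ with constant diffusivity, the generalized trapezoidal ($\theta$) rule gives, at each time step,

$$\frac{u^{n+1}-u^{n}}{\Delta t} = \alpha\left[\theta\,\nabla^2 u^{n+1} + (1-\theta)\,\nabla^2 u^{n}\right] \;\Longrightarrow\; \left(\nabla^2 - \lambda^2\right)u^{n+1} = -\lambda^2 u^{n} - \frac{1-\theta}{\theta}\,\nabla^2 u^{n}, \qquad \lambda^2 = \frac{1}{\alpha\,\theta\,\Delta t},$$

so each step requires the solution of a non‐homogeneous modified Helmholtz equation, whose fundamental solution is available in closed form. (This constant‐$\alpha$ derivation is the standard one and is given here for illustration; it is not quoted from the paper.)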

18.
The identification of the geological structure from seismic data is formulated as an inverse problem. The properties and the shape of the rock formations in the subsoil are described by material and geometric parameters, which are taken as input data for a predictive model. Here, the model is based on the Helmholtz equation, describing the acoustic response of the system for a given wave length. Thus, the inverse problem consists in identifying the values of these parameters such that the output of the model best agrees with the observations. The resulting optimization algorithm requires multiple queries to the model with different values of the parameters. Reduced order models are especially well suited to significantly reduce the computational overhead of these repeated evaluations of the model. In particular, the proper generalized decomposition produces a solution explicitly stating the parametric dependence, where the parameters play the same role as the physical coordinates. A proper generalized decomposition solver is devised to inexpensively explore the parametric space along the iterative process. This exploration of the parametric space is in fact seen as a post‐process of the generalized solution. The approach adopted demonstrates its viability when tested in two illustrative examples. Copyright © 2016 John Wiley & Sons, Ltd.

19.
The radial basis collocation method is easy to implement and exhibits exponential convergence. However, the resulting collocation matrix is generally full and ill-conditioned, and it is hard to represent local features of the solution. Therefore, a finite subdomain collocation method with radial basis approximation is proposed. The approximation is established within each subdomain, and continuity conditions are imposed on all interfaces in strong form. Consequently, the original full matrix is transformed into a sparse matrix. Different shape parameters can be used in different subdomains, depending on the solution representation needed in each one. This not only alleviates the ill-conditioning and improves solution accuracy, but also preserves exponential convergence. Furthermore, CPU time can be markedly reduced. Error analysis and proper domain decomposition are also investigated. Numerical results show that this method performs well for high-gradient and singular problems, which are characterized by pronounced local features.
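The strong-form interface conditions mentioned above amount to collocating continuity of the value and of the normal flux at interface points. Written generically for two neighbouring subdomains with radial basis expansions $u^{(1)}$ and $u^{(2)}$ (this is a template, not the paper's exact notation):

$$u^{(1)}(x_k) = u^{(2)}(x_k), \qquad \frac{\partial u^{(1)}}{\partial n}(x_k) = \frac{\partial u^{(2)}}{\partial n}(x_k), \qquad x_k \in \Gamma_{12},$$

with $u^{(s)}(x) = \sum_j a_j^{(s)}\,\varphi\!\left(\varepsilon_s \lVert x - x_j^{(s)} \rVert\right)$. Each subdomain can thus keep its own shape parameter $\varepsilon_s$, and the global matrix couples only neighbouring blocks, which is what produces the sparse, better-conditioned system.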

20.
In this paper, the proper generalized decomposition (PGD) is used for model reduction in the solution of an inverse heat conduction problem within the Bayesian framework. Two PGD reduced order models are proposed, and the approximation error model (AEM) is applied to account for the errors between the complete and the reduced models. For the first PGD model, the direct problem solution is computed considering a separate representation of each coordinate of the problem during the process of solving the inverse problem. On the other hand, the second PGD model is based on a generalized solution integrating the unknown parameter as one of the coordinates of the decomposition. For the second PGD model, the reduced solution of the direct problem is computed before the inverse problem, within the parameter space provided by the prior information about the parameters, which is required to be proper. These two reduced models are evaluated in terms of accuracy and reduction of the computational time on a transient, three-dimensional, two-region inverse heat transfer problem. In fact, both reduced models result in a substantial reduction of the computational time required for the solution of the inverse problem, and provide accurate estimates for the unknown parameter due to the application of the approximation error model approach.
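The approximation error model referred to here follows the standard construction: the discrepancy between the complete model $F(\theta)$ and the reduced model $F_r(\theta)$ is treated as an additional Gaussian noise term whose statistics are estimated offline from prior samples. In generic notation (the usual AEM form, not a formula quoted from the paper):

$$y = F_r(\theta) + \varepsilon(\theta) + e, \qquad \varepsilon(\theta) = F(\theta) - F_r(\theta) \approx \mathcal{N}(\bar{\varepsilon}, \Sigma_\varepsilon),$$

so the likelihood becomes $\pi(y \mid \theta) \propto \exp\!\big[-\tfrac12\,(y - F_r(\theta) - \bar{\varepsilon})^{\mathsf T}(\Sigma_e + \Sigma_\varepsilon)^{-1}(y - F_r(\theta) - \bar{\varepsilon})\big]$, where $\bar{\varepsilon}$ and $\Sigma_\varepsilon$ are the sample mean and covariance of the model error over draws from the prior, and $\Sigma_e$ is the measurement noise covariance.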
