Similar Documents
A total of 20 similar documents were retrieved (search time: 15 ms).
1.
In this paper, we model crack discontinuities in two-dimensional linear elastic continua using the extended finite element method without the need to partition an enriched element into a collection of triangles or quadrilaterals. For crack modeling in the extended finite element method, the standard finite element approximation is enriched with a discontinuous function and the near-tip crack functions. Each element that is fully cut by the crack is decomposed into two simple (convex or nonconvex) polygons, whereas the element that contains the crack tip is treated as a nonconvex polygon. By using Euler's homogeneous function theorem and Stokes's theorem to numerically integrate homogeneous functions on convex and nonconvex polygons, the exact contributions to the stiffness matrix from the discontinuous enriched basis functions are computed. For contributions to the stiffness matrix from weakly singular integrals (because of enrichment with asymptotic crack-tip functions), we only require a one-dimensional quadrature rule along the edges of a polygon. Hence, neither element partitioning on either side of the crack discontinuity nor the use of any cubature rule within an enriched element is needed. Structured finite element meshes consisting of rectangular elements, as well as unstructured triangular meshes, are used. We demonstrate the flexibility of the approach and its excellent accuracy in stress intensity factor computations for two-dimensional crack problems. Copyright © 2016 John Wiley & Sons, Ltd.
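To make the enrichment concrete, here is a minimal Python sketch (not the authors' code) of the two ingredients named in the abstract: a generalized Heaviside function across the crack and the four standard asymptotic crack-tip branch functions. The function names and sign conventions are assumptions.

```python
# Hypothetical sketch of the two XFEM enrichment ingredients mentioned in the abstract.
import numpy as np

def heaviside_enrichment(signed_distance):
    """Generalized Heaviside: +1 on one side of the crack, -1 on the other."""
    return np.where(signed_distance >= 0.0, 1.0, -1.0)

def crack_tip_branch_functions(r, theta):
    """Classical asymptotic near-tip enrichment basis in polar coordinates (r, theta)
    measured from the crack tip; the first function is discontinuous across the crack."""
    sr = np.sqrt(r)
    return np.array([
        sr * np.sin(theta / 2.0),
        sr * np.cos(theta / 2.0),
        sr * np.sin(theta / 2.0) * np.sin(theta),
        sr * np.cos(theta / 2.0) * np.sin(theta),
    ])

# Example: evaluate the enrichments at one integration point near the tip.
print(heaviside_enrichment(-0.02), crack_tip_branch_functions(0.1, np.pi / 3.0))
```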

2.
In this work, a reduced-order model based on adaptive finite element meshes and a correction term obtained by using an artificial neural network (FAN-ROM) is presented. The idea is to run a high-fidelity simulation on an adaptively refined finite element mesh and compare the results with those of a coarse-mesh finite element model. From this comparison, a correction forcing term can be computed for each training configuration. A model for the correction term is built by using an artificial neural network, and the final reduced-order model is obtained by combining the coarse-mesh finite element model with the artificial neural network model for the correction forcing term. The methodology is applied to nonlinear solid mechanics problems, transient quasi-incompressible flows, and a fluid-structure interaction problem. The results of the numerical examples show that the FAN-ROM is capable of improving the simulation results obtained on coarse finite element meshes at a reduced computational cost.
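A hedged sketch of the offline training stage as described in the abstract: the correction target is the difference between the restricted high-fidelity solution and the coarse solution, learned by a network over the training configurations. The solver callbacks, the toy 1-D "solutions", and the use of scikit-learn's MLPRegressor are illustrative assumptions, not the authors' implementation.

```python
# Minimal FAN-ROM-style offline stage under the assumptions stated above.
import numpy as np
from sklearn.neural_network import MLPRegressor  # assumption: scikit-learn available

def correction_sample(mu, solve_fine, solve_coarse, restrict):
    """Correction target for one training configuration mu."""
    u_fine = solve_fine(mu)              # adaptive high-fidelity solve (user supplied)
    u_coarse = solve_coarse(mu)          # coarse-mesh solve (user supplied)
    return restrict(u_fine) - u_coarse   # correction on the coarse-mesh dofs

def train_correction_model(train_mus, solve_fine, solve_coarse, restrict):
    X = np.asarray(train_mus)
    Y = np.array([correction_sample(mu, solve_fine, solve_coarse, restrict) for mu in train_mus])
    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
    net.fit(X, Y)
    return net

# Toy stand-ins so the sketch runs: a 1-parameter family of 1-D "solutions".
x_fine, x_coarse = np.linspace(0.0, 1.0, 81), np.linspace(0.0, 1.0, 11)
solve_fine   = lambda mu: np.sin(mu * np.pi * x_fine)
solve_coarse = lambda mu: np.sin(mu * np.pi * x_coarse) * 0.9   # deliberately inaccurate
restrict     = lambda u: u[::8]                                 # fine -> coarse nodes

net = train_correction_model(np.linspace(0.5, 2.0, 30).reshape(-1, 1),
                             solve_fine, solve_coarse, restrict)
# Online stage: a coarse solve would be driven by the predicted correction forcing term.
print(net.predict([[1.3]])[0])
```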

3.
This paper presents a bubble-inspired algorithm for partitioning a finite element mesh into subdomains. Differing from the previous diffusion BUBBLE and Center-oriented Bubble methods, the newly proposed algorithm employs the physics of real bubbles, including nucleation, spherical growth, bubble-bubble collision, reaching a critical state, and the final competing growth. Reproducing the foaming process of real bubbles in the algorithm enables us to create partitions with good shape without having to specify a large number of artificial controls. The minimum edge cut is achieved simply by increasing the volume of each bubble in the most energy-efficient way. Moreover, the order in which elements are gathered into a bubble yields the minimum number of surface cells at every gathering step; thus, an optimal numbering of the elements in each subdomain is naturally achieved. Because finite element solvers, such as the multifrontal method, must loop over all elements in the local subdomain condensation phase and the global interface solution phase, these two features have a huge payback in terms of solver efficiency. Experiments have been conducted on various structured and unstructured meshes. The obtained results are consistently better than those of the classical kMetis library in terms of edge cut, partition shape, and partition connectivity. Copyright © 2012 John Wiley & Sons, Ltd.
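The growth-and-competition idea can be illustrated with a deliberately simplified sketch that works on the element-adjacency graph only (no nucleation physics); the priority rule and the balancing heuristic below are assumptions standing in for the paper's energy criterion.

```python
# Simplified bubble-style growth: seed one bubble per subdomain and let the smallest
# bubble grow next, always claiming the frontier element that exposes the least new cut.
import heapq

def bubble_partition(adjacency, seeds):
    """adjacency: dict element -> neighbours; seeds: one start element per subdomain."""
    part = {e: None for e in adjacency}
    sizes = [0] * len(seeds)
    frontiers = [[] for _ in seeds]
    counter = 0  # tie-breaker so heap entries stay comparable

    def push(p, elem):
        nonlocal counter
        # priority: number of neighbours already owned by other bubbles (new cut edges)
        cut = sum(part[n] not in (None, p) for n in adjacency[elem])
        counter += 1
        heapq.heappush(frontiers[p], (cut, counter, elem))

    for p, s in enumerate(seeds):
        part[s] = p
        sizes[p] += 1
    for p, s in enumerate(seeds):
        for n in adjacency[s]:
            if part[n] is None:
                push(p, n)

    remaining = len(adjacency) - len(seeds)
    while remaining > 0:
        active = [p for p in range(len(seeds)) if frontiers[p]]
        if not active:
            break  # leftover disconnected elements would need separate handling
        p = min(active, key=lambda q: sizes[q])  # smallest bubble grows next (balance)
        _, _, elem = heapq.heappop(frontiers[p])
        if part[elem] is not None:
            continue
        part[elem] = p
        sizes[p] += 1
        remaining -= 1
        for n in adjacency[elem]:
            if part[n] is None:
                push(p, n)
    return part

# Tiny chain of 6 elements split into two bubbles seeded at its ends.
print(bubble_partition({0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}, seeds=[0, 5]))
```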

4.
This paper describes an improvement in techniques currently used for mesh deformation in fluid-structure calculations in which large body motions are encountered. The proposed approach, the moving submesh approach (MSA), is based on the assumption of a pseudo-material deformation applied to a coarse triangular mesh, which significantly reduces the CPU time. The computational mesh is then updated using an interpolation technique similar to the finite element method. This method may be applied to structured as well as unstructured meshes. An extension to complex boundaries undergoing large rigid-body motions is proposed, combining the MSA with an encapsulation box. The influence of the coarse mesh on the resulting mesh quality is discussed. Copyright © 2008 John Wiley & Sons, Ltd.
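The interpolation step can be pictured with a small sketch that assumes linear (barycentric) shape functions on the coarse triangles, exactly as a linear finite element would interpolate; the names and the example data are placeholders, not the paper's implementation.

```python
# Hypothetical illustration: a fine-mesh node inside a coarse triangle receives the
# displacement interpolated from the coarse nodal displacements with barycentric weights.
import numpy as np

def barycentric_coords(p, tri):
    """Barycentric coordinates of point p in triangle tri (3x2 array of vertices)."""
    a, b, c = tri
    T = np.column_stack((b - a, c - a))
    l2, l3 = np.linalg.solve(T, p - a)
    return np.array([1.0 - l2 - l3, l2, l3])

def interpolate_displacement(p, tri, tri_disp):
    """tri_disp: 3x2 array of coarse nodal displacements; returns displacement at p."""
    lam = barycentric_coords(p, tri)
    return lam @ tri_disp

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
tri_disp = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.2]])  # coarse-mesh motion
print(interpolate_displacement(np.array([0.25, 0.25]), tri, tri_disp))
```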

5.
This paper presents an algorithm for nodal numbering aimed at obtaining a small wavefront. Element clique graphs are employed as the mathematical models of finite element meshes. A priority function containing five vectors is used, which can be viewed as a generalization of Sloan's function. These vectors represent different connectivity properties of the graph models. Unlike Sloan's algorithm, which uses two fixed coefficients, here five coefficients are employed, based on an evaluation by artificial neural networks. The network weights are obtained using a simple genetic algorithm. Examples are included to illustrate the performance of the present hybrid method. Copyright © 2004 John Wiley & Sons, Ltd.
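A much-reduced stand-in for the idea: nodes are renumbered from a priority queue whose priority is a weighted combination of connectivity measures. Here only two hand-weighted terms (degree and distance from the start node) replace the paper's five neural-network-tuned coefficients, so this is an assumption-laden sketch, not Sloan's algorithm or the hybrid method itself.

```python
# Greedy front-based renumbering driven by a simple two-term priority function.
import heapq
from collections import deque

def bfs_distances(graph, start):
    dist = {start: 0}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def priority_renumber(graph, start, w_deg=1.0, w_dist=2.0):
    """graph: dict node -> list of neighbours; returns the nodes in their new order."""
    dist = bfs_distances(graph, start)
    order, seen = [], {start}
    heap = [(0.0, start)]
    while heap:
        _, u = heapq.heappop(heap)
        order.append(u)
        for v in graph[u]:
            if v not in seen:
                seen.add(v)
                # low degree and small distance from the start node are numbered first
                heapq.heappush(heap, (w_deg * len(graph[v]) + w_dist * dist[v], v))
    return order

# Tiny example: a 2x3 grid of nodes numbered 0..5.
grid = {0: [1, 3], 1: [0, 2, 4], 2: [1, 5], 3: [0, 4], 4: [1, 3, 5], 5: [2, 4]}
print(priority_renumber(grid, start=0))
```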

6.
Most of the recently proposed computational methods for solving partial differential equations on multiprocessor architectures stem from the 'divide and conquer' paradigm and involve some form of domain decomposition. For those methods which also require grids of points or patches of elements, it is often necessary to explicitly partition the underlying mesh, especially when working with local memory parallel processors. In this paper, a family of cost-effective algorithms for the automatic partitioning of arbitrary two- and three-dimensional finite element and finite difference meshes is presented and discussed in view of a domain-decomposed solution procedure and parallel processing. The influence of the algorithmic aspects of a solution method (implicit/explicit computations), and of the architectural specifics of a multiprocessor (SIMD/MIMD, startup/transmission time), on the design of a mesh partitioning algorithm is discussed. The impact of the partitioning strategy on load balancing, operation count, operator conditioning, rate of convergence and processor mapping is also addressed. Finally, the proposed mesh decomposition algorithms are demonstrated with realistic examples of finite element, finite volume, and finite difference meshes associated with the parallel solution of solid and fluid mechanics problems on the iPSC/2 and iPSC/860 multiprocessors.

7.
The performance of partition-of-unity-based methods such as the generalized finite element method or the extended finite element method is studied for the simulation of cohesive cracking. The focus of the investigation is on the performance of bilinear quadrilateral finite elements using these methods. In particular, the approximation of the displacement jump field representing cohesive cracks by the extended/generalized finite element method and its effect on the overall behavior at the element and structural level is investigated. A single-element test is performed with two different integration schemes, namely the Newton-Cotes/Lobatto and the Gauss integration schemes, for the cracked-interface contribution. It was found that cohesive crack segments subjected to a nonuniform opening in unstructured meshes (or an inclined crack in a structured finite element mesh) result in an unrealistic crack opening. The reasons for such behavior and its effect on the response at the element level are discussed. Furthermore, a mesh refinement study is performed to analyze the overall response of a cohesively cracked body in a finite element analysis. Copyright © 2013 John Wiley & Sons, Ltd.
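For readers unfamiliar with the two interface rules being compared, the sketch below evaluates a generic 2-point Gauss rule against the 3-point Newton-Cotes/Lobatto rule on a reference segment. The integrand is a placeholder, not the paper's cohesive law; the only point illustrated is that the Lobatto rule samples the segment ends (the element nodes) while the Gauss rule does not.

```python
# Generic 1-D quadrature rules on the reference segment [-1, 1].
import numpy as np

def gauss_rule(n):
    """n-point Gauss-Legendre rule (all points interior)."""
    return np.polynomial.legendre.leggauss(n)

def lobatto3_rule():
    """Classical 3-point Gauss-Lobatto rule (endpoints included): Simpson-like weights."""
    return np.array([-1.0, 0.0, 1.0]), np.array([1.0 / 3.0, 4.0 / 3.0, 1.0 / 3.0])

def integrate(f, pts, wts):
    return float(np.dot(wts, f(pts)))

# A quartic mimicking a nonuniformly opening segment; the exact integral is 32/5 = 6.4.
f = lambda x: (1.0 + x) ** 4
print("Gauss 2-pt  :", integrate(f, *gauss_rule(2)))
print("Lobatto 3-pt:", integrate(f, *lobatto3_rule()))
```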

8.
We address the problem of automatic partitioning of unstructured finite element meshes in the context of parallel numerical algorithms based on domain decomposition. A two-step approach is proposed, which combines a direct partitioning scheme with a non-deterministic procedure of combinatorial optimization. In contrast with previously published experiments with non-deterministic heuristics, the optimization step is shown to produce high-quality decompositions at a reasonable compute cost. We also show that the optimization approach can accommodate complex topological constraints and minimization objectives. This is illustrated by considering the particular case of topologically one-dimensional partitions, as well as load balancing of frontal subdomain solvers. Finally, the optimization procedure produces, in most cases, decompositions endowed with geometrically smooth interfaces. This contrasts with available partitioning schemes, and is crucial to some modern numerical techniques based on domain decomposition and a Lagrange multiplier treatment of the interface conditions.

9.
An unstructured finite element solver for the ship-wave problem is presented. The scheme uses an unstructured finite element algorithm both for the Euler or Navier-Stokes flow and for the free-surface boundary problem. The incompressible flow equations are solved via a fractional step method, whereas the nonlinear free-surface equation is solved via a reference surface, which allows fixed and moving meshes. A new unstructured stabilized approximation is used to eliminate spurious numerical oscillations of the free surface. Copyright © 1999 John Wiley & Sons, Ltd.

10.
Large-scale parallel computation can be an enabling resource in many areas of engineering and science if the parallel simulation algorithm attains an appreciable fraction of the machine peak performance, and if undue cost is not incurred in porting or developing the code for the parallel machine. The issue of code parallelization is especially significant when considering unstructured mesh simulations. The unstructured mesh models considered in this paper result from a finite element simulation of electromagnetic fields scattered from geometrically complex objects (either penetrable or impenetrable). The unstructured mesh must be distributed among the processors, as must the resultant sparse system of linear equations. Since a distributed memory architecture does not allow direct access to the irregularly distributed unstructured mesh and sparse matrix data, partitioning algorithms not needed in the sequential software have traditionally been used to spread the data efficiently among the processors. This paper presents a new method for simulating electromagnetic fields scattered from complex objects; namely, an unstructured finite element code that does not use traditional mesh partitioning algorithms. © 1998 This paper was produced under the auspices of the U.S. Government and it is therefore not subject to copyright in the U.S.

11.
We propose a new optimization strategy for unstructured meshes that, when coupled with existing automatic generators, produces meshes of high quality for arbitrary domains in 3-D. Our optimizer is based upon a non-differentiable definition of mesh quality that is natural for finite element or finite volume users: the quality of the worst element in the mesh. The dimension of the optimization space is made tractable by restricting the problem, at each iteration, to a suitable neighbourhood of the worst element. Both geometrical (node repositioning) and topological (reconnection) operations are performed. It turns out that the repositioning method is advantageous with respect to both the usual node-by-node techniques and the more recent differentiable optimization methods. Several examples are included that illustrate the efficiency of the optimizer.
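A toy version of the optimization kernel: the objective is the quality of the worst triangle in a small patch around a free node, which is a non-smooth min-function, so the node is repositioned with a derivative-free search. The quality measure, the patch, and the use of SciPy's Nelder-Mead are illustrative assumptions, not the paper's algorithm.

```python
# Maximize the worst-element quality in a patch by moving one free node.
import numpy as np
from scipy.optimize import minimize  # assumption: SciPy is available

def tri_quality(a, b, c):
    """Scale-invariant triangle quality 4*sqrt(3)*area / (sum of squared edge lengths);
    equals 1 for an equilateral triangle and tends to 0 as the triangle degenerates."""
    u, v = b - a, c - a
    area = 0.5 * abs(u[0] * v[1] - u[1] * v[0])
    l2 = np.sum((b - a) ** 2) + np.sum((c - b) ** 2) + np.sum((a - c) ** 2)
    return 4.0 * np.sqrt(3.0) * area / l2

def worst_quality(free, patch):
    """patch: list of fixed vertex pairs, each forming a triangle with the free node."""
    return min(tri_quality(free, p, q) for p, q in patch)

patch = [(np.array([0.0, 0.0]), np.array([1.0, 0.0])),
         (np.array([1.0, 0.0]), np.array([1.0, 1.0])),
         (np.array([1.0, 1.0]), np.array([0.0, 0.0]))]
x0 = np.array([0.9, 0.1])  # poorly placed interior node
res = minimize(lambda x: -worst_quality(x, patch), x0, method="Nelder-Mead")
print(worst_quality(x0, patch), "->", worst_quality(res.x, patch))
```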

12.
This paper presents a multilevel algorithm for balanced partitioning of unstructured grids. The grid is partitioned such that the number of interface elements is minimized and each partition contains an equal number of grid elements. The partition refinement of the proposed multilevel algorithm is based on an iterative tabu search procedure. In iterative partition refinement algorithms, tie-breaking in the selection of maximum-gain vertices affects the performance considerably. A new tie-breaking strategy for the iterative tabu search algorithm is proposed that leads to improved partitioning quality. Numerical experiments are carried out on various unstructured grids in order to evaluate the performance of the proposed algorithm. The partition results are compared with those produced by the well-known partitioning package Metis and by the k-means clustering algorithm, and are shown to be superior in terms of edge cut, partition balance, and partition connectivity. Copyright © 2015 John Wiley & Sons, Ltd.
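A stripped-down sketch of the refinement phase only (single level, two-way partition): the vertex with the largest cut-edge gain is moved, a short tabu list blocks immediate reversals, and ties in gain are broken in favour of the larger part. The tie-breaking rule and the tenure value are placeholders, not the strategy proposed in the paper.

```python
# Tabu-style two-way partition refinement with gain-based vertex moves.
from collections import deque

def gain(v, graph, side):
    """Reduction in cut edges obtained by moving v to the other side."""
    external = sum(side[u] != side[v] for u in graph[v])
    internal = len(graph[v]) - external
    return external - internal

def tabu_refine(graph, side, iters=200, tenure=7):
    """graph: dict vertex -> neighbours; side: dict vertex -> 0/1 initial partition."""
    tabu = deque(maxlen=tenure)
    sizes = [sum(1 for s in side.values() if s == p) for p in (0, 1)]
    for _ in range(iters):
        best = None
        for v in graph:
            if v in tabu:
                continue
            # tie-break equal gains by preferring a move out of the larger part
            key = (gain(v, graph, side), sizes[side[v]])
            if best is None or key > best[0]:
                best = (key, v)
        if best is None or best[0][0] < 0:
            break  # no non-worsening move available
        v = best[1]
        sizes[side[v]] -= 1
        side[v] = 1 - side[v]
        sizes[side[v]] += 1
        tabu.append(v)
    return side

# 6-vertex example: two triangles joined by one edge, started from a poor split.
g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(tabu_refine(g, {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1}))
```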

13.
A new residual-based finite element method for the scalar Helmholtz equation is developed. This method is obtained from the Galerkin approximation by appending terms that are proportional to residuals on element interiors and inter-element boundaries. The inclusion of residuals on inter-element boundaries distinguishes this method from the well-known Galerkin least-squares method and is crucial to the resulting accuracy of this method. In two dimensions and for regular bilinear quadrilateral finite elements, it is shown via a dispersion analysis that this method has minimal phase error. Numerical experiments are conducted to verify this claim as well as to test and compare the performance of this method on unstructured meshes with other methods. It is found that even for unstructured meshes this method retains a high level of accuracy. Copyright © 2000 John Wiley & Sons, Ltd.
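As background for the dispersion claim, the following are generic definitions (not results taken from the paper): a plane wave is propagated by the discrete Helmholtz scheme with a numerical wavenumber, and the phase error measures its deviation from the exact one.

```latex
% Generic definitions: Helmholtz equation, numerical plane wave, and phase error.
\[
  \Delta u + k^{2} u = 0, \qquad
  u_h \sim e^{\, i\,k^{h}(\theta)\,(x\cos\theta + y\sin\theta)}, \qquad
  \text{phase error} = \bigl| k^{h}(\theta) - k \bigr|.
\]
```

A method with "minimal phase error" keeps this mismatch small over propagation angles theta for a given resolution kh.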

14.
In spite of significant advancements in automatic mesh generation during the past decade, the construction of quality finite element discretizations on complex three-dimensional domains is still a difficult and time-demanding task. In this paper, the partition of unity framework used in the generalized finite element method (GFEM) is exploited to create a very robust and flexible method capable of using meshes that are unacceptable for the finite element method, while retaining its accuracy and computational efficiency. This is accomplished not by changing the mesh but instead by clustering groups of nodes and elements. The clusters define a modified finite element partition of unity that is constant over part of the clusters. This so-called clustered partition of unity is then enriched to the desired order using the framework of the GFEM. The proposed generalized finite element method can correctly and efficiently deal with: (i) elements with negative Jacobian; (ii) excessively fine meshes created by automatic mesh generators; (iii) meshes consisting of several sub-domains with non-matching interfaces. Under such relaxed requirements for an acceptable mesh, and for correctly defined geometries, today's automated tetrahedral mesh generators can practically guarantee successful volume meshing that can be entirely hidden from the user. A detailed technical discussion of the proposed generalized finite element method with clustering, along with numerical experiments and some implementation details, is presented. Copyright © 2006 John Wiley & Sons, Ltd.
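The clustering construction can be summarized with the following sketch in assumed notation (K a cluster of nodes, N_i the standard finite element shape functions, p_j the enrichment polynomials): the sum of the shape functions of a cluster again forms a partition of unity, equal to one wherever all nonzero shape functions at a point belong to the same cluster, and it is then enriched as in the GFEM.

```latex
% Assumed notation for the clustered partition of unity and its enrichment.
\[
  \varphi_K(\mathbf{x}) = \sum_{i \in K} N_i(\mathbf{x}), \qquad
  \sum_{K} \varphi_K(\mathbf{x}) = \sum_{i} N_i(\mathbf{x}) = 1, \qquad
  u_h(\mathbf{x}) = \sum_{K} \varphi_K(\mathbf{x}) \sum_{j} a_{Kj}\, p_j(\mathbf{x}).
\]
```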

15.
To address sidewall wrinkling in thin aluminum-alloy sheets, this paper analyzes the influence of the process parameters on forming quality with finite element software and proposes an optimization method that combines numerical simulation with intelligent algorithms. First, the design of experiments is carried out with optimal Latin hypercube sampling, and the experimental values are obtained from numerical simulation. Second, a BP neural network is used to fit the relationship between the process parameters and the forming quality; the mean relative error of the predictions is 2.69%, yielding an accurate prediction model. Finally, a genetic algorithm is used for extremum seeking to obtain an optimal combination of process parameters; the relative error between the predicted and simulated wrinkling amplitudes is only 4.03%, and the experimental results agree with the simulation results, which verifies the rationality and effectiveness of the optimization method. The study shows that, with sheet thickness, friction coefficient, and blank-holder force as optimization variables and minimization of the maximum wrinkling amplitude as the objective, a geometric model is built and simulated with the finite element software Autoform; the finite element model is validated by comparing experimental and simulated radial displacements of the wrinkling profile, and the results demonstrate that neural-network modeling combined with genetic-algorithm extremum seeking can effectively resolve the sidewall-wrinkling defect in aluminum alloys.
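A hedged sketch of the two-stage procedure described above: a neural network maps the process parameters (sheet thickness, friction coefficient, blank-holder force) to the maximum wrinkling amplitude, and a simple genetic algorithm then searches the parameter box for the minimum of the surrogate. The data, bounds, network size, and GA settings below are placeholders, not values from the paper.

```python
# Surrogate + genetic-algorithm extremum seeking (illustrative values only).
import numpy as np
from sklearn.neural_network import MLPRegressor  # stands in for the BP network

rng = np.random.default_rng(0)
lo = np.array([1.0, 0.05, 10.0])   # assumed lower bounds: thickness, friction, force
hi = np.array([2.0, 0.20, 60.0])   # assumed upper bounds

# Stage 1: surrogate fitted to (Latin-hypercube-style) simulation samples.
X = lo + (hi - lo) * rng.random((60, 3))                                   # placeholder DOE
y = X[:, 0] * 0.1 + X[:, 1] * 2.0 - X[:, 2] * 0.001 + 0.05 * rng.random(60)  # fake amplitudes
surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0).fit(X, y)

# Stage 2: genetic algorithm minimizing the surrogate prediction.
pop = lo + (hi - lo) * rng.random((40, 3))
for _ in range(100):
    fit = surrogate.predict(pop)
    parents = pop[np.argsort(fit)[:20]]                                    # selection: best half
    children = 0.5 * (parents + parents[rng.permutation(20)])              # arithmetic crossover
    children += (hi - lo) * 0.02 * rng.standard_normal(children.shape)     # mutation
    pop = np.clip(np.vstack([parents, children]), lo, hi)

best = pop[np.argmin(surrogate.predict(pop))]
print("optimal parameters (surrogate):", best)
```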

16.
A Multi-Mesh Multi-Physics (MMMP) method is developed to reduce the very long computational time required for simulating incremental forming processes such as cogging or ring rolling. It consists of using several finite element meshes on the same domain to solve the different physics of the problem. A reference mesh is used to accurately store the results and history variables, while the different computational meshes are optimized for each physics of the problem. The MMMP algorithm consists of two key steps: the generation of the different unstructured meshes and the data transfer between the meshes. The accuracy of the method is supported by using meshes that are embedded node by node. The method is applied to the simulation of the cogging metal forming process, for which it proves as accurate as, and more than ten times faster than, the standard single-mesh method.

17.
The numerical solution of Maxwell's curl equations in the time domain is achieved by combining an unstructured mesh finite element algorithm with a Cartesian finite difference method. The practical problem area selected to illustrate the application of the approach is the simulation of three-dimensional electromagnetic wave scattering. The scattering obstacle and the free space region immediately adjacent to it are discretized using an unstructured mesh of linear tetrahedral elements. The remainder of the computational domain is filled with a regular Cartesian mesh. These two meshes are overlapped to create a hybrid mesh for the numerical solution. On the Cartesian mesh, an explicit finite difference method is adopted, and an implicit/explicit finite element formulation is employed on the unstructured mesh. This approach ensures that computational efficiency is maintained if, for any reason, the generated unstructured mesh contains elements of a size much smaller than that required for accurate wave propagation. A perfectly matched layer is added at the artificial far-field boundary, created by the truncation of the physical domain prior to the numerical solution. The complete solution approach is parallelized, to enable large-scale simulations to be performed effectively. Examples are included to demonstrate the numerical performance that can be achieved. Copyright © 2009 John Wiley & Sons, Ltd.
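The explicit finite difference update used on the Cartesian part of such a hybrid mesh is of the standard Yee/FDTD type; the textbook 1-D loop below (normalized units, soft Gaussian source) is shown only to illustrate that kind of update and is not the paper's 3-D implementation.

```python
# Textbook 1-D Yee/FDTD update loop in normalized units (eps0 = mu0 = 1).
import numpy as np

n, steps = 200, 400
c, dx = 1.0, 1.0
dt = 0.5 * dx / c                      # Courant number 0.5 -> stable explicit step
Ez = np.zeros(n)                       # electric field at integer grid points
Hy = np.zeros(n - 1)                   # magnetic field at half grid points

for t in range(steps):
    Hy += dt / dx * (Ez[1:] - Ez[:-1])             # Faraday update
    Ez[1:-1] += dt / dx * (Hy[1:] - Hy[:-1])       # Ampere update (PEC ends)
    Ez[n // 4] += np.exp(-((t - 40) / 12.0) ** 2)  # soft Gaussian source

print("peak |Ez| after", steps, "steps:", np.abs(Ez).max())
```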

18.
A hybrid finite element formulation is used to model the hygro-thermo-chemical process of cement hydration in high-performance concrete. The temperature and relative humidity fields are directly approximated in the domain of the element using naturally hierarchical bases that are independent of the mapping used to define its geometry. This added flexibility in modeling implies the independent approximation of the heat and moisture flux fields on the boundary of the element, a typical feature of hybrid finite element formulations. The formulation can be implemented using coarse and, eventually, unstructured meshes, which may contain elements with high aspect ratios, an option that can be used to advantage in the simulation of the casting of concrete structural elements. The resulting solving system is sparse and well suited to adaptive refinement and parallelization. It is solved by coupling a trapezoidal time integration rule with an adaptation of the Newton-Raphson method designed to preserve symmetry. The relative performance of the formulation is assessed using a set of test problems supported by experimental data and by results obtained with conventional (conforming) finite elements. Copyright © 2015 John Wiley & Sons, Ltd.
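To illustrate the time integration strategy mentioned above (trapezoidal rule combined with Newton iterations), the sketch below applies it to a scalar nonlinear model problem u' = f(u) rather than the coupled hygro-thermo-chemical system; the model nonlinearity is an arbitrary placeholder.

```python
# Implicit trapezoidal (Crank-Nicolson) stepping with a Newton loop per time step.
import numpy as np

def f(u):  return -u**3 + 1.0        # placeholder nonlinearity
def df(u): return -3.0 * u**2

def trapezoidal_newton(u0, dt, nsteps, tol=1e-10):
    u = u0
    for _ in range(nsteps):
        u_old, u_new = u, u
        for _ in range(30):           # Newton iterations on the implicit equation
            r = u_new - u_old - 0.5 * dt * (f(u_new) + f(u_old))
            if abs(r) < tol:
                break
            u_new -= r / (1.0 - 0.5 * dt * df(u_new))
        u = u_new
    return u

print(trapezoidal_newton(0.0, 0.1, 100))   # approaches the steady state u = 1
```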

19.
The finite element method is being used today to model component assemblies in a wide variety of application areas, including structural mechanics, fluid simulations, and others. Generating hexahedral meshes for these assemblies usually requires the use of geometry decomposition, with different meshing algorithms applied to different regions. While the primary motivation for this approach remains the lack of an automatic, reliable all-hexahedral meshing algorithm, requirements in mesh quality and mesh configuration for typical analyses are also factors. For these reasons, this approach is also sometimes required when producing other types of unstructured meshes. This paper will review progress to date in automating many parts of the hex meshing process, which has halved the time to produce all-hex meshes for large assemblies. Particular issues which have been exposed due to this progress will also be discussed, along with their applicability to the general unstructured meshing problem. Published in 2001 by John Wiley & Sons, Ltd.

20.
Recently, graphics processing units (GPUs) have been increasingly leveraged in a variety of scientific computing applications. However, architectural differences between CPUs and GPUs necessitate the development of algorithms that take advantage of GPU hardware. As sparse matrix-vector (SPMV) multiplication operations are commonly used in finite element analysis, a new SPMV algorithm and several variations are developed for unstructured finite element meshes on GPUs. The effective bandwidth of current GPU algorithms and of the newly proposed algorithms is measured and analyzed for 15 sparse matrices of varying sizes and sparsity structures. The effects of optimization and the differences between the new GPU algorithm and its variants are then studied. Lastly, both the new and the current SPMV GPU algorithms are utilized in a GPU conjugate gradient (CG) solver within GPU finite element simulations of the heart. These results are then compared against results from a parallel PETSc finite element implementation. The effective bandwidth tests indicate that the new algorithms compare very favorably with current algorithms for a wide variety of sparse matrices and can yield very notable benefits. The GPU finite element simulation results demonstrate the benefit of using GPUs for finite element analysis and also show that the proposed algorithms can yield speedup factors of up to 12-fold for real finite element applications. Copyright © 2015 John Wiley & Sons, Ltd.
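For context, the kernel being accelerated is the sparse matrix-vector product in compressed sparse row (CSR) storage; the CPU reference sketch below shows the row-wise work that a GPU thread (or warp) would typically own in a row-parallel SpMV kernel. It is a generic illustration, not the paper's GPU algorithm.

```python
# Reference CSR sparse matrix-vector product y = A @ x.
import numpy as np

def csr_spmv(values, col_idx, row_ptr, x):
    """CSR matrix given by (values, col_idx, row_ptr); returns y = A @ x."""
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):                     # one row per (conceptual) GPU thread
        start, end = row_ptr[i], row_ptr[i + 1]
        y[i] = np.dot(values[start:end], x[col_idx[start:end]])
    return y

# 3x3 example: [[4, 1, 0], [1, 4, 1], [0, 1, 4]]
values  = np.array([4.0, 1.0, 1.0, 4.0, 1.0, 1.0, 4.0])
col_idx = np.array([0, 1, 0, 1, 2, 1, 2])
row_ptr = np.array([0, 2, 5, 7])
print(csr_spmv(values, col_idx, row_ptr, np.array([1.0, 2.0, 3.0])))  # [6, 12, 14]
```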
