Similar Documents
20 similar documents found.
1.
A verification methodology for adaptive processes is devised. The mathematical claims made during the process are identified, and measures are presented to verify that the mathematical equations are solved correctly. The analysis is based on a formal definition of the optimality of the adaptive process in the case of the control of the L-norm of the interpolation error. The process requires a reconstruction that is verified using a proper norm. The process also depends on mesh adaptation toolkits to generate adapted meshes. In this case, the non-conformity measure is used to evaluate how well the adapted meshes conform to the size specification map at each iteration. Finally, the adaptive process should converge toward an optimal mesh. The optimality of the mesh is measured using the standard deviation of the element-wise value of the L-norm of the interpolation error. The results compare the optimality of an anisotropic process to an isotropic process and to uniform refinement on highly anisotropic 2D and 3D test cases. Copyright © 2011 John Wiley & Sons, Ltd.
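The optimality measure described in this abstract, the standard deviation of the element-wise error, is simple to state; a minimal sketch (the function name is illustrative, not taken from the paper):

```python
import math

def optimality_measure(elem_errors):
    """Standard deviation of the element-wise interpolation error.
    An optimal adapted mesh equidistributes the error over the elements,
    so this measure should decrease toward zero as the process converges."""
    n = len(elem_errors)
    mean = sum(elem_errors) / n
    return math.sqrt(sum((e - mean) ** 2 for e in elem_errors) / n)
```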

2.
This paper describes a p-hierarchical adaptive procedure based on minimizing the classical energy norm for the scaled boundary finite element method. The reference solution, which is the solution of the fine mesh formed by uniformly refining the current mesh element-wise one order higher, is used to represent the unknown exact solution. The optimum mesh is assumed to be obtained when each element contributes equally to the global error. The refinement criteria and the energy norm-based error estimator are described and formulated for the scaled boundary finite element method. The effectivity index is derived and used to examine the quality of the proposed error estimator. An algorithm for implementing the proposed p-hierarchical adaptive procedure is developed. Numerical studies are performed on various bounded domain and unbounded domain problems. The results reflect a number of key points. Higher-order elements are shown to be highly efficient. The effectivity index indicates that the proposed error estimator based on the classical energy norm works effectively and that the reference solution employed is a high-quality approximation of the exact solution. The proposed p-hierarchical adaptive strategy works efficiently. Copyright © 2007 John Wiley & Sons, Ltd.
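The equal-contribution assumption can be sketched as a marking rule: an element's order is raised when its error exceeds an equal share of the global energy-norm error (a hedged sketch; the paper's actual refinement criteria are more detailed):

```python
import math

def equal_contribution_refine(element_errors, p_orders, p_max=8):
    """Raise the polynomial order of every element whose energy-norm error
    exceeds the equal-contribution target err_global / sqrt(N); the optimum
    mesh is assumed reached when each element contributes equally."""
    n = len(element_errors)
    err_global = math.sqrt(sum(e * e for e in element_errors))
    target = err_global / math.sqrt(n)
    return [p + 1 if e > target and p < p_max else p
            for e, p in zip(element_errors, p_orders)]
```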

3.
Adaptive algorithms are important tools for efficient finite-element mesh design. In this paper, an error controlled adaptive mesh-refining algorithm is proposed for a non-conforming low-order finite-element method for the Reissner–Mindlin plate model. The algorithm is controlled by a reliable and efficient residual-based a posteriori error estimate, which is robust with respect to the plate's thickness. Numerical evidence for this and the efficiency of the new algorithm is provided in the sense that non-optimal convergence rates are optimally improved in our numerical experiments. Copyright © 2003 John Wiley & Sons, Ltd.

4.
A goal-oriented algorithm is developed and applied for hp-adaptive approximations given by the discontinuous Galerkin finite element method for the biharmonic equation. The methodology is based on the dual problem associated with the target functional. We consider three error estimators and analyse their properties as basic tools for the design of the hp-adaptive algorithm. To improve adaptation, the combination of two different error estimators is used, each one at its best efficiency, to guide the tasks of where and how to adapt the approximation spaces. The performance of the resulting hp-adaptive schemes is illustrated by numerical experiments for two benchmark problems. Copyright © 2013 John Wiley & Sons, Ltd.
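The dual-problem machinery reduces, per element, to weighting primal residuals by the adjoint solution; a minimal sketch (the fixed-fraction marking rule is an assumption for illustration, not the paper's strategy):

```python
def dual_weighted_indicators(residuals, dual_weights):
    """Goal-oriented indicators: the element residual of the primal problem
    weighted by the dual (adjoint) solution associated with the target
    functional, ranking elements by their impact on the goal quantity."""
    return [abs(r * z) for r, z in zip(residuals, dual_weights)]

def mark_elements(indicators, fraction=0.3):
    """Mark the top `fraction` of elements for adaption (a simple
    fixed-fraction strategy, used here only as an example)."""
    k = max(1, int(len(indicators) * fraction))
    order = sorted(range(len(indicators)), key=lambda i: -indicators[i])
    return sorted(order[:k])
```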

5.
We describe the development and application of a finite element (FE) self-adaptive hp goal-oriented algorithm for elliptic problems. The algorithm delivers (without any user interaction) a sequence of optimal hp-grids. This sequence of grids minimizes the error of a prescribed quantity of interest with respect to the problem size. The refinement strategy is an extension of a fully automatic, energy-norm based, hp-adaptive algorithm. We illustrate the efficiency of the method with 2D numerical results. Among other problems, we apply the goal-oriented hp-adaptive strategy to simulate direct current (DC) resistivity logging instruments (including through casing resistivity tools) in a borehole environment and for the assessment of rock formation properties. Copyright © 2005 John Wiley & Sons, Ltd.

6.
An s-adaptive finite element procedure is developed for the transient analysis of 2-D solid mechanics problems with material non-linearity due to progressive damage. The resulting adaptive method simultaneously estimates and controls both the spatial error and temporal error within user-specified tolerances. The spatial error is quantified by the Zienkiewicz–Zhu error estimator and computed via superconvergent patch recovery, while the estimation of temporal error is based on the assumption of linearly varying third-order time derivatives of the displacement field in conjunction with direct numerical time integration. The distinguishing characteristic of the s-adaptive procedure is the use of finite element mesh superposition (s-refinement) to provide spatial adaptivity. Mesh superposition proves to be particularly advantageous in computationally demanding non-linear transient problems since it is faster, simpler and more efficient than traditional h-refinement schemes. Numerical examples are provided to demonstrate the performance characteristics of the s-adaptive method for quasi-static and transient problems with material non-linearity. Copyright © 2007 John Wiley & Sons, Ltd.
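A 1D caricature of the spatial error measure: on a linear-element mesh, superconvergent patch recovery amounts to fitting a smoother nodal gradient and measuring its mismatch with the raw element gradients (simple nodal averaging stands in here for the full patch least-squares fit):

```python
import math
import numpy as np

def zz_error_1d(x, u):
    """Zienkiewicz-Zhu-style error estimate on a 1D linear-element mesh:
    compare the raw element-wise gradient with a smoothed (recovered)
    nodal gradient interpolated back onto the elements."""
    du = np.diff(u) / np.diff(x)                 # element gradients
    g = np.zeros_like(x, dtype=float)            # recovered nodal gradients
    g[1:-1] = 0.5 * (du[:-1] + du[1:])           # interior nodes: patch average
    g[0], g[-1] = du[0], du[-1]                  # boundary nodes: one-sided
    g_mid = 0.5 * (g[:-1] + g[1:])               # recovered value at midpoints
    h = np.diff(x)
    return math.sqrt(np.sum(h * (g_mid - du) ** 2))  # element-wise L2 mismatch
```

For a globally linear field the recovered and raw gradients coincide and the estimate is zero, as it should be.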

7.
This paper presents a finite element solver for the simulation of steady non-Newtonian flow problems, using a regularized Bingham model, with adaptive mesh refinement capabilities. The solver is based on a stabilized formulation derived from the variational multiscale framework. This choice allows the introduction of an a posteriori error indicator based on the small-scale part of the solution, which is used to drive a mesh refinement procedure based on element subdivision. This approach is applied to the solution of a series of benchmark examples, which allow us to validate the formulation and assess its capabilities to model 2D and 3D non-Newtonian flows.
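The abstract does not say which regularization is used; one common choice is the Papanastasiou form, which keeps the effective viscosity finite as the shear rate vanishes. A sketch under that assumption:

```python
import math

def bingham_papanastasiou_viscosity(gamma_dot, mu, tau_y, m):
    """Effective viscosity of a Papanastasiou-regularized Bingham fluid
    (one common regularization; assumed here for illustration):
    mu_eff = mu + tau_y * (1 - exp(-m * gdot)) / gdot,
    which tends to mu + tau_y * m as the shear rate gdot -> 0."""
    if gamma_dot < 1e-12:
        return mu + tau_y * m          # analytic limit gdot -> 0
    return mu + tau_y * (1.0 - math.exp(-m * gamma_dot)) / gamma_dot
```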

8.
A variational h-adaptive finite element formulation is proposed. The distinguishing feature of this method is that mesh refinement and coarsening are governed by the same minimization principle characterizing the underlying physical problem. Hence, no error estimates are invoked at any stage of the adaption procedure. As a consequence, linearity of the problem and a corresponding Hilbert-space functional framework are not required, and the proposed formulation can be applied to highly non-linear phenomena. The basic strategy is to refine (respectively, unrefine) the spatial discretization locally if such refinement (respectively, unrefinement) results in a sufficiently large reduction (respectively, sufficiently small increase) in the energy. This strategy leads to an adaption algorithm having O(N) complexity. Local refinement is effected by edge-bisection and local unrefinement by the deletion of terminal vertices. Dissipation is accounted for within a time-discretized variational framework resulting in an incremental potential energy. In addition, the entire hierarchy of successive refinements is stored and the internal state of parent elements is updated so that no mesh-transfer operator is required upon unrefinement. The versatility and robustness of the resulting variational adaptive finite element formulation are illustrated by means of selected numerical examples. Copyright © 2008 John Wiley & Sons, Ltd.
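The refine/unrefine decision rule described above can be sketched directly, with trial energies assumed precomputed by local re-solution (names and data layout are illustrative):

```python
def accept_adaptions(e0, trials, tol_refine, tol_unrefine):
    """Energy-driven accept/reject rule sketched from the abstract: keep a
    local refinement only if it lowers the incremental potential energy by
    more than tol_refine; keep an unrefinement only if it raises the energy
    by less than tol_unrefine. `trials` maps an operation id to
    (kind, energy_after)."""
    accepted = []
    for op, (kind, e_after) in trials.items():
        if kind == "refine" and e0 - e_after > tol_refine:
            accepted.append(op)
        elif kind == "unrefine" and e_after - e0 < tol_unrefine:
            accepted.append(op)
    return accepted
```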

9.
This paper is concerned with the effective numerical implementation of the adaptive dual boundary-element method (DBEM) for two-dimensional potential problems. Two boundary integral equations, the potential and the flux equations, are applied for collocation along regular and degenerate boundaries, always leading to a single-region analysis. Taking advantage of non-conforming parametric boundary elements, the method introduces a simple error estimator, based on the discontinuity of the solution across the boundaries between adjacent elements, and implements the p, h and mixed versions of adaptive mesh refinement. Examples of several geometries, including degenerate boundaries, are analyzed with this new formulation to solve regular and singular problems. The accuracy and efficiency of the implementation described herein make this a reliable formulation of the adaptive DBEM. Copyright © 2011 John Wiley & Sons, Ltd.
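For a chain of non-conforming (discontinuous) boundary elements, the error estimator amounts to the solution jump between adjacent element ends; a sketch with a made-up data layout:

```python
def jump_indicators(elem_end_values):
    """Error indicators from the solution discontinuity across adjacent
    non-conforming boundary elements. elem_end_values[i] = (u_at_left_end,
    u_at_right_end) of element i along the boundary chain; the indicator
    for interface i is the jump between element i and element i+1."""
    return [abs(elem_end_values[i][1] - elem_end_values[i + 1][0])
            for i in range(len(elem_end_values) - 1)]
```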

10.
This paper details a multigrid-accelerated cut-cell non-conforming Cartesian mesh methodology for the modelling of inviscid compressible and incompressible flow. This is done via a single equation set that describes sub-, trans-, and supersonic flows. Cut-cell technology is developed to furnish body-fitted meshes with an overlapping mesh as starting point, and in a manner which is insensitive to surface definition inconsistencies. Spatial discretization is effected via an edge-based vertex-centred finite volume method. An alternative dual-mesh construction strategy, similar to the cell-centred method, is developed. Incompressibility is dealt with via an artificial compressibility algorithm, and stabilization achieved with artificial dissipation. In compressible flow, shocks are captured via pressure switch-activated upwinding. The solution process is accelerated with full approximation storage (FAS) multigrid where coarse meshes are generated automatically via a volume agglomeration methodology. This is the first time that the proposed discretization and solution methods are employed to solve a single compressible–incompressible equation set on cut-cell Cartesian meshes. The developed technology is validated by numerical experiments. The standard discretization and alternative methods were found equivalent in accuracy and computational cost. The multigrid implementation achieved decreases in CPU time of up to one order of magnitude. Copyright © 2007 John Wiley & Sons, Ltd.

11.
An efficient parallel computing method for high-speed compressible flows is presented. The numerical analysis of flows with shocks requires very fine computational grids, and grid generation requires a great deal of time. In the proposed method, all computational procedures, from the mesh generation to the solution of a system of equations, can be performed seamlessly in parallel in terms of nodes. A local finite-element mesh is generated robustly around each node, even for severe boundary shapes such as cracks. The algorithm and the data structure of the finite-element calculation are based on nodes, and parallel computing is realized by dividing the system of equations by the rows of the global coefficient matrix. Inter-processor communication is minimized by renumbering the nodal identification numbers using ParMETIS. The numerical scheme for high-speed compressible flows is based on the two-step Taylor–Galerkin method. The proposed method is implemented on distributed memory systems, such as an Alpha PC cluster, and a parallel supercomputer, Hitachi SR8000. The performance of the method is illustrated by the computation of supersonic flows over a forward-facing step. The numerical examples show that crisp shocks are effectively computed on multiprocessors at high efficiency. Copyright © 2003 John Wiley & Sons, Ltd.

12.
This paper is concerned with the development of a general framework for adaptive mesh refinement and coarsening in three-dimensional finite-deformation dynamic–plasticity problems. Mesh adaption is driven by a posteriori global error bounds derived on the basis of a variational formulation of the incremental problem. The particular mesh-refinement strategy adopted is based on Rivara's longest-edge propagation path (LEPP) bisection algorithm. Our strategy for mesh coarsening, or unrefinement, is based on the elimination of elements by edge-collapse. The convergence characteristics of the method in the presence of strong elastic singularities are tested numerically. An application to the three-dimensional simulation of adiabatic shear bands in dynamically loaded tantalum is also presented which demonstrates the robustness and versatility of the method. Copyright © 2001 John Wiley & Sons, Ltd.
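Rivara's LEPP is the backbone of the refinement strategy: given, for each triangle, its longest edge and the neighbour across it, the propagation path is traced until a terminal configuration is reached. A sketch over an abstract mesh data structure (the layout is illustrative):

```python
def lepp_path(tris, start):
    """Follow Rivara's longest-edge propagation path: from a target triangle,
    repeatedly step to the neighbour across the current longest edge until a
    terminal triangle is reached (a boundary triangle, or a pair sharing
    their common longest edge). tris[t] = (longest_edge_id, neighbour_or_None)."""
    path = [start]
    while True:
        edge, nbr = tris[path[-1]]
        if nbr is None:                     # longest edge on the boundary
            return path
        if tris[nbr][0] == edge:            # terminal pair: shared longest edge
            path.append(nbr)
            return path
        path.append(nbr)                    # propagate to the neighbour
```

In the actual algorithm the terminal triangle(s) are bisected and the path is re-traced until the original target triangle itself is bisected.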

13.
We propose a fourth-order compact scheme on structured meshes for the Helmholtz equation given by R(φ) := f(x) + Δφ + ξ²φ = 0. The scheme consists of taking the alpha-interpolation of the Galerkin finite element method and the classical central finite difference method. In 1D, this scheme is identical to the alpha-interpolation method (J. Comput. Appl. Math. 1982; 8(1):15–19), and in 2D, making the choice α = 0.5, we recover the generalized fourth-order compact Padé approximation (J. Comput. Phys. 1995; 119:252–270; Comput. Meth. Appl. Mech. Engrg 1998; 163:343–358) (therein using the parameter γ = 2). We follow (SIAM Rev. 2000; 42(3):451–484; Comput. Meth. Appl. Mech. Engrg 1995; 128:325–359) for the analysis of this scheme, and its performance on square meshes is compared with that of the quasi-stabilized FEM (Comput. Meth. Appl. Mech. Engrg 1995; 128:325–359). In particular, we show that the relative phase error of the numerical solution and the local truncation error of this scheme for plane wave solutions diminish at the rate O((ξℓ)⁴), where ξ and ℓ represent the wavenumber and the mesh size, respectively. An expression for the parameter α is given that minimizes the maximum relative phase error in a sense explained in Section 4.5. Convergence studies of the error in the L2 norm, the H1 semi-norm and the l2 Euclidean norm are carried out, and the pollution effect is found to be small. Copyright © 2010 John Wiley & Sons, Ltd.
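In 1D the blend is transparent: the stiffness part of the stencil is shared by both methods, and α interpolates between the consistent (FEM) and lumped (FD) mass contributions. A sketch for the row stencil of the discrete operator on a uniform mesh of size h (sign convention: stencil of K - ξ²M, i.e. of -(Δ + ξ²) in FEM scaling):

```python
def alpha_stencil(h, xi, alpha):
    """1D alpha-interpolation stencil for the Helmholtz operator: stiffness
    is common to Galerkin FEM and central FD; the mass term blends the
    consistent (FEM) and lumped (FD) masses. alpha=1 -> pure Galerkin FEM,
    alpha=0 -> pure central finite differences."""
    K = [-1.0 / h, 2.0 / h, -1.0 / h]            # stiffness, both schemes
    Mc = [h / 6.0, 4.0 * h / 6.0, h / 6.0]       # consistent (FEM) mass
    Ml = [0.0, h, 0.0]                           # lumped (FD) mass
    return [k - xi ** 2 * (alpha * mc + (1.0 - alpha) * ml)
            for k, mc, ml in zip(K, Mc, Ml)]
```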

14.
The finite cell method (FCM) is an immersed domain finite element method that combines higher-order non-boundary-fitted meshes, weak enforcement of Dirichlet boundary conditions, and adaptive quadrature based on recursive subdivision. Because of its ability to improve the geometric resolution of intersected elements, it can be characterized as an immersogeometric method. In this paper, we extend the FCM, so far only used with Cartesian hexahedral elements, to higher-order non-boundary-fitted tetrahedral meshes, based on a reformulation of the octree-based subdivision algorithm for tetrahedral elements. We show that the resulting TetFCM scheme is fully accurate in an immersogeometric sense, that is, the solution fields achieve optimal and exponential rates of convergence for h-refinement and p-refinement, if the immersed geometry is resolved with sufficient accuracy. TetFCM can leverage the natural ability of tetrahedral elements for local mesh refinement in three dimensions. Its suitability for problems with sharp gradients and highly localized features is illustrated by the immersogeometric phase-field fracture analysis of a human femur bone. Copyright © 2016 John Wiley & Sons, Ltd.
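The recursive-subdivision quadrature idea can be sketched in 1D: bisect only intervals that the immersed boundary cuts, down to a fixed depth. This is a caricature of the octree/tetrahedral subdivision (function names are illustrative, and a simply connected physical region is assumed):

```python
def cut_cell_measure(inside, a, b, depth):
    """Recursive-bisection estimate of the measure of the physical part of a
    cut cell, in the spirit of FCM adaptive quadrature (1D sketch). A cell
    whose endpoints and midpoint agree is treated as uncut; a cut cell is
    bisected until `depth` is exhausted, then classified by its midpoint."""
    mid = 0.5 * (a + b)
    la, lb = inside(a), inside(b)
    if depth == 0 or (la == lb == inside(mid)):
        return (b - a) if inside(mid) else 0.0
    return (cut_cell_measure(inside, a, mid, depth - 1)
            + cut_cell_measure(inside, mid, b, depth - 1))
```

The error is bounded by the width of the finest cut cell, here (b - a) / 2**depth.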

15.
A four-node, quadrilateral smoothing element is developed based upon a penalized-discrete-least-squares variational formulation. The smoothing methodology recovers C1-continuous stresses, thus enabling effective a posteriori error estimation and automatic adaptive mesh refinement. The element formulation originates from a five-node macro-element configuration consisting of four triangular anisoparametric smoothing elements in a cross-diagonal pattern. This element pattern enables a convenient closed-form solution for the degrees of freedom of the interior node, resulting from explicitly enforcing a set of natural edge-wise penalty constraints. The degree-of-freedom reduction scheme leads to a very efficient formulation of a four-node quadrilateral smoothing element without any compromise in the robustness and accuracy of the smoothing analysis. The application examples include stress recovery and error estimation in adaptive mesh refinement solutions for an elasticity problem and an aerospace structural component. Copyright © 1999 John Wiley & Sons, Ltd.

16.
We present two efficient two-grid schemes for the approximation of two-dimensional semi-linear reaction–diffusion equations using an expanded mixed finite element method. To linearize the discretized equations, we use two Newton iterations on the fine grid in both methods. First, we solve the original non-linear problem on the coarse grid. In the first method we then perform two Newton iterations on the fine grid, while in the second method we make a correction on the coarse grid between the two Newton iterations on the fine grid. These two-grid ideas come from Xu's work (SIAM J. Sci. Comput. 1994; 15:231–237; SIAM J. Numer. Anal. 1996; 33:1759–1777) on the standard finite element method; we extend them to the mixed finite element method. Moreover, we obtain error estimates for both two-grid algorithms. It is shown that the coarse space can be extremely coarse and that asymptotically optimal approximation is achieved as long as the mesh sizes satisfy H = O(h¼) in the first algorithm and H = O(h?) in the second. Copyright © 2006 John Wiley & Sons, Ltd.
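A scalar 1D analogue shows the structure of the first algorithm: fully solve the non-linear problem on the coarse grid, interpolate, then spend only two Newton iterations on the fine grid. Finite differences and the model problem -u'' + u³ = f stand in for the paper's 2D expanded mixed FEM; all names are illustrative:

```python
import numpy as np

def solve_newton(n, f, u0, iters):
    """Newton's method for -u'' + u^3 = f(x) on (0,1), u(0)=u(1)=0,
    second-order finite differences on n interior points."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    u = u0.copy()
    for _ in range(iters):
        F = A @ u + u**3 - f(x)          # residual
        J = A + np.diag(3.0 * u**2)      # Jacobian of the semi-linear term
        u -= np.linalg.solve(J, F)
    return x, u

def two_grid(nc, nf, f, coarse_iters=20, fine_iters=2):
    """Two-grid scheme in the spirit of Xu: converge the non-linear problem
    on the coarse grid, interpolate to the fine grid, then take only two
    (cheaply initialized) Newton iterations there."""
    xc, uc = solve_newton(nc, f, np.zeros(nc), coarse_iters)
    hf = 1.0 / (nf + 1)
    xf = np.linspace(hf, 1.0 - hf, nf)
    u0 = np.interp(xf, np.concatenate(([0.0], xc, [1.0])),
                   np.concatenate(([0.0], uc, [0.0])))
    return solve_newton(nf, f, u0, fine_iters)
```

With the manufactured solution u = sin(πx), two fine-grid Newton steps from the interpolated coarse solution already reach discretization-level accuracy.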

17.
The convergence of the lp-norm algorithm for a polynomial perceptron under different error-signal distributions is analyzed in this paper. To see the effect of the error signal on the convergence rate, two types of activation functions are considered in the analysis: one of a linear type and the other of a sigmoidal type. Different activation functions yield different ranges of the output signal and, in turn, different error-signal distributions. A linear activation function lets the error signal be distributed in an uncertain way, while a sigmoidal activation function confines it to a tightly bounded region. Based on this difference, the convergence of the lp-norm algorithm, 1 ≤ p ≤ 2, is investigated. Expressions for the average learning gains are obtained in terms of the power metric p, the error probability, and the upper bound of the error-signal distribution. Analytic results indicate that the lp-norm algorithm is of particular value for perceptrons using sigmoidal activation functions. Computer simulation of an adaptive equalizer using this algorithm confirms the theoretical analysis.
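For concreteness, one common form of the lp-norm stochastic update for a sigmoidal (tanh) perceptron is sketched below: minimizing |e|^p replaces the LMS error term e with |e|^(p-1)·sign(e), and p = 2 recovers the usual rule. The exact update and gain definitions in the paper may differ:

```python
import math

def lp_norm_update(w, x, d, mu, p):
    """One stochastic lp-norm update for a tanh perceptron: the gradient of
    |e|^p yields the error nonlinearity |e|^(p-1) * sign(e); p = 2 gives
    the ordinary LMS/delta rule. (Illustrative sketch, not the paper's
    exact formulation.)"""
    s = sum(wi * xi for wi, xi in zip(w, x))
    y = math.tanh(s)
    e = d - y
    g = abs(e) ** (p - 1) * math.copysign(1.0, e) if e else 0.0
    dphi = 1.0 - y * y                # derivative of tanh at s
    return [wi + mu * g * dphi * xi for wi, xi in zip(w, x)]
```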

18.
The present work is devoted to the damped Newton method applied to a class of non-linear elasticity problems. Following the approach suggested in earlier related publications, we consider a two-level procedure that involves (i) solving the non-linear problem on a coarse mesh, (ii) interpolating the coarse-mesh solution to the fine mesh, and (iii) performing non-linear iterations on the fine mesh. Numerical experiments suggest that, when one is interested in minimizing the L2-norm of the error rather than the residual norm, the coarse-mesh solution gives a sufficiently accurate approximation to the displacement field on the fine mesh, and only a few (or even just one) of the costly non-linear iterations on the fine mesh are needed to achieve an acceptable accuracy of the solution (of the same order as the accuracy of the Galerkin solution on the fine mesh). Copyright © 2000 John Wiley & Sons, Ltd.

19.
The main aim of this contribution is to provide a mixed finite element for small-strain elasto-viscoplastic material behavior based on the least-squares method. The L2-norm minimization of the residuals of the given first-order system of differential equations leads to a two-field functional with displacements and stresses as process variables. For the continuous approximation of the stresses, lowest-order Raviart–Thomas elements are used, whereas for the displacements, standard conforming elements are employed. It is shown that the non-linear least-squares functional provides an a posteriori error estimator, which establishes ellipticity of the proposed variational approach. Furthermore, details of the implementation of the least-squares mixed finite elements are given and some numerical examples are presented. Copyright © 2008 John Wiley & Sons, Ltd.

20.
In the present study, a hexahedral mesh generator was developed for remeshing in three-dimensional metal forming simulations. It is based on the master-grid approach and an octree-based refinement scheme to generate uniformly sized or locally refined hexahedral mesh systems. In particular, for refined hexahedral mesh generation, the modified Laplacian mesh smoothing scheme described in the two-dimensional study (Part I) was used to improve the mesh quality while minimizing the loss of element size conditions. In order to investigate the applicability and effectiveness of the developed hexahedral mesh generator, several three-dimensional metal forming simulations were carried out using uniformly sized hexahedral mesh systems. Also, a comparative study of indentation analyses was conducted to check the computational efficiency of locally refined hexahedral mesh systems. In particular, for the specification of refinement conditions, distributions of the effective strain-rate gradient and of a posteriori error values based on a Z2 error estimator were used. From this study, it is concluded that the developed hexahedral mesh generator can be effectively used for three-dimensional metal forming simulations. Copyright © 2002 John Wiley & Sons, Ltd.
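One sweep of octree-style refinement can be sketched as follows; the cell layout and indicator interface are illustrative, and the actual generator additionally applies the modified Laplacian smoothing, which is not shown:

```python
def refine_octree(cells, indicator, threshold, max_level):
    """One refinement sweep: split every hexahedral cell whose error
    indicator exceeds the threshold into 8 children (2x per axis), up to
    max_level. Each cell is ((x, y, z) of its min corner, size, level)."""
    out = []
    for (x, y, z), s, lvl in cells:
        if indicator((x, y, z), s) > threshold and lvl < max_level:
            h = s / 2.0
            out += [((x + i * h, y + j * h, z + k * h), h, lvl + 1)
                    for i in (0, 1) for j in (0, 1) for k in (0, 1)]
        else:
            out.append(((x, y, z), s, lvl))
    return out
```

In practice the indicator would be fed by the effective strain-rate gradient or the Z2 error values mentioned in the abstract.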
