Similar Documents
20 similar documents found.
1.
In this paper we introduce the Boundary Element Tearing and Interconnecting (BETI) methods as boundary element counterparts of the well-established Finite Element Tearing and Interconnecting (FETI) methods. In some practically important applications, such as far-field computations, the handling of singularities, and moving parts, BETI methods certainly have advantages over their finite element counterparts. This is especially true for the sparse versions of the BETI preconditioners and methods. Moreover, there is a unified framework for coupling, handling, and analyzing both methods. In particular, the FETI methods can benefit from preconditioning components constructed by boundary element techniques. The first numerical results confirm the efficiency and the robustness predicted by our analysis.
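For orientation, tearing and interconnecting methods of this family start from a constrained minimization over torn subdomain unknowns and pass to a dual interface problem in the Lagrange multipliers. The following is a generic sketch of that standard setting, not a formula quoted from the paper; K_i denotes the subdomain stiffness or boundary element operator, B_i a signed Boolean jump operator, and K_i^+ a pseudoinverse for floating subdomains.

```latex
\min_{u_1,\dots,u_N}\ \sum_{i=1}^{N}\Bigl(\tfrac12\,u_i^{\top} K_i u_i - f_i^{\top} u_i\Bigr)
\quad\text{subject to}\quad \sum_{i=1}^{N} B_i u_i = 0,
\qquad\Longrightarrow\qquad
F\lambda = d,\quad F=\sum_{i=1}^{N} B_i K_i^{+} B_i^{\top}.
```

The dual system F λ = d is then solved by a preconditioned Krylov method; in the BETI variant the subdomain operators are (approximate) boundary element realizations of the Steklov–Poincaré operators.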

2.
Two of the most recent and important nonoverlapping domain decomposition methods, the BDDC method (Balancing Domain Decomposition by Constraints) and the FETI-DP method (Dual-Primal Finite Element Tearing and Interconnecting), are here extended to spectral element discretizations of second-order elliptic problems. In spite of the more severe ill-conditioning of the spectral element discrete systems, compared with low-order finite elements and finite differences, these methods retain their good properties of scalability, quasi-optimality, and independence of the discontinuities of the elliptic operator coefficients across subdomain interfaces.
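For reference, the scalability and quasi-optimality claimed here are usually expressed through a polylogarithmic condition number bound. In the low-order finite element case the standard FETI-DP/BDDC estimate reads as follows, with H the subdomain size, h the element size, and C independent of H, h and of coefficient jumps aligned with the interface (an illustrative bound, not quoted from the paper):

```latex
\kappa\bigl(M^{-1}F\bigr) \;\le\; C\,\Bigl(1+\log\frac{H}{h}\Bigr)^{2}.
```

For spectral element discretizations an analogous polylogarithmic bound holds, now also involving the polynomial degree.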

3.
Finite Element Tearing and Interconnecting (FETI) methods are a family of nonoverlapping domain decomposition methods which have been proven to be robust and parallel scalable for a variety of elliptic partial differential equations. Here, an introduction to the classical one-level FETI methods is given, as well as to the more recent dual-primal FETI methods and some of their variants. With the advent of modern parallel computers with thousands of processors, certain inexact components are needed in these methods to maintain scalability. An introduction to a recent class of inexact dual-primal FETI methods is presented. Scalability results for an elasticity problem using 65 536 processor cores of the JUGENE supercomputer at Forschungszentrum Jülich show the potential of these methods. A hyperelastic problem from biomechanics is presented as an application of the methods to nonlinear finite element analysis.
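A minimal, self-contained sketch of the preconditioned conjugate gradient loop that drives FETI-type iterations. The dense matrix below is only a small symmetric positive definite stand-in for the dual operator F, and the diagonal preconditioner is illustrative; this is not an actual FETI implementation.

```python
import numpy as np

def pcg(apply_F, apply_Minv, d, tol=1e-10, max_it=200):
    """Preconditioned CG on F lam = d; F and M^{-1} are given as callables."""
    lam = np.zeros_like(d)
    r = d - apply_F(lam)          # initial residual
    z = apply_Minv(r)             # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_it):
        Fp = apply_F(p)
        alpha = rz / (p @ Fp)
        lam += alpha * p
        r -= alpha * Fp
        if np.linalg.norm(r) < tol:
            break
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return lam

# Stand-in SPD "dual operator" and a simple diagonal preconditioner.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
F = A @ A.T + 50 * np.eye(50)
d = rng.standard_normal(50)
lam = pcg(lambda x: F @ x, lambda r: r / np.diag(F), d)
print(np.linalg.norm(F @ lam - d))
```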

4.
In this paper we shall study Galerkin approximations to the solution of linear second-order hyperbolic integro-differential equations. The continuous-time and Crank-Nicolson discrete-time Galerkin procedures will be defined, and optimal error estimates for these procedures will be demonstrated by using a “non-classical” elliptic projection.
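A model problem of the class considered here (schematic form only, not quoted from the paper) is the second-order hyperbolic integro-differential equation with a memory term:

```latex
u_{tt} + A\,u \;=\; \int_{0}^{t} B(t,s)\,u(s)\,\mathrm{d}s + f(t)
\qquad\text{in }\Omega\times(0,T].
```

Here A is a second-order elliptic operator and B(t,s) a family of second-order operators, together with initial and boundary conditions; the “non-classical” elliptic projection is typically a Ritz-Volterra-type projection adapted to the memory term.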

5.
This paper describes the implementation, performance, and scalability of our communication layer developed for the Total FETI (TFETI) and Hybrid Total FETI (HTFETI) solvers. HTFETI is based on our variant of the Finite Element Tearing and Interconnecting (FETI) type domain decomposition method. In this approach a small number of neighboring subdomains is aggregated into clusters, which results in a smaller coarse problem. To solve the original problem, the TFETI method is applied twice: first to the clusters and then to the subdomains in each cluster. The current implementation of the solver focuses on the performance optimization of the main CG iteration loop, including: implementation of communication-hiding and communication-avoiding techniques for global communications; optimization of the nearest-neighbor communication (multiplication with the global gluing matrix); and optimization of the parallel CG algorithm to iterate over local Lagrange multipliers only. The performance is demonstrated on a linear elasticity 3D cube and on real-world benchmarks.
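Iterating over local Lagrange multipliers only means that vector updates are purely local and each dot product costs a single global reduction. A minimal sketch of that pattern, assuming mpi4py; the array contents are arbitrary placeholders.

```python
# Distributed dot product over locally stored Lagrange multipliers:
# local work plus one global reduction per dot product.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

lam_local = np.full(4, float(rank + 1))   # this rank's slice of lambda
r_local = np.ones(4)                      # this rank's slice of the residual

local_dot = lam_local @ r_local                      # purely local work
global_dot = comm.allreduce(local_dot, op=MPI.SUM)   # one global reduction

if rank == 0:
    print("global dot product:", global_dot)
```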

6.
F. Fierro, R. Goglione, M. Paolini, Calcolo 1994, 31(3-4): 191-210
We consider the prescribed curvature problem including anisotropy effects. The functional setting in BV(Ω; {−1, 1}) is convexified and regularized by strictly convex functionals which, in turn, are discretized by continuous piecewise linear finite elements. It is known that sequences of discrete minima converge to a continuous minimizer. We discuss an efficient implementation of the minimization procedure based on a constrained modified Newton algorithm. Several numerical examples illustrate the performance of our algorithm. This work was partially supported by MURST (Progetto Nazionale “Analisi Numerica e Matematica Computazionale”) and CNR (IAN and Contracts 92.00833.01, 94.00139.01) of Italy.
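Below is a schematic damped Newton loop of the kind used to minimize smooth, strictly convex regularized functionals. The toy functional, the damping test, and all names are illustrative stand-ins, not the paper's discretized anisotropic functional or its constraint handling.

```python
import numpy as np

def damped_newton(grad, hess, u0, tol=1e-10, max_it=50):
    """Damped (line-searched) Newton iteration for a strictly convex functional."""
    u = u0.copy()
    for _ in range(max_it):
        g = grad(u)
        if np.linalg.norm(g) < tol:
            break
        step = np.linalg.solve(hess(u), -g)   # Newton direction
        t = 1.0
        # Backtracking damping: shrink the step until the gradient norm decreases.
        while (np.linalg.norm(grad(u + t * step)) > (1 - 0.25 * t) * np.linalg.norm(g)
               and t > 1e-8):
            t *= 0.5
        u += t * step
    return u

# Toy strictly convex functional: F(u) = sum_i sqrt(eps^2 + u_i^2) + 0.5*||u - g||^2
eps, g_data = 1e-2, np.linspace(-1.0, 1.0, 20)
grad = lambda u: u / np.sqrt(eps**2 + u**2) + (u - g_data)
hess = lambda u: np.diag(eps**2 / (eps**2 + u**2)**1.5) + np.eye(g_data.size)
u_min = damped_newton(grad, hess, np.zeros_like(g_data))
print(np.linalg.norm(grad(u_min)))
```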

7.
Recovery of epipolar geometry is a fundamental problem in computer vision. The introduction of the “joint image manifold” (JIM) makes it possible to treat the recovery of camera motion and epipolar geometry as the problem of fitting a manifold to the data measured in a stereo pair. The manifold has a singularity and a boundary, so special care must be taken when fitting it. Four fitting methods are discussed: direct, algebraic, geometric, and the integrated maximum likelihood (IML) based method. The first three methods are the exact analogues of three common methods for recovering epipolar geometry. The more recently introduced IML method seeks the manifold which has the highest “support,” in the sense that the largest measure of its points are close to the data. While computationally more intensive than the other methods, its results are better in some scenarios. Both simulations and experiments suggest that the advantages of IML manifold fitting carry over to the task of recovering epipolar geometry, especially when the extent of the data and/or the motion are small.
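As context for the “algebraic” analogue mentioned above, the classical normalized eight-point estimate of the fundamental matrix minimizes the algebraic residual over unit-norm F. The following compact sketch with synthetic data shows that standard textbook method, not the authors' IML fitting.

```python
import numpy as np

def eight_point(x1, x2):
    """Classical (normalized) eight-point estimate of the fundamental matrix.
    x1, x2: (N, 2) arrays of corresponding image points, N >= 8."""
    def normalize(pts):
        c = pts.mean(axis=0)
        s = np.sqrt(2.0) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
        ph = np.hstack([pts, np.ones((len(pts), 1))]) @ T.T
        return ph, T

    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each correspondence x2^T F x1 = 0 gives one row of the linear system A f = 0.
    A = np.stack([np.kron(p2[i], p1[i]) for i in range(len(p1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2 and undo the normalization.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return T2.T @ F @ T1

# Synthetic check: a noise-free stereo pair should give tiny epipolar residuals.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (20, 3)) + np.array([0, 0, 5.0])      # 3D points
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                 # camera 1
P2 = np.hstack([np.eye(3), np.array([[0.2], [0.0], [0.0]])])  # translated camera 2
proj = lambda P: (lambda h: h[:, :2] / h[:, 2:])(X @ P[:, :3].T + P[:, 3])
x1, x2 = proj(P1), proj(P2)
F = eight_point(x1, x2)
res = [abs(np.append(x2[i], 1) @ F @ np.append(x1[i], 1)) for i in range(20)]
print(max(res))
```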

8.
This paper describes new “lemma” and “cut” strategies that are efficient to apply in the setting of propositional Model Elimination. Previous strategies for managing lemmas and C-literals in Model Elimination were oriented toward first-order theorem proving. The original “cumulative” strategy remembers lemmas forever, and was found to be too inefficient. The previously reported C-literal and unit-lemma strategies, such as “strong regularity”, forget them unnecessarily soon in the propositional domain. An intermediate strategy, called “quasi-persistent” lemmas, is introduced. Supplementing this strategy, methods for “eager” lemmas and two forms of controlled “cut” provide further efficiencies. The techniques have been incorporated into “Modoc”, which is an implementation of Model Elimination, extended with a new pruning method that is designed to eliminate certain refutation attempts that cannot succeed. Experimental data show that on random 3CNF formulas at the “hard” ratio of 4.27 clauses per variable, Modoc is not as effective as recently reported model-searching methods. However, on more structured formulas from applications, such as circuit-fault detection, it is superior.

9.
We consider weakly singular integral equations of the first kind on open surface pieces Γ in ℝ³. To obtain approximate solutions we use the h-version Galerkin boundary element method. Furthermore we introduce two-level additive Schwarz operators for non-overlapping domain decompositions of Γ and we estimate the condition numbers of these operators with respect to the mesh size. Based on these operators we derive an a posteriori error estimate for the difference between the exact solution and the Galerkin solution. The estimate also involves the error which comes from an approximate solution of the Galerkin equations. For uniform meshes and under the assumption of a saturation condition we show reliability and efficiency of our estimate. Based on this estimate we introduce an adaptive multilevel algorithm with easily computable local error indicators which allows direct control of the local refinements. The theoretical results are illustrated by numerical examples for plane and curved surfaces. Supported by the German Research Foundation (DFG) under grant Ste 238/25-9.
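The prototypical first-kind weakly singular equation on an open surface piece, stated here only for orientation in the form standard for the 3D Laplacian, is the single-layer potential equation:

```latex
V u(x) \;:=\; \frac{1}{4\pi}\int_{\Gamma}\frac{u(y)}{|x-y|}\,\mathrm{d}s_y \;=\; f(x),
\qquad x\in\Gamma\subset\mathbb{R}^{3}.
```

Its Galerkin boundary element discretization yields the linear systems to which the two-level additive Schwarz operators are applied.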

10.
Leonard K. Eaton resurrects the reputation of Hardy Cross, developer of the “moment distribution method” and one of America’s most brilliant engineers. The structural calculation of a large reinforced concrete building in the nineteen fifties was a complicated affair. It is a tribute to the engineering profession, and to Hardy Cross, that there were so few failures. When architects and engineers had to figure out what was happening in a statically indeterminate frame, they inevitably turned to what was generally known as the “moment distribution” or “Hardy Cross” method. Although the Cross method has been superseded by more powerful procedures such as the Finite Element Method, the “moment distribution method” made possible the efficient and safe design of many reinforced concrete buildings during an entire generation.

11.
A Total BETI (TBETI) based domain decomposition algorithm with preconditioning by the natural coarse grid of rigid body motions is adapted for the solution of contact problems of linear elastostatics and proved to be scalable for coercive problems, i.e., the cost of the solution is asymptotically proportional to the number of variables. The analysis is based on the original results by Langer and Steinbach on the scalability of BETI for linear problems and on our development of optimal quadratic programming algorithms for bound and equality constrained problems. Both theoretical results and numerical experiments indicate a high efficiency of the presented algorithms.
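In TBETI/TFETI-type contact solvers, the bound and equality constrained quadratic programs mentioned here generically take the following dual form (an illustrative statement of the standard setting, not the paper's exact notation):

```latex
\min_{\lambda}\ \tfrac12\,\lambda^{\top} F\lambda - \lambda^{\top} d
\quad\text{subject to}\quad \lambda_{I}\ge 0,\qquad G\lambda = e.
```

The inequality constraints act on the multipliers enforcing non-penetration, and the equality constraints stem from the natural coarse grid of rigid body motions.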

12.
An implementation of compositionality for stochastic well-formed nets (SWN) and, consequently, for generalized stochastic Petri nets (GSPN) has been recently included in the GreatSPN tool. Given two SWNs and a labelling function for places and transitions, it is possible to produce a third one as a superposition of places and transitions of equal label. Colour domains and arc functions of SWNs have to be treated appropriately. The main motivation for this extension was the need to evaluate a library of fault-tolerant “mechanisms” that have been recently defined, and are now under implementation, in a European project called TIRAN. The goal of the TIRAN project is to devise a portable software solution to the problem of fault tolerance in embedded systems, while the goal of the evaluation is to provide evidence of the efficacy of the proposed solution. Modularity being a natural “must” for the project, we have tried to reflect it in our modelling effort. In this paper, we discuss the implementation of compositionality in the GreatSPN tool, and we show its use for the modelling of one of the TIRAN mechanisms, the so-called local voter.

13.
An application of a variant of the parallel domain decomposition method that we call Total FETI or TFETI (Total Finite Element Tearing and Interconnecting) for the solution of contact problems of elasticity to the parallel solution of contact shape optimization problems is described. A unique feature of the TFETI algorithm is its capability to solve large contact problems with optimal, i.e., asymptotically linear, complexity. We show that the algorithm is even more efficient for the solution of contact shape optimization problems, as it can effectively exploit a specific structure of the auxiliary problems arising in the semi-analytic sensitivity analysis. Thus the triangular factorizations of the stiffness matrices of the subdomains are carried out in parallel only once for each design step, the evaluation of the components of the gradient of the cost function can be carried out in parallel, and even the evaluation of each component of the gradient itself can be further parallelized using the standard TFETI scheme. Theoretical results which prove the asymptotically linear complexity of the solution are reported and documented by numerical experiments. The results of the numerical solution of a 3D contact shape optimization problem confirm the high degree of parallelism of the algorithm.
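The reuse described here (one triangular factorization per design step, then many sensitivity solves) follows the usual factorize-once, back-substitute-many-times pattern. A minimal illustration with a generic sparse matrix standing in for a subdomain stiffness matrix; names and sizes are placeholders.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu

# Stand-in SPD "subdomain stiffness matrix" (1D Laplacian).
n = 200
K = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")

lu = splu(K)   # triangular factorization: done only once per design step

# Many right-hand sides, e.g. one per design variable in the sensitivity analysis.
rhs = np.random.default_rng(0).standard_normal((n, 10))
sensitivities = np.column_stack([lu.solve(rhs[:, j]) for j in range(rhs.shape[1])])

print(np.linalg.norm(K @ sensitivities - rhs))
```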

14.
In the context of intuitionistic implicational logic, we achieve a perfect correspondence (technically an isomorphism) between sequent calculus and natural deduction, based on perfect correspondences between left-introduction and elimination, cut and substitution, and cut-elimination and normalisation. This requires an enlarged system of natural deduction that refines von Plato’s calculus. It is a calculus with modus ponens and primitive substitution; it is also a “coercion calculus”, in the sense of Cervesato and Pfenning. Both sequent calculus and natural deduction are presented as typing systems for appropriate extensions of the λ-calculus. The whole difference between the two calculi is reduced to the associativity of applicative terms (sequent calculus = right associative, natural deduction = left associative), and in fact the achieved isomorphism may be described as the mere inversion of that associativity. The novel natural deduction system is a “multiary” calculus, because “applicative terms” may exhibit a list of several arguments. But the combination of “multiarity” and left-associativity seems simply wrong, leading necessarily to non-local reduction rules (reason: normalisation, like cut-elimination, acts at the head of applicative terms, but natural deduction focuses at the tail of such terms). A solution is to extend natural deduction even further to a calculus that unifies sequent calculus and natural deduction, based on the unification of cut and substitution. In the unified calculus, a sequent term behaves like in the sequent calculus, whereas the reduction steps of a natural deduction term are interleaved with explicit steps for bringing heads to focus. A variant of the calculus has the symmetric role of improving sequent calculus in dealing with tail-active permutative conversions.
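An informal, untyped illustration of the associativity point: the same applicative term h a1 a2 a3 can be stored as left-nested binary applications (the natural deduction reading) or as a head followed by a list of arguments (a right-associated, sequent-calculus-style reading), and the isomorphism amounts to the regrouping. All names are illustrative; the paper's calculi are typed and considerably richer than this sketch.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Var:
    name: str

@dataclass
class App:            # left-associated binary application: (f x)
    fun: "Term"
    arg: "Term"

Term = Union[Var, App]

@dataclass
class Spine:          # head applied to a list of arguments: h [a1, ..., an]
    head: Var
    args: list

def to_spine(t: Term) -> Spine:
    """Flatten left-nested applications into head + argument list."""
    args = []
    while isinstance(t, App):
        args.append(t.arg)
        t = t.fun
    return Spine(head=t, args=list(reversed(args)))

def to_term(s: Spine) -> Term:
    """Rebuild the left-nested application from head + argument list."""
    t: Term = s.head
    for a in s.args:
        t = App(t, a)
    return t

t = App(App(App(Var("h"), Var("a1")), Var("a2")), Var("a3"))
assert to_term(to_spine(t)) == t   # the regrouping is invertible
print(to_spine(t))
```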

15.
The first boundary value problem for a singularly perturbed parabolic equation of convection-diffusion type on an interval is studied. For the approximation of the boundary value problem we use previously developed finite difference schemes, ɛ-uniformly of a high order of accuracy with respect to time, based on defect correction. New in this paper is the introduction of a partitioning of the domain for these ɛ-uniform schemes. We determine the conditions under which the difference schemes, applied independently on subdomains, may accelerate (ɛ-uniformly) the solution of the boundary value problem without losing the accuracy of the original schemes. Hence, the simultaneous solution on subdomains can in principle be used for parallelization of the computational method.
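A schematic statement of the problem class, not the paper's exact formulation: on the unit interval one considers the singularly perturbed parabolic convection-diffusion problem

```latex
\varepsilon\,\frac{\partial^{2}u}{\partial x^{2}} + b(x,t)\,\frac{\partial u}{\partial x}
- c(x,t)\,u - \frac{\partial u}{\partial t} = f(x,t),
\qquad (x,t)\in(0,1)\times(0,T],\quad 0<\varepsilon\le 1,
```

with Dirichlet boundary data and an initial condition; as ε tends to zero a boundary layer of width O(ε) forms, which is why convergence uniform in ε (ɛ-uniform convergence) is the relevant notion.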

16.
Christoph Pflaum, Computing 2001, 67(2): 141-166
We present a novel automatic grid generator for the finite element discretization of partial differential equations in 3D. The grids constructed by this grid generator are composed of a pure tensor product grid in the interior of the domain and an unstructured grid which is only contained in boundary cells. The unstructured component consists of tetrahedra, each of which satisfies a maximal interior angle condition. By suitably constructing the boundary cells, the number of types of boundary subcells is reduced to 12. Since this grid generator constructs large structured grids in the interior and small unstructured grids near the boundary, the resulting semi-unstructured grids have similar properties to structured tensor product grids. Some appealing properties of this method are computational efficiency and the natural construction of coarse grids for multilevel algorithms. Numerical results and an analysis of the discretization error are presented.

17.
We study the properties of the reference mapping for quadrilateral and hexahedral finite elements. We consider multilevel adaptive grids with possibly hanging nodes which are typically generated by adaptive refinement starting from a regular coarse grid. It turns out that for such grids the reference mapping behaves – up to a perturbation depending on the mesh size – like an affine mapping. As an application, we prove optimal estimates of the interpolation error for discontinuous mapped elements on quadrilateral and hexahedral grids.
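For a single quadrilateral cell K, the reference mapping in question is the bilinear map from the unit square (standard form, given for orientation only):

```latex
F_K(\hat x,\hat y) \;=\; a_0 + a_1\,\hat x + a_2\,\hat y + a_3\,\hat x\hat y,
\qquad (\hat x,\hat y)\in[0,1]^{2}.
```

The abstract's observation can then be read as saying that, on the grids considered, the non-affine a_3 term (and its trilinear analogues on hexahedra) is only a mesh-size-dependent perturbation of the affine part.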

18.
In this paper, we introduce composite finite elements for solving elliptic boundary value problems with discontinuous coefficients. The focus is on problems where the geometry of the interfaces between the smooth regions of the coefficients is very complicated. On the other hand, efficient numerical methods such as multigrid methods, wavelets, and extrapolation are based on a multi-scale discretization of the problem. In standard finite element methods, the grids have to resolve the structure of the discontinuous coefficients. Thus, straightforward coarse-scale discretizations of problems with complicated coefficient jumps are not obvious. In this paper, we define composite finite elements for problems with discontinuous coefficients. These finite elements allow the coarsening of finite element spaces independently of the structure of the discontinuous coefficients. Thus, the multigrid method can be applied to solve the linear system on the fine scale. We focus on the construction of the composite finite elements and the efficient, hierarchical realization of the intergrid transfer operators. Finally, we present some numerical results for the multigrid method based on the composite finite elements (CFE–MG).

19.
Stable rankings for different effort models
There exists a large and growing number of proposed estimation methods but little conclusive evidence ranking one method over another. Prior effort estimation studies suffered from “conclusion instability”, where the rankings offered to different methods were not stable across (a) different evaluation criteria; (b) different data sources; or (c) different random selections of that data. This paper reports a study of 158 effort estimation methods on data sets based on COCOMO features. Four “best” methods were detected that were consistently better than the “rest” of the other 154 methods. These rankings of “best” and “rest” methods were stable across (a) three different evaluation criteria applied to (b) multiple data sets from two different sources that were (c) divided into hundreds of randomly selected subsets using four different random seeds. Hence, while there exists no single universal “best” effort estimation method, there appears to exist a small number (four) of most useful methods. This result both complicates and simplifies effort estimation research. The complication is that any future effort estimation analysis should be preceded by a “selection study” that finds the best local estimator. However, the simplification is that such a study need not be labor-intensive, at least for COCOMO-style data sets.
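Two evaluation criteria commonly used in this literature are MMRE and PRED(30). Below is a small sketch of how ranking stability across random subsets can be checked; the estimators and the data are synthetic stand-ins, not the paper's 158 methods, its actual criteria, or its COCOMO data sets.

```python
import numpy as np

def mmre(actual, predicted):
    """Mean magnitude of relative error."""
    return np.mean(np.abs(actual - predicted) / actual)

def pred(actual, predicted, level=0.30):
    """Fraction of estimates within `level` of the actual effort: PRED(30) by default."""
    return np.mean(np.abs(actual - predicted) / actual <= level)

# Toy "stability" check: rank two hypothetical estimators on many random subsets
# and count how often the ranking holds.
rng = np.random.default_rng(42)
actual = rng.lognormal(mean=3.0, sigma=0.8, size=200)
est_a = actual * rng.normal(1.00, 0.25, size=200)   # unbiased but noisier estimator
est_b = actual * rng.normal(1.10, 0.10, size=200)   # biased but tighter estimator

wins_a = 0
for _ in range(100):
    idx = rng.choice(200, size=100, replace=False)   # random subset of the data
    wins_a += mmre(actual[idx], est_a[idx]) < mmre(actual[idx], est_b[idx])
print(f"estimator A wins on MMRE in {wins_a}/100 random subsets")
```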

20.
Nonlocal Image and Movie Denoising
Neighborhood filters are nonlocal image and movie filters which reduce the noise by averaging similar pixels. The first object of the paper is to present a unified theory of these filters and reliable criteria to compare them to other filter classes. A CCD noise model will be presented justifying the involvement of neighborhood filters. A classification of neighborhood filters will be proposed, including classical image and movie denoising methods and discussing further a recently introduced neighborhood filter, NL-means. In order to compare denoising methods, three principles will be discussed. The first principle, “method noise”, specifies that only noise must be removed from an image. A second principle will be introduced, “noise to noise”, according to which a denoising method must transform a white noise into a white noise. Contrary to “method noise”, this principle, which characterizes artifact-free methods, eliminates any subjectivity and can be checked by mathematical arguments and Fourier analysis. “Noise to noise” will be proven to rule out most denoising methods, with the exception of neighborhood filters. This is why a third and new comparison principle, “statistical optimality”, is needed and will be introduced to compare the performance of all neighborhood filters. The three principles will be applied to compare ten different image and movie denoising methods. It will first be shown that only wavelet thresholding methods and NL-means give an acceptable method noise. Second, that neighborhood filters are the only ones to satisfy the “noise to noise” principle. Third, that among them NL-means is closest to statistical optimality. Particular attention will be paid to the application of the statistical optimality criterion to movie denoising methods. It will be pointed out that current movie denoising methods are motion-compensated neighborhood filters. This amounts to saying that they are neighborhood filters and that the ideal neighborhood of a pixel is its trajectory. Unfortunately, the aperture problem makes it impossible to estimate ground-truth trajectories. It will be demonstrated that computing trajectories and restricting the neighborhood to them is harmful for denoising purposes and that space-time NL-means preserves more movie details.
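Below is a deliberately brute-force sketch of the NL-means idea: each pixel is replaced by an average of pixels weighted by the similarity of their surrounding patches. The parameter choices and the simple treatment of the central pixel are illustrative; the paper's image and movie variants differ in such details.

```python
import numpy as np

def nl_means(img, patch=3, search=7, h=0.25):
    """Minimal NL-means: weighted average over a search window, with weights
    given by patch similarity. Brute-force and slow; illustration only.
    The filtering parameter h should scale with the noise level."""
    pr, sr = patch // 2, search // 2
    padded = np.pad(img, pr + sr, mode="reflect")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + pr + sr, j + pr + sr                 # centre in padded coords
            ref = padded[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            weights, values = [], []
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    cand = padded[ci + di - pr:ci + di + pr + 1,
                                  cj + dj - pr:cj + dj + pr + 1]
                    d2 = np.mean((ref - cand) ** 2)           # mean squared patch distance
                    weights.append(np.exp(-d2 / h**2))
                    values.append(padded[ci + di, cj + dj])
            w = np.array(weights)
            out[i, j] = np.dot(w, values) / w.sum()
    return out

rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0                                       # simple test image
noisy = clean + 0.15 * rng.standard_normal(clean.shape)
den = nl_means(noisy)
print("noisy RMSE   :", np.sqrt(np.mean((noisy - clean) ** 2)))
print("denoised RMSE:", np.sqrt(np.mean((den - clean) ** 2)))
```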

