Similar Documents
20 similar documents found (search time: 31 ms)
1.
In this article, the adaptive integral method (AIM) is used to analyze large-scale planar structures. Discretization of the corresponding integral equations by the method of moments (MoM) with Rao-Wilton-Glisson (RWG) basis functions can model arbitrarily shaped planar structures, but usually leads to a fully populated matrix. AIM maps these basis functions onto a rectangular grid, where the Toeplitz property of the Green's function can be exploited, enabling the matrix-vector multiplication to be computed with the fast Fourier transform (FFT). This reduces the memory requirement from O(N^2) to O(N) and the operation complexity from O(N^2) to O(N log N), where N is the number of unknowns. The resultant equations are then solved by the loose generalized minimal residual method (LGMRES), which converges much faster than the conventional conjugate gradient (CG) method. Furthermore, several preconditioning techniques are employed to enhance the computational efficiency of LGMRES. Some typical microstrip circuits and a microstrip antenna array are analyzed, and numerical results show that preconditioned LGMRES converges much faster than conventional LGMRES. © 2008 Wiley Periodicals, Inc. Int J RF and Microwave CAE, 2009.
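The FFT acceleration that AIM relies on can be illustrated in isolation: a Toeplitz matrix embedded in a circulant admits an O(N log N) matrix-vector product. The sketch below (plain NumPy; the function name and the random test matrix are illustrative, not part of AIM) verifies the fast product against a dense one.

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply an N x N Toeplitz matrix (first column c, first row r,
    with c[0] == r[0]) by x in O(N log N) via circulant embedding."""
    n = len(x)
    col = np.concatenate([c, r[:0:-1]])     # first column of the (2N-1)-circulant
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(x, len(col)))
    return y[:n].real

# Verify against the dense O(N^2) product for a random Toeplitz matrix.
rng = np.random.default_rng(0)
n = 64
c = rng.standard_normal(n)                                  # first column
r = np.concatenate([[c[0]], rng.standard_normal(n - 1)])    # first row
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)]
              for i in range(n)])
x = rng.standard_normal(n)
assert np.allclose(T @ x, toeplitz_matvec(c, r, x))
```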

2.
This paper introduces an intriguing topic in image processing, where the accuracy of images even in fine details is important, and adopts a methodology that treats discrete problems by continuous mathematics and numerical approximation. The key idea is that a pixel of an image at different levels can be quantified by a greyness value, which can be regarded as the mean of an integral of continuous functions over a certain region and evaluated approximately by numerical integration. In contrast to traditional integration, however, the integrand has different smoothness in different subregions due to piecewise interpolation and approximation. New treatments of approximate integration and new discrete algorithms for images have been developed. The cycle conversion T^(-1)T of image transformations refers to an image being distorted by a transformation T and then restored to itself by the inverse transformation T^(-1). Combined algorithms of the discrete techniques proposed in [1-3] achieve only the convergence rates O(1/N) and O(1/N^2) of sequential greyness errors, where N is the division number when a pixel is split into N^2 subpixels. This paper reports new combination algorithms using spline functions that achieve the high convergence rates O(1/N^3) and O(1/N^4) for digital image transformations under the cycle conversion. Both error analysis and numerical experiments are provided to verify the high convergence rates. High convergence rates of discrete algorithms are important for saving CPU time, particularly for multi-greyness images. Moreover, computations for real 256 × 256 images with 256 greyness levels, for which N = 2 is good enough for practical requirements, demonstrate the validity and effectiveness of the new algorithms.

3.
The shear crack, propagating spontaneously on a frictional interface, is a useful idealization of a natural earthquake. However, the corresponding boundary value problems are quite challenging in terms of required memory and processor power. Although the spectral boundary integral method reduces the computational burden considerably, the cost remains substantial. In this paper, a recursive method for the evaluation of convolution integrals is tested in the spectral formulation of the boundary integral method applied to 2D anti-plane crack propagation problems. It is shown that analysis of a 2D anti-plane crack propagation problem involving Nt time steps, based on the recursive evaluation of convolution integrals, requires O(αNt) computational resources for each Fourier mode (as opposed to O(Nt^2) for a classical algorithm), where α is a constant depending on the implementation, with typical values much less than Nt. This recursive scheme therefore makes feasible the investigation of long deformational processes involving large surfaces and long periods of time, while preserving accuracy. The methodology implemented here can be extended easily to 3D, where it can be employed for the simulation of complex spontaneous fault-rupture problems that carry a high computational cost.
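The recursive evaluation of convolution integrals can be sketched for the simplest case of a single exponential kernel (an assumption of this illustration; the paper's kernels arise from the spectral formulation): the running convolution then obeys a one-term recurrence, reducing the cost from O(Nt^2) to O(Nt).

```python
import numpy as np

# Convolution I(t_n) = sum_{m<=n} exp(-lam*(t_n - t_m)) * f(t_m) * dt.
# Direct evaluation costs O(Nt^2) overall; for an exponential kernel the sum
# obeys I_n = exp(-lam*dt) * I_{n-1} + f_n * dt, which costs O(Nt) in total.
lam, dt, Nt = 2.0, 0.01, 400
rng = np.random.default_rng(1)
f = rng.standard_normal(Nt)
t = np.arange(Nt) * dt

# O(Nt^2) reference.
I_direct = np.array([np.sum(np.exp(-lam * (t[n] - t[:n + 1])) * f[:n + 1]) * dt
                     for n in range(Nt)])

# O(Nt) recursion.
I_rec = np.empty(Nt)
acc, decay = 0.0, np.exp(-lam * dt)
for n in range(Nt):
    acc = decay * acc + f[n] * dt
    I_rec[n] = acc

assert np.allclose(I_direct, I_rec)
```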

4.
Thin plate smoothing splines are widely used to spatially interpolate surface climate; however, their application to large data sets is limited by computational efficiency. Standard analytic calculation of thin plate smoothing splines requires O(n^3) operations, where n is the number of data points, making routine computation infeasible for data sets with more than around 2000 points. An O(N) iterative procedure was developed for calculating finite element approximations to bivariate minimum generalised cross validation (GCV) thin plate smoothing splines, where N is the number of grid points. The key contribution of the method lies in the incorporation of an automatic procedure for optimising smoothness to minimise the GCV, the criterion commonly used to optimise thin plate smoothing spline fits to climate data. The method discretises the bivariate thin plate smoothing spline equations using hierarchical biquadratic B-splines, and uses a nested grid multigrid procedure to solve the system. To optimise smoothness, a double iteration is incorporated whereby the estimate of the spline solution and the estimate of the optimal smoothing parameter are updated simultaneously. When the method was tested on temperature data from the African and Australian continents, accurate approximations to analytic solutions were obtained.

5.
International Journal of Computer Mathematics, 2012, 89(16): 3507-3520
This article discusses an extrapolation method for solving a system of weakly singular nonlinear Volterra integral equations of the second kind. Based on a generalization of the discrete Gronwall inequality and Navot's quadrature rule, a modified trapezoidal quadrature algorithm is presented. The iterative algorithm for solving the discrete system possesses accuracy of order O(h^(2+α)). Once the asymptotic expansion of the error is proved, extrapolation yields an approximation of even higher order. An a posteriori error estimate is provided. Some numerical results are presented to illustrate the efficiency of the methods.
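The extrapolation idea can be illustrated on the plain trapezoidal rule (a stand-in for the paper's modified quadrature): once the error admits an asymptotic expansion in even powers of h, one Richardson step cancels the leading term.

```python
import numpy as np

def trapezoid(f, a, b, N):
    """Composite trapezoidal rule with N subintervals."""
    x = np.linspace(a, b, N + 1)
    return (b - a) / N * (np.sum(f(x)) - 0.5 * (f(a) + f(b)))

f = np.sin
exact = 1.0 - np.cos(1.0)            # integral of sin over [0, 1]
T_h = trapezoid(f, 0.0, 1.0, 32)     # error ~ c*h^2
T_h2 = trapezoid(f, 0.0, 1.0, 64)    # error ~ c*(h/2)^2
extrap = (4.0 * T_h2 - T_h) / 3.0    # leading h^2 term cancelled

assert abs(extrap - exact) < 1e-2 * abs(T_h2 - exact)
```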

6.
T. Dornseifer, C. Pflaum, Computing, 1996, 56(3): 197-213
Elliptic differential equations can be discretized with bilinear finite elements. Using sparse grids instead of full grids, the dimension of the finite element space for the 2D problem is reduced from O(N^2) to O(N log N), while the approximation properties are nearly the same for smooth functions. A method is presented which discretizes elliptic differential equations on curvilinear bounded domains with adaptive sparse grids; the grid is generated by a transformation of the domain. This method exhibits the same convergence behaviour as the sparse grid discretization on the unit square.

7.
We suggest a Simpson's rule for the discretized Feynman path integral approximation of density matrix elements. For a class of potential functions bounded from below, we rigorously establish the error bound O(1/N^2) for the N-step discretized representation. Using the harmonic oscillator as a model, we compare Simpson's rule with the conventional trapezoidal rule.
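A minimal illustration of the comparison, using ordinary quadrature of a smooth integrand rather than the path-integral setting (the integrand is a stand-in of my choosing):

```python
import numpy as np

def trapezoid(f, a, b, N):
    x = np.linspace(a, b, N + 1)
    return (b - a) / N * (np.sum(f(x)) - 0.5 * (f(a) + f(b)))

def simpson(f, a, b, N):
    """Composite Simpson's rule; N must be even."""
    x = np.linspace(a, b, N + 1)
    h = (b - a) / N
    return h / 3.0 * (f(x[0]) + f(x[-1])
                      + 4.0 * np.sum(f(x[1:-1:2]))
                      + 2.0 * np.sum(f(x[2:-1:2])))

f = lambda x: np.exp(-x**2)          # smooth stand-in integrand
exact = 0.7468241328124271           # integral of exp(-x^2) over [0, 1]
err_trap = abs(trapezoid(f, 0.0, 1.0, 64) - exact)
err_simp = abs(simpson(f, 0.0, 1.0, 64) - exact)
assert err_simp < err_trap           # Simpson is markedly more accurate
```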

8.
9.
In this paper, we use hat basis functions to solve systems of Fredholm integral equations (SFIEs) and systems of Volterra integral equations (SVIEs) of the second kind. The method converts the system of integral equations into a linear or nonlinear system of algebraic equations. We also analyse the order of convergence of the method and show that it is O(h^2). Application of the method to some examples shows its accuracy and efficiency.
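A hedged sketch of the idea for a single Fredholm equation of the second kind (the paper treats systems; here trapezoidal weights play the role of quadrature against piecewise-linear hat functions, and the manufactured kernel and right-hand side are mine):

```python
import numpy as np

def solve_fredholm(K, g, n):
    """Discretize u(x) - integral_0^1 K(x,t) u(t) dt = g(x) on n+1 nodes,
    with trapezoidal weights (the quadrature induced by hat functions)."""
    x = np.linspace(0.0, 1.0, n + 1)
    w = np.full(n + 1, 1.0 / n)
    w[0] = w[-1] = 0.5 / n
    A = np.eye(n + 1) - K(x[:, None], x[None, :]) * w
    return x, np.linalg.solve(A, g(x))

# Manufactured problem with exact solution u(x) = x:
# with K(x,t) = x*t, g(x) = x - x * integral_0^1 t^2 dt = 2x/3.
K = lambda x, t: x * t
g = lambda x: 2.0 / 3.0 * x
x, u = solve_fredholm(K, g, 200)
assert np.max(np.abs(u - x)) < 1e-4   # consistent with O(h^2) convergence
```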

10.
G. Rote, Computing, 1992, 48(3-4): 337-361
The Sandwich algorithm approximates a convex function of one variable over an interval by evaluating the function and its derivative at a sequence of points. Connecting the evaluated points yields a piecewise linear upper approximation, while the tangents yield a piecewise linear lower approximation. Similarly, a planar convex figure can be approximated by convex polygons. Different versions of the Sandwich algorithm use different rules for selecting the next evaluation point. We consider four natural rules (interval bisection, slope bisection, the maximum error rule, and the chord rule) and show that the global approximation error with n evaluation points decreases at the rate O(1/n^2), which is optimal. Special examples show that the actual performance of the four rules can differ considerably, and we report computational experiments comparing the performance of the rules on particular functions.
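The O(1/n^2) rate can be observed numerically with a uniform-bisection stand-in for the selection rules (an assumption of this sketch): on each subinterval the chord is an upper bound and the two tangents a lower bound, and their largest gap bounds the approximation error.

```python
import numpy as np

def envelope_gap(f, df, xs):
    """Largest vertical distance between the chord (upper) and tangent
    (lower) piecewise-linear envelopes of a convex f over breakpoints xs."""
    worst = 0.0
    for x0, x1 in zip(xs[:-1], xs[1:]):
        d0, d1 = df(x0), df(x1)
        # The two tangents drawn at x0 and x1 intersect at xm.
        xm = (f(x1) - f(x0) + d0 * x0 - d1 * x1) / (d0 - d1)
        lower = f(x0) + d0 * (xm - x0)
        upper = f(x0) + (f(x1) - f(x0)) / (x1 - x0) * (xm - x0)
        worst = max(worst, upper - lower)
    return worst

f, df = lambda x: x**2, lambda x: 2.0 * x
gaps = [envelope_gap(f, df, np.linspace(0.0, 1.0, n + 1)) for n in (8, 16, 32)]
# Doubling the number of evaluation points quarters the error: O(1/n^2).
assert all(3.5 < gaps[i] / gaps[i + 1] < 4.5 for i in range(2))
```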

11.
An algorithm is developed for calculating the horizons of each point in a digital terrain grid in O(N) iterations, whereas all previous methods appear to be of O(N^2) time complexity. The new method makes horizon computations affordable and ought to improve the accuracy of surface climate models in rugged terrain.

12.
Explicit values are given for the element stiffness matrices of two triangular finite elements for Poisson's equation in the plane: one with four nodal points, including derivatives as nodal values at the vertices, and another without a central point. These provide O(h^6) and O(h^4) accuracy in the energy, respectively.

13.
The spline-based method for solving the Fredholm integral equation of the first kind, suggested earlier by Beniaminy and Deutsch, is discussed. The inaccuracy of the solution is estimated and shown to be of order O(h^3), where h is a small parameter reflecting the measure of δ-likeness of the kernel.

14.
15.
The coupled system of equations resulting from a mixed variable formulation of the biharmonic problem is solved by a preconditioned conjugate gradient method. The preconditioning matrix is based on an incomplete factorization of a positive definite operator similar to the 13-point difference approximation of the biharmonic operator. The first iterate is already quite accurate even if the initial approximation is not; hence, a small number of iterations will often suffice to obtain an accurate enough solution. For smaller iteration errors the number of iterations grows as O(h^(-1)) as h → 0, where h is an average mesh-size parameter.

16.
In this paper, we propose a simple general form of high-order approximation of O(c^2 + ch^2 + h^4) to solve the two-dimensional parabolic equation αu_xx + βu_yy = F(x, y, t, u, u_x, u_y, u_t), where α and β are positive constants. We apply the compact form to solve the diffusion-convection equation. The results of numerical experiments are presented and compared with analytical solutions to confirm the higher accuracy of the presented scheme.

17.
Discontinuous Galerkin (DG) approximations for non-linear parabolic problems are investigated. To linearize the discretized equations, we use a two-grid method involving a small non-linear system on a coarse grid of size H and a linear system on a fine grid of size h. Error estimates of order O(h^r + H^(r+1)) in the H^1-norm are obtained, where r is the order of the DG space. The analysis shows that the two-grid DG algorithm achieves asymptotically optimal approximation as long as the mesh sizes satisfy h = O(H^((r+1)/r)). Numerical experiments verify the efficiency of the algorithm.

18.
Parallel Computing, 1997, 23(13): 2041-2065
A parallel diagonally scaled dynamic alternating-direction-implicit (DSDADI) method is shown to be an effective algorithm for solving the 2D and 3D steady-state diffusion equation on large uniform Cartesian grids. Empirical evidence from the parallel solution of large-gridsize problems suggests that the computational work done by DSDADI to converge over an N^d grid with continuous diffusivity is of lower order than O(N^(d+α)) for any fixed α > 0. This is in contrast to the method of diagonally scaled conjugate gradients (DSCG), for which the computational work necessary for convergence is O(N^(d+1)). Furthermore, the combination of diagonal scaling, spatial domain decomposition (SDD), and distributed tridiagonal system solution gives the DSDADI algorithm reasonable scalability on distributed-memory multiprocessors such as the CRAY T3D. Finally, an approximate parallel tridiagonal system solver with diminished interprocessor communication exhibits additional utility for DSDADI.

19.
The probabilistic reasoning scheme of Dempster-Shafer theory provides a remarkably efficient bug identification algorithm for a hierarchical Buggy model. In the particular Buggy model generated by the repair theory of Brown & Van Lehn (1980, A generative theory of bugs in procedural skills, Cognitive Science, 2, 155-192), the schemes of both Shafer & Logan (1987, Implementing Dempster's rule for hierarchical evidence, Artificial Intelligence, 33, 271-298) and Gordon & Shortliffe (1985, A method for managing evidential reasoning in a hierarchical hypothesis space, Artificial Intelligence, 26, 324-357) provide almost identical computational accuracy, although the latter involves a "smallest superset" approximation. If n denotes the number of bugs to be identified, the computational complexity of the two schemes, originally O(n^(4/3)) and O(n^2) respectively, can be improved to O(n) using a simplified top-down calculation scheme whereby, from among all the nodes, we first locate the particular "parental" node to which the bug belongs and then the bug itself among the sibs within that node. On average, about 5-7 problems are adequate to raise the belief function of the bug to the 95% level, based on the evidential reasoning schemes.

20.
The polynomial chaos (PC) method has been widely adopted as a computationally feasible approach for uncertainty quantification (UQ). Most studies to date have focused on non-stiff systems. When stiff systems are considered, implicit numerical integration requires the solution of a non-linear system of equations at every time step. Using the Galerkin approach, the size of the system state increases from n to S × n, where S is the number of PC basis functions. Solving such systems with full linear algebra causes the computational cost to increase from O(n^3) to O(S^3 n^3); this S^3-fold increase can make the computation prohibitive. This paper explores computationally efficient UQ techniques for stiff systems using the PC Galerkin, collocation, and collocation least-squares (LS) formulations. In the Galerkin approach, we propose a modification of the implicit time stepping process using an approximation of the Jacobian matrix to reduce the computational cost. The numerical results show a reduction in run time with no negative impact on accuracy. In the stochastic collocation formulation, we propose a least-squares approach based on collocation at a low-discrepancy set of points. Numerical experiments illustrate that the collocation least-squares approach has accuracy similar to the Galerkin approach, is more efficient, and does not require any modification of the original code.
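The collocation least-squares formulation can be sketched for a scalar toy model (random rather than low-discrepancy points, and a closed-form model output, both assumptions of this illustration): PC coefficients are fit by least squares to sampled model outputs.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

# Toy "model output" u(xi) = exp(a*xi) with a Gaussian input xi; its
# probabilists'-Hermite PC coefficients are c_k = exp(a^2/2) * a^k / k!.
a = 0.1
rng = np.random.default_rng(2)
xi = rng.standard_normal(400)        # collocation points (random here; the
                                     # paper advocates a low-discrepancy set)
V = hermevander(xi, 4)               # He_0 .. He_4 evaluated at the points
u = np.exp(a * xi)
coef, *_ = np.linalg.lstsq(V, u, rcond=None)

# The 0th PC coefficient is the mean: E[exp(a*xi)] = exp(a^2/2).
assert abs(coef[0] - np.exp(a**2 / 2.0)) < 1e-3
```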


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号