Similar Documents
20 similar documents found (search time: 31 ms)
1.
In this article, the microstrip circuit is analyzed in terms of the mixed-potential integral equation (MPIE) by means of the rooftop-function expansion and the blaze-function testing technique, and the integral equation is solved using the loose generalized minimal residual fast Fourier transform (LGMRES-FFT) method. Our numerical calculations show that LGMRES-FFT can converge faster than the conjugate gradient-fast Fourier transform (CG-FFT) method. Some typical microstrip discontinuities are analyzed and the good results obtained demonstrate the validity of the proposed algorithm. © 2005 Wiley Periodicals, Inc. Int J RF and Microwave CAE, 2005.

2.
The restoration of digital images and patterns by the splitting-integrating method (SIM), proposed by Li (1993) and Li et al. (1992), is much simpler than other algorithms because no solutions of nonlinear algebraic equations are required. Let a pixel in a 2D image be split into N^2 subpixels; the convergence rates are then O(1/N) and O(1/N^2) for pixel greyness under image normalization by SIM. In this paper, an advanced SIM using spline functions raises the convergence rates to O(1/N^3) and O(1/N^4). Error bounds for the pixel greyness are derived from numerical analysis, and numerical experiments are carried out to confirm the high convergence rates of O(1/N^3) and O(1/N^4).

3.
J. Tausch 《Computing》2004,72(3-4):267-291
We discuss the variable order Fast Multipole Method (FMM) applied to piecewise constant Galerkin discretizations of boundary integral equations. In this version of the FMM, low-order expansions are employed on the finest level and the orders are increased on the coarser levels. Two versions are discussed: the first computes exact moments, while the second is based on approximated moments. When applied to integral equations of the second kind, both versions retain the asymptotic error of the direct method. The complexity estimate of the first version contains a logarithmic term, while the second version is O(N), where N is the number of panels. This work was supported by the NSF under contract DMS-0074553.

4.
International Journal of Computer Mathematics, 2012, 89(13): 3052-3062
This paper describes a procedure for solving systems of linear Volterra integral equations by means of the Sinc collocation method. A convergence and error analysis is given; it is shown that the Sinc solution produces an error of order O(exp(-c N^{1/2})), where c > 0 is a constant. This approximation reduces the system of integral equations to an explicit system of algebraic equations. The analytical results are illustrated with numerical examples that exhibit the exponential convergence rate.

5.
In this paper we present a modified Fourier–Galerkin method for the numerical solution of the Poisson and Helmholtz equations in a d-dimensional box. The inversion of the differential operators requires O(N^d) operations, where N^d is the number of unknowns. The total cost of the presented algorithms is O(N^d log_2 N), due to the application of the Fast Fourier Transform (FFT) at the preprocessing stage. The method is based on an extension of the Fourier spaces by adding appropriate functions. Utilizing suitable bilinear forms, approximate projections onto these extended spaces give rapidly converging and highly accurate series expansions.

6.
The boundary concentrated FEM, a variant of the hp-version of the finite element method, is proposed for the numerical treatment of elliptic boundary value problems. It is particularly suited for equations with smooth coefficients and non-smooth boundary conditions. In the two-dimensional case it is shown that the Cholesky factorization of the resulting stiffness matrix requires O(N log^4 N) units of storage and can be computed with O(N log^8 N) work, where N denotes the problem size. Numerical results confirm the theoretical estimates. Received October 4, 2001; revised August 19, 2002; published online October 24, 2002.

7.
The distance transform (DT) is an image computation tool which can be used to extract information about the shape and the position of the foreground pixels relative to each other. It converts a binary image into a grey-level image, where each pixel has a value corresponding to the distance to the nearest foreground pixel. The time complexity of computing the distance transform depends entirely on the distance metric used: in general, the more exact the distance transform, the longer the execution time. Nowadays, thousands of images must often be processed in a limited time, which makes real-time computation of the distance transform infeasible on a sequential computer. To provide efficient distance transform computation, it is therefore desirable to develop a parallel algorithm for this operation. In this paper, based on the diagonal propagation approach, we first provide an O(N^2) time sequential algorithm to compute the chessboard distance transform (CDT) of an N×N image, which is a DT using the chessboard distance metric. Based on the proposed sequential algorithm, the CDT of a 2D binary image array of size N×N can be computed in O(log N) time on the EREW PRAM model using O(N^2/log N) processors, in O(log log N) time on the CRCW PRAM model using O(N^2/log log N) processors, and in O(log N) time on the hypercube computer using O(N^2/log N) processors. Following the mapping proposed by Lee and Horng, an algorithm for the medial axis transform is also efficiently derived; the medial axis transform of a 2D binary image array of size N×N achieves the same complexity bounds on each of these models. The proposed parallel algorithms are composed of a set of prefix operations; in each prefix operation phase, only increase (add-one) and minimum operations are employed, so the algorithms are especially efficient in practical applications.
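A minimal sequential sketch of the O(N^2) chessboard distance transform described in this abstract, assuming the standard two-pass raster-scan formulation (the function name and structure are illustrative, not taken from the paper):

```python
# Chessboard distance: d((a, b), (c, e)) = max(|a - c|, |b - e|), so every
# one of the 8 neighbours of a pixel is at distance 1. Two raster scans
# suffice: a forward pass propagating from top-left neighbours, and a
# backward pass propagating from bottom-right neighbours.
def chessboard_dt(image):
    n = len(image)
    inf = 2 * n  # larger than any possible chessboard distance in the image
    d = [[0 if image[i][j] else inf for j in range(n)] for i in range(n)]
    # Forward pass over the upper-left half of the 8-neighbourhood.
    for i in range(n):
        for j in range(n):
            if d[i][j]:
                for di, dj in ((-1, -1), (-1, 0), (-1, 1), (0, -1)):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < n and 0 <= jj < n:
                        d[i][j] = min(d[i][j], d[ii][jj] + 1)
    # Backward pass over the lower-right half of the 8-neighbourhood.
    for i in range(n - 1, -1, -1):
        for j in range(n - 1, -1, -1):
            for di, dj in ((1, 1), (1, 0), (1, -1), (0, 1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    d[i][j] = min(d[i][j], d[ii][jj] + 1)
    return d
```

Each pixel is visited twice with constant work per visit, giving the O(N^2) sequential bound the paper starts from before parallelizing via prefix operations.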

8.
Fast computation of sample entropy and approximate entropy in biomedicine
Both sample entropy and approximate entropy are measures of complexity. The two methods have received a great deal of attention in the last few years and have been successfully verified and applied in biomedicine and many other fields. However, the algorithms proposed in the literature require O(N^2) execution time, which is not fast enough for online applications or for applications with long data sets. To accelerate computation, the authors of the present paper have developed a new algorithm that reduces the computational time to O(N^{3/2}) using O(N) storage. As biomedical data are often measured as integer-type data, the computation time can be further reduced to O(N) using O(N) storage. Execution times in experiments with ECG, EEG, RR, and DNA signals show an improvement of more than 100 times over the conventional O(N^2) method for N = 80,000 (N = length of the signal). Furthermore, an adaptive version of the new algorithm has been developed to speed up the computation for short data lengths; experiments show an improvement of more than 10 times over the conventional method for N > 4000.
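For reference, a naive O(N^2) implementation of sample entropy, the baseline that the paper's algorithm accelerates (the parameter names m and r follow the usual convention; this sketch is illustrative, not the paper's code):

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Naive sample entropy: -ln(A/B), where B counts template pairs of
    length m within tolerance r (Chebyshev distance), and A counts the
    same pairs extended to length m + 1. All-pairs comparison is O(N^2)."""
    n = len(x)

    def count_matches(length):
        total = 0
        # Both counts use i, j in range(n - m) so that the (m+1)-length
        # extension x[i + m] is always in bounds, as in the standard definition.
        for i in range(n - m):
            for j in range(i + 1, n - m):
                if max(abs(x[i + k] - x[j + k]) for k in range(length)) <= r:
                    total += 1
        return total

    b = count_matches(m)      # matches of length m
    a = count_matches(m + 1)  # matches of length m + 1
    return -math.log(a / b) if a and b else float('inf')
```

A perfectly regular signal yields entropy 0; irregular signals yield larger values. The paper's contribution is obtaining the same quantity in O(N^{3/2}) time, or O(N) for integer data.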

9.
Variable Order Panel Clustering
Stefan Sauter 《Computing》2000,64(3):223-261
We present a new version of the panel clustering method for the sparse representation of boundary integral equations. Instead of applying the algorithm separately to each matrix row (as in the classical version of the algorithm), we employ more general block partitionings. Furthermore, a variable order of approximation is used, depending on the size of the blocks. We apply this algorithm to a second kind Fredholm integral equation and show that the complexity of the method depends only linearly on the number, say n, of unknowns. The complexity of the classical matrix-oriented approach is O(n^2), while, for the classical panel clustering algorithm, it is O(n log^7 n). Received July 28, 1999; revised September 21, 1999.

10.
Li  Jie  Pan  Yi  Shen  Hong 《The Journal of supercomputing》2003,24(3):251-258
Topological sort of an acyclic graph has many applications, such as job scheduling and network analysis. Due to its importance, it has been tackled on many models. Dekel et al. [3] proposed an algorithm for solving the problem in O(log^2 N) time on the hypercube or shuffle-exchange networks with O(N^3) processors. Chaudhuri [2] gave an O(log N) algorithm using O(N^3) processors on a CRCW PRAM model. On the LARPBS (Linear Arrays with a Reconfigurable Pipelined Bus System) model, Li et al. [5] showed that the problem for a weighted directed graph with N vertices can be solved in O(log N) time using N^3 processors. In this paper, a more efficient topological sort algorithm is proposed on the same LARPBS model. We show that the problem can be solved in O(log N) time using N^3/log N processors. The algorithm has better time and processor complexities than the best algorithm on the hypercube, and the same time complexity but better processor complexity than the best algorithm on the CRCW PRAM model.
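For comparison with the parallel bounds above, the standard sequential solution (Kahn's algorithm) runs in O(V + E) time; a minimal sketch, not taken from any of the cited papers:

```python
from collections import deque

def topological_sort(n, edges):
    """Kahn's algorithm: repeatedly emit a vertex of in-degree 0 and
    remove its outgoing edges. Returns None if the graph has a cycle."""
    indeg = [0] * n
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(i for i in range(n) if indeg[i] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    # Fewer than n emitted vertices means some in-degrees never reached 0,
    # i.e. the graph contains a cycle.
    return order if len(order) == n else None
```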

11.
Xin He 《Algorithmica》1990,5(1):545-559
We present an efficient algorithm for 4-coloring perfect planar graphs. The best previously known algorithm for this problem takes O(n^{3/2}) sequential time, or O(log^4 n) parallel time with O(n^3) processors. The sequential implementation of our algorithm takes O(n log n) time. The parallel implementation of our algorithm takes O(log^3 n) time with O(n) processors on a PRAM.

12.
Thin plate smoothing splines are widely used to spatially interpolate surface climate; however, their application to large data sets is limited by computational efficiency. Standard analytic calculation of thin plate smoothing splines requires O(n^3) operations, where n is the number of data points, making routine computation infeasible for data sets with more than around 2000 data points. An O(N) iterative procedure for calculating finite element approximations to bivariate minimum generalised cross validation (GCV) thin plate smoothing splines was developed, where N is the number of grid points. The key contribution of the method lies in the incorporation of an automatic procedure for optimising smoothness to minimise the GCV, the criterion commonly used to optimise thin plate smoothing spline fits to climate data. The method discretises the bivariate thin plate smoothing spline equations using hierarchical biquadratic B-splines, and uses a nested grid multigrid procedure to solve the system. To optimise smoothness, a double iteration is incorporated, whereby the estimate of the spline solution and the estimate of the optimal smoothing parameter are updated simultaneously. When the method was tested on temperature data from the African and Australian continents, accurate approximations to the analytic solutions were obtained.

13.
Two algorithms for constructing a Delaunay triangulation
This paper provides a unified discussion of the Delaunay triangulation. Its geometric properties are reviewed and several applications are discussed. Two algorithms are presented for constructing the triangulation over a planar set of N points. The first algorithm uses a divide-and-conquer approach; it runs in O(N log N) time, which is asymptotically optimal. The second algorithm is iterative and requires O(N^2) time in the worst case; however, its average case performance is comparable to that of the first algorithm. This work was supported in part by the National Science Foundation under grant MCS-76-17321 and the Joint Services Electronics Program under contract DAAB-07-72-0259.

14.
We present a method called the Truncation method for computing Walsh–Hadamard transforms of one- and two-dimensional data. In one dimension, the method uses binary trees as a basis for representing the data and computing the transform. In two dimensions, it uses quadtrees (pyramids), adaptive quadtrees, or binary trees as a basis. We analyze the storage and time complexity of this method in the worst and general cases. The results show that the Truncation method degenerates to the Fast Walsh Transform (FWT) in the worst case, while it is faster than the Fast Walsh Transform when there is coherence in the input data, as is typically the case for image data. In one dimension, the performance of the Truncation method for N data samples is between O(N) and O(N log_2 N), and in two dimensions it is between O(N^2) and O(N^2 log_2 N). Practical results on several images show that both the expected and actual overall times taken to compute Walsh transforms using the Truncation method are less than those required by a similar implementation of the FWT method.
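The Fast Walsh Transform that the Truncation method is compared against is a short butterfly recursion; a minimal in-place sketch (natural/Hadamard ordering; not the paper's tree-based code):

```python
def fwht(a):
    """Fast Walsh-Hadamard Transform in O(N log N). The input length must
    be a power of two. At stride h, each butterfly replaces the pair
    (x, y) with (x + y, x - y)."""
    a = list(a)
    n = len(a)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a
```

Applying the transform twice recovers the input scaled by N, since the Hadamard matrix H satisfies H·H = N·I.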

15.
Merkle(1) has shown how keys can be exchanged over a public channel by the use of puzzles. His method gives a work advantage of O(N^2)/O(N). He also posed the problem of finding other methods of using puzzles that give a better work advantage. This paper provides a solution to Merkle's problem. A new method of key exchange, called the nested puzzles method, is also introduced.
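To illustrate the O(N^2)/O(N) work advantage of Merkle's scheme, here is a toy puzzle construction (entirely illustrative: the magic marker, key sizes, and XOR-with-hash "encryption" are assumptions for the sketch, not Merkle's construction or this paper's nested puzzles). Alice publishes N puzzles, each solvable by a small brute-force search; Bob solves one puzzle (cheap) and announces its id, while an eavesdropper expects to solve about N/2 puzzles to find the matching key:

```python
import hashlib
import secrets

def make_puzzle(puzzle_id, key, bits=8):
    """Hide (puzzle_id, key) behind a secret brute-forceable in 2^bits tries.
    The plaintext carries a magic marker so a solver can recognize success."""
    secret = secrets.randbelow(1 << bits)
    plain = b'PUZL' + puzzle_id.to_bytes(2, 'big') + key
    pad = hashlib.sha256(secret.to_bytes(4, 'big')).digest()[:len(plain)]
    return bytes(p ^ q for p, q in zip(plain, pad))

def solve_puzzle(blob, bits=8):
    """Brute-force the small secret; the marker identifies the right one."""
    for secret in range(1 << bits):
        pad = hashlib.sha256(secret.to_bytes(4, 'big')).digest()[:len(blob)]
        plain = bytes(p ^ q for p, q in zip(blob, pad))
        if plain[:4] == b'PUZL':
            return int.from_bytes(plain[4:6], 'big'), plain[6:]
    return None
```

With N puzzles each costing O(1) work to create and O(2^bits) = O(1) work to solve, the legitimate parties do O(N) total work while the attacker's expected work is O(N) puzzle solves, i.e. the O(N^2)/O(N) advantage the abstract cites.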

16.
The minimum k-terminal cut problem is of considerable theoretical interest and arises in several applied areas such as parallel and distributed computing, VLSI circuit design, and networking. In this paper we present two new approximation and exact algorithms for this problem on an n-vertex undirected weighted planar graph G. For the case when the k terminals are covered by the boundaries of m > 1 faces of G, we give a min{O(n^2 log n log m), O(m^2 n^{1.5} log^2 n + kn)} time algorithm with a (2 - 2/k)-approximation ratio (clearly, m ≤ k). For the case when all k terminals are covered by the boundary of one face of G, we give an O(nk^3 + (n log n)k^2) time exact algorithm, or a linear time exact algorithm if k = 3, for computing an optimal k-terminal cut. Our algorithms are based on interesting observations and improve upon the previous algorithms when applied to planar graphs. To the best of our knowledge, no previous approximation algorithms specifically for solving the k-terminal cut problem on planar graphs were known. The (2 - 2/k)-approximation algorithm of Dahlhaus et al. (for general graphs) takes O(kn^2 log n) time when applied to planar graphs. Our approximation algorithm for planar graphs runs faster than that of Dahlhaus et al. by at least an O(k/log m) factor (m ≤ k).

17.
Previous research on developing parallel triangulation algorithms concentrated on triangulating planar point sets. O(log^3 n) running time algorithms using O(n) processors were developed in Refs. 1 and 2. Atallah and Goodrich(3) presented a data structure that can be viewed as a parallel analogue of the sequential plane-sweeping paradigm, which can be used to triangulate a planar point set in O(log n log log n) time using O(n) processors. Recently Merks(4) described an algorithm for triangulating point sets which runs in O(log n) time using O(n) processors, and is thus optimal. In this paper we develop a parallel algorithm for triangulating simplicial point sets in arbitrary dimensions based on the idea of the sequential algorithm presented in Ref. 5. The algorithm runs in O(log^2 n) time using O(n/log n) processors. The product of the running time and the number of processors is O(n log n); i.e., an optimal speed-up.

18.
The minimum k-terminal cut problem is of considerable theoretical interest and arises in several applied areas such as parallel and distributed computing, VLSI circuit design, and networking. In this paper we present two new approximation and exact algorithms for this problem on an n-vertex undirected weighted planar graph G. For the case when the k terminals are covered by the boundaries of m > 1 faces of G, we give a min{O(n^2 log n log m), O(m^2 n^{1.5} log^2 n + kn)} time algorithm with a (2 - 2/k)-approximation ratio (clearly, m ≤ k). For the case when all k terminals are covered by the boundary of one face of G, we give an O(nk^3 + (n log n)k^2) time exact algorithm, or a linear time exact algorithm if k = 3, for computing an optimal k-terminal cut. Our algorithms are based on interesting observations and improve upon the previous algorithms when applied to planar graphs. To the best of our knowledge, no previous approximation algorithms specifically for solving the k-terminal cut problem on planar graphs were known. The (2 - 2/k)-approximation algorithm of Dahlhaus et al. (for general graphs) takes O(kn^2 log n) time when applied to planar graphs. Our approximation algorithm for planar graphs runs faster than that of Dahlhaus et al. by at least an O(k/log m) factor (m ≤ k).

19.
A new algorithm for computing the medial axis of a simple polygon is presented. Although the algorithm runs in O(kN) time, where k, the hierarchy depth of the Voronoi diagram of the polygon, ranges from O(N) down to O(log N), it is simple to implement and does not require the complex data structures needed by the faster methods. This is an important factor in many applications of the medial axis.

20.
He  Xin 《Algorithmica》1990,5(1-4):545-559

We present an efficient algorithm for 4-coloring perfect planar graphs. The best previously known algorithm for this problem takes O(n^{3/2}) sequential time, or O(log^4 n) parallel time with O(n^3) processors. The sequential implementation of our algorithm takes O(n log n) time. The parallel implementation of our algorithm takes O(log^3 n) time with O(n) processors on a PRAM.
