Similar Articles
20 similar articles found (search time: 31 ms)
1.
This paper addresses a topic in image processing where the accuracy of images, even in fine details, is important, and adopts a methodology that treats discrete problems by continuous mathematics and numerical approximation. The key idea is that a pixel of an image at different levels can be quantified by a greyness value, which can be regarded as the mean of an integral of continuous functions over a certain region and evaluated approximately by numerical integration. However, in contrast to traditional integration, the integrand has a different degree of smoothness in different subregions because of the piecewise interpolation and approximation. New treatments of approximate integration and new discrete algorithms for images have been developed. The cycle conversion T^(-1)T of image transformations refers to the case where an image is distorted by a transformation T and then restored to itself by the inverse transformation T^(-1). Combined algorithms of the discrete techniques proposed in [1–3] only achieve the convergence rates O(1/N) and O(1/N^2) for the sequential greyness errors, where N is the division number when a pixel is split into N^2 subpixels. This paper reports new combination algorithms using spline functions to achieve the high convergence rates O(1/N^3) and O(1/N^4) for digital image transformations under the cycle conversion. Both error analysis and numerical experiments are provided to verify the high convergence rates. High convergence rates of discrete algorithms are important for saving CPU time, particularly for multi-greyness images. Moreover, computations for real 256 × 256 images with 256 greyness levels, for which N = 2 is good enough for practical requirements, display the validity and effectiveness of the new algorithms.
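The quantification idea in this abstract can be written out as a worked formula. A minimal sketch in LaTeX, assuming R_ij denotes the region of pixel (i, j) and f the underlying continuous greyness function (illustrative notation, not the paper's):

```latex
% Pixel greyness as the mean of an integral over the pixel region R_{ij},
% approximated by splitting the pixel into N^2 subpixels and applying a
% quadrature rule (here the midpoint rule; the paper's spline-based
% combinations replace this step to reach the O(1/N^3) and O(1/N^4) rates).
g_{ij} \;=\; \frac{1}{|R_{ij}|} \iint_{R_{ij}} f(x,y)\,\mathrm{d}x\,\mathrm{d}y
\;\approx\; \frac{1}{N^2} \sum_{k=1}^{N} \sum_{l=1}^{N} f(x_k, y_l),
\qquad (x_k, y_l) = \text{centre of the } (k,l)\text{-th subpixel}.
```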

2.
The distance transform (DT) is an image computation tool that can be used to extract information about the shape and the position of the foreground pixels relative to each other. It converts a binary image into a grey-level image, where each pixel has a value corresponding to the distance to the nearest foreground pixel. The time complexity of computing the distance transform depends entirely on the distance metric used; in particular, the more exact the distance transform is, the worse the execution time becomes. Nowadays, thousands of images often have to be processed in a limited time, and it is practically impossible for a sequential computer to compute the distance transform in real time. To provide efficient distance transform computation, it is highly desirable to develop a parallel algorithm for this operation. In this paper, based on the diagonal propagation approach, we first provide an O(N^2) time sequential algorithm to compute the chessboard distance transform (CDT) of an N×N image, which is a DT using the chessboard distance metric. Based on the proposed sequential algorithm, the CDT of a 2D binary image array of size N×N can be computed in O(log N) time on the EREW PRAM model using O(N^2/log N) processors, in O(log log N) time on the CRCW PRAM model using O(N^2/log log N) processors, and in O(log N) time on the hypercube computer using O(N^2/log N) processors. Following the mapping proposed by Lee and Horng, an algorithm for the medial axis transform is also efficiently derived. The medial axis transform of a 2D binary image array of size N×N can be computed in O(log N) time on the EREW PRAM model using O(N^2/log N) processors, in O(log log N) time on the CRCW PRAM model using O(N^2/log log N) processors, and in O(log N) time on the hypercube computer using O(N^2/log N) processors. The proposed parallel algorithms are composed of a set of prefix operations. In each prefix operation phase, only increment (add-one) and minimum operations are employed, so the algorithms are especially efficient in practical applications.
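As a point of reference for the sequential O(N^2) bound, a minimal NumPy sketch of the chessboard distance transform is given below. It uses the standard two-pass forward/backward scan, not the diagonal-propagation or parallel prefix algorithms of the paper; function and array names are illustrative.

```python
import numpy as np

def chessboard_dt(binary):
    """Chessboard (L-infinity) distance transform of a 2D binary image.

    Each output pixel holds the chessboard distance to the nearest
    foreground (non-zero) pixel.  Standard two-pass scan, O(N^2) time.
    """
    n, m = binary.shape
    INF = n + m  # larger than any possible chessboard distance
    d = np.where(binary > 0, 0, INF).astype(int)

    # Forward pass: propagate distances from the top-left neighbours.
    for i in range(n):
        for j in range(m):
            if i > 0:
                d[i, j] = min(d[i, j], d[i - 1, j] + 1)
                if j > 0:
                    d[i, j] = min(d[i, j], d[i - 1, j - 1] + 1)
                if j < m - 1:
                    d[i, j] = min(d[i, j], d[i - 1, j + 1] + 1)
            if j > 0:
                d[i, j] = min(d[i, j], d[i, j - 1] + 1)

    # Backward pass: propagate distances from the bottom-right neighbours.
    for i in range(n - 1, -1, -1):
        for j in range(m - 1, -1, -1):
            if i < n - 1:
                d[i, j] = min(d[i, j], d[i + 1, j] + 1)
                if j > 0:
                    d[i, j] = min(d[i, j], d[i + 1, j - 1] + 1)
                if j < m - 1:
                    d[i, j] = min(d[i, j], d[i + 1, j + 1] + 1)
            if j < m - 1:
                d[i, j] = min(d[i, j], d[i, j + 1] + 1)

    return d

# Example: a single foreground pixel in the centre of a 5x5 image.
img = np.zeros((5, 5), dtype=int)
img[2, 2] = 1
print(chessboard_dt(img))
```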

3.
Three-dimensional (3-D) digital images and patterns under transformations are facilitated by the splitting-shooting method (SSM) and the splitting-integration method (SIM). The combination (CSIM) that uses both SSM and SIM, and two combinations (CIIM) that use SIM only, are proposed for a cycle conversion T^(-1)T, where T is a nonlinear transformation and T^(-1) is its inverse. This paper focuses on the accuracy of the obtained image greyness. In our discrete algorithms, letting a 3-D pixel be split into N^3 subpixels, the convergence rates O(1/N), O(1/N^2), and O(1/N^3) of the sequential error can be achieved by the three combinations, respectively. Higher convergence rates indicate that less CPU time is needed. Both the error bounds and the computation of pixel greyness demonstrate the significance of the proposed new algorithms.

4.
The image template matching problem is one of the fundamental problems of image processing, pattern recognition, and computer vision, and it has many practical applications. It is a useful operation for filtering, edge detection, image registration, and object detection [13]. In this paper, we first design two O[(M^2/p^2) log log M] and O[(M^2/p^2) + (M/p) log log p] time parallel image template matching algorithms on a 3-D processor array with a reconfigurable bus system using p^2 N^2 processors, with each processor containing O(1) and O(M/p) restricted memory, respectively, for 1 ≤ p ≤ M ≤ N, for an N×N digital image and an M×M template. By increasing the number of processors, these two proposed algorithms can be run in O(M^2/p^2) time, speeding up the time complexity, using p^2 M^(1/c) N^2 and p^(2+1/c) N^2 processors, respectively, where c is a constant and c ≥ 1. Furthermore, O(1) time can also be obtained from these two proposed algorithms by using M^(2+1/c) N^2 processors. These results improve the best known bounds and achieve both optimality and optimal speed-up in their time and processor complexities.
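For readers unfamiliar with the underlying operation, a minimal sequential sketch of image template matching is shown below (brute-force sum-of-squared-differences over all placements, O(M^2 N^2) work). It only states the baseline problem; it is not the reconfigurable-bus parallel algorithms of the paper, and the function and variable names are illustrative.

```python
import numpy as np

def template_match_ssd(image, template):
    """Return the SSD score map and the best-matching top-left position.

    Brute-force baseline: for an N x N image and an M x M template this
    costs O(M^2 N^2) operations, which is the work that the parallel
    algorithms in the paper distribute across processors.
    """
    N, M = image.shape[0], template.shape[0]
    scores = np.empty((N - M + 1, N - M + 1))
    for i in range(N - M + 1):
        for j in range(N - M + 1):
            window = image[i:i + M, j:j + M]
            scores[i, j] = np.sum((window - template) ** 2)
    best = np.unravel_index(np.argmin(scores), scores.shape)
    return scores, best

# Tiny example with an exact occurrence of the template.
img = np.random.randint(0, 256, size=(16, 16)).astype(float)
tpl = img[5:9, 7:11].copy()
_, pos = template_match_ssd(img, tpl)
print(pos)  # -> (5, 7)
```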

5.
In this paper we study a first-order primal-dual algorithm for non-smooth convex optimization problems with known saddle-point structure. We prove convergence to a saddle-point with rate O(1/N) in finite dimensions for the complete class of problems. We further show accelerations of the proposed algorithm to yield improved rates on problems with some degree of smoothness. In particular, we show that we can achieve O(1/N^2) convergence on problems where the primal or the dual objective is uniformly convex, and we can show linear convergence, i.e. O(ω^N) for some ω ∈ (0,1), on smooth problems. The wide applicability of the proposed algorithm is demonstrated on several imaging problems such as image denoising, image deconvolution, image inpainting, motion estimation and multi-label image segmentation.
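A minimal NumPy sketch of one instance of such a first-order primal-dual scheme, applied to TV (ROF) denoising, is given below. The step sizes, the operator-norm bound, and the parameter names are illustrative assumptions, no acceleration is included, and this is only an unaccelerated sketch of the basic iteration rather than the paper's full method.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient of a 2D image (Neumann boundary)."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def tv_denoise_primal_dual(f, lam=10.0, n_iter=200):
    """Basic primal-dual iteration for min_u ||grad u||_1 + (lam/2)||u - f||^2."""
    L2 = 8.0                          # upper bound on ||grad||^2 for forward differences
    tau = sigma = 1.0 / np.sqrt(L2)   # step sizes with tau * sigma * L2 <= 1
    theta = 1.0
    u = f.copy(); u_bar = f.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        # Dual ascent followed by projection onto the pointwise unit ball.
        gx, gy = grad(u_bar)
        px += sigma * gx; py += sigma * gy
        norm = np.maximum(1.0, np.sqrt(px ** 2 + py ** 2))
        px /= norm; py /= norm
        # Primal descent followed by the proximal step of the data term.
        u_old = u
        u = (u + tau * div(px, py) + tau * lam * f) / (1.0 + tau * lam)
        # Over-relaxation of the primal variable.
        u_bar = u + theta * (u - u_old)
    return u

noisy = np.random.rand(64, 64) + 0.1 * np.random.randn(64, 64)
denoised = tv_denoise_primal_dual(noisy)
```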

6.
We present three explicit schemes for distributing M variables among N memory modules, where M = Θ(N^1.5), M = Θ(N^2), and M = Θ(N^3), respectively. Each variable is replicated into a constant number of copies stored in distinct modules. We show that N processors, directly accessing the memories through a complete interconnection, can read/write any set of N variables in worst-case time O(N^(1/3)), O(N^(1/2)), and O(N^(2/3)), respectively, for the three schemes. The access times for the last two schemes are optimal with respect to the particular redundancy values used by those schemes. The address computation can be carried out efficiently by each processor without recourse to a complete memory map and requires only O(1) internal storage. This work was partially supported by NSF Grants CCR-91-96152 and CCR-94-00232, by ONR Contract N00014-91-J-4052, ARPA Order 8225, and by the ESPRIT III Basic Research Programme of the EC under Contract No. 9072 (Project GEPPCOM). Results reported here were presented in preliminary form at the 10th Symposium on Theoretical Aspects of Computer Science (Würzburg, Germany, 1993) and at the 5th ACM Symposium on Parallel Algorithms and Architectures (Velen, Germany, 1993).

7.
Shellsort with a constant number of increments
M. A. Weiss, Algorithmica, 1996, 16(6):649-654.
We consider the worst-case running time of Shellsort when only a constant number, c, of increments are allowed. For c = 3, we show that Shellsort can be implemented to run in O(N^(5/3)) time, which is optimal. For c = 4, we further improve the running time to O(N^(11/7)), and, for c = 5, we obtain a bound of O(N^(23/15)). We also show an O(N^(1+1/k)) bound for general c, where k = ⌊(1+√(8c+1))/4⌋. For c = 6, this is O(N^(3/2)).
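To make the setting concrete, here is a minimal Shellsort sketch restricted to c = 3 increments. The gap values below are illustrative placeholders chosen as simple powers of N, not the optimized sequences that yield the O(N^(5/3)) bound in the paper.

```python
import random

def shellsort_fixed_increments(a, increments=None):
    """Shellsort with a constant number c of increments (c = 3 by default).

    The gap sequence (roughly N^(2/3), N^(1/3), 1) is only an illustrative
    choice; the paper's bounds rely on carefully chosen increments.  The
    final gap must be 1 so the last pass is a full insertion sort, which
    guarantees correctness for any gap sequence.
    """
    n = len(a)
    if increments is None:
        increments = sorted({max(1, round(n ** (2 / 3))),
                             max(1, round(n ** (1 / 3))), 1}, reverse=True)
    for h in increments:
        # h-sort: insertion sort on each of the h interleaved subsequences.
        for i in range(h, n):
            key = a[i]
            j = i
            while j >= h and a[j - h] > key:
                a[j] = a[j - h]
                j -= h
            a[j] = key
    return a

data = random.sample(range(1000), 200)
assert shellsort_fixed_increments(data[:]) == sorted(data)
```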

8.
Let N_max(q) denote the maximum number of points of an elliptic curve over F_q. Given a prime power q = p^f and an integer n satisfying q/2 + 1 < n ≤ (N_max(q) − 2)/2, we present an algorithm which, on input q and n, produces an optimal bilinear algorithm of length 2n for multiplication in F_(q^n)/F_q. The algorithm takes roughly O(q^4 + n^4 log q) F_q-operations, or equivalently O((q^4 + n^4 log q) f^2 log^2 p) bit-operations, to compute the output data.

9.
The boundary concentrated FEM, a variant of the hp-version of the finite element method, is proposed for the numerical treatment of elliptic boundary value problems. It is particularly suited for equations with smooth coefficients and non-smooth boundary conditions. In the two-dimensional case it is shown that the Cholesky factorization of the resulting stiffness matrix requires O(N log^4 N) units of storage and can be computed with O(N log^8 N) work, where N denotes the problem size. Numerical results confirm the theoretical estimates. Received October 4, 2001; revised August 19, 2002; published online October 24, 2002.

10.
For k functions f_1, ..., f_k, a k-tuple (x_1, ..., x_k) such that f_1(x_1) = ... = f_k(x_k) is called a claw of f_1, ..., f_k. In this paper, we construct a new quantum claw-finding algorithm for three functions that is efficient when the number M of intermediate solutions is small. The known quantum claw-finding algorithm for three functions requires O(N^(7/8) log N) queries to find a claw, but our algorithm requires O(N^(3/4) log N) queries if M ≤ √N and O(N^(7/12) M^(1/3) log N) queries otherwise. Thus, our algorithm is more efficient if M < N^(7/8). Kazuo Iwama, Ph.D.: Professor of Informatics, Kyoto University, Kyoto 606-8501, Japan. Received BE, ME, and Ph.D. degrees in Electrical Engineering from Kyoto University in 1978, 1980 and 1985, respectively. His research interests include algorithms, complexity theory and quantum computation. Editorial board of Information Processing Letters and Parallel Computing. Council Member of the European Association for Theoretical Computer Science (EATCS). Akinori Kawachi: Received B.Eng. and M.Info. degrees from Kyoto University in 2000 and 2002, respectively. His research interests are quantum computation and distributed computation.

11.
We study the complexity of, and algorithms to construct, approximations of the union of lines and of the Minkowski sum of two simple polygons. We also study thick unions of lines and Minkowski sums, which are inflated with a small disc. Let b = D/ε be the ratio of the diameter of the region of interest to the maximum distance (or error) of the approximation. An approximate union of lines or Minkowski sum has complexity Θ(b^2) in the worst case. The thick union of n lines has complexity Ω(n + b^2) and O(n + b√(bn)), and thick Minkowski sums have complexity Ω(n^2 + b^2) and O((n + b)√(nb) log n + n^2 log n) in the worst case. We present algorithms that run in O(n + n^(2/3+δ) b^(4/3)) and O(min(bn, n^(4/3+δ) b^(2/3))) time (for any δ > 0) for approximate and thick arrangements, respectively. For approximate Minkowski sums, the running time is O(min(b^2 n, n^2 + b^2 + (nb)^(4/3+δ))); thick Minkowski sums take O(n^(8/3+δ) b^(2/3)) time to compute.

12.
An algorithm for finding a minimal edge coloring of a bipartite multigraph is presented. The algorithm uses O(V^(1/2) E log V + V) time and O(E + V) space. It is based on a divide-and-conquer strategy, using Euler partitions to divide the graph. A modification of the algorithm for matching is described. This algorithm finds a maximum matching of a regular bipartite graph with all degrees equal to 2^n in O(E + V) time and O(E + V) space. This work was partially supported by the National Science Foundation under Grant GJ36461.
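To make the Euler-partition idea concrete, below is a minimal Python sketch of the matching corollary: a perfect matching of a bipartite multigraph whose degrees are all the same power of two, obtained by repeatedly keeping one half of an Euler split. It is an illustrative toy, not the full O(V^(1/2) E log V) edge-coloring algorithm, and all function and variable names are ours.

```python
from collections import defaultdict

def euler_split(edges, vertices):
    """Split a bipartite multigraph with all degrees even into two halves
    by alternating edges along closed walks (an Euler partition)."""
    adj = defaultdict(list)               # vertex -> incident edge ids
    for eid, (u, v) in enumerate(edges):
        adj[u].append(eid)
        adj[v].append(eid)
    used = [False] * len(edges)
    half = [[], []]
    for start in vertices:
        while adj[start]:
            walk, v = [], start
            while True:
                # discard incident edges already consumed by earlier walks
                while adj[v] and used[adj[v][-1]]:
                    adj[v].pop()
                if not adj[v]:
                    break
                eid = adj[v].pop()
                used[eid] = True
                walk.append(eid)
                a, b = edges[eid]
                v = b if v == a else a
            # closed walks in a bipartite graph have even length, so
            # alternating halves splits each vertex's edges evenly
            for k, eid in enumerate(walk):
                half[k % 2].append(eid)
    return [edges[e] for e in half[0]], [edges[e] for e in half[1]]

def perfect_matching_power_of_two_regular(edges, vertices):
    """Edges of a 2^n-regular bipartite multigraph -> a perfect matching,
    by repeatedly keeping one half of an Euler split (O(E) total work)."""
    current = list(edges)
    deg = defaultdict(int)
    for u, v in current:
        deg[u] += 1; deg[v] += 1
    d = max(deg.values())
    while d > 1:
        current, _ = euler_split(current, vertices)
        d //= 2
    return current

# Tiny example: a 2-regular bipartite multigraph on parts {a, b} and {x, y}.
E = [('a', 'x'), ('a', 'y'), ('b', 'x'), ('b', 'y')]
print(perfect_matching_power_of_two_regular(E, ['a', 'b', 'x', 'y']))
```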

13.
In this article, the adaptive integral method (AIM) is used to analyze large-scale planar structures. Discretization of the corresponding integral equations by the method of moments (MoM) with Rao-Wilton-Glisson (RWG) basis functions can model arbitrarily shaped planar structures, but it usually leads to a fully populated matrix. AIM maps these basis functions onto a rectangular grid, where the Toeplitz property of the Green's function is exploited, enabling the matrix-vector multiplication to be performed with the fast Fourier transform (FFT). This reduces the memory requirement from O(N^2) to O(N) and the operation complexity from O(N^2) to O(N log N), where N is the number of unknowns. The resultant equations are then solved by the loose generalized minimal residual method (LGMRES) to accelerate the iteration, which converges much faster than the conventional conjugate gradient (CG) method. Furthermore, several preconditioning techniques are employed to enhance the computational efficiency of the LGMRES. Some typical microstrip circuits and a microstrip antenna array are analyzed, and numerical results show that the preconditioned LGMRES converges much faster than the conventional LGMRES. © 2008 Wiley Periodicals, Inc. Int J RF and Microwave CAE, 2009.
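The core speed-up in AIM comes from the fact that a matrix with Toeplitz structure (block-Toeplitz in the actual 2-D/3-D grid) can be applied to a vector through FFTs. A minimal 1-D illustration of that idea is sketched below; it is purely illustrative of the O(N log N) matrix-vector product and not the AIM operator itself.

```python
import numpy as np

def toeplitz_matvec_fft(first_col, first_row, x):
    """Multiply an n x n Toeplitz matrix by a vector in O(n log n).

    The Toeplitz matrix (defined by its first column and first row,
    with first_col[0] == first_row[0]) is embedded in a 2n x 2n
    circulant matrix, which the FFT diagonalises, so the product
    reduces to a circular convolution.
    """
    n = len(x)
    # First column of the circulant embedding: [column, 0, reversed row tail].
    c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

# Verify against the dense O(n^2) product on a random Toeplitz matrix.
n = 256
col = np.random.rand(n)
row = np.random.rand(n)
row[0] = col[0]
T = np.array([[col[i - j] if i >= j else row[j - i] for j in range(n)]
              for i in range(n)])
x = np.random.rand(n)
assert np.allclose(T @ x, toeplitz_matvec_fft(col, row, x))
```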

14.
Fast computation of sample entropy and approximate entropy in biomedicine
Both sample entropy and approximate entropy are measures of complexity. The two methods have received a great deal of attention in the last few years and have been successfully verified and applied in biomedical applications and many others. However, the algorithms proposed in the literature require O(N^2) execution time, which is not fast enough for online applications or for applications with long data sets. To accelerate computation, the authors of the present paper have developed a new algorithm that reduces the computational time to O(N^(3/2)) using O(N) storage. As biomedical data are often measured as integer-valued data, the computation time can be further reduced to O(N) using O(N) storage. Experimental results with ECG, EEG, RR, and DNA signals show a speed improvement of more than 100 times compared with the conventional O(N^2) method for N = 80,000 (N being the length of the signal). Furthermore, an adaptive version of the new algorithm has been developed to speed up the computation for short data lengths. Experimental results show an improvement of more than 10 times compared with the conventional method for N > 4000.
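For reference, a minimal O(N^2) NumPy implementation of sample entropy (the conventional baseline that the paper accelerates) is sketched below; parameter names m and r follow the usual convention, and this is not the authors' fast algorithm.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Baseline O(N^2) sample entropy SampEn(m, r) of a 1-D signal.

    SampEn = -ln(A / B), where B counts pairs of length-m templates whose
    Chebyshev distance is <= r, and A counts the same for length m + 1;
    self-matches are excluded.  This is the quadratic-time formulation.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)        # common default tolerance

    def count_pairs(mm):
        # Use the first n - m templates so the m and m+1 counts are
        # taken over the same index range.
        templates = np.array([x[i:i + mm] for i in range(n - m)])
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance from template i to all later templates.
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= r))
        return count

    B = count_pairs(m)
    A = count_pairs(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else float("inf")

rng = np.random.default_rng(0)
print(sample_entropy(rng.standard_normal(2000)))
```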

15.
A polynomial interpolation time-marching technique can efficiently provide balanced spectral accuracy in both the space and time dimensions for some PDEs. The Newton-form interpolation based on Fejér points has been successfully implemented to march the periodic Fourier pseudospectral solution in time. In this paper, this spectrally accurate time-stepping technique is extended to solve some typical nonperiodic initial boundary value problems with the Chebyshev collocation spatial approximation. Both homogeneous Neumann and Dirichlet boundary conditions are incorporated into the time-marching scheme. For the second-order wave equation, besides more accurate time-marching, the new scheme numerically has an O(1/N^2) time-step stability limit, much larger than the O(1/N^4) stability limit of conventional finite-difference time-stepping with Chebyshev space collocation.

16.
International Journal of Computer Mathematics, 2012, 89(13):3052-3062.
This paper describes a procedure for solving systems of linear Volterra integral equations by means of the Sinc collocation method. A convergence and error analysis is given; it is shown that the Sinc solution produces an error of order O(exp(−c N^(1/2))), where c > 0 is a constant. This approximation reduces the system of integral equations to an explicit system of algebraic equations. The analytical results are illustrated with numerical examples that exhibit the exponential convergence rate.

17.
18.
This paper shows a parallel implementation of a priority queue with bandwidth P and maximum size nP by means of a network with reconfigurable buses. The proposed solution is based on a tree-of-meshes architecture with O(nP^2) processors and O(P log n) maximum subbus length. The computational times required by the operations of a priority queue with bandwidth P are O(1) for all the operations, using the unit-time delay model for broadcasting, while they are O(1) for MIN and O(log P + log log n) for both DELETEMIN and INSERT, using the log-time delay model. The proposed network can be laid out in the classical H-shaped manner to occupy O(nP^2) area in the VLSI model. When P = O(1), the required area is optimal and, using the unit-time delay model, the resulting AT^2 is also optimal. The paper also presents a very simple and efficient way of merging two sorted sequences on a reconfigurable mesh, which is used in the implementation of the priority queue operations.

19.
Li Jie, Pan Yi, Shen Hong, The Journal of Supercomputing, 2003, 24(3):251-258.
Topological sort of an acyclic graph has many applications, such as job scheduling and network analysis. Due to its importance, it has been tackled on many models. Dekel et al. [3] proposed an algorithm for solving the problem in O(log^2 N) time on the hypercube or shuffle-exchange networks with O(N^3) processors. Chaudhuri [2] gave an O(log N) algorithm using O(N^3) processors on the CRCW PRAM model. On the LARPBS (Linear Arrays with a Reconfigurable Pipelined Bus System) model, Li et al. [5] showed that the problem for a weighted directed graph with N vertices can be solved in O(log N) time using N^3 processors. In this paper, a more efficient topological sort algorithm is proposed on the same LARPBS model. We show that the problem can be solved in O(log N) time using N^3/log N processors. The algorithm has better time and processor complexities than the best algorithm on the hypercube, and the same time complexity but better processor complexity than the best algorithm on the CRCW PRAM model.
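As a sequential baseline for the problem these parallel algorithms target, a short Kahn's-algorithm sketch is given below (O(V + E) time); it is not the LARPBS algorithm of the paper, and the function name is ours.

```python
from collections import deque

def topological_sort(n, edges):
    """Kahn's algorithm: return a topological order of vertices 0..n-1 of a
    DAG given as a list of directed (u, v) edges, or None if the graph
    contains a cycle.  Runs in O(V + E) time."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(v for v in range(n) if indeg[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for w in adj[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return order if len(order) == n else None

# Example: 0 -> 1 -> 3 and 0 -> 2 -> 3.
print(topological_sort(4, [(0, 1), (1, 3), (0, 2), (2, 3)]))  # e.g. [0, 1, 2, 3]
```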

20.
Efficient data structures are given for the following two query problems: (i) preprocess a set P of simple polygons with a total of n edges, so that all polygons of P intersected by a query segment can be reported efficiently, and (ii) preprocess a set S of n segments, so that the connected components of the arrangement of S intersected by a query segment can be reported quickly. In these problems we do not want to return the polygons or connected components explicitly (i.e., we do not wish to report the segments defining the polygon or the segments lying in the connected components). Instead, we assume that the polygons (or connected components) are labeled and we just want to report their labels. We present data structures of size O(n^(1+ε)) that can answer a query in time O(n^(1+ε) + k), where k is the output size. If the edges of P (or the segments in S) are orthogonal, the query time can be improved to O(log n + k) using O(n log n) space. We also present data structures that can maintain the connected components as we insert new segments. For arbitrary segments the amortized update and query times are O(n^(1/2+ε)) and O(n^(1/2+ε) + k), respectively, and the space used by the data structure is O(n^(1+ε)). If we allow O(n^(4/3+ε)) space, the amortized update and query times can be improved to O(n^(1/3+ε)) and O(n^(1/3+ε) + k), respectively. For orthogonal segments the amortized update and query times are O(log^2 n) and O(log^2 n + k log n), and the space used by the data structure is O(n log n). Some other related results are also mentioned. Part of this work was done while the second author was visiting the first author on a grant from the Dutch Organization for Scientific Research (N.W.O.). The research of the second author was also supported by the ESPRIT Basic Research Action No. 3075 (project ALCOM). The research of the first author was supported by National Science Foundation Grant CCR-91-06514.
