Similar Documents
20 similar documents found (search time: 31 ms)
1.
《Real》2002,8(2):137-144
In image processing, moments are useful tools for analyzing shapes. Suppose an input grey image of size N×N has been compressed into a block representation using K blocks, where commonly K < N^2 due to the compression effect. This paper presents an efficient O(NK)-time algorithm for computing moments directly on the compressed image. Experimental results reveal a significant computational advantage of the proposed algorithm, while preserving high accuracy of the moments and a good compression ratio. The results extend the previous results in [7] from the binary image domain to the grey image domain.
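
The key observation can be sketched in a few lines of Python: for a block of constant grey value, the moment m_pq = Σ_x Σ_y x^p y^q f(x,y) factors into a product of two one-dimensional power sums, so each block contributes in time independent of its area. The block tuple layout below is an illustrative assumption, not the paper's actual data structure.

    # A minimal sketch (not the paper's algorithm) of computing geometric
    # moments directly from a block representation: each block is
    # (x1, x2, y1, y2, v) with constant grey value v over the rectangle.
    def power_sum(a, b, p):
        """Sum of i**p for i in [a, b]."""
        return sum(i ** p for i in range(a, b + 1))

    def moment_from_blocks(blocks, p, q):
        # m_pq = sum_x sum_y x^p * y^q * f(x, y) factors per block into
        # a product of two 1-D power sums, so each block costs O(1)
        # power-sum evaluations regardless of its area.
        return sum(v * power_sum(x1, x2, p) * power_sum(y1, y2, q)
                   for (x1, x2, y1, y2, v) in blocks)

    # Example: a 2x2 block of value 5 at the origin contributes 20 to m_00.
    print(moment_from_blocks([(0, 1, 0, 1, 5)], 0, 0))  # 20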

2.
We consider the conjectured O(N^(2+ε)) time complexity of multiplying any two N×N matrices A and B. Our main result is a deterministic Compressed Sensing (CS) algorithm that both rapidly and accurately computes AB, provided that the resulting matrix product is sparse/compressible. As a consequence of our main result, we enlarge the class of matrices A, for any given N×N matrix B, for which the exact computation of AB can be carried out using the conjectured O(N^(2+ε)) operations. Additionally, in the process of developing our matrix multiplication procedure, we present a modified version of Indyk's recently proposed extractor-based CS algorithm [P. Indyk, Explicit constructions for compressed sensing of sparse signals, in: SODA, 2008] which is resilient to noise.

3.
Given a compressed image in the restricted quadtree and shading format, this paper presents a fast algorithm for computing the 2D discrete cosine transform (DCT) directly on the compressed grey image, without decompressing it first. The proposed DCT algorithm takes O(K^2 log K + N^2) time, where the decompressed image is of size N×N and K denotes the number of nodes in the restricted quadtree. Since commonly K < N, the proposed algorithm is faster than the indirect method of decompressing the compressed image and then applying a conventional DCT algorithm to the result, which takes O(N^2 log N) time.
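
For reference, a definitional sketch of the 2D DCT-II being computed, written in separable matrix form; this is the baseline transform the paper avoids evaluating pixel by pixel by working on the quadtree directly.

    import numpy as np

    def dct2(f):
        # Orthonormal 2D DCT-II in separable matrix form: F = C f C^T,
        # where C[u, x] = alpha(u) * cos(pi * (2x + 1) * u / (2N)).
        N = f.shape[0]
        u = np.arange(N)[:, None]
        x = np.arange(N)[None, :]
        C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * x + 1) * u / (2 * N))
        C[0, :] /= np.sqrt(2)        # alpha(0) = sqrt(1/N)
        return C @ f @ C.T           # transform rows, then columns

    F = dct2(np.random.rand(8, 8))   # e.g. an 8x8 grey block, as in JPEG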

4.
The edit distance problem is a classical fundamental problem in computer science in general, and in combinatorial pattern matching in particular. The standard dynamic programming solution computes the edit distance between a pair of strings of total length O(N) in O(N^2) time. To date, this quadratic upper bound has never been substantially improved for general strings. However, there are known techniques for breaking this bound when the strings are known to compress well under a particular compression scheme. The basic idea is to first compress the strings, and then compute the edit distance between the compressed strings. As it turns out, practically all known o(N^2) edit-distance algorithms work, in some sense, under this paradigm. It is therefore natural to ask whether there is a single edit-distance algorithm that works for strings compressed under any compression scheme. A rephrasing of this question is whether a single algorithm can exploit the compressibility properties of strings under any compression method, even if each string is compressed using a different scheme. In this paper we set out to answer this question using straight-line programs, which provide a generic platform for representing many popular compression schemes, including the LZ family, run-length encoding, byte-pair encoding, and dictionary methods. For two strings of total length N having straight-line program representations of total size n, we present an algorithm running in O(nN lg(N/n)) time for computing the edit distance of the two strings under any rational scoring function, and an O(n^(2/3) N^(4/3))-time algorithm for arbitrary scoring functions. Our new result, while providing a speed-up for compressible strings, does not surpass the quadratic time bound even in the worst case.
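
For concreteness, here is the standard O(N^2) dynamic program that all of these compression-based algorithms aim to beat.

    # Classical quadratic-time edit distance (unit-cost insert/delete/substitute).
    def edit_distance(s, t):
        m, n = len(s), len(t)
        # D[i][j] = edit distance between s[:i] and t[:j]
        D = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            D[i][0] = i
        for j in range(n + 1):
            D[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                D[i][j] = min(D[i - 1][j] + 1,      # deletion
                              D[i][j - 1] + 1,      # insertion
                              D[i - 1][j - 1] + (s[i - 1] != t[j - 1]))
        return D[m][n]

    print(edit_distance("kitten", "sitting"))  # 3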

5.
6.
We present an efficient algorithm for finding all approximate occurrences of a given pattern p of length m in a text t of length n, allowing for translocations of equal-length adjacent factors and inversions of factors. The algorithm is based on an efficient filtering method and has O(nm·max(α,β)) worst-case time complexity and O(max(α,β,σ)) space complexity, where α and β are respectively the maximum lengths of the factors involved in any translocation and inversion, and σ is the alphabet size. Moreover, we show that our algorithm has O(n) average time complexity whenever …, for ε>0. Experiments show that the proposed algorithm achieves very good results in practical cases.

7.
In this paper we study a first-order primal-dual algorithm for non-smooth convex optimization problems with known saddle-point structure. We prove convergence to a saddle point with rate O(1/N) in finite dimensions for the complete class of problems. We further show accelerations of the proposed algorithm that yield improved rates on problems with some degree of smoothness. In particular, we achieve O(1/N^2) convergence on problems where the primal or the dual objective is uniformly convex, and linear convergence, i.e. O(ω^N) for some ω∈(0,1), on smooth problems. The wide applicability of the proposed algorithm is demonstrated on several imaging problems such as image denoising, image deconvolution, image inpainting, motion estimation and multi-label image segmentation.
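
A minimal sketch of a primal-dual iteration of the kind analyzed here, for saddle-point problems min_x max_y ⟨Kx, y⟩ + G(x) − F*(y). The prox operators, step sizes and starting points are assumptions supplied by the caller; tau·sigma·‖K‖^2 ≤ 1 is the usual step-size condition for the O(1/N) rate.

    import numpy as np

    def primal_dual(K, prox_G, prox_Fstar, x0, y0, tau, sigma,
                    iters=500, theta=1.0):
        # One prox step on each variable per iteration, plus the
        # over-relaxation (extrapolation) step on the primal iterate.
        x, y, x_bar = x0.copy(), y0.copy(), x0.copy()
        for _ in range(iters):
            y = prox_Fstar(y + sigma * (K @ x_bar), sigma)  # dual step
            x_new = prox_G(x - tau * (K.T @ y), tau)        # primal step
            x_bar = x_new + theta * (x_new - x)             # extrapolation
            x = x_new
        return x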

8.
A recent trend in stringology explores the possibility of utilizing text compression to speed up similarity computation between strings. In this line of investigation, run-length encoding is one of the earliest studied compression schemes. Despite its simple coding nature, the only positive result before this work was the computation of the in-del distance (the dual of the longest common subsequence), which requires O(mn log mn) time, where m and n denote the numbers of runs of the input strings. The worst-case time complexity of computing the edit distance between two run-length encoded strings still depended on the uncompressed string lengths. In this paper, we close this foundational gap by providing the first “fully compressed” algorithm whose running time depends solely on the compressed string lengths. Specifically, given two strings compressed into m and n runs, m ≤ n, we present an O(mn^2)-time algorithm for computing the edit distance of the strings. Our approach also yields the first fully compressed solution to approximate matching of a pattern of m runs in a text of n runs in O(mn^2) time.
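
For reference, the run-length encoding in question stores a string as (symbol, run length) pairs, so the input sizes m and n above count these pairs; a one-liner:

    # Run-length encoding: "aaabbbbcc" -> [('a', 3), ('b', 4), ('c', 2)]
    from itertools import groupby

    def rle(s):
        return [(ch, len(list(g))) for ch, g in groupby(s)]

    print(rle("aaabbbbcc"))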

9.
We show an O*(n^(ℓ+1))-time algorithm for the channel assignment problem, where ℓ is the maximum edge weight. This improves on the previous O*(n^(ℓ+2))-time algorithm by Král (2005) [1], as well as on algorithms for important special cases, such as L(2,1)-labeling. For the latter problem, our algorithm works in O*(n^3) time. The progress is achieved by applying the fast zeta transform in combination with the inclusion-exclusion principle.
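
A sketch of the fast zeta transform over the subset lattice, the tool driving the speed-up: it computes g(S) = Σ_{T⊆S} f(T) for all 2^n subsets in O(2^n · n) additions rather than the naive O(3^n).

    # Subsets are bitmasks 0 .. 2^n - 1; f is a list indexed by bitmask.
    def fast_zeta(f, n):
        g = list(f)
        for i in range(n):           # absorb element i into the sums
            for S in range(1 << n):
                if S & (1 << i):
                    g[S] += g[S ^ (1 << i)]
        return g

    # Example with n = 2: f({}) = 1, f({0}) = 2, f({1}) = 3, f({0,1}) = 4.
    print(fast_zeta([1, 2, 3, 4], 2))  # [1, 3, 4, 10]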

10.
We give the first optimal algorithm that computes a minimum cycle basis for any weighted outerplanar graph. Specifically, for any n-node edge-weighted outerplanar graph G, we give an O(n)-time algorithm to obtain an O(n)-space compact representation Z(C) of a minimum cycle basis C of G. Each cycle in C can be computed from Z(C) in O(1) time per edge. Our result works for both directed and undirected outerplanar graphs G.

11.
We present semi-streaming algorithms for basic graph problems that have optimal per-edge processing times and therefore surpass all previous semi-streaming algorithms for these tasks. The semi-streaming model, which is appropriate when dealing with massive graphs, forbids random access to the input and restricts the memory to O(n·polylog n) bits. In particular, the formerly best per-edge processing times were O(α(n)) for finding the connected components and a bipartition, O(k^2·n) and O(n·log n), respectively, for determining k-vertex and k-edge connectivity for any constant k, and O(log n) for computing a minimum spanning forest. We reduce all of these time bounds to O(1). Every presented algorithm determines a solution asymptotically as fast as the best corresponding algorithm to date in the classical RAM model, which therefore cannot convert its advantage of unlimited memory and random access into superior computing times for these problems.
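
As a baseline for comparison, the classical union-find computation of connected components attains the O(α(n)) per-edge bound mentioned above in a single pass over the edge stream (a sketch of that baseline, not of the paper's O(1)-per-edge algorithm):

    # Connected components over a stream of edges with union-find
    # (path halving); amortized O(alpha(n)) work per edge.
    def components(n, edge_stream):
        parent = list(range(n))
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]  # path halving
                v = parent[v]
            return v
        for u, v in edge_stream:               # one pass over the edges
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
        return [find(v) for v in range(n)]     # component label per vertex

    print(components(4, [(0, 1), (2, 3)]))     # two components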

12.
A certifying algorithm for a problem is an algorithm that provides, with each answer it produces, a certificate: evidence that can be used to authenticate the correctness of the answer. A Hamiltonian cycle in a graph is a simple cycle in which each vertex of the graph appears exactly once; the Hamiltonian cycle problem is to determine whether or not a graph contains one. The best previous result for the Hamiltonian cycle problem on circular-arc graphs is an O(n^2 log n)-time algorithm, where n is the number of vertices of the input graph. In fact, that algorithm can be modified into a certifying algorithm, although it was published before the term “certifying algorithm” appeared in the literature. However, whether an algorithm faster than O(n^2 log n) exists for the Hamiltonian cycle problem on circular-arc graphs has been an open question for two decades. In this paper, we present an O(Δn)-time certifying algorithm for this problem, where Δ is the maximum degree of the input graph. The certificates provided by our algorithm can be authenticated in O(n) time.
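
The flavor of a yes-certificate can be illustrated with a short verifier: given adjacency sets, a claimed Hamiltonian cycle is authenticated in O(n) time (assuming O(1) set membership). The graph representation below is an illustrative assumption, not the paper's certificate format.

    # Check that `cycle` visits every vertex exactly once and that
    # consecutive vertices (cyclically) are adjacent in the graph.
    def verify_hamiltonian_cycle(adj, cycle):
        n = len(adj)
        if len(cycle) != n or set(cycle) != set(adj):
            return False            # not a permutation of the vertices
        return all(cycle[(i + 1) % n] in adj[cycle[i]] for i in range(n))

    adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
    print(verify_hamiltonian_cycle(adj, [0, 1, 2, 3]))  # True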

13.
The distance transform (DT) is an image computation tool that extracts information about the shape and position of the foreground pixels relative to each other. It converts a binary image into a grey-level image in which each pixel's value is the distance to the nearest foreground pixel. The time complexity of computing the distance transform depends heavily on the distance metric: the more exact the distance transform, the worse the execution time. Nowadays, thousands of images must often be processed in a limited time, and such distance-transform computation in real time is beyond the reach of a sequential computer. To provide efficient distance-transform computation, it is therefore desirable to develop a parallel algorithm for this operation. In this paper, based on the diagonal propagation approach, we first provide an O(N^2)-time sequential algorithm to compute the chessboard distance transform (CDT) of an N×N image, i.e. a DT under the chessboard distance metric. Based on the proposed sequential algorithm, the CDT of a 2D binary image array of size N×N can be computed in O(log N) time on the EREW PRAM model using O(N^2/log N) processors, in O(log log N) time on the CRCW PRAM model using O(N^2/log log N) processors, and in O(log N) time on the hypercube computer using O(N^2/log N) processors. Following the mapping proposed by Lee and Horng, an algorithm for the medial axis transform is also efficiently derived, with the same bounds: the medial axis transform of a 2D binary image array of size N×N can be computed in O(log N) time on the EREW PRAM model using O(N^2/log N) processors, in O(log log N) time on the CRCW PRAM model using O(N^2/log log N) processors, and in O(log N) time on the hypercube computer using O(N^2/log N) processors. The proposed parallel algorithms are composed of a set of prefix operations; each prefix-operation phase uses only increment (add-one) and minimum operations, so the algorithms are especially efficient in practical applications.
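
A sequential two-pass, chamfer-style sketch of the chessboard distance transform (not the paper's diagonal-propagation algorithm): with the 8-neighborhood and unit weights, two raster scans compute the exact chessboard distance in O(N^2) time.

    # Chessboard (L-infinity) distance to the nearest foreground (1) pixel.
    def chessboard_dt(img):
        n, INF = len(img), float("inf")
        d = [[0 if img[i][j] else INF for j in range(n)] for i in range(n)]
        for i in range(n):                # forward raster scan
            for j in range(n):
                for di, dj in ((-1, -1), (-1, 0), (-1, 1), (0, -1)):
                    if 0 <= i + di < n and 0 <= j + dj < n:
                        d[i][j] = min(d[i][j], d[i + di][j + dj] + 1)
        for i in reversed(range(n)):      # backward raster scan
            for j in reversed(range(n)):
                for di, dj in ((1, 1), (1, 0), (1, -1), (0, 1)):
                    if 0 <= i + di < n and 0 <= j + dj < n:
                        d[i][j] = min(d[i][j], d[i + di][j + dj] + 1)
        return d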

14.
The Euclidean distance transform (EDT) converts a binary image of black and white pixels into a representation where each pixel holds the Euclidean distance to the nearest black pixel. The EDT has many applications in computer vision and image processing. In this paper, we present a constant-time algorithm for computing the EDT of an N×N image on a reconfigurable mesh. Our algorithm has two variants. (i) If the image is initially given in an N×N mesh, one pixel per processor, our algorithm requires an N×N×N mesh for computing the EDT. (ii) If the image is given in an N×N^2 mesh, with each row of the image in the first row of a separate N×N mesh, we can compute the EDT in the same N×N^2 mesh. The AT^2 bounds for these two variants are O(N^4) and O(N^3), respectively. The best previously known algorithm (Y. Pan and K. Li, Inform. Sci. 120 (1999), 209–221) for this problem assumes input similar to the second variant of our algorithm and runs in constant time on an N^2×N^2 reconfigurable mesh with an AT^2 bound of O(N^4). Hence both variants of our algorithm improve on the processor complexity of the algorithm of Pan and Li (1999) by a factor of N, and the second variant improves on the AT^2 complexity by a factor of N.
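
For reference, a definitional brute-force sketch of the EDT being computed (O(N^4) time sequentially; the paper's contribution is doing this in constant time on a reconfigurable mesh):

    import math

    # Each pixel gets the Euclidean distance to the nearest black (1)
    # pixel; assumes the image contains at least one black pixel.
    def edt(img):
        n = len(img)
        black = [(i, j) for i in range(n) for j in range(n) if img[i][j]]
        return [[min(math.hypot(i - bi, j - bj) for bi, bj in black)
                 for j in range(n)] for i in range(n)]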

15.
Clustering is a basic operation in image processing and computer vision, and it plays an important role in unsupervised pattern recognition and image segmentation. While there are many clustering methods, single-link hierarchical clustering is one of the most popular techniques. In this paper, exploiting the advantages of both optical transmission and electronic computation, we design efficient parallel hierarchical clustering algorithms on arrays with reconfigurable optical buses (AROB). We first design three efficient basic operations: multiplying two N×N matrices, finding a minimum spanning tree of a graph with N vertices, and identifying the connected component containing a specified vertex. Based on these three operations, an O(log N)-time parallel hierarchical clustering algorithm is proposed using N^3 processors. Furthermore, if four-port connections are allowed in the AROB, two constant-time clustering algorithms can be derived using N^4 and N^3 processors, respectively. These results improve on previously known algorithms developed for various parallel computational models.
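
The MST connection exploited above can be sketched sequentially: single-link clustering into k clusters is equivalent to running Kruskal's algorithm on all pairwise distances and stopping k−1 merges early (a simple O(N^2 log N) sketch, not the parallel AROB algorithm).

    import math
    from itertools import combinations

    def single_link(points, k):
        n = len(points)
        # Kruskal on all pairwise edges; performing only n - k merges
        # yields exactly the k single-link clusters.
        edges = sorted((math.dist(points[a], points[b]), a, b)
                       for a, b in combinations(range(n), 2))
        parent = list(range(n))
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        merges = 0
        for _, a, b in edges:
            ra, rb = find(a), find(b)
            if ra != rb and merges < n - k:
                parent[ra] = rb
                merges += 1
        return [find(v) for v in range(n)]  # cluster label per point

    print(single_link([(0, 0), (0, 1), (5, 5), (5, 6)], 2))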

16.
《Pattern recognition》1986,19(3):221-228
The relative neighbourhood graph (RNG) of a set of N points connects the relative neighbours: a pair of points is joined by an edge if the two points are at least as close to each other as either is to any other point. The paper presents two new algorithms for constructing the RNG in two-dimensional Euclidean space. The method is to determine a supergraph of the RNG that can then be efficiently thinned of the extra edges. The first algorithm is simple and runs in O(N^2) time. The worst-case running time of the second algorithm is also O(N^2), but its average-case running time is O(N) for points drawn from a homogeneous planar Poisson point process. Experimental tests have shown the usefulness of the approach.
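
The definition translates directly into a brute-force O(N^3) construction, which the paper's supergraph-and-thinning approach improves upon:

    import math

    # Edge (p, q) belongs to the RNG iff no third point r is strictly
    # closer to both p and q than they are to each other.
    def rng_edges(points):
        d = lambda a, b: math.dist(points[a], points[b])
        n = len(points)
        return [(p, q) for p in range(n) for q in range(p + 1, n)
                if not any(max(d(p, r), d(q, r)) < d(p, q)
                           for r in range(n) if r not in (p, q))]

    print(rng_edges([(0, 0), (1, 0), (0, 1)]))  # [(0, 1), (0, 2)]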

17.
This paper addresses the multi-robot 2-cyclic scheduling problem in a no-wait robotic cell, where exactly two parts enter and leave the cell during each cycle and multiple robots on a single track transport parts between machines. We develop a polynomial algorithm that finds the minimum number of robots for all feasible cycle times; consequently, the optimal cycle time for any given number of robots can also be obtained with the algorithm. The proposed algorithm can be implemented in O(N^7) time, where N is the number of machines in the robotic cell.

18.
We present an O(n^2 log n)-time algorithm that finds a maximum matching in a regular graph with n vertices. More generally, the algorithm runs in O(rn^2 log n) time if the difference between the maximum degree and the minimum degree is less than r. This running time is faster than applying the fastest known general matching algorithm, which runs in O(√n·m) time for graphs with m edges, whenever m = ω(rn^1.5 log n).

19.
In the attention-driven image interpretation process, an image is interpreted as containing several perceptually attended objects as well as the background. The process greatly benefits content-based image retrieval, since attentively important objects are identified and emphasized. An important issue in attention-driven image interpretation is to reconstruct the attentive objects iteratively from the segments of an image by maximizing a global attention function. This object reconstruction is a combinatorial optimization problem with complexity 2^N, which is computationally very expensive when the number of segments N is large. In this paper, we formulate the attention-driven image interpretation process as a matrix representation. An efficient algorithm based on elementary matrix transformations is proposed to reduce the computational complexity to 3ωN(N−1)^2/2, where ω is the number of runs. Experimental results on both synthetic and real data show a significantly improved processing speed with an acceptable degradation in the accuracy of object formulation.

20.
Given an n-node edge-weighted graph and a subset of k terminal nodes, the NP-hard (weighted) Steiner tree problem is to compute a minimum-weight tree spanning the terminals. All known algorithms for this problem that improve on trivial O(1.62^n)-time enumeration are based on dynamic programming and require exponential space. Motivated by the fact that exponential-space algorithms are typically impractical, in this paper we address the problem of designing faster polynomial-space algorithms. Our first contribution is a simple O((27/4)^k · n^O(log k))-time polynomial-space algorithm. It is based on a variant of the classical tree-separator theorem: every Steiner tree has a node whose removal partitions the tree into two forests, each containing at most 2k/3 terminals. Exploiting separators of logarithmic size that evenly partition the terminals, we reduce the running time to O(4^k · n^O(log^2 k)). This improves on trivial enumeration for roughly k < n/3, which covers most cases of practical interest. Combining the latter algorithm (for small k) with trivial enumeration (for large k), we obtain an O(1.59^n)-time polynomial-space algorithm for the weighted Steiner tree problem. As a second contribution, we present an O(1.55^n)-time polynomial-space algorithm for the cardinality version of the problem, where all edge weights are one. This result is based on an improved branching strategy. The refined branching relies on a charging mechanism showing that, for large values of k, convenient local configurations of terminals and non-terminals exist. The analysis of the algorithm relies on the Measure & Conquer approach: the non-standard measure used here is a linear combination of the number of nodes and the number of non-terminals. Using a recent result of Nederlof (International Colloquium on Automata, Languages and Programming (ICALP), pp. 713–725, 2009), the running time can be reduced to O(1.36^n). The previous best algorithm for the cardinality case runs in O(1.42^n) time and exponential space.
