Similar Documents
1.
Approximate string matching is about finding a given string pattern in a text while allowing some degree of error. In this paper we present a space-efficient data structure to solve the 1-mismatch and 1-difference problems. Given a text T of length n over an alphabet A, we can preprocess T and build an index of o(n log² n) bits so that, for any query pattern P of length m, we can find all 1-mismatch (or 1-difference) occurrences of P in O(|A| m log log n + occ) time, where occ is the number of occurrences. This is the fastest known query time given that the data structure uses o(n log² n) bits of space. The space of our data structure can be further reduced to O(n log|A|) bits with the query time increasing by a factor of log^ε n, for 0 < ε ≤ 1. Furthermore, our solution can be generalized to solve the k-mismatch (and the k-difference) problem in O(|A|^k m^k (k + log log n) + occ) and O(log^ε n (|A|^k m^k (k + log log n) + occ)) time using the o(n log² n)-bit and the O(n log|A|)-bit indexing data structures, respectively. We assume that the alphabet size |A| is suitably bounded for the o(n log² n)-bit data structure.
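For concreteness, here is the naive baseline for the query such an index answers: a brute-force scan reporting every position where the pattern matches with at most one mismatch. This Python sketch only illustrates the 1-mismatch problem itself and costs O(nm) per query; it is not the paper's index, whose point is to answer the same query in O(|A| m log log n + occ) time after preprocessing.

```python
def one_mismatch_occurrences(text: str, pattern: str):
    """Report all positions where `pattern` occurs in `text` with at most one
    mismatching character (Hamming distance <= 1). Naive O(nm) baseline."""
    n, m = len(text), len(pattern)
    hits = []
    for i in range(n - m + 1):
        mismatches = 0
        for j in range(m):
            if text[i + j] != pattern[j]:
                mismatches += 1
                if mismatches > 1:
                    break
        if mismatches <= 1:
            hits.append(i)
    return hits


print(one_mismatch_occurrences("abracadabra", "acad"))  # [3]: exact occurrence
print(one_mismatch_occurrences("abracadabra", "abrx"))  # [0, 7]: two 1-mismatch occurrences
```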

2.
The paper describes several algorithms related to the problem of computing the local dimension of a semialgebraic set. Let a semialgebraic set V be defined by a system of k inequalities of the form f ≥ 0 with f ∈ R[X_1, …, X_n], deg(f) < d, and let x ∈ V. An algorithm is constructed for computing the dimension of the Zariski tangent space to V at x in time (kd)^{O(n)}. Let x belong to a stratum of codimension l_x in V with respect to a smooth stratification of V. Another algorithm computes the local dimension dim_x(V) with complexity (k(l_x + 1)d)^{O(l_x² n)}. If l = max_{x ∈ V} l_x, and for every connected component the local dimension is the same at each point, then the algorithm computes the dimension of every connected component with complexity (k(l + 1)d)^{O(l² n)}. If V is a real algebraic variety defined by a system of equations, then the complexity of the algorithm is less than kd^{O(l² n)}, and the algorithm also finds the dimension of the tangent space to V at x in time kd^{O(n)}. When l is fixed, as in the case of a smooth V, the complexity bounds for computing the local dimension are (kd)^{O(n)} and kd^{O(n)}, respectively. A third algorithm finds the singular locus of V in time (kd)^{O(n²)}.
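For the special case of a variety given by polynomial equations, the dimension of the Zariski tangent space at a point x is the number of variables minus the rank of the Jacobian evaluated at x. The sympy sketch below illustrates that quantity on a toy example; it is not the paper's algorithm, which handles systems of inequalities symbolically within the stated (kd)^{O(n)} bound.

```python
import sympy as sp

def zariski_tangent_dim(polys, variables, point):
    """Dimension of the Zariski tangent space at `point` of the variety
    {polys = 0}: number of variables minus the rank of the Jacobian there."""
    J = sp.Matrix(polys).jacobian(sp.Matrix(variables))
    return len(variables) - J.subs(dict(zip(variables, point))).rank()


x, y, z = sp.symbols("x y z")
umbrella = [x**2 - y**2 * z]              # Whitney umbrella, singular at the origin
print(zariski_tangent_dim(umbrella, [x, y, z], (1, 1, 1)))  # 2: smooth point
print(zariski_tangent_dim(umbrella, [x, y, z], (0, 0, 0)))  # 3: the gradient vanishes
```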

3.
Dario Bini 《Calcolo》1985,22(1):209-228
The tensor rank of the linear space 𝒜 generated by the set of linearly independent matrices A_1, A_2, …, A_p is the least integer t for which there exist t dyads u^(τ) v^(τ)ᵀ, τ = 1, 2, …, t, such that every matrix in 𝒜 is a linear combination of these dyads. If the tensor rank of 𝒜 is n + k, k ≪ n, then some computational problems concerning matrices A ∈ 𝒜 can be solved fast. For example, the parallel inversion of almost any nonsingular matrix A ∈ 𝒜 costs 3 log n + O(log² k) steps with max(n² + p(n + k), k²n + nk) processors; the evaluation of the determinant of A can be performed by a parallel algorithm in log p + log n + O(log² k) parallel steps and by a sequential algorithm in n(1 + k²) + p(n + k) + O(k³) multiplications. Analogous results hold for performing one step of the bisection method, of Newton's iteration method, and of the shifted inverse power method applied to A − λB in order to compute the (generalized) eigenvalues, provided that A, B ∈ 𝒜. The same results hold if tensor rank is replaced by border rank. Applications to the case of banded Toeplitz matrices are shown. Dedicated to Professor S. Faedo on his 70th birthday. Part of the results of this paper were presented at the Oberwolfach Conference on Komplexitätstheorie, November 1983.

4.
We present new baby steps/giant steps algorithms of asymptotically fast running time for dense matrix problems. Our algorithms compute the determinant, characteristic polynomial, Frobenius normal form and Smith normal form of a dense n × n matrix A with integer entries; the stated bit-operation counts depend on n and on the largest entry of A in absolute value, and the exponent adjustment by +o(1) captures additional factors involving positive real constants C1, C2, C3. The bit complexity results from using the classical cubic matrix multiplication algorithm. Our algorithms are randomized, and we can certify that the output is the determinant of A in a Las Vegas fashion. The second category of problems deals with the setting where the matrix A has elements from an abstract commutative ring, that is, when no divisions in the domain of entries are possible. We present algorithms that deterministically compute the determinant, characteristic polynomial and adjoint of A with n^{3.2+o(1)} and O(n^{2.697263}) ring additions, subtractions and multiplications. To B. David Saunders on the occasion of his 60th birthday.
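A textbook illustration of why modular techniques help for integer determinants (not the paper's baby-steps/giant-steps method) is to compute det A modulo a few word-sized primes and recombine by the Chinese Remainder Theorem, stopping once the prime product exceeds twice the Hadamard bound. A hedged Python sketch:

```python
import math

def det_mod_p(matrix, p):
    """Determinant of an integer matrix modulo a prime p (Gaussian elimination)."""
    a = [[x % p for x in row] for row in matrix]
    n, det = len(a), 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if a[r][col]), None)
        if pivot is None:
            return 0
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            det = -det % p
        det = det * a[col][col] % p
        inv = pow(a[col][col], -1, p)
        for r in range(col + 1, n):
            factor = a[r][col] * inv % p
            for c in range(col, n):
                a[r][c] = (a[r][c] - factor * a[col][c]) % p
    return det

def det_crt(matrix, primes):
    """Recover det(matrix) over the integers from residues modulo primes,
    stopping once the modulus exceeds twice the Hadamard bound on |det|."""
    hadamard = math.prod(math.isqrt(sum(x * x for x in row)) + 1 for row in matrix)
    modulus, det = 1, 0
    for p in primes:
        r = det_mod_p(matrix, p)
        # incremental CRT step: lift det from Z_modulus to Z_{modulus * p}
        det += modulus * ((r - det) * pow(modulus, -1, p) % p)
        modulus *= p
        if modulus > 2 * hadamard:
            break
    else:
        raise ValueError("not enough primes for this matrix")
    return det if det <= modulus // 2 else det - modulus  # map back to a signed value


A = [[2, -3, 1], [4, 0, -2], [1, 5, 3]]
print(det_crt(A, [10007, 10009, 10037]))   # 82
```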

5.
Efficient data structures are given for the following two query problems: (i) preprocess a set P of simple polygons with a total of n edges, so that all polygons of P intersected by a query segment can be reported efficiently, and (ii) preprocess a set S of n segments, so that the connected components of the arrangement of S intersected by a query segment can be reported quickly. In these problems we do not want to return the polygons or connected components explicitly (i.e., we do not wish to report the segments defining the polygon, or the segments lying in the connected components). Instead, we assume that the polygons (or connected components) are labeled and we just want to report their labels. We present data structures of size O(n^{1+ε}) that can answer a query in time O(n^{1+ε} + k), where k is the output size. If the edges of P (or the segments in S) are orthogonal, the query time can be improved to O(log n + k) using O(n log n) space. We also present data structures that can maintain the connected components as we insert new segments. For arbitrary segments the amortized update and query times are O(n^{1/2+ε}) and O(n^{1/2+ε} + k), respectively, and the space used by the data structure is O(n^{1+ε}). If we allow O(n^{4/3+ε}) space, the amortized update and query times can be improved to O(n^{1/3+ε}) and O(n^{1/3+ε} + k), respectively. For orthogonal segments the amortized update and query times are O(log² n) and O(log² n + k log n), and the space used by the data structure is O(n log n). Some other related results are also mentioned. Part of this work was done while the second author was visiting the first author on a grant by the Dutch Organization for Scientific Research (N.W.O.). The research of the second author was also supported by the ESPRIT Basic Research Action No. 3075 (project ALCOM). The research of the first author was supported by National Science Foundation Grant CCR-91-06514.

6.
We generalize the recent relative loss bounds for on-line algorithms in which the additional loss of the algorithm on the whole sequence of examples over the loss of the best expert is bounded. The generalization allows the sequence to be partitioned into segments, and the goal is to bound the additional loss of the algorithm over the sum of the losses of the best experts for each segment. This models situations in which the examples change and different experts are best for certain segments of the sequence of examples. In the single-segment case, the additional loss is proportional to log n, where n is the number of experts and the constant of proportionality depends on the loss function. Our algorithms do not produce the best partition; however, the loss bound shows that our predictions are close to those of the best partition. When the number of segments is k+1 and the sequence is of length ℓ, we can bound the additional loss of our algorithm over the best partition in terms of k, ℓ and n. For the case when the loss per trial is bounded by one, we obtain an algorithm whose additional loss over the loss of the best partition is independent of the length of the sequence; the additional loss then depends only on k, n and L, where L is the loss of the best partition with k+1 segments. Our algorithms for tracking the predictions of the best expert are simple adaptations of Vovk's original algorithm for the single best expert case. As in the original algorithms, we keep one weight per expert, and spend O(1) time per weight in each trial.
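The abstract describes algorithms that keep one weight per expert and do O(1) work per weight per trial. The classic concrete instance of this idea is the Fixed-Share update of Herbster and Warmuth: an exponential-weights loss update followed by a small amount of weight sharing, so that the forecaster can follow a different expert in each segment. The sketch below is that generic scheme with illustrative parameter values (eta, alpha), not the exact algorithm or tuned constants of the paper.

```python
import numpy as np

def fixed_share(expert_losses, eta=2.0, alpha=0.05):
    """Exponential weights with a Fixed-Share step: one weight per expert,
    O(1) work per weight per trial. Returns the forecaster's cumulative
    (expected) loss when per-trial losses lie in [0, 1]."""
    T, n = expert_losses.shape
    w = np.full(n, 1.0 / n)
    total = 0.0
    for t in range(T):
        total += float(w @ expert_losses[t])      # predict with the current mixture
        w *= np.exp(-eta * expert_losses[t])      # loss update (Vovk-style weights)
        w /= w.sum()
        w = (1 - alpha) * w + alpha / n           # share update: lets the best expert switch
    return total


rng = np.random.default_rng(0)
T, n = 400, 5
losses = rng.random((T, n))
losses[:200, 1] *= 0.1        # expert 1 is best on the first segment
losses[200:, 3] *= 0.1        # expert 3 is best on the second segment
best_partition_loss = losses[:200, 1].sum() + losses[200:, 3].sum()
print(round(fixed_share(losses), 1), "vs best 2-segment partition", round(best_partition_loss, 1))
```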

7.
We prove tight upper and lower bounds on the area of semelective, when-oblivious VLSI circuits for the problem of l-selection. The area required to select the l-th smallest of n k-bit integers is found to depend heavily on the relative sizes of l, k, and n. When l < 2^k, the minimal area is A = Θ(min{n, l(k − log l)}). When l ≥ 2^k, A = Θ(2^k (log l − k + 1)). This work was supported in part by National Science Foundation Grant DMC 84-06408 and by the Slovak Academy of Sciences.

8.
Fast Algorithms for the Density Finding Problem   总被引:1,自引:0,他引:1  
We study the problem of finding a subsequence of specified density in a sequence arising from the analysis of biomolecular sequences. Given a sequence A = (a_1, w_1), (a_2, w_2), …, (a_n, w_n) of n ordered pairs (a_i, w_i) of numbers a_i and widths w_i > 0 for each 1 ≤ i ≤ n, two nonnegative numbers ℓ, u with ℓ ≤ u, and a number δ, the Density Finding Problem is to find, over all O(n²) consecutive subsequences A(i, j) satisfying the width constraint ℓ ≤ w(i, j) = ∑_{r=i}^{j} w_r ≤ u, the consecutive subsequence A(i*, j*) whose density is closest to δ. The extensively studied Maximum-Density Segment Problem is a special case of the Density Finding Problem with δ = ∞. We show that the Density Finding Problem has a lower bound of Ω(n log n) in the algebraic decision tree model of computation. We give an algorithm for the Density Finding Problem that runs in optimal O(n log n) time and O(n log n) space for the case when there is no upper bound on the width of the sequence, i.e., u = w(1, n). For the general case, we give an algorithm that runs in O(n log² m) time and O(n + m log m) space, where m is a parameter determined by the width bounds and w_min = min_{r=1}^{n} w_r. As a byproduct, we give another O(n) time and space algorithm for the Maximum-Density Segment Problem. Supported by Grants NSC95-2221-E-001-016-MY3, NSC-94-2422-H-001-0001, and NSC-95-2752-E-002-005-PAE, and by the Taiwan Information Security Center (TWISC) under Grants NSC95-2218-E-001-001, NSC95-3114-P-001-002-Y, NSC94-3114-P-001-003-Y and NSC94-3114-P-011-001.
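The problem statement is concrete enough to restate directly in code. The brute-force O(n²) scan below enumerates every width-feasible window and keeps the one whose density is closest to δ; it is only a reference implementation for checking small inputs, not the paper's O(n log n) / O(n log² m) algorithms.

```python
def density_finding(pairs, lo, hi, delta):
    """Brute-force Density Finding: among all windows pairs[i..j] whose total
    width lies in [lo, hi], return the (i, j) with density closest to delta."""
    best, best_gap = None, float("inf")
    for i in range(len(pairs)):
        a_sum = w_sum = 0.0
        for j in range(i, len(pairs)):
            a_sum += pairs[j][0]
            w_sum += pairs[j][1]
            if w_sum > hi:
                break                             # widths are positive, so stop extending
            if w_sum >= lo:
                gap = abs(a_sum / w_sum - delta)
                if gap < best_gap:
                    best, best_gap = (i, j), gap
    return best, best_gap


seq = [(3.0, 1.0), (0.5, 1.0), (5.0, 1.0), (1.0, 2.0), (4.0, 1.0)]
# The window pairs[2..3] has width 3 and density (5 + 1) / 3 = 2.0, exactly delta.
print(density_finding(seq, lo=2.0, hi=4.0, delta=2.0))   # ((2, 3), 0.0)
```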

9.
In this paper, we study the merging of two sorted arrays of sizes n_1 and n_2 on the EREW PRAM with two restrictions: (1) the elements of the two arrays are taken from the integer range [1, n], where n = max(n_1, n_2); (2) the elements are drawn either from a uniform distribution or from a non-uniform distribution satisfying a condition stated per processor block, for 1 ≤ i ≤ p (p is the number of processors). We give a new optimal deterministic algorithm for p processors on the EREW PRAM. For a suitable choice of p, the running time of the algorithm is O(log^{(g)} n), which is faster than previous results, where log^{(g)} n = log log^{(g−1)} n for g > 1 and log^{(1)} n = log n. We also extend the domain of the input data to [1, n^k], where k is a constant.
Hazem M. Bahig
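Sequentially, the restriction that keys come from the integer range [1, n] already allows a counting-based merge in O(n) time, which is the kind of value-range bucketing the paper distributes over p EREW PRAM processors. A minimal sequential sketch (not the PRAM algorithm itself):

```python
def merge_integer_range(a, b):
    """Merge two sorted arrays whose elements lie in [1, n], n = max(len(a), len(b)),
    by counting occurrences of each value: O(n) sequential time."""
    n = max(len(a), len(b))
    counts = [0] * (n + 1)
    for x in a:
        counts[x] += 1
    for x in b:
        counts[x] += 1
    merged = []
    for value in range(1, n + 1):
        merged.extend([value] * counts[value])
    return merged


print(merge_integer_range([1, 2, 2, 4], [1, 3, 4]))   # [1, 1, 2, 2, 3, 4, 4]
```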

10.
11.
A graph G is said to be a bicluster graph if G is a disjoint union of bicliques (complete bipartite subgraphs), and a cluster graph if G is a disjoint union of cliques (complete subgraphs). In this work, we study the parameterized versions of the NP-hard Bicluster Graph Editing and Cluster Graph Editing problems. The former consists of obtaining a bicluster graph by making the minimum number of modifications in the edge set of an input bipartite graph. When at most k modifications are allowed (the Bicluster(k) Graph Editing problem), this problem is FPT and can be solved in O(4^k nm) time by a standard search tree algorithm. We develop an algorithm of time complexity O(4^k + n + m), which uses a strategy based on modular decomposition techniques; we slightly generalize the original problem, as the input graph is not necessarily bipartite. The algorithm first builds a problem kernel with O(k²) vertices in O(n + m) time, and then applies a bounded search tree. We also show how this strategy based on modular decomposition leads to a new way of obtaining a problem kernel with O(k²) vertices for the Cluster(k) Graph Editing problem, in O(n + m) time. This problem consists of obtaining a cluster graph by modifying at most k edges in an input graph. A previous FPT algorithm of time O(1.92^k + n³) for this problem was presented by Gramm et al. (Theory Comput. Syst. 38(4), 373–392, 2005; Algorithmica 39(4), 321–347, 2004). In their solution, a problem kernel with O(k²) vertices is built in O(n³) time.
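The bounded-search-tree part of such algorithms is easy to illustrate for Cluster(k) Graph Editing: a graph is a cluster graph iff it has no induced path on three vertices, so one can branch on a conflict u–v–w by deleting uv, deleting vw, or adding uw. The sketch below is that classic O(3^k)-branching decision procedure, without the O(k²)-vertex kernelization or the bicluster variant that the paper actually develops.

```python
from itertools import combinations

def find_conflict(adj):
    """Return an induced path u-v-w (uv, vw edges, uw a non-edge), or None if
    the graph is already a disjoint union of cliques."""
    for v in adj:
        for u, w in combinations(adj[v], 2):
            if w not in adj[u]:
                return u, v, w
    return None

def cluster_editing(adj, k):
    """Can the graph be turned into a cluster graph with at most k edge edits?
    Classic search tree of size O(3^k): branch on how to destroy one conflict."""
    conflict = find_conflict(adj)
    if conflict is None:
        return True
    if k == 0:
        return False
    u, v, w = conflict
    for x, y, op in ((u, v, "delete"), (v, w, "delete"), (u, w, "insert")):
        if op == "delete":
            adj[x].discard(y); adj[y].discard(x)
        else:
            adj[x].add(y); adj[y].add(x)
        found = cluster_editing(adj, k - 1)
        # undo the tentative edit before trying the next branch
        if op == "delete":
            adj[x].add(y); adj[y].add(x)
        else:
            adj[x].discard(y); adj[y].discard(x)
        if found:
            return True
    return False


adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}             # a path on 4 vertices
print(cluster_editing(adj, 1), cluster_editing(adj, 0))  # True False
```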

12.
Liu Huan-wen 《Calcolo》1992,29(3-4):269-289
Let S_k^l(Δ_{mn}^{(1)}) and its doubly periodic counterpart denote, respectively, the space of bivariate splines and the space of double periodic splines of degree k on a type-1 triangulation. We give, in this paper, an integral representation of the splines in S_k^l(Δ_{mn}^{(1)}) and a direct sum decomposition of S_k^l(Δ_{mn}^{(1)}). Using this decomposition together with the Bézier net of bivariate splines, we are able to determine the dimension of the space of double periodic splines for degree k ≥ 4.

13.
We present the design and analysis of a nearly-linear-work parallel algorithm for solving symmetric diagonally dominant (SDD) linear systems. On input an SDD n-by-n matrix A with m nonzero entries and a vector b, our algorithm computes a vector x̃ such that ‖x̃ − A⁺b‖_A ≤ ε·‖A⁺b‖_A in O(m log^{O(1)} n log(1/ε)) work and O(m^{1/3+θ} log(1/ε)) depth for any θ > 0, where A⁺ denotes the Moore-Penrose pseudoinverse of A. The algorithm relies on a parallel algorithm for generating low-stretch spanning trees or spanning subgraphs. To this end, we first develop a parallel decomposition algorithm that, in O(m log^{O(1)} n) work and polylogarithmic depth, partitions a graph with n nodes and m edges into components of polylogarithmic diameter such that only a small fraction of the original edges are between the components. This can be used to generate low-stretch spanning trees with average stretch O(n^α) in O(m log^{O(1)} n) work and O(n^α) depth for any α > 0. Alternatively, it can be used to generate spanning subgraphs with polylogarithmic average stretch in O(m log^{O(1)} n) work and polylogarithmic depth. We apply this subgraph construction to derive a parallel linear solver. By using this solver in known applications, our results imply improved parallel randomized algorithms for several problems, including single-source shortest paths, maximum flow, minimum-cost flow, and approximate maximum flow.
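Graph Laplacians are the canonical SDD matrices, and the guarantee above is stated relative to the pseudoinverse solution A⁺b. The scipy sketch below only sets up that problem — it builds the Laplacian of a small graph and solves Lx = b with plain conjugate gradient, with no low-stretch preconditioner and no parallelism — so it illustrates the task rather than the paper's nearly-linear-work solver.

```python
import numpy as np
import scipy.sparse as sparse
from scipy.sparse.linalg import cg

def laplacian(n, edges):
    """Graph Laplacian L = D - A, a symmetric diagonally dominant matrix."""
    rows, cols, vals = [], [], []
    for u, v in edges:
        rows += [u, v, u, v]
        cols += [v, u, u, v]
        vals += [-1.0, -1.0, 1.0, 1.0]      # off-diagonal -1, diagonal accumulates the degree
    return sparse.csr_matrix((vals, (rows, cols)), shape=(n, n))


n = 200
edges = [(i, (i + 1) % n) for i in range(n)]       # a cycle graph on n vertices
L = laplacian(n, edges)
b = np.zeros(n)
b[0], b[n // 2] = 1.0, -1.0                        # b sums to 0, so Lx = b is consistent
x, info = cg(L, b, atol=1e-10)                     # plain, unpreconditioned CG
print(info, np.linalg.norm(L @ x - b))             # info == 0 means CG reported convergence
```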

14.
Ramsey numbers and an approximation algorithm for the vertex cover problem   总被引:5,自引:0,他引:5  
Summary. We show two results. First we derive an upper bound for the special Ramsey numbers r_k(q), where r_k(q) is the largest number of nodes a graph can have if it contains no odd cycle of length at most 2k+1 and no independent set of size q+1. The proof is constructive and yields an algorithm for computing an independent set of that size. Using this algorithm we then describe an O(|V|·|E|) time bounded approximation algorithm for the vertex cover problem, whose worst-case ratio is bounded away from 2 for all graphs with at most (2k+3)^k (2k+2) nodes (e.g. 1.8 if |V| ≤ 146000). This work was supported in part by National Science Foundation Grant DMC 84-06408 and by the Slovak Academy of Sciences.
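As a point of reference, the textbook maximal-matching heuristic already achieves worst-case ratio 2; the paper's contribution is to push the ratio below 2 (for graphs up to the stated size) via its Ramsey-type independent-set algorithm. The sketch below is only that standard baseline, not the paper's algorithm.

```python
def vertex_cover_2approx(edges):
    """Textbook 2-approximation: take both endpoints of every uncovered edge,
    i.e. both endpoints of a greedily built maximal matching."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover


# Star with an extra pendant edge: the optimum cover is {0, 4} (size 2),
# while the heuristic may return up to 4 vertices depending on edge order.
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (4, 5)]
print(vertex_cover_2approx(edges))   # e.g. {0, 1, 4, 5}
```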

15.
In this work, we consider the problem of solving A x^{(k)} = b^{(k)}, k = 1, 2, …, where b^{(k+1)} = f(x^{(k)}). We show that when A is a full matrix and the number of systems k exceeds a threshold that depends on the specific software and hardware setup, it is faster to solve for x^{(k)} by explicitly evaluating the inverse matrix A^{−1} rather than through the LU decomposition of A. We also show that the forward error is comparable in both methods, regardless of the condition number of A.
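The experiment behind this claim can be reproduced in outline with numpy/scipy: factor (or invert) A once, then repeatedly solve with right-hand sides that depend on the previous solution. In the sketch below the nonlinear update f, the matrix size and the step count are arbitrary illustrative choices; which variant wins on a given machine depends on the BLAS, which is exactly the software/hardware-dependent threshold the paper refers to.

```python
import time
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def run(A, b0, steps, use_inverse):
    """Solve A x^(k) = b^(k) repeatedly with b^(k+1) = f(x^(k)), either via a
    precomputed LU factorization or via an explicitly formed inverse."""
    f = np.tanh                                        # placeholder nonlinear update
    start = time.perf_counter()
    if use_inverse:
        A_inv = np.linalg.inv(A)
        solve = lambda rhs: A_inv @ rhs                # one matrix-vector product per step
    else:
        lu_piv = lu_factor(A)
        solve = lambda rhs: lu_solve(lu_piv, rhs)      # two triangular solves per step
    b = b0.copy()
    for _ in range(steps):
        x = solve(b)
        b = f(x)
    return time.perf_counter() - start, x


rng = np.random.default_rng(1)
n = 1500
A = rng.standard_normal((n, n)) + n * np.eye(n)        # well-conditioned test matrix
b0 = rng.standard_normal(n)
results = {}
for use_inverse in (False, True):
    seconds, x = run(A, b0, steps=200, use_inverse=use_inverse)
    results[use_inverse] = x
    print("explicit inverse" if use_inverse else "LU solve        ", f"{seconds:.3f} s")
print("max |x_LU - x_inv| =", np.abs(results[False] - results[True]).max())
```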

16.
Given a graph G = (V, E) with n vertices and m edges, and a subset T of k vertices called terminals, the Edge (respectively, Vertex) Multiterminal Cut problem is to find a set of at most l edges (non-terminal vertices) whose removal from G separates each terminal from all the others. These two problems are NP-hard for k ≥ 3 but well known to be polynomial-time solvable for k = 2 by the flow technique. In this paper, based on the notion of a farthest minimum isolating cut, we design several simple and improved algorithms for Multiterminal Cut. We show that Edge Multiterminal Cut can be solved in O(2^l k T(n,m)) time and Vertex Multiterminal Cut can be solved in O(k^l T(n,m)) time, where T(n,m) = O(min(n^{2/3}, m^{1/2}) m) is the running time of finding a minimum (s,t) cut in an unweighted graph. Furthermore, the running time bounds of our algorithms can be further reduced for small values of k: Edge 3-Terminal Cut can be solved in O(1.415^l T(n,m)) time, and Vertex {3,4,5,6}-Terminal Cuts can be solved in O(2.059^l T(n,m)), O(2.772^l T(n,m)), O(3.349^l T(n,m)) and O(3.857^l T(n,m)) time, respectively. Our results on Multiterminal Cut can also be used to obtain faster algorithms for Multicut: an O((min(√(2k), l) + 1)^{2k} 2^l T(n,m))-time algorithm for Edge Multicut and an O((2k)^{k+l/2} T(n,m))-time algorithm for Vertex Multicut.
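The basic subroutine here is the minimum isolating cut: a minimum cut separating one terminal from all the others, computable with a single max-flow after attaching the other terminals to a temporary super-sink. The networkx sketch below uses it in the classical (2 − 2/k)-approximation for Edge Multiterminal Cut (take all isolating cuts except the most expensive one); it illustrates the subroutine, not the paper's exact branching algorithms.

```python
import networkx as nx

def isolating_cut(G, terminal, others):
    """Minimum edge cut separating `terminal` from all terminals in `others`,
    via one max-flow against a temporary super-sink."""
    big = G.number_of_edges() + 1                     # effectively infinite capacity
    H = G.copy()
    H.add_node("_sink")
    for t in others:
        H.add_edge(t, "_sink", capacity=big)
    value, (reach, _) = nx.minimum_cut(H, terminal, "_sink", capacity="capacity")
    cut = {frozenset((u, v)) for u, v in G.edges if (u in reach) != (v in reach)}
    return value, cut

def multiterminal_cut_approx(G, terminals):
    """Classical isolating-cut heuristic (ratio 2 - 2/k): union of all isolating
    cuts except the most expensive one."""
    cuts = sorted((isolating_cut(G, t, [s for s in terminals if s != t])
                   for t in terminals), key=lambda vc: vc[0])
    solution = set()
    for _, cut in cuts[:-1]:                          # drop the heaviest isolating cut
        solution |= cut
    return solution


G = nx.cycle_graph(6)
nx.set_edge_attributes(G, 1, "capacity")
print(multiterminal_cut_approx(G, [0, 2, 4]))         # 4 cut edges separating the 3 terminals
```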

17.
We present a general framework for designing fast subexponential exact and parameterized algorithms on planar graphs. Our approach is based on geometric properties of planar branch decompositions obtained by Seymour and Thomas, combined with refined techniques of dynamic programming on planar graphs based on properties of non-crossing partitions. To exemplify our approach, we show how to obtain an O(2^{6.903√n}) time algorithm solving weighted Hamiltonian Cycle on an n-vertex planar graph. A similar technique solves the Planar Graph Travelling Salesman Problem with n cities in time O(2^{9.8594√n}). Our approach can be used to design parameterized algorithms as well. For example, we give an algorithm that, for a given k, decides if a planar graph on n vertices has a cycle of length at least k in time O(2^{13.6√k} n + n³).

18.
Indexing of factors or substrings is a widely used and useful technique in stringology and can be seen as a tool for solving diverse text algorithmic problems. A gapped-factor is a concatenation of a factor of length k, a gap of length d and another factor of length k′. Such a gapped factor is called a (k, d, k′)-gapped-factor. The problem of indexing the gapped-factors was considered recently by Peterlongo et al. (In: Stringology, pp. 182–196, 2006). In particular, Peterlongo et al. devised a data structure, namely a gapped factor tree (GFT), to index the gapped-factors. Given a text of length n over the alphabet Σ and the values of the parameters k, d and k′, the construction of GFT requires O(n|Σ|) time. Once GFT is constructed, a given (k, d, k′)-gapped-factor can be reported in O(k + k′ + Occ) time, where Occ is the number of occurrences of that factor in the text. In this paper, we present a new improved indexing scheme for the gapped-factors. The improvements we achieve come from two aspects. Firstly, we generalize the indexing data structure in the sense that, unlike GFT, it is independent of the parameters k and k′. Secondly, our data structure can be constructed in O(n log^{1+ε} n) time and space, where 0 < ε < 1. The only price we pay is a slight increase, i.e. an additional log log n term, in the query time. A preliminary version appeared in [29]. C.S. Iliopoulos is supported by EPSRC and Royal Society grants. M.S. Rahman is supported by the Commonwealth Scholarship Commission in the UK under the Commonwealth Scholarship and Fellowship Plan (CSFP). M.S. Rahman is on leave from Department of CSE, BUET, Dhaka 1000, Bangladesh.
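To make the object being indexed concrete: for one fixed triple (k, d, k′), a plain dictionary mapping each pair of flanking factors to its start positions already answers gapped-factor queries. The sketch below is that naive index; the paper's structure is suffix-tree based, is built once for all k and k′, and achieves the stated O(n log^{1+ε} n) construction bound.

```python
from collections import defaultdict

def build_gapped_factor_index(text, k, d, k2):
    """Index all (k, d, k2)-gapped-factors of `text`: map the pair of flanking
    factors to the list of starting positions. Built for one fixed (k, d, k2)."""
    index = defaultdict(list)
    window = k + d + k2
    for i in range(len(text) - window + 1):
        left = text[i:i + k]
        right = text[i + k + d:i + window]
        index[left, right].append(i)
    return index

def query(index, left, right):
    return index.get((left, right), [])


idx = build_gapped_factor_index("acgtacgtacgt", k=2, d=1, k2=2)
print(query(idx, "ac", "ta"))   # [0, 4]: 'ac', one gap character, then 'ta'
```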

19.
As is well known, a finite field F_{q^n} = GF(q^n) can be described in terms of n × n matrices A over the field F_q = GF(q) such that the powers A^i, i = 1, 2, …, q^n − 1, correspond to all nonzero elements of the field. It is proved that, for fields of characteristic 2, such a matrix A can be chosen to be symmetric. Several constructions of field-representing symmetric matrices are given. These matrices A^i together with the all-zero matrix can be considered as an F_{q^n}-linear matrix code in the rank metric with maximum rank distance d = n and maximum possible cardinality q^n. These codes are called symmetric rank codes. In the vector representation, such codes are maximum rank distance (MRD) linear [n, 1, n] codes, which allows one to use known rank-error-correcting algorithms. For symmetric codes, an algorithm of erasure symmetrization is proposed, which considerably reduces the decoding complexity as compared with standard algorithms. It is also shown that a linear [n, k, d = n − k + 1] MRD code containing the above-mentioned one-dimensional symmetric code as a subcode has the following property: the corresponding transposed code is also F_{q^n}-linear. Such codes have an extended capability of correcting symmetric errors and erasures.
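The starting point — representing GF(q^n) by the powers of a single n × n matrix over GF(q) — can be checked directly with the companion matrix of a primitive polynomial. The sketch below does this for GF(8) = GF(2³) with the primitive polynomial x³ + x + 1; note that the companion matrix is not symmetric, which is precisely the gap the paper closes by constructing symmetric representatives.

```python
import numpy as np

def companion_matrix_gf2(low_coeffs):
    """Companion matrix over GF(2) of the monic polynomial
    x^n + low_coeffs[n-1]*x^(n-1) + ... + low_coeffs[0]."""
    n = len(low_coeffs)
    A = np.zeros((n, n), dtype=np.uint8)
    A[1:, :-1] = np.eye(n - 1, dtype=np.uint8)   # subdiagonal of ones
    A[:, -1] = low_coeffs                        # last column from the coefficients
    return A


A = companion_matrix_gf2([1, 1, 0])              # x^3 + x + 1, primitive over GF(2)
powers, P = [], np.eye(3, dtype=np.uint8)
for _ in range(7):
    P = (P @ A) % 2
    powers.append(P.copy())
print(len({p.tobytes() for p in powers}))                     # 7 distinct nonzero elements of GF(8)
print(np.array_equal(powers[-1], np.eye(3, dtype=np.uint8)))  # True: A^7 = I
print(np.array_equal(A, A.T))                                 # False: this representative is not symmetric
```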

20.
Disjoint NP-pairs are a well-studied complexity-theoretic concept with important applications in cryptography and propositional proof complexity. In this paper we introduce a natural generalization of the notion of disjoint NP-pairs to disjoint k-tuples of NP-sets for k ≥ 2. We define subclasses of the class of all disjoint k-tuples of NP-sets. These subclasses are associated with a propositional proof system and possess complete tuples which are defined from the proof system. In our main result we show that complete disjoint NP-pairs exist if and only if complete disjoint k-tuples of NP-sets exist for all k ≥ 2. Further, this is equivalent to the existence of a propositional proof system in which the disjointness of all k-tuples is shortly provable. We also show that a strengthening of this condition characterizes the existence of optimal proof systems. An extended abstract of this paper appeared in the proceedings of the conference CSR 2006 (Lecture Notes in Computer Science 3967, 80–91, 2006). Supported by DFG grant KO 1053/5-1.
