Similar Documents
1.
The length-constrained heaviest path (LCHP) in a weighted tree T, where each edge is assigned a weight and a length, is the path P in T with maximum total path weight and total path length bounded by a given value B. This paper presents an O(n log n) time LCHP algorithm which utilizes a data structure constructed from the spine decomposition of the input tree. This is an improvement over the existing algorithm by Wu et al. (1999), which runs in O(n log² n) time. Our method also improves on a previous O(n log n) time algorithm by Kim (2005) for the special case of finding a longest nonnegative path in a constant-degree tree, in that we can handle trees of arbitrary degree within the same time bounds.
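To make the problem statement concrete, here is a small brute-force sketch (our own names and input format; it enumerates all paths in O(n²) time and is not the paper's spine-decomposition algorithm):

```python
from collections import defaultdict

def lchp_brute_force(n, edges, B):
    """Length-constrained heaviest path by exhaustive DFS (O(n^2) sketch).

    edges: list of (u, v, weight, length) tuples describing the tree T.
    B: upper bound on total path length.
    Returns the maximum total weight over all simple paths whose total
    length is at most B (a single vertex counts as a path of weight 0).
    """
    adj = defaultdict(list)
    for u, v, w, l in edges:
        adj[u].append((v, w, l))
        adj[v].append((u, w, l))

    best = 0  # the single-vertex path
    def dfs(node, parent, weight, length):
        nonlocal best
        best = max(best, weight)
        for nxt, w, l in adj[node]:
            if nxt != parent and length + l <= B:
                dfs(nxt, node, weight + w, length + l)

    for start in range(n):
        dfs(start, -1, 0, 0)
    return best

# Example: a 4-vertex path 0-1-2-3; the heaviest path of length <= 5 is 1-2-3.
print(lchp_brute_force(4, [(0, 1, 5, 2), (1, 2, 3, 4), (2, 3, 7, 1)], B=5))  # 10
```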

2.
This paper addresses the problem of storing an ordered list using a red-black tree, where node keys can only be expressed relative to each other. The insert and delete operations in a red-black tree are extended to maintain the relative key values. The extensions rely only on relative keys of neighboring nodes, adding constant overhead and thus preserving the logarithmic time complexity of the original operations.
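The key-maintenance idea is easiest to see on a single rotation. Below is a hypothetical, stripped-down sketch (plain BST nodes, not the paper's red-black code) in which each node stores its key as an offset from its parent; a left rotation only has to patch three offsets, which is the kind of constant overhead mentioned above:

```python
class Node:
    """BST node whose key is stored as an offset from its parent."""
    def __init__(self, offset, left=None, right=None):
        self.offset = offset   # key(node) - key(parent); absolute key for the root
        self.left = left
        self.right = right

def rotate_left(x):
    """Left rotation around x; patches the three affected offsets in O(1)."""
    y, t2 = x.right, x.right.left
    ox, oy = x.offset, y.offset
    x.right, y.left = t2, x          # the usual pointer surgery
    y.offset = ox + oy               # y now hangs off x's former parent
    x.offset = -oy                   # x now hangs off y
    if t2 is not None:
        t2.offset += oy              # t2 now hangs off x
    return y                         # caller links y where x used to be

def absolute_keys(node, parent_key=0, out=None):
    """Recover absolute keys by summing offsets along each root path (in-order)."""
    if out is None:
        out = []
    if node is not None:
        key = parent_key + node.offset
        absolute_keys(node.left, key, out)
        out.append(key)
        absolute_keys(node.right, key, out)
    return out

# Keys 10 < 20 < 30, stored only as parent-relative offsets.
root = Node(10, right=Node(10, right=Node(10)))
print(absolute_keys(root))               # [10, 20, 30]
print(absolute_keys(rotate_left(root)))  # [10, 20, 30] -- unchanged by the rotation
```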

3.
We give two algorithms to randomly permute a linked list of length n in place using O(n log n) time and O(log n) stack space in both the expected case and the worst case. The first algorithm uses well-known sequential random sampling, and the second uses inverted sequential random sampling.
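A rough sketch of how sequential random sampling yields a uniform permutation by divide and conquer (written over Python lists rather than linked nodes for brevity, so it does not reproduce the paper's in-place, O(log n)-space behaviour):

```python
import random

def sequential_sample(items, k):
    """Sequential random sampling: pick a uniform k-subset, preserving order.

    Each remaining item is taken with probability (still needed)/(still left).
    Returns (selected, rest), both in their original relative order.
    """
    selected, rest = [], []
    need, left = k, len(items)
    for x in items:
        if random.randrange(left) < need:   # probability need/left
            selected.append(x)
            need -= 1
        else:
            rest.append(x)
        left -= 1
    return selected, rest

def random_permute(items):
    """Uniform random permutation by recursive halving.

    Choosing which n//2 items go first (a uniform subset), then permuting the
    two halves independently and concatenating, gives C(n, n//2) * (n//2)! *
    (n - n//2)! = n! equally likely outcomes, i.e. a uniform permutation.
    """
    if len(items) <= 1:
        return list(items)
    first, second = sequential_sample(items, len(items) // 2)
    return random_permute(first) + random_permute(second)

print(random_permute(list(range(10))))
```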

4.
In this paper, we study the problem of keyword proximity search in XML documents. We take the disjunctive semantics among the keywords into consideration and find top-k relevant compact connected trees (CCTrees) as the answers of keyword proximity queries. We first introduce the notions of compact lowest common ancestor (CLCA) and maximal CLCA (MCLCA), and then propose compact connected trees and maximal CCTrees (MCCTrees) to efficiently and effectively answer keyword proximity queries. We give the theoretical upper bounds of the numbers of CLCAs, MCLCAs, CCTrees and MCCTrees, respectively. We devise an efficient algorithm to generate all MCCTrees, and propose a ranking mechanism to rank MCCTrees. Our extensive experimental study shows that our method achieves both high efficiency and effectiveness, and outperforms existing state-of-the-art approaches significantly.

5.
The main results of this paper are efficient parallel algorithms, MSP and LOCATE, for computing minimal spanning trees and locating minimal paths in directed graphs, respectively. Algorithm MSP has time complexity O(log³ n) using O(n³/log n) processors, while LOCATE has time complexity O(log n) using O(n²) processors. Algorithm MSP is derived from sequential algorithms under the unbounded-parallelism model.

6.
Restricted rotation distance between pairs of rooted binary trees measures differences in tree shape and is related to rotation distance. In restricted rotation distance, the rotations used to transform the trees are allowed to be only of two types. Restricted rotation distance is larger than rotation distance, since there are only two permissible locations to rotate, but is much easier to compute and estimate. We obtain linear upper and lower bounds for restricted rotation distance in terms of the number of interior nodes in the trees. Further, we describe a linear-time algorithm for estimating the restricted rotation distance between two trees and for finding a sequence of rotations which realizes that estimate. The methods use the metric properties of the abstract group known as Thompson's group F.

7.
We consider a variant of the classical Longest Common Subsequence problem called Doubly-Constrained Longest Common Subsequence (DC-LCS). Given two strings s1 and s2 over an alphabet Σ, a set Cs of strings, and a function Co: Σ → N, the DC-LCS problem consists of finding the longest subsequence s of s1 and s2 such that s is a supersequence of all the strings in Cs and such that the number of occurrences in s of each symbol σ ∈ Σ is upper bounded by Co(σ). The DC-LCS problem provides a clear mathematical formulation of a sequence comparison problem in Computational Biology and generalizes two other constrained variants of the LCS problem that have been introduced previously in the literature: the Constrained LCS and the Repetition-Free LCS. We present two results for the DC-LCS problem. First, we illustrate a fixed-parameter algorithm where the parameter is the length of the solution, which is also applicable to the more specialized problems. Second, we prove a parameterized hardness result for the Constrained LCS problem when the parameters are the number of constraint strings (|Cs|) and the size of the alphabet Σ. This hardness result also implies the parameterized hardness of the DC-LCS problem (with the same parameters) and its NP-hardness when the size of the alphabet is constant.

8.
The constrained longest common subsequence problem
This paper considers a constrained version of the longest common subsequence problem for two strings. Given strings S1, S2 and P, the constrained longest common subsequence problem for S1 and S2 with respect to P is to find a longest common subsequence lcs of S1 and S2 such that P is a subsequence of this lcs. An O(rn²m²) time algorithm based upon the dynamic programming technique is proposed for this new problem, where n, m and r are the lengths of S1, S2 and P, respectively.
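For illustration, here is a straightforward 3D dynamic program for this problem (a textbook-style O(nmr) formulation with our own naming, not the O(rn²m²) algorithm of the paper):

```python
def constrained_lcs_length(s1, s2, p):
    """Longest common subsequence of s1 and s2 containing p as a subsequence.

    dp[i][j][k] = best length over s1[:i] and s2[:j] that covers p[:k];
    -inf marks infeasible states.  Returns -1 when no valid LCS exists.
    """
    NEG = float("-inf")
    n, m, r = len(s1), len(s2), len(p)
    dp = [[[NEG] * (r + 1) for _ in range(m + 1)] for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0][0] = 0
    for j in range(m + 1):
        dp[0][j][0] = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            for k in range(r + 1):
                best = max(dp[i - 1][j][k], dp[i][j - 1][k])
                if s1[i - 1] == s2[j - 1]:
                    # use the match without advancing the constraint
                    best = max(best, dp[i - 1][j - 1][k] + 1)
                    # or use it to cover the next constraint symbol
                    if k > 0 and s1[i - 1] == p[k - 1]:
                        best = max(best, dp[i - 1][j - 1][k - 1] + 1)
                dp[i][j][k] = best
    return dp[n][m][r] if dp[n][m][r] != NEG else -1

print(constrained_lcs_length("abcbdab", "bdcaba", "ba"))  # 4 (e.g. "bcba")
```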

9.
Double hashing with bucket capacity one is augmented with multiple passbits to obtain a significant reduction in unsuccessful search lengths. This improves the analysis of Martini et al. [P.M. Martini, W.A. Burkhard, Double hashing with multiple passbits, Internat. J. Found. Theoret. Comput. Sci. 14 (6) (2003) 1165-1188] by providing a closed-form expression for the expected unsuccessful search lengths.
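A toy version of the passbit idea (a single passbit per slot, arbitrary toy hash functions, no deletions): an unsuccessful search may stop as soon as it reaches an occupied slot that no insertion has ever been pushed past. The paper's scheme uses multiple passbits and a full probabilistic analysis; this sketch only shows the mechanism.

```python
class PassbitHashTable:
    """Double hashing, bucket capacity one, one passbit per slot.

    A slot's passbit is set when an insertion probes the slot, finds it
    occupied, and moves on.  A search can stop early at an occupied slot
    whose passbit is still 0: no key was ever pushed past that slot.
    """
    def __init__(self, m=11):
        self.m = m                       # table size (prime, toy choice)
        self.keys = [None] * m
        self.passbit = [0] * m

    def _probe(self, key):
        h1 = key % self.m
        h2 = 1 + (key % (self.m - 1))    # never 0, so every slot is reachable
        for i in range(self.m):
            yield (h1 + i * h2) % self.m

    def insert(self, key):
        for slot in self._probe(key):
            if self.keys[slot] is None or self.keys[slot] == key:
                self.keys[slot] = key
                return True
            self.passbit[slot] = 1       # key passes over an occupied slot
        return False                     # table full

    def contains(self, key):
        for slot in self._probe(key):
            if self.keys[slot] is None:
                return False
            if self.keys[slot] == key:
                return True
            if self.passbit[slot] == 0:  # nothing was ever pushed past here
                return False
        return False

t = PassbitHashTable()
for k in (5, 16, 27, 8):
    t.insert(k)
print(t.contains(27), t.contains(100))  # True False
```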

10.
A compact searchable representation of static binary trees is presented that can be traversed in O(h) time, where h is the height of the tree. The space requirement for a tree with n nodes is less than 2.5n + (h−1)(2 + log₂((n−1)/(h−1))) bits. The access time per node is O(1). The scheme uses a cumulative-count technique to map the nodes at each level in the tree into sequential memory locations. The mapping requires the nodes to be of uniform size.
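A simplified sketch of the cumulative-count mapping (our own encoding, not the paper's exact bit layout): nodes are stored level by level with two child flags each, and a child is located by ranking its flag among the set flags of the current level.

```python
class BTNode:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def encode_levels(root):
    """Breadth-first encoding: per level, a list of (value, has_left, has_right)."""
    levels, frontier = [], [root] if root else []
    while frontier:
        levels.append([(n.value, n.left is not None, n.right is not None)
                       for n in frontier])
        frontier = [c for n in frontier for c in (n.left, n.right) if c]
    return levels

def child(levels, depth, pos, go_right):
    """Index of the chosen child within level depth+1, or None if absent.

    The child's position equals the number of child flags set before it in
    the current level.  A precomputed prefix-sum (cumulative count) over the
    flags would replace this scan and make the step constant-time.
    """
    _, has_left, has_right = levels[depth][pos]
    if not (has_right if go_right else has_left):
        return None
    rank = 0
    for _, hl, hr in levels[depth][:pos]:
        rank += hl + hr
    if go_right:
        rank += has_left          # skip this node's own left child, if any
    return rank

# Tree:      a
#          /   \
#         b     c
#          \   / \
#           d e   f
tree = BTNode('a', BTNode('b', None, BTNode('d')),
                   BTNode('c', BTNode('e'), BTNode('f')))
lv = encode_levels(tree)
print(lv[1][child(lv, 0, 0, go_right=True)][0])   # 'c'
print(lv[2][child(lv, 1, 1, go_right=False)][0])  # 'e'
```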

11.
String inclusion and non-inclusion problems have been vigorously studied in such diverse fields as molecular biology, data compression, and computer security. Among the well-known string inclusion or non-inclusion notions, we are interested in the longest common nonsuperstring. Given a set of strings, the longest common nonsuperstring problem is finding the longest string that is not a superstring of any string in the given set. It is known that the longest common nonsuperstring problem is solvable in polynomial time. In this paper, we propose an efficient algorithm for the longest common nonsuperstring problem. The running time of our algorithm is linear with respect to the sum of the lengths of the strings in the given set, using generalized suffix trees.

12.
There are two general approaches to the longest common subsequence problem. The dynamic programming approach takes quadratic time but linear space, while the non-dynamic-programming approach takes less time but more space. We propose a new implementation of the latter approach which appears to achieve the best of both time and space for the DNA application.
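For reference, a compact sketch of the classical non-dynamic-programming (Hunt–Szymanski style) approach, the family of methods the abstract refers to; this is only a baseline, not the implementation proposed in the paper:

```python
from bisect import bisect_left
from collections import defaultdict

def lcs_length_hunt_szymanski(a, b):
    """LCS length via match lists plus a patience-style thresholds array.

    Time grows with the number r of matching character pairs (roughly
    (n + m + r) log n), which is fast when matches are sparse, as in many
    DNA comparisons.
    """
    positions = defaultdict(list)          # char -> positions in a (ascending)
    for i, ch in enumerate(a):
        positions[ch].append(i)

    thresholds = []                        # thresholds[k] = smallest end index in a
    for ch in b:                           # of a common subsequence of length k+1
        for i in reversed(positions[ch]):  # descending, so one b-char is used once
            k = bisect_left(thresholds, i)
            if k == len(thresholds):
                thresholds.append(i)
            else:
                thresholds[k] = i
    return len(thresholds)

print(lcs_length_hunt_szymanski("GATTACA", "ATTACGA"))  # 6
```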

13.
We present a randomized and a deterministic data structure for maintaining a dynamic family of sequences under equality tests of pairs of sequences and creations of new sequences by joining or splitting existing sequences. Both data structures support equality tests in O(1) time. The randomized version supports new sequence creations in O(log² n) expected time, where n is the length of the sequence created. The deterministic solution supports sequence creations in O(log n (log m log* m + log n)) time for the m-th operation. This work was supported by the ESPRIT Basic Research Actions Program, under Contract No. 7141 (Project ALCOM II).

14.
E. Ruppert, Algorithmica, 2000, 28(2): 242-254
A concurrent-read exclusive-write PRAM algorithm is developed to find the k shortest paths between pairs of vertices in an edge-weighted directed graph. Repetitions of vertices along the paths are allowed. The algorithm computes an implicit representation of the k shortest paths to a given destination vertex from every vertex of a graph with n vertices and m edges, using O(m + nk log² k) work and O(log³ k log* k + log n(log log k + log* n)) time, assuming that a shortest path tree rooted at the destination is pre-computed. The paths themselves can be extracted from the implicit representation in O(log k + log n) time and O(n log n + L) work, where L is the total length of the output. Received July 2, 1997; revised June 18, 1998.
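For intuition only, a short sequential baseline for a single source-destination pair (since vertex repetitions are allowed, each vertex may simply be settled up to k times in a Dijkstra-style search); this is not the CREW PRAM algorithm of the paper, and all names are our own:

```python
import heapq

def k_shortest_path_costs(n, edges, source, target, k):
    """Costs of the k shortest source->target paths (vertex repeats allowed).

    edges: list of (u, v, non-negative weight); vertices are 0..n-1.
    Each vertex is popped from the priority queue at most k times, so this
    naive form runs in roughly O(k * m * log(k * m)) time.
    """
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))

    pops = [0] * n
    answers = []
    heap = [(0, source)]
    while heap and len(answers) < k:
        cost, u = heapq.heappop(heap)
        if pops[u] == k:
            continue
        pops[u] += 1
        if u == target:
            answers.append(cost)
        for v, w in adj[u]:
            if pops[v] < k:
                heapq.heappush(heap, (cost + w, v))
    return answers

edges = [(0, 1, 1), (1, 2, 1), (0, 2, 3), (2, 1, 1)]
print(k_shortest_path_costs(3, edges, 0, 2, 3))  # [2, 3, 4]
```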

15.
Invertible Bloom Lookup Tables (IBLTs) have been recently introduced as an extension of traditional Bloom filters. IBLTs store key-value pairs. Unlike traditional Bloom filters, IBLTs support both a lookup operation (given a key, return a value) and an operation that lists out all the key-value pairs stored. One issue with IBLTs is that there is a probability that a lookup operation will return “not found” for a key. In this paper, a technique to reduce this probability without affecting the storage requirement and only moderately increasing the search time is presented and evaluated. The results show that it can significantly reduce the probability of not returning a value that is actually stored in the IBLT. The overhead of the modified search procedure, compared to the standard IBLT search procedure, is small and has little impact on the average search time.
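A bare-bones IBLT sketch (integer keys and values, XOR sums, arbitrary cell count and hash functions) that shows the standard lookup rule and the failure mode the paper addresses: a lookup returns "not found" whenever none of the key's cells is pure, even if the key is actually stored.

```python
import hashlib

class SimpleIBLT:
    """Minimal Invertible Bloom Lookup Table for integer key/value pairs."""

    def __init__(self, m=30, k=3):
        self.m, self.k = m, k
        self.count = [0] * m
        self.key_sum = [0] * m
        self.val_sum = [0] * m

    def _cells(self, key):
        for i in range(self.k):
            digest = hashlib.blake2b(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def insert(self, key, value):
        for c in self._cells(key):
            self.count[c] += 1
            self.key_sum[c] ^= key
            self.val_sum[c] ^= value

    def get(self, key):
        """Return the stored value, or None when no pure cell is found
        (the 'not found' answer that can be wrong for stored keys)."""
        for c in self._cells(key):
            if self.count[c] == 0:
                return None                      # definitely not stored
            if self.count[c] == 1 and self.key_sum[c] == key:
                return self.val_sum[c]           # pure cell: recover the value
        return None                              # possibly a false negative

t = SimpleIBLT()
for key in range(20):
    t.insert(key, key * key)
print(t.get(7), t.get(999))   # usually 49, and None
```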

16.
The most natural kinetic data structure for maintaining the maximum of a collection of continuously changing numbers is the kinetic heap. Basch, Guibas, and Ramkumar proved that the maximum number of events processed by a kinetic heap with n numbers changing as linear functions of time is O(n log² n). We prove that this number is actually Θ(n log n). In the kinetic heap, a linear number of events are stored in a priority queue; consequently, it takes O(log n) time to determine the next event at each iteration. We also present a modified version of the kinetic heap that processes O(n log n / log log n) events, with the same O(log n) time complexity to determine the next event.
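Each event corresponds to the failure of a "parent ≥ child" certificate, i.e. the moment two linear trajectories cross. A tiny generic helper for that computation (an illustration of kinetic-data-structure mechanics, not code from the paper):

```python
def certificate_failure_time(parent, child, now):
    """Earliest time t > now at which the child's value overtakes the parent's.

    Values move linearly: value(t) = slope * t + intercept, each given as a
    (slope, intercept) pair.  Returns None if the certificate never fails.
    """
    (ap, bp), (ac, bc) = parent, child
    if ac <= ap:                     # child never grows faster: no future failure
        return None
    t = (bp - bc) / (ac - ap)        # solve ap*t + bp = ac*t + bc
    return t if t > now else None

# parent value = 5 - t, child value = 1 + t: they cross at t = 2.
print(certificate_failure_time((-1.0, 5.0), (1.0, 1.0), now=0.0))  # 2.0
```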

17.
We give an algorithm to compute the subset partial order (called the subset graph) for a family F of sets containing k sets with N elements in total and domain size n. Our algorithm requires O(nk²/log k) time and space on a Pointer Machine. When F is dense, i.e. N = Θ(nk), the algorithm requires O(N²/log² N) time and space. We give a construction for a dense family whose subset graph is of size Θ(N²/log² N), indicating the optimality of our algorithm for dense families. The subset graph can be dynamically maintained when F undergoes set insertions and deletions in O(nk/log k) time per update (that is, sub-linear in N for the case of dense families). If we assume words of b ≤ k bits, allow bits to be packed in words, and use bitwise operations, the above running time and space requirements can be reduced by factors of b log(k/b+1)/log k and b² log(k/b+1)/log k, respectively.
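The effect of packing bits into words is easy to see with a naive baseline: representing each set as one bitset makes a containment test a single AND-and-compare. The sketch below is the trivial all-pairs baseline (our own names), not the algorithm of the paper.

```python
def subset_graph(family):
    """All containment pairs (i, j) with family[i] a subset of family[j].

    family: list of k sets over a domain {0, ..., n-1}.  Each set is packed
    into one Python integer used as a bitset, so one containment test is a
    single AND and compare (the word-packing idea, in miniature).
    """
    masks = [sum(1 << x for x in s) for s in family]
    edges = []
    for i, mi in enumerate(masks):
        for j, mj in enumerate(masks):
            if i != j and mi & mj == mi:      # family[i] is a subset of family[j]
                edges.append((i, j))
    return edges

F = [{0, 1}, {0, 1, 2}, {2, 3}, {0, 1, 2, 3}]
print(subset_graph(F))   # [(0, 1), (0, 3), (1, 3), (2, 3)]
```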

18.
We consider the following problem: Given an unsorted array of n elements and a sequence of intervals in the array, compute the median in each of the subarrays defined by the intervals. We describe a simple algorithm which needs O(n log k + k log n) time to answer k such median queries. This improves previous algorithms by a logarithmic factor and matches a comparison lower bound for k = O(n). The space complexity of our simple algorithm is O(n log n) in the pointer machine model, and O(n) in the RAM model. In the latter model, a more involved O(n) space data structure can be constructed in O(n log n) time where the time per query is reduced to O(log n / log log n). We also give efficient dynamic variants of both data structures, achieving O(log² n) query time using O(n log n) space in the comparison model and O((log n / log log n)²) query time using O(n log n / log log n) space in the RAM model, and show that in the cell-probe model, any data structure which supports updates in O(log^O(1) n) time must have Ω(log n / log log n) query time. Our approach naturally generalizes to higher-dimensional range median problems, where element positions and query ranges are multidimensional: it reduces a range median query to a logarithmic number of range counting queries.
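To make the query type concrete, here is a simple merge-sort-tree baseline (our own construction, much slower than the bounds above): counting elements ≤ x in a range takes O(log² n), and a binary search over the values then gives each range median in O(log³ n).

```python
from bisect import bisect_right

class RangeMedian:
    """Range-median queries via a merge-sort tree (baseline sketch)."""

    def __init__(self, a):
        self.n = len(a)
        self.a = a
        self.sorted_all = sorted(a)
        self.tree = [None] * (4 * self.n)
        self._build(1, 0, self.n - 1)

    def _build(self, node, lo, hi):
        if lo == hi:
            self.tree[node] = [self.a[lo]]
            return
        mid = (lo + hi) // 2
        self._build(2 * node, lo, mid)
        self._build(2 * node + 1, mid + 1, hi)
        # each node stores the sorted multiset of its segment
        self.tree[node] = sorted(self.tree[2 * node] + self.tree[2 * node + 1])

    def _count_leq(self, node, lo, hi, l, r, x):
        """Number of elements <= x within a[l..r]."""
        if r < lo or hi < l:
            return 0
        if l <= lo and hi <= r:
            return bisect_right(self.tree[node], x)
        mid = (lo + hi) // 2
        return (self._count_leq(2 * node, lo, mid, l, r, x) +
                self._count_leq(2 * node + 1, mid + 1, hi, l, r, x))

    def median(self, l, r):
        """Lower median of a[l..r] (0-based, inclusive)."""
        want = (r - l) // 2 + 1                  # rank of the lower median
        lo, hi = 0, self.n - 1
        while lo < hi:                           # binary search on value
            mid = (lo + hi) // 2
            x = self.sorted_all[mid]
            if self._count_leq(1, 0, self.n - 1, l, r, x) >= want:
                hi = mid
            else:
                lo = mid + 1
        return self.sorted_all[lo]

rm = RangeMedian([5, 1, 9, 3, 7, 2, 8])
print(rm.median(0, 6), rm.median(2, 4))  # 5 7
```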

19.
Given a sequence of n elements, we introduce the notion of an almost-increasing subsequence as the longest subsequence that can be converted to an increasing subsequence by adding to each of its elements a value that is at most a fixed constant. We show how to construct such a subsequence optimally, in time that depends on n and the length k of the output subsequence.

20.
For a tree T the Level-Ancestor problem is the following: given a node x and an integer depth d, find the ancestor of x in T that is at depth d in T. We formulate this problem precisely for the Pure Pointer Machine model, i.e., pointer machines without arithmetic capabilities. We then provide a solution for the problem in the fully dynamic case that runs in worst-case O(lg n) time per operation. We also prove a matching Ω(lg n) lower bound for the problem.
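For contrast with the pointer-machine setting, a standard static binary-lifting sketch on a RAM (it relies on arithmetic and table indexing, so it does not meet the paper's Pure Pointer Machine or dynamic requirements); numbering vertices so that parents precede children is an assumption of this sketch:

```python
def build_level_ancestor(parent):
    """Static level-ancestor tables via binary lifting (RAM-model sketch).

    parent[v] is v's parent, parent[root] = -1; vertices are assumed to be
    numbered so that every parent precedes its children.
    Preprocessing is O(n log n); each query below is O(log n).
    """
    n = len(parent)
    LOG = max(1, n.bit_length())
    up = [list(parent)] + [[-1] * n for _ in range(LOG - 1)]
    for j in range(1, LOG):
        for v in range(n):
            mid = up[j - 1][v]
            up[j][v] = -1 if mid == -1 else up[j - 1][mid]
    depth = [0] * n
    for v in range(n):
        if parent[v] != -1:
            depth[v] = depth[parent[v]] + 1
    return up, depth

def level_ancestor(up, depth, x, d):
    """Ancestor of x at depth d, or None if d exceeds x's depth."""
    if d > depth[x]:
        return None
    steps = depth[x] - d
    j = 0
    while steps:
        if steps & 1:
            x = up[j][x]
        steps >>= 1
        j += 1
    return x

#        0
#       / \
#      1   2
#      |   |
#      3   4
#      |
#      5
par = [-1, 0, 0, 1, 2, 3]
up, depth = build_level_ancestor(par)
print(level_ancestor(up, depth, 5, 1), level_ancestor(up, depth, 4, 0))  # 1 0
```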

