Found 20 similar documents; search took 31 ms
1.
2.
Optimization and Efficient Parallelization of Dynamic Programming for RNA Secondary Structure Prediction  Cited by: 6 (self-citations: 0, others: 6)
Methods based on the minimum free energy model are the main approach to RNA secondary structure prediction in computational biology, and the dynamic programming algorithm that computes the minimum free energy requires O(n^4) time, where n is the length of the RNA sequence. Two strategies exist for reducing this time complexity: restricting the size of interior loops in the secondary structure to at most k, which yields an O(n^2·k^2) algorithm; and Lyngsø's method, which exploits the energy rules for loops to obtain a near-optimal solution in O(n^3) time without restricting loop size. By using an additional O(n) space, the redundant computation for interior loops is greatly reduced, so that an optimal solution can be obtained in O(n^3) time, again without restricting loop size. The optimized algorithm is still very time-consuming, however, so a parallel program was implemented on a cluster system using an effective load-balancing method. Experimental results show that the parallel program achieves a good speedup.
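The energy-model recurrences behind this abstract are elaborate, but the dynamic programming has the same shape as the classic Nussinov base-pair-maximization DP. Below is a minimal sketch of that simplified stand-in (base-pair counting only; the real minimum-free-energy model and its loop handling are far more involved):

```python
def nussinov(seq, min_loop=3):
    """Maximize the number of complementary base pairs in an RNA sequence.

    Simplified stand-in for free-energy folding: dp[i][j] holds the best
    score for the subsequence seq[i..j]; a pair (k, j) must enclose at
    least min_loop unpaired bases."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
             ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):        # span = j - i
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                # j left unpaired
            for k in range(i, j - min_loop):   # j paired with some k
                if (seq[k], seq[j]) in pairs:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1] if n else 0
```

The triple loop gives the O(n^3) behaviour mentioned in the abstract; the O(n^4) cost of the full model comes from scoring interior loops over all pairs of enclosing pairs.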
3.
Computing an optimal solution to the knapsack problem is known to be NP-hard. Consequently, fast parallel algorithms for finding such a solution without using an exponential number of processors appear unlikely. An attractive alternative is to compute an approximate solution to this problem rapidly using a polynomial number of processors. In this paper, we present an efficient parallel algorithm for finding approximate solutions to the 0-1 knapsack problem. Our algorithm takes an ε, 0 < ε < 1, as a parameter and computes a solution whose deviation from the optimal solution is at most a fraction ε of the optimal value. For a problem instance having n items, this computation uses O(n^{5/2}/ε^{3/2}) processors and requires O(log^3 n + log^2 n·log(1/ε)) time. The upper bound on the processor requirement of our algorithm is established by reducing it to a problem on weighted bipartite graphs. This processor complexity is a significant improvement over that of other known parallel algorithms for this problem.
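The parallel construction is intricate, but the profit-scaling idea underlying knapsack approximation schemes of this kind is short. A minimal sequential sketch (standard textbook scaling, not the paper's parallel algorithm):

```python
def knapsack_fptas(values, weights, capacity, eps):
    """(1 - eps)-approximate 0-1 knapsack via profit scaling.

    Profits are scaled down to small integers, then an exact DP runs over
    scaled profit values; returns (achieved true value, chosen items)."""
    n = len(values)
    k = eps * max(values) / n              # profit scaling factor
    scaled = [int(v / k) for v in values]
    top = sum(scaled)
    INF = float("inf")
    # weight_at[p]: lightest weight achieving scaled profit exactly p.
    weight_at = [0.0] + [INF] * top
    items_at = [()] + [None] * top
    for i in range(n):
        for p in range(top, scaled[i] - 1, -1):   # 0-1: iterate downwards
            prev = p - scaled[i]
            if weight_at[prev] + weights[i] < weight_at[p]:
                weight_at[p] = weight_at[prev] + weights[i]
                items_at[p] = items_at[prev] + (i,)
    best = max(p for p in range(top + 1) if weight_at[p] <= capacity)
    chosen = items_at[best]
    return sum(values[i] for i in chosen), chosen
```

The DP is exact on the scaled profits, so the rounding loses at most k per item, i.e. at most eps·OPT in total.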
4.
Xin He, Algorithmica, 1990, 5(1):545-559
We present an efficient algorithm for 4-coloring perfect planar graphs. The best previously known algorithm for this problem takes O(n^{3/2}) sequential time, or O(log^4 n) parallel time with O(n^3) processors. The sequential implementation of our algorithm takes O(n log n) time. The parallel implementation of our algorithm takes O(log^3 n) time with O(n) processors on a PRAM.
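The specialized algorithm for perfect planar graphs is involved; for intuition only, a generic backtracking 4-coloring looks as follows (exponential in the worst case, nothing like the O(n log n) bound above):

```python
def color_graph(adj, num_colors=4):
    """Backtracking graph coloring.

    adj maps each vertex to an iterable of its neighbours. Returns a dict
    color[v] in 0..num_colors-1 with adjacent vertices differing, or None
    if no such coloring exists."""
    vertices = list(adj)
    color = {}

    def backtrack(idx):
        if idx == len(vertices):
            return True
        v = vertices[idx]
        for c in range(num_colors):
            # c is legal if no already-colored neighbour uses it
            if all(color.get(u) != c for u in adj[v]):
                color[v] = c
                if backtrack(idx + 1):
                    return True
                del color[v]
        return False

    return color if backtrack(0) else None
```

By the Four Color Theorem every planar graph admits such a coloring; the point of algorithms like the one in this abstract is to find one fast, which brute-force search does not.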
5.
6.
International Journal of Computer Mathematics, 2012, 89(3-4):205-226
Ghosh and Bhattacharjee [2] (Intern. J. Computer Math., 1984, Vol. 15, pp. 255-268) propose an algorithm for determining breadth-first spanning trees of graphs, which requires that the input graph contain some vertices from which every other vertex in the graph can be reached; these vertices are called starting vertices. The complexity of the GB algorithm is O(log^2 n) using O(n^3) processors. In this paper an algorithm named BREADTH, also computing breadth-first spanning trees, is proposed. Its complexity is O(log^2 n) using O(n^3/log n) processors. Then an efficient parallel algorithm named BREADTHFOREST is proposed, which generalizes algorithm BREADTH. The output of applying BREADTHFOREST to a general graph, which may not contain any starting vertices, is a breadth-first spanning forest of the input graph. The complexity of BREADTHFOREST is the same as that of BREADTH.
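Sequentially, the object BREADTHFOREST computes is simple: a breadth-first spanning forest covering every component, with no distinguished starting vertex needed. A minimal sequential sketch (the paper's contribution is the O(log^2 n) parallel construction, which this does not capture):

```python
from collections import deque

def bfs_spanning_forest(adj):
    """Return parent pointers of a breadth-first spanning forest.

    adj maps each vertex to its neighbours. Every vertex is covered even
    when no single starting vertex reaches the whole graph; roots of the
    forest have parent None."""
    parent = {}
    for root in adj:
        if root in parent:          # already inside an earlier tree
            continue
        parent[root] = None         # root of a new tree
        queue = deque([root])
        while queue:
            v = queue.popleft()
            for u in adj[v]:
                if u not in parent:
                    parent[u] = v
                    queue.append(u)
    return parent
```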
7.
We present an efficient algorithm for 4-coloring perfect planar graphs. The best previously known algorithm for this problem takes O(n^{3/2}) sequential time, or O(log^4 n) parallel time with O(n^3) processors. The sequential implementation of our algorithm takes O(n log n) time. The parallel implementation of our algorithm takes O(log^3 n) time with O(n) processors on a PRAM.
8.
Hossam ElGindy, International Journal of Parallel Programming, 1986, 15(5):389-398
Previous research on developing parallel triangulation algorithms concentrated on triangulating planar point sets. O(log^3 n) running-time algorithms using O(n) processors have been developed in Refs. 1 and 2. Atallah and Goodrich(3) presented a data structure that can be viewed as a parallel analogue of the sequential plane-sweeping paradigm, which can be used to triangulate a planar point set in O(log n log log n) time using O(n) processors. Recently Merks(4) described an algorithm for triangulating point sets which runs in O(log n) time using O(n) processors, and is thus optimal. In this paper we develop a parallel algorithm for triangulating simplicial point sets in arbitrary dimensions based on the idea of the sequential algorithm presented in Ref. 5. The algorithm runs in O(log^2 n) time using O(n/log n) processors. The product of the running time and the number of processors is O(n log n); i.e., an optimal speed-up.
9.
10.
Deterministic collect algorithms are presented that are adaptive to total contention and are efficient with respect to both the number of registers used and the step complexity. One of them has optimal O(k) step and O(n) space complexities, but assumes that processes' identifiers are in O(n), where n is the total number of processes in the system and k is the total contention. The step complexity of an unrestricted name space variant of this algorithm remains O(k), but its space complexity increases to O(n^2).
11.
We construct nonblocking networks that are efficient not only as regards their cost and delay, but also as regards the time and space required to control them. In this paper we present the first simultaneously weakly optimal solutions for the explicit construction of nonblocking networks, the design of algorithms, and the design of data structures. Weakly optimal is meant in the sense that all measures of complexity (size and depth of the network, time for the algorithm, space for the data structure, and processor-time product) are within one or more logarithmic factors of their smallest possible values. In fact, we construct a scheme in which networks with n inputs and n outputs have size O(n(log n)^2) and depth O(log n), and we present deterministic and randomized on-line parallel algorithms to establish and abolish routes dynamically in these networks. In particular, the deterministic algorithm uses O((log n)^5) steps to process any number of transactions in parallel (with one processor per transaction), maintaining a data structure that uses O(n(log n)^2) words.
12.
Run-time synchronization overhead is a crucial factor in limiting speedup for parallel computers. In this paper, we present a new two-phase algorithm for removing redundant dependencies and minimizing interprocessor synchronizations when scheduling an acyclic task graph onto a multiprocessor system. The first phase removes redundant dependencies before scheduling; the second phase eliminates interprocessor synchronizations after scheduling. In a simulation using randomly generated task graphs, on average 98.28% of the dependencies are eliminated in the first phase, and 65.86% of the remaining dependencies are eliminated during the second phase, for a total of 99.41% removed. The approach has also been applied to some benchmark task graphs. The two-phase algorithm, which has O(n^3) time complexity and O(n^2) space complexity, utilizes a new algorithm which computes the transitive closure and the transitive reduction at the same time, storing the results in a single matrix.
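The final ingredient can be sketched sequentially in a few lines: transitive closure plus transitive reduction of a DAG, each in O(n^3) time (a plain two-pass version; the paper packs both results into one matrix, which this sketch does not attempt):

```python
def closure_and_reduction(n, edges):
    """For a DAG on vertices 0..n-1, return (closure, reduction) edge sets.

    closure: all pairs (u, v) with a nonempty path u -> v.
    reduction: input edges (u, v) not implied by a two-step detour
    u -> k -> v through the closure (these are the redundant dependencies)."""
    reach = [[False] * n for _ in range(n)]
    for u, v in edges:
        reach[u][v] = True
    for k in range(n):                      # Floyd-Warshall-style closure
        for i in range(n):
            if reach[i][k]:
                for j in range(n):
                    if reach[k][j]:
                        reach[i][j] = True
    closure = {(i, j) for i in range(n) for j in range(n) if reach[i][j]}
    reduction = {(u, v) for u, v in edges
                 if not any(reach[u][k] and reach[k][v] for k in range(n))}
    return closure, reduction
```

A dependency dropped from the reduction is exactly one that phase one of the abstract's algorithm would classify as redundant.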
13.
Chandan Saha, Information Processing Letters, 2009, 109(16):907-912
In this paper we consider the problem of finding two parallel rectangles in arbitrary orientation that cover a given set of n points in the plane, such that the area of the larger rectangle is minimized. We propose an algorithm that solves the problem in O(n^3) time using O(n^2) space. Without altering the complexity, our approach can be used to solve another optimization problem, namely minimizing the sum of the areas of two arbitrarily oriented parallel rectangles covering a given set of points in the plane.
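The arbitrary-orientation algorithm is involved; a brute-force sketch of a restricted axis-aligned variant, which tries splits of the points in sorted x or sorted y order and bounds each side, conveys the flavour. This restricted heuristic is an illustration only and is not guaranteed to match the paper's optimum:

```python
def two_rect_cover_axis_aligned(points):
    """Heuristic: minimize the larger area of two axis-aligned rectangles
    covering the points, over splits in sorted-x or sorted-y order.
    points is a list of (x, y) pairs."""
    def area(pts):
        if not pts:
            return 0
        xs = [p[0] for p in pts]
        ys = [p[1] for p in pts]
        return (max(xs) - min(xs)) * (max(ys) - min(ys))

    best = area(points)                 # one rectangle covers everything
    for key in (0, 1):                  # split by x-coordinate, then by y
        order = sorted(points, key=lambda p: p[key])
        for i in range(1, len(order)):
            best = min(best, max(area(order[:i]), area(order[i:])))
    return best
```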
14.
By applying the serial dynamic two-list algorithm to the design of a parallel three-list algorithm, this paper proposes a parallel three-list six-table algorithm for solving knapsack-like NP-complete problems such as knapsack, exact satisfiability, and set covering. Based on the EREW-PRAM model, the algorithm can solve an n-dimensional knapsack-like problem with O(2^{n/8}) processors in O(2^{7n/16}) time and O(2^{13n/48}) space, for a time-space-processor tradeoff of O(2^{5n/6}). A performance comparison with existing work shows that the algorithm greatly improves the time-space-processor tradeoff for solving knapsack-like problems in parallel. Since the algorithm can break knapsack-like public-key and digital watermarking systems of higher dimension, the result has theoretical and practical significance for cryptanalysis.
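The two-list building block referenced here is, sequentially, the classic meet-in-the-middle search of Horowitz and Sahni. A minimal subset-sum sketch (the three-list six-table parallelization of the abstract is not reproduced):

```python
from bisect import bisect_right

def subset_sum_two_list(weights, target):
    """Decide subset sum with the two-list method: split the items into
    halves, enumerate each half's 2^(n/2) subset sums, and match a left
    sum s against target - s in the sorted right list."""
    half = len(weights) // 2

    def all_sums(items):
        sums = [0]
        for w in items:
            sums += [s + w for s in sums]
        return sums

    left = all_sums(weights[:half])
    right = sorted(all_sums(weights[half:]))
    for s in left:
        i = bisect_right(right, target - s) - 1
        if i >= 0 and right[i] == target - s:
            return True
    return False
```

This trades the naive O(2^n) time for O(2^{n/2}) time at the cost of O(2^{n/2}) space, the same time-space tension the abstract's tradeoff measure quantifies.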
15.
In Dijkstra (Commun ACM 17(11):643-644, 1974) the notion of self-stabilizing algorithms was introduced, together with three such algorithms for the problem of mutual exclusion on a ring of n processors. The third algorithm is the most interesting of the three, but is rather non-intuitive. In Dijkstra (Distrib Comput 1:5-6, 1986) a proof of its correctness was presented, but the question of determining its worst-case complexity (that is, providing an upper bound on the number of moves of this algorithm until it stabilizes) remained open. In this paper we solve this question and prove an upper bound of (67/18)n^2 + O(n) for the complexity of this algorithm. We also show a lower bound of (11/6)n^2 - O(n) for the worst-case complexity. For computing the upper bound, we use two techniques: potential functions and amortized analysis. We also present a new three-state self-stabilizing algorithm for mutual exclusion and show a tight bound of (5/6)n^2 + O(n) for its worst-case complexity. Beauquier and Debas (Proceedings of the second workshop on self-stabilizing systems, pp 17.1-17.13, 1995) presented a similar three-state algorithm, with an upper bound of (23/4)n^2 + O(n) and a lower bound of (1/8)n^2 - O(n) for its stabilization time. For this algorithm we prove an upper bound of (3/2)n^2 + O(n) and show a lower bound of n^2 - O(n). As far as worst-case performance is considered, the algorithm of Beauquier and Debas (1995) is better than the one of Dijkstra (1974), and our algorithm is better than both.
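Dijkstra's first (K-state) algorithm is easy to simulate and shows what stabilization means: from an arbitrary initial configuration, the ring converges to configurations with exactly one privileged machine. A small sketch simulating that K-state token ring under a fixed scheduler (the three-state algorithms analyzed above are different and subtler):

```python
def privileged(state):
    """Dijkstra's K-state ring: machine 0 is privileged iff it equals its
    left neighbour (machine n-1); machine i > 0 iff it differs from i-1."""
    n = len(state)
    result = [0] if state[0] == state[n - 1] else []
    result += [i for i in range(1, n) if state[i] != state[i - 1]]
    return result

def step(state, K):
    """Fire the lowest-numbered privileged machine (one scheduler choice):
    machine 0 increments mod K, any other machine copies its neighbour."""
    i = privileged(state)[0]
    state = list(state)
    state[i] = (state[0] + 1) % K if i == 0 else state[i - 1]
    return state

def stabilize(state, K):
    """Run until exactly one machine is privileged; return (state, moves)."""
    moves = 0
    while len(privileged(state)) != 1:
        state = step(state, K)
        moves += 1
    return state, moves
```

For K >= n this always terminates; the bounds in the abstract are about exactly this kind of move count, maximized over initial configurations and schedulers.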
16.
In 1994, S.G. Matthews introduced the notion of a partial metric space in order to obtain a suitable mathematical tool for program verification (Ann. N.Y. Acad. Sci. 728:183-197, 1994). He gave an application of this new structure to parallel computing by means of a partial metric version of the celebrated Banach fixed point theorem (Theor. Comput. Sci. 151:195-205, 1995). Later on, M.P. Schellekens introduced the theory of complexity (quasi-metric) spaces as part of the development of a topological foundation for the asymptotic complexity analysis of programs and algorithms (Electron. Notes Theor. Comput. Sci. 1:211-232, 1995). The applicability of this theory to the asymptotic complexity analysis of Divide and Conquer algorithms was also illustrated by Schellekens. In particular, he gave a new proof, based on the use of the aforementioned Banach fixed point theorem, of the well-known fact that the Mergesort algorithm has optimal asymptotic average running time. In this paper, motivated by the utility of partial metrics in Computer Science, we discuss whether the Matthews fixed point theorem is a suitable tool to analyze the asymptotic complexity of algorithms in the spirit of Schellekens. Specifically, we show that a slight modification of the well-known Baire partial metric on the set of all words over an alphabet constitutes an appropriate tool to carry out the asymptotic complexity analysis of algorithms via fixed point methods, without the need to assume the convergence condition inherent to the definition of the complexity space in the Schellekens framework. Finally, in order to illustrate and validate the developed theory, we apply our results to analyze the asymptotic complexity of the Quicksort, Mergesort and Largesort algorithms. Concretely, we retrieve through our new approach the well-known facts that the running times of Quicksort (worst-case behaviour), Mergesort and Largesort (average-case behaviour) are in the complexity classes O(n^2), O(n log_2(n)) and O(2(n-1) - log_2(n)), respectively.
17.
By restricting weight functions to satisfy the quadrangle inequality or the inverse quadrangle inequality, significant progress has been made in developing efficient sequential algorithms for the least-weight subsequence problem [10], [9], [12], [16]. However, not much is known about improving on the naive parallel algorithm for the problem, which is fast but demands too many processors (it takes O(log^2 n) time on a CREW PRAM with n^3/log n processors). In this paper we show that if the weight function satisfies the inverse quadrangle inequality, the problem can be solved on a CREW PRAM in O(log^2 n log log n) time with n/log log n processors, or in O(log^2 n) time with n log n processors. Notice that the processor-time complexity of our algorithm is much closer to the almost linear-time complexity of the best-known sequential algorithm [12].
18.
Variable Order Panel Clustering  Cited by: 3 (self-citations: 0, others: 3)
Stefan Sauter, Computing, 2000, 64(3):223-261
We present a new version of the panel clustering method for a sparse representation of boundary integral equations. Instead of applying the algorithm separately for each matrix row (as in the classical version of the algorithm), we employ more general block partitionings. Furthermore, a variable order of approximation is used, depending on the size of the blocks. We apply this algorithm to a second-kind Fredholm integral equation and show that the complexity of the method depends only linearly on the number, say n, of unknowns. The complexity of the classical matrix-oriented approach is O(n^2) while, for the classical panel clustering algorithm, it is O(n log^7 n).
Received July 28, 1999; revised September 21, 1999
19.
Symbolic decision procedure for termination of linear programs  Cited by: 2 (self-citations: 0, others: 2)
Tiwari (Proceedings of CAV'04, Lecture Notes in Computer Science, vol 3114, pp 70-82, 2004) proved that the termination of a class of linear programs is decidable. The decision procedure proposed therein depends on the computation of Jordan forms, so one may draw a wrong conclusion from this procedure by simply applying floating-point computation to the Jordan forms. In this paper, we first use an example to explain this problem, and then present a symbolic implementation of the decision procedure, so that the rounding-error problem is avoided. Moreover, we show that the symbolic decision procedure is as efficient as the numerical one given in Tiwari (2004). The complexity of the former is max{O(n^6), O(n^{m+3})}, while that of the latter is O(n^{m+3}), where n is the number of variables of the program and m is the number of its Boolean conditions. In addition, for the case when the characteristic polynomial of the assignment matrix is irreducible, we design a more efficient symbolic algorithm whose complexity is max{O(n^6), O(mn^3)}.
20.
We consider the problem of message- (and bit-) efficient broadcasting in complete networks with dynamic faults. Despite the simplicity of the setting, the problem turns out to be surprisingly interesting from the algorithmic point of view. In this paper we show an Ω(n + t·f^{t/(t-1)}) lower bound on the number of messages sent by any t-step broadcasting algorithm, where f is the number of faults per step. The core of the paper contains a constructive O(n + t·f^{(t+1)/t}) upper bound. The algorithms involved are of time complexity O(t), not strictly t. In addition, we present a bit-efficient algorithm of O(n log^2 n) bit and O(log n) time complexities. We also show that it is possible to achieve the same message complexity even if the nodes do not know the ids of their neighbours, but instead have only a Weak Sense of Direction.