Similar Documents
20 similar documents found (search time: 31 ms)
1.
This paper considers the problem of permutation packet routing on an n×n mesh-connected array of processors. Each node in the array is assumed to be independently faulty with a probability bounded above by a value p. This paper gives a routing algorithm which, if p ≤ 0.29, will with very high probability route every packet that can be routed in O(n log n) steps with queue lengths that are O(log² n). Extensions to higher-dimensional meshes are given.

2.
In this paper we propose two new multilayer grid models for VLSI layout, both of which take into account the number of contact cuts used. For the first model, in which nodes exist only on one layer, we prove a tight area × (number of contact cuts) = Θ(n²) tradeoff for embedding n-node planar graphs of bounded degree in two layers. For the second model, in which nodes exist simultaneously on all layers, we give a number of upper bounds on the area needed to embed graphs using no contact cuts. We show that any n-node graph of thickness 2 can be embedded on two layers in O(n²) area. This bound is tight even if more layers and any number of contact cuts are allowed. We also show that planar graphs of bounded degree can be embedded on two layers in O(n^{3/2}(log n)²) area. Some of our embedding algorithms have the additional property that they can respect prespecified grid placements of the nodes of the graph to be embedded. We give an algorithm for embedding n-node graphs of thickness k in k layers using O(n³) area, using no contact cuts, and respecting prespecified node placements. This area is asymptotically optimal for placement-respecting algorithms, even if more layers are allowed, as long as a fixed fraction of the edges do not use contact cuts. Our results use a new result on embedding graphs in a single-layer grid, namely an embedding of n-node planar graphs such that each edge makes at most four turns and all nodes are embedded on the same line. The first author's research was partially supported by NSF Grant No. MCS 820-5167.

3.
We study rigid motions of a rectangle amidst polygonal obstacles. The best known algorithms for this problem have a running time of Θ(n²), where n is the number of obstacle corners. We introduce the tightness of a motion-planning problem as a measure of the difficulty of a planning problem in an intuitive sense and describe an algorithm with a running time of O((a/b · 1/ε_crit + 1) n (log n)²), where a ≤ b are the lengths of the sides of the rectangle and ε_crit is the tightness of the problem. We show further that the complexity (= number of vertices) of the boundary of n bow ties (see Figure 1) is O(n). Similar results for the union of other simple geometric figures, such as triangles and wedges, are also presented. This work was supported partially by the DFG Schwerpunkt Datenstrukturen und Algorithmen, Grants Me 620/6 and Al 253/1, and by the ESPRIT II Basic Research Actions Program of the EC under Contract No. 3075 (project ALCOM).

4.
Parallel integer sorting using small operations
We consider the problem of sorting n integers in the range [0, n^c − 1], where c is a constant. It has been shown by Rajasekaran and Sen [14] that this problem can be solved optimally in O(log n) steps on an EREW PRAM with O(n) n^ε-bit operations, for any constant ε > 0. Though the number of operations is optimal, each operation is very large. In this paper, we show that n integers in the range [0, n^c − 1] can be sorted in O(log n) time with O(n log n) O(1)-bit operations and O(n) O(log n)-bit operations. The model used is a non-standard variant of an EREW PRAM that permits processors to have word sizes of O(1) bits and Θ(log n) bits. Clearly, the speed of the proposed algorithm is optimal. Considering that the input to the problem consists of O(n log n) bits, the proposed algorithm performs an optimal amount of work, measured at the bit level. This work was partially supported by The Northeast Parallel Architectures Center (NPAC) at Syracuse University, Syracuse, NY 13244 and The Rome Air Development Center, under contract F30602-88-D-0027.
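For readers who want to see the bit-level accounting in concrete form, the following sketch sorts n integers from [0, n^c − 1] by c stable counting-sort passes in radix n, i.e., O(n) word operations per pass. It is a plain sequential radix sort written for illustration only, not the EREW PRAM algorithm of the paper; the function name and structure are ours.

```python
# Sequential radix sort for n integers in [0, n**c - 1]: c stable counting-sort
# passes with radix n.  A sketch of the work bound only, not the EREW PRAM
# algorithm described in the abstract.

def radix_sort_small_range(a, c):
    """Sort a list of n = len(a) integers drawn from [0, n**c - 1]."""
    n = max(len(a), 2)          # radix; guard tiny inputs
    for pass_no in range(c):    # c = O(1) passes, each doing O(n) work
        shift = n ** pass_no
        buckets = [[] for _ in range(n)]
        for x in a:
            buckets[(x // shift) % n].append(x)   # stable distribution
        a = [x for b in buckets for x in b]       # concatenate buckets
    return a

if __name__ == "__main__":
    import random
    n, c = 1000, 2
    data = [random.randrange(n ** c) for _ in range(n)]
    assert radix_sort_small_range(data, c) == sorted(data)
```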

5.
This paper presents quasi-optimal upper bounds for simplex range searching. The problem is to preprocess a set P of n points in d-dimensional space so that, given any query simplex q, the points in P ∩ q can be counted or reported efficiently. If m units of storage are available (n < m < n^d), then we show that it is possible to answer any query in O(n^{1+ε}/m^{1/d}) query time after O(m^{1+ε}) preprocessing. This bound, which holds on a RAM or a pointer machine, is almost tight. We also show how to achieve O(log n) query time at the expense of O(n^{d+ε}) storage for any fixed ε > 0. To fine-tune our results in the reporting case we also establish new zone theorems for arrangements and merged arrangements of planes in 3-space, which are of independent interest. A preliminary version of this paper appeared in the Proceedings of the Sixth Annual ACM Symposium on Computational Geometry, June 1990, pp. 23-33. Work on this paper by Bernard Chazelle has been supported by NSF Grant CCR-87-00917 and NSF Grant CCR-90-02352. Work on this paper by Micha Sharir has been supported by Office of Naval Research Grant N00014-87-K-0129, by National Science Foundation Grants DCR-83-20085 and CCR-8901484, and by grants from the U.S.-Israeli Binational Science Foundation, the NCRD (the Israeli National Council for Research and Development), and the Fund for Basic Research administered by the Israeli Academy of Sciences. Work by Emo Welzl has been supported by Deutsche Forschungsgemeinschaft Grant We 1265/1-2. Micha Sharir and Emo Welzl have also been supported by a grant from the German-Israeli Binational Science Foundation. Last but not least, all authors thank DIMACS, an NSF Science and Technology Center, for additional support under Grant STC-88-09648.

6.
Efficient data structures are given for the following two query problems: (i) preprocess a set P of simple polygons with a total of n edges, so that all polygons of P intersected by a query segment can be reported efficiently, and (ii) preprocess a set S of n segments, so that the connected components of the arrangement of S intersected by a query segment can be reported quickly. In these problems we do not want to return the polygons or connected components explicitly (i.e., we do not wish to report the segments defining the polygon, or the segments lying in the connected components). Instead, we assume that the polygons (or connected components) are labeled and we just want to report their labels. We present data structures of size O(n^{1+ε}) that can answer a query in time O(n^{1/2+ε} + k), where k is the output size. If the edges of P (or the segments in S) are orthogonal, the query time can be improved to O(log n + k) using O(n log n) space. We also present data structures that can maintain the connected components as we insert new segments. For arbitrary segments the amortized update and query times are O(n^{1/2+ε}) and O(n^{1/2+ε} + k), respectively, and the space used by the data structure is O(n^{1+ε}). If we allow O(n^{4/3+ε}) space, the amortized update and query times can be improved to O(n^{1/3+ε}) and O(n^{1/3+ε} + k), respectively. For orthogonal segments the amortized update and query times are O(log² n) and O(log² n + k log n), and the space used by the data structure is O(n log n). Some other related results are also mentioned. Part of this work was done while the second author was visiting the first author on a grant by the Dutch Organization for Scientific Research (N.W.O.). The research of the second author was also supported by the ESPRIT Basic Research Action No. 3075 (project ALCOM). The research of the first author was supported by National Science Foundation Grant CCR-91-06514.

7.
We present a collection of algorithms, all running in time O(n² log n · α(n)^{O(α(n)^s)}) for some fixed integer s (where α(n) is the inverse Ackermann function), for constructing a skeleton representation of a suitably generalized Voronoi diagram for a ladder moving in a two-dimensional space bounded by polygonal barriers consisting of n line segments. This diagram, which is a two-dimensional subcomplex of the three-dimensional configuration space of the ladder, is introduced and analyzed in a companion paper by the present authors. The construction of the diagram described in this paper yields a motion-planning algorithm for the ladder which runs within the same time bound given above. Work on this paper has been supported in part by Office of Naval Research Grant N00014-82-K-0381, and by grants from the Digital Equipment Corporation, the Sloan Foundation, the System Development Foundation, the IBM Corporation, and by National Science Foundation CER Grant No. DCR-8320085. Work by the second author has also been supported in part by a grant from the US-Israeli Binational Science Foundation.

8.
We improve upon the running time of several graph and network algorithms when applied to dense graphs. In particular, we show how to compute, on a machine with word size λ = Θ(log n), a maximal matching in an n-vertex bipartite graph in time O(n² + n^{2.5}/λ) = O(n^{2.5}/log n), how to compute the transitive closure of a digraph with n vertices and m edges in time O(n² + nm/λ), how to solve the uncapacitated transportation problem with integer costs in the range [0..C] and integer demands in the range [−U..U] in time O((n³(log log n/log n)^{1/2} + n² log U) log(nC)), and how to solve the assignment problem with integer costs in the range [0..C] in time O(n^{2.5} log(nC)/(log n/log log n)^{1/4}). Assuming a suitably compressed input, we also show how to do depth-first and breadth-first search and how to compute strongly connected components and biconnected components in time O(n + n²/λ), and how to solve the single-source shortest-path problem with integer costs in the range [0..C] in time O(n²(log C)/log n). For the transitive closure algorithm we also report on the experiences with an implementation. Most of this research was carried out while both authors worked at the Fachbereich Informatik, Universität des Saarlandes, Saarbrücken, Germany. The research was supported in part by ESPRIT Project No. 3075 ALCOM. The first author acknowledges support also from NSERC Grant No. OGPIN007.
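The word-parallel technique behind bounds of the form O(n² + nm/λ) can be illustrated with bitset rows: one machine-word operation combines λ adjacency bits at a time. The sketch below, which uses Python integers as stand-ins for packed words, computes all-pairs reachability with a Warshall-style loop in O(n³/λ) word operations; it illustrates only the packing idea, not the authors' O(n² + nm/λ) transitive-closure algorithm.

```python
# Word-packed reachability: each row of the closure matrix is a bitset, so one
# bitwise OR updates lambda entries at once.  Illustrative sketch only.

def transitive_closure_bitset(n, edges):
    """Return row[i] = bitset of vertices reachable from i (including i)."""
    adj = [1 << i for i in range(n)]          # start from the identity (self-loops)
    for u, v in edges:
        adj[u] |= 1 << v
    # Warshall-style closure: if i reaches k, i also reaches everything k reaches.
    for k in range(n):
        bit_k = 1 << k
        row_k = adj[k]
        for i in range(n):
            if adj[i] & bit_k:
                adj[i] |= row_k               # one OR handles a whole packed row
    return adj

if __name__ == "__main__":
    reach = transitive_closure_bitset(4, [(0, 1), (1, 2), (2, 3)])
    assert reach[0] & (1 << 3)                # 0 reaches 3 through 1 and 2
```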

9.
Suppose a directed graph has its arcs stored in secondary memory, and we wish to compute its transitive closure, also storing the result in secondary memory. We assume that an amount of main memory capable of holding s values is available, and that s lies between n, the number of nodes of the graph, and e, the number of arcs. The cost measure we use for algorithms is the I/O complexity of Kung and Hong, where we count 1 every time a value is moved into main memory from secondary memory, or vice versa. In the dense case, where e is close to n², we show that I/O equal to O(n³/√s) is sufficient to compute the transitive closure of an n-node graph, using main memory of size s. Moreover, it is necessary for any algorithm that is standard, in a sense to be defined precisely in the paper. Roughly, standard means that paths are constructed only by concatenating arcs and previously discovered paths. For the sparse case, we show that I/O equal to O(n²√(e/s)) is sufficient, although the algorithm we propose meets our definition of standard only if the underlying graph is acyclic. We also show that Ω(n²√(e/s)) is necessary for any standard algorithm in the sparse case. That settles the I/O complexity of the sparse/acyclic case, for standard algorithms. It is unknown whether this complexity can be achieved in the sparse, cyclic case by a standard algorithm, and it is unknown whether the bound can be beaten by nonstandard algorithms. We then consider a special kind of standard algorithm, in which paths are constructed only by concatenating arcs and old paths, never by concatenating two old paths. This restriction seems essential if we are to take advantage of sparseness. Unfortunately, we show that almost another factor of n I/O is necessary. That is, there is an algorithm in this class using I/O O(n³√(e/s)) for arbitrary sparse graphs, including cyclic ones. Moreover, every algorithm in the restricted class must use Ω(n³√(e/s)/log³ n) I/O on some cyclic graphs. The work of this author was partially supported by NSF grant IRI-87-22886, IBM contract 476816, Air Force grant AFOSR-88-0266, and a Guggenheim fellowship.
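The dense O(n³/√s) figure follows the familiar Hong-Kung blocking argument: choose b×b blocks with roughly 3b² ≤ s so three blocks fit in main memory, perform (n/b)³ block products, and charge O(b²) I/O per block transfer, giving O(n³/b) = O(n³/√s) in total. The counting sketch below makes that arithmetic explicit for a Boolean-product kernel; it is an illustration under these assumptions, not the transitive-closure algorithm of the paper, and the io counter is only a model of block transfers.

```python
# Blocked Boolean matrix product with an explicit I/O counter: the kernel behind
# the dense transitive-closure bound O(n^3 / sqrt(s)).  Sketch only.

import math

def blocked_bool_matmul(A, B, s):
    n = len(A)
    b = max(1, math.isqrt(s // 3))             # block side so three blocks fit in memory s
    C = [[False] * n for _ in range(n)]
    io = 0
    for i0 in range(0, n, b):
        for j0 in range(0, n, b):
            io += b * b                        # load the C block
            for k0 in range(0, n, b):
                io += 2 * b * b                # load the A and B blocks
                for i in range(i0, min(i0 + b, n)):
                    for k in range(k0, min(k0 + b, n)):
                        if A[i][k]:
                            for j in range(j0, min(j0 + b, n)):
                                C[i][j] = C[i][j] or B[k][j]
            io += b * b                        # write the C block back
    return C, io

if __name__ == "__main__":
    import random
    n = 8
    A = [[random.random() < 0.3 for _ in range(n)] for _ in range(n)]
    B = [[random.random() < 0.3 for _ in range(n)] for _ in range(n)]
    C, io = blocked_bool_matmul(A, B, s=12)
    direct = [[any(A[i][k] and B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
    assert C == direct
```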

10.
A model of sequential computation with pipelined access to memory
We introduce a new sequential model of computation, called the Logarithmic Pipelined Model (LPM), in which a RAM processor of fixed size has pipelined access to a memory of m cells in time log m. Our motivation is that the usual assumption that a memory can be accessed in constant time becomes theoretically unacceptable as m increases, while an access time of log m is consistent with VLSI technologies. For a problem Π of size n, Π ∈ P, we denote by S(n) the time required by the fastest known sequential algorithm, and by T(n) the time required by the fastest algorithm solving Π in the LPM. Letting O(log n) = O(log m), we define several complexity classes; in particular, LP0 = {Π ∈ P : T(n) = O(S(n))}, the class of problems for which the LPM is as efficient as the standard model, and LP = {Π ∈ P : T(n) = O(S(n) log n)}, where the problems are less adequately solved in the new model. We first study the relations between the LPM and other models of computation. Of particular relevance is the comparison with the PRAM model. Then we discuss several problems and derive the relative upper and lower bounds in the LPM. Our results lead to a new organization of parallel algorithms for linked-list structures. This work was supported in part by M.U.R.S.T. of Italy under a research grant.

11.
The longest common subsequence problem revisited
This paper re-examines, in a unified framework, two classic approaches to the problem of finding a longest common subsequence (LCS) of two strings, and proposes faster implementations for both. Let l be the length of an LCS between two strings of length m and n (m ≤ n), respectively, and let s be the alphabet size. The first revised strategy follows the paradigm of a previous O(ln)-time algorithm by Hirschberg. The new version can be implemented in time O(lm · min{log s, log m, log(2n/m)}), which is profitable when the input strings differ considerably in size (a looser bound for both versions is O(mn)). The second strategy improves on the Hunt-Szymanski algorithm. This latter takes time O((r + n) log n), where r ≤ mn is the total number of matches between the two input strings. Such a performance is quite good (O(n log n)) when r ~ n, but it degrades to Θ(mn log n) in the worst case. On the other hand, the variation presented here is never worse than linear time in the product mn. The exact time bound derived for this second algorithm is O(m log n + d log(2mn/d)), where d ≤ r is the number of dominant matches (elsewhere referred to as minimal candidates) between the two strings. Both algorithms require an O(n log s) preprocessing that is nearly standard for the LCS problem, and they make use of simple and handy auxiliary data structures.
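As a point of reference, the Hunt-Szymanski strategy that the second algorithm refines can be sketched in a few lines: collect, for each symbol, its match positions in the second string, then thread the matches through a threshold array with binary search, for O((r + n) log n) time. The code below is that classic baseline with our own variable names, not the improved O(m log n + d log(2mn/d)) algorithm of the paper.

```python
# Hunt-Szymanski-style LCS length via reduction to longest increasing subsequence
# over the match positions.  Sketch of the baseline strategy only.

from bisect import bisect_left
from collections import defaultdict

def lcs_length(a, b):
    positions = defaultdict(list)             # symbol -> positions in b (ascending)
    for j, ch in enumerate(b):
        positions[ch].append(j)

    thresh = []                               # thresh[k] = smallest end position in b
                                              # of a common subsequence of length k + 1
    for ch in a:
        # visit matches right-to-left so one row of matches cannot chain with itself
        for j in reversed(positions.get(ch, ())):
            k = bisect_left(thresh, j)
            if k == len(thresh):
                thresh.append(j)
            else:
                thresh[k] = j
    return len(thresh)

if __name__ == "__main__":
    assert lcs_length("abcbdab", "bdcaba") == 4      # e.g. "bcba"
```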

12.
    
We consider the problem of collectively locating a set of points within a set of disjoint polygonal regions when no preprocessing is allowed for either the points or the regions. This problem arises in geometric database systems. More specifically, it is equivalent to computing the inside join of geo-relational algebra, a conceptual model for geo-data management. We describe efficient algorithms for solving this problem based on plane-sweep and divide-and-conquer, requiring O(n log n + t) and O(n log² n + t) time, respectively, and O(n) space, where n is the total number of points and edges, and t is the number of reported (point, region) pairs. Since the algorithms are meant to be practically useful, we consider, besides the internal versions that run completely in main memory, versions that run internally but use much less than linear space, and versions that run externally, that is, require only a constant amount of internal memory regardless of the amount of data to be processed. Comparing plane-sweep and divide-and-conquer, it turns out that divide-and-conquer can be expected to perform much better in the external case, even though it has a higher internal asymptotic worst-case complexity. An interesting theoretical by-product is a new general technique for handling arbitrarily large sets of objects clustered on a single x-coordinate within a planar divide-and-conquer algorithm, and a proof that the resulting unbalanced dividing does not lead to a more than logarithmic height of the tree of recursive calls. This work was supported by the DFG (Deutsche Forschungsgemeinschaft) under Grant Cr 65-2.
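To make the plane-sweep formulation concrete, here is a deliberately simplified sketch of the inside join: regions are reduced to axis-parallel rectangles and the active set is scanned linearly at each point event, so it conveys only the sweep structure, not the O(n log n + t) bound or the external-memory variants discussed above; all names are illustrative.

```python
# Plane-sweep sketch of the "inside join": report (point, region) pairs where the
# point lies in the region.  Regions are simplified to rectangles (x1, x2, y1, y2).

def inside_join(points, rects):
    EVENT_OPEN, EVENT_POINT, EVENT_CLOSE = 0, 1, 2
    events = []
    for rid, (x1, x2, y1, y2) in enumerate(rects):
        events.append((x1, EVENT_OPEN, rid))
        events.append((x2, EVENT_CLOSE, rid))
    for pid, (x, y) in enumerate(points):
        events.append((x, EVENT_POINT, pid))
    events.sort()        # sweep left to right; opens before points before closes at equal x

    active, result = set(), []
    for _, kind, ident in events:
        if kind == EVENT_OPEN:
            active.add(ident)
        elif kind == EVENT_CLOSE:
            active.discard(ident)
        else:
            px, py = points[ident]
            for rid in active:         # the paper replaces this scan by an efficient structure
                x1, x2, y1, y2 = rects[rid]
                if y1 <= py <= y2:
                    result.append((ident, rid))
    return result

if __name__ == "__main__":
    pts = [(1, 1), (5, 5)]
    rs = [(0, 2, 0, 2), (4, 6, 0, 2)]
    assert inside_join(pts, rs) == [(0, 0)]
```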

13.
The s-t connectivity problem for undirected graphs is to decide whether two designated vertices, s and t, are in the same connected component. This paper presents the first known deterministic algorithms solving undirected s-t connectivity using sublinear space and polynomial time. Our algorithms provide a nearly smooth time-space tradeoff between depth-first search and Savitch's algorithm. For n-vertex, m-edge graphs, the simplest of our algorithms uses space O(s), for n^{1/2} log² n ≤ s ≤ n log² n, and time O((m+n) n² log² n / s). We give a variant of this method that is faster at the higher end of the space spectrum. For example, with space Θ(n log n), its time bound is O((m+n) log n), close to the optimal time for the problem. Another generalization uses less space, but more time: space O(n^{1/λ} log n), for 2 ≤ λ ≤ log² n, and time n^{O(λ)}. For constant λ the time remains polynomial.
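The two endpoints of the tradeoff are easy to state in code: depth-first search uses linear space and linear time, while a Savitch-style doubling recursion has recursion depth O(log n) (about log² n bits of workspace beyond the input) but superpolynomial running time. The sketch below shows only these endpoints; the paper's contribution is the family of algorithms interpolating between them.

```python
# Endpoints of the time-space tradeoff for s-t connectivity: DFS versus a
# Savitch-style "path of length <= 2**k" recursion.  Illustrative sketch only.

def st_connected_dfs(adj, s, t):
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        if u == t:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return False

def st_connected_savitch(adj, s, t):
    n = len(adj)
    def reach(u, v, k):                # is there a path u -> v of length <= 2**k ?
        if u == v or v in adj[u]:
            return True
        if k == 0:
            return False
        return any(reach(u, w, k - 1) and reach(w, v, k - 1) for w in range(n))
    return reach(s, t, max(1, n.bit_length()))

if __name__ == "__main__":
    g = {0: [1], 1: [0, 2], 2: [1], 3: []}
    assert st_connected_dfs(g, 0, 2) and st_connected_savitch(g, 0, 2)
    assert not st_connected_dfs(g, 0, 3) and not st_connected_savitch(g, 0, 3)
```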

14.
We prove that in any N-node communication network with maximum degree d, any deterministic oblivious algorithm for routing an arbitrary permutation requires Ω(√N/d) parallel communication steps in the worst case. This is an improvement upon the Ω(√N/d^{3/2}) bound obtained by Borodin and Hopcroft. For the N-node hypercube, in particular, we show a matching upper bound by exhibiting a deterministic oblivious algorithm that routes any permutation in Θ(√N/log N) steps. The best previously known upper bound was O(√N). Our algorithm may be practical for small N (up to about 2^14 nodes). C. Kaklamanis was supported in part by NSF Grant NSF-CCR-87-04513. T. Tsantilas was supported in part by NSF Grants NSF-DCR-86-00379 and NSF-CCR-89-02500.

15.
A method is described for efficient treatment of the time-history effects that arise in closure models constructed by truncation of renormalized perturbation expansions. The scheme is based on rational polynomial (Padé) approximations for the time-lagged functions, combined with Gaussian quadrature applied to a coarse-time evolution equation for equal-time covariances. Power series coefficients required for the Padé approximants are generated by recursion relations resulting from successive differentiation of the slow-varying form of the dynamic equations for time-lagged functions. This strategy is illustrated by applications to equations of the direct interaction approximation for fluid turbulence. Several benefits accrue from such an approach. Computational efficiency comparable to that of Markovian closures is obtained without introduction of arbitrary constants or ad hoc Markovianization assumptions: (1) Scaling of the leading operations count is reduced from O(n_T³) to O(n_g n_T), where n_T is the number of steps in coarse time and n_g ≈ 4-8. For typical problems in fluid turbulence, this results in savings of a factor of about 40 in computing time. (2) Scaling of working array size is reduced from O(n_T²) to O(p), where p is the order of the Padé approximant; this reduces in-core storage requirements for dynamical quantities by a factor of about 20. These and other advantages depend on the order of Padé approximant required to give acceptable accuracy. Test cases from several problems in fluid turbulence indicate very substantial savings, such that certain CPU-intensive problems with spatial inhomogeneities become computationally feasible.
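The two numerical ingredients named above, Padé approximants built from power-series coefficients and Gaussian quadrature for the coarse-time integrals, can be demonstrated in a few lines of NumPy. The sketch below is a generic illustration under our own naming, not the turbulence-closure code itself: it builds an [L/M] Padé approximant by solving the usual Toeplitz system for the denominator and checks Gauss-Legendre nodes and weights.

```python
# Pade approximant from series coefficients, plus Gauss-Legendre quadrature nodes.
# Generic numerical sketch of the two ingredients; not the closure code itself.

import numpy as np
from math import factorial

def pade(c, L, M):
    """Return (p, q) with deg p = L, deg q = M, q[0] = 1, matching series c to order L+M."""
    c = np.asarray(c, dtype=float)
    # Solve the Toeplitz system for denominator coefficients q_1..q_M.
    A = np.array([[c[L + i - j] if 0 <= L + i - j else 0.0 for j in range(1, M + 1)]
                  for i in range(1, M + 1)])
    rhs = -c[L + 1:L + M + 1]
    q = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    p = np.array([sum(c[k - j] * q[j] for j in range(min(k, M) + 1)) for k in range(L + 1)])
    return p, q

if __name__ == "__main__":
    # [2/2] Pade approximant of exp(x) from its Taylor coefficients 1/k!.
    coeffs = [1 / factorial(k) for k in range(6)]
    p, q = pade(coeffs, 2, 2)
    x = 0.5
    approx = np.polyval(p[::-1], x) / np.polyval(q[::-1], x)
    assert abs(approx - np.exp(x)) < 1e-4
    # Gauss-Legendre nodes/weights of the kind used for the coarse-time integrals.
    nodes, weights = np.polynomial.legendre.leggauss(6)
    assert abs(weights @ nodes**2 - 2 / 3) < 1e-12   # integral of x^2 over [-1, 1]
```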

16.
We study the maximum-flow algorithm of Goldberg and Tarjan and show that the largest-label implementation runs in O(n²√m) time. We give a new proof of this fact. We compare our proof with the earlier work by Cheriyan and Maheswari, who showed that the largest-label implementation of the preflow-push algorithm of Goldberg and Tarjan runs in O(n²√m) time when implemented with current edges. Our proof that the number of nonsaturating pushes is O(n²√m) does not rely on implementing pushes with current edges; therefore it is true for a much larger family of largest-label implementations of preflow-push algorithms. Research performed while the author was a Ph.D. student at Cornell University and was partially supported by the Ministry of Education of the Republic of Turkey through the scholarship program 1416.
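For orientation, here is a compact sketch of the algorithm family under discussion: Goldberg-Tarjan preflow-push with largest-label (highest-label) node selection, written over an adjacency matrix and without the current-edge bookkeeping or gap heuristics that the analyses above distinguish. Variable names and the example network are ours.

```python
# Highest-label preflow-push maximum flow.  Compact illustrative sketch only.

def max_flow_highest_label(cap, s, t):
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    height = [0] * n
    excess = [0] * n
    height[s] = n
    for v in range(n):                      # saturate all edges out of s
        if cap[s][v] > 0:
            flow[s][v] = cap[s][v]
            flow[v][s] = -cap[s][v]
            excess[v] += cap[s][v]
            excess[s] -= cap[s][v]

    def residual(u, v):
        return cap[u][v] - flow[u][v]

    def active_nodes():
        return [v for v in range(n) if v not in (s, t) and excess[v] > 0]

    nodes = active_nodes()
    while nodes:
        u = max(nodes, key=lambda v: height[v])          # largest-label selection
        pushed = False
        for v in range(n):
            if residual(u, v) > 0 and height[u] == height[v] + 1:
                delta = min(excess[u], residual(u, v))   # push
                flow[u][v] += delta
                flow[v][u] -= delta
                excess[u] -= delta
                excess[v] += delta
                pushed = True
                if excess[u] == 0:
                    break
        if not pushed:                                   # relabel
            height[u] = 1 + min(height[v] for v in range(n) if residual(u, v) > 0)
        nodes = active_nodes()
    return sum(flow[s][v] for v in range(n))

if __name__ == "__main__":
    c = [[0, 3, 2, 0],
         [0, 0, 1, 2],
         [0, 0, 0, 3],
         [0, 0, 0, 0]]
    assert max_flow_highest_label(c, 0, 3) == 5
```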

17.
A parallel algorithm to generate the dominance graph on a collection of nonoverlapping iso-oriented rectangles is presented. This graph arises from the constraint graph commonly used in compaction algorithms for VLSI circuits. The dominance graph expresses the notion of aboveness on a collection of nonoverlapping rectangles: it is the directed graph which contains an edge from a rectangle b to a rectangle c iff c is immediately above b. The algorithm is based on the divide-and-conquer paradigm; in the EREW PRAM model, it has time complexity O(log² n), using n/log n processors. Its processor-time product is O(n log n), which is optimal.
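To make the dominance relation precise, the brute-force checker below computes, for disjoint axis-parallel rectangles (x1, x2, y1, y2), every pair (b, c) such that c is immediately above b, i.e., their x-projections share a portion of positive length that is not entirely blocked by a third rectangle lying between them. It is only a definition checker in Python; the paper's result is the EREW PRAM algorithm with O(log² n) time and n/log n processors.

```python
# Brute-force construction of the dominance ("immediately above") graph on
# disjoint iso-oriented rectangles.  Definition checker only.

def dominance_edges(rects):
    def blocked_length(lo, hi, blockers):
        # total length of [lo, hi] covered by the blocker x-intervals
        ivs = sorted((max(lo, a), min(hi, b)) for a, b in blockers if a < hi and b > lo)
        covered, cur_lo, cur_hi = 0.0, None, None
        for a, b in ivs:
            if cur_hi is None or a > cur_hi:
                if cur_hi is not None:
                    covered += cur_hi - cur_lo
                cur_lo, cur_hi = a, b
            else:
                cur_hi = max(cur_hi, b)
        if cur_hi is not None:
            covered += cur_hi - cur_lo
        return covered

    edges = []
    for i, (bx1, bx2, by1, by2) in enumerate(rects):
        for j, (cx1, cx2, cy1, cy2) in enumerate(rects):
            if i == j or cy1 < by2:                    # c must lie above b
                continue
            lo, hi = max(bx1, cx1), min(bx2, cx2)
            if hi <= lo:                               # x-projections must overlap
                continue
            blockers = [(dx1, dx2) for k, (dx1, dx2, dy1, dy2) in enumerate(rects)
                        if k not in (i, j) and dy2 > by2 and dy1 < cy1]
            if blocked_length(lo, hi, blockers) < hi - lo:
                edges.append((i, j))                   # j is immediately above i
    return edges

if __name__ == "__main__":
    rs = [(0, 4, 0, 1),    # bottom
          (0, 2, 2, 3),    # sits above the bottom rectangle
          (0, 4, 4, 5)]    # top: sees the bottom rectangle only over x in (2, 4)
    assert set(dominance_edges(rs)) == {(0, 1), (0, 2), (1, 2)}
```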

18.
A faster divide-and-conquer algorithm for constructing Delaunay triangulations
Rex A. Dwyer, Algorithmica, 1987, 2(1):137-151
An easily implemented modification to the divide-and-conquer algorithm for computing the Delaunay triangulation of n sites in the plane is presented. The change reduces its Θ(n log n) expected running time to O(n log log n) for a large class of distributions that includes the uniform distribution in the unit square. Experimental evidence presented demonstrates that the modified algorithm performs very well for n ≤ 2^16, the range of the experiments. It is conjectured that the average number of edges it creates (a good measure of its efficiency) is no more than twice optimal for n less than seven trillion. The improvement is shown to extend to the computation of the Delaunay triangulation in the L_p metric for 1 < p ≤ ∞. This research was supported by National Science Foundation Grants DCR-8352081 and DCR-8416190.

19.
Summary. Let L(f) be the network complexity of a Boolean function f. For any n-ary Boolean function f, let TC(f) denote its Turing complexity, defined as a minimum taken over all relative Turing programs p and all oracles such that, given the oracle, the restriction of p to inputs of length n is a program for f; the quantities entering this minimum are |p|, the number of instructions of p, T_p(n), the time bound, and S_p(n), the space bound of the program p relative to the oracle on inputs of length n. Our main results are (1) L(f) = O(TC(f)), (2) TC(f) = O(L(f)^{2+ε}) for every ε > 0. The results of this paper have been reported in a main lecture at the 1975 annual meeting of GAMM, April 2-5, Göttingen.

20.
Unification algorithms have been constructed for semigroups and commutative semigroups. This paper considers the intermediate case of partially commutative semigroups. We introduce two classes of such semigroups, N and a second class, and justify their use. We present an equation-solving algorithm for any member of the class N. This algorithm is relative to having an algorithm to determine all non-negative solutions of a certain class of Diophantine equations of degree 2. The difficulties arising when attempting to solve equations in members of the second class are discussed, and we present arguments that strongly suggest that unification in these semigroups is undecidable.
