Similar Documents
20 similar documents found.
1.
This paper presents algorithms for multiterminal net channel routing where multiple interconnect layers are available. Major improvements are possible if wires are able to overlap, and our generalized main algorithm allows overlap, but only on every Kth (K ≥ 2) layer. Our algorithm will, for a problem with density d on L layers, L ≥ K + 3, provably use at most three tracks more than optimal: ⌈(d + 1)/⌊L/K⌋⌉ + 2 tracks, compared with the lower bound of ⌈d/⌊L/K⌋⌉. Our algorithm is simple, has few vias, tends to minimize wire length, and could be used if different layers have different grid sizes. Finally, we extend our algorithm in order to obtain improved results for adjacent (K = 1) overlap: ⌈(d + 2)/⌊2L/3⌋⌉ + 2 tracks plus a small additive constant for L ≥ 7. This work was supported by the Semiconductor Research Corporation under Contract 83-01-035, by a grant from the General Electric Corporation, and by a grant at the University of the Saarland.

2.
The k-Delaunay tree extends the Delaunay tree introduced in [1] and [2]. It is a hierarchical data structure that allows the semidynamic construction of the higher-order Voronoi diagrams of a finite set of n points in any dimension. In this paper we prove that a randomized construction of the k-Delaunay tree, and thus of all the order ≤ k Voronoi diagrams, can be done in O(n log n + k³n) expected time and O(k²n) expected storage in the plane, which is asymptotically optimal for fixed k. Our algorithm extends to d-dimensional space with expected time complexity O(k^(⌈(d+1)/2⌉+1) n^⌊(d+1)/2⌋) and space complexity O(k^⌈(d+1)/2⌉ n^⌊(d+1)/2⌋). The algorithm is simple and experimental results are given. This work has been supported in part by the ESPRIT Basic Research Action No. 3075 (ALCOM).
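For readers who have not met higher-order Voronoi diagrams: the order-k Voronoi region of a k-element subset T of the sites is the set of query points whose k nearest sites are exactly T. The brute-force sketch below samples these regions on a grid; it illustrates the object being constructed, not the k-Delaunay tree itself, and the function names and sample sites are made up.

```python
from itertools import product
from math import dist
from typing import FrozenSet, List, Tuple

Point = Tuple[float, float]

def k_nearest_sites(sites: List[Point], q: Point, k: int) -> FrozenSet[int]:
    """Indices of the k sites closest to query point q; the order-k Voronoi
    region of a k-subset T is the set of points q with k_nearest_sites == T."""
    ranked = sorted(range(len(sites)), key=lambda i: dist(q, sites[i]))
    return frozenset(ranked[:k])

def sample_order_k_regions(sites: List[Point], k: int, step: float = 0.25):
    """Brute-force sampling of the order-k Voronoi diagram on a grid over
    [0,4]x[0,4], returning the distinct k-subsets that own some sample point."""
    xs = [x * step for x in range(0, 17)]
    return {k_nearest_sites(sites, (x, y), k) for x, y in product(xs, xs)}

if __name__ == "__main__":
    sites = [(0.5, 0.5), (3.2, 1.1), (1.7, 3.4), (2.8, 2.9), (0.9, 2.2)]
    for k in (1, 2):
        print(k, len(sample_order_k_regions(sites, k)))
```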

3.
A New Class of Depth-Size Optimal Parallel Prefix Circuits
Given n values x₁, x₂, ..., xₙ and an associative binary operation ∘, the prefix problem is to compute x₁∘x₂∘···∘xᵢ for all 1 ≤ i ≤ n. Many combinational circuits for solving the prefix problem, called prefix circuits, have been designed. It has been proved that the size s(D(n)) and the depth d(D(n)) of an n-input prefix circuit D(n) satisfy the inequality d(D(n)) + s(D(n)) ≥ 2n − 2; thus, a prefix circuit is depth-size optimal if d(D(n)) + s(D(n)) = 2n − 2. In this paper, we construct a new depth-size optimal prefix circuit SL(n). In addition, we can build depth-size optimal prefix circuits whose depth can be any integer between d(SL(n)) and n − 1. SL(n) has the same maximum fan-out lg n + 1 as Snir's SN(n), but the depth of SL(n) is smaller; thus, SL(n) is faster. Compared with another optimal prefix circuit LYD(n), d(LYD(n)) + 2 ≥ d(SL(n)) ≥ d(LYD(n)). However, LYD(n) may have a fan-out of at most 2 lg n − 2, and the fan-out of LYD(n) is greater than that of SL(n) for almost all n ≥ 12. Because an operation node with greater fan-out occupies more chip area and is slower in VLSI implementation, in most cases SL(n) needs less area and may be faster than LYD(n). Moreover, it is much easier to design SL(n) than LYD(n).
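To make the problem statement concrete, here is a minimal sketch that computes all prefixes under an arbitrary associative operation, first sequentially (n − 1 operations, depth n − 1) and then with a simple recursive-doubling scheme of logarithmic depth. It illustrates the depth/size trade-off the abstract is about, not the SL(n) construction itself; the function names are made up.

```python
from operator import add
from typing import Callable, List, TypeVar

T = TypeVar("T")

def prefixes_sequential(xs: List[T], op: Callable[[T, T], T]) -> List[T]:
    """Compute x1, x1∘x2, ..., x1∘...∘xn with n-1 operations (depth n-1)."""
    out = [xs[0]]
    for x in xs[1:]:
        out.append(op(out[-1], x))
    return out

def prefixes_doubling(xs: List[T], op: Callable[[T, T], T]) -> List[T]:
    """Kogge-Stone-style recursive doubling: about log2(n) parallel rounds,
    at the price of more operation nodes than the sequential scheme."""
    out = list(xs)
    step = 1
    while step < len(out):
        # In each round, position i combines with position i - step.
        out = [out[i] if i < step else op(out[i - step], out[i])
               for i in range(len(out))]
        step *= 2
    return out

if __name__ == "__main__":
    data = [3, 1, 4, 1, 5, 9, 2, 6]
    assert prefixes_sequential(data, add) == prefixes_doubling(data, add)
    print(prefixes_doubling(data, add))  # [3, 4, 8, 9, 14, 23, 25, 31]
```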

4.
A transformation is presented which converts any pushdown automaton (PDA) M₀ with n₀ states and p₀ stack symbols into an equivalent PDA M with n states and (n₀/n)²p₀ stack symbols, for any desired value of n, 1 ≤ n ≤ n₀. This transformation preserves realtime behavior but not determinism. The transformation is proved to be the best possible one in the following sense: for each choice of the parameters n₀ and p₀ and for any desired value of n, there is a realtime PDA M₀ such that any equivalent PDA M (whether realtime or not) having n states must have at least (n₀/n)²p₀ stack symbols. Furthermore, the loss of deterministic behavior cannot be avoided, since for each choice of n₀ and p₀, there is a deterministic PDA M₀ such that no equivalent PDA M with fewer states can be deterministic. This research was supported in part by the National Science Foundation under Grants MCS76-10076 and MCS76-10076A01.

5.
A sorting network is a combinational circuit for sorting constructed from comparison-swap units. The depth of such a circuit is a measure of its running time. It is known that sorting-network verification is computationally intractable. However, it is reasonable to hypothesize that only the fastest (that is, the shallowest) networks are likely to be fabricated. It is shown that the verification of shallow sorting networks is also computationally intractable. Firstly, a method for constructing asymptotically optimal single-exception sorting networks is demonstrated. These are networks which sort all zero-one inputs except one. More specifically, their depth is D(n−1) + 2 log(n−1) + 2, where D(n) is the minimum depth of an n-input sorting network. It follows that the verification problem for sorting networks of depth 2D(n) + 6 log n + O(1) is co-NP complete. Given the current state of knowledge about D(n) for large n, this indicates that the complexity of verification for shallow sorting networks is as great as for deep networks. This research was supported by NSF Grant CCR-8801659.
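As a concrete illustration of the objects involved (not of the paper's construction), the sketch below represents a sorting network as a list of comparison-swap pairs, verifies it against every zero-one input (the exhaustive test whose intractability is at issue) and computes its depth by greedy levelling. All identifiers are made up.

```python
from itertools import product
from typing import List, Tuple

Comparator = Tuple[int, int]  # (i, j) with i < j: swap so that wire i <= wire j

def apply_network(network: List[Comparator], values: List[int]) -> List[int]:
    v = list(values)
    for i, j in network:
        if v[i] > v[j]:
            v[i], v[j] = v[j], v[i]
    return v

def sorts_all_zero_one(network: List[Comparator], n: int) -> bool:
    """Zero-one principle: a comparator network sorts every input
    iff it sorts every 0/1 input (2^n cases, hence the intractability)."""
    for bits in product((0, 1), repeat=n):
        if apply_network(network, list(bits)) != sorted(bits):
            return False
    return True

def depth(network: List[Comparator], n: int) -> int:
    """Greedy levelling: a comparator goes to the earliest layer in which
    both of its wires are still unused."""
    level_of_wire = [0] * n
    d = 0
    for i, j in network:
        layer = max(level_of_wire[i], level_of_wire[j]) + 1
        level_of_wire[i] = level_of_wire[j] = layer
        d = max(d, layer)
    return d

if __name__ == "__main__":
    # 4-input odd-even merge network: 5 comparators, depth 3.
    batcher4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]
    print(sorts_all_zero_one(batcher4, 4), depth(batcher4, 4))  # True 3
```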

6.
We present optimal embeddings of three genres of butterfly-like graphs in the (boolean) hypercube; each embedding is specified via a linear-time algorithm. Our first embedding finds an instance of the FFT graph as a subgraph of the smallest hypercube that is big enough to hold it; thus, we embed the n-level FFT graph, which has (n+1)2ⁿ vertices, in the (n + ⌈log₂(n+1)⌉)-dimensional hypercube, with unit dilation. This embedding yields a mapping of the pipelined FFT algorithm on the hypercube architecture, which is optimal in all resources (time, processor utilization, load balancing, etc.) and which is on-line in the sense that inputs can be added to the transform even during the computation. Second, we find optimal embeddings of the n-level butterfly graph and the n-level cube-connected cycles graph, each of which has n2ⁿ vertices, in the (n + ⌈log₂ n⌉)-dimensional hypercube. These embeddings, too, have optimal dilation, congestion, and expansion. The dilation is 1 + (n mod 2), which is best possible. Our embeddings indicate that these two bounded-degree approximations to the hypercube do not have any communication power that is not already present in the hypercube. The research of D. S. Greenberg was supported in part by NSF Grant MIP-86-01885. The research of L. S. Heath was supported in part by NSF Grant DCI-85-04308. The research of A. L. Rosenberg was supported in part by NSF Grants DCI-85-04308, DCI-87-96236, and CCR-88-12567.
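A quick arithmetic check of the dimensions quoted above, useful for seeing why the host hypercube is the smallest one big enough: the expansion (host size divided by guest size) always stays below 2. The vertex counts and dimensions are taken from the abstract, with the ceilings as reconstructed here; the function names are made up.

```python
import math

def fft_embedding_dims(n: int) -> tuple[int, int, float]:
    """n-level FFT graph: (n+1)*2^n vertices, host hypercube of
    dimension n + ceil(log2(n+1)); expansion = host size / guest size."""
    guest = (n + 1) * 2 ** n
    dim = n + math.ceil(math.log2(n + 1))
    return guest, dim, 2 ** dim / guest

def butterfly_embedding_dims(n: int) -> tuple[int, int, float]:
    """n-level butterfly / cube-connected cycles: n*2^n vertices,
    host hypercube of dimension n + ceil(log2 n)."""
    guest = n * 2 ** n
    dim = n + math.ceil(math.log2(n))
    return guest, dim, 2 ** dim / guest

if __name__ == "__main__":
    for n in (3, 4, 8):
        print(n, fft_embedding_dims(n), butterfly_embedding_dims(n))
```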

7.
Summary  We present a new encoding scheme for binary trees with n internal nodes whose heights are bounded by a given value h, h ≥ ⌈log₂(n + 1)⌉. The scheme encodes the internal nodes of the tree level by level and enables us to develop an algorithm for generating all binary trees within this class in a certain predetermined order. Specifically, the trees are generated in decreasing height, and trees of the same height are generated in lexicographically increasing order. The algorithm can easily be generalized to encompass t-ary trees with bounded height. It is then shown that the average generation time per tree is constant (independent of n and h). Supported in part by the National Science Foundation under Grants MCS 8342682 and ECS 8340031
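A small sketch of the class of objects being enumerated, assuming the height bound h ≥ ⌈log₂(n + 1)⌉ as reconstructed above: it counts binary trees with n internal nodes and height at most h by a straightforward recursion. It says nothing about the paper's level-by-level encoding or its constant average generation time; it only makes the tree class and the height bound tangible, and all names are made up.

```python
from functools import lru_cache
from math import ceil, log2

@lru_cache(maxsize=None)
def count_trees(n: int, h: int) -> int:
    """Binary trees with n internal nodes and height at most h
    (height = number of internal nodes on a longest root-leaf path)."""
    if n == 0:
        return 1          # the empty tree
    if h == 0:
        return 0          # no room left for an internal node
    # A root plus a left subtree with k and a right subtree with n-1-k nodes.
    return sum(count_trees(k, h - 1) * count_trees(n - 1 - k, h - 1)
               for k in range(n))

if __name__ == "__main__":
    for n in range(1, 8):
        min_h = min(h for h in range(n + 1) if count_trees(n, h) > 0)
        assert min_h == ceil(log2(n + 1))   # the height bound quoted above
        print(n, min_h, [count_trees(n, h) for h in range(min_h, n + 1)])
```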

8.
In this paper, we consider the linear interval tolerance problem, which consists of finding the largest interval vector included in the tolerance solution set Σ([A], [b]) = {x ∈ ℝⁿ | ∀A ∈ [A], ∃b ∈ [b], Ax = b}. We describe two different polyhedra that represent subsets of all possible interval vectors in Σ([A], [b]), and we provide a new definition of the optimality of an interval vector included in Σ([A], [b]). Finally, we show how the Simplex algorithm can be applied to find an optimal interval vector in Σ([A], [b]).
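A minimal sketch of the membership test underlying this problem, assuming the usual definition of the tolerance set given above; the (lo, hi) interval representation and all names are made up. An interval vector [x] lies entirely in the tolerance set exactly when the natural interval evaluation of [A]·[x] is contained in [b] componentwise, because each entry a_ij and x_j occurs only once per row sum, making that evaluation exact. This is only the containment check, not the paper's polyhedral characterization or its Simplex-based optimization.

```python
from typing import List, Tuple

Interval = Tuple[float, float]  # (lo, hi)

def imul(a: Interval, x: Interval) -> Interval:
    """Exact range of a*x for a in [a], x in [x]."""
    products = [a[0] * x[0], a[0] * x[1], a[1] * x[0], a[1] * x[1]]
    return (min(products), max(products))

def iadd(u: Interval, v: Interval) -> Interval:
    return (u[0] + v[0], u[1] + v[1])

def contained(u: Interval, v: Interval) -> bool:
    return v[0] <= u[0] and u[1] <= v[1]

def in_tolerance_set(A: List[List[Interval]], b: List[Interval],
                     x: List[Interval]) -> bool:
    """True iff every point of the interval vector [x] is mapped into [b]
    by every matrix in [A]."""
    for row, bi in zip(A, b):
        acc: Interval = (0.0, 0.0)
        for aij, xj in zip(row, x):
            acc = iadd(acc, imul(aij, xj))
        if not contained(acc, bi):
            return False
    return True

if __name__ == "__main__":
    A = [[(0.9, 1.1), (0.0, 0.1)],
         [(0.0, 0.1), (0.9, 1.1)]]
    b = [(-2.0, 2.0), (-2.0, 2.0)]
    print(in_tolerance_set(A, b, [(-1.0, 1.0), (-1.0, 1.0)]))  # True
    print(in_tolerance_set(A, b, [(-3.0, 3.0), (-1.0, 1.0)]))  # False
```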

9.
The problem of factoring integers in polynomial time with the help of an infinitely powerful oracle who answers arbitrary questions with yes or no is considered. The goal is to minimize the number of oracle questions. Let N be a given composite n-bit integer to be factored, where n = log₂ N. The trivial method of asking for the bits of the smallest prime factor of N requires n/2 questions in the worst case. A non-trivial algorithm of Rivest and Shamir requires only n/3 questions for the special case where N is the product of two n/2-bit primes. In this paper, a polynomial-time oracle factoring algorithm for general integers is presented which, for any ε > 0, asks at most εn oracle questions for sufficiently large N, thus solving an open problem posed by Rivest and Shamir. Based on a plausible conjecture related to Lenstra's conjecture on the running time of the elliptic curve factoring algorithm, it is shown that the algorithm fails with probability at most N^(−ε/2) for all sufficiently large N.
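The "trivial method" mentioned above is easy to make concrete: each yes/no question reveals one bit of the smallest prime factor p of N, and since p ≤ √N has at most ⌈n/2⌉ bits, that many questions suffice. The sketch below simulates such an oracle for small N (a real oracle is assumed to be infinitely powerful; here it simply factors by trial division); the question format and names are made up.

```python
def smallest_prime_factor(N: int) -> int:
    """Stand-in for the infinitely powerful oracle (plain trial division)."""
    d = 2
    while d * d <= N:
        if N % d == 0:
            return d
        d += 1
    return N  # N is prime

def oracle_bit(N: int, i: int) -> bool:
    """Yes/no question: 'is bit i of the smallest prime factor of N set?'"""
    return (smallest_prime_factor(N) >> i) & 1 == 1

def factor_with_oracle(N: int) -> tuple[int, int]:
    """Trivial oracle factoring: ask for the bits of the smallest prime
    factor p.  Since p <= sqrt(N), p has at most ceil(n/2) bits, where n
    is the bit length of N; that bounds the number of questions."""
    questions = (N.bit_length() + 1) // 2
    p = 0
    for i in range(questions):
        if oracle_bit(N, i):
            p |= 1 << i
    return p, questions

if __name__ == "__main__":
    N = 101 * 223
    p, q = factor_with_oracle(N)
    print(N, "=", p, "*", N // p, "using", q, "questions")
```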

10.
Mutual convertibility of bound entangled states under local quantum operations and classical communication (LOCC) is studied. We focus on states associated with unextendible product bases (UPB) in a system of three qubits. A complete classification of such UPBs is suggested. We prove that for any pair of UPBs S and T the associated bound entangled states ρ_S and ρ_T cannot be converted to each other by LOCC, unless S and T coincide up to local unitaries. More specifically, there exists a finite precision ε(S,T) > 0 such that for any LOCC protocol mapping ρ_S into a probabilistic ensemble (p_α, ρ_α), the fidelity between ρ_T and any possible final state satisfies F(ρ_T, ρ_α) ≤ 1 − ε(S,T). PACS: 03.65.Bz; 03.67.-a; 89.70.+c.

11.
We analyze four İnce Memed novels of Yaşar Kemal using six style markers: most frequent words, syllable counts, word type (or part of speech) information, sentence length in terms of words, word length in text, and word length in vocabulary. For the analysis we divide each novel into five-thousand-word text blocks and count the frequencies of each style marker in these blocks. The style markers showing the best separation are most frequent words and sentence lengths. We use stepwise discriminant analysis to determine the best discriminators of each style marker. We then use these markers in cross-validation-based discriminant analysis. Further investigation based on multivariate analysis of variance (MANOVA) reveals how the attributes of each style marker group distinguish among the volumes.
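A minimal sketch of the kind of preprocessing described here: splitting a text into 5000-word blocks and extracting two of the six style markers, most frequent words and sentence length in words. It illustrates the setup only, not the authors' pipeline or their discriminant analysis; the tokenization rules, names, and toy text are made up.

```python
import re
from collections import Counter
from typing import Dict, List

BLOCK_SIZE = 5000  # words per text block, as in the study

def blocks_of_words(text: str, size: int = BLOCK_SIZE) -> List[List[str]]:
    words = re.findall(r"\w+", text.lower(), flags=re.UNICODE)
    return [words[i:i + size] for i in range(0, len(words), size)]

def sentence_lengths(text: str) -> List[int]:
    """Sentence length in words, splitting sentences on . ! ? (a crude rule)."""
    sentences = re.split(r"[.!?]+", text)
    return [len(re.findall(r"\w+", s)) for s in sentences if s.strip()]

def block_features(block: List[str], top: int = 20) -> Dict[str, float]:
    """Relative frequencies of the block's `top` most frequent words."""
    counts = Counter(block)
    total = len(block)
    return {w: c / total for w, c in counts.most_common(top)}

if __name__ == "__main__":
    sample = ("Memed looked at the mountains. The mountains were silent. "
              "He walked on. ") * 2000
    blocks = blocks_of_words(sample)
    lengths = sentence_lengths(sample)
    print(len(blocks), "blocks; avg sentence length:", sum(lengths) / len(lengths))
    print(list(block_features(blocks[0]).items())[:5])
```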

12.
Since Aristotle it has been recognised that a valid syllogism cannot have two particular premises. However, that is not how a lay person sees it, at least as long as the premises read "many", "most", etc., instead of a plain "some". The lay people are right if one considers that these syllogisms do not have strict but approximate (Zadeh) validity. Typically, only particular premises are available in everyday life, and one is dependent on such syllogisms. Some rules on the usage of particular premises are given below.

13.
The analytic structure of Rational Interpolants (R.I.) f(z) built from randomly perturbed data is explored; the interpolation nodes x_j, j = 1,...,M, are real points where the function f takes the prescribed data values. It is assumed that the data are randomly perturbed values of a rational function whose numerator has degree m and whose denominator has degree n. Much attention is paid to the R.I. family f_(n+1),(m−1), in the small-stochasticity régime. The main result is that the additional zero and pole are located near the root of the same random polynomial, called the Froissart Polynomial (F.P.). Under a Gaussian hypothesis on the noise, the random real root of the F.P. is distributed according to a Cauchy-Lorentz law, with parameters such that the integrated probability over the interpolation interval [x_1, x_M] is always larger than 1/2; in two cases studied in detail, it reaches 2/3 in one case and almost 3/4 in the other. For the families f_(n+k),(m+k), numerical explorations point to similar phenomena; inspection shows that, on average, the localization occurs in the complex and/or real vicinity of the interpolation interval.

14.
A technique to model and to verify distributed algorithms is suggested. This technique (based on Petri nets) reduces the modelling and analysis effort to a reasonable level. The paper outlines the technique using the example of a typical network algorithm, the echo algorithm. Supported by the DFG projects "Verteilte Algorithmen" and "Konsensalgorithmen".
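For readers unfamiliar with the example: the echo algorithm is a classic wave algorithm in which an initiator floods explorer messages through the network, every other node adopts the first sender as its parent and forwards the wave, and echo replies travel back up the resulting spanning tree until the initiator has heard from all of its neighbours. The sketch below simulates it on an undirected graph in plain Python; it is not a Petri-net model (which is what the paper is about), and all identifiers are made up.

```python
from collections import deque
from typing import Dict, List, Set

def echo(graph: Dict[int, List[int]], initiator: int) -> Dict[int, int]:
    """Simulate the echo algorithm; returns the spanning tree as a
    child -> parent map.  The run ends once the initiator has received
    an echo from every neighbour."""
    parent: Dict[int, int] = {}
    pending: Dict[int, Set[int]] = {v: set() for v in graph}  # awaited echoes
    msgs = deque(("explore", initiator, n) for n in graph[initiator])
    pending[initiator] = set(graph[initiator])

    while msgs:
        kind, src, dst = msgs.popleft()
        if kind == "explore":
            if dst not in parent and dst != initiator:
                parent[dst] = src                       # first explorer wins
                others = [n for n in graph[dst] if n != src]
                pending[dst] = set(others)
                msgs.extend(("explore", dst, n) for n in others)
                if not others:                          # leaf: echo at once
                    msgs.append(("echo", dst, src))
            else:
                msgs.append(("echo", dst, src))         # already visited: bounce back
        else:  # echo received
            pending[dst].discard(src)
            if not pending[dst] and dst != initiator:
                msgs.append(("echo", dst, parent[dst]))
    return parent

if __name__ == "__main__":
    g = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2], 4: [2]}
    print(echo(g, 1))  # e.g. {2: 1, 3: 1, 4: 2}
```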

15.
The design of the database is crucial to the process of designing almost any Information System (IS) and involves two clearly identifiable key concepts: schema and data model, the latter allowing us to define the former. Nevertheless, the term "model" is commonly applied indistinctly to both, the confusion arising from the fact that in Software Engineering (SE), unlike in formal or empirical sciences, the notion of model has a double meaning of which we are not always aware. If we take our idea of model directly from the empirical sciences, then the schema of a database would actually be a model, whereas the data model would be a set of tools allowing us to define such a schema. The present paper discusses the meaning of "model" in the area of Software Engineering from a philosophical point of view, an important topic since the confusion described directly affects other debates where "model" is a key concept. We also suggest that the need for a philosophical discussion on the concept of data model is a further argument in favour of institutionalizing a new area of knowledge, which could be called the Philosophy of Engineering.

16.
Through key examples and constructs, exact and approximate, the complexity, computability, and solution of linear programming systems are reexamined in the light of Khachian's new notion of (approximate) solution. Algorithms, basic theorems, and alternate representations are reviewed. It is shown that the Klee-Minty example has never been exponential for (exact) adjacent extreme point algorithms and that the Balinski-Gomory (exact) algorithm continues to be polynomial in cases where (approximate) ellipsoidal centered-cutoff algorithms (Levin, Shor, Khachian, Gacs-Lovasz) are exponential. By model approximation, both the Klee-Minty and the new J. Clausen examples are shown to be trivial (explicitly solvable) interval programming problems. A new notion of computable (approximate) solution is proposed together with an a priori regularization for linear programming systems. New polyhedral constraint contraction algorithms are proposed for approximate solution, and the relevance of interval programming for good starts or exact solution is brought forth. It is concluded from all this that the imposed problem ignorance of past complexity research is deleterious to research progress on computability or efficiency of computation. This research was partly supported by Project NR047-071, ONR Contract N00014-80-C-0242, and Project NR047-021, ONR Contract N00014-75-C-0569, with the Center for Cybernetic Studies, The University of Texas at Austin.

17.
A Maple procedure is described by means of which an algebraic function given by an equation f(x, y) = 0 can be expanded into a fractional power series (Puiseux series)

y(x) = Σ_{n≥0} a_n (x − x₀)^{n/q},  where q is a positive integer,

of special (nice) type. It may be a series with polynomial, rational, or hypergeometric coefficients, or an m-sparse or m-sparse m-hypergeometric series. First, a linear ordinary differential equation with polynomial coefficients Ly(x) = 0 is constructed which is satisfied by the given algebraic function. The exponents n/q, n ≥ 0, and a required number of initial coefficients a_0, ..., a_N are computed using the Maple algcurves package. By means of the Maple Slode package, a solution to the equation Ly(x) = 0 is constructed in the form of a series with nice coefficients whose initial coefficients correspond to the calculated a_0, ..., a_N. The suggested procedure can construct an expansion at a user-given point x₀, as well as determine points where an expansion of such a special type is possible.

18.
Summary  Formula size and depth are two important complexity measures of Boolean functions. We study the tradeoff between these two measures: we give an infinite set of Boolean functions and show for nearly each of them that there is no monotone formula computing it which is optimal with respect to both measures. We give a lower and an upper bound on the product of size and depth of monotone formulae computing our functions. This implies, moreover, a logarithmic lower bound on circuit depth. Denotations: ℕ denotes the set of natural numbers {1, 2, ...}; for x > 0, ⌊x⌋ = max{y ∈ ℕ ∪ {0} | y ≤ x}; log denotes the logarithm to the base 2.

19.
Pushing Convertible Constraints in Frequent Itemset Mining
Recent work has highlighted the importance of the constraint-based mining paradigm in the context of frequent itemsets, associations, correlations, sequential patterns, and many other interesting patterns in large databases. Constraint pushing techniques have been developed for mining frequent patterns and associations with antimonotonic, monotonic, and succinct constraints. In this paper, we study constraints which cannot be handled with existing theory and techniques in frequent pattern mining. For example, avg(S) θ v, median(S) θ v, and sum(S) θ v (where S can contain items of arbitrary values, θ ∈ {<, ≤, ≥, >}, and v is a real number) are customarily regarded as tough constraints in that they cannot be pushed inside an algorithm such as Apriori. We develop a notion of convertible constraints and systematically analyze, classify, and characterize this class. We also develop techniques which enable them to be readily pushed deep inside the recently developed FP-growth algorithm for frequent itemset mining. Results from our detailed experiments show the effectiveness of the techniques developed.
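A small sketch of why a constraint such as avg(S) ≥ v is "convertible": if the items are arranged in value-descending order, the average of a growing prefix can only decrease, so a prefix that violates the constraint cannot be repaired by any extension in that order, and the search can prune along that order much as FP-growth does. This illustrates the idea only, not the authors' implementation; the item values, threshold, and names are made up.

```python
from statistics import mean
from typing import Dict, List, Tuple

# Hypothetical item values (e.g. prices); the constraint is avg(S) >= V.
VALUES: Dict[str, float] = {"a": 90, "b": 70, "c": 40, "d": 20, "e": 10}
V = 50.0

def convertible_order(items: List[str]) -> List[str]:
    """Value-descending order: along this order, the average of a growing
    prefix is non-increasing, making avg(S) >= V prefix-antimonotone."""
    return sorted(items, key=lambda i: -VALUES[i])

def enumerate_satisfying(items: List[str]) -> List[Tuple[str, ...]]:
    """Depth-first enumeration of itemsets with avg(S) >= V, pruning a
    branch as soon as its prefix average drops below V."""
    order = convertible_order(items)
    result: List[Tuple[str, ...]] = []

    def extend(prefix: List[str], start: int) -> None:
        for k in range(start, len(order)):
            candidate = prefix + [order[k]]
            if mean(VALUES[i] for i in candidate) < V:
                continue  # no extension of `candidate` in this order can recover
            result.append(tuple(candidate))
            extend(candidate, k + 1)

    extend([], 0)
    return result

if __name__ == "__main__":
    print(enumerate_satisfying(list(VALUES)))
    # Includes ('a',), ('a', 'b'), ('a', 'b', 'c'), ... but never ('d',) or ('e',).
```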

20.
Summary  We consider a specific kind of binary tree with weighted edges. Each right edge has weight α, while each left edge has weight β. Furthermore, no path in the tree is allowed to contain L or more consecutive β-edges, where L ≥ 1 is fixed. Given α, β, L, and the number of nodes n, an optimal tree is one which minimizes the total weighted path length. Algorithms for constructing an optimal tree as well as all optimal trees for given α, β, L, and n are proposed and analyzed. Timing and storage requirements are also discussed.
