Similar Documents
1.
Suppose a directed graph has its arcs stored in secondary memory, and we wish to compute its transitive closure, also storing the result in secondary memory. We assume that an amount of main memory capable of holding s values is available, and that s lies between n, the number of nodes of the graph, and e, the number of arcs. The cost measure we use for algorithms is the I/O complexity of Kung and Hong, where we count 1 every time a value is moved into main memory from secondary memory, or vice versa. In the dense case, where e is close to n^2, we show that I/O equal to O(n^3/√s) is sufficient to compute the transitive closure of an n-node graph, using main memory of size s. Moreover, it is necessary for any algorithm that is standard, in a sense to be defined precisely in the paper; roughly, standard means that paths are constructed only by concatenating arcs and previously discovered paths. For the sparse case, we show that I/O equal to O(n^2·√e/√s) is sufficient, although the algorithm we propose meets our definition of standard only if the underlying graph is acyclic. We also show that Ω(n^2·√e/√s) is necessary for any standard algorithm in the sparse case. That settles the I/O complexity of the sparse/acyclic case for standard algorithms. It is unknown whether this complexity can be achieved in the sparse, cyclic case by a standard algorithm, and it is unknown whether the bound can be beaten by nonstandard algorithms. We then consider a special kind of standard algorithm, in which paths are constructed only by concatenating arcs and old paths, never by concatenating two old paths. This restriction seems essential if we are to take advantage of sparseness. Unfortunately, we show that almost another factor of n I/O is necessary: there is an algorithm in this class using I/O O(n^3·√e/√s) for arbitrary sparse graphs, including cyclic ones, and every algorithm in the restricted class must use Ω(n^3·√e/(√s·log^3 n)) I/O on some cyclic graphs. The work of this author was partially supported by NSF grant IRI-87-22886, IBM contract 476816, Air Force grant AFOSR-88-0266, and a Guggenheim fellowship.
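The "standard" algorithms above build paths only by concatenating arcs with already-discovered paths. As a purely in-memory illustration of that restriction (my own sketch; it deliberately ignores the blocking and I/O accounting that are the subject of the paper, and the four-node graph is invented), here is a semi-naive closure computation:

```python
from collections import defaultdict

def transitive_closure(arcs):
    """Semi-naive closure: new paths are formed only by concatenating a
    previously discovered path with a single arc (the 'standard' restriction)."""
    succ = defaultdict(set)                              # u -> {v : (u, v) is an arc}
    for u, v in arcs:
        succ[u].add(v)
    closure = {u: set(vs) for u, vs in succ.items()}     # all paths found so far
    frontier = {u: set(vs) for u, vs in succ.items()}    # paths found in the last round
    while any(frontier.values()):
        new = defaultdict(set)
        for u, reached in frontier.items():
            for v in reached:
                for w in succ.get(v, ()):                # path u->v plus arc v->w
                    if w not in closure[u]:
                        closure[u].add(w)
                        new[u].add(w)
        frontier = new
    return closure

print(transitive_closure([(1, 2), (2, 3), (3, 1), (3, 4)]))
```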

2.
Common Lisp [25],[26] includes a dynamic datatype system of moderate complexity, as well as predicates for checking the types of language objects. Additionally, an interesting predicate of two type specifiers, SUBTYPEP, is included in the language. This subtypep predicate provides a mechanism with which to query the Common Lisp type system regarding containment relations among the various built-in and user-defined types. While subtypep is rarely needed by an applications programmer, the efficiency of a Common Lisp implementation can depend critically upon the quality of its subtypep predicate: the run-time system typically calls upon subtypep to decide what sort of representations to use when making arrays; the compiler calls upon subtypep to interpret user declarations, on which efficient data representation and code generation decisions are based. As might be expected due to the complexity of the Common Lisp type system, there may be type containment questions which cannot be decided. In these cases subtypep is expected to return "can't determine", in order to avoid giving an incorrect answer. Unfortunately, most Common Lisp implementations have abused this license by answering "can't determine" in all but the most trivial cases. In particular, most Common Lisp implementations of SUBTYPEP fail on the basic axioms of the Common Lisp type system itself [25],[26]. This situation is particularly embarrassing for Lisp, the premier symbol-processing language, in which the implementation of complex symbolic logical operations should be relatively easy. Since subtypep was presumably included in Common Lisp to answer the hard cases of type containment, this lazy evaluation limits the usefulness of an important language feature.
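For orientation, here is a minimal sketch in Python (not Lisp, and not any implementation's actual algorithm) of the two-valued contract subtypep is meant to honor, over an invented toy lattice of atomic type names; the hard cases discussed in the paper come from compound specifiers such as AND, OR, MEMBER, numeric ranges, and SATISFIES, none of which appear here.

```python
# Toy illustration: SUBTYPEP returns two values, (subtype-p, valid-p);
# (NIL, NIL) plays the role of "can't determine".  The lattice is invented.
DIRECT_SUPERTYPES = {
    "fixnum": {"integer"},
    "integer": {"rational"},
    "rational": {"real"},
    "float": {"real"},
    "real": {"number"},
    "number": {"t"},
    "t": set(),
}

def subtypep(t1, t2):
    """Return (subtype_p, valid_p), mimicking CL:SUBTYPEP's two return values."""
    if t1 not in DIRECT_SUPERTYPES or t2 not in DIRECT_SUPERTYPES:
        return (False, False)            # unknown type specifier: "can't determine"
    seen, stack = set(), [t1]
    while stack:                         # walk upward through declared supertypes
        t = stack.pop()
        if t == t2:
            return (True, True)
        if t not in seen:
            seen.add(t)
            stack.extend(DIRECT_SUPERTYPES[t])
    return (False, True)                 # definitely not a subtype in this lattice

print(subtypep("fixnum", "number"))      # (True, True)
print(subtypep("float", "integer"))      # (False, True)
print(subtypep("widget", "number"))      # (False, False)
```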

3.
We show that if a complexity class C is closed downward under polynomial-time majority truth-table reductions (≤^p_mtt), then practically every other polynomial closure property it enjoys is inherited by the corresponding bounded two-sided error class BP[C]. For instance, the Arthur-Merlin game class AM [B1] enjoys practically every closure property of NP. Our main lemma shows that, for any relativizable class D which meets two fairly transparent technical conditions, we have C ⊆ BP[C] ⊆ BP[D^C]. Among our applications, we simplify the proof by Toda [To1], [To2] that the polynomial hierarchy PH is contained in BP[⊕P]. We also show that relative to a random oracle R, PH^R is properly contained in ⊕P^R. The first author was supported in part by NSF Grant CCR-9011248 and the second author was supported in part by NSF Grant CCR-89011154.

4.
On Bounding Solutions of Underdetermined Systems
Sufficient conditions are given for the existence and uniqueness of a solution x* ∈ D (⊆ R^n) of Y(x) = 0, where f: R^n → R^m (m ≤ n) with f ∈ C^2(D), D ⊆ R^n is an open convex set, and Y = f′(x)⁺ f. They are compared with similar results due to Zhang, Li and Shen (Reliable Computing 5(1) (1999)). An algorithm for bounding zeros of f(·) is described, and numerical results for several examples are given.

5.
We consider the deterministic and the randomized decision tree complexities for Boolean functions, denoted DC(f) and RC(f), respectively. A major open problem is how small RC(f) can be with respect to DC(f). It is well known that RC(f) ≥ DC(f)^0.5 for every Boolean function f (called the 0.5-exponent). On the other hand, some Boolean function f is known to have RC(f) = O(DC(f)^0.753...) (the 0.753...-exponent). It is not known whether there is a Boolean function with exponent smaller than 0.753...; likewise, no lower bound for arbitrary Boolean functions with exponent greater than 0.5 is known. Our result is a 0.51 lower bound on the exponent for every read-once function. Read-once means that each input variable appears exactly once in the Boolean formula representing the function. To obtain this result we generalize an existing lower-bound technique and combine it with restriction arguments. This result provides a lower bound of n^0.51 on the number of positions that have to be evaluated by any randomized α-β pruning algorithm computing the value of any two-person zero-sum game tree with n final positions.
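For orientation (my own illustration, not the paper's construction): the 0.753...-exponent comes from the uniform binary NAND tree, evaluated by probing a random child first and skipping its sibling whenever the outcome is already forced. A short sketch:

```python
import random

def eval_nand_tree(leaves):
    """Return (value, number_of_leaves_read) for a complete binary NAND tree."""
    reads = 0

    def go(lo, hi):                      # evaluate the subtree over leaves[lo:hi]
        nonlocal reads
        if hi - lo == 1:
            reads += 1
            return leaves[lo]
        mid = (lo + hi) // 2
        halves = [(lo, mid), (mid, hi)]
        random.shuffle(halves)           # probe the two subtrees in random order
        first = go(*halves[0])
        if first == 0:                   # 0 forces NAND = 1; the sibling is skipped
            return 1
        return 1 - go(*halves[1])        # NAND(1, x) = 1 - x

    return go(0, len(leaves)), reads

value, reads = eval_nand_tree([random.randint(0, 1) for _ in range(2 ** 10)])
print(value, "leaves read:", reads, "of", 2 ** 10)
```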

6.
In this paper, we consider the class of Boolean functions definable by read-once expressions, i.e., Boolean expressions in which no variable occurs more than once. We present an algorithm which transforms a Boolean formula E into an equivalent read-once expression, if one exists, in time linear in ‖E‖ times a factor depending only on n_m, where ‖E‖ is the size of E and n_m is the number of variables that occur more than once in E. As an application, we obtain a polynomial-time algorithm for Mundici's problem of recognizing such functions from k-formulas [17]. Furthermore, we show that recognizing this class of Boolean functions is co-NP-complete for functions essentially dependent on all variables, and we give a bound close to co-NP for the general case.

7.
Let B be a Banach space of R^n-valued continuous functions on [0, ∞) with f ∈ B. Consider the nonlinear Volterra integral equation (*) x(t) + ∫₀^t K(t, s, x(s)) ds = f(t). We use the implicit function theorem to give sufficient conditions on B and K(t, s, x) for the existence of a unique solution x ∈ B to (*) for each f ∈ B with ‖f‖_B sufficiently small. Moreover, there is a constant M > 0, independent of f, with ‖x‖_B ≤ M‖f‖_B. Part of this work was done while the author was visiting at Wright State University.
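A minimal numerical sketch (my own, with an invented kernel K and forcing term f; the paper itself is about existence theory, not computation): a marching scheme for x(t) + ∫₀^t K(t,s,x(s)) ds = f(t) using the left-rectangle rule, which keeps every step explicit.

```python
import math

def solve_volterra(K, f, T=1.0, N=200):
    """March x(t_i) = f(t_i) - h * sum_{j<i} K(t_i, t_j, x_j) on a uniform grid."""
    h = T / N
    t = [i * h for i in range(N + 1)]
    x = [f(t[0])]                                   # at t = 0 the integral vanishes
    for i in range(1, N + 1):
        integral = h * sum(K(t[i], t[j], x[j]) for j in range(i))
        x.append(f(t[i]) - integral)
    return t, x

# Invented example with a small forcing term and a mildly nonlinear kernel.
K = lambda t, s, x: 0.5 * math.exp(-(t - s)) * math.sin(x)
f = lambda t: 0.1 * t
t, x = solve_volterra(K, f)
print(x[-1])
```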

8.
This paper is an informal introduction to the theory of types which uses a connective for the intersection of two types and a constant ω for a universal type, besides the usual connective for function types. This theory was first devised in about 1977 by Coppo, Dezani and Sallé in the context of λ-calculus, and its main development has been by Coppo and Dezani and their collaborators in Turin. With suitable axioms and rules to assign types to λ-calculus terms, they obtained a system in which (i) the set of types given to a term does not change under β-conversion, (ii) some interesting sets of terms, for example the solvable terms and the terms with normal form, can be characterised exactly by the types of their members, and (iii) the type apparatus is not so complex as polymorphic systems with quantifier-containing types, and therefore probably not so expensive to implement mechanically as these systems. There are in fact several variant systems with different detailed properties. This paper defines and motivates the simplest one, from which the others are derived, and describes its most basic properties. No proofs are given, but the motivation is shown by examples. A comprehensive bibliography is included.
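A standard illustration of the extra typing power (not taken from this paper's text, but typical of these systems): the self-application term λx.xx, which has no simple type, can be typed by giving the bound variable the intersection type σ ∩ (σ → τ); the two occurrences of x are then used at types σ → τ and σ respectively, so λx.xx receives the type (σ ∩ (σ → τ)) → τ.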

9.
10.
An analogy between functional dependencies and implicational formulas of sentential logic has been discussed in the literature. We feel that a somewhat different connexion between dependency theory and sentential logic is suggested by the similarity between Armstrong's axioms for functional dependencies and Tarski's defining conditions for consequence relations, and we pursue aspects of this other analogy here for their theoretical interest. The analogy suggests, for example, a different semantic interpretation of consequence relations: instead of thinking of B as a consequence of a set of formulas {A_1,...,A_n} when B is true on every assignment of truth-values on which each A_i is true, we can think of this relation as obtaining when every pair of truth-value assignments which give the same truth-values to A_1, the same truth-values to A_2,..., and the same truth-values to A_n also make the same assignment in respect of B. We describe the former as the consequence relation inference-determined by the class of truth-value assignments (valuations) under consideration, and the latter as the consequence relation supervenience-determined by that class of assignments. Some comparisons will be made between these two notions.
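To make the contrast concrete (a small brute-force sketch of my own, not from the paper): p is a consequence of {p ∧ q} in the usual inference-determined sense, but not in the supervenience-determined sense, since two valuations can agree on the truth-value of p ∧ q while disagreeing on p.

```python
from itertools import product

ATOMS = ("p", "q")
A1 = lambda v: v["p"] and v["q"]          # premise A1 = p & q
B  = lambda v: v["p"]                     # candidate consequence B = p

valuations = [dict(zip(ATOMS, bits)) for bits in product([True, False], repeat=len(ATOMS))]

def inference_consequence(premises, B, vals):
    """B is true on every valuation on which every premise is true."""
    return all(B(v) for v in vals if all(A(v) for A in premises))

def supervenience_consequence(premises, B, vals):
    """Any two valuations agreeing on every premise's truth value agree on B."""
    return all(B(v) == B(w)
               for v in vals for w in vals
               if all(A(v) == A(w) for A in premises))

print(inference_consequence([A1], B, valuations))      # True:  p & q entails p
print(supervenience_consequence([A1], B, valuations))  # False: fixing the value of
                                                       # p & q does not fix p
```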

11.
Given a reachable discrete-time linear system (A, b), the reachable set is a cone when a positivity constraint is imposed on the input. The problem to be studied is the geometrical structure of the reachable set R = cone(b, Ab, A^2b, ...) in terms of the spectrum of A. In particular, conditions which ensure that R, or its closure, is a polyhedral proper cone are derived. The impact of the given results on finite-time reachability and positive realizability is also discussed.

12.
We consider the half-space range-reporting problem: Given a set S of n points in R^d, preprocess it into a data structure so that, given a query half-space γ, all k points of S lying in γ can be reported efficiently. We extend previously known static solutions to dynamic ones, supporting insertions and deletions of points of S. For a given parameter m, n ≤ m ≤ n^⌊d/2⌋, and an arbitrarily small positive constant ε, we achieve O(m^(1+ε)) space and preprocessing time, O((n/m^(1/⌊d/2⌋)) log n + k) query time, and O(m^(1+ε)/n) amortized update time (d ≥ 3). We present, among others, the following applications: an O(n^(1+ε))-time algorithm for computing convex layers in R^3, and an output-sensitive algorithm for computing a level in an arrangement of planes in R^3, whose time complexity is O((b + n) n^ε), where b is the size of the level. Work by the first author has been supported by National Science Foundation Grant CCR-91-06514. A preliminary version of this paper appeared in Agarwal et al. [2], which also contains the results of [20] on dynamic bichromatic closest pair and minimum spanning trees.
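To fix the interface being supported (a naive baseline of my own, not the paper's data structure), here is a linear-scan reporter with the same operations; the point of the paper is to answer the query far faster than this O(n) scan.

```python
class NaiveHalfspaceReporter:
    """Linear-scan baseline: insert / delete / half-space reporting query."""
    def __init__(self, points=()):
        self.points = set(points)          # points stored as tuples (x, y, z)

    def insert(self, p):
        self.points.add(p)

    def delete(self, p):
        self.points.discard(p)

    def query(self, a, c):
        """Report all points p with a·p <= c (the closed half-space a·x <= c)."""
        return [p for p in self.points
                if sum(ai * pi for ai, pi in zip(a, p)) <= c]

ds = NaiveHalfspaceReporter({(0, 0, 0), (1, 2, 3), (-1, -1, 4)})
ds.insert((5, 5, 5))
print(ds.query(a=(1, 1, 1), c=3))          # points on or below the plane x+y+z = 3
```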

13.
Trakhtenbrot calls a set A autoreducible if A is Turing-reducible to A via an oracle Turing machine that never queries the oracle about the input string. Yao considers sets that are autoreducible via probabilistic, polynomial-time oracle Turing machines, and he calls such sets coherent. We continue the study of autoreducible sets, including those that are autoreducible via a family of polynomial-sized circuits, which we call weakly coherent. Sets that are not weakly coherent are called strongly incoherent. We show
–  If s is superpolynomial and space-constructible, then there is a strongly incoherent set in DSPACE(s(n)).
–  If NEEEXP ⊈ BPEEEXP, then there is a set in NP that is incoherent.
–  If A is complete for any of the classes Σ_i^p, Π_i^p, or Δ_i^p, i ≥ 0, then A is coherent. In particular, all NP-complete sets are coherent.
As corollaries, we obtain new lower bounds on program checkability and random-self-reducibility. These results answer open questions posed by Yao and by Blum and Kannan.
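For intuition about the basic definition (an invented example, not one from the paper): the set of bit strings containing an even number of 1s is autoreducible, because membership of x can be decided by querying the oracle on a single string different from x.

```python
def in_even_parity(x: str) -> bool:
    return x.count("1") % 2 == 0

def autoreduction(x: str, oracle) -> bool:
    """Decide membership of x while querying the oracle only on strings != x."""
    if x == "":
        return True                      # the empty string has zero 1s (even)
    flipped = ("1" if x[0] == "0" else "0") + x[1:]
    return not oracle(flipped)           # flipping one bit flips the parity

x = "100110"
print(in_even_parity(x), autoreduction(x, in_even_parity))   # same answer
```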

14.
Transformation of programs for fault-tolerance
In this paper we describe how a program constructed for a fault-free system can be transformed into a fault-tolerant program for execution on a system which is susceptible to failures. A program is described by a set of atomic actions which perform transformations from states to states. We assume that a fault environment is represented by a program F. Interference by the fault environment F on the execution of a program P can then be described as a fault-transformation ℱ which transforms P into a program ℱ(P). This is proved to be equivalent to the program P ∪ P_F, where P_F is derived from P and F, and P ∪ P_F denotes the union of the sets of actions of P and P_F. A recovery transformation ℛ transforms P into a program ℛ(P) = P ∪ R by adding a set of recovery actions R, called a recovery program. If the system is fail-stop and faults do not affect recovery actions, we have ℱ(ℛ(P)) = ℱ(P) ∪ R = P ∪ P_F ∪ R. We illustrate this approach to fault-tolerant programming by considering the problem of designing a protocol that guarantees reliable communication from a sender to a receiver in spite of faults in the communication channel between them.
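Read concretely (a toy model of my own, not the paper's formal framework), the identity ℱ(ℛ(P)) = ℱ(P) ∪ R = P ∪ P_F ∪ R is simply a union of action sets, as the following sketch with invented action names shows.

```python
def fault_transform(P, P_F):
    """Apply the fault transformation: P running under fault actions, P ∪ P_F."""
    return P | P_F

def recovery_transform(P, R):
    """Apply the recovery transformation: P augmented with recovery actions, P ∪ R."""
    return P | R

P   = {"send", "receive", "ack"}                   # invented program actions
P_F = {"drop_message", "corrupt_message"}          # fault actions derived from P and F
R   = {"timeout_resend", "checksum_retry"}         # recovery actions

lhs = fault_transform(recovery_transform(P, R), P_F)   # fault applied after recovery
rhs = fault_transform(P, P_F) | R                      # recovery added after fault
print(lhs == rhs == P | P_F | R)                       # True
```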

15.
We extend the stratified model of probabilistic processes to obtain a very general notion of process priority. The main idea is to allow probability guards of value 0 to be associated with alternatives of a probabilistic summation expression. Such alternatives can be chosen only if the non-zero alternatives are precluded by contextual constraints. We refer to this model as one of extremal probability and to its signature as PCCS. We provide PCCS with a structural operational semantics and a notion of probabilistic bisimulation, which is shown to be a congruence. Of particular interest is the abstraction of PCCS in which all non-zero probability guards are identified; this abstraction represents a customized framework for reasoning about priority, and covers all features of process algebras proposed for reasoning about priority that we know of. A preliminary version of this paper appeared in Proceedings of CONCUR '90 (First International Conference on Concurrency Theory), Vol. 458 of the Springer-Verlag series Lecture Notes in Computer Science, pp. 456-466, Aug. 1990. The research of Scott Smolka was supported in part by NSF Grants CCR-9120995, CCR-9208585 and CCR-9505562, and AFOSR Grants F49620-93-1-0250, F49620-95-1-0508 and F49620-96-1-0087.
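As a concrete reading of "zero-probability guards as priority" (a toy sketch of my own, not the PCCS semantics), the following choice function picks a zero-guarded alternative only when every non-zero alternative is blocked by its context.

```python
import random

def choose(alternatives):
    """alternatives: list of (probability_guard, enabled, label) triples."""
    live_nonzero = [(p, lab) for p, en, lab in alternatives if en and p > 0]
    if live_nonzero:                                   # sample among non-zero guards
        total = sum(p for p, _ in live_nonzero)
        r, acc = random.uniform(0, total), 0.0
        for p, lab in live_nonzero:
            acc += p
            if r <= acc:
                return lab
    live_zero = [lab for p, en, lab in alternatives if en and p == 0]
    return random.choice(live_zero) if live_zero else None

# A low-priority recovery action fires only when the normal actions are blocked.
alts = [(0.7, False, "send"), (0.3, False, "ack"), (0.0, True, "recover")]
print(choose(alts))        # "recover": both non-zero alternatives are precluded
```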

16.
17.
We show that BPP can be simulated in subexponential time for infinitely many input lengths unless exponential time
–  collapses to the second level of the polynomial-time hierarchy,
–  has polynomial-size circuits, and
–  has publishable proofs (EXPTIME = MA).
We also show that BPP is contained in subexponential time unless exponential time has publishable proofs for infinitely many input lengths. In addition, we show that BPP can be simulated in subexponential time for infinitely many input lengths unless there exist unary languages in MA − P.

18.
The plane with parallel coordinates
By means of Parallel Coordinates, planar graphs of multivariate relations are obtained. Certain properties of the relationship correspond to the geometrical properties of its graph. On the plane, a point-line duality with several interesting properties is induced. A new duality between bounded and unbounded convex sets and hstars (a generalization of hyperbolas), and between Convex Unions and Intersections, is found. This motivates some efficient Convexity algorithms and other results in Computational Geometry. There is also a surprising cusp-inflection point duality. The narrative ends with a preview of the corresponding results in R^N.
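To make the induced point-line duality concrete (standard parallel-coordinates construction; the axis spacing d and the sample line are my own choices), the sketch below checks that the segments representing the points of a planar line all pass through a single dual point.

```python
# A point (a, b) becomes the segment joining (0, a) on axis X1 to (d, b) on axis X2;
# all points of the line y = m*x + k map to segments through the common dual point
# (d/(1-m), k/(1-m)), provided m != 1.
d = 1.0                                   # distance between the two parallel axes

def point_to_segment(a, b):
    """Parallel-coordinate representation of the planar point (a, b)."""
    return (0.0, a), (d, b)

def line_to_dual_point(m, k):
    """Dual point of the planar line y = m*x + k (m != 1)."""
    return d / (1.0 - m), k / (1.0 - m)

def height_at(segment, x):
    (x0, y0), (x1, y1) = segment
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

m, k = -2.0, 3.0
dual = line_to_dual_point(m, k)
for a in (-1.0, 0.0, 2.0):                # three sample points on y = -2x + 3
    seg = point_to_segment(a, m * a + k)
    print(abs(height_at(seg, dual[0]) - dual[1]) < 1e-12)   # True for each
```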

19.
In this paper, we investigate the numerical solution of a model equation u_xx = exp(−·) (and several slightly more general problems) when the regularization parameter ε is small, using the standard central difference scheme on nonuniform grids. In particular, we are interested in the error behaviour in two limiting cases: (i) the total mesh point number N is fixed while ε → 0, and (ii) ε is fixed while N → ∞. Using a formal analysis, we show that a generalized version of a special piecewise uniform mesh [12] and an adaptive grid based on the equidistribution principle share some common features, and that the optimal meshes give rates of convergence bounded by |log(ε)| as ε → 0 for given N, which are shown to be sharp by numerical tests.
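As a toy illustration of the two ingredients (my own experiment, not the paper's analysis; the layer function exp(−x/ε), the monitor, and the parameters are invented), the sketch below applies the standard nonuniform central difference for u_xx on a uniform mesh and on a mesh that equidistributes a simple monitor function.

```python
import math

eps = 1e-2
u     = lambda x: math.exp(-x / eps)                    # invented layer function
u_xx  = lambda x: math.exp(-x / eps) / eps**2           # its exact second derivative
M_int = lambda x: x + (1.0 - math.exp(-x / eps))        # integral of monitor 1 + exp(-x/eps)/eps

def equidistributed_mesh(N):
    """x_0..x_N in [0, 1] with equal monitor mass between neighbouring nodes."""
    total = M_int(1.0)
    mesh = [0.0]
    for i in range(1, N):
        target, lo, hi = i * total / N, 0.0, 1.0
        for _ in range(60):                             # bisection for M_int(x) = target
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if M_int(mid) < target else (lo, mid)
        mesh.append(0.5 * (lo + hi))
    mesh.append(1.0)
    return mesh

def second_difference_error(x):
    """Max error of the three-point nonuniform central difference for u_xx."""
    err = 0.0
    for i in range(1, len(x) - 1):
        hl, hr = x[i] - x[i - 1], x[i + 1] - x[i]
        approx = 2.0 * ((u(x[i + 1]) - u(x[i])) / hr
                        - (u(x[i]) - u(x[i - 1])) / hl) / (hl + hr)
        err = max(err, abs(approx - u_xx(x[i])))
    return err

N = 64
uniform = [i / N for i in range(N + 1)]
print("uniform mesh error :", second_difference_error(uniform))
print("adapted mesh error :", second_difference_error(equidistributed_mesh(N)))
```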

20.
I begin by tracing some of the confusions regarding levels and reduction to a failure to distinguish two different principles according to which theories can be viewed as hierarchically arranged: epistemic authority and ontological constitution. I then argue that the notion of levels relevant to the debate between symbolic and connectionist paradigms of mental activity answers to neither of these models, but is rather correlative to the hierarchy of functional decompositions of cognitive tasks characteristic of homuncular functionalism. Finally, I suggest that the incommensurability of the intentional and extensional vocabularies constitutes a strong prima facie reason to conclude that there is little likelihood of filling in the story of Bechtel's missing level in such a way as to bridge the gap between such homuncular functionalism and his own model of mechanistic explanation.
