Similar Documents
20 similar documents found.
1.
In this paper, we study two interprocedural program-analysis problems, interprocedural slicing and interprocedural dataflow analysis, and present the following results:
  • Interprocedural slicing is log-space complete for P.
  • The problem of obtaining “meet-over-all-valid-paths” solutions to interprocedural versions of distributive dataflow-analysis problems is P-hard.
  • Obtaining “meet-over-all-valid-paths” solutions to interprocedural versions of distributive dataflow-analysis problems that involve finite sets of dataflow facts (such as the classical “gen/kill” problems) is log-space complete for P.
  These results provide evidence that there do not exist fast (NC-class) parallel algorithms for interprocedural slicing and precise interprocedural dataflow analysis (unless P = NC). That is, it is unlikely that there are algorithms for interprocedural slicing and precise interprocedural dataflow analysis for which the number of processors is bounded by a polynomial in the size of the input and whose running time is bounded by a polynomial in the logarithm of the size of the input. This suggests that there are limits on the ability to use parallelism to overcome compiler bottlenecks caused by expensive interprocedural-analysis computations.
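For orientation, the “gen/kill” problems mentioned above are classical distributive dataflow analyses. The Python sketch below runs an intraprocedural reaching-definitions pass on a small, assumed control-flow graph; it only illustrates the kind of problem being classified, not the interprocedural algorithms studied in the paper.

```python
# Minimal reaching-definitions analysis: a classical distributive "gen/kill"
# dataflow problem (intraprocedural only; illustrative, not the paper's setting).

# Hypothetical control-flow graph: node -> list of successors.
cfg = {"entry": ["b1"], "b1": ["b2", "b3"], "b2": ["b4"], "b3": ["b4"], "b4": []}

# gen[n] / kill[n]: definitions generated or killed at node n (illustrative sets).
gen  = {"entry": set(), "b1": {"d1"}, "b2": {"d2"}, "b3": {"d3"}, "b4": set()}
kill = {"entry": set(), "b1": {"d2"}, "b2": {"d1"}, "b3": set(),  "b4": set()}

preds = {n: [] for n in cfg}
for n, succs in cfg.items():
    for s in succs:
        preds[s].append(n)

# Standard worklist iteration to the least fixed point.
IN  = {n: set() for n in cfg}
OUT = {n: set() for n in cfg}
work = list(cfg)
while work:
    n = work.pop()
    IN[n] = set().union(*(OUT[p] for p in preds[n])) if preds[n] else set()
    new_out = gen[n] | (IN[n] - kill[n])        # distributive transfer function
    if new_out != OUT[n]:
        OUT[n] = new_out
        work.extend(cfg[n])

print(OUT["b4"])   # definitions reaching the exit of b4: d1, d2, d3
```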

    2.
    Trial and error     
A pac-learning algorithm is d-space bounded if it stores at most d examples from the sample at any time. We characterize the d-space learnable concept classes. For this purpose we introduce the compression parameter of a concept class b and design our Trial and Error Learning Algorithm. We show: b is d-space learnable if and only if the compression parameter of b is at most d. Unlike previous approaches, e.g. that of Floyd, who presents consistent space-bounded learning algorithms but must restrict herself to very special concept classes, our algorithm does not produce a hypothesis consistent with the whole sample. On the other hand, our algorithm needs large samples; the compression parameter appears as an exponent in the sample size. We present several examples of concept classes that are space-bounded learnable in polynomial time:
  • all intersection-closed concept classes with finite VC dimension,
  • convex n-gons in ℝ^2,
  • halfspaces in ℝ^n,
  • unions of triangles in ℝ^2.
  We further relate the compression parameter to the VC dimension and discuss variants of this parameter.
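To make the idea of a d-space-bounded learner concrete, here is a toy Python sketch for one of the intersection-closed classes above, closed intervals on the line: only the two extreme positive examples are ever stored (d = 2), and the hypothesis is the smallest consistent interval. The data and the class are illustrative assumptions; this is not the paper's Trial and Error Learning Algorithm.

```python
# Toy d-space-bounded learner for closed intervals on the real line (d = 2):
# only the leftmost and rightmost positive examples are ever stored.

def learn_interval(sample):
    """sample: iterable of (x, label) pairs; returns the hypothesis (lo, hi)."""
    lo, hi = None, None          # the at-most-two stored examples
    for x, label in sample:
        if label:                # positive example: possibly widen the interval
            lo = x if lo is None else min(lo, x)
            hi = x if hi is None else max(hi, x)
        # negative examples are discarded immediately, so no extra storage is used
    return (lo, hi)

# Hypothetical target interval [2, 5]; the learner recovers its extremes.
data = [(1, False), (3, True), (5, True), (7, False), (2, True)]
print(learn_interval(data))      # (2, 5)
```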

    3.
Stable semantics for disjunctive programs
We introduce the stable model semantics for disjunctive logic programs and deductive databases, which generalizes the stable model semantics defined earlier for normal (i.e., non-disjunctive) programs. Depending on whether only total (2-valued) or all partial (3-valued) models are used, we obtain the disjunctive stable semantics or the partial disjunctive stable semantics, respectively. The proposed semantics are shown to have the following properties:
  • For normal programs, the disjunctive (respectively, partial disjunctive) stable semantics coincides with the stable (respectively, partial stable) semantics.
  • For normal programs, the partial disjunctive stable semantics also coincides with the well-founded semantics.
  • For locally stratified disjunctive programs, both (total and partial) disjunctive stable semantics coincide with the perfect model semantics.
  • The partial disjunctive stable semantics can be generalized to the class of all disjunctive logic programs.
  • Both (total and partial) disjunctive stable semantics can be naturally extended to a broader class of disjunctive programs that permit the use of classical negation.
  • After translation of the program P into a suitable autoepistemic theory \( \hat P \), the disjunctive (respectively, partial disjunctive) stable semantics of P coincides with the autoepistemic (respectively, 3-valued autoepistemic) semantics of \( \hat P \).
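As background for the semantics discussed above, the (total, 2-valued) stable models of a normal program can be checked naively via the Gelfond-Lifschitz reduct. The propositional sketch below is only an illustration; the disjunctive and partial (3-valued) generalizations introduced in the paper are not reproduced.

```python
# Naive stable-model check for a propositional normal program via the
# Gelfond-Lifschitz reduct (illustrative; disjunctive / 3-valued cases omitted).

# A rule is (head, positive_body, negative_body); the program is a list of rules.
program = [
    ("p", [], ["q"]),    # p :- not q.
    ("q", [], ["p"]),    # q :- not p.
]

def least_model(positive_rules):
    """Minimal model of a negation-free program (simple fixpoint iteration)."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, body in positive_rules:
            if set(body) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(program, candidate):
    # Reduct: drop rules whose negative body intersects the candidate,
    # then delete the remaining negative literals.
    reduct = [(h, pos) for (h, pos, neg) in program
              if not (set(neg) & candidate)]
    return least_model(reduct) == candidate

print(is_stable(program, {"p"}))        # True: {p} is a stable model
print(is_stable(program, {"p", "q"}))   # False
```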

    4.
Every Boolean function may be represented as a real polynomial. In this paper, we characterize the degree of this polynomial in terms of certain combinatorial properties of the Boolean function. Our first result is a tight lower bound of Ω(log n) on the degree needed to represent any Boolean function that depends on n variables. Our second result states that for every Boolean function f, the following measures are all polynomially related:
  • The decision tree complexity of f.
  • The degree of the polynomial representing f.
  • The smallest degree of a polynomial approximating f in the L_∞ norm.
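The real polynomial in question is the unique multilinear polynomial agreeing with f on {0,1}^n; its coefficients can be recovered by Möbius inversion over subsets. The brute-force Python sketch below computes them and reports the degree for small n; it illustrates the object being studied, not the paper's combinatorial characterization.

```python
# Compute the unique multilinear real polynomial representing a Boolean
# function f: {0,1}^n -> {0,1}, and report its degree (brute force, small n).

from itertools import product

def multilinear_coefficients(f, n):
    """Moebius inversion: coeff(S) = sum over T subset of S of (-1)^{|S|-|T|} f(1_T)."""
    coeffs = {}
    for S in product([0, 1], repeat=n):
        s_idx = [i for i in range(n) if S[i]]
        total = 0
        for T in product([0, 1], repeat=len(s_idx)):
            point = [0] * n
            for bit, i in zip(T, s_idx):
                point[i] = bit
            total += (-1) ** (len(s_idx) - sum(T)) * f(tuple(point))
        coeffs[tuple(s_idx)] = total
    return coeffs

def degree(f, n):
    return max((len(S) for S, c in multilinear_coefficients(f, n).items() if c != 0),
               default=0)

# Example: OR on 3 variables has degree 3 (the coefficient of x1*x2*x3 is +1).
OR = lambda x: int(any(x))
print(degree(OR, 3))   # 3
```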

    5.
A sequence of natural numbers is said to have level k, for some natural number k, if it can be computed by a deterministic pushdown automaton of level k (Fratani and Sénizergues in Ann. Pure Appl. Log. 141:363–411, 2006). We show here that the sequences of level 2 are exactly the rational formal power series over one indeterminate. More generally, we study mappings from words to words and show that the following classes coincide:
  • the mappings which are computable by deterministic pushdown automata of level 2
  • the mappings which are solution of a system of catenative recurrence equations
  • the mappings which are definable as a Lindenmayer system of type HDT0L.
  We illustrate the usefulness of this characterization by proving three statements about formal power series, rational sets of homomorphisms, and equations in words.
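A miniature example of a catenative recurrence system is the family of Fibonacci words, whose length sequence satisfies a linear recurrence and is therefore a rational formal power series in one indeterminate. The sketch below is only meant to illustrate the kind of objects related by the characterization; it is not taken from the paper.

```python
# A catenative recurrence system in miniature: the Fibonacci words
# w_0 = "b", w_1 = "a", w_{n+1} = w_n + w_{n-1}.  Their lengths form a
# sequence satisfying a linear recurrence, i.e. a rational power series
# in one indeterminate (illustrative of the objects in the abstract).

def fibonacci_words(n):
    w_prev, w_cur = "b", "a"
    words = [w_prev, w_cur]
    for _ in range(n - 1):
        w_prev, w_cur = w_cur, w_cur + w_prev
        words.append(w_cur)
    return words

ws = fibonacci_words(6)
print(ws[:5])                   # ['b', 'a', 'ab', 'aba', 'abaab']
print([len(w) for w in ws])     # 1, 1, 2, 3, 5, 8, 13: a rational sequence
```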

    6.
    We present three new approximation algorithms with improved constant ratios for selecting n points in n disks such that the minimum pairwise distance among the points is maximized.
1. A very simple O(n log n)-time algorithm with ratio 0.511 for disjoint unit disks.
    2. An LP-based algorithm with ratio 0.707 for disjoint disks of arbitrary radii that uses a linear number of variables and constraints, and runs in polynomial time.
3. A hybrid algorithm with ratio either 0.4487 or 0.4674 for (not necessarily disjoint) unit disks that uses an algorithm of Cabello in combination with either the simple O(n log n)-time algorithm or the LP-based algorithm.
    The LP algorithm can be extended to disjoint balls of arbitrary radii in ℝ^d, for any (fixed) dimension d, while preserving the features of the planar algorithm. The algorithm introduces a novel technique which combines linear programming and projections for approximating Euclidean distances. The previous best approximation ratio for dispersion in disjoint disks, even when all disks have the same radius, was 1/2. Our results give a positive answer to an open question raised by Cabello, who asked whether the ratio 1/2 could be improved.
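To make the objective concrete: a feasible solution picks one point per disk and is scored by its minimum pairwise distance. The sketch below evaluates that score for the naive choice of disk centers on an assumed toy instance; it is not one of the approximation algorithms above.

```python
# Dispersion objective made concrete: choose one point per disk and score the
# selection by its minimum pairwise distance.  The baseline below simply picks
# every disk's center; it is NOT one of the paper's approximation algorithms,
# only an illustration of what is being maximized.

from itertools import combinations
from math import dist

disks = [((0.0, 0.0), 1.0), ((3.0, 0.0), 1.0), ((0.0, 3.0), 1.0)]  # (center, radius)

def dispersion(points):
    return min(dist(p, q) for p, q in combinations(points, 2))

baseline = [c for c, _ in disks]        # one point per disk: its center
print(dispersion(baseline))             # 3.0 for this toy instance
```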

    7.
We define a class of n-ary relations on strings called the regular prefix relations, and give four alternative characterizations of this class:
    1. the relations recognized by a new type of automaton, the prefix automata,
    2. the relations recognized by tree automata specialized to relations on strings,
3. the relations between strings definable in the second-order theory of k successors,
    4. the smallest class containing the regular sets and the prefix relation, and closed under the Boolean operations, Cartesian product, projection, explicit transformation, and concatenation with Cartesian products of regular sets.
We give concrete examples of regular prefix relations and a pumping argument for prefix automata. An application of these results to the study of inductive inference of regular sets is described.
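The prefix relation itself is the simplest member of the class: a two-tape recognizer can scan both strings in lockstep and accept when the first tape is exhausted. The sketch below simulates that check; the paper's prefix automata handle general n-ary relations and are not reproduced here.

```python
# The prefix relation, the simplest regular prefix relation: a two-tape
# recognizer scans both strings in lockstep and accepts once the first tape
# is exhausted (illustrative only; the paper's prefix automata are n-ary).

def is_prefix(x, y):
    i = 0
    while i < len(x):
        if i >= len(y) or x[i] != y[i]:   # mismatch, or the first tape is longer
            return False
        i += 1
    return True                            # first tape exhausted: accept

print(is_prefix("ab", "abba"))   # True
print(is_prefix("abb", "aba"))   # False
```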

    8.
    If you are familiar with Prolog but not with Parlog then this tutorial is aimed at you. In what follows I attempt to:

  • explain the basics of Parlog
  • demonstrate that Parlog programs can be powerful and elegant
  • discuss the relationship of Parlog to Prolog, and
  • identify some resources which will take you further.
  These are what I call ‘four steps to Parlog’.


    9.
We present a uniform approach to problems involving lines in 3-space. This approach is based on mapping lines in ℝ^3 into points and hyperplanes in five-dimensional projective space (Plücker space). We obtain new results on the following problems:
    1. Preprocess n triangles so as to answer efficiently the query: “Given a ray, which is the first triangle hit?” (ray-shooting problem). We discuss the ray-shooting problem for both disjoint and nondisjoint triangles.
    2. Construct the intersection of two nonconvex polyhedra in an output-sensitive way, with a subquadratic overhead term.
    3. Construct the arrangement of n intersecting triangles in 3-space in an output-sensitive way, with a subquadratic overhead term.
    4. Efficiently detect the first face hit by any ray in a set of axis-oriented polyhedra.
    5. Preprocess n lines (segments) so as to answer efficiently the query “Given two lines, is it possible to move one into the other without crossing any of the initial lines (segments)?” (isotopy problem). If the movement is possible, produce an explicit representation of it.
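The Plücker map referred to above sends a line through two points of ℝ^3 to a 6-vector (direction and moment); the permuted inner product of two such vectors vanishes exactly when the lines are coplanar. The sketch below shows the map and the sign test; the query structures built on top of it in Plücker space are not reproduced.

```python
# Pluecker coordinates of the line through points p and q in R^3, and the
# permuted inner product ("side" form) of two lines: it is zero exactly when
# the lines are coplanar (intersecting or parallel), nonzero when they are skew.

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def pluecker(p, q):
    direction = tuple(qi - pi for pi, qi in zip(p, q))
    moment = cross(p, q)
    return direction + moment            # a 6-tuple (direction, moment)

def side(l1, l2):
    d1, m1 = l1[:3], l1[3:]
    d2, m2 = l2[:3], l2[3:]
    return sum(a*b for a, b in zip(d1, m2)) + sum(a*b for a, b in zip(d2, m1))

x_axis = pluecker((0, 0, 0), (1, 0, 0))
skew   = pluecker((0, 0, 1), (0, 1, 1))      # parallel to the y-axis, lifted up
meet   = pluecker((0, 0, 0), (0, 1, 0))      # the y-axis, meets the x-axis
print(side(x_axis, skew))   # -1: skew lines
print(side(x_axis, meet))   #  0: coplanar (they intersect at the origin)
```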

    10.
    We settle all relativized questions of the relationships between the following five propositions:
    • P = NP.
    • P = UP.
    • P = NP ∩ coNP.
    • All disjoint pairs of NP sets are P-separable.
    • All disjoint pairs of coNP sets are P-separable.
We make the first widespread use of variations of generic oracles to achieve the necessary relativized worlds.

    11.
    12.
    In this paper we give efficient parallel algorithms for a number of problems from computational geometry by using versions of parallel plane sweeping. We illustrate our approach with a number of applications, which include:
  • General hidden-surface elimination (even if the overlap relation contains cycles).
  • CSG boundary evaluation.
  • Computing the contour of a collection of rectangles.
  • Hidden-surface elimination for rectangles.
  There are interesting subproblems that we solve as a part of each parallelization. For example, we give an optimal parallel method for building a data structure for line-stabbing queries (which, incidentally, improves the sequential complexity of this problem). Our algorithms are for the CREW PRAM, unless otherwise noted.
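As a one-dimensional analogue of the stabbing structures mentioned above: the number of intervals containing a query point can be answered from two sorted endpoint arrays. The sketch below is sequential and only counts (rather than reports) the stabbed intervals; it is not the paper's optimal parallel line-stabbing structure.

```python
# Toy stabbing-count structure for intervals on a line: the number of intervals
# containing x equals (#left endpoints <= x) - (#right endpoints < x).
# Sequential and counting-only; not the paper's optimal parallel structure.

from bisect import bisect_left, bisect_right

class StabCounter:
    def __init__(self, intervals):
        self.lefts  = sorted(l for l, r in intervals)
        self.rights = sorted(r for l, r in intervals)

    def stab(self, x):
        return bisect_right(self.lefts, x) - bisect_left(self.rights, x)

sc = StabCounter([(1, 4), (2, 6), (5, 7)])
print(sc.stab(3))   # 2: intervals (1, 4) and (2, 6) contain x = 3
print(sc.stab(5))   # 2: intervals (2, 6) and (5, 7) contain x = 5
```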

    13.
The concept of a translation is fundamental to any theory of compiling. Formally, a translation is any set of pairs of words. Classes of finitely describable translations are considered in general, from the point of view of balloon automata [17, 18, 19]. A translation can be defined by a transducer, a device with an input tape and an output terminal. If, with input x, the string y appears at the output terminal, then (x, y) is in the translation defined by the transducer. One can also define a translation by a two-input-tape recognizer. If x and y are placed on the two tapes, the recognizer tells whether (x, y) is in the defined translation. One can define closed classes of transducers and recognizers by:
    1. restricting the way in which infinite storage may be used (pushdown structure, stack structure, etc.),
    2. allowing the finite control to be nondeterministic or deterministic,
    3. allowing one way or two way motion on the input tapes.
    We have some results on classes of translations which can be categorized roughly into three types.
    1. Translations defined by certain classes of transducers and recognizers are equivalent.
    2. Translations of a given class are sometimes closed under composition and decomposition with a finite memory translation (gsm mapping).
    3. A nondeterministically defined translation can be expressed as the composition of a finitely defined translation and a related deterministically defined translation in many cases.
In addition, if C is a class of translations, then one can write a compiler-compiler to render any translation T in C if and only if the following question is solvable: for any translation T in C and string x, does there exist a y such that (x, y) is in T? We shall show that, in general, the decidability of this question is equivalent to the decidability of one or more questions from automata theory, depending upon the type of devices defining the class C.
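The “finite memory translation (gsm mapping)” used in the closure results is a deterministic finite-state transducer that reads one input symbol at a time and emits an output string. The sketch below implements such a machine with an assumed transition table; the particular translation it computes is purely illustrative.

```python
# A generalized sequential machine (gsm) mapping: a deterministic finite-state
# transducer that reads input symbols and emits output strings.  The particular
# machine below (doubling every 'a' that follows a 'b') is an assumed example.

# delta[(state, symbol)] = (next_state, output_string)
delta = {
    ("q0", "a"): ("q0", "a"),
    ("q0", "b"): ("q1", "b"),
    ("q1", "a"): ("q0", "aa"),
    ("q1", "b"): ("q1", "b"),
}

def gsm(word, start="q0"):
    state, out = start, []
    for ch in word:
        state, emitted = delta[(state, ch)]
        out.append(emitted)
    return "".join(out)

print(gsm("abab"))   # 'abaab': each 'a' that follows a 'b' is doubled
```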

    14.
We consider conditionals of the form A → B, where A depends on the future and B on the present and past. We examine models for such conditionals arising in Talmudic legal cases. We call such conditionals contrary-to-time conditionals. Three main aspects will be investigated:
    1. Inverse causality from future to past, where a future condition can influence a legal event in the past (this is a man-made causality).
    2. Comparison with similar features in modern law.
    3. New types of temporal logics arising from modelling the Talmudic examples.
We shall see that we need a new temporal logic, which we call Talmudic temporal logic with linear open advancing future and parallel changing past, based on two parameters for time.

    15.
We sketch a method for deduction-oriented software and system development. The method incorporates formal machine-supported specification and verification as activities in software and systems development. We describe experiences in applying this method, gained by using LP, the Larch proof assistant, as a tool for a number of small and medium-size case studies in the formal development of software and systems. LP is used for the verification of the development steps. These case studies include
  • quicksort
  • the majority vote problem
  • code generation by a compiler and its correctness
  • an interactive queue and its refinement into a network.
  The developments range over levels of requirement specifications, designs, and abstract implementations. The main issues are questions of a development method and how to make good use of a formal tool like LP in a goal-directed way within the development. We further discuss the value of advanced specification techniques, most of which are deliberately not supported by LP and its notation, and their significance in development. Furthermore, we discuss issues of enhancement of a support system like LP, and the value and practicability of using formal techniques such as specification and verification in the development process in practice.

    16.
A. Bertoni, G. Mauri, M. Torelli. Calcolo, 1980, 17(2): 163–174
This paper is intended to show that an algebraic approach can give useful suggestions for designing efficient algorithms that solve combinatorial problems. The problems we discuss in the paper are:
    1. Counting strings of given length generated by a regular grammar. For this problem, we give an exact algorithm whose complexity is O(log n) (with respect to the number of executed operations), and an approximate algorithm which, however, still has the same order of complexity;
    2. Counting trees recognized by a tree automaton. For this problem, we give an exact algorithm of complexity O(n) and an approximate one of complexity O(log n). For this approximate algorithm the relative error is shown to be O(1/n).
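One standard way to count the length-n strings of a regular language with O(log n) arithmetic operations is to raise the automaton's transfer matrix to the n-th power by repeated squaring. The sketch below does this for an assumed two-state automaton (words over {a, b} with no two consecutive b's); it shows the algebraic idea, not necessarily the paper's exact algorithm.

```python
# Counting the words of length n accepted by a DFA with O(log n) matrix
# multiplications: raise the transfer matrix to the n-th power by repeated
# squaring.  The two-state automaton below (words over {a, b} with no two
# consecutive b's) is an assumed example, not the paper's construction.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_pow(M, n):
    size = len(M)
    R = [[int(i == j) for j in range(size)] for i in range(size)]   # identity
    while n:
        if n & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        n >>= 1
    return R

# Transfer matrix: T[i][j] = number of symbols leading from state i to state j.
# State 0: last symbol was not 'b'; state 1: last symbol was 'b'.
T = [[1, 1],
     [1, 0]]

def count_words(n, start=0, accepting=(0, 1)):
    Tn = mat_pow(T, n)
    return sum(Tn[start][j] for j in accepting)

print([count_words(n) for n in range(7)])   # 1, 2, 3, 5, 8, 13, 21 (Fibonacci)
```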

    17.
    RaumComputer     
The RoomComputer is an embedded system and as such offers unprecedented opportunities for managing buildings. Several RoomComputers can be networked via the intranet/Internet, which makes it possible to monitor, control, and manage rooms and buildings on a unified, worldwide-accessible platform, irrespective of any particular local technology. It can be easily installed in any building and gives access to a full set of services. It implements a distributed system which provides secure and controlled access to services such as
    1. control of light, heating, ventilation, air and climate
    2. communication facilities like unified messaging, telephone, fax, etc.
    3. reservation of rooms and required resources
    4. localization of persons and equipment within rooms and buildings
    5. entrance control (i.e. locking/unlocking doors)
    6. organization of maintenance and housekeeping, and
    7. charging and billing.

    18.
    We report progress on the NL versus UL problem.
  • We show that counting the number of s-t paths in graphs where the number of s-v paths for any v is bounded by a polynomial can be done in FUL, the unambiguous log-space function class. Several new upper bounds follow from this, including ReachFewL ⊆ UL and LFew ⊆ UL^FewL.
  • We investigate the complexity of min-uniqueness—a central notion in studying the NL versus UL problem. In this regard we revisit the class OptL[log n] and introduce UOptL[log n], an unambiguous version of OptL[log n]. We investigate the relation between UOptL[log n] and other existing complexity classes.
  • We consider the unambiguous hierarchies over UL and UOptL[log n]. We show that the hierarchy over UOptL[log n] collapses. This implies that ULH ⊆ L^promiseUL, thus collapsing the UL hierarchy.
  • We show that the reachability problem over graphs embedded on 3 pages is complete for NL. This contrasts with the reachability problem over graphs embedded on 2 pages, which is log-space equivalent to the reachability problem in planar graphs and hence is in UL.

    19.
We develop constant-time algorithms to compute the Hough transform on a processor array with a reconfigurable bus system (abbreviated to PARBS). The PARBS is a computation model which consists of a processor array and a reconfigurable bus system. It is a very powerful computation model in that many problems can be solved efficiently. In this paper, we introduce the concept of iterative-PARBS, which is similar to the FOR-loop construct in sequential programming languages. The iterative-PARBS is a building block through which the processing data can be routed several times. We can think of it as a “hardware subroutine”. Based on this scheme, we are able to explore constant-time Hough transform algorithms on PARBS. The following new results are derived in this study:
    1. The sum of n bits can be computed in O(1) time on a PARBS with O(n^{1+ε}) processors for any fixed ε > 0.
    2. The weights of each simple path of an n×n image can be computed in O(1) time on a 3-D PARBS with O(n^{2+ε}) processors for any fixed ε > 0.
    3. The p-angle Hough transform of an n×n image can be computed in O(1) time on a PARBS with O(p·n^{2+ε}) processors for any fixed ε > 0, with p copies of the image pretiled.
    4. The p-angle Hough transform of an n×n image can be computed in O(1) time on a PARBS with O(p·n^3) processors.
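The computation being parallelized above is the standard Hough voting scheme: every foreground pixel of an n×n image casts, for each of p sampled angles θ, a vote for ρ = x·cos θ + y·sin θ. The sequential Python sketch below shows that accumulation on an assumed toy image; the PARBS-specific constant-time routing is not reproduced.

```python
# The p-angle Hough transform of an n*n binary image: every foreground pixel
# votes, for each sampled angle, for the line parameter rho = x*cos(theta)
# + y*sin(theta).  Sequential sketch only; no PARBS routing is shown.

import math

def hough(image, p):
    n = len(image)
    max_rho = int(math.hypot(n, n)) + 1
    # accumulator[angle_index][rho + max_rho] = number of votes
    acc = [[0] * (2 * max_rho) for _ in range(p)]
    for y in range(n):
        for x in range(n):
            if image[y][x]:                       # foreground pixel
                for k in range(p):
                    theta = math.pi * k / p
                    rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
                    acc[k][rho + max_rho] += 1
    return acc

# Tiny example: a diagonal line in a 4x4 image.
img = [[1 if x == y else 0 for x in range(4)] for y in range(4)]
acc = hough(img, 8)
peak = max((v, k, r) for k, row in enumerate(acc) for r, v in enumerate(row))
print(peak[0])   # 4 votes: all four pixels agree on one (theta, rho) line
```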

    20.
    The need to design and verify architectures to support parallel implementations of declarative languages has led to the development of a novel language, called Paragon, which bridges the gap between the top-level specification of the abstract machine, and its detailed implementation in terms of parallel processes and message passing. The central technical contributions in this paper are:
  • The introduction and specification of Paragon, a parallel object-oriented language based on graph rewriting and message passing principles.
  • An illustration of the approach at work in the design of a parallel supercombinator graph reduction machine.
  • A sketch proof that this design meets the requirements statement.
