Similar Documents
20 similar documents found (search time: 62 ms)
1.
A stack-counter acceptor is a stack acceptor in which the storage alphabet consists of just one letter. The present paper discusses multi-stack-counter acceptors operating in quasi-real time, i.e., acceptors in which each storage tape is a stack counter and in which there are only a bounded number of consecutive ε-moves. For each positive integer k, the family of languages accepted by k-stack-counter acceptors and the family accepted by k-counter acceptors are considered. Each is a principal AFL closed under reversal but not under ε-free substitution or under intersection. Proper containments among these families, witnessed by specific languages, are also established, as are incomparability results between the stack-counter and counter hierarchies. It is shown that a quasi-real-time k-stack-counter acceptor is equivalent to one operating in nondeterministic real time. Lastly, it is shown that acceptance by final state of a k-stack-counter acceptor is equivalent to acceptance by empty tape and final state. Also formerly with System Development Corporation, Santa Monica, California. Research sponsored in part by the Air Force Cambridge Research Laboratories, Office of Aerospace Research, USAF, under Contract F19628-70-C-0023; by the Air Force Office of Scientific Research, Office of Aerospace Research, USAF, under AFOSR No. F44620-70-C-0013; and by NSF Grant No. GJ454.

2.
If a full AFL is not closed under substitution, then the family obtained by substituting its members into itself is not substitution closed and hence generates an infinite hierarchy of full AFLs. If two full AFLs are incomparable, then the least full AFL containing both is not substitution closed. In particular, the substitution closure of any full AFL properly contained in the context-free languages is itself properly contained in the context-free languages. If any set of languages generates the context-free languages, one of its members must do so. The substitution closure of the one-way stack languages is properly contained in the nested stack languages. For each n, there is a class of full context-free AFLs whose partial ordering under inclusion is isomorphic to the natural partial ordering on n-tuples of positive integers. This paper was completed while the author was in the Division of Engineering and Applied Physics of Harvard University. Research sponsored in part by the Air Force Cambridge Research Laboratories, Office of Aerospace Research, USAF, under contracts F-1962870C0023 and F-1962868C0029, and by the Air Force Office of Scientific Research, Office of Aerospace Research, USAF, under AFOSR Grant No. AF-AFOSR-1203-67A and the Division of Engineering and Applied Physics of Harvard University.

3.
Common Lisp [25], [26] includes a dynamic datatype system of moderate complexity, as well as predicates for checking the types of language objects. Additionally, an interesting predicate of two type specifiers, SUBTYPEP, is included in the language. This subtypep predicate provides a mechanism with which to query the Common Lisp type system regarding containment relations among the various built-in and user-defined types. While subtypep is rarely needed by an applications programmer, the efficiency of a Common Lisp implementation can depend critically upon the quality of its subtypep predicate: the run-time system typically calls upon subtypep to decide what sort of representations to use when making arrays; the compiler calls upon subtypep to interpret user declarations, on which efficient data representation and code generation decisions are based. As might be expected given the complexity of the Common Lisp type system, there may be type containment questions which cannot be decided. In these cases subtypep is expected to return "can't determine," in order to avoid giving an incorrect answer. Unfortunately, most Common Lisp implementations have abused this license by answering "can't determine" in all but the most trivial cases. In particular, most Common Lisp implementations of SUBTYPEP fail on the basic axioms of the Common Lisp type system itself [25], [26]. This situation is particularly embarrassing for Lisp, the premier symbol-processing language, in which the implementation of complex symbolic logical operations should be relatively easy. Since subtypep was presumably included in Common Lisp to answer the hard cases of type containment, this lazy evaluation limits the usefulness of an important language feature.
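The three-valued behavior described above can be sketched with a toy subtype predicate. The sketch below is in Python rather than Common Lisp, and its specifier grammar (keyword strings, a made-up parent chain, ("integer-range", lo, hi) and ("satisfies", pred) tuples) is an illustrative assumption, not the Common Lisp type system; it only shows the shape of an honest "can't determine" answer.

```python
# Toy three-valued subtype test, loosely modeled on Common Lisp's SUBTYPEP.
# Returns (subtype-p, certain-p): (True, True), (False, True), or
# (None, False) for "can't determine".  The specifier grammar here is a
# made-up illustration; many cases a real implementation handles are omitted.

PARENTS = {"fixnum": "integer", "integer": "rational",
           "rational": "real", "real": "number", "number": "t"}

def subtypep(t1, t2):
    if t1 == t2 or t2 == "t" or t1 == "nil":
        return (True, True)
    # Range specifiers like ("integer-range", lo, hi), mimicking (INTEGER LO HI).
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] == "integer-range"):
        return (t2[1] <= t1[1] and t1[2] <= t2[2], True)
    # Atomic specifiers: walk the (toy) parent chain.
    walk = t1
    while walk in PARENTS:
        walk = PARENTS[walk]
        if walk == t2:
            return (True, True)
    # A SATISFIES-style predicate type is undecidable in general:
    # answer "can't determine" rather than guessing.
    if isinstance(t2, tuple) and t2[0] == "satisfies":
        return (None, False)
    return (False, True)

print(subtypep(("integer-range", 0, 10), ("integer-range", 0, 100)))  # (True, True)
print(subtypep("fixnum", "number"))                                   # (True, True)
print(subtypep("real", ("satisfies", "primep")))                      # (None, False)
```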

4.
In this paper we propose a general framework for compiling, scheduling, and executing parallel programs on parallel computers. We discuss important aspects of program partitioning, scheduling, and execution, and consider possible realistic alternatives for each issue. Subsequently we propose a possible implementation of an auto-scheduling compiler and give simple examples to illustrate the principles. Our approach to the entire problem is to utilize program information available to the compiler while, at the same time, allowing for run-time corrections and flexibility. This work was supported in part by the National Science Foundation under Grant NSF MIP-8410110, the U.S. Department of Energy under Grant DE-FG02-85ER25001, an IBM donation, and a grant from AT&T.

5.
A critique of DIN Kernel Lisp is presented which argues for greater emphasis on implementation efficiency and language cleanliness, and a greater emphasis on parallel and persistent Lisp environments. Specific recommendations include: standardizing the S-expression rather than the character form of a program; using lexical scoping and shadowing to enhance subsystem modularity; relying on macros and compiler-macros for more pleasant syntax and greater modularity; requiring immutable/functional bindings, strings, vectors, and lists; using object-oriented capabilities to build basic capabilities (e.g., generic arithmetic, streams, and pathnames); relying on defstruct instead of defclass; and standardizing on defmethod for all function definitions. A virtual/synthetic class mechanism is presented to solve certain technical problems analogous to those solved by the virtual function mechanism of C++. Finally, we recommend the inclusion of futures as DKLisp's fundamental mechanism for the introduction of multiple parallel threads of computation. The opinions expressed here are those of the author and not necessarily those of Nimble Computer Corporation.

6.
Assume we are given an n × n binary image containing horizontally convex features; i.e., for each feature, the pixels in each of its rows form an interval on that row. In this paper we consider the problem of assigning topological numbers to such features, i.e., assigning a number to every feature f so that all features to the left of f in the image have a smaller number assigned to them. This problem arises in solutions to the stereo matching problem. We present a parallel algorithm to solve the topological numbering problem in O(n) time on an n × n mesh of processors. The key idea of our solution is to create a tree from which the topological numbers can be obtained even though the tree does not uniquely represent the "to the left of" relationship of the features. The work of M. J. Atallah was supported by the Office of Naval Research under Grants N00014-84-K-0502 and N00014-86-K-0689, and the National Science Foundation under Grant DCR-8451393, with matching funds from AT&T. Part of this work was done while he was a Visiting Scientist at the Center for Advanced Architectures project of the Research Institute for Advanced Computer Science, NASA Ames Research Center, Moffett Field, CA 94035, USA. S. E. Hambrusch's work was supported by the Office of Naval Research under Contracts N00014-84-K-0502 and N00014-86-K-0689, and by the National Science Foundation under Grant MIP-87-15652. Part of this work was done while she was visiting the International Computer Science Institute, Berkeley, CA 94704, USA. The work of L. E. TeWinkel was supported by the Office of Naval Research under Contract N00014-86-K-0689.
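A sequential sketch may help fix the problem statement; it is not the paper's O(n)-time mesh algorithm. The representation (a feature as a dict mapping row to its column interval) and the precise "to the left of" test (interval ends strictly before the other begins on some shared row, with the relation assumed acyclic, as in the stereo-matching setting) are assumptions made for illustration.

```python
# Sequential sketch of topological numbering for horizontally convex features.
# NOT the paper's O(n)-time mesh algorithm; a plain Python illustration only.
from collections import defaultdict, deque

def topological_numbers(features):
    """features: name -> {row: (first_col, last_col)}.  Returns name -> number
    such that every feature to the left of f gets a smaller number than f."""
    left_of = defaultdict(set)             # edge a -> b means "a is left of b"
    indeg = {f: 0 for f in features}
    names = list(features)
    for a in names:
        for b in names:
            if a == b:
                continue
            rows = features[a].keys() & features[b].keys()
            if any(features[a][r][1] < features[b][r][0] for r in rows):
                left_of[a].add(b)
                indeg[b] += 1
    # Kahn's algorithm: number features so each left-neighbour gets a smaller number.
    queue = deque(f for f in names if indeg[f] == 0)
    number, result = 0, {}
    while queue:
        f = queue.popleft()
        result[f] = number
        number += 1
        for g in left_of[f]:
            indeg[g] -= 1
            if indeg[g] == 0:
                queue.append(g)
    return result

feats = {
    "A": {0: (0, 2), 1: (0, 3)},
    "B": {0: (4, 6)},
    "C": {1: (5, 7), 2: (1, 4)},
}
print(topological_numbers(feats))          # e.g. {'A': 0, 'B': 1, 'C': 2}
```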

7.
This article presents a convenient and efficient procedural interface that allows the definition and use of procedures with optional arguments and indefinite numbers of arguments without resorting to the use of a language-dependent data structure in which to store the arguments. This interface solves many of the problems inherent in the use of lists in Lisp and Scheme to store indefinite numbers of arguments. Simple recursion can be used to consume such arguments without the complexity problems that are caused by the use of the Lisp procedure apply on argument lists. A natural extension to the interface to support multiple return values is presented. This material is based on work supported by the National Science Foundation under grant number CCR-8803432 and by Sandia National Laboratories under contract number 06-06211. A preliminary version of this article was presented at the 1988 ACM Conference on Lisp and Functional Programming.
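A rough Python analogue of the calling convention is shown below; it is not the article's Scheme interface, and Python necessarily materializes the extra arguments as a tuple, which is exactly the kind of argument-list structure the article's interface avoids. It only illustrates optional arguments, an indefinite argument count consumed by simple recursion, and tuple-style multiple return values.

```python
# Python analogue only: optional argument with a default, an indefinite
# argument list consumed by simple recursion, and multiple return values.

def weighted_sum(scale=1, *rest):
    """Sum an indefinite number of arguments, scaled by an optional factor."""
    def go(args):                     # simple recursion over the extras
        if not args:
            return 0
        return scale * args[0] + go(args[1:])
    return go(rest)

def div_mod(a, b):
    return a // b, a % b              # "multiple return values" as a tuple

print(weighted_sum())                 # 0   -- no arguments at all
print(weighted_sum(2, 1, 2, 3))       # 12  -- scale=2, rest=(1, 2, 3)
q, r = div_mod(17, 5)
print(q, r)                           # 3 2
```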

8.
We consider the parallel time complexity of logic programs without function symbols, called logical query programs, or Datalog programs. We give a PRAM algorithm for computing the minimum model of a logical query program, and show that for programs with the polynomial fringe property, this algorithm runs in time that is logarithmic in the input size, assuming that concurrent writes are allowed if they are consistent. As a result, the linear and piecewise linear classes of logic programs are in NC. Then we examine several nonlinear classes in which the program has a single recursive rule that is an elementary chain. We show that certain nonlinear programs are related to GSM mappings of a balanced parentheses language, and that this relationship implies the polynomial fringe property; hence such programs are in NC. Finally, we describe an approach for demonstrating that certain logical query programs are log-space complete for P, and apply it to both elementary single-rule programs and nonelementary programs. Supported by NSF Grant IST-84-12791, a grant of IBM Corporation, and ONR contract N00014-85-C-0731.
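For readers unfamiliar with minimum models, the fixpoint semantics that the PRAM algorithm parallelizes can be sketched sequentially. The program below (transitive closure, a linear program) and its naive bottom-up evaluation are an illustration of what is computed, not of the paper's logarithmic-time algorithm.

```python
# Sequential sketch of minimum-model computation for a tiny linear Datalog
# program, evaluated bottom-up to a fixpoint:
#   path(X, Y) :- edge(X, Y).
#   path(X, Y) :- path(X, Z), edge(Z, Y).      (a linear recursive rule)

def minimum_model(edges):
    path = set(edges)                  # first rule: every edge is a path
    changed = True
    while changed:                     # apply the rules until nothing new is derived
        changed = False
        new_facts = {(x, w) for (x, z) in path for (z2, w) in edges if z == z2}
        if not new_facts <= path:
            path |= new_facts
            changed = True
    return path

edges = {("a", "b"), ("b", "c"), ("c", "d")}
print(sorted(minimum_model(edges)))
# [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
```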

9.
Let P and Q be two convex n-vertex polygons. We consider the problem of computing, in parallel, some functions of P and Q when P and Q are disjoint. The model of parallel computation we consider is the CREW-PRAM, i.e., the synchronous shared-memory model where concurrent reads are allowed but no two processors can simultaneously attempt to write in the same memory location (even if they are trying to write the same thing). We show that a CREW-PRAM having n^{1/k} processors can compute the following functions in O(k^{1+ε}) time: (i) the common tangents between P and Q, and (ii) the distance between P and Q (and hence a straight line separating them). The positive constant ε can be made arbitrarily close to zero. Even with a linear number of processors, it was not previously known how to achieve constant-time performance for computing these functions. The algorithm for problem (ii) is easily modified to detect the case of zero distance as well. This research was supported by the Office of Naval Research under Grants N00014-84-K-0502 and N00014-86-K-0689, and the National Science Foundation under Grant DCR-8451393, with matching funds from AT&T.
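For contrast with the constant-time parallel bound, the distance in problem (ii) can be computed by a brute-force sequential sketch. The code assumes the polygons are disjoint (so no two edges intersect and the edge-to-edge distance is attained at an endpoint of one of the edges); the example polygons are made up.

```python
# Brute-force sequential distance between two DISJOINT convex polygons.
import math

def point_segment_dist(p, a, b):
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))                  # clamp to the segment
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def segment_dist(a, b, c, d):
    # Correct for non-intersecting segments: check the four endpoint cases.
    return min(point_segment_dist(a, c, d), point_segment_dist(b, c, d),
               point_segment_dist(c, a, b), point_segment_dist(d, a, b))

def polygon_dist(P, Q):
    edges = lambda V: [(V[i], V[(i + 1) % len(V)]) for i in range(len(V))]
    return min(segment_dist(a, b, c, d)
               for a, b in edges(P) for c, d in edges(Q))

P = [(0, 0), (2, 0), (2, 2), (0, 2)]           # a square
Q = [(5, 1), (7, 1), (6, 3)]                   # a triangle to its right
print(polygon_dist(P, Q))                      # 3.0
```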

10.
Proportionate progress: A notion of fairness in resource allocation   (cited 27 times; 0 self-citations, 27 by others)
Given a set of n tasks and m resources, where each task x has a rational weight x.w = x.e/x.p with 0 < x.w ≤ 1, a periodic schedule is one that allocates a resource to a task x for exactly x.e time units in each interval [x.p·k, x.p·(k+1)) for all k ∈ ℕ. We define a notion of proportionate progress, called P-fairness, and use it to design an efficient algorithm which solves the periodic scheduling problem. This research was supported by NSF Research Initiation Award CCR-9111591, and the Texas Advanced Research Program under Grant No. 91-003658-480.
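The proportionate-progress condition can be stated concretely: a schedule is P-fair when, at every time t, each task's accumulated allocation stays strictly within one time unit of its fluid share x.w·t. The checker below illustrates that condition on a made-up two-task, one-resource instance; it verifies P-fairness rather than implementing the paper's scheduling algorithm.

```python
# Check the P-fairness condition: at every integer time t, each task's
# allocation so far must stay strictly within one unit of its fluid share w*t.
from fractions import Fraction

def is_pfair(weights, schedule):
    """weights: task -> Fraction weight (e/p); schedule: list of per-slot
    allocations, each a set of tasks that got a resource in that slot."""
    allocated = {x: 0 for x in weights}
    for t, slot in enumerate(schedule, start=1):
        for x in slot:
            allocated[x] += 1
        for x, w in weights.items():
            lag = w * t - allocated[x]
            if not (-1 < lag < 1):            # proportionate progress violated
                return False
    return True

weights = {"a": Fraction(1, 2), "b": Fraction(1, 3)}      # one shared resource
good = [{"a"}, {"b"}, {"a"}, {"b"}, {"a"}, set()]          # 6 time slots
bad  = [{"b"}, {"b"}, {"a"}, {"a"}, {"a"}, set()]
print(is_pfair(weights, good))   # True
print(is_pfair(weights, bad))    # False (task "a" falls a full unit behind)
```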

11.
Over the past two decades, Scheme macros have evolved into a powerful API for the compiler front end. Like Lisp macros, their predecessors, Scheme macros expand source programs into a small core language; unlike Lisp systems, Scheme macro expanders preserve lexical scoping, and advanced Scheme macro systems handle other important properties such as source location. Using such macros, Scheme programmers now routinely develop the ultimate abstraction: embedded domain-specific programming languages. Unfortunately, a typical Scheme programming environment provides little support for macro development. This lack makes it difficult for programmers to debug their macros and for novices to study the behavior of macros. In response, we have developed a stepping debugger specialized to the concerns of macro expansion. This debugger presents the macro expansion process as a linear rewriting sequence of annotated terms; it graphically illustrates the binding structure of the program as expansion reveals it; and it adapts to the programmer’s level of abstraction, hiding details of syntactic forms that the programmer considers built-in.

12.
In this paper, we study two interprocedural program-analysis problems, interprocedural slicing and interprocedural dataflow analysis, and present the following results:
  • Interprocedural slicing is log-space complete for P.
  • The problem of obtaining “meet-over-all-valid-paths” solutions to interprocedural versions of distributive dataflow-analysis problems is P-hard.
  • Obtaining “meet-over-all-valid-paths” solutions to interprocedural versions of distributive dataflow-analysis problems that involve finite sets of dataflow facts (such as the classical “gen/kill” problems; a small sequential sketch follows this list) is log-space complete for P.
These results provide evidence that there do not exist fast (NC-class) parallel algorithms for interprocedural slicing and precise interprocedural dataflow analysis (unless P = NC). That is, it is unlikely that there are algorithms for interprocedural slicing and precise interprocedural dataflow analysis for which the number of processors is bounded by a polynomial in the size of the input, and whose running time is bounded by a polynomial in the logarithm of the size of the input. This suggests that there are limitations on the ability to use parallelism to overcome compiler bottlenecks due to expensive interprocedural-analysis computations.
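The sketch below shows what a "gen/kill" dataflow problem looks like, using the classical reaching-definitions instance solved by the usual iterative fixpoint. It is intraprocedural and sequential, chosen only to make the term concrete; the results above concern the interprocedural, meet-over-all-valid-paths versions. The CFG and definition names are made up.

```python
# A classical "gen/kill" dataflow instance (reaching definitions),
# solved by the standard iterative fixpoint over a small CFG.

def reaching_definitions(nodes, succ, gen, kill):
    """nodes: CFG node ids; succ[n]: set of successors; gen[n]/kill[n]: sets of
    definition names generated/killed at n.  Returns (IN, OUT) per node."""
    preds = {n: [p for p in nodes if n in succ[p]] for n in nodes}
    IN = {n: set() for n in nodes}
    OUT = {n: set() for n in nodes}
    changed = True
    while changed:
        changed = False
        for n in nodes:
            new_in = set().union(*(OUT[p] for p in preds[n]))
            new_out = gen[n] | (new_in - kill[n])       # gen/kill transfer function
            if new_in != IN[n] or new_out != OUT[n]:
                IN[n], OUT[n] = new_in, new_out
                changed = True
    return IN, OUT

nodes = [1, 2, 3, 4]
succ = {1: {2}, 2: {3, 4}, 3: {2}, 4: set()}
gen  = {1: {"d1"}, 2: set(), 3: {"d3"}, 4: set()}   # d1, d3 define the same variable
kill = {1: {"d3"}, 2: set(), 3: {"d1"}, 4: set()}
IN, OUT = reaching_definitions(nodes, succ, gen, kill)
print(IN[4] == {"d1", "d3"})    # True: both definitions can reach node 4
```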

13.
Summary. We argue that the important property of self-stabilization is, in principle, unstable across system classes. In particular, we first define a very broad notion of simulation. We then define what it means for a simulation to either preserve or force self-stabilization. Given these definitions, we then show that, for a variety of system classes, there is no simulation that preserves or forces self-stabilization. This work was supported in part by U.S. Office of Naval Research Grant No. N00014-86-K-0763 and National Science Foundation Grant No. CCR-8711579. Preliminary versions of this work have appeared in the Proceedings of the MCC Workshop on Self-Stabilizing Systems, MCC Technical Report No. STP-379-89, Austin, Texas, August 1989, and, under the title "System Simulation and the Sensitivity of Self-Stabilization," in the Proceedings of the 14th International Symposium on Mathematical Foundations of Computer Science, LNCS 379, pp. 249–258, Porabka-Kozubnik, Poland, August–September 1989.

14.
We present a collection of algorithms, all running in time O(n^2 log n · α(n)^{O(α(n)^{s-3})}) for some fixed integer s (where α(n) is the inverse Ackermann function), for constructing a skeleton representation of a suitably generalized Voronoi diagram for a ladder moving in a two-dimensional space bounded by polygonal barriers consisting of n line segments. This diagram, which is a two-dimensional subcomplex of the three-dimensional configuration space of the ladder, is introduced and analyzed in a companion paper by the present authors. The construction of the diagram described in this paper yields a motion-planning algorithm for the ladder which runs within the same time bound given above. Work on this paper has been supported in part by Office of Naval Research Grant N00014-82-K-0381, and by grants from the Digital Equipment Corporation, the Sloan Foundation, the System Development Foundation, the IBM Corporation, and by National Science Foundation CER Grant No. DCR-8320085. Work by the second author has also been supported in part by a grant from the US-Israeli Binational Science Foundation.

15.
Multilisp is a parallel programming language derived from the Scheme dialect of Lisp by the addition of the future construct. It has been implemented on Concert, a 32-processor shared-memory multiprocessor. A statistics-gathering feature of Concert Multilisp produces parallelism profiles showing the number of processors busy with computing or overhead, as a function of time. Experience gained using parallelism profiles and other measurement tools on several application programs has revealed three basic ways in which future generates concurrency. These ways are illustrated on two example programs: the Lisp mapping function mapcar and the partitioning routine from Quicksort. Experience with Multilisp programming exposes issues relating to side effects, error and exception handling, low-level operations for explicit manipulation of futures and tasks, and speculative computing, which are also discussed. The basic outlines of Multilisp are now fairly clear and have stood the test of being used for several applications, but further language design work is especially needed in the areas of speculative computing and exception handling. This research was supported in part by the Defense Advanced Research Projects Agency and was monitored by the Office of Naval Research under contract numbers N00014-83-K-0125 and N00014-84-K-0099.
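The mapcar example can be mimicked in Python with explicit futures; this is only an analogy, since Multilisp futures are created by a language construct and touched transparently, whereas the futures below are library objects that must be forced with .result(). The worker count and the stand-in workload are arbitrary.

```python
# A Python analogue of future-based concurrency (the abstract's mapcar example):
# each element's computation is wrapped in a future and forced when needed.
from concurrent.futures import ThreadPoolExecutor

def slow_square(x):
    total = 0
    for _ in range(100_000):          # stand-in for real work
        total += x * x
    return total // 100_000

def parallel_mapcar(f, xs, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(f, x) for x in xs]   # spawn a future per element
        return [fut.result() for fut in futures]    # force ("touch") each future

print(parallel_mapcar(slow_square, [1, 2, 3, 4, 5]))   # [1, 4, 9, 16, 25]
```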

16.
Summary. The following three results concerning tree automata are presented in this paper. (1) Rounds has presented the following open problem: for every recognizable set R, can we construct a deterministic finite-state transformation recognizing R? We show that this is not possible, in fact, even for a local set. However, the following is true: for every recognizable set R there is an inverse projection R′, effectively obtained, such that R′ is recognized by a deterministic finite-state transformation. (2) Martin and Vere, in their study of tree automata, leave open the question of whether Generalized Syntax Directed Transductions (GSDTs) are closed under Arden's transformation or Greibach's transformation, and conjecture that they are not. We prove that this conjecture is true. It is also shown that GSDTs are not closed under transformation to LR(k) grammars. (3) Peters and Ritchie have shown that if, in a grammar where the generative rules are context-free, there are recognition rules which are context-sensitive, the language recognized is still context-free. A tree-automata-oriented proof is given by Rounds. We show that a similar result holds also for right-linear grammars, i.e., if the generative rules are right linear, then using context-sensitive rules for recognition, one can still recognize only regular languages. Some other related results concerning context-sensitive extensions of subclasses of context-free languages are also presented. This work was partially supported by NSF Grant GJ27, U.S. Army Research Office, Durham (DA-31-124 ARO(D)-98), and NSF Grant GS-2509. At present on leave at The Institute for Advanced Study, Princeton, N.J.

17.
Most proof methods for reasoning about concurrent programs are based upon the interleaving semantics of concurrent computation: a concurrent program is executed in a stepwise fashion, with only one enabled action being executed at each step. Interleaving semantics, in effect, requires that a concurrent program be executed as a nondeterministic sequential program. This is clearly an abstraction of the way in which concurrent programs are actually executed. To ensure that this is a reasonable abstraction, interleaving semantics should only be used to reason about programs with simple actions; we call such programs atomic. In this paper, we formally characterise the class of atomic programs. We adopt the criterion that a program is atomic if it can be implemented in a wait-free, serialisable manner by a primitive program. A program is primitive if each of its actions has at most one occurrence of a shared bit, and each shared bit is read by at most one process and written by at most one process. It follows from our results that the traditionally accepted atomicity criterion, which allows each action to have at most one occurrence of a shared variable, can be relaxed, allowing programs to have more powerful actions. For example, according to our criterion, an action can read any finite number of shared variables, provided it writes no shared variable. Work supported, in part, at the University of Texas at Austin by Office of Naval Research Contract N00014-89-J-1913, and at the University of Maryland by an award from the University of Maryland General Research Board. Work supported, in part, by Office of Naval Research Contract N00014-89-J-1913.

18.
This research aims to fully integrate logic programming into the programming language Scheme. We use a minimalist approach, based on the observation that the fundamental aspects of logic programming, nondeterminism and unification, are separable both in concept and in implementation. We have found that only two new primitive functions and one new special form need to be added to Scheme to achieve this integration. Using these primitives, we can write programs in the style of Scheme, Icon, Prolog, or any mixture thereof. We have found that a style of programming that uses both logical and functional techniques can be more powerful than the use of either technique alone. Because Scheme has side effects and continuations, this research addresses different problems and choices than previous research [2, 16, 19] on merging functional and logical languages. An earlier version of this paper was published in the Proceedings of the 4th ACM Conference on Functional Programming Languages and Computer Architecture, as "Non-determinism and Unification in LogScheme," pp. 327–339. Supported in part by an AT&T Bell Laboratories Ph.D. Scholarship. Supported by Defense Advanced Research Projects Agency contract # N00014-87-K-0828.
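Of the two fundamentals the abstract identifies, unification is easy to sketch on its own. The Python code below uses an assumed term representation ("?"-prefixed strings as variables, other strings as constants, tuples as compound terms) and omits the occurs check; it illustrates the operation only and is not the paper's Scheme primitives.

```python
# A small unification sketch (no occurs check, kept minimal).

def walk(term, subst):
    """Follow variable bindings to a representative term."""
    while isinstance(term, str) and term.startswith("?") and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst=None):
    """Return an extended substitution unifying a and b, or None on failure."""
    subst = dict(subst or {})
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str) and a.startswith("?"):
        subst[a] = b
        return subst
    if isinstance(b, str) and b.startswith("?"):
        subst[b] = a
        return subst
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None                      # clash between distinct constants or shapes

print(unify(("father", "?x", "bob"), ("father", "alice", "?y")))
# {'?x': 'alice', '?y': 'bob'}
```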

19.
In a simply-typed, call-by-value (CBV) language with first-class continuations, the usual CBV fixpoint operator can be defined in terms of a simple, infinitely-looping iteration primitive. We first consider a natural but flawed definition, based on exceptions and iterative deepening of finite unfoldings, and point out some of its shortcomings. Then we present the proper construction using full first-class continuations, with both an informal derivation and a proof that the behavior of the defined operator faithfully mimics a built-in recursion primitive. In fact, given an additional uniformity assumption, the construction is a two-sided inverse of the usual definition of iteration from recursion. Continuing, we show that the CBV looping primitive is in fact the direct-style equivalent of a continuation-passing-style fixpoint, and that this correspondence extends all the way to traditional definitions of these operators in terms of reflexive types. An earlier version of this work appeared in Proceedings of the 1992 ACM SIGPLAN Workshop on Continuations. Supported in part by NSF Grant CCR-8922109 and in part by the Avionics Lab, Wright Research and Development Center, Aeronautical Systems Division (AFSC), U.S. Air Force, Wright-Patterson AFB, OH 45433-6543 under Contract F33615-90-C-1465, ARPA Order No. 7597. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government.
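Python has no first-class continuations, so the paper's derivation cannot be transcribed directly; the sketch below only shows the call-by-value fixpoint operator that the derivation targets, written in the usual eta-expanded (Z-combinator) form so that it terminates under CBV evaluation.

```python
# The call-by-value fixpoint operator in its usual eta-expanded (Z-combinator)
# form; the eta-expansion (lambda v: ...) delays evaluation so CBV terminates.

def fix(f):
    """fix(f) behaves like f(fix(f)) without looping forever under CBV."""
    return (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A recursive definition written without referring to its own name:
fact = fix(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
print(fact(6))    # 720
```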

20.
Two transformations are presented which, for any pushdown automaton (PDA) M with n states and p stack symbols, reduce the number of stack symbols to any desired number p′ greater than one. The first transformation preserves deterministic behavior and produces an equivalent PDA with o(np/p′) states. The second construction, using a technique which introduces nondeterminism, produces an equivalent PDA with O(np/p′) states. Both transformations are essentially optimal, the former among determinism-preserving transformations, the latter among all transformations. This research was supported in part by the National Science Foundation under Grants MCS 76-10076 and MCS 76-10076A01 and by the Stiftung Volkswagenwerk under Grant No. II/62 325.
