Similar Literature
20 similar documents found.
1.
Summary We give methods of mechanically converting programs that are easy to understand into more efficient ones, converting recursion equations that use high-level operations into lower-level flowchart programs. The main transformations involved are (i) recursion removal, (ii) eliminating common subexpressions and combining loops, (iii) replacing procedure calls by their bodies, and (iv) introducing assignments that overwrite list cells no longer in use (compile-time garbage collection). The operations we use are based on the POP-2 language (Burstall, Collins and Popplestone [1]). The main features to note are that hd is the LISP car, tl is the LISP cdr, and concat joins two lists (the LISP append).
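As a minimal illustration of transformation (i), recursion removal, here is a sketch in Python (ours, not the paper's POP-2 code): a linearly recursive function over lists is converted into an equivalent loop, eliminating the call stack.

```python
def total_rec(xs):
    # Easy-to-understand recursive form: hd/tl-style list processing.
    if not xs:
        return 0
    return xs[0] + total_rec(xs[1:])   # hd(xs) + total(tl(xs))

def total_iter(xs):
    # Mechanically derived iterative form: the pending additions are
    # folded into an accumulator, so no call stack is needed.
    acc = 0
    while xs:
        acc += xs[0]   # consume hd
        xs = xs[1:]    # advance to tl
    return acc

assert total_rec([1, 2, 3]) == total_iter([1, 2, 3]) == 6
```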

2.
The paper concerns the relationship between graph-theoretic and algebraic properties of structured flowchart schemes. For each of ten classes of flowchart schemes defined algebraically, a graph-theoretic property is given that characterizes the class. The classes include the Dijkstra schemes, Elgot's CACI and G-schemes, the reducible schemes, and Kosaraju's BJn-schemes. For two classes of schemes defined by a graph-theoretic property, an equivalent algebraic characterization is found.

3.
The aim of this work is to develop a declarative semantics for N-Prolog with negation as failure. N-Prolog is an extension of Prolog proposed by Gabbay and Reyle (1984, 1985) which allows occurrences of nested implications in both goals and clauses. Our starting point is an operational semantics of the language defined by means of top-down derivation trees. Negation as finite failure can be naturally introduced in this context: a goal ¬G may be inferred from a database if every top-down derivation of G from the database finitely fails, i.e., contains a failure node at finite height. Our purpose is to give a logical interpretation of the underlying operational semantics. In the present work (Part 1) we consider only the basic problems of determining such an interpretation, so our analysis concentrates on the propositional case; nevertheless, we give an intuitive account of how to extend our results to a first-order language. A full treatment of N-Prolog with quantifiers is deferred to the second part of this work. Our main contribution to the logical understanding of N-Prolog is the development of a notion of modal completion for programs, or databases. N-Prolog deductions turn out to be sound and complete with respect to such completions. More exactly, we introduce a natural modal three-valued logic PK and prove that a goal is derivable from a propositional program if and only if it is implied by the completion of the program in the logic PK. This result holds for arbitrary programs: we assume no syntactic restriction, such as stratification (Apt et al. 1988; Bonner and McCarty 1990), and in particular we allow arbitrary recursion through negation. Our semantic analysis relies heavily on a notion of intensional equivalence for programs and goals. This notion is naturally induced by the operational semantics and is preserved under substitution of equivalent subexpressions. Building on this substitution property, we develop a theory of normal forms of programs and goals: every program can be effectively transformed into an equivalent program in normal form, and from the simple and uniform structure of programs in normal form one may directly define the completion.
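A toy interpreter can make the operational reading concrete. The following Python sketch is our own illustration (not Gabbay and Reyle's system, and propositional only): goals are atoms, negations, or embedded implications, and negation succeeds exactly when every top-down derivation of the inner goal finitely fails.

```python
# Goals:  'p'                  atomic goal
#         ('not', g)           negation as finite failure
#         ('imp', clauses, g)  nested implication: assume clauses, prove g
# A program maps each atom to a list of clause bodies (lists of goals).

def extend(program, extra):
    # database augmented with the hypothetical clauses of an implication
    new = {a: list(bs) for a, bs in program.items()}
    for a, bs in extra.items():
        new.setdefault(a, []).extend(bs)
    return new

def prove(program, goal, depth=50):
    # True  = the goal has a successful top-down derivation
    # False = every top-down derivation finitely fails
    if depth == 0:
        raise RecursionError("cannot decide within depth bound")
    if isinstance(goal, tuple) and goal[0] == 'not':
        return not prove(program, goal[1], depth - 1)
    if isinstance(goal, tuple) and goal[0] == 'imp':
        return prove(extend(program, goal[1]), goal[2], depth - 1)
    # atomic goal: some clause body must succeed in its entirety
    return any(all(prove(program, g, depth - 1) for g in body)
               for body in program.get(goal, []))

db = {'wet': [['rain']]}                           # wet :- rain
assert prove(db, ('imp', {'rain': [[]]}, 'wet'))   # (rain => wet) succeeds
assert prove(db, ('not', 'wet'))                   # wet finitely fails
```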

4.
The granularity of shared data is one of the key factors affecting the performance of distributed shared memory (DSM) machines. Given that programs exhibit quite different sharing patterns, providing only one or two fixed granularities cannot result in efficient use of resources. On the other hand, supporting arbitrary granularity sizes significantly increases not only hardware complexity but software overhead as well. Furthermore, efficient use of arbitrary granularities puts the burden on users to provide information about program behavior to compilers and/or runtime systems; such requirements tend to restrict the programmability of the shared memory model. In this paper, we present a new communication scheme, called Adaptive Granularity (AG). Adaptive Granularity makes it possible to transparently integrate bulk transfer into the shared memory model by supporting variable-size granularity and memory replication. It consists of two protocols: one for small data and another for large data. For small data, the standard hardware DSM protocol is used and the granularity is fixed to the size of a cache line. For large array data, the protocol for bulk data is used instead, and the granularity varies depending on the runtime sharing behavior of the application. Simulation results show that AG improves performance by up to 43% over hardware implementations of DSM (e.g., DASH, Alewife). Compared with an equivalent architecture that supports fine-grain memory replication at the fixed granularity of a cache line (e.g., Typhoon), AG reduces execution time by up to 35%. This research was supported in part by NSF under grant CCR-9308981, by ARPA under Rome Laboratories Contract F30602-91-C-0146, and by the USC Zumberge Fund. Computing resources were provided in part by NSF infrastructure grant CDA-9216321.
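The two-protocol split can be pictured with a small dispatch sketch (ours, inferred from the description above; the names, the 64-byte line size, and the run-length heuristic are our assumptions, not AG's actual mechanism):

```python
CACHE_LINE = 64  # bytes; fixed granularity of the standard hardware DSM path

def choose_transfer(obj_kind, size_bytes, observed_run_length):
    """Pick a transfer granularity for a remote access.

    obj_kind: 'scalar' or 'array'; observed_run_length: how many
    consecutive elements this node recently accessed (a stand-in for
    whatever runtime sharing behavior AG actually tracks).
    """
    if obj_kind != 'array' or size_bytes <= CACHE_LINE:
        # small data: standard DSM protocol, one cache line
        return ('hardware_dsm', CACHE_LINE)
    # large array data: bulk protocol with a variable granularity that
    # adapts to the sharing pattern observed at run time (8-byte elements)
    grain = min(size_bytes, max(CACHE_LINE, observed_run_length * 8))
    return ('bulk_transfer', grain)

print(choose_transfer('scalar', 8, 1))          # ('hardware_dsm', 64)
print(choose_transfer('array', 1 << 20, 512))   # ('bulk_transfer', 4096)
```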

5.
Some basic connections between bounded-input, bounded-output (BIBO) stability and exponential asymptotic stability (EAS) are presented for a class of linear time-varying hereditary systems. For that class of systems, controllability and observability are closely related to those of an equivalent linear system, which eases the analytical treatment of these properties; they are then used to derive the stability results. The boundedness requirement on inputs and outputs in the BIBO definition is modified by requiring that their 'energy content' over a fixed-length interval be bounded independently of the position of the interval. For the class of hereditary linear systems studied, EAS and BIBO stability are found to be equivalent under uniform complete controllability and observability.
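Our reading of the modified boundedness requirement, stated as a formula (an illustration of the abstract's description; the paper's exact definition may differ): an input u is admissible when its energy over every window of a fixed length T is bounded by a constant that does not depend on where the window sits,

```latex
\exists M > 0 \;\; \forall t :\quad \int_{t}^{t+T} \lVert u(s) \rVert^{2}\, ds \;\le\; M ,
```

with the analogous condition imposed on outputs.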

6.
The aim of this paper is to extend the probabilistic choice in probabilistic programs to sub-probabilistic choice, i.e., of the form (p)P ⊕ (q)Q where p + q ⩽ 1, meaning that program P is executed with probability p and program Q is executed with probability q. Starting from an initial state, the execution of a sub-probabilistic program then yields a sub-probability distribution. This paper presents two equivalent semantics for a sub-probabilistic while-language. One interprets programs as sub-probability distributions on state spaces via denotational semantics; the other interprets programs as bounded expectation transformers via wp-semantics. The paper also proposes an axiomatic system for a total-correctness logic and proves its soundness and completeness in the classical pattern, by induction on the structure of programs.
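The denotational reading, programs as sub-probability-distribution transformers, can be sketched directly in Python (our illustration, not the paper's formal semantics; note how sequential composition and choice both preserve total mass ⩽ 1):

```python
# A program maps an initial state to a dict of state -> probability
# whose total mass may be strictly less than 1.

def unit(state):
    return {state: 1.0}

def seq(prog1, prog2):
    # Kleisli composition of sub-distribution transformers
    def run(state):
        out = {}
        for s1, p1 in prog1(state).items():
            for s2, p2 in prog2(s1).items():
                out[s2] = out.get(s2, 0.0) + p1 * p2
        return out
    return run

def choice(p, prog1, q, prog2):
    # sub-probabilistic choice: prog1 with probability p, prog2 with
    # probability q, p + q <= 1; the mass 1-p-q is the chance of no outcome
    assert p + q <= 1.0
    def run(state):
        out = {}
        for s, w in prog1(state).items():
            out[s] = out.get(s, 0.0) + p * w
        for s, w in prog2(state).items():
            out[s] = out.get(s, 0.0) + q * w
        return out
    return run

inc = lambda s: unit(s + 1)
dbl = lambda s: unit(s * 2)
prog = choice(0.5, inc, 0.3, dbl)
print(prog(4))            # {5: 0.5, 8: 0.3}; total mass 0.8 < 1
print(seq(prog, inc)(4))  # {6: 0.5, 9: 0.3}
```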

7.
The amount of memory required by a parallel program may be spectacularly larger than the memory required by an equivalent sequential program, particularly for programs that use recursion extensively. Since most parallel programs are nondeterministic in behavior, even when computing a deterministic result, parallel memory requirements may vary from run to run, even on the same data. Hence, parallel memory requirements may be both large (relative to the memory requirements of an equivalent sequential program) and unpredictable. Assume that each parallel program has an underlying sequential execution order that may be used as a basis for predicting parallel memory requirements. We propose a simple restriction that is sufficient to ensure that any program that runs in n units of memory sequentially can run in mn units of memory on m processors, using a scheduling algorithm that is always within a factor of two of optimal with respect to time. Any program can be transformed into one that satisfies the restriction, but some potential parallelism may be lost in the transformation. Alternatively, it is possible to define a parallel programming language in which only programs satisfying the restriction can be written.

8.
This paper discusses our methodology for formal analysis and automatic verification of software programs. It is applicable to a large subset of the C programming language that includes pointer arithmetic and bounded recursion. We consider reachability properties: in particular, whether certain assertions or basic blocks are reachable in the source code, and whether certain standard property violations can occur. We perform this analysis via a translation to a Boolean circuit representation based on modeling basic blocks. The program is then analyzed by a back-end SAT-based bounded model checker, where each unrolling is mapped to one step in a block-wise execution of the program.
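The reachability question has a compact illustration. The sketch below (ours; a real bounded model checker encodes the unrolling as a Boolean circuit and calls a SAT solver rather than enumerating states) unrolls a toy transition relation k times and reports the depth at which an assertion violation first becomes reachable:

```python
def bmc(init_states, step, bad, k):
    """step(s) -> iterable of successor states; bad(s) -> bool."""
    frontier = set(init_states)
    for depth in range(k + 1):
        if any(bad(s) for s in frontier):
            return depth          # violation reachable within the bound
        frontier = {t for s in frontier for t in step(s)}
    return None                   # no violation within k unrollings

# Toy program: x starts at 0; each step nondeterministically adds 1 or 3.
# Property under check: assert x != 7.
hit = bmc({0}, lambda x: {x + 1, x + 3}, lambda x: x == 7, k=10)
print(hit)   # 3  (e.g., 0 -> 3 -> 6 -> 7)
```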

9.

In the state-of-the-art parallel programming approaches OpenCL and CUDA, so-called host code is required for a program's execution. Efficiently implementing host code is often a cumbersome task, especially when executing OpenCL and CUDA programs on systems with multiple nodes, each comprising different devices, e.g., multi-core CPUs and graphics processing units: the programmer is responsible for explicitly managing node and device memory, for synchronizing computations with data transfers between devices of potentially different nodes, and for optimizing data transfers between device memories and nodes' main memories, e.g., by using pinned main memory to accelerate transfers and by overlapping the transfers with computations. We develop the distributed OpenCL/CUDA abstraction layer (dOCAL), a novel high-level C++ library that simplifies the development of host code. dOCAL combines major advantages over state-of-the-art high-level approaches: (1) it simplifies implementing both OpenCL and CUDA host code by providing a simple-to-use, high-level abstraction API; (2) it supports executing arbitrary OpenCL and CUDA programs; (3) it allows conveniently targeting the devices of different nodes by automatically managing node-to-node communication; (4) it simplifies implementing data transfer optimizations by providing different, specially allocated memory regions, e.g., pinned main memory for overlapping data transfers with computations; (5) it optimizes memory management by automatically avoiding unnecessary data transfers; (6) it enables interoperability between OpenCL and CUDA host code for systems with devices from different vendors. Our experiments show that dOCAL significantly simplifies the development of host code for heterogeneous and distributed systems, with low runtime overhead.


10.
Tail-recursive functions are a special kind of recursive function in which the last action in the body is the recursive call. Tail recursion is important for a number of reasons (e.g., tail-recursive functions are usually more efficient). In this article, we introduce an automatic transformation of first-order functions into tail-recursive form. Functions are defined using a (first-order) term rewrite system. We prove the correctness of the transformation for constructor-based reduction over constructor systems (i.e., typical first-order functional programs).
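A hand-worked instance of the transformation's effect (ours, in Python rather than a term rewrite system): the work pending after the recursive call is moved into an accumulator argument, making the recursive call the last action.

```python
def length(xs):
    # not tail recursive: the + happens after the recursive call returns
    if not xs:
        return 0
    return 1 + length(xs[1:])

def length_tail(xs, acc=0):
    # tail recursive: the recursive call is the last action; the pending
    # increment has been moved into the accumulator `acc`
    if not xs:
        return acc
    return length_tail(xs[1:], acc + 1)

assert length([7, 8, 9]) == length_tail([7, 8, 9]) == 3
```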

11.
P. Ferrara, Software, 2013, 43(6): 663–684
In this paper, we present Checkmate, the first generic static analyzer of multithreaded Java programs based on abstract interpretation. Checkmate can be tuned to different levels of precision and efficiency in order to prove various properties (e.g., absence of divisions by zero and of data races), and it is sound for multithreaded programs. It supports all the most relevant features of Java multithreading, such as dynamic thread creation, runtime creation of monitors, and dynamic allocation of memory. The experimental results demonstrate that Checkmate is accurate and efficient enough to analyze programs with several thousand statements and a potentially infinite number of threads. Copyright © 2012 John Wiley & Sons, Ltd.

12.
If citizens’ behavior threatens to harm others or seems not to be in their own interest (e.g., risking severe head injuries by riding a motorcycle without a helmet), it is not uncommon for governments to attempt to change that behavior. Governmental policy makers can apply established tools from the governmental toolbox to this end (e.g., laws, regulations, incentives, and disincentives). Alternatively, they can employ new tools that capitalize on the wealth of knowledge about human behavior and behavior change that has been accumulated in the behavioral sciences (e.g., psychology and economics). Two contrasting approaches to behavior change are nudge policies and boost policies. These policies rest on fundamentally different research programs on bounded rationality, namely, the heuristics and biases program and the simple heuristics program, respectively. This article examines the policy–theory coherence of each approach. To this end, it identifies the necessary assumptions underlying each policy and analyzes to what extent these assumptions are implied by the theoretical commitments of the respective research program. Two key results of this analysis are that the two policy approaches rest on diverging assumptions and that both suffer from disconnects with the respective theoretical program, but to different degrees: Nudging appears to be more adversely affected than boosting does. The article concludes with a discussion of the limits of the chosen evaluative dimension, policy–theory coherence, and reviews some other benchmarks on which policy programs can be assessed.

13.
Given two fuzzy subsets μ and ν of a metric space S (e.g., the Euclidean plane), we define the 'shortest distance' between μ and ν as a density function on the non-negative reals. Our definition is applicable both when μ and ν are discrete-valued and when they are 'smooth' (i.e., differentiable), and it generalizes the definition of shortest distance for crisp sets in a natural way. We also define the mean distance between μ and ν and show how it relates to the shortest distance. The relationship to earlier definitions of distance between fuzzy sets [1,3] is also discussed.
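For the discrete case, one way to realize such a density (our construction for illustration; the paper's definition may differ in detail) is to compute, for each distance d, the degree F(d) to which the shortest distance is at most d via a max–min over point pairs, and read the density off the increments of F; for crisp sets, F jumps from 0 to 1 exactly at the classical shortest distance.

```python
def shortest_distance_density(mu, nu, dist):
    # mu, nu: dict point -> membership grade in [0, 1]
    pairs = [(dist(x, y), min(gx, gy))
             for x, gx in mu.items() for y, gy in nu.items()]
    pairs.sort()
    density, reached = {}, 0.0
    for d, g in pairs:
        if g > reached:                 # F increases by g - reached at d
            density[d] = density.get(d, 0.0) + g - reached
            reached = g
    return density                      # increments of the cumulative F

mu = {0.0: 1.0, 1.0: 0.5}
nu = {3.0: 1.0}
d = shortest_distance_density(mu, nu, lambda x, y: abs(x - y))
print(d)   # {2.0: 0.5, 3.0: 0.5}: distance 2 to degree 0.5, otherwise 3
```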

14.
Avoiding exponential parameter growth in fuzzy systems
For standard fuzzy systems where the input membership functions are defined on a grid on the input space and all possible combinations of rules are used, the number of parameters of the fuzzy system grows exponentially as the number of input dimensions increases. This "curse of dimensionality" leads to problems with the design of fuzzy controllers (e.g., how to tune all these parameters), with the training of fuzzy estimators (e.g., the complexity of a gradient training algorithm, and "over-parameterization" that leads to poor convergence properties), and with computational complexity in implementations for practical problems. We introduce a fuzzy system whose number of parameters grows linearly with the number of inputs, even though it is constructed using all possible combinations of the membership functions in defining the rules. We prove that this fuzzy system is equivalent to the standard fuzzy system as long as its parameters are specified in a certain way. We then show that it retains the universal approximation property, using the Stone–Weierstrass theorem. Finally, we illustrate the performance of the fuzzy system via an application.
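The contrast in growth rates is easy to make concrete (a back-of-the-envelope sketch; the paper's actual parameterization of the linear system differs from this naive count): a grid with m membership functions per input and n inputs yields m**n rule combinations, versus on the order of m*n parameters for a linearly growing scheme.

```python
def grid_rule_params(m, n):
    return m ** n          # one consequent parameter per rule combination

def linear_params(m, n):
    return m * n           # e.g., parameters tied to the n input axes

for n in (1, 2, 4, 8):
    print(n, grid_rule_params(5, n), linear_params(5, n))
# 1 5 5 / 2 25 10 / 4 625 20 / 8 390625 40
```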

15.
A Study of the Algebraic Semantics of Verilog
We present an algebraic semantics for Verilog: a system of equational axioms that expresses the semantic features of Verilog concisely and precisely through algebraic laws. This algebraic semantics is sound with respect to a previously developed operational semantics model, i.e., for every algebraic law the processes on its two sides are bisimilar under the observational model of the operational semantics. We also study the relative completeness of the algebraic semantics: with reference to the operational model, and relative to a subset of an extended Verilog language, the algebraic semantics is complete, i.e., any two programs conforming to this syntax that are bisimulation-equivalent can be proved equal in the proposed algebraic system. The completeness proof uses a normal-form method: we construct a syntactically restricted class of programs such that any program in the subset can be transformed into a normal-form program by the algebraic laws, and two normal-form programs are bisimilar under the operational semantics if and only if they are syntactically identical. These results are of significant theoretical interest: existing process-algebra theory has been developed mainly for parallel languages with channel-based communication, whereas complex parallel languages such as Verilog, whose concurrency rests on shared-variable communication, have received comparatively little study; this work proposes a general and effective approach to the process-algebraic theory of such shared-variable concurrent languages.

16.
To remedy the lack of data-processing capability in earlier domestically produced very-low-frequency electromagnetic (VLF-EM) survey instruments (for example, the Chongqing-made DDS-1), and in response to practical survey needs, we wrote data-processing programs in C++ for the VLF-EM data-processing methods in common use—the tilt-angle method, Fraser filtering, and linear filtering—following the principles of object-oriented design (OOD). Tested on VLF survey data from the Bainiuchang silver mining district in Yunnan, the programs run stably and are practical and concise.
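For reference, here is what the Fraser filter does to a tilt-angle profile, in one common formulation (our Python sketch, not the C++ programs described above): differencing sums of adjacent station pairs converts the crossover that marks a conductor into a peak and suppresses long-wavelength drift.

```python
# One common sign convention of the Fraser filter for VLF-EM tilt data:
#   F[i] = (x[i+2] + x[i+3]) - (x[i] + x[i+1])

def fraser(tilt):
    return [(tilt[i + 2] + tilt[i + 3]) - (tilt[i] + tilt[i + 1])
            for i in range(len(tilt) - 3)]

# Synthetic profile: a sign reversal (crossover) above the conductor.
profile = [8, 6, 3, -3, -6, -8]
print(fraser(profile))   # [-14, -18, -14]: peak magnitude at the crossover
```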

17.
We study the problem of developing a set of syntax-driven transformations for automatic translation of shared-memory parallel programs into sequential programs. The result is a sequential program that is semantically equivalent to the original. Consequently, the problem of debugging parallel programs is reduced to the problem of debugging sequential programs. Moreover, the efficiency of parallel programs can be increased by sequentializing code segments that contain excess parallelism. The main difficulty in developing such a system is preserving the fairness property of any actual parallel execution, which states that no process can wait forever unserved. Thus, non-termination of the sequential version (namely, an infinite loop) is allowed only if there is at least one fair parallel execution that does not halt either (i.e., a process that executes an infinite loop whose termination does not depend on any other process). We show that it is sufficient to consider the case of two sequential programs executed in parallel in order to solve the general case. We then describe several types of transformations and check their ability to preserve fairness. The results regarding the existence of such a transformation for general parallel programs are not conclusive; however, we show that restricted cases (which are likely to appear in practice) can be sequentialized using this set of transformations.
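The flavor of the problem can be shown with a drastically simplified sketch (ours; the paper's transformations are syntax-driven and far more general): two processes, modeled as Python generators yielding one atomic step at a time, are sequentialized by strict round-robin scheduling, which preserves fairness because neither process can be left unserved forever.

```python
def proc(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"        # one atomic action

def sequentialize(p, q):
    live = [p, q]
    while live:
        for g in list(live):       # strict round robin over live processes
            try:
                print(next(g))
            except StopIteration:
                live.remove(g)

sequentialize(proc("A", 3), proc("B", 2))
# A:0 B:0 A:1 B:1 A:2  -- a single sequential, fair interleaving
```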

18.
Some transient and asymptotic performance properties are established for the model reference adaptive controller proposed by A.S. Morse (Proc. US-Italy Joint Seminar Syst., Models Feedback, 1992). It is proved that the L2 norm of the tracking error is uniformly bounded by the initial parameter estimation error. Further, if the initial conditions are sufficiently small, it is shown that the L∞ norm of the tracking error is uniformly bounded by the L2 norm of the reference signal. These transient bounds are independent of the richness of the signals and of the adaptation gain, making them arguably the strongest transient results available in the literature. Second, excitation conditions for exponential stability, a property well known to ensure some local performance measures, are given. To this end, it is shown that, modulo a signal-dependent time-scale change, Morse's estimator is equivalent to a normalized-gradient-plus-filtering identifier.
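For orientation, the following is a generic discrete-time normalized gradient identifier (a textbook construction sketched by us; Morse's estimator adds filtering and the time-scale change on top of something like this). The update gain is normalized by the regressor so the step stays bounded regardless of signal size.

```python
import numpy as np

def identify(phis, ys, gamma=0.5):
    theta = np.zeros(phis.shape[1])          # parameter estimate
    for phi, y in zip(phis, ys):
        e = y - phi @ theta                  # prediction error
        theta = theta + gamma * e * phi / (1.0 + phi @ phi)  # normalized step
    return theta

rng = np.random.default_rng(0)
true_theta = np.array([2.0, -1.0])
phis = rng.normal(size=(2000, 2))            # persistently exciting regressors
ys = phis @ true_theta
print(identify(phis, ys))                    # converges near [ 2., -1.]
```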

19.
Virtual Symposium on Virtual Mind
When certain formal symbol systems (e.g., computer programs) are implemented as dynamic physical symbol systems (e.g., when they are run on a computer), their activity can be interpreted at higher levels (e.g., binary code can be interpreted as LISP, LISP code can be interpreted as English, and English can be interpreted as a meaningful conversation). These higher levels of interpretability are called 'virtual' systems. If such a virtual system is interpretable as if it had a mind, is such a 'virtual mind' real? This is the question addressed in this 'virtual' symposium, originally conducted electronically among four cognitive scientists. Donald Perlis, a computer scientist, argues that according to the computationalist thesis virtual minds are real, and hence Searle's Chinese Room Argument fails: if Searle memorized and executed a program that could pass the Turing Test in Chinese, he would have a second, virtual, Chinese-understanding mind of which he was unaware (as in multiple personality). Stevan Harnad, a psychologist, argues that Searle's Argument is valid, virtual minds are just hermeneutic overinterpretations, and symbols must be grounded in the real world of objects, not just the virtual world of interpretations. Computer scientist Patrick Hayes argues that Searle's Argument fails, but because Searle does not really implement the program: a real implementation must not be homuncular but mindless and mechanical, like a computer; only then can it give rise to a mind at the virtual level. Philosopher Ned Block suggests that there is no reason a mindful implementation would not be a real one.

20.
This paper outlines a logic programming methodology that applies standardized logic program recursion forms afforded by a system of general-purpose recursion schemes. The recursion schemes are conceived of as quasi-higher-order predicates that accept predicate arguments, thereby representing parameterized program modules. This use of higher-order predicates is analogous to higher-order functionals in functional programming; however, these quasi-higher-order predicates are handled by a metalogic programming technique within ordinary logic programming. Some of the proposed recursion operators are actualizations of mathematical induction principles (e.g., structural induction as a generalization of primitive recursion); others are heuristic schemes for commonly occurring recursive program forms. The intention is to handle all recursion in logic programs through the given repertoire of higher-order predicates. We carry out a pragmatic feasibility study of the proposed recursion operators with respect to a corpus of common textbook logic programs, accompanied by an analysis of the theoretical expressivity. The main theoretical results concerning computability are as follows (see the sketch after the list):
  1. Primitive recursive functions can be re-expressed in logic programming by predicates defined solely by non-recursive clauses augmented with a fold recursion predicate, akin to the fold operators in functional programming.
  2. General recursive functions can be re-expressed likewise, since fold allows re-expression of a linrec recursion predicate facilitating linear, unbounded recursion.
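The spirit of result 1 can be demonstrated outside logic programming as well. In the Python sketch below (ours; the article works with predicates, not functions), fold is the only construct that recurses, and addition, multiplication, and factorial are defined on top of it with no further explicit recursion:

```python
def fold(step, base, n):
    # the one recursion scheme: apply `step` n times, threading an accumulator
    acc = base
    for i in range(n):
        acc = step(i, acc)
    return acc

# Non-recursive definitions layered on fold:
add  = lambda m, n: fold(lambda _, a: a + 1, m, n)
mul  = lambda m, n: fold(lambda _, a: add(a, m), 0, n)
fact = lambda n:    fold(lambda i, a: mul(a, i + 1), 1, n)

print(add(3, 4), mul(3, 4), fact(5))   # 7 12 120
```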
