Similar Documents
A total of 20 similar documents were found (search time: 765 ms).
1.
In functional languages such as OBJ*, CafeOBJ, and Maude, symbols are given strategy annotations that specify (the order in) which subterms are evaluated. Syntactically, strategy annotations are given either as lists of natural numbers or as lists of integers associated to function symbols whose (absolute) values refer to the arguments of the corresponding symbol. A positive index prescribes the evaluation of an argument, whereas a negative index means “evaluation on-demand”. These on-demand indices have been proposed to support laziness in OBJ-like languages. While strategy annotations containing only natural numbers have been implemented and investigated to some extent (regarding, for example, termination, confluence, and completeness), fully general annotations (including positive and negative indices) have received comparatively little attention to date. In this paper, we first point out a number of problems with current proposals for handling on-demand strategy annotations. Then, we propose a solution to these problems by keeping accurate track of annotations along the evaluation sequences. We formalize this solution as a suitable extension of the evaluation strategy of OBJ-like languages (which only considers annotations given as natural numbers) to on-demand strategy annotations. Our on-demand evaluation strategy (ODE) overcomes the drawbacks of previous proposals and also has better computational properties. For instance, we show how to use this strategy for computing (head-)normal forms. We also introduce a transformation which allows us to prove the termination of the new evaluation strategy using standard rewriting techniques. Finally, we present two interpreters for the new strategy together with some encouraging experiments which demonstrate the usefulness of our approach.
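To make the idea of positive versus negative annotation indices concrete, here is a minimal illustrative sketch in Python. It is not the ODE strategy of the paper; the symbols, the annotation table, and the evaluator are invented for illustration only. A positive index forces evaluation of the corresponding argument before the symbol's rules are applied, while a negative index leaves that argument untouched until a rule actually demands its value.

# Toy illustration only: terms are ints (values) or tuples (symbol, arg1, ...).
# A positive index i evaluates argument i up front; a negative index -i leaves
# argument i unevaluated until a rule demands it.
ANNOTATIONS = {
    "add": [1, 2],       # eager in both arguments
    "ifz": [1, -2, -3],  # eager condition, on-demand branches
}

def evaluate(term):
    if isinstance(term, int):
        return term
    sym, *args = term
    forced = {}
    for idx in ANNOTATIONS[sym]:
        if idx > 0:                          # positive: evaluate now
            forced[idx - 1] = evaluate(args[idx - 1])
        # negative: do nothing yet; the argument stays as an unevaluated term
    if sym == "add":
        return forced[0] + forced[1]
    if sym == "ifz":                         # demand only the branch that is taken
        return evaluate(args[1] if forced[0] == 0 else args[2])
    raise ValueError("unknown symbol: " + sym)

# evaluate(("ifz", 0, ("add", 1, 2), ("diverge",))) returns 3 without ever
# touching the unused "diverge" argument.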

2.
Strategy annotations provide a simple mechanism for introducing some laziness into the evaluation of expressions. As an eager programming language, Maude can take advantage of them and, in fact, they are part of the language. Maude strategy annotations are lists of non-negative integers associated to function symbols which specify the order in which the arguments are (eventually) evaluated in function calls. A positive index enables the evaluation of an argument, whereas 'zero' means that evaluation of the function call itself has to be attempted. The use of negative indices has been proposed to express evaluation on-demand, where the demand is an attempt to match an argument term with the left-hand side of a rewrite rule. In this paper we show how to equip Maude with the ability to deal with on-demand strategy annotations.

3.
We present a generalization of the temporal propositional logic of linear time which is useful for stating and proving properties of the generic execution sequence of a parallel or non-deterministic program. The formal system we present is exactly the same as the third of three logics presented by Lehmann and Shelah (Information and Control 53, 165–198 (1982)), but we give it a different semantics. The models are tree models of arbitrary size, similar to those used in branching-time temporal logic. The formulation we use allows us to state properties of the “co-meagre” family of paths, where the term “co-meagre” refers to a set whose complement is of the first category in Baire's classification, viewing the set of paths in the model as a metric space. Our system is decidable, sound, and complete for models of arbitrary size, but it also has the finite model property; namely, every sentence having a model has a finite model.

4.
Plotkin ((1977) Theoret. Comput. Sci. 5: 223–256) examines the denotational semantics of PCF (essentially typed λ-calculus with arithmetic and looping). The standard Scott semantics V is computationally adequate but not fully abstract; with the addition of some parallel facilities, it becomes fully abstract, and with the addition of an existential operator, denotationally universal. We consider carrying out the same program for the Scott models built from flat lattices rather than flat cpos. Surprisingly, no computable extension of PCF can be denotationally universal; perfectly reasonable semantic values such as supremum and Plotkin's “parallel or” are not definable. There is an unenlightening fully abstract extension A (approx), based on Gödel numbering and syntactic analysis. Unfortunately, this is the best we can do; operators defined by PCF-style rules cannot give a fully abstract language. (There is a natural and desirable property, operational extensionality, which prevents full abstraction with respect to the lattice-based models.) However, we show that Plotkin's program can be carried out for a nonconfluent evaluator.

5.
6.
In this paper we show that it is possible to model the observable behaviour of coalgebras independently of their internal dynamics, but within the general framework of representing behaviour by a map into a “final” coalgebra. In the first part of the paper we characterise Set-endofunctors F with the property that bisimilarity of elements of F-coalgebras coincides with having the same observable behaviour. We show that such functors have the final coalgebra of a rather simple nature, and preserve some weak pullbacks. We also show that this is the case if and only if F-bisimilarity corresponds to logical equivalence in the finitary fragment of the coalgebraic logic. In the second part of the paper, we present a construction of a “final” coalgebra that captures the observable behaviour of F-coalgebras. We keep the word “final” quoted since the object we construct need not belong to the original category. The construction is carried out for an arbitrary Set-endofunctor F; throughout the construction we remain in Set, but the price to pay is the introduction of new morphisms. The paper concludes with a hint at a possible application to modelling weak bisimilarity for coalgebras.

7.
We study and improve the OBF technique [Barnat, J. and P. Moravec, Parallel algorithms for finding SCCs in implicitly given graphs, in: Proceedings of the 5th International Workshop on Parallel and Distributed Methods in Verification (PDMC 2006), LNCS (2007)], which was used in distributed algorithms for the decomposition of a partitioned graph into its strongly connected components. In particular, we introduce a recursive variant of OBF and experimentally evaluate several different implementations of it that vary in the degree of parallelism. For the evaluation we used synthetic graphs with a few large components and graphs with many small components. We also experimented with graphs that arise as state spaces in real model checking applications. The experimental results are compared with those of other successful SCC decomposition techniques [Orzan, S., “On Distributed Verification and Verified Distribution,” Ph.D. thesis, Free University of Amsterdam (2004); Fleischer, L.K., B. Hendrickson and A. Pinar, On identifying strongly connected components in parallel, in: Parallel and Distributed Processing, IPDPS Workshops, Lecture Notes in Computer Science 1800, 2000, pp. 505–511].

8.
On Full Abstraction for PCF: I, II, and III

9.
In this paper we use a program logic and automatic theorem provers to certify the resource usage of low-level bytecode programs equipped with annotations describing the resource consumption of methods. We have adapted an existing resource counting logic [Aspinall, D., L. Beringer, M. Hofmann, H.-W. Loidl and A. Momigliano, A program logic for resource verification, in: Proceedings of 17th International Conference on Theorem Proving in Higher Order Logics (TPHOLs2004), Lecture Notes in Computer Science 3223 (2004), pp. 34–49] to fit the first-order setting, implemented a verification condition generator, and tested our approach on programs that contain recursion and deal with recursive data structures. We have successfully applied our framework to programs that did not involve any updates to recursive data structures. Mutation, however, is trickier because of heap aliasing. We discuss the problems this raises and suggest techniques to solve them.

10.
11.
Semantics preservation between source and target program is the commonly accepted minimum requirement to be ensured by compilers. It is the key notion around which compiler verification and optimization are centered. The precise meaning, however, is often only implicit. As a rule of thumb, verification tends to interpret semantics preservation in a very tight sense, not least in order to simplify the verification task. Optimization generally prefers a more liberal view in order to enable more powerful transformations otherwise excluded. The surveyor's rod of admissibility is semantics preservation, and hence the language semantics. But the appropriate interpretation varies with the application context (“stand-alone” programs, communicating systems, reactive systems, etc.).

The aim of the workshop is to bring together researchers and practitioners working on optimizing and verifying compilation as well as on programming language design and semantics in order to plumb the mutual impact of these fields on each other and the degrees of freedom optimizers and verifiers have, to bridge the gap between the communities, and to stimulate synergies.

The accepted papers discuss topics such as certifying compilation, verifying compilation, translation validation, and optimization. Chakravarty et al. present correctness proofs for constant folding and dead code elimination based on SSA. Hartmann et al. discuss a method to annotate SafeTSA code in order to enable object resolution for dynamic objects under certain conditions; their approach statically analyzes classes in order to determine whether object resolution is possible at runtime. Berghofer and Strecker describe the mechanical verification of a compiler from a small subset of Java to the JVM using Isabelle. The contribution of Alias and Barthou is concerned with algorithm recognition. It presents a preliminary approach for detecting whether an algorithm, i.e. a piece of code, is an instance of a more general algorithm template. Their approach relies on first transforming the piece of code under consideration into a system of affine recurrent equations (SARE) and then checking whether it is an instance of a SARE template. Glesner and Blech formalize the notion of computer arithmetic and develop a classification of such arithmetics. Based on this classification they prove the correctness of constant folding, which is not as obvious as it seems at first glance. Genet et al. prove the correctness of a converter from ordinary Java class files to CAP (the class file format of Java Card); the proofs are conducted using the PVS theorem proving system. Hoflehner, Lavery and Sehr discuss validation techniques from Intel's IA64 compiler effort. They show how to improve the reliability of both source code and compilers themselves by means of appropriate validation and self-validation techniques.

The papers in this volume were reviewed by the program committee consisting, besides the editors, of
• www.ics.uci.edu/~franz/ Michael Franz, University of California, Irvine, CA, USA
• www-2.cs.cmu.edu/~petel/index.html Peter Lee, Carnegie Mellon University, PA, USA
• research.microsoft.com/~emeijer/ Erik Meijer, Microsoft Research, Redmond, WA, USA
• web.comlab.ox.ac.uk/oucl/people/oege.demoor.html Oege de Moor, Oxford University, UK
• www.datapower.com Robert Morgan, DataPower, Cambridge, MA, USA
• www.cs.pitt.edu/~soffa/ Mary Lou Soffa, University of Pittsburgh, PA, USA
We are grateful to the following persons, whose help has been crucial for the success of COCV'03: Damian Niwinski and the organizers of ETAPS'2003 for their help with the organization of the Workshop as a satellite event of ETAPS'2003; and Mike Mislove, one of the Managing Editors of the ENTCS series, for his assistance with the use of the ENTCS style files.

12.
This volume contains the Proceedings of the International Workshop on Optimization and Implementation of Declarative Programs (WOID'99). The Workshop was held in conjunction with the International Conference on Logic Programming (ICLP'99) in Las Cruces, New Mexico, USA, on the 2nd and 3rd of December, 1999.

Overview: The aim of this workshop was to provide a forum where new trends, ideas and developments concerning the optimization and implementation of declarative languages could be discussed. It was especially geared towards bringing researchers from low-level compilation and high-level optimization together. Indeed, compilers and linkers are getting more and more sophisticated and employ more and more high-level optimizations, such as partial evaluation or deforestation. Researchers in high-level optimization and transformation, on the other hand, realise that low-level issues have to be taken into account in order to apply their techniques in practice. So, in this workshop we wanted to provide the possibility for these two areas to meet and accelerate their synergy.

Contributions: Out of 11 submissions, 8 papers were selected. To ensure a workshop character, ample time was reserved for discussion and each paper was assigned a “key listener” from the program committee. This scheme was borrowed from the LOPSTR workshop series, and proved to be successful. The contributions were arranged in 3 sessions, described below.

1. Implementation & Low-level Optimization. The first paper of the session, by R. Muth, S. Watterson, and S. Debray, discussed how the information that certain program variables “almost always” have a particular value (as opposed to “always”, as in partial evaluation or constant folding) can and should be exploited for optimization. (For various reasons this paper is not present in this volume of ENTCS.) The other two papers of the session presented new ways of efficiently implementing CLP languages and constructs. The paper by N-F. Zhou and S. Kaneko presented an efficient (hybrid) way to compile equality constraints, while the paper by M. Gavanelli and M. Milano showed how to efficiently implement lazy domain evaluation for Constraint Satisfaction Problems.

2. Analysis. The session on analysis comprised two papers. The first one, by K. Ueda, developed a new linearity analysis for concurrent logic programs, with the aim of achieving compile-time garbage collection. The paper by G. Puebla and M. Hermenegildo provided an encompassing and insightful overview of the important issues and problems that arise when analysing and specialising large programs decomposed into modules.

3. Specialization. The first paper of the last session, by F. Fioravanti, A. Pettorossi, and M. Proietti, presented a novel way to specialise (C)LP programs within a given context. This context allows one to describe additional specialisation constraints which are hard (or impossible) to express in earlier approaches. The next paper, by W. Vanhoof and M. Bruynooghe, discussed how to achieve a binding-time analysis (i.e., figuring out which values and operations are already known or executable at specialisation time) for offline partial evaluation in the context of Mercury programs with modules. Finally, the paper by M. Leuschel and J. Jorgensen (presented by H. Lehmann) gave an overview of a new, fast offline partial evaluation system (called LOGEN), which can handle partially static data structures and many pure and impure features of Prolog.

Further information about the workshop can be obtained at: http://www.ecs.soton.ac.uk/~mal/iclp99.woid.html.

The Program Committee was composed of:
Saumya Debray (University of Arizona)
Bart Demoen (University of Leuven)
John Gallagher (University of Bristol)
Michael Leuschel (Chair) (University of Southampton)
Germán Puebla (University of Madrid)
Peter Stuckey (University of Melbourne)
Neng-Fa Zhou (Kyushu Institute of Technology)

I would like to thank the authors for their participation in the workshop and the program committee for their efforts in reviewing the papers. I also want to thank the Managing Editors of the Electronic Notes in Theoretical Computer Science series, Michael Mislove, Maurice Nivat, and Christos Papadimitriou, for providing the opportunity to publish the proceedings in this series.

Michael Leuschel, Guest Editor

13.
Runmin, Michel. Automatica, 2009, 45(11): 2685–2691
We revisit the Almost Disturbance Decoupling Problem (ADDP) (Willems, 1981) by state feedback, with the objective of solving ADDP while simultaneously placing the maximal number of poles in the closed-loop solution. Indeed, when ADDP is solvable, we show that, whatever the choice of a particular feedback solution, the obtained closed-loop system always has a set of fixed poles. We characterize these Fixed Poles of ADDP. The other (non-fixed) poles can be placed freely, and we characterize the “optimal” solutions (in terms of ad hoc subspaces and feedbacks) which allow us to solve ADDP with maximal pole placement. From our contribution, which treats the most general case of ADDP with maximal (usually partial) pole placement, there directly follow the solutions of ADDP with complete pole placement (when there are no ADDP Fixed Poles) and of ADDP with internal stability (when all the Fixed Poles of ADDP are stable), without requiring the use of stabilizability subspaces, as in Willems (1981). We extend the concept of Self-Bounded Controlled-Invariant Subspaces (Basile & Marro, 1992) to almost ones. An example is proposed that illustrates our contributions.

14.
The problem of proving that two programs, in any reasonable programming language, are equivalent is well known to be undecidable. In a formal programming system, in which the rules for equivalence are finitely presented, the problem of provable equivalence is semi-decidable. Despite this improved situation, there is a significant lack of generally accepted automated techniques for systematically searching for a proof (or disproof) of program equivalence. Techniques for searching for proofs of equivalence often stumble on the formulation of induction and, of course, coinduction (when it is present), which are often formulated in such a manner as to require inspired guesses. There are, however, well-known program transformation techniques which do address these issues. Of particular interest to this paper are the deforestation techniques introduced by Phil Wadler and the fold/unfold program transformation techniques introduced by Burstall and Darlington. These techniques are shadows of an underlying cut-elimination procedure and, as such, should be more generally recognized as proof techniques. In this paper we show that these techniques apply to languages which have both inductive and coinductive datatypes. The relationship between these program transformation techniques and cut-elimination requires a transformation from initial and final “algebra” proof rules into “circular” proof rules as introduced by Santocanale (and used implicitly in the model checking community). This transformation is only possible in certain proof systems. Here we show that it can be applied to cartesian closed categories with datatypes: closedness is an essential requirement. The cut-elimination theorems and attendant program transformation techniques presented here rely heavily on this alternate presentation of induction and coinduction.
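As a rough picture of what the deforestation transformation mentioned above accomplishes, here is a Python analogue (only an analogy under invented function names, not the categorical cut-elimination setting of the paper): the fused version computes the same result as the naive composition but never builds the intermediate data structure.

def sum_of_squares_naive(xs):
    squares = [x * x for x in xs]   # intermediate list (the "forest") is built
    return sum(squares)

def sum_of_squares_fused(xs):
    total = 0
    for x in xs:                    # deforested: one pass, no intermediate list
        total += x * x
    return total

assert sum_of_squares_naive(range(10)) == sum_of_squares_fused(range(10))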

15.
We describe PCTL, a temporal logic extending CTL with connectives that allow referring to the past of the current state. This incorporates the new N (“from now on”) combinator we recently introduced. PCTL has a branching future, but a determined, finite, and cumulative past. We argue that this is the right choice for a semantical framework and illustrate it through an extensive example. We investigate the feasibility of verification with PCTL and demonstrate how a translation-based approach allows model-checking specifications written in NCTL, a fragment of PCTL.

16.
17.
By collecting statistics over runtime executions of a program, we can answer complex queries such as “what is the average number of packet retransmissions” in a communication protocol, or “how often does process P1 enter the critical section while process P2 waits” in a mutual exclusion algorithm. We present an extension of linear-time temporal logic that combines the temporal specification with the collection of statistical data. By translating formulas of this language to alternating automata, we obtain a simple and efficient query evaluation algorithm. We illustrate our approach with examples and experimental results.
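For concreteness, the kind of query quoted in the abstract can be pictured as a count over a finite execution trace. The sketch below uses invented state records and only shows the query itself, not the paper's translation of the extended temporal logic into alternating automata.

# Hypothetical trace of joint states of two processes.
trace = [
    {"p1": "crit", "p2": "wait"},
    {"p1": "idle", "p2": "crit"},
    {"p1": "crit", "p2": "wait"},
    {"p1": "crit", "p2": "idle"},
]

# "How often does P1 enter the critical section while P2 waits?"
count = sum(
    1
    for prev, cur in zip(trace, trace[1:])
    if prev["p1"] != "crit" and cur["p1"] == "crit" and cur["p2"] == "wait"
)
print(count)  # 1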

18.
BCD [Barendregt, Henk, Mario Coppo and Mariangiola Dezani-Ciancaglini, A filter lambda model and the completeness of type assignment, JSL 48 (1983), 931–940] relies, for its modeling of the λ-calculus in intersection type filters, on a key theorem which I call BL (for the Bubbling Lemma, following someone). This lemma has been extended in [Castagna, Giuseppe, and Alain Frisch. A gentle introduction to semantic subtyping. In PPDP05, ACM Press (full version) and ICALP05, LNCS volume 3580, Springer-Verlag (summary), 2005. Joint ICALP-PPDP keynote talk; Dezani-Ciancaglini, M., A. Frisch, E. Giovannetti and Y. Motohama, 2002. The Relevance of Semantic Subtyping, Electronic Notes in Theoretical Computer Science 70 No. 1, 15pp] to encompass Boolean structure, including specifically union types; this extended lemma I call BBL (the Better Bubbling Lemma). There are resonances, explored in [Dezani-Ciancaglini, M., R.K. Meyer and Y. Motohama, The semantics of entailment omega, NDJFL 43 (2002), 129–145] and [Dezani-Ciancaglini, M., A. Frisch, E. Giovannetti and Y. Motohama, 2002. The Relevance of Semantic Subtyping, Electronic Notes in Theoretical Computer Science 70 No. 1, 15pp], between intersection and union type theories and the already existing minimal positive relevant logic B+ of [Routley, Richard, and Robert K. Meyer, The semantics of entailment III, JPL 1 (1972), 192–208]. (Indeed [Pal, Koushik, and Robert K. Meyer. 2005. Basic relevant theories for combinators at levels I and II, AJL 3, http://www.philosophy.unimelb.edu.au/2005/2003_2.pdf, 19 pages] applies BL and BBL to get further results linking combinators to relevant theories and propositions.) On these resonances the filters of algebra become the theories of logic. The semantics of [Meyer, Robert K., Yoko Motohama and Viviana Bono, Truth translations of relevant logics, in B. Brown and F. Lepage, eds., “Truth and Probability: Essays in Honour of Hugues Leblanc,” pages 59–84, College Publications, King's College, London, 2005] yields here a new and short proof of BBL, which takes account of full Boolean structure by encompassing not only B+ but also its conservative Boolean extension CB [Routley, Richard, and Robert K. Meyer, The semantics of entailment III, JPL 1 (1972), 192–208; Meyer, Robert K., 1995. Types and the Boolean system CB+, cslab.anu.edu.au/ftp/techreports/SRS/1995/TR-SRS-5-95/meyer.ps.gz; Meyer, Robert K., Yoko Motohama and Viviana Bono, Truth translations of relevant logics, in B. Brown and F. Lepage, eds., “Truth and Probability: Essays in Honour of Hugues Leblanc,” pages 59–84, College Publications, King's College, London, 2005].

19.
In this paper, we address a fundamental problem related to the induction of Boolean logic: given a set of data, represented as a set of binary “true n-vectors” (or “positive examples”) and a set of “false n-vectors” (or “negative examples”), we establish a Boolean function (or an extension) f, so that f is true (resp., false) on every given true (resp., false) vector. We shall further require that such an extension belongs to a certain specified class of functions, e.g., the class of positive functions, the class of Horn functions, and so on. The class of functions represents our a priori knowledge or hypothesis about the extension f, which may be obtained from experience or from the analysis of mechanisms that may or may not cause the phenomena under consideration. Real-world data may contain errors; e.g., measurement and classification errors might come in when obtaining the data, or there may be some other influential factors not represented as variables in the vectors. In such situations, we have to give up the goal of establishing an extension that is perfectly consistent with the given data, and we are satisfied with an extension f having the minimum number of misclassifications. Both problems, i.e., the problem of finding an extension within a specified class of Boolean functions and the problem of finding a minimum-error extension in that class, will be extensively studied in this paper. For certain classes we shall provide polynomial algorithms, and for other cases we prove their NP-hardness.
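As a concrete instance of the extension problem for one of the classes mentioned (positive, i.e. monotone, functions), a consistent extension exists exactly when no “false” vector componentwise dominates a “true” vector: otherwise monotonicity would force the false vector to be classified true. The Python check below only illustrates this criterion; it is not the paper's algorithms nor its minimum-error variant.

def dominates(u, v):
    # u >= v in every coordinate
    return all(ui >= vi for ui, vi in zip(u, v))

def positive_extension_exists(true_vectors, false_vectors):
    # A positive (monotone) extension exists iff no false vector dominates a true vector.
    return not any(dominates(f, t) for t in true_vectors for f in false_vectors)

print(positive_extension_exists([(1, 0, 1)], [(1, 1, 1)]))  # False: (1,1,1) dominates (1,0,1)
print(positive_extension_exists([(1, 0, 1)], [(0, 0, 1)]))  # True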

20.
Lexico-semantic collocations (LSCs) are a prominent type of multiword expression. Over the last decade, the automatic compilation of LSCs from text corpora has been addressed in a significant number of works. Very often, however, the output of an LSC-extraction program is a plain list of LSCs. While useful as raw material for dictionary construction, plain lists of LSCs are of rather limited use in NLP applications. For NLP, LSCs must be assigned syntactic and, especially, semantic information. Our goal is to develop an “off-the-shelf” LSC-acquisition program that annotates each LSC identified in the corpus with its syntax and semantics. In this article, we address the annotation task as a classification task, viewing it as a machine learning problem. The LSC typology we use is the set of lexical functions from Explanatory Combinatorial Lexicology; as the lexico-semantic resource, EuroWordNet has been used. The applied machine learning technique is a variant of the nearest-neighbor family, defined over lexico-semantic features of the elements of LSCs. The technique has been tested on Spanish verb–noun bigrams.
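To indicate the flavour of the classification step, here is a deliberately simplified nearest-neighbour sketch in Python. The feature strings, the placeholder labels "LF-1" and "LF-2", and the overlap similarity are all invented for illustration; the paper itself uses EuroWordNet-derived lexico-semantic features and lexical functions as labels.

def overlap(a, b):
    # crude similarity: number of shared features
    return len(set(a) & set(b))

def nearest_neighbor_label(features, training):
    # training: list of (feature_list, label) pairs; return the label of the
    # most similar training example
    best = max(training, key=lambda ex: overlap(features, ex[0]))
    return best[1]

training = [
    (["verb:give", "noun:talk", "sense:communication"], "LF-1"),
    (["verb:take", "noun:decision", "sense:act"], "LF-1"),
    (["verb:arise", "noun:problem", "sense:state"], "LF-2"),
]
query = ["verb:give", "noun:lecture", "sense:communication"]
print(nearest_neighbor_label(query, training))  # LF-1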
