Similar Literature
20 similar records found
1.
Constructing code analyzers may be costly and error prone if inadequate technologies and tools are used. If they are written in a conventional programming language, for instance, several thousand lines of code may be required even for relatively simple analyses. One way of facilitating the development of code analyzers is to define a very high-level domain-oriented language and implement an application generator that creates the analyzers from the specification of the analyses they are intended to perform. This paper presents a system for developing code analyzers that uses a database to store both a no-loss fine-grained intermediate representation and the results of the analyses. The system uses an algebraic representation, called F(p), as the user-visible intermediate representation. Analyzers are specified in a declarative language, called F(p)-l, which enables an analysis to be specified as a traversal of an algebraic expression, with access to, and storage of, the database information that the algebraic expression indexes. A foreign language interface allows the analyzers to be embedded in C programs. This is useful for implementing the user interface of an analyzer, for example, or for facilitating interoperation of the generated analyzers with pre-existing tools. The paper evaluates the strengths and limitations of the proposed system and compares it to other related approaches.

2.
3.
We describe Chisel, a tool that synthesizes a program slicer directly from a given algebraic specification of a programming language operational semantics \(\mathcal {S}\). \(\mathcal {S}\) is assumed to be a rewriting logic specification, given in Maude, while the program is a ground term of this specification. Chisel takes \(\mathcal {S}\) and synthesizes language constructs, i.e., instructions, that produce features relevant for slicing, e.g., data dependency. We implement syntheses adjusted to each feature as model checking properties over an abstract representation of \(\mathcal {S}\). The synthesis results are used by a traditional interprocedural slicing algorithm that we parameterize by the synthesized language features. We present the tool on two language paradigms: high-level imperative and low-level assembly languages. Computing program slices for these languages allows for extracting traceability properties in standard compilation chains and makes our tool well suited for the validation of embedded system designs. Chisel's slicing evaluation is based on benchmarks used in avionics.
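The data-dependency feature that Chisel extracts ultimately drives a classic backward-slicing pass. As a rough illustration of that final step, here is a generic backward slice over hypothetical straight-line three-address statements (an illustrative sketch only: no control dependence, no procedures, nothing from Chisel's synthesized interprocedural algorithm):

```python
# Minimal backward slicing over straight-line three-address code.
# Each statement records the variable it defines and the variables it
# uses; the slice for a criterion variable is the set of statements
# that may affect its final value.

def backward_slice(stmts, criterion):
    """stmts: list of (defined_var, used_vars); criterion: variable name."""
    relevant = {criterion}
    slice_ids = set()
    for i in range(len(stmts) - 1, -1, -1):  # walk backwards
        d, uses = stmts[i]
        if d in relevant:
            slice_ids.add(i)        # this statement is in the slice
            relevant.discard(d)     # its definition is now accounted for
            relevant.update(uses)   # but its operands become relevant
    return sorted(slice_ids)

prog = [
    ("a", []),       # 0: a = input()
    ("b", []),       # 1: b = input()
    ("c", ["a"]),    # 2: c = a + 1
    ("d", ["b"]),    # 3: d = b * 2
    ("r", ["c"]),    # 4: r = c
]
print(backward_slice(prog, "r"))  # -> [0, 2, 4]  (statement 3 is irrelevant)
```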

4.
There is a great deal of research aimed toward the development of temporal logics and model checking algorithms which can be used to verify properties of systems. In this paper, we present a methodology and supporting tools which allow researchers and practitioners to automatically generate model checking algorithms for temporal logics from algebraic specifications. These tools are extensions of algebraic compiler generation tools and are used to specify model checkers as mappings of the form L_s → L_t, where L_s is a temporal logic source language and L_t is a target language representing sets of states of a model M, such that each formula of L_s is mapped to the set of states of M in which it holds. The algebraic specifications for a model checker define the logic source language, the target language representing sets of states in a model, and the embedding of the source language into the target language. Since users can modify and extend existing specifications or write original specifications, new model checking algorithms for new temporal logics can be easily and quickly developed; this allows the user more time to experiment with the logic and its model checking algorithm instead of developing its implementation. Here we show how this algebraic framework can be used to specify model checking algorithms for CTL, a real-time CTL, CTL*, and a custom extension called CTL_e that makes use of propositions labeling the edges as well as the nodes of a model. We also show how the target language can be changed to a language of binary decision diagrams to generate symbolic model checkers from algebraic specifications.
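The formula-to-state-set mapping described above can be illustrated with a toy explicit-state checker in which each CTL operator returns the set of model states satisfying it. This is a hand-written sketch, not the paper's generated algebraic-compiler implementation, and the model, states and labels below are invented:

```python
# Toy explicit-state CTL checker: sat(model, phi) maps a formula to the
# set of states of the model that satisfy it. Model: (states, transition
# relation, labeling). EF is computed as a least fixpoint.

def sat(model, phi):
    states, trans, label = model
    def pre(S):  # states with at least one successor in S
        return {s for s in states if any(t in S for t in trans.get(s, ()))}
    op = phi[0]
    if op == "prop":
        return {s for s in states if phi[1] in label.get(s, ())}
    if op == "not":
        return states - sat(model, phi[1])
    if op == "and":
        return sat(model, phi[1]) & sat(model, phi[2])
    if op == "EX":
        return pre(sat(model, phi[1]))
    if op == "EF":  # least fixpoint: Z = sat(p) | pre(Z)
        Z, target = set(), sat(model, phi[1])
        while True:
            Znew = target | pre(Z)
            if Znew == Z:
                return Z
            Z = Znew
    raise ValueError("unknown operator: " + op)

# Three states in a chain 0 -> 1 -> 2, with a self-loop on 2, where
# only state 2 is labeled "goal".
m = ({0, 1, 2}, {0: [1], 1: [2], 2: [2]}, {2: ["goal"]})
print(sat(m, ("EF", ("prop", "goal"))))  # -> {0, 1, 2}
```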

5.
Summary. An observational approach to the construction of implementations of algebraic specifications is presented. Based on the theory of observational specifications, an implementation relation is defined which formalizes the intuitive idea that an implementation is correct if it produces correct observable output. To be useful in practice, proof-theoretic criteria for observational implementations are provided, and a proof technique (called context induction) for the verification of implementation relations is presented. As an example, an abstract specification of (the algebraic semantics of) a small imperative programming language is implemented by a state-oriented specification of the language. In order to support the modular construction of implementations, the approach is extended to parameterized observational specifications. Based on the notion of observable parameter context, a proof-theoretic criterion for parameterized observational implementations is presented, and it is shown that under appropriate conditions observational implementations compose horizontally. The given implementation criteria are applied to examples.

6.
7.
Summary. Equivalence is a fundamental notion for the semantic analysis of algebraic specifications. In this paper the notion of crypt-equivalence is introduced and studied w.r.t. two loose approaches to the semantics of an algebraic specification T: the class of all first-order models of T and the class of all term-generated models of T. Two specifications are called crypt-equivalent if for one specification there exists a predicate logic formula which implicitly defines an expansion (by new functions) of every model of that specification in such a way that the expansion (after forgetting unnecessary functions) is homologous to a model of the other specification, and if vice versa there exists another predicate logic formula with the same properties for the other specification. We speak of first-order crypt-equivalence if this holds for all first-order models, and of inductive crypt-equivalence if this holds for all term-generated models. Characterizations and structural properties of these notions are studied. In particular, it is shown that first-order crypt-equivalence is equivalent to the existence of explicit definitions and that, in the case of positive definability, two first-order crypt-equivalent specifications admit the same categories of models and homomorphisms. Similarly, two specifications which are inductively crypt-equivalent via sufficiently complete implicit definitions determine the same associated categories. Moreover, crypt-equivalence is compared with other notions of equivalence for algebraic specifications: in particular, it is shown that first-order crypt-equivalence is strictly coarser than abstract semantic equivalence and that inductive crypt-equivalence is strictly finer than inductive simulation equivalence and implementation equivalence.

8.
This paper describes techniques for automatic construction of dictionaries for use in large-scale foreign language tutoring (FLT) and interlingual machine translation (MT) systems. The dictionaries are based on a language-independent representation called lexical conceptual structure (LCS). A primary goal of the LCS research is to demonstrate that synonymous verb senses share distributional patterns. We show how the syntax–semantics relation can be used to develop a lexical acquisition approach that contributes both toward the enrichment of existing online resources and toward the development of lexicons containing more complete information than is provided in any of these resources alone. We start by describing the structure of the LCS and showing how this representation is used in FLT and MT. We then focus on the problem of building LCS dictionaries for large-scale FLT and MT. First, we describe authoring tools for manual and semi-automatic construction of LCS dictionaries; we then present a more sophisticated approach that uses linguistic techniques for building word definitions automatically. These techniques have been implemented as part of a set of lexicon-development tools used in the MILT FLT project.

9.
The lack of tools for rule generation, analysis, and run-time monitoring appears to be one of the main obstacles to the widespread adoption of active database applications. This paper describes a complete tool environment for assisting the design of active rule applications; the tools were developed at Politecnico di Milano in the context of the IDEA Project, a four-year Esprit project sponsored by the European Commission and launched in June 1992. We describe tools for active rule generation, analysis, debugging, and browsing; rules are defined in Chimera, a conceptual design model and language for the specification of active rule applications. We also introduce a tool for mapping from Chimera into Oracle, a relational product supporting triggers. Most of the tools described in this paper are fully implemented and currently in operation (beta testing) within the companies participating in the IDEA Project, with the exception of two of them (called Argonaut-V and Pandora), which will be completed by the end of 1996. Research presented in this paper is supported by Esprit project P6333 IDEA, and by ENEL contract VDS 1/94: Integrity Constraint Management.

10.
In this paper we present an approach for the analysis of graph transformation rules based on an intermediate OCL representation. We translate different rule semantics into OCL, together with the properties of interest (like rule applicability, conflicts or independence). The intermediate representation serves three purposes: (1) it allows the seamless integration of graph transformation rules with the MOF and OCL standards, and enables taking the meta-model and its OCL constraints (i.e. well-formedness rules) into account when verifying the correctness of the rules; (2) it permits the interoperability of graph transformation concepts with a number of standards-based model-driven development tools; and (3) it makes available a plethora of OCL tools to actually perform the rule analysis. This approach is especially useful to analyse the operational semantics of Domain Specific Visual Languages. We have automated these ideas by providing designers with tools for the graphical specification and analysis of graph transformation rules, including a back-annotation mechanism that presents the analysis results in terms of the original language notation.

11.
We introduce CoCasl as a lightweight but expressive coalgebraic extension of the algebraic specification language Casl. CoCasl allows the nested combination of algebraic datatypes and coalgebraic process types. Moreover, it provides syntactic sugar for an observer-indexed modal logic that allows expressing, for example, fairness properties. This logic includes a generic definition of modal operators for observers with structured equational result types. We prove the existence of final models for specifications in a format that allows the use of equationally specified initial datatypes as observations, as well as modal axioms. The use of CoCasl is illustrated by specifications of the process algebras CSP and CCS.

12.
We show that by using an intermediate representation, which supports a formalized interface on which to construct parallelization tools, the mapping of the representation onto parallel architectures can be performed quickly and efficiently. An intermediate representation called HICOR (Hierarchical Intermediate Code Object Representation) is shown to facilitate the exploitation of parallel operations by providing an abstraction layer for performing high-level intermediate code analysis, scheduling, and code generation. An object-oriented design approach has been employed in the development of HICOR and associated tools. Source language constructs are transformed into specialized object classes. Inheritance properties provided by the object-oriented paradigm are utilized to provide a common interface to each object in the HICOR representation. It is this interface that provides the consistency and flexibility needed to construct tools, which has so far been lacking. In particular, a tool to perform Compile-Time Scheduling is presented. The scheduling algorithm employed differs from traditional scheduling approaches in that merging of tasks is performed to reduce both task creation and communication costs in determining the final schedule. Architectural parameters are provided as input to the heuristic, allowing the scheduler to produce near-optimal results for a wide variety of MIMD architectures. Once the final schedule is determined, the Target Code Generator, also presented, is used to generate the corresponding target code. A prototype system has been implemented in C++ which incorporates the HICOR intermediate representation with the tools described. The target architectures include the Sun 630 MP/4, Sequent Symmetry S81, and Stardent Titan.
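The task-merging idea can be sketched as a greedy clustering over communication edges: when an edge's communication cost outweighs the benefit of running its endpoints on separate processors, the two tasks are merged so they are scheduled together. This is a hypothetical simplification for illustration, not HICOR's actual heuristic, and the task names, costs and threshold are invented:

```python
# Greedy task merging: collapse the most expensive communication edges
# first, using a simple union-find over task clusters.

def merge_tasks(costs, edges, comm_threshold):
    """costs: {task: compute_cost}; edges: {(u, v): comm_cost}.
    Returns a mapping task -> cluster representative."""
    cluster = {t: t for t in costs}
    def find(t):
        while cluster[t] != t:
            t = cluster[t]
        return t
    # Merge across the most expensive communication edges first.
    for (u, v), c in sorted(edges.items(), key=lambda e: -e[1]):
        if c >= comm_threshold and find(u) != find(v):
            cluster[find(v)] = find(u)
    return {t: find(t) for t in cluster}

costs = {"a": 5, "b": 3, "c": 4}
edges = {("a", "b"): 10, ("b", "c"): 1}
# The a->b edge is expensive, so a and b end up in one cluster; c stays alone.
print(merge_tasks(costs, edges, comm_threshold=5))
```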

13.
The paper proposes an algebraic representation of program modules, called F(p), and illustrates the algorithms that use F(p) to generate program graph models for measurement, documentation and testing activities. The representation refers to procedural languages, D-structured programs and one-in/one-out modules but its definition can be extended to programs structured in terms of an arbitrary set of one-in/one-out legal control structures. Since it is possible to produce F(p) directly from the program code using reverse engineering techniques, the algorithms proposed are of considerable interest for the setting up of tools supporting the program comprehension phase, which is a fundamental first step in any maintenance operation.

14.
The paper presents a methodology for developing a user interface that combines fourth-generation interface tools (SQL forms) with a natural language processor for a database management system. The natural language processor consists of an index, a lexicon and a parser. The index is used to uniquely identify each form in the system through a conceptual representation of its purpose. The form fields specify database or nondatabase fields whose values are either entered by the user (user-defined) or are derived by the form (system-defined) in response to user input. A set of grammar rules is associated with each form. The lexicon consists of all words recognized by the system, their grammatical categories, roots, and their associations (if any) with database objects and forms. The parser scans a natural language query to identify a form in a bottom-up fashion. The information requested in the user query is determined in a top-down manner by parsing through the grammar rules associated with the identified form. Extragrammatical inputs with limited deviations from the grammar rules are supported. Combining a natural language processor with SQL forms allows processing data modification tasks without violating any database integrity constraint, having duplicate records, or entering invalid data. A prototype natural language interface is described as a front-end to an ORACLE database for a computer-integrated manufacturing system.
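The bottom-up form-identification step can be sketched as a vote over lexicon entries: each query word that the lexicon associates with a form counts toward that form, and the best-scoring form is selected. The lexicon entries and form names below are invented for illustration, and the real system then parses top-down using the identified form's grammar rules:

```python
# Toy lexicon: word -> grammatical category and associated form.
lexicon = {
    "order": {"cat": "noun", "form_assoc": "order_entry"},
    "place": {"cat": "verb", "form_assoc": "order_entry"},
    "part":  {"cat": "noun", "form_assoc": "inventory"},
    "stock": {"cat": "noun", "form_assoc": "inventory"},
}

def identify_form(query):
    """Bottom-up step: count lexicon hits per form, pick the best match."""
    votes = {}
    for word in query.lower().split():
        entry = lexicon.get(word.strip("?.,"))  # ignore trailing punctuation
        if entry:
            form = entry["form_assoc"]
            votes[form] = votes.get(form, 0) + 1
    return max(votes, key=votes.get) if votes else None

print(identify_form("Is this part in stock?"))  # -> inventory
```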

15.
This paper presents a methodology for generating space-efficient code for BCPL and BCPL-like programming languages. An intermediate language called ICE was designed which, while preserving those program features salient to translation, also tries to minimize the number of instructions generated. This internal representation is then realized as an actual machine. The encoding scheme (called ES0) is based on usage frequencies of instructions and other real-world constraints on machines such as word size and addressing space. Using a large sample of BCPL programs, it is shown that BCODE, which is a realization of OCODE (the intermediate language currently used for BCPL program translation), takes up an average of 32 per cent more space than ICE/ES0.
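A frequency-based encoding in this spirit can be sketched by giving the most frequent instructions short one-byte opcodes and relegating the rest to two-byte escape-prefixed opcodes. This is a hypothetical scheme for illustration only; the actual ES0 design also reflects word-size and addressing-space constraints:

```python
# Assign opcode widths from usage frequency: the `short_slots` most
# frequent instructions get 1-byte opcodes, all others get 2 bytes.
from collections import Counter

def assign_opcodes(instr_stream, short_slots=255):
    freq = Counter(instr_stream)
    table = {}
    for rank, (op, _) in enumerate(freq.most_common()):
        table[op] = 1 if rank < short_slots else 2  # bytes per opcode
    return table

def encoded_size(instr_stream, table):
    return sum(table[op] for op in instr_stream)

# Skewed usage: frequent opcodes dominate, so short codes pay off.
stream = ["LOAD"] * 50 + ["STORE"] * 30 + ["JMP"] * 5
table = assign_opcodes(stream, short_slots=2)  # only 2 one-byte slots
print(encoded_size(stream, table))  # -> 90  (50 + 30 + 2*5 bytes)
```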

16.
HCDFG-II: A Control/Data Flow Graph Representation for C-Language System Descriptions
This paper introduces HCDFG-II, a hierarchical control/data flow graph used as an intermediate representation for hardware/software partitioning of C-language system descriptions, together with its translation method. HCDFG-II extends HCDFG according to the characteristics of the C language: memory-access nodes are introduced to represent and handle arrays and pointers in C programs, and concurrency structures are defined to represent the parts of a C program that can execute concurrently. Moreover, because HCDFG-II adopts a control-flow-dominant structure, translation from C is straightforward. This makes it an effective intermediate representation for C-language system descriptions and provides more precise information for hardware/software partitioning.

17.
The Common Lisp Object System (CLOS) is an extension of Common Lisp for object-oriented programming being designed as part of the ANSI X3J13 Common Lisp standardization process. This report describes an algebraic specification of the method combination and application component of CLOS. The specification is based on a draft of the standard presented to the ANSI committee in spring of 1987, and was done using an executable, typed functional programming language called Axis. The result suggests a logical mapping from the abstract data types and operations in the specification to the classes and methods which could become that part of the CLOS kernel (called the metaobject protocol) involved in method combination. In addition, the existence of a formal algebraic specification for CLOS allows the effects of changes in the language to be tested during the design phase, rather than during implementation. This is illustrated by showing how the language semantics change when the call-next-method function is allowed to take arguments, an extension proposed in the draft standard. Standardization documents like the CLOS standard are usually written in a semi-natural language, but if they are accompanied by an additional formal specification during their preparation, the probability of undetected or late-discovered design errors decreases, and the specification also provides an unambiguous source of reference for implementors.

18.
The specification language Csp-Casl allows one to model processes as well as data of distributed systems within one framework. In our paper, we describe how a combination of the existing tools Hets and Csp-Prover can solve the challenges that Csp-Casl raises on integrated theorem proving for processes and data. For building this new tool, the automated generation of theorems and their proofs in Isabelle/HOL plays a fundamental role. A case study of industrial strength demonstrates that our approach scales up to complex problems.

19.
Many modern program verifiers translate the program to be verified and its specification into a simple intermediate representation and then compute verification conditions on this representation. Using an intermediate language improves the interoperability of tools and facilitates the computation of small verification conditions. Even though the translation into an intermediate representation is critical for the soundness of a verifier, this step has not been formally verified. In this paper, we formalize the translation of a small subset of Java bytecode into an imperative intermediate language similar to BoogiePL. We prove soundness of the translation by showing that each bytecode method whose BoogiePL translation can be verified, can also be verified in a logic that operates directly on bytecode.
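The flavour of such a translation can be sketched by rewriting a stack-based bytecode fragment into simple assignments over fresh temporaries, the kind of statement-level intermediate form (in the style of BoogiePL) that a verifier consumes. The miniature instruction set below is invented, not real Java bytecode, and this is not the paper's formalized translation:

```python
# Translate a toy stack bytecode into assignment statements over fresh
# temporaries, eliminating the operand stack.

def translate(bytecode):
    stack, out, tmp = [], [], 0
    for ins in bytecode:
        op = ins[0]
        if op == "push":              # push a constant
            stack.append(str(ins[1]))
        elif op == "load":            # push a variable
            stack.append(ins[1])
        elif op == "add":             # pop two, emit t := a + b, push t
            b, a = stack.pop(), stack.pop()
            name = f"t{tmp}"
            tmp += 1
            out.append(f"{name} := {a} + {b}")
            stack.append(name)
        elif op == "store":           # pop into a variable
            out.append(f"{ins[1]} := {stack.pop()}")
    return out

prog = [("load", "x"), ("push", 1), ("add",), ("store", "y")]
for line in translate(prog):
    print(line)
# t0 := x + 1
# y := t0
```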

20.
MPI for Python     
MPI for Python provides bindings of the Message Passing Interface (MPI) standard for the Python programming language and allows any Python program to exploit multiple processors. This package is constructed on top of the MPI-1 specification and defines an object-oriented interface which closely follows the MPI-2 C++ bindings. It supports point-to-point (sends, receives) and collective (broadcasts, scatters, gathers) communications of general Python objects. Efficiency has been tested on a Beowulf-class cluster and satisfactory results were obtained. MPI for Python is open source and available for download on the web (http://www.cimec.org.ar/python).
