Similar Literature
20 similar documents found.
1.
Attribute grammar specification languages, like many domain specific languages, offer significant advantages to their users, such as high-level declarative constructs and domain-specific analyses. Despite these advantages, attribute grammars are often not adopted to the degree that their proponents envision. One practical obstacle to their adoption is a perceived lack of both domain-specific and general purpose language features needed to address the many different aspects of a problem. Here we describe Silver, an extensible attribute grammar specification language, and show how it can be extended with general purpose features such as pattern matching and domain specific features such as collection attributes and constructs for supporting data-flow analysis of imperative programs. The result is an attribute grammar specification language with a rich set of language features. Silver is implemented in itself by a Silver attribute grammar and utilizes forwarding to implement the extensions in a cost-effective manner.
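Silver's own syntax is not shown in the abstract above, so the following is only a minimal Scala sketch of the forwarding idea it mentions: an extension construct answers attribute queries by delegating to its translation into host-language constructs. The names (`Stmt`, `countAssigns`, `ForLoop`) are illustrative assumptions, not Silver code.

```scala
// Sketch of "forwarding": an extension node answers attribute queries by
// delegating to its translation into core-language constructs.
sealed trait Stmt {
  // A synthesized attribute: number of assignments in the subtree.
  def countAssigns: Int = this match {
    case Assign(_, _)   => 1
    case Block(stmts)   => stmts.map(_.countAssigns).sum
    case While(_, body) => body.countAssigns
    case ext: Extension => ext.forwardTo.countAssigns // forwarding
  }
}
final case class Assign(lhs: String, rhs: String) extends Stmt
final case class Block(stmts: List[Stmt]) extends Stmt
final case class While(cond: String, body: Stmt) extends Stmt

// Extension constructs forward any attribute they do not define themselves.
sealed trait Extension extends Stmt { def forwardTo: Stmt }

// A for-loop extension defined by translation to init / while / increment.
final case class ForLoop(init: Assign, cond: String, step: Assign, body: Stmt)
    extends Extension {
  def forwardTo: Stmt = Block(List(init, While(cond, Block(List(body, step)))))
}

object ForwardingDemo {
  def main(args: Array[String]): Unit = {
    val loop = ForLoop(Assign("i", "0"), "i < 10", Assign("i", "i + 1"),
                       Assign("sum", "sum + i"))
    // The extension never defines countAssigns; the answer (3) comes from
    // its forwarded translation.
    println(loop.countAssigns)
  }
}
```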

2.
Attribute grammars are a powerful specification paradigm for many language processing tasks, particularly semantic analysis of programming languages. Recent attribute grammar systems use dynamic scheduling algorithms to evaluate attributes on demand. In this paper, we show how to remove the need for a generator, by embedding a dynamic approach in a modern, object-oriented and functional programming language. The result is a small, lightweight attribute grammar library that is part of our larger Kiama language processing library. Kiama's attribute grammar library supports a range of advanced features including cached, uncached, higher order, parameterised and circular attributes. Forwarding is available to modularise higher order attributes and decorators abstract away from the details of attribute value propagation. Kiama also implements new techniques for dynamic extension and variation of attribute equations. We use the Scala programming language because of its support for domain-specific notations and emphasis on scalability. Unlike generators with specialised notation, Kiama attribute grammars use standard Scala notations such as pattern-matching functions for equations, traits and mixins for composition and implicit parameters for forwarding. A benchmarking exercise shows that our approach is practical for realistic language processing.
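Kiama's actual API is not quoted in the abstract, so here is a self-contained Scala sketch of the style it describes: a cached attribute is a memoised function whose equations are written as a pattern-matching function over tree nodes. The `attr` helper and the toy expression tree are assumptions for illustration, not Kiama code (real dynamic AG libraries memoise by node identity and handle parameterised and circular attributes as well).

```scala
import scala.collection.mutable

// A toy expression tree.
sealed trait Exp
final case class Num(i: Int) extends Exp
final case class Add(l: Exp, r: Exp) extends Exp
final case class Mul(l: Exp, r: Exp) extends Exp

object AttributeSketch {
  // A cached attribute: evaluated on demand, memoised per node.
  // (This sketch keys the cache structurally; real systems use node identity.)
  def attr[T <: AnyRef, U](equations: T => U): T => U = {
    val cache = mutable.Map.empty[T, U]
    (t: T) => cache.getOrElseUpdate(t, equations(t))
  }

  // Attribute equations written as ordinary Scala pattern matching.
  val depth: Exp => Int = attr[Exp, Int] {
    case Num(_)    => 1
    case Add(l, r) => 1 + (depth(l) max depth(r))
    case Mul(l, r) => 1 + (depth(l) max depth(r))
  }

  def main(args: Array[String]): Unit = {
    val e = Add(Num(1), Mul(Num(2), Add(Num(3), Num(4))))
    println(depth(e)) // 4
  }
}
```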

3.
We have developed novel techniques for component-based specification of programming languages. In our approach, the semantics of each fundamental programming construct is specified independently, using an inherently modular framework such that no reformulation is needed when constructs are combined. A language specification consists of an unrestricted context-free grammar for the syntax of programs, together with an analysis of each language construct in terms of fundamental constructs. An open-ended collection of fundamental constructs is currently being developed. When supported by appropriate tools, our techniques allow a more agile approach to the design, modelling, and implementation of programming and domain-specific languages. In particular, our approach encourages language designers to proceed incrementally, using prototype implementations generated from specifications to test tentative designs. The components of our specifications are independent and highly reusable, so initial language specifications can be rapidly produced, and can easily evolve in response to changing design decisions. In this paper, we outline our approach, and relate it to the practices and principles of agile modelling.

4.
5.
6.
In this paper we present a framework for the fast prototyping of visual languages exploiting their local-context-based specification. In previous research, the local context specification has been used as a weak form of syntactic specification to define when visual sentences are well formed. In this paper we add new features to the local context specification in order to fully specify complex constructs of visual languages such as entity-relationship, use case, and class diagrams. One of the advantages of this technique is its simplicity of application and, to show this, we present a tool implementing our framework. Moreover, we describe a user study aimed at evaluating the effectiveness and user satisfaction when prototyping a visual language.

7.
A language implementation with proper compositionality enables a compiler developer to divide-and-conquer the complexity of building a large language by constructing a set of smaller languages. Ideally, these small language implementations should be independent of each other such that they can be designed, implemented and debugged individually, and later be reused in different applications (e.g., building domain-specific languages). However, the language composition offered by several existing parser generators resides at the grammar level, which means all the grammar modules need to be composed together and all corresponding ambiguities have to be resolved before generating a single parser for the language. This produces tight coupling between grammar modules, which harms information hiding and affects independent development of language features. To address this problem, we have developed a novel parsing algorithm that we call Component-based LR (CLR) parsing, which provides code-level compositionality for language development by producing a separate parser for each grammar component. In addition to shift and reduce actions, the algorithm extends general LR parsing by introducing switch and return actions to empower the parsing action to jump from one parser to another. Our experimental evaluation demonstrates that CLR increases the comprehensibility, reusability, changeability and independent development ability of the language implementation. Moreover, the loose coupling among parser components enables CLR to describe grammars that contain LR parsing conflicts or require ambiguous token definitions, such as island grammars and embedded languages.
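The published CLR tables and driver are not reproduced here, so the Scala sketch below only illustrates the two extra action kinds the abstract names: besides shift and reduce, a component's parser can switch to another component and return when it finishes. The token-level dispatch stands in for real LR tables and is purely an illustrative assumption.

```scala
// The extra parse actions CLR adds on top of ordinary LR shift/reduce.
sealed trait Action
case object Shift extends Action
final case class Reduce(rule: String) extends Action
final case class Switch(component: String) extends Action // jump to another component's parser
case object Return extends Action                          // give control back to the caller

// A grossly simplified per-component parser: it only decides, per token,
// which action to take; real CLR components carry full LR tables.
final case class Component(name: String, actionFor: String => Action)

object ClrSketch {
  // Host statement language that embeds an expression language inside "( ... )".
  val expr: Component = Component("expr", {
    case ")" => Return          // embedded parse finished, hand control back
    case _   => Shift
  })
  val host: Component = Component("host", {
    case "(" => Switch("expr")  // hand the input over to the expression component
    case ";" => Reduce("Stmt -> ... ;")
    case _   => Shift
  })
  val components = Map("host" -> host, "expr" -> expr)

  // A driver that threads the token stream through a stack of components.
  def run(tokens: List[String]): Unit = {
    var stack = List(host)                  // component call stack
    for (tok <- tokens) {
      val action = stack.head.actionFor(tok)
      println(s"${stack.head.name}: $tok -> $action")
      action match {
        case Switch(c) => stack = components(c) :: stack
        case Return    => stack = stack.tail
        case _         => ()                // shift/reduce stays inside the component
      }
    }
  }

  def main(args: Array[String]): Unit =
    run(List("x", "=", "(", "a", "+", "b", ")", ";"))
}
```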

8.
The specification of realistic programming languages is difficult and expensive. One approach to make language specification more attractive is the development of techniques and systems for the generation of language-specific software from specifications. To contribute to this approach, a tool-based framework with the following features is presented: It supports new techniques to specify more language aspects in a static fashion. This improves the efficiency of generated software. It provides powerful interfaces to generated software components. This facilitates the use of these components as parts of language-specific software. It has a rather simple formal semantics. In the framework, static semantics is defined by a very general attribution technique enabling e.g. the specification of flow graphs. The dynamic semantics is defined by evolving algebra rules, a technique that has been successfully applied to realistic programming languages. After providing the formal background of the framework, an object-oriented programming language is specified to illustrate the central specification features. In particular, it is shown how parallelism can be handled. The relationship to attribute grammar extensions is discussed using a non-trivial compiler problem. Finally, the paper describes new techniques for implementing the framework and reports on experiences made so far with the implemented system.

9.
Building verified compilers is difficult, especially when complex analyses such as type checking or data-flow analysis must be performed. Both the type checking and program optimization communities have developed methods for proving the correctness of these processes and developed tools for using, respectively, verified type systems and verified optimizations. However, it is difficult to use both of these analyses in a single declarative framework since these processes work on different program representations: type checking on abstract syntax trees and data-flow analysis-based optimization on control flow or program dependency graphs. We present an attribute grammar specification language that has been extended with constructs for specifying attribute-labelled control flow graphs and both CTL and LTL-FV formulas that specify data-flow analyses. These formulas are model-checked on these graphs to perform the specified analyses. Thus, verified type rules and verified data-flow analyses (verified either by hand or with automated proof tools) can both be transcribed into a single declarative framework based on attribute grammars to build high-confidence language implementations. Also, the attribute grammar specification language is extensible, so it is relatively straightforward to add new constructs for different temporal logics, allowing alternative logics and model checkers to be used to specify data-flow analyses in this framework.
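The paper's specification notation is not given in the abstract, so the sketch below shows only the underlying mechanism in Scala: a control-flow graph whose nodes carry atomic labels, and checks for two representative CTL operators (EF and AG) computed by graph reachability. The graph, the labels, and the demo property are assumptions for illustration, not the paper's language.

```scala
// CTL-style checks over a labelled control-flow graph, the kind of check used
// to express data-flow properties (e.g. "every definition can reach a use").
object CtlOnCfg {
  type Node = Int
  // Successor edges of a tiny CFG:  0 -> 1 -> 2 -> 3,  with a loop 2 -> 1.
  val succ: Map[Node, Set[Node]] =
    Map(0 -> Set(1), 1 -> Set(2), 2 -> Set(1, 3), 3 -> Set.empty)
  // Atomic propositions attached to nodes (which variables are defined/used).
  val labels: Map[Node, Set[String]] =
    Map(0 -> Set("def x"), 1 -> Set("use x"), 2 -> Set("def x"), 3 -> Set("use x"))

  def holds(p: String)(n: Node): Boolean = labels(n).contains(p)

  // EF p : some path from n reaches a node where p holds.
  def ef(p: Node => Boolean)(n: Node): Boolean = {
    val seen = scala.collection.mutable.Set.empty[Node]
    def go(m: Node): Boolean = p(m) || (seen.add(m) && succ(m).exists(go))
    go(n)
  }

  // AG p : p holds at every node reachable from n (including n itself).
  def ag(p: Node => Boolean)(n: Node): Boolean = {
    val seen = scala.collection.mutable.Set.empty[Node]
    def go(m: Node): Boolean = p(m) && (!seen.add(m) || succ(m).forall(go))
    go(n)
  }

  def main(args: Array[String]): Unit = {
    // From the entry node, can some path reach a use of x?
    println(ef(holds("use x"))(0))                                  // true
    // Does every reachable definition of x have a use at or after it on some path?
    println(ag(n => !holds("def x")(n) || ef(holds("use x"))(n))(0)) // true
  }
}
```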

10.
We present a method for profiling programs that are written using domain-specific languages. Instead of reporting execution in terms of implementation details as in most existing profilers, our method operates at the level of the problem domain. Program execution generates a stream of events that summarises the execution in terms of domain concepts and operations. The events enable us to construct a hierarchical model of the execution. A flexible reporting system summarises the execution along developer-chosen model dimensions. The result is a flexible way for a developer to explore the execution of their program without requiring any knowledge of the domain-specific language implementation. These ideas are embodied in a new profiling library called dsprofile that is independent of the problem domain so it has no specific knowledge of the data and operations that are being profiled. We illustrate the utility of dsprofile by using it to profile programs that are written using our Kiama language processing library. Specifically, we instrument Kiama's attribute grammar and term rewriting domain-specific languages to use dsprofile to generate events that report on attribute evaluation and rewrite rule application. Examples of typical language processing tasks show how domain-specific profiling can help to diagnose problems in Kiama-based programs without the developer needing to know anything about how Kiama is implemented.
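dsprofile's real interface is not shown above, so this Scala sketch merely mirrors the idea: the program records events tagged with domain-level dimensions (here, attribute name and node kind), and a reporter totals elapsed time along whichever dimension the developer chooses. All names in the sketch are hypothetical, not dsprofile's API.

```scala
import scala.collection.mutable.ListBuffer

// Domain-level profiling: events talk about attributes and nodes,
// not about the profiling library's internal classes.
final case class Event(dimensions: Map[String, String], nanos: Long)

object DomainProfiler {
  private val events = ListBuffer.empty[Event]

  // Time a computation and record it with developer-chosen dimensions.
  def profile[A](dimensions: (String, String)*)(body: => A): A = {
    val start = System.nanoTime()
    try body
    finally events += Event(dimensions.toMap, System.nanoTime() - start)
  }

  // Summarise total time along one dimension (e.g. "attribute" or "node").
  def report(dimension: String): Unit =
    events
      .groupBy(_.dimensions.getOrElse(dimension, "<none>"))
      .view.mapValues(_.map(_.nanos).sum)
      .toSeq.sortBy(-_._2)
      .foreach { case (key, total) => println(f"$key%-20s ${total / 1e6}%.3f ms") }
}

object ProfilerDemo {
  def main(args: Array[String]): Unit = {
    // Pretend these are attribute evaluations on AST nodes.
    for (i <- 1 to 1000)
      DomainProfiler.profile("attribute" -> "type", "node" -> "Add") { i * i }
    for (i <- 1 to 500)
      DomainProfiler.profile("attribute" -> "env", "node" -> "Let") { i.toString }
    DomainProfiler.report("attribute") // totals per attribute
    DomainProfiler.report("node")      // same events, sliced per node kind
  }
}
```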

11.
While grammar inference (or grammar induction) has found extensive application in the areas of robotics, computational biology, and speech recognition, its application to problems in programming language and software engineering domains has been limited. We have found a new application area for grammar inference which intends to make domain-specific language development easier for domain experts not well versed in programming language design, and finds a second application in construction of renovation tools for legacy software systems. As a continuation of our previous efforts to infer context-free grammars (CFGs) for domain-specific languages which previously involved a genetic-programming based CFG inference system, we discuss extensions to the inference capabilities of GenInc, an incremental learning algorithm for inferring CFGs. We show that these extensions enable GenInc to infer more comprehensive grammars, discuss the results of applying GenInc to various domain-specific languages and evaluate the results using a comprehensive suite of grammar metrics.

12.
A Survey of Formal Grammar Descriptions of Visual Languages
许红霞 张莉 《计算机科学》2005, 32(4): 201-204
Visualization is a principal form of human-computer interaction, and visual languages are an important research area in computer science; grammars provide a valuable formal method for describing them. Based on the characteristics of visual languages, this paper introduces the basic theory of formal grammar description frameworks for visual languages, analyses several typical formal models, and discusses the main current research topics and remaining challenges.

13.
XML is successful as a machine processable data interchange format, but it is often too verbose for human use. For this reason, many XML languages permit an alternative more legible non-XML syntax. XSLT stylesheets are often used to convert from the XML syntax to the alternative syntax; however, such transformations are not reversible since no general tool exists to automatically parse the alternative syntax back into XML.

We present XSugar, which makes it possible to manage dual syntax for XML languages. An XSugar specification is built around a context-free grammar that unifies the two syntaxes of a language. Given such a specification, the XSugar tool can translate from alternative syntax to XML and vice versa. Moreover, the tool statically checks that the transformations are reversible and that all XML documents generated from the alternative syntax are valid according to a given XML schema.
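XSugar's specification notation is not reproduced in the abstract, so the snippet below is only a Scala sketch of the dual-syntax idea: one record type, a non-XML line syntax and an XML syntax for it, translations in both directions, and a round-trip assertion standing in for the reversibility check that XSugar derives automatically from a single grammar. The record format and element names are made up.

```scala
// A record with two concrete syntaxes: a line format "name <email>" and XML.
object DualSyntax {
  final case class Student(name: String, email: String)

  // Non-XML -> data -> XML. (Malformed input is not handled in this sketch.)
  private val Line = """(.+) <(.+)>""".r
  def parseLine(s: String): Student = s match {
    case Line(name, email) => Student(name, email)
  }
  def toXml(st: Student): String =
    s"<student><name>${st.name}</name><email>${st.email}</email></student>"

  // XML -> data -> non-XML.
  private val Xml =
    """<student><name>(.+)</name><email>(.+)</email></student>""".r
  def parseXml(s: String): Student = s match {
    case Xml(name, email) => Student(name, email)
  }
  def toLine(st: Student): String = s"${st.name} <${st.email}>"

  def main(args: Array[String]): Unit = {
    val original = "Ada Lovelace <ada@example.org>"
    val xml = toXml(parseLine(original))
    println(xml)
    // Reversibility check: going to XML and back yields the original text.
    assert(toLine(parseXml(xml)) == original)
  }
}
```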


14.
Communication protocols are at the core of network technology. Because of the inherent complexity of network protocols in heterogeneous environments, protocol development methods and integrated tool environments built on rigorous mathematical models are needed to reduce the difficulty and improve the efficiency of protocol development. This paper explores an extended attribute grammar for the formal description and engineering development of protocols, focusing on the principles and structural characteristics of the attribute grammar description language we designed. A complete example shows how a protocol is defined with our attribute grammar description language L_PSAG, and then presents an attribute-grammar-based protocol …

15.
Type analysis can be characterized by a language-independent collection of standard computational roles such as 'typed identifier use' (e.g. a variable name appearing in an expression) and 'dyadic expression' (e.g. addition of two values). A type analyzer for a specific language is then defined by stating which language construct(s) play each role. The computational roles provide a framework for understanding the general process and a vocabulary for applying that understanding to the solution of particular problems. We have captured this knowledge in attribute grammar modules that are carefully designed to be combinable and adaptable, exporting language-independent roles that define the general type analysis problem. From this collection, the compiler designer instantiates the appropriate modules and identifies the relevant source-language constructs; an attribute grammar processor then weaves the necessary computations into the compiler's semantic analyzer. Our attribute grammar modules provide a precise definition of the actions constituting the various roles and the dependences among them. They can therefore also be used to describe the type analysis process to students, or to specify a hand-coded semantic analyzer.
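The attribute grammar modules themselves are not listed in the abstract, so the following Scala sketch only illustrates the role idea: a reusable 'dyadic expression' rule is stated once, and a toy language declares which of its constructs play that role (and the 'typed identifier use' role). The traits and types are assumptions, not the authors' modules.

```scala
// Language-independent type-analysis "roles", instantiated per language.
sealed trait Type
case object IntT extends Type
case object RealT extends Type
case object ErrorT extends Type

// Role: dyadic expression. The balancing rule is written once, independently
// of any particular source language.
trait DyadicExpression {
  def dyadicResult(left: Type, right: Type): Type = (left, right) match {
    case (IntT, IntT)                  => IntT
    case (RealT, IntT) | (IntT, RealT) => RealT
    case (RealT, RealT)                => RealT
    case _                             => ErrorT
  }
}

// Role: typed identifier use (a name appearing in an expression).
trait TypedIdentifierUse {
  def env: Map[String, Type]
  def identifierType(name: String): Type = env.getOrElse(name, ErrorT)
}

// A specific toy language states which of its constructs play which role.
object ToyTypeAnalyzer extends DyadicExpression with TypedIdentifierUse {
  sealed trait Exp
  final case class Var(name: String) extends Exp
  final case class Lit(value: Double, isInt: Boolean) extends Exp
  final case class Plus(l: Exp, r: Exp) extends Exp // plays the dyadic role

  val env: Map[String, Type] = Map("i" -> IntT, "x" -> RealT)

  def typeOf(e: Exp): Type = e match {
    case Var(n)        => identifierType(n)          // typed identifier use
    case Lit(_, true)  => IntT
    case Lit(_, false) => RealT
    case Plus(l, r)    => dyadicResult(typeOf(l), typeOf(r))
  }

  def main(args: Array[String]): Unit =
    println(typeOf(Plus(Var("i"), Plus(Var("x"), Lit(1.0, isInt = true))))) // RealT
}
```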

16.
Holmes N. Computer, 2004, 37(3)
When involved in a project to develop an intermediary language, translators, interpreters, and linguists will need to work closely together, as grammar and vocabulary are closely interdependent. In this case, both must cope with the translation of many hundreds of wildly different languages. What role would computing professionals play in such a team? Given the project's purpose - to make general machine translation possible - computing professionals would be of vital importance, but in a supporting role. Using different approaches to evaluate the intermediate language and its use for a variety of languages would require a succession of translation programs.

17.
Attribute grammars are a value-oriented, non-procedural extension to context-free grammars that facilitate the specification of translations whose domain is described by the underlying context-free grammar. Just as parsers for context-free languages can be automatically constructed from a context-free grammar, so can translators, called attribute evaluators, be automatically generated from an attribute grammar. A major obstacle to generating efficient attribute evaluators is that they typically use large amounts of memory to represent the attributed parse tree. In this report we investigate the problem of efficient representation of the attributed parse tree by analyzing and comparing the strategies of two systems that have been used to automatically generate a translator from an attribute grammar: the GAG system developed at the Universität Karlsruhe and the LINGUIST-86 system written at Intel Corporation. Our analysis will characterize the two strategies and highlight their respective strengths and weaknesses. Drawing on the insights given by this analysis, we propose a strategy for storage optimization in automatically generated attribute evaluators that not only incorporates the best features of both GAG and LINGUIST-86, but also contains novel features that address aspects of the problem that are handled poorly by both systems.
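GAG's and LINGUIST-86's optimizations are only characterized informally above, so the Scala sketch below demonstrates just the underlying trade-off: an inherited attribute (block nesting depth) can be stored in every node of the attributed tree, or, when the evaluation order allows, replaced by a single global cell updated on entry to and exit from each node. The toy tree and names are illustrative assumptions.

```scala
// Two ways to supply an inherited "nesting depth" attribute to a tree walk.
sealed trait Tree
final case class Nest(children: List[Tree]) extends Tree
final case class Leaf(text: String) extends Tree

object AttributeStorage {
  // 1. Tree storage: every node keeps its own copy of the attribute value.
  def annotate(t: Tree, depth: Int): List[(Tree, Int)] = t match {
    case n @ Nest(cs) => (n, depth) :: cs.flatMap(annotate(_, depth + 1))
    case l: Leaf      => List((l, depth))
  }

  // 2. Global-variable storage: because the attribute is only consulted during
  //    a single left-to-right walk, one mutable cell suffices; it is bumped on
  //    entry to a Nest node and restored on exit.
  private var depth = 0
  def evaluate(t: Tree): Unit = t match {
    case Nest(cs) =>
      depth += 1
      cs.foreach(evaluate)
      depth -= 1
    case Leaf(text) =>
      println(("  " * depth) + text)
  }

  def main(args: Array[String]): Unit = {
    val tree = Nest(List(Leaf("a"), Nest(List(Leaf("b"))), Leaf("c")))
    annotate(tree, 0).foreach(println) // attribute stored per node
    evaluate(tree)                     // same information, one global cell
  }
}
```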

18.
19.
In this paper, we describe PAG (Prototyping with Attribute Grammars), a framework for building Prolog prototypes from specifications based on attribute grammars, which we have developed for supporting rapid prototyping activities in an introductory course on language processors. This framework works for general non-circular attribute grammars with arbitrary underlying context-free grammars, includes a specification language embedded in Prolog that strongly resembles the attribute grammar notations explained in the course cited, and lets students produce comprehensible prototypes from their specifications in a straightforward way.

20.
Many architectural languages have been proposed in the last 15 years, each one with the chief aim of becoming the ideal language for specifying software architectures. What is evident nowadays, instead, is that architectural languages are defined by stakeholder concerns. Capturing all such concerns within a single, narrowly focused notation is impossible. At the same time, it is also impractical to define and use a "universal" notation, such as UML. As a result, many domain-specific notations for architectural modeling have been proposed, each one focusing on a specific application domain, analysis type, or modeling environment. As a drawback, a proliferation of languages exists, each one with its own specific notation, tools, and domain specificity. No effective interoperability is possible to date. Therefore, if a software architect has to model a concern not supported by his own language/tool, he has to manually transform (and, eventually, keep aligned) the available architectural specification into the required language/tool. This paper presents DUALLy, an automated framework that allows interoperability among architectural languages and tools. Given a number of architectural languages and tools, they can all interoperate thanks to automated model transformation techniques. DUALLy is implemented as an Eclipse plugin. Putting it into practice, we apply the DUALLy approach to the Darwin/FSP ADL and to a UML2.0 profile for software architectures. Using a complex industrial system, we transform a UML software architecture specification into Darwin/FSP, perform some verifications using LTSA, and reflect the changes required by the verifications back into the UML specification.
