Similar Documents
20 similar documents found (search time: 31 ms)
1.
Affix grammars are an extension of context-free grammars which retain most of their advantages and eliminate most of their limitations with respect to the definition of programming languages and the specification of their translators. The extension allows definition of context-sensitive syntax features, and also allows semantics to be linked to syntax. In this paper, the parsing problem for affix grammars is explored and shown to be closely related to the parsing problem for context-free grammars. This enables a standard context-free parser constructor to be generalised to a constructor for affix grammars, essentially by addition of a preprocessor. The resulting constructors are compared with previously implemented or proposed constructors.

2.
A Preliminary Study of Context-Sensitive Graph Grammar Parsing and Its Applications (cited by 3 in total: 0 self, 1 others)
冉平  石兵  马晓星  吕建 《计算机科学》2006,33(3):255-260
Graph grammars are a metalanguage for the formal definition of visual languages; they are natural in expression and powerful in capability. With the wide adoption of end-user programming techniques based on visual languages, graph grammar parsing, and context-sensitive graph grammar parsing in particular, is becoming increasingly important in engineering applications. The existing literature, however, either focuses on pure theory or is limited to specific applications of specific grammar classes, making it difficult for practitioners to use as a reference. This paper adopts a concise notation to present the general process of context-sensitive graph grammar parsing, formulates the key step of rule selection as a constraint satisfaction problem (CSP), and applies existing CSP optimization techniques to optimize the algorithm; the existing optimization methods are surveyed and an implementation of the algorithm is given. Drawing on the authors' own practice, the paper also discusses an application in an architecture-oriented Web service integration system.
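The abstract above casts the rule-selection step of parsing as a constraint satisfaction problem. As a minimal sketch of what that means, the following generic backtracking CSP solver assigns rule slots (variables) to host-graph elements (domain values) subject to constraints; the variables, domains, and constraints are illustrative, not taken from the paper.

```python
# Generic backtracking CSP solver (illustrative sketch).
# constraints: list of (scope, predicate) pairs; a predicate receives the
# partial assignment restricted to its scope and returns False if violated.
def solve(variables, domains, constraints, assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        # Check only constraints whose scope is fully assigned.
        ok = all(
            pred({v: assignment[v] for v in scope})
            for scope, pred in constraints
            if all(v in assignment for v in scope)
        )
        if ok:
            result = solve(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]
    return None

# Hypothetical instance: two rule slots mapped to distinct host nodes.
variables = ["x", "y"]
domains = {"x": [1, 2], "y": [1, 2]}
constraints = [(("x", "y"), lambda a: a["x"] != a["y"])]
```

Optimizations such as forward checking or arc consistency, as referenced in the abstract, would prune the domains before or during this search rather than change its overall shape.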

3.
The formalism of attribute grammars is a powerful tool for specifying the static semantics of programming languages, and attribute evaluation provides an effective approach to automatic semantic analysis. The author previously proposed a time-optimal algorithm for incremental evaluation of ordered attribute grammars. In this paper, three improvements to the algorithm are suggested, so that it not only allows multiple subtree replacements but also eliminates three auxiliary tables that were previously required. For experimental purposes, the improved algorithm has been implemented in Pascal on a Motorola 68010.
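For readers unfamiliar with attribute evaluation, the core idea can be sketched with a single synthesized attribute computed bottom-up over a syntax tree; the node types and the `val` attribute below are illustrative, not from the paper's grammars.

```python
# Minimal synthesized-attribute evaluation sketch: the attribute 'val'
# of each node is computed from the attributes of its children.
from dataclasses import dataclass
from typing import Union

@dataclass
class Num:
    value: int

@dataclass
class Add:
    left: "Node"
    right: "Node"

Node = Union[Num, Add]

def val(node: Node) -> int:
    """Synthesized attribute: val(Num n) = n; val(Add l r) = val(l) + val(r)."""
    if isinstance(node, Num):
        return node.value
    return val(node.left) + val(node.right)
```

Incremental evaluation, the paper's subject, would re-run such rules only along the path from a replaced subtree to the root instead of over the whole tree.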

4.
It is well known that the family of context-sensitive grammars generate languages which are not context-free and that it is undecidable whether a context-sensitive grammar generates a context-free language. However, the mechanism by which the use of context allows a non-context-free language to be generated is not well understood. In this paper we attempt to focus on this problem by surveying some of the results which speak to two questions: (i) What constraints can be placed on the form of the rules of context-sensitive grammars without restricting the weak generative capacity? (ii) What (nontrivial) constraints can be placed on the form of the rules of context-sensitive grammars such that only context-free languages will be generated?
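To make concrete how context in rules yields a non-context-free language, the classic context-sensitive grammar for {aⁿbⁿcⁿ : n ≥ 1} can be explored with a breadth-first derivation search; because the grammar is monotone (no rule shortens a sentential form), capping the length keeps the search finite. This is a standard textbook grammar, used here as an illustration rather than one from the paper.

```python
# Breadth-first derivation search over the classic CSG for a^n b^n c^n.
# The context rule "CB -> BC" is what lets the counts be kept in sync.
from collections import deque

RULES = [
    ("S", "aSBC"), ("S", "aBC"),
    ("CB", "BC"),
    ("aB", "ab"), ("bB", "bb"),
    ("bC", "bc"), ("cC", "cc"),
]

def generates(target: str, max_len: int = 12) -> bool:
    seen = {"S"}
    queue = deque(["S"])
    while queue:
        form = queue.popleft()
        if form == target:
            return True
        for lhs, rhs in RULES:
            start = form.find(lhs)
            while start != -1:
                nxt = form[:start] + rhs + form[start + len(lhs):]
                # Monotone grammar: pruning by length is sound.
                if len(nxt) <= max_len and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
                start = form.find(lhs, start + 1)
    return False
```

Dropping the context from `CB -> BC` (or any of the length-preserving rules) collapses the mechanism the survey is about: without it, no context-free grammar can enforce three matched counts.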

5.
With the introduction of the Regular Membership Constraint, a new line of research has opened where constraints are based on formal languages. This paper is taking the next step, namely to investigate constraints based on grammars higher up in the Chomsky hierarchy. We devise a time- and space-efficient incremental arc-consistency algorithm for context-free grammars, investigate when logic combinations of grammar constraints are tractable, show how to exploit non-constant size grammars and reorderings of languages, and study where the boundaries run between regular, context-free, and context-sensitive grammar filtering.
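Context-free grammar filtering of the kind described above is typically built on CYK-style recognition. As a minimal sketch of that core machinery (not the paper's incremental algorithm), here is a CYK recognizer for a grammar in Chomsky normal form, shown on an illustrative grammar for aⁿbⁿ.

```python
# CYK recognition for a CNF grammar.
# unary:  list of (A, terminal) rules A -> a
# binary: list of (A, B, C) rules A -> B C
def cyk(word, unary, binary, start="S"):
    n = len(word)
    # table[i][span] = set of nonterminals deriving word[i:i+span]
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[i][1] = {A for A, a in unary if a == ch}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            for k in range(1, span):
                for A, B, C in binary:
                    if B in table[i][k] and C in table[i + k][span - k]:
                        table[i][span].add(A)
    return start in table[0][n]

# Illustrative CNF grammar for a^n b^n: S -> A T | A B, T -> S B, A -> a, B -> b.
unary = [("A", "a"), ("B", "b")]
binary = [("S", "A", "T"), ("S", "A", "B"), ("T", "S", "B")]
```

A grammar constraint propagator extends this idea: instead of a yes/no answer, it intersects the table with the variables' current domains and prunes values that appear in no surviving parse.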

6.
Dr. G. Barth 《Computing》1979,22(3):243-256
This paper is concerned with an extension of context-free LL(k) grammars, called RLL(k) grammars. RLL(k) grammars are powerful enough to generate non-context-free languages. In particular context-sensitive constructs of programming languages can be formalized conveniently. RLL(k) grammars have the pleasant property that fast syntactical check procedures exist. An algorithm for syntactical analysis with linear average cost is developed in this paper. A worst case quadratic upper bound is derived.

7.
Duane Szafron  Randy Ng 《Software》1990,20(5):459-483
This paper describes LexAGen, an interactive scanner generator which is the first component of an interactive compiler generation environment. LexAGen can generate fast scanners for languages whose tokens can be specified by regular grammars. However, LexAGen also supports several context-sensitive programming language constructs such as nested comments and the interaction between floating-point numbers and the range operator in Modula-2. In addition, LexAGen includes a fast new algorithm for keyword identification. However, the most important and novel aspects of LexAGen are that it constructs scanners incrementally and that specifications can be executed anytime for validation testing. LexAGen specifications are expressed and entered interactively in a restricted BNF format (no left recursion). All syntactic errors and token conflicts are detected and reported immediately as LexAGen incrementally constructs a deterministic finite automaton to represent the scanner. At any time, the user can test the scanner fragment which has been entered by supplying text to be scanned. Alternatively, the user can generate a C-code scanner from the automaton. The generated automaton uses a direct execution approach and is quite fast. LexAGen is implemented in Smalltalk-80. Its extensive use of interactive graphics makes it very easy to use. In addition, the object-oriented paradigm of Smalltalk-80 is the basis for the incremental analysis, the error detection scheme and an intermediate representation which can be easily modified to generate scanners in other target languages such as Pascal, Modula-2 and Ada.
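The DFA-driven, maximal-munch scanning that a generator like LexAGen produces can be sketched as a small table-driven scanner; the token set and table below are illustrative (identifiers and integers only), not LexAGen's actual output.

```python
# Table-driven maximal-munch scanner sketch.
# Tokens: IDENT = letter (letter|digit)*, NUMBER = digit+; spaces are skipped.
DFA = {
    ("start", "alpha"): "ident",
    ("start", "digit"): "number",
    ("ident", "alpha"): "ident",
    ("ident", "digit"): "ident",
    ("number", "digit"): "number",
}
ACCEPT = {"ident": "IDENT", "number": "NUMBER"}

def classify(ch):
    if ch.isalpha():
        return "alpha"
    if ch.isdigit():
        return "digit"
    return "other"

def scan(text):
    tokens, i = [], 0
    while i < len(text):
        if text[i].isspace():
            i += 1
            continue
        # Run the DFA as far as it will go, remembering the last accept.
        state, j, last = "start", i, None
        while j < len(text):
            nxt = DFA.get((state, classify(text[j])))
            if nxt is None:
                break
            state, j = nxt, j + 1
            if state in ACCEPT:
                last = j
        if last is None:
            raise ValueError(f"illegal character {text[i]!r} at {i}")
        tokens.append((ACCEPT[state], text[i:last]))
        i = last
    return tokens
```

Incremental construction, as in LexAGen, amounts to adding rows to a table like `DFA` as each token specification is entered, with conflicts detected when two rules would claim the same cell.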

8.
9.
10.
Unification grammars are widely accepted as an expressive means for describing the structure of natural languages. In general, the recognition problem is undecidable for unification grammars. Even with restricted variants of the formalism, off-line parsable grammars, the problem is computationally hard. We present two natural constraints on unification grammars which limit their expressivity and allow for efficient processing. We first show that non-reentrant unification grammars generate exactly the class of context-free languages. We then relax the constraint and show that one-reentrant unification grammars generate exactly the class of mildly context-sensitive languages. We thus relate the commonly used and linguistically motivated formalism of unification grammars to more restricted, computationally tractable classes of languages.

11.
Attribute grammars are traditionally constrained to be noncircular. In using attribute grammars to specify the semantics of programming languages, this noncircularity limitation has restricted attribute grammars to compile-time or static semantics. Inductive attribute grammars add a general form of circularity to this standard approach. Inductive attribute grammars have the expressiveness required to describe the full semantics of programming languages, while at the same time maintaining the declarative character of standard attribute grammars. This expanded view of attribute grammars proves to be useful in interactive language-based programming environments, as inductive attribute grammars allow the environment to provide an interpreter for incremental re-evaluation of programs after small changes to the code. The addition of run-time semantics via circular attribute grammars permits automatically generated environments to be complete, in that incremental static semantic checking and fast incremental execution are now available within a single framework. (The authors' present affiliations are the United States Geological Survey and the Northrop Corporation, respectively.)

12.
S. J. Young 《Software》1981,11(9):913-927
This paper describes an extension to Pascal in the form of an encapsulation mechanism aimed at improving the structure of large Pascal programs. It is based upon the module structure of Modula but extended to include a more detailed specification of module interfaces and to allow the definition of a module body to be deferred. Called Pascal/M, the extended language is implemented via a preprocessor. It has been successfully used in large programming projects and been found to both aid in the application of top-down design methods and to greatly improve the documentation of the final product by breaking up the program text into a hierarchy of short readable modules. The use of Pascal/M is illustrated by a program example and aspects of its design and implementation are discussed.

13.
Douglas Comer 《Software》1979,9(3):203-209
The programming language Pascal was originally designed for teaching introductory programming. Currently, however, production systems use it as the primary implementation language. This paper describes extensions of Pascal intended to aid the large program developer. The extensions are implemented in a macro preprocessor MAP, which supports constant expression evaluation, source file inclusion, conditional compilation and macro substitution. While each of these features can be used independently, they are all implemented with a simple, uniform syntax. Furthermore, in keeping with the spirit of Pascal, an attempt has been made to make the facilities straightforward and simple. The design and implementation details are discussed.
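Two of the preprocessor features named above, macro substitution and conditional compilation, can be sketched in a few lines; the `%define`/`%ifdef` directive syntax here is hypothetical, chosen for the sketch, and is not MAP's actual syntax.

```python
# Toy line-oriented preprocessor sketch: macro definition and
# substitution, plus %ifdef/%endif conditional inclusion.
import re

def preprocess(lines, defines=None):
    macros = dict(defines or {})
    out, emit_stack = [], [True]   # stack handles nested %ifdef blocks
    for line in lines:
        stripped = line.strip()
        if stripped.startswith("%define"):
            _, name, value = stripped.split(None, 2)
            macros[name] = value
        elif stripped.startswith("%ifdef"):
            _, name = stripped.split(None, 1)
            emit_stack.append(emit_stack[-1] and name in macros)
        elif stripped == "%endif":
            emit_stack.pop()
        elif all(emit_stack):
            # Whole-word macro substitution on ordinary source lines.
            out.append(re.sub(
                r"\b[A-Za-z_]\w*\b",
                lambda m: macros.get(m.group(0), m.group(0)),
                line))
    return out

src = [
    "%define MAX 100",
    "const limit = MAX;",
    "%ifdef DEBUG",
    "trace();",
    "%endif",
]
```

MAP's constant expression evaluation and file inclusion would be further cases in the same dispatch, which is presumably what the abstract means by a simple, uniform syntax.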

14.
邹阳  吕建  曹春  胡昊  宋巍  杨启亮 《软件学报》2012,23(7):1635-1655
Context-sensitive graph grammars are formal tools for describing visual languages. Aiming to characterize visual languages intuitively while analyzing them efficiently, existing graph grammar formalisms have concentrated on grammar forms and parsing algorithms, neglecting the analysis of their relative expressive power. Based on an analysis and summary of the key features of existing context-sensitive graph grammar formalisms, this paper reveals, and formally proves, the relationships among their expressive powers by constructing transformation algorithms between the formalisms. Moreover, the transformation algorithms establish connections among the formalisms, so applications of graph grammars are no longer confined to a single formalism: different formalisms can be chosen for describing and for analyzing graphs, which improves the usability of context-sensitive graph grammars.

15.
A method and results of static and dynamic analysis of Pascal programs are described. In order to investigate characteristics of large systems programs developed by the stepwise refinement programming approach and written in Pascal, several Pascal compilers written in Pascal were analysed from both static and dynamic points of view. As a main conclusion, procedures play an important role in the stepwise refinement approach and implementors of a compiler and designers of high level language machines for Pascal-like languages should pay careful attention to this point. The set data structure is one of the characteristics of the Pascal language and statistics of set operations are also described.

16.
17.
Two-level grammars can define the syntax and the operational semantics of programming languages and these definitions are directly executable by interpretation. In this paper it is shown that axiomatic semantics can also be defined using a two-level grammar with the result being a partially automatic program verification system accomplished within the framework of a language definition. These results imply that a programming language can be defined operationally and axiomatically together in complementary definitions as advocated by Hoare and Lauer. Because two-level grammars are executable, these complementary definitions accomplish a system for interpreting and verifying programs.

18.
In this paper, we mainly study the relation between scattered context grammars, which are an example of regulated context-free rewriting devices, and context-sensitive grammars. Emphasis is laid upon both normal form characterizations of context-sensitive grammars and an argument for how far scattered context grammars are stronger, with respect to generative capacity, than unordered scattered context grammars. Parts of this paper were presented at the Conference on Formal Languages and Programming Languages, University of Dortmund, Germany, March 29–31, 1973.

19.
An Edge-Based Formal Framework for Context-Sensitive Graph Grammars (cited by 3 in total: 1 self, 2 others)
曾晓勤  韩秀清  邹阳 《软件学报》2008,19(8):1893-1901
To address the embedding problem, the central difficulty in graph grammars, an edge-based formal framework for context-sensitive graph grammars is proposed. Properties of the grammars defined in this framework and the corresponding reduction algorithm are discussed, and the proposed graph grammars are compared with existing formalisms. Finally, some open problems and directions worth further study are outlined.

20.