Similar Documents
 20 similar documents found.
1.
Implementation of a new compiler usually requires making frequent adjustments to grammar definitions. An incremental technique for updating the parser tables after a minor change to the grammar could potentially save much computational effort. More importantly, debugging a grammar is made easier if the grammar is re-checked for correctness after each small change. The basic design philosophy of an incremental parser generator, and incremental algorithms for LR(0), SLR(1) and LALR(1) parser generation, are discussed in this paper. Some of these algorithms have been incorporated into an implementation of an incremental LALR(1) parser generator.
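The incremental algorithms themselves are not reproduced in the abstract. For orientation, the sketch below shows the standard batch LR(0) item-set construction (closure and goto over a small illustrative grammar) that an incremental generator would avoid re-running in full after a minor grammar change; the grammar and function names are illustrative assumptions, not taken from the paper.

```python
# Minimal batch LR(0) item-set construction (illustration only; an incremental
# generator updates these sets instead of rebuilding them from scratch).
# An item is (lhs, rhs, dot); the grammar maps each nonterminal to its rhs tuples.

GRAMMAR = {
    "S'": [("S",)],
    "S":  [("S", "+", "T"), ("T",)],
    "T":  [("id",)],
}

def closure(items):
    items = set(items)
    changed = True
    while changed:
        changed = False
        for (lhs, rhs, dot) in list(items):
            if dot < len(rhs) and rhs[dot] in GRAMMAR:      # dot before a nonterminal
                for prod in GRAMMAR[rhs[dot]]:
                    item = (rhs[dot], prod, 0)
                    if item not in items:
                        items.add(item)
                        changed = True
    return frozenset(items)

def goto(items, symbol):
    moved = [(lhs, rhs, dot + 1) for (lhs, rhs, dot) in items
             if dot < len(rhs) and rhs[dot] == symbol]
    return closure(moved) if moved else frozenset()

def lr0_states():
    start = closure([("S'", ("S",), 0)])
    states, work = {start}, [start]
    while work:
        state = work.pop()
        for sym in {rhs[dot] for (_, rhs, dot) in state if dot < len(rhs)}:
            nxt = goto(state, sym)
            if nxt and nxt not in states:
                states.add(nxt)
                work.append(nxt)
    return states

print(len(lr0_states()), "LR(0) states")
```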

2.
The LR method is the most widely used and effective bottom-up parsing technique in compiler syntax analysis; its theory is well developed and it applies to most context-free languages. This paper discusses how to teach LR parsing, adopting a "heuristic plus association" teaching approach to guide students toward understanding the essence of LR analysis.

3.
4.
Parallel parsing is currently receiving attention but there is little discussion about the adaptation of sequential error handling techniques to these parallel algorithms. We describe a noncorrecting error handler implemented with a parallel LR substring parser. The parser used is a parallel version of Cormack's LR substring parser. The applicability of noncorrecting error handling for parallel parsing is discussed. The error information provided for a standard set of 118 erroneous Pascal programs is analysed. The programs are run on the sequential LR substring parser.

5.
Single productions in the syntax of a programming language usually have no semantic significance, and thus parsers can be modified so that they do not perform reductions by single productions. In this paper we show that the basic method developed by Pager for eliminating all undesired single productions from LR parsers can cause a quadratic increase in the number of states of the parser. We then define an improvement of Pager's basic method such that generally smaller parsers are produced; in fact, even a quadratic decrease in the number of states is possible. This improvement is further evaluated by giving a sequence of practically motivated grammars for which substantial savings in space are obtained. We also characterize this improvement by a grammatical condition under which it is guaranteed that no increase in the number of states can occur. This condition is compared with a similar condition derivable in the case of Pager's original method. (Part of this work was carried out when the author was visiting the University of California, Department of Mathematics, at Santa Barbara as an ASLA Fulbright Scholar.)
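Pager's table-level method is not reproduced in the abstract. As background, the sketch below shows the classical grammar-level elimination of unit (single) productions, which achieves the same effect the table modification aims for; it is an illustrative assumption, not Pager's algorithm or the improvement proposed in the paper.

```python
# Classical elimination of single (unit) productions A -> B at the grammar level.
# Background illustration only; the paper instead modifies the LR tables directly.

def eliminate_unit_productions(grammar):
    """grammar: dict nonterminal -> set of rhs tuples. Returns an equivalent
    grammar with no production whose rhs is a single nonterminal."""
    nonterminals = set(grammar)

    # unit_reachable[A] = all B such that A derives B using only unit productions
    unit_reachable = {a: {a} for a in nonterminals}
    changed = True
    while changed:
        changed = False
        for a in nonterminals:
            for b in list(unit_reachable[a]):
                for rhs in grammar[b]:
                    if len(rhs) == 1 and rhs[0] in nonterminals and rhs[0] not in unit_reachable[a]:
                        unit_reachable[a].add(rhs[0])
                        changed = True

    return {
        a: {rhs
            for b in unit_reachable[a]
            for rhs in grammar[b]
            if not (len(rhs) == 1 and rhs[0] in nonterminals)}   # keep only non-unit rhs
        for a in nonterminals
    }

# Example: E -> E "+" T | T ; T -> T "*" F | F ; F -> "(" E ")" | "id"
g = {"E": {("E", "+", "T"), ("T",)},
     "T": {("T", "*", "F"), ("F",)},
     "F": {("(", "E", ")"), ("id",)}}
print(eliminate_unit_productions(g))
```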

6.
This paper addresses two of the problems commonly associated with LR parsing and syntax-directed translation schemes, namely grammar stratification and excessively large table size. A solution is discussed which can eliminate stratification of the grammar by allowing the designer to embed semantics directly within the LR table (i.e., at shift and error entries instead of just at reduce entries) and to use global context to determine what semantics should be performed. The non-stratified grammar can produce a significantly smaller LR table than the corresponding stratified grammar. The compatibility of this scheme with commonly used table compaction techniques is also discussed.
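The abstract only describes the idea. The toy parser below, a sketch with a hand-built SLR table for the grammar S -> a S | b (not the table layout or scheme defined in the paper), illustrates attaching semantic actions to individual shift entries rather than only to reduce entries.

```python
# Toy shift-reduce parser for S -> a S | b with semantic actions on shift entries.
# The table and hooks are illustrative assumptions, not the paper's scheme.

ACTIONS = {                       # (state, lookahead) -> (kind, payload)
    (0, "a"): ("shift", 2), (0, "b"): ("shift", 3),
    (2, "a"): ("shift", 2), (2, "b"): ("shift", 3),
    (3, "$"): ("reduce", ("S", 1)),   # S -> b
    (4, "$"): ("reduce", ("S", 2)),   # S -> a S
    (1, "$"): ("accept", None),
}
GOTO = {(0, "S"): 1, (2, "S"): 4}

# Semantic actions keyed by individual shift entries, not only by productions.
ON_SHIFT = {
    (0, "a"): lambda: print("outermost 'a' shifted"),
    (2, "a"): lambda: print("nested 'a' shifted"),
}

def parse(tokens):
    stack = [0]
    pos = 0
    while True:
        kind, payload = ACTIONS[(stack[-1], tokens[pos])]
        if kind == "shift":
            hook = ON_SHIFT.get((stack[-1], tokens[pos]))
            if hook:                      # semantics performed at a shift entry
                hook()
            stack.append(payload)
            pos += 1
        elif kind == "reduce":
            lhs, rhs_len = payload
            del stack[-rhs_len:]
            stack.append(GOTO[(stack[-1], lhs)])
        else:                             # accept
            return True

parse(["a", "a", "b", "$"])
```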

7.
Visual YACC is a tool that automatically creates visualizations of the YACC LR parsing process and synthesized attribute computation. The Visual YACC tool works by instrumenting a standard YACC grammar with graphics calls that draw the appropriate data structures given the current actions by the parser. The new grammar is processed by the YACC tools and the resulting parser displays the parse stack and parse tree for every step of the parsing process of a given input string. Visual YACC was initially designed to be used in compiler construction courses to supplement the teaching of parsing and syntax directed evaluation. We have also found it to be useful in the difficult task of debugging YACC grammars. In this paper, we describe this tool and how it is used in both contexts. We also detail two different implementations of this tool: one that produces a parser written in C with calls to Motif; and a second implementation that generates Java source code.

8.
Teaching a compiler-principles course should not only present the basic principles and techniques of compilation, but also cultivate students' interest in the subject, their professional thinking, and their research methods and skills. Taking the LR family of parsing methods as an example, this article reconstructs the process by which the underlying ideas were discovered and retraces the reasoning used to solve the associated problems, with the aim of fostering students' interest in the discipline and their research ability.

9.
It is shown that in many cases the trivial upper bound 2^(|G|^(k+1)) on the number of states of an LR(k) parser for a grammar G is too conservative. In particular, if G is not right-recursive, the canonical LR(k) parser for G has at most |G|^k · |G| · 2^(|G|) states. Examples of grammars with large LR(k) parsers are given.
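The exponents in the two bounds above are reconstructed from text whose superscripts were flattened, so treat them with caution. Taking them at face value, a quick calculation for a hypothetical grammar size shows how much tighter the second bound is:

```python
# Comparison of the two state-count bounds as reconstructed above.
# The exact exponents are a reconstruction, not verified against the paper.
G = 20   # |G|, a hypothetical grammar size
k = 1

trivial_bound = 2 ** (G ** (k + 1))                 # 2^(|G|^(k+1))
non_right_recursive_bound = G ** k * G * 2 ** G     # |G|^k * |G| * 2^|G|

print(f"trivial bound:            2^{G ** (k + 1)}  (~10^{len(str(trivial_bound)) - 1})")
print(f"non-right-recursive case: {non_right_recursive_bound}")
```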

10.
Extended context-free grammars allow regular expressions to appear in the right-hand sides of productions, and are a clear and natural way to describe the syntax of programming languages. In this paper an LR parsing technique for extended context-free grammars is presented, which is based on an underlying transformation of the grammar into an equivalent context-free one. The technique is suitable for inclusion in one-pass compilers: the implementation requires only small extensions to the algorithms used for normal LR grammars. Besides describing the parsing method, the paper also presents the algorithms for deriving the parsing tables; table optimization is also discussed. Finally, this technique is compared with other proposals that have appeared in the literature.
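The paper's transformation is only summarized here. The snippet below sketches the best-known flavor of such a rewrite, turning a rule whose right-hand side ends in a starred group into ordinary context-free productions via a fresh nonterminal; the representation and naming are assumptions of this sketch, not the paper's construction.

```python
# Illustrative rewrite of one extended (EBNF-style) rule into plain CFG rules.
# Representation and fresh-nonterminal naming are assumptions for this sketch.

def expand_star(lhs, prefix, starred):
    """Rewrite  lhs -> prefix (starred)*  as
         lhs  -> prefix R
         R    -> starred R | <empty>
       where R is a fresh nonterminal."""
    rest = f"{lhs}_rest"
    return {
        lhs:  [tuple(prefix) + (rest,)],
        rest: [tuple(starred) + (rest,), ()],   # () denotes the empty production
    }

# Example: E -> T ("+" T)*
print(expand_star("E", ["T"], ["+", "T"]))
# {'E': [('T', 'E_rest')], 'E_rest': [('+', 'T', 'E_rest'), ()]}
```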

11.
Syntax analysis is an indispensable step in the compilation process, and studying it is therefore of considerable importance. This article surveys the current state of research on parsing methods, analyses and discusses them in detail, describes their various applications, and concludes with a summary and an outlook for the field.

12.
In the literature various proofs of the inclusion of the class of LL(k) grammars into the class of LR(k) grammars can be found. Some of these proofs are not correct, others are informal, semi-formal or contain flaws. Some of them are correct but the proof is less straightforward than demonstrated here.

13.
The paper presents in detail the case k=1 of a practical general method for constructing LR(k) parsers. For k=1 this method rivals in efficiency the previous general algorithm described by the author in [21]. The method involves combining the states of an LR(k) parser as they are generated, reducing to a fraction, in the process, the number of configurations that need actually be evaluated, or for which space must be assigned, compared to such general methods as those of [1, 11, 12, 17]. The criteria of compatibility introduced for this purpose are such that the parser obtained is in practice identical in size to, or negligibly larger than, that obtained by resolving the inadequacies of an LR(0) parser (as is done for various subsets of the LR(k) grammars in [5, 8, 14, 20]). This paper is a development of one of the ideas proposed in Pager [16]. The work was supported by the National Science Foundation under Grant GJ-43362.
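The compatibility criteria themselves are not spelled out in the abstract. The function below sketches the commonly cited form of Pager's weak-compatibility test for two LR(1) states sharing the same core; it is a paraphrase of the published criterion, so the details should be checked against the paper.

```python
# Sketch of a weak-compatibility check for two LR(1) states with the same core.
# Each state is given as a list of lookahead sets, one per core item, in the
# same order. A paraphrase of the usual statement, not taken verbatim from the paper.

def weakly_compatible(lookaheads_s, lookaheads_t):
    n = len(lookaheads_s)
    assert n == len(lookaheads_t), "states must share the same core"
    for i in range(n):
        for j in range(i + 1, n):
            crosses = (lookaheads_s[i] & lookaheads_t[j]) or (lookaheads_s[j] & lookaheads_t[i])
            if not crosses:
                continue                  # merging this pair cannot create a new conflict
            # merging is still safe if either state already mixes these lookaheads
            if (lookaheads_s[i] & lookaheads_s[j]) or (lookaheads_t[i] & lookaheads_t[j]):
                continue
            return False
    return True

# Cores match and no lookahead crosses between distinct items: merging is allowed.
print(weakly_compatible([{"a"}, {"b"}], [{"c"}, {"d"}]))   # True
# Lookahead 'x' crosses between distinct items, so merging is rejected.
print(weakly_compatible([{"x"}, {"y"}], [{"z"}, {"x"}]))   # False
```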

14.
We present a method for recovering from syntax errors encountered during parsing. The method provides a form of minimum distance repair, has linear time complexity, and is completely automatic. A formal method is presented for evaluating the performance of error recovery methods, based on global minimum-distance error correction. The minimum-distance error recovery method achieves the theoretically best performance on 80% of the Pascal programs in the weighted Ripley-Druseikis collection. Comparisons of performance with other error recovery methods are given.
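As a point of reference for the evaluation criterion (global minimum-distance correction), here is a plain token-level edit-distance routine; it illustrates only the distance notion, not the paper's grammar-driven recovery method, and the example token streams are invented.

```python
# Token-level minimum edit distance (insert/delete/replace all cost 1).
# Illustrates the distance behind minimum-distance correction; not the recovery
# algorithm of the paper.

def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # delete x
                           cur[j - 1] + 1,            # insert y
                           prev[j - 1] + (x != y)))   # replace (or match)
        prev = cur
    return prev[-1]

erroneous = ["begin", "x", ":=", "1", "end"]
repaired  = ["begin", "x", ":=", "1", ";", "end"]
print(edit_distance(erroneous, repaired))   # 1: one token inserted
```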

15.
A criterion is formally described which guarantees the correctness of a variety of chain-free LR parsers, including the case of grammars containing ε-nonterminals. Using the criterion, Pager's algorithm for the elimination of chain productions is shown to produce correct parsers for any compatible merging relation. Also, a mildly restricted version of a claim concerning chain-free SLR parsers made in [K&S 79] is proved correct.

16.
The paper is the second in a series of three papers devoted to a detailed study of LR(k) parsing with error recovery and correction. Error recovery in LR(k) parsing of a context-free grammar is formalized by extending an LR(k) parser of the grammar such that it accepts all strings over the terminal vocabulary. The parse produced by this extension for a terminal string is a right parse if the string is in the language. In the case of a string not in the language the parse produced by the extension contains so-called error productions which represent the error recovery actions performed by the extension. The treatment is based on the formalization of LR(k) parsing presented in the first paper in the series and it covers practically all error recovery methods designed for LR(k) parsing.

17.
LR(k) grammars can describe all deterministic context-free languages and are widely used in parser generators. Traditional breakpoint-debugging methods for LR(k) grammars only support setting breakpoints at the end of a production's right-hand side (tail breakpoints) and do not support breakpoints at positions inside the right-hand side (mid-production breakpoints), which is inconvenient when developing and debugging a parser. This paper proposes a novel breakpoint-debugging method for LR(k) grammars that supports not only traditional tail breakpoints but also mid-production breakpoints. The method significantly increases the number of usable breakpoints and can track grammar components at a finer granularity, helping users debug their grammars more effectively and lowering the difficulty of parser development.
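The mechanism is not shown in the abstract. As a rough illustration (an assumption of this sketch, not the paper's design), a breakpoint can be modeled as a dotted position in a production, exactly like an LR item, which makes the gain in breakpoint count easy to see:

```python
# Modeling breakpoints as dotted positions in productions (like LR items).
# Tail-only breakpoints give one position per production; mid-production
# breakpoints give len(rhs) + 1 positions. Illustration only.

grammar = {
    "expr":   [("expr", "+", "term"), ("term",)],
    "term":   [("term", "*", "factor"), ("factor",)],
    "factor": [("(", "expr", ")"), ("id",)],
}

tail_breakpoints = sum(len(prods) for prods in grammar.values())
all_breakpoints = sum(len(rhs) + 1 for prods in grammar.values() for rhs in prods)

print(f"tail-only breakpoints: {tail_breakpoints}")            # 6 productions -> 6
print(f"with mid-production breakpoints: {all_breakpoints}")   # 6 productions -> 18
```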

18.
There has been a recent effort in the literature to reconsider grammar-dependent software development from an engineering point of view. As part of that effort, we examine a deficiency in the state of the art of practical LR parser table generation. Specifically, LALR sometimes generates parser tables that do not accept the full language that the grammar developer expects, but canonical LR is too inefficient to be practical, particularly during grammar development. In response, many researchers have attempted to develop minimal LR parser table generation algorithms. In this paper, we demonstrate that a well-known algorithm described by David Pager and implemented in Menhir, the most robust minimal LR(1) implementation we have discovered, does not always achieve the full power of canonical LR(1) when the given grammar is non-LR(1) coupled with a specification for resolving conflicts. We also detail an original minimal LR(1) algorithm, IELR(1) (Inadequacy Elimination LR(1)), which we have implemented as an extension of GNU Bison and which does not exhibit this deficiency. Using our implementation, we demonstrate the relevance of this deficiency for several real-world parser specifications, and we demonstrate the feasibility of IELR(1). Finally, we demonstrate that, if canonical LR(1) were employed instead, grammar development would be severely impeded regardless of the power of the computer hardware.

19.
The problem of regression analysis in a fuzzy setting is discussed. A general linear regression model for studying the dependence of an LR fuzzy response variable on a set of crisp explanatory variables, along with a suitable iterative least-squares estimation procedure, is introduced. This model is then framed within a wider strategy of analysis, capable of managing various types of uncertainty. These include the imprecision of the regression coefficients and the choice of a specific parametric model within a given class of models. The first source of uncertainty is dealt with by exploiting the implicit fuzzy arithmetic relationships between the spreads of the regression coefficients and the spreads of the response variable. Concerning the second kind of uncertainty, a suitable selection procedure is illustrated, which consists in maximizing an appropriately introduced goodness-of-fit index within the given class of parametric models. The above strategy is illustrated in detail with reference to an application to real data collected in the framework of an environmental study. In the final remarks, some critical points are underlined, along with a few indications for future research in this field.
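The estimation procedure is only outlined in the abstract. The snippet below is a deliberately simplified sketch (separate ordinary least-squares fits, via numpy, for the center and the two spreads of an LR fuzzy response on simulated data) and is not the iterative procedure proposed in the paper; all variable names and data are invented for illustration.

```python
# Highly simplified illustration: fit the center and the left/right spreads of an
# LR fuzzy response with three separate ordinary least-squares regressions.
# This is NOT the paper's iterative estimation procedure, just a schematic.
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one crisp predictor

center = X @ np.array([2.0, 1.5]) + rng.normal(scale=0.1, size=n)
left_spread = np.abs(X @ np.array([0.5, 0.2])) + 0.05    # spreads must stay nonnegative
right_spread = np.abs(X @ np.array([0.4, 0.3])) + 0.05

coef_center, *_ = np.linalg.lstsq(X, center, rcond=None)
coef_left, *_ = np.linalg.lstsq(X, left_spread, rcond=None)
coef_right, *_ = np.linalg.lstsq(X, right_spread, rcond=None)

print("center coefficients:", coef_center)
print("left-spread coefficients:", coef_left)
print("right-spread coefficients:", coef_right)
```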

20.
An efficient and systematic LL(1) error recovery method is presented that has been implemented for an LL(1) parser generator. Error messages which provide good diagnostic information are generated automatically. Error correction is done by discarding some input symbols and popping some symbols from the parsing stack in order to restore the parser to a valid configuration; thus, symbol deletions and insertions are simulated. The choice between different possible corrections is made by comparing the cost of the inserted (popped) symbols with the reliability value of the recovery symbol (the first input symbol that is not discarded). Our concept of reliability is based on the observation that input symbols differ from each other in their ability to serve as recovery points. A high reliability value for a symbol indicates that it was probably not placed in the input by accident, so it is reasonable not to discard that symbol but to resume parsing. This is done even if a string with a high insert cost has to be inserted before that symbol in order to fit it to the part of the program that has already been analysed. The error recovery routine is invoked only when an error is detected; thus, no additional time is required for parsing correct programs. Error-correcting parsers for several languages, including Pascal, have been generated. Some experimental results are summarized.
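The exact decision rule is not given in the abstract. The sketch below only illustrates the comparison described, weighing the cost of simulated insertions against the reliability of the candidate recovery symbol; the cost and reliability tables and the acceptance rule are made-up assumptions, not the generator's actual values.

```python
# Sketch of the cost-vs-reliability comparison described above. The tables and
# the acceptance rule are illustrative assumptions only.

INSERT_COST = {";": 1, ")": 1, "end": 3, "then": 4}
RELIABILITY = {"id": 2, ";": 3, "begin": 8, "if": 8, "end": 7}

def acceptable_recovery(symbols_to_insert, recovery_symbol):
    """Accept the candidate correction if the recovery symbol is reliable enough
    to justify the insertions simulated in front of it."""
    total_cost = sum(INSERT_COST.get(s, 5) for s in symbols_to_insert)
    return RELIABILITY.get(recovery_symbol, 1) >= total_cost

# 'begin' is a highly reliable recovery point, so inserting 'end' before it is accepted:
print(acceptable_recovery(["end"], "begin"))       # True
# A lone identifier is a weak recovery point; an expensive insertion is rejected:
print(acceptable_recovery(["then", "end"], "id"))  # False
```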
