20 similar documents found; search took 0 ms.
3.
LL(1) grammars have the conceptual and practical advantage that they allow the compiler writer to view the grammar as a program; this permits a more natural placement of semantic actions and a simple attribute mechanism. The resulting parsers can be constructed to perform fully automatic error recovery, allowing the compiler writer to ignore the issue of syntax errors entirely. Measurements show that such parsers can be reasonably efficient.
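The "grammar as a program" view can be made concrete with a small sketch: each nonterminal of an LL(1) grammar becomes one procedure, and semantic actions (here, computing a value) sit naturally between the parsing steps. The toy expression grammar below is invented for illustration, not taken from the paper.

```python
# Toy LL(1) grammar, written directly as recursive-descent procedures:
#   Expr -> Term { '+' Term }
#   Term -> NUMBER
def parse_expr(tokens, pos=0):
    value, pos = parse_term(tokens, pos)
    while pos < len(tokens) and tokens[pos] == '+':
        rhs, pos = parse_term(tokens, pos + 1)
        value += rhs          # semantic action placed exactly where it belongs
    return value, pos

def parse_term(tokens, pos):
    tok = tokens[pos]
    if not tok.isdigit():
        raise SyntaxError(f"expected number at position {pos}, got {tok!r}")
    return int(tok), pos + 1

print(parse_expr(['1', '+', '2', '+', '3']))  # → (6, 5)
```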
4.
International Journal of Computer Mathematics, 2012, 89(2):107-119
The paper is the second in a series of three papers devoted to a detailed study of LR(k) parsing with error recovery and correction. Error recovery in LR(k) parsing of a context-free grammar is formalized by extending an LR(k) parser of the grammar so that it accepts all strings over the terminal vocabulary. The parse produced by this extension for a terminal string is a right parse if the string is in the language. For a string not in the language, the parse produced by the extension contains so-called error productions, which represent the error recovery actions performed by the extension. The treatment is based on the formalization of LR(k) parsing presented in the first paper in the series, and it covers practically all error recovery methods designed for LR(k) parsing.
5.
International Journal of Computer Mathematics, 2012, 89(3):189-206
The paper is the third in a series of three papers devoted to a detailed study of LR(k) parsing with error recovery and correction. A new class of syntax errors is introduced, called (k)-local parser-defined errors, which are better suited than conventional minimum-distance errors to characterizing error detection and recovery in LR(k) parsing. The question of whether a given string has n (k)-local parser-defined errors for some integer n is shown to be decidable. Using the formalization of LR(k) parsing and error recovery presented in the first and second papers in the series, it is shown that the canonical LR(k) parser of an LR(k) grammar always has an error-recovering extension that can produce a correction for any terminal string containing only (k)-local parser-defined errors.
7.
Anton Nijholt, Information Processing Letters, 1982, 15(3):97-101
In the literature, various proofs of the inclusion of the class of LL(k) grammars in the class of LR(k) grammars can be found. Some of these proofs are incorrect; others are informal or semi-formal, or contain flaws. Some are correct, but less straightforward than the proof demonstrated here.
8.
Using top-down LL(1) syntax analysis, a static program analyzer is designed that produces a high-level conceptual abstraction of the source grammar. The design intent of a source program can thus be recovered at different levels and from different aspects, achieving independence from the language platform. The system not only outperforms traditional analysis tools but also offers good generality.
9.
Manuel E. Bermudez, Richard Newman-Wolfe, George Logothetis, International Journal of Parallel Programming, 1990, 19(3):163-184
The generation of an LR parser consists of constructing a parse table, with one row per state (in a push-down automaton) and one column per terminal symbol. Traditionally, this is carried out row by row, with the computation of one row depending (potentially) on all the others. We present a technique for carrying out the lookahead computation of SLR(1) and LALR(1) parsers in a completely parallel fashion. Our technique performs the computation by column, rather than by row. We show that the computation is totally independent for each column, making it ideal for parallelization. The speedup factor of the technique is min(N, T), where N is the number of processors and T is the number of terminal symbols in the user's grammar.
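The per-column independence can be illustrated for the SLR(1) case: the reduce entries in the table column of a terminal a depend only on whether a belongs to FOLLOW(A) for each nonterminal A, and that membership can be decided one terminal at a time, without reference to any other terminal. The toy grammar and the epsilon-free simplification below are assumptions for illustration, not the authors' algorithm.

```python
GRAMMAR = {                 # toy epsilon-free grammar; uppercase = nonterminal
    'S': [['A', 'b'], ['A']],
    'A': [['a']],
}
TERMINALS = ['a', 'b', '$']

def first_of(symbol):
    # FIRST for this epsilon-free toy grammar: a terminal is its own FIRST;
    # a nonterminal's FIRST is the union over its productions' first symbols.
    if symbol not in GRAMMAR:
        return {symbol}
    out = set()
    for rhs in GRAMMAR[symbol]:
        out |= first_of(rhs[0])
    return out

def follow_column(a):
    # Decide a ∈ FOLLOW(X) for every nonterminal X, looking only at terminal a:
    # a ∈ FOLLOW(X) iff some production B -> ... X beta has a ∈ FIRST(beta),
    # or X ends a production of B and a ∈ FOLLOW(B).
    edges = {x: set() for x in GRAMMAR}     # FOLLOW(B) flows into FOLLOW(target)
    seed = set()
    for b, prods in GRAMMAR.items():
        for rhs in prods:
            for i, sym in enumerate(rhs):
                if sym in GRAMMAR:
                    if i + 1 < len(rhs):
                        if a in first_of(rhs[i + 1]):
                            seed.add(sym)
                    else:
                        edges[b].add(sym)
    if a == '$':
        seed.add('S')                       # end marker follows the start symbol
    members, changed = set(seed), True
    while changed:                          # propagate to a fixed point
        changed = False
        for b, targets in edges.items():
            if b in members:
                for t in targets:
                    if t not in members:
                        members.add(t)
                        changed = True
    return members

# each column is computed independently, so this loop parallelizes trivially
columns = {a: follow_column(a) for a in TERMINALS}
print(columns)
```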
11.
An LL(1)-based error corrector that operates by insertion only is studied. The corrector is able to correct and parse any input string. It is efficient (linear in space and time requirements) and chooses least-cost insertions (as defined by the user) in correcting syntax errors. Moreover, the error corrector can be generated automatically from the grammar and a table of terminal-symbol insertion costs. The method is also well suited for use as an automatic error-recovery technique in LL(1) parsers. The class of LL(1) grammars correctable by this method contains (with minor modifications) grammars used to specify most common programming languages. Preliminary results suggest that the method can be used to advantage in LL(1)-driven compilers. A preliminary version of this paper was presented at the Fourth ACM Symposium on Principles of Programming Languages. Research supported in part by National Science Foundation Grant MCS78-02570.
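One table such an insertion-only corrector needs gives, for every grammar symbol, a least-cost terminal string derivable from it; when a predicted symbol cannot match the input, its cheapest derivation can be inserted instead of rejecting. A sketch of that fixed-point computation follows; the grammar and insertion costs are invented for illustration.

```python
INSERT_COST = {'id': 1, '(': 2, ')': 2, '+': 1}   # user-supplied insertion costs
GRAMMAR = {
    'E': [['T', 'Etail']],
    'Etail': [['+', 'T', 'Etail'], []],           # [] is the epsilon production
    'T': [['id'], ['(', 'E', ')']],
}

def cheapest_insertions():
    # best[X] = (cost, tokens) of a least-cost terminal string derivable from X
    INF = float('inf')
    best = {t: (c, [t]) for t, c in INSERT_COST.items()}
    for nt in GRAMMAR:
        best[nt] = (INF, None)
    changed = True
    while changed:                                # fixed-point iteration
        changed = False
        for nt, prods in GRAMMAR.items():
            for rhs in prods:
                if any(best[s][0] == INF for s in rhs):
                    continue                      # some symbol not yet derivable
                cost = sum(best[s][0] for s in rhs)
                string = [tok for s in rhs for tok in best[s][1]]
                if cost < best[nt][0]:
                    best[nt] = (cost, string)
                    changed = True
    return best

best = cheapest_insertions()
print(best['E'])   # → (1, ['id'])
```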
14.
A seasonal neural network forecasting model with GM(1,1) residual correction and its application. Total citations: 2 (self-citations: 0, citations by others: 2)
Seasonal time series exhibit the dual trends of growth and fluctuation. The grey model GM(1,1) can capture the overall trend of a time series, but it cannot capture the specific characteristics of seasonal fluctuation well, so it has clear limitations when simulating and forecasting seasonal time series. This paper introduces a residual-correction model built with a seasonal neural network: the network analyzes the residual series of GM(1,1) and extracts its nonlinear component as a compensation term for forecasting, thereby correcting the residuals and forming the combined GMSANN forecasting model. A worked example shows that the model has good adaptability and forecasting accuracy.
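The GM(1,1) stage of the hybrid model can be sketched as follows: accumulate the series (1-AGO), fit the development coefficient a and grey input b by least squares on the whitened equation, and return the fitted values together with the residual series that the seasonal neural network would then model (that second stage is not shown). The sample series is made up.

```python
import math

def gm11_fit(x0):
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]               # 1-AGO accumulation
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]   # background values
    y = x0[1:]
    # least squares for x0[k] = -a*z[k] + b via the 2x2 normal equations
    m = n - 1
    sz, szz = sum(z), sum(v * v for v in z)
    sy, szy = sum(y), sum(v * w for v, w in zip(z, y))
    det = szz * m - sz * sz
    a = (sz * sy - m * szy) / det          # development coefficient
    b = (szz * sy - sz * szy) / det        # grey input
    def x0_hat(k):                         # fitted value of x0 at index k
        if k == 0:
            return x0[0]
        c = x0[0] - b / a
        return c * math.exp(-a * k) - c * math.exp(-a * (k - 1))
    fitted = [x0_hat(k) for k in range(n)]
    residuals = [xi - fi for xi, fi in zip(x0, fitted)]
    return fitted, residuals

# a made-up series with 10% growth; residuals are small, and the second stage
# of the paper's model would fit whatever seasonal pattern remains in them
fitted, residuals = gm11_fit([100, 110, 121, 133.1, 146.41])
print([round(r, 3) for r in residuals])
```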
15.
Validation and locally least-cost repair are two simple and effective techniques for dealing with syntax errors. We show how the two can be combined into an efficient and effective error handler for use with LL(1) parsers. Repairs are computed using an extension of the FMQ algorithm. Tables are created as necessary rather than precomputed, and possible repairs are kept in a priority queue. Empirical results show that the repairs chosen with this strategy are of very high quality and that the speed is quite acceptable.
16.
If the frame size of a multimedia encoder is small, Internet Protocol (IP) streaming applications need to pack many encoded media frames in each Real-time Transport Protocol (RTP) packet to avoid unnecessary header overhead. The generic forward error correction (FEC) mechanisms proposed in the literature for RTP transmission do not perform optimally in terms of stability when the RTP payload consists of several individual data elements of equal priority. In this paper, we present a novel approach for generating FEC packets optimized for applications packing multiple individually decodable media frames in each RTP payload. In the proposed method, a set of frames and its corresponding FEC data are spread among multiple packets so that the experienced frame loss rate does not vary greatly under different packet loss patterns. We verify the performance improvement gained against traditional generic FEC by analyzing and comparing the variance of the residual frame loss rate in the proposed packetization scheme and in the baseline generic FEC.
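The core packetization idea can be sketched with a plain XOR-parity code: the frames of one FEC group are striped across packets, so a single lost packet removes at most one element from each group, and the group's parity recovers it. This is a simplification of the paper's scheme, with made-up frame contents and sizes.

```python
def xor_frames(frames):
    # bytewise XOR of equal-length frames; doubles as parity and as recovery
    out = bytearray(len(frames[0]))
    for f in frames:
        for i, byte in enumerate(f):
            out[i] ^= byte
    return bytes(out)

def packetize(frames, k):
    # split frames into groups of k, append one XOR parity frame per group,
    # then build k+1 packets where packet j carries element j of every group
    groups = [frames[i:i + k] for i in range(0, len(frames), k)]
    coded = [g + [xor_frames(g)] for g in groups if len(g) == k]
    return [[g[j] for g in coded] for j in range(k + 1)]

def recover_group(received):   # received: the k surviving frames of a group
    return xor_frames(received)

frames = [bytes([v] * 4) for v in range(6)]      # six 4-byte dummy frames
packets = packetize(frames, k=3)                 # 4 packets, striped
# lose packet 1: each group misses exactly one element; XOR restores it
lost = 1
group0 = [packets[j][0] for j in range(4) if j != lost]
print(recover_group(group0) == frames[1])        # → True
```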
17.
A new method for transforming grammars into equivalent LL(k) grammars is studied. The applicability of the transformation is characterized by defining a subclass of LR(k) grammars, called predictive LR(k) grammars, with the property that a grammar is predictive LR(k) if and only if the corresponding transformed grammar is LL(k). Furthermore, it is shown that deterministic bottom-up parsing of a predictive LR(k) grammar can be done by the LL(k) parser of the transformed grammar. This parsing method is possible since the transformed grammar always left-to-right covers the original grammar. The class of predictive LR(k) grammars strictly includes the class of LC(k) grammars (the grammars that can be parsed deterministically in the left-corner manner). Thus our transformation is more powerful than the one previously available, which transforms LC(k) grammars into LL(k) form.
18.
This paper presents a scheme for ranking spelling-error corrections for Urdu. Conventional spell-checking techniques do not provide any explicit ranking mechanism: ranking is either implicit in the correction algorithm or corrections are not ranked at all. The research presented in this paper shows that, for Urdu, phonetic similarity between the corrections and the erroneous word can serve as a useful parameter for ranking the corrections. Combined with a new technique, Shapex, which uses the visual similarity of characters for ranking, this gives an improvement of 23% in the accuracy of the one-best match compared to the result obtained when ranking is done on the basis of word frequencies alone.
Sarmad Hussain
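The ranking idea can be sketched in a script-agnostic way: score each candidate correction by a weighted mix of phonetic-class and shape-class agreement with the erroneous word, then sort by that score. The class tables, weights, and Latin-script example below are invented placeholders, not the paper's actual Urdu sound classes or Shapex shape classes.

```python
# hypothetical equivalence classes: letters in one class are treated as alike
PHONETIC_CLASS = {'c': 'K', 'k': 'K', 'q': 'K', 's': 'S', 'z': 'S'}
SHAPE_CLASS = {'i': 'stick', 'l': 'stick', 'o': 'loop', 'e': 'loop'}

def class_signature(word, table):
    return [table.get(ch, ch) for ch in word]

def agreement(sig_a, sig_b):
    # fraction of positions whose classes match, penalizing length differences
    matches = sum(a == b for a, b in zip(sig_a, sig_b))
    return matches / max(len(sig_a), len(sig_b))

def rank(error_word, candidates, w_phon=0.6, w_shape=0.4):
    def score(cand):
        p = agreement(class_signature(error_word, PHONETIC_CLASS),
                      class_signature(cand, PHONETIC_CLASS))
        s = agreement(class_signature(error_word, SHAPE_CLASS),
                      class_signature(cand, SHAPE_CLASS))
        return w_phon * p + w_shape * s
    return sorted(candidates, key=score, reverse=True)

print(rank('cite', ['kite', 'site', 'lite']))   # → ['kite', 'site', 'lite']
```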
20.
This work presents the design and implementation of a syntax analyzer for microcomputers. The classical tools of high-level language analysis have been adapted to provide a machine-independent analyzer. The architecture as well as the structural details of the analyzer are given. It consists of two phases: a scanner that produces tokens by means of lexical analysis, and a parser that groups these tokens into syntactic structures usable by the subsequent phase (code generation). The main target throughout all the design steps is to achieve portability and compatibility for microcomputers. Therefore each phase consists of a number of table-based modules, whose essential characteristic is the ease with which any table can be modified or even regenerated. Appropriate interfacing has also been provided between phases. The modular design leads to storage minimization as well as system reliability and maintainability.
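The table-based module design can be illustrated with a miniature table-driven scanner: the transition table is plain data, so retargeting the scanner means regenerating a table rather than rewriting code. The states, token classes, and character classes below are a made-up miniature, not the system's actual tables.

```python
def char_class(ch):
    # map each character to the class the table is indexed by
    if ch.isalpha():
        return 'letter'
    if ch.isdigit():
        return 'digit'
    return 'other'

# the scanner's logic lives in this table, not in the code
TRANSITIONS = {
    ('start', 'letter'): 'ident', ('ident', 'letter'): 'ident',
    ('ident', 'digit'): 'ident',
    ('start', 'digit'): 'number', ('number', 'digit'): 'number',
}
ACCEPTING = {'ident': 'IDENT', 'number': 'NUMBER'}

def scan(text):
    tokens, i = [], 0
    while i < len(text):
        if text[i].isspace():
            i += 1
            continue
        state, start = 'start', i
        while i < len(text) and (state, char_class(text[i])) in TRANSITIONS:
            state = TRANSITIONS[(state, char_class(text[i]))]
            i += 1
        if state not in ACCEPTING:
            raise SyntaxError(f'unexpected character {text[i]!r} at {i}')
        tokens.append((ACCEPTING[state], text[start:i]))
    return tokens

print(scan('x1 42 y'))   # → [('IDENT', 'x1'), ('NUMBER', '42'), ('IDENT', 'y')]
```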