Similar Documents
20 similar documents found; search time: 62 ms
1.
Grail is a package for symbolic manipulation of finite-state automata and regular expressions. It provides most standard operations on automata and expressions, including minimization, subset construction, conversion between automata and regular expressions, and language enumeration and testing. Grail's objects are parameterizable; users can provide their own classes to define the input alphabet of automata and expressions. Grail's operations are accessible either as individual programs or directly through a C++ class library.
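As an illustration of the kind of operation such a toolkit provides, the following is a minimal Python sketch of the subset construction (NFA determinization). It is an illustration only, not Grail's C++ interface, and all names are ours.

```python
from itertools import chain

def subset_construction(alphabet, delta, start, finals):
    """Determinize an NFA.  `delta` maps (state, symbol) -> set of states."""
    start_set = frozenset([start])
    dfa_delta, todo, seen = {}, [start_set], {start_set}
    while todo:
        current = todo.pop()
        for a in alphabet:
            target = frozenset(chain.from_iterable(
                delta.get((q, a), ()) for q in current))
            dfa_delta[(current, a)] = target
            if target not in seen:
                seen.add(target)
                todo.append(target)
    dfa_finals = {s for s in seen if s & finals}
    return seen, dfa_delta, start_set, dfa_finals

# NFA for (a|b)*ab: words over {a, b} ending in "ab".
nfa_delta = {(0, 'a'): {0, 1}, (0, 'b'): {0}, (1, 'b'): {2}}
dstates, dtrans, dstart, dfinals = subset_construction({'a', 'b'}, nfa_delta, 0, {2})
print(len(dstates), "DFA states")   # 3 reachable subset states
```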

2.
Regular expressions with numeric occurrence indicators are an extension of traditional regular expressions, which let the required minimum and the allowed maximum number of iterations of subexpressions be described with numeric parameters. We consider the problem of testing whether a given regular expression E with numeric occurrence indicators is 1-unambiguous or not. This condition means, informally, that any prefix of any word accepted by expression E determines a unique path of matching symbol positions in E. One-unambiguity appears as a validity constraint in popular document schema languages such as SGML and XML DTDs (document type definitions) and XML Schema; the last of these both includes numeric occurrence indicators and requires one-unambiguity of expressions. Previously published solutions for testing the one-unambiguity of regular expressions with numeric occurrence indicators are either erroneous or require exponential time. The main contribution of this paper is a polynomial-time method for solving this problem, and a formal proof of its correctness.
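For plain regular expressions (without the numeric occurrence indicators the paper handles), 1-unambiguity coincides with determinism of the Glushkov position automaton (Brüggemann-Klein and Wood). The sketch below checks that criterion on two classic examples; it illustrates the notion only, it is not the paper's polynomial-time test, and the encoding is ours.

```python
from collections import namedtuple

# Each symbol occurrence in the expression is a distinct, numbered position.
Sym  = namedtuple('Sym',  'label pos')
Cat  = namedtuple('Cat',  'left right')
Alt  = namedtuple('Alt',  'left right')
Star = namedtuple('Star', 'inner')

def nullable(e):
    if isinstance(e, Sym):  return False
    if isinstance(e, Star): return True
    if isinstance(e, Alt):  return nullable(e.left) or nullable(e.right)
    return nullable(e.left) and nullable(e.right)                 # Cat

def first(e):
    if isinstance(e, Sym):  return {e}
    if isinstance(e, Star): return first(e.inner)
    if isinstance(e, Alt):  return first(e.left) | first(e.right)
    return first(e.left) | (first(e.right) if nullable(e.left) else set())

def last(e):
    if isinstance(e, Sym):  return {e}
    if isinstance(e, Star): return last(e.inner)
    if isinstance(e, Alt):  return last(e.left) | last(e.right)
    return last(e.right) | (last(e.left) if nullable(e.right) else set())

def follow(e, fol):
    if isinstance(e, Cat):
        for p in last(e.left):
            fol.setdefault(p, set()).update(first(e.right))
        follow(e.left, fol); follow(e.right, fol)
    elif isinstance(e, Alt):
        follow(e.left, fol); follow(e.right, fol)
    elif isinstance(e, Star):
        for p in last(e.inner):
            fol.setdefault(p, set()).update(first(e.inner))
        follow(e.inner, fol)

def one_unambiguous(e):
    """Deterministic Glushkov automaton: in the first set and in every follow
    set, no two positions may carry the same alphabet symbol."""
    fol = {}
    follow(e, fol)
    sets_to_check = [first(e)] + list(fol.values())
    return all(len({p.label for p in s}) == len(s) for s in sets_to_check)

# (a|b)*a is not 1-unambiguous; the equivalent expression b*a(b*a)* is.
e1 = Cat(Star(Alt(Sym('a', 1), Sym('b', 2))), Sym('a', 3))
e2 = Cat(Cat(Star(Sym('b', 1)), Sym('a', 2)),
         Star(Cat(Star(Sym('b', 3)), Sym('a', 4))))
print(one_unambiguous(e1), one_unambiguous(e2))   # False True
```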

3.
Martens et al. defined a pattern-based specification language equivalent in expressive power to the widely adopted XML Schema definitions (XSDs). This language consists of rules of the form (r,s) where r and s are regular expressions and can be seen as a type-free extension of DTDs with vertical regular expressions. Sets of such rules can be interpreted in either an existential or a universal way. In the present paper, we study the succinctness of both semantics w.r.t. each other and w.r.t. the common abstraction of XSDs in terms of single-type extended DTDs. The investigation is carried out relative to three kinds of vertical pattern languages: regular, linear, and strongly linear patterns. We also consider the complexity of the simplification problem for each of the considered pattern-based schemas.

4.
We consider two formalisms for representing regular languages: constant height pushdown automata and straight line programs for regular expressions. We constructively prove that their sizes are polynomially related. Comparing them with the sizes of finite state automata and regular expressions, we obtain optimal exponential and double exponential gaps, i.e., a more concise representation of regular languages.

5.
One-Unambiguous Regular Languages   (cited 2 times: 0 self-citations, 2 by others)
The ISO standard for the Standard Generalized Markup Language (SGML) provides a syntactic meta-language for the definition of textual markup systems. In the standard, the right-hand sides of productions are based on regular expressions, although only regular expressions that denote words unambiguously, in the sense of the ISO standard, are allowed. In general, a word that is denoted by a regular expression is witnessed by a sequence of occurrences of symbols in the regular expression that match the word. In an unambiguous regular expression as defined by Book et al. (1971, IEEE Trans. Comput. C-20(2), 149–153), each word has at most one witness. But the SGML standard also requires that a witness be computed incrementally from the word with a one-symbol lookahead; we call such regular expressions 1-unambiguous. A regular language is a 1-unambiguous language if it is denoted by some 1-unambiguous regular expression. We give a Kleene theorem for 1-unambiguous languages and characterize 1-unambiguous regular languages in terms of structural properties of the minimal deterministic automata that recognize them. As a result we are able to prove the decidability of whether a given regular expression denotes a 1-unambiguous language; if it does, then we can construct an equivalent 1-unambiguous regular expression in worst-case optimal time.

6.
This paper investigates the logic-automata connection for Duration Calculus. It has been frequently observed that Duration Calculus with linear duration terms comes close to being a logic of linear hybrid automata. We attempt to make this relation precise by constructing a Kleene connection between duration-constrained regular expressions and a subclass of linear hybrid automata called loop-reset automata, in which any variable tested in a loop is reset in the same loop. The formalism of duration-constrained regular expressions is an extension of regular expressions with duration constraints, which are essentially formulas of Duration Calculus without negation, yet extended by a Kleene-star operator. In this paper, we show that this formalism is equivalent in expressive power to loop-reset automata by providing a translation procedure from expressions to automata and vice versa. Received June 1999; accepted in revised form September 2003 by M. R. Hansen and C. B. Jones.

7.
Concurrent expressions are a class of extended regular expressions with a shuffle operator and its closure (iterated shuffle). The class of concurrent expressions with synchronization primitives, called synchronized concurrent expressions, is introduced as an extended model of Shaw's flow expressions. This paper discusses some formal properties of synchronized concurrent expressions from a formal language theoretic point of view. It is shown that synchronized concurrent expressions with three signal/wait operations are universal in the sense that they can simulate any semaphore-controlled concurrent expression and can describe the class of recursively enumerable sets. Some results on semaphore-controlled regular expressions are also included to give a taste of more positive results.
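For readers unfamiliar with the shuffle operator, the following small Python sketch (our own illustration, not taken from the paper) enumerates the shuffle of two words; the iterated shuffle closes this operation under repetition.

```python
def shuffle(u, v):
    """All interleavings of the words u and v that preserve the internal
    order of the letters of each word (the shuffle of u and v)."""
    if not u:
        return {v}
    if not v:
        return {u}
    return ({u[0] + w for w in shuffle(u[1:], v)} |
            {v[0] + w for w in shuffle(u, v[1:])})

print(sorted(shuffle("ab", "cd")))
# ['abcd', 'acbd', 'acdb', 'cabd', 'cadb', 'cdab']
```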

8.
Given a set S, a fuzzy subset of S (or a fuzzy set in S) is, by definition, an arbitrary mapping f : S → [0, 1], where [0, 1] is the usual interval of real numbers. If the set S bears some structure, one may distinguish some fuzzy subsets of S in terms of that additional structure. This important concept of a fuzzy set was first introduced by Zadeh. Fuzzy groups were first considered by Rosenfeld, fuzzy semigroups by Kuroki. A theory of fuzzy sets on ordered groupoids and ordered semigroups can be developed. Some results on ordered groupoids and semigroups have already been given by the same authors in [N. Kehayopulu, M. Tsingelis, Fuzzy sets in ordered groupoids, Semigroup Forum 65 (2002) 128-132; N. Kehayopulu, M. Tsingelis, The embedding of an ordered groupoid into a poe-groupoid in terms of fuzzy sets, Inform. Sci. 152 (2003) 231-236; N. Kehayopulu, M. Tsingelis, Fuzzy bi-ideals in ordered semigroups, Inform. Sci. 171 (2004) 13-28], where S has been endowed with the structure of an ordered semigroup and "fuzzy" analogues have been defined for several notions that have proved useful in the theory of ordered semigroups. The characterization of regular rings in terms of right and left ideals is well known. The characterization of regular semigroups and regular ordered semigroups in terms of left and right ideals, or in terms of left, right and quasi-ideals, is well known as well. The characterization of regular le-semigroups (that is, lattice-ordered semigroups having a greatest element) in terms of right ideal elements and left ideal elements, or right, left and quasi-ideal elements, is also known. In the present paper we first give the main theorem which characterizes the regular ordered semigroups by means of fuzzy right and fuzzy left ideals. Then we characterize the regular ordered semigroups in terms of fuzzy right ideals, fuzzy left ideals and fuzzy quasi-ideals. The paper serves as an example to show that one can pass from the theory of ordered semigroups to the theory of "fuzzy" ordered semigroups. Some of our results are true for ordered groupoids in general.
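For orientation, the notions at stake can be sketched as follows; this is our reconstruction of the standard definitions in this line of work, not a quotation from the paper, so the cited references should be consulted for the exact formulations.

```latex
% (S, \cdot, \le) is an ordered semigroup and f : S \to [0,1] a fuzzy subset of S.
\begin{align*}
  f \text{ is a fuzzy left ideal of } S &\iff
    \bigl(x \le y \Rightarrow f(x) \ge f(y)\bigr)
    \ \text{and}\ f(xy) \ge f(y) \quad \text{for all } x, y \in S,\\
  f \text{ is a fuzzy right ideal of } S &\iff
    \bigl(x \le y \Rightarrow f(x) \ge f(y)\bigr)
    \ \text{and}\ f(xy) \ge f(x) \quad \text{for all } x, y \in S.
\end{align*}
```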

9.
The purpose of this note is to (i) draw attention to an interesting type of decision problem for logics, known under different names such as the expressibility problem and the uniform clone membership problem, (ii) disseminate a remarkable and powerful negative result of M.F. Rață from the 1980s that seems to have escaped the attention of many researchers, and (iii) show, with the help of Rață's theorem, that the problem is undecidable for star-free regular expressions. In Section 1 we introduce the expressibility problem, taking Boolean formulas as an example. In Section 2, we present, and improve upon, Rață's theorem for transitive modal logics. In Section 3, we derive, as a consequence of Rață's theorem, the undecidability of the expressibility problem for star-free regular expressions. Finally, we conclude in Section 4 with some open questions.

10.
Text search is a classical problem in Computer Science, with many data-intensive applications. For this problem, suffix arrays are among the most widely known and used data structures, enabling fast searches for phrases, terms, substrings and regular expressions in large texts. Potential application domains for these operations include large-scale search services, such as Web search engines, where it is necessary to efficiently process intensive-traffic streams of on-line queries. This paper proposes strategies to enable such services by means of suffix arrays. We introduce techniques for deploying suffix arrays on clusters of distributed-memory processors and then study the processing of multiple queries on the distributed data structure. Even though the cost of individual search operations in sequential (non-distributed) suffix arrays is low in practice, the problem of processing multiple queries on distributed-memory systems, so that hardware resources are used efficiently, is relevant to services aimed at achieving high query throughput at low operational costs. Our theoretical and experimental performance studies show that our proposals are suitable solutions for building efficient and scalable on-line search services based on suffix arrays.
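As a reminder of the sequential building block that the paper distributes, here is a minimal Python sketch of suffix-array construction and substring search by binary search. The naive builder and the helper names are ours, not the paper's.

```python
import bisect

def suffix_array(text):
    """Sorted starting positions of all suffixes (naive O(n^2 log n) builder;
    production services use O(n log n) or O(n) constructions)."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def occurrences(text, sa, pattern):
    """Text positions where `pattern` occurs, via binary search on the
    suffix array (requires Python 3.10+ for bisect's key= argument)."""
    key = lambda i: text[i:i + len(pattern)]
    lo = bisect.bisect_left(sa, pattern, key=key)
    hi = bisect.bisect_right(sa, pattern, key=key)
    return sorted(sa[lo:hi])

text = "mississippi"
sa = suffix_array(text)
print(occurrences(text, sa, "ss"))   # [2, 5]
```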

11.
唐球  姜磊  谭建龙  刘金刚 《计算机应用》2011,31(11):2943-2946
This paper analyzes the advantages of hardware-based regular expression matching and introduces the basic ideas and design methods of regular expression matching algorithms on field-programmable gate arrays (FPGAs). Existing algorithms are classified according to two technical metrics, matching speed and resource utilization; the current mainstream algorithms are surveyed and their strengths and weaknesses analyzed. Finally, the difficulties in designing FPGA-based regular expression matching algorithms are discussed and future research directions are outlined.

12.
Parallel communicating grammar systems with regular control (RPCGS, for short) are introduced, which are obtained from returning regular parallel communicating grammar systems by restricting the derivations that are executed in parallel by the various components through a regular control language. For the class of languages that are generated by RPCGSs with constant communication complexity we derive a characterisation in terms of a restricted type of freely rewriting restarting automaton. From this characterisation we obtain that these languages are semi-linear, and that for RPCGSs with constant communication complexity, the centralised variant has the same generative power as the non-centralised variant.

13.
The notion of a regular anonymous perfect threshold scheme is introduced, and a combinatorial characterization for such a (2,w)-threshold scheme in terms of a regular difference family is given.

14.
Obtaining shorter regular expressions from finite-state automata   (cited 1 time: 0 self-citations, 1 by others)
We consider the use of state elimination to construct shorter regular expressions from finite-state automata (FAs). Although state elimination is an intuitive method for computing regular expressions from FAs, the resulting regular expressions are often very long and complicated. We first examine the minimization of FAs to obtain shorter expressions. Then, we introduce vertical chopping based on bridge states and horizontal chopping based on the structural properties of given FAs. We prove that we should not eliminate bridge states until we have eliminated all non-bridge states in order to obtain shorter regular expressions. In addition, we suggest heuristics for state elimination that lead to shorter regular expressions based on vertical chopping and horizontal chopping.
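The basic state-elimination step can be sketched as follows. This is a simplified Python illustration, not the paper's heuristic: it assumes a single combined initial/final state and ignores the elimination order (bridge states last) that the paper optimizes.

```python
def eliminate(edges, state):
    """One state-elimination step on a generalized automaton whose edges are
    labelled with regular expressions: every path p -> state -> q is replaced
    by a direct edge labelled  R(p,state) R(state,state)* R(state,q)."""
    loop = edges.get((state, state))
    star = f"({loop})*" if loop else ""
    ins  = [(p, r) for (p, q), r in edges.items() if q == state and p != state]
    outs = [(q, r) for (p, q), r in edges.items() if p == state and q != state]
    new = {k: v for k, v in edges.items() if state not in k}
    for p, r_in in ins:
        for q, r_out in outs:
            piece = f"{r_in}{star}{r_out}"
            new[(p, q)] = f"{new[(p, q)]}+{piece}" if (p, q) in new else piece
    return new

# Two-state automaton for (ab)*: s (initial and final) -a-> 1 -b-> s.
edges = {('s', 1): 'a', (1, 's'): 'b'}
print(eliminate(edges, 1))   # {('s', 's'): 'ab'}, i.e. the loop (ab)* on s
```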

15.
There are two approaches to formalizing the syntax of typed object languages in a proof assistant or programming language. The extrinsic approach is to first define a type that encodes untyped object expressions and then make a separate definition of typing judgements over the untyped terms. The intrinsic approach is to make a single definition that captures well-typed object expressions, so ill-typed expressions cannot even be expressed. Intrinsic encodings are attractive and naturally enforce the requirement that metalanguage operations on object expressions, such as substitution, respect object types. The price is that the metalanguage types of intrinsic encodings and operations involve non-trivial dependency, adding significant complexity. This paper describes intrinsic-style formalizations of both simply-typed and polymorphic languages, and basic syntactic operations thereon, in the Coq proof assistant. The Coq types encoding object-level variables (de Bruijn indices) and terms are indexed by both type and typing environment. One key construction is the boot-strapping of definitions and lemmas about the action of substitutions in terms of similar ones for a simpler notion of renamings. In the simply-typed case, this yields definitions that are free of any use of type equality coercions. In the polymorphic case, some substitution operations do still require type coercions, which we at least partially tame by uniform use of heterogeneous equality.

16.
Detection of redundant expressions in a program based on values is a well-researched problem, studied with a view to eliminating redundancies so as to improve the run-time efficiency of the program. The problem entails detection of equivalent expressions in a program. An iterative data-flow analysis algorithm is presented to detect equivalent expressions in SSA form for the purpose of detecting redundancies. The challenge is detection of equivalence of expressions at join points, in polynomial time, in a way that enables detection of later redundancies. This is achieved by the use of a value ϕ-function. The proposed algorithm is as precise as Kildall's in the detection of redundant expressions and takes only polynomial time. The algorithm is implemented in LLVM. An experimental analysis demonstrates that the algorithm is as precise as Kildall's and outperforms some existing algorithms in terms of run-time efficiency, indicating its practical significance.
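As a toy illustration of value-based redundancy detection (not the paper's iterative SSA algorithm), the following Python sketch performs hash-based value numbering over a straight-line block; all names are ours. The paper's contribution is, roughly, to extend this kind of equivalence detection across join points using value ϕ-functions.

```python
def value_number(block):
    """Toy hash-based value numbering over a straight-line block of
    (dest, op, arg1, arg2) instructions; returns the destinations whose value
    was already computed earlier.  (The paper, roughly, extends such
    equivalence detection across SSA join points via value phi-functions
    over the incoming value numbers.)"""
    fresh = iter(range(1_000_000))
    vn, expr_to_vn, redundant = {}, {}, []
    for dest, op, a1, a2 in block:
        key = (op, vn.setdefault(a1, next(fresh)), vn.setdefault(a2, next(fresh)))
        if key in expr_to_vn:
            redundant.append(dest)        # same value already available
            vn[dest] = expr_to_vn[key]
        else:
            vn[dest] = expr_to_vn[key] = next(fresh)
    return redundant

block = [('t1', '+', 'a', 'b'),
         ('t2', '+', 'a', 'b'),          # recomputes the value of t1
         ('t3', '*', 't1', 'c')]
print(value_number(block))               # ['t2']
```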

17.
Model checking has proven to be a useful analysis technique not only for concurrent systems, but also for genetic regulatory networks (GRNs). Applications of model checking in systems biology have revealed that temporal logics should be able to capture both branching-time and fairness properties (needed for specifying multistability and oscillation properties, respectively). At the same time, they should have a user-friendly syntax easy to employ by non-experts. In this paper, we define Computation Tree Regular Logic (CTRL), an extension of CTL with regular expressions and fairness operators that attempts to match these criteria. CTRL subsumes both CTL and LTL, and has a reduced set of temporal operators indexed by regular expressions. We also develop a translation of CTRL into Hennessy-Milner Logic with Recursion (HMLR), an equational variant of the modal μ-calculus. This has allowed us to obtain an on-the-fly model checker with diagnostic for CTRL by directly reusing the verification technology available in the CADP toolbox. We illustrate the application of the CTRL model checker by analyzing the GRN controlling the carbon starvation response of Escherichia coli.

18.
19.
Several methods have been developed to construct λ-free automata that represent a regular expression. Among the most widely known are the position automaton (Glushkov), the partial derivatives automaton (Antimirov) and the follow automaton (Ilie and Yu). All these automata can be obtained with quadratic time complexity; thus, the comparison criterion is usually the size of the resulting automaton. The methods that obtain the smallest automata (although, for general expressions, they are not comparable) are the follow and the partial derivatives methods. In this paper, we propose another method to obtain a λ-free automaton from a regular expression. The number of states of the automata we obtain is bounded above by the size of both the partial derivatives automaton and the follow automaton. Our algorithm also runs with the same time complexity as these methods.
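To make one of the compared constructions concrete, here is a small Python sketch of Antimirov's partial derivatives, whose derived expressions become the states of the partial derivatives automaton. The tuple encoding and helper names are ours.

```python
# Regular expressions as hashable tuples:
#   ('eps',), ('sym', a), ('alt', r, s), ('cat', r, s), ('star', r)
EPS = ('eps',)

def nullable(r):
    tag = r[0]
    if tag == 'eps':
        return True
    if tag == 'sym':
        return False
    if tag == 'alt':
        return nullable(r[1]) or nullable(r[2])
    if tag == 'cat':
        return nullable(r[1]) and nullable(r[2])
    return True                                       # star

def cat(r, s):
    """Smart concatenation that drops epsilon factors."""
    return r if s == EPS else (s if r == EPS else ('cat', r, s))

def pd(r, a):
    """Antimirov partial derivatives of r by the symbol a: a set of
    expressions, one for each nondeterministic way of consuming a."""
    tag = r[0]
    if tag == 'eps':
        return set()
    if tag == 'sym':
        return {EPS} if r[1] == a else set()
    if tag == 'alt':
        return pd(r[1], a) | pd(r[2], a)
    if tag == 'star':
        return {cat(d, r) for d in pd(r[1], a)}
    left = {cat(d, r[2]) for d in pd(r[1], a)}        # cat
    return left | (pd(r[2], a) if nullable(r[1]) else set())

# (a|b)*a : its partial derivatives by 'a' are {(a|b)*a, eps}.
r = ('cat', ('star', ('alt', ('sym', 'a'), ('sym', 'b'))), ('sym', 'a'))
print(len(pd(r, 'a')), len(pd(r, 'b')))               # 2 1
```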

20.
Most modern implementations of regular expression engines allow the use of variables (also called backreferences). The resulting extended regular expressions (which, in the literature, are also called practical regular expressions, rewbr, or regex) are able to express non-regular languages. The present paper demonstrates that extended regular expressions cannot be minimized effectively (neither with respect to length, nor to the number of variables), and that the tradeoff in size between extended and "classical" regular expressions is not bounded by any recursive function. In addition to this, we prove the undecidability of several decision problems (universality, regularity, and cofiniteness) for extended regular expressions. Furthermore, we show that all these results hold even if the extended regular expressions contain only a single variable.
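A quick illustration of the expressive power in question, using Python's re module as one such engine: a single backreference already defines the copy language {ww}, which is not regular.

```python
import re

# One variable (the backreference \1) suffices to accept { ww : w nonempty },
# a language that no classical regular expression can describe.
copy = re.compile(r"(.+)\1")

print(bool(copy.fullmatch("abcabc")))   # True:  "abc" repeated
print(bool(copy.fullmatch("abcabd")))   # False: not of the form ww
```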
