Similar Documents
20 similar documents found (search time: 15 ms)
1.
Over the past two decades, Scheme macros have evolved into a powerful API for the compiler front end. Like Lisp macros, their predecessors, Scheme macros expand source programs into a small core language; unlike Lisp systems, Scheme macro expanders preserve lexical scoping, and advanced Scheme macro systems handle other important properties such as source location. Using such macros, Scheme programmers now routinely develop the ultimate abstraction: embedded domain-specific programming languages. Unfortunately, a typical Scheme programming environment provides little support for macro development. This lack makes it difficult for programmers to debug their macros and for novices to study the behavior of macros. In response, we have developed a stepping debugger specialized to the concerns of macro expansion. This debugger presents the macro expansion process as a linear rewriting sequence of annotated terms; it graphically illustrates the binding structure of the program as expansion reveals it; and it adapts to the programmer's level of abstraction, hiding details of syntactic forms that the programmer considers built-in.
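The "linear rewriting sequence" such a stepper displays can be mimicked in a few lines. The sketch below (in Python, with nested tuples standing in for S-expressions) is purely illustrative: the macro table, the `or2` rule, and the function names are assumptions, and a real Scheme expander additionally tracks hygiene and source locations.

```python
# Toy illustration (not the paper's system): macro expansion as a linear
# rewriting sequence over nested tuples standing in for S-expressions.

MACROS = {
    # (or2 a b) => (let ((t a)) (if t t b)) -- hygiene issues ignored here
    "or2": lambda a, b: ("let", (("t", a),), ("if", "t", "t", b)),
}

def expand_once(term):
    """Rewrite the leftmost-outermost macro call, if any; return (term, changed)."""
    if isinstance(term, tuple) and term and term[0] in MACROS:
        return MACROS[term[0]](*term[1:]), True
    if isinstance(term, tuple):
        for i, sub in enumerate(term):
            new, changed = expand_once(sub)
            if changed:
                return term[:i] + (new,) + term[i + 1:], True
    return term, False

def expansion_steps(term):
    """The linear rewriting sequence a stepper would display."""
    steps = [term]
    while True:
        term, changed = expand_once(term)
        if not changed:
            return steps
        steps.append(term)
```

Calling `expansion_steps(("or2", "x", "y"))` yields the original term followed by its one-step expansion, exactly the kind of sequence a macro stepper would annotate and display.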

2.
We propose an algebra of languages and transformations as a means of compositional syntactic language extension. The algebra provides a layer of high-level abstractions built on top of languages (captured by context-free grammars) and transformations (captured by constructive catamorphisms).
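As a rough illustration of a transformation captured by a constructive catamorphism, the sketch below folds an algebra (one function per constructor) over a toy expression AST; the AST encoding and the `TO_PREFIX` algebra are assumptions for illustration, not the paper's formalism.

```python
# Sketch of a constructive catamorphism: a transformation defined by one
# function per AST constructor, folded bottom-up over the tree.

def cata(algebra, node):
    """Fold `algebra` over a nested-tuple AST of shape (tag, child, ...)."""
    tag, *kids = node
    return algebra[tag](*[cata(algebra, k) if isinstance(k, tuple) else k
                          for k in kids])

# An example algebra: a syntax-to-syntax transformation into prefix form.
TO_PREFIX = {
    "num": lambda v: str(v),
    "add": lambda a, b: f"(+ {a} {b})",
}
```

Because the transformation is just an algebra, two such transformations can be composed or combined constructor-by-constructor, which is the kind of compositionality the abstract alludes to.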

3.
A hierarchical three-stage syntactic recognition algorithm using context-free grammars is described for automatic identification of skeletal maturity from X-rays of the hand and wrist. The primitives considered are dots, straight lines, and arcs of three different curvatures (in both senses), used to describe and interpret the structural development of the epiphysis and metaphysis as a child grows.
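Syntactic recognition over a string of such shape primitives can be sketched with a standard CYK recognizer. The grammar below (an "epiphysis" as a cap arc followed by a stem line) is invented for illustration and is not the paper's grammar.

```python
# Minimal CYK recognizer over strings of shape primitives; the grammar
# and lexicon are illustrative stand-ins, not the paper's.
from itertools import product

# Grammar in Chomsky normal form: EPIPHYSIS -> CAP STEM
GRAMMAR = {
    ("CAP", "STEM"): {"EPIPHYSIS"},
}
LEXICON = {"arc": {"CAP"}, "line": {"STEM"}, "dot": {"DOT"}}

def cyk(tokens, start="EPIPHYSIS"):
    """Return True iff `tokens` is derivable from `start`."""
    n = len(tokens)
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, tok in enumerate(tokens):
        table[i][i + 1] = set(LEXICON.get(tok, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for a, b in product(table[i][k], table[k][j]):
                    table[i][j] |= GRAMMAR.get((a, b), set())
    return start in table[0][n]
```

A hierarchical recognizer as in the abstract would run several such grammars in stages, feeding the nonterminals recognized at one level in as the "primitives" of the next.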

4.
This paper addresses one of the central problems arising at the transfer stage in machine translation: syntactic mismatches, that is, mismatches between a source-language sentence structure and its equivalent target-language sentence structure. The level at which we assume the transfer to be carried out is the Deep-Syntactic Structure (DSyntS) as proposed in Meaning-Text Theory (MTT). DSyntS is abstract enough to avoid all types of divergences that result either from restricted lexical co-occurrence or from surface-syntactic discrepancies between languages. As for the remaining types of syntactic divergences, all of them occur not only interlinguistically but also intralinguistically; this means that establishing correspondences between semantically equivalent expressions of the source and target languages that diverge with respect to their syntactic structure is nothing other than paraphrasing. This allows us to adapt the powerful intralinguistic paraphrasing mechanism developed in MTT for the purposes of interlinguistic transfer.

5.
6.
This paper studies quantitative model checking of infinite tree-like (continuous-time) Markov chains. These tree-structured quasi-birth-death processes are equivalent to probabilistic pushdown automata and recursive Markov chains and are widely used in the field of performance evaluation. We determine time-bounded reachability probabilities in these processes, which with direct methods (i.e., uniformization) incur an exponential blow-up, by applying abstraction. We contrast abstraction based on Markov decision processes (MDPs) with interval-based abstraction, study various schemes to partition the state space, and empirically show their influence on the accuracy of the obtained reachability probabilities. Results show that grid-like schemes, in contrast to chain- and tree-like ones, yield extremely precise approximations for rather coarse abstractions.
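For a finite CTMC, the direct uniformization method for time-bounded reachability can be sketched as follows; the function name and matrix encoding are assumptions for illustration, and the paper's contribution is precisely about taming the blow-up this direct method suffers on the infinite tree-like case.

```python
import math

def reach_prob(Q, init, target, t, q=None, eps=1e-9):
    """Time-bounded reachability in a finite CTMC via uniformization.
    Q: generator matrix (list of lists); target states are made absorbing."""
    n = len(Q)
    Q = [row[:] for row in Q]
    for s in target:                       # make target states absorbing
        Q[s] = [0.0] * n
    if q is None:                          # uniformization rate >= max exit rate
        q = max(-Q[i][i] for i in range(n)) or 1.0
    # Uniformized DTMC: P = I + Q / q
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / q for j in range(n)]
         for i in range(n)]
    pi = [0.0] * n
    pi[init] = 1.0
    acc, poisson, total, k = 0.0, math.exp(-q * t), 0.0, 0
    while total < 1.0 - eps:               # accumulate Poisson-weighted mass
        acc += poisson * sum(pi[s] for s in target)
        total += poisson
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        poisson *= q * t / (k + 1)
        k += 1
    return acc
```

On a two-state chain with a single transition of rate 1 to an absorbing target, this reproduces the closed form 1 − e^(−t), which makes for a quick sanity check.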

7.
Grammar Error Checking in Automatic Proofreading of Chinese Text
This paper divides grammatical errors in Chinese text into two broad classes: collocation errors and errors related to sentence-pattern constituents. These are checked by pattern matching and by analysis of sentence-pattern constituents, respectively. Combining the two methods takes both local and global grammatical constraints into account while lowering the complexity of grammar checking. Analysis and evaluation of experimental results show that the proposed methods are feasible.
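The pattern-matching half of such a checker can be sketched as a lookup against a collocation table over tagged text; the table, the POS tags, and the naive "next noun is the object" heuristic below are assumptions for illustration only, not the paper's method.

```python
# Illustrative sketch (not the paper's system): collocation checking by
# pattern matching over (word, POS) sequences.

# Hypothetical collocation table: verb -> set of acceptable object nouns.
COLLOCATIONS = {
    "打开": {"门", "窗户", "开关"},   # "open" collocates with door/window/switch
}

def check_collocations(tagged):
    """Flag verb-object pairs that violate the collocation table.
    `tagged` is a list of (word, POS) pairs; a verb's object is taken,
    naively, to be the next noun after it."""
    errors = []
    for i, (word, pos) in enumerate(tagged):
        if pos == "v" and word in COLLOCATIONS:
            for obj, opos in tagged[i + 1:]:
                if opos == "n":
                    if obj not in COLLOCATIONS[word]:
                        errors.append((word, obj))
                    break
    return errors
```

The sentence-pattern half described in the abstract would complement this with constraints over whole-sentence constituent structure, catching errors this local check cannot see.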

8.
This work proposes a method for improving the scalability of model-checking compositions in the bottom-up construction of abstract components. The approach uses model checking in the model construction process for testing the composite behaviors of components, including process deadlock and inconsistency in inter-component call sequences. Assuming a single-processor model, the scalability issue is addressed by introducing operational models for synchronous/asynchronous inter-component message passing, which are designed to reduce spurious behaviors caused by typical parallel compositions. Together with two abstraction techniques, synchronized abstraction and projection abstraction, which hide verified internal communication behavior, this operational model helps to reduce the complexity of composition and verification. The approach is supported by the Marmot development framework, where the soundness of the approach is assured through horizontal verification as well as vertical verification. Application of the approach to a wireless sensor network application shows promising performance improvement, with linear growth in memory usage for the vertically incremental verification of abstract components.

9.
Modern programming languages allow the user to define new data types. This increases the desirability of having generic procedures as they reduce programming effort and program size, allow for portability across types and are a means of abstraction.

A simple but powerful notation for incorporating generic procedures in a language is proposed, along with an efficient macro-like compile-time technique for implementing them.

Unfortunately, pathological recursive procedure call sequences create an undecidability problem for the suggested implementation technique. A proof is given for the above undecidability problem and practical remedies for avoiding it are suggested.
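The recursion hazard can be seen in a toy worklist instantiator: each generic instance may demand further instances, and a recursive generic that demands itself at an ever-larger type never lets the worklist drain. The depth cap below is one flavor of the practical remedies the abstract alludes to; all names are illustrative assumptions.

```python
# Sketch of macro-like compile-time instantiation of generics via a
# worklist (one specialization per concrete (procedure, type) pair),
# with a depth cap as a practical guard against unbounded recursive
# instantiation, which is undecidable to detect in general.

def instantiate(entry, calls, max_depth=10):
    """`calls` maps an instance (proc, type) to the instances it requires;
    a pathological generic may demand itself at an ever-'larger' type."""
    done, work = set(), [(entry, 0)]
    while work:
        inst, depth = work.pop()
        if inst in done:
            continue
        if depth > max_depth:
            raise RecursionError(f"instantiation of {inst} exceeds depth cap")
        done.add(inst)
        for nxt in calls(inst):
            work.append((nxt, depth + 1))
    return done
```

On a well-behaved program the worklist drains and the finite set of needed specializations is returned; on a pathological one, the cap turns non-termination into a compile-time error.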


10.
In this paper we introduce and discuss the concept of syntactic n-grams (sn-grams). Sn-grams differ from traditional n-grams in how they are constructed, i.e., in which elements are considered neighbors. In the case of sn-grams, neighbors are determined by following syntactic relations in syntactic trees rather than by taking words as they appear in the text; that is, sn-grams are constructed by following paths in syntactic trees. In this manner, sn-grams bring syntactic knowledge into machine learning methods, although prior parsing is necessary for their construction. Sn-grams can be applied in any natural language processing (NLP) task where traditional n-grams are used. We describe how sn-grams were applied to authorship attribution. As baselines we used traditional n-grams of words, part-of-speech (POS) tags, and characters; three classifiers were applied: support vector machines (SVM), naive Bayes (NB), and the tree classifier J48. Sn-grams give better results with the SVM classifier.
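A minimal sn-gram extractor can be sketched over a dependency tree given as a head-index array; the tree encoding and the choice of head-to-dependent paths below are assumptions for illustration.

```python
# Sketch of syntactic n-gram (sn-gram) extraction: n-grams are read
# along paths in a syntactic tree rather than along the text. The
# child->head index encoding is an assumption for illustration.

def sn_grams(words, heads, n):
    """words[i] has head words[heads[i]]; heads[root] == -1.
    Returns all head-to-dependent paths of length n as word tuples."""
    children = {i: [] for i in range(len(words))}
    for i, h in enumerate(heads):
        if h >= 0:
            children[h].append(i)

    def paths(i, depth):
        if depth == 1:
            return [[i]]
        return [[i] + p for c in children[i] for p in paths(c, depth - 1)]

    result = []
    for i in range(len(words)):
        for p in paths(i, n):
            result.append(tuple(words[j] for j in p))
    return result
```

For "John eats red apples" with "eats" as root, the sn-bigrams pair each head with its dependents ("eats John", "eats apples", "apples red"), while the surface bigram "red apples" is replaced by the syntactically motivated "apples red" path.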

11.
Making sense of the abstraction hierarchy in the power plant domain
The paper discusses the abstraction hierarchy proposed by Rasmussen [(1986) Information processing and human-machine interaction, North-Holland] for the design of human-machine interfaces for supervisory control. The purpose of the abstraction hierarchy is to represent a work domain by multiple levels of means-end and part-whole abstractions. It is argued in the paper that the abstraction hierarchy suffers from both methodological and conceptual problems. A cluster of selected problems is analyzed and illustrated by concrete examples from the power plant domain. It is concluded that the semantics of the means-end levels and their relations are vaguely defined and should therefore be improved by making more precise distinctions. Furthermore, the commitment to a fixed number of levels of means-end abstraction should be abandoned, and more attention given to the problem of level identification in the model-building process. It is also pointed out that attempts to clarify the semantics of the abstraction hierarchy will invariably reduce the range of work domains where it can be applied.

12.
13.
The notion of context appears in computer science, as well as in several other disciplines, in various forms. In this paper, we present a general framework for representing the notion of context in information modeling. First, we define a context as a set of objects, within which each object has a set of names and possibly a reference: the reference of the object is another context which “hides” detailed information about the object. Then, we introduce the possibility of structuring the contents of a context through the traditional abstraction mechanisms, i.e., classification, generalization, and attribution. We show that, depending on the application, our notion of context can be used as an independent abstraction mechanism, either in an alternative or a complementary capacity with respect to the traditional abstraction mechanisms. We also study the interactions between contextualization and the traditional abstraction mechanisms, as well as the constraints that govern such interactions. Finally, we present a theory for contextualized information bases. The theory includes a set of validity constraints, a model theory, as well as a set of sound and complete inference rules. We show that our core theory can be easily extended to support embedding of particular information models in our contextualization framework.
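The core of the model, a context as a set of objects, each with a set of names and an optional reference to a context that hides its details, can be sketched directly; the class and method names below are illustrative, not the paper's notation.

```python
# Minimal sketch of contextualization: a context is a set of objects,
# each carrying a set of names and optionally a reference to another
# context that "hides" the object's detailed information.

class Obj:
    def __init__(self, names, reference=None):
        self.names = set(names)        # names of the object in this context
        self.reference = reference     # Context holding hidden detail, or None

class Context:
    def __init__(self, objects=()):
        self.objects = list(objects)

    def lookup(self, name):
        """Resolve a name to an object within this context."""
        for o in self.objects:
            if name in o.names:
                return o
        return None

    def drill_down(self, name):
        """Follow an object's reference to the context that details it."""
        o = self.lookup(name)
        return o.reference if o else None
```

Note how names are local: "engine" is resolvable only after drilling down into the context referenced by "car", which is exactly the hiding behavior the abstract describes.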

14.
Real robots should be able to adapt autonomously to various environments in order to go on executing their tasks without breaking down. They achieve this by learning how to abstract only useful information from the huge amount of information in the environment while executing their tasks. This paper proposes a new architecture which performs categorical learning and behavioral learning in parallel with task execution. We call the architecture the Situation Transition Network System (STNS). In categorical learning, it builds a flexible state representation and modifies it according to the results of behaviors. Behavioral learning is reinforcement learning on that state representation. Simulation results have shown that this architecture is able to learn efficiently and adapt autonomously to unexpected changes in the environment. Atsushi Ueno, Ph.D.: He is a research associate in the Artificial Intelligence Laboratory at the Graduate School of Information Science at the Nara Institute of Science and Technology (NAIST). He received the B.E., the M.E., and the Ph.D. degrees in aeronautics and astronautics from the University of Tokyo in 1991, 1993, and 1997 respectively. His research interest is robot learning and autonomous systems. He is a member of the Japan Association for Artificial Intelligence (JSAI). Hideaki Takeda, Ph.D.: He is an associate professor in the Artificial Intelligence Laboratory at the Graduate School of Information Science at the Nara Institute of Science and Technology (NAIST). He received his Ph.D. in precision machinery engineering from the University of Tokyo in 1991. He has conducted research on a theory of intelligent computer-aided design systems, in particular experimental study and logical formalization of engineering design. He is also interested in multiagent architectures and ontologies for knowledge base systems.
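The behavioral-learning half, reinforcement learning over a (here fixed) state representation, can be sketched with generic tabular Q-learning; STNS itself also learns the categories that form the states, which this toy omits, and all parameter values are illustrative.

```python
import random

# Generic tabular Q-learning sketch of "behavioral learning on a state
# representation"; the state set is fixed here, whereas STNS would also
# adapt the categories themselves during task execution.

def q_learning(states, actions, step, episodes=500, alpha=0.5,
               gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = states[0]                     # each episode starts at the first state
        for _ in range(20):
            a = (rng.choice(actions) if rng.random() < eps
                 else max(actions, key=lambda a: Q[(s, a)]))
            s2, r, done = step(s, a)      # environment transition
            best = max(Q[(s2, b)] for b in actions)
            Q[(s, a)] += alpha * (r + gamma * best - Q[(s, a)])
            s = s2
            if done:
                break
    return Q
```

On a two-state chain where only "right" leads to reward, the learned Q-values come to prefer that action from the start state.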

15.
This paper presents a formal framework for verifying distributed embedded systems. An embedded system is described as a set of concurrent real-time functions which communicate through a network of interconnected switches involving message queues and routing services. In order to allow requirements verification, such a model is then translated into timed automata. However, the complexity inherent in distributed embedded systems often makes it impossible to apply model checking techniques directly. Consequently, the paper presents an abstraction-based verification method which consists in abstracting the communication network by end-to-end timed channels. Proving a given safety property φ then requires (1) proving a set of proof obligations ensuring the correctness of the abstraction step (i.e., that the end-to-end channels correctly abstract the network), and (2) proving φ at the abstract level. The expected advantage of such a method lies in its ability to overcome the combinatorial explosion frequently met when verifying complex systems. The method is illustrated by an avionic case study.

16.
By integrating human intelligence with geometric modeling and knowledge-based systems technologies, a realistic and plausible solution was investigated for two critical activities in the legacy data conversion cycle of engineering drawings (LDC): vector drawing editing and high-level symbol interpretation. A prototype system called LEDCONS was implemented to illustrate the concepts and demonstrate the feasibility of the proposed architecture and methodologies. In this paper, the concepts and methods used by LEDCONS to perform syntactic-level drawing interpretation are presented. The LEDCONS system modules at this level are designed on the strategy that drawing interpretation can be embedded within the process of human editing in an interactive fashion. Two fundamental approaches for recognizing drawing constructs at this level are proposed: an automatic search-and-match approach and a manual select-and-recognize approach. With these, syntactically related drawing constructs, such as text blocks, hatch patterns, dimension sets, FCF/datum constructs, and geometry entities, are extracted. Hence, the reusability and reliability of the drawing data can be significantly increased.

17.
This paper presents first steps towards a formalisation of the Architecture Analysis and Design Language, mainly concentrating on a representation of its data model. For this, we contrast two approaches: one set-based (using the B modelling framework) and one in a higher-order logic (using the Isabelle proof assistant). We illustrate a transformation on a simplified part of the AADL metamodel concerning flows.

18.
19.
Unfolding is a semantics-preserving program transformation technique that consists in the expansion of subexpressions of a program using their own definitions. In this paper we define two unfolding-based transformation rules that extend the classical definition of the unfolding rule (for pure logic programs) to a fuzzy logic setting. We use a fuzzy variant of Prolog where each program clause can be interpreted under a different (fuzzy) logic. We adapt the concept of a computation rule, a mapping that selects the subexpression of a goal involved in a computation step, and we prove the independence of the computation rule. We also define a basic transformation system and we demonstrate its strong correctness, that is, original and transformed programs compute the same fuzzy computed answers. Finally, we prove that our transformation rules always produce an improvement in the efficiency of the residual program, by reducing the length of successful Fuzzy SLD-derivations.
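The classical (crisp) unfolding rule that the paper generalizes can be sketched for propositional definite clauses: expand one selected atom of a clause body using the program's definitions of that atom. The fuzzy version additionally propagates truth degrees through each expansion; the toy program below is an assumption for illustration.

```python
# Sketch of the classical unfolding rule on propositional definite
# clauses. A program maps each head to its alternative bodies (lists
# of atoms); a fact has the empty body.

PROGRAM = {
    "p": [["q", "r"]],
    "q": [["s"]],
    "s": [[]],          # fact
    "r": [[]],          # fact
}

def unfold(body, program, select=0):
    """Expand the selected atom of a clause body using its definitions,
    producing one new body per matching program clause (the role the
    computation rule plays is fixed here to 'leftmost')."""
    atom, rest = body[select], body[:select] + body[select + 1:]
    return [defn + rest for defn in program.get(atom, [])]
```

Repeatedly unfolding the goal `p` shortens later derivations, which mirrors the efficiency claim of the abstract: work done once at transformation time is removed from every subsequent successful derivation.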

20.
Macros provide an important assembly language feature which, when multiple levels of nesting are permitted, makes the power of assembly language comparable to that of procedure-oriented languages. The problem with macros is that each time one is invoked, it produces a considerable amount of code whenever anything complicated is done. To retain the power of the macro while eliminating most of its space-consuming property, closed subroutines are nested within the macro. A minimum amount of code is required for the closed subroutine, specifically its calling sequence and housekeeping functions. It is possible to reduce the length of the macro's code in the caller's control section by moving the closed subroutine to an entirely different control section. Taken together, these techniques enable the assembly language programmer to keep modules compact, so that a minimum of base register allocation and manipulation is required.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号