Similar Documents
20 similar documents found.
1.
The ability to model search in a constraint solver can be an essential asset for solving combinatorial problems. However, existing infrastructure for defining search heuristics is often inadequate. Either modeling capabilities are extremely limited or users are faced with a general-purpose programming language whose features are not tailored towards writing search heuristics. As a result, major improvements in performance may remain unexplored. This article introduces search combinators, a lightweight and solver-independent method that bridges the gap between a conceptually simple modeling language for search (high-level, functional and naturally compositional) and an efficient implementation (low-level, imperative and highly non-modular). By allowing the user to define application-tailored search strategies from a small set of primitives, search combinators effectively provide a rich domain-specific language (DSL) for modeling search to the user. Remarkably, this DSL comes at a low implementation cost to the developer of a constraint solver. The article discusses two modular implementation approaches and shows, by empirical evaluation, that search combinators can be implemented without overhead compared to a native, direct implementation in a constraint solver.
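To make the idea concrete, here is a minimal sketch in Haskell of what such a search DSL can look like: a small algebraic datatype of primitives from which application-tailored strategies are assembled. The constructor names are illustrative, not the paper's actual syntax.

```haskell
-- A minimal sketch of a search-combinator DSL; names are illustrative.
data Search
  = BaseLabel                        -- the solver's primitive labelling search
  | AndSeq Search Search             -- run one strategy, then the other
  | OrElse Search Search             -- try the first; fall back to the second
  | Limit Int Search                 -- abort the inner search after n failures
  | Restart (Int -> Int) Int Search  -- iterated Limit with a growing bound

-- An application-tailored strategy assembled from the primitives:
-- restart-based search whose failure limit doubles on every restart.
myStrategy :: Search
myStrategy = Restart (* 2) 100 (OrElse BaseLabel (Limit 50 BaseLabel))
```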

2.
R. D. Lins, Software, 1987, 17(8): 547–559
Categorical combinators form a formal system similar to Curry's combinatory logic. The original system was developed by Curien, inspired by the equivalence of the theories of typed λ-calculus and Cartesian closed categories, as shown by Lambek and Scott. A new system for categorical combinators was introduced by the author; it uses a more compact notation for the code and needs a smaller set of rewriting rules. The aim of this paper is to analyse these two rewriting systems for categorical combinators as a basis for the implementation of applicative languages, and to compare them with the classical approach due to Turner, which uses combinatory logic.
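For readers unfamiliar with the system, the following Haskell sketch shows a fragment of Curien-style categorical combinators with a few of their rewrite rules; the rule set is abridged for illustration and is not a transcription of either system analysed in the paper.

```haskell
-- A fragment of categorical combinators with some standard rewrite rules.
data CC
  = Id | CC :.: CC          -- identity and composition
  | Fst | Snd | Pair CC CC  -- projections and pairing
  | Cur CC | App            -- currying and application
  deriving Show

-- One step of rewriting at the root of a term; Nothing if no rule applies.
step :: CC -> Maybe CC
step (Id :.: f)               = Just f
step (f :.: Id)               = Just f
step (Fst :.: Pair f _)       = Just f
step (Snd :.: Pair _ g)       = Just g
step (Pair f g :.: h)         = Just (Pair (f :.: h) (g :.: h))
step (App :.: Pair (Cur f) g) = Just (f :.: Pair Id g)  -- the "beta" rule
step _                        = Nothing
```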

3.
Boolean interaction systems and hard interaction systems define nets of interacting cells. They are based on the same local interaction principle between two cells as interaction nets, but do not allow the structure of nets to evolve: with boolean nets, it is not possible to create or destroy cells, or links between existing cells. They are very similar to hardware circuits, but are based on an implicit rendez-vous information-exchange mechanism. If we want to implement such systems using hardware circuits, it is important to define a set of universal combinators that reduces this task to the implementation of a fixed number of known agents. Here, we show how we can simulate every hard interaction system by a universal boolean interaction system composed of three combinators: a duplicator, a NAND gate and a three-state input/output channel.
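The circuit-level intuition behind the duplicator-plus-NAND part of the result is the classical universality of NAND under fan-out, sketched below in Haskell; the three-state channel, which handles the rendez-vous synchronization, is not modelled here.

```haskell
-- NAND plus duplication suffices for all boolean gates.
nand :: Bool -> Bool -> Bool
nand a b = not (a && b)   -- the primitive gate

dup :: Bool -> (Bool, Bool)
dup a = (a, a)            -- the duplicator is fan-out of a wire

notG :: Bool -> Bool
notG a = let (x, y) = dup a in nand x y

andG, orG :: Bool -> Bool -> Bool
andG a b = notG (nand a b)
orG  a b = nand (notG a) (notG b)
```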

4.
One of the most appealing features of constraint programming is its rich constraint language for expressing combinatorial optimization problems. This paper demonstrates that traditional combinators from constraint programming have natural counterparts for local search, although their underlying computational model is radically different. In particular, the paper shows that constraint combinators, such as logical and cardinality operators, reification, and first-class expressions, can all be viewed as differentiable objects. These combinators naturally support elegant and efficient models, generic search procedures, and partial constraint satisfaction techniques for local search. Experimental results on a variety of applications demonstrate the expressiveness and practicability of the combinators.
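A common way to read "constraints as differentiable objects" is that each constraint exposes a violation degree over the current assignment, and logical combinators become arithmetic over violations. The Haskell sketch below illustrates this reading under that assumption; the names are illustrative and not the paper's API.

```haskell
import Data.Map (Map, (!))

type Assign = Map String Int
newtype Constraint = Constraint { violations :: Assign -> Int }

-- x = y as a soft constraint: violation is the distance from satisfaction.
eqC :: String -> String -> Constraint
eqC x y = Constraint (\a -> abs (a ! x - a ! y))

-- Logical combinators lift to arithmetic over violation degrees.
andC, orC :: Constraint -> Constraint -> Constraint
andC c d = Constraint (\a -> violations c a + violations d a)
orC  c d = Constraint (\a -> min (violations c a) (violations d a))

-- A cardinality combinator: at most k of the constraints may be violated.
atMostC :: Int -> [Constraint] -> Constraint
atMostC k cs = Constraint $ \a ->
  max 0 (length (filter (\c -> violations c a > 0) cs) - k)
```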

5.
Some combinatory logics are examined as object code for functional programs. The worst-case performance of certain algorithms for abstracting variables from combinatory expressions is analysed. A lower bound on the performance of any abstraction algorithm for a finite set of combinators is given. Using the combinators S, K, I, B, C, S′, B′, C′ and Y, the problem of finding an optimal abstraction algorithm is shown to be NP-complete. Some methods of improving abstraction algorithms for those combinators are examined, including "balancing" (for asymptotic performance) and "peephole" optimisations (for smaller cases).
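For reference, here is the standard Turner-style bracket abstraction algorithm over the S, K, I, B, C subset, including the B/C "peephole" improvements that analyses of this kind are concerned with — a textbook sketch, not the paper's own formulation.

```haskell
-- Turner-style bracket abstraction with the B and C optimisations.
data Term = Var String | Ap Term Term
          | S | K | I | B | C
  deriving Show

occurs :: String -> Term -> Bool
occurs x (Var y)  = x == y
occurs x (Ap f a) = occurs x f || occurs x a
occurs _ _        = False

abstract :: String -> Term -> Term
abstract x (Var y) | x == y       = I
abstract x t | not (occurs x t)   = Ap K t
abstract x (Ap f a)
  | not (occurs x f) = Ap (Ap B f) (abstract x a)            -- B f [x]a
  | not (occurs x a) = Ap (Ap C (abstract x f)) a            -- C [x]f a
  | otherwise        = Ap (Ap S (abstract x f)) (abstract x a)
abstract _ t = Ap K t  -- unreachable for well-formed input
```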

6.
Creating High Confidence in a Separation Kernel
Separation of processes is the foundation for security and safety properties of systems. This paper reports on a collaborative effort of government, industry and academia to achieve high confidence in the separation of processes. To this end, this paper discusses (1) what a separation kernel is, (2) why the separation of processes is fundamental to security systems, (3) how high confidence in the separation property of the kernel was obtained, and (4) some of the ways government, industry, and academia cooperated to achieve high confidence in a separation kernel. What is separation? Strict separation is the inability of one process to interfere with another. In a separation kernel, the word separation is interpreted very strictly: any means for one process to disturb another, be it by communication primitives, by sharing of data, or by subtle uses of kernel primitives not intended for communication, is ruled out when two processes are separated. Why is separation fundamental? Strict separation between processes enables the evaluation of a system to check that the system meets its security policy. For example, if a red process is strictly separated from a black process, then it can be concluded that there is no flow of information from red to black. How was high confidence achieved? We collaborated and shared our expertise in the use of SPECWARE, a correct-by-construction method in which high-level specifications are built up from modules using specification combinators. Refinements of the specifications are made until an implementation is achieved; these refinements are also subject to combinators. The high confidence in the separation property of the kernel stems from the use of formal methods in the development of the kernel. How did we collaborate? Staff from the Kestrel Institute (developers of SPECWARE), the Department of Defense (DoD), and Motorola (developers of the kernel) cooperated in the creation of the Mathematically Analyzed Separation Kernel (MASK). DoD provided the separation kernel concept and expertise in computer security and high-confidence development, Kestrel provided expertise in SPECWARE, and Motorola combined its own expertise with that of DoD and Kestrel in creating MASK.

7.
8.
Arie van Deursen, Joost Visser, Software, 2004, 34(14): 1345–1379
Program understanding tools manipulate program representations, such as abstract syntax trees, control-flow graphs, or data-flow graphs. This paper deals with the use of visitor combinators to conduct such manipulations. Visitor combinators are an extension of the well-known visitor design pattern. They are small, reusable classes that carry out specific visiting steps. They can be composed in different constellations to build more complex visitors. We evaluate the expressiveness, reusability, ease of development, and applicability of visitor combinators to the construction of program understanding tools. To that end, we conduct a case study in the use of visitor combinators for control-flow analysis and visualization as used in a commercial Cobol program understanding tool.
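The paper's combinators are Java classes, but the idea transposes naturally to a functional setting. The Haskell sketch below shows small visiting steps composed into larger traversals over one assumed tree type; combinator names here mirror common roles (sequence, choice, all-children, top-down) rather than the paper's exact library.

```haskell
-- Visitor combinators: small steps composed into complex traversals.
data Tree = Node String [Tree]

type Visitor = Tree -> Maybe Tree   -- Nothing signals visiting failure

seqV :: Visitor -> Visitor -> Visitor     -- sequence: v then w
seqV v w t = v t >>= w

choiceV :: Visitor -> Visitor -> Visitor  -- choice: try v, else w
choiceV v w t = maybe (w t) Just (v t)

allV :: Visitor -> Visitor                -- apply v to immediate subtrees
allV v (Node l ts) = Node l <$> traverse v ts

topDown :: Visitor -> Visitor             -- v, then recurse into children
topDown v = v `seqV` allV (topDown v)
```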

9.
Connectionist attention to variables has been too restricted in two ways. First, it has not exploited certain ways of doing without variables in the symbolic arena. One variable-avoidance method, that of logical combinators, is particularly well established there. Second, the attention has been largely restricted to variables in long-term rules embodied in connection weight patterns. However, short-lived bodies of information, such as sentence interpretations or inference products, may involve quantification. Therefore short-lived activation patterns may need to achieve the effect of variables. The paper is mainly a theoretical analysis of some benefits and drawbacks of using logical combinators to avoid variables in short-lived connectionist encodings without loss of expressive power. The paper also includes a brief survey of some possible methods for avoiding variables other than by using combinators.

10.
Cartesian closed categories have been shown by several authors to provide the right framework for the model theory of λ-calculus. The second author developed this as a syntactic equivalence between two calculi, giving rise to a new kind of combinatory logic: categorical combinatory logic, where computations can be done through simple rewrite rules and, as usual with combinators, problems with variable name clashes are avoided. This paper goes further (though not requiring previous knowledge of categorical combinatory logic) and describes a very simple machine where categorical terms can be considered as code acting on a graph of values (the essential actions are LISP's "cons" and "cdr", as well as "rplacd" to implement recursion). The only saving mechanism is a stack containing pointers into the code or the graph. Abstractions are handled in the very same way as in P. Landin's SECD machine, using closures. The machine is called the categorical abstract machine, or CAM. The CAM is easier to grasp and prove than the SECD machine. The paper discusses the implementation of a real functional programming language, ML, through the CAM. A basic acquaintance with λ-calculus is required.
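The following Haskell sketch captures the flavour of such a machine: code acts on a value register and a stack, and abstractions become closures, as in the SECD machine. The instruction set is abridged and the encoding illustrative; it is not a transcription of the CAM's actual rules.

```haskell
-- A toy CAM-style machine: instructions act on a register and a stack.
data Instr = Fst | Snd | Push | Swap | Cons
           | Quote Int | Cur [Instr] | App

data Val = Num Int | Pair Val Val | Clo [Instr] Val | Unit
  deriving Show

exec :: [Instr] -> Val -> [Val] -> Val
exec []            v _                  = v
exec (Fst : c)     (Pair a _) s         = exec c a s
exec (Snd : c)     (Pair _ b) s         = exec c b s
exec (Push : c)    v          s         = exec c v (v : s)
exec (Swap : c)    v          (t : s)   = exec c t (v : s)
exec (Cons : c)    v          (t : s)   = exec c (Pair t v) s
exec (Quote n : c) _          s         = exec c (Num n) s
exec (Cur f : c)   v          s         = exec c (Clo f v) s
exec (App : c)     (Pair (Clo f e) a) s = exec (f ++ c) (Pair e a) s
exec _             _          _         = error "ill-formed CAM code"

-- (\x -> x) applied to 42; the de Bruijn variable compiles to Snd.
demo :: Val
demo = exec [Push, Cur [Snd], Swap, Quote 42, Cons, App] Unit []
-- demo evaluates to Num 42
```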

11.
Automated deduction methods should be specified not procedurally, but declaratively, as inference systems which are proved correct regardless of implementation details. Then, different algorithms to implement a given inference system should be specified as strategies to apply the inference rules. The inference rules themselves can be naturally specified as (possibly conditional) rewrite rules. Using a high-performance rewriting language implementation and a strategy language to guide rewriting computations, we can obtain in a modular way implementations of both the inference rules of automated deduction procedures and of the algorithms controlling their application. This paper presents the design of a strategy language for the Maude rewriting language that supports this modular decomposition: inference systems are specified in system modules, and strategies in strategy modules. We give a set-theoretic semantics for this strategy language, present its different combinators, illustrate its main ideas with several examples, and describe both a reflective prototype in Maude and an ongoing C++ implementation.
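The set-theoretic reading of such a strategy language can be sketched directly in Haskell: a strategy maps a term to the set (here, a list) of its possible results, and combinators compose such maps. The combinator names below are illustrative, not Maude's concrete syntax.

```haskell
-- Strategies under a set-theoretic semantics.
type Strat t = t -> [t]

idle :: Strat t          -- always succeeds without changing the term
idle t = [t]

failS :: Strat t         -- the empty set of results: failure
failS _ = []

seqS :: Strat t -> Strat t -> Strat t   -- s1 ; s2
seqS s1 s2 t = concatMap s2 (s1 t)

orS :: Strat t -> Strat t -> Strat t    -- union of both outcome sets
orS s1 s2 t = s1 t ++ s2 t

orElse :: Strat t -> Strat t -> Strat t -- s2 only if s1 has no results
orElse s1 s2 t = case s1 t of [] -> s2 t; rs -> rs

star :: Strat t -> Strat t  -- zero or more applications (terminating rules)
star s t = t : concatMap (star s) (s t)
```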

12.
Pieter H. Hartel, Software, 1991, 21(3): 299–329
The performance of program-derived combinator graph reduction is known to be superior to that of graph reduction based on a fixed set of standard combinators. The major advantage of program-derived combinator reduction is that it uses less transient store than standard combinator reduction. We show on what activities a combinator reduction algorithm spends its execution time. Based on this analysis we show that how much faster a program will run with program-derived combinators instead of standard combinators depends to a large extent on the application. The analysis is based on experimental evidence obtained from a small benchmark of medium-size functional programs. Performance gains of up to 11× are reported for target architectures on which each memory reference consumes one unit of time. The results are valid for implementations of combinator graph reduction that use binary graphs.

13.
Wand's technique of deriving compilers from denotational semantics is applied to a block-structured language with recursive functions. The emphasis is on the compilation of different parameter-passing modes and a simple storage management. The technique starts by eliminating λ-variables from the semantic equations through the introduction of special-purpose combinators. The final code consists of combinators equivalent to target-machine instructions (running-system procedures). The method enables us to derive a compiler and a running system directly from the denotational semantics of a language.
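A small Haskell sketch of the first step, for a toy expression language of our own choosing (not the paper's block-structured language): write the denotational equations, then eliminate the λ-bound environment by introducing special-purpose combinators, each of which reads as a running-system procedure.

```haskell
-- From denotational equations to variable-free combinator code.
type Env = [(String, Int)]

data Exp = Lit Int | Var String | Add Exp Exp

-- Combinators standing for target-machine instructions:
lit :: Int -> (Env -> Int)                            -- "load constant"
lit n _ = n

var :: String -> (Env -> Int)                         -- "load variable"
var x env = maybe (error "unbound") id (lookup x env)

add :: (Env -> Int) -> (Env -> Int) -> (Env -> Int)   -- "add results"
add f g env = f env + g env

-- The semantic equations, now with no λ-bound environment in sight:
-- compiling is just mapping syntax onto combinator code.
compile :: Exp -> (Env -> Int)
compile (Lit n)   = lit n
compile (Var x)   = var x
compile (Add a b) = add (compile a) (compile b)
```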

14.
A finite set {F1,…,Fn} of λ-terms is said to be discriminable if, given n arbitrary λ-terms X1,…,Xn, there exists a λ-term Δ such that ΔFi = Xi for 1 ≤ i ≤ n. In the present paper each finite set of normal combinators which are pairwise non-α-η-convertible is proved to be discriminable. Moreover, a discrimination algorithm is given.
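For concreteness, a worked instance for n = 2 (our own illustration, not taken from the paper):

```latex
% Take F_1 = K and F_2 = K\,I, normal combinators that are pairwise
% non-\alpha\eta-convertible. The discriminating term applies its
% argument to X_1 and X_2:
\Delta \;\equiv\; \lambda f.\, f\, X_1\, X_2
% so that
\Delta\, F_1 \;=\; K\, X_1\, X_2 \;=\; X_1, \qquad
\Delta\, F_2 \;=\; (K\, I)\, X_1\, X_2 \;=\; I\, X_2 \;=\; X_2.
```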

15.
A technique is presented that brings logical variables into the scope of the well-known Turner method for normal-order evaluation of functional programs by S, K, I combinator graph reduction. This extension is illustrated by SASL+LV, an extension of Turner's language SASL in which arbitrary expressions serve as formal parameters, and parameter passage is done by unification. The conceptual and practical advantages of such an extension are discussed, as well as semantic pitfalls that arise from the attendant weakening of referential transparency. Only five new combinators (LV, BV, FN, FB and UNIFY) are introduced. The resulting object code is fully upward compatible in the sense that previously compiled SASL programs remain executable with unchanged semantics. However, read-only variable usage in SASL+LV programs requires a multitasking extension of the customary stack-based evaluation method. Mechanisms are presented for managing this multitasking on both single and multiprocessor systems. Finally, directions are mentioned for applying this technique to implementations involving larger-granularity combinators, and to a fuller semantic treatment of logical variables (e.g. accommodation of failing unifications).

16.
The purpose of this paper is to demonstrate how Lafont's interaction combinators, a system of three symbols and six interaction rules, can be used to encode linear logic. Specifically, we give a translation of the multiplicative, exponential, and additive fragments of linear logic together with a strategy for cut-elimination which can be faithfully simulated. Finally, we show briefly how this encoding can be used for evaluating λ-terms. In addition to offering a very simple, perhaps the simplest, system of rewriting for linear logic and the λ-calculus, the interaction net implementation that we present has been shown by experimental testing to offer a good level of sharing in terms of the number of cut-elimination steps (resp. β-reduction steps). In particular it performs better than all extant finite systems of interaction nets.
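For orientation, Lafont's three symbols and the shape of the six rules can be stated symbolically, as in the Haskell sketch below; a full net-rewriting engine (ports, wiring, active pairs) is beyond this sketch.

```haskell
-- Lafont's three agents and the classification of their six rules.
data Agent = Gamma  -- constructor, two auxiliary ports
           | Delta  -- duplicator, two auxiliary ports
           | Eps    -- eraser, no auxiliary ports
  deriving (Eq, Show)

data Rule = Annihilate  -- equal agents cancel, rewiring auxiliary ports
          | Commute     -- distinct agents are copied past each other
  deriving Show

-- Any two agents meeting on their principal ports reduce by exactly
-- one of six rules: three annihilations and three commutations.
rule :: Agent -> Agent -> Rule
rule a b | a == b    = Annihilate
         | otherwise = Commute
```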

17.
We present a definition in Prolog of a new purely functional (applicative) language HASL (HArvey's Static Language). HASL is a descendant of Turner's SASL and differs from the latter in several significant points: it includes Abramson's unification-based conditional binding constructs; it restricts each clause in a definition of a HASL function to have the same arity, thereby complicating somewhat the compilation of clauses to combinators but simplifying considerably the HASL reduction machine; and it includes the single-element domain {fail} as a component of the domain of HASL data structures. It is intended to use HASL to express the functional dependencies in a translator writing system based on denotational semantics, and to study the feasibility of using HASL as a functional sublanguage of Prolog or some other logic programming language. Regarding this latter application, we suggest that since a reduction mechanism exists for HASL, it may be easier to combine it with a logic programming language than it was for Robinson and Sibert to combine LISP and LOGIC into LOGLISP: in that case a fairly complex mechanism had to be invented to reduce uninterpreted LOGIC terms to LISP values. The definition is divided into four parts. The first part defines the lexical structure of the language by means of a simple Definite Clause Grammar which relates character strings to "token" strings. The second part defines the syntactic structure of the language by means of a more complex Definite Clause Grammar and relates token strings to a parse tree. The third part is semantic in nature and translates parse-tree definitions and expressions to a variable-free string of combinators and global names. The fourth part of the definition consists of a set of Prolog predicates which specifies how strings of combinators and global names are reduced to "values", i.e., integers, truth values, characters, lists, functions and fail; it has an operational flavour: one can think of this fourth part as the definition of a normal-order reduction machine.

18.
19.
A timed semantics of Orc
Orc is a kernel language for structured concurrent programming. Orc provides three powerful combinators that define the structure of a concurrent computation. These combinators support sequential and concurrent execution, and concurrent execution with blocking and termination. Orc is particularly well suited for task orchestration, a form of concurrent programming with applications in workflow, business process management, and web service orchestration. Orc provides constructs to orchestrate the concurrent invocation of services while managing time-outs, priorities, and failures of services or communication. Our previous work on the semantics of Orc focused on its asynchronous behavior; the inclusion of time, or the effect of delay on a computation, had not been modeled. In this paper, we define an operational semantics of Orc that allows reasoning about delays, which are introduced explicitly by time-based constructs or implicitly by network delays. We develop a number of identities among Orc expressions and define an equality relation that is a congruence. We also present a denotational semantics in which the meaning of an Orc program is a set of traces, and show that the two semantics are equivalent.
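A toy trace model conveys the flavour of the three combinators; the Haskell sketch below is a considerable simplification of the paper's semantics (one trace instead of a set, and no explicit termination events), offered for illustration only.

```haskell
import Data.List (sortOn)

-- An expression denotes a list of timed publications.
type Trace a = [(Double, a)]   -- (publication time, published value)

-- Symmetric parallel composition f | g: merge all publications.
par :: Trace a -> Trace a -> Trace a
par f g = sortOn fst (f ++ g)

-- Sequencing f >x> g: each publication of f starts a copy of g at that
-- time, shifting g's publications accordingly.
seqO :: Trace a -> (a -> Trace b) -> Trace b
seqO f g = sortOn fst [ (t + t', y) | (t, x) <- f, (t', y) <- g x ]

-- Pruning g <x< f: x is bound to f's first publication, which also
-- terminates f; here we expose only when (and whether) x arrives.
prune :: Trace a -> Maybe (Double, a)
prune []      = Nothing
prune (p : _) = Just p
```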

20.
This paper presents a microkernel architecture for constraint programming organized around a small number of core functionalities and minimal interfaces. The architecture contrasts with the monolithic nature of many implementations. With this design, variables, domains and constraints all remain external to the microkernel, which isolates the propagation logic and event protocols from the modeling constructions. The Objective-CP search blends the control primitives of the host language with search combinators in a completely transparent and fully compositional way, delivering a natural search procedure in which one can use native constructions and tools such as debuggers. Empirical results indicate that the software engineering benefits are not incompatible with runtime efficiency.
