Similar Literature
20 similar documents found (search time: 312 ms).
1.
Corecursion is the ability to define a function that produces some infinite data in terms of the function and the data itself, as supported by lazy evaluation. However, in languages such as Haskell strict operations fail to terminate even on infinite regular data, that is, cyclic data. Regular corecursion is naturally supported by coinductive Prolog, an extension where predicates can be interpreted either inductively or coinductively, which has proved to be useful for formal verification, static analysis and symbolic evaluation of programs. In this paper we use the meta-programming facilities offered by Prolog to propose extensions to coinductive Prolog aiming to make regular corecursion more expressive and easier to program with. First, we propose a new interpreter to solve the problem of non-terminating failure as experienced with the standard semantics of coinduction (as supported, for instance, in SWI-Prolog). Another problem with the standard semantics is that predicates expressed in terms of existential quantification over a regular term cannot be directly defined by coinduction; to this end, we introduce finally clauses, which allow more flexibility in coinductive definitions. Then we investigate the possibility of annotating arguments of coinductive predicates, to restrict coinductive definitions to a subset of the arguments; this allows more efficient definitions and further enhances the expressive power of coinductive Prolog. We investigate the effectiveness of these features by showing different example programs manipulating several kinds of cyclic values, ranging from automata and context-free grammars to graphs and repeating decimals; the examples show how computations on cyclic values can be expressed with concise and relatively simple programs. The semantics defined by these vanilla meta-interpreters are an interesting starting point for a more mature design and implementation of coinductive Prolog.
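To make the key mechanism concrete: regular corecursion terminates on cyclic data by detecting when evaluation revisits a subterm (or, in coinductive Prolog, a goal) it has already seen. The sketch below transliterates that idea into Python under our own representation of cyclic lists; it illustrates the principle and is not the paper's interpreter.

```python
class Node:
    """A cons cell; a cyclic ("regular") list arises by tying the knot."""
    def __init__(self, head, tail=None):
        self.head, self.tail = head, tail

def member(x, node):
    """Membership in a possibly-cyclic list. Revisiting a node means the
    search has gone around the cycle, so we fail finitely instead of
    looping -- the analogue of coinductively detecting a repeated goal."""
    seen = set()
    while node is not None and id(node) not in seen:
        if node.head == x:
            return True
        seen.add(id(node))
        node = node.tail
    return False

zeros_ones = Node(0)
zeros_ones.tail = Node(1, zeros_ones)     # the infinite list 0:1:0:1:...
assert member(1, zeros_ones) and not member(2, zeros_ones)
```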

2.
The objective of this paper is to provide a theoretical foundation for program extraction from inductive and coinductive proofs geared to practical applications. The novelties consist in the addition of inductive and coinductive definitions to a realizability interpretation for first-order proofs, a soundness proof for this system, and applications to the synthesis of non-trivial provably correct programs in the area of exact real number computation. We show that realizers, although per se untyped, can be assigned polymorphic recursive types and hence represent valid programs in a lazy functional programming language such as Haskell. Programs extracted from proofs using coinduction can be understood as perpetual processes producing infinite streams of data. Typical applications of such processes are computations in exact real arithmetic. As an example we show how to extract a program computing the average of two real numbers w.r.t. the binary signed digit representation.
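To give a flavour of such perpetual processes, here is a hedged Python sketch of the average in the binary signed digit representation (the paper's realizers are Haskell-typed; this transcription, including all names, is ours). A real in [-1, 1] is an infinite stream of digits from {-1, 0, 1}, and the average is produced digit by digit while an integer carry stays bounded:

```python
from itertools import chain, islice, repeat

def average(x, y):
    """(vx + vy) / 2 for signed-digit streams x, y; digits in {-1, 0, 1}.
    Invariant: the carry c stays in [-2, 2], so output is productive."""
    x, y = iter(x), iter(y)
    c = next(x) + next(y)
    while True:
        t = 2 * c + next(x) + next(y)         # t in [-6, 6]
        d = 1 if t >= 2 else -1 if t <= -2 else 0
        yield d                                # emit one output digit...
        c = t - 4 * d                          # ...keeping the carry bounded

def value(stream, n=30):
    """Approximate value of a digit stream from its first n digits."""
    return sum(d * 2.0 ** -(i + 1) for i, d in enumerate(islice(stream, n)))

half = chain([1], repeat(0))                   # 0.100... represents 1/2
quarter = chain([0, 1], repeat(0))             # 0.010... represents 1/4
print(value(average(half, quarter)))           # ~0.375 = (1/2 + 1/4) / 2
```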

3.
CEGAR (counterexample-guided abstraction refinement)-based slicing is one of the most important techniques for reducing the state space in model checking. However, CEGAR-based slicing repeatedly explores state space it has already handled whenever a spurious counterexample is found. Inspired by lazy abstraction, we introduce the concept of lazy slicing, which eliminates this repeated computation. Lazy slicing is done on-the-fly, and only up to the precision necessary to rule out spurious counterexamples. It identifies a spurious counterexample by concretizing a path fragment rather than the full path, which significantly reduces the cost of deciding whether a counterexample is spurious. Besides, we present an improved over-approximate slicing method to build a more precise slice model. We also prove the correctness and termination of lazy slicing, and implement a prototype model checker to verify safety properties. Experimental results show that lazy slicing scales to larger systems than CEGAR-based slicing methods.

4.
Discretisation in Lazy Learning Algorithms
This paper adopts the idea of discretising continuous attributes (Fayyad and Irani 1993) and applies it to lazy learning algorithms (Aha 1990; Aha, Kibler and Albert 1991). This approach converts continuous attributes into nominal attributes at the outset. We investigate the effects of this approach on the performance of lazy learning algorithms and examine it empirically using both real-world and artificial data to characterise the benefits of discretisation in lazy learning algorithms. Specifically, we show that discretisation achieves an effect of noise reduction and increases lazy learning algorithms' tolerance for irrelevant continuous attributes. The proposed approach constrains the representation space in lazy learning algorithms to hyper-rectangular regions that are orthogonal to the attribute axes. Our generally better results, obtained using a more restricted representation language, indicate that employing a powerful representation language in a learning algorithm is not always the best choice, as it can lead to a loss of accuracy.
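A minimal Python sketch of the approach (equal-width binning stands in here for the entropy-based Fayyad-Irani method, and all names are ours): continuous attributes are binned once up front, then a lazy 1-NN classifier defers all real work to query time and compares only nominal bin values.

```python
def make_binner(values, k=4):
    """Map a continuous attribute to one of k equal-width nominal bins
    (a simpler stand-in for entropy-based discretisation)."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / k or 1.0
    return lambda v: max(0, min(int((v - lo) / width), k - 1))

class LazyNominal1NN:
    def __init__(self, rows, labels, k=4):
        self.binners = [make_binner(col, k) for col in zip(*rows)]
        self.train = [(self._bin(r), y) for r, y in zip(rows, labels)]
    def _bin(self, row):
        return tuple(b(v) for b, v in zip(self.binners, row))
    def predict(self, row):                  # all real work happens here,
        q = self._bin(row)                   # lazily, at query time
        ham = lambda a, b: sum(u != v for u, v in zip(a, b))
        return min(self.train, key=lambda p: ham(p[0], q))[1]

clf = LazyNominal1NN([(0.1, 5.0), (0.2, 4.8), (0.9, 1.0)], ["a", "a", "b"])
print(clf.predict((0.85, 1.2)))              # -> "b"
```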

5.
General-purpose generative planners use domain-independent search heuristics to generate solutions for problems in a variety of domains. However, in some situations these heuristics force the planner to perform inefficiently or obtain solutions of poor quality. Learning from experience can help to identify the particular situations for which the domain-independent heuristics need to be overridden. Most of the past learning approaches are fully deductive and eagerly acquire correct control knowledge from a necessarily complete domain theory and a few examples to focus their scope. These learning strategies are hard to generalize in the case of nonlinear planning, where it is difficult to capture correct explanations of the interactions among goals, multiple planning operator choices, and situational data. In this article, we present a lazy learning method that combines a deductive and an inductive strategy to efficiently learn control knowledge incrementally with experience. We present hamlet, a system we developed that learns control knowledge to improve both search efficiency and the quality of the solutions generated by a nonlinear planner, namely prodigy4.0. We have identified three lazy aspects of our approach from which we believe hamlet greatly benefits: lazy explanation of successes, incremental refinement of acquired knowledge, and lazy learning to override only the default behavior of the problem solver. We show empirical results that support the effectiveness of this overall lazy learning approach, in terms of improving the efficiency of the problem solver and the quality of the solutions produced.

6.
A functional computation involves substituting for function applications in an expression until that expression is reduced to normal form. The views of computations presented by the sequence- and state-oriented debugging tools of imperative systems are inappropriate for use with functional computations. New debugging tools are needed to aid the development of functional programs, especially in the context of lazy evaluation. After surveying previously reported debugging tools, we discuss a new debugging tool. Its implementation involves changing the reduction rules of the machine. The new reduction rules are applied to an interrupted computation to give a snapshot of that computation in source-level terms. We have implemented tools to produce snapshots for eager SECD and lazy combinator reduction machines. Example snapshots are shown from each. The implementation for an eager SECD machine is relatively straightforward, so we confine discussion of this to a brief sketch. A more detailed account is given of the implementation for a lazy combinator reduction machine, as this offers one solution to well-known problems with debugging functional programs in the context of lazy evaluation and combinator code.

7.
We present a static analysis for determining the execution costs of lazily evaluated functional languages, such as Haskell. The time and space behaviour of lazy functional languages can be hard to predict, creating a significant barrier to their broader acceptance. This paper applies a type-based analysis employing amortisation and cost effects to statically determine upper bounds on evaluation costs. While amortisation performs well with finite recursive data, we significantly improve the precision of our analysis for co-recursive programs (i.e. those dealing with potentially infinite data structures) by tracking self-references. Combining these two approaches gives a fully automatic static analysis for both recursive and co-recursive definitions. The analysis is formally proven correct against an operational semantics that features an exchangeable parametric cost model. An arbitrary measure can be assigned to all syntactic constructs, allowing one to bound, for example, evaluation steps, applications, allocations, etc. Moreover, automatic inference relies only on first-order unification and standard linear programming. Our publicly available implementation demonstrates the practicability of our technique on editable non-trivial examples.

8.
9.
In this paper, we propose a lazy learning strategy for building classification learning models. Instead of learning a model from the whole training data set before observing the new instance, a selection of patterns is made depending on the query received, and a classification model is learnt from those selected patterns. The selection of patterns is not homogeneous, in the sense that the number of selected patterns depends on the position of the query instance in the input space. The selection is made using a weighting function that gives more importance to the training patterns most similar to the query instance. Our intention is to provide a lazy learning mechanism suited to any machine learning classification algorithm. For this reason, we study two different methods that avoid fixing any parameter. Experimental results show that the classification rates of traditional machine learning algorithms based on trees, rules, or functions can be improved when they are learnt with the proposed lazy learning approach.
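A small Python sketch of the query-dependent selection (a Gaussian weighting function and weighted voting are our stand-ins for the paper's unspecified choices): patterns survive only if they are sufficiently similar to the query, so the size of the selection varies with where the query falls in the input space.

```python
import math

def lazy_select(train, query, bandwidth=1.0, cutoff=0.05):
    """Keep training patterns whose similarity to the query exceeds a
    cutoff; how many survive depends on the query's position in space."""
    kept = []
    for x, y in train:
        d2 = sum((a - b) ** 2 for a, b in zip(x, query))
        w = math.exp(-d2 / (2 * bandwidth ** 2))   # Gaussian weighting
        if w >= cutoff:
            kept.append((x, y, w))
    return kept

def lazy_classify(train, query):
    votes = {}
    for _, y, w in lazy_select(train, query):
        votes[y] = votes.get(y, 0.0) + w    # weighted voting stands in for
    return max(votes, key=votes.get)        # training a model on the subset

train = [((0, 0), "a"), ((0.2, 0.1), "a"), ((3, 3), "b"), ((3.1, 2.9), "b")]
print(lazy_classify(train, (0.1, 0.1)))     # -> "a"
```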

10.
Combining different machine learning algorithms in the same system can produce benefits above and beyond what either method could achieve alone. This paper demonstrates that genetic algorithms can be used in conjunction with lazy learning to solve examples of a difficult class of delayed reinforcement learning problems better than either method alone. This class, the class of differential games, includes numerous important control problems that arise in robotics, planning, game playing, and other areas, and solutions for differential games suggest solution strategies for the general class of planning and control problems. We conducted a series of experiments applying three learning approaches – lazy Q-learning, k-nearest neighbor (k-NN), and a genetic algorithm – to a particular differential game called a pursuit game. Our experiments demonstrate that k-NN had great difficulty solving the problem, while a lazy version of Q-learning performed moderately well and the genetic algorithm performed even better. These results motivated the next step in the experiments, where we hypothesized k-NN was having difficulty because it did not have good examples – a common source of difficulty for lazy learning. Therefore, we used the genetic algorithm as a bootstrapping method for k-NN to create a system to provide these examples. Our experiments demonstrate that the resulting joint system learned to solve the pursuit games with a high degree of accuracy – outperforming either method alone – and with relatively small memory requirements.
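The joint system's division of labour can be sketched in a few lines of Python (the representation and names are ours): the genetic algorithm supplies a library of good (state, action) exemplars, and the lazy k-NN side simply acts like the nearest stored exemplars at decision time. The GA that produces the library is elided here.

```python
def knn_action(library, state, k=3):
    """Lazy policy lookup: act like the majority of the k stored
    (state, action) exemplars nearest the current state. The library
    would be bootstrapped by the genetic algorithm."""
    dist = lambda s: sum((a - b) ** 2 for a, b in zip(s, state))
    nearest = sorted(library, key=lambda e: dist(e[0]))[:k]
    actions = [a for _, a in nearest]
    return max(set(actions), key=actions.count)

library = [((0.0, 0.0), "left"), ((0.1, 0.2), "left"), ((1.0, 1.0), "right")]
print(knn_action(library, (0.05, 0.1)))     # -> "left"
```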

11.
This paper introduces DISK, a distributed Java Virtual Machine for networks of heterogeneous workstations. Several research issues are addressed. A novelty of the system is its object-based, multiple-writer memory consistency protocol (OMW). The correctness of the protocol and its Java compliance are demonstrated by comparing the nonoperational definitions of release consistency, the consistency model implemented by OMW, with the Java Virtual Machine memory consistency model (JVMC), as defined in the Java Virtual Machine Specification. An analytical performance model was developed to study and compare the design trade-offs between OMW and the lazy invalidate release consistency (LI) protocols as a function of the number of processors, network characteristics, and application types. The DISK system has been implemented and runs on a network of 16 Pentium III computers interconnected by a 100 Mbps Ethernet network. Experiments performed with two applications, parallel matrix multiplication and the traveling salesman problem, confirm the analytical model.

12.
The goal of predictive toxicology is the automatic construction of carcinogenicity models. The most common artificial intelligence techniques used to construct these models are inductive learning methods. In previous work we presented an approach that uses lazy learning methods to solve the problem of predicting carcinogenicity. Lazy learning methods solve new problems based on their similarity to already solved problems. Nevertheless, a weakness of this kind of method is that sometimes the result is not completely understandable by the user. In this paper we propose an explanation scheme for a concrete lazy learning method. This scheme is particularly interesting for justifying predictions about the carcinogenesis of chemical compounds. In addition, we propose that these explanations can be used to build partial domain knowledge. In our particular case, we use the explanations to build general knowledge about carcinogenesis.

13.
Contrary to popular belief, biologists have discovered that worker ants are not all hardworking. In three separate 30-strong colonies of black Japanese ants (Myrmecina nipponica), about 20% of worker ants were found to be diligent, 60% ordinary, and 20% lazy; this is called the 20:60:20 rule. Though they are lazy, biologists suggest that lazy worker ants may be contributing something to the colony that is yet to be determined. In our previous research, we used CHC (cross-generational elitist selection, heterogeneous recombination, and cataclysmic mutation) with the worker ants' rule (WACHC), aiming to solve optimization problems in changing environments. CHC is a nontraditional genetic algorithm (GA) that combines a conservative selection strategy, which always preserves the best individuals found so far, with a radical (highly disruptive) recombination operator. In that work, we verified that WACHC performs better than CHC in only one case, that of a fully changing environment. In this paper, we further discuss the proposed WACHC on changing-environment problems of varying degrees of difficulty, compare our proposal with the hypermutation GA, which was also proposed for changing environments, and discuss the difference between our proposal and ant colony optimization algorithms.

14.
Programs that manipulate dynamic heap objects are difficult to analyze due to issues like aliasing. The lazy initialization algorithm enables classical symbolic execution to handle such programs. Despite its successes, there are two unresolved issues: (1) inefficiency; and (2) lack of formal study. For the inefficiency issue, we have proposed two improved algorithms that give a significant reduction in analysis time over the original lazy initialization algorithm. In this article, we formalize the lazy initialization algorithm and the improved algorithms as operational semantics of a core subset of the Java Virtual Machine (JVM) instructions, and prove that all the algorithms are relatively sound and complete with respect to the JVM concrete semantics. Finally, we conduct a set of extensive experiments that compare the three algorithms and demonstrate the efficiency of the improved algorithms.
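The mechanism at the heart of lazy initialization is easy to state: when symbolic execution first touches an uninitialized reference field, it branches over every heap shape the field could have. A hedged Python sketch (the representation and names are ours, not the paper's formalization):

```python
def lazy_init_choices(field_type, initialized_heap):
    """On first access to an uninitialized reference field, branch:
    the field is null, OR a fresh symbolic object, OR an alias to any
    already-initialized object of a compatible type."""
    yield None                                        # case 1: null
    yield {"id": object(), "type": field_type, "fields": {}}  # case 2: fresh
    for obj in initialized_heap:                      # case 3: aliasing
        if obj["type"] == field_type:
            yield obj

heap = [{"id": 1, "type": "Node", "fields": {}}]
print(len(list(lazy_init_choices("Node", heap))))     # 3 branches to explore
```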

15.
A Lazy-Learning-Based Sliding Mode Predictive Control Method for MIMO Systems
For the control of MIMO nonlinear systems, a data-driven control strategy is adopted: the inherently adaptive lazy (just-in-time) learning algorithm is combined with strongly robust sliding mode predictive control, yielding a lazy-learning-based sliding mode predictive control (LL-SMPC) method. On top of online local modeling, the method applies a sliding mode predictive control law to obtain the optimal control action, giving it strong adaptability and disturbance rejection; it also avoids solving the Diophantine equation, effectively reducing the computational load. Simulation studies verify the effectiveness of the algorithm.
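A hedged numpy sketch of the lazy-learning half of the method (the sliding mode predictive control law applied on top of the local model is omitted; the data and names are illustrative): modeling is deferred until a query arrives, and then a local affine model is fitted only around the current operating point.

```python
import numpy as np

def jit_predict(X, Y, query, k=10):
    """Just-in-time (lazy) learning: fit an affine model to the k stored
    samples nearest the query, then predict at the query point."""
    k = min(k, len(X))
    nearest = np.argsort(np.linalg.norm(X - query, axis=1))[:k]
    A = np.hstack([X[nearest], np.ones((k, 1))])   # local affine model
    w, *_ = np.linalg.lstsq(A, Y[nearest], rcond=None)
    return np.append(query, 1.0) @ w

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))                   # stored process data
Y = np.sin(X[:, 0]) + 0.5 * X[:, 1]                # unknown plant response
print(jit_predict(X, Y, np.array([0.3, -0.2])))    # local estimate
```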

16.
After an introduction to the syntax of Gödel system T, we present its naive denotational semantics in the domain of lazy natural numbers and show an adequacy property relating syntax and semantics. We recall the natural restrictions of system T delineating primitive recursion as a subsystem. We discuss the weakness of primitive recursion by exhibiting a simple unary algorithm whose denotation is not the semantics of a primitive recursive algorithm. This algorithm can nevertheless be programmed in system T by using the power of higher-order (functional) definitions. Generalizing this example, we obtain a representation theorem, asserting that every reasonable algorithm of type N → N can be programmed in system T. We conclude by discussing what is known in the case of higher arities.

17.
The execution model for mobile, dynamically-linked, object-oriented programs has evolved from fast interpretation to a mix of interpreted and dynamically compiled execution. The primary motivation for dynamic compilation is that compiled code executes significantly faster than interpreted code. However, dynamic compilation, which is performed while the application is running, introduces execution delay. In this paper we present two dynamic compilation techniques that enable high performance execution while reducing the effect of this compilation overhead. These techniques can be classified as (1) decreasing the amount of compilation performed, and (2) overlapping compilation with execution. We first present and evaluate lazy compilation, an approach used in most dynamic compilation systems in which individual methods are compiled on-demand upon their first invocation. This is in contrast to eager compilation, in which all methods in a class are compiled when a new class is loaded. In this work, we describe our experience with eager compilation, as well as the implementation and transition to lazy compilation. We empirically detail the effectiveness of this decision. Our experimental results using the SpecJVM Java benchmarks and the Jalapeño JVM show that, compared to eager compilation, lazy compilation results in 57% fewer methods being compiled and reductions in total time of 14 to 26%. Total time in this context is compilation plus execution time. Next, we present profile-driven, background compilation, a technique that augments lazy compilation by using idle cycles in multiprocessor systems to overlap compilation with application execution. With this approach, compilation occurs on a thread separate from that of application threads so as to reduce intermittent, and possibly substantial, delay in execution. Profile information is used to prioritize methods as candidates for background compilation. Methods are compiled according to this priority scheme so that performance-critical methods are invoked using optimized code as soon as possible. Our results indicate that background compilation can achieve the performance of off-line compiled applications and masks almost all compilation overhead. We show significant reductions in total time of 14 to 71% over lazy compilation.
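The lazy strategy itself fits in a few lines. A toy Python sketch (ours; the paper's setting is the Jalapeño JVM, not Python): each method starts as a stub that compiles on its first invocation and thereafter dispatches to the cached compiled code, whereas eager compilation would pay the cost for every method at class-load time.

```python
import time

_compiled = {}

def _compile(fn):
    time.sleep(0.01)       # stand-in for real JIT optimization work
    return fn              # the "compiled code" here is just fn itself

def lazy(fn):
    """Compile on first invocation only (lazy compilation)."""
    def stub(*args, **kwargs):
        if fn not in _compiled:
            _compiled[fn] = _compile(fn)     # one-time compilation cost
        return _compiled[fn](*args, **kwargs)
    return stub

@lazy
def hot_method(n):
    return sum(range(n))

hot_method(10)     # first call: compiles, then runs
hot_method(10)     # later calls: straight to cached compiled code
```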

18.
This paper presents an evaluation of three software implementations of release consistency. Release consistent protocols allow data communication to be aggregated and allow multiple writers to simultaneously modify a single page. We evaluated an eager invalidate protocol that enforces consistency when synchronization variables are released, a lazy invalidate protocol that enforces consistency when synchronization variables are acquired, and a lazy hybrid protocol that selectively uses update to reduce access misses. Our evaluation is based on implementations running on DECstation-5000/240s connected by an ATM LAN and on an execution-driven simulator that allows us to vary network parameters. Our results show that the lazy protocols consistently outperform the eager protocol for all but one application and that the lazy hybrid performs the best overall. However, the relative performance of the implementations is highly dependent on the relative speeds of the network, processor, and communication software. Lower bandwidths and high per-byte software communication costs favor the lazy invalidate protocol, while high bandwidths and low per-byte costs favor the hybrid. Performance of the eager protocol approaches that of the lazy protocols only when communication becomes essentially free.

19.
This paper describes an extension of the "lazy" rational arithmetic (LEA) presented in [1]. A lazy arithmetic is an optimized version of the usual exact arithmetics used in Symbolic Calculus, in Computational Geometry, and in many other fields. We present a method originating from modular arithmetic for storing lazy numbers in hash tables. This method uses results from the well-studied technique of "hash coding" ([4]) to compute efficient "keys" for lazy numbers. In fact, such keys may be used to hash-code lazy numbers, or data containing lazy numbers, such as vertices or line segments in Computational Geometry.
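The abstract only names the ingredients, so the following Python sketch is our reconstruction of the modular-arithmetic idea, not code from [1]: map the rational p/q to the residue p·q⁻¹ mod a fixed prime, so equal rationals always get equal keys (and can share a hash bucket) without ever being normalised or fully evaluated.

```python
M = (1 << 61) - 1          # a Mersenne prime used as the modulus

def key(p, q):
    """Hash key for the rational p/q: its residue modulo M.
    Requires gcd(q, M) == 1, which holds whenever M does not divide q."""
    return (p % M) * pow(q, -1, M) % M   # modular inverse (Python >= 3.8)

assert key(1, 3) == key(2, 6)   # equal rationals -> equal keys
assert key(1, 3) != key(1, 4)   # distinct ones collide only rarely
```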

20.
The current combination of "lazy learning classifiers" with "dynamically updated training sets" is not sufficient to solve the concept drift problem in dynamic text classification. To address this, inspired by the basic idea of lazy classification and drawing on the strengths of the k-NN algorithm, this paper proposes the concept of a "lazy feature selection" scheme for the concept drift problem, together with a dynamic text classification algorithm based on this scheme. Test results show that the new algorithm solves the existing difficulties well, and has the advantages of high reliability and high practicality.
